Tuning of Industrial Control Systems
Second Edition
Additionally, neither the author nor the publisher has investigated or considered the effect of any
patents on the ability of the reader to use any of the information in a particular application. The
reader is responsible for reviewing any possible patents that may affect any particular use of the
information presented.
Any references to commercial products in the work are cited as examples only. Neither the author
nor the publisher endorses any referenced commercial product. Any trademarks or trade names
referenced belong to the respective owner of the mark or name. Neither the author nor the publisher
makes any representation regarding the availability of any referenced commercial product at any
time. The manufacturer’s instructions on the use of any commercial product must be followed at all
times, even if in conflict with the information in this publication.
ISA
67 Alexander Drive
P.O. Box 12277
Research Triangle Park
North Carolina 27709
Corripio, Armando B.
Tuning of industrial control systems / Armando B. Corripio.-- 2nd ed.
p. cm.
Includes bibliographical references and index.
ISBN 1-55617-713-5
1. Process control--Automation. 2. Feedback control systems. I. Title.
TS156.8. C678 2000
670.42’75--dc21
00-010127
Unit 1: Introduction and Overview
Learning Objectives — When you have completed this unit, you should be
able to:
When you finish this course you will understand how the methods for
tuning industrial control systems relate to the dynamic characteristics of
the controlled process. By approaching the subject in this way you will
gain insight into the tuning procedures rather than simply memorizing a
series of recipes.
1-2. Purpose
There are no specific prerequisites for taking this course. However, you
will find it helpful to have some familiarity with the basic concepts of
automatic process control, whether acquired through practical experience
or academic study. In terms of mathematical skills, you do not need to be
intimately familiar with the mathematics used in the text in order
to understand the fundamentals of tuning. This book has been designed to
minimize the barrier that mathematics usually presents to students’
understanding of automatic control concepts.
This book is organized into ten separate units. The next three units (Units
2-4) are designed to teach you the fundamental concepts of tuning,
namely, the modes of feedback control, the characterization and
measurement of process dynamic response, and the selection of controller
modes and tuning parameters.
You are encouraged to make notes in this textbook. Ample white space has
been provided on every page for this specific purpose.
• Know how to pick the right controller modes and tuning parameters to
match the objectives of the control system.
Besides these overall course objectives, each individual unit contains its
own set of learning objectives, which will help you direct your study.
The basic premise of self-study is that students learn best when they
proceed at their own pace. As a result, the amount of time individual
students require for completion will vary substantially. Most students will
complete this course in thirty to forty hours, but your actual time will
depend on your experience and personal aptitude.
Unit 2: Feedback Controllers
This unit introduces the basic modes of feedback control, the important
concept of control loop stability, and the ultimate gain or closed-loop
method for tuning controllers.
Learning Objectives — When you have completed this unit, you should be
able to:
Figure 2-1. Steam Heater (steam flow FS, process fluid flow F, inlet temperature Ti, outlet
temperature C, steam trap, and condensate)
The process fluid flows inside the tubes of the heater and is heated by
steam condensing on the outside of the tubes. The objective is to control
the outlet temperature, C, of the process fluid in the presence of variations
in process fluid flow (throughput or load), F, and in its inlet temperature,
Ti. This is accomplished by manipulating or adjusting the steam rate to the
heater, Fs, and with it the rate at which heat is transferred into the process
fluid, thus affecting its outlet temperature.
Now that we have defined the important variables of the control system,
the next step is to decide how to accomplish the objective of controlling
the temperature. In Figure 2-1, the approach is to set up a feedback control
loop, which is the most common industrial control technique—in fact, it is
the “bread and butter” of industrial automatic control. The following
procedure illustrates the concept of feedback control:
The desired value of the controlled variable is the set point, and the
difference between the controlled variable and the set point is the error.
Figure 2-2 shows the three pieces of instrumentation that are required to
implement the feedback control scheme:
1. A control valve for manipulating the steam flow.
2. A feedback controller, TC, for comparing the controlled variable
with the set point and calculating the signal to the control valve.
3. A sensor/transmitter, TT, for measuring the controlled variable
and transmitting its value to the controller.
Figure 2-2. Feedback Temperature Control Loop for the Steam Heater (set point r, controller TC,
controller output m, transmitter TT, steam flow FS)
Modern control systems also use digital controllers. There are three basic
types of digital controllers: distributed control systems (DCS), computer
controllers, and programmable logic controllers (PLC). Some of the more
modern installations use the “fieldbus” concept, in which the signals are
transmitted digitally, that is, in the form of zeros and ones.
Figure 2-2 shows that the feedback control scheme creates a loop around
which signals travel. A change in outlet temperature, C, causes a
proportional change in the signal to the controller, b, and therefore an
error, e. The controller acts on this error by changing the signal to the
control valve, m, causing a change in steam flow to the heater, Fs. This
causes a change in the outlet temperature, C, which then starts a new cycle
of changes around the loop.
The control loop and its various components are easier to recognize when
they are represented as a block diagram, as shown in Figure 2-3. Block
diagrams were introduced by James Watt, who recognized that the
complex workings of the linkages and levers in the flywheel governor are
easier to understand when each component is represented as a block acting on a signal.
Figure 2-3. Block Diagram of the Feedback Control Loop (heater and sensor blocks)
The signs in the diagram in Figure 2-3 represent the action of the various
input signals on the output signal. That is, a positive sign means that an
increase in input causes an increase in output or direct action, while a
negative sign means that an increase in input causes a decrease in output
or reverse action. For example, the negative sign by the process flow into
the heater means that an increase in flow results in a decrease in outlet
temperature. By following the signals around the loop you will notice that
there is a net reverse action in the loop. This property is known as negative
feedback and, as we will show shortly, it is required if the loop is to be
stable.
The previous section showed that the purpose of the feedback controller is
twofold. First, it computes the error as the difference between the
controlled variable and the set point, and, second, it computes the signal
to the control valve based on the error. This section presents the three basic
modes the controller uses to perform the second of these two functions.
The next section (2-3) discusses how these modes are combined to form
the feedback controllers most commonly used in industry.
The three basic modes of feedback control are proportional, integral or reset,
and derivative or rate. Each of these modes introduces an adjustable or
tuning parameter into the operation of the feedback controller. The
controller can consist of a single mode, a combination of two modes, or all
three.
Proportional Mode

The proportional mode produces a change in controller output that is
proportional to the error:

Kce (2-1)
where Kc is the controller gain and e is the error. The significance of the
controller gain is that as it increases so does the change in the controller
output caused by a given error. This is illustrated in Figure 2-4, where the
response in the controller output that is due to the proportional mode is
shown for an instantaneous or step change in error, at various values of
the gain.
Many controllers express the proportional mode in terms of the
proportional band (PB) rather than the gain; the two are related by:

PB = 100/Kc (2-2)
Offset
The proportional mode cannot by itself eliminate the error at steady state
in the presence of disturbances and changes in set point. The
unavoidability of this permanent error or offset can best be understood by
imagining that the steam heater control loop of Figure 2-2 has a controller
that has proportional mode only. The formula for such a controller is as
follows:
m = m0 + Kce (2-3)
where m is the controller output signal and m0 is its bias or base value.
This base value is usually adjusted at calibration time to be about 50
percent of the controller output range so as to give the controller room to
move in each direction. However, assume that the bias on the temperature
controller of the steam heater has been adjusted so as to produce zero error
at the normal operating conditions, that is, to position the steam control
valve so that the steam flow is that flow required to produce the desired
outlet temperature at the normal process flow and inlet temperature. In
this manner the initial error of the controller is zero and the controller
output is equal to the bias term.
Figure 2-5 shows the response of the outlet temperature and of the
controller output to a step change in process flow for the case of no control
and for the case of two different values of the proportional gain. For the
case of no control, the steam rate remains the same, which causes the
temperature to drop because there is more fluid to heat with the same
amount of heat. The proportional controller can reduce this error by
opening the steam valve, as shown in Figure 2-5. However, it cannot
Figure 2-5. Response of Heater Temperature to Step Change in Process Flow Using a
Proportional Controller
eliminate it completely because, as Eq. 2-3 shows, zero error results in the
original steam valve position, which is not enough steam rate to bring the
temperature back up to its desired value. Although an increased controller
gain results in a smaller steady-state error or offset, it also causes, as
shown in Figure 2-5, oscillations in the response. These oscillations are
caused by the time delays on the signals as they travel around the loop
and by overcorrection on the part of the controller as the gain is increased.
To eliminate the offset a control mode other than proportional is required,
namely, the integral mode.
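The inevitability of offset under proportional-only control can be checked with a short numerical sketch. The first-order process model below and all of its parameter values are illustrative assumptions, not the steam heater of Figure 2-2.

```python
# Illustrative only: proportional-only control (m = m0 + Kc*e, Eq. 2-3)
# of an assumed first-order process subjected to a step load change.

def simulate_p_only(Kc, n_steps=5000, dt=0.01):
    """Return the steady-state error (offset) left by a P-only controller."""
    Kp, tau = 1.0, 1.0   # assumed process gain and time constant
    Kd = 1.0             # assumed disturbance (load) gain
    m0 = 50.0            # controller bias, %C.O.
    sp = 50.0            # set point, %T.O.
    c = 50.0             # controlled variable starts at the set point
    d = -5.0             # step change in load (e.g., increased process flow)
    for _ in range(n_steps):
        e = sp - c                 # error
        m = m0 + Kc * e            # proportional-only controller, Eq. 2-3
        # first-order response to controller output and load
        c += dt / tau * (Kp * (m - m0) + Kd * d + 50.0 - c)
    return sp - c                  # the permanent error, or offset

offset_lo = simulate_p_only(Kc=1.0)
offset_hi = simulate_p_only(Kc=4.0)
```

For this assumed model the offset works out to −d/(1 + Kc): increasing the gain shrinks the offset but, as the text notes, can never eliminate it.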
Integral Mode

The integral or reset mode produces a change in controller output
proportional to the time integral of the error:

(Kc/TI) ∫ e dt (2-4)
where TI is the integral or reset time, and t is time. The calculus operation
of integration is somewhat difficult to visualize, and perhaps it is best
understood by using a physical analogy. Consider the tank shown in
Figure 2-6. Assume that the liquid level in the tank represents the output
of the integral action, while the difference between the inlet and outlet
flow rates represents the error e. When the inlet flow rate is higher than
the outlet flow rate, the error is positive, and the level rises with time at a
rate that is proportional to the error. Conversely, if the outlet flow rate is
higher than the inlet, the level drops at a rate proportional to the negative
error. Finally, the only way for the level to remain stationary is for the inlet
and outlet flows to be equal, in which case the error is zero. The integral
mode of the feedback controller acts exactly in this manner, thus fulfilling
its purpose of forcing the error to zero at steady state.
The integral time TI is the tuning parameter of the integral mode. In the
analogous tank in Figure 2-6, the cross-sectional area of the tank represents
the integral time. The smaller the integral time (area), the faster the
controller output (level) will change for a given error (difference in flows).
Because the proportional gain is part of the integral mode, the integral
time can be interpreted as the time it takes the integral mode to match the
instantaneous change caused by the proportional mode after a step change
in error. This concept is
illustrated in Figure 2-7.
Derivative Mode
The derivative or rate mode responds to the rate of change of the error
over time. This speeds up the controller action, compensating for some of
the delays in the feedback loop. The formula for the derivative action is as
follows:
KcTD de/dt (2-5)
where TD is the derivative or rate time. The derivative time is the time it
takes the proportional mode to match the instantaneous action of the
derivative mode on an error that changes linearly with time (a ramp). This
is illustrated in Figure 2-8. Notice that the derivative mode acts only when
the error is changing with time.
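The three modes just described can be combined into a minimal discrete-time controller sketch. This is an illustration of Eqs. 2-3 through 2-5, not the book's algorithm; the class name, the rectangular integration, and the default values are all assumptions.

```python
# Minimal discrete-time sketch of the three basic modes:
# proportional Kc*e, integral (Kc/TI)*integral(e dt), derivative Kc*TD*de/dt.

class PID:
    def __init__(self, Kc, TI, TD, m0=50.0, dt=0.1):
        self.Kc, self.TI, self.TD = Kc, TI, TD
        self.m0 = m0          # bias (base output), %C.O.
        self.dt = dt          # sampling interval
        self.integral = 0.0   # running integral of the error
        self.e_prev = 0.0     # previous error, for the derivative

    def update(self, sp, c):
        """Compute the controller output from set point sp and measurement c."""
        e = sp - c
        self.integral += e * self.dt
        de_dt = (e - self.e_prev) / self.dt
        self.e_prev = e
        return (self.m0
                + self.Kc * e                          # proportional mode
                + (self.Kc / self.TI) * self.integral  # integral (reset) mode
                + self.Kc * self.TD * de_dt)           # derivative (rate) mode
```

Note that this sketch differentiates the error itself, so it would exhibit the derivative kick on set-point changes that the text discusses for the parallel PID form.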
On-Off Control
The three basic modes of feedback control presented in this section are all
proportional to the error in their action. That is, a doubling in the
magnitude of the error causes a doubling in the magnitude of the change
in controller output. By contrast, on-off control operates by switching the
controller output from one end of its range to the other based only on the
sign of the error, not on its magnitude. On-off controllers are not generally
used in process control, and when they are it is very simple to tune them.
Their only adjustment is the magnitude of a dead band around the set
point.
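The on-off behavior with a dead band can be sketched in a few lines. The band width, output levels, and the heating (reverse-acting) convention are assumptions for illustration.

```python
# Sketch of an on-off controller with a dead band around the set point.
# Assumes a heating application: a low measurement calls for full output.

def on_off(c, sp, output_prev, dead_band=2.0, low=0.0, high=100.0):
    """Switch the output only when the measurement leaves the dead band."""
    if c < sp - dead_band / 2:
        return high          # measurement below the band: full output
    if c > sp + dead_band / 2:
        return low           # measurement above the band: zero output
    return output_prev       # inside the dead band: hold the last output
```

Holding the previous output inside the band is what prevents the rapid switching (chatter) that a pure sign-of-error controller would produce.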
The next section, 2-3, discusses the procedures for combining the three
basic control modes to produce industrial process controllers. However,
before doing this we need to simplify the notation for the integral and
derivative modes; a simple look at Eqs. 2-4 and 2-5 makes it clear why. A
simpler notation is achieved by introducing the Heaviside operator “s,”
which stands for the rate of change of the variable it multiplies, while its
reciprocal, 1/s, stands for the time integral. In this notation the integral
mode of Eq. 2-4 becomes (Kc/TIs)e and the derivative mode of Eq. 2-5
becomes KcTDs e.
These operator expressions are easier to manipulate than Eqs. 2-4 and 2-5. For those
readers who are not comfortable with the mathematics, be assured that we
will use these expressions only to simplify the presentation of the material.
Nevertheless, it is important to associate the s operator with rate of change
and its reciprocal with integration. It is also important to realize that since
s is associated with rate of change, it takes on a value of zero (that is, it
disappears) at steady state, when variables do not change with time.
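The remark that s vanishes at steady state can be checked numerically: as s approaches zero, the integral term 1/(TIs) grows without bound, which is exactly why integral action cannot tolerate a nonzero steady-state error. The integral time used below is an assumed illustrative value.

```python
# Magnitude of the PI controller factor [1 + 1/(TI*s)] as s -> 0.
# TI = 1.0 is an assumed value for illustration.

TI = 1.0

def pi_factor(s):
    """Magnitude of the PI operator factor at a given value of s."""
    return abs(1.0 + 1.0 / (TI * s))

gains = [pi_factor(s) for s in (1.0, 0.1, 0.01, 0.001)]
```

Any finite steady-state error would thus demand an unbounded controller output, so the loop can only settle with the error at zero.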
m = Kce + (Kc/(TIs)) e = Kc [1 + 1/(TIs)] e (2-8)
Eq. 2-8 shows that the PI controller has two adjustable parameters, the
gain Kc and the integral or reset time TI. Figure 2-9 presents a block
diagram representation of the PI controller.
The simplest formula for the PID or three-mode controller is the addition
of the proportional, integral, and derivative modes, as follows:
m = Kce + (Kc/(TIs)) e + KcTDs e = Kc [1 + 1/(TIs) + TDs] e (2-9)
This equation shows that the PID controller has three adjustable or tuning
parameters, the gain Kc, the integral or reset time TI, and the derivative or
rate time TD. The block diagram implementation of Eq. 2-9 is sketched in
Figure 2-10. The figure also shows an alternative form that is more
commonly used because it avoids taking the rate of change of the set point
input to the controller. This prevents derivative kick, an undesirable pulse of
short duration on the controller output that would take place when the
process operator changes the set point.
m = Kc′ [1 + 1/(TI′s)] [(1 + TD′s)/(1 + αTD′s)] e (2-10)
The last term in brackets in Eq. 2-10 is a derivative unit and is attached to
the standard PI controller of Figure 2-9 to create the PID controller, as
shown in Figure 2-11. It contains a filter (lag) to prevent the derivative
mode from amplifying noise. The derivative unit is installed on the
controlled variable input to the controller to avoid the derivative kick, just
as in Figure 2-10. The value of the filter parameter α in Eq. 2-10 is not
adjustable; it is built into the design of the controller. It is usually of the
order of 0.05 to 0.1.

Figure 2-10. Block Diagram of Parallel PID Controller with Derivative on the Error Signal, and
with Derivative on the Measurement

The noise filter can and should be added to the derivative term
of the parallel version of the PID controller. Its effect on the response of the
controller is usually negligible because the lag time constant, αTD, is small
relative to the response time of the loop.
The three formulas in Eq. 2-11 convert the parameters of the series PID
controller to those of the parallel version:

Kc = Kc′Fsp    TI = TI′Fsp    TD = TD′/Fsp (2-11)

where

Fsp = 1 + (TD'/TI')
The formulas for converting the parallel PID parameters to the series form are
as follows:

Kc′ = KcFps    TI′ = TIFps    TD′ = TD/Fps

where

Fps = (1/2)[1 + sqrt(1 − 4TD/TI)]

Notice that the conversion to series parameters is possible only when
TD/TI is 1/4 or less.
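The series-to-parallel conversion, with the factor Fsp = 1 + TD′/TI′ given above, can be sketched as follows. The reverse conversion uses the factor Fps shown below, which is an algebraic consequence of the forward formulas and exists only when TD/TI ≤ 1/4; the function names are arbitrary.

```python
import math

def series_to_parallel(Kc_s, TI_s, TD_s):
    """Convert series PID parameters to the parallel form via Fsp = 1 + TD'/TI'."""
    Fsp = 1.0 + TD_s / TI_s
    return Kc_s * Fsp, TI_s * Fsp, TD_s / Fsp

def parallel_to_series(Kc_p, TI_p, TD_p):
    """Convert parallel PID parameters back to the series form."""
    ratio = TD_p / TI_p
    if ratio > 0.25:
        raise ValueError("no series equivalent when TD/TI > 1/4")
    Fps = 0.5 * (1.0 + math.sqrt(1.0 - 4.0 * ratio))
    return Kc_p * Fps, TI_p * Fps, TD_p / Fps
```

A round trip through both functions returns the original parameters, which is a quick way to check either conversion.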
Figure 2-11. Block Diagram of Series PID Controller with Derivative on the Measurement
• Auto/manual switch
When the controller has the incorrect action, you can recognize instability
by the controller output “running away” to either its upper or its lower
limit. For example, suppose the temperature controller on the steam heater
of Figure 2-2 was set so that an increasing temperature increases its
output. In this case, a small increase in temperature would result in an
opening of the steam valve, which in turn would increase the temperature
further, and the cycle would continue until the controller output reached
its maximum with the steam valve fully opened. On the other hand, a
small decrease in temperature would result in a closing of the steam valve,
which would further reduce the temperature, and the cycle would
continue until the controller output is at its minimum point with the steam
valve fully closed. Thus, for the temperature control loop of Figure 2-2 to
be stable, the controller action must be “increasing measurement decreases
output.” This is known as reverse action.
When the controller is tuned too tightly, you can recognize instability by
observing that the signals in the loop oscillate and the amplitude of the
oscillations increases with time, as in Figure 2-12. The reason for this type
of instability is that the tightly tuned controller overcorrects for the error
and, because of the delays and lags around the loop, the overcorrections
are not detected by the controller until some time later. This causes a larger
error in the opposite direction and further overcorrection. If this is allowed
to continue the controller output will end up oscillating between its upper
and lower limits.
The earliest published method for characterizing the process for controller
tuning was proposed by J. G. Ziegler and N. B. Nichols.1 This method
consists of determining the ultimate gain and period of oscillation of the
loop. The ultimate gain is the gain of a proportional controller at which the
loop oscillates with constant amplitude, and the ultimate period is the
period of the oscillations. The ultimate gain is thus a measure of the
controllability of the loop; that is, the higher the ultimate gain, the easier it
is to control the loop. The ultimate period is in turn a measure of the speed
of response of the loop; that is, the longer the period, the slower the loop.
Because this method of characterizing a process must be performed with
the feedback loop closed, that is, with the controller in “Automatic
Output,” it is also known as the “closed-loop method.”
It follows from the definition of the ultimate gain that it is the gain at
which the loop is at the threshold of instability. At gains just below the
ultimate the loop signals will oscillate with decreasing amplitude, as in
Figure 2-5, while at gains above the ultimate the amplitude of the
oscillations will increase with time, as in Figure 2-12. When determining
the ultimate gain of an actual feedback control loop, it is therefore very
important to ensure that it is not exceeded by much, or the system will
become violently unstable.
The procedure for determining the ultimate gain and period is carried out
with the controller in “Auto” and with the integral and derivative modes
removed. It is as follows:
1. Remove the integral mode by setting the integral time to its
highest value (or the reset rate to its lowest value). Alternatively,
if the controller model or program allows the integral mode to be
switched off, then do so.
2. Switch off the derivative mode or set the derivative time to its
lowest value, usually zero.
3. Carefully increase the proportional gain in steps. After each
increase, disturb the loop by introducing a small step change in
the set point, and observe the response of the controlled and
manipulated variables, preferably on a trend recorder. The
variables should start oscillating as the gain is increased, as in
Figure 2-5.
4. When the amplitude of the oscillations remains constant (or
approximately constant) from one oscillation to the next, the
ultimate controller gain has been reached. Record it as Kcu.
The procedure just outlined is simple and requires only a minimum upset
to the process, just enough to be able to observe the oscillations.
Nevertheless, the prospect of taking a process control loop to the verge of
instability is not an attractive one from a process operation standpoint.
However, it is not absolutely necessary in practice to obtain sustained
oscillations. It is also important to realize that some simple loops cannot be
made to oscillate with constant amplitude using just a proportional
controller. Fortunately, these are usually the simplest loops to control and
tune.
The next section, 2-6, shows how to use the ultimate gain and period to
tune the feedback controller.
The Ziegler-Nichols tuning formulas are designed to produce a response known as the
quarter-decay ratio response, or QDR, for short. Figure 2-14 illustrates the
QDR response for a step change in disturbance and for a step change in set
point. Its characteristic is that each oscillation has an amplitude that is one
fourth that of the previous oscillation. Table 2-1 summarizes the formulas
proposed by Ziegler and Nichols for calculating the QDR tuning
parameters of P, PI, and PID controllers from the ultimate gain Kcu and
period Tu.2
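Table 2-1 itself is not reproduced in this excerpt, but the closed-loop formulas Ziegler and Nichols proposed are commonly stated as in the sketch below; treat the exact coefficients as assumptions to be checked against Table 2-1.

```python
# The Ziegler-Nichols closed-loop (QDR) tuning formulas as commonly stated.
# Kcu is the ultimate gain; Tu is the ultimate period.

def zn_qdr(Kcu, Tu, mode):
    """Return QDR tuning parameters for a P, PI, or (series) PID controller."""
    if mode == "P":
        return {"Kc": 0.5 * Kcu}
    if mode == "PI":
        return {"Kc": 0.45 * Kcu, "TI": Tu / 1.2}
    if mode == "PID":   # series form
        return {"Kc": 0.6 * Kcu, "TI": Tu / 2.0, "TD": Tu / 8.0}
    raise ValueError("mode must be 'P', 'PI', or 'PID'")
```

With the ultimate gain of 15 %C.O./%T.O. and an ultimate period of 0.5 min found for the steam heater, these formulas reproduce the gains used later in this unit: 7.5 for the P controller, and 6.75 with TI = 0.42 min for the PI controller.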
It is intuitively obvious that for the proportional (P) controller the gain for
QDR response should be half of the ultimate gain, as Table 2-1 shows. At
the ultimate gain, the maximum error in each direction causes an identical
maximum error in the opposite direction. At half the ultimate gain, the
maximum error in each direction is exactly half the preceding maximum
error in the opposite direction and one fourth the previous maximum
error in the same direction. This is the quarter-decay response.
Figure 2-15 shows the determination of the ultimate gain for the
temperature control loop. A 1°C change in set point is used to start the
oscillations. The figure shows responses for the proportional controller
with gains of 8 and 15%C.O./%T.O. (Note: %C.O. = percent of controller
output range, and %T.O. = percent transmitter output range). Since the
gain of 15%C.O./%T.O. causes sustained oscillations, it is the ultimate
gain, and the period of the oscillations is the ultimate period.
Using the formulas in Table 2-1, the QDR tuning parameters are as
follows:
Figure 2-15. Determination of Ultimate Gain and Period for Temperature Control Loop on
Steam Heater
Figure 2-16 shows the response of the controller output and of the outlet
process temperature to an increase in process flow for the proportional
controller with the QDR gain of 7.5%C.O./%T.O. and with a gain of
4.0%C.O./%T.O. Similarly, Figs. 2-17 and 2-18 show the responses of the PI
and parallel PID controllers, respectively. In each case, the smaller
proportional gain results in less oscillatory behavior and less initial
movement of the controller output, at the expense of a larger initial
deviation and slower return to the set point. This shows that the desired
response can be obtained by varying the values for the tuning parameters,
particularly the gain, given by the formulas.
Notice the offset in Figure 2-16 and the significant improvement that the
derivative mode produces in the responses of Figure 2-18 over those of
Figure 2-17.
Figure 2-16. Response of the Proportional Controller to an Increase in Process Flow
(KC = 7.5 and 4.0 %C.O./%T.O.)

Figure 2-17. Response of the PI Controller to an Increase in Process Flow
(KC = 6.75 and 3.5 %C.O./%T.O., TI = 0.42 min)
The QDR tuning formulas allow you to tune controllers for a specific
response when the ultimate gain and period of the loop can be
determined. The units that follow present alternative methods for
characterizing the dynamic response of the loop (Unit 3) and for tuning
feedback controllers (Units 4, 5, and 6). Section 2-7 discusses the need for
such alternative methods.
Figure 2-18. Response of the Parallel PID Controller to an Increase in Process Flow
(KC = 11.25 and 6.0 %C.O./%T.O., TI = 0.31 min, TD = 0.05 min)
Although the ultimate gain tuning method is simple and fast, other
methods for characterizing the dynamic response of feedback control
loops have been developed over the years. These alternative methods are
needed because it is not always possible to determine the ultimate gain
and period of a loop. As pointed out earlier, some simple loops would not
exhibit constant amplitude oscillations with a proportional controller.
The ultimate gain and period, although sufficient to tune most loops, do
not provide insight into which process or control system characteristics
could be modified to improve the feedback controller performance. A
more fundamental method of characterizing process dynamics is needed
to guide such modifications.
There is also a need to develop tuning formulas for responses other than
the quarter-decay ratio response. This is because the set of PI and PID
tuning parameters that produce quarter-decay response are not unique. It
is easy to see that for each setting of the integral and derivative time, there
will usually be a setting of the controller gain that will produce quarter-
decay response. This means there are an infinite number of combinations
of the tuning parameters that satisfy the quarter-decay ratio specification.
2-8. Summary
This unit has introduced the concepts behind feedback control, controller
modes, and stability of control loops. The ultimate gain or closed-loop
method of tuning feedback controllers for quarter-decay ratio response
was described and found to be simple and fast, but limited in the
fundamental insight it can provide into the performance of the feedback
controller. Alternative process characterization and tuning methods will
be presented in the units that follow.
EXERCISES
2-2. Repeat Exercise 2-1 for a conventional house oven. What variable does the
cook vary when he or she adjusts the temperature dial?
2-3. How much does the output of a proportional controller change when the
error changes by 5 percent if its gain is:
a. 20% PB?
b. 50% PB?
c. 250% PB?
2-6. Repeat Exercise 2-5 but with a PID controller that has a gain of 1.0%C.O./
%T.O., a reset rate of 0 repeats per minute, and a derivative time of 2.0
minutes. In this case, the error signal applied to the controller is as shown
below, that is, a ramp of 5%T.O. per minute is applied for five minutes.
2-7. A test is made on the temperature control loop for a fired heater. It is
determined that the controller gain required to cause sustained oscillations
is 1.2%C.O./%T.O., and the period of the oscillations is 4.5 min.
Determine the QDR tuning parameters for a PI controller. Report the
controller gain as a proportional band and the reset rate in repeats per
minute.
2-8. Repeat Exercise 2-7 for a PID controller, both series and parallel.
REFERENCES
Unit 3: Open-Loop Characterization of Process Dynamics

Learning Objectives — When you have completed this unit, you should be
able to:
Unit 2 showed you how to determine the ultimate gain and period of a
feedback control loop by performing a test with the loop closed, that is,
with the controller on “automatic output.” By contrast, this unit shows
you how to determine the process dynamic parameters by performing a
test with the controller on “manual output,” that is, an open-loop test.
Such tests present you with a more fundamental model of the process than
the ultimate gain and period.
Figure 3-1. Steam Heater Control Loop (set point r, controller TC, controller output m,
transmitter TT, transmitter signal b, steam flow FS, process flow F, inlet temperature Ti,
outlet temperature C)
Notice that the controlled variable C does not appear in the diagram of
Figure 3-2(b). This is because, in practice, the true process variable is not
accessible; what is accessible is the measurement of that variable, that is,
the transmitter output signal b. Similarly, the flow through the control
valve, Fs, does not appear in Figure 3-2(b) because, even if it were
measured, the variable of interest is the controller output signal, m, that is,
the variable that is directly manipulated by the controller.
Figure 3-2. Block Diagram of Feedback Control Loop with Controller on Manual. (a) Showing
the Separate Process Blocks. (b) With all the Field Equipment Combined in a Single Block.
sensitive enough to provide the precision required for analyzing the test
results. Computer and microprocessor-based controllers are ideal for
open-loop testing because they are capable of a more precise change in
their output than are their analog counterparts. They also provide trend
recordings that have adjustable ranges on the measurement and time
scales.
The simplest type of open-loop test is a step test, that is, a sudden and
sustained change in the process input signal m. Figure 3-3 shows a typical
step test. You can obtain more accurate results with pulse testing but at the
expense of considerably more involved analysis. Pulse testing is outside
the scope of this book. The interested reader can find excellent discussions
of pulse testing in the books listed in Appendix A, specifically the texts by
Luyben1 and by Smith and Corripio.2 Sinusoidal testing is not at all
appropriate for most industrial processes because such processes are
usually too slow.
Process Gain
The steady-state gain, or simply the gain, is one of the most important
parameters of a process. It is a measure of the sensitivity of the process
output to changes in its input. The gain is defined as the steady-state
change in output divided by the change in input that caused it:
K = (Change in output)/(Change in input) (3-1)

where K is the process gain.
The change in output is measured after the process reaches a new steady
state (see Figure 3-3), assuming that the process is self-regulating. A self-
regulating process is one that reaches a new steady state when it is driven
by a steady change in input. There are two types of processes that are not
self-regulating: imbalanced or integrating processes and open-loop
unstable processes. A typical example of an imbalanced process is the
liquid level in a tank, and an example of an unstable process is an
exothermic chemical reactor. It is obviously impractical to perform step
tests on processes that are not self-regulating. Fortunately, most processes
are self-regulating.
The gain defined by Eq. 3-1 includes the gains of the transmitter, the
process, and the control valve. This is because, as illustrated in Figure
3-2(b), these three blocks are essentially combined into one. It is common
practice, however, to express the transmitter signal in the engineering
units of the measured variable, in which case it is necessary to convert the
value of the gain to dimensionless units. This is illustrated in Example 3-1.
Example 3-1. Estimation of the Gain from the Step Response. The
step test of Figure 3-3 shows that a 5 percent change in controller output
causes a steady-state change in temperature from 90°C to 95°C. First, the
change in temperature must be converted to a percentage of transmitter
output range. Assume the transmitter range for the steam heater is 50°C to
150°C. Thus, the change in transmitter output signal is as follows:
(95 - 90)°C × (100 - 0)%T.O./(150 - 50)°C = 5 %T.O.
By using percent of range as the units of the signals, the value of the gain is
equally valid for electronic, pneumatic, and computer-based controllers.
Example 3-1 illustrates that it is important to keep track of the units of the
gain when tuning controllers.
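As a quick check on this kind of unit bookkeeping, the arithmetic of Example 3-1 can be sketched in a few lines of Python (the helper function is our own construction, not the book's):

```python
def process_gain(d_out_eng, span_out, d_in_pct):
    """Eq. 3-1 with the output change converted from engineering units
    to percent of transmitter span (%T.O.), giving K in %T.O./%C.O."""
    d_out_pct = d_out_eng * 100.0 / span_out
    return d_out_pct / d_in_pct

# Steam heater of Example 3-1: 90 to 95 degC on a 50-150 degC
# transmitter, for a 5 %C.O. step in controller output.
K = process_gain(d_out_eng=95.0 - 90.0, span_out=150.0 - 50.0, d_in_pct=5.0)
print(K)  # 1.0 %T.O./%C.O.
```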
There are several methods for estimating the process time constant and
dead time from the step response. The first of these methods was
originally proposed by Ziegler and Nichols.3 Let’s call this method the
“tangent” method. The other two methods, the “tangent-and-point”
method and the “two-point” method, give more reproducible results than
the tangent method. The constructions that are required to estimate the
time constant and the dead time are shown in Figure 3-4, which is
basically a reproduction of the step response of Figure 3-3 but showing the
constructions needed to analyze it.
Tangent Method
The tangent method requires you to draw the tangent to the response line
at the point of maximum rate of change or “inflection point,” as shown in
Figure 3-4. The time constant is then defined as the distance in the time
axis between the point where the tangent crosses the initial steady state of
the output variable and the point where it crosses the new steady-state
value. The dead time is the distance in the time axis between the
occurrence of the input step change and the point where the tangent line
crosses the initial steady state. These estimates are indicated in Figure 3-4.
The basic problem with the tangent method is that the drawing of the
tangent is not very reproducible, which creates significant variance in the
estimates of the process time constant and dead time. Another problem
with the tangent method is that its estimate of the process time constant is
too long, and thus it results in tighter controller tuning than the
tangent-and-point and two-point methods.
Figure 3-4. Graphical Determination of Time Constant and Dead Time from Step Response
Unit 3: Open-Loop Characterization of Process Dynamics 43
Tangent-and-Point Method
Two-Point Method
The two-point method makes use of the 63.2 percent point defined in the
tangent-and-point method as well as one other point: where the step
response reaches 28.3 percent of its total steady-state change. This point is
marked in Figure 3-4 as t2. Actually, any two points in the region of
maximum rate of change of the response would do, but the two points
Smith chose result in the following simple estimation formulas for the
time constant and the dead time:
τ = 1.5 (t1 - t2) (3-3)
t0 = t1 - τ (3-4)
The reason the two points should be in the region of maximum rate of
change is that otherwise small errors in the ordinate would cause large
errors in the estimates of t1 and t2. Compared to the tangent-and-point
method, the two-point method results in longer estimates of the dead time
and shorter estimates of the time constant, but it is more reproducible
because it does not require the tangent line to be drawn. This feature is
particularly useful when the response takes the form of sampled values
stored in a computer. In this case, the values of t1 and t2 can be determined
by interpolation, and it is not even necessary to plot the response.
Example 3-2 illustrates the three methods for determining the dynamic
parameters of the process from the step response.
Example 3-2. Gain and Time Constant of Steam Heater. The step
response of Figure 3-4 is for a step change of 5 percent in the output of the
temperature controller of the steam heater shown in Figure 3-1. This
response is an expanded version of the response of Figure 3-3, which was
used in Example 3-1 to determine the process gain. As in that example, the
steady-state change in temperature is 5°C, or 5 percent of the transmitter
range of 50°C to 150°C. In Example 3-1, the process gain was determined
to be 1.0 %T.O./%C.O. In this example, the process time constant and
dead time are estimated by each of the three methods just discussed.
Tangent Method. Figure 3-4 shows the necessary construction of the tangent
to the response at the point of maximum rate of change (inflection point).
The values of the dead time and time constant are then determined from
the intersection of the tangent line with the initial and final steady-state
lines. From Figure 3-4, we get:
Dead time plus time constant: 0.98 min
Dead time: t0 = 0.12 min
Time constant: τ = 0.98 - 0.12 = 0.86 min
Tangent-and-Point Method. The estimate of the dead time is the same as for
the tangent method. To estimate the time constant, first determine point t1
at which the response reaches 63.2 percent of the total steady-state change:
T = 90.0 + 0.632(5.0) = 93.2°C
From Figure 3-4, we get:
t1 = 0.73 min
Time constant: τ = 0.73 - 0.12 = 0.61 min
Two-Point Method. In addition to the 63.2 percent point, which was
determined in the previous method, now determine the 28.3 percent point:
T = 90.0 + 0.283(5.0) = 91.4°C
From Figure 3-4, we get:
t2 = 0.36 min
Time constant, from Eq. 3-3: τ = 1.5(0.73 - 0.36) = 0.56 min
Dead time, from Eq. 3-4: t0 = 0.73 - 0.56 = 0.17 min
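When the step response is available as sampled values, the two-point method reduces to interpolating for the 63.2 and 28.3 percent crossing times. A minimal Python sketch (our own helper, checked here against an exact first-order-plus-dead-time response with the parameters found in Example 3-2):

```python
import math

def fopdt_two_point(t, y):
    """Smith's two-point method, Eqs. 3-3 and 3-4.  t is the list of
    sample times, y the response scaled 0..1 of the total change.
    In the book's notation, t1 is the 63.2% time and t2 the 28.3% time."""
    def crossing(frac):
        # linear interpolation for the first time y reaches frac
        for i in range(1, len(t)):
            if y[i] >= frac:
                f = (frac - y[i - 1]) / (y[i] - y[i - 1])
                return t[i - 1] + f * (t[i] - t[i - 1])
        raise ValueError("response never reaches the %g point" % frac)
    t1, t2 = crossing(0.632), crossing(0.283)
    tau = 1.5 * (t1 - t2)   # Eq. 3-3
    t0 = t1 - tau           # Eq. 3-4
    return tau, t0

# Exact FOPDT response with tau = 0.56 min, t0 = 0.17 min
t = [i * 0.001 for i in range(3000)]
y = [0.0 if ti < 0.17 else 1.0 - math.exp(-(ti - 0.17) / 0.56) for ti in t]
tau_est, t0_est = fopdt_two_point(t, y)
print(tau_est, t0_est)  # recovers approximately 0.56 and 0.17
```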
Although, as Section 3-4 showed, the process time constant and dead time
can be estimated from an open-loop step test, it is important to examine
the physical significance of these two dynamic measures of the process.
Doing so will enable us to estimate the process time constant and dead
time from physical process characteristics (e.g., volumes, flow rates, valve
sizes) when it is not convenient to perform the step test. This section
discusses the time constant, and Section 3-5 explores the dead time.
τ = Capacitance/Conductance = Capacitance × Resistance     (3-5)

The conductance is the ratio of the flow to the potential that drives it.
Figure 3-5. Typical Physical Systems with First-Order Dynamic Response. (a) Electrical R-C
Circuit. (b) Liquid Storage Tank. (c) Gas Surge Tank. (d) Blending Tank.
Electrical System
For this system, the quantity conserved is electric charge, the potential is
electric voltage, and the flow is the electric current. The capacitance is
provided by the ability of the capacitor to store electric charge, and the
conductance is the reciprocal of the resistance of the electrical resistor. The
time constant is then given by:
τ = RC     (3-8)

where R is the resistance of the resistor and C is the capacitance of the capacitor.
Liquid Storage Tank
For this system, the capacitance is provided by the ability of the tank to store liquid, and the potential for flow through the valve is provided by the level of liquid in the tank. The capacitance is the volume of liquid per unit level, that is, the cross-sectional area of the tank,
and the conductance is the change in flow through the valve per unit
change in level. The time constant can then be estimated by:
τ = A/Kv     (3-9)

where A is the cross-sectional area of the tank and Kv is the conductance of the valve.
The conductance of the valve depends on the valve size and the
percentage of lift. It is usually referred to in terms of flow per unit pressure
drop. Notice that the change in pressure drop across the valve per unit
change in level can be calculated by multiplying the density of the liquid
by the local acceleration of gravity.
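Eq. 3-9 is easy to apply once the units are reconciled. The sketch below uses a hypothetical tank; the 20 ft² area and 30 gpm/ft conductance are our own illustrative numbers, not values from the text:

```python
GAL_PER_FT3 = 7.48  # gallons per cubic foot

def tank_time_constant(area_ft2, kv_gpm_per_ft):
    """Eq. 3-9, tau = A/Kv, with the tank area converted to gallons of
    storage per foot of level so that it cancels the gpm/ft of the
    valve conductance and leaves minutes."""
    capacitance = area_ft2 * GAL_PER_FT3   # gal per ft of level
    return capacitance / kv_gpm_per_ft     # minutes

print(tank_time_constant(20.0, 30.0))  # about 5.0 minutes
```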
Blending Tank
τ = V/F     (3-11)

where V is the volume of the blending tank and F is the volumetric flow of product through it.
Gas Surge Tank
The capacitance of the tank is its ability to store air as its density changes
with pressure, which is the potential for flow. Assuming that air at 30 psig
behaves as an ideal gas (z=1) and using the fact that its molecular weight,
M, is 29, the capacitance is as follows:
You can estimate the conductance of the valve using the formulas given by
valve manufacturers for sizing the valves. Because the pressure drop
through the valve is small compared with the pressure in the tank, the
flow is “subcritical,” and the conductance is given by the following
formula:
Kv = W(1 + ∆Pv/P)/(2∆Pv)
   = (100/60)[1 + 5/(30 + 14.7)]/[(2)(5)]
   = 0.1853 (lb/min)/psi
The conductance calculated for the valve is the change in gas flow per unit
change in tank pressure, P. It takes into account the variation in gas
density with pressure and the variation in flow with the square root of the
product of density times the pressure drop across the valve, ∆Pv . For
critical flow, when the pressure drop across the valve is more than one half
the upstream absolute pressure, the conductance can be calculated by the
following formula:
Kv = W/P
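The two conductance formulas can be collected into one helper. This is only a sketch of the expressions quoted above, with the flow already converted to lb/min and the tank pressure taken in absolute units:

```python
def valve_conductance(w_lb_min, p_abs_psia, dp_psi):
    """Gas-valve conductance in (lb/min)/psi: the subcritical formula
    when the drop is less than half the upstream absolute pressure,
    the critical formula (Kv = W/P) otherwise."""
    if dp_psi > 0.5 * p_abs_psia:
        return w_lb_min / p_abs_psia          # critical flow
    return w_lb_min * (1.0 + dp_psi / p_abs_psia) / (2.0 * dp_psi)

# Surge-tank numbers from the text: 100/60 lb/min at 30 psig, 5 psi drop
print(round(valve_conductance(100.0 / 60.0, 30.0 + 14.7, 5.0), 4))  # 0.1853
```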
Pure dead time, also known as transportation lag or time delay, occurs
when the process variable is transported from one point to another, hence
the term transportation lag. At any point in time, the variable downstream
is what the variable upstream was one dead time before, hence the term
time delay. When the variable first starts changing at the upstream point, it
takes one dead time before the downstream variable starts changing,
hence the term dead time. These concepts are all illustrated in Figure 3-6.
The dead time can be estimated using the following formula:
t0 = Distance/Velocity     (3-12)
Figure 3-6. Transportation Lag (Dead Time or Time Delay). Physical Occurrence and Time
Response.
• Pressure and flow travel at the velocity of sound in the fluid, e.g.,
340 m/s or 1,100 ft/s for air at ambient temperature.
These numbers show that, for the reasonable distances that are typical of
process control systems, pure dead time is only significant for
temperature, composition, and other fluid and solid properties. The
velocity of the fluid in a pipe can be calculated using the following
formula:
v = F/Ap     (3-13)

where v is the velocity of the fluid, F is the volumetric flow, and Ap is the cross-sectional area of the pipe.
Given that, as we shall see shortly, the dead time makes a feedback loop
less controllable, most process control loops are designed to reduce the
dead time as much as possible. Dead time can be reduced by installing the
sensor as close to the equipment as possible, using electronic instead of
pneumatic instrumentation, and by other means of reducing the distance
or increasing the speed of transmission.
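Eqs. 3-12 and 3-13 combine into a one-line estimate of the transportation lag. The sampling-line numbers below (a 4-inch line carrying 200 gpm, with a sensor 30 ft downstream) are hypothetical values chosen only for illustration:

```python
import math

def transport_dead_time(distance_ft, flow_ft3_s, pipe_area_ft2):
    """Dead time = distance / velocity (Eq. 3-12), with the velocity
    taken as v = F/Ap (Eq. 3-13).  Returns seconds when the flow is
    in ft3/s and the distance in ft."""
    velocity = flow_ft3_s / pipe_area_ft2
    return distance_ft / velocity

area = math.pi * (4.0 / 12.0 / 2.0) ** 2        # 4-inch pipe, ft2
flow = 200.0 / 7.48 / 60.0                      # 200 gpm -> ft3/s
print(transport_dead_time(30.0, flow, area))    # a few seconds
```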
Pure dead time is usually not significant for most processes. The process
dead time that is estimated from the response to the step test arises from a
phenomenon that is not necessarily transportation lag, but rather from the
presence of two or more first-order processes in series (e.g., the trays in a
distillation column). When you model these processes with a first-order
model, you need the dead time to represent the delay caused by the
multiple lags in series. As an example, Figure 3-7 shows the response of
the composition in a blending train when it consists of one, two, five, and
nine tanks in series. It assumes that the total blending volume is the same,
for example, each of the five tanks has one-fifth the volume of the single
tank. In the limit, an infinite number of infinitesimal tanks in series results
in a pure dead time that is equal to the time constant of the single tank,
that is, the total volume divided by the volumetric flow.
Most real processes fall somewhere between the two extremes of first-
order (perfectly mixed) processes and transportation (unmixed) processes.
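The growth of the apparent dead time with the number of tanks can be verified numerically. The sketch below, a simple Euler integration of our own construction rather than anything from the book, steps a unit change through n equal lags whose time constants sum to the single-tank value, and reports how long the outlet takes to show 5 percent of the change:

```python
def step_response_n_tanks(n, tau_total, t_end, dt=1e-3):
    """Euler simulation of a unit step through n identical first-order
    lags in series, keeping the total time constant fixed at tau_total."""
    tau = tau_total / n
    y = [0.0] * n
    out, t = [], 0.0
    while t <= t_end:
        out.append((t, y[-1]))
        u = 1.0                       # unit step input to the first tank
        for i in range(n):
            y[i] += dt * (u - y[i]) / tau
            u = y[i]                  # each tank feeds the next
        t += dt
    return out

def time_to_reach(resp, level):
    """First simulated time at which the response exceeds level."""
    for t, y in resp:
        if y >= level:
            return t
    return None

for n in (1, 2, 5, 9):
    resp = step_response_n_tanks(n, tau_total=1.0, t_end=3.0)
    print(n, time_to_reach(resp, 0.05))   # apparent dead time grows with n
```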
Figure 3-7. Response of Composition Out of a Train of Blending Tanks in Series. Curves are for
One, Two, Five, and Nine Tanks in Series, Keeping in Each Case the Total Volume of All the
Tanks the Same.
The formulas provided in the preceding sections of this unit show that, for
concentration and temperature, the time constant and the dead time vary
with process throughput. Eqs. 3-11, 3-12, and 3-13 show that the time
constant and the dead time are inversely proportional to the flow and thus
to the throughput. Eqs. 3-9 and 3-10 also show that, for liquid level and
gas pressure, the time constant varies with the valve conductance, Kv ,
which usually varies since it is a function of the valve characteristics and
of the pressure drop across the valve. Control valve characteristics are
usually selected to maintain the process gain constant, which, for liquid
level and gas pressure, is equivalent to keeping the valve conductance
constant (the valve gain is the reciprocal of the valve conductance).
Of the three parameters of a process, the gain has the greatest influence on
the performance of the control system. Such devices as equal-percentage
control valve characteristics are used to ensure that the process gain is as
constant as possible. The equal-percentage characteristic, shown in
Figure 3-8, is particularly useful for this purpose because the gain of most
rate processes (e.g., fluid flow, heat transfer, mass transfer) decreases as
the flow increases, that is, as the valve opens. As Figure 3-8 shows, the
gain or sensitivity of an equal-percentage valve increases as the valve is
opened, which compensates for the decrease in the process gain.
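The compensation Figure 3-8 illustrates can be stated algebraically. For the common exponential form of the equal-percentage characteristic, flow fraction = R^(x-1), the valve gain is proportional to the flow itself. The rangeability R = 50 below is a typical value, not one quoted in the text:

```python
import math

def eq_pct_fraction(lift, rangeability=50.0):
    """Equal-percentage characteristic: flow fraction R**(x - 1) at
    fractional lift x, for a valve of rangeability R."""
    return rangeability ** (lift - 1.0)

def eq_pct_gain(lift, rangeability=50.0):
    """Valve gain d(flow)/d(lift) = ln(R) * R**(x - 1): an equal
    percentage of the current flow, hence the name, so the gain
    increases as the valve opens."""
    return math.log(rangeability) * eq_pct_fraction(lift, rangeability)

for lift in (0.25, 0.50, 0.75, 1.00):
    print(lift, round(eq_pct_gain(lift), 3))
```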
Reset Windup
Reset windup is more common in batch processes and during the start-up
and shutdown of continuous processes, but when you are tuning
controllers you should always keep the possibility of windup in mind.
Some problems that are apparently tuning problems are really caused by
unexpected reset windup. Unit 4 looks at reset windup in more detail.
Example 3-5 illustrates the variation of the process gain in a steam heater.
It takes advantage of the fact that for the heater the gain can be calculated
from a simple steady-state energy balance on the heater.
Based on the response to a step test, Example 3-1 determined that the gain
of the heater is 1.0%T.O./%C.O. at the design conditions. In this example
we will verify this value from a steady-state energy balance on the heater
and study its dependence on process flow.
An energy balance on the heater, ignoring heat losses, yields the following
formula:
FCp(T - Ti) = FsHv
where Fs is the steam flow, and the other terms have been defined in our
initial statement of the problem. The desired gain is the steady-state
change in outlet temperature per unit change in steam flow:
K = (Change in outlet temperature)/(Change in steam flow) = Hv/(FCp)
Notice that the gain is inversely proportional to the process flow F. From
this formula, we know that the units of the gain are °C/(kg/s). To convert
them to %T.O./%C.O. (dimensionless), multiply this number by the range
of the valve (2.0 kg/s) and divide the result by the span of the transmitter
(100°C). Doing this at several process flows tabulates the gain as a function of flow.
Example 3-5 shows the variation of the process gain, which indicates that
the steam heater is nonlinear. As mentioned earlier, the decrease in process
gain with an increase in flow is characteristic of many process control
systems. This explains the popularity of equal-percentage control valves,
which compensate exactly for this gain variation.
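The gain expression of Example 3-5 can be exercised numerically. The design values below are our own assumptions, chosen only so that the design gain works out to 1.0 %T.O./%C.O. as in Example 3-1 (Hv = 2090 kJ/kg, Cp = 4.18 kJ/kg·°C, design flow 10 kg/s); the valve range of 2.0 kg/s and transmitter span of 100°C are from the text:

```python
def heater_gain(hv, flow, cp):
    """Steady-state heater gain K = Hv/(F*Cp), degC per (kg/s) of steam."""
    return hv / (flow * cp)

def to_dimensionless(k_eng, valve_range=2.0, span=100.0):
    """Convert degC/(kg/s) to %T.O./%C.O.: multiply by the valve range
    and divide by the transmitter span, as described in the text."""
    return k_eng * valve_range / span

HV, CP = 2090.0, 4.18                 # assumed design values
for flow in (5.0, 10.0, 20.0):        # kg/s of process fluid
    print(flow, to_dimensionless(heater_gain(HV, flow, CP)))
# the gain halves every time the process flow doubles
```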
The base response is the controlled variable profile for the batch when the
manipulated variable is maintained at the base or design conditions. Then,
when you apply the step change to the manipulated variable, you obtain a
different profile for the controlled variable. You must then estimate the
process parameters from the difference between the two profiles. This
procedure is demonstrated in Example 3-6.
Figure 3-9. Sketch of Vacuum Pan Used for Batch Crystallization of Sugar
Figure 3-10 shows the base profiles of the supersaturation and viscosity.
The figure also shows the corresponding profiles after a step change in
steam rate is applied. The step response is then given by the difference
between the two curves. As demonstrated by a computer simulation of a
vacuum pan reported by Qi Liwu and Corripio, these curves would be
difficult to obtain on an actual pan because they involve running two
batches with the steam valve held constant.5
Figure 3-10. Base Profile and Profile After a Step Change in the Steam Valve of Vacuum Pan.
The Step Response is the Difference Between the Two Profiles.
3-8. Summary
This unit showed you how to perform and analyze a process step test to
determine the parameters of a first-order-plus-dead-time (FOPDT) model
of the process. These parameters are the gain, the time constant, and the
dead time. It also discussed the physical significance of these parameters
and showed how to estimate them from process design parameters for
some simple process loops. The units to follow will use these estimated
dynamic parameters to design and tune feedback, feedforward, and
multivariable controllers.
EXERCISES
3-3. A change of 100 lb/hr in the set point of a steam flow controller for the
reboiler of a distillation column results in a change in the bottoms
temperature of 2°F. The steam flow transmitter has a range of 0 to
5,000 lb/hr, and the temperature transmitter has a calibrated range of
200°F to 250°F. Calculate the process gain for the temperature loop in
°F/(lb/hr) and in %T.O./%C.O.
3-4. When tuning feedforward control systems you need the FOPDT
parameters of the process for step changes both in the disturbance and in
the manipulated variable. The figure below shows the response of the steam
heater outlet temperature of Figure 3-1 to a step change of 2 kg/s in process
flow. Determine the gain, time constant, and dead time for this response
using the tangent method and the tangent-and-point method.
3-6. A passive low-pass filter can be built with a resistor and capacitor. The
maximum sizes of these two components for use in printed circuit boards
are, respectively, 10 megohms (million ohms) and 100 microfarads
(millionth of farad). What then would be the maximum time constant of a
filter built with these components?
3-7. The surge tank of Figure 3-5b has an area of 50 ft2, and the valve has a
conductance of 50 gpm/ft of level change (1 ft3 = 7.48 gallons). Estimate the
time constant of the response of the level.
3-8. The blender of Figure 3-5d has a volume of 2,000 gallons. Calculate the
time constant of the composition response for product flows of (a) 50 gpm,
(b) 500 gpm, and (c) 5,000 gpm.
3-9. The blender of Figure 3-5d mixes 100 gpm of concentrated solution at
20 lb/gallon with 400 gpm of dilute solution at 2 lb/gallon. Calculate the
steady-state product concentration in lb/gallon. How much would the
outlet concentration change if the concentrated solution rate were to change
to 110 gpm, all other conditions remaining the same? Calculate the process
gain for the suggested change.
3-10. Repeat Exercise 3-9 assuming that the initial rates are 10 gpm of
concentrated solution and 40 gpm of dilute solution and that to do the test
the concentrated solution is changed to 11 gpm.
REFERENCES
Learning Objectives — When you have completed this unit, you should be
able to:
The formulas of Table 4-1 are very similar to those of Table 2-1. Notice, for
example, that in both sets of formulas the proportional gain of the PI
controller is 10 percent lower and the series PID gain 20 percent higher
than that of the P controller. Note also that the derivative or rate time is
one-fourth the integral or reset time for the series PID controller. The ratio
of the integral time of the PI controller to that of the series PID controller is
62 Unit 4: How to Tune Feedback Controllers
also the same for both sets of formulas. In other words, the reset action is
about 1.7 times faster when derivative is used than when it is not.
The formulas of Table 4-1, however, provide important insights into the
effect that the parameters of the process have on the tuning of the
controller and thus on the performance of the loop. In particular, they
allow us to draw the following three conclusions:
1. The controller gain is inversely proportional to the process gain
K. Since the process gain represents the product of all the
elements in the loop other than the controller (control valve,
process equipment, and sensor/transmitter), this means that the
loop response depends on the loop gain, that is, the product of all
of the elements in the loop. It also means that if the gain of any of
the elements were to change because of recalibration, resizing, or
nonlinearity (see Section 3-6), the response of the feedback loop
would change unless the controller gain is readjusted.
2. The controller gain must be reduced when the ratio of the process
dead time to its time constant increases. This means that the
controllability of the loop decreases when the ratio of the process
dead time to its time constant increases. It also allows us to define
the ratio of dead time to time constant as the uncontrollability
parameter of the loop:
Pu = t0/τ     (4-1)

where Pu is the uncontrollability parameter, t0 is the process dead time, and τ is the process time constant.
Notice that it is the ratio of the dead time to the time constant that
determines the controllability of the loop. In other words, a
process with a long dead time is not uncontrollable if its time
constant is much longer than the dead time.
These three conclusions can be very helpful as guidelines for the tuning of
feedback controllers, even in cases where the tuning formulas cannot be applied directly.
The three conclusions we have just drawn from the tuning formulas can
also guide the design of the process and its instrumentation when they are
coupled with the methods for estimating time constants and dead times
given in Sections 3-4 and 3-5 of Unit 3. For example, loop controllability
can be improved by reducing the dead time between the manipulated
variable and the sensor or by increasing the process time constant.
Moreover, it is possible to quantitatively estimate the effect of process,
control valve, and sensor nonlinearities on the variability of the loop gain
and thus determine whether there’s any need to readjust the controller
gain when process conditions change.
The formulas of Table 4-1 were developed empirically for the most
common range of the process uncontrollability parameter, which is
between 0.1 and 0.3. This assumes that the process does not exhibit
significant transportation lag, but rather that the dead time is the result of
several time lags in series (e.g., trays in a distillation column).
The QDR formulas were developed for continuous analog controllers and
thus must be adjusted for the sampling frequency of digital controllers—
that is, computer control algorithms, distributed controllers, or
microprocessor-based controllers. Moore and his co-workers proposed
that the process dead time be increased by one half the sampling period to
account for the fact that the controller output is held constant for one
sampling period, where the sampling period is the time between updates
of the controller output.2 Following this procedure, the uncontrollability
parameter for digital controllers is as follows:
Pu = (t0 + T/2)/τ     (4-2)
When the process dead time is very small compared with the process time
constant, the effect of the derivative time is minor, and a PI controller can
be used in which the integral time is equal to the process time constant.
For computer (discrete) controllers with a uniform sampling interval, one
half the sample time must be added to the dead time, as in Eq. 4-2.
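Eqs. 4-1 and 4-2 collapse into one expression, since setting the sampling period to zero recovers the continuous (analog) case. A small sketch, using the tangent-method estimates for the steam heater (t0 = 0.12 min, τ = 0.86 min):

```python
def uncontrollability(t0, tau, sample_period=0.0):
    """Pu = (t0 + T/2)/tau (Eq. 4-2); with T = 0 this is Eq. 4-1."""
    return (t0 + sample_period / 2.0) / tau

print(round(uncontrollability(0.12, 0.86), 3))       # analog controller
print(round(uncontrollability(0.12, 0.86, 0.1), 3))  # 0.1 min sampling
```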
Gain Adjustment
Although the gain is adjustable, the following formulas are proposed here:
When Pu is less than 0.1 or greater than 0.5, you should use one half
this gain as the starting value.
• For 5 percent overshoot on set point changes, use the following for-
mula:
The four preceding formulas convey the idea that the controller gain can
be adjusted to obtain a variety of responses. Once you have set the time
parameters using the best estimates of the process time parameters, the
tuning procedure is reduced to adjusting a single parameter: the controller
gain.
This section compares the two methods presented in the first two sections
of this unit by tuning the temperature controller of the steam heater in
Figure 3-1 as well as two other hypothetical processes: one that is
controllable and one that is difficult to control.
For the heat exchanger of Figure 3-1, recall that the first-order-plus-dead-
time model parameters (which we determined in Example 3-2) are as
follows:
K = 1.0 %T.O./%C.O.
Notice that which tuning parameters you use will depend on which
method you use to determine the time constant and dead time. Ziegler
and Nichols used the tangent method to develop their empirical formulas,
working with actual processes and physical simulations. Thus, you should
use the tangent method when tuning for quarter-decay-ratio (QDR)
response. The IMC tuning rules were developed for first-order-plus-dead-
time models, so any of the three methods can be used to determine which
dead time and time constant to use with the IMC formulas. Since the
tangent method gives the smallest value for the uncontrollability
parameter (that is, the shortest dead time and longest time constant), it
results in the tightest tuning, while the two-point method produces the
highest value for the uncontrollability parameter and thus the most
conservative tuning.
Controller      Gain, %C.O./%T.O.    Integral Time, min    Derivative Time, min
PI              6.5                  0.40                  —
PID (series)    8.6                  0.24                  0.06
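These entries follow from the QDR (Ziegler-Nichols open-loop) formulas applied with K = 1.0 %T.O./%C.O., τ = 0.86 min, and t0 = 0.12 min. The sketch below uses the standard form of those formulas, which we assume matches Table 4-1:

```python
def qdr_pi(k, tau, t0):
    """Quarter-decay-ratio tuning for a PI controller:
    Kc = 0.9*tau/(K*t0), Ti = 3.33*t0."""
    return {"Kc": 0.9 * tau / (k * t0), "Ti": 3.33 * t0}

def qdr_pid_series(k, tau, t0):
    """Quarter-decay-ratio tuning for a series PID controller:
    Kc = 1.2*tau/(K*t0), Ti = 2*t0, Td = 0.5*t0."""
    return {"Kc": 1.2 * tau / (k * t0), "Ti": 2.0 * t0, "Td": 0.5 * t0}

print(qdr_pi(1.0, 0.86, 0.12))          # Kc ~ 6.5, Ti ~ 0.40 min
print(qdr_pid_series(1.0, 0.86, 0.12))  # Kc ~ 8.6, Ti ~ 0.24, Td ~ 0.06
```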
Using these tuning parameters, Figure 4-1 compares the responses of the
temperature transmitter output and of the controller output to a step
increase in process flow to the heater. The advantage of the derivative
mode is obvious: it produces a smaller initial deviation and maintains the
temperature closer to the set point for the entire response, with fewer
oscillations.
Figure 4-1. Responses of PI and PID Controllers to Disturbance Input on the Heat Exchanger
with QDR Tuning
The next example compares QDR versus IMC tuning of the temperature
controller of the heat exchanger in Figure 3-1.
The IMC tuning rules presented in Section 4-2 give the integral and
derivative times:
A comparison of the tuning parameters shows that the QDR formulas call
for a 30 percent higher gain, an integral mode over twice as fast, and a
33 percent shorter derivative time than IMC. The difference in derivative
time is caused only by the difference in the method used for estimating the
dead time, as the formulas are identical.
Figure 4-2 compares the QDR and IMC responses of the PID temperature
controller to a step increase in process flow to the heater. Both controllers
perform well, reducing the initial deviation in outlet temperature to about
one-tenth what it would be without control. QDR tuning results in a
slightly smaller initial deviation and brings the temperature back to the set
point of 90°C quicker than does IMC tuning. To achieve this good
performance the QDR-tuned controller causes a 50 percent overcorrection
in controller output, while the IMC-tuned controller smoothly moves the
controller output from its initial to its final position.
To compare the performance to set point changes, the IMC gain must be
adjusted to the value recommended for set point changes which is given
by Eq. 4-6:
The QDR parameters for the PID controller are as shown in Example 4-1.
Figure 4-3 compares the responses of the PID controller to a 5°C set point
change. As expected, QDR tuning results in a large overshoot, while IMC
tuning smoothly moves the variable from its original to its final set point.
QDR tuning also causes a much larger initial change in the controller
output.
Figure 4-2. Comparison of PID Responses to Disturbance Input on Heat Exchanger with QDR
and IMC Tuning
However, this solution does not address cases where set point changes are
common, such as batch processes and on-line optimization. One recent
development in industrial operations is to incorporate on-line
optimization programs that automatically change controller set points as
the optimum conditions change. Most of these programs have limits on
the sizes of the set point changes they can make. At any rate, one sure way
to prevent large changes in controller output on set point changes is to have the proportional mode act on the measurement instead of on the error.
Figure 4-3. Responses to Set Point Change on Heat Exchanger PID Controller with QDR and
IMC Tuning
Here, the gain of the IMC controller has been taken as one half the gain
given by Eq. 4-4 for disturbance inputs because the uncontrollability
parameter is less than 0.1. Notice that the gains are rather high, which
indicates very tight control.
Example 4-3 shows that good performance on the controlled variable must
be balanced against too much action on the controller output. This is
because the controller output usually causes disturbances to other
controllers and in some cases manipulates safety-sensitive variables. For
example, in a furnace temperature controller the controller output could
be manipulating the fuel flow to the furnace. A large drop in fuel flow
could cause the flame in the firing box to go out.
Figure 4-4. Responses to Disturbance Input of a Controllable Process with Pu = 0.05 for PI
Controller with QDR and IMC Tuning
Notice that in this case the IMC formulas call for a faster integral time than
do the QDR formulas. The IMC gain is half the one predicted by Eq. 4-4 for
disturbance inputs because the uncontrollability parameter is greater than
0.5.
Figure 4-5 compares the responses of the PID controllers tuned using the
QDR and IMC formulas to a 10 percent change in disturbance. Notice that
the initial deviation in the controlled variable for both controllers is about
65 percent of what it would be if there were no control (13%T.O. versus
20%T.O.). This is because the high uncontrollability parameter requires
low controller gains. Because of its faster integral mode, the IMC-tuned
controller brings the controlled variable back to set point of 50%T.O.
slightly faster than the QDR-tuned controller. The variation in the
controller output is about the same for both controllers. It is high because
of the large deviation of the controlled variable from the set point.
The four examples in this section have compared the tuning parameters
obtained from the two tuning methods presented in this unit, as well as
the performance of the controller when tuned by each of these methods.
To summarize our findings:
Figure 4-5. Responses to Disturbance Input for PID Controllers with QDR and IMC Tuning. (a) Controlled Variable. (b) Controller Output.
This section presents seven tips that I hope will help you make your
controller tuning task more efficient and satisfying.
1. Tune coarse, not fine.
Faced with the enormous number of possible combinations of tuning parameters, you might give up the task of tuning before you even get started. But
once you realize that the controller performance does not require
tuning parameters to be set precisely, you reduce the number of
significantly different combinations to a workable number.
Moreover, you will be satisfied by the large improvements in
performance that can be achieved by coarse tuning—in sharp
contrast to the frustration you will feel in the small incremental
improvements achieved through fine tuning. How coarse is coarse
tuning? When tuning a controller, I seldom change a parameter by
less than half its current value.
2. Tune with confidence.
Many times, poor loop response can be the result of trying to bring
the controlled variable back to its set point faster than the process
can respond. In such cases, increasing the integral time allows an
increase in the controller gain and an improvement in the response.
76 Unit 4: How to Tune Feedback Controllers
A properly tuned controller will behave well as long as its output remains
in a range where it can change the manipulated flow. However, it will
behave poorly if, for any reason, the effect of the controller output on the
manipulated flow is lost. A gap between the limit on the controller output
and the operational limit of the control valve is the most common cause of
reset windup. The symptom is a large overshoot of the controlled variable
while the integral mode in the controller is crossing the gap. Reset windup
occurs most commonly during start-up and shutdown, but it can also
occur during product grade switches and large disturbances during
continuous operation. Momentary loss of a pump may also cause reset
windup.
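One common software remedy, not detailed in the text, is conditional integration: the integral mode is simply frozen whenever the controller output saturates. The sketch below is illustrative; the function name, limits, and positional PI form are my own assumptions, not this book's algorithm.

```python
def pi_with_antiwindup(error, state, Kc, TI, T, out_min=0.0, out_max=100.0):
    """One step of a PI controller that stops integrating while saturated.

    error: set point deviation, %T.O.; state: current integral
    contribution, %C.O.; Kc: gain; TI: integral time; T: sample time
    (same units as TI). Returns (output, new_state).
    """
    proportional = Kc * error
    # Tentative update of the integral contribution
    new_state = state + Kc * (T / TI) * error
    output = proportional + new_state
    if output > out_max or output < out_min:
        # Output is saturated: freeze the integral so it cannot wind up
        new_state = state
        output = min(max(proportional + new_state, out_min), out_max)
    return output, new_state
```

While the output sits at a limit, the stored integral term stays bounded, so the controller recovers as soon as the error reverses instead of overshooting while the integral "crosses the gap."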
[Figure: steam-heated reactor temperature control loop (TC/TT manipulating the steam valve) and trends of controller output m, %C.O., and temperature T, °C, illustrating reset windup]
Some processes exhibit what is known as inverse response, that is, an initial
move in the direction opposite to the final steady-state change when the
input is a step change. A typical example of a process with inverse
response is an exothermic reactor where the feed is colder than the reactor.
An increase in the feed rate to the reactor causes the temperature to drop
initially due to the larger rate of cold feed. However, eventually, the
increase in reactants flow increases the rate of the reaction and with it the
rate of the heat generated by the reaction. This causes the temperature in
the reactor to end up higher than it was initially. Another typical inverse
response is the level in the steam drum of a water tube boiler when the
steam demand changes. The inverse response is caused when the
phenomena of “swell” and “shrink” affect the steam bubbles in the boiler
tubes.
One approach to tuning a feedback controller for a process that has inverse
response is to consider the period of the inverse move as dead time. This is
demonstrated in Example 4-5.
From Figure 4-7, we know the duration of the inverse response is 1.3
minutes. This is taken as the process dead time. The time required to reach
the 63.2 percent point of the response (50.63%T.O.) is shown in the figure
to be 3.3 minutes. Therefore, t0 = 1.3 min and τ = 3.3 - 1.3 = 2.0 min.
The tuning parameters for a PI controller are calculated with the formulas
from Table 4-1:

Kc = (0.9)(2.0)/[(1.0)(1.3)] = 1.4%C.O./%T.O.
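This calculation is easy to script; a minimal sketch, assuming the Table 4-1 quarter-decay-ratio PI gain formula Kc = 0.9 τ/(K t0):

```python
def qdr_pi_gain(K, tau, t0):
    """Quarter-decay-ratio PI controller gain (Table 4-1 form).

    K: process gain, %T.O./%C.O.; tau: time constant; t0: dead time
    (tau and t0 in the same time units). Returns Kc in %C.O./%T.O.
    """
    return 0.9 * tau / (K * t0)

# Example 4-5: K = 1.0, tau = 3.3 - 1.3 = 2.0 min, t0 = 1.3 min
Kc = qdr_pi_gain(1.0, 2.0, 1.3)   # about 1.4 %C.O./%T.O.
```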
[Figure 4-7. (a) Uncontrolled and PI-controlled responses of C, %T.O.: the inverse response lasts 1.3 min, and the 63.2 percent point (50.63%T.O.) is reached 3.3 min after the input change; (b) controller output]
4-7. Summary
In this unit we looked at controller tuning methods based on the gain, time
constant, and dead time of the process in the feedback control loop, where
the process represents all of the elements between the controller output
and its input. We then compared the tuning methods with each other. We
demonstrated the effect of derivative mode, as well as the question of
when to tune for disturbance inputs or for set point changes. Tuning for
very controllable and very uncontrollable processes was discussed and
illustrated, and some practical tuning tips were presented. The
phenomena of reset windup and inverse response were also discussed.
EXERCISES
4-1. Based on the tuning formulas given in this unit, how must you change the
controller gain if, after the controller is tuned, the process gain were to
double because of its nonlinear behavior?
4-3. Assuming that the quarter-decay ratio formulas of Table 4-1 give the same
tuning parameters as those of Table 2-1, what relationship can be
established between the controller ultimate gain and the gain and
uncontrollability parameter of the process in the loop? What is the
relationship between the ultimate period and the process dead time?
4-6. Readjust the tuning parameters of Exercise 4-5 to reflect that the PID
controller is to be carried out with a processing period of 8 s on a computer
control installation.
4-7. Repeat Exercise 4-5 for a series PID controller tuned by the IMC tuning
rules for disturbance inputs.
4-8. Repeat Exercise 4-5 for a series PID controller tuned by the IMC tuning
rules for set point changes.
4-9. Which method would you use to tune the slave controller in a cascade
control system? In such a system the output of the master controller takes
action by changing the set point of the slave controller.
4-10. What is the typical symptom of reset windup? What causes it? How can it
be prevented?
REFERENCES
Learning Objectives — When you have completed this unit, you should be
able to:
There are two situations when the controlled variable can be allowed to
vary in a range:
86 Unit 5: Mode Selection and Tuning Common Feedback Loops
The second situation when the controlled variable can be allowed to vary
in a range calls for proportional controllers with as wide a proportional
band as possible. This situation is found in the control of level in
intermediate storage tanks and condenser accumulators, as well as in the
control of pressure in gas surge tanks.
Flow control is the simplest and most common of the feedback control
loops. The schematic diagram of a flow control loop in Figure 5-1 shows
that there are no lags between the control valve that causes the flow to
change and the flow sensor/transmitter (FT) that measures the flow. Since
most types of flow sensors (orifice, venturi, flow tubes, magnetic
flowmeters, turbine meters, Coriolis, etc.) are very fast, the only significant
lag in the flow loop is the control valve actuator. Most actuators have time
constants on the order of a few seconds.
[Figure 5-1. Schematic of a flow control loop: flow controller (FC) manipulating the control valve and flow transmitter (FT) measuring the flow]
Example 5-1. Flow Control with Valve Hysteresis. Figure 5-2 shows
the responses of a flow control loop to small variations in pressure drop
across the valve for two different tunings of the controller. The control
valve is assumed to have a hysteresis band of 0.1 percent of the range of
the valve position and a time constant of 0.1 minutes. The curve labeled (a)
is for the traditional tuning of low gain and fast integral, while curve (b) is
for a more aggressive tuning of a gain near unity and slower integral. As
Figure 5-2 shows, the more aggressive tuning reduces the variation in
flow, which in this case is caused by the hysteresis in the valve.
There are two reasons for controlling level and pressure: to keep them
constant because of their effect on process or equipment operation or to
smooth out variations in flow while satisfying the material balance. The
former case calls for “tight” control while the latter is usually known as
“averaging” control. Pressure is to gas systems what level is to liquid
systems, although liquid pressure is sometimes controlled.
[Figure 5-2. Flow responses, %T.O., around 50%: (a) traditional tuning, low gain and fast integral; (b) aggressive tuning, gain near unity and slower integral]
Tight Control
Two examples of tight liquid level control and one example of tight
pressure control are shown in Figure 5-3. It is important to control level in
natural-circulation evaporators and reboilers because a level that is too
low causes deposits on the bare hot tubes, while a level that is too high
causes elevation of the boiling point, which reduces the heat transfer rate
and prevents the formation of bubbles that enhance heat transfer by
promoting turbulence. A good example of tight pressure control or
pressure regulation is the control of the pressure in a liquid or gas supply
header. It is important to maintain the pressure in the supply header
constant to prevent disturbances to the users when there is a sudden
change in the demand of one or more of the users.
To design tight level and pressure control systems one must have a fast-
acting control valve, with a positioner if necessary, so as to avoid
secondary time lags, which would cause oscillatory behavior at high
controller gains. If the level or pressure controller is cascaded to a flow
controller, the latter must be tuned as tight as possible, as mentioned in the
preceding section.
Figure 5-3. Examples of Tight Control: (a) Calandria Type Evaporator, (b) Thermosyphon
Reboiler, (c) Header Pressure Regulation
Two examples of averaging level control are shown in Figure 5-4: the
control of level in a surge tank and in a condenser accumulator drum. Both
the surge tank and the accumulator drum are intermediate process storage
tanks. The liquid level in these tanks has absolutely no effect on the
operation of the process. It is important to realize that the purpose of an
averaging level controller is to smooth out flow variations while keeping
the tank from overflowing or running empty. If the level were to be
controlled tight in such a situation, the outlet flow would vary just as
much as the inlet flow(s), and it would be as if the tank (or accumulator)
were not there.
The averaging level controller should be proportional only with a set point
of 50 percent of range, a gain of 1.0 (proportional band of 100 percent), and
an output bias of 50 percent. This configuration causes the outlet valve to
be fully opened when the level is at 100 percent of range and fully closed
when the level is at 0 percent of range, using the full capacity of the valve
and of the tank. A higher gain would reduce the effective capacity of the
tank for smoothing variations in flow, while a lower gain would reduce
the effective capacity of the control valve and create the possibility that the
tank would overflow or run dry. With this proposed design, the tank
behaves as a low-pass filter to flow variations. The time constant of such a
filter is as follows:
τf = A(hmax - hmin)/(Kc Fmax)    (5-1)
where
A = cross-sectional area of the tank, ft2
hmin and hmax = the low and high points of the range of the level transmitter,
respectively, ft
Fmax = the maximum flow through the control valve when fully
opened (100 percent controller output), ft3/min
Figure 5-4. Averaging Level Control: (a) Surge Tank, (b) Condenser Accumulator Drum
The controller gain is assumed to be 1.0 in this design. When the level
controller is cascaded to a flow controller, Fmax is the upper limit of the
range of flow transmitter in the flow control loop. Notice that an increase
in gain results in a reduction of the filter time constant and therefore less
smoothing of the variations in flow. A good way to visualize this is to
notice that doubling the gain would be equivalent to reducing either the
tank area or the transmitter range by a factor of two, thus reducing the
effective capacity of the tank. On the other hand, reducing the controller
gain to half would be equivalent to reducing the capacity of the valve by
half, thus increasing the possibility that the tank would overflow.
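Equation 5-1 and this capacity argument can be checked with a short sketch. The tank and valve sizes below are illustrative (they match the ones used later in Example 5-2); with Kc = 1 the product A(hmax - hmin) is simply the tank volume spanned by the transmitter.

```python
def level_filter_time_constant(volume_span, Kc, F_max):
    """Filter time constant of an averaging level loop, Eq. 5-1.

    volume_span: A*(hmax - hmin), the tank volume across the
    transmitter range (e.g., gallons); F_max: flow through the fully
    open valve (e.g., gpm); Kc: proportional gain, %C.O./%T.O.
    """
    return volume_span / (Kc * F_max)

# A 10,000-gal tank with a 1,000-gpm valve: 10-min filter at Kc = 1,
# but only a 1-min filter at Kc = 10 (tight tuning smooths very little)
tau_avg = level_filter_time_constant(10000.0, 1.0, 1000.0)
tau_tight = level_filter_time_constant(10000.0, 10.0, 1000.0)
```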
• The level and the flow that is manipulated to control the level oscil-
late with a long period of oscillation. Sometimes the period is so
long that the oscillation is imperceptible, unless it is trended over a
very long time.
There are intermediate situations that do not require a very tight level
control but where it is nevertheless important to ensure that the level does
not swing through the full range of the transmitter as in averaging level
control. A typical example would be a blending tank, where the level
controls the tank volume and therefore the residence time for blending. If
a ±5 percent variation in residence time is acceptable, a proportional
controller with a gain of 5 to 10, or even lower, could be used, as the flow
would not be expected to vary over the full range of the control valve
capacity.
Example 5-2. Tight and Averaging Level Control. Figure 5-5 shows the
responses of the control of the level in a tank where the level controller is
tuned for averaging and for tight level. The inlet flow into the tank, shown
by the step changes in the figure, increases by 200 gpm, then by an
additional 200 gpm five minutes later. It then decreases by 200 gpm five
minutes after that and returns to its original value five minutes later. This
simulates the dumping of the contents of two batch reactors into the tank,
each at the rate of 200 gpm for ten minutes, with the second reactor
starting halfway through the dumping of the first one. The integral time of
the level controller is set to twenty minutes, and the tank has a total
capacity of 10,000 gallons, while the valve has a flow capacity of 1,000 gpm
when fully opened.
Figure 5-5. Level Control Responses (a) Averaging Control, Kc = 1%C.O./%T.O., (b) Tight
Control, Kc = 10%C.O./%T.O. (Inlet flow is represented by the step changes)
As Figure 5-5 shows, the averaging level control reduces the variation of
the outlet flow to about half the variation of the inlet flow, and it causes
the changes in the outlet flow to be gradual. On the other hand, tight level
control maintains the level within 5 percent of the set point. Such tight
control of level requires that the outlet flow essentially follow the variation
of the inlet flow.
τs = MCp/(hA)    (5-2)

where
M = mass of the sensor, kg
Cp = specific heat of the sensor, kJ/kg-°C
h = heat transfer coefficient between the fluid and the sensor, kW/m2-°C
A = area of contact for heat transfer, m2

When these units are used, the time constant is calculated in seconds.
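Equation 5-2 can be evaluated directly; a sketch with illustrative sensor data (the numbers below are made up for the example):

```python
def sensor_time_constant(M, Cp, h, A):
    """Temperature sensor time constant from Eq. 5-2: tau = M*Cp/(h*A).

    M: sensor mass, kg; Cp: specific heat, kJ/kg-C;
    h: heat transfer coefficient, kW/m2-C; A: contact area, m2.
    With these units the result is in seconds, since kJ/kW = s.
    """
    return M * Cp / (h * A)

# e.g., a 0.04-kg element with Cp = 0.45 kJ/kg-C in a well giving
# h = 0.3 kW/m2-C over 0.006 m2 of contact area:
tau_s = sensor_time_constant(0.04, 0.45, 0.3, 0.006)   # 10 s
```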
[Figure: schematics of common temperature control loops (TC/TT on a fired process with air and fuel streams) and of an analyzer control loop (QC with flow, FT, and temperature, TT, measurements)]
It is the ratio of the dead time to the process time constant that determines
the controllability of the loop (see Unit 4). Thus, in spite of all the sources
for time delays in the sampling and analysis, if the combination of the
analysis sample time and time delay is less than the process time constant,
a proportional-integral-derivative controller is indicated. Any of the
tuning methods of Units 2 and 4 can be used, but the IMC tuning rules
5-6. Summary
This unit presented some guidelines for selecting and tuning feedback
controllers for several common process variables. While flow control calls
for fast PI controllers with low gains, level and pressure control can be
achieved with simple proportional controllers with high or low gains,
depending on whether the objective is tight control or the smoothing of
flow disturbances. When PI controllers are used for level control, the
integral time should be long, on the order of one hour or longer. PID
controllers are commonly used for temperature and analyzer control.
EXERCISES
5-1. Briefly state the difference between tight level control and averaging level
control. In which of the two is it important to maintain the level at the set
point? Give an example of each.
5-2. What type of controller is recommended for flow control loops? Indicate
typical ranges for the gain and integral times.
5-3. What type of controller is indicated for tight level control? Indicate typical
gains for the controller.
5-4. What type of controller is indicated for averaging level control? Indicate
typical gains for the controller.
5-5. When a PI controller is used for averaging level control, what should the
integral time be? Would an increase in gain increase or decrease
oscillations?
5-6. Estimate the time constant of a temperature sensor weighing 0.03 kg, with
a specific heat of 23 kJ/kg-°C. The thermowell has a contact area of 0.012
m2, and the heat transfer coefficient is 0.6 kW/m2-°C.
5-7. Why are PID controllers commonly used for controlling temperature?
Learning Objectives — When you have completed this unit, you should be
able to:
102 Unit 6: Computer Feedback Control
Figure 6-1. Block Diagram of a Computer Feedback Control Loop Showing the Sampled Nature
of the Signals
The controller output is updated at each sample and held constant for one
sampling interval T. The sampling of the process
variable is done by the analog-to-digital converter (A/D) and multiplexer
(MUX), while the digital-to-analog converter (D/A) updates and holds the
controller output.
Ek = Rk - Ck (6-1)
where Ek = error, Rk = set point, and Ck = process variable, all in %T.O.,
and the subscript “k” stands for the kth sample or calculation of the
controller. The signs of the process variable and the set point are reversed
for a direct-acting controller. Alternatively, the controller gain is set to a
negative value.
Unit 2 established that there are two forms of the PID controller: the
parallel form, Eq. 2-9, and the series form, Eq. 2-10. Table 6-1 presents the
two corresponding forms of the discrete PID controller.
where

Bk = [αTD/(T + αTD)]Bk-1 - [TD/(T + αTD)](Ck - 2Ck-1 + Ck-2)

Series:

∆Mk = Kc'[Ek - Ek-1 + (T/TI')Ek]

where

Ek = Rk - Yk

Yk = [αTD'/(T + αTD')]Yk-1 + [T/(T + αTD')]Ck + [(α + 1)TD'/(T + αTD')](Ck - Ck-1)

Controller Output:

Mk = Mk-1 + ∆Mk

where
where
Rk = set point, %T.O.
Ck = process variable (measurement), %T.O.
Mk = controller output, %C.O.
Ek = error or set point deviation, %T.O.
α = derivative filter parameter
T = sampling interval, min
The PID controller formulas of Table 6-1 are designed to avoid undesirable
pulses on set point changes by having the derivative mode work on the
process variable Ck instead of on the error. The formulas also contain a
derivative filter, with time constant αTD (or αTD'), which is intended to
limit the magnitude of pulses on the controller output when the process
variable changes suddenly.
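The series algorithm of Table 6-1 translates directly into code. The sketch below follows the table's equations; the class name, the default bias, and the choice of initializing the filter at the first measurement are my own assumptions.

```python
class SeriesDiscretePID:
    """Series discrete PID of Table 6-1, derivative on the measurement."""

    def __init__(self, Kc, TI, TD, T, alpha=0.1, bias=50.0):
        self.Kc, self.TI, self.TD, self.T, self.alpha = Kc, TI, TD, T, alpha
        self.M = bias            # controller output Mk, %C.O.
        self.E_prev = 0.0        # E(k-1)
        self.Y_prev = None       # lead-lag output Y(k-1)
        self.C_prev = None       # measurement C(k-1)

    def update(self, R, C):
        """One controller calculation; R, C in %T.O. Returns Mk, %C.O."""
        if self.Y_prev is None:          # first sample: no derivative kick
            self.Y_prev, self.C_prev = C, C
        d = self.T + self.alpha * self.TD
        # Lead-lag (filtered derivative) unit acting on the measurement
        Y = (self.alpha * self.TD * self.Y_prev
             + self.T * C
             + (self.alpha + 1) * self.TD * (C - self.C_prev)) / d
        E = R - Y                        # error uses Y, not C
        dM = self.Kc * (E - self.E_prev + (self.T / self.TI) * E)
        self.M += dM                     # velocity form: Mk = Mk-1 + dMk
        self.E_prev, self.Y_prev, self.C_prev = E, Y, C
        return self.M
```

With TD = 0 the lead-lag reduces to Y = C and the algorithm becomes a PI controller; with α = 0 it reproduces the unfiltered derivative unit discussed with Figure 6-2.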
Pulses would otherwise appear on the controller output right after the set point is changed. These pulses are
completely avoided by the controller of Table 6-1 since the derivative
mode, acting on the process variable, does not “see” changes in set point.
The minus sign in the formula for the parallel form is used on the
assumption that the error is calculated as in Eq. 6-1 so that, for a direct-
acting controller, the proportional gain would be set to a negative number.
Most modern computer and microprocessor-based controllers provide the
option of having the derivative mode act on the error or on the process
variable. Breaking the “never say never” rule, I can say with confidence
that there is never a good reason for having the derivative act on the error.
In the formulas of Table 6-1 the filter parameter α has a very special
meaning. Its reciprocal, 1/α, is the amplification factor on the change of
the error at each sampling instant, and is also called the “dynamic gain
limit.” Notice that, if α were set to zero, the amplification factor on the
change in error would have no limit. For example, if the sampling interval
is one second (1/60 min) and the derivative time is one minute, the change
in error at each sample with α=0 would be multiplied by a factor of 60
(TD/T = 60). By setting the nonadjustable parameter α to a reasonable
value, say 0.1, the algorithm designer can assure that the change in error
cannot be amplified by a factor greater than 10, independent of the
sampling interval and the derivative time. The dynamic limit is also an
advantage for the control engineer because it allows him or her to set the
derivative time to any desired value without the danger of introducing
large undesirable pulses on the controller output.
The following example illustrates the response of the derivative unit with
and without the filter term.
Directly substituting both the values given and the process variable at
each sample into the series controller of Table 6-1 produces the results
summarized in the following table. The results for the “ideal” derivative
unit are calculated using a filter parameter of zero.
Figure 6-2. Response of Derivative Unit (P+D), with and without Filter, to a Ramp Input.
Notice that the unfiltered (ideal) derivative unit jumps to 30 at time 0 and
increments by 1 each sample. Both these responses are shown graphically
in Figure 6-2. The unfiltered derivative unit is leading the input by one
derivative time (30 s), while the derivative unit with the filter, after a brief
lag, also leads the error by one derivative time. In practice, the lag is too
small to significantly affect the performance of the controller.
• If the controller is the slave of a cascade control scheme (see Unit 7),
the proportional mode must act on the error. Otherwise, when the
main controller changes the set point of the slave, the slave would
respond only through its slow integral mode.
The bars around the error in Eq. 6-2 indicate the absolute value or
magnitude of the error. By using the absolute value of the error, the gain
increases when the error increases in either the positive or the negative
direction.
The nonlinear gain is normally used with averaging level controllers (see
Section 5-3) because it allows a wider variation of the level near the set
point while still preventing the tank from overflowing or running dry, as
illustrated in Figure 6-3. The nonlinear gain allows greater smoothing of
flow variations with a given tank, that is, makes the tank look bigger than
it is, as long as the flow varies near the middle of its range. Some computer
controllers provide the option of having a zero gain at zero error, a feature
that is desirable in some pH control schemes.
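A common form of the nonlinear gain is Kc = K0 + K1|E|. Assuming that form (the constants K0 and K1 here are illustrative, chosen so the output spans its full range for a 50 percent set point), the averaging level controller output can be sketched as:

```python
def nonlinear_level_output(level, set_point=50.0, bias=50.0, K0=0.0, K1=0.02):
    """Proportional output with error-dependent gain: M = bias + (K0 + K1|E|)E.

    The form Kc = K0 + K1*|E| is an assumed (common) version of the
    nonlinear gain. With K0 = 0 and K1 = 0.02 the valve is fully open
    at 100% level and fully closed at 0% level for a 50% set point,
    while the gain near the set point is very low.
    """
    E = level - set_point        # direct action: level up, outlet valve opens
    return bias + (K0 + K1 * abs(E)) * E
```

Setting K0 = 0 gives the "zero gain at zero error" option mentioned above; a small positive K0 restores some action near the set point.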
Figure 6-3. Controller Output versus Process Variable for an Averaging Level Controller with
Nonlinear Gain
To prevent the tank from overflowing or running dry, the valve must be
fully opened when the level is at 100 percent of range and closed when the
level is at 0 percent. Since the set point is 50 percent, either of these
requirements takes place when the magnitude of the error is 50 percent.
With the output bias of 50 percent, using the upper limit requirement in
Eq. 6-2, we get:
This section introduced the most common discrete controllers and the
options that their configurable nature makes possible. The next section
concerns the tuning of these controllers.
When the controller is tuned using the process parameters of gain, time
constant, and dead time that were estimated by the methods presented in
Unit 3, the effect of sampling is not included in the process model. This is
because the process model is obtained from a step test in controller output
(as we learned in Unit 3), and such a step will always take place at a
sampling instant and remains constant after that.
Moore and his coworkers developed a simple correction for the controller
tuning parameters to account for the effect of sampling.1 They pointed out
that when a continuous signal is sampled at regular intervals of time and
then reconstructed by holding the sampled values constant for each
sampling period the reconstructed signal is effectively delayed by
approximately one half the sampling interval (as shown in Figure 6-4).
Now, as Figure 6-1 shows, the digital-to-analog converter holds the output
of the digital controller constant between updates, thus adding one half
the sampling time to the dead time of the process components. To correct
for sampling, one half the sampling time is simply added to the dead time
obtained from the step response. The uncontrollability parameter is then
given by the following:
Pu = (t0 + T/2)/τ    (6-3)
Figure 6-4. Effective Delay of the Sample and Hold (DAC) Unit
where t0 is the process dead time and τ is the process time constant.

Let N = t0/T,  a1 = e^(-T/τ1),  a2 = e^(-T/τ2)

Tuning Formulas for the Parallel Controller

TI = T(a1 - 2a1a2 + a2)/[(1 - a1)(1 - a2)]

TD = Ta1a2/(a1 - 2a1a2 + a2)

Tuning Formulas for the Series Controller

Kc' = (1 - q)a1/{K(1 - a1)[1 + N(1 - q)]}

TI' = Ta1/(1 - a1)

TD' = Ta2/(1 - a2)

where

q = e^(-T/τc)    (6-4)
Setting q = 0 results in an upper limit for the controller gain. This value can
be used as a guide for the initial tuning of the controller. As is the case
with the tuning formulas presented in Unit 4, the upper limit of the
controller gain decreases with increasing process dead time, parameter N.
To tune the controller, the formulas of Table 6-2 require two process time
constants, τ1 and τ2. When only one time constant is available, the second
time constant τ2 is set to zero. This results in a PI controller because both a2
and the derivative time are zero.
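Collected into one function, the series formulas of Table 6-2 look as follows. This is a sketch, with the special case τ2 = 0 handled explicitly; the example values are chosen to be consistent with the steam-heater calculations later in the unit (T = 0.05 min, τ1 = 0.56 min, N = 3, q = 0.5).

```python
import math

def dahlin_series_tuning(K, tau1, tau2, t0, T, q):
    """Series PID tuning from the Table 6-2 formulas.

    K: process gain; tau1, tau2: time constants; t0: dead time;
    T: sample time (all times in the same units); q = exp(-T/tau_c).
    Returns (Kc, TI, TD). Setting tau2 = 0 gives a PI controller.
    """
    N = t0 / T                                   # dead time in samples
    a1 = math.exp(-T / tau1)
    a2 = math.exp(-T / tau2) if tau2 > 0 else 0.0
    Kc = (1 - q) * a1 / (K * (1 - a1) * (1 + N * (1 - q)))
    TI = T * a1 / (1 - a1)
    TD = T * a2 / (1 - a2) if a2 > 0 else 0.0
    return Kc, TI, TD

# Steam-heater-like numbers: Kc comes out near 2.1 %C.O./%T.O.
# and TI near 0.54 min (the text's hand calculation rounds to 2.2)
Kc, TI, TD = dahlin_series_tuning(1.0, 0.56, 0.0, 0.15, 0.05, 0.5)
```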
As mentioned earlier, the formulas of Table 6-2 are applicable to any value
of the process parameters and the sample time. In addition, with these
formulas the controller gain can be adjusted to obtain fast response with
reasonable variation in the controller output. The formulas are highly
recommended because they relate the integral and derivative times to the
process time constants, thus reducing the tuning procedure to the
adjustment of the controller gain. The following example illustrates the
application of the formulas of Table 6-2 to the temperature control of the steam
heater.
As the model has only one time constant, the derivative time resulting
from Table 6-2 is zero. That means that the controller becomes a PI
controller. The calculation of the tuning parameters is outlined in the
following table:
Sample time, s               1      2      4      8      16
Dead time, N                 10     5      3      1      0
Maximum Kc (q = 0),          3.0    2.7    2.0    1.9    1.6
  %C.O./%T.O.
Integral time, min           0.55   0.54   0.53   0.50   0.44
Notice that the maximum gain is lower and the integral time shorter as the
sampling interval is increased. This means that the loop is less controllable
at the longer sample times. On the other hand, it is not accurate to say that
the sampling interval should always be as short as possible. Recall that for
a sample time of one second the controller must be processed four times
more often than for a sample time of four seconds. This increases the
workload of the computer or microprocessor and thus reduces the number
of loops it can process.
Figure 6-5 shows that a point of diminishing returns can be reached when
selecting the sample time. The figure shows the heater temperature control
responses for a PI controller using the tuning parameters presented in the
preceding table for a step increase in process flow to the heater and
sampling intervals of 1, 2, and 4 seconds. It is evident that the reduction in
sampling interval from two seconds to one does not significantly improve
the response.
When the sample time is more than three or four times the dominant
process time constant, the process reaches steady state after each controller
output move before it is sampled again. This may happen because the
process is very fast or because the sensor is an analyzer with a long cycle
time. For such situations, the formulas of Table 6-2 result in a pure integral
controller:

∆Mk = KI Ek

where

KI = (1 - q)/{K[1 + N(1 - q)]}    (6-5)
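The integral gain formula KI = (1 - q)/{K[1 + N(1 - q)]} is easy to evaluate; a small sketch:

```python
def integral_gain(K, N, q):
    """Gain of the pure integral controller for a slow-sampled loop.

    K: process gain; N: dead time in samples; q = exp(-T/tau_c).
    """
    return (1 - q) / (K * (1 + N * (1 - q)))

# With q = 0: N = 0 gives 1/K (correct the full error in one sample);
# N = 1 gives 1/(2K) (spread the correction over two samples).
```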
Notice that for the case N = 0 and q = 0, the controller gain is the reciprocal
of the process gain. This result makes sense since a loop gain of 1.0 is what
is needed to reduce the error to zero in one sample if the process reaches
steady state during that interval. An interesting application of this is a
chromatographic analyzer sampling a fast process. Because it is in the
nature of such analyzers that a full cycle is required to separate the
mixture and analyze it, the composition is not available to the controller
until the end of the analysis cycle. This means that the process dead time is
approximately one sample, or N = 1. For q = 0, Eq. 6-5 gives a gain of
KI = 1/[K(1 + 1)] = 1/(2K), or one half the reciprocal of the process gain. This also
makes sense because when the controller takes action, it takes two
sampling periods to see the result of that action, so the formula says to
Figure 6-5. Response of Heater Temperature with PI Controller Sampled at 1, 2, and 4 Second
Intervals
spread the corrective action equally over two samples. The following
example illustrates what happens when the steam heater is controlled
with a slow-sampling controller.
Figure 6-6. Response of Heater Temperature with PI Controller Sampled at 4 and 32 Second
Intervals
It makes sense to ratio the sample time to the process time constant
because the relative change in the process output from one sample to the
next depends only on this ratio. That is, the relative change will be the
same for a process with a one-minute time constant sampled once every
five seconds as it is for a process with a ten-minute time constant sampled
every fifty seconds.
By definition, the loop gain is the product of the gains around the feedback
loop, KKc. You may use the value of this gain recommended by any of the
tuning methods, or alternatively the ultimate loop gain, to test the
sensitivity of controller performance to some parameter such as the
sampling frequency. This is because as the loop gain increases the effect of
disturbances on the process variable decreases. In Figure 6-7, the
maximum loop gain, which is calculated using the tuning formula from
Table 6-2, is plotted against the sample-time-to-time-constant ratio for
• When the dead time is greater than the time constant, longer sam-
ple times may be used because the performance of the loop is lim-
ited by the dead time and not the sample time. This can be verified
by observing the curve for t0/τ = 1 in Figure 6-7; the loop gain is
low and essentially independent of the sample time.
• When the dead time is less than one-tenth the time constant, and a
high gain is desired for the loop, a shorter sample time should be used.
By selecting the proper sample time for each loop, the control engineer can
increase the number of loops the process control system can handle
without experiencing deterioration of performance.
To this point, this module has clearly established that feedback controllers
cannot perform well when the process has a high ratio of dead time to
time constant. The total loop gain must be low for such processes, which
means that the deviations of the controlled variable from its set point
cannot be kept low in the presence of disturbances. One way to improve
the performance of the feedback controller for low controllability loops is
to design a controller that compensates explicitly for the process dead
time. This section presents two controllers that have been proposed to
compensate for dead time, the Smith Predictor and the Dahlin Controller.
Dead time compensation requires you to store and play back past values
of the controller output. Not until the advent of computer-based
controllers was the storage and playback of control signals possible.
Computer memory makes possible the storage and retrieval of past
sampled values.
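The storage-and-playback mechanism amounts to a small ring buffer of past controller outputs; a minimal sketch (the class and method names are assumptions for illustration):

```python
from collections import deque

class OutputHistory:
    """Keeps the last N controller outputs for dead time compensation."""

    def __init__(self, N, initial=50.0):
        # Pre-fill so that playback is defined from the very first sample
        self.buf = deque([initial] * N, maxlen=N)

    def push_and_recall(self, m_now):
        """Store the current output and return the one from N samples ago."""
        m_old = self.buf[0]
        self.buf.append(m_now)
        return m_old
```

Each sample the compensator stores the new output and recalls the one issued N samples earlier, which is exactly the signal needed to cancel the dead time in the model feedback path.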
transfer function. This is done so the model output, after being corrected
for model error and disturbance effects, can be fed to the feedback
controller in such a way that the process dead time is bypassed, hence
compensating for dead time.
Here, ∆Mk can be computed by either the series form or the parallel
controller of Table 6-1. The last term in the calculation of the output
provides the dead time compensation. Notice that the term vanishes when
there is no dead time, N = 0. The actual controller is tuned with the
formulas from Table 6-2, except for the controller gain, which is given by
the following:
Parallel:

Kc = (1 - q)(a1 - 2a1a2 + a2)/[K(1 - a1)(1 - a2)]    (6-7)

Series:

Kc' = (1 - q)a1/[K(1 - a1)]    (6-8)
Using the formulas of Table 6-2 for the series controller, the tuning
parameters are as follows:

a1 = e(-0.05/0.56) = 0.915        a2 = 0

Without dead time compensation:

Kc = (1 - 0.5)(0.915/0.085)/{(1)[1 + 3(1 - 0.5)]} = 2.2 %C.O./%T.O.

With dead time compensation:

Kc = (1 - 0.5)(0.915/0.085)/1 = 5.4 %C.O./%T.O.

TI = 0.54 min
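As a cross-check, the example's arithmetic can be reproduced in a few lines; the numbers are taken from the example itself, and the 1 + N(1 - q) factor relating the two gains follows the example's own calculation:

```python
import math

# Data from the steam heater example in the text
T, tau = 0.05, 0.56   # sample time and time constant, minutes
q = 0.5               # Dahlin tuning parameter
K = 1.0               # process gain, %T.O./%C.O.
N = 3                 # samples of dead time

a1 = math.exp(-T / tau)                    # discrete pole, about 0.915

# Series controller gain with dead time compensation, Eq. 6-8
kc_comp = (1 - q) * a1 / (K * (1 - a1))    # about 5.4 %C.O./%T.O.

# Without compensation the gain carries the extra 1 + N(1 - q) divisor,
# as in the example's first calculation
kc_plain = kc_comp / (1 + N * (1 - q))     # about 2.2 %C.O./%T.O.
```

To rounding, these reproduce the 5.4 and 2.2 %C.O./%T.O. values of the example.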
Figure 6-9. Response of Temperature Controller for Steam Heater (a) without Dead Time Compensation, and (b) with Dead Time Compensation
6-5. Summary
EXERCISES
6-3. How and why would you eliminate “proportional kick” on set point
changes? Will the process variable approach its set point faster or slower
when proportional kick is avoided? When must proportional kick be
allowed?
6-5. What is the advantage of the nonlinear proportional gain in averaging level
control situations? In such a case, what must the nonlinear gain be for the
gain to be 0.25%C.O./%T.O. at zero error and still have the controller
output reach its limits when the level reaches its limits (0 and 100%)?
Assume a level set point of 50%T.O. and an output bias of 50%C.O.
6-7. Repeat exercise 6-6, but for a PID controller with dead time compensation.
Specify also how many samples of dead time compensation, N, must be used
in each case.
6-8. What is the basic idea behind the Smith Predictor? What is its major
disadvantage? How does the Dahlin Controller with dead time
compensation overcome the disadvantage of the Smith Predictor?
REFERENCES
Learning Objectives — When you have completed this unit, you should be
able to:
Figure 7-1 shows a typical cascade control system for controlling the
temperature in a jacketed exothermic chemical reactor. The control
objective is to control the temperature in the reactor, but instead of having
the reactor temperature controller, TC 1, directly manipulate the jacket
coolant valve, the jacket temperature is measured and controlled by a
different controller, TC 2, which is the one that manipulates the valve. The
output of the reactor temperature controller, TC 1 (or “master” controller)
is connected or cascaded to the set point of the jacket temperature
controller, TC 2 (or “slave” controller). Notice that only the reactor
temperature set point is maintained at the operator set value. The jacket
temperature set point changes to whatever value is required to maintain
the reactor temperature at its set point. A block diagram of the reactor
cascade control strategy, shown in Figure 7-2, clearly shows that the slave
control loop is inside the master control loop.
• Any disturbances that affect the slave variable are detected and
compensated for by the slave controller before they have time to
affect the primary control variable. Examples of such disturbances
for the reactor of Figure 7-1 are the coolant inlet temperature and
header pressure.
Unit 7: Tuning Cascade Control Systems

Figure 7-1. Cascade Control of Temperature in a Jacketed Exothermic Chemical Reactor (reactor temperature controller TC-1 cascaded to jacket temperature controller TC-2)
The success of cascade control requires one other condition besides the
inner loop being faster than the outer loop: the sensor of the inner loop
must be fast and reliable. One would not consider, for example, cascading
a temperature controller to a chromatographic analyzer controller. On the
other hand, the sensor for the inner loop does not have to be accurate, only
repeatable, because the integral mode in the master controller
compensates for errors in the measurement of the slave variable. In other
words, it is acceptable for the inner loop sensor to be wrong as long as it is
consistently wrong.
Finally, it should be pointed out that cascade control would not be able to
improve the performance of loops that are already very controllable, as,
for example, liquid level and gas pressure control loops. Similarly, cascade
control cannot improve the performance of loops when the controlled
variable does not need to be tightly maintained around its set point, for
example, in averaging level control. When a level controller is cascaded to
a flow controller it is usually justified on the grounds that it provides
greater flexibility in the operation of the process, not because of improved
control performance.
Now that we have looked at the reasons and requirements for using
cascade control, the following sections will consider how to select the
controller modes for cascade control systems and how to tune them.
In a cascade control system the master controller has the same function as
the controller in a single feedback control loop: to maintain the primary
control variable at its set point. It follows that the selection of controller
modes for the master controller should follow the same guidelines
presented for a single controller in Unit 5. On the other hand, because the
function of the slave controller is not the same as that of the master or
single controller, it requires different design guidelines.
Unlike the master or single feedback controller, the slave controller must
constantly respond to changes in set point, which it must follow as quickly
as possible with a small overshoot and decay ratio. It is also desirable that
the slave controller transmit changes in its set point to its output as
quickly as possible and, if possible, to amplify them because the output of
the slave controller is the one that manipulates the final control element. If
the slave controller is to speed up the response of the master controller, it
must transmit changes in the master controller output (slave set point) to
the final control element at least as fast as if it were not there. It is evident
then that the slave controller must have the following characteristics:
If the gain of the slave controller is greater than one, changes in the master
controller output result in higher immediate changes in the final control
element than is the case when a single feedback loop is used. This
amplification results in the master loop having a faster response.
Whether you should use integral and derivative modes on the slave
controller will depend on the application. Recall from previous units that
adding integral mode results in a reduction of the proportional gain, while
adding derivative mode results in an increase in the proportional gain.
This may suggest that all slave controllers should be proportional-
derivative (PD) controllers, but this is generally not the case.
would require the master controller to take corrective action and therefore
introduce a deviation of the primary controlled variable from its set point.
The use of a fast-acting integral mode on the slave controller would
eliminate both the need for corrective action on the part of the master
controller and the deviation in the primary controlled variable.
The integral mode should not be used in those slave loops in which the
gain is limited by stability. It should also be avoided in those slave loops in
which the disturbances into the inner loop do not cause large offsets in the
slave controller. The jacket temperature controller of the reactor in Figure
7-1 is a typical example of a slave loop that does not require integral mode.
A common rule states that derivative mode should not be used in both the
slave and master controllers. Moreover, since derivative mode would do
the most good on the less controllable loop, which is the outer loop, this
rule essentially comes down to stating that derivative mode should never
be used in the slave controller. There are two reasons for this rule. First,
having all three modes in both the master and slave controller results in
six tuning parameters, which, without the proper guidelines, makes the
tuning task more difficult. Second, it is undesirable to put two derivative
units in series in the loop. However, both of these reasons can be argued
away as follows:
• If you have the derivative of the slave controller act on the process
variable instead of on the error, it will not be in series with the
derivative unit in the master controller.
The controllers in a cascade control system must be tuned from the inside
out. That is, the innermost loop must be tuned first, then the loop around
it, and so on. The block diagram of Figure 7-2 shows why this is so: the
inner loop is part of the process of the outer loop.
Each loop in a cascade system must be tuned tighter and faster than the
loop around it. Otherwise, the set point of the slave loop would vary more
than its measured variable, which would result in poorer control of the
master variable. Ideally, the slave variable should follow its set point as
quickly as possible, but with little overshoot and few oscillations. Quarter-
decay ratio response is not recommended for the slave controller because
it overshoots set point changes by 50 percent. The ideal overshoot for the
slave variable to a set point change is 5 percent.
After the inner loop is tuned, the master loop can be tuned to follow any
desired performance criteria by any of the methods discussed in Units 2, 4,
5, and 6. Since what is special on cascade systems is the tuning of the slave
loop, the next three sections will discuss some typical slave loops, namely,
flow, temperature, and pressure loops. Keep in mind, however, that any
variable, including composition, can be used as a slave variable provided
it can be measured fast and reliably.
Figure 7-3. Flow as the Slave Variable in a Cascade Control Scheme (Distillation Column Reflux)
The derivative unit must act on the slave’s measured variable only, not on
the error, in order to prevent the connection of two derivative units in
series in the loop.
Figure 7-4. Pressure as the Slave Variable in a Cascade Control Scheme (Distillation Column Reboiler)

Another difficulty with pressure as a slave variable is that it can move out
of the transmitter range and thus get out of control. For example, in the
scheme of Figure 7-4, if at low production rate the reboiler temperature
drops below 100°C (212°F), the pressure in the steam chest will drop below
atmospheric pressure, moving out of the transmitter range, unless the
pressure transmitter is calibrated to read negative pressures (vacuum).
When both the master and the slave controllers are carried out on the
computer, the inner loop is usually processed at a higher frequency than
the outer loop. This is so the slave controller has time to respond to a set
point change from the master controller before the next change takes
place. Recall that the inner loop should respond faster than the outer loop.
To obtain the process parameters, perform a step test in coolant flow with
the controllers on manual, and record both the reactor temperature and
the jacket temperature. The following results are obtained from the
response of the reactor temperature:
The following results are obtained from the response of the jacket
temperature:
Use the Ziegler-Nichols QDR tuning formulas in Table 4-1 to tune the
single reactor temperature series PID controller:
Use the parameters from the response of the jacket temperature to tune the
jacket temperature controller in the cascade scheme, TC-2. To gain good
response to set point changes from the master controller, use the IMC rules
presented in Section 4-2. Since the dead time is zero, a PI controller is
indicated, and its gain can be as high as is desired. To keep it reasonable,
use the following parameters:
Once you have tuned the jacket temperature controller TC-2, switch it to
automatic, and apply a step test in its set point with the reactor
temperature in manual. Record the response of the reactor temperature to
obtain the following results:
When you compare the results of the response to the step in coolant flow
you see that the reactor temperature loop has both a shorter time constant
and a shorter dead time when the jacket temperature controller is used.
Recall, however, that these parameters depend on the tuning of the jacket
temperature controller. For example, if you used a higher gain for TC-2,
the time parameters would be shorter still.
The reactor temperature controller, TC-1, is now tuned for the preceding
parameters:
Figure 7-5. Reactor Temperature Response to Step Increase of 10°F in Coolant Inlet
Temperature. (a) Single Temperature Controller. (b) Reactor Temperature Cascaded to Jacket
Temperature
Figure 7-6 shows that the cascade control scheme also improves the
response of the reactor temperature for a step increase in feed flow to the
reactor. However, the improvement in performance is not as dramatic
because the feed flow has a direct effect on the reactor temperature, and
the jacket temperature controller cannot correct it in time. The
improvement in control is due to the faster response of the reactor
temperature to controller output in the cascade scheme. Notice the inverse
response of the temperature to the feed flow. This is because, as the
reactants are colder than the reactor, the increase in reactants flow causes
an immediate drop in temperature. At the same time the increase in flow
causes the reactants’ concentration to increase, which eventually results in
an increase in reaction rate and consequently in temperature.
Figure 7-6. Reactor Temperature Response to Step Increase of 10% in Feed Flow. (a) Single
Temperature Controller. (b) Reactor Temperature Cascaded to Jacket Temperature
Figure 7-7. Cascade Control of Reactor Inlet Composition and Pressure in the Ammonia Synthesis Loop
Unit 7: Tuning Cascade Control Systems 139
Example 7-2 illustrates our earlier point that the slave measurement does
not have to be accurate but does have to be fast. Errors in the slave
measurement are corrected by the integral mode of the master controller.
On the other hand, the measurement of the master controller can be slow,
but it must be accurate. Disturbances in the reforming process are handled
quickly by the slave controller before they have a chance to affect the
primary controlled variable.
Figure 7-7 also shows a pressure-to-flow cascade loop for controlling the
pressure in the synthesis loop. In this cascade, the master controller is the
pressure controller (PC 4), and the slave controller is the purge flow
controller (FC 4). The purge is a small stream removed from the loop to
prevent the accumulation of inert gases (argon and methane) and the
excess nitrogen.
Although analog controllers could carry out both cascade control loops of
Figure 7-7, computer control offers this scheme an unexpected virtue:
patience. For example, in one actual installation where the pressure
control scheme was carried out with analog controllers, the master
controller was operated on manual because it was swinging the purge
flow all over its range. This is because the process for this loop has a time
constant of about one hour. On the same installation a digital controller
with a sample time of five minutes and an integral time of forty-five
minutes was able to maintain the pressure at its optimum set point.
• The slave controller sees a jacket temperature below its set point
(110°C) and calls for the cooling water valve to remain closed.
• The master controller also sees its temperature below set point and
calls for an increase in the jacket temperature set point above the
current 110°C value.
Most computer and DCS controllers detect that the slave controller output
is limited or “clamped” at the closed position. They then prevent the
master controller from increasing its output because this would only result
in a call to close the coolant valve, which is already closed. Does this logic
prevent the cascade control system from winding up? Let us see what
happens next.
Notice that a gap has been created between the set point of the slave
controller, frozen at 110°C, and its measured temperature. As the reactor
temperature crosses its set point of 55°C, the master controller starts
decreasing the set point of the slave controller to bring the temperature
down. However, the coolant valve will not open until the set point of the
slave controller drops below its measured temperature, that is, until the
gap between the slave controller’s set point and its measured temperature
is overcome. Since the set point of the slave controller will change at a rate
controlled by the integral time of the master controller, it takes a long time
for the coolant valve to start to open. As a result, the reactor temperature
overshoots its set point badly, which is the most common symptom of
reset windup. By the time the coolant valve starts to open, the reactor
temperature has reached its trip point of 60°C, and the entire system must
be shut down by dumping the reactor contents into a pool of water below.
As you can see, in this case the saturation or “clamp limit” detection
system could not avoid reset windup.
the master controller calls for a lower jacket temperature than its current
value, and the slave controller responds by opening the coolant valve.
Reset Feedback
A more elegant cascade windup protection method, one that does not
require any logic, is to use a “reset feedback” signal on the control
algorithm. In the cascade scheme, the reset feedback signal is the
measured variable of the slave loop, expressed in percentage of
transmitter range. The reset feedback signal is used in the calculation of
the controller output by the velocity algorithm as follows:
Mk = bk + ∆Mk (7-1)
where
Mk = the output of the master controller and set point of the slave
controller
By using this formula to update the set point of the slave controller every
time the master controller is processed, there will be no possibility of
windup because the master controller will call for an increase or decrease
of the slave variable from its current value, not from the previous set
point. To use the reset feedback approach the slave loop must be processed
more frequently than the master loop, and the slave controller must have
integral mode. Otherwise, any offset in the slave controller would cause
an offset in the master controller, even if the master controller has integral
mode.
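A minimal sketch of the reset feedback update of Eq. 7-1 follows. The velocity-form PI increment is a generic stand-in (the text's Table 6-1 algorithms are not reproduced here), and all names and numbers are illustrative:

```python
def pi_increment(kc, ti, t_sample, e_k, e_km1):
    """Generic velocity-form PI increment (assumed stand-in for Table 6-1)."""
    return kc * ((e_k - e_km1) + (t_sample / ti) * e_k)

def master_output(b_k, delta_m_k):
    """Eq. 7-1: Mk = bk + dMk.

    b_k       -- reset feedback signal: measured slave variable, % of range
    delta_m_k -- increment computed by the master controller this scan
    Returns the new slave set point.
    """
    return b_k + delta_m_k

# With the slave measurement clamped (coolant valve closed) at 20% and a
# constant master error, the slave set point stays one increment away from
# the measurement each scan instead of winding up:
sp = master_output(20.0, pi_increment(1.0, 10.0, 1.0, 5.0, 5.0))  # -> 20.5
```

Because the base of the update is the measured slave variable rather than the previous set point, a clamped slave loop cannot create the large set-point-to-measurement gap that causes windup.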
7-5. Summary
This unit discussed the reasons for using cascade control, how to select
modes for the slave controller, and the procedure for tuning cascade
control systems. It also looked at cascade windup and ways to protect
against it. Cascade control has proliferated in computer control
installations because there is essentially no cost for the additional slave
controllers. One transmitter and one multiplexer input channel for each
slave loop represent the only additional cost in a computer control system.
EXERCISES
7-3. Is the tuning and selection of modes different for the master controller in a
cascade control system than for the controller in a simple feedback control
loop? Explain.
7-4. What is different about the tuning of the slave controller in a cascade
control system? When should it not have integral mode? If the slave is to
have derivative mode, should it operate on the process variable or the error?
7-5. In what order must the controllers in a cascade control system be tuned?
Why?
7-6. What are the two major difficulties entailed in using temperature as the
process variable of the slave controller in a cascade control system? How
can they be handled?
7-7. Why is pressure a good variable to use as the slave variable in cascade
control? What are the two major difficulties encountered when using
pressure as the slave variable?
7-8. What is the relationship between the processing frequencies of the master
and slave controllers in a computer cascade control system?
7-9. How can reset windup occur in a cascade control system? How can it be
avoided?
Unit 8: Feedforward and Ratio Control

UNIT 8
Learning Objectives — When you have completed this unit, you should be
able to:
Unit 4 showed that some feedback loops are more controllable than others
and that the parameter that measures the uncontrollability of a feedback
loop is the ratio of the dead time to the time constant of the process in the
loop. When this ratio is high, on the order of one or greater, feedback
control cannot prevent disturbances from causing the controlled variable
to deviate substantially from its set point. This is when the strategies of
feedforward and ratio control can have the greatest impact on improving
control performance.
These problems are significant in process systems because of the long time
delays involved, sometimes hours in length. The remedy to these
problems is feedforward control.
Figure 8-2 shows the block diagram for pure feedforward control. This
technique consists of measuring the disturbance U instead of the
controlled variable. Corrective action begins as soon as the disturbance
enters the system and can, in theory, prevent any deviation of the
controlled variable from its set point. However, pure feedforward control
requires that you have an exact model of the process and its dynamics as
well as exact compensation for all possible disturbances. The “set point
element” of Figure 8-2 provides for calibrated adjustment of the set point
and seldom includes any dynamic compensation.
Figure 8-2. Block Diagram of Pure Feedforward Control
Feedforward-Feedback Control
(Figure: block diagram of feedforward control with feedback trim; the feedforward element of Figure 8-2 is combined with a conventional feedback controller.)
Ratio Control
(Figure: ratio control of a steam heater; a ratio controller RC sets the steam flow controller set point in proportion to the measured process fluid flow.)
drop across the control valve. By maintaining a constant ratio when the
operator or another controller changes the process flow, the outlet process
temperature is kept constant, as long as the steam latent heat and process
inlet temperature remain constant.
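The ratio station described above reduces to a single multiplication; the function name and the numbers in the example are illustrative:

```python
def ratio_station(wild_flow, ratio):
    """Set point for the manipulated-stream flow controller.

    wild_flow -- measured flow of the uncontrolled ("wild") stream
    ratio     -- operator-entered ratio of manipulated flow to wild flow
    """
    return ratio * wild_flow

# Example: hold 0.08 units of steam flow per unit of process fluid flow.
steam_setpoint = ratio_station(1000.0, 0.08)  # -> 80.0
```

When the operator or another controller changes the wild flow, the manipulated-stream set point tracks it immediately, which is what keeps the outlet temperature constant in the steam heater example.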
(Figure: ratio control of two streams; the wild stream flow measurement is multiplied by the entered ratio (B/A)set in computing relay FY to produce the set point of the manipulated stream flow controller.)
The value of M that is required to keep C equal to the set point R is given
by the following:
M = (1/G1)R - (G2/G1)U        (8-2)
This is the design equation for the feedforward controller that has the set
point R and disturbance U as inputs and the manipulated variable M as
output. Eq. 8-2 provides the design formulas for both the set point and
feedforward elements of Figure 8-2. The design formula for the set point
element is as follows:
Gs = 1/G1        (8-3)
GF = -G2/G1        (8-4)
with
Gain = -K2/K1        (8-6)

Lead-Lag = (Lead of τ1)/(Lag of τ2)        (8-7)
where
The dead time compensator of Eq. 8-8 can only be realized when the dead
time between the disturbance and the controlled variable is longer than
the dead time between the manipulated variable and the controlled
variable. Otherwise, the dead time compensator would call for the
feedforward correction to start before the disturbance takes place, which is
obviously not possible.
Of the three terms of the feedforward controller shown in Eq. 8-5 the gain
is always required, and the dynamic compensators are optional. When
only the gain is used, the feedforward controller is called a “static”
compensator.
Gain Adjustment
You can adjust the feedforward gain with the feedback controller on
manual or automatic. If you do it with the feedback controller on manual
and when the gain is not correct, the controlled variable will deviate from
its set point after a sustained disturbance input. You can then adjust the
gain until the controlled variable is at the set point again. Because of
process nonlinearities, the required feedforward gain may change with
variations in operating conditions. Thus, it may not be possible to achieve
exact compensation with a simple linear controller.
The one thing to remember when tuning the feedforward gain is that you
will have to wait until the system reaches steady state before making the
next adjustment.
Figure 8-6 shows the response of the lead-lag unit to a step change in its
input for two scenarios: the lead being longer than the lag and the lag
being longer than the lead, assuming in each case that the gain is unity.
The initial change in the output of the lead-lag unit is always equal to the
ratio of the lead to the lag. As a result, there is an initial overcorrection
when the lead is longer than the lag, and a partial correction when the lag
is longer than the lead. In either case, the output approaches the steady-
state correction exponentially, at a rate determined by the lag time
constant.
Figure 8-7 shows the response of the lead-lag unit to a ramp input, both for
the lead-longer-than-the-lag scenario and for the lag-longer-than-the-lead
scenario, assuming unity gain. The figure shows where the names lead and
lag come from: after a transient period, the output of the lead-lag unit
either leads the input ramp by the difference between the lead and the lag
or lags it by the difference between the lag and the lead. The ramp is more
typical than the step of the inputs provided by the disturbances in a real
process. The ramp can also approximate the rising and dropping portions of
slow sinusoidal disturbances.
When you keep the responses to step and ramp inputs in mind, tuning the
lead-lag unit becomes a simple procedure. First, decide by how much you
should lead or lag the feedforward correction to the disturbance; this fixes
the difference between the lead and the lag. Then select the ratio of the
lead to the lag based on how much you want to amplify or attenuate
sudden changes in the disturbance inputs. For example, suppose you
want to lead the disturbance by one minute. A lead of 1.1 minutes and a
lag of 0.1 minutes gives an amplification factor of 1.1/0.1=11, while a lead
of 3 minutes and a lag of 2 minutes gives an amplification factor of only
3/2=1.5. If the disturbance is noisy, for example, a flow, the second choice
is preferable because it results in less amplification of the noise.
Yk = Yk-1 + (1 - a)(Xk-1 - Yk-1) + (τLD/τLG)(Xk - Xk-1)        (8-9)

where Yk and Xk are the output and input of the lead-lag unit at the kth
sample, τLD and τLG are the lead and lag times, and a = e(-T/τLG), where T
is the sample time. Eq. 8-9 is for unity gain. If the gain is different
from unity, it can be applied to the signal before or after the lead-lag
calculation.
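A minimal implementation of Eq. 8-9 follows, with the filter constant a taken as e^(-T/τLG) (an assumption consistent with first-order lag discretization) and unity gain; the class name is illustrative:

```python
import math

class LeadLag:
    """Discrete lead-lag unit of Eq. 8-9, unity gain."""

    def __init__(self, lead, lag, t_sample, y0=0.0, x0=0.0):
        self.ratio = lead / lag                # tau_LD / tau_LG
        self.a = math.exp(-t_sample / lag)     # assumed: a = e^(-T/tau_LG)
        self.y_prev, self.x_prev = y0, x0

    def update(self, x_k):
        """Advance one sample: return Y_k for the new input X_k."""
        y_k = (self.y_prev
               + (1 - self.a) * (self.x_prev - self.y_prev)
               + self.ratio * (x_k - self.x_prev))
        self.y_prev, self.x_prev = y_k, x_k
        return y_k
```

A unit step into a unit with a lead of 2 minutes and a lag of 1 minute jumps immediately to the lead/lag ratio of 2 and then decays exponentially to 1, matching the step response described for Figure 8-6.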
Yk = Xk-N (8-10)
Figure 8-8. Response of Dead Time Compensator: (a) to a Step, (b) to a Ramp
The dead time compensator is easy to tune because it only has one
dynamic parameter, the number of samples of delay N.
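Eq. 8-10 amounts to an N-sample delay line. A minimal sketch follows; seeding the buffer with the initial input value is an assumption so the output is defined at start-up:

```python
from collections import deque

class DeadTimeCompensator:
    """Eq. 8-10: y_k = x_(k-N), an N-sample delay of the input signal."""

    def __init__(self, n_samples, x0=0.0):
        # Seed with the initial input so the output is defined from the start.
        self.past = deque([x0] * n_samples, maxlen=n_samples + 1)

    def update(self, x_k):
        """Store the new input and return the input from N samples ago."""
        self.past.append(x_k)
        return self.past[0]
```

Tuning is just the choice of N, the number of samples of delay.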
Before you apply dead time compensation you must ensure that the dead
time does not delay the action in a feedback control loop. Recall that dead
time always makes a feedback control loop less controllable. The reason it
can be used in feedforward control is that the corrective action always
goes forward; that is, no loop is involved.
the set point of the slave controller (for example, the flow of the
manipulated stream) instead of the valve position.
output signals exit at the bottom (or right). It is at this point that
you must decide on implementation details. These will largely
depend on the equipment used. A good design should be able to
continue to operate safely when some of its input measurements
fail, a characteristic of the design known as “graceful
degradation.”
where
QL = (1 - η)FHv (8-13)
Fset = (C/Hvη)(Toset - Ti)W        (8-14)
calculation, so the operator only has to enter one set point. This is
an important design requirement.
All of the unknowns of the model have been lumped into a single
coefficient, C/Hvη, and it would seem natural for the feedback
trim controller to adjust this coefficient to correct for variations in
the physical properties and heater efficiency. However, these
parameters are not expected to vary much, and it would not be
desirable for the feedback trim controller to control by adjusting a
term that is not expected to vary. You can create a better control
system structure if you make the feedback controller output
adjust the set point of the feedforward controller or, equivalently,
the product of the unknown coefficient and the set point. This is
done as follows:
Fset = [m - (C/Hvη)Ti]W        (8-15)

where

m = (C/Hvη)Toset = output of feedback controller
C/Hvη = F/[W(To - Ti)]        (8-16)
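Eqs. 8-15 and 8-16 can be sketched together; the operating numbers used to initialize the lumped coefficient below are illustrative, not from the text:

```python
def feedforward_setpoint(m, t_in, w, coeff):
    """Eq. 8-15: steam flow set point, Fset = [m - coeff * Ti] * W.

    m     -- output of the feedback trim controller
             (equals coeff * outlet temperature set point at steady state)
    t_in  -- measured inlet temperature
    w     -- measured process flow
    coeff -- lumped coefficient C/(Hv * eta), initialized from Eq. 8-16
    """
    return (m - coeff * t_in) * w

# Initialize the lumped coefficient from one steady-state data set (Eq. 8-16).
# These operating numbers are illustrative: steam flow 800, process flow
# 10000, outlet 150 deg, inlet 70 deg.
f_ss, w_ss, t_out_ss, t_in_ss = 800.0, 10000.0, 150.0, 70.0
coeff = f_ss / (w_ss * (t_out_ss - t_in_ss))

# With m set for the same outlet target, the formula reproduces the steam
# flow of the steady state used for initialization.
m0 = coeff * t_out_ss
f_set = feedforward_setpoint(m0, t_in_ss, w_ss, coeff)
```

Note that a doubling of the process flow W immediately doubles the steam flow set point, which is the static feedforward action; the feedback trim adjusts m, not the lumped coefficient.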
The following example illustrates how to tune the lead-lag unit for the
feedforward controller we have just designed.
Example 8-2. Tuning of Lead-lag Units. Tune the lead-lag units for the
steam heater feedforward controller of the preceding example. Figure 8-11
compares the responses of the outlet temperature to a change in process
flow with (a) a well-tuned feedback controller, (b) a static feedforward
controller, and (c) a feedforward controller with lead-lag compensation.
Notice that with static compensation the temperature drops even though
the steam flow is immediately increased in proportion to the process flow.
It is evident from the graph in Figure 8-11 that the steam needs to lead the
process flow because the simultaneous action still allows the variable to
deviate in the same direction as when feedforward control is not used.
Curve (c) in Figure 8-11 uses a lead of two minutes and a lag of one minute
for a net lead of one minute. As the process flow is expected to be a noisy
signal, these values limit the amplification of the noise to a factor of two.
With this tuning, the lead-lag unit reduces the deviation of the
temperature to about one half of that obtained with the static compensator.
thus the inlet temperature signal needs a lag. Curve (c) of Figure 8-12
shows the response when a lag of one minute and zero lead are installed
on the inlet temperature signal. In this case, you could also have tried
dead time compensation since the dead time to the inlet temperature—the
disturbance—is longer than the dead time to the steam flow—the
manipulated variable.
Figure 8-11. Responses to Step Change in Process Flow to Steam Heater. (a) Feedback
Control, (b) Static Feedforward Control, and (c) Feedforward Control with Lead-Lag
Compensation
Figure 8-12. Responses to Step Change in Inlet Temperature to Steam Heater. (a) Feedback
Control, (b) Static Feedforward Control, and (c) Feedforward Control with Lead-Lag
Compensation
8-5. Summary
EXERCISES
8-1. Why isn't it possible to have perfect control—that is, the controlled variable
always equal to the set point—using feedback control alone? Is perfect
control possible with feedforward control?
8-2. What are the main requirements of feedforward control? What are the
advantages of feedforward control with feedback trim over pure feedforward
control?
8-3. What is ratio control? What is the control objective of the air-to-natural gas
ratio controller in the control system sketched in Figure 7-7 for the
ammonia process? Which are the measured disturbance and the
manipulated variable for that ratio controller?
8-7. Refer to the furnace shown in the following figure. Design a feedforward
controller to compensate for changes in process flow, inlet temperature, and
supplementary fuel flow in the furnace’s outlet temperature control.
Explicitly discuss each of the eight steps of the procedure for designing
nonlinear feedforward compensation outlined in Section 8-4.
(Figure for Exercise 8-7: furnace with flow and temperature transmitters on the process stream, an outlet temperature transmitter, a main fuel flow control loop, and an auxiliary fuel flow transmitter.)
REFERENCES
Learning Objectives — When you have completed this unit, you should be
able to:
Unit 9: Multivariable Control Systems
Figure 9-1. Control of Composition and Flow in a Blending Process

Figure 9-2. Block Diagram of a Two-by-Two Multivariable Control System

Blocks G11 and G21 represent the effects of manipulated variable M1 on
controlled variables C1 and
C2, while G12 and G22 are the corresponding effects of manipulated
variable M2. The two controllers, GC1 and GC2, act on their respective
errors, E1 and E2, to produce the two manipulated variables. Signals R1
and R2 represent the set points of the loops. In the diagram of Figure 9-2,
each of the four process blocks includes the gains and dynamics of the
corresponding path through the process.
To look at the effect of interaction assume that the gains of all four process
blocks are positive. That is, an increase in each manipulated variable
results in an increase in each of the controlled variables. Suppose then that
at a certain point a step change in manipulated variable M1 takes place
with both loops on “manual” (opened). Figure 9-3 shows the responses of
both controlled variables, C1 and C2, where the time of the step change is
marked as point “a”. Now suppose that at time “b” control loop 2 is closed
(switched to “automatic”) and that it has integral or reset mode.
Manipulated variable M2 will decrease until controlled variable C2 comes
back down to its original value, which is assumed to be its set point.
Through block G12, the decrease in M2 also causes a decrease in controlled
variable C1, so that the net change in C1 is smaller than the initial change.
Notice that this initial change is the only change that would take place if
there were no interaction, or if controller 2 were kept on manual. The
difference between the initial change and the net change in C1 is the effect
of interaction. It depends on the effect that M1 has on C2 (G21), the effect
that M2 has on C2 (G22, which determines the necessary corrective action
on M2), and the effect that M2 has on C1 (G12). Notice also that, provided
controller 2 has integral mode, the steady-state effect of interaction
depends only on the process gains, not on the controller tuning.
Figure 9-3. Effect of Interaction on the Response of the Controlled and Manipulated Variables
I invite you to verify that a step in M2, followed by closing control loop 1,
has the same effect on C2, at least qualitatively, as the effect just observed
on C1. It will be shown shortly that the relative effect of interaction for
control loop 2 and control loop 1 is quantitatively the same.
In the case just analyzed, all four process gains were assumed to be
positive (direct actions). The effect of interaction was in the direction
opposite the direct (initial) effect of the step change, which resulted in a
net change smaller than the initial change. This situation, in which the two
loops “fight each other,” is known as “negative” interaction. You can easily
verify that the interaction would also be negative if any two of the process
transfer functions had positive gains and the other two had negative
gains. Notice that it is possible for the effect of interaction to be greater
than the initial effect, in which case the direction of the net change will be
opposite that of the initial change. Here we could say that “the wrong loop
wins the fight,” a situation that, we will soon see, is caused by incorrect
pairing of the loops.
If one of the four process gains had a sign opposite that of the other three,
the net change would be greater than the initial change, as you can also
verify. This is the case of “positive” interaction, when the two loops “help
each other.” Positive interaction is usually easier to handle than negative
interaction because the possibility that inverse response (i.e., the
controlled variable moving in the wrong direction right after a change) or
open-loop overshoot will occur exists only when the process exhibits
negative interaction.
In the following two sections, 9-2 and 9-3, we look at two ways to
approach the problem of loop interaction:
1. By pairing the controlled and manipulated variables so as to
minimize the effect of interaction between the loops.
2. By combining the controller output signals through decouplers
so as to eliminate the interaction between the loops.
Usually, the first step in the design of a control system for a process is
selecting the control loops, that is, selecting those variables that must be
controlled and those that are to be manipulated to control them. This
pairing task has been traditionally performed by the process engineer
using mostly his or her intuition and knowledge of the process.
Fortunately, for a good number of loops, intuition is all that is necessary.
However, when the interactions involved in a system are not clearly
understood and the “intuitive” approach produces the wrong pairing,
control performance will be poor. The expedient solution is to switch the
troublesome controllers to “manual,” which, as mentioned in the
preceding section, eliminates the effect of interaction. The many
controllers operating in manual in control rooms throughout the process
industries are visible reminders of the importance of correctly pairing the
variables in the system. Each one represents a failed attempt to apply
automatic control.
Open-Loop Gains
Consider the 2x2 system of Figure 9-2. The following open-loop gains can be
calculated if a change is applied to manipulated variable M1, while the
other manipulated variable is kept constant, and the changes in controlled
variables C1 and C2 are measured:
K11 = (Change in C1)/(Change in M1)        (9-1)

K21 = (Change in C2)/(Change in M1)

K12 = (Change in C1)/(Change in M2)        (9-2)

K22 = (Change in C2)/(Change in M2)
The open-loop gains can also be obtained from the steady-state equations
or computer simulation programs that were used to design the plant.
There is a natural tendency to try to use the open-loop gains to pair the
variables. However, it is immediately apparent that C1 and C2 and M1 and
M2 do not necessarily have the same dimensions. Thus, attempting to
compare open-loop gains would be similar to trying to decide between
buying a new sofa or a new house. To overcome this problem, Bristol
proposed computing relative gains that are independent of dimensions.
Closed-Loop Gains
However, closed-loop tests are not needed because you can compute the
closed-loop gains from the open-loop gains previously defined. For
example, when both M1 and M2 change, the total change in C1 can be
estimated by the sum of the two changes:

Change in C1 = K11 (Change in M1) + K12 (Change in M2)

The same holds true for the total change in C2. Now, if C2 is kept constant,
its change is zero, so:

Change in M2 = -(K21/K22) (Change in M1)

Substituting this into the expression for the change in C1 gives:

Change in C1 = [K11 - (K12 K21)/K22] (Change in M1)
µij = Kij / K′ij        (9-3)
where µij is the relative gain for the pairing of controlled variable Ci with
manipulated variable Mj, Kij is the open-loop gain, and K′ij is the
closed-loop gain for that pair.
The following formulas can be used to compute the relative gains for any
2x2 system:
µ11 = µ22 = K11 K22 / (K11 K22 - K12 K21)        (9-4)

µ12 = µ21 = K12 K21 / (K12 K21 - K11 K22)
It makes sense that the interaction measure for the C1-M1 pair be the same
as for the C2-M2 pair because they represent a single option in the 2x2
system. The other option is C1-M2 and C2-M1.
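For a 2x2 system, Eq. 9-4 is easy to evaluate directly. The sketch below (the function name is illustrative, not from the text) computes the two independent relative gains from the four open-loop gains:

```python
def relative_gains_2x2(k11, k12, k21, k22):
    """Relative gains of a 2x2 system from the open-loop gains (Eq. 9-4).

    Kij is the open-loop gain of controlled variable Ci with respect to
    manipulated variable Mj.  Returns (mu11, mu12); by symmetry
    mu22 = mu11 and mu21 = mu12.
    """
    det = k11 * k22 - k12 * k21
    if det == 0:
        raise ValueError("singular gain matrix: the loops are not independent")
    mu11 = k11 * k22 / det
    # rows and columns of the relative gain array each sum to one
    return mu11, 1.0 - mu11
```

Pair each controlled variable with the manipulated variable whose relative gain is closest to unity.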
The relative gains are dimensionless and can therefore be compared to one
another. To minimize the effect of interaction, the controlled and
manipulated variables are paired so that the relative gain for each pair is
closest to unity. This results in the least change in loop gain when the
other loop is closed. Notice that in cases where there is no interaction, the
open-loop gain is equal to the closed-loop gain, and the relative gains are
1.0 for one pairing and 0.0 for the other.
The following example illustrates how to calculate the relative gains for a
blending process, and how to interpret the resulting values of the relative
gains.
This means that for the pair F1 with F and F2 with x, the steady-state gain
of each loop increases by a factor of 1/0.8 = 1.25 (a 25% increase) when the other loop
is closed. Conversely, for the pair F1 with x and F2 with F, the gain of each
loop increases by a factor of 1/0.2 = 5 (a 400% change) when the other loop
is closed! Obviously, the first pairing is significantly less sensitive to
interaction than the second.
Eq. 9-4 can be used to compute the relative gains for any control system
with two objectives. For systems with more than two controlled and
manipulated variables, the open-loop gain of each loop is determined with
all the other loops open, and the closed-loop gain with all the other
loops closed. The relative gain for each controlled/manipulated
variable pair is still defined as the ratio of the open-loop gain to the closed-
loop gain for that pair.
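For systems larger than 2x2 the relative gains can be computed from the matrix of open-loop gains: each relative gain is the product of the open-loop gain and the corresponding element of the transposed inverse of the gain matrix. This standard result is assumed in the sketch below, which uses a small hand-rolled Gauss-Jordan inverse so that no external library is needed:

```python
def _invert(a):
    """Gauss-Jordan inverse of a small square matrix (list of lists)."""
    n = len(a)
    aug = [list(map(float, row)) + [float(i == j) for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        # partial pivoting for numerical robustness
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        if abs(p) < 1e-12:
            raise ValueError("singular gain matrix")
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]


def relative_gain_array(k):
    """mu[i][j] = K[i][j] * inv(K)[j][i]; rows and columns each sum to one."""
    kinv = _invert(k)
    n = len(k)
    return [[k[i][j] * kinv[j][i] for j in range(n)] for i in range(n)]
```

For a 2x2 gain matrix this reduces exactly to Eq. 9-4.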
The following properties of the relative gains are useful for interpreting
them:
1. Normalization. The relative gains are not only
nondimensional; they are also normalized in the sense that the
sum of the gains of any row or column of the matrix is unity. You
can verify this fact for the 2x2 system by adding the relative gain
formulas for each pairing, that is, µ11 + µ12 = 1. This property also
applies to systems with more than two controlled and
manipulated variables.
2. Positive and Negative Interaction. For the 2x2 system, when the two
loops help each other (positive interaction), the relative gains are
between zero and one. When the two loops fight each other
(negative interaction), one set of relative gains is greater than
unity, and the other set is negative. Notice that a negative relative
gain means that the net action of the loop reverses when the other
loop is opened or closed—a very undesirable situation.
For a system with more than two control objectives, the concept
of positive and negative interaction must be applied on a pair-by-
pair basis. In other words, if the relative gain for a pair of
controlled and manipulated variables is positive and less than
unity, the interaction for that pair is positive; if it is greater than
unity or negative, the interaction for that pair is negative.
The following example shows that when the steady state relationships are
simple enough, as they are for the blender, the relative gains can be
expressed as formulas in terms of the process variables.
Conservation of mass:    F = F1 + F2

Conservation of solute:  x = (F1 x1 + F2 x2) / (F1 + F2)

Kx1 = [F2 (x1 - x2) / (F1 + F2)^2] Kv1        Kx2 = [F1 (x2 - x1) / (F1 + F2)^2] Kv2
where Kv1 and Kv2 are the valve gains, in (lb/h)/fraction valve position.
Next, substitute the open-loop gains into the formulas for the relative
gains given in Eq. 9-4. A little algebraic manipulation produces the
following general expressions for the relative gains:
µF1 = µx2 = F1 / (F1 + F2)        µF2 = µx1 = F2 / (F1 + F2)
In words, the pairing that minimizes interaction has the flow controller
manipulating the larger of the two flows and the composition controller
manipulating the smaller of the two flows. If a ratio controller were used,
the smaller flow should be ratioed to the larger flow, with the flow
controller manipulating the larger flow and the composition controller
manipulating the ratio. It could easily be shown that the ratio controller
decouples the two loops so that a change in flow does not affect the
composition. Notice that the valve gains Kv1 and Kv2 do not affect the
relative gains. This is why they were not considered in Example 9-1.
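The general expressions above can be cross-checked numerically against Eq. 9-4. In the sketch below (function names are illustrative) the valve gains are set to one, since, as noted, they cancel out of the relative gains:

```python
def blender_relative_gain(f1, f2):
    """Relative gain for pairing product flow F with inlet stream F1 in
    the blender (equivalently, composition x with stream F2)."""
    return f1 / (f1 + f2)


def blender_rga_from_gains(f1, f2, x1, x2):
    """Same relative gain, computed the long way from the open-loop
    gains derived from the steady-state balances (Eq. 9-4).

    Valve gains are taken as 1 because they cancel in the ratio.
    """
    kf1 = kf2 = 1.0                          # dF/dF1 = dF/dF2 = 1
    kx1 = f2 * (x1 - x2) / (f1 + f2) ** 2    # dx/dF1
    kx2 = f1 * (x2 - x1) / (f1 + f2) ** 2    # dx/dF2
    return kf1 * kx2 / (kf1 * kx2 - kf2 * kx1)
```

For example, with F1 = 800 and F2 = 200 both routes give a relative gain of 0.8 for the F-with-F1 pairing, confirming that the flow controller should manipulate the larger flow.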
For most processes the relative gains tell all that needs to be known about
interaction. They are determined from the open-loop, steady-state gains,
which can easily be determined by either on-line or off-line methods.
However, in systems with negative interaction, the pairing recommended
by relative gain analysis may not result in the best control performance
because it does not consider the dynamic response. This is illustrated in
the following example.
The two level variables do not affect the operation of the column directly;
thus, they cannot be made a part of the interaction analysis. However, the
decision regarding which streams control the levels has an effect on the
interaction between the other control loops. Two arrangements or schemes
are considered. To reduce the problem to a 2x2 system, assume that the
column pressure controller (PC) manipulates the condenser cooling rate.

[Figure: distillation column showing condenser, reflux, distillate, feed, steam, and bottoms streams]
[Tables: open-loop gains and relative gains for the energy balance scheme, with columns Reflux and Steam; values not reproduced]

[Figure: distillation column with the energy balance control scheme]
Notice that the obvious pairing—top temperature with reflux and bottom
temperature with steam—results in less interaction than the other one.
However, even then there is much interaction between the two loops: the
gain of each loop decreases by a factor of 3.38 when the other loop is
switched to automatic, which indicates that the two temperature loops
fight each other. This result indicates that his scheme suffers from negative
interaction.
The sensitivity study on the simulated column gives the following open-
loop gains:
             Reflux    Bottoms
    TC-1     -0.35     -1.05

[Figure: distillation column with the direct material balance control scheme]

Relative gains:

             Reflux    Bottoms
    TC-1      0.90      0.1
The pairing for this scheme is also the obvious one, top temperature with
reflux and bottom temperature with bottoms product flow. However, the
relative gains show only about 10 percent positive interaction; that is, the
two loops help each other, which is indicated by the relative gains being
positive and less than unity.
From steady-state relative gain analysis, it would appear then that Direct
Material Balance Control results in significantly less interaction than
Energy Balance Control. Unfortunately, the Energy Balance Control
scheme, which relative gain analysis showed to have more steady-state
interaction, was found to perform better in this particular case than the
Direct Material Balance Control scheme. The reason for this is dynamic
interaction, which goes undetected by the relative gain matrix. For the first
scheme, the open-loop responses are monotonic, that is, the temperature
stays between its initial value and its final value during the entire
response. On the other hand, for the second scheme the open-loop
responses exhibit inverse response, that is, the temperature moves in one
direction at the beginning of the response and then moves back to a final
value on the opposite side of its initial value. This causes the feedback
controller to initially take action in the wrong direction, degrading the
performance of the control system.
[Figure 9-7: block diagram of the decoupled 2x2 system; decouplers D1 and D2 combine controller outputs U1 and U2 to form manipulated variables M1 and M2, which act through G11, G12, G21, and G22 on C1 and C2]
D1 = -G12 / G11        (9-5)

D2 = -G21 / G22        (9-6)
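At steady state the decoupler designs of Eqs. 9-5 and 9-6 reduce to gain ratios. The following is a minimal sketch, assuming the structure of Figure 9-7, where each decoupler output is added to the other controller's output, and assuming all signals are deviations from their steady-state values (the function name is illustrative):

```python
def decoupled_outputs(u1, u2, k11, k12, k21, k22):
    """Steady-state decoupling of a 2x2 loop (Eqs. 9-5 and 9-6 with the
    dynamic terms dropped, so only the gains Kij remain).

    u1, u2 are the controller outputs (deviations); returns (m1, m2),
    the signals actually sent to the valves.
    """
    d1 = -k12 / k11   # cancels the effect of M2 on C1
    d2 = -k21 / k22   # cancels the effect of M1 on C2
    m1 = u1 + d1 * u2
    m2 = u2 + d2 * u1
    return m1, m2
```

A quick check of the cancellation: with u1 = 0 and u2 = 1, the net steady-state change in C1, namely K11*m1 + K12*m2, is zero, so controller 2 no longer disturbs loop 1.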
As Eqs. 9-7 and 9-8 show, another aspect of decoupling is that two parallel
paths exist between each controller output and its controlled variable. For
processes with negative interaction these two parallel paths have opposite
signs, which creates either an inverse response or an overshoot in the
open-loop step response of each decoupled loop. It is important to realize,
however, that the parallel paths are not created by the decouplers in that
they were already present in the “un-decoupled” system (the interaction
and direct effects).
As the design of the decoupler makes clear, the steady-state effect of the
decoupler on any one loop is the same effect the integral mode of the other
loops would have if the decoupler were not used. What then does the
decoupler achieve? Basically, through decoupling, the effect of interaction
is made independent of whether the other loops are opened or closed.
However, problems may still arise in one loop if the manipulated variable
of another loop is driven to the limits of its range. This is because the
decoupling action is then blocked by the saturation of the valve. It is
therefore important to select the correct pairing of manipulated and
controlled variables even when decoupling is used, so saturation of one of
the manipulated variables in the multivariable system does not drastically
affect the performance of the other loops.
Half Decoupling
M1 = U1 - (Kv2/Kv1)(U2 - U2o)

M2 = U2 + [(F2 Kv1)/(F1 Kv2)](U1 - U1o)
In the first formula, the coefficient corrects for the relative sizes of the
two valves. In the second formula, the coefficient also corrects for the
ratio between the two inlet flows that is required to maintain the
composition constant. This ratio is a function of the two inlet stream
compositions and the product composition set point. If any of these
compositions were to vary, you would have to readjust the gain of the
decoupler. There is, however, another way to design the decoupler that
does not require you to readjust the parameters when process conditions
change. It consists of using simple process models to set up the structure
of the control system, as shown in the next section.
product flow controller should manipulate the sum of the two inlet flows.
Therefore, the output of the flow controller is assumed to be the total inlet
flow, and the smaller flow is subtracted from it to determine the larger
flow:
F1set = U1 - F2
For this formula to work, the smaller flow must be measured and the
larger flow must be controlled.
F2set = U2 F1
This formula requires that the smaller flow also be controlled. Figure 9-8
shows the diagram of the resulting control system. In this scheme, the
ratio controller keeps the product composition from changing when the
total flow is changed, and the summer keeps the total flow from changing
when the composition controller takes action. The multivariable control
system is therefore fully decoupled.
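The summer and ratio formulas can be collected into one routine. This sketch assumes the ratio is applied to the computed F1 set point (using the measured F1 would be an equivalent choice); the names are illustrative, not from the text:

```python
def blender_setpoints(u1_total_flow, u2_ratio, f2_measured):
    """Set points for the two inlet flow loops in the decoupled blender
    scheme of Figure 9-8.

    u1_total_flow: output of the product flow controller, interpreted
                   as the total inlet flow.
    u2_ratio:      output of the composition controller, interpreted
                   as the ratio F2/F1.
    f2_measured:   measured smaller inlet flow.
    """
    f1_set = u1_total_flow - f2_measured   # summer: F1set = U1 - F2
    f2_set = u2_ratio * f1_set             # ratio:  F2set = U2 * F1
    return f1_set, f2_set
```

At steady state the two formulas are consistent: a total-flow demand of 2000 with a ratio of 0.25 and a measured F2 of 400 gives F1set = 1600 and F2set = 400.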
[Figure 9-8: diagram of the decoupled blender control system, with a summer setting F1set and a ratio controller setting F2set]
The last two design formulas do not show the scale factors that you may
need to convert the flow signals into the percentage scales of the flow
controllers. The scale factors depend on the spans of the two flow
transmitters rather than on the sizes of the control valves. The flow
controllers allow the signals to be linear with flow. In addition, they take
care of changes in pressure drop across the control valves.
The first step when tuning interacting loops is to prioritize the control
objectives, in other words, to rank the controlled variables in the order in
which it is important to maintain them at their set points. The second step is
to check the relative gain for the most important variable and decide if it is
necessary to detune the other loops. The principle behind this approach is
that a loosely tuned feedback control loop—low gain and slow integral—
behaves as if it were opened or, rather, it will make changes in its
manipulated variable slowly enough to allow the controller of the
important variable to correct for the effect of interaction. The decision as to
how loosely to tune the less important loops is based on how different
from unity is the relative gain for the most important loop. It is
understood that the manipulated variable for the most important variable
has been selected to make the relative gain for that loop as close to unity as
possible. When there are more than two interacting loops, the tightness of
tuning for each loop will decrease with its rank.
decoupler is not used. Thus, for example, if the top loop in Figure 9-7 were
the most important of the two, use decoupler D1 but not decoupler D2.
If all of the loops are of equal importance and speed of response, they
must each be tuned while the other loops are in manual. Then, the
controller gain of each loop must be adjusted by multiplying the controller
gain obtained when all other loops were opened by the relative gain for
the loop:

Kc′ij = µij Kcij        (9-9)

where

Kc′ij = the adjusted controller gain for operation with the other loops
closed, %C.O./%T.O.

Kcij = the controller gain tuned with all the other loops opened,
%C.O./%T.O.
This adjustment accounts for the change in steady-state gain when the
other loops are closed, but it does not account for dynamic effects. If some
of the loops are slower than the others or can be detuned, you must
recalculate the relative gains for the remaining loops as if those were the
only interacting loops, that is, as if the slower or detuned loops were
always opened.
The gain adjustment suggested by Eq. 9-9 should be sufficient for those
loops with positive interaction since their response remains monotonic
when the other loops are closed. However, the loops with negative
interaction may need to be retuned after the other loops are closed. This is
because the other loops will cause either inverse or overshoot response,
which normally requires lower gains and slower integral than monotonic
(minimum phase) loops. Notice that the formula results in a gain
reduction for the loops with positive interaction and a gain increase for the
loops with negative interaction (assuming the pairing with the positive
relative gain is always used).
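The adjustment of Eq. 9-9 (multiply the gain tuned with the other loops open by the relative gain of the pair) can be sketched as follows; the function name and the guard on negative relative gains are choices made here:

```python
def adjust_gain(kc_open, mu):
    """Adjust a controller gain tuned with the other loops open for
    operation with them closed (Eq. 9-9): Kc' = mu * Kc.

    mu < 1 (positive interaction) -> gain is reduced
    mu > 1 (negative interaction) -> gain is increased
    """
    if mu <= 0:
        raise ValueError("negative relative gain: re-pair the loops instead")
    return mu * kc_open
```

For instance, with a relative gain of 0.5, a controller gain of 5 %C.O./%T.O. tuned with the other loop open would be cut to 2.5 before that loop is switched to automatic, which matches the halving discussed for the catalyst blender example later in this unit.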
When decouplers are used, they must be tuned first and then kept active
while the feedback controllers are tuned. Recall that perfect decoupling
has the same effect on a loop as if the other loops were very tightly tuned.
For example, for the blender control system of Figure 9-8, the ratio and
mass balance controllers must be tuned first and kept active while the flow
and composition controllers are tuned.
The following example shows how interaction affects the tuning of the
controllers.
Initially, the inlet flows are each 1,000 kg/h, and the product concentration
is 50 percent catalyst. Figure 9-9 shows the responses of the product
composition and flow, as well as the inlet flows, for a step decrease of 10
percent in the dilute stream composition. The curves marked (a) are the
responses when the product flow controller is kept in manual, and the
curves marked (b) are for the flow controller in automatic. Notice that the
response of the analyzer controller is more oscillatory when the flow
controller is in automatic. This is because the interaction is positive, with a
relative gain of 0.5 (from the result of Example 9-2). Thus, the gain of the
analyzer controller doubles when the flow controller is switched to
automatic. If the gain of the analyzer controller were to be reduced by one
half—to 2.5%C.O./%T.O.—the response would match the response
obtained when the flow controller is in manual.
Figure 9-9. Control of Catalyst Blender. (a) With Product Flow Controller on Manual. (b) With
Product Flow Controller on Automatic.
Example 9-6 illustrates how one of the popular model reference control
schemes, Dynamic Matrix Control, controls a process.
Figure 9-10. Unit Step Responses of Composition and Temperature of Jacketed Chemical
Reactor to Coolant and Reactants Flow
Figure 9-11. Dynamic Matrix Control of Composition and Temperature of Jacketed Chemical
Reactor
set point. The response is for an output horizon of ten moves, equal-
concern errors of 1 percent composition and 1°F, and move suppression
parameters of 0.15 for the coolant flow and 0.05 for the reactants flow.
Figure 9-11 shows that the dynamic matrix controller is able to change the
temperature while maintaining the composition relatively constant. When
the move suppression parameters were reduced to 0.01 each, the response
was unstable, and when the move suppression of the coolant flow was
0.05, the controller drove the coolant flow to zero. It is important to realize
that if only two simple feedback controllers, with or without a decoupler,
were used, this would be a very difficult control problem because of the
inverse responses to changes in reactants flow (shown in Figure 9-10).
9-6. Summary
This unit dealt with multivariable control systems and how to tune them.
It showed the effect that loop interaction has on the response of feedback
control systems, and it presented two methods for dealing with that effect.
The first is Bristol's relative gains, which minimizes the effect of
interaction by quantitatively determining the amount of interaction and
by selecting the pairing of controlled and manipulated variables. The
second is loop decoupling. In one example, the distillation column
showed that you must also consider dynamic interaction, undetected by
the relative gains, when pairing controlled and manipulated variables.
EXERCISES
9-1. Under what conditions does loop interaction take place? What are its
effects? What two things can be done about it?
9-2. For any given loop in a multivariable (interacting) system, define the open-
loop gain, the closed-loop gain, and the relative gain (interaction measure).
9-3. How are the relative gains used to pair controlled and manipulated
variables in an interacting control system? What makes it easy to
determine the relative gains? What is the major shortcoming of the relative
gain approach?
9-4. In a 2x2 control system the four relative gains are 0.5. Is there a best way to
pair the variables to minimize the effect of interaction? By how much does
the gain of a loop change when the other loop is closed? Is the interaction
positive or negative?
9-5. Define positive and negative interaction. What is the range of values of the
relative gain for each type of interaction?
9-6. The open-loop gains for the top and bottom compositions of a distillation
column are the following:
Reflux Steam
Calculate the relative gains and pair the compositions of the distillate and
bottoms to the reflux and steam rates so that the effect of interaction is
minimized.
9-7. The automated showers in the house of the future will manipulate the hot
and cold water flows to maintain constant water temperature and flow. In a
typical design the system is to deliver three gallons per minute (gpm) of
water at 110°F by mixing water at 170°F with water at 80°F. Determine
the open-loop gains, the relative gains, and the preferred pairing for the two
control loops. Hint: the solution to this problem is identical to that of
Example 9-2.
9-8. Design a decoupler to maintain the temperature constant when the flow is
changed in the shower control system of Exercise 9-7. Dynamic effects can
be ignored.
REFERENCES
Learning Objectives — When you have completed this unit, you should be
able to:
Because most feedback controllers are linear, once they are tuned at a
given process operating condition their performance will vary when the
process operating conditions change. However, since feedback control is
usually a very robust strategy, small variations in process operating
conditions would normally not change the process dynamic behavior
enough to justify adaptive control techniques. Because of this robustness,
we can say that although most processes are nonlinear, very few processes
require adaptive control.
200 Unit 10: Adaptive and Self-tuning Control
Unit 8). The presence of feedback trim makes these installations less
sensitive to changing process operating conditions.
Though we have said that most process control applications do not require
adaptive control, the following sections will discuss two examples—
process nonlinearities and process time dependence—where it may be
needed.
Process Nonlinearities
Of the three process model parameters the one most likely to affect the
performance of the loop is the gain. This is because the loop gain is
directly proportional to the process gain. Moreover, for the variables for
which good control is important, temperature and composition, the loop
gain is usually inversely proportional to process throughput (see
Section 3-6). Figure 10-1 shows a typical plot of process gain versus
throughput. This plot applies to the control of composition in a blender or
of outlet temperature in a steam heater or furnace. The gain variation is
even more pronounced in a heat exchanger where the manipulated
variable is the flow of a hot oil or coolant. This very common nonlinearity
can be summarized by the following statement:
Figure 10-1. Variation of Process Gain with Throughput for a Blender, Furnace, or Steam
Heater
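The inverse dependence of gain on throughput follows directly from a steady-state energy balance on the heater, F cp (T - Tin) = W lambda, so that dT/dW = lambda/(F cp). The sketch below illustrates this; the property values are illustrative assumptions only:

```python
def heater_gain(throughput, latent_heat=966.0, cp=1.0):
    """Steady-state gain of outlet temperature to steam flow for a steam
    heater, from the energy balance F*cp*(T - Tin) = W*latent_heat:

        dT/dW = latent_heat / (F * cp)

    The gain is inversely proportional to the throughput F.  The default
    values (Btu/lb of steam, Btu/lb-F) are placeholders for illustration.
    """
    return latent_heat / (throughput * cp)
```

Halving the process flow doubles the process gain, which reproduces the trend of Figure 10-1 and explains why a controller tuned at full flow can become oscillatory at reduced throughput.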
Process nonlinearities also affect the process time constants and dead time,
but usually to a lesser extent than they affect the gain. In particular, if the
time constants and dead time were to remain proportional to each other as
they vary—as, for example, they would remain in a blender when the
throughput varies—the controllability of the feedback loop would remain
constant since it is defined by the ratio of the effective dead time to the
effective time constant of the loop. This means that, although the
controller integral and derivative times no longer match the speed of
response of the process when the time parameters vary, for most loops the
loop stability and damping of the response are not affected as much by the
time parameters as they are by the variation of the gain.
Valve Characteristics
Figure 10-3. Equal Percentage Valve Characteristic Compensates for the Decrease in Gain with
Throughput when Pressure Drop Across the Valve is Constant (a), but not when Pressure Drop
Varies with Throughput (b).
capacity flow. For example, if the rest of the line in series with the
valve takes up 5 psi of friction loss at design flow, the valve must
take up 5(0.6/0.4)=7.5 psi at that flow.
2. If the temperature or composition controller is cascaded to a flow
controller, then the benefits of the equal-percentage
characteristics in the valve are lost to the temperature or
composition loop. Furthermore, if the flow controller receives a
differential pressure signal that is proportional to the square of
the flow, and it does not extract the square root of this signal, the
gain variation of the master controller would be aggravated by
the square function. Notice that if the flow controller receives a
signal that is proportional to the square of the flow, as the output
of the master controller increases it calls for smaller increments in
flow for the same increments in output. That is, the loop gain will
decrease as the flow (throughput) increases.
3. The equal-percentage characteristic curve does not produce zero
flow at zero valve position. Therefore, the actual valve
characteristic curve must deviate from the equal-percentage
characteristic curve in a region near the closed position. This is
illustrated in Figure 10-3 by the short straight lines near the zero
valve position.
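The ideal equal-percentage characteristic behind Figure 10-3 is usually written f(x) = R^(x-1), where x is the fractional valve position and R is the valve rangeability. A sketch, with R assumed to be 50 (a typical but illustrative value):

```python
def equal_percentage_flow(position, rangeability=50.0):
    """Fraction of maximum flow through an ideal equal-percentage valve
    at fractional position 0..1, assuming constant pressure drop:

        f(x) = R ** (x - 1)

    Equal increments in position give equal percentage increases in
    flow.  Real valves deviate from this curve near the closed position
    so that flow is zero at zero lift (the short straight segments near
    zero in Figure 10-3); that low-end region is not modeled here.
    """
    return rangeability ** (position - 1.0)
```

The defining property is that the ratio of flows for a fixed position increment is constant anywhere on the curve, which is what offsets the inverse gain-versus-throughput behavior of the process when the valve pressure drop is constant.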
Figure 10-5 shows three examples of temperature control using this simple
gain compensation scheme. In each of the three cases the fuel flow to a
furnace, steam flow to a heater, and hot oil heat rate to an exchanger are
ratioed to the throughput flow. In this last example, the heat rate
computation also provides compensation for the temperature change of
the hot oil.
Figure 10-4. Cascade to Ratio Controller Makes the Loop Gain Constant with Throughput
[Figure 10-5a: temperature control of a furnace with the fuel flow cascaded to a fuel-to-process flow ratio controller]

Figure 10-5b. Temperature Control of a Steam Heater with Cascade to Ratio Controller

Figure 10-5c. Temperature Control of a Hot Oil Exchanger with Cascade to Ratio Controller
Figure 10-6 shows that the response of the additive controller is more
oscillatory when the process flow is reduced by half, while the response of
the cascade-to-ratio control scheme is almost the same at half flow as at
full flow. The initial deviation in temperature is higher at half flow because
the gain at that flow is twice that of full flow (see Example 3-5). The
responses of the two schemes are identical when the process flow is
restored to the original flow.
Figure 10-6. Response of Heat Exchanger Temperature to a 50% Pulse Change in Load. (a)
With Additive Feedforward Control, (b) With Temperature Controller Cascaded to the Steam-to-
Process Flow Ratio Controller.
Figure 10-7. pH Control Scheme Uses Two Control Valves and a Gap Controller
The following two sections (10-3 and 10-4) look at self-tuning and
adaptive control schemes that can be applied to any process. They
essentially view the process as a black box.
The pattern recognition phase in the auto-tuning sequence starts when the
error (difference between set point and controlled variable) exceeds a
prespecified noise threshold. Such an error may be caused by a
disturbance or by a set point change. The program then searches for three
successive peaks in the error response and computes the damping and overshoot,
where E1, E2, and E3 are the measured amplitudes of the error at the
three peaks. Notice that the error at the second peak is assumed to have a
sign opposite that of the other two, so the differences indicated
in the definition of the damping are actually sums.
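A common form of these two measures (assumed here, since the exact expressions are not quoted above) is damping = (E3 − E2)/(E1 − E2) and overshoot = −E2/E1; a sketch, illustrating the sign remark:

```python
def pattern_measures(e1, e2, e3):
    """Damping and overshoot from three successive error peaks.
    Assumed forms: damping = (E3 - E2)/(E1 - E2), overshoot = -E2/E1.
    Because e2 has the opposite sign of e1 and e3, the differences
    below are, in magnitude, sums of the peak amplitudes."""
    damping = (e3 - e2) / (e1 - e2)
    overshoot = -e2 / e1
    return damping, overshoot

# A decaying oscillation with peaks at +10, -4, and +1.6 % of span:
d, o = pattern_measures(10.0, -4.0, 1.6)
# damping = (1.6 + 4)/(10 + 4) = 0.4; overshoot = 4/10 = 0.4
```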
Auto-Tuning Formulas
Figure 10-8. Closed Loop Response Showing the Peaks which are Used by the Pattern
Recognition Adaptive Technique
• Noise band: the minimum magnitude of the error that triggers the
pattern recognition program. This parameter depends on the
expected amplitude of the noise in the measured variable.
• Maximum wait time: the maximum time the algorithm will wait for
the second peak in the response after detecting the first one. This
parameter depends on the time scale of the process response.
Pretuning
Restrictions
The EXACT controller is a rule-based expert system with over two hundred
rules, most of which serve to keep the pattern recognition algorithm
from being confused by peaks that are not caused by the controller tuning
parameters.
parameters. Nevertheless, the pattern recognition algorithm must be
applied with much care because situations will still arise where it can be
fooled. For example, oscillatory disturbances with a period of the same
order of magnitude as that of the loop will tend to detune the controller
because the auto-tuning algorithm will think the oscillations are caused by
a controller tuning that is too tight. Other situations, such as loop
interaction, may also throw the auto-tuning off if they are not properly
taken into account.
The discrete model of Eq. 10-3 has four very desirable properties:
1. This model can fit the response of most processes, both
monotonic and oscillatory, with and without inverse response,
and with any ratio of dead time to sample time.
2. The parameters of the model can be estimated by linear multiple
regression in a computer control installation because the model
equation is linear in the parameters and their coefficients are the
known sampled values of the controlled variable and the
controller output. Only the dead-time parameter N must be
estimated separately.
3. For a first-order process, the parameters A2 and B2 become zero,
whereas if the dead time is an exact number of samples
parameter B2 is zero for the second-order process and B1 is also
zero for the first-order process.
In summary, the discrete model fits the response of most processes, has
parameters that can be estimated by using a straightforward procedure,
and results in the controller most commonly used in industry.
Parameter Estimation
The parameters of the discrete process model are estimated off line by
collecting enough samples of the process output variable C and of the
controller output M. These data are then fed to a least-squares program,
which is readily available in the form of a numerical methods package or a
spreadsheet program (Lotus 1-2-3, Microsoft Excel, Corel Quattro Pro,
etc.). One particular package that is specific to process identification,
MATLAB System Identification Toolbox, was developed by Ljung as a
toolbox for the popular MATLAB software package.8
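The least-squares fit can be sketched with NumPy's solver. The model form below is an assumption (Eq. 10-3 is not quoted here); its sign convention is chosen to be consistent with the steady-state gain (B0 + B1 + B2)/(1 + A1 + A2) implied by the Table 10-1 formulas, and the generating parameters are illustrative values:

```python
import numpy as np

def estimate_discrete_model(c, m, N=0):
    """Least-squares fit of an assumed second-order discrete model:
        c(k) = -A1*c(k-1) - A2*c(k-2)
               + B0*m(k-N-1) + B1*m(k-N-2) + B2*m(k-N-3)
    Returns A1, A2, B0, B1, B2. The dead time N (in samples) must be
    chosen separately, as the text notes."""
    c = np.asarray(c, dtype=float)
    m = np.asarray(m, dtype=float)
    rows, y = [], []
    for k in range(max(2, N + 3), len(c)):
        rows.append([-c[k-1], -c[k-2], m[k-N-1], m[k-N-2], m[k-N-3]])
        y.append(c[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
    return theta

# Check against noise-free data generated from a known model:
m = np.sign(np.sin(0.7 * np.arange(200)))   # symmetric binary test input
c = np.zeros(200)
for k in range(2, 200):
    c[k] = 1.3*c[k-1] - 0.4*c[k-2] + 0.1*m[k-1] + 0.08*m[k-2]
A1, A2, B0, B1, B2 = estimate_discrete_model(c, m)
# Recovers A1 = -1.3, A2 = 0.4, B0 = 0.1, B1 = 0.08, B2 = 0
```

With real plant data, the samples would first be differenced, or a constant term added to the regression, to handle nonzero-mean disturbances as described below.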
Identification
Figure 10-9. Block Diagram for Parameter Estimation and Input Signals: Symmetric Pulse,
Pseudo-Random Binary Signal (PRBS)
3. The final values of C and M must match their initial values. This
does not happen when a nonzero mean disturbance upsets the
system during the data collection period. Differencing gets
around this requirement. Another way to handle it is to add a
constant term to Eq. 10-3, which is then estimated and becomes
an estimate of the mean value of the disturbance.
of the noise as the variance of the residuals. At any rate, the “trace” of
matrix P, that is, the sum of its diagonal elements, can serve as a measure
of the goodness of the fit.
Adapter
Table 6-2 presented formulas for tuning the PID controllers of Table 6-1
from continuous model parameters, that is, from the process gain, time
constants, and dead time. For auto-tuning and adaptive controllers,
similar formulas can be developed using the same methods from the
discrete model parameters A1, A2, B0, B1, and B2, which are calculated
by the estimator. Table 10-1 presents these formulas, which can be used
to calculate the controller parameters from the estimated discrete model
parameters. Parameter q is the control performance parameter, which can
be adjusted to obtain tighter (q → 0) or looser (q → 1) control.
TI = –T(2A2 + A1)/(1 + A1 + A2)

TD = –TA2/(2A2 + A1)
For use with the parallel PID controller of Table 6-1.
When the dead-time compensation PID controller is used (see Section 6-4), the gain
changes to: Kc = (1 – q)(2A2 + A1)/(B0 + B1 + B2).
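The Table 10-1 formulas quoted above, with the footnote's dead-time-compensated gain, can be collected into one helper. The numeric parameters below are assumed for illustration, and the sign of Kc depends on the sign convention adopted for A1 and A2 in Eq. 10-3, which is not reproduced here:

```python
def pid_from_discrete(A1, A2, B0, B1, B2, T, q=0.0):
    """PID parameters from discrete model parameters per the Table 10-1
    formulas quoted above. T is the sample time; q gives tighter
    control as q -> 0 and looser control as q -> 1. Kc uses the
    footnote's dead-time-compensated form; its sign follows the sign
    convention assumed for A1 and A2."""
    Kc = (1.0 - q) * (2.0*A2 + A1) / (B0 + B1 + B2)
    TI = -T * (2.0*A2 + A1) / (1.0 + A1 + A2)
    TD = -T * A2 / (2.0*A2 + A1)
    return Kc, TI, TD

# Illustrative (assumed) parameters: discrete poles at 0.8 and 0.5, T = 1
Kc, TI, TD = pid_from_discrete(-1.3, 0.4, 0.1, 0.08, 0.0, T=1.0)
# TI = 5.0 sample times, TD = 0.8 sample times
```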
output or to its set point. The estimator/adapter will then adjust the
controller parameters until the parameter estimation gain dies out, at
which time the auto-tuning procedure is stopped. You could then repeat
the auto-tuning procedure until the controller parameters do not change
appreciably from the beginning to the end of a run.
In the adaptive mode, the auto-tuning program is allowed to run all the
time, which takes advantage of process disturbances and normal set point
changes. You need to perform the initialization only once, and to keep the
estimator alive the memory parameter would be set to a value less than
unity.
The dead time and lead term are zero. The discrete second-order model for
the parameters just given is as follows:
Figure 10-10. Input (M) and Output (C) Data Used for Parameter Estimation in Example 10-1
Figure 10-11. Response of Auto-Tuned Controller of Example 10-1 to a Change in Set Point
10-5. Summary
EXERCISES
10-2. Which of the process parameters is most likely to vary and thus affect the
performance of the control loop? Give an example.
10-3. How can the control valve characteristic be selected to compensate for
process gain variations? Cite the requirements that must be met in order
for the valve characteristic to properly compensate for gain variations.
10-6. Briefly describe the adaptive and auto-tuning technique based on pattern
recognition.
10-7. Why is a second-order discrete model useful for identifying the dynamic
response of most processes? Why is it easy to estimate its parameters?
10-8. Cite the requirements for using the least-squares estimation of the
parameters of the discrete process model.
REFERENCES
batch process 69
bias 14
blending process 176 185
blending tank 46 47 51
block diagram 11 12 19 20 32
capacitance 45 46 47 48
cascade control 86 127 129 132 134
135 137 138 139 140
142
cascade windup 139 141
cascade-to-ratio control 202 207 209
characteristics, of valve 52 53 202 220
closed-loop gain 174 175 177
closed-loop time constant 110
coarse tuning 75
comparator 12
compensation for dead time 117 119 120 121
composition control 85
computer cascade control 134 142
computer-based controller 20
conductance 45 46 47 48 49
52 58
conductance, valve 47
control objective 85 145 157 159 160
control valve 10 11 12 13 14
controllability 24 27
controllable process 70 71 72 76 85
controlled variable 10 13 20 22 32
controller 13 14 20 22
action 17 23
computer-based 20
gain 13 15 24 27 32
33
panel-mounted 22
proportional-integral (PI) 19
controller (Cont.)
proportional-integral-derivative (PID) 19
proportional-only 14 19
single-mode 13 19
synthesis 86 118
three-mode 20
two-mode 13
correction for sample time 108 109
covariance matrix 216 218
current-to-pressure transducer 10
derivative (Cont.)
time 17 21 23 26 28
33 104 111
unit 20 104 105
differencing 215
digital controller 11
digital-to-analog converter (DAC) 102
direct action 12
direct material balance control 181 182
discrete model 213 214 217 218 219
distillation column 169 179 194 195
distributed control systems (DCS) 11 101
distributed controller 20
disturbance 10 14 22 26 32
85 86 89 97
dynamic compensation 147 152 156 158 161
163
dynamic gain limit 104 122
dynamic interaction 182 194
estimation
off line 214
recursive 216
EXACT controller 209 211 212
expert system 209 212
gain 37 40 41 44 54
56
closed-loop 174 175 176 177
nonlinear 106 107
open-loop 173 174 177 180 181
relative 176 177 179 182 183
188 190
scheduling 202
steady-state 178 189
variation 54 200 203 208
gap 117
gap controller 208
gas surge tank 46 47
graceful degradation 159
IMC 86
instrumental variable (IV) regression 216
integral controller 16 86 97
integral mode 15 16 18 19 24
27 129 130 141
integral time 16 22 23 24 26
27 61 64 71 73
75
integrating process 40
interaction 76 169 171 172 173
174 176 177 179 181
183 185 188 189 190
194
interaction measure 175
intermediate level control 92
internal model control (IMC) 86 120
inverse response 78 79 80 81 172
182 185 188 193 194
negative feedback 12
negative interaction 172 177 179 181 183
185 188 189 190
noise band 211
nonlinear controller gain 106
nonlinear feedforward compensation 157 164
nonlinearity 62 200 201 202 209
PI controller 16 19 21 86 91
92
PID algorithm 101
PID controller 19 20
pneumatic 10 22
positive interaction 172 177 182 189 190
practical tips 28 74
preset compensation 202
pressure control 89 92 97 129 139
process dead time 41 51
process gain 37 39 40 44 52
53 54 57 110
process nonlinearity 37 53 209
process time constant 42 44 45 92 96
process variable 101 102 103 104 110
119 122
processing frequency 101 115
programmable logic controllers (PLC) 101
proportional band 13 33
proportional controller 15
proportional kick 105
proportional mode 13 14 16 17
proportional-derivative (PD) controller 86
proportional-integral (PI) controller 86 91
proportional-integral controller 87
proportional-only controller 14 19
proportional-on-measurement 70 106
pseudo-random binary signal (PRBS) 215 218
pulse
symmetric 214 215
PV 101
QDR response 26 66 68
QDR tuning 26 27 30 33 61
66 67 68 71 73
quarter-decay ratio (QDR) response 26
rate mode 17
rate time 17 20
ratio control 145 149 164
reactor 40 127 129 131 133
134 135 137 138 141
recursive 218
recursive estimation 212 215 216 218 220
regression 214 220
instrumental variable (IV) 216
maximum likelihood 215
relative gain 174 175 176 177 178
179 180 182 183 188
189 194
relative gain matrix 173 177 182
reset feedback 141
reset mode 15
reset rate 17 24 33
reset time 15 19 33
reset windup 53 61 76 77 78
81 127 132 133 140
142
resistance 45 46 48
reverse action 12 22 23
steady-state 42 44
compensation 146
steam heater 9 14 23 27 28
32 149 159 162 163
step test 37 39 41 45 51
54 55 56
symmetric pulse 214 215
tangent method 42 43 44 45
tangent-and-point method 43
temperature control 94 95 96 127 128
129 131 133 136
three-mode controller 20
tight control 89 93 94 97
tight tuning 22 23
Time 120
time constant 37 39 40 41 43
44 45 46 47 48
52 85 86 87 90
92 94 96
time delay 49 50
trace of matrix 217
transducer 10
transfer function 37
transportation lag 49 50 51
tuning parameter 13 16 19 21 26
27 28 29 32
two-mode controller 13
two-point method 42
ultimate gain 9 23 24 25 26
27 28 31 108
ultimate period 9 24 25 27 28
uncontrollability 62 63 65 66 71
72 73 76 108 109
115
uncontrollable process 74 76 81
unstable 22 24 40
vacuum pan 55
valve
characteristics 52 53 202 203 220
conductance 52
gain 52
hysteresis 87
position control 117 208
valve position control 117
variance of the estimates 216
variance-covariance matrix 216 218
Hang, C. C., Lee, T. H., and Ho, W. K., Adaptive Control, (Research Triangle
Park, NC: ISA, 1993).
Smith, C. A., and Corripio, A. B., Principles and Practice of Automatic Process
Control, 2nd ed., (New York, NY: Wiley, 1997).
UNIT 2
Exercise 2-1.
Controlled variable—the speed of the engine.
Block diagram:
Exercise 2-2.
Controlled variable—the temperature in the oven.
230 Appendix B: Solutions to All Exercises
What is varied when the temperature dial is adjusted is the set point.
Block diagram:
(Block diagram: the thermostat compares the set point with the oven
temperature; the error drives a relay that switches power to the heating
element in the oven; heat loss enters as a disturbance; a gas bulb sensor
measures the oven temperature.)
Exercise 2-3.
(a) Change in controller output: 5% × 100/20 = 25%
Exercise 2-4.
Offset in outlet temperature: 8%C.O./(100/20) = 1.6%T.O.
Exercise 2-5.
For a 5%T.O. sustained error, the output of the PI controller will suddenly
change by:
(Sketch: the error steps to 5%T.O. and holds; the controller output
immediately steps to 3%C.O. and then ramps at 1.5%C.O./min.)
Exercise 2-6.
The output of the PID controller will suddenly change by:
(5%T.O./min)(1.0%C.O./%T.O.) = 5%C.O./min
After five minutes, the output will suddenly drop by 10.0%C.O., as the
error ramp stops. The output will then remain constant at:
Exercise 2-7.
QDR proportional gain:
0.45(1.2%C.O./%T.O.) = 0.54%C.O./%T.O. or 185% PB
QDR integral rate:
1/(4.5 min/1.2) = 0.266 repeats/min
The tuning formulas are from Table 2-1 for PI controllers.
Exercise 2-8.
Series PID controller:
QDR proportional gain:
0.6(1.2%C.O./%T.O.) = 0.72%C.O./%T.O. or 139% PB
QDR integral rate: 1/(4.5 min/2) = 0.44 repeats/min
QDR derivative time: 4.5 min/8 = 0.56 min
Parallel PID controller:
QDR proportional gain:
0.75(1.2%C.O./%T.O.) = 0.90%C.O./%T.O. or 110% PB
QDR integral rate: 1/(4.5 min/1.6) = 0.36 repeats/min
QDR derivative time: 4.5 min/10 = 0.45 min
The tuning formulas are from Table 2-1 for PID controllers.
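The calculations in Exercises 2-7 and 2-8 can be collected into a short sketch of the quarter-decay-ratio formulas exactly as they are used in these two solutions, with Kcu = 1.2 %C.O./%T.O. and Tu = 4.5 min:

```python
def qdr_tuning(kcu, tu, controller="PI"):
    """Quarter-decay-ratio tuning from the ultimate gain kcu and
    ultimate period tu, per the Table 2-1 formulas used in
    Exercises 2-7 and 2-8."""
    if controller == "PI":
        return {"Kc": 0.45*kcu, "TI": tu/1.2}
    if controller == "series PID":
        return {"Kc": 0.6*kcu, "TI": tu/2.0, "TD": tu/8.0}
    if controller == "parallel PID":
        return {"Kc": 0.75*kcu, "TI": tu/1.6, "TD": tu/10.0}
    raise ValueError(controller)

pi = qdr_tuning(1.2, 4.5)                       # Kc = 0.54, TI = 3.75 min
series = qdr_tuning(1.2, 4.5, "series PID")     # Kc = 0.72, TD = 0.5625 min
parallel = qdr_tuning(1.2, 4.5, "parallel PID") # Kc = 0.90, TD = 0.45 min
# Reset rate in repeats/min is 1/TI, e.g. 1/3.75 = 0.267 for the PI case.
```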
UNIT 3
Exercise 3-1.
a. Put the controller on manual.
b. Apply a step change to the controller output.
c. Record the response of the process variable until it reaches a new
steady state.
d. Determine the gain, time constant, and dead time from the
response recorded in step c.
Exercise 3-2.
Gain: The sensitivity of the process output to its input, measured by the
steady-state change in output divided by the change in input.
Time Constant: The response time of the process; it determines how long it
takes to reach steady state after a disturbance.
Dead Time: The time it takes for the output to start changing after a
disturbance.
Exercise 3-3.
Gain: K = (2°F)/(100 lb/h) = 0.02°F/(lb/h)
K = [2°F × (100 – 0)%T.O./(250 – 200)°F] / [100 lb/h × (100 – 0)%C.O./(5000 – 0) lb/h]
  = 2.0 %T.O./%C.O.
Notice that, as the controller output sets the set point of the steam flow
controller, the percent of controller output corresponds to the percent of
steam flow transmitter output.
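The unit conversion can be checked numerically; the spans (200 to 250°F for the transmitter, 0 to 5000 lb/h for the steam flow) are taken from the exercise:

```python
# Engineering-unit gain: 2 deg F of outlet temperature per 100 lb/h of steam
dT_degF, dF_lbh = 2.0, 100.0

# Transmitter span: 200-250 deg F maps to 0-100 %T.O.
pct_TO = dT_degF * (100.0 - 0.0) / (250.0 - 200.0)   # 4.0 %T.O.
# Steam flow span: 0-5000 lb/h maps to 0-100 %C.O.
pct_CO = dF_lbh * (100.0 - 0.0) / (5000.0 - 0.0)     # 2.0 %C.O.

K = pct_TO / pct_CO                                  # 2.0 %T.O./%C.O.
```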
Exercise 3-4.
Gain: K = (84.0 – 90.0)°C / (2 kg/s) = –3.0 °C/(kg/s)
Slope method (from figure):
Time constant: 1.03 - 0.11 = 0.92 min
Dead time: 0.11 min
Exercise 3-5.
Two-point method:
Exercise 3-6.
Maximum time constant: τ = RC = (10 × 10⁶)(100 × 10⁻⁶) = 1,000 s
Exercise 3-7.
Time constant: τ = A/Kv = (50 ft²)/[(50 gpm/ft)/(7.48 gal/ft³)]
= 7.5 min
Exercise 3-8.
Product flow: Time constant:
F = 50 gpm V/F = 2000/50 = 40.0 min
F = 500 gpm V/F = 2000/500 = 4.0 min
F = 5000 gpm V/F = 2000/5000 = 0.4 min
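The three cases can be computed in one short sketch (V = 2000 gal from the exercise statement):

```python
V = 2000.0  # gal, blending tank volume (from Exercise 3-8)

def residence_time(F_gpm):
    """Blending-tank time constant, tau = V/F, in minutes."""
    return V / F_gpm

taus = {F: residence_time(F) for F in (50.0, 500.0, 5000.0)}
# tau = 40.0, 4.0, and 0.4 min: the time constant shrinks by a factor
# of ten for each tenfold increase in throughput
```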
Exercise 3-9.
Steady-state product concentration:
= 0.028 (lb/gal)/gpm
Exercise 3-10.
Product concentration:
Thus, the gain at one tenth throughput is ten times the gain at full
throughput.
Unit 4
Exercise 4-1.
If the process gain were to double, the controller gain must be reduced to
half its original value to keep the total loop gain constant.
Exercise 4-2.
The loop is less controllable (has a smaller ultimate gain) as the ratio of the
process dead time to its time constant increases. The process gain does not
affect the controllability of the loop, since the controller gain can be
adjusted to maintain a given loop gain.
Exercise 4-3.
The required relationships are:
Kcu = 2τ/(Kt0)    Tu = 4t0
Exercise 4-4.
Process A is less sensitive to changes in controller output than processes B
and C, which have equal sensitivity.
Process A has the fastest response of the three, and process C the slowest.
Exercise 4-5.
Quarter-decay tuning formulas for series PID controller, from the
formulas on Table 4-1:
Exercise 4-6.
To adjust for 8 s sample time we must add 8/2 = 4 s (0.067 min) to the
process dead time. Once more, from the formulas of Table 4-1:
Comparison with the results of Exercise 4-5 shows that the sample time
has a greater effect on the tuning parameters for process A because it is the
fastest of the three.
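Only the dead-time correction itself is sketched below, since the Table 4-1 formulas are not reproduced in this solution; the 0.20 min dead time is an assumed illustrative value:

```python
def corrected_dead_time(t0_min, sample_time_s):
    """Dead time corrected for sampling by adding half the sample
    time, the rule applied in Exercise 4-6."""
    return t0_min + (sample_time_s / 2.0) / 60.0

# An 8 s sample time adds 8/2 = 4 s (0.067 min) to the dead time.
# The 0.20 min dead time below is an assumed illustrative value.
t0_corrected = corrected_dead_time(0.20, 8.0)
# The correction is relatively larger for a fast process (small t0),
# which is why the sample time affects process A the most.
```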
Exercise 4-7.
The tuning parameters using the IMC rules for disturbance inputs, from
Eqs. 4-3 and 4-4, for a series PID controller with τc = 0:
Exercise 4-8.
The tuning parameters by the IMC for set point changes, from the Eqs. 4-3
and 4-6 for a series PID controller with τc = 0:
Exercise 4-9.
The IMC tuning rules for set point changes are the preferred method for the
slave controller in a cascade system because it produces fast response with
about 5% overshoot. The disturbance and quarter-decay ratio formulas are
too oscillatory on set point changes for a slave controller.
Exercise 4-10.
The typical symptom of integral windup is excessive overshoot of the
controlled variable; it is caused by saturation of the controller output
beyond the limits of the manipulated variable. Integral windup can be
prevented in simple feedback loops by limiting the controller output at
points that coincide with the limits of the manipulated variable.
Unit 5
Exercise 5-1.
Tight level control is indicated when the level has significant effect on the
process operation, as in a natural-circulation evaporator or reboiler.
Averaging level control is to be used when it is necessary to smooth out
sudden variations in flow, as in a surge tank receiving discharge from
batch operations to feed a continuous process. The tight level control is the
one that requires the level to be kept at or very near its set point.
Exercise 5-2.
For flow control loops a proportional-integral (PI) controller is
recommended with a gain near but less than 1.0%C.O./%T.O. The integral
time is usually small, of the order of 0.05 to 0.1 minutes.
Exercise 5-3.
For tight level control a proportional controller with a high gain, usually
greater than 10%C.O./%T.O. should be used. When the lag of the control
valve is significant, a proportional-derivative controller could be used.
When a proportional-integral controller is used, the integral time should
be long, of the order of one hour or longer.
Exercise 5-4.
For averaging level control a proportional controller with a gain of
1.0%C.O./%T.O. should be used, because this provides maximum
smoothing of variations in flow while still preventing the level from
overflowing or running dry.
Exercise 5-5.
When a PI controller is used for averaging level control, the integral time
should be long, of the order of one hour or longer. At some values of the
gain, an increase in gain would decrease oscillations in the flow and the
level.
Exercise 5-6.
Time constant, from Eq. 5-2:
= 96 s (1.6 min)
Exercise 5-7.
PID controllers are commonly used for temperature control so that the
derivative mode compensates for the lag of the temperature sensor, which
is usually significant.
Exercise 5-8.
The major difficulty with the control of composition is the dead time
introduced by sampling and by the analysis.
Unit 6
Exercise 6-1.
Computer controllers perform the control calculations at discrete intervals
of time, with the process variable being sampled and the controller output
updated only at the sampling instants, while analog controllers calculate
their outputs continuously with time.
Exercise 6-2.
The “derivative kick” is a pulse on the controller output that takes place at
the next sample after the set point is changed and lasts for one sample. It
can be prevented by having the derivative term act on the process variable
instead of on the error.
Exercise 6-3.
The “proportional kick” is a large step change in controller output right
after a set point change; it can be eliminated by having the proportional
term act on the process variable instead of on the error, so that the operator
can apply large changes in set point without danger of upsetting the
process. When the proportional kick is avoided, the process variable
approaches the set point slowly after it is changed, at a rate determined by
the integral time. The proportional kick should not be avoided when it
is necessary to have the process variable follow set point changes
quickly, as in the slave controller of a cascade system.
Exercise 6-4.
All three tuning parameters of the parallel version of the PID algorithm
are different from the parameters for the series version. The difference is
minor if the derivative time is much smaller than the integral time.
Exercise 6-5.
The nonlinear gain allows the proportional band to be wider than 100%
when the error is near zero, which is equivalent to having a larger tank in
Exercise 6-6.
Using the formulas of Table 6-2, with q = 0 (for maximum gain) and the
following parameters:
Exercise 6-7.
If the algorithm has dead time compensation, the gain can be higher
because it does not have to be adjusted for dead time. This only affects the
first two cases, because the dead time is less than one sample for cases (c)
and (d), and, therefore, no dead time compensation is necessary. From Eq.
6-7 and Table 6-2:
Exercise 6-8.
The basic idea of the Smith Predictor is to bypass the process dead time to
make the loop more controllable. This is accomplished with an internal
model of the process responding to the manipulated variable in parallel
with the process. The basic disadvantage is that a complete process model
is required, but it is not used to tune the controller, creating too many
adjustable parameters.
The Dahlin Algorithm produces the same dead time compensation as the
Smith Predictor, but it uses the model to tune the controller, reducing the
number of adjustable parameters to one, q.
Unit 7
Exercise 7-1.
Cascade control (1) takes care of disturbances into the slave loop, reducing
their effect on the controlled variable; (2) makes the master loop more
controllable by speeding up the inner part of the process; and (3) handles
the nonlinearities in the inner loop where they have less effect on
controllability.
Exercise 7-2.
For cascade control to improve the control performance, the inner loop
must be faster than the outer loop. The sensor of the slave loop must be
reliable and fast, although it does not have to be accurate.
Exercise 7-3.
The master controller in a cascade control system has the same requirements
as the controller in a simple feedback control loop; thus, the tuning and
mode selection of the master controller are no different from those for a
single controller.
Exercise 7-4.
The tuning of the slave controller is different because it has to respond to
set point changes, which it must follow quickly without too much
oscillation. The slave controller should not have integral mode when it can
be tuned with a proportional gain high enough to keep the offset
small. If the slave is to have derivative mode, it must act on the process
variable so that it is not in series with the derivative mode of the master
controller.
Exercise 7-5.
The controllers in a cascade system must be tuned from the inside out,
because each slave controller forms part of the process controlled by the
master around it.
Exercise 7-6.
Temperature as the slave variable (1) introduces a lag because of the
sensor lag, and (2) may cause integral windup because its range of
operation is narrower than the transmitter range. These difficulties can be
handled by (1) using derivative on the process variable to compensate for
the sensor lag, and (2) having the slave measurement fed to the master
controller as its reset feedback variable.
Exercise 7-7.
Pressure is a good slave variable because its measurement is fast and
reliable. The major difficulties are (1) that the operating range may be
narrower than the transmitter range, and (2) that part of the operating
range may be outside the transmitter range, e.g., vacuum when the
transmitter range includes only positive gage pressures.
Exercise 7-8.
In a computer cascade control system the slave controller must be
processed more frequently than the master controller.
Exercise 7-9.
Reset windup can occur in cascade control when the operating range of
the slave variable is wider than the transmitter range. To prevent it, the
slave measurement can be passed to the reset feedback of the master; in
such a scheme the master always takes action based on the current
measurement, not on its set point.
Unit 8
Exercise 8-1.
A feedback controller acts on the error. Thus, if there were no error, there
would be no control action. In theory, perfect control is possible with
feedforward control, but it requires perfect process modeling and
compensation.
Exercise 8-2.
To be used by itself, feedforward control requires that all the disturbances
be measured and that accurate models be available of how the disturbances
and the manipulated variable affect the controlled variable.
Feedforward with feedback trim has the advantages that only the major
disturbances have to be measured and compensation does not have to be
exact, because the integral action of the feedback controller takes care of
the minor disturbance and the model error.
Exercise 8-3.
Ratio control consists of maintaining constant the ratio of two process
flows by manipulating one of them. It is the simplest form of feedforward
control.
Exercise 8-4.
A lead-lag unit is a linear dynamic compensator consisting of a lead (a
proportional plus derivative term) and a lag (a low-pass filter), each
having an adjustable time constant. It is used in feedforward control to
advance or delay the compensation so as to dynamically match the effect
of the disturbance.
The response of a lead-lag unit to a ramp is a ramp that leads the input
ramp by the difference between the lead and the lag time constants, or lags
it by the difference between the lag and the lead time constants.
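A lead-lag unit can be sketched as a difference equation. The backward-difference discretization below is an assumed implementation, not the book's; with it, a ramp input is followed with a steady lead equal to the difference of the two time constants, as stated above:

```python
def lead_lag(x, dt, t_lead, t_lag, y0=0.0):
    """Lead-lag unit (t_lead*s + 1)/(t_lag*s + 1), discretized with
    an assumed backward-difference approximation."""
    y, out, x_prev = y0, [], x[0]
    for xk in x:
        y = (t_lag*y + t_lead*(xk - x_prev) + dt*xk) / (t_lag + dt)
        out.append(y)
        x_prev = xk
    return out

dt = 0.01                              # min
ramp = [k*dt for k in range(2001)]     # unit ramp, 20 min
y = lead_lag(ramp, dt, t_lead=3.0, t_lag=1.5)
# After the transient dies out, the output leads the ramp by
# t_lead - t_lag = 1.5 min
lead = y[-1] - ramp[-1]
```

Under these definitions, a lead of 1.5 minutes with an amplification (lead-to-lag ratio) of 2, as asked for in Exercise 8-5, would correspond to a 3 min lead time constant and a 1.5 min lag time constant.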
Exercise 8-5.
To lead by 1.5 minutes with amplification of 2:
Exercise 8-6.
Dead time compensation consists of storing the feedforward
compensation and playing it back some time later. The time delay is the
adjustable dead time parameter.
Dead time compensation can be used only when the feedforward action is
to be delayed and a computer or microprocessor device is available to
implement it. It should be used only when the delay time is long relative
to the process time constant.
Exercise 8-7.
Design of feedforward controller for process furnace:
where ∆Hm is the heating value of the main fuel in Btu/gal, ∆Hs
is that of the supplementary fuel gas in Btu/scf, η is the efficiency
of the furnace, and C is the specific heat of the process fluid in
Btu/lb-°F.
8. Instrumentation diagram:
Unit 9
Exercise 9-1.
Loop interaction takes place when the manipulated variable of each loop
affects the controlled variable of the other loop. The effect is that the gain
and the dynamic response of each loop changes when the auto/manual
state or tuning of the other loops change.
When loop interaction is present, we can (1) pair the loops in the way that
minimizes the effect of interaction and (2) design a control scheme that
decouples the loops.
Exercise 9-2.
Open-loop gain of a loop is the change in its controlled variable divided
by the change in its manipulated variable when all other loops are opened
(in manual).
Closed-loop gain is the gain of a loop when all other loops are closed (auto
state) and have integral mode.
Relative gain (interaction measure) for a loop is the ratio of its open-loop
gain to its closed loop-gain.
Exercise 9-3.
To minimize interaction for a loop, the relative gain for that loop must be
as close to unity as possible. Thus, the loops must be paired to keep the
relative gains close to unity, which, in a system with more than two control
objectives, may require ranking the objectives.
The relative gains are easy to determine because they involve only a
steady-state model of the process, which is usually available at design
time.
The main drawback of the relative gain is that it does not take into account
the dynamic response of the loops.
Exercise 9-4.
When all four relative gains are 0.5, the effect of interaction is the same for
both pairing options. The gain of each loop will double when the other
loop is switched to automatic. The interaction is positive; that is, the loops
help each other.
Exercise 9-5.
When the effect of interaction with other loops is in the same direction as
the direct effect for that loop, the interaction is positive; if the interaction
and direct effects are in opposite direction, the interaction is negative. For
positive interaction, the relative gain is positive and less than unity, while
for negative interaction the relative gain is either negative or greater than
unity.
Exercise 9-6.
Interaction for top composition to reflux and bottom composition to
steam:
Relative gains:
Reflux Steam
Yd 1.19 -0.19
Xb -0.19 1.19
The top composition must be paired to the reflux and the bottom
composition to the steam to minimize the effect of interaction.
Exercise 9-7.
Let H be the flow of the hot water in gpm, C the flow of the cold water in
gpm, F the total flow in gpm, and T the shower temperature in °F. The
mass and energy balances on the shower, neglecting variations in density
and specific heat, give the following formulas:
These are the same formulas as for the blender of Example 9-2. So, the
relative gains are:
Hot Cold
F H/F C/F
T C/F H/F
So, as the cold water flow is the higher, use it to control the flow, and use
the hot water flow to control the temperature. The relative gain for this
pairing is C/F = 2/3.
The gain of each loop increases by 50% when the other loop is closed.
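The relative-gain table above can be checked numerically. The flows below assume the 2:1 cold-to-hot ratio of Exercise 9-8 (H = 1 gpm, C = 2 gpm):

```python
H, C = 1.0, 2.0      # hot and cold water flows, gpm (assumed 2:1 ratio)
F = H + C            # total shower flow

# Relative gain array (same form as the Example 9-2 blender):
#            Hot      Cold
#   Flow     H/F      C/F
#   Temp     C/F      H/F
rga = {("F", "hot"): H / F, ("F", "cold"): C / F,
       ("T", "hot"): C / F, ("T", "cold"): H / F}

# Pair flow with the larger (cold) stream: relative gain = 2/3.
# Closing the other loop divides the open-loop gain by the relative
# gain, so each loop gain grows by a factor of 1/(2/3) = 1.5.
gain_factor = 1.0 / rga[("F", "cold")]
```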
Exercise 9-8.
As in the second part of Example 9-4, we can use a ratio controller to
maintain a constant temperature when the flow changes. We would then
ratio the hot water flow (smaller) to the cold water flow (larger) and
manipulate the cold water flow to control the total flow. The design ratio is
0.5 gpm of hot water per gpm of cold water.
Unit 10
Exercise 10-1.
When the process dynamic characteristics (gain, time constant, and dead
time) are expected to change significantly over the region of operation,
adaptive control is worthwhile to maintain the control loop performance.
Exercise 10-2.
The process parameter most likely to change and affect the control loop
performance is the process gain. An example of extreme variation in
process gain is the control of pH in the water neutralization process.
Exercise 10-3.
The equal percentage valve characteristic compensates for the decrease in
process gain with increasing throughput, typical of many blending, heat
transfer, and separation processes. For the equal percentage characteristic
to properly compensate for gain variations: (1) the pressure drop across
the valve must remain constant, (2) the controller output must actuate the
valve (it must not be cascaded to a flow controller), (3) the valve must not
operate in the lower 5% of its range, where the characteristic deviates from
equal percentage.
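A quick way to see the compensation is that the slope (gain) of an equal-percentage valve, f(x) = α^(x−1), is proportional to the flow through it, so it cancels a process gain that varies inversely with flow. The sketch below assumes a rangeability of α = 50, a typical but arbitrary value, and a hypothetical process whose gain varies as 1/flow.

```python
import math

ALPHA = 50.0  # valve rangeability (assumed value for illustration)

def flow_fraction(lift):
    """Equal-percentage inherent characteristic: f(x) = alpha**(x - 1)."""
    return ALPHA ** (lift - 1.0)

def valve_gain(lift):
    """Slope df/dx = ln(alpha) * f(x): proportional to the flow itself."""
    return math.log(ALPHA) * flow_fraction(lift)

# If the process gain varies as 1/flow, the combined loop gain is constant:
for lift in (0.2, 0.5, 0.9):
    process_gain = 1.0 / flow_fraction(lift)   # hypothetical 1/flow process
    print(round(process_gain * valve_gain(lift), 4))  # ln(50) every time
```

The product is ln(α) regardless of valve lift, which is why the compensation fails when the pressure drop varies or the valve works in the bottom few percent of its range where the real characteristic departs from f(x) = α^(x−1).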
Exercise 10-4.
When a feedback controller adjusts the ratio of a ratio controller, its output
is multiplied by the process flow, directly compensating for the gain
decrease with throughput.
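This can be seen by writing out the gain from the feedback controller output to the process variable. A minimal sketch, assuming a hypothetical process whose gain varies as K0/flow:

```python
K0 = 10.0  # hypothetical process gain constant

def process_gain(flow):
    return K0 / flow          # gain decreases as throughput increases

def loop_path_gain(flow):
    # The feedback controller output (the ratio) is multiplied by the
    # process flow before acting on the process, so the flow cancels
    # out of the gain seen by the feedback controller.
    return process_gain(flow) * flow

print(loop_path_gain(2.0), loop_path_gain(8.0))   # both 10.0
```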
Exercise 10-5.
A gap or dead band controller can be used as a “valve position controller”
that adjusts a large reagent valve in parallel with a small valve to keep
the small valve position near half open. This way the large valve makes
rough adjustments in flow but does not move while the small valve is
making fine adjustments near neutrality, where the process gain is highest.
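The logic of such a valve position controller can be sketched as follows; the gap limits and step size are illustrative assumptions, not values from the text.

```python
def valve_position_controller(small_pos, large_pos,
                              low=0.40, high=0.60, step=0.02):
    """Adjust the large valve only when the small valve drifts outside
    the gap (dead band) around half open; otherwise leave it alone."""
    if small_pos > high:
        return large_pos + step   # small valve too far open: open large valve
    if small_pos < low:
        return large_pos - step   # small valve nearly closed: close large valve
    return large_pos              # inside the gap: no action

print(valve_position_controller(0.50, 0.30))            # 0.30 (no move)
print(round(valve_position_controller(0.70, 0.30), 2))  # 0.32
```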
Exercise 10-6.
A pattern recognition controller matches an underdamped response curve
to the response of the error by detecting the peaks of the response. The
decay ratio is then controlled by adjusting the controller gain, and the
oscillation period is used to adjust the integral and derivative times.
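A highly simplified sketch of the gain-adjustment step of such a controller follows: given two successive error peaks, compare the observed decay ratio against a quarter-decay target and scale the controller gain. The target and the adjustment factors of 0.9 and 1.1 are illustrative assumptions, not the algorithm of any commercial controller.

```python
def adjust_gain(peak1, peak2, gain, target_decay=0.25):
    """Scale the controller gain from the decay ratio of two
    successive error peaks (simplified illustration)."""
    decay = abs(peak2) / abs(peak1)
    if decay > target_decay:
        return gain * 0.9    # response too oscillatory: lower the gain
    if decay < target_decay:
        return gain * 1.1    # response too sluggish: raise the gain
    return gain

print(adjust_gain(1.0, 0.5, 2.0))   # 1.8: decay of 0.5 exceeds 0.25, gain cut
```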
Exercise 10-7.
The second-order discrete model matches the sampled response of most
processes, because its form is the same for monotonic, oscillatory, inverse
response, integrating, and unstable responses.
The parameters of a discrete model, except for the dead time, can be
estimated using least squares regression techniques. The second-order
model requires only six parameters, including a bias term to account for
disturbances.
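The least squares fit of such a model can be sketched as below. The sketch assumes the dead time is zero (in practice it must be found separately), uses NumPy, and fits noise-free synthetic data generated from a hypothetical set of “true” parameters; it is an illustration of the regression step, not the book's procedure.

```python
import numpy as np

# Hypothetical second-order discrete model (dead time assumed zero):
# y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + b3*u[k-3] + bias
true = np.array([1.2, -0.35, 0.5, 0.3, 0.1, 0.2])   # six parameters

rng = np.random.default_rng(0)
u = rng.standard_normal(300)   # the input must actually move the process
y = np.zeros(300)
for k in range(3, 300):
    y[k] = (true[0]*y[k-1] + true[1]*y[k-2]
            + true[2]*u[k-1] + true[3]*u[k-2] + true[4]*u[k-3] + true[5])

# Stack one regression row per sample and solve by least squares
X = np.array([[y[k-1], y[k-2], u[k-1], u[k-2], u[k-3], 1.0]
              for k in range(3, 300)])
theta, *_ = np.linalg.lstsq(X, y[3:], rcond=None)
print(np.round(theta, 3))   # recovers the six model parameters
```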
Exercise 10-8.
For least squares regression to successfully estimate the dynamic process
model parameters, (1) the process variable must be changing due to
changes in the controller output, (2) the input/output data must be
differenced or at least entered as differences from their initial steady-state
values, and (3) the noise on the process variable must not be
autocorrelated.
Exercise 10-9.
Recursive estimation provides estimates of the parameters that improve
with each sample of the process input and output. It is a convenient way to
do on-line autotuning and the only practical way to do adaptive control. To use
recursive regression for autotuning, the process driving function and
initial covariance matrix are set, and an estimation run is made with the
forgetting factor set to unity. In adaptive control the estimator is kept
running with the forgetting factor set at a value less than unity.
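A minimal recursive least squares update with a forgetting factor can be written as follows. The example fits a hypothetical two-parameter linear model with a large initial covariance and a forgetting factor of unity, as in an autotuning run; it is a sketch of the idea, not the book's algorithm.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step.
    theta: current estimate, P: covariance matrix, phi: regressor vector,
    y: new measurement, lam: forgetting factor (set < 1 for adaptive use)."""
    k = (P @ phi) / (lam + phi @ P @ phi)   # estimator gain
    theta = theta + k * (y - phi @ theta)   # correct with the prediction error
    P = (P - np.outer(k, phi) @ P) / lam    # update the covariance
    return theta, P

# Autotuning-style run: large initial covariance, forgetting factor of one
theta, P = np.zeros(2), 1000.0 * np.eye(2)
rng = np.random.default_rng(0)
for _ in range(50):
    phi = rng.standard_normal(2)
    theta, P = rls_update(theta, P, phi, 2.0*phi[0] + 3.0*phi[1])
print(np.round(theta, 3))   # close to the true parameters [2, 3]
```

With lam = 1 the covariance matrix shrinks steadily and the estimates settle; with lam < 1 old data are discounted and the estimator keeps tracking, which is the adaptive-control mode described above.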
Exercise 10-10.
The diagonal terms of the variance-covariance matrix are the multipliers
of the variance of the noise to obtain the variance of the corresponding
estimated parameters. To keep a parameter from changing during
estimation, the corresponding initial diagonal value of the
variance-covariance matrix is set to zero.
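The effect of a zero diagonal entry can be demonstrated with a small recursive least squares run (a sketch with hypothetical numbers): the zero makes the estimator gain for that parameter identically zero, so it never moves from its initial value.

```python
import numpy as np

theta = np.array([0.0, 5.0])    # hold the second parameter fixed at 5.0
P = np.diag([1000.0, 0.0])      # zero diagonal entry freezes parameter 2
rng = np.random.default_rng(1)
for _ in range(30):
    phi = rng.standard_normal(2)
    y = 2.0*phi[0] + 5.0*phi[1]            # hypothetical true model
    k = (P @ phi) / (1.0 + phi @ P @ phi)  # gain for parameter 2 is zero
    theta = theta + k * (y - phi @ theta)
    P = P - np.outer(k, phi) @ P

print(theta[1])             # still exactly 5.0
print(round(theta[0], 3))   # converged to 2.0
```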