
EI6801 Computer Control of Processes

April 2017
Part –A
1. Write the need for sample and hold device.
An analog signal is converted to digital using an A/D conversion system. The A/D converter
converts a voltage amplitude at its input into a binary code representing the quantized
amplitude closest to the amplitude of the input. Input signal variation during the time of
conversion can lead to erroneous results. Therefore, high-performance A/D systems are
preceded by a S/H device which keeps the input to the A/D converter constant.
2. Define the State Transition Matrix of a discrete system.
x(k + 1) = F x(k)
x(k) = F^k x(0)
The state transition matrix is
Φ(k) = F^k = Z^(−1){ (zI − F)^(−1) z }
3. Compare Parametric and Nonparametric methods of system identification

Parametric method:
• Suitable for controller design, simulation, prediction etc.
• Model parameters are the result of an optimization. Measurement noise can be filtered out in the optimization procedure.
• Different types of representation can be computed easily.
• Needs a priori assumptions about the model structure.

Nonparametric method:
• Simple and fast, needs only a little a priori information.
• Generally, no optimization is required. So, measurement noise comes directly into the model and cannot be filtered out by optimization.
• Suitable for simple controller design methods (PID, loop shaping).
• Gives only some general ideas about the system (bandwidth, order).

4. Identify any two important advantages of Recursive Least Square Method.


The recursive least squares (RLS) identification technique has the advantages of
simple calculation and good convergence properties; it is therefore the preferred technique
in the design of self-tuning controllers.
• They are a central part of adaptive systems.
• They can be easily modified into real-time algorithms, aimed at tracking time-varying
parameters.
• They are used in fault detection algorithms.
5. Obtain the modified Z-Transform of 1/s
Ans: Z_m{1/s} = 1/(z − 1). For the unit step the modified Z-transform is independent of m;
the ordinary Z-transform is z/(z − 1).
6. When to go for feedforward control scheme?
When the control loop's response to disturbances must be improved, we go for
feedforward control, wherein the disturbance is measured and compensated
before it affects the process.
7. Write the properties of RGA.
 The RGA can be calculated from open-loop values.
 The RGA elements are scale independent.
 The rows and columns of the RGA sum to 1.0.
 In some cases, the RGA is very sensitive to small errors in the gains, Kij.
 We can evaluate the RGA of a system with integrating processes, such as levels.
8. Compare multiloop PID controller with multivariable PID controller.
For an n×n system, the multi-loop PID method needs n PID controllers, whereas a
multivariable PID controller needs n×n PID controllers.
9. Identify any two challenges in the control of a MIMO process.
Due to process interactions in a MIMO process
 Closed loop systems may become destabilized and
 Controller tuning becomes difficult without the aid of decouplers.
10. Write the objective function for multivariable GPC.
GPC algorithm consists of applying a control sequence that minimizes a multistage cost
function of the form given in equation
J(N1, N2, Nu) = Σ_{j=N1}^{N2} δ(j) [ŷ(t+j | t) − w(t+j)]² + Σ_{j=1}^{Nu} λ(j) [Δu(t+j−1)]²
Where,
ŷ(t+j | t) is an optimum j-step-ahead prediction of the system output on data up to time t,
N1 and N2 are the minimum and maximum costing horizons,
Nu is the control horizon,
δ(j) and λ(j) are weighting sequences, and w(t+j) is the future reference trajectory, which
can be considered constant.
The objective of predictive control is to compute the future control sequence
u(t), u(t+1), …, u(t+Nu) in such a way that the future plant output y(t+j) is driven close to
w(t+j). This is accomplished by minimizing J(N1, N2, …, Nu).

Part –B
11 Sketch the block diagram of a typical sampled data controlled system and explain the 10
a functions performed by each block.
i Ans:

ADC: The analog signal is converted into digital form by A/D conversion system. The
conversion system usually consists of an A/D converter proceeded by a sample-and-hold
device.
DAC: The digital signal coming from digital device is converted into analog signal.
Sample and Hold Device:
Sampler: It is a device which converts an analog signal into a train of amplitude modulated
pulses.
Hold device: A hold device simply maintains the value of the pulse for a prescribed time
duration.
Sampling: sampling is the conversion of a continuous-time signal into a discrete time signal
obtained by taking samples of the continuous time signal (or analog signal) at discrete time
instants.
Sampling frequency: The sampling frequency should be greater than two times the signal
frequency: Fs > 2·Fin.
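The sampling condition above can be checked numerically. A minimal sketch with hypothetical frequencies (Fin = 10 Hz sampled at Fs = 15 Hz, which violates Fs > 2·Fin) shows that the samples become indistinguishable from those of a lower-frequency alias:

```python
import numpy as np

# Sampling a 10 Hz sine at Fs = 15 Hz (< 2*Fin) aliases it to Fin - Fs = -5 Hz
Fin, Fs = 10.0, 15.0
n = np.arange(8)                                  # sample indices
sampled = np.sin(2 * np.pi * Fin * n / Fs)        # samples of the 10 Hz signal
alias = np.sin(2 * np.pi * (Fin - Fs) * n / Fs)   # samples of the -5 Hz alias
print(np.allclose(sampled, alias))  # True: the two signals are identical at these samples
```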
Types of sampling:
Periodic sampling: In this sampling, samples are obtained uniformly at intervals of T seconds.
Multiple-order sampling: A particular sampling pattern is separated periodically.
Multiple-rate sampling: In this type two simultaneous sampling operations with different time
periods are carried out on the signal to produce the sampled output.
Final Control Element: A final control element changes a process in response to a change
in controller output. Examples of final control elements are actuators, including valves,
dampers, fluid couplings, gates, and burner tilts, to name a few.
Sensor: A sensor is a device that detects and responds to some type of input from the physical
environment. The specific input could be light, heat, motion, moisture, pressure, or any one of a
great number of other environmental phenomena.
11 Test the controllability of the following system. 6
a
ii
[x1(k+1); x2(k+1)] = [1 2; 3 4] [x1(k); x2(k)]
y(k) = [2 1] [x1(k); x2(k)]
Ans:
The controllability test is
U = [g  Fg  F²g  …  F^(n−1) g]
The input matrix g is not given; hence controllability cannot be tested.
OR
11 Describe the principle and design procedure for state feedback control scheme with block 10
b diagram.
i Ans:
Consider the state-space model of a SISO system
x(k + 1) = Ax(k) + Bu(k) (1)
y(k) = Cx(k)
where x(k) ∈ Rn , u(k) and y(k) are scalar.
In state feedback design, the states are fed back to the input side to place the closed-loop
poles at desired locations.

Regulation Problem:
When we want the states to approach zero starting from any arbitrary initial state, the design
problem is known as regulation where the internal stability of the system, with desired
transients, is achieved.

Control input: u(k) = −Kx(k) (2)
Tracking Problem:
When the output has to track a reference signal, the design problem is known as tracking
problem. Control input: u(k) = −Kx(k) + Nr(k) where r(k) is the reference signal.

First we will discuss designing a state feedback control law using pole placement
technique for regulation problem. By substituting the control law (2) in the system state model
(1), the closed loop system becomes x(k + 1) = (A − BK)x(k). If K can be designed such that
eigenvalues of A − BK are within the unit circle, then the problem of regulation will be solved.
The control problem can thus be defined as: Design a state feedback gain matrix K such that
the control law given by equation (2) places poles of the closed loop system x(k+1) =
(A−BK)x(k) in desired locations.
Design Procedure (Ackermann's Method):
1. Determine the desired characteristic polynomial from the desired poles μ1, …, μn:
(λ − μ1)(λ − μ2) … (λ − μn) = λ^n + α1 λ^(n−1) + … + α_(n−1) λ + α_n
2. Determine the matrix φ(F) using the coefficients of the desired characteristic polynomial:
φ(F) = F^n + α1 F^(n−1) + … + α_(n−1) F + α_n I
3. Calculate the state feedback gain matrix K using Ackermann's formula:
K = [0 0 … 0 1] Qc^(−1) φ(F)
11 Test the stability of the following system. P( z )  z 4  1.2 z 3  0.07 z 2  0.3z  0.08  0 6
b
ii Ans:
Check the necessary conditions:
P(1) = 1 − 1.2 + 0.07 + 0.3 − 0.08 = 0.09 > 0
(−1)^4 P(−1) = 1 + 1.2 + 0.07 − 0.3 − 0.08 = 1.89 > 0
Both are satisfied.
Check the sufficient conditions (Jury table):
Row |    z^0  |    z^1  |    z^2  |    z^3  |    z^4
 1  | -0.0800 |  0.3000 |  0.0700 | -1.2000 |  1.0000
 2  |  1.0000 | -1.2000 |  0.0700 |  0.3000 | -0.0800
 3  | -0.9936 |  1.1760 | -0.0756 | -0.2040 |
 4  | -0.2040 | -0.0756 |  1.1760 | -0.9936 |
 5  |  0.9456 | -1.1839 |  0.3150 |         |
From the table,
|−0.08| < 1,  |−0.9936| > |−0.2040|  and  |0.9456| > |0.3150|
The sufficient conditions are satisfied, so the system is stable.
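The Jury-table result can be cross-checked by computing the roots of P(z) directly; all four roots must lie inside the unit circle:

```python
import numpy as np

# P(z) = z^4 - 1.2 z^3 + 0.07 z^2 + 0.3 z - 0.08
coeffs = [1, -1.2, 0.07, 0.3, -0.08]
print(round(np.polyval(coeffs, 1), 4))   # P(1) = 0.09 > 0
print(round(np.polyval(coeffs, -1), 4))  # P(-1) = 1.89 > 0
roots = np.roots(coeffs)
print(all(abs(r) < 1 for r in roots))    # True: all roots inside the unit circle
```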

12 Derive and explain the steps of the Least Squares Algorithm. 16
a
A parametric method can be characterized as a mapping from the recorded data to the
estimated parameter vector. The estimated parameters do not carry any direct physical insight into the
process. The various parametric methods of system identification are
1. Least squares (LS) estimate
2. Prediction error method (PEM)
3. Instrumental variable (IV) method
LEAST SQUARES ESTIMATION
The method of least squares estimates parameters by minimizing the squared
error between the observed data and their expected values. Linear regression is the simplest type
of parametric model. This model structure can be written as
Y(t) = φ^T(t) θ   (1)
Where,
Y(t) = measured quantity
φ(t) = n-vector of known quantities,
φ^T(t) = [−Y(t−1), −Y(t−2), …, −Y(t−na), u(t−1), …, u(t−nb)]
θ = n-vector of unknown parameters
The following two examples show how a model can be represented using linear regression
model form.
Example 1:
Consider the following first-order linear discrete model
y(t) + a y(t−1) = b u(t−1)   (2)
The model represented in eq. (2) can be written in linear regression model form as follows:
y(t) = [−y(t−1)  u(t−1)] [a; b] = φ^T(t) θ   (3)
Where,
φ(t) = [−y(t−1)  u(t−1)]^T and θ = [a  b]^T
The elements of φ(t) are often called regression variables or regressors, while y(t) is
called the regressed variable. θ is called the parameter vector. The variable t takes integer values.

Example 2:
Consider a truncated weighting function model
y(t) = h0 u(t) + h1 u(t−1) + … + h_(M−1) u(t−M+1)
The input signals u(t), u(t−1), …, u(t−M+1) are recorded during the experiment.
Hence the regression vector
φ(t) = [u(t)  u(t−1) … u(t−M+1)]^T is an M-vector of known quantities, and
θ = [h0  h1 … h_(M−1)]^T is an M-vector of unknown parameters to be estimated.

The problem to find an estimate ˆ of the parameter vector as shown in Fig.1 from
experimental measurements given an experimental measurement
Y(1), (1),Y(2), (2).....Y(N), (N) . Here „N‟ represents number of experimental data and
„n‟ represents number of unknown quantities in (t) or number of unknown parameters in .
T
Y(1) (1)
T
Y(2) (2)
.
.
T
Y(N) (N)
This can be written in matrix notation as
Y (4)

Y(1)
.
Where, Y . an ( N x1 ) vector (5)
Y(N)

T
(1)
.
. an ( N x n ) vector (6)
T
(N)

Define the equation error
ε(t) = y(t) − φ^T(t) θ̂   (7)
where y(t) is the observed value and φ^T(t) θ̂ the expected value,
and stack these in a vector defined as
ε = [ε(1) … ε(N)]^T
In the statistical literature the equation errors are often called residuals. The least squares estimate of
θ is defined as the vector θ̂ that minimizes the loss function
V(θ) = (1/2) Σ_{t=1}^{N} ε²(t) = (1/2) Σ_{t=1}^{N} [Y(t) − φ^T(t) θ]²   (8)
Note:
Other forms of the loss function are
V(θ) = (1/2) ε^T ε   (9)
V(θ) = (1/2) ||ε||²   (10)
where ||·|| denotes the Euclidean vector norm.
The estimate θ̂ is obtained from the experimental measurements
Y(1), φ(1), Y(2), φ(2), …, Y(N), φ(N) by minimizing the loss function V(θ) in (8).
The solution to this optimization problem is
θ̂ = (Φ^T Φ)^(−1) Φ^T Y   (11)
For this solution, the minimum value of V is
V_min = V(θ̂) = (1/2) [Y^T Y − Y^T Φ (Φ^T Φ)^(−1) Φ^T Y]   (12)
Note:
The matrix Φ^T Φ is positive definite.
The form (11) of the least squares estimate can be rewritten in the equivalent form
θ̂(t) = [Σ_{s=1}^{t} φ(s) φ^T(s)]^(−1) [Σ_{s=1}^{t} φ(s) Y(s)]   (13)
In many cases φ(t) is known as a function of t. Then (13) may be easier to implement than
(11), since the matrix Φ of large dimension is not needed in eq. (13). Also, the form (13) is
the starting point in deriving several recursive estimates.
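Eq. (11) can be sketched numerically using synthetic data from the first-order model of Example 1; the parameter values a = −0.8 and b = 0.5 are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the Example-1 model y(t) + a*y(t-1) = b*u(t-1) with assumed a, b
a_true, b_true = -0.8, 0.5
N = 200
u = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a_true * y[t - 1] + b_true * u[t - 1]

# Build Phi with rows phi^T(t) = [-y(t-1), u(t-1)] and solve eq. (11)
Phi = np.column_stack([-y[:-1], u[:-1]])
Y = y[1:]
theta_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)  # (Phi^T Phi)^-1 Phi^T Y
print(np.round(theta_hat, 3))  # recovers [a, b] = [-0.8, 0.5] on noise-free data
```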
OR

12 With an example for each, explain any one parametric and one non-parametric method of 16
b
system identification.
PARAMETRIC METHOD OF SYSTEM IDENTIFICATION
A parametric method can be characterized as a mapping from the recorded data to the
estimated parameter vector. The estimated parameters do not have any physical insight of the
process. The various parametric methods of system identification are
1. Least squares (LS) estimate
2. Prediction error method (PEM)
3. Instrumental variable (IV) method
RECURSIVE IDENTIFICATION METHOD
In recursive (also called on-line) identification methods, the parameter estimates are
computed recursively in time. This means that if there is an estimate θ̂(t−1) based on data
up to time t−1, then θ̂(t) is computed by some 'simple modification' of θ̂(t−1).
The counterparts of on-line methods are the so-called off-line or batch methods,
in which all the recorded data are used simultaneously to find the parameter estimates.
Recursive identification methods have the following general features:
• They are a central part of adaptive systems (used, for example, for control and signal
processing) where the action is based on the most recent model.
• Their primary-memory requirement is much smaller than that of off-line identification
methods, which require a large memory to store the entire data set.
• They can be easily modified into real-time algorithms, aimed at tracking time-varying
parameters.
• They can be the first step in a fault detection algorithm, which is used to find out whether
the system has changed significantly.

Fig.1. A general scheme for adaptive control

Most adaptive systems, for example the adaptive control system shown in Fig. 1, are based
(explicitly or implicitly) on recursive identification. A current estimated model of the
process is then available at all times. This time-varying model is used to determine the
parameters of the (also time-varying) regulator (also called the controller).
In this way the regulator depends on the previous behaviour of the process
through the information flow: process → model → regulator.
If an appropriate principle is used to design the regulator, then the regulator should
adapt to the changing characteristics of the process.
The various identification methods are
 Recursive least squares method
 Real time identification method
 Recursive instrumental variable method
 Recursive prediction error method.

RECURSIVE LEAST SQUARES ESTIMATION


The linear time-invariant system can be represented as
A(q^(−1)) y(t) = B(q^(−1)) u(t) + ε(t)   (1)
Where,
A(q^(−1)) = 1 + a1 q^(−1) + … + a_na q^(−na)
B(q^(−1)) = b1 q^(−1) + … + b_nb q^(−nb)
ε(t) = equation error

This model can be expressed in regression model form as
y(t) = φ^T(t) θ + ε(t)   (2)
Where,
φ(t) = [−y(t−1), …, −y(t−na), u(t−1), …, u(t−nb)]^T
θ = [a1 a2 … a_na  b1 b2 … b_nb]^T

Then the least squares parameter estimate is given by
θ̂(t) = [Σ_{s=1}^{t} φ(s) φ^T(s)]^(−1) Σ_{s=1}^{t} φ(s) y(s)   (3)

The argument t has been used to stress the dependence of θ̂ on time. Eq. (3) can
be computed in recursive fashion.

Introduce the notation
P(t) = [Σ_{s=1}^{t} φ(s) φ^T(s)]^(−1)   (4)
so that
P^(−1)(t) = Σ_{s=1}^{t} φ(s) φ^T(s) = Σ_{s=1}^{t−1} φ(s) φ^T(s) + φ(t) φ^T(t)
P^(−1)(t) = P^(−1)(t−1) + φ(t) φ^T(t)
P^(−1)(t−1) = P^(−1)(t) − φ(t) φ^T(t)   (5)
Then, using eq. (3) and eq. (4), the estimate can be written as
θ̂(t) = P(t) Σ_{s=1}^{t} φ(s) y(s)   (6)
Note:
If we replace t by t−1 in eq. (6), we get
θ̂(t−1) = P(t−1) Σ_{s=1}^{t−1} φ(s) y(s)   (7)
so that
Σ_{s=1}^{t−1} φ(s) y(s) = P^(−1)(t−1) θ̂(t−1)
Eq. (6) can be written as
θ̂(t) = P(t) [Σ_{s=1}^{t−1} φ(s) y(s) + φ(t) y(t)]   (8)
By substituting eq. (7) in eq. (8) we get
θ̂(t) = P(t) [P^(−1)(t−1) θ̂(t−1) + φ(t) y(t)]
By substituting eq. (5),
θ̂(t) = P(t) [(P^(−1)(t) − φ(t) φ^T(t)) θ̂(t−1) + φ(t) y(t)]
     = θ̂(t−1) − P(t) φ(t) φ^T(t) θ̂(t−1) + P(t) φ(t) y(t)
θ̂(t) = θ̂(t−1) + P(t) φ(t) [y(t) − φ^T(t) θ̂(t−1)]   (9)

Thus eq. (9) can be written as
θ̂(t) = θ̂(t−1) + K(t) ε(t)   (10a)
K(t) = P(t) φ(t)   (10b)
ε(t) = y(t) − φ^T(t) θ̂(t−1)   (10c)
Hence the term ε(t) should be interpreted as a prediction error. It is the difference between the
measured output y(t) and the one-step-ahead prediction
ŷ(t | t−1; θ̂(t−1)) = φ^T(t) θ̂(t−1) of y(t) made at time t−1 based on the model
corresponding to the estimate θ̂(t−1). If ε(t) is small, the estimate θ̂(t−1) is 'good' and
should not be modified very much. The vector K(t) in eq. (10b) should be interpreted as a
weighting or gain factor showing how much the value of ε(t) will modify the different
elements of the parameter vector.
To complete the algorithm, eq. (5) must be used to compute P(t), which is needed in eq. (10b).
However, the use of eq. (5) needs a matrix inversion at each time step. This would be a time-consuming
procedure. Using the matrix inversion lemma, however, eq. (5) can be rewritten in
updating equation form as

P(t) = P(t−1) − [P(t−1) φ(t) φ^T(t) P(t−1)] / [1 + φ^T(t) P(t−1) φ(t)]   (11)
Note that in eq. (11) there is now a scalar division (scalar inversion) instead of a matrix
inversion. From eq. (10b) and eq. (11),
K(t) = P(t−1) φ(t) / [1 + φ^T(t) P(t−1) φ(t)]   (12)
The recursive least squares algorithm (RLS) consists of
1. ε(t) = y(t) − φ^T(t) θ̂(t−1)
2. K(t) = P(t−1) φ(t) / [1 + φ^T(t) P(t−1) φ(t)]
3. θ̂(t) = θ̂(t−1) + K(t) ε(t)
4. P(t) = P(t−1) − [P(t−1) φ(t) φ^T(t) P(t−1)] / [1 + φ^T(t) P(t−1) φ(t)]
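The algorithm above can be sketched in a few lines; the plant parameters, data, and initial covariance below are assumed values for illustration:

```python
import numpy as np

def rls_step(theta, P, phi, y):
    """One RLS update: prediction error, gain, parameter and covariance updates."""
    eps = y - phi @ theta                        # prediction error, eq. (10c)
    denom = 1.0 + phi @ P @ phi
    K = (P @ phi) / denom                        # gain, eq. (12)
    theta = theta + K * eps                      # parameter update, eq. (10a)
    P = P - np.outer(P @ phi, phi @ P) / denom   # covariance update, eq. (11)
    return theta, P

# Identify y(t) = -a*y(t-1) + b*u(t-1) with assumed a = -0.8, b = 0.5
rng = np.random.default_rng(1)
a_true, b_true = -0.8, 0.5
theta = np.zeros(2)
P = 1000.0 * np.eye(2)   # large initial covariance = low initial confidence
y_prev = 0.0
for _ in range(100):
    u_prev = rng.standard_normal()
    y_now = -a_true * y_prev + b_true * u_prev
    phi = np.array([-y_prev, u_prev])
    theta, P = rls_step(theta, P, phi, y_now)
    y_prev = y_now
print(np.round(theta, 3))  # converges toward [a, b] = [-0.8, 0.5]
```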

NON-PARAMETRIC METHOD OF SYSTEM IDENTIFICATION


FREQUENCY ANALYSIS
For a frequency analysis, it is convenient to use the transfer function model
Y(s) = G(s) U(s)   (1)
Where,
Y(s) = Laplace transform of the output signal y(t)
U(s) = Laplace transform of the input signal u(t)
G(s) = transfer function of the system
Apply the following sinusoidal input u(t) to the system described in eq. (1),
as shown in Figure 2:
u(t) = a sin(ωt)
Where,
a = amplitude of the sinusoidal input u(t)
ω = frequency of the sinusoidal input u(t) in rad/sec

Figure 2. Sinusoidal input u(t) applied to G(s), giving sinusoidal output y(t)
If the system G(s) is asymptotically stable, then the output y(t) is also a sinusoidal
signal:
y(t) = b sin(ωt + φ)   (2)
Where,
b = amplitude of the output y(t)
φ = phase difference between the input u(t) and the output y(t) (shown in Figure 3)

Figure 3. Input and output waveforms of G(s)

From (2), we can write
b = a |G(iω)|   (3a)
φ = arg G(iω)   (3b)

This can be proved as follows. Assume the system is initially at rest. Then the system G(s) can
be represented using a weighting function h(t) as follows:
y(t) = ∫_0^t h(τ) u(t−τ) dτ   (4)
where h(t) is the function whose Laplace transform equals G(s):

G(s) = ∫_0^∞ h(τ) e^(−sτ) dτ   (5)
Since
sin(ωt) = (e^(iωt) − e^(−iωt)) / 2i   (6)
equations (1), (4), (5) and (6) give

y(t) = ∫_0^t h(τ) u(t−τ) dτ
     = ∫_0^t h(τ) a sin(ω(t−τ)) dτ
     = (a/2i) ∫_0^t h(τ) [e^(iω(t−τ)) − e^(−iω(t−τ))] dτ
     = (a/2i) [e^(iωt) ∫_0^t h(τ) e^(−iωτ) dτ − e^(−iωt) ∫_0^t h(τ) e^(iωτ) dτ]

For large t the two integrals converge:
∫_0^t h(τ) e^(−iωτ) dτ → G(iω)
∫_0^t h(τ) e^(iωτ) dτ → G(−iω)
so that
y(t) = (a/2i) [e^(iωt) G(iω) − e^(−iωt) G(−iω)]
Since we can represent
G(iω) = r e^(iφ)
Where,
r = magnitude of G(iω) = |G(iω)|
φ = argument of G(iω) = arg G(iω)
and |G(−iω)| = |G(iω)|, arg G(−iω) = −arg G(iω), we get
y(t) = (a/2i) |G(iω)| [e^(iωt) e^(i arg G(iω)) − e^(−iωt) e^(−i arg G(iω))]
     = a |G(iω)| sin(ωt + arg G(iω))
     = b sin(ωt + φ)   (7)
with
b = a |G(iω)|
φ = arg G(iω)
From the above, equations (2) and (3) are proved.
By measuring the amplitudes a and b as well as the phase difference φ, one can draw a
Bode plot (or Nyquist or equivalent plot) for different ω values. From the Bode plot, one can
easily estimate the transfer function model G(s) of the system.
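This measurement idea can be sketched numerically. Assuming a first-order plant G(s) = 1/(s + 1), i.e. h(t) = e^(−t), the steady-state output amplitude of the simulated convolution (4) recovers |G(iω)| = 1/√(1 + ω²):

```python
import numpy as np

# Simulate G(s) = 1/(s+1) (weighting function h(t) = e^-t) with u(t) = a*sin(w*t)
a, w = 2.0, 3.0
dt = 0.002
t = np.arange(0, 10, dt)
u = a * np.sin(w * t)
h = np.exp(-t)
# Discrete approximation of y(t) = int_0^t h(tau) u(t - tau) dtau
y = np.convolve(h, u)[:len(t)] * dt
# Measure the output amplitude b after the transient dies out; |G(iw)| = b/a
b = np.max(np.abs(y[len(t) // 2:]))
print(round(b / a, 3))                   # close to |G(i3)| = 1/sqrt(10) = 0.316
print(round(1 / np.sqrt(1 + w**2), 3))   # analytical magnitude for comparison
```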

13 Explain any one form of digital PID controller. 8


a
i

The figure shows a continuous data PID controller acting on an error signal e(t). The controller
simply multiplies the error signal e(t) with a constant Kp. The integral control multiplies time
integral of e(t) by Ki and derivative control generates a signal equal to Kd times the time
derivative of e(t). The function of the integral control is to provide action to reduce the area
under e(t) which leads to reduction of steady state error. The derivative control provides
anticipatory action to reduce the overshoots and oscillations in time response. In digital control,
P-control is still implemented by a proportional constant Kp. The integrator and differentiator
can be implemented by various schemes.
Numerical Integration:
Since integration is the most time-consuming of the basic mathematical operations to perform
on a digital computer, approximating it well is important. Continuous integration operations are
performed by numerical methods; this amounts to inserting sample-and-hold devices at strategic
locations in the control system.

X(s)/R(s) = 1/s represents the integrator, where r(t) is the input and x(t), the area under the
curve between t = 0 and t = T, is the output.

Rectangular integration: forward rectangular integration and backward rectangular
integration. These are equivalent to inserting a ZOH in front of an integration.
The Z-transfer function of backward rectangular integration is
X(z)/R(z) = T/(z − 1)
State equation: x[(k+1)T] = x(kT) + T r(kT)
The Z-transfer function of forward rectangular integration is
X(z)/R(z) = Tz/(z − 1)
State equation: x[(k+1)T] = x(kT) + T r((k+1)T)
The most common method of approximating the derivative of e(t) at t = kT is
de(t)/dt |_(t=kT) = [e(kT) − e((k−1)T)] / T
Taking the Z-transform of these relations and including the proportional constants KD and KI,
DD(z) = KD (z − 1)/(Tz)
DI(z) = KI T/(z − 1)  or  KI Tz/(z − 1)  or  KI (T/2)(z + 1)/(z − 1)

13 Design a dead beat controller for the process Gp(s) = e^(−2s)/(s + 1). 8
a
ii
D(z) = (1/G(z)) · [z^(−1)/(1 − z^(−1))]
G(z) = Z{Gh0(s)·Gp(s)} = Gh0Gp(z)
G(z) = Z{[(1 − e^(−sT))/s] · [e^(−2s)/(s + 1)]} = (1 − z^(−1)) z^(−2) Z{1/(s(s + 1))}
Take T = 1 sec, so that e^(−2s) → z^(−2):
G(z) = [(z − 1)/z] · (1/z²) · [z/(z − 1) − z/(z − e^(−T))]
     = [(z − 1)/z³] · [z/(z − 1) − z/(z − 0.367)]
     = (1/z²) · [0.633/(z − 0.367)]
D(z) = (1/G(z)) · [z^(−1)/(1 − z^(−1))] = [z²(z − 0.367)/0.633] · [1/(z − 1)]
D(z) = M(z)/E(z) = 1.579 z² (z − 0.367)/(z − 1)

OR
13 Sketch the block diagram for IMC. 6

b
i

The variables used in the above block diagram are explained below:
d(s) = disturbance, d~(s) = estimated disturbance, gp(s) = process, gp~(s) = process model,
q(s) = internal model controller, r(s) = set-point, r~(s) = modified set-point,
u(s) = manipulated input, y(s) = measured process output, y~(s) = model output.

13 10
b Describe the simplified Smith Predictor scheme with the steps.
ii

As shown in the figure, the process is conceptually split into a pure lag and a pure dead time. If
the fictitious variable (b) could be measured somehow, it could be connected to the controller
as shown in fig. 7.40(b). This would move the dead time outside the loop. The controlled
variable (c) would repeat whatever b did after a delay of θd. Since there is no delay in the
feedback signal (b), the response of the system would be greatly improved. The scheme, of
course, cannot be implemented, because b is an unmeasurable signal. Now a model of the
process is developed and the manipulated variable (m) is applied to the model as shown in the
figure. If the model were perfect and the disturbance L = 0, then the controlled variable c would
become equal to the model output cm and em = c − cm = 0. The arrangement reveals that although the
fictitious process variable b is unavailable, the value of bm can be derived, which will be equal
to b unless modelling errors or load upsets are present. It is used as the feedback signal. The
difference (c − cm) is the error which arises because of modelling errors or load upsets. To
compensate for these errors, a second feedback loop is implemented using em. This is called the
Smith predictor control strategy. Gc(s) is a conventional PI or PID controller which can be
tuned much more tightly because of the elimination of dead time from the loop. Thus the
system consists of a feedback PI algorithm (Gc) that controls a simulated process Gm(s), which
is easier to control than the real process.

14 Explain how to obtain the RGA matrix that helps to pair inputs and outputs.
a The relative gain (λij) between input j and output i is defined as follows:
λij = (∂yi/∂uj) |_(all uk constant, k ≠ j)  /  (∂yi/∂uj) |_(all yk constant, k ≠ i)
The relative-gain array provides a methodology by which we select pairs of input and output
variables in order to minimize the interaction among the resulting loops. It is a square matrix that
contains the individual relative gains as elements, that is Λ = {λij}. For a 2×2 system, the RGA is
Λ = [λ11  λ12; λ21  λ22]
This yields the following relationships: λ11 + λ12 = 1, λ11 + λ21 = 1, λ12 + λ22 = 1,
λ21 + λ22 = 1. Then, for a 2×2 system, only one relative gain must be
calculated for the entire array:
Λ = [λ11  1−λ11; 1−λ11  λ11]
Consider a relative gain array
Λ = [0.05  0.95; 0.95  0.05]
We pair y1 with u2 and y2 with u1 in this case, since pairings are made on the relative gains
closest to 1.
Consider the whiskey blending problem, which has steady-state process gain matrix and RGA:
[y1; y2] = K [u1; u2],  K = [0.025  −0.075; 1  1],  Λ = [0.25  0.75; 0.75  0.25]
indicating that the output-input pairings should be y1–u2 and y2–u1. In order to achieve this
pairing, we could use the following block diagram. The difference between r2 and y2 is used to
adjust u1 using a PID controller (gc1); hence, we refer to this pairing as y2–u1. The difference
between r1 and y1 is used to adjust u2 using a PID controller (gc2); hence we refer to this
pairing as y1–u2. This corresponds to the following diagram. This can also be done by
redefining variables.
Consider the following RGA for a system with three inputs and three outputs:
Λ = [ 1   1  −1;
      3  −4   2;
     −3   4   0 ]
1. We should not pair on a negative relative gain.
2. We should not pair on a relative gain of 0, because that means the particular input
does not have an effect on the particular output when all other loops are open.
3. In row 3 of the RGA, which corresponds to output 3, we would not pair y3 with u3
because of the 0 term. We cannot pair y3 with u1 because of the −3 term, which means
y3 is paired with u2.
4. From the first row, we cannot pair y1 with u3 because of the −1 term, and u2 is already
taken, so our only choice is to pair y1 with u1. This leaves y2 paired with u3.
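The RGA used in these examples can be computed in one line as the element-by-element product of K and the transpose of its inverse; a sketch using the whiskey-blending gain matrix from above:

```python
import numpy as np

def rga(K):
    """Relative gain array: element-wise product of K and its inverse-transpose."""
    return K * np.linalg.inv(K).T

# Whiskey-blending steady-state gain matrix from the example above
K = np.array([[0.025, -0.075],
              [1.0, 1.0]])
print(np.round(rga(K), 2))  # [[0.25 0.75], [0.75 0.25]] -> pair y1-u2, y2-u1
```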
OR
14 Explain how to design decouplers for a 2×2 process. 8
b
i
The purpose of decouplers is to cancel the interaction effects between the two loops and thus
render the two loops non-interacting. Let us consider the input-output relationships of a
two-input, two-output process, with two control loops: m1 with y1 and m2 with y2. To keep
y1 constant, m1 should change by the following amount:
m1 = −[H12(s)/H11(s)] m2   (1)
We introduce a dynamic element
D1(s) = −H12(s)/H11(s)   (2)
This uses m2 as input and provides as output the amount by which m1 should be varied to
keep y1 constant and cancel the effect of m2 on y1. In this way the decoupler cancels the effect
of loop 2 on loop 1.

Fig. (a) 2×2 process with one decoupler


To eliminate the interaction of loop 1 on loop 2, let us construct another decoupler whose
transfer function is given as
D2(s) = −H21(s)/H22(s)   (3)
From fig. (b), with the two feedback loops it is possible to get the input-output relationships
of the two closed loops:
y1(s) = Gc1[H11 − H12H21/H22] / (1 + Gc1[H11 − H12H21/H22]) · y1sp   (4)
y2(s) = Gc2[H22 − H12H21/H11] / (1 + Gc2[H22 − H12H21/H11]) · y2sp   (5)
Equations (4) and (5) show that the outputs of loop 1 and loop 2 each depend only on their
own set point and not on that of the other loop.

NOTE:
1. Two interacting control loops are perfectly decoupled only if H11, H12, H21 and H22 are
perfectly known. Practically this is not possible, so only partial decoupling is achieved.
2. For nonlinear processes, such as chemical processes, the loops may initially be perfectly
decoupled, but as the process parameters change, the interaction increases. The solution is to
use adaptive decouplers.
3. Perfect decoupling allows independent tuning of each controller.
4. Decouplers are feedforward control elements.
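For the static (steady-state gain) case, the cancellation expressed by eqs. (2)-(5) can be verified numerically; the gain values below are hypothetical:

```python
import numpy as np

# Hypothetical static 2x2 process gains
H = np.array([[2.0, 0.5],
              [0.8, 1.5]])
# Decouplers from eqs. (2) and (3), static case
D1 = -H[0, 1] / H[0, 0]   # -H12/H11
D2 = -H[1, 0] / H[1, 1]   # -H21/H22
# Decoupler cross-feed: m1 = c1 + D1*c2, m2 = c2 + D2*c1
D = np.array([[1.0, D1],
              [D2, 1.0]])
G_apparent = H @ D   # gains seen from controller outputs c to outputs y
print(np.round(G_apparent, 3))
# Off-diagonal terms vanish; diagonals equal H11 - H12*H21/H22 and H22 - H12*H21/H11
```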
14 Explain the biggest log modulus method. 8
b BIGGEST LOG MODULUS TUNING (BLT) METHOD
ii Decouplers are elements used to compensate for interaction among loops, and are generally
practical only when the order is less than 3. To reduce the interactions in an n×n system with a
decentralized controller, such that the control loop responses reach the set point, the BLT
method is used. Luyben proposed this method.
STEP 1:
Calculate the Ziegler-Nichols settings for each individual loop. The ultimate gain and ultimate
frequency of each diagonal transfer function Gjj(s) are calculated in the classical way. To do
this, a value of frequency ω is guessed. The phase angle is calculated, and the frequency is
varied to find the point where the Nyquist plot of Gjj(iω) crosses the negative real axis (i.e.
−180° phase angle). The frequency at which this occurs is ωu. The reciprocal of the real part
of Gjj(iωu) gives the ultimate gain.
STEP 2:
A detuning factor F is assumed, always > 1 (between 1.5 and 4). The gains of all feedback
controllers KCi are calculated by dividing the Ziegler-Nichols gain KZNi by F:
KCi = KZNi / F,  where KZNi = Kui / 2.2
Then the reset times of all feedback controllers are calculated by multiplying the Ziegler-Nichols
settings by the factor F.

Where the F factor can be considered a detuning factor for all the loops. The larger the value
of F, the more stable the system, but the more sluggish the set-point and load responses.
STEP 3:
Using the guessed value of F and the resulting controller settings, a multivariable Nyquist plot
of the scalar function
W(iω) = −1 + det[I + GM(iω) B(iω)]
is made for the multiloop system. The biggest log modulus Lcm is defined as
Lcm = 20 log |W / (1 + W)|
The peak in the plot of Lcm over the entire frequency range is Lcm,max.
STEP 4:
The F factor is varied until Lcm,max = 2N, where N is the order of the system. For N = 1, the
SISO case, the familiar +2 dB maximum closed-loop log modulus criterion is obtained; for
N = 2, +4 dB; for N = 3, +6 dB. If Lcm,max ≠ 2N, a new value of F is found and step 2 is
repeated.

15 With block diagram and timing diagram, Explain multivariable MPC. 16


a
MODEL PREDICTIVE CONTROL
Model predictive control (MPC) is an advanced method of process control that has been in
use in the process industries in chemical plants and oil refineries since the 1980s. In recent
years it has also been used in power system balancing models.[1] Model predictive controllers
rely on dynamic models of the process, most often linear empirical models obtained by system
identification. The main advantage of MPC is the fact that it allows the current timeslot to be
optimized, while keeping future timeslots in account. This is achieved by optimizing a finite
time-horizon, but only implementing the current timeslot and then optimizing again, repeatedly,
thus differing from LQR. Also MPC has the ability to anticipate future events and can take
control actions accordingly. PID controllers do not have this predictive ability. MPC is nearly
universally implemented as a digital control, although there is research into achieving faster
response times with specially designed analog circuitry.[2]
Generalized predictive control (GPC) and dynamic matrix control (DMC) are classical
examples of MPC.
The models used in MPC are generally intended to represent the behavior of
complex dynamical systems. The additional complexity of the MPC control algorithm is not
generally needed to provide adequate control of simple systems, which are often controlled
well by generic PID controllers. Common dynamic characteristics that are difficult for PID
controllers include large time delays and high-order dynamics.
MPC models predict the change in the dependent variables of the modeled system that will be
caused by changes in the independent variables. In a chemical process, independent variables
that can be adjusted by the controller are often either the set-points of regulatory PID
controllers (pressure, flow, temperature, etc.) or the final control element (valves, dampers,
etc.). Independent variables that cannot be adjusted by the controller are used as disturbances.
Dependent variables in these processes are other measurements that represent either control
objectives or process constraints.
MPC uses the current plant measurements, the current dynamic state of the process, the MPC
models, and the process variable targets and limits to calculate future changes in the dependent
variables. These changes are calculated to hold the dependent variables close to target while
honoring constraints on both independent and dependent variables. The MPC typically sends
out only the first change in each independent variable to be implemented, and repeats the
calculation when the next change is required.
While many real processes are not linear, they can often be considered to be approximately
linear over a small operating range. Linear MPC approaches are used in the majority of
applications with the feedback mechanism of the MPC compensating for prediction errors due
to structural mismatch between the model and the process. In model predictive controllers that
consist only of linear models, the superposition principle of linear algebra enables the effect of
changes in multiple independent variables to be added together to predict the response of the
dependent variables. This simplifies the control problem to a series of direct matrix algebra
calculations that are fast and robust.
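As a concrete sketch of this superposition (the step-response coefficients and input moves below are illustrative assumptions, not data from any particular plant), the combined effect of several independent-variable changes reduces to a single matrix-vector product:

```python
import numpy as np

# Illustrative unit step-response coefficients of one dependent variable
# to two independent variables (all values are assumed for this sketch).
s1 = np.array([0.0, 0.4, 0.7, 0.9, 1.0])   # response to a unit step in input 1
s2 = np.array([0.0, 0.2, 0.5, 0.8, 1.0])   # response to a unit step in input 2

du = np.array([2.0, -1.0])                 # step changes applied to inputs 1 and 2
S = np.column_stack([s1, s2])              # step responses stacked as a matrix

# Superposition: the total predicted response is one matrix-vector product...
y_total = S @ du

# ...and equals the sum of the responses to each input change taken alone.
assert np.allclose(y_total, s1 * du[0] + s2 * du[1])
```

Because the model is linear, the same matrix algebra scales to any number of inputs and prediction steps, which is what makes linear MPC fast and robust.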
When linear models are not sufficiently accurate to represent the real process nonlinearities,
several approaches can be used. In some cases, the process variables can be transformed before
and/or after the linear MPC model to reduce the nonlinearity. The process can be controlled
with nonlinear MPC that uses a nonlinear model directly in the control application. The
nonlinear model may be in the form of an empirical data fit (e.g. artificial neural networks) or a
high-fidelity dynamic model based on fundamental mass and energy balances. The nonlinear
model may be linearized to derive a Kalman filter or specify a model for linear MPC.
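For instance, a nonlinear state-update model can be linearized numerically around an operating point to obtain the A and B matrices needed by a linear MPC or a Kalman filter. The model f below is a made-up illustration (a discretized pendulum-like system), not one taken from the text:

```python
import numpy as np

def f(x, u):
    # Assumed nonlinear state update x+ = f(x, u), for illustration only.
    return np.array([x[0] + 0.1 * x[1],
                     x[1] + 0.1 * (-np.sin(x[0]) + u[0])])

def linearize(f, x0, u0, eps=1e-6):
    """Forward-difference Jacobians of f at (x0, u0): x+ ~ A x + B u."""
    n, m = len(x0), len(u0)
    fx0 = f(x0, u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - fx0) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - fx0) / eps
    return A, B

A, B = linearize(f, np.zeros(2), np.zeros(1))
# Near the origin sin(x) ~ x, so A ~ [[1, 0.1], [-0.1, 1]] and B ~ [[0], [0.1]].
```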
An algorithmic study by El-Gherwi, Budman, and El Kamel shows that utilizing a dual-mode
approach can provide significant reduction in online computations while maintaining
comparable performance to a non-altered implementation. The proposed algorithm solves N
convex optimization problems in parallel based on exchange of information among
controllers.[4]
Theory behind MPC
MPC is based on iterative, finite-horizon optimization of a plant model. At time t the current plant state is sampled and a cost-minimizing control strategy is computed (via a numerical minimization algorithm) for a relatively short time horizon in the future, [t, t + T]. Specifically, an online or on-the-fly calculation is used to explore state trajectories that emanate from the current state and to find a cost-minimizing control strategy until time t + T. Only the first step of the control strategy is implemented; then the plant state is sampled again and the calculations are repeated starting from the new current state, yielding a new control and a new predicted state path. The prediction horizon keeps being shifted forward, and for this reason MPC is also called receding horizon control. Although this approach is not optimal, in practice it has given very good results. Much academic research has been done to find fast methods of solution of Euler–Lagrange type equations, to understand the global stability properties of MPC's local optimization, and in general to improve the MPC method. To some extent, the theoreticians have been trying to catch up with the control engineers when it comes to MPC.
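The receding-horizon idea can be sketched in a few lines. The scalar plant, horizon, and tuning values below are illustrative assumptions, and the per-step optimization is a crude grid search over a constant input rather than a proper QP solver:

```python
import numpy as np

a, b = 0.9, 0.5          # assumed scalar plant model: x+ = a*x + b*u
P, w = 10, 0.1           # prediction horizon and input-move weight
x = 5.0                  # current plant state; the setpoint is 0

for k in range(30):
    # Optimize over the whole P-step horizon from the current state...
    best_u, best_cost = 0.0, np.inf
    for u0 in np.linspace(-2, 2, 81):   # crude search over the input
        xp, cost = x, 0.0
        for j in range(P):              # simulate the model P steps ahead
            xp = a * xp + b * u0
            cost += xp**2 + w * u0**2
        if cost < best_cost:
            best_cost, best_u = cost, u0
    # ...but implement only the first move, then re-sample and repeat.
    x = a * x + b * best_u

print(abs(x) < 0.1)  # True: the state has been driven near the setpoint
```

Each pass through the outer loop re-optimizes from the newly sampled state, which is exactly the horizon shifting that gives receding horizon control its name.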
Principles of MPC
Model Predictive Control (MPC) is a multivariable control algorithm that uses:
- an internal dynamic model of the process,
- a history of past control moves, and
- an optimization cost function J over the receding prediction horizon,
to calculate the optimum control moves.
An example of a quadratic cost function for optimization is given by:

    J = (r_{k+1} − ŷ_{k+1})² + (r_{k+2} − ŷ_{k+2})² + (r_{k+3} − ŷ_{k+3})² + w Δu_k² + w Δu_{k+1}²

where
    ŷ  = model-predicted output
    r  = setpoint
    Δu = change in the manipulated input
    w  = weight on the changes in the manipulated input

The subscripts indicate the sample time.
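Evaluating this cost for one candidate move plan is a single weighted sum; all numbers below are made-up sample values for illustration:

```python
# Illustrative evaluation of the cost function above (all values assumed).
r = {1: 1.0, 2: 1.0, 3: 1.0}    # setpoints r_{k+1}, r_{k+2}, r_{k+3}
y = {1: 0.6, 2: 0.8, 3: 0.9}    # predicted outputs over the horizon
du = [0.5, 0.2]                 # input moves Δu_k, Δu_{k+1}
w = 0.1                         # move-suppression weight

J = sum((r[j] - y[j]) ** 2 for j in (1, 2, 3)) + w * sum(d ** 2 for d in du)
print(round(J, 3))  # 0.239 (tracking error 0.21 plus weighted move penalty 0.029)
```

The optimizer's job is to pick the moves Δu that minimize this J, trading tracking error against input movement.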
OR
15.b Describe the multivariable Dynamic Matrix Control scheme with detailed algorithmic steps. (16)

DYNAMIC MATRIX CONTROL
Dynamic Matrix Control was developed by the Shell Oil Company in the 1960s and 1970s. It is based on a step response model, which has the form

    ŷ_k = s_1 Δu_{k−1} + s_2 Δu_{k−2} + ... + s_{N−1} Δu_{k−N+1} + s_N u_{k−N}        (1)

which can be written compactly as

    ŷ_k = Σ_{i=1}^{N−1} s_i Δu_{k−i} + s_N u_{k−N}                                    (2)
where ŷ_k is the model prediction at time step k, and u_{k−N} is the manipulated input N steps in the past. The difference between the measured output y_k and the model prediction ŷ_k is called the additive disturbance:

    d̂_k = y_k − ŷ_k                                                                   (3)

The corrected prediction is then equal to the actual measured output at step k:

    ŷᶜ_k = ŷ_k + d̂_k = y_k                                                            (4)

Similarly, the corrected predicted output at the first time step in the future can be found from

    ŷᶜ_{k+1} = ŷ_{k+1} + d̂_{k+1}
             = Σ_{i=1}^{N−1} s_i Δu_{k−i+1} + s_N u_{k−N+1} + d̂_{k+1}                 (5)
             = s_1 Δu_k + Σ_{i=2}^{N−1} s_i Δu_{k−i+1} + s_N u_{k−N+1} + d̂_{k+1}

So for the j-th step into the future we find

    ŷᶜ_{k+j} = ŷ_{k+j} + d̂_{k+j}
             = Σ_{i=1}^{j} s_i Δu_{k−i+j} + Σ_{i=j+1}^{N−1} s_i Δu_{k−i+j} + s_N u_{k−N+j} + d̂_{k+j}   (6)

where the first sum is the effect of current and future control moves, the second sum is the effect of past control moves, and d̂_{k+j} is the correction term; thus we can separate the effect of past and future control moves as shown in the above equation. The most common assumption is that the correction term is constant in the future (the constant additive disturbance assumption):

    d̂_{k+j} = d̂_{k+j−1} = ... = d̂_k = y_k − ŷ_k                                       (7)

Also, realize that there are no control moves beyond the control horizon of M steps, so

    Δu_{k+M} = Δu_{k+M+1} = ... = Δu_{k+P−1} = 0                                      (8)
In matrix-vector form, a prediction horizon of P steps and a control horizon of M steps give

    Ŷᶜ = S_f Δu_f + S_past Δu_past + s_N u_P + d̂                                      (9)

where

    Ŷᶜ = [ŷᶜ_{k+1}  ŷᶜ_{k+2}  ...  ŷᶜ_{k+P}]ᵀ    (P×1 corrected output predictions)

    S_f = [ s_1    0        0    ...  0         ]
          [ s_2    s_1      0    ...  0         ]
          [  :      :                  :        ]     (P×M dynamic matrix)
          [ s_j    s_{j−1}  ...  s_{j−M+1}      ]
          [  :      :                  :        ]
          [ s_P    s_{P−1}  ...  s_{P−M+1}      ]

    Δu_f = [Δu_k  Δu_{k+1}  ...  Δu_{k+M−1}]ᵀ    (M×1 current and future control moves)

    S_past = [ s_2      s_3      s_4  ...  s_{N−2}  s_{N−1} ]
             [ s_3      s_4      s_5  ...  s_{N−1}  0       ]
             [  :        :                  :        :      ]     (P×(N−2) matrix of past-move
             [ s_{j+1}  s_{j+2}  ...  s_{N−1}  0    0       ]      coefficients)
             [  :        :                  :        :      ]
             [ s_{P+1}  s_{P+2}  ...  0     ...      0      ]

    Δu_past = [Δu_{k−1}  Δu_{k−2}  ...  Δu_{k−N+2}]ᵀ    ((N−2)×1 past control moves)

    u_P = [u_{k−N+1}  u_{k−N+2}  ...  u_{k−N+P}]ᵀ       (P×1 past inputs)

    d̂ = [d̂_{k+1}  d̂_{k+2}  ...  d̂_{k+P}]ᵀ              (P×1 predicted disturbances)
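The dynamic matrix S_f can be assembled directly from the step-response coefficients; a minimal sketch (the coefficients, P, and M below are illustrative assumptions):

```python
import numpy as np

def dynamic_matrix(s, P, M):
    """Build the P x M dynamic matrix from step-response coefficients
    s_1..s_N (s[0] holds s_1): entry (i, j) is s_{i-j+1} for i >= j, else 0."""
    Sf = np.zeros((P, M))
    for i in range(P):        # row i corresponds to prediction step k+i+1
        for j in range(M):    # column j corresponds to move Δu_{k+j}
            if i >= j:
                Sf[i, j] = s[i - j]
    return Sf

s = [0.2, 0.5, 0.8, 0.95, 1.0]     # assumed step-response coefficients s_1..s_5
Sf = dynamic_matrix(s, P=4, M=2)
# First column is s_1..s_4; each later column is the same response
# shifted down one row, since that move starts one sample later.
```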
In equation (9), the corrected predicted output response is naturally composed of a "forced response" (the contribution of the current and future control moves) and a "free response" (the output changes that are predicted if there are no future control moves). The difference between the setpoint trajectory r and the future predictions is

    Eᶜ = r − Ŷᶜ = [r − S_past Δu_past − s_N u_P − d̂] − S_f Δu_f                        (10)

where the bracketed term is the unforced error E (the predicted error if no current and future control moves are made), so this can be written

    Eᶜ = E − S_f Δu_f                                                                  (11)

where the future predicted errors are composed of the "free response" (E) and the "forced response" (S_f Δu_f) contributions.
The least-squares objective function is

    Φ = Σ_{i=1}^{P} (eᶜ_{k+i})² + w Σ_{i=0}^{M−1} (Δu_{k+i})²                          (12)

Notice that the quadratic terms can be written in matrix-vector form as

    Σ_{i=1}^{P} (eᶜ_{k+i})² = (Eᶜ)ᵀ Eᶜ                                                 (13)

and

    w Σ_{i=0}^{M−1} (Δu_{k+i})² = Δu_fᵀ W Δu_f                                         (14)

where W = w·I is the M×M diagonal weighting matrix. Therefore the objective function can be written in the form

    Φ = (Eᶜ)ᵀ Eᶜ + Δu_fᵀ W Δu_f

subject to the modelling equality constraint (11),

    Eᶜ = E − S_f Δu_f                                                                  (15)

Substituting (15), the objective function becomes

    Φ = (E − S_f Δu_f)ᵀ (E − S_f Δu_f) + Δu_fᵀ W Δu_f                                  (16)

The solution that minimizes this objective function is

    Δu_f = (S_fᵀ S_f + W)⁻¹ S_fᵀ E = K E                                               (17)

so the current and future control move vector is proportional to the unforced error vector, with K = (S_fᵀ S_f + W)⁻¹ S_fᵀ. Because only the current control move is actually implemented, we use the first row K_1 of the matrix K:

    Δu_k = K_1 E                                                                       (18)
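Putting the pieces together, the unconstrained DMC law (17)-(18) is a few lines of linear algebra; the step-response coefficients, horizons, weight, and error vector below are assumed sample values:

```python
import numpy as np

# Assumed step-response coefficients, horizons, and weight (illustrative only).
s = [0.2, 0.5, 0.8, 0.95, 1.0]
P, M, w = 4, 2, 0.1

# Dynamic matrix S_f (P x M): a lower-triangular band of step coefficients.
Sf = np.zeros((P, M))
for i in range(P):
    for j in range(M):
        if i >= j:
            Sf[i, j] = s[i - j]

W = w * np.eye(M)                     # move-suppression weighting matrix
E = np.array([1.0, 1.0, 1.0, 1.0])    # assumed unforced (free-response) errors

# Equation (17): Δu_f = (S_f' S_f + W)^-1 S_f' E
K = np.linalg.inv(Sf.T @ Sf + W) @ Sf.T
du_f = K @ E

du_now = du_f[0]    # equation (18): only the first move is implemented
```

In an industrial implementation this calculation is repeated at every control interval with a refreshed unforced-error vector E, and input/output constraints turn the same least-squares problem into a quadratic program.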