
NPR COLLEGE OF ENGINEERING &TECHNOLOGY

BE EEE-III/ SEMESTER VI

EE1354 MODERN CONTROL SYSTEMS

Prepared By:

A.R.SALINIDEVI Lect/EEE

EE1354 MODERN CONTROL SYSTEMS
(Common to EEE, EIE and ICE)
L T P C
3 1 0 4
UNIT I STATE SPACE ANALYSIS OF CONTINUOUS TIME SYSTEMS 9
State variable representation - Conversion of state variable form to transfer function and vice
versa - Eigenvalues and Eigenvectors - Solution of state equation - Controllability and
observability - Pole placement design - Design of state observer
UNIT II z-TRANSFORM AND SAMPLED DATA SYSTEMS 9
Sampled data theory - Sampling process - Sampling theorem - Signal reconstruction -
Sample and hold circuits - z-Transform - Theorems on z-Transforms - Inverse z-Transforms -
Discrete systems and solution of difference equation using z-transform - Pulse transfer
function - Response of sampled data system to step and ramp inputs - Stability studies -
Jury's test and bilinear transformation
UNIT III STATE SPACE ANALYSIS OF DISCRETE TIME SYSTEMS 9
State variables - Canonical forms - Digitalization - Solution of state equations -
Controllability and Observability - Effect of sampling time on controllability - Pole
placement by state feedback - Linear observer design - First order and second order
problems
UNIT IV NONLINEAR SYSTEMS 9
Types of nonlinearity - Typical examples - Phase-plane analysis - Singular points - Limit
cycles - Construction of phase trajectories - Describing function method - Basic concepts -
Dead Zone - Saturation - Relay - Backlash - Liapunov stability analysis - Stability in the
sense of Liapunov - Definiteness of scalar functions - Quadratic forms - Second method of
Liapunov - Liapunov stability analysis of linear time invariant systems and non-linear systems
UNIT V MIMO SYSTEMS 9
Models of MIMO system - Matrix representation - Transfer function representation - Poles
and Zeros - Decoupling - Introduction to multivariable Nyquist plot and singular values
analysis - Model predictive control
L: 45 T: 15 Total: 60
TEXT BOOKS
1. Gopal, M., Digital Control and State Variable Methods, 3rd Edition, Tata McGraw Hill,
2008.
2. Gopal, M., Modern Control Engineering, New Age International, 2005.
REFERENCES
1. Richard C. Dorf and Robert H. Bishop, Modern Control Systems, 8th Edition, Pearson
Education, 2004.
2. Gopal, M., Control Systems: Principles and Design, 2nd Edition, Tata McGraw Hill,
2003.
3. Katsuhiko Ogata, Discrete-Time Control Systems, Pearson Education, 2002.

MODERN CONTROL SYSTEM

Unit I
STATE SPACE ANALYSIS OF CONTINUOUS TIME SYSTEMS

State variable representation - Conversion of state variable form to transfer function and
vice versa - Eigenvalues and Eigenvectors - Solution of state equation - Controllability
and observability - Pole placement design - Design of state observer

State Variable Representation


The state variables may be totally independent of each other, leading
to diagonal or normal form, or they could be derived as the derivatives of the output. If
there is no direct relationship between the various states, we can use a suitable
transformation to obtain the representation in diagonal form.
Phase Variable Representation
It is often convenient to consider the output of the system as one of the state
variables and the remaining state variables as derivatives of this state variable. The state
variables thus obtained from one of the system variables and its (n-1) derivatives are
known as n-dimensional phase variables.
In a third-order mechanical system, the output may be the displacement
x1, with x1' = x2 = v and x2' = x3 = a in the case of translational motion, or the angular
displacement θ = x1, with x1' = x2 = ω and x2' = x3 = α if the motion is rotational,

where v, ω, a and α respectively are the velocity, angular velocity, acceleration and angular
acceleration.
Consider a SISO system described by nth-order differential equation

Where

u is, in general, a function of time.


The nth order transfer function of this system is

With the states (each being a function of time) defined as

the equation becomes

Using the above equations, the state equations in phase variable form can be obtained as

Where

Physical Variable Representation


In this representation the state variables are real physical variables, which can
be measured and used for manipulation or for control purposes. The approach generally
adopted is to break the block diagram of the transfer function into subsystems in such a
way that the physical variables can be identified. The governing equations for the
subsystems can be used to identify the physical variables. To illustrate the approach,
consider the block diagram of Fig.

One may represent the transfer function of this system as

Taking H(s) = 1, the block diagram can be redrawn as in Fig. The physical variables can
be identified as x1 = y, the output; x2 = ω, the angular velocity; and x3 = ia, the armature
current in a position-control system.

Where

The state space representation can be obtained by

And

State space models from transfer functions

A simple example of a system with one input and one output is shown in Figure 1. This class
of system has the general form of model given in Eq.(1).

[Figure 1: system S with input u(t) and output y(t)]

d^n y/dt^n + a_{n-1} d^{n-1}y/dt^{n-1} + ... + a_0 y(t) = b_{m-1} d^{m-1}u/dt^{m-1} + ... + b_0 u(t)      (1)

Models of this form have the following superposition property:

u(t) = α1 u1(t) + α2 u2(t)   produces   y(t) = α1 y1(t) + α2 y2(t)      (2)

where (y1, u1) and (y2, u2) each satisfies Eq.(1).


A model of the form of Eq.(1) is known as a linear time invariant (abbr. LTI) system. Assume
the system is at rest prior to the time t0 = 0, and the input u(t) (0 <= t < infinity) produces the output
y(t) (0 <= t < infinity); then the model of Eq.(1) can be represented by a transfer function in terms of
Laplace transform variables, i.e.:

y(s) = [(b_m s^m + b_{m-1} s^{m-1} + ... + b_0) / (a_n s^n + a_{n-1} s^{n-1} + ... + a_0)] u(s)      (3)
Applying the same input shifted by any amount q of time produces the same output
shifted by the same amount q of time. This fact is represented by the following
transfer function:

y(s) = [(b_m s^m + b_{m-1} s^{m-1} + ... + b_0) / (a_n s^n + a_{n-1} s^{n-1} + ... + a_0)] e^{-qs} u(s)      (4)

For models of Eq.(1) having all b_i = 0 (i ≠ 0), a state space description arose out of a reduction
to a system of first order differential equations. This technique is quite general. First, Eq.(1)
is written as:

y^(n) = f(t, u(t), y, y', y'', ..., y^(n-1))      (5)

with initial conditions: y(0) = y0, y'(0) = y1(0), ..., y^(n-1)(0) = y_{n-1}(0)

Consider the vector x in R^n with x1 = y, x2 = y', x3 = y'', ..., xn = y^(n-1). Eq.(5) becomes:

d/dt X = [ x2,  x3,  ...,  xn,  f(t, u(t), y, y', ..., y^(n-1)) ]^T      (6)
In the case of a linear system, Eq.(6) becomes:

         [  0    1    0   ...   0   ]       [0]
         [  0    0    1   ...   0   ]       [0]
d/dt X = [ ...                      ] X  +  [.] u(t) ;   y(t) = [1  0  0  ...  0] X      (7)
         [  0    0    0   ...   1   ]       [0]
         [ -a0  -a1  ...  -a_{n-1}  ]       [1]

It can be shown that the general form of Eq.(1) can be written as

         [  0    1    0   ...   0   ]       [0]
         [  0    0    1   ...   0   ]       [0]
d/dt X = [ ...                      ] X  +  [.] u(t) ;   y(t) = [b0  b1  ...  bm  0  ...  0] X      (8)
         [  0    0    0   ...   1   ]       [0]
         [ -a0  -a1  ...  -a_{n-1}  ]       [1]

and will be represented in the abbreviated form:

X' = AX + Bu ;   y = CX + Du ;   D = 0      (9)
Eq.(9) is known as the controller canonical form of the system.
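
As an illustration of Eqs.(7)-(9), the short Python sketch below builds the controller canonical form for an assumed example transfer function using scipy.signal.tf2ss. The numerator and denominator values are illustrative assumptions, not taken from the notes.

# Minimal sketch: transfer function -> controller canonical state space
import numpy as np
from scipy import signal

num = [1.0, 3.0]            # s + 3            (example, assumed)
den = [1.0, 4.0, 5.0, 2.0]  # s^3 + 4s^2 + 5s + 2   (example, assumed)

A, B, C, D = signal.tf2ss(num, den)
print("A =\n", A)
print("B =\n", B)
print("C =", C, " D =", D)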

Transfer function from state space models


We have just shown that a transfer function model can be expressed as a state space
system in controller canonical form. In the reverse direction, it is also easy to see that each
linear state space system of Eq.(9) can be expressed as an LTI transfer function. The procedure
is to take the Laplace transform of both sides of Eq.(9) to give:

sX(s) = AX(s) + Bu(s) ;   y(s) = CX(s) + Du(s)      (10)

So that

y(s) = [ C (sI - A)^{-1} B + D ] u(s) = G(s) u(s) = [ n(s)/d(s) ] u(s)      (11)
An algorithm to compute the transfer function from the state space matrices is given by the
Leverrier-Fadeeva-Frame formula:

(sI - A)^{-1} = N(s)/d(s)
N(s) = s^{n-1} N0 + s^{n-2} N1 + ... + s N_{n-2} + N_{n-1}
d(s) = s^n + d1 s^{n-1} + ... + d_{n-1} s + d_n

where,

N0 = I                               d1 = -trace(A N0)                      (12)
N1 = A N0 + d1 I                     d2 = -(1/2) trace(A N1)
...
N_{n-1} = A N_{n-2} + d_{n-1} I      d_{n-1} = -(1/(n-1)) trace(A N_{n-2})
0 = A N_{n-1} + d_n I                d_n = -(1/n) trace(A N_{n-1})

Therefore, according to the algorithm mentioned, the numerator of the transfer function becomes:

n(s) = C N(s) B + d(s) D      (13)

or,  G(s) = [ C N(s) B + d(s) D ] / d(s)      (14)
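
The recursion of Eq.(12) is straightforward to implement. The following Python sketch (with an assumed 2 x 2 example matrix) returns the N_k matrices and the coefficients of d(s); it is a minimal illustration, not a library routine.

import numpy as np

def leverrier_faddeev(A):
    """Return (N_list, d) with (sI - A)^-1 = N(s)/d(s) as in Eq.(12).
    d is stored as [1, d1, ..., dn]."""
    n = A.shape[0]
    N = np.eye(n)
    N_list, d = [N], [1.0]
    for k in range(1, n + 1):
        dk = -np.trace(A @ N) / k
        d.append(dk)
        N = A @ N + dk * np.eye(n)   # N_k; for k = n this should be the zero matrix
        if k < n:
            N_list.append(N)
    return N_list, d

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # example matrix (assumption)
N_list, d = leverrier_faddeev(A)
print("d(s) coefficients:", d)              # [1, 3, 2] -> d(s) = s^2 + 3s + 2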

EigenValues

Consider an equation AX = Y, which indicates the transformation of the n x 1 vector
X into the n x 1 vector Y by the n x n matrix operator A.

If there exists a vector X such that A transforms it to the vector λX, then X is
called a solution of the equation AX = λX.

The set of homogeneous equations (1) has a nontrivial solution only under
the condition

|λI - A| = 0      (2)

The determinant |λI - A| is called the characteristic polynomial, while equation (2)
is called the characteristic equation.

After expanding, we get the characteristic equation as an nth-degree polynomial in λ, equation (3).

The n roots of equation (3), i.e. the values of λ satisfying the above equation,
are called the eigen values of the matrix A.

Equation (2) is similar to |sI - A| = 0, which is the characteristic equation of the
system. Hence the values of λ satisfying the characteristic equation are the closed loop
poles of the system. Thus the eigen values are the closed loop poles of the system.

Eigen Vectors
Any nonzero vector Xi such that A Xi = λi Xi is said to be an eigen vector associated
with the eigenvalue λi. Thus each λi satisfies the equation

(λi I - A) Mi = 0

The solution of this equation is called the eigen vector of A associated with the eigen
value λi and is denoted as Mi.

If the rank of the matrix [λi I - A] is r, then there are (n - r) independent eigen
vectors. Similarly, another important point is that if the eigenvalues of matrix A
are all distinct, then the rank of the matrix [λi I - A] is (n - 1), where n is the order of the
system.

Mathematically, the eigen vector can be calculated by taking the cofactors of the
matrix (λi I - A) along any row:

Mi = [ C_k1   C_k2   ...   C_kn ]^T

where C_ki is the cofactor of the matrix (λi I - A) of the kth row.


Key Point: If the cofactors along a particular row give a null solution, i.e. all elements of
the corresponding eigen vector are zero, then the cofactors along any other row must be
obtained. Otherwise the inverse of the modal matrix M cannot exist.
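
A quick numerical cross-check of these ideas (not part of the original notes) can be done with numpy.linalg.eig, which returns the eigenvalues (the closed loop poles when A is the system matrix) and one eigenvector per eigenvalue. The companion matrix below is an illustrative assumption whose characteristic equation is s^3 + 6s^2 + 11s + 6 = 0.

import numpy as np

A = np.array([[0.0,  1.0,  0.0],
              [0.0,  0.0,  1.0],
              [-6.0, -11.0, -6.0]])      # assumed example matrix

eigvals, eigvecs = np.linalg.eig(A)
print("eigenvalues:", eigvals)           # expected -1, -2, -3
for i, lam in enumerate(eigvals):
    v = eigvecs[:, i]
    print(lam, np.allclose(A @ v, lam * v))   # verifies A v = lambda v
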
Example 1
Obtain the Eigen values, Eigen vectors for the matrix

Solution
Eigen values are roots of

Eigen values are

To find Eigen vector,


Let

Where C = cofactor

For λ2 = -2

For λ3 = -3

Example 2
For a system with state model matrices

Obtain the transfer function of the system.

Solution
The T.F. is given by,

Solution of State Equations
Consider the state equation of a linear time invariant system,

X'(t) = A X(t) + B U(t)
The matrices A and B are constant matrices. This state equation can be of two types,
1. Homogeneous and
2. Nonhomogeneous
Homogeneous Equation
If A is a constant matrix and input control forces are zero then the equation
takes the form,

Such an equation is called a homogeneous equation. Obviously, the input is
zero in such systems; the driving force is provided by the initial conditions of the
system to produce the output. For example, consider a series RC circuit in which the
capacitor is initially charged to V volts. The current is the output. Now there is no
input control force, i.e. no external voltage applied to the system. But the initial voltage on
the capacitor drives the current through the system and the capacitor starts
discharging through the resistance R. Such a system, which works on the initial
conditions without any input applied to it, is called a homogeneous system.
Nonhomogeneous Equation
If A is a constant matrix and matrix U(t) is non-zero vector i.e. the input
control forces are applied to the system then the equation takes normal form as,

Such an equation is called a nonhomogeneous equation. Most of the practical
systems require inputs to drive them. Such systems are nonhomogeneous linear
systems.
The solution of the state equation is obtained by first considering the basic method of
finding the solution of the homogeneous equation.
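
For the homogeneous case, the solution x(t) = e^{At} x(0) can be evaluated numerically with the matrix exponential, as in the sketch below; the system matrix and initial state are assumed example values.

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # example system matrix (assumption)
x0 = np.array([1.0, 0.0])                  # example initial state (assumption)

for t in [0.0, 0.5, 1.0, 2.0]:
    x_t = expm(A * t) @ x0                 # state transition matrix times x(0)
    print(f"t = {t:4.1f}  x(t) = {x_t}")
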
Controllability and Observability

More specifically, for the system of Eq.(1), there exists a similarity transformation that will
diagonalize the system. In other words, there is a transformation matrix Q such that

X' = AX + Bu ;   y = CX + Du ;   X(0) = X0      (1)

X̄ = QX   or   X = Q^{-1} X̄      (2)

X̄' = Λ X̄ + B̄ u ;   y = C̄ X̄ + Du      (3)

where   Λ = diag(λ1, λ2, ..., λn)      (4)

Notice that the diagonalizing transformation does not alter the transfer function between
u(s) and y(s).

Looking at Eq.(3), if b̄k = 0, then x̄k(t) is uncontrollable by the input u(t), since x̄k(t) is
characterized by the mode e^{λk t} through the equation:

x̄k(t) = e^{λk t} x̄k(0)

The lack of controllability of the state x̄k(t) is reflected by a zero kth row of B̄, i.e. b̄k, which
would cause a completely zero row in the following matrix (known as the controllability
matrix), i.e.:

C(Λ,b̄) = [ B̄   ΛB̄   Λ^2 B̄   Λ^3 B̄   ...   Λ^{n-1} B̄ ]

          [ b̄1   λ1 b̄1   λ1^2 b̄1   ...   λ1^{n-1} b̄1 ]
          [ b̄2   λ2 b̄2   λ2^2 b̄2   ...   λ2^{n-1} b̄2 ]
        = [ ...                                       ]      (5)
          [ b̄k   λk b̄k   λk^2 b̄k   ...   λk^{n-1} b̄k ]
          [ b̄n   λn b̄n   λn^2 b̄n   ...   λn^{n-1} b̄n ]

A C(Λ,b̄) matrix with all rows non-zero has a rank of n.



In fact , B Q 1B or B QB . Thus, a non-singular C(A,b) matrix implies a non-singular
matrix of C(A,b)of the following:

13
C(A,b) B AB A2 B An-1 B (6)

It is important to note that this result holds in the case of non-distinct eigenvalues as well.
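
A minimal numerical sketch of the rank test of Eq.(6), with assumed example matrices:

import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example
B = np.array([[0.0], [1.0]])               # assumed example

n = A.shape[0]
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
print("C(A,b) =\n", ctrb)
print("rank =", np.linalg.matrix_rank(ctrb), "of", n)   # rank n -> controllable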

[Remark 1]

If matrix A has distinct eigenvalues and is represented in controller canonical form, it
is easy to show that the following identity holds:

A [1  λi  λi^2  ...  λi^{n-1}]^T = λi [1  λi  λi^2  ...  λi^{n-1}]^T    for each i.

Therefore the so-called Vandermonde matrix W^T, whose n columns are these eigenvectors of A,

        [ 1          1          ...   1          ]
        [ λ1         λ2         ...   λn         ]
W^T =   [ λ1^2       λ2^2       ...   λn^2       ]      (6)
        [ ...                                    ]
        [ λ1^{n-1}   λ2^{n-1}   ...   λn^{n-1}   ]

diagonalizes A, i.e.

A W^T = W^T Λ ,   or   Λ = (W^T)^{-1} A W^T ,   A = W^T Λ (W^T)^{-1}

[Remark 2]

There is an alternative way to explain why C(A,b) should have rank n for state controllability.
Let us start from the solution of the state space system:

X(t) = e^{At} X(0) + ∫_{t0}^{t} e^{A(t-τ)} B u(τ) dτ      (7)

State controllability requires that for each X(tf) near X(t0) there is a finite control
u(t), t in [t0, tf], such that

X(tf) = e^{A tf} [ X0 + ∫_{t0}^{tf} e^{-Aτ} B u(τ) dτ ]

or

∫_{t0}^{tf} e^{-Aτ} B u(τ) dτ = e^{-A tf} X(tf) - X0

By the Cayley-Hamilton theorem, e^{-Aτ} can be written as a finite sum
e^{-Aτ} = Σ_{i=0}^{n-1} αi(τ) A^i, so that

∫_{t0}^{tf} e^{-Aτ} B u(τ) dτ = Σ_{i=0}^{n-1} A^i B ∫_{t0}^{tf} αi(τ) u(τ) dτ
                             = Σ_{i=0}^{n-1} A^i B wi
                             = [ B   AB   A^2 B   ...   A^{n-1} B ] [ w1  w2  ...  wn ]^T

Thus, in order that this equation can be solved for the wi (and hence for u), we need the
C(A,b) matrix to have exact rank n.

There are several alternative ways of establishing the state space controllability:

The n rows of e^{At} B are linearly independent over the real field for all t.

The controllability grammian

Gram_c(t0, tf) = ∫_{t0}^{tf} e^{Aτ} B B^T e^{A^T τ} dτ    is non-singular for all tf > t0.

[Theorem 1] Replace B with b (i.e. Dim{B} = n x 1). A pair [A,b] is non-controllable if
and only if there exists a row vector q ≠ 0 such that

q A = λ q ,    q b = 0      (8)

To prove the if part:

If there is such row vector, we have:

q A = λ q  and  q b = 0
⇒ q A b = λ q b = 0
⇒ q A^2 b = λ q A b = 0
   ...
⇒ q A^{n-1} b = 0

Hence  q [ b, Ab, A^2 b, ..., A^{n-1} b ] = 0  with  q ≠ 0.

Since q ≠ 0, we conclude that [ b, Ab, A^2 b, ..., A^{n-1} b ] is singular, and thus the
system is not controllable.

To prove the only if part:

If the pair is noncontrollable, then matrix A can be transformed into the non-
controllable form

      [ Ac   A12 ]          [ bc ]  } r
Ā  =  [           ] ,  b̄ =  [    ]                 (9)
      [  0   Ac' ]          [ 0  ]  } n - r

where r = rank C(A,b). (This decomposition is a well-known result in linear
system theory.)

Thus, one can find a row vector of the form q = [0  z], where z is selected as a left
eigenvector of Ac' (i.e. z Ac' = λ z), for then:

q Ā = [0  z] Ā = λ [0  z] = λ q      (10)

Therefore, we have shown that if [A, b] is non-controllable, there is a
non-zero row vector satisfying Eq.(8).

In fact, using the modal expansion of the matrix exponential,

e^{At} = V e^{Λt} V^{-1} = V e^{Λt} W^T = Σ_{i=1}^{n} vi wi^T e^{λi t}

and X(t) = e^{At} X0, we have:

X(t) = e^{At} X0 + ∫_0^t e^{A(t-τ)} b u(τ) dτ
     = Σ_{i=1}^{n} vi wi^T e^{λi t} X0 + Σ_{i=1}^{n} vi (wi^T b) ∫_0^t e^{λi (t-τ)} u(τ) dτ

Thus, if b is orthogonal to wi, then the state associated with λi will not be controllable,
and hence the system is not completely controllable. Another form of test for the
controllability of the [A,b] pair, known as the Popov-Belevitch-Hautus (abbrv. PBH) test, is
to check whether rank [sI - A   b] = n for all s (not only at the eigenvalues of A). This test is based on
the fact that if [sI - A   b] has rank n, there cannot be a nonzero row vector q satisfying
Eq.(8). Thus, by Theorem 1, the pair [A, b] must be controllable.


Referring to the diagonalized system above, the state x̄i(t) corresponding to
the mode e^{λi t} is unobservable at the output y, if C̄1i = 0 for any i = 1, 2, ..., n. The lack of
observability of the state x̄i(t) is reflected by a completely zero (ith) column of the so-called
observability matrix of the system O(Λ, C̄1), i.e.:


             [ C̄11            C̄12            ...   C̄1n            ]
O(Λ, C̄1) =   [ λ1 C̄11         λ2 C̄12         ...   λn C̄1n         ]      (11)
             [ ...                                                 ]
             [ λ1^{n-1} C̄11   λ2^{n-1} C̄12   ...   λn^{n-1} C̄1n   ]


An observable state x̄i(t) corresponds to a nonzero column of O(Λ, C̄). In the case of distinct
eigenvalues, each nonzero column increases the rank by one. Therefore, the rank of
O(Λ, C̄), which equals the total number of modes that are observable at the output y(t), is
termed the observability rank of the system. As in the case of controllability, it is not
necessary to transform a given state-space system to modal canonical form in order to
determine its rank. In general, the observability matrix of the system is defined as:

           [ C        ]
O(A, C) =  [ CA       ]  = O(Λ, C̄) Q ,   with Q = V^{-1} nonsingular.
           [ ...      ]
           [ CA^{n-1} ]

Therefore, the rank of O(A, C) equals the rank of O(Λ, C̄). It is
important to note that this result holds in the case of non-distinct eigenvalues as well. Thus, a state-
space system is said to be completely (state) observable if its observability matrix has full
rank n. Otherwise the system is said to be unobservable.

In particular, it is well known that a state-space system is observable if and only if the
following conditions are satisfied:

The n columns of C e^{At} are linearly independent over R for all t.

The observability grammian

Gram_o(t0, tf) = ∫_{t0}^{tf} e^{A^T τ} C^T C e^{Aτ} dτ    is nonsingular for all tf > t0.

The (n+p) x n matrix  [ λI - A ]
                      [   C    ]   has rank n at all eigenvalues λi of A.
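
A corresponding numerical sketch for the observability test, forming O(A,C) = [C; CA; ...; CA^{n-1}] and checking its rank (matrices are assumed examples):

import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example
C = np.array([[1.0, 0.0]])                 # assumed example

n = A.shape[0]
obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
print("O(A,C) =\n", obsv)
print("rank =", np.linalg.matrix_rank(obsv), "of", n)   # rank n -> observable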

Pole Placement Design


The conventional method of design of a single input single output control system
consists of the design of a suitable controller or compensator in such a way that the dominant
closed loop poles will have a desired damping ratio ζ and undamped natural frequency ωn.
The order of the system in this case is increased by 1 or 2 if there is no pole zero
cancellation taking place. It is assumed in this method that the effects of the non-dominant
closed loop poles on the response are negligible. Instead of specifying only the dominant closed
loop poles as in the conventional method of design, the pole placement technique specifies all the
closed loop poles, which requires measurement of all state variables or the inclusion of a state
observer in the system. The closed loop poles can be placed at arbitrarily chosen
locations with the condition that the system is completely state controllable. This condition
can be proved and the proof is given below. Consider a control system described by the following
state equation

Here x is a state vector, u is a control signal which is scalar, A is the n x n state matrix, and B is the n x 1
constant matrix.

Fig. Open loop control system
The system defined by the above equation represents an open loop system. The
state x is not fed back to the control signal u. Let us select the control signal to
be u = -Kx. This indicates that the control signal is obtained from the instantaneous state.
This is called state feedback. K is a matrix of order 1 x n called the state feedback gain matrix.
Let us consider the control signal to be unconstrained. Substituting the value of u in equation 1,

The system defined by the above equation is shown in Fig. 5.2. It is a closed loop
control system, as the system state x is fed back through the gain K to the control signal u. Thus this is a
system with state feedback.

The solution of equation 2 is

x(t) = e^{(A - BK) t} x(0) ,   where x(0) is the initial state      (3)

The stability and the transient response characteristics are determined by the eigen
values of the matrix A - BK. Depending on the selection of the state feedback gain matrix K, the
matrix A - BK can be made asymptotically stable and it is possible to make x(t) approach
zero as time t approaches infinity, provided x(0) ≠ 0. The eigen values of the matrix A - BK
are called the regulator poles. When these regulator poles are placed in the left half of the s plane, x(t)
approaches zero as time t approaches infinity. The problem of placing the closed loop poles
at the desired locations is called the pole placement problem.
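
The pole placement problem stated above can be solved numerically, for example with scipy.signal.place_poles, which returns a gain K such that the eigenvalues of A - BK are the chosen regulator poles. The system matrices and desired poles below are illustrative assumptions.

import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [0.0, -2.0]])   # assumed example plant
B = np.array([[0.0], [1.0]])
desired_poles = [-4.0, -5.0]              # assumed regulator poles

res = place_poles(A, B, desired_poles)
K = res.gain_matrix
print("K =", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
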
Design of State Observer
In the case of a state observer, the state variables are estimated based on the
measurements of the output and control variables. The concept of observability
plays an important part here.
Consider a system defined by following state equations

Let us consider x̂ as the observed (estimated) state vector. The observer is basically a
subsystem which reconstructs the state vector of the system. The mathematical
model of the observer is the same as that of the plant, except for the inclusion of an additional
term consisting of the estimation error, to compensate for inaccuracies in matrices A and
B and for the initial error.
The estimation error or the observation error is the difference between the
measured output and the estimated output. The initial error is the difference
between the initial state and the initial estimated state. Thus the mathematical
model of the observer can be defined as,

Here x̂ is the estimated state and C x̂ is the estimated output. The observer has as
inputs the output y and the control input u. Matrix Ke is called the observer gain
matrix. It is nothing but a weighting matrix for the correction term, which contains
the difference between the measured output y and the estimated output C x̂.
This additional term continuously corrects the model output, and the performance
of the observer is improved.
Full order state observer
The system equations are already defined as

The mathematical model of the state observer is taken as

To determine the observer error equation, subtracting the equation of x̂ from that of x, we get

The block diagram of the system and full order state observer is shown in the Fig.

The dynamic behavior of the error vector is determined by the eigen values of the matrix
A - KeC. If the matrix A - KeC is a stable matrix, then the error vector will converge to
zero for any initial error vector e(0). Hence x̂(t) will converge to x(t) irrespective of the
values of x(0) and x̂(0).
If the eigen values of the matrix A - KeC are selected in such a manner that the dynamic
behavior of the error vector is asymptotically stable and is sufficiently fast, then
any error vector will tend to zero with sufficient speed.
If the system is completely observable, then it can be shown that it is possible to select the
matrix Ke such that A - KeC has arbitrarily desired eigen values, i.e. the observer gain
matrix Ke can be obtained to get the desired matrix A - KeC.
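
By duality, the observer gain Ke that assigns the eigenvalues of A - KeC can be obtained by pole placement on the pair (A^T, C^T), with Ke equal to the transpose of the resulting gain. The sketch below uses assumed example matrices and observer poles chosen faster than the plant poles.

import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example plant (poles -1, -2)
C = np.array([[1.0, 0.0]])
observer_poles = [-8.0, -9.0]              # assumed observer poles (faster)

res = place_poles(A.T, C.T, observer_poles)
Ke = res.gain_matrix.T                     # n x 1 observer gain
print("Ke =\n", Ke)
print("eig(A - Ke C):", np.linalg.eigvals(A - Ke @ C))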

UNIT I
STATE SPACE ANALYSIS OF CONTINUOUS TIME SYSTEMS
PART A
1. What are the advantages of state space analysis?
2. What are the drawbacks in transfer function model analysis?
3. What is state and state variable?
4. What is a state vector?
5. Write the state model of an nth order system?
6. What is state space
7. What are phase variables?
8. Write the solution of homogeneous state equation?
9. Write the solution of nonhomogeneous state equation?
10. What is the resolvent matrix?
PART B
1. Explain Kalman's test for determining state controllability?
2. Explain Gilbert's test for determining state controllability?
3. Find the output of the system having state model,

and
10
The input U(t) is unit step and X(0)
0
4. Show the following system is completely state controllable and observable.

And
5. Obtain the homogeneous solution of the equation X'(t) = A X(t)

6. Derive the transfer function of an observer-based controller?

UNIT II
Z-TRANSFORM AND SAMPLED DATA SYSTEMS

Sampled data theory - Sampling process - Sampling theorem - Signal reconstruction -
Sample and hold circuits - z-Transform - Theorems on z-Transforms - Inverse z-Transforms -
Discrete systems and solution of difference equation using z-transform -
Pulse transfer function - Response of sampled data system to step and ramp inputs -
Stability studies - Jury's test and bilinear transformation

Sampled Data System


When the signal or information at any or some points in a system is in the form of
discrete pulses, the system is called a discrete data system. In control engineering the
discrete data system is popularly known as a sampled data system.

Sampling process
Sampling is the conversion of a continuous time signal into a discrete time signal
obtained by taking samples of the continuous time signal at discrete time instants.

Thus if f (t) is the input to the sampler


The output is f(kT)
Where T is called the sampling interval

The reciprocal of T, Fs = 1/T, is called the sampling rate. This type of sampling is called periodic
sampling, since samples are obtained uniformly at intervals of T seconds.

Multiple order sampling A particular sampling pattern is repeated periodically

Multiple rate sampling - In this method two simultaneous sampling operations with
different time periods are carried out on the signal to produce the sampled output.

Random sampling In this case the sampling instants are random

Sampling Theorem

A band limited continuous time signal with highest frequency fm hertz can be uniquely
recovered from its samples provided that the sampling rate Fs is greater than or equal to 2fm
samples per second.
Signal Reconstruction

The signal given to the digital controller is a sampled data signal, and in turn the
controller gives its output in digital form. But the system to be controlled needs an
analog control signal as input. Therefore the digital output of the controller must be converted
into analog form.

This can be achieved by means of various types of hold circuits. The simplest hold
circuit is the zero order hold (ZOH). In the ZOH, the reconstructed analog signal holds the
value of the last received sample for the entire sampling period.

The high frequency noise present in the reconstructed signal is automatically
filtered out by the control system components, which behave like low pass filters. In a first
order hold, the last two samples are used to extrapolate the signal over the current sampling period. Similarly, higher order hold
circuits can be devised. First or higher order hold circuits offer no particular advantage over
the zero order hold.
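
The sketch below illustrates zero order hold reconstruction numerically: a 1 Hz sinusoid is sampled at Fs = 10 Hz (well above 2fm) and each sample is simply held for one sampling period. The signal, sampling period and time grid are assumptions chosen for illustration.

import numpy as np

f = lambda t: np.sin(2 * np.pi * 1.0 * t)   # 1 Hz signal (assumed)
T = 0.1                                     # sampling period, Fs = 10 Hz > 2*fm

t_fine = np.linspace(0.0, 1.0, 1001)
k = np.floor(t_fine / T)                    # index of the last sampling instant
zoh = f(k * T)                              # value held over each sampling period

print("max reconstruction error:", np.max(np.abs(zoh - f(t_fine))))
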
Z- Transform
Definition of Z Transform
Let f(k) = discrete time signal
F(z) = Z{f(k)} = z-transform of f(k)
The z-transform of a discrete time signal or sequence is defined as the power series

F(z) = Σ_{k = -∞}^{∞} f(k) z^{-k}      -------------- (1)

where z is a complex variable.

Equation (1) is considered to be two sided and the transform is called the two sided z-transform,
since the time index k is defined for both positive and negative values.
The one sided z-transform of f(k) is defined as

F(z) = Σ_{k = 0}^{∞} f(k) z^{-k}      --------------- (2)

Problem
1. Determine the z transform and their ROC of the discrete sequences f(k) ={3,2,5,7}
Given f (k) = {3, 2, 5, 7}
Where f (0) =3
f (1) =2
f (2) = 5
f (3) = 7
f (k) = 0 for k < 0 and k >3
By definition

Z{f(k)} = F(z) = Σ_{k = -∞}^{∞} f(k) z^{-k}

The given sequence is a finite duration sequence. Hence the limits of summation can be
changed to k = 0 to k = 3:

F(z) = Σ_{k = 0}^{3} f(k) z^{-k}

F(z) = f(0) z^0 + f(1) z^{-1} + f(2) z^{-2} + f(3) z^{-3}

     = 3 + 2 z^{-1} + 5 z^{-2} + 7 z^{-3}

Here F(z) is bounded, except when z = 0.
The ROC is the entire z-plane except z = 0.
2. Determine the z transform of discrete sequences f (k) =u (k)
Given f (k) =u (k)
u (k) is a discrete unit step sequence
u (k) = 1 for k 0
= 0 for k < 0
By definition

Z{f(k)} = F(z) = Σ_{k = -∞}^{∞} f(k) z^{-k} = Σ_{k = 0}^{∞} u(k) z^{-k}

       = Σ_{k = 0}^{∞} z^{-k} = Σ_{k = 0}^{∞} (z^{-1})^k

F(z) is an infinite geometric series and it converges if |z^{-1}| < 1, i.e. |z| > 1:

F(z) = 1 / (1 - z^{-1})

     = 1 / (1 - 1/z)

     = z / (z - 1)
3. Find the one sided z-transform of the discrete sequence generated by mathematically
sampling the continuous time function f(t) = e^{-at} cos ωt
Given
f(k) = e^{-akT} cos ωkT
By definition

F(z) = Z{f(k)} = Σ_{k = 0}^{∞} e^{-akT} cos(ωkT) z^{-k}

     = Σ_{k = 0}^{∞} e^{-akT} [ (e^{jωkT} + e^{-jωkT}) / 2 ] z^{-k}

     = (1/2) Σ_{k = 0}^{∞} (e^{-aT} e^{jωT} z^{-1})^k + (1/2) Σ_{k = 0}^{∞} (e^{-aT} e^{-jωT} z^{-1})^k

We know that Σ_{k = 0}^{∞} c^k = 1/(1 - c) for |c| < 1. Hence

F(z) = (1/2) [ 1 / (1 - e^{-aT} e^{jωT} z^{-1}) ] + (1/2) [ 1 / (1 - e^{-aT} e^{-jωT} z^{-1}) ]

     = (1/2) [ z / (z - e^{-aT} e^{jωT}) + z / (z - e^{-aT} e^{-jωT}) ]

     = [ 2 z^2 - z e^{-aT} (e^{jωT} + e^{-jωT}) ] / [ 2 ( z^2 - z e^{-aT} (e^{jωT} + e^{-jωT}) + e^{-2aT} ) ]

Using e^{jωT} + e^{-jωT} = 2 cos ωT,

F(z) = (z^2 - z e^{-aT} cos ωT) / (z^2 - 2 z e^{-aT} cos ωT + e^{-2aT})

Inverse z transform
Partial fraction expansion (PFE)
Power series expansion
Partial fraction expansion
Let f(k) = discrete sequence
F(z) = Z{f(k)} = z-transform of f(k)

F(z) = (b0 z^m + b1 z^{m-1} + b2 z^{m-2} + ... + bm) / (z^n + a1 z^{n-1} + a2 z^{n-2} + ... + an) ,   where m ≤ n

The function F(z) can be expressed as a sum of terms by PFE:

F(z) = A0 + Σ_{i = 1}^{n} Ai / (z - pi)      -------------- (3)

where A0 is a constant,

A1, A2, ..., An are residues,

p1, p2, ..., pn are poles.

Power series expansion


Let f(k) = discrete sequence
F(z) = Z{f(k)} = z-transform of f(k)
By definition

F(z) = Σ_{k = -∞}^{∞} f(k) z^{-k}

On expanding,

F(z) = ... + f(-3) z^3 + f(-2) z^2 + f(-1) z^1 + f(0) z^0 + f(1) z^{-1} + f(2) z^{-2} + ...      -------- (4)
Problem
1. Determine the inverse z transform of the following function
(i) F(z) = 1 / (1 - 1.5 z^{-1} + 0.5 z^{-2})

Given

F(z) = 1 / (1 - 1.5 z^{-1} + 0.5 z^{-2})

     = 1 / (1 - 1.5/z + 0.5/z^2)

     = z^2 / (z^2 - 1.5 z + 0.5)

     = z^2 / [(z - 1)(z - 0.5)]

F(z)/z = z / [(z - 1)(z - 0.5)]

By partial fraction expansion,

F(z)/z = A1/(z - 1) + A2/(z - 0.5)

A1 = (z - 1) F(z)/z |_{z = 1} = z/(z - 0.5) |_{z = 1} = 1/(1 - 0.5) = 2

A2 = (z - 0.5) F(z)/z |_{z = 0.5} = z/(z - 1) |_{z = 0.5} = 0.5/(0.5 - 1) = -1

F(z)/z = 2/(z - 1) - 1/(z - 0.5)

F(z) = 2z/(z - 1) - z/(z - 0.5)

We know that  Z{a^k} = z/(z - a)  and  Z{u(k)} = z/(z - 1).

On taking the inverse z-transform,

f(k) = 2 u(k) - (0.5)^k ,   k ≥ 0

(ii) F(z) = z^2 / (z^2 - z + 0.5)

Given

F(z) = z^2 / (z^2 - z + 0.5) = z^2 / [(z - 0.5 - j0.5)(z - 0.5 + j0.5)]

F(z)/z = z / [(z - 0.5 - j0.5)(z - 0.5 + j0.5)]

By partial fraction expansion,

F(z)/z = A/(z - 0.5 - j0.5) + A*/(z - 0.5 + j0.5)

A = (z - 0.5 - j0.5) F(z)/z |_{z = 0.5 + j0.5}
  = z/(z - 0.5 + j0.5) |_{z = 0.5 + j0.5}
  = (0.5 + j0.5)/(0.5 + j0.5 - 0.5 + j0.5) = (0.5 + j0.5)/(j1) = 0.5 - j0.5

A* = (z - 0.5 + j0.5) F(z)/z |_{z = 0.5 - j0.5}
   = z/(z - 0.5 - j0.5) |_{z = 0.5 - j0.5}
   = (0.5 - j0.5)/(-j1) = 0.5 + j0.5

F(z) = (0.5 - j0.5) z/(z - 0.5 - j0.5) + (0.5 + j0.5) z/(z - 0.5 + j0.5)

We know that  Z{a^k} = z/(z - a).

On taking the inverse z-transform,

f(k) = (0.5 - j0.5)(0.5 + j0.5)^k + (0.5 + j0.5)(0.5 - j0.5)^k

     = -j (0.5 + j0.5)^{k+1} + j (0.5 - j0.5)^{k+1} ,   k ≥ 0

2. Determine the inverse z-transform of the z-domain function

F(z) = (3z^2 + 2z + 1) / (z^2 - 3z + 2)

Given

F(z) = (3z^2 + 2z + 1) / (z^2 - 3z + 2)

Since the numerator and denominator are of the same degree, divide first:

(3z^2 + 2z + 1) ÷ (z^2 - 3z + 2) = 3, remainder (3z^2 + 2z + 1) - (3z^2 - 9z + 6) = 11z - 5

F(z) = 3 + (11z - 5)/(z^2 - 3z + 2) = 3 + (11z - 5)/[(z - 1)(z - 2)]

By PFE,

F(z) = 3 + A1/(z - 1) + A2/(z - 2)

A1 = (z - 1)(11z - 5)/[(z - 1)(z - 2)] |_{z = 1} = (11 - 5)/(1 - 2) = -6

A2 = (z - 2)(11z - 5)/[(z - 1)(z - 2)] |_{z = 2} = (11(2) - 5)/(2 - 1) = 17

F(z) = 3 - 6/(z - 1) + 17/(z - 2)

     = 3 - 6 z^{-1} [z/(z - 1)] + 17 z^{-1} [z/(z - 2)]

On taking the inverse z-transform,

f(k) = 3 δ(k) - 6 u(k - 1) + 17 (2)^{k-1} u(k - 1) ;   for k ≥ 0

3. Determine the inverse z-transform of the following:

F(z) = 1 / (1 - (3/2) z^{-1} + (1/2) z^{-2})   where (i) ROC: |z| > 1.0   (ii) ROC: |z| < 0.5

Given

F(z) = 1 / (1 - (3/2) z^{-1} + (1/2) z^{-2})

(i) |z| > 1.0

For this ROC the sequence is causal, so F(z) is expanded into a power series in z^{-1} by long
division of 1 by (1 - (3/2) z^{-1} + (1/2) z^{-2}):

F(z) = 1 + (3/2) z^{-1} + (7/4) z^{-2} + (15/8) z^{-3} + ...      -------------(i)

For a causal signal,

F(z) = Σ_{k = 0}^{∞} f(k) z^{-k}

F(z) = f(0) + f(1) z^{-1} + f(2) z^{-2} + f(3) z^{-3} + ...      -------------- (ii)

Comparing equations (i) and (ii),

f(0) = 1 ,  f(1) = 3/2 ,  f(2) = 7/4 ,  f(3) = 15/8

f(k) = {1, 3/2, 7/4, 15/8, ...}   for k ≥ 0

(ii) |z| < 0.5

For this ROC the sequence is anti-causal, so F(z) is expanded into a power series in z by
arranging the divisor in ascending order before dividing, i.e. dividing 1 by ((1/2) z^{-2} - (3/2) z^{-1} + 1):

F(z) = 2 z^2 + 6 z^3 + 14 z^4 + 30 z^5 + ...      -------------- (i)

For an anti-causal signal,

F(z) = Σ_{k = -∞}^{0} f(k) z^{-k}

F(z) = ... + f(-5) z^5 + f(-4) z^4 + f(-3) z^3 + f(-2) z^2 + f(-1) z^1 + f(0)      ------------- (ii)

Comparing equations (i) and (ii),

f(-5) = 30 ,  f(-4) = 14 ,  f(-3) = 6 ,  f(-2) = 2 ,  f(-1) = 0 ,  f(0) = 0

f(k) = {..., 30, 14, 6, 2, 0, 0}

Difference equation
Discrete time systems are described by difference equation of the form

If the system is causal, a linear difference equation provides an explicit relationship between
the input and output. This can be seen by rewriting.

Thus the nth value of the output can be computed from the nth input value and the N and M
past values of the output and input, respectively.
Role of z transform in linear difference equations
Equation (1) gives us the form of the linear difference equation that describes the
system. Taking z transform on either side and assuming zero initial conditions, we have

Where H(z) is a z transform of unit sample response h(n).
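
The sketch below shows how such a difference equation can be solved numerically with scipy.signal.lfilter; the coefficients and input are assumptions chosen for illustration (y(n) - 0.5 y(n-1) = u(n) with a unit step input).

import numpy as np
from scipy.signal import lfilter

b = [1.0]            # input-side coefficients
a = [1.0, -0.5]      # output-side coefficients, H(z) = 1/(1 - 0.5 z^-1)

u = np.ones(10)      # unit step input
y = lfilter(b, a, u)
print(y)             # approaches the steady-state value 1/(1 - 0.5) = 2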


Stability analysis
Jury's stability test
Bilinear transformation
Jury's stability test
Jury's stability test is used to determine whether the roots of the characteristic
polynomial lie within the unit circle or not. It consists of two parts: one simple test for the
necessary condition for stability, and another test for the sufficient condition for stability.
Let us consider a general characteristic polynomial F(z):

F(z) = an z^n + a_{n-1} z^{n-1} + ... + a1 z + a0 ,   where an > 0

Necessary condition for stability

F(1) > 0 ;   (-1)^n F(-1) > 0

If this necessary condition is not met, then the system is unstable and we need not check the
sufficient condition.
Sufficient condition for stability

|a0| < an
|b0| > |b_{n-1}|
|c0| > |c_{n-2}|
..................
|r0| > |r2|

If the characteristic polynomial satisfies these (n-1) conditions, then the system is stable.

Jury's test table

Bilinear transformation
The bilinear transformation maps the interior of the unit circle in the z-plane into the left half of
the r-plane:

z = (1 + r)/(1 - r)   or   r = (z - 1)/(z + 1)

Fig. Mapping of the unit circle in the z-plane into the left half of the r-plane

Consider the characteristic equation

an z^n + a_{n-1} z^{n-1} + a_{n-2} z^{n-2} + ... + a1 z + a0 = 0 ;   an > 0      ..............(i)

Substituting z = (1 + r)/(1 - r) in equation (i),

an ((1+r)/(1-r))^n + a_{n-1} ((1+r)/(1-r))^{n-1} + a_{n-2} ((1+r)/(1-r))^{n-2} + ... + a1 ((1+r)/(1-r)) + a0 = 0      ............(ii)

Equation (ii) can be simplified to

bn r^n + b_{n-1} r^{n-1} + b_{n-2} r^{n-2} + ... + b1 r + b0 = 0

The usual Routh-Hurwitz criterion can then be applied to this polynomial in r.

Problem

1. Check the stability of the sampled data control systems represented by the following
characteristic equations.

(i) F(z) = 5z^2 - 2z + 2 = 0

Given
F(z) = 5z^2 - 2z + 2 = 0

F(z) = a2 z^2 + a1 z + a0 = 5z^2 - 2z + 2

F(1) = 5(1)^2 - 2(1) + 2
     = 5 - 2 + 2
     = 5

(-1)^n F(-1) = (-1)^2 [5(-1)^2 - 2(-1) + 2]
             = 1(5 + 2 + 2)
             = 9

Here n = 2.
Since F(1) > 0 and (-1)^n F(-1) > 0, the necessary condition for stability is satisfied.
Check for the sufficient condition
The Jury table consists of (2n - 3) rows.
n = 2, so (2n - 3) = (2*2 - 3) = 1
So it consists of only one row.

Row    z^0    z^1    z^2
1      a0     a1     a2

a0 = 2 ,  a1 = -2 ,  a2 = 5

The sufficient condition to be satisfied is |a0| < a2, i.e. 2 < 5, which holds.

The necessary and sufficient conditions for stability are satisfied. Hence the system is stable.

(ii) F(z) = z^3 - 0.2 z^2 - 0.25 z + 0.05 = 0

F(z) = a3 z^3 + a2 z^2 + a1 z + a0 = z^3 - 0.2 z^2 - 0.25 z + 0.05 = 0

Method 1

Check for the necessary condition

F(z) = z^3 - 0.2 z^2 - 0.25 z + 0.05 = 0

F(1) = 1^3 - 0.2(1)^2 - 0.25(1) + 0.05 = 0.6

(-1)^n F(-1) = (-1)^3 [(-1)^3 - 0.2(-1)^2 - 0.25(-1) + 0.05] = 0.9

Here n = 3.
Since F(1) > 0 and (-1)^n F(-1) > 0,
the necessary condition for stability is satisfied.
Check for the sufficient condition
The Jury table consists of (2n - 3) rows.
n = 3, so (2n - 3) = (2*3 - 3) = 3
So the table consists of three rows.

Row    z^0    z^1    z^2    z^3
1      a0     a1     a2     a3
2      a3     a2     a1     a0
3      b0     b1     b2

a0 = 0.05
a1 = -0.25
a2 = -0.2
a3 = 1

b0 = | a0  a3 | = | 0.05  1    | = (0.05)(0.05) - (1)(1) = -0.9975
     | a3  a0 |   | 1     0.05 |

b1 = | a0  a2 | = | 0.05  -0.2  | = (0.05)(-0.25) - (1)(-0.2) = 0.1875
     | a3  a1 |   | 1     -0.25 |

b2 = | a0  a1 | = | 0.05  -0.25 | = (0.05)(-0.2) - (1)(-0.25) = 0.24
     | a3  a2 |   | 1     -0.2  |

Row     z^0        z^1       z^2      z^3
1       0.05      -0.25     -0.2      1
2       1         -0.2      -0.25     0.05
3      -0.9975     0.1875    0.24

The conditions to be satisfied are
|a0| < a3 ,   |b0| > |b2|
0.05 < 1 ,   0.9975 > 0.24

The necessary and sufficient conditions for stability are satisfied. Hence the system is stable.
Method 2
F(z) = z^3 - 0.2 z^2 - 0.25 z + 0.05 = 0

Put z = (1 + r)/(1 - r):

F(r) = ((1+r)/(1-r))^3 - 0.2 ((1+r)/(1-r))^2 - 0.25 ((1+r)/(1-r)) + 0.05 = 0

On multiplying throughout by (1 - r)^3 we get

(1+r)^3 - 0.2 (1+r)^2 (1-r) - 0.25 (1+r)(1-r)^2 + 0.05 (1-r)^3 = 0

Expanding and collecting terms,

0.9 r^3 + 3.6 r^2 + 2.9 r + 0.6 = 0

The coefficients of the new characteristic equation are all positive. Hence the necessary condition
for stability is satisfied.
The sufficient condition for stability can be determined by constructing the Routh array:

r^3 :   0.9     2.9      .........row 1
r^2 :   3.6     0.6      .........row 2
r^1 :   2.75             .........row 3
r^0 :   0.6              .........row 4

r^1 element = [(3.6)(2.9) - (0.9)(0.6)] / 3.6 = 2.75
r^0 element = [(2.75)(0.6) - (0)(3.6)] / 2.75 = 0.6

There is no sign change in the elements of the first column of the Routh array. Hence the sufficient
condition for stability is satisfied.
The necessary condition and sufficient condition for stability are satisfied. Hence the system
is stable.
Pulse transfer function
It is the ratio of the z-transform of the discrete output signal of the system to the z-transform of
the discrete input signal to the system. That is,

H(z) = C(z) / R(z)      (i)

Proof
Consider the z-transform of the convolution sum

Z[C(k)] = Σ_{k = 0}^{∞} Σ_{m = 0}^{∞} h(k - m) r(m) z^{-k}      ---------------- (ii)

On interchanging the order of summation, we get

C(z) = Σ_{m = 0}^{∞} r(m) Σ_{k = 0}^{∞} h(k - m) z^{-k}      ------------------ (iii)

Let l = k - m. Then l = -m when k = 0, and l → ∞ when k → ∞:

C(z) = Σ_{m = 0}^{∞} r(m) z^{-m} Σ_{l = -m}^{∞} h(l) z^{-l}      --------------------- (iv)

Since h(l) = 0 for l < 0 (causal system), the lower limit of the second sum can be taken as l = 0:

C(z) = Σ_{m = 0}^{∞} r(m) z^{-m} Σ_{l = 0}^{∞} h(l) z^{-l}      ------------------------ (v)

C(z) = R(z) . H(z)

The pulse transfer function is

H(z) = C(z) / R(z)      --------------------------- (vi)

The block diagram for the pulse transfer function is shown in the figure.

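In practice a pulse transfer function is often obtained by discretizing a continuous plant G(s) preceded by a zero order hold. The sketch below uses scipy.signal.cont2discrete with an assumed plant G(s) = 1/(s(s+1)) and sampling period T = 1 s.

from scipy.signal import cont2discrete

num = [1.0]
den = [1.0, 1.0, 0.0]           # s^2 + s, i.e. G(s) = 1/(s(s+1)) (assumed)
T = 1.0                         # sampling period (assumed)

numz, denz, dt = cont2discrete((num, den), T, method='zoh')
print("numerator   of H(z):", numz)
print("denominator of H(z):", denz)
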
UNIT II
Z-TRANSFORM AND SAMPLED DATA SYSTEMS
PART A
1. What is sampled data control system?
2. Explain the terms sampling and sampler.
3. What is meant by quantization?
4. State (Shannon's) sampling theorem.
5. What is zero order hold?
6. What is region of convergence?
7. Define Z-transform of unit step signal?
8. Write any two properties of discrete convolution.
9. What is pulse transfer function?
10. What are the methods available for the stability analysis of sampled data control
systems?
11. What is bilinear transformation?

PART B
1. (i)solve the following difference equation
2 y(k) 2 y(k-1) + y (k-2) = r(k)
y (k) = 0 for k<0 and
r(k) = {1; k= 0,1,2
{0;k<0 (8)
(ii)check if all the roots of the following characteristics equation lie within the circle.
Z41.368Z3+0.4Z2+0.08Z+0.002=0 (8)
2. (i)Explain the concept of sampling process. (6)
(ii)Draw the frequency response of Zero-order Hold (4)
(iii)Explain any two theorems on Z-transform (6)
3. The block diagram of a sampled data system is shown in Fig. (a) Obtain the discrete-time state
model for the system. (b) Obtain the equation for the intersample response of the system.

4. The block diagram of a sampled-data system is shown in Fig.


(a) Obtain discrete-time state model for the system
(b) Find the response of the system for a unit step input.
(c) What is the effect on system response (i) when T =0.5 sec (ii) T=1.5 sec

UNIT III
STATE SPACE ANALYSIS OF DISCRETE TIME SYSTEMS

State variables - Canonical forms - Digitalization - Solution of state equations -
Controllability and Observability - Effect of sampling time on controllability - Pole
placement by state feedback - Linear observer design - First order and second order
problems

State variables

Concepts of State and State Variables


State
The state of a dynamic system is the smallest set of variables (called state variables) such
that the knowledge of these variables at t=t0, together with the knowledge of the inputs for
t t 0 , completely determine the behaviour of the system for any time t t0 .

The concept of state is not limited to physical systems. It is applicable to biological systems,
economic systems, social systems, and others.
State variables
The state variables of a dynamic system are the smallest set of variables that determine the
state of the dynamic system, i.e. the state variables are the minimal set of variables such that the
knowledge of these variables at any initial time t = t0, together with the knowledge of the
inputs for t ≥ t0, is sufficient to completely determine the behaviour of the system for any
time t ≥ t0. If at least n variables x1, x2, ..., xn are needed to completely describe the
behaviour of a dynamic system, then those n variables are a set of state variables.
behaviour of a dynamic system than those n variables are a set of state variables.
The state variables need not be physically measurable or observable quantities. Variables that
do not represent physical quantities and those that are neither measurable nor observable can
also be chosen as state variables. Such freedom in choosing state variables is an added
advantage of the state-space methods.
Canonical forms
They are four main canonical forms to be studied:
1. Controller canonical form
2. Observer canonical form
3. Controllability canonical form
4. Observability canonical form

Controller canonical form
Consider the transfer function of the following for illustration:

y(s)/u(s) = (b1 s^2 + b2 s + b3) / (s^3 + a1 s^2 + a2 s + a3)

The transfer function is firstly decomposed into two subsystems:

y(s)/u(s) = [z(s)/u(s)] [y(s)/z(s)]

In other words,

z(s)/u(s) = 1 / (s^3 + a1 s^2 + a2 s + a3) ;

and

y(s)/z(s) = b1 s^2 + b2 s + b3

It is easy to obtain the state-space equation

     [  0    1    0  ]        [0]
Z' = [  0    0    1  ] Z  +   [0] u ;
     [ -a3  -a2  -a1 ]        [1]

y = b3 z1 + b2 z2 + b1 z3 = [b3  b2  b1] Z

Thus for a general transfer function of

y(s) = [(bm s^m + b_{m-1} s^{m-1} + ... + b0) / (an s^n + a_{n-1} s^{n-1} + ... + a0)] u(s)

the state-space representation can be given as

    [    0         1        0    ...      0      ]        [0]
    [    0         0        1    ...      0      ]        [0]
A = [   ...                                      ] ;  b = [.]
    [ -a0/an    -a1/an     ...   -a_{n-1}/an     ]        [1]

C = [ b0/an   b1/an   ...   bm/an   0   ...   0 ]

For convenience, we shall let an = 1 and m = n - 1.
In other words, for a system of the following:

y(s) = [(b1 s^{n-1} + b2 s^{n-2} + ... + bn) / (s^n + a1 s^{n-1} + ... + an)] u(s)

we have

    [  0      1       0    ...   0  ]        [0]
    [  0      0       1    ...   0  ]        [0]
A = [ ...                           ] ;  b = [.]
    [ -an  -a_{n-1}  ...        -a1 ]        [1]

C = [ bn   b_{n-1}   ...   b1 ]

Observer canonical form

Now, we set n = 3 for illustration. By assuming all initial values are zero, the transfer function can be written as

s^3 y + a1 s^2 y + a2 s y + a3 y = b1 s^2 u + b2 s u + b3 u

Define

y = x1
x1' = x2 - a1 y + b1 u = -a1 x1 + x2 + b1 u
x2' = x3 - a2 y + b2 u = -a2 x1 + x3 + b2 u
x3' = -a3 y + b3 u = -a3 x1 + b3 u

In other words,

    [ -a1   1   0 ]        [b1]
A = [ -a2   0   1 ] ;  b = [b2] ;  C = [1  0  0]
    [ -a3   0   0 ]        [b3]

In general,

    [ -a1       1   0  ...  0 ]        [b1]
    [ -a2       0   1  ...  0 ]        [b2]
A = [ ...                     ] ;  b = [..] ;  C = [1  0  ...  0]
    [ -a_{n-1}  0   0  ...  1 ]        [..]
    [ -an       0   0  ...  0 ]        [bn]

Controllability canonical form

Again, using n = 3 as illustration, the controllability form is given as:

    [ 0   0  -a3 ]        [1]                       [ a2  a1  1 ]^{-1}
A = [ 1   0  -a2 ] ;  b = [0] ;  C = [b3  b2  b1]   [ a1  1   0 ]
    [ 0   1  -a1 ]        [0]                       [ 1   0   0 ]

In general,

    [ 0   0  ...  0  -an       ]        [1]
    [ 1   0  ...  0  -a_{n-1}  ]        [0]
A = [ 0   1  ...  0  -a_{n-2}  ] ;  b = [.]
    [ ...                      ]        [.]
    [ 0   0  ...  1  -a1       ]        [0]

                          [ a_{n-1}  a_{n-2}  ...  a1  1 ]^{-1}
                          [ a_{n-2}  a_{n-3}  ...  1   0 ]
C = [bn  b_{n-1} ... b1]  [ ...                          ]
                          [ a1       1        ...  0   0 ]
                          [ 1        0        ...  0   0 ]

The Observability canonical form

    [  0      1       0   ...   0 ]
    [  0      0       1   ...   0 ]
A = [ ...                         ] ;
    [  0      0       0   ...   1 ]
    [ -an  -a_{n-1}  ...      -a1 ]

    [ 1         0        0   ...  0 ]^{-1}  [b1]
    [ a1        1        0   ...  0 ]       [b2]
b = [ ...                           ]       [..]
    [ a_{n-2}   a_{n-3}  ...  1   0 ]       [..]
    [ a_{n-1}   a_{n-2}  ...  a1  1 ]       [bn]

C = [1  0  0  ...  0]

Controllability and Observability

The dynamics of a linear time (shift) invariant discrete-time system may be expressed in
terms of a state (plant) equation and an output (observation or measurement) equation as follows

where x(k) is an n-dimensional state vector at time t = kT, u(k) is an r-dimensional control (input)
vector, and y(k) is an m-dimensional output vector; they are represented as

The parameters (elements) of A, an n x n (plant parameter) matrix, B, an n x r control
(input) matrix, C, an m x n output parameter matrix, and D, an m x r parametric matrix, are constants
for the LTI system. Similar to the above equation, the state variable representation of a SISO (single
input and single output) discrete-time system (with direct coupling of output with input) can
be written as

where the input u, output y and d are scalars, and b and c are n-dimensional vectors.

The concepts of controllability and observability for a discrete time system are similar to those for the
continuous-time system.

A discrete time system is said to be controllable if there exists a finite integer n and an input
u(k), k in [0, n-1], that will transfer any state x0 = x(0) to the state xn at k = n.

Controllability

Consider the state Equation can be obtained as

Equation can be written as

The state x0 can be transferred to some arbitrary state xn in at most n steps if
ρ(U) = rank of [ B  AB  A^2 B  ...  A^{n-1} B ] = n.

Thus, a system is controllable if the rank of the composite (n x nr) matrix [ B  AB  A^2 B  ...  A^{n-1} B ]
is n.

Observability

Consider the output Equation can be obtained as

Thus, we can write

If the rank of

is n, then the initial state x(0) can be determined from at most n measurements of the output and
input.

We can, therefore, state that "A discrete time system is observable if the rank of the
composite nm x n matrix is n."
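
A minimal numerical sketch of the discrete-time controllability and observability tests for x(k+1) = A x(k) + B u(k), y(k) = C x(k); the matrices are assumed example values.

import numpy as np

A = np.array([[0.0, 1.0], [-0.5, 1.0]])   # assumed example
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
print("controllable:", np.linalg.matrix_rank(ctrb) == n)
print("observable  :", np.linalg.matrix_rank(obsv) == n)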

Effect of sampling time on controllability

We have a continuous-time plant which is to be controlled. The control action may be
either continuous or discrete and must make the plant behave in a desired manner. If discrete
control action is chosen, then the problem of selection of the sampling interval arises. The
selection of the best sampling interval for a digital control system is a compromise among many
factors. The basic motivation to lower the sampling rate 1/T is cost. A decrease in
sampling rate means more time is available for control calculations; hence slower computers
are possible for a given control function, or more control capacity is available for a given
computer. Thus, economically, the best choice is the slowest possible sampling rate that meets
all the performance specifications. On the other hand, if the sampling rate is too low, the
sampler discards part of the information present in the continuous-time signal. The
minimum sampling rate or frequency has a definite relationship with the highest
significant signal frequency (i.e., signal bandwidth). This relationship is given by the
sampling theorem, according to which the information contained in a signal is fully
preserved in its sampled version so long as the sampling frequency is at least twice the
highest significant frequency contained in the signal. This sets an absolute lower bound on
the sample rate selection.

We are usually satisfied with the trial and error method of selection of the sampling interval.
We compare the response of the continuous-time plant with models discretized for
various sampling rates. Then the model with the slowest sampling rate which gives a
response within tolerable limits is selected for future work. However, the method is not
rigorous in approach. Also, a wide variety of inputs must be given to each prospective
model to ensure that it is a true representative of the plant.
Pole placement by state feedback
Consider a linear dynamic system in the state space form

In some cases one is able to achieve the goal by using full state feedback, which
represents a linear combination of the state variables, that is

so that the closed loop system given by

has the desired specifications.

If the pair (A,b) is controllable, the original system can be transformed into phase variable
canonical form, i.e. there exists a nonsingular transformation, constructed from the
characteristic polynomial of A, that is

such that

where ai are the coefficients of the characteristic polynomial of A, that is

For single input single output systems the state feedback is given by
After

Linear observer design

Here a linear time invariant observer for reconstruction of the crystal radius from the weighing
signal is derived. As a starting point, a linear approximation of the system behaviour can be
used. For this purpose the nonlinear equations required for observer design need to be
linearized around some operating (steady state) values, i.e. the equations are expanded in a
Taylor series which is truncated at the second order.

The nonlinear relation

can be approximated by a linear one

around some fixed operating values.
In the same way one can continue with the remaining equations needed for describing the
process dynamics. The linear model so derived is

where x is the state vector. Furthermore, one has the 3 x 3 system matrix A, the 3 x 2 control
matrix B and the 1 x 3 output matrix C. One has to keep in mind that the values of the state
space vector x, the input vector u and the output y describe the deviation of the corresponding
quantities from their operating values.

UNIT III
STATE SPACE ANALYSIS OF DISCRETE TIME SYSTEMS
PART A
1. What is state and state variable?
2. What is a state vector?
3. What is state space?
4. What is input and output space?
5. What are the advantages of state space modeling using physical variable?
6. What are phase variables?
7. What is the advantage and the disadvantage in canonical form of state model?
8. Write the solution of discrete time state equation?
9. Write the expression to determine the solution of discrete time state equation using z-
transform
10. Write the state model of the discrete time system?
PART B
1. A linear second order single input continuous time system is described by the
following set of differential equations.
.
X 1 (t ) 2 X 1 (t ) 4 X 2 (t )
.
X 2 (t ) 2 X 1 (t ) X 2 (t ) u (t )

Comment on the controllability and stability of the system.


2. The state space representation of a second order system is
.
x1 x1 (t ) u
.
x2 x1 2 x 2 (t ) u

State whether the system is controllable.


3. A system is described by
. 1 1 0
X X U
1 1 1
Y 1 0 X
Check the controllability and observability of the system
s 3
4. A control system has a transfer function given by G(s) =
( s 1)( s 2) 2

Unit IV
NONLINEAR SYSTEMS

Types of nonlinearity - Typical examples - Phase-plane analysis - Singular points - Limit
cycles - Construction of phase trajectories - Describing function method - Basic concepts -
Dead Zone - Saturation - Relay - Backlash - Liapunov stability analysis - Stability in the sense
of Liapunov - Definiteness of scalar functions - Quadratic forms - Second method of Liapunov -
Liapunov stability analysis of linear time invariant systems and non-linear systems

Introduction to Nonlinear Systems


It has been mentioned earlier that a control system is said to be linear if
it obeys the law of superposition. Most control systems are nonlinear in
nature and are treated as linear, under certain approximations, for ease of
analysis. Let us now discuss the properties of nonlinear systems.
In practice, nonlinearities may exist in the systems inherently or may be
purposely introduced in the systems to improve the performance. Hence
knowledge of the properties of nonlinear systems and of the various nonlinearities
is important.
Properties of Nonlinear Systems
The various characteristics of nonlinear systems are:
The most important characteristic of a nonlinear system is that it does not
obey the law of superposition. Hence its behaviour with respect to standard test
inputs cannot be used as a basis to analyse its behaviour with respect to other
inputs. Its response is different for different amplitudes of input signals. Hence,
while doing the analysis of a nonlinear system, along with the mathematical model
of the system, it is necessary to have information about the amplitudes of the
probable inputs, initial conditions, etc. This makes the analysis of a nonlinear
system difficult.
A linear system gives a sinusoidal output for a sinusoidal input, possibly with
a phase shift. But a nonlinear system produces higher harmonics and
sometimes sub harmonics. Hence for a sinusoidal input, the output of a
nonlinear system is generally non sinusoidal. The output consists of frequencies
which are multiples of the input frequency, i.e. harmonics. Sub harmonics
means the presence of frequencies which are lower than the input
frequency. The input and output relations are not linear.
In a linear system, sinusoidal oscillations depend on the input amplitude and
the initial conditions. But in a nonlinear system, periodic oscillations may
exist which are not dependent on the applied input and other system
parameter variations. In nonlinear systems, such periodic oscillations are
nonsinusoidal, having fixed amplitude and frequency. Such oscillations are
called limit cycles in the case of nonlinear systems.
Another important phenomenon which exists only in the case of nonlinear systems
is jump resonance. This can be explained by considering a frequency response.
Fig. (a) shows the frequency response of a linear system, in which
the output varies continuously as the frequency changes; whether the
frequency is increased or decreased, the output travels along the same curve
again and again. But in the case of a nonlinear system, if the frequency is increased,
the output shows a discontinuity, i.e. it jumps at a certain frequency, and if the
frequency is decreased, it jumps back but at a different frequency. This is shown
in the fig.

There is no definite criterion for judging the stability of a nonlinear
system. The analysis and design techniques of linear systems cannot be
applied to nonlinear systems.
Types of nonlinearities
The nonlinearities can be classified as incidental and intentional.
The incidental nonlinearities are those which are inherently present in the system.
Common examples of incidental nonlinearities are saturation, dead zone,

coulomb friction, stiction, backlash, etc.
The intentional nonlinearities are those which are deliberately inserted in the
system to modify system characteristics. The most common example of this type
of nonlinearity is a relay.
In many cases the system presents a nonlinear phenomenon which is fully
characterised by its static characteristics, i.e. its dynamics can be neglected.

Saturation
In this type of nonlinearity the output is proportional to the input for a limited
range of input signals. When the input exceeds this range, the output tends to become
nearly constant, as shown in the fig.

Saturation
Deadzone
The deadzone is the region in which the output is zero for a given input. Many
physical devices do not respond to small signals, i.e., if the input amplitude is less than
some small value, there will be no output. The region in which the output is zero is called
deadzone. When the input is increased beyond this deadzone value, the output will be
linear.

Dead zone

Friction
Friction exists in any system when there is relative motion between contacting surfaces.
The different types of friction are viscous friction, coulomb friction and stiction.

Stiction
The viscous friction is linear in nature and the frictional force is directly proportional to
relative velocity of the sliding surface.

Relay

Phase plane analysis
Objectives:
- Use eigenvalues and eigenvectors of the Jacobian matrix to characterize the phase
plane behavior.
- Predict the phase-plane behavior close to an equilibrium point, based on the
- Linearized model at that equilibrium point.
- Predict qualitatively the phase-plane behavior of the nonlinear system, when there
are multiple equilibrium points.
Phase-plane analysis
Phase plane analysis is a graphical method for studying second-order systems. This
chapter's objective is to gain familiarity with nonlinear systems through this simple
graphical method.
Concepts of Phase Plane Analysis
Phase portraits
The phase plane method is concerned with the graphical study of second-order autonomous
systems described by

Where
x1, x2 : states of the system
f1, f2: nonlinear functions of the states
Geometrically, the state space of this system is a plane having x1, x2 as
coordinates. This plane is called the phase plane. The solution of (2.1), as time varies from zero
to infinity, can be represented as a curve in the phase plane. Such a curve is called a phase
plane trajectory. A family of phase plane trajectories is called a phase portrait of a system.
Example1
Phase portrait of a mass-spring system as shown in the fig.
Solution
The governing equation of the mass-spring system in Fig (a) is the familiar linear second-
order differential equation


Assume that the mass is initially at rest, at length x0. Then the solution of this equation is

Eliminating time t from the above equations, we obtain the equation of the trajectories

This represents a circle in the phase plane. Its plot is given in fig (b).
The nature of the system response corresponding to various initial conditions is directly
displayed on the phase plane. In the above example, we can easily see that the system
trajectories neither converge to the origin nor diverge to infinity. They simply circle around
the origin, indicating the marginal nature of the system's stability. A major class of second-
order systems can be described by differential equations of the form

In state space form, this dynamics can be represented with x1 = x and x2 = x' as follows

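A phase portrait can also be constructed numerically by integrating the state equations from several initial conditions, as a complement to the analytical and isocline methods described later in this section. The sketch below uses the mass-spring system x'' + x = 0 with assumed initial displacements.

import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def f(t, x):
    x1, x2 = x
    return [x2, -x1]                      # dx1/dt = x2, dx2/dt = -x1

for x0 in [0.5, 1.0, 1.5]:                # assumed initial displacements
    sol = solve_ivp(f, (0.0, 10.0), [x0, 0.0], max_step=0.01)
    plt.plot(sol.y[0], sol.y[1])

plt.xlabel("x1 (displacement)"); plt.ylabel("x2 (velocity)")
plt.title("Phase portrait: circles about the origin")
plt.axis("equal"); plt.show()
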
Singular points
A singular point is an equilibrium point in the phase plane. Since an equilibrium point is defined
as a point where the system states can stay forever, it must satisfy

f1(x1, x2) = 0,   f2(x1, x2) = 0
Example 2
A nonlinear second-order system

The system has two singular points, one at (0,0) and the other at (3,0) . The motion
patterns of the system trajectories in the vicinity of the two singular points have different
natures. The trajectories move towards the point x = 0 while moving away from the point x =
3.
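The singular points can also be located numerically. The short sketch below uses a hypothetical second-order system ẋ1 = x2, ẋ2 = -x2 - 3x1 + x1² (chosen only because its equilibria are (0,0) and (3,0), matching the example; it is not necessarily the system the original figure refers to) and solves f1 = f2 = 0 with a root finder.

# Minimal sketch: locating singular (equilibrium) points numerically.
# The system below is hypothetical; it is chosen so that its equilibria
# are (0, 0) and (3, 0), as in the example above.
import numpy as np
from scipy.optimize import fsolve

def f(state):
    x1, x2 = state
    return [x2,                       # f1(x1, x2)
            -x2 - 3.0 * x1 + x1**2]   # f2(x1, x2)

guesses = [(0.5, 0.5), (2.5, -0.5)]
for g in guesses:
    eq = fsolve(f, g)
    print("equilibrium near", g, "->", np.round(eq, 6))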
Constructing Phase Portraits
There are a number of methods for constructing phase plane trajectories for linear or
nonlinear systems, such as the so-called analytical method, the method of isoclines, the delta
method, Lienard's method, and Pell's method.
Analytical method

There are two techniques for generating phase plane portraits analytically. Both
techniques lead to a functional relation between the two phase variables x1 and x2 in the form
g(x1, x2, c) = 0 (2.6), where the constant c represents the effects of initial conditions (and,
possibly, of external input signals). Plotting this relation in the phase plane for different initial
conditions yields a phase portrait.
The first technique involves solving (2.1) for x1 and x2 as functions of time t, i.e., x1(t) =
g1(t) and x2(t) = g2(t), and then eliminating time t from these equations. The second
technique, on the other hand, involves directly eliminating the time variable by noting that
dx2/dx1 = f2(x1, x2)/f1(x1, x2), and then solving this equation for a functional relation between x1 and x2. Let us use this
technique to solve the mass spring equation again.
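As an illustration of the second technique, the slope equation dx2/dx1 = f2/f1 can also be integrated numerically when a closed-form relation g(x1, x2, c) = 0 is hard to obtain. A minimal sketch, again assuming the unit mass-spring system (f1 = x2, f2 = -x1, so dx2/dx1 = -x1/x2), is:

# Minimal sketch: trace a phase trajectory directly from the slope equation
# dx2/dx1 = f2(x1, x2) / f1(x1, x2), assuming the unit mass-spring system
# (f1 = x2, f2 = -x1), for which dx2/dx1 = -x1/x2.
import numpy as np
from scipy.integrate import solve_ivp

def slope(x1, x2):
    return -x1 / x2            # valid away from x2 = 0

# integrate x2 as a function of x1, starting on the circle of radius 1
sol = solve_ivp(lambda x1, x2: [slope(x1, x2[0])],
                (0.0, 0.99), [1.0], max_step=0.005)
x1, x2 = sol.t, sol.y[0]
print("x1^2 + x2^2 stays near 1:", bool(np.allclose(x1**2 + x2**2, 1.0, atol=1e-3)))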
The first case corresponds to a node.
Stable or unstable node (Fig.a -b)
A node can be stable or unstable:
λ1, λ2 < 0 : the singular point is called a stable node.
λ1, λ2 > 0 : the singular point is called an unstable node.

There is no oscillation in the trajectories.


Saddle point (Fig.c)
The second case (λ1 < 0 < λ2) corresponds to a saddle point. Because of the unstable pole λ2,
almost all of the system trajectories diverge to infinity.

Stable or unstable locus (Fig.d-e)
The third case corresponds to a focus.
Re(λ1,2) < 0 : stable focus
Re(λ1,2) > 0 : unstable focus

Center point (Fig.f)


The last case corresponds to a center point. All trajectories are ellipses and the singular
point is the centre of these ellipses.

Note that the stability characteristics of linear systems are uniquely determined by the
nature of their singularity points. This, however, is not true for nonlinear systems.
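The case distinctions above can be automated: linearize about the equilibrium, take the Jacobian matrix A, and inspect its eigenvalues. A minimal sketch follows (the Jacobian used here is a hypothetical numerical example, not one taken from the text):

# Minimal sketch: classify an equilibrium point from the eigenvalues of the
# Jacobian matrix of the linearized system. The matrix A is hypothetical.
import numpy as np

def classify(A, tol=1e-9):
    lam = np.linalg.eigvals(A)
    re, im = lam.real, lam.imag
    if np.all(np.abs(im) < tol):                # real eigenvalues
        if np.all(re < -tol):
            return "stable node"
        if np.all(re > tol):
            return "unstable node"
        if re.min() < -tol and re.max() > tol:
            return "saddle point"
        return "degenerate case"
    if np.all(np.abs(re) < tol):                # purely imaginary eigenvalues
        return "center"
    return "stable focus" if np.all(re < 0) else "unstable focus"

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])                    # hypothetical Jacobian
print(np.linalg.eigvals(A), "->", classify(A))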

Limit cycle
In the phase plane, a limit cycle is defined as an isolated closed curve. The trajectory
has to be both closed, indicating the periodic nature of the motion, and isolated, indicating the
limiting nature of the cycle (with nearby trajectories converging or diverging from it).
Depending on the motion patterns of the trajectories in the vicinity of the limit cycle, we can
distinguish three kinds of limit cycles.
Limit cycle can be a drawback in control systems:
Instability of the equilibrium point
Wear and failure in mechanical systems
Loss of accuracy in regulation
Stable Limit Cycles: all trajectories in the vicinity of the limit cycle converge to it as t → ∞
(Fig.a).
Unstable Limit Cycles: all trajectories in the vicinity of the limit cycle diverge from it as
t → ∞ (Fig.b).
Semi-Stable Limit Cycles: some of the trajectories in the vicinity of the limit cycle
converge to it, while others diverge from it, as t → ∞ (Fig.c).

Difference between center and limit cycle


Center trajectories can be found in linear or linearized systems in which the eigenvalues with
the largest real part have zero real part (the marginally stable case):
- the closed orbits depend on the initial conditions.
Limit cycles can occur in nonlinear systems:
- an isolated closed orbit (related to a Hopf bifurcation).

Center Limit Cycle
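A standard way to see a stable limit cycle numerically is the Van der Pol oscillator ẍ - μ(1 - x²)ẋ + x = 0 (used here purely as an illustration; it is not one of the figures referred to above). Trajectories started inside and outside the closed orbit both converge to it:

# Minimal sketch: a stable limit cycle (Van der Pol oscillator, mu = 1).
# Trajectories starting inside and outside the cycle both approach it.
import numpy as np
from scipy.integrate import solve_ivp

MU = 1.0

def vdp(t, state):
    x, v = state
    return [v, MU * (1.0 - x**2) * v - x]

for x0 in ([0.1, 0.0], [4.0, 0.0]):             # inside and outside starts
    sol = solve_ivp(vdp, (0.0, 60.0), x0, max_step=0.01)
    # the amplitude over the last few cycles settles near 2 for both starts
    amp = np.max(np.abs(sol.y[0][-2000:]))
    print("start", x0, "-> final amplitude ~", round(float(amp), 2))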
Describing function method
Of all the analytical methods developed over the years for nonlinear systems, the describing
function method is generally agreed upon as being the most practically useful. It is an
approximate method, but experience with real systems and computer simulation results shows
adequate accuracy in many cases. The method predicts whether limit cycle oscillations will exist
or not, and gives numerical estimates of the oscillation frequency and amplitude when limit cycles
are predicted. Basically, the method is an approximate extension of frequency-response methods
(including the Nyquist stability criterion) to nonlinear systems.
To discuss the basic concept underlying describing function analysis, let us consider the
block diagram of a nonlinear system shown in Fig. 9.5, where the blocks G1(s) and G2(s) represent
the linear elements, while the block N represents the nonlinear element.

The describing function method provides a "linear approximation" to the nonlinear element
based on the assumption that the input to the nonlinear element is a sinusoid of known,
constant amplitude. The fundamental harmonic of the element's output is compared with the
input sinusoid to determine the steady-state amplitude and phase relation. This relation is the
describing function of the nonlinear element. The method can thus be viewed as a "harmonic
linearization" of a nonlinear element.
The describing function method is based on the Fourier series. A review of the Fourier series
will be in order here.

Fourier series
We begin with the definition of a periodic signal. A signal y(t) is said to be periodic with
period T if y(t + T) = y(t) for every value of t. The smallest positive value of T for which
y(t + T) = y(t) holds is called the fundamental period of y(t). We denote the fundamental period
as T0. Obviously, 2T0 is also a period of y(t), and so is any integer multiple of T0. A periodic
signal y(t) may be represented by the series

The term for n = 1 is called the fundamental or first harmonic, and always has the same
frequency as the repetition rate of the original periodic waveform; the terms for n = 2, 3, ...
give the second, third, and so forth harmonic frequencies as integer multiples of the
fundamental frequency.

Certain simplifications are possible when y(t) has a symmetry of one type or another.
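Since the describing function is just the complex ratio of the fundamental Fourier component of the nonlinearity's output to the sinusoidal input, it can be computed numerically for any static nonlinearity. The sketch below does this for an ideal relay of output level M (for which the well-known result is N(A) = 4M/(πA)); the numbers and function names are illustrative, not taken from the text.

# Minimal sketch: numerically compute the describing function of a static
# nonlinearity as (fundamental Fourier component of output) / (input amplitude).
import numpy as np

def describing_function(nonlinearity, A, n_points=20000):
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    y = nonlinearity(A * np.sin(theta))
    b1 = (2.0 / n_points) * np.sum(y * np.sin(theta))   # in-phase coefficient
    a1 = (2.0 / n_points) * np.sum(y * np.cos(theta))   # quadrature coefficient
    return (b1 + 1j * a1) / A

M = 1.0
relay = lambda x: M * np.sign(x)

for A in (0.5, 1.0, 2.0):
    N = describing_function(relay, A)
    print(f"A = {A}: N(A) = {N.real:.4f}, theory 4M/(pi*A) = {4*M/(np.pi*A):.4f}")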

The describing Function approach to the analysis of steady-state oscillations in nonlinear


systems is an approximate tool to estimate the limit cycle parameters.
It is based on the following assumptions

There is only one single nonlinear component
The nonlinear component is not dynamic and is time-invariant
The linear component has low-pass filter properties
The nonlinear characteristic is symmetric with respect to the origin
There is only one single nonlinear component
The system can be represented by a lumped parameters system with two main blocks:
The linear part
The nonlinear part

The nonlinear component is not dynamic and is time-invariant


The system is autonomous.
All the system dynamics are concentrated in the linear part, so that classical analysis tools
such as Nyquist and Bode plots can be applied.

The linear component has low-pass filter properties. This is the main assumption that allows
for neglecting the higher frequency harmonics that can appear when a nonlinear system is
driven by a harmonic signal

The better the low-pass filter assumption is satisfied, the smaller the estimation error
affecting the limit cycle parameters.

The nonlinear characteristic is symmetric with respect to the origin. This guarantees that the
static term in the Fourier expansion of the output of the nonlinearity, subjected to an
harmonic signal, can be neglected

Such an assumption is usually taken for the sake of simplicity, and it can be relaxed.
Ideal relay
The negative reciprocal of the DF is the negative real axis, traversed in the backward direction. A limit
cycle can exist if the relative degree of G(jω) is greater than two.

The oscillation frequency is the critical frequency ωc of the linear system, and the
oscillation magnitude is proportional to the relay gain M.
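To show how the DF predicts a limit cycle, the sketch below pairs the ideal-relay describing function N(A) = 4M/(πA) with a hypothetical linear part G(s) = 10/[s(s+1)(s+2)] (relative degree three, so its Nyquist plot crosses the -1/N(A) locus, the negative real axis). The predicted frequency is where Im G(jω) = 0, and the amplitude follows from |G(jωc)| = 1/N(A).

# Minimal sketch: limit-cycle prediction with the describing function of an
# ideal relay, N(A) = 4M/(pi*A), and a hypothetical linear part
# G(s) = 10 / (s (s+1) (s+2)).
import numpy as np
from scipy.optimize import brentq

M = 1.0

def G(w):
    s = 1j * w
    return 10.0 / (s * (s + 1.0) * (s + 2.0))

# limit-cycle frequency: Im G(jw) = 0 (phase crosses -180 degrees)
wc = brentq(lambda w: G(w).imag, 0.5, 5.0)

# limit-cycle amplitude: |G(jwc)| = 1 / N(A)  =>  A = 4 M |G(jwc)| / pi
A = 4.0 * M * abs(G(wc)) / np.pi

print(f"predicted frequency wc = {wc:.3f} rad/s (exact sqrt(2) = {np.sqrt(2):.3f})")
print(f"predicted amplitude A  = {A:.3f}")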
Liapunov's Stability Analysis
The state equation for a general time-invariant system has the form ẋ = f(x, u).
If the input u is constant, then the equation takes the form ẋ = F(x).
For this system, the points at which the derivatives of all the state variables are
zero are the singular points.
These singular points are nothing but equilibrium points, where the system
stays if it is undisturbed when placed at these points.

The stability of such a system is defined in two different ways. If the
input to the system is zero, then with arbitrary initial conditions the resulting
trajectory in the phase plane, discussed in the earlier chapter, tends towards the
equilibrium state.
If an input to the system is provided, then stability is defined as: for a
bounded input, the system output is also bounded.

For linear systems with non-zero eigenvalues, there is only one
equilibrium state and the behaviour of such systems about this
equilibrium state totally determines the qualitative behaviour in the
entire state space.

In case of nonlinear systems, the behaviour for small deviations about the
equilibrium point is different from that for large deviations.

Hence local stability for such systems does not indicate the overall
stability in the state space. Also, for non-linear systems having multiple
equilibrium states, the trajectories may move away from one equilibrium point and
tend towards another with time.

Thus stability in the case of a non-linear system is always referred to a particular
equilibrium state, rather than to the global term stability, which denotes the total
stability of the system.

In the case of linear control systems, many stability criteria, such as
Routh's stability test and the Nyquist stability criterion, are available. But
these cannot be applied to non-linear systems.

The second method of Liapunov, which is also called the direct method of
Liapunov, is the most common method for determining the stability of non-linear
systems.

This method is equally applicable to time-varying systems, to the stability
analysis of linear time-invariant systems, and to solving the quadratic
optimal control problem.
Stability in the Sense of Liapunov
Consider a system defined by the state equation ẋ = f(x, t). Let us assume
that this system has a unique solution starting at the given initial condition. Let us
denote this solution by Φ(t; x0, t0), where x = x0 at t = t0 and t is the observed time, so that

Φ(t0; x0, t0) = x0

If we consider a state xe of the system ẋ = f(x, t) such that

f(xe, t) = 0 for all t,

then this xe is called an equilibrium state. For linear, time-invariant systems having A
non-singular, there is only one equilibrium state, while there are one or more equilibrium
states if A is singular.
In the case of non-linear systems, as we have seen previously, there can be more than
one equilibrium state. Isolated equilibrium states (those isolated from each other) can be
shifted to the origin, i.e. f(0, t) = 0, by properly shifting the coordinates. These equilibrium
states can be obtained from the solution of the equation f(x, t) = 0.
Now we will consider the stability analysis of equilibrium states at the origin. We
will consider a spherical region of radius R about an equilibrium state xe, such that
||x - xe|| ≤ R.

Any equilibrium state xe of the system ẋ = f(x, t) is said to be stable in the
sense of Liapunov if, corresponding to each spherical region S(ε), there is a region S(δ)
such that trajectories starting in S(δ) do not leave S(ε) as time t increases indefinitely.
The real number δ depends on ε and, in general, also depends on t0. If δ does not depend
on t0, the equilibrium state is said to be uniformly stable.

The region S(ε) must be selected first, and for each S(ε) there must be a
region S(δ) such that trajectories starting within S(δ) do not leave S(ε)
as time t progresses.

There are many types of stability definitions, such as asymptotic stability and
asymptotic stability in the large. We will also see the definition of instability along
with the definitions of these types of stability.

Definiteness of scalar function

Positive Definiteness
A scalar function F(x) is said to be positive definite in a particular
region which includes the origin of state space if F(x) > 0 for all non-zero states
x in that region and F(0) = 0.

Negative Definiteness
A scalar function F(x) is said to be negative definite if - F(x) is positive
definite.

Positive Semidefiniteness
A scalar function F(x) is said to be positive semi definite if it is positive at all states in
the particular region except at the origin and at certain other states where it is zero.

Negative Semidefinite
A scalar function F(x) is said to be negative semidefinite if - F(x) is positive
semidefinite.
Indefiniteness
A scalar function F(x) is said to be indefinite in a particular region if it
assumes both positive and negative values, no matter how small the region is.

Quadratic Form

A class of scalar functions which plays an important role in stability analysis based on
Liapunov's second method is the quadratic form

V(x) = xT P x

where P is a real symmetric matrix and x is a real vector.
Liapunov's Second Method
A vibrating system is stable if its total energy is continuously
decreasing. This indicates that the time derivative of the total energy must
be negative.
The energy decreases until an equilibrium state is reached; the total
energy is a positive definite function.
This fact, obtained from classical mechanics, is generalized in Liapunov's
second method. If the system has an asymptotically stable equilibrium state, then the
stored energy decays with increasing time until it attains its minimum value at the
equilibrium state.
But there is no simple way of defining an energy function for a purely mathematical
system. This difficulty was overcome when Liapunov introduced the Liapunov function,
a fictitious energy function.
A Liapunov function depends on x1, x2, ..., xn and t. It is written as F(x1, x2, ..., xn, t) or as
F(x, t). In Liapunov's second method, the sign behaviour of F(x, t) and of its time
derivative Ḟ(x, t) = dF(x, t)/dt gives information about the stability, asymptotic
stability or instability of an equilibrium state without requiring the system equations
to be solved directly.
Liapunov's Stability Theorem

Consider a scalar function V(x), where x is an n-vector, which is positive
definite. Then the states x that satisfy V(x) = C, where C is a positive constant, lie
on a closed hypersurface in the n-dimensional state space, at least in the
neighbourhood of the origin. This is shown in the Fig.
If V(x) is a positive definite function obtained for a given system such that
its time derivative taken along a trajectory is always negative, then V(x) becomes
smaller and smaller in terms of C and is finally reduced to zero as x reduces to
zero. This indicates asymptotic stability of the origin. Liapunov's main stability
theorem is based on this and gives a sufficient condition for asymptotic stability.

Liapunov's stability theorem is as given below. Consider a system
described by the equation ẋ = f(x, t), where f(0, t) = 0 for all t. If there exists a
scalar function V(x, t) having continuous first partial derivatives and
satisfying the conditions that V(x, t) is positive definite and V̇(x, t) is
negative definite, then the equilibrium state at the origin is uniformly
asymptotically stable.

Consider the system described by ẋ = f(x, t), where f(0, t) = 0 for all t. If
there exists a scalar function V(x, t) having continuous first partial derivatives
such that V(x, t) is positive definite, V̇(x, t) is negative semidefinite, V̇(Φ(t; x0, t0), t)
does not vanish identically in t ≥ t0 for any t0 and any x0 ≠ 0, where Φ(t; x0, t0)
denotes the solution starting from x0 at t0, and V(x, t) → ∞ as ||x|| → ∞, then the
equilibrium state at the origin of the system is uniformly asymptotically stable in
the large.
The equilibrium state at the origin is unstable if there exists a scalar
function U(x, t) having continuous first partial derivatives and satisfying the
conditions that U(x, t) is positive definite in some region about the origin and U̇(x, t)
is also positive definite in the same region.
Stability of Linear and Nonlinear Systems

If the equilibrium state of a linear, time-invariant system is
asymptotically stable locally, then it is asymptotically stable in the large. In the
case of a nonlinear system, however, an equilibrium state can be locally
asymptotically stable without being asymptotically stable in the large. Hence the
asymptotic stability of the equilibrium state of linear, time-invariant systems and
that of nonlinear systems are different. If it is required to establish the asymptotic
stability in the large of an equilibrium state of a nonlinear system, then stability
analysis of linearized models of the nonlinear system is totally insufficient. The
nonlinear system must be tested without being linearized.

The Direct Method of Liapunov and the Linear System

For linear systems, Liapunov's direct method proves to be a simple
method for stability analysis. Use of Liapunov's method for linear systems
is also helpful in extending the ideas to nonlinear systems.

Consider a linear system described by the state equation

ẋ = A x

The linear system described by the above equation is asymptotically stable in the
large at the origin if and only if, for any symmetric, positive definite matrix
Q, there exists a symmetric, positive definite matrix P which is the unique
solution of

AT P + P A = -Q

To prove this theorem, first assume that a symmetric, positive definite matrix P
exists which is the unique solution of the above equation, and consider the scalar
function V(x) = xT P x.

Let the norm of x be defined as

The system is therefore asymptotically stable in the large at the origin. The result is also
necessary. To prove this, assume that the system is asymptotically stable and P is
negative definite

This is a contradiction, as V(x) = xT P x satisfies the instability theorem. Hence the


conditions for the positive definiteness of P are necessary and sufficient for asymptotic
stability of the system.

Liapunov's direct method applied to linear time-invariant systems gives the same
stability conclusions as the Hurwitz stability criterion.
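This test is easy to carry out numerically: pick any symmetric positive definite Q (the identity is the usual choice), solve AT P + P A = -Q for P, and check that P is positive definite. A minimal sketch with a hypothetical stable A matrix:

# Minimal sketch: Liapunov test for a linear time-invariant system x' = A x.
# Solve A^T P + P A = -Q for P and check that P is positive definite.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])           # hypothetical stable system matrix
Q = np.eye(2)                           # any symmetric positive definite Q

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
# so passing a = A^T and q = -Q yields A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

eigP = np.linalg.eigvalsh(P)
print("P =\n", P)
print("eigenvalues of P:", eigP, "-> asymptotically stable:", bool(np.all(eigP > 0)))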
Example
Show that the following quadratic form is positive definite

Solution
The above given V(x) can be written as

Applying Sylvester's criterion we have,

As all the successive principal minors of the matrix P are positive, V(x) is positive
definite.
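Sylvester's criterion can be checked mechanically by computing the leading principal minors of P. A small sketch follows (the matrix below is a hypothetical example, not the one from the problem statement, since that matrix did not survive in these notes):

# Minimal sketch: Sylvester's criterion - V(x) = x^T P x is positive definite
# iff all leading principal minors of the symmetric matrix P are positive.
import numpy as np

def is_positive_definite(P):
    P = np.asarray(P, dtype=float)
    minors = [np.linalg.det(P[:k, :k]) for k in range(1, P.shape[0] + 1)]
    return all(m > 0 for m in minors), minors

P = np.array([[10.0, 1.0, -2.0],      # hypothetical symmetric matrix
              [ 1.0,  4.0, -1.0],
              [-2.0, -1.0,  1.0]])

pd, minors = is_positive_definite(P)
print("leading principal minors:", [round(m, 3) for m in minors])
print("V(x) positive definite:", pd)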
Example
Investigate the stability of the following non-linear system using the direct method of Liapunov.

Given that

Let the Liapunov function be,

It can be seen that V̇(x) < 0 for all non-zero values of x1 and x2; hence V̇ is negative
definite. Therefore the origin of the system is asymptotically stable in the large.
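The same kind of check can be sketched numerically for a nonlinear system: choose a candidate V(x), evaluate V̇ = (∂V/∂x)·f(x) on a grid, and verify its sign. The system and the Liapunov function below are hypothetical stand-ins (the original equations were lost in these notes), chosen so that V = x1² + x2² gives V̇ = -2(x1² + x2²)², which is negative definite.

# Minimal sketch: checking a Liapunov function candidate numerically on a grid.
# Hypothetical system: x1' = x2 - x1(x1^2 + x2^2), x2' = -x1 - x2(x1^2 + x2^2)
# Candidate: V = x1^2 + x2^2, whose derivative along trajectories is
# Vdot = -2 (x1^2 + x2^2)^2 < 0 for all (x1, x2) != (0, 0).
import numpy as np

def f(x1, x2):
    r2 = x1**2 + x2**2
    return x2 - x1 * r2, -x1 - x2 * r2

def V_dot(x1, x2):
    f1, f2 = f(x1, x2)
    return 2.0 * x1 * f1 + 2.0 * x2 * f2    # grad(V) . f

grid = np.linspace(-2.0, 2.0, 201)
X1, X2 = np.meshgrid(grid, grid)
vd = V_dot(X1, X2)
nonzero = (X1**2 + X2**2) > 1e-12
print("Vdot < 0 at every non-zero grid point:", bool(np.all(vd[nonzero] < 0)))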

UNIT IV
NONLINEAR SYSTEMS
PART A
1. What are linear and nonlinear systems? Give examples.
2. How nonlinearities are introduced in the systems?
3. How the nonlinearities are classified? Give examples.
4. What is the difference between phase plane and describing function methods of analysis?
5. Write any two properties of nonlinear systems?
6. What is jump resonance?
7. What are limit cycles?
8. What is saturation?
9. What is describing function?
10. Write the describing function of dead zone and saturation nonlinearity.
PART B
11. Write the describing function for the following
(i) Backlash nonlinearity
(ii) Relay with dead zone
12. Construct the phase trajectory for the system described by the equation

dx2/dx1 = (4x1 + 3x2) / (x1 + x2)

Comment on the stability of the system.


13. Draw the phase trajectory of the system described by the equation
ẍ + ẋ + x² = 0. Comment on the stability of the system.

UNIT V
MIMO SYSTEMS

Models of MIMO system Matrix representation Transfer function representation


Poles and Zeros Decoupling Introduction to multivariable Nyquist plot and singular
values analysis Model predictive control

MIMO System Model


We consider a MIMO system with a transmit array of MT antennas and a receive array
of MR antennas. The block diagram of such a system is shown in Figure. The transmitted
signal is an MT x 1 column matrix s, where si is the ith component, transmitted from antenna i.
We consider the channel to be a Gaussian channel such that the elements of s are considered to
be independent identically distributed (i.i.d.) Gaussian variables. If the channel is unknown at
the transmitter, we assume that the signals transmitted from each antenna have equal powers of
Es/MT. The covariance matrix for this transmitted signal is given by

Rss = (Es / MT) I_MT

where Es is the power across the transmitter irrespective of the number of antennas and I_MT is
an MT x MT identity matrix. The transmitted signal bandwidth is so narrow that its frequency
response can be considered flat (i.e., the channel is memoryless). The channel matrix H is an
MR x MT complex matrix. The component hij of the matrix is the fading coefficient from the
jth transmit antenna to the ith receive antenna.
We assume that the received power for each of the receive antennas is equal to the total
transmitted power Es. This implies that we ignore signal attenuation, antenna gains, and so on.
Thus we obtain the normalization constraint for the elements of H, for a deterministic
channel as

If the channel elements are not deterministic but random, the normalization will apply to the
expected value. We assume that the channel matrix is known at the receiver but unknown at
the transmitter. The channel matrix can be estimated at the receiver by transmitting a training
sequence. If we require the transmitter to know this channel, then we need to communicate
this information to the transmitter via a feedback channel. The elements of H can be
deterministic or random. The noise at the receiver is another column matrix of size MR X 1,
denoted by n. The components of n are zero mean circularly symmetrical complex Gaussian
(ZMCSCG) variables. The covariance matrix of the receiver noise is

If there is no correlation between components of n, the covariance matrix is obtained as

Each of the MR receive branches has an identical noise power of N0. The receiver operates on
the maximum likelihood detection principle over the MR receive antennas. The received signals
constitute an MR x 1 column matrix denoted by y, where each complex component refers to a
receive antenna. Since we assumed that the total received power per antenna is equal to the
total transmitted power, the SNR can be written as

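A small numerical sketch of this narrowband model (hypothetical sizes and values; the received vector is written y = H s + n, and the nominal SNR is taken as Es/N0, consistent with the assumptions above):

# Minimal sketch: flat-fading MIMO signal model y = H s + n with
# MT transmit and MR receive antennas (hypothetical sizes and powers).
import numpy as np

rng = np.random.default_rng(0)
MT, MR = 2, 4
Es, N0 = 1.0, 0.1

# i.i.d. transmitted symbols with total power Es spread over MT antennas
s = np.sqrt(Es / MT) * (rng.standard_normal(MT) + 1j * rng.standard_normal(MT)) / np.sqrt(2)

# channel matrix H (MR x MT); entry h_ij: j-th transmit -> i-th receive antenna
H = (rng.standard_normal((MR, MT)) + 1j * rng.standard_normal((MR, MT))) / np.sqrt(2)

# zero-mean circularly symmetric complex Gaussian noise, power N0 per branch
n = np.sqrt(N0 / 2) * (rng.standard_normal(MR) + 1j * rng.standard_normal(MR))

y = H @ s + n
print("received vector y:", np.round(y, 3))
print("nominal SNR = Es/N0 =", Es / N0)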
Matrix representation

Figure shows a linear dynamic MIMO filter. Its array of K-inputs, after z-transforming, can
be represented by the column vector [F(z)]. Its array of outputs, having the same number of
elements as the input array, is represented by the column vector [G(z)]. The transfer function
of the MIMO filter is represented by a square K x K matrix of transfer functions

The output vector can be expressed as

Each output is a linear combination of filtered versions of all the inputs. The transfer function
from input j to output i is Hij(z).
A schematic diagram of [H(z)] is shown in Fig.(a). The signal path from input line j to output
line i is illustrated in Fig.(b). A block diagram of the MIMO filter is shown in Fig.(c). The
input vector is [F(z)]. The output vector [G(z)] is equal to [H(z)][F(z)]. The overall transfer
function of the system is [H(z)].
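At any fixed frequency, the relation [G(z)] = [H(z)][F(z)] is just a matrix-vector product, so it can be evaluated numerically. The 2 x 2 transfer functions used below are hypothetical first-order examples:

# Minimal sketch: evaluate a 2x2 MIMO transfer-function matrix [H(z)] on the
# unit circle and form the output vector [G(z)] = [H(z)][F(z)].
# The individual transfer functions are hypothetical first-order examples.
import numpy as np

def H(z):
    return np.array([[1.0 / (1.0 - 0.5 / z), 0.2 / (1.0 - 0.3 / z)],
                     [0.1 / (1.0 - 0.8 / z), 2.0 / (1.0 - 0.4 / z)]])

omega = 0.25 * np.pi                      # normalized frequency (rad/sample)
z = np.exp(1j * omega)                    # point on the unit circle
F = np.array([1.0 + 0.0j, 0.5 - 0.2j])    # input vector [F(z)] at this frequency

G = H(z) @ F            # each output mixes filtered versions of all the inputs
print("[H(z)] =\n", np.round(H(z), 3))
print("[G(z)] =", np.round(G, 3))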
Transfer function representation
A multivariable process admits nu inputs and ny outputs. In general, the number of inputs
should be larger than or equal to the number of outputs so that the process is controllable.
Thus, we will assume nu ≥ ny. The system is supposed to have been identified in continuous
time by transfer functions. In general, this identification is performed by sequentially
imposing signals such as steps on each input ui (i = 1,..., nu ) and recording the corresponding
vector of the responses yij (j = 1, ... ,ny). From each input-output couple (ui,yij), a transfer
function is deduced by a least-squares procedure.
In open loop, the ny outputs yi are linked to the nu inputs uj and to the nd disturbances dk by
the following set of ny linear equations

This will be written in open loop in condensed matrix form as

y = Gu u + Gd d

where y is the output vector, u is the input vector and d is the disturbance vector (the
modelled disturbances), Gu is the rectangular ny x nu matrix whose elements are the
input-output transfer functions, and Gd is the rectangular ny x nd matrix whose elements
are the disturbance-output transfer functions.

Poles and Zeros


As for single-input single-output (SISO) systems, poles and zeros determine the stability,
controllability and observability of a multivariable system. For a SISO system, zeros are the
roots of the numerator polynomial of the scalar transfer function, whereas poles are the roots
of the denominator polynomial. For multi-input multi-output systems, the transfer function
is not scalar and it is no longer sufficient to determine the zeros and poles of the individual
entries of the transfer function matrix. In fact, element zeros do not play a major role in
characterizing multivariable systems and their properties, beyond their effect on the shape of
the response of the system.
Types of multivariable zeros
1. Input-decoupling zeros, which correspond to uncontrollable modes
2. Output-decoupling zeros, which correspond to unobservable modes
3. Input-output decoupling zeros
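For a square state-space model, the poles and the transmission zeros can be computed numerically: the poles are the eigenvalues of A, and the zeros are the finite generalized eigenvalues of the Rosenbrock pencil [[A, B], [C, D]] v = z [[I, 0], [0, 0]] v. The sketch below uses a hypothetical 2-input, 2-output system:

# Minimal sketch: poles and transmission zeros of a square MIMO state-space
# model (A, B, C, D). Zeros = finite generalized eigenvalues of the
# Rosenbrock pencil [[A, B], [C, D]] v = z [[I, 0], [0, 0]] v.
# The matrices below are hypothetical.
import numpy as np
from scipy.linalg import eig

A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.eye(2)
C = np.array([[1.0, 1.0], [0.0, 1.0]])
D = np.array([[0.0, 0.0], [0.0, 1.0]])

n = A.shape[0]
M = np.block([[A, B], [C, D]])
N = np.block([[np.eye(n), np.zeros_like(B)],
              [np.zeros_like(C), np.zeros_like(D)]])

vals = eig(M, N, right=False)          # infinite eigenvalues come out as inf/nan
zeros = vals[np.isfinite(vals)]
print("poles:", np.linalg.eigvals(A))
print("transmission zeros:", np.round(zeros, 4))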
Decoupling

The goal of decoupling control is to eliminate complicated loop interactions so that a change
in one process variable will not cause corresponding changes in other process variables. To
do this a non-interacting or decoupling control scheme is used. In this scheme, a
compensation network called a decoupler is used right before the process. This decoupler is
the inverse of the gain array and allows for all measurements to be passed through it in order
to give full decoupling of all of the loops. This is shown pictorially below.
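A minimal numerical sketch of such a static decoupler: take the steady-state gain array G(0) of a hypothetical 2 x 2 process, use D = G(0)^-1 as the decoupler, and check that the combined gain G(0)·D is the identity, i.e. each controller output now drives only its own process variable.

# Minimal sketch: static decoupling of a 2x2 process by inverting its
# steady-state gain array. The gain matrix below is hypothetical.
import numpy as np

G0 = np.array([[2.0, 0.8],     # gains from inputs u1, u2 to output y1
               [0.5, 1.5]])    # gains from inputs u1, u2 to output y2

D = np.linalg.inv(G0)           # decoupler placed right before the process

combined = G0 @ D               # apparent gain seen by the controllers
print("decoupler D =\n", np.round(D, 4))
print("G(0) @ D =\n", np.round(combined, 6))   # ~ identity: loops are decoupled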

Model predictive control

Model Predictive Control (MPC) originated in the late seventies and has developed
considerably since then. The term Model Predictive Control does not designate a specific
control strategy but rather an ample range of control methods which make explicit use of a
model of the process to obtain the control signal by minimizing an objective function. These
design methods lead to controllers which have practically the same structure and present
adequate degrees of freedom. The ideas, appearing to a greater or lesser degree in the whole
predictive control family, are basically:

Explicit use of a model to predict the process output at future time instants (horizon);

Calculation of a control sequence minimizing an objective function; and

A receding strategy, so that at each instant the horizon is displaced towards the future, which
involves the application of the first control signal of the sequence calculated at each step.

The various MPC algorithms (also called Receding Horizon Predictive Control or RHPC) only
differ amongst themselves in the model used to represent the process and the noise, and in the
cost function to be minimized. This type of control is of an open nature, within which many
works have been developed, and it is widely accepted by the academic world and by industry.
There are many applications of predictive control successfully in use at the current time, not
only in the process industry but also in the control of a wide range of other processes.
Applications in the cement industry, drying towers, and robot arms are described in (54),
as are developments for distillation columns, PVC plants, steam generators, and servos. The
good performance of these applications shows the capacity of the MPC to achieve highly
efficient control systems able to operate during long periods of time with hardly any
intervention.

In order to implement this strategy, the basic structure shown in Figure is used. A model is
used to predict the future plant outputs, based on past and current values and on the proposed
optimal future control actions. These actions are calculated by the optimizer taking into
account the cost function (where the future tracking error is considered) as well as the
constraints. The process model plays, in consequence, a decisive role in the controller. The
chosen model must be able to capture the process dynamics to precisely predict the future
outputs and be simple to implement and understand. As MPC is not a unique technique but
rather a set of different methodologies, there are many types of models used in various
formulations. One of the most popular in industry is the Truncated Impulse Response Model,
which is very simple to obtain as it only needs the measurement of the output when the
process is excited with an impulse input. It is widely accepted in industrial practice because it

is very intuitive and can also be used for multivariable processes, although its main
drawbacks are the large number of parameters needed and that only open-loop stable
processes can be described this way. Closely related to this kind of model is the Step
Response Model, obtained when the input is a step.
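The receding-horizon idea can be sketched in a few lines for an unconstrained, linear, quadratic-cost case: stack the predictions over the horizon, minimize the tracking cost by least squares, apply only the first move, and repeat. Everything below (model, horizon, weights) is a hypothetical toy example, not a specific industrial formulation.

# Minimal sketch: unconstrained linear MPC with a quadratic tracking cost.
# Predict over a horizon, minimize, apply only the first control move, repeat.
# The plant, horizon and weights are hypothetical.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])   # discrete-time plant x+ = A x + B u
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])               # measured output y = C x
Np, lam = 20, 0.01                        # prediction horizon, input weight
r = 1.0                                   # constant output reference

# Build prediction matrices: Y = F x0 + Phi U, with U = [u_0 ... u_{Np-1}]^T
F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(Np)])
Phi = np.zeros((Np, Np))
for i in range(Np):
    for j in range(i + 1):
        Phi[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B)[0, 0]

x = np.array([0.0, 0.0])
for k in range(40):                       # closed-loop simulation
    # min_U ||r - F x - Phi U||^2 + lam ||U||^2  (unconstrained least squares)
    Hc = Phi.T @ Phi + lam * np.eye(Np)
    g = Phi.T @ (r - F @ x)
    U = np.linalg.solve(Hc, g)
    u = U[0]                              # receding horizon: apply first move only
    x = A @ x + B[:, 0] * u
    if k % 10 == 0:
        y = (C @ x)[0]
        print(f"k={k:2d}  u={u:+.3f}  y={y:+.3f}")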

UNIT V

MIMO SYSTEMS

PART A
1. What is state and state variable?
2. What is a state vector?
3. What is state space?
4. What is input and output space?
5. What are the advantages of state space modeling using physical variables?
6. What are phase variables?
7. What is the advantage and the disadvantage in canonical form of state model?
8. Write the solution of discrete time state equation.
9. Write the expression to determine the solution of discrete time state equation using z-transform.
10. Write the state model of the discrete time system.
PART B
1. A linear second order single input continuous time system is described by the
following set of differential equations:

Ẋ1(t) = 2X1(t) + 4X2(t)
Ẋ2(t) = 2X1(t) + X2(t) + u(t)

Comment on the controllability and stability of the system.


2. The state space representation of a second order system is
ẋ1 = x1(t) + u
ẋ2 = x1 + 2x2(t) + u

State whether the system is controllable.


3. A system is described by

Ẋ = [1 1; 1 1] X + [0; 1] U
Y = [1 0] X

Check the controllability and observability of the system.
4. A control system has a transfer function given by G(s) = (s + 3) / ((s + 1)(s + 2)^2).

