Lewis 2008. All rights reserved.
These notes are taken from F.L. Lewis, Applied Optimal Control and Estimation: Digital Design
and Implementation, Prentice-Hall, New Jersey, TI Series, Feb. 1992.
2.4 State-Space Transformations and Canonical Forms

Consider the linear time-invariant system
    ẋ = Ax + Bu                                     (2.4.1)
    y = Cx + Du                                     (2.4.2)
with state x(t) ∈ Rⁿ, control input u(t) ∈ Rᵐ, and measured output y(t) ∈ Rᵖ. Define the state-space transformation (SST)
    x̄ = Tx,                                         (2.4.3)
where x̄(t) is the state vector described in the new coordinates and T is a nonsingular n × n transformation matrix that relates the new coordinates to the old coordinates. It is now required to find the state-space model
    dx̄/dt = Āx̄ + B̄u                                 (2.4.4)
    y = C̄x̄ + D̄u                                     (2.4.5)
which describes the dynamics of the new state x̄(t) expressed in the new coordinates. Note that the original system (A,B,C,D) and the transformed system (Ā,B̄,C̄,D̄) have the same input u(t) and output y(t). This is because all we have done is redefine the way we describe the internal state.
Since T is constant, differentiating (2.4.3) gives
    dx̄/dt = Tẋ = T(Ax + Bu).                        (2.4.6)
Substituting x = T⁻¹x̄ yields
    dx̄/dt = (TAT⁻¹)x̄ + (TB)u,                       (2.4.7)
and the output equation becomes
    y = Cx + Du = (CT⁻¹)x̄ + Du.                     (2.4.8)
Comparing with (2.4.4), (2.4.5), the transformed system matrices are
    Ā = TAT⁻¹,  B̄ = TB,  C̄ = CT⁻¹,  D̄ = D.          (2.4.9)
Thus, the transformed state x̄(t) satisfies the state equations (2.4.4), (2.4.5) with transformed system matrices given by (2.4.9).
Two systems (A,B,C,D) and (Ā,B̄,C̄,D̄) are said to be equivalent if they are related by a SST.
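The transformation rule (2.4.9) and the equivalence of the two descriptions can be checked numerically. The following sketch (the matrices here are illustrative, not from the text) verifies that (A,B,C,D) and (TAT⁻¹, TB, CT⁻¹, D) have the same transfer function:

```python
import numpy as np

# Hypothetical (A, B, C, D) and a nonsingular T; none of these numbers
# come from the text.
A = np.array([[0., 1.], [-2., -3.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
D = np.array([[0.]])
T = np.array([[1., 1.], [1., -1.]])

Ti = np.linalg.inv(T)
Ab, Bb, Cb, Db = T @ A @ Ti, T @ B, C @ Ti, D        # (2.4.9)

def H(A, B, C, D, s):
    """Transfer function C(sI - A)^{-1}B + D evaluated at s."""
    return C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B) + D

# Equivalent systems agree at every test frequency.
for s in [1.0, 2.5, 10.0]:
    assert np.allclose(H(A, B, C, D, s), H(Ab, Bb, Cb, Db, s))
```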
[Figure: State-Space Transformation. The maps B: U → X, A: X → X, and C: X → Y are shown, with the SST T mapping X into the new state space X̄.]
Graphical Analysis
It is instructive to examine the SST from a pictorial point of view. The figure shows the state space X = Rⁿ, the input space U = Rᵐ, and the output space Y = Rᵖ. Matrix B is a transformation from controls in U to the state space X (note that it is an n × m matrix, so that it maps from Rᵐ to Rⁿ). Likewise, A maps from X to X, and C maps from X to Y. Since the direct feed matrix D maps from U to Y and is not changed by a SST, we have not shown it in the figure.
The SST T acts as shown to map the state space X into a new state space X̄ with new coordinates. The state x̄(t) resides in X̄. In a fashion similar to that just described, we may draw in the matrices Ā, B̄, C̄, and D̄, showing their respective domains and co-domains.
In terms of these constructions, it is now straightforward to express Ā, B̄, C̄, and D̄ in terms of A, B, C, D, and T. Consider first B̄. In the figure, there are two ways to get from U to X̄. One may either go via B̄, or via B and then T. Recalling that concatenation of operators is performed from right to left (that is, given a u in U, first B operates on u to give Bu, then T operates on Bu to yield TBu), it follows that B̄ = TB, exactly as in (2.4.9).
In similar fashion, there are two ways to go from the left-hand X̄ to the right-hand X̄. One may either go via Ā, or via T⁻¹ (i.e. backwards along the arrow labeled by the left-hand T), then A, then down along the right-hand T. It follows that Ā = TAT⁻¹, again as in (2.4.9).
State Equations
The equations of motion are
    ẍ₁ = k(x₂ − x₁) + b(ẋ₂ − ẋ₁)                    (1)
    ẍ₂ = k(x₁ − x₂) + b(ẋ₁ − ẋ₂) + u.
Defining the state vector
    x = [x₁ ẋ₁ x₂ ẋ₂]ᵀ,                             (2)
these may be written in state-space form as
    ẋ = [0 1 0 0; −k −b k b; 0 0 0 1; k b −k −b] x + [0; 0; 0; 1] u,    (3)
where semicolons separate the rows of a matrix.
State-Space Transformation
Consider the state-space transformation
    z = Tx                                          (4)
with z = [z₁ ż₁ z₂ ż₂]ᵀ and nonsingular transformation matrix given by
    T = [1/2 0 1/2 0; 0 1/2 0 1/2; 1 0 −1 0; 0 1 0 −1].    (5)
Note that z₁ = (x₁ + x₂)/2 is the position of the center of mass and z₂ = x₁ − x₂ is the distance between the masses.
Computing
    T⁻¹ = [1 0 1/2 0; 0 1 0 1/2; 1 0 −1/2 0; 0 1 0 −1/2],    (6)
the system matrices in the new coordinates are given by (2.4.9). Performing these operations yields the transformed system description
    ż = [0 1 0 0; 0 0 0 0; 0 0 0 1; 0 0 −2k −2b] z + [0; 1/2; 0; −1] u = Āz + B̄u.    (7)
The center-of-mass subsystem has the characteristic equation
    Δ₁(s) = det [s −1; 0 s] = s² = 0.               (8)
Thus, under the influence of the external force u(t) the center of mass moves according to Newton's law u = 2mz̈₁.
The distance-between-the-masses subsystem has the characteristic equation
    Δ₂(s) = det [s −1; 2k s + 2b] = s² + 2bs + 2k = 0.    (9)
That is, under the influence of the external force u(t) the distance between the masses satisfies the equation of a damped harmonic oscillator
    z̈₂ + 2bż₂ + 2kz₂ = −u                           (10)
with natural frequency and damping ratio
    ωₙ = √(2k),  ζ = b/√(2k).                       (11)
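The decoupling claimed in (7) can be verified numerically. A sketch, with illustrative spring and damper constants:

```python
import numpy as np

k, b = 4.0, 0.5    # illustrative spring and damper constants
A = np.array([[0., 1., 0., 0.],
              [-k, -b, k, b],
              [0., 0., 0., 1.],
              [k, b, -k, -b]])
B = np.array([[0.], [0.], [0.], [1.]])

# SST (5): z = [z1, z1', z2, z2']^T, z1 = (x1 + x2)/2, z2 = x1 - x2.
T = np.array([[0.5, 0., 0.5, 0.],
              [0., 0.5, 0., 0.5],
              [1., 0., -1., 0.],
              [0., 1., 0., -1.]])

Ab = T @ A @ np.linalg.inv(T)
Bb = T @ B
# The center-of-mass and distance subsystems decouple, as in (7).
assert np.allclose(Ab[:2, 2:], 0.) and np.allclose(Ab[2:, :2], 0.)
assert np.allclose(Ab[2:, 2:], [[0., 1.], [-2*k, -2*b]])
assert np.allclose(Bb.ravel(), [0., 0.5, 0., -1.])
```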
(Recall that for nonsingular matrices M and N, (MN)⁻¹ = N⁻¹M⁻¹.)
These results only say that by a mathematical redefinition of
coordinates, we do not change the physical properties of the system.
Thus, the system poles and input/output transmission properties do
not change.
Two systems (A,B,C,D) and (Ā,B̄,C̄,D̄) are said to be equivalent if they are related by a SST. Equivalent systems have the same transfer function and characteristic equation. Different minimal realizations of the same transfer function are equivalent.
SST and Reachability
We have seen that the SST T transforms the system (A,B,C,D) to the new system description (TAT⁻¹, TB, CT⁻¹, D). It is interesting to determine the effect of T on the reachability matrix.
The reachability matrix of the transformed system (Ā,B̄) is
    Ūₙ = [B̄  ĀB̄  Ā²B̄ ... Āⁿ⁻¹B̄]
       = [TB  TAT⁻¹TB  (TAT⁻¹)²TB ...]
       = T[B  AB  A²B ...],
so that
    Ūₙ = TUₙ.                                       (2.4.10)
Since T is nonsingular,
    rank(Ūₙ) = rank(Uₙ),                            (2.4.11)
so that rank(Ūₙ) = n if and only if rank(Uₙ) = n. Thus, reachability is preserved by a SST. Clearly, the control effectiveness in a system is not changed by simply redefining the states in a mathematical sense.
If the system is reachable, the relation (2.4.10) offers a way of finding the SST that relates two system descriptions (A,B,C,D) and (Ā,B̄,C̄,D̄). To determine T, one need only find the reachability matrix Uₙ of the original system and the reachability matrix Ūₙ of the transformed system. This may be accomplished knowing only A, B, Ā, and B̄. Then, T is obtained from (2.4.10) by
    T = ŪₙUₙ⁺,                                      (2.4.12)
with Uₙ⁺ the Moore-Penrose inverse [Rao and Mitra 1971] of Uₙ. Since Uₙ has full row rank n, Uₙ⁺ is the right inverse, which is given by
    Uₙ⁺ = Uₙᵀ(UₙUₙᵀ)⁻¹.                              (2.4.13)
Therefore,
    T = ŪₙUₙᵀ(UₙUₙᵀ)⁻¹.                              (2.4.14)
If the system has only one input, then Uₙ is square and Uₙ⁺ = Uₙ⁻¹, the usual matrix inverse. Then,
    T = ŪₙUₙ⁻¹.                                     (2.4.15)
A similar development may be carried out for the observability matrix, which transforms according to
    V̄ₙ = VₙT⁻¹,                                     (2.4.16)
with Vₙ the observability matrix in the original basis and V̄ₙ the observability matrix in the new coordinates. Since T is nonsingular, rank(V̄ₙ) = rank(Vₙ), so that the original system (A,B,C,D) is observable if and only if the transformed system (Ā,B̄,C̄,D̄) is observable.
If (A,C) is observable, the SST may be determined from (A,C) and the transformed (Ā,C̄) by finding Vₙ and V̄ₙ and then using
    T = (V̄ₙᵀV̄ₙ)⁻¹V̄ₙᵀVₙ.                              (2.4.17)
Jordan Normal Form
The eigenvalues λᵢ of the system matrix A are the roots of the characteristic equation
    Δ(s) = |sI − A| = 0,                            (2.4.18)
that is,
    Δ(λᵢ) = 0,  i = 1, ..., n.                      (2.4.19)
These are just the poles of the system with A as system matrix; they are always n in number. Since the matrices [A − λᵢI] are singular by the definition of λᵢ, one can now find vectors vᵢ¹ in their nullspaces, so that
    [A − λᵢI]vᵢ¹ = 0,                               (2.4.20)
or
    Avᵢ¹ = λᵢvᵢ¹.                                   (2.4.21)
The vᵢ¹ are the (rank 1) eigenvectors of A. If A is simple, it has n linearly independent eigenvectors, and (2.4.21) may be written for all i at once as
    A[v₁¹ v₂¹ ... vₙ¹] = [v₁¹ v₂¹ ... vₙ¹] diag{λ₁, λ₂, ..., λₙ}.    (2.4.22)
Defining the modal matrix
    M = [v₁¹ v₂¹ ... vₙ¹]                           (2.4.23)
and
    J = diag{λ₁, λ₂, ..., λₙ},                      (2.4.24)
this becomes
    AM = MJ.                                        (2.4.25)
(2.4.25)
(2.4.26)
11
(2.4.27)
(2.4.28)
12
(2.4.29)
1 1
J1
1 1
J=
1
J2
.
J3
2 1
(2.4.30)
BJ= M-1B,
CJ= CM.
(2.4.32)
Denote the rows of M⁻¹ by w₁ᵀ, w₂ᵀ, ..., wₙᵀ, so that
    M⁻¹ = [w₁ᵀ; w₂ᵀ; ...; wₙᵀ].                      (2.4.33)
Then (taking A simple for ease) M⁻¹A = JM⁻¹ may be written as
    [w₁ᵀ; w₂ᵀ; ...; wₙᵀ] A = [λ₁w₁ᵀ; λ₂w₂ᵀ; ...; λₙwₙᵀ],    (2.4.34)
or
    wᵢᵀA = λᵢwᵢᵀ,                                   (2.4.35)
that is,
    wᵢᵀ[A − λᵢI] = 0.                               (2.4.36)
Thus, the wᵢ are the left eigenvectors of A. Moreover, since
    M⁻¹M = I,                                       (2.4.37)
it follows that
    wᵢᵀvⱼ = δᵢⱼ,                                    (2.4.38)
with δᵢⱼ the Kronecker delta. That is, the sets {wᵢ} and {vᵢ} are reciprocal bases.
Example 1
Let
    A = [3 1 1; −2 0 −2; 0 0 2],
which has the characteristic polynomial
    Δ(λ) = |λI − A| = (λ − 1)(λ − 2)².
For the eigenvalue λ₁ = 1 one has
    (A − λ₁I)v = [2 1 1; −2 −1 −2; 0 0 1][a; b; c] = 0,
so
    v₁¹ = [1; −2; 0].
For the eigenvalue λ₂ = 2 one has
    (A − λ₂I)v = [1 1 1; −2 −2 −2; 0 0 0][a; b; c] = 0,
which imposes only the single condition a + b + c = 0, so there are TWO linearly independent rank 1 eigenvectors for λ₂ = 2. Select a = 0; then
    v₂¹ = [0; 1; −1].
Select b = 0; then
    v₃¹ = [1; 0; −1].
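The eigenstructure found in Example 1 is easy to confirm numerically:

```python
import numpy as np

A = np.array([[3., 1., 1.],
              [-2., 0., -2.],
              [0., 0., 2.]])
# Eigenvalues are the roots of (lambda - 1)(lambda - 2)^2.
assert np.allclose(np.sort(np.linalg.eigvals(A).real), [1., 2., 2.])

v1 = np.array([1., -2., 0.])   # rank 1 eigenvector for lambda = 1
v2 = np.array([0., 1., -1.])   # lambda = 2, choice a = 0
v3 = np.array([1., 0., -1.])   # lambda = 2, choice b = 0
assert np.allclose(A @ v1, 1.0 * v1)
assert np.allclose(A @ v2, 2.0 * v2)
assert np.allclose(A @ v3, 2.0 * v3)
# Three independent eigenvectors: A is simple despite the repeated eigenvalue.
assert np.linalg.matrix_rank(np.column_stack([v1, v2, v3])) == 3
```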
Suppose the system has been transformed to Jordan form, with J as in (2.4.30). Partitioning the state compatibly with the three Jordan blocks, the state equations are
    d/dt [x₁; x₂; x₃] = [J₁ 0 0; 0 J₂ 0; 0 0 J₃][x₁; x₂; x₃] + [B₁; B₂; B₃] u    (2.4.39)
    y = [C₁ C₂ C₃][x₁; x₂; x₃].                     (2.4.40)
The direct feed matrix D has been set to zero for notational ease. Since J is block diagonal, so is (sI − J), which may therefore be inverted block by block to obtain the transfer function
    H(s) = C₁(sI − J₁)⁻¹B₁ + C₂(sI − J₂)⁻¹B₂ + C₃(sI − J₃)⁻¹B₃.
Therefore, it is seen that the Jordan form partitions the system into subsystems connected in parallel corresponding to the Jordan blocks. We call this a parallel form realization of the transfer function. To implement this system using integrators we may use three parallel channels. If J has the form in (2.4.30), one channel will have three integrators, one will have two, and one will have a single integrator.
JNF From Partial Fraction Expansion
For a system (A,B,C), the Jordan form is given by (M⁻¹AM, M⁻¹B, CM), with M the modal matrix. (Set D = 0 for convenience.) Therefore, the transfer function may be written as
    H(s) = C(sI − A)⁻¹B = CM · M⁻¹(sI − A)⁻¹M · M⁻¹B = CM(sI − J)⁻¹M⁻¹B.
For simple A, (sI − J)⁻¹ = diag{1/(s − λᵢ)}, so that
    H(s) = C[v₁ v₂ ... vₙ] diag{1/(s − λ₁), ..., 1/(s − λₙ)} [w₁ᵀ; w₂ᵀ; ...; wₙᵀ] B.
Then
    H(s) = Σᵢ₌₁ⁿ CvᵢwᵢᵀB / (s − λᵢ).                 (2.4.41)
(We suppress the superscript "1" on the vᵢ.) This gives a partial fraction expansion (PFE) in terms of the modal structure of A. The simplicity of A means precisely that the PFE has no terms involving (s − λᵢ)ᵏ for k > 1.
The residue of the pole λᵢ is equal to CvᵢwᵢᵀB, so that if either Cvᵢ or wᵢᵀB is zero, then λᵢ does not contribute to the input-output response. Note that for a MIMO system with m inputs and p outputs, the residues are p × m matrices and not scalars.
It is not difficult to show that if wᵢᵀB = 0 for any wᵢ satisfying (2.4.36), then rank(Uₙ) ≠ n, so that the system is not reachable (see the problems). Then, we say that the eigenvalue λᵢ is unreachable. Similarly, if Cvᵢ = 0 for any rank 1 right eigenvector vᵢ, then the system is unobservable, since rank(Vₙ) ≠ n. Then, λᵢ is said to be unobservable. In fact, these statements are necessary and sufficient; they are known as the Popov-Belevitch-Hautus (PBH) eigenvector tests for reachability and observability [Kailath 1980].
Therefore, a pole that is not both reachable and observable makes no contribution to the transfer function. This means that the associated natural mode plays no role in the input/output behavior of the system. Algebraically, this means that the transfer function has a zero that cancels out that pole.
The PFE gives us a convenient way to find a parallel state-space realization for SISO systems. Thus, given a transfer function H(s), one may perform a PFE to obtain
    H(s) = Σᵢ₌₁ⁿ Kᵢ / (s − λᵢ),                     (2.4.42)
and then write down the parallel realization
    ẋ = diag{λ₁, λ₂, ..., λₙ} x + [b₁; b₂; ...; bₙ] u    (2.4.43)
    y = [c₁ c₂ ... cₙ] x,
where the gains are selected so that cᵢbᵢ = Kᵢ.
Example 2.4-2: Pole/Zero Cancellation and JNF Realization
Consider the transfer function
    H(s) = (s² − 1)/(s³ + 6s² + 11s + 6).           (1)

a. Pole/Zero Cancellation
The PFE is
    H(s) = (s² − 1)/((s + 1)(s + 2)(s + 3)) = 0/(s + 1) + (−3)/(s + 2) + 4/(s + 3).    (2)
Since the residue of the pole at s = −1 is zero, H(s) is equal to the reduced transfer function
    H(s) = (s − 1)/((s + 2)(s + 3)),                (3)
for
    (s − 1)/((s + 2)(s + 3)) = (s + 1)(s − 1)/((s + 1)(s + 2)(s + 3)),
which has the same PFE. Thus, for SISO systems a zero term in the PFE means exactly that there is a pole/zero cancellation in H(s).
b. JNF Realization
From the PFE, one may write down a JNF state-space realization of H(s). One realization is
    ẋ = [−2 0; 0 −3] x + [1; 1] u                   (4)
    y = [−3 4] x.                                   (5)
One should verify that the transfer function of this system is the given H(s).
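The suggested verification can be done numerically by comparing C(sI − A)⁻¹B against H(s) at a few test frequencies:

```python
import numpy as np

# H(s) = (s^2 - 1)/(s^3 + 6s^2 + 11s + 6) vs. the JNF realization (4)-(5).
num = np.poly1d([1., 0., -1.])
den = np.poly1d([1., 6., 11., 6.])

A = np.diag([-2., -3.])
B = np.array([[1.], [1.]])
C = np.array([[-3., 4.]])

for s in [0.5, 1.0, 3.0]:
    H_ss = (C @ np.linalg.solve(s * np.eye(2) - A, B))[0, 0]
    assert np.isclose(H_ss, num(s) / den(s))
```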
Example 2.4-3: Parallel Implementation of a Digital Controller
If A has a complex conjugate eigenvalue pair λᵢ = σ + jω, λ̄ᵢ = σ − jω, the corresponding complex Jordan block
    Jᵢ = [λᵢ 0; 0 λ̄ᵢ]                               (1)
may be replaced by the real form
    Jᵢ = [σ ω; −ω σ],                               (2)
which is obtained by using the real and imaginary parts of the complex eigenvector as the corresponding columns of the modal matrix:
    M = [... Re vᵢ  Im vᵢ ...].                     (3)
The realization then involves only real numbers.
Consider the discrete-time controller
    xₖ₊₁ = Axₖ + Buₖ                                (4)
    yₖ = Cxₖ + Duₖ                                  (5)
given by
    xₖ₊₁ = [0.875 0.1 −0.025; −0.025 0.9 0.075; 0.075 −0.1 0.975] xₖ + [1 −1; 1 1; 1 1] uₖ    (6)
    yₖ = [0.5 0.5 0] xₖ.                            (7)
Using the SST x̄ₖ = Txₖ, where T⁻¹ is the modal matrix
    T⁻¹ = [1 1 −1; 1 1 1; 1 −1 1],                  (8)
the controller description in the new coordinates is found to be
    x̄ₖ₊₁ = [0.95 0 0; 0 0.9 0.1; 0 −0.1 0.9] x̄ₖ + [1 0; 0 0; 0 1] uₖ    (9)
    yₖ = [1 1 0] x̄ₖ.                                (10)
It is now seen that the controller poles are at z = 0.95, z = 0.9 ± j0.1.
This form of the controller consists of a first-order subsystem
in parallel with a second-order subsystem. Thus, it will not suffer
as many numerical problems on a finite wordlength machine as a direct
implementation of the third-order controller.
Defining the state components by x̄ₖ = [x̄¹ₖ x̄²ₖ x̄³ₖ]ᵀ and the control input components by uₖ = [u¹ₖ u²ₖ]ᵀ, a numerically stable difference equation implementation of this controller is now given by
    x̄¹ₖ₊₁ = 0.95x̄¹ₖ + u¹ₖ
    x̄²ₖ₊₁ = 0.9x̄²ₖ + 0.1x̄³ₖ                         (11)
    x̄³ₖ₊₁ = −0.1x̄²ₖ + 0.9x̄³ₖ + u²ₖ
    yₖ = x̄¹ₖ + x̄²ₖ.
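The original and parallel-form controller realizations can be cross-checked by driving both with the same input sequence; a sketch using the matrices of this example:

```python
import numpy as np

# Example 2.4-3: original controller (6)-(7) and its modal matrix M = T^{-1}.
A = np.array([[0.875, 0.1, -0.025],
              [-0.025, 0.9, 0.075],
              [0.075, -0.1, 0.975]])
B = np.array([[1., -1.],
              [1., 1.],
              [1., 1.]])
C = np.array([[0.5, 0.5, 0.]])
M = np.array([[1., 1., -1.],
              [1., 1., 1.],
              [1., -1., 1.]])

T = np.linalg.inv(M)
Ab, Bb, Cb = T @ A @ M, T @ B, C @ M
# Parallel form (9): one real pole and one real 2 x 2 block.
assert np.allclose(Ab, [[0.95, 0., 0.], [0., 0.9, 0.1], [0., -0.1, 0.9]])

# Same input sequence into both realizations -> identical outputs.
rng = np.random.default_rng(0)
x = np.zeros(3)
xb = np.zeros(3)
for _ in range(50):
    u = rng.standard_normal(2)
    assert np.allclose(C @ x, Cb @ xb)
    x = A @ x + B @ u
    xb = Ab @ xb + Bb @ u
```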
Modal Decomposition
The Jordan form reveals a great deal about the structure of the
system in the time domain as well as the frequency domain. Let us
now investigate this structure.
Since A = MJM⁻¹, and e^{At} is a polynomial in A (by the Cayley-Hamilton theorem), we may write
    e^{At} = Me^{Jt}M⁻¹.
Therefore, the solution to the continuous state equation from Section 2.1 may be written as
    x(t) = e^{At}x₀ + ∫₀ᵗ e^{A(t−τ)}Bu(τ) dτ
         = Me^{Jt}M⁻¹x₀ + ∫₀ᵗ Me^{J(t−τ)}M⁻¹Bu(τ) dτ.
For notational ease, let us take A simple so that J = diag{λᵢ}. The superscript "1" denoting rank 1 eigenvectors will be suppressed. Focusing on the first term, one has
    x(t) = [v₁ v₂ ... vₙ] diag{e^{λ₁t}, e^{λ₂t}, ..., e^{λₙt}} [w₁ᵀ; w₂ᵀ; ...; wₙᵀ] x₀    (2.4.44)
         = Σᵢ₌₁ⁿ vᵢe^{λᵢt}wᵢᵀx₀.                     (2.4.45)
In similar fashion, one may deal with the forced solution to obtain the complete state solution
    x(t) = Σᵢ₌₁ⁿ (wᵢᵀx₀)e^{λᵢt}vᵢ + Σᵢ₌₁ⁿ vᵢ ∫₀ᵗ e^{λᵢ(t−τ)} wᵢᵀBu(τ) dτ,    (2.4.46)
and the output
    y(t) = Σᵢ₌₁ⁿ (wᵢᵀx₀)e^{λᵢt}Cvᵢ + Σᵢ₌₁ⁿ Cvᵢ ∫₀ᵗ e^{λᵢ(t−τ)} wᵢᵀBu(τ) dτ.    (2.4.47)
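The modal expansion of the unforced response (the first term of (2.4.46)) can be checked against the matrix exponential; the system below is illustrative:

```python
import numpy as np
from scipy.linalg import expm

# Unforced response via the modal expansion, for an illustrative simple A
# with eigenvalues -1 and -2.
A = np.array([[0., 1.], [-2., -3.]])
x0 = np.array([1., 0.])

lam, M = np.linalg.eig(A)     # columns of M: right eigenvectors v_i
W = np.linalg.inv(M)          # rows of M^{-1}: left eigenvectors w_i^T

for t in [0.0, 0.5, 2.0]:
    # x(t) = sum_i (w_i^T x0) e^{lam_i t} v_i
    x_modal = sum((W[i] @ x0) * np.exp(lam[i] * t) * M[:, i] for i in range(2))
    assert np.allclose(x_modal.real, expm(A * t) @ x0)
```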
Reachable Canonical Form
Any SISO system may also be described by a single n-th order differential equation
    y⁽ⁿ⁾ + a₁y⁽ⁿ⁻¹⁾ + ... + aₙy = b₁u⁽ⁿ⁻¹⁾ + ... + bₙu,    (2.4.48)
where superscript (i) denotes the i-th derivative and aᵢ, bᵢ are known real numbers. Taking the Laplace transform, with initial conditions of zero, yields the frequency-domain description
    (sⁿ + a₁sⁿ⁻¹ + ... + aₙ)Y(s) = (b₁sⁿ⁻¹ + ... + bₙ)U(s),    (2.4.49)
so that the transfer function is
    H(s) = Y(s)/U(s) = (b₁sⁿ⁻¹ + ... + bₙ)/(sⁿ + a₁sⁿ⁻¹ + ... + aₙ).    (2.4.50)
Taking n = 4 for illustration, a state-space realization of (2.4.50) is
    ẋ = [0 1 0 0; 0 0 1 0; 0 0 0 1; −a₄ −a₃ −a₂ −a₁] x + [0; 0; 0; 1] u = Ax + Bu    (2.4.51)
    y = [b₄ b₃ b₂ b₁] x = Cx,                       (2.4.52)
which is known as the reachable canonical form (RCF). Its transfer function may be verified to be
    H(s) = (b₁s³ + b₂s² + b₃s + b₄)/(s⁴ + a₁s³ + a₂s² + a₃s + a₄).    (2.4.53)
25
0
U4 = 0
0
1
0
0
1
-a1
0
1
-a1
a12-a2
-a1
,
a12-a2
3
-a1 + 2a1a2 - a3
(2.4.54)
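A sketch confirming that the RCF is reachable for arbitrary coefficients (the aᵢ, bᵢ values below are illustrative):

```python
import numpy as np

# Build the n = 4 RCF (2.4.51)-(2.4.52) and confirm |det U_4| = 1,
# so the RCF is reachable no matter what the a_i are.
a = np.array([7.0, -2.0, 3.5, 0.25])        # a_1, a_2, a_3, a_4 (illustrative)
b = np.array([1.0, 0.0, 2.0, -1.0])         # b_1, b_2, b_3, b_4 (illustrative)

n = len(a)
A = np.eye(n, k=1)                          # superdiagonal ones
A[-1, :] = -a[::-1]                         # last row: [-a_4 -a_3 -a_2 -a_1]
B = np.zeros((n, 1)); B[-1, 0] = 1.0
C = b[::-1].reshape(1, -1)                  # [b_4 b_3 b_2 b_1]

U = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
assert np.isclose(abs(np.linalg.det(U)), 1.0)
```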
Example 2.4-4: Motor with Flexible Coupling
Consider the system described by
    d³y/dt³ + 11ÿ + 39ẏ + 29y = 29(ü + u),          (1)
where u(t) is the control stroke (between ±1) and y(t) is the motor output angle.

a. Transfer Function
The transfer function is written by inspection as
    H(s) = (29s² + 29)/(s³ + 11s² + 39s + 29),      (2)
or
    H(s) = 29(s² + 1)/((s + 1)[(s + 5)² + 2²]).     (3)
b. RCF Realization
From H(s) one may directly write down the RCF realization
    ẋ = [0 1 0; 0 0 1; −29 −39 −11] x + [0; 0; 1] u = Ax + Bu    (4)
    y = [29 0 29] x.                                (5)
Now, the theory of Section 2.1 could be used to analytically find the
time response y(t) given any control input u(t) and initial conditions.
c. Time Response
The step response of this system may be computed using the theory of Section 2.1; the natural modes corresponding to both the real motor pole and the complex spring coupler poles are clearly visible.
When the step command is applied, the motor angle initially increases. As the spring couples the motion to the inertial load, it begins to move, and due to the snap provided by the spring it pulls the motor angle to a value that is too high. Then, the load angle decreases as the spring oscillates; this retards the motor angle. Finally, after the spring oscillation dies out (at a rate given by the decay term e⁻⁵ᵗ), the motor angle settles at the desired value of 1 rad. The final motion is governed by the motor natural mode of e⁻ᵗ.
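The settling behavior described above can be reproduced by simulating the step response of (4)-(5); the zero-order-hold discretization used here is a standard device, not part of the text:

```python
import numpy as np
from scipy.linalg import expm

# Step response of the RCF model (4)-(5) of Example 2.4-4. The DC gain is
# H(0) = 29/29 = 1, so the motor angle should settle near 1 rad.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [-29., -39., -11.]])
B = np.array([[0.], [0.], [1.]])
C = np.array([[29., 0., 29.]])

dt = 0.001
Ad = expm(A * dt)                              # exact step-invariant discretization
Bd = np.linalg.solve(A, (Ad - np.eye(3)) @ B)  # valid since A is nonsingular
x = np.zeros((3, 1))
for _ in range(int(10.0 / dt)):                # unit step input for 10 s
    x = Ad @ x + Bd
y_final = (C @ x)[0, 0]
assert abs(y_final - 1.0) < 1e-2               # settled at about 1 rad
```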
Observable Canonical Form
A second state-space realization of the transfer function (2.4.50) is the observable canonical form (OCF); for n = 4 it is
    ẋ = [−a₁ 1 0 0; −a₂ 0 1 0; −a₃ 0 0 1; −a₄ 0 0 0] x + [b₁; b₂; b₃; b₄] u = Ax + Bu    (2.4.55)
    y = [1 0 0 0] x = Cx.                           (2.4.56)
Except for the first column, the A matrix consists of zeros with superdiagonal ones. This means precisely that all the integrators are connected in series (i.e. ẋᵢ = xᵢ₊₁ plus feedback from x₁ and feedforward from u(t)). The C matrix is all zero except for the first entry, meaning exactly that the output is hard-wired to the right-most integrator.
Note that the OCF state equations can be written down directly from H(s) without drawing the block diagram; the B matrix is just composed of the numerator coefficient vector stood on end, and the first column of the A matrix consists of the negatives of the denominator coefficient vector stood on end.
The remarks made in connection with the RCF on ordering the states differently and handling a transfer function with relative degree of zero hold here also.
By direct computation using A and C in the OCF state-space form, the observability matrix of the OCF is found to be
    V₄ = [C; CA; CA²; CA³]
       = [1 0 0 0; −a₁ 1 0 0; a₁²−a₂ −a₁ 1 0; −a₁³+2a₁a₂−a₃ a₁²−a₂ −a₁ 1],    (2.4.57)
which is nonsingular (its determinant is 1) for any values of the aᵢ. Thus, the OCF is always observable.
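The dual fact for the OCF can be confirmed the same way (coefficients illustrative):

```python
import numpy as np

# Build the n = 4 OCF (2.4.55)-(2.4.56) and confirm |det V_4| = 1,
# so the OCF is observable no matter what the a_i are.
a = np.array([2.0, -1.0, 0.5, 3.0])     # a_1..a_4 (illustrative)
b = np.array([1.0, 4.0, 0.0, -2.0])     # b_1..b_4 (illustrative)

n = len(a)
A = np.zeros((n, n))
A[:, 0] = -a                            # first column: -a_1 ... -a_4
A[:-1, 1:] = np.eye(n - 1)              # superdiagonal ones
B = b.reshape(-1, 1)
C = np.zeros((1, n)); C[0, 0] = 1.0

V = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
assert np.isclose(abs(np.linalg.det(V)), 1.0)
```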
State Equation:
    ẋ = Ax + Bu
    y = Cx

State-Space Transformation:
    If x̄ = Tx, then
    dx̄/dt = (TAT⁻¹)x̄ + (TB)u
    y = (CT⁻¹)x̄

Modal Decomposition (for A diagonalizable):
    y(t) = Σᵢ₌₁ⁿ (wᵢᵀx₀)e^{λᵢt}Cvᵢ + Σᵢ₌₁ⁿ Cvᵢ ∫₀ᵗ e^{λᵢ(t−τ)} wᵢᵀBu(τ) dτ
where
    (A − λᵢI)vᵢ = 0,  wᵢᵀ(A − λᵢI) = 0

Transfer Function:
    H(s) = Y(s)/U(s) = (b₁s³ + b₂s² + b₃s + b₄)/(s⁴ + a₁s³ + a₂s² + a₃s + a₄)

RCF:
    ẋ = [0 1 0 0; 0 0 1 0; 0 0 0 1; −a₄ −a₃ −a₂ −a₁] x + [0; 0; 0; 1] u = Ax + Bu
    y = [b₄ b₃ b₂ b₁] x = Cx

OCF:
    ẋ = [−a₁ 1 0 0; −a₂ 0 1 0; −a₃ 0 0 1; −a₄ 0 0 0] x + [b₁; b₂; b₃; b₄] u = Ax + Bu
    y = [1 0 0 0] x = Cx
Example 2.4-5: A Nonminimal System
Consider the system described by the differential equation
    d³y/dt³ + 6ÿ + 11ẏ + 6y = ü − u.                (1)
Its transfer function is
    H(s) = (s² − 1)/(s³ + 6s² + 11s + 6).           (2)
This is the same transfer function that was used in Example 2.4-2.
a. RCF
From H(s) one may directly write down the RCF realization
    ẋ = [0 1 0; 0 0 1; −6 −11 −6] x + [0; 0; 1] u   (3)
    y = [−1 0 1] x.
b. Reachability and Observability
The PFE is
    H(s) = (s² − 1)/((s + 1)(s + 2)(s + 3)) = 0/(s + 1) + (−3)/(s + 2) + 4/(s + 3).    (4)
Since this has a zero residue for the pole at s = −1, the system cannot be both reachable and observable. One may easily check U₃ to see that the system is reachable. (Indeed, since (3) is in RCF, it must be reachable.)
The observability matrix is
    V₃ = [C; CA; CA²] = [−1 0 1; −6 −12 −6; 36 60 24],    (5)
which is singular (each row annihilates the eigenvector [1; −1; 1] of the pole at s = −1), so the system is not observable.
Because of the zero residue, the common factor (s + 1) may be cancelled:
    H(s) = (s + 1)(s − 1)/((s + 1)(s + 2)(s + 3)) = (s − 1)/((s + 2)(s + 3)) = (s − 1)/(s² + 5s + 6),    (6)
which also has the PFE (4). A RCF realization of the reduced transfer function is given by
    ẋ = [0 1; −6 −5] x + [0; 1] u                   (7)
    y = [−1 1] x.
The reader should verify that this system is both reachable and observable and hence minimal.
The differential equation corresponding to the reduced transfer function (6) may be directly written down. It is
    ÿ + 5ẏ + 6y = u̇ − u.                            (8)
This ODE has the same input/output behavior as (1); that is, if the initial conditions are zero in each equation, their outputs y(t) are identical for the same inputs u(t).
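That the third-order realization (3) and the minimal system (7) have identical input/output behavior can be confirmed by comparing their transfer functions at a few test frequencies:

```python
import numpy as np

# Third-order RCF (3) vs. reduced second-order RCF (7): same H(s).
A3 = np.array([[0., 1., 0.], [0., 0., 1.], [-6., -11., -6.]])
B3 = np.array([[0.], [0.], [1.]])
C3 = np.array([[-1., 0., 1.]])

A2 = np.array([[0., 1.], [-6., -5.]])
B2 = np.array([[0.], [1.]])
C2 = np.array([[-1., 1.]])

def H(A, B, C, s):
    return (C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B))[0, 0]

for s in [0.5, 1.5, 4.0]:
    assert np.isclose(H(A3, B3, C3, s), H(A2, B2, C2, s))
    assert np.isclose(H(A2, B2, C2, s), (s - 1) / ((s + 2) * (s + 3)))
```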
Minimal Realizations
We say that any system (A,B,C) that is not both reachable and observable is not minimal. If a system is not minimal, then one can find a state-space description of smaller dimension that has the same transfer function.
Minimality can be important from several points of view. Suppose, for instance, that one wants to simulate the ODE (1) of the previous example on an analog computer. Suppose, however, that the only available analog computer has only four amplifiers. Then, the equation (1) cannot be simulated. However, the minimal system (7) can easily be simulated using four amplifiers - one integrator for each of the two states, one summer to manufacture y(t), and one summer to provide the feedback. See the figure.
If the system is SISO, then minimality corresponds to the fact
that the transfer function has no pole-zero cancellations. However,
in the multivariable case, this is not so. That is, the transfer
function may have no apparent pole/zero cancellations yet still be
nonminimal.
To illustrate this consider the next example, which shows how
to obtain a minimal realization of a given MIMO transfer function as
long as it has only linear terms in its PFE.
Example 2.4-6: Gilbert's Method for Minimal Multivariable Realizations
Consider the six-state, two-input, two-output system
    ẋ = [−1 0 0 0 0 0; 0 0 1 0 0 0; 0 −3 −4 0 0 0; 0 0 0 0 1 0; 0 0 0 −2 −3 0; 0 0 0 0 0 −3] x + [1 0; 0 0; 1 0; 0 0; 0 1; 0 1] u    (1)
    y = [1 0 0 2 0 0; 0 1 0 0 0 1] x.               (2)
[Figure: Block diagram of the system (1)-(2), showing six integrators 1/s with states x₁ through x₆, inputs u₁, u₂, and outputs y₁, y₂.]
a. Nonminimality of the System
Using our knowledge of the RCF, the transfer function may be written down by inspection. It is
    H(s) = [1/(s + 1), 2/(s² + 3s + 2); 1/(s² + 4s + 3), 1/(s + 3)].    (3)
To understand how this is written down, note, for instance, that the second subsystem is
    A₂ = [0 1; −3 −4],  B₂ = [0 0; 1 0],  C₂ = [0 0; 1 0],    (4)
which has the second column of B₂ and the first row of C₂ equal to zero. Therefore, this subsystem is in RCF and maps u₁ to y₂. It thus corresponds to the (2,1) entry of the 2 × 2 transfer function matrix H(s).
There is no evident pole/zero cancellation in H(s), as each term is irreducible. However, one may find the reachability and observability matrices to convince oneself that the system is not minimal.
b. Minimal Realization by Gilbert's Method
The first step is to factor the denominators,
    H(s) = [1/(s + 1), 2/((s + 1)(s + 2)); 1/((s + 1)(s + 3)), 1/(s + 3)],
and perform a PFE of each entry, collecting the terms for each pole:
    H(s) = (1/(s + 1))[1 2; 1/2 0] + (1/(s + 2))[0 −2; 0 0] + (1/(s + 3))[0 0; −1/2 1].    (5)
Note that in the MIMO case the residues of the poles are matrices and
not scalars.
The second step is to factor the residue matrices into a minimal number of vector outer products of the form vzᵀ. The result is
    H(s) = ([1; 0][1 2] + [0; 1][1/2 0])/(s + 1) + [1; 0][0 −2]/(s + 2) + [0; 1][−1/2 1]/(s + 3).    (6)
It should be understood that this step is not unique. That is, other definitions of the outer products are possible.
The third step is to pull the column vectors out to the left and the row vectors out to the right to yield
    H(s) = [1 0 1 0; 0 1 0 1] diag{(s + 1)⁻¹, (s + 1)⁻¹, (s + 2)⁻¹, (s + 3)⁻¹} [1 2; 1/2 0; 0 −2; −1/2 1].    (7)
We are now done, for comparing this to H(s) = C(sI − A)⁻¹B shows that it is just four scalar subsystems in parallel. The middle diagonal matrix corresponds to (sI − A)⁻¹, so that the state-space system with transfer function H(s) is
    ẋ = diag{−1, −1, −2, −3} x + [1 2; 1/2 0; 0 −2; −1/2 1] u = Ax + Bu    (8)
    y = [1 0 1 0; 0 1 0 1] x.                       (9)