A. Transfer Functions and Realizations
1 Preliminaries
We begin with the transfer function H(s) of some m-input, p-output linear time-invariant continuous-time input-output system. The transfer function H(s) is called real-rational if each of its p × m entries is a rational function in s with real coefficients. Consider an n-dimensional realization Σ of H(s) as

$$\Sigma : \begin{cases} \dot{x}(t) = Ax(t) + Bu(t), \qquad x(t_0) = x_0 \\ y(t) = Cx(t) + Du(t) \end{cases}$$
In what follows, we shall reserve the letters m, n, p for the numbers of inputs, states,
and outputs respectively. The dimension of a realization is the number of states n
(or the size of the A matrix).
We will often write Σ above more compactly using the packed-matrix notation as

$$\Sigma = \left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right]$$
2 Building realizations
It is easy enough to compute the transfer function from the realization as

$$H(s) = C(sI - A)^{-1}B + D$$

The inverse problem of building internal descriptions from transfer functions is less trivial and is the subject of realization theory.
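As a quick numerical check of this formula, here is a minimal sketch (the realization below is a made-up example, not one from the notes) that evaluates H(s) both from (A, B, C, D) and directly from the rational function:

```python
import numpy as np

# hypothetical example realization: n = 2 states, m = 1 input, p = 1 output
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def transfer_function(A, B, C, D, s):
    """Evaluate H(s) = C (sI - A)^{-1} B + D at the complex point s."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# this companion-form example realizes H(s) = 1/(s^2 + 3s + 2)
s = 1.0 + 2.0j
print(transfer_function(A, B, C, D, s))   # realization-based value
print(1.0 / (s**2 + 3*s + 2))             # direct rational evaluation
```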
We begin with a single-input single-output real-rational transfer function

$$H(s) = \frac{\alpha_{n-1}s^{n-1} + \cdots + \alpha_1 s + \alpha_0}{s^n + \beta_{n-1}s^{n-1} + \cdots + \beta_1 s + \beta_0} + d$$
A realization of H(s) is then

$$\Sigma = \left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right] = \left[\begin{array}{ccccc|c} 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \\ -\beta_0 & -\beta_1 & -\beta_2 & \cdots & -\beta_{n-1} & 1 \\ \hline \alpha_0 & \alpha_1 & \alpha_2 & \cdots & \alpha_{n-1} & d \end{array}\right] \tag{2}$$

A second realization is furnished by the dual

$$\Sigma_{\mathrm{dual}} = \left[\begin{array}{c|c} A^* & C^* \\ \hline B^* & D^* \end{array}\right] = \left[\begin{array}{ccccc|c} 0 & 0 & \cdots & 0 & -\beta_0 & \alpha_0 \\ 1 & 0 & \cdots & 0 & -\beta_1 & \alpha_1 \\ \vdots & & \ddots & & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & -\beta_{n-1} & \alpha_{n-1} \\ \hline 0 & 0 & \cdots & 0 & 1 & d \end{array}\right] \tag{3}$$
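The sketch below assembles the realization (2) programmatically. The coefficient vectors alpha, beta, the scalar d, and the cubic example are all made up for illustration:

```python
import numpy as np

def companion_realization(alpha, beta, d):
    """Build realization (2) of H(s) = (sum alpha_i s^i)/(s^n + sum beta_i s^i) + d."""
    n = len(beta)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)        # superdiagonal of ones
    A[-1, :] = -np.asarray(beta)      # last row: -beta_0 ... -beta_{n-1}
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = np.asarray(alpha, dtype=float).reshape(1, n)
    D = np.array([[float(d)]])
    return A, B, C, D

# example: H(s) = (s + 4)/(s^3 + 2s^2 + 3s + 5) + 1
A, B, C, D = companion_realization([4.0, 1.0, 0.0], [5.0, 3.0, 2.0], 1.0)

s = 2.0
H_real = C @ np.linalg.solve(s * np.eye(3) - A, B) + D
H_poly = (s + 4) / (s**3 + 2*s**2 + 3*s + 5) + 1
print(H_real.item(), H_poly)          # the two values should agree
```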
(b) The composite realization

$$\Sigma_{\mathrm{composite}} = \left[\begin{array}{cc|cc} A_1 & 0 & B_1 & 0 \\ 0 & A_2 & 0 & B_2 \\ \hline C_1 & C_2 & D_1 & D_2 \end{array}\right]$$

realizes the transfer function

$$H(s) = \begin{bmatrix} H_1(s) & H_2(s) \end{bmatrix}$$
When we use this technique the resulting realizations will generally have (unnecessarily) high dimension. We will later discuss methods for producing minimal realizations from these less succinct composite realizations.
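As an illustration of the block-stacking in (b), a small sketch with two made-up scalar realizations H₁ and H₂, checking the concatenation numerically:

```python
import numpy as np
from scipy.linalg import block_diag

def eval_tf(A, B, C, D, s):
    return C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B) + D

# two made-up single-input realizations
A1, B1, C1, D1 = np.array([[-1.0]]), np.array([[1.0]]), np.array([[2.0]]), np.array([[0.0]])
A2, B2, C2, D2 = np.array([[-3.0]]), np.array([[1.0]]), np.array([[1.0]]), np.array([[1.0]])

# composite realization of H(s) = [H1(s)  H2(s)]
A = block_diag(A1, A2)
B = block_diag(B1, B2)              # each input enters its own subsystem
C = np.hstack([C1, C2])
D = np.hstack([D1, D2])

s = 1.5
print(eval_tf(A, B, C, D, s))       # 1 x 2 row of transfer function values
print(eval_tf(A1, B1, C1, D1, s), eval_tf(A2, B2, C2, D2, s))
```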
3 Markov parameters
If we (formally) expand (sI − A)⁻¹ as the power series

$$(sI - A)^{-1} = \sum_{k=1}^{\infty} A^{k-1} s^{-k},$$

we obtain

$$H(s) = D + \sum_{k=1}^{\infty} CA^{k-1}B \, s^{-k}.$$

The coefficient matrices

$$H_0 = D \quad \text{and} \quad H_k = CA^{k-1}B, \quad k \geq 1$$

depend only on the transfer function H(s) and not on the particular realization (A, B, C, D) of H(s). These input-output invariants are called the Markov parameters associated with the transfer function H(s).
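A small sketch (made-up realization and a random change of basis) confirming that the Markov parameters do not change under a state-space change of basis:

```python
import numpy as np

rng = np.random.default_rng(0)

# made-up realization and a random similarity transformation
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])
T = rng.standard_normal((2, 2)) + 2 * np.eye(2)   # generically invertible
Ti = np.linalg.inv(T)

A2, B2, C2 = Ti @ A @ T, Ti @ B, C @ T            # similar realization

for k in range(1, 6):
    Hk  = C  @ np.linalg.matrix_power(A,  k - 1) @ B
    Hk2 = C2 @ np.linalg.matrix_power(A2, k - 1) @ B2
    print(k, np.allclose(Hk, Hk2))                # True for every k
```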
4 Equivalent realizations
Definition Two realizations Σ = (A, B, C, D) and Σ₁ = (A₁, B₁, C₁, D₁) are called equivalent if they realize the same input-output description.

Proposition Σ and Σ₁ are equivalent if and only if they have the same Markov parameters, i.e.

$$D = D_1 \quad \text{and} \quad CA^{k-1}B = C_1 A_1^{k-1} B_1, \quad k \geq 1$$
5 Similar realizations
There is a particular class of equivalent realizations that interests us.

Let us begin with the realization Σ of some transfer function H(s). Let T ∈ C^{n×n} be nonsingular. We can define new states via the bijection

$$T x_{\text{new}} = x$$

The resulting realization Σ_new = (T⁻¹AT, T⁻¹B, CT, D) is said to be similar to Σ; similar realizations are clearly equivalent.
B. The State and Output Response
The state-transition matrix Φ(t, t₀) is defined by the matrix differential equation

$$\frac{\partial}{\partial t}\Phi(t, t_0) = A\,\Phi(t, t_0), \qquad \Phi(t_0, t_0) = I$$

For linear time-invariant systems, we recognize this matrix differential equation from our earlier study of linear algebra and can immediately write its solution via the familiar matrix exponential as

$$\Phi(t, t_0) = e^{A(t - t_0)}$$

2 Properties of Φ(t, t₀)

Theorem

(a) Φ(t₀, t₀) = I

(b) Φ(t, t₀) = Φ(t, t₁)Φ(t₁, t₀)

(c) Φ(t, t₀) is nonsingular for all −∞ < t, t₀ < ∞, and

$$[\Phi(t, t_0)]^{-1} = \Phi(t_0, t)$$

These properties of the state-transition matrix are called the semi-group properties. We can infer other properties of the transition matrix (for linear time-invariant systems) from our earlier results on matrix exponentials. For example, we have

$$\det(\Phi(t, t_0)) = e^{\operatorname{trace}(A)(t - t_0)}$$
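These identities are easy to confirm numerically with the matrix exponential (a made-up A below; scipy's expm):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -0.5]])      # made-up example matrix
t0, t1, t = 0.3, 1.1, 2.0

Phi = lambda t, t0: expm(A * (t - t0))        # state-transition matrix

print(np.allclose(Phi(t0, t0), np.eye(2)))                    # property (a)
print(np.allclose(Phi(t, t0), Phi(t, t1) @ Phi(t1, t0)))      # property (b)
print(np.allclose(np.linalg.inv(Phi(t, t0)), Phi(t0, t)))     # property (c)
print(np.isclose(np.linalg.det(Phi(t, t0)),
                 np.exp(np.trace(A) * (t - t0))))             # det formula
```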
The solution of these differential equations is

$$x(t) = \Phi(t, t_0)\, x_0 \tag{5}$$

Indeed, it is clear from (5) that Φ(t, t₀) relates the state of the autonomous system at time t to the state at time t₀, hence the phrase state-transition matrix.
We first determine the response of the realization Σ in the case when the initial conditions are zero, i.e. the response only to the application of the input u. The following result provides a closed-form solution for the zero-state response.

Theorem Consider the realization Σ above with initial condition x(t₀) = 0. The solution of the differential equations defining Σ is

$$x(t) = \int_{t_0}^{t} \Phi(t, \tau) B(\tau) u(\tau)\, d\tau$$

$$y(t) = C(t)x(t) + D(t)u(t)$$
Of particular interest to us in the remainder of these notes is the special case of linear time-invariant realizations. Here, the total response is

$$x(t) = e^{A(t - t_0)} x_0 + \int_{t_0}^{t} e^{A(t - \tau)} B u(\tau)\, d\tau \tag{6}$$

$$y(t) = Cx(t) + Du(t)$$
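For a constant (step) input u ≡ 1 and invertible A, the convolution integral in (6) evaluates in closed form to A⁻¹(e^{A(t−t₀)} − I)B, which gives a convenient check of (6). A sketch with a made-up stable realization:

```python
import numpy as np
from scipy.linalg import expm

# made-up stable realization and initial state
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [-1.0]])
t0, t = 0.0, 1.5

# closed form of (6) for u(t) = 1:  x(t) = e^{A(t-t0)} x0 + A^{-1}(e^{A(t-t0)} - I) B
x_closed = expm(A * (t - t0)) @ x0 \
    + np.linalg.solve(A, (expm(A * (t - t0)) - np.eye(2)) @ B)

# brute-force quadrature of the convolution integral in (6)
taus = np.linspace(t0, t, 2001)
integrand = np.stack([expm(A * (t - tau)) @ B for tau in taus])   # shape (N, 2, 1)
x_quad = expm(A * (t - t0)) @ x0 + np.trapz(integrand, taus, axis=0)

print(np.allclose(x_closed, x_quad, atol=1e-6))
```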
C. Internal stability
1 Internal Stability
Definition The realization Σ is called internally stable if for the autonomous system

$$\lim_{t \to \infty} x(t) = 0$$

or, equivalently, given any ε > 0, there exists T(ε, t₀) such that ‖x(t)‖ ≤ ε for t ≥ T. The realization Σ is called uniformly internally stable if the convergence above of x(t) to zero is uniform in t₀, i.e. T does not depend on t₀.

Proposition The realization Σ is internally stable if and only if

$$\lim_{t \to \infty} \|\Phi(t, t_0)\| = 0$$
Given that internal stability of the realization depends only on the matrix A, we
shall abuse notation and speak of the matrix A being stable to mean that all its
eigenvalues lie in the open left half complex plane.
Consider the Lyapunov equation

$$AX + XA^* + Q = 0 \tag{7}$$

Theorem The following statements are equivalent:

(a) The Lyapunov equation (7) has a unique solution for some Q

(b) The Lyapunov equation (7) has a solution for every Q

(c) The Lyapunov equation (7) has a unique solution for every Q

(d) For all λᵢ, λⱼ ∈ Spec(A),

$$\lambda_i + \overline{\lambda_j} \neq 0$$

Observe that if A is stable, condition (d) above is met, and as a consequence assertions (a) through (c) above hold.
Theorem

(a) Suppose A is stable. Then, for any Q, the Lyapunov equation (7) has a unique solution X. A closed-form expression for this solution is

$$X = \int_0^{\infty} e^{At} Q e^{A^* t}\, dt$$

If, in particular, Q ≥ 0, then X ≥ 0. If Q > 0, then X > 0.

(b) Suppose for some Q > 0 the Lyapunov equation (7) has a solution X > 0. Then A is stable.
The requirement that the solution X be positive definite in part (b) of the above theorem can be relaxed. This will be done when we introduce stabilizability and detectability.
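Numerically, equation (7) can be solved with scipy's continuous Lyapunov solver. Note that solve_continuous_lyapunov(A, Q) solves AX + XA* = Q, so our convention (7) requires passing −Q. A sketch (made-up stable A and Q > 0) that also checks the integral formula by coarse quadrature:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # stable: eigenvalues -1, -3
Q = np.array([[2.0, 0.0], [0.0, 1.0]])     # Q > 0

# scipy solves A X + X A^H = Q, so pass -Q to solve (7): A X + X A* + Q = 0
X = solve_continuous_lyapunov(A, -Q)

# compare against X = int_0^inf e^{At} Q e^{A*t} dt (integrand decays like e^{-2t})
ts = np.linspace(0.0, 20.0, 4001)
vals = np.stack([expm(A * t) @ Q @ expm(A.T * t) for t in ts])
X_quad = np.trapz(vals, ts, axis=0)

print(np.allclose(X, X_quad, atol=1e-3))      # coarse quadrature check
print(np.all(np.linalg.eigvalsh(X) > 0))      # X > 0 since Q > 0
```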
D. Controllability
1 T -Controllability
Definition Consider the realization Σ and fix T > 0. A state ξ ∈ Cⁿ is said to be T-controllable if there exists an input v(t), 0 ≤ t ≤ T, which drives Σ from initial condition x(0) = ξ to terminal condition x(T) = 0.

Let C_T ⊆ Cⁿ be the set of T-controllable states of Σ.

The realization Σ is called T-controllable if every state ξ ∈ Cⁿ is T-controllable, i.e. if C_T = Cⁿ.
3 Controllability Grammians

Definition The controllability Grammian of the realization Σ on the interval [a, b] is the matrix W(a, b) ∈ C^{n×n} defined by

$$W(a, b) = \int_a^b e^{-At} B B^* e^{-A^* t}\, dt$$

Theorem Consider the realization Σ. Then,

(a) C_T = R[W(0, T)]

(b) Let ξ ∈ C_T. From (a) above, there exists a vector η ∈ Cⁿ such that ξ = W(0, T)η. Then, the input

$$v(t) = -B^* e^{-A^* t}\, \eta, \qquad 0 \leq t \leq T$$

drives Σ from initial state x(0) = ξ to terminal state x(T) = 0.
Theorem Consider the realization Σ. Then,

$$\mathcal{C}_T = \mathcal{R}\begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix} = \mathcal{R}[M_c]$$

The matrix M_c ∈ C^{n×nm} above is called the controllability matrix. Observe that this matrix does not depend on the time interval T allotted for controllability. We therefore conclude that if a state ξ ∈ Cⁿ is controllable in time T, it is controllable in any (nonzero) interval of time. We may therefore drop the superfluous argument T in the notion of T-controllability and in the notation for the controllable subspace, which we will now write as C.
The characterization of controllability given in this theorem, while elegant and of important theoretical value, is numerically unattractive. This is because computation of M_c is difficult and susceptible to numerical instability. An alternative characterization for stable realizations, involving only symmetric matrix operations, is given by the following result:

Theorem Consider the realization Σ, and suppose A is stable. Then,

$$\mathcal{C} = \mathcal{R}[W_c]$$

where W_c is the unique solution of the Lyapunov equation

$$A W_c + W_c A^* + B B^* = 0$$
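A sketch (made-up stable realization) comparing the two characterizations: the range of M_c and the range of W_c obtained from the Lyapunov equation coincide, so in particular their ranks agree:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 1.0, 0.0],
              [0.0, -2.0, 1.0],
              [0.0, 0.0, -3.0]])
B = np.array([[0.0], [0.0], [1.0]])

n = A.shape[0]
Mc = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# A Wc + Wc A* + B B* = 0  (scipy solves A X + X A^H = Q, hence the minus)
Wc = solve_continuous_lyapunov(A, -B @ B.T)

print(np.linalg.matrix_rank(Mc), np.linalg.matrix_rank(Wc))   # equal ranks
```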
It can be readily verified that the controllability matrix for this realization is

$$M_c = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & \cdots & 1 & \ast \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & 1 & \cdots & \ast & \ast \\ 1 & \ast & \cdots & \ast & \ast \end{bmatrix}$$

which is nonsingular, so that this realization is controllable.
6 Reachability

Definition Consider the realization Σ and fix T > 0. A state ξ ∈ Cⁿ is said to be T-reachable if there exists an input v(t), −T ≤ t ≤ 0, which drives Σ from initial condition x(−T) = 0 to terminal condition x(0) = ξ.

Proposition A state ξ ∈ Cⁿ is T-reachable if and only if it is T-controllable.

The above proposition states that controllability and reachability are equivalent notions (for continuous-time linear time-invariant realizations). Following accepted practice, we shall hence speak only of controllability.

This equivalence between controllability and reachability breaks down for discrete-time linear time-invariant systems.
E. Observability

Observability is dual to controllability. Let UO denote the unobservable subspace of the realization Σ: the set of initial states ξ ∈ Cⁿ that produce identically zero output under zero input. For a stable realization Σ,

$$\mathcal{UO} = \mathcal{N}[W_o]$$

where the observability Grammian W_o is the unique solution of the Lyapunov equation

$$A^* W_o + W_o A + C^* C = 0$$
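A brief numerical check (a made-up example with one unobservable state) that N[W_o] agrees with the null space of the observability matrix [C; CA; …; CA^{n−1}]:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, null_space

A = np.array([[-1.0, 0.0], [0.0, -2.0]])
C = np.array([[1.0, 0.0]])               # second state never appears in y

n = A.shape[0]
Mo = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# A* Wo + Wo A + C* C = 0  (scipy solves A X + X A^H = Q)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

print(null_space(Mo))                    # spans the unobservable subspace
print(null_space(Wo))                    # same subspace, from the Grammian
```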
F. The Kalman decomposition
1 Easy Version
We begin with the simpler version of the Kalman decomposition. By an appropriate
choice of basis for the state-space, we can exhibit clearly the controllable modes
and the uncontrollable modes.
Theorem Consider the realization Σ = (A, B, C, D) of some transfer function H(s) and let dim(C) = r. Let {b₁, …, b_r} be a basis for C and extend this by {b_{r+1}, …, b_n} to form a basis for Cⁿ. Define the invertible matrix

$$T = \begin{bmatrix} b_1 & \cdots & b_r & b_{r+1} & \cdots & b_n \end{bmatrix}$$

Let Σ_new be the realization similar to Σ obtained by the state-space change of basis T x_new = x. Then Σ_new has the form

$$\Sigma_{\text{new}} = \left[\begin{array}{cc|c} A_{11} & A_{12} & B_1 \\ 0 & A_{22} & 0 \\ \hline C_1 & C_2 & D \end{array}\right]$$

where A₁₁ ∈ C^{r×r}, and the r-dimensional subsystem (A₁₁, B₁, C₁, D) is controllable and realizes H(s).
It is clear that the Kalman decomposition is not unique: different basis choices yield
different decompositions, though they all share the structure described in the above
theorem.
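A sketch of this construction (made-up partially-uncontrollable realization; orthonormal bases via scipy), showing the zero blocks appear:

```python
import numpy as np
from scipy.linalg import orth, null_space

# made-up realization with a 1-dimensional uncontrollable part
A = np.array([[-1.0, 0.0, 1.0],
              [1.0, -2.0, 0.0],
              [0.0, 0.0, -3.0]])
B = np.array([[1.0], [0.0], [0.0]])

n = A.shape[0]
Mc = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

V1 = orth(Mc)                          # basis b_1 ... b_r for C
V2 = null_space(V1.T)                  # completion b_{r+1} ... b_n
T = np.hstack([V1, V2])

An = np.linalg.inv(T) @ A @ T
Bn = np.linalg.inv(T) @ B
r = V1.shape[1]
print(np.allclose(An[r:, :r], 0))      # lower-left block of A_new vanishes
print(np.allclose(Bn[r:], 0))          # bottom of B_new vanishes
```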
2 General Version
We are now in a position to present the general version of the Kalman decomposition.
Theorem Consider the realization Σ = (A, B, C, D) of some transfer function H(s). Let B = {b₁, …, b_{n₁}} be a basis for C ∩ UO. Extend B by {b_{n₁+1}, …, b_{n₂}} to form a basis for C, and by {b_{n₂+1}, …, b_{n₃}} to form a basis for UO. Finally, extend the collection of vectors {b₁, …, b_{n₃}} by {b_{n₃+1}, …, b_n} to complete a basis for Cⁿ. Define the invertible matrix

$$T = \begin{bmatrix} b_1 & \cdots & b_n \end{bmatrix}$$

Let Σ_new be the realization similar to Σ obtained by the state-space change of basis T x_new = x. Then Σ_new has the block structure

$$\Sigma_{\text{new}} = \left[\begin{array}{cccc|c} A_{11} & A_{12} & A_{13} & A_{14} & B_1 \\ 0 & A_{22} & 0 & A_{24} & B_2 \\ 0 & 0 & A_{33} & A_{34} & 0 \\ 0 & 0 & 0 & A_{44} & 0 \\ \hline 0 & C_2 & 0 & C_4 & D \end{array}\right]$$

with the four groups of states corresponding, in order, to the controllable-unobservable, controllable-observable, uncontrollable-unobservable, and uncontrollable-observable parts; the subsystem (A₂₂, B₂, C₂, D) is controllable and observable and realizes H(s).
3 Minimal Realizations
Definition A realization is called minimal if it is both controllable and observable.
Theorem Let Σ₁ and Σ₂ be two minimal realizations of some transfer function H(s). Then Σ₁ and Σ₂ are similar; in particular, they have the same dimension.

Consider the transfer function H(s) and let Σ be any minimal realization of H(s). From the above result, the dimension of Σ is independent of the particular minimal realization chosen. This number depends only on the transfer function H(s) and is called the McMillan degree δ(H(s)) of H(s).
Next let Σ be any other realization of H(s) and let n be its dimension. If Σ is also minimal, n = δ(H(s)). If Σ is not minimal, we can produce a lower-dimensional minimal realization of H(s) via the Kalman decomposition, in which case n > δ(H(s)). In either event,

$$n \geq \delta(H(s))$$

Equivalently, there is no realization of H(s) of dimension less than δ(H(s)), and every minimal realization of H(s) has dimension δ(H(s)), hence the phrase minimal realization.
G. Balanced Realizations and Model Reduction
(b) The minimum energy necessary to establish the terminal state x(0) = ξ is

$$\|v_{\text{opt}}\|^2 = \xi^* W_c^{-1} \xi$$

(c) The matrix W_c ∈ C^{n×n} above is the unique solution of the Lyapunov equation

$$A W_c + W_c A^* + B B^* = 0$$
From the above result, we conclude that vectors ξ for which ξ*W_c⁻¹ξ is large represent almost uncontrollable states, in that they are difficult to reach with input signals of reasonable energy. Thus, in the normal course of operation of the input-output system H(s) realized by Σ, these states will be well approximated by the zero state. As a result, we can effectively approximate H(s) by removing these states. This would result in a more compact, albeit approximate, internal description of H(s).
Consider next the output of the autonomous system from initial state x(0) = ξ:

$$y(t) = Ce^{At}\xi, \qquad t \geq 0$$

Theorem The energy of this output signal is

$$\int_0^{\infty} \|y(t)\|^2\, dt = \xi^* W_o\, \xi$$

This result tells us that vectors ξ for which ξ*W_o ξ is small represent states that are almost unobservable, in that they have little effect on the output trajectory. As far as the input-output behaviour of the system H(s) realized by Σ is concerned, we may well approximate these states by the zero state. Again, we can effectively approximate H(s) by removing these states. This would result in a reasonable internal description of H(s) of lower dimension.
The difficulty with this procedure is that we may remove almost uncontrollable states that happen to be very observable. As we will see now, by an intelligent choice of state-space basis, we can resolve this potential bias. In the resulting new realization, the controllability and observability Grammians are equal. Thus all states are as uncontrollable as they are unobservable.
3 Balanced realizations
We first examine the effect of a state-space change of basis on the Grammians Wc and
Wo .
Proposition Let Σ = (A, B, C, D) be any internally stable, minimal realization of some transfer function H(s) with Grammians W_c and W_o. Let T be an invertible matrix and consider the similar realization Σ̃ = (T⁻¹AT, T⁻¹B, CT, D). Then, the Grammians of Σ̃ are

(a) $\tilde{W}_c = T^{-1} W_c [T^{-1}]^*$

(b) $\tilde{W}_o = T^* W_o T$
As an immediate consequence of the above result, we have the following:
Lemma Let Σ = (A, B, C, D) be any internally stable, minimal realization of some transfer function H(s). Then,

(a) The eigenvalues of W_o W_c are all positive.

(b) These eigenvalues, say σ₁², σ₂², …, σ_n², are independent of the particular internally stable, minimal realization of H(s) chosen. We order them as

$$\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n > 0$$

Note that as the realization Σ is minimal, the Grammians W_c and W_o are invertible, and thus σ_n > 0.
Theorem Let Σ = (A, B, C, D) be any internally stable, minimal realization of some transfer function H(s), and let W_c and W_o be the controllability and observability Grammians of Σ respectively. Since $W_c^{1/2} W_o W_c^{1/2}$ is positive-definite, it admits the spectral decomposition

$$W_c^{1/2} W_o W_c^{1/2} = U Q^2 U^*$$

with U unitary and Q diagonal and positive-definite. Define

$$T = W_c^{1/2} U Q^{-1/2}$$

and consider the similar realization Σ̃ = (T⁻¹AT, T⁻¹B, CT, D). Then, the Grammians of Σ̃ are equal and diagonal, i.e.

$$\tilde{W}_c = \tilde{W}_o = Q$$
The realization Σ̃ above is said to be balanced.
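A sketch of the balancing computation (made-up stable, minimal realization; the square root W_c^{1/2} is formed via an eigendecomposition):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# made-up stable, minimal realization
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

Wc = solve_continuous_lyapunov(A, -B @ B.T)      # A Wc + Wc A* + B B* = 0
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)    # A* Wo + Wo A + C* C = 0

w, V = np.linalg.eigh(Wc)
Wc_half = V @ np.diag(np.sqrt(w)) @ V.T          # symmetric square root of Wc

lam, U = np.linalg.eigh(Wc_half @ Wo @ Wc_half)  # = U Q^2 U*
lam, U = lam[::-1], U[:, ::-1]                   # descending: sigma_1 >= ... >= sigma_n
Q = np.diag(np.sqrt(lam))

T = Wc_half @ U @ np.diag(lam ** -0.25)          # T = Wc^{1/2} U Q^{-1/2}
Ti = np.linalg.inv(T)

print(np.allclose(Ti @ Wc @ Ti.T, Q))            # balanced: new Wc equals Q
print(np.allclose(T.T @ Wo @ T, Q))              # balanced: new Wo equals Q
```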
4 Model Reduction
In many problems of practical interest, it is desirable to obtain a low-order approxi-
mate model of some transfer function H(s). This is the problem of model reduction.
For example, suppose H(s) is some controller we have designed. As most controllers
are implemented digitally, if the sampling rate is too high, we may not have enough
computation time available between samples to implement a very complex controller.
Also, complex high-order controllers are less reliable. For these reasons we may seek
to obtain a low-order approximation to the controller H(s).
Conversely, suppose H(s) is some very high-order plant model. This situation arises, for example, in the context of finite-element modeling in chemical reactors, where we can have realizations with thousands of state variables. As we shall soon discover, many control design techniques provide controllers whose complexity is comparable to that of the plant. Worse, these design methods involve time-consuming, complicated calculations which are difficult or impossible to conduct for very high-order plant models. For these reasons we may seek to obtain a low-order approximation to the plant H(s).
When attempting to construct an approximation of the transfer function H(s) it is
generally an excellent idea to retain the unstable part of H(s) and simplify only the
stable part. Indeed, the unstable part contains so much vital information it would be
dangerous to conduct any model reduction on it. This decomposition of H(s) into its
stable and unstable parts can be done using partial fractions, or more generally, as
follows. Start with a minimal realization (A, B, C, D) of the transfer function H(s).
Construct a nonsingular matrix T so that

$$T^{-1} A T = \begin{bmatrix} A_s & 0 \\ 0 & A_u \end{bmatrix}$$

where A_s is stable and A_u is unstable. This can be done, for example, using the Jordan form, or better still, the real Schur form. Now, with the state-space change of basis T we obtain the similar realization

$$\Sigma_{\text{new}} = \left[\begin{array}{cc|c} A_s & 0 & B_s \\ 0 & A_u & B_u \\ \hline C_s & C_u & D \end{array}\right]$$
which is the decomposition we wanted. There are more efficient, numerically well-behaved methods to conduct this decomposition; these involve Riccati equations and factorization theory.
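A sketch of the Schur-based construction (made-up A with one unstable eigenvalue): an ordered real Schur form puts the stable eigenvalues first, and a Sylvester equation then removes the remaining coupling block:

```python
import numpy as np
from scipy.linalg import schur, solve_sylvester

# made-up A with one unstable eigenvalue
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 1.0],
              [1.0, 0.0, -2.0]])
n = A.shape[0]

# real Schur form, stable (left-half-plane) eigenvalues ordered first:
# Z* A Z = S with S block upper triangular, k = number of stable eigenvalues
S, Z, k = schur(A, output='real', sort='lhp')
As, A12, Au = S[:k, :k], S[:k, k:], S[k:, k:]

# remove the coupling block: solve the Sylvester equation As X - X Au = -A12
X = solve_sylvester(As, -Au, -A12)
M = np.block([[np.eye(k), X], [np.zeros((n - k, k)), np.eye(n - k)]])
T = Z @ M                              # overall change of basis

An = np.linalg.inv(T) @ A @ T          # should be block diag(As, Au)
print(np.allclose(An[:k, k:], 0.0), np.allclose(An[k:, :k], 0.0))
```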
There are many techniques available for model reduction of linear systems. These
include singular perturbation methods, balanced truncation, Hankel model reduction,
etc. Of these, only singular perturbation methods apply to nonlinear models as well.
5 Balanced Truncations

The preceding development on balanced realizations offers an ad hoc method for model reduction. This is as follows. Start with a minimal realization of the transfer function and decompose it into its stable part H_s(s) and its unstable part H_u(s). Construct a balanced realization Σ = (A, B, C, D) for H_s(s). Write this realization in the partitioned form

$$\Sigma = \left[\begin{array}{cc|c} A_{11} & A_{12} & B_1 \\ A_{21} & A_{22} & B_2 \\ \hline C_1 & C_2 & D \end{array}\right]$$

where A₁₁ ∈ C^{k×k} corresponds to the k largest σᵢ. Then the transfer function

$$H_k(s) = C_1 (sI - A_{11})^{-1} B_1 + D$$

(together with the unstable part H_u(s)) serves as a (possibly good) approximation of H(s). The transfer function H_k(s) (or its associated realization) is called the k-th balanced truncation of H(s).
The balanced truncations H_k(s) are not optimal approximations of H(s) in any mathematical sense. They are based on sound heuristics, and one can derive coarse general bounds on the approximation error. We can easily establish the following:

Theorem Balanced truncations are minimal, stable, and balanced.
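A sketch of the whole truncation step (a made-up stable, minimal realization, so there is no unstable part to split off; hypothetical truncation order k = 1; the balancing computation repeats the one in Section G.3), comparing frequency responses on the jω-axis:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# made-up stable, minimal realization
A = np.array([[-1.0, 0.5, 0.0],
              [0.0, -2.0, 1.0],
              [0.0, 0.0, -5.0]])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 0.0, 0.2]])
D = np.array([[0.0]])

# balance (same computation as in Section G.3)
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
w, V = np.linalg.eigh(Wc)
Wch = V @ np.diag(np.sqrt(w)) @ V.T
lam, U = np.linalg.eigh(Wch @ Wo @ Wch)
lam, U = lam[::-1], U[:, ::-1]
T = Wch @ U @ np.diag(lam ** -0.25)
Ab, Bb, Cb = np.linalg.inv(T) @ A @ T, np.linalg.inv(T) @ B, C @ T

k = 1                                   # keep the k largest sigma_i
A11, B1, C1 = Ab[:k, :k], Bb[:k], Cb[:, :k]

def H(A, B, C, D, s):
    return (C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B) + D).item()

for wf in [0.1, 1.0, 10.0]:             # |H - H_k| at a few frequencies
    print(wf, abs(H(A, B, C, D, 1j * wf) - H(A11, B1, C1, D, 1j * wf)))
```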