
18/24-771 LTI Systems: Open Loop Aspects

A Transfer Functions and Realizations

B The State and Output Response

C Stability

D Controllability

E Observability

F The Kalman Decomposition

G Balanced Realizations and Model Reduction

Courtesy of Professor Kameshwar Poolla


UC Berkeley

A. Transfer Functions and Realizations

1 Preliminaries
We begin with the transfer function H(s) of some m-input, p-output linear time-invariant continuous-time input-output system. The transfer function H(s) is called real-rational if each of its p × m entries is a rational function in s with real coefficients.
Consider an n-dimensional realization Σ of H(s):

    \dot{x}(t) = A x(t) + B u(t), \quad x(t_0) = x_0
    y(t) = C x(t) + D u(t)

In what follows, we shall reserve the letters m, n, p for the numbers of inputs, states,
and outputs respectively. The dimension of a realization is the number of states n
(or the size of the A matrix).
We will often write Σ above more compactly using the packed-matrix notation

    \Sigma = \begin{bmatrix} A & B \\ C & D \end{bmatrix}

or in text as Σ ∼ (A, B, C, D).


A realization is called real if its state-space matrices are all real. Associated with a realization Σ ∼ (A, B, C, D) is its dual realization

    \Sigma_{dual} = \begin{bmatrix} A^* & C^* \\ B^* & D^* \end{bmatrix}

which has p inputs and m outputs.


We now proceed to study various properties of the realization Σ and of the transfer function H(s). We will deal exclusively with real-rational transfer functions.

2 Building realizations
It is easy enough to compute the transfer function from the realization as

    H(s) = D + C(sI - A)^{-1} B    (1)

The inverse problem of building internal descriptions from transfer functions is less
trivial and is the subject of realization theory.
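
Formula (1) translates directly into code. The following is a minimal NumPy sketch (our own illustration; the function name and interface are not from the notes). It evaluates H at a complex frequency via a linear solve rather than an explicit matrix inverse:

```python
import numpy as np

def transfer_function(A, B, C, D, s):
    """Evaluate H(s) = D + C (sI - A)^{-1} B at a complex frequency s."""
    n = A.shape[0]
    return D + C @ np.linalg.solve(s * np.eye(n) - A, B)
```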

We begin with a single-input single-output real-rational transfer function

    H(s) = \frac{\beta_{n-1} s^{n-1} + \cdots + \beta_1 s + \beta_0}{s^n + \alpha_{n-1} s^{n-1} + \cdots + \alpha_1 s + \alpha_0} + d

We can verify that



    \begin{bmatrix} A & B \\ C & D \end{bmatrix} =
    \left[\begin{array}{ccccc|c}
    0 & 1 & 0 & \cdots & 0 & 0 \\
    0 & 0 & 1 & \cdots & 0 & 0 \\
    \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
    0 & 0 & 0 & \cdots & 1 & 0 \\
    -\alpha_0 & -\alpha_1 & -\alpha_2 & \cdots & -\alpha_{n-1} & 1 \\ \hline
    \beta_0 & \beta_1 & \beta_2 & \cdots & \beta_{n-1} & d
    \end{array}\right]    (2)

realizes H(s). This realization is variously referred to as the phase-variable, or companion, or controllable canonical form.
Since H(s) is a real single-input single-output transfer function, it follows that the dual realization

    \Sigma_{dual} = \begin{bmatrix} A^* & C^* \\ B^* & D^* \end{bmatrix} =
    \left[\begin{array}{ccccc|c}
    0 & 0 & \cdots & 0 & -\alpha_0 & \beta_0 \\
    1 & 0 & \cdots & 0 & -\alpha_1 & \beta_1 \\
    \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
    0 & 0 & \cdots & 0 & -\alpha_{n-2} & \beta_{n-2} \\
    0 & 0 & \cdots & 1 & -\alpha_{n-1} & \beta_{n-1} \\ \hline
    0 & 0 & \cdots & 0 & 1 & d
    \end{array}\right]    (3)

also realizes H(s). This is called the observable canonical form.
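
Both canonical forms are easy to build programmatically. Here is a NumPy sketch (our own illustration, assuming coefficient arrays alpha and beta listed lowest degree first) that constructs (2), and obtains (3) as its transpose dual:

```python
import numpy as np

def controllable_canonical(alpha, beta, d=0.0):
    """Controllable canonical realization (2) of
    H(s) = (beta[n-1] s^{n-1} + ... + beta[0]) /
           (s^n + alpha[n-1] s^{n-1} + ... + alpha[0]) + d."""
    n = len(alpha)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)        # ones on the superdiagonal
    A[-1, :] = -np.asarray(alpha)     # last row: -alpha_0, ..., -alpha_{n-1}
    B = np.zeros((n, 1))
    B[-1, 0] = 1.0
    C = np.asarray(beta, dtype=float).reshape(1, n)
    D = np.array([[d]])
    return A, B, C, D

def observable_canonical(alpha, beta, d=0.0):
    """Observable canonical realization (3): the dual (transpose) of (2)."""
    A, B, C, D = controllable_canonical(alpha, beta, d)
    return A.T, C.T, B.T, D.T
```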


Armed with the ability to realize single-input single-output transfer functions, we can
build realizations for multi-input multi-output transfer functions by successive use of
the following result.
Lemma Let Σ_1 ∼ (A_1, B_1, C_1, D_1) and Σ_2 ∼ (A_2, B_2, C_2, D_2) be realizations of the transfer functions H_1(s) and H_2(s) respectively. Then,

(a) \Sigma_{composite} = \left[\begin{array}{cc|c} A_1 & 0 & B_1 \\ 0 & A_2 & B_2 \\ \hline C_1 & 0 & D_1 \\ 0 & C_2 & D_2 \end{array}\right]

realizes the transfer function

    H(s) = \begin{bmatrix} H_1(s) \\ H_2(s) \end{bmatrix}

(b) \Sigma_{composite} = \left[\begin{array}{cc|cc} A_1 & 0 & B_1 & 0 \\ 0 & A_2 & 0 & B_2 \\ \hline C_1 & C_2 & D_1 & D_2 \end{array}\right]

realizes the transfer function

    H(s) = \begin{bmatrix} H_1(s) & H_2(s) \end{bmatrix}

When we use this technique, the resulting realizations will generally have (unnecessarily) high dimension. We will later discuss methods for producing minimal realizations from these less succinct composite realizations.
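
Part (a) of the lemma, for instance, amounts to block-assembling the state-space matrices. A NumPy sketch (our own helper, not from the notes):

```python
import numpy as np

def stack_outputs(sys1, sys2):
    """Part (a): realize H(s) = [H1(s); H2(s)] by sharing the input
    between realizations of H1 and H2."""
    (A1, B1, C1, D1), (A2, B2, C2, D2) = sys1, sys2
    n1, n2 = A1.shape[0], A2.shape[0]
    A = np.block([[A1, np.zeros((n1, n2))],
                  [np.zeros((n2, n1)), A2]])
    B = np.vstack([B1, B2])
    C = np.block([[C1, np.zeros((C1.shape[0], n2))],
                  [np.zeros((C2.shape[0], n1)), C2]])
    D = np.vstack([D1, D2])
    return A, B, C, D
```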

3 Markov parameters
If we (formally) expand (sI − A)^{-1} as the power series

    (sI - A)^{-1} = s^{-1}I + A s^{-2} + A^2 s^{-3} + \cdots

and use this in (1) we obtain

    H(s) = D + CB s^{-1} + CAB s^{-2} + CA^2B s^{-3} + \cdots
         = H_0 + H_1 s^{-1} + H_2 s^{-2} + H_3 s^{-3} + \cdots

It is clear from the above equation that the matrices H_k ∈ C^{p×m} defined by

    H_0 = D \quad \text{and} \quad H_k = CA^{k-1}B, \; k \ge 1

depend only on the transfer function H(s) and not on the particular realization (A, B, C, D) of H(s). These input-output invariants are called the Markov parameters associated with the transfer function H(s).
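
The Markov parameters can be computed by repeated multiplication, avoiding explicit matrix powers. A short sketch (our own illustration):

```python
import numpy as np

def markov_parameters(A, B, C, D, k_max):
    """Return [H_0, H_1, ..., H_{k_max}] with H_0 = D and H_k = C A^{k-1} B."""
    params = [D]
    AkB = B                     # holds A^{k-1} B
    for _ in range(k_max):
        params.append(C @ AkB)
        AkB = A @ AkB
    return params
```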

4 Equivalent realizations
Definition Two realizations Σ ∼ (A, B, C, D) and Σ_1 ∼ (A_1, B_1, C_1, D_1) are called equivalent if they realize the same input-output description.
Proposition Σ and Σ_1 are equivalent if and only if

    D = D_1 \quad \text{and} \quad CA^kB = C_1A_1^kB_1 \ \text{for all} \ k \ge 0

Equivalent realizations can have significantly different structural properties such as stability, controllability, etc.

5 Similar realizations
There is a particular class of equivalent realizations that interests us.
Let us begin with the realization Σ of some transfer function H(s). Let T ∈ C^{n×n} be nonsingular. We can define new states via the bijection

    T x_{new} = x

If we rewrite the differential equations defining Σ in terms of these new states, we arrive at

    \dot{x}_{new}(t) = T^{-1}AT x_{new}(t) + T^{-1}B u(t)
    y(t) = CT x_{new}(t) + D u(t)

This new realization

    \Sigma_{new} = \begin{bmatrix} T^{-1}AT & T^{-1}B \\ CT & D \end{bmatrix}    (4)

also realizes H(s) and is said to be similar to Σ.

Similar realizations are fundamentally the same. They share identical properties such as stability, controllability, etc. Indeed, we arrived at Σ_new from Σ via nothing more than a state-space change of basis.
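
The change of basis (4) is one line of linear algebra. A NumPy sketch (our own helper):

```python
import numpy as np

def similar_realization(A, B, C, D, T):
    """Apply the state-space change of basis T x_new = x, returning (4)."""
    Tinv = np.linalg.inv(T)
    return Tinv @ A @ T, Tinv @ B, C @ T, D
```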

B. The State and Output Response

1 The State-Transition Matrix


We first consider the linear time-varying autonomous system

    \dot{x}(t) = A(t)x(t), \quad x(t_0) = x_0

Definition The state-transition matrix Φ(t, t_0) associated with the autonomous system above is the unique solution X of the matrix differential equation

    \dot{X}(t) = A(t)X(t), \quad \text{subject to} \ X(t_0) = I

For linear time-invariant systems, we recognize this matrix differential equation from our earlier study of linear algebra and can immediately write its solution via the familiar matrix exponential as

    \Phi(t, t_0) = e^{A(t - t_0)}

2 Properties of Φ(t, t_0)
Theorem

(a) Φ(t_0, t_0) = I
(b) Φ(t, t_0) = Φ(t, t_1)Φ(t_1, t_0)
(c) Φ(t, t_0) is nonsingular for all −∞ < t, t_0 < ∞ and [Φ(t, t_0)]^{-1} = Φ(t_0, t)

These properties of the state-transition matrix are called the semi-group properties.
We can infer other properties of the transition matrix (for linear time-invariant systems) from our earlier results on matrix exponentials. For example, we have

    \det(\Phi(t, t_0)) = e^{\mathrm{trace}(A)(t - t_0)}
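
For time-invariant systems these properties are easy to check numerically. A sketch using SciPy's matrix exponential (the matrix A is our own illustrative example):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])       # illustrative stable A
Phi = lambda t, t0: expm(A * (t - t0))          # Phi(t, t0) = e^{A(t - t0)}

t0, t1, t = 0.0, 0.7, 1.5
assert np.allclose(Phi(t, t0), Phi(t, t1) @ Phi(t1, t0))     # property (b)
assert np.allclose(np.linalg.det(Phi(t, t0)),
                   np.exp(np.trace(A) * (t - t0)))            # determinant formula
```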

3 The Zero-Input Response


We begin by determining the response of the realization Σ in the case when no input is applied. Equivalently, we are interested in the response due only to the initial conditions x(t_0) = x_0.
Theorem Consider the autonomous system

    \dot{x}(t) = A(t)x(t), \quad x(t_0) = x_0

The solution of these differential equations is

    x(t) = \Phi(t, t_0) x_0    (5)

Indeed, it is clear from (5) that Φ(t, t_0) relates the state of the autonomous system at time t to the state at time t_0, and hence the phrase state-transition matrix.

4 The Zero-State Response


Consider the linear time-varying realization Σ

    \dot{x}(t) = A(t)x(t) + B(t)u(t), \quad x(t_0) = x_0
    y(t) = C(t)x(t) + D(t)u(t)

We first determine the response of the realization Σ in the case when the initial conditions are zero, i.e. the response due only to the application of the input u. The following result provides a closed-form solution for the zero-state response.
Theorem Consider the realization Σ above with initial condition x(t_0) = 0. The solution of the differential equations defining Σ is

    x(t) = \int_{t_0}^{t} \Phi(t, \tau) B(\tau) u(\tau)\,d\tau
    y(t) = C(t)x(t) + D(t)u(t)

5 The Total Response


We are now in a position to determine the response of the realization Σ in the general case, due both to nonzero initial conditions x_0 and to the application of the input u. Since the differential equations defining Σ are linear, we can appeal to a superposition argument to write

    x(t) = \Phi(t, t_0) x_0 + \int_{t_0}^{t} \Phi(t, \tau) B(\tau) u(\tau)\,d\tau
    y(t) = C(t)x(t) + D(t)u(t)

Of particular interest to us in the remainder of these notes is the special case of linear time-invariant realizations. Here, the total response is

    x(t) = e^{A(t - t_0)} x_0 + \int_{t_0}^{t} e^{A(t - \tau)} B u(\tau)\,d\tau    (6)
    y(t) = C x(t) + D u(t)
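
SciPy's lsim integrates exactly the total response (6), superposing the zero-input and zero-state parts. A sketch with an illustrative system of our own choosing:

```python
import numpy as np
from scipy import signal

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

t = np.linspace(0.0, 5.0, 501)
u = np.ones_like(t)                    # unit-step input
x0 = np.array([1.0, 0.0])              # nonzero initial condition

# total response (6): zero-input plus zero-state contributions
tout, y, x = signal.lsim((A, B, C, D), U=u, T=t, X0=x0)
```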

C. Internal stability

1 Internal Stability
Definition The realization Σ is called internally stable if for the autonomous system

    \dot{x}(t) = A(t)x(t), \quad x(t_0) = x_0

with any initial condition x_0 ∈ C^n we have

    \lim_{t \to \infty} x(t) = 0

or, equivalently, given any ε > 0, there exists T(ε, t_0) such that ‖x(t)‖ ≤ ε for t ≥ T.
The realization Σ is called uniformly internally stable if the convergence above of x(t) to zero is uniform in t_0, i.e. T does not depend on t_0.
Proposition The realization Σ is internally stable if and only if

    \lim_{t \to \infty} \|\Phi(t, t_0)\| = 0

2 A Characterization of Internal Stability


Theorem The linear time-invariant realization Σ ∼ (A, ·, ·, ·) is internally stable if and only if all the eigenvalues of A are in the open left-half complex plane. Equivalently,

    \mathrm{Real}(\mathrm{Spec}(A)) < 0

Given that internal stability of the realization Σ depends only on the matrix A, we shall abuse notation and speak of the matrix A being stable to mean that all its eigenvalues lie in the open left-half complex plane.

3 The Lyapunov equation


Theorem Consider the matrix equation

    AX + XA^* + Q = 0    (7)

The following are equivalent:

(a) The Lyapunov equation (7) has a unique solution for some Q
(b) The Lyapunov equation (7) has a solution for every Q
(c) The Lyapunov equation (7) has a unique solution for every Q
(d) For all λ_i, λ_j ∈ Spec(A),

    \lambda_i + \bar{\lambda}_j \ne 0

Observe that if A is stable, condition (d) above is met, and as a consequence assertions (a) through (c) above hold.

4 Stability Testing with the Lyapunov Equation


The stability of a matrix A can be characterized in terms of the Lyapunov equation
via the following central result:
Theorem

(a) Suppose A is stable. Then, for any Q, the Lyapunov equation (7) has a unique solution X. A closed-form expression for this solution is

    X = \int_0^{\infty} e^{At} Q e^{A^*t}\,dt

If, in particular, Q ≥ 0, then X ≥ 0. If Q > 0, then X > 0.
(b) Suppose for some Q > 0 the Lyapunov equation (7) has a solution X > 0. Then A is stable.

The requirement that the solution X be positive definite in part (b) of the above theorem can be relaxed. This will be done when we introduce stabilizability and detectability.
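
The theorem suggests a stability test: pick Q = I > 0, solve (7), and check that the solution is positive definite. A sketch using SciPy (note SciPy's convention solves A X + X A^H = Q, so we pass −I; the test assumes (7) has a unique solution, i.e. condition (d) holds):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def is_stable_lyapunov(A):
    """Stability test: solve A X + X A* + I = 0 and check X > 0."""
    n = A.shape[0]
    X = solve_continuous_lyapunov(A, -np.eye(n))   # scipy solves A X + X A^H = Q
    eigs = np.linalg.eigvalsh((X + X.conj().T) / 2)  # symmetrize before eig test
    return bool(np.all(eigs > 0))

print(is_stable_lyapunov(np.array([[0.0, 1.0], [-2.0, -3.0]])))   # True
print(is_stable_lyapunov(np.array([[1.0, 0.0], [0.0, -2.0]])))    # False
```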

D. Controllability

1 T -Controllability
Definition Consider the realization Σ and fix T > 0. A state ξ ∈ C^n is said to be T-controllable if there exists an input v(t), 0 ≤ t ≤ T which drives Σ from initial condition x(0) = ξ to terminal condition x(T) = 0.
Let C_T ⊆ C^n be the set of T-controllable states of Σ.
The realization Σ is called T-controllable if every state ξ ∈ C^n is T-controllable, i.e. if C_T = C^n.

2 The Controllable Subspace C_T

Lemma A state ξ ∈ C^n is T-controllable if and only if there exists an input v(t), 0 ≤ t ≤ T such that

    \xi = -\int_0^T e^{-At} B v(t)\,dt

Lemma The set C_T of T-controllable states is a subspace of C^n.

3 Controllability Grammians
Definition The controllability Grammian of the realization Σ on the interval [a, b] is the matrix W(a, b) ∈ C^{n×n} defined by

    W(a, b) = \int_a^b e^{-At} BB^* e^{-A^*t}\,dt

Lemma The controllability Grammian W(a, b) has the following properties:

(a) W(a, b) is Hermitian and, moreover, W(a, b) ≥ 0
(b) W(a, b) is a solution of the matrix equation

    AX + XA^* - e^{-Aa}BB^*e^{-A^*a} + e^{-Ab}BB^*e^{-A^*b} = 0

(c) If A is stable, W(−∞, 0) is the unique solution of

    AX + XA^* + BB^* = 0

4 Tests for Controllability


Proposition

(a) C_T = R[W(0, T)]
(b) Let ξ ∈ C_T. From (a) above, there exists a vector η ∈ C^n such that ξ = W(0, T)η. Then, the input

    v(t) = -B^* e^{-A^*t} \eta, \quad 0 \le t \le T

drives Σ from initial state x(0) = ξ to terminal state x(T) = 0.
Theorem Consider the realization Σ. Then,

    C_T = R\begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix} = R[M_c]

The matrix M_c ∈ C^{n×mn} above is called the controllability matrix. Observe that this matrix does not depend on the time interval T allotted for controllability. We therefore conclude that if a state ξ ∈ C^n is controllable in some interval of time T, it is controllable in any (nonzero) interval of time. We may therefore drop the superfluous argument T in the notion of T-controllability and in the notation for the controllable subspace, which we will now write as C.
The characterization of controllability given in this theorem, while elegant and of important theoretical value, is numerically unattractive. This is because computation of M_c is difficult and susceptible to numerical instability. An alternate characterization for stable realizations, involving only symmetric matrix operations, is given by the following result:
Theorem Consider the realization Σ, and suppose A is stable. Then,

    C = R[W_c]

where W_c is the unique solution of the Lyapunov equation

    AW_c + W_cA^* + BB^* = 0

Note that W_c is the controllability Grammian W(−∞, 0) defined earlier.

A useful geometric characterization of the controllable subspace is given by the following:
Proposition C is the smallest A-invariant subspace containing R[B].
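
The rank test on M_c is immediate to code (with the numerical caveats noted above). A NumPy sketch, our own illustration:

```python
import numpy as np

def controllability_matrix(A, B):
    """M_c = [B, AB, ..., A^{n-1} B]."""
    n = A.shape[0]
    blocks, AkB = [], B
    for _ in range(n):
        blocks.append(AkB)
        AkB = A @ AkB
    return np.hstack(blocks)

def is_controllable(A, B):
    """C = R[M_c] = C^n iff rank(M_c) = n."""
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]
```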
5 An Example
Consider the realization Σ ∼ (A, B, ·, ·) where

    A = \begin{bmatrix}
    0 & 1 & 0 & \cdots & 0 \\
    0 & 0 & 1 & \cdots & 0 \\
    \vdots & \vdots & \vdots & \ddots & \vdots \\
    0 & 0 & 0 & \cdots & 1 \\
    -\alpha_0 & -\alpha_1 & -\alpha_2 & \cdots & -\alpha_{n-1}
    \end{bmatrix}
    \qquad
    B = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}

It can be readily verified that the controllability matrix for this realization is

    M_c = \begin{bmatrix}
    0 & 0 & \cdots & 0 & 1 \\
    0 & 0 & \cdots & 1 & * \\
    \vdots & \vdots & & \vdots & \vdots \\
    0 & 1 & \cdots & * & * \\
    1 & * & \cdots & * & *
    \end{bmatrix}

where * denotes entries whose precise values are immaterial. Observe that rank(M_c) = n. As a result, Σ is controllable. This is the canonical example of a single-input controllable system.

6 Reachability
Definition Consider the realization Σ and fix T > 0. A state ξ ∈ C^n is said to be T-reachable if there exists an input v(t), −T ≤ t ≤ 0 which drives Σ from initial condition x(−T) = 0 to terminal condition x(0) = ξ.
Proposition

(a) A state ξ ∈ C^n is controllable if and only if ξ is reachable.
(b) Σ is controllable if and only if Σ is reachable.

The above proposition states that controllability and reachability are equivalent no-
tions (for continuous-time linear time-invariant realizations). Following accepted prac-
tice, we shall hence speak only of controllability.
This equivalence between controllability and reachability breaks down for discrete-time linear time-invariant systems: if A is nilpotent, for example, every state is driven to zero in at most n steps by the zero input, while only states in R[M_c] are reachable.

E. Observability

1 The Unobservable Subspace UO


Definition Consider the realization Σ. A state ξ ∈ C^n is called unobservable if, with initial condition x(0) = ξ and with input u(t) = 0, t ≥ 0, the output trajectory of Σ is

    y(t) = 0 \quad \text{for all} \ t \ge 0

Let UO ⊆ C^n be the set of unobservable states of Σ.

The realization Σ is called observable if the only unobservable state is the zero state, i.e. UO = 0.

2 Tests for Observability


Lemma A state ξ ∈ C^n is unobservable if and only if

    Ce^{At}\xi = 0 \quad \text{for all} \ t \ge 0

Lemma The set UO of unobservable states is a subspace of C^n.

Theorem Consider the realization Σ. Then,

    UO = N\begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix} = N[M_o]

The matrix M_o ∈ C^{pn×n} is called the observability matrix.

Theorem Consider the realization Σ, and suppose A is stable. Then,

    UO = N[W_o]

where W_o is the unique solution of the Lyapunov equation

    A^*W_o + W_oA + C^*C = 0

The Hermitian, positive semi-definite matrix W_o is called the observability Grammian.
Proposition UO is the largest A-invariant subspace contained in N[C].
Lemma Σ is controllable if and only if Σ_dual is observable.
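
Mirroring the controllability test, observability reduces to a rank condition on M_o. A sketch (our own helpers, dual to those given earlier):

```python
import numpy as np

def observability_matrix(A, C):
    """M_o = [C; CA; ...; CA^{n-1}] (row blocks stacked vertically)."""
    n = A.shape[0]
    blocks, CAk = [], C
    for _ in range(n):
        blocks.append(CAk)
        CAk = CAk @ A
    return np.vstack(blocks)

def is_observable(A, C):
    """UO = N[M_o] = 0 iff rank(M_o) = n; equivalently, the dual pair
    (A*, C*) is controllable."""
    return np.linalg.matrix_rank(observability_matrix(A, C)) == A.shape[0]
```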

F. The Kalman decomposition

1 Easy Version
We begin with the simpler version of the Kalman decomposition. By an appropriate
choice of basis for the state-space, we can exhibit clearly the controllable modes
and the uncontrollable modes.
Theorem Consider the realization Σ ∼ (A, B, C, D) of some transfer function H(s) and let dim(C) = r. Let {b_1, ..., b_r} be a basis for C and extend this by {b_{r+1}, ..., b_n} to form a basis for C^n. Define the invertible matrix

    T = \begin{bmatrix} b_1 & \cdots & b_r & b_{r+1} & \cdots & b_n \end{bmatrix}

Let Σ_new be the realization similar to Σ obtained by the state-space change of basis T x_new = x.

(a) The realization Σ_new has the following structure:

    \Sigma_{new} = \begin{bmatrix} T^{-1}AT & T^{-1}B \\ CT & D \end{bmatrix} =
    \left[\begin{array}{cc|c} A_{11} & A_{12} & B_1 \\ 0 & A_{22} & 0 \\ \hline C_1 & C_2 & D \end{array}\right]

Here A_{11} ∈ C^{r×r} and the matrices B and C are partitioned conformably with the partition of A.
(b) The realization

    \Sigma_{reduced} ∼ (A_{11}, B_1, C_1, D)

is controllable and also realizes H(s).

It is clear that the Kalman decomposition is not unique: different basis choices yield different decompositions, though they all share the structure described in the above theorem.
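
One concrete way to build such a T is to take an orthonormal basis for C = R[M_c] and complete it with its orthogonal complement. A SciPy sketch (our own construction; any basis completion works):

```python
import numpy as np
from scipy.linalg import orth, null_space

def controllable_decomposition(A, B, C):
    """Exhibit the block structure of part (a): an orthonormal basis for the
    controllable subspace is completed to a basis of C^n, giving T."""
    n = A.shape[0]
    Mc = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    V = orth(Mc)                       # b_1, ..., b_r: basis for C = R[M_c]
    W = null_space(V.conj().T)         # b_{r+1}, ..., b_n: completion
    T = np.hstack([V, W])              # unitary, hence invertible
    Tinv = T.conj().T
    return T, Tinv @ A @ T, Tinv @ B, C @ T
```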

2 General Version
We are now in a position to present the general version of the Kalman decomposition.
Theorem Consider the realization Σ ∼ (A, B, C, D) of some transfer function H(s). Let B = {b_1, ..., b_{n_1}} be a basis for C ∩ UO. Extend B by {b_{n_1+1}, ..., b_{n_2}} to form a basis for C, and by {b_{n_2+1}, ..., b_{n_3}} to form a basis for UO. Finally, extend the collection of vectors {b_1, ..., b_{n_3}} by {b_{n_3+1}, ..., b_n} to complete a basis for C^n. Define the invertible matrix

    T = \begin{bmatrix} b_1 & \cdots & b_n \end{bmatrix}

Let Σ_new be the realization similar to Σ obtained by the state-space change of basis T x_new = x.

(a) The realization Σ_new has the following structure:

    \Sigma_{new} = \begin{bmatrix} T^{-1}AT & T^{-1}B \\ CT & D \end{bmatrix} =
    \left[\begin{array}{cccc|c}
    A_{11} & A_{12} & A_{13} & A_{14} & B_1 \\
    0 & A_{22} & 0 & A_{24} & B_2 \\
    0 & 0 & A_{33} & A_{34} & 0 \\
    0 & 0 & 0 & A_{44} & 0 \\ \hline
    0 & C_2 & 0 & C_4 & D
    \end{array}\right]

Here A_{ii} ∈ C^{n_i×n_i} for i = 1, ..., 4 and the matrices B and C are partitioned conformably with the partition of A.
(b) The realization

    \Sigma_{reduced} ∼ (A_{22}, B_2, C_2, D)

is controllable and observable and also realizes H(s).

3 Minimal Realizations
Definition A realization is called minimal if it is both controllable and observable.
Theorem Let Σ_1 and Σ_2 be two minimal realizations of some transfer function H(s). Then

(a) Σ_1 and Σ_2 have the same state-space dimension.
(b) Σ_1 and Σ_2 are similar.

Consider the transfer function H(s) and let Σ be any minimal realization of H(s). From the above result, the dimension of Σ is independent of the particular minimal realization chosen. This number depends only on the transfer function H(s) and is called the McMillan degree δ(H(s)) of H(s).
Next let Σ' be any other realization of H(s) and let n' be its dimension. If Σ' is also minimal, n' = δ(H(s)). If Σ' is not minimal, we could produce a lower-dimensional minimal realization of H(s) via the Kalman decomposition, in which case n' > δ(H(s)). In either event,

    n' \ge \delta(H(s))

Equivalently, there is no realization of H(s) of dimension less than δ(H(s)), and every minimal realization of H(s) has dimension δ(H(s)); hence the phrase minimal realization.

G. Balanced Realizations and Model Reduction

In this section, we consider an internally stable realization Σ ∼ (A, B, C, D) of some transfer function H(s). We shall assume Σ is minimal. Indeed, we can always employ the Kalman decomposition to arrive at a minimal realization of H(s).

1 An optimal control problem


Consider the realization Σ and fix a state ξ ∈ C^n. Since Σ is minimal, and in particular controllable, it is clear from the equivalence of controllability and reachability that there exists an input v(t) that drives Σ from rest (i.e. zero initial state) to terminal state x(0) = ξ. There are, however, infinitely many inputs v(t) that effect this maneuver. We are interested in determining the minimum-energy input v_opt(t) that drives Σ from rest to terminal state x(0) = ξ.
More precisely, we wish to solve the following optimal control problem (#):

    \min_{u \in L_2(-\infty, 0]} \int_{-\infty}^0 u^*u\,dt

subject to the constraint

    \xi = x(0) = \int_{-\infty}^0 e^{-At} B u(t)\,dt

Using the Projection Theorem, we can arrive at the following result.

Theorem Consider the problem (#).

(a) The optimal control is

    v_{opt}(t) = B^* e^{-A^*t} W_c^{-1}\xi

where

    W_c = \int_{-\infty}^0 e^{-At} BB^* e^{-A^*t}\,dt

(b) The minimum energy necessary to establish the terminal state x(0) = ξ is

    \|v_{opt}\|^2 = \xi^* W_c^{-1} \xi

(c) The matrix W_c ∈ C^{n×n} above is the unique solution of the Lyapunov equation

    AW_c + W_cA^* + BB^* = 0

From the above result, we conclude that vectors ξ for which ξ^*W_c^{-1}ξ is large represent almost uncontrollable states, in that they are difficult to reach with input signals of reasonable energy. Thus, in the normal course of operation of the input-output system H(s) realized by Σ, these states will be well approximated by the zero state. As a result, we can effectively approximate H(s) by removing these states. This would result in a more compact, albeit approximate, internal description of H(s).
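
Parts (b) and (c) combine into a two-line computation. A sketch (our own helper, assuming A stable and (A, B) controllable):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def min_energy_to_reach(A, B, xi):
    """Minimum input energy xi* Wc^{-1} xi needed to drive the realization
    from rest to x(0) = xi."""
    Wc = solve_continuous_lyapunov(A, -B @ B.conj().T)   # A Wc + Wc A* + B B* = 0
    return float(np.real(xi.conj() @ np.linalg.solve(Wc, xi)))
```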

2 The dual problem


Consider again the realization Σ and fix ξ ∈ C^n. We begin with the initial condition x(0) = ξ, apply zero input, and examine the output trajectory

    y(t) = Ce^{At}\xi, \quad t \ge 0

The energy in this output trajectory is

    E(\xi) = \int_0^{\infty} y^*y\,dt

Theorem

(a) Define the matrix W_o ∈ C^{n×n} by

    W_o = \int_0^{\infty} e^{A^*t} C^*C e^{At}\,dt

Then,

    E(\xi) = \xi^* W_o \xi

(b) W_o is the unique solution of the Lyapunov equation

    A^*W_o + W_oA + C^*C = 0

This result tells us that vectors ξ for which ξ^*W_oξ is small represent states that are almost unobservable, in that they have little effect on the output trajectory. As far as the input-output behaviour of the system H(s) realized by Σ is concerned, we may well approximate these states by the zero state. Again, we can effectively approximate H(s) by removing these states. This would result in a reasonable internal description of H(s) of lower dimension.
The difficulty with this procedure is that we may remove almost uncontrollable states that happen to be very observable. As we will see now, by an intelligent choice of state-space basis, we can resolve this potential bias. In the resulting new realization, the controllability and observability Grammians are equal. Thus all states are as uncontrollable as they are unobservable.
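
The dual computation is equally short. A sketch (our own helper, assuming A stable):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def output_energy(A, C, xi):
    """Zero-input output energy E(xi) = xi* Wo xi."""
    Wo = solve_continuous_lyapunov(A.conj().T, -C.conj().T @ C)  # A* Wo + Wo A + C*C = 0
    return float(np.real(xi.conj() @ Wo @ xi))
```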

3 Balanced realizations
We first examine the effect of a state-space change of basis on the Grammians Wc and
Wo .
Proposition Let Σ ∼ (A, B, C, D) be any internally stable, minimal realization of some transfer function H(s) with Grammians W_c and W_o. Let T be an invertible matrix and consider the similar realization (T^{-1}AT, T^{-1}B, CT, D). Then, the Grammians of this new realization are

(a) \tilde{W}_c = T^{-1} W_c [T^{-1}]^*
(b) \tilde{W}_o = T^* W_o T

As an immediate consequence of the above result, we have the following:
Lemma Let Σ ∼ (A, B, C, D) be any internally stable, minimal realization of some transfer function H(s). Then,

(a) The eigenvalues of W_oW_c are all positive.
(b) These eigenvalues, say σ_1², σ_2², ..., σ_n², are independent of the particular internally stable, minimal realization of H(s) chosen.

The positive numbers σ_1, σ_2, ..., σ_n above depend only on the input-output description H(s) and are called the Hankel singular values of H(s). Without loss of generality, these can be ordered as

    \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n > 0

Note that as the realization Σ is minimal, the Grammians W_c and W_o are invertible, and thus σ_n > 0.
Theorem Let Σ ∼ (A, B, C, D) be any internally stable, minimal realization of some transfer function H(s), and let W_c and W_o be the controllability and observability Grammians of Σ respectively. Since W_c^{1/2} W_o W_c^{1/2} is positive-definite, it admits the Jordan decomposition

    W_c^{1/2} W_o W_c^{1/2} = U Q^2 U^*

where U is unitary and Q² = diag(σ_1², σ_2², ..., σ_n²). Define the invertible matrix

    T = W_c^{1/2} U Q^{-1/2}

and consider the similar realization (T^{-1}AT, T^{-1}B, CT, D). Then, the Grammians of this realization are equal and diagonal, i.e.

    \tilde{W}_c = \tilde{W}_o = Q

The realization above is said to be balanced.
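
The theorem's construction can be coded directly. A SciPy sketch (our own implementation, assuming a stable, minimal realization; the eigendecomposition plays the role of the Jordan decomposition above since the product is Hermitian positive-definite):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, sqrtm

def balance(A, B, C, D):
    """Balancing transformation: returns the balanced realization and the
    Hankel singular values, following the theorem above."""
    Wc = solve_continuous_lyapunov(A, -B @ B.conj().T)
    Wo = solve_continuous_lyapunov(A.conj().T, -C.conj().T @ C)
    Wc_half = sqrtm(Wc)
    sig2, U = np.linalg.eigh(Wc_half @ Wo @ Wc_half)   # = U Q^2 U*
    order = np.argsort(sig2)[::-1]                     # descending sigma_i
    sig2, U = sig2[order], U[:, order]
    Q_half = np.diag(sig2 ** 0.25)                     # Q^{1/2}
    T = Wc_half @ U @ np.linalg.inv(Q_half)            # T = Wc^{1/2} U Q^{-1/2}
    Tinv = np.linalg.inv(T)
    return Tinv @ A @ T, Tinv @ B, C @ T, D, np.sqrt(sig2)
```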

4 Model Reduction
In many problems of practical interest, it is desirable to obtain a low-order approxi-
mate model of some transfer function H(s). This is the problem of model reduction.
For example, suppose H(s) is some controller we have designed. As most controllers
are implemented digitally, if the sampling rate is too high, we may not have enough
computation time available between samples to implement a very complex controller.
Also, complex high-order controllers are less reliable. For these reasons we may seek
to obtain a low-order approximation to the controller H(s).
Conversely, suppose H(s) is some very high-order plant model. This situation arises, for example, in the context of finite-element modeling in chemical reactors, where we can have realizations with thousands of state variables. As we shall soon discover, many control design techniques provide controllers whose complexity is comparable to that of the plant. Worse, these design methods involve time-consuming, complicated calculations which are difficult or impossible to conduct for very high-order plant models. For these reasons we may seek to obtain a low-order approximation to the plant H(s).
When attempting to construct an approximation of the transfer function H(s), it is generally an excellent idea to retain the unstable part of H(s) and simplify only the stable part. Indeed, the unstable part contains so much vital information that it would be dangerous to conduct any model reduction on it. This decomposition of H(s) into its stable and unstable parts can be done using partial fractions, or more generally, as follows. Start with a minimal realization Σ ∼ (A, B, C, D) of the transfer function H(s). Construct a nonsingular matrix T so that

    T^{-1}AT = \begin{bmatrix} A_s & 0 \\ 0 & A_u \end{bmatrix}

where A_s is stable and A_u is unstable. This can be done, for example, using the Jordan form, or better still, the real Schur form. Now, with the state-space change of basis T we obtain the similar realization

    \Sigma_{new} = \left[\begin{array}{cc|c} A_s & 0 & B_s \\ 0 & A_u & B_u \\ \hline C_s & C_u & D \end{array}\right]

Since Σ_new realizes H(s) we can write

    H(s) = H_s(s) + H_u(s) = C_s(sI - A_s)^{-1}B_s + C_u(sI - A_u)^{-1}B_u + D

which is the decomposition we wanted. There are more efficient, numerically well-behaved methods to conduct this decomposition. These involve Riccati equations and factorization theory.
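
One concrete way to build such a T, sketched below, uses the ordered real Schur form followed by a single Sylvester solve to zero out the coupling block (our own construction; it assumes A has no imaginary-axis eigenvalues):

```python
import numpy as np
from scipy.linalg import schur, solve_sylvester

def stable_unstable_split(A, B, C):
    """Block-diagonalize A into stable/unstable parts."""
    R, Z, k = schur(A, output='real', sort='lhp')   # k stable eigenvalues first
    As, X, Au = R[:k, :k], R[:k, k:], R[k:, k:]
    # Kill the off-diagonal block with T2 = [[I, Y], [0, I]],
    # where Y solves the Sylvester equation As Y - Y Au = -X
    Y = solve_sylvester(As, -Au, -X)
    T2 = np.eye(A.shape[0])
    T2[:k, k:] = Y
    T = Z @ T2
    Tinv = np.linalg.inv(T)
    return T, Tinv @ A @ T, Tinv @ B, C @ T, k
```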
There are many techniques available for model reduction of linear systems. These
include singular perturbation methods, balanced truncation, Hankel model reduction,
etc. Of these, only singular perturbation methods apply to nonlinear models as well.

5 Balanced Truncations
The preceding development on balanced realizations offers an ad hoc method for model reduction. This is as follows. Start with a minimal realization of the transfer function and decompose it into its stable part H_s(s) and its unstable part H_u(s). Construct a balanced realization Σ ∼ (A, B, C, D) for H_s(s). Write this realization as

    \Sigma = \left[\begin{array}{cc|c} A_{11} & A_{12} & B_1 \\ A_{21} & A_{22} & B_2 \\ \hline C_1 & C_2 & D \end{array}\right]

where A_{11} ∈ C^{k×k} and B, C are partitioned conformably with A. Examine the Hankel singular values {σ_1, ..., σ_n} of H_s(s). If there is a substantial gap between σ_k and σ_{k+1}, then

    H_k(s) \sim \begin{bmatrix} A_{11} & B_1 \\ C_1 & D \end{bmatrix}

serves as a (possibly good) approximation of H_s(s). The transfer function H_k(s) (or its associated realization) is called the kth balanced truncation of H_s(s).
The balanced truncations H_k(s) are not optimal approximations of H_s(s) in any mathematical sense. They are based on sound heuristics, and one can derive coarse general bounds on the approximation error. We can easily establish the following:
Theorem Balanced truncations are minimal, stable, and balanced.
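
Given the balance() sketch from the previous section, the kth balanced truncation is a matter of slicing the leading k×k blocks (again our own illustration; in practice one inspects the Hankel singular values for a gap σ_k ≫ σ_{k+1} before choosing k):

```python
def balanced_truncation(A, B, C, D, k):
    """k-th balanced truncation of a stable, minimal realization, using the
    balance() helper defined earlier."""
    Ab, Bb, Cb, Db, hsv = balance(A, B, C, D)
    # Keep the k most controllable-and-observable states of the balanced form.
    return Ab[:k, :k], Bb[:k, :], Cb[:, :k], Db
```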

