

Chapter 13

An introduction to H∞ control theory

The task of designing a controller which accomplishes a specified task is a challenging chore. When one wishes to add "robustness" to the mix, things become even more challenging. By robust design, what is meant is that one should design a controller which works not only for a given model, but for plants which are close to that model. In this way, one can have some degree of certainty that even if the model is imperfect, the controller will behave in a satisfactory manner. The development of systematic design procedures for robust control can be seen to have been initiated with the important paper of Francis and Zames [1984]. Since this time, there have been many developments and generalisations. The understanding of so-called H∞ methods has progressed to the point that a somewhat elementary treatment is possible. We shall essentially follow the approach of Doyle, Francis and Tannenbaum [1990]. For a recent account of MIMO developments, see [Dullerud and Paganini 1999]. The book by Francis [1987] is also a useful reference.

Although all of the material in this chapter can be followed by any student who has come to grips with the more basic material in this book, much of what we do here is a significant diversion from control theory, per se, and is really a development of the necessary mathematical tools. Since a complete understanding of the tools is not necessary in order to apply them, it is perhaps worthwhile to outline the bare bones guide to getting through this chapter. This might be as follows.

1. One should first make sure that one understands the problem being solved: the robust performance problem. The first thing done is to modify this problem so that it is tractable; this is the content of Problem 13.2.
2. The modified robust performance problem is first converted to a model matching problem, which is stated in generality in Problem 13.3. The content of this conversion is contained in Algorithm 13.5.
3. The model matching problem is solved in this book in two ways. The first method involves Nevanlinna-Pick interpolation, and to apply this method, follow the steps outlined in Algorithm 13.18. It is entirely possible that you will have to make some modifications to the algorithm to ensure that you arrive at a proper controller. The necessary machinations are discussed following the algorithm.
4. The second method for solving the model matching problem involves approximation of unstable rational functions by stable ones via Nehari's Theorem. To apply this method, follow the steps in Algorithm 13.29. As with Nevanlinna-Pick interpolation, one should be prepared to make some modifications to the algorithm to ensure that things work out. The manner in which to carry out these modifications is discussed following the algorithm.
5. Other problems that can be solved in this manner are discussed in Section 13.5.

Readers wishing only to be able to apply the methods in this chapter should go through the chapter with the above skeleton as a guide. However, those wishing to see the details will be happy to see very little omitted. It should also be emphasised that Mathematica and Maple packages for solving these problems may be found at the URL http://mast.queensu.ca/~math332/.¹

¹These are not currently implemented. Hopefully they will be in the near future.

13.1 Reduction of robust performance problem to model matching problem

For SISO systems, the problem of designing for robust performance turns out to be reducible in a certain sense to two problems investigated by mathematicians coming under the broad heading of complex function approximation, or, more descriptively, model matching. In this section we shall discuss how this reduction takes place, as in itself it is not an entirely obvious step.

13.1.1 A modified robust performance problem  First recall the robust performance problem, Problem 9.24. Given a proper nominal plant R̄P, a function Wu ∈ RH+∞ giving an uncertainty set P×(R̄P, Wu) or P+(R̄P, Wu), and a performance weight Wp ∈ R(s), we seek a controller RC that stabilises the nominal plant and satisfies either

  ‖|Wu T̄L| + |Wp S̄L|‖∞ < 1  or  ‖|Wu RC S̄L| + |Wp S̄L|‖∞ < 1,

depending on whether one is using multiplicative or additive uncertainty. The robust performance problem, it turns out, is quite difficult. For instance, useful necessary and sufficient conditions for there to exist a solution to the problem are not known. Thus our first step in this section is to come up with a simpler problem that is easier to solve. The simpler problem is based upon the following result.

13.1 Lemma  Let R1, R2 ∈ R(s) and denote by |R1| + |R2| the R-valued function s ↦ |R1(s)| + |R2(s)|, and by |R1|² + |R2|² the R-valued function s ↦ |R1(s)|² + |R2(s)|². If

  ‖|R1|² + |R2|²‖∞ < 1/2

then

  ‖|R1| + |R2|‖∞ < 1.

Proof  Define

  S1 = { (x, y) ∈ R² | x, y > 0, x + y < 1 },
  S2 = { (x, y) ∈ R² | x, y > 0, x² + y² < 1/2 }.

The result will follow if we can show that S2 ⊂ S1. However, we note that S1 is the part of the positive quadrant lying below the line x + y = 1, and S2 is the part of the positive quadrant lying inside the circle of radius 1/√2 centred at the origin. Clearly S2 ⊂ S1 (see Figure 13.1). ■
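The containment in the proof is also easy to confirm numerically. The short script below is a sketch only: the functions R1 and R2 are made-up examples (they do not come from the text), and the suprema are approximated on a finite frequency grid.

```python
# Numerical illustration of Lemma 13.1 on a frequency grid (hypothetical R1, R2).
import numpy as np

w = np.logspace(-3, 3, 2001)          # frequency grid (rad/s)
s = 1j * w

R1 = 0.4 / (s + 1)                    # |R1(iw)| <= 0.4
R2 = 0.5 * s / (s + 2)                # |R2(iw)| <= 0.5

sum_sq = np.max(np.abs(R1)**2 + np.abs(R2)**2)
sum_abs = np.max(np.abs(R1) + np.abs(R2))

print(f"sup |R1|^2 + |R2|^2 ~= {sum_sq:.3f}")   # below 1/2 for this example
print(f"sup |R1| + |R2|     ~= {sum_abs:.3f}")  # then necessarily below 1
assert sum_sq >= 0.5 or sum_abs < 1.0           # the implication of Lemma 13.1
```

For this example the squared sum stays below 1/2, and the plain sum is then automatically below 1, exactly as the lemma asserts.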
This leads to a modification of the robust performance problem, and we state this formally since it is this problem to which we devote the majority of our effort in this chapter. Note that we make a few additional assumptions in the statement of the problem that are not present in the statement of Problem 9.24. Namely, we assume now the following.

[Figure 13.1 Interpretation of modified robust performance condition]

1. Wp ∈ RH+∞: Thus we add the assumption that Wp be proper since, without loss of generality as we are only interested in the value of Wp on iR, we can suppose that all poles of Wp lie in C−.
2. Wu and Wp have no common imaginary axis zeros: This is an assumption that, if not satisfied, can be satisfied with minor tweaking of Wu and Wp.
3. RC is proper: We make this assumption mostly as a matter of convention. If we arrive at a controller that is improper but solves the problem, then it is often possible to modify the controller so that it is proper and still solves the problem.

Thus our problem becomes the following.

13.2 Modified robust performance problem  Given
(i) a nominal proper plant R̄P,
(ii) a function Wu ∈ RH+∞,
(iii) an uncertainty model P×(R̄P, Wu) or P+(R̄P, Wu), and
(iv) a performance weight Wp ∈ RH+∞,
so that Wu and Wp have no common imaginary axis zeros, find a proper controller RC that
(v) stabilises the nominal system and
(vi) satisfies either ‖|Wu T̄L|² + |Wp S̄L|²‖∞ < 1/2 or ‖|Wu RC S̄L|² + |Wp S̄L|²‖∞ < 1/2, depending on whether one is using multiplicative or additive uncertainty. ■

As should be clear from Figure 13.1, it is possible that for a given R̄P, Wu, and Wp it will not be possible to solve the modified robust performance problem even though a solution may exist to the robust performance problem. Thus we are sacrificing something in so modifying the problem, but what we gain is a simplified problem that can be solved.

13.1.2 Algorithm for reduction to model matching problem  The objective of this section is to convert the modified robust performance problem into the model matching problem. We shall concentrate in this section on multiplicative uncertainty, with the reader filling in the details for additive uncertainty in Exercise E13.2.

First let us state the model matching problem.

13.3 A model matching problem  Let T1, T2 ∈ RH+∞. Find θ ∈ RH+∞ so that ‖T1 − θT2‖∞ is minimised. ■

The model matching problem may not have a solution. In fact, it will often be the case in applications that it does not have a solution. However, as we shall see as we get into our development, even when the problem has no solution, it can be used as a guide to solve the problem that is actually of interest to us, namely the modified robust performance problem, Problem 13.2. Some issues concerning existence of solutions to the model matching problem are the topic of Exercise E13.1.

13.4 Remark  Note that since T1, T2 ∈ RH+∞, if θ is to be a solution of the model matching problem, then it can have no imaginary axis poles. Also, since the model matching problem only cares about the value of θ on the imaginary axis, we can without loss of generality (by multiplying a solution θ to the model matching problem by an inner function that cancels all poles of θ in C+) suppose that θ has no poles in C+. ■

Let us outline the steps in performing the reduction of Problem 13.2 to Problem 13.3. After we have said how to perform the reduction, we will actually prove that everything works. The reader will wish to recall the notion of spectral factorisation for rational functions (Proposition 12.11) and the notion of a coprime factorisation for a pair of rational functions (Theorem 10.33).

13.5 Algorithm for obtaining model matching problem for multiplicative uncertainty  Given R̄P, Wu, and Wp as in Problem 13.2.
1. Define
   U3 = Wp Wp* Wu Wu* / (Wp Wp* + Wu Wu*).
2. If ‖U3‖∞ ≥ 1/2, then Problem 13.2 has no solution.
3. Let (P1, P2) be a coprime fractional representative for R̄P.
4. Let (ρ1, ρ2) be a coprime factorisation for P1 and P2: ρ1 P1 + ρ2 P2 = 1.
5. Define
   R1 = Wp ρ2 P2,  S1 = Wu ρ1 P1,
   R2 = Wp P1 P2,  S2 = −Wu P1 P2.
6. Define Q = [R2 R2* + S2 S2*]+.
7. Let V be an inner function with the property that
   (R1 R2* + S1 S2*) V / Q*
   has no poles in C+.
8. Define
   U1 = (R1 R2* + S1 S2*) V / Q*,  U2 = Q V.
9. Define U4 = [1/2 − U3]+.
10. Define
   T1 = U1 / U4,  T2 = U2 / U4.
11. Let θ be a solution to Problem 13.3, and by Remark 13.4 suppose that it has no poles in C+.
12. If ‖T1 − θT2‖∞ ≥ 1 then Problem 13.2 has no solution.
13. The controller
   RC = (ρ1 + θP2) / (ρ2 − θP1)
   is a solution to Problem 13.2. ■
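Steps 1 and 2 of Algorithm 13.5 involve nothing more than frequency-response arithmetic, so the feasibility test ‖U3‖∞ < 1/2 is easy to carry out on a grid. The sketch below uses hypothetical weights Wu and Wp (nothing here comes from the text) and relies on the fact that Wp Wp*(iω) = |Wp(iω)|² on the imaginary axis.

```python
# Sketch of Steps 1-2 of Algorithm 13.5 on a frequency grid (hypothetical weights).
import numpy as np

w = np.logspace(-3, 3, 4001)
s = 1j * w

Wp = 0.5 / (s + 0.5)          # assumed performance weight in RH+infinity
Wu = 0.9 * s / (s + 10.0)     # assumed uncertainty weight in RH+infinity

U3 = (np.abs(Wp)**2 * np.abs(Wu)**2) / (np.abs(Wp)**2 + np.abs(Wu)**2)
print(f"||U3||_inf ~= {U3.max():.4f}")

if U3.max() >= 0.5:
    print("Step 2: Problem 13.2 has no solution for these weights.")
else:
    print("Step 2 passes; continue with the reduction.")
```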
The above procedure provides a way to produce a controller satisfying the modified robust performance problem, provided one can find θ in Step 11. That is to say, we have reduced the finding of a solution to the modified robust performance problem to that of solving the model matching problem. It remains to show that all constructions made in Algorithm 13.5 are sensible, and that all claims made are true. In the next section we will do this formally. However, before we get into all the details, it is helpful to give a glimpse into how Algorithm 13.5 comes about.

As we are working with multiplicative uncertainty (see Exercise E13.2 for additive uncertainty), the problem we start out with, of course, is to find a proper RC ∈ R(s) that satisfies

  ‖|Wu T̄L|² + |Wp S̄L|²‖∞ < 1/2

for a given nominal proper plant R̄P, an uncertainty model Wu ∈ RH+∞, and a performance weight Wp ∈ RH+∞. We choose a coprime fractional representative (P1, P2) for R̄P and an associated coprime factorisation (ρ1, ρ2). By Theorem 10.37, any proper stabilising controller is then of the form

  RC = (ρ1 + θP2) / (ρ2 − θP1).

A simple computation then gives

  T̄L = P1 P2 θ + ρ1 P1,  S̄L = −(P1 P2 θ − ρ2 P2).

Defining

  R1 = Wp ρ2 P2,  S1 = Wu ρ1 P1,
  R2 = Wp P1 P2,  S2 = −Wu P1 P2,

we then obtain

  ‖|Wu T̄L|² + |Wp S̄L|²‖∞ = ‖|R1 − θR2|² + |S1 − θS2|²‖∞.

Up to this point, everything is simple enough. Now we claim that there exist functions U1, U2 ∈ RH+∞ and U3 ∈ R(s), defined in terms of R1, R2, S1, and S2, and having the property that

  ‖|R1 − θR2|² + |S1 − θS2|²‖∞ = ‖|U1 − θU2|² + U3‖∞.    (13.1)

That these functions exist, and are as stated in Steps 1 and 8 of Algorithm 13.5, will be proved in the subsequent section. Finally, with U4 as defined in Step 9, in the next section we shall show that

  ‖|U1 − θU2|² + U3‖∞ < 1/2  ⟺  ‖U4⁻¹U1 − θU4⁻¹U2‖∞ < 1.

With this rough justification behind us, let us turn to formal proofs of the validity of Algorithm 13.5. Readers not interested in this sort of detail can actually skip to Section 13.4.

13.1.3 Proof that reduction procedure works  Throughout this section, we let R̄P, Wu, and Wp be as stated in Problem 13.2.

In Step 6 of Algorithm 13.5, we are asked to compute the spectral factorisation of R2 R2* + S2 S2*. Let us verify that this spectral factorisation exists.

13.6 Lemma  R2 R2* + S2 S2* admits a spectral factorisation.
Proof  We have

  R2 R2* + S2 S2* = P1 P1* P2 P2* (Wp Wp* + Wu Wu*).

Since P1, P2 ∈ RH+∞ we may find inner-outer factorisations

  P1 = P1,in P1,out,  P2 = P2,in P2,out

by Proposition 12.4. Therefore

  P1 P1* = P1,in P1,out P1,in* P1,out* = P1,out P1,out*,

since P1,in is inner. Since P1,out is outer, it follows that P1,out is a left spectral factor for P1 P1*. Similarly P2 P2* admits a spectral factorisation by the outer factor P2,out of P2. Thus P1 P1* and P2 P2* admit spectral factorisations, and so too then does P1 P1* P2 P2*. Now let (Np, Dp) and (Nu, Du) be the c.f.r.'s of Wp and Wu. We then have

  Wp Wp* + Wu Wu* = (Np Np* Du Du* + Nu Nu* Dp Dp*) / (Dp Dp* Du Du*).

Since Wp, Wu ∈ RH+∞ by hypothesis, Dp and Du, and therefore Dp* and Du*, have no imaginary axis roots. Also by assumption, Np and Nu have no common roots on iR. One can then show (along the lines of Exercise E12.3) that this implies that Np Np* Du Du* + Nu Nu* Dp Dp* has constant sign on iR. Since it is clearly even, one may infer from Proposition 12.6 that Np Np* Du Du* + Nu Nu* Dp Dp* admits a spectral factorisation. Therefore, by Proposition 12.11, so too does Wp Wp* + Wu Wu*. Finally, by Exercise E12.4 we conclude that R2 R2* + S2 S2* admits a spectral factorisation. ■
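For rational functions the spectral factor [·]+ used in Steps 6 and 9 can be computed by factoring and keeping the left half-plane roots. The routine below is only a polynomial-level sketch of that idea with a made-up example (it is not the procedure of the text, and it assumes the even polynomial has no roots on the imaginary axis); applied to numerator and denominator separately it produces the spectral factor of an even, sign-definite rational function.

```python
# Sketch of a polynomial spectral factorisation: find real p with p(s)p(-s) = F(s),
# all roots of p in Re s < 0.  Coefficient arrays are highest degree first.
import numpy as np

def spectral_factor(F):
    roots = np.roots(F)
    p = np.real(np.poly(roots[roots.real < 0]))        # keep left-half-plane roots
    n = len(p) - 1
    p_neg = p * (-1.0) ** np.arange(n, -1, -1)          # coefficients of p(-s)
    scale = np.sqrt(abs(F[0] / np.polymul(p, p_neg)[0]))  # match the leading coefficient
    return scale * p

# example: F(s) = -s^2 + 4 = (s + 2)(-s + 2), so the spectral factor is s + 2
F = np.array([-1.0, 0.0, 4.0])
print(spectral_factor(F))        # ~ [1. 2.]
```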
Our next result declares that U1, U2, and U3 are as they should be, meaning that they satisfy the relation (13.1).

13.7 Lemma  The functions U1, U2, and U3 as defined in Steps 8 and 1 satisfy

  ‖|R1 − θR2|² + |S1 − θS2|²‖∞ = ‖|U1 − θU2|² + U3‖∞.

Proof  It is sufficient that the relation

  (R1 − θR2)(R1* − θ*R2*) + (S1 − θS2)(S1* − θ*S2*) = (U1 − θU2)(U1* − θ*U2*) + U3    (13.2)

holds for all θ and for s = iω. Doing the manipulation shows that if the three relations

  R2 R2* + S2 S2* = U2 U2*
  R1 R2* + S1 S2* = U1 U2*    (13.3)
  R1 R1* + S1 S1* = U1 U1* + U3

hold for s = iω, then (13.2) will indeed hold for all θ. We let Q = [R2 R2* + S2 S2*]+. If V is an inner function with the property that

  (R1 R2* + S1 S2*) V / Q*

has no poles in C+, then if U2 = Q V we have

  U2 U2* = Q V Q* V* = Q Q* = R2 R2* + S2 S2*.

Thus U2 satisfies the first of equations (13.3). Also,

  U1 = (R1 R2* + S1 S2*) V / Q*

clearly satisfies the second of equations (13.3). To verify the last of equations (13.3), we may directly compute

  R1 R1* + S1 S1* − U1 U1* = R1 R1* + S1 S1* − (R1* R2 + S1* S2)(R1 R2* + S1 S2*) / (R2 R2* + S2 S2*),    (13.4)

using our solution for U1 and the fact that V is inner. A straightforward substitution of the definitions of R1, R2, S1, and S2 now gives U3 as in Step 1. ■

Our next lemma verifies Step 2 of Algorithm 13.5.

13.8 Lemma  If ‖U3‖∞ ≥ 1/2 then Problem 13.2 has no solution.
Proof  One may verify by direct computation that

  U3 = R1 R1* + S1 S1* − (R1* R2 + S1* S2)(R1 R2* + S1 S2*) / (R2 R2* + S2 S2*)

(this follows from (13.4)). Now work backwards through the proof of Lemma 13.7 to see that

  U3 = (R1 − θR2)(R1* − θ*R2*) + (S1 − θS2)(S1* − θ*S2*) − (U1 − θU2)(U1* − θ*U2*)

for any admissible θ ∈ RH+∞. Therefore, if ‖U3‖∞ ≥ 1/2 then

  ‖|R1 − θR2|² + |S1 − θS2|²‖∞ ≥ 1/2.

However, a simple working through of the definitions of R1, R2, S1, and S2 shows that this implies that for any stabilising controller RC we must have

  ‖|Wu T̄L|² + |Wp S̄L|²‖∞ ≥ 1/2,

as desired. ■

Next we show that with T1 and T2 as defined in Step 10, the modified robust performance problem is indeed equivalent to the model matching problem.

13.9 Lemma  With T1 and T2 as defined in Step 10 we have

  ‖|Wu T̄L|² + |Wp S̄L|²‖∞ < 1/2  ⟺  ‖T1 − θT2‖∞ < 1,

where

  RC = (ρ1 + θP2) / (ρ2 − θP1).

Proof  As outlined at the end of Section 13.1.2, the condition

  ‖|Wu T̄L|² + |Wp S̄L|²‖∞ < 1/2

is equivalent to

  ‖|U1 − θU2|² + U3‖∞ < 1/2.

Also note that by definition, U3(iω) ≥ 0 for all ω ∈ R, and that U3 = U3*, the latter fact implying that U3 is even. Therefore, provided that ‖U3‖∞ < 1/2, the function 1/2 − U3 admits a spectral factorisation. Now we compute

  ‖|U1 − θU2|² + U3‖∞ < 1/2
  ⟺ |U1(iω) − θ(iω)U2(iω)|² + U3(iω) < 1/2,  ω ∈ R
  ⟺ |U1(iω) − θ(iω)U2(iω)|² < 1/2 − U3(iω),  ω ∈ R
  ⟺ |U1(iω) − θ(iω)U2(iω)|² < [1/2 − U3]+(iω) [1/2 − U3]+(−iω),  ω ∈ R
  ⟺ |U1(iω) − θ(iω)U2(iω)|² < |U4(iω)|²,  ω ∈ R
  ⟺ |U4⁻¹(iω)U1(iω) − θ(iω)U4⁻¹(iω)U2(iω)|² < 1,  ω ∈ R.

By definition of T1 and T2, the lemma follows. ■

It is necessary that T1, T2 ∈ RH+∞ in order to fit them into the model matching problem. The next lemma ensures that this follows from the constructions we have made.

13.10 Lemma  T1, T2 ∈ RH+∞.
Proof  First note that ‖U3‖∞ < 1/2 by the time we have gotten to defining T1 and T2. Therefore U4 = [1/2 − U3]+ is not strictly proper and has no zeros in C̄+, and so is invertible in RH+∞. The lemma will then follow if we can show that U1, U2 ∈ RH+∞.

First let us show that U2 ∈ RH+∞. By definition, U2 is the left half-plane spectral factor of

  P1 P1* P2 P2* (Wp Wp* + Wu Wu*).

As such, it is the product of the two quantities

  [P1 P1* P2 P2*]+,  [Wp Wp* + Wu Wu*]+.
13.2 Optimal model matching I. Nevanlinna-Pick theory 471 472 13 An introduction to H∞ control theory

Since each of P1 P2 ∈ RH+ ∗ + +


∞ , [P1 P1 P2 P2 ] ∈ RH∞ . Since

2. Pick was not interested in making the restriction that if a every Pick pair have the
property that its complex conjugate also be a Pick pair.
Np Np∗ Du Du∗+ Nu Nu∗ DpDp∗
Wp Wp∗ + Wu Wu∗ = , 3. Pick was interested in the case where the points a1 , . . . , an lie in the open disk D(0, 1)
DpDp Du Du∗

of radius 1 centred at 0. This is not a genuine distinction, however, as the map s 7→ 1−s1+s
we have bijectively maps + onto D(0, 1), and so translates the domain of concern for Pick to


[Np Np∗ Du Du∗ + Nu Nu∗ Dp Dp∗ ]+ our domain.
[WpWp∗ + Wu Wu∗ ]+ = .
Dp Du 4. Finally, Pick allowed interpolating functions to be general meromorphic functions,
Since Wp , Wu ∈ RH+ ∗ + + + bounded and analytic in + . For obvious reasons, our interest is in the subset of such
∞ , it follows that [Wp Wp + Wu Wu ] ∈ RH∞ . Thus U2 ∈ RH∞ .


Now let us show that U1 ∈ RH+ . A computation shows that functions that are in (s), i.e., functions in RH+
∞. 



P1∗ P2∗ ρ2 P2 Wp Wp∗ − ρ1 P1 Wu Wu∗ 13.2.1 Pick’s theorem Pick’s conditions for the existence of a solution to Prob-
U1 = V.
[P1 P2 P2 P2 ]
∗ ∗ −
[WpWp∗ + Wu Wu∗ ]− lem 13.11 are quite simple. The proof that these conditions are necessary and sufficient
is not entirely straightforward. In this section we only prove necessity, as sufficiency will
The inner function V is designed so that this function has no poles in + . We also claim follow from our algorithm for solving the Nevanlinna-Pick interpolation problem in the en-


that U1 has no poles on i . Since Wp, Wu ∈ RH+ ∞ and since they have no common imaginary suing section. Our necessity proof follows [Doyle, Francis and Tannenbaum 1990]. For the


axis zeros, it follows that [WpWp∗ + Wu Wu∗ ]− has no zeros on i . Clearly, neither P1∗ P2∗ nor statement of Pick’s theorem, recall that M ∈ n×n is Hermitian if M = M ∗ = M .
t


ρ2 P2 WpWp∗ − ρ1 P1 Wu Wu∗ have poles on i . Thus, our claim will follow if we can show that A Hermitian matrix is readily verified to have real eigenvalues. Therefore, the notions of


P1∗ P2∗
[P1 P2∗ P2 P2∗ ]−
has no poles on i . This is true since the imaginary axis zeros of P 1∗ P2∗ and definiteness presented in Section 5.4.1 may be applied to Hermitian matrices.


[P1 P2∗ P2 P2∗ ]− agree in location and multiplicity. Thus we have shown that U 1 is analytic in
+
+ . That U1 ∈ RH∞ will now follow if we can show that U1 is proper.  finish 13.13 Theorem (Pick’s Theorem) Problem 13.11 has a solution if and only if the Pick ma-


trix , the complex k × k symmetric matrix M with components


13.2 Optimal model matching I. Nevanlinna-Pick theory 1 − bj b̄`
Mj` = , j, ` = 1, . . . , k,
aj + ā`
The first solution we shall give to the model matching problem comes from a seemingly is positive-semidefinite.
unrelated interpolation problem. To state the problem, we need a little notation. We let
Proof of necessity Suppose that Problem 13.11 has a solution R. For c 1 , . . . , ck ∈ , not all
RH+,∞ be the collection of proper functions in (s) with no poles in + . Thus RH+,∞ is just




zero, consider the complex input u : (−∞, 0] → given by




like RH+ , except that now we allow the functions to have complex coefficients. Note that



k
k·k∞ still makes sense for functions in RH+, ∞ . Now the interpolation problem is as follows.


X
u(t) = c j e aj t .
j=1
13.11 Nevanlinna-Pick interpolation problem Let {a1 , . . . , ak } ⊂ + and let {b1 , . . . , bk } ⊂


collections of distinct points. A Pick pair is then a pair (aj , bj ), i ∈ {1, . . . , k}. Suppose This can be considered as an input to the transfer function R, with the output computed
that if (ak , bk ) is a Pick pair, then so is (āj , b̄j ), j = 1, . . . , k. by separately computing the real and imaginary parts. Moreover, if h R denotes the inverse
Find R ∈ RH+ Laplace transform for R, the complex output will be
∞ so that
Check
Z ∞
(i) kRk∞ ≤ 1 and y(t) = hR (τ )u(t − τ ) dτ
(ii) R(aj ) = bj , j = 1, . . . , n.  0
k
X Z ∞
This problem was originally solved by Pick [1916], and was solved independently by = cj hR (τ )eaj (t−τ ) dτ
Nevanlinna [1919]. Nevanlinna also gave an algorithm for finding a solution to the interpola- j=1 0

tion problem [Nevanlinna 1929]. In this section, we will state and prove Pick’s necessary and k
X Z ∞
sufficient condition for the solution of the interpolation problem, and also give an algorithm = c j e aj t hR (τ )e−aj τ dt
for determining a solution. j=1 0

k
X
13.12 Remark We should say that the problem solved by Pick is somewhat different than = cj eaj tR(aj )
the one we state here. The difference occurs in three ways. j=1
k
1. Pick actually allowed kRk∞ = 1. However, our purposes will require that the H∞ -norm X
= c j b j e aj t ,
of R be strictly less than 1.
j=1
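Checking the Pick condition numerically amounts to building M and examining its smallest eigenvalue. The sketch below uses hypothetical data chosen as bj = R(aj) for R(s) = 0.9/(s + 1), so the interpolation problem certainly has a solution and the test must come out nonnegative; none of this data is from the text.

```python
# Sketch: build the Pick matrix of Theorem 13.13 and test positive-semidefiniteness.
import numpy as np

a = np.array([1.0 + 1.0j, 1.0 - 1.0j, 2.0, 0.5])      # conjugate-closed points in C+
b = 0.9 / (a + 1.0)                                    # values of R(s) = 0.9/(s+1)

M = (1.0 - np.outer(b, b.conj())) / (a[:, None] + a[None, :].conj())
eigs = np.linalg.eigvalsh(M)                           # Hermitian, so real eigenvalues
print("smallest eigenvalue of the Pick matrix:", eigs.min())
print("Pick condition satisfied:", bool(eigs.min() >= -1e-10))
```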
Proof of necessity  Suppose that Problem 13.11 has a solution R. For c1, ..., ck ∈ C, not all zero, consider the complex input u: (−∞, 0] → C given by

  u(t) = Σ_{j=1}^{k} cj e^{aj t}.

This can be considered as an input to the transfer function R, with the output computed by separately computing the real and imaginary parts. Moreover, if hR denotes the inverse Laplace transform for R, the complex output will be

  y(t) = ∫_0^∞ hR(τ) u(t − τ) dτ
       = Σ_{j=1}^{k} cj ∫_0^∞ hR(τ) e^{aj(t−τ)} dτ
       = Σ_{j=1}^{k} cj e^{aj t} ∫_0^∞ hR(τ) e^{−aj τ} dτ
       = Σ_{j=1}^{k} cj e^{aj t} R(aj)
       = Σ_{j=1}^{k} cj bj e^{aj t},

where we have used the definition of the Laplace transform. By part (i) of Theorem 5.21, and since ‖R‖∞ ≤ 1, it follows that

  ∫_{−∞}^0 |y(t)|² dt ≤ ∫_{−∞}^0 |u(t)|² dt.

Substituting the current definitions of u and y gives

  ∫_{−∞}^0 |Σ_{j=1}^{k} cj bj e^{aj t}|² dt ≤ ∫_{−∞}^0 |Σ_{j=1}^{k} cj e^{aj t}|² dt
  ⟹ ∫_{−∞}^0 Σ_{j,ℓ=1}^{k} cj e^{aj t} c̄ℓ e^{āℓ t} dt ≥ ∫_{−∞}^0 Σ_{j,ℓ=1}^{k} cj bj e^{aj t} c̄ℓ b̄ℓ e^{āℓ t} dt
  ⟹ ∫_{−∞}^0 Σ_{j,ℓ=1}^{k} cj c̄ℓ (1 − bj b̄ℓ) e^{(aj + āℓ) t} dt ≥ 0.

We now compute

  ∫_{−∞}^0 e^{(aj + āℓ) t} dt = 1/(aj + āℓ),

thus giving

  Σ_{j,ℓ=1}^{k} cj c̄ℓ (1 − bj b̄ℓ)/(aj + āℓ) ≥ 0,

which we recognise as being equivalent to the expression

  x* M x ≥ 0,

where x = (c̄1, ..., c̄k). Thus we have shown that the Pick matrix is positive-semidefinite. ■

13.2.2 An inductive algorithm for solving the interpolation problem  In this section we provide a simple algorithm for solving the Nevanlinna-Pick interpolation problem in the situation when the Pick matrix is positive-definite. In doing so, we also complete the proof of Theorem 13.13. The algorithm we present in this section follows [Marshall 1975].

Before we state the algorithm, we need to introduce some notation. First note that by the Maximum Modulus Principle, it is necessary that for the Nevanlinna-Pick interpolation problem to have a solution, each of the numbers b1, ..., bk satisfy |bj| ≤ 1. Thus we may assume this to be the case when we seek a solution to the problem. For b ∈ C satisfying |b| < 1, define the Blaschke function Bb ∈ R(s) associated with b by

  Bb(s) = (s − b)/(1 − b̄s)  if b ∈ R,
  Bb(s) = (s² − 2 Re(b) s + |b|²)/(1 − 2 Re(b) s + |b|² s²)  otherwise.

Some easily verified relevant properties of Blaschke functions are the subject of Exercise E13.3. Also, for a ∈ C define a function Aa ∈ R(s) by

  Aa(s) = (s − a)/(s + ā)  if a ∈ R,
  Aa(s) = (s² − 2 Re(a) s + |a|²)/(s² + 2 Re(a) s + |a|²)  otherwise.

Again, we refer to Exercise E13.3 for some of the easily proven properties of such functions. Let us begin by solving the Nevanlinna-Pick interpolation problem when k = 1.

13.14 Lemma  Let a1 ∈ C+ and let b1 ∈ C have the property that |b1| < 1. The associated Nevanlinna-Pick interpolation problem has an infinite number of solutions if it has one solution, and the set of all solutions is given by

  { Re(R) | R(s) = B_{−b1}(R1(s) A_{a1}(s)), R1 ∈ C(s) has no poles in C+, and ‖R1‖∞ < 1 }.

Proof  First note that the Nevanlinna-Pick interpolation problem does indeed have a solution, namely the trivial solution R0(s) = b1.

Now let R1 ∈ C(s) have no poles in C+ and suppose that ‖R1‖∞ < 1. If

  R(s) = B_{−b1}(R1(s) A_{a1}(s))

then R is the composition of the functions

  s ↦ R1(s) A_{a1}(s)
  s ↦ B_{−b1}(s).

The first of these functions is analytic in C+ since both R1 and A_{a1} are. Also, by Exercise E13.3 and since ‖R1‖∞ < 1, the first of these functions maps C+ onto the disk D̄(0, 1). The second of these functions, by Exercise E13.3, is analytic in D̄(0, 1) and maps it onto itself. Thus we can conclude that R ∈ C(s) as defined has no poles in C+ and that ‖R‖∞ < 1. What's more, we claim that R(a1) = b1. Indeed

  R(a1) = B_{−b1}(R1(a1) A_{a1}(a1)) = B_{−b1}(0) = b1,

using the definitions of A_{a1} and B_{b1}. It only remains to show that Re(R) solves Problem 13.11. Clearly, since b1 must be real, it follows that Re(R)(a1) = b1. Furthermore, since ‖Re(R)‖∞ ≤ ‖R‖∞, it follows that ‖Re(R)‖∞ < 1. Finally, Re(R) can have no poles in C+ since R has no poles in C+.

Now suppose that R ∈ C(s) solves Problem 13.11. Define R1 ∈ C(s) by

  R1(s) = B_{b1}(R(s)) / A_{a1}(s).

The function in the numerator is analytic in C+ and has a zero at s = a1. Therefore, since the only zero of A_{a1} is at a1, R1 is analytic in C+. Furthermore, the H∞-norm of the numerator is strictly bounded by 1, and since the H∞-norm of the denominator equals 1, we conclude that ‖R1‖∞ < 1. This concludes the proof of the lemma. ■
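The one-point construction of Lemma 13.14 is easy to exercise numerically. The sketch below uses hypothetical real data a1, b1 and an arbitrary example parameter R1 with ‖R1‖∞ < 1; it verifies the interpolation condition and the norm bound on a frequency grid.

```python
# Sketch of the k = 1 construction in Lemma 13.14 for real data a1, b1 (made-up values).
import numpy as np

a1, b1 = 2.0, 0.4                                    # a1 in C+, |b1| < 1

A  = lambda s: (s - a1) / (s + a1)                   # A_{a1}: inner, vanishes at a1
B  = lambda z, b: (z - b) / (1 - b * z)              # Blaschke factor for real b
R1 = lambda s: 0.7 / (s + 1)                         # example parameter, ||R1||_inf < 1

R  = lambda s: B(R1(s) * A(s), -b1)                  # R = B_{-b1}(R1 A_{a1})

print("R(a1) =", R(a1))                              # equals b1
w = np.logspace(-4, 4, 20001)
print("sup_w |R(iw)| =", np.abs(R(1j * w)).max())    # strictly less than 1
```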

As stated in the proof of the lemma, if |b1| < 1, then the one point interpolation problem always has the trivial solution R0(s) = b1. Let us also do this in the case when k = 2 and we have a2 = ā1 ≠ a1 and b2 = b̄1 ≠ b1.

13.15 Lemma  Let {a1, a2 = ā1} ⊂ C+ and let {b1, b2 = b̄1} ⊂ C have the property that |b1| < 1. Also suppose that a1 ≠ a2 and b1 ≠ b2. The associated Nevanlinna-Pick interpolation problem has an infinite number of solutions if it has one solution, and the set of all solutions is given by

  { Re(R) | R(s) = B_{−b1}(R1(s) A_{a1}(s)), R1 ∈ C(s) has no poles in C+, and ‖R1‖∞ < 1 }.

Proof  Problem 13.11 has the solution

  Rs(s) =

The lemmas give the form of all solutions to the Nevanlinna-Pick interpolation problem in the cases when k = 1 and k = 2 with the points being complex conjugates of one another. It turns out that with this case, one can construct solutions to the general problem. To do this, one makes the clever observation (this was Nevanlinna's contribution) that one can reduce a k point interpolation problem to a k − 1 or k − 2 point interpolation problem by properly defining the new k − 1 or k − 2 points. We say how this is done in a definition.

13.16 Definition  For k > 1, let

  {a1, ..., ak}, {b1, ..., bk} ⊂ C

be as in Problem 13.11. Define

  k̃ = k − 1 if bk ∈ R, and k̃ = k − 2 otherwise.

The Nevanlinna reduction of the numbers {a1, ..., ak} and {b1, ..., bk} is the collection of numbers

  {ã1, ..., ãk̃}, {b̃1, ..., b̃k̃} ⊂ C

defined by

With this in mind, we state the algorithm that forms the main result of this section.

13.17 Algorithm for solving the Nevanlinna-Pick interpolation problem  Given points

  {a1, ..., ak}, {b1, ..., bk} ⊂ C

as in Problem 13.11, additionally assume that |bj| < 1, j = 1, ..., k, and that the Pick matrix is positive-definite.
1.

13.2.3 Relationship to the model matching problem  The above discussion of the Nevanlinna-Pick interpolation problem is not obviously related to the model matching problem, Problem 13.3.

13.18 Model matching by Nevanlinna-Pick interpolation  Given T1, T2 ∈ RH+∞. ■

13.3 Optimal model matching II. Nehari's Theorem

In this section we present another method for obtaining a solution, or an approximate solution, to the model matching problem, Problem 13.3. The strategy in this section involves significantly more development than does the Nevanlinna-Pick procedure from Section 13.2. However, the algorithm produced is actually easier to apply than is that using Nevanlinna-Pick theory. Unfortunately, the methods in this section suffer from on occasion producing a controller that is improper, and one must devise hacks to get around this, just as with Nevanlinna-Pick theory. Our presentation in this section follows that of Francis [1987].

13.3.1 Hankel operators in the frequency-domain  The key tool for the methods of this section is something new for us: a Hankel operator of a certain type. To initiate this discussion, let us note that as in Proposition 12.3, but restricting to functions in RL2, we have a decomposition RL2 = RH−2 ⊕ RH+2. That is to say, any strictly proper rational function with no poles on iR has a unique expression as a sum of a strictly proper rational function with no poles in C+ and a strictly proper rational function with no poles in C−. This is no surprise as this decomposition is simply obtained by partial fraction expansion. The essential idea of this section puts this mundane idea to good use. Let us denote by Π+: RL2 → RH+2 and Π−: RL2 → RH−2 the projections.

Let us list some operators that are readily verified to have the stated properties.
1. The Laurent operator with symbol R: Given R ∈ RL∞ and Q ∈ RL2, one readily sees that RQ ∈ RL2. Thus, given R ∈ RL∞ we have a map ΛR: RL2 → RL2 defined by ΛR(Q) = RQ. This is the Laurent operator with symbol R.
2. The Toeplitz operator with symbol R: Clearly, if R ∈ RL∞ and if Q ∈ RH+2, then RQ ∈ RL2. Therefore, Π+(RQ) ∈ RH+2. Thus, for R ∈ RL∞, the map ΘR: RH+2 → RH+2 defined by ΘR(Q) = Π+(RQ) is well-defined, and is called the Toeplitz operator with symbol R.
3. The Hankel operator with symbol R: Here again, if R ∈ RL∞ and if Q ∈ RH+2, then RQ ∈ RL2. Now we map this rational function into RH−2 using the projection Π−. Thus, for R ∈ RL∞, we define a map ΓR: RH+2 → RH−2 by ΓR(Q) = Π−(RQ). This is the Hankel operator with symbol R.

13.19 Remarks  1. Note that the Toeplitz and Hankel operators together specify the value of the Laurent operator when applied to functions in RH+2. That is to say, if R ∈ RL∞ and Q ∈ RH+2 then ΛR(Q) = ΘR(Q) + ΓR(Q).
2. It is more common to see the Laurent, Toeplitz, and Hankel operators defined for general analytic functions rather than just rational functions. However, since our interest is entirely in the rational case, it is to this that we restrict our interest. ■
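For rational functions the projections Π+ and Π−, and hence the Hankel operator ΓR(Q) = Π−(RQ), amount to a partial fraction expansion followed by a sorting of poles. The sketch below uses scipy's partial-fraction routines on a made-up example; it is an illustration under those assumptions, not a routine from the text.

```python
# Sketch: the decomposition RL2 = RH2^- (+) RH2^+ by partial fractions, and the
# Hankel image Gamma_R(Q) = Pi^-(RQ) for rational R, Q (hypothetical example data).
# Polynomials are numpy coefficient arrays, highest degree first.
import numpy as np
from scipy.signal import residue, invres

def split_stable_antistable(num, den):
    """Split strictly proper N/D with no poles on iR into its RH2^+ and RH2^- parts."""
    r, p, k = residue(num, den)                   # partial fraction expansion
    stable = p.real < 0                           # poles in C^- give the RH2^+ part
    plus = invres(r[stable], p[stable], [])       # (num, den) of the RH2^+ part
    minus = invres(r[~stable], p[~stable], [])    # (num, den) of the RH2^- part
    return plus, minus

# R(s) = 1/(s - 1) in RL_infinity, Q(s) = 1/(s + 2) in RH2^+.
# RQ = 1/((s - 1)(s + 2)); its RH2^- part is the Hankel image Gamma_R(Q).
num_RQ = np.array([1.0])
den_RQ = np.polymul([1.0, -1.0], [1.0, 2.0])
(plus_n, plus_d), (minus_n, minus_d) = split_stable_antistable(num_RQ, den_RQ)
print("Pi^+(RQ):", plus_n, "/", plus_d)                  # ~ -1/3 over (s + 2)
print("Gamma_R(Q) = Pi^-(RQ):", minus_n, "/", minus_d)   # ~  1/3 over (s - 1)
```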
The Laurent, Toeplitz, and Hankel operators are linear. Thus it makes sense to ask questions about the nature of their spectrum. However, the spaces RL2, RH−2, and RH+2 are infinite-dimensional, so these issues are not immediately approachable as they are in finite dimensions. The good news, however, is that these operators are "essentially" finite-dimensional. The easiest way to make sense of this is with state-space techniques, and this is done in the next section.

It also turns out that the Laurent, Toeplitz, and Hankel operators are defined on spaces with an inner product. Indeed, on RL2 we may define an inner product by

  ⟨R1, R2⟩2 = (1/2π) ∫_{−∞}^{∞} R1(iω) R2(−iω) dω.    (13.5)

Note that this is an inner product on a real vector space. This inner product may clearly be applied to any functions in RL2, including those in the subspaces RH−2 and RH+2. Indeed, RH−2 and RH+2 are orthogonal with respect to this inner product (see Exercise E13.4). One may define the adjoint of any of our operators with respect to this inner product. The adjoint of the Laurent operator with symbol R is the map Λ*R: RL2 → RL2 defined by the relation

  ⟨ΛR(R1), R2⟩2 = ⟨R1, Λ*R(R2)⟩2

for R1, R2 ∈ RL2. In like fashion, the Toeplitz operator has an adjoint Θ*R: RH+2 → RH+2 defined by

  ⟨ΘR(R1), R2⟩2 = ⟨R1, Θ*R(R2)⟩2,  R1, R2 ∈ RH+2,

and the Hankel operator has an adjoint Γ*R: RH−2 → RH+2 defined by

  ⟨ΓR(R1), R2⟩2 = ⟨R1, Γ*R(R2)⟩2,  R1 ∈ RH+2, R2 ∈ RH−2.

The following result gives explicit formulae for the adjoints.

13.20 Proposition  For R ∈ RL∞ the following statements hold:
(i) Λ*R = ΛR*;
(ii) Θ*R = ΘR*;
(iii) Γ*R(Q) = Π+(ΛR*(Q)).
Proof  (i) We compute

  ⟨ΛR(R1), R2⟩2 = (1/2π) ∫_{−∞}^{∞} ΛR(R1)(iω) R2(−iω) dω
              = (1/2π) ∫_{−∞}^{∞} R(iω) R1(iω) R2(−iω) dω
              = (1/2π) ∫_{−∞}^{∞} R1(iω) R*(−iω) R2(−iω) dω
              = (1/2π) ∫_{−∞}^{∞} R1(iω) (R* R2)(−iω) dω
              = ⟨R1, ΛR*(R2)⟩2.

This then gives Λ*R = ΛR* as desired.
(ii) For R1, R2 ∈ RH+2 we compute

  ⟨ΘR(R1), R2⟩2 = ⟨ΛR(R1), R2⟩2 = ⟨R1, Λ*R(R2)⟩2 = ⟨R1, ΛR*(R2)⟩2 = ⟨R1, ΘR*(R2)⟩2,

and this part of the proposition follows.
(iii) For R1 ∈ RH+2 and R2 ∈ RH−2 we compute

  ⟨ΓR(R1), R2⟩2 = ⟨ΛR(R1), R2⟩2 = ⟨R1, Λ*R(R2)⟩2 = ⟨R1, ΛR*(R2)⟩2,

and from this the result follows. ■

In the next section, we will come up with concrete realisations of the Hankel operator and its adjoint using time-domain methods.

13.3.2 Hankel operators in the time-domain  The above operators defined in the rational function domain are simple enough, but they have interesting and nontrivial counterparts in the time-domain. To simplify matters, let us denote by L̄2(−∞, ∞) those functions of time that, when Laplace transformed, give functions in RL2. As we saw in Section E.3, this consists exactly of sums of products of polynomial functions, trigonometric functions, and exponential functions of time. Let us denote by L̄2(−∞, 0] the subset of L̄2(−∞, ∞) consisting of functions that are bounded for t < 0 and vanish for t > 0, and by L̄2[0, ∞) the subset of L̄2(−∞, ∞) consisting of functions that are bounded for t > 0 and vanish for t < 0. Note that

  L̄2(−∞, ∞) = L̄2(−∞, 0] ⊕ L̄2[0, ∞).

That is, every function in L̄2(−∞, ∞) can be uniquely decomposed into a sum of two functions, one that is bounded for t < 0 and one that is bounded for t > 0. It is clear that this decomposition corresponds exactly to the decomposition RL2 = RH−2 ⊕ RH+2 that uses the partial fraction expansion. Let us also define projections

  Π̄+: L̄2(−∞, ∞) → L̄2[0, ∞),
  Π̄−: L̄2(−∞, ∞) → L̄2(−∞, 0].

If we employ the inner product

  ⟨f1, f2⟩2 = ∫_{−∞}^{∞} f1(t) f2(t) dt    (13.6)

on L̄2(−∞, ∞), then obviously L̄2(−∞, 0] and L̄2[0, ∞) are orthogonal. We hope that it will be clear from context what we mean when we use the symbol ⟨·, ·⟩2 in two different ways, one for the frequency-domain, and the other for the time-domain.
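A quick numerical sanity check of the two inner products is possible on a frequency grid. The sketch below uses made-up functions and assumes the form of (13.5) written above: it confirms that the frequency-domain inner product reproduces a time-domain integral, and that a member of RH+2 is orthogonal to a member of RH−2 (Exercise E13.4).

```python
# Sketch: the inner product (13.5) on a grid, checked against (13.6) and orthogonality.
import numpy as np

w = np.linspace(-500.0, 500.0, 1_000_001)
dw = w[1] - w[0]

def ip_freq(F, G):                        # <F, G>_2 = (1/2pi) \int F(iw) G(-iw) dw
    return float(np.sum(F(1j*w) * G(-1j*w)).real * dw / (2*np.pi))

F = lambda s: 1.0 / (s + 1.0)             # Laplace transform of e^{-t} 1(t), in RH2^+
G = lambda s: 1.0 / (s + 3.0)             # Laplace transform of e^{-3t} 1(t), in RH2^+
H = lambda s: 1.0 / (s - 2.0)             # in RH2^-: transform of -e^{2t} 1(-t)

print(ip_freq(F, G))                      # ~ 0.25 = \int_0^inf e^{-t} e^{-3t} dt
print(ip_freq(F, H))                      # ~ 0, orthogonality of RH2^+ and RH2^-
```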
The following result summarises the previous discussion.

13.21 Proposition  The Laplace transform is a bijection
(i) from L̄2(−∞, ∞) to RL2,
(ii) from L̄2(−∞, 0] to RH−2, and
(iii) from L̄2[0, ∞) to RH+2.
Furthermore, the Laplace transform intertwines the projections: L ∘ Π̄+ = Π+ ∘ L and L ∘ Π̄− = Π− ∘ L, i.e., the corresponding diagrams commute.

We now turn our attention to describing how the operators of Section 13.3.1 appear in the time-domain, given the correspondence of Proposition 13.21. Our interest is particularly in the Hankel operator. Given Proposition 13.21 we expect the analogue of the frequency domain Hankel operator to map L̄2[0, ∞) to L̄2(−∞, 0], given R ∈ RL∞. To do this, given R ∈ RL∞, let us write R = R1 + R2 for R1 ∈ RH−2 and R2 ∈ RH+∞ as in Proposition 12.3. We then let Σ1 = (A, b, cᵗ, 0₁) be the complete SISO linear system in controller canonical form with the property that TΣ1 = R1. Therefore, the inverse Laplace transform for R1 is the impulse response for Σ1. Note that σ(A) ⊂ C+. Thus if r1 ∈ L̄2(−∞, 0] is the inverse Laplace transform of R1 we have

  r1(t) = −cᵗ e^{At} b for t ≤ 0,  r1(t) = 0 for t > 0,

which is the anticausal impulse response for R1. Now, for u ∈ L̄2[0, ∞) and for t ≤ 0, let us define

  Γ̄R(u)(t) = ∫_0^∞ r1(t − τ) u(τ) dτ.    (13.7)

We take Γ̄R(u)(t) = 0 for t > 0. We claim that Γ̄R is the time-domain version of the Hankel operator ΓR. Let us first prove that it takes its values in the right space.

13.22 Lemma  Γ̄R(u) ∈ L̄2(−∞, 0].
Proof  For t ≤ 0 we have

  Γ̄R(u)(t) = −cᵗ e^{At} ∫_0^∞ e^{−Aτ} b u(τ) dτ.

Since A has all eigenvalues in C+, −A has all eigenvalues in C−, so the integral converges. Also, for the same reason, e^{At} is bounded for t ≤ 0. This shows that Γ̄R(u) ∈ L̄2(−∞, 0], as claimed. ■

Now let us show that this is indeed "the same" as the frequency domain Hankel operator.

13.23 Proposition  Let R ∈ RL∞ and write R = R1 + R2 with R1 ∈ RH−2 and R2 ∈ RH+∞. Let r1 ∈ L̄2(−∞, 0] be the inverse Laplace transform of R1 and define Γ̄R as in (13.7). Then ΓR ∘ L = L ∘ Γ̄R on L̄2[0, ∞); that is, the diagram relating Γ̄R and ΓR through the Laplace transform commutes.
Proof  Let u ∈ L̄2[0, ∞) and denote y = Γ̄R(u) ∈ L̄2(−∞, 0]. Denote as usual the Laplace transforms of u and y by û and ŷ. We then have

  ΓR(û) = Π−((R1 + R2) û).

Note that R2 û ∈ RH+2. Therefore, Π−((R1 + R2) û) = Π−(R1 û). Now let us compute the inverse Laplace transform of R1 û. Let Σ̃ = (Ã, b̃, c̃ᵗ, 0₁) be a complete SISO linear system defined so that TΣ̃ = û. Thus à has all eigenvalues in C−. Now we compute

  L⁻¹(R1 û)(t) = ∫_{−∞}^{∞} r1(t − τ) u(τ) dτ
             = −∫_{−∞}^{∞} cᵗ e^{At} 1(τ − t) e^{−Aτ} b u(τ) dτ
             = −∫_0^{∞} cᵗ e^{At} 1(τ − t) e^{−Aτ} b u(τ) dτ,

since for τ < 0, u(τ) = 0. Now note that Π̄−(L⁻¹(R1 û)) is nonzero only for t < 0 so that we can write

  (Π̄−(L⁻¹(R1 û)))(t) = −cᵗ e^{At} ∫_0^∞ e^{−Aτ} b u(τ) dτ.

Thus Π̄−(L⁻¹(R1 û)) = Γ̄R(u). By Proposition 13.21 this means that

  L⁻¹(Π−(R1 û)) = L⁻¹(ΓR(û)) = Γ̄R(u),

or, equivalently, that ΓR ∘ L = L ∘ Γ̄R, as claimed. ■
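As a quick check of the formula appearing in the proof of Lemma 13.22, the following sketch evaluates Γ̄R(u) for a one-pole example; the symbol, realisation, and input are all made up for illustration.

```python
# Sketch of the time-domain Hankel operator (13.7) for R1(s) = 1/(s - 1), realised with
# A = [1], b = [1], c = [1] (sigma(A) in C+, so r1(t) = -e^t for t <= 0), and u(t) = e^{-t} 1(t).
import numpy as np

A, b, c = 1.0, 1.0, 1.0
u = lambda tau: np.exp(-tau)

tau = np.linspace(0.0, 60.0, 600001)
integral = np.sum(np.exp(-A * tau) * b * u(tau)) * (tau[1] - tau[0])   # = 1/2

t = np.linspace(-5.0, 0.0, 6)
Gamma_u = -c * np.exp(A * t) * integral
print(Gamma_u)        # ~ -0.5 e^{t} for t <= 0; by convention Gamma_R(u)(t) = 0 for t > 0
```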
Up to this point, the value of the time-domain formulation of a Hankel operator is not at all clear. The simple act of multiplication and partial fraction expansion in the frequency-domain becomes a little more abstract in the time-domain. However, the value of the time-domain formulation is in its presenting a concrete representation of the Hankel operator and its adjoint. To come up with this representation, we introduce some machinery harking back to our observability and controllability discussion in Sections 2.3.1 and 2.3.2. In particular, we begin to dig into the proof of Theorem 2.19. We resume with the situation where R = R1 + R2 ∈ RL∞ with R1 ∈ RH−2 and R2 ∈ RH+∞. As above, we let Σ1 = (A, b, cᵗ, 0₁) be the canonical minimal realisation of R1. With this notation, we define a map CR: L̄2[0, ∞) → Rⁿ by

  CR(u) = −∫_0^∞ e^{−Aτ} b u(τ) dτ,

and we call this the controllability operator. Similarly, we define OR: Rⁿ → L̄2(−∞, 0] by

  (OR(x))(t) = cᵗ e^{At} x for t ≤ 0,  (OR(x))(t) = 0 for t > 0,

and we call this the observability operator. Note that the adjoint of the controllability operator will be the map CR*: Rⁿ → L̄2[0, ∞) satisfying

  ⟨CR(u), x⟩ = ⟨u, CR*(x)⟩2,  u ∈ L̄2[0, ∞), x ∈ Rⁿ,

and the adjoint of the observability operator will be the map OR*: L̄2(−∞, 0] → Rⁿ satisfying

  ⟨OR(x), y⟩2 = ⟨x, OR*(y)⟩,  y ∈ L̄2(−∞, 0], x ∈ Rⁿ.

In each case, the inner product of equation (13.6) is being used on L̄2(−∞, 0] and L̄2[0, ∞). The following result summarises the value of introducing this notation.

13.24 Proposition  With the above notation, the following statements hold:
(i) Γ̄R = OR ∘ CR and Γ̄R* = CR* ∘ OR*, i.e., the corresponding diagrams commute;
(ii) (CR*(x))(τ) = −bᵗ e^{−Aᵗτ} x for τ ≥ 0, and (CR*(x))(τ) = 0 for τ < 0;
(iii) OR*(y) = ∫_{−∞}^0 e^{Aᵗt} c y(t) dt;
(iv) (Γ̄R*(y))(τ) = −∫_{−∞}^0 bᵗ e^{Aᵗ(t−τ)} c y(t) dt for τ ≥ 0, and (Γ̄R*(y))(τ) = 0 for τ < 0;
(v) CR* is injective.
Proof  (i) It suffices to show that the first identity holds, since if it does, the second will also hold by the definition of the adjoint. However, the first identity may be easily seen to hold by virtue of the very definitions of CR, OR, and Γ̄R.
(ii) This follows from the definition of CR and the inner product in equation (13.6).
(iii) This follows from the definition of OR and the inner product in equation (13.6).
(iv) This follows from part (i), along with parts (ii) and (iii).
(v) It suffices to show that CR*(x) = 0 if and only if x = 0. If (CR*(x))(τ) = −bᵗ e^{−Aᵗτ} x = 0 for all τ then successive differentiation with respect to τ and evaluation at τ = 0 gives

  −bᵗ x = 0, bᵗ Aᵗ x = 0, ..., (−1)ⁿ bᵗ (Aᵗ)ⁿ⁻¹ x = 0.

Since (A, b) is controllable, this implies that x = 0. ■

Thus the above result gives a simple way of relating a Hankel operator and its adjoint to operators with either a domain or a range that is finite-dimensional. In the next section, we shall put this to good use.

13.3.3 Hankel singular values and Schmidt pairs  Recall that a singular value for a linear map A between inner product spaces is, by definition, an eigenvalue of A*A. One readily verifies that singular values are real and nonnegative. Our objective in this section is to find the singular values of a Hankel operator ΓR using our representation Γ̄R in the time-domain.

As in the previous section, for R ∈ RL∞ we write R = R1 + R2 with R1 ∈ RH−2 and R2 ∈ RH+∞. We let Σ1 = (A, b, cᵗ, 0₁) be the complete SISO linear system in controller canonical form so that TΣ1 = R1. We next introduce the controllability Gramian

  CR = ∫_0^∞ e^{−At} b bᵗ e^{−Aᵗt} dt,

and the observability Gramian

  OR = ∫_0^∞ e^{−Aᵗt} c cᵗ e^{−At} dt.

These are each elements of Rⁿˣⁿ. (We use the same symbols for the Gramian matrices as for the operators above; which is meant will be clear from context.) We have previously encountered the controllability Gramian in the proof of Theorem 2.19, and the observability Gramian may be used in a similar manner. However, here we are interested in their relationship with the Hankel operator ΓR. The following result gives this relationship, as well as providing a characterisation of CR and OR in terms of the Liapunov ideas of Section 5.4.

13.25 Proposition  With the above notation, the following statements hold:
(i) (Aᵗ, CR, −bbᵗ) is a Liapunov triple;
(ii) (A, OR, −ccᵗ) is a Liapunov triple;
(iii) CR ∘ CR* = CR (the controllability Gramian);
(iv) OR* ∘ OR = OR (the observability Gramian);
(v) OR is invertible.
Proof  (i) Since −A is Hurwitz, by part (i) of Theorem 5.32 there is a unique symmetric matrix P so that (−Aᵗ, P, bbᵗ) is a Liapunov triple. What's more, the proof of Theorem 5.32 gives P explicitly as

  P = ∫_0^∞ e^{−At} b bᵗ e^{−Aᵗt} dt.

Now one sees trivially that (Aᵗ, P, −bbᵗ) is also a Liapunov triple. This part of the proposition now follows because CR = P.
(ii) The proof here is exactly as for part (i).
(iii) This follows from the characterisations of CR and CR* given in Proposition 13.24.
(iv) This follows from the characterisations of OR and OR* given in Proposition 13.24.
(v) Since OR is square, injectivity is equivalent to invertibility. Suppose that OR is not invertible. Then, since OR is positive-semidefinite, there exists a nonzero x ∈ Rⁿ so that xᵗ OR x = 0, or so that

  ∫_0^∞ xᵗ e^{−Aᵗt} c cᵗ e^{−At} x dt = 0.

This means that cᵗ e^{−At} x = 0 for all t ∈ [0, ∞). Differentiating successively with respect to t at t = 0 gives

  cᵗ x = 0, −cᵗ A x = 0, ..., (−1)ⁿ⁻¹ cᵗ Aⁿ⁻¹ x = 0.

This implies that (A, c) is not observable, a contradiction. It therefore follows that OR is indeed injective. ■
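In computations the two Gramians are most conveniently obtained from the Liapunov equations of Proposition 13.25 rather than from the defining integrals. The sketch below does this for a hypothetical realisation (A, b, c) with σ(A) ⊂ C+ (so that −A is Hurwitz) and cross-checks one Gramian against its integral definition.

```python
# Sketch: the Gramians of Section 13.3.3 from Liapunov equations (hypothetical data).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])          # eigenvalues 1 and 2, both in C+
b = np.array([[0.0], [1.0]])
c = np.array([[1.0], [0.0]])

# C_R solves (-A) X + X (-A)^T = -b b^T;  O_R solves (-A)^T Y + Y (-A) = -c c^T.
C_R = solve_continuous_lyapunov(-A,   -b @ b.T)
O_R = solve_continuous_lyapunov(-A.T, -c @ c.T)

# cross-check C_R against  \int_0^inf e^{-At} b b^T e^{-A^T t} dt  (midpoint rule)
dt, T = 1e-2, 30.0
ts = np.arange(dt / 2, T, dt)
C_num = sum(expm(-A * t) @ b @ b.T @ expm(-A.T * t) for t in ts) * dt
print("max deviation from the integral:", np.max(np.abs(C_R - C_num)))
print(C_R)
print(O_R)
```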
This, then, is interesting as it affords us the possibility of characterising the singular values of the Hankel operator in terms of the eigenvalues of an n × n matrix. This is summarised in the following result.

13.26 Theorem  The nonzero eigenvalues of the following three operators,
(i) Γ*R ∘ ΓR,
(ii) Γ̄*R ∘ Γ̄R, and
(iii) CR OR,
agree.
Proof  That the eigenvalues for Γ*R ΓR and Γ̄*R Γ̄R agree is a simple consequence of Proposition 13.23: the Laplace transform or its inverse will deliver eigenvalues and eigenvectors for either of Γ*R ΓR or Γ̄*R Γ̄R given eigenvalues and eigenvectors for the other.

Now let σ² > 0 be an eigenvalue for Γ̄*R Γ̄R with eigenvector u ∈ L̄2[0, ∞). By part (i) of Proposition 13.24 this means that

  CR* OR* OR CR(u) = σ² u
  ⟹ CR CR* OR* OR CR(u) = σ² CR(u).

If x = CR(u) then x ≠ 0 since otherwise it would follow that σ² = 0. This shows that σ² is an eigenvalue of CR OR with eigenvector x.

Now suppose that σ² ≠ 0 is an eigenvalue for CR OR with eigenvector x. Thus

  CR OR x = σ² x
  ⟹ CR* OR CR OR x = σ² CR* OR x.

If u = CR* OR x then we claim that u ≠ 0. Indeed, from part (v) of Proposition 13.24, CR* is injective, and from part (v) of Proposition 13.25, OR is injective. Thus u = 0 if and only if x = 0. Thus we see that σ² is an eigenvalue for Γ̄*R Γ̄R with eigenvector u. ■

Thus we have a characterisation of all nonzero singular values of the Hankel operator as eigenvalues of an n × n matrix. This is something of a coup. We shall suppose that the nonzero singular values are arranged in descending order σ1 ≥ σ2 ≥ ··· ≥ σk, so that σ1 denotes the largest of the singular values. We shall call σ1, ..., σk the Hankel singular values for the Hankel operator ΓR.

Now we wish to talk about the "size" of a Hankel operator ΓR. Since ΓR is a linear map between two inner product spaces—from RH+2 to RH−2—we may simply define its norm in the same manner in which we defined the induced signal norms in Definition 5.19. Thus we define

  ‖ΓR‖ = sup { ‖ΓR(Q)‖2 / ‖Q‖2 | Q ∈ RH+2, Q not zero }.

This is called the Hankel norm of the Hankel operator ΓR. The following result follows easily from Theorem 13.26 if one knows just a little more operator theory than is really within the confines of this course. However, it is an essential result for us.

13.27 Corollary  If σ1 is the largest Hankel singular value, then ‖ΓR‖ = σ1.

Next, let us look a little more closely at eigenvectors induced by singular values. Thus we let σ² be a nonzero eigenvalue of Γ̄*R Γ̄R with eigenvector u1 ∈ L̄2[0, ∞). Now define u2 ∈ L̄2(−∞, 0] by u2 = σ⁻¹ Γ̄R(u1). Then one readily computes

  Γ̄R(u1) = σ u2,
  Γ̄*R(u2) = σ u1.

When an operator and its adjoint possess the same eigenvalue in this manner, the resulting eigenvectors (u1, u2) are called a σ-Schmidt pair for the operator. Of course, if Rj is the Laplace transform of uj, j = 1, 2, then we have

  ΓR(R1) = σ R2,
  Γ*R(R2) = σ R1,

so that (R1, R2) ∈ RH+2 × RH−2 are a σ-Schmidt pair for ΓR. The matter of finding Schmidt pairs for Hankel operators is a simple enough proposition as one may use Theorem 13.26. Indeed, suppose that σ² > 0 is an eigenvalue for CR OR with eigenvector x. Then a σ-Schmidt pair for Γ̄R is readily verified to be given by (u1, u2) where

  u1 = σ⁻¹ CR*(OR x),
  u2 = OR(x).
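Numerically, then, the Hankel singular values are square roots of the eigenvalues of the n × n matrix CR OR, and the eigenvector belonging to the largest one generates a σ1-Schmidt pair. The sketch below continues the hypothetical realisation used earlier; it is an illustration of Theorem 13.26 under those assumptions, not an example from the text.

```python
# Sketch: Hankel singular values as sqrt of the eigenvalues of C_R O_R (Theorem 13.26).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0], [0.0]])

C_R = solve_continuous_lyapunov(-A,   -b @ b.T)      # controllability Gramian
O_R = solve_continuous_lyapunov(-A.T, -c @ c.T)      # observability Gramian

lam, X = np.linalg.eig(C_R @ O_R)
sigma = np.sort(np.sqrt(lam.real))[::-1]             # Hankel singular values, descending
print("Hankel singular values:", sigma)

# eigenvector x for sigma_1^2 gives the sigma_1-Schmidt pair:
#   u1(tau) = -(1/sigma_1) b^T exp(-A^T tau) (O_R x) for tau >= 0,
#   u2(t)   =  c^T exp(A t) x                        for t   <= 0.
x = X[:, np.argmax(lam.real)].real
print("eigenvector x:", x)
```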
13.3.4 Nehari's Theorem  In this section we state and prove a famous theorem of Nehari [1957]. This theorem is just one in a sweeping research effort in "Hankel norm approximation," with key contributions being made in a sequence of papers by Adamjan, Arov and Krein [1968a, 1968b, 1971]. Our interest in this section is in a special version of this rather general work, as we are only interested in rational functions, whereas Nehari was interested in general H∞ functions.

13.28 Theorem  Let R0 ∈ RH−∞, let σ1 > 0 be the largest Hankel singular value for R0, and let (R1, R2) ∈ RH+2 × RH−2 be a σ1-Schmidt pair. Then

  inf { ‖R0 − R‖∞ | R ∈ RH+∞ } = σ1,

and if R ∈ RH+∞ satisfies R1(R0 − R) = σ1 R2 then ‖R0 − R‖∞ = σ1.
Proof  First let us show that σ1 is a lower bound for ‖R0 − R‖∞. For any R ∈ RH+∞ we compute, using part (i) of Theorem 5.21,

  ‖R0 − R‖∞ = sup { ‖(R0 − R)Q‖2 / ‖Q‖2 | Q ∈ RH+2, Q not zero }
            ≥ sup { ‖Π−((R0 − R)Q)‖2 / ‖Q‖2 | Q ∈ RH+2, Q not zero }
            = sup { ‖Π−(R0 Q)‖2 / ‖Q‖2 | Q ∈ RH+2, Q not zero }
            = ‖ΓR0‖.

Now let (R1, R2) be a σ1-Schmidt pair and write Q = (R0 − R)R1 for R ∈ RH+∞. Since R1 ∈ RH+2, ΓR0(R1) ∈ RH−2. Since R R1 ∈ RH+2, Π−(Q) = Π−(R0 R1) = ΓR0(R1). Therefore, we compute

  0 ≤ ‖Q − ΓR0(R1)‖2²
    = ‖Q‖2² + ⟨ΓR0(R1), ΓR0(R1)⟩2 − 2 ⟨Q, ΓR0(R1)⟩2
    = ‖Q‖2² + ⟨ΓR0(R1), ΓR0(R1)⟩2 − 2 ⟨Π−(Q), ΓR0(R1)⟩2
    = ‖Q‖2² − ⟨ΓR0(R1), ΓR0(R1)⟩2
    = ‖Q‖2² − ⟨R1, Γ*R0 ΓR0(R1)⟩2
    = ‖Q‖2² − σ1² ⟨R1, R1⟩2
    = ‖Q‖2² − σ1² ‖R1‖2²
    ≤ ‖R0 − R‖∞² ‖R1‖2² − σ1² ‖R1‖2²
    = (‖R0 − R‖∞² − σ1²) ‖R1‖2².

In particular, if ‖R0 − R‖∞ = σ1 then the leftmost and rightmost members of this chain are both zero. This shows that Q = ΓR0(R1), or, equivalently,

  (R0 − R)R1 = ΓR0(R1) = σ1 R2,

as claimed. ■
drawback in that on occasion a hack will have to be employed. Nonetheless, the process is V
Q∗
systematic enough.
Let us come right out and state the algorithm. has no poles in +.


8. Define
13.29 Model matching by Hankel norm approximation Given T1 , T2 ∈ RH+
∞. R1 R2∗ + S1 S2∗
U1 = V, U2 = QV.
Q∗
13.4 A robust performance example 9. Define U4 = [ 12 − U3 ]+ .
10. Define
13.5 Other problems involving H∞ methods T1 =
U1
, T2 =
U2
.
U4 U4
It turns out that the robust performance problem is only one of a number of problems
11. Let θ be a solution to Problem 13.3.
falling under the umbrella of H∞ control. In this section we briefly indicate some other
problems whose solution can be reduced to a model matching problem, and thus whose 12. If kT1 − θT2k∞ ≥ 1 then Problem 13.2 has no solution.
solution can be obtained by the methods in this chapter. 13. The controller
ρ1 + θP2
RC = ,
ρ2 − θP1
is a solution to Problem 13.2. 
Finish E13.3 Möbius functions
+
E13.4 Show that RH− 2 and RH2 are orthogonal with respect to the inner product on RL 2
defined in equation (13.5).
