
Solution Manual for Adaptive Control

Second Edition

Karl Johan Åström Björn Wittenmark

Preface

This Solution Manual contains solutions to selected problems in the second edition of Adaptive Control, published by Addison-Wesley 1995, ISBN 0-201-55866-1.

PROBLEM SOLUTIONS

SOLUTIONS TO CHAPTER 1

1.5 Linearization of the valve shows that

$$v = 4v_0^3 u$$

The loop transfer function is then

$$G_0(s)\,G_{PI}(s)\,4v_0^3$$

where $G_{PI}$ is the transfer function of a PI controller, i.e.

$$G_{PI}(s) = K\left(1 + \frac{1}{sT_i}\right)$$

The characteristic equation for the closed-loop system is

$$sT_i(s+1)^3 + 4Kv_0^3(sT_i + 1) = 0$$

With $K = 0.15$ and $T_i = 1$ we get

$$(s+1)\left[s(s+1)^2 + 0.6v_0^3\right] = 0$$

$$(s+1)\left(s^3 + 2s^2 + s + 0.6v_0^3\right) = 0$$

The root locus of this equation with respect to $v_0$ is sketched in Fig. 1. According to the Routh-Hurwitz criterion the critical case is

$$0.6v_0^3 = 2 \quad\Rightarrow\quad v_0 = \sqrt[3]{\frac{10}{3}} \approx 1.49$$

Since the plant $G_0$ has unit static gain and the controller has integral action, the steady-state value $v_0$ equals the set point $y_r$. The closed-loop system is stable for $y_r = u_c = 0.3$ and $1.1$ but unstable for $y_r = u_c = 5.1$. Compare with Fig. 1.9.


Figure 1. Root locus in Problem 1.5.
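A quick numerical check of the critical operating point is easy to script. The sketch below (Python with numpy; an addition, not part of the original solution) evaluates the roots of $s^3 + 2s^2 + s + 0.6v_0^3$ at the operating points discussed above; at $v_0 \approx 1.49$ the largest real part crosses zero.

```python
import numpy as np

# Roots of the closed-loop characteristic polynomial
# s^3 + 2 s^2 + s + 0.6 v0^3 for the operating points discussed above.
for v0 in (0.3, 1.1, 1.49, 5.1):
    roots = np.roots([1, 2, 1, 0.6 * v0**3])
    max_re = max(r.real for r in roots)
    print(f"v0 = {v0:4.2f}: max Re(s) = {max_re:+.3f} "
          f"({'stable' if max_re < 0 else 'unstable'})")
```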

1.6 Tune the controller using the Ziegler-Nichols closed-loop method. The frequency $\omega_u$ where the process has 180° phase lag is first determined. The controller parameters are then given by Table 8.2 on page 382, where we have

$$K_u = \frac{1}{|G_0(i\omega_u)|}$$

$$G_0(s) = \frac{e^{-s/q}}{1 + s/q}$$

$$\arg G_0(i\omega) = -\frac{\omega}{q} - \arctan\frac{\omega}{q} = -\pi$$

 q      ω_u    |G₀(iω_u)|     K     T_i
 0.5    1.0      0.45         1     5.24
 1      2.0      0.45         1     2.62
 2      4.1      0.45         1     1.3

A simulation of the system obtained when the controller is tuned for the smallest flow q = 0.5 is shown in Fig. 2. The Ziegler-Nichols method is not the best tuning method in this case. In Fig. 3 we show results for a



Figure 2. Simulation in Problem 1.6. Process output and control signal are shown for q = 0.5 (full), q = 1 (dashed), and q = 2 (dotted). The controller is designed for q = 0.5.

a controller designed for q = 1, and in Fig. 4 when the controller is designed for q = 2.
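The table above can be reproduced numerically. The sketch below (Python; an addition, not part of the original solution) solves $\omega/q + \arctan(\omega/q) = \pi$ for the ultimate frequency by Newton iteration and applies the Ziegler-Nichols PI rules $K = 0.45K_u$, $T_i = T_u/1.2$; small differences from the table come from rounding $\omega_u$.

```python
import numpy as np

def zn_pi(q):
    # Ultimate frequency: arg G0(i w) = -w/q - arctan(w/q) = -pi.
    # Solve x + arctan(x) = pi for x = w_u/q by Newton iteration.
    x = 2.0
    for _ in range(20):
        x -= (x + np.arctan(x) - np.pi) / (1 + 1 / (1 + x**2))
    wu = x * q
    Ku = np.sqrt(1 + x**2)            # Ku = 1/|G0(i wu)|
    Tu = 2 * np.pi / wu               # ultimate period
    return wu, 0.45 * Ku, Tu / 1.2    # Ziegler-Nichols PI: K, Ti

for q in (0.5, 1.0, 2.0):
    wu, K, Ti = zn_pi(q)
    print(f"q = {q}: w_u = {wu:.2f}, K = {K:.2f}, Ti = {Ti:.2f}")
```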

1.7 Introducing the feedback

$$u = -k_2 y_2$$

the system becomes

$$\frac{dx}{dt} = \left( \begin{pmatrix} -1 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & -1 \end{pmatrix} - k_2 \begin{pmatrix} 0 \\ 2 \\ 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 1 \end{pmatrix} \right) x + \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} u_1, \qquad y_1 = \begin{pmatrix} 1 & 1 & 0 \end{pmatrix} x$$

The transfer function from $u_1$ to $y_1$ is

$$G(s) = \begin{pmatrix} 1 & 1 & 0 \end{pmatrix} \begin{pmatrix} s+1 & 0 & 0 \\ 2k_2 & s+3 & 2k_2 \\ k_2 & 0 & s+1+k_2 \end{pmatrix}^{-1} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \frac{s^2 + (4-k_2)s + 3 + k_2}{(s+1)(s+3)(s+1+k_2)}$$



Figure 3. Simulation in Problem 1.6. Process output and control signal are shown for q = 0.5 (full), q = 1 (dashed), and q = 2 (dotted). The controller is designed for q = 1.

The static gain is

$$G(0) = \frac{3 + k_2}{3(1 + k_2)}$$
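As a consistency check, the static gain can be computed numerically from a state-space realization. The sketch below (Python; the matrices encode the closed-loop system as reconstructed above, so treat them as an assumption rather than the book's exact problem statement) evaluates $G(0) = -CA^{-1}B$ and compares it with the formula.

```python
import numpy as np

def dc_gain(k2):
    # Closed-loop system matrices as reconstructed in the solution above.
    A = np.array([[-1.0, 0.0, 0.0],
                  [0.0, -3.0, 0.0],
                  [0.0, 0.0, -1.0]]) - k2 * np.outer([0, 2, 1], [1, 0, 1])
    B = np.array([1.0, 0.0, 0.0])
    C = np.array([1.0, 1.0, 0.0])
    return -C @ np.linalg.solve(A, B)          # G(0) = -C A^{-1} B

for k2 in (0.5, 2.0, 10.0):
    print(f"k2 = {k2:4.1f}: G(0) = {dc_gain(k2):.4f}, "
          f"(3+k2)/(3(1+k2)) = {(3 + k2) / (3 * (1 + k2)):.4f}")
```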



Figure 4. Simulation in Problem 1.6. Process output and control signal are shown for q = 0.5 (full), q = 1 (dashed), and q = 2 (dotted). The controller is designed for q = 2.


SOLUTIONS TO CHAPTER 2

2.1 The function V can be written as

$$V(x_1, \dots, x_n) = \sum_{i,j=1}^{n} x_i x_j\,\frac{a_{ij} + a_{ji}}{2} + \sum_{i=1}^{n} b_i x_i + c$$

Taking the derivative with respect to $x_i$ we get

$$\frac{\partial V}{\partial x_i} = \sum_{j=1}^{n} (a_{ij} + a_{ji})x_j + b_i$$

In vector notation this can be written as

$$\operatorname{grad}_x V(x) = (A + A^T)x + b$$
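The gradient formula is easy to verify by finite differences. A minimal sketch (Python with numpy; the random A, b, c are arbitrary test data, not from the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, b, c = rng.normal(size=(n, n)), rng.normal(size=n), 1.0
x = rng.normal(size=n)

def V(z):
    return z @ A @ z + b @ z + c      # z^T A z equals sum_ij z_i z_j a_ij

grad = (A + A.T) @ x + b              # the formula derived above

# Central-difference approximation of the gradient.
h = 1e-6
num = np.array([(V(x + h*e) - V(x - h*e)) / (2*h) for e in np.eye(n)])
print(np.max(np.abs(num - grad)))     # should be ~1e-9
```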

2.2 The model is

$$y_t = \varphi_t^T\theta + e_t = \begin{pmatrix} u_t & u_{t-1} \end{pmatrix} \begin{pmatrix} b_0 \\ b_1 \end{pmatrix} + e_t$$

The least squares estimate is given as the solution of the normal equations,

$$\hat\theta = (\Phi^T\Phi)^{-1}\Phi^T Y, \qquad \Phi^T\Phi = \begin{pmatrix} \sum u_t^2 & \sum u_t u_{t-1} \\ \sum u_t u_{t-1} & \sum u_{t-1}^2 \end{pmatrix}$$

(a) The input is a unit step,

$$u_t = \begin{cases} 1 & t \ge 1 \\ 0 & \text{otherwise} \end{cases}$$

Evaluating the sums we get

$$\hat\theta = \begin{pmatrix} N & N-1 \\ N-1 & N-1 \end{pmatrix}^{-1} \begin{pmatrix} \sum_{t=1}^{N} u_t y_t \\ \sum_{t=1}^{N} u_{t-1} y_t \end{pmatrix} = \begin{pmatrix} y_1 \\ \dfrac{1}{N-1}\sum_{t=2}^{N} (y_t - y_1) \end{pmatrix}$$

The estimation error is

$$\hat\theta - \theta = (\Phi^T\Phi)^{-1}\Phi^T e = \begin{pmatrix} e_1 \\ \dfrac{1}{N-1}\sum_{t=2}^{N} (e_t - e_1) \end{pmatrix}$$


Hence

$$E(\hat\theta - \theta)(\hat\theta - \theta)^T = \sigma^2 (\Phi^T\Phi)^{-1} = \sigma^2 \begin{pmatrix} 1 & -1 \\ -1 & \dfrac{N}{N-1} \end{pmatrix} \to \sigma^2 \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$$

when $N \to \infty$. Notice that the variances of the estimates do not go to zero as $N \to \infty$. Consider, however, the estimate of $b_0 + b_1$.

$$E(\hat b_0 + \hat b_1 - b_0 - b_1)^2 = \begin{pmatrix} 1 & 1 \end{pmatrix} \sigma^2 \begin{pmatrix} 1 & -1 \\ -1 & \dfrac{N}{N-1} \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \frac{\sigma^2}{N-1}$$

With a step input it is thus possible to determine the combination $b_0 + b_1$ consistently. The individual values of $b_0$ and $b_1$ can, however, not be determined consistently.

(b) The input $u$ is white noise with $Eu^2 = 1$, independent of $e$. Then

$$Eu_t^2 = 1, \qquad Eu_t u_{t-1} = 0$$

so that

$$\Phi^T\Phi \approx \begin{pmatrix} N & 0 \\ 0 & N-1 \end{pmatrix}, \qquad \operatorname{cov}(\hat\theta) = \sigma^2 E(\Phi^T\Phi)^{-1} = \sigma^2 \begin{pmatrix} \dfrac{1}{N} & 0 \\ 0 & \dfrac{1}{N-1} \end{pmatrix}$$

In this case it is thus possible to determine both parameters consistently.
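A Monte Carlo experiment illustrates both cases. The sketch below (Python; the values $b_0 = 1$, $b_1 = 0.5$ and unit noise variance are arbitrary choices) shows that with a step input $\operatorname{var}(\hat b_0)$ stays near $\sigma^2$ while $\operatorname{var}(\hat b_0 + \hat b_1)$ shrinks like $\sigma^2/(N-1)$, whereas a white-noise input makes both variances small.

```python
import numpy as np

def ls_estimate(u, e, b0=1.0, b1=0.5):
    y = b0 * u[1:] + b1 * u[:-1] + e          # y_t = b0 u_t + b1 u_{t-1} + e_t
    Phi = np.column_stack([u[1:], u[:-1]])
    return np.linalg.lstsq(Phi, y, rcond=None)[0]

rng = np.random.default_rng(1)
N, runs = 200, 2000
step = np.concatenate([[0.0], np.ones(N)])    # u_t = 1 for t >= 1, else 0
E_step = np.array([ls_estimate(step, rng.normal(size=N)) for _ in range(runs)])
E_wn = np.array([ls_estimate(rng.normal(size=N + 1), rng.normal(size=N))
                 for _ in range(runs)])

for name, est in (("step", E_step), ("white noise", E_wn)):
    print(f"{name:12s} var(b0_hat) = {est[:, 0].var():.3f}   "
          f"var(b0_hat + b1_hat) = {est.sum(axis=1).var():.4f}")
```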

2.3 Data generating process:

$$y(t) = b_0 u(t) + b_1 u(t-1) + e(t) = \varphi^T(t)\theta_0 + \bar e(t)$$

where

$$\varphi^T(t) = u(t), \qquad \theta_0 = b_0, \qquad \bar e(t) = b_1 u(t-1) + e(t)$$

Model:

$$\hat y(t) = \hat b u(t)$$

or

$$y(t) = \hat b u(t) + \varepsilon(t) = \varphi^T(t)\hat\theta + \varepsilon(t)$$

where

$$\varepsilon(t) = y(t) - \hat y(t)$$

The least squares estimate is given by

$$\Phi^T\Phi(\hat\theta - \theta_0) = \Phi^T E_d, \qquad E_d = \begin{pmatrix} \bar e(1) \\ \vdots \\ \bar e(N) \end{pmatrix}$$


(a)

$$\frac{1}{N}\Phi^T\Phi = \frac{1}{N}\sum_{1}^{N} u^2(t) \to Eu^2$$

$$\frac{1}{N}\Phi^T E_d = \frac{1}{N}\sum_{1}^{N} u(t)\bar e(t) = \frac{1}{N}\sum_{1}^{N} u(t)\left(b_1 u(t-1) + e(t)\right) \to b_1 E\left(u(t)u(t-1)\right) + E\left(u(t)e(t)\right)$$

Hence

$$\hat b \to \left(Eu^2\right)^{-1}\left(b_1 E(u(t)u(t-1)) + E(u(t)e(t))\right) \quad \text{as } N \to \infty$$

With the step input

$$u(t) = \begin{cases} 1 & t \ge 1 \\ 0 & t < 1 \end{cases}$$

we have $E(u^2) = 1$, $E(u(t)u(t-1)) = 1$ and $E(u(t)e(t)) = 0$, so

$$\hat b \to b_0 + b_1$$

i.e. $\hat b$ converges to the stationary gain.
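The two limits are easy to reproduce in simulation. A sketch (Python; $b_0 = 1$, $b_1 = 0.5$ are arbitrary test values) fitting the one-parameter model $\hat y = \hat b u$ to data from the two-parameter process:

```python
import numpy as np

rng = np.random.default_rng(2)
b0, b1, N = 1.0, 0.5, 100_000
for name, u in (("step", np.concatenate([[0.0], np.ones(N)])),
                ("white noise", rng.normal(size=N + 1))):
    y = b0 * u[1:] + b1 * u[:-1] + rng.normal(size=N)
    b_hat = (u[1:] @ y) / (u[1:] @ u[1:])     # scalar least squares
    print(f"{name:12s} b_hat = {b_hat:.3f}")  # step -> b0+b1, noise -> b0
```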

(b) The input $u(t) \in N(0, \sigma)$ is white noise. Then

$$Eu^2 = \sigma^2, \qquad Eu(t)u(t-1) = 0, \qquad Eu(t)e(t) = 0$$

Hence

$$\hat b \to b_0 \quad \text{as } N \to \infty$$

2.6 The model is

$$y_t = \varphi_t^T\theta + \varepsilon_t = \begin{pmatrix} -y_{t-1} & u_{t-1} \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} + e_t + ce_{t-1}$$

The least squares estimate is given by the solution to the normal equation (2.5). The estimation error is

$$\hat\theta - \theta = (\Phi^T\Phi)^{-1}\Phi^T\varepsilon = \begin{pmatrix} \sum y_{t-1}^2 & -\sum y_{t-1}u_{t-1} \\ -\sum y_{t-1}u_{t-1} & \sum u_{t-1}^2 \end{pmatrix}^{-1} \begin{pmatrix} -\sum y_{t-1}e_t - c\sum y_{t-1}e_{t-1} \\ \sum u_{t-1}e_t + c\sum u_{t-1}e_{t-1} \end{pmatrix}$$

Notice that $\Phi^T$ and $\varepsilon$ are not independent: $u_t$ and $e_t$ are independent, but $y_t$ depends on $e_t, e_{t-1}, e_{t-2}, \dots$ and on $u_{t-1}, u_{t-2}, \dots$ Taking mean values we get

$$E(\hat\theta - \theta) \approx E(\Phi^T\Phi)^{-1} E(\Phi^T\varepsilon)$$

To evaluate this expression we calculate

$$E\begin{pmatrix} \sum y_{t-1}^2 & -\sum y_{t-1}u_{t-1} \\ -\sum y_{t-1}u_{t-1} & \sum u_{t-1}^2 \end{pmatrix} = N \begin{pmatrix} Ey_t^2 & 0 \\ 0 & Eu_t^2 \end{pmatrix}$$

and

$$E\begin{pmatrix} -\sum y_{t-1}e_t - c\sum y_{t-1}e_{t-1} \\ \sum u_{t-1}e_t + c\sum u_{t-1}e_{t-1} \end{pmatrix} = \begin{pmatrix} -cNEy_{t-1}e_{t-1} \\ 0 \end{pmatrix}$$

Since

$$Ey_{t-1}e_{t-1} = E\left((-ay_{t-2} + bu_{t-2} + e_{t-1} + ce_{t-2})e_{t-1}\right) = \sigma^2$$

we get, using

$$y_t = \frac{b}{q+a}u_t + \frac{q+c}{q+a}e_t$$

$$Ey_t^2 = \frac{b^2}{1-a^2} + \frac{1 - 2ac + c^2}{1-a^2}\sigma^2, \qquad Eu_t^2 = 1, \qquad Ee_t^2 = \sigma^2$$

the asymptotic bias

$$E(\hat a - a) = \frac{-c(1-a^2)\sigma^2}{b^2 + (1 - 2ac + c^2)\sigma^2}, \qquad E(\hat b - b) = 0$$
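The bias formula can be checked by simulation. The sketch below (Python; the parameter values are arbitrary test choices, not from the problem) generates data from $y_t = -ay_{t-1} + bu_{t-1} + e_t + ce_{t-1}$ and compares the empirical bias of $\hat a$ with the expression above.

```python
import numpy as np

a, b, c, sigma2, N = 0.5, 1.0, 0.3, 1.0, 200_000
rng = np.random.default_rng(3)
u = rng.normal(size=N)
e = rng.normal(scale=np.sqrt(sigma2), size=N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a * y[t-1] + b * u[t-1] + e[t] + c * e[t-1]

Phi = np.column_stack([-y[:-1], u[:-1]])       # regressors (-y_{t-1}, u_{t-1})
a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
bias = -c * (1 - a**2) * sigma2 / (b**2 + (1 - 2*a*c + c**2) * sigma2)
print(f"a_hat - a = {a_hat - a:+.4f}  (formula: {bias:+.4f})")
print(f"b_hat - b = {b_hat - b:+.4f}  (formula: 0)")
```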

2.8 The model is

$$y(t) = a + bt + e(t) = \varphi^T\theta + e(t)$$

where

$$\varphi^T = \begin{pmatrix} 1 & t \end{pmatrix}, \qquad \theta = \begin{pmatrix} a \\ b \end{pmatrix}$$

According to Theorem 2.1 the solution is given by equation (2.6), i.e.

$$\hat\theta = (\Phi^T\Phi)^{-1}\Phi^T Y$$

where

$$\Phi^T = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & 2 & 3 & \cdots & N \end{pmatrix}, \qquad Y = \begin{pmatrix} y(1) \\ y(2) \\ \vdots \\ y(N) \end{pmatrix}$$

Hence

$$\hat\theta = \begin{pmatrix} N & \sum_{t=1}^{N} t \\ \sum_{t=1}^{N} t & \sum_{t=1}^{N} t^2 \end{pmatrix}^{-1} \begin{pmatrix} \sum_{t=1}^{N} y(t) \\ \sum_{t=1}^{N} ty(t) \end{pmatrix} = \begin{pmatrix} \dfrac{2}{N(N-1)}\left((2N+1)s_0 - 3s_1\right) \\[2mm] \dfrac{6}{N(N+1)(N-1)}\left(-(N+1)s_0 + 2s_1\right) \end{pmatrix}$$


where we have made use of

$$\sum_{t=1}^{N} t = \frac{N(N+1)}{2}, \qquad \sum_{t=1}^{N} t^2 = \frac{N(N+1)(2N+1)}{6}$$

and introduced

$$s_0 = \sum_{t=1}^{N} y(t), \qquad s_1 = \sum_{t=1}^{N} ty(t)$$

The covariance of the estimate is given by

$$\operatorname{cov}(\hat\theta) = \sigma^2(\Phi^T\Phi)^{-1} = \frac{12\sigma^2}{N(N+1)(N-1)} \begin{pmatrix} \dfrac{(N+1)(2N+1)}{6} & -\dfrac{N+1}{2} \\[2mm] -\dfrac{N+1}{2} & 1 \end{pmatrix}$$

Notice that the variance of $\hat b$ decreases as $N^{-3}$ for large $N$ but the variance of $\hat a$ decreases as $N^{-1}$. The reason for this is that the regressor associated with $a$ is 1 while the regressor associated with $b$ is $t$. Notice that there are better numerical methods to solve for $\hat\theta$!
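The different convergence rates can be seen directly in a Monte Carlo experiment. A sketch (Python; $a = 1$, $b = 0.1$, $\sigma = 1$ are arbitrary test values):

```python
import numpy as np

rng = np.random.default_rng(4)
a, b, runs = 1.0, 0.1, 5000
for N in (50, 100, 200):
    t = np.arange(1.0, N + 1)
    Phi = np.column_stack([np.ones(N), t])
    est = np.array([np.linalg.lstsq(Phi, a + b*t + rng.normal(size=N),
                                    rcond=None)[0] for _ in range(runs)])
    va, vb = est.var(axis=0)
    k = 12 / (N * (N + 1) * (N - 1))        # common factor in cov(theta_hat)
    print(f"N = {N:3d}: var(a_hat) = {va:.2e} "
          f"(theory {k*(N+1)*(2*N+1)/6:.2e}), "
          f"var(b_hat) = {vb:.2e} (theory {k:.2e})")
```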

2.17

(a) The following derivation gives a formula for the asymptotic LS estimate:

$$\hat b = (\Phi^T\Phi)^{-1}\Phi^T Y = \left(\sum_{k=1}^{N} u(k-1)^2\right)^{-1} \sum_{k=1}^{N} u(k-1)\bar y(k)$$

$$= \left(\frac{1}{N}\sum_{k=1}^{N} u(k-1)^2\right)^{-1} \frac{1}{N}\sum_{k=1}^{N} u(k-1)\bar y(k) \to E\left(u(k-1)^2\right)^{-1} E\left(u(k-1)\bar y(k)\right) \quad \text{as } N \to \infty$$

The equations for the closed-loop system are

$$u(k) = K\left(u_c(k) - y(k)\right), \qquad \bar y(k) = y(k) + ay(k-1) = bu(k-1)$$

The signals $u(k)$ and $y(k)$ are stationary, since the controller gain is chosen so that the closed-loop system is stable. It then follows that $E(u(k-1)^2) = E(u(k)^2)$ and $E(u(k-1)\bar y(k)) = E(bu(k-1)^2) = bE(u(k)^2)$ exist, and the asymptotic LS estimate becomes $\hat b = b$, i.e. we have an unbiased estimate.


Figure 5. The system redrawn.

(b) Similarly to (a), we get

$$\left(\frac{1}{N}\sum_{k=1}^{N} u(k-1)^2\right)^{-1} \frac{1}{N}\sum_{k=1}^{N} u(k-1)\bar y(k) \to \left(u^2(k-1)\right)_0^{-1}\left(u(k-1)\bar y(k)\right)_0 \quad \text{as } N \to \infty$$

where $(\cdot)_0$ denotes the stationary value of the argument. We have

$$\left(u^2(k-1)\right)_0 = \left((u(k))_0\right)^2, \qquad \left(u(k-1)\bar y(k)\right)_0 = (u(k))_0\, b\left((u(k))_0 + d_0\right)$$

$$(u(k))_0 = H_{ud}(1)d_0 = -\frac{Kb}{1 + a + Kb}\,d_0$$

and the asymptotic LS estimate becomes

$$\hat b = \left(u^2(k-1)\right)_0^{-1}\left(u(k-1)\bar y(k)\right)_0 = b\left(1 + \frac{d_0}{(u(k))_0}\right) = b\left(1 - \frac{1 + a + Kb}{Kb}\right) = -\frac{1+a}{K}$$

How do we interpret this result? The system may be redrawn as in Figure 5. Since $u_c = 0$, we have that $u = -\frac{Kq}{q+a}\bar y$, and we can regard $\frac{Kq}{q+a}$ as the controller for the system in Figure 5. It is then obvious that we have estimated the negative inverse of the static controller gain.

(c) Introduction of high-pass regressor filters as in Figure 6 eliminates, or at least reduces, the influence of the disturbance $d$ on the estimate of $b$. One choice of regressor filter could be $H_f(q^{-1}) = 1 - q^{-1}$, i.e. a differentiator. Another possibility would be to introduce a constant in

12

Problem Solutions

Estimator H f H f d u u c y b Σ K Σ q
Estimator
H f
H f
d
u
u c
y
b
Σ
K
Σ q + a
− 1
Figure 6.
Introduction of regressor filters.

the regressor and then estimate both $b$ and $bd$. The regression model is in this case

$$\bar y(t) = \begin{pmatrix} u(t-1) & 1 \end{pmatrix} \begin{pmatrix} b \\ bd \end{pmatrix} = \varphi(t)^T\theta$$
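A short simulation shows both the bias in (b) and the remedy in (c). The sketch below (Python; $a$, $b$, $K$, $d$ are arbitrary values chosen so that $|a + Kb| < 1$) runs the loop with $u_c = 0$ and a constant load disturbance, then repeats the estimate with a constant added to the regressor.

```python
import numpy as np

a, b, K, d, N = 0.5, 1.0, 0.3, 1.0, 5000
y = np.zeros(N)
u = np.zeros(N)
for k in range(1, N):
    y[k] = -a * y[k-1] + b * (u[k-1] + d)   # plant with load disturbance d
    u[k] = -K * y[k]                        # u(k) = K(u_c(k) - y(k)), u_c = 0

ybar = y[1:] + a * y[:-1]                   # ybar(k) = y(k) + a y(k-1)
u1 = u[:-1]

b_naive = (u1 @ ybar) / (u1 @ u1)
print(f"naive LS: b_hat = {b_naive:.3f},  -(1+a)/K = {-(1 + a) / K:.3f}")

# Remedy (c): constant regressor, estimate both b and b*d.
Phi = np.column_stack([u1, np.ones_like(u1)])
b_fix, bd_fix = np.linalg.lstsq(Phi, ybar, rcond=None)[0]
print(f"with constant regressor: b_hat = {b_fix:.3f}, (bd)_hat = {bd_fix:.3f}")
```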

2.18 The equations for recursive least squares are

$$y(t) = \varphi^T(t-1)\theta_0$$

$$\hat\theta(t) = \hat\theta(t-1) + K(t)\varepsilon(t)$$

$$\varepsilon(t) = y(t) - \varphi^T(t-1)\hat\theta(t-1)$$