
EL2620 Nonlinear Control

Lecture notes
Karl Henrik Johansson, Bo Wahlberg and Elling W. Jacobsen
This revision December 2011

Automatic Control
KTH, Stockholm, Sweden

Preface
Many people have contributed to these lecture notes in nonlinear control.
Originally it was developed by Bo Bernhardsson and Karl Henrik Johansson,
and later revised by Bo Wahlberg and myself. Contributions and comments
by Mikael Johansson, Ola Markusson, Ragnar Wallin, Henning Schmidt,
Krister Jacobsson, Björn Johansson and Torbjörn Nordling are gratefully
acknowledged.

Elling W. Jacobsen
Stockholm, December 2011

Lecture 1

EL2620

Practical information
Course outline
Linear vs Nonlinear Systems
Nonlinear differential equations

EL2620 Nonlinear Control

Automatic Control Lab, KTH
7.5 credits, lp 2
28h lectures, 28h exercises, 3 home-works

Instructors

Elling W. Jacobsen, lectures and course responsible: jacobsen@kth.se
Per Hägg, Farhad Farokhi, teaching assistants: pehagg@kth.se, farakhi@kth.se
Hanna Holmqvist, course administration: hanna.holmqvist@ee.kth.se
STEX (entrance floor, Osquldasv. 10), course material: stex@s3.kth.se

EL2620

Course Goal

To provide participants with a solid theoretical foundation of nonlinear control systems combined with a good engineering understanding.

You should after the course be able to

understand common nonlinear control phenomena
apply the most powerful nonlinear analysis methods
use some practical nonlinear control design methods

Today's Goal

You should be able to

Describe distinctive phenomena in nonlinear dynamic systems
Mathematically describe common nonlinearities in control systems
Transform differential equations to first-order form
Derive equilibrium points

EL2620

Course Outline

Introduction: nonlinear models and phenomena, computer simulation (L1-L2)
Feedback analysis: linearization, stability theory, describing functions (L3-L6)
Control design: compensation, high-gain design, Lyapunov methods (L7-L10)
Alternatives: gain scheduling, optimal control, neural networks, fuzzy control (L11-L13)
Summary (L14)

Course Information

All info and handouts are available at the course homepage at KTH Social.

Material

Textbook: Khalil, Nonlinear Systems, Prentice Hall, 3rd ed., 2002. Optional but highly recommended. Only references to Khalil will be given.
Alternative textbooks (decreasing mathematical rigour): Sastry, Nonlinear Systems: Analysis, Stability and Control; Vidyasagar, Nonlinear Systems Analysis; Slotine & Li, Applied Nonlinear Control; Glad & Ljung, Reglerteori, flervariabla och olinjära metoder.
Lecture notes: Copies of transparencies (from previous year)
Exercises: Class room and home exercises. Two course compendia sold by STEX.
Homeworks: 3 computer exercises to hand in (and review)
Software: Matlab

Compulsory course items

3 homeworks, have to be handed in on time (we are strict on this!)
5h written exam on December 11 2012

Linear Systems

Definition: Let M be a signal space. The system S: M → M is linear if for all u, v ∈ M and α ∈ R

S(αu) = αS(u)
S(u + v) = S(u) + S(v)

Example: Linear time-invariant systems

ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t), x(0) = 0

y(t) = (g ∗ u)(t) = ∫₀ᵗ g(τ)u(t − τ) dτ

Y(s) = G(s)U(s)

Notice the importance of having zero initial conditions.

Linear Systems Have Nice Properties

Superposition: enough to know a step (or impulse) response
Frequency analysis possible: sinusoidal inputs give sinusoidal outputs, Y(iω) = G(iω)U(iω)
Local stability = global stability: stability if all eigenvalues of A (or poles of G(s)) are in the left half-plane

Linear Models are not Rich Enough

Linear models can not describe many phenomena seen in nonlinear systems.

Linear Models may be too Crude Approximations

Example: Positioning of a ball on a beam

Nonlinear model: mẍ(t) = mg sin θ(t). Linear model: ẍ(t) = gθ(t).

Can the ball move 0.1 meter in 0.1 seconds from steady state?

The linear model (step response with θ(t) ≡ θ₀ and x(0) = ẋ(0) = 0) gives

x(t) = gθ₀t²/2 ≈ 10θ₀ · t²/2, so that x(0.1) ≈ 0.050 θ₀

Hence x(0.1) = 0.1 requires θ₀ = 2 rad = 114°. Unrealistic answer. Clearly outside linear region! The linear model is valid only if sin θ ≈ θ.

Must consider nonlinear model. Possibly also include other nonlinearities such as centripetal force, saturation, friction etc.
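The slide's ball-on-beam arithmetic can be checked directly; a minimal plain-Python sketch (using the slide's rounding g ≈ 10 m/s²):

```python
import math

g = 10.0  # m/s^2, rounded as on the slide

# Linear model: x'' = g*theta0  =>  x(t) = g*theta0*t^2/2
def x_linear(theta0, t):
    return g * theta0 * t**2 / 2

# Beam angle required to move 0.1 m in 0.1 s according to the linear model
theta0 = 0.1 / (g * 0.1**2 / 2)   # solves 0.1 = g*theta0*0.1^2/2

# Nonlinear model x'' = g*sin(theta0) with the same constant angle
def x_nonlinear(theta0, t):
    return g * math.sin(theta0) * t**2 / 2
```

Running this confirms the "unrealistic answer": θ₀ = 2 rad ≈ 114°, and with the nonlinear model the same angle moves the ball less than half the requested distance.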

[Figures: reactor block diagram and step-response plots, input and output vs time]

EL2620

Stability Can Depend on Reference Signal

Example: Control system with valve characteristic f(u) = u²

Simulink block diagram: reference, sum (feedback gain −1), controller, valve f(u) = u², process 1/(s+1)², output y.

Step responses for reference amplitudes r = 0.2, r = 1.68, and r = 1.72 show that stability depends on the amplitude of the reference signal! (The linearized gain of the valve increases with increasing amplitude.)

Multiple Equilibria

Example: chemical reactor, with concentration x₁ and temperature x₂ as states and coolant temperature x_c as input.

Existence of multiple stable equilibria for the same input gives a hysteresis effect.

Stable Periodic Solutions

Example: Position control of motor with back-lash

Simulink block diagram: constant reference, sum (feedback gain −1), P-controller K = 5, motor G(s) = 1/(s(1+5s)) = 1/(5s² + s), back-lash.

Back-lash induces an oscillation. Period and amplitude are independent of initial conditions. How predict and avoid oscillations?

Harmonic Distortion

Example: Sinusoidal response of saturation

For the input a sin t, the output of the saturation is periodic,

y(t) = Σₖ Aₖ sin(kt)

[Figure: input a sin t, saturated outputs and their spectra for a = 1 and a = 2; frequency in Hz]

Total Harmonic Distortion

THD = Σ_{k=2}^∞ (Energy in tone k) / (Energy in tone 1)

Example: Electrical amplifiers

Effective amplifiers work in nonlinear region. Introduces spectrum leakage, which is a problem in cellular systems. Trade-off between effectivity and linearity.

Example: Electrical power distribution

Nonlinearities such as rectifiers, switched electronics, and transformers give rise to harmonic distortion.

Automatic Tuning of PID Controllers

Block diagram: relay (or the PID controller) in feedback with the process; the measured oscillation has amplitude A and period T.

Relay induces a desired oscillation whose frequency and amplitude are used to choose PID parameters.
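The harmonic content of a saturated sinusoid can be computed numerically. A minimal sketch (plain Python; the saturation limit 0.5 and amplitude a = 2 are illustration values, not from the lecture) estimates the Fourier sine coefficients Aₖ of y(t) = sat(a sin t) and a THD-style ratio:

```python
import math

def sat(x, limit=0.5):
    """Unit-slope saturation with illustrative limit 0.5."""
    return max(-limit, min(limit, x))

def harmonic(a, k, n=20000):
    """Fourier sine coefficient A_k of y(t) = sat(a*sin t), one period."""
    s, dt = 0.0, 2 * math.pi / n
    for i in range(n):
        t = i * dt
        s += sat(a * math.sin(t)) * math.sin(k * t) * dt
    return s / math.pi

A1 = harmonic(2.0, 1)   # fundamental
A2 = harmonic(2.0, 2)   # even harmonics vanish (odd symmetry of sat)
A3 = harmonic(2.0, 3)   # first distortion term
thd = sum(harmonic(2.0, k) ** 2 for k in range(2, 10)) / A1 ** 2
```

The odd symmetry of the saturation kills the even harmonics, so the distortion sits in the odd tones, exactly the spectra sketched for a = 1 and a = 2.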

EL2620

Subharmonics

Example: Duffing's equation

ÿ + ẏ + y³ = a sin t

For suitable forcing amplitudes a the response contains subharmonics, i.e., components with a period that is a multiple of the forcing period.

[Figure: forcing a sin t and a response exhibiting a subharmonic]

Nonlinear Differential Equations

ẋ(t) = f(x(t)), x(0) = x₀    (1)

Definition: A solution to (1) over an interval [0, T] is a C¹ function x: [0, T] → Rⁿ such that (1) is fulfilled.

When does there exist a solution? When is the solution unique?

Example: ẋ = Ax, x(0) = x₀, gives x(t) = exp(At)x₀.

Finite Escape Time

[Figure: finite escape time of dx/dt = x²; simulation for various initial conditions x₀]

Existence Problems

Example: The differential equation ẋ = x², x(0) = x₀, has solution

x(t) = x₀ / (1 − x₀t)

Recall the trick: dx/x² = dt. Integrating,

1/x₀ − 1/x(t) = −t  ⟹  x(t) = x₀/(1 − x₀t)

The solution is defined only for 0 ≤ t < 1/x₀: the finite escape time is t_f = 1/x₀. The solution interval depends on the initial condition!
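The finite escape time t_f = 1/x₀ can be reproduced numerically; a minimal sketch (plain Python, forward Euler with a blow-up threshold as the numerical "escape"):

```python
def euler(f, x0, t_end, dt=1e-5):
    """Forward-Euler integration of x' = f(x); stops early on blow-up."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * f(x)
        t += dt
        if abs(x) > 1e6:          # numerical "escape to infinity"
            return t, x
    return t, x

x0 = 2.0
t_escape, _ = euler(lambda x: x * x, x0, t_end=1.0)
analytic_tf = 1.0 / x0            # slide: t_f = 1/x0

# Closed-form solution from the slide, valid for t < 1/x0
def x_exact(t, x0=2.0):
    return x0 / (1 - x0 * t)
```

For x₀ = 2 the integration blows up near t = 0.5 = 1/x₀, and before the escape the numerical trajectory tracks x(t) = x₀/(1 − x₀t).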

EL2620

Uniqueness Problems

Example: ẋ = √x, x(0) = 0, has many solutions:

x(t) = (t − C)²/4 for t > C,  x(t) = 0 for t ≤ C

Physical Interpretation

Consider the reverse example, i.e., the water tank lab process with

ẋ = −√x,  x(0) = x₀,  x(T) = 0,

where x is the water level. It is then impossible to know at what time t < T the level was x(t) = x₀ > 0.

Hint: Reverse time s = T − t gives ds = −dt and thus dx/ds = −dx/dt.

Lipschitz Continuity

Definition: f: Rⁿ → Rⁿ is Lipschitz continuous if there exist L, r > 0 such that for all x, y ∈ B_r(x₀) = {z ∈ Rⁿ : ‖z − x₀‖ < r},

‖f(x) − f(y)‖ ≤ L‖x − y‖

The Euclidean norm is given by ‖x‖ = √(x₁² + ⋯ + xₙ²). [Figure: a cone of slope L bounding f]

Local Existence and Uniqueness

Theorem: If f is Lipschitz continuous, then there exists ε = ε(r, L) > 0 such that

ẋ(t) = f(x(t)),  x(0) = x₀

has a unique solution in B_r(x₀) over [0, ε]. Proof: See Khalil, Appendix C.1. Based on the contraction mapping theorem.

Remarks

f being C¹ implies Lipschitz continuity (L = max_{x∈B_r(x₀)} ‖f′(x)‖)
f being C⁰ is not sufficient (cf. the tank example)
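The whole family of solutions of ẋ = √x, x(0) = 0 can be verified numerically; a minimal sketch checking that each member satisfies the ODE (the residual uses a central difference):

```python
import math

def solution(C):
    """One member of the solution family of x' = sqrt(x), x(0) = 0."""
    def x(t):
        return (t - C) ** 2 / 4 if t > C else 0.0
    return x

def residual(x, t, h=1e-6):
    """|x'(t) - sqrt(x(t))| estimated with a central difference."""
    dx = (x(t + h) - x(t - h)) / (2 * h)
    return abs(dx - math.sqrt(x(t)))
```

Every choice of the "switch-on" time C ≥ 0 gives a valid solution through x(0) = 0, which is exactly the loss of uniqueness when f is continuous but not Lipschitz at 0.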

EL2620

State-Space Models

State x, input u, output y.

General:  f(x, u, y, ẋ, u̇, ẏ, …) = 0
Explicit:  ẋ = f(x, u),  y = h(x)
Affine in u:  ẋ = f(x) + g(x)u,  y = h(x)
Linear:  ẋ = Ax + Bu,  y = Cx

Transformation to First-Order System

Given a differential equation in y with highest derivative dⁿy/dtⁿ, express the equation in

x = (y, dy/dt, …, dⁿ⁻¹y/dtⁿ⁻¹)ᵀ

Example: Pendulum

MR²θ̈ + kθ̇ + MgR sin θ = 0

With x = (θ, θ̇)ᵀ this gives

ẋ₁ = x₂
ẋ₂ = −(k/(MR²)) x₂ − (g/R) sin x₁
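The first-order pendulum form can be simulated directly; a minimal RK4 sketch (plain Python; the parameter values M = R = 1, k = 0.5 are illustrative choices, not from the lecture):

```python
import math

# Pendulum in first-order form: x1 = theta, x2 = theta_dot
M, R, k, g = 1.0, 1.0, 0.5, 9.8   # illustrative parameters

def f(x1, x2):
    return x2, -(k / (M * R * R)) * x2 - (g / R) * math.sin(x1)

def rk4_step(x1, x2, dt):
    k1 = f(x1, x2)
    k2 = f(x1 + dt/2*k1[0], x2 + dt/2*k1[1])
    k3 = f(x1 + dt/2*k2[0], x2 + dt/2*k2[1])
    k4 = f(x1 + dt*k3[0], x2 + dt*k3[1])
    return (x1 + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            x2 + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

# Released near the upright position, the damped pendulum falls into a
# hanging equilibrium (theta a multiple of 2*pi)
x1, x2 = math.pi - 0.01, 0.0
for _ in range(100000):
    x1, x2 = rk4_step(x1, x2, 2e-3)
```

The upright position (θ = π, θ̇ = 0) is itself an equilibrium of f, but the simulation settles at the hanging one, previewing the stability discussion of the later lectures.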

EL2620

Equilibria

Definition: A point (x⁰, u⁰, y⁰) is an equilibrium if a solution starting in (x⁰, u⁰, y⁰) stays there forever.

Corresponds to putting all derivatives to zero:

General:  f(x⁰, u⁰, y⁰, 0, 0, …) = 0
Explicit:  0 = f(x⁰, u⁰),  y⁰ = h(x⁰)
Affine in u:  0 = f(x⁰) + g(x⁰)u⁰,  y⁰ = h(x⁰)
Linear:  0 = Ax⁰ + Bu⁰,  y⁰ = Cx⁰

Often the equilibrium is defined only through the state x⁰.

Transformation to Autonomous System

A nonautonomous system ẋ = f(x, t) is always possible to transform to an autonomous system by introducing xₙ₊₁ = t:

ẋ = f(x, xₙ₊₁)
ẋₙ₊₁ = 1

Multiple Equilibria

Example: Pendulum

MR²θ̈ + kθ̇ + MgR sin θ = 0

θ̈ = θ̇ = 0 gives sin θ = 0 and thus θ = nπ, n integer.

Alternatively, in first-order form:

ẋ₁ = x₂
ẋ₂ = −(k/(MR²)) x₂ − (g/R) sin x₁

ẋ₁ = ẋ₂ = 0 gives x₂ = 0 and sin(x₁) = 0.

When do we need Nonlinear Analysis & Design?

When the range of operation is large
When the system is strongly nonlinear
When distinctive nonlinear phenomena are relevant
When we want to push performance to the limit

Some Common Nonlinearities in Control Systems

Simulink blocks: Saturation, Dead Zone, Backlash, Relay, Sign, Abs (|u|), Coulomb & Viscous Friction, Math Function (e.g., eᵘ), Look-Up Table.

Next Lecture

Simulation in Matlab
Linearization
Phase plane analysis

Lecture 2

EL2620 Nonlinear Control

Wrap-up of Lecture 1: Nonlinear systems and phenomena
Modeling and simulation in Simulink
Phase-plane analysis

Analysis Through Simulation

Simulation tools:

ODEs ẋ = f(t, x, u): ACSL, Simnon, Simulink
DAEs F(t, ẋ, x, u) = 0: Omsim, Dymola, Modelica (http://www.modelica.org)
Special purpose simulation tools: Spice, EMTP, ADAMS, gPROMS

Today's Goal

You should be able to

Model and simulate in Simulink
Linearize using Simulink
Do phase-plane analysis using pplane (or other tool)

Simulink

Start from the command line:
> matlab
>> simulink

An Example in Simulink

stepmodel.mdl: Step → Transfer Fcn 1/(s+1) → Scope, with a Clock and To Workspace blocks saving t, u, and y.

Build the model:
File -> New -> Model
Double click on Continuous → Transfer Fcn
Step (in Sources)
Scope (in Sinks)
Connect (mouse-left)
Simulation -> Parameters

Save Results to Workspace

Check Save format of output blocks (Array instead of Structure). Then:

>> plot(t,y)

Choose Simulation Parameters

How To Get Better Accuracy

Modify Refine, Absolute and Relative Tolerances, and the Integration method. Don't forget Apply.

Refine adds interpolation points. [Figure: the same solution plotted with Refine = 1 and Refine = 10]

Use Scripts to Document Simulations

If the block-diagram is saved to stepmodel.mdl, the following Script-file simstepmodel.m simulates the system:

open_system('stepmodel')
set_param('stepmodel','RelTol','1e-3')
set_param('stepmodel','AbsTol','1e-6')
set_param('stepmodel','Refine','1')
tic
sim('stepmodel',6)
toc
subplot(2,1,1),plot(t,y),title('y')
subplot(2,1,2),plot(t,u),title('u')

Example: Two-Tank System

The system consists of two identical tank models in series. Each tank:

ḣ = (u − q)/A
q = a√(2gh)

Simulink realization: Sum, Gain 1/A, Integrator 1/s, and a Fcn block f(u) computing q = a√(2gh); two such Subsystems connected In → Out.

Nonlinear Control System

Example: Control system with valve characteristic f(u) = u². Simulink block diagram: sum (feedback gain −1), valve u², process 1/(s+1)², output y.

Linearization in Simulink

Linearize the two-tank model about an equilibrium (x0, u0, y0):

>> A=2.7e-3;a=7e-6;g=9.8;
>> [x0,u0,y0]=trim('twotank',[0.1;0.1],[],0.1)
x0 =
    0.1000
    0.1000
u0 =
    9.7995e-006
y0 =
    0.1000
>> [aa,bb,cc,dd]=linmod('twotank',x0,u0);
>> sys=ss(aa,bb,cc,dd);
>> bode(sys)
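The equilibrium that trim finds can be checked by hand from the tank equation ḣ = (u − a√(2gh))/A; a minimal plain-Python sketch using the slide's parameter values, including the pole of a single linearized tank:

```python
import math

# Tank model from the slides: hdot = (u - q)/A, q = a*sqrt(2*g*h)
A, a, g = 2.7e-3, 7e-6, 9.8

def hdot(h, u):
    return (u - a * math.sqrt(2 * g * h)) / A

# Equilibrium at level h0 requires u0 = a*sqrt(2*g*h0)
h0 = 0.1
u0 = a * math.sqrt(2 * g * h0)

# Linearization: dq/dh = a*g/sqrt(2*g*h), so one tank has the pole
pole = -(a * g / math.sqrt(2 * g * h0)) / A
```

This reproduces trim's u0 ≈ 9.8e-6 and gives a stable (negative) pole, consistent with the Bode plot computed from linmod.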

Differential Equation Editor

dee is a Simulink-based differential equation editor:

>> dee

Run the demonstrations deedemo1, deedemo2, deedemo3, deedemo4.

Homework 1

Use your favorite phase-plane analysis tool.
Follow instructions in Exercise Compendium on how to write the report.
The report should be short and include only necessary plots.
See the course homepage for a report example.
Write in English.

Phase-Plane Analysis

Download ICTools from http://www.control.lth.se/ictools
Download DFIELD and PPLANE from http://math.rice.edu/dfield
This was the preferred tool last year!

Lecture 3

EL2620 Nonlinear Control

Local Stability

Stability definitions
Linearization
Phase-plane analysis
Periodic solutions

Today's Goal

You should be able to

Explain local and global stability
Linearize around equilibria and trajectories
Sketch phase portraits for two-dimensional systems
Classify equilibria into nodes, focuses, saddle points, and center points
Analyze stability of periodic solutions through Poincaré maps

Stability Definitions

Consider ẋ = f(x) with f(0) = 0.

Definition: The equilibrium x = 0 is stable if for all ε > 0 there exists δ = δ(ε) > 0 such that

‖x(0)‖ < δ  ⟹  ‖x(t)‖ < ε,  t ≥ 0

If x = 0 is not stable it is called unstable.

Asymptotic Stability

Definition: The equilibrium x = 0 is asymptotically stable if it is stable and δ can be chosen such that

‖x(0)‖ < δ  ⟹  lim_{t→∞} x(t) = 0

The equilibrium is globally asymptotically stable if it is stable and lim_{t→∞} x(t) = 0 for all x(0).

Linearization Around a Trajectory

Let (x₀(t), u₀(t)) denote a solution to ẋ = f(x, u) and consider another solution (x(t), u(t)) = (x₀(t) + x̃(t), u₀(t) + ũ(t)):

d/dt (x₀(t) + x̃(t)) = f(x₀(t) + x̃(t), u₀(t) + ũ(t))
= f(x₀(t), u₀(t)) + ∂f/∂x (x₀(t), u₀(t)) x̃(t) + ∂f/∂u (x₀(t), u₀(t)) ũ(t) + O(‖(x̃, ũ)‖²)

Hence, for small (x̃, ũ), approximately

d/dt x̃(t) = A(x₀(t), u₀(t)) x̃(t) + B(x₀(t), u₀(t)) ũ(t)

where

A(x₀(t), u₀(t)) = ∂f/∂x (x₀(t), u₀(t))
B(x₀(t), u₀(t)) = ∂f/∂u (x₀(t), u₀(t))

Note that A and B are time dependent. However, if (x₀(t), u₀(t)) ≡ (x₀, u₀) then A and B are constant.

Example (rocket)

ḣ(t) = v(t)
v̇(t) = −g + vₑu(t)/m(t)
ṁ(t) = −u(t)

Let x₀(t) = (h₀(t), v₀(t), m₀(t)), u₀(t) ≡ u₀ > 0, be a solution. Then,

d/dt x̃(t) = [0 1 0; 0 0 −vₑu₀/(m₀ − u₀t)²; 0 0 0] x̃(t) + [0; vₑ/(m₀ − u₀t); −1] ũ(t)

Pointwise Left Half-Plane Eigenvalues of A(t) Do Not Impose Stability

A(t) = [−1 + α cos²t   1 − α sin t cos t
        −1 − α sin t cos t   −1 + α sin²t],  α > 0

Pointwise eigenvalues are given by

λ = (α − 2 ± √(α² − 4)) / 2

which are stable (in the left half-plane) for 0 < α < 2. However,

x(t) = [e^{(α−1)t} cos t   e^{−t} sin t
        −e^{(α−1)t} sin t   e^{−t} cos t] x(0)

is an unbounded solution for α > 1.
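The counterexample above can be checked numerically; a minimal Euler sketch (plain Python, with the illustrative choice α = 1.5, for which the pointwise eigenvalues have real part (α − 2)/2 = −0.25 yet ‖x(t)‖ grows like e^{0.5t}):

```python
import math

alpha = 1.5   # 0 < alpha < 2: pointwise eigenvalues in the left half-plane

def A(t):
    s, c = math.sin(t), math.cos(t)
    return ((-1 + alpha * c * c, 1 - alpha * s * c),
            (-1 - alpha * s * c, -1 + alpha * s * s))

re_lambda = (alpha - 2) / 2      # real part of the pointwise eigenvalues

# Integrate x' = A(t) x from x(0) = (1, 0); the exact solution is
# (exp((alpha-1)t) cos t, -exp((alpha-1)t) sin t)
x = [1.0, 0.0]
dt, T = 1e-4, 10.0
for i in range(int(T / dt)):
    m = A(i * dt)
    x = [x[0] + dt * (m[0][0] * x[0] + m[0][1] * x[1]),
         x[1] + dt * (m[1][0] * x[0] + m[1][1] * x[1])]
norm = math.hypot(x[0], x[1])
```

The state norm after 10 s is close to e⁵ ≈ 148, even though every frozen-time A(t) is Hurwitz: time-varying stability cannot be read off pointwise eigenvalues.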

Linear Systems Revival

Analytic solution of d/dt x = Ax:

x(t) = e^{At} x(0),  t ≥ 0.

If A is diagonalizable, then

e^{At} = V e^{Λt} V⁻¹ = [v₁ v₂] [e^{λ₁t} 0; 0 e^{λ₂t}] [v₁ v₂]⁻¹

where v₁, v₂ are the eigenvectors of A (Av₁ = λ₁v₁ etc). This implies that

x(t) = c₁e^{λ₁t}v₁ + c₂e^{λ₂t}v₂,

where the constants c₁ and c₂ are given by the initial conditions.

Lyapunov's Linearization Method

Theorem: Let x₀ be an equilibrium of ẋ = f(x) with f ∈ C¹. Denote A = ∂f/∂x (x₀) and α(A) = max Re λ(A).

If α(A) < 0, then x₀ is asymptotically stable
If α(A) > 0, then x₀ is unstable

The case α(A) = 0 needs further investigation.

The theorem is also called Lyapunov's Indirect Method. A proof is given next lecture. The fundamental result for linear systems theory!

Example: Two real negative eigenvalues

Given the eigenvalues λ₁ < λ₂ < 0, with corresponding eigenvectors v₁ and v₂, respectively.

Solution: x(t) = c₁e^{λ₁t}v₁ + c₂e^{λ₂t}v₂

Fast eigenvalue/vector: x(t) ≈ c₁e^{λ₁t}v₁ + c₂v₂ for small t. Moves along the fast eigenvector for small t.
Slow eigenvalue/vector: x(t) ≈ c₂e^{λ₂t}v₂ for large t. Moves along the slow eigenvector towards x = 0 for large t.

Example

ẋ₁ = −x₁² + x₁ + sin x₂
ẋ₂ = cos x₂ − x₁³ − 5x₂

The linearization at the equilibrium x₀ = (1, 0)ᵀ is given by

A = [−1 1; −3 −5],  λ(A) = {−2, −4}

x₀ is thus an asymptotically stable equilibrium for the nonlinear system.

Phase-Plane Analysis for Linear Systems

ẋ = Ax. The location of the eigenvalues λ(A) determines the characteristics of the trajectories. Six cases:

Im λᵢ = 0:  λ₁, λ₂ < 0  stable node
            λ₁, λ₂ > 0  unstable node
            λ₁ < 0 < λ₂  saddle point
Im λᵢ ≠ 0:  Re λᵢ < 0  stable focus
            Re λᵢ > 0  unstable focus
            Re λᵢ = 0  center point

Equilibrium Points for Linear Systems

[Figure: trajectories in the (x₁, x₂)-plane for λ₁,₂ = −1 ± i and λ₁,₂ = −0.3 ± i]

Example—Unstable Focus

ẋ = [σ −ω; ω σ] x,  σ, ω > 0,  λ₁,₂ = σ ± iω

x(t) = e^{At}x(0) = [1 1; i −i] [e^{σt}e^{iωt} 0; 0 e^{σt}e^{−iωt}] [1 1; i −i]⁻¹ x(0)

In polar coordinates r = √(x₁² + x₂²), θ = arctan x₂/x₁ (x₁ = r cos θ, x₂ = r sin θ):

ṙ = σr
θ̇ = ω

Example—Stable Node

ẋ = [−1 1; 0 −2] x,  (λ₁, λ₂) = (−1, −2), eigenvectors v₁ = (1, 0)ᵀ and v₂ = (1, −1)ᵀ.

v₁ is the slow direction and v₂ is the fast. Slow: x₂ = 0. Fast: x₂ = −x₁.

5 minute exercise: What is the phase portrait if λ₁ = λ₂, e.g.,

A = [−1 1; 0 −1]?

Hint: Two cases; only one linearly independent eigenvector, or all vectors are eigenvectors.

Phase-Plane Analysis for Nonlinear Systems

Close to equilibrium points: nonlinear system ≈ linear system.

Theorem: Assume

ẋ = f(x) = Ax + g(x),

with lim_{‖x‖→0} ‖g(x)‖/‖x‖ = 0. If ż = Az has a focus, node, or saddle point, then ẋ = f(x) has the same type of equilibrium at the origin.

Remark: If the linearized system has a center, then the nonlinear system has either a center or a focus.
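The six cases follow from the trace and determinant of A alone; a minimal classifier sketch (plain Python, 2×2 case; degenerate borderline cases with det = 0 are not distinguished here):

```python
def classify(a, b, c, d, tol=1e-12):
    """Classify the equilibrium of xdot = [[a, b], [c, d]] x."""
    tr = a + d
    det = a * d - b * c
    disc = tr * tr - 4 * det        # sign decides real vs complex eigenvalues
    if det < 0:
        return "saddle point"       # real eigenvalues of opposite sign
    if disc >= 0:                   # real eigenvalues, same sign
        return "stable node" if tr < 0 else "unstable node"
    if abs(tr) <= tol:
        return "center point"       # purely imaginary eigenvalues
    return "stable focus" if tr < 0 else "unstable focus"
```

Applied to the slide's examples: the stable-node matrix [−1 1; 0 −2] and the linearization [−1 1; −3 −5] are nodes, while [σ −ω; ω σ] with σ > 0 is an unstable focus.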

How to Draw Phase Portraits

By hand:
1. Find equilibria
2. Sketch local behavior around equilibria
3. Sketch (ẋ₁, ẋ₂) for some other points. Notice that dx₂/dx₁ = ẋ₂/ẋ₁
4. Try to find possible periodic orbits
5. Guess solutions

By computer:
1. Matlab: dee or pplane

Phase-Locked Loop

A PLL tracks the phase θin(t) of a signal sin(t) = A sin[ωt + θin(t)].

Block diagram: Phase Detector sin(·) → Filter K/(1 + sT) → VCO 1/s → θout, fed back to the phase detector.

Phase-Plane Analysis of PLL

Let (x₁, x₂) = (θout, θ̇out), K, T > 0, and θin(t) ≡ θin. Then

ẋ₁(t) = x₂(t)
ẋ₂(t) = −T⁻¹x₂(t) + KT⁻¹ sin(θin − x₁(t))

ẋ₁ = 0 gives x₂ = 0; ẋ₂ = 0 gives sin(θin − x₁) = 0, so x₁ = θin + nπ.

Equilibria are (θin + nπ, 0).

Classification of Equilibria

Linearization gives the following characteristic equations:

n even: λ² + T⁻¹λ + KT⁻¹ = 0
0 < K < (4T)⁻¹ gives stable node
K > (4T)⁻¹ gives stable focus

n odd: λ² + T⁻¹λ − KT⁻¹ = 0
Saddle points for all K, T > 0

Phase-Plane for PLL

(K, T) = (1/2, 1): focuses (2kπ, 0), saddle points ((2k + 1)π, 0). [Figure: phase portrait]

Periodic Solutions

A system has a periodic solution if for some T > 0

x(t + T) = x(t),  t ≥ 0

Note that x(t) ≡ const is by convention not regarded as periodic. A periodic orbit is the image of x in the phase portrait.

When does there exist a periodic solution? When is it stable?

Example of an asymptotically stable periodic solution:

ẋ₁ = x₁ − x₂ − x₁(x₁² + x₂²)
ẋ₂ = x₁ + x₂ − x₂(x₁² + x₂²)    (1)

Periodic solution: polar coordinates x₁ = r cos θ, x₂ = r sin θ give

ẋ₁ = ṙ cos θ − rθ̇ sin θ
ẋ₂ = ṙ sin θ + rθ̇ cos θ

so

[ṙ; θ̇] = [cos θ, sin θ; −r⁻¹ sin θ, r⁻¹ cos θ] [ẋ₁; ẋ₂]

Now, from (1),

ẋ₁ = r(1 − r²) cos θ − r sin θ
ẋ₂ = r(1 − r²) sin θ + r cos θ

which gives

ṙ = r(1 − r²)
θ̇ = 1

Only r = 1 gives ṙ = 0 for r > 0: r = 1 is a stable equilibrium of the r-dynamics!

Flow

The solution of ẋ = f(x) is sometimes denoted Φₜ(x₀) to emphasize the dependence on the initial point x₀. Φₜ(·) is called the flow.

Poincaré Map

Assume Φₜ(x₀) is a periodic solution with period T. Let Σ ⊂ Rⁿ be an (n − 1)-dimensional hyperplane transverse to f at x₀.

Definition: The Poincaré map P: Σ → Σ is

P(x) = Φ_{τ(x)}(x)

where τ(x) is the time of first return.

Existence of Periodic Orbits

A point x* such that P(x*) = x* corresponds to a periodic orbit. x* is called a fixed point of P.

1 minute exercise: What does a fixed point of Pᵏ correspond to?

Example—Stable Unit Circle

Rewrite (1) in polar coordinates:

ṙ = r(1 − r²)
θ̇ = 1

Choose Σ = {(r, θ) : r > 0, θ = 2πk}. The solution is

Φₜ(r₀, θ₀) = ( [1 + (r₀⁻² − 1)e^{−2t}]^{−1/2}, t + θ₀ )

The first return time from any point (r₀, θ₀) is τ(r₀, θ₀) = 2π. The Poincaré map is

P(r₀, θ₀) = ( [1 + (r₀⁻² − 1)e^{−4π}]^{−1/2}, θ₀ + 2π )

(r₀, θ₀) = (1, 2πk) is a fixed point.

Stable Periodic Orbit

The linearization of P around x* gives a matrix W such that

P(x) − x* ≈ W(x − x*) if x is close to x*.

λⱼ(W) = 1 for some j.

If |λᵢ(W)| < 1 for all i ≠ j, then the corresponding periodic orbit is asymptotically stable.
If |λᵢ(W)| > 1 for some i, then the periodic orbit is unstable.

For the example,

W = dP/d(r₀, θ₀) (1, 2πk) = [e^{−4π} 0; 0 1]

The periodic solution that corresponds to (r(t), θ(t)) = (1, t) is asymptotically stable because |e^{−4π}| < 1. Stable periodic orbit (as we already knew for this example)!
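The Poincaré map of this example is explicit, so its fixed point and contraction rate can be checked numerically; a minimal sketch (plain Python):

```python
import math

E = math.exp(-4 * math.pi)

def P(r0):
    """Poincare map of rdot = r(1 - r^2), thetadot = 1 on the section
    theta = 0 (mod 2*pi): the r-coordinate after time 2*pi."""
    return (1 + (r0 ** -2 - 1) * E) ** -0.5

# r0 = 1 is a fixed point; the derivative there is exp(-4*pi) < 1
w = (P(1 + 1e-6) - P(1 - 1e-6)) / 2e-6   # central-difference derivative

# Iterating P from any r0 > 0 converges rapidly to the unit circle
r = 0.1
for _ in range(5):
    r = P(r)
```

Since e^{−4π} ≈ 3.5·10⁻⁶, the map is an extremely strong contraction: a single return already puts the trajectory essentially on the unit circle, matching the W = diag(e^{−4π}, 1) computed on the slide.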

Lecture 5

EL2620 Nonlinear Control

Input-output stability
Gain
The Small Gain Theorem
The Circle Criterion
Passivity

Today's Goal

You should be able to

derive the gain of a system
analyze stability using the Small Gain Theorem, the Circle Criterion, and Passivity

History

For what G(s) and f(·) is the closed-loop system (G(s) in negative feedback with the static nonlinearity f(y)) stable?

Lure and Postnikov's problem (1944)
Aizerman's conjecture (1949) (False!)
Kalman's conjecture (1957) (False!)
Solution by Popov (1960) (Led to the Circle Criterion)

Gain

Idea: Generalize the concept of gain to nonlinear dynamical systems. The gain of S is the largest amplification from u to y. Here S can be a constant, a matrix, a linear time-invariant system, etc.

Question: How should we measure the size of u and y?

Norms

A norm ‖·‖ measures size.

Definition: A norm is a function ‖·‖: Rⁿ → R, such that for all x, y ∈ Rⁿ

‖x‖ ≥ 0, and ‖x‖ = 0 ⟺ x = 0
‖x + y‖ ≤ ‖x‖ + ‖y‖
‖αx‖ = |α| ‖x‖, for all α ∈ R

Examples:

Euclidean norm: ‖x‖ = √(x₁² + ⋯ + xₙ²)
Max norm: ‖x‖ = max{|x₁|, …, |xₙ|}

Eigenvalues are not gains

The spectral radius of a matrix M,

ρ(M) = max |λᵢ(M)|,

is not a gain (nor a norm). Why? What amplification is described by the eigenvalues?

Gain of a Matrix

Every matrix M ∈ Cⁿˣⁿ has a singular value decomposition

M = UΣV*,  Σ = diag{σ₁, …, σₙ};  U*U = I;  V*V = I

where σᵢ are the singular values. The gain of M is the largest singular value of M:

σ̄(M) = σ₁ = sup_{x∈Rⁿ} ‖Mx‖/‖x‖

where ‖·‖ is the Euclidean norm.

Signal Norms

A signal x is a function x: R₊ → R. A signal norm ‖·‖ₖ is a norm on the space of signals x.

Examples:

2-norm (energy norm): ‖x‖₂ = √(∫₀^∞ |x(t)|² dt)
sup-norm: ‖x‖∞ = sup_{t∈R₊} |x(t)|
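The point that eigenvalues are not gains can be made concrete; a minimal sketch (plain Python, with M = [[0, 10], [0, 0]] as an illustration of my own choosing: both eigenvalues are 0 yet some inputs are amplified by 10):

```python
import math

M = ((0.0, 10.0), (0.0, 0.0))   # nilpotent: spectral radius 0

def gain(M, samples=3600):
    """sup ||Mx||/||x|| over unit vectors x = (cos t, sin t), 2x2 sketch."""
    best = 0.0
    for i in range(samples):
        t = 2 * math.pi * i / samples
        x = (math.cos(t), math.sin(t))
        y = (M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1])
        best = max(best, math.hypot(y[0], y[1]))
    return best

spectral_radius = 0.0            # both eigenvalues of M are zero
largest_singular_value = 10.0    # sqrt of max eigenvalue of M^T M = diag(0, 100)
```

The brute-force supremum agrees with the largest singular value, not the spectral radius: the input direction (0, 1) comes out ten times longer.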

Parseval's Theorem

L₂ denotes the space of signals with bounded energy: ‖x‖₂ < ∞.

Theorem: If x, y ∈ L₂ have the Fourier transforms

X(iω) = ∫₀^∞ e^{−iωt} x(t) dt,  Y(iω) = ∫₀^∞ e^{−iωt} y(t) dt,

then

∫₀^∞ y(t)x(t) dt = (1/2π) ∫_{−∞}^{∞} Y*(iω)X(iω) dω.

In particular,

‖x‖₂² = ∫₀^∞ |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} |X(iω)|² dω.

The power calculated in the time domain equals the power calculated in the frequency domain.

System Gain

A system S is a map from L₂ to L₂: y = S(u). The gain of S is defined as

γ(S) = sup_{u∈L₂} ‖S(u)‖₂/‖u‖₂ = sup_{u∈L₂} ‖y‖₂/‖u‖₂

Example: The gain of a scalar static system y(t) = αu(t) is

γ(α) = sup_{u∈L₂} ‖αu‖₂/‖u‖₂ = sup_{u∈L₂} |α|‖u‖₂/‖u‖₂ = |α|

2 minute exercise: Show that γ(S₁S₂) ≤ γ(S₁)γ(S₂).

Gain of a Static Nonlinearity

Lemma: A static nonlinearity f such that |f(x)| ≤ K|x| and f(x*) = Kx* for some x* ≠ 0 has gain γ(f) = K.

Proof:

‖y‖₂² = ∫₀^∞ f²(u(t)) dt ≤ ∫₀^∞ K²u²(t) dt = K²‖u‖₂²,

where u(t) = x*, t ∈ (0, 1), gives equality, so

γ(f) = sup ‖y‖₂/‖u‖₂ = K

Gain of a Stable Linear System

Lemma:

γ(G) = sup_{u∈L₂} ‖Gu‖₂/‖u‖₂ = sup_{ω∈(0,∞)} |G(iω)|

Proof: Assume |G(iω)| ≤ K for ω ∈ (0, ∞) and |G(iω*)| = K for some ω*. Parseval's theorem gives

‖y‖₂² = (1/2π) ∫ |Y(iω)|² dω = (1/2π) ∫ |G(iω)|²|U(iω)|² dω ≤ K²‖u‖₂²

Arbitrarily close to equality by choosing u(t) close to sin ω*t. [Figure: |G(iω)| with its peak K at ω*]

BIBO Stability

Definition: S is bounded-input bounded-output (BIBO) stable if γ(S) < ∞.

Example: If ẋ = Ax is asymptotically stable then G(s) = C(sI − A)⁻¹B + D is BIBO stable.

The Small Gain Theorem

Closed loop: e₁ = r₁ − S₂(e₂), e₂ = r₂ + S₁(e₁).

Theorem: Assume S₁ and S₂ are BIBO stable. If γ(S₁)γ(S₂) < 1, then the closed-loop system is BIBO stable from (r₁, r₂) to (e₁, e₂).

Example—Static Nonlinear Feedback

G(s) = 2/(s + 1)²,  0 ≤ f(y)/y ≤ K, y ≠ 0, f(0) = 0

γ(G) = 2 and γ(f) ≤ K. The Small Gain Theorem gives BIBO stability for K ∈ (0, 1/2).

"e1 "2

"r1 "2 + (S2 )"r2 "2


1 (S2 )(S1 )

"e1 "2 "r1 "2 + (S2 )["r2 "2 + (S1 )"e1 "2 ]

Proof of the Small Gain Theorem

"e2 "2

"r2 "2 + (S1 )"r1 "2


1 (S1 )(S2 )

2011

17

2011

Lecture 5

f ()

G(s)

-1

-0.5

-1

0.5

-0.5

0.5

1.5

G(i)
2

19

Let f (y) = Ky in the previous example. Then the Nyquist Theorem


proves stability for all K [0, ), while the Small Gain Theorem
only proves stability for K (0, 1/2).

Small Gain Theorem can be Conservative

EL2620

Lecture 5

Note: Formal proof requires " "2e , see Khalil

so also e2 is bounded.

Similarly we get

(S2 )(S1 ) < 1, "r1 "2 < , "r2 "2 < give "e1 "2 < .

gives

EL2620

The Nyquist Theorem

Theorem: If G has no poles in the right half-plane and the Nyquist curve G(iω), ω ∈ [0, ∞), does not encircle −1, then the closed-loop system is stable.

[Figure: Nyquist curve of G(iω) relative to the point −1]

The Circle Criterion

Theorem: Assume that G(s) has no poles in the right half-plane, and

0 ≤ k₁ ≤ f(y)/y ≤ k₂,  y ≠ 0,  f(0) = 0

If the Nyquist curve of G(s) does not encircle or intersect the circle defined by the points −1/k₁ and −1/k₂, then the closed-loop system is BIBO stable.

[Figure: sector condition k₁y ≤ f(y) ≤ k₂y and the circle through −1/k₁ and −1/k₂]

Example—Static Nonlinear Feedback (cont'd)

0 ≤ f(y) ≤ Ky: the circle is defined by −1/k₁ = −∞ and −1/k₂ = −1/K. Since

min_ω Re G(iω) = −1/4,

the Circle Criterion gives that the system is BIBO stable if K ∈ (0, 4). [Figure: Nyquist curve of G(iω) and the line Re z = −1/K]

Proof of the Circle Criterion

Let k = (k₁ + k₂)/2, f̃(y) = f(y) − ky, and r̃₁ = r₁ − kr₂. Then

|f̃(y)/y| ≤ (k₂ − k₁)/2 =: R, y ≠ 0,  f̃(0) = 0

Assume G̃ = G/(1 + kG) is stable (this has to be checked later). The Small Gain Theorem gives stability if |G̃(iω)|R < 1, where

1/|G̃(iω)| = |1/G(iω) + k| > R

The curve G⁻¹(iω) and the circle {z ∈ C : |z + k| > R} mapped through z → 1/z gives the result.

Note that G/(1 + kG) is stable since −1/k is inside the circle.
Note that G(s) may have poles on the imaginary axis, e.g., integrators are allowed.
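Both sector bounds in this example can be computed from a frequency sweep; a minimal sketch (plain Python, for G(s) = 2/(s + 1)², on a grid up to 200 rad/s):

```python
import cmath

def G(w):
    return 2 / (1j * w + 1) ** 2

gain = 0.0      # will approach sup |G(iw)| = 2 (attained as w -> 0)
min_re = 0.0    # will approach min Re G(iw) = -1/4 (at w = sqrt(3))
w = 1e-3
while w < 200.0:
    g = G(w)
    gain = max(gain, abs(g))
    min_re = min(min_re, g.real)
    w += 1e-3

small_gain_K = 1 / gain     # Small Gain Theorem: K in (0, 1/2)
circle_K = -1 / min_re      # Circle Criterion:   K in (0, 4)
```

The sweep recovers the slide's numbers: γ(G) = 2 bounds the sector at K < 1/2, while min Re G(iω) = −1/4 extends it to K < 4, a factor-8 improvement illustrating how much less conservative the Circle Criterion is here.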

Scalar Product

Scalar product for signals y and u:

⟨y, u⟩_T = ∫₀ᵀ yᵀ(t)u(t) dt

If u and y are interpreted as vectors then ⟨y, u⟩_T = |y|_T |u|_T cos φ, where |y|_T = √⟨y, y⟩_T is the length of y and φ is the angle between u and y.

Cauchy-Schwarz Inequality:

⟨y, u⟩_T ≤ |y|_T |u|_T

Example: u = sin t and y = cos t are orthogonal if T = kπ, because cos φ = ⟨y, u⟩_T / (|y|_T |u|_T) = 0.

Passive System

Definition: Consider signals u, y: [0, T] → Rᵐ. The system S is passive if

⟨y, u⟩_T ≥ 0, for all T > 0 and all u

and strictly passive if there exists ε > 0 such that

⟨y, u⟩_T ≥ ε(|y|²_T + |u|²_T), for all T > 0 and all u

Warning: There exist many other definitions for strictly passive.

Passivity and BIBO Stability

The main result: Feedback interconnections of passive systems are passive, and BIBO stable (under some additional mild criteria).

2 minute exercise: Is the pure delay system y(t) = u(t − θ) passive? Consider for instance the input u(t) = sin((π/θ)t).

Example—Passive Electrical Components

u(t) = Ri(t):  ⟨u, i⟩_T = ∫₀ᵀ Ri²(t) dt = R⟨i, i⟩_T ≥ 0

i = C du/dt:  ⟨u, i⟩_T = ∫₀ᵀ u(t)C (du/dt) dt = Cu²(T)/2 ≥ 0

u = L di/dt:  ⟨u, i⟩_T = ∫₀ᵀ L (di/dt) i(t) dt = Li²(T)/2 ≥ 0

Passivity of Linear Systems

Theorem: An asymptotically stable linear system G(s) is passive if and only if

Re G(iω) ≥ 0,  ∀ω > 0

It is strictly passive if and only if there exists ε > 0 such that

Re G(iω) ≥ ε|G(iω)|²,  ∀ω > 0

Example:

G(s) = 1/(s + 1) is strictly passive.
G(s) = 1/s is passive but not strictly passive.

[Figure: Nyquist curve of G(iω) = 1/(iω + 1), which stays in the right half-plane]

Feedback of Passive Systems is Passive

Closed loop: e₁ = r₁ − y₂, e₂ = r₂ + y₁, y₁ = S₁(e₁), y₂ = S₂(e₂).

Lemma: If S₁ and S₂ are passive then the closed-loop system from (r₁, r₂) to (y₁, y₂) is also passive.

Proof:

⟨y, e⟩_T = ⟨y₁, e₁⟩_T + ⟨y₂, e₂⟩_T = ⟨y₁, r₁ − y₂⟩_T + ⟨y₂, r₂ + y₁⟩_T = ⟨y₁, r₁⟩_T + ⟨y₂, r₂⟩_T = ⟨y, r⟩_T

Hence, ⟨y, r⟩_T ≥ 0 if ⟨y₁, e₁⟩_T ≥ 0 and ⟨y₂, e₂⟩_T ≥ 0.

A Strictly Passive System Has Finite Gain

Lemma: If S is strictly passive then γ(S) = sup_{u∈L₂} ‖y‖₂/‖u‖₂ < ∞.

Proof:

ε(|y|²_T + |u|²_T) ≤ ⟨y, u⟩_T ≤ |y|_T |u|_T ≤ ‖y‖₂‖u‖₂

Hence, ε‖y‖₂² ≤ ‖y‖₂‖u‖₂, so ‖y‖₂ ≤ (1/ε)‖u‖₂.

The Passivity Theorem

Theorem: If S₁ is strictly passive and S₂ is passive, then the
closed-loop system is BIBO stable from r to y.

The Passivity Theorem is a Small Phase Theorem:

S₁ strictly passive ⇒ cos φ₁ > 0 ⇒ |φ₁| < π/2
S₂ passive ⇒ cos φ₂ ≥ 0 ⇒ |φ₂| ≤ π/2

Proof

S₁ strictly passive and S₂ passive give

ε(|y₁|²_T + |e₁|²_T) ≤ ⟨y₁, e₁⟩_T + ⟨y₂, e₂⟩_T = ⟨y, r⟩_T

Therefore

|y₁|²_T + ⟨r₁ − y₂, r₁ − y₂⟩_T ≤ (1/ε)⟨y, r⟩_T

or

|y₁|²_T + |y₂|²_T ≤ 2⟨y₂, r₁⟩_T − |r₁|²_T + (1/ε)⟨y, r⟩_T

Hence

|y|²_T ≤ (2 + 1/ε)|y|_T |r|_T

Let T → ∞ and the result follows.

2 minute exercise: Apply the Passivity Theorem and compare it with
the Nyquist Theorem. What about conservativeness? [Compare the
discussion on the Small Gain Theorem.]

Example—Gain Adaptation

Applications in telecommunication channel estimation, in noise
cancellation, etc.

Model (reference): y_m = G(s)u.  Process: y = G(s)[θ̂u], where the
gain θ̂ is adapted so that y follows y_m.

Adaptation law:

dθ̂/dt = c·u(t)[y_m(t) − y(t)],   c > 0

Gain Adaptation is BIBO Stable

The closed loop can be written as a feedback interconnection of G(s)
with a block S containing the adaptation integrator c/s. S is
passive (see exercises). If G(s) is strictly passive, the
closed-loop system is BIBO stable.

Simulation of Gain Adaptation

G(s) = 1/(s + 1), c = 1, u = sin t, and θ̂(0) = 0.
[Plots over 0–20 s: y and y_m converge, and the gain estimate θ̂(t)
settles.]

Storage Function

Consider the nonlinear control system

ẋ = f(x, u),    y = h(x)

A storage function is a C¹ function V : Rⁿ → R such that

V(0) = 0 and V(x) ≥ 0 for x ≠ 0,    and    V̇(x) ≤ uᵀy, ∀x, u

Remark: V(T) represents the stored energy in the system:

V(x(T))  ≤  ∫₀ᵀ y(t)u(t) dt  +  V(x(0)),   ∀T > 0
(stored energy at t = T ≤ absorbed energy + stored energy at t = 0)

Lyapunov vs. Passivity

Lyapunov idea: energy is decreasing, V̇ ≤ 0
Passivity idea: increase in stored energy ≤ added energy, V̇ ≤ uᵀy

The storage function is a generalization of the Lyapunov function.

Storage Function and Passivity

Lemma: If there exists a storage function V for a system

ẋ = f(x, u),    y = h(x)

with x(0) = 0, then the system is passive.

Proof: For all T > 0,

⟨y, u⟩_T = ∫₀ᵀ y(t)u(t) dt ≥ V(x(T)) − V(x(0)) = V(x(T)) ≥ 0

Example—KYP Lemma

Consider an asymptotically stable linear system

ẋ = Ax + Bu,    y = Cx

Assume there exist positive definite matrices P, Q such that

AᵀP + PA = −Q,    BᵀP = C

Consider V = 0.5xᵀPx. Then

V̇ = 0.5(ẋᵀPx + xᵀPẋ) = 0.5xᵀ(AᵀP + PA)x + uBᵀPx
  = −0.5xᵀQx + uy < uy,   x ≠ 0

and hence the system is strictly passive. This fact is part of the
Kalman–Yakubovich–Popov lemma.
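The storage-function inequality can be checked by simulation. A minimal sketch for G(s) = 1/(s+1), i.e. ẋ = −x + u, y = x, with storage function V = x²/2 (the input u = sin t, horizon and step size are illustrative choices):

```python
import numpy as np

# dx/dt = -x + u, y = x; V = x^2/2 satisfies Vdot = -x^2 + x*u <= u*y
dt, T = 1e-3, 10.0
x = 0.0                     # x(0) = 0
supplied = 0.0              # integral of y*u dt: energy absorbed at the port
for i in range(int(T / dt)):
    u = np.sin(i * dt)
    y = x
    supplied += y * u * dt
    x += (-x + u) * dt      # forward-Euler step

V_end = 0.5 * x ** 2        # stored energy at t = T
```

The passivity inequality says the stored energy at t = T cannot exceed the energy supplied (the difference is dissipated in the −x² term).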

Today's Goal

You should be able to

derive the gain of a system

analyze stability using
  Small Gain Theorem
  Circle Criterion
  Passivity

EL2620 Nonlinear Control — Lecture 6

Describing function analysis

Motivating Example

G(s) = 4/(s(s + 1)²) with feedback u = −sat e gives a stable
oscillation. How can the oscillation be predicted?

Today's Goal

You should be able to

Derive describing functions for static nonlinearities

Analyze existence and stability of periodic solutions by describing
function analysis

A Frequency Response Approach

Nyquist/Bode: A (linear) feedback system will have sustained
oscillations (center) if the loop gain is 1 at the frequency where
the phase lag is 180°.

But can we talk about the frequency response, in terms of gain and
phase lag, of a static nonlinearity?

Fourier Series

A periodic function u(t) = u(t + T) has a Fourier series expansion

u(t) = a₀/2 + Σ_{n=1}^∞ (aₙ cos nωt + bₙ sin nωt)
     = a₀/2 + Σ_{n=1}^∞ √(aₙ² + bₙ²) sin[nωt + arctan(aₙ/bₙ)]

where ω = 2π/T and

aₙ(ω) = (2/T)∫₀ᵀ u(t) cos nωt dt,   bₙ(ω) = (2/T)∫₀ᵀ u(t) sin nωt dt

Note: Sometimes we make the change of variable φ = ωt = 2πt/T.

Key Idea

Consider a loop with a static nonlinearity N.L. followed by a linear
system G(s). The input e(t) = A sin ωt gives

u(t) = Σ_{n=1}^∞ √(aₙ² + bₙ²) sin[nωt + arctan(aₙ/bₙ)]

If |G(inω)| ≪ |G(iω)| for n ≥ 2, then n = 1 suffices, so that

y(t) ≈ |G(iω)| √(a₁² + b₁²) sin[ωt + arctan(a₁/b₁) + arg G(iω)]

That is, we assume all higher harmonics are filtered out by G.

Definition of Describing Function

The describing function is

N(A, ω) = (b₁(ω) + i a₁(ω))/A

If G is low pass and a₀ = 0, then

û₁(t) = |N(A, ω)| A sin[ωt + arg N(A, ω)]

can be used instead of u(t) to analyze the system.

Amplitude dependent gain and phase shift!

The Fourier Coefficients are Optimal

The finite expansion

û_k(t) = a₀/2 + Σ_{n=1}^{k} (aₙ cos nωt + bₙ sin nωt)

solves

min ∫₀ᵀ (u(t) − û_k(t))² dt

Existence of Periodic Solutions

Proposal: sustained oscillations if the loop gain is 1 and the phase
lag is 180°, i.e.,

G(iω)N(A) = −1   ⇔   G(iω) = −1/N(A)

The intersections of the curves G(iω) and −1/N(A) give ω and A for a
possible periodic solution.

Describing Function for a Relay

With e(t) = A sin ωt = A sin φ, the relay output is u(φ) = H for
φ ∈ (0, π) and u(φ) = −H for φ ∈ (π, 2π). Then

a₁ = (1/π)∫₀^{2π} u(φ) cos φ dφ = 0
b₁ = (1/π)∫₀^{2π} u(φ) sin φ dφ = (2/π)∫₀^{π} H sin φ dφ = 4H/π

The describing function for a relay is thus

N(A) = (b₁(ω) + i a₁(ω))/A = 4H/(πA)

Odd Static Nonlinearities

Assume f(·) and g(·) are odd (i.e., f(−e) = −f(e)) static
nonlinearities with describing functions N_f and N_g. Then,

Im N_f(A, ω) = 0
N_f(A, ω) = N_f(A)
N_{αf}(A) = αN_f(A)
N_{f+g}(A) = N_f(A) + N_g(A)

Periodic Solutions in Relay System

G(s) = 3/(s + 1)³ with feedback u = −sgn y.

No phase lag in f(·); arg G(iω) = −π for ω = √3 ≈ 1.7

G(i√3) = −3/8 = −1/N(A) = −πA/4   ⇒   A = 12/(8π) ≈ 0.48

[Nyquist plot: G(iω) intersects −1/N(A) on the negative real axis.]

Describing Function for a Saturation

Let e(t) = A sin ωt = A sin φ. First set H = D. Then for φ ∈ (0, π)

u(φ) = A sin φ,  φ ∈ (0, φ₀) ∪ (π − φ₀, π)
u(φ) = D,        φ ∈ (φ₀, π − φ₀)

where φ₀ = arcsin D/A.

a₁ = (1/π)∫₀^{2π} u(φ) cos φ dφ = 0
b₁ = (1/π)∫₀^{2π} u(φ) sin φ dφ = (4/π)∫₀^{π/2} u(φ) sin φ dφ
   = (4A/π)∫₀^{φ₀} sin²φ dφ + (4D/π)∫_{φ₀}^{π/2} sin φ dφ
   = (A/π)(2φ₀ + sin 2φ₀)

Hence, if H = D, then N(A) = (1/π)(2φ₀ + sin 2φ₀).

If H ≠ D, then the rule N_{αf}(A) = αN_f(A) gives

N(A) = H/(πD) · (2φ₀ + sin 2φ₀)

[Plot: N(A) for H = D = 1; N decreases from 1 towards 0 as A grows.]

The prediction via the describing function agrees very well with the
true oscillations. Note that G filters out almost all higher-order
harmonics.
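The closed-form saturation describing function can be cross-checked against a direct Fourier integral. A minimal sketch for H = D = 1 and an arbitrary test amplitude A = 2:

```python
import numpy as np

A, D = 2.0, 1.0                       # input amplitude and saturation level (H = D)
n = 400_000
phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dphi = 2.0 * np.pi / n

u = np.clip(A * np.sin(phi), -D, D)   # saturation output over one period

b1 = np.sum(u * np.sin(phi)) * dphi / np.pi
N_numeric = b1 / A                    # describing function from quadrature

phi0 = np.arcsin(D / A)
N_formula = (2.0 * phi0 + np.sin(2.0 * phi0)) / np.pi   # slide formula, H = D
```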

Stability of Periodic Solutions

Assume that G(s) is stable.

If G(iω) encircles the point −1/N(A), then the oscillation
amplitude is increasing.

If G(iω) does not encircle the point −1/N(A), then the oscillation
amplitude is decreasing.

Compare with the Nyquist Theorem: Assume that G is stable, and K is
a positive gain.

If G(iω) goes through the point −1/K, the closed-loop system
displays sustained oscillations.

If G(iω) encircles the point −1/K, then the closed-loop system is
unstable (growing amplitude oscillations).

If G(iω) does not encircle the point −1/K, then the closed-loop
system is stable (damped oscillations).

An Unstable Periodic Solution

An intersection with amplitude A₀ is unstable if A < A₀ leads to
decreasing amplitude and A > A₀ leads to increasing.

5 minute exercise: What oscillation amplitude and frequency does the
describing function analysis predict for the Motivating Example?
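The intersection G(iω) = −1/N(A) can be solved numerically. A minimal sketch for the relay example above, G(s) = 3/(s+1)³ with N(A) = 4/(πA) (the bisection bracket [1, 3] is an illustrative choice):

```python
import numpy as np

def G(w):
    return 3.0 / (1j * w + 1.0) ** 3

# Find w > 0 where Im G(iw) = 0 (phase crossing of -180 degrees) by bisection.
lo, hi = 1.0, 3.0              # Im G < 0 at lo, Im G > 0 at hi
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if G(mid).imag < 0.0:
        lo = mid
    else:
        hi = mid
w_osc = 0.5 * (lo + hi)

# At the crossing, -1/N(A) = G(iw)  =>  A = 4 |G(iw)| / pi  (relay, H = 1)
A_osc = 4.0 * abs(G(w_osc)) / np.pi
```

This reproduces the slide's values ω = √3 and A = 12/(8π) ≈ 0.48.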

Stable Periodic Solution in Relay System

G(s) = (s + 10)²/(s + 1)³ with feedback u = −sgn y

gives one stable and one unstable limit cycle. The leftmost
intersection of G(iω) with −1/N(A) corresponds to the stable one.

Describing Function for a Quantizer

Let e(t) = A sin ωt = A sin φ. Then for φ ∈ (0, π)

u(φ) = 0,  φ ∈ (0, φ₀) ∪ (π − φ₀, π)
u(φ) = 1,  φ ∈ (φ₀, π − φ₀)

where φ₀ = arcsin D/A.

Automatic Tuning of PID Controller

Period T and amplitude A of the relay feedback limit cycle can be
used for autotuning: the loop is temporarily switched from PID to
relay feedback, and the PID parameters are set from the measured
(T, A).

Describing function for the quantizer step (cont'd):

a₁ = (1/π)∫₀^{2π} u(φ) cos φ dφ = 0
b₁ = (1/π)∫₀^{2π} u(φ) sin φ dφ = (4/π)∫_{φ₀}^{π/2} sin φ dφ
   = (4/π) cos φ₀ = (4/π)√(1 − D²/A²)

N(A) = 0,                     A < D
N(A) = (4/(πA))√(1 − D²/A²),  A ≥ D

Friction

Control loop with friction F at the process input. With relay
feedback u = −sgn y, the loop corresponds to

G/(1 + GC) = 3s(s − z)/(s³ + 2s² + 2s + 1)

The oscillation depends on the zero at s = z.

Accuracy of Describing Function Analysis

For the friction loop above with z = 1/3 and z = 4/3, the DF gives
period times and amplitudes (T, A) = (11.4, 1.00) and (17.3, 0.23),
respectively.

Accurate results only if y is close to sinusoidal!

Plot of Describing Function—Quantizer

[Plot: N(A) for D = 1.] Notice that N(A) ≈ 1.3/A for large
amplitudes.

Describing Function Pitfalls

Describing function analysis can give erroneous results.

A DF may predict a limit cycle even if one does not exist.

A limit cycle may exist even if the DF does not predict it.

The predicted amplitude and frequency are only approximations
and can be far from the true values.

2 minute exercise: What is N(A) for f(x) = x³?

Analysis of Oscillations—A Summary

Time-domain: Poincaré maps and Lyapunov functions
  Rigorous results, but only for simple examples
  Hard to use for large problems

Frequency-domain: describing function analysis
  Approximate results
  Powerful graphical methods

Harmonic Balance

e(t) = A sin ωt → f(·) → u(t) = a₀/2 + Σ_{n=1}^∞ (aₙ cos nωt + bₙ sin nωt)

A few more Fourier coefficients in the truncation û_k(t) may give a
much better result. The describing function corresponds to k = 1 and
a₀ = 0.

Example: f(x) = x² with A = 1 gives u(t) = (1 − cos 2ωt)/2. Hence by
also considering a₀ = 1 and a₂ = −1/2 we get the exact result.

Today's Goal

You should be able to

Derive describing functions for static nonlinearities

Analyze existence and stability of periodic solutions by describing
function analysis

EL2620 Nonlinear Control — Lecture 7

Compensation for saturation (anti-windup)
Friction models
Compensation for friction

Today's Goal

You should be able to analyze and design

Anti-windup for PID and state-space controllers

Compensation for friction

The Problem with Saturating Actuator

The feedback path is broken when u saturates ⇒ open-loop behavior!

Leads to problems when the system and/or the controller are
unstable.

Example: I-part in PID. Recall: C_PID(s) = K(1 + 1/(T_i s) + T_d s)

Example—Windup in PID Controller

[Plots of y and u over 0–80 s: PID controller without (dashed) and
with (solid) anti-windup.]
Anti-Windup for PID Controller

Tracking anti-windup: the saturation error e_s = sat v − v is fed
back to the integrator through the gain 1/T_t:

İ = (K/T_i)e + (1/T_t)e_s

(a) with the actuator output available, and (b) without, using an
actuator model.

Anti-Windup for Observer-Based State Feedback Controller

Observer:        x̂̇ = Ax̂ + B sat v + K(y − Cx̂)
State feedback:  v = L(x_m − x̂)

x̂ is the estimate of the process state, x_m the desired (model)
state. Need an actuator model if sat v is not measurable.

Anti-Windup is Based on Tracking

When the control signal saturates, the integration state in the
controller tracks the proper state:

I(t) = ∫₀ᵗ [K e(τ)/T_i + e_s(τ)/T_t] dτ

The tracking time T_t is the design parameter of the anti-windup.
Common choices of T_t:

T_t = T_i
T_t = √(T_i T_d)

Remark: If 0 < T_t ≪ T_i, then the integrator state becomes
sensitive to the instances when e_s ≠ 0.

Anti-Windup for General State-Space Controller

State-space controller:

ẋ_c = Fx_c + Gy
u = Cx_c + Dy

Windup possible if F is unstable and u saturates.

Idea: Rewrite the representation of the control law from (a) to (b)
with the same input–output relation, but where the unstable S_A is
replaced by a stable S_B. If u saturates, then (b) behaves better
than (a).
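The effect of the tracking term can be seen in a small simulation. A minimal sketch with an assumed first-order plant ẏ = −y + sat(v) and illustrative parameter values (K, T_i, T_t, the saturation level and the step setpoint are all choices for the demo, not from the slides):

```python
import numpy as np

K, Ti, Tt, umax = 5.0, 0.5, 0.5, 1.5
dt, T = 1e-3, 10.0

def simulate(antiwindup):
    """PI control of dy/dt = -y + sat(v), setpoint r = 1.
    Returns the peak value of the integrator state."""
    y, I, Imax = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - y
        v = K * e + I                        # unsaturated controller output
        u = max(-umax, min(umax, v))         # actuator saturation
        dI = K * e / Ti
        if antiwindup:
            dI += (u - v) / Tt               # tracking term e_s / T_t
        I += dI * dt
        y += (-y + u) * dt
        Imax = max(Imax, I)
    return Imax

Imax_plain = simulate(False)   # integrator winds up during saturation
Imax_aw = simulate(True)       # integrator tracks the proper state
```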

Mimic the observer-based controller:

ẋ_c = Fx_c + Gy + K(u − Cx_c − Dy)
    = (F − KC)x_c + (G − KD)y + Ku

Choose K such that F₀ = F − KC has desired (stable) eigenvalues.
Then use the controller

ẋ_c = F₀x_c + G₀y + Ku
u = sat(Cx_c + Dy)

where G₀ = G − KD. It is easy to show that the transfer function
from y to u with no saturation equals F(s)!

Controllers with Stable Zeros

Most controllers are minimum phase, i.e., have zeros strictly in the
LHP. Thus, choose observer gain

K = G/D   ⇒   F − KC = F − GC/D

and the eigenvalues of the observer-based controller become equal to
the zeros of F(s) when u saturates: with u = 0,

ẋ_c = (F − GC/D)x_c    (zero dynamics)

Note that this implies G − KD = 0 in the figure on the previous
slide, and we thus obtain P-feedback with gain D under saturation.

Controller F(s) with Stable Zeros

Let D = lim_{s→∞} F(s) and consider the feedback implementation of
F(s): gain D in the forward path and (1/F(s) − 1/D) in a feedback
loop around the saturation.

If the transfer function (1/F(s) − 1/D) in the feedback loop is
stable (stable zeros of F) ⇒ no stability problems in case of
saturation.

[Block diagrams: state-space controller without and with
anti-windup.]

Internal Model Control (IMC)

IMC: apply feedback only when the system G and the model G̃ differ.
Assume G stable. Note: feedback from the model error y − ỹ.

Design: assume G ≈ G̃ and choose Q stable with Q ≈ G̃⁻¹.

IMC with Static Nonlinearity

Include the nonlinearity in the model G̃ and choose Q stable with
Q ≈ G̃⁻¹.

Example

G̃(s) = 1/(T₁s + 1)

Choose

Q = (T₁s + 1)/(τs + 1),   τ < T₁

This gives the controller

F = Q/(1 − QG̃) = (T₁s + 1)/(τs) = (T₁/τ)(1 + 1/(T₁s))

A PI controller!

Example (cont'd)

Assume r = 0 and abuse of Laplace transform notation. With a
saturating actuator, u = Q(y − G̃v):

If |u| < u_max (v = u): PI controller, u = −(T₁s + 1)/(τs) · y

If |u| > u_max (v = ±u_max):

u = −(T₁s + 1)/(τs + 1) · y ± u_max/(τs + 1)

No integration. An alternative way to implement anti-windup!
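The controller algebra F = Q/(1 − QG̃) = (T₁s + 1)/(τs) can be confirmed numerically at a sample frequency. A minimal sketch (the values of T₁, τ and the test point s are arbitrary):

```python
# Verify F(s) = Q/(1 - Q*Gtilde) = (T1*s + 1)/(tau*s) for the IMC example
T1, tau = 2.0, 0.5
s = 1.0 + 2.0j                       # arbitrary complex test frequency

Gtilde = 1.0 / (T1 * s + 1.0)
Q = (T1 * s + 1.0) / (tau * s + 1.0)

F_from_imc = Q / (1.0 - Q * Gtilde)  # IMC loop rearranged to a controller
F_pi = (T1 * s + 1.0) / (tau * s)    # the claimed PI controller
```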

Other Anti-Windup Solutions

The solutions above are all based on tracking. Other solutions
include:

Tune the controller to avoid saturation

Don't update the controller states at saturation

Conditionally reset the integration state to zero

Apply optimal control theory (Lecture 12)

Friction

Friction is present almost everywhere.

Often bad: friction in valves and other actuators
Sometimes good: friction in brakes
Sometimes too small: earthquakes

Problems:

How to model friction?
How to compensate for friction?
How to detect friction in control loops?

Bumpless Transfer

Another application of the tracking idea is the switching between
automatic and manual control modes.

PID with anti-windup and bumpless transfer: note the incremental
form of the manual control mode (u̇ = u_c/T_m), with tracking gains
1/T_r.

Stick-Slip Motion

[Plots over time: position, applied force F_p and friction force F_f
during stick-slip motion.]

Position Control of Servo with Friction

x_r → PID → 1/(ms) → v → 1/s → x, with friction acting on the mass.

[Plots over 0–100 s: position, velocity and control signal; the loop
exhibits a friction-induced limit cycle.]

Friction Modeling

Stribeck Effect

Friction increases with decreasing velocity (for low velocities).

[Plot: steady-state friction (Nm) vs velocity (rad/sec), "Joint 1";
cf. Stribeck (1902).]

5 minute exercise: Which are the signals in the previous plots?

Classical Friction Models

[Plots of classical static friction characteristics.]

Advanced models capture various friction phenomena better.

Integral Action

x_r → PID → 1/(ms) → v → 1/s, with friction acting on the mass.

Integral action compensates for any external disturbance

Works if the friction force changes slowly (v(t) ≈ const)

If the friction force changes quickly, then large integral action
(small T_i) is necessary. May lead to stability problems.
Friction Compensation

Lubrication

Integral action

Dither signal

Model-based friction compensation

Adaptive friction compensation

The Knocker

Modified Integral Action

Modify the integral part to I = (K/T_i)∫₀ᵗ ẽ(τ) dτ, where ẽ(t) is
the error passed through a dead zone.

Advantage: avoids that a small static error introduces an oscillation
Disadvantage: the error won't go to zero

Dither Signal

Avoid sticking at v = 0 (where there is high friction) by adding a
high-frequency mechanical vibration (dither).

Cf. the mechanical maze puzzle (labyrintspel).

Adaptive Friction Compensation

Coulomb friction model: F = a sgn v

Friction estimator:

ż = k u_PID sgn v
â = z − km|v|
F̂ = â sgn v

Model-Based Friction Compensation

For a process with friction F,

mẍ = u − F

use the control signal u = u_PID + F̂, where u_PID is the regular
control signal and F̂ an estimate of F.

Possible if:

An estimate F̂ ≈ F is available

u and F apply at the same point

Adaptation converges: e = a − â → 0 as t → ∞.

Proof:

dâ/dt = dz/dt − km d|v|/dt = k u_PID sgn v − km v̇ sgn v
      = k sgn v (u_PID − mv̇) = k sgn v (F − F̂) = k(a − â) = ke

so de/dt = −dâ/dt = −ke, and e → 0 as t → ∞.

Remark: Careful with d|v|/dt at v = 0.
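The convergence of the estimator can be checked in simulation. A minimal sketch with assumed values for m, k and the true Coulomb level a, and an excitation u_PID chosen so that v stays positive (avoiding the v = 0 issue the Remark warns about):

```python
import numpy as np

m, k, a = 1.0, 2.0, 0.5          # mass, adaptation gain, true Coulomb level
dt, T = 1e-3, 5.0
v, z = 1.0, 0.0                  # initial velocity and estimator state

for i in range(int(T / dt)):
    u_pid = 1.5 + np.sin(i * dt)     # biased excitation keeps v > 0 here
    a_hat = z - k * m * abs(v)       # current friction-level estimate
    s = np.sign(v)
    F = a * s                        # true Coulomb friction
    F_hat = a_hat * s                # compensation term
    z += k * u_pid * s * dt
    v += (u_pid + F_hat - F) * dt / m   # m*dv/dt = u_pid + F_hat - F

a_hat = z - k * m * abs(v)           # final estimate
```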

[Plots: velocity control with v_ref square wave over 0–200 s, for a
P-controller, a PI-controller, and a P-controller with adaptive
friction compensation; the adaptive compensation removes the
stick-slip oscillation.]

Detection of Friction in Control Loops

Friction is due to wear and increases with time.

Q: When should valves be maintained?

Idea: Monitor loops automatically and estimate friction.

Horch: PhD thesis (2000) and patent.

[Plots: measured loop data used for friction detection.]

The Knocker

Hägglund: patent and Innovation Cup winner.

[Plot: typical control signal u with Coulomb friction compensation
and square-wave dither pulses added.]

Today's Goal

You should be able to analyze and design

Anti-windup for PID, state-space, and polynomial controllers

Compensation for friction

Next Lecture

Backlash

Quantization

Fin!

EL2620 Nonlinear Control — Lecture 8

Backlash
Quantization

Today's Goal

You should be able to analyze and design for

Backlash

Quantization

Backlash

Backlash (glapp) is

present in most mechanical and hydraulic systems

increasing with wear

necessary for a gearbox to work in high temperature

bad for control performance

sometimes inducing oscillations

Linear and Angular Backlash

[Figure: linear backlash (x_in, x_out, gap 2D, force F) and angular
backlash (θ_in, θ_out, torques T_in, T_out).]

Backlash Model

ẋ_out = ẋ_in,  in contact
ẋ_out = 0,     otherwise

where "in contact" means |x_out − x_in| = D and
ẋ_in(x_in − x_out) > 0.

Multivalued output; the current output depends on the history. Thus,
backlash is a dynamic phenomenon.

Effects of Backlash

P-control of motor angle with gearbox having backlash with D = 0.2:

θ_ref → K → 1/(1 + sT) → 1/s → θ_in → backlash → θ_out

[Plots over 0–40 s of θ_in − θ_out for K = 0.25, 1, 4, with and
without backlash.]

Oscillations for K = 4 but not for K = 0.25 or K = 1. Why?

Note that the amplitude will decrease with decreasing D, but never
vanish.

Alternative Model

[Figure: alternative backlash model based on the transmitted force
(torque).] Not equivalent to the Backlash Model.
Describing Function for Backlash

If A < D then N(A) = 0, else

Re N(A) = (1/π)[π/2 + arcsin(1 − 2D/A)
          + 2(1 − 2D/A)√((D/A)(1 − D/A))]

Im N(A) = −(4D/(πA))(1 − D/A)

[Plot: −1/N(A) for D = 0.2, together with Nyquist diagrams of the
loop for K = 0.25, 1, 4; only the K = 4 curve intersects −1/N(A).]

Describing Function Analysis

K = 4, D = 0.2:

DF analysis: intersection at A = 0.33, ω = 1.24
Simulation: A = 0.33, ω = 2π/5.0 = 1.26

The describing function predicts the oscillation well.

[Plot: input and output of the backlash during the limit cycle.]

Note that −1/N(A) → −1 as A → ∞ (physical interpretation?).

The describing function method is only approximate.

Stability Proof for Backlash System

θ_ref → 1/(1 + sT) → 1/s → θ_in → backlash → θ_out

Do there exist conditions that guarantee stability (of the
steady state)?

Note that θ_in and θ_out will not converge to zero.
Q: What about θ̇_in and θ̇_out?

Backlash Compensation

Mechanical solutions

Deadzone

Linear controller design

Backlash inverse

Rewrite the system: the block BL satisfies

θ̇_out = θ̇_in  in contact
θ̇_out = 0      otherwise

Analyze this backlash system with input–output stability results
(Homework 2):

Small Gain Theorem — BL has gain γ(BL) = 1
Passivity Theorem — BL is passive
Circle Criterion — BL is contained in the sector [0, 1]

Linear Controller Design

Introduce phase-lead compensation:

θ_ref → K(1 + sT₂)/(1 + sT₁) → 1/(1 + sT) → 1/s → θ_in → BL → θ_out

F(s) = K(1 + sT₂)/(1 + sT₁) with T₁ = 0.5, T₂ = 2.0:

[Nyquist diagrams with and without the lead filter; with the filter
the curve no longer intersects −1/N(A).]

Oscillation removed!

Backlash inverse (with estimated gap D̂):

x_in(t) = u + D̂  if u̇(t) > 0
x_in(t) = u − D̂  if u̇(t) < 0
x_in(t) unchanged otherwise

If D̂ = D then perfect compensation (x_out = u)
If D̂ < D then under-compensation (decreased backlash)
If D̂ > D then over-compensation (may give oscillation)

Backlash Inverse

Idea: Let x_in jump 2D when ẋ_out should change sign.

[Figure: backlash inverse characteristic with jumps of size 2D.]

Example—Perfect Compensation

Motor with backlash on the input in feedback with a PD-controller:
with D̂ = D the backlash is exactly cancelled.

[Plots over 0–10 s: reference and output track.]

Example—Under-Compensation and Example—Over-Compensation

[Plots over 0–80 s: with D̂ < D the oscillation amplitude is
reduced; with D̂ > D an oscillation may be induced.]

Quantization

Quantization in A/D and D/A converters

Quantization of parameters

Roundoff, overflow, underflow in computations

What precision is needed in A/D and D/A converters? (8–14 bits?)
What precision is needed in computations? (8–64 bits?)

Linear Model of Quantization

Model the quantization error as a uniformly distributed stochastic
signal e, independent of u, with

Var(e) = ∫_{−Δ/2}^{Δ/2} e² f_e de = (1/Δ)∫_{−Δ/2}^{Δ/2} e² de = Δ²/12

May be reasonable if Δ is small compared to the variations in u.

But added noise can never affect stability, while quantization can!
Describing Function for a Quantizer

Recall the deadzone relay (Lecture 6):

N(A) = 0,                    A < D
N(A) = (4/(πA))√(1 − D²/A²), A > D

For a quantizer with step Δ, superposition of deadzone relays gives

N(A) = 0,   A < Δ/2

N(A) = (4Δ/(πA)) Σ_{i=1}^{n} √(1 − ((2i − 1)Δ/(2A))²),
       (2n − 1)Δ/2 < A < (2n + 1)Δ/2

[Plot: N(A) vs A/Δ.] The maximum value is ≈ 1.27·(1/Δ) (i.e. 4/π
for Δ = 1), attained at A ≈ 0.71Δ.

Predicts oscillation if the Nyquist curve intersects the negative
real axis to the left of −πΔ/4 ≈ −0.79Δ.

A controller with gain margin > 1/0.79 = 1.27 avoids the
oscillation. Reducing Δ reduces only the oscillation amplitude.

Example—Motor with P-controller

Quantization of the process output with Δ = 0.2. The Nyquist curve
of the linear part (K & ZOH (1 − e⁻ˢ)/s & G(s)) intersects the
negative real axis at −0.5K:

Stability for K < 2 without quantization.
Stable oscillation predicted for K > 2/1.27 = 1.57 with
quantization.

[Plots for the motor example: output for K = 0.8, 1.2, 1.6; the
oscillation appears for the larger gains.]

Example—1/s² & 2nd-Order Controller

Quantization Δ = 0.01 in the D/A converter:
Describing function: A_u = 0.005 and T = 39
Simulation: A_u = 0.005 and T = 39

Quantization Δ = 0.02 in the A/D converter:
Describing function: A_y = 0.01 and T = 39
Simulation: A_y = 0.01 and T = 28

[Plots over 0–150 s: quantized and unquantized input and output.]

Quantization Compensation

Improve accuracy (larger word length)

Avoid unstable controllers and gain margins < 1.3

Use the tracking idea from anti-windup to improve the D/A converter

Use analog dither, oversampling, and a digital lowpass filter with
decimation to improve the accuracy of the A/D converter

[Block diagram: analog process → A/D → digital filter & decimation →
digital controller → D/A.]

Today's Goal

You should now be able to analyze and design for

Backlash

Quantization

EL2620 Nonlinear Control — Lecture 9

Nonlinear control design based on high-gain control

History of the Feedback Amplifier

New York–San Francisco communication link 1914.
High signal amplification with low distortion was needed.
Black, Bode, and Nyquist at Bell Labs 1920–1950.
Feedback amplifiers were the solution!

Today's Goal

You should be able to analyze and design

High-gain control systems

Sliding mode controllers

Linearization Through High-Gain Feedback

Assume the static nonlinearity f lies in a sector:
θ₁e ≤ f(e) ≤ θ₂e. With the loop y = f(Ke), e = r − y, the error
satisfies

r/(1 + θ₂K) ≤ r − y ≤ r/(1 + θ₁K)

so choosing K ≫ 1/θ₁ yields y ≈ r.

A Word of Caution

Nyquist: a high loop gain may induce oscillations (due to dynamics)!

Remark: How to Obtain f⁻¹ from f Using Feedback

Put an integrator k/s in the loop: u̇ = k(v − f(u)). If k > 0 is
large and df/du > 0, then u̇ → 0 and

0 = k(v − f(u))   ⇒   f(u) = v   ⇒   u = f⁻¹(v)

Example—Linearization of Static Nonlinearity

Linearization of f(u) = u² through feedback. The case K = 100 is
shown in the plot: y(r) ≈ r.

Inverting Nonlinearities

Compensation of a static nonlinearity through inversion: place
f⁻¹(·) ahead of f(·)G(s). Should be combined with feedback F(s) as
in the figure!

The Sensitivity Function S

The closed-loop system is

G_cl = G/(1 + GF)

Small perturbations dG in G give

dG_cl/dG = 1/(1 + GF)²

so

dG_cl/G_cl = (1/(1 + GF)) dG/G = S dG/G,   S = (1 + GF)⁻¹

S is the closed-loop sensitivity to open-loop perturbations.

Example—Distortion Reduction

Let G = 1000 with distortion dG/G = 0.1. Choose K = 0.1, so that
S = (1 + GK)⁻¹ ≈ 0.01. Then

dG_cl/G_cl = S · dG/G ≈ 0.001

100 feedback amplifiers in series give total amplification
G_tot = (G_cl)¹⁰⁰ ≈ 10¹⁰⁰ and total distortion

dG_tot/G_tot = (1 + 10⁻³)¹⁰⁰ − 1 ≈ 0.1
Transcontinental Communication Revolution

[Table: Year (1914, 1923, 1938, 1941), number of channels, line loss
in dB, and number of repeater amplifiers; the channel count and line
loss grew by orders of magnitude, made feasible only by feedback
amplifiers.]

The feedback amplifier was patented by Black 1937.

Distortion Reduction via Feedback

The feedback reduces distortion in each link.
Several links give distortion-free high gain.

Sensitivity and the Circle Criterion

Consider a circle C := {z ∈ ℂ : |z + 1| = r}, r ∈ (0, 1).
GF(iω) stays outside C if

|1 + GF(iω)| > r   ⇔   |S(iω)| < r⁻¹

Then, the Circle Criterion gives stability for the sector

y/(1 + r) ≤ f(y) ≤ y/(1 − r)

On–Off Control

On–off control is the simplest control strategy.
Common in temperature control, level control, etc.
The relay corresponds to infinitely high gain at the switching
point.
Small Sensitivity Allows Large Uncertainty

k₁y ≤ f(y) ≤ k₂y,   k₁ = 1/(1 + r),  k₂ = 1/(1 − r)

If |S(iω)| is small, we can choose r large (close to one).
This corresponds to a large sector for f(·).
Hence, |S(iω)| small implies low sensitivity to nonlinearities.

A Control Design Idea

ẋ = Ax + Bu,   u ∈ [−1, 1]

Assume V(x) = xᵀPx, P = Pᵀ > 0, represents the energy of the
system. Choose u such that V decays as fast as possible:

V̇ = xᵀ(AᵀP + PA)x + 2BᵀPxu

is minimized by u = −sgn(BᵀPx). (Notice that V̇ = a + bu is just a
segment of a line in u, −1 < u < 1. Hence the lowest value is at an
endpoint, depending on the sign of the slope b.)

This gives

ẋ = Ax − B sgn(BᵀPx)

Sliding Modes

ẋ = f⁺(x),  σ(x) > 0
ẋ = f⁻(x),  σ(x) < 0

The sliding surface is S = {x : σ(x) = 0}. The sliding mode is

ẋ = αf⁺ + (1 − α)f⁻

where α ∈ [0, 1] satisfies αf_n⁺ + (1 − α)f_n⁻ = 0 for the normal
projections f_n⁺, f_n⁻ of f⁺, f⁻ onto S.

Example

ẋ = [0 1; 1 −1]x + [1; 1]u = Ax + Bu
u = −sgn σ(x) = −sgn x₂ = −sgn(Cx)

For small x₂ we have

x₂ > 0:  ẋ₂ ≈ x₁ − 1
x₂ < 0:  ẋ₂ ≈ x₁ + 1

This implies the following behavior: for −1 < x₁ < 1 both vector
fields point towards x₂ = 0, and the trajectories slide along the
surface.

Sliding Mode Dynamics

ẋ = Ax − B,  x₂ > 0
ẋ = Ax + B,  x₂ < 0

The dynamics along the sliding surface S is obtained by setting
u = u_eq ∈ [−1, 1] such that x(t) stays on S.

u_eq is called the equivalent control.

=0

x = Ax + Bu
u = sgn (x) = sgn(Cx)

= CAx/CB .

Lecture 9

because (x)

= x2 = 0. (Same result as before.)

!
"
ueq = CAx/CB = 1 1 x = x1 ,

Example (contd): For the example:

gives ueq

> 0. The sliding surface S = {x : Cx = 0} so


$
%
!
"
d
0 = (x)

=
f (x) + g(x)u = C Ax + Bueq
dx

Assume CB

EL2620

= {x : x2 = 0}.

ueq = x1

Equivalent Control for Linear System

gives the dynamics on the sliding surface S

=0

x 1 = x2 + ueq = x1
*+,-

Insert this in the equation for x 1 :

Lecture 9

2011

23

2011

21

= ueq such that (x)

= x 2 = 0 on (x) = x2 = 0 gives

Example (contd)

0 = x 2 = x1 x2 + ueq = x1 + ueq
*+,-

Finding u

EL2620

Deriving the Equivalent Control

Assume

    ẋ = f(x) + g(x)u
    u = −sgn σ(x)

has a stable sliding surface S = {x : σ(x) = 0}. Then, for x ∈ S,

    0 = σ̇(x) = (dσ/dx)(dx/dt) = (dσ/dx)(f(x) + g(x)u)

The equivalent control is thus given by

    ueq = −[(dσ/dx) g(x)]⁻¹ (dσ/dx) f(x)

if the inverse exists.

Sliding Dynamics

The dynamics on S = {x : Cx = 0} is given by

    ẋ = Ax + Bueq = (I − BC/CB) A x

under the constraint Cx = 0, where the eigenvalues of (I − BC/CB)A
are equal to the zeros of sG(s) = sC(sI − A)⁻¹B.

Remark: The condition Cx = 0 corresponds to the zero at s = 0, and
thus this dynamics disappears on S = {x : Cx = 0}.

Proof

From

    ẋ = Ax + Bu,   y = Cx

we get ẏ = CAx + CBu, so

    u = (1/CB) ẏ − (1/CB) CAx

and

    ẋ = (I − (1/CB)BC) Ax + (1/CB) B ẏ

Hence, the transfer function from ẏ to u equals

    1/CB − (1/CB) CA (sI − (I − (1/CB)BC)A)⁻¹ (1/CB) B

but this transfer function is also 1/(sG(s)). Hence, the eigenvalues
of (I − BC/CB)A are equal to the zeros of sG(s).

Closed-Loop Stability

Consider V(x) = σ²(x)/2 with σ(x) = pᵀx. Then,

    V̇ = σ(x)σ̇(x) = xᵀp (pᵀf(x) + pᵀg(x)u)

With the chosen control law, we get

    V̇ = −μ σ(x) sgn σ(x) < 0

so x tends to σ(x) = 0. Now,

    0 = σ(x) = p₁x₁ + ··· + pₙ₋₁xₙ₋₁ + pₙxₙ
             = p₁xₙ⁽ⁿ⁻¹⁾ + ··· + pₙ₋₁xₙ⁽¹⁾ + pₙxₙ⁽⁰⁾

where x⁽ᵏ⁾ denotes the kth time derivative. Now p corresponds to a
stable differential equation, so xₙ → 0 exponentially as t → ∞. The
state relations xₖ₋₁ = ẋₖ then give x → 0 exponentially as t → ∞.

Design of Sliding Mode Controller

Idea: Design a control law that forces the state to σ(x) = 0.
Choose σ(x) such that the sliding mode tends to the origin. Assume

    d/dt (x₁, x₂, ..., xₙ)ᵀ = (f₁(x) + g₁(x)u, x₁, ..., xₙ₋₁)ᵀ
                            = f(x) + g(x)u

Choose the control law

    u = −(pᵀf(x))/(pᵀg(x)) − μ/(pᵀg(x)) · sgn σ(x)

where μ > 0 is a design parameter, σ(x) = pᵀx, and
p = (p₁ ... pₙ)ᵀ are the coefficients of a stable polynomial.

Time to Switch

Consider an initial point x₀ such that σ₀ = σ(x₀) > 0. Since

    σ(x)σ̇(x) = −μ σ(x) sgn σ(x)

it follows that σ̇(x) = −μ as long as σ(x) > 0. Hence, the time to
the first switch (σ(x) = 0) is

    ts = σ₀/μ

Note that ts → 0 as μ → ∞.

Example: Sliding Mode Controller

Design a state-feedback controller for

    ẋ = [1 0; 1 0] x + [1; 0] u
    y = [0 1] x

Choose p₁s + p₂ = s + 1, so that σ(x) = x₁ + x₂. The controller is
given by

    u = −(pᵀAx)/(pᵀB) − μ/(pᵀB) · sgn σ(x) = −2x₁ − μ sgn(x₁ + x₂)

and the sliding dynamics are

    ẏ = −y

Time Plots

Initial condition x(0) = (1.5  0)ᵀ and μ = 3, so the predicted time
to the first switch is ts = σ₀/μ = 0.5. The simulation agrees well
with this.

[figure: x₁ and x₂ as functions of time]

Phase Portrait

Simulation with μ = 0.5. Note the sliding surface σ(x) = x₁ + x₂ = 0.

[figure: phase portrait; trajectories reach the line x₁ + x₂ = 0 and
slide along it toward the origin]

The Sliding Mode Controller is Robust

Assume that only a model ẋ = f̂(x) + ĝ(x)u of the true system
ẋ = f(x) + g(x)u is known. Still, however,

    V̇ = σ(x) [ pᵀ(f ĝᵀ − f̂ gᵀ)p / (pᵀĝ) − μ (pᵀg)/(pᵀĝ) · sgn σ(x) ] < 0

if sgn(pᵀĝ) = sgn(pᵀg) and μ > 0 is sufficiently large.

The closed-loop system is thus robust against model errors!
(High-gain control with stable open-loop zeros.)
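The example above can be checked numerically. A minimal sketch (not from the slides), assuming forward-Euler integration of ẋ₁ = x₁ + u, ẋ₂ = x₁ with u = −2x₁ − μ sgn(x₁ + x₂); with μ = 3 and x(0) = (1.5, 0)ᵀ the first switch should occur near ts = σ₀/μ = 0.5:

```python
import numpy as np

def simulate(x0, mu=3.0, T=5.0, dt=1e-4):
    """Simulate the closed loop and record the first sign change of sigma."""
    x = np.array(x0, float)
    t, t_switch = 0.0, None
    s0 = x[0] + x[1]
    while t < T:
        s = x[0] + x[1]                      # sigma(x) = x1 + x2
        if t_switch is None and s * s0 <= 0:
            t_switch = t
        u = -2*x[0] - mu*np.sign(s)          # sliding mode controller
        x = x + dt*np.array([x[0] + u, x[0]])
        t += dt
    return x, t_switch

x_end, t_switch = simulate([1.5, 0.0], mu=3.0)
print(t_switch)   # close to sigma0/mu = 1.5/3 = 0.5
print(x_end)      # state driven to (a neighborhood of) the origin
```

Because sgn(x₁ + x₂) switches at a finite rate here, the trajectory chatters around the surface with amplitude of order μ·dt.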

Comments on Sliding Mode Control

  Efficient handling of model uncertainties
  Often impossible to implement infinitely fast switching
  Smooth version through low-pass filter or boundary layer
  Applications in robotics and vehicle control
  Compare pulse-width modulated control signals

Today's Goal

You should be able to analyze and design
  High-gain control systems
  Sliding mode controllers

Next Lecture

  Exact feedback linearization
  Lyapunov design methods

EL2620 Nonlinear Control
Lecture 10

Nonlinear Controllers

  Lyapunov-based control design methods
  Input-output linearization
  Exact feedback linearization

Controller structures:
  Linear controller: ż = Az + By, u = Cz
  Linear dynamics, static nonlinearity: ż = Az + By, u = c(z)
  Nonlinear dynamical controller: ż = a(z, y), u = c(z)

Output Feedback and State Feedback

    ẋ = f(x, u)
    y = h(x)

Output feedback: Find u = k(y) such that the closed-loop system

    ẋ = f(x, k(h(x)))

has nice properties.

State feedback: Find u = ℓ(x) such that

    ẋ = f(x, ℓ(x))

has nice properties.

k and ℓ may include dynamics.

Nonlinear Observers

What if x is not measurable?

    ẋ = f(x, u),   y = h(x)

Simplest observer:

    x̂̇ = f(x̂, u)

Feedback correction, as in the linear case:

    x̂̇ = f(x̂, u) + K(y − h(x̂))

Choices of K:
  Linearize f at x₀, find K for the linearization
  Linearize f at x̂(t), find K = K(x̂) for the linearization

The second case is called the Extended Kalman Filter.
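The observer structure above can be sketched as follows. The scalar system ẋ = −x³ + u, y = x and the constant gain K are assumptions chosen for illustration (a proper EKF would recompute K from the linearization at x̂):

```python
import numpy as np

# Nonlinear observer  xhat' = f(xhat, u) + K*(y - h(xhat))
f = lambda x, u: -x**3 + u
h = lambda x: x

def observe(y_meas, u_seq, xhat0, K=2.0, dt=0.01):
    xhat, est = xhat0, []
    for y, u in zip(y_meas, u_seq):
        xhat += dt * (f(xhat, u) + K * (y - h(xhat)))
        est.append(xhat)
    return np.array(est)

# True system started at x = 1, observer started at 0:
dt, N = 0.01, 2000
x, xs = 1.0, []
for _ in range(N):
    x += dt * f(x, 0.0)
    xs.append(x)
est = observe(np.array(xs), np.zeros(N), 0.0)
print(abs(est[-1] - xs[-1]))   # estimation error shrinks toward 0
```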

Some state feedback control approaches:
  Exact feedback linearization
  Input-output linearization
  Lyapunov-based design: backstepping control

Exact Feedback Linearization

Consider the nonlinear control-affine system

    ẋ = f(x) + g(x)u

Idea: use a state-feedback controller u(x) to make the system linear.

Example 1:

    ẋ = cos x − x³ + u

The state-feedback controller

    u(x) = −cos x + x³ − kx + v

yields the linear system

    ẋ = −kx + v

Another Example

    ẋ₁ = a sin x₂
    ẋ₂ = −x₁² + u

How do we cancel the term sin x₂?

Perform a transformation of states into linearizable form:

    z₁ = x₁,   z₂ = ẋ₁ = a sin x₂

yields

    ż₁ = z₂,   ż₂ = a cos x₂ · (−x₁² + u)

and the linearizing control becomes

    u(x) = x₁² + v/(a cos x₂),   x₂ ∈ (−π/2, π/2)

Diffeomorphisms

A nonlinear state transformation ξ = T(x) with
  T invertible for x in the domain of interest
  T and T⁻¹ continuously differentiable
is called a diffeomorphism.

Definition: A nonlinear system

    ẋ = f(x) + g(x)u

is feedback linearizable if there exists a diffeomorphism T whose
domain contains the origin and transforms the system into the form

    ẋ = Ax + Bγ(x)(u − α(x))

with (A, B) controllable and γ(x) nonsingular for all x in the
domain of interest.
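A numerical sketch of the transformed example above, with a = 1. The outer-loop gains k₁, k₂ in v = −k₁z₁ − k₂z₂ are an assumed choice (double pole at −1), not from the slides:

```python
import numpy as np

# Feedback linearization of  x1' = a sin x2,  x2' = -x1^2 + u
# via z1 = x1, z2 = a sin x2 and u = x1^2 + v/(a cos x2).
a, k1, k2 = 1.0, 1.0, 2.0

def step(x, dt=1e-3):
    x1, x2 = x
    z1, z2 = x1, a*np.sin(x2)
    v = -k1*z1 - k2*z2                  # linear design in z-coordinates
    u = x1**2 + v/(a*np.cos(x2))        # valid for |x2| < pi/2
    return np.array([x1 + dt*a*np.sin(x2), x2 + dt*(-x1**2 + u)])

x = np.array([0.5, 0.3])
for _ in range(20000):
    x = step(x)
print(np.abs(x))   # both states driven to the origin
```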

Exact vs. Input-Output Linearization

Example: controlled van der Pol equation

    ẋ₁ = x₂
    ẋ₂ = −x₁ + ε(1 − x₁²)x₂ + u,   y = x₁

Differentiate the output:

    ẏ = ẋ₁ = x₂
    ÿ = ẋ₂ = −x₁ + ε(1 − x₁²)x₂ + u

The state feedback controller

    u = x₁ − ε(1 − x₁²)x₂ + v

yields ÿ = v.

Input-Output Linearization

The example again, but now with an output:

    ẋ₁ = a sin x₂,   ẋ₂ = −x₁² + u,   y = x₂

The control law u = x₁² + v/(a cos x₂) yields

    ż₁ = z₂,   ż₂ = v,   y = sin⁻¹(z₂/a)

which is nonlinear in the output.

If we want a linear input-output relationship we could instead use

    u = x₁² + v

to obtain

    ẋ₁ = a sin x₂,   ẋ₂ = v,   y = x₂

which is linear from v to y.

Caution: the control has rendered the state x₁ unobservable.

The General Idea

Use state feedback u(x) to make the control-affine system

    ẋ = f(x) + g(x)u
    y = h(x)

linear from the input v to the output y.

Differentiate the output y = h(x) p times until the control u
appears explicitly in y⁽ᵖ⁾, and then determine u so that

    y⁽ᵖ⁾ = v,   i.e.,  G(s) = 1/sᵖ

Lie Derivatives

Consider the nonlinear SISO system

    ẋ = f(x) + g(x)u,   x ∈ Rⁿ, u ∈ R
    y = h(x),   y ∈ R

The derivative of the output is

    ẏ = (dh/dx) ẋ = (dh/dx)(f(x) + g(x)u) ≜ L_f h(x) + L_g h(x) u

where L_f h(x) and L_g h(x) are Lie derivatives (L_f h is the
derivative of h along the vector field of ẋ = f(x)).

Repeated derivatives:

    L_f^k h(x) = d(L_f^{k−1} h)/dx · f(x),
    L_g L_f h(x) = d(L_f h)/dx · g(x)

Lie Derivatives and Relative Degree

The relative degree p of a system is defined as the number of
integrators between the input and the output (the number of times
y must be differentiated for the input u to appear).

A linear system

    Y(s)/U(s) = (b₀sᵐ + ... + bₘ)/(sⁿ + a₁sⁿ⁻¹ + ... + aₙ)

has relative degree p = n − m.

A nonlinear system has relative degree p if

    L_g L_f^{i−1} h(x) = 0,  i = 1, ..., p−1;
    L_g L_f^{p−1} h(x) ≠ 0  for all x ∈ D

Differentiating the output of an nth order SISO system with relative
degree p repeatedly gives

    ẏ = L_f h(x) + L_g h(x) u      (with L_g h(x) = 0)
    ...
    y⁽ᵖ⁾ = L_f^p h(x) + L_g L_f^{p−1} h(x) u

The input-output linearizing control

    u = 1/(L_g L_f^{p−1} h(x)) · (−L_f^p h(x) + v)

results in the linear input-output system

    y⁽ᵖ⁾ = v

Example

The controlled van der Pol equation

    ẋ₁ = x₂
    ẋ₂ = −x₁ + ε(1 − x₁²)x₂ + u,   y = x₁

Differentiating the output:

    ẏ = ẋ₁ = x₂
    ÿ = ẋ₂ = −x₁ + ε(1 − x₁²)x₂ + u

Thus, the system has relative degree p = 2.
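The Lie-derivative conditions can be checked symbolically. A sketch using SymPy for the van der Pol example above (the helper `L` is ours, not part of the slides):

```python
import sympy as sp

x1, x2, eps = sp.symbols('x1 x2 epsilon')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -x1 + eps*(1 - x1**2)*x2])
g = sp.Matrix([0, 1])
h = x1

# Lie derivative of a scalar function along a vector field
L = lambda vec, fun: (sp.Matrix([fun]).jacobian(x) * vec)[0]

Lgh   = sp.simplify(L(g, h))      # 0  -> u does not appear in ydot
Lfh   = sp.simplify(L(f, h))      # x2
LgLfh = sp.simplify(L(g, Lfh))    # 1 != 0 -> relative degree p = 2
print(Lgh, Lfh, LgLfh)
```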

Zero Dynamics

Note that the order of the linearized system is p, corresponding to
the relative degree of the system. Thus, if p < n then n − p states
are unobservable in y.

The dynamics of the n − p states not observable in the linearized
dynamics of y are called the zero dynamics. They correspond to the
dynamics of the system when y is forced to be zero for all times.

A system with unstable zero dynamics is called non-minimum phase
(and should not be input-output linearized!).

Lyapunov-Based Control Design Methods

    ẋ = f(x, u)

  Find a stabilizing state feedback u = u(x)
  Verify stability through a control Lyapunov function
  Methods depend on the structure of f

Here we limit the discussion to back-stepping control design, which
requires a certain structure of f, discussed later.

van der Pol again

    ẋ₁ = x₂
    ẋ₂ = −x₁ + ε(1 − x₁²)x₂ + u

With y = x₁ the relative degree p = n = 2 and there are no zero
dynamics; thus we can transform the system into ÿ = v. Try it
yourself!

With y = x₂ the relative degree p = 1 < n and the zero dynamics
are given by ẋ₁ = 0, which is not asymptotically stable
(but bounded).

A Simple Introductory Example

Consider

    ẋ = cos x − x³ + u

Apply the linearizing control

    u = −cos x + x³ − kx

Choose the Lyapunov candidate V(x) = x²/2:

    V(x) > 0,   V̇ = −kx² < 0

Thus, the system is globally asymptotically stable.

But, the term x³ in the control law may require large control moves!

The Same Example

Now try the control law

    u = −cos x − kx

which gives

    ẋ = cos x − x³ + u = −x³ − kx

Choose the same Lyapunov candidate V(x) = x²/2:

    V(x) > 0,   V̇ = −x⁴ − kx² < 0

Thus, the system is also globally asymptotically stable (and V̇ is
more negative).

Simulating the Two Controllers

Simulation with x(0) = 10.

[figure: state trajectory and control input versus time for the
linearizing and non-linearizing controllers]

The linearizing control is slower and uses excessive input. Thus,
linearization can have a significant cost!

Back-Stepping Control Design

We want to design a state feedback u = u(x) that stabilizes

    ẋ₁ = f(x₁) + g(x₁)x₂
    ẋ₂ = u                                            (1)

at x = 0 with f(0) = 0.

Idea: See the system as a cascade connection. Design the controller
first for the inner loop and then for the outer.

Suppose the partial system

    ẋ₁ = f(x₁) + g(x₁)v

can be stabilized by v = φ(x₁), and there exists a Lyapunov fcn
V₁ = V₁(x₁) such that

    V̇₁(x₁) = (dV₁/dx₁)[f(x₁) + g(x₁)φ(x₁)] ≤ −W(x₁)

for some positive definite function W.

This is a critical assumption in backstepping control!

The Trick

Equation (1) can be rewritten as

    ẋ₁ = f(x₁) + g(x₁)φ(x₁) + g(x₁)[x₂ − φ(x₁)]
    ẋ₂ = u

Introduce the new state ζ = x₂ − φ(x₁) and control v = u − φ̇(x₁):

    ẋ₁ = f(x₁) + g(x₁)φ(x₁) + g(x₁)ζ
    ζ̇ = v

where

    φ̇(x₁) = (dφ/dx₁) ẋ₁ = (dφ/dx₁)[f(x₁) + g(x₁)x₂]

Consider V₂(x₁, x₂) = V₁(x₁) + ζ²/2. Then,

    V̇₂(x₁, x₂) = (dV₁/dx₁)[f(x₁) + g(x₁)φ(x₁)]
                 + ζ[(dV₁/dx₁) g(x₁) + v]
               ≤ −W(x₁) + ζ[(dV₁/dx₁) g(x₁) + v]

Choosing

    v = −(dV₁/dx₁) g(x₁) − kζ,   k > 0

gives

    V̇₂(x₁, x₂) ≤ −W(x₁) − kζ²

Hence, x = 0 is asymptotically stable for (1) with the control law
u = φ̇(x) + v(x). If V₁ is radially unbounded, then global stability.

Back-Stepping Lemma

Lemma: Let

    ż = f(z) + g(z)xₖ
    ẋₖ = u

where z = (x₁, ..., xₖ₋₁)ᵀ. Assume φ(0) = 0, f(0) = 0,

    ż = f(z) + g(z)φ(z)

stable, and V(z) a Lyapunov fcn (with V̇ ≤ −W). Then,

    u = (dφ/dz)[f(z) + g(z)xₖ] − (dV/dz) g(z) − (xₖ − φ(z))

stabilizes x = 0 with V(z) + (xₖ − φ(z))²/2 being a Lyapunov fcn.

Back-Stepping

The Back-Stepping Lemma can be applied recursively to a system

    ẋ = f(x) + g(x)u

on strict feedback form.

Back-stepping generates stabilizing feedbacks φₖ(x₁, ..., xₖ)
(equal to u in the Back-Stepping Lemma) and Lyapunov functions

    Vₖ(x₁, ..., xₖ) = Vₖ₋₁(x₁, ..., xₖ₋₁) + [xₖ − φₖ₋₁]²/2

by stepping back from x₁ to u (see Khalil pp. 593-594 for details).
Back-stepping results in the final state feedback u = φₙ(x₁, ..., xₙ).

2 minute exercise: Give an example of a linear system ẋ = Ax + Bu
on strict feedback form.

Example

Design a back-stepping controller for

    ẋ₁ = x₁² + x₂,   ẋ₂ = x₃,   ẋ₃ = u

Step 0: Verify strict feedback form.

Step 1: Consider the first subsystem

    ẋ₁ = x₁² + x₂,   ẋ₂ = u₁

where φ₁(x₁) = −x₁² − x₁ stabilizes the first equation. With
V₁(x₁) = x₁²/2, the Back-Stepping Lemma gives

    u₁ = (−2x₁ − 1)(x₁² + x₂) − x₁ − (x₂ + x₁² + x₁) = φ₂(x₁, x₂)
    V₂ = x₁²/2 + (x₂ + x₁² + x₁)²/2

Strict Feedback Systems

The Back-Stepping Lemma can be applied to stabilize systems on
strict feedback form:

    ẋ₁ = f₁(x₁) + g₁(x₁)x₂
    ẋ₂ = f₂(x₁, x₂) + g₂(x₁, x₂)x₃
    ẋ₃ = f₃(x₁, x₂, x₃) + g₃(x₁, x₂, x₃)x₄
    ...
    ẋₙ = fₙ(x₁, ..., xₙ) + gₙ(x₁, ..., xₙ)u

where gₖ ≠ 0. Note: ẋ₁, ..., ẋₖ do not depend on xₖ₊₂, ..., xₙ.

Step 2: Applying the Back-Stepping Lemma on

    ẋ₁ = x₁² + x₂
    ẋ₂ = x₃
    ẋ₃ = u

gives

    u = u₂ = (dφ₂/dz)[f(z) + g(z)x₃] − (dV₂/dz) g(z) − (x₃ − φ₂(z))
      = (∂φ₂/∂x₁)(x₁² + x₂) + (∂φ₂/∂x₂)x₃ − ∂V₂/∂x₂ − (x₃ − φ₂(x₁, x₂))

which globally stabilizes the system.
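The recursion above can be carried out symbolically and then checked numerically. A sketch (variable names and the test initial condition are ours, not from the slides):

```python
import sympy as sp
import numpy as np

# Back-stepping for x1' = x1^2 + x2, x2' = x3, x3' = u, following the
# example: phi1 = -x1^2 - x1, V1 = x1^2/2.
x1, x2, x3 = sp.symbols('x1 x2 x3')
phi1 = -x1**2 - x1
V1 = x1**2/2
zeta1 = x2 - phi1
phi2 = sp.diff(phi1, x1)*(x1**2 + x2) - sp.diff(V1, x1) - zeta1
V2 = V1 + zeta1**2/2
zeta2 = x3 - phi2
u = (sp.diff(phi2, x1)*(x1**2 + x2) + sp.diff(phi2, x2)*x3
     - sp.diff(V2, x2) - zeta2)

# Numeric check that the closed loop is attracted to the origin:
uf = sp.lambdify((x1, x2, x3), u)
s, dt = np.array([0.5, -0.2, 0.1]), 1e-3
for _ in range(30000):
    s = s + dt*np.array([s[0]**2 + s[1], s[2], uf(*s)])
print(np.abs(s))   # all components small after 30 s
```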

EL2620 Nonlinear Control
Lecture 11

  Nonlinear controllability
  Gain scheduling

Today's Goal

You should be able to
  Determine if a nonlinear system is controllable
  Apply gain scheduling to simple examples

Controllability

Definition: The system

    ẋ = f(x, u)

is controllable if for any x₀, x₁ there exist T > 0 and
u : [0, T] → R such that x(0) = x₀ and x(T) = x₁.

Linear Systems

Lemma: ẋ = Ax + Bu is controllable if and only if

    Wn = [B  AB  ...  Aⁿ⁻¹B]

has full rank.

Is there a corresponding result for nonlinear systems?
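The rank test above can be sketched in a few lines; the double-integrator A, B below are an assumed example:

```python
import numpy as np

# Controllability matrix Wn = [B, AB, ..., A^{n-1}B] and its rank.
def ctrb_rank(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks))

A = np.array([[0., 1.], [0., 0.]])   # double integrator
B = np.array([[0.], [1.]])
print(ctrb_rank(A, B))   # 2 -> controllable
```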

Controllable Linearization

Lemma: Let

    ż = Az + Bu

be the linearization of

    ẋ = f(x) + g(x)u

at x = 0 with f(0) = 0. If the linear system is controllable, then
the nonlinear system is controllable in a neighborhood of the
origin.

Remark: Hence, if rank Wn = n then there is an ε > 0 such that for
every x₁ ∈ B_ε(0) there exists u : [0, T] → R so that x(T) = x₁.

A nonlinear system can be controllable even if the linearized
system is not controllable.

Car Example

Input: u₁ steering wheel velocity, u₂ forward velocity.
State: position (x, y), heading θ, steering angle φ.

    d/dt (x, y, θ, φ)ᵀ = g₁(z)u₁ + g₂(z)u₂

with

    g₁ = (0, 0, 0, 1)ᵀ,   g₂ = (cos(θ + φ), sin(θ + φ), sin(φ), 0)ᵀ

Linearization for u₁ = u₂ = 0 gives

    ż = Az + B₁u₁ + B₂u₂

with A = 0 and

    B₁ = (0, 0, 0, 1)ᵀ,   B₂ = (cos(θ₀ + φ₀), sin(θ₀ + φ₀), sin(φ₀), 0)ᵀ

rank Wn = rank [B  AB  ...  Aⁿ⁻¹B] = 2 < 4, so the linearization
is not controllable. Still the car is controllable!
Linearization does not capture the controllability well enough.

Lie Brackets

The Lie bracket between vector fields f, g : Rⁿ → Rⁿ is the vector
field defined by

    [f, g] = (∂g/∂x) f − (∂f/∂x) g

Example:

    f = (cos x₂, x₁)ᵀ,   g = (x₁, 1)ᵀ

    [f, g] = [1 0; 0 0](cos x₂, x₁)ᵀ − [0 −sin x₂; 1 0](x₁, 1)ᵀ
           = (cos x₂ + sin x₂, −x₁)ᵀ
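The bracket in the example above can be verified symbolically, e.g.:

```python
import sympy as sp

# Lie bracket [f,g] = (dg/dx) f - (df/dx) g for the slide's example.
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([sp.cos(x2), x1])
g = sp.Matrix([x1, 1])

bracket = sp.simplify(g.jacobian(x) * f - f.jacobian(x) * g)
print(bracket.T)   # [cos(x2) + sin(x2), -x1]
```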

Lie Bracket Direction

For the system

    ẋ = g₁(x)u₁ + g₂(x)u₂

the control

    (u₁, u₂) = (1, 0),   t ∈ [0, ε)
               (0, 1),   t ∈ [ε, 2ε)
               (−1, 0),  t ∈ [2ε, 3ε)
               (0, −1),  t ∈ [3ε, 4ε)

gives the motion

    x(4ε) = x(0) + ε²[g₁, g₂] + O(ε³)

The system can move in the [g₁, g₂] direction!

Proof

1. For t ∈ [0, ε], assuming ε small and x(0) = x₀, a Taylor series
yields

    x(ε) = x₀ + g₁(x₀)ε + ½ (dg₁/dx) g₁(x₀) ε² + O(ε³)         (1)

2. Similarly, for t ∈ [ε, 2ε],

    x(2ε) = x(ε) + g₂(x(ε))ε + ½ (dg₂/dx) g₂(x(ε)) ε²

and with x(ε) from (1), and g₂(x(ε)) = g₂(x₀) + (dg₂/dx) ε g₁(x₀),

    x(2ε) = x₀ + ε(g₁(x₀) + g₂(x₀))
          + ε²[½ (dg₁/dx) g₁(x₀) + (dg₂/dx) g₁(x₀) + ½ (dg₂/dx) g₂(x₀)]

3. Similarly, for t ∈ [2ε, 3ε],

    x(3ε) = x₀ + ε g₂ + ε²[(dg₂/dx) g₁ − (dg₁/dx) g₂ + ½ (dg₂/dx) g₂]

4. Finally, for t ∈ [3ε, 4ε],

    x(4ε) = x₀ + ε²[(dg₂/dx) g₁ − (dg₁/dx) g₂] + O(ε³)
          = x₀ + ε²[g₁, g₂] + O(ε³)

Car Example (Cont'd)

    g₃ := [g₁, g₂] = (∂g₂/∂x) g₁ − (∂g₁/∂x) g₂
        = (−sin(θ + φ), cos(θ + φ), cos(φ), 0)ᵀ
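The motion x(4ε) ≈ x₀ + ε²[g₁, g₂] can be checked numerically. A sketch using the unicycle vector fields g₁ = (cos θ, sin θ, 0)ᵀ, g₂ = (0, 0, 1)ᵀ as an assumed concrete instance:

```python
import numpy as np

def flow(x, u, eps, steps=1000):
    """Euler-integrate x' = g1*u1 + g2*u2 for a time eps."""
    dt = eps/steps
    for _ in range(steps):
        th = x[2]
        x = x + dt*(u[0]*np.array([np.cos(th), np.sin(th), 0.0])
                    + u[1]*np.array([0.0, 0.0, 1.0]))
    return x

eps = 0.05
x = np.zeros(3)
for u in [(1, 0), (0, 1), (-1, 0), (0, -1)]:   # the four-step sequence
    x = flow(x, u, eps)

# [g1,g2] at x0 = 0 is (sin 0, -cos 0, 0) = (0, -1, 0)
bracket = np.array([0.0, -1.0, 0.0])
print(x, eps**2 * bracket)   # agree up to O(eps^3)
```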

We can move the car in the g₃ direction ("wriggle") by applying the
control sequence

    (u₁, u₂) = {(1, 0), (0, 1), (−1, 0), (0, −1)}

The car can also move in the direction

    g₄ := [g₃, g₂] = (∂g₂/∂x) g₃ − (∂g₃/∂x) g₂ = ··· = (−sin θ, cos θ, 0, 0)ᵀ

The g₄ direction corresponds to sideways movement.

Parking Theorem

You can get out of any parking lot that is ε > 0 bigger than your
car by applying control corresponding to g₄, that is, by applying
the control sequence

    Wriggle, Drive, −Wriggle, −Drive

2 minute exercise: What does the direction [g₁, g₂] correspond to
for a linear system ẋ = g₁(x)u₁ + g₂(x)u₂ = B₁u₁ + B₂u₂?

The Lie Bracket Tree

    [g₁, g₂]
    [g₁, [g₁, g₂]]            [g₂, [g₁, g₂]]
    [g₁, [g₁, [g₁, g₂]]]  [g₂, [g₁, [g₁, g₂]]]
    [g₁, [g₂, [g₁, g₂]]]  [g₂, [g₂, [g₁, g₂]]]
    ...

Controllability Theorem

Theorem: The system

    ẋ = g₁(x)u₁ + g₂(x)u₂

is controllable if the Lie bracket tree (together with g₁ and g₂)
spans Rⁿ for all x.

Remark: The system can be steered in any direction of the Lie
bracket tree.

Example: Unicycle

    d/dt (x₁, x₂, θ)ᵀ = (cos θ, sin θ, 0)ᵀ u₁ + (0, 0, 1)ᵀ u₂

    g₁ = (cos θ, sin θ, 0)ᵀ,  g₂ = (0, 0, 1)ᵀ,
    [g₁, g₂] = (sin θ, −cos θ, 0)ᵀ

Controllable because {g₁, g₂, [g₁, g₂]} spans R³.

2 minute exercise:
  Show that {g₁, g₂, [g₁, g₂]} spans R³ for the unicycle.
  Is the linearization of the unicycle controllable?
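A symbolic check of the 2-minute exercise above:

```python
import sympy as sp

# {g1, g2, [g1,g2]} spans R^3 for the unicycle, for every theta.
x1, x2, th = sp.symbols('x1 x2 theta')
x = sp.Matrix([x1, x2, th])
g1 = sp.Matrix([sp.cos(th), sp.sin(th), 0])
g2 = sp.Matrix([0, 0, 1])
g12 = sp.simplify(g2.jacobian(x)*g1 - g1.jacobian(x)*g2)

M = sp.Matrix.hstack(g1, g2, g12)
print(g12.T, sp.simplify(M.det()))   # det = 1 for all theta -> full rank
```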

Example: Rolling Penny

    d/dt (x₁, x₂, θ, φ)ᵀ = (cos θ, sin θ, 0, 1)ᵀ u₁ + (0, 0, 1, 0)ᵀ u₂

Controllable because {g₁, g₂, [g₁, g₂], [g₂, [g₁, g₂]]} spans R⁴.

When is Feedback Linearization Possible?

Q: When can we transform ẋ = f(x) + g(x)u into ż = Az + bv by
means of feedback u = α(x) + β(x)v and a change of variables
z = T(x) (see previous lecture)?

A: The answer requires Lie brackets and further concepts from
differential geometry (see Khalil and the PhD course).

Gain Scheduling

Control parameters depend on the operating conditions:

    Command signal → Controller → Control signal → Process → Output
    Operating condition → Gain schedule → Controller parameters

Example: PID controller with K = K(θ), where θ is the scheduling
variable. Examples of scheduling variables are production rate,
machine speed, Mach number, flow rate.

Valve Characteristics

[figure: flow versus valve position for quick-opening, linear, and
equal-percentage valve characteristics]

Example: Nonlinear Valve

    uc → [PI] → u → [valve f(u)] → [G₀(s)] → y

[figure: valve characteristic f(u)]

Without gain scheduling:
[figure: step responses for setpoint steps uc = 0.2→0.3, 1.0→1.1,
and 5.0→5.1; the loop gain, and hence the damping, varies strongly
with the operating point]

With gain scheduling (compensating the valve with f⁻¹):
[figure: the corresponding step responses; similar behavior at all
three operating points]

Flight Control

Pitch dynamics: pitch rate q and normal acceleration Nz.

Operating conditions:
[figure: flight envelope, altitude (×1000 ft) versus Mach number]

The Pitch Control Channel

[figure: block diagram of the pitch control channel; pitch stick,
pitch rate, and acceleration signals are A/D-converted, filtered
(T₁s/(1+T₁s), T₂s/(1+T₂s), 1/(1+T₃s)) and weighted by gains
(K_SG, K_Q1, K_QD, K_NZ, K_DSE) scheduled on Mach number M,
airspeed VIAS, altitude H, and gear position, then sent via D/A to
the servos]

Today's Goal

You should be able to
  Determine if a nonlinear system is controllable
  Apply gain scheduling to simple examples

EL2620 Nonlinear Control
Lecture 12

Optimal control

Today's Goal

You should be able to
  Design controllers based on optimal control theory

Optimal Control Problems

Idea: formulate the control design problem as an optimization
problem

    min_{u(t)} J(x, u, t),   ẋ = f(t, x, u)

+ provides a systematic design framework
+ applicable to nonlinear problems
+ can deal with constraints
- difficult to formulate control objectives as a single objective
  function
- determining the optimal controller can be hard

Example: Boat in Stream

Sail as far as possible in the x₁ direction.
Speed of water v(x₂) with dv/dx₂ = 1.

    ẋ₁(t) = v(x₂) + u₁(t)
    ẋ₂(t) = u₂(t)
    x₁(0) = x₂(0) = 0

Rudder angle control: u(t) ∈ U = {(u₁, u₂) : u₁² + u₂² = 1}

    max_{u:[0,tf]→U}  x₁(tf)

Optimal Control Problem

Standard form:

    min_{u:[0,tf]→U}  ∫₀^{tf} L(x(t), u(t)) dt + φ(x(tf))

    ẋ(t) = f(x(t), u(t)),   x(0) = x₀

Remarks:
  U ⊂ Rᵐ is the set of admissible controls
  Optimization over functions u : [0, tf] → U
    (an infinite-dimensional optimization problem)
  Constraints on x from the dynamics
  Final time tf fixed (free later)

Example: Resource Allocation

    x(t) ∈ [0, ∞)       production rate
    u(t) ∈ [0, 1]       portion of x reinvested
    1 − u(t)            portion of x stored
    γu(t)x(t)           change of production rate (γ > 0)
    [1 − u(t)]x(t)      amount of stored profit

Maximization of the stored profit:

    max_{u:[0,tf]→[0,1]}  ∫₀^{tf} [1 − u(t)]x(t) dt

    ẋ(t) = γu(t)x(t),   x(0) = x₀ > 0

Example: Minimal Curve Length

Find the curve with minimal length between a given point and a line.

Curve: (t, x(t)) with x(0) = a
Line: vertical through (tf, 0)

    min_{u:[0,tf]→R}  ∫₀^{tf} √(1 + u²(t)) dt

    ẋ(t) = u(t),   x(0) = a

Pontryagin's Maximum Principle

Theorem: Introduce the Hamiltonian function

    H(x, u, λ) = L(x, u) + λᵀ f(x, u)

Suppose the optimal control problem above has the solution
u* : [0, tf] → U and x* : [0, tf] → Rⁿ. Then,

    min_{u∈U} H(x*(t), u, λ(t)) = H(x*(t), u*(t), λ(t)),  t ∈ [0, tf]

where λ(t) solves the adjoint equation

    λ̇ᵀ(t) = −(∂H/∂x)(x*(t), u*(t), λ(t)),   λᵀ(tf) = (∂φ/∂x)(x*(tf))

Moreover, the optimal control is given by

    u*(t) = arg min_{u∈U} H(x*(t), u, λ(t))

Remarks

  See a textbook, e.g., Glad and Ljung, for the proof. The outline
  is simply to note that every change of u(t) from the optimal
  u*(t) must increase the criterion, and then to perform a clever
  Taylor expansion.

  Pontryagin's Maximum Principle provides necessary conditions:
  there may exist many or no solutions
  (cf., min_{u:[0,1]→R} x(1), ẋ = u, x(0) = 0).
  The Maximum Principle provides all possible candidates.

  The solution involves 2n ODEs with boundary conditions x(0) = x₀
  and λ(tf) = ∂φᵀ/∂x(x*(tf)). Often hard to solve explicitly.

  "Maximum" is due to Pontryagin's original formulation.

Example: Boat in Stream (cont'd)

Optimal control

    u*(t) = arg min_{u₁²+u₂²=1} λ₁(t)(v(x₂(t)) + u₁) + λ₂(t)u₂
          = arg min_{u₁²+u₂²=1} λ₁(t)u₁ + λ₂(t)u₂

Hence,

    u₁*(t) = −λ₁(t)/√(λ₁²(t) + λ₂²(t)),
    u₂*(t) = −λ₂(t)/√(λ₁²(t) + λ₂²(t))

or

    u₁*(t) = 1/√(1 + (t − tf)²),
    u₂*(t) = (tf − t)/√(1 + (t − tf)²)

Example: Resource Allocation (cont'd)

    min_{u:[0,tf]→[0,1]}  ∫₀^{tf} [u(t) − 1]x(t) dt

    ẋ(t) = γu(t)x(t),   x(0) = x₀

The Hamiltonian satisfies

    H = L + λf = (u − 1)x + γλux

Adjoint equation:

    λ̇(t) = 1 − u*(t) − γλ(t)u*(t),   λ(tf) = 0

Example: Boat in Stream (cont'd)

The Hamiltonian satisfies (L ≡ 0, φ(x) = −x₁)

    H = λᵀf = λ₁(v(x₂) + u₁) + λ₂u₂

Adjoint equations

    λ̇₁(t) = 0,         λ₁(tf) = −1
    λ̇₂(t) = −λ₁(t),    λ₂(tf) = 0

have the solution

    λ₁(t) = −1,   λ₂(t) = t − tf
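For the resource-allocation example the maximum principle gives a single switch from u = 1 to u = 0 at ts = tf − 1/γ. This can be checked by brute force over switching times; a sketch with the assumed numbers γ = 1, tf = 3, x₀ = 1:

```python
import numpy as np

# Profit of the policy u = 1 for t < ts, u = 0 afterwards:
#   x' = gamma*u*x,  profit = integral of (1-u)*x.
def profit(ts, gamma=1.0, tf=3.0, x0=1.0, dt=1e-3):
    x, J, t = x0, 0.0, 0.0
    while t < tf:
        u = 1.0 if t < ts else 0.0
        J += dt * (1 - u) * x
        x += dt * gamma * u * x
        t += dt
    return J

best = max(np.arange(0, 3.01, 0.05), key=profit)
print(best)   # close to tf - 1/gamma = 2
```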

5 minute exercise: Find the curve with minimal length by solving

    min_{u:[0,tf]→R}  ∫₀^{tf} √(1 + u²(t)) dt,
    ẋ(t) = u(t),  x(0) = a

Example: Resource Allocation (cont'd)

Optimal control

    u*(t) = arg min_{u∈[0,1]} (u − 1)x*(t) + γλ(t)u x*(t)
          = arg min_{u∈[0,1]} u(1 + γλ(t))       (x*(t) > 0)
          = 0,  λ(t) ≥ −1/γ
            1,  λ(t) < −1/γ

For t close to tf, we have u*(t) = 0 (why?) and thus λ̇(t) = 1.
For t < tf − 1/γ, we have u*(t) = 1 and thus λ̇(t) = −γλ(t).

    u*(t) = 1,  t ∈ [0, tf − 1/γ]
            0,  t ∈ (tf − 1/γ, tf]

[figure: λ(t) and u*(t) as functions of time]

It is optimal to reinvest in the beginning.

5 minute exercise II: Solve the optimal control problem

    min  ∫₀¹ u⁴ dt + x(1),   ẋ = x + u,  x(0) = 0

History: Calculus of Variations

Brachistochrone (shortest time) problem (1696): Find the
(frictionless) curve that takes a particle from A to B in the
shortest time.

Minimize

    J(y) = ∫ dt = ∫ ds/v = ∫ √(dx² + dy²)/v
         = ∫ √(1 + y′(x)²)/√(2gy(x)) dx

Solved by John and James Bernoulli, Newton, l'Hospital.

Find the curve enclosing the largest area (Euler).

Goddard's Rocket Problem (1910)

How to send a rocket as high up in the air as possible?

    d/dt (v, h, m)ᵀ = ((u − D(v, h))/m − g,  v,  −γu)ᵀ

u motor force, D = D(v, h) air resistance,
(v(0), h(0), m(0)) = (0, 0, m₀),  g, γ > 0

Constraints: 0 ≤ u ≤ umax and m(tf) = m₁ (empty)
Optimization criterion: max_u h(tf)

History: Optimal Control

The space race (Sputnik, 1957)
Pontryagin's Maximum Principle (1956)
Bellman's Dynamic Programming (1957)

Huge influence on engineering and other sciences:
  Robotics: trajectory generation
  Aeronautics: satellite orbits
  Physics: Snell's law, conservation laws
  Finance: portfolio theory

Generalized Form

    min_{u:[0,tf]→U}  ∫₀^{tf} L(x(t), u(t)) dt + φ(tf, x(tf))

    ẋ(t) = f(x(t), u(t)),   x(0) = x₀
    ψ(tf, x(tf)) = 0

Note the differences compared to the standard form:
  The end time tf is free
  The final state is constrained: ψ(x(tf)) = x₃(tf) − m₁ = 0

Goddard's problem is on generalized form with

    x = (v, h, m)ᵀ,  L ≡ 0,  φ(x) = −x₂,  ψ(x) = x₃ − m₁

Solution to Goddard's Problem

D(v, h) ≡ 0:
  Easy: let u(t) = umax until m(t) = m₁.
  Burn fuel as fast as possible, because it costs energy to lift it.

D(v, h) ≢ 0:
  Hard: e.g., it can be optimal to have low speed when the air
  resistance is high, in order to burn fuel at a higher altitude.
  It took 50 years before a complete solution was presented.

Remarks:
  tf may be a free variable; the condition on H(x*(tf), u*(tf),
  λ(tf), n₀) in the theorem below then determines tf (with fixed tf
  that condition is not imposed)
  ψ defines the end point constraints

General Pontryagin's Maximum Principle

Theorem: Suppose u* : [0, tf] → U and x* : [0, tf] → Rⁿ are
solutions to

    min_{u:[0,tf]→U}  ∫₀^{tf} L(x(t), u(t)) dt + φ(tf, x(tf))

    ẋ(t) = f(x(t), u(t)),   x(0) = x₀
    ψ(tf, x(tf)) = 0

Then, there exist n₀ ≥ 0 and μ ∈ Rⁿ such that (n₀, μᵀ) ≠ 0 and

    min_{u∈U} H(x*(t), u, λ(t), n₀) = H(x*(t), u*(t), λ(t), n₀),
    t ∈ [0, tf]

where

    H(x, u, λ, n₀) = n₀L(x, u) + λᵀf(x, u)

    λ̇ᵀ(t) = −(∂H/∂x)(x*(t), u*(t), λ(t), n₀)

    λᵀ(tf) = n₀ (∂φ/∂x)(tf, x*(tf)) + μᵀ (∂ψ/∂x)(tf, x*(tf))

    H(x*(tf), u*(tf), λ(tf), n₀)
        = −n₀ (∂φ/∂t)(tf, x*(tf)) − μᵀ (∂ψ/∂t)(tf, x*(tf))

Example: Minimum Time Control

Bring the states of the double integrator to the origin as fast as
possible:

    min_{u:[0,tf]→[−1,1]}  ∫₀^{tf} 1 dt = tf

    ẋ₁(t) = x₂(t),   ẋ₂(t) = u(t)
    ψ(x(tf)) = (x₁(tf), x₂(tf))ᵀ = (0, 0)ᵀ

The optimal control is the bang-bang control

    u*(t) = arg min_{u∈[−1,1]} 1 + λ₁(t)x₂(t) + λ₂(t)u
          = +1,  λ₂(t) < 0
            −1,  λ₂(t) ≥ 0

Adjoint equations

    λ̇₁(t) = 0,   λ̇₂(t) = −λ₁(t)

give

    λ₁(t) = c₁,   λ₂(t) = c₂ − c₁t

so λ₂ changes sign at most once, and u* switches at most once.
With u(t) = ±1 we have

    x₂(t) = x₂(0) ± t
    x₁(t) = x₁(0) + x₂(0)t ± t²/2

Eliminating t gives the curves

    x₁(t) ∓ x₂(t)²/2 = const

These define the switch curve, where the optimal control switches.

Linear Quadratic Control

    min_{u:[0,∞)→Rᵐ}  ∫₀^∞ (xᵀQx + uᵀRu) dt

with ẋ = Ax + Bu has the optimal solution

    u = −Lx

where L = R⁻¹BᵀS and S > 0 is the solution to

    SA + AᵀS + Q − SBR⁻¹BᵀS = 0
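The Riccati equation above can be solved numerically, e.g. with SciPy. The double-integrator plant and the weights Q = I, R = 1 are assumed choices for illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQ state feedback u = -Lx with L = R^{-1} B^T S,
# S solving  SA + A^T S + Q - S B R^{-1} B^T S = 0.
A = np.array([[0., 1.], [0., 0.]])   # double integrator
B = np.array([[0.], [1.]])
Q = np.eye(2)
R = np.array([[1.]])

S = solve_continuous_are(A, B, Q, R)
L = np.linalg.solve(R, B.T @ S)
eig = np.linalg.eigvals(A - B @ L)
print(L, eig.real)   # closed loop stable: all Re(eig) < 0
```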

Properties of LQ Control

  Stabilizing
  Phase margin ≥ 60 degrees
  Closed-loop system stable with u = −α(t)Lx for
  α(t) ∈ [1/2, ∞) (infinite gain margin)

If x is not measurable, then one may use a Kalman filter; this
leads to linear quadratic Gaussian (LQG) control.
But then the system may have arbitrarily poor robustness!
(Doyle, 1978)

Reference Generation using Optimal Control

The optimal control problem makes no distinction between open-loop
control u*(t) and closed-loop control u*(t, x).

We may use the optimal open-loop solution u*(t) as the reference
value to a linear regulator, which keeps the system close to the
wanted trajectory.

An efficient design method for nonlinear problems.

Pros & Cons of Optimal Control

+ Systematic design procedure
+ Applicable to nonlinear control problems
+ Captures limitations (as optimization constraints)
- Hard to find suitable criteria
- Hard to solve the equations that give the optimal controller

Tetra Pak Milk Race

Move milk in minimum time without spilling.
[Grundelius & Bernhardsson, 1999]

[figure: acceleration and slosh profiles for the optimal move]

Given the dynamics of the system and maximum slosh = 0.63, solve

    min_{u:[0,tf]→[−10,10]}  ∫₀^{tf} 1 dt

where u is the acceleration.

Optimal time = 375 ms, TetraPak = 540 ms.

SF2852 Optimal Control Theory

Period 3, 7.5 credits
Optimization and Systems Theory
http://www.math.kth.se/optsyst/

  Dynamic Programming: discrete & continuous; principle of
  optimality; Hamilton-Jacobi-Bellman equation
  Pontryagin's Maximum Principle: main results; special cases such
  as time optimal control and LQ control
  Numerical Methods: numerical solution of optimal control problems
  Applications: aeronautics, robotics, process control,
  bioengineering, economics, logistics

Today's Goal

You should be able to
  Design controllers based on optimal control theory for
    Standard form
    Generalized form
  Understand possibilities and limitations of optimal control

EL2620 Nonlinear Control
Lecture 13

  Fuzzy logic and fuzzy control
  Artificial neural networks

Some slides copied from K.-E. Årzén and M. Johansson

Today's Goal

You should
  understand the basics of fuzzy logic and fuzzy controllers
  understand simple neural networks

Fuzzy Control

Many plants are manually controlled by experienced operators.
Transferring process knowledge to a control algorithm is difficult.

Idea:
  Model the operator's control actions (instead of the plant)
  Implement as rules (instead of as differential equations)

Example of a rule:

  IF Speed is High AND Traffic is Heavy
  THEN Reduce Gas A Bit

Model Controller Instead of Plant

Conventional control design:
  Model plant P → Analyze feedback → Synthesize controller C →
  Implement control algorithm

Fuzzy control design:
  Model manual control → Implement control rules

Fuzzy Set Theory

Specify how well an object satisfies a (vague) description

Conventional set theory: x ∈ A or x ∉ A
Fuzzy set theory: x ∈ A to a certain degree μ_A(x)

A fuzzy set is defined as (A, μ_A)
Membership function: μ_A maps into [0, 1] and expresses the degree to which x belongs to A

[Figure: membership functions for Cold and Warm over the temperature range 10 to 25]

Fuzzy Logic

Mimic human linguistic (approximate) reasoning [Zadeh, 1965]

How to calculate with fuzzy sets (A, μ_A)? Define logic calculations such as X AND Y, X OR Z.

Conventional logic:
  AND: A ∩ B
  OR: A ∪ B
  NOT: ¬A

Fuzzy logic:
  AND: μ_{A∩B}(x) = min(μ_A(x), μ_B(x))
  OR: μ_{A∪B}(x) = max(μ_A(x), μ_B(x))
  NOT: μ_{¬A}(x) = 1 − μ_A(x)

Example

Q1: Is it cold AND warm?
Q2: Is it cold OR warm?

[Figure: membership functions of C ∩ W and C ∪ W over the range 10 to 25]

Example

Q1: Is the temperature x = 15 cold?
A1: It is quite cold, since μ_C(15) = 2/3.

Q2: Is x = 15 warm?
A2: It is not really warm, since μ_W(15) = 1/3.
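The example values μ_C(15) = 2/3 and μ_W(15) = 1/3 are consistent with ramp-shaped membership functions over the range 10 to 25; a minimal sketch (the linear shape is an assumption read off the slide figure):

```python
# Ramp-shaped membership functions reproducing mu_C(15) = 2/3, mu_W(15) = 1/3.
def mu_cold(x):
    # Fully cold at 10, not cold at all at 25.
    return min(1.0, max(0.0, (25.0 - x) / 15.0))

def mu_warm(x):
    # Complementary ramp: not warm at 10, fully warm at 25.
    return min(1.0, max(0.0, (x - 10.0) / 15.0))

x = 15.0
print(mu_cold(x), mu_warm(x))           # 2/3 and 1/3
print(min(mu_cold(x), mu_warm(x)))      # fuzzy AND: cold AND warm = 1/3
print(max(mu_cold(x), mu_warm(x)))      # fuzzy OR:  cold OR warm = 2/3
```

The min/max calls are exactly the fuzzy AND/OR operations from the previous slide.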

Fuzzy Control System

[Block diagram: r → Fuzzy Controller → u → Plant → y]

r, y, u : [0, ∞) → R are conventional signals

The fuzzy controller is a nonlinear mapping from y (and r) to u

Fuzzifier

Fuzzy set evaluation of input y

Example: y = 15 gives μ_C(15) = 2/3 and μ_W(15) = 1/3

Fuzzy Controller

[Block diagram: y → Fuzzifier → Fuzzy Inference → Defuzzifier → u]

Fuzzifier: Fuzzy set evaluation of y (and r)
Fuzzy Inference: Fuzzy set calculations
Defuzzifier: Map fuzzy set to u

Fuzzifier and defuzzifier act as interfaces to the crisp signals

Fuzzy Inference

Examples of fuzzy rules:
  Rule 1: IF y is Cold THEN u is High
  Rule 2: IF y is Warm THEN u is Low

Fuzzy Inference:
1. Calculate degree of fulfillment for each rule
2. Calculate fuzzy output of each rule
3. Aggregate rule outputs

[Figures: the three inference steps illustrated for the example rules]

1. Calculate degree of fulfillment for each rule
2. Calculate fuzzy output of each rule
3. Aggregate rule outputs

Defuzzifier

Note that μ ("mu") is standard fuzzy-logic nomenclature for truth value.

[Figure: defuzzification of the aggregated output set]

Example: Fuzzy Control of Steam Engine

http://isc.faqs.org/docs/air/ttfuzzy.html

Fuzzy Controller: Summary

Fuzzifier: Fuzzy set evaluation of y (and r)

Fuzzy Inference: Fuzzy set calculations
  1. Calculate degree of fulfillment for each rule
  2. Calculate fuzzy output of each rule
  3. Aggregate rule outputs

Defuzzifier: Map fuzzy set to u
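The whole pipeline can be sketched end-to-end for the two example rules. The output sets Low/High on u ∈ [0, 1] and the centroid (center-of-gravity) defuzzifier are illustrative assumptions, since the slides show them only graphically:

```python
# Minimal Mamdani-style fuzzy controller:
#   Rule 1: IF y is Cold THEN u is High
#   Rule 2: IF y is Warm THEN u is Low
def mu_cold(y):  return min(1.0, max(0.0, (25.0 - y) / 15.0))
def mu_warm(y):  return min(1.0, max(0.0, (y - 10.0) / 15.0))
def mu_high(u):  return min(1.0, max(0.0, u))        # output set on u in [0, 1]
def mu_low(u):   return min(1.0, max(0.0, 1.0 - u))

def fuzzy_controller(y, n=1000):
    w1, w2 = mu_cold(y), mu_warm(y)                  # 1. degree of fulfillment
    num = den = 0.0
    for k in range(n + 1):
        u = k / n
        # 2.-3. clip each rule's output set (min) and aggregate them (max)
        agg = max(min(w1, mu_high(u)), min(w2, mu_low(u)))
        num += u * agg
        den += agg
    return num / den                                 # centroid defuzzification

u = fuzzy_controller(15.0)   # quite cold, so u ends up closer to High than Low
```

For y = 15 the controller gives u ≈ 0.58, i.e. the crisp output interpolates between the two rules according to their degrees of fulfillment.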

Rule-Based View of Fuzzy Control

[Figure]

Nonlinear View of Fuzzy Control

[Figure]

Pros and Cons of Fuzzy Control

Advantages
  User-friendly way to design nonlinear controllers
  Explicit representation of operator (process) knowledge
  Intuitive for non-experts in conventional control

Disadvantages
  Limited analysis and synthesis
  Sometimes hard to combine with classical control
  Not obvious how to include dynamics in controller

Fuzzy control is a way to obtain a class of nonlinear controllers

Neurons

[Figure: brain neuron and artificial neuron with inputs x1, . . . , xn, weights w1, . . . , wn and nonlinearity σ(·)]

Model of a Neuron

Inputs: x1, x2, . . . , xn
Weights: w1, w2, . . . , wn
Bias: b
Nonlinearity: σ(·)
Output: y

  y = σ( b + Σ_{i=1}^{n} w_i x_i )
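The neuron model transcribes directly into code; the logistic sigmoid is an assumed choice for σ(·), which the slides leave unspecified:

```python
import math

# One artificial neuron: y = sigma(b + sum_i w_i * x_i).
def sigma(s):
    # Assumed nonlinearity: the logistic (sigmoid) function.
    return 1.0 / (1.0 + math.exp(-s))

def neuron(x, w, b):
    return sigma(b + sum(wi * xi for wi, xi in zip(w, x)))

y = neuron([1.0, -2.0], [0.5, 0.25], 0.1)   # sigma(0.1 + 0.5 - 0.5) = sigma(0.1)
```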

Neural Networks

How does the brain work?

A network of computing components (neurons)

A Simple Neural Network

Neural network consisting of six neurons:

[Figure: inputs u1, u2, u3 → Input Layer → Hidden Layer → Output Layer]

Represents a nonlinear mapping from inputs to outputs

Success Stories

Fuzzy controls:
  Zadeh (1965)
  Complex problems but with possible linguistic controls
  Applications took off in mid 70s
  Cement kilns, washing machines, vacuum cleaners

Artificial neural networks:
  McCulloch & Pitts (1943), Minsky (1951)
  Complex problems with unknown and highly nonlinear structure
  Applications took off in mid 80s
  Pattern recognition (e.g., speech, vision), data classification
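The forward pass of such a network is just the neuron model applied layer by layer. The sigmoid choice and the placeholder weights are assumptions; in practice the weights would be learned:

```python
import math

# Forward pass of a small feedforward net like the six-neuron example:
# 3 inputs -> hidden layer (3 neurons) -> output layer (3 neurons).
def sigma(s):
    return 1.0 / (1.0 + math.exp(-s))

def layer(x, W, b):
    # Each neuron j computes sigma(b[j] + sum_i W[j][i] * x[i]).
    return [sigma(bj + sum(wji * xi for wji, xi in zip(Wj, x)))
            for Wj, bj in zip(W, b)]

def network(u, W1, b1, W2, b2):
    # Hidden layer followed by output layer: a nonlinear map from u to y.
    return layer(layer(u, W1, b1), W2, b2)
```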

Neural Network Design

1. How many hidden layers?
2. How many neurons in each layer?
3. How to choose the weights?

The choice of weights is often done adaptively through learning

Today's Goal

You should
  understand the basics of fuzzy logic and fuzzy controllers
  understand simple neural networks

Next Lecture

  EL2620 Nonlinear Control revisited
  Spring courses in control
  Master thesis projects
  PhD thesis projects

EL2620 Nonlinear Control

Lecture 14

  Summary and repetition
  Spring courses in control
  Master thesis projects

Question 1

What's on the exam?

Question 2

What design method should I use in practice?

The answer is highly problem dependent. Possible (learning) approach:

Start with the simplest:
  linear methods (loop shaping, state feedback, . . . )

Evaluate:
  strong nonlinearities (under feedback!)?
  varying operating conditions?
  analyze and simulate with nonlinear model

Some nonlinearities to compensate for?
  saturations, valves etc

Is the system generically nonlinear? E.g., ẋ = xu
  Nonlinear controllability
  Optimal control

Exam

Regular written exam (in English) with five problems

Sign up on course homepage

You may bring lecture notes, Glad & Ljung Reglerteknik, and TEFYMA or BETA. (No other material: textbooks, exercises, calculators etc. Any other basic control book must be approved by me before the exam.)

See homepage for old exams

Nonlinear models: equilibria, phase portraits, linearization and stability
Lyapunov stability (local and global), LaSalle
Circle Criterion, Small Gain Theorem, Passivity Theorem
Compensating static nonlinearities
Describing functions
Sliding modes, equivalent controls
Lyapunov based design: back-stepping
Exact feedback linearization, input-output linearization, zero dynamics

Question 3

Can you review the circle criterion? What about k1 < 0 < k2?

The Circle Criterion

Theorem Consider a feedback loop with y = Gu and u = −f(y). Assume G(s) is stable and that

  k1 ≤ f(y)/y ≤ k2

If the Nyquist curve of G(iω) stays on the correct side of the circle defined by the points −1/k1 and −1/k2, then the closed-loop system is BIBO stable.

[Figure: sector condition k1 y ≤ f(y) ≤ k2 y and the Nyquist curve of G(iω)]

The different cases (stable system G):

1. 0 < k1 < k2: Stay outside the circle
2. 0 = k1 < k2: Stay to the right of the line Re s = −1/k2
3. k1 < 0 < k2: Stay inside the circle

Other cases: Multiply f and G by −1. Only Cases 1 and 2 studied in lectures. Only G stable studied.

Question 4

Can a system be proven stable with the Small Gain Theorem and unstable with the Circle Criterion?

No. The Small Gain Theorem, Passivity Theorem and Circle Criterion all provide only sufficient conditions for stability. But, if one method does not prove stability, another one may. Since they do not provide necessary conditions for stability, none of them can be used to prove instability.
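Case 2 (0 = k1 < k2) is easy to check numerically: find the leftmost point of the Nyquist curve and read off the largest admissible sector bound. The transfer function below is my own example, not from the slides:

```python
import numpy as np

# Circle criterion, case 0 = k1 < k2: the Nyquist curve of G(iw) must stay to
# the right of the vertical line Re s = -1/k2. Example: G(s) = 1/(s+1)^3.
w = np.logspace(-2, 2, 200000)
G = 1.0 / (1j * w + 1.0) ** 3
min_re = G.real.min()          # leftmost point of the Nyquist curve (-0.25)
k2_max = -1.0 / min_re         # largest sector bound certified, here ~4
print(k2_max)
```

Note the conservatism: the Nyquist curve crosses the negative real axis at −1/8, so a constant gain is stable up to 8, but the circle criterion only certifies the whole sector [0, 4], since it must cover every nonlinearity in the sector.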

Question 5

Please repeat antiwindup.

Antiwindup: General State-Space Model

[Block diagram: controller state xc, integrator s⁻¹, blocks F − KC and G − KD, saturation sat on the controller output]

The controller ẋc = F xc + G y, v = C xc + D y is realized with the saturation error fed back: ẋc = (F − KC)xc + (G − KD)y + K sat(v).

Choose K such that F − KC has stable eigenvalues.

Tracking PID

[Block diagrams (a) and (b): PID controller with terms K Td s, K/Ti and integrator 1/s; the saturation error es between the actuator (or actuator model) output and the controller output is fed back to the integrator through the tracking gain 1/Tt]

Question 6

Please repeat Lyapunov theory.
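A discrete-time PI controller with tracking antiwindup can be sketched in a few lines; the tuning values are illustrative assumptions:

```python
# PI controller with tracking antiwindup. The saturation error (u - v)/Tt
# pulls the integrator back when the actuator saturates, preventing windup.
K, Ti, Tt, h = 2.0, 1.0, 0.5, 0.01     # gain, integral time, tracking time, step
umin, umax = -1.0, 1.0
I = 0.0

def pi_antiwindup(e):
    global I
    v = K * e + I                          # unsaturated controller output
    u = min(max(v, umin), umax)            # actuator saturation
    I += h * (K / Ti * e + (u - v) / Tt)   # integrator with tracking term
    return u

for _ in range(2000):                      # large constant error: actuator saturates
    pi_antiwindup(10.0)
# Without the tracking term, I would wind up to ~400 here; with it, I settles
# at a bounded value (about -9 for these numbers).
```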

Stability Definitions

An equilibrium point x = 0 of ẋ = f(x) is

  locally stable, if for every R > 0 there exists r > 0, such that
    ‖x(0)‖ < r  ⇒  ‖x(t)‖ < R, t ≥ 0

  locally asymptotically stable, if locally stable and
    ‖x(0)‖ < r  ⇒  lim_{t→∞} x(t) = 0

  globally asymptotically stable, if asymptotically stable for all x(0) ∈ Rⁿ.

Lyapunov Theorem for Local Stability

Theorem Let ẋ = f(x), f(0) = 0, and 0 ∈ Ω ⊂ Rⁿ. Assume that V : Ω → R is a C¹ function. If

  V(0) = 0
  V(x) > 0, for all x ∈ Ω, x ≠ 0
  V̇(x) ≤ 0 along all trajectories in Ω

then x = 0 is locally stable. Furthermore, if

  V̇(x) < 0 for all x ∈ Ω, x ≠ 0

then x = 0 is locally asymptotically stable.

Lyapunov Theorem for Global Stability

Theorem Let ẋ = f(x) and f(0) = 0. Assume that V : Rⁿ → R is a C¹ function. If

  V(0) = 0
  V(x) > 0, for all x ≠ 0
  V̇(x) < 0 for all x ≠ 0
  V(x) → ∞ as ‖x‖ → ∞

then x = 0 is globally asymptotically stable.

LaSalle's Theorem for Global Asymptotic Stability

Theorem: Let ẋ = f(x) and f(0) = 0. If there exists a C¹ function V : Rⁿ → R such that

  (1) V(0) = 0
  (2) V(x) > 0 for all x ≠ 0
  (3) V̇(x) ≤ 0 for all x
  (4) V(x) → ∞ as ‖x‖ → ∞
  (5) The only solution of ẋ = f(x) such that V̇(x(t)) = 0 for all t is x(t) = 0 for all t

then x = 0 is globally asymptotically stable.

Example: Pendulum with friction

  ẋ1 = x2,   ẋ2 = −(g/l) sin x1 − (k/m) x2

  V(x) = (g/l)(1 − cos x1) + (1/2) x2²   ⇒   V̇ = −(k/m) x2²
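The claim V̇ = −(k/m)x2² ≤ 0 can be verified along a simulated trajectory; g/l = k/m = 1 and the initial condition are illustrative choices:

```python
import math

# Check numerically that V = (g/l)(1 - cos x1) + x2^2/2 never increases along
# trajectories of the damped pendulum.
gl, km, h = 1.0, 1.0, 1e-3

def V(x1, x2):
    return gl * (1.0 - math.cos(x1)) + 0.5 * x2 ** 2

x1, x2 = 2.0, 0.0                  # start inside a sublevel set {V <= c}, c < 2
v_prev = V(x1, x2)
max_increase = 0.0
for _ in range(20000):             # 20 s of Euler integration
    x1, x2 = x1 + h * x2, x2 + h * (-gl * math.sin(x1) - km * x2)
    max_increase = max(max_increase, V(x1, x2) - v_prev)
    v_prev = V(x1, x2)
# max_increase stays at integration-error level; the state ends near (0, 0)
```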

We can not prove global asymptotic stability; why?

The set E = {(x1, x2) | V̇ = 0} is E = {(x1, x2) | x2 = 0}.

The invariant points in E are given by ẋ1 = x2 = 0 and ẋ2 = 0. Thus, the largest invariant set in E is

  M = {(x1, x2) | x1 = kπ, x2 = 0}

The domain is compact if we consider Ω = {(x1, x2) ∈ R² | V(x) ≤ c}.

LaSalle's Invariant Set Theorem

Theorem Let Ω ⊂ Rⁿ be a bounded and closed set that is invariant with respect to ẋ = f(x). Let V : Rⁿ → R be a C¹ function such that V̇(x) ≤ 0 for x ∈ Ω. Let E be the set of points in Ω where V̇(x) = 0. If M is the largest invariant set in E, then every solution with x(0) ∈ Ω approaches M as t → ∞.

Remark: a compact set (bounded and closed) is obtained if we e.g. consider Ω = {x ∈ Rⁿ | V(x) ≤ c} and V is a positive definite function.

If we e.g. consider Ω : x1² + x2² ≤ 1, then M = {(x1, x2) | x1 = 0, x2 = 0} and we have proven asymptotic stability of the origin.

Relation to Poincaré-Bendixson Theorem

Poincaré-Bendixson Any orbit of a continuous 2nd order system that stays in a compact region of the phase plane approaches its ω-limit set, which is either a fixed point, a periodic orbit, or several fixed points connected through homoclinic or heteroclinic orbits.

In particular, if the compact region does not contain any fixed point then the ω-limit set is a limit cycle.

Question 7

Please repeat the most important facts about sliding modes.

There are 3 essential parts you need to understand:
1. The sliding manifold
2. The sliding control
3. The equivalent control

Step 1. The Sliding Manifold S

Aim: we want to stabilize the equilibrium of the dynamic system

  ẋ = f(x) + g(x)u,  x ∈ Rⁿ, u ∈ R¹

Idea: use u to force the system onto a sliding manifold S of dimension n − 1 in finite time, and make S invariant

  S = {x ∈ Rⁿ | σ(x) = 0}

If x ∈ R², then S is a curve in the state-plane (phase plane).

Example

  ẋ1 = x2(t)
  ẋ2 = x1(t)x2(t) + u(t)

Choose S for desired behavior, e.g.,

  σ(x) = a x1 + x2 = 0

On S: ẋ1 = −a x1(t). Choose large a: fast convergence along the sliding manifold.

Step 2. The Sliding Controller

Use Lyapunov ideas to design u(x) such that S is an attracting invariant set.

For the 2nd order system ẋ1 = x2, ẋ2 = f(x) + g(x)u and σ = x1 + x2, the Lyapunov function V(x) = 0.5 σ² yields

  V̇ = σ(x2 + f(x) + g(x)u) < 0

with

  u = −( f(x) + x2 + sgn(σ) ) / g(x)

Example: f(x) = x1x2, g(x) = 1, σ = x1 + x2 yields

  u = −x1x2 − x2 − sgn(x1 + x2)

Step 3. The Equivalent Control

When the trajectory reaches the sliding mode, i.e., x ∈ S, then u will chatter (high frequency switching).

However, an equivalent control ueq(t) that keeps x(t) on S can be computed from σ̇ = 0 when σ = 0.

Example:

  σ̇ = ẋ1 + ẋ2 = x2 + x1x2 + ueq = 0   ⇒   ueq = −x2 − x1x2

Thus, the sliding controller will take the system to the sliding manifold S in finite time, and the equivalent control will keep it on S.

Question 8

Can you repeat backstepping?
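Simulating the example confirms finite-time reaching of σ = 0 followed by sliding towards the origin; the Euler step size and initial condition are assumptions:

```python
import math

# Sliding-mode example: sigma = x1 + x2 (a = 1), u = -x1*x2 - x2 - sgn(sigma).
h = 1e-4
x1, x2 = 1.0, 1.0                    # sigma(0) = 2: off the manifold
for _ in range(200000):              # 20 s of Euler integration
    s = x1 + x2
    u = -x1 * x2 - x2 - math.copysign(1.0, s)
    x1, x2 = x1 + h * x2, x2 + h * (x1 * x2 + u)
# Since d(sigma)/dt = -sgn(sigma), the manifold is reached after |sigma(0)| = 2 s;
# on the manifold x1' = -x1, so both x1 and sigma end near zero (sigma chatters
# at the integration-step level).
```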

Note!

In previous years it has often been assumed that the sliding mode control is always on the form

  u = −sgn(σ)

This is OK, but is not completely general (see example).

Backstepping Design

We are concerned with finding a stabilizing control u(x) for the system

  ẋ = f(x, u)

General Lyapunov control design: determine a Control Lyapunov function V(x, u) and determine u(x) so that

  V(x) > 0,  V̇(x) < 0  for all x ∈ Rⁿ

In this course we only consider f(x, u) with a special structure, namely strict feedback structure.

Strict Feedback Systems

  ẋ1 = f1(x1) + g1(x1)x2
  ẋ2 = f2(x1, x2) + g2(x1, x2)x3
  ẋ3 = f3(x1, x2, x3) + g3(x1, x2, x3)x4
  ...
  ẋn = fn(x1, . . . , xn) + gn(x1, . . . , xn)u

where gk ≠ 0.

Note: ẋ1, . . . , ẋk do not depend on xk+2, . . . , xn.

The Backstepping Result

Let V1(x1) be a Control Lyapunov Function for the system

  ẋ1 = f1(x1) + g1(x1)u

with corresponding controller u = φ(x1). Then V2(x1, x2) = V1(x1) + (x2 − φ(x1))²/2 is a Control Lyapunov Function for the system

  ẋ1 = f1(x1) + g1(x1)x2
  ẋ2 = f2(x1, x2) + u

with corresponding controller

  u(x) = (dφ/dx1)( f(x1) + g(x1)x2 ) − (dV/dx1) g(x1) − (x2 − φ(x1)) − f2(x1, x2)
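The backstepping formula can be exercised on a toy strict-feedback system of my own choosing (not from the slides):

```python
# Backstepping sketch for:
#   x1' = x1**2 + x2   (f1 = x1**2, g1 = 1)
#   x2' = u            (f2 = 0)
# Virtual control phi(x1) = -x1**2 - x1 gives x1' = -x1 with V1 = x1**2/2.
def u_backstep(x1, x2):
    phi = -x1 ** 2 - x1
    dphi = -2.0 * x1 - 1.0
    # u = phi'*(f1 + g1*x2) - V1'*g1 - (x2 - phi) - f2
    return dphi * (x1 ** 2 + x2) - x1 - (x2 - phi)

h = 1e-3
x1, x2 = 0.5, -0.5
for _ in range(20000):   # 20 s of Euler integration
    x1, x2 = x1 + h * (x1 ** 2 + x2), x2 + h * u_backstep(x1, x2)
# In the (x1, z) coordinates, z = x2 - phi(x1), the closed loop is linear with
# eigenvalues -1 +/- i, so the state converges to the origin.
```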

The Backstepping Idea

Given a Control Lyapunov Function V1(x1), with corresponding control u = φ1(x1), for the system

  ẋ1 = f1(x1) + g1(x1)u

find a Control Lyapunov Function V2(x1, x2), with corresponding control u = φ2(x1, x2), for the system

  ẋ1 = f1(x1) + g1(x1)x2
  ẋ2 = f2(x1, x2) + u

Question 9

Repeat backlash compensation.

Backlash Compensation

  Deadzone
  Backlash inverse
  Linear controller design

Linear controller design: Phase lead compensation

[Block diagram: ref → K(1+sT2)/(1+sT1) → backlash → 1/(1+sT) → 1/s → out]

Question 10

Can you repeat linearization through high gain feedback?

[Figure: y and u, with and without the phase-lead filter. With the filter the oscillation is removed!]

Inverting Nonlinearities

Compensation of a static nonlinearity through inversion:

[Block diagram: Controller F(s) → f⁻¹(·) → f(·) → G(s)]

Should be combined with feedback as in the figure!

Choose the compensation F(s) such that the intersection with the describing function is removed.

F(s) = K(1+sT2)/(1+sT1) with T1 = 0.5, T2 = 2.0:

[Figure: Nyquist diagrams before and after compensation]

Remark: How to Obtain f⁻¹ from f using Feedback

[Block diagram: v → e = v − f(u) → k/s → u, with f(u) in the feedback path]

If k > 0 is large and df/du > 0, then e → 0 and

  0 = v − f(u),  i.e.  f(u) = v,  so  u = f⁻¹(v)

Question 11

What should we know about input-output stability?

Question 12

What about describing functions?
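Returning to the remark on obtaining f⁻¹ by feedback: integrating u̇ = k(v − f(u)) for a monotone f (my own example) drives u to f⁻¹(v):

```python
# High-gain feedback inversion: du/dt = k*(v - f(u)). For large k and
# df/du > 0, u converges to f^{-1}(v).
def f(u):
    return u + u ** 3                # monotone, df/du = 1 + 3u^2 > 0

k, h, v = 100.0, 1e-4, 2.0
u = 0.0
for _ in range(10000):               # 1 s of Euler integration
    u += h * k * (v - f(u))
print(u)                             # ~1.0, since f(1) = 2
```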

You should understand and be able to derive/apply

  System gain γ(S) = sup_{u∈L2} ‖y‖₂ / ‖u‖₂
  BIBO stability
  Small Gain Theorem
  Circle Criterion
  Passivity Theorem

Idea Behind Describing Function Method

[Block diagram: e → N.L. → u → G(s) → y]

e(t) = A sin ωt gives

  u(t) = Σ_{n=1}^{∞} √(aₙ² + bₙ²) sin[ nωt + arctan(aₙ/bₙ) ]

If |G(inω)| ≪ |G(iω)| for n ≥ 2, then n = 1 suffices, so that

  y(t) ≈ |G(iω)| √(a1² + b1²) sin[ ωt + arctan(a1/b1) + arg G(iω) ]

More Courses in Control

  EL1820 Modelling of Dynamic Systems, per 1
  EL2421 Project Course in Automatic Control, per 2
  EL2450 Hybrid and Embedded Control Systems, per 3
  EL2745 Principles of Wireless Sensor Networks, per 3
  EL2520 Control Theory and Practice, Advanced Course, per 4

Definition of Describing Function

[Block diagram: e(t) → N.L. → u(t), approximated by e(t) → N(A, ω) → û1(t)]

The describing function is

  N(A, ω) = ( b1(ω) + i a1(ω) ) / A

If G is low pass and a0 = 0, then

  û1(t) = |N(A, ω)| A sin[ ωt + arg N(A, ω) ] ≈ u(t)
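Both the describing function of an ideal relay and the harmonic-balance condition G(iω) = −1/N(A) for a possible limit cycle can be evaluated numerically; the relay u = H sgn(e) and the loop G(s) = 1/(s(s+1)²) are illustrative choices, not from the slides:

```python
import numpy as np

# Describing function of an ideal relay, from the first Fourier coefficient of
# the relay output for e = A sin(t); classical result: N(A) = 4H/(pi*A).
H, A = 1.0, 2.0
t = 2.0 * np.pi * (np.arange(100000) + 0.5) / 100000
u = H * np.sign(np.sin(t))
b1 = 2.0 * np.mean(u * np.sin(t))          # b1 = (1/pi) * integral over [0, 2pi]
N_of_A = b1 / A                            # numerically matches 4H/(pi*A)

# Limit-cycle prediction: find w where G(iw) crosses the real axis, then solve
# |G(iw)| = 1/N(A) = pi*A/(4H) for the oscillation amplitude.
w = np.linspace(0.2, 5.0, 500000)
G = 1.0 / (1j * w * (1.0 + 1j * w) ** 2)
i = np.argmin(np.abs(G.imag))
w_osc = w[i]                               # ~1 rad/s for this G
A_osc = 4.0 * H * abs(G.real[i]) / np.pi   # ~2/pi
print(N_of_A, w_osc, A_osc)
```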

Existence of Periodic Solutions

[Block diagram: feedback loop e → f(·) → u → G(s) → y]

  y = G(iω)u = −G(iω)N(A)y   ⇒   G(iω) = −1/N(A)

The intersections of the curves G(iω) and −1/N(A) give ω and A for a possible periodic solution.

EL2520 Control Theory and Practice, Advanced Course

Period 4, 7.5 p

Aim: provide an introduction to principles and methods in advanced control, especially multivariable feedback systems.

Multivariable control:
  Linear multivariable systems
  Robustness and performance
  Design of multivariable controllers: LQG, H∞-optimization
  Real time optimization: Model Predictive Control (MPC)

Lectures, exercises, labs, computer exercises

Contact: Mikael Johansson mikaelj@kth.se

EL1820 Modelling of Dynamic Systems

Period 1, 6 p

Aim: teach how to systematically build mathematical models of technical systems from physical laws and from measured signals.

Model dynamical systems from
  physics: lagrangian mechanics, electrical circuits etc
  experiments: parametric identification, frequency response

Computer tools for modeling, identification, and simulation

Lectures, exercises, labs, computer exercises

Contact: Håkan Hjalmarsson, hjalmars@kth.se

EL2450 Hybrid and Embedded Control Systems

Period 3, 7.5 p

Aim: course on analysis, design and implementation of control algorithms in networked and embedded systems.

How is control implemented in reality:
  Computer-implementation of control algorithms
  Scheduling of real-time software
  Control over communication networks

Lectures, exercises, homework, computer exercises

Contact: Dimos Dimarogonas dimos@kth.se

EL2745 Principles of Wireless Sensor Networks

Period 3, 7.5 cr

Aim: provide the participants with a basic knowledge of wireless sensor networks (WSN): THE INTERNET OF THINGS

  Essential tools within communication, control, optimization and signal processing needed to cope with WSN
  Design of practical WSNs
  Research topics in WSNs

Contact: Carlo Fischione carlofi@kth.se

EL2421 Project Course in Control

Period 4, 12 p

Aim: provide practical knowledge about modeling, analysis, design, and implementation of control systems. Give some experience in project management and presentation.

  From start to goal...: apply the theory from other courses
  Team work
  Preparation for Master thesis project
  Project management (lecturers from industry)
  No regular lectures or labs

Contact: Jonas Mårtensson jonas1@kth.se

Doing Master Thesis Project at Automatic Control Lab

  Theory and practice
  Cross-disciplinary
  The research edge
  Collaboration with leading industry and universities
  Get insight in research and development

Hints:
  The topic and the results of your thesis are up to you
  Discuss with professors, lecturers, PhD and MS students
  Check old projects

Doing PhD Thesis Project at Automatic Control

  Intellectual stimuli
  Get paid for studying
  International collaborations and travel
  Competitive
  World-wide job market
  Research (50%), courses (30%), teaching (20%), fun (100%)
  4-5 yrs to PhD (lic after 2-3 yrs)