The aim of these notes is mainly to cover all the theoretical aspects of the course of Laboratory of
Embedded Control Systems. Practical insight into the relevant aspects of the project development will
also be given. However, most of the practical issues related to the use of the robotic platform (i.e.,
the Lego Mindstorm), of the real-time embedded kernel to be adopted (i.e., NXT Osek) and of the
chosen simulation and design software (i.e., Scicos Lab) will be referenced throughout these notes.
Contents
4 Control Design 51
4.1 Feedback Paradigm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.1.1 The Example of the Electric Heater . . . . . . . . . . . . . . . . . . . . . . . . 52
4.1.2 The Example of the DC Brushless Motor . . . . . . . . . . . . . . . . . . . . . 55
4.1.3 The Closed Loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.2 Basic Concept of Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.2.1 Linear Time-Invariant System Stability . . . . . . . . . . . . . . . . . . . . . . 58
4.3 Feedback Design for Linear Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.4 Root Locus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.4.1 Root locus construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.4.2 Analysis of a Second Order System in the Root Locus . . . . . . . . . . . . . . 72
4.5 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5 Digital Control 74
5.1 Discretization of Linear Continuous Time Controllers . . . . . . . . . . . . . . . . . . 74
5.1.1 Approximation of the Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.1.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.2 Multitask Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.3 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
This Chapter describes the structure of the course and summarizes the fundamental concepts assumed
to be already known by the students.
1. Understanding the use of Scicoslab. We will first give a brief description of the adopted tool
(Section 1.2); then it will be used throughout these notes to show how the theory can be applied
in practice;
2. Approaching the mechanical platform (Section 1.3);
3. Recap on Laplace Transform (Chapter 2);
6 CHAPTER 1. INTRODUCTION AND BACKGROUND MATERIAL
4. Modeling systems, such as actuators and sensors, and learning how to identify their physical
parameters (Chapter 3);
5. Control law design techniques for Single Input Single Output (SISO) systems (Chapter 4);
6. Digital implementation of the designed control laws (Chapter 5);
7. Simulation and control design for a complex system like a wheeled mobile robot (Chapter 6);
Throughout these notes, reference books and materials will be listed where appropriate. In this
introductory section, we simply list some books that explain the basic concepts underlying the
Model Based Design paradigm and that can be used as references. For linear systems and control, we
can suggest [PPR03] and [Kai80]. Two books mainly related to the joint world of control design
and digital implementation are [Oga95] and [AW96]. As far as robot modeling and control are
concerned, a useful book is [SSVO08].
Executing the command who at the command line, which just lists the variables in the command
window;
Menu Applications/Browser Variables, which opens the Browser Variables window (or type
browsevar();).
When Scicoslab is closed, the variables created in the workspace are deleted. You can save variables
to a file using the save function. You can determine how numbers are displayed in the command
window with the format function, but the internal representation of a number is independent of the
chosen display format.
Scicoslab provides users with a large variety of basic objects starting with numbers, variables and
character strings up to more sophisticated objects such as booleans, polynomials, and structures. An
object is a basic element or a set of basic elements arranged in a vector, a matrix, a hypermatrix, or
a structure (list).
There is a set of special built-in constants, which are summarized in Fig. 1.3.
Moreover, a set of matrix operators is reported in Fig. 1.5. Furthermore, the operator $ returns
the index of the last element of a matrix.
1.2.3 Strings
Strings in Scicoslab are delimited by either single quotes ' or double quotes ", which are equivalent. If one
of these two characters is to be inserted in a string, it has to be preceded by a delimiter, which is
again a single or double quote. Basic operations on strings are concatenation (operator +)
and the function length. A string is just a 1x1 string matrix whose type is denoted by string (as
returned by typeof). A brief list of string functions is offered in Fig. 1.6.
The command exec works properly only if the current directory of Scicoslab matches the directory
where the script has been saved. By typing cd (or chdir) followed by the name of the desired folder,
the current directory is changed to that folder. If no name is specified, then the directory is changed
to the home directory. By typing getcwd (or pwd), the current directory is displayed. Alternatively,
execute the menu File / Get Current Directory to get the current folder and the menu File /
Change Directory to select the desired folder.
A function is identified through its calling syntax. Functions can be hard-coded functions (hard-coded
functions are sometimes called primitives) or Scicoslab-coded functions (sometimes called
macros in Scicoslab). A Scicoslab-coded function can be defined interactively using the keywords
function and endfunction, loaded from a Scicoslab script using exec or getf, or saved and
loaded in binary mode using save and load. Definition of a function:
function [<name1>,<name2>,...,<namep>]=<name-of-function>(<arg1>,<arg2>,... ,<argn>)
<instructions>
endfunction
Calling of a function:
[<v1>,<v2>,...,<vp>]=<name-of-function>(<expr1>,<expr2>,... ,<exprn>)
If more than one function shares the same variable, it should be defined with global. Warning: if
a function uses an undeclared variable in its scope, Scicoslab will search for it also in the workspace!
A script may contain function definitions. As previously mentioned, Scicoslab script file names
usually have the extension sce, but if the script contains only function definitions, then the sci
extension is used. In that case the command getf can be used instead of exec to load the functions.
When Scicoslab is started it executes (as a script) the file .scilab or scilab.ini, if it exists.
The execution of Scicoslab code can be interrupted using the command pause (menu Stop of the
Control menu in the main menu) or using the interruption key combination Ctrl-C. The pause command
can also be added to a function explicitly. With the command whereami (or where), it is possible to detect
where the pause has occurred. In the local environment of the pause, one can check the status of
variables, and the current execution can be resumed with the command resume. Leaving the pause
mode and ending the debugging is done with the abort command.
1.2. BRIEF INTRODUCTION TO SCICOSLAB 11
Branching :
if <condition> then
<instructions>
else
<instructions>
end
select <expr> ,
case <expr1> then
<instructions>
case <expr2> then
<instructions>
...
else
<instructions>
end
Iterations :
for <name>=<expr>
<instructions>
end
while <condition>
<instructions>
end
Plotting
The basic command for plotting data in a figure is plot2d. The properties of the graph (as well as
of each entity in Scicoslab) can be obtained using the get command. Then, each property is made
accessible and can be modified by the user. In the Graphics window menu it is possible to click on
the GED button, which opens the Graphics Editor.
To create a figure, type hf = scf(n), where n is the identifier and hf is the figure handle. The
latter specifies the plot to be modified. For example, clf(hf) clears the figure whose handle is hf, while
scf(hf) gives the focus to the figure whose handle is hf. Typing ax1=gca();ax1.grid=[0,0]; adds
the grid, where gca stands for get current axes. Important functions for handling plots and figures are
reported in Fig. 1.9, while windows primitives are briefly described in Fig. 1.10. Finally, Fig. 1.11
summarizes the window primitives.
Plotting examples:
t=0:0.1:3;
plot2d(t,sin(t));
t=linspace(-20*%pi,20*%pi,2000);
param3d1(sin(t),t.*cos(t)/max(t),t/100)
x=linspace(-%pi,%pi,40); y=linspace(-%pi,%pi,40);
plot3d(x,y,sinh(x)'*cos(y))
v=rand(1,2000,"n");
histplot([-6:0.4:6],v,[1],"015"," ",[-4,0,4,0.5],[2,2,2,1]);
function [y]=f2(x); y=exp(-x.*x/2)/sqrt(2*%pi); endfunction;
x=-6:0.1:6;x=x';plot2d(x,f2(x),1);
1.2.5 Scicos
Scicos is a block-diagram based simulation tool, which means that the mathematical model to be
simulated is represented with function blocks. It is quite similar to Simulink and LabVIEW Simulation
Module and it is automatically installed with Scilab. Scicos is able to easily implement Hardware In
the Loop simulations. The homepage is at http://scicos.org/.
Scicos can simulate linear and nonlinear continuous-time and discrete-time dynamic systems.
The simulation time scale is controllable: it can run as fast as possible (hence the time scale is
simulated as well), or it can run in real time, i.e., with a real or scaled time axis.
To launch Scicos:
At the start-up, the initial blank Scicos window appears (Fig. 1.12). The window will be filled
with Scicos blocks. The blocks used to build the mathematical model are organized in palettes
(Fig. 1.13.(a)). To display the palettes in a tree structure, select Palettes / Pal tree in the Scicos
menu. To see the individual blocks on a specific palette, click the plus sign in front of the palette
(Fig. 1.13.(b)). Alternatively, select the desired palette via the Palette / Palettes menu. Right
clicking on each element of the palette, i.e., on each Scicos block, a popup menu opens, with three
different choices:
Place in Diagram: places the block in the Scicos window, where it can be moved freely. Alternatively,
you can drag-and-drop the element in the Scicos window directly;
Details: shows the details of the block, e.g., number of inputs, number of outputs, etc.
Sources: various signal generators that are the inputs to the model;
Sinks: various visualizations of signals, that are the output variables of interest;
Branching: blocks performing the routing of the signals in the model, e.g., multiplexers, demultiplexers,
switches, buses, etc.;
1.2. BRIEF INTRODUCTION TO SCICOSLAB 15
Nonlinear: nonlinear operations on quantities, e.g., absolute values, products, logarithms, saturations,
etc.;
Matrix: operations on matrices taken as inputs, e.g., singular value decomposition, determinants,
inverse, etc.;
Integer, Iterations, Modelica, LookupTables, Threshold, Demo Blocks and Old Blocks will not
be used in this course. However, the interested student can always refer to the associated help.
To simulate a Scicos model, select Simulate / Run. Selecting the Simulate / Setup menu, one gets
the simulation parameters window (Fig. 1.14). Most of the parameters can generally be left unchanged
in this dialog window, except for:
The Final integration time defines the final (stop) time of the simulation. However, if an End block
exists in the block diagram, the Final simulation time in that block will override the simulation
stop time if less than it;
The Realtime scaling parameter defines the relative speed of the real time compared to the
simulated time. For example, if Realtime scaling is 0.2, the real time runs 0.2 times as fast as
the simulated time. In other words, the simulator runs 1/0.2 = 5 times faster than real time.
You can use this parameter to speed up or slow down the simulator;
The maximum step size can be set to a proper value. As a rule of thumb, one tenth of the
quickest time-constant (or apparent time-constant) of the system to be simulated;
The solver method (i.e. the numerical method that Scicos uses to solve the underlying algebraic
and differential equations making up the model) can be selected via the solver parameter, but
in most cases the default solver can be accepted;
The time scale of the simulation is defined by the user and by the particular application to simulate
(e.g., a mass flow parameter can be given in kg/s or kg/min). However, the choice of seconds as
time unit is suggested.
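The speed-scaling and step-size rules above can be sketched numerically. This is plain Python for illustration; the function names are ours, not Scicos parameters:

```python
# Illustrative helpers for the Realtime scaling and maximum step size rules.
# These names are hypothetical; they are not part of the Scicos dialog.

def simulator_speedup(realtime_scaling):
    """If real time runs `realtime_scaling` times as fast as simulated time,
    the simulator runs 1/realtime_scaling times faster than real time."""
    return 1.0 / realtime_scaling

def suggested_max_step(quickest_time_constant):
    """Rule of thumb: one tenth of the quickest (apparent) time constant."""
    return quickest_time_constant / 10.0

print(simulator_speedup(0.2))    # the example above: 5 times faster
print(suggested_max_step(0.05))  # e.g., a 50 ms time constant -> 5 ms step
```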
Model parameters and simulation parameters can be set in the Context of the simulator. To do so,
select Diagram / Context from the Scicos menu bar. The Context is simply represented by a number
of Scilab expressions defining the parameters (or variables) and assigning them values (the use of
the command global is recommended). These parameters are used throughout the Scicos diagram, e.g.,
in blocks, user defined functions, etc.
The Scifunc
Scilab-coded functions can be executed as Scicos blocks, i.e., as Scifunc blocks. When the Scifunc is placed
in the Scicos window, it is possible to visualize its properties. By double-clicking on the Scifunc block,
the first property window is visualized (Fig. 1.15).
1.2. BRIEF INTRODUCTION TO SCICOSLAB 17
Input port sizes: dimension of the inputs, corresponding to the dimension of the input of the
Scilab function;
Output port sizes: dimension of the outputs, corresponding to the dimension of the output of
the Scilab function;
Input event port sizes: dimension of the event port. It will be used to select an additional
event trigger for the function, e.g., the event generated by an external clock to discretize the
execution of the function;
Is block always active (0:no, 1:yes): specifies if the function is always executed (i.e., continuous
time behaviour) or executed only when the trigger is active (i.e., discrete time or event based
behaviour).
The function flag 1 properties are summarized in Fig. 1.16.(a): the name of the function to be
executed, with input u1 (i.e., input 1 of the block) and output y1. The function flag 4 properties are
reported in Fig. 1.16.(b): initialization of specific variables of the function using the global command
(even though it is not strictly necessary). The function flag 5 properties are depicted in Fig. 1.16.(c);
rarely used: they determine the operations to be performed when the function finishes. Finally, the
function flag 6 properties are depicted in Fig. 1.16.(d); rarely used: constraints on the input/output
dimensions/values.
The actuators are three Direct Current Brushless Motors that can be simultaneously controlled
by the embedded platform. The motors are basically transducers that transform electric power
into mechanical power, hence they are the components that generate the motion and the interaction of
the mechanical platform with the surrounding environment. These hardware components need to
be properly controlled in order to achieve the desired motion of the mechanical system with specified
(target) performance. According to the Model Based Design approach, a model of the actuators is necessary
to meet this goal.
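As a minimal sketch of what such an actuator model may look like (plain Python rather than Scicoslab, with hypothetical gain K and time constant tau; the actual NXT motor parameters are identified later in the course), a first-order velocity model tau*dw/dt + w = K*u can be simulated as follows:

```python
def simulate_motor(u, t_end, dt=1e-3, K=20.0, tau=0.1):
    """Forward-Euler simulation of tau*dw/dt + w = K*u (hypothetical K, tau)."""
    w = 0.0  # angular velocity, zero initial condition
    for _ in range(int(t_end / dt)):
        w += dt * (K * u - w) / tau  # forward Euler step
    return w

# With a constant input the velocity settles near K*u (here 20 * 0.5 = 10),
# since t_end = 1.0 s corresponds to ten time constants.
print(simulate_motor(u=0.5, t_end=1.0))
```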
The sensors are transducers that translate physical phenomena into digital signals. They are
mainly divided into two classes: exteroceptive sensors, which sense the environment surrounding the
mechanical platform, and proprioceptive sensors, which instead sense quantities that are internal to the
mechanical platform. The Lego Mindstorm can host up to four different sensors simultaneously. Only one
proprioceptive sensor is available, which is the incremental encoder mounted on each motor shaft,
measuring the current angular position of the motor with a resolution of one degree. Instead, there
are four different exteroceptive sensors: a contact switch, which returns a binary value depending on
whether the contact takes place or not; a sonar sensor, which receives the echo reflected by the objects
(if any) that have been hit by the emitted ultrasonic signal; a microphone sensor, which returns the
sound intensity (in decibels) captured in the environment; and a light sensor, which captures the ambient
light or the light reflected by the available lamp and returns a value that is related to the color of the
object that has been hit by the light. A sensor model should also be available in order to set up a
realistic simulation of the whole system.
1.4 Bibliography
[AW96] K.J. Åström and B. Wittenmark, Computer Controlled Systems, Prentice Hall Inc.,
November 1996.
20 BIBLIOGRAPHY
[Groc] http://www.scicoslab.org.
[Kai80] Thomas Kailath, Linear systems, vol. 1, Prentice-Hall Englewood Cliffs, NJ, 1980.
[Minb] http://www.nxtprograms.com/index1.html.
[Oga95] K. Ogata, Discrete-time control systems, Prentice-Hall, Inc. Upper Saddle River, NJ, USA,
1995.
[PPR03] Charles L Phillips, John M Parr, and Eve Ann Riskin, Signals, systems, and transforms,
Prentice Hall, 2003.
[Rie] Eike Rietsch, An Introduction to Scilab from a Matlab User's Point of View.
[SSVO08] B. Siciliano, L. Sciavicco, L. Villani, and G. Oriolo, Robotics: modelling, planning and
control, Springer Verlag, 2008.
[vDS] Lydia E. van Dijk and Christoph L. Spiel, Scilab Bag of Tricks.
Chapter 2
The Laplace Transform and its inverse are here briefly summarized. A useful reference on this topic
can be found in the notes of the course of Signals and Systems of Prof. Palopoli, as well as in [PPR03].
whose Region of Convergence (ROC) is the region where the Laplace transform is defined, i.e., where
the integral converges. Assuming that U(s) is the ratio of two polynomials, we have:
3. if the signal u(t) is left-sided (u(t) = 0 for t > t1, with t1 > −∞), then the ROC of U(s) is of the form
Re(s) < σ_min (the half plane to the left of the vertical line Re(s) = σ_min);
4. if the signal u(t) is right-sided (u(t) = 0 for t < t1, with t1 < +∞), then the ROC of U(s) is of the form
Re(s) > σ_max (the half plane to the right of the vertical line Re(s) = σ_max);
5. if the signal is two-sided (i.e., it has an infinite duration for both positive and negative times),
then the ROC is a vertical stripe of the form σ1 < Re(s) < σ2.
22 CHAPTER 2. THE LAPLACE TRANSFORM
In the following, we offer a brief summary of the properties of the bilateral Laplace transform,
which apply, with minor changes, also to the monolateral transform.

Proposition 2.1 (Linearity) Given L(u1(t)) = U1(s) with ROC R1 and L(u2(t)) = U2(s) with
ROC R2, we get

L(a1 u1(t) + a2 u2(t)) = a1 U1(s) + a2 U2(s),

with ROC R′ such that R1 ∩ R2 ⊆ R′.
Proposition 2.2 (Shifting in the time domain) Given L(u(t)) = U(s) with ROC R, we get

L(u(t − t0)) = e^{−s t0} U(s),

with ROC R′ = R.
Proposition 2.3 (Shifting in the s domain) Given L(u(t)) = U(s) with ROC R, we get

L(e^{s0 t} u(t)) = U(s − s0).

Since Re(s) ∈ R must hold for the argument of U, the transform is defined for Re(s − s0) ∈ R.
Hence, Re(s) ∈ Re(s0) + R, or, in other words, R′ = Re(s0) + R.
2.1. BILATERAL L-TRANSFORM 23
Proposition 2.4 (Time scaling) Given L(u(t)) = U(s) with ROC R and a ≠ 0 (a ∈ R), we get:

L(u(at)) = (1/|a|) U(s/a),

which is obtained by substituting τ = at in the integral. Since Re(s/a) must belong to R, it follows
that Re(s) ∈ aR, or, in other words, R′ = aR. For a < 0, the same result holds, with the only
difference that τ = at swaps the integration limits (hence the sign), which is compensated by the
1/|a| factor.
Proposition 2.5 (Differentiation in the time domain) Given L(u(t)) = U(s) with ROC R, we get

L(du(t)/dt) = s U(s).
Proposition 2.6 (Differentiation in the s domain) Given L(u(t)) = U(s) with ROC R, we get

L(t u(t)) = −dU(s)/ds,

with ROC R′ = R.
Proposition 2.7 (Convolution) Given L(u1(t)) = U1(s) with ROC R1 and L(u2(t)) = U2(s) with ROC
R2, we get

L(u1(t) ∗ u2(t)) = U1(s) U2(s),

with ROC R′ such that R1 ∩ R2 ⊆ R′.
Proof of Proposition 2.7

L(u1(t) ∗ u2(t)) = ∫_{t=−∞}^{+∞} (u1(t) ∗ u2(t)) e^{−st} dt =
∫_{t=−∞}^{+∞} [∫_{τ=−∞}^{+∞} u1(τ) u2(t − τ) dτ] e^{−st} dt = ∫_{τ=−∞}^{+∞} [∫_{t=−∞}^{+∞} u1(τ) u2(t − τ) e^{−st} dt] dτ
= ∫_{τ=−∞}^{+∞} u1(τ) [∫_{t=−∞}^{+∞} u2(t − τ) e^{−st} dt] dτ = ∫_{τ=−∞}^{+∞} u1(τ) (e^{−sτ} U2(s)) dτ
= [∫_{τ=−∞}^{+∞} u1(τ) e^{−sτ} dτ] U2(s) = U1(s) U2(s).

The region of convergence R′ is at least equal to R1 ∩ R2.
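The convolution property can also be verified numerically on a simple pair of signals. This is illustrative Python (not Scicoslab), using a crude Riemann approximation of the transform integral; the signals are our own example:

```python
import math

# For u1(t) = exp(-t)*1(t) and u2(t) = exp(-2t)*1(t), the convolution is
# (u1*u2)(t) = exp(-t) - exp(-2t), and its transform must equal U1(s)*U2(s).
def laplace_numeric(f, s, dt=1e-3, T=40.0):
    # Crude left-Riemann approximation of the integral of f(t)*exp(-s*t).
    return sum(f(k * dt) * math.exp(-s * k * dt) for k in range(int(T / dt))) * dt

s = 1.0
U1 = laplace_numeric(lambda t: math.exp(-t), s)      # exact value: 1/(s+1)
U2 = laplace_numeric(lambda t: math.exp(-2 * t), s)  # exact value: 1/(s+2)
Uc = laplace_numeric(lambda t: math.exp(-t) - math.exp(-2 * t), s)
print(abs(U1 * U2 - Uc))  # small discretization error
```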
Proposition 2.8 (Integration) Given L(u(t)) = U(s) with ROC R, we get

L(∫_{−∞}^{t} u(τ) dτ) = (1/s) U(s),

with ROC R′ such that R′ ⊇ R ∩ {Re(s) > 0}.

Proof of Proposition 2.8

∫_{τ=−∞}^{t} u(τ) dτ = ∫_{τ=−∞}^{+∞} u(τ) 1(t − τ) dτ = u(t) ∗ 1(t),

and

L(1(t)) = ∫_{t=−∞}^{+∞} 1(t) e^{−st} dt = ∫_{t=0}^{+∞} e^{−st} dt = [−e^{−st}/s]_{t=0}^{+∞} = 1/s,

with ROC R1 = {Re(s) > 0}; the thesis follows from the convolution property.
Proposition 2.9 (Integration over s) Given L(u(t)) = U(s) with ROC R, we get

L(u(t)/t) = ∫_{s}^{+∞} U(σ) dσ,

with ROC R′ = R.

Proof of Proposition 2.9

L(u(t)/t) = ∫_{t=−∞}^{+∞} (u(t)/t) e^{−st} dt
= ∫_{t=−∞}^{+∞} u(t) [∫_{σ=s}^{+∞} e^{−σt} dσ] dt = ∫_{σ=s}^{+∞} [∫_{t=−∞}^{+∞} u(t) e^{−σt} dt] dσ
= ∫_{σ=s}^{+∞} U(σ) dσ.

The ROC R′ is trivially equal to R.
2.2. UNILATERAL L-TRANSFORM AND ROC 25
In what follows, a set of basic bilateral Laplace transforms is derived by the direct application of
the previously enumerated properties.

L(−1(−t)) = 1/s (by definition); ROC: Re(s) < 0.

L(−e^{−at} 1(−t)) = 1/(s + a) (by shifting in the s domain); ROC: Re(s) < −Re(a).

L(e^{−a|t|}) = 2a/(a² − s²) (by linearity and shifting in the s domain); ROC: −Re(a) < Re(s) < Re(a).

L(sin(ωt) 1(t)) = ω/(s² + ω²) (by linearity and shifting in the s domain); ROC: Re(s) > 0.

L(cos(ωt) 1(t)) = s/(s² + ω²) (by linearity and shifting in the s domain); ROC: Re(s) > 0.

L(e^{at} sin(ωt) 1(t)) = ω/((s − a)² + ω²) (by linearity and shifting in the s domain); ROC: Re(s) > Re(a).
where the integration from 0⁻ allows to embrace the Dirac δ. The unilateral L-transform is equivalent
to the Laplace transform of 1(t) u(t). Therefore, the ROC is always of the form Re(s) > σ_max. Most
of the properties of the bilateral transform also apply to the monolateral Laplace transform, with small
differences. For example, the differentiation property is modified as follows:

L(u(t)) = U(s) ⟹ L(d^n u(t)/dt^n) = s^n U(s) − s^{n−1} u(0⁻) − s^{n−2} u̇(0⁻) − ... − u^{(n−1)}(0⁻),

with u^{(n−1)} defined as d^{n−1} u(t)/dt^{n−1}. Indeed, with respect to the bilateral transform, the
differentiation property changes in the integration by parts, evaluating the integral on the interval
(0⁻, +∞) rather than (−∞, +∞).
For a pole p_i of multiplicity h, the coefficients of the partial fraction expansion are given by

c_i^{(h−r)} = (1/r!) (d^r/ds^r) [(s − p_i)^h X(s)] |_{s=p_i}.

Finally, for each component of the partial fraction expansion, for the monolateral L-transform we have:

L{e^{p_i t}} = 1/(s − p_i),

and

L{t^n e^{p_i t}} = n!/(s − p_i)^{n+1}.
If all poles of Y(s) are strictly in the left half plane (∀i, Re(p_i) < 0), then y(t) goes to 0 for
t → ∞;

If at least one of the poles of Y(s) is in the right half plane (∃i such that Re(p_i) > 0), then
y(t) becomes unbounded as t → ∞;
2.4. PROPERTIES OF A SIGNAL GIVEN THE L-TRANSFORM 27
If all poles of Y(s) are such that Re(p_i) ≤ 0, then, if all the poles with real part equal to zero are
simple, y(t) remains bounded (although it does not necessarily go to 0); otherwise it is
unbounded.
Final value theorem: lim_{t→∞} y(t) = lim_{s→0} s Y(s) (if the limit exists and is finite).
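A small numerical illustration of the final value theorem (plain Python; the transform below is our own example, not one from these notes):

```python
import math

# Y(s) = 1/s - 1/(s+1) corresponds to y(t) = (1 - exp(-t))*1(t),
# whose steady state value is 1; the theorem gives the same limit.
def sY(s):
    return s * (1.0 / s - 1.0 / (s + 1.0))  # s*Y(s) = 1 - s/(s+1)

print(sY(1e-8))               # close to 1, approximating lim_{s->0} s*Y(s)
print(1.0 - math.exp(-30.0))  # y(30), close to the same limit
```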
Consider, as an example of inverse transform computation, the system ẏ(t) + α y(t) = 3 u̇(t) + u(t),
assuming that u(t) = 1(t), i.e., the unitary step function, with u(0⁻) = 1 and y(0⁻) = 1. To
this end, the property on the time differentiation is applied twice, i.e., L(ẏ) = s Y(s) − y(0⁻) and
L(u̇) = s U(s) − u(0⁻). Hence

Y(s)(s + α) = U(s)(3s + 1) + (y(0⁻) − 3 u(0⁻)),

and therefore

Y(s) = ((3s + 1)/(s + α)) U(s) − 2/(s + α),

that, substituting U(s) = 1/s, yields

Y(s) = (3s + 1)/((s + α) s) − 2/(s + α).

For the first component, the partial fraction expansion gives

(3s + 1)/((s + α) s) = (1/α)(1/s) + ((3α − 1)/α)(1/(s + α)),

and hence

y_f(t) = (1/α) 1(t) + ((3α − 1)/α) e^{−αt} 1(t).

For the second component,

y_u(t) = L⁻¹(−2/(s + α)) = −2 · 1(t) e^{−αt}.

Finally,

y(t) = −2 · 1(t) e^{−αt} + ((1/α) + ((3α − 1)/α) e^{−αt}) 1(t).

Notice how this inverse Laplace transform has been computed by applying the superposition principle,
which is only valid for linear systems.

Given y(t), it is possible to determine its steady state value for α > 0 by computing directly

lim_{t→+∞} y(t) = 1/α.
However, using the analysis tools previously summarized, it is possible to determine what the steady
state value of y(t) will be without computing y(t) explicitly, since it is possible to apply the final
value theorem (i.e., the limit exists because α > 0). In fact,

lim_{t→+∞} y(t) = lim_{s→0} s Y(s) = 1/α.

Notice how α < 0 makes the limit undefined.
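The example can be sanity-checked numerically (a plain Python sketch, with the sample value α = 2, which is our choice): combining the forced and unforced components, for t > 0 the solution reduces to y(t) = 1/α + ((α − 1)/α) e^{−αt}, which must satisfy ẏ + αy = 1 (since u = 1 and u̇ = 0 for t > 0) and tend to 1/α.

```python
import math

ALPHA = 2.0  # sample value; any alpha > 0 works

def y(t):
    # y(t) = 1/alpha + ((alpha - 1)/alpha)*exp(-alpha*t), valid for t > 0
    return 1.0 / ALPHA + ((ALPHA - 1.0) / ALPHA) * math.exp(-ALPHA * t)

def ode_residual(t, h=1e-6):
    # For t > 0 the ODE reduces to y' + alpha*y = 1; the residual should be ~0.
    dy = (y(t + h) - y(t - h)) / (2 * h)  # central difference derivative
    return dy + ALPHA * y(t) - 1.0

print(max(abs(ode_residual(t)) for t in (0.1, 0.5, 1.0, 3.0)))
print(y(20.0))  # close to the steady state value 1/alpha = 0.5
```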
As an exercise, compute the output to the step signal of the system with ẏ(0) = y(0) = 0.
Hint: for complex roots σ ± jω, we have an oscillating behavior of the time response of the system,
whose frequency is related to ω and whose damping factor is related to σ.
2.5 Bibliography
[PPR03] Charles L Phillips, John M Parr, and Eve Ann Riskin, Signals, systems, and transforms,
Prentice Hall, 2003.
Chapter 3
This chapter presents a brief introduction to dynamical systems and a way to model them considering
the interacting phenomena that relate the physical variables and the parameters involved. Then,
linear systems, which will be the subject of this course, will also be considered. The modeling
discussion ends with a description of the actuators and sensors involved in this course.
The final section of this chapter is devoted to the identification technique to be adopted for
system parameter estimation, with particular emphasis on the Lego Mindstorm motor model.
The reference books throughout this section are [PPR03, Kai80].
Example 3.1 Consider a bank account. The amount of money in the account is given by y, which
is updated on a monthly basis. Initially the amount is zero, i.e., y[0] = 0. The interest rate is equal to α.
Hence:

y[n] = α y[n − 1] + u[n − 1],

where u[n] represents the input, that is, the overall withdrawals (u[n] < 0) or deposits (u[n] ≥ 0) made
at month n. If α > 1 then the amount of money will grow in time, if 0 < α < 1 it will decrease. For
α = 1 there is no interest.
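The recursion above can be checked directly with a short script; this is a plain Python sketch (these notes otherwise use Scicoslab):

```python
# Direct simulation of the account recursion:
# y[n] = alpha*y[n-1] + u[n-1], with y[0] = 0.
def account_balance(alpha, deposits):
    y = 0.0
    for u in deposits:
        y = alpha * y + u
    return y

# With no interest (alpha = 1), twelve deposits of 100 simply accumulate:
print(account_balance(1.0, [100.0] * 12))   # 1200.0
# With alpha = 1.01 (1% monthly interest) the balance grows faster:
print(account_balance(1.01, [100.0] * 12))
```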
30 CHAPTER 3. MODELING AND IDENTIFICATION
where t ∈ R is the continuous time. As before, a continuous time system can be roughly defined as a
relation between an n-dimensional signal describing the input of the system and an m-dimensional
signal describing the output variables of the system, i.e.,

S : [R ↦ V] ↦ [R ↦ W], (3.1)

where t ∈ R is the time, V ⊆ R^n is the space representing the system inputs and W ⊆ R^m is the
output space.
Example 3.2 Consider the mass and spring dynamic system of Fig. 3.1. m is the mass of the body,
K is the spring elastic parameter, B is the damping parameter. In this case we have an input f ,
which is the force applied to the body, and an output x, which is the position of the mass on the plane
of motion.
Example 3.3 Consider the truck with a trailer of Fig. 3.2. m1 and m2 are the masses, k is the
spring elastic parameter of the transmission, b is the damping parameter of the transmission and β
is the viscous friction of the air. The input f is the traction force. The outputs of the system are
x1, the position of the truck on the road, and x1 − x2, the relative position between the truck and the
trailer.
3.1. DYNAMIC SYSTEMS 31
A first taxonomy about dynamic systems (discrete and continuous time) is related to the number
of inputs and outputs. Indeed:

The system is said to be Single Input Single Output (SISO) iff V is a tuple of one signal and W is a
tuple of one signal;

The system is said to be Single Input Multiple Output (SIMO) iff V is a tuple of one signal and W is a
tuple of more than one signal;

The system is said to be Multiple Input Single Output (MISO) iff V is a tuple of more than one signal
and W is a tuple of one signal;

The system is said to be Multiple Input Multiple Output (MIMO) iff both V and W are tuples of more
than one signal.
Example 3.4 Consider again the bank account system of Example 3.1. Initially the amount is zero,
i.e., y[0] = 0, while the dynamic system is simply given by

y[n] = α y[n − 1] + u[n − 1],

where u[n] represents the input, that is, the overall withdrawals (u[n] < 0) or deposits (u[n] ≥ 0) made
at month n. If α > 1 then the amount of money will grow in time, if 0 < α < 1 it will decrease. For
α = 1 there is no interest. This is a SISO system.
Example 3.5 Let us go back to the mass and spring system of Example 3.2. The dynamic equations
are derived noticing the effects involved, which lead to:

ẍ = −(K/m) x − (B/m) ẋ + f/m.

This system is a SISO system.
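A quick numerical check of the equation above (a plain Python sketch with arbitrary parameter values, not part of the course material): with a constant force f, the mass must settle at the static deflection f/K.

```python
# Semi-implicit Euler simulation of xdd = -(K/m)x - (B/m)xd + f/m.
def simulate(m, K, B, f, dt=1e-3, t_end=50.0):
    x, v = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        a = (-K * x - B * v + f) / m  # acceleration from the model equation
        v += dt * a                   # semi-implicit Euler step
        x += dt * v
    return x

x_final = simulate(m=1.0, K=4.0, B=1.0, f=2.0)
print(x_final)  # close to f/K = 0.5
```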
Example 3.6 For the truck and trailer system of Example 3.3, the dynamic equations are derived
from Newton's, Rayleigh's and Hooke's laws, plus the aerodynamical drag force β ẋ_i², with i = 1, 2.
Therefore the overall system description is given by:

m1 ẍ1 = f − k(x1 − x2) − b(ẋ1 − ẋ2) − β ẋ1²
m2 ẍ2 = k(x1 − x2) + b(ẋ1 − ẋ2) − β ẋ2²
In the rest of these notes, we will consider only one modeling technique, that is the I/O description.
More in depth, we describe the possible behaviors of the system by showing how the output is
computed directly from the input.
Continuous time systems are generically expressed by Ordinary Differential Equations (ODEs),
while discrete time systems by difference equations. From the previous examples, and by defining

D^(k) x(t) = x(t + k), i.e., x(t) shifted forward by k time steps, for difference equations,
D^(k) x(t) = d^k x(t)/dt^k, i.e., the k-th derivative of x(t), for continuous time systems,

we can infer that the I/O relation for a SISO system is in general given by an implicit equation
relating y(t), u(t), their shifts or derivatives, and t, where t ∈ R, y(t) : R ↦ R and u(t) : R ↦ R.
If the system is well posed, then it can be described with the normal form

D^(n) y(t) = F(y(t), D^(1) y(t), ..., D^(n−1) y(t), u(t), D^(1) u(t), ..., D^(p) u(t), t). (3.2)
A well posed system of difference or differential equations describes the whole time evolution of
the involved physical quantities whenever the initial conditions are properly given. For a system in
normal form (3.2), the initial conditions are specified by the values of y(t0) at time t0 (referred to as
initial time) and the corresponding values of all the n − 1 shift operators or time derivatives of y(t)
at the same instant, together with u(t0) and the first p − 1 shift operators or time derivatives of u(t)
at the same time instant.
Example 3.7 The dynamic equations in Examples 3.4 and 3.5 are already in their normal
form.
Definition 3.8 (Causality) A system is causal if the output y(t) at time t is only related to the
initial conditions and to the inputs u(τ), for τ ≤ t. It is strictly causal if this relation is true for
τ < t.
In plain words, the causal adjective refers to the existence of a cause/effect relation between
inputs and outputs. Therefore, a causal system cannot predict the future to produce its output, so
all physical systems are causal. Examples of algorithmic exceptions are CD drives, media players and
ideal filters: in such cases, the output depends on future inputs since the output is delayed.
A system expressed in normal form (3.2) is causal if p ≤ n and strictly causal if p < n.
Definition 3.9 (Stationarity) If the system does not change its behavior in time, it is time invariant
or stationary.

A system in its normal form (3.2) is stationary if dF(·)/dt = 0. The solution of a time invariant
system does not change if the initial time changes.
Superposition principle: for any two inputs f(t) and g(t), the output of the system for the input
f(t) + g(t) is the sum of the outputs obtained for f(t) and g(t) separately;

Scaling: for any input f(t) and for any scalar α, the output of the system for the input α f(t) is α
times the output obtained for f(t).
If the normal form (3.2) is given, a system is linear if it can be rewritten as

D^(n) y(t) = Σ_{i=0}^{n−1} a_i(t) D^(i) y(t) + Σ_{j=0}^{p} b_j(t) D^(j) u(t).

If it is also stationary, the terms a_i and b_j are constant. It turns out that for a linear system
the analysis is rather simplified, since it is possible to apply the superposition principle and to
compute the trajectory in the state space by summation of a homogeneous solution and a generic
solution. These two solutions are referred to as the unforced system response (for u ≡ 0 and given
initial conditions) and the forced response (for zero initial conditions and generic input). The forced
response is sometimes called the zero-state response to highlight the fact that the initial conditions
are zeroed.
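The linearity just discussed can be illustrated numerically with a small script (plain Python; the first-order recursion is an arbitrary example of ours): the zero-state response to a sum of inputs equals the sum of the zero-state responses.

```python
# Zero-state response of the linear recursion y[n] = 0.5*y[n-1] + u[n-1].
def response(u, a=0.5):
    y, out = 0.0, []
    for un in u:
        y = a * y + un
        out.append(y)
    return out

f = [1.0, 0.0, 2.0, -1.0]
g = [0.5, 3.0, 0.0, 1.0]
lhs = response([fi + gi for fi, gi in zip(f, g)])            # response to f + g
rhs = [yf + yg for yf, yg in zip(response(f), response(g))]  # sum of responses
print(lhs)
print(rhs)  # superposition: the two sequences coincide
```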
Example 3.11 The bank account in Example 3.4 is a discrete-time causal, stationary and linear
system. However, if the bank interest of Example 3.4 is supposed to change in time, the system
becomes non-stationary. The mass and spring system in Example 3.5 is a continuous time causal,
stationary and linear system, while the truck and trailer of Example 3.6 is a continuous time causal
and stationary system; however, it is not linear.
In the domain of the complex variable s, the previous convolution operation for continuous time
systems (which are the systems we will face during this course) turns into an algebraic operation for
the L-transform, i.e.,

Y(s) = H(s) U(s) + H0(s),

where H(s) is the transfer function of the system, that is, by definition, the L-transform of the impulse
response of the system assuming zero-state initial conditions, and H0(s) is the transfer function given
the initial conditions. It then turns out that the inverse Laplace transform of H(s)U(s) is the forced
response of the system, while the inverse Laplace transform of H0(s) is the unforced response. For
example, these two components are clearly reported in Example 2.10.
Hence:

X(s) = \frac{1}{ms^2 + cs + k} F(s) + \frac{cs + k}{ms^2 + cs + k} \Delta(s).
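As a quick numerical check of the forced-response component, the suspension transfer function 1/(ms^2 + cs + k) can be simulated with SciPy. This is only a sketch with illustrative parameter values (the course itself uses ScicosLab):

```python
import numpy as np
from scipy import signal

# Illustrative suspension parameters (assumed values, not from the text)
m, c, k = 400.0, 400.0, 100.0

# Forced response of H(s) = 1/(m s^2 + c s + k) to a unit step force,
# i.e., with zero initial conditions (zero-state response)
H = signal.TransferFunction([1.0], [m, c, k])
t = np.linspace(0.0, 30.0, 3000)
_, y_forced = signal.step(H, T=t)

# By the Final Value Theorem the steady state value is H(0) = 1/k
print(y_forced[-1], 1.0 / k)
```

The unforced response would instead be obtained from the initial conditions alone (the H_0(s) term), with zero input.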
Recalling that the overall time evolution is obtained using the superposition principle, let us first
consider only the effect of the road (F (s) = 0), assuming the wheel is climbing a step of amplitude
3.2. ANALYSIS OF A LINEAR SYSTEM 35
a \ne 0, i.e.,

X(s) = \frac{cs + k}{ms^2 + cs + k} \Delta(s) = \frac{a(cs + k)}{s(ms^2 + cs + k)}.    (3.3)

Furthermore, let us consider only the effect given by the poles, i.e., the roots of the denominator, in order to have:

X_1(s) = \frac{a}{s(ms^2 + cs + k)} = \frac{q}{s\left(\frac{s^2}{\omega_n^2} + \frac{2\delta}{\omega_n} s + 1\right)},

where \omega_n = \sqrt{k/m}, \delta = \frac{c}{2\sqrt{km}} and q = \frac{a}{k}. Since the system has two poles (the third is related to the step input), it is usually referred to as a second order system. \omega_n is usually referred to as the natural frequency of the system, and \delta is the damping factor of the system.
X_1(s) has three poles: p_1 = 0, p_{2,3} = -\delta\omega_n \pm \omega_n\sqrt{\delta^2 - 1}. The values of the roots of the denominator obviously influence the time response of the system and, in their turn, are influenced by the parameters of the system, i.e., m, k and c. From physics, it should be m > 0, k \ge 0 and c \ge 0. We will focus our analysis only on a subset of the possible configurations of the parameters, leaving to the interested reader all the other cases. Each configuration defines a different behavior of the system, as discussed in the following sections.
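The quantities just introduced are immediate to compute; a small helper, using the parameter values of the figures below only as an illustration, is:

```python
import math

def second_order_params(m: float, c: float, k: float, a: float):
    """Natural frequency, damping factor and steady state value of the
    suspension model X1(s) = a / (s (m s^2 + c s + k))."""
    wn = math.sqrt(k / m)                  # omega_n = sqrt(k/m)
    delta = c / (2.0 * math.sqrt(k * m))   # delta = c / (2 sqrt(km))
    q = a / k                              # steady state value q = a/k
    return wn, delta, q

# Illustrative values (m, k as in the figures of this section)
wn, delta, q = second_order_params(m=400.0, c=400.0, k=100.0, a=0.1)
print(wn, delta, q)  # 0.5 1.0 0.001
```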
For \delta \le -1 (which corresponds to c < 0), the poles p_2, p_3 are real and positive. If they are distinct, the partial fraction expansion gives

X_1(s) = \frac{q}{s\left(\frac{s^2}{\omega_n^2} + \frac{2\delta}{\omega_n} s + 1\right)} = \frac{c_1}{s} + \frac{c_2}{s - p_2} + \frac{c_3}{s - p_3},

with coefficients

c_1 = s X_1(s)|_{s=0} = q,

c_2 = (s - p_2) X_1(s)|_{s=p_2} = \frac{q\omega_n^2}{p_2(p_2 - p_3)},

c_3 = (s - p_3) X_1(s)|_{s=p_3} = -\frac{q\omega_n^2}{p_3(p_2 - p_3)}.
If instead the roots are coincident, i.e., p_2 = p_3 = \omega_n (\delta = -1), the expansion becomes

X_1(s) = \frac{q}{s\left(\frac{s^2}{\omega_n^2} + \frac{2\delta}{\omega_n} s + 1\right)} = \frac{c_1}{s} + \frac{c_2^{(1)}}{s - \omega_n} + \frac{c_2^{(2)}}{(s - \omega_n)^2},

with coefficients

c_1 = s X_1(s)|_{s=0} = q,

c_2^{(1)} = \frac{1}{1!}\frac{d}{ds}\left[(s - \omega_n)^2 X_1(s)\right]_{s=\omega_n} = -q,

c_2^{(2)} = (s - \omega_n)^2 X_1(s)|_{s=\omega_n} = q\omega_n,

that yields

x_1(t) = \left[q - q(1 - \omega_n t)e^{\omega_n t}\right] 1(t).
Regardless of the amplitude of the step a, the final value of x_1(t) in both cases for \delta \le -1 is infinite. In other words, once the car climbs the step, the car "explodes"! This is not a realistic behavior: it was obtained by supposing a damping parameter c < 0 that, instead of dissipating energy, generates it! Notice that this is true even if -1 < \delta < 0 (which again corresponds to c < 0, but with complex conjugate poles with positive real parts). In this case the limit is not defined, due to the persistent and divergent oscillations.
For reference, Fig. 3.4 reports the output behavior for distinct (Fig. 3.4.(a,b)), coincident (Fig. 3.4.(c)) and complex conjugate (Fig. 3.4.(d)) roots with different inputs.
Figure 3.4: Divergent outputs due to: (a) positive step a = 0.1, positive roots (\delta < -1); (b) negative step a = -0.1, positive roots (\delta < -1); (c) positive step a = 0.1, positive coincident roots (\delta = -1); (d) positive step a = 0.1, complex roots with positive real part (-1 < \delta < 0). [Plots omitted: amplitude vs. time (sec).]
a feasible dynamic. In this case, \omega_n > 0 and \delta \ge 1. The poles are real and given by p_1 = 0, p_{2,3} = -\delta\omega_n \pm \omega_n\sqrt{\delta^2 - 1} < 0. Since the system is at rest for t \le 0, if p_2 \ne p_3, the time evolution of the system is only given by the forced response

X_1(s) = \frac{q}{s\left(\frac{s^2}{\omega_n^2} + \frac{2\delta}{\omega_n} s + 1\right)} = \frac{c_1}{s} + \frac{c_2}{s - p_2} + \frac{c_3}{s - p_3},

x_1(t) = \left[q + \frac{q\omega_n^2}{p_2 - p_3}\left(\frac{e^{p_2 t}}{p_2} - \frac{e^{p_3 t}}{p_3}\right)\right] 1(t).
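The closed-form expression above can be cross-checked against a numerical step response. The following sketch uses SciPy and illustrative, overdamped parameter values (not taken from the text):

```python
import numpy as np
from scipy import signal

# Overdamped suspension (delta > 1), illustrative values
m, c, k, a = 400.0, 500.0, 100.0, 0.1   # delta = 1.25
wn = np.sqrt(k / m)
delta = c / (2.0 * np.sqrt(k * m))
q = a / k
p2 = -delta * wn + wn * np.sqrt(delta**2 - 1)
p3 = -delta * wn - wn * np.sqrt(delta**2 - 1)

t = np.linspace(0.0, 30.0, 2000)
# closed-form forced response derived in the text
x_closed = q + q * wn**2 / (p2 - p3) * (np.exp(p2 * t) / p2 - np.exp(p3 * t) / p3)

# numerical step response of a/(m s^2 + c s + k), i.e., X1(s) with step input
sys = signal.TransferFunction([a], [m, c, k])
_, x_num = signal.step(sys, T=t)

print(np.max(np.abs(x_closed - x_num)))  # should be numerically negligible
```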
Figure 3.5: Convergent outputs due to: (a) positive step a = 0.1, negative roots (\delta \ge 1); (b) negative step a = -0.1, negative roots (\delta \ge 1); (c) positive step a = 0.1 with k = 100, m = 400, c = 400, negative coincident roots (\delta = 1). [Plots omitted: amplitude vs. time (sec).]
The steady state value follows from the Final Value Theorem:

\lim_{t\to+\infty} x_1(t) = \lim_{s\to 0} s X_1(s) = \frac{q\omega_n^2}{p_2 p_3} = q = \frac{a}{k},

and, for the coincident roots case,

\lim_{t\to+\infty} x_1(t) = q = \frac{a}{k} \quad\text{or}\quad \lim_{s\to 0} s X_1(s) = \frac{q\omega_n^2}{p_2^2} = q = \frac{a}{k}.

Again, Fig. 3.5 reports the output behavior for distinct (Fig. 3.5.(a,b)) and coincident (Fig. 3.5.(c)) roots with different inputs.
An important characteristic of the output of a (stable) linear system, in addition to the steady state value, is the time in which the system reaches that value. This time is referred to as the settling time T_s of the system. For engineers, it is not relevant when the system exactly reaches the steady state value (in fact, by the inverse Laplace transform, it is reached only for t \to +\infty); what matters is the time at which the system output enters a band of relative width \epsilon around the steady state value (e.g., \epsilon = 0.05 for the 5% criterion) and remains, for t \to +\infty, within that bound. This quantity is a function of the parameters \omega_n and \delta of the system, and it is derived from the time evolution of the system response. More precisely:

|q - x_1(t)| \le \epsilon q, \quad \forall t \ge T_s.
Therefore, for \delta \ge 1, p_2 \ne p_3 and p_2, p_3 \in \mathbb{R}^-,

x_1(t) = \left[q + \frac{q\omega_n^2}{p_2 - p_3}\left(\frac{e^{p_2 t}}{p_2} - \frac{e^{p_3 t}}{p_3}\right)\right] 1(t),

hence

|q - x_1(t)| = \frac{q\omega_n^2}{|p_2 - p_3|}\left|\frac{e^{p_2 t}}{p_2} - \frac{e^{p_3 t}}{p_3}\right| \le \epsilon q.

Imposing the bound on each exponential term separately, i.e., \frac{q\omega_n^2 e^{p_i t}}{|p_i(p_2 - p_3)|} \le \epsilon q, we finally get

T_s = \frac{\log(\epsilon |p_i (p_2 - p_3)|) - \log(\omega_n^2)}{p_i}.
Hence, from the two roots we obtain two possible values for the time T_s. Of course, the root that is smaller in modulus will govern the rate of convergence of the system towards its steady state value. This idea is quite effective from a practical viewpoint and it can be generalized to any number of roots, even if the roots are complex. This approximation is usually referred to as the dominant pole approximation of the system. Fig. 3.6.(a) depicts the dominant pole approximation thus introduced for this system. For the described system, assuming that p_2 = -\delta\omega_n + \omega_n\sqrt{\delta^2 - 1} and p_3 = -\delta\omega_n - \omega_n\sqrt{\delta^2 - 1}, p_2 is the dominant pole. Hence, the greater (in modulus) is p_2, the faster is the rate of convergence. This result is accomplished by increasing \omega_n = \sqrt{k/m} and making \delta = \frac{c}{2\sqrt{km}} \to 1. Fig. 3.6.(b) reports the output for two different choices of the spring constant with a mass m = 400 kg.

If p_2 = p_3 = -\omega_n (with \delta = 1), we have

x_1(t) = \left[q - q(1 + \omega_n t)e^{-\omega_n t}\right] 1(t),

hence

|q - x_1(t)| = |q(1 + \omega_n t)e^{-\omega_n t}| \le \epsilon q.
Figure 3.6: (a) Rate of convergence for the overall system x_1(t) and for each exponential alone (only p_2, only p_3); the settling time is also shown for \epsilon = 0.05. This graph clearly shows the dominant pole approximation. (b) Comparison between two choices of the spring constant: k_1 = 1600 with c_1 chosen so that \delta_1 > 1, and k_2 = 400 with c_2 chosen so that \omega_{n2} = \omega_{n1}/2 and \delta_2 = \delta_1. [Plots omitted: amplitude vs. time (sec).]
A system having \delta > 1 behaves like a first order system, since the output of the system to a step input is governed by a dominant pole, which defines the rate of convergence (i.e., the settling time) of the system.
Straightforwardly, the larger the difference between the dominant pole(s) and all the other poles, the better the approximation. For instance, consider the two transfer functions

P_1(s) = \frac{4}{(s+1)(s+2)(s^2+2s+2)}, \qquad P_2(s) = \frac{400}{(s+1)(s+20)(s^2+2s+20)},    (3.4)

whose dominant pole approximation is the same P_a(s) = \frac{1}{s+1}; their outputs for the input u(t) = 1(t) are reported in Fig. 3.7. It is evident that the approximation of P_2(s) is the more accurate, since its poles are more spread out.
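This comparison can be reproduced numerically; the following sketch simulates the step responses of P_1, P_2 and P_a with SciPy and measures the approximation error:

```python
import numpy as np
from scipy import signal

# Plants of (3.4) and their common dominant-pole approximation Pa(s) = 1/(s+1)
P1 = signal.TransferFunction([4.0], np.polymul([1.0, 3.0, 2.0], [1.0, 2.0, 2.0]))
P2 = signal.TransferFunction([400.0], np.polymul([1.0, 21.0, 20.0], [1.0, 2.0, 20.0]))
Pa = signal.TransferFunction([1.0], [1.0, 1.0])

t = np.linspace(0.0, 8.0, 800)
_, y1 = signal.step(P1, T=t)
_, y2 = signal.step(P2, T=t)
_, ya = signal.step(Pa, T=t)

# P2's non-dominant poles are farther away, so its error is smaller
print(np.max(np.abs(y1 - ya)), np.max(np.abs(y2 - ya)))
```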
Figure 3.7: Response to a unitary step input for the plants reported in (3.4), whose dominant pole approximation is given by the same P_a(s). [Plot omitted: P_1, P_2, P_a vs. time.]
For 0 < \delta < 1 the poles are complex conjugate, p_{2,3} = -\delta\omega_n \pm j\omega_n\sqrt{1-\delta^2}, and the partial fraction coefficients become

c_1 = s X_1(s)|_{s=0} = q,

c_2 = (s - p_2) X_1(s)|_{s=p_2} = \frac{q\omega_n^2}{p_2(p_2 - p_3)} = -q\,\frac{\sqrt{1-\delta^2} - j\delta}{2\sqrt{1-\delta^2}},

c_3 = (s - p_3) X_1(s)|_{s=p_3} = -\frac{q\omega_n^2}{p_3(p_2 - p_3)} = -q\,\frac{\sqrt{1-\delta^2} + j\delta}{2\sqrt{1-\delta^2}}.

Noticing that:

1. \cos(\theta) = \sin(\theta + \pi/2);

2. \arctan\frac{\sin(\theta)}{\cos(\theta)} - \frac{\pi}{2} = -\arctan\frac{\cos(\theta)}{\sin(\theta)};

it follows that

x_1(t) = \left[q - qN e^{-\delta\omega_n t} \sin\left(\omega_n\sqrt{1-\delta^2}\, t + \varphi\right)\right] 1(t),

with N = \frac{1}{\sqrt{1-\delta^2}} and \varphi = \arctan\frac{\sqrt{1-\delta^2}}{\delta}. The final value of x_1(t) for 0 < \delta < 1 is again

\lim_{t\to+\infty} x_1(t) = q = \frac{a}{k}.
Let us now consider the settling time in this particular case. Again,

|q - x_1(t)| \le \epsilon q, \quad \forall t \ge T_s,

or, equivalently,

\left|N e^{-\delta\omega_n t} \sin\left(\omega_n\sqrt{1-\delta^2}\, t + \varphi\right)\right| \le \epsilon.

Due to the sinusoidal function, this equation is quite difficult to evaluate. An effective upper bound to T_s is derived using a worst case approach, i.e., by considering |\sin(\omega_n\sqrt{1-\delta^2}\, t + \varphi)| = 1, and recalling that N > 0 for 0 < \delta < 1, i.e.,

N e^{-\delta\omega_n t} \le \epsilon,

and finally

T_s \le \frac{\log(N) - \log(\epsilon)}{\delta\omega_n}.
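The worst-case bound can be verified numerically: after the bound, the simulated response must stay inside the \epsilon-band. A sketch with illustrative values of \omega_n and \delta:

```python
import numpy as np
from scipy import signal

# Upper bound Ts <= (log N - log eps)/(delta*wn), N = 1/sqrt(1-delta^2)
wn, delta, eps = 2.0, 0.4, 0.05   # illustrative values
N = 1.0 / np.sqrt(1.0 - delta**2)
Ts_bound = (np.log(N) - np.log(eps)) / (delta * wn)

# Unit step response of wn^2/(s^2 + 2 delta wn s + wn^2)
sys = signal.TransferFunction([wn**2], [1.0, 2.0 * delta * wn, wn**2])
t = np.linspace(0.0, 2.0 * Ts_bound, 20001)
_, y = signal.step(sys, T=t)

# after Ts_bound the response remains within eps of the final value 1
tail = y[t >= Ts_bound]
print(Ts_bound, np.max(np.abs(tail - 1.0)))
```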
Notice that the real part of the complex roots, -\delta\omega_n, plays a fundamental role in the rate of convergence towards the steady state value.
Another important parameter for a second order system is the overshoot, that is, the maximum value reached by the time evolution of the output with respect to the steady state value. More precisely, the overshoot O is defined as the difference between the maximum value reached by the system response x_{max} and the steady state value x(\infty), normalized by the difference between the initial value x(0) and the steady state value:

O = \frac{|x_{max} - x(\infty)|}{|x(0) - x(\infty)|}.
An exact relation between the damping parameter \delta and the overshoot O is derived by computing the time derivative

\frac{dx_1(t)}{dt} = -qN e^{-\delta\omega_n t}\,\omega_n\sqrt{1-\delta^2}\,\cos\left(\omega_n\sqrt{1-\delta^2}\, t + \varphi\right) + qN \delta\omega_n e^{-\delta\omega_n t}\,\sin\left(\omega_n\sqrt{1-\delta^2}\, t + \varphi\right),

which vanishes when \tan(\omega_n\sqrt{1-\delta^2}\, t + \varphi) = \frac{\sqrt{1-\delta^2}}{\delta} = \tan(\varphi), i.e., for \omega_n\sqrt{1-\delta^2}\, t = h\pi, h = 0, 1, 2, \dots
It is now evident that for h = 1 we have a maximum. Hence, the time at which the system reaches its maximum value is

T_o = \frac{\pi}{\omega_n\sqrt{1-\delta^2}}.
Increasing \omega_n, the overshoot will happen earlier; on the other hand, it will happen later if \delta increases. The corresponding overshoot value is obtained noticing that x(0) = 0, x(\infty) = q and

x_{max} = q\left(1 + e^{-\frac{\pi\delta}{\sqrt{1-\delta^2}}}\right).

Therefore

O = \frac{|x_{max} - x(\infty)|}{|x(0) - x(\infty)|} = e^{-\frac{\pi\delta}{\sqrt{1-\delta^2}}},

i.e., the maximum overshoot depends only on the damping parameter \delta of the complex roots. For example, if a maximum overshoot \bar{O} is desired, \delta should be greater than a minimum \delta_{min}.
Fig. 3.8.(a) depicts the time evolution for two different choices of the spring constant for a vehicle mass m = 400 kg. Notice that with the choice k_1 = 1600 we have a lower settling time T_s and a lower overshoot time T_o (hence the system is faster than with the second choice). On the other hand, this choice presents a larger maximum value, a situation that may not be acceptable in all cases.
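The closed-form expressions for T_o and O can be checked against a simulated step response; a sketch with illustrative values:

```python
import numpy as np
from scipy import signal

# Predicted overshoot and peak time for an underdamped second order system
wn, delta = 2.0, 0.3   # illustrative values
O_pred = np.exp(-np.pi * delta / np.sqrt(1.0 - delta**2))
To_pred = np.pi / (wn * np.sqrt(1.0 - delta**2))

# Simulated unit step response of wn^2/(s^2 + 2 delta wn s + wn^2)
sys = signal.TransferFunction([wn**2], [1.0, 2.0 * delta * wn, wn**2])
t = np.linspace(0.0, 20.0, 20001)
_, y = signal.step(sys, T=t)

O_sim = (y.max() - 1.0) / 1.0    # x(0) = 0, x(inf) = 1
To_sim = t[np.argmax(y)]
print(O_pred, O_sim)    # both approximately 0.372
print(To_pred, To_sim)  # both approximately 1.65
```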
Figure 3.8: (a) Comparison between two choices of the spring constant: k_1 = 1600 with c_1 chosen so that 0 < \delta_1 < 1, and k_2 = 400 with c_2 chosen so that \omega_{n2} = \omega_{n1}/2 and \delta_2 = \delta_1 < 1. The settling time refers to a threshold \epsilon = 0.02. (b) Imaginary roots (a = 0.1, k = 100, m = 400, c = 0). [Plots omitted: amplitude vs. time (sec).]
With a different choice of the parameters, the output can be completely different (Fig. 3.8.(b)). Indeed, in the case of m > 0, k > 0 and c = 0, the oscillations are not damped, since the transfer function has p_1 = 0 and complex conjugate poles p_{2,3} with Re(p_{2,3}) = 0. In this case it is not possible to define T_s, but it is possible to define the overshoot (left as an exercise).
Let us now recall the overall transfer function (3.3) given by the effect of the road alone (F(s) = 0), i.e.,

X(s) = (cs + k)X_1(s).

In this case, the real system response is given by the previously obtained time evolution (multiplied by k) plus the time derivative of the time response x_1(t), scaled by the factor c, i.e.,

x(t) = c\,\frac{dx_1(t)}{dt} + k\, x_1(t).
Notice how a zero pops up in this case. To analyze how this zero modifies the output of the system, let us consider a generic second order system with stable, complex conjugate roots (0 < \delta < 1, \omega_n > 0),

Y(s) = \frac{\tau s + 1}{s\left(\frac{s^2}{\omega_n^2} + \frac{2\delta}{\omega_n} s + 1\right)} = (\tau s + 1)G(s),

excited by a unitary step signal. Define y_1(t) = \mathcal{L}^{-1}[G(s)]. Therefore, y(t) = \tau\frac{dy_1(t)}{dt} + y_1(t). Hence:

y(t) = -\tau N \omega e^{-\delta\omega_n t}\cos(\omega t + \varphi) + \tau N \delta\omega_n e^{-\delta\omega_n t}\sin(\omega t + \varphi) + 1 - N e^{-\delta\omega_n t}\sin(\omega t + \varphi),
with N = \frac{1}{\sqrt{1-\delta^2}}, \varphi = \arctan\frac{\sqrt{1-\delta^2}}{\delta} and \omega = \omega_n\sqrt{1-\delta^2}.

Figure 3.9: Effect of the zeros, with \delta = 0.1, \omega_n = 5, unitary input step, and different values of \tau. [Plot omitted: y(t) vs. time (sec).]

Notice that:
related to the step input), it is a second order system. Therefore, its output can be analyzed with
the tools introduced in Section 3.2.3.
3.4 Identification
System identification is a general term describing the mathematical tools and algorithms that build dynamical models from measured data: the measured data are used to compute a transfer function of the system. Two different approaches are usually adopted for linear systems:

1. Time domain approach: a measure of the time response of the system is derived using a controlled input.

Depending on the available prior knowledge, the model can be:

1. black box model: no prior model is available. Most system identification algorithms are of this type;

2. grey box model: a model based on both insight into the system and experimental data is constructed. This model, however, still has a number of unknown free parameters, which can be estimated using system identification.

In these notes, the objective of system identification is the estimation of the Lego motor parameters. We already know that such a system acts as a second order system. As a consequence, we adopt the grey box model.
All the identification procedures need some measured data to extrapolate the model and some other data to validate the identified model. In plain words, to identify the parameters of a model we need to check whether the model fits experimental measurements or other empirical data. A common approach to test this fit is to split the data into two disjoint subsets:

1. Identification data: are used to estimate the model parameters;

2. Verification data: are used to test the model. An accurate model will closely match the verification data even though these data were not used to set the model's parameters (cross-validation).
The second procedure is called model verification. A metric is needed to measure the distance between the observed data y_m(t) and the predicted data y(t) and, hence, to assess the model fit. As a metric, we can adopt some standard indices used as performance indices for control systems:

Integral squared error: ISE = \int_0^T (y_m(t) - y(t))^2\, dt;

Integral absolute error: IAE = \int_0^T |y_m(t) - y(t)|\, dt;

Integral time squared error: ITSE = \int_0^T t\,(y_m(t) - y(t))^2\, dt;

Integral time absolute error: ITAE = \int_0^T t\,|y_m(t) - y(t)|\, dt.
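On sampled data, these integrals are approximated numerically; a minimal sketch using trapezoidal integration (the helper `fit_metrics` is our own, not a standard library function):

```python
import numpy as np

def _trapezoid(f, t):
    # trapezoidal rule on a (possibly non-uniform) time grid
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t) / 2.0))

def fit_metrics(t, ym, y):
    """ISE, IAE, ITSE, ITAE between measured ym(t) and predicted y(t)."""
    e = ym - y
    ise = _trapezoid(e**2, t)
    iae = _trapezoid(np.abs(e), t)
    itse = _trapezoid(t * e**2, t)
    itae = _trapezoid(t * np.abs(e), t)
    return ise, iae, itse, itae

# Toy check: constant error of 0.1 over t in [0, 1]
t = np.linspace(0.0, 1.0, 1001)
ym = np.ones_like(t)
y = ym - 0.1
print(fit_metrics(t, ym, y))  # approximately (0.01, 0.1, 0.005, 0.05)
```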
A rigorous approach to system identification implies the adoption of the chosen metric as an objective function to be minimized in the procedure. For example, a commonly adopted technique is (linear) regression to attain the minimum mean squared error (MMSE). For our purposes, and due to the simplicity of the identification problem, we adopt a simpler trial-and-error procedure:

1. We collect output measurements and then we try to estimate the second order system parameters (for instance, using a least squares approach);

2. Next, we verify the estimated model against measurements, computing the fitting error using the chosen metric;

3. If the estimated values are satisfactory, then the procedure terminates; otherwise a different choice is made at point 1.
Example 3.12 Let us consider the motor model (3.6), shown in Fig. 3.10. Its time response is given by:

\theta_1(t) = \left[q + 2N e^{-\delta\omega_n t} \cos\left(\omega_n\sqrt{1-\delta^2}\, t + \varphi\right)\right] 1(t),

with N = |c_2| and \varphi = \arg(c_2) obtained from the residues of the transfer function. The final value of \theta_1(t) for 0 < \delta < 1 is

\lim_{t\to+\infty} \theta_1(t) = q,

the settling time (\epsilon is the tolerance used to evaluate the settling time, while \bar{N} = \frac{2N}{q}) is

T_s \le \frac{\log(\bar{N}) - \log(\epsilon)}{\delta\omega_n},

and finally the overshoot is given by

O = \frac{|\theta_{max} - \theta(\infty)|}{|\theta(0) - \theta(\infty)|} = e^{-\frac{\pi\delta}{\sqrt{1-\delta^2}}}.
The relation involving the settling time is approximate. As a consequence, the estimates of \omega_n derived from that equation can be very inaccurate. In combination with the settling time, we can use the real part of the complex conjugate poles, that is, -\delta\omega_n. In particular, the decay rate of the oscillations is given by e^{-\delta\omega_n t}. Therefore,

\log\frac{e^{-\delta\omega_n (t_0 + T)}}{e^{-\delta\omega_n t_0}} = -\delta\omega_n T = \log\frac{\theta(t_0 + T) - \theta(\infty)}{\theta(t_0) - \theta(\infty)},

where t_0 is a certain time instant, while T is the period of the oscillations. For example, t_0 can be fixed as the time of one maximum of the oscillations (for instance, the overshoot), and t_0 + T is the time of the next maximum. However, the minima can also be chosen. It has to be noted that it is possible to have more than one pair of measurements from the output signal. On the other hand, for highly damped outputs it can be difficult to detect the output peaks, even though they exist.
Additionally, in order to obtain an estimate of \omega_n, we can exploit the oscillation frequency, that is, the imaginary part of the complex conjugate roots, i.e.,

\omega_d = \omega_n\sqrt{1-\delta^2}.

Therefore, by measuring the time between two consecutive peaks, the period T of the oscillations can be derived. Hence,

\omega_n = \frac{2\pi}{T\sqrt{1-\delta^2}}.

Again, multiple estimates of the period are available.
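Putting the two relations together, \delta and \omega_n can be estimated from two consecutive maxima of a step response. The following sketch checks the procedure on a synthetic underdamped response (function name and values are our own, for illustration):

```python
import numpy as np

def estimate_from_peaks(t1, y1, t2, y2, y_inf):
    """Estimate delta and wn from two consecutive maxima (t1, y1), (t2, y2)
    of a step response with final value y_inf."""
    T = t2 - t1                                    # oscillation period
    # envelope decay between peaks: log ratio = -delta*wn*T
    sigma = -np.log((y2 - y_inf) / (y1 - y_inf)) / T   # sigma = delta*wn
    wd = 2.0 * np.pi / T                           # damped frequency
    wn = np.sqrt(sigma**2 + wd**2)                 # since wd = wn*sqrt(1-delta^2)
    delta = sigma / wn
    return delta, wn

# Synthetic underdamped step response with wn = 5, delta = 0.2
wn0, d0 = 5.0, 0.2
wd0 = wn0 * np.sqrt(1.0 - d0**2)
t = np.linspace(0.0, 10.0, 100001)
phi = np.arctan(np.sqrt(1.0 - d0**2) / d0)
y = 1.0 - np.exp(-d0 * wn0 * t) / np.sqrt(1.0 - d0**2) * np.sin(wd0 * t + phi)

# detect the first two local maxima on the grid
peaks = [i for i in range(1, len(t) - 1) if y[i] > y[i - 1] and y[i] > y[i + 1]]
i1, i2 = peaks[0], peaks[1]
d_est, wn_est = estimate_from_peaks(t[i1], y[i1], t[i2], y[i2], 1.0)
print(d_est, wn_est)  # close to 0.2 and 5.0
```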
but, since the phase is at most -\pi, we can simply limit the estimated \angle G(j\omega) \in [-\pi, 0].
3.5 Bibliography
[Kai80] Thomas Kailath, Linear Systems, vol. 1, Prentice-Hall, Englewood Cliffs, NJ, 1980.
[PPR03] Charles L. Phillips, John M. Parr, and Eve Ann Riskin, Signals, Systems, and Transforms, Prentice Hall, 2003.
Chapter 4
Control Design
This chapter presents the basic concepts underlying control law design for linear plants. After introducing the feedback paradigm, the concept of stability for linear systems is presented. Building on the notion of stability, a useful tool for control law synthesis, the root locus, is presented, which is based on the analysis of the poles of the system subject to the control law.
d(t) are the noises, or disturbances: non-controllable inputs that modify the behavior of the system;

u(t) are the controllable inputs, or controls, that are in general chosen by the control designer in order to satisfy predefined performance;

y_r(t) are the performance outputs, or controlled outputs, that correspond to the target of the control law;

y_m(t) are the measured outputs, whose knowledge (together with the additional knowledge of the model of the system) gives the designer information on the current behavior of the system.
This system will have a certain behavior as a function of its internal dynamics and of the value of the inputs. This behavior, for a linear system, can be defined by its impulse response h(t) or by the transfer function H(s). For this generic system, there will be four transfer functions, H_{r,u}(s), H_{r,d}(s), H_{m,u}(s) and H_{m,d}(s): one for each input/output pair. The previous description is only formal and may or may not be physically verified. For example, a SISO system will have y_m(t) = y_r(t) and d(t) = 0, hence only one transfer function H(s) = H_{r,u}(s) will effectively be considered.
The behavior of the system as represented in Fig. 4.1 is called open loop, since there is no feedback from the output to the inputs. Notice that, in order to modify the behavior of the system, for example in order to let the performance output y_r(t) meet the desired target behavior, it is possible to properly use the controlled input u. Indeed, even without feedback, it is possible to define a control law u(t) that does not take into account the current behavior of the system, observed from y_m, but simply relies on the model of the system. This control technique that disregards y_m is then called open loop control. As may be intuitive, such an approach is not very effective, for a number of different reasons, and it is usually not implemented.
Again, in order to show the benefits of feedback and the weaknesses of the open loop approach, we will make use of two running examples.
[Figure 4.3: Temperature (°C) vs. time (sec) for the electric heater in open loop: ideal case, uncertain R_t, uncertain T_s, and desired temperature T_d.]
we need a model. Letting the (constant) thermal power lost be proportional to the difference between the temperature of the heated area T and the ambient temperature T_s (thermodynamics), this simplified model is derived:

c\frac{dT}{dt} = Q + \frac{T_s - T}{R_t} \quad\Longrightarrow\quad \frac{dT}{dt} = \frac{V^2}{cR} + \frac{T_s - T}{cR_t},

where R_t is the thermal resistance of the fluid (the air) and c is the thermal capacity of the fluid. It is now evident that the problem is solved if the system is at equilibrium with T = T_d, that is, if

\frac{dT}{dt} = 0 = \frac{V^2}{cR} + \frac{T_s - T_d}{cR_t} \quad\Longrightarrow\quad V = \sqrt{(T_d - T_s)\frac{R}{R_t}}.

Therefore, applying an input voltage

u(t) = V(t) = \bar{V} = \sqrt{(T_d - T_s)\frac{R}{R_t}},
we easily make the temperature (the performance output y_r(t) = T(t)) converge towards T_d after a transient that depends on the initial conditions, i.e., on the initial temperature value T(0) (see Fig. 4.3, blue solid line). It is evident that, under the hypothesis that the model is accurate enough, the open loop strategy works quite well. But what happens if the thermal resistance of the fluid R_t changes (e.g., due to humidity changes)? What happens if the resistance R changes in time (e.g., aging)? Again, what happens if the ambient temperature T_s changes (e.g., season change)? In such cases, it is visually evident from Fig. 4.3 that the steady state temperature value is no longer reached. Moreover, we may want better performance, or we may want to reduce the effect of disturbances (for example, a non-ideal voltage generator). Such problems are solved using the feedback paradigm, i.e., the possibility of modifying the behavior of a system acting on its inputs by means of a controller. The simplest feedback strategy we can apply is the nonlinear on/off controller: assuming the availability of a thermometer (a sensor) that measures the temperature (hence, the measured and performance outputs are equal, y_r(t) \equiv y_m(t)), we can:
Figure 4.4: Comparison between open loop control strategy and feedback bang-bang control strategy
for the electric heater with uncertain parameters.
If T \le T_d, set V = V_{max};
If T > T_d, set V = 0.
This controller is also referred to as a bang-bang controller, with an obvious meaning. From an implementation viewpoint, this controller can be realized with a standard relay or a digital comparator. This approach may generate an infinite number of switches between 0 and V_{max} in a finite time which, in turn, generate a desired sliding mode of the outputs around the desired value. Therefore, this controller is feasible only in principle. Moreover, since this kind of control approach can be applied to any kind of system, even to mechanical systems, this behavior may generate high frequency vibrations that are hardly bearable. A straightforward improvement adds a hysteresis to the relay, using a memory:

V(t + \Delta t) = \begin{cases} 0 & \text{if } y_m(t) = T > T_d + \epsilon \\ V_{max} & \text{if } y_m(t) = T < T_d - \epsilon \\ V(t) & \text{otherwise} \end{cases}
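The relay-with-hysteresis behavior can be sketched with a simple forward Euler simulation of the heater model; all numeric values below are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Heater under the relay-with-hysteresis controller (illustrative values)
c_th, R, Rt = 1.0, 10.0, 2.0        # thermal capacity, electric and thermal resistance
Ts_amb, Td, eps = 16.0, 25.0, 0.5   # ambient temp, desired temp, hysteresis band
Vmax = 20.0

dt, t_end = 1e-3, 5.0
n = int(t_end / dt)
T = np.empty(n)
T[0] = Ts_amb
V = 0.0
for i in range(n - 1):
    # relay with hysteresis (memory): V keeps its value inside the band
    if T[i] > Td + eps:
        V = 0.0
    elif T[i] < Td - eps:
        V = Vmax
    # forward Euler step of  dT/dt = V^2/(c R) + (Ts - T)/(c Rt)
    T[i + 1] = T[i] + dt * (V**2 / (c_th * R) + (Ts_amb - T[i]) / (c_th * Rt))

# after the transient, T(t) chatters around Td roughly within the band
print(T[-1])
```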
The results for uncertain T_s are compared with the open loop strategy in Fig. 4.4. The point here is that, using the measured outputs y_m(t), we can get information about the current dynamics of the system and then, if needed, improve its behavior. Strictly speaking, we can do feedback.
Other intuitive feedback control examples are:
1. Electronics/robotics example: light tracking with two light sensors for a unicycle-like robot.
To summarize, to implement the feedback paradigm, a model of the system (given in terms of differential or difference equations, or in terms of impulse response) is needed, on top of which a control law can be designed.
Figure 4.5: Open loop (a) and relay control (c) in the case of time-varying load torque (b) for the DC motor. [Plots omitted: \omega (rad/s) and load torque vs. time (sec).]
whose initial conditions are well defined. The solution of the differential equation yields n signals y(t), D^{(1)}y(t), \dots, D^{(n-1)}y(t), one for each initial condition. Therefore, by stacking all these signals in a vector

x(t) = \begin{bmatrix} y(t) \\ D^{(1)}y(t) \\ \vdots \\ D^{(n-1)}y(t) \end{bmatrix},

the initial condition can be given by x(t_0) = x_0 (t_0 being the initial time), while the solution of the differential equation (the trajectory of the system) is given by x(t) = \bar{x}(x_0, u, t). Among all the
4.2. BASIC CONCEPT OF STABILITY 57
possible trajectories, an equilibrium is a trajectory for which Dx(t) = 0. In this section we will try
to infer the stability of a system trajectory (or equilibrium point).
Example 4.1 Consider a pendulum and its lower equilibrium point. Of course, since it is an equilibrium point, leaving the pendulum in that position, it will not move. However, is this equilibrium stable? Well, it is known that the lower equilibrium is stable: indeed, by moving the pendulum from its lower position (i.e., by perturbing its equilibrium), the pendulum starts to oscillate until it finally reaches the lower equilibrium again. However, in an ideal case in which the air friction is neglected, as well as the friction at the pendulum pivot, the pendulum oscillates indefinitely. This is exactly the same effect as for the car suspension model in Section 3.2 when the viscous friction c = 0, which yields imaginary roots.
Consider now the upper equilibrium of an inverted pendulum. Again, the pendulum will not move if no perturbation occurs. In the case of a perturbation, the pendulum will definitely leave the upper equilibrium point and it will never come back to that point. Hence, we can conclude that the equilibrium point is unstable.
From the previous example, it is clear that the stability of an equilibrium, or of a trajectory, can be assessed by studying the behavior of the trajectories of the system (i.e., the solutions of the system of differential equations) when the initial conditions are perturbed, i.e., \tilde{x}(t) = \bar{x}(\tilde{x}_0, u, t), with \tilde{x}_0 \ne x_0.
Definition 4.2 (Stability) A trajectory x(t) = \bar{x}(x_0, u, t) is stable if all the trajectories that start from initial points \tilde{x}_0 sufficiently close to x_0 remain arbitrarily close to x(t), as depicted in Fig. 4.7.(a). More precisely: \forall \epsilon > 0, \exists \delta > 0 s.t. if \|\tilde{x}_0 - x_0\| < \delta, then \|\bar{x}(\tilde{x}_0, u, t) - \bar{x}(x_0, u, t)\| < \epsilon, \forall t.

Definition 4.3 (Attractiveness) A trajectory \bar{x}(x_0, u, t) is attractive if all the trajectories that start from initial points \tilde{x}_0 sufficiently close to x_0 converge towards \bar{x}(x_0, u, t) for t \to +\infty. More precisely: \exists \delta > 0 s.t. if \|\tilde{x}_0 - x_0\| < \delta, then \lim_{t\to+\infty} \|\bar{x}(\tilde{x}_0, u, t) - \bar{x}(x_0, u, t)\| = 0.
Figure 4.7: (a) Stable trajectory and (b) asymptotically stable trajectory.
Substituting the trajectory with an equilibrium, i.e., x = \bar{x}(x_0, u, t) = constant, we obtain the same definitions of stable, attractive, asymptotically stable and unstable equilibria, as shown in Fig. 4.8. Notice that stability problems related to an equilibrium point can be translated into stability problems of the origin by a suitable change of variables.
Moreover:

1. The subspace of the state space given by all the points \tilde{x}_0 from which all the trajectories asymptotically converge to an equilibrium point x_0 is called the Region of Asymptotic Stability (RAS).

2. An equilibrium point is Globally Asymptotically Stable (GAS) if the RAS coincides with the entire state space.
Definition 4.6 (Uniform Ultimate Boundedness) A trajectory x(t) = \bar{x}(x_0, u, t) is uniformly ultimately bounded (UUB) w.r.t. a set S if \exists T(\tilde{x}_0, S) s.t. x(t) \in S, \forall t \ge t_0 + T.

These concepts of stability are due to Lyapunov, hence they are usually called concepts of Lyapunov stability, and they are the basis of control theory. Indeed, such definitions apply to any system.
Figure 4.8: (a) Stable equilibrium point and (b) asymptotically stable equilibrium point.
1. For a linear system, the stability property of a point or of a trajectory is inherited by the whole system. Hence, the RAS \equiv GAS;

2. if Re(p_i) < 0, \forall i, then the system is asymptotically stable, i.e., its impulse response tends towards zero. Indeed, the output is given by a combination of exponentials of the form e^{Re(p_i)t}. Therefore, the system is also BIBO stable;

3. if \exists i such that Re(p_i) > 0, then the system is unstable, i.e., its impulse response tends towards infinity. Indeed, the output is again given by a combination of exponentials of the form e^{Re(p_i)t}. Therefore, the system is also BIBO unstable;

4. if Re(p_i) < 0, \forall i \ne j, and p_j = 0, then the system is marginally stable, i.e., its impulse response tends towards a constant value. Indeed, the output is again given by a combination of exponentials of the form e^{Re(p_i)t}. However, the system is BIBO unstable;

5. if Re(p_i) < 0, \forall i \ne j, l, and p_j = p_l = 0, then the system is unstable, i.e., its impulse response tends towards infinity. Indeed, the output is again given by a combination of exponentials of the form e^{Re(p_i)t}, but a term proportional to t also appears. Obviously, the system is BIBO unstable;

6. the attractiveness property determines the stability, while the instability determines an unbounded behavior;

7. in an asymptotically stable system, all the trajectories of the system converge to zero exponentially.
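The case analysis above translates directly into a simple pole-based check. The helper below is our own sketch and only covers the cases enumerated in the list (in particular, it only treats poles at the origin on the imaginary axis):

```python
import numpy as np

def classify(poles, tol=1e-9):
    """Classify the stability of an LTI system from its poles,
    following the cases enumerated above."""
    p = np.asarray(poles, dtype=complex)
    if np.any(p.real > tol):
        return "unstable"                   # case 3: some Re(p_i) > 0
    if np.all(p.real < -tol):
        return "asymptotically stable"      # case 2: all Re(p_i) < 0
    if np.sum(np.abs(p) < tol) == 1:
        return "marginally stable"          # case 4: a single pole in s = 0
    return "unstable"                       # case 5: repeated poles in s = 0

print(classify([-1, -2]))      # asymptotically stable
print(classify([-1, 0]))       # marginally stable
print(classify([-1, 0, 0]))    # unstable
print(classify([-3, 1]))       # unstable
```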
Figure 4.9: LTI plant model in open loop (a) and in basic closed loop (b).
with m roots of the numerator (zeros) and n roots of the denominator (poles). If Re(p_i) < 0, \forall i, the system is asymptotically stable. If so, it is possible to apply the Final Value Theorem to understand what steady state value is reached by Y(s) for known reference inputs R(s), i.e.,

\lim_{t\to+\infty} y(t) = \lim_{s\to 0} s K_p \frac{\prod_{i=1}^{m}(s - z_i)}{\prod_{i=1}^{n}(s - p_i)} \frac{a}{s} = a P(0).

If

P(0) = 1,

then there exists \bar{t} such that y(t) \approx r(t) (zero steady state tracking error), \forall t \ge \bar{t}, for r(t) = a1(t). Even if P(0) = b \ne 1, we can apply an amplifier of gain \frac{1}{b} in series, in order to compensate for the steady state gain.
4.3. FEEDBACK DESIGN FOR LINEAR SYSTEMS 61
However, even in the absence of modeling errors, it may be that \exists i such that p_i = 0, or \exists i such that z_i = 0, or the value of the reference changes to r(t) = at. In these situations, more flexible solutions are given to the controller designer if the feedback is created between the output and the input of the system. Consider for example the simplest unitary feedback of Fig. 4.9.(b). The transfer function between the error and the reference will be given by

E(s) = R(s) - Y(s) = R(s) - P(s)E(s) \quad\Longrightarrow\quad E(s) = \frac{R(s)}{1 + P(s)}.

If 1 + P(s) has all asymptotically stable roots, we can apply again the Final Value Theorem, which yields

\lim_{t\to+\infty} e(t) = \lim_{s\to 0} s\frac{R(s)}{1 + P(s)} = \lim_{s\to 0} s\frac{d(s)}{d(s) + K_p n(s)} R(s),

where P(s) = K_p n(s)/d(s).
Therefore:
Figure 4.10: Tracking error for a system with Re(p_i) < 0, \forall i; with \exists i such that p_i = 0; and with \exists i such that z_i = 0. [Plot omitted: P, P/s, P \cdot s.]
\lim_{t\to+\infty} e(t) | \forall i: p_i \ne 0 | \exists i: p_i = 0 | \exists i, j: p_i = p_j = 0
r(t) = a1(t) | finite | 0 | 0
r(t) = at | infinite | finite | 0
r(t) = at^2 | infinite | infinite | finite

Table 4.1: Effect of poles at the origin on the tracking error of linear systems with different reference inputs.
From the previous examples, the Table 4.1 can be synthesized. In other words, since e(t) =
r(t) y(t), a certain output signal y(t) of the closed loop system can be generated if is already inside
the system. More precisely, to generate a step, i.e., Y (s) = as , at least one pole in the origin is
needed. This is the socalled Internal Model Principle.
The properties analyzed until now refer to the system P(s) as given. However, if the plant
P(s) does not have one (or more) poles in the origin, it is possible to modify its behavior by using a
dynamic controller C(s) mounted in feedback, as reported in Fig. 4.12. In such a case, we have the
plant P(s) and an additional block C(s): the controller. The objective of the control designer is to
determine C(s) in order to satisfy certain closed loop performance requirements, such as:
steady state tracking error;
settling time;
maximum overshoot;
rise time.

Figure 4.11: Tracking error for a system with Re(p_i) < 0, ∀i; for a system with ∃i such that p_i = 0; and for a system with ∃i, j such that p_i = p_j = 0.

Figure 4.12: Closed loop interconnection of the controller C(s) and the plant P(s).
Let us first concentrate on the first performance requirement: the steady state tracking error. To
this end, we will resort to the Internal Model Principle. Consider
\[
P(s) = \frac{1}{s+2} \quad\text{and}\quad r(t) = 1(t),
\]
and, using the control feedback paradigm, we have
\[
E(s) = \frac{R(s)}{1 + C(s)P(s)}.
\]
It is evident that, in order to have zero steady state tracking error for a step reference, a pole in the
origin is needed; hence it is sufficient to add it in the controller, i.e., C(s) = K_c/s, which yields
\[
E(s) = \frac{R(s)}{1 + \frac{K_c}{s}P(s)} = \frac{R(s)}{1 + \frac{K_c}{s}\,\frac{1}{s+2}} = \frac{s(s+2)\,R(s)}{s(s+2) + K_c}.
\]
In this particular case, the tracking error is zero (due to the internal model principle, see Fig. 4.13).
However, the additional free parameter Kc can further modify the output performance. Indeed,
Figure 4.13: First order system controlled with C(s) = K_c/s, with K_c = 1 and K_c = 100.
by increasing it, the output response is faster (shorter rise and settling times) than with K_c = 1.
However, oscillations with relatively high frequency may be difficult to manage for certain mechanical
systems, as is the overshoot. Indeed, with K_c = 1 the system behaves (according to the dominant
pole approximation) as a first order system, while with K_c = 100 the system behaves as a second
order system. Therefore, increasing the power of the control signal (hence, having a more powerful
actuator) is not always desirable. Notice that this behavior can be even worse depending on the
plant P(s) to control. In fact, consider
\[
P(s) = \frac{1}{s^2 + 2s + 2} \quad\text{and}\quad r(t) = 1(t).
\]
In this case, using C(s) = K_c/s, the tracking error transfer function is
\[
E(s) = \frac{R(s)}{1 + \frac{K_c}{s}\,\frac{1}{s^2 + 2s + 2}} = \frac{s(s^2 + 2s + 2)\,R(s)}{s(s^2 + 2s + 2) + K_c}.
\]
In this particular case, with Kc = 1 the system is asymptotically stable with zero steady state tracking
error, while for Kc = 100 the system is unstable (with unbounded tracking error), as depicted in
Fig. 4.14.
The lessons learned in these two cases are:
The higher the gain of the controller, the faster the response of the system. Since the gain
is directly related to the cost of the control action, this conclusion is straightforward;
For high controller gains, (undesirable) side effects are generated on the output, e.g., overshoot,
oscillations or even instability.
Hence, to the performance list previously defined, stability of the closed loop system should be
added. In fact, stability is the most important performance the system should respect.
Hence, a systematic tool to analyze the closed loop behavior of a system is necessary for a correct
control design.
4.4 Root Locus
Figure 4.14: (a) Second order system controlled with C(s) = K_c/s, with K_c = 1 and K_c = 100. (b)
Zoomed graph.
[Figure: responses of the systems P_1, P_2, P_3 and P_4.]
hence, since P(s)C(s) = C(s)P(s) (which is true only for SISO systems),
\[
G_{cl}(s) = \frac{C(s)P(s)}{1 + C(s)P(s)}.
\]
Writing C(s)P(s) = K_c K_p \bar C(s)\bar P(s), the closed loop poles are the roots of the characteristic equation
\[
1 + K_c K_p\,G(s) = 0, \qquad (4.2)
\]
where K_p is given by the plant model, K_c is the gain parameter of the controller and
\[
G(s) = \bar C(s)\bar P(s) = \frac{n(s)}{d(s)}. \qquad (4.3)
\]
Since G(s) is a complex function, the equation (4.2) is satisfied for any value of 0 ≤ K_c < +∞ if:
Modulus constraint:
\[
\left|\frac{n(s)}{d(s)}\right| = \frac{1}{K_c\,|K_p|},
\]
which yields
\[
\left|K_c K_p\,\frac{n(s)}{d(s)}\right| = 1.
\]
Phase constraint:
\[
\angle\frac{n(s)}{d(s)} = \angle\left(-\frac{1}{K_p}\right),
\]
or, equivalently,
\[
\sum_{i=1}^{m}\angle(s - z_i) - \sum_{i=1}^{n}\angle(s - p_i) =
\begin{cases}
\pi \bmod 2\pi, & \text{if } K_p > 0\\
0 \bmod 2\pi, & \text{if } K_p < 0
\end{cases}.
\]
Remark 4.7 The constraint on the phase is sufficient to draw the entire locus. The correspondence
between the position of the roots and the value of Kc is determined by the constraint on the modulus.
Therefore:
Property 1 The root locus has a number of branches that is equal to the number of open loop poles,
i.e., of poles of G(s). Each branch starts from a pole of G(s) for K_c = 0; for K_c → +∞, m branches
terminate on the m zeros of G(s), while n − m branches go towards infinity.
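Property 1 can be verified numerically on a hypothetical example, say G(s) = (s + 3)/((s + 1)(s + 5)) with K_p = 1 (an illustrative plant, not from these notes): for small K_c the closed loop roots sit near the open loop poles, while for large K_c one root approaches the zero and the other diverges:

```python
# Closed loop roots of d(s) + Kc*n(s) = (s+1)(s+5) + Kc*(s+3)
#                                     = s^2 + (6 + Kc)*s + (5 + 3*Kc)
# for a very small and a very large gain Kc (illustrative plant, Kp = 1).
import cmath

def roots_2nd(a, b, c):
    """Roots of a*s^2 + b*s + c = 0."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

small_gain = roots_2nd(1.0, 6.0 + 1e-6, 5.0 + 3e-6)  # ~ open loop poles -1, -5
large_gain = roots_2nd(1.0, 6.0 + 1e6, 5.0 + 3e6)    # one root -> zero in -3
print(small_gain, large_gain)
```

For the large gain, the second root is far in the left half plane, i.e., on the branch that goes towards infinity.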
Indeed, the closed loop poles are the roots of
\[
d(s) + K_c K_p\,n(s) = 0.
\]
Figure 4.17: Graphical representation of the phase for real and complex roots.
Property 3 For Kp > 0, a point on the real axis belongs to the locus iff it has on its right an odd
number of singularities. For Kp < 0, that number should be even or null.
Consider now the n − m singularities that go to infinity for K_c → +∞ along asymptotes. The
center of all the asymptotes is a point on the real axis located in
\[
a = \frac{\sum_{i=1}^{n} p_i - \sum_{i=1}^{m} z_i}{n - m},
\]
i.e., the center of mass of the distribution of poles and zeros. To derive the direction of the asymptotes,
express s = re^{jθ} and consider r → +∞. Hence, the phase constraint becomes
\[
\lim_{r\to+\infty}\,\sum_{i=1}^{m}\angle(re^{j\theta} - z_i) - \sum_{i=1}^{n}\angle(re^{j\theta} - p_i)
= \sum_{i=1}^{m}\angle(re^{j\theta}) - \sum_{i=1}^{n}\angle(re^{j\theta}) = -(n - m)\,\theta.
\]
Imposing the phase constraint, the n − m asymptotes have directions θ_k = (2k + 1)π/(n − m),
k = 0, ..., n − m − 1, for K_p > 0 (and θ_k = 2kπ/(n − m) for K_p < 0).
A point s = σ on the real axis may be a point of break-in/break-away, in which multiple branches,
say l, intersect. Hence, s = σ is a solution of multiplicity l of the equation 1 + K_c K_p G(s) = 0. In other
words, 1 + K_c K_p G(s) = (s − σ)^l h(s) = 0. Therefore, the first l − 1 derivatives of G(s) vanish at s = σ.
The l branches that enter a break-in point and the l branches that consequently exit from the
break-away point have tangents that divide the overall 2π angle symmetrically. Hence, there is an
angle of π/l between each break-in branch and its adjacent branches (that are break-away branches),
and vice versa:
Property 5 The root locus may have break-away (and break-in) points in which a number l of
branches intersect. Such points satisfy the phase constraint and the additional l − 1 equations
\[
\left.\frac{d^j G(s)}{ds^j}\right|_{s=\sigma} = 0, \qquad j = 1, \dots, l - 1.
\]
Using the phase constraint, it is also possible to compute the angle by which a branch leaves a
pole or tends towards a zero, as reported next:
Property 6 For K_p > 0, the angle by which the locus leaves a pole p_j or arrives at a zero z_j is given
by
\[
\theta_{p_j} = (2\nu + 1)\pi + \sum_{i=1}^{m}\angle(p_j - z_i) - \sum_{\substack{i=1\\ i\neq j}}^{n}\angle(p_j - p_i), \qquad \text{for the pole } p_j,
\]
\[
\theta_{z_j} = (2\nu + 1)\pi - \sum_{\substack{i=1\\ i\neq j}}^{m}\angle(z_j - z_i) + \sum_{i=1}^{n}\angle(z_j - p_i), \qquad \text{for the zero } z_j.
\]
Remark 4.8 It is possible to compute the value of K_c by which the locus intersects the imaginary
axis, i.e., the boundary of the stability region, solving
\[
1 + K_c K_p\,G(j\omega) = 0,
\]
and then solving the real and imaginary parts w.r.t. ω and K_c.
Summarizing, the root locus is sketched following the properties previously depicted. In particular,
it is sufficient to follow these steps:
report the open loop poles and zeros of G(s) on the complex plane;
find the segments of the real axis that belong to the locus;
find the asymptotes;
find the break-in/break-away points;
compute the angles of departure from the poles and of arrival to the zeros;
compute the intersections with the imaginary axis.

Figure 4.18: (a) Root locus for the plant P(s) with C(s) = K_c = 1. (b) Root locus for the plant
P(s) with C(s) = K_c/s = 1/s.
Example 4.9 To compute the root locus of the subsequent system when C(s) is purely proportional,
C(s) = K_c, consider
\[
P(s) = \frac{1}{s+2}.
\]
This system has one pole in −2 and one zero at infinity; hence all the real axis between −∞ and
−2 belongs to the locus, as reported in Fig. 4.18.(a).
Now, consider the problem of having zero steady state error when the input is a step signal, i.e.,
\[
P(s) = \frac{1}{s+2}, \quad r(t) = 1(t), \quad \lim_{t\to+\infty} e(t) = 0, \quad\text{where}\quad Y(s) = \frac{C(s)P(s)}{1 + C(s)P(s)}\,R(s).
\]
For the internal model principle summarized in Table 4.1, we should add a pole in the origin using
the controller C(s) = K_c/s. In such a case the transfer function to be studied, reported in (4.3), is
given by
\[
G(s) = \bar C(s)\bar P(s) = \frac{1}{s(s+2)}, \qquad K_c = 1,
\]
and reported in Fig. 4.18.(b). To visually have a comparison between the open and closed loop plant
behaviors, consider the graph shown in Fig. 4.19.(a). The plant has hence zero steady state tracking
error, as desired.
To summarize, the plant P(s) closed in loop with C(s) = K_c is closed loop asymptotically stable
for any value of K_c and negative feedback. This is true also for C(s) = K_c/s and any value of K_c for
negative feedback.

Figure 4.19: (a) Open and closed loop responses to a unitary step reference. (b) Grid showing the
damping factor ξ and the natural frequency ω_n on the root locus.

However, by changing the value of the gain, the overall output behavior changes. As
depicted in Fig. 4.13, the output changes from a typical first order response to a second order one.
This is due to the position of the closed loop poles, which univocally determine the response. Using the
dominant pole approximation, the pole closest to the imaginary axis is the slowest, hence the
dominant one (since it has the smallest real part in absolute value). Analytically, for K_c = 1 we have two coincident real
roots in s_{1,2} = −1. For K_c = 100, we have two complex conjugate roots in s_{1,2} = −1 ± j√99 ≈ −1 ± 10j.
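These closed loop roots are easy to verify: for Example 4.9 the characteristic equation is 1 + (K_c/s)(1/(s + 2)) = 0, i.e., s² + 2s + K_c = 0. A quick check using only the standard library:

```python
# Closed loop poles of s^2 + 2s + Kc = 0 for the two gains of Example 4.9.
import cmath

def closed_loop_poles(Kc):
    disc = cmath.sqrt(4 - 4 * Kc)          # discriminant of s^2 + 2s + Kc
    return ((-2 + disc) / 2, (-2 - disc) / 2)

p1 = closed_loop_poles(1)      # double real pole in s = -1
p100 = closed_loop_poles(100)  # s = -1 +/- j*sqrt(99), roughly -1 +/- 10j
print(p1, p100)
```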
In general, a second order system with unitary steady state gain can be written in the canonical form
\[
G(s) = \frac{1}{\dfrac{s^2}{\omega_n^2} + \dfrac{2\xi}{\omega_n}\,s + 1}.
\]
For example, this is the case of Example 4.9, where for K_c = 100 we have two complex conjugate
roots in s_{1,2} ≈ −1 ± 10j, with unitary steady state gain due to the presence of a pole in the origin of
the controller C(s). Recalling Section 3.2.3, the roots of G(s) are of the form
\[
s_{1,2} = -\xi\omega_n \pm j\,\omega_n\sqrt{1 - \xi^2},
\]
therefore
\[
\omega_n = |s_1|, \qquad \xi = -\frac{\mathrm{Re}(s_1)}{|s_1|}.
\]
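As a quick illustration of these formulas, take the closed loop pole s₁ = −1 + 10j from the example above:

```python
# Recover natural frequency and damping from a pole location (example values).
import math

s1 = complex(-1.0, 10.0)
wn = abs(s1)              # omega_n = |s1| = sqrt(101), about 10.05
xi = -s1.real / abs(s1)   # xi = -Re(s1)/|s1|, about 0.0995 (lightly damped)
print(wn, xi)
```

The very small damping factor explains the pronounced oscillations observed for K_c = 100 in Fig. 4.13.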
The values of ξ and ω_n are depicted using a grid superimposed on the root locus in Fig. 4.19.(b).
To summarize, it is evident that we can shape the performance of the system, i.e., rise time,
settling time, overshoot, rather than simple stability, using the root locus and the first or second
order dominant pole approximation.
4.5 Bibliography
Chapter 5
Digital Control
The control law for the motor control synthesized using the root locus method is continuous time,
i.e., it continuously measures the angular velocity of the motor and computes the control input.
However, the control law should be implemented in a digital embedded platform. Therefore, the
control law has to be properly sampled in order to obtain a discrete time controller. This chapter
presents a methodology to discretize a continuous time system and then implement it on a digital
embedded platform. The references on this subject are [Oga95] and [AW96]. For the details about
the particular Operating System used on the Lego Mindstorm, the reader is referred to [Chi].
The plant measurements, e.g., the angular velocity of a motor shaft, are discretized using a
sampler, i.e., an A/D converter. The most common sampler is the zero-order hold, aka sample
& hold.
The sampled measurements are passed to the algorithm implementing the digital controller.
The result of the computation is then given to a data reconstructor, i.e., a D/A converter,
which reconstructs the continuous signal to be fed to the plant as a piecewise continuous signal.
There exist different solutions to discretize a system. A trivial solution may be to use a sampling
time that is the smallest possible, while keeping the controller continuous and using numeric tools
for differential equations. Alternatively, there are different solutions that try to minimize the dis-
cretization approximations, which can be applied to transfer functions G(s) or to state space models.
In this case, the approximations introduced are strictly related to the discrete time approximation of
the integral of a continuous function.
Example 5.1 Consider the following transfer function and its time representation
\[
\frac{Y(s)}{U(s)} = G(s) = \frac{a}{s + a} \quad\Longleftrightarrow\quad \dot y(t) + a\,y(t) = a\,u(t).
\]
Considering the continuous-time integral, one gets
\[
y(t) = y(0) + \int_0^t \dot y(\tau)\,d\tau.
\]
As reported in Example 5.1, a linear differential equation expressed in continuous time is expressed
in discrete time by means of a linear difference equation, where the difference is taken with respect
to time. An analog of the differential operator of continuous time equations can be defined for
linear difference equations with constant coefficients. The forward-shift operator is denoted by q, i.e.,
\[
q\,f(k) = f(k + 1),
\]
where the sampling period is assumed to be the time unit, i.e., if f(t) is sampled every T seconds
and f(k) ≜ f(kT), then q f(k) = f(k + 1) ≜ f(kT + T).
The backward-shift operator, or delay operator, is denoted by q^{-1}, i.e.,
\[
q^{-1}\,f(k) = f(k - 1).
\]
Figure 5.1: Different approximations of the integral: (a) Euler's method, (b) backward rectangular
rule and (c) trapezoidal rule.
that is, it is sufficient to substitute s in G(s) with the proper shift operator equation.
The second method, the inverse of the previous one, is given by the backward difference or backward
rectangular rule (Fig. 5.1.(b)):
\[
\frac{d\,x(k+1)}{dt} = \frac{d\,q x(k)}{dt} \approx \frac{q - 1}{T}\,x(k).
\]
Therefore, by recalling the fact that s corresponds to the differential operator, one gets
\[
s\,q \approx \frac{q - 1}{T}, \quad\text{i.e.,}\quad s \approx \frac{q - 1}{T q} = \frac{1 - q^{-1}}{T},
\]
5.1.2 Hints
A quite simple way to automatically discretize the system is to:
1. Substitute the variable s in G(s) with the chosen approximation in q (e.g., s ≈ (q − 1)/T);
2. Reduce the resulting expression to a ratio of two polynomials in q;
3. Rewrite the transfer function as a difference equation, i.e., d(q) y(k) = n(q) u(k);
4. The coefficients of the two polynomials in q are the coefficients of y(k), i.e., (a_n q^n + a_{n-1} q^{n-1} + ... + a_1 q + a_0) y(k)
turns into a_n y(k + n) + a_{n-1} y(k + n − 1) + ... + a_1 y(k + 1) + a_0 y(k). The same for u(k).
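As a sketch of this recipe (gains and sampling time are illustrative), discretizing the integral controller C(s) = K_c/s with the backward rule s ≈ (q − 1)/(Tq) gives U(q)(q − 1)/(Tq) = K_c E(q), i.e., the difference equation u(k) = u(k − 1) + K_c T e(k):

```python
# Backward-rule discretization of C(s) = Kc/s as a difference equation:
# u(k) = u(k-1) + Kc*T*e(k). Gains and sampling time are illustrative.
def make_discrete_integrator(Kc, T):
    state = {"u": 0.0}
    def step(e):
        state["u"] += Kc * T * e   # accumulate the scaled error sample
        return state["u"]
    return step

ctrl = make_discrete_integrator(Kc=2.0, T=0.1)
out = [ctrl(1.0) for _ in range(3)]   # constant error e(k) = 1
print(out)                            # ramps up by Kc*T = 0.2 per step
```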
The most accurate discretization algorithm among the previous ones is the one given by the trapezoidal
rule. Indeed, it gives the best approximation of the integral operator. More precisely:
The forward difference may generate an unstable discrete-time system starting from a stable
continuous-time system;
The backward difference may generate stable discrete-time systems starting from unstable
continuous-time systems;
The trapezoidal rule maps continuous-time stable systems into discrete-time stable systems
and unstable ones into unstable systems.
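This stability mapping can be illustrated with G(s) = a/(s + a) from Example 5.1, using an illustrative value of a and a deliberately coarse sampling time: each rule places the discrete pole in a different location.

```python
# Discrete pole of G(s) = a/(s+a) (continuous pole in s = -a, stable for
# a > 0) under the three rules, with a deliberately coarse sampling time.
a, T = 3.0, 1.0

z_forward = 1 - a * T                  # from s ~ (z - 1)/T
z_backward = 1 / (1 + a * T)           # from s ~ (z - 1)/(T z)
z_tustin = (2 - a * T) / (2 + a * T)   # from s ~ (2/T)(z - 1)/(z + 1)

for name, z in (("forward", z_forward), ("backward", z_backward),
                ("tustin", z_tustin)):
    print(f"{name:8s} pole = {z:+.3f}  stable = {abs(z) < 1}")
```

Here the forward difference maps the stable continuous pole to z = −2, outside the unit circle (unstable), while the backward and Tustin rules keep it inside.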
Remark 5.3 Usually, the shift operator is a convenient way to express the complex variable z, the
variable of the Z-transform, which is the discrete time counterpart of the Laplace transform. As long
as the Z-transform is considered, the approximation of the trapezoidal rule is also called Tustin's
approximation or bilinear transformation. The Tustin approximation is derived by the approximation
of the continuous time delay of the sampling time T, i.e.,
\[
z = e^{sT} \approx \frac{1 + sT/2}{1 - sT/2}.
\]
5.2 Multitask Implementation
An NXT OSEK program consists of two parts: an OIL (OSEK Implementation Language) file and a
C/C++ source code. The OIL file basically describes the architecture of the scheduler, the
tasks (e.g., priority, activation, autostart), etc. Once the OIL and the C/C++ files are
compiled, an RXE file is generated, which has to be downloaded (via USB) to the Lego NXT brick
using the rxeflash.sh script.
Fig. 5.2 reports an OIL file for the Hello world example, which is coded in Fig. 5.3.
Fig. 5.4 reports an example code for a program that reads the light sensor and displays it on the
Lego brick.
Finally, Fig. 5.5 shows the code for a periodic task.
5.3 Bibliography
[AW96] K.J. Åström and B. Wittenmark, Computer Controlled Systems, Prentice Hall Inc., November 1996.
[Oga95] K. Ogata, Discrete-Time Control Systems, Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1995.
Figure 5.4: OIL and C code for a program that reads the light sensor and displays its value on the
Lego brick.
Figure 5.5: OIL and C code for a task with periodic activation.
Chapter 6
Simulation and Control of a Wheeled Robot
The robotic system adopted for the project is a unicyclelike vehicle, which is a wheeled mobile robot
(WMR) with differential drive [SSVO08]. First, a rigorous methodology to derive the kinematic
model of a vehicle (and in particular of a unicycle) is presented. Since the model is non-linear, it will
be linearized using a first order Taylor approximation. It is then possible to derive a controller for the
linearized system using the root locus.
h(q, t) = 0.
In this case, since the constraint depends explicitly on time, it is called rheonomic. In what follows,
only scleronomic (time invariant) constraints will be considered, i.e.,
h(q) = 0,
where h(q) is a vector function with r entries, i.e., h(q) = [h_1(q), ..., h_r(q)]^T, one for each constraint.
Such constraints are called holonomic (or integrable). The effect of holonomic constraints is to reduce
the space of accessible configurations to a subset of R^n with dimension n − r. A mechanical system
for which all the constraints can be expressed in this form is called holonomic.
In the presence of holonomic constraints, the implicit function theorem, or Dini theorem, can
be used to express r generalized coordinates as a function of the remaining n − r, so as to actually
eliminate them from the formulation of the problem. However, due to the singularities this procedure
may introduce, it is more convenient to replace the original generalized coordinates with a reduced
set of n − r new coordinates that are directly defined on the accessible subspace. Notice that
holonomic constraints are generally the result of mechanical interconnections between the various
bodies of the system: prismatic and revolute joints used in robot manipulators are a typical source
of holonomic constraints, and joint variables are an example of reduced sets of coordinates.
Constraints that involve generalized coordinates and velocities,
\[
c(q, \dot q) = 0,
\]
are called kinematic constraints. When they are linear in the velocities, they can be written in the
so-called Pfaffian form
\[
A(q)\,\dot q = 0.
\]
Figure 6.1: Wagon constrained on a linear track, with generalized coordinates q = [x, y, θ]^T.
trajectories verify, instantly, the nonholonomic constraint. Hence, only the velocities that belong to
the null space of A(q) are feasible. More precisely, the kinematic model is defined by
\[
\dot q = G(q)\,u,
\]
where G(q) is a basis of N(A(q)), whose columns g_i are the input vector fields (not unique in
general). Notice that the kinematic model expresses velocities that are compatible with the constraints;
indeed
\[
A(q)\,\dot q = A(q)\,G(q)\,u = 0.
\]
Notice that q ∈ R^n and u ∈ R^m, where m = n − r. Furthermore, the kinematic model here
derived is driftless, because one has q̇ = 0 if the input is zero.
The constrained kinematic model represents a very useful tool for the analysis of nonholonomic
systems. In mobile robotics, indeed, the mechanical simplicity of the construction typically yields
nonholonomic constraints.
The constraint for the wagon is the linear track, with mathematical description given by y =
x tan θ_b + b, where θ_b is the attitude angle of the linear track w.r.t. the X axis. Therefore,
\[
h_1(q) = y - x\tan\theta_b - b = 0
\quad\Longrightarrow\quad
\frac{d h_1(q)}{dt} = \dot y - \dot x\tan\theta_b = 0.
\]
The second constraint expresses the equality between the attitude angle of the track and the
attitude angle of the wagon:
\[
h_2(q) = \theta - \theta_b = 0
\quad\Longrightarrow\quad
\frac{d h_2(q)}{dt} = \dot\theta = 0.
\]
Both constraints are then holonomic, since they are expressed with respect to positions.
In Pfaffian form, we have
\[
A(q)\,\dot q = \begin{bmatrix} -\sin\theta_b & \cos\theta_b & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} \dot x \\ \dot y \\ \dot\theta \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]
Notice that the rank of the matrix A(q) is always equal to two. Therefore, the dimension of
the null space will be 1, and the kinematic model is given by
\[
\dot q = G(q)\,u = \begin{bmatrix} \cos\theta_b \\ \sin\theta_b \\ 0 \end{bmatrix} u.
\]
Remark 6.1 Notice how the holonomic constraint limits the accessibility of the wagon on the plane.
Furthermore, only one variable is independent, i.e., the position on the track, and hence the other
two variables may be removed using the implicit function theorem.
\[
h_2(q) = \theta - \arctan\frac{y}{x} - \frac{\pi}{2} = 0
\quad\Longrightarrow\quad
\frac{d h_2(q)}{dt} = \dot\theta + \frac{y}{R^2}\,\dot x - \frac{x}{R^2}\,\dot y = 0.
\]
Figure 6.2: Wagon constrained on a circular track, with generalized coordinates q = [x, y, θ]^T.
Since the dimension of the null space of A(q) is equal to 1, a possible choice is given by
\[
G(q) = \begin{bmatrix} -y \\ x \\ 1 \end{bmatrix}
\quad\Longrightarrow\quad
\dot q = \begin{bmatrix} -y \\ x \\ 1 \end{bmatrix} u.
\]
Summarizing, this example presents two holonomic constraints, hence only the position on the
circular track is of interest. It is worthwhile to note that u is proportional to the forward velocity of
the wagon on the circular track, and that for u > 0 the motion is counterclockwise.
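A quick numerical check of this choice (with the first row of A(q) reconstructed from the circle constraint x² + y² = R², and with illustrative values) verifies that G(q) indeed lies in the null space of A(q):

```python
# Check that G(q) = [-y, x, 1]^T annihilates both constraint rows for a
# point on a circular track of radius R (values are illustrative).
R = 2.0
x = 1.2
y = (R ** 2 - x ** 2) ** 0.5            # a point on x^2 + y^2 = R^2

A = [[x, y, 0.0],                        # from x*xdot + y*ydot = 0
     [y / R ** 2, -x / R ** 2, 1.0]]     # from the differentiated heading constraint
G = [-y, x, 1.0]

residual = [sum(a * g for a, g in zip(row, G)) for row in A]
print(residual)   # both entries are (numerically) zero
```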
\[
v = \frac{R\,(\omega_r + \omega_l)}{2}, \qquad \omega = \frac{R\,(\omega_r - \omega_l)}{D},
\]
where R is the wheel radius, D is the length of the wheel axle, and ω_r and ω_l are the angular
velocities of the right and left wheel, respectively.
For space limitations, kinematic models of more complicated vehicles are not considered in these
notes (e.g., cooperative unicycles, bicycle, car-like). However, it is possible, using the presented
mathematical tools, to derive the kinematic model of any vehicle.
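As an illustration, the standard unicycle kinematic model q̇ = [v cos θ, v sin θ, ω]^T can be simulated with a simple forward Euler integration (inputs and step size below are illustrative, not taken from the project):

```python
# Forward Euler integration of the unicycle kinematic model
# xdot = v*cos(theta), ydot = v*sin(theta), thetadot = omega.
import math

def unicycle_step(q, v, omega, dt):
    x, y, theta = q
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

q = (0.0, 0.0, 0.0)
for _ in range(100):                       # 1 s of motion with dt = 0.01
    q = unicycle_step(q, v=1.0, omega=0.5, dt=0.01)
print(q)   # heading has grown to ~0.5 rad; (x, y) traced an arc
```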
The dynamic models of the vehicles are not considered in these notes since, from a practical
viewpoint, most vehicles are controlled using the kinematic model. Indeed, it is possible to
design a low level controller that compensates for all the dynamic effects of the platform. This is
exactly the linear controller implemented for the vehicle's motors. In practice, the most popular and
widely used linear low level controller is the Proportional Integral Derivative (PID).
so that, solving the second equation for Θ_e(s) and then plugging it into the first equation, gives
\[
Y_e(s) = \frac{v}{s^2}\,U_2(s),
\]
which is the transfer function of the problem. It is now possible to use the root locus approach of
Section 4.4 to design a suitable controller satisfying the desired target performance, and successively
discretize and implement it following the steps reported in Chapter 5, to finally have the robot
following the desired path.
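Note that v/s² is a double integrator, which pure proportional feedback cannot asymptotically stabilize (the closed loop poles stay on the imaginary axis); a PD-like law, of the kind a root locus design naturally suggests, does the job. A minimal simulation sketch, with illustrative gains, speed and step size:

```python
# Forward Euler simulation of ye'' = v*u2 under the PD-like law
# u2 = -kp*ye - kd*ye'; closed loop polynomial: s^2 + v*kd*s + v*kp.
v, kp, kd, dt = 1.0, 4.0, 4.0, 0.001

ye, ye_dot = 1.0, 0.0           # initial lateral error of 1
for _ in range(10000):           # simulate 10 s
    u2 = -kp * ye - kd * ye_dot
    ye_dot += v * u2 * dt        # integrate ye'' = v*u2
    ye += ye_dot * dt
print(ye)   # the lateral error has practically vanished
```

With these gains the closed loop polynomial is s² + 4s + 4 = (s + 2)², i.e., a critically damped response.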
6.3 Bibliography
[AW96] K.J. Åström and B. Wittenmark, Computer Controlled Systems, Prentice Hall Inc., November 1996.
[Kai80] T. Kailath, Linear Systems, Prentice-Hall, Englewood Cliffs, NJ, 1980.
[SSVO08] B. Siciliano, L. Sciavicco, L. Villani, and G. Oriolo, Robotics: Modelling, Planning and Control, Springer Verlag, 2008.