
University of Trento

Faculty of Mathematical, Physical and Natural Sciences


Master of Science in Computer Science

Department of Engineering and Computer Science


DISI

NOTES FOR THE COURSE

Laboratory of Embedded Control Systems

Luigi Palopoli and Daniele Fontanelli


Abstract

The aim of these notes is to cover the theoretical aspects of the course of Laboratory of
Embedded Control Systems. Practical insight into the relevant aspects of the project development will
also be given. However, most of the practical issues related to the use of the robotic platform (i.e.,
the Lego Mindstorm), of the real-time embedded kernel to be adopted (i.e., NXT Osek) and of the
chosen simulation and design software (i.e., Scicoslab) will be referenced throughout these notes.

Contents

1 Introduction and Background Material
1.1 The Course
1.2 Brief Introduction to Scicoslab
1.2.1 Command Window
1.2.2 Vectors and Matrices
1.2.3 Strings
1.2.4 Scicoslab Programming
1.2.5 Scicos
1.2.6 Scicoslab references
1.3 The Lego Mindstorm
1.4 Bibliography

2 The Laplace Transform
2.1 Bilateral L-transform
2.2 Unilateral L-transform and ROC
2.3 Inverse L-transform
2.4 Properties of a Signal given the L-transform
2.5 Bibliography

3 Modeling and Identification
3.1 Dynamic Systems
3.1.1 Fundamental Properties of Dynamic Systems
3.1.2 Impulse Response
3.2 Analysis of a Linear System
3.2.1 Systems with Divergent Behaviors
3.2.2 First Order Systems
3.2.3 Second Order Systems
3.3 Motor model
3.4 Identification
3.4.1 Time Domain Approach
3.4.2 Frequency Domain Approach
3.5 Bibliography

4 Control Design
4.1 Feedback Paradigm
4.1.1 The Example of the Electric Heater
4.1.2 The Example of the DC Brushless Motor
4.1.3 The Closed Loop
4.2 Basic Concept of Stability
4.2.1 Linear Time-Invariant System Stability
4.3 Feedback Design for Linear Systems
4.4 Root Locus
4.4.1 Root locus construction
4.4.2 Analysis of a Second Order System in the Root Locus
4.5 Bibliography

5 Digital Control
5.1 Discretization of Linear Continuous Time Controllers
5.1.1 Approximation of the Integral
5.1.2 Hints
5.2 Multitask Implementation
5.3 Bibliography

6 Simulation and Control of a Wheeled Robot
6.1 Kinematic Model of the Robot
6.1.1 Wagon Constrained on a Linear Track
6.1.2 Wagon Constrained on a Circular Track
6.1.3 The Unicycle Vehicle
6.2 Kinematic Linear Control
6.2.1 Estimation Algorithms and Observers
6.3 Bibliography
Chapter 1

Introduction and Background Material

This Chapter describes the structure of the course and summarizes the fundamental concepts assumed
to be already known by the students.

1.1 The Course


The course of Laboratory of Embedded Control Systems is not a course on Automatic Control, nor
a course on Real-Time control, and definitely not a course on Digital Control. However, basic
knowledge of all the previously mentioned engineering areas is needed to successfully complete the
assigned project.
In fact, this course is a project course, in which the development of a practical real system
and the way to achieve it constitute the basis of the exam. More precisely, starting from the
knowledge gained during the course of Signals and Systems, the students will be guided through
the so-called Model Based Design paradigm. In practice, given a mathematical model of a generic
system (or plant), with Model Based Design it is possible to synthesize a control law that is able
to modify the behavior of the plant according to some predefined performance indices. The system
considered in this course is a wheeled mobile robot to be assembled with the available Lego Mindstorm
kits. Basically, the students have to: 1) derive a mathematical model of the robot (and
its components, i.e., actuators, sensors, motion constraints); 2) identify the physical parameters of
the model; 3) formulate design specifications and performance targets for the modelled system; 4)
design proper control laws in order to fulfil the pre-specified goals; 5) simulate the system subjected
to the control law to assess its performance, limits and adherence to the specifications; 6) generate a
software implementation of the controller implementing the control law; and finally 7) test the overall
system on the field.
Therefore, in order to meet the stimulating challenges of this course, the students will be guided
through the following scheduled steps:

1. Understanding the use of Scicoslab. We will give first a brief description of the adopted tool
(Section 1.2), then it will be used throughout these notes to show how the theory can be applied
in practice;
2. Approaching the mechanical platform (Section 1.3);
3. Recap on Laplace Transform (Chapter 2);


4. Modeling systems, such as actuators and sensors, and learning how to identify their physical
parameters (Chapter 3);

5. Control law design techniques for Single Input Single Output (SISO) systems (Chapter 4);

6. Digital control design (Chapter 5);

7. Simulation and control design for a complex system like a wheeled mobile robot (Chapter 6);

8. Practical digital implementation (Section 5.2) and test on the field.

Throughout these notes, a list of reference books and materials will be given where appropriate. In this
introductory section, we simply list some books that explain the basic concepts underpinning the
Model Based Design paradigm and that can be used as references. For linear systems and control, we
can suggest [PPR03] and [Kai80]. Two books mainly related to the joint world of control design
and digital implementation are [Oga95] and [AW96]. As for robot modeling and
control, a useful book is [SSVO08].

1.2 Brief Introduction to Scicoslab


Scicoslab is a free scientific software package for numerical computation, modeling and simulation of
dynamical systems, data processing and statistics, providing a powerful open computing environment
for engineering and scientific applications (see [Groc] and [Grob]). Scicoslab is open-source
software. Since 1994 it has been distributed freely along with the source code via the Internet, and it
is currently used in educational and industrial environments around the world. It includes hundreds
of mathematical functions, with the possibility to interactively add programs written in various languages
(C, C++, Fortran...). It has sophisticated data structures (including lists, polynomials, rational
functions, linear systems...), an interpreter and a high-level programming language.
Scicoslab is quite similar to Matlab, and the range of functions is comparable. The largest
benefit of Scicoslab is of course that it is free. It is also similar to Octave, which is likewise
free; in addition, Scicoslab comes with Scicos, which is automatically installed with Scicoslab. Scicos is a
block-diagram based simulation tool similar to Simulink and the LabVIEW Simulation Module, and it is
able to easily implement Hardware In the Loop simulations as well as interface with external devices.

1.2.1 Command Window


When Scicoslab starts, it opens a command window (Fig. 1.1). Scicoslab commands are executed at the
command line by entering the command and then pressing the Enter key.
Typing help opens the help browser (Fig. 1.2). You can type help followed by the name of a command
(or apropos followed by a keyword) to get directly to the help page of the desired command.
Variables are defined by simply assigning them the result of an operation. Using ; at the end of
an operation suppresses the output. Using , in the command line allows executing more
than one command sequentially. Scicoslab is case sensitive, i.e., a and A are two different variables.
Typing the name of a variable in the command window displays its value; defined variables live in the workspace.
There are two ways to see the contents of the workspace:

Figure 1.1: Scicoslab command window.

Executing the command who at the command line, which just lists the variables in the command
window;

Menu Applications/Browser Variables, which opens the Browser Variables window (or type
browsevar();).
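A short interactive session (the variable names are made up for illustration) shows these conventions in practice:

```scilab
// A trailing ";" suppresses the echo of the result
a = 3;            // nothing printed
b = a + 2         // prints  b = 5.
c = 1, d = 2      // "," separates two commands executed in sequence
A = 10;           // distinct from "a": Scicoslab is case sensitive
who               // lists the variables currently in the workspace
```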

By closing Scicoslab, the variables created in the workspace are deleted. You can save variables to a file
using the save function. You can determine how numbers are displayed in the command window
with the format function, but the internal representation of a number is independent of the
chosen display format.
Scicoslab provides users with a large variety of basic objects starting with numbers, variables and
character strings up to more sophisticated objects such as booleans, polynomials, and structures. An
object is a basic element or a set of basic elements arranged in a vector, a matrix, a hypermatrix, or
a structure (list).
There is a set of special built-in constants, which are summarized in Fig. 1.3.

1.2.2 Vectors and Matrices


Scicoslab functions are vectorized, i.e., functions can be called with vector arguments. A matrix in
Scicoslab refers to a one- or two-dimensional array, which is internally stored as a one-dimensional
array (two-dimensional arrays are stored in column order). It is therefore always possible to access
matrix entries with either one or two indices. Vectors and scalars are stored as matrices. Multidimensional
matrices can also be used in Scicoslab; they are called hypermatrices.
The elementary construction operators, which are overloaded for all matrix types, are the row
concatenation operator ; and the column concatenation operator ,. These two operators perform the
concatenation operation when used in a matrix context, that is, when they appear between [ and ].
Fig. 1.4 summarizes basic matrix operations. In addition, the following two matrix functions prove
to be very useful:

Figure 1.2: Scicoslab help window.

The function size returns the dimension of a matrix;


The function length returns the length of a matrix (i.e., the overall number of entries).

Moreover, a set of matrix operators is reported in Fig. 1.5. Furthermore, the operator $ returns
the index of the last element of a matrix.
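A minimal sketch tying these operators and functions together (the matrix M is an arbitrary example):

```scilab
// ";" concatenates rows, "," concatenates columns inside [ ]
M = [1, 2, 3; 4, 5, 6];   // a 2x3 matrix
size(M)                    // returns the dimensions: 2. 3.
length(M)                  // overall number of entries: 6.
M(2, 3)                    // access with two indices: 6.
M(4)                       // same matrix, one (column-order) index: 5.
M($)                       // "$" is the index of the last element: 6.
```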

1.2.3 Strings
Strings in Scicoslab are delimited by either single quotes ' or double quotes ", which are equivalent. If one
of these two characters is to be inserted in a string, it has to be preceded by a delimiter, which is
again a single or double quote. Basic operations on strings are concatenation (operator +)
and the function length. A string is just a 1x1 string matrix, whose type is denoted by string (as
returned by typeof). A brief list of string functions is offered in Fig. 1.6.
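For instance (illustrative values), the quoting and concatenation rules work as follows:

```scilab
s1 = "embedded";           // double quotes
s2 = 'control';            // single quotes are equivalent
s3 = s1 + " " + s2         // concatenation with "+": "embedded control"
length(s3)                 // number of characters: 16.
typeof(s3)                 // returns "string"
q = "she said ""hello"""   // doubling the delimiter embeds a quote
```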

1.2.4 Scicoslab Programming


A Scicoslab script is a text file, with extension .sce, containing commands. Scripts can be edited
with the Scipad editor (type scipad or use the menu). Scripts can also have extension
.sci; however, the default extension when saving a script file in Scipad is .sce. It is very useful to
use scripts even for small sets of operations, because the project is saved in files, which is good for
documentation and very convenient for debugging and repeated executions. Lines starting with // are
treated as comments.
There are two ways to execute the name.sce script:

With the Execute / Load into Scicoslab menu in Scipad;



Figure 1.3: Scicoslab built-in constants.

Figure 1.4: Scicoslab matrix functions.

By executing the command exec('name.sce') at the command line.

The command exec works properly only if the current directory of Scicoslab matches the directory
where the script has been saved. Typing cd (or chdir) followed by the name of the desired folder
changes the current directory to that folder; if no name is specified, the directory is changed
to the home directory. Typing getcwd (or pwd) displays the current directory. Alternatively,
use the menu File / Get Current Directory to get the current folder and the menu File /
Change Directory to select the desired folder.
A function is identified through its calling syntax. Functions can be hard-coded functions (hard-coded
functions are sometimes called primitives) or Scicoslab-coded functions (sometimes
called macros in Scicoslab). A Scicoslab-coded function can be defined interactively using the keywords
function and endfunction, loaded from a Scicoslab script using exec or getf, or saved and
loaded in binary mode using save and load. Definition of a function:
function [<name1>,<name2>,...,<namep>]=<name-of-function>(<arg1>,<arg2>,... ,<argn>)
<instructions>

Figure 1.5: Scicoslab matrix operations.

Figure 1.6: Scicoslab string functions.

endfunction
Calling of a function:
[<v1>,<v2>,...,<vp>]=<name-of-function>(<expr1>,<expr2>,... ,<exprn>)
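As a concrete sketch of this syntax (the function stats and its outputs are made-up examples):

```scilab
// A function with one input and two outputs
function [m, s] = stats(v)
    // s: sum of the entries of v; m: their mean
    s = sum(v);
    m = s / length(v);
endfunction

[m, s] = stats([1 2 3 4])   // m = 2.5, s = 10.
```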
If more than one function shares the same variable, the variable should be declared with global. Warning: if
a function uses an undeclared variable in its scope, Scicoslab will also search for it in the workspace!
A script may contain function definitions. As previously mentioned, Scicoslab script file names
usually have the extension sce, but if the script contains only function definitions, then the sci
extension is used. In that case the command getf can be used instead of exec to load the functions.
When Scicoslab is started it executes (as a script) the file .scilab or scilab.ini, if it exists.
The execution of Scicoslab code can be interrupted using the command pause (menu Stop of the
Control menu in the main menu) or using the interrupt key combination Ctrl-C. The pause command
can also be added to a function explicitly. With the command whereami (or where) it is possible to detect
where the pause has occurred. In the local environment of the pause, one can check the status of the
variables, and the current execution can be resumed with the command resume. Leaving the pause
mode and ending the debugging is done with the abort command.
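As an illustrative sketch (the function name is made up), pause can be placed inside a function to act as a breakpoint:

```scilab
// Hypothetical function used only to illustrate pause-based debugging
function y = debugme(x)
    y = x^2;
    pause        // execution stops here; whereami shows the location,
                 // the local variables (x, y) can be inspected,
                 // resume continues, abort ends the debugging session
    y = y + 1;
endfunction
```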

In the following, a list of basic programming instructions:

Branching :

if <condition> then
<instructions>
else
<instructions>
end
select <expr> ,
case <expr1> then
<instructions>
case <expr2> then
<instructions>
...
else
<instructions>
end

Iterations :

for <name>=<expr>
<instructions>
end
while <condition>
<instructions>
end

The presence of a break statement in the <instructions> ends the loop.
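The constructs above can be sketched in a short (made-up) example combining a for loop, break and select:

```scilab
// for over a range of values; break leaves the loop early
total = 0;
for k = 1:10
    if k > 5 then
        break;             // stops the loop once k exceeds 5
    end
    total = total + k;     // total becomes 1+2+3+4+5 = 15
end

// select/case on the value of an expression
select modulo(total, 2)
case 0 then
    disp("even");
case 1 then
    disp("odd");           // this branch runs, since total = 15
else
    disp("unexpected");
end
```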


The functions for basic data input/output are listed in Fig. 1.7. Several Scicoslab functions can
be used to perform formatted input/output. The functions write and read are based on Fortran
formatted input and output. They redirect input and output to streams, which are obtained through
the use of the function file.
Scicoslab has its own binary internal format for saving Scicoslab objects and Scicoslab graphics
in a machine-independent way. It is possible to save variables in a binary file from the current
environment using the command save. A set of saved variables can be loaded in a running Scicoslab
environment with the load command. A list of binary input/output functions is given in Fig. 1.8.

Plotting
The basic command for plotting data in a figure is plot2d. The properties of the graph (as of any
entity in Scicoslab) can be obtained using the get command; each property is then made
accessible and can be modified by the user. In the Graphics window menu it is possible to click on
the GED button, which opens the Graphics Editor.
To create a figure, type hf = scf(n), where n is the identifier and hf is the figure handle. The
latter specifies the plot to be modified. For example, clf(hf) clears the figure whose handle is hf, while

Figure 1.7: Scicoslab Input/Output functions.

Figure 1.8: Scicoslab binary Input/Output functions.

scf(hf) gives the focus to the figure whose handle is hf. Typing ax1=gca();ax1.grid=[0,0]; adds
the grid, where gca stands for get current axes. Important functions for handling plots and figures are
reported in Fig. 1.9, graphics primitives are briefly described in Fig. 1.10 and, finally, Fig. 1.11
summarizes the window primitives.
Plotting examples:

t=0:0.1:3;
plot2d(t,sin(t));

function y=fun(t); y=sin(t).*sin(t); endfunction;


t=0:0.1:3;
plot2d(t,fun(t));

t=linspace(-20*%pi,20*%pi,2000);
param3d1(sin(t),t.*cos(t)/max(t),t/100)

Figure 1.9: Scicoslab plot functions.

x=linspace(-%pi,%pi,40); y=linspace(-%pi,%pi,40);
plot3d(x,y,sinh(x)'*cos(y))

v=rand(1,2000,"normal");
histplot([-6:0.4:6],v);
function [y]=f2(x); y=exp(-x.*x/2)/sqrt(2*%pi); endfunction;
x=-6:0.1:6; plot2d(x,f2(x),1);

1.2.5 Scicos
Scicos is a block-diagram based simulation tool, which means that the mathematical model to be
simulated is represented with function blocks. It is quite similar to Simulink and the LabVIEW Simulation
Module, and it is automatically installed with Scicoslab. Scicos is able to easily implement Hardware In
the Loop simulations. The homepage is at http://scicos.org/.
Scicos can simulate linear and nonlinear continuous-time and discrete-time dynamic systems.
The simulation time scale is controllable: it can run as fast as possible (hence the time scale is also
simulated), or it can run in real time, i.e., with a real or scaled time axis.
To launch Scicos:

Type scicos in the command window;

Select Applications / Scicos in the Scilab menu.

At the start-up, the initial blank Scicos window appears (Fig. 1.12). The window will be filled
with Scicos blocks. The blocks used to build the mathematical model are organized in palettes

Figure 1.10: Scicoslab graphics primitives.

(Fig. 1.13.(a)). To display the palettes in a tree structure, select Palettes / Pal tree in the Scicos
menu. To see the individual blocks of a specific palette, click the plus sign in front of the palette
(Fig. 1.13.(b)). Alternatively, select the desired palette via the Palette / Palettes menu. Right-clicking
on each element of the palette, i.e., on each Scicos block, a popup menu opens with three
different choices:

Place in Diagram: places the block in the Scicos window, where it can be moved freely. Alternatively,
you can drag-and-drop the element into the Scicos window directly;

Help: gets the Scilab help for the specified block;

Details: shows the details of the block, e.g., number of inputs, number of outputs, etc.

The palettes used in this course are the following:

Sources: various signal generators that are the inputs to the model;

Sinks: various visualizations of signals, which are the output variables of interest;

Events: event generators for the model;

Branching: blocks performing the routing of the signals in the model, e.g., multiplexers, demultiplexers,
switches, buses, etc.;

Figure 1.11: Scicoslab window primitives.

Linear: representations of linear models, e.g., integrators, derivators, sums/subtractions,
gains, transfer functions, etc.;

Nonlinear: nonlinear operations on quantities, e.g., absolute values, products, logarithms, saturations,
etc.;

Matrix: operations on matrices taken as inputs, e.g., singular value decomposition, determinants,
inverses, etc.;

Others: user defined functions, deadband, Scilab functions, etc.;

Integer, Iterations, Modelica, LookupTables, Threshold, Demo Blocks and Old Blocks will not
be used in this course. However, the interested student can always refer to the associated help.

To simulate a Scicos model, select Simulate / Run. Selecting the Simulate / Setup menu, one gets the
simulation parameters window (Fig. 1.14). Most of the parameters can generally be left unchanged
in this dialog window, except for:

The Final integration time, which defines the final (stop) time of the simulation. However, if an End block
exists in the block diagram, the Final simulation time in that block overrides the simulation
stop time when it is smaller;

Figure 1.12: Scicos initial window (blank sheet).

The Realtime scaling parameter defines the relative speed of the real time compared to the
simulated time. For example, if Realtime scaling is 0.2, the real time runs 0.2 times as fast as
the simulated time. In other words, the simulator runs 1/0.2 = 5 times faster than real time.
You can use this parameter to speed up or slow down the simulator;

The maximum step size can be set to a proper value. As a rule of thumb, one tenth of the
quickest time-constant (or apparent time-constant) of the system to be simulated;

The solver method (i.e. the numerical method that Scicos uses to solve the underlying algebraic
and differential equations making up the model) can be selected via the solver parameter, but
in most cases the default solver can be accepted;

The time scale of the simulation is defined by the user and by the particular application to simulate
(e.g., a mass flow parameter can be given in kg/s or kg/min). However, the choice of seconds
is suggested.

Model parameters and simulation parameters can be set in the Context of the simulator. To do so,
select Diagram / Context from the Scicos menu bar. The Context is simply represented by a number
of Scilab expressions defining the parameters (or variables) and assigning them values (the use of
the command global is recommended). These parameters can then be used throughout the Scicos diagram, e.g.,
in blocks, user defined functions, etc.

The Scifunc
Scilab-coded functions can be executed as Scicos blocks through the Scifunc block. When a Scifunc is placed
in the Scicos window, it is possible to visualize its properties: double-clicking on the Scifunc block,
the first property window is visualized (Fig. 1.15).


Figure 1.13: Scicos palette.

Input port sizes: dimension of the inputs, corresponding to the dimension of the input of the
Scilab function;

Output port sizes: dimension of the outputs, corresponding to the dimension of the output of
the Scilab function;

Input event port sizes: dimension of the event port. It will be used to select an additional
event trigger for the function, e.g., the event generated by an external clock to discretize the
execution of the function;

Is block always active (0:no, 1:yes): specifies whether the function is always executed (i.e., continuous
time behaviour) or executed only when the trigger is active (i.e., discrete time or event based
behaviour).

The function flag 1 properties are summarized in Fig. 1.16.(a): the name of the function to be
executed, with input u1 (i.e., input 1 of the block) and output y1. The function flag 4 properties are
reported in Fig. 1.16.(b): initialization of specific variables of the function using the global command
(even though it is not strictly necessary). The function flag 5 properties are depicted in Fig. 1.16.(c);
rarely used, they determine the operations to be performed when the function finishes. Finally, the
function flag 6 properties are depicted in Fig. 1.16.(d); rarely used, they set constraints on the input/output
dimensions/values.

Figure 1.14: Scicos simulation properties window.

Figure 1.15: Scifunc window.

1.2.6 Scicoslab references


The latest release of ScicosLab, its documentation, and many third-party contributions (toolboxes)
can be found on the official ScicosLab home page [Groc]. There is also a newsgroup dedicated to
Scicoslab: comp.soft-sys.math.scilab.
Free on-line references on Scicoslab can be found in [Groa, Pin, vDS, Rie].

1.3 The Lego Mindstorm


The system that will be used as a practical benchmark for the theoretical aspects learnt during this
course is the Lego Mindstorm kit [Mina, Minb]. The kit comes with an embedded computing platform
that is able to host a free real-time kernel (i.e., NXT Osek [Chi]) and offers standard interfaces to
the hardware components, that is, sensors and actuators.


Figure 1.16: Scifunc property windows.

The actuators are three Direct Current Brushless Motors that can be simultaneously controlled
by the embedded platform. The motors are basically transducers that transform electric power
into mechanical power; hence, they are the components that generate the motion and the interaction of
the mechanical platform with the surrounding environment. These hardware components need to
be properly controlled in order to achieve the desired motion of the mechanical system with specified
(target) performance. According to Model Based Design, a model of the actuators is necessary
to meet this goal.
The sensors are transducers that translate physical phenomena into digital signals. They are
mainly divided into two classes: exteroceptive sensors, which sense the environment surrounding the
mechanical platform, and proprioceptive sensors, which instead sense quantities internal to the mechanical
platform. The Lego Mindstorm can host up to four different sensors simultaneously. Only one
proprioceptive sensor is available: the incremental encoder mounted on each motor shaft, which
measures the current angular position of the motor with a resolution of one degree. Instead, there
are four different exteroceptive sensors: a contact switch, which returns a binary value indicating whether
or not contact takes place; a sonar sensor, which receives the echo reflected by the objects (if any) that have
been hit by the emitted ultrasonic signal; a microphone sensor, which returns the sound intensity (in
decibels) captured in the environment; and a light sensor, which captures the ambient light or the light
reflected by the available lamp and returns a value that is related to the color of the object
that has been hit by the light. A sensor model should also be available in order to set up a realistic
simulation of the whole system.

1.4 Bibliography

[AW96] K.J. Åström and B. Wittenmark, Computer Controlled Systems, Prentice Hall Inc.,
November 1996.

[Chi] Takashi Chikamasa, http://lejos-osek.sourceforge.net/.

[Groa] The Scicoslab Group, Introduction To Scicoslab - Users Guide.

[Grob] The Scilab Group, http://scilab.org.

[Groc] ———, http://www.scicoslab.org.

[Kai80] Thomas Kailath, Linear systems, vol. 1, Prentice-Hall Englewood Cliffs, NJ, 1980.

[Mina] Lego Mindstorm, http://mindstorms.lego.com/en-us/default.aspx.

[Minb] ———, http://www.nxtprograms.com/index1.html.

[Oga95] K. Ogata, Discrete-time control systems, Prentice-Hall, Inc. Upper Saddle River, NJ, USA,
1995.

[Pin] Bruno Pincon, Une Introduction à Scilab.

[PPR03] Charles L Phillips, John M Parr, and Eve Ann Riskin, Signals, systems, and transforms,
Prentice Hall, 2003.

[Rie] Eike Rietsch, An Introduction to Scilab from a Matlab User's Point of View.

[SSVO08] B. Siciliano, L. Sciavicco, L. Villani, and G. Oriolo, Robotics: modelling, planning and
control, Springer Verlag, 2008.

[vDS] Lydia E. van Dijk and Christoph L. Spiel, Scilab Bag of Tricks.
Chapter 2

The Laplace Transform

The Laplace transform and its inverse are briefly summarized here. A useful reference on this topic
can be found in the notes of the course of Signals and Systems of Prof. Palopoli, as well as in [PPR03].

2.1 Bilateral L-transform


Given any function $u(t)$, its bilateral Laplace transform is given by

$$\mathcal{L}(u(t)) = U(s) = \int_{-\infty}^{+\infty} u(t)\,e^{-st}\,dt,$$

whose Region of Convergence (ROC) is the region where the Laplace transform is defined, i.e., where
the integral converges. Assuming that $U(s)$ is the ratio of two polynomials, we have:

1. the ROC does not contain any pole;

2. if the function $u(t)$ is such that $u(t) = 0$, $\forall t \notin [t_1, t_2]$ (with $t_1$ and $t_2$ finite), then the ROC is the
entire complex plane (with the possible exceptions of $s = 0$ and $s = \infty$);

3. if the signal $u(t)$ is left-sided ($u(t) = 0$, $\forall t > t_1 > -\infty$) then the ROC of $U(s)$ is of the form

$$\operatorname{Re}(s) < \sigma_{\min}$$

(the half plane to the left of the vertical line $\operatorname{Re}(s) = \sigma_{\min}$);

4. if the signal $u(t)$ is right-sided ($u(t) = 0$, $\forall t < t_1 < +\infty$) then the ROC of $U(s)$ is of the form

$$\operatorname{Re}(s) > \sigma_{\max}$$

(the half plane to the right of the vertical line $\operatorname{Re}(s) = \sigma_{\max}$);

5. if the signal is two-sided (i.e., it has an infinite duration for both positive and negative times)
then the ROC is a vertical stripe of the form

$$\sigma_1 < \operatorname{Re}(s) < \sigma_2.$$

In the following, we offer a brief summary of the properties of the bilateral Laplace transform,
which apply, with minor changes, also to the unilateral transform.

Proposition 2.1 (Linearity) Given $\mathcal{L}(u_1(t)) = U_1(s)$ with ROC $R_1$ and $\mathcal{L}(u_2(t)) = U_2(s)$ with
ROC $R_2$, we get

$$\mathcal{L}(a_1 u_1(t) + a_2 u_2(t)) = a_1 U_1(s) + a_2 U_2(s),$$

with ROC $R'$ such that $R' \supseteq R_1 \cap R_2$.

Proof of Proposition 2.1

$$\mathcal{L}(a_1 u_1(t) + a_2 u_2(t)) = a_1 \int_{t=-\infty}^{+\infty} u_1(t) e^{-st}\,dt + a_2 \int_{t=-\infty}^{+\infty} u_2(t) e^{-st}\,dt = a_1 U_1(s) + a_2 U_2(s)$$

The ROC is trivial (since at least each term has to converge). □

Proposition 2.2 (Time shifting) Given L(u(t)) = U(s) with ROC R, we get:

L(u(t - t_0)) = e^{-s t_0} U(s),

with ROC R' = R.

Proof of Proposition 2.2

L(u(t - t_0)) = \int_{-\infty}^{+\infty} u(t - t_0) e^{-st} dt = \int_{-\infty}^{+\infty} u(\tau) e^{-s(\tau + t_0)} d\tau = e^{-s t_0} \int_{-\infty}^{+\infty} u(\tau) e^{-s\tau} d\tau = e^{-s t_0} U(s),

where \tau = t - t_0. Therefore, the ROC R' = R. □

Proposition 2.3 (Shifting in the s domain) Given L(u(t)) = U(s) with ROC R, we get

L(e^{s_0 t} u(t)) = U(s - s_0),

with ROC R' = Re(s_0) + R.

Proof of Proposition 2.3

L(e^{s_0 t} u(t)) = \int_{-\infty}^{+\infty} e^{s_0 t} u(t) e^{-st} dt = \int_{-\infty}^{+\infty} u(t) e^{-(s - s_0)t} dt = U(s - s_0).

Since Re(s - s_0) must belong to R, it follows that Re(s) \in Re(s_0) + R, or, in other words, R' = Re(s_0) + R. □
Proposition 2.4 (Time scaling) Given L(u(t)) = U(s) with ROC R and a \neq 0 (a \in \mathbb{R}), we get:

L(u(at)) = \frac{1}{|a|} U\!\left(\frac{s}{a}\right),

with ROC R' = aR.

Proof of Proposition 2.4 Considering a > 0 we have:

L(u(at)) = \int_{-\infty}^{+\infty} u(at) e^{-st} dt = \frac{1}{a} \int_{-\infty}^{+\infty} u(\tau) e^{-s\frac{\tau}{a}} d\tau = \frac{1}{a} U\!\left(\frac{s}{a}\right),

where \tau = at. Since Re(s/a) must belong to R, it follows that Re(s) \in aR, or, in other words, R' = aR. For a < 0, the same result holds, with the only difference that \tau = at reverses the integration limits (hence the sign). □

Proposition 2.5 (Differentiation in the time domain) Given L(u(t)) = U(s) with ROC R, we get

L\!\left(\frac{du(t)}{dt}\right) = s U(s),

with ROC R' such that R \subseteq R'.

Proof of Proposition 2.5

L\!\left(\frac{du(t)}{dt}\right) = \int_{-\infty}^{+\infty} \frac{du(t)}{dt} e^{-st} dt = \left[ u(t) e^{-st} \right]_{t=-\infty}^{+\infty} - \int_{-\infty}^{+\infty} u(t) \frac{de^{-st}}{dt} dt = s \int_{-\infty}^{+\infty} u(t) e^{-st} dt = s U(s)

(integration by parts, with the boundary term vanishing inside the ROC). The ROC R' is at least equal to R. □

Proposition 2.6 (Differentiation in the s domain) Given L(u(t)) = U(s) with ROC R, we get

L(t\, u(t)) = -\frac{dU(s)}{ds},

with ROC R' = R.

Proof of Proposition 2.6

L(t\, u(t)) = \int_{-\infty}^{+\infty} t\, u(t) e^{-st} dt = -\int_{-\infty}^{+\infty} u(t) \frac{de^{-st}}{ds} dt = -\frac{d}{ds} \int_{-\infty}^{+\infty} u(t) e^{-st} dt = -\frac{dU(s)}{ds}.

The ROC R' is trivially equal to R. □

Proposition 2.7 (Convolution) Given L(u_1(t)) = U_1(s) with ROC R_1 and L(u_2(t)) = U_2(s) with ROC R_2, we get

L(u_1(t) * u_2(t)) = U_1(s) U_2(s),

with ROC R' such that R_1 \cap R_2 \subseteq R'.

Proof of Proposition 2.7

L(u_1(t) * u_2(t)) = \int_{-\infty}^{+\infty} (u_1(t) * u_2(t)) e^{-st} dt = \int_{-\infty}^{+\infty} \left( \int_{-\infty}^{+\infty} u_1(\tau) u_2(t - \tau) d\tau \right) e^{-st} dt = \int_{-\infty}^{+\infty} u_1(\tau) \left( \int_{-\infty}^{+\infty} u_2(t - \tau) e^{-st} dt \right) d\tau = \int_{-\infty}^{+\infty} u_1(\tau) \left( e^{-s\tau} U_2(s) \right) d\tau = \left( \int_{-\infty}^{+\infty} u_1(\tau) e^{-s\tau} d\tau \right) U_2(s) = U_1(s) U_2(s),

where the inner transform follows from the time shifting property. The region of convergence R' is at least equal to R_1 \cap R_2. □
Proposition 2.8 (Integration) Given L(u(t)) = U(s) with ROC R, we get

L\!\left( \int_{-\infty}^{t} u(\tau) d\tau \right) = \frac{1}{s} U(s),

with ROC R' such that R' = R \cap \{Re(s) > 0\}.

Proof of Proposition 2.8 Since

\int_{-\infty}^{t} u(\tau) d\tau = \int_{-\infty}^{+\infty} u(\tau) 1(t - \tau) d\tau = u(t) * 1(t),

and

L(1(t)) = \int_{-\infty}^{+\infty} 1(t) e^{-st} dt = \int_{0}^{+\infty} e^{-st} dt = \left[ -\frac{e^{-st}}{s} \right]_{t=0}^{+\infty} = \frac{1}{s},

with ROC R_1 = \{Re(s) > 0\}, the thesis follows from the convolution property. □
Proposition 2.9 (Integration over s) Given L(u(t)) = U(s) with ROC R, we get

L\!\left( \frac{u(t)}{t} \right) = \int_{s}^{+\infty} U(\eta) d\eta,

with ROC R' = R.

Proof of Proposition 2.9

L\!\left( \frac{u(t)}{t} \right) = \int_{-\infty}^{+\infty} \frac{u(t)}{t} e^{-st} dt = \int_{-\infty}^{+\infty} u(t) \left( \int_{s}^{+\infty} e^{-\eta t} d\eta \right) dt = \int_{s}^{+\infty} \left( \int_{-\infty}^{+\infty} u(t) e^{-\eta t} dt \right) d\eta = \int_{s}^{+\infty} U(\eta) d\eta.

The ROC R' is trivially equal to R. □

In what follows a set of basic bilateral Laplace transforms is derived by the direct application of
the previously enumerated properties.

L(\delta(t)) = 1 (by definition); ROC: all s.

L(\delta(t - t_0)) = e^{-t_0 s} (time shifting); ROC: all s.

L(1(t)) = \frac{1}{s} (by definition); ROC: Re(s) > 0.

L(t^n 1(t)) = \frac{n!}{s^{n+1}} (by differentiation in the s domain); ROC: Re(s) > 0.

L(t^n e^{-at} 1(t)) = \frac{n!}{(s+a)^{n+1}} (by shifting in the s domain and differentiation in the s domain, i.e., multiplication by t); ROC: Re(s) > -Re(a).

L((1 - e^{-at}) 1(t)) = \frac{a}{s(s+a)} (by linearity and shifting in the s domain); ROC: Re(s) > 0.

L(-1(-t)) = \frac{1}{s} (by definition); ROC: Re(s) < 0.

L(-e^{-at} 1(-t)) = \frac{1}{s+a} (by shifting in the s domain); ROC: Re(s) < -Re(a).

L(e^{-a|t|}) = \frac{2a}{a^2 - s^2} (by linearity and shifting in the s domain); ROC: -Re(a) < Re(s) < Re(a).

L(\sin(\omega t) 1(t)) = \frac{\omega}{s^2 + \omega^2} (by linearity and shifting in the s domain); ROC: Re(s) > 0.

L(\cos(\omega t) 1(t)) = \frac{s}{s^2 + \omega^2} (by linearity and shifting in the s domain); ROC: Re(s) > 0.

L(e^{at} \sin(\omega t) 1(t)) = \frac{\omega}{(s-a)^2 + \omega^2} (by linearity and shifting in the s domain); ROC: Re(s) > a.
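As a quick numerical sanity check of the table, an entry such as L(e^{-at} 1(t)) = 1/(s+a) can be verified by approximating the defining integral on a finite horizon, for a value of s inside the ROC (Re(s) > -a). The sketch below is a plain Python illustration with arbitrarily chosen values, not part of the notes:

```python
import math

def laplace_numeric(u, s, T=40.0, n=200_000):
    # Trapezoidal approximation of the unilateral integral
    # int_0^T u(t) e^{-s t} dt; T is large enough that the tail is
    # negligible inside the ROC.
    h = T / n
    total = 0.5 * (u(0.0) + u(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += u(t) * math.exp(-s * t)
    return total * h

a = 2.0
s = 1.0                                   # Re(s) = 1 > -a, inside the ROC
approx = laplace_numeric(lambda t: math.exp(-a * t), s)
exact = 1.0 / (s + a)                     # table entry L(e^{-at} 1(t)) = 1/(s+a)
print(abs(approx - exact))                # very small
```

Choosing s outside the ROC (e.g., s = -3) would make the integral diverge as T grows, which is exactly what the ROC conditions express.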

2.2 Unilateral L-transform and ROC


The unilateral Laplace transform is defined as

U(s) = \int_{0^-}^{+\infty} u(t) e^{-st} dt,

where the integration from 0^- allows one to embrace the Dirac \delta. The unilateral L-transform is equivalent to the bilateral Laplace transform of 1(t) u(t). Therefore, the ROC is always of the form Re(s) > \sigma_{max}. Most of the properties of the bilateral transform also apply to the monolateral Laplace transform, with small differences. For example, the differentiation property is modified as follows:

L(u(t)) = U(s) \;\Rightarrow\; L\!\left( \frac{d^n u(t)}{dt^n} \right) = s^n U(s) - s^{n-1} u(0^-) - s^{n-2} \dot{u}(0^-) - \ldots - u^{(n-1)}(0^-),

with u^{(n-1)} defined as \frac{d^{n-1} u(t)}{dt^{n-1}}. Indeed, the derivation used for the bilateral transform changes in the integration by parts, evaluating the integral over the interval (0^-, +\infty) rather than (-\infty, +\infty).

2.3 Inverse L-transform


Let us consider a generic L-transform

X(s) = \frac{N(s)}{D(s)} = k \frac{(s - z_1) \ldots (s - z_m)}{(s - p_1) \ldots (s - p_n)},

in which n \geq m and all poles p_k are simple. Hence the partial fraction expansion gives:

X(s) = \frac{c_1}{s - p_1} + \ldots + \frac{c_n}{s - p_n},

where

c_1 = (s - p_1) X(s)|_{s = p_1},
\ldots
c_n = (s - p_n) X(s)|_{s = p_n}.

In the case of multiple roots,

X(s) = k \frac{(s - z_1) \ldots (s - z_m)}{(s - p_1)(s - p_i)^h \ldots (s - p_n)},

where

X(s) = \frac{c_1}{s - p_1} + \ldots + c_i^{(1)} \frac{1}{s - p_i} + c_i^{(2)} \frac{1}{(s - p_i)^2} + \ldots + c_i^{(h)} \frac{1}{(s - p_i)^h} + \ldots + \frac{c_n}{s - p_n},

and for c_i^{(h-r)} (where r \in \{0, 1, \ldots, h-1\}), we have

c_i^{(h-r)} = \frac{1}{r!} \frac{d^r}{ds^r} \left[ (s - p_i)^h X(s) \right]_{s = p_i}.

Finally, for each component of the partial fraction expansion, using the monolateral L-transform we have:

L(e^{p_i t} 1(t)) = \frac{1}{s - p_i},

and

L(t^n e^{p_i t} 1(t)) = \frac{n!}{(s - p_i)^{n+1}}.
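The residue formulas above translate directly into code. The following sketch (assuming a strictly proper X(s) with simple, distinct poles; the example transform is invented for illustration) computes the coefficients c_i and evaluates the corresponding inverse transform:

```python
import math

# Example (not from the notes): X(s) = (3s + 2) / ((s + 1)(s + 2)).
def X_num(s):                 # numerator N(s)
    return 3 * s + 2

poles = [-1.0, -2.0]          # simple, distinct poles of X(s)

def residue(i):
    # c_i = (s - p_i) X(s) |_{s = p_i}: evaluate N(p_i) over the product
    # of the remaining denominator factors.
    pi_ = poles[i]
    den = 1.0
    for j, pj in enumerate(poles):
        if j != i:
            den *= (pi_ - pj)
    return X_num(pi_) / den

c = [residue(i) for i in range(len(poles))]
print(c)                      # [-1.0, 4.0]

def x(t):                     # inverse transform, valid for t >= 0
    return sum(ci * math.exp(pi_ * t) for ci, pi_ in zip(c, poles))
```

As a cross-check, x(0) = -1 + 4 = 3, which is also the initial value lim_{s -> inf} s X(s) = 3 predicted by the initial value theorem of the next section.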

2.4 Properties of a Signal given the L-transform


From the Laplace transform and its inverse, we can easily infer the following properties of y(t) by analyzing Y(s) = L(y(t)):

If all poles of Y(s) are strictly in the left half plane (\forall i, Re(p_i) < 0), then y(t) goes to 0 for t \to \infty;

If at least one of the poles of Y(s) is in the right half plane (\exists i such that Re(p_i) > 0), then y(t) becomes unbounded as t \to \infty;

If all poles of Y(s) are such that Re(p_i) \leq 0, then: if all poles with real part equal to zero are simple, y(t) remains bounded (although it does not necessarily go to 0); otherwise it is unbounded.

Initial value theorem: y(0^+) = \lim_{s \to \infty} s Y(s);

Final value theorem: \lim_{t \to \infty} y(t) = \lim_{s \to 0} s Y(s) (if the limit exists and is finite).
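Both theorems are easy to check numerically on a concrete transform. The sketch below uses Y(s) = 1/(s(s + \alpha)), whose inverse is y(t) = (1 - e^{-\alpha t})/\alpha; the value of \alpha is arbitrary:

```python
import math

alpha = 2.0

def sY(s):
    # s * Y(s) for Y(s) = 1 / (s (s + alpha))
    return 1.0 / (s + alpha)

def y(t):
    # inverse transform: y(t) = (1 - e^{-alpha t}) / alpha
    return (1.0 - math.exp(-alpha * t)) / alpha

final_thm = sY(1e-9)     # s -> 0:   predicts lim_{t->inf} y(t) = 1/alpha
initial_thm = sY(1e9)    # s -> inf: predicts y(0+) = 0
print(final_thm, y(50.0))    # both close to 1/alpha = 0.5
print(initial_thm, y(0.0))   # both close to 0
```

Note that the final value theorem is applicable here because the only pole of sY(s) is at s = -\alpha < 0, so the limit exists.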

Example 2.10 The objective is to compute y(t) for the system

\dot{y}(t) + \alpha y(t) = u(t) - 3\dot{u}(t),

assuming that u(t) = 1(t), i.e., the unitary step function, with u(0^-) = 1 and y(0^-) = -1. To this end, the property on the time differentiation is applied twice, i.e., L(\dot{y}) = sY(s) - y(0^-) and L(\dot{u}) = sU(s) - u(0^-). Hence

sY(s) - y(0^-) + \alpha Y(s) = U(s) - 3sU(s) + 3u(0^-).

Therefore,

Y(s)(s + \alpha) = U(s)(1 - 3s) + (y(0^-) + 3u(0^-)) \;\Rightarrow\; Y(s) = \frac{1 - 3s}{s + \alpha} U(s) + \frac{2}{s + \alpha},

that, substituting U(s) = \frac{1}{s}, yields

Y(s) = \frac{1 - 3s}{(s + \alpha)s} + \frac{2}{s + \alpha}.

For the first component, the partial fraction expansion is applied:

Y_f(s) = \frac{1 - 3s}{s(s + \alpha)} = \frac{1}{\alpha}\frac{1}{s} - \frac{3\alpha + 1}{\alpha}\frac{1}{s + \alpha},

and hence

y_f(t) = \left( \frac{1}{\alpha} - \frac{3\alpha + 1}{\alpha} e^{-\alpha t} \right) 1(t).

For the second component,

y_u(t) = L^{-1}\!\left( \frac{2}{s + \alpha} \right) = 2 e^{-\alpha t} 1(t).

Finally,

y(t) = 2 e^{-\alpha t} 1(t) + \left( \frac{1}{\alpha} - \frac{3\alpha + 1}{\alpha} e^{-\alpha t} \right) 1(t).

Notice how this inverse Laplace transform has been computed applying the superposition principle, which is only valid for linear systems.

Given y(t), it is possible to determine its steady state value for \alpha > 0 by computing directly

\lim_{t \to +\infty} y(t) = \frac{1}{\alpha}.

However, using the analysis tools previously summarized, it is possible to determine the steady state value of y(t) without inverting the transform, since it is possible to apply the final value theorem (i.e., the limit exists because \alpha > 0). In fact,

\lim_{t \to +\infty} y(t) = \lim_{s \to 0} s Y(s) = \frac{1}{\alpha}.

Notice how \alpha < 0 makes the limit undefined.
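The result of the example can be cross-checked numerically: for t > 0 the input is constant, so y(t) must satisfy \dot{y}(t) + \alpha y(t) = 1 and converge to 1/\alpha. A minimal sketch (with an arbitrary \alpha > 0, using a finite difference for the derivative):

```python
import math

alpha = 0.5

def y(t):
    # y(t) from the example: unforced part + forced part (t > 0)
    return (2 * math.exp(-alpha * t)
            + 1 / alpha - (3 * alpha + 1) / alpha * math.exp(-alpha * t))

def ydot(t, h=1e-6):
    # centred finite difference approximation of dy/dt
    return (y(t + h) - y(t - h)) / (2 * h)

# For t > 0, u(t) = 1 and u'(t) = 0, so the ODE reduces to
# y'(t) + alpha y(t) = 1.
for t in (0.5, 2.0, 10.0):
    print(ydot(t) + alpha * y(t))    # close to 1 at every t

print(y(100.0))                      # close to 1/alpha = 2
```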
As an exercise, compute the output to the step signal of the system

\ddot{y}(t) + y(t) = u(t),

with \dot{y}(0) = y(0) = 0.

Hint: for complex roots \sigma \pm j\omega, we have an oscillating behavior for the time response of the system, whose frequency is related to \omega and whose damping factor is related to \sigma.

2.5 Bibliography
[PPR03] Charles L Phillips, John M Parr, and Eve Ann Riskin, Signals, systems, and transforms,
Prentice Hall, 2003.
Chapter 3

Modeling and Identification

This chapter presents a brief introduction to dynamical systems and a way to model them considering the interacting phenomena that relate the physical variables and the parameters involved. Then, linear systems, which will be the subject of this course, will also be considered. The modeling part ends with a description of the actuators and sensors involved in this course.

The final section of this chapter is devoted to the identification technique to be adopted for system parameter estimation, with particular emphasis on the Lego Mindstorm motor model.

The reference books throughout this chapter are [PPR03, Kai80].

3.1 Dynamic Systems


A discrete time signal can be represented by a function

f : \mathbb{N} \to V,

that maps time instants (e.g., 1 \mapsto a ms, 2 \mapsto 2a ms, 3 \mapsto 3a ms, \ldots) onto an n-dimensional value. A discrete time system can be roughly defined as a relation between an n-dimensional signal describing the input of the system and an m-dimensional signal describing the output variables of the system, i.e., the variables that can be directly measured by the available sensors, hence

S : [\mathbb{N} \to V] \to [\mathbb{N} \to W],

where V \subseteq \mathbb{R}^n is the space representing the system inputs and W \subseteq \mathbb{R}^m is the output space.

Example 3.1 Consider a bank account. The amount of money in the account is given by y, which is updated on a monthly basis. Initially the amount is zero, i.e., y[0] = 0. The interest rate is equal to \alpha. Hence:

y[n] = \alpha y[n - 1] + u[n - 1],

where u[n] represents the input, that is the overall withdrawals (u[n] < 0) or deposits (u[n] \geq 0) done at month n. If \alpha > 1 then the amount of money will grow in time, if 0 < \alpha < 1 it will decrease. For \alpha = 1 there is no interest.
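The difference equation above is immediate to simulate. A minimal sketch (the interest rate and the deposit sequence are invented for illustration):

```python
def account(alpha, deposits, y0=0.0):
    """Simulate y[n] = alpha * y[n-1] + u[n-1] for n = 1..len(deposits)."""
    y = [y0]
    for u in deposits:
        y.append(alpha * y[-1] + u)
    return y

# 1% monthly interest, deposit 100 every month for a year
balance = account(1.01, [100.0] * 12)
print(balance[-1])          # a bit more than the 1200 deposited, thanks to interest

# alpha = 1: no interest, the balance is just the sum of the inputs
print(account(1.0, [50.0, -20.0])[-1])   # 30.0
```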

A continuous time signal can be represented by a function


f : R 7 V,


Figure 3.1: Mass and spring dynamic system.

Figure 3.2: Truck and trailer dynamic system.

where t \in \mathbb{R} is the continuous time. As before, a continuous time system can be roughly defined as a relation between an n-dimensional signal describing the input of the system and an m-dimensional signal describing the output variables of the system, i.e.,

S : [\mathbb{R} \to V] \to [\mathbb{R} \to W], (3.1)

where t \in \mathbb{R} is the time, V \subseteq \mathbb{R}^n is the space representing the system inputs and W \subseteq \mathbb{R}^m is the output space.

Example 3.2 Consider the mass and spring dynamic system of Fig. 3.1. m is the mass of the body,
K is the spring elastic parameter, B is the damping parameter. In this case we have an input f ,
which is the force applied to the body, and an output x, which is the position of the mass on the plane
of motion.

Example 3.3 Consider the truck with a trailer of Fig. 3.2. m_1 and m_2 are the masses, k is the spring elastic parameter of the transmission, b is the damping parameter of the transmission and \beta is the viscous friction coefficient of the air. The input f is the traction force. The outputs of the system are x_1, the position of the truck on the road, and x_1 - x_2, the relative position between the truck and the trailer.

A first taxonomy of dynamic systems (discrete and continuous time) is related to the number of inputs and outputs. Indeed:

The system is called Single Input Single Output (SISO) iff V is a tuple of one signal and W is a tuple of one signal;

The system is called Single Input Multiple Output (SIMO) iff V is a tuple of one signal and W is a tuple of more than one signal;

The system is called Multiple Input Single Output (MISO) iff V is a tuple of more than one signal and W is a tuple of one signal;

The system is called Multiple Input Multiple Output (MIMO) iff both V and W are tuples of more than one signal.

In this course we will basically restrict to SISO systems.


The system dynamic description is given in terms of the phenomena (mechanical, electrical,
economical, etc.) that relates its behavior.

Example 3.4 Consider again the bank account system of Example 3.1. Initially the amount is zero, i.e., y[0] = 0, while the dynamic system is simply given by

y[n] = \alpha y[n - 1] + u[n - 1],

where u[n] represents the input, that is the overall withdrawals (u[n] < 0) or deposits (u[n] \geq 0) done at month n. If \alpha > 1 then the amount of money will grow in time, if 0 < \alpha < 1 it will decrease. For \alpha = 1 there is no interest. This is a SISO system.

Example 3.5 Let us go back to the mass and spring system of Example 3.2. The dynamic equations are derived noticing that the effects involved are:

The Newton law: m\ddot{x} = \sum \text{Forces};

The forces are divided into three components:

1. An external force (input) f(t);

2. The Rayleigh (damping) force: -B\dot{x};

3. The Hooke law: -Kx.

Therefore the overall system description is given by:

\ddot{x} = -\frac{K}{m} x - \frac{B}{m} \dot{x} + \frac{f}{m}.

This system is a SISO system.
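The behavior of this model can be explored with a simple forward-Euler integration of the ODE; the parameter values below are illustrative, not taken from the notes. With a constant force, the position settles at the static equilibrium f/K:

```python
# Forward-Euler integration of x'' = -(K/m) x - (B/m) x' + f/m.
m, K, B = 1.0, 4.0, 2.0       # illustrative mass, stiffness, damping
f = 1.0                       # constant input force

x, v = 0.0, 0.0               # position and velocity, system at rest
dt = 1e-3
for _ in range(int(30.0 / dt)):
    a = (-K * x - B * v + f) / m
    x, v = x + dt * v, v + dt * a

print(x)                      # settles near the static value f/K = 0.25
```

This is only a sketch: in the course, the same simulation would typically be set up in Scicos with integrator blocks rather than hand-coded.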

Example 3.6 For the truck and trailer system of Example 3.3, the dynamic equations are derived from the Newton, Rayleigh and Hooke laws, plus the aerodynamic drag force \beta \dot{x}_i^2, with i = 1, 2. Therefore the overall system description is given by:

m_1 \ddot{x}_1 = f - k(x_1 - x_2) - b(\dot{x}_1 - \dot{x}_2) - \beta \dot{x}_1^2
m_2 \ddot{x}_2 = k(x_1 - x_2) + b(\dot{x}_1 - \dot{x}_2) - \beta \dot{x}_2^2

This system is a SIMO system.

In the rest of these notes, we will consider only one modeling technique, that is the I/O description. More in depth, we describe the possible behaviors of the system by showing how the output is computed directly from the input.

Continuous time systems are generically expressed by Ordinary Differential Equations (ODEs), while discrete time systems by difference equations. From the previous examples and by defining

D^{(k)} x(t) = x(t + k), i.e., x(t) shifted forward by k time steps, for difference equations,
D^{(k)} x(t) = \frac{d^k x(t)}{dt^k}, i.e., the k-th derivative of x(t), for continuous time,

we can infer that the I/O relation for a SISO system is in general given by

F\!\left( y(t), D^{(1)} y(t), \ldots, D^{(n)} y(t), u(t), D^{(1)} u(t), \ldots, D^{(p)} u(t), t \right) = 0,

where t \in \mathbb{R}, y(t) : \mathbb{R} \to \mathbb{R} and u(t) : \mathbb{R} \to \mathbb{R}. If the system is well posed then it can be described with the normal form

D^{(n)} y(t) = F\!\left( y(t), D^{(1)} y(t), \ldots, D^{(n-1)} y(t), u(t), D^{(1)} u(t), \ldots, D^{(p)} u(t), t \right). (3.2)

A well posed system of difference or differential equations describes the whole time evolution of the involved physical quantities whenever the initial conditions are properly given. For a system in normal form (3.2), the initial conditions are specified by the value of y(t_0) at time t_0 (referred to as the initial time) and the corresponding values of all the n - 1 shift operators or time derivatives of y(t) at the same instant, together with u(t_0) and the first p - 1 shift operators or time derivatives of u(t) at the same time instant.

Example 3.7 The equations in Examples 3.4 and 3.5 are already in their normal form.

3.1.1 Fundamental Properties of Dynamic Systems


Dynamic systems may or may not satisfy a set of properties that can simplify their analysis and control. In the following, the most important ones are presented and discussed.

Definition 3.8 (Causality) A system is causal if the output y(t) at time t is only related to the initial conditions and to the inputs u(\tau), for \tau \leq t. It is strictly causal if this relation holds for \tau < t.

In plain words, the causal adjective refers to the existence of a cause/effect relation between inputs and outputs. Therefore, a causal system cannot predict the future to produce its output, so all physical systems are causal. Examples of algorithmic exceptions: CD drives, media players and ideal filters. In such cases, the output depends on future inputs since the output is delayed. A system expressed in normal form (3.2) is causal if p \leq n and strictly causal if p < n.

Definition 3.9 (Stationarity) If the system does not change its behavior in time, it is time invariant or stationary.

A system in normal form (3.2) is stationary if \frac{\partial F(\cdot)}{\partial t} = 0. The solution of a time invariant system does not change if the initial time changes.

Definition 3.10 (Linearity) According to the definition in (3.1), a system is linear if:

Superposition principle: for any two inputs f(t) and g(t), the output of the system is

S(f + g)(t) = S(f)(t) + S(g)(t);

Scaling: for any input f(t) and for any scalar \alpha, the output of the system is

S(\alpha f)(t) = \alpha S(f)(t).

If the normal form (3.2) is given, a system is linear if it can be rewritten as

D^{(n)} y(t) = \sum_{i=0}^{n-1} a_i(t) D^{(i)} y(t) + \sum_{j=0}^{p} b_j(t) D^{(j)} u(t).

If it is also stationary, the terms a_i and b_j are constant. It turns out that for a linear system the analysis is rather simplified, since it is possible to apply the superposition principle and to compute the trajectory in the state space as the sum of a homogeneous solution and a generic solution. These two solutions are referred to as the unforced system response (for u \equiv 0 and given initial conditions) and the forced response (for zero initial conditions and generic input). The forced response is sometimes called the zero-state response to highlight the fact that the initial conditions are zeroed.

Example 3.11 The bank account in Example 3.4 is a discrete time causal, stationary and linear system. However, if the bank interest of Example 3.4 is supposed to change in time, the system becomes non-stationary. The mass and spring system in Example 3.5 is a continuous time causal, stationary and linear system, while the truck and trailer of Example 3.6 is a continuous time causal and stationary system; however, it is not linear.
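The superposition principle can be verified numerically on the (linear) bank account system of Example 3.4: the zero-state response to f + g must equal the sum of the responses to f and g, and the response to \alpha f must be \alpha times the response to f. A minimal sketch with invented input sequences:

```python
def simulate(rate, inputs, y0=0.0):
    # Zero-state response of the linear system y[n] = rate * y[n-1] + u[n-1]
    y = y0
    out = []
    for u in inputs:
        y = rate * y + u
        out.append(y)
    return out

rate = 1.01
f = [100.0, 0.0, -30.0, 10.0]
g = [0.0, 50.0, 20.0, -5.0]

lhs = simulate(rate, [a + b for a, b in zip(f, g)])         # S(f + g)
rhs = [a + b for a, b in zip(simulate(rate, f),             # S(f) + S(g)
                             simulate(rate, g))]
print(max(abs(a - b) for a, b in zip(lhs, rhs)))            # 0 up to rounding

scaled = simulate(rate, [3 * u for u in f])                 # S(3 f) = 3 S(f)
print(max(abs(a - 3 * b) for a, b in zip(scaled, simulate(rate, f))))
```

Repeating the experiment on the truck-and-trailer model would fail, because the quadratic drag term breaks superposition.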

3.1.2 Impulse Response


The output response of a linear system subjected to a certain input is given by the convolution between the impulse response (either h[n] for discrete time systems or h(t) for continuous time) of the system and the input, i.e.,

y[n] = h[n] * u[n] \quad \text{or} \quad y(t) = h(t) * u(t).

In the domain of the complex variable s, the previous convolution operation for continuous time systems (which are the systems we will face during this course) turns into an algebraic operation for the L-transform, i.e.,

Y(s) = H(s) U(s) + H_0(s),

where H(s) is the transfer function of the system, that is, by definition, the L-transform of the impulse response of the system assuming zero-state initial conditions, and H_0(s) is the transform contribution given by the initial conditions. It then turns out that the inverse Laplace transform of H(s)U(s) is the forced response of the system, while the inverse Laplace transform of H_0(s) is the unforced response. For example, these two components are clearly visible in Example 2.10.
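For a concrete discrete time illustration, the first-order system y[n] = a y[n-1] + u[n] (zero state) has impulse response h[n] = a^n for n \geq 0, and its output can be computed either by the recursion or by the convolution sum; the two must coincide. A sketch with an arbitrary a:

```python
a = 0.8
N = 50
u = [1.0] * N                      # unit step input

# direct recursion: y[n] = a y[n-1] + u[n], zero initial state
y_rec = []
y = 0.0
for n in range(N):
    y = a * y + u[n]
    y_rec.append(y)

# convolution with the impulse response h[n] = a^n
h = [a ** k for k in range(N)]
y_conv = [sum(h[k] * u[n - k] for k in range(n + 1)) for n in range(N)]

print(max(abs(p - q) for p, q in zip(y_rec, y_conv)))   # ~0
print(y_rec[-1])                   # step response settles near 1/(1-a) = 5
```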

3.2 Analysis of a Linear System


The analysis of the transfer function H(s) of a system gives a lot of information to the control engineer on both the foreseen behavior of the system when it is subjected to nominal inputs, e.g., impulse, unitary step, ramp, and the best way to control it and to modify its behavior. Usually, the analysis is conducted by imposing the unitary step as input. This is mainly motivated by the simplicity of the input signal and, on the other hand, by the fact that a generic signal can be approximated by a piecewise constant signal.

The analysis of a linear system is here proposed by using the practical example of a simplified car suspension, reported in Fig. 3.3, which assumes the validity of the so-called quarter model of the car. m is one fourth of the total mass of the car, k is the spring elastic parameter of the suspension, while c is the damping parameter of the suspension. \xi(t) is the shape of the road (exogenous disturbance input) while f(t) is an external force applied to the car (the input to the model). x(t) is the position of the vehicle, measured w.r.t. the vertical axis (system output).

Following the modeling procedure used in Examples 3.2 and 3.3, the dynamic model of the system is

m\ddot{x}(t) + k(x(t) - \xi(t)) + c(\dot{x}(t) - \dot{\xi}(t)) = f(t),

where f(t) comprises also the force due to the gravity acceleration. Assuming that the system is at rest for t < 0, i.e., \dot{x}(t) = x(t) = \xi(t) = 0, \forall t < 0, the monolateral L-transform is given by

X(s)(ms^2 + cs + k) = F(s) + (cs + k)\Xi(s).

Hence:

X(s) = \frac{1}{ms^2 + cs + k} F(s) + \frac{cs + k}{ms^2 + cs + k} \Xi(s).
Figure 3.3: Simplified model of a car suspension.

Recalling that the overall time evolution is obtained using the superposition principle, let us first consider only the effect of the road (F(s) = 0), assuming the wheel is climbing a step of amplitude a \neq 0, i.e.,

X(s) = \frac{cs + k}{ms^2 + cs + k} \Xi(s) = \frac{a(cs + k)}{s(ms^2 + cs + k)}. (3.3)

Furthermore, let us consider only the effect given by the poles, i.e., the roots of the denominator, in order to have:

X_1(s) = \frac{a}{s(ms^2 + cs + k)} = \frac{q}{s\left( \frac{s^2}{\omega_n^2} + \frac{2\zeta}{\omega_n}s + 1 \right)},

where \omega_n = \sqrt{\frac{k}{m}}, \zeta = \frac{c}{2\sqrt{km}} and q = \frac{a}{k}. Since the system has two poles (the third is related to the step input), it is usually referred to as a second order system. \omega_n is usually referred to as the natural frequency of the system, while \zeta is the damping factor of the system.

X_1(s) has three poles: p_1 = 0, p_{2,3} = -\zeta\omega_n \pm \omega_n\sqrt{\zeta^2 - 1}. The values of the roots of the denominator obviously influence the time response of the system and are, in their turn, influenced by the parameters of the system, i.e., m, k and c. From physics, it should be m > 0, k \geq 0 and c \geq 0. We will focus our analysis only on a subset of the possible configurations of the parameters, leaving to the interested reader all the other cases. Each configuration defines a different behavior of the system, which are discussed in the following sections.
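The quantities \omega_n, \zeta and the poles p_{2,3} follow directly from m, k and c. A small helper (the parameter values are illustrative):

```python
import cmath
import math

def suspension_modes(m, k, c):
    # Natural frequency, damping factor and poles of m x'' + c x' + k x = ...
    wn = math.sqrt(k / m)
    zeta = c / (2 * math.sqrt(k * m))
    disc = cmath.sqrt(zeta**2 - 1)   # complex when 0 < zeta < 1
    p2 = -zeta * wn + wn * disc
    p3 = -zeta * wn - wn * disc
    return wn, zeta, p2, p3

wn, zeta, p2, p3 = suspension_modes(m=400.0, k=100.0, c=200.0)
print(wn, zeta)       # 0.5, 0.5: underdamped -> complex conjugate poles
print(p2, p3)
```

Note that the product of the two poles always equals \omega_n^2, which is why the DC gain of X_1(s) reduces to q.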

3.2.1 System with Divergent Behaviors


Let us consider the car suspension of Fig. 3.3 with parameters given by m > 0, k > 0 and c 2kk ,
m
which are not feasible in reality. Hence, a not achievable dynamics should be obtained (sanity check
of the model).
pIn this case, n > 0 and 1. Therefore, the poles are given by p1 = 0, p2,3 = n
n 2 1 > 0. Since the system is at rest for t 0, if p2 6= p3 , we have that the time evolution of
36 CHAPTER 3. MODELING AND IDENTIFICATION

the system is only given by the forced response

q c1 c2 c3
X1 (s) = s2 2
= + + ,
s( 2
n
+ n
s + 1) s s p2 s p3

whose inverse Laplace transform has the coefficients

c1 = sX1 (s)|s=0 = q,
qn2
c2 = (s p2 )X1 (s)|s=p2 = ,
p2 (p2 p3 )
qn2
c3 = (s p3 )X1 (s)|s=p3 = ,
p3 (p2 p3 )

and hence the time dynamic

qn2 ep2 t ep3 t


  
x1 (t) = q + 1(t).
p2 p3 p2 p3

On the other hand, if p_2 = p_3 = \omega_n (with \zeta = -1), we have

X_1(s) = \frac{q}{s\left( \frac{s^2}{\omega_n^2} + \frac{2\zeta}{\omega_n}s + 1 \right)} = \frac{c_1}{s} + \frac{c_2^{(1)}}{s - \omega_n} + \frac{c_2^{(2)}}{(s - \omega_n)^2},

with coefficients

c_1 = sX_1(s)|_{s=0} = q,
c_2^{(1)} = \frac{1}{1!} \frac{d}{ds}\left[ (s - \omega_n)^2 X_1(s) \right]_{s=\omega_n} = -q,
c_2^{(2)} = (s - \omega_n)^2 X_1(s)|_{s=\omega_n} = q\omega_n,

that yields

x_1(t) = \left[ q - q(1 - \omega_n t)e^{\omega_n t} \right] 1(t).

Regardless of the amplitude of the step a, the final value of x_1(t) in both cases for \zeta \leq -1 is

\lim_{t \to +\infty} x_1(t) = \text{sign}(a) \cdot \infty.

In other words, once the car climbs a unitary step, the car explodes! Such a behavior is clearly not realistic: it follows from having supposed a damping parameter c < 0 that, instead of dissipating energy, generates it! Notice that this is true even if -1 < \zeta < 0 (that corresponds again to c < 0, but with complex conjugate poles with positive real parts). In this case the limit is not defined, due to the persistent and divergent oscillations.

For reference, Fig. 3.4 reports the output behavior for distinct (Fig. 3.4.(a,b)), coincident (Fig. 3.4.(c)) and complex conjugate (Fig. 3.4.(d)) roots with different inputs.
Figure 3.4: Divergent outputs due to: (a) positive step a = 0.1, positive roots (\zeta < -1); (b) negative step a = -0.1, positive roots (\zeta < -1); (c) positive step a = 0.1, positive coincident roots (\zeta = -1); (d) positive step a = 0.1, complex roots with positive real part (-1 < \zeta < 0).

3.2.2 First Order Systems


Let us consider the car suspension of Fig. 3.3 with parameters given by m > 0, k > 0 and c \geq 2\sqrt{km}, a feasible dynamics. In this case, \omega_n > 0 and \zeta \geq 1. The poles are real and given by p_1 = 0, p_{2,3} = -\zeta\omega_n \pm \omega_n\sqrt{\zeta^2 - 1} < 0. Since the system is at rest for t \leq 0, if p_2 \neq p_3, we have that the time evolution of the system is only given by the forced response

X_1(s) = \frac{q}{s\left( \frac{s^2}{\omega_n^2} + \frac{2\zeta}{\omega_n}s + 1 \right)} = \frac{c_1}{s} + \frac{c_2}{s - p_2} + \frac{c_3}{s - p_3},

that, as done in Section 3.2.1, yields the time evolution

x_1(t) = \left[ q + \frac{q\omega_n^2}{p_2 - p_3}\left( \frac{e^{p_2 t}}{p_2} - \frac{e^{p_3 t}}{p_3} \right) \right] 1(t).

In the case of coincident roots p_2 = p_3 = -\omega_n (with \zeta = 1), we have

x_1(t) = \left[ q - q(1 + \omega_n t)e^{-\omega_n t} \right] 1(t).
Figure 3.5: Convergent outputs due to: (a) positive step a = 0.1, negative roots (\zeta > 1); (b) negative step a = -0.1, negative roots (\zeta > 1); (c) positive step a = 0.1, negative coincident roots (\zeta = 1).

The final value of x_1(t) for \zeta > 1 is

\lim_{t \to +\infty} x_1(t) = q = \frac{a}{k},

or, equivalently (since the system has negative poles),

\lim_{s \to 0} sX_1(s) = \frac{q\omega_n^2}{p_2 p_3} = q = \frac{a}{k}.

The final value of x_1(t) for \zeta = 1 is

\lim_{t \to +\infty} x_1(t) = q = \frac{a}{k} \quad \text{or} \quad \lim_{s \to 0} sX_1(s) = \frac{q\omega_n^2}{p_2^2} = q = \frac{a}{k}.

Again, Fig. 3.5 reports the output behavior for distinct (Fig. 3.5.(a,b)) and coincident (Fig. 3.5.(c)) roots with different inputs.

An important characteristic of the output of a (stable) linear system, in addition to the steady state value, is the time in which the system reaches that value. For this reason, this time is referred

to as the settling time T_s of the system. For engineers, it is not relevant when the system effectively reaches the steady state value (in fact, by the inverse Laplace transform, it is reached only for t \to +\infty): it is more important the time at which the system response enters the \epsilon\% band around the steady state value and remains, for t \to +\infty, within that bound. This quantity is a function of the parameters \omega_n and \zeta of the system, and it is derived by means of the time evolution of the system response. More precisely, defining \delta = \frac{\epsilon}{100},

|q - x_1(t)| \leq \frac{\epsilon}{100} q = \delta q, \quad \forall t \geq T_s.

Therefore, for \zeta \geq 1, p_2 \neq p_3 and p_2, p_3 \in \mathbb{R}^-,

x_1(t) = \left[ q + \frac{q\omega_n^2}{p_2 - p_3}\left( \frac{e^{p_2 t}}{p_2} - \frac{e^{p_3 t}}{p_3} \right) \right] 1(t),

and, for a > 0 (q > 0) and t > 0,

\left| \frac{q\omega_n^2}{p_2 - p_3}\left( \frac{e^{p_2 t}}{p_2} - \frac{e^{p_3 t}}{p_3} \right) \right| \leq \delta q.

Using the superposition principle (i.e., bounding each exponential term separately), we have

e^{p_i t} \leq \delta \frac{|p_i (p_2 - p_3)|}{\omega_n^2}, \quad \text{for } i = 2, 3,

that, using the logarithms, gives

p_i t \leq \log(\delta) + \log(|p_i (p_2 - p_3)|) - \log(\omega_n^2),

and, finally (recalling that p_i < 0),

T_s = \frac{\log(\delta) + \log(|p_i (p_2 - p_3)|) - \log(\omega_n^2)}{p_i}.
Hence we have two possible values for the time T_s, one for each root. Of course, the root that is smaller in modulus will govern the rate of convergence of the system towards its steady state value. This idea is quite effective from a practical viewpoint and it can be generalized to any number of roots, even if the roots are complex. This approximation is usually referred to as the approximation of the dominant pole. Fig. 3.6.(a) depicts the approximation to the dominant pole thus introduced for this system. For the described system, assuming that p_2 = -\zeta\omega_n + \omega_n\sqrt{\zeta^2 - 1} and p_3 = -\zeta\omega_n - \omega_n\sqrt{\zeta^2 - 1}, p_2 is the dominant pole. Hence, the greater (in modulus) is p_2, the faster is the rate of convergence. This result is accomplished by increasing \omega_n = \sqrt{\frac{k}{m}} and making \zeta = \frac{c}{2\sqrt{km}} \to 1. Fig. 3.6.(b) reports the output for two different choices of the spring constant with a mass m = 400 kg.
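The dominant-pole estimate of T_s can be compared with the settling time found by scanning the exact step response. The sketch below uses illustrative over-damped values (\zeta = 1.5) and \delta = 0.05:

```python
import math

wn, zeta, q = 0.5, 1.5, 1.0                      # illustrative values
p2 = -zeta * wn + wn * math.sqrt(zeta**2 - 1)    # dominant (slow) pole
p3 = -zeta * wn - wn * math.sqrt(zeta**2 - 1)

def x1(t):
    # step response derived above (system at rest for t <= 0)
    return q * (1 + wn**2 / (p2 - p3)
                * (math.exp(p2 * t) / p2 - math.exp(p3 * t) / p3))

delta = 0.05
# numeric settling time: first instant after which |q - x1| <= delta * q
# (the response is monotone here, so a forward scan is enough)
t = 0.0
while abs(q - x1(t)) > delta * q:
    t += 1e-3

c2 = wn**2 / (p2 * (p2 - p3))                    # residue of the dominant pole
ts_dominant = math.log(delta / abs(c2)) / p2     # dominant-pole estimate
print(t, ts_dominant)                            # both around 16.5 s
```

The two values are close because the fast pole p_3 has already decayed long before the band is entered.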
If p_2 = p_3 = -\omega_n (with \zeta = 1), we have

x_1(t) = \left[ q - q(1 + \omega_n t)e^{-\omega_n t} \right] 1(t),

hence

|q - x_1(t)| = |q(1 + \omega_n t)e^{-\omega_n t}| \leq \delta q.
Figure 3.6: (a) Rate of convergence for the overall system and for each exponential alone; the settling time is also shown for \delta = 0.05. This graph clearly shows the approximation to the dominant pole. (b) Output for two choices of the spring constant: k_1 = 1600 and k_2 = 400, with the damping coefficients chosen so that \omega_{n_2} = \frac{\omega_{n_1}}{2} and \zeta_2 = 2\zeta_1.
Applying the superposition principle to the two addends we again get two conditions: one of the form t \geq \frac{-\log(\delta)}{\omega_n}, and one whose solution involves the Lambert W function (otherwise we can solve it numerically). Anyway, this is not an interesting analysis, since we don't have a dominant pole.
To summarize, a second order system

X_1(s) = \frac{1}{\frac{s^2}{\omega_n^2} + \frac{2\zeta}{\omega_n}s + 1}

having \zeta > 1 behaves like a first order system, since the output of the system to a step input is governed by a dominant pole that defines the rate of convergence (i.e., the settling time) of the system.

Straightforwardly, the larger is the difference between the dominant pole(s) and all the other poles, the better is the approximation. For instance, consider the two transfer functions

P_1(s) = \frac{4}{(s + 1)(s + 2)(s^2 + 2s + 2)}, \quad P_2(s) = \frac{400}{(s + 1)(s + 20)(s^2 + 2s + 20)}, (3.4)

whose dominant pole approximation is the same P_a(s) = \frac{1}{s + 1}, and their outputs when the input is u(t) = 1(t), reported in Fig. 3.7. It is evident that the approximation of P_2(s) is the more accurate, since its poles are more spread out.
Figure 3.7: Response to a unitary step input for the plants reported in (3.4), whose dominant pole approximation is given by the same P_a(s).
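The comparison can be reproduced numerically by evaluating the step responses through the residue formula of Section 2.3; the complex pole pairs below are the roots of s^2 + 2s + 2 and s^2 + 2s + 20 from (3.4):

```python
import cmath
import math

def step_response(gain, poles, t):
    # y(t) = L^-1[ gain / (s * prod_i (s - p_i)) ] via the residue formula
    # for simple poles (the step input adds the extra pole at s = 0).
    ps = [0.0] + list(poles)
    y = 0.0 + 0.0j
    for i, pi_ in enumerate(ps):
        den = 1.0 + 0.0j
        for j, pj in enumerate(ps):
            if j != i:
                den *= pi_ - pj
        y += (gain / den) * cmath.exp(pi_ * t)
    return y.real

p1_poles = [-1.0, -2.0, complex(-1, 1), complex(-1, -1)]            # P1, gain 4
p2_poles = [-1.0, -20.0,
            complex(-1, math.sqrt(19)), complex(-1, -math.sqrt(19))]  # P2, gain 400
pa_poles = [-1.0]                                                   # Pa, gain 1

t = 1.0
y1 = step_response(4, p1_poles, t)
y2 = step_response(400, p2_poles, t)
ya = step_response(1, pa_poles, t)
print(abs(y1 - ya), abs(y2 - ya))   # P2 tracks the approximation far better
```

All three plants have unit DC gain, so the three responses converge to the same steady state value; the difference lies entirely in the transient.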

3.2.3 Second Order Systems


Let us consider the car suspension of Fig. 3.3 with parameters given by m > 0, k > 0 and 0 < c < 2\sqrt{km}. In this case, \omega_n > 0 and 0 < \zeta < 1. The poles are then complex conjugate and given by p_1 = 0, p_{2,3} = -\zeta\omega_n \pm j\omega_n\sqrt{1 - \zeta^2}, with negative real part. Since the system is at rest for t \leq 0, the time evolution of the system is only given by the forced response

X_1(s) = \frac{q}{s\left( \frac{s^2}{\omega_n^2} + \frac{2\zeta}{\omega_n}s + 1 \right)} = \frac{c_1}{s} + \frac{c_2}{s - p_2} + \frac{c_3}{s - p_3},

with coefficients given by

c_1 = sX_1(s)|_{s=0} = q,
c_2 = (s - p_2)X_1(s)|_{s=p_2} = \frac{q\omega_n^2}{p_2(p_2 - p_3)} = -q\,\frac{\sqrt{1 - \zeta^2} - j\zeta}{2\sqrt{1 - \zeta^2}},
c_3 = (s - p_3)X_1(s)|_{s=p_3} = \frac{q\omega_n^2}{p_3(p_3 - p_2)} = -q\,\frac{\sqrt{1 - \zeta^2} + j\zeta}{2\sqrt{1 - \zeta^2}},

which yields the time response

x_1(t) = \left[ q + 2N e^{-\zeta\omega_n t} \cos\!\left(\omega_n\sqrt{1 - \zeta^2}\, t + \varphi\right) \right] 1(t),

with N = \frac{q}{2\sqrt{1 - \zeta^2}} and \varphi = \pi - \arctan\!\left(\frac{\zeta}{\sqrt{1 - \zeta^2}}\right).

Noticing that:

1. \cos(\theta) = \sin\!\left(\theta + \frac{\pi}{2}\right);

2. \arctan\!\left(\frac{\sin\varphi}{\cos\varphi}\right) - \frac{\pi}{2} = -\arctan\!\left(\frac{\cos\varphi}{\sin\varphi}\right);

it follows that

x_1(t) = \left[ q - q\tilde{N} e^{-\zeta\omega_n t} \sin\!\left(\omega_n\sqrt{1 - \zeta^2}\, t + \tilde{\varphi}\right) \right] 1(t),

with \tilde{N} = \frac{1}{\sqrt{1 - \zeta^2}} and \tilde{\varphi} = \arctan\!\left(\frac{\sqrt{1 - \zeta^2}}{\zeta}\right). The final value of x_1(t) for 0 < \zeta < 1 is again

\lim_{t \to +\infty} x_1(t) = q = \frac{a}{k}.
Let us now consider the settling time in this particular case. Again
|q x1 (t)| q, t Ts ,
or, equivalently,
n t sin(n .
p
|Ne 1 2 t + )|
Due to the sinusoidal function, this equation is quite difficult to evaluate. An
p effective upper bound
to Ts is derived using a worst case approach, i.e., by considering | sin(n 1 2t + )| = 1, and

recalling that N > 0 for 0 < < 1, i.e.,

en t ,
N
and finally get
log() log(N)
Ts .
n
Notice that the real part of the complex roots, −ζωn, plays a fundamental role in the rate of convergence
towards the steady state value.
Another important parameter for a second order system is the overshoot, that is the
maximum value reached by the time evolution of the output with respect to the steady state value.
More precisely, the overshoot O is defined as the difference between the maximum value reached by
the system response xmax and the steady state value x(∞), normalized by the difference between the
initial value x(0) and the steady state value

O = |xmax − x(∞)| / |x(0) − x(∞)|.
An exact relation between the damping parameter ζ and the overshoot O is derived computing the
time derivative

dx1(t)/dt = q N̄ e^(−ζωn t) [ζωn sin(ωn√(1 − ζ²) t + ψ) − ωn√(1 − ζ²) cos(ωn√(1 − ζ²) t + ψ)],
3.2. ANALYSIS OF A LINEAR SYSTEM 43

and imposing it to be equal to 0, which yields

tan(ωn√(1 − ζ²) t + ψ) = √(1 − ζ²)/ζ.

Recalling that tan(ψ) = √(1 − ζ²)/ζ, it follows that ωn√(1 − ζ²) t = hπ, with h = 0, 1, . . . . Hence the
maximum and minimum values for x1(t) are attained at the time instants

th = hπ / (ωn√(1 − ζ²)),

a relation that highlights the dependence of the step response on the imaginary part of the complex
roots. Rewriting the time evolution of the system response using these derived relations

x1(th) = q[1 − (1/√(1 − ζ²)) e^(−hπζ/√(1 − ζ²)) sin(hπ + ψ)],

and noticing that

sin(hπ + ψ) = sin(ψ) cos(hπ) = sin(ψ)(−1)^h,

and

sin(ψ) = tan(ψ)/√(1 + tan²(ψ)) = √(1 − ζ²),

we finally get

x1(th) = q[1 − (−1)^h e^(−hπζ/√(1 − ζ²))].

It is now evident that for h = 1 we have a maximum. Hence, the time at which the system reaches
its maximum value is

To = π / (ωn√(1 − ζ²)).
Increasing ωn, the overshoot will happen earlier. On the other hand, it will happen later if ζ
increases. The corresponding overshoot value is obtained noticing that x(0) = 0, x(∞) = q and

xmax = q(1 + e^(−πζ/√(1 − ζ²))).

Therefore

O = |xmax − x(∞)| / |x(0) − x(∞)| = e^(−πζ/√(1 − ζ²)),

i.e., the maximum overshoot depends only on the damping parameter ζ of the complex roots. For
example, if a maximum overshoot Ō is desired, ζ should be greater than a minimum value ζmin, obtained by inverting this relation.
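The closed-form relations above (settling-time bound, peak time and overshoot) are easy to cross-check numerically. The following sketch, in plain Python rather than the course's Scicoslab, collects them in one function; the numerical values used are purely illustrative:

```python
import math

def step_response_specs(zeta, wn, q=1.0, eps=0.02):
    """Closed-form step-response figures for an underdamped (0 < zeta < 1)
    second order system q / (s^2/wn^2 + 2*zeta*s/wn + 1) driven by a unit step."""
    assert 0.0 < zeta < 1.0 and wn > 0.0
    wd = wn * math.sqrt(1.0 - zeta**2)           # damped frequency (imaginary part)
    n_bar = 1.0 / math.sqrt(1.0 - zeta**2)       # envelope amplitude N_bar
    ts = (math.log(n_bar) - math.log(eps)) / (zeta * wn)  # settling-time upper bound
    to = math.pi / wd                            # time of the first (maximum) peak
    overshoot = math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta**2))
    x_max = q * (1.0 + overshoot)                # peak value of the response
    return ts, to, overshoot, x_max
```

For instance, with ζ = 0.5 the overshoot evaluates to about 16% of the steady state value, regardless of ωn.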
Fig. 3.8.(a) depicts the time evolution for two different choices of the spring constant for a vehicle
mass m = 400 kg. Notice that with the choice k1 = 1600 we have a lower settling time Ts and
a lower overshoot time To (hence the system is faster than with the second choice). On the
other hand, this choice presents a larger maximum value, a situation that may not be acceptable in all
cases.
Figure 3.8: (a) Comparison between two choices of the spring constant: k1 = 1600 (with damping c1 such that 0 < ζ1 < 1) and k2 = 400 (with damping c2 chosen so that ωn2 = ωn1/2 and ζ2 = 2ζ1 < 1). The settling time is referred to a threshold of ε = 0.02. (b) Purely imaginary roots (a = 0.1, k = 100, m = 400, c = 0).

With a different choice of the parameters, the output can be completely different (Fig. 3.8.(b)).
Indeed, in the case of m > 0, k > 0 and c = 0, the oscillations are not damped since the transfer
function has p1 = 0 and p2,3 complex and conjugated with Re(p2,3) = 0. In this case, it is
not possible to define Ts, but it is still possible to define the overshoot (left as exercise).
Let us now recall the overall transfer function (3.3), i.e., the response to the road profile when F(s) = 0:

X(s) = (cs + k)X1(s).

In this case, the real system response is given by the previously obtained time evolution (multiplied
by k) plus the time derivative of the time response x1(t), scaled by the factor c, i.e.,

x(t) = c dx1(t)/dt + k x1(t).
Notice how a zero pops up in this case. To analyze how this zero modifies the output of the system,
let us consider a generic second order system with stable, complex and conjugated roots (0 < ζ < 1,
ωn > 0)

Y(s) = (τs + 1) / [s(s²/ωn² + (2ζ/ωn)s + 1)] = (τs + 1)G(s),

excited by a unitary step signal. Define y1(t) = L⁻¹[G(s)]. Therefore, y(t) = τ dy1(t)/dt + y1(t). Hence:

y(t) = −τN̄ω̄ e^(−ζωn t) cos(ω̄t + ψ) + τN̄ζωn e^(−ζωn t) sin(ω̄t + ψ) + 1 − N̄ e^(−ζωn t) sin(ω̄t + ψ),

Figure 3.9: Effect of the zeros, with ζ = 0.1, ωn = 5 and unitary input step; the compared responses correspond to τ = 0, τ = 1, τ = 1.2 and τ = −1.
 
with N̄ = 1/√(1 − ζ²), ψ = arctan(√(1 − ζ²)/ζ) and ω̄ = ωn√(1 − ζ²). Notice that:

1. τ = 0: the system is a standard second order system;

2. τ > 0: since the maxima and minima of y(t) are attained where ẏ1(t) = −τ ÿ1(t), it follows
that the first maximum is anticipated (lead effect);

3. τ < 0: for the same reason, we have a reversed first peak and a lagged positive peak (lag effect).
This behavior is pictorially reported in Fig. 3.9. Besides the effect of shortening the rise time of
the signal by adding a feedforward term in the output¹, the zeros have another important property.
Indeed, consider a standard second order system with a zero and a generic input U(s)

Y(s) = (τs + 1) / (s²/ωn² + (2ζ/ωn)s + 1) U(s).
Let us analyze what happens when u(t) = e^(−t/τ) 1(t) with generic initial conditions. To understand
the problem, let us consider a simplified system, given by

Y(s) = (τs + 1)/(s + 1) U(s),

from which

Y(s) = (y(0) + τ)/(s + 1) − τu(0)/(s + 1).

In such a case, by selecting u(0) = y(0)/τ + 1 it follows that y(t) = 0, ∀t, even if u(t) = e^(−t/τ) 1(t)! This
important property is related to the zeros of a transfer function with generic initial conditions. If
the zeros are equal to the poles of the L-transform of the input, the input does not appear in the
output. Hence, such zeros are called blocking zeros. Moreover, for suitable initial conditions, the
output of the system is identically zero (that is why the name).
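The blocking-zero effect can be verified numerically. The sketch below (plain Python, forward Euler, illustrative values) simulates the simplified system ẏ = −y + τu̇ + u: for u(t) = e^(−t/τ) the forcing term τu̇ + u vanishes identically, so with y(0) = 0 the output stays at zero although the input does not.

```python
import math

def simulate_blocking_zero(tau, y0, t_end=5.0, dt=1e-3):
    """Forward-Euler simulation of dy/dt = -y + tau*du/dt + u with
    u(t) = exp(-t/tau): the zero at s = -1/tau cancels the pole of
    the input, so tau*du/dt + u == 0 and y evolves freely from y0."""
    y, ys = y0, [y0]
    n = int(round(t_end / dt))
    for i in range(n):
        t = i * dt
        u = math.exp(-t / tau)
        du = -u / tau                 # exact derivative of the exponential input
        y += dt * (-y + tau * du + u) # the forcing term is analytically zero
        ys.append(y)
    return ys
```

With y0 = 0 every sample remains zero; with y0 ≠ 0 the output is just the free evolution y(0)e^(−t).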
1 The term feedforward is mainly due to the presence of the time derivative of the output in the output itself. We
can consider the time derivative as information that allows us to locally predict the future behavior of the system.

Figure 3.10: DC brushless motor

3.3 Motor model


The DC brushless motor available for the Lego Mindstorm can be characterized by two different
components: the electrical part and the mechanical part (see Fig. 3.10). For the electrical part we
have

L dc/dt = −Rc − Vcm + V,

where c is the current of the armature circuit, R and L are the resistance and the inductance of the
armature circuit respectively, V is the input voltage, while Vcm is the back electromotive force (back
EMF).
For the mechanical part,

J dω/dt = −bω + τm − τr,

where ω is the angular velocity of the motor shaft, J is the moment of inertia of the motor shaft, b is
the viscous friction of the motor shaft, τm is the generated torque and τr is the torque load, unknown
in the general case.
The overall model is then given by

L dc/dt = −Rc − Vcm + V
J dω/dt = −bω + τm − τr.    (3.5)

From the model, it can be seen that the voltage V controls the velocity ω of the motor shaft, while the
current c governs the torque τm. Indeed, the back EMF is Vcm = kω, while τm = kc.
Neglecting the additional noise term τr and applying the Laplace transform to the previous
equations, the transfer function for this model can be determined considering as output the measured
angular velocity ω(t) and as input the voltage V(t), i.e.,

Ω(s) = k / [(Ls + R)(Js + b) + k²] V(s).    (3.6)
Consider the output of the system for a step of amplitude A:

Ω1(s) = kA / [s(JLs² + (bL + RJ)s + Rb + k²)] = q / [s(s²/ωn² + (2ζ/ωn)s + 1)],

where ωn = √((Rb + k²)/(JL)), ζ = √((Rb + k²)/(JL)) (bL + RJ)/(2(Rb + k²)) and q = kA/(Rb + k²).
Since the system has two poles (the third is related to the step input), it is a second order system.
Therefore, its output can be analyzed with the tools introduced in Section 3.2.3.
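As a sketch of how ωn, ζ and q follow from the physical motor parameters (plain Python; the numerical values used below are made up for illustration — the real Lego motor constants must be identified experimentally):

```python
import math

def motor_step_params(R, L, J, b, k, A=1.0):
    """Second order parameters (wn, zeta, q) of the DC motor step response,
    derived from the transfer function k / ((L s + R)(J s + b) + k^2)."""
    den = R * b + k**2
    wn = math.sqrt(den / (J * L))                            # natural frequency
    zeta = (b * L + R * J) / (2.0 * math.sqrt(J * L * den))  # damping ratio
    q = k * A / den                                          # steady-state velocity
    return wn, zeta, q
```

A quick consistency check: by construction 2ζ/ωn must equal (bL + RJ)/(Rb + k²), the coefficient of s in the normalized denominator.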

3.4 Identification
System identification is a general term to describe mathematical tools and algorithms that build
dynamical models from measured data. Measured data are used to calculate a transfer function of
the system. Two different approaches are usually adopted for linear systems:

1. Time-domain approach: a measure of the time response of a system can be derived using a
controlled input.

2. Frequency-domain approach: a measure of the frequency response of a stable system can be
derived using a sinusoidal input. These measurements can be used to produce a Bode plot of
the frequency response.

3.4.1 Time Domain Approach


A common approach is to start from the behavior of the system (known by the measured outputs
of the system) and the known external influences (inputs to the system) and try to determine a
mathematical relation between them without going into the details of what is actually happening
inside the system. This approach is called system identification. Two types of models are common
in the field of system identification:

1. black box model: no prior model is available. Most system identification algorithms are of this
type;

2. grey box model: a certain model based on both insight into the system and experimental data is
constructed. This model does however still have a number of unknown free parameters which
can be estimated using system identification.

In these notes the objective of system identification is the set of Lego motor parameters. We already
know that such a system acts as a second order system. As a consequence, we adopt the grey
box model.
All the identification procedures need some measured data to extrapolate the model and some
other to validate the identified model. In plain words, to identify the parameters of a model we need
to check if a model fits experimental measurements or other empirical data. A common approach to
test this fit is to split the data into two disjoint subsets:

1. Training data: are used to estimate the model parameters;

2. Verification data: are used to test the model. An accurate model will closely match the
verification data even though these data were not used to set the model's parameters (cross-
validation).
48 CHAPTER 3. MODELING AND IDENTIFICATION

The second procedure is called model verification. A metric is needed to measure distances
between observed ym(t) and predicted data y(t) and, hence, to assess model fit. As a metric, we can
adopt some standard indices used as performance indices for control systems:

- Integral squared error: ISE = ∫₀ᵀ (ym(t) − y(t))² dt;
- Integral absolute error: IAE = ∫₀ᵀ |ym(t) − y(t)| dt;
- Integral time squared error: ITSE = ∫₀ᵀ t (ym(t) − y(t))² dt;
- Integral time absolute error: ITAE = ∫₀ᵀ t |ym(t) − y(t)| dt.
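With sampled data the integrals are replaced by sums. A minimal discrete approximation of the four indices (plain Python; a left Riemann sum is used for simplicity):

```python
def fit_metrics(t, y_meas, y_pred):
    """Left-Riemann-sum approximations of the ISE, IAE, ITSE and ITAE
    indices from sampled time instants t and signals y_meas, y_pred."""
    ise = iae = itse = itae = 0.0
    for i in range(len(t) - 1):
        dt = t[i + 1] - t[i]
        e = y_meas[i] - y_pred[i]      # fitting error at sample i
        ise += e * e * dt
        iae += abs(e) * dt
        itse += t[i] * e * e * dt
        itae += t[i] * abs(e) * dt
    return ise, iae, itse, itae
```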

A rigorous approach to system identification implies the adoption of the chosen metric as an
objective function to be minimized in the procedure. For example, a technique usually adopted is
linear regression, attaining the minimum mean squared error (MMSE). For our purposes, and due to
the simplicity of the identification problem, we adopt a simpler trial-and-error procedure:

1. We collect output measurements and then we try to estimate second order system parameters
(for instance, using a least square approach);

2. Next, we verify the estimated model against measurements, computing the fitting error using
the chosen metric;

3. If the estimated values are satisfactory, then the procedure terminates, otherwise a different
choice is made at point 1.

Example 3.12 Let us consider the motor model (3.6), shown in Fig. 3.10. Its time response is given
by:

ω1(t) = [q + 2N e^(−ζωn t) cos(ωn√(1 − ζ²) t + φ)] 1(t)

with N = q/(2√(1 − ζ²)) and φ = π − arctan(ζ/√(1 − ζ²)). The final value of ω1(t) for 0 < ζ < 1 is

lim_{t→+∞} ω1(t) = q,

the settling time (ε is the percentage used to evaluate the settling time, while N̄ = 1/√(1 − ζ²)) is

Ts ≤ (log(N̄) − log(ε/100)) / (ζωn),

and finally the overshoot is given by

O = |ωmax − ω(∞)| / |ω(0) − ω(∞)| = e^(−πζ/√(1 − ζ²)).

The relation involving the settling time is approximated. As a consequence, the estimates of ζωn
derived from that equation can be very inaccurate. In combination with the settling time, we can use
the real part of the complex and conjugated poles, that is −ζωn. In particular, the decay rate of the
oscillations is given by e^(−ζωn t). Therefore,

log[ e^(−ζωn(t0 + T)) / e^(−ζωn t0) ] = −ζωn T = log[ (ω(t0 + T) − ω(∞)) / (ω(t0) − ω(∞)) ],

where t0 is a certain time instant, while T is the period of the oscillations. For example, t0 can be
fixed as the time of one maximum of the oscillations (for instance, the overshoot), and t0 + T is the
time of the next maximum. However, the minima can also be chosen. It has to be noted that it is
possible to have more than one pair of measurements from the output signal. On the other hand, for
highly damped outputs it can be difficult to detect the output peaks, even though they exist.
Additionally, in order to have an estimate of ωn, we can exploit the relation of the oscillation
frequency, that is the imaginary part of the complex and conjugated roots, i.e.,

ωd = ωn√(1 − ζ²).

Therefore, by measuring the time between two consecutive peaks, the period T of the oscillations can
be derived. Hence,

ωn = 2π / (T√(1 − ζ²)).
Again, multiple estimates of the period are available.
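The two relations above (decay rate from the peak ratio, ωd from the period) can be combined into a small grey-box estimator. A sketch in plain Python, assuming two consecutive maxima and the steady state value have already been extracted from the measurements:

```python
import math

def identify_from_peaks(t0, y0, t1, y1, y_inf):
    """Estimate (zeta, wn) of a second order response from two consecutive
    maxima (t0, y0) and (t1, y1) and the steady-state value y_inf."""
    T = t1 - t0                                           # oscillation period
    sigma = -math.log((y1 - y_inf) / (y0 - y_inf)) / T    # decay rate zeta*wn
    wd = 2.0 * math.pi / T                                # damped frequency
    wn = math.hypot(sigma, wd)                            # wn^2 = sigma^2 + wd^2
    zeta = sigma / wn
    return zeta, wn
```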

3.4.2 Frequency Domain Approach


The rationale underlying the frequency domain approach is related to the harmonic response of a
linear, open-loop asymptotically stable system described by G(jω). In such a case we already know
that after the transient

u(t) = a sin(ω̄t)  ⟹  y(t) = |G(jω̄)| a sin(ω̄t + ∠G(jω̄)),

and, using as inputs some sinusoids with different frequencies ω̄, we can get the frequency response
of the system approximated by points. From the frequency response, we get the transfer function
description. However, since we are using sampled data signals, attention must be paid to the
relation between the frequency ω̄ and the sampling time used for the output sampling, since aliasing effects
may arise.
The gain is easily estimated by taking the ratio between the peaks of the input and those of the
output (that will happen, in general, at different times due to the presence of the phase ∠G(jω̄)).
Therefore:

|G(jω̄)| = ymax / umax,

while the phase is again estimated using the peak times ty and tu of the output and input signals,
respectively. In particular, we have:

ω̄ty + ∠G(jω̄) = π/2 + ky2π, and ω̄tu = π/2 + ku2π,

where ku, ky ∈ Z. It has to be noted that in general ku ≠ ky if the number of peaks for the output
and the input is not correctly taken into account. However this is not a problem as soon as we exploit
the grey-box approach to system identification, since the system is second order. Therefore:

∠G(jω̄) = ω̄(tu − ty) + (ky − ku)2π,

but, since the phase lies within an interval of width at most 2π, we can simply limit the estimated ∠G(jω̄) to [−2π, 0].
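A rough numerical version of this peak-based estimate (plain Python; it assumes the samples cover the steady state and contain a single peak of both input and output, so that ku = ky):

```python
import math

def freq_response_point(w, t, u, y):
    """Estimate gain and phase of G(j*w) from steady-state samples of a
    sinusoidal input u and output y (lists sampled at the times in t)."""
    i_u = max(range(len(u)), key=lambda i: u[i])   # index of the input peak
    i_y = max(range(len(y)), key=lambda i: y[i])   # index of the output peak
    gain = y[i_y] / u[i_u]
    phase = w * (t[i_u] - t[i_y])                  # up to multiples of 2*pi
    return gain, phase
```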

3.5 Bibliography
[Kai80] Thomas Kailath, Linear systems, vol. 1, Prentice-Hall Englewood Cliffs, NJ, 1980.

[PPR03] Charles L Phillips, John M Parr, and Eve Ann Riskin, Signals, systems, and transforms,
Prentice Hall, 2003.
Chapter 4

Control Design

This chapter presents the basic concepts underlying the control law design for linear plants. After
introducing the feedback paradigm, the concept of stability for linear systems is presented. From the
notion of stability, a useful tool for control law synthesis, the root locus is presented, which is based
on the analysis of the poles of the system subjected to the control law.

4.1 Feedback Paradigm


Let us consider the generic system model represented in Fig. 4.1. For such a system, it is customary
to identify the inputs and the outputs as:

- d(t) are the noises, or disturbances: non-controllable inputs that modify the behavior of the
system;
- u(t) are the controllable inputs, or controls, that are in general chosen by the control designer
in order to satisfy predefined performance;
- yr(t) are the performance outputs, or controlled outputs, that correspond to the target of the
control law;
- ym(t) are the measured outputs, whose knowledge (together with the additional knowledge of
the model of the system) gives the designer information on the current behavior of the system.

This system will have a certain behavior as a function of its internal dynamics and of the value of
the inputs. This behavior, for a linear system, can be defined by its impulse response h(t) or by the
transfer function H(s). For this generic system, there will be four transfer functions

Yr,u(s) = Hr,u(s)U(s) and Yr,d(s) = Hr,d(s)D(s),

Ym,u(s) = Hm,u(s)U(s) and Ym,d(s) = Hm,d(s)D(s),

that is one for each input/output pair. The previous description is only formal and may or may
not be physically verified. For example, a SISO system will have ym(t) = yr(t) and d(t) = 0, hence
only one transfer function H(s) = Hr,u(s) will effectively be considered.
The behavior of the system as represented in Fig. 4.1 is called open loop, since there is no
feedback from the output to the inputs. Notice that in order to modify the behavior of the system,

Figure 4.1: Generic system model.

Figure 4.2: Electric heater model.

for example in order to let the performance output yr(t) meet the desired target behavior, it is
possible to properly use the controlled input u. Indeed, even without feedback, it is possible to define
a control law u(t) that does not take into account the current behavior of the system, observed from
ym, but simply relies on the model of the system. This control technique that disregards ym is
called open loop control. As may be intuitive, such an approach is not very effective for a number of
reasons, and it is usually not implemented.
Again, in order to show the benefits of feedback and the weaknesses of the open loop approach, we
will make use of two running examples.

4.1.1 The Example of the Electric Heater


First, consider the simple example of an electric heater (see Fig. 4.2). By applying the voltage V, the
current i that flows in the circuit is determined by the value of the resistance R. The thermic
power Q is then given by the so-called Joule's law and equals Q = Ri² = V²/R. In order to determine
the value of the voltage that guarantees that the temperature T is equal to a desired value Td

Figure 4.3: Control of the EH with an open loop approach (ideal model, uncertain R, uncertain Ts; Td is the desired temperature).

we need a model. Letting the (constant) thermic power lost be given by the difference between the
temperature of the heating area T and the ambient temperature Ts (thermodynamics), this simplified
model is derived

c dT/dt = Q + (Ts − T)/Rt  ⟹  dT/dt = V²/(cR) + (Ts − T)/(cRt),

where Rt is the thermal resistance of the fluid (the air) and c is the thermal capacity of the fluid. It
is now evident that the problem is solved if the system is at equilibrium with T = Td, that is if

dT/dt = 0 = V²/(cR) + (Ts − Td)/(cRt)  ⟹  V̄ = √((Td − Ts) R/Rt).

Therefore, applying an input voltage

u(t) = V(t) = V̄ = √((Td − Ts) R/Rt)
we easily make the temperature (the performance output yr(t) = T(t)) converge towards Td after a
transient that depends on the initial conditions, i.e., the initial temperature value T(0) (see Fig. 4.3,
blue solid line). It is evident that, in the hypothesis that the model is accurate enough, the open
loop strategy works quite well. But what happens if the thermal resistance of the fluid Rt changes (e.g., with
humidity)? And what happens if the resistance R changes in time (e.g., aging)? Again, what
happens if the ambient temperature Ts changes (e.g., season change)? In such cases, it is visually
evident from Fig. 4.3 that the steady state temperature value is no longer reached. Moreover, we may
want better performance or we may want to reduce the effect of disturbances (for example, a non-ideal
voltage generator). Such problems are solved using a feedback paradigm, i.e., the possibility to modify
the behavior of a system acting on its inputs by means of a controller. The simplest feedback strategy
we can apply is the nonlinear on/off controller: assuming the availability of a thermometer
(a sensor) that measures the temperature (hence, the measured and performance outputs are equal,
yr(t) ≡ ym(t)), we can:

Figure 4.4: Comparison between the open loop control strategy and the feedback bang-bang control strategy
for the electric heater with uncertain parameters.

- If T > Td, set V = 0.

- If T < Td, set V = Vmax.

This controller is also referred to as a bang-bang controller, with an obvious meaning. From an
implementation viewpoint, this controller can be implemented with a standard relay or a digital
comparator. This approach may generate an infinite number of switches between 0 and Vmax in a
finite time, which, in turn, generates a sliding mode of the outputs around the desired value.
Therefore, this controller is feasible only in principle. Moreover, since this kind of control approach
can be applied to any kind of system, even to mechanical systems, this behavior may generate high
frequency vibrations, which are hardly bearable. A straightforward improvement adds a hysteresis
to the relay, using a memory:

V(t + Δt) = 0,      if ym(t) = T > Td + ε
V(t + Δt) = Vmax,   if ym(t) = T < Td − ε
V(t + Δt) = V(t),   otherwise.

The results for uncertain Ts are compared with the open loop strategy in Fig. 4.4. The point here is
that using the measured outputs ym(t) we can get information about the system current dynamics
and then, if needed, improve its behavior. Strictly speaking, we can do feedback.
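The open loop law and the hysteresis relay can be compared in simulation. A forward-Euler sketch in plain Python with illustrative parameter values (not a model of a real heater): when the actual R differs from the nominal one, the open loop law misses Td, while the relay still converges to the band [Td − ε, Td + ε].

```python
import math

Td, eps, Vmax = 25.0, 0.2, 20.0        # desired temperature, hysteresis band, relay level

def simulate_heater(controller, T0=15.0, Ts_amb=20.0, R=10.0, Rt=2.0,
                    c=1.0, t_end=20.0, dt=1e-3):
    """Forward-Euler integration of c*dT/dt = V^2/R + (Ts_amb - T)/Rt
    with the control law V = controller(T); returns the final temperature."""
    T = T0
    for _ in range(int(t_end / dt)):
        V = controller(T)
        T += dt * (V * V / (c * R) + (Ts_amb - T) / (c * Rt))
    return T

def open_loop(_T):
    # constant voltage computed from the NOMINAL model (R=10, Rt=2, Ts=20)
    return math.sqrt((Td - 20.0) * 10.0 / 2.0)

state = {"V": 0.0}                      # relay memory (last applied voltage)
def relay_with_hysteresis(T):
    if T > Td + eps:
        state["V"] = 0.0
    elif T < Td - eps:
        state["V"] = Vmax
    return state["V"]
```

With the nominal plant both laws reach Td; if the plant's R drifts from 10 to 12, the open loop law settles well below Td, while the relay keeps the temperature inside the hysteresis band.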
Other intuitive feedback control examples are:

1. Electronics/robotics example: light tracking with two light sensors for a unicycle-like robot.

2. Mechanical example: the Watt regulator.

3. Mechanical example: the toilet flusher.



To summarize, to implement the feedback paradigm, a model of the system (given in terms
of differential or difference equations or in terms of impulse response) is needed, on top of which
a control law can be designed.

4.1.2 The Example of the DC Brushless Motor


Let us recall the model of the DC Brushless motor in Fig. 3.10 and its model description (3.5), which
is reported here for reference:

dc/dt = −(R/L)c − (k/L)ω + V/L
dω/dt = −(b/J)ω + (k/J)c − (1/J)τr.

Is it possible to let the motor shaft move with a well defined desired velocity ωd?
First of all, let us define each variable of the system w.r.t. the system model definition previously
described. The input u(t) is the voltage V(t), the disturbance input d(t) is given by τr(t), and the
measured output ym(t) as well as the performance output yr(t) are both given by ω(t). The open
loop solution would be to compute the steady state value in which ω = ωd, given a well defined
τr = τ̄r, i.e.,

V̄ = ((Rb + k²)/k) ωd + (R/k) τ̄r.

The time evolution of this open loop strategy is reported in Fig. 4.5.(a). As stated for the electric
heater case, this is not a robust solution and it is prone to failure due to modelling errors, τr time
variability (Fig. 4.5.(b)) or ωd changes. Again, a bang-bang feedback controller would solve the
problem more efficiently, even though only for positive (or negative) values of ωd, as shown in
Fig. 4.5.(c), assuming a time varying τr.

4.1.3 The Closed Loop


The simple bang-bang feedback strategy is not so smart, mainly due to mechanical
problems and flexibility. Moreover, its hysteresis-based modification results in a rigid feedback
scheme and, additionally, the dynamic properties of the given system may not satisfy the desired
requirements. The solution is found using a more effective closed loop strategy. The term closed
loop refers to the loop closure from the output to the input using feedback. The control problem is then
related to the study of an additional system that uses the information given by the measured output
to determine the necessary input signals to satisfy the performance requirements. The controller is
usually defined as a dynamic system itself, i.e., a system having a transfer function (in the case of
a linear controller) with its own dynamics. The open-loop model presented in Fig. 4.1 then turns
into the general closed-loop system of Fig. 4.6. Hence, the feedback paradigm involves two dynamical
systems, one is given (the plant P) and the other one is designed (the controller C). The role of the
signal r is to generate the reference to be tracked (if any).
Recalling the example of Section 3.2.3 and the effect of the roots of the numerator and the
denominator, it is clear that it is sufficient to modify the positions of the roots of P(s), i.e., the
transfer function of the linear plant, to modify the behavior of the system. The controller implements
the desired changes on these roots. Moreover, we also know that divergent or convergent behaviors
of the output of a linear system depend on the real part of the roots of the denominator. Before
going into the details, we must focus on clear definitions of these convergent and divergent behaviors,
which define the stability of a system.

Figure 4.5: Open loop (a) and relay control (c) in the case of time-varying load torque (b) for the
DC Motor.

4.2 Basic Concept of Stability


Consider a generic system described by the normal form (3.2), here reported for reference

D^(n)y(t) = F(y(t), D^(1)y(t), . . . , D^(n−1)y(t), u(t), D^(1)u(t), . . . , D^(p)u(t), t),

whose initial conditions are well defined. The solution of the differential equation yields n signals
y(t), D^(1)y(t), . . . , D^(n−1)y(t), one for each initial condition. Therefore, by stacking all these signals in
a vector

x(t) = [y(t), D^(1)y(t), . . . , D^(n−1)y(t)]ᵀ,

the initial condition can be given by x(t0) = x0 (t0 being the initial time), while the solution of the
differential equation (the trajectory of the system) is given by x(t) = x(x0, u, t). Among all the

(a) (b)

Figure 4.6: Equivalent representations of generic closed loop model.

possible trajectories, an equilibrium is a trajectory for which Dx(t) = 0. In this section we will try
to infer the stability of a system trajectory (or equilibrium point).
Example 4.1 Consider a pendulum and its lower equilibrium point. Of course, since it is an equi-
librium point, leaving the pendulum in that position, it will not move. However, is this equilibrium
stable? Well, it is known that the lower equilibrium is stable: indeed, by moving the pendulum from
its lower position (i.e., by perturbing its equilibrium), the pendulum starts to oscillate until it finally
reaches the lower equilibrium again. However, in an ideal case in which the air friction is neglected, as well
as the friction of the pendulum pivot, the pendulum oscillates indefinitely. This is
exactly the same effect as for the car suspension model in Section 3.2 when the viscous friction c = 0,
which yields imaginary roots.
Consider now the upper equilibrium for an inverted pendulum. Again, the pendulum will not
move if no perturbation occurs. In the case of a perturbation, the pendulum will definitely leave the
upper equilibrium point and it will never come back to that point. Hence, we can conclude that the
equilibrium point is unstable.
From the previous example, it is clear that the stability of an equilibrium, or of a trajectory,
can be assessed by studying the behavior of the trajectories of the system (i.e., the solutions of the
differential equation system) when the initial conditions are perturbed, i.e., x̃(t) = x(x̃0, u, t), with
x̃0 ≠ x0.

Definition 4.2 (Stability) A trajectory x(t) = x(x0, u, t) is stable if all the trajectories that
start from initial points x̃0 sufficiently close to x0 remain arbitrarily close to x(t), as depicted in
Fig. 4.7.(a). More precisely: ∀ε > 0, ∃δ > 0 s.t. if ‖x̃0 − x0‖ < δ, then ‖x(x̃0, u, t) − x(x0, u, t)‖ < ε,
∀t.

Definition 4.3 (Attractiveness) A trajectory x(x0, u, t) is attractive if all the trajectories that
start from initial points x̃0 sufficiently close to x0 converge towards x(x0, u, t) for t → +∞.
More precisely: ∃δ > 0 s.t. if ‖x̃0 − x0‖ < δ, then lim_{t→+∞} ‖x(x̃0, u, t) − x(x0, u, t)‖ = 0.

(a) (b)

Figure 4.7: (a) Stable trajectory and (b) asymptotically stable trajectory.

Definition 4.4 (Asymptotic stability) A trajectory is asymptotically stable if it is stable and


attractive, as reported in Fig. 4.7.(b).

Definition 4.5 (Instability) A trajectory is unstable if it is not stable.

Substituting the trajectory with an equilibrium, i.e., x̄ = x(x0, u, t) = constant, we obtain the
same definitions of stable, attractive, asymptotically stable and unstable equilibrium, as shown in
Fig. 4.8. Notice that the stability problems related to an equilibrium point can be translated into
stability problems of the origin, by a suitable change of variables.
Moreover:

1. The subspace of the state space given by all the points x̃0 from which all the trajectories
asymptotically converge to an equilibrium point x0 is called Region of Asymptotic Stability
(RAS).

2. An equilibrium point is Globally Asymptotically Stable (GAS) if the RAS coincides with the entire
state space.

Definition 4.6 (Uniform Ultimate Boundedness) A trajectory x(t) = x(x0, u, t) is Uniformly Ul-
timately Bounded (UUB) w.r.t. a set S if ∃T(x0, S) s.t. x(t) ∈ S, ∀t ≥ t0 + T.

These concepts of stability are due to Lyapunov, hence they are usually called concepts of Lya-
punov stability, and they are the basis of control theory. Indeed, such definitions apply to any
system.

4.2.1 Linear TimeInvariant System Stability


For Linear TimeInvariant (LTI) systems, the analysis of stability is related to the analysis of the
roots pi of the denominator of the transfer function:

(a) (b)

Figure 4.8: (a) Stable equilibrium point and (b) asymptotically stable equilibrium point.

1. For a linear system, the stability property of a trajectory is inherited by the whole
system. Hence, the RAS ≡ GAS;

2. if Re pi < 0, ∀i, then the system is asymptotically stable, i.e., its impulse response tends towards
zero. Indeed, the output is given by a combination of exponentials of the form e^(Re pi t). Therefore,
the system is also BIBO stable;

3. if ∃i such that Re pi > 0, then the system is unstable, i.e., its impulse response tends towards
infinity. Indeed, the output is again given by a combination of exponentials of the form e^(Re pi t).
Therefore, the system is also BIBO unstable;

4. if Re pi < 0, ∀i ≠ j, and pj = 0, then the system is marginally stable, i.e., its impulse re-
sponse tends towards a constant value. Indeed, the output is again given by a combination of
exponentials of the form e^(Re pi t). However, the system is BIBO unstable;

5. if Re pi < 0, ∀i ≠ j, l, and pj = pl = 0, then the system is unstable, i.e., its impulse response
tends towards infinity. Indeed, the output is again given by a combination of exponentials of
the form e^(Re pi t), but a term proportional to t also appears. Obviously, the system is BIBO unstable;

6. the attractiveness property determines asymptotic stability, while instability determines an unbounded
behavior;

7. in an asymptotically stable system, all the trajectories of the system converge to zero expo-
nentially.
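Rules 2-5 can be mechanized as a check on the pole locations. A sketch in plain Python (it only covers the cases listed above; repeated imaginary-axis poles away from the origin are not handled):

```python
def classify_lti(poles, tol=1e-12):
    """Stability of an LTI system from its poles (complex numbers),
    following rules 2-5 above."""
    if any(p.real > tol for p in poles):
        return "unstable"                 # a pole in the right half plane
    n_origin = sum(1 for p in poles if abs(p) <= tol)
    if n_origin >= 2:
        return "unstable"                 # a term growing like t appears
    if n_origin == 1:
        return "marginally stable"        # a single pole in the origin
    if all(p.real < -tol for p in poles):
        return "asymptotically stable"
    return "marginally stable"            # simple imaginary pair (e.g. c = 0 suspension)
```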


Figure 4.9: LTI plant model in open loop (a) and in basic closed loop (b).

4.3 Feedback Design for Linear Systems


In the light of the rigorous definition of stability, in this section we recall the previously introduced
concepts of the feedback paradigm, but with a special emphasis on linear systems. Hence, consider
a generic system (Fig. 4.9.(a)) in Evans form

P(s) = Kp ∏_{i=1..m}(s − zi) / ∏_{i=1..n}(s − pi) = Kp n(s)/d(s),

with m roots of the numerator (zeros) and n roots of the denominator (poles). If Re pi < 0, ∀i, the
system is asymptotically stable. If so, it is possible to apply the Final Value Theorem to understand
what is the steady state value reached by Y(s) for known reference inputs R(s), i.e.,

lim_{t→+∞} y(t) = lim_{s→0} sY(s) = lim_{s→0} sP(s)R(s).

This analysis is carried out in open loop.


Suppose that the target behavior of the output Y(s) is to follow R(s) with zero error. This
problem arises, for example, for the DC brushless motor in Section 3.3, where the reference is the
desired angular velocity of the motor and the output is the actual angular velocity of the motor. If
Y(s) always follows R(s), then it is sufficient to change R(s) to change the velocity of the motor.
Therefore, assume that P(s) is open loop asymptotically stable. The tracking error is defined as e(t) = r(t) − y(t).
When r(t) is a step of amplitude a, the output satisfies

lim_{t→+∞} y(t) = lim_{s→0} sP(s)R(s) = lim_{s→0} s Kp [∏_{i=1..m}(s − zi) / ∏_{i=1..n}(s − pi)] (a/s)
= a Kp ∏_{i=1..m}(−zi) / ∏_{i=1..n}(−pi) = aP(0).

In other words, if the steady state gain of P(s) is unitary, i.e.,

P(0) = 1,

then there exists t̄ such that y(t) ≈ r(t) (zero steady state tracking error), ∀t ≥ t̄, for r(t) = a·1(t).
Even if P(0) = b ≠ 1, we can apply an amplifier of gain 1/b in series, in order to compensate for the
steady state gain:

lim_{t→+∞} y(t) = lim_{s→0} s · (1/b) · Kp · ∏_{i=1}^{m}(s − zi)/∏_{i=1}^{n}(s − pi) · (a/s) = (a/b)·P(0) = a.
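The Final Value Theorem computation above can be checked numerically. The course tooling is Scicoslab; as an illustration only, here is a minimal Python sketch (the helper `evans_dc_gain` is our own name, not from the notes) that evaluates an Evans-form P(s) at s = 0 and applies y_ss = a·P(0) to P(s) = 1/(s + 0.5):

```python
# Hypothetical helper (not from the notes): evaluate an Evans-form
# transfer function P(s) = Kp * prod(s - z_i) / prod(s - p_i) at s = 0.
def evans_dc_gain(Kp, zeros, poles):
    num = Kp
    for z in zeros:
        num *= (0.0 - z)
    den = 1.0
    for p in poles:
        den *= (0.0 - p)
    return num / den

# P(s) = 1/(s + 0.5): one stable pole in -0.5, steady state gain P(0) = 2.
P0 = evans_dc_gain(1.0, [], [-0.5])
a = 1.0                  # step amplitude
y_ss = a * P0            # final value theorem: lim y(t) = a * P(0)
# The series amplifier 1/b with b = P(0) recovers y_ss = a.
y_comp = (1.0 / P0) * a * P0
print(P0, y_ss, y_comp)  # 2.0 2.0 1.0
```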
4.3. FEEDBACK DESIGN FOR LINEAR SYSTEMS 61

However, even in the absence of modeling errors, it may be that ∃i such that pi = 0, or ∃i such that
zi = 0, or the value of the reference changes to r(t) = a·t. In these situations, more flexible solutions
are given to the controller designer if the feedback is created between the output and the input of the
system. Consider for example the simplest unitary feedback of Fig. 4.9.(b). The transfer function
between the error and the reference will be given by

E(s) = R(s) − Y(s) = R(s) − P(s)·E(s)   ⇒   E(s) = R(s)/(1 + P(s)).

If 1 + P(s) has all asymptotically stable roots, we can apply again the Final Value Theorem, which
yields

lim_{t→+∞} e(t) = lim_{s→0} s · R(s)/(1 + P(s)) = lim_{s→0} s · d(s)/(d(s) + Kp·n(s)) · R(s).

Consider as reference r(t) = a·1(t). Hence

lim_{t→+∞} e(t) = lim_{s→0} s · d(s)/(d(s) + Kp·n(s)) · (a/s) = ∏_{i=1}^{n}(−pi) / (∏_{i=1}^{n}(−pi) + Kp·∏_{i=1}^{m}(−zi)) · a.

Therefore:

- if pi ≠ 0, ∀i, we get a finite error. Increasing Kp, the steady state error decreases;

- if ∃i such that pi = 0, then we get lim_{t→+∞} e(t) = 0, as desired;

- if ∃i such that zi = 0, then we get lim_{t→+∞} e(t) = a.

These three behaviors are depicted in Fig. 4.10, assuming

P(s) = 1/(s + 0.5) and r(t) = 1(t).

Next, consider as reference r(t) = a·t, i.e., R(s) = a/s². Hence

lim_{t→+∞} e(t) = lim_{s→0} s · ∏_{i=1}^{n}(s − pi) / (∏_{i=1}^{n}(s − pi) + Kp·∏_{i=1}^{m}(s − zi)) · (a/s²).

Therefore:

- if pj ≠ 0, ∀j, we get an infinite error;

- if ∃j such that pj = 0, we get

lim_{t→+∞} e(t) = ∏_{i=1, i≠j}^{n}(−pi) / (Kp·∏_{i=1}^{m}(−zi)) · a.

Increasing Kp, the steady state error decreases;

- if ∃i, j such that pi = pj = 0, then we get lim_{t→+∞} e(t) = 0.


[Plot omitted: tracking error over time for the three cases P, P/s and P·s.]

Figure 4.10: Tracking error for a system with Re(pi) < 0, ∀i, with ∃i such that pi = 0, and with ∃i
such that zi = 0.

lim_{t→+∞} e(t) | pi ≠ 0, ∀i | ∃i: pi = 0 | ∃i, j: pi = pj = 0
r(t) = a·1(t)   | finite     | 0          | 0
r(t) = a·t      | infinite   | finite     | 0
r(t) = a·t²     | infinite   | infinite   | finite

Table 4.1: Effect of poles in the origin for the tracking error of linear systems with different reference
inputs.

These three behaviors are depicted in Fig. 4.11, assuming

P(s) = (s + 1)/(s + 10) and r(t) = a·t.

From the previous examples, Table 4.1 can be synthesized. In other words, since e(t) =
r(t) − y(t), a certain output signal y(t) of the closed loop system can be generated with zero error only
if a generator of it is already inside the system. More precisely, to generate a step, i.e., Y(s) = a/s, at
least one pole in the origin is needed. This is the so-called Internal Model Principle.
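The first row of Table 4.1 can be verified numerically. The labs use Scicoslab; purely as an illustration, the Python sketch below (the helper name `step_error` is ours) evaluates the step-reference error formula and cross-checks the finite-error case by a crude forward-Euler simulation of the unity-feedback loop:

```python
# Steady state tracking error of a unity-feedback loop for a step of
# amplitude a: e_ss = a * prod(-p_i) / (prod(-p_i) + Kp * prod(-z_i)).
def step_error(Kp, zeros, poles, a=1.0):
    num = 1.0
    for p in poles:
        num *= -p
    extra = Kp
    for z in zeros:
        extra *= -z
    return a * num / (num + extra)

# P(s) = 1/(s + 0.5): finite error a*0.5/(0.5 + 1) = 1/3.
e1 = step_error(1.0, [], [-0.5])
# P(s)/s has a pole in the origin: prod(-p_i) = 0, hence zero error.
e2 = step_error(1.0, [], [-0.5, 0.0])

# Cross-check e1 by forward-Euler simulation of the closed loop:
# y' = -0.5*y + u with u = r - y, i.e. y' = -1.5*y + r.
y, dt, r = 0.0, 1e-3, 1.0
for _ in range(20000):            # 20 s, long past the transient
    y += dt * (-1.5 * y + r)
print(e1, e2, r - y)              # ~0.333, 0, ~0.333
```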
The properties analyzed until now refer to the system P(s) as a given. However, if the plant
P(s) does not have one (or more) poles in the origin, it is possible to modify its behavior by using a
dynamic controller C(s) mounted in feedback, as reported in Fig. 4.12. In such a case, we have the
plant P(s) and an additional block C(s): the controller. The objective of the control designer is to
determine C(s) in order to satisfy certain closed loop performance requirements, such as:

- steady state tracking error;

- settling time;
[Plot omitted: tracking error over time for the three cases P, P/s and P/s².]

Figure 4.11: Tracking error for a system with Re(pi) < 0, ∀i, with ∃i such that pi = 0, and with ∃i, j
such that pi = pj = 0.

[Block diagram omitted: negative feedback loop where R(s) and −Y(s) form E(s), the controller C(s) produces U(s), and the plant P(s) produces Y(s).]

Figure 4.12: Closed loop scheme for linear control.

- maximum overshoot;

- rise time.

Let us first concentrate on the first requirement: the steady state tracking error. To this end, we
resort to the Internal Model Principle. Consider

P(s) = 1/(s + 2) and r(t) = 1(t),

and, using the control feedback paradigm, we have

E(s) = R(s)/(1 + C(s)·P(s)).

It is evident that, in order to have zero steady state tracking error for a step reference, a pole in the
origin is needed; it is then sufficient to add it in the controller, i.e., C(s) = Kc/s, which yields

E(s) = R(s)/(1 + (Kc/s)·P(s)) = R(s)/(1 + (Kc/s)·1/(s + 2)) = s(s + 2)·R(s)/(s(s + 2) + Kc).

In this particular case, the steady state tracking error is zero (due to the internal model principle, see Fig. 4.13).
However, the additional free parameter Kc can further modify the output performance. Indeed,
[Plot omitted: closed loop step responses for P alone, C = 1/s and C = 100/s.]

Figure 4.13: First order system controlled with C(s) = Kc/s, with Kc = 1 and Kc = 100.

by increasing it, the output response is faster (shorter rise and settling time) than having Kc = 1.
However, oscillations with relatively high frequency may be difficult to manage for certain mechanical
systems, as well as the overshoot. Indeed, with Kc = 1 the system behaves (according to the dominant
pole approximation) as a first order system, while with Kc = 100 the system behaves as a second
order system. Therefore, increasing the power of the control signal (hence, having more powerful
actuators) is not always desirable. Notice that this behavior can be even worse depending on the
plant P(s) to control. In fact, consider
P(s) = 1/(s² + 2s + 2) and r(t) = 1(t).

In this case, using C(s) = Kc/s, the tracking error transfer function is

E(s) = R(s)/(1 + (Kc/s)·P(s)) = s(s² + 2s + 2)·R(s)/(s(s² + 2s + 2) + Kc).

In this particular case, with Kc = 1 the system is asymptotically stable with zero steady state tracking
error, while for Kc = 100 the system is unstable (with unbounded tracking error), as depicted in
Fig. 4.14.
The lessons learned in these two cases are:

- The higher the gain of the controller, the faster the response of the system. Since the gain
is directly related to the cost of the control action, this conclusion is straightforward;

- For high controller gains, (undesirable) side effects are generated on the output, e.g., overshoot,
oscillations or even instability.

Hence, to the performance list previously defined, the stability of the closed loop system should be
added. In fact, stability is the most important requirement the system should respect.

Hence, a systematic tool to analyze the closed loop behavior of a system is necessary for a correct
control design.
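The two lessons above can be reproduced numerically. The course uses Scicoslab for these simulations; as an illustrative sketch only, the Python fragment below integrates the loop C(s) = Kc/s, P(s) = 1/(s² + 2s + 2) with a crude forward-Euler scheme and shows that Kc = 1 converges while Kc = 100 diverges:

```python
# Closed loop: x1 = y, x2 = y', integrator state u with u' = Kc*(r - y).
# Plant: y'' + 2y' + 2y = u. Forward-Euler integration (illustrative only).
def simulate(Kc, t_end=20.0, dt=1e-3, r=1.0):
    x1 = x2 = u = 0.0
    for _ in range(int(t_end / dt)):
        dx1 = x2
        dx2 = -2.0 * x1 - 2.0 * x2 + u
        du = Kc * (r - x1)          # controller C(s) = Kc/s
        x1 += dt * dx1
        x2 += dt * dx2
        u += dt * du
    return x1

y_slow = simulate(Kc=1.0)    # stable: s^3 + 2s^2 + 2s + 1, y -> 1
y_fast = simulate(Kc=100.0)  # unstable: s^3 + 2s^2 + 2s + 100 diverges
print(y_slow, abs(y_fast) > 1e3)
```

A Routh test on s³ + 2s² + 2s + 100 confirms the instability analytically: the first-column entry (2·2 − 100)/2 = −48 is negative.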
4.4. ROOT LOCUS 65

[Plot omitted: closed loop step responses for P alone, C = 1/s and C = 100/s; for Kc = 100 the output diverges, reaching values of order 10¹³.]

Figure 4.14: (a) Second order system controlled with C(s) = Kc/s, with Kc = 1 and Kc = 100. (b)
Zoomed graph.

4.4 Root Locus


Given a generic transfer function G(s) in Evans form

G(s) = Kg · ∏_{i=1}^{m}(s − zi) / ∏_{i=1}^{n}(s − pi) = Kg · n(s)/d(s),

we already know that its stability is given by Re(pi), ∀i. Fig. 4.15 reports a set of behaviors related
to the following transfer functions in Evans form:

P1(s) = 1/((s + 1)(s + 2)(s² + 2s + 2)),   P2(s) = 1/((s − 0.4)(s + 2)(s² + 2s + 2)),
P3(s) = 1/(s(s + 2)(s² + 2s + 2)),   P4(s) = 1/(s²(s + 2)(s² + 2s + 2)).   (4.1)
As it is evident from Fig. 4.15 or, more precisely, from the inverse Laplace transform, the shape of
the system output is critically related to the poles. For instance, two complex conjugate roots
will generate an oscillating output. Moreover, if Re(pi) < Re(pj) < 0, then the contribution to the
output given by pi will vanish before the contribution of pj. In stricter theoretical terms, the
mode associated with pi is faster than the mode associated with pj. Since the performance of the
system is related to the shape of the output, the performance is related to the position of the
poles in the complex plane. Trivially, the same holds for stability. Furthermore, from Section 3.2,
the slower the convergence rate of a stable mode, the more it influences the output, which leads
to the dominant pole approximation. Usually, the dominant approximation is of the first or second order.
We will now consider a graphic tool that shows where the poles are and where they move when
the controller parameters change. We are interested in the poles of the closed loop transfer function,
i.e.,

Y(s) = P(s)·U(s) = P(s)·C(s)·E(s) = P(s)·C(s)·(R(s) − Y(s)),
[Plot omitted: impulse responses of P1, P2, P3 and P4 over time.]

Figure 4.15: Impulse responses for the systems in (4.1).

hence, since P(s)·C(s) = C(s)·P(s) (which is true only for SISO systems),

Gcl(s) = C(s)·P(s)/(1 + C(s)·P(s)),

which relates the reference R(s) to the output Y(s).

For positive feedback

Gcl(s) = C(s)·P(s)/(1 − C(s)·P(s)).
In both cases, we need a tool to study the pole placement of Gcl (s) as a function of C(s).
The root locus, or Evans locus, is a graphical method that depicts the curves of the roots of the
denominator of the closed loop transfer function in the complex plane (sometimes called Argand plane
or Gauss plane). The curves are parameterized by a parameter, typically the gain of the loop.
Consider the closed loop transfer function

Gcl(s) = C(s)·P(s)/(1 + C(s)·P(s)).

Rewriting C(s) in Evans form, we get

C(s) = Kc · ∏_{i=1}^{mc}(s − zi) / ∏_{i=1}^{nc}(s − pi) = Kc·C̄(s).

Similarly, for P(s)

P(s) = Kp · ∏_{i=1}^{mp}(s − zi) / ∏_{i=1}^{np}(s − pi) = Kp·P̄(s).
The closed loop poles will be given by the solution of

1 + C(s)·P(s) = 1 + Kc·C̄(s)·Kp·P̄(s) = 1 + Kc·Kp·G(s) = 0,   (4.2)

where Kp is given by the plant model, Kc is the gain parameter of the controller and

G(s) = C̄(s)·P̄(s) = n(s)/d(s).   (4.3)

Since G(s) is a complex function, the equation (4.2) is satisfied for any value of 0 ≤ Kc < +∞ if:

- Modulus constraint:

|Kp · n(s)/d(s)| = 1/Kc,

which yields

|Kc·Kp · n(s)/d(s)| = 1.

- Phase constraint:

∠(n(s)/d(s)) = ∠(−1/Kp),

or, equivalently,

Σ_{i=1}^{m} ∠(s − zi) − Σ_{i=1}^{n} ∠(s − pi) = { π mod 2π, if Kp > 0; 0 mod 2π, if Kp < 0 },

where m = mc + mp and n = nc + np. This way, the sign of Kc·Kp·G(s) is negative.

Strictly speaking, a point s in the Gauss plane belongs to the root locus if the sum of the
phases of the vectors starting from the singularities (poles or zeros) and ending on the point s
is equal to π (or 0 if Kp < 0), as depicted in Fig. 4.16.

Remark 4.7 The constraint on the phase is sufficient to draw the entire locus. The correspondence
between the position of the roots and the value of Kc is determined by the constraint on the modulus.
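The phase constraint can be checked pointwise with a few lines of code. As an illustrative sketch (the course uses Scicoslab's root locus tools; the example G(s) = 1/(s(s+2)) and all names here are our own), the test point −1 + 2j lies on the locus because its phase balance equals π:

```python
import cmath
import math

# G(s) = 1/(s(s+2)): poles in 0 and -2, no finite zeros, Kp > 0.
poles = [0.0, -2.0]
zeros = []

# Phase balance: sum of zero angles minus sum of pole angles, mod 2*pi.
def phase(s):
    tot = sum(cmath.phase(s - z) for z in zeros)
    tot -= sum(cmath.phase(s - p) for p in poles)
    return tot % (2 * math.pi)

on_locus = phase(complex(-1.0, 2.0))    # vertical branch of the locus
off_locus = phase(complex(-0.5, 2.0))   # a nearby point not on the locus
print(on_locus, off_locus)              # ~pi, something != pi
```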

4.4.1 Root locus construction


Consider the equation

d(s) + Kc·Kp·n(s) = 0,

which has the same roots as the previous one. Hence:

- for Kc = 0, the roots are given by the n roots of d(s);

- for Kc → +∞, the roots are given by the m roots of n(s).

Figure 4.16: Graphical representation of the phase constraint.

Therefore:

Property 1 The root locus has a number of branches that is equal to the number of open loop poles,
i.e., of G(s). Each branch starts from a pole of G(s) for Kc = 0; for Kc → +∞, m branches
terminate on the m zeros of G(s), while n − m branches go towards infinity.

If a certain point s = σ in the complex plane is a solution of the equation

d(s) + Kc·Kp·n(s) = 0,

then its complex conjugate s = σ̄ must also be a solution. Hence:

Property 2 The root locus is symmetric w.r.t. the real axis.

Consider again the constraint on the phase of the singularities, i.e.,

Σ_{i=1}^{m} ∠(s − zi) − Σ_{i=1}^{n} ∠(s − pi) = { π mod 2π, if Kp > 0; 0 mod 2π, if Kp < 0 },

and points s that lie on the real axis (Fig. 4.17).

- Each couple of complex conjugate roots offers zero contribution;

- A real pole or a real zero on the left of s generates a phase contribution of 0;

- A real pole on the right offers a phase contribution of −π; a real zero of π.

Therefore:

Figure 4.17: Graphical representation of the phase for real and complex roots.

Property 3 For Kp > 0, a point on the real axis belongs to the locus iff it has on its right an odd
number of singularities. For Kp < 0, that number should be even or null.
Consider now the n − m branches that go to infinity for Kc → +∞ along asymptotes. The
center of all the asymptotes is a point on the real axis centered in

σa = (Σ_{i=1}^{n} pi − Σ_{i=1}^{m} zi) / (n − m),
i.e., the center of mass of the distribution of poles and zeros. To derive the direction of the asymptotes,
express s = r·e^{jθ} and consider r → +∞. Hence, the phase constraint becomes

lim_{r→+∞} [ Σ_{i=1}^{m} ∠(r·e^{jθ} − zi) − Σ_{i=1}^{n} ∠(r·e^{jθ} − pi) ]
= Σ_{i=1}^{m} ∠(r·e^{jθ}) − Σ_{i=1}^{n} ∠(r·e^{jθ})
= (m − n)·θ.

Again, from the phase constraint

(m − n)·θ = { π mod 2π, if Kp > 0; 0 mod 2π, if Kp < 0 },
which finally yields:

Property 4 The n − m branches tend towards infinity along n − m asymptotes with directions

θ = (2l + 1)·π/(n − m), if Kp > 0,
θ = 2l·π/(n − m), if Kp < 0,

with l = 0, …, n − m − 1. The center of the asymptotes is

σa = (Σ_{i=1}^{n} pi − Σ_{i=1}^{m} zi) / (n − m).
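Property 4 reduces to a few arithmetic operations. Purely as an illustration (the example G(s) = 1/(s(s+2)(s+4)) is our own, not from the notes), the sketch below computes the asymptote center and directions for a system with three poles and no finite zeros:

```python
import math

# Illustrative case: G(s) = 1/(s(s+2)(s+4)), Kp > 0, so n - m = 3.
poles = [0.0, -2.0, -4.0]
zeros = []
n, m = len(poles), len(zeros)

# Center of the asymptotes: (sum of poles - sum of zeros)/(n - m).
center = (sum(poles) - sum(zeros)) / (n - m)
# Directions for Kp > 0: (2l + 1)*pi/(n - m), l = 0, ..., n - m - 1.
angles = [(2 * l + 1) * math.pi / (n - m) for l in range(n - m)]
print(center, angles)     # -2.0, [pi/3, pi, 5*pi/3]
```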

A point s = σ on the real axis may be a point of break-in/breakaway, in which multiple branches,
say l, intersect. Hence, s = σ is a solution of multiplicity l of the equation 1 + Kc·Kp·G(s) = 0. In
other words, 1 + Kc·Kp·G(s) = (s − σ)^l·h(s) = 0. Therefore

d(1 + Kc·Kp·G(s))/ds = Kc·Kp·dG(s)/ds = (s − σ)^{l−1}·[ l·h(s) + (s − σ)·dh(s)/ds ].

The l branches that enter a break-in point and the l branches that consequently exit from the
breakaway point have tangents that divide the overall 2π angle symmetrically. Hence, there is an
angle of π/l between each break-in branch and its adjacent branches (which are breakaway branches),
and vice versa:

Property 5 The root locus may have breakaway (and break-in) points in which a number l of
branches intersect. Such points satisfy the phase constraint and the additional l − 1 equations

d^j G(s)/ds^j = 0,   j = 1, …, l − 1.
Using the phase constraint, it is also possible to compute the angle by which a branch leaves a
pole or tends towards a zero, as reported next:

Property 6 For Kp > 0, the angle θ by which the locus leaves a pole pj or reaches a zero zj is given
by

θ = (2ν + 1)·π + Σ_{i=1}^{m} ∠(pj − zi) − Σ_{i=1, i≠j}^{n} ∠(pj − pi), for the pole pj,

θ = (2ν + 1)·π − Σ_{i=1, i≠j}^{m} ∠(zj − zi) + Σ_{i=1}^{n} ∠(zj − pi), for the zero zj.

For Kp < 0, substitute (2ν + 1)·π with 2ν·π.

Remark 4.8 It is possible to compute the value of Kc by which the locus intersects the imaginary
axis, i.e., the boundary of the stability region, solving

1 + Kc·Kp·G(jω) = 0,

and then solving for the imaginary and real part w.r.t. ω and Kc.

Summarizing, the root locus is sketched following the six properties previously described. In par-
ticular, it is sufficient to follow these steps:

Mark open-loop poles and zeros;


[Root locus plots omitted: open loop poles and asymptotic directions in the complex plane for the two cases.]

Figure 4.18: (a) Root locus for the plant P(s) with C(s) = Kc = 1. (b) Root locus for the plant
P(s) with C(s) = Kc/s = 1/s.

Mark real axis portions, depending on the sign of Kp ;

Find asymptotes;

Phase condition on test point to find angle of departure;

Compute breakaway/breakin points;

Connect all the graphical objects found.
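For simple loci the branches can also be traced by solving d(s) + Kc·Kp·n(s) = 0 directly for a sweep of gains. As an illustrative sketch (the case s(s + 2) + Kc = 0 is our own choice of example, solved with the quadratic formula):

```python
import cmath

# Closed loop characteristic equation s(s+2) + Kc = s^2 + 2s + Kc = 0,
# hence s = -1 +/- sqrt(1 - Kc) (complex square root handles Kc > 1).
def closed_loop_roots(Kc):
    d = cmath.sqrt(1 - Kc)
    return (-1 + d, -1 - d)

r_small = closed_loop_roots(0.5)    # two real roots between 0 and -2
r_break = closed_loop_roots(1.0)    # breakaway point: double root in -1
r_large = closed_loop_roots(100.0)  # complex pair, real part fixed at -1
print(r_small, r_break, r_large)
```

Sweeping Kc from 0 to +∞ reproduces the two branches: they start on the open loop poles 0 and −2, meet at the breakaway point −1, and then move vertically towards infinity.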

Example 4.9 Let us compute the root locus of the following system when C(s) is purely proportional,
C(s) = Kc:

P(s) = 1/(s + 2).

This system has one pole in −2 and one zero at infinity; hence all the real axis between −∞ and
−2 belongs to the locus, as reported in Fig. 4.18.(a).
Now, consider the problem of having zero steady state error when the input is a step signal, i.e.,

P(s) = 1/(s + 2),   r(t) = 1(t),   lim_{t→+∞} e(t) = 0,   where Y(s) = C(s)·P(s)/(1 + C(s)·P(s)) · R(s).

For the internal model principle summarized in Table 4.1, we should add a pole in the origin using
the controller C(s) = Kc/s. In such a case the transfer function to be studied, reported in (4.3), is
given by

G(s) = C̄(s)·P̄(s) = 1/(s(s + 2)),   Kc = 1,

and reported in Fig. 4.18.(b). For a visual comparison between the open and closed loop plant
behaviors, consider the graph shown in Fig. 4.19.(a). The plant has hence zero steady state tracking
error, as desired.
To summarize, the plant P(s) closed in loop with C(s) = Kc is closed loop asymptotically stable
for any value of Kc and negative feedback. This is true also for C(s) = Kc/s and any value of Kc for
[Plots omitted: (a) open and closed loop step responses over time; (b) root locus with a superimposed grid of constant damping factor and natural frequency.]

Figure 4.19: (a) Open and closed loop responses to a unitary step reference. (b) Grid showing the
damping factor ξ and the natural frequency ωn on the root locus.

negative feedback. However, changing the value of the gain changes the overall output behavior. As
depicted in Fig. 4.13, the output changes from a typical first order response to a second order one.
This is due to the position of the closed loop poles, which univocally determine the response. Using the
dominant pole approximation, the pole closest to the imaginary axis is the slowest, hence the
dominant one (since it has the smallest real part in modulus). Analytically, for Kc = 1 we have two
coincident real roots in s1,2 = −1. For Kc = 100, we have two complex conjugate roots in s1,2 ≈ −1 ± 10j.

4.4.2 Analysis of a Second Order System in the Root Locus


The characteristics of a second order system are given by the values of the damping factor ξ and of
the natural frequency ωn (see Section 3.2.3). Considering only two complex conjugate roots,
the closed loop plant can be described as

G(s) = 1 / (s²/ωn² + (2ξ/ωn)·s + 1).

For example, this is the case of Example 4.9, where for Kc = 100 we have two complex conjugate
roots in s1,2 ≈ −1 ± 10j, with unitary steady state gain thanks to the presence of a pole in the origin in
the controller C(s). Recalling Section 3.2.3, the roots of G(s) are of the form

s1,2 = −ξ·ωn ± j·ωn·√(1 − ξ²),

therefore

ξ = cos(π − arctan2(Im(s1), Re(s1))),   ωn = |s1|,

while the dominant time constant is 1/|Re(s1)| = 1/(ξ·ωn).
The values of ξ and ωn are depicted using a grid superimposed on the root locus in Fig. 4.19.(b).
To summarize, it is evident that we can shape the performance of the system, i.e., rise time,
settling time, overshoot, rather than simple stability, using the root locus and the first or second
order dominant pole approximation.
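The relations above can be checked on the pole of Example 4.9. As an illustrative Python sketch (the course material uses Scicoslab; s1 ≈ −1 + 10j is the approximate Kc = 100 pole):

```python
import math

# Complex closed loop pole s1 = -1 + 10j (approximate, from Example 4.9).
re_s1, im_s1 = -1.0, 10.0

wn = math.hypot(re_s1, im_s1)                       # natural frequency |s1|
xi = math.cos(math.pi - math.atan2(im_s1, re_s1))   # damping factor
tau = 1.0 / abs(re_s1)                              # dominant time constant

# Consistency: s1 = -xi*wn + j*wn*sqrt(1 - xi^2) rebuilds the same pole.
re_rebuilt = -xi * wn
im_rebuilt = wn * math.sqrt(1 - xi ** 2)
print(wn, xi, tau)   # ~10.05, ~0.0995, 1.0
```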

4.5 Bibliography
Chapter 5

Digital Control

The control law for the motor control synthesized using the root locus method is continuous time,
i.e., it continuously measures the angular velocity of the motor and computes the control input.
However, the control law should be implemented in a digital embedded platform. Therefore, the
control law has to be properly sampled in order to obtain a discrete time controller. This chapter
presents a methodology to discretize a continuous time system and then implement it on a digital
embedded platform. The references on this subject are [Oga95] and [AW96]. For the details about
the particular Operating System used on the Lego Mindstorm, the reader is referred to [Chi].

5.1 Discretization of Linear Continuous Time Controllers


Linear continuous time controllers need to be discretized in order to be implemented in a digital em-
bedded system, i.e., to be implemented by an algorithm. Discretization is the process of transferring
continuous models into discrete models. In the case of a dynamic system, the models are transferred
from continuous time to discrete time, hence obtaining a discrete time controller. Therefore, the
system is sampled using a well defined sampling time. In embedded-control systems, the sampling
time is usually lower bounded by feasibility reasons.
In practice, discretizing a continuous time system comprises the following steps:

- The plant measurements, e.g., the angular velocity of a motor shaft, are discretized using a
sampler, i.e., an A/D converter. The most common sampler is the zero-order hold, aka sample
& hold.

- The sampled measurements are passed to the algorithm implementing the digital controller.

- The result of the computation is then given to a data reconstructor, i.e., a D/A converter,
which reconstructs the continuous signal to be fed to the plant as a piecewise continuous signal.

There exist different solutions to discretize a system. A trivial solution may be to use a sampling
time that is as small as possible, while keeping the controller continuous and using numeric tools
for differential equations. Alternatively, there are different solutions that try to minimize the dis-
cretization approximations, which can be applied to transfer functions G(s) or to state space models.
In this case, the approximations introduced are strictly related to the discrete time approximation of
the integral of a continuous function.


Example 5.1 Consider the following transfer function and its time representation

Y(s)/U(s) = G(s) = β/(s + α)   ⟺   ẏ(t) + α·y(t) = β·u(t).

Considering the continuous time integral, one gets

y(t) = ∫₀ᵗ (−α·y(τ) + β·u(τ)) dτ,

that, assuming a sampling time T for the discrete time approximation, turns into

y(kT) = ∫₀^{kT−T} (−α·y(τ) + β·u(τ)) dτ + ∫_{kT−T}^{kT} (−α·y(τ) + β·u(τ)) dτ = y(kT − T) + A,

where A = ∫_{kT−T}^{kT} (−α·y(τ) + β·u(τ)) dτ is the area under the function −α·y(τ) + β·u(τ) between kT − T
and kT.

As reported in Example 5.1, a linear differential equation expressed in continuous time is expressed
in discrete time by means of a linear difference equation, where the difference is taken with respect
to time. An analogue of the differential operator of continuous time equations can be defined for
linear difference equations with constant coefficients. The forward-shift operator is denoted by q, i.e.,

q·f(k) = f(k + 1),

where the sampling period is assumed to be the time unit, i.e., if f(t) is sampled every T seconds
and f(k) ≜ f(kT), then q·f(k) = f(k + 1) ≜ f(kT + T).
The backward-shift operator, or delay operator, is denoted by q⁻¹, i.e.,

q⁻¹·f(k) = f(k − 1).

5.1.1 Approximation of the Integral


The first approximation of the integral is given by Euler's method

dx(t)/dt ≈ (x(t + T) − x(t))/T,

also known as forward difference or forward rectangular rule, which is depicted in Fig. 5.1.(a). An
alternative interpretation of the forward difference method comes with the shift operator:

dx(t)/dt ≈ (x(t + T) − x(t))/T = ((q − 1)/T)·x(t),

or, by using the discrete index k,

dx(k)/dt ≈ ((q − 1)/T)·x(k).

Therefore, by recalling the fact that s corresponds to the differential operator, one gets

s ≈ (q − 1)/T,
76 CHAPTER 5. DIGITAL CONTROL

[Sketches omitted.]

Figure 5.1: Different approximations of the integral: (a) Euler's method, (b) backward rectangular
rule and (c) trapezoidal rule.

that is, it is sufficient to substitute s in G(s) with the proper shift operator equation.
The second method, the inverse of the previous one, is given by the backward difference or backward
rectangular rule (Fig. 5.1.(b))

dx(t)/dt ≈ (x(t) − x(t − T))/T   or   dx(t + T)/dt ≈ (x(t + T) − x(t))/T.

An alternative interpretation of the backward difference method using the shift operator is the
following:

dx(t + T)/dt ≈ (x(t + T) − x(t))/T = ((q − 1)/T)·x(t),

or, by using the discrete index k,

dx(k + 1)/dt = d(q·x(k))/dt ≈ ((q − 1)/T)·x(k).

Therefore, by recalling the fact that s corresponds to the differential operator, one gets

s·q ≈ (q − 1)/T,
and, more easily,

s ≈ (q − 1)/(q·T),
that is again sufficient to substitute s in G(s) with the proper shift operator equation.
Finally, the third and more accurate approximation of the integral is represented in Fig. 5.1.(c),
which is the trapezoidal rule, i.e.,

∫_{t1}^{t2} x(t) dt ≈ (t2 − t1)·(x(t2) + x(t1))/2.

The trapezoidal rule can be interpreted as the average of the time derivatives

(1/2)·(dx(t + T)/dt + dx(t)/dt) ≈ ((q − 1)/T)·x(t),

or, using the discrete index k,

(1/2)·(d(q·x(k))/dt + dx(k)/dt) ≈ ((q − 1)/T)·x(k).

Therefore, by recalling the fact that s corresponds to the differential operator, one gets

(1/2)·(q·s + s)·x(k) ≈ ((q − 1)/T)·x(k),

and, more easily,

s ≈ (2/T)·(q − 1)/(q + 1).
Example 5.2 Consider the same system of Example 5.1. Using the forward rectangular rule, one
gets

((q − 1)/T)·y(k) + α·y(k) = β·u(k)   ⇒   y(k + 1) = (1 − αT)·y(k) + βT·u(k),

that is the difference equation that expresses the discrete time approximation of Y(s) = G(s)·U(s).
Notice that the initial condition for y(k) is needed. The same result can be obtained by simply writing
the equation in s and then substituting Y(s) with y(k), U(s) with u(k) and s with (q − 1)/T. This is very
useful as soon as the transfer function has terms of the type sⁿ.
Instead, using the backward rectangular rule, one gets

((q − 1)/(qT))·y(k) + α·y(k) = β·u(k)   ⇒   y(k + 1) = y(k)/(1 + αT) + βT·u(k + 1)/(1 + αT).

Notice that this time both the initial conditions for y(k) and for u(k) are needed. As before, it is
sufficient to write the equation in s and then substitute Y(s) with y(k), U(s) with u(k) and s with
(q − 1)/(qT).
Finally, using the trapezoidal rule, one gets

((2/T)·(q − 1)/(q + 1))·y(k) + α·y(k) = β·u(k)   ⇒   y(k + 1) = ((2 − αT)/(2 + αT))·y(k) + (βT/(2 + αT))·u(k) + (βT/(2 + αT))·u(k + 1).

Again, the initial conditions for y(k) and for u(k) are needed. As in the previous cases, the same
result can be obtained by simply writing the equation in s and then substituting Y(s) with y(k), U(s)
with u(k) and s with (2/T)·(q − 1)/(q + 1).

5.1.2 Hints
A quite simple way to automatically discretize the system is to:

1. Substitute the variable s in G(s) with the corresponding function of the variable q;

2. Simplify the expressions of the numerator and the denominator;

3. The denominator will multiply y(k), the numerator u(k);

4. The coefficients of the two polynomials in q give the coefficients of the difference equation, i.e.,
(an·qⁿ + a_{n−1}·q^{n−1} + ⋯ + a1·q + a0)·y(k)
turns to an·y(k + n) + a_{n−1}·y(k + n − 1) + ⋯ + a1·y(k + 1) + a0·y(k). The same holds for u(k).

The most accurate discretization algorithm among the previous ones is the one given by the trapezoidal
rule. Indeed, it gives the best approximation of the integral operator. More precisely:

- The forward difference may generate an unstable discrete time system starting from a stable
continuous time system;

- The backward difference may generate stable discrete time systems starting from unstable
continuous time systems;

- The trapezoidal rule maps continuous time stable systems into discrete time stable systems
and unstable systems into unstable systems.
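The first two bullet points can be seen on the scalar system ẏ = −α·y of Example 5.1 (with β = 0). As an illustrative sketch (the too-large sampling time T = 2.5 is our own choice), the forward rule yields a discrete pole outside the unit circle while the backward rule always stays inside:

```python
# Scalar system y' = -alpha*y (stable continuous pole in -alpha).
# Forward rule: y(k+1) = (1 - alpha*T)*y(k) -> unstable if |1 - alpha*T| > 1.
# Backward rule: y(k+1) = y(k)/(1 + alpha*T) -> stable for every T > 0.
alpha, T = 1.0, 2.5

fwd_pole = 1 - alpha * T        # -1.5: outside the unit circle, unstable
bwd_pole = 1 / (1 + alpha * T)  # ~0.286: inside the unit circle, stable
print(fwd_pole, bwd_pole)
```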

Remark 5.3 Usually, the shift operator is a convenient way to express the complex variable z, the
variable of the Z-transform, that is the discrete time counterpart of the Laplace transform. As long
as the Z-transform is considered, the approximation of the trapezoidal rule is also called Tustin's
approximation or bilinear transformation. The Tustin approximation is derived from the approximation
of the continuous time exponential over the sampling time T, i.e.,

z = e^{sT} ≈ (1 + sT/2)/(1 − sT/2),

which is the Padé approximant of the first order.
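The stability-preserving property of the Tustin map is immediate to verify numerically: it sends the left half s-plane inside the unit circle, the right half outside, and the imaginary axis onto the circle. A minimal sketch (sample points and T are our own choices):

```python
# Bilinear (Tustin) map z = (1 + s*T/2)/(1 - s*T/2).
T = 0.5

def tustin(s):
    return (1 + s * T / 2) / (1 - s * T / 2)

z_stable = tustin(complex(-1.0, 3.0))    # Re s < 0 -> |z| < 1
z_unstable = tustin(complex(0.5, 3.0))   # Re s > 0 -> |z| > 1
z_boundary = tustin(complex(0.0, 3.0))   # imaginary axis -> |z| = 1
print(abs(z_stable), abs(z_unstable), abs(z_boundary))
```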

5.2 Multitask Implementation


Osek [Chi] is an open standard for embedded system architectures. NXTOsek comprises NXT device
drivers, a kernel and an operating system. It provides a C and C++ programming environment (with
the GCC tool chain) and APIs for NXT sensors, actuators and other devices.
To use NXTOsek efficiently, the following components are needed:

- the GNU ARM compiler;

- the NXT firmware updated to John Hansen's Enhanced NXT firmware;

- the NXTOsek set up.



Figure 5.2: OIL file for the Hello world example.

An NXTOsek program consists of two parts: an OIL (Osek Implementation Language) file and a
C/C++ source code. The OIL file basically describes the architecture of the scheduler, the
tasks (e.g., priority, activation, autostart), etc. Once the OIL and the C/C++ source files are
compiled, an RXE file is generated, which has to be downloaded (via USB) to the Lego NXT brick
using the rxeflash.sh script.
Fig. 5.2 reports an OIL file for the Hello world example, which is coded in Fig. 5.3.
Fig. 5.4 reports an example code for a program that reads the light sensor and displays it on the
Lego brick.
Finally, Fig. 5.5 shows the code for a periodic task.

5.3 Bibliography
[AW96] K.J. Åström and B. Wittenmark, Computer Controlled Systems, Prentice Hall Inc., November 1996.

[Chi] Takashi Chikamasa, http://lejos-osek.sourceforge.net/.



Figure 5.3: C source code for the Hello world example.

[Oga95] K. Ogata, Discrete-time control systems, Prentice-Hall, Inc. Upper Saddle River, NJ, USA,
1995.

Figure 5.4: OIL and C code for a program that reads the light sensor and displays its value on the
Lego brick.

Figure 5.5: OIL and C code for a task with periodic activation.
Chapter 6

Simulation and Control of a Wheeled Robot

The robotic system adopted for the project is a unicycle-like vehicle, which is a wheeled mobile robot
(WMR) with differential drive [SSVO08]. First, a rigorous methodology to derive the kinematic
model of a vehicle (and in particular of a unicycle) is presented. Since the model is non-linear, it will
be linearized using a first order Taylor approximation. It is then possible to derive a controller for the
linearized system using the root locus.

6.1 Kinematic Model of the Robot


Consider a mechanical system whose configuration q ∈ Rⁿ is described by a vector of generalized
coordinates, where the space of all possible robot configurations coincides with Rⁿ. The motion of the
system, which is represented by the evolution of q over time, may be subject to constraints that can
be classified under various criteria. For example, they may be expressed as equalities or inequalities
(respectively, bilateral or unilateral constraints). For simplicity's sake, only bilateral constraints will
be considered in what follows.
Assume that a subset of coordinates qi, with i = 1, …, r ≤ n, is subjected to some bilateral
constraints. Such constraints can be related to the positions q or to the velocities q̇. Constraints that are
only functions of the positions can be expressed as

h(q, t) = 0.

In this case, since the constraint depends explicitly on time, it is called rheonomic. In what follows,
only scleronomic constraints will be considered (time invariant), i.e.,

h(q) = 0,

where h(q) is a vector function with r entries, i.e., h(q) = [h1(q), …, hr(q)]ᵀ, one for each constraint.
Such constraints are called holonomic (or integrable). The effect of holonomic constraints is to reduce
the space of accessible configurations to a subset of Rⁿ of dimension n − r. A mechanical system
for which all the constraints can be expressed in this form is called holonomic.
In the presence of holonomic constraints, the implicit function theorem, or Dini theorem, can
be used to express r generalized coordinates as a function of the remaining n − r, so as to actually

83
84 CHAPTER 6. SIMULATION AND CONTROL OF A WHEELED ROBOT

eliminate them from the formulation of the problem. However, due to singularities this procedure
may introduce, it is more convenient to replace the original generalized coordinates with a reduced
set of n − r new coordinates that are directly defined on the accessible subspace. Notice that
holonomic constraints are generally the result of mechanical interconnections between the various
bodies of the system: prismatic and revolute joints used in robot manipulators are a typical source
of holonomic constraints, and joint variables are an example of reduced sets of coordinates.
Constraints that involve generalized coordinates and velocities,
\[ c(q, \dot{q}) = 0, \]
where $c(q, \dot{q})$ is a vector function with $r$ entries, i.e., $c(q, \dot{q}) = [c_1(q, \dot{q}), \dots, c_r(q, \dot{q})]^T$, are called
kinematic constraints. Such constraints limit the instantaneous admissible motion of the mechanical
system by reducing the set of generalized velocities that can be attained at each configuration. Kinematic
constraints are generally linear in the generalized velocities and hence can be expressed as
\[ c_i(q)\,\dot{q} = 0, \quad i = 1, \dots, r, \]
and, hence, in the more compact Pfaffian form
\[ A(q)\,\dot{q} = 0. \]

The row vectors $c_i(q)$, $i = 1, \dots, r$, are assumed to be smooth as well as linearly independent, i.e., $A(q)$ is of
full rank. Clearly, kinematic constraints in the Pfaffian form can be obtained by direct time
differentiation of holonomic constraints, i.e.,
\[ \frac{d h_i(q)}{dt} = \frac{\partial h_i(q)}{\partial q}\,\dot{q} = 0 \quad\Rightarrow\quad c_i(q) = \frac{\partial h_i(q)}{\partial q}. \]
It is trivial that for a holonomic mechanical system, $r$ kinematic constraints correspond to $r$ holonomic
constraints, obtained by integration of the kinematic constraints. Unfortunately, this is not always
the case, since there may be kinematic constraints that are not integrable. In this case, the constraint
is called nonholonomic and the system to which it belongs is, in its turn, called nonholonomic.
Beyond the mechanical aspects that may generate a holonomic system rather than a nonholonomic
one, the main difference between such systems is related to their mobility. Indeed, consider an
integrable kinematic constraint $c_i(q)\,\dot{q} = 0$. Its integral can be written as $h_i(q) = h_i(q_0)$, where $q_0$ is
the initial robot configuration. Hence, since we have imposed a constraint, the motion of the system
will be constrained to a level surface of the scalar function $h_i(q)$, defined by $h_i(q_0)$ and of dimension
$n - 1$.
For nonholonomic systems, instead, only the velocities are constrained to a subspace of dimension
$n - 1$ given by the kinematic constraint. Nevertheless, the fact that the constraint is non-integrable
means that there is no loss of accessibility to $\mathbb{R}^n$. In other words, while the number of DOFs decreases
to $n - 1$ due to the constraint, the number of generalized coordinates $q$ cannot be reduced, not even
locally. This important property extends to any number of nonholonomic constraints $r < n$.
Summarizing, holonomic constraints limit the accessibility of the mechanical structure (e.g., the
joints of a manipulator), while nonholonomic constraints impose a limit in the velocity space but
preserve the accessibility of the mechanical structure. In other words, nonholonomic constraints restrict the
system velocities $\dot{q}$. In the kinematic model of the constrained nonholonomic system, the feasible

Figure 6.1: Wagon constrained on a linear track, with generalized coordinates $q = [x, y, \theta]^T$.

trajectories satisfy, instantaneously, the nonholonomic constraint. Hence, only the velocities that belong to
the null space of $A(q)$ are feasible. More precisely, the kinematic model is defined by
\[ \dot{q} = G(q)\,u, \]
where $G(q)$ is a basis of $\mathcal{N}(A(q))$, whose columns $g_i$ are the input vector fields (not unique in
general). Notice that the kinematic model expresses velocities that are compatible with the constraints;
indeed,
\[ A(q)\,\dot{q} = A(q)\,G(q)\,u = 0. \]
Notice that $q \in \mathbb{R}^n$ and $u \in \mathbb{R}^m$, where $m = n - r$. Furthermore, the kinematic model derived
here is driftless, because one has $\dot{q} = 0$ if the input is zero.
The constrained kinematic model represents a very useful tool for the analysis of nonholonomic
systems. In mobile robotics, indeed, the mechanical simplicity of construction typically gives rise to nonholonomic
constraints.
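As a quick numerical illustration of these definitions (a sketch in Python rather than in the course's ScicosLab), a basis $G(q)$ of $\mathcal{N}(A(q))$ evaluated at a given configuration can be extracted from the SVD of $A(q)$, and the compatibility identity $A(q)G(q)u = 0$ checked directly. The particular constraint row used below is arbitrary (it happens to be of the form derived later for the unicycle):

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    """Return a matrix G whose columns span the null space N(A)."""
    U, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T  # the rows of Vt beyond the rank span N(A)

# Example Pfaffian matrix with n = 3 coordinates and r = 1 constraint,
# evaluated at an arbitrary configuration theta
theta = 0.7
A = np.array([[np.sin(theta), -np.cos(theta), 0.0]])

G = null_space_basis(A)       # n x (n - r) matrix, here 3 x 2
u = np.array([1.0, -2.0])     # arbitrary admissible input
q_dot = G @ u

# Any q_dot = G u satisfies the constraint A q_dot = 0
assert np.allclose(A @ q_dot, 0.0)
```

The same check works for any full-rank $A(q)$ evaluated at a fixed $q$; the SVD simply returns one particular choice of basis among the infinitely many possible ones.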

6.1.1 Wagon Constrained on a Linear Track


With reference to Figure 6.1, let us consider a wagon constrained on a linear track, with generalized
coordinates $q = [x, y, \theta]^T$.

The constraint for the wagon is the linear track, with mathematical description given by $y = x \tan\theta_b + b$,
where $\theta_b$ is the attitude angle of the linear track w.r.t. the $X$ axis. Therefore,
\[ h_1(q) = y - x \tan\theta_b - b = 0 \quad\Rightarrow\quad \frac{dh_1(q)}{dt} = \dot{y} - \dot{x} \tan\theta_b = 0. \]
The second constraint expresses the equality between the attitude angle of the track and the
attitude angle of the wagon:
\[ h_2(q) = \theta - \theta_b = 0 \quad\Rightarrow\quad \frac{dh_2(q)}{dt} = \dot{\theta} = 0. \]
Both constraints are then holonomic, since they are expressed with respect to positions.
In Pfaffian form, we have
\[ A(q)\,\dot{q} = \begin{bmatrix} -\sin\theta_b & \cos\theta_b & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}. \]

Notice that the rank of the matrix $A(q)$ is always equal to two. Therefore, the dimension of the
null space is 1, and the kinematic model is given by
\[ \dot{q} = G(q)\,u = \begin{bmatrix} \cos\theta_b \\ \sin\theta_b \\ 0 \end{bmatrix} u. \]

Remark 6.1 Notice how the holonomic constraints limit the accessibility of the wagon on the plane.
Furthermore, only one variable is independent, i.e., the position on the track; hence, the other
two variables may be removed using the implicit function theorem.
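The model above can be checked numerically. The following Python sketch (the track parameters, here called `theta_b` and `b`, are arbitrary values) integrates $\dot{q} = G(q)u$ with forward Euler and verifies that both holonomic constraints remain satisfied along the motion:

```python
import numpy as np

theta_b, b = 0.5, 1.0            # track attitude angle and intercept (arbitrary)
G = np.array([np.cos(theta_b), np.sin(theta_b), 0.0])

# Start on the track: y = x tan(theta_b) + b, orientation theta = theta_b
q = np.array([0.0, b, theta_b])

dt, u = 1e-3, 0.8                # integration step and constant input
for _ in range(1000):            # forward Euler integration of q_dot = G(q) u
    q = q + dt * G * u

x, y, theta = q
h1 = y - x * np.tan(theta_b) - b # first holonomic constraint
h2 = theta - theta_b             # second holonomic constraint
assert abs(h1) < 1e-9 and abs(h2) < 1e-9
```

Since $G$ is constant for this example, the Euler integration is exact and the constraints are satisfied to machine precision; the wagon simply advances along the track by $u\,t$.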

6.1.2 Wagon Constrained on a Circular Track


With reference to Figure 6.2, let us consider a wagon constrained on a circular track, with generalized
coordinates $q = [x, y, \theta]^T$.
In this case, the mathematical description of the circular track is given by $x^2 + y^2 = R^2$, which
defines the first constraint
\[ h_1(q) = x^2 + y^2 - R^2 = 0 \quad\Rightarrow\quad \frac{dh_1(q)}{dt} = 2x\dot{x} + 2y\dot{y} = 0. \]
The second constraint relates the wagon orientation and the circular track. More precisely, the wagon
orientation is always tangent to the track, i.e.,
\[ h_2(q) = \theta - \arctan\frac{y}{x} - \frac{\pi}{2} = 0 \quad\Rightarrow\quad \frac{dh_2(q)}{dt} = \dot{\theta} + \frac{y}{R^2}\dot{x} - \frac{x}{R^2}\dot{y} = 0. \]

The Pfaffian form of the constraints is
\[ A(q)\,\dot{q} = \begin{bmatrix} x & y & 0 \\ y & -x & R^2 \end{bmatrix} \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}. \]


Figure 6.2: Wagon constrained on a circular track, with generalized coordinates $q = [x, y, \theta]^T$.

Since the dimension of the null space of $A(q)$ is equal to 1, a possible choice is given by
\[ G(q) = \begin{bmatrix} -y \\ x \\ 1 \end{bmatrix} \quad\Rightarrow\quad \dot{q} = \begin{bmatrix} -y \\ x \\ 1 \end{bmatrix} u. \]
Summarizing, this example presents two holonomic constraints; hence, only the position on the
circular track is of interest. It is worthwhile to note that $u$ is proportional, through the radius $R$,
to the forward velocity of the wagon on the circular track, and that for positive $u$ the motion is counterclockwise.
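As for the linear track, the circular-track model can be checked numerically. The Python sketch below (radius, input, and step size are arbitrary values) integrates $\dot{q} = [-y, x, 1]^T u$ with forward Euler, starting on the track with tangent heading, and verifies that both constraints stay close to zero; the small residual is the drift of the Euler scheme, not a modeling error:

```python
import numpy as np

R = 1.0
q = np.array([R, 0.0, np.pi / 2])   # on the track, heading tangent (counterclockwise)
dt, u = 1e-3, 1.0

for _ in range(1000):               # forward Euler integration of q_dot = [-y, x, 1]^T u
    x, y, theta = q
    q = q + dt * u * np.array([-y, x, 1.0])

x, y, theta = q
h1 = x**2 + y**2 - R**2                      # circular-track constraint
h2 = theta - (np.arctan2(y, x) + np.pi / 2)  # tangency constraint
assert abs(h1) < 1e-2 and abs(h2) < 1e-2     # small residual due to Euler drift
```

A smaller step `dt` (or a higher-order integrator, as ScicosLab uses by default) shrinks the residual accordingly.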

6.1.3 The Unicycle Vehicle


The unicycle-like vehicle, depicted in Figure 6.3, is of relevance in mobile robotics, since it is widely
used in factory automation as well as in academic research. This is mainly due to its mechanical
design simplicity. Even though we are interested in wheeled mobile robots, the kinematic model of
the unicycle-like vehicle is the same as that of the majority of tracked vehicles, whose steering technique
is called skid steering. As previously discussed, these vehicles do not have any minimum steering
radius and cannot locally move in the direction parallel to the traction wheel axle.
The generalized coordinates chosen to localize the vehicle on the plane of motion are $q = [x, y, \theta]^T$,
where $(x, y)$ represents the midpoint of the traction wheel axle, while $\theta$ is the orientation of the
vehicle w.r.t. the horizontal $X$ axis. The fact that the vehicle cannot translate in the direction of
the wheel axle is mathematically described by
\[ c_1(q)\,\dot{q} = \dot{x} \sin\theta - \dot{y} \cos\theta = 0, \]

Figure 6.3: Unicycle-like vehicle, with generalized coordinates $q = [x, y, \theta]^T$.

whose Pfaffian form is
\[ A(q)\,\dot{q} = \begin{bmatrix} \sin\theta & -\cos\theta & 0 \end{bmatrix} \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = 0. \]
It is worth noticing the analogy with the first constraint of the linear track.
Since this is the only constraint present, it is now possible to derive the kinematic model
of the unicycle by computing a possible basis of the null space of $A(q)$:
\[ G(q) = \begin{bmatrix} \cos\theta & 0 \\ \sin\theta & 0 \\ 0 & 1 \end{bmatrix} \quad\Rightarrow\quad \dot{q} = \begin{bmatrix} \cos\theta & 0 \\ \sin\theta & 0 \\ 0 & 1 \end{bmatrix} u. \]
Notice that in this case we have two inputs, $u = [u_1, u_2]^T$: $u_1$ is the forward velocity of the
vehicle, since it is directed along the direction normal to the constraint, while $u_2$ is the angular
velocity, positive counterclockwise. Furthermore, notice that the constraint is nonholonomic since,
indeed, the vehicle can reach every point of the plane of motion. In stricter theoretical terms, this
reflects the fact that the nonlinear system is completely reachable.
From a practical viewpoint, the vehicle is usually controlled using the rotational velocities
$(\omega_l, \omega_r)$ of the left and right wheel, respectively. To obtain the velocities of the kinematic model,
the following relations are used:
\[ u_1 = \frac{R}{2}(\omega_r + \omega_l), \qquad u_2 = \frac{R}{D}(\omega_r - \omega_l), \]

where $R$ is the wheel radius and $D$ is the length of the wheel axle.
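These relations are easily inverted, which is what one actually needs when commanding the motors from $(u_1, u_2)$. A small Python sketch (the wheel radius and axle length used below are illustrative values, not taken from a specific platform):

```python
# Hypothetical geometry: R (wheel radius) and D (axle length) in meters
R, D = 0.028, 0.112   # purely illustrative values

def wheels_to_body(omega_l, omega_r):
    """Map wheel angular rates to unicycle inputs (u1, u2)."""
    u1 = R / 2 * (omega_r + omega_l)
    u2 = R / D * (omega_r - omega_l)
    return u1, u2

def body_to_wheels(u1, u2):
    """Inverse map, used when commanding the motors."""
    omega_r = (2 * u1 + D * u2) / (2 * R)
    omega_l = (2 * u1 - D * u2) / (2 * R)
    return omega_l, omega_r

# Round-trip consistency check
u1, u2 = 0.15, 0.8
omega_l, omega_r = body_to_wheels(u1, u2)
assert abs(wheels_to_body(omega_l, omega_r)[0] - u1) < 1e-12
assert abs(wheels_to_body(omega_l, omega_r)[1] - u2) < 1e-12
```

Note that equal wheel speeds give $u_2 = 0$ (straight motion), while opposite wheel speeds give $u_1 = 0$ (rotation in place), as expected from the formulas.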
Due to space limitations, kinematic models of more complicated vehicles are not considered in these
notes (e.g., cooperative unicycles, bicycle, car-like). However, using the presented
mathematical tools, it is possible to derive the kinematic model of any vehicle.
The dynamic models of the vehicles are not considered in these notes either, since from a practical
viewpoint most vehicles are controlled using the kinematic model. Indeed, it is possible to
design a low-level controller that compensates for all the dynamic effects of the platform. This is
exactly the linear controller implemented for the vehicle motors. In practice, the most popular and
widely used linear low-level controller is the Proportional Integral Derivative (PID) controller.
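As an illustration of the idea (a sketch, not the actual low-level controller running on the Lego Mindstorm motors), the following Python code implements a discrete PID and closes the loop around a hypothetical first-order motor model; gains, time constant, and sampling time are arbitrary:

```python
class PID:
    """Minimal discrete PID: backward-Euler integral, first-difference derivative.
    Gains and sampling time are illustrative, not tuned for any specific motor."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Closed loop with a hypothetical first-order motor model: w_dot = (K*V - w)/tau
dt = 0.01
pid = PID(kp=2.0, ki=5.0, kd=0.0, dt=dt)
K, tau, w = 1.0, 0.1, 0.0
for _ in range(2000):                 # 20 s of simulated time, forward Euler
    V = pid.update(10.0, w)           # track a 10 rad/s speed setpoint
    w += dt * (K * V - w) / tau
assert abs(w - 10.0) < 0.05           # integral action removes steady-state error
```

The integral term is what guarantees zero steady-state error against the constant setpoint; a pure proportional controller would leave a residual offset.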

6.2 Kinematic Linear Control


In order to design a linear controller for the kinematic model of the unicycle, we first need to linearize
it using the Taylor approximation up to the first order. To this end, the equilibrium point around
which the linearization operates is given by the problem at hand. Let us assume that the robot has
to be controlled around a desired path (the path following problem, [SSVO08]), which is a straight line.
Hence, assume that the straight line coincides with the $X$ axis of the reference frame (this is not
strictly necessary, albeit this choice simplifies the equations). Furthermore, assume that the straight
line is identified by a wall and that two sonars are used to measure the distance and the orientation
of the vehicle from the wall. In such a situation, the kinematic model becomes

x e cos e 0
q e = y e = sin e 0 u,
e 0 1
where $y_e$ is the distance from the path and $\theta_e$ is the relative orientation between the robot and
the path. Since the path coincides with the $X$ axis, the variable $x_e$ corresponds to the distance
travelled along the path, which cannot be measured by any available sensor (unless open-loop dead
reckoning is used, i.e., integrating the value of the motor encoders, which generates a divergent
estimate). Moreover, quantifying $x_e$ is not necessary if the target is to follow the path; indeed, we
are not interested in determining for how long the path is travelled along the $X$ axis. Since $x_e$ does
not influence the dynamics of the robot (which only depend on $\theta_e$ and $u(t)$), it can be neglected.
Furthermore, we can assume that the forward velocity $u_1(t)$ is given and fixed to a constant $u_1(t) = v$.
With these choices, the nonlinear model turns into
\[ \dot{q}_e = \begin{bmatrix} \dot{y}_e \\ \dot{\theta}_e \end{bmatrix} = \begin{bmatrix} v \sin\theta_e \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u_2. \]
It is now possible to derive the linearized system using as linearization point the desired equilibrium,
i.e., $q_e = [0, 0]^T$ and $u_2(t) = 0$, which yields the linear system
\[ \dot{q}_e = \begin{bmatrix} \dot{y}_e \\ \dot{\theta}_e \end{bmatrix} = \begin{bmatrix} v\,\theta_e \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u_2. \]
The unilateral Laplace transform (assuming zero initial conditions for both $y_e$ and $\theta_e$) of the
linearized system yields
\[ \begin{bmatrix} s Y_e(s) \\ s \Theta_e(s) \end{bmatrix} = \begin{bmatrix} v\,\Theta_e(s) \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} U_2(s), \]

which, solving the second equation for $\Theta_e(s)$ and then plugging it into the first, gives
\[ Y_e(s) = \frac{v}{s^2}\,U_2(s), \]
which is the transfer function of the problem. It is now possible to use the root locus approach of
Section 4.4 to design a suitable controller satisfying the desired target performance, and successively
to discretize and implement it following the steps reported in Chapter 5, to finally have the robot
follow the desired path.
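For instance, a PD controller $U_2(s) = -(k_p + k_d s)\,Y_e(s)$ places the closed-loop poles of the double integrator $v/s^2$ at the roots of $s^2 + v k_d s + v k_p = 0$, which lie in the left half-plane for any positive gains. The Python sketch below (gains and forward speed are arbitrary values, and the control law is applied to the nonlinear unicycle model rather than to its linearization) shows the lateral error converging to zero:

```python
import numpy as np

v, kp, kd = 0.2, 2.0, 3.0        # forward speed and PD gains (illustrative)
dt = 0.01
x, y, theta = 0.0, 0.3, 0.4      # start off the path (the path is the X axis)

for _ in range(5000):            # 50 s of simulated time, forward Euler
    y_dot = v * np.sin(theta)
    u2 = -(kp * y + kd * y_dot)  # PD law on the lateral error y_e = y
    x += dt * v * np.cos(theta)
    y += dt * y_dot
    theta += dt * u2

assert abs(y) < 1e-3 and abs(theta) < 1e-3   # the robot has converged to the path
```

Note that the derivative term is realized through the measured rate $\dot{y}_e = v\sin\theta_e$, so no explicit differentiation of the noisy distance signal is needed.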

6.2.1 Estimation Algorithms and Observers


In the design of the controller we have hypothesized that the output of the system, $y_e(t)$, is available
from on-board sensors. However, this is not always possible. In the particular problem of measuring
the distance from a wall, for example, the measurement coming from a single sonar on the
robot is not sufficient. Indeed, the quantity $y_e(t)$ is the distance of the midpoint of the wheel axle
from the wall, measured, by definition of distance, along the line joining the midpoint of the wheel axle to
the wall and perpendicular to the wall. Therefore, to define this quantity it is necessary to determine the
orientation of the vehicle with respect to the wall. The orientation can be derived from two measurements
of the distance, hence either using only one sonar in two successive positions or two sonars at the
same time.
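A sketch of the two-sonar computation, under an assumed mounting geometry (two sonars a distance $L$ apart along the heading axis, both firing perpendicularly to the robot heading; the symbols below are hypothetical, not from a specific platform):

```python
import numpy as np

def wall_pose_from_sonars(d_front, d_rear, L):
    """Recover (y_e, theta_e) from two simultaneous sideways sonar readings.
    Assumed geometry: sonars mounted L apart along the heading axis, both
    pointing at the wall perpendicularly to the robot heading."""
    theta_e = np.arctan((d_front - d_rear) / L)   # reading difference gives tilt
    y_e = 0.5 * (d_front + d_rear) * np.cos(theta_e)  # project onto the normal
    return y_e, theta_e

# Synthetic check: place the robot at a known pose and generate ideal readings
y_true, theta_true, L = 0.4, 0.2, 0.15
d_front = (y_true + (L / 2) * np.sin(theta_true)) / np.cos(theta_true)
d_rear  = (y_true - (L / 2) * np.sin(theta_true)) / np.cos(theta_true)

y_e, theta_e = wall_pose_from_sonars(d_front, d_rear, L)
assert abs(y_e - y_true) < 1e-12 and abs(theta_e - theta_true) < 1e-12
```

With real sonars, each reading carries noise, and the difference $d_{front} - d_{rear}$ amplifies it; this is precisely the motivation for the estimation algorithms discussed next.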
The algorithm used to combine the available sensory data to infer the quantities of interest is called
an estimation algorithm, since every measurement is affected by an error and, hence, only an estimate of
the desired quantity can be reconstructed. In this class we can recognize image or signal processing
algorithms, digital filters, sensor fusion algorithms, etc. It is important to note that whenever the
quantities of interest are the quantities that also define the dynamics of the system, e.g., $y_e$ and $\theta_e$,
the estimator is called an observer. The observer is a dynamic system whose dynamics are given
by the model of the system to be estimated. The interested reader can find a detailed description of the
subject in [Kai80, AW96].
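A minimal sketch of such an observer, in Python, for the linearized path-following model of Section 6.2: a Luenberger observer copies the model and corrects it with the measured output. The gains below are illustrative, chosen to place both estimation-error poles at $-1$:

```python
import numpy as np

v, dt = 0.2, 0.01
A = np.array([[0.0, v], [0.0, 0.0]])   # linearized path-following dynamics
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])               # only y_e is measured
L = np.array([2.0, 1.0 / v])           # observer gains: poles of A - L C at -1, -1

x = np.array([0.3, -0.2])              # true (unknown) state [y_e, theta_e]
xh = np.zeros(2)                       # observer estimate, started at zero

for k in range(3000):                  # 30 s of simulated time, forward Euler
    u2 = 0.1 * np.sin(0.05 * k * dt)   # arbitrary known input
    y = C @ x                          # available measurement
    x = x + dt * (A @ x + B * u2)                       # true system
    xh = xh + dt * (A @ xh + B * u2 + L * (y - C @ xh)) # model + output injection

assert np.allclose(x, xh, atol=1e-3)   # the estimation error has converged
```

The correction term $L(y - C\hat{x})$ is what distinguishes the observer from an open-loop simulation of the model: the estimation error obeys $\dot{e} = (A - LC)e$ and decays at a rate set by the chosen gains, regardless of the input.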

6.3 Bibliography

[AW96] K.J. Åström and B. Wittenmark, Computer Controlled Systems, Prentice Hall Inc., November 1996.

[Kai80] Thomas Kailath, Linear Systems, vol. 1, Prentice-Hall, Englewood Cliffs, NJ, 1980.

[SSVO08] B. Siciliano, L. Sciavicco, L. Villani, and G. Oriolo, Robotics: Modelling, Planning and Control, Springer Verlag, 2008.
