ROBOTICS AND MACHINE INTELLIGENCE
MECH3460, PART I: ROBOTICS
School of Mechanical Engineering
Dr. A. Dehghani
Room no. 448
a.dehghani@leeds.ac.uk
MODULE INFORMATION
Module Specification
Programme of study:
Compulsory:
Number of credits:
1 and 2
Form of assessment:
Each part of the module carries 50% of the marks (20% final exam and 30% coursework).
Module lecturer: Dr. A. Dehghani
CONTENTS

1. Introduction
1.1 Robotics: a definition
1.2 History

2. Kinematics
2.1 Definitions
2.2 Transformations
2.3 Properties of transformation matrices
2.4 Forward kinematics
2.5 Matlab and the robotics toolbox
2.6 Inverse kinematics

3. Design
3.1 Actuators
3.2 Internal state sensors
3.3 External state sensors
3.4 End effectors
3.5 Mechanical arrangement and specification: PUMA 500 series

4. Dynamics

5. Programming
5.1 Introduction
5.2 Drive-through teaching
5.3 Programming using the VAL II language
5.4 VAL II trajectory generation
5.5 Trajectory calculation

8. Mobile robots
8.1 Introduction
8.2 Space Robotics: planetary rovers
8.3 Characteristic functions of mobile robots

Appendix A: Matrix review
Appendix B: Formula sheets
Books
No books are essential for this course. However, the following books are recommended:
1. K S Fu, R C Gonzalez & C S G Lee, Robotics. McGraw-Hill, 1987.
2. R P Paul, Robot Manipulators. MIT Press, 1981.
3. R D Klafter, T A Chmielewski & M Negin, Robotic Engineering: An Integrated Approach. Prentice-Hall, 1989.
4. J J Craig, Introduction to Robotics. Addison-Wesley, 1986.
5. F N-Nagy & A Siegler, Engineering Foundations of Robotics. Prentice-Hall, 1987.
6. M C Fairhurst, Computer Vision for Robotic Systems. Prentice-Hall, 1988.
Week  Session themes
1     Introduction; Kinematics
2     Kinematics; Kinematics
3     Kinematics; Example Class
4     Design; Design
5     Dynamics; Dynamics
6     Examples Class; Control
7     Control; Vision systems
8     Mobile robots; Mobile robotics: localization
9     Examples Class; Navigation
10    Example Class; Autonomous robots
11
1. INTRODUCTION
1.1 Robotics: a definition
What is a robot?
One dictionary definition of a robot is:
An automatic apparatus or device that performs functions ordinarily ascribed to humans or operates
with what appears to be almost human intelligence.
The Robot Industries Association (RIA) in the USA uses a more restrictive definition:
A robot is a reprogrammable, multifunctional manipulator designed to move material, parts, tools, or
specialised devices through variable programmed motions for the performance of a variety of tasks.
Defining a robot is tricky, but the key features are that it should be adaptable to a variety of tasks, and be able to operate
with a degree of autonomy, i.e. without constant human supervision. Both these features suggest a robot should have a
programmable memory, so that it can be reprogrammed for different tasks, and operate according to the stored
programme rather than direct human control. The adaptability also implies that the mechanical configuration cannot be
too specialised for a particular function. The main problem with most definitions is how to interpret "a variety of tasks";
how wide a range of tasks is required before a machine becomes a robot?
The origin of the word
The word robot was first used by the Czech writer Karel Capek in a play entitled Rossum's Universal Robots in 1921.
Capek's robots were hard-working humanoid machines. The word derives from robota, the Czech word for forced
labour.
The term robotics, meaning the technical field encompassing robot technology, was first used by Isaac Asimov in 1942
in a short story entitled Runaround.
Examples of robots
There are two main types of robots:
Robot manipulators: jointed robot arms which are now quite common in manufacturing industry. This type of
robot has had a significant impact and is by far the most important industrially and economically.
Mobile robots: vehicles capable of autonomous motion.
Of course some devices come into both categories, i.e. a mobile robot which carries a manipulator.
Table 1.1 lists examples of robots and "near-relation" robot-like devices.
1.2 History
The manipulator can usually be divided into two elements (see Figure 1.1): the arm, designed to provide linear position
control within the working envelope (usually 3 DOF), and the wrist, attached to the end of the arm and providing angular
position control (again, usually 3 DOF).
The end-effector
The end-effector is the robot hand, i.e. a gripper or other device attached to the moving end of the manipulator. There is a
great variety of gripper designs, with varying degrees of adaptability to handling different workpieces. Alternatively the
end-effector may be a specific tool, such as a paint sprayer or a welding torch.
The controller
The controller is usually a dedicated robot control computer system, with VDU, disk and printing facilities, allowing
efficient creation of robot control programs. A teach pendant is often included; this is a small hand-held keyboard in
which each key moves the robot in a particular way.
The controller cabinet often contains the power conversion unit for the robot, i.e. power supply and amplifiers for the
electric motors in an electrically driven manipulator. A separate power conversion unit would be used for hydraulic or
pneumatically actuated manipulators, containing a hydraulic pump or pneumatic compressor as appropriate (Figure 1.2).
External sensing system
Sometimes additional sensing systems are used to help monitor and control the robot; these would be interfaced to the
robot controller. For example a vision system might be used.
This configuration is applied in a radial workplace layout where the work is approached primarily in the horizontal plane
- for example, small circular manufacturing cells.
Polar or spherical configuration (revolute-revolute-prismatic)
This configuration combines rotational movement in both vertical and horizontal planes with a single linear (in/out)
movement of the arm. It presents the following advantages:
Easily controlled/programmed movements.
Large payload capacity.
Fast operation.
Accuracy and repeatability at long reach.
It is suited to lifting and shifting applications which do not require sophisticated path movements to be traced.
Jointed, articulated or revolute configuration (revolute-revolute-revolute)
The jointed configuration consists of a number of rigid arms connected by rotary joints. In addition, the whole structure has a
rotary movement around the base. It is also termed the anthropomorphic configuration since it resembles the movements
of a human body. Some of the advantages of this configuration are:
Extremely good manoeuvrability.
Ability to reach over obstructions.
Large reach for small floor area.
Fast operation due to rotary joints, but less accuracy.
SCARA configuration. (prismatic-revolute-revolute)
Selective Compliance Assembly Robot Arm (SCARA) configuration is a combination of the cylindrical and the jointed
configuration operating in the horizontal plane. Links connected by rotary joints provide movement in the horizontal
plane, while vertical movement is provided at the base of the arm (or sometimes at the end-effector). Advantages of this
configuration include:
Extremely good manoeuvrability.
Fast operation.
Relatively high payload capacity.
High accuracy.
This configuration was developed for assembly-type operations.
Figure 1.4 shows some specific examples of a variety of manipulator configurations.
Control method
Many industrial manipulators are servo-controlled. Thus each joint actuator is operated under closed-loop control,
allowing the joint to be positioned accurately anywhere within its range of movement; also the velocity and acceleration
of the joint can be controlled as required. A dedicated computer system with its own robot programming language will be
used to control the robot. Servo-controlled robots will be the main subject of this course.
At the cheaper, less sophisticated end of the market are pick-and-place or bang-bang robots. These have non-servo-controlled actuators which only stop moving when they reach a mechanical end-stop; hence each actuator can only
be stationary at one or other end of its stroke. Also the velocity and acceleration are not controlled during motion. This
type of robot is controlled by a sequencer (e.g. a programmable logic controller, PLC) which operates the joints in the
correct order and can start or stop operation depending on external sensors. Programming can only be achieved by
setting up the sequencer and altering the end-stop positions.
However, apart from in the automotive sector, the take-up of robot technology has been slow, particularly in the UK. This
has been due to the large capital investment required, concerns over the reliability of high technology, and adaptability to
product changes. The social impact has also been a concern in some quarters, as robots reduce the need for unskilled
labour.
The attached extract from the Computing and Control Engineering Journal summarises the current industrial impact of
robotics and predicts future trends. Further information can be found in the library (Edward Boyle, mainly Mechanical
Engineering K-13), e.g. R D Klafter, T A Chmielewski, M Negin Robotic Engineering: An integrated approach, Sections
1.6-1.9.
2. KINEMATICS
2.1 Definitions
Kinematics is the study of motion without regard to the forces which are required to produce that motion. It includes the
study of position, velocity and acceleration (both linear and angular) of one point in a mechanism and how that interrelates with the motion of other points. For a robot manipulator the two most important analytical problems are:
the forward kinematics: this is the calculation of the linear position and orientation of the end-effector from the
joint positions.
the inverse kinematics: this is the calculation of the joint positions from the position and orientation of the end-effector.
The inverse kinematics can be very complex for some manipulator configurations, but it is usually essential to be able to
calculate the joint positions (i.e. angles for revolute joints) required to move the end-effector to a desired position and
orientation.
Careful definition of co-ordinate frames is very important in kinematic analyses. For example Figure 2.1 shows co-ordinate frames chosen to define the positions and orientations of the base of a robot {B}, its end-effector {E}, and a
work surface {W}. The position of a component C on the work surface is specified by a vector defined in the work
surface co-ordinate frame. It is important to realise the frame {E} is attached to the end-effector, i.e. it moves as the
robot moves.
2.2 Transformations
In robot kinematic analysis, we need to be able to transform or map a vector specified in one co-ordinate frame to a
vector which defines the same point but relative to another co-ordinate frame. For example in Figure 2.1, the vector
defining C is given in the {W} frame; we would need to transform this into the robot base co-ordinate frame {B} to make
a start on calculating how to move the end-effector to pick up the component. Figure 2.2 shows frames {B} and {W} in
more detail. The vectors WC, BC and BWo are 3x1 column vectors; e.g. the vector BC that we need to calculate is given by:
(2.1)
Pure translation
Consider the situation shown in Figure 2.3, where frames {B} and {W} have the same orientation. The difference
between the two frames is purely a translation, and BC can be calculated by vector addition:
BC = BWo + WC (2.2)
Pure rotation
Consider the situation shown in Figure 2.4, where frames {B} and {W} have the same origin position. The difference
between the two frames is purely a rotation. BC can be found by taking the x, y and z components of WC in turn, and
projecting them onto the {B} axes. Thus taking the x component first, Wcx will in this example give Wcx cos θ when
projected onto the XB axis, Wcx sin θ when projected onto the YB axis and zero when projected onto the ZB axis (see
Figure 2.5); these three components are the transformation of Wcx into the {B} frame. Transforming Wcy and Wcz into the
{B} frame as well, and adding the results gives:
Bcx = Wcx cos θ − Wcy sin θ
Bcy = Wcx sin θ + Wcy cos θ
Bcz = Wcz (2.3)
These equations can be expressed in matrix form:
BC = BRW WC (2.4)
where
       [ cos θ   −sin θ   0 ]
BRW =  [ sin θ    cos θ   0 ]          (2.5)
       [   0        0     1 ]
BRW is described as the rotation matrix for transforming from {W} to {B}. The element values depend on the relative
orientation of the frames; the elements in equation (2.5) are only valid for this example, i.e. where the difference between
the frames is just due to a rotation about the Z axis. In general terms, the columns of the rotation matrix can be defined as
the unit vectors i, j, k for frame {W} projected into frame {B}:
(2.6)
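The projection argument above is easy to check numerically. The sketch below (written in Python/NumPy purely for illustration; the course itself uses Matlab, see Section 2.5) builds the rotation matrix of equation (2.5) and confirms that a unit vector along XW maps onto the expected components in {B}:

```python
import numpy as np

def rot_z(theta):
    # Rotation matrix mapping a vector expressed in {W} into {B}, where {W}
    # is {B} rotated by theta about their common Z axis (cf. equation (2.5)).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# The columns are the unit vectors of {W} projected into {B} (equation (2.6)).
theta = np.pi / 2                  # a 90 degree rotation, chosen for illustration
C_W = np.array([1.0, 0.0, 0.0])    # a point one unit along X_W
C_B = rot_z(theta) @ C_W           # here X_W projects entirely onto Y_B
```

Note that a rotation matrix is orthonormal, so its transpose reverses the mapping; this property is used again when transformation matrices are inverted.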
(2.8)
or
(2.9)
The matrix BTW is called the homogeneous transformation matrix, or simply the 4x4 transformation matrix, and plays
a crucial role in robot kinematics. The 4x1 position vectors BC and WC have the additional element simply to allow the
transformation to be expressed as this single matrix multiplication. There is no accepted notation to differentiate 4x1
from 3x1 position vectors: the context is sufficient to determine whether the additional "1" should be present.
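As a quick numerical illustration of equation (2.9), the sketch below (Python/NumPy, illustrative only; the frame offsets are made up) assembles a 4x4 homogeneous transformation from a rotation and a translation, and uses it to map a 4x1 position vector from {W} into {B}:

```python
import numpy as np

def homogeneous(R, p):
    # Assemble the 4x4 homogeneous transform B_T_W of equation (2.9)
    # from a 3x3 rotation R and a 3x1 translation p.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

# {W} rotated 90 degrees about Z and displaced by (3, 1, 0) from {B}.
c, s = 0.0, 1.0
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
T = homogeneous(R, [3.0, 1.0, 0.0])

# Map a point given in {W} into {B}; the trailing 1 makes it a 4x1 vector,
# so the translation is picked up in the single matrix multiplication.
C_W = np.array([1.0, 0.0, 0.0, 1.0])
C_B = T @ C_W
```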
Example 2-2: Constructing and using a homogeneous transformation matrix. See class notes.
The homogeneous transformation matrix has been used to map a vector from one reference frame to another. The actual
point to which the vector refers (C in the examples above) does not change position. However the transformation matrix
can also be considered as an operator which moves objects, i.e. rotates and translates them to another position. For
example in Figure 2.6 C1 is moved to C2.
(2.12)
Hence from equation (2.10):
Hence (2.13)
The TRANS() and ROT() operators are given by:
(2.14)
(2.15)
(2.16)
(2.17)
Note that each rotation matrix is for clockwise rotation when looking in the direction of the axis about which rotation is
taking place (thus θ is negative for anticlockwise rotation).
In this case equation (2.13) gives:
(2.18)
Using transformation to describe frames
Just as transformation matrices can be used to move objects described by vectors, they can be used to move between
frames in exactly the same way. Furthermore, the transformation required to move from frame {B} to frame {W} can be
used as a description of the position and orientation of {W} relative to {B}. This is exactly the same transformation, BTW,
as that needed to map a vector defined in frame {W} to frame {B}.
To understand this, consider Figure 2.7, in which frame {W} is the same as frame {B} rotated about the Z axis and then
translated. The matrix BTW can be used to map WC2 to BC2 (equation 2.9):
(2.19)
However if in the figure vector BC1 is numerically identical to vector WC2 we can write:
(2.20)
Thus BTW moves C1 to C2. Since BC1 and WC2 are actually the same vector, just in different frames, this movement must
be the movement required to map {B} onto {W}, as postulated above.
Example 2-3: Transformation matrix to describe relative frame positions. See class notes.
(2.25)
A useful formula for inversion is (see e.g. [4]):
(2.26)
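The closed-form inverse can be checked numerically. The sketch below (Python/NumPy, illustrative only) implements the standard result referred to by equation (2.26): the inverse of a homogeneous transform has rotation part Rᵀ and translation part −Rᵀp, so no general matrix inversion is needed. It then confirms that the product with the original transform is the identity:

```python
import numpy as np

def inv_transform(T):
    # Closed-form inverse of a 4x4 homogeneous transform:
    # rotation part becomes R^T, translation part becomes -R^T p.
    R, p = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ p
    return Ti

theta = 0.3                     # arbitrary example rotation about Z
c, s = np.cos(theta), np.sin(theta)
T = np.array([[c, -s, 0.0, 2.0],
              [s,  c, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
check = inv_transform(T) @ T    # should be the 4x4 identity
```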
Denavit-Hartenberg notation
Although the method outlined above is perfectly adequate, it requires a lot of analysis for each new robot considered.
Hence a convention called the Denavit-Hartenberg notation is usually adopted, leading to a more systematic procedure.
Firstly it should be noted that the actual shape of any link is not important; its only purpose is to locate one joint relative
to another. Hence we can draw a manipulator as a collection of joint axes (Figure 2.10). Between each pair of adjacent
axes we can always draw a line which is perpendicular to both axes (Figure 2.11); this mutual perpendicular is unique
except when the axes are parallel, in which case it can be placed at the user's discretion. When the axes intersect the
perpendicular is at the intersection point and of zero length.
Example 2-5: Determining link parameters for Puma 560. See class notes, Figures 2.13 and 2.14 and Table 2.1
(Table 2.1 column headings: the four Denavit-Hartenberg link parameters ai, αi, θi and di.)
Giving
(2.28)
where C represents cos and S represents sin. Thus once the link parameters have been established for a manipulator it is a
straightforward procedure to apply equation (2.28) to find each link transformation, and (2.27) to solve the forward
kinematics.
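As an illustration of chaining link transformations, the sketch below (Python/NumPy, illustrative only) uses the classical Denavit-Hartenberg link transform, Rot(z,θ) Trans(z,d) Trans(x,a) Rot(x,α); note that the exact convention of equation (2.28) in the class notes may differ, so treat this form as an assumption. It is applied to a hypothetical two-link planar arm (all link lengths and joint angles invented):

```python
import numpy as np

def dh(a, alpha, d, theta):
    # One link transform in the classical D-H convention:
    # Rot(z, theta) Trans(z, d) Trans(x, a) Rot(x, alpha).
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# Forward kinematics as a chain of link transforms (cf. equation (2.27)):
# a two-link planar arm with link lengths 1.0 and 0.5 (made-up values).
q1, q2 = np.deg2rad(30), np.deg2rad(45)
T = dh(1.0, 0.0, 0.0, q1) @ dh(0.5, 0.0, 0.0, q2)
tip = T[:2, 3]   # end-effector x, y position
```

For the planar case this reduces to the familiar result x = l1 cos θ1 + l2 cos(θ1 + θ2), y = l1 sin θ1 + l2 sin(θ1 + θ2), which the sketch reproduces.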
Example 2-6: Determining forward kinematics for Puma 560. See class notes,
type: T = [1 2; 3 4]
Note that the numbers in a row can be separated either by spaces or commas. Typing T on its own will now echo the
contents of variable T (note that Matlab is case sensitive, so typing t will not work).
Examples of a few useful commands are given below:
plot(y) Plot() is a general plotting command. This example plots the data found in vector y.
who Lists all the variable names which currently exist.
clear Clears all variables
More information
Typing demo at the Matlab prompt gives access to demonstrations showing the capabilities of Matlab. Clicking on Visit
under the Matlab heading and entering the intro demo under Matrices gives a tutorial on matrix manipulation in Matlab.
An introduction to the Robotics Toolbox is given in a paper attached to this handout. Typing rtdemo gives
demonstrations which tie in with the examples in this paper. Only some of the analytical techniques have been covered so
far in this course. (Note: the Puma 560 link parameters accessed by the puma560 command are for the same frame
definitions as used in examples 2-5 and 2-6. The parameter values are in radians and metres)
2.6 Inverse kinematics
The inverse kinematic solution is the calculation of the joint variables from the end-effector position and orientation. This
is particularly important in practice, for example to allow the manipulator to pick up a component at a known position:
the end-effector can only be moved to that position if the equivalent joint variables can be calculated. For a manipulator
with n joints, the desired end-effector position would be specified by the transformation matrix 0Tn. The forward
kinematic solution would express this matrix as a function of the joint variables, e.g. 0Tn(θ1, θ2, ... θn) where all the
joint angles are the variables (i.e. all revolute joints). Thus the following equation must be solved:
0Tn(θ1, θ2, ... θn) =
(2.29)
This gives 12 equations in n unknowns. However the nine elements that form the rotation part of the matrix are
dependent; there are only three independent rotation equations. Thus there are six independent equations, which can be
solved if n = 6. However the simultaneous solution of the non-linear equations is not in general possible analytically (i.e.
there is no general closed-form solution). Hence there are two solution approaches:
Numerical solution: an iterative approach which can be very time-consuming. However it is applicable in all
cases (for n=6). There are various numerical solution techniques available; they will not be discussed in this
course. The ikine() command in the Robotics Toolbox uses a numerical approach to solve the inverse kinematics
problem.
Specific closed-form solution: in many specific manipulator configurations, simplifications can be made which
allow a closed-form solution to be found. The solution method adopted depends on the configuration.
For any solution method, there are two potential problems:
No solution. A manipulator will only have a limited workspace. If 0Tn is beyond the reach of the robot, no
solution will be found. (Note that even if a solution is found, the practical limitations on the range of rotation of
the joints may make the position impossible to achieve.) If n < 6, then the workspace is restricted to a subset of
normal three-dimensional movement: e.g. a planar manipulator cannot be asked to move outside the plane.
Multiple solutions. For many end-effector positions there are several manipulator poses which will achieve that
position (see Figure 2.15). In a redundant manipulator, i.e. n > 6, there is always a range of solutions. Numerical
solvers such as ikine() only return one solution dependent on the starting values for iteration.
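The multiple-solution behaviour is easy to see for a two-link planar arm, which has a well-known closed-form solution with an "elbow-up" and an "elbow-down" pose. The sketch below (Python/NumPy, illustrative only; this is not the Example 2-7 solution itself, and the link lengths are invented) computes both poses and checks each against the forward kinematics:

```python
import numpy as np

def ik_2link(x, y, l1, l2):
    # Closed-form inverse kinematics for a two-link planar arm: returns the
    # elbow-down and elbow-up (theta1, theta2) pairs for the target (x, y).
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target out of reach: no solution")
    sols = []
    for s2 in (np.sqrt(1 - c2**2), -np.sqrt(1 - c2**2)):
        t2 = np.arctan2(s2, c2)
        t1 = np.arctan2(y, x) - np.arctan2(l2 * s2, l1 + l2 * c2)
        sols.append((t1, t2))
    return sols

def fk(t1, t2, l1=1.0, l2=0.5):
    # Forward kinematics, used here to verify both inverse solutions.
    return (l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
            l1 * np.sin(t1) + l2 * np.sin(t1 + t2))

solutions = ik_2link(0.9, 0.6, 1.0, 0.5)   # two distinct poses, one target
```

Both returned joint-angle pairs place the tip at the same Cartesian point, which is exactly the multiple-solution situation of Figure 2.15.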
Example 2-7: A specific closed-form inverse kinematic solution. See class notes.
3. DESIGN
Certain aspects of robot manipulator design are briefly reviewed in this section. These aspects are:
selection and characteristics of actuators
selection and characteristics of sensors
end effector (gripper) design
Also information is provided on a PUMA 500 series robot as a case study in the mechanical arrangement of a
manipulator and its performance specification.
3.1 Actuators
Any industrial robot will use either an electric, hydraulic or pneumatic drive system:
Electrically actuated robots are almost all driven by DC motors. These robots tend not to be as powerful as
hydraulic robots, i.e. they move more slowly and exert lower forces, but they do exhibit good accuracy and
repeatability properties. For very low power applications stepper motors can be used.
Hydraulically actuated robots have the advantage of mechanical simplicity (few moving parts), as well as
physical strength and high speed.
Pneumatic drive systems are normally reserved for small, limited sequence pick and place applications. Lack of
stiffness (air compressibility) and control problems associated with stiction prevent their use if good accuracy is
required.
Electric actuators
D.C. Motors
The principal variation among different types of DC motors lies in the mechanism used to develop the magnetic field. In
a permanent magnet DC motor, the field is developed, as the name suggests, by permanent magnets. In such a motor, the
torque T is related to armature (rotor) current Ia by:
T = KB Ia (3.1)
where KB is a constant. A current amplifier is often used to drive the motor so that motor torque can be controlled
directly.
The magnetic field can also be generated by an electromagnet. This is most common in larger motors (more than a few
kilowatts). In electromagnet motors the torque is given by:
(3.2)
where If is the current in the field windings (stator) and Kf is a constant (Figure 3.1). In many cases the field current is
derived from the same source as the armature current. Figure 3.2 shows the two main ways to accomplish this: shunt-wound and series-wound motors.
Stepper motors
A stepper motor can change its position to any one of a number of known angles. Hence it is a digital rather than
analogue actuator, and is well suited to digital (i.e. computer) control. However stepper motors have limited power
output so are only used for light duty robotic applications.
The most common type is the permanent magnet stepper motor which has a rotor consisting of several permanent
magnets and a stator containing four windings, as shown in Figure 3.3a.
The rotor would be held in the position shown if VA is positive and VB is zero. The electromagnet formed by stator
winding A would then attract north pole 1 on the rotor, and winding A' would repel north poles 3 and 4. If VA is now
switched to zero and VB becomes positive, the rotor will rotate by one step anticlockwise, so that north pole 5 is directly
opposite winding B. As there are 90° between each winding and 360°/5 = 72° between each rotor pole, one step is
90° − 72° = 18°. By switching the winding voltages in the correct sequence the rotor will continue to step around 18° at a
time, as shown in Figure 3.3b.
For the motor shown there are 20 steps per revolution. With more rotor poles there would be more steps per revolution,
allowing finer control over angular position. A typical value for a good stepper motor would be 200 steps per
revolution (i.e. each step is 1.8°).
Position control of a stepper motor. Unlike other motors, the position of a stepper motor can be controlled without using
a position sensor. As long as the starting position is known, the motor can be stepped around to any of its possible
positions. A stepper motor controller is usually used to switch the currents in the windings appropriately, and this
controller can be driven by a computer using two digital signals: a direction signal (e.g. 0V for clockwise, 5V for
anticlockwise), and a step signal (a series of pulses, each pulse causing the motor to move by one step).
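This step/direction interface can be sketched as a small calculation (Python, purely illustrative; the motors and angles are invented, not taken from any particular controller): a demanded rotation is converted into a direction and a whole number of steps, and the angle actually reached is quantised to the step size.

```python
def steps_for_angle(target_deg, steps_per_rev=200):
    # Convert a demanded shaft rotation into (direction, step count, angle
    # actually achieved) for a step/direction stepper drive.
    step_deg = 360.0 / steps_per_rev          # 1.8 deg for a 200-step motor
    n = round(abs(target_deg) / step_deg)     # nearest whole step
    direction = "cw" if target_deg >= 0 else "ccw"
    achieved = n * step_deg                   # quantised angle reached
    return direction, n, achieved

move = steps_for_angle(90.0)          # quarter turn of a 200-step motor
small = steps_for_angle(-18.0, 20)    # one step of the 20-step motor of Figure 3.3
```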
Hydraulic Actuators
Hydraulic systems make use of a virtually incompressible fluid, usually oil, which is forced under high pressure into a
cylinder. The cylinder contains a piston which moves in response to the pressure of the fluid. Both rotary and telescopic
(prismatic) actuators are available and are widely used for high power robot applications.
High pressure fluid (typically at 100 bar to 300 bar) is supplied by a hydraulic power supply, which consists of a pump, a
relief valve to regulate the pressure, and an accumulator to iron out pressure ripples. The hydraulic actuation system
itself consists of a cylinder, and a valve to control the direction and rate of flow.
The diagram below shows a hydraulic cylinder controlled by a spool valve. The horizontal position of the spool (x) can
be changed (e.g. by a solenoid) to direct flow into either end of the cylinder.
The fundamental equation which describes the valve characteristic is called the orifice equation:
(3.3)
where Q is the flowrate through the valve and ΔP is the pressure drop across the valve in the direction of flow. Kv is a
constant for the valve.
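A hedged numerical illustration: the precise form of equation (3.3) is not reproduced above, so the sketch below (Python, illustrative only) assumes the common form Q = Kv x √ΔP, with an entirely made-up valve constant, spool displacement and piston area, and computes the flow through the valve and the resulting piston velocity v = Q/A:

```python
import math

def valve_flow(x, dP, Kv=2e-8):
    # Orifice-equation sketch, assuming the common form Q = Kv * x * sqrt(dP).
    # x: spool displacement (m); dP: pressure drop (N/m^2); Kv: valve
    # constant (value here purely illustrative).
    return Kv * x * math.sqrt(dP)

# Flow into the cylinder for a 100 bar drop across the valve, then the
# piston velocity for an assumed piston area A = 1e-3 m^2 (10 cm^2).
Q = valve_flow(x=1e-3, dP=1e7)
v = Q / 1e-3
```

Note the square-root characteristic: doubling the pressure drop increases the flow by a factor of √2, not 2, which is what makes hydraulic valve behaviour non-linear.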
Example 3-1. Characteristics of the hydraulic actuator of Figure 3.4 (for positive x). See class notes
Symbol     Meaning              Units used in calculation
Ps         Supply pressure      N/m2
P1 and P2  Cylinder pressures   N/m2
Q1 and Q2  Volume flowrates     m3/s
           Piston velocity      m/s
A1 and A2  Piston areas         m2
Kv         Valve constant
Pneumatic Actuators
In a pneumatic actuator, a compressible fluid, air, is used to drive a piston. As in the case of hydraulic actuators, an
electrical signal controls a valve which, in turn, controls the flow to the cylinder.
In the simplest pneumatic control system, a solenoid-operated on-off valve directs air at maximum flowrate to the
cylinder. To return the piston, this supply valve is closed and an exhaust valve opened. The piston is returned by a spring,
or, if a double-acting cylinder is used, a constant pressure on the other side will cause a return. For each movement the
piston is only halted when it reaches a mechanical end-stop. This is called a bang-bang control scheme.
Such a simple control method is ideal for grippers; the pneumatic piston supply closes the gripper until the gripping force
equals the piston force.
In addition to their use in grippers, pneumatic actuators are often used in simple robots. A totally pneumatic robot can be
sequenced through a complex series of operations by a simple controller which opens and closes valves in sequence.
Such robots have mechanical end-stops which are adjusted to suit the particular application, and they are often used as
pick-and-place robots to move components between two known positions. Advantages include:
High speed and relatively high power-to-weight ratio. (However, due to sealing problems working pressures are
limited to approx. 10 bar, giving much lower forces than hydraulics.)
Low cost.
Simplicity of control.
Non-contamination of the work space: no oil leaks. (However, pneumatic exhausts are noisy.)
Unlike hydraulic oil, air is highly compressible. Hence pneumatic actuators are not very stiff, so that maintaining a
constant position under varying loads is difficult. Also seal friction hinders high precision position control. Hence
pneumatics are not normally used for servo-controlled robots.
The two (quadrature) encoder output signals are normally converted into pulses which are then counted, so the number
of increments per revolution is 4 times the number of lines on the disc.
3.4 End-effectors
Some applications dictate that the end-effector is a specific tool, such as a welding torch, riveting gun, or paint sprayer.
However, many applications require the manipulator to pick up objects, so the end-effector has to be some sort of gripper
or robot hand. Two-fingered grippers like those shown in Figure 3.6 are common. Even for simple objects, gripper
design has to be carefully considered so that small misalignments between object and gripper are accommodated. Gripper
design becomes much more complex if a wide range of objects has to be handled by the same device. Anthropomorphic
(human-like) dexterous robot hands, which have several multi-jointed fingers, have been researched extensively over the
last 20 years to address this problem, but a reliable control strategy has yet to be developed.
Some manipulators must wield a range of different tools during their working day, in which case a tool change system is
used. A tool change unit is a device which holds the various tools, and allows the manipulator to automatically engage
any required tool, and deposit it back in the unit when finished. The tools have a common mechanical interface for
mating with the manipulator. This system also allows a manipulator to use different grippers, thus greatly increasing the
range of objects which can be handled.
Gripper design issues
There are three main factors to be considered in gripper design:
Workpiece related features
1. Geometric form or shape of item to be moved or assembled.
2. Mass of item, position of centre of gravity and moment of inertia.
3. Machined or rough surfaces - errors in positioning.
4. Type of material - hard or soft.
5. Changes in shape between loading and unloading.
Robot related features
1. Acceleration of robot arm - relevant to gripping forces.
2. Load limitations - actuator torques and flexure of arm.
3. Gripper is part of payload: mass is an important consideration.
4. Operating time of gripper is part of overall cycle time.
Workplace related features
1. Obstructions to be negotiated
2. Approach direction.
3. Environment - hot, corrosive, explosive, radioactive or underwater.
In the general case, all the variables calculated are 3x1 vectors. In the case of a planar manipulator the linear motions and
forces are 2x1 vectors and the angular motions and torques are scalars.
Example 4-1 Recursive Newton-Euler inverse dynamics solution for two-link jointed planar manipulator (zero gravity).
See class notes
The general Recursive Newton-Euler formulation for an n-link planar jointed manipulator is given below. It is assumed
that each link centroid lies on the X axis of the link frame (i.e. Y component is zero)
Outward recursions
For link i = 1 to n, calculate:
1. Angular velocity of link:
(4.1)
(4.2)
(4.3)
(4.4)
(4.5)
(4.6)
Inward recursions
For link i = n to 1, calculate:
1. Force exerted on link i by link i-1
(4.7)
(4.8)
Summary of notation
Note: all linear and angular velocities and accelerations are measured relative to a fixed frame (such a frame is often also
known as an earth or world frame). However linear velocities and accelerations, even though measured relative to a fixed
frame, can be expressed in any frame like any other vector. In the planar case angular velocities/accelerations are scalar
so this issue does not arise for angular motion.
Linear accelerations/forces: where Q represents a linear quantity:
(4.9)
Example 4-2 Joint torques for two-link jointed planar manipulator with gravity and external end-effector force. See class
notes
Values 6 to 15 are obligatory inertial parameters. Note that the mass products of inertia are terms which only
appear in three-dimensional rigid body dynamics.
Values 16 to 20 are optional actuator characteristics.
Note that rne() assumes gravity is present, acting in the negative Z0 direction. Entering help rne at the Matlab prompt will
give more information; help dyn will provide a reminder of the dyn matrix format.
(4.12)
This set of equations has to be repeated for as many time steps Δt as necessary to simulate the required period of
operation of the manipulator.
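The idea of stepping the equations forward in time can be sketched for the simplest possible case: a single joint of constant inertia J driven by a constant torque τ (all values invented). This is not the full manipulator simulation of (4.12), just the integration pattern repeated once per time step Δt:

```python
# Forward (Euler) simulation of a single joint, J * thetadd = tau,
# stepped over time increments dt; J, tau and dt are made-up values.
J, tau, dt = 0.5, 1.0, 1e-3     # kg m^2, N m, s
theta, omega = 0.0, 0.0         # joint position (rad) and velocity (rad/s)
for _ in range(1000):           # simulate 1 second of motion
    alpha = tau / J             # joint acceleration from the dynamics
    omega += alpha * dt         # integrate acceleration into velocity
    theta += omega * dt         # integrate velocity into position
```

After 1 s the analytical answer is θ = ½(τ/J)t² = 1.0 rad; the simulated value differs only by the discretisation error, which shrinks as Δt is reduced.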
4.3 Control
Single joint control
In many industrial manipulators, each joint is driven using a separate, independent control system. For example, consider
joint 1 of a PUMA 560 manipulator, which is driven by a DC servomotor through a gear train, as depicted in Figure 4.2.
A simple analysis of this drive system proceeds as follows. (Note: this is just an example; when tackling problems of this
type it is always best to go back to first principles rather than using the equation (4.15) derived below).
The joint torque τ will accelerate the joint and the manipulator above it; these have inertia J:

τ = J θ̈  (4.13)
Torque delivered by the motor τm is used to provide the joint torque, to overcome any unmeasured disturbance torque τd
(which could include friction) and to accelerate the motor itself (inertia Jm, acceleration G θ̈):

τm = τ/G + τd + Jm G θ̈  (4.14)
where G is the gear ratio (motor speed over joint speed). Substituting equation (4.13) into equation (4.14):

τm = (J/G) θ̈ + τd + Jm G θ̈

or

G (τm − τd) = (J + Jm G²) θ̈ = JE θ̈  (4.15)
Note that the term JE, representing J + JmG², can be interpreted as the effective inertia at the joint including motor inertia.
From equation (3.1), motor torque is proportional to current. If a current amplifier is used, then the current is in turn
proportional to the control signal u. Thus if Km is a constant:

τm = Km u  (4.16)
Equations (4.15) and (4.16) constitute the model of the plant which is included in the control system block diagram of
Figure 4.3.
A simple proportional position control system is shown in Figure 4.3. There is only one controller parameter Kp to
choose, and this does not give much freedom to alter the dynamics of the system in a desirable manner. Instead a
Proportional Derivative (PD) or Proportional Integral Derivative (PID) controller could be used. The scheme shown in
Figure 4.4 is quite common in robot control; it is a variant of PD control which includes feedforward.
This scheme requires position and velocity feedback; however if joint velocity is not measured it can be generated by
differentiating the position feedback signal. The feedforward filter is used to determine demand velocity and acceleration
from the demand position, but often this filter is not required because the velocity and acceleration values are available
from a trajectory generation routine which computes the demand position profile (trajectory generation will be discussed
later in the course).
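As a rough numerical illustration of single-joint control, the sketch below simulates a geared joint under position-plus-velocity feedback against a constant disturbance torque. It assumes a plant of the form G(τm − τd) = JE θ̈ with τm = Km u; all numerical values (inertias, gear ratio, gains, disturbance) are invented for the example, not PUMA data.

```python
# Hypothetical single-joint simulation: plant G*(tau_m - tau_d) = JE*qddot,
# motor torque tau_m = Km*u, controller u = Kp*(q_ref - q) - Kv*qdot.
def simulate_pd(q_ref=1.0, t_end=3.0, dt=1e-3):
    J, Jm, G, Km = 0.5, 2e-4, 60.0, 0.1   # invented link/motor parameters
    JE = J + Jm * G**2                    # effective inertia at the joint
    Kp, Kv = 80.0, 15.0                   # controller gains (tuned by eye)
    tau_d = 0.2                           # constant disturbance torque
    q, qdot = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        u = Kp * (q_ref - q) - Kv * qdot  # position feedback + velocity damping
        qddot = G * (Km * u - tau_d) / JE
        qdot += qddot * dt                # simple Euler integration
        q += qdot * dt
    return q
```

Because only proportional position feedback opposes the disturbance, the joint settles near 0.975 rather than 1.0; removing this steady-state error is one motivation for the PID and feedforward variants described above.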
This signal is an estimate of the motor torque required to follow the demanded trajectory; if the position and velocity
errors are zero, it alone provides the required torque.
Example 4-3 Derivation of transfer function for PD controller with feedforward (Figure 4.4). See class notes
Controlling the whole manipulator
In order to design a separate feedback controller for each joint the dynamic interaction between joints has to be
neglected. For joint 1 of the PUMA, the biggest problem is that the inertia J varies considerably as other joints move. For
controlling some of the other joints the interaction is even more severe. Thus a method of accounting for the dynamics of
the whole manipulator is required. The computed torque method of Figure 4.5 is such a method.
The computed torque controller has a similar structure to the feedforward PD approach. However it now uses the inverse
dynamics solution of equation (4.10) to calculate the motor torques required to perform the desired movement. The
inverse dynamics calculation uses the following parameters:
measured joint positions and velocities
the signal shown in the block diagram, which can be considered as the desired acceleration; in fact it is exactly the
same as the demand acceleration when there is no position or velocity error.
Note that the inverse dynamics equations used must give the motor torques rather than the joint torques. The equations
will include the motor inertia terms, and any gear ratio G between motor and joint rotation.
Figure 4.5 represents the whole controller, i.e. for all joints, so that each variable is now a vector of n elements, and each
gain block actually represents a set of n values for scaling the variables. Figure 4.6 shows that each gain is now in fact a
matrix, with the n values forming the leading diagonal (n = 2 in this case).
The major difficulty with the computed torque method is that significant computing power is required to perform the
inverse dynamics calculation sufficiently quickly. Hence few current industrial manipulators use this approach. Most use
individual joint controllers.
Example 4-4 Computed torque control method for single link manipulator. See class notes
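A minimal sketch of the computed torque idea for a single link, in the spirit of the example above. A gravity-loaded link with dynamics τ = I θ̈ + mgl cos(θ) is assumed as the plant; the controller uses the same (here exact) inverse dynamics model, so the error dynamics collapse to a linear system set by Kp and Kv. All parameter values are invented.

```python
import math

# Computed torque control of one gravity-loaded link (hypothetical parameters).
def computed_torque_demo(T=2.0, dt=1e-3):
    I, mgl = 1.2, 5.0        # link inertia and gravity moment (invented)
    Kp, Kv = 100.0, 20.0     # error-correction gains
    q, qdot = 0.0, 0.0
    for k in range(int(T / dt)):
        t = k * dt
        # demand trajectory and its derivatives
        qr, qrd, qrdd = math.sin(t), math.cos(t), -math.sin(t)
        # "desired acceleration" signal of the block diagram
        a = qrdd + Kv * (qrd - qdot) + Kp * (qr - q)
        tau = I * a + mgl * math.cos(q)        # inverse dynamics model
        qddot = (tau - mgl * math.cos(q)) / I  # plant response
        qdot += qddot * dt
        q += qdot * dt
    return abs(q - math.sin(T))                # final tracking error
```

With an exact model the gravity terms cancel and the tracking error decays according to the chosen gains; model error in a real manipulator would leave a residual.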
5. PROGRAMMING
5.1 Introduction
Robot programming refers to the process of creating a program to drive a robot through a series of movements to carry
out a particular task. Once created the program would be executed whenever the task had to be performed. Typically the
task would be a repetitive one, e.g. fitting windscreens on a car production line, and so the program would execute over
long periods of time.
Four methods of robot programming exist, which can be split into on-line and off-line methods:
On-line or teach-by-showing programming:
o Drive-through teaching consists of the human operator controlling the robot using keys on a teach pendant, e.g.
there may be two keys (a plus and a minus) to move each joint. Once an important position is reached it can be
recorded by hitting a record key. In this way a series of positions can be recorded, and then played back as required.
o Lead-through teaching is based on the same principle as drive-through, except that the end-effector is physically
dragged by the operator rather than being driven from the teach pendant. This is often used for paint spraying
robots, for example.
Off-line programming:
o Robot programming language. High level computer languages are available which are specifically designed for
robot control, e.g. VAL II from Unimation and AML from IBM. End-effector locations can be programmed by
entering co-ordinate values. These languages offer considerably more flexibility than on-line methods.
The most common methods in current industrial use are drive-through teaching, and using a programming language. As
an example, the implementation of these two methods for a Unimation PUMA 560 robot will be described. Unimation
developed its own robot programming language VAL (Variable Assembly Language) in 1979, since when it has been
regularly updated and enhanced. Now called VAL II, it is a high level interpreted language whose programs consist of a
sequential series of instructions. As is seen in the next section, VAL II is also involved in drive-through teaching,
because the recorded locations are stored as VAL II instructions.
Typical hardware required for robot programming and control is shown in Figure 5.1. The Controller contains
microprocessor and interface cards (Figure 5.2), and the operator communicates with the controller via the VDT (visual
display terminal, containing keyboard, disk drive and monitor) and the teach pendant.
In TOOL mode, T represents the position and orientation of {6new} relative to {6old}, so:

0T6new = 0T6old T  (5.2)
In WORLD mode, T represents the movement which the end-effector has to undergo within the fixed world frame, so:

0T6new = T 0T6old  (5.3)
Thus the inverse kinematic calculation to find the new joint angles should be performed on 0T6new, found from equation
(5.2) or (5.3) as appropriate.
Playback
The teach pendant records positions by forming a VAL II program. Each time the record key is hit one line of the
program is created. The pick and place example would create the following program called prog:
The program line numbers correspond to the numbers in brackets after the operator's actions listed at the start of this
section. Thus the action which caused each line to be created can be identified. Note that:
the MOVET instruction is a move instruction created by the teach pendant. Other move instructions will be
introduced shortly.
eight robot positions are stored in variables location1 etc.; the last number of the variable name is automatically
incremented for each new position.
the second argument to the MOVET command is the gripper opening. This can indicate a variable finger
separation, but in our case a simple binary pneumatic gripper is assumed which can just be driven open or
closed, represented by 25.4 and 0.0 respectively.
To playback the movements, the program can be executed by typing at the VDT: EXECUTE prog
Similarly moving the robot to C, with gripper pointing down, and typing: HERE place, would record variable place. If
the locations pick and place are known heights above the work surface, then the instruction APPRO (meaning approach),
can be used to lower the end-effector the prescribed distance i.e. move along the tool Z axis. Similarly there is an
instruction DEPART which raises the end effector (moves in the negative tool Z axis).
Example 5-1 Write a VAL II program off-line to perform pick and place (Figure 5.4). See class notes
Table 5.1 summarises a few VAL II program instructions. There are many more which are not covered in this course.
Note that the SET instruction can be used to enter robot locations without using the teach pendant; locations are either
specified as a transformation of the end-effector in the world frame, or as joint angles. The transformation is given in
terms of X,Y and Z co-ordinates, and three angles O, A and T, defined in Figure 5.7, which uniquely define the rotation
matrix.
Table 5.2 summarises some so-called monitor commands; these are commands which are given directly to the operating
system, rather than forming part of a program.
MOVE <location>      Programmed move to <location>
APPRO <distance>     Move <distance> along the tool Z axis (lowering the end-effector)
DEPART <distance>    Move <distance> in the negative tool Z direction (raising the end-effector)
OPEN                 Open gripper
CLOSE                Close gripper
EXECUTE <program>    Executes program
TEACH <location>     Record a series of locations from the teach pendant
HERE <location>      Record the current location in <location>
SPEED <percent>      Set speed as a percentage of full speed
Joint co-ordination
A common controller algorithm for moving from one location to another would be to:
Determine joint angles for start location
Determine joint angles for end location
Determine duration for complete movement (normally dependent on the joint that has to move the furthest).
For each joint: determine a trajectory, consistent with Figure 5.6, which would implement the move with the
correct duration
Move the six joints simultaneously according to the individual trajectories. All joints should complete the
movement at the same time.
As trajectory generation occurs at individual joint level, this algorithm is known as joint-interpolated movement. It is
computationally efficient, but does not result in straight line movement of the end-effector. This is illustrated in Figure
5.9. Figure 5.10 shows joint interpolated movement for the pick and place task. As the MOVE, APPRO and DEPART
instructions use joint interpolated movement, our program of Section 5.3 would in fact produce this wiggly path.
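The algorithm above can be sketched as follows, using straight linear interpolation between the start and end joint vectors for simplicity (a real controller would use a smooth velocity profile such as that of Figure 5.6; vmax and the joint values below are arbitrary):

```python
# Joint-interpolated move: the common duration is set by the joint that
# has to move the furthest, and all joints finish simultaneously.
def joint_interpolated(q_start, q_end, vmax, n=10):
    T = max(abs(b - a) for a, b in zip(q_start, q_end)) / vmax
    path = []
    for k in range(n + 1):
        s = k / n  # normalised time, 0..1
        path.append([a + s * (b - a) for a, b in zip(q_start, q_end)])
    return T, path
```

For example, moving from [0, 0] to [90, 30] degrees at 30 deg/s gives a 3 s move in which the second joint runs at only 10 deg/s so that both joints finish together.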
However straight line motion can be produced if the trajectory generation is carried out in Cartesian space. In other
words the trajectory of Figure 5.8 is applied to the linear co-ordinates and angles which represent the end-effector
location 0T6. These trajectories are then sampled to produce a set of 0T6 transformations spanning the whole movement;
performing the inverse kinematics calculation on each one of these gives a set of joint angle vectors for the manipulator
to follow. As the inverse kinematics have to be solved repetitively, it is a computationally intensive algorithm. In VAL II
there are variants of many of the movement instructions which give straight-line movement, e.g.:
MOVES <location>
APPROS <distance>
DEPARTS <distance>
(5.4)
(5.5)
(5.6)
(5.7)
Substituting equation (5.4) into (5.7):
(5.8)
(5.9)
Given a required duration for the move, equation (5.9) can be solved for velocity v. Alternatively, v may be specified,
and the duration must be found. These calculations assume that the acceleration a is a known value.
Example 5-2 Calculating constant velocity value for a linear trajectory with parabolic blends. See class notes.
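Assuming equation (5.9) takes the standard form t = (q2 − q1)/v + v/a for a linear trajectory with parabolic blends (this form is an assumption, so check it against the class notes), the constant velocity v for a required duration t is the smaller root of a quadratic:

```python
import math

# Solve v**2/a - v*t + (q2 - q1) = 0 for the blend velocity v.
# The smaller root is taken so that the two parabolic blends fit within t.
def blend_velocity(q1, q2, a, t):
    d = q2 - q1
    disc = t * t - 4.0 * d / a
    if disc < 0:
        raise ValueError("move cannot be completed in time t at acceleration a")
    return a * (t - math.sqrt(disc)) / 2.0
```

For q2 − q1 = 1, a = 8 and t = 1 this gives v of about 1.17; substituting back, d/v + v/a recovers the 1 s duration.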
Polynomial trajectories
There are a variety of other methods for calculating trajectories, for example using third or fifth order polynomials. A
fifth order polynomial has the form:

q(t) = a0 + a1 t + a2 t² + a3 t³ + a4 t⁴ + a5 t⁵  (5.10)

Its derivatives are:

q̇(t) = a1 + 2 a2 t + 3 a3 t² + 4 a4 t³ + 5 a5 t⁴  (5.11)

q̈(t) = 2 a2 + 6 a3 t + 12 a4 t² + 20 a5 t³  (5.12)
Applying equations (5.10), (5.11) and (5.12) to both the start and end of the trajectory gives six equations in the six
unknown coefficients a0 to a5; hence a solution can be found. Normally the start and end velocities and accelerations will
be zero. If q1 is the start angle at t=0, (5.10) to (5.12) give:

a0 = q1  (5.13)

a1 = 0  (5.14)

a2 = 0  (5.15)
If q2 is the end angle reached at t=t2, and making use of (5.13) to (5.15), equations (5.10) to (5.12) give:

q2 = q1 + a3 t2³ + a4 t2⁴ + a5 t2⁵  (5.16)

0 = 3 a3 t2² + 4 a4 t2³ + 5 a5 t2⁴  (5.17)

0 = 6 a3 t2 + 12 a4 t2² + 20 a5 t2³  (5.18)

Equations (5.16) to (5.18) can be solved for the remaining three coefficients a3, a4 and a5.
Fifth order polynomial trajectories have the advantage that jerk (the derivative of acceleration) remains low. The sudden
changes in acceleration in the linear with parabolic blends method give high jerk values. However, given that much of
the movement can occur at maximum velocity, the parabolic blends method normally has the advantage of shorter movement times.
The Matlab Robotics Toolbox command jtraj() generates a trajectory between two joint angle vectors using a fifth order
polynomial.
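For the usual rest-to-rest case (a0 = q1, a1 = a2 = 0), solving the three end-point equations gives the well-known closed-form coefficients used below. This is a sketch in the spirit of jtraj(), not its actual implementation:

```python
# Quintic (fifth order) trajectory from q1 to q2 over duration T,
# with zero velocity and acceleration at both ends.
def quintic(q1, q2, T):
    d = q2 - q1
    a3, a4, a5 = 10 * d / T**3, -15 * d / T**4, 6 * d / T**5
    def q(t):
        return q1 + a3 * t**3 + a4 * t**4 + a5 * t**5
    def qdot(t):
        return 3 * a3 * t**2 + 4 * a4 * t**3 + 5 * a5 * t**4
    return q, qdot
```

The motion passes through the midpoint of the angle range at the midpoint of the duration, with zero velocity at both ends.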
6. VISION SYSTEMS
6.1 Introduction
A robot vision system is a sophisticated optical sensor which has the potential to enable a robot to respond intelligently
in an uncertain environment. Common uses are:
identification of objects in the robot's working environment
estimation of object position and orientation
tracking of moving objects
identification of component defects
Most current commercial vision systems can only operate in environments with the following constraints:
the number of objects that need to be identified is limited.
number of objects in a scene simultaneously is limited.
objects do not overlap or touch.
objects viewed from one known direction (normally from above)
objects illuminated so as to obtain high dark-to-light contrast.
Computer data processing algorithms are the key to successful vision system operation. These algorithms fall into three
related fields (Figure 6.1):
1. Image processing - the raw image is improved in some way; this can include smoothing and edge detection.
2. Object or pattern recognition - the output of this process is a description of the image based on a knowledge of the
objects expected to be found in the image.
3. Scene analysis - concerned with the transformation of simple features into abstract descriptions relating to objects
that cannot be simply recognised based on pattern matching. It deals extensively with three-dimensional image
understanding (texture, 3D shape, etc.). Artificial intelligence techniques are often used. Scene analysis is outside the
scope of this module.
The charge is measured periodically, with sample interval ts, and also reset to zero each sample time. Thus the charge
detected is a measure of the average light intensity on the element during the previous sample interval.
The most common way in which to accomplish this "matrix read" is in a top-to-bottom, left-to-right scanning process
called raster scanning (Figure 6.2). While the charge in an element at the bottom of the matrix is being measured and
neutralised, charge is once again building up at the top. Since charge continues to accumulate over the entire surface of
the light sensitive matrix at all times, it is necessary to return immediately to the top of the matrix and begin scanning
again.
Vacuum-tube cameras
Sometimes vacuum-tube TV cameras, also known as scanning photomultipliers, are used in vision systems, although
these are now becoming less common. The vidicon tube is a well known example. These cameras capture the image in
the following way:
each complete recorded image - called a frame - consists of a raster scan with 625 lines.
25 frames are recorded per second
Consequently it takes 64 µs to scan one line. This time includes not only the active video signal but also the retrace
periods, approximately 18% of the line time; the active video time is 52 µs per line. Figure 6.3 shows the output of a TV
camera as it scans three successive lines; the raster scanning process effectively converts a picture from a two
dimensional signal to a one dimensional signal where voltage is a function of time.
To form a digital image in computer memory the voltage signal must be fed into an analogue to digital converter (ADC);
this will sample the signal at a fixed frequency. A sample frequency of 9.84MHz is common, giving 512 pixels per active
part of a line. Figure 6.4 illustrates the process.
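The timing figures quoted above can be checked directly:

```python
# 625 lines at 25 frames/s gives the 64 us line time; removing ~18% retrace
# leaves ~52 us of active video, and sampling at 9.84 MHz gives ~512 pixels.
line_time = 1.0 / (625 * 25)          # 64 us per line
active_time = line_time * (1 - 0.18)  # about 52 us active video
pixels_per_line = 9.84e6 * 52e-6      # about 512 samples per active line
```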
Experiments have shown that at a given light level, the human eye can discern only about 30 grey levels. However, with
a change in average light intensity, the eye adapts by opening or closing the iris, giving a greater overall range. Thirty
shades of grey would indicate that 5 bits is adequate; the use of 8 bits allows a limited emulation of the effects of the iris.
The distance between one pixel and the next must be sufficiently small to prevent aliasing. To successfully capture an
image consisting of a sinusoid with a known spatial frequency (cycles/m), the resolution (pixels/m) of the vision system
must be at least twice that frequency. Figure 6.5 shows that the sinusoid can appear to be at a much lower spatial
frequency if the resolution is too low.
Smoothing
Most raw images will be affected by noise. Spurious but substantial inaccuracy in the grey level of individual randomly
distributed pixels gives a speckled effect known as salt-and-pepper noise. Smoothing or filtering the signal is often
required. Local averaging is a common technique. This replaces the pixel value at the centre of a square window with the
average of all the values in the window.
Example 6-1. Apply local averaging to the image in Figure 6.7 using a 3x3 window. See class notes
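A minimal version of local averaging on a grey-level image held as a list of lists; border pixels are simply left unchanged here, since the window would overhang the image (border handling is a design choice, not part of the method itself):

```python
# Replace each interior pixel by the mean of the w x w window around it.
def local_average(img, w=3):
    r = w // 2
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]  # copy; borders keep their original values
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            window = [img[i + di][j + dj]
                      for di in range(-r, r + 1) for dj in range(-r, r + 1)]
            out[i][j] = sum(window) / (w * w)
    return out
```

A single salt-and-pepper pixel of value 9 in a 3x3 zero image is smoothed down to 1.0.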
Thresholding will produce a binary image, from which the outline of an object can be detected. The pixels which form
this outline can be identified by contour following, which consists of:
1. searching for a first edge point (a point between a 0 and a 1)
2. moving to the next edge point using the following rules:
3. continuing moving around the object until arriving at the first edge point again
Rij = √[(Gij − Gi+1,j+1)² + (Gi+1,j − Gi,j+1)²]  (6.1)

where Gij is the grey level of pixel (i,j). The 2 x 2 pixel window used by the operator is shown in Figure 6.11.
Figure 6.12 shows an original image (grey levels Gij) and the corresponding differentiated image (square of the operator, Rij²).
The function of the threshold detector is to decide which elements of the differentiated image should be considered as
edge candidates. An edge is present if Rij > T, where T is a chosen threshold level. For values of Rij greater than T, the
matrix element is set to one; otherwise it is set to zero. This is shown in Figure 6.13.
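The differentiation-and-threshold stage can be sketched as below. The 2 x 2 operator is assumed here to be a Roberts-type cross difference (an assumption about the exact form of equation (6.1)); the thresholding step itself follows the text directly:

```python
# Squared 2x2 cross-difference over the image (assumed form of the operator).
def operator_squared(G):
    rows, cols = len(G), len(G[0])
    return [[(G[i][j] - G[i + 1][j + 1]) ** 2 + (G[i + 1][j] - G[i][j + 1]) ** 2
             for j in range(cols - 1)] for i in range(rows - 1)]

# Binary edge map: 1 where the differentiated value exceeds threshold T.
def threshold(R, T):
    return [[1 if v > T else 0 for v in row] for row in R]
```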
Once the edge points are detected, these must be connected to find the lines that define the image. The iterative endpoint fit is a typical method used for finding a line. This method finds the most extreme edge points in a matrix window
and introduces a line to connect these end points, as shown in Figure 6.14a. It then looks to see if edge points (binary 1s)
fall on the line. If not, it chooses the most distant point from the line and replaces the single line with two lines, as shown
in Figure 6.14b. The process is continued until a series of line segments is found to match the edge-point pattern, as in
Figure 6.14c. The edge line segments can be stored as vectors.
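The iterative endpoint fit described above can be sketched recursively: join the two extreme points, find the edge point furthest from that line, and split there, repeating until every point lies within a tolerance (the same splitting idea as the Ramer-Douglas-Peucker algorithm):

```python
import math

# Fit a polyline to ordered edge points; returns indices of the kept vertices.
def endpoint_fit(pts, tol=1.0):
    def dist(p, a, b):  # perpendicular distance of p from the line a-b
        (px, py), (ax, ay), (bx, by) = p, a, b
        length = math.hypot(bx - ax, by - ay)
        if length == 0:
            return math.hypot(px - ax, py - ay)
        return abs((bx - ax) * (ay - py) - (ax - px) * (by - ay)) / length

    def fit(i, j):
        dmax, k = 0.0, None
        for m in range(i + 1, j):
            d = dist(pts[m], pts[i], pts[j])
            if d > dmax:
                dmax, k = d, m
        if dmax > tol:  # worst point too far: replace one line with two
            return fit(i, k)[:-1] + fit(k, j)
        return [i, j]

    return fit(0, len(pts) - 1)
```

The returned line segments can then be stored as vectors, as the text notes.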
8. MOBILE ROBOTS
8.1 Introduction
Ground-based mobile robots have so far been developed for two main application areas:
planetary exploration
automatic transportation in factories
Robotic planetary rovers are the key to future space exploration. The expense and safety implications of sending humans
to other planets are still very adverse. The focus of current activity is the design and control of rovers suitable for
exploring Mars. NASA's Pathfinder Mission, launched in December 1996, has demonstrated the use of a rover to explore
the surface of Mars for the first time. This is discussed in more detail in Section 8.2.
Automatic Guided Vehicles (AGVs) are currently available for transporting materials and components in factories. They
follow marked routes around the factory floor, and have some sensing capability to detect obstacles. The current
developments in factory-based and service mobile robots are described in Section 8.3. This section highlights the aspects
of intelligence which need to be present, such as a navigational ability, before mobile robots can become truly
autonomous.
The lecture slides cover typical examples with regard to mobile robots.
8.2 Space robotics: planetary rovers
Autonomous rovers will play an important role in planetary exploration. NASA's Mars Surveyor Programme consists of
a scientific survey of Mars over the next 10 years using a series of rovers. The rovers must move around on the surface of
the planet to conduct experiments on geophysical, meteorological and biological conditions. The first rover, Sojourner,
touched down in mid 1997 as part of the Pathfinder Mission.
Some particular features needed in planetary rovers are:
an ability to move over rough terrain with high stability to carry scientific instruments safely;
mechanical structure and locomotion have to be robust - maintenance and repair are not possible;
for full functionality rovers must be fitted with robot arms to handle objects, collect samples etc.;
specialist sensors for perception in the Mars environment;
an on-board power source.
Robots for planetary exploration require a degree of intelligence for several reasons:
the robot has to move in a natural, unstructured and a priori unknown environment;
much of the information on the environment has to be acquired and interpreted using the robots own sensors;
there is no possibility of continuous interaction between humans and the robot because of the significant delays
in communication with Earth.
There are two approaches to rover locomotion:
wheels, usually with large suspension displacements. These rovers are fast and robust but can only cross
relatively smooth ground.
Legs, i.e. walking robots. These tend to be slow but can cross very rough terrain
Typical examples are shown in the slides.
Semi-autonomous navigation
The NASA Pathfinder Mission uses semi-autonomous rover control. The long time delay (possibly 30 minutes for a
there-and-back communication to Mars) precludes direct teleoperation.
Satellites around Mars send images to Earth. From these, it is possible to create a detailed topographic map of the planet
surface. The rover, which carries a pair of mini-cameras, can send stereo pictures to Earth. A human operator compares
these stereo pictures with the topographic map information to determine vehicle location and heading, and using a pair of
3DOF joysticks, directs the rover along a safe path.
The commands are then transmitted to the robot for execution. The Rover autonomously tries to reach the destination by
using sensor-based reactive behaviours of varying complexity. These might include obstacle avoidance, or searching for
specific features. Active force control is used to accommodate imprecise knowledge of the terrain.
Note that there is no need to move fast on Mars: the top speed of the current rover is 7 mm/s, so the control of the
rover need only be based on its kinematic model.
Autonomously traverse 100m of rough terrain: terrain within sight of the lander.
Autonomously traverse 100m of rough terrain over the horizon with the return to lander.
Autonomously traverse 1km of rough terrain with execution of selected manipulation tasks.
Complete science/sample acquisition and return to lander with over the horizon navigation
Rover Technology
From the mechanical point of view rover research includes:
Vehicle stability
Legged versus wheeled vehicle mobility
Handling and grasping dexterity
The miniaturization of rovers, reducing mass and power consumption, is also a major research thrust.
This has led to the classification of rover designs by mass. Rovers over 20 kg are said to be full size; lighter rovers are
called microrovers.
Other active areas of research are:
Microrovers
The cost of a full size rover mission is several billion dollars. The light weight and compact volume of microrovers
allows a low flight cost. Microrovers will be able to autonomously traverse many kilometres on the surface of Mars,
perform scientist-directed experiments, and return relevant data back to Earth. Present microrover
technology has several limitations precluding more ambitious science-rich missions. Current microrovers have very
limited traverse capability (tens of meters), have limited science packages on board, are designed for short-term (10-day)
missions and require repetitive ground control. Figure 8.4 shows some prototypes.
The specifications and features of Sojourner, the only rover that has actually landed on Mars, include:
11.5kg mass
The size of a milk crate
Each wheel is independently driven (2000:1 gear ratio). Encoders measure wheel rotation.
The wheels are independently steerable. Potentiometers measure steering angle.
The top speed is 0.4m/min
Laser striping and camera system determine the presence of obstacles in its path
Carries an x-ray spectrometer, to analyse the composition of the rocks
Power provided by solar cells and 6 lithium thionyl chloride D-cell batteries. These give a maximum power
output of 30W.
A heater unit warms the electrical components (ambient temperatures between -40 degrees C and +40 degrees
C)
Command and telemetry is provided by a modem that links the microrover with the lander.
NASA's latest experimental prototype, ROCKY-7, has the following features:
Less than 20 kg mass
Ability to traverse autonomously a complex area
Acquire in-situ geochemical data
Low power stereo vision (acuity of human eye; viewpoint can be raised 1m above the surface)
2 DOF stowable manipulator arm with subsurface reach
2 DOF end-effector for digging, grasping and instrument pointing
Onboard spectrometer with fibre optic path to end of arm
Pointable solar array
Bi-directional sensing and driving
Increased capacity for more instruments
New wheel geometry with compact actuation
Ability to autonomously recognise designated targets
Nanorovers
The nanorover concept is a small planetary surface explorer, typically weighing a few grams, moving a few millimetres
every minute. It would move about in a reactive mode on the surface, much the same way as an insect does. That is, if
there is an obstacle on the left, it moves right and vice versa. If it begins to move out of the sunlight and is losing power,
it changes course. If it senses more of what it is seeking (e.g. water vapour) on one side than the other, it turns toward its
goal. Large numbers of such systems can be accommodated on the lander to compensate for possible individual failures.
8.3 Characteristic functions of mobile robots
Since the introduction of Shakey at the Stanford Research Institute in 1970, mobile robots have gained significant
commercial and scientific interest and have reached high levels of machine intelligence. Even though existing mobile
service robots are quite different in size and shape, they mostly share elements of the same application-independent
functionality.
These typical and, in a wide sense, application-independent functions are defined in the following:
Environmental modelling
In order to assure a collision-free and goal-orientated motion across constrained environments the robot needs to
have information on the operational area and its surroundings. In addition to a map of the environment, which
may be externally given by CAD data, sensors should enable the robot to build, detail or update its maps even in
dynamic environments. Representations of maps range from bit-maps to symbolic descriptions of complex 3D
worlds.
Navigation
Navigation comprises motion planning, localisation, motion control and collision avoidance. Motion planning
determines the ideal trajectory between start and final positions in terms of coordinates, velocities and timing. It
takes into account constraints and boundary conditions like restricted areas, limits in mission time or in available
resources.
During missions, motion planning can be modified, refined or updated as sensor signals provide new information on the
environment or external signals alter the mission goals.
Localization of mobile systems requires, due to measurement errors, regular referencing to external (artificial or natural)
landmarks. Usually dead-reckoning, a simple form of measuring the vehicle's travelled path, is used in conjunction with
external landmark referencing.
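Dead-reckoning can be illustrated for a differential-drive vehicle: the pose is updated from the incremental wheel travel measured by encoders. The formulas are the standard odometry update; the wheel base value in the usage below is arbitrary.

```python
import math

# Update pose (x, y, heading) from the distances rolled by each wheel.
def dead_reckon(pose, d_left, d_right, wheel_base):
    x, y, th = pose
    d = 0.5 * (d_left + d_right)            # distance moved by vehicle centre
    dth = (d_right - d_left) / wheel_base   # change in heading
    x += d * math.cos(th + 0.5 * dth)       # advance along the mean heading
    y += d * math.sin(th + 0.5 * dth)
    return (x, y, th + dth)
```

Because each update accumulates encoder and slip errors, the estimate drifts, which is exactly why periodic referencing to external landmarks is needed.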
Motion control assures the vehicle's proper motion along given paths or trajectories. Interpolation between via-points and
servo-control of the actuators are performed in constant time intervals at high frequency.
Task Planning
The automated execution of the service task goes beyond the actual motion planning. General task-level
commands are decomposed into elementary tasks, whose motion elements enter motion planning.
Personal Safety
Aspects of personal safety are assured by a suitable layout and design, by active safety sensors, or by a
combination of both. Regulations, standards or guidelines regarding the personal safety of robots in public areas
have still to be worked out.
Another interesting drive configuration is made of four mecanum wheels, all of them driven. These wheels consist of a
rim on which small rollers are arranged at angles of 45 degrees. When the wheel turns, this angle results in a force
component parallel to the wheel and a second force component not parallel to the wheel. Usually this component is
cancelled by the force components of the other wheels, but with special adjustment of the speed and direction of all
wheels the vehicle can drive in an arbitrary direction, similar to a hovercraft. This drive configuration therefore gives
the vehicle high manoeuvrability, but requires good surfaces with sufficient friction.
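The usual inverse kinematics for this arrangement (four 45-degree mecanum wheels on a rectangular body; r is the wheel radius, l and w are half the wheelbase and half the track) gives each wheel's angular speed from the desired body velocity. The geometry values below are placeholders, not data for any particular vehicle:

```python
# Wheel angular speeds (front-left, front-right, rear-left, rear-right)
# for body velocities vx (forward), vy (sideways) and wz (rotation).
def mecanum_wheel_speeds(vx, vy, wz, r=0.05, l=0.2, w=0.15):
    k = l + w
    return [(vx - vy - k * wz) / r,   # front-left
            (vx + vy + k * wz) / r,   # front-right
            (vx + vy - k * wz) / r,   # rear-left
            (vx - vy + k * wz) / r]   # rear-right
```

Pure forward motion drives all wheels equally, while pure sideways motion drives diagonal pairs in opposite senses; this is the cancellation of force components described above.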
For figures relating to these please refer to the slides.
Environmental Perception
The perception of the environment of a robot is achieved by various sensors; a robust and reliable perception system is
therefore a key feature in the field of mobile robots. In particular, a high performance perception system is required to
cope with an unknown environment. Depending on the requirements of the task some sensors may be superior to others.
In service robot applications one usually chooses a combination of different sensors. By using sensors working with
different principles one can achieve optimal results in environmental perception.
In the following some of the most commonly used sensors are presented.
The first and simplest example of a sensor is a bumper. It just detects a mechanical bump at an obstacle by closing
an electrical or mechanical contact. In service robots it is used for safety functions only.
Another example is the cheap and widespread ultrasonic sensor. Common ultrasonic sensors measure
the distance to walls or obstacles by sending out a short ultrasonic pulse and measuring the time for the reflection
from the obstacle. Nowadays, the use of special beam shapes and control of the phase of the ultrasonic
waves even allows scanning ultrasonic sensors to be built, the so-called phased-array sensors [17,18]. Besides being a
cheap alternative, ultrasonic sensors are reliable, have a range of a few metres and a resolution of a
few millimetres. The flexibility of ultrasonic sensors also allows the construction of a wide beam, which is better
suited to general obstacle detection or wall following. Due to these advantages and the low cost, most mobile robots
are equipped with some kind of ultrasonic sensor.
A further improvement in sensing can be achieved by using laser light instead of sound waves to measure distances. A laser beam is sent out, reflected by an obstacle and then caught by a detector. The distance to the obstacle is calculated either by measuring the time of flight of a laser pulse (19) or by using phase-modulated beams and measuring the interference between the outgoing and returning beams (20). Deflecting the laser beam with a rotating mirror yields two- or even three-dimensional laser scanners, which can reliably measure object distances up to a range of about 25 m. Currently their application is limited only by their high price compared with other sensor systems.
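For the phase-measurement variant, the distance follows from the phase shift between the outgoing and returning modulated beams. The sketch below assumes an amplitude-modulated beam and is only unambiguous within the range c/(2f); the names are illustrative:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def phase_shift_to_distance(phase_rad: float, mod_freq_hz: float) -> float:
    """Distance from the measured phase shift of the modulation envelope.

    The round trip covers (phase / 2*pi) modulation wavelengths, so the
    one-way distance is c * phase / (4 * pi * f). Only unique for
    distances below c / (2 * f).
    """
    return SPEED_OF_LIGHT * phase_rad / (4.0 * math.pi * mod_freq_hz)
```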
Another widely used sensor for environmental perception is the camera or stereo camera (21,22,23). Usually one or two CCD cameras are used to take (stereo) pictures of the environment. These pictures are then processed by standard image processing and pattern extraction techniques; in this way objects, markers or distances can be computed. Unfortunately, the low cost and ease of use of these cameras have to be balanced against the high computing power required to perform feature extraction and interpretation.
The environmental modelling unit processes the sensor data and relates them to the world model, building up environmental maps. These data and the environmental map are then used for the motion control of the mobile robot. For precise modelling and perception of the environment it is important to reduce measurement errors and to guarantee an adequate registration of all relevant features in the environment. This is achieved by using and evaluating different sensors, which should preferably work on different underlying physical principles. With such a sensor configuration one can minimize erroneous measurements. For example, a laser scanner or a camera system can hardly detect glass doors, whereas ultrasonic sensors can do this easily. The combination of various sensors into one plausible stream of sensor data is known as sensor fusion. Other approaches [24,25,26] show how to fuse heterogeneous multi-sensor information from such different sensors as laser scanners, ultrasonic sensors or vision systems. These fused sensor data are used to build up a world model that is as reliable and complete as possible. Nowadays other algorithms also use fuzzy logic or neural network techniques to combine sensor data and to extract relevant features and patterns [27,28]. These pre-processed data are then used to construct a map or model of the environment.
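As one minimal instance of sensor fusion, two independent range estimates of the same feature can be combined by inverse-variance weighting. Real systems typically use Kalman filters or the fuzzy/neural techniques cited above; this sketch only illustrates the principle:

```python
def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Combine two independent estimates, weighting each by 1/variance.

    The fused variance is smaller than either input variance, which is
    the payoff of combining sensors with different error characteristics.
    """
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)
```

Fusing a laser reading of 2.0 m (variance 0.04) with an ultrasonic reading of 2.2 m (variance 0.04) yields 2.1 m with variance 0.02.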
When creating maps [29,30] there are generally two common representations: the geometric and the topological approach. A geometric map represents objects according to their absolute (Cartesian) geometric relationships. It can be a grid map or a more abstract map such as a line or polygon map. Grid maps are often used, as they have the advantage of requiring less computation than other maps and of being built up more quickly. The shape and size of the grid cells can differ and may even be variable. Square or hexagonal grid maps are common, in which objects, or the probability of finding objects, are recorded. By contrast, the topological map is based on recording the geometric relationships between the observed features rather than their absolute position with respect to an arbitrary coordinate frame of reference [31]. The resulting representation takes the form of a graph where the nodes represent the observed features and the edges represent the relationships between them. Unlike geometric maps, topological maps can be built and maintained without any estimate of the absolute position of the robot. This approach allows one to integrate large-area maps without suffering from uncertainty in the robot's position.
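A square-cell grid map of the kind described can be sketched as follows. This is a deliberately minimal, hypothetical structure; real implementations add log-odds updates, ray casting and variable resolution:

```python
class OccupancyGrid:
    """Grid map whose cells hold the probability of finding an obstacle."""

    def __init__(self, width: int, height: int, cell_size_m: float):
        self.cell_size_m = cell_size_m
        # 0.5 encodes "unknown": no evidence either way yet.
        self.cells = [[0.5] * width for _ in range(height)]

    def mark(self, x_m: float, y_m: float, p_occupied: float) -> None:
        """Record an occupancy probability at a world coordinate."""
        col = int(x_m / self.cell_size_m)
        row = int(y_m / self.cell_size_m)
        self.cells[row][col] = p_occupied
```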
Table: common map types and their properties — quad-/octree model, vector map, topological map.
The plan generated this way contains a sequence of action elements (e.g. movement, picking up items, manipulating items) with assigned resources (e.g. the robot or its gripper). The motion control manager then manages the start and destination of a path and plans the course and any actions in between. The resulting motions of a robot are called trajectories or paths and consist of a sequence of desired positions, velocities and accelerations. The sequence of plan elements is called a task execution sequence. During this planning stage all constraints and restrictions, such as closed or impassable areas and areas of intense disturbance, are considered, as well as target times, resources, supplies and the processing of parallel or sequential tasks.
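A task execution sequence of the kind just described can be represented as a plain list of action elements with their assigned resources. The field names and the example plan below are hypothetical, chosen only to illustrate the structure:

```python
from dataclasses import dataclass

@dataclass
class ActionElement:
    """One plan element: an action, its assigned resource, and a target."""
    action: str    # e.g. "move", "pick", "place"
    resource: str  # e.g. "robot", "gripper"
    target: tuple  # destination or item position in metres

# A hypothetical delivery task as a task execution sequence:
plan = [
    ActionElement("move", "robot", (2.0, 3.5)),
    ActionElement("pick", "gripper", (2.0, 3.5)),
    ActionElement("move", "robot", (8.0, 1.0)),
    ActionElement("place", "gripper", (8.0, 1.0)),
]
```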
Typical tasks include room cleaning, water, information tours, (postal/service) delivery and industrial transportation. For industrial transportation, for example, a central station has to estimate needs, and trajectories to stores and to production areas have to be computed.
8.3.5 Navigation
The navigation of a mobile robot comprises localization, motion control, the already discussed motion planning and collision avoidance. Its task also includes the online, real-time re-planning of trajectories when obstacles block the pre-planned path or other unexpected events occur. Depending on the complexity of the navigational task, the navigational functions are usually divided into different classes:
The most direct coupling of sensors and actuators is achieved by reflexes. A reflex is a strong relationship between a sensory stimulus and a reaction of the system that bypasses any higher task-planning functions of the robot. Critical safety mechanisms in particular are based upon these reflexes. They are characterised by a short response time and are difficult to inhibit by higher intelligence functions. They guarantee safe behaviour of the vehicle in emergencies and other unexpected situations. Other, more complex mechanisms lead to local navigation schemes. This level is still highly reactive and can cope with changes in the environment such as unexpected or even moving obstacles. Its task also includes the re-planning of trajectories when obstacles block the path or danger or other sudden events arise. As it determines the vehicle's path online and in real time, it usually cannot guarantee an optimal trajectory. The most complex mechanism is global navigation, which generates paths to goals given by the task-planning unit. The paths generated in this way take into account all data provided by the world model and result in near-optimal movements [32,33].
In the following, a brief overview of the navigational techniques most commonly used to generate trajectories for mobile robots is presented.
The localisation of mobile robot systems requires measures to compensate for the relatively imprecise knowledge of movement and position caused by dead-reckoning errors such as slippage and drift. This can be done via position updating, error correction, motion surveillance and stabilization techniques. It has proven useful to use two different schemes for localisation that complement each other; hence most mobile robots are equipped with sensors for dead reckoning as well as for position updating. Dead-reckoning methods estimate the position of the vehicle from differential changes in the vehicle's position, speed and acceleration. Position updating, by contrast, references the robot against external features. These can be natural or artificial landmarks in the environment of the robot, from which the absolute position of the vehicle can be deduced. Any cumulative measurement errors in dead reckoning can be corrected by these position update mechanisms, as they allow the position of the vehicle to be re-calibrated absolutely, improving the localization accuracy dramatically. If frequent position updates are not possible and yet good localization accuracy is demanded, particular attention must be paid to high-quality dead-reckoning sensor systems. Table 8.1 sums up the components for performing the dead reckoning of mobile robots.
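For a differential-drive vehicle, one dead-reckoning step from wheel-encoder increments can be sketched as below. The midpoint-heading approximation is one common choice; the function name and wheel-base parameter are illustrative:

```python
import math

def dead_reckon_step(x: float, y: float, theta: float,
                     d_left: float, d_right: float,
                     wheel_base: float):
    """Update a pose estimate (x, y, heading) from wheel travel increments.

    Errors such as slippage accumulate step by step, which is why
    position updating against external features remains necessary.
    """
    d_centre = (d_left + d_right) / 2.0        # distance of the vehicle centre
    d_theta = (d_right - d_left) / wheel_base  # change of heading
    x += d_centre * math.cos(theta + d_theta / 2.0)
    y += d_centre * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```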
Table 8.1 — components for mobile robot localisation:
Position updating: feature identification in the environment; active beacons (ultrasonic, infrared or radio emission); passive beacons (reflectors, magnets, metal inductors).
Dead reckoning: speedometer.
The compliance of the vehicle's course with the trajectory calculated by the planning algorithms is ensured by the position controller. It performs the necessary fine interpolation between the given way points, generates steering commands for the drives and compares the actual position reported by the localization unit with the requested position. In summary, the motion precision of a mobile robot depends on two factors: the accuracy of the localization and the quality of the position controller.
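In its simplest form, the position controller's comparison of actual and requested position reduces to a proportional law on distance and heading error. The gains and names here are illustrative assumptions, not a production design:

```python
import math

def steer_to_waypoint(pose, waypoint, k_lin=0.5, k_ang=2.0):
    """Return (forward speed, turn rate) commands towards the next way point."""
    x, y, theta = pose
    wx, wy = waypoint
    distance = math.hypot(wx - x, wy - y)
    bearing = math.atan2(wy - y, wx - x)
    # Wrap the heading error into [-pi, pi] before applying the gain.
    heading_error = math.atan2(math.sin(bearing - theta),
                               math.cos(bearing - theta))
    return k_lin * distance, k_ang * heading_error
```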
Another important aspect of the navigation of mobile robots is collision avoidance [39,40]. It is activated when the vehicle's sensors detect an obstacle blocking the pre-planned path. Its task is then to steer the vehicle around the obstacle along as near-optimal a detour as possible; time and resource consumption during the detour need to be minimized as well. Usually this task is done by a reactive motion controller whose inputs are the known environment and the current sensor data. Various concepts exist for this local obstacle avoidance, which are briefly described in the following.
Contour following
The sensor data are used to drive a minimal-distance course around an obstacle until the vehicle reaches the pre-planned path again [41].
Advantages:
easy to implement
fast algorithms available
Disadvantages:
Edge following
The edges of obstacles are determined and the edge with the least deviation from the pre-planned course is followed until
the vehicle reaches the pre-planned path again.
Disadvantages:
Potential fields
All obstacles produce imaginary repulsive forces which act on the vehicle [34,35].
Advantages:
easy to realize
Fast algorithms available
Disadvantages:
oscillations around the calculated path
local minima can trap the vehicle
doors and narrow passages are difficult to pass
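A common form of the imaginary repulsive force (following Khatib's classical formulation; the gain and influence radius below are illustrative) is zero outside an influence radius and grows steeply as the obstacle is approached:

```python
import math

def repulsive_force(robot, obstacle, influence=1.0, gain=0.5):
    """Repulsive force on the robot from one point obstacle (2-D).

    Zero beyond the influence radius; the 1/d**2 factor makes the force
    grow rapidly near the obstacle, which also explains the oscillation
    and local-minimum problems noted above.
    """
    dx, dy = robot[0] - obstacle[0], robot[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if d >= influence or d == 0.0:
        return 0.0, 0.0
    magnitude = gain * (1.0 / d - 1.0 / influence) / d**2
    return magnitude * dx / d, magnitude * dy / d
```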
Vector fields
The environment is modelled as a two-dimensional histogram whose cells contain the probabilities of finding obstacles there [39]. An imaginary force vector is then constructed from the histogram cells, which acts similarly to the potential field algorithm.
Advantages:
easy to realize
Disadvantages:
high computation power needed
large changes in movement direction occur
the typical problems of potential field algorithms arise
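The histogram-based steering decision can be sketched as picking the free angular sector closest to the target heading. This follows the spirit of the vector field histogram method; the threshold and sector encoding are assumptions:

```python
def pick_heading(histogram, target_sector, threshold=0.3):
    """Choose a steering sector from a polar obstacle-density histogram.

    `histogram[i]` is the obstacle density in angular sector i. Returns
    the free sector closest to the target, or None if all are blocked.
    """
    free = [i for i, density in enumerate(histogram) if density < threshold]
    if not free:
        return None
    return min(free, key=lambda i: abs(i - target_sector))
```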
Unfortunately, most algorithms for obstacle avoidance have some disadvantages. Hence in a real application the obstacle
avoidance will often be a combination of various algorithms. These hybrid architectures can then provide reliable local
obstacle avoidance.
APPENDICES
APPENDIX A
MATRIX REVIEW
Ref: Niku, S.B., Introduction to Robotics: Analysis, Systems, Applications, Prentice Hall, 2001.
APPENDIX B
PART I
ROBOTICS
FORMULA SHEETS
2. KINEMATICS
2.1 Definitions
2.2 Transformations
(2.1)
(2.2)
(2.3)
(2.4)
where
(2.5)
(2.6)
(2.7)
(2.8)
or
(2.9)
(2.10)
(2.11)
(2.12)
(2.13)
(2.14)
(2.15)
(2.16)
(2.17)
(2.18)
(2.19)
(2.20)
(2.21)
(2.22)
(2.23)
(2.24)
(2.25)
(2.26)
(2.27)
(2.28)
(4.1)
(4.2)
(4.3)
(4.4)
(4.5)
(4.6)
Inward recursions
For link i = n to 1, calculate:
1. Force exerted on link i by link i-1
(4.7)
(4.8)
Summary of notation
Note: all linear and angular velocities and accelerations are measured relative to a fixed frame (such a frame is often also known as an earth or world frame). However, linear velocities and accelerations, even though measured relative to a fixed frame, can be expressed in any frame, like any other vector. In the planar case angular velocities and accelerations are scalars, so this issue does not arise for angular motion.
Linear accelerations/forces: where Q represents a linear quantity,
(as used
(4.9)
(4.13)
(4.14)
or
(4.15)
(4.16)
8. Localisation
Basic Concepts
Trilateration
Multilateration
Trilateration algorithm
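A standard 2-D trilateration computation, consistent with the headings above, subtracts the first range circle from the other two, which removes the quadratic terms and leaves a 2x2 linear system. The beacon layout in the usage note is illustrative:

```python
def trilaterate(beacons, distances):
    """Position (x, y) from three beacon positions and measured ranges.

    Subtracting circle 1 from circles 2 and 3 removes the quadratic
    terms; the remaining linear system is solved by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = distances
    a1, b1 = 2.0 * (x2 - x1), 2.0 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2.0 * (x3 - x1), 2.0 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero if the beacons are collinear
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det
```

For example, beacons at (0,0), (4,0) and (0,4) with ranges measured from the point (1,1) recover that point.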