
ROBOTICS AND MACHINE INTELLIGENCE

MECH3460, PART I: ROBOTICS

School of Mechanical Engineering
Dr. A. Dehghani
Room no. 448
a.dehghani@leeds.ac.uk


MODULE INFORMATION
Module Specification

Programme of study (compulsory): BEng/MEng Mechatronics and Robotics, Level 3
Number of credits: 20 (Parts I and II)
Semesters in which taught: 1 and 2
Timetabled teaching sessions: 22 fifty-minute lecture periods in the first semester (Part I)
Form of assessment: each part of the module carries 50% of the marks (20% final examination and 30% coursework)
Module lecturer: Dr. A. Dehghani, Room 448, a.dehghani@leeds.ac.uk

CONTENTS

INDUSTRIAL ROBOT MANIPULATORS

1. Introduction
1.1 Robotics: a definition
1.2 History
1.3 The parts of a robot manipulator system
1.4 Robot manipulator classification
1.5 Industrial, economic and social impact of robots

2. Kinematics
2.1 Definitions
2.2 Transformations
2.3 Properties of transformation matrices
2.4 Forward kinematics
2.5 Matlab and the robotics toolbox
2.6 Inverse kinematics

3. Design
3.1 Actuators
3.2 Internal state sensors
3.3 External state sensors
3.4 End effectors
3.5 Mechanical arrangement and specification: PUMA 500 series

4. Dynamics and control
4.1 Inverse dynamics
4.2 Forward dynamics
4.3 Control

5. Programming
5.1 Introduction
5.2 Drive-through teaching
5.3 Programming using the VAL II language
5.4 VAL II trajectory generation
5.5 Trajectory calculation

EMERGING ROBOTIC TECHNOLOGY AND APPLICATIONS

6. Vision systems
6.1 Introduction
6.2 Vision hardware
6.3 Image processing
6.4 Object recognition

7. Advanced robotic applications
7.1 Introduction
7.2 Examples of selected robot systems

8. Mobile robots
8.1 Introduction
8.2 Space robotics: planetary rovers
8.3 Characteristic functions of mobile robots

Appendix A: Matrix review
Appendix B: Formula sheets

MODULE AIMS AND OBJECTIVES


Aim: to provide an understanding of robot analysis and technology, including the study of robot manipulators currently
used in manufacturing industry, and an introduction to other applications for robotics (space, medical, etc.).
At the end of this part of the module you should be able to:
describe the different mechanical configurations for robot manipulators
choose robot actuator and sensor technology appropriate for a given application
undertake kinematic analysis of robot manipulators
understand robot programming concepts
analyse the dynamics of planar manipulators
design, in concept, robot control systems
understand basic concepts in machine vision
describe the social and economic impact of industrial robotics
appreciate the current state and potential for robotics in new application areas (e.g. medical)

Books
No books are essential for this course. However the following books are recommended:
1. K S Fu, R C Gonzalez & C S G Lee, Robotics. McGraw-Hill, 1987.
2. R P Paul, Robot Manipulators. MIT Press, 1981.
3. R D Klafter, T A Chmielewski & M Negin, Robotic Engineering: An Integrated Approach. Prentice-Hall, 1989.
4. J J Craig, Introduction to Robotics. Addison-Wesley, 1986.
5. F N-Nagy & A Siegler, Engineering Foundations of Robotics. Prentice-Hall, 1987.
6. M C Fairhurst, Computer Vision for Robotic Systems. Prentice-Hall, 1988.
7. S B Niku, Introduction to Robotics: Analysis, Systems, Applications. Prentice Hall, 2001.

These texts are referred to in the handouts as [1], [2] etc.

Planned content of teaching sessions

Week    Session themes
1       Introduction; Kinematics
2       Kinematics; Kinematics
3       Kinematics; Example Class
4       Design; Design
5       Dynamics; Dynamics
6       Examples Class; Control
7       Control; Vision systems
8       Mobile robots; Mobile robotics: localization
9       Examples Class; Navigation
10      Example Class; Autonomous robots
11      Advanced robot applications; Examples Class

1. INTRODUCTION
1.1 Robotics: a definition
What is a robot?
One dictionary definition of a robot is:
An automatic apparatus or device that performs functions ordinarily ascribed to humans or operates
with what appears to be almost human intelligence.
The Robot Industries Association (RIA) in the USA uses a more restrictive definition:
A robot is a reprogrammable, multifunctional manipulator designed to move material, parts, tools, or
specialised devices through variable programmed motions for the performance of a variety of tasks.
Defining a robot is tricky, but the key features are that it should be adaptable to a variety of tasks, and be able to operate
with a degree of autonomy, i.e. without constant human supervision. Both these features suggest a robot should have a
programmable memory, so that it can be reprogrammed for different tasks, and operate according to the stored
programme rather than direct human control. The adaptability also implies that the mechanical configuration cannot be
too specialised for a particular function. The main problem with most definitions is how to interpret "a variety of tasks";
how wide a range of tasks is required before a machine becomes a robot?
The origin of a word
The word robot was first used by the Czech writer Karel Capek in a play entitled Rossum's Universal Robots in 1921. Capek's robots were hard-working humanoid machines. The word derives from robota, the Czech word for forced labour.
The term robotics, meaning the technical field encompassing robot technology, was first used by Isaac Asimov in 1942
in a short story entitled Runaround.
Examples of robots
There are two main types of robots:
Robot manipulators: jointed robot arms which are now quite common in manufacturing industry. This type of
robot has had a significant impact and is by far the most important industrially and economically.
Mobile robots: vehicles capable of autonomous motion.
Of course, some devices fall into both categories, e.g. a mobile robot which carries a manipulator.
Table 1.1 lists examples of robots and robot-like devices

Table 1.1 Robot examples

Robot (fulfils all the usual criteria)        "Near-relation" (has some robot-like features)

1.2 History

1.3 The parts of a robot manipulator system


The manipulator
The manipulator is the mechanical arm, containing actuators, sensors and structural components. The number of degrees-of-freedom (DOF) of the manipulator is the number of independent position variables which would have to be specified to locate all parts of the mechanism. Six-DOF manipulators are common as, when appropriately designed, they allow independent control of all three linear displacements (XYZ) and all three angular displacements.

The manipulator can usually be divided into two elements (see Figure 1.1): the arm, designed to provide linear position
control within the working envelope (usually 3 DOF), and the wrist, attached to the end of the arm and providing angular
position control (again, usually 3 DOF).
The end-effector
The end-effector is the robot hand, i.e. a gripper or other device attached to the moving end of the manipulator. There is a
great variety of gripper designs, with varying degrees of adaptability to handling different workpieces. Alternatively the
end-effector may be a specific tool, such as a paint sprayer or a welding torch.
The controller
The controller is usually a dedicated robot control computer system, with VDU, disk and printing facilities, allowing
efficient creation of robot control programs. A teach pendant is often included; this is a small hand-held keyboard on which each key moves the robot in a particular way.
The controller cabinet often contains the power conversion unit for the robot, i.e. power supply and amplifiers for the
electric motors in an electrically driven manipulator. A separate power conversion unit would be used for hydraulic or
pneumatically actuated manipulators, containing a hydraulic pump or pneumatic compressor as appropriate (Figure 1.2).
External sensing system
Sometimes additional sensing systems are used to help monitor and control the robot; these would be interfaced to the
robot controller. For example a vision system might be used.

1.4 Robot manipulator classification


Industrial robots can be classified in a number of ways. These classifications try to place a particular robot into a
category or group whereby it can be compared with like robots in the same group. Robots are commonly grouped by
consideration of the following characteristics: configuration, control method, actuator type, application.
Configuration
This is the most established form of classification of a robot system. Under this characteristic robot manipulators are
grouped according to their physical design or geometrical structure, which is known as their configuration. Manipulators
can have serial links or parallel links; a serial manipulator, with a sequence of links attached one after the other by joints
like a human arm, is almost universal in current industrial practice. Each joint is either revolute (rotary) or prismatic
(sliding). If the end-effector (e.g. gripper) of the robot is to be positioned anywhere in three dimensional space, then at
least three joints are required in the robot arm. The combination of revolute and prismatic joints chosen for the three
joints in the arm dictates the configuration; there are five common configurations in industrial use (Figure 1.3):
Cartesian or rectangular configuration (prismatic-prismatic-prismatic)
Cartesian configuration provides movements along the X, Y and Z axes, like those of a milling machine. It is also called
rectangular since it covers a three-dimensional rectangular volume. Some of the advantages of this configuration are:
Easily controlled/programmed movements.
High accuracy.
Inherently stiff structure.
Large payload capacity.
This configuration is applicable in those areas where linear movement and high accuracy are demanded, such as
manipulation of components through apertures, or pick-and-place applications where the workplace is essentially flat.
Cylindrical configuration (revolute-prismatic-prismatic)
The movements in this configuration are rotation about the base and linear travel in the vertical and horizontal planes.
Some of the advantages are:
Easily controlled/programmed movements.
Good accuracy.
Structural simplicity, offering good reliability.
Fast operation.


This configuration is applied in a radial workplace layout where the work is approached primarily in the horizontal plane
- for example, small circular manufacturing cells.
Polar or spherical configuration (revolute-revolute-prismatic)
This configuration combines rotational movement in both vertical and horizontal planes with a single linear (in/out)
movement of the arm. It presents the following advantages:
Easily controlled/programmed movements.
Large payload capacity.
Fast operation.
Accuracy and repeatability at long reach.
It is suited to lifting and shifting applications which do not require sophisticated path movements to be traced.
Jointed or articulated or revolute configuration. (revolute-revolute-revolute)
Jointed configuration consists of a number of rigid arms connected by rotary joints. In addition, the whole structure has a
rotary movement around the base. It is also termed the anthropomorphic configuration, since it resembles the movements
of a human body. Some of the advantages of this configuration are:
Extremely good manoeuvrability.
Ability to reach over obstructions.
Large reach for small floor area.
Fast operation due to rotary joints, but lower accuracy.
SCARA configuration. (prismatic-revolute-revolute)
Selective Compliance Assembly Robot Arm (SCARA) configuration is a combination of the cylindrical and the jointed
configuration operating in the horizontal plane. Links connected by rotary joints provide movement in the horizontal
plane, while vertical movement is provided at the base of the arm (or sometimes at the end-effector). Advantages of this
configuration include:
Extremely good manoeuvrability.
Fast operation.
Relatively high payload capacity.
High accuracy.
This configuration was developed for assembly-type operations.
Figure 1.4 shows some specific examples of a variety of manipulator configurations.
Control method
Many industrial manipulators are servo-controlled. Thus each joint actuator is operated under closed-loop control,
allowing the joint to be positioned accurately anywhere within its range of movement; also the velocity and acceleration
of the joint can be controlled as required. A dedicated computer system with its own robot programming language will be
used to control the robot. Servo-controlled robots will be the main subject of this course.
At the cheaper, less sophisticated end of the market are pick-and-place or bang-bang robots. These have non-servo-controlled actuators which only stop moving when they reach a mechanical end-stop; hence each actuator can only be stationary at one or other end of its stroke. Also, the velocity and acceleration are not controlled during motion. This
type of robot is controlled by a sequencer (e.g. a programmable logic controller, PLC) which operates the joints in the
correct order and can start or stop operation depending on external sensors. Programming can only be achieved through
setting up the sequencer and altering the end-stop positions.

1.5 Industrial, economic and social impact of robots


Robot manipulators have the potential to remove the need for people to perform many dangerous, dirty or difficult tasks
within industry, particularly in the manufacturing sector. They can also:
increase a company's productivity, which may reduce costs.
improve repeatability and hence quality in manufacturing operations.
relieve human operators of boring repetitive tasks.


However, apart from the automotive sector, the take-up of robot technology has been slow, particularly in the UK. This
has been due to the large capital investment required, and concerns over reliability of high technology, and adaptability to
product changes. The social impact has also been a concern in some quarters, as robots reduce the need for unskilled
labour.
The attached extract from the Computing and Control Engineering Journal summarises the current industrial impact of
robotics and predicts future trends. Further information can be found in the library (Edward Boyle, mainly Mechanical
Engineering K-13), e.g. R D Klafter, T A Chmielewski, M Negin Robotic Engineering: An integrated approach, Sections
1.6-1.9.


2. KINEMATICS
2.1 Definitions
Kinematics is the study of motion without regard to the forces which are required to produce that motion. It includes the
study of position, velocity and acceleration (both linear and angular) of one point in a mechanism and how that interrelates with the motion of other points. For a robot manipulator the two most important analytical problems are:
the forward kinematics: the calculation of the linear position and orientation of the end-effector from the joint positions;
the inverse kinematics: the calculation of the joint positions from the position and orientation of the end-effector.
The inverse kinematics can be very complex for some manipulator configurations, but it is usually essential to be able to
calculate the joint positions (i.e. angles for revolute joints) required to move the end-effector to a desired position and
orientation.
Careful definition of co-ordinate frames is very important in kinematic analyses. For example, Figure 2.1 shows co-ordinate frames chosen to define the positions and orientations of the base of a robot {B}, its end-effector {E}, and a work surface {W}. The position of a component C on the work surface is specified by a vector defined in the work surface co-ordinate frame. It is important to realise that the frame {E} is attached to the end-effector, i.e. it moves as the robot moves.

2.2 Transformations
In robot kinematic analysis we need to be able to transform or map a vector specified in one co-ordinate frame to a vector which defines the same point but relative to another co-ordinate frame. For example, in Figure 2.1 the vector defining C is given in the {W} frame; we would need to transform this into the robot base co-ordinate frame {B} to make a start on calculating how to move the end-effector to pick up the component. Figure 2.2 shows frames {B} and {W} in more detail. The vectors ${}^W C$, ${}^B C$ and ${}^B W_o$ are 3x1 column vectors; e.g. the vector ${}^B C$ that we need to calculate is given by:

$$ {}^B C = \begin{bmatrix} {}^B c_x \\ {}^B c_y \\ {}^B c_z \end{bmatrix} \qquad (2.1) $$


Pure translation
Consider the situation shown in Figure 2.3, where frames {B} and {W} have the same orientation. The difference
between the two frames is purely a translation, and ${}^B C$ can be calculated by vector addition:

$$ {}^B C = {}^W C + {}^B W_o \qquad (2.2) $$

Pure rotation
Consider the situation shown in Figure 2.4, where frames {B} and {W} have the same origin position. The difference between the two frames is purely a rotation. ${}^B C$ can be found by taking the x, y and z components of ${}^W C$ in turn and projecting them onto the {B} axes. Taking the x component first, ${}^W c_x$ will in this example give ${}^W c_x \cos\theta$ when projected onto the $X_B$ axis, ${}^W c_x \sin\theta$ when projected onto the $Y_B$ axis, and zero when projected onto the $Z_B$ axis (see Figure 2.5); these three components are the transformation of ${}^W c_x$ into the {B} frame. Transforming ${}^W c_y$ and ${}^W c_z$ into the {B} frame as well, and adding the results, gives:

$$ \begin{aligned} {}^B c_x &= {}^W c_x \cos\theta - {}^W c_y \sin\theta \\ {}^B c_y &= {}^W c_x \sin\theta + {}^W c_y \cos\theta \\ {}^B c_z &= {}^W c_z \end{aligned} \qquad (2.3) $$

These equations can be expressed in matrix form:

$$ {}^B C = {}^B R_W \, {}^W C \qquad (2.4) $$

where

$$ {}^B R_W = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (2.5) $$

${}^B R_W$ is described as the rotation matrix for transforming from {W} to {B}. The element values depend on the relative orientation of the frames; the elements in equation (2.5) are only valid for this example, i.e. where the difference between the frames is just due to a rotation about the Z axis. In general terms, the columns of the rotation matrix can be defined as the unit vectors i, j, k of frame {W} projected into frame {B}:

$$ {}^B R_W = \begin{bmatrix} {}^B i_W & {}^B j_W & {}^B k_W \end{bmatrix} \qquad (2.6) $$

Example 2-1: Rotational transformation. See class notes.
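As a quick numerical illustration (not part of the original example), the rotation mapping of equations (2.4) and (2.5) can be checked in Matlab. Here {W} is assumed to be rotated by 30° about the Z axis of {B}:

    % Rotation of {W} relative to {B}: 30 degrees about the Z axis
    th  = 30*pi/180;
    BRW = [cos(th) -sin(th) 0;
           sin(th)  cos(th) 0;
           0        0       1];   % rotation matrix, equation (2.5)
    WC = [1; 0; 0];               % point C expressed in {W}
    BC = BRW * WC                 % the same point expressed in {B}, equation (2.4)

The result, [cos 30°; sin 30°; 0], is the x unit vector of {W} projected into {B}, i.e. the first column of the rotation matrix, consistent with equation (2.6).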


Combined translation and rotation


Returning to Figure 2.2, a transformation involving both translation and rotation can be performed by first expressing ${}^W C$ in a frame aligned with {B} but with origin at $W_o$, and then adding in the translation:

$$ {}^B C = {}^B R_W \, {}^W C + {}^B W_o \qquad (2.7) $$

This whole transformation can be expressed in one matrix operation:

$$ \begin{bmatrix} {}^B C \\ 1 \end{bmatrix} = \begin{bmatrix} {}^B R_W & {}^B W_o \\ 0\;\,0\;\,0 & 1 \end{bmatrix} \begin{bmatrix} {}^W C \\ 1 \end{bmatrix} \qquad (2.8) $$

or

$$ {}^B C = {}^B T_W \, {}^W C \qquad (2.9) $$

The matrix ${}^B T_W$ is called the homogeneous transformation matrix, or simply the 4x4 transformation matrix, and plays a crucial role in robot kinematics. The 4x1 position vectors ${}^B C$ and ${}^W C$ have the additional element ("1") simply to allow the transformation to be expressed as this single matrix multiplication. There is no accepted notation to differentiate 4x1 from 3x1 position vectors: the context is sufficient to determine whether the additional "1" should be present.

Example 2-2: Constructing and using a homogeneous transformation matrix. See class notes.
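A minimal Matlab sketch of equations (2.8) and (2.9), assuming {W} is rotated by 30° about the Z axis of {B} with its origin at ${}^B W_o$ = (3, 2, 0):

    th  = 30*pi/180;
    BRW = [cos(th) -sin(th) 0; sin(th) cos(th) 0; 0 0 1];
    BWo = [3; 2; 0];               % origin of {W} expressed in {B}
    BTW = [BRW BWo; 0 0 0 1];      % 4x4 homogeneous transformation, equation (2.8)
    WC  = [1; 1; 0; 1];            % 4x1 position vector in {W}
    BC  = BTW * WC                 % 4x1 position vector in {B}, equation (2.9)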

2.3 Properties of transformation matrices


Using transformation to move vectors


The homogeneous transformation matrix has so far been used to map a vector from one reference frame to another; the actual point to which the vector refers (C in the examples above) does not change position. However, the transformation matrix can also be considered as an operator which moves objects, i.e. rotates and translates them to another position. For example, in Figure 2.6, C1 is moved to C2. So the transformation matrix must calculate ${}^B C_2$ from ${}^B C_1$:

$$ {}^B C_2 = T \, {}^B C_1 \qquad (2.10) $$

To construct T it is convenient to break down the movement into separate rotations and translations. In Figure 2.6 the first part of the movement is a rotation by angle θ about the Z axis. A shorthand notation for the 4x4 transformation matrix which gives this rotation is ROT(Z, θ). Thus the intermediate vector ${}^B C'$ formed by the rotation is given by:

$$ {}^B C' = ROT(Z, \theta) \, {}^B C_1 \qquad (2.11) $$

The second part of the movement is a translation represented by vector D, denoted TRANS($d_x$, $d_y$, $d_z$). Thus:

$$ {}^B C_2 = TRANS(d_x, d_y, d_z) \, {}^B C' \qquad (2.12) $$

Hence, from equation (2.10):

$$ T = TRANS(d_x, d_y, d_z) \, ROT(Z, \theta) \qquad (2.13) $$
The TRANS() and ROT() operators are given by:

$$ TRANS(d_x, d_y, d_z) = \begin{bmatrix} 1 & 0 & 0 & d_x \\ 0 & 1 & 0 & d_y \\ 0 & 0 & 1 & d_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (2.14) $$

$$ ROT(X, \theta) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (2.15) $$

$$ ROT(Y, \theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (2.16) $$

$$ ROT(Z, \theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (2.17) $$

Note that each rotation matrix is for clockwise rotation when looking in the direction of the axis about which rotation is taking place (thus θ is negative for anticlockwise rotation).

In this case equation (2.13) gives:

$$ T = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & d_x \\ \sin\theta & \cos\theta & 0 & d_y \\ 0 & 0 & 1 & d_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (2.18) $$
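The TRANS() and ROT() operators are simple to code; the sketch below (illustrative, using Matlab anonymous functions) builds the operator of equation (2.13) and moves a point as in Figure 2.6:

    ROTZ  = @(th) [cos(th) -sin(th) 0 0; sin(th) cos(th) 0 0; ...
                   0 0 1 0; 0 0 0 1];                             % equation (2.17)
    TRANS = @(dx,dy,dz) [1 0 0 dx; 0 1 0 dy; 0 0 1 dz; 0 0 0 1];  % equation (2.14)
    T   = TRANS(1, 2, 0) * ROTZ(pi/4);   % rotate first, then translate: equation (2.13)
    BC1 = [1; 0; 0; 1];
    BC2 = T * BC1                        % the moved point, equation (2.10)

Note the order of multiplication: the rotation is applied to the vector first, so it appears on the right.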
Using transformation to describe frames
Just as transformation matrices can be used to move objects described by vectors, they can be used to move between
frames in exactly the same way. Furthermore, the transformation required to move from frame {B} to frame {W} can be
used as a description of the position and orientation of {W} relative to {B}. This is exactly the same transformation, ${}^B T_W$, as that needed to map a vector defined in frame {W} to frame {B}.

To understand this, consider Figure 2.7, in which frame {W} is the same as frame {B} rotated about the Z axis and then translated. The matrix ${}^B T_W$ can be used to map ${}^W C_2$ to ${}^B C_2$ (equation 2.9):

$$ {}^B C_2 = {}^B T_W \, {}^W C_2 \qquad (2.19) $$

However, if in the figure the vector ${}^B C_1$ is numerically identical to the vector ${}^W C_2$, we can write:

$$ {}^B C_2 = {}^B T_W \, {}^B C_1 \qquad (2.20) $$

Thus ${}^B T_W$ moves C1 to C2. Since ${}^B C_1$ and ${}^W C_2$ are actually the same vector, just expressed in different frames, this movement must be the movement required to map {B} onto {W}, as postulated above.


Example 2-3: Transformation matrix to describe relative frame positions. See class notes.

Summary of interpretations of transformation

Transformation matrix ${}^Q T_R$ can be used to change the frame in which the position of a point is defined: ${}^Q C = {}^Q T_R \, {}^R C$.
Transformation matrix T can be used to move a point or vector: $C_2 = T \, C_1$.
Transformation matrix ${}^Q T_R$ describes the position and orientation of frame {R} relative to frame {Q}.

Mathematical properties of transformation matrices


Compound transformations
In Figure 2.8 the vector ${}^W C$ may be known, but ${}^E C$ needs to be calculated. If the transformations representing the position and orientation of {W} relative to {B}, and of {B} relative to {E}, are known, the following calculations can be performed:

$$ {}^B C = {}^B T_W \, {}^W C \qquad (2.21) $$

$$ {}^E C = {}^E T_B \, {}^B C \qquad (2.22) $$

or equations (2.21) and (2.22) can be combined to give:

$$ {}^E C = {}^E T_B \, {}^B T_W \, {}^W C \qquad (2.23) $$

Combining the transformations, we can define:

$$ {}^E T_W = {}^E T_B \, {}^B T_W \qquad (2.24) $$
Commutativity
As expected in matrix multiplication, transformations are not commutative:

$$ T_1 T_2 \ne T_2 T_1 \qquad (2.25) $$

Inversion
In Figure 2.8 we may know ${}^B T_E$ rather than the ${}^E T_B$ transformation required for equation (2.23). ${}^E T_B$ is found by matrix inversion: ${}^E T_B = ({}^B T_E)^{-1}$. A useful formula for inversion is (see e.g. [4]): if

$$ T = \begin{bmatrix} R & p \\ 0\;\,0\;\,0 & 1 \end{bmatrix}, \quad \text{then} \quad T^{-1} = \begin{bmatrix} R^T & -R^T p \\ 0\;\,0\;\,0 & 1 \end{bmatrix} \qquad (2.26) $$

Example 2-4: Inverting a transformation matrix. See class notes.
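Equation (2.26) is easily verified numerically; a short Matlab sketch:

    R = [cos(pi/3) -sin(pi/3) 0; sin(pi/3) cos(pi/3) 0; 0 0 1];
    p = [1; 2; 3];
    T     = [R p; 0 0 0 1];
    Tinv  = [R' -R'*p; 0 0 0 1];   % equation (2.26)
    check = T * Tinv               % should return the 4x4 identity matrix

This form avoids a general 4x4 matrix inversion, exploiting the fact that the inverse of a rotation matrix is its transpose.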


2.4 Forward kinematics


Problem definition and approach to solution
Forward kinematics is the calculation of end-effector position and orientation relative to a fixed co-ordinate frame, such as one attached to the robot's base. In Figure 2.9 no end-effector is connected to the manipulator, so the goal is to derive the position and orientation of the flange (link 6). We can define the position of a link by attaching a frame to it, so there are a total of seven frames in Figure 2.9 (only the first four are shown). The transformation ${}^0 T_6$ is the solution to the forward kinematics problem, and is given by:

$$ {}^0 T_6 = {}^0 T_1 \, {}^1 T_2 \, {}^2 T_3 \, {}^3 T_4 \, {}^4 T_5 \, {}^5 T_6 \qquad (2.27) $$

Each individual transformation matrix is a function of one joint angle, so that ${}^0 T_6$ is a function of all the robot joint angles.


Denavit-Hartenberg notation
Although the method outlined above is perfectly adequate, it requires a lot of analysis for each new robot considered.
Hence a convention called the Denavit-Hartenberg notation is usually adopted, leading to a more systematic procedure.
Firstly it should be noted that the actual shape of any link is not important; its only purpose is to locate one joint relative
to another. Hence we can draw a manipulator as a collection of joint axes (Figure 2.10). Between each pair of adjacent
axes we can always draw a line which is perpendicular to both axes (Figure 2.11); this mutual perpendicular is unique
except when the axes are parallel, in which case it can be placed at the user's discretion. When the axes intersect, the
perpendicular is at the intersection point and of zero length.

Rules for fixing frames to links (e.g. link 2 in Figure 2.11):


Z2 points along joint 3 axis (in either direction)
X2 points along link 2 perpendicular, away from joint 2.
Y2 completes the Cartesian co-ordinate frame.
These rules can be extrapolated to the other links. However they are not entirely applicable to the first and last links:
Link 0: Z0 should point along joint 1 axis, but otherwise the frame position can be freely chosen.
Link 6 (or link n where n is the number of joints): X6 should be perpendicular to the joint 6 axis, but otherwise
the frame position can be freely chosen.
Using these standard frames, four parameters can be defined which uniquely specify the link and joint geometry. Each parameter for link i can be thought of as a successive movement required to map frame {i-1} to frame {i}:
1. Link offset $D_i$: displacement along the $Z_{i-1}$ axis to go from the link i-1 perpendicular to the link i perpendicular.
2. Joint angle $\theta_i$: rotation about $Z_{i-1}$ required to align $X_{i-1}$ with $X_i$ (positive rotation is clockwise looking in the direction of $Z_{i-1}$).
3. Link length $A_i$: the length of the perpendicular, i.e. the displacement required in the $X_i$ direction to bring the origin of frame {i-1} coincident with that of frame {i}.
4. Link twist $\alpha_i$: the rotation required about $X_i$ to make $Z_{i-1}$ coincident with $Z_i$ (positive rotation is clockwise looking in the direction of $X_i$).
Figure 2.12 summarises these parameter definitions.
If joint i is revolute then the joint angle $\theta_i$ is a variable. However, if the joint is prismatic, the link offset $D_i$ is the variable.
If joint i is revolute then the joint angle q i is a variable. However if the joint is prismatic, the link offset D i is the variable.


Example 2-5: Determining link parameters for Puma 560. See class notes, Figures 2.13 and 2.14 and Table 2.1

Table 2.1 Puma 560 link parameters

Link i    α_i    A_i    θ_i    D_i


Forward kinematics calculation using standard link parameters

From the description of the link parameters as translations and rotations which map one link frame onto the next, and using the intermediate frame {P} shown in Figure 2.12, it can be seen that:

$$ {}^{i-1} T_i = TRANS(0, 0, D_i) \, ROT(Z, \theta_i) \, TRANS(A_i, 0, 0) \, ROT(X, \alpha_i) $$

giving

$$ {}^{i-1} T_i = \begin{bmatrix} C\theta_i & -S\theta_i C\alpha_i & S\theta_i S\alpha_i & A_i C\theta_i \\ S\theta_i & C\theta_i C\alpha_i & -C\theta_i S\alpha_i & A_i S\theta_i \\ 0 & S\alpha_i & C\alpha_i & D_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (2.28) $$

where C represents cos and S represents sin. Thus, once the link parameters have been established for a manipulator, it is a straightforward procedure to apply equation (2.28) to find each link transformation, and (2.27) to solve the forward kinematics.

Example 2-6: Determining forward kinematics for Puma 560. See class notes.
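Equation (2.28) maps directly onto a short Matlab function. The sketch below is illustrative only (the link values are assumed, not the Puma data): it chains the link transformations as in equation (2.27) for a two-link planar arm:

    % Link transformation from the standard link parameters, equation (2.28)
    linkT = @(alpha, A, theta, D) ...
        [cos(theta) -sin(theta)*cos(alpha)  sin(theta)*sin(alpha) A*cos(theta);
         sin(theta)  cos(theta)*cos(alpha) -cos(theta)*sin(alpha) A*sin(theta);
         0           sin(alpha)             cos(alpha)            D;
         0           0                      0                     1];

    L1 = 0.4; L2 = 0.3;                 % assumed link lengths (m)
    th = [30 45]*pi/180;                % joint angles
    T02 = linkT(0, L1, th(1), 0) * linkT(0, L2, th(2), 0)   % equation (2.27), n = 2

The (1,4) and (2,4) elements of T02 give the familiar planar result x = L1 cos θ1 + L2 cos(θ1 + θ2), y = L1 sin θ1 + L2 sin(θ1 + θ2).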

2.5 Matlab and the robotics toolbox


Introduction
Matlab is a general purpose mathematical analysis and simulation software package, with a bias towards engineering
dynamics and control applications. Toolboxes of commands are available for performing specific tasks; one of these is
the Robotics Toolbox. This contains tools for analysing manipulator kinematics, trajectory planning, and dynamics and
control. The use of Matlab for engineering analysis is widespread in both industry and academia.
Running Matlab
1. Login to a PC attached to the Novell Network (UCS).
2. Enter Windows.
3. Select Departmental Software icon from desktop.
4. Select Mechanical engineering
5. Scroll down and select Matlab. Matlab will take a few seconds to start up.
You will now see the Matlab prompt (>>).


When you want to leave Matlab, type exit at the prompt.

Commands and variables


Commands may be typed at the Matlab prompt. On-line help about a particular command can be found by typing help
followed by the command name. Lists of commands can be accessed from the Help menu.
Also variables may be entered at the prompt. A matlab variable can be a scalar or a matrix.

e.g. to enter the 2x2 matrix [1 2; 3 4], type:

    T = [1 2; 3 4]

Note that the numbers in a row can be separated either by spaces or commas. Typing T on its own will now echo the contents of variable T (note that Matlab is case-sensitive, so typing t will not work).
Examples of a few useful commands are given below:
    plot(y)    a general plotting command; this example plots the data in vector y
    who        lists all the variable names which currently exist
    clear      clears all variables
More information
Typing demo at the Matlab prompt gives access to demonstrations showing the capabilities of Matlab. Clicking on Visit
under the Matlab heading and entering the intro demo under Matrices gives a tutorial on matrix manipulation in Matlab.
An introduction to the Robotics Toolbox is given in a paper attached to this handout. Typing rtdemo gives demonstrations which tie in with the examples in this paper; only some of the analytical techniques have been covered so far in this course. (Note: the Puma 560 link parameters accessed by the puma560 command are for the same frame definitions as used in Examples 2-5 and 2-6. The parameter values are in radians and metres.)
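A typical toolbox session might then look like the sketch below, using only the commands named in this handout (the exact variable names created by puma560, such as p560 here, depend on the toolbox version installed):

    puma560                    % load the Puma 560 link parameters
    q = [0 0 0 0 0 0];         % a joint-angle pose, radians
    T = fkine(p560, q)         % forward kinematics: the 4x4 transform 0T6
    qi = ikine(p560, T)        % numerical inverse kinematics (one solution)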
2.6 Inverse kinematics
The inverse kinematic solution is the calculation of the joint variables from the end-effector position and orientation. This is particularly important in practice, for example to allow the manipulator to pick up a component at a known position: the end-effector can only be moved to that position if the equivalent joint variables can be calculated. For a manipulator with n joints, the desired end-effector position would be specified by the transformation matrix ${}^0 T_n$. The forward kinematic solution expresses this matrix as a function of the joint variables, e.g. ${}^0 T_n(\theta_1, \theta_2, \ldots, \theta_n)$ where all the joints are revolute, so that all the joint angles are variables. Thus the following equation must be solved:

$$ {}^0 T_n(\theta_1, \theta_2, \ldots, \theta_n) = \begin{bmatrix} r_{11} & r_{12} & r_{13} & p_x \\ r_{21} & r_{22} & r_{23} & p_y \\ r_{31} & r_{32} & r_{33} & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (2.29) $$

This gives 12 equations in n unknowns. However, the nine elements that form the rotation part of the matrix are dependent: there are only three independent rotation equations. Thus there are six independent equations, which can be solved if n = 6. However, the simultaneous solution of these non-linear equations is not in general possible analytically (i.e. there is no general closed-form solution). Hence there are two solution approaches:


Numerical solution: an iterative approach which can be very time-consuming. However it is applicable in all
cases (for n=6). There are various numerical solution techniques available; they will not be discussed in this
course. The ikine() command in the Robotics Toolbox uses a numerical approach to solve the inverse kinematics
problem.
Specific closed-form solution: in many specific manipulator configurations, simplifications can be made which
allow a closed-form solution to be found. The solution method adopted depends on the configuration.
For any solution method, there are two potential problems:
No solution. A manipulator has only a limited workspace. If ${}^0 T_n$ is beyond the reach of the robot, no solution will be found. (Note that even if a solution is found, the practical limitations on the range of rotation of the joints may make the position impossible to achieve.) If n < 6, then the workspace is restricted to a subset of normal three-dimensional movement: e.g. a planar manipulator cannot be asked to move outside its plane.
Multiple solutions. For many end-effector positions there are several manipulator poses which will achieve that position (see Figure 2.15). In a redundant manipulator, i.e. n > 6, there is always a range of solutions. Numerical solvers such as ikine() only return one solution, dependent on the starting values for the iteration.

Example 2-7: A specific closed-form inverse kinematic solution. See class notes.
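For a two-link planar arm the closed-form approach can be written out in a few lines; a Matlab sketch with assumed link lengths (the elbow-down branch is shown; the elbow-up solution is obtained by taking the negative square root):

    L1 = 0.4; L2 = 0.3;                  % assumed link lengths (m)
    x = 0.5; y = 0.2;                    % desired end-effector position
    c2 = (x^2 + y^2 - L1^2 - L2^2) / (2*L1*L2);   % cosine rule at the elbow
    s2 = sqrt(1 - c2^2);                 % use -sqrt(...) for the other pose
    th2 = atan2(s2, c2);
    th1 = atan2(y, x) - atan2(L2*s2, L1 + L2*c2);
    % check by forward kinematics:
    xc = L1*cos(th1) + L2*cos(th1+th2)
    yc = L1*sin(th1) + L2*sin(th1+th2)

If c2 falls outside [-1, 1] the target is beyond the workspace and there is no solution, illustrating the first potential problem above.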


3. DESIGN
Certain aspects of robot manipulator design are briefly reviewed in this section. These aspects are:
selection and characteristics of actuators
selection and characteristics of sensors
end effector (gripper) design
Also information is provided on a PUMA 500 series robot as a case study in the mechanical arrangement of a
manipulator and its performance specification.

3.1 Actuators
Any industrial robot will use either an electric, hydraulic or pneumatic drive system:
Electrically actuated robots are almost all driven by DC motors. These robots tend not to be as powerful as hydraulic robots, i.e. they move more slowly and exert lower forces, but they exhibit good accuracy and repeatability. For very low power applications, stepper motors can be used.
Hydraulically actuated robots have the advantage of mechanical simplicity (few moving parts), as well as
physical strength and high speed.
Pneumatic drive systems are normally reserved for small, limited sequence pick and place applications. Lack of
stiffness (air compressibility) and control problems associated with stiction prevent their use if good accuracy is
required.

Electric actuators
D.C. Motors
The principal variation among different types of DC motor lies in the mechanism used to develop the magnetic field. In a permanent magnet DC motor the field is developed, as the name suggests, by permanent magnets. In such a motor the torque T is related to the armature (rotor) current $I_a$ by:

$$ T = K_B I_a \qquad (3.1) $$

where $K_B$ is a constant. A current amplifier is often used to drive the motor so that motor torque can be controlled directly.

The magnetic field can also be generated by an electromagnet. This is most common in larger motors (more than a few kilowatts). In electromagnet motors the torque is given by:

$$ T = K_f I_f I_a \qquad (3.2) $$

where $I_f$ is the current in the field windings (stator) and $K_f$ is a constant (Figure 3.1). In many cases the field current is derived from the same source as the armature current. Figure 3.2 shows the two main ways to accomplish this: shunt-wound and series-wound motors.


Figure 3.1 Common configuration for large motors

Figure 3.2 Shunt-wound and series-wound motors

Stepper motors
A stepper motor can change its position to any one of a number of known angles. Hence it is a digital rather than
analogue actuator, and is well suited to digital (i.e. computer) control. However stepper motors have limited power
output so are only used for light duty robotic applications.
The most common type is the permanent magnet stepper motor which has a rotor consisting of several permanent
magnets and a stator containing four windings, as shown in Figure 3.3a.

The rotor would be held in the position shown if $V_A$ is positive and $V_B$ is zero. The electromagnet formed by stator winding A attracts north pole 1 on the rotor, while winding A' repels north poles 3 and 4. If $V_A$ is now switched to zero and $V_B$ becomes positive, the rotor will rotate by one step anticlockwise, so that pole N5 is directly opposite winding B. As there are 90° between each winding and 360°/5 = 72° between each rotor pole, one step is 90° - 72° = 18°. By switching the winding voltages in the correct sequence, the rotor will continue to step around 18° at a time, as shown in Figure 3.3b.


Figure 3.3a Stepper motor construction

Figure 3.3b Stepper motor operation.

For the motor shown there are 20 steps per revolution. With more rotor poles there would be more steps per revolution, allowing finer control over angular position. A typical value for a good stepper motor would be 200 steps per revolution (i.e. each step is 1.8°).
Position control of a stepper motor. Unlike other motors, the position of a stepper motor can be controlled without using
a position sensor. As long as the starting position is known, the motor can be stepped around to any of its possible
positions. A stepper motor controller is usually used to switch the currents in the windings appropriately, and this
controller can be driven by a computer using two digital signals: a direction signal (e.g. 0V for clockwise, 5V for
anticlockwise), and a step signal (a series of pulses, each pulse causing the motor to move by one step).

Hydraulic Actuators
Hydraulic systems make use of a virtually incompressible fluid, usually oil, which is forced under high pressure into a
cylinder. The cylinder contains a piston which moves in response to the pressure of the fluid. Both rotary and telescopic
(prismatic) actuators are available and are widely used for high power robot applications.
High-pressure fluid (typically at 100 bar to 300 bar) is supplied by a hydraulic power supply, which consists of a pump, a relief valve to regulate the pressure, and an accumulator to iron out pressure ripples. The hydraulic actuation system itself consists of a cylinder, and a valve to control the direction and rate of flow.


The diagram below shows a hydraulic cylinder controlled by a spool valve. The horizontal position of the spool (x) can
be changed (e.g. by a solenoid) to direct flow into either end of the cylinder.

Figure 3.4 Hydraulic actuator

The fundamental equation which describes the valve characteristic is called the orifice equation:

$$ Q = K_V \, x \sqrt{\Delta P} \qquad (3.3) $$

where Q is the flowrate through the valve, x is the spool displacement, and ΔP is the pressure drop across the valve in the direction of flow. $K_V$ is a constant for the valve.

Example 3-1. Characteristics of the hydraulic actuator of Figure 3.4 (for positive x). See class notes


Table 3.1 summarises the notation for this example.

Table 3.1 Example 3-1: summary of notation

Symbol      Meaning                                       Units used in calculation
Ps          Supply pressure                               N/m²
P1, P2      Cylinder pressures                            N/m²
Q1, Q2      Volume flowrates                              m³/s
x           Valve spool displacement                      m
            Piston velocity                               m/s
A1, A2      Piston full and annular areas respectively    m²
            External force on piston rod                  N
Kv          Valve flow constant
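As a numerical illustration of equation (3.3) alone (not the full Example 3-1 analysis), the Matlab sketch below estimates the piston extend velocity for a given spool opening, assuming the whole supply pressure is dropped across the valve and the return side is neglected; all values are assumed:

    Ps = 200e5;                % supply pressure, N/m^2 (200 bar)
    Kv = 2e-4;                 % valve flow constant (assumed value and units)
    x  = 0.5e-3;               % spool displacement, m
    A1 = 2e-3;                 % piston full area, m^2
    Q1 = Kv * x * sqrt(Ps);    % orifice equation (3.3), taking dP = Ps
    v  = Q1 / A1               % piston velocity, m/s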

Pneumatic Actuators
In a pneumatic actuator, a compressible fluid, air, is used to drive a piston. As in the case of hydraulic actuators, an
electrical signal controls a valve which, in turn, controls the flow to the cylinder.
In the simplest pneumatic control system, a solenoid-operated on-off valve directs air at maximum flowrate to the
cylinder. To return the piston, this supply valve is closed and an exhaust valve opened. The piston is returned by a spring,
or, if a double-acting cylinder is used, a constant pressure on the other side will cause a return. For each movement the
piston is only halted when it reaches a mechanical end-stop. This is called a bang-bang control scheme.
Such a simple control method is ideal for grippers; the pneumatic piston supply closes the gripper until the gripping force
equals the piston force.
In addition to their use in grippers, pneumatic actuators are often used in simple robots. A totally pneumatic robot can be
sequenced through a complex series of operations by a simple controller which opens and closes valves in sequence.
Such robots have mechanical end-stops which are adjusted to suit the particular application, and they are often used as
pick-and-place robots to move components between two known positions. Advantages include:
High speed and relatively high power-to-weight ratio (however, due to sealing problems, working pressures are limited to approximately 10 bar, giving much lower forces than hydraulics).
Low cost.
Simplicity of control.
No contamination of the work space by oil leaks, and relatively little noise.
Unlike hydraulic oil, air is highly compressible. Hence pneumatic actuators are not very stiff, so that maintaining a
constant position under varying loads is difficult. Also seal friction hinders high precision position control. Hence
pneumatics are not normally used for servo-controlled robots.


3.2 Internal state sensors


Robot sensors may be divided into two principal categories according to function:
internal state sensors detect variables such as arm joint position which are important for basic robot control.
external state sensors measure environmental parameters, and detect how the robot is interacting with its
environment.
Joint position sensors are by far the most important internal state sensors. Rotary or linear position sensors can be used as
appropriate for revolute and prismatic joints respectively. Three common position sensor types are described below; all
are available in both rotary and linear configurations.
Potentiometer
A potentiometer measures position via the voltage at a wiper which can slide along a resistive element. The resistive element can be:
a wirewound element;
a conductive plastic film;
a conductive plastic film over a wirewound element (a hybrid potentiometer).
Wirewound potentiometers are the cheapest but least accurate (their resolution is limited by how closely packed the wire coils can be).
Linear Variable Differential Transformer (LVDT)
In an LVDT the displacement of an iron core is measured by the alteration of inductance between a primary winding and
two secondary windings. The primary winding carries an alternating current of constant amplitude. With the iron core in
the mid-position, the induced voltages in the two secondary windings are equal, but the output voltage is zero as it is the
difference between these two voltages. With the iron core displaced, one secondary winding has a greater induced
voltage than the other, and so the output voltage is non-zero. A demodulator converts the alternating output voltage into a
constant (DC) voltage proportional to its amplitude.
Incremental Optical Encoder
An incremental optical encoder is a digital sensor, designed to measure either rotary or linear displacement. A rotary
encoder contains a graduated disc as shown below. The lines on the disc interrupt the passage of light from a light-emitting diode (LED) to two photo-detectors, allowing the lines to be counted. Often a third photo-detector gives a single pulse per revolution as a marker.

Figure 3.5 A rotary incremental optical encoder.


The two main detectors are offset by half a line's width from one another. Channel A goes high (i.e. photo-detector A is exposed to the light) just before channel B if the disc is rotating clockwise; for anticlockwise rotation, channel B goes high first. Thus the direction of rotation can be determined. The rising and falling edges on both channels A and B are normally converted into pulses which are then counted, so the number of increments per revolution is four times the number of lines on the disc.
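For example, the angular resolution of such an encoder follows directly (a small illustrative calculation with an assumed line count):

    N = 1000;                            % lines on the disc (assumed)
    counts_per_rev = 4 * N;              % edges on both channels are counted
    resolution = 360 / counts_per_rev    % degrees per count: 0.09 here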

3.3 External state sensors


The use of external sensing mechanisms allows a robot to behave in a responsive manner. This is in contrast to preprogrammed operations in which a robot is simply taught to perform repetitive tasks using internal sensors only.
Although the latter is the predominant form of operation for current industrial robots, the use of sensing technology does
endow machines with a greater degree of intelligence for dealing with the unexpected.
Non-contact sensors are used to determine the position of objects in the robot's environment, such as components which are to be picked up with a gripper:
Proximity sensors are simple devices which can be used to determine whether a component is in its expected position. Depending on type, they can detect objects at distances of between 1 and 100 mm, and often just give a binary on/off output. Optical, magnetic, acoustic and capacitive devices are common.
Range finding: range finders determine the distance to an object. For example ultrasonic time-of-flight sensors
determine distance by measuring time delay in reflected waves.
Vision systems: the most sophisticated type of non-contact sensor. To be discussed later on in this module.
Contact sensors can be used in robot grippers to measure gripping forces:
Touch (or force) sensors measure point forces. They can be simply binary switches which detect whether a force
is present, or they can give high accuracy force measurement using (for example) strain gauges. The latter is
essential for gripping delicate objects.
Tactile sensors measure the spatial distribution of force, as well as its magnitude. A matrix of force sensing
elements is often used. Tactile sensors can be used to help identify the object being gripped, or its orientation, or
how securely the object is being held.
Slip sensors measure dynamic force variation. Tactile sensors can double as slip sensors; alternatively
movement detection devices can be used.


ROBOTICS COMPUTER AIDED LEARNING SOFTWARE


Computer-aided learning (CAL) software is available which covers various aspects of robotics, particularly relating to
robot design.
To run the software:
1. Login to a PC attached to the University network, and call up the Windows desktop.
2. Select the Departmental Software icon from the Windows Applications window.
3. Scroll down and select Mechanical engineering
4. Scroll down and select CALGroup.
5. After a short delay, a list of subject areas is displayed. Choose Manufacturing - management and robotics
6. A list of modules is then displayed. Highlight Robotics, and then click on the Run Module button.
Everything in the CALGroup Robotics module is relevant to this course, and should be considered as core material.
Chapters 3 and 4 (End-effectors and Sensors) are particularly relevant at this stage.

3.4 End-effectors
Some applications dictate that the end-effector is a specific tool, such as a welding torch, riveting gun, or paint sprayer.
However, many applications require the manipulator to pick up objects, so the end-effector has to be some sort of gripper
or robot hand. Two-fingered grippers like those shown in Figure 3.6 are common. Even for simple objects, gripper
design has to be carefully considered so that small misalignments between object and gripper are accommodated. Gripper
design becomes much more complex if a wide range of objects has to be handled by the same device. Anthropomorphic
(human-like) dexterous robot hands, which have several multi-jointed fingers, have been researched extensively over the
last 20 years to approach this problem, but a reliable control strategy has yet to be developed.
Some manipulators must wield a range of different tools during their working day, in which case a tool change system is
used. A tool change unit is a device which holds the various tools, and allows the manipulator to automatically engage
any required tool, and deposit it back in the unit when finished. The tools have a common mechanical interface for
mating with the manipulator. This system also allows a manipulator to use different grippers, thus greatly increasing the
range of objects which can be handled.
Gripper design issues
There are three main factors to be considered in gripper design:
Workpiece related features
1. Geometric form or shape of item to be moved or assembled.
2. Mass of item, position of centre of gravity and moment of inertia.
3. Machined or rough surfaces - errors in positioning.
4. Type of material - hard or soft.
5. Changes in shape between loading and unloading.
Robot related features
1. Acceleration of robot arm - relevant to gripping forces.
2. Load limitations - actuator torques and flexure of arm.
3. Gripper is part of pay load - mass an important consideration.
4. Operating time of gripper is part of overall cycle time.
Workplace related features
1. Obstructions to be negotiated
2. Approach direction.
3. Environment - hot, corrosive, explosive, radioactive or underwater.

4. Position and orientation of workpiece.
5. Services available - compressed air etc.
Gripper designs in use
Gripper configurations and concepts currently in use are shown in Figure 3.7:
1,2 - external or internal surfaces used for gripping
3,4,5 - parallel, quasi-parallel and scissor jaws
6 - compensating linkage - x maintained constant.
7,8 - conformity with surface of component.
9,10 - parallel jaws (plane and line contact)
11,12 - illustrating importance of component shape and friction properties
13,14 - V location.
15,16 - design dictated by approach to component.
17,18,19,20 - compliance for the handling of delicate objects.
21 - collet for internal and external gripping.
22,23,24 - inflatable
25,26 - expanding for gripping large objects by internal surfaces
27,28 - vacuum (sealing time required)
29, 30 - vacuum plus conformance to surface irregularities.
3.5 Mechanical arrangement and specification: PUMA 500 series
The following extract from the Unimate PUMA 500 series equipment manual serves as a case study in the mechanical
arrangement of a typical industrial manipulator, and gives specifications for this particular well known arm.


4. DYNAMICS AND CONTROL


Manipulator dynamics is concerned with the relationship between the forces / torques acting on a manipulator and its
resulting motion. In this section we will be chiefly considering jointed manipulators, i.e. with all revolute joints, so that
the controlling inputs to the manipulator are in the form of torques. There are two main problems:
Inverse dynamics: this is the calculation of joint torques from joint positions, velocities and accelerations. This
calculation is useful in robot control.
Forward dynamics: this is the calculation of joint accelerations from joint positions, velocities and torques. This
calculation is used for the dynamic simulation of manipulators.
4.1 Inverse dynamics
The process of calculating joint torques from manipulator motion can be performed using the Recursive Newton-Euler
method. It is recursive as a set of calculations is performed for each link in turn, from link 1 to link n (the outward
recursions), and then another set is performed from link n to link 1 (the inward recursions). Each outward recursion
computes the resultant force and moment that must be acting on a link given the motion it is undergoing; each inward
recursion computes the actual joint torque. The calculation procedure is as follows:
Outward recursions
For link i = 1 to n, calculate:
1. Angular velocity of link ($\omega_i$)
2. Angular acceleration of link ($\dot\omega_i$)
3. Linear acceleration of link at frame origin (${}^i\dot v_i$)
4. Linear acceleration of link at centroid (${}^i\dot v_{ci}$)
5. Resultant force acting on link at centroid (${}^i F_i$)
6. Resultant moment acting on link around centroid ($N_i$)
Inward recursions
For link i = n to 1, calculate:
1. Force exerted on link i by link i-1 (${}^i f_i$)
2. Torque exerted on link i by link i-1 ($t_i$)

In the general case, all the variables calculated are 3x1 vectors. In the case of a planar manipulator the linear motions and
forces are 2x1 vectors and the angular motions and torques are scalars.

Jointed planar manipulators


The inverse dynamics calculation using the Recursive Newton-Euler method is relatively straightforward for planar manipulators with any number of links. Manipulators with a combination of revolute and prismatic joints can be tackled, but in these notes we will concentrate on jointed manipulators. As an example, a two-link jointed planar manipulator is shown in Figure 4.1.


Example 4-1 Recursive Newton-Euler inverse dynamics solution for two-link jointed planar manipulator (zero gravity).
See class notes
The general Recursive Newton-Euler formulation for an n-link planar jointed manipulator is given below. It is assumed that each link centroid lies on the X axis of the link frame (i.e. its Y component is zero), at distance $c_i$ from joint i, with the frame {i} origin placed at joint i.
Outward recursions
For link i = 1 to n, calculate:
1. Angular velocity of link:

$$ \omega_i = \omega_{i-1} + \dot\theta_i \qquad (4.1) $$

2. Angular acceleration of link:

$$ \dot\omega_i = \dot\omega_{i-1} + \ddot\theta_i \qquad (4.2) $$

3. Linear acceleration of link at frame origin (with $L_0 = 0$ for the first link):

$$ {}^i\dot v_i = {}^iR_{i-1}\left( {}^{i-1}\dot v_{i-1} + \begin{bmatrix} -\omega_{i-1}^2 L_{i-1} \\ \dot\omega_{i-1} L_{i-1} \end{bmatrix} \right) \qquad (4.3) $$

4. Linear acceleration of link at centroid:

$$ {}^i\dot v_{ci} = {}^i\dot v_i + \begin{bmatrix} -\omega_i^2 c_i \\ \dot\omega_i c_i \end{bmatrix} \qquad (4.4) $$

5. Resultant force acting on link at centroid:

$$ {}^iF_i = m_i \, {}^i\dot v_{ci} \qquad (4.5) $$

6. Resultant moment acting on link around centroid:

$$ N_i = I_i \dot\omega_i \qquad (4.6) $$

Inward recursions
For link i = n to 1, calculate:
1. Force exerted on link i by link i-1:

$$ {}^if_i = {}^iR_{i+1}\,{}^{i+1}f_{i+1} + {}^iF_i \qquad (4.7) $$

2. Torque exerted on link i by link i-1:

$$ t_i = t_{i+1} + N_i + c_i \left({}^iF_i\right)_y + L_i \left({}^iR_{i+1}\,{}^{i+1}f_{i+1}\right)_y \qquad (4.8) $$

Summary of notation
Note: all linear and angular velocities and accelerations are measured relative to a fixed frame (such a frame is often also known as an earth or world frame). However, linear velocities and accelerations, even though measured relative to a fixed frame, can be expressed in any frame like any other vector. In the planar case, angular velocities and accelerations are scalars, so this issue does not arise for angular motion.
Linear accelerations/forces: where Q represents a linear quantity, ${}^jQ_k$ is that quantity for link k expressed in the frame {j} co-ordinate system. A further subscript (x or y) indicates a particular scalar component of ${}^jQ_k$ (as used in equation 4.8).
Angular velocities/accelerations/torques: where Q represents an angular quantity, $Q_k$ is that quantity for link k.
Links are defined by their dimensions (length $L_i$ and centroid position $c_i$) and inertial properties (mass $m_i$ and mass moment of inertia $I_i$).
Matrices ${}^iR_j$ are rotation matrices as defined previously, except that only 2x2 matrices are required in the planar case.

External forces exerted on the manipulator

The Recursive Newton-Euler method can also be used to calculate joint torques where external forces act on the manipulator, including gravity.
External forces or moments exerted by the manipulator at/around a point on a link can simply be added to the force or moment equations (4.7) and (4.8) during the inward recursions.
A particularly common special case is end-effector force and torque, which are represented by ${}^{n+1}f_{n+1}$ and $t_{n+1}$ in equations (4.7) and (4.8) (these would be defined as the force/torque exerted by the manipulator, not on the manipulator).
There is an easy way to include gravity: say that link 0 is accelerating vertically upwards at 1 g. Thus, assuming $Y_0$ points vertically up, the following should be used in equation (4.3):

$$ {}^0\dot v_0 = \begin{bmatrix} 0 \\ g \end{bmatrix} \qquad (4.9) $$

Example 4-2 Joint torques for two-link jointed planar manipulator with gravity and external end-effector force. See class
notes
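Equations (4.1) to (4.8) translate directly into a short Matlab function. The sketch below is illustrative only: it assumes the frame conventions stated above (frame {i} with origin at joint i, c(i) measured from joint i), includes gravity via equation (4.9), and omits external end-effector loads:

    function tau = rne_planar(th, thd, thdd, L, c, m, I, g)
    % Recursive Newton-Euler inverse dynamics, n-link jointed planar arm
    n = length(th);
    w = 0; wd = 0; vd = [0; g];           % base conditions; [0; g] is (4.9)
    F = zeros(2, n); N = zeros(1, n);
    for i = 1:n                           % outward recursions
        R = [cos(th(i)) -sin(th(i)); sin(th(i)) cos(th(i))]; % {i} w.r.t. {i-1}
        if i == 1, Lp = 0; else Lp = L(i-1); end
        vd = R' * (vd + [-w^2*Lp; wd*Lp]);    % (4.3): acceleration at joint i
        w  = w  + thd(i);                     % (4.1)
        wd = wd + thdd(i);                    % (4.2)
        vc = vd + [-w^2*c(i); wd*c(i)];       % (4.4): acceleration at centroid
        F(:, i) = m(i) * vc;                  % (4.5)
        N(i)    = I(i) * wd;                  % (4.6)
    end
    f = [0; 0]; t = 0; tau = zeros(n, 1);
    for i = n:-1:1                        % inward recursions
        if i < n
            R = [cos(th(i+1)) -sin(th(i+1)); sin(th(i+1)) cos(th(i+1))];
            f = R * f;                    % express f_{i+1} in frame {i}
        end
        t = t + N(i) + c(i)*F(2, i) + L(i)*f(2);  % (4.8)
        f = f + F(:, i);                          % (4.7)
        tau(i) = t;
    end

A call for a two-link arm might look like tau = rne_planar([0.1 0.2], [0 0], [1 0], [0.4 0.3], [0.2 0.15], [2 1], [0.05 0.02], 9.81), with all the link data assumed.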


Inverse dynamics for general (non-planar) manipulators


The Recursive Newton-Euler method is also applicable to the case of full three-dimensional dynamics. The same steps are used, but interactions between motion in different planes result in added equation complexity. The full equations of motion will not be derived here; however, they are implemented in the Matlab Robotics Toolbox command rne(), which is used as follows:
tau = rne(dyn, q, qd, qdd)
This calculates tau as a vector of the n joint torques, where q, qd and qdd are vectors of the n joint angles, joint velocities and joint accelerations respectively (in rad, rad/s and rad/s², assuming revolute joints). The n×20 matrix dyn contains all the manipulator parameters required for the calculation. Each row contains the parameters for one link, in the following order:
1 α link twist angle
2 A link length
3 θ link rotation angle
4 D link offset distance
5 σ joint type: 0 for revolute, non-zero for prismatic
6 m mass of the link
7 rx position of link centroid with respect to the link co-ordinate frame
8 ry
9 rz
10 Ixx mass moments of inertia about link centroid
11 Iyy
12 Izz
13 Ixy mass products of inertia about link centroid
14 Iyz
15 Ixz
16 Jm motor armature inertia
17 G reduction gear ratio: actuator speed/link speed
18 B viscous friction at motor
19 Tc+ coulomb friction (positive rotation) at motor
20 Tc- coulomb friction (negative rotation) at motor
To summarise the parameters:
The first five values relate to kinematic link parameters; thus the matrix can be used in kinematic analysis
commands like fkine(), in which case all but the first 5 columns are ignored.


Values 6 to 15 are obligatory inertial parameters. Note that the mass products of inertia are terms which only
appear in three dimensional rigid body dynamics.
Values 16 to 20 are optional actuator characteristics.
Note that rne() assumes gravity is present, acting in the negative Z0 direction. Entering help rne at the Matlab prompt will
give more information; help dyn will provide a reminder of the dyn matrix format.
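For instance, a dyn matrix for a two-link planar arm might be assembled as below (all link data assumed; the unused actuator columns are left at zero, and the centroid x position is taken as negative on the assumption that the link frame origin lies at the outer end of the link, per the conventions of Section 2.4):

    % columns: alpha A theta D sigma  m  rx ry rz  Ixx Iyy Izz Ixy Iyz Ixz  Jm G B Tc+ Tc-
    dyn = [0 0.4 0 0 0  2.0 -0.20 0 0  0 0 0.05 0 0 0  0 1 0 0 0;
           0 0.3 0 0 0  1.0 -0.15 0 0  0 0 0.02 0 0 0  0 1 0 0 0];
    q  = [0.1 0.2];  qd = [0 0];  qdd = [1 0];
    tau = rne(dyn, q, qd, qdd)     % joint torques, gravity included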

4.2 Forward dynamics


The Recursive Newton-Euler method is a convenient form of solution for the inverse dynamics problem. However, it would also be possible to combine all the equations (4.1) to (4.8) for each link together symbolically, to give one rather lengthy equation of the form:

$$ \boldsymbol{t} = f(\boldsymbol{\theta}, \dot{\boldsymbol{\theta}}, \ddot{\boldsymbol{\theta}}) \qquad (4.10) $$

The torque, position, velocity and acceleration variables in this equation are each a vector with n elements. In general it is found to be possible to rearrange equation (4.10) to calculate the joint accelerations from position, velocity and torque:

$$ \ddot{\boldsymbol{\theta}} = f'(\boldsymbol{\theta}, \dot{\boldsymbol{\theta}}, \boldsymbol{t}) \qquad (4.11) $$

Equation (4.11) represents the forward dynamics calculation. This calculation is important in the dynamic simulation of manipulators: equation (4.11) has to be integrated numerically to calculate the joint position, velocity and acceleration histories corresponding to specified joint torques. A very simple numerical integration method is given by:

$$ \begin{aligned} \ddot{\boldsymbol{\theta}}(t) &= f'(\boldsymbol{\theta}(t), \dot{\boldsymbol{\theta}}(t), \boldsymbol{t}(t)) \\ \boldsymbol{\theta}(t+\Delta t) &= \boldsymbol{\theta}(t) + \dot{\boldsymbol{\theta}}(t)\,\Delta t \\ \dot{\boldsymbol{\theta}}(t+\Delta t) &= \dot{\boldsymbol{\theta}}(t) + \ddot{\boldsymbol{\theta}}(t)\,\Delta t \end{aligned} \qquad (4.12) $$

This set of equations has to be repeated for as many time steps Δt as necessary to simulate the required period of operation of the manipulator.
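A sketch of this simulation loop, assuming a function implementing equation (4.11) is available (here a trivial single-joint, zero-gravity placeholder is used so the script runs stand-alone):

    J = 0.5;                          % joint inertia, kg m^2 (assumed)
    accel = @(th, thd, tau) tau/J;    % equation (4.11) for this simple case
    dt = 0.001; th = 0; thd = 0; tau = 2.0;   % constant applied torque
    for k = 1:1000                    % simulate 1 second
        thdd = accel(th, thd, tau);   % equation (4.11)
        th   = th  + thd*dt;          % equation (4.12)
        thd  = thd + thdd*dt;
    end
    th                                % approximately 0.5*(tau/J)*t^2 = 2 rad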

4.3 Control
Single joint control
In many industrial manipulators, each joint is driven using a separate, independent control system. For example, consider
joint 1 of a PUMA 560 manipulator, which is driven by a DC servomotor through a gear train, as depicted in Figure 4.2.


A simple analysis of this drive system proceeds as follows. (Note: this is just an example; when tackling problems of this
type it is always best to go back to first principles rather than using the equation (4.15) derived below).
The joint torque τ will accelerate the joint and the manipulator above it; these have inertia J:

τ = J q̈ (4.13)

Torque delivered by the motor τm is used to provide the joint torque, to overcome any unmeasured disturbance torque τd (which could include friction) and to accelerate the motor itself:

τm = Jm G q̈ + (τ + τd)/G (4.14)

where G is the gear ratio (motor speed over joint speed), so that the motor acceleration is G q̈. Substituting equation (4.13) into equation (4.14):

τm = Jm G q̈ + (J q̈ + τd)/G

or

τm = (JE/G) q̈ + τd/G (4.15)

Note that the term JE, representing J + Jm G², can be interpreted as the effective inertia at the joint including motor inertia.
From equation (3.1), motor torque is proportional to current. If a current amplifier is used, then the current is in turn proportional to the control signal u. Thus if Km is a constant:

τm = Km u (4.16)

Equations (4.15) and (4.16) constitute the model of the plant which is included in the control system block diagram of Figure 4.3.

A simple proportional position control system is shown in Figure 4.3. There is only one controller parameter Kp to
choose, and this does not give much freedom to alter the dynamics of the system in a desirable manner. Instead a
Proportional Derivative (PD) or Proportional Integral Derivative (PID) controller could be used. The scheme shown in
Figure 4.4 is quite common in robot control; it is a variant of PD control which includes feedforward.
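To illustrate why proportional control alone gives little design freedom, the following minimal sketch simulates the plant of equations (4.15) and (4.16) under proportional control; all parameter values are illustrative assumptions:

% Hedged sketch: single-joint plant of equations (4.15)-(4.16) with
% proportional control. Parameter values are illustrative assumptions.
JE = 5; G = 60; Km = 0.1;    % effective inertia, gear ratio, torque constant
Kp = 2; td = 0;              % proportional gain; no disturbance torque
q = 0; qd = 0; qdem = 1;     % initial state and demand position (rad)
dt = 1e-3;
for k = 1:5000
    u   = Kp*(qdem - q);     % proportional control signal
    tm  = Km*u;              % motor torque, equation (4.16)
    qdd = (G*tm - td)/JE;    % plant dynamics, rearranged from equation (4.15)
    qd  = qd + qdd*dt;
    q   = q + qd*dt;
end
% With no velocity (derivative) term the response oscillates about qdem
% without settling, which motivates the PD and feedforward schemes.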
This scheme requires position and velocity feedback; however if joint velocity is not measured it can be generated by
differentiating the position feedback signal. The feedforward filter is used to determine demand velocity and acceleration
from the demand position, but often this filter is not required because the velocity and acceleration values are available
from a trajectory generation routine which computes the demand position profile (trajectory generation will be discussed
later in the course).
The signal from the feedforward path is an estimate of the motor torque required to follow the demanded trajectory. In fact, if the position and velocity errors are zero, and there is no disturbance torque, it is the exact motor torque required to give the demanded acceleration.

Example 4-3 Derivation of transfer function for PD controller with feedforward (Figure 4.4). See class notes
Controlling the whole manipulator
In order to design a separate feedback controller for each joint the dynamic interaction between joints has to be
neglected. For joint 1 of the PUMA, the biggest problem is that the inertia J varies considerably as other joints move. For
controlling some of the other joints the interaction is even more severe. Thus a method of accounting for the dynamics of
the whole manipulator is required. The computed torque method of Figure 4.5 is such a method.
The computed torque controller has a similar structure to the feedforward PD approach. However it now uses the inverse
dynamics solution of equation (4.10) to calculate the motor torques required to perform the desired movement. The
inverse dynamics calculation uses the following parameters:
measured joint positions and velocities
the acceleration signal shown in the block diagram, which can be considered as the desired acceleration; it is exactly the same as the demand acceleration when there is no position or velocity error.
Note that the inverse dynamics equations used must give the motor torques rather than the joint torques. The equations will include the motor inertia terms, and any gear ratio G between motor and joint rotation.
Figure 4.5 represents the whole controller, i.e. for all joints, so that each variable is now a vector of n elements, and each gain block actually represents a set of n values for scaling the variables. Figure 4.6 shows that each gain is in fact a matrix, with the n values forming the leading diagonal (n = 2 in this case).
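A minimal sketch of one control cycle of this law, using the rne() inverse dynamics of Section 4.1; the gains, measured state and demand values are illustrative assumptions:

% Hedged sketch: one control cycle of the computed torque law (Figure 4.5)
% for a two-joint arm. dyn follows the 20-column format of Section 4.1;
% all numerical values are illustrative assumptions.
dyn = [0 1.0 0 0 0 10 -0.5 0 0 0 0 0.8 0 0 0 2e-4 60 0 0 0;
       0 0.8 0 0 0  5 -0.4 0 0 0 0 0.3 0 0 0 2e-4 60 0 0 0];
kp = [100 100]; kv = [20 20];            % per-joint gains (diagonal matrices)
q  = [0.10 0.38]; qd = [0.5 0.2];        % measured joint state
q_dem = [0.12 0.40]; qd_dem = [0.6 0.2]; % demand from trajectory generator
qdd_dem = [1.0 0.3];
% "desired" acceleration: demand corrected by position and velocity errors
qdd_des = qdd_dem + kv.*(qd_dem - qd) + kp.*(q_dem - q);
tau = rne(dyn, q, qd, qdd_des)           % inverse dynamics gives motor torques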
The major difficulty with the computed torque method is that significant computing power is required to perform the
inverse dynamics calculation sufficiently quickly. Hence few current industrial manipulators use this approach. Most use
individual joint controllers.

Example 4-4 Computed torque control method for single link manipulator. See class notes
5. PROGRAMMING
5.1 Introduction
Robot programming refers to the process of creating a program to drive a robot through a series of movements to carry
out a particular task. Once created the program would be executed whenever the task had to be performed. Typically the
task would be a repetitive one, e.g. fitting windscreens on a car production line, and so the program would execute over
long periods of time.
Four methods of robot programming exist, which can be split into on-line and off-line methods:
On-line (teach by showing) programming:
o Drive-through teaching consists of the human operator controlling the robot using keys on a teach pendant, e.g. there may be two keys (a plus and minus) to move each joint. Once an important position is reached it can be recorded by hitting a record key. In this way a series of positions can be recorded, and then played back as required.
o Lead-through teaching is based on the same principle as drive-through, except that the end-effector is physically dragged by the operator rather than using a teach pendant. This is often used for paint spraying robots for example.
Off-line programming:
o Robot programming language. High level computer languages are available which are specifically designed for robot control, e.g. VAL II from Unimation and AML from IBM. End-effector locations can be programmed by entering co-ordinate values. These languages offer considerably more flexibility than on-line methods.
o Robot simulation. A more sophisticated form of programming language environment incorporates a robot simulator. Hence programs can be tested in simulation. These systems also allow drive-through programming of the virtual robot; movements taught in this way can be integrated with program code if required.
The most common methods in current industrial use are drive-through teaching, and using a programming language. As
an example, the implementation of these two methods for a Unimation PUMA 560 robot will be described. Unimation
developed its own robot programming language VAL (Variable Assembly Language) in 1979, since when it has been
regularly updated and enhanced. Now called VAL II, it is a high level interpreted language whose programs consist of a
sequential series of instructions. As is seen in the next section, VAL II is also involved in drive-through teaching,
because the recorded locations are stored as VAL II instructions.
Typical hardware required for robot programming and control is shown in Figure 5.1. The Controller contains
microprocessor and interface cards (Figure 5.2), and the operator communicates with the controller via the VDT (visual
display terminal, containing keyboard, disk drive and monitor) and the teach pendant.
5.2 Drive-through teaching


Consider the example task depicted in Figure 5.3. To teach this pick and place task, the operator would do the following:
1. Type at the VDT: EDIT prog. This starts recording a program called prog.
2. Type at the VDT: TEACH location1. This specifies the name of the first point recorded.
3. Use the teach pendant to move the robot to point A in Figure 5.4.
4. Use the teach pendant to open the gripper
5. Hit record key on teach pendant to store current robot and gripper positions. (1)
6. Use the teach pendant to move the robot to point B
7. Hit record key (2)
8. Close gripper, and hit record. (3)
9. Move back up to A, and hit record. (4)
10. Move across to C, and hit record. (5)
11. Move down to D, and hit record. (6)
12. Open gripper, and hit record. (7)
13. Move back up to C, and hit record. (8)
14. Hit carriage return on the VDT to stop teaching.
Note that the numbers in brackets are program line numbers to which we will refer later.

Moving the robot using the teach pendant


There are three different modes in which the teach pendant of Figure 5.5 can operate to move the robot. These modes are
selected by the TOOL, WORLD and JOINT keys:
In the JOINT mode the six pairs of plus and minus keys move the joints 1 to 6 individually.
In the WORLD mode, three pairs of keys move the end-effector along the X, Y and Z axes of a world co-ordinate frame, and the other three pairs rotate the end-effector about the same axes. The world co-ordinate frame is the same as the link 0 frame.
In the TOOL mode, the keys move the end-effector along and around the axes of the tool co-ordinate frame, which is the same as the link 6 frame.
The world and tool co-ordinate frames are shown in Figure 5.6.
In the WORLD and TOOL modes, the inverse kinematics calculation has to be performed to calculate the joint angles.
Define frame {0} as the world frame, frame {6old} as the tool frame before movement, and {6new} as the tool frame
after movement. Let the movement, about whichever frame, be represented by T; e.g. if the RY+ key is hit, this may correspond to a 2° rotation about Y, giving:

T =
[ cos 2°  0  sin 2°  0
  0       1  0       0
 −sin 2°  0  cos 2°  0
  0       0  0       1 ] (5.1)
In TOOL mode, T represents the position and orientation of {6new} relative to {6old}, so:

0T6new = 0T6old T (5.2)

In WORLD mode, T represents the movement which the end-effector has to undergo within the fixed world frame, so:

0T6new = T 0T6old (5.3)

Thus the inverse kinematic calculation to find the new joint angles should be performed on the 0T6new found from equation (5.2) or (5.3) as appropriate.
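A minimal MATLAB sketch of equations (5.1) to (5.3); the current 0T6 is a placeholder here:

% Hedged sketch: applying a 2-degree RY+ movement in TOOL and WORLD modes.
c = cosd(2); s = sind(2);
T = [ c 0 s 0;               % rotation about Y, equation (5.1)
      0 1 0 0;
     -s 0 c 0;
      0 0 0 1];
T6old = eye(4);              % placeholder for the current 0T6
T6new_tool  = T6old * T;     % TOOL mode, equation (5.2)
T6new_world = T * T6old;     % WORLD mode, equation (5.3)
% The inverse kinematics are then solved on the appropriate T6new.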

Playback
The teach pendant records positions by forming a VAL II program. Each time the record key is hit one line of the program is created. The pick and place example would create the following program called prog:

1. MOVET location1, 25.4 (moves to A, gripper open)
2. MOVET location2, 25.4 (moves to B)
3. MOVET location3, 0.0 (closes gripper)
4. MOVET location4, 0.0 (moves to A)
5. MOVET location5, 0.0 (moves to C)
6. MOVET location6, 0.0 (moves to D)
7. MOVET location7, 25.4 (opens gripper)
8. MOVET location8, 25.4 (moves to C)

The program line numbers correspond to the numbers in brackets after the operator's actions listed at the start of this section. Thus the action which caused each line to be created can be identified. Note that:
the MOVET instruction is a move instruction created by the teach pendant. Other move instructions will be
introduced shortly.
eight robot positions are stored in variables location1 etc.; the last number of the variable name is automatically
incremented for each new position.
the second argument to the MOVET command is the gripper opening. This can indicate a variable finger
separation, but in our case a simple binary pneumatic gripper is assumed which can just be driven open or
closed, represented by 25.4 and 0.0 respectively.
To playback the movements, the program can be executed by typing at the VDT: EXECUTE prog

5.3 Programming using the VAL II language


The VAL II language contains a wide range of instructions which allow loops, conditional statements (if..then),
subroutines and many other constructs expected in high level languages. None of these are available if the robot is
programmed solely through the teach pendant. However recording some important positions using the pendant can be
valuable, as these can then be used in more sophisticated programs.
Consider the pick and place task of Figure 5.4. The teach pendant could be used to move the robot to point A, with the
gripper orientated downwards. To record that position in variable pick, type:
HERE pick
Similarly, moving the robot to C with the gripper pointing down and typing HERE place would record the variable place. If the locations pick and place are known heights above the work surface, then the instruction APPRO (meaning approach) can be used to lower the end-effector by a prescribed distance, i.e. move along the tool Z axis. Similarly there is an instruction DEPART which raises the end-effector (moves along the negative tool Z axis).

Example 5-1 Write a VAL II program off-line to perform pick and place (Figure 5.4). See class notes

Table 5.1 summarises a few VAL II program instructions. There are many more which are not covered in this course.
Note that the SET instruction can be used to enter robot locations without using the teach pendant; locations are either
specified as a transformation of the end-effector in the world frame, or as joint angles. The transformation is given in
terms of X,Y and Z co-ordinates, and three angles O, A and T, defined in Figure 5.7, which uniquely define the rotation
matrix.
Table 5.2 summarises some so-called monitor commands; these are commands which are given directly to the operating
system, rather than forming part of a program.

MOVET <location>, <gripper opening> - Teach pendant generated move instruction
MOVE <location> - Programmed move
APPRO <distance> - Approach: move along positive tool Z axis
DEPART <distance> - Move along negative tool Z axis
OPEN - Open gripper
CLOSE - Close gripper
SET <location> = TRANS(X,Y,Z,O,A,T) - Stores location given as a transformation
SET <location> = PPOINT(q1,q2,q3,q4,q5,q6) - Stores location given as joint angles

Table 5.1 VAL II program instructions.

EDIT <program name> - Allows program entry
EXECUTE <program name> - Executes program
TEACH <location> - Sets up location variable name for drive-through teaching
HERE <location> - Stores current robot location in given variable
SPEED <percent> - Sets movement speed as a percentage of maximum

Table 5.2 VAL II monitor commands

5.4 VAL II Trajectory generation


Velocity profile
In moving from one location to another, the manipulator will first accelerate, then move with a constant velocity, and
finally decelerate. This velocity profile and the corresponding position trajectory are shown in Figure 5.8. In the case of
VAL II, the rates of acceleration and deceleration are fixed by the controller, but the constant velocity value can be
entered by the operator using the SPEED command.

Joint co-ordination
A common controller algorithm for moving from one location to another would be to:
Determine joint angles for start location
Determine joint angles for end location
Determine duration for complete movement (normally dependent on the joint that has to move the furthest).
For each joint: determine a trajectory, consistent with Figure 5.8, which would implement the move with the correct duration.
Move the six joints simultaneously according to the individual trajectories. All joints should complete the
movement at the same time.
As trajectory generation occurs at individual joint level, this algorithm is known as joint-interpolated movement. It is
computationally efficient, but does not result in straight line movement of the end-effector. This is illustrated in Figure
5.9. Figure 5.10 shows joint interpolated movement for the pick and place task. As the MOVE, APPRO and DEPART
instructions use joint interpolated movement then our program of Section 5.3 would in fact produce this wiggly path.


However straight line motion can be produced if the trajectory generation is carried out in Cartesian space. In other
words the trajectory of Figure 5.8 is applied to the linear co-ordinates and angles which represent the end-effector
location 0T6 . These trajectories are then sampled to produce a set of 0T6 transformations spanning the whole movement;
performing the inverse kinematics calculation on each one of these gives a set of joint angle vectors for the manipulator
to follow. As the inverse kinematics have to be solved repetitively, it is a computationally intensive algorithm. In VAL II
there are variants of many of the movement instructions which give straight-line movement, e.g.:
MOVES <location>
APPROS <distance>
DEPARTS <distance>

Continuous path motion


When moving between a series of points, as in Figure 5.11, the trajectories described above would cause the manipulator
to stop at each point. In many cases this is inefficient; the manipulator only needs to stop at the end points, not at the
intermediate or via points. VAL II has a mode called continuous path movement, in which the robot moves at a constant
speed past all via points. At the normal point of deceleration of one movement, the robot starts to change direction
towards the point corresponding to the end of the acceleration phase of the next movement. Hence the corners are
rounded-off, as shown in Figure 5.12. The VAL II monitor commands which control the continuous path mode are:
ENABLE CP (Enables continuous path mode)
DISABLE CP (Disables continuous path mode)


Figure 5.11 Non-continuous path mode

5.5 Trajectory calculation

Linear trajectory with parabolic blends


The trajectory formed by a period of constant acceleration, constant velocity, and then constant deceleration is analysed in this section. Figure 5.13 shows a change in position from 0 to q, involving constant acceleration a, constant velocity v, and constant deceleration −a; let the total duration of the move be tf. When considering the motion of a revolute joint, these would all be angular quantities.
The duration tb of the parabolic blend (i.e. the constant acceleration part) can be found easily from:

tb = v/a (5.4)

Displacement during a parabolic blend:

qb = a tb²/2 = v²/(2a) (5.5)

Displacement during the linear trajectory:

q − 2qb = v (tf − 2tb) (5.6)

Substituting equation (5.5) into (5.6):

q − v²/a = v (tf − 2tb) (5.7)

Substituting equation (5.4) into (5.7):

q = v tf − v²/a (5.8)

which can be rearranged as a quadratic in v:

v² − a tf v + a q = 0 (5.9)

Given a required duration tf for the move, equation (5.9) can be solved for the velocity v. Alternatively, v may be specified, and the duration must be found. These calculations assume that the acceleration a is a known value.
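A minimal MATLAB sketch of this calculation, in the spirit of Example 5-2 (all numerical values are illustrative assumptions):

% Hedged sketch: constant velocity for a linear trajectory with parabolic
% blends, from equation (5.9). Values are illustrative assumptions.
qmove = 1.0;    % total displacement (rad)
a     = 4.0;    % acceleration magnitude (rad/s^2)
tf    = 1.5;    % required move duration (s)
% v^2 - a*tf*v + a*qmove = 0; take the smaller root so that the blend
% time tb = v/a satisfies 2*tb <= tf
v  = (a*tf - sqrt((a*tf)^2 - 4*a*qmove))/2
tb = v/a        % blend duration, equation (5.4)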

Example 5-2 Calculating constant velocity value for a linear trajectory with parabolic blends. See class notes.

Polynomial trajectories
There are a variety of other methods for calculating trajectories, for example using third or fifth order polynomials. A
fifth order polynomial has the form:

q(t) = a0 + a1 t + a2 t² + a3 t³ + a4 t⁴ + a5 t⁵ (5.10)

Its derivatives are:

q̇(t) = a1 + 2a2 t + 3a3 t² + 4a4 t³ + 5a5 t⁴ (5.11)
q̈(t) = 2a2 + 6a3 t + 12a4 t² + 20a5 t³ (5.12)

Applying equations (5.10), (5.11) and (5.12) to both the start and end of the trajectory gives six equations in the six unknown coefficients a0 to a5; hence a solution can be found. Normally the start and end velocities and accelerations will be zero. If q1 is the start angle at t = 0, (5.10) to (5.12) give:

a0 = q1 (5.13)
a1 = 0 (5.14)
a2 = 0 (5.15)

If q2 is the end angle reached at t = t2, then making use of (5.13) to (5.15), equations (5.10) to (5.12) give:

q2 = q1 + a3 t2³ + a4 t2⁴ + a5 t2⁵ (5.16)
0 = 3 a3 t2² + 4 a4 t2³ + 5 a5 t2⁴ (5.17)
0 = 6 a3 t2 + 12 a4 t2² + 20 a5 t2³ (5.18)

Equations (5.16) to (5.18) can be solved for the remaining three coefficients a3, a4 and a5.
Fifth order polynomial trajectories have the advantage that jerk (the derivative of acceleration) remains low. The sudden changes in acceleration in the linear-with-parabolic-blends method give high jerk values. However, given that much of the movement can occur at maximum velocity, the latter method normally has the advantage of shorter movement times.
The Matlab Robotics Toolbox command jtraj() generates a trajectory between two joint angle vectors using a fifth order
polynomial.
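A minimal MATLAB sketch of computing the coefficients from equations (5.13) to (5.18) for a rest-to-rest move (q1, q2 and t2 are assumed values); jtraj() performs an equivalent calculation:

% Hedged sketch: fifth order polynomial trajectory coefficients for a
% rest-to-rest move. q1, q2 and t2 are illustrative assumptions.
q1 = 0; q2 = 1; t2 = 2;
a0 = q1; a1 = 0; a2 = 0;               % equations (5.13)-(5.15)
A = [  t2^3    t2^4    t2^5;           % equations (5.16)-(5.18)
     3*t2^2  4*t2^3  5*t2^4;
     6*t2   12*t2^2 20*t2^3];
x = A \ [q2 - q1; 0; 0];
a3 = x(1); a4 = x(2); a5 = x(3);
t = linspace(0, t2, 101);
q = a0 + a1*t + a2*t.^2 + a3*t.^3 + a4*t.^4 + a5*t.^5;   % sampled trajectory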

Use of trajectories for controller demand signals


In Section 4.3, control algorithms were introduced which required joint position demand signals to be differentiated to
obtain joint velocity and acceleration. In fact the position demand is likely to be a trajectory created by a linear plus
parabolic, fifth order polynomial, or similar method. Hence the trajectory generation equation(s) can be differentiated so
that velocity and acceleration can be calculated analytically, avoiding the need for numerical differentiation.


6. VISION SYSTEMS
6.1 Introduction
A robot vision system is a sophisticated optical sensor which has the potential to enable a robot to respond intelligently
in an uncertain environment. Common uses are:
identification of objects in the robot's working environment
estimation of object position and orientation
tracking of moving objects
identification of component defects
Most current commercial vision systems can only operate in environments with the following constraints:
the number of objects that need to be identified is limited.
the number of objects in a scene simultaneously is limited.
objects do not overlap or touch.
objects are viewed from one known direction (normally from above).
objects are illuminated so as to obtain high dark-to-light contrast.
Computer data processing algorithms are the key to successful vision system operation. These algorithms fall into three
related fields (Figure 6.1):
1. Image processing - the raw image is improved in some way; this can include smoothing and edge detection.
2. Object or pattern recognition - the output of this process is a description of the image based on a knowledge of the
objects expected to be found in the image.
3. Scene analysis - concerned with the transformation of simple features into abstract descriptions relating to objects
that cannot be simply recognised based on pattern matching. It deals extensively with three-dimensional image
understanding (texture, 3D shape, etc.). Artificial intelligence techniques are often used. Scene analysis is outside the
scope of this module.

6.2 Vision hardware


CCD cameras
For robotic applications, solid-state cameras are usually used because of their ruggedness, low image distortion, low
power requirements and small size. Solid-state cameras can operate on visible light and image resolution can be as high
as 800 by 800 picture elements (pixels). Modern cameras can produce information faster than a computer can process it,
sometimes as fast as 2000 pictures/second.
The solid state camera is usually a CCD (charge coupled device). A lens is used to form the image on a matrix of light
sensitive elements; a small electrical charge is formed in any element upon which light falls. The conductivity of the
material from which the matrix is made is low and the charges tend to remain in the specific regions where they are
created. The charge builds up in proportion to the light intensity and the duration of exposure.


The charge is measured periodically, with sample interval ts, and also reset to zero each sample time. Thus the charge
detected is a measure of the average light intensity on the element during the previous sample interval.
The most common way in which to accomplish this "matrix read" is in a top-to-bottom, left-to-right scanning process
called raster scanning (Figure 6.2). While the charge in an element at the bottom of the matrix is being measured and
neutralised, charge is once again building up at the top. Since charge continues to accumulate over the entire surface of
the light sensitive matrix at all times, it is necessary to return immediately to the top of the matrix and begin scanning
again.

Vacuum-tube cameras
Sometimes vacuum-tube TV cameras, also known as scanning photomultipliers, are used in vision systems, although
these are now becoming less common. The vidicon tube is a well known example. These cameras capture the image in
the following way:
each complete recorded image - called a frame - consists of a raster scan with 625 lines.
25 frames are recorded per second
Consequently it takes 64 µs to scan one line. This time includes not only the active video signal but also the retrace periods, approximately 18% of the line time; the active video time is 52 µs per line. Figure 6.3 shows the output of a TV
camera as it scans three successive lines; the raster scanning process effectively converts a picture from a two
dimensional signal to a one dimensional signal where voltage is a function of time.
To form a digital image in computer memory the voltage signal must be fed into an analogue to digital converter (ADC);
this will sample the signal at a fixed frequency. A sample frequency of 9.84MHz is common, giving 512 pixels per active
part of a line. Figure 6.4 illustrates the process.

Digitising: quantisation and aliasing


The digital representation of a pixel is quantised, i.e. it can have only a finite number of possible values, defined by the
number of bits used to store the value in memory. For example in a monochrome system 8 bits are often used, allowing a pixel to be represented as one of 256 possible grey level values (2⁸). A colour system may have 8 bits of storage for each primary colour (red, green, blue).


Experiments have shown that at a given light level, the human eye can discern only about 30 grey levels. However, with
a change in average light intensity, the eye adapts by opening or closing the iris, giving a greater overall range. Thirty
shades of grey would indicate that 5 bits is adequate; the use of 8 bits allows a limited emulation of the effects of the iris.
The distance between one pixel and the next must be sufficiently small to prevent aliasing. To successfully capture an
image consisting of a sinusoid with a known spatial frequency (cycles/m), the resolution (pixels/m) of the vision system
must be at least twice that frequency. Figure 6.5 shows that the sinusoid can appear to be at a much lower spatial
frequency if the resolution is too low.


6.3 Image processing


Before objects can be located and identified in an image, the data must be processed to reduce noise, identify which parts
of the image are foreground and background, and hence detect the boundaries of objects. Typical images are shown in
Figure 6.6. There may only be one type of object that can appear in the image and the task is to determine its position and
orientation. In other applications there may be one or more objects in the image taken from a small set of possible parts
and the objective is to both locate and identify each part.
Once stored in computer memory, the image data file may be accessed as a conventional two-dimensional array of
numbers. Each number is referred to as a pixel, typically represented by one 8-bit byte. Note that a single image
comprising 512 x 512 pixels requires 256kB of memory; thus storing multiple images can require considerable memory
capacity. Sometimes one aim of image processing is data reduction, i.e. reducing the memory requirement.

Smoothing
Most raw images will be affected by noise. Spurious but substantial inaccuracy in the grey level of individual randomly
distributed pixels gives a speckled effect known as salt-and-pepper noise. Smoothing or filtering the signal is often
required. Local averaging is a common technique. This replaces the pixel value at the centre of a square window with the
average of all the values in the window.

Example 6-1. Apply local averaging to the image in Figure 6.7 using a 3x3 window. See class notes
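A minimal MATLAB sketch of 3x3 local averaging; the image here is a small invented array with a single noise pixel:

% Hedged sketch: 3x3 local averaging for salt-and-pepper noise reduction.
G  = 100*ones(8);  G(4,4) = 255;   % flat grey image with one noise spike
w  = ones(3,3)/9;                  % 3x3 averaging window
Gs = conv2(G, w, 'same');          % each pixel replaced by the window
                                   % average; the spike is spread out and
                                   % greatly reduced in amplitude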


Segmentation: binary image coding


It will be assumed that the objects in the image are fairly simple and can be represented by their two-dimensional
projections, as provided by a single camera view. Furthermore, it is assumed that the shape is adequate to identify the
object; colour or variation in brightness is not required.
Segmentation of an image means partitioning it into different regions, usually background and object(s). In applications
with good contrast and little noise, an image can be segmented into object and background by choosing a brightness
(grey level) threshold, T. Any region with brightness above the threshold is an object.
A histogram of the frequency of occurrence of particular grey levels, such as Figure 6.8, can be used to help choose the
threshold. The mean grey level value over the whole image is often a good threshold value.
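A minimal MATLAB sketch of global thresholding at the mean grey level; the image is a small invented array of a bright object on a dark background:

% Hedged sketch: segmentation by global thresholding at the mean grey level.
G = 20*ones(10);  G(4:7, 4:7) = 200;  % bright object on dark background
T = mean(G(:));                        % threshold: mean grey level
B = G > T;                             % binary image: 1 = object, 0 = background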
Sometimes a single threshold across the entire image cannot provide sufficient discrimination, so local thresholds must
be chosen. The most common approach is called block thresholding: the picture is partitioned into rectangular blocks
and different threshold levels are used on each block. Typical block sizes are 32 x 32 or 64 x 64 for a 512 x 512 picture
image. The block is first analysed and a threshold is chosen; then that block of the image is thresholded using the results
of the analysis.

Figure 6.8 Example histogram for image containing white objects on black background.


Thresholding will produce a binary image, from which the outline of an object can be detected. The pixels which form
this outline can be identified by contour following, which consists of:
1. searching for a first edge point (a point between a 0 and a 1)
2. moving to the next edge point using the rules illustrated in Figure 6.9

3. continuing moving around the object until arriving at the first edge point again


Segmentation: edge detection in a complex image


Edge detection in a complex image with less contrast and significant noise is more difficult. For example smoothing to
reduce noise will also blur edges. One approach is to calculate the first derivative between adjacent grey-scale values. A
block diagram of such an edge-detector system is shown in Figure 6.10.
An algorithm frequently used for a pixel differentiator is the Roberts cross operator. This operator is defined as:

Rij = √[(Gij − Gi+1,j+1)² + (Gi+1,j − Gi,j+1)²] (6.1)

where Gij is the grey level of pixel (i,j). The 2 x 2 pixel window used by the operator is shown in Figure 6.11.

Original image: grey levels Gij. Differentiated image: square of operator (Rij²).
The function of the threshold detector is to decide which elements of the differentiated image should be considered as edge candidates. An edge is present if Rij > T, where T is a chosen threshold level. For values above the threshold, the matrix element is set to one; otherwise it is set to zero. This is shown in Figure 6.13.
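A minimal MATLAB sketch of equation (6.1) and the subsequent thresholding; the test image and threshold value are invented:

% Hedged sketch: Roberts cross differentiation and thresholding.
G = zeros(8);  G(3:6, 3:6) = 200;      % bright square on dark background
R2 = zeros(size(G));                   % square of the operator, Rij^2
for i = 1:size(G,1)-1
    for j = 1:size(G,2)-1
        R2(i,j) = (G(i,j) - G(i+1,j+1))^2 + (G(i+1,j) - G(i,j+1))^2;
    end
end
T = 100;                               % chosen threshold on Rij
E = R2 > T^2;                          % edge map: 1 where Rij exceeds T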
Once the edge points are detected, these must be connected to find the lines that define the image. The iterative end-point fit is a typical method for finding a line. This method finds the most extreme edge points in a matrix window and introduces a line to connect these end points, as shown in Figure 6.14a. It then looks to see if edge points (binary 1's) fall on the line. If not, it chooses the most distant point from the line and replaces the single line with two lines, as shown in Figure 6.14b. The process is continued until a series of line segments is found to match the edge-point pattern, as in Figure 6.14c. The edge line segments can be stored as vectors.


6.4 Object recognition


Once the edge of an object has been completely defined, there are two stages to identifying the object:
feature extraction - determining key parameters which describe the object
comparison of the features of the object with those in a database of known objects
In addition to identifying the object, the position of the object can be defined, for example by determining the Cartesian
co-ordinates of its centre of area, or the centre of the smallest rectangle which encloses the object.
The most common features which are used for object recognition are:
Surface area
Perimeter length
Length in X and Y directions.
Ratio of the X and Y lengths
Moments of area about centroid
Axis of the least moment of area

Some of these are illustrated in Figure 6.15.
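A minimal MATLAB sketch of computing some of these features from a binary image with object pixels equal to 1; the test object is invented:

% Hedged sketch: simple recognition features from a binary image B.
B = zeros(20);  B(5:12, 8:16) = 1;     % illustrative rectangular object
area = sum(B(:));                      % surface area (pixels)
[r, c] = find(B);                      % rows and columns of object pixels
lenX  = max(c) - min(c) + 1;           % length in X
lenY  = max(r) - min(r) + 1;           % length in Y
ratio = lenX/lenY;                     % ratio of X and Y lengths
centroid = [mean(c) mean(r)];          % centre of area (x, y), for position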
7. ADVANCED ROBOTIC APPLICATIONS


7.1 Introduction
Robots have become commonplace in industry, but are yet to make a significant impact in other areas of life. The
industrial environment is ideal for robots, as it is well structured and controlled, and robots can be productive when
undertaking a limited range of simple tasks. In many other situations there are more unknowns and uncertainties in the
environment, so the robot has to respond to unexpected events. Hence an ability to perform simple pre-programmed tasks
is not adequate. Such a robot requires:
increased sensory ability to be able to measure the surrounding environment e.g. advanced range finding, vision,
sound, and touch sensors. Also it will need sensor fusion algorithms to combine measured signals from different
sensors to form a coherent environmental recognition system.
increased intelligence to respond appropriately to sensor information whilst performing programmed tasks.
Intelligent algorithms include expert systems, e.g. based on fuzzy logic, which encode the decision making
ability of human experts. These algorithms could alternatively consist of artificial neural networks, which mimic
the physical structure of neural connections in the brain, and have the ability to learn from experience.
Some analysts believe that robots will move from the industrial sector in two stages:
Service robots will appear first, used for example in medicine, care of the disabled, catering, and construction.
Personal robots will follow, which will interact more intelligently with humans in the home or office,
functioning as a personal slave. These robots will need to be cheaper than service robots.
Many applications not only require the manipulative abilities of a robot arm and hand, but also need to be entirely
mobile. This requires advanced mechanical design to reduce weight, and minimise the power requirement whilst
maintaining good performance.
Table 7.1 shows some promising application areas and the tasks which a robot might carry out within those areas. Some
production robots are beginning to appear for a few tasks in each area.

7.2 Examples of selected robot systems


Typical examples will be covered in the lecture sessions (please refer to the slides of the lectures).
Mobile robots are described in Section 8.
8. MOBILE ROBOTS
8.1 Introduction
Ground-based mobile robots have so far been developed for two main application areas:
planetary exploration
automatic transportation in factories
Robotic planetary rovers are the key to future space exploration. The expense and safety implications of sending humans to other planets remain prohibitive. The focus of current activity is the design and control of rovers suitable for exploring Mars. NASA's Pathfinder Mission, launched in December 1996, demonstrated the use of a rover to explore the surface of Mars for the first time. This is discussed in more detail in Section 8.2.
Automatic Guided Vehicles (AGVs) are currently available for transporting materials and components in factories. They
follow marked routes around the factory floor, and have some sensing capability to detect obstacles. The current
developments in factory-based and service mobile robots are described in Section 8.3. This section highlights the aspects
of intelligence which need to be present, such as a navigational ability, before mobile robots can become truly
autonomous.
The lecture slides cover typical examples with regard to mobile robots.
8.2 Space robotics: planetary rovers
Autonomous rovers will play an important role in planetary exploration. NASA's Mars Surveyor Programme consists of
a scientific survey of Mars over the next 10 years using a series of rovers. The rovers must move around on the surface of
the planet to conduct experiments on geophysical, meteorological and biological conditions. The first rover, Sojourner,
touched down in mid 1997 as part of the Pathfinder Mission.
Some particular features needed in planetary rovers are:
an ability to move over rough terrain with high stability to carry scientific instruments safely;
mechanical structure and locomotion have to be robust - maintenance and repair are not possible;
for full functionality rovers must be fitted with robot arms to handle objects, collect samples etc.;
specialist sensors for perception in the Mars environment;
an on-board power source.
Robots for planetary exploration require a degree of intelligence for several reasons:
the robot has to move in a natural, unstructured and a priori unknown environment;
much of the information on the environment has to be acquired and interpreted using the robot's own sensors;
there is no possibility of continuous interaction between humans and the robot because of the significant delays
in communication with Earth.
There are two approaches to rover locomotion:
Wheels, usually with large suspension displacements. These rovers are fast and robust but can only cross relatively smooth ground.
Legs, i.e. walking robots. These tend to be slow but can cross very rough terrain.
Typical examples are shown in the slides.

Semi-autonomous navigation
The NASA Pathfinder Mission uses semi-autonomous rover control. The long time delay (possibly 30 minutes for a there-and-back communication with Mars) precludes direct teleoperation.


Satellites around Mars send images to Earth. From these, it is possible to create an up-to-date topographic map of the planet
surface. The rover, which carries a pair of mini-cameras, can send stereo pictures to Earth. A human operator compares
these stereo pictures with the topographic map information to determine vehicle location and heading, and using a pair of
3DOF joysticks, directs the rover along a safe path.
The commands are then transmitted to the robot for execution. The rover autonomously tries to reach the destination by using sensor-based reactive behaviours of varying complexity. These might include obstacle avoidance, or searching for specific features. Active force control is used to accommodate imprecise knowledge of the terrain.
Note that there is no need to move fast on Mars: the top speed of the current rover is 7 mm/s, so the control of the rover need only be based on its kinematic model.

The NASA Mars Surveyor Programme


The broad objectives of the Mars Surveyor Programme are to observe and gather materials representative of the planet's
geophysical, meteorological and biological conditions and to return a varied selection of samples. Since the payload of
the return vehicle is limited, the mission requires a sophisticated on-site system that can explore, assay and select. The
programme includes development of rovers with enhanced vehicle mobility, and with an ability to navigate
autonomously and to manipulate scientific instrumentation.
Specific goals by the year 2001:

Autonomously traverse 100 m of rough terrain within sight of the lander.
Autonomously traverse 100 m of rough terrain over the horizon, with return to the lander.
Autonomously traverse 1 km of rough terrain with execution of selected manipulation tasks.
Complete science/sample acquisition and return to the lander with over-the-horizon navigation.

Rover Technology
From the mechanical point of view rover research includes:

Vehicle stability
Legged versus wheeled vehicle mobility
Handling and grasping dexterity

The miniaturization of rovers, reducing mass and power consumption, is also a major research thrust.
This has led to the classification of rover designs by mass. Rovers over 20 kg are said to be full size; lighter rovers are called microrovers.
Other active areas of research are:

Obstacle avoidance and fault-tolerance;
Sensor suites for long distance navigation;
Autonomous performance of the designated sample acquisition task; soil, rock and atmosphere samples may need to be acquired;
Autonomous search and recognition of potentially interesting targets;
Sensor pointing, emplacement, or burial;
Identification and integration of science instruments on small rover platforms for experimentation;
Intelligent vision and touch-guided grasping;
Camera positioning for scientific imaging, navigation, and vehicle self-inspection;
Analysis of multi-spectral imaging data to find areas of interest;
A rover arm able to position instruments in different positions.


Microrovers
The cost of a full size rover mission is several billion dollars. The light weight and compact volume of microrovers
allow a low flight cost. Microrovers will be able to autonomously traverse many kilometres on the surface of Mars, perform scientist-directed experiments, and return relevant data to Earth. Present microrover
technology has several limitations precluding more ambitious science-rich missions. Current microrovers have very
limited traverse capability (tens of meters), have limited science packages on board, are designed for short-term (10-day)
missions and require repetitive ground control. Figure 8.4 shows some prototypes.
The specifications and features of Sojourner, the only rover that has actually landed on Mars, include:
11.5kg mass
The size of a milk crate
Each wheel is independently driven (2000:1 gear ratio). Encoders measure wheel rotation.
The wheels are independently steerable. Potentiometers measure steering angle.
The top speed is 0.4m/min
Laser striping and camera system determine the presence of obstacles in its path
Carries an x-ray spectrometer, to analyse the composition of the rocks
Power provided by solar cells and 6 lithium thionyl chloride D-cell batteries. These give a maximum power
output of 30W.
A heater unit warms the electrical components (ambient temperatures are between -40°C and +40°C)
Command and telemetry is provided by a modem that links the microrover with the lander.
NASA's latest experimental prototype, ROCKY-7, has the following features:
Mass of less than 20 kg
Ability to traverse autonomously a complex area
Acquire in-situ geochemical data
Low power stereo vision (acuity of human eye; viewpoint can be raised 1m above the surface)
2 DOF stowable manipulator arm with subsurface reach
2 DOF end-effector for digging, grasping and instrument pointing
Onboard spectrometer with fibre optic path to end of arm
Pointable solar array
Bi-directional sensing and driving
Increased capacity for more instruments
New wheel geometry with compact actuation
Ability to autonomously recognise designated targets

Nanorovers
The nanorover concept is a small planetary surface explorer, typically weighing a few grams and moving a few millimetres every minute. It would move about in a reactive mode on the surface, much the same way as an insect does. That is, if there is an obstacle on the left, it moves right, and vice versa. If it begins to move out of the sunlight and is losing power, it changes course. If it senses more of what it is seeking (e.g. water vapour) on one side than the other, it turns toward its goal. Large numbers of such systems can be accommodated on the lander to compensate for possible individual failures.

8.3 Characteristic Functions of Mobile Robots


The automation of a broad variety of service tasks requires mobility in one form or another. Courier services, transportation, safeguarding, inspection, maintenance and data acquisition mostly require mobile platforms which can cope with (partially) unstructured or, in the most general case, unknown environments.

Since the introduction of Shakey at the Stanford Research Institute in 1970, mobile robots have gained significant commercial and scientific interest and have reached high levels of machine intelligence. Even though existing mobile service robots are quite different in size and shape, they mostly share elements of the same application-independent functionality.
These typical, broadly application-independent functions are defined in the following:

Sensor based environmental perception

Scanners and surface, edge or volume detecting sensors, such as ultrasound and vision, or combinations of these, provide information on distance, contours, and the existence or absence of obstacles in the robot's surrounding environment.

Environmental modelling
In order to assure collision-free and goal-oriented motion across constrained environments, the robot needs to have information on the operational area and its surroundings. In addition to a map of the environment, which may be externally given as CAD data, sensors should enable the robot to build, detail or update its maps even in dynamic environments. Map representations range from bitmaps to symbolic descriptions of complex 3D worlds.

Navigation
Navigation comprises motion planning, localisation, motion control and collision avoidance. Motion planning determines the ideal trajectory between start and final positions in terms of coordinates, velocities and timing. It takes into account constraints and boundary conditions such as restricted areas and limits on mission time or available resources.

During missions motion planning can be modified, detailed or updated as sensor signals provide new information on
environments or external signals on altered mission goals.
Localisation of mobile systems requires, due to measurement errors, regular referencing to external (artificial or natural) landmarks. Usually dead reckoning, a simple form of measuring the vehicle's travelled path, is used in conjunction with external landmark referencing.
Motion control assures the vehicle's proper motion along given paths or trajectories. Interpolation between via-points and servo-control of the actuators are performed in constant time intervals at high frequency.

Task Planning
The automated execution of the service task goes beyond the actual motion planning. General task level commands are decomposed into elementary tasks, whose motion elements enter motion planning.

Interaction and Communication


Man-machine interfaces offer input channels to the task planning and operate by voice, key-type or even gesture, or by a combination of these.

Monitoring, error recognition and retrieval


For operational safety all critical functions have to be supervised, either by redundancy or by specific safety monitors.

Personal Safety
Aspects of personal safety are assured by suitable layout and design, by active safety sensors, or by a combination of both. Regulations, standards and guidelines regarding the personal safety of robots in public areas have still to be worked out.

Another interesting drive configuration is made of four mecanum wheels, all of them driven. These wheels consist of a rim on which small rollers are arranged at angles of 45 degrees. When the wheel turns, this angle results in one force component parallel to the wheel and a second component perpendicular to it. Usually this second component is cancelled by the force components of the other wheels, but with special adjustment of the speed and direction of all wheels the vehicle can drive in an arbitrary direction, similar to a hovercraft; a kinematic sketch is given below. This drive configuration therefore gives the vehicle high manoeuvrability but requires good surfaces with sufficient friction.
For figures relating to these please refer to the slides.
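The following minimal MATLAB sketch gives the standard inverse kinematic relations for such a vehicle; the wheel radius r, half wheelbase L, half track W and the demanded body velocities are all assumed values:

% Hedged sketch: wheel speeds for a four-wheel mecanum vehicle with
% 45-degree rollers, using the standard inverse kinematic relations.
r = 0.05;  L = 0.3;  W = 0.25;      % wheel radius, half wheelbase, half track (m)
vx = 0; vy = 0.2; wz = 0;           % demanded body velocities: pure sideways travel
w_fl = (vx - vy - (L+W)*wz)/r;      % front left wheel speed (rad/s)
w_fr = (vx + vy + (L+W)*wz)/r;      % front right
w_rl = (vx + vy - (L+W)*wz)/r;      % rear left
w_rr = (vx - vy + (L+W)*wz)/r;      % rear right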
8.3.2 Environmental Perception
The perception of the environment of a robot is achieved by various sensors; a robust and reliable sensor system is thus a key feature in the field of mobile robots. In particular, a high performance perception system is required to cope with an unknown environment. Depending on the requirements of the task, some sensors may be superior to others. In service robot applications one usually chooses a combination of different sensors. By using sensors working on different principles one can achieve optimal results in environmental perception.


In the following some of the most commonly used sensors are presented.

The first and simplest example of a sensor is a bumper. It just detects a mechanical bump at an obstacle by closing
an electrical or mechanical contact. In service robots it is used for safety functions only.
Another example is the cheap and widespread ultrasonic sensor. Common ultrasonic sensors measure the distance to walls or obstacles by sending out a short ultrasonic pulse and measuring the time for the reflection from the obstacle. Nowadays, the use of specially shaped beams and control of the phase of the ultrasonic waves even allows scanning ultrasonic sensors, the so-called phased array sensors, to be built [17,18]. Besides being cheap, ultrasonic sensors are reliable, have a range of a few metres and offer a resolution of a few millimetres. The flexibility of ultrasonic sensors also allows the construction of a wide beam, which is better suited to general obstacle detection or wall following. Due to these advantages and the low cost, most mobile robots are equipped with some kind of ultrasonic sensor.

A further improvement in sensor technology can be achieved by using laser light instead of sound waves for measuring distances. A laser beam is sent out, reflected by an obstacle and then caught by a detector. The distance to the obstacle is calculated either by measuring the time of flight of a laser pulse [19] or by using phase modulated beams and measuring the interference between the beam sent out and the one returning [20]. Deflecting the laser beam with a rotating mirror yields two or even three dimensional laser scanners which are able to reliably measure object distances at ranges of up to 25 m. Currently their application is limited only by their high price compared with other sensor systems.

Another widely used sensor for environmental perception is the camera or stereo camera [21,22,23]. Usually one or two CCD cameras are used to take (stereo) pictures of the environment. These pictures are then processed by standard image processing and pattern extraction techniques. In this way objects, markers or distances can be computed. The low cost and ease of use of these cameras have to be balanced against the high computing power required to perform feature extraction and interpretation.

8.3.3 Environmental Modelling

The environmental modelling unit processes the sensor data and relates them to the world model, building up environmental maps. These data and the environmental map are then used for the motion control of the mobile robot. For
precise modelling and perception of the environment it is important to reduce measurement errors and to guarantee an
adequate registration of all relevant features in the environment. This is achieved by using and evaluating different
sensors which should preferably work with different underlying physical principles. With such a sensor configuration one
can minimize erroneous measurements. For example, a laser scanner or a camera system can hardly detect glass doors
whereas ultrasonic sensors can do this easily. The combination of various sensors to one plausible stream of sensor data
is known as sensor fusion. Various approaches [24,25,26] show how to fuse heterogeneous multi-sensor information from such different sensors as laser scanners, ultrasonic sensors or vision systems. These fused sensor data are used to build up a world model as reliable and complete as possible. Other algorithms also use fuzzy logic or neural network techniques to combine sensor data and to extract relevant features and patterns [27,28]. These pre-processed data are then
used to construct a map or model of the environment.

When creating maps [29,30] there are generally two common representations, the geometric and the topological
approach. A geometric map represents objects according to their absolute (Cartesian) geometric relationships. It can be a
grid map or a more abstract map such as a line or polygon map. Grid maps are often used as they have the advantages of requiring less computation than other maps and of being built up more quickly. The shape and size of the grid cells can
be different and even variable. Commonly used are square or hexagon grid maps, where objects or the probability of
finding objects is noted. By contrast the topological map is based on recording the geometric relationships between the
observed features rather than their absolute position with respect to an arbitrary coordinate frame of reference [31]. The
resulting representation takes the form of a graph where the nodes represent the observed features and the edges represent
the relationships between these features. Unlike geometric maps, topological maps can be built and maintained without
any estimates of the absolute position of the robot. This approach allows large-area maps to be integrated without suffering from uncertainty in the robot's position.


Map type - Properties
Simple grid map - square or hexagonal occupancy map, with free, unknown or occupied cells
Quad-/octree model - square elements of varying size, indicating free, unknown or occupied cells
Certainty or histogram grid map - probability or pseudo-probability of cell occupation
Vector map - sensor data is combined into line or polygon elements
Topological map - environment is classified; features are connected via topological relations

8.3.4 Task Planning


A basic problem in robotics is to resolve externally and internally specified tasks and commands and plan the resulting
motions and sub-tasks. The planning system needs to transform a task oriented problem into a plan which describes how
the given problem can be solved by the robot [21,23,32]. For this transformation a detailed knowledge base and world
model have to be available. These models give the robot a description of its environment and therefore enable it to
construct the necessary operations needed to fulfil the task.

The plan generated this way contains a sequence of action elements (e.g. movement, picking up items, manipulating items) with assigned resources (e.g. the robot or its gripper). The motion control manager then manages the start and destination of a path and plans the course and any actions in between. The resulting motions of a robot are called trajectories or paths and consist of sequences of desired positions, velocities and accelerations. The sequence of plan elements is called a task execution sequence. During this planning stage all constraints and restrictions, such as closed or impassable areas and areas of intense disturbance, are considered, as well as target times, resources, supplies and the processing of parallel or sequential tasks.

Specific task or operation - Properties and requirements
Room cleaning - An optimum trajectory has to be created which completely covers a given area. Water and other resources have to be checked.
Information tour - A trajectory around the exhibition has to be planned. Information needs to be given at various exhibits.
(Postal/service) delivery - Trajectory to central station and to relevant outposts has to be calculated. Dropping and picking up of items has to be coordinated.
Industrial transportation - Central station has to estimate needs. Trajectories to stores and to production areas have to be computed.

8.3.5 Navigation
The navigation of a mobile robot contains localization, motion control, the already discussed motion planning and
collision avoidance. Its task is also the online and real-time re-planning of trajectories in the case of obstacles blocking
the pre-planned path or other unexpected events occurring. Regarding the complexity of the navigational task one usually
divides the navigational functions into different classes:
The most direct coupling of sensors and actuators is achieved by reflexes. A reflex is a strong relationship between a sensory stimulus and a reaction of the system, bypassing any higher task planning functions of the robot. Especially critical safety mechanisms are based upon these reflexes. They are characterised by a short response time and they are difficult to inhibit by higher intelligence functions. They guarantee safe behaviour of the vehicle in emergencies and other unexpected situations. Other more complex mechanisms lead to local navigation schemes. This level is still highly reactive and can cope with changes in the environment such as unexpected or even moving obstacles. Its task is also the re-planning of trajectories when obstacles block the path, or when danger or other sudden events arise. As it determines the vehicle's path on-line and in real time it usually cannot guarantee an optimal trajectory. The most complex mechanism of action is global navigation, generating paths to goals given by the task planning unit. The paths generated in this way take account of all data provided by the world model and result in near-optimal movements [32,33].


In the following, a brief overview of the navigation techniques most commonly used to generate trajectories for mobile robots is presented.

Vector graph path planning

- is based on a map which models all obstacles geometrically
- all obstacles are enlarged by the size of the vehicle, allowing the vehicle to be reduced to a point object
- paths are constructed by connecting the vertices of the obstacles that have a free line of sight between them
- all possible paths are searched for an optimal trajectory using standard search algorithms

Free space planning

- is based on the free space in the environment of the vehicle, which is modelled with simple geometric figures
- all obstacles are enlarged by the size of the vehicle, allowing the vehicle to be reduced to a point object
- free space areas are connected
- all possible paths are searched for an optimal trajectory using standard search algorithms

Grid based navigation algorithms

- the environment is modelled as a grid of cells, which can be free or occupied
- all obstacles are enlarged by the size of the vehicle, allowing the vehicle to be reduced to a point object
- free grid points are connected and all possible paths are searched for an optimal one
- the accuracy of the path and the computation time needed depend strongly on the size and number of grid cells used

Distance transform algorithms
- the environment is mapped as a grid of cells, where each cell is assigned a value corresponding to the distance from the cell to the goal [21]
- the optimal path is found as a sequence of cells going downhill towards the goal (see the sketch after this list)
- a large computational effort is needed to build up the distance grid map
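A minimal Matlab sketch of the idea, assuming a 4-connected occupancy grid already inflated by the vehicle size (the function and variable names are illustrative; a breadth-first wavefront stands in for the distance transform):

function path = wavefront_path(grid, start, goal)
% grid: logical matrix, true = occupied. start, goal: [row col].
% Returns the downhill cell sequence from start to goal, or [].
D = inf(size(grid));
D(goal(1), goal(2)) = 0;
q = goal;                                   % BFS queue of frontier cells
moves = [1 0; -1 0; 0 1; 0 -1];
while ~isempty(q)                           % build the distance grid map
    c = q(1,:); q(1,:) = [];
    for k = 1:4
        n = c + moves(k,:);
        if n(1) >= 1 && n(2) >= 1 && n(1) <= size(grid,1) ...
                && n(2) <= size(grid,2) ...
                && ~grid(n(1), n(2)) && D(n(1), n(2)) == inf
            D(n(1), n(2)) = D(c(1), c(2)) + 1;
            q(end+1,:) = n;                 % grow queue (fine for a sketch)
        end
    end
end
if D(start(1), start(2)) == inf, path = []; return; end
path = start;                               % walk downhill towards the goal
while any(path(end,:) ~= goal)
    c = path(end,:); best = c;
    for k = 1:4
        n = c + moves(k,:);
        if n(1) >= 1 && n(2) >= 1 && n(1) <= size(grid,1) ...
                && n(2) <= size(grid,2) ...
                && D(n(1), n(2)) < D(best(1), best(2))
            best = n;
        end
    end
    path(end+1,:) = best;
end
end

The two costs noted above are visible here: the wavefront pass visits every free cell, while the downhill walk itself is cheap.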

Potential field algorithms
- obstacles or impassable areas produce artificial repulsive forces acting on the robot [34,35]
- the goal acts on the robot with an artificial attractive force
- the trajectory is found by applying physical force laws to the robot (see the sketch after this list)
- the main problem is the existence of local minima in which the vehicle might get stuck
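One gradient-descent step on such a field might look as follows in Matlab (the gains, influence radius and step size are illustrative assumptions, not recommended values):

function p_next = potential_field_step(p, goal, obst, d0, step)
% p, goal: 2x1 positions; obst: 2xM obstacle points; d0: influence radius.
k_att = 1.0; k_rep = 100.0;                 % illustrative gain choices
F = -k_att * (p - goal);                    % attractive force toward goal
for j = 1:size(obst, 2)
    v = p - obst(:,j); d = norm(v);
    if d < d0                               % repulsion only inside influence zone
        F = F + k_rep * (1/d - 1/d0) / d^3 * v;
    end
end
p_next = p + step * F / max(norm(F), 1e-9); % fixed-size step along the force
end

Wherever the attractive and repulsive forces cancel, F is (near) zero and the vehicle stops making progress; this is exactly the local-minimum problem noted above.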


Behaviour based navigation
- the task is broken up into simple basic behaviours like "follow wall" or "avoid obstacle" [36-38]
- a set of simple behaviours yields the overall behaviour of the vehicle (a sketch follows this list)
- the approach yields a robust local navigation algorithm, but usually gives an unpredictable path when used for global navigation
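A minimal sketch of priority-based behaviour arbitration in Matlab (the behaviour set, sensor fields and thresholds are assumptions for illustration):

function cmd = arbitrate(sensor)
% sensor: struct with range readings; cmd: forward speed v and turn rate w.
if sensor.front_range < 0.3              % avoid-obstacle has highest priority
    cmd = struct('v', 0, 'w', 1.0);      % stop and turn away
elseif sensor.right_range < 1.0          % follow-wall keeps a set clearance
    cmd = struct('v', 0.3, 'w', 0.5*(sensor.right_range - 0.5));
else                                     % default behaviour: wander forward
    cmd = struct('v', 0.3, 'w', 0);
end
end

Because the winning behaviour can change from one sensor reading to the next, the resulting global path is hard to predict, as noted above.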

The localisation of mobile robot systems requires measures to compensate for the relatively imprecise measurement of movement and position caused by dead reckoning errors such as slippage and drift. This can be done via position updating, error correction, motion surveillance and stabilization techniques. It has proven useful to use two different schemes for localisation which complement each other; hence most mobile robots are equipped with sensors for dead reckoning as well as for position updating. Dead reckoning methods estimate the position of the vehicle from differential changes in the vehicle's position, speed and acceleration. Position updating, in contrast, references the robot against external features. These can be natural or artificial landmarks in the environment of the robot, from which the absolute position of the vehicle can be deduced. Cumulative measurement errors in dead reckoning can be corrected by these position update mechanisms, as they allow the position of the vehicle to be re-calibrated absolutely, improving the localization accuracy dramatically. If there is no possibility of frequent position updates, and yet a high demand for good localization accuracy, particular attention has to be paid to high quality dead reckoning sensor systems. Table 8.1 sums up the components used for the localization of mobile robots.

Table 8.1: Localization methods and sensors

Dead reckoning:
- odometer
- speedometer
- (angular) acceleration sensors
- gyroscope

Position updating:
- feature identification in the environment
- active beacons (ultrasonic, infrared, or radio emission)
- passive beacons (reflectors, magnets, metal inductors)
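As an illustration of the dead reckoning entries in Table 8.1, a minimal odometry update for a differential-drive vehicle in Matlab (the function name and the midpoint integration are assumptions for illustration):

function pose = odometry_update(pose, dsl, dsr, track)
% pose = [x; y; heading]; dsl, dsr: left/right wheel travel increments
% from the encoders; track: distance between the two wheels.
ds  = (dsr + dsl) / 2;               % distance travelled by vehicle centre
dth = (dsr - dsl) / track;           % change in heading
pose(1) = pose(1) + ds * cos(pose(3) + dth/2);
pose(2) = pose(2) + ds * sin(pose(3) + dth/2);
pose(3) = pose(3) + dth;             % slippage errors accumulate here
end

Each call adds a small error from wheel slip and quantisation, which is why position updating against external features is needed to bound the drift.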
Keeping the vehicle's course compliant with the trajectory calculated by the planning algorithms is the job of the position controller. It performs the necessary fine interpolation between given way points, generates steering commands for the drives, and monitors the actual position reported by the localization unit against the requested position. Summarising, one can say that the motion precision of a mobile robot depends on two factors: the accuracy of the localization and the quality of the position controller.
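A minimal sketch of the steering step for a differential-drive vehicle (the gains and the go-to-point law are illustrative assumptions, not the controller of any particular vehicle):

function [v, w] = waypoint_control(pose, wp)
% pose = [x; y; heading] from the localization unit; wp = [x; y] is the
% next way point supplied by the planner.
k_v = 0.5; k_w = 2.0;                        % illustrative gains
dx = wp(1) - pose(1); dy = wp(2) - pose(2);
rho   = hypot(dx, dy);                       % distance to way point
alpha = atan2(dy, dx) - pose(3);             % heading error
alpha = atan2(sin(alpha), cos(alpha));       % wrap to [-pi, pi]
v = k_v * rho * cos(alpha);                  % forward speed command
w = k_w * alpha;                             % turn-rate command
end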
Another important aspect of the navigation of mobile robots is collision avoidance [39,40]. It is activated when the vehicle's sensors detect an obstacle blocking the pre-planned path. Its task is then to steer the vehicle around the obstacle on as optimal a detour as possible; time and resource consumption during the detour need to be minimized as well. Usually this task is done by a reactive motion controller whose inputs are the known environment and the current sensor data. Various concepts exist for this local obstacle avoidance, and these are briefly described in the following.
Contour following
The sensor data are used to drive a minimal-distance course around an obstacle until the vehicle reaches the pre-planned path again [41].
Advantages:
- easy to implement
- fast algorithms available
Disadvantages:
- a static environment is presumed

Edge following
The edges of obstacles are determined, and the edge with the least deviation from the pre-planned course is followed until the vehicle reaches the pre-planned path again.
Disadvantages:
- obstacles need to have a convex surface structure
- the size of obstacles needs to be small compared to the sensor range

Potential fields
All obstacles produce imaginary repulsive forces which act on the vehicle [34,35].
Advantages:
- easy to realize
- fast algorithms available
Disadvantages:
- oscillations around the calculated path
- local minima can trap the vehicle
- doors and narrow passages are difficult to pass
Vector fields
The environment is modelled as a two-dimensional histogram whose cells contain the probabilities of finding obstacles there [39]. An imaginary force vector is then constructed from the histogram cells, which acts similarly to the potential field algorithm (a sketch follows this list).
Advantages:
- easy to realize
Disadvantages:
- high computation power needed
- large changes in movement direction occur
- typical problems of potential field algorithms arise
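A sketch of the force construction in Matlab (the name, the linear cell indexing and the minimum-distance clamp are assumptions for illustration):

function F = vector_field_force(P, cellpos, p_robot, goal)
% P: obstacle probabilities per histogram cell; cellpos: 2xN cell centre
% positions, ordered to match P(:); p_robot, goal: 2x1 positions.
k_rep = 5.0; k_att = 1.0;                   % illustrative gains
g = goal - p_robot;
F = k_att * g / max(norm(g), 1e-9);         % unit pull towards the goal
for n = 1:numel(P)
    v = p_robot - cellpos(:,n);
    d = max(norm(v), 0.1);                  % clamp to avoid blow-up at a cell
    F = F + k_rep * P(n) * v / d^3;         % probability-weighted repulsion
end
end

Summing over every cell on each control cycle is what makes the method computationally expensive, as noted above.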

Unfortunately, most algorithms for obstacle avoidance have some disadvantages. Hence in a real application, obstacle avoidance will often be a combination of various algorithms. These hybrid architectures can then provide reliable local obstacle avoidance.

APPENDICES

APPENDIX A

MATRIX REVIEW

Ref: S B Niku, Introduction to Robotics, Analysis, Systems, Applications, Prentice Hall, 2001.


APPENDIX B

SCHOOL OF MECHANICAL ENGINEERING


MECH3460 ROBOTICS AND MACHINE INTELLIGENCE

PART I

ROBOTICS

FORMULA SHEETS


2. KINEMATICS
2.1 Definitions
2.2 Transformations

(2.1)

(2.2)


(2.3)

(2.4)

where

(2.5)

(2.6)


(2.7)

(2.8)
or

(2.9)

(2.10)
(2.11)


(2.12)
(2.13)

(2.14)

(2.15)

(2.16)

(2.17)

(2.18)

(2.19)

(2.20)


Summary of interpretations of transformation
- Transformation matrix QTR can be used to change the frame in which the position of a point is defined.
- Transformation matrix T can be used to move a point or vector.
- Transformation matrix QTR describes the position and orientation of frame {R} relative to frame {Q}.

(2.21)
(2.22)
(2.23)
(2.24)
(2.25)

(2.26)


(2.27)


Rules for fixing frames to links (e.g. link 2 in Figure 2.11):
- Z2 points along the joint 3 axis (in either direction)
- X2 points along the link 2 perpendicular, away from joint 2
- Y2 completes the cartesian co-ordinate frame

These rules can be extrapolated to the other links. However, they are not entirely applicable to the first and last links:
- Link 0: Z0 should point along the joint 1 axis, but otherwise the frame position can be freely chosen.
- Link 6 (or link n, where n is the number of joints): X6 should be perpendicular to the joint 6 axis, but otherwise the frame position can be freely chosen.

Using these standard frames, four parameters can be defined which uniquely specify the link and joint geometry. Each parameter for link i can be thought of as a successive movement required to map frame {i-1} to frame {i}:
1. Link offset Di: the displacement along the Zi-1 axis to go from the link i-1 perpendicular to the link i perpendicular
2. Joint angle θi: the rotation about Zi-1 required to align Xi-1 with Xi (positive rotation is clockwise looking in the direction of Zi-1)
3. Link length Ai: the length of the perpendicular, i.e. the displacement required in the Xi direction to bring the origin of frame {i-1} coincident with that of frame {i}
4. Link twist αi: the rotation required about Xi to make Zi-1 coincident with Zi (positive rotation is clockwise looking in the direction of Xi)

Figure 2.12 summarises these parameter definitions.
If joint i is revolute then the joint angle θi is the joint variable; if the joint is prismatic, the link offset Di is the variable.


Combining these four successive movements gives the transformation from frame {i-1} to frame {i}:

i-1Ti = Trans(0,0,Di) Rot(Z,θi) Trans(Ai,0,0) Rot(X,αi)

        | cosθi   -sinθi cosαi    sinθi sinαi    Ai cosθi |
      = | sinθi    cosθi cosαi   -cosθi sinαi    Ai sinθi |        (2.28)
        |   0         sinαi          cosαi          Di    |
        |   0           0              0            1     |
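As a quick check of (2.28), a small Matlab function (the name is assumed for illustration) that builds the link transformation from the four parameters:

function T = dh_link(theta, D, A, alpha)
% Builds the {i-1} -> {i} homogeneous transformation of equation (2.28).
ct = cos(theta); st = sin(theta);
ca = cos(alpha); sa = sin(alpha);
T = [ct, -st*ca,  st*sa, A*ct;
     st,  ct*ca, -ct*sa, A*st;
      0,     sa,     ca,    D;
      0,      0,      0,    1];
end

Chaining these transforms for i = 1 to n gives the forward kinematics of the whole arm: 0Tn = 0T1 1T2 ... n-1Tn.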


4. DYNAMICS AND CONTROL

4.1 Inverse dynamics

Outward recursions
For link i = 1 to n, calculate:
1. Angular velocity of link (ωi)
2. Angular acceleration of link (ω̇i)
3. Linear acceleration of link at frame origin (iv̇i)
4. Linear acceleration of link at centroid (iv̇ci)
5. Resultant force acting on link at centroid (iFi)
6. Resultant moment acting on link around centroid (Ni)

Inward recursions
For link i = n to 1, calculate:
1. Force exerted on link i by link i-1 (ifi)
2. Torque exerted on link i by link i-1 (τi)

The equations for each step are as follows.

Outward recursions
For link i = 1 to n, calculate:


1. Angular velocity of link:

(4.1)

2. Angular acceleration of link

(4.2)


3. Linear acceleration of link at frame origin

(4.3)

4. Linear acceleration of link at centroid

(4.4)

5. Resultant force acting on link at centroid

(4.5)

6. Resultant moment acting on link around centroid

(4.6)

Inward recursions
For link i = n to 1, calculate:
1. Force exerted on link i by link i-1

(4.7)

2. Torque exerted on link i by link i-1

(4.8)

Summary of notation
Note: all linear and angular velocities and accelerations are measured relative to a fixed frame (such a frame is often also known as an earth or world frame). However, linear velocities and accelerations, even though measured relative to a fixed frame, can be expressed in any frame like any other vector. In the planar case angular velocities/accelerations are scalars, so this issue does not arise for angular motion.
- Linear accelerations/forces: where Q represents a linear quantity, jQk is that quantity for link k expressed in the frame {j} co-ordinate system. A further subscript (x or y) indicates a particular scalar component of jQk (as used in equation 4.8).
- Angular velocities/accelerations/torques: where Q represents an angular quantity, Qk is that quantity for link k.
- Links are defined by their dimensions (length Li and centroid position ci) and inertial properties (mass mi and mass moment of inertia Ii).
- Rotation matrices jRk are as defined previously, except that only 2x2 matrices are required in the planar case.

External forces exerted on manipulator
- The Recursive Newton-Euler method can also be used to calculate joint torques when external forces act on the manipulator, including gravity.
- External forces or moments exerted by the manipulator at/around a point on a link can simply be added to the force or moment equations (4.7) and (4.8) during the inward recursions.
- A particularly common special case is end-effector force and torque, represented by fn+1 and τn+1 in equations (4.7) and (4.8) (fn+1 and τn+1 would be defined as the force/torque exerted by the manipulator, not on the manipulator).
- There is an easy way to include gravity: say that link 0 is accelerating vertically upwards at 1 g. Thus, assuming Y0 points vertically up, the following should be used in equation (4.3):

0v̇0 = [0, g]T                                                      (4.9)
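To make the recursions concrete, here is a minimal Matlab sketch of planar inverse dynamics for an n-link arm with revolute joints. It works entirely in the world frame (the formula sheet expresses quantities in link frames, but the resulting joint torques are the same), uses the 1 g gravity trick of (4.9), and assumes zero end-effector load; the function name is illustrative.

function tau = rne_planar(q, qd, qdd, L, c, m, I, g)
% q, qd, qdd: joint angles, velocities, accelerations (n-vectors).
% L, c, m, I: link lengths, centroid offsets, masses, centroid inertias.
n  = numel(q);
th = cumsum(q); w = cumsum(qd); wd = cumsum(qdd); % absolute angle, ang. vel/acc
aj = [0; g];                                      % gravity trick, eq. (4.9)
ac = zeros(2, n);                                 % centroid accelerations
uu = zeros(2, n);                                 % unit vectors along links
for i = 1:n                                       % outward recursion
    u = [cos(th(i)); sin(th(i))]; perp = [-u(2); u(1)];
    uu(:,i) = u;
    ac(:,i) = aj + wd(i)*c(i)*perp - w(i)^2*c(i)*u;  % centroid acceleration
    aj      = aj + wd(i)*L(i)*perp - w(i)^2*L(i)*u;  % acceleration of next joint
end
cross2 = @(a,b) a(1)*b(2) - a(2)*b(1);            % planar cross product
f = [0; 0]; t = 0;                                % zero end-effector load
tau = zeros(n, 1);
for i = n:-1:1                                    % inward recursion
    F = m(i)*ac(:,i);                             % resultant force at centroid
    N = I(i)*wd(i);                               % resultant moment at centroid
    t = N + t + cross2(c(i)*uu(:,i), F + f) ...   % moment balance about centroid
          + cross2((L(i)-c(i))*uu(:,i), f);
    f = F + f;                                    % force passed down the chain
    tau(i) = t;
end
end

With qd and qdd set to zero the function returns the gravity-compensation torques; for a single horizontal link it gives tau = m*g*c, as expected from a static moment balance.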


(4.13)

(4.14)

or

(4.15)
(4.16)


8. Localisation
Basic Concepts


Trilateration

Multilateration


Trilateration algorithm
- forms pairs of beacons from the set of 3
- each pair evaluates two possible solutions
- this forms a set of 6 possible locations
- the closest 3 points are averaged to get a reasonable solution
- can be performed using 3 beacon nodes (a sketch follows)