
Dr. V. Umasankar
Joint space
Open loop ⇒ Computed torque control
Closed loop
Regulation problem ⇒ PD (Proportional plus Derivative) + gravity compensation
Trajectory tracking problem
• Nominal conditions ⇒ Inverse dynamic control
• With uncertainty ⇒ Robust control & Adaptive control
Operational space
Closed loop
Regulation problem ⇒ PD + gravity compensation
Trajectory tracking problem
• Nominal conditions ⇒ Inverse dynamic control
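As a reminder of the joint-space regulation law listed above, here is a minimal sketch of its standard form, tau = Kp (q_des - q) - Kd dq + g(q); the gains and the two-link gravity model are illustrative, not taken from these slides.

import numpy as np

# Standard PD + gravity compensation regulation law in joint space:
#   tau = Kp (q_des - q) - Kd * dq + g(q)
# Gains and the gravity model below are illustrative only.
Kp = np.diag([80.0, 60.0])
Kd = np.diag([8.0, 6.0])

def gravity_torque(q):
    """Hypothetical gravity term of a planar 2-link arm (unit masses/lengths)."""
    g0 = 9.81
    return np.array([
        1.5 * g0 * np.cos(q[0]) + 0.5 * g0 * np.cos(q[0] + q[1]),
        0.5 * g0 * np.cos(q[0] + q[1]),
    ])

def pd_gravity(q, dq, q_des):
    return Kp @ (q_des - q) - Kd @ dq + gravity_torque(q)

tau = pd_gravity(q=np.array([0.2, 0.3]), dq=np.zeros(2),
                 q_des=np.array([0.5, 0.1]))
print(tau)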
The robot has to interact with the environment, either to manipulate it, to avoid colliding with it, or to interact with other devices or humans.
In order to interact effectively with the environment, the robot must be endowed with external (exteroceptive) sensors, which make the robot controller aware of the situation.

Common sensors used to measure such interaction are force sensors and vision sensors.
A robot commonly interacts with the working environment, manipulating objects and performing operations on surfaces (think of grinding, deburring, part assembly).
In modern robotics, situations are common where the robot physically interacts with the operator (think of telemanipulation or rehabilitation).
We are now dealing with the modelling of the environment and the use of external sensors.
A first way to address the interaction with the environment is to endow the manipulator with devices that facilitate the execution of the task in a passive way: the RCC (Remote Centre of Compliance) is used in assembly tasks (peg-in-hole).

It is typically placed between the robot’s wrist and the gripper.
The RCC lets the gripper assembly move in the plane perpendicular to the peg’s axis. It also allows the peg to rotate.
With the RCC, the forces generated by any misalignment can be compensated in a passive way.
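As a toy illustration of this passive behaviour, the RCC can be modelled as a compliance matrix C mapping contact forces to corrective displacements; the numbers below are hypothetical.

import numpy as np

# Toy model of an RCC as a passive compliance: lateral contact forces produce
# lateral corrective displacements, without any active control. Values are
# hypothetical.
C = np.diag([2e-4, 2e-4, 1e-6])   # m/N: compliant in x, y; stiff along z (peg axis)

f_contact = np.array([15.0, -10.0, 40.0])   # N, from a misaligned insertion
delta = C @ f_contact                        # passive corrective displacement
print(delta)    # the peg recenters by ~3 mm in x and 2 mm in y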
During more general interaction tasks, the use of a purely positional control strategy (the same adopted in free motion) may lead to problems due to positioning errors and uncertainties in task planning, related to an incomplete knowledge of the environment.
In many applications, controlling the contact force is as important as controlling the speed of the robot along the prescribed direction.
Examples where an active control of the interaction is desirable include:
• mechanical machining (deburring, surface finishing, polishing, assembly, ...)
• tele-manipulation
• cooperation of multi-manipulator systems
• dexterous robot hands
• physical human-robot interaction
For an active control of interaction it is necessary to use devices for force (and moment) measurement.
Force/torque sensors are devices that return the
measurements of three components of force and three
components of moment with respect to a local frame.
Sensors are based on strain gauges, i.e. devices that can
measure a strain. The strain induces a variation in the
resistance of a circuit.
Strain gauges are suitably mounted, in such a way as to return information from which the three components of force and moment can be computed.
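As a concrete sketch of that computation, six-axis sensors typically apply a calibration matrix, identified experimentally by the manufacturer, to the gauge bridge readings; the matrix and readings below are hypothetical.

import numpy as np

# Hypothetical 6x6 calibration matrix C: it maps six strain-gauge bridge
# voltages to the wrench (Fx, Fy, Fz, Mx, My, Mz) in the sensor's local frame.
C = np.array([
    [12.1,  -0.3,   0.2,  11.9,   0.1,  -0.4],
    [-0.2,  12.0,  -0.1,   0.3,  12.2,   0.2],
    [ 6.8,   6.9,   7.0,   6.8,   6.9,   7.1],
    [ 0.05, -0.40,  0.41,  0.04, -0.39,  0.40],
    [ 0.38,  0.03, -0.37,  0.39,  0.02, -0.38],
    [-0.21,  0.20, -0.22,  0.21, -0.20,  0.22],
])

g = np.array([0.012, -0.003, 0.008, 0.011, -0.002, 0.007])  # bridge voltages [V]

wrench = C @ g   # three force and three moment components
print(wrench)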
When studying the interaction of the robot with the environment, the problem arises of how to include the effect of a force applied at the end-effector in the model.
Suppose then that a force is applied at the end-effector.
What are the joint torques that keep the system in equilibrium?
As a matter of fact, a whole system of forces may act at the end-effector. How can this system be reduced?
From classical mechanics: to one force applied at a point (the resultant) and one torque.
It follows that we will always be able to represent a system of forces with two vectors: a resultant force and a resultant moment (together, a wrench).
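The answer to the equilibrium question is the standard statics result tau = J(q)^T * gamma, where gamma stacks the resultant force and moment and J(q) is the geometric Jacobian. A minimal sketch, with a hypothetical planar two-link Jacobian:

import numpy as np

def joint_torques_for_wrench(J, wrench):
    """Joint torques that balance an external wrench at the end-effector.

    Standard statics result: tau = J(q)^T * gamma, where gamma stacks the
    resultant force and moment and J is the geometric Jacobian.
    """
    return J.T @ wrench

# Hypothetical planar 2-link arm (unit link lengths, q1 = q2 = pi/4):
# J maps joint velocities to the end-effector linear velocity (vx, vy).
q1, q2 = np.pi / 4, np.pi / 4
J = np.array([
    [-np.sin(q1) - np.sin(q1 + q2), -np.sin(q1 + q2)],
    [ np.cos(q1) + np.cos(q1 + q2),  np.cos(q1 + q2)],
])

f = np.array([5.0, 0.0])          # 5 N pushing along x at the end-effector
tau = joint_torques_for_wrench(J, f)
print(tau)                        # torques keeping the arm in equilibrium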
Force-based interaction control strategies can be divided into two categories:
• Impedance/admittance control. The goal is to assign a prescribed dynamic relation between interaction forces and position errors. We want the manipulator, in case of unplanned interactions, to behave like a generalized mass-spring-damper system.
• Hybrid position and force control. We separate directions which are only position controlled from directions which are only force/torque controlled. In the directions constrained by the environment we want a force/torque with a specified value to be established.
Explicit control: the force (or the impedance) is directly assigned, acting on the control variables (joint torques).
Implicit control: the force (or the impedance) is indirectly assigned, acting on the set-points of the position control loops.
The flow of power between two physical systems can always be defined as the product of two conjugate quantities: a (generalized) effort and a (generalized) flow.
In the electrical framework, the flow is the current, while the effort is the voltage. In the mechanical framework, instead, the flow is the (linear or angular) velocity and the effort is the force (or the moment).
The mechanical impedance is then defined as the dynamical relation that is established between force and velocity (or displacement) for a mechanical system. The admittance is just the reciprocal of the impedance.
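As a minimal sketch (notation not from these slides): for a mass-spring-damper with mass M, damping D and stiffness K, the impedance relating force to velocity is Z(s) = M s + D + K/s, and the admittance is 1/Z(s); the snippet evaluates it on the imaginary axis.

import numpy as np

# Mass-spring-damper impedance Z(s) = M*s + D + K/s, relating force (effort)
# to velocity (flow). Parameter values are illustrative only.
M, D, K = 2.0, 10.0, 500.0

def impedance(omega):
    s = 1j * omega
    return M * s + D + K / s

for omega in (1.0, np.sqrt(K / M), 100.0):   # below, at, above resonance
    Z = impedance(omega)
    print(f"omega = {omega:6.2f} rad/s  |Z| = {abs(Z):8.2f}  Y = 1/Z = {1/Z:.4f}")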
Impedance control aims at making the manipulator, position controlled and in interaction with the environment, assume a desired mechanical impedance, like a generalized mass-spring-damper system.
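A minimal one-degree-of-freedom admittance (implicit) sketch under these assumptions: the measured contact force drives the target dynamics M a + D v + K x = f_ext, and the resulting displacement corrects the position set-point of the inner loop. All names and values are illustrative.

import numpy as np

# Minimal 1-DOF admittance filter: the measured contact force f_ext drives the
# target dynamics M*a + D*v + K*x = f_ext, and the resulting displacement x
# offsets the position set-point sent to the robot's inner position loop.
M, D, K = 1.0, 20.0, 100.0
dt = 0.001                      # control period [s]

x, v = 0.0, 0.0                 # state of the target dynamics
x_ref_nominal = 0.30            # planned set-point [m]

def admittance_step(f_ext):
    """One integration step; returns the corrected position set-point."""
    global x, v
    a = (f_ext - D * v - K * x) / M
    v += a * dt
    x += v * dt
    return x_ref_nominal + x

# Simulate a constant 10 N contact force for 0.5 s:
for _ in range(500):
    x_ref = admittance_step(10.0)
print(x_ref)   # settles near x_ref_nominal + f_ext/K = 0.40 m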
Examples of impedance/admittance control are found in physical human-robot interaction.
An alternative way to perform an interaction task is to assign reference values for the forces and the positions, consistent with the geometry of the environment.
Some directions can in fact be subject to constraints on position, others to constraints on the forces that can be applied.
In general, it is not possible to impose both the force and the position along the same direction.
We make the following distinction:
Natural constraints: imposed by the environment along each degree of freedom of the task; they depend on the geometry of the task.
Artificial constraints: imposed by the control system, related to the task execution strategy.
For example, when sliding over a rigid frictionless surface, the environment naturally constrains the velocity along the surface normal and the force in the tangent plane; the controller then artificially assigns the desired normal force and the desired tangential velocity.
Hybrid position/force control is based on a nominal model of the interaction. Inconsistencies may however occur in the measurements, due e.g. to:
• Friction at the contact (a force is detected in a nominally free direction).
• Compliance in the robot structure and/or at the contact (a displacement is detected in a direction which is nominally constrained in motion).
• Uncertainty in the environment geometry at the contact.
The first two sources of inconsistency are automatically filtered out through the selection matrices. The third source of inconsistency can be mitigated by a real-time estimation process.
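A minimal sketch of how selection matrices might act (not necessarily the formulation intended in these slides): S picks the position-controlled directions and I - S the force-controlled ones, so measurements along the "wrong" directions simply do not reach the respective loop.

import numpy as np

# Minimal hybrid position/force combination for a 3-DOF translational task in
# which z is force controlled and x, y are position controlled. S selects
# position-controlled directions; (I - S) the force-controlled ones.
S = np.diag([1.0, 1.0, 0.0])
I = np.eye(3)

Kp = 200.0        # position loop gain (illustrative)
Kf = 0.5          # force loop gain (illustrative)

def hybrid_action(x_des, x_meas, f_des, f_meas):
    """Cartesian command combining the two loops through the selection matrix.

    Forces or displacements measured along the 'wrong' directions are filtered
    out by S and (I - S): this is how the first two inconsistency sources
    listed above are handled automatically.
    """
    u_pos = Kp * (x_des - x_meas)           # acts only where S = 1
    u_force = Kf * (f_des - f_meas)         # acts only where S = 0
    return S @ u_pos + (I - S) @ u_force

u = hybrid_action(
    x_des=np.array([0.30, 0.10, 0.00]), x_meas=np.array([0.29, 0.10, 0.00]),
    f_des=np.array([0.00, 0.00, -10.0]), f_meas=np.array([0.20, 0.00, -8.0]),
)
print(u)   # the friction force along x (0.2 N) does not affect the command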
Artificial vision devices are useful sensors for robotics because they mimic the human sense of sight and allow information to be gathered from the environment without contact.
Nowadays several robotic controllers integrate vision systems.
The typical use of vision in industrial robotics is to
detect an object in the robot’s scene, whose position
(and orientation) is then used for online path planning
in order to drive the robot to the identified object.
Online re-planning of the path can also be performed
when the vision system detects some unexpected change
in the path the robot is supposed to follow (for example
a corner in a contouring task).
Alternatively, visual measurements can be used in a real-time feedback loop in order to improve position control of the end effector: this is the concept of visual servoing.
The camera has to be calibrated before usage in a robotic vision system:
Internal calibration: determination of the intrinsic parameters of the camera (like the focal length λ), as well as of some additional distortion parameters due to lens imperfections and misalignments in the optics.
External calibration: determination of the extrinsic parameters of the camera, i.e. the position and the orientation of the camera with respect to a reference frame.
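As a minimal sketch of what these parameters do, the standard pinhole model projects a 3D point through the intrinsics K and the extrinsics [R | t]; all numeric values below are illustrative.

import numpy as np

# Standard pinhole projection: intrinsics K (internal calibration) and
# extrinsics [R | t] (external calibration) map a 3D world point to pixels.
fx, fy = 800.0, 800.0          # focal length in pixel units
cx, cy = 320.0, 240.0          # principal point
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

R = np.eye(3)                  # camera aligned with the world frame
t = np.array([0.0, 0.0, 0.5])  # camera 0.5 m behind the world origin

def project(X_world):
    """Pixel coordinates of a 3D point (ignoring lens distortion)."""
    X_cam = R @ X_world + t
    u, v, w = K @ X_cam
    return np.array([u / w, v / w])

print(project(np.array([0.10, 0.05, 1.0])))   # -> roughly [373.3, 266.7]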
The first decision to be made when setting up a vision-based control system is where to place the camera. The camera can be:
• mounted in a fixed location in the workspace (eye-to-hand configuration), so that it can observe the manipulator and any objects to be manipulated;
• attached to the robot above the wrist (eye-in-hand configuration).
Robotic vision control systems can be classified based on various criteria.
A first classification is based on the following question:
Is the control structure hierarchical, with the vision system providing set-points as input to the robot’s joint-level controller, or does the visual controller directly compute the joint-level inputs?
In the first case: dynamic look-and-move.
In the second case: direct visual servoing.
Advantages of the dynamic look-and-move approach:
• the reduced sampling rate of the visual signal does not compromise the overall performance of the position control system;
• several industrial robot controllers only allow operating at the position set-point level;
• the robot can be seen as an ideal positioner in the Cartesian space, thus simplifying the design of the vision control system.
A second classification is based on the following question:
Is the error signal defined in 3D (task space) coordinates or directly in terms of image features?
In the first case: position-based control.
In the second case: image-based control.

Position-based control:
vision data are used to build a partial 3D representation of the world; pose estimation algorithms are computationally intensive (a real-time implementation is required) and sensitive to errors in camera calibration.
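A minimal position-based sketch: the pose-estimation step (the computationally intensive, calibration-sensitive part) is stubbed out, and a simple proportional law drives the Cartesian error to zero. The routine and values are hypothetical.

import numpy as np

# Minimal position-based visual-servoing step: a pose-estimation routine
# (stubbed here) returns the target position, and a proportional law drives
# the Cartesian error to zero.
lam = 0.8   # illustrative gain

def estimate_target_position(image):
    """Stub for the 3D reconstruction / pose estimation step."""
    return np.array([0.45, 0.02, 0.30])      # hypothetical estimate [m]

def pbvs_step(image, p_ee):
    p_target = estimate_target_position(image)
    return lam * (p_target - p_ee)           # Cartesian velocity set-point

v = pbvs_step(image=None, p_ee=np.array([0.40, 0.00, 0.30]))
print(v)   # -> [0.04, 0.016, 0.0]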
Image-based control:
uses the image data directly to control the robot motion; an error function is defined in terms of quantities that can be directly measured in an image, and a control law is constructed that maps this error directly to robot motion.
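A minimal image-based sketch under common assumptions (not necessarily the law intended here): the classical choice maps the feature error e through the pseudoinverse of the point's interaction matrix L to a camera velocity, v = -lambda * pinv(L) e, with the feature depth Z assumed known.

import numpy as np

# Classical image-based visual servoing step for one point feature (x, y) in
# normalized image coordinates: v = -lambda * pinv(L) @ e, with L the point's
# interaction matrix. The depth Z must be assumed or estimated.
lam = 0.5                                   # control gain (illustrative)

def interaction_matrix(x, y, Z):
    return np.array([
        [-1 / Z, 0.0, x / Z, x * y, -(1 + x**2), y],
        [0.0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_step(s, s_des, Z):
    """Camera twist (vx, vy, vz, wx, wy, wz) reducing the feature error."""
    e = s - s_des
    L = interaction_matrix(s[0], s[1], Z)
    return -lam * np.linalg.pinv(L) @ e

v = ibvs_step(s=np.array([0.10, -0.05]), s_des=np.zeros(2), Z=1.0)
print(v)   # twist sent to the robot (in dynamic look-and-move: as a set-point)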
