Umasankar
Joint space
  Open loop ⇒ Computed torque control
  Closed loop
    Regulation problem ⇒ PD (Proportional plus Derivative) + gravity compensation
    Trajectory tracking problem
      • Nominal conditions ⇒ Inverse dynamic control
      • With uncertainty ⇒ Robust control & Adaptive control
Operational space
  Closed loop
    Regulation problem ⇒ PD + gravity compensation
    Trajectory tracking problem
      • Nominal conditions ⇒ Inverse dynamic control
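The regulation scheme above (PD + gravity compensation) can be sketched as follows. The control law u = Kp (q_d − q) − Kd q̇ + g(q) is standard; the two-link gravity model and the gain values here are illustrative assumptions, not taken from the notes.

```python
import numpy as np

def gravity(q):
    """Hypothetical gravity torque for a 2-link planar arm (illustrative masses/lengths)."""
    g0 = 9.81
    return np.array([
        2.0 * g0 * np.cos(q[0]) + 0.5 * g0 * np.cos(q[0] + q[1]),
        0.5 * g0 * np.cos(q[0] + q[1]),
    ])

def pd_gravity_control(q, dq, q_des, Kp, Kd):
    """PD + gravity compensation: u = Kp (q_d - q) - Kd dq + g(q).

    Drives the joint vector q to the constant set-point q_des
    (regulation problem, joint space)."""
    return Kp @ (q_des - q) - Kd @ dq + gravity(q)
```

At the set-point (q = q_d, q̇ = 0) the PD terms vanish and the commanded torque reduces to the gravity compensation g(q), which is what holds the arm in place.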
The robot has to interact with the environment: to manipulate it, to avoid colliding with it, or to interact with other devices or humans.
In order to interact effectively with the environment, the robot must be endowed with external (exteroceptive) sensors, which make the robot controller aware of the situation.
External calibration:
Determination of the extrinsic parameters of the camera, i.e. the position and the orientation of the camera with respect to a reference frame.
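Once the extrinsic parameters are known, a point expressed in the reference frame can be mapped into the camera frame. A minimal sketch, where R and t denote the (assumed known) orientation and position of the camera in the reference frame and all numeric values are illustrative:

```python
import numpy as np

def world_to_camera(p_world, R, t):
    """Map a 3D point from the reference frame to the camera frame.

    R: 3x3 rotation giving the camera orientation in the reference frame.
    t: position of the camera origin in the reference frame.
    The inverse transform R^T (p - t) expresses the point as seen by the camera."""
    return R.T @ (p_world - t)

# Usage: camera aligned with the reference frame, displaced 1 m along z.
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
p_cam = world_to_camera(np.array([1.0, 2.0, 3.0]), R, t)
```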
The first decision to be made when setting up a vision-based
control system is where to place the camera.
The camera can be:
• mounted in a fixed location in the workspace (eye-to-hand configuration), so that it can observe the manipulator and any objects to be manipulated, or
• attached to the robot above the wrist (eye-in-hand configuration).
Robotic vision control systems can be classified based on various
criteria.
A first classification is based on the following question:
Is the control structure hierarchical, with the vision system
providing set-points as input to the robot's joint-level controller, or
does the visual controller directly compute the joint-level inputs?
In the first case: dynamic look and move.
In the second case: direct visual servoing.
Advantages of the dynamic look and move
approach:
• the reduced sampling rate of the visual signal does not compromise the overall performance of the position control system;
• several industrial robot controllers only allow operation at the position set-point level;
• the robot can be seen as an ideal positioner in the Cartesian space, thus simplifying the design of the vision control system.
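The hierarchical structure can be sketched as a two-rate loop: the vision system updates the set-point at a low rate, while the inner position loop runs every tick and treats the robot as an ideal positioner. The rates, gain, and 1-D "robot" below are illustrative assumptions.

```python
VISION_PERIOD = 10  # inner-loop ticks per vision sample (vision runs at a lower rate)

def run_look_and_move(x0, target, steps=200, k_inner=0.2):
    """Dynamic look-and-move sketch in one Cartesian coordinate.

    The vision system only refreshes the set-point every VISION_PERIOD ticks;
    between samples, the high-rate inner position loop tracks the held set-point."""
    x = x0
    set_point = x0  # held between vision samples
    for k in range(steps):
        if k % VISION_PERIOD == 0:
            set_point = target  # vision provides a new Cartesian set-point
        x = x + k_inner * (set_point - x)  # ideal positioner: first-order inner loop
    return x
```

Because the inner loop keeps regulating toward the last set-point between vision samples, the low visual sampling rate does not destabilize the position control, which is the first advantage listed above.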
A second classification is based on the following question:
Is the error signal defined in 3D (task space) coordinates or
directly in terms of image features?
In the first case: position-based visual servoing.
In the second case: image-based visual servoing.
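The two error definitions behind this classification can be sketched side by side. The pinhole projection with unit focal length and the point coordinates are illustrative assumptions.

```python
import numpy as np

def task_space_error(p_des, p_cur):
    """Error defined directly in 3D task-space coordinates."""
    return p_des - p_cur

def image_feature(p_cam):
    """Project a 3D point (camera frame) to normalized image coordinates
    with a unit-focal-length pinhole model: (X/Z, Y/Z)."""
    X, Y, Z = p_cam
    return np.array([X / Z, Y / Z])

def image_space_error(p_des_cam, p_cur_cam):
    """Error defined in terms of image features, i.e. after projection."""
    return image_feature(p_des_cam) - image_feature(p_cur_cam)
```

In the first case the controller needs a 3D reconstruction of the target pose; in the second it works directly on measured image features, so no explicit 3D estimate of the error is required.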