
Second Order Sliding Mode Visual Tracking in Finite Time for Uncertain Planar Manipulators with Uncalibrated Camera

J. D. Fierro-Rojas ∗  V. Parra-Vega ∗  A. Espinosa-Romero ∗∗

∗ Mechatronics Division - CINVESTAV, México
(jfierro,vparra)@mail.cinvestav.mx
∗∗ Institute for Applied Mathematics and Systems - UNAM, México
arturoe@cic3.iimas.unam.mx

Abstract: This paper considers the problem of tracking control of planar robot manipulators through visual servoing under uncertain knowledge of the robot and camera parameters in a fixed-camera configuration. We design a controller based on a passivity-based second order sliding mode approach that achieves finite-time convergence of tracking errors specified in the screen coordinate frame by introducing a time base generator into the sliding surface. Simulation results for a two-degree-of-freedom direct-drive manipulator with an uncalibrated CCD camera are presented to illustrate the controller's performance.

Keywords: Control of robots, visual servoing, uncertain robot dynamics, camera calibration, second order sliding mode control.

1. INTRODUCTION

For visual servo controllers to deliver satisfactory performance under demanding requirements, including high-speed tasks and direct-drive robot actuators, the robot dynamics must be taken into account. Nevertheless, most previous works assumed ideal performance of the joint servo mechanism and ignored the robot dynamics. As solutions to this problem, adaptation methods have been proposed (Shen et al., 2001), (Hsu and Aquino, 1999), (Bishop and Spong, 1997), which guarantee local tracking for the dynamic model of robot arms subject to uncertainty in the parameters of the vision system. These schemes yield local tracking by exploiting the fact that the rotation matrix is constant, and formal and rigorous stability analyses support these results. However, these papers assume knowledge of the analytic Jacobian matrix, and furthermore, they are singular at rotation angle θ = π/2. In contrast, a first order sliding mode (1SM) controller proposed by (Fierro-Rojas et al., 2002) shows global tracking for planar robots when all physical robot and vision parameters are considered unknown. Notice that our previous approach (Fierro-Rojas et al., 2002) is not singular at θ = π/2 and does not require knowledge of the Jacobian matrix.

In this paper, and similarly to (Fierro-Rojas et al., 2002), we develop a second order sliding mode (2SM) visual feedback controller with global tracking for planar manipulators in an image-based approach under unknown parameters. A change of coordinates parameterized by a TBG is introduced into the sliding surface such that finite-time convergence of the tracking error arises. However, the uncertainty in the camera parameters does not allow chattering-free control. We stress that the semi-continuous 2SM control yields global exponential tracking versus the stable regime of the piecewise continuous 1SM control. To illustrate the performance of the proposed controller, we present simulations that confirm the expected convergence behavior of the trajectory errors in screen coordinates.
2. ROBOT-CAMERA MODEL

Consider the set-up of a planar manipulator observed by a vision system, as depicted in Fig. 1. In order to describe the motion of the end-effector in a screen coordinate system, some coordinate frames are defined, namely the robot base frame ΣB = {XB, YB, ZB}, the end-effector frame ΣE = {XE, YE, ZE}, the camera frame ΣC = {XC, YC, ZC}, the CCD image frame ΣI = {XI, YI} and the screen frame ΣS = {u, v}, which are referred to in the following subsections.

Fig. 1. System coordinate frames: ZB ∥ ZC and angle(XB, XC) = θ.

2.1 Camera Model and Forward Kinematics

The position of the robot end-effector in the screen coordinate frame ΣS, based on the perspective projection model (Hutchinson et al., 1996), is given by 1

x = [u, v]^T = (α λf / (λf − z)) diag(−1, 1) R ( f(q) − C OB ) + diag(−α, α) OI + OX    (1)

where α > 0 is the scale factor in pixels; z > 0 is the separation distance between the planes XC−YC ∈ ΣC and XB−YB ∈ ΣB; λf > 0 is the focal length; R = R(θ) ∈ IR^{2×2} denotes the rotation matrix of ΣB with respect to ΣC; f(q) is the direct kinematics function; C OB = [C Ob1, C Ob2]^T is the position of ΣC with respect to ΣB; OI is the position of the intersection of the optical axis with respect to ΣI; and finally, OX = [Ox1, Ox2]^T denotes the origin of ΣI in the ΣS coordinate system.

1 For a detailed procedure to obtain the explicit relationship see for instance (Kelly et al., 1996).
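To make the mapping (1) concrete, the following minimal Python sketch projects the end-effector of a planar two-link arm onto screen coordinates. It uses the numerical values of Table 1, while the screen origin OX, the link lengths and the joint configuration used in the example are illustrative placeholders not prescribed by the paper.

```python
import numpy as np

# Hedged sketch of the fixed-camera model (1) for a planar 2-link arm.
l1, l2 = 0.4, 0.3                       # link lengths [m] (Table 1)
alpha, lam_f, z = 72727.0, 0.008, 1.5   # scale [pixels/m], focal length [m], depth [m]
theta = np.pi / 8                       # camera rotation about the optical axis [rad]
C_O_B = np.array([-0.2, -0.1])          # camera position w.r.t. the robot base [m]
O_I = np.array([0.0005, 0.0003])        # optical-axis offset in the image frame [m]
O_X = np.array([320.0, 240.0])          # image-frame origin in screen coords (assumed)

R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.diag([-1.0, 1.0])                # sign flip of the horizontal axis, diag(-1, 1)

def f(q):
    """Direct kinematics f(q) of the planar 2-link arm."""
    return np.array([l1*np.cos(q[0]) + l2*np.cos(q[0] + q[1]),
                     l1*np.sin(q[0]) + l2*np.sin(q[0] + q[1])])

def screen_position(q):
    """Perspective projection (1): joint coordinates -> screen pixels."""
    k = alpha * lam_f / (lam_f - z)
    return k * S @ R @ (f(q) - C_O_B) + np.diag([-alpha, alpha]) @ O_I + O_X

print(screen_position(np.array([0.3, 0.5])))
```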
2.2 Differential Kinematics

By differentiating equation (1) we obtain the velocity of the end-effector with respect to the screen frame

ẋ = (α λf / (λf − z)) diag(−1, 1) R J q̇ = Rα J q̇    (2)

where J = J(q) is the Jacobian matrix of the manipulator and

Rα = (α λf / (λf − z)) diag(−1, 1) R.    (3)
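As a companion to the projection sketch above, this hedged snippet evaluates Rα and the planar two-link Jacobian so that image velocities can be computed from joint velocities via (2); the joint state used at the end is an arbitrary example, not a value from the paper.

```python
import numpy as np

# Hedged sketch of the image velocity map (2)-(3): xdot = R_alpha @ J(q) @ qdot.
l1, l2 = 0.4, 0.3
alpha, lam_f, z, theta = 72727.0, 0.008, 1.5, np.pi / 8

def jacobian(q):
    """Analytic Jacobian J(q) of the planar 2-link arm."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1*s1 - l2*s12, -l2*s12],
                     [ l1*c1 + l2*c12,  l2*c12]])

def R_alpha():
    """Scaled rotation R_alpha of equation (3)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return alpha * lam_f / (lam_f - z) * np.diag([-1.0, 1.0]) @ R

q, qdot = np.array([0.3, 0.5]), np.array([0.1, -0.2])
xdot = R_alpha() @ jacobian(q) @ qdot    # end-effector velocity in the screen frame
```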
2.2.1. Inverse differential kinematics  According to equation (2), the following mapping appears

q̇ = J^{-1} Rα^{-1} ẋ    (4)

to establish an explicit dependence of the joint velocity coordinates in terms of the image velocity vector.

Proposition 1. For any vector z ∈ IR^2 the product J^{-1} Rα^{-1} z can be represented in the following linear form

J^{-1} Rα^{-1} z = Yv(q, z) θv    (5)

whose elements of Yv(q, z) ∈ IR^{2×p2} depend neither on the rotation matrix nor on the link lengths, and θv ∈ IR^{p2×1} is composed of parameters of the rotation matrix and parameters of the Jacobian matrix.

2.3 Robot Dynamics

In the absence of friction or other disturbances, the dynamics of a serial n-link rigid, non-redundant, fully actuated robot manipulator can be written as follows 2

H(q) q̈ + C(q, q̇) q̇ + G(q) = τ    (6)

2 Without loss of generality, our controller can be applied with similar results if we consider dynamic friction, for instance the LuGre model.
where q ∈ IR^n is the vector of joint displacements, τ ∈ IR^{n×1} stands for the vector of applied joint torques, H(q) ∈ IR^{n×n} is the symmetric positive definite manipulator inertia matrix, C(q, q̇)q̇ ∈ IR^n stands for the vector of centripetal and Coriolis torques, and finally G(q) ∈ IR^n is the vector of gravitational torques. Two important properties of robot dynamics useful for the stability analysis are the following.

Property 1. The time derivative of the inertia matrix and the centripetal and Coriolis matrix satisfy the skew-symmetry relation

X^T ( (1/2) Ḣ(q) − C(q, q̇) ) X = 0,  ∀X ∈ IR^n    (7)

Property 2. Robot dynamics are linearly parameterizable in terms of a known regressor Yb = Yb(q, q̇, q̈) ∈ IR^{n×p1} and a vector θb ∈ IR^{p1} of robot parameters as follows

H(q) q̈ + C(q, q̇) q̇ + G(q) = Yb θb    (8)
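As a hedged illustration of model (6), the sketch below assembles H(q), C(q, q̇) and G(q) for the planar two-link arm used later in the simulations; the parameter values mirror Table 1, but the closed-form entries are the textbook two-link expressions (including a vertical-plane gravity term), which are an assumption, since the paper does not list them explicitly.

```python
import numpy as np

# Two-link planar arm dynamics H(q)qdd + C(q,qd)qd + G(q) = tau, equation (6).
l1, lc1, lc2 = 0.4, 0.1776, 0.1008
m1, m2, I1, I2, g = 9.1, 2.5714, 0.284, 0.0212, 9.8

def H(q):
    c2 = np.cos(q[1])
    h11 = m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I1 + I2
    h12 = m2*(lc2**2 + l1*lc2*c2) + I2
    return np.array([[h11, h12], [h12, m2*lc2**2 + I2]])

def C(q, qd):
    h = -m2*l1*lc2*np.sin(q[1])
    return np.array([[h*qd[1], h*(qd[0] + qd[1])],
                     [-h*qd[0], 0.0]])

def G(q):
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([(m1*lc1 + m2*l1)*g*c1 + m2*lc2*g*c12,
                     m2*lc2*g*c12])

def torque(q, qd, qdd):
    """Inverse dynamics: torque that realizes the motion (q, qd, qdd)."""
    return H(q) @ qdd + C(q, qd) @ qd + G(q)
```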
The time base generator concept, necessary to achieve finite-time visual tracking, and the problem statement are discussed in the following sections.

3. TIME BASE GENERATOR

In (Parra-Vega and Hirzinger, 2000), a well-posed TBG algorithm is proposed to guarantee finite-time convergence of robot manipulators. For completeness we present the basics of TBG-based control (Parra-Vega and Hirzinger, 2000). Consider the following first order time-varying ordinary differential equation

ẏ = −λ(t) y    (9)

where

λ(t) = λ0 ξ̇ / ((1 − ξ) + δ)    (10)

with λ0 = 1 + ε, ε ≪ 1, and 0 < δ ≪ 1. The time base generator ξ = ξ(t) ∈ C^2 must be provided by the user such that ξ goes smoothly from 0 to 1 in finite time t = tb > 0, and ξ̇ = ξ̇(t) is a bell-shaped derivative of ξ such that ξ̇(t0) = ξ̇(tb) ≡ 0. Under these conditions, the solution of (9) is y(t) = y(t0)[(1 − ξ) + δ]^{1+ε}, with λ(tb) > 0. Note that y(tb) = y(t0) δ^{1+ε} > 0 can be made arbitrarily small in an arbitrary finite time tb. Also note that the transient of y(t) is shaped by ξ(t) over time.

Thus, if our controller yields a closed-loop equation similar to (9), with y the position tracking error of the robot, then finite-time convergence arises.
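A minimal sketch of a TBG satisfying the requirements above is given next; the quintic polynomial for ξ(t) is one common choice (it is not prescribed by the paper), and the numerical values of δ and ε are illustrative.

```python
import numpy as np

# Hedged sketch of a time base generator, equations (9)-(10).
tb, delta, eps = 1.0, 1e-3, 0.1     # convergence time, regularizer, lambda_0 = 1 + eps

def xi(t):
    """Smooth profile rising from 0 to 1 on [0, tb] with zero slope at both ends."""
    tau = np.clip(t / tb, 0.0, 1.0)
    return 10*tau**3 - 15*tau**4 + 6*tau**5

def xi_dot(t):
    """Bell-shaped derivative of xi(t), zero at t = 0 and t = tb."""
    tau = np.clip(t / tb, 0.0, 1.0)
    return (30*tau**2 - 60*tau**3 + 30*tau**4) / tb

def tbg_gain(t):
    """Time-varying gain lambda(t) of equation (10)."""
    return (1.0 + eps) * xi_dot(t) / ((1.0 - xi(t)) + delta)

# With ydot = -tbg_gain(t) * y, the state shrinks to roughly y(0) * delta**(1 + eps) at t = tb.
```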
4. PRELIMINARY CONTROLLER DESIGN

4.1 Problem Statement

We consider the problem of designing a visual servo controller for the dynamic model of robot manipulators under an uncalibrated camera and unknown physical robot parameters, that guarantees finite-time tracking of a given time-varying image-based trajectory denoted by (xd^T(t), ẋd^T(t), ẍd^T(t))^T ∈ IR^{3n}, under the following assumptions:

Assumption 1. Image coordinates x and ẋ are available.

Assumption 2. Inertial robot parameters are unknown, and the camera is not calibrated.

The fixed camera is modelled as the static operator (1) that relates screen and joint coordinates. Thus, there exists a functional that relates image errors and joint errors. Then, we are interested in designing a joint output error manifold sq, in terms of a visual error manifold sx, which satisfies a passivity inequality ⟨sq, τ*⟩ with respect to the virtual joint input τ*. To this end, we need to derive the robot dynamics in sq coordinates, and the passivity inequality will dictate the control structure as well as the storage function. To proceed, we first derive the known parametric case (the camera is calibrated), and afterwards we present the unknown parametric case (the camera is not calibrated) that solves the problem above.

4.2 Visual Error Manifold

Consider the following nominal reference with respect to the screen frame

ẋr = ẋd − λ(t)∆x + sd − Ki υ    (11)
υ̇ = sgn(sδ)    (12)

where ẋr is based on the time-varying, continuous, state-independent TBG gain λ(t); xd and ẋd denote the desired position and velocity of the end-effector with respect to the screen frame, respectively, and

sδ = s − sd    (13)
s = ∆ẋ + λ(t)∆x    (14)
sd = s(t0) exp(−κt)    (15)

with the integral feedback gain Ki > 0, whose precise lower bound is yet to be defined; κ > 0; sgn(y) is the discontinuous signum function of y ∈ IR^n; ∆x = x − xd is the image-based end-effector position tracking error; and sd(t0) = s(t0) ∈ C^1 ⇒ sδ(t0) = 0. In this way, the derivative of (11) becomes

ẍr = ẍd − λ(t)∆ẋ − λ̇(t)∆x + ṡd − Ki sgn(sδ)    (16)

Then, the visual error manifold (extended error in screen coordinates) is given by

sx = ẋ − ẋr = sδ + Ki ∫_{t0}^{t} sgn(sδ(ζ)) dζ    (17)

Note that if sδ = 0 then tracking is obtained.
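A hedged sketch of how (11)-(15) could be evaluated at run time is shown below; λ(t) is supplied externally (for instance by the TBG sketch of Section 3), the gains Ki and κ are illustrative, and υ is integrated with a simple Euler step. None of these implementation details are prescribed by the paper.

```python
import numpy as np

# Visual error manifold, equations (11)-(15). lam_t = lambda(t) comes from the TBG;
# Ki, kappa and the sample time dt are illustrative tuning values.
class VisualManifold:
    def __init__(self, s0, Ki=20.0, kappa=10.0, dt=1e-3):
        self.s0, self.Ki, self.kappa, self.dt = s0, Ki, kappa, dt
        self.upsilon = np.zeros_like(s0)          # integral of sgn(s_delta), eq. (12)

    def step(self, t, lam_t, x, xdot, xd, xd_dot):
        dx, dxdot = x - xd, xdot - xd_dot
        s = dxdot + lam_t * dx                    # eq. (14)
        sd = self.s0 * np.exp(-self.kappa * t)    # eq. (15)
        s_delta = s - sd                          # eq. (13), zero at t = t0
        xr_dot = xd_dot - lam_t * dx + sd - self.Ki * self.upsilon   # eq. (11)
        self.upsilon = self.upsilon + self.dt * np.sign(s_delta)     # eq. (12), Euler step
        return s_delta, xr_dot
```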


4.3 Joint Error Manifold

According to (4), a nominal reference q̇r in the joint space is defined as follows

q̇r = J^{-1} Rα^{-1} ẋr    (18)

Thus, the joint error manifold sq in joint space is given by

sq = q̇ − q̇r = J^{-1} Rα^{-1} (ẋ − ẋr) = J^{-1} Rα^{-1} sx    (19)

We can see that if we design a controller that yields convergence of sq, then sx will converge, since by assumption J and Rα are well-posed. Note that convergence of sx implies ∆ẋ, ∆x → 0. Because the time derivative of q̇r is required in a passivity-based controller design, it is obtained as follows

q̈r = J^{-1} Rα^{-1} ẍr + (dJ^{-1}/dt) Rα^{-1} ẋr    (20)

Remark 1. Parameter uncertainty. Having defined the nominal references in both the joint and screen frames, it is possible to design a controller based on the calibrated joint error manifold, so the intrinsic (α and λf) and the extrinsic (z and θ) camera parameters are required, which is quite restrictive since usually some of them are unknown, or at least very difficult to compute in real time. Therefore, in the following, we present a controller that yields finite-time tracking with neither knowledge of the inertial robot parameters nor knowledge of the intrinsic and extrinsic camera parameters.

5. SECOND ORDER SLIDING MODE WITH TBG VISUAL SERVOING

5.1 Uncalibrated Joint Error Manifold

To handle the parametric uncertainty of the camera system, note that q̇r allows a linear parameterization, that is, q̇r = J^{-1} Rα^{-1} ẋr ≡ Yv(q, ẋr) θv, where θv incorporates intrinsic and extrinsic camera parameters and Yv(q, ẋr) is composed of known variables. Then, since θv is unknown, we define a new nominal reference q̄˙r as follows:

q̄˙r = Yv θ̄v    (21)

where Yv = Yv(q, ẋr), and θ̄v is tuned such that J^{-1} Rα^{-1} ẋr is well-posed. From equations (19) and (21) and Proposition 1, the uncalibrated joint error manifold s̄q is given by

s̄q = q̇ − q̄˙r = q̇ − q̄˙r ± q̇r = sq − Yv θ̄v + Yv θv = sq − Yv ∆θv    (22)

where ∆θv = θ̄v − θv. It is useful to introduce q̄¨r now:

q̄¨r = Ẏv θ̄v    (23)

In order to compensate for the effects on the robot dynamics due to the definition of the new nominal references (q̄˙r ≠ q̇r, q̄¨r ≠ q̈r, and therefore s̄q ≠ sq), it is convenient to express the error derivative s̄˙q in terms of ṡq as follows

s̄˙q = ṡq − Ẏv ∆θv    (24)

5.2 Open-loop Error Equation

Using the nominal references (21)-(23), the uncalibrated open-loop system can be written as follows

H(q) s̄˙q + C(q, q̇) s̄q = τ − Ȳbr θb    (25)

where Ȳbr = Ybr(q, q̇, q̄˙r, q̄¨r) is available for measurement. Considering equations (22) and (24), the open-loop dynamics is expressed in terms of sq and ṡq by

H(q) ṡq + C(q, q̇) sq = τ − Ȳbr θb + Yve ∆θve    (26)

where

Yve ∆θve = H(q) Ẏv ∆θv + C(q, q̇) Yv ∆θv

with Yve = Yve(q, q̇, ẋr, ẍr). Since H(q) and C(q, q̇) are linearly parameterizable, the last equation can also be written in terms of a linear parameterization. At this stage the problem becomes that of computing τ in (26) such that sq is bounded subject to unknown θb, ∆θve.
5.3 Main Result

We propose the following controller

τ = −Ȳbr Θb sgn(Ȳbr^T sq) − γ sgn(sq)    (27)

where Θb ∈ IR^{p1×p1}, Θb,ii ≥ |θb,i|, and γ > 0.
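A hedged transcription of (27) in Python is given below; the regressor Ȳbr, the bound matrix Θb and the manifold sq are assumed to be computed elsewhere (sq only up to its sign, as discussed in Remark 2), and the value of γ is an illustrative gain.

```python
import numpy as np

# Control law (27): tau = -Ybar_br Theta_b sgn(Ybar_br^T s_q) - gamma sgn(s_q).
# Ybar_br is n x p1; Theta_b is the p1 x p1 diagonal bound matrix with
# Theta_b[i, i] >= |theta_b[i]|; both are assumed supplied by the surrounding code.
def control_law(Ybar_br, Theta_b, s_q, gamma=5.0):
    return -Ybar_br @ Theta_b @ np.sign(Ybar_br.T @ s_q) - gamma * np.sign(s_q)
```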
Theorem 1. Consider a robot manipulator (6) under the second order sliding mode with time base generator visual servoing scheme (27), subject to robot and camera parametric uncertainties. Then, the closed-loop system yields finite-time convergence of the image tracking errors.

Proof. The following closed-loop error equation between (6) and (27) arises

H(q) ṡq = −C(q, q̇) sq − Ȳbr Θb sgn(Ȳbr^T sq) − Ȳbr θb − Yve Θv sgn(Yve^T sq) − γ sgn(sq) + Yve ∆θve + τ*    (28)

for τ* ≡ 0 a virtual control input. Note the passivity inequality ⟨sq, τ*⟩ = V̇ + γ|sq|, with the following energy storage function

V = (1/2) sq^T H(q) sq    (29)

whose rate of change yields

V̇ ≤ −γ|sq| + sq^T Yve ∆θve ≤ −γ|sq| + |sq||Yve||∆θve|

where we have used Property 1. Note that Yve ∆θve = f1(ẋr, ẍr, θve, ∆θve, θb) and sq = f2(ẋr, ẍr, θve, υ), and there exists an upper bound for the regressors θve, θb because the entries of these regressors depend on trigonometric functions and link lengths, bounded desired trajectories and the state of the system; then there exists a large enough feedback gain γ such that

V̇ ≤ −γ|sq| + f0|sq|

for a smooth and bounded function f0 ≥ g(f1, f2). Then, according to the second method of Lyapunov, stability of sq arises, that is, sq is bounded, with L∞ boundedness of ṡq. Therefore, multiplying equation (19) by Rα J gives sx = Rα J sq, with derivative ṡx = Rα J ṡq + Rα J̇ sq, which, from equation (17), gives rise to

ṡδ = −Ki sgn(sδ) + Rα J ṡq + Rα J̇ sq    (30)

Now, in order to produce the sliding mode condition for sδ, we multiply the previous equation by sδ^T to obtain

sδ^T ṡδ = −sδ^T Ki sgn(sδ) + sδ^T Rα J ṡq + sδ^T Rα J̇ sq
        ≤ −Ki|sδ| + ε0|sδ||ṡq| + ε1|sδ||sq|
        ≤ −Ki|sδ| + ε2|sδ| + ε3|sδ|
        ≤ −Ki|sδ| + ε4|sδ|
        ≤ −µ|sδ|,  µ = Ki − ε4 > 0,    (31)

where ε0 ≥ |Rα J|, ε1 ≥ |Rα J̇|, ε2 ≥ ε0|ṡq|, ε3 ≥ ε1|sq| and ε4 = ε2 + ε3. Thus, if Ki > ε4, equation (31) qualifies as the sliding mode condition for sδ = 0 for all time, since sδ(t0) = 0 for any t0. Thus, a second order sliding mode regime is induced at sδ = 0 for all time.

Now, as shown in Section 3, the TBG induces finite-time convergence if we substitute y = ∆x in equation (9); that is, the following equation arises

x(tb) = xd(tb) + ∆x(t0) δ^{1+ε}    (32)

In this way, tracking errors converge to an arbitrarily small vicinity of ∆x = 0 in arbitrary finite time t = tb, without knowledge of the manipulator dynamics and with an uncalibrated camera. Afterwards, for t > tb, sδ(t) = 0, which implies ∆ẋ = −λ0 ∆x + ε. Then, since sd(t) → 0, ∆x → 0 exponentially. ♦

Remark 2. Signum of sq. Because the robot and vision system parameters are unknown, sq is not available. However, its signum can easily be determined from equation (19) and Proposition 1: namely, the sign of sq = Yv(q, sx) θv is determined by the sign of the known regressor Yv(q, sx), since the vector θv is assumed unknown but constant.

Remark 3. Experimental evaluation. The discontinuous nature of the signum function makes a physical implementation of our controller impractical, and hence at least a piecewise continuous approximation of the signum function must be implemented, not only to reduce chattering but also to be able to physically realize the controller.
enough feedback gain γ such that
Remark 4. Extension to 3D. With the exception
V̇ ≤ −γ |sq | + f0 |sq | of the camera model (1) and proposition (5),
the controller design was conducted taking no
for smooth and bounded function f0 ≥ g(f1 , f2 ). account of dimension of the robot workspace,
Then, according to the second method of Lya- which indicate the possibility of extending our
punov, there arises stability of sq , that is, sq is scheme to the 3D spatial case as a future research
bounded, with L∞ boundedness for ṡq , therefore topic.
multiplying equation (19) by Rα J, becomes in
sx = Rα Jsq with a derivative ṡx = Rα J ṡq +
˙ q , that is, it gives rise to, from equation (17),
Rα Js 6. SIMULATIONS
˙ q
ṡδ = −Ki sgn(sδ ) + Rα J ṡq + Rα Js (30) A two-rigid link, planar robot without friction
forces is considered. Dimensions of the robot and
Now, in order to produce the sliding mode condi- camera parameters are given in Table 1, where
tion for sδ , we multiply the previous equation by subindex 1 and 2 stand for first and second link,
sTδ to obtain respectively. The endpoint of the manipulator is
requested to draw a circle defined with respect to The closed-loop system exhibit exponential con-
the vision frame xd = (xd1 , xd2 )T = (0.1 cos ωt + vergence of tracking errors for any given initial
0.05, 0.1 sin ωt+0.05)T , where ω = 2 rad/sec, with conditions despite of the size of the parametric
tb = 1.0 sec as the desired convergence time. Data uncertainty. Finite time convergence is visualized
allows to visualize the stability properties stated through simulation results when all parameters
in Theorem 1. are unknown.
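A minimal sketch of the desired image-space trajectory used in the simulations is given below; only the circle itself is taken from the paper, while packaging it as a function returning position, velocity and acceleration is an implementation choice.

```python
import numpy as np

# Desired image-space trajectory of Section 6: a circle of radius 0.1 centred at
# (0.05, 0.05) in the vision frame, traversed at omega = 2 rad/s.
omega = 2.0

def desired_trajectory(t):
    c, s = np.cos(omega * t), np.sin(omega * t)
    xd = np.array([0.1 * c + 0.05, 0.1 * s + 0.05])
    xd_dot = np.array([-0.1 * omega * s, 0.1 * omega * c])
    xd_ddot = np.array([-0.1 * omega**2 * c, -0.1 * omega**2 * s])
    return xd, xd_dot, xd_ddot
```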
Table 1. Camera and robot parameters.

ROBOT SYSTEM                         Value - Unit
Length link l1, l2                   0.4, 0.3 m
Center of gravity lc1, lc2           0.1776, 0.1008 m
Mass link m1, m2                     9.1, 2.5714 kg
Inertia link I1, I2                  0.284, 0.0212 kg m^2
Gravity acceleration gz              9.8 m/sec^2

VISION SYSTEM
Clockwise rotation angle θ           π/8 rad
Scale factor α                       72727 pixels/m
Depth field of view z                1.5 m
Camera offset C OB                   [−0.2 −0.1]^T m
Offset of ΣI, OI                     [0.0005 0.0003]^T m
Focal length λf                      0.008 m
Fig. 2. Tracking of image-based desired trajectories: Theorem 1 controller for tb = 1 sec. (Panels: position errors [pixels], velocity errors [pixels/s], desired and end-effector trajectories x(t), xd(t) within the robot workspace boundary, and applied torques [Nm].)
Fig. 3. Applied torques: 1SM (left) and 2SM Theorem 1 (right).

7. CONCLUSIONS

We have proposed a new image-based servo controller for uncertain planar robots with an uncalibrated camera, within a passivity-based second order sliding mode with time base generator approach. The closed-loop system exhibits exponential convergence of tracking errors for any given initial conditions despite the size of the parametric uncertainty. Finite-time convergence is visualized through simulation results when all parameters are unknown.

REFERENCES

Bishop, B.E. and M.W. Spong (1997). Adaptive calibration and control of 2d monocular visual servo systems. IFAC Symp. on Robot Control, Nantes, France.
Fierro-Rojas, J.D., V. Parra-Vega and A. Espinosa-Romero (2002). 2d sliding mode visual servoing for uncertain manipulators with uncalibrated camera. IEEE/RSJ Conf. IROS.
Hsu, L. and P. Aquino (1999). Adaptive visual tracking with uncertain manipulator dynamics and uncalibrated camera. Proc. 38th IEEE CDC, Phoenix, Arizona, pp. 1248-1253.
Hutchinson, S., G.D. Hager and P.I. Corke (1996). A tutorial on visual servo control. Trans. on Robotics and Automation, 12, 651-670.
Kelly, R., P. Shirkey and M.W. Spong (1996). Fixed-camera visual servo control for planar robots. Proc. of the 1996 IEEE Int. Conf. on Robotics and Automation, Minnesota.
Parra-Vega, V. and G. Hirzinger (2000). Finite-time tracking for robot manipulators with continuous control. SYROCO, Wien.
Shen, Y., Y.H. Liu and K. Li (2001). Asymptotic trajectory tracking of manipulators using uncalibrated visual feedback. Submitted to the IEEE/ASME Trans. Mechatronics.
