
Advanced Robotics 25 (2011) 2293–2317

brill.nl/ar

Full paper

Workspace Generation for Multifingered Manipulation


Yisheng Guan a,∗, Hong Zhang a,b, Xianmin Zhang a and Zhangjie Guan c
a School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou, Guangdong 510640, P. R. China
b Department of Computing Science, University of Alberta, Edmonton, AB, Canada
c School of Aeronautics Science and Engineering, Beijing University of Aeronautics and Astronautics, Beijing, P. R. China

Received 30 November 2010; accepted 11 January 2011

Abstract

In this paper, we propose a novel numerical approach and algorithm to compute and visualize the workspace of a multifingered hand manipulating an object. Based on feasibility analysis of grasps, the proposed approach uses an optimization technique to first compute discretely the position boundary of the grasped object and then calculate the rotation ranges of the object at specified positions within the boundary. In other words, workspace generation with the approach is fulfilled by obtaining reachable boundaries of the grasped object in the sense of both position and orientation, and the discrete boundary points are computed by a series of optimization models. Unlike in workspace generation of other robotic systems, where only geometric and kinematic parameters of the robots are considered, all factors (geometric, kinematic and force-related) that affect the workspace of a hand–object system can be taken into account in our approach to generate the workspace of multifingered manipulation. Since various constraints can be integrated into the optimization models, our method is general and complete, with adaptability to various grasps and manipulations. Workspace generation with the approach in both the planar and spatial cases is illustrated with examples. The approach provides an effective and general solution to the long-term open and challenging problem of workspace generation for multifingered manipulation. Part of the work has been published in the Proceedings of IEEE/RSJ IROS 2008 and IEEE/ASME AIM 2008.

© Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2011

Keywords: Workspace generation, reachable boundary, multifingered manipulation, grasp feasibility

1. Introduction

Knowledge about the workspace of a robot is crucial in motion planning and control of the robot in various tasks. The importance of the workspace was first realized in
∗ To whom correspondence should be addressed. E-mail: ysguan@scut.edu.cn


DOI:10.1163/016918611X603837

© Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2011


the early robotics research on manipulators, which can be traced back more than 20 years. Many methods have been proposed for the workspace of conventional robots, among which are geometric analysis [1, 2], random search by the Monte Carlo method [3, 4] and polynomial discriminants [5, 6]. Research on the workspace of parallel manipulators can also be found in the literature [7–9]. The workspace of a space manipulator was studied in Ref. [10] using the conventional analytic or geometric method, based on the concept of a virtual manipulator for a space robotic system consisting of a manipulator and a carrying vehicle. The convolution product of real-valued functions on the special Euclidean group was introduced and applied to the determination of the workspace of manipulators that have a finite number of joint states [11]. A diffusion-based algorithm was developed for snake-like hyper-redundant robots in Ref. [12]. Actually, the importance of the workspace lies not only in the planning and control of robotic motion and manipulation, but also in robot design, where the workspace is used as a criterion so that the designed robot has maximum workspace [13–15].

In addition to robotic systems, workspace analysis has also been carried out for human systems. The reachable envelope of the human upper extremity was studied in Ref. [16], where the upper extremity was modeled as a 9-d.o.f. system and its reachable envelope was generated by analytical formulation. Since there are a total of 232 singular surfaces for the workspace of the 7-d.o.f. arm/hand, it is impossible or impractical (if applicable at all) to use this method for complex robots with more d.o.f. and for three-dimensional (3-D) multifingered hands grasping an object, which form a complicated system usually with more than 9 d.o.f. and many constraints.

Although the workspaces of many robots have been extensively studied, little work has been done on those of multifingered hands or multiple robot systems. This may be in part because of the difficulty in this case. The earliest and most notable work on the workspace of a multifingered hand should be credited to Ref. [17], in which the workspace of a restricted set of hands was analyzed using geometric methods. The workspace of each finger was assumed and the relationship between them was considered in the derivation of the workspace of the hand. Unfortunately, the proposed solution has certain critical limitations, including some restrictive assumptions such as fixed-point contact, not considering rotation d.o.f. for each position in the workspace and, especially, not taking into account collisions or intersections between the object and the fingers and those between the finger links themselves. Due to so many affecting factors, the workspace of a multifingered hand manipulating an object is not the intersection or union of the workspaces of the fingers in the hand. Therefore, geometric, analytic or other conventional methods or algorithms presented in the literature cannot provide a good solution to this problem. Instead, as pointed out in Ref. [17], for actual hands with irregularly shaped workspaces, it is likely that numerical methods will be the best way to determine the total workspace. In this paper, we propose exactly such a numerical or computational method to obtain the workspace of a multifingered manipulation in the static or quasi-static state.


The presented method originates from our research on the feasibility analysis of multifingered grasps [18–20]. The method uses a numerical optimization technique to first find the boundary of the object position without restrictions on its orientation and then to compute the rotation ranges of the grasped object at specific feasible positions within the boundary, from which the workspaces are built. Both the 2-D or planar and the 3-D or spatial cases of multifingered manipulation are addressed for workspace generation. The evaluation of the grasp constraints, the optimization models, the visualization of the workspaces and, hence, the corresponding algorithms are similar, but different. While the workspace of 2-D multifingered manipulation is visualized in a 3-D coordinate frame, that of 3-D multifingered manipulation is decomposed into two categories: the position workspace, meaning the reachable space of the grasped object, and the orientation workspace, representing the rotational angles of the object, at a fixed position within the position workspace, about various axes through that position; these are depicted in different 3-D coordinate frames. Separation of the workspace is a common method for the visualization of high-dimensional spaces, and has also been used for the workspace analysis of parallel robots [9], of a double parallel manipulator using an analytic method [8] and in the Piano Movers Problem [21, 22].

Constraints in the force domain, including force equilibrium, limits of joint torques and frictional constraints, are crucial and essential in a multifingered hand–object system. The workspace of such a system hence has quite different characteristics from those of other conventional robots and therefore needs a novel method to obtain it. Since our optimization technique for workspace generation can incorporate various constraints on multifingered grasps, all factors that affect the workspace of a robotic hand–object system (including the geometry of the object, the kinematics of the multifingered hand and force-related factors, such as the force equilibrium of the grasped object, the grasp force, the limits of the driving torques of the hand and frictional constraints) can be taken into account in the workspace generation of multifingered manipulation. The presented method takes into account the factors ignored in Ref. [17] and, thus, is more general. It is adaptive to various polygonal objects to be manipulated, to various contact states between the fingers and the object, and to any number of fingers in the hand. As a result, the proposed numerical method provides a good and complete solution to the problem of the workspace of multifingered manipulation or coordination of multiple robots. We also point out that the optimization-based numerical framework or methodology for workspace generation may be widely applied to parallel robots and other sophisticated robotic systems, including humanoid robots [23].

2. Algorithm

2.1. Definitions and Notations

Consider the grasp of an object using a multifingered hand (Fig. 1). Assume that the hand has m fingers, with n joints in total. Denote the joint angles by a vector


Figure 1. Multifingered hands and objects: (a) 2-D case, (b) 3-D case and (c) hand grasping a box.

Θ = (θ1, θ2, . . . , θn). Θ is the parameter describing the hand configuration. The hand structurally consists of the tips and links of the fingers. Topologically, we regard a fingertip as a point t and a link as a line segment l, and call them the topological features of the hand. Then, the set of topological features of the hand is defined as H = {l1, l2, . . . , ln, t1, t2, . . . , tm}, where li (i = 1, 2, . . . , n) and tj (j = 1, 2, . . . , m) represent the i-th link and the j-th fingertip, respectively.

In the 2-D or planar case (Fig. 1a), the configuration of an object may be described by the position (x, y) and orientational angle φ of the object frame O, with respect to the palm frame P. Topologically, a polygonal object consists of features including vertices and edges. Denote the sets of vertices and edges of the object by V = {vi} and E = {ej}, respectively, where vi and ej represent the i-th vertex and the j-th edge in turn. Each edge has two vertices. We use ei(vj, vk) to denote the edge formed by the two vertices vj and vk. The set of topological features of the object is therefore O = {V, E}.

In the 3-D or spatial case (Fig. 1b), the configuration of the object is described by the position (x, y, z) and orientation (α, β, γ), in RPY or Euler angles. A polyhedral object consists topologically of faces in addition to vertices and edges. Similar to the sets of edges and vertices (E and V), the set of faces of the object is denoted by U = {uk}, where uk represents the k-th face. (We do not use F and fk here for the face feature, since we reserve them for force notation later on.) A face contains several vertices and edges. We use ui(v1, v2, . . . , vk) to denote the face containing vertices v1, v2, . . . , vk. Therefore, the set of topological features of the object is O = {V, E, U}.

Define a contact pair, c = (h, o), between the hand and the object to be a set of two topological features in contact, one from the hand and the other from the object, where h ∈ H and o ∈ O. In the 3-D case there are six typical contact pairs between the hand and the object: (t, u), tip–face; (l, u), link–face; (l, e), link–edge; (t, v), tip–vertex; (t, e), tip–edge; and (l, v), link–vertex. We consider only the first three types of contact pairs and discard the last three, since they are degenerate cases and seldom take place in a grasp. In the 2-D case, there are four typical contact pairs, without (t, u) and (l, u) in the above list, and the pair (t, v), which is a kind of point–point or fixed-point contact, seldom takes place.

The state of contact in a grasp can be defined by a set of contact pairs, and a canonical grasp is defined as a set of contact pairs, g = {c1, c2, . . . , cnc}, which involves at least two fingers and can be achieved kinematically by the hand. With


the above definitions and notations, we now define the workspace of a multifingered hand as follows:

Workspace. For a given grasp, the workspace of a multifingered manipulation is the range of possible motion, in both position and orientation, of the object that is being manipulated by the hand maintaining the grasp.

Note that in the definition the given grasp should be kept, which means that gaits are not involved in the workspace of a multifingered manipulation. The workspaces of multifingered manipulation have quite different characteristics from those of manipulators or other conventional robots and depend essentially on many affecting factors, including the geometry (sizes and shapes) of the grasped object, the kinematic properties of the hand, the rolling and sliding that occur at the contacts, and the locations of the contacts on the object and the hand [17]. To emphasize these factors, we prefer the term workspace of multifingered manipulation to workspace of a multifingered hand.

2.2. Algorithm Description

Workspace generation is to find the reachable range of the object motion in manipulation by a multifingered hand maintaining a given grasp. Using the above notations, we now propose the algorithm to obtain the workspace. This algorithm is based on the kinematic feasibility analysis in our previous work [18–20], which uses a method of global optimization modeling. The ideas of the algorithms for the 2-D and 3-D cases are the same, but the steps are different, with different visualization due to the different dimensions.

2.2.1. In the 2-D Case
As stated before, in the 2-D case the configuration of an object is described by a linear vector (x, y) and an angular scalar φ; the workspace is therefore of three dimensions and hence can be depicted in a 3-D coordinate frame, although the units of the three components are not the same. The algorithm (Algorithm 1) can be described as follows:

(i) Maintaining the given grasp, find the maxima and minima of the object position in the X and Y directions, xmax, xmin and ymax, ymin, with respect to the hand coordinate frame.

(ii) Construct the rectangle in the XY plane bounded by xmax, xmin and ymax, ymin obtained in Step (i), and divide it into a grid with reasonable resolution using horizontal and vertical lines, as shown in Fig. 2a.

(iii) Along the horizontal or vertical grid lines, find the boundary points of the workspace. Connect adjacent boundary points in turn to form the boundary curves of the workspace.

(iv) For each node within the workspace boundary, find the maximum and minimum of the object orientation angle φ maintaining the grasp feasibility, and


Figure 2. Local coordinates and their slicing/gridding for workspace generation: (a) 2-D case, (b) (α, β, ψ), (c) local coordinate grid and (d) 3-D case.


check whether the grasp remains feasible for the intermediate values of φ between the extrema.

(v) Upon completing the above two steps for all the nodes in the grid, render the workspace graphically in a 3-D frame with coordinates (x, y, φ).

2.2.2. In the 3-D Case
Algorithm 1 can be extended to the 3-D case. In the 3-D case, since the configuration of an object is described by a position vector P(x, y, z) and an angular vector (α, β, γ), as stated previously, the workspace is 6-D and, hence, cannot be drawn in one 3-D coordinate frame. Nevertheless, we can visualize the ranges of position and orientation of the manipulated object separately in different frames. For convenience, we call the space of the possible object positions and that of the possible object orientations at a specific position the position workspace and the orientation workspace, respectively. Note that, unlike the workspace with constant orientation as defined in Ref. [9], the orientation of the grasped object is not fixed or predefined in the generation of the position workspace, although the orientation can be fixed to get a subset of the position workspace. The orientation workspace, however, depends on the object location; that is, the orientation workspace is computed and visualized only at one specified point in the position workspace, although the total orientation workspace can also be generated and visualized in a similar manner, without specifying or fixing the object location in the position workspace.

For the convenience of visualization, we redefine the orientation parameter (α, β, γ) as follows. Let (α, β) be the local coordinates on a unit spherical surface determining a unique axis about which the object is to rotate by ψ, as shown in Fig. 2b, where Ω = {(α, β): −π < α ≤ π, −π/2 < β ≤ π/2} and 0 ≤ ψ < 2π or −π ≤ ψ < π [25]. The algorithm (Algorithm 2) for the 3-D case then consists of the following steps:

(i) Find the extreme (maximal and minimal) positions of the origin O of the object frame in the X, Y and Z directions, xmin, xmax, ymin, ymax, zmin and zmax, respectively, with respect to the hand base frame.


(ii) Use the extreme positions obtained in Step (i) to construct a box whose six faces are defined by x = xmin, x = xmax, y = ymin, y = ymax, z = zmin and z = zmax. The workspace is clearly enclosed by this box, as shown in Fig. 2d.

(iii) Uniformly slice the above box along the Z direction using horizontal planes, with a reasonable resolution determined according to the size of the box.

(iv) Uniformly slice the box along the Y direction using a series of vertical planes parallel to the plane XOZ, with a reasonable resolution determined by the box size.

(v) The horizontal and vertical slices obtained above intersect to give rise to horizontal line segments parallel to the X-axis of the world frame, as illustrated by ab in Fig. 2d. On these segments, we find the maximal and minimal values of the x coordinate of the reference point, if they exist. Thus, we obtain two points of the reachable boundary on each intersected line segment. All such points form a boundary curve on one horizontal slice.

(vi) Similarly, the extreme points on each vertical slice form a boundary curve on that slice. All these horizontal and vertical boundary curves form the position workspace.

(vii) At a specified point in the position workspace, there is a corresponding range of possible rotations of the object. To obtain this rotation range, divide the rectangular area Ω of the local coordinates into a grid with a reasonable resolution to obtain a set of nodes, as shown in Fig. 2c. At each node, obtain its corresponding rotation vector.

(viii) For each rotation vector obtained in Step (vii), calculate the maximal rotation angle ψmax about it, using the optimization model based on the feasibility analysis.

(ix) On the rotation axis, plot a point whose distance from the origin of the object local frame is equal to the maximum obtained in Step (viii).

(x) Every three adjacent points on the axes drawn in Step (ix) form a facet. All such facets form approximately the orientation workspace at the given object position.

(xi) For the rotation workspace at other points within the position workspace, repeat Steps (vii)–(x).

Note that in Step (viii) it is not necessary to calculate the minimal rotation angle ψmin about the rotation vector, since to do so is in fact to obtain the maximal one about the inverse rotation vector. The local coordinate plane Ω covers the whole surface of the sphere and, equivalently, all rotation vectors. A sketch of the boundary search in Steps (iii)–(vi) is given below.
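To make the slice-and-search structure of Steps (iii)–(vi) concrete, the following minimal Python sketch (ours, not the authors' implementation) iterates over the grid of search lines and collects two boundary points per feasible line. The callable extreme_x stands in for optimization model (2) solved along one line; the toy version supplied here merely carves a sphere so that the loop can be executed.

```python
import numpy as np

def position_boundary(bounds, ny, nz, extreme_x):
    """Collect boundary points of the position workspace (Algorithm 2, Steps iii-vi).

    bounds    : (xmin, xmax, ymin, ymax, zmin, zmax) of the enclosing box from Step (ii)
    ny, nz    : number of search lines along Y and number of horizontal slices along Z
    extreme_x : callable (y, z) -> (x_lo, x_hi) or None, standing in for optimization
                model (2) solved along the line 'ab' of Fig. 2d
    """
    _, _, ymin, ymax, zmin, zmax = bounds
    boundary = []                                    # (x, y, z) boundary points
    for z in np.linspace(zmin, zmax, nz):            # horizontal slices (Step iii)
        for y in np.linspace(ymin, ymax, ny):        # search lines parallel to X (Steps iv-v)
            extremes = extreme_x(y, z)
            if extremes is None:                     # no feasible grasp on this line
                continue
            x_lo, x_hi = extremes
            boundary.append((x_lo, y, z))            # two boundary points per feasible line
            boundary.append((x_hi, y, z))
    return boundary

def toy_extreme_x(y, z, r=50.0):
    """Toy stand-in for model (2): a spherical workspace of radius r (mm)."""
    d2 = r * r - y * y - z * z
    return None if d2 < 0 else (-np.sqrt(d2), np.sqrt(d2))

points = position_boundary((-50, 50, -50, 50, -50, 50), 21, 11, toy_extreme_x)
print(len(points))
```

In the actual algorithm, each call to extreme_x is one run of the constrained global optimization; the loop structure itself is unchanged.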


2.2.3. Remarks on the Algorithms
In the algorithms, we use an optimization technique to get the extreme points of the workspace based on the kinematic feasibility of the given grasp. If the position of the object is out of the position workspace, or if an orientation is out of the orientation workspace, then the grasp is not feasible. In other words, for any configuration of the object within its workspace, the grasp is feasible. Therefore, the feasibility analysis and the associated optimization model play a key role in the algorithms. They take into account the following factors for the workspace of a multifingered manipulation, as will be illustrated in the following sections:

• Hand kinematics, such as link lengths and the distribution of finger locations in the hand.
• Object geometry, including the shape and size of the object.
• The grasp itself, including the grasp type (e.g., fixed-point grasp) and contact positions.
• Force constraints, including force equilibrium, frictional forces, grasp forces and joint torques.
• Other constraints, including joint limits, contact constraints and collision-free constraints.

In determining the kinematic feasibility of a grasp, there are two central constraints: contact constraints, which describe the contact between two topological features of the hand and the object, and collision-free constraints, which are required for the other topological features not to make contact. They are the key to the optimization models. In the following sections, we first formulate them and then set up the optimization models used in the algorithms.

3. Constraint Formulation

Since we use points/vertices, segments/edges and faces as the topological features of the hand and the object, a contact constraint or a collision-free constraint can be evaluated by the spatial relationship among them. It was found that these constraints can be conveniently and concisely evaluated by the computation of signed triangular areas and tetrahedral volumes from computational geometry [24]. In this section, we review these constraint formulations. The details can be found in our previous work on the feasibility analysis of multifingered grasps [18–20].

3.1. Basic Lemmas

It is known from computational geometry that the relationship between three points, or between a point and a directed line segment, can be completely described by the signed area of the triangle formed by them, as in the following lemma.


Figure 3. Signed triangular area and tetrahedral volume, and their application in constraints: (a) 2-D area, (b) 3-D area, (c) tetrahedral volume and (d) collision avoidance.

Lemma 1. Given three points p1, p2 and p3, p3 is to the left (right) of the directed line p1p2 if and only if A_{p1 p2 p3} > 0 (< 0); p3 is collinear with p1 and p2 if and only if A_{p1 p2 p3} = 0,

where A_{p1 p2 p3} is the area of the triangle formed by the three ordered points p1(x1, y1), p2(x2, y2) and p3(x3, y3), and is calculated easily as

    A_{p1 p2 p3} = (1/2)(x1(y2 − y3) + x2(y3 − y1) + x3(y1 − y2)).

This area may be positive, zero or negative, depending on the relative positions of these three ordered points, as shown in Fig. 3a. Correspondingly, the spatial relationship between three points p1, p2 and p3 in the 3-D case can be determined by the cross product

    s_{p1 p2 p3} = (p2 − p1) × (p3 − p1).

The magnitude of the vector s_{p1 p2 p3} is equal to twice the area A(p1, p2, p3) of the triangle formed by p1, p2 and p3, and the direction of s_{p1 p2 p3} indicates the orientation of the plane in which these points lie and the location (to the left or right) of p3 relative to the vector p1p2, as illustrated in Fig. 3b. The volume of the tetrahedron formed by four ordered points p1(x1, y1, z1), p2(x2, y2, z2), p3(x3, y3, z3) and p4(x4, y4, z4) can be calculated by

    V_{p1 p2 p3 p4} = (1/6) det | x1  y1  z1  1 |
                                | x2  y2  z2  1 |
                                | x3  y3  z3  1 |
                                | x4  y4  z4  1 |.

This volume may be positive, negative or zero, depending on the order of the points in the calculation. We define the above side of a plane p1p2p3 to be the side along (p2 − p1) × (p3 − p1), and the below side to be the opposite. Then the relationship among four points, or between a point and an oriented plane, can be described by the following lemma (Fig. 3c).

Lemma 2. Given four points p1, p2, p3 and p4, p4 is below (above) the plane p1p2p3 if and only if V_{p1 p2 p3 p4} > 0 (< 0); p4 is coplanar with p1, p2 and p3 if and only if V_{p1 p2 p3 p4} = 0.

Based on the above lemmas, contact constraints and collision-free constraints can be easily evaluated, as shown in the following subsections.
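As a concrete illustration of these primitives, the short Python sketch below (our own; the function names are hypothetical) computes the signed triangular area of Lemma 1, its 3-D area-vector counterpart and the signed tetrahedral volume of Lemma 2, together with the sign tests the lemmas state.

```python
import numpy as np

def signed_area_2d(p1, p2, p3):
    """A_{p1 p2 p3}: > 0 iff p3 is to the left of the directed line p1 -> p2 (Lemma 1)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return 0.5 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))

def area_vector_3d(p1, p2, p3):
    """s_{p1 p2 p3} = (p2 - p1) x (p3 - p1); its norm is twice the triangle area."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    return np.cross(p2 - p1, p3 - p1)

def signed_volume(p1, p2, p3, p4):
    """V_{p1 p2 p3 p4} = (1/6) det[...]: > 0 iff p4 is below the oriented plane p1 p2 p3 (Lemma 2)."""
    rows = np.array([np.append(np.asarray(p, dtype=float), 1.0) for p in (p1, p2, p3, p4)])
    return np.linalg.det(rows) / 6.0

# (1, 1) lies to the left of the X-axis direction (0, 0) -> (1, 0): positive area
assert signed_area_2d((0, 0), (1, 0), (1, 1)) > 0
# (0, 0, 1) lies above the oriented plane through (0,0,0), (1,0,0), (0,1,0): negative volume
assert signed_volume((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)) < 0
```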


Figure 4. Evaluation of contact and collision avoidance in the 2-D case.

3.2. Contact and Collision-Free Constraints in the 2-D Case

3.2.1. Vertex–Link Contact Constraint
For a vertex (point) vi to make contact with a link (line segment) lk(pk, pk+1) (Fig. 4a), it must be satisfied that (i) vi, pk and pk+1 are collinear and (ii) vi is within the segment pkpk+1. These conditions can be formulated as

    A_{pk pk+1 vi} = 0  &  max(A_{vi−1 vi pk}, A_{vi vi+1 pk+1}) < 0,

where max(·) is the function finding the maximum among its input parameters, used here to describe the conjunction of several conditions. The second formula also guarantees that the line segment does not penetrate into the object (the penetration case is shown in Fig. 4b).

3.2.2. Edge–Link Contact Constraint
The condition that an edge ei(vi, vi+1) makes contact with a link lk(pk, pk+1) can be similarly derived and translated into the following set of equalities and inequalities (Fig. 4c):

    A_{pk pk+1 vi} = 0  &  A_{pk pk+1 vi+1} = 0  &  min{A_{vi−1 vi pk} · A_{vi−1 vi pk+1}, A_{vi+1 vi+2 pk} · A_{vi+1 vi+2 pk+1}} < 0,

where min(·) is the function finding the minimum among its input parameters, used here to describe the disjunction of several conditions, and the product of two areas describes the conjunction of the corresponding conditions.

3.2.3. Collision-Free Constraint
Collision-free constraints can be modeled in a similar way. For example, the condition for two line segments not to intersect is that at least one segment lies entirely on one side of the other, as shown in Fig. 4d. Therefore, the condition for a link p1p2 and an edge v1v2 not to collide with each other is

    max(A_{p1 p2 v1} · A_{p1 p2 v2}, A_{v1 v2 p1} · A_{v1 v2 p2}) > 0.

3.3. Contact and Collision-Free Constraints in the 3-D Case

3.3.1. Point–Face Contact Constraint
Point–face contact occurs when a fingertip keeps in touch with a surface of the object. The necessary and sufficient condition for a point t to be on a convex face u(v1, v2, . . . , vk) can be described as

    V_{v1 v2 v3 t} = 0  &  min(s_{v1 v2 t} · s_{v2 v3 t}, . . . , s_{v1 v2 t} · s_{vk v1 t}) > 0,

where the first formula guarantees that the point is on the plane in which the face lies and the second guarantees that the point is furthermore within the face. The minimum function min(·) is used here to describe the conjunction of these conditions.
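The 2-D conditions above translate directly into code. The following sketch (our own naming; exact zero comparisons are kept for readability, whereas a numerical tolerance would be used in practice) evaluates the vertex–link contact test of Section 3.2.1 and the collision-free test of Section 3.2.3 with the signed area of Section 3.1.

```python
def area(p1, p2, p3):
    """Signed triangular area A_{p1 p2 p3} of Section 3.1."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return 0.5 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))

def vertex_link_contact(v_prev, v, v_next, pk, pk1):
    """Vertex-link contact: A_{pk pk+1 v} = 0 and max(A_{v-1 v pk}, A_{v v+1 pk+1}) < 0."""
    return area(pk, pk1, v) == 0 and max(area(v_prev, v, pk), area(v, v_next, pk1)) < 0

def segments_collision_free(p1, p2, v1, v2):
    """Link-edge collision avoidance: max(A_{p1 p2 v1} A_{p1 p2 v2}, A_{v1 v2 p1} A_{v1 v2 p2}) > 0."""
    return max(area(p1, p2, v1) * area(p1, p2, v2),
               area(v1, v2, p1) * area(v1, v2, p2)) > 0

# A vertical link at x = 1.5 does not collide with the object edge from (1, 0) to (1, 1)
print(segments_collision_free((1.5, 0.0), (1.5, 1.0), (1.0, 0.0), (1.0, 1.0)))   # True
```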


3.3.2. Two-Segment Contact Constraint
For two line segments to contact each other, they must be coplanar and intersect each other, which can be described by the following proposition: two line segments l(p1, p2) and e(v1, v2) intersect if and only if

    V_{p1 p2 v1 v2} = 0  &  max(s_{p1 p2 v1} · s_{p1 p2 v2}, s_{v1 v2 p1} · s_{v1 v2 p2}) < 0.

This proposition can be used to describe the contact between a finger link and an edge of the object.

3.3.3. Segment Collision-Free Constraint
When any of the conditions in the above proposition is false, there is no collision between the two line segments. That is, the constraint for two segments not to intersect is (i) V_{p1 p2 v1 v2} ≠ 0, i.e., |V_{p1 p2 v1 v2}| > 0, or (ii) s_{p1 p2 v1} · s_{p1 p2 v2} > 0, or (iii) s_{v1 v2 p1} · s_{v1 v2 p2} > 0. The disjunction of these conditions can be expressed as the corollary: two line segments l(p1, p2) and e(v1, v2) do not intersect if and only if

    max(|V_{p1 p2 v1 v2}|, s_{p1 p2 v1} · s_{p1 p2 v2}, s_{v1 v2 p1} · s_{v1 v2 p2}) > 0.

3.3.4. Segment–Face Contact Constraint
The conditions for a line segment l(p1, p2) to make contact with a face u(v1, v2, . . . , vk) are that (i) the line segment and the face are coplanar, and that (ii) at least one endpoint of the line segment lies on the face or (iii) the segment intersects at least one edge of the face. The conditions are described as

    V_{v1 v2 v3 p1} = 0  &  V_{v1 v2 v3 p2} = 0  &  min{max(min_{p1 u}, min_{p2 u}), min(max_{ei l})} < 0.

The first two formulae are equivalent to condition (i) above. In the last formula, max(min_{p1 u}, min_{p2 u}) is for condition (ii), and min(max_{ei l}) is for condition (iii), where min_{pj u} (j = 1, 2) and max_{ei l} (i = 1, 2, . . . , k) are defined as

    min{s_{v1 v2 pj} · s_{vi vi+1 pj} (i = 2, . . . , k)}  and  max(s_{p1 p2 vi} · s_{p1 p2 vi+1}, s_{vi vi+1 p1} · s_{vi vi+1 p2}).

3.3.5. Segment–Face Collision-Free Constraint
The satisfaction of the conditions in the preceding subsection guarantees segment–face contact; however, the invalidity of any of them does not mean that the segment and the face do not collide with each other. For example, even if they are not coplanar, the link may still penetrate the face. The sufficient condition for collision avoidance between a face u(v1, v2, . . . , vk) and a line segment l(p1, p2) can be expressed as

    max{V_{v1 v2 v3 p1} · V_{v1 v2 v3 p2}, V_{v1 v2 v3 p1} · V_{vi vi+1 p1 p2} (i = 1, 2, . . . , k)} > 0,

where V_{v1 v2 v3 p1} · V_{v1 v2 v3 p2} > 0 means that the two endpoints of the segment l are on the same side of the plane π = v1v2v3 in which the face u lies. If the two endpoints of l are on different sides of π, V_{v1 v2 v3 p1} · V_{vi vi+1 p1 p2} > 0 guarantees that the point where


l penetrates π is to the right side of the edge ei(vi, vi+1) and, therefore, outside of u (Fig. 3d). The above condition and the dissatisfaction of those in the previous subsection form the sufficient and necessary conditions of the segment–face collision-free constraint.

3.4. Constraints in the Force Domain

Under the static or quasi-static condition, the force constraints include force equilibrium, contact forces without slipping and joint torque limits. Suppose that the hand–object system is in a static or quasi-static state; then all forces acting on the object must sum to zero and all forces applied to each finger must be balanced by the finger joint torques, which are always bounded. In addition, with frictional contact, the normal force must be sufficient to counter the tangential force in order for slipping not to occur.

The static equilibrium of the object means

    GF + W = 0,

where G = [G1 ... Gnc] is the grasp matrix of dimension 3 × k (2-D case) or 6 × k (3-D case), nc is the number of contacts on the object, k = k1 + ... + knc, and ki is the dimension of the wrench basis or friction cone at the i-th contact. For a frictionless point contact ki = 1; for a point contact with friction, ki = 2 (2-D case) or ki = 3 (3-D case); and for a soft finger in the 3-D case, ki = 4. W = [w1 w2 w3]^T ∈ R^3 (2-D case) or W = [w1 ... w6]^T ∈ R^6 (3-D case) is the external wrench (including gravity) exerted on the object by the environment, F = [f1 ... fnc]^T ∈ R^k is the set of contact forces generated by the hand, and fi ∈ R^{ki} is the i-th contact force [25].

With a frictional point contact (vertex–link or fingertip–edge contacts), to avoid slipping the contact force must lie in its friction cone, i.e., ‖fi^t‖ ≤ μ fi^n, where μ is the coefficient of friction, and fi^t ∈ R^1 (2-D case) or fi^t ∈ R^2 (3-D case) and fi^n ∈ R^1 are the tangential and normal components of the contact force fi exerted at the i-th contact position. The contact force is unilateral, i.e., the normal force must be compressive, fi^n > 0. With link–edge contact, the grasp forces are indeterminate and it is reported that some of them are infeasible. They may be equivalent to a few grasp forces with frictional point contacts. A comprehensive analysis of the indeterminate grasp forces in power grasps with a rigid-body model can be found in Ref. [26].

The condition of static equilibrium of the hand requires that the contact forces applied to the hand be balanced by the joint torques, which can be described as Jh^T F′ = τ, where Jh is the hand Jacobian, consisting of as many columns as joint torques and as many rows as contact forces, both normal and tangential. Jh depends on the locations of contact on the hand and is a function of the hand configuration Θ. F′ is the reaction of F, i.e., the force exerted on the hand by the object, but described in the palm frame. τ is the set of joint torques of the hand. In practice, joint torques are limited, i.e., τ ∈ [T^L, T^U], where T^L and T^U are the lower and upper bounds of τ, respectively. Considering the torque limits and a zero lower boundary T^L = 0, we have the constraint

    0 ≤ Jh^T F′ ≤ T^U.
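The force-domain constraints can be checked numerically once the grasp matrix G, the contact forces F, the external wrench W, the hand Jacobian Jh and the torque bounds T^U are available. The sketch below is a simplified illustration, not the authors' code: it assumes 3-D frictional point contacts with each contact force ordered as (two tangential components, normal component) and, for brevity, uses F itself in the torque balance rather than its reaction expressed in the palm frame.

```python
import numpy as np

def force_constraints_ok(G, F, W, Jh, TU, mu, eps=1e-9):
    """Check equilibrium, friction-cone, unilateral and torque-limit constraints."""
    if not np.allclose(G @ F + W, 0.0, atol=1e-6):             # G F + W = 0
        return False
    for fi in F.reshape(-1, 3):                                 # assumed order (f_t1, f_t2, f_n)
        ft, fn = np.linalg.norm(fi[:2]), fi[2]
        if fn <= eps or ft > mu * fn:                           # f_n > 0 and ||f_t|| <= mu f_n
            return False
    tau = Jh.T @ F                                              # joint torques balancing the contact forces
    return bool(np.all(tau >= -eps) and np.all(tau <= TU + eps))   # 0 <= Jh^T F <= T^U
```

In the optimization models of Section 4 these checks appear as equality and inequality constraints rather than as a yes/no test.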


3.5. Other Constraints

When a contact requires the exact locations or positions of the contacts between two topological features (e.g., a fixed-point contact), only two points are involved and the constraint is trivial:

    (xi − xi′)^2 + (yi − yi′)^2 = 0  or  (xi − xi′)^2 + (yi − yi′)^2 + (zi − zi′)^2 = 0,


in the 2-D and 3-D cases, respectively, where (xi, yi) or (xi, yi, zi) and (xi′, yi′) or (xi′, yi′, zi′) are the coordinates of the desired locations of the contact points on the hand and on the object, respectively. All coordinates are with respect to a common coordinate frame. The condition implies that each term in the above equations is zero and hence each pair of corresponding coordinate components is equal, i.e., xi = xi′, and so on.

For the joint limits of the hand, let
    Θ^L = [θ1^L θ2^L ... θn^L]^T  and  Θ^U = [θ1^U θ2^U ... θn^U]^T

be the lower and upper bounds of the joint limits, respectively. Then the constraint on the joint angles is Θ^L ≤ Θ ≤ Θ^U.

4. Optimization Models

The workspace of a robot may be formed and discretely described by its boundary surfaces or curves. The target of workspace generation is hence to obtain these boundaries, which can be done by an optimization technique as follows.

4.1. In the 2-D Case

In Step (i) of Algorithm 1, we need to find the extrema of the object position x and y, considering all the constraints in the grasp. This can be fulfilled by a problem of constrained optimization. Based on the constraint evaluation in the previous section, the global optimization model is of the form

    max ζ (or min ζ)
    s.t.  ζ̄^L ≤ ζ̄ ≤ ζ̄^U,
          φ^L ≤ φ ≤ φ^U,  Θ^L ≤ Θ ≤ Θ^U,
          A_i = 0                     (i = 1, 2, . . .),
          A_j > 0 (< 0)               (j = 1, 2, . . .),
          min_k{·} > 0 (< 0)          (k = 1, 2, . . .),
          max_l{·} > 0 (< 0)          (l = 1, 2, . . .),
          GF + W = 0,
          0 ≤ Jh^T F′ ≤ T^U,
          ‖fi^t‖ ≤ μ fi^n,  fi^n > 0  (i = 1, 2, . . . , nc),          (1)


where ζ ∈ {x, y} represents the position component to be maximized or minimized and ζ̄ ∈ {x, y}\{ζ}, i.e., x̄ = y and ȳ = x (when the extreme value of one position coordinate is computed, the constraint on the other coordinate should be considered). The superscripts L and U denote the lower and upper bounds of the corresponding variable, respectively. A_i is a signed triangular area coming from the contact constraints in the grasp. min_k(·) and max_l(·) in the inequalities come from contact constraints and/or collision-free constraints. They and A_i, A_j are functions of the joint angles Θ, the object position (x, y) and the orientation φ. The four formulas in the lower part of the above model are constraints in the force domain, including force equilibrium of the object, driving torque limits, slipping avoidance and unilateral contact force.

In Step (iii) of Algorithm 1, the search for the maxima or minima of one position variable (x or y) is performed along a horizontal or vertical grid line. This is done in the optimization by fixing one position variable at some value. Using ζ̄ = a (where a is the value associated with the search line) to replace the first constraint in (1), we get the required optimization model for this step.

In Step (iv) of Algorithm 1, at each node (within the boundary) of the grid, where the location of the object is given, we need to determine the range of object orientation that maintains the feasibility of the grasp. The objective function in the optimization model to fulfill this is max φ (or min φ) and the constraints are almost the same as in (1), but with (x − xd)^2 + (y − yd)^2 = 0 replacing the first one in (1) (i.e., ζ̄^L ≤ ζ̄ ≤ ζ̄^U). (xd, yd) is the coordinate of a grid node within the boundary of the workspace obtained in Step (iii) of Algorithm 1.

4.2. In the 3-D Case

In Step (i) of Algorithm 2, we need to find the maxima and the minima of the object position x, y and z, respectively, considering the constraints on contact, collision avoidance and joint limits. This can be realized by a problem of constrained global optimization. Based on the constraint evaluation in the previous section, the optimization model can be described as

    max w (or min w)
    s.t.  P_w̄^L ≤ P_w̄ ≤ P_w̄^U,
          Φ ∈ [Φ^L, Φ^U],  Θ ∈ [Θ^L, Θ^U],
          V_i = 0                     (i = 1, 2, . . .),
          max_j(·) > 0 (< 0)          (j = 1, 2, . . .),
          min_k(·) > 0 (< 0)          (k = 1, 2, . . .),          (2)

where w ∈ {x, y, z} represents the position variable to be maximized or minimized, w̄ ∈ {x, y, z}\{w}, and P_w̄ stands for the position coordinates excluding w, e.g., P_x̄ = (y, z). Θ^L and Θ^U are vectors of the lower and upper bounds of the joint angles, respectively; P_w̄^L, P_w̄^U and Φ^L, Φ^U are the constraints (lower and upper


bounds) of the object position and orientation with respect to the palm frame, respectively. V_i is a tetrahedral volume and comes from the contact constraints in the grasp. j and k index a number of inequalities with max_j(·) and min_k(·) for contact constraints and/or collision-free constraints. They and V_i are functions of Θ and/or P, Φ. Note that the constraints in the force domain described in Section 3.4 can be directly incorporated into the optimization models in the same manner as in the 2-D case.

In Step (v) of Algorithm 2, the search for the maxima or minima of one position variable (x, y or z) is performed along a line (the intersection of two perpendicular slices). This is done in the optimization by fixing the other two variables at the values associated with the line.

In Step (viii), at each node of the grid, where the location of the object is given, we need to determine the range of the object rotation, which can be fulfilled using the optimization model

    max ψ (or min ψ)
    s.t.  (α, β) ∈ Ω,  Θ ∈ [Θ^L, Θ^U],
          (x − xd)^2 + (y − yd)^2 + (z − zd)^2 = 0,
          V_i = 0                     (i = 1, 2, . . .),
          max_j(·) > 0 (< 0)          (j = 1, 2, . . .),
          min_k(·) > 0 (< 0)          (k = 1, 2, . . .),          (3)

where (xd, yd, zd) is the position of the given grid node. If these constraints are violated in the computation, then the grasp is not kinematically feasible at this position, which means that the node is outside the manipulation workspace. Otherwise, the node is within the workspace and the maximum or minimum of ψ can be obtained at this node with the given grasp. Optimization model (3) is also employed in the later steps of the algorithm to find the rotation range of the object at a given position in the workspace, but with a specified value of (α, β) (a node in Ω) in the model.

Note that, in a canonical grasp, the grasp positions are not specified and the previous optimization models are amenable to this situation. If the grasp positions are defined (e.g., in a fixed-point grasp), then the following contact constraint should be incorporated into the optimization models:

    (xp − xq)^2 + (yp − yq)^2 + (zp − zq)^2 = 0,          (4)

where p(xp, yp, zp) and q(xq, yq, zq) are the contact points on the hand and on the object, respectively. It can be seen that the optimization models for workspace computation are very complicated, with a number of nonlinear functions. An efficient algorithm or software tool is required to solve them.
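As an illustration of the structure of these models, the following toy Python sketch (entirely ours; it is not the LGO setup used by the authors) maximizes one position coordinate of an object held by a single 2-link planar finger, with the fixed-point contact expressed as an equality constraint and the joint limits as bounds; scipy's SLSQP is used only as a convenient stand-in for a global solver.

```python
import numpy as np
from scipy.optimize import minimize

l1, l2 = 50.0, 40.0                                   # link lengths (mm), arbitrary choice

def fingertip(theta):
    t1, t2 = theta
    return np.array([l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
                     l1 * np.sin(t1) + l2 * np.sin(t1 + t2)])

# decision variables z = (theta1, theta2, x, y): joint angles plus object position
objective  = lambda z: -z[2]                                        # maximize x
contact_eq = lambda z: fingertip(z[:2]) - z[2:]                     # fingertip coincides with object point
bounds     = [(np.pi / 6, 2 * np.pi / 3), (0.0, 5 * np.pi / 12),    # joint limits
              (-100.0, 100.0), (-100.0, 100.0)]                     # box on the object position

res = minimize(objective, x0=[0.8, 0.4, 40.0, 40.0], bounds=bounds,
               constraints=[{"type": "eq", "fun": contact_eq}], method="SLSQP")
print("x_max =", res.x[2])   # expect about (l1 + l2) cos(pi/6) = 77.9 at the joint limits
```

A real model of form (1) or (2) would add the area- or volume-based contact and collision-free inequalities and the force-domain constraints in exactly the same way, as further constraint entries.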


5. Illustrative Examples

To illustrate and verify the effectiveness of the proposed approach and the corresponding algorithms, we provide several examples. In the examples, the optimization models are solved with the commercial software tool LGO (GUI version), which is a model development system for Lipschitz-continuous global optimization [27], developed by Pinter Consulting Services. The computation time with LGO depends on the box size (intervals) of the variables and the algorithm used in the optimization solution (two algorithms, branch-and-bound and random search, are available in LGO), and it may vary a little in different runs for the same model. It is possible to develop a system with the core of LGO to automatically solve all optimization models and visualize the workspaces. On a PC with an Intel Core2 CPU (1.66 GHz) and 1 GB memory, the typical running time with LGO for each optimization model is 0.20 s in the 2-D case and 0.7 s in the 3-D case, including I/O actions. Since the analysis is off-line, this time is quite acceptable.

5.1. In the 2-D Case

In this example, we generate the workspace of a hand grasping and manipulating a rectangular object of size 70 × 50 mm^2. Provided that the hand–object system is in a vertical plane, the object weight must be balanced. The hand is assumed to consist of two fingers with 2 d.o.f. each. Suppose the lengths of the links are l1 = 50, l2 = 40, l3 = 50, l4 = 40 and d = 60 (d is the distance between the two finger base points) (all in mm). Let θ1, θ3 ∈ [π/6, 2π/3] and θ2, θ4 ∈ [0, 5π/12] (in rad); their positive directions are shown in Fig. 1a. Assume that the limits of the driving torques of the hand are T1 = 400, T2 = 500, T3 = 500 and T4 = 400 (all in N·mm); the normal contact forces are f1^n ≤ 10 N and f2^n ≤ 10 N; the coefficient of friction is μ = 0.3. In the manipulation, the two fingertips t1 and t2 contact the two middle points, q1 and q2, of the left and right short edges, respectively; no sliding is allowed. Then the fixed-point contact constraints can be expressed as

    (xt1 − xq1)^2 + (yt1 − yq1)^2 + (xt2 − xq2)^2 + (yt2 − yq2)^2 = 0,

where the coordinates can be easily obtained from the finger kinematics and the object configuration. From the feasibility analysis, it is found that the maximum weight of the object that can be grasped by the hand is wmax = 4.11 N. Now suppose that the object weight is wg = 3.5 N. From the optimization models in the form of (1), the extreme positions of the object are found, as listed in the second row of Table 1.

Table 1. Extreme positions of the grasped object (positions in mm, orientation in deg)

Weight (N)   xmin      xmax     ymin     ymax     φmin,max
wg = 4.0     −8.80     8.80     70.94    73.51    ±1.36
wg = 3.5     −42.89    42.89    64.41    84.23    ±18.23
wg = 0.0     −47.52    47.52    64.41    89.75    ±18.58
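The fixed-point contact residual above can be written out explicitly from the finger kinematics. The sketch below is hypothetical in its details: the link lengths and the object size are those of the example, but the finger base positions (placed d = 60 mm apart on the palm X-axis) and the joint-angle convention are our assumptions, since they are defined only in Fig. 1a.

```python
import numpy as np

L1, L2, L3, L4, D = 50.0, 40.0, 50.0, 40.0, 60.0      # mm, values from the example
BASE1, BASE2 = np.array([-D / 2, 0.0]), np.array([D / 2, 0.0])   # assumed base placement

def tip(base, l_a, l_b, t_a, t_b):
    """Planar 2-link fingertip position for joint angles (t_a, t_b)."""
    return base + np.array([l_a * np.cos(t_a) + l_b * np.cos(t_a + t_b),
                            l_a * np.sin(t_a) + l_b * np.sin(t_a + t_b)])

def object_points(x, y, phi, w=70.0, h=50.0):
    """Mid-points q1, q2 of the left and right short edges of the 70 x 50 box."""
    c, s = np.cos(phi), np.sin(phi)
    rot = np.array([[c, -s], [s, c]])
    centre = np.array([x, y])
    return centre + rot @ np.array([-w / 2, 0.0]), centre + rot @ np.array([w / 2, 0.0])

def contact_residual(theta, x, y, phi):
    """(x_t1 - x_q1)^2 + (y_t1 - y_q1)^2 + (x_t2 - x_q2)^2 + (y_t2 - y_q2)^2."""
    t1 = tip(BASE1, L1, L2, theta[0], theta[1])
    t2 = tip(BASE2, L3, L4, theta[2], theta[3])
    q1, q2 = object_points(x, y, phi)
    return np.sum((t1 - q1) ** 2) + np.sum((t2 - q2) ** 2)

print(contact_residual([np.pi / 3, np.pi / 6, 2 * np.pi / 3, np.pi / 6], 0.0, 70.0, 0.0))
```

In the optimization models this residual is driven to zero as the equality contact constraint while one of x, y or φ is extremized.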


Figure 5. Hand–object configurations at the four extreme positions: (a) xmin, (b) xmax, (c) ymin and (d) ymax.

The corresponding configurations of the hand–object system are shown in Fig. 5, where one of the base joints reaches its limit (Fig. 5a–c) or the base joints reach their driving capability (Fig. 5d; in other words, the base joints would need larger driving torques to lift the object higher without slipping). It can be seen from Fig. 5 that the configurations at xmin and xmax are symmetric about the Y-axis, which is due to the symmetry of the hand–object system about this axis.

The enclosing rectangle bounded by xmin, xmax, ymin and ymax is divided into a grid of 84 × 20 nodes (with intervals of 1 mm). Along the vertical grid lines, a series of boundary points is obtained from the models in the form of (1), from which the boundary curve in the XY plane is plotted, as in Fig. 6a. At each node within the boundary curve, we then further calculate the minimum and maximum of the object orientation using models similar to (1), and check the grasp feasibility for the intermediate values of φ between its two extrema at a 1° resolution. In this example, the workspace along the φ-axis is continuous for each (x, y) node, as is intuitively obvious. Finally, the complete workspace is visualized in a 3-D frame, as shown in Fig. 6c, whose coordinates are the position (x, y) and the orientation φ. From Fig. 6, it can be seen that the projection of the workspace on the XY plane is symmetric and the visualized 3-D workspace is axis-symmetric about the Y-axis (due to the symmetry of the hand–object system).

For comparison, more calculations are performed for the workspaces of objects with the same geometry (size and shape) but different weights.


Figure 6. Workspace of a rectangular object with different weights (wg = 3.5 N and wg = 0): (a) top view (wg = 3.5 N), (b) top view (wg = 0), (c) 3-D (x, y, φ) frame (wg = 3.5 N) and (d) 3-D (x, y, φ) frame (wg = 0).

The extreme positions of the object with weight 4.0 N (near the maximum of 4.11 N that can be held by the hand) are listed in the first row of Table 1. It can be deduced that this workspace is much smaller than the previous one. Consider further the manipulation of an object without weight (i.e., wg = 0; say, the hand–object system lies on a friction-free horizontal plane, while the forces of the hand are still under consideration). The extreme positions of the object are computed as listed in the third row of Table 1. The enclosing box formed by xmin, xmax, ymin and ymax is divided into a grid of 102 × 26 nodes (the intervals of the x and y coordinates are 0.95 and 1.01 mm, respectively). The corresponding workspace is shown in Fig. 6b and d. It is the same as that obtained with only the kinematic constraints in the grasp taken into account (constraints in the force domain ignored). It is clearly observed that the workspace of the manipulation with full constraints is smaller than that with only kinematic constraints considered; it is a subset of the latter. It can be seen from Fig. 6c and d that the part of the workspace near the position boundary becomes thinner. This indicates that when the object is near the boundary curve in the XY plane, the range of rotation becomes small, and on the curve no rotation range is allowed with the grasp; in other words, the orientation of the object is unique at one specified position on the boundary.

As stated before, the formulae and algorithm for workspace generation are general. They are adaptive to various contact types, not limited to the fixed-point contacts in the preceding example. We have obtained the workspace in the case that sliding is allowed, i.e., the grasp is defined by two contact pairs, g = {(t1, e1), (t2, e3)}, without specifying the contact locations. It was found that the workspace in this case is much larger than those with fixed-point contacts [28]. The result is


consistent with the prediction in Ref. [17] that if the contact points are allowed to move across the surfaces through rolling or sliding, it is possible to increase the size of the workspace greatly.

5.2. In the 3-D Case

This example is modeled on a three-fingered hand manipulating a box of 35 × 25 × 75 mm^3, as shown in Fig. 1c. The hand consists of three identical fingers with 3 d.o.f. each (finger f1 is hidden by f2 in Fig. 1c). The axes of the two outer joints in one finger are parallel to each other and perpendicular to that of the first joint. The link lengths of one finger are 18, 50 and 40 mm, respectively. The finger coordinate frames are located at (7.2, 30, 19.2), (7.2, −30, 19.2) and (7.2, 0, 19.2), respectively, in the palm frame O-XYZ, with angles of 22.5° between their Xi axes and the palm X-axis. The joint ranges of the first joints of the fingers are θ1 ∈ [−45°, 60°], θ4 ∈ [−60°, 45°] and θ7 ∈ [−60°, 60°], and those of the second and third joints are θ2, θ5, θ8 ∈ [60°, 90°] and θ3, θ6, θ9 ∈ [0°, 90°]. We consider the manipulation with fixed-point contacts {(t1, q1), (t2, q2), (t3, q3)}, where t1, t2 and t3 are the fingertips of the three fingers, q1(0, 30, 12.5) and q2(0, −30, 12.5) are two points on the top face of the box (defined in the object frame), and q3(0, 0, −12.5) is the central point of the bottom face. The contact constraint is in the form of (4), and the collision-free constraints are mainly of the two-segment and segment–face types. For the sake of computational simplicity, we treat each link as an ideal line segment with zero cross-sectional area and do not consider the factors in the force domain. Even in this simplified case, the optimization models have 15 variables (nine for the joint angles and six for the object position and orientation) and up to 55 constraints in total (one equality for the contact constraints and 54 inequalities for the collision-free constraints).

From the optimization models in the form of (2), the extreme object positions are obtained as xmin = 56.59, xmax = 113.67, ymin = −78.23, ymax = 78.26, zmin = −57.38 and zmax = 57.38 (all in mm). The corresponding configurations of the hand–object system are shown in Fig. 7. It is found that the magnitudes of ymax and ymin, and of zmax and zmin, are approximately equal, respectively; the configurations at xmax, zmax and zmin lie on the plane XOZ of the palm frame. This observation can also be derived from the symmetry of the hand–object system about the plane XOZ. The configurations at xmin, ymax and ymin are neither on the plane XOZ nor on the plane XOY, which is also reasonable, as will be seen shortly in the position workspace.

We use horizontal planes z = 0, 15, 30 and 45 and vertical planes y = 0, 15, 30, 45 and 60 (parallel to the XOY and XOZ planes of the palm frame, respectively) to slice the enclosing box constructed by the planes x = xmin, x = xmax, y = ymin, y = ymax, z = zmin and z = zmax. On each slicing plane, we use the optimization model in a form similar to (2) to get the extreme points by fixing two position variables and calculating the extreme values of the third position variable (e.g., on the horizontal plane z = 30, set y = i·Δy (i = 1, 2, . . . , 27) and solve for the minimal and maximal values of x, respectively, where Δy = 3 mm is the resolution for the boundary points; in other words, we are searching for the boundary points


Figure 7. Grasp configurations at the extreme positions: (a) xmax (113.7, 0.0, 0.0, 38.5, 37.8, 0.0), (b) ymax (74.0, 78.3, 22.3, 138.6, 24.0, 22.6), (c) zmax (85.1, 0.4, 57.4, 90.0, 0.0, 7.0), (d) xmin (56.6, 60.6, 27.6, 142.5, 34.5, 17.3), (e) ymin (73.7, 78.2, 23.2, 41.8, 24.5, 21.6) and (f) zmin (85.1, 0.4, 57.4, 88.9, 1.0, 7.0).

along lines such as ab in Fig. 2, and the distance between two adjacent search lines is 3 mm). Connecting adjacent boundary points, we obtain boundary curves on these slicing planes, as shown in Fig. 8a and b. These horizontal and vertical curves roughly construct the position workspace, as shown in Fig. 8c. The part of the position workspace in the −Y direction is obtained according to the symmetry of the hand–object system about the vertical plane XOZ of the palm frame.

For the orientation workspace, we discretize the local coordinate plane Ω of the sphere by using a grid with a resolution of 0.2094 rad (12°) to get a series of nodes (31 × 16 = 496 nodes in total) corresponding to rotation vectors (Fig. 2b and c). The orientation workspace depends on the object position. As an example, let the object be at position (95.0, 10.0, 0.0) in the palm frame. At each node, from the optimization model in the form of (3), we solve for the maximal rotation angle ψmax. In a 3-D frame with (α, β, ψ) as the three coordinates, where (α, β) defines a rotation axis and ψ indicates the maximal angle that the object can rotate about that axis, the rotation range is depicted in Fig. 9a. For better intuition, the rotation vector corresponding to each (α, β, ψ) is depicted in a coordinate frame, as shown in Fig. 9b, where the rotation direction is shown by the direction of the vector (ray segment) and the maximal rotation angle by the magnitude of the vector. All the endpoints of the vectors roughly span the orientation workspace, as shown in Fig. 9c and d. Note


Figure 8. Position workspace. (a) On horizontal planes. (b) On vertical planes. (c) In the 3-D palm coordinate frame.

Figure 9. Orientation workspaces at positions (95.0, 10.0, 0.0) (top) and (115.0, 25.0, 10.0) (bottom). (a) In the (α, β, ψ) frame. (b) Rotation vectors. (c) Rotation vectors (views 1 and 2).

that the finer the gridding resolution is, the smoother the rotation workspace looks.
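The construction of the orientation workspace in Steps (vii)–(ix) can be sketched as follows (our own illustration): each grid node (α, β) is mapped to a unit rotation axis and a point is placed at distance ψmax along that axis. Here psi_max abstracts optimization model (3) and is replaced by a constant dummy so that the loop runs.

```python
import numpy as np

def axis_from_node(alpha, beta):
    """Unit rotation axis for spherical local coordinates (alpha, beta)."""
    return np.array([np.cos(beta) * np.cos(alpha),
                     np.cos(beta) * np.sin(alpha),
                     np.sin(beta)])

def orientation_surface(psi_max, step_deg=12.0):
    """Endpoints psi_max(alpha, beta) * axis(alpha, beta) spanning the orientation workspace."""
    step = np.deg2rad(step_deg)
    alphas = np.arange(-np.pi, np.pi + 1e-9, step)           # 31 nodes over (-pi, pi]
    betas = np.arange(-np.pi / 2, np.pi / 2 + 1e-9, step)    # 16 nodes over (-pi/2, pi/2]
    return np.array([psi_max(a, b) * axis_from_node(a, b)
                     for a in alphas for b in betas])

pts = orientation_surface(lambda a, b: 0.5)   # dummy: 0.5 rad about every axis
print(pts.shape)                              # (496, 3), one endpoint per grid node
```

Connecting neighbouring endpoints into facets, as in Step (x), then gives the surface plotted in Fig. 9.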


Figure 9. (Continued.)

Each point on the surface of the rotation workspace corresponds to a rotation axis and the maximal rotation around it. For comparison, consider the orientation workspace at position (105.0, 20.0, 15.0), which is nearer to the boundary of the position workspace than (95.0, 10.0, 0.0). The corresponding orientation workspace is also shown in Fig. 9 (bottom row). From the results, we see that the shapes and volumes of the orientation workspaces at different positions are quite different. The results show, and it can be imagined, that as the object approaches the boundary of the position workspace, the orientation workspace becomes smaller and thinner. On the boundary, the orientation workspaces degenerate to vectors corresponding to the orientations at those positions (Fig. 7).

The examples clearly illustrate the algorithms and verify the effectiveness of the proposed method, although the finger links and tips are regarded ideally as line segments and points, respectively, for the sake of simplicity. In a realistic hand, the actual shapes and sizes of the finger links and tips cannot be ignored. In this case, the links and fingertips may be approximated by polyhedra. The contact and collision-free constraints are then evaluated between these approximating polyhedra and the grasped object using their corresponding topological features, and the algorithm can still be applied in the same manner. However, the number of constraints will


6. Conclusions

In this paper, we have systematically and thoroughly studied the problem of generating the workspace of a multifingered hand manipulating an object, a problem that is difficult to solve with analytical or geometric methods. Using an optimization technique, we have presented a novel numerical approach and algorithms for this problem in both planar and spatial cases. All factors that affect the workspace have been taken into account, including the kinematics of the hand, the geometry of the object to be manipulated, force equilibrium, frictional forces and driving torque limits. The approach is based on feasibility analysis of grasps, in which the central constraints, i.e., contact constraints, collision-free constraints and force constraints, play a key role. We have formulated optimization models for the computation of the workspace and then visualized the workspace in 3-D coordinate frames.

It has been observed from the examples in the 2-D case that the factors or constraints in the force domain play an important role in the workspace generation of multifingered manipulation. This is an important characteristic, and it is quite different from the workspaces of other robots, where only geometric and kinematic factors are involved. The workspace with all constraints taken into account is a subset of that obtained with only kinematic constraints and is more practical. It has been verified that the more numerous and stricter the constraints on a grasp, the smaller the workspace of the manipulation.

Our approach provides an effective, general and complete solution to the long-term open problem of the workspace of multifingered manipulation, or of the coordination of multiple robots, in which analytical methods cannot be effectively applied. With its consideration of the full constraints in a grasp and its generality, the presented approach represents a significant improvement over previous methods. Although only examples of manipulating rectangular and cubic objects are provided, our method is not limited to these. It is adaptable to a variety of hand-object systems: it can be used to generate the workspace of a multifingered hand with any number of fingers manipulating various polygonal objects in various grasp configurations with various contact types, owing to the adaptability of our feasibility analysis.

Acknowledgement

The work is supported in part by the National Science Fund for Distinguished Young Scholars of China (50825504).





About the Authors


Yisheng Guan received his BS degree from Hunan University of Agriculture, in 1987, his MS from Harbin Institute of Technology, in 1990, and his PhD from Beijing University of Aeronautics and Astronautics, in 1998, all in P. R. China. He conducted research in the Department of Computing Science, University of Alberta, Canada, as a Postdoctoral Fellow, from 1998 to 2000, and at the Intelligent Systems Institute, AIST, Japan, with a Fellowship of the Japan Society for the Promotion of Science (JSPS), from 2003 to 2005. He is currently a Professor in the School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou, P. R. China. His research interests include humanoid robotics, multifingered hands, biomimetic robotics and medical robotics.

Hong Zhang received his BS degree from Northeastern University, Boston, MA, USA, in 1982, and his PhD degree from Purdue University, West Lafayette, IN, USA, in 1986, both in Electrical and Computer Engineering. He is currently a Professor in the Department of Computing Science, University of Alberta, and the Director of the Centre for Intelligent Mining Systems. He is the holder of the NSERC Industrial Research Chair in Intelligent Sensing Systems. His current research interests include robotics, computer vision and image processing.


Xianmin Zhang received his PhD degree from Beijing University of Aeronautics and Astronautics, P. R. China, in 1993. After that, he conducted research as a Postdoctoral Fellow at Northwestern Polytechnical University, P. R. China, and Minnesota University, USA. He is currently a Professor and a Vice Dean of the School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou, P. R. China. He has authored or coauthored over 150 papers and successfully applied for more than 20 patents. His current research interests lie in elastic and compliant mechanisms, topological optimization, dynamics and vibration control of mechanical systems, precision manufacturing, and automation.

Zhangjie Guan received his BS degree from Hunan Institute of Science and Technology, P. R. China, in 2009. He is currently pursuing a Master's degree in the School of Aeronautics Science and Engineering, Beijing University of Aeronautics and Astronautics.
