
CIRP Journal of Manufacturing Science and Technology 14 (2016) 20–27


A method for detection of randomly placed objects for robotic handling


Panagiota Tsarouchi, Stereos-Alexandros Matthaiakis, George Michalos, Sotiris Makris,
George Chryssolouris *
Laboratory for Manufacturing Systems and Automation, Department of Mechanical Engineering and Aeronautics, University of Patras, Patras 26500, Greece

A R T I C L E   I N F O

Article history:
Available online 23 April 2016

Keywords:
Object recognition
Image processing algorithm
Robot
Sensor
Measurement

A B S T R A C T

Image based systems still have open issues in meeting the latest manufacturing requirements for simplicity, low cost and limited maintenance. In this direction, a method is proposed for the recognition of 3D randomly placed objects for eventual robotic handling. The method includes a 2D vision system and is combined with data from computer-aided design (CAD) files for the generation of 3D coordinates. It is generic and can be used for the identification of multiple randomly placed objects. The method has been implemented in a software tool using MATLAB® and has been applied to a consumer goods case for the recognition of shaver handles.

© 2016 The Authors. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Introduction

Important aspects of vision systems in robotic applications are the simplicity of the algorithm, the low cost and the reduced need for maintenance, while fast and effective identification still constitutes an unsolved problem. Even though adequately efficient and accurate algorithms have been developed, the processing speed still fails to meet modern manufacturing requirements [1]. The problem becomes further complicated owing to the objects' properties, such as shape, material, colour, etc. Additionally, the requirements for simplicity and low cost are directly connected to the production rate, which is expected to increase with the introduction of robotic equipment in modern production lines [2].

The need for object recognition systems is met in multiple industrial applications, where different objects of variable shapes and sizes should be handled. An example of such an application is the consumer goods industry. This sector lacks flexible, low cost and simple automation solutions that can cope with high production rates and a variety of products (Fig. 1(a)). Currently, the high speed feeding and handling of objects imposes the need for dedicated equipment, namely feeder bowls. Such solutions are noisy, expensive and product specific. Specifically, feeder bowls (Fig. 1(b)) cannot handle more than one product and must be redesigned for new products.

An extended review of the sensor technologies and research trends for assembly systems has shown that vision systems are suitable for object recognition, inspection and robot handling applications [3–8]. Vision-based robot control is investigated in [9–14], while a survey on visual servoing systems is presented in [15–17]. Research has been done on the design aspects of machine vision systems for industrial applications [18], and has led to improvements in reliability and product quality [1,19,20]. Vision system classification was investigated in [21–27]. The pattern recognition method has also been investigated in [14,28,29], while 3D vision systems have been presented in [30–38]. Such systems have the advantage of recognizing the objects' characteristics, but they are based on complicated algorithms and are prone to failures in industrial environments. Methods for error measurement in vision systems have been researched in [39], while techniques for the classification of objects and point descriptors are designated in [40–44]. Histogram-based image descriptors have been evaluated for the classification of 3D objects [45], while a comparison between the methods of local and full ranking point descriptors was reported in [46,47].

A low cost system is proposed for 3D randomly placed objects, in order to meet the challenges of the semi-structured industrial environment. This system is based on simple algorithms for the recognition of 2D objects, combined with CAD data for the computation of 3D coordinates. The use of CAD data, instead of a 3D vision system, reduces the complexity and the cost. The object pose recognition is based on observations of the physical object, while the measurement of the third coordinate, along the z axis, was made with the help of the object design files (CAD). The techniques of the 2D vision system comprise basic image processing algorithms for the identification of the Points of Interest (POIs) and of the object pose in the image. The method is applied to a consumer goods pilot case.

* Corresponding author. Tel.: +30 2610 997262; fax: +30 2610 997744.
E-mail address: xrisol@otenet.gr (G. Chryssolouris).

http://dx.doi.org/10.1016/j.cirpj.2016.04.005
1755-5817/© 2016 The Authors. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Fig. 1. (a) Variety of products in consumer goods industry and (b) feeder bowls.

Method description

The proposed method includes (a) an offline system using the CAD files of the objects and (b) an online system for the recognition of object poses and 2D POIs (Fig. 2). The offline system is used for the observation of the candidate objects, in order for the different poses and the POIs to be decided upon. Observations with physical objects allow the identification of an object's n different poses and the definition of the POIs. The distance from the object's lying surface to the POI along the z axis is measured in a CAD software platform for the n different poses. The z axis coordinate of each different POI is stored and used later on, in the online vision system.

The image based recognition system runs online in order to detect the objects' poses and compute the 2D POI coordinates. The transformation between the image plane and the world reference frame (WRF) of the estimated 2D POI follows. Once the 2D POI coordinates have been estimated, the 3D POI coordinates are generated with the help of the offline system.

The online system starts with basic image processing techniques and principles, via the combination of algorithms for object pose detection and POI recognition. The geometrical features, such as the pose, orientation, coordinates of the POIs, etc., are computed via binary image processing and contrast value measurements in the blue band of the RGB image. In the RGB image, the value of 255 is considered 100% visible in terms of blue colour and 0 fully transparent, based on a single 8-bit byte system. The object's shape is approximated with an ellipse, and some of its geometrical features are computed with the help of the ellipse, comprising the major and minor axes, edges, middle points, orientation, etc. Other features that have been taken into account are the centroid and middle points, which are estimated by dividing the distance of the major axis.

The next step concerns the automatic object pose detection. This method is based on the maximum distance between the k middle points and the centroid point. The poses are decided upon from the distances of these k points (Eq. (3.1)), as well as their contrast values in the blue band. The POI automatic recognition is computed on the basis of the highest contrast values, compared with those of the middle points, depending on the object.

$$\mathrm{DIFF}_k = \left| C_{MP}^{k} - C_{CP} \right| \quad (3.1)$$

where:

• DIFF_k is the absolute difference of the contrast values between the kth middle point and the centroid;
• C_MP^k is the contrast value of the kth middle point;
• C_CP is the contrast value of the centroid point.
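As a rough illustration of this step, the following MATLAB® fragment computes Eq. (3.1) for a single object. It is a minimal sketch, not the authors' implementation: the input file name, the binarization choice and the number of middle points k are assumptions.

% Illustrative sketch of Eq. (3.1): contrast differences in the blue band
% between the k middle points and the centroid. Input file, thresholding
% and k are assumptions, not values reported in the paper.
img   = imread('handle.png');            % hypothetical single-object image
blue  = img(:,:,3);                      % blue band of the RGB image
bw    = imbinarize(rgb2gray(img));       % binary image of the object
stats = regionprops(bw, 'Centroid', 'Orientation', 'MajorAxisLength');

c   = stats(1).Centroid;                 % centroid point
ang = -deg2rad(stats(1).Orientation);    % image y axis points downwards
L   = stats(1).MajorAxisLength;

k = 4;                                   % assumed number of middle points
t = linspace(-L/2, L/2, k);              % positions along the major axis
DIFF = zeros(k, 1);
Ccp  = double(blue(round(c(2)), round(c(1))));      % C_CP
for i = 1:k
    mp  = [c(1) + t(i)*cos(ang), c(2) + t(i)*sin(ang)];
    Cmp = double(blue(round(mp(2)), round(mp(1)))); % C_MP^k
    DIFF(i) = abs(Cmp - Ccp);            % Eq. (3.1)
end
% The pose is then decided from DIFF and the distances of the k points.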

Fig. 2. Method overview.



The generation of the POIs' 3D coordinates in the WRF comprises the final target of the online system. The reference system transformations into the WRF follow. Two transformations are used among the three different reference frames, namely the camera, the pixel and the world reference frame.

The first transformation is estimated with the following equation:

$$\begin{bmatrix} x_{im} \\ y_{im} \\ 1 \end{bmatrix} = \begin{bmatrix} 1/s_x & 0 & cc_x \\ 0 & 1/s_y & cc_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \quad (3.2)$$

where:

• x_im, y_im are the coordinates on the image plane;
• (cc_x, cc_y) are the coordinates of the principal point (cc) on the image plane reference frame;
• s_x and s_y are the effective sizes of the pixels in the horizontal and vertical directions, estimated in millimetres.

The point in the pixel frame is the vector [x; y; 1], which denotes the transformed coordinates of the POI with respect to the image plane reference system. The POI coordinates are estimated by Eq. (3.3):

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} 1/s_x & 0 & cc_x \\ 0 & 1/s_y & cc_y \\ 0 & 0 & 1 \end{bmatrix}^{-1} \begin{bmatrix} x_{im} \\ y_{im} \\ 1 \end{bmatrix} \quad (3.3)$$

The principal point's projection on the surface that the object is lying on, considering the camera's pinhole model, is the WRF. Any defined user frame can be used as a WRF, with the application of the appropriate transformation. The transformation that provides the WRF coordinates, with respect to the image plane reference system, is presented in the following equation:

$$\begin{bmatrix} x_{WRF} \\ y_{WRF} \\ z_{WRF} \end{bmatrix} = \begin{bmatrix} Z_c/f & 0 & 0 \\ 0 & Z_c/f & 0 \\ 0 & 0 & Z_c \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \quad (3.4)$$

where:

• Z_c is the distance from the horizontal measurement surface to the camera's sensor surface;
• f is the camera's focal length.
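The chain of Eqs. (3.2)–(3.4) amounts to a few matrix operations, as the sketch below shows. The pixel sizes, principal point and example POI are placeholder numbers (calibration values are not reported in the paper); Zc = 415 mm and f = 7.1 mm are the pilot-camera values given later in the text.

% From a detected 2D POI in pixels to WRF coordinates (Eqs. (3.2)-(3.4)).
% sx, sy, ccx, ccy are placeholder calibration values; Zc and f are the
% pilot-camera values reported in the System implementation section.
sx  = 0.0044;  sy  = 0.0044;     % assumed pixel size in mm
ccx = 812;     ccy = 617;        % assumed principal point (pixels)
Zc  = 415;                       % camera-to-conveyor distance (mm)
f   = 7.1;                       % focal length (mm)

K    = [1/sx 0 ccx; 0 1/sy ccy; 0 0 1];   % matrix of Eq. (3.2)
p_im = [700; 500; 1];                     % example 2D POI in pixels

p     = K \ p_im;                         % Eq. (3.3): back to the image plane
T     = [Zc/f 0 0; 0 Zc/f 0; 0 0 Zc];     % matrix of Eq. (3.4)
p_WRF = T * p;                            % [x; y; Zc] in the WRF
% The z coordinate is subsequently replaced by the pose-specific value
% taken from the CAD file, as described in the method.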
The 3D POI coordinates are computed as the result of the image based recognition and the measurements on the CAD files, given the different object poses. The unique features of this research are summarized as follows:

• The combination of an offline system, based on pose identification and the estimation of the third coordinate through CAD file measurements, with an online 2D vision system for the computation of 3D coordinates;
• The proposed method for the detection of the object pose is based on the measurement of the points' contrast values, as well as on the computation of the object's intermediate points;
• The method for the 2D POI recognition is based on the absolute differences of the contrast values between the different points on the object.

System implementation

The proposed system has been implemented as a software tool in MATLAB® (Fig. 3). Algorithms for image processing, object pose detection, and the calculation and transformation of the 2D POI coordinates have been developed under this tool. The results of the offline system have been stored in a text file and are used as an input to the online system.
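A plausible shape for that text file and its use online is sketched below. The file name, pose labels and format are assumptions; the z values are the pose-specific values that appear for the three poses in Table 1.

% Offline output: one line per pose with the z coordinate of its POI, as
% measured in the CAD software. File name and format are assumptions.
%
%   offline_poses.txt:
%     back_up         1.240
%     lying_on_side  11.040
%     back_down      22.450
fid  = fopen('offline_poses.txt', 'r');
data = textscan(fid, '%s %f');
fclose(fid);
poseNames = data{1};   % pose labels
poseZ     = data{2};   % POI z coordinate per pose (mm)

% Online lookup once the 2D system has decided the pose:
z = poseZ(strcmp(poseNames, 'lying_on_side'));   % -> 11.040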
A 2D vision sensor has been used for the acquisition of images: a high resolution 2 MP Basler A641FC camera, equipped with a 1/1.8" CCD colour image sensor, at a maximum resolution of 1624 × 1234 pixels. The camera is mounted at a fixed distance from the conveyor, namely Zc = 415 mm. The camera's focal length is 7.1 mm and it is equipped with a Computar M0814-MP2 lens, which supports manual iris and focus control.

Pilot case study

Description and layout setup

The proposed method has been applied to a consumer goods industry case study, in order to replace the manual or semi-automated ways for the feeding and packaging of handles (Fig. 4(a)). The setup includes two industrial robots placed symmetrically in front of a moving conveyor that is fed with shaver handles. An industrial camera is placed at the forepart of the conveyor belt, for the recognition of the randomly placed shaver handles (Fig. 4(b)).

The vision system may provide a reliable solution for the cost effective recognition of multiple objects and may thus enable high speed handling by articulated robots. Cost efficiency will be achieved by:

• The use of 2D vision combined with CAD data files, instead of a 3D vision system;
• The replacement of the heavy feeder bowls with the proposed system;
• The re-configuration of the cell for similar products, allowed by the robotized solution.

The high production rate of 100 objects/min imposes the need for simple and fast object detection and pose estimation; therefore, the use of a slow 3D scanning system is not viable.

Fig. 3. System implementation overview.



Fig. 4. (a) Consumer goods industry scenario and (b) system setup.

Fig. 5. (a) Shaver handle's geometry and (b) different poses of the object.

Instead, the use of the objects' CAD files enables the supplementation of the 2D system's measurement with information regarding the third coordinate of the identified POIs, thus minimizing the computational time.

Application of the method in the shaver handles recognition

In the case of consumer goods, the object is the shaver's handle. The POI, as well as the centroid and middle points, are illustrated in Fig. 5(a). Based on the geometry of the shaver's handle, which is randomly dropped on the flat belt, as shown in Fig. 5(b), three different poses are observed. The shaver's handle is found in three poses, from left to right: back up (pose 1), lying on its side (pose 2) and back down (pose 3). Depending on these poses, three different z axis coordinates are taken from the CAD files.

The proposed online system workflow, based on the information identified by the three different poses, is applied to the consumer goods industry, as illustrated in Fig. 6.

The transformation between the camera reference frame and the WRF in the case of the shaver handle is illustrated in Fig. 7.

Results of 2D vision system recognition

The proposed method has been successfully tested on numerous images with different object orientations and poses. The obtained results include the assignment number of the recognized object, the orientation with respect to the x axis of the image plane, the pose of the object and the 3D coordinates of the POI, obtained with the help of the vision system and the CAD files. Some examples of recognition results are illustrated in Fig. 8. Four different images and their recognition results, along with the RGB images and the corresponding binary ones, are presented in this figure.

The results of the four images with IDs 1, 2, 3, 6, as well as three additional ones, are reported in Table 1. For each image, there is a presentation of the number of objects, the pose of the shaver handles, the 3D POI coordinates, as well as their orientation.

The results for an RGB image with more than three shaver handles are visualized in Fig. 9. Nine shaver handles were found in this image and their features were successfully recognized.

An example of the proposed online system for shaver handles is illustrated in more detail in Fig. 10. An ellipse was fitted to every handle found in the image. The major axis of each ellipse gives each object's orientation with respect to the x axis of the image plane, and along it the centroid and middle points were estimated. Additionally, the POI is automatically computed and illustrated in yellow. Lines parallel to the minor axis of the ellipse were used for the estimation of the sum of all the pixels' contrast values, in order for the object's pose to be decided upon, according to the colour of each one of the poses.
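The contrast-sum step just described could be sketched as follows in MATLAB®. This is only an illustration under stated assumptions (hypothetical file name, binarization choice and number of scan lines), not the authors' code:

% Minimal sketch of the pose-decision step described above: the blue-band
% pixel values are summed along lines parallel to the fitted ellipse's
% minor axis. File name, thresholding and sampling choices are assumptions.
img   = imread('handles_scene.png');        % hypothetical input image
blue  = img(:,:,3);                         % blue band of the RGB image
bw    = imbinarize(rgb2gray(img));          % binary image of the objects
stats = regionprops(bw, 'Centroid', 'Orientation', ...
                    'MajorAxisLength', 'MinorAxisLength');

s   = stats(1);                             % first recognized object
c   = s.Centroid;
ang = -deg2rad(s.Orientation);              % image y axis points downwards
u   = [cos(ang)  sin(ang)];                 % major-axis direction
v   = [-sin(ang) cos(ang)];                 % minor-axis direction

nLines = 20;                                % assumed number of scan lines
pos  = linspace(-s.MajorAxisLength/2, s.MajorAxisLength/2, nLines);
sums = zeros(nLines, 1);
for i = 1:nLines
    base = c + pos(i)*u;                    % point on the major axis
    p1 = base - (s.MinorAxisLength/2)*v;    % scan-line endpoints
    p2 = base + (s.MinorAxisLength/2)*v;
    prof = improfile(blue, [p1(1) p2(1)], [p1(2) p2(2)]);
    sums(i) = sum(prof, 'omitnan');         % contrast sum along this line
end
% 'sums' is then compared with the expected profile of each pose.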

Fig. 6. Flowchart of online system for shaver handles recognition.

Fig. 7. Transformation from camera to WRF for a shaver handle.

Comparison with other methods

Different methods were also evaluated for the image based recognition system. Point descriptors are one of these methods, evaluated here through Speeded-Up Robust Features (SURF) extraction. An example of the SURF features recognized in the shaver handle images is visualized in Fig. 11, where the function detectSURFFeatures of the MATLAB® toolbox is used. These images are the same as those presented in Fig. 8. This function implements the Speeded-Up Robust Features (SURF) algorithm, described in [47], and finds blob features. The detection of features is useful when they represent specific data structures in the image, such as specific points, edges, corners, etc. The SURF detected features were not enough for the recognition of all the different poses of the shaver handles; as regards the same objects and the same pose, there were differences in the number of detected features. The fact that the object was symmetric, and that some pairs of the point descriptors confused the algorithm for the object recognition, did not help the evaluation of this method.
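For reference, the comparison described here can be reproduced in outline with the MATLAB® Computer Vision System Toolbox; the sketch below is illustrative, with a hypothetical file name:

% Illustrative SURF extraction on a shaver-handle image, using the
% detectSURFFeatures function mentioned above.
img  = imread('handles_scene.png');            % hypothetical input image
gray = rgb2gray(img);
pts  = detectSURFFeatures(gray);               % blob features, per [47]
[features, validPts] = extractFeatures(gray, pts);
imshow(img); hold on;
plot(validPts.selectStrongest(50));            % as visualized in Fig. 11
hold off;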

As a result, this method could not be taken into consideration for the specific objects on its own, since the features that were recognized did not help in the direction of the POI computation. Combining multiple feature extraction algorithms might have yielded better results, as multi-level feature processing could possibly provide a solution to the POI computation. However, this consideration is oriented towards the specific methodology's application and would not help if it were applied to similar objects without any change in the feature processing strategy.

Table 1
Method results for the shaver handles.

Image ID  Object ID  Pose               3D POI coordinates [x, y, z] (mm)  Orientation (°)
1         1          Lying on its side  [154.919, 45.048, 11.040]          64.939
1         2          Back down          [80.057, 2.386, 22.450]            85.993
1         3          Back up            [11.925, 99.667, 1.240]            58.514
2         1          Back up            [93.292, 96.309, 1.240]            101.725
2         2          Back up            [61.417, 27.749, 1.240]            98.733
2         3          Back up            [26.865, 124.338, 1.240]           69.748
3         1          Back up            [76.984, 1.854, 1.240]             38.438
3         2          Lying on its side  [66.157, 88.544, 11.040]           77.0185
3         3          Back up            [39.118, 101.755, 1.240]           69.556
4         1          Back down          [108.925, 18.749, 22.450]          80.666
4         2          Back down          [38.017, 0.972, 22.450]            60.1968
4         3          Back down          [6.336, 126.759, 22.450]           87.8413
5         1          Lying on its side  [159.301, 111.633, 11.040]         70.867
5         2          Lying on its side  [86.809, 0.785, 11.040]            113.306
5         3          Lying on its side  [3.120, 123.931, 11.040]           88.253
6         1          Back down          [122.659, 114.977, 22.450]         78.387
6         2          Lying on its side  [43.203, 13.945, 11.040]           74.715
6         3          Back up            [43.298, 119.217, 1.240]           42.333
7         1          Lying on its side  [24.633, 95.057, 11.040]           115.024
7         2          Back down          [98.210, 5.347, 22.450]            100.577
7         3          Back down          [15.460, 116.101, 22.450]          128.185

Fig. 10. Image based object recognition – an example.

Fig. 8. RGB images and recognition results in binary images.

Fig. 9. (a) RGB image with multiple objects and (b) recognition results.

Fig. 11. SURF features extraction.

Accuracy and errors

The POIs' x and y coordinates were compared with the real point coordinates, measured with the help of millimetre paper placed on the working area. Forty-eight different x and y coordinates from the images were compared with their real positions according to this WRF. The millimetre paper was fitted into the working area, and the real positions in the field of view were measured with a pencil and ruler, in order for the POIs of each shaver handle to be measured.

The average relative errors in the two axes are 2.00355% in x and 3.49674% in y respectively. It was observed that when the POI was found near the cc projection area, the relative error was minor, in contrast to when the POIs were found far away from this point. From an application's point of view, these errors will not have a significant effect on the gripping devices' ability to accurately approach, pick up and finally handle the shavers.
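As a rough sketch of this computation (not the authors' code), the per-axis average relative error could be obtained as follows in MATLAB®; the coordinate arrays are hypothetical stand-ins for the forty-eight measured pairs:

% Hypothetical detected vs. hand-measured POI coordinates [x, y] in mm,
% standing in for the 48 image measurements described above.
poi_detected = [151.8 43.5; 81.7 2.5; 12.1 103.2];
poi_real     = [154.9 45.0; 80.1 2.4; 11.9  99.7];

relErr = abs(poi_detected - poi_real) ./ abs(poi_real) * 100;  % percent
avgErr = mean(relErr, 1);  % per-axis averages, cf. 2.00355% (x), 3.49674% (y)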
Discussion

The proposed study combines the advantages of an image based system and the use of CAD data for the recognition of 3D objects and the estimation of coordinates. Currently, a minor issue of the proposed system is the use of contrast values for the measurements, which leads to a significant increase in the computation time. Despite that fact, this method helps with the design of similar vision systems for different objects, by adjusting the computational logic of the contrast values. In this way, the proposed method can be generalized to a variety of small 3D objects, in different shapes, for similar applications. Additionally, it is quite flexible and allows the recognition of randomly placed and not well-oriented objects in the working area, similar to other 3D vision system applications, such as bin picking.

Some examples of commercial tools for such applications are, among others, the ISRA vision system [48], the Visio Nerf [49], the FANUC iR PickTool [50] and the 3D Area Sensor [51], as well as the Pick-it™ Robotic Picking [52]. Despite these tools being commercial and performing well in such applications, they entail challenges, namely the high cost, the vision sensor dimensions and weight, as well as the use of laser scanners and lighting mechanisms, all of which make their application unaffordable. Additionally, further challenges are directly related to the integration aspects with the robot control systems. Given these challenges, this study has come up with a solution that is based on an image based method for 3D object recognition using CAD file data.
Conclusions and future work

The development of functions for the POI recognition, in various lighting conditions, was based on the combination of the objects' geometrical features and the contrast values. The recognition result was completed with the use of the objects' CAD data. The benefits of the proposed system, among others, include:

• The estimation of the relative error, being 2.00355% in x and 3.49674% in y respectively; comparatively, this is quite good for similar applications from an accuracy point of view.
• The combination of an online 2D vision system with offline data from CAD files for the computation of the 3D POI coordinates, which allows for a significant decrease in the cost of the final system, as well as a decrease in the setup time.

Future study will focus on this system's implementation and improvement in the cell setup, along with the robots and the mechatronics systems involved. The online system will be appropriately modified in order for the POIs to be recognized, and thus enable the completion of the handling process by the gripping devices on a moving conveyor. The challenges met in this kind of vision system concern not only the recognition and computation of the 3D coordinates, but also the consideration of real life problems. Such problems include the existence of dust, the need for extra lighting, distortions, etc., which are common in industrial facilities, as well as the effect of shiny surfaces that cause light reflections and even occlusions, such as those on the moving conveyor.

Experiments will be conducted to test the accuracy of the 3D coordinates calculation and the way that the data of the robot's TCP (Tool Center Point) will be managed for its synchronization with the moving conveyor. Last but not least, the design and development of a gripper for the handling of such objects has been considered for future work. The experiments performed under the AUTORECON FP7 EU project aim at providing a robust and efficient vision system, which could guide both robots to handle the objects at a rate of 100 objects/min.

Acknowledgements

This paper is an extension of the conference paper presented at the 2nd CIRP Global Web Conference, CIRPe2013, held on 11–12 June 2013 [53]. The research has been partially supported by the research project AUTORECON ("Autonomous co-operative machines for highly reconfigurable assembly operations of the future"), funded by the European Commission.

References

[1] Kodagali, J., Balaji, S., 2012, Computer Vision and Image Analysis Based Techniques for Automatic Characterization of Fruits – A Review, International Journal of Computer Applications, 50/6: 6–12.
[2] Chryssolouris, G., 2006, Manufacturing Systems: Theory and Practice, Springer.
[3] Santochi, M., Dini, G., 1998, Sensor Technology in Assembly Systems, CIRP Annals – Manufacturing Technology, 503–524. http://dx.doi.org/10.1016/S0007-8506(07)63239-9.
[4] Tonshoff, H.K., Janocha, H., Seidel, M., 1988, Image Processing in a Production Environment, CIRP Annals – Manufacturing Technology, 579–590. http://dx.doi.org/10.1016/S0007-8506(07)60755-0.

[5] Michalos, G., Makris, S., Eytan, A., Matthaiakis, S., Chryssolouris, G., 2012, Robot Path Correction Using Stereo Vision System, Procedia CIRP, 352–357. http://dx.doi.org/10.1016/j.procir.2012.07.061.
[6] Vahrenkamp, N., Boge, C., Welke, K., Asfour, T., Walter, J., et al., 2009, Visual Servoing for Dual Arm Motions on a Humanoid Robot, 9th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS09, pp. 208–214. http://dx.doi.org/10.1109/ICHR.2009.5379577.
[7] Han, S.H., See, W.H., Lee, J., Lee, M.H., Hashimoto, H., 2000, Image-Based Visual Servoing Control of a SCARA Type Dual-Arm Robot, ISIE2000. Proceedings of the 2000 IEEE International Symposium on Industrial Electronics (Cat. No. 00TH8543), vol. 2. http://dx.doi.org/10.1109/ISIE.2000.930351.
[8] Han, S.H., Choi, J.W., Son, K., Lee, M.C., Lee, J.M., et al., 2001, A Study on Feature-Based Visual Servoing Control of Eight Axes-Dual Arm Robot by Utilizing Redundant Feature, in: Proceedings 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat. No. 01CH37164), vol. 3. http://dx.doi.org/10.1109/ROBOT.2001.933018.
[9] Hutchinson, S., Hager, G.D., Corke, P.I., 1996, A Tutorial on Visual Servo Control, IEEE Transactions on Robotics and Automation, 12/5: 651–670. http://dx.doi.org/10.1109/70.538972.
[10] Chaumette, F., Hutchinson, S., 2006, Visual Servo Control. I. Basic Approaches, IEEE Robotics and Automation Magazine, 13/4: 82–90. http://dx.doi.org/10.1109/MRA.2006.250573.
[11] Chaumette, F., Hutchinson, S., 2007, Visual Servo Control. II. Advanced Approaches [Tutorial], IEEE Robotics and Automation Magazine, 14/1: 109–118. http://dx.doi.org/10.1109/MRA.2007.339609.
[12] Makris, S., Tsarouchi, P., Surdilovic, D., Kruger, J., 2014, Intuitive Dual Arm Robot Programming for Assembly Operations, CIRP Annals – Manufacturing Technology, 63/1: 13–16. http://dx.doi.org/10.1016/j.cirp.2014.03.017.
[13] Zhao, Y., Cheah, C.C., 2009, Neural Network Control of Multifingered Robot Hands Using Visual Feedback, IEEE Transactions on Neural Networks, 20/5: 758–767. http://dx.doi.org/10.1109/TNN.2008.2012127.
[14] Lippiello, V., Siciliano, B., Villani, L., 2005, An Experimental Setup for Visual Servoing Applications on an Industrial Robotic Cell, in: Proceedings, 2005 IEEE/ASME International Conference on Advanced Intelligent Mechatronics. http://dx.doi.org/10.1109/AIM.2005.1511212.
[15] Kragic, D., Christensen, H.I., 2002, Survey on Visual Servoing for Manipulation, Report, ISRN KTH/NA/P-02/01-SE.
[16] Malis, E., 2002, Survey of Vision-Based Robot Control, European Naval Ship Design, Captain Computer IV Forum, ENSIETA, Brest, France.
[17] Kelly, R., 1996, Robust Asymptotically Stable Visual Servoing of Planar Robots, IEEE Transactions on Robotics and Automation, 12/5: 759–766. http://dx.doi.org/10.1109/70.538980.
[18] Golnabi, H., Asadpour, A., 2007, Design and Application of Industrial Machine Vision Systems, Robotics and Computer-Integrated Manufacturing, 23/6: 630–637. http://dx.doi.org/10.1016/j.rcim.2007.02.005.
[19] Beserra Gomes, R., Ferreira da Silva, B.M., Rocha, L.K.D.M., Aroca, R.V., Velho, L.C.P.R., et al., 2013, Efficient 3D Object Recognition Using Foveated Point Clouds, Computers & Graphics, 37/5: 496–508. http://dx.doi.org/10.1016/j.cag.2013.03.005.
[20] Xie, S.Q., Haemmerle, E., Cheng, Y., Gamage, P., 2008, Vision-Guided Robot Control for 3D Object Recognition and Manipulation, in: Ceccarelli, M. (Ed.), Robot Manipulators. InTech.
[21] Smith, C., Karayiannidis, Y., Nalpantidis, L., Gratal, X., Qi, P., et al., 2012, Dual Arm Manipulation – A Survey, Robotics and Autonomous Systems, 1340–1353. http://dx.doi.org/10.1016/j.robot.2012.07.005.
[22] Han, S.H., See, W.H., Lee, J., Lee, M.H., Hashimoto, H., 2000, Image-Based Visual Servoing Control of a SCARA Type Dual-Arm Robot, ISIE2000. Proceedings of the 2000 IEEE International Symposium on Industrial Electronics (Cat. No. 00TH8543), vol. 2. http://dx.doi.org/10.1109/ISIE.2000.930351.
[23] Kelly, R., Carelli, R., Nasisi, O., Kuchen, B., Reyes, F., 2000, Stable Visual Servoing of Camera-in-Hand Robotic Systems, IEEE/ASME Transactions on Mechatronics, 5/1: 39–48. http://dx.doi.org/10.1109/3516.828588.
[24] Zereik, E., Sorbara, A., Casalino, G., Didot, F., 2009, Autonomous Dual-Arm Mobile Manipulator Crew Assistant for Surface Operations: Force/Vision-Guided Grasping, RAST 2009 – Proceedings of 4th International Conference on Recent Advances in Space Technologies, pp. 710–715. http://dx.doi.org/10.1109/RAST.2009.5158284.
[25] Lippiello, V., Villani, L., Siciliano, B., 2007, An Open Architecture for Sensory Feedback Control of a Dual-Arm Industrial Robotic Cell, Industrial Robot: An International Journal, 46–53. http://dx.doi.org/10.1108/01439910710718441.
[26] Iwata, H., Sugano, S., 2009, Design of Human Symbiotic Robot TWENDY-ONE, in: Proceedings – IEEE International Conference on Robotics and Automation, pp. 580–586. http://dx.doi.org/10.1109/ROBOT.2009.5152702.
[27] Zhou, J., Yu, Y., 2011, Coordination Control of Dual-Arm Modular Robot Based on Position Feedback Using Optotrak3020, Industrial Robot: An International Journal, 172–185. http://dx.doi.org/10.1108/01439911111106381.
[28] Wang, F.-Y., Lever, P.J.A., Pu, B., 1993, A Robotic Vision System for Object Identification and Manipulation Using Synergetic Pattern Recognition, Robotics and Computer-Integrated Manufacturing, 445–459. http://dx.doi.org/10.1016/0736-5845(93)90008-8.
[29] Richards, C.A., Papanikolopoulos, N.P., 1997, Detection and Tracking for Robotic Visual Servoing Systems, Robotics and Computer-Integrated Manufacturing, 101–120. http://dx.doi.org/10.1016/S0736-5845(96)00034-8.
[30] Kruger, J., Nickolay, B., Heyer, P., Seliger, G., 2005, Image Based 3D Surveillance for Flexible Man-Robot-Cooperation, CIRP Annals – Manufacturing Technology, 19–22. http://dx.doi.org/10.1016/S0007-8506(07)60040-7.
[31] Reinhart, G., Tekouo, W., 2009, Automatic Programming of Robot-Mounted 3D Optical Scanning Devices to Easily Measure Parts in High-Variant Assembly, CIRP Annals – Manufacturing Technology, 58/1: 25–28. http://dx.doi.org/10.1016/j.cirp.2009.03.125.
[32] Lanzetta, M., Santochi, M., Tantussi, G., 1999, Computer-Aided Visual Inspection in Assembly, CIRP Annals – Manufacturing Technology, 13–16. http://dx.doi.org/10.1016/S0007-8506(07)63121-7.
[33] Kruger, J., Lien, T.K., Verl, A., 2009, Cooperation of Human and Machines in Assembly Lines, CIRP Annals – Manufacturing Technology, 58/2: 628–646. http://dx.doi.org/10.1016/j.cirp.2009.09.009.
[34] Reinhart, G., Werner, J., 2007, Flexible Automation for the Assembly in Motion, CIRP Annals – Manufacturing Technology, 56/1: 25–28. http://dx.doi.org/10.1016/j.cirp.2007.05.008.
[35] Reinhart, G., Clarke, S., Petzold, B., Schilp, J., Milberg, J., 2004, Telepresence as a Solution to Manual Micro-Assembly, CIRP Annals – Manufacturing Technology, 21–24. http://dx.doi.org/10.1016/S0007-8506(07)60636-2.
[36] Pham, T.V., Smeulders, A.W.M., 2005, Object Recognition with Uncertain Geometry and Uncertain Part Detection, Computer Vision and Image Understanding, 99/2: 241–258. http://dx.doi.org/10.1016/j.cviu.2005.01.006.
[37] Bai, X., Yang, X., Latecki, L.J., 2008, Detection and Recognition of Contour Parts Based on Shape Similarity, Pattern Recognition, 41/7: 2189–2199. http://dx.doi.org/10.1016/j.patcog.2007.12.016.
[38] Alaoui Mhamdi, M.A., Ziou, D., 2014, A Local Approach for 3D Object Recognition Through a Set of Size Functions, Image and Vision Computing, 32/12: 1030–1044. http://dx.doi.org/10.1016/j.imavis.2014.08.015.
[39] Zhu, W., Mei, B., Yan, G., Ke, Y., 2014, Measurement Error Analysis and Accuracy Enhancement of 2D Vision System for Robotic Drilling, Robotics and Computer-Integrated Manufacturing, 30/2: 160–171. http://dx.doi.org/10.1016/j.rcim.2013.09.014.
[40] Bicego, M., Castellani, U., Murino, V., 2005, A Hidden Markov Model Approach for Appearance-Based 3D Object Recognition, Pattern Recognition Letters, 26/16: 2588–2599. http://dx.doi.org/10.1016/j.patrec.2005.06.005.
[41] Schindler, K., Suter, D., 2008, Object Detection by Global Contour Shape, Pattern Recognition, 41/12: 3736–3748. http://dx.doi.org/10.1016/j.patcog.2008.05.025.
[42] Melkemi, M., Djebali, M., 2001, Weighted A-Shape: A Descriptor of the Shape of a Point Set, Pattern Recognition, 34: 1159–1170.
[43] Scovanner, P., Ali, S., Shah, M., 2007, A 3-Dimensional SIFT Descriptor and its Application to Action Recognition, in: Proceedings of the 15th International Conference on Multimedia – MULTIMEDIA'07, 357. http://dx.doi.org/10.1145/1291233.1291311.
[44] Frome, A., Huber, D., Kolluri, R., Thomas, B., 2004, Recognizing Objects in Range Data Using Regional Point Descriptors, European Conference on Computer Vision, 1: 1–14.
[45] Laptev, I., 2009, Improving Object Detection with Boosted Histograms, Image and Vision Computing, 27/5: 535–544. http://dx.doi.org/10.1016/j.imavis.2008.08.010.
[46] Chan, C.H., Yan, F., Kittler, J., Mikolajczyk, K., 2015, Full Ranking as Local Descriptor for Visual Recognition: A Comparison of Distance Metrics on Sn, Pattern Recognition, 48/4: 1328–1336. http://dx.doi.org/10.1016/j.patcog.2014.10.010.
[47] Bay, H., Ess, A., Tuytelaars, T., Van Gool, L., 2008, Speeded-Up Robust Features (SURF), Computer Vision and Image Understanding, 110/3: 346–359. http://dx.doi.org/10.1016/j.cviu.2007.09.014.
[48] ISRA Vision, (2015), http://www.isravision.com/.
[49] VISION NERF, (2015), http://www.visionerf.com/fr/.
[50] Vision Functions for Robots, (2015), http://www.fanuc.eu/es/en/robots/accessories/vision.
[51] Robotic Bin Picking Made Possible with the New FANUC iRVision 3D Area Sensor, (2015), http://robot.fanucamerica.com/robot-applications/Bin-Picking-Robot-with-3D-Area-Sensor-Vision-System.aspx.
[52] Pick-it™ Robotic Picking, (2015), http://www.intermodalics.eu/pick-it-3d.
[53] Tsarouchi, P., Michalos, G., Makris, S., Chryssolouris, G., 2013, Vision System for Robotic Handling of Randomly Placed Objects, Procedia CIRP, 61–66. http://dx.doi.org/10.1016/j.procir.2013.06.169.
