A R T I C L E  I N F O

Article history:
Submitted by Lin Li (1), UK

Keywords:
Metrology
Cognitive robotics
3D image processing

A B S T R A C T

Using a metrology system simulation approach, an algorithm is presented to determine the best position for a robot mounted 3D vision system. Point cloud data is simulated, taking into account sensor performance, to create a ranked list of the best camera positions. These can be used by a robot to autonomously determine the most advantageous camera position for locating a target object. The algorithm is applied to an Ensenso active stereo 3D camera. Results show that, when used in combination with a RANSAC object recognition algorithm, it increased positional precision by two orders of magnitude, from worst to best case.

© 2017 Published by Elsevier Ltd on behalf of CIRP.
http://dx.doi.org/10.1016/j.cirp.2017.04.069
Please cite this article in press as: Kinnell P, et al. Autonomous metrology for robot mounted 3D vision systems. CIRP Annals -
Manufacturing Technology (2017), http://dx.doi.org/10.1016/j.cirp.2017.04.069
G Model
CIRP-1627; No. of Pages 4
is then calculated and stored against the camera position for that view. This score is therefore a function of the number of key points theoretically visible; but because l itself is the probability of a key point being captured by the vision system, a high score also indicates a set of key points that can be easily measured with the vision system. The combination of these two factors therefore provides a scoring system that considers both the quantity and the quality of the visible information-rich key points.
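As a minimal sketch of this scoring idea, the score for a candidate viewpoint can be taken as the sum of the per-key-point capture probabilities l, so that many reliably measurable points outrank few, or hard-to-measure, ones. The function names and the example probability values below are illustrative assumptions, not part of the authors' implementation.

```python
# Sketch of viewpoint scoring: sum the capture probability l of each
# theoretically visible key point. A high score needs both many visible
# key points (quantity) and high per-point capture probability (quality).

def score_viewpoint(capture_probs):
    """Score = sum of per-key-point capture probabilities l."""
    return sum(capture_probs)

def rank_viewpoints(candidates):
    """Return candidate camera poses sorted best-first by score."""
    return sorted(candidates,
                  key=lambda c: score_viewpoint(c["probs"]),
                  reverse=True)

# Hypothetical candidates: pose A sees few but reliable key points,
# pose B sees many but less reliable ones.
candidates = [
    {"pose": "A", "probs": [0.9, 0.8, 0.95]},  # score 2.65
    {"pose": "B", "probs": [0.3] * 10},        # score 3.0
]
ranked = rank_viewpoints(candidates)
print(ranked[0]["pose"])  # pose B ranks first (3.0 vs 2.65)
```

In a real system the list of capture probabilities would come from the simulated point cloud for each candidate pose; the ranked list can then be handed to the robot controller.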
algorithm as a function of the quality of the measured data were the main concern. The random sampling component of the RANSAC algorithm means that it provides a different result each time it is run. Therefore, to assess the quality of the data collected for each viewpoint, the repeatability of the algorithm on a single set of data from each viewpoint was tested, based on the assumption that viewing positions with a large quantity of information-rich points lead to a more repeatable result from the RANSAC algorithm. For each data set collected from the eight viewing positions, the RANSAC algorithm was run 50 times. In this way the variation of the target object's position estimate, due to the quality of the measured data, was considered. Variation was assessed in terms of both spatial translations and rotations. Spatial translations were considered along three orthogonal axes (x, y and z), and rotations were considered about these same axes. The standard deviations of the rotations and translations are shown in the charts in Figs. 5 and 6 respectively; these charts illustrate the large positional variations that can be expected for data collected from the least optimal viewing positions, ranked 197–200. For the worst ranked positions, translation standard deviations of up to 14 mm and rotational standard deviations of up to 12 degrees were found. In contrast, for the best ranked viewpoints all translation standard deviations were less than 0.15 mm and all rotational standard deviations were less than 0.14 degrees; therefore, by selecting the best ranked viewing positions, it is possible to increase the precision of object localisation by two orders of magnitude.

Fig. 5. Standard deviation of the transform axis rotations.

Fig. 6. Standard deviation of the transform axis translations.

6. Conclusion

Acknowledgements
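The repeatability assessment described above (run a stochastic pose estimator many times on one data set, then summarise the per-axis scatter of the resulting transforms) can be sketched as follows. The `estimate_pose` stub, its noise level, and all values are hypothetical stand-ins, not the authors' RANSAC implementation or measured data.

```python
# Sketch of the repeatability metric: standard deviation of each
# translation (x, y, z) and rotation (rx, ry, rz) axis over repeated
# runs of a stochastic pose estimator on the same point cloud.
import random
import statistics

def estimate_pose(point_cloud, rng):
    """Stand-in for one RANSAC pose estimate (x, y, z, rx, ry, rz).
    A real implementation would fit the target model to the cloud."""
    return tuple(rng.gauss(0.0, 0.1) for _ in range(6))

def repeatability(point_cloud, runs=50, seed=0):
    """Per-axis standard deviation over `runs` repeated estimates."""
    rng = random.Random(seed)
    estimates = [estimate_pose(point_cloud, rng) for _ in range(runs)]
    # zip(*estimates) groups the 6 pose components across all runs.
    return [statistics.stdev(axis) for axis in zip(*estimates)]

sigmas = repeatability(point_cloud=None, runs=50)
# sigmas[0:3] are translation std devs; sigmas[3:6] are rotation std devs.
```

Comparing these six standard deviations across candidate viewpoints is what distinguishes the best ranked positions (sub-0.15 mm, sub-0.14 degree scatter) from the worst (up to 14 mm and 12 degrees).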