
2017 2nd International Conferences on Information Technology, Information Systems and Electrical Engineering (ICITISEE)

The Design of Face Recognition and Tracking for Human-Robot Interaction

W.S. Mada Sanjaya1,2, Dyah Anggraeni1,2, Kiki Zakaria1,2, Atip Juwardi1,2, and Madinatul Munawwaroh1,2
1 Dept. of Physics, Faculty of Science and Technology, UIN Sunan Gunung Djati, Bandung, Indonesia
2 Bolabot Techno Robotic Institute, CV Sanjaya Star Group, Bandung, Indonesia
madasws@gmail.com

Abstract—This paper discusses the development of a social robot named SyPEHUL (System of Physic, Electronic, Humanoid Robot and Machine Learning) which can recognize and track a human face. The face recognition and tracking process uses the Cascade Classification and LBPH (Local Binary Pattern Histogram) Face Recognizer methods based on the OpenCV library and Python 2.7. The social robot hardware, based on an Arduino microcontroller, contains 12 DoF (Degrees of Freedom) of motor servos that actuate the robotic head and its face. The face recognition system has been implemented on the social robot, which can recognize and track a human face and then speak the person's name. The results of the social robot's face recognition system show good accuracy for Human-Robot Interaction.

Index Terms—Face Recognition, Social Robot, Arduino, Python 2.7, Human-Robot Interaction, SyPEHUL.

I. INTRODUCTION

Face recognition is an image processing method that locates the human face, using a camera to capture an image of it. The image processing searches for the important features of a human face in the image, so that other objects are ignored [1]. Various algorithms and methods are used to locate a human face, for example: AdaBoost [2], the Viola-Jones method [3] [4] [5], the Roberts Cross method [6], and others [7] [6]. To classify and recognize a human face, one can use Local Binary Patterns (LBP) [8], Hidden Markov Models (HMM) [9], Bayesian methods [10], Support Vector Machines (SVM) [11], and others [12] [13].

In this decade, face recognition has been applied in many projects: social robots [14] [15], attendance automation [16], home security [17] [18] [19], games [5], and others. For social robot projects in particular, face recognition can increase the robot's ability for Human-Robot Interaction, because a social robot is a future technology that can communicate with, entertain, or help humans with their work. Social robots built by researchers include, for example, KISMET [14], Eddie [20], Flobi [21], Muecas [22], Probo [23], and others [15].

This paper describes the development of a social robot named SyPEHUL (System of Physic, Electronic, Humanoid Robot and Machine Learning) which can recognize and track a human face. Real-time face detection uses the Cascade Classification method (Viola-Jones method), and face recognition uses the Local Binary Pattern Histogram (LBPH) Face Recognizer method, both based on the OpenCV library and Python 2.7. Finally, the face recognition system is implemented on the 12 Degree of Freedom (DoF) social robot SyPEHUL, based on an Arduino microcontroller, to recognize and track a human face for Human-Robot Interaction.

The paper is organized as follows. Section 2 describes the theoretical background of face detection and face recognition in detail. Section 3 describes the experimental method of this research. Section 4 describes the results and the implementation of face recognition and tracking on the social robot in detail. Finally, Section 5 gives the concluding remarks.

II. THEORETICAL BACKGROUND

A. Face Detection using the Viola-Jones Method

Paul Viola and Michael Jones published the face detection method usually referred to as the Viola-Jones method (or simply Viola-Jones) in 2001 [2]. This method detects a human face in an image using four key concepts:

1) Haar features: Haar features are used to find out whether a human face exists in the captured image [24]. These features detect the bright and dark regions of the captured image [25]. The existence of a Haar feature is determined by subtracting the average of the dark-region pixels from the average of the light-region pixels. The calculation uses the rectangular features shown in Fig. 1, applied around the detected face as shown in Fig. 2. If the difference is above a threshold, the Haar feature is said to "exist".

Fig. 1. Haar rectangular features [2]: (a) edge, (b) line, (c) four-rectangle.

2) Integral Image: The integral image is used to speed up feature detection by accumulating the pixel values of the original image. The integrated value at a pixel already represents the sum of all pixels above and to the left of it in the

978-1-5386-0658-2/17/$31.00 ©2017 IEEE 315



image. The addition procedure starts from the top left of the image (the original pixel) and ends at the bottom right of the image (the accumulated result), as shown in Fig. 3.

Fig. 2. Haar feature result.

Fig. 3. (a) Integration process; (b) rectangle divided into multiple segments [2].

3) AdaBoost Machine-Learning Algorithm: Viola and Jones use the AdaBoost algorithm because it boosts the classification performance of simple learners; in other words, it is fast and easy to compute. The AdaBoost learning process is given a number of data samples, which can then be grouped into a classifier. A classifier is composed of small features of the face and is therefore commonly used for pattern detection [26].

4) Cascade Classifier: The filter chain shown in Fig. 4 is made of a series of AdaBoost classifiers. The sequence of filters in the cascade is ordered by the weights of the AdaBoost results. The cascade eliminates a candidate if it does not pass the first stage. The larger the filter weights in the first stages of the chain, the more quickly non-face image areas are eliminated. Thus, a Cascade Classifier combines many features efficiently.

Fig. 4. Cascade classifier illustration [2].

B. Face Recognizer using the Local Binary Pattern (LBP) Method

The Local Binary Pattern (LBP) is a texture descriptor defined by comparing the binary value of the pixel at the center of an image patch with the 8 pixel values around it [13]. The LBP method can be applied to face images to extract features which can be used to measure the similarity between images. Fig. 5 illustrates the LBP process on a 3x3-pixel patch. The center value is subtracted from each surrounding value. If the result is greater than or equal to 0, that position is assigned "1"; if it is less than 0, it is assigned "0". The 8 binary values are then read clockwise (or counterclockwise) and converted to a decimal number that replaces the center pixel value.

Fig. 5. Illustration of the Local Binary Pattern (LBP).

The decimal value can be expressed as in (1) [27]:

LBP_{P,R}(x_0, y_0) = Σ_{p=0}^{P−1} s(g_p − g_0) 2^p,  (1)

where the function s(x) is defined as:

s(x) = { 1, x ≥ 0; 0, x < 0 }.  (2)

The thresholding in (2) aims to eliminate the variability caused by illumination contrast, so that facial images under various illuminations give almost the same output. Extracting the pixel values with LBP yields a new matrix of values, which is converted to a histogram of facial feature vectors. The histograms of these image patterns, called labels, form a feature vector that represents the image texture. These histograms can be used to measure the similarity between images [27].

III. EXPERIMENTAL METHOD

A. Method and System Design

In general, the main tools and components used to build the social robot in this research are a personal computer, an Arduino microcontroller, the social robot (SyPEHUL), an 8 MP WebCam, a speaker, connections, and others. The system program is written in the Arduino IDE and Python 2.7 (with the OpenCV library). Fig. 6 shows the general process of face recognition and tracking of the social robot SyPEHUL.


Fig. 6. General research scheme.

From Fig. 7 it can be seen that the WebCam serves as the social robot's vision sensor, and face detection uses the Cascade Classifier method. The process is divided into two sections. The first process builds the training data, consisting of a feature extraction step using the LBPH feature method and classification of human faces using the LBPH Face Recognizer method. In this paper, the training data consist of captured face images of 11 people, with 11 images per person. The second process tests the trained data on the trained respondents, which again requires LBPH feature extraction: each new image captured in this stage is matched against the trained data. Once the classification identifies the person, the system speaks the person's name. Besides detecting the human face, face detection also yields the coordinates of the face. These coordinates can be converted into head-control commands so that the social robot follows the human face. The system works in real time based on Python 2.7, the OpenCV library, and the Arduino microcontroller.
Fig. 7. General system scheme of the social robot algorithm.

B. Hardware Design

The social robot hardware has motor servos to actuate the robot's neck and its face. In addition, the WebCam serves as the visual sensor, the speaker produces the sound, and the Arduino microcontroller controls the actuators. The concept and realization of the social robot SyPEHUL in this research are illustrated in Fig. 8; the hardware was developed in previous research [28].
The SyPEHUL mechatronics consists of 12 motor servos connected to the Arduino microcontroller, divided as follows: eyebrows (2 servos), eyes (2 servos), mouth (4 servos), ears (2 servos), and neck (2 servos). Each servo is supplied by a 5-volt battery rated at 100 mA of current, and each servo ground is connected to the ground of the Arduino board, as illustrated in Fig. 9.

Fig. 8. SyPEHUL design: (a) concept, (b) realization [28].
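The paper does not specify the message format used between the PC and the Arduino, so the following is only one plausible sketch of that link, assuming pySerial on the Python side and a simple comma-separated "pan,tilt" line parsed by the Arduino sketch; the port name and baud rate are likewise assumptions:

```python
def format_neck_command(pan_deg, tilt_deg):
    """Encode target neck-servo angles as a 'pan,tilt' text line
    (an assumed protocol; the paper does not define one)."""
    return "{:d},{:d}\n".format(int(pan_deg), int(tilt_deg))

def send_neck_command(port, pan_deg, tilt_deg):
    """Write one command to an already-open serial port object
    (e.g. a pySerial serial.Serial instance)."""
    port.write(format_neck_command(pan_deg, tilt_deg).encode("ascii"))

# Usage sketch (requires the pySerial package):
#   import serial
#   link = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # assumed port/baud
#   send_neck_command(link, 120, 45)  # angles within the ranges used by the robot
```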

IV. RESULTS AND DISCUSSION


A. Face Recognition System
The first step in developing a social robot that can recognize human faces is building a database of human face images. In this research, to capture a human face image we


TABLE I
THE FACE RECOGNITION ACCURACY RATE OF TRAINED RESPONDENTS IN PERCENT (%).

No Name Accuracy(%)
1 Dyah 80
2 Mada 100
3 Rizki 80
4 Fikri 100
5 Ikhsan 80
6 Jaka 100
7 Awin 100
8 Tiara 100
9 Indry 100
10 Lutfi 100
11 Ratih 80
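The overall figure quoted in the discussion (92.73%) is simply the arithmetic mean of the per-person rates in TABLE I, which can be checked directly:

```python
# Per-person accuracy rates copied from TABLE I.
accuracy = {"Dyah": 80, "Mada": 100, "Rizki": 80, "Fikri": 100,
            "Ikhsan": 80, "Jaka": 100, "Awin": 100, "Tiara": 100,
            "Indry": 100, "Lutfi": 100, "Ratih": 80}

mean_accuracy = sum(accuracy.values()) / len(accuracy)
print(round(mean_accuracy, 2))  # → 92.73
```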

Fig. 9. Social robot SyPEHUL schematic [28].

need a face detection algorithm that uses the Cascade Classification method based on the OpenCV library and Python 2.7. Fig. 10 shows examples of the images of several people used to build the database.

Fig. 10. Database example.

The image database was made from 11 people (6 male and 5 female), with 11 images per person; thus, the total database contains 121 images. From the obtained database, a classification process is needed to distinguish between the individuals. In this research, the classification uses Local Binary Pattern Histogram (LBPH) feature extraction and the LBPH Face Recognizer based on the OpenCV library and Python 2.7. The database after classification is called the trained data.

After that, the classified data were tested by the trained respondents for verification. The test results shown in TABLE I are the average accuracy rates of face recognition for the trained respondents (those in the database). TABLE I shows that the face recognition accuracy for the trained respondents is 92.73%. This result shows good accuracy in recognizing a human face, so the method can be implemented for Human-Robot Interaction. Fig. 11 shows the examination of face recognition by the respondents.

Fig. 11. The examination of face recognition by respondents.

B. The Face Tracking System

The face detection algorithm can detect a face anywhere in the pixel area of the image (because the image does not contain only a human face). The detected human face in an image therefore has a coordinate. The implementation of the face coordinate on the social robot SyPEHUL is to control the robot's neck to track the human face; in other words, the social robot's head can follow the human face.

To control the social robot's neck, the resolution of the captured image must be set up. In this research, the video capture resolution is set to 480 x 640 pixels. The center coordinate of a human face seen by the WebCam is then found using (3) and (4):

xx = x + (w/2),  (3)

yy = y + (h/2),  (4)

where x is the initial face coordinate on the horizontal axis, w is the width of the face, xx is the center value of the face coordinate on the horizontal axis, y is the initial face coordinate on the vertical axis, h is the height of the face, and yy is the center value of the face coordinate


on the vertical axis; all units are in pixels. From these equations we obtain the values used to control the neck movement, which is divided into 4 movements: Up, Down, Right, and Left, as illustrated in Fig. 12.
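Equations (3) and (4) and the four-way movement split can be sketched as follows. The dead band around the image center (how far the face may drift before the neck moves) is an assumed parameter, since the paper does not state one, and the direction labels depend on how the servos are mounted:

```python
CAP_W, CAP_H = 640, 480  # capture resolution used in this research

def face_center(x, y, w, h):
    """Center of a detected face box, per equations (3) and (4)."""
    return x + w // 2, y + h // 2

def neck_moves(xx, yy, dead=40):
    """Classify the face center into Up/Down/Right/Left movements.
    `dead` is an assumed tolerance (pixels) around the image center."""
    moves = []
    if xx < CAP_W // 2 - dead:
        moves.append("Left")
    elif xx > CAP_W // 2 + dead:
        moves.append("Right")
    if yy < CAP_H // 2 - dead:
        moves.append("Up")
    elif yy > CAP_H // 2 + dead:
        moves.append("Down")
    return moves
```

For example, a face box at x = 100, y = 50, w = 200, h = 100 has its center at (200, 100), which lies left of and above the image center, so the head would step Left and Up; once the center sits inside the dead band, the neck servos hold their positions.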

Fig. 12. WebCam pixel coordinates and angles of the motor servos.

Fig. 13. The examination of face recognition and tracking for Human-Robot Interaction.
To make the social robot's head follow the human face, the resulting center coordinate of the face must be made to correspond to the angles of the motor servos. The minimum and maximum angles must be set so that the social robot's head can look up/down (vertically) or right/left (horizontally). In this research, the motor servos are set to range from 90° to 150° for the horizontal angle and from 20° to 70° for the vertical angle. The servo angle setup is written and uploaded to the Arduino board to control the head of the social robot SyPEHUL. Fig. 12 describes the image pixels, directions, and servo angles of the social robot's head. The concept for making the social robot's head move is as follows: the WebCam captures the human face; the algorithm then obtains the center coordinate value, which is processed in Python; from Python, the information is sent to the Arduino to control the social robot's head to follow the human face.

C. The System Implementation on the Social Robot

After the face recognition and tracking system was successful, the system could be applied to control the head of the social robot SyPEHUL. The result of the examination is shown in Fig. 13. The social robot's head can follow the human face properly and can recognize the human face. The implementation of face recognition was successful; in addition, the social robot can speak the human's name. The result of face recognition and tracking can be applied for Human-Robot Interaction.

V. CONCLUSION

This research has presented the development of a social robot named SyPEHUL which can recognize and track the human face. Face recognition, processed by an algorithm based on Python 2.7 (with the OpenCV library) using the Cascade Classification and LBPH Face Recognizer methods, achieves a good accuracy rate (92.73%). Also, the implementation of facial tracking to control the 12 DoF of the social robot SyPEHUL based on an Arduino microcontroller works effectively to control the robot's head. Future work will focus on combining speech recognition and facial expression to enhance the emotional expression of SyPEHUL for Human-Robot Interaction.

REFERENCES

[1] M. S. Kalas, "Real Time Face Detection and Tracking Using OpenCV," International Journal of Soft Computing and Artificial Intelligence, vol. 2, no. 1, pp. 41–44, 2014.
[2] P. Viola and M. J. Jones, "Robust Real-time Object Detection," Cambridge Research Laboratory, Cambridge, Massachusetts, Tech. Rep., February 2001.
[3] S. Tikoo and N. Malik, "Detection of Face using Viola Jones and Recognition using Back Propagation Neural Network," International Journal of Computer Science and Mobile Computing, vol. 5, no. 5, pp. 288–295, 2016.
[4] R. Boda and M. J. P. Priyadarsini, "Face Detection and Tracking Using
KLT and Viola Jones," ARPN Journal of Engineering and Applied Sciences, vol. 11, no. 23, pp. 13472–13476, 2016.
[5] C. Zhan, W. Li, P. Ogunbona, and F. Safaei, “A Real-Time Facial Ex-
pression Recognition System for Online Games,” International Journal
of Computer Games Technology, vol. 2008, pp. 1–7, 2008.
[6] S. Das, “Comparison of Various Edge Detection Technique,” Inter-
national Journal of Signal Processing, Image Processing and Pattern
Recognition, vol. 9, no. 2, pp. 143–158, 2016.
[7] H. C. V. Lakshmi and S. PatilKulakarni, “Segmentation Algorithm for
Multiple Face Detection in Color Images with Skin Tone Regions using
Color Spaces and Edge Detection Techniques,” International Journal of
Computer Theory and Engineering,, vol. 2, no. 4, pp. 552–558, 2010.


[8] C. Shan, S. Gong, and P. W. Mcowan, “Facial expression recognition


based on Local Binary Patterns : A comprehensive study,” Image
and Vision Computing, vol. 27, no. 6, pp. 803–816, 2009. [Online].
Available: http://dx.doi.org/10.1016/j.imavis.2008.08.005
[9] A. Punitha and M. K. Geetha, “HMM Based Real Time Facial Expres-
sion Recognition,” International Journal of Emerging Technology and
Advanced Engineering, vol. 3, no. 1, pp. 180–185, 2013.
[10] S. V. Hedaoo, M. D. Katkar, and S. P. Khandait, “Feature Tracking and
Expression Recognition of Face Using Dynamic Bayesian Network,”
International Journal of Engineering Trends and Technology, vol. 8,
no. 10, pp. 517–521, 2014.
[11] N. O. Sadek, N. A. Hikal, and F. W. Zaki, “Intelligent Real-Time Facial
Expression Recognition from Video Sequences based on Hybrid Feature
Tracking Algorithms,” International Journal of Advanced Computer
Science and Applications, vol. 8, no. 1, pp. 303–311, 2017.
[12] M. I. Khan and M. A.-A. Bhuiyan, “Facial Expression Recognition for
Human-Robot Interface,” International Journal of Computer Science and
Network Security, vol. 9, no. 4, pp. 300–306, 2009.
[13] M. R. G and J. Rajesham, “Weighted Local Active Pixel Pattern
(WLAPP) for effective Face Recognition,” International Journal of
Scientific & Engineering Research, vol. 4, no. 8, pp. 1540–1546, 2013.
[14] C. Breazeal, “Toward Sociable Robot,” Robotics and Autonomous Sys-
tem, vol. 42, pp. 167–175, 2003.
[15] M. Shayganfar, C. Rich, and C. L. Sidner, “A Design Methodology for
Expressing Emotion on Robot Faces,” Int. Conf. of Intelligent Robots
and Systems, vol. 12, 2012.
[16] N. Thakare, M. Shrivastava, and N. Kumari, “Face Detection and
Recognition for Automatic Attendance System,” International Journal
of Computer Science and Mobile Computing, vol. 5, no. 4, pp. 74–78,
2016.
[17] N. Sharma and I. Thanaya, “Home Security System Based on Sensors
and IoT,” International Journal of Innovative Research in Science,
Engineering and Technology, vol. 5, no. 2, pp. 10 357–10 362, 2016.
[18] P. V. Hajari and A. G. Andurkar, “Review Paper on System for Voice
and Facial Recognition using Raspberry Pi,” International Journal
of Advanced Research in Computer and Communication Engineering,
vol. 4, no. 4, pp. 232–234, 2015.
[19] R. Manjunatha and R. Nagaraja, “Home Security System and Door
Access Control Based on Face Recognition,” International Research
Journal of Engineering and Technology, vol. 4, no. 3, pp. 437–442,
2017.
[20] M. Buss, S. Sosnowski, A. Bittermann, and K. Kolja, "Design and Evaluation of Emotion-Display EDDIE," International Conference on Intelligent Robots and Systems, pp. 3113–3118, 2006.
[21] F. Hegel, F. Eyssel, and B. Wrede, "The Social Robot 'Flobi': Key Concepts of Industrial Design," IEEE International Symposium on Robot and Human Interactive Communication, vol. 19, pp. 107–112, 2010.
[22] F. Cid, J. Moreno, P. Bustos, and P. Nunez, "Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation," Sensors, vol. 14, pp. 7711–7737, 2014.
[23] I. Doroftei, F. Adascalitei, D. Lefeber, B. Vanderborght, and I. A.
Doroftei, “Facial expressions recognition with an emotion expressive
robotic head,” 7th International Conference on Advanced Concepts in
Mechanical Engineering, vol. 147, 2016.
[24] M. K. Dabhi and B. K. Pancholi, “Face Detection System Based on
Viola - Jones Algorithm,” International Journal of Science and Research,
vol. 5, no. 4, pp. 2015–2017, 2016.
[25] M. Chaudhari, S. Sondur, and G. Vanjare, “A review on Face Detection
and study of Viola Jones method,” International Journal of Computer
Trends and Technology (IJCTT), vol. 25, no. 1, pp. 54–61, 2015.
[26] P. Viola and M. J. Jones, "Robust Real-Time Face Detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137–154, 2004.
[27] M. A. Rahim, M. N. Hossain, T. Wahid, and M. S. A. Azam, “Face
Recognition using Local Binary Patterns (LBP),” Global Journal of
Computer Science and Technology Graphics & Vision, vol. 13, no. 4,
2013.
[28] W. S. M. Sanjaya, D. Anggraeni, A. Juwardi, and M. Munawwaroh,
“Design of Real Time Facial Tracking and Expression Recognition for
Human-Robot Interaction,” in ICCSE, 2018, pp. 1–10.

