
Ms. Shubhangi J. Moon et al. / (IJAEST) INTERNATIONAL JOURNAL OF ADVANCED ENGINEERING SCIENCES AND TECHNOLOGIES, Vol. No. 2, Issue No. 1, 043-046

A Real-Time Hand Gesture Recognition Technique by Using an Embedded Device

Ms. Shubhangi J. Moon
M.E. IV Sem (Embedded System & Computing)
Computer Science & Engg. Dept.
G. H. Raisoni College of Engg., Nagpur, India
shu.chand@gmail.com

Prof. R. W. Jasutkar
Computer Science & Engg. Dept.
G. H. Raisoni College of Engg., Nagpur, India
r_jasutkar@yahoo.com

Abstract— This paper explores how to interact with a humanoid robot using user-defined hand gestures; for this purpose, a scan line algorithm is used. We explain how to design the human-computer interface and how to regulate and set the motion of the humanoid robot; the planned motion sequences are then stored in a motion database. A real-time hand gesture recognition system is developed for human-robot interaction with a service robot. A gesture-based interface offers a way for untrained users to interact with robots more easily and efficiently. The proposed system presents a human-robot interface in which the robot performs the same gesture that the human performs. The system uses a single camera to recognize the user's hand gesture. Hand gestures are hard to recognize because the human hand is an object with a high degree of freedom, which leads to the self-occlusion problem, a well-known problem in vision-based recognition. However, by using multiple images and a scan line algorithm to increase the processing speed, the system improves human-computer interaction through real-time hand gestures.

Keywords— Gesture, hand gesture recognition, robotics, Human Computer Interaction (HCI).

I. INTRODUCTION

The term gesture is defined as "movement to convey meaning" or "the use of motions of the limbs or body as a means of expression; a movement, usually of the body or limbs, that expresses or emphasizes an idea." [3] The main purpose of gesture recognition research is to identify a particular human gesture and convey the information pertaining to that gesture to the user. From a corpus of gestures, a specific gesture of interest can be identified, and on that basis a specific command can be given to the robotic system to execute an action. The overall aim is to make the computer understand human body language, thereby bridging the gap between machine and human. Hand gesture recognition can be used to enhance human-computer interaction without depending on traditional input devices such as the keyboard and mouse.

The use of intelligent robots encourages the view of the machine as a partner in communication rather than as a tool. In the near future, robots will interact closely with groups of humans in their everyday environment in fields such as entertainment, recreation, health care and nursing. [6] In human-human interaction, multiple communication modalities, such as speech, gestures and body movements, are frequently used. The standard input methods, such as text input via the keyboard and pointer/location information from a mouse, do not provide a natural, intuitive interaction between humans and robots. It is therefore essential to create devices for natural and intuitive communication between humans and robots. Furthermore, for intuitive gesture-based interaction, the robot should understand the meaning of a gesture with respect to society and culture. The ability to understand hand gestures will improve the naturalness and efficiency of human interaction with robots and allow the user to communicate complex tasks without using tedious sets of detailed instructions. This interactive system uses the robot's eye cameras or web cameras to identify humans and recognize their gestures based on hand poses.

A robot differs from a computer in that it has to take inputs and give outputs like a human being; in other words, the robot has to behave in a human way while operating as a machine. Robots are used not only in factories but also in mines, construction sites, public places and homes. A new category of robots, called Field and Service Robots (FSR), is expected to perform a variety of tasks in unknown, unstructured and changing environments. Research in Human-Robot Interaction (HRI) [1] mainly concerns the use of robots in coordination with humans. Despite the significant amount of literature in "artificial intelligence" research, robots and computers are still machine-like devices, and research is being carried out to make machines as human-like as possible. For subtasks where high-level cognition or intelligence is needed, the robot has to ask the operator for help.

A gesture is defined as a string of movements with specific breaks that are reached progressively over time. [3] Gestures correspond to movement and to a change of position as a function of time. Some simple gestures have only one position to reach from the beginning to the end of the gesture; other gestures cover multiple positions and rarely remain in a stationary pose. Modeling and recognizing a gesture is a difficult challenge, since gestures vary dynamically both in shape and in duration. This also makes it difficult to delineate
ISSN: 2230-7818 @ 2011 http://www.ijaest.iserp.org. All rights Reserved. Page 43



between erroneous motions and specific gestures. Gestures can broadly be divided into two categories: communicative (meaningful) gestures and non-communicative (transitional) gestures. In order to identify different types of communicative motions, it is important to classify gestures.

The proposed system is designed mainly for controlling a robotic hand or an individual robot by merely showing hand gestures in front of a camera. With this technique, one can pose a hand gesture in the vision range of the robot and, corresponding to that gesture, the desired action is performed by the robotic system. A simple video camera is used for computer vision, which helps in monitoring the gesture presentation. The approach consists of four modules: (a) real-time monitoring of hand gesture formation and gesture capture, (b) feature extraction, (c) pattern matching for gesture recognition, and (d) determination of the command corresponding to the shown gesture and execution of the action by the robotic system. A real-time hand tracking technique is used for object detection within the range of vision. The primary goal of hand gesture recognition research is to create an embedded device which can identify specific hand gestures and use them to convey information or to control devices.

II. SYSTEM COMPONENTS

Figure 1. Physical architecture (a microcontroller connected to the camera controller, the hand, head and wheel motor drivers, and a power supply unit).

The main objectives of the proposed system are:
 The proposed embedded system improves human-computer interaction by merely showing a hand gesture in front of a camera mounted inside the robot.
 A single web camera is used.

Its main components are:
 Microcontroller: this main component manages all the processing and hardware control. It mainly utilizes ROM, to store the embedded software program that drives the system flow; a processor, to execute the software program stored in ROM; and ports, to communicate with external devices such as the motor driver controller or the camera controller.
 Motor driver controller: this controller employs different sub-controllers for the different actions performed by the robot, mainly divided into (1) the hand motor driver, (2) the head motor driver and (3) the wheel motor driver.
 Camera controller: this unit captures the video input for the system using frame-capturing technology.

The general framework is composed of the following processes:

a) Initialization: the recognizable postures are stored in a visual memory, which is created in a start-up step. Different ways of configuring this memory are proposed.

b) Acquisition: a frame is captured from the web camera.

c) Segmentation: each frame is processed separately before its analysis: the image is smoothed, skin pixels are labeled, noise is removed and small gaps are filled. Image edges are found and, finally, after a blob analysis, the blob which represents the user's hand is segmented. A new image is created which contains the portion of the original image where the user's hand was placed.

d) Pattern recognition: once the user's hand has been segmented, its posture is compared with those stored in the system's visual memory (VMS) using the scan line algorithm.

e) Executing action: finally, the system carries out the corresponding action according to the recognized hand gesture.

Figure 2. Overall execution sequence.
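Steps b)–d) of the framework can be illustrated with a minimal, self-contained sketch. The skin-color rule, its thresholds and the tiny synthetic frame below are hypothetical illustration values; the paper does not specify them, and a real implementation would operate on full camera frames.

```python
# Sketch of the segmentation step: label skin-colored pixels in a frame,
# then keep the largest connected blob as the user's hand.
from collections import deque

def is_skin(rgb):
    """Very rough RGB skin rule (assumed thresholds, not from the paper)."""
    r, g, b = rgb
    return r > 95 and g > 40 and b > 20 and r > g and r > b

def largest_blob(frame):
    """Return the set of (row, col) pixels of the largest skin blob."""
    h, w = len(frame), len(frame[0])
    mask = [[is_skin(frame[y][x]) for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    best = set()
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill one connected component of skin pixels.
                blob, queue = set(), deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    blob.add((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(blob) > len(best):
                    best = blob
    return best

# Tiny synthetic frame: a 3-pixel "hand" patch, one stray skin pixel, background.
SKIN, BG = (200, 120, 90), (30, 60, 120)
frame = [[BG, SKIN, SKIN],
         [BG, BG,   SKIN],
         [SKIN, BG, BG]]
hand = largest_blob(frame)
```

Here the blob analysis is done with a simple breadth-first flood fill; any connected-component labeling routine would serve the same purpose, and the stray single-pixel blob is discarded as noise.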




III. REAL-TIME IMAGE PROCESSING

A camera is first controlled through the driver software provided with the camera hardware; this driver is customized for a specific operating system. The application then has to contact the operating system for camera access. Once this process is complete, the live camera view is displayed in a suitable control, such as a picture box. Because this is a real-time live view, it cannot be processed directly, so the current frame must be extracted from the live stream for processing. This is called frame extraction, and the extracted frame is loaded into memory for fast processing. Even with the frame available, it is not easy to identify colors directly, so image processing is performed: since each pixel is made up of three RGB components, the RGB value of each pixel is extracted and compared with predefined values, as in pattern matching. The whole process is then executed for the next frame. Frame capturing and frame processing are carried out together, and the two processes must be synchronized for smooth performance. This is how real-time image processing works and how it is utilized in our project.

IV. OVERVIEW OF PROPOSED WORK

Figure 3. Steps of processing (Start → Get Camera View → Get Video Frame → Get Image Pixel → Get RGB Value → Compare RGB Value → Colour as Output → Stop).

The aim of this paper is to present an embedded device based on pattern recognition techniques for classifying hand gestures into five categories: hand pointing up, pointing down, pointing left, pointing right and pointing front. [5] We have applied a simple pattern recognition technique to the problem of hand gesture recognition. This method has a training phase and a testing phase. [4] In the training phase, the user shows hand gestures, which are captured using the web camera.

In principle, each and every posture would have to be saved in a database. Instead of managing such a huge database, we divide the frame into different scan lines and examine only the pixels which fall under a particular scan line. This reduces the size of the database to some extent, because only the coordinates of the scan lines are stored in the database. Initially we examine whether this approach works for the color of the arm (i.e., the color of the shirt); if it is successful, we implement it for the entire hand gesture. The scan line algorithm is beneficial because it scans only those pixels where the hand movement is performed, which increases the processing speed.

Figure 4. Initial position.

As shown in Fig. 4, the system scans the pixel colors which fall under the lines indicated as L for left-hand movements and R for right-hand movements, respectively, and takes the corresponding action.

V. THE RECOGNITION SYSTEM ARCHITECTURE

Figure 5. Recognition system.

The proposed system consists of four main components: the testing/training data, the hand gesture feature extraction, the APs-based hand gesture recognition, and the applications, as shown in Fig. 5. [5] The hand gesture feature extraction component is designed and implemented in software.
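The scan line check described in Section IV can be sketched in a few lines. The scan-line positions, the reference color and the per-channel tolerance below are assumed values chosen only for illustration; in the actual system the scan-line coordinates are the data stored in the database.

```python
# Sketch of the scan line algorithm: instead of matching whole frames
# against a posture database, only the pixels lying on predefined scan
# lines are compared with a reference color.

REF_COLOR = (200, 120, 90)     # hypothetical arm/shirt color
TOLERANCE = 40                 # hypothetical per-channel tolerance

# Hypothetical vertical scan lines: 'L' for left-hand, 'R' for right-hand
# movements, each given as a column index in the frame.
SCAN_LINES = {"L": 0, "R": 2}

def matches(rgb, ref=REF_COLOR, tol=TOLERANCE):
    """True if every RGB channel is within `tol` of the reference color."""
    return all(abs(a - b) <= tol for a, b in zip(rgb, ref))

def detect_movement(frame):
    """Return the labels of the scan lines crossed by a matching pixel."""
    hits = []
    for label, col in SCAN_LINES.items():
        if any(matches(row[col]) for row in frame):
            hits.append(label)
    return hits

# Tiny synthetic frame: the hand-colored pixel crosses only the right line.
BG = (30, 60, 120)
frame = [[BG, BG, (210, 110, 80)],
         [BG, BG, BG]]
```

Because only the pixels on the predefined lines are ever examined, the stored data reduces to the scan-line coordinates plus reference colors, which is the database saving the paper describes.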




In this component, we extract the hand gesture features as vectors through hand gesture segmentation and vector extraction. In addition, the APs-based hand gesture recognition component is developed in hardware [7] and is able to achieve a fast recognition speed. Moreover, one of the main goals of this paper is to propose an architecture based on both software and hardware to solve the real-time hand gesture recognition problem. The architecture is also designed for applications that employ interaction between human and computer through hand gestures.

ACKNOWLEDGMENT

The author would like to gratefully thank Prof. R. W. Jasutkar for her valuable guidance and support.

CONCLUSION & FUTURE SCOPE

There are many approaches to hand gesture recognition, and each approach has its strengths and weaknesses. The strength of the method proposed in this paper is the scan line algorithm; the proposed algorithm achieves an 88% average recognition rate.

The weakness of the system is that the actions performed by the robot work only over small distances. In future work, the system will be converted to a wireless model so that actions can be performed over long distances, and additional gestures can be designed for human-computer interaction, such as sign language translation.

REFERENCES

[1] Jagdish Lal Raheja, Radhey Shyam, Umesh Kumar, P. Bhanu Prasad, "Real-Time Robotic Hand Control using Hand Gestures," Second International Conference on Machine Learning and Computing, 2010.

[2] Chetan A. Burande, Raju M. Tugnayat, Nitin K. Choudhary, "Advanced Recognition Techniques for Human Computer Interaction," 2nd International Conference on Computer and Automation Engineering (ICCAE), Singapore, 2010.

[3] Hatice Gunes, Massimo Piccardi, Tony Jan, "Face and Body Gesture Recognition for a Vision-Based Multimodal Analyzer," Conferences in Research and Practice in Information Technology, Vol. 36, 2004.

[4] G. R. S. Murthy, R. S. Jadon, "Hand Gesture Recognition using Neural Networks," 2nd IEEE International Advance Computing Conference (IACC), Patiala, 19-20 Feb. 2010.

[5] Wang Ke, Wang Li, Li Ruifeng, Zhao Lijun, "Real-time Hand Gesture Recognition for Service Robot," International Conference on Intelligent Computation Technology and Automation, 2010.

[6] Chi-Min Oh, Md. Zahidul Islam, Jae-Wan Park, Chil-Woo Lee, "A Gesture Recognition Interface with Upper Body Model-based Pose Tracking," 2nd International Conference on Computer Engineering and Technology (ICCET), Chengdu, 16-18 April 2010.

[7] Chen-Chiung Hsieh, Dung-Hua Liou, David Lee, "A Real Time Hand Gesture Recognition System Using Motion History Image," 2nd International Conference on Signal Processing Systems (ICSPS), Dalian, 5-7 July 2010.

[8] Chenglong Yu, Xuan Wang, Hejiao Huang, Jianping Shen, Kun Wu, "Vision-Based Hand Gesture Recognition Using Combinational Features," Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Darmstadt, Germany, 2010.

[9] Mary Ruttum, Sarangi P. Parikh, "Can robots recognize common marine gestures?," 42nd Southeastern Symposium on System Theory (SSST), Tyler, TX, 2010.

