
SIGN LANGUAGE RECOGNITION FOR DEAF AND DUMB PEOPLE USING ANDROID ENVIRONMENT

A. Gayathri1, Dr. A. Sasi Kumar2
1 PG Scholar, Dept. of IT, School of Computing Sciences, Vels University, Chennai, India
2 Associate Professor, Dept. of IT, School of Computing Sciences, Vels University, Chennai, India

ABSTRACT
This paper helps deaf and dumb people communicate with the rest of the world using sign language. Communication plays an important role for human beings. Speech-to-sign technology and a video relay service (VRS) enable audible-language translation on smartphones with signing, and the application offers a character feature on the mobile, usable without dialling a number, that translates spoken and written words into sign language with video. Interaction between normal people and blind people is very difficult because of communication problems, and many applications are already available in the market to help blind people interact with the world: voice-based e-mail and chat systems allow them to communicate with each other and with other people. This work includes voice-based, text-based and video-based interaction approaches. Video chat technology continues to improve and may one day be the preferred means of mobile communication among the deaf, but existing technologies have not yet been combined to solve the problem of mobile sign language translation in daily life. A video interpreter is responsible for helping deaf or hearing-impaired individuals understand what is being said in a variety of situations. The main feature of this work is that it can be used to learn sign language and to provide sign language translation of video for people with hearing impairment.

Keywords: Speech Recognition, Sign Language, Speech Translation.

1. INTRODUCTION
Android applications have shown a dramatic improvement in their functionality, to the point where a cellular phone can now execute Java programs. As a result, cellular users throughout the world can read and write e-mail, browse web pages and play Java games on their phones. This trend has prompted us to propose the use of an Android application for better communication. Before SMS/MMS, deaf people rarely used mobile phones; texting now allows them to communicate remotely with both deaf and hearing parties. Mobile video chat may one day replace texting, but so far only for conversations between hearing callers, not for those between deaf and hearing callers. Outfit-7 is an application in which an animated character repeats everything we say in a high-pitched voice, and it can be used without dialling a number.

This paper presents an alternative approach to gesture detection between deaf people that uses image processing, overcomes the limitations above, and paves the way for communication between deaf and normal people in their daily activities using sign language and a video relay service. Video technology continues to improve and may one day be the preferred means of mobile communication among the deaf. It allows deaf, hard-of-hearing and speech-impaired individuals to communicate over video or other technology with hearing people in real time, via a sign language interpreter. The idea behind SE (Signed English) and other signing systems parallel to English is that deaf people will learn English better if they are exposed to it.


The sign language system provides video, improving small-screen mobile communication among the deaf. There are mainly three parts:

- Speech-Recognition Engine
- Database and
- Recognized Text

Under the Speech-Recognition Engine we include sign-to-speech, with the help of the Outfit-7 and Video Relay Service (VRS) technologies (VRS enables audible-language translation on smartphones with signing), and speech-to-sign using the Mimix technology. Secondly, an SQLite database is used to store the inputs given by the application user, which are then viewed from the database (a minimal storage sketch is given below, after Section 1.2.2). Finally, text (or video) recognized through Mimix makes it easier to have clear two-way communication with a deaf person without having to know sign language; it works based on a recorder. This feature, along with the power of JSON (JavaScript Object Notation), makes it a great choice for incorporation into the proposed architecture.

The main goal of this paper is gesture recognition that might enable the deaf to converse with hearing people remotely, and this is done by a JSON interpreter. We are not aware of any research whose aim is to provide un-intermediated mobile communication between deaf and hearing people, each conversing in their own natural language. Hence our project provides an approach to communication between deaf and hearing people in day-to-day life. Initially, the mobile search functionality must recognize either ASL (American Sign Language) text or voice and convert it to both a text message and a video for the relevant input. ASL2TXT enables sign language fingerspelling communication (signs displayed on the keyboard): it takes text and displays video.

The process comprises the following steps:

- A deaf person signs
- Software translates the signs into text (and video)
- The hearing person reads it (and views it)
- The hearing person and the deaf person speak into the microphone
- Software translates the voice into text (and ASL video)
- The deaf person reads it (and sees the ASL video)

1.1 Features of the work
- Without dialling a number we can communicate with others as in face-to-face communication.
- It does not require a large amount of storage, as it uses the Handspeak support available online.
- The sign words are signed in the same order as the letters appear in English words.
- This work prepares individuals to act as interpreters/translators, facilitating and mediating communication between Deaf/Hard-of-Hearing and hearing people.
- It aims at an accurate and appropriate transfer of a message from a source language into a target language from the point of view of style and culture.
- It helps users learn the culture and history of Deaf people for a better understanding of communication between Deaf and hearing individuals.
- The app is also suitable for sending messages you would otherwise be too shy to say in person, such as apologizing to someone, professing love or singing a song.

1.2 Domain Introduction
1.2.1 Deaf-Hearing Communication
Since not all deaf people use sign language in their day-to-day life, for ease of exposition we define the term "deaf" broadly, to include any person who communicates primarily using American Sign Language (ASL). Because some hearing people use both audible and sign languages, we use the term "hearing" for a person who speaks an audible language and does not sign. The technical literature uses the term "translation" in favor of "interpretation," so we follow that standard.

1.2.2 Sign Language Interpreter
A sign language interpreter is responsible for helping deaf or hearing-impaired individuals understand what is being said in a variety of situations. An interpreter must understand the subject matter so he or she can accurately translate what is being spoken into sign language. Interpreters may also be used in one-on-one situations, and they may use technology to provide services from a remote location.
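The paper states only that user inputs are kept in an SQLite database and viewed back from it, without giving a schema; the following is a minimal sketch, assuming a single table that stores each recognized text input together with an identifier of the matching sign video. The class, table and column names are illustrative, not taken from the paper.

```java
import android.content.ContentValues;
import android.content.Context;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

// Hypothetical helper that stores every input the user types or speaks,
// so it can later be re-read and replayed as a sign-language video.
public class MessageStore extends SQLiteOpenHelper {
    private static final String DB_NAME = "sign_messages.db"; // assumed name
    private static final int DB_VERSION = 1;

    public MessageStore(Context context) {
        super(context, DB_NAME, null, DB_VERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        // One row per recognized input: the raw text and the id of the sign video shown for it.
        db.execSQL("CREATE TABLE messages (" +
                   "id INTEGER PRIMARY KEY AUTOINCREMENT, " +
                   "text TEXT NOT NULL, " +
                   "video_id TEXT)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        db.execSQL("DROP TABLE IF EXISTS messages");
        onCreate(db);
    }

    // Store one recognized input and return its row id.
    public long saveMessage(String text, String videoId) {
        ContentValues values = new ContentValues();
        values.put("text", text);
        values.put("video_id", videoId);
        return getWritableDatabase().insert("messages", null, values);
    }

    // Read back all stored inputs, newest first, for display in the inbox view.
    public Cursor listMessages() {
        return getReadableDatabase().rawQuery(
                "SELECT id, text, video_id FROM messages ORDER BY id DESC", null);
    }
}
```

A local store of this kind also gives the "inbox" behaviour described later, where a received message can be reopened and its sign video replayed at any time.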


1.2.3 Open Standard
The system allows deaf, hard-of-hearing and speech-impaired individuals to communicate over video or other technology with hearing people in real time, via a sign language interpreter; the video interpreter facilitates communication between participants located at different sites. The JSON format was originally specified by Douglas Crockford. JSON, or JavaScript Object Notation, is an open-standard format that uses human-readable text to transmit data objects consisting of attribute-value pairs. It is used primarily to transmit data between a server and a web application, as an alternative to XML. Although originally derived from the JavaScript scripting language, JSON is a language-independent data format, and code for parsing and generating JSON data is readily available in many programming languages.

1.2.4 Video-Relay Service
Deaf callers can also contact hearing parties through interpreters, using mobile video chat on smartphones, tablet PCs or iPods with a Wi-Fi connection, but these solutions still require human interpreters. To overcome this, free sign language resources and extracurricular materials exist for language enthusiasts, ASL students and learners, instructors and teachers, interpreters, homeschoolers, parents and professionals who are interested in learning sign language online and/or beyond classes, for practice or self-study. This is achieved by using the Handspeak resource implemented along with the JSON technique. Video of ASL signs is available at various websites, such as ASL Pro, Michigan State University's ASL Browser and Signing Savvy; users access a video by typing its text-string identifier. ASL2TXT requires a reverse ASL dictionary, one which allows users to gesture signs and then read text translations or listen to audio translations.

1.2.5 Texting and Speech Translation
SMS/MMS enables signers to communicate with both deaf and hearing parties. Video chat technology continues to improve and may one day be the preferred means of mobile communication among the deaf. Google Translate allows users to type text in their native tongues and receive textual and audible translations in several vernaculars.

1.3 Human Interpreters
For lengthy, sophisticated conversations it is difficult to imagine a workable computer system that would improve over human interpreters; the ability of human interpreters to perform language translation may always exceed a computer's. Still, in some situations, mobile TXT2ASL translation may be more convenient than a relay or even a handwritten note. Like texting, we envision TXT2ASL as an enhancement to smartphones and other mobile devices, not as a replacement for human interpreters.

2. LITERATURE SURVEY
The purpose of the literature survey is to give a brief overview of the reference papers and to establish complete information about them. Its goal is to specify the technical details related to the main project in a concise and unambiguous manner.

Different approaches have been used by different researchers for the recognition of various hand gestures, implemented in different fields [1]. The approaches can be divided into three broad categories:

- Hand segmentation approaches
- Feature extraction approaches and
- Gesture recognition approaches.

The available systems are not portable and are not affordable for poor people. That work introduces a new Android application which detects Indian Sign Language via the mobile camera and converts it into the corresponding text or voice output; the system reported there achieves about 65% correct prediction and is still being improved. Hence we took up the idea of presenting the gesture video with the help of the Handspeak technology, which helps deaf people view the relevant sign language video based on the text given as input (a minimal JSON lookup sketch is given below). We include the idea of providing a link in the application which helps in extracting the video efficiently.
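The paper does not specify the message format exchanged with the Handspeak resource, so the following is a minimal sketch under an assumed format: a hypothetical JSON response that maps a typed word to the URL of its ASL video, parsed with the org.json classes that ship with Android. The field names and URL are illustrative only.

```java
import org.json.JSONException;
import org.json.JSONObject;

// Hypothetical lookup of a sign video from a JSON attribute-value response.
public final class SignVideoLookup {

    // Assumed response format, e.g.
    // {"word":"hello","video":"https://example.org/asl/hello.mp4","found":true}
    public static String extractVideoUrl(String jsonResponse) throws JSONException {
        JSONObject obj = new JSONObject(jsonResponse);
        if (!obj.optBoolean("found", false)) {
            return null; // no video known for this word
        }
        return obj.getString("video");
    }

    // Building the request body for a typed word works the same way in reverse.
    public static String buildRequest(String word) throws JSONException {
        JSONObject request = new JSONObject();
        request.put("word", word);
        request.put("lang", "ASL"); // assumed parameter
        return request.toString();
    }

    public static void main(String[] args) throws JSONException {
        String response = "{\"word\":\"hello\"," +
                "\"video\":\"https://example.org/asl/hello.mp4\",\"found\":true}";
        System.out.println(extractVideoUrl(response));
    }
}
```

In the application itself, the extracted URL would simply be handed to a video player view to display the sign clip for the typed text.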


Sign language is used as a communication medium among deaf and dumb people to convey messages to each other [2]. In order to bridge the communication gap between the deaf-and-dumb community and the normal community, a lot of research has been carried out to automate sign language interpretation with the help of image processing and pattern recognition techniques. That paper proposes an optimized implementation of the well-known Viola-Jones algorithm with LBP (Local Binary Pattern) features for hand gesture recognition, recognizing Indian Sign Language gestures in a real-time environment; the optimized algorithm has been implemented as an Android application and tested with real-time data. The implemented algorithm, however, is not robust in real time. Hence we use already recorded videos stored in cloud storage, which we consider the easiest way of interpreting the user's input in a relevant manner. The above algorithm does not prove its efficiency against arbitrary backgrounds, whereas our project overcomes this issue to a large extent.

A number of developing countries continue to provide educational services to students with disabilities in "segregated" schools, although all students, regardless of their personal circumstances, have a right of access to and participation in the education system according to their potential and ability [3]. With the rapidly growing population and the increasing number of people with blindness and other disabilities, the need for technology in the field of education has become imminent. That project attempts, through the use of speech technology, to provide solutions for some of these issues by creating an interactive system. We took the idea of using voice-over-text technology from it because, considering deaf people, they may or may not have speech ability depending on their condition from birth. It can be a revolutionary change that will benefit hearing-impaired people, boost their confidence and put them on a par with other people.

For the past several decades, designers have processed speech for a wide variety of applications ranging from mobile communications to automatic reading machines [4]. Speech has not been used much in the field of electronics and computers because of the complexity and variety of speech signals and sounds. The speech-to-text system described there directly acquires and converts speech to text. It can supplement other, larger systems, giving users a different choice for data entry, and can also improve accessibility by providing a data-entry option for blind, deaf or physically handicapped users. Voice SMS is an application developed in that work that allows a user to record spoken messages and convert them into SMS text messages, which can then be sent to an entered phone number. Its speech recognition uses a technique based on hidden Markov models (HMMs), currently the most successful and most flexible approach to speech recognition. Using a speech recognizer that works over the Internet allows much faster data processing.

Text-to-speech (TTS) conversion [5] transforms linguistic information stored as data or text into speech. It is widely used nowadays in audio reading devices for blind people. In the last few years, however, the use of text-to-speech conversion technology has grown far beyond the disabled community to become a major adjunct to the rapidly growing use of digital voice storage for voice mail and voice response systems. That paper presents a method for designing a text-to-speech conversion module in Matlab using simple matrix operations.

3. SYSTEM ARCHITECTURE

Figure 1. Overall System Architecture

- A deaf person signs through the sign language keyboard displayed in the application, as shown in Figure 1.

- Software translates the signs into text and ASL video through the interpretation process.

- The hearing person reads the text or views the sign language video extracted through Handspeak.

- The hearing person and the deaf person speak into the microphone, and the speech is recognized through the Google server (a minimal sketch of this step is given after this list).

- Software translates the voice into text and ASL video, interpreted through JSON (JavaScript Object Notation).

- The deaf person reads the text and sees the ASL video; the sent SMS is stored in the inbox, where it can be viewed at any time.
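The paper states only that spoken input is recognized through the Google server. A common way to do this on Android is the platform's RecognizerIntent, so the following is a minimal sketch under that assumption; the activity name, request code and the showAsSignVideo step are illustrative, not part of the paper.

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognizerIntent;
import java.util.ArrayList;

// Hypothetical activity showing how the hearing person's speech could be
// captured and turned into text before the text-to-sign-video step.
public class SpeechInputActivity extends Activity {
    private static final int REQUEST_SPEECH = 100; // arbitrary request code

    // Launch the platform speech recognizer (backed by Google's online service).
    private void startListening() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak your message");
        startActivityForResult(intent, REQUEST_SPEECH);
    }

    // Receive the recognized text and pass it on to the sign-translation step.
    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_SPEECH && resultCode == RESULT_OK && data != null) {
            ArrayList<String> results =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (results != null && !results.isEmpty()) {
                String recognizedText = results.get(0);
                showAsSignVideo(recognizedText); // hypothetical next step
            }
        }
    }

    private void showAsSignVideo(String text) {
        // In the proposed system this is where the text would be stored in SQLite
        // and the matching ASL video fetched, e.g. via the JSON lookup sketched earlier.
    }
}
```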


There are two existing modes of communication:

- Communication through a cell phone (with dialling a number)
- Face-to-face communication (without dialling a number).

3.1 Communication through Cell
This happens only between a caller and a callee, as they communicate only by dialling a number. When a caller sends the relevant information as text or voice, the mobile search system requires a database larger than the capacity of a given mobile device, as it must retrieve the exact information from the cloud. It may be preferable at times to go to the cloud for image search, analysis and translation into text or voice, depending on the processing power of the mobile device, the resolution of the images and the size of the vocabulary database. Although satisfactory results have already been reported, the remaining issue is that this mode works only between a caller and a callee: for communication between a deaf and a hearing person a number must be dialled, so it cannot be used for daily activities, that is, for normal face-to-face communication.

3.2 Face to Face Communication
Today a new option is available for enjoying a conversation with each other: an app called Mimix. Anything a person says is immediately translated into sign language through Mimix, making it easier to have clear two-way communication with a deaf person without having to know sign language. It works based on a recorder. The limitation of the Mimix application is that, to convert normal language into signs, the sentence must first be recorded and then converted by clicking the converter button; because every sentence must be recorded, the process takes time.

4. PROPOSED SYSTEM
Using this application, we pave the way for a deaf person to interact easily with a normal person anywhere. The project also supports automatic translation, automatic speech recognition and speech-to-sign transmission. Our proposed system includes a variety of technologies and consists of two main parts, hardware and software. On the hardware side we require a phone and a speaker; on the software side we mainly consider Outfit-7 (which is used in the Tomcat application) and the Video Relay Service (VRS). All these parts can be brought together in an integrated system: we implement Outfit-7 in the VRS application. Outfit-7 is a mobile-phone application whose software converts everything we say into a high-pitched voice, and it can be used without dialling a number.

The most important means of communication between deaf people implemented in our project is ASL (American Sign Language). All letters are signed using only the right hand, which is raised with the palm facing the viewer. SE (Signed English) is a reasonable manual parallel to English; the idea behind SE and other signing systems parallel to English is that deaf people will learn English better if they are exposed to them. SE uses two kinds of gestures: Sign Words and Sign Markers. Each Sign Word stands for a separate entry in a standard English dictionary. In our project we implement the Sign Word concept, which is useful for converting sign language into words: the sign words are signed in the same order as the words appear in an English sentence (a minimal mapping sketch is given below). Most signs in SE are taken from ASL, but they are used in the same order as English words and with the same meaning. Using this application, a deaf person can easily interact with a normal person anywhere; it can also be used for mobile sign translation through VRS, and with Outfit-7 the user can communicate in daily activities without dialling a number.
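The paper does not give an implementation of the Sign Word concept, so the following is a minimal sketch under assumptions: a simple in-memory dictionary maps each English word, in sentence order, to the identifier of its sign video, falling back to letter-by-letter fingerspelling for unknown words. All class names and identifiers are illustrative; a real application would load the dictionary from its SQLite store or an online resource such as Handspeak.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;

// Hypothetical Sign Word mapper: English words are translated to sign-video ids
// in the same order as they appear in the sentence.
public final class SignWordMapper {
    private final Map<String, String> dictionary = new HashMap<>();

    public SignWordMapper() {
        // Illustrative entries only; real data would come from the message store or server.
        dictionary.put("hello", "sign_hello");
        dictionary.put("how", "sign_how");
        dictionary.put("are", "sign_are");
        dictionary.put("you", "sign_you");
    }

    // Convert a sentence into an ordered list of sign identifiers.
    public List<String> toSigns(String sentence) {
        List<String> signs = new ArrayList<>();
        for (String word : sentence.toLowerCase(Locale.ENGLISH).split("\\s+")) {
            String clean = word.replaceAll("[^a-z]", "");
            if (clean.isEmpty()) continue;
            String sign = dictionary.get(clean);
            if (sign != null) {
                signs.add(sign);
            } else {
                // Unknown word: fall back to fingerspelling, letter by letter.
                for (char c : clean.toCharArray()) {
                    signs.add("letter_" + c);
                }
            }
        }
        return signs;
    }

    public static void main(String[] args) {
        System.out.println(new SignWordMapper().toSigns("Hello, how are you Anna?"));
        // [sign_hello, sign_how, sign_are, sign_you, letter_a, letter_n, letter_n, letter_a]
    }
}
```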


5. CONCLUSION
Using this application, a deaf person can easily interact with a normal person anywhere. The application can also be used for mobile sign translation through VRS, and with Outfit-7 communication can be carried out in daily activities without dialling a number.

6. FUTURE ENHANCEMENT
Future work will build on Mimix, Outfit-7 and VRS together with research on speech and audio processing and on computer speech and language, and involves both speech recognition and translation components. With such an application, deaf people will be able to communicate with normal people anywhere.

REFERENCES
[1] Raghavendhar Reddy, B. and Mahender, E., "Speech to Text Conversion using Android Platform", International Journal of Engineering Research and Applications (IJERA), Vol. 3, Issue 1, pp. 253-258, January-February 2013.
[2] Sangeetha, K. and Barathi Krishna, L., "Gesture Detection for Deaf and Dumb People", International Journal of Development Research, Vol. 4, Issue 3, pp. 749-752, March 2014.
[3] Shanmukha Swamy, Chethan M. P. and Mahantesh Gatwadi, "Indian Sign Language Interpreter with Android Implementation", International Journal of Computer Applications (0975-8887), Vol. 97, No. 13, pp. 36-41, July 2014.
[4] Sinora Ghosalkar, Saurabh Pandey, Shailesh Padhra and Tanvi Apte, "Android Application on Examination Using Speech Technology for Blind People", International Journal of Research in Computer and Communication Technology, Vol. 3, Issue 3, March 2014.
[5] Tapas Kumar Patra, Biplab Patra and Puspanjali Mohapatra, "Text to Speech Conversion with Phonematic Concatenation", International Journal of Electronics Communication and Computer Technology (IJECCT), Vol. 2, Issue 5, pp. 223-226, September 2012.
