
International Journal of Scientific Research Engineering & Technology (IJSRET)

Volume 2, Issue 8, pp. 498-503, November 2013, www.ijsret.org, ISSN 2278-0882

Computer Vision Techniques for Supporting Blind or Vision Impaired People: An Overview
Abu Sadat Mohammed Yasin 1, Md. Majharul Haque 2, Sadia Binte Anwar 3, Md. Shakil Ahamed Shohag 4

1 Shahjalal University of Science and Technology, Bangladesh
2 University of Dhaka, Bangladesh
3 World University, Bangladesh
4 University of Development Alternative, Bangladesh

ABSTRACT
Thanks to modern technology, especially improvements in mobile technology and handheld devices and the incorporation of artificial intelligence, the lives of blind and vision impaired people are becoming more colorful. Handy tools that were once only a heart's desire are now becoming reality and are changing how such users think about themselves. Many researchers are interested in doing something for blind and vision impaired people, and it can be claimed that computer vision combined with mobile technology is the best option for them: mobile devices are small, consume little power, and their computational power has increased greatly in this age. As a result, a great deal of research and many applications have been produced. In this paper, computer vision techniques to support blind or vision impaired people are addressed and the procedures of various researchers are discussed. The paper presents a comparative survey of a variety of procedures in this arena. A range of techniques prepared for supporting blind or vision impaired people are compared, promising approaches and their significant points are indicated, and particular attention is given to describing methods from basic to mature, so that the reader can obtain useful direction for further analysis.

Keywords - Mobile technologies, artificial intelligence, blind/vision impaired, handheld devices, computer vision.

1. INTRODUCTION
Humans use their eyes to see and visually sense the world around them, and they have a brain to think about and analyze what they see. Computer vision is the science that aims to give a similar, if not better, capability to a machine or computer [1]. Challenges related to independent mobility are well recognized to reduce quality of life and to compromise the safety of individuals with severe vision impairment.

Computer vision is concerned with methods for acquiring, processing, analyzing and understanding images in order to extract useful information from a single image or a sequence of images. It provides features to think like a human and act like a human, which is very helpful for impaired people. The field covers areas such as computer-vision based wayfinding, visual object identification, and text extraction and reading. Work on the theoretical and algorithmic basis for automatic visual understanding is incorporated here [2] [3]. The applications of computer vision are numerous and include agriculture, medical image analysis, augmented reality, pollution monitoring, autonomous vehicles, process control, biometrics, remote sensing, forensics, robotics, character recognition, security and surveillance, industrial quality inspection, transport, face recognition, gesture analysis, geosciences, image restoration, supporting blind or vision impaired people, and many more.

Among all of these areas of computer vision, we are particularly interested in vision techniques for supporting blind or vision impaired people. According to the (June 2012) estimates of the World Health Organization [4], 285 million people are visually impaired worldwide: 39 million are blind and 246 million have low vision. About 90% of the world's visually impaired live in third world countries.

The rest of the paper is organized as follows. Section 2 presents a comprehensive literature review of computer vision techniques for supporting blind and vision impaired people, an at-a-glance comparison among the various procedures is presented in Section 3, and Section 4 concludes the paper.

2. REVIEW ON COMPUTER VISION TECHNIQUES FOR SUPPORTING BLIND AND VISION IMPAIRED PEOPLE
Early research on applying artificial intelligence to establish computer vision techniques for supporting blind and vision impaired people disclosed the paradigms for serving people who are unable to travel alone. There are many interesting publications, projects and mobile apps in this field, as it is a burning topic nowadays. An overview of them, with their significant points indicated, follows.

2.01 Volodymyr Ivanchenko et al. [5] in 2008 described Clear Path Guidance
This work guides blind and visually impaired wheelchair users along a clear path, using computer vision to sense the presence of obstacles or other terrain features and warn the user accordingly. Since multiple terrain features can be distributed anywhere on the ground, and their locations relative to a moving wheelchair are continually changing, it is challenging to communicate this wealth of spatial information in a way that is rapidly comprehensible to the user. The authors developed a novel user interface that allows the user to interrogate the environment by sweeping a standard (unmodified) white cane back and forth: the system continuously tracks the cane location and sounds an alert if a terrain feature is detected in the direction the cane is pointing. Experiments are described that demonstrate the feasibility of the approach.

2.02 Hend S. Al-Khalifa [6] in 2008 proposed Utilizing QR Code
This paper proposed a barcode-based system to help visually impaired and blind people identify objects in their environment. The system is based on the idea of utilizing QR codes (two-dimensional barcodes) affixed to an object and scanned using a camera phone equipped with QR reader software. The reader decodes the barcode to a URL and directs the phone's browser to fetch an audio file from the web that contains a verbal description of the object. The proposed system is expected to be useful in real-time interaction with different environments and to further illustrate the potential of the idea.
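The pipeline in [6] is conceptually simple: decode a QR code to a URL, then fetch and play the audio description it points to. The following is a minimal sketch of that idea, not the authors' implementation; it assumes OpenCV (cv2) is installed, uses a hypothetical file name "tagged_object.jpg", and leaves audio playback out because it is device specific.

```python
# Minimal sketch of the QR-code-to-audio idea in [6] (not the authors' code).
# Assumes the decoded QR content is a URL pointing to an audio description.
import urllib.request

import cv2


def describe_object(image_path: str, out_audio: str = "description.mp3") -> str:
    """Decode a QR code in the image and download the audio file it points to."""
    image = cv2.imread(image_path)
    if image is None:
        raise IOError(f"could not read {image_path}")

    detector = cv2.QRCodeDetector()
    url, points, _ = detector.detectAndDecode(image)
    if not url:
        raise ValueError("no QR code found in the image")

    # Fetch the verbal description; playing it back (through an audio player
    # or text-to-speech engine) is device specific and omitted here.
    urllib.request.urlretrieve(url, out_audio)
    return out_audio


if __name__ == "__main__":
    print(describe_object("tagged_object.jpg"))  # hypothetical test image
```

In the published system the phone's browser performs the fetch and playback; the sketch only shows the decode-then-fetch logic.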

2.03 Jeffrey P. Bigham et al. [7] in 2010 introduced VizWiz::LocateIt
This method enables blind people to take a picture and ask for assistance in finding a specific object. The request is first forwarded to remote workers who outline the object, enabling efficient and accurate automatic computer vision to guide users interactively from their existing cell phones. A two-stage algorithm is presented that uses this information to conduct users to the appropriate object interactively from their phone. The authors produced an iOS app that is live on iTunes, so iPhone users can already use it, and an Android version is under development.

2.04 Jing Su et al. [8] in September 2010 introduced Timbremap
Timbremap is a sonification interface enabling visually impaired users to explore complex indoor layouts using off-the-shelf touch-screen mobile devices. This is achieved using audio feedback to guide the user's finger on the device's touch interface in order to convey geometry. The user-study evaluation shows Timbremap is effective in conveying non-trivial geometry and enabling visually impaired users to explore indoor layouts.

2.05 Roberto Manduchi et al. [9] in 2010 explored Blind Guidance using mobile computer vision
Here the authors focused on the usability of a wayfinding and localization system for persons with visual impairment. The system uses special color markers, placed at key locations in the environment, which can be detected by a regular camera phone. Three blind participants tested the system in various indoor locations and under different system settings. Quantitative performance results are reported for the different settings; the reduced field-of-view setting proved the most challenging one. The role of frame rate and of marker size did not become fully clear from the experiments.

2.06 Ender Tekin and James M. Coughlan [10] in 2010 presented Find and Read Product Barcodes
They described a mobile phone application that guides a visually impaired user to the barcode on a package in real time using the phone's built-in video camera. Once the barcode is located by the system, the user is prompted with audio signals to bring the camera closer to the barcode until it can be resolved by the camera; the barcode is then decoded and the corresponding product information read aloud using text-to-speech. Experiments with a blind volunteer demonstrate proof of concept of the system, which allowed the volunteer to locate barcodes that were then translated to product information announced to the user.
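As a rough illustration of the find-and-read idea in [10], the sketch below decodes a barcode from a still frame and speaks the result. It is a simplification, not the authors' real-time guidance system: it assumes the third-party pyzbar and pyttsx3 packages, uses a hypothetical file name "package.jpg", and skips the audio prompts that steer the camera toward the barcode as well as the product-database lookup.

```python
# Simplified sketch of barcode reading with spoken output (after [10]).
# Assumes the pyzbar and pyttsx3 packages; the real system additionally
# guides the user toward the barcode with audio cues before decoding.
import cv2
import pyttsx3
from pyzbar import pyzbar


def read_barcode_aloud(image_path: str) -> None:
    frame = cv2.imread(image_path)
    if frame is None:
        raise IOError(f"could not read {image_path}")

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    barcodes = pyzbar.decode(gray)

    engine = pyttsx3.init()
    if not barcodes:
        engine.say("No barcode found, please move the camera closer.")
    else:
        for code in barcodes:
            # A complete system would look the digits up in a product
            # database; here we simply speak the raw decoded value.
            engine.say(f"Barcode {code.data.decode('utf-8')}")
    engine.runAndWait()


if __name__ == "__main__":
    read_barcode_aloud("package.jpg")  # hypothetical test image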

2.07 Y. Tian et al. [11] in 2010 explored Door Detection for Indoor Wayfinding
Here the authors presented a robust image-based door detection algorithm based on doors' general and stable features (edges and corners) instead of appearance features (color, texture, etc.). A generic geometric door model is built to detect doors by combining edges and corners. Furthermore, additional geometric information is employed to distinguish doors from other objects of similar size and shape (e.g. bookshelves, cabinets, etc.). The robustness and generalizability of the proposed detection algorithm are evaluated against a challenging database of doors collected from a variety of environments over a wide range of colors, textures, occlusions, illuminations, scales, and views.
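The sketch below only extracts the two kinds of low-level features that the detector in [11] builds its geometric door model from, namely long edges and corner points; the model-matching step, which is the actual contribution of [11], is reduced to a comment. The OpenCV calls, thresholds and the file name "corridor.jpg" are illustrative assumptions, not the published algorithm.

```python
# Toy sketch of the low-level features used by the door detector in [11]:
# edges and corners, which the authors combine in a geometric door model.
import cv2
import numpy as np


def door_candidate_features(image_path: str):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise IOError(f"could not read {image_path}")

    # Edge map: long, roughly vertical line segments are typical of door frames.
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=gray.shape[0] // 3, maxLineGap=10)

    # Corner candidates: a door frame should contribute four corners that,
    # together with the long edges, fit a generic geometric door model
    # (that matching step is what [11] contributes and is omitted here).
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=100,
                                      qualityLevel=0.01, minDistance=10)

    n_lines = 0 if lines is None else len(lines)
    n_corners = 0 if corners is None else len(corners)
    return n_lines, n_corners


if __name__ == "__main__":
    print(door_candidate_features("corridor.jpg"))  # hypothetical test image
```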

2.08 F. Bellotti et al. [12] in May 2011 presented LodeStar
A location-aware multimedia mobile guide, called LodeStar, was designed to enhance visually impaired people's fruition and enjoyment of cultural and natural heritage. It aims to support visually impaired persons in the construction of location awareness by providing them with location-related added-value information about object-to-self and object-to-object spatial relations. The paper presents the underlying psychological and theoretical basis, the description of the CHI design of the mobile guide, and user test results from trials conducted with real users in two contexts of authentic use: the Galata Sea Museum and the Villa Serra naturalistic park in Genoa. Results mainly revealed that the guide can contribute to people's ability to construct overall awareness of an unfamiliar area, including geographical and cultural aspects. The users also appreciated the guide's descriptions of points of interest in terms of content and local/global spatial indications. Interviews highlighted directions for improving the visit experience of visually impaired people.

2.09 João José et al. [13] in May 2011 invented a Local Navigation Aid
The proposed SmartVision prototype is a small, cheap and easily wearable navigation aid for blind and visually impaired persons. Its functionality addresses global navigation for guiding the user to some destination, and local navigation for negotiating paths, sidewalks and corridors, with avoidance of static as well as moving obstacles. Local navigation applies to both indoor and outdoor situations. In this article the authors focused on local navigation: the detection of path borders and obstacles in front of the user and just beyond the reach of the white cane, such that the user can be assisted in centering on the path and alerted to looming hazards. Using a stereo camera worn at chest height, a portable computer in a shoulder-strapped pouch or pocket, and only one earphone or a small speaker, the system is inconspicuous, is no hindrance while walking with the cane, and does not block normal surrounding sounds. The vision algorithms are optimized such that the system can work at a few frames per second.

2.10 Brilhault, A. et al. [14] in 2011 introduced Pedestrian Positioning
They designed an assistive device for the blind based on an adapted GIS and the fusion of GPS- and vision-based positioning. The proposed assistive device may improve user positioning, even in urban environments where GPS signals are degraded. The estimated position would then be compatible with assisted navigation for the blind. Interestingly, the vision module may also answer blind users' needs by providing them with situational awareness (localizing objects of interest) along the path. Note that the solution proposed for positioning could also enhance the localization of autonomous robots or vehicles.

2.11 Bharat Bhargava et al. [15] in 2011 explored Pedestrian Crossing
A mobile-cloud collaborative approach is provided here for context-aware outdoor navigation, where the computational power of resources made available by cloud computing providers is used for real-time image processing. The system architecture also has the advantages of being extensible and having minimal infrastructural reliance, thus allowing for wide usability. The authors developed an outdoor navigation application with integrated support for pedestrian crossing guidance and report experimental results which suggest that the proposed approach is promising for real-time crossing guidance for the blind.

2.12 Boris Schauerte et al. [16] in 2012 proposed Find Lost Things
Here the authors proposed a computer vision system that helps blind people find lost objects. To this end, they combine color- and SIFT-based object detection with sonification to guide the hand of the user towards potential target object locations. In this way, the system is able to guide the user's attention and effectively reduce the space in the environment that needs to be explored. They verified the suitability of the proposed system in a user study and experimentally demonstrated that the system makes it easier for visually impaired users to find misplaced items, especially if the target object is located at an unexpected location.
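The sketch below illustrates only the SIFT half of the approach in [16]: keypoints of a reference photo of the sought object are matched against the current camera frame and the matched positions give a crude object location, which in the real system would then be sonified to guide the user's hand. It assumes OpenCV 4.4 or later (SIFT in the main module); the ratio-test threshold of 0.75, the minimum of 10 matches, and the file names are illustrative choices, and the color model and sonification interface of [16] are not reproduced.

```python
# Sketch of SIFT-based object localization in the spirit of [16]; the real
# system also uses a color model and sonifies the estimated object location.
import cv2
import numpy as np


def locate_object(reference_path: str, scene_path: str):
    """Return an approximate (x, y) of the reference object in the scene."""
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)
    if ref is None or scene is None:
        raise IOError("could not read input images")

    sift = cv2.SIFT_create()
    _, ref_desc = sift.detectAndCompute(ref, None)
    scene_kp, scene_desc = sift.detectAndCompute(scene, None)
    if ref_desc is None or scene_desc is None:
        return None

    # Ratio-test matching, then take the median position of the matched
    # scene keypoints as a crude estimate of the object location.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(ref_desc, scene_desc, k=2)
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    if len(good) < 10:
        return None  # too few matches: object probably not in view

    points = np.float32([scene_kp[m.trainIdx].pt for m in good])
    x, y = np.median(points, axis=0)
    # A real aid would now sonify (x, y), e.g. by panning or pitch changes.
    return float(x), float(y)


if __name__ == "__main__":
    print(locate_object("object_reference.jpg", "scene.jpg"))  # hypothetical images
```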

2.13 Alexander Dreyer Johnsen et al. [17] in 2012 proposed Touch-based mobile phones for the visually impaired
This work explores the use of touch-based phones by the visually impaired. Two possible supporting technologies are screen readers and haptics (tactile feedback); the paper suggests a solution based on a combination of voice and haptics. Design research, chosen as the methodology for the project, highlights the importance of developing a solution over the course of several iterations and of performing dedicated product evaluation with external participants. The research contribution is an Android-based prototype that demonstrates the new user interface, allowing the visually impaired to seamlessly interact with a smartphone. Operation relies on voice and haptic feedback, where the user receives information when tapping or dragging a finger across the screen. The proposed solution is unique in several ways: it keeps gestures to a minimum, it does not rely on physical keys, and it takes the technologies of screen readers and haptics one step further.

2.14 K. Matusiak et al. [18] in June 2013 invented Object Recognition in a Mobile Phone
In this work, the authors described the main features of software modules developed for Android smartphones that are dedicated to blind users. The main module can recognize and match scanned objects to a database of objects, e.g. food or medicine containers. The two other modules are capable of detecting major colors and of locating the direction of the maximum-brightness regions in the captured scenes. The paper concludes with a short summary of tests of the software aiding activities of daily living of a blind user.
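To give a flavor of the two simpler modules described in [18], the sketch below estimates a dominant color name for a captured frame and the direction (left, center, right) of its brightest region. It is a rough OpenCV illustration, not the authors' Android implementation: the coarse HSV hue bins, the saturation threshold, the blur size and the file name "scene.jpg" are assumptions, and the object-recognition module that matches containers against a database is not reproduced.

```python
# Rough illustration of two helper modules described in [18]: dominant color
# naming and locating the direction of the brightest region in the scene.
import cv2
import numpy as np

# Very coarse hue bins (OpenCV hue range is 0..179); an assumption for the demo.
COLOR_BINS = [("red", 0, 15), ("yellow", 16, 35), ("green", 36, 85),
              ("blue", 86, 125), ("purple", 126, 160), ("red", 161, 179)]


def dominant_color(frame: np.ndarray) -> str:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0][hsv[:, :, 1] > 60]  # ignore unsaturated (gray) pixels
    if hue.size == 0:
        return "gray"
    hist = np.bincount(hue.ravel(), minlength=180)
    counts = {}
    for name, lo, hi in COLOR_BINS:  # accumulate, since "red" spans two bins
        counts[name] = counts.get(name, 0) + int(hist[lo:hi + 1].sum())
    return max(counts, key=counts.get)


def brightest_direction(frame: np.ndarray) -> str:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (41, 41), 0)   # suppress small highlights
    _, _, _, (x, _) = cv2.minMaxLoc(blurred)        # column of the brightest spot
    third = frame.shape[1] / 3
    return "left" if x < third else "right" if x > 2 * third else "center"


if __name__ == "__main__":
    image = cv2.imread("scene.jpg")  # hypothetical test image
    print(dominant_color(image), brightest_direction(image))
```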

3. COMPARISON AMONG THE TECHNIQUES
An at-a-glance comparison among the computer vision techniques to support blind or vision impaired people is shown in Table 1.

Table 1. Comparison among the techniques of computer vision to support blind or vision impaired people (researcher(s), year, reference, and comments about the procedure)

1. Volodymyr Ivanchenko, 2008, [5]: Clear Path Guidance technique for a vision impaired person moving in a wheelchair. The user sweeps a standard (unmodified) white cane back and forth; the system continuously tracks the cane location and sounds an alert if a terrain feature is detected in the direction the cane is pointing.

2. Hend S. Al-Khalifa, 2008, [6]: Utilizes QR codes (two-dimensional barcodes) affixed to an object and scanned using a camera phone equipped with QR reader software. The reader decodes the barcode to a URL and directs the phone's browser to fetch an audio file from the web that contains a verbal description of the object. Depends on an internet connection.

3. Jeffrey P. Bigham, 2010, [7]: Mobile application on the iOS platform, so it is a costly solution. Relies on image processing, an internet connection and mobile messaging technology, and also depends on answers from available remote workers.

4. Jing Su, 2010, [8]: Layout/map app conveying non-trivial geometry; works on indoor layouts. Every floor should have a digital map that is downloadable or accessible by the app.

5. Roberto Manduchi, 2010, [9]: Wayfinding in indoor layouts. Depends on special color markers, placed at key locations in the environment, which can be detected by a regular camera phone. Based on the camera and image processing.

6. Ender Tekin, 2010, [10]: Uses a mobile application, a barcode reader and a text-to-speech algorithm. Audio prompts guide the user until the barcode can be read correctly, after which the product information is read aloud.

7. Y. Tian, 2010, [11]: Image-based door detection algorithm based on doors' general and stable features (edges and corners) instead of appearance features (color, texture, etc.). The proposed detection algorithm is evaluated against a challenging database of doors collected from a variety of environments over a wide range of colors, textures, occlusions, illuminations, scales, and views.

8. F. Bellotti, 2011, [12]: Works outdoors; human-computer interaction. Provides value-added, location-related information about object-to-self and object-to-object spatial relations.

9. João José, 2011, [13]: Small, cheap and easily wearable navigation aid for blind and visually impaired persons. Its functionality addresses global navigation for guiding the user to some destination, and local navigation for negotiating paths, sidewalks and corridors, with avoidance of static as well as moving obstacles. Local navigation applies to both indoor and outdoor situations.

10. Brilhault, A., 2011, [14]: GPS-based application that works even in urban environments where GPS signals are degraded. The estimated position is compatible with assisted navigation for the blind. The vision module may also provide situational awareness (localizing objects of interest) along the path.

11. Bharat Bhargava, 2011, [15]: Mobile-cloud collaborative approach for context-aware outdoor navigation. It uses the computational power of resources made available by cloud computing providers for real-time image processing.

12. Boris Schauerte, 2012, [16]: Combines color- and SIFT-based object detection with sonification to guide the hand towards potential target object locations. A very capable app for finding lost objects, especially those at unexpected locations.

13. Alexander Dreyer Johnsen, 2012, [17]: Combines voice with haptics (tactile feedback) and screen-reader technology. Android-based prototype that demonstrates a new user interface, allowing the visually impaired to seamlessly interact with a smartphone. Operation relies on voice and haptic feedback, where the user receives information when tapping or dragging a finger across the screen.

14. K. Matusiak, 2013, [18]: Color and object recognition technique; a mobile application on the Android platform. Cheap and dependent only on a mobile database. Artificial intelligence should be included for future improvement.

4. CONCLUSION
All of the above publications and projects are very interesting and useful, and some of them achieved very impressive results. However, a complete package is still expected, in which blind or vision impaired people get all of these solutions together (i.e. maps, indoor-outdoor navigation, object recognition, obstacle recognition, person recognition, crowd behavior analysis, counting people in crowds, study/reading, entertainment, etc.) in one piece of software, at a low price, on handheld devices such as Android phones. Such an app should be very intelligent and as independent as possible of the internet (except for outdoor maps) or any other device. We believe that an Android-based system for blind or vision impaired people with these features built in will be available in the near future, and that this paper will help developers understand the background of this area in broad spectrum.

REFERENCES
[1] The British Machine Vision Association and Society for Pattern Recognition (BMVA), http://www.bmva.org/visionoverview, 2013.
[2] Linda G. Shapiro and George C. Stockman, Computer Vision, Prentice Hall, ISBN 0-13-030796-3, 2001.
[3] Tim Morris, Computer Vision and Image Processing, Palgrave Macmillan, ISBN 0-333-99451-5, 2004.
[4] World Health Organization: Visual impairment and blindness, http://www.who.int/mediacentre/factsheets/fs282/en/, 2013.
[5] Volodymyr Ivanchenko, James Coughlan, William Gerrey and Huiying Shen, Computer Vision-Based Clear Path Guidance for Blind Wheelchair Users, Proceedings of the 10th International ACM SIGACCESS Conference on Computers and Accessibility, pages 291-292, ISBN: 978-1-59593-976-0, 2008.

[6] Hend S. Al-Khalifa, Utilizing QR Code and Mobile Phones for Blinds and Visually Impaired People, ICCHP '08: Proceedings of the 11th International Conference on Computers Helping People with Special Needs, pages 1065-1069, ISBN: 978-3-540-70539-0, 2008.
[7] Jeffrey P. Bigham, Chandrika Jayant, Andrew Miller, Brandyn White, and Tom Yeh, VizWiz::LocateIt - Enabling Blind People to Locate Objects in Their Environment, in Proceedings of the Workshop on Computer Vision Applications for the Visually Impaired (CVAVI 2010), San Francisco, California, 2010.
[8] Jing Su, Alyssa Rosenzweig, Ashvin Goel, Eyal de Lara and Khai N. Truong, Timbremap: Enabling the Visually-Impaired to Use Maps on Touch-Enabled Devices, MobileHCI '10: Proceedings of the 12th International Conference on Human-Computer Interaction with Mobile Devices and Services, pages 17-26, ISBN: 978-1-60558-835-3, 2010.
[9] Roberto Manduchi, Sri Kurniawan and Homayoun Bagherinia, Blind Guidance Using Mobile Computer Vision: A Usability Study, Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility, pages 241-242, ISBN: 978-1-60558-881-0, 2010.
[10] Ender Tekin and James M. Coughlan, A Mobile Phone Application Enabling Visually Impaired Users to Find and Read Product Barcodes, ICCHP '10: Proceedings of the 12th International Conference on Computers Helping People with Special Needs, pages 290-295, ISBN: 3-642-14099-8, 978-3-642-14099-0, 2010.
[11] Y. Tian, C. Yi, and A. Arditi, Improving Computer Vision-Based Indoor Wayfinding for Blind Persons with Context Information, in Proc. ICCHP (2), pages 255-262, 2010.
[12] F. Bellotti, R. Berta, A. De Gloria and M. Margarone, LodeStar: A Mobile Device to Enhance Visually Impaired People's Experience of Cultural and Naturalistic Places, in Proceedings of the Re-Thinking Technology in Museums Conference, University of Limerick, Ireland, 26-27, 2011.
[13] João José, Miguel Farrajota, João M. F. Rodrigues and J. M. Hans du Buf, The SmartVision Local Navigation Aid for Blind and Visually Impaired Persons, International Journal of Digital Content Technology and its Applications, Vol. 5, No. 5, 2011.
[14] Brilhault, A., Kammoun, S., Gutierrez, O., Truillet, P., Jouffrais, C., Fusion of Artificial Vision and GPS to Improve Blind Pedestrian Positioning, New Technologies, Mobility and Security (NTMS), 4th IFIP International Conference on, pages 1-5, 2011.
[15] Bharat Bhargava, Pelin Angin and Lian Duan, A Mobile-Cloud Pedestrian Crossing Guide for the Blind, Department of Computer Science, Purdue University, 2011.

[16] Boris Schauerte, Manel Martinez, Angela Constantinescu and Rainer Stiefelhagen, An Assistive Vision System for the Blind That Helps Find Lost Things, ICCHP '12: Proceedings of the 13th International Conference on Computers Helping People with Special Needs - Volume Part II, pages 566-572, ISBN: 978-3-642-31533-6, 2012.
[17] Alexander Dreyer Johnsen, Tor-Morten Grønli and Bendik Bygstad, Making Touch-Based Mobile Phones Accessible for the Visually Impaired, Norwegian School of IT, Norsk informatikkonferanse, ISBN: 9788232100132, 2012.
[18] K. Matusiak, P. Skulimowski and P. Strumiłło, Object Recognition in a Mobile Phone Application for Visually Impaired Users, Human System Interaction (HSI), The 6th International Conference, Sopot, Poland, pages 479-484, ISSN: 2158-2246, Print ISBN: 978-1-4673-5635-0, 2013.
