By
Supervisor
Mr. Yawar Abbas Abid
COMSATS University, Islamabad
Sahiwal Campus - Pakistan
A project presented to
In partial fulfilment
By
DECLARATION
We hereby declare that this software, neither in whole nor in part, has been copied from any source. We further declare that we developed this software and the accompanying report entirely through our own efforts. If any part of this project is proven to have been copied from any source, or is found to be a reproduction of some other work, we will stand by the consequences. No portion of the work presented here has been submitted in support of any application for any other degree or qualification at this or any other university or institute of learning.
CERTIFICATE OF APPROVAL
This is to certify that the MCS final year project "Third Eye for Blinds" was developed by KAMRAN OWAISI (CIIT/SP17-MCS-003), MUHAMMAD ABRAR (CIIT/SP17-MCS-002) and MOHSIN IFTIKHAR (CIIT/SP17-MCS-014) under the supervision of Mr. YAWAR ABBAS ABID, and that in his opinion it is fully adequate, in scope and quality, for the degree of Master of Computer Science.
---------------------------------------
---------------------------------------
---------------------------------------
Executive Summary
In Pakistan, approximately 4.5 percent of the population suffers from blindness, and the percentage rises among senior citizens. Internationally, the figure is considerably higher, and many products and gadgets are being developed to assist visually disabled persons. However, such products are either expensive and out of budget, or too heavy to carry. The motive behind the proposed project is to overcome these hurdles for poor blind people in our country. Our project is targeted to benefit the community and will facilitate visually disabled persons so that they can easily understand their environment and live their lives independently. It will also benefit visually impaired (partially blind) people.
We use a Raspberry Pi for the required processing, along with a camera and headphones. The camera captures the scene, the image is processed, and the system narrates the result through the headphones so that the blind person can easily perceive and interpret the surroundings without anyone's help. This project will aid under-privileged visually disabled persons in Pakistan by providing an affordable solution, so that their blindness does not become a cause of fear and their poverty does not stop them from exploring and enjoying the world around them.
Acknowledgement
All praise is to Almighty Allah who bestowed upon us a minute portion of His boundless
knowledge by virtue of which we were able to accomplish this challenging task.
We are greatly indebted to our project supervisor, Mr. Yawar Abbas Abid. Without his personal supervision, advice, and valuable guidance, the completion of this project would have been doubtful. We are deeply indebted to him for his encouragement and continual help during this work.
We are also thankful to our parents and families, who have been a constant source of encouragement for us and taught us the values of honesty and hard work.
Abbreviations
Third Eye for Blinds
TABLE OF CONTENTS
1 Introduction
   1.1 Introduction
2 Literature Review
   2.1 Existing Systems
      2.1.1 Drawbacks of Existing Systems
   2.2 Related Work
3 Methodology & Workplan
   3.1 Adopted Methodology
      3.1.1 Spiral Model
   3.2 Working
      3.2.1 Raspberry Pi
      3.2.2 Camera
      3.2.3 SD Card
      3.2.4 HDMI Cable
      3.2.5 Headphones
      3.2.6 PIR Motion Sensor
4 System Analysis and Design
   4.1 Requirements Gathering Techniques
   4.2 Requirements Analysis
      4.2.1 Functional Requirements
      4.2.2 Non-Functional Requirements
   4.3 System Design
   4.4 Use Case Design
      4.4.1 Use Case Description
   4.5 Sequence Diagram
   4.6 Activity Diagram
5 System Implementation
   5.1 Introduction
   5.2 Screenshots
6 System Testing
   6.1 Introduction
7 Conclusion
8 Future Work
9 References
LIST OF FIGURES
LIST OF TABLES
CHAPTER # 1
INTRODUCTION
1 Introduction
In this chapter, we introduce the application, the software tools, the problem statement, the objectives and scope of the application, the proposed solution and the motivation behind it, its relevance to our coursework, and the tools and techniques used to implement it.
1.1 Introduction
The world is a global village now, and technology advances day by day. Computers are involved in every field of life. Most important is artificial intelligence (AI), a branch of computer science that has become part of many fields. Artificial intelligence means a machine that thinks and behaves like a human, reasoning rationally. AI-based devices such as mobile phones, home appliances, and biometric devices are now common.
AI is applied in many fields, including computer science, medicine, business, and nuclear engineering. Within computer science, AI powers robotics, online shopping websites, and chatbots. In medicine, companies apply AI to make better and faster diagnoses than humans; the primary aim of AI in healthcare is to analyse the relationships between prevention or treatment techniques and patient outcomes.
In Pakistan, approximately 4.3 percent of the population suffers from blindness, and the percentage rises among senior citizens. At the international level, many products and gadgets are being developed to assist visually disabled persons, but such products are either expensive and out of budget, or too heavy to carry. The motive behind the proposed project is to overcome these hurdles for poor blind people in our country. Our project is targeted to benefit the community and will facilitate visually disabled persons so that they can easily understand their environment and live their lives independently. It will also benefit visually impaired (partially blind) people. We have developed a cost-effective and handy device that enables blind people to interact with the environment much like sighted people. The device comprises a Raspberry Pi, a camera, a headset, and a button. When the user presses the button, an image is captured by the camera and sent to the Microsoft Cognitive Services (API) for processing. The API processes the image and identifies its contents (objects, people, their emotions, gender, and estimated age).
The API returns labels, a description, and, in the case of text reading, the words or text found in the image. The API's result is then processed and converted into a complete description comprising the image contents and the persons' gender with their emotions (if found). Finally, that description is transformed into a voice recording and played through the headphones for the blind person. Thus, our product describes the scene to the blind person, reads out text documents and sign boards, and recognizes the gender of the people found in an image using face recognition.
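The label-retrieval step described above can be sketched in Python. The JSON below is a simplified stand-in modelled on a Computer Vision "analyze" response; the exact field names used here are an assumption for illustration and should be checked against the current API reference.

```python
import json

def summarize_vision_response(raw_json):
    """Pull the caption, tags, and face attributes out of a Vision-style
    JSON response. The field names mirror a simplified Computer Vision
    'analyze' result and are illustrative, not the exact API schema."""
    data = json.loads(raw_json)
    caption = data["description"]["captions"][0]["text"]
    tags = [t["name"] for t in data["tags"]]
    faces = [(f["gender"], f["age"]) for f in data.get("faces", [])]
    return caption, tags, faces

# A canned response standing in for the real API output:
sample = json.dumps({
    "description": {"captions": [{"text": "a man standing in a park"}]},
    "tags": [{"name": "outdoor"}, {"name": "person"}],
    "faces": [{"gender": "Male", "age": 30}],
})
caption, tags, faces = summarize_vision_response(sample)
```

In the deployed system, the parsed caption, tags, and face attributes would then feed the sentence-making step before text-to-speech.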
Many people are working on different projects to help blind people understand their surroundings and environment, but our project additionally nourishes the blind user's ability to recognize the gender of the person in front of him or her and keeps the user up to date with surrounding events. The target of our project is a specific group of blind persons: the user should be able to understand some basic English. The design of our product is quite simple: the user simply presses a button, after which the system captures an image and asks the user which of the given functionalities to perform:
• Scene Description
• Text Reading
We also use a motion sensor to help the blind person. If an obstacle comes in front of the blind person, the motion sensor sounds a buzzer, alerting the person to take further action. The motion sensor also helps if the Wi-Fi connection is accidentally lost: the blind person is still alerted to any hurdles or obstacles simply by hearing the buzzer.
Third Eye for Blinds saves a significant amount of money and gives better performance. The following features are the motivation for our project:
• Saves a significant amount of money.
• Facilitates blind people.
• The whole system is under the control of the user.
• Very cost-effective to develop and deploy.
• Headphones
• Memory card
• PIR motion sensor
• Power bank
• Breadboard
• USB sound card
• Resistors
• Wires
The rest of the thesis is organised as follows: Chapter 2 reviews the literature, Chapter 3 presents the methodology and workplan, Chapter 4 covers system analysis and design, Chapter 5 the system implementation, Chapter 6 system testing, Chapter 7 the conclusion, and Chapter 8 future work.
CHAPTER # 2
LITERATURE REVIEW
2 Literature Review
In this chapter, we discuss the related work that has been done in the past.
Applications that existed before the implementation of the proposed solution are called existing applications. Microsoft introduced the Seeing AI project (1) (2) for the blind, in which a blind user wearing smart glasses (Pivothead) can touch a panel on the eyewear to capture a photo; the eyewear then translates the image into speech after analysing it with Microsoft Cognitive Services. The eyewear describes the emotions of the people in the scene, and the user can also take a snap of text so that the eyewear reads it aloud. The eyewear also reports the gender and expected age of the persons in view. Microsoft used smart glasses in this project, which makes the product highly expensive and not feasible for common people. Figure 2.1 shows the Microsoft smart glasses (Pivothead).
Another milestone is the smart stick for the blind. The smart stick has an ultrasonic sensor to sense the distance to any obstacle, an LDR to sense lighting conditions, and an RF remote with which the blind man can locate his stick remotely. All feedback is given to the blind man through a buzzer (3).
The Pivothead costs around $650, which is too costly for ordinary people, and it also needs a subscription of $10 per month.
The smart stick lacks a camera and voice output, which means it cannot recognize gender; it only detects obstacles through its sensors and informs the user through the buzzer.
Assisted Vision Smart Glasses is a related project developed at Oxford University (4). The glasses are constructed from transparent OLED displays, a gyroscope, a GPS unit, two small cameras, and a headphone, as shown in Figure 2.3. They enable the blind to tell the difference between a person and an obstacle, and they can translate sign boards and other text in an image. The GPS module gives directions, and the gyroscope measures changes as the wearer moves. The glasses can identify some patterns, but researchers are still working on pattern understanding and classification.
These smart glasses, however, are too heavy to carry all the time. Their structure is complicated, adds a burden on the face, and can exhaust anyone wearing them.
AI Glasses is a smart-glasses project under development at CINVESTAV (Centre for Research and Advanced Studies of the National Polytechnic Institute) in Mexico (5). It is a lightweight, normal-looking pair of glasses with batteries that last approximately four hours of continuous use. The glasses combine stereo sound sensors and GPS technology attached to a tablet. The device speaks directions, recognizes sign boards and other objects through machine learning, and uses ultrasound to detect obstacles. However, the product is too expensive: its estimated cost is between $1000 and $1500, which is not easily affordable for ordinary people.
In 2016, Facebook launched Automatic Alternative Text (6), which generates a description of a photo for visually impaired people through object recognition. If a blind user uses a screen reader on iOS or Android, Automatic Alternative Text describes the contents of a photo uploaded or posted on Facebook in a voice, e.g. the blind person will hear "this image may contain a group of people, standing, outside".
The vOICe vision technology (7) is for totally blind people. The vOICe system scans a scene with a live camera from left to right, converts it into sound, and transmits the audio to the headphones. Visual information from objects on the left and right is fed into the left and right ears respectively. The system also scans words and text from documents and converts them into sound for blind people. The vOICe technology serves people who have been blind since birth as well as those who became blind through an accident or a disease. Its purpose is to help blind people create images in their minds and enable them to perform their routine activities without others' help. The vOICe glasses are shown in Figure 2.5.
CHAPTER # 3
METHODOLOGY
&
WORKPLAN
Whenever a project, small or large, is started, the first thing the programmers require is a methodology. A methodology is a way of developing a project in which the programmers gather the user's requirements, design the project, implement it, and then test and maintain it to the satisfaction of the user and in accordance with the project requirements. The adopted methodology is the one we chose for developing our project.
3.2 Working
3.2.1 Raspberry Pi
The Raspberry Pi is the device we used for processing in our project. It is a low-cost, credit-card-sized computer that plugs into a computer monitor or TV and uses a standard keyboard and mouse. It is a capable little device that enables people of all ages to explore computing and learn to program in languages like Scratch and Python. It can do everything you would expect a desktop computer to do, from browsing the internet and playing high-definition video to making spreadsheets, word processing, and playing games. Moreover, the Raspberry Pi can interact with the outside world and has been used in a wide array of digital maker projects, from music machines and parent detectors to weather stations and tweeting birdhouses with infra-red cameras.
3.2.2 Camera
The camera is used to capture images. The captured image is sent to the various APIs for further processing. The camera is connected to the Raspberry Pi; when the user presses the button on the Raspberry Pi, the camera captures an image.
3.2.3 SD Card
We use an SD card for the installation of the operating system; it also provides storage for the user's data and for installing the desired software on the Raspberry Pi.
3.2.5 Headphones
The headphones are connected to the Raspberry Pi and deliver the sound to the blind person after the image has passed through the processing stages. After processing, the final voice output is played through the headphones into the ears of the blind person.
3.2.6 PIR Motion Sensor
The PIR motion sensor detects the movement of a person within a range of 5-10 m. If anything comes into its path, the sensor detects it and sounds a buzzer.
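The sensor-to-buzzer loop can be sketched as below. `read_pir` and `buzz` are hypothetical callables standing in for the real GPIO input read and buzzer output (e.g. via RPi.GPIO on the Pi), so the alert logic can be shown without the hardware attached.

```python
def watch_pir(read_pir, buzz, samples):
    """Poll the PIR sensor `samples` times and sound the buzzer whenever
    motion is detected. `read_pir` and `buzz` are stand-ins for the real
    GPIO input read and buzzer output. Returns the number of alerts."""
    alerts = 0
    for _ in range(samples):
        if read_pir():      # True while the PIR output pin is high
            buzz()          # alert the blind user to the obstacle
            alerts += 1
    return alerts

# Simulated run: motion detected on the 2nd and 3rd polls.
readings = iter([False, True, True, False])
beeps = []
alerts = watch_pir(lambda: next(readings), lambda: beeps.append("beep"), 4)
```

Because the loop never touches the network, the buzzer keeps working even when the Wi-Fi connection is lost, matching the fallback behaviour described above.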
CHAPTER # 4
SYSTEM ANALYSIS
&
DESIGN
The procedure for gathering requirements is defined according to the complexity of the application. This chapter also covers the models and techniques used to define the project schedule and processing.
Requirements analysis is the process of planning, forecasting, and studying the overall needs of the application. It is divided into two parts:
1. Functional Requirements
2. Non-Functional Requirements
FR01 The system will capture an image when the push button is pressed.
FR02 The system will be capable of performing speech recognition and shall perform the next task according to the user's choice.
FR03 The system will upload the image to the Microsoft Cognitive Services (API).
FR04 The system will efficiently convert the labels of an image into a complete sentence.
FR05 The system will translate the captured image into a voice recording.
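FR01's push-button input is typically debounced in software so that mechanical contact bounce does not trigger multiple captures. A minimal sketch of that filtering logic, assuming polled timestamps and a hypothetical 200 ms gap between accepted presses:

```python
def debounce(press_times_ms, min_gap_ms=200):
    """Filter raw button-press timestamps (in milliseconds), accepting a
    press only if it occurs at least `min_gap_ms` after the previously
    accepted press. This suppresses the spurious edges produced by
    mechanical contact bounce."""
    accepted = []
    for t in press_times_ms:
        if not accepted or t - accepted[-1] >= min_gap_ms:
            accepted.append(t)
    return accepted

# Three bouncy edges within 50 ms, then a clean press half a second later:
events = debounce([1000, 1020, 1050, 1500])
```

Each accepted press would then trigger one camera capture, satisfying FR01 without duplicate uploads.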
Table 4.2 presents the non-functional requirement Reliability, which describes that the reliability of the system depends upon the handling of exceptions and errors.
Table 4.2: Non-functional Requirement (Reliability)
Requirement No Reliability
NFR01 The system will be reliable: when an exception occurs, the system will automatically restart the Raspberry Pi.
Table 4.3 presents the non-functional requirement Accuracy, which concerns the accuracy of translating image contents into a voice recording, recognizing persons, and recognizing speech.
Table 4.3: Non-functional Requirement (Accuracy)
Requirement No Accuracy
NFR02 The system will accurately translate the image contents into a voice recording.
Table 4.4 presents the non-functional requirement Performance, which concerns the performance of the system's interaction with the API and the user.
Table 4.4: Non-functional Requirement (Performance)
Requirement No Performance
NFR05 The system will interact with the API and process an image within a few seconds.
NFR06 The system will respond quickly to the user's voice input.
Table 4.5 presents the non-functional requirement Usability, which describes ease of use.
Table 4.5: Non-functional Requirement (Usability)
Requirement No Usability
NFR07 The design of the system shall be easy for blind people to use.
NFR08 The system shall be interactive and perform tasks according to the user's choice.
Figure 4.1 shows the system design, in which the system is divided into subtasks and the flow of each task runs from top to bottom. The major tasks of our system are Text Reading and Scene Description, which are divided into sub-tasks or modules; these are further divided into other tasks that complete the system.
An important part of the analysis phase is drawing the use case diagrams. They are used throughout the analysis phase of a project to identify and divide the functionality of the application. The application is separated into actors and use cases. Actors represent the roles played by the application's users. Use cases define the application's behaviour when one of the actors sends a particular stimulus. This behaviour can be described in text: the description covers the nature of the stimulus that activates the use case, the inputs from and outputs to the other actors, and the behaviour that converts the inputs into the outputs. Usually the use case also describes everything that can go wrong during the detailed behaviour and the corrective action the application will take.
Figure 4.2 portrays the use case diagram, which represents the scene description module receiving an image from the take-picture module, uploading it to the Microsoft API, and obtaining labels. These labels are then converted into a complete sentence by the sentence-making process. Similarly, the labels of text are retrieved through the Microsoft Text Analytics API.
MODULES WORKING
Controller The controller controls all the activities and tasks performed in the whole process.
Text reading Scene description extends text reading; the text is given to the Text Analytics API and passed on to label retrieval.
Label retrieval Labels are retrieved from the Text Analytics API so that sentence making can be performed.
Text to voice The text is converted into voice through the Text-to-Speech API, and the headset delivers that voice to the blind user.
Take picture A picture is taken for scene description and text reading.
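The sentence-making step listed above can be sketched as a small formatting function. The input shape (labels plus optional gender and emotion) is modelled on the API output described in this report, and the exact phrasing below is illustrative rather than the deployed system's wording.

```python
def make_sentence(labels, gender=None, emotion=None):
    """Combine scene labels and optional face attributes into one spoken
    description. The phrasing here is illustrative; it is not the exact
    wording produced by the deployed system."""
    sentence = "The scene contains " + ", ".join(labels) + "."
    if gender:
        sentence += " A " + gender.lower() + " person is present"
        if emotion:
            sentence += " who appears " + emotion.lower()
        sentence += "."
    return sentence

description = make_sentence(["tree", "bench", "dog"],
                            gender="Female", emotion="Happy")
```

The resulting string is what the text-to-voice module would hand to the Text-to-Speech API.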
In Figure 4.3, the user takes an image and sends it to the controller (if no image is taken, an error message is shown). The controller sends the image to the Microsoft Cognitive API so that it can be uploaded. After the JSON response is received, the labels are retrieved and sentence making is performed by the Sentence Making process. If an error occurs, control returns to the controller.
The activity diagram describes the dynamic aspects of the system. It is basically a flowchart representing the flow from one activity to another; an activity can be described as an operation of the system. The control flow is drawn from one operation to the next and can be sequential, branched, or concurrent. Activity diagrams deal with all types of flow control using elements such as fork and join.
CHAPTER # 5
SYSTEM
IMPLEMENTATION
5 System Implementation
In this chapter, we focus on the implementation of the "Third Eye for Blinds" system, with which a user can perform many activities in the real world.
5.1 Introduction
The most important goal of this phase is to build the system. The work in this phase should be much more straightforward as a result of the work done in the planning and design phases; it involves turning the design specifications into executable programs.
5.2 Screenshots
Figure 5.1 shows the Raspberry Pi, the device we used for our project. The Raspberry Pi is a low-cost, credit-card-sized computer that plugs into a computer monitor or TV and uses a standard keyboard and mouse. It is also called a mini-computer.
Figure 5.2 shows the PiCamera, which is used to capture images. The PiCamera captures an image and sends it to the various APIs for further processing. The camera is connected to the Raspberry Pi; when the user presses the button, the camera captures an image.
Figure 5.3 shows the headphones, which are connected to the Raspberry Pi and deliver the sound to the blind person once the image has been processed. After processing, the final voice output is played through the headphones into the ears of the blind person.
Figure 5.4 shows the HDMI cable, which is used to connect the Raspberry Pi to a laptop or an LED display while setting up and working with the Raspberry Pi.
Figure 5.5 shows the SD card, which we used to install the operating system; it also provides storage for the user's data and for installing the desired software on the Raspberry Pi.
Figure 5.6 shows the power bank, which provides a continuous power supply to the Raspberry Pi. We use a 20,000 mAh power bank, which provides 18-20 hours of power.
Figure 5.7 shows a breadboard, which is used for circuit construction. Because a solderless breadboard requires no soldering, it is reusable, which makes it easy to build temporary prototypes and experiment with circuit designs.
Figure 5.8 shows the overall configuration of the system. It consists of the Raspberry Pi, breadboard, headphones, PiCamera, push button, and sound card.
CHAPTER # 6
TESTING
6 System Testing
In this chapter, we discuss the testing of the developed "Third Eye for Blinds" system from different angles to determine how efficient and effective the system is.
6.1 Introduction
Testing is the process of executing an application or program with the intention of finding errors and determining whether the application fulfils the user's needs. It can also be defined as the ability of a program to meet the required or desired results.
In many software engineering methodologies, a separate phase called the testing phase is performed after the completion of implementation. The benefit of this approach is that it is hard to see one's own mistakes: a fresh eye can find observable errors much faster than the person who has read the material many times.
• Describes the scene
• Reads out text
• Recognizes people
Table 6.1 describes the testing of "Button Press", in which the user presses a button as input and, in response, an interrupt is generated and the modules execute. The Button Press module was tested successfully.
Output summary
Success: An interrupt is generated and the modules execute.
Pre-conditions: The user presses the button.
Post-conditions: The modules execute.
Table 6.2 describes the testing of "Capture Image", in which the user pushes a button as input and, in response, the system captures an image with the camera. The Capture Image module was tested successfully.
Table 6.3 describes the testing of the "Speech to Text" module, in which the system takes the user's voice as input, records it, and converts it into text using CMU Sphinx. The Speech to Text module was tested successfully.
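After CMU Sphinx returns a transcript, the system must decide which module to run. A minimal keyword-matching sketch is shown below; the keyword sets are assumptions for illustration, since the report does not list the exact grammar used with the recognizer.

```python
def choose_module(transcript):
    """Map a recognized utterance to one of the system's functions.
    The keyword sets here are illustrative assumptions, not the exact
    grammar used in deployment. Returns 'scene', 'text', or None."""
    words = set(transcript.lower().split())
    if words & {"scene", "describe", "description"}:
        return "scene"
    if words & {"text", "read", "reading"}:
        return "text"
    return None  # unrecognized: ask the user to repeat

choice = choose_module("please describe the scene")
```

Returning None lets the controller re-prompt the user instead of guessing, which matters when the recognizer mishears an utterance.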
Table 6.4 describes the testing of the "Scene Description" module, in which the captured image is sent to the Microsoft Vision API, which provides labels and a description of the image contents along with confidence values, the gender of the people, and their likely emotions. The expected output was received in the testing of "Scene Description".
Table 6.4: TC-04
Table 6.5 describes the testing of the "Text Analytics" module, in which the captured image of text is sent to the Microsoft Vision API, which provides the words and labels of the text found in the image. The expected output was received in the testing of the "Text Analytics" module.
Table 6.6 describes the testing of the "Sentence Making" module, in which the output of the Microsoft Vision API (labels, image description, gender with emotions) is converted into a complete sentence providing the full description of the scene or text. The "Sentence Making" module was tested successfully.
Table 6.6: TC-06
Output summary
Success: The MVA's output is converted into a complete sentence which, in the case of scene description, comprises the description of the image and the gender of the people (if found), with the emotions of the majority of the people. In the case of the text analytics module, the complete sentence comprises the sentences of the text found in the image.
Table 6.7 describes the testing of the "Text to Speech" module, in which the sentence (scene description or text) is converted into a voice recording and played through the headphones. The "Text to Speech" module was tested successfully.
Table 6.7: TC-07
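The text-to-speech step can be driven by a command-line engine. The sketch below builds an `espeak` invocation; choosing `espeak` (and the `/tmp` output path) is an assumption for illustration, since the report does not name the exact engine used in deployment.

```python
def tts_command(sentence, wav_path="/tmp/description.wav"):
    """Build the command line that renders `sentence` to a WAV file.
    Using espeak is an illustrative assumption, not the report's named
    engine. On the Pi, the returned list could be executed with
    subprocess.run(...) and the WAV played to the headphones with aplay."""
    return ["espeak", "-v", "en", "-w", wav_path, sentence]

cmd = tts_command("a man standing in a park")
```

Writing to a WAV file rather than speaking directly lets the controller queue and replay descriptions if the user asks to hear one again.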
CHAPTER # 7
CONCLUSIONS
7 Conclusion
In this chapter, we discuss the results of the "Third Eye for Blinds" system and give concluding remarks.
7.1 Conclusion
Third Eye for Blinds was developed for blind people, with affordability for ordinary people in mind. The main objectives of this system are to provide artificial vision to the blind so that they can live independently, along with storage of relatives and friends, face recognition, and similar features. It is also cost-effective to develop.
At the international level, many products and gadgets have been developed to facilitate blind persons, but such products are expensive and out of budget, and most of them are heavy to carry; the basic aim of this project is therefore to provide a financially accessible, easy-to-handle device. Having all five senses is a blessing, and each plays a major role in life. Through our eyes we can see things, understand them, and feel our surroundings, whereas loss of vision can cause depression and other psychological issues. Our product describes the scene to the blind person, reads out text documents, and recognizes the gender of people.
Gender recognition is one of the features of this product. A visually impaired person sometimes cannot differentiate between male and female unless he or she hears the voice, and sometimes even the voice is too ambiguous to tell whether it is a male's or a female's; this product helps the user by recognizing the gender.
Products like this have a lot of scope in the technology world, which is evolving day by day, and when we look at market shares the figures are very high. This product can therefore open a way to step into the real market.
CHAPTER # 8
FUTURE WORK
8 Future Work
In this chapter, we discuss the future work of our project and explain how we can make it more cost-effective and optimal in performance.
To provide internet connectivity, a GSM module can be used, making the device more portable and usable anywhere. In the future, we can connect remotely to the device to upgrade the software. Moreover, we can upgrade our product with additional functionality by supporting other languages, including the national language (Urdu), so that it will also help uneducated blind people. PIR motion sensors will be used to detect hurdles and motion. The Raspberry Pi 3 will be replaced by a Pi Zero to make the device more cost-efficient and fast, and a Snapdragon 53 chip could also be used instead of the Pi Zero for more efficient operation. The wired headphones can be replaced by Bluetooth headphones. In the future, the Microsoft Cognitive AI services could be replaced by NeuralTalk technology to support offline operation.
An enrolment feature will be added to help the blind person recognize family, friends, and relatives. The blind person can save a specific person in the database by speaking; if that person later appears in front of the camera, the headphones will speak that person's name.
CHAPTER # 9
REFERENCES
9 References
1. Microsoft. Seeing AI Project. Pivothead. [Online] 2016. [Cited: 10 March 2019.] http://www.pivothead.com/seeingai/.
3. Raj, Aswinth. Smart Blind Stick using Arduino. CircuitDigest. [Online] 08 January 2018. [Cited: 29 November 2018.] https://circuitdigest.com/microcontroller-projects/arduino-smart-blind-stick.
4. Hill, Simon. Blind Technologies. Digital Trends. [Online] 2014. [Cited: 5 April 2019.] http://www.digitaltrends.com/mobile/blind-technologies/.
5. Investigacion Desarrollo. Artificial Intelligence Lenses. Phys.org. [Online] 2014. [Cited: 11 November 2018.] https://phys.org/news/2014-05-artificial-intelligence-lenses.html.