CHAPTER-1
INTRODUCTION
1.1 Introduction
For ages, asking for directions has made people more sociable, but it can also be inconvenient. The traditional remedy is to place direction boards at key locations, giving travellers convenience and assurance of direction. As time passed, GPS and Google Maps came into existence and gave users a whole new travelling experience: given the destination point and the user's location, they direct the user to the destination, and Google's algorithm also generates the best and shortest path to travel. For indoor positioning, however, such as getting directions inside headquarters, campuses, airports, or railway stations, these tools fall short, and people are dragged back to the initial stage of asking for directions.
The evolution of robotics offers a solution to this problem. In the general case, either a person assists the visitor or the visitor has to find the destination alone. To reduce human interaction and inconvenience, and to promote automation, the greeting droid can be used: an automaton that calculates the shortest route and accompanies people to their destination. The greeting droid receives visitors as it watches its surroundings with its eye (a camera), grabs a visitor's attention with a greeting, asks the visitor for details for ID-card generation and record keeping, and then generates the path to be covered to take the visitor on the tour.
1.2 Robotics
Robotics is an interdisciplinary branch of engineering and science that includes mechanical engineering, electronic
engineering, information engineering, computer science, and others. Robotics deals with the design, construction,
operation, and use of robots, as well as computer systems for their control, sensory feedback, and information
processing.
The concept of creating machines that can operate autonomously dates back to classical times, but research
into the functionality and potential uses of robots did not grow substantially until the 20th century. Throughout
history, it has been frequently assumed by various scholars, inventors, engineers, and technicians that robots will one
day be able to mimic human behavior and manage tasks in a human-like fashion. Today, robotics is a rapidly growing
field, as technological advances continue; researching, designing, and building new robots serve various practical
purposes, whether domestically, commercially, or militarily. Many robots are built to do jobs that are hazardous to
people such as defusing bombs, finding survivors in unstable ruins, and exploring mines and shipwrecks. Robotics is
also used in STEM (science, technology, engineering, and mathematics) as a teaching aid.
In our case, consider navigating a mobile robot inside a building to a distant waypoint. It should execute
this task while avoiding walls and not falling down stairs. A motion planning algorithm would take a description
of these tasks as input, and produce the speed and turning commands sent to the robot's wheels. Motion planning
algorithms might address robots with a larger number of joints (e.g., industrial manipulators), more complex tasks
(e.g. manipulation of objects), different constraints (e.g., a car that can only drive forward), and uncertainty (e.g.
imperfect models of the environment or robot).
The path-planning techniques used for the greeting droid are:
Static path
Line follower
The static path is a path-planning technique used for routes that are clear and wide; static planning takes the distance to be covered as input and then generates a static route based on the AO* algorithm.
The line follower is another path-planning technique that gives more accuracy. The main reason to implement it is to improve accuracy and make the robot follow a given route, which reduces the failure rate to a minimum.
AO* is an extension of the A* algorithm that can handle both AND and OR branches of a graph when generating the desired path. In the AO* algorithm the procedure is similar to A*, but AND arcs impose constraints on traversing specific paths: when such paths are traversed, the costs of all the paths originating from the preceding node are added up to the level at which the goal state is found.
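To make the AND/OR distinction concrete, the following is a minimal sketch, not the droid's actual planner: the graph, the node names, and the unit edge cost are illustrative assumptions. An OR choice picks the cheapest alternative, while an AND option must pay for all of its children together.

```python
# Hypothetical AND-OR graph for illustration: each node maps to a list
# of options; an option is a list of child nodes that must ALL be
# solved (AND), while the alternatives in the outer list are OR choices.
graph = {
    'Lobby':    [['Corridor'], ['Stairs', 'Landing']],
    'Corridor': [['Office']],
    'Stairs':   [],   # leaf nodes are treated as goal states
    'Landing':  [],
    'Office':   [],
}
edge_cost = 1  # assume unit cost per edge

def best_cost(node):
    """Return the minimal cost to reach a goal (leaf) from node."""
    options = graph[node]
    if not options:
        return 0
    # OR: take the cheapest option; AND: sum the costs of all children
    return min(sum(edge_cost + best_cost(c) for c in option)
               for option in options)

print(best_cost('Lobby'))  # prints 2 (Corridor -> Office)
```

Real AO* also maintains heuristic estimates and revises them as the graph is expanded; this sketch only shows how AND and OR branches combine costs.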
1.5 Conclusion
Based on the discussion in this chapter, the greeting droid can move around under the control of its processor with the help of the coordinate-based path-planning algorithm. The upcoming chapters explain the end-to-end process in full detail.
CHAPTER-2
LITERATURE REVIEW
2.1 A Robotic Wayfinding System for the Visually Impaired
We present an emerging indoor assisted navigation system for the visually impaired. The core of the system is
a mobile robotic base with a sensor suite mounted on it. The sensor suite consists of an RFID reader and a
laser range finder. Small passive RFID sensors are manually inserted in the environment. We describe how
the system was deployed in two indoor environments and evaluated by visually impaired participants in a
series of pilot experiments.
We have built and deployed a prototype of a robotic guide for the visually impaired. Its name is RG,
which stands for “robotic guide.” Our basic research objective is to alleviate the localization and navigation problems of purely autonomous approaches by instrumenting environments with inexpensive and reliable sensors that can be placed in and out of environments without disrupting any indigenous activities. Effectively, the environment becomes a distributed tracking and guidance system (Kulyukin & Blair 2003; Kulyukin, Gharpure, & De Graw 2004) that consists of stationary nodes, e.g., sensors and computers, and mobile nodes, e.g., robotic guides.
Additional requirements are: 1) that the instrumentation be fast, e.g., two to three hours, and require only
commercial off-the-shelf (COTS) hardware components; 2) that sensors be inexpensive, reliable, easy to
maintain (no external power supply), and provide accurate localization; 3) that all computation run onboard
the robot; and 4) that human-robot interaction be both reliable and intuitive from the perspective of the
visually impaired users. The first two requirements make the systems that satisfy them replicable,
maintainable, and robust. The third requirement eliminates the necessity of running substantial off-board
computation to keep the robot operational. In emergency situations, e.g., computer security breaches, power
failures, and fires, off-board computers are likely to become dysfunctional and paralyze the robot if it depends
on them. The fourth requirement explicitly considers the needs of the target population.
Thus, there is a clear need for systems that improve the wayfinding abilities of the visually impaired, especially in unfamiliar indoor environments, where conventional aids, such as white canes and guide dogs, are of limited use.
Robot-assisted navigation can help the visually impaired overcome these limitations. First, the amount of
body gear carried by the user is significantly minimized, because most of it can be mounted on the robot and
powered from onboard batteries. Consequently, the navigation-related physical load is significantly reduced.
Second, the user can interact with the robot in ways unimaginable with guide dogs and white canes, i.e.,
speech, wearable keyboard, audio, etc. These interaction modes make the user feel more at ease and reduce
her navigation-related cognitive load. Third, the robot can interact with other people in the environment, e.g.,
ask them to yield or receive instructions. Fourth, robotic guides can carry useful payloads, e.g., suitcases and
grocery bags. Finally, the user can use robotic guides in conjunction with her conventional navigation aids.
Structured indoor environments, such as airports and shopping centers, are a perfect niche for robotic guides. Guide dogs and white canes are of limited use in such environments, because they do not have any environment-specific topological knowledge and, consequently, cannot help their users find paths to useful destinations.
According to the latest report of the international health society, the number of visually impaired people in the world has grown to 230 million, and the annual increase is expected to be 7 million per year. How to help this population make their daily lives easier is an important question, and the use of assistive technology could be one possible solution. The most successful and widely used walking aid for visually impaired people is the white cane. It is used to detect obstacles on the ground, uneven surfaces, holes, steps, and puddles. The white cane is inexpensive, and is so lightweight and small that it can be folded and tucked away in a pocket. However, users need more than 100 hours of training to use it, and it is not intelligent enough to guide the user to a target autonomously.
The sequential system development approach is adopted as shown in figure (1). This model consists of a
sequence of processes from user requirements (quality, performance), through system requirements (technical
specifications), architectural design, and component development to the testing cycle of integration,
installation and operations. At each process boundary, a review or a test allows progress to be monitored and a
commitment made to the next stage [13]. These boundaries act as quality milestones, controlling the gradual
transformation from a high risk idea into a complete product.
The system design for an intelligent walker machine to assist the visually impaired is presented. Different requirements are discussed and integrated with the performance goals. The architectural design of the hardware modules is emphasized. The goal-seeking module is tested via simulation using a 2D map. A simple path planner is developed using a linked list of via points. A PD fuzzy-logic controller is developed to achieve the goal-seeking objective. The preliminary simulation results for the goal-seeking module encouraged us to continue our development. A virtual-reality module will be added to the simulation in order to demonstrate the maneuverability of the walker machine in a real-time environment.
CHAPTER-3
SYSTEM REQUIREMENTS
3.1 Software Requirements
OpenCV- OpenCV (Open Source Computer Vision Library) is released under a BSD license and hence it is free for both academic and commercial use. It has C++, Python and Java interfaces and supports Windows, Linux, Mac OS, iOS and Android. OpenCV was designed for computational efficiency and with a strong focus on real-time applications.
Installation steps:
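The installation commands were omitted above; as a sketch, on a Debian-based system such as Raspberry Pi OS, a typical way to install OpenCV's Python bindings is one of the following (exact package names may vary by distribution and release):

```shell
# Install OpenCV's Python bindings from PyPI
pip3 install opencv-python

# Or install the distribution package on Debian/Raspbian
sudo apt-get install python3-opencv
```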
PYQT5- Qt is a set of cross-platform C++ libraries that implement high-level APIs for accessing many aspects of modern desktop and mobile systems. These include location and positioning services, multimedia, NFC and Bluetooth connectivity, a Chromium-based web browser, as well as traditional UI development. PyQt5 is a comprehensive set of Python bindings for Qt v5.
Installation steps:
pip3 install --user pyqt5
sudo apt-get install python3-pyqt5
sudo apt-get install pyqt5-dev-tools
sudo apt-get install qttools5-dev-tools
Firebase- Firebase is essentially a real-time database. The data appears as JSON files, and real-time changes propagate to the connected clients. When you build cross-platform apps using the iOS, Android, and JavaScript SDKs, all of your clients end up receiving the data that was updated.
Installation steps:
Install Foundation:
1. Install the Foundation command line in npm using npm install -g foundation-cli
2. Go to ~/Github (or the directory for your projects) and run foundation new.
3. Choose "A site" as the installation.
4. Enter your site name and choose the ZURB Template.
5. Change directory to ~/Github/your-web-site.
Install Firebase:
1. Install the firebase command line using npm install -g firebase-tools
2. Run firebase init and enter your-web-site name.
3. Run npm start, and it will populate the dist folder with your site.
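As a sketch of how the droid's visitor details could be stored in Firebase, the following uses the Realtime Database REST API with only the Python standard library. The project URL, the field names, and the /visitors path are illustrative assumptions, not the deployed configuration:

```python
import json
import urllib.request

# Placeholder project URL -- replace with your own Realtime Database URL
DB_URL = "https://example-project-default-rtdb.firebaseio.com"

def visitor_payload(name, phone, destination):
    """Serialize one visitor record as the JSON body of a POST request."""
    record = {"name": name, "phone": phone, "destination": destination}
    return json.dumps(record).encode("utf-8")

def push_visitor(name, phone, destination):
    """POST the record under /visitors (requires a reachable project)."""
    req = urllib.request.Request(
        DB_URL + "/visitors.json",
        data=visitor_payload(name, phone, destination),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

In the deployed droid the client SDK could be used instead; the REST form is shown only because it needs no extra dependencies.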
USHA RAMA COLLEGE OF ENGINEERING AND TECHONOLOGY
GREETING DROID
Deployment
Processor: Raspberry Pi 3, 1 GHz; two Raspberry Pis are recommended to share and reduce the load on each processor.
Hard drive: minimum 32 GB memory card; 64 GB or more recommended.
Memory (RAM): minimum 1 GB; 2 GB or above recommended.
Sound card speakers/Bluetooth speakers
Webcam
7-Inch LCD display for UI representation.
Keyboard.
Thermal Printer 58mm.
Johnson motors for wheels.
Line Follower sensors.
Ultra-sonic sensors to avoid collisions.
CHAPTER-4
IMPLEMENTATION
4.1 Gifplay
The droid stays in a static position and displays a static gif until a visitor arrives in the area.
import cv2
import numpy as np
import time, os

# Create a VideoCapture object and read from the input file
# If the input is the camera, pass 0 instead of the video file name
cap = cv2.VideoCapture('source.gif')

# Check if the capture opened successfully
if cap.isOpened() == False:
    print("Error opening video stream or file")

frame_counter = 0

# Read until the video is completed
while cap.isOpened():
    # A non-empty comd.txt signals that a visitor was detected: stop the gif
    if os.stat("comd.txt").st_size > 0:
        f = open("comd.txt", "w")
        f.write("")
        f.close()
        break
    # Capture frame-by-frame
    time.sleep(0.2)
    ret, frame = cap.read()
    if ret == True:
        frame_counter = frame_counter + 1
        # Display the resulting frame in full screen
        #cv2.resize(frame, (300, 500))
        cv2.namedWindow("Frame", cv2.WND_PROP_FULLSCREEN)
        cv2.setWindowProperty("Frame", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
        cv2.imshow('Frame', frame)
        if cv2.waitKey(25) & 0xFF == ord('q'):
            break
    else:
        # End of the gif: rewind so it plays continuously (restart step
        # reconstructed from the surrounding description)
        frame_counter = 0
        cap.set(cv2.CAP_PROP_POS_FRAMES, 0)

cap.release()
cv2.destroyAllWindows()
In the above program, we import OpenCV and the other required packages in order to capture the identified person. The gif image of the robot's appearance is played until a person is identified.
4.2 Id Generator
Whenever the visitor gives the details in the UI the droid generates ID to the visitor and prints it using a thermal
printer.
from PIL import Image, ImageFont  # imports added; img and image are created earlier in the script
import os

##img1.show()
font = ImageFont.truetype('Roboto-Bold.ttf', size=30)
image.paste(img, (0, 0), img)
image.save('greeting_card.jpg')
#image.show()
os.system("lpr -o portrait -o fit-to-page -P Printer greeting_card.jpg")
4.3 Destination
At the time of ID-card generation, the processor generates the route to the destination in parallel by passing through the layers of the AO* algorithm.
def show_entry_fields():
In the above program, the details from the visitor are added to the internal files using serialization, and the given information is parsed to decide the destination.
4.4 Movement
The movement of the droid is controlled by a loop that continuously checks whether the droid receives the right inputs and signals, and follows the commands accordingly.
import RPi.GPIO as IO
import time, os
from pygame import mixer

IO.setwarnings(False)
IO.setmode(IO.BOARD)
IO.setup(16, IO.IN)   #GPIO 2 -> Left IR out
IO.setup(18, IO.IN)   #GPIO 3 -> Right IR out
IO.setup(32, IO.OUT)  #GPIO 4 -> Motor 1 terminal A
IO.setup(36, IO.OUT)  #GPIO 14 -> Motor 1 terminal B
IO.setup(38, IO.OUT)  #GPIO 17 -> Motor Left terminal A
IO.setup(40, IO.OUT)  #GPIO 18 -> Motor Left terminal B
IO.setup(5, IO.OUT)   #Left Motor Enabler
IO.setup(7, IO.OUT)   #Right Motor Enabler
IO.output(5, IO.HIGH)
IO.output(7, IO.HIGH)

location = ""
stri = "principal"

# Wait until the UI writes the destination into buf.txt, then consume it
while 1:
    if os.stat("buf.txt").st_size > 0:
        f = open("buf.txt", "r")
        stri = f.read()
        f.close()
        f = open("buf.txt", "w")
        f.write("")
        f.close()
        break

junctionCross = 0

def selectRouting(stri):
    # Map the destination to the number of junctions to cross
    global junctionCross
    if "principal" in stri:
        junctionCross = 2
    else:
        junctionCross = 4

def burst():
    # Drive both motors forward for a short burst, then stop
    IO.output(32, True)   #1A+
    IO.output(36, False)  #1B-
    IO.output(38, True)   #2A+
    IO.output(40, False)  #2B-
    time.sleep(0.8)
    IO.output(32, False)  #1A+
    IO.output(36, False)  #1B-
    IO.output(38, False)  #2A+
    IO.output(40, False)  #2B-

def linefollow(junctionCross):
    # Follow the line until the required number of junctions is crossed
    # (the IR-sensor logic that detects junctions and decrements
    # junctionCross is elided in this excerpt)
    while 1:
        if junctionCross == 0:
            break
        burst()

linefollow(junctionCross)
In the above program, we import packages such as RPi.GPIO for GPIO-pin communication and pygame's mixer for audio playback.
The motors used are Johnson motors, which run on electric supply from the battery, so we use a driver board both to supply the current and to regulate it; controlling the current flow results in controlled operation of the motors.
def motor1():
    init()
    print('forward')
    GPIO.output(33, GPIO.HIGH)
    GPIO.output(31, GPIO.LOW)
    time.sleep(1)
    stop()

def motor2():
    init()
    print('reverse')
    GPIO.output(33, GPIO.LOW)
    GPIO.output(31, GPIO.HIGH)
    time.sleep(1)
    stop()

def servo():
    init()
    print('hand shake')
    pwm = GPIO.PWM(37, 50)
    pwm.start(5)
    i = 1
    while i <= 3:
        pwm.ChangeDutyCycle(7)
        time.sleep(0.5)
        pwm.ChangeDutyCycle(10)
        i = i + 1
    pwm.ChangeDutyCycle(7)
    time.sleep(0.2)
    stop()

def stop():
    init()
    GPIO.output(33, GPIO.HIGH)
    GPIO.output(31, GPIO.HIGH)
    time.sleep(0.2)
    GPIO.cleanup()

while 1:
    if os.stat("buf1.txt").st_size > 0:
        f = open("buf1.txt", "w")
        f.write("")
        f.close()
        break

motor1()
time.sleep(1.5)
servo()
time.sleep(1.5)
motor2()
4.6 Cleanup
To avoid miscomputation errors, buffer errors, or mistakes caused by power shortage, the cleanup code is used; it refreshes all the driver pins and registers and keeps them clean and ready to listen for the commands sent.
import RPi.GPIO as IO
IO.setwarnings(False)
IO.setmode(IO.BOARD)
IO.setup(3, IO.OUT)  #GPIO 4 -> Motor 1 terminal A
IO.cleanup()         # reset every channel used by this program
CHAPTER-5
DESIGN
5.1 Use case diagram
Use case diagrams consist of actors, use cases, and their relationships. The diagram is used to model the system or a subsystem of an application. A single use case diagram captures a particular functionality of a system; to model a system, the most important aspect is to capture its dynamic behavior.
A class diagram describes the attributes and operations of a class and also the constraints imposed on the system. Class diagrams are widely used in the modeling of object-oriented systems because they are the only UML diagrams that can be mapped directly onto object-oriented languages. A class diagram shows a collection of classes, interfaces, associations, collaborations, and constraints; it is also known as a structural diagram.
Fig 5.4-Flow diagram of the greeting droid which displays the data flow
CHAPTER-6
RESULTS
As per the implementation in the previous Chapter 4, the results of that implementation are discussed here.
6.1 Gifplay:
Fig 6.1 - gif image of the greeting droid that plays at the start of the program
As described in Chapter 4, Fig 6.1 is the image that plays continuously until a person is recognized in the area around the droid.
Fig 6.2 - query window image of the greeting droid that appears after the gifplay.
6.3 ID Generator
The figure above shows the ID card (for a visitor named Saran) generated from the details given by the user; the date is printed on it to maintain a reference.
6.4 Movement
The movement of the droid always depends on the commands passed by the line-following sensor, from which it determines the direction in which it should move.
6.5 Cleanup
All GPIO pins are cleaned and can be used for further operations.
The cleanup code helps the Raspberry Pi's pins clear the recent buffer memory and sets the pins to a ready state.
CHAPTER-7
TESTING OF PROJECT MODULES
Software testing is defined as an activity to check whether the actual results match the expected results and to ensure that the software system is defect-free. It involves executing a software component or system component to evaluate one or more properties of interest.
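As an illustration of this kind of check, the following is a minimal sketch of an automated test for the routing logic of Chapter 4; the stand-in selectRouting here returns the junction count instead of setting a global, purely for testability:

```python
def selectRouting(stri):
    """Stand-in for the Chapter 4 routine: junctions to cross for a destination."""
    return 2 if "principal" in stri else 4

def test_selectRouting():
    # Expected results taken from Chapter 4: 2 junctions for the
    # principal's office, 4 junctions for any other destination
    assert selectRouting("principal office") == 2
    assert selectRouting("library") == 4

test_selectRouting()
print("Status: Successful")
```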
[Test-case tables for the individual modules: each reported Status: Successful, Comment: No comments.]
Table 7.6 Running a cleanup script to clear the buffer of the processor. Status: Successful, Comment: No comments.
CHAPTER-8
CONCLUSION
We presented an indoor assisted navigation system for the visitors or outlanders. The system consists of a mobile
robotic guide and small RFID sensors embedded in the environment. The system allows individuals to navigate in
unfamiliar indoor environments and interact with the robotic guide via text, sound, and a fixed keyboard.
The main question addressed by our research is the feasibility of robot-assisted navigation in indoor environments
instrumented with inexpensive passive sensors. So, is indoor robot-assisted navigation feasible? While our
experiments show that the technology has promise, the answer to this question cannot be given either negatively or
affirmatively at this point. The benefits of autonomous systems, such as greeting droid, increase with longer
deployments. Only through longer deployments can one answer how easy it is to maintain the system over extended
time periods and whether the target environment accepts the technology sociologically. Our deployments have not
been sufficiently long to answer either question.
A major barrier to long-term deployment is the unwillingness of real target environments, e.g., airports, headquarters, and colleges, to agree to exploratory long-term deployments (two to three months) of assistive technologies. Only after this barrier is overcome, both sociologically and technologically, will we be able to render a definite verdict on the feasibility of indoor robot-assisted navigation.
CHAPTER-9
FUTURE ENHANCEMENTS
In this work, we showed how line-follower sensors can be used in robot-assisted indoor navigation to guide outlanders to a particular place. We presented a robotic guide for visitors that was deployed and tested, both with and without visitors, in two indoor environments. The experiments illustrate that passive line-following sensors can act as reliable stimuli that trigger local navigation behaviors to achieve global navigation objectives.
In the scope of future enhancement, we want to develop a droid that can also work on voice inputs, along with a dynamic human-robot interaction system that also works for visually impaired individuals.
The enhancement can also extend the idea of path planning using more advanced algorithms, which can be used for the dynamic reallocation of the droid from place to place.
CHAPTER-10
REFERENCES
https://scholar.google.co.in/scholar?hl=en&as_sdt=0%2C5&q=robotic+guides&btnG=
https://www.aaai.org/Papers/IAAI/2004/IAAI04-014
https://patentimages.storage.googleapis.com/5c/13/36/457618abb5d367/US4928546
https://www.aaai.org/Papers/Symposia/Spring/2004/SS-04-03/SS04-03-001
https://www.researchgate.net/profile/Vladimir_Kulyukin/publication/224755759_RFID_in_Robot-Assisted_Indoor_Navigation_for_the_Visually_Impaired/links/0c960530cc8e2e33a1000000
https://www.researchgate.net/profile/Wahied_Gharieb/publication/254606101_Intelligent_Robotic_Walker_Design/links/549360a30cf25de74db4f2d6/Intelligent-Robotic-Walker-Design