
GREETING DROID

CHAPTER-1
INTRODUCTION


Chapter 1 – Introduction

1.1 Introduction
Since ages, asking for directions has made people sociable, but it is also sometimes inconvenient. The olden way to
overcome this was to place direction boards in certain places, which gave travellers convenience and assurance
of direction. As time passed, GPS and Google Maps came into existence and gave users a whole new experience of
travelling: they take the destination point and the user's location and direct the user to the destination, and the Google
algorithm also generates the best and shortest path to travel. But when it comes to indoor positioning, such as getting
directions in places like headquarters, campuses, airports, or railway stations, none of this applies, and people are
dragged back to the initial stages.
The evolution of robotics offers a solution to this problem. In the general case, either a person assists the visitor or
the visitor has to find the destination alone. To reduce this human interaction and inconvenience, and to promote
automation, the greeting droid can be used: it is an automated robot that calculates the shortest route and
accompanies people to the destination. The greeting droid receives visitors as it watches the surroundings with its
eye (a camera), grabs the attention of the visitor with its greeting, asks the visitor for details for ID card
generation and record keeping, and then generates the path to be covered to take the visitor on the tour.

1.2 Robotics
Robotics is an interdisciplinary branch of engineering and science that includes mechanical engineering, electronic
engineering, information engineering, computer science, and others. Robotics deals with the design, construction,


operation, and use of robots, as well as computer systems for their control, sensory feedback, and information
processing.
The concept of creating machines that can operate autonomously dates back to classical times, but research
into the functionality and potential uses of robots did not grow substantially until the 20th century. Throughout
history, it has been frequently assumed by various scholars, inventors, engineers, and technicians that robots will one
day be able to mimic human behavior and manage tasks in a human-like fashion. Today, robotics is a rapidly growing
field; as technological advances continue, researching, designing, and building new robots serve various practical
purposes, whether domestically, commercially, or militarily. Many robots are built to do jobs that are hazardous to
people such as defusing bombs, finding survivors in unstable ruins, and exploring mines and shipwrecks. Robotics is
also used in STEM (science, technology, engineering, and mathematics) as a teaching aid.

1.3 Path Planning


Path planning (also known as the navigation problem) is a term used in robotics for the process of breaking down a
desired movement task into discrete motions that satisfy movement constraints and possibly optimize some aspect of
the movement.


In our case, consider navigating a mobile robot inside a building to a distant waypoint. It should execute
this task while avoiding walls and not falling down stairs. A motion planning algorithm would take a description
of these tasks as input, and produce the speed and turning commands sent to the robot's wheels. Motion planning
algorithms might address robots with a larger number of joints (e.g., industrial manipulators), more complex tasks
(e.g. manipulation of objects), different constraints (e.g., a car that can only drive forward), and uncertainty (e.g.
imperfect models of the environment or robot).

The path planning techniques used for the greeting droid are:

 Static path
 Line follower

The static path is a path planning technique used for paths that are clear and wide; it takes as input the distance to be
covered and then generates a static route based on the AO* algorithm.
The line follower is another path planning technique which gives more accuracy; the main reason to implement this
technique is to make the robot follow a marked route, which decreases the rate of failures to the least. A rough sketch
of how the two techniques could be selected is given below.
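
The choice between the two techniques can be pictured as a small dispatcher. This is an illustrative sketch only: the function name, the 1.5 m width threshold, and the labels are assumptions, not the droid's actual code.

# Illustrative sketch: choose a path planning technique for a route segment.
# The 1.5 m threshold and the labels are assumed values for illustration.
def choose_technique(corridor_width_m):
    if corridor_width_m >= 1.5:
        return "static"        # wide, clear stretch: precomputed static route
    return "line_follower"     # narrow stretch: track the marked line

print(choose_technique(2.0))   # static
print(choose_technique(0.9))   # line_follower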

1.4 AO* algorithm


A* is a computer algorithm that is widely used in pathfinding and graph traversal, which is the process of finding a
path between multiple points, called "nodes". It enjoys widespread use due to its performance and accuracy. However,
in practical travel-routing systems, it is generally outperformed by algorithms which can pre-process the graph to
attain better performance, although other work has found A* to be superior to other approaches.
However, the A* algorithm cannot search AND-OR graphs efficiently.


The AO* is an upgrade to the A* algorithm which can handle both AND and OR branches when generating the desired
path. In the AO* algorithm the procedure is similar, but AND arcs add constraints on traversing specific paths: when
an AND arc is traversed, the costs of all the paths originating from the preceding node are added together at that level,
irrespective of whether each branch on its own takes you to the goal state.
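
As a concrete illustration, the sketch below evaluates path costs over a tiny AND-OR graph by recursion. The graph, node names, and edge costs are assumptions made up for illustration; this is not the droid's route map, and a full AO* implementation would also maintain heuristic estimates and a marked solution graph.

# Minimal AND-OR cost evaluation (an illustrative sketch, not full AO*).
# Each node maps to a list of options; an option is a list of
# (child, edge_cost) pairs. An AND option needs all of its children
# (costs are summed); choosing among options is the OR step (take the min).
graph = {
    'Entrance': [[('Corridor', 1)],             # OR option 1
                 [('Lift', 2), ('Stairs', 3)]], # OR option 2 (an AND pair)
    'Corridor': [[('Office', 2)]],
    'Lift':     [[('Office', 1)]],
    'Stairs':   [[('Office', 1)]],
    'Office':   [],                             # goal node
}

def cost(node):
    if not graph[node]:      # goal reached: no further cost
        return 0
    # Sum the costs required by each AND option, then take the cheapest (OR)
    return min(sum(c + cost(child) for child, c in option)
               for option in graph[node])

print(cost('Entrance'))      # 3, via Entrance -> Corridor -> Office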

1.5 Conclusion
Based on the discussions in this chapter, the greeting droid can move around under the control of its processor and the
route produced by the path planning algorithm. The upcoming chapters explain the end-to-end process in detail.


CHAPTER-2
Literature Review

2. Literature Review
2.1 A Robotic Wayfinding System for the Visually Impaired
We present an emerging indoor assisted navigation system for the visually impaired. The core of the system is
a mobile robotic base with a sensor suite mounted on it. The sensor suite consists of an RFID reader and a
laser range finder. Small passive RFID sensors are manually inserted in the environment. We describe how
the system was deployed in two indoor environments and evaluated by visually impaired participants in a
series of pilot experiments.


We have built and deployed a prototype of a robotic guide for the visually impaired. Its name is RG,
which stands for “robotic guide.” Our basic research objective is to alleviate localization and navigation
problems of purely autonomous approaches by instrumenting environments with inexpensive and reliable
sensors that can be placed in and out of environments without disrupting any indigenous activities.
Effectively, the environment becomes a distributed tracking and guidance system (Kulyukin & Blair 2003;
Kulyukin, Gharpure, & De Graw 2004) that consists of stationary nodes, e.g., sensors and computers, and
mobile nodes, e.g., robotic guides.

Published by: Vladimir Kulyukin, Chaitanya Gharpure, Pradnya Sute, Nathan De Graw, John Nicholson

Additional requirements are: 1) that the instrumentation be fast, e.g., two to three hours, and require only
commercial off-the-shelf (COTS) hardware components; 2) that sensors be inexpensive, reliable, easy to
maintain (no external power supply), and provide accurate localization; 3) that all computation run onboard
the robot; and 4) that human-robot interaction be both reliable and intuitive from the perspective of the
visually impaired users. The first two requirements make the systems that satisfy them replicable,
maintainable, and robust. The third requirement eliminates the necessity of running substantial off-board
computation to keep the robot operational. In emergency situations, e.g., computer security breaches, power
failures, and fires, off-board computers are likely to become dysfunctional and paralyze the robot if it depends
on them. The fourth requirement explicitly considers the needs of the target population.

2.2 ROBOTIC DEVICES


This disclosure describes a robotic device helpful in the construction of prosthetic or artificial body parts. A
wrist is described that will allow full motion of a human-like wrist with various numbers of fingers. It is
comprised of a ball and socket joint where the ball is cut into parallel segments, each segment being attached
to one finger. Attached to each segment are two cables to control up-down motion and attached to the entire
ball are two cables to control back-forth motion. The socket portion, attached to the arm, contains guides for
the tendons at four equally spaced points: top, bottom, right side, and left side. When tendons pull the ball
right or left, all segments of the ball move together, but when the tendons pull the ball up or down, only the
segment attached to the pulled tendon moves. Each finger is attached to one segment of the wrist, allowing
independent movement of the fingers up and down, but non-independent movement of the fingers right and
left.


Published by: David A. Walters

2.3 Human-Robot Interaction in a Robotic Guide for the Visually Impaired


We present an assisted indoor navigation system for the visually impaired. The system consists of a mobile
robotic guide and small sensors embedded in the environment. We describe the hardware and software
components of the system and discuss several aspects of human-robot interaction that we observed in the
initial stages of a pilot study with several visually impaired participants.
Published by: Vladimir Kulyukin, Chaitanya Gharpure, Nathan De Graw
Most of the research and development (R&D) has focused on removing structural barriers to universal access,
e.g., retrofitting vehicles for wheelchair access, building ramps and bus lifts, improving wheelchair controls,
and providing access to various devices through specialized interfaces, e.g., sip and puff, haptic, and Braille.
For the 11.4 million visually impaired people in the United States (LaPlante & Carlson 2000), this R&D has
done little to remove the main functional barrier: the inability to navigate. This inability denies the visually
impaired equal access to many private and public buildings, limits their use of public transportation, and
makes the visually impaired a group with one of the highest unemployment rates (74%) (LaPlante & Carlson


2000). Thus, there is a clear need for systems that improve the wayfinding abilities of the visually
impaired, especially in unfamiliar indoor environments, where conventional aids, such as white canes
and guide dogs, are of limited use.

Robot-assisted navigation can help the visually impaired overcome these limitations. First, the amount of
body gear carried by the user is significantly minimized, because most of it can be mounted on the robot and
powered from onboard batteries. Consequently, the navigation-related physical load is significantly reduced.
Second, the user can interact with the robot in ways unimaginable with guide dogs and white canes, i.e.,
speech, wearable keyboard, audio, etc. These interaction modes make the user feel more at ease and reduce
her navigation-related cognitive load. Third, the robot can interact with other people in the environment, e.g.,
ask them to yield or receive instructions. Fourth, robotic guides can carry useful payloads, e.g., suitcases and
grocery bags. Finally, the user can use robotic guides in conjunction with her conventional navigation aids.

2.4 RFID in Robot-Assisted Indoor Navigation for the Visually Impaired


We describe how Radio Frequency Identification (RFID) can be used in robot-assisted indoor navigation for
the visually impaired. We present a robotic guide for the visually impaired that was deployed and tested both
with and without visually impaired participants in two indoor environments. We describe how we modified
the standard potential fields algorithms to achieve navigation at moderate walking speeds and to avoid
oscillation in narrow spaces. The experiments illustrate that passive RFID tags deployed in the environment
can act as reliable stimuli that trigger local navigation behaviors to achieve global navigation objectives.
Published by: Vladimir Kulyukin, Chaitanya Gharpure, John Nicholson, Sachin Pavithran
What environments are suitable for robotic guides? There is little need for such guides in familiar
environments where conventional navigation aids are adequate. For example, a guide dog typically picks a
route after three to five trials. While there is a great need for assisted navigation outdoors, the robotic
solutions, due to severe sensor challenges, have so far been inadequate for the job and have not compared
favorably to guide dogs [4]. Therefore, we believe that unfamiliar indoor environments that are dynamic and
complex, e.g., airports and conference


centers, are a perfect niche for robotic guides. Guide dogs and white canes are of limited use in such
environments, because they do not have any environment-specific topological knowledge and,
consequently, cannot help their users find paths to useful destinations.

2.5 Intelligent Robotic Walker Design


This paper aims to present the current development of an intelligent robotic walker. This machine is designed
as a walking aid for the visually impaired. The machine is equipped with different sensors to navigate its
motion in the indoor environment such as hospitals, hotels, airports, and museums. The user drives the walker
while the two front wheels have the ability to steer in order to avoid obstacles and to seek the target. The
control system interacts continuously with the user via voice commands. A self-autonomy goal seeker is
developed using fuzzy logic. A discrete logic navigator is used to avoid obstacles in the work space. The
indoor environment is described using a 2D road map. The hardware architecture is designed using a
hierarchical approach. It is currently under development using microcontrollers. In the paper, the system
design and preliminary simulation results are presented.
Published by: W. GHARIEB

According to the latest report of the international health society, the number of visually impaired people in the world
has increased to 230 million, and the annual increase is expected to be 7 million per year. How to make daily life
easier for this population sector is an important question, and the use of assistive technology could be one possible
solution. The most successful and widely used walking aid for visually


impaired people is the white cane. It is used to detect obstacles on the ground, uneven surfaces, holes,
steps, and puddles. The white cane is inexpensive, and is so lightweight and small that it can be folded and
tucked away in a pocket. However, users must train for more than 100 hours to use it, and it is not intelligent
enough to guide the user to a target in an autonomous manner.

The sequential system development approach is adopted as shown in figure (1). This model consists of a
sequence of processes from user requirements (quality, performance), through system requirements (technical
specifications), architectural design, and component development to the testing cycle of integration,
installation and operations. At each process boundary, a review or a test allows progress to be monitored and a
commitment made to the next stage [13]. These boundaries act as quality milestones, controlling the gradual
transformation from a high risk idea into a complete product.

The system design for an intelligent walker machine to assist the visually impaired is presented. Different
requirements are discussed and integrated with the performance goals. The architectural design for hardware
modules is emphasized. The goal seeking module is tested via simulation using a 2D map. A simple path
planner is developed using a linked list of via points. A PD fuzzy logic controller is developed to achieve
the goal seeking objective. The preliminary simulation results for the goal seeking module encouraged us to
continue our development. A virtual reality module will be added to the simulation in order to demonstrate the
maneuverability of the walker machine in a real-time environment.


Chapter-3
System Requirements

3. System Requirements
3.1 Software Requirements
 Opencv- OpenCV (Open Source Computer Vision Library) is released under a BSD license and hence it’s
free for both academic and commercial use. It has C++, Python and Java interfaces and supports Windows,
Linux, Mac OS, iOS and Android. OpenCV was designed for computational efficiency and with a strong
focus on real-time applications.

Step to install:

Step 1: Update packages

Step 2: Install OS libraries

Step 3: Install Python libraries

Step 4: Download OpenCV and OpenCV_contrib


Step 4.1: Download opencv from Github
Step 4.2: Download opencv_contrib from Github
Step 5: Compile and install OpenCV with contrib modules
Step 5.1: Create a build directory
Step 5.2: Run CMake
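
Once the build completes, a quick sanity check (our suggestion, not an official installation step) is to import the module and print its version:

# Quick sanity check after installation: print the OpenCV version
import cv2
print(cv2.__version__)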

 PYQT5- Qt is a set of cross-platform C++ libraries that implement high-level APIs for accessing
many aspects of modern desktop and mobile systems. These include location and positioning
services, multimedia, NFC and Bluetooth connectivity, a Chromium-based web browser, as well as
traditional UI development. PyQt5 is a comprehensive set of Python bindings for Qt v5.

Installation steps:
pip3 install --user pyqt5
sudo apt-get install python3-pyqt5
sudo apt-get install pyqt5-dev-tools
sudo apt-get install qttools5-dev-tools
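
A minimal smoke test (a generic example, not part of the droid's UI code) confirms the bindings work by opening an empty labelled window:

# Minimal PyQt5 smoke test: open a labelled window
import sys
from PyQt5.QtWidgets import QApplication, QLabel

app = QApplication(sys.argv)
label = QLabel("PyQt5 installation OK")
label.show()
sys.exit(app.exec_())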

 Firebase- Firebase is essentially a real-time database. The data is stored as JSON, and real-time changes
propagate to the connected clients. When you build cross-platform apps using the iOS, Android, and
JavaScript SDKs, all of your clients end up getting the data that was updated.

Installation steps:

Install Foundation:
1. Install the Foundation command line in npm using npm install -g foundation-cli
2. Go to ~/Github (or the directory for your projects) and run foundation new.
3. Choose "A site" as the installation.
4. Enter your site name and choose the ZURB Template.
5. Change directory to ~/Github/your-web-site.

Install Firebase:
1. Install the firebase command line using npm install -g firebase-tools
2. Run firebase init and enter your-web-site name.
3. Run npm start, and it will populate the dist folder with your site.

4. Choose dist as your main directory for Firebase.


5. Install a local server using npm install -g serve.
6. Run serve dist to indicate that you're referring to the Foundation compiled site.
7. Try to access Firebase in your localhost. For example, localhost:3000.

Deployment

1. Run firebase deploy


2. Run firebase open to open your site automatically.
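
On the droid itself, visitor records can also be written to the realtime database over Firebase's REST interface. The sketch below is illustrative: the database URL is a placeholder, the record fields mirror the details collected in the UI, and an open-rules or authenticated database is assumed.

# Hedged sketch: append a visitor record to the Firebase Realtime Database
# over its REST API. The database URL is a placeholder, not a real project.
import requests

DB_URL = "https://your-project.firebaseio.com"   # placeholder project URL

def push_visitor(name, phone, destination):
    record = {"name": name, "phone": phone, "destination": destination}
    # POST appends the record under /visitors with an auto-generated key
    resp = requests.post(DB_URL + "/visitors.json", json=record)
    resp.raise_for_status()
    return resp.json()

push_visitor("Saran", "9999999999", "principal")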

3.2 Hardware Requirements


We strongly recommend a Raspberry Pi of the newest version.

 Processor: Raspberry Pi 3, 1 GHz; recommended to use two Raspberry Pis to reduce the load on each processor by
sharing the work.
 Hard Drive: Minimum 32 GB memory card; Recommended 64 GB or more
 Memory (RAM): Minimum 1 GB; Recommended 2 GB or above
 Sound card speakers/Bluetooth speakers
 Webcam
 7-Inch LCD display for UI representation.
 Keyboard.
 Thermal Printer 58mm.
 Johnson motors for wheels.
 Line Follower sensors.
 Ultra-sonic sensors to avoid collisions (a typical reading pattern is sketched below).
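
The ultrasonic sensors report distance by timing a trigger/echo pulse. The sketch below shows the usual HC-SR04-style reading pattern; the pin numbers and the 20 cm threshold are assumptions for illustration, not the droid's actual wiring.

# Hedged sketch of the usual HC-SR04 trigger/echo read; the pin numbers
# are assumptions for illustration, not the droid's actual wiring.
import RPi.GPIO as GPIO
import time

TRIG, ECHO = 13, 15          # assumed BOARD pin numbers
GPIO.setmode(GPIO.BOARD)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    # A 10 microsecond pulse on TRIG starts one measurement
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = end = time.time()
    # ECHO stays high for a time proportional to the distance
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()
    # Sound travels about 34300 cm/s; halve for the round trip
    return (end - start) * 34300 / 2

if distance_cm() < 20:       # e.g., stop if an obstacle is within ~20 cm
    print("Obstacle ahead - stopping")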


Chapter-4

Implementation

4. Implementation
4.1 Gifplay
The droid stays in a static position and displays a static gif until a visitor arrives in the area.
import cv2
import numpy as np
import time,os

# Create a VideoCapture object and read from input file
# If the input is the camera, pass 0 instead of the video file name
cap = cv2.VideoCapture('source.gif')

# Check if camera opened successfully
if (cap.isOpened() == False):
    print("Error opening video stream or file")

frame_counter = 0
# Read until video is completed
while(cap.isOpened()):
    # A non-empty comd.txt signals that a visitor was detected: clear it and stop the gif
    if( os.stat("comd.txt").st_size > 0 ):
        f = open("comd.txt", "w")
        f.write("")
        f.close()
        break
    # Capture frame-by-frame
    time.sleep(0.2)
    ret, frame = cap.read()
    if ret == True:
        frame_counter = frame_counter + 1
        # Display the resulting frame in fullscreen
        #cv2.resize(frame,(300,500))
        cv2.namedWindow("Frame", cv2.WND_PROP_FULLSCREEN)
        cv2.setWindowProperty("Frame", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
        cv2.imshow('Frame', frame)
        if cv2.waitKey(25) & 0xFF == ord('q'):
            break
        # Loop the gif by reopening it after its last frame
        if frame_counter == 7:
            frame_counter = 0
            cap = cv2.VideoCapture("source.gif")
    # Break the loop
    else:
        cap.set(1, 1)
        break

# When everything is done, release the video capture object
cap.release()

# Close all the frames
cv2.destroyAllWindows()

In the program above, we import opencv and the required packages in order to detect the identified person.
The gif image of the robotic appearance plays until a person is identified.

4.2 Id Generator
Whenever the visitor gives their details in the UI, the droid generates an ID card for the visitor and prints it using a
thermal printer.

from PIL import Image, ImageDraw, ImageFont
import os

# Wait until the UI writes the visitor details, then read and clear the file
while(1):
    if( os.stat("details.txt").st_size > 0 ):
        f = open("details.txt", "r")
        message = f.read()
        f.close()
        f = open("details.txt", "w")
        f.write("")
        f.close()
        break

# Resize the card background and the visitor photo captured by the camera
img = Image.open("white.jpg")
width, height = img.size
img.resize((350, 280)).save("white2.jpg")
img = Image.open("new.jpg")
width, height = img.size
img.resize((350, 180)).save("trail.jpg")

# Paste the photo onto the card and draw the visitor details under it
image = Image.open('white2.jpg')
draw = ImageDraw.Draw(image)
img1 = Image.open('white.jpg', 'r').convert("RGBA")
img = Image.open('trail.jpg', 'r').convert("RGBA")
image.paste(img, (0, 0), img)

##img1.show()
font = ImageFont.truetype('Roboto-Bold.ttf', size=30)

(x, y) = (0, 200)

#message = "Happy Birthday!"

color = 'rgb(0, 0, 0)'  # black color

draw.text((x, y), message, fill=color, font=font)

image.save('greeting_card.jpg')

#image.show()
# Send the finished card to the thermal printer
os.system("lpr -o portrait -o fit-to-page -P Printer greeting_card.jpg")

4.3 Destination
At the time of ID card generation, the processor in parallel generates the route to the destination by passing the input
through the layers of the AO* algorithm.

def show_entry_fields():
    # Callback from the Tkinter query window; e1, e2, e3 are the entry fields
    if(e3.get() == "principal" or e3.get() == "director"):
        cap = cv2.VideoCapture(0)
        file1 = open("details.txt", "w")
        file1.write(str("Name: " + e1.get()) + "\n" + "Phone: " + str(e2.get()))
        file1.close()
        while(True):
            font = cv2.FONT_HERSHEY_SIMPLEX
            ret, frame = cap.read()
            id = "saran"
            img = frame
            cv2.putText(img, 'Click Y to for your image', (20, 50), font,
                        1.3, (0, 255, 0), 2, cv2.LINE_AA)
            cv2.imshow('img1', img)
            if (cv2.waitKey(1) & 0xFF == ord('y')):  # save on pressing 'y'
                cv2.imwrite('new.jpg', frame)
                cv2.imshow('frame1', frame)
                # Signal the other modules through the buffer files
                f21 = open("buf.txt", "w")
                f21.write(e3.get())
                f21.close()
                f212 = open("comd.txt", "w")
                f212.write("")
                f212.close()
                f = open("buf1.txt", "w")
                f.write("Hello")
                f.close()
                cv2.destroyAllWindows()
                app.bind('<Escape>', lambda e: tk.Frame.destroy(app))
                tk.Frame.destroy(app)

In the program above, the details from the visitor are written to the internal files, and the given information is
parsed to decide the destination.

4.4 Movement
The movement of the droid is controlled by a loop that continuously checks whether the droid gets the right inputs and
signals, and follows the commands accordingly.

import RPi.GPIO as IO
import time,os
from pygame import mixer

IO.setwarnings(False)
IO.setmode(IO.BOARD)
IO.setup(16,IO.IN)   #GPIO 2 -> Left IR out
IO.setup(18,IO.IN)   #GPIO 3 -> Right IR out
IO.setup(32,IO.OUT)  #GPIO 4 -> Motor 1 terminal A
IO.setup(36,IO.OUT)  #GPIO 14 -> Motor 1 terminal B
IO.setup(38,IO.OUT)  #GPIO 17 -> Motor Left terminal A
IO.setup(40,IO.OUT)  #GPIO 18 -> Motor Left terminal B
IO.setup(5,IO.OUT)   #Left Motor Enabler
IO.setup(7,IO.OUT)   #Right Motor Enabler

IO.output(5,IO.HIGH)
IO.output(7,IO.HIGH)

location = ""
stri = "principal"
# Wait until the UI writes the destination into buf.txt, then read and clear it
while(1):
    if( os.stat("buf.txt").st_size > 0 ):
        f = open("buf.txt", "r")
        stri = f.read()
        f.close()
        f = open("buf.txt", "w")
        f.write("")
        f.close()
        break

junctionCross = 0

def selectRouting(stri):
    # The number of junctions to cross depends on the destination
    global junctionCross
    if("principal" in stri):
        junctionCross = 2
    else:
        junctionCross = 4

def burst():
    # Short forward burst to push the sensors past a junction line
    IO.output(32,True)   #1A+
    IO.output(36,False)  #1B-
    IO.output(38,True)   #2A+
    IO.output(40,False)  #2B-
    time.sleep(0.8)
    IO.output(32,False)
    IO.output(36,False)
    IO.output(38,False)
    IO.output(40,False)

def linefollow(junctionCross):
    while 1:
        if(junctionCross == 0):
            break
        if(IO.input(16) == True and IO.input(18) == True):     # both white
            print("Moving Forward")
            IO.output(32,True)   #1A+
            IO.output(36,False)  #1B-
            IO.output(38,True)   #2A+
            IO.output(40,False)  #2B-
        elif(IO.input(16) == False and IO.input(18) == True):  # turn right
            print("Moving Right")
            IO.output(32,True)
            IO.output(36,True)
            IO.output(38,True)
            IO.output(40,False)
        elif(IO.input(16) == True and IO.input(18) == False):  # turn left
            print("Moving Left")
            IO.output(32,True)
            IO.output(36,False)
            IO.output(38,True)
            IO.output(40,True)
        else:                                                  # stay still
            print("Stopped at black line")
            IO.output(32,True)
            IO.output(36,True)
            IO.output(38,True)
            IO.output(40,True)
            junctionCross = junctionCross - 1
            burst()

selectRouting(stri)   # set the junction count for the requested destination
linefollow(junctionCross)

def startMoving():
    pass  # body not shown in the source listing

In the program above, we import packages like RPi.GPIO for the GPIO pin communication and the pygame mixer for audio
generation.

4.5 Motor Operations


The motors used are Johnson motors, which run on the electric supply from the battery. A motor driver is used to
supply and regulate the electric flow; by switching the flow, the direction and timing of each motor can be
controlled.

import RPi.GPIO as GPIO
import time
import os  # needed for the os.stat call below

def init():
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(29, GPIO.OUT)
    GPIO.setup(31, GPIO.OUT)
    GPIO.setup(33, GPIO.OUT)
    GPIO.setup(37, GPIO.OUT)
    GPIO.output(29, GPIO.HIGH)

def motor1():
    init()
    print('forward')
    GPIO.output(33, GPIO.HIGH)
    GPIO.output(31, GPIO.LOW)
    time.sleep(1)
    stop()

def motor2():
    init()
    print('reverse')
    GPIO.output(33, GPIO.LOW)
    GPIO.output(31, GPIO.HIGH)
    time.sleep(1)
    stop()

def servo():
    # Wave the servo back and forth three times for the hand shake
    init()
    print('hand shake')
    pwm = GPIO.PWM(37, 50)
    pwm.start(5)
    i = 1
    while i <= 3:
        pwm.ChangeDutyCycle(7)
        time.sleep(0.5)
        pwm.ChangeDutyCycle(10)
        i = i + 1
    pwm.ChangeDutyCycle(7)
    time.sleep(0.2)
    stop()

def stop():
    init()
    GPIO.output(33, GPIO.HIGH)
    GPIO.output(31, GPIO.HIGH)
    time.sleep(0.2)
    GPIO.cleanup()

# Wait for the signal file written by the UI, then run the greeting sequence
while(1):
    if( os.stat("buf1.txt").st_size > 0 ):
        f = open("buf1.txt", "w")
        f.write("")
        f.close()
        break

motor1()
time.sleep(1.5)
servo()
time.sleep(1.5)
motor2()


4.6 Cleanup
To avoid miscomputation errors, mistakes, or buffer errors after a power shortage, a cleanup script is used
which refreshes all the driver pins and registers and keeps them clean and ready to listen for the commands sent.

import RPi.GPIO as IO

IO.setwarnings(False)
IO.setmode(IO.BOARD)

# Configure each usable pin as an output
IO.setup(3,IO.OUT)    #GPIO 4 -> Motor 1 terminal A
IO.setup(5,IO.OUT)    #GPIO 17 -> Motor Left terminal A
##IO.setup(6,IO.OUT)  #GPIO 18 -> Motor Left terminal B
IO.setup(7,IO.OUT)    #Left Motor Enabler
######IO.setup(8,IO.OUT) #Right Motor Enabler
##IO.setup(9,IO.OUT)  #GPIO 2 -> Left IR out
##IO.setup(10,IO.OUT) #GPIO 3 -> Right IR out
IO.setup(11,IO.OUT)
IO.setup(12,IO.OUT)
IO.setup(13,IO.OUT)
##IO.setup(14,IO.OUT)
IO.setup(15,IO.OUT)
IO.setup(16,IO.OUT)
##IO.setup(17,IO.OUT)
IO.setup(18,IO.OUT)
IO.setup(19,IO.OUT)
##IO.setup(20,IO.OUT)
IO.setup(21,IO.OUT)
IO.setup(22,IO.OUT)
IO.setup(23,IO.OUT)
IO.setup(24,IO.OUT)
##IO.setup(25,IO.OUT)
IO.setup(26,IO.OUT)
##IO.setup(27,IO.OUT)
##IO.setup(28,IO.OUT)
IO.setup(29,IO.OUT)
##IO.setup(30,IO.OUT)
IO.setup(31,IO.OUT)
IO.setup(32,IO.OUT)
IO.setup(33,IO.OUT)
##IO.setup(34,IO.OUT)
IO.setup(35,IO.OUT)
IO.setup(36,IO.OUT)
IO.setup(37,IO.OUT)
IO.setup(38,IO.OUT)
##IO.setup(39,IO.OUT)
IO.setup(40,IO.OUT)
print('1')

# Write HIGH to each configured pin so the drivers idle in a known state.
# (IO.output is the call that writes a level to a pin; the source listing
# used IO.setup here. The tail of this listing was cut off at a page break
# and is completed by assumption, following the same pin pattern as above.)
IO.output(3,IO.HIGH)
IO.output(5,IO.HIGH)
IO.output(7,IO.HIGH)
IO.output(11,IO.HIGH)
IO.output(12,IO.HIGH)
IO.output(13,IO.HIGH)
IO.output(15,IO.HIGH)
IO.output(16,IO.HIGH)
IO.output(18,IO.HIGH)
IO.output(19,IO.HIGH)
IO.output(21,IO.HIGH)
IO.output(22,IO.HIGH)
IO.output(23,IO.HIGH)
IO.output(24,IO.HIGH)
IO.output(26,IO.HIGH)
IO.output(29,IO.HIGH)
IO.output(31,IO.HIGH)
IO.output(32,IO.HIGH)
IO.output(33,IO.HIGH)
IO.output(35,IO.HIGH)
IO.output(36,IO.HIGH)
IO.output(37,IO.HIGH)
IO.output(38,IO.HIGH)
IO.output(40,IO.HIGH)


Chapter-5
DESIGN


5. Design
5.1 Use case diagram
Use case diagrams consists of actors, use cases and their relationships. The diagram is used to model the
system/subsystem of an application. A single use case diagram captures a particular functionality of a
system. To model a system, the most important aspect is to capture the dynamic behavior.

Fig 5.1: Use case diagram for greeting droid

USHA RAMA COLLEGE OF ENGINEERING AND TECHONOLOGY Dept. of CSE


GREETING DROID

5.2 Class diagram

Class diagram describes the attributes and operations of a class and also the constraints imposed on
the system. Class diagrams are widely used in the modeling of object-oriented systems because they are
the only UML diagrams which can be mapped directly to object-oriented languages.

Class diagram shows a collection of classes, interfaces, associations, collaborations, and constraints. It is
also known as a structural diagram.

The purpose of the class diagram can be summarized as −

 Analysis and design of the static view of an application.

 Describe responsibilities of a system.

Fig 5.2: Class diagram for greeting droid


5.3 Sequence diagram


A sequence diagram shows object interactions arranged in time sequence. It depicts the objects and classes
involved in the scenario and the sequence of messages exchanged between the objects needed to carry out the
functionality of the scenario. Sequence diagrams are typically associated with use case realizations in the Logical
View of the system under development. Sequence diagrams are sometimes called event diagrams or event
scenarios.

Fig 5.3: Sequence diagram for greeting droid


5.4 Flow Diagram


The flow diagram of the greeting droid demonstrates how the data flows through the layers of
hardware and software to generate a result.

Fig 5.4: Flow diagram of the greeting droid, showing the data flow


Chapter-6
Results


6. Results

As per the implementation in the previous Chapter 4, the results of that implementation are discussed here.

6.1 Gifplay:

Fig 6.1: Gif image of the greeting droid that plays at the start of the program

As described in Chapter 4, Fig 6.1 is the image that plays continuously until a person is recognized in the
area of the droid.


6.2 Query Window


After the Gifplay, the greeting droid loads a query window which takes the details and the
destination from the visitor and then goes on to the further steps.

Fig 6.2: Query window of the greeting droid that appears after the gifplay


6.3 ID Generator


Fig 6.3: The ID card generated from the visitor details

The figure above shows the ID card generated from the details given by the visitor; the date is printed on it
to maintain a reference.


6.4 Movement
The movement of the droid depends on the commands derived from the line following sensors, which tell it
in which direction it should move.


6.5 Cleanup

Console output: "All gpio pins cleaned, can be used for the further operations"

The cleanup code helps clear the recent buffer state of the Raspberry Pi pins and sets the pins in a
ready state.


Chapter-7
Testing of project modules


7. Testing
Software testing is defined as an activity to check whether the actual results match the expected results and to
ensure that the software system is defect free. It involves execution of a software component or system
component to evaluate one or more properties of interest.

7.1 Unit Testing

What is unit testing?


It is a level of software testing where individual units/components of a software are tested. The purpose is to
validate that each unit of the software performs as designed. A unit is the smallest testable part of any
software. It usually has one or a few inputs and usually a single output. In procedural programming, a unit
may be an individual program, function, procedure, etc. In object-oriented programming, the smallest unit is a
method, which may belong to a base/super class, abstract class, or derived/child class. (Some treat a module
of an application as a unit.)
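
For instance, the junction-count logic of section 4.4 can be tested in isolation. The sketch below assumes selectRouting is refactored into a function that returns the count instead of setting a global; the expected values come from the implementation in Chapter 4.

# Hedged example: unit test for the junction-count logic of section 4.4,
# assuming it is refactored to return a value rather than set a global
import unittest

def select_routing(destination):
    return 2 if "principal" in destination else 4

class TestSelectRouting(unittest.TestCase):
    def test_principal_route(self):
        self.assertEqual(select_routing("principal"), 2)

    def test_other_route(self):
        self.assertEqual(select_routing("director"), 4)

if __name__ == "__main__":
    unittest.main()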
How unit testing improves manageability
Managers are under pressure to perform. This means that they are responsible for accomplishment through the
effort of others. If a change lowers the team’s performance, they can expect to be held accountable for the
consequences. This pressure puts the focus on managing and controlling in ways that can favour the status
quo.

 Visibility and reporting


 Control and correction
 Customer satisfaction


Table 7.1: Playing a gif image at the startup of the program

Test case ID: Unit Test-1
Description: Testing the static gif-playing user interface
Steps: 1. Add the path of the source image to the python script.
       2. Execute the python script to load the image using opencv.
Test Setup: Playing a gif image
Input: Runs when the program is started
Output: Plays a gif image in fullscreen
Status: Successful
Comment: No comments


Table 7.2: Printing an ID card based on the given inputs along with the date

Test case ID: Unit Test-2
Description: ID card generator based on the given details, adding the date to it
Steps: 1. Visitor enters the details into the User Interface.
       2. The given details are used to generate an ID card along with the date.
Test Setup: Printing an ID card
Input: Entering the user details
Output: ID card generation
Status: Successful
Comment: No comments


Table 7.3: Generating a destination path based on the given input

Test case ID: Unit Test-3
Description: Generating the destination path based on the given input
Steps: 1. Get the destination given by the visitor in the User Interface.
       2. Generate and analyse the path to reach the destination.
Test Setup: Droid moving from its source place to the destination
Input: The destination entered by the visitor
Output: A generated path to reach the destination
Status: Successful
Comment: Uses simple logic for a limited number of destinations but uses the AO* algorithm when there is a large number of destinations.


Table 7.4: Movement of the droid based on the commands of the sensor

Test case ID: Unit Test-4
Description: Movement of the droid based on the commands from the line follower sensor
Steps: 1. Take the path to the destination.
       2. Follow the line using the line follower sensors until the destination is reached.
Test Setup: Movement of the droid based on the destination
Input: Destination to reach
Output: Movement of the droid to the destination
Status: Successful
Comment: Follows the black line by using line following sensors.


Table 7.5: Checking the motor controls of the droid

Test case ID: Unit Test-5
Description: Checking the motor controls of the droid
Steps: 1. Import the motor controls of the Raspberry Pi using RPi.GPIO.
       2. Check the motor controls based on the inputs given.
Test Setup: Checking the motor controls
Input: Giving commands through the python script
Output: Working of the motors according to the commands
Status: Successful
Comment: Controls the motors from the python script using the RPi.GPIO library.


Table 7.6: Running a cleanup script to clear the buffer of the processor

Test case ID: Unit Test-6
Description: Running a cleanup script to clear the buffer of the Raspberry Pi
Steps: 1. Set up all the pins using the pin numbers.
       2. Set each pin high to make the pins ready to listen for further commands.
Test Setup: Running a cleanup file
Input: GPIO pin numbers
Output: Cleaned up pins ready to listen
Status: Successful
Comment: No comments


Chapter-8
Conclusion


8. Conclusion

We presented an indoor assisted navigation system for the visitors or outlanders. The system consists of a mobile
robotic guide and small RFID sensors embedded in the environment. The system allows individuals to navigate in
unfamiliar indoor environments and interact with the robotic guide via text, sound, and a fixed keyboard.

The main question addressed by our research is the feasibility of robot-assisted navigation in indoor environments
instrumented with inexpensive passive sensors. So, is indoor robot-assisted navigation feasible? While our
experiments show that the technology has promise, the answer to this question cannot be given either negatively or
affirmatively at this point. The benefits of autonomous systems, such as the greeting droid, increase with longer
deployments. Only through longer deployments can one answer how easy it is to maintain the system over extended
time periods and whether the target environment accepts the technology sociologically. Our deployments have not
been sufficiently long to answer either question.

A major barrier to long-term deployment is the unwillingness of real target environments, e.g., airports, headquarters,
and colleges, to agree to exploratory long-term deployments (two to three months) of assistive technologies. Only after
this barrier is overcome, both sociologically and technologically, will we be able to render a definite verdict on the
feasibility of indoor robot-assisted navigation.


Chapter-9
Future Enhancements


9. Future enhancement
In this paper, we showed how line follower sensors can be used in robot-assisted indoor navigation to guide outlanders
to a particular place. We presented a robotic guide for visitors that was deployed and tested both with and
without visitors in two indoor environments. The experiments illustrate that passive line following sensors can act as
reliable stimuli that trigger local navigation behaviors to achieve global navigation objectives.
In the scope of future enhancement, we want to develop a droid that can also work on voice inputs, along with a dynamic
human-robot interaction system which also works for visually impaired individuals.
The enhancement can also extend the idea of path planning using more advanced algorithms, which can be used for the
dynamic reallocation of the droid from place to place.


Chapter-10
References


https://scholar.google.co.in/scholar?hl=en&as_sdt=0%2C5&q=robotic+guides&btnG=

https://www.aaai.org/Papers/IAAI/2004/IAAI04-014

https://patentimages.storage.googleapis.com/5c/13/36/457618abb5d367/US4928546

https://www.aaai.org/Papers/Symposia/Spring/2004/SS-04-03/SS04-03-001

https://www.researchgate.net/profile/Vladimir_Kulyukin/publication/224755759_RFID_in_Robot-Assisted_Indoor_Navigation_for_the_Visually_Impaired/links/0c960530cc8e2e33a1000000

https://www.researchgate.net/profile/Wahied_Gharieb/publication/254606101_Intelligent_Robotic_Walker_Design/links/549360a30cf25de74db4f2d6/Intelligent-Robotic-Walker-Design
