
PERSONAL INTELLIGENT ASSISTANCE

DONIYA ANTONY
M.TECH CSIS
RET19CSCY07
 INTRODUCTION
 MOTIVATION
 AREA OF RESEARCH
 RELATED WORKS
 METHODOLOGY
 RESULTS
 CONCLUSION
 REFERENCES

 Intelligent Assistance (IA) refers to the use of intelligent agents that help individuals perform tasks.
 They can do one or more of the following:
• Work autonomously
• Meet goals
• Maintain historical data
• Perceive and assist

 IAs are programmed with artificial intelligence, machine learning, and voice-recognition technology.
 IA also refers to software designed to assist people with basic tasks.
 Such tasks, which might otherwise be performed by a personal assistant, include:
• Reading text or e-mail messages aloud.
• Placing calls and reminding the end user about appointments.

 It is one of the fastest-growing engineering technologies.
 Nearly 20% of the world's population lives with some form of disability.
 It enables a user to perform operations such as opening the calculator, and to ask queries.
 It has applications in many different fields.

• Natural Language Processing:
To understand the user's speech input.
• Automatic Speech Recognition:
To recognize the command in the user's spoken input.
• Artificial Intelligence:
To learn from the user and to store information about the user's behavior and relations.
• Inter-Process Communication:
To get important information from other software applications.
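A hypothetical sketch of how these four areas fit together in one assistant pipeline. All function names and the returned values here are invented for illustration; they are stand-ins for real ASR, NLP, and IPC components, not an actual API.

```python
# Illustrative stages of a personal-assistant pipeline.
# Every function below is a stub with made-up behavior.

def recognize_speech(audio: bytes) -> str:
    """Automatic Speech Recognition: audio -> raw text (stub)."""
    return "remind me to study at 7 pm"

def parse_intent(text: str) -> dict:
    """Natural Language Processing: text -> structured intent (stub)."""
    return {"intent": "set_reminder", "task": "study", "time": "7 pm"}

def dispatch(intent: dict) -> str:
    """Inter-process communication: hand the intent to another app (stub)."""
    return f"Reminder set: {intent['task']} at {intent['time']}"

def assistant(audio: bytes) -> str:
    return dispatch(parse_intent(recognize_speech(audio)))
```

The AI component would sit around this loop, using stored history about the user to refine `parse_intent` over time.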
PAPER / METHOD / ADVANTAGE / DISADVANTAGE

1. Next-Generation of Virtual Personal Assistants (Microsoft Cortana, Apple Siri, Amazon Alexa and Google Home)
   Method: ASR model
   Advantage: Low-cost and time-saving approach.
   Disadvantage: The model itself can sometimes be complex and hard to understand completely.

2. Investigation and Development of the Intelligent Voice Assistant for the Internet of Things Using Machine Learning
   Method: NLU
   Advantage: Recognizes and synthesizes speech in real-time mode.
   Disadvantage: It does not always give correct answers related to the location.

3. Construction of a Voice Driven Life Assistant System for Visually Impaired People
   Method: NLU
   Advantage: It helps visually impaired people.
   Disadvantage: It is not very accurate.
METHODOLOGY

Personal Assistants (PAs)

Popular PAs currently include:

• Apple's Siri
• Amazon Alexa
• Many more.
Working of a PA
The features of all PAs are almost the same, but their internal workings differ.
I will discuss the working of Siri.
Siri:
• Recognizes your voice
• Understands your commands
• Communicates with the server
• Interprets your request
• Retrieves information for you.
Siri's Working

• Siri basically consists of three layers:

• Speech to text
• Text analysing
• Interpreting commands
Siri's Working (First Layer):
• Speech to text:
• A piece of software that converts audio to text.
• It doesn't understand just anything you might say.
• Siri has a much easier job than Dragon or the Mac's speech-recognition facility.
• It only has to understand the words and sentences related to appointments, contacts, messages, maps, etc.
• Example:
• When we say "Car to Aftab", it will write it as "Call to Aftab".
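One way such a correction can happen is by snapping an out-of-domain word to the closest word in the assistant's small command vocabulary. This is a minimal sketch of that idea, not Siri's actual mechanism; the vocabulary and threshold are invented.

```python
import difflib

# Snap a possibly misrecognized word to the nearest in-domain command word.
# The vocabulary and the 0.5 similarity cutoff are illustrative assumptions.
VOCABULARY = ["call", "text", "remind", "play", "open"]

def snap_to_vocabulary(word: str) -> str:
    match = difflib.get_close_matches(word.lower(), VOCABULARY, n=1, cutoff=0.5)
    return match[0] if match else word  # fall back to the original word

print(snap_to_vocabulary("Car"))  # -> "call"
```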
Siri's Working (Second Layer):
 Text analysing:
 The converted text is just letters to the computer.
 A piece of software converts the text into something the computer can understand.
 The computer understands commands, so Siri converts this text into a computer command.
 A computer command consists of functions and the parameters of these functions.
(Second Layer) Continued…
 Siri maps the words to functions and parameters to create a command that the computer can understand.
 Example: a reminder.
 Ambiguous or half command:
 Siri asks for more information to clarify an ambiguous command.
 Full command
 Auto-generated parameters
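The word-to-command mapping described above can be sketched as a pattern that extracts a function name and its parameters from the text. The regex, the command format, and the `create_reminder` name are all hypothetical, chosen only to illustrate the idea.

```python
import re

# Map recognized text to a (function, parameters) command.
# Pattern and command names are illustrative assumptions.
def text_to_command(text: str):
    m = re.match(r"remind me to (?P<task>.+?) at (?P<time>.+)", text, re.I)
    if m is None:
        return None  # ambiguous / half command: ask the user for more detail
    return ("create_reminder", {"task": m["task"], "time": m["time"]})

cmd = text_to_command("Remind me to pick up the dry cleaning at 5 PM")
# cmd -> ("create_reminder", {"task": "pick up the dry cleaning", "time": "5 PM"})
```

Returning `None` for unmatched text corresponds to the "ambiguous or half command" case, where the assistant must request clarification.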
Siri's Working (Third Layer):
 Interpreting commands:
 At this level Siri isn't doing much.
 Example:
 You already have a calendar app, and you press buttons to view and create appointments and meetings. Siri pushes these buttons for you.
 In this layer, the mapped computer command goes to the server through the internet.
 Simultaneously, your speech is evaluated locally.
 A local recognizer communicates with the server to judge whether the command is best handled locally or not.
 Examples:
 Play music, restaurant reservation, movie rating.
Siri's Interface:

 You don't need to tell Siri much about yourself.
 When you activate Siri, a black screen appears, adorned with a wavy white line along the bottom and white text in the centre that reads, "What can I help you with?"
Features

 Make phone calls:
 Call Faisal Ali; Call the University of Gujrat
 Get directions:
 Direct me to Fawara Chok; Take me to Fawara Chok
 Send messages:
 Email Aftab, subject Hello; Send SMS to Aftab: where are you?; Message Aftab
 Set reminders:
 Remind me to go for a walk at 7 AM; Remind me to study when I'm at home.
Features Continued…

 Ask questions:
 Will it rain today?; What is 34 times 86?; How many Pakistani rupees are in one dollar?; Tell me the height of Minar-e-Pakistan.
 Schedule meetings and appointments:
 Schedule a meeting tomorrow morning with Aftab
 Play music and videos:
 Play Life of Pi; Play songs from (album name)
 Set alarms:
 Wake me up at 6:30 AM
Amazon Alexa
Alexa is an intelligent personal assistant developed by Amazon, made popular by the Amazon Echo and Amazon Echo Dot devices.

It is capable of:
 Voice interaction
 Music playback
 Providing real-time information
 Controlling several smart devices, such as home-automation systems
How It Works
Amazon Echo
Amazon's Alexa-controlled Echo speaker is a wireless speaker, but it is capable of much more. Using nothing but the sound of your voice, you can search the web, create to-do and shopping lists, shop online, get instant weather reports, and control popular smart-home products.

Amazon Echo
◇ Simply put, it is a "smart speaker".
◇ Information, music, news, weather, and more – instantly
◇ Controlled by your voice for hands-free convenience
◇ Echo begins working as soon as it detects the wake word.
◇ Echo is also an expertly tuned speaker that can fill any room with immersive sound.
◇ You can pick "Alexa" or "Amazon" as your wake word.
Always Getting Smarter

◇ Echo's brain is in the cloud, running on Amazon Web Services, so it continually learns and adds more functionality over time.
◇ The more you use Echo, the more it adapts to your speech patterns, vocabulary, and personal preferences.
◇ Echo has been fine-tuned to deliver crisp vocals with dynamic bass response.
◇ Its dual downward-firing speakers produce 360° omnidirectional audio to fill the room with immersive sound.
Feature Comparison

 The features of all VPAs are largely common.

 But there are some features that certain VPAs do not have.
 These features make some VPAs more reliable, more capable, and more attractive to customers.
 Features of some VPAs are:
 Summon with hardware button: Alexa – Yes; Google Now – No (always listening for "OK Google"); Siri – Yes
 Web search: Alexa – Yes; Google Now – Yes; Siri – Yes
 Geofencing (e.g. reminding you to make a purchase when you're near a business): Alexa – Yes; Google Now – Yes; Siri – Limited
 Predictive notifications (e.g. traffic on your commute is bad): Alexa – Yes; Google Now – Yes; Siri – No
 Event- or contact-based notifications (when your sister calls, tell her happy birthday): Alexa – Yes; Google Now – Yes; Siri – Yes
ALGORITHMS USED

• Hidden Markov model
• Neural network
• Dynamic time warping
Hidden Markov Models (HMM)

• The most flexible and successful approach to speech recognition so far has been Hidden Markov Models (HMM).
• A Hidden Markov Model is a collection of states connected by transitions.
• It begins with a designated initial state.
• Formally, an HMM consists of the following elements:
• {s} = a set of states.
• {a_ij} = a set of transition probabilities, where a_ij is the probability of taking the transition from state i to state j.
• {b_i(u)} = a set of emission probabilities, where b_i is the probability distribution over the acoustic space, describing the likelihood of emitting each possible sound u while in state i.
• Since a_ij and b_i are both probabilities, they must satisfy the following properties:
• a_ij ≥ 0, b_i(u) ≥ 0, ∀ i, j, u
• Σ_j a_ij = 1, ∀ i
• Σ_u b_i(u) = 1, ∀ i
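As a concrete illustration of these definitions, here is a toy two-state HMM with discrete emissions and the standard forward algorithm for scoring an observation sequence. The probability values are made up; a real recognizer would train them and use continuous acoustic distributions.

```python
# Toy discrete HMM using the notation above: a[i][j] transitions,
# b[i][u] emissions, and a designated initial state (state 0).
# All numbers are illustrative, not from any real model.
a = [[0.7, 0.3],
     [0.4, 0.6]]
b = [[0.9, 0.1],
     [0.2, 0.8]]
pi = [1.0, 0.0]  # the model begins in the designated initial state

def forward(observations):
    """Forward algorithm: total probability of the observation sequence."""
    alpha = [pi[i] * b[i][observations[0]] for i in range(2)]
    for u in observations[1:]:
        alpha = [sum(alpha[i] * a[i][j] for i in range(2)) * b[j][u]
                 for j in range(2)]
    return sum(alpha)

print(forward([0, 1]))  # -> 0.279
```

Note that each row of `a` and each row of `b` sums to 1, satisfying the constraints Σ_j a_ij = 1 and Σ_u b_i(u) = 1.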
Neural Networks

• A neural network consists of many simple processing units, each of which is connected to many other units.
• Each unit has a numerical activation level.
• The connections between units are weighted, and the new activation is usually calculated as a function of the sum of the weighted inputs from other units.
• Some units in a network are designated as input units and others as output units.
• Units that are neither input nor output units are called hidden units.
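The activation rule above can be written in a few lines. The sigmoid squashing function and the weight values are one common choice, assumed here for illustration.

```python
import math

# One processing unit: activation = f(weighted sum of inputs + bias).
# The sigmoid below is an illustrative choice of activation function f.
def unit_activation(inputs, weights, bias=0.0):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # squashes the sum into (0, 1)
```

A network is then just layers of such units, with each layer's outputs feeding the weighted sums of the next.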
Dynamic Time Warping

• The Dynamic Time Warping algorithm is one of the oldest and most important algorithms in speech recognition.
• The simplest way to recognize an isolated word sample is to compare it against a number of stored word templates and determine the "best match".
• This goal depends upon a number of factors.
• First, different samples of a given word will have somewhat different durations.
• This problem can be eliminated by simply normalizing the templates and the unknown speech so that they all have an equal duration.
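The template comparison above can be sketched with the classic DTW recurrence. This minimal version works on 1-D feature sequences with absolute difference as the local cost; real systems use frame vectors and richer distance measures.

```python
# Dynamic time warping: cost of the best alignment between two sequences
# of possibly different durations. Local cost here is |a_i - b_j|.
def dtw(seq_a, seq_b):
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch seq_b
                                 cost[i][j - 1],      # stretch seq_a
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # -> 0.0: warping absorbs the repeat
```

The "best match" is then simply the stored template with the smallest DTW cost against the unknown sample.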
RESULTS

Timing & Accuracy:

 We asked all three assistants the same series of questions, to measure which one performed best.
 "Where can I see the movie The Equalizer?"
 Siri: 3.5 seconds
 Alexa: 6 seconds
 Google Now: 5.47 seconds
Timing & Accuracy:

 "Remind me to pick up the dry cleaning."
 Siri: 2.7 seconds
 Alexa: 5 seconds
 Google Now: 6 seconds

 Best answer: Siri. In terms of quickness and ease of use, it sets up the reminder with a tap.
CONCLUSION
• Personal assistants are a very effective way to organize your schedule.
• Many areas can benefit from this technology.
• It can be used for intuitive operation of computer-based systems in daily life.
• This technology will spawn revolutionary changes in the modern world and become a pivotal technology.
• Within five years, personal intelligent assistance will become so pervasive in our daily lives that service environments lacking this technology will be considered inferior.
THANK YOU

