
SMART MIRROR

Introduction
In today's society, information is available to us at a glance through our phones, our laptops, our desktops, and more, but an extra level of interaction is required in order to access it. As technology grows day by day, we are moving towards a more automated and interconnected world built on wireless, connected embedded devices, which are responsible for changing and improving the standard and quality of living. Many devices are being developed that use concepts from multimedia communication, artificial intelligence, and the Internet of Things (IoT). Most people have mirrors at home, so the concept of a smart mirror that you can interact with is attractive and has been fantasized about in many futuristic movies. Mirrors provide a large surface that is ideal for displaying information and interacting with. Smart mirrors have recently started to be developed by many people in the maker community, with varying degrees of interactivity.
There are several products on the market that attempt to be your attractive hub of daily information. The Amazon Echo and the upcoming Google Home present themselves as small speakers that relay information through sound. You can request news or music, fulfilling your need to obtain media content in a hands-free manner. However, not all data is suitable for conveyance by voice, and both designs lack the key ability to convey information visually. Asking for the morning traffic can give you a time estimate, but it barely comes close to a detailed map with your route information. Having the news read to you is convenient, but many prefer reading the news at their own pace. A smart display would be a product able to answer all of these concerns while staying sleek and modern.
This final year project describes how a smart mirror was built from scratch using a Raspberry Pi for the hardware and custom software built on top of Raspbian, a Linux distribution. The goal was to create a Smart Mirror device that people could not only interact with, but also develop further: the platform lets you install and develop your own applications for it. The Smart Mirror would help in developing smart houses with IoT, as well as finding applications in industry.

A smart mirror is an innovative technology that gives the user access to smartphone-like features on a basic mirror. These smart mirrors are embedded with several electronic components, such as displays and sensors, which enhance the user experience.

OBJECTIVE/CONCEPT

The main goal of this project was to develop a smart mirror device that looks like a regular mirror but has a screen inside, which you can interact with using voice commands. The operating system supports running apps and provides a simple API for third-party developers to create their own apps for the Smart Mirror. The software needed to be modular and responsive in order to fit different hardware. The developed mirror is able to show a basic GUI and other information updated from the web, such as the weather and links to the latest news. The primary aim was to incorporate face and voice recognition technology into a smart mirror. The main strength of this project is that it is a new kind of smart device that people don't see every day, and it looks very spectacular. Understanding user needs and expectations is key to successful development.

FEATURES

1) The platform is designed to be lightweight. It runs on a tiny computer, such as the Raspberry Pi.
2) It is modular and extensible. Developers can implement their own plugins in any programming language and integrate them into their smart mirror systems easily.
3) The server component in our platform enables a continuous, real-time connection and remote draw calls (a minimal sketch follows below).
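As a rough illustration of feature 3, the sketch below models the real-time channel as a WebSocket connection. This is an assumption for illustration only: the report does not name the transport, and the port number and message shape are hypothetical. It uses the 'ws' npm package.

const WebSocket = require('ws');

// Real-time server channel for mirror clients (hypothetical port).
const wss = new WebSocket.Server({ port: 8081 });

wss.on('connection', (socket) => {
  // Push a hypothetical "remote draw call" for the mirror client to render.
  socket.send(JSON.stringify({ type: 'draw', widget: 'clock', x: 0, y: 0 }));
});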

COMPONENTS

RASPBERRY PI 3 (B+)

The Raspberry Pi is a credit-card-sized computer designed and manufactured in the United Kingdom by the Raspberry Pi Foundation. It does not include a built-in hard disk; instead, it uses a microSD card for booting and persistent storage. The Raspberry Pi is the backbone of this project and is used to fulfill all computational requirements. The Raspberry Pi computer has come out in various versions over the years; our project employs the Raspberry Pi 3 Model B+. The Raspberry Pi 3 Model B+ is the latest production Raspberry Pi 3, featuring a 64-bit quad-core processor running at 1.4 GHz. It incorporates built-in WiFi and Bluetooth connectivity, with enhanced dual-band 2.4 GHz and 5 GHz WiFi, Bluetooth 4.2/BLE, and faster Ethernet.
Mic

One mode of interaction with the smart mirror is through a microphone. A USB mic had to be used because the Raspberry Pi does not have a regular microphone input. The microphone is a cheap and simple one connected to the Pi through USB. The voice recognition system works as follows: when we say a special wake word (say, "Alexa"), the microphone is triggered to listen for a voice command.
Frame and support

The frame is made of wood and provides the support for the mirror and all the other components; everything was put together on this wooden frame. It frames the glass and provides a way to hang the mirror on a wall. The frame can be attached to and detached from the back part, making it easy to change the glass.
One-way glass
One-way mirrors are the same as two-way mirrors: the glass has a semi-transparent mirror coating, so it is see-through and reflective at the same time. How it looks depends on the lighting on either side of the mirror. Keep the back side dark and the mirror side bright to turn it into a mirror.

LCD DISPLAY

An LCD panel placed behind the mirror presents the desired interface to the user.
PI CAM
The Raspberry Pi camera module can be used to take high-definition video as well as still photographs. It supports 1080p30, 720p60, and VGA90 video modes, as well as still capture. It attaches via a 15 cm ribbon cable to the CSI port on the Raspberry Pi. Locate the camera port and connect the camera by gently pulling up on the plastic edges, pushing in the camera ribbon, and then pushing the plastic connector back into place.

HDMI - VGA

HDMI stands for High-Definition Multimedia Interface. HDMI cables are extremely fast at transmitting signals in comparison to their analog counterparts: even a basic HDMI cable can carry nearly 5 Gbps (gigabits per second), which ensures a rather quick signal transmission.
A Video Graphics Array connector is a three-row, 15-pin DE-15 connector. The 15-pin VGA connector was provided on many video cards, computer monitors, laptop computers, projectors, and high-definition television sets. A VGA cable is a type of computer cable that carries visual display data from the computer to the monitor. A complete VGA cable consists of a cable and a connector at each end, and the connectors are typically blue. A VGA cable is used primarily to link a computer to a display device: when the computer is running, the video card transmits video display signals via the VGA cable, which are then shown on the display device. VGA cables are available in different types; shorter cables with coaxial cores and insulation provide better video and display quality.

Software
All the software runs on the Raspberry Pi 3, and there are many operating systems to choose from. We chose Raspbian, which is the official Linux distribution from the Raspberry Pi Foundation.
To install it, we downloaded Raspbian from the official Raspberry Pi website, copied it onto a microSD card, inserted the card into the Raspberry Pi, started it, and followed the setup instructions, which are quite simple. Once Raspbian was installed, the first step was to update the distribution with the latest packages. The Raspberry Pi 3 is connected to a monitor via an HDMI cable and a webcam is attached. The Raspberry Pi 3 acts as the main control center for this model: the monitor gets its input from the Pi over HDMI, and voice commands can be given to the Pi using a microphone.

For friendly interaction with the mirror, we are going to include Amazon Alexa or Google Assistant, so that we can ask the mirror for whatever we want. The user interface shows the data on the mirror, while the empty space in between accommodates the reflection of the user. It provides the most basic common amenities, such as the weather for the city, the latest news headlines, and the local time corresponding to the location. Using speech processing techniques, the Smart Mirror therefore interacts with the user through verbal commands: it listens to the user's questions and responds to them adequately.

Technologies Used
Below is a list of the technologies we used in our project. Being a web-based application, it relies on a variety of web tools:

• HTML

• CSS

• JavaScript

• NodeJS

• Electron

• GitHub

• Raspberry Pi

• Amazon Alexa Voice Service

The primary hardware component for our system is the Raspberry Pi. In particular, we use the Raspberry Pi 3 (B+), as it offers additional processing power, more RAM, and onboard Bluetooth and WiFi for connectivity. The Raspberry Pi was selected for its ease of use and availability to the maker community. The Raspberry Pi is capable of running several flavors of Linux, all of which should be capable of running our software platform.
In order to display content, NodeJS and Electron were chosen. Electron builds multi-platform applications on a NodeJS base. NodeJS runs as the main server and hosts web content using JavaScript. The server also has extensions to interact with the Raspberry Pi's GPIO hardware, so future users may add additional IoT connectivity. Using Electron, a minimal web browser is launched to display the content hosted by the server. Together, the application can be used on a wide variety of computers and is highly scalable, exactly what was needed to meet the requirements.
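As a minimal sketch of this arrangement (illustrative only, not the project's actual code; the port number and page content are assumptions), a NodeJS HTTP server can host the content and Electron can display it full screen:

const http = require('http');
const { app, BrowserWindow } = require('electron');

// NodeJS acts as the main server and hosts the web content.
const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<body style="background:black;color:white">Smart Mirror UI</body>');
});
server.listen(8080);

// Electron launches a minimal, frameless browser pointed at that server.
app.whenReady().then(() => {
  const win = new BrowserWindow({ fullscreen: true, frame: false });
  win.loadURL('http://localhost:8080');
});

Saved as main.js, this would be launched with the Electron runtime (for example, npx electron main.js) after installing the electron package.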
Electron allows for the use of standard web technologies to implement a front end.
This allowed the use of HTML, CSS, and JavaScript to implement the entirety of the
system. Using standard and familiar technologies was important for speed of
implementation as well as future users who may wish to modify the system. Using
unfamiliar technologies and programming languages would increase the barrier for
entry and deter potential users who may not have as much of a technological
background. We also used an Electron package called ’electron-boilerplate.’ This
package provided a faster start to application development and allows for more
integrated testing as well as the potential to build release executables for multiple
systems.
In order to host the source code and other components of the project, a version
control system was necessary. GitHub was chosen for its familiarity and compatibility
with other technologies. GitHub allowed for multiple developers to work on the
project at once and keep track of version history.

SYSTEM INPUT DESIGN


The smart mirror updates its display in accordance with the user's commands.
An ENTITY-RELATIONSHIP diagram of the system inputs (figure omitted).
FACIAL RECOGNITION
Human beings perform face recognition automatically every day, practically without effort. Although it sounds like a very simple task for us, it has proven to be a complex task for a computer, as many variables can impair the accuracy of the methods, for example illumination variation, low resolution, and occlusion, among others. Face recognition is basically the task of recognizing a person based on a facial image. It has become very popular in the last two decades, mainly because of the new methods developed and the high quality of current videos and cameras.

Note that face recognition is different from face detection:

• Face Detection: has the objective of finding the faces (location and size) in an image and possibly extracting them to be used by the face recognition algorithm.

• Face Recognition: with the facial images already extracted, cropped, resized, and usually converted to grayscale, the face recognition algorithm is responsible for finding the characteristics which best describe the image.

There are different types of face recognition algorithms, for example:

• Eigenfaces (1991)

• Local Binary Patterns Histograms (LBPH) (1996)

• Fisherfaces (1997)

• Scale-Invariant Feature Transform (SIFT) (1999)

• Speeded-Up Robust Features (SURF) (2006)

Local Binary Patterns Histograms (LBPH)

It is one of the easiest face recognition algorithms. It can represent local features in images, and it is possible to get great results with it (mainly in a controlled environment). It is robust against monotonic grayscale transformations, and it is provided by the OpenCV library (Open Source Computer Vision Library).

Local Binary Pattern (LBP) is a simple yet very efficient texture operator which labels the
pixels of an image by thresholding the neighborhood of each pixel and considers the result as
a binary number.

It was first described in 1994 (LBP) and has since been found to be a powerful feature for
texture classification.

The LBPH uses 4 parameters:

• Radius: the radius is used to build the circular local binary pattern and represents the radius around the central pixel. It is usually set to 1.

• Neighbors: the number of sample points used to build the circular local binary pattern. Keep in mind: the more sample points you include, the higher the computational cost. It is usually set to 8.

• Grid X: the number of cells in the horizontal direction. The more cells, the finer the grid and the higher the dimensionality of the resulting feature vector. It is usually set to 8.

• Grid Y: the number of cells in the vertical direction. The more cells, the finer the grid and the higher the dimensionality of the resulting feature vector. It is usually set to 8.
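For reference, these four parameters and their usual defaults (which match OpenCV's LBPH defaults) can be captured in a small configuration object; the property names below are illustrative:

// Typical LBPH defaults described above.
const lbphParams = { radius: 1, neighbors: 8, gridX: 8, gridY: 8 };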

Training the Algorithm: First, we need to train the algorithm. We need to set an ID (it may
be a number or the name of the person) for each image, so the algorithm will use this information
to recognize an input image and give you an output. Images of the same person must have the
same ID. With the training set already constructed, let’s see the LBPH computational steps.

Applying the LBP operation: the first computational step of the LBPH is to create an intermediate image that describes the original image better, by highlighting its facial characteristics. To do so, the algorithm uses the concept of a sliding window, based on the parameters radius and neighbors.

Let's break this procedure into several small steps so we can understand it easily:

• Suppose we have a facial image in grayscale.

• We can take part of this image as a window of 3x3 pixels.

• It can also be represented as a 3x3 matrix containing the intensity of each pixel (0~255).

• Then, we take the central value of the matrix to be used as the threshold.

• This value will be used to define the new values of the 8 neighbors.

• For each neighbor of the central value (the threshold), we set a new binary value: 1 for values equal to or higher than the threshold, and 0 for values lower than the threshold.

• Now the matrix contains only binary values (ignoring the central value). We concatenate each binary value from each position of the matrix, line by line, into a new binary value (e.g. 10001101). Note: some authors use other orders to concatenate the binary values (e.g. clockwise direction), but the final result is the same.

• Then, we convert this binary value to a decimal value and set it as the central value of the matrix, which is actually a pixel of the original image.

• At the end of this procedure (the LBP procedure), we have a new image which better represents the characteristics of the original image.

• Note: the LBP procedure was later expanded to use a different radius and number of neighbors; this is called Circular LBP. It is done by using bilinear interpolation: if a sample point falls between pixels, the values of the 4 nearest pixels (2x2) are used to estimate the value of the new data point.
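To make these steps concrete, the following is a toy JavaScript sketch of the basic, non-circular 3x3 LBP operation on a grayscale image stored as a 2D array of 0~255 values. It illustrates the procedure described above and is not the project's actual code:

function lbpImage(gray) {
  const h = gray.length, w = gray[0].length;
  const out = Array.from({ length: h }, () => new Array(w).fill(0));
  // The 8 neighbors of the central pixel, read clockwise from the top-left.
  const offsets = [[-1,-1],[-1,0],[-1,1],[0,1],[1,1],[1,0],[1,-1],[0,-1]];
  for (let y = 1; y < h - 1; y++) {
    for (let x = 1; x < w - 1; x++) {
      const center = gray[y][x]; // central value used as the threshold
      let code = 0;
      for (const [dy, dx] of offsets) {
        // 1 for neighbors >= threshold, 0 otherwise, concatenated bit by bit.
        code = (code << 1) | (gray[y + dy][x + dx] >= center ? 1 : 0);
      }
      out[y][x] = code; // the binary number converted to decimal
    }
  }
  return out;
}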

Extracting the Histograms: now, using the image generated in the last step, we can use the Grid X and Grid Y parameters to divide the image into multiple grids. We can then extract the histogram of each region as follows:

• As we have an image in grayscale, each histogram (from each grid) will contain only 256 positions (0~255), representing the occurrences of each pixel intensity.

• Then, we need to concatenate each histogram to create a new, bigger histogram. Supposing we have 8x8 grids, there will be 8x8x256 = 16,384 positions in the final histogram. The final histogram represents the characteristics of the original image.
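Continuing the sketch, the grid histograms can be extracted and concatenated as follows (again illustrative; cells that do not divide the image evenly are simply truncated here):

function lbphHistogram(lbp, gridX = 8, gridY = 8) {
  const cellH = Math.floor(lbp.length / gridY);
  const cellW = Math.floor(lbp[0].length / gridX);
  const hist = [];
  for (let gy = 0; gy < gridY; gy++) {
    for (let gx = 0; gx < gridX; gx++) {
      const bins = new Array(256).fill(0); // one position per pixel intensity
      for (let y = gy * cellH; y < (gy + 1) * cellH; y++)
        for (let x = gx * cellW; x < (gx + 1) * cellW; x++)
          bins[lbp[y][x]]++;
      hist.push(...bins); // concatenate into one big final histogram
    }
  }
  return hist;
}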

Performing the face recognition: in this step, the algorithm has already been trained, and each histogram created represents one of the images from the training dataset. So, given an input image, we perform the steps again for this new image and create a histogram which represents it.

• To find the image that matches the input image, we just need to compare the histograms and return the image with the closest one.

• We can use various approaches to compare the histograms (that is, to calculate the distance between two histograms), for example Euclidean distance, chi-square, or absolute value. In this example, we can use the well-known Euclidean distance:

D = sqrt( Σ_i (hist1_i − hist2_i)² )

• So the algorithm output is the ID of the image with the closest histogram. The algorithm should also return the calculated distance, which can be used as a 'confidence' measurement. Note: don't be fooled by the name 'confidence'; lower confidences are better, because a lower value means the distance between the two histograms is smaller.

• We can then use a threshold on the 'confidence' to automatically estimate whether the algorithm has correctly recognized the image. We can assume that the algorithm has successfully recognized the face if the confidence is lower than the defined threshold.
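Finally, the matching step can be sketched as a nearest-neighbor search over the training histograms, using the Euclidean distance as the 'confidence'. The trainingSet structure, an array of { id, histogram } objects built from the labeled training images, is a hypothetical name for illustration:

function euclidean(h1, h2) {
  let sum = 0;
  for (let i = 0; i < h1.length; i++) sum += (h1[i] - h2[i]) ** 2;
  return Math.sqrt(sum);
}

function recognize(inputHist, trainingSet, threshold) {
  // Find the training histogram closest to the input histogram.
  let best = { id: null, confidence: Infinity };
  for (const { id, histogram } of trainingSet) {
    const d = euclidean(inputHist, histogram);
    if (d < best.confidence) best = { id, confidence: d };
  }
  // Accept the match only if the distance falls below the defined threshold.
  return best.confidence < threshold ? best : { id: null, confidence: best.confidence };
}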

Voice recognition

AMAZON ALEXA
Amazon Alexa, known simply as Alexa, is a virtual assistant developed by Amazon, first used in
the Amazon Echo and the Amazon Echo Dot smart speakers developed by Amazon Lab126. It is
capable of voice interaction, music playback, making to-do lists, setting alarms, providing
weather, traffic, sports, and other real-time information, such as news. Alexa can also control
several smart devices, using itself as a home automation system. Users are able to extend Alexa's capabilities by installing "skills" (additional functionality developed by third-party vendors, in other settings more commonly called apps, such as weather programs and audio features).

Alexa supports a multitude of subscription-based and free streaming services on Amazon devices.
These streaming services include: Prime Music, Amazon Music, Amazon Music Unlimited, Apple
Music, TuneIn, iHeartRadio, Audible, Pandora, and Spotify Premium.

Amazon allows device manufacturers to integrate Alexa voice capabilities into their own
connected products by using the Alexa Voice Service (AVS), a cloud-based service that provides
APIs to interface with Alexa. Products built using AVS have access to Alexa's growing list of
capabilities including all of the Alexa Skills. AVS provides cloud-based automatic speech
recognition (ASR) and natural language understanding (NLU). There are no fees for companies
looking to integrate Alexa into their products by using AVS.

Amazon allows developers to build and publish skills for Alexa using the Alexa Skills Kit.[52] These third-party developed skills, once published, are available across Alexa-enabled devices. Users can enable these skills using the Alexa app.
A "Smart Home Skill API"[53] is available, meant to be used by hardware manufacturers to allow
users to control smart home devices.[54]
Most skills run code almost entirely in the cloud, using Amazon's AWS Lambda service.[55]
In April 2018, Amazon launched Blueprints, a tool for individuals to build skills for their personal use.[56]
In February 2019, Amazon further expanded the capability of Blueprints by allowing customers to publish skills they've built with the templates to its Alexa Skills Store in the US, for use by anyone with an Alexa-enabled device.[57]

Alexa is Amazon's cloud-based voice service, available on over 100 million devices from Amazon and third-party device manufacturers. With Alexa, you can build natural voice experiences that offer customers a more intuitive way to interact with the technology they use every day. The Alexa Voice Service enables you to access cloud-based Alexa capabilities with the support of AVS APIs, hardware kits, and software tools. Alexa is always getting smarter, with new features and capabilities added in the cloud. When you connect your device-side client to the Alexa Voice Service, you give customers access to a wealth of information; all they have to do is ask. Alexa makes life easier by helping you multitask: set kitchen timers and wake-up alarms, manage reminders, add items to your shopping or to-do lists, and ask what's on your calendar.

Alexa skills are essentially apps which enable Amazon's voice assistant to connect to hardware and software to perform certain tasks. Amazon's Alexa is the voice-activated, interactive AI bot, or personal assistant, that lets people speak with their Amazon Echo, Echo Dot, and other Amazon smart home devices.

Amazon Web Services (AWS) is a secure cloud services platform, offering compute power,
database storage, content delivery and other functionality to help businesses scale and grow.

1) Create an Amazon developer account

2) Create a new AVS product

3) Create and enable a new security profile

4) Clone the Alexa sample app from GitHub

5) Update and run the install script

6) Start the AVS

7) Initiate the process of registering the RPi with Amazon

8) Complete the device registration process

9) Send Alexa a test command

10) Start the wake word engine

Voice assistants have definitely become a trend, with options like Google Assistant, Amazon
Alexa, and Apple’s Siri. What really separates them from each other is their ecosystems, and this
is where Amazon Alexa shines the most.

Amazon provides the API for using its voice service, Alexa. Integrating custom devices with it
allows you to bring the full Amazon Echo functionality to those devices and opens a wide range
of opportunities both for DIY/PoC and enterprise solutions.

DataArt developed an open solution that allows turning any Linux-based device into an Amazon
Echo device. Since Raspberry Pi is the most popular board for IoT projects, we decided to use it
in our setup. But basically, any Linux system can run the demo.

Below, you’ll find the latest version of an Amazon Alexa virtual device project (version 1.1).
This project aims to provide the ability to bring Alexa to any Linux device, including embedded
systems like Raspberry Pi boards.

At the end, you will be able to build a voice-activated, digital home assistant, which will answer
a multitude of questions, read books, play music, tell jokes, provide weather, and even more.

The binary release is packed into a snap package, which is a perfect way to deliver this project.

How to Add Alexa to a Raspberry Pi

1. You need to create your own Alexa Device on the Amazon developer portal; follow Amazon's manual to create your own device and security profile.
Add http://alexa.local:3000/authresponse to the Allowed Return URLs
and http://alexa.local:3000 to the Allowed Origins.
2. Connect an audio device (a microphone and speakers) to your device. It could be a USB headset, for example.
3. Install the PulseAudio snap: sudo snap install --devmode pulseaudio
4. Install the Alexa snap from the store: sudo snap install --channel beta alexa
5. Open http://alexa.local:3000 in a web browser on a local device or a device on the same
network. Note: The app provides an mDNS advertisement of the local domain alexa.local.
This is very helpful for using with monitorless devices.
6. Fill in the device credentials that were created during step 1, then click ‘log in’. Note: The
voice detection threshold is a float value for adjusting voice detection. The smaller the value,
the easier it is to trigger. You may need to adjust it for your mic and voice.
7. Fill in your Amazon credentials.
8. Now you can speak with Alexa. The app uses voice activation, so say ‘Alexa’ and the phrase
that you want to say to her. The app will make a beep via the speakers when it hears the
‘Alexa’ keyword and starts recording.
9. Enjoy Alexa without the need to buy special hardware.

Set Up Raspberry Pi

Download Raspbian OS and image your SD card


Before you can start your Raspberry Pi, you'll need to use your Linux, macOS, or Windows PC to load an operating system onto an SD card to boot the Pi from. We'll use Raspbian NOOBS version 2.9.0 for this tutorial. On your PC, download the NOOBS_v2_9_0.zip file from the Raspberry Pi downloads page, then unzip it locally.

After unzipping the download, open the NOOBS_v2_9_0 folder and drag and drop the entire folder's contents onto an empty 8/16/32 GB microSD card. Now you're ready to insert the microSD card into your Pi.

Assemble your Raspberry Pi
1. Check that your micro SD card is inserted into the micro SD card slot on your Pi. The SD
card contacts should face up.

2. Plug in the USB microphone and install your earbuds or speaker into the 3.5mm audio
jack on your Pi.

3. Connect the keyboard and mouse via the USB ports. Make sure you don't cover the USB
mic.

4. Connect your monitor using the HDMI port.

5. Connect the Ethernet Cable (if not using Wi-Fi)

Start up your Raspberry Pi
Plug in the power supply to the micro USB connector on the Pi. You should see a loading screen
go through some startup steps before booting to desktop. If you run into any errors upon startup,
try booting from a different pre-imaged micro SD card. If nothing happens after that, ensure your
micro SD card is inserted upside-down (contacts facing up).

Upon successful startup, you'll be prompted to select an operating system and locale. Check the
box next to Raspbian [RECOMMENDED] and select your language and keyboard preferences
at the bottom of the screen. Once you've selected your preferences, click "Install" in the upper
left corner of the pop-up box.

Note: if you don't have the correct locale/keyboard selected, you might get password errors.
Make sure to use the correct keyboard layout for your keyboard - you can test it when selecting
to ensure any special characters you type show up correctly.

It should take around 15 minutes to fully install. Once it's done, you'll see a success message -
click OK and you'll be taken to a desktop. Select your locale/language/keyboard but feel free to
close the window when it asks you to change your password or do any other setup steps.

If you missed the option, you can always check your Keyboard Configuration by clicking on the
Raspberry icon in the top right and selecting Mouse and Keyboard settings from
the Preferences menu. Click Keyboard Layout from the Keyboard tab and select the
appropriate configuration. Make sure whatever you pick, you can write characters such as
"quotes" and @ symbols.

If you're not using wired Ethernet, activate 2.4 GHz Wi-Fi on your Raspberry Pi by clicking on
the connectivity icon in the top-right corner of the toolbar and selecting your SSID. Note that
the Raspberry Pi 3B doesn't work with 5 GHz Wi-Fi.

To make sure your sound will output over the 3.5mm audio jack, right-click on the speaker
graphic in the upper-right hand corner of your Pi and select "Analog".

Verify connectivity by opening a web browser – click on the globe icon in the top-left toolbar.

Input AVS Credentials

Download the AVS Device SDK


Now that your device has internet connectivity, open a terminal by clicking on the black window
logo near the top left corner of your toolbar.

Your terminal should come up in the /home/pi directory. Let's start by upgrading the installed packages to ensure you have access to the required dependencies. Note that the AVS Device SDK v1.10.0 requires PulseAudio to be installed and BlueAlsa to be disabled.

Copy and paste the following command into your terminal window and hit return to run the upgrade:

sudo apt-get upgrade

Now, let's get the SDK installation and configuration scripts. Copy and paste the following into
your terminal and hit return:

wget https://raw.githubusercontent.com/alexa/avs-device-sdk/master/tools/Install/setup.sh \

wget https://raw.githubusercontent.com/alexa/avs-device-sdk/master/tools/Install/genConfig.sh \

wget https://raw.githubusercontent.com/alexa/avs-device-sdk/master/tools/Install/pi.sh

Download your credentials
If you didn't already save it to your Pi when creating your product profile, it's time to get
your config.json file onto your client device. Start by opening a browser and logging into
your AVS dashboard. Click on your Product Name, it should be AVS Tutorials Project or
whatever you named it when creating the product profile.

This will take you to a product menu - on the left side you should see Product Details. Select
Security Profile below that and choose Other devices and platforms from the Web -
Android/Kindle - iOS - Other devices and platforms menu.

When you click the Download button on your Security Profile in your web browser, you'll see
a config.json file appear in your home/pi/downloads folder. In the file manager, copy this file
from the /downloads folder and place it in your home/pi folder as shown in the picture below.

Now that your Raspberry Pi has your own unique credentials loaded on it, let's build the AVS
Device SDK to voice-enable your prototype.

Build the AVS Device SDK

You are now ready to run the install script. This will install all dependencies, including
the Wake Word Engine(WWE) from Sensory. The WWE compares incoming audio to an
onboard model of a wake word ("Alexa") and will initiate the transmission of audio to the cloud
when triggered. Note that this WWE is provided for prototyping purposes only and would need
to be licensed for a commercial device. The AVS Device SDK is modular and flexible. When
you're ready to build your product, you can choose any WWE you prefer. Remember that for
AVS products, the wake word must be Alexa so that your customers aren't confused about how
to interact with your device.

To run the install script, open a terminal by clicking on the console window in the Pi's toolbar in
the upper-left corner of the screen (or just use your existing terminal window). You should see
a setup.sh script in your /home/pi/ directory. This pulls the credentials from your config.json file
to run the install script. To launch the setup script, copy and paste the following command into
your terminal window and hit return:

cd /home/pi/

sudo bash setup.sh config.json [-s 1234]

Note that the field in square brackets is the Device Serial Number, which will be unique to each instance of the SDK. In this case it's pre-populated with 1234.

Type "AGREE" when it prompts you to accept the licensing terms from our third-party
libraries. Unless, of course, you disagree!

This will kick off the installation process which could take over 20 minutes. Note that about 15
minutes into the install, it's going to pause and ask you to accept Sensory Wake Word's terms
and conditions (you'll need to hit "return" and then type "yes" to accept).

Once you've finished compiling, you should see a success screen similar to the one shown here.
If your device freezes up - don't worry, just restart by unplugging your Pi's power cord. When
you get back to your desktop, re-run the above setup.sh command to finish your install.

Now you just need to launch the sample app and get a refresh token from AVS so your device
can authenticate with the cloud via Login With Amazon (LWA).

Talk with Alexa

Congratulations on creating your first prototype with Alexa built-in! If your sample app is still
running, you should see ASCII art indicating that Alexa's status is IDLE, meaning the client is
waiting for you to initiate a conversation.

To test your prototype, put your earbuds in and say "Alexa" into the microphone to trigger the Wake Word Engine. Because you're using an inexpensive USB microphone, you might need to speak closer to the microphone to ensure your voice is heard. When you build a commercial Alexa-enabled product, you can use a high-performance Audio Front End (AFE) to pick up your customer's voice. Check out the dev kits for AVS page to learn more.

When you say Alexa, you should see a bunch of messages scroll in your terminal window. One
of those will show the status changing to Listening, indicating the wake word has been
recognized. Then say "Tell me a joke." If Alexa responds with Thinking..., then Speaking, you
have a working prototype… and probably, a very bad joke.

If you don't hear anything over your earbuds, check that your audio output is set to "Analog" by
right-clicking on the speaker icon in the top-right corner of your Pi's screen. Your audio output
might be set to HDMI by default. Also ensure your speaker/earbuds are turned on and plugged
into your Raspberry Pi's 3.5mm audio jack.

If Alexa isn't responding or your Sample App appears stuck (or displaying error messages when
you try to speak), just type "s" and hit return to stop that interaction. You can also type "q" and
hit return to exit from the Sample App, or just close the terminal window to force quit the
Sample App.

At any time, to relaunch the sample app you can just type the following into a terminal window:

cd /home/pi/

sudo bash startsample.sh

Talk with Alexa!
 Say "Alexa", then ask "What time is it?"

 Say "Alexa", then ask "What's the volume of a sphere?"

 Say "Alexa", then say "Who is Rich King?"

 Say "Alexa", then say "How did the stock market do today?"

 Say "Alexa", then ask "Where were you born?"

 Say "Alexa", then say "How do you say friend in Russian?"


Try a multi-turn interaction
Say "Alexa", then ask "Set an alarm". When she asks what time, just say a number, like "8".
She'll want to know if that's AM or PM.

You probably noticed that despite having a bit of back and forth with Alexa, you only said the
wake word onceat the start of the conversation. This is called a multi-turn interaction and it's a
more natural method of communication besscause you can continue speaking without starting
every phrase with "Alexa."

In your terminal window, you can scroll up until you see the UI state Listening…. Right above
that you'll see that the state of the Audio Input Processor (AIP) has changed
from IDLE to EXPECTING_SPEECH and then RECOGNIZING - without you speaking the
wake word!

Alexa only listens when customers indicate that they want to speak to the cloud. Typically,
this means the AIP was triggered by the Wake Word Engine running on the client. In this case,
it's been activated via a Directive delivered down to your client from the cloud. When Alexa
asks you a question, the AIP activates because it knows the customer wants to provide a
response.

You can learn more about multi-turn interactions in the AVS documentation.

Other multi-turn interactions to try
 Say "Alexa, Wikipedia." You'll have the option to request information on a number of topics
without speaking the wake word before the subject.

 Say "Alexa, let's chat." Initiate a conversation with a chatbot.

 Say "Alexa, play Yes Sire." Play a medieval-themed game using your voice, entirely in
multi-turn interactions.
Alexa is multi-lingual
Today, Alexa can speak Japanese, German, and many global dialects of English. To try some of these settings, type c and hit return, then type 1 to see the language options. When you ship your product with Alexa Built-in and your customer wishes to change languages, your device will need to send a SettingsUpdated event to the cloud.

There's lots more to learn - let's start by setting our device location!

Applications

The designed mirror has the advantages of small size, simple operation, and low cost, and it has broad application prospects.

1) The need for smart mirrors in the automotive sector has led to active growth of their market; the presence of embedded electronics makes smart mirrors more popular.
2) The implementation of activated light sensors and electrochromic smart mirrors has surely improved road safety by reducing driver failure.
3) Smart mirrors are effectively being utilized in applications such as advertising, healthcare, and consumer products, among others.
4) People could have their email messages, trending tweets or Facebook posts, breaking news, and alert messages on their smart mirrors.
5) The mirror also provides picture-in-picture to facilitate the display of services such as maps and videos via YouTube.
6) The user doesn't even have to worry about turning the system on and off, because the mirror will detect motion and do the work for them.
7) The facial recognition technology used can be further enhanced as a means of security. Adding security means that no one can try to access sensitive data that may be displayed on your mirror via the use of APIs.
8) The Smart Mirror has scope in the field of IoT and home automation. The Smart Mirror can be connected to home appliances, mobile devices, etc., which can expand the functionality of the mirror.

Conclusion

Understanding user needs and expectations is key to the successful development of smart mirrors. This smart mirror aims to reduce, and possibly eliminate, the need for the user to make time in their daily morning or nightly routine to check a PC, tablet, or smartphone for the information they need. We are making efforts to design an efficient system for effective time management and productivity. The system works on voice commands, which lets users interact with it easily without memorizing commands, because it accepts the user's natural language. Through this, the user can easily communicate with the living room environment around them, so they don't have to check their mobile phone every time they need information; they can just ask the system for the data they need. In the future there may be much more advancement in this project, and we may see it in our smart homes.

