
MODULE 1

(Mobile Application Development)

SUBMITTED BY: Barsha Singh


Charanjeet Singh
Somya Nandanwar
Parayani Berteria

INTRODUCTION
Mobile Computing is an umbrella term used to describe technologies that enable
people to access network services anytime, anywhere.
A communication device can exhibit any one of the following characteristics:
Fixed and wired: This configuration describes the typical desktop computer
in an office. Neither the weight nor the power consumption of the devices allows
for mobile usage. The devices use fixed networks for performance reasons.
Mobile and wired: Many of today's laptops fall into this category; users
carry the laptop from one hotel to the next, reconnecting to the company's
network via the telephone network and a modem.
Fixed and wireless: This mode is used for installing networks, e.g., in
historical buildings to avoid damage caused by laying wires, or at trade shows to
ensure fast network setup.
Mobile and wireless: This is the most interesting case. No cable restricts the
user, who can roam between different wireless networks.

Limitations of Mobile Computing

Resource constraints: Devices run on limited battery power.
Bandwidth: Although transmission rates are continuously increasing, they
are still very low for wireless devices compared to desktop systems. Local
wireless systems reach some Mbit/s, while wide-area systems offer only some
10 kbit/s.
Lower security, simpler to attack: Not only can portable devices be stolen
more easily, but the radio interface is also prone to the dangers of
eavesdropping. Wireless access must always include encryption,
authentication, and other security mechanisms, and these must be efficient
and simple to use.
Regulations and spectrum: Frequencies have to be coordinated, and
unfortunately only a very limited amount of spectrum is available (for
technical and political reasons).

GSM (Global System for Mobile communication)

GSM is a digital mobile telephony system that is widely used for mobile
communication. GSM uses a variation of time division multiple access (TDMA)
and is the most widely used of the three digital wireless telephony technologies
(TDMA, GSM, and CDMA). GSM digitizes and compresses data, then sends it
down a channel with two other streams of user data, each in its own time slot. It
operates in either the 900 MHz or the 1800 MHz frequency band.
Time division multiple access (TDMA) is a channel access method for shared-medium
networks. It allows several users to share the same frequency channel by
dividing the signal into different time slots. The users transmit in rapid succession,
one after the other, each using its own time slot.
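The slot-sharing idea can be sketched in a few lines of Java. This is a toy illustration only, not real GSM signaling: it assumes a fixed frame of 8 slots (as in a GSM carrier) cycling through users in round-robin order.

```java
// Minimal sketch (not real GSM code): round-robin TDMA slot assignment.
// A frame of N slots is shared by N users; each slot belongs to one user.
public class TdmaDemo {
    // Returns which user may transmit in the given time slot.
    static int ownerOfSlot(int slot, int numUsers) {
        return slot % numUsers; // slots cycle through users in fixed order
    }

    public static void main(String[] args) {
        int users = 8; // a GSM carrier is divided into 8 time slots
        for (int slot = 0; slot < 16; slot++) {
            System.out.println("slot " + slot + " -> user " + ownerOfSlot(slot, users));
        }
    }
}
```

Each user thus transmits in short bursts but, over a full frame, gets a fixed share of the channel.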

Code division multiple access (CDMA) is a channel access method used by
various radio communication technologies.
A multiple access method allows several terminals connected to the same
multipoint transmission medium to transmit over it and to share its capacity.
Examples of shared physical media are wireless networks, bus networks, ring
networks and half-duplex point-to-point links.
CDMA is an example of multiple access, in which several transmitters can
send information simultaneously over a single communication channel. This allows
several users to share a band of frequencies (see bandwidth). To permit this without
undue interference between the users, CDMA employs spread-spectrum
technology and a special coding scheme (in which each transmitter is
assigned a code).
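The coding scheme can be illustrated with a toy Java example. This is a sketch of direct-sequence spreading with orthogonal Walsh codes, not a real radio implementation: two users transmit at the same time, the channel carries the sum of their signals, yet correlating with each user's code recovers that user's bit.

```java
// Minimal sketch of direct-sequence CDMA using orthogonal (Walsh) codes.
// Two users transmit simultaneously; the shared channel is the sum of their
// spread signals, yet each bit can be recovered with the matching code.
public class CdmaDemo {
    static final int[] CODE_A = { 1, 1, 1, 1 };   // Walsh codes: orthogonal,
    static final int[] CODE_B = { 1, -1, 1, -1 }; // their dot product is 0

    // Spread one data bit (+1 or -1) across the chips of a code.
    static int[] spread(int bit, int[] code) {
        int[] chips = new int[code.length];
        for (int i = 0; i < code.length; i++) chips[i] = bit * code[i];
        return chips;
    }

    // Despread: correlate the combined channel with one user's code.
    static int despread(int[] channel, int[] code) {
        int sum = 0;
        for (int i = 0; i < code.length; i++) sum += channel[i] * code[i];
        return sum > 0 ? 1 : -1; // the sign of the correlation recovers the bit
    }

    public static void main(String[] args) {
        int bitA = 1, bitB = -1;
        int[] a = spread(bitA, CODE_A), b = spread(bitB, CODE_B);
        int[] channel = new int[CODE_A.length];
        for (int i = 0; i < channel.length; i++) channel[i] = a[i] + b[i];
        System.out.println("user A received: " + despread(channel, CODE_A)); // 1
        System.out.println("user B received: " + despread(channel, CODE_B)); // -1
    }
}
```

Because the codes are orthogonal, each receiver sees the other user's signal average out to zero; that is the "special coding scheme" in a nutshell.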

                      CDMA                         GSM
Stands for            Code Division Multiple       Global System for Mobile
                      Access                       communication
Storage Type          Internal memory              SIM (subscriber identity
                                                   module) card
Global market share   25%                          75%
Frequency band        Single (850 MHz)             Multiple (850/900/1800/1900 MHz)

Introduction to Android development


What is Android?
Android is the world's most popular operating system for mobile devices and
tablets. It is an open-source operating system, created by Google, and available to
all kinds of developers with various expertise levels, ranging from rookie to
professional.
(The term 'open source' sounds pretty familiar, doesn't it? Open source means
software whose source code is available for modification and is bound to an
open-source license agreement. More about open-source terminology can be
found online.)

From a developer's perspective, Android is a Linux-based operating system for
smartphones and tablets. It includes a touch-screen user interface, widgets, camera,
network data monitoring and all the other features that enable a cell phone to be
called a smartphone. Android is a platform that supports various applications,
available through the Google Play Store. The Android platform also allows end
users to develop, install and use their own applications on top of the Android
framework. The Android framework is licensed under the Apache License, with
Android application developers holding the right to distribute their applications
under their own customized license. Like most software, Android is released in
versions. Google has also assigned names to its versions since April 2009. Below
are all the versions of Android released to date:

Version No.     Name                    For:
1.0             Android Beta            Phone
1.1             Android                 Phone
1.5             Cupcake                 Phone
1.6             Donut                   Phone
2.0/2.1         Éclair                  Phone
2.2.x           Froyo                   Phone
2.3.x           Gingerbread             Phone
3.x             Honeycomb               Tablet
4.0.x           Ice Cream Sandwich      Phone and Tablet
4.1/4.2         Jelly Bean              Phone and Tablet

Understanding Android
To begin development on Android, even at the application level, it is paramount
to understand the basic internal architecture. Knowing how things are arranged
inside helps us understand the application framework better, so we can design
the application in a better way.
Android is an OS based on Linux. Hence, deep inside, Android is pretty similar to
Linux. To begin our dive into the Android internals, let us look at an architectural
diagram.

Applications
The diagram shows four basic apps (App 1, App 2, App 3 and App 4), just to give
the idea that there can be multiple apps sitting on top of Android. These apps are
like any user interface you use on Android; for example, when you use a music
player, the GUI with buttons to play, pause, seek, etc. is an application. Similarly,
there is an app for making calls, a camera app, and so on. These apps are not
necessarily from Google. Anyone can develop an app and make it available to
everyone through the Google Play Store. These apps are developed in Java, and
are installed directly, without the need to integrate with the Android OS.
Application Framework
Scratching further below the applications, we reach the application framework,
which application developers can leverage in developing Android applications. The
framework offers a huge set of APIs used by developers for various standard
purposes, so that they don't have to code every basic task. The framework consists
of certain entities; the major ones are:
Activity Manager
This manages the activities that govern the application life cycle, which has
several states. An application may have multiple activities, each with its
own life cycle. However, there is one main activity that starts when the
application is launched. Generally, each activity in an application is given a
window that has its own layout and user interface. An activity is stopped
when another starts, and control returns to the window that initiated it through an
activity callback.
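The life-cycle states an activity moves through can be sketched as a simple state machine. The class below is a hypothetical illustration only, not the real android.app.Activity API; the comments note where the real framework would fire callbacks such as onStart() or onPause().

```java
// Hypothetical sketch (not the real Android API): models the activity
// life-cycle states managed by the Activity Manager as a state machine.
import java.util.ArrayList;
import java.util.List;

public class ActivityLifecycle {
    enum State { CREATED, STARTED, RESUMED, PAUSED, STOPPED, DESTROYED }

    State state = State.CREATED;
    final List<String> log = new ArrayList<>();

    void transition(State next) {
        // In real Android, callbacks like onStart()/onPause() fire here.
        log.add(state + " -> " + next);
        state = next;
    }

    public static void main(String[] args) {
        ActivityLifecycle main = new ActivityLifecycle();
        main.transition(State.STARTED);
        main.transition(State.RESUMED);  // activity is in the foreground
        main.transition(State.PAUSED);   // another activity starts on top
        main.transition(State.RESUMED);  // user navigates back to it
        main.log.forEach(System.out::println);
    }
}
```

The point of the sketch is that an activity is never simply "running": the Activity Manager always holds it in one of these named states and notifies it on every transition.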
Notification Manager
This manager enables the applications to create customized alerts
Views
Views are used to create layouts, including components such as grids, lists,
buttons, etc.
Resource Manager
Applications do require external resources, such as graphics, external
strings, etc. All these resources are managed by the resource manager, which
makes them available in a standardized way.
Content Provider
Applications also share data. From time to time, one application may need
some data from another application. For example, an international calling
application will need to access the user's address book. This access to
another application's data is enabled by the content providers.
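The content-provider idea can be sketched in plain Java. This is a toy model; the real android.content.ContentProvider API differs (it works with Cursors and Uri objects). The key point it shows is that data is addressed by a content-URI-like string and all access from other apps goes through a query method, never through direct storage access.

```java
// Toy model of a content provider: an "address book" one app exposes
// and another app queries by URI. Not the real android.content API.
import java.util.List;
import java.util.Map;

public class ContentProviderDemo {
    // The provider's data, keyed by a content-URI-like string.
    static final Map<String, List<String>> TABLE =
        Map.of("content://contacts/people", List.of("Alice", "Bob"));

    // The provider mediates all access; callers only know the URI.
    static List<String> query(String uri) {
        return TABLE.getOrDefault(uri, List.of());
    }

    public static void main(String[] args) {
        // A calling app (e.g. an international dialer) reads another app's
        // contacts through the provider, never touching its storage directly.
        System.out.println(query("content://contacts/people"));
    }
}
```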
Libraries
This layer holds the Android native libraries. These libraries are written in C/C++
and offer capabilities similar to the layer above, while sitting on top of the kernel.
A few of the major native libraries include:
Surface Manager: Manages the display and the compositing window manager.
Media Framework: Supports various audio and video formats and codecs,
including their playback and recording.
System C Library: A standard C library (libc) targeted at ARM or
embedded devices.
OpenGL ES Libraries: Graphics libraries for rendering 2D and 3D graphics.
SQLite: A database engine for Android.

Android Runtime
The Android runtime consists of the Dalvik Virtual Machine, a virtual machine
for embedded devices which, like any other virtual machine, is a bytecode
interpreter. "For embedded devices" means it is designed for environments that
are low on memory, comparatively slow, and running on battery power. Besides
the Dalvik Virtual Machine, the runtime also includes the core libraries, which
are Java libraries available on all devices.
Kernel
The Android OS is derived from Linux Kernel 2.6 and is actually created from
Linux source, compiled for mobile devices. The memory management, process
management etc. are mostly similar. The kernel acts as a Hardware Abstraction
Layer between hardware and the Android software stack.
Android SDK
As already mentioned, Android is open source, and hence the source code is
available to all developers. You can download, build and work on Android in a
number of different ways; it all depends on what you want to do. If your goal is
to develop an Android application, you don't necessarily need to download all
the source: Google provides the Android SDK for application development.
Google recommends the Eclipse IDE, for which there is an Android Developer
Tools (ADT) plugin, through which you can install the specific SDK, create
projects, launch emulators, debug, etc. You can see more details of Eclipse and
ADT on Android's official website for developers:
http://developer.android.com/sdk/index.html
Android Development for Windows Users
Android as of now does not support building on Windows, so if you want to
modify the Android OS itself, you'll have to use Linux (see building the Android
OS). However, on Windows we do have tools and plugins for application and
native Android development. Here we will talk about setting up basic Android
development tools on Windows.
Downloading the Android SDK and developer tools
Google provides a convenient bundle to download and set up Android for Windows
developers, available under the name ADT Bundle for Windows. The exact name
of the file you download will depend on your OS architecture (32- vs 64-bit), but
in my case (64-bit Windows 7) the downloaded zip file is
adt-bundle-windows-x86_64.zip. Extracting the zip file, I have the contents shown
in the following snapshot:

First of all, we have Eclipse, which is the IDE for writing source code. As an IDE,
it provides the environment for developing Android applications. Android
applications are developed primarily in Java. Next we have the 'sdk', which does
not include any of the source. However, it holds the already-built platform tools,
tools, images and some platform-specific libraries. When we say building Android
is not supported on Windows, we mean that we can't compile system images and
tools. However, the other sources needed for application development are available
through the SDK Manager, which is the third entity present in the extracted zip
file.


So, let's download the sources! Double click the SDKManager.exe. You'll see
something like this:


This is the SDK Manager, via which we can install or remove whichever version
of the SDK we want. As you can see, it shows that the Android SDK Tools and
Android SDK Platform Tools are already installed.
The latest Android version available as of this writing is 4.2, but with the SDK
Manager we can download and install any of the previous versions too. Now let
us play around with the latest Android, i.e. 4.2, also known as Jelly Bean. Select
the check box for "Android 4.2 (API 17)", which will select everything required
for and under Android 4.2.


In all, as we can see, the SDK Manager found 6 packages that need installation.
Click the "Install 6 packages" button. We see another dialog box with the package
descriptions and license.

Select "Accept All" and click Install, which will initiate the download and then
the installation. When done, you will see "Installed" in front of all the selected
packages.
From there, we can create our own virtual device or use one of the standard
devices available. To create a new virtual device, click 'New', as in the following
snapshot:

Another window will pop up to take inputs for device type, target processor,
memory, etc.


You can also select pre-loaded options that correspond to the specifications of
existing Android devices. To see them in the Android Virtual Device Manager, go
to the Device Definitions tab.


We select the first in our list, "Nexus S by Google", and add a virtual device by
clicking "Create AVD". The following dialog box requires the "Target" and "CPU"
to be specified along with the size of the SD Card.


We will set Target to "Android 4.2 - API Level 17", CPU to "ARM" and SD Card
size to "1024 MiB", then click "OK". We can now see the newly defined virtual
device in the AVD list.


To launch the emulator, select it and click "Start".


Frameworks
There are scores of such frameworks to choose from these days, and the best are
getting quite sophisticated. Yet there is still considerable grumbling about the
state of mobile cross-platform frameworks. They may be fine for the majority of
Android apps being developed, yet few seem to be capable of handling all the
requirements of a professional-quality enterprise or consumer app.
If you're familiar with Java and Eclipse, and Android is initially the sole
destination, Google's Android SDK and the related Android Development Tools
(ADT) Eclipse plugin are probably the better choice. The problem is that most app
publishers prefer to start with iOS, or else ship on iOS and Android simultaneously,
with perhaps a BlackBerry or Windows Phone version as well. Others lack the
experience to go native.
Nevertheless, the official tools deserve a look, as it's usually difficult to port one's
cross-platform effort to the Android SDK mid-stream. So we'll start with Google's
tools before moving on to the multi-platform frameworks.

SDK
Android software development is the process by which new applications are
created for the Android operating system. Applications are usually developed in
the Java programming language using the Android Software Development Kit
(SDK), but other development environments are also available.


As of July 2013, more than one million applications had been developed for
Android, with over 25 billion downloads. Research from June 2011 indicated
that over 67% of mobile developers used the platform at the time of
publication.[5] In Q2 2012, around 105 million Android smartphones were
shipped, a 68% share of all smartphone sales up to Q2 2012.
Android SDK
The Android software development kit (SDK) includes a comprehensive set of
development tools.[8] These include a debugger, libraries, a handset emulator based
on QEMU, documentation, sample code, and tutorials. Currently supported
development platforms include computers running Linux (any modern desktop
Linux distribution), Mac OS X 10.5.8 or later, and Windows XP or later. Android
software can also be developed on Android itself using specialized Android
applications.[9][10][11] Until 2014, the officially supported integrated
development environment (IDE) was Eclipse with the Android Development Tools
(ADT) plugin, though the IntelliJ IDEA IDE (all editions) fully supports Android
development out of the box,[12] and the NetBeans IDE also supports Android
development via a plugin.[13] As of 2015, Android Studio,[14] made by Google and
powered by IntelliJ, is the official IDE; however, developers are free to use others.
Additionally, developers may use any text editor to edit Java and XML files, then
use command-line tools (the Java Development Kit and Apache Ant are required)
to create, build and debug Android applications, as well as control attached
Android devices (e.g., triggering a reboot or installing software packages
remotely).[15]
Enhancements to Android's SDK go hand in hand with overall Android
platform development. The SDK also supports older versions of the Android
platform in case developers wish to target their applications at older devices.

Development tools are downloadable components, so after one has downloaded the
latest version and platform, older platforms and tools can also be downloaded for
compatibility testing.
Generic UI Building Blocks (GUIBBs)
A GUIBB is a UI-guideline-conforming template for displaying content in an
application.
You can use Floorplan Manager (FPM) to compile application-specific views
(UIBBs), realized as Web Dynpro components in other applications, into new FPM
applications. However, there is generally a high level of variance in the display and
navigational behavior of these freestyle UIBBs. They also cannot be configured by
the FPM framework.
GUIBBs make it possible to improve the uniformity of these user-specific views;
they provide an application with a harmonized look and feel.
GUIBBs are fully integrated into the FPM framework. Among other things, they
take care of the UI layout, spacing, and certain product standards such as
accessibility. GUIBBs provide you with 'code-free' configuration of the UI.
GUIBBs are generic configurations based on feeder classes. There is a complete
separation of the UI and the business logic; the business logic is contained in the
feeder class. The application, at design time, defines the data to be displayed along
with a configuration. The concrete display of the data on the UI is only determined
by the GUIBB at runtime; this is done automatically using the configuration
provided and the data to be displayed specified by the feeder class.
GUIBBs can also be created dynamically at runtime; the FPM framework indicates
the presence of such GUIBBs at design time in the Message Area of the FPM
configuration editor, FLUID.
User Interface

Each activity may contain different views and view groups in a hierarchical
tree. You can visualize this tree with the view groups and layout objects as
the trunk and branches (because view-group objects can be cast into views)
and with the views or widgets as the leaves. A single view, in its most basic
form, is a drawable rectangle. A view group, in its most basic form, is
an object containing one or more views. This object hierarchy allows you
to lay out complex user interfaces without having to go through the error-prone
process of calculating view rectangles and widget overlap maps. If,
on the other hand, that sort of thing is your bag, Android will stay out of
the way and let you render in the style of Java ME's hand-drawn game
canvas.
Views come in three major food groups:
XML-defined widgets/views and view groups: Good for basic information
display and menus
Android native views: TextViews, LayoutGroups, ScrollBars, and text entry
Custom views: The game programmer's best friend
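The view/view-group tree described above is essentially the composite pattern: view groups are branches that can hold views or other view groups, and plain views are leaves. The class names below are illustrative only, not the real android.view API.

```java
// Sketch of the view hierarchy as a composite: a ViewGroup is itself a
// View, so trees of arbitrary depth can be built. Illustrative names only.
import java.util.ArrayList;
import java.util.List;

public class ViewTreeDemo {
    static class View {
        int countViews() { return 1; } // a leaf counts as one view
    }

    static class ViewGroup extends View { // a view group can be cast to a view
        final List<View> children = new ArrayList<>();
        ViewGroup add(View v) { children.add(v); return this; }

        @Override int countViews() {
            int n = 1; // count this group itself
            for (View c : children) n += c.countViews();
            return n;
        }
    }

    public static void main(String[] args) {
        ViewGroup root = new ViewGroup();            // the "trunk"
        root.add(new View())                         // a leaf widget, e.g. a button
            .add(new ViewGroup().add(new View()));   // a nested layout with one leaf
        System.out.println(root.countViews()); // prints 4
    }
}
```

Walking such a tree is exactly how a layout pass works: the framework recurses through the branches and measures/draws each leaf.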

Linear Layout
All elements are arranged in a descending column from top to bottom, or left to
right. Each element can have gravity and weight properties that denote how it
dynamically grows and shrinks to fill space. Elements arrange themselves in a
row or column.
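A minimal layout sketch of this (element names and attribute values are illustrative): a vertical LinearLayout with a fixed-height header, where the second child uses layout_weight to absorb the remaining vertical space.

```xml
<!-- Illustrative sketch: vertical LinearLayout; layout_weight controls
     how leftover space is shared among children, as described above. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <TextView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="Header" />

    <ListView
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1" />
</LinearLayout>
```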
Relative Layout
Each child element is laid out in relation to other child elements. Relationships
can be established so that a child will start where a previous child ends. Children
can relate only to elements that are listed before them, so build your dependencies
from the beginning of the XML file to the end.
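A minimal sketch of this ordering rule (IDs and attributes are illustrative): the second view anchors itself below the first, so the dependency runs from an earlier element to a later one, never backwards.

```xml
<!-- Illustrative sketch: the EditText positions itself relative to the
     TextView declared before it, per the forward-dependency rule above. -->
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <TextView
        android:id="@+id/label"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Name:" />

    <EditText
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_below="@id/label" />
</RelativeLayout>
```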
Absolute Layout
Each child must be given a specific location within the bounds of the parent
layout object. The AbsoluteLayout object is probably the easiest to build and
visualize, but the hardest to migrate to a new device or screen size.
Table Layout
TableLayout is a layout object that arranges its children into rows and columns,
with each row specified as a table row.

Android users
Android users are people who use Android devices such as phones, tablets, PCs,
and televisions.
There are approximately 67 million Android users in India, and the number
continues to rise day by day. Beyond India's 67 million, the worldwide Android
user base is on the order of a billion. This category is thus very large and
continues to grow.

VOICE USER INTERFACE (VUI)

A voice user interface (VUI) makes human interaction with computers possible
through a voice/speech platform in order to initiate an automated service or
process.
A VUI is the interface to any speech application. Controlling a machine by simply
talking to it was science fiction only a short time ago. Until recently, this area was
considered to be artificial intelligence. However, with advances in technology,
VUIs have become more commonplace, and people are taking advantage of the
value that these hands-free, eyes-free interfaces provide in many situations.
However, VUIs are not without their challenges. People have very little patience
for a "machine that doesn't understand". Therefore, there is little room for error:
VUIs need to respond to input reliably, or they will be rejected and often ridiculed
by their users. Designing a good VUI requires interdisciplinary talents in computer
science, linguistics and human-factors psychology, all of which are skills that are
expensive and hard to come by. Even with advanced development tools,
constructing an effective VUI requires an in-depth understanding of both the tasks
to be performed and the target audience that will use the final system. The
closer the VUI matches the user's mental model of the task, the easier it will be to
use with little or no training, resulting in both higher efficiency and higher user
satisfaction.
Future uses
Pocket-size devices, such as PDAs or mobile phones, currently rely on small
buttons for user input. These are either built into the device or are part of a
touch-screen interface, such as that of the Apple iPod Touch and iPhone.
Extensive button-pressing on devices with such small buttons can be tedious and
inaccurate, so an easy-to-use, accurate, and reliable VUI would potentially be a
major breakthrough in the ease of their use. Nonetheless, such a VUI would also
benefit users of laptop- and desktop-sized computers, as it would solve numerous
problems currently associated with keyboard and mouse use, including
repetitive-strain injuries such as carpal tunnel syndrome and slow typing speed on
the part of inexperienced keyboard users. Moreover, keyboard use typically entails
either sitting or standing stationary in front of the connected display; by contrast,
a VUI would free the user to be far more mobile, as speech input eliminates the
need to look at a keyboard.
Such developments could literally change the face of current machines and have
far-reaching implications on how users interact with them. Hand-held devices
would be designed with larger, easier-to-view screens, as no keyboard would be
required. Touch-screen devices would no longer need to split the display between
content and an on-screen keyboard, thus providing full-screen viewing of the
content. Laptop computers could essentially be cut in half in terms of size, as the
keyboard half would be eliminated and all internal components would be
integrated behind the display, effectively resulting in a simple tablet computer.
Desktop computers would consist of a CPU and screen, saving desktop space
otherwise occupied by the keyboard and eliminating sliding keyboard rests built
under the desk's surface. Television remote controls and keypads on dozens of
other devices, from microwave ovens to photocopiers, could also be eliminated.
Numerous challenges would have to be overcome, however, for such developments
to occur. First, the VUI would have to be sophisticated enough to distinguish
between input, such as commands, and background conversation; otherwise, false
input would be registered and the connected device would behave erratically.

Second, the VUI would have to work in concert with highly sophisticated software
in order to accurately process and find/retrieve information, or carry out an action,
as per the particular user's preferences. For instance, if a user prefers information
from a particular newspaper, and prefers that the information be summarized in
point form, she might say, "Computer, find me some information about the
flooding in southern China last night"; in response, a VUI that is familiar with
her preferences would "find" facts about "flooding" in "southern China" from that
source, convert them into point form, and deliver them to her on screen and/or in
voice form, complete with a citation. Therefore, accurate speech-recognition
software, along with some degree of artificial intelligence on the part of the
machine associated with the VUI, would be required.

MOBILE APP
A mobile app is a computer program designed to run on smartphones, tablet
computers and other mobile devices.
Apps are usually available through application distribution platforms, which began
appearing in 2008 and are typically operated by the owner of the mobile operating
system, such as the Apple App Store, Google Play, Windows Phone Store, and
BlackBerry App World. Some apps are free, while others must be bought. Usually,
they are downloaded from the platform to a target device, such as an iPhone,
BlackBerry, Android phone or Windows Phone, but sometimes they can be
downloaded to laptops or desktop computers. For apps with a price, generally a
percentage, 20-30%, goes to the distribution provider (such as iTunes), and the rest
goes to the producer of the app.[1] The same app can therefore cost the average
smartphone user a different price depending on whether they use iPhone, Android,
or BlackBerry 10 devices.
The term "app" is a shortening of the term "application software". It has become
very popular, and in 2010 was listed as "Word of the Year" by the American
Dialect Society.[2] In 2009, technology columnist David Pogue said that newer
smartphones could be nicknamed "app phones" to distinguish them from earlier
less-sophisticated smartphones.[3]
Mobile apps were originally offered for general productivity and information
retrieval, including email, calendar, contacts, stock market and weather
information. However, public demand and the availability of developer tools drove
rapid expansion into other categories, such as word processing, social media,
picture sharing, mobile games, factory automation, GPS mapping and
location-based services, banking, networking and file transfer, education, video
streaming, order tracking, ticket purchases and, more recently, mobile medical
apps.
Development
Developing apps for mobile devices requires considering the constraints and
features of these devices. Mobile devices run on battery, have less powerful
processors than personal computers, and also have additional features such as
location detection and cameras. Developers also have to consider a wide array of
screen sizes, hardware specifications and configurations because of intense
competition in mobile software and changes within each of the platforms.
Mobile application development[10] requires the use of specialized integrated
development environments. Mobile apps are first tested within the development
environment using emulators and later subjected to field testing. Emulators provide
an inexpensive way to test applications on mobile phones to which developers may
not have physical access.
As part of the development process, mobile user interface (UI) design is also
essential in the creation of mobile apps. Mobile UI design considers constraints
and contexts, screen, input and mobility as outlines for design. The user is often
the focus of interaction with their device, and the interface entails components of
both hardware and software. User input allows the users to manipulate a system,
and the device's output allows the system to indicate the effects of the users'
manipulation. Mobile UI design constraints include limited attention and form
factors, such as a mobile device's screen size relative to a user's hand(s). Mobile
UI contexts signal cues from user activity, such as location and scheduling, that
can be shown from user interactions within a mobile application. Overall, mobile
UI design's goal is primarily an understandable, user-friendly interface. The UI of
mobile apps should consider users' limited attention, minimize keystrokes, and be
task-oriented with a minimum set of functions. This functionality is supported by
mobile enterprise application platforms and integrated development environments
(IDEs).
Distribution
List of mobile software distribution platforms
Amazon Appstore
Amazon Appstore is an alternate application store for the Android operating
system. It was opened in March 2011, with 3800 applications.[11] The Amazon
Appstore's Android Apps can also run on BlackBerry 10 devices.
App Store
Apple's App Store for iOS was the first app distribution service; it set the standard
for app distribution services and continues to do so[citation needed]. It opened on
July 10, 2008, and as of January 2011 reported over 10 billion downloads. As of
June 6, 2011, there were 425,000 apps available, which had been downloaded by
200 million iOS users.[12][13] During Apple's 2012 Worldwide Developers
Conference, Apple CEO Tim Cook announced that the App Store had 650,000
available apps to download, as well as "an astounding 30 billion apps" downloaded
from the App Store up to that date.[14] From an alternative perspective, figures seen
in July 2013 by the BBC from tracking service Adeven indicate over two-thirds of
apps in the store are "zombies", barely ever installed by consumers.[15]
BlackBerry World
BlackBerry World is the application store for BlackBerry 10 and BlackBerry OS
devices. It opened in April 2009 as BlackBerry App World, and as of February
2011, was claiming the largest revenue per app: $9,166.67 compared to $6,480.00
at the Apple App Store and $1,200 in the Android market. In July 2011, it was
reporting 3 million downloads per day and 1 billion total downloads.[16] In May
2013, BlackBerry World reached over 120,000 apps. BlackBerry 10 users can also
run Android apps.
Google Play
Google Play (formerly known as the Android Market) is an international online
software store developed by Google for Android devices. It opened in October
2008.[17] In August 2014, there were approximately 1.3+ million apps available for
Android,[18] and the estimated number of applications downloaded from Google
Play was 40 billion.
Nokia Store
An app store for Nokia phones was launched internationally in May 2009. As of
April 2011 there were 50,000 apps, and as of August 2011 Nokia was reporting
9 million downloads per day. In February 2011, Nokia announced that it would
start using Windows Phone as its primary operating system.[19] In May 2011,
Nokia announced plans to rebrand its Ovi product line under the Nokia brand,[20]
and Ovi Store was renamed Nokia Store in October 2011.[21] Nokia Store remains
the distribution platform for its previous lines of mobile operating systems, but
from January 2014 it no longer allows developers to publish new apps or app
updates for its legacy Symbian and MeeGo operating systems.[22]
Windows Phone Store
Windows Phone Store was introduced by Microsoft for its Windows Phone
platform, which was launched in October 2010. As of October 2012, it had over
120,000 apps available.[23]
Windows Store
Windows Store was introduced by Microsoft for its Windows 8 and Windows RT
platforms. While it can also carry listings for traditional desktop programs certified
for compatibility with Windows 8, it is primarily used to distribute "Windows
Store apps", which are built mainly for use on tablets and other touch-based
devices (but can still be used with a keyboard and mouse, and on desktop
computers and laptops).[24][25]
Samsung Apps Store
An app store for Samsung mobile phones was founded in September 2009.[26] As of
October 2011, Samsung Apps reached 10 million downloads. Currently the store is
available in 125 countries and it offers apps for Windows Mobile, Android and
Bada platforms.


Enterprise management
Mobile application management
Mobile application management (MAM) describes software and services
responsible for provisioning and controlling access to internally developed and
commercially available mobile apps used in business settings, which has become
necessary with the onset of the bring-your-own-device (BYOD) phenomenon. When
an employee brings a personal device into an enterprise setting, mobile application
management enables the corporate IT staff to transfer required applications, control
access to business data, and remove locally cached business data from the device if
it is lost, or when its owner no longer works with the company.
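The remove-on-loss behavior described above can be sketched as a small policy check. This is purely an illustrative sketch: the `DeviceState` fields and the `decide_actions` function are hypothetical names, not the API of any real MAM product.

```python
# Hypothetical MAM policy check: decide which actions corporate IT should
# take for a BYOD device, following the rules described in the text.
from dataclasses import dataclass

@dataclass
class DeviceState:
    owner_employed: bool   # does the owner still work for the company?
    reported_lost: bool    # has the device been reported lost?
    has_cached_data: bool  # is business data cached locally on the device?

def decide_actions(state: DeviceState) -> list[str]:
    """Return the MAM actions warranted by the device's current state."""
    actions = []
    if state.reported_lost or not state.owner_employed:
        # revoke access to business data in both cases
        actions.append("revoke-access")
        if state.has_cached_data:
            # remove locally cached business data from the device
            actions.append("wipe-business-data")
    return actions

print(decide_actions(DeviceState(owner_employed=True,
                                 reported_lost=True,
                                 has_cached_data=True)))
# ['revoke-access', 'wipe-business-data']
```

A device whose owner is still employed and which has not been lost triggers no action; the policy only fires on loss or departure.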
Top 25 popular mobile apps in the U.S.
These are the 25 most popular apps in the United States by active usage, as of
August 2014. This list counts usage across all of the application stores.
Facebook

Apple Maps

Google Maps

Yahoo Mail

Google Search

Skype

Gmail

Twitter

Instagram

YouTube

Google Play

Amazon

Facebook Messenger

Yahoo Stocks

Text-to-Speech Technique


Speech synthesis is the artificial production of human speech. A computer system
used for this purpose is called a speech synthesizer, and can be implemented in
software or hardware products. A text-to-speech (TTS) system converts normal
language text into speech; other systems render symbolic linguistic representations
like phonetic transcriptions into speech.[1]
Synthesized speech can be created by concatenating pieces of recorded speech that
are stored in a database. Systems differ in the size of the stored speech units; a
system that stores phones or diphones provides the largest output range, but may
lack clarity. For specific usage domains, the storage of entire words or sentences
allows for high-quality output. Alternatively, a synthesizer can incorporate a model
of the vocal tract and other human voice characteristics to create a completely
"synthetic" voice output.[2]
The quality of a speech synthesizer is judged by its similarity to the human voice
and by its ability to be understood clearly. An intelligible text-to-speech program
allows people with visual impairments or reading disabilities to listen to written
works on a home computer. Many computer operating systems have included
speech synthesizers since the early 1990s.


Overview of a typical TTS system


A text-to-speech system (or "engine") is composed of two parts:[3] a front-end and a
back-end. The front-end has two major tasks. First, it converts raw text containing
symbols like numbers and abbreviations into the equivalent of written-out words.
This process is often called text normalization, pre-processing, or tokenization. The
front-end then assigns phonetic transcriptions to each word, and divides and marks
the text into prosodic units, like phrases, clauses, and sentences. The process of
assigning phonetic transcriptions to words is called text-to-phoneme or
grapheme-to-phoneme conversion. Phonetic transcriptions and prosody information
together make up the symbolic linguistic representation that is output by the
front-end. The back-end, often referred to as the synthesizer, then converts the
symbolic linguistic representation into sound. In certain systems, this part
includes the
computation of the target prosody (pitch contour, phoneme durations), which is
then imposed on the output speech.
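The front-end's first task, text normalization, can be sketched in a few lines. This is a toy illustration only: the expansion tables below are tiny hand-made assumptions, while a production front-end uses large lexicons and context-sensitive rules.

```python
# Minimal sketch of a TTS front-end's text-normalization step: convert raw
# text containing symbols like numbers and abbreviations into the
# equivalent of written-out word tokens.
import re

# Illustrative lookup tables; real systems have far larger ones.
ABBREVIATIONS = {"dr.": "doctor", "st.": "street"}
DIGITS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
          "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

def normalize(text: str) -> list[str]:
    """Convert raw text into written-out word tokens."""
    tokens = []
    for raw in text.lower().split():
        if raw in ABBREVIATIONS:
            tokens.append(ABBREVIATIONS[raw])
        elif raw.isdigit():
            # spell out digit by digit; real systems expand whole numbers
            tokens.extend(DIGITS[d] for d in raw)
        else:
            # strip punctuation, keeping letters and apostrophes
            tokens.append(re.sub(r"[^a-z']", "", raw))
    return [t for t in tokens if t]

print(normalize("Dr. Smith lives at 42 Main St."))
# ['doctor', 'smith', 'lives', 'at', 'four', 'two', 'main', 'street']
```

The next front-end stage would then assign phonetic transcriptions to each of these tokens.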

Designing the Right UI


1. Consistency, consistency, consistency. The most important thing you can
possibly do is ensure your user interface works consistently. If you can
double-click on items in one list and have something happen, then you
should be able to double-click on items in any other list and have the same
sort of thing happen. Put your buttons in consistent places on all your
windows, use the same wording in labels and messages, and use a consistent
color scheme throughout. Consistency in your user interface enables your
users to build an accurate mental model of the way it works, and accurate
mental models lead to lower training and support costs.
2. Set standards and stick to them. The only way you can ensure consistency
within your application is to set user interface design standards, and then
stick to them. You should follow Agile Modeling (AM)'s Apply Modeling
Standards practice in all aspects of software development, including user
interface design.
3. Be prepared to hold the line. When you are developing the user interface
for your system you will discover that your stakeholders often have some
unusual ideas as to how the user interface should be developed. You should
definitely listen to these ideas but you also need to make your stakeholders
aware of your corporate UI standards and the need to conform to them.
4. Explain the rules. Your users need to know how to work with the
application you built for them. When an application works consistently, it
means you only have to explain the rules once. This is a lot easier than
explaining in detail exactly how to use each feature in an application step-by-step.
5. Navigation between major user interface items is important. If it is
difficult to get from one screen to another, then your users will quickly
become frustrated and give up. When the flow between screens matches the
flow of the work the user is trying to accomplish, then your application will
make sense to your users. Because different users work in different ways,
your system needs to be flexible enough to support their various
approaches. User-interface flow diagrams can optionally be developed to
further your understanding of the flow of your user interface.
6. Navigation within a screen is important. In Western societies, people read
left to right and top to bottom. Because people are used to this, you should
design screens that are also organized left to right and top to bottom when
designing a user interface for people from this culture. You want to organize
navigation between widgets on your screen in a manner users will find
familiar to them.
7. Word your messages and labels effectively. The text you display on your
screens is a primary source of information for your users. If your text is
worded poorly, then your interface will be perceived poorly by your users.
Using full words and sentences, as opposed to abbreviations and codes,
makes your text easier to understand. Your messages should be worded
positively, imply that the user is in control, and provide insight into how to
use the application properly. For example, which message do you find more
appealing: "You have input the wrong information" or "An account number
should be eight digits in length"? Furthermore, your messages should be
worded consistently and displayed in a consistent place on the screen.
Although the messages "The person's first name must be input" and "An
account number should be input" are each worded well on their own, together
they are inconsistent. In light of the first message, a better wording of the
second message would be "The account number must be input" to make the two
messages consistent.
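One mechanical way to get the consistency that rule 7 asks for is to generate every "required field" message from a single template, so the wording cannot drift. A minimal sketch; the template text is just an example:

```python
# One template per message family keeps the wording consistent everywhere
# the message appears.
REQUIRED_TEMPLATE = "The {field} must be input."

def required_message(field: str) -> str:
    """Produce a consistently worded 'required field' message."""
    return REQUIRED_TEMPLATE.format(field=field)

print(required_message("person's first name"))
# The person's first name must be input.
print(required_message("account number"))
# The account number must be input.
```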
8. Understand the UI widgets. You should use the right widget for the right
task, helping to increase the consistency in your application and probably
making it easier to build the application in the first place. The only way you
can learn how to use widgets properly is to read and understand the userinterface standards and guidelines your organization has adopted.
9. Look at other applications with a grain of salt. Unless you know another
application has been verified to follow the user interface-standards and
guidelines of your organization, don't assume the application is doing things
right. Although looking at the work of others to get ideas is always a good
idea, until you know how to distinguish between good user interface design
and bad user interface design, you must be careful. Too many developers
make the mistake of imitating the user interface of poorly designed software.
10.Use color appropriately. Color should be used sparingly in your
applications and, if you do use it, you must also use a secondary indicator.
The problem is that some of your users may be color blind and if you are
using color to highlight something on a screen, then you need to do
something else to make it stand out if you want these people to notice it. You
also want to use colors in your application consistently, so you have a
common look and feel throughout your application.
11.Follow the contrast rule. If you are going to use color in your application,
you need to ensure that your screens are still readable. The best way to do
this is to follow the contrast rule: Use dark text on light backgrounds and
light text on dark backgrounds. Reading blue text on a white background is
easy, but reading blue text on a red background is difficult. The problem is
not enough contrast exists between blue and red to make it easy to read,
whereas there is a lot of contrast between blue and white.
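The contrast rule has a well-known quantitative form: the contrast ratio defined in the WCAG 2.x accessibility guidelines. The sketch below applies that formula to the blue-on-white versus blue-on-red example from the text; the example colors are only illustrative.

```python
# WCAG 2.x contrast ratio: (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are
# the relative luminances of the lighter and darker color.
def _linear(channel: int) -> float:
    """Linearize an 8-bit sRGB channel per the WCAG luminance formula."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance of an sRGB color."""
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

BLUE, WHITE, RED = (0, 0, 255), (255, 255, 255), (255, 0, 0)
print(round(contrast_ratio(BLUE, WHITE), 2))  # 8.59: plenty of contrast
print(round(contrast_ratio(BLUE, RED), 2))    # 2.15: too little contrast
```

The numbers confirm the text's claim: blue on white has roughly four times the contrast of blue on red.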
12.Align fields effectively. When a screen has more than one editing field, you
want to organize the fields in a way that is both visually appealing and
efficient. I have always found the best way to do so is to left-justify edit
fields: in other words, make the left-hand side of each edit field line up in a
straight line, one over the other. The corresponding labels should be
right-justified and placed immediately beside the field. This is a clean and
efficient way to organize the fields on a screen.
13.Expect your users to make mistakes. How many times have you
accidentally deleted some text in one of your files or deleted the file itself?
Were you able to recover from these mistakes or were you forced to redo
hours, or even days, of work? The reality is that to err is human, so you
should design your user interface to recover from mistakes made by your
users.
14.Justify data appropriately. For columns of data, common practice is to
right-justify integers, decimal-align floating-point numbers, and to
left-justify strings.
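These justification conventions map directly onto standard string formatting. A minimal sketch; the column width and sample data are arbitrary, and fixed-precision right-justification is what makes the decimal points of the floats line up:

```python
# Rule 14 with Python format specifiers: left-justify strings,
# right-justify integers, decimal-align floats via fixed precision.
rows = [("Alice", 3, 12.5), ("Bob", 142, 7.25)]

def format_row(name: str, count: int, price: float, width: int = 8) -> str:
    """One table row: name left-justified, count right-justified,
    price right-justified with two decimal places."""
    return f"{name:<{width}}{count:>{width}}{price:>{width}.2f}"

for row in rows:
    print(format_row(*row))
```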
15.Your design should be intuitable. In other words, if your users don't know
how to use your software, they should be able to determine how to use it by
making educated guesses. Even when the guesses are wrong, your system
should provide reasonable results from which your users can readily
understand and ideally learn.


16.Don't create busy user interfaces. Crowded screens are difficult to
understand and, hence, are difficult to use. Experimental results show that
the overall density of the screen should not exceed 40 percent, whereas local
density within groupings should not exceed 62 percent.
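The 40 percent / 62 percent thresholds quoted above can be checked with a toy model that treats the screen as a character grid; the grid representation is purely illustrative.

```python
# Toy density check for rule 16: model the screen as rows of characters,
# where non-space cells are occupied by widgets, and compute the fraction
# of the screen that is occupied.
def density(grid: list[str]) -> float:
    """Fraction of non-space cells in a character-grid mock-up."""
    total = sum(len(row) for row in grid)
    used = sum(1 for row in grid for cell in row if cell != " ")
    return used / total

screen = [
    "XXXX      ",
    "          ",
    "XX        ",
]
print(density(screen) <= 0.40)  # True: overall density within the guideline
```

The same function applied to a single grouping, rather than the whole screen, would check the 62 percent local-density threshold.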
17.Group things effectively. Items that are logically connected should be
grouped together on the screen to communicate they are connected, whereas
items that have nothing to do with each other should be separated. You can
use white space between collections of items to group them and/or you can
put boxes around them to accomplish the same thing.

MULTIMODAL USER INTERFACE


Multimodal human-computer interaction refers to the interaction with the virtual
and physical environment through natural modes of communication, i.e., the
modes involving the five human senses. This implies that multimodal interaction
enables a more free and natural communication, interfacing users with automated
systems in both input and output. Specifically, multimodal systems can offer a
flexible, efficient and usable environment allowing users to interact through input
modalities, such as speech, handwriting, hand gesture and gaze, and to receive
information by the system through output modalities, such as speech synthesis,
smart graphics and other modalities, suitably combined. Such a multimodal
system has to recognize the inputs from the different modalities combining them
according to temporal and contextual constraints in order to allow their
interpretation. This process is known as multimodal fusion, and it has been
the subject of research since the 1990s. The fused inputs are interpreted by
the system. Naturalness and flexibility can produce more than one interpretation
for each different modality (channel) and for their simultaneous use, and they
consequently can produce multimodal ambiguity, generally due to imprecision,
noise, or other similar factors. Several methods have been proposed for
resolving such ambiguities. Finally, the system returns outputs to the user
through the various modal channels (disaggregated), arranged as consistent
feedback; this process is known as fission.
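The fusion step described above can be sketched as a "late fusion" pass that pairs events from different modalities under a temporal constraint. All names and the one-second window below are illustrative assumptions, in the spirit of the classic speech-plus-pointing interaction:

```python
# Sketch of late multimodal fusion: a speech event and a gesture event are
# fused only if they occur within a short time window of each other.
from dataclasses import dataclass

@dataclass
class Event:
    modality: str   # e.g. "speech" or "gesture"
    payload: str    # recognized content
    t: float        # timestamp in seconds

def fuse(events: list[Event], window: float = 1.0) -> list[tuple[str, str]]:
    """Pair each speech event with every gesture event close enough in time."""
    speech = [e for e in events if e.modality == "speech"]
    gesture = [e for e in events if e.modality == "gesture"]
    pairs = []
    for s in speech:
        for g in gesture:
            if abs(s.t - g.t) <= window:  # temporal constraint
                pairs.append((s.payload, g.payload))
    return pairs

events = [Event("speech", "put that there", 2.0),
          Event("gesture", "point:(120,80)", 2.3)]
print(fuse(events))  # [('put that there', 'point:(120,80)')]
```

A real fusion engine would also apply contextual constraints and rank competing interpretations, which is where the ambiguity-resolution methods mentioned above come in.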
Multi-modal input
Two major groups of multimodal interfaces have emerged, one concerned with
alternate input methods and the other with combined input/output. The first
group of interfaces combines various user input modes beyond the
traditional keyboard and mouse input/output, such as speech, pen, touch, manual
gestures, gaze and head and body movements. The most common such interface
combines a visual modality (e.g. a display, keyboard, and mouse) with a
voice modality (speech recognition for input, speech synthesis and recorded audio
for output). However other modalities, such as pen-based input
or haptic input/output may be used. Multimodal user interfaces are a research area
in human-computer interaction (HCI).
The advantage of multiple input modalities is increased usability: the weaknesses
of one modality are offset by the strengths of another. On a mobile device with a
small visual interface and keypad, a word may be quite difficult to type but very
easy to say (e.g., "Poughkeepsie"). Consider how you would access and search
through digital media catalogs from these same devices or set top boxes. And in
one real-world example, patient information in an operating room environment is
accessed verbally by members of the surgical team to maintain an antiseptic
environment, and presented in near real-time aurally and visually to maximize
comprehension.

Multimodal input user interfaces have implications for accessibility. A
well-designed multimodal application can be used by people with a wide variety of
impairments. Visually impaired users rely on the voice modality with some keypad
input. Hearing-impaired users rely on the visual modality with some speech input.
Other users will be "situationally impaired" (e.g. wearing gloves in a very noisy
environment, driving, or needing to enter a credit card number in a public place)
and will simply use the appropriate modalities as desired. On the other hand, a
multimodal application that requires users to be able to operate all modalities is
very poorly designed.
The most common form of input multimodality in the market makes use of
the XHTML+Voice (aka X+V) Web markup language, an
open specification developed by IBM, Motorola, and Opera Software. X+V is
currently under consideration by the W3C and combines several W3C
Recommendations including XHTML for visual markup, VoiceXML for voice
markup, and XML Events, a standard for integrating XML languages. Multimodal
browsers supporting X+V include IBM WebSphere Everyplace Multimodal
Environment, Opera for Embedded Linux and Windows, and ACCESS
Systems NetFront for Windows Mobile. To develop multimodal
applications, software developers may use a software development kit, such as
IBM WebSphere Multimodal Toolkit, based on the open source Eclipse framework,
which includes an X+V debugger, editor, and simulator.
Multimodal input and output
The second group of multimodal systems presents users with multimedia displays
and multimodal output, primarily in the form of visual and auditory cues. Interface
designers have also started to make use of other modalities, such as touch and
olfaction. Proposed benefits of a multimodal output system include synergy and
redundancy. The information that is presented via several modalities is merged and
refers to various aspects of the same process. The use of several modalities for
processing exactly the same information provides an increased bandwidth of
information transfer. Currently, multimodal output is used mainly for improving
the mapping between communication medium and content and to support attention
management in data-rich environments where operators face considerable visual
attention demands.
An important step in multimodal interface design is the creation of natural
mappings between modalities and the information and tasks. The auditory channel
differs from vision in several aspects. It is omnidirectional, transient, and
always reserved. Speech output, one form of auditory information, has received
considerable
attention. Several guidelines have been developed for the use of speech. Michaelis
and Wiggins (1982) suggested that speech output should be used for simple short
messages that will not be referred to later. It was also recommended that speech
should be generated in time and require an immediate response.
The sense of touch was first utilized as a medium for communication in the late
1950s. It is not only a promising but also a unique communication channel. In
contrast to vision and hearing, the two traditional senses employed in HCI, the
sense of touch is proximal: it senses objects that are in contact with the body, and it
is bidirectional in that it supports both perception and acting on the environment.
Examples of auditory feedback include auditory icons in computer operating
systems indicating users' actions (e.g., deleting a file, opening a folder, an error), speech
output for presenting navigational guidance in vehicles, and speech output for
warning pilots in modern airplane cockpits. Examples of tactile signals include
vibrations of the turn-signal lever to warn drivers of a car in their blind spot, the
vibration of the car seat as a warning to drivers, and the stick shaker on modern
aircraft alerting pilots to an impending stall.
Invisible interface spaces became available using sensor technology; infrared,
ultrasound, and cameras are all now commonly used.[30] Transparency of
interfacing with content is enhanced when an immediate and direct link via
meaningful mapping is in place: the user then has direct and immediate feedback
to input, and the content response becomes an interface affordance (Gibson 1979).
