
INTRODUCTION

Mobile computing means different things to different people: it encompasses ubiquitous, wireless, and remote computing. Note, however, that wireless and mobile computing are not synonymous. Wireless is a transmission or information-transport method that enables mobile computing.

MOBILE COMPUTING FRAMEWORK:

Mobile computing is expanding in four dimensions:

(a) Wireless delivery technology and switching methods:

• Radio- based systems

• Cellular communications

• Wireless packet data networks

• Satellite networks

• Infrared or light-based mobile computing

(b) Mobile information access devices:

• Portable Computers

• PDA

• Palmtops

(c) Mobile data internetworking standards and equipment:

• CDMA

• IrDA

(d) Mobile computing-based business applications:

Dreams behind mobile computing:

Location-based services:

• Elements of location-based services:

- Geocoding: this is the task of processing textual addresses to add a positional co-ordinate to each address. These co-ordinates are then indexed to enable addresses to be searched geographically, in ways such as “find me my nearest”.

• Latitude: the first component of a spherical co-ordinate system used to record positions on the earth’s surface. Latitude, which gives the location of a place north or south of the equator, is expressed by angular measurement ranging from 0° at the equator to 90° at the poles.

• Longitude: the second component, giving the location of a place east or west of a north-south line called the prime meridian, is measured in angles ranging from 0° at the prime meridian to 180° at the International Date Line. The prime meridian passes through the Greenwich observatory in London, UK.

- Map content:

- Proximity searching: this is a very important element of LBS, supporting queries such as “find my nearest” (see the sketch after this list).

- Routing and driving directions: this is an interaction between the user’s location and a planned destination. Routes can be calculated and displayed on the map, and driving directions can be provided according to the shortest distance or the fastest route.

- Rendering: this is the production of maps for display on the screen of the device. Rendered images are typically personalized according to the specific LBS request.

GPS in the LBS world

The Global Positioning System is a network of 24 Navstar satellites orbiting the earth at 11,000 miles. The US Department of Defense (DoD) established it at a cost of about US $13 billion, and provides access to GPS to all users, including those in other countries. GPS provides specially coded satellite signals that can be processed by a GPS receiver. Basically, GPS works by using four GPS satellite signals to compute position in three dimensions plus the offset in the receiver clock.
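
As a toy illustration of the four-satellite idea, the sketch below solves for the three position co-ordinates and the receiver clock bias by Gauss-Newton iteration over the pseudorange equations. This is a simplified assumption about how a receiver works, not the algorithm of any particular product; the satellite geometry is left to the caller.

    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def solve_position(sat_pos, pseudoranges, iterations=10):
        # sat_pos: (4, 3) array of satellite positions in metres.
        # pseudoranges: (4,) measured ranges, each inflated by the same
        # unknown receiver clock bias. Returns (x, y, z) and the bias.
        x = np.zeros(4)  # initial guess: position at origin, zero bias
        for _ in range(iterations):
            geom = np.linalg.norm(sat_pos - x[:3], axis=1)
            residual = pseudoranges - (geom + C * x[3])
            # Jacobian: unit vectors toward the receiver, plus the bias column.
            J = np.hstack([-(sat_pos - x[:3]) / geom[:, None], np.full((4, 1), C)])
            x += np.linalg.lstsq(J, residual, rcond=None)[0]
        return x[:3], x[3]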

The operational control system has the responsibility for maintaining the satellites in their proper positions.

How GPS finds where you are:

Complex error correction is applied to the satellite signals to determine accurate positions and speeds.

Here are some techniques used to improve location accuracy in LBS systems:

• Time of arrival: here the differences in the time of arrival of the signal from the mobile at more than one base station are used to calculate the location of the device.

• Angle of arrival: AOA is a system that calculates the angle at which a signal from the handset arrives at two base stations, using triangulation to find the location (see the sketch below).
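
A minimal sketch of the AOA calculation, assuming two base stations at known planar co-ordinates that each report the bearing of the handset’s signal; the handset lies at the intersection of the two bearing lines. All numbers are invented.

    from math import tan, radians

    def aoa_fix(bs1, theta1, bs2, theta2):
        # bs1, bs2: (x, y) base-station positions; theta1, theta2: bearings
        # in degrees from the x-axis. Intersects the two lines
        # y - yi = tan(theta_i) * (x - xi).
        m1, m2 = tan(radians(theta1)), tan(radians(theta2))
        x = (bs2[1] - bs1[1] + m1 * bs1[0] - m2 * bs2[0]) / (m1 - m2)
        y = bs1[1] + m1 * (x - bs1[0])
        return x, y

    print(aoa_fix((0, 0), 45.0, (10, 0), 135.0))  # -> (5.0, 5.0) approximately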

UBIQUITOUS COMPUTING

First, an introduction to pervasive computing is necessary. Pervasive computing implies that the computer has the capability to obtain information from the environment in which it is embedded, and to utilize it to dynamically build models of computing.

The input interface of pervasive computing utilizes a “multimodal” interface, which means developing systems that can recognize voice and gestures.

WHAT IS UBIQUITOUS TECHNOLOGY?

Ubiquitous computing is intangible: physically, figuratively, and literally, it means living and working environments embedded with computing devices in a seamless, invisible way.

True ubiquitous computing involves devices embedded transparently in


our physical and social movements, integrating both mobile and pervasive
computing.

Ubiquitous computing represents a situation in which computers will be


embedded in our natural movements and interactions with our environments
both physical and social. UC will help to organize and mediate social
interactions wherever and whenever these situations might occur.

UC TECHNOLOGIES

UC represents an amalgamation of quite a number of existing and future technologies: mobile computing, pervasive computing, wearable computing, embedded computing, and location- and context-aware computing technologies together fulfill the dream of bringing UC into reality.

Software Infrastructure and Design Challenges for UC Applications

Ubiquitous computing applications will be embedded in the user’s physical
environments and integrate seamlessly with their everyday tasks.

• Task dynamism – UC applications, by virtue of being available everywhere at all times, will have to adapt to the dynamism of the user’s environment and the resulting uncertainties. In these environments, users may serendipitously change their goals or adapt their actions to a changing environment.

• Device Heterogeneity and Resource Constraints – the omnipresence


of UC applications is typically achieved by either making the
technological artifacts (devices) move with the user or having the
applications move between devices tracking the user. In both cases,
applications have to adapt to changing technological capabilities in their
environment.

• Computing in a Social environment – another major characteristic of


UC technology is that it has a significant impact on the social
environments in which it is used. An introduction of a UC environment
implies the introduction of sensors, which irrevocably have an impact on
social structure.

Research Challenges:

• Semantic modeling – A fundamental necessity for an adaptable and composable computing environment is the ability to describe the preferences of users and the relevant characteristics of computing components using a high-level semantic model.

Ontologies can be used to describe users’ task environments, as well as their goals, to enable reasoning about a user’s needs and therefore dynamically adapt to changes. The research challenges in semantic modeling include developing a modeling language to express the rich and complex nature of ontologies, and developing and validating ontologies for various domains of user activity.

• Building the Software Infrastructure – An effective software


infrastructure for running UC applications must be capable of
finding, adapting and delivering the appropriate applications to the
user’s context.

• Developing and Configuring Applications – Currently, services are being described using a standard description language; in the future, using standard ontologies, such semantic descriptions could enable automatic composition of services, which in turn enables an infrastructure that dynamically adapts to tasks.

• Validating the user experience – the development of effective


methods for testing and evaluating the usage scenarios enabled by
pervasive applications is an important area that needs more attention
from researchers.

Design of user interfaces for UC

Mobile access is the gateway technology required to make information available at any place and at any time. In addition, the computing system should be aware of the user’s context, not only to be able to respond in an appropriate manner with respect to the user’s cognitive and social state, but also to anticipate the needs of the user.

Speech recognition, position sensing and eye tracking should become common inputs, and in the future stereographic audio and visual output will be coupled with 3D virtual-reality information. In addition, heads-up projection displays should allow superposition of information onto the user’s environment.

Benefits of UC technologies

The most profound technologies are those that disappear and weave themselves into the fabric of everyday life until they are indistinguishable from it. UC will have a profound effect on the way people access and use services that only make sense by virtue of being embedded in the environment.

Trends of Computing:

[Figure: Mobility & Embeddedness]

[Figure: Scenario of virtual reality & Ubiquitous Computing]

[Figure: Network Structure]
Infrared Data Association LAN Access Extensions for Link Management Protocol (IrLAN)

Introduction

The creation of the IrDA protocols and their broad industry support has
led to IrDA-compliant infrared ports becoming common on laptop computers.
With the IrDA approval of the higher media speeds of 1.15 and 4 megabits per
second (Mbps), the infrared link is becoming fast enough to support a network
interface. This document describes a protocol, conforming to the IrDA
specifications, that has these features:

- Enables a computer with an IrDA adapter to attach to a local area network (LAN) through an access point device that acts as the network adapter for the computer.

- Enables two computers with IrDA adapters to communicate as though they were attached through a LAN.

- Enables a computer with an IrDA-compliant adapter to be attached to a LAN through a second computer that is already attached to the LAN (the second computer must also have an IrDA-compliant adapter).

The proposed protocol, the infrared LAN (IrLAN) protocol, should


allow for interoperability of all devices supporting the protocol.

Design Goals

The IrLAN protocol has these design goals:

The IrLAN protocol deals with the issues associated with running
legacy networking protocols over an infrared link. It supports three different
operating modes that represent the possible configurations between infrared
devices and between infrared devices and an attached network.
From a client operating-system perspective, the IrLAN protocol must
be implemented completely as a set of network media-level drivers. No
modification of the existing network protocols should be necessary.

The IrLAN protocol must not impose excessive processing constraints


on access point devices, which may be implemented with slower processors
than typically found in modern computers.

Definition of Terms

The following technical terms are used in this document.

Control channel

An IrLMP communication channel used by the client and offered by the


provider to allow for the setup and configuration of a data channel.

Data channel

An IrLMP communication channel used by the client and provider to


exchange LAN-formatted packets.

Frame (or media frame)

A block of data on the media. A packet may consist of multiple media


frames.

IAS (information access service)

Part of the IrDA protocol suite, the IAS is a standard IrLMP client that
implements a local store of configuration information. Information is stored
under a primary key called the class and under sub keys in each class called
attributes. The class may only contain sub keys, each of which is unique in the
class, and each sub key may contain a corresponding value, which may be a
string or an integer. Multiple objects of the same class are allowed, and each
object in the IAS may be read by a remote station supporting the IAS protocol.
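
The description above amounts to a small keyed store. A hedged Python sketch, assuming nothing about the wire protocol, might look like this; the class and attribute names echo typical IrLAN usage but are illustrative here.

    class IASStore:
        def __init__(self):
            self.objects = []  # list of (class_name, {attribute: value})

        def add_object(self, class_name, attributes):
            # Multiple objects of the same class are allowed.
            self.objects.append((class_name, dict(attributes)))

        def query(self, class_name, attribute):
            # Return the attribute value from every object of the class,
            # as a remote station reading the IAS would see them.
            return [attrs[attribute] for cls, attrs in self.objects
                    if cls == class_name and attribute in attrs]

    ias = IASStore()
    ias.add_object("IrLAN", {"IrDA:TinyTP:LsapSel": 3})  # illustrative entry
    print(ias.query("IrLAN", "IrDA:TinyTP:LsapSel"))     # -> [3]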

IrLAN client (or client)

The station in an IrLAN link that is using the IrLAN services of a


provider to set up an IrLAN link. The client is the active component in the
IrLAN protocol; it issues requests to the IrLAN provider to establish a data link
and to configure the link.

IrLAP (Infrared Link Access Protocol)

A protocol, based on the HDLC protocol, designed to control an infrared


link. IrLAP provides for discovery of devices, their connection over an infrared
link, and reliable data delivery between devices.

IrLMP (Infrared Link Management Protocol)

A multiplexing protocol designed to run on top of IrLAP. IrLMP is


multipoint-capable even though IrLAP is not. When IrLAP becomes
multipoint-capable, multiple machines will be able to communicate
concurrently over an infrared link.

Infrared LAN access point device

A network adapter with an infrared link to the LAN client.


Conceptually, the infrared link is the bus that the LAN card resides on.

LAN

A local area network.

LSAP (logical service access point)

A unique 1-byte identifier used by IrLMP to multiplex and demultiplex


packets sent using IrLAP. Clients of IrLMP logically open an LSAP and then
attach it to a remote node, or receive attachment from a remote node. Clients
typically advertise their LSAP to other clients by writing entries in the local
IAS.

NIC (network interface controller)

A piece of hardware designed to transmit and receive packets on a LAN


network.

Packet

A block of data that is transmitted or received over the media. The


media may break a packet down into several media frames to deliver it.

Primary station

A term used in IrLAP to specify the station that is controlling the


infrared link. The other side of the link is where the secondary station resides
(or secondary stations reside). No secondary station can transmit without
receiving permission from the primary station.

IrLAN Provider (provider)

The station in an IrLAN link that is providing the IrLAN protocol


interface.

Secondary station

A term used in IrLAP to specify a station that is controlled by the


primary station. The secondary station can send when it receives permission
from the primary station.

TinyTP

A lightweight protocol, supporting flow control and segmentation and


reassembly, that is designed for use over an IrLMP connection.

Window size

One of the parameters negotiated between the two infrared nodes as part
of establishing an IrLAP connection. The window size specifies the number of
consecutive IrLAP frames that a node can transmit before it must allow the
other node an opportunity to transmit. The maximum IrLAP window size is
seven frames.

Overview

The IrLAN protocol is a “sided” protocol that defines a two-channel


interface between a protocol client and a protocol server. An IrLAN provider is
passive. It is up to the IrLAN client to discover and then attach to the provider
and open up a data channel over which LAN packets can be transmitted and
received. In IrLAN peer-to-peer mode (which is also described in “Access Methods”), each station has both an IrLAN client and provider. There is a race to determine which node will open the data channel. This race condition is resolved by the protocol state machines described later in this document.

The client begins setting up the connection by reading an object’s


information in the provider’s IAS. The object specifies an IrLMP LSAP for the
“control channel.” The client connects to the control channel and uses the
control channel to negotiate the characteristics of a data channel. Once the data
channel has been negotiated, it is opened and then configured. All
configurations are handled through the control channel. The data channel is
used solely for the transmission and reception of packets formatted for the
LAN. The IrLAN protocol defines a graceful close, but it is seldom used
because it would require user intervention to initiate a disconnect. Typically,
the connection will close down “ungracefully” through an IrLAP connection
timeout. Both the control and data channels use the TinyTP protocol for
segmentation and reassembly of packets and for flow control.
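
The client-side sequence can be summarized in pseudocode. The stack object and its methods below are hypothetical placeholders for a real IrDA implementation; only the ordering of the steps is taken from the text above.

    def open_irlan_link(stack):
        # 1. Read the provider's IAS object to learn the control-channel LSAP.
        control_lsap = stack.ias_query("IrLAN", "IrDA:TinyTP:LsapSel")
        # 2. Connect to the control channel (carried over TinyTP).
        control = stack.tinytp_connect(control_lsap)
        # 3. Negotiate the characteristics of the data channel.
        data_lsap = control.negotiate_data_channel()
        # 4. Open the data channel; all configuration still flows over the
        #    control channel, never the data channel.
        data = stack.tinytp_connect(data_lsap)
        control.configure(data)
        # 5. The data channel now carries only LAN-formatted packets.
        return control, data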

Access Methods

The IrLAN protocol is intended to support these modes of operation:

- Access point

- Peer-to-peer

- Hosted

Access Point Mode

An access point device is hardware supporting both a LAN network


interface controller (NIC) and an infrared transceiver. For communication over
the infrared link, the access point device runs a protocol stack that conforms to
the IrDA standards and runs the IrLAN protocol over the IrDA stack. The
access point device implements a network adapter for the client using infrared
as the bus for accessing the adapter.

The following illustration shows the access point mode of operation.

Filtering information is passed from the client to the access point device
to minimize the transmission of unwanted traffic over the infrared link. In this
case, the access point device assigns a unique UNICAST address to each client
connecting to the device.

It is quite reasonable to expect future implementation of access point


devices to support multiple concurrent clients connecting to the LAN. In this
case, each client would be assigned a unique LAN address, and the access point
device would likely use a NIC supporting multiple concurrent UNICAST
addresses.

Peer-to-Peer Mode

The IrLAN protocol peer-to-peer mode allows nodes running network


operating systems that are peer-to-peer capable to create ad-hoc networks. The
following illustration shows the peer-to-peer mode.
In peer-to-peer mode, there is no physical connection to a wired LAN.
Filtering information can still be sent to the provider during the connection
setup process. The filters allow the provider to lower traffic when both peers
are not running the exact same protocol suites. Also, the filters can lower traffic
in the case of point-to-multipoint traffic.

In peer-to-peer mode, each peer must provide a Server Control LSAP in


addition to its Client Control LSAP and Data LSAP. Each Client Control LSAP
connects to its peer’s Server Control LSAP. This allows each node to establish
and control its peer’s Data LSAP using the command set described herein.

Hosted Mode

In hosted mode, the provider has a wired network connection, but has
multiple nodes attempting to communicate through the wired connection. The
following illustration shows hosted mode.

Unlike access point mode, both the host machine and the client(s) share
the same NIC address in host mode. To make host mode work, the host must
run special bridging and routing software that will handle the proper routing of
packets. The algorithms used in this mode are highly protocol-dependent.

Some Computer Science Issues in Ubiquitous computing

Ubiquitous computing is the method of enhancing computer use by


making many computers available throughout the physical environment, but
making them effectively invisible to the user. Since we started this work at
Xerox PARC in 1988, a number of researchers around the world have begun to
work in the ubiquitous computing framework. This paper explains what is new
and different about the computer science in ubiquitous computing. It starts with
a brief overview of ubiquitous computing, and then elaborates through a series
of examples drawn from various sub disciplines of computer science: hardware
components (e.g. chips), network protocols, interaction substrates (e.g.
software for screens and pens), applications, privacy, and computational
methods. Ubiquitous computing offers a framework for new and exciting
research across the spectrum of computer science.

A few places in the world have begun work on a possible next


generation computing environment in which each person is continually
interacting with hundreds of nearby wirelessly interconnected computers. The
point is to achieve the most effective kind of technology, that which is
essentially invisible to the user. To bring computers to this point while
retaining their power will require radically new kinds of computers of all sizes
and shapes to be available to each person. I call this future world "Ubiquitous
Computing" (short form: "Ubicomp") [Weiser 1991]. The research method for
ubiquitous computing is standard experimental computer science: the
construction of working prototypes of the necessary infrastructure in sufficient
quantity to debug the viability of the systems in everyday use, using ourselves
and a few colleagues as guinea pigs. This is an important step towards insuring
that our infrastructure research is robust and scalable in the face of the details
of the real world.

The idea of ubiquitous computing first arose from contemplating the


place of today's computer in actual activities of everyday life. In particular,
anthropological studies of work life [Suchman 1985, Lave 1991] teach us that
people primarily work in a world of shared situations and unexamined
technological skills. However the computer today is isolated and isolating from
the overall situation, and fails to get out of the way of the work. In other words,
rather than being a tool through which we work, and so which disappears from
our awareness, the computer too often remains the focus of attention. And this
is true throughout the domain of personal computing as currently implemented
and discussed for the future, whether one thinks of PC's, palmtops, or
dynabooks. The characterization of the future computer as the "intimate
computer" [Kay 1991], or "rather like a human assistant" [Tesler 1991] makes
this attention to the machine itself particularly apparent.

Getting the computer out of the way is not easy. This is not a graphical
user interface (GUI) problem, but is a property of the whole context of usage of
the machine and the affordances of its physical properties: the keyboard, the
weight and desktop position of screens, and so on. The problem is not one of
"interface". For the same reason of context, this was not a multimedia problem,
resulting from any particular deficiency in the ability to display certain kinds
of realtime data or integrate them into applications. (Indeed, multimedia tries to
grab attention, the opposite of the ubiquitous computing ideal of invisibility).
The challenge is to create a new kind of relationship of people to computers,
one in which the computer would have to take the lead in becoming vastly
better at getting out of the way so people could just go about their lives.

In 1988, when I started PARC's work on ubiquitous computing, virtual


reality (VR) came the closest to enacting the principles we believed important.
In its ultimate environment, VR causes the computer to become effectively
invisible by taking over the human sensory and effector systems [Rheingold 91].
VR is extremely useful in scientific visualization and entertainment, and will be
very significant for those niches. But as a tool for productively changing
everyone's relationship to computation, it has two crucial flaws: first, at the
present time (1992), and probably for decades, it cannot produce a simulation
of significant verisimilitude at reasonable cost (today, at any cost). This means
that users will not be fooled and the computer will not be out of the way.
Second, and most importantly, it has the goal of fooling the user -- of leaving
the everyday physical world behind. This is at odds with the goal of better
integrating the computer into human activities, since humans are of and in the
everyday world.

Ubiquitous computing is exploring quite different ground from Personal


Digital Assistants, or the idea that computers should be autonomous agents that
take on our goals. The difference can be characterized as follows. Suppose you
want to lift a heavy object. You can call in your strong assistant to lift it for
you, or you can be yourself made effortlessly, unconsciously, stronger and just
lift it. There are times when both are good. Much of the past and current effort
for better computers has been aimed at the former; ubiquitous computing aims
at the latter.

The approach I took was to attempt the definition and construction of


new computing artifacts for use in everyday life. I took my inspiration from the
everyday objects found in offices and homes, in particular those objects whose
purpose is to capture or convey information. The most ubiquitous current
informational technology embodied in artifacts is the use of written symbols,
primarily words, but including also pictographs, clocks, and other sorts of
symbolic communication. Rather than attempting to reproduce these objects
inside the virtual computer world, leading to another "desktop model" [Buxton
90], instead I wanted to put the new kind of computer also out in this world of
concrete information conveyers. And because these written artifacts occur in
many different sizes and shapes, with many different affordances, so I wanted
the computer embodiments to be of many sizes and shapes, including tiny
inexpensive ones that could bring computing to everyone.

The physical affordances in the world come in all sizes and shapes; for
practical reasons our ubiquitous computing work begins with just three
different sizes of devices: enough to give some scope, not enough to deter
progress. The first size is the wall-sized interactive surface, analogous to the

office whiteboard or the home magnet-covered refrigerator or bulletin board.
The second size is the notepad, envisioned not as a personal computer but as
analogous to scrap paper to be grabbed and used easily, with many in use by a
person at once. The cluttered office desk or messy front hall table are real-life
examples. Finally, the third size is the tiny computer, analogous to tiny
individual notes or PostIts, and also like the tiny little displays of words found
on book spines, light switches, and hallways. Again, I saw this not as a
personal computer, but as a pervasive part of everyday life, with many active at
all times. I called these three sizes of computers, respectively, boards, pads, and
tabs, and adopted the slogan that, for each person in an office, there should be
hundreds of tabs, tens of pads, and one or two boards. Prototypes of all three sizes are in use at PARC. This then is phase I of
ubiquitous computing: to construct, deploy, and learn from a computing
environment consisting of tabs, pads, and boards. This is only phase I, because
it is unlikely to achieve optimal invisibility. (Later phases are yet to be
determined). But it is a start down the radical direction, for computer science,
away from attention on the machine and back on the person and his or her life
in the world of work, play, and home.

Hardware Prototypes

New hardware systems design for ubiquitous computing has been


oriented towards experimental platforms for systems and applications of
invisibility. New chips have been less important than combinations of existing
components that create experimental opportunities. The first ubiquitous
computing technology to be deployed was the Liveboard [Elrod 92], which is
now a Xerox product. Two other important pieces of prototype hardware
supporting our research at PARC are the Tab and the Pad.

Tab

The ParcTab is a tiny information doorway. For user interaction it has a


pressure sensitive screen on top of the display, three buttons underneath the
natural finger positions, and the ability to sense its position within a building.
The display and touchpad it uses are standard commercial units.

The key hardware design problems in the tab are size and power
consumption. With several dozens of these devices sitting around the office, in
briefcases, in pockets, one cannot change their batteries every week. The
PARC design uses the 8051 to control detailed interactions, and includes
software that keeps power usage down. The major outboard components are a
small analog/digital converter for the pressure sensitive screen, and analog
sense circuitry for the IR receiver. Interestingly, although we have been
approached by several chip manufacturers about our possible need for custom
chips for the Tab, the Tab is not short of places to put chips. The display size
leaves plenty of room, and the display thickness dominates total size. Off-the-
shelf components are more than adequate for exploring this design space, even
with our severe size, weight, and power constraints.

A key part of our design philosophy is to put devices in everyday use,


not just demonstrate them. We can only use techniques suitable for quantity
100 replication, which excludes certain things that could make a huge
difference, such as the integration of components onto the display surface
itself. This technology, being explored at PARC, ISI, and TI, while very
promising, is not yet ready for replication.

The Tab architecture is carefully balanced among display size,


bandwidth, processing, and memory. For instance, the small display means that
even the tiny processor is capable of four frame/sec video to it, and the IR
bandwidth is capable of delivering this. The bandwidth is also such that the
processor can actually time the pulse widths in software timing loops. Our
current design has insufficient storage, and we are increasing the amount of
non-volatile RAM in future tabs from 8k to 128k. The tab's goal of postit-note-
like casual use puts it into a design space generally unexplored in the
commercial or research sector.

Pad

The pad is really a family of notebook-sized devices. Our initial pad, the
ScratchPad, plugged into a Sun SBus card and provided an X-window-system-
compatible writing and display surface. This same design was used inside our
first wall-sized displays, the liveboards, as well. Our later untethered pad
devices, the XPad and MPad, continued the system design principles of X-
compatibility, ease of construction, and flexibility in software and hardware
expansion.

As I write, at the end of 1992, commercial portable pen devices have


been on the market for two years, although most of the early companies have
now gone out of business. Why should a pioneering research lab be building its
own such device? Each year we ask ourselves the same question, and so far
three things always drive us to continue to design our own pad hardware.

First, we need the right balance of features; this is the essence of


systems design. The commercial devices all aim at particular niches, and so
balance their design to that niche. For research we need a rather different
balance, all the more so for ubiquitous computing. For instance, can the device
communicate simultaneously along multiple channels? Does the O.S support
multiprocessing? What about the potential for high-speed tethering? Is there a
high-quality pen? Is there a high-speed expansion port sufficient for video in
and out? Is sound in/out and ISDN available? Optional keyboard? Any one
commercial device tends to satisfy some of these, ignore others, and choose a
balance of the ones it does satisfy that optimize its niche, rather than ubiquitous

computing-style scrap computing. The balance for us emphasizes
communication, ram, multi-media, and expansion ports.

Second, apart from balance are the requirements for particular features.
Key among these are a pen emphasis, connection to research environments like
Unix, and communication emphasis. A high-speed (>64kbps) wireless
capability is built into no commercial devices, nor do they generally have a
sufficiently high speed port to which such a radio can be added. Commercial
devices generally come with DOS or Penpoint, and while we have developed in
both, they are not our favorite research vehicles because of lack of full access
and customizability.

The third thing driving our own pad designs is ease of expansion and
modification. We need full hardware specs, complete O.S. source code, and the
ability to rip-out and replace both hardware and software components.
Naturally these goals are opposed to best price in a niche market, which orients
the documentation to the end user, and which keeps price down by integrated
rather than modular design.

We have now gone through three generations of Pad designs. Six


scratchpads were built, three XPads, and thirteen MPads, the latest. The MPad
uses an FPGA for almost all random logic, giving extreme flexibility. For
instance, changing the power control functions, and adding high-quality sound,
was relatively simple FPGA changes. The MPad has built-in both IR (tab
compatible) and radio communication, and includes sufficient uncommitted
space for adding new circuit boards later. It can be used with a tether that
provides it with recharging and operating power and an ethernet connection.
The operating system is a standalone version of the public-domain Portable
Common Runtime developed at PARC [Weiser 89].

The CS of Ubicomp

In order to construct and deploy tabs, pads, and boards at PARC, we


found ourselves needing to readdress some of the well-worked areas of existing
computer science. The fruitfulness of ubiquitous computing for new Computer
Science problems clinched our belief in the ubiquitous computing framework.

In what follows I walk up the levels of organization of a computer


system, from hardware to application. For each level I describe one or two
examples of computer science work required by ubiquitous computing.
Ubicomp is not yet a coherent body of work, but consists of a few scattered
communities. The point of this paper is to help others understand some of the
new research challenges in ubiquitous computing, and inspire them to work on
them. This is more akin to a tutorial than a survey, and necessarily selective.

The areas I discuss below are: hardware components (e.g. chips),


network protocols, interaction substrates (e.g. software for screens and pens),
applications, privacy, and computational methods.

Issues of hardware components

In addition to the new systems of tabs, pads, and boards, ubiquitous


computing needs some new kinds of devices. Examples of three new kinds of
hardware devices are: very low power computing, low-power high-bits/cubic-
meter communication, and pen devices.

Low Power

In general the need for high performance has dominated the need for
low power consumption in processor design. However, recognizing the new
requirements of ubiquitous computing, a number of people have begun work in
using additional chip area to reduce power rather than to increase performance
[Lyon 93]. One key approach is to reduce the clocking frequency of their chips
by increasing pipelining or parallelism. Then, by running the chips at reduced
voltage, the effect is a net reduction in power, because power falls off as the
square of the voltage while only about twice the area is needed to run at half
the clock speed.

Power = C_L × V_dd² × f

• where C_L is the gate capacitance, V_dd the supply voltage, and f the clocking frequency.
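
A quick check of the claim, under the stated assumptions: doubling the area (roughly doubling switched capacitance) to run at half the clock and half the supply voltage yields a net 4x power reduction.

    def power(cl, vdd, f):
        return cl * vdd ** 2 * f

    baseline = power(cl=1.0, vdd=1.0, f=1.0)
    scaled = power(cl=2.0, vdd=0.5, f=0.5)  # 2x area, Vdd/2, f/2
    print(scaled / baseline)                # 0.25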

This method of reducing power leads to two new areas of chip design:
circuits that will run at low power, and architectures that sacrifice area for
power over performance. The second requires some additional comment,
because one might suppose that one would simply design the fastest possible
chip, and then run it at reduced clock and voltage. However, as Lyon
illustrates, circuits in chips designed for high speed generally fail to work at
low voltages. Furthermore, attention to special circuits may permit operation
over a much wider range of voltage operation, or achieve power savings via
other special techniques, such as adiabatic switching [Lyon 93].

Wireless

A wireless network capable of accommodating hundreds of high speed


devices for every person is well beyond the commercial wireless systems
planned even ten years out [Rush 92], which are aimed at one low speed
(64kbps or voice) device per person. Most wireless work uses a figure of merit
of bits/sec × range, and seeks to increase this product. We believe that a better figure of merit is bits/sec/meter³. This figure of merit causes the optimization
of total bandwidth throughout a three-dimensional space, leading to design
points of very tiny cellular systems.

Because we felt the commercial world was ignoring the proper figure of
merit, we initiated our own small radio program. In 1989 we built spread-
spectrum transceivers at 900 MHz, but found them difficult to build and adjust,
and prone to noise and multipath interference. In 1990 we built direct
frequency-shift-keyed transceivers also at 900 MHz, using very low power to
be license-free. While much simpler, these transceivers had unexpectedly and
unpredictably long range, causing mutual interference and multipath problems.
In 1991 we designed and built our current radios, which use the near-field of
the electromagnetic spectrum. The near-field has an effective fall-off of 1/r⁶ in power, instead of the more usual 1/r², where r is the distance from the
transmitter. At the proper levels this band does not require an FCC license,
permits reuse of the same frequency over and over again in a building, has
virtually no multipath or blocking effects, and permits transceivers that use
extremely low power and low parts count. We have deployed a number of near-
field radios within PARC.

Pens

A third new hardware component is the pen for very large displays. We
needed pens that would work over a large area (at least 60"x40"), not require a
tether, and work with back projection. These requirements are generated from
the particular needs of large displays in ubiquitous computing -- casual use, no
training, naturalness, multiple people at once. No existing pens or touchpads
could come close to these requirements. Therefore members of the Electronics
and Imaging lab at PARC devised a new infrared pen. A camera-like device
behind the screen senses the pen position, and information about the pen state
(e.g. buttons) is modulated along the IR beam. The pens need not touch the
screen, but can operate from several feet away. Considerable DSP and analog
design work underlies making these pens effective components of the
ubiquitous computing system [Elrod 92].

Network Protocols

Ubicomp changes the emphasis in networking in at least four areas:


wireless media access, wide-bandwidth range, real-time capabilities for
multimedia over standard networks, and packet routing.

A "media access" protocol provides access to a physical medium.


Common media access methods in wired domains are collision detection and
token-passing. These do not work unchanged in a wireless domain because not
every device is assured of being able to hear every other device (this is called
the "hidden terminal" problem). Furthermore, earlier wireless work used
assumptions of complete autonomy, or a statically configured network, while
ubiquitous computing requires a cellular topology, with mobile devices
frequently coming on and off line. We have adapted a media access protocol
called MACA, first described by Phil Karn, with some of our own
modifications for fairness and efficiency.

The key idea of MACA is for the two stations desiring to communicate
to first do a short handshake of Request-To-Send-N-bytes followed by Clear-
To-Send-N-bytes. This exchange allows all other stations to hear that there is
going to be traffic, and for how long they should remain quiet. Collisions,
which are detected by timeouts, occur only during the short RTS packet.

Adapting MACA for ubiquitous computing use required considerable


attention to fairness and real-time requirements. MACA (like the original
Ethernet) requires stations whose packets collide to back off a random time and
try again. If all stations but one back off, that one can dominate the bandwidth.
By requiring all stations to adapt the back off parameter of their neighbors we
create a much fairer allocation of bandwidth.
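
A hedged sketch of that fairness rule: each station keeps a backoff parameter, grows it on collision, shrinks it on success, and, crucially, adopts the parameter it overhears from neighbours so that no station can hold a private, tiny window and dominate. The constants are illustrative, not from the MACA specification.

    import random

    class Station:
        def __init__(self, name):
            self.name = name
            self.backoff = 2  # current backoff window, in slots

        def on_collision(self):
            self.backoff = min(self.backoff * 2, 256)  # exponential backoff

        def on_success(self):
            self.backoff = max(self.backoff // 2, 2)

        def adopt_neighbour(self, overheard_backoff):
            # The fairness modification: copy a neighbour's parameter
            # instead of keeping a private (possibly tiny) one.
            self.backoff = overheard_backoff

        def draw_wait(self):
            return random.randint(1, self.backoff)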

Some applications need guaranteed bandwidth for voice or video. We


added a new packet type, NCTS (n) (Not Clear to Send), to suppress all other

transmissions for (n) bytes. This packet is sufficient for a base station to do
effective bandwidth allocation among its mobile units. The solution is robust,
in the sense that if the base station stops allocating bandwidth then the system
reverts to normal contention.

When a number of mobile units share a single base station, that base
station may be a bottleneck for communication. For fairness, a base station
with N > 1 nonempty output queues needs to contend for bandwidth as though
it were N stations. We therefore make the base station contend just enough
more aggressively that it is N times more likely to win a contention for media
access.

Two other areas of networking research at PARC with ubiquitous


computing implications are gigabit networks and real-time protocols. Gigabit-
per-second speeds are important because of the increasing number of medium
speed devices anticipated by ubiquitous computing, and the growing
importance of real-time (multimedia) data. One hundred 256kbps portables per
office imply a gigabit per group of forty offices, with all of PARC needing an
aggregate of some five gigabits/sec. This has led us to do research into local-
area ATM switches, in association with other gigabit networking projects
[Lyles 92].

Real-time protocols are a new area of focus in packet-switched


networks. Although real-time delivery has always been important in telephony,
a few hundred milliseconds never mattered in typical packet-switched
applications like telnet and file transfer. With the ubiquitous use of packet-
switching, even for telephony using ATM, the need for real-time capable
protocols has become urgent if the packet networks are going to support multi-
media applications. Again in association with other members of the research
community, PARC is exploring new protocols for enabling multimedia on the
packet-switched internet [Clark 92].

The internet routing protocol, IP, has been in use for over ten years.
However, neither it nor its OSI equivalent, CLNP, provides sufficient
infrastructure for highly mobile devices. Both interpret fields in the network
names of devices in order to route packets to the device. For instance, the "13"
in IP name 13.2.0.45 is interpreted to mean net 13 and network routers
anywhere in the world are expected to know how to get a packet to net 13, and
all devices whose name starts with 13 are expected to be on that network. This
assumption fails as soon as a user of a net 13 mobile device takes her device on
a visit to net 36 (Stanford). Changing the device name dynamically depending
on location is no solution: higher level protocols like TCP assume that
underlying names won't change during the life of a connection, and a name
change must be accompanied by informing the entire network of the change so
that existing services can find the device.

A number of solutions have been proposed to this problem, among them


Virtual IP from Sony [Teraoka 91] and Mobile IP from Columbia University
[Ioannidis 93]. These solutions permit existing IP networks to interoperate
transparently with roaming hosts. The key idea of all approaches is to add a
second layer of IP address, the "real" address indicating location, to the existing
fixed device address. Special routing nodes that forward packets to the right
real address, and keep track of where this address is, are required for all
approaches. The internet community has a working group looking at standards
for this area (contact Deering@xerox.com for more information).
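
The key idea lends itself to a compact sketch: a forwarding node keeps a table from each device’s fixed address to its current “real” address and encapsulates traffic accordingly. This illustrates the general two-layer scheme, not the actual Virtual IP or Mobile IP packet formats; the visited-network address is invented.

    class ForwardingNode:
        def __init__(self):
            self.location = {}  # fixed address -> current real address

        def register(self, fixed_addr, real_addr):
            # Called when a roaming device reports its new location.
            self.location[fixed_addr] = real_addr

        def forward(self, packet):
            real = self.location.get(packet["dst"])
            if real is None:
                return packet  # device is at home; deliver normally
            # Encapsulate and send to the device's current location.
            return {"dst": real, "payload": packet}

    node = ForwardingNode()
    node.register("13.2.0.45", "36.8.0.77")  # a net-13 device visiting net 36
    print(node.forward({"dst": "13.2.0.45", "payload": "hello"}))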

Interaction Substrates

Ubicomp has led us into looking at new substrates for interaction. I


mention four here that span the space from virtual keyboards to protocols for
window systems.

Tabs have a tiny interaction area -- too small for a keyboard, too small
even for standard hand printing recognition. Hand printing has the further
problem that it requires looking at what is written. Improvements in voice
recognition are no panacea, because when other people are present voice will
often be inappropriate. As one possible solution, we developed a method of
touch-printing that uses only a tiny area and does not require looking. As
drawbacks, our method requires a new printing alphabet to be memorized, and
reaches only half the speed of a fast typist [Goldberg 93].

Liveboards have a huge interaction area 400 times that of the tab. Using
conventional pull down or popup menus might require walking across the room
to the appropriate button, a serious problem. We have developed methods of
location-independent interaction by which even complex interactions can be
popped up at any location.

The X window system, although designed for network use, makes it


difficult for windows to move once instantiated at a given X server. This is
because the server retains considerable state about individual windows, and
does not provide convenient ways to move that state. For instance, context and
window IDs are determined solely by the server, and cannot be transferred to a
new server, so that applications that depend upon knowing their value (almost
all) will break if a window changes servers. However, in the ubiquitous
computing world a user may be moving frequently from device to device, and
wanting to bring windows along.

Christian Jacobi at PARC has implemented a new X toolkit that


facilitates window migration. Applications need not be aware that they have
moved from one screen to another; or if they like, they can be so informed with
an up call. We have written a number of applications on top of this toolkit, all
of which can be "whistled up" over the network to follow the user from screen
to screen. The author, for instance, frequently keeps a single program
development and editing environment open for days at a time, migrating its
windows back and forth from home to work and back each day.

A final window system problem is bandwidth. The bandwidth available
to devices in ubiquitous computing can vary from kilobits/sec to gigabits/sec,
and with window migration a single application may have to dynamically
adjust to bandwidth over time. The X window system protocol was primarily
developed for Ethernet speeds, and most of the applications written in it were
similarly tested at 10Mbps. To solve the problem of efficient X window use at
lower bandwidth, the X consortium is sponsoring a "Low Bandwidth X" (LBX)
working group to investigate new methods of lowering bandwidth. [Fulton 93].

Applications

Applications are of course the whole point of ubiquitous computing.


Two examples of applications are locating people and shared drawing.

Ubicomp permits the location of people and objects in an environment.


This was first pioneered by work at Olivetti Research Labs in Cambridge,
England, in their Active Badge system [Want 92]. In ubiquitous computing we
continue to extend this work, using it for video annotation, and updating
dynamic maps. For instance, the picture below (figure 3) shows a portion of
CSL early one morning, and the individual faces are the locations of people.
This map is updated every few seconds, permitting quick locating of people, as
well as quickly noticing a meeting one might want to go to (or where one can
find a fresh pot of coffee).

PARC, EuroPARC, and the Olivetti Research Center have built several
different kinds of location servers. Generally these have two parts: a central
database of information about location that can be quickly queried and dumped,
and a group of servers that collect information about location and update the
database. Information about location can be deduced from logins, or collected
directly from an active badge system. The location database may be organized
to dynamically notify clients, or simply to facilitate frequent polling.
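
A minimal sketch of that two-part structure, with invented names: collection servers push sightings into a central table, and clients either poll it or subscribe for dynamic notification.

    class LocationDB:
        def __init__(self):
            self.where = {}     # person -> last known location
            self.watchers = []  # callbacks for dynamic notification

        def update(self, person, location):
            # Called by collection servers (badge sightings, logins, ...).
            self.where[person] = location
            for callback in self.watchers:
                callback(person, location)

        def query(self, person):
            return self.where.get(person)

        def subscribe(self, callback):
            self.watchers.append(callback)

    db = LocationDB()
    db.subscribe(lambda p, loc: print(p, "is now in", loc))
    db.update("alice", "CSL coffee room")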

Some example uses of location information are: automatic phone
forwarding, locating an individual for a meeting, and watching general activity
in a building to feel in touch with its cycles of activity (important for
telecommuting).

PARC has investigated a number of shared meeting tools over the past
decade, starting with the CoLab work [Stefik 87], and continuing with video
draw and commune [Tang 91]. Two new tools were developed for investigating
problems in ubiquitous computing. The first is Tivoli [Pedersen 93], the second
Slate, each based upon different implementation paradigms. First their
similarities: they both emphasize pen-based drawing on a surface, they both
accept scanned input and can print the results, they both can have several users
at once operating independently on different or the same pages, they both
support multiple pages. Tivoli has a sophisticated notion of a stroke as spline,
and has a number of features making use of processing the contents and
relationships among strokes. Tivoli also uses gestures as input control to select,
move, and change the properties of objects on the screen. When multiple
people use Tivoli each must be running a separate copy, and connect to the
others. On the other hand, Slate is completely pixel based, simply drawing ink
on the screen. Slate manages all the shared windows for all participants, as long
as they are running an X window server, so its aggregate resource use can be
much lower than Tivoli, and it is easier to setup with large numbers of
participants. In practice we have used Slate from a Sun to support shared
drawing with users on Macs and PCs. Both Slate and Tivoli have received
regular use at PARC.

Shared drawing tools are a topic at many places. For instance, Bellcore
has a toolkit for building shared tools [Hill 93], and Jacobsen at LBL uses
multicast packets to reduce bandwidth during shared tool use. There are some
commercial products [Chatterjee 92], but these are usually not multi-page and
so not really suitable for creating documents or interacting over the course of a
whole meeting. The optimal shared drawing tool has not been built. For its user
interface, there remain issues such as multiple cursors or one, gestures or not,
and using an ink or a character recognition model of pen input. For its
substrate, is it better to have a single application with multiple windows, or
many applications independently connected? Is packet-multicast a good
substrate to use? What would it take to support shared drawing among 50
people, 5,000 people? The answers are likely both technological and social.

Three new kinds of applications of ubiquitous computing are beginning


to be explored at PARC. One is to take advantage of true invisibility, literally
hiding machines in the walls. An example is the Responsive Environment
project led by Scott Elrod. This aims to make a building's heat, light, and power
more responsive to individually customized needs, saving energy and making a
more comfortable environment.

A second new approach is to use so-called "virtual communities" via the


technology of MUDs. A MUD, or "Multi-User Dungeon," is a program that
accepts network connections from multiple simultaneous users and provides
access to a shared database of "rooms", "exits", and other objects. MUDs have
existed for about ten years, being used almost exclusively for recreational
purposes. However, the simple technology of MUDs should also be useful in
other, non-recreational applications, providing a casual environment integrating
virtual and real worlds [Curtis 92].

A third new approach is the use of collaboration to specify information


filtering. Described in the December 1992 issue of Communications of the
ACM, this work by Doug Terry extends previous notions of information filters
by permitting filters to reference other filters, or to depend upon the values of
multiple messages. For instance, one can select all messages that have been
replied to by Smith (these messages do not even mention Smith, of course), or
all messages that three other people found interesting. Implementing this

required inventing the idea of a "continuous query", which can effectively
sample a changing database at all points in time. Called "Tapestry", this system
provides new ways for people to invisibly collaborate.
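
A sketch of the continuous-query idea, with an invented API: each query is a predicate over the whole message store and is re-evaluated whenever the store changes, so a new arrival can make an older message match for the first time (as in “all messages replied to by Smith”).

    class MessageStore:
        def __init__(self):
            self.messages = []
            self.queries = {}  # name -> (predicate, ids already matched)

        def add_query(self, name, predicate):
            self.queries[name] = (predicate, set())

        def append(self, msg):
            self.messages.append(msg)
            # Continuously sample the changing database: re-check every
            # message, since the new arrival may change older matches.
            for name, (pred, seen) in self.queries.items():
                for m in self.messages:
                    if m["id"] not in seen and pred(m, self.messages):
                        seen.add(m["id"])
                        print(name, "matched message", m["id"])

    store = MessageStore()
    store.add_query("replied-to-by-smith",
                    lambda m, all_msgs: any(r["sender"] == "smith" and
                                            r.get("in_reply_to") == m["id"]
                                            for r in all_msgs))
    store.append({"id": 1, "sender": "jones", "subject": "budget"})
    store.append({"id": 2, "sender": "smith", "subject": "re: budget",
                  "in_reply_to": 1})  # now message 1 matches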

Privacy of Location

Cellular systems inherently need to know the location of devices and


their use in order to properly route information. For instance, the traveling
pattern of a frequent cellular phone user can be deduced from the roaming data
of cellular service providers. This problem could be much worse in ubiquitous
computing with its more extensive use of cellular wireless. So a key problem
with ubiquitous computing is preserving privacy of location. One solution, a
central database of location information, means that the privacy controls can be
centralized and so perhaps done well -- on the other hand one break-in there
reveals all, and centrality is unlikely to scale worldwide. A second source of
insecurity is the transmission of the location information to a central site. This
site is the obvious place to try to snoop packets, or even to use traffic analysis
on source addresses.

Our initial designs were all central, initially with unrestricted access,
gradually moving towards controls by individual users on who can access
information about them. Our preferred design avoids a central repository, but
instead stores information about each person at that person's PC or workstation.
Programs that need to know a person's location must query the PC, and run
whatever gauntlet of security the user has chosen to install there. EuroPARC
uses a system of this sort.

Accumulating information about individuals over long periods is both


one of the more useful things to do, and also most quickly raises hackles. A key
problem for location is how to provide occasional location information for
clients that need it while somehow preventing the reliable accumulation of
long-term trends about an individual. So far at PARC we have experimented
only with short-term accumulation of information to produce automatic daily
diaries of activity [Newman 90].

It is important to realize that there can never be a purely technological


solution to privacy, that social issues must be considered in their own right. In
the computer science lab we are trying to construct systems that are privacy
enabled, that can give power to the individual. But only society can cause the
right system to be used. To help prevent future oppressive employers or
governments from taking this power away, we are also encouraging the wide
dissemination of information about location systems and their potential for
harm. We have cooperated with a number of articles in the San Jose Mercury
News, the Washington Post, and the New York Times on this topic. The result,
we hope, is technological enablement combined with an informed populace
that cannot be tricked in the name of technology.

Computational Methods

An example of a new problem in theoretical computer science emerging


from ubiquitous computing is optimal cache sharing. This problem originally
arose in discussions of optimal disk cache design for portable computer
architectures. Bandwidth to the portable machine may be quite low, while its
processing power is relatively high, introducing as a possible design point the
compression of pages in a ram cache, rather than writing them all the way back
over a slow link. The question arises of the optimal strategy for partitioning
memory between compressed and uncompressed pages.

This problem can be generalized as follows:

The Cache Sharing Problem. A problem instance is given by a sequence


of page requests. Pages are of two types, U and C (for uncompressed and
compressed), and each page is either IN or OUT. A request is served by
changing the requested page to IN if it is currently OUT. Initially all pages are

OUT. The cost to change a type-U (type-C) page from OUT to IN is C_U (respectively, C_C). When a requested page is OUT, we say that the algorithm missed. Removing a page from memory is free.

Lower Bound Theorem: No deterministic, on-line algorithm for cache


sharing can be c-competitive for

• c < max(1 + C_U/(C_U + C_C), 1 + C_C/(C_U + C_C))

This lower bound for c ranges from 1.5 to 2, and no on-line algorithm can
approach closer to the optimum than this factor. Bern et al also construct an
algorithm that achieves this factor, therefore providing an upper bound as well.
They further propose a set of more general symbolic programming tools for
solving competitive algorithms of this sort.
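
Evaluating the bound for a couple of cost pairs shows the stated range; the cost values are arbitrary examples.

    def lower_bound(cu, cc):
        return max(1 + cu / (cu + cc), 1 + cc / (cu + cc))

    print(lower_bound(1, 1))  # 1.5: equal costs give the smallest bound
    print(lower_bound(1, 9))  # 1.9: very unequal costs approach 2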

SMART-ITS PLATFORM

You can think of a Smart-Its as a small, self-contained, stick-on


computer that users can attach to objects much like a 3M Post-It note. To
ensure flexibility, we used a modular approach when designing the hardware.
A Smart-Its consists of a core board with a wireless transceiver to let the device
communicate with other Smart-Its, plus a sensor board that gives the Smart-Its
data about its surroundings. For more information about the Smart-Its
architecture, see “The Smart-Its Hardware” sidebar. The standard sensor board
has five sensors: light, sound, pressure, acceleration, and temperature. For
specific purposes, we could add other sensors—such as a gas sensor, a load
sensor, or a camera for receiving images. We have also developed several APIs
to aid application development. For instance, there is a communication API to
facilitate communication between Smart-Its and other devices. Another
example is the perception API, which allows the abstraction of low-level sensor
data to higher-level concepts, so-called percepts.
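
As an illustration of what the perception API might provide, the sketch below abstracts raw readings from the standard sensor board into named percepts. The thresholds and percept names are invented; the real API is not specified here.

    def percepts(reading):
        # reading: dict of raw values from the standard sensor board
        # (light, sound, pressure, acceleration, temperature).
        out = []
        if reading["acceleration"] > 1.5:
            out.append("being-moved")
        if reading["light"] < 10 and reading["sound"] < 20:
            out.append("in-a-bag")
        if reading["temperature"] > 45:
            out.append("too-hot")
        return out

    print(percepts({"light": 5, "sound": 8, "pressure": 0,
                    "acceleration": 0.1, "temperature": 22}))  # ['in-a-bag']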

The major advantage of the Smart-Its platform is that it allows designers and researchers to construct responsive or intelligent environments with comparatively little overhead. Typically, research projects that develop smart or context-aware objects require building a lot of custom hardware and software from scratch. The Smart-Its project addresses this issue by presenting a standardized hardware solution coupled with communication and sensing APIs.
Examples of future interactive systems:

Figure: The A-Life system uses an oximeter and other sensors to provide rescue teams with updated information about avalanche victims. The rescue team's interface is PDA-based and gives instant access to vital information to aid their work.

Figure: Together, the items in the smart restaurant installation created an interactive environment where visitors could interact with smart objects.

System Software for Ubiquitous Computing

Ubiquitous computing, or ubicomp, systems designers embed devices in various physical objects and places. Frequently mobile, these devices, such as those we carry with us and embed in cars, are typically wirelessly networked.

Some 30 years of research have gone into creating distributed computing systems, and we have invested nearly 20 years of experience in mobile computing. With this background, and with today's developments in miniaturization and wireless operation, our community seems poised to realize the ubicomp vision. However, we aren't there yet. Ubicomp software must deliver functionality in our everyday world. It must do so on failure-prone hardware with limited resources. Additionally, ubicomp software must operate in conditions of radical change: varying physical circumstances cause components routinely to make and break associations with peers of a new degree of functional heterogeneity. Mobile and distributed computing research has already addressed parts of these requirements, but a qualitative difference remains between the requirements and the achievements. Here, we examine today's ubiquitous systems, focusing on software infrastructure, and discuss the road that lies ahead.

Characteristics of ubiquitous systems

We base our analysis on physical integration and spontaneous interoperation, two main characteristics of ubicomp systems, because much of the ubicomp vision, as expounded by Mark Weiser and others, either deals directly with or is predicated on them.

Physical integration

A ubicomp system involves some integration between computing nodes and the physical world. For example, a smart coffee cup, such as a MediaCup, serves as a coffee cup in the usual way but also contains sensing, processing, and networking elements that let it communicate its state (full or empty, held or put down). So, the cup can give colleagues a hint about the state of the cup's owner. Or consider a smart meeting room that senses the presence of users in meetings, records their actions, and provides services as they sit at a table or talk at a whiteboard. The room contains digital furniture such as chairs with sensors, whiteboards that record what's written on them, and projectors that you can activate from anywhere in the room using a PDA (personal digital assistant). A minimal sketch of such a state-reporting cup appears below.
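
The following Python sketch shows how a cup-like device might report its state to interested peers, assuming a simple JSON-over-UDP broadcast. The message format, device identifier, and port are invented for illustration; this is not the actual MediaCup protocol.

    # Minimal sketch of a cup-like device reporting its state. The JSON
    # message format, device id, and port are assumptions for illustration.
    import json
    import socket
    import time

    def broadcast_state(full: bool, held: bool, port: int = 5005) -> None:
        """Broadcast the cup's state as a small JSON datagram on the LAN."""
        message = json.dumps({
            "device": "smart-cup-42",      # hypothetical device identifier
            "full": full,
            "held": held,
            "timestamp": time.time(),
        }).encode()
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(message, ("255.255.255.255", port))
        sock.close()

    broadcast_state(full=True, held=False)   # a fresh cup sitting on the desk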
Human administrative, territorial, and cultural considerations mean that ubicomp takes place in more or less discrete environments based, for example, on homes, rooms, or airport lounges. In other words, the world consists of ubiquitous systems rather than "the ubiquitous system." So, from physical integration, we draw our Boundary Principle: ubicomp system designers should divide the ubicomp world into environments with boundaries that demarcate their content. A clear system boundary criterion, often, but not necessarily, related to a boundary in the physical world, should exist. A boundary should specify an environment's scope but doesn't necessarily constrain interoperation.

Figure: The ubiquitous computing world comprises environments with boundaries and components appearing in or moving between them.

Spontaneous interoperation

In an environment, or ambient, there are components, units of software that implement abstractions such as services, clients, resources, or applications (see Figure). An environment can contain infrastructure components, which are more or less fixed, and spontaneous components based on devices that arrive and leave routinely. Although this isn't a hard and fast distinction, we've found it to be useful. In a ubiquitous system, components must spontaneously interoperate in changing environments. A component interoperates spontaneously if it interacts with a set of communicating components that can change both identity and functionality over time as its circumstances change. A spontaneously interacting component changes partners during its normal operation, as it moves or as other components enter its environment; it changes partners without needing new software or parameters. Rather than a de facto characteristic, spontaneous interoperation is a desirable ubicomp feature. Mobile computing research has successfully addressed aspects of interoperability through work on adaptation to heterogeneous content and devices, but it has not discovered how to achieve the interoperability that the breadth of functional heterogeneity found in physically integrated systems requires. For a more concrete definition, suppose the owners of a smart meeting room propose a "magic mirror," which shows those facing it their actions in the meeting. Ideally, the mirror would interact with the room's other components from the moment you switch it on. It would make spontaneous associations with all relevant local sources of information about users. As another example, suppose a visitor from another organization brings his PDA into the room and, without manually configuring it in any way, uses it to send his presentation to the room's projector.
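
A hedged sketch of such spontaneous association follows, assuming a toy probe/reply discovery protocol over UDP broadcast: a newly arrived device probes for a named service and associates with whichever components answer. The port, message format, and service name are assumptions, not a real ubicomp middleware API.

    # Toy probe/reply discovery over UDP broadcast. The port, message
    # format, and service name are assumptions for illustration only.
    import socket

    DISCOVERY_PORT = 5006   # assumed well-known discovery port

    def discover(service: str, timeout: float = 1.0) -> list[str]:
        """Broadcast a probe for `service` and collect the addresses of
        components that answer within the timeout."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.settimeout(timeout)
        sock.sendto(service.encode(), ("255.255.255.255", DISCOVERY_PORT))
        providers = []
        try:
            while True:
                data, (address, _) = sock.recvfrom(1024)
                if data == service.encode():   # a provider echoes the name
                    providers.append(address)
        except socket.timeout:
            pass                               # no more replies
        finally:
            sock.close()
        return providers

    # The visitor's PDA looks for a projector the moment it enters the
    # room, with no manual configuration.
    print(discover("projector"))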

We choose spontaneous rather than the related term ad hoc because the latter tends to be associated with networking: ad hoc networks are autonomous systems of mobile routers. But you cannot achieve spontaneous interoperation as we have defined it solely at the network level. Also, some use ad hoc to mean infrastructure-free. However, ubicomp involves both infrastructure-free and infrastructure-enhanced computing. This includes, for example, spontaneous interaction between small networked sensors such as particles of smart dust, separate from any other support, or our previous example of a PDA interacting with infrastructure components in its environment. From spontaneous interoperation, we draw our Volatility Principle: you should design ubicomp systems on the assumption that the set of participating users, hardware, and software is highly dynamic and unpredictable. Clear invariants that govern the entire system's execution should exist. The mobile computing community will recognize the Volatility Principle in theory, if not by name. However, research has concentrated on how individual mobile components adapt to fluctuating conditions.

Here, we emphasize that ubicomp adaptation should involve more than "every component for itself" and should not restrict itself to the short term; designers should specify system-wide invariants and implement them despite volatility.

Examples

The following examples help clarify how physical integration and spontaneous interoperation characterize ubicomp systems. Our previous ubicomp example of the magic mirror demonstrates physical integration because it can act as a conventional mirror and is sensitive to a user's identity. It spontaneously interoperates with components in any room.

Non-examples of ubicomp include:

1. Accessing email over a phone line from a laptop.

This case involves neither physical integration nor spontaneous interoperation; the laptop maintains the same association to a fixed email server. This exemplifies a physically mobile system: it can operate in various physical environments, but only because those environments are equally transparent to it. A truly mobile (although not necessarily ubiquitous) system engages in spontaneous interactions.

2. A collection of wirelessly connected laptops at a conference.

Laptops with an IEEE 802.11 capability can connect spontaneously to the same local IEEE 802.11 network, assuming no encryption exists. Laptops can run various applications that enable interaction, such as file sharing. You could argue this discovery of the local network is physical integration, but you'd miss the essence of ubicomp, which is integration with that part of the world that has a non-electronic function for us. As for spontaneous interaction, realistically, even simple file sharing would require considerable manual intervention.

3. A smart coffee cup and saucer.

Our smart coffee cup (inspired by but different from the MediaCup) clearly demonstrates physical integration, and it is a device that you could find only in a ubiquitous system. However, if you constructed it to interact only with its corresponding smart saucer according to a specialized protocol, it would not satisfy spontaneous interoperation. The owner couldn't use the coffee cup in another environment if she forgot the saucer, so we would have localized rather than ubiquitous functionality.

4. Peer-to-peer games.

Users play games such as Pirates! with portable devices connected by a local area wireless network. Some cases involve physical integration because the client devices have sensors, such as the proximity sensors in Pirates!. Additionally, some games can discover other players over the local network, and this dynamic association between the players' devices resembles spontaneous interaction. However, such games require preconfigured components. A more convincing case would involve players with generic game-playing "pieces," which let them spontaneously join local games even if they had never encountered them before.

5. The Web

The Web is increasingly integrated with the physical world. Many devices have small, embedded Web servers, and you can turn objects into physical hyperlinks: the user is presented with a Web page when a device senses an identifier on the object. Numerous Web sites with new functionality (for example, new types of e-sites) spring up on the Web without, in many cases, the need to reconfigure browsers. However, this lacks spontaneous interoperation; the Web requires human supervision to keep it going. The "human in the loop" changes the browser's association to Web sites. Additionally, the user must sometimes install plug-ins to use a new type of downloaded content.
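
As a rough sketch of the physical-hyperlink mechanism described above, the snippet below resolves a sensed object identifier to a URL and opens it for the user; the tag-to-URL table is a made-up example rather than a real resolution service.

    # Rough sketch of a physical hyperlink: a sensed tag identifier is
    # resolved to a URL and shown to the user. The tag-to-URL table is a
    # made-up example, not a real resolution service.
    import webbrowser

    TAG_TO_URL = {
        "tag:poster-001": "https://example.org/concert-tickets",
        "tag:cup-042": "https://example.org/coffee-refill",
    }

    def on_tag_sensed(tag_id: str) -> None:
        """Open the Web page associated with a sensed object identifier."""
        url = TAG_TO_URL.get(tag_id)
        if url is not None:
            webbrowser.open(url)   # the "human in the loop" sees the page

    on_tag_sensed("tag:poster-001")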

Software challenges

Physical integration and spontaneous interoperation have major implications for software infrastructure. These ubicomp challenges include a new level of component interoperability and extensibility, and new dependability guarantees, including adaptation to changing environments, tolerance of routine failures or failure-like conditions, and security despite a shrunken basis of trust. Put crudely, physical integration for system designers can imply "the resources available to you are sometimes highly constrained," and spontaneous interoperation means "the resources available to you are highly dynamic, but you must work anywhere, with minimal or no intervention." Additionally, a ubicomp system's behavior while it deals with these issues must match users' expectations of how the physical world behaves. This is extremely challenging because users don't think of the physical world as a collection of computing environments, but as a collection of places with rich cultural and administrative semantics.

The semantic Rubicon

Systems software practitioners have long ignored the divide between system-level and human-level semantics. Ubicomp removes this luxury. Little evidence exists to suggest that software alone can meet ubicomp challenges, given the techniques currently available. Therefore, in the short term, designers should make clear choices about what system software will not do, but humans will. System designers should pay careful, explicit attention to what we call the semantic Rubicon of ubiquitous systems (see Figure). (The historical Rubicon river marked the boundary of Italy in the time of Julius Caesar; Caesar brought his army across it into Italy, but only after great hesitation.) The semantic Rubicon is the division between system and user for high-level decision making or physical-world semantics processing. When responsibility shifts between system and user, the semantic Rubicon is crossed. This division should be exposed in system design, and the criteria and mechanisms for crossing it should be clearly indicated. Although the semantic Rubicon might seem like a human-computer interaction (HCI) notion, especially to systems practitioners, it is, or should be, a ubicomp systems notion.

Figure: The semantic Rubicon demarcates responsibility for decision making between the system and the user; allocation between the two sides can be static or dynamic.

REFERENCES

1. Mark Weiser, article from the Georgia Tech Future Computing Environments collection.

2. www.google.com

3. www.wikipedia.com

