
The World Through The Computer: A New Human Computer Interaction Style Based on Wearable Computers

Jun Rekimoto
Sony Computer Science Laboratory Inc.
3-14-13 Higashi-gotanda, Shinagawa-ku, Tokyo 141, Japan
rekimoto@csl.sony.co.jp
April 11, 1994

Abstract

In this paper, I propose a new concept of human computer interaction called The World Through the Computer. This metaphor aims to make computers as transparent as possible. Using a wearable computer, a user interacts with the real world, which is augmented by the computer's synthetic information (such as synthetic images and audio). The user's situation is automatically recognized by the computer through several input devices, such as a position tracker, a barcode reader (to identify real world objects), and a speech recognizer. The user wears such devices and acts freely in the real world. The computer continuously tracks the user's actions and situation, and guides the user through superimposed images and voice. A wireless communication method is used to link the user to the computer and to other users. The devices the user wears could be made small enough even with today's technology. This technology helps users collaborate with each other, as well as guiding their individual activities. In the near future, I expect that such devices will be as commonplace as today's portable audio devices (Walkmans, pocket radios), electronic hearing aids, eyeglasses, and wristwatches. This paper describes the design philosophies behind this concept, and discusses several research issues in realizing it.

Introduction

As the term human computer interaction implies, we have addressed the interface between humans and computers for a long time. However, despite the fact that we are living in the real world, we have paid less attention to the gap between the computer and the real world. As a result, the interface between humans and computers and the interface between humans and the real world are not well integrated. For example, we can easily relate objects in a database, which is a computer generated world, but it is hard to make a relation among real world objects, or between a real object and a computer object. Suppose that there is a system which maintains a document database. Users of this system can store and retrieve documents to and from the database. However, once a document has been printed out, the system no longer maintains that output; the user must relate such outputs to the objects maintained in the computer, at the user's own cost. We clearly need computers that can understand real world things, as well as the world inside the computer.

Another problem is the limited availability of computers. Today's computers tend to restrict the user's freedom while using them. Users have to go to specific places (the desk in the office, for example) to utilize the computer's power. Once the user is away from these places, the computer no longer assists the user. Even though portable

computers ease this restriction, they still occupy both of the user's hands during operation. It is hard to use such computers while doing other tasks (e.g., walking, driving a car, or manipulating other machines). This situation can be compared to a world without wristwatches. In such a world, people must go to a location where a clock (i.e., a desktop computer) exists. Can you imagine a world where people cannot check the time without going to such a place? This is the world we are living in.

To address these problems, I propose a new human computer interaction metaphor called the world through the computer. With this metaphor, the user wears the computer but does not interact with the computer directly. Instead, the user interacts freely with the real world; the interactions between the user and the real world are captured by the computer, and the computer supplies information which augments the real world. I also call this metaphor HyperReality, because with this technology the user deals with a computer enhanced real world, which is beyond actual reality. Although systems based on this metaphor will use devices similar to those of virtual reality, such as a head-mounted display, the philosophy behind such equipment is completely different from that of virtual reality. Virtual reality attempts to surround a user with a computer generated world and isolates the user from the real world. On the contrary, HyperReality does not isolate the user from the real world. Instead, it makes the real world more friendly and informative. In other words, this technology tries to make computers as transparent as possible, so that computers become a part of the user's body, like today's hearing aids or eyeglasses.

The rest of this paper is organized as follows. In the next section, I categorize current HCI styles and introduce the concept of HyperReality. The following section (Section 3) explains several components used in systems based on HyperReality, and discusses their technical feasibility. Possible applications and related activities are presented in Section 4 and Section 5, respectively. Finally, Section 6 concludes the paper.

Human Computer Interaction Styles

In this section, I categorize human computer interaction styles based on the notion of gaps between the computer world and the real world (Figure 1).

Conventional Desktops - gaps between computers and the real world


Currently, a desktop computer with a bitmap display and a pointing device (e.g., a mouse) is the most common style of human-computer interaction. This style is often referred to as a GUI (graphical user interface). The GUI was a major advance over old-fashioned, text-oriented user interfaces, and it enables novice users to access the power of computers. Nevertheless, I would like to mention the shortfalls of GUI based systems. Although quite a few workers and students spend a long time away from their desks, GUI systems force the user to sit in front of a computer (at the desk) to use it. Even so-called desk-workers often leave their offices, to attend a meeting or to go on a business trip, for example. While they are away from their desks, they cannot get the power of their computers, or they have to change their style of work to utilize the computers. Even though portable computers (notebook computers, pen based computers) have become popular recently, they are essentially reduced desktop computers and cannot always assist the user. For example, a surgeon cannot use a notebook computer during an operation, because notebook computers still require at least a chair (and possibly a desk) to use, and occupy both of the user's hands to operate. Similarly, it is impossible to use such computers while the user is driving a car. As the size and weight of electronic devices have been reduced dramatically, displays and keyboards are becoming the dominant factors that prevent portable computers from being smaller and lighter. A desktop metaphor and overlapping windows are no longer effective when a user is using a small handheld computer with a small screen. These problems cannot be solved easily unless we abandon the traditional GUI style of interaction.

Figure 1: Classification of Human Computer Interaction Styles: (a) conventional, (b) virtual reality, (c) ubiquitous computers, (d) HyperReality. C denotes the computer world and R the real world; the diagrams distinguish human-computer interaction, human-real world interaction, and real world-computer interaction, and the gap between C and R in the conventional style.

Virtual Reality - a computer as the world


Virtual Reality (VR) is a style of human computer interaction that surrounds a user with a synthetic 3-D world. With a high-performance graphics computer and special devices such as a head-mounted display and a DataGlove, the user can see and manipulate such environments. A problem is that, in VR systems, the interface between the user and the real world is totally eliminated. The user is isolated from reality, and only the interface between the human and the computer (i.e., the virtual world) remains. Although there are many possible applications of this

HCI style, as long as we live in the real world and cannot be isolated from it, VR will not be the primary HCI style of the future.

Ubiquitous Computers - computers in the world


An opposite approach to VR is making everything in daily life a computer. Sakamura's MTRON and EuroPARC's ubiquitous computers are both along this line. For example, if a door were a computer, it would detect when people come into the room. Similarly, blackboards,

cabinets, or even notebooks would have embedded computers. Ubiquitous computers are expected to communicate with one another, and to achieve collaborative tasks by negotiation. For example, when a telephone rings, it asks the stereo system to turn down its volume, because the music may interfere with the telephone conversation. The problems of this approach are cost and reliability. Currently, we do not know how to program these computers effectively; each requires different software, and they must collaborate with each other. If everyday life is filled with a massive number of computers, we must expect that some computers are always out of order, and it is very difficult to detect such computers and fix them. Although the cost of computers is getting cheaper and cheaper, it is still costly to embed a computer in every document in the office, for example. Recently, some software researchers have been addressing so-called agent oriented programming, a new software architecture that makes it possible to construct very complex, open-ended distributed systems based on a group of autonomous entities, or agents. At this time, however, they do not give us a clear answer as to how we can program such agents easily.

Hyper Reality - a world through the computer


To address the problems described in the previous sections, I propose a new concept of human computer interaction called the world through the computer, or HyperReality. HyperReality is a metaphor of human-computer interaction which makes computers transparent. Based on this metaphor, a user always wears a small computer and interacts with the real world, which is augmented by the computer's synthetic information (images and audio). The user's situation is automatically recognized by the computer through several input devices, such as a position tracker, a barcode reader (to identify real world objects), or a speech recognizer. As a result, the computer can present highly personalized and context sensitive information to the user. For example, when the user is looking at a specific page in a book, the computer supplies information about that page, or annotations written by other users regarding it.

A wireless communication method is used to link the user to back-end computers and other users. This technology helps users collaborate with each other, as well as guiding their individual activities. Assuming the existence of a back-end computer which stores a vast amount of data, the device the user carries can be made small enough even with today's hardware technology. Superimposed information can be either 2D or 3D. Some applications might require precise placement of 3D synthetic objects, as in see-through virtual reality systems. In other applications, even two-dimensional information can be useful enough when it is supplied in a timely manner.

The System Architecture

In this section, I present the architecture of a system based on HyperReality. The overall architecture is shown in Figure 2. Note that this figure illustrates all possible components used in various applications. In practice, only a subset of these items will be used in an actual application; the configuration will vary according to the application's needs.

3.1 Input

To make computers small enough to be wearable, I consider a conventional keyboard unsuitable for our purpose. A keyboard also prevents the user from using his or her hands for other purposes (e.g., driving a car) during computer operation. Assuming that the computer can understand the situation and intentions of the user, explicit commands from the user can be very terse, even vague. For example, if the computer understands what the user wants in a specific situation, issuing a command like "show" is enough to see e-mail messages or a map of the current location. Thus, situation-awareness is the key to making computer operations easy and less cumbersome (a minimal dispatch sketch appears at the end of this section). Possible alternatives to keyboards are the following (and combinations of these):

voice - Voice input is an ideal method for wearable computers. Voice is totally weightless and does not dominate the user's hands. Assuming a HyperReality system has a built-in microphone to communicate with other users or to record voice memos, using voice for commands is quite natural. Even with today's technology, speaker dependent continuous speech recognition with a vocabulary of around a hundred words is feasible (speaker independence is not required for wearable computers). Though realizing completely natural conversation between humans and computers is still a long way off, using voice as a sequence of commands can be quite useful.

chord keyboard - Although word processing is not an intended usage of HyperReality systems, with a chord keyboard the user can input alphanumeric characters with one hand at a reasonable speed.

buttons, paddle - We now know, from video game consoles such as Nintendo's and SEGA's, how two or three buttons can provide rich interactions.

Figure 2: The System Architecture of HyperReality Systems. The wearable part (head position tracker, a real world scanner that reads barcodes on real world objects, an image-imposing head-up display, a voice synthesizer and voice recognizer with earphone and microphone, and other input devices such as buttons, a paddle, or a bat) is linked by wireless communications to the server part, which holds the Activity Base (world information, activity information, and action rules).

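The following is a minimal sketch, not from the paper, of how a terse spoken command such as "show" might be resolved differently depending on the recognized situation. The situation fields and handlers are illustrative assumptions.

```python
# Sketch of situation-aware command dispatch (all names are hypothetical).

def show_mail(situation):
    return "displaying e-mail messages"

def show_map(situation):
    return f"displaying a map around {situation['location']}"

# Dispatch table: (command, situation predicate, handler).
HANDLERS = [
    ("show", lambda s: s["location"] == "desk", show_mail),
    ("show", lambda s: s["moving"], show_map),
]

def dispatch(command, situation):
    """Pick the first handler whose predicate matches the current situation."""
    for cmd, matches, handler in HANDLERS:
        if cmd == command and matches(situation):
            return handler(situation)
    return "no context-appropriate interpretation"

if __name__ == "__main__":
    print(dispatch("show", {"location": "desk", "moving": False}))
    print(dispatch("show", {"location": "street", "moving": True}))
```

The point of the sketch is that the command itself carries almost no information; the situation record supplies the rest.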

3.2 Implicit Input

Besides the explicit input described in the previous section, the system also deals with implicit input. Implicit input is a flow of data from the user or the environment to the system that is not given explicitly as a command. The system tries to detect the user's situation based on information obtained as implicit input. Ideally, HyperReality systems should understand the whole interaction between the user and the real world: what the user watches, what the user hears, what the user says, and how the user behaves. Although not all of this information is obtainable with today's technology, we can retrieve the following useful information as implicit input (a minimal sketch of such a tagged record follows this list):

- Real world objects that surround the user.

- Additional information accompanying explicit input, such as when and where the command is issued.

- The user's body status.

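A minimal sketch, under assumed field names (the paper does not define a concrete record format), of how an explicit command could be tagged with the implicit context listed above:

```python
# Sketch of tagging an explicit command with implicit context.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TaggedCommand:
    command: str                     # explicit input, e.g. a voice command
    time: datetime                   # when the command was issued
    location: str                    # where the user was (from barcodes, GPS, ...)
    nearby_objects: list = field(default_factory=list)  # real world IDs in view
    body_status: dict = field(default_factory=dict)     # e.g., pulse rate

def tag(command, location, nearby_objects, body_status=None):
    """Attach the current situation to an explicit command before logging it."""
    return TaggedCommand(command, datetime.now(), location,
                         nearby_objects, body_status or {})

event = tag("show", location="room-301-door",
            nearby_objects=["book-4087", "door-301"],
            body_status={"pulse": 72})
print(event)
```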

Detecting Real World Objects

To identify real world objects, the system uses a barcode as a real world ID. I expect a HyperReality system will be equipped with a wearable barcode sensor, the real world scanner (RWS). This device would be small enough that the user could easily put it in his or her pocket. An RWS continuously scans any object in front of it. When the user holds a book which has a barcode on the cover, the RWS reads the barcode and detects which book the user is holding. Suppose that there is a barcode on every door in the building. When the user stands in front of a door, the system detects where the user is located by scanning the barcode on the door. An RWS itself has a barcode on it that identifies the user. As a consequence, when two users meet on the road, each user's RWS identifies the other, and this identification makes it possible to display each other's information on each other's head-up display. (A minimal sketch of this ID-to-object lookup appears at the end of this section.)

In contrast to ubiquitous computers, a barcode is a low-cost and reliable solution for making everything a computer. Suppose that every page in a book has a unique barcode. When the user opens a page, its page ID is detected by the computer, so the system can supply specific information regarding that page. When the user has some comments or ideas while reading that page, the user can simply read them out; the system records the voice information tagged with the page ID for later retrieval. This scenario is almost equivalent to having a computer in every page of the book, without any cost. Barcodes are better than ubiquitous computers from the viewpoint of reliability, because they do not require batteries, do not consume energy, and never break down.

Detecting Locations

The user's location and the time when a command is issued are important implicit information for the system. In HyperReality, every command (explicit input) will be tagged with the time and the location, and these additional data are used by the system to understand the user's intention. To detect the user's situation more precisely, the system also attempts to recognize physical objects near the user.

Detecting the user's location can be achieved in various ways. A GPS locates the user in a large city. Real world IDs on stable objects (such as a door or a wall) can also be used to detect the location within a building. A short-range wireless communication method, such as an infrared connection, is another candidate: when the user is reachable by an infrared transmitter, the user must be within the emission range of that transmitter. Some applications require fine-grain position and orientation tracking. For example, computer enhanced surgery requires precise matching of physical and computer generated objects. In such a case, 6 degrees of freedom (DOF) position sensors, which are used in VR applications, would be used.

Detecting Human Status

Detecting human status, such as EEG, EKG, blood pressure, blood flow, or pulse rate, is another possibility for implicit input. Recent VR research uses such human status as feedback from the user. For example, a VR based sports trainer detects the blood flow of the user's finger to control the difficulty of the training. These technologies can be applied to HyperReality systems.

3.3 Output

Information generated by the system is either superimposed upon the user's sight via the head-up display, or narrated as a synthetic voice. I assume the user always wears the head-up display and the earphone, and I expect such equipment will be as light as today's sunglasses and Walkman earphones. A possible enhancement is to use the non-dominant ear's earphone as a microphone; this microphone picks up bone vibration from the eardrum caused by the user's voice (Sanyo and Applied Engineering have been manufacturing a similar ear-microphone as a hands-free telephone headset).
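As a concrete illustration of the real world ID lookup described under Detecting Real World Objects, here is a minimal sketch; the registry contents and ID scheme are hypothetical examples, not from the paper.

```python
# Sketch of resolving a scanned barcode (real world ID) to an object record.
REGISTRY = {
    "door-301": {"kind": "door", "info": "Meeting room 301"},
    "book-4087:p12": {"kind": "page", "info": "Annotations for page 12"},
    "user-rekimoto": {"kind": "user", "info": "This user's public information"},
}

def resolve(barcode):
    """Map a real world ID read by the RWS to the object it identifies."""
    obj = REGISTRY.get(barcode)
    if obj is None:
        return f"unregistered ID: {barcode}"  # objects must be registered first
    return f"{obj['kind']}: {obj['info']}"

for code in ("door-301", "book-4087:p12", "mug-17"):
    print(resolve(code))
```

The unregistered-ID case points at a research issue raised later: someone has to register real world objects in the database before they can be recognized.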

3.4 Communications

To keep the wearable devices light and small, as well as to support collaborative work among users, the wearable part of the HyperReality system is connected to a back-end computer via a wireless (or wired) communication method. The back-end computer is a conventional desktop (or desk-side) computer that acts as a server to the wearable devices. The server has a kind of active database (called the Activity Base) that stores the user's activity and real world information. The back-end computer also acts as a gateway to other computer systems and networks. The required bandwidth between wearables and servers will vary according to the application's needs. A simple navigation system will require relatively low bandwidth (e.g., 1 kbps), while a teleconferencing system that displays 3D images will require high bandwidth (e.g., 1-10 Mbps). The former can be achieved by today's wireless communication methods (analog/digital cellular services, digital packet radio services), while the latter may require a wired communication method, or wireless communication within a limited place (e.g., in the conference room).
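A rough back-of-the-envelope check of these bandwidth figures; the message and frame sizes are assumptions for illustration, not measurements from the paper.

```python
# Sketch: sanity-checking the quoted bandwidth ranges.
def transfer_time(bits, bps):
    return bits / bps

# Navigation: a short text instruction, say 100 bytes, over a 1 kbps link.
nav_bits = 100 * 8
print(f"navigation hint: {transfer_time(nav_bits, 1_000):.2f} s at 1 kbps")

# 3D teleconferencing: e.g. a 64 KB compressed face image per frame at 10 fps
# needs on the order of 5 Mbps, within the 1-10 Mbps range mentioned above.
frame_bits = 64 * 1024 * 8
print(f"10 fps face video: {frame_bits * 10 / 1e6:.1f} Mbps")
```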

3.5 Activity Base

The Activity Base, which is stored in the server system, records users' activities and world information, and infers which information should be supplied to the user. The activity information is collected from the user's implicit and explicit input described above. It is also possible to store data other than the activity log (e.g., meeting schedules) in the database.

For example, when the user stands in front of a meeting room, the system can detect the user's location by scanning a barcode on the room's door, and can give the user the meeting schedule for that room. Note that, in this example, the user need not issue any command such as a database query. Without an explicit command, the system can infer what the user wants to know based on the current situation.

When the user borrows a book from the library, the user can simply say "I rent a book" to the system while holding the book. The system detects who rented which book and when, based on implicit input (i.e., the situation), and stores this information in the Activity Base.

In combination with the communication method, the Activity Base acts as a shared brain among users. This conceptual brain records and integrates each user's activity, and a user can recall information about other users' activities (often without giving an explicit command) as well as his or her own. Thus, HyperReality allows a group of users to share their memory through the communication method and the Activity Base.

The Activity Base is a kind of expert system and will be implemented based on results from current database research, such as deductive databases or active databases. However, it is still unclear, and requires further research, how the system can infer the user's intention effectively and give useful information in a timely manner.

Applications

4.1 Teleconferencing

One of the most attractive applications of HyperReality is teleconferencing. A common problem with current teleconferencing systems (including TV-conference systems and desktop teleconferencing systems such as MERMAID [8] or CRUISER [3]) is gaze awareness. Participants' faces are placed on the screen as two-dimensional images, regardless of the physical positions of the participants, and eye contact among participants is heavily impaired by such a configuration. Although some recent research has addressed this problem (such as Ishii et al.'s ClearBoard [4], in which the number of participants is strictly limited to two), there is no system that allows correct eye contact among (more than) three participants.

Figure 3: Teleconferencing with HyperReality

Research directions of VR systems are shifting toward the construction of shared virtual spaces connected by networks [7, 1, 6]. With these technologies, two or more participants can enter a computer generated shared world simultaneously and carry out collaborative work. However, these systems isolate the user from the real world, which is not adequate for practical teleconferencing. A typical configuration of teleconferencing connects two or more sites via networks, and each site has two or more participants. Hence, a teleconferencing system must support local (through the real world) communication as well as remote (through the virtual world) communication. To achieve this local-remote transparency, there should be no gaps between the real and the virtual worlds.

Teleconferencing with the HyperReality technology would satisfy these requirements. Remote participants are superimposed upon the real conference room. Apart from the quality of the image, the participants can talk to a remote participant as if he or she were in the conference room. The system fosters local-remote as well as local-local communication, with seamless eye contact among participants. When one participant points to something on the whiteboard, his or her intention is clear to the other participants. In conventional teleconferencing systems, such pointing cannot convey the participant's intention, so other (often less intuitive) methods (e.g., a tele-pointer) have to be introduced.

The HyperReality system can superimpose any 3D object upon the real world. With this technology, for example, the participants can share a 3D model of financial information, architectural models, or a real-time scientific simulation. Such enhancements are not available with today's normal conference equipment, so this would dramatically improve the productivity and the creativity of meetings. I believe this capability will redefine the meaning of collaboration.

Of course, such an application cannot be realized without research effort. Here are the research items we have to tackle:

- Development of an ultra-light head-up (see-through) display. Holographic optics might be a key enabling technology for such devices.

- Real-time synthesis of a photo-realistic 3D human face. This involves translation of a 2D image into a 3D model.


We might be able to pick up technology from the computer vision field. I assume one or two cameras are used to capture each participant's face image. However, this face image must be placed into 3D space, because other participants might see the face from different positions. Note that each participant's head position and orientation are tracked by the system (to generate a correct 3D image according to the head position and orientation); this might ease the situation.

- Real-time transmission of a 3D human face. I do not assume broadband communication channels (such as B-ISDN), because of their time of availability. Instead, we assume currently available teleconferencing standards (such as H.261 and narrowband ISDN) with a few additional transmissions for 3D models. Hence, we must reduce the amount of traffic as much as possible. One possibility is that the system prepares a correct 3D model and images (from many directions) of each participant before the conference, and transmits only location information and a (2D) face image during the conference. The system generates (fakes) a participant's 3D image based on the stored and real-time information. We might be able to extract (or infer) a participant's facial expression from voice (at least the motion of the cheek bones). A rough traffic estimate under these assumptions follows this list.
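The following sketch makes the traffic argument concrete. All sizes are assumptions chosen for illustration; the paper gives no concrete figures for pose, face image, or model sizes.

```python
# Sketch: why transmitting only head pose plus a 2D face image beats shipping
# a full 3D model every frame. All byte counts below are assumed, not measured.
POSE_BYTES = 6 * 4            # 6-DOF head position/orientation as 4-byte floats
FACE_2D_BYTES = 16 * 1024     # small compressed 2D face image per frame
MODEL_3D_BYTES = 512 * 1024   # hypothetical per-frame textured 3D head model

fps = 10
proposed = (POSE_BYTES + FACE_2D_BYTES) * fps * 8 / 1e6   # Mbps
naive = MODEL_3D_BYTES * fps * 8 / 1e6
print(f"proposed scheme: {proposed:.2f} Mbps vs naive 3D: {naive:.2f} Mbps")
```

Under these assumptions the pre-shared-model scheme needs roughly 1.3 Mbps per participant, versus tens of Mbps for naive per-frame 3D transmission.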

4.2 Everyday Guide

Another application is everyday guidance for individual users. With this technology, a user always wears a see-through head-up display which looks like a pair of eyeglasses. The

head-up display gives the user context dependent information (Figure 4). For example, when the user is looking for a meeting room in an unfamiliar building, the system guides the way by superimposing an arrow symbol upon his or her sight. When the user reaches the conference room and stands in front of the door, information about the room (which group occupies the room now, for instance) is displayed in his or her sight.

Figure 4: Everyday Guide. The head-up display superimposes context dependent information, such as "Phone 403-4900" and "Schedule: 11:00-12:00 Meeting", upon the user's sight.

To accomplish such applications, we must consider how the system knows the situation of the user. Explicit information from the user, such as a hand gesture or a voice command, is of course used as a key to infer the situation. Additionally, implicit input plays an important role in these applications. The research items arising here include the following:

- How does the system organize the vast incoming flow of information? Since the system incorporates both explicit and implicit input, the amount of data from the users tends to be vast. Suppose that each user sends 10 items (either explicitly or implicitly) per minute, 10 hours a day. To record 100 users' activities for one year, the system must deal with 219,000,000 items of data (see the sketch after this list). The Activity Base must organize these data, and sometimes must dispose of unnecessary data.

- How does the system give users adequate and timely information suitable to their situation? If the system gives the user information that the user does not need, or gives information at the wrong time, the information becomes meaningless.

- How do we register real world information in the database? To supply useful information to the users, the system must know the real world things surrounding them. When the wearable part of the system sends a barcode to the Activity Base, the system must identify the real world object corresponding to the code.
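Reproducing the data-volume arithmetic from the first item above, so the assumptions are explicit:

```python
# Sketch: the Activity Base data volume estimated above.
items_per_minute = 10      # explicit + implicit items per user (assumed above)
hours_per_day = 10
users = 100
days = 365

per_user_per_day = items_per_minute * 60 * hours_per_day   # 6,000 items
total = per_user_per_day * days * users
print(f"{total:,} items per year")   # 219,000,000 items
```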

4.3 Maintenance

The HyperReality system can support the user while he or she is manipulating other machines or other computers. The system supplies additional information that makes the manipulation easier. The information will be highly context sensitive and suitable for the specific situation, because the system can detect what part of the machine the user is manipulating by reading a barcode on the machine. 3D images will be useful for this purpose: for example, the user can look at a wall and see where the power line behind the wall is located, as an image superimposed on the real sight. Using this capability of superimposition, a HyperReality system can also be used to visualize vital (invisible) information in dangerous situations. For example, the system can visualize the radiation level in a nuclear power plant, high-temperature areas in a boiler room, or high-voltage areas in a transformer substation.

4.4 Medical Assistance

Computer assisted surgery is another area where HyperReality can help the user (a surgeon). As with maintenance tasks (in fact, surgery is a special kind of maintenance task), information regarding the area at which the surgeon is looking is superimposed upon the surgeon's sight. Hands-free operation is particularly important for this application, because the surgeon cannot touch a keyboard or any other input device during an operation.

Related Activities

5.1 The Kabuki Earphone Guide

The Kabuki earphone guide is a small box (it looks like a small radio) with an earphone, which has been used for more than ten years at Kabuki-za, the National Kabuki Theater in Tokyo. The device has a radio receiver in it, and supplies various kinds of narrative information during the play. Both English and Japanese versions are available. The English version acts as a receiver of simultaneous translation: it translates all lines in the play (normally spoken in old-style Japanese) into English. In contrast, the Japanese version goes beyond a simultaneous translator. It does not try to translate the lines into modern Japanese; instead, it augments the information of the play. For example, historical backgrounds of the story or the performance are explained according to the scene. The narration does not interfere with the story or the lines, because it is supplied only while the actors are not speaking.

All narration is pre-recorded on audio tape, and an operator controls the tape recorder so that the narration is played back in time with the actual play's pace (which varies every time). The narration is transmitted into the theater, and audience members with receivers can hear it during the play. Though this system is not a high-tech device by any means, it demonstrates how effective context sensitive information is. Based on my personal experience, with this device even a novice audience member can act as a Kabuki expert. For such information, timing is quite important: if the same information were supplied before the play, or after it, it would be much less useful.

5.2 NaviGlasses

Shigemitu Oozahata (the founder of Macinteligens, Inc.) proposed a conceptual future user interface called NaviGlasses in the late 1980s [?]. The NaviGlasses is a sunglasses-like computer that superimposes information upon the physical image: for example, if the user looks at a building, information regarding that building is displayed on the NaviGlasses. He published several articles in Japanese journals. Though his work is almost invisible to the international research community, I consider it one of the earliest ideas regarding wearable computers. Unfortunately, he assumed technology that is out of reach (for example, each pixel of the NaviGlasses is a computer and acts as both a display and a camera), and that has kept this work from serious consideration by other researchers.

5.3 Mock-up Wearable Computers

Takemasa and other industrial designers at NEC Design Inc. have also proposed a new kind of computer series called wearable computers; in fact, the term "wearable computer" was coined by this team. A part of their work includes making many fancy mock-ups, and they have demonstrated several applications by using them. Some mock-ups are merely PCs with a strap to carry around the waist, but others use head-up displays. Their work was purely conceptual; that is, serious technological considerations were omitted.

5.4 Vu-Man and Navigator

As far as I know, CMU's Vu-Man is the first real project intended to create true wearable computers [2, 5]. The first prototype of Vu-Man is an Intel 386-based PC/AT compatible with the Private Eye as a hands-free display device. It has no keyboard, but several function buttons are mounted on the surface of the computer. The system itself is small and can be carried on the user's waist. This experimental computer is used as a campus navigator: the user can ask the computer "where am I?", and the computer answers with the location, or displays a map of the campus on the screen (superimposed upon the user's sight). The Navigator is a successor to the Vu-Man project and is developing a more sophisticated wearable computer based on Vu-Man's experience. The Navigator allows the user to operate the computer by voice; CMU's Sphinx voice recognizer will be used with it.

Summary

In this paper, I have proposed a new human-computer interaction style called HyperReality. HyperReality is the ideal computer for the near future, when highly mobile and communication-rich computers are widely accepted and required. Like today's hearing aids and eyeglasses, HyperReality assists the user's real world activities in a transparent way. The user is navigated by information which is imposed upon the user's sight. The computer recognizes the user's situation, and gives suitable information to the user. Real world objects (including the everyday things surrounding the user) are identified by the system by detecting barcodes printed on them. This low-tech solution eliminates several problems which arise in implementing ubiquitous computers.

I do not argue that HyperReality and similar wearable computers will supersede or exclude today's desktop and notebook computers. Instead, HyperReality coexists with today's computers and even assists the user while using those old-style computers, because those old computers are part of the real world for HyperReality users. On the other hand, HyperReality would be the primary computer in some situations that today's computers could

not support. Super-real teleconferencing, the everyday guide, computer assisted maintenance, and computer assisted surgery are typical examples.

References

[1] Christopher Codella, Reza Jalili, Lawrence Koved, et al. Interactive simulation in a multi-person virtual world. In CHI '92, 1992.

[2] Bryce Cogswell, Zary Segall, Dan Siewiorek, and Asim Smailagic. Wearable computers: Concepts and examples. Research Report CMU-CDS-92-10, School of Computer Science, Carnegie Mellon University, December 1992. (Distribution is limited to peer communications and specific requests.)

[3] R. S. Fish, R. E. Kraut, and B. L. Chalfonte. The VideoWindow system in informal communication. In Proc. of CSCW '90, Conference on Computer-Supported Cooperative Work. ACM, 1990.

[4] Hiroshi Ishii, Minoru Kobayashi, and Jonathan Grudin. Integration of inter-personal space and shared workspace: ClearBoard design and experiments. In ACM 1992 Conference on Computer-Supported Cooperative Work, pp. 33-42, Toronto, Canada, November 1992.

[5] Paco Xander Nathan. Escaping the desktrap: Wearables! MONDO 2000, No. 9, pp. 23-25, 1993.

[6] Chris Shaw, Mark Green, Jiandong Liang, and Yunqi Sun. Decoupled simulation in virtual reality with the MR Toolkit. ACM Transactions on Information Systems, Vol. 11, No. 3, 1993.

[7] Haruo Takemura and Fumio Kishino. Cooperative work environment using virtual workspace. In ACM 1992 Conference on Computer-Supported Cooperative Work, pp. 226-232, Toronto, Canada, November 1992.

[8] K. Watabe, S. Sakata, K. Maeno, H. Fukuoka, and T. Ohmori. Distributed multiparty desktop conferencing system: MERMAID. In Proc. of the Conference on Computer-Supported Cooperative Work (CSCW '90), pp. 27-38, 1990.

