
Autonomous Robots 10, 131–134, 2001
© 2001 Kluwer Academic Publishers. Manufactured in The Netherlands.

Guest Editorial: Personal Robotics


Welcome to the special issue of the Autonomous Robots journal on the emerging area of Personal Robotics. What is personal robotics? The concept is evolving, but the picture emerging is more than the home robot that does dishes and mows the lawn. A personal robot may or may not be highly autonomous and intelligent (e.g., it may be a tele-robot with little or no autonomy). It may or may not be highly dexterous or anthropomorphic. A personal robot serves some function for a human being, and adapts to their actions and needs. The main criterion is that it fulfills its role well. Because that role involves interaction with a person, new concepts from human-computer interaction (HCI) and user-centered design are important for the design of personal robots. The papers in this collection span very intelligent robots that can interact with people at a high level, tele-robots that augment or extend human senses, and medical robots that assist injured or physically-challenged people. A robotics researcher will find some familiar concepts in these papers, like motion planning and supervisory control, but with a new emphasis on environments that are full of people going about their business. The roboticist will also find some new methodologies, like user-centered design and psychological studies. These techniques will become more and more important as robots find their way into new human contexts.

Our involvement with personal robots began with discussions between one of the editors and Howard Moraff at NSF in early 1996. There was already considerable interest in personal robotics on both sides of the Atlantic, and Howard's efforts inspired a US-French workshop on personal robotics at LAAS-CNRS in Toulouse in January 1998. There was a great deal of excitement at the workshop about personal robotics, but less than complete agreement about what the field should be. By the end of that workshop, some clear themes had emerged, and the workshop team had converged on a set of criteria and key problems for personal robotics. Some key application areas are:

(i) Telepresence, providing rich remote experiences through robotics.
(ii) Assistive medical, replacing or augmenting human sensing and action.
(iii) Companion/pet robots, whose main goal is emotional satisfaction.
(iv) Education, robots as teachers or learning partners.
(v) Domestic, robots to help with domestic chores like cleaning.
(vi) Journeyman robots, which partner with a human to perform tasks.
(vii) Interface robots, which provide haptic or tactile sensation.

The LAAS workshop was followed by a related workshop at the IEEE International Conference on Robotics and Automation (ICRA) in 1999. That workshop covered several of the application areas, and was one of the most successful at the conference. The following year, it was felt that educational robotics deserved a special emphasis. The result was a workshop on Personal Robots for Education at ICRA in 2000. The present volume of papers was solicited in the summer of 1999 after the first ICRA workshop. These papers represent a cross-section of techniques and applications, and point the way to an energetic future for this area.

Personal robots are a radical idea for many people. The familiar prototypes for robots are the mechanical men of 1950s science fiction films, R2D2 and C3PO from Star Wars, and Sojourner, everyone's favorite plucky Mars probe. Allowing one of these machines to roam one's house, or the thought of being entertained by one of them, is below most people's threshold of credibility. And yet simple toys like Furby and MS Barney, which are commercial successes, are true personal robots. These toys do not look like the prototypes of robots. They are soft, non-mechanical, and are not useful in the sense of performing the menial tasks that we expect to be the staple of home robots. Instead they move, do a little sensing, and interact with, teach, and entertain humans to a surprising degree. Personal robots need to interact effectively with people, and they need to fit comfortably into people's lifestyles.

Personal robots have great potential for connecting people with interesting, rich and interactive remote spaces. Tele-visits to museums, galleries, or exotic places can both entertain and provide a unique educational experience. The success of the Jason project, the tele-studies of the Titanic, and the daily pictures from Sojourner's Mars visit hint at these possibilities. But controlling personal tele-robots from a remote computer can be difficult because of environment clutter, limited sensing and network delays. In "Internet Control Architecture for Internet-Based Personal Robot," Han, Kim, Kim, and Kim (KAIST, Taejon, Korea) describe a control architecture for tele-operation of personal robots over the Internet that is resilient to network delays. They use a local model of the robot to plan collision-avoiding motions, and then update the remote robot's goal positions. In "Insect Telepresence," All and Nourbakhsh (Carnegie Mellon University, Pittsburgh, Pennsylvania) describe a system that allows museum visitors to explore the inside of an insect's enclosure, and interact face-to-face with the insects using a tiny tele-operated camera. They employed user-centered design techniques and formal HCI principles in the design of their interface.

Apart from pure user control, personal robots can be outfitted with varying amounts of autonomy. This allows them to work synergistically with a person. In "Enhancing Randomized Motion Planners: Exploring with Haptic Hints," Bayazit, Song, and Amato (Texas A&M University, College Station, Texas) explore cooperative solution of motion planning problems by human and computer. Based on the probabilistic roadmap framework, their planner allows human intervention in cases where the system has failed to recognize certain poses of the robot that could improve the success of the plan.

The study of human motion and cognition can drive the design of personal robots. In "Moving Personal Robots in Real-Time Using Primitive Motions," Xu and Zheng (Ohio State University, Columbus, Ohio) draw on studies of human motion to build a taxonomy of motion primitives for robots. They build a reflexive control scheme that composes these primitives into more complex motions. Such a system can be naturally controlled with a few commands from a human. In "Psychological Effects of Behavior Patterns of a Mobile Personal Robot," Butler and Agah (The University of Kansas, Lawrence, Kansas) address the human co-existence question. They study a group of 40 subjects and their reactions to a robot that is passing near them, avoiding them, or performing a task near them. Their study is aimed at a better understanding of human-robot interaction, and of how robot behavior can best be designed so that it inspires confidence and comfort in the people around the robot.

Personal robots can help users with sensory-motor injuries or disabilities. In "A Stewart Platform-Based System for Ankle Telerehabilitation," Girone, Burdea, Bouzit, Popescu, and Deutsch (Rutgers University, Piscataway, New Jersey) describe the Rutgers Ankle, a haptic interface designed for orthopedic rehabilitation. This device can be connected to the net, and allows patients to exercise at home while their progress is monitored remotely.
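As a brief concrete aside on the Stewart-platform idea: the core computation behind any such parallel device is inverse kinematics, mapping a desired pose of the moving plate to six actuator lengths. The sketch below is a minimal, generic illustration with invented anchor geometry; it is not the Rutgers Ankle's actual design or code, and all names in it are hypothetical.

```python
# Illustrative sketch only: the inverse-kinematics step behind a generic
# Stewart platform. Each actuator length is the distance between its base
# anchor and the transformed platform anchor. The anchor layout below is
# invented for the example and is not the Rutgers Ankle's actual geometry.

import math

def rotation_x(theta):
    """3x3 rotation about the x-axis (an ankle-like tilt of the plate)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def transform(R, t, p):
    """Apply rotation R and translation t to a 3-D point p."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

def leg_lengths(base_pts, plat_pts, R, t):
    """Required length of each leg for the platform pose (R, t)."""
    return [math.dist(transform(R, t, p), b) for b, p in zip(base_pts, plat_pts)]

# Hypothetical hexagonal anchor layout: radius 0.20 m on the base,
# 0.15 m on the moving plate, six legs.
base_pts = [[0.20 * math.cos(math.radians(60 * k)),
             0.20 * math.sin(math.radians(60 * k)), 0.0] for k in range(6)]
plat_pts = [[0.15 * math.cos(math.radians(60 * k + 30)),
             0.15 * math.sin(math.radians(60 * k + 30)), 0.0] for k in range(6)]

# Command a single pose: plate lifted 0.25 m and tilted 10 degrees.
print(leg_lengths(base_pts, plat_pts, rotation_x(math.radians(10)), [0.0, 0.0, 0.25]))
```

The closed-form inverse kinematics shown here is one reason parallel platforms are attractive for haptic devices: commanded poses translate directly into actuator set-points at high update rates.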
In "Multiobjective Navigation of a Guide Mobile Robot for the Visually Impaired Based on Intention Inference of Obstacles," Kang, Kim, Lee, and Bien (KAIST, Taejon, Korea) describe a system for avoiding moving obstacles (such as other pedestrians) for a visually-impaired person. Their system tracks the pedestrians' positions and infers their intended goals using a fuzzy reasoning system. Those goals are used to predict the pedestrians' future positions and advise the user on how to maintain a safe distance from others; a toy illustration of this idea appears below.

In "The CAM-Brain Machine (CBM): An FPGA Based Tool for Evolving a 75 Million Neuron Artificial Brain to Control a Lifesized Kitten Robot," de Garis, Korkin, and Fehr (STARLAB, Brussels, Belgium) describe the architecture of a very large, real-time neural network. Their design uses ordinary RAM to store the pattern of interconnections and a custom FPGA circuit to perform many neuron updates per second. The system uses a genetic algorithm to update the network. Their goal is true artificial brains, and the first application is to a life-size kitten robot called Robokitty.

In "Supervised Autonomy: A Framework for Human-Robot Systems Development," Cheng and Zelinsky (The Australian National University, Canberra, Australia) describe a supervisory control system that relies on the robot to perform the basic functions of perception and action. The human provides qualitative instructions, and receives feedback through a graphical user interface. The system has been designed in a human-centered way, to help users accomplish their tasks.

The communication of task information between robot and human is the subject of "Information Sharing via Projection Function for Coexistence of Robot and Human" by Wakita, Hirai, Hori, and Fujiwara (Electrotechnical Laboratory, Tsukuba, Japan). They propose projection functions as one approach to information sharing, and describe the design of such a projection system and its interface to a user.
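For readers unfamiliar with intention inference, the toy sketch below (referenced above) replaces the authors' fuzzy reasoning with a crisp heuristic: guess each pedestrian's goal from their heading, extrapolate their motion toward that goal, and warn when the predicted path comes too close. The landmarks, thresholds, and function names are all invented for illustration and are not taken from the paper.

```python
# Illustrative sketch only: a crisp stand-in for the fuzzy intention inference
# described above. We guess each pedestrian's goal as the landmark best aligned
# with their current heading, roll their position forward toward that goal, and
# warn the user if the predicted path comes too close. The landmarks, names,
# and thresholds are assumptions for the example, not taken from the paper.

import math

LANDMARKS = {"door": (10.0, 0.0), "kiosk": (0.0, 10.0)}   # hypothetical map

def infer_goal(position, velocity):
    """Pick the landmark most aligned with the pedestrian's velocity vector."""
    vx, vy = velocity
    speed = math.hypot(vx, vy) or 1e-9
    def alignment(name):
        lx, ly = LANDMARKS[name]
        dx, dy = lx - position[0], ly - position[1]
        dist = math.hypot(dx, dy) or 1e-9
        return (vx * dx + vy * dy) / (speed * dist)       # cosine of the angle
    return max(LANDMARKS, key=alignment)

def predict(position, goal, speed, horizon=3.0):
    """Constant-speed motion toward the inferred goal over `horizon` seconds."""
    gx, gy = LANDMARKS[goal]
    dx, dy = gx - position[0], gy - position[1]
    dist = math.hypot(dx, dy) or 1e-9
    step = min(speed * horizon, dist)
    return (position[0] + dx / dist * step, position[1] + dy / dist * step)

def advise(user_pos, ped_pos, ped_vel, safe_dist=1.5):
    """Warn the user if a pedestrian's predicted position violates safe_dist."""
    goal = infer_goal(ped_pos, ped_vel)
    future = predict(ped_pos, goal, math.hypot(*ped_vel))
    if math.dist(future, user_pos) < safe_dist:
        return f"pedestrian heading toward the {goal}: slow down and keep right"
    return "path clear"

print(advise(user_pos=(3.5, 0.5), ped_pos=(0.0, 0.0), ped_vel=(1.0, 0.0)))
```

The paper's fuzzy formulation lets such a decision be graded rather than all-or-nothing; the crisp version here only shows where goal inference and prediction fit in the advisory loop.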


It should be noted that since space limitations did not allow all ten papers to be included in the same issue, the first seven papers are included in this special issue, and the other three will be published in a subsequent issue. The guest editors would like to thank the authors, the reviewers, and the staff at Kluwer Academic for all their contributions. We would also like to express our gratitude to the editor, Professor George A. Bekey, for supporting and encouraging this special issue. We hope that the collection of papers in this special issue can serve as a medium for further understanding of the emerging area of personal robotics.

John F. Canny
Computer Science Division, University of California, Berkeley, California
Email: jfc@cs.berkeley.edu

Arvin Agah
Department of Electrical Engineering and Computer Science, The University of Kansas, Lawrence, Kansas
Email: agah@ukans.edu

Arvin Agah is Assistant Professor of Electrical Engineering and Computer Science at the University of Kansas. His research interests include human interactions with intelligent systems (robots, computers, and interfaces) and distributed autonomous systems (robots and agents). He has published over 60 articles in these areas. He has taught courses in artificial intelligence, robotics, software engineering, computer systems design laboratory, and intelligent agents. He has served as a technical program committee member, conference session chair, and organizing committee member for various international technical conferences. He is a member of ACM and a senior member of IEEE. Dr. Agah received his B.A. in Computer Science (Highest Honors) from the University of Texas at Austin in 1986; M.S. in Computer Science from Purdue University, West Lafayette, Indiana, in 1988; M.S. in Biomedical Engineering from the University of Southern California, Los Angeles, California, in 1993; and Ph.D. in Computer Science from the University of Southern California in 1994. Dr. Agah has been a member of the research staff at Xerox Corporation's Webster Research Center, Rochester, New York; IBM Corporation's Los Angeles Scientific Center, Santa Monica, California; the Ministry of International Trade and Industry's Mechanical Engineering Laboratory, Tsukuba, Japan; and the Naval Research Laboratory's Navy Center for Applied Research in Artificial Intelligence, Washington, D.C. He has been an instructor at Mansfield Business School, Austin, Texas; Purdue University's Department of Computer Science, West Lafayette, Indiana; and the University of Tsukuba's Department of Engineering Systems, Tsukuba, Japan. He has also worked as a systems analyst and software engineer for entertainment law firms and management companies in Century City and Beverly Hills, California.

John Canny is a professor in the Computer Science Division at the University of California at Berkeley. He came from MIT in 1987 after his thesis on robot motion planning, which won the ACM dissertation award. He received a Packard Foundation Fellowship and a PYI while at Berkeley. His main research interests are human-computer interaction through computer graphics and robotics. This includes work in gestural input and physically-based simulation. He also works on manufacturing systems and novel manipulation methods down to micro-scale. His robotics work was on path planning, grasping, and the co-creation (with Ken Goldberg) of RISC robotics, which is a fusion of algorithmic intelligence and traditional manufacturing hardware. He has worked in applied computational geometry and, with Brian Mirtich, on the development of a physically-based simulator called IMPULSE. He developed inexpensive, ubiquitous telepresence robots called PRoPs, which evolved from airborne to terrestrial locomotion. Recently, he started the 3DDI project on direct 3D interaction with researchers from Berkeley, MIT and UCSF. 3DDI includes the balanced co-development of simulation and rendering algorithms with radically new hardware for acquiring and displaying into the world.
