IEEE/ASME TRANSACTIONS ON MECHATRONICS, VOL. 19, NO. 6, DECEMBER 2014
I. INTRODUCTION
When people lose some level of mobility, sensing capability, or cognitive ability because of age or disability, they may be unable to complete activities of daily living (ADLs).
One possible solution is to build suitable and practical assistive
robotic systems to assist this group of people with their ADLs.
Given the current rapidly aging population and the existence of a large
number of people with different kinds of disabilities, assistive
robotic systems find many applications and show great potential.
In particular, assistive robotic systems are designed for people
who have partly or completely lost their mobility or sensing
capabilities. These systems enable people who would otherwise
need help to live independently, which is an important factor in the quality of modern life [1]. Kassler [2], Broekens [3],
and Flandorfer [4] provide excellent surveys on the history of
assistive robots, the research progress, and the user acceptance
of assistive robots.
From the current literature, there are three main categories of
assistive robots: manipulation aids, mobility aids, and cognitive aids.

Manuscript received April 28, 2013; revised September 8, 2013 and December 10, 2013; accepted December 20, 2013. Date of publication January 28, 2014; date of current version June 13, 2014. Recommended by Technical Editor R. Oboe. This work was supported in part by the National Basic Research Program of China (2011CB302400), in part by the National Natural Science Foundation of China under Grants 61175072 and 51165033, in part by the 863 Program (2013AA013804), and in part by the Natural Sciences and Engineering Research Council of Canada.
The authors are with the Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada (e-mail: hrwang@sce.carleton.ca; xpliu@sce.carleton.ca).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TMECH.2014.2299213

Manipulation aids: Manipulation aids are usually composed of a light-weight robot arm to manipulate objects and a
computer used to interface with the human. ProVAR [5], AfMASTER/RAID [6], Giving-A-Hand [7], MANUS [8], UCF
MANUS [9], and Care-O-bot [10] are good examples of this
type of assistive robot. Mobility aids: These robots are
built on either a walker or a wheelchair to help people who need
mobility assistance. NavChair [11], Hephaestus [12], Wheelesley [13], and KARES II [14] are successful systems for this
group of users. Cognitive aids: These robots mainly
focus on assistive tasks for people with mental health problems.
The baby harp seal robot PARO [15] and CosmoBot [16] show the
potential value of robots for this kind of health problem.
One problem for assistive robotic systems is that there is no
universal or general solution to all of the different requirements, especially from the clinical perspective. This means that
the design and development of such a system must be application or task dependent. For example, current manipulation
aids work in a limited workspace and manipulate
light-weight objects. Therefore, if the user wants to transport
heavy objects in a relatively large space, such as when shopping in
a grocery store, a novel assistive robot has to be designed to
assist the user. A prototype of a novel mobile assistive robot for
seniors and disabled people has been developed at STAR Lab,
Carleton University [17]. This robot prototype assists seniors
and disabled people who have partly or completely lost their
mobility or sensory capabilities, but need to move relatively
heavy objects. This assistive robot is similar to an electric pallet
truck, which helps workers lift and move heavy, stacked
pallets in a warehouse.
Seniors and disabled people, the target users of these robots,
have limited mobility, sensory, and cognitive levels. Therefore, safe operation of the system under human control cannot
be guaranteed. For example, seniors who have lost some visual capacity may not notice obstacles while operating the assistive
robot. Disabled people who have lost some mobility may not react
fast enough to command the robot to avoid obstacles. A fully
automatic controller could solve these problems. However, assistive robots should provide help only when it is needed. When
a fully automatic controller takes over all control authority, the
user feels the robot is out of control and may try to reclaim
control of it. For assistive robots, the user's rejection of
the robot is an unsatisfactory situation and may even cause danger. Therefore, it is a challenging problem to design a suitable
controller for a mobile assistive robot.
In order to compensate for the target users' control ability
and respect the users' self-esteem, the concept of shared control is
introduced into the assistive robot. The shared controller only
1083-4435 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
WANG AND LIU: ADAPTIVE SHARED CONTROL FOR A NOVEL MOBILE ASSISTIVE ROBOT
block sends a control signal with a coefficient s. This control signal is calculated by the obstacle avoidance algorithm in
Section IV. Both the human and autonomous control
signals are collected by the shared control block, which
outputs the shared control signal to the mobile
robot. An adaptation law based on the human and autonomous
control inputs is applied to change the allocation weight s for
each control input in the shared control block.
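As an illustrative sketch only (not the authors' implementation; the allocation weight s is supplied by the caller rather than produced by the paper's adaptation law), the shared control block amounts to a convex combination of the two command vectors:

```python
def shared_control(u_human, u_auto, s):
    """Blend human and autonomous commands with allocation weight s in [0, 1].

    s = 0 gives full authority to the human; s = 1 gives it all to the
    autonomous (obstacle avoidance) controller.
    """
    s = min(max(s, 0.0), 1.0)  # clip the weight to the valid range
    return [(1.0 - s) * uh + s * ua for uh, ua in zip(u_human, u_auto)]

# Example: the human steers toward an obstacle, autonomy steers away.
u_h = [0.5, 0.2]   # (v, w) commanded by the human
u_a = [0.3, -0.4]  # (v, w) from the obstacle avoidance controller
print(shared_control(u_h, u_a, 0.7))  # autonomy holds most of the authority
```

Because the output is a convex combination, it always lies between the two inputs, which is what makes the stability analysis by convex analysis mentioned later in the paper natural for this structure.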
III. CONTROL ALGORITHM FOR THE NONHOLONOMIC
MOBILE ROBOT
The kinematics of the nonholonomic mobile robot are

$$\dot{P}_m = \begin{bmatrix} \dot{x}_m \\ \dot{y}_m \\ \dot{\theta}_m \end{bmatrix} = \begin{bmatrix} \cos\theta_m & 0 \\ \sin\theta_m & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} v_m \\ \omega_m \end{bmatrix} \tag{2}$$

where $\theta_m$ is the angle between the robot heading and the positive X-axis, $v_m$ is the translational velocity, and $\omega_m$ is the rotational velocity of the robot.

We define the errors as

$$\begin{bmatrix} r_e \\ \theta_e \\ \varphi_e \end{bmatrix} = \begin{bmatrix} \sqrt{(x_h - x_m)^2 + (y_h - y_m)^2} \\ \theta_{re} - \theta_m \\ \theta_{re} - \theta_r \end{bmatrix}. \tag{3}$$

Here, $r_e$ is the distance between the robot's current position and the human's position, $\theta_r$ is the direction of the human input, and $\theta_{re}$ is the angle between the vector $r_e$ and the positive X-axis. $\theta_e$ is the error between $\theta_{re}$ and the mobile robot's orientation $\theta_m$, and $\varphi_e$ is the error between $\theta_{re}$ and the human input's direction $\theta_r$.

If we assume the human's translational and rotational velocities are $v_r$ and $\omega_r$, respectively, the derivatives of the errors $\theta_e$ and $\varphi_e$ can be written as

$$\dot{\theta}_e = \dot{\theta}_{re} - \dot{\theta}_m = v_m\frac{\sin\theta_e}{r_e} - v_r\frac{\sin\varphi_e}{r_e} - \omega_m \tag{4}$$

$$\dot{\varphi}_e = \dot{\theta}_{re} - \dot{\theta}_r = \dot{\theta}_{re} - \omega_r. \tag{5}$$

Consider the Lyapunov function candidate

$$V_1 = \frac{1}{2} r_e^2 + \frac{1}{2}\theta_e^2 + \frac{1}{2}\varphi_e^2 \tag{6}$$

whose derivative along the error dynamics is

$$\dot{V}_1 = r_e \dot{r}_e + \theta_e \dot{\theta}_e + \varphi_e \dot{\varphi}_e. \tag{7}$$

If we take $v_m$ as

$$v_m = \frac{v_r \cos\varphi_e + k_v r_e \cos\theta_e}{\cos\theta_e} \tag{8}$$

then $\dot{r}_e = v_r\cos\varphi_e - v_m\cos\theta_e = -k_v r_e \cos\theta_e \le 0$ for $|\theta_e| < \pi/2$, and the derivative of the angular part $V_2 = \frac{1}{2}\theta_e^2 + \frac{1}{2}\varphi_e^2$ is

$$\dot{V}_2 = \theta_e\left(\dot{\theta}_{re} - \dot{\theta}_m\right) + \varphi_e\left(\dot{\theta}_{re} - \dot{\theta}_r\right). \tag{9}$$

Choosing the rotational velocity

$$\omega_m = \frac{\dot{\theta}_{re}\left(\theta_e + \varphi_e\right) - \omega_r\varphi_e + k_\omega\theta_e^2}{\theta_e} \tag{10}$$

yields $\dot{V}_2 = -k_\omega\theta_e^2 \le 0$.
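The unicycle kinematic model in (2) can be checked numerically; the following is a minimal sketch (forward Euler integration with an assumed step size, not from the paper):

```python
import math

def step_unicycle(x, y, theta, v, w, dt=0.01):
    """One forward-Euler step of the nonholonomic (unicycle) model:
    x' = v cos(theta), y' = v sin(theta), theta' = w."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)

# Drive straight along the X-axis for one second (100 steps of 0.01 s).
x, y, th = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, th = step_unicycle(x, y, th, v=1.0, w=0.0)
print(round(x, 3))  # travels about 1.0 m
```

The same stepper can be driven by any (v, w) law, so a tracking controller for this model can be prototyped against it before running on hardware.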
The deformable virtual zone (DVZ) principle has been used to design obstacle avoidance algorithms for robotic manipulators. The main idea of this principle is to introduce a virtual
zone around the robot, which helps the robot interact with the
environment. The robot is commanded by the deformation of
this virtual zone to stay out of areas of potential risk. Unlike
other techniques with static zones, the DVZ is dynamic:
the shape and geometry of the zone depend on the mobile
robot's translational and rotational velocities.
In this section, we first introduce the general statement
and calculation of the DVZ principle in Sections IV-A and IV-B.
Then, we implement this principle in the obstacle avoidance algorithm of our system, which is introduced in Sections IV-C, IV-D,
and IV-E.
d and dh will get closer to each other, and there will be fewer obstacles
within the DVZ around the robot. Therefore, if we want to use
(14) to design the obstacle avoidance controller, we have to
differentiate $\Xi$ along its gradient direction to control $v_m$ and $\omega_m$
for the mobile robot. With this controller, $\Xi$ can be minimized
to avoid obstacles.
To calculate the control signal for the mobile robot to avoid obstacles, differentiate $\Xi$ with respect to $U$:

$$\dot{\Xi} = \nabla_{U_x}[\Xi]\,\dot{U}_x + \nabla_{U_y}[\Xi]\,\dot{U}_y + \nabla_{I}[\Xi]\,\dot{I}. \tag{15}$$
Here, $d(\varphi_D)$ is the distance between the robot and the obstacle,
which is given by the sonar sensor on the robot. The intrusion
ratio $I_D$ is introduced to quantify the DVZ deformation. We can
design the controller of the mobile robot to minimize $I_D$ to
ensure that the robot can avoid the obstacles.
In addition, we introduce an average angle $\bar{\theta}_D$ to indicate the
current orientation of the obstacles relative to the mobile robot

$$\bar{\theta}_D = \frac{\int \left(d_h(\varphi_D) - d(\varphi_D)\right)\varphi_D \, d\varphi_D}{\int \left(d_h(\varphi_D) - d(\varphi_D)\right) d\varphi_D} + \theta_m. \tag{19}$$

From (19), we can see that the average angle $\bar{\theta}_D$ is the
weighted average of $\varphi_D$. This value gives the direction where
most of the obstacles are located. Using the controller of the
mobile robot, we can command the robot to move away from
this direction.
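A discretized version of this weighted average can be sketched for a ring of sonar readings (an illustrative approximation of the integrals by finite sums; the function and variable names are assumptions, not the authors' code):

```python
def average_obstacle_angle(d_h, d, angles, theta_m):
    """Discretized weighted-average obstacle direction.

    d_h[i]: DVZ boundary distance in sensor direction angles[i] (robot frame).
    d[i]:   measured sonar distance in that direction.
    Directions where the measurement falls inside the zone (d < d_h)
    contribute with weight (d_h - d). Returns the weighted mean direction
    plus the robot heading theta_m, or None when nothing intrudes.
    """
    weights = [max(dh_i - d_i, 0.0) for dh_i, d_i in zip(d_h, d)]
    total = sum(weights)
    if total == 0.0:
        return None  # DVZ undeformed: no obstacle inside the zone
    return sum(w * a for w, a in zip(weights, angles)) / total + theta_m

# Two rays intrude equally at +0.2 and +0.4 rad; the third does not intrude.
print(average_obstacle_angle([1.0, 1.0, 1.0], [0.7, 0.7, 1.5],
                             [0.2, 0.4, -0.5], theta_m=0.0))  # about 0.3 rad
```

Steering the robot away from the returned direction is exactly the behavior the text describes.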
We define a Lyapunov-like function

$$V_D = V_{DI} + V_{D\theta} = \frac{1}{2} I_D^2 + \frac{1}{2}\left(\theta_m - \bar{\theta}_D\right)^2. \tag{20}$$

The derivative of $V_D$ is

$$\dot{V}_D = \dot{V}_{DI} + \dot{V}_{D\theta} = I_D \dot{I}_D + \left(\theta_m - \bar{\theta}_D\right)\left(\dot{\theta}_m - \dot{\bar{\theta}}_D\right) \tag{21}$$

and $\dot{V}_{DI}$ can be written as

$$\dot{V}_{DI} = I_D \dot{I}_D = I_D \left(J_{I_D}^{x_m} v_{x_m} + J_{I_D}^{y_m} v_{y_m} + J_{I_D}^{ob} v_{ob}\right). \tag{22}$$

We choose

$$v_{x_m} = -k_{v_{xm}} J_{I_D}^{x_m} \tag{23}$$

and

$$v_{y_m} = -k_{v_{ym}} J_{I_D}^{y_m}. \tag{24}$$

Here, $k_{v_{xm}}$ and $k_{v_{ym}}$ are the gains for $v_{x_m}$ and $v_{y_m}$, respectively; these two gains control the amplitudes of $v_{x_m}$ and $v_{y_m}$. The Jacobian functions $J_{I_D}^{x_m}$ and $J_{I_D}^{y_m}$ can be expressed as

$$J_{I_D}^{x_m} = \frac{\partial I_D}{\partial x_m} = \int \frac{x_m - x_{ob}(\varphi_D)}{d_h(\varphi_D)\, d(\varphi_D)}\, d\varphi_D \tag{25}$$

$$J_{I_D}^{y_m} = \frac{\partial I_D}{\partial y_m} = \int \frac{y_m - y_{ob}(\varphi_D)}{d_h(\varphi_D)\, d(\varphi_D)}\, d\varphi_D. \tag{26}$$
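The gradient-descent velocity commands (23) and (24) can be sketched numerically. In this sketch the Jacobians are approximated by central finite differences on an arbitrary scalar intrusion measure, instead of the closed-form expressions (25) and (26); all names and the toy field are illustrative assumptions:

```python
def avoidance_velocity(intrusion, x, y, k=0.5, eps=1e-4):
    """Gradient-descent velocity command that reduces an intrusion measure.

    `intrusion(x, y)` is any scalar deformation measure of the DVZ at
    robot position (x, y). The Jacobians are estimated by central
    differences; the command points down the gradient, scaled by gain k.
    """
    jx = (intrusion(x + eps, y) - intrusion(x - eps, y)) / (2 * eps)
    jy = (intrusion(x, y + eps) - intrusion(x, y - eps)) / (2 * eps)
    return -k * jx, -k * jy

# Toy intrusion field peaking at the origin: the command points away from it.
I = lambda x, y: max(1.0 - (x * x + y * y), 0.0)
vx, vy = avoidance_velocity(I, 0.3, 0.0)
print(vx, vy)  # positive vx: move away from the obstacle at the origin
```

Driving the velocity along the negative gradient is what makes the Lyapunov-like term in (22) decrease, which is the stability argument behind (23) and (24).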
Fig. 8. Experimental environment.
Fig. 10. Wall following experiments: (a) human and robot trajectory;
(b) s in the experiment. (Timestamps are shown beside the trajectories:
the large numbers are the timestamps for the robot's trajectory, and the
small numbers are the timestamps for the human's trajectory.)
Fig. 11. Narrow corridor passing experiment (door width: 880 mm): (a) human
and robot trajectory; (b) s in the experiment.
Fig. 12. Narrow corridor passing experiment (door width: 680 mm): (a) human
and robot trajectory; (b) s in the experiment.
ment, the user holds the stylus of the Phantom Omni and controls
the robot so that it moves along the wall. We notice that the robot cannot
get very close to the wall. This is because of the effectiveness of
the DVZ around the robot in the obstacle avoidance algorithm.
When the robot gets too close to the wall, which is a dangerous
action, this virtual zone expels the robot from the potential danger. When the robot goes straight and meets the wall
on the right side, we can see more obvious evidence of the
effectiveness of the DVZ and the shared controller. Here, the user
intends to command the robot to move towards the wall. The
DVZ gives the avoidance command to the shared controller. The
shared controller calculates the allocation weight s and gives
the control command to the mobile robot. We can see that s
is around 0.7 at this point [shown in Fig. 10(b)]. The obstacle
avoidance algorithm takes most of the control of the robot, and the
robot immediately turns to a safer direction. When the robot
meets the other wall, the same process happens: s increases to
help the robot move out of danger. The dynamic process of the
change in s can be found in Fig. 10(b). We may also notice
that there are some flat parts in s. This phenomenon is caused by
the numerical accuracy of Algorithm 1. Since the bisection method
is involved in this algorithm, we need to adjust the maximum
number of iterations and the tolerance to meet the fixed update rate of 100 Hz.
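Algorithm 1 itself is not reproduced in this excerpt; the following generic bisection sketch only illustrates the trade-off described above between the iteration cap, the tolerance, and a fixed update rate:

```python
def bisect(f, lo, hi, tol=1e-3, max_iter=20):
    """Bisection root finder with capped iterations.

    Capping max_iter bounds the worst-case run time, which is what lets a
    solver like this fit inside a fixed 100-Hz control loop; the price is
    that the result is only accurate to about (hi - lo) / 2**max_iter,
    which shows up as quantized (flat) segments in the computed value.
    """
    assert f(lo) * f(hi) <= 0, "root must be bracketed"
    for _ in range(max_iter):
        if hi - lo < tol:
            break  # interval already within tolerance
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid  # root lies in the left half
        else:
            lo = mid  # root lies in the right half
    return 0.5 * (lo + hi)

# Root of x^2 - 2 on [0, 2] to the configured tolerance.
print(bisect(lambda x: x * x - 2.0, 0.0, 2.0))  # close to 1.414
```

Tightening tol or raising max_iter smooths the output at the cost of worst-case loop time, which is exactly the adjustment the text describes.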
Fig. 13. Door passing experiments: (a) human and robot trajectory; (b) s in
the experiment.
Fig. 14. Complex environment experiment: (a) human and robot trajectory;
(b) s in the experiment.
helps the user pass the door, avoid the obstacle, follow the
wall, and pass the narrow corridor. For the overall performance
of the shared controller, we can conclude that the designed
shared controller meets the design requirements of the system
and performs well in a complex environment.
VII. CONCLUSION
This paper presents a prototype of a novel assistive robot for
transporting heavy objects with an adaptive servo-level shared
control, which has great potential for healthcare applications.
The system consists of two parts: one is the mobile robot,
MOBILEROBOTS' PeopleBot; the other is the haptic device,
SensAble's Phantom Omni. The mobile robot works as
the base of the system to carry the haptic device and loads. In
order to control the movement of the mobile robot using the haptic device, we developed a tracking controller for the mobile
robot. To avoid the obstacles in the environment, an obstacle
avoidance controller is designed based on the DVZ principle. An adaptive servo-level shared controller is applied to mix these two
controllers' outputs into a shared control output to command
the robot. This shared control output can compensate for the
user's control ability and respect the user's self-esteem at the
same time. Convex analysis is applied to analyze the stability of
this shared controller. The experiments show the effectiveness
of the designed control algorithms. Future work will focus on
applying the developed system to clinical cases and improving the
system based on specific application requirements.
REFERENCES
[1] G. Lacey, "User involvement in the design and evaluation of a smart mobility aid," J. Rehabil. Res. Develop., vol. 37, no. 6, pp. 709–723, 2000.
[2] M. Kassler, "Robotics for health care: A review of the literature," Robotica, vol. 11, no. 6, pp. 495–516, 1993.
[3] J. Broekens, M. Heerink, and H. Rosendal, "Assistive social robots in elderly care: A review," Gerontechnology, vol. 8, no. 2, pp. 94–103, 2009.
[4] P. Flandorfer, "Population ageing and socially assistive robots for elderly persons: The importance of sociodemographic factors for user acceptance," Int. J. Population Res., vol. 2012, pp. 1–13, 2012.
[5] J. J. Wagner, M. Wickizer, H. F. M. Van der Loos, and L. J. Leifer, "User testing and design iteration of the ProVAR user interface," in Proc. IEEE Int. Workshop Robot Human Commun., 1999, pp. 18–22.
[6] H. F. M. Van der Loos, J. Hammel, D. Lees, D. Chang, and I. Perkash, "Field evaluation of a robot workstation for quadriplegic office workers," Eur. Rev. Biomed., vol. 5, pp. 317–319, 1990.
[7] M. Johnson, E. Guglielmelli, G. D. Lauro, C. Laschi, M. Carrozza, and P. Dario, "Giving-a-Hand system: The development of a task-specific robot appliance," Adv. Rehabil. Robot., vol. 306, pp. 127–141, 2004.
[8] H. H. Kwee, "Integrated control of MANUS manipulator and wheelchair enhanced by environmental docking," Robotica, vol. 16, no. 5, pp. 491–498, 1998.
[9] D.-J. Kim, Z. Wang, and A. Behal, "Motion segmentation and control design for UCF-MANUS, an intelligent assistive robotic manipulator," IEEE/ASME Trans. Mechatronics, vol. 17, no. 5, pp. 936–948, Oct. 2012.
[10] B. Graf, M. Hans, and R. D. Schraft, "Care-O-bot II: Development of a next generation robotic home assistant," Auton. Robots, vol. 16, no. 2, pp. 193–205, 2004.
[11] S. Levine, D. Bell, L. Jaros, R. Simpson, Y. Koren, and J. Borenstein, "The NavChair assistive wheelchair navigation system," IEEE Trans. Rehabil. Eng., vol. 7, no. 4, pp. 443–451, Dec. 1999.
[12] R. C. Simpson, D. Poirot, and F. Baxter, "The Hephaestus smart wheelchair system," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 10, no. 2, pp. 118–122, Jun. 2002.
[13] H. A. Yanco, "Wheelesley: A robotic wheelchair system: Indoor navigation and user interface," in Assistive Technology and Artificial Intelligence: Applications in Robotics, User Interfaces and Natural Language Processing. London, U.K.: Springer-Verlag, 1998, pp. 256–268.
[14] Z. Bien, M.-J. Chung, P.-H. Chang, D.-S. Kwon, D.-J. Kim, J.-S. Han, J.-H. Kim, D.-H. Kim, H.-S. Park, S.-H. Kang, K. Lee, and S.-C. Lim, "Integration of a rehabilitation robotic system (KARES II) with human-friendly man-machine interaction units," Auton. Robots, vol. 16, pp. 165–191, Mar. 2004.
[15] K. Wada, T. Shibata, T. Saito, and K. Tanie, "Effects of three months robot assisted activity to depression of elderly people who stay at a health service facility for the aged," in Proc. SICE Annu. Conf., 2004, pp. 2609–2614.
[16] A. J. Brisben, A. D. Lockerd, and C. Lathan, "Design evolution of an interactive robot for therapy," Telemedicine e-Health, vol. 10, no. 2, pp. 252–259, 2004.
[17] H. Wang and X. Liu, "Haptic interaction for mobile assistive robots," IEEE Trans. Instrum. Meas., vol. 60, no. 11, pp. 3501–3509, Nov. 2011.
[18] K. A. Tahboub, "Natural and manmade shared-control systems: An overview," in Proc. IEEE Int. Conf. Robot. Autom., 2001, vol. 3, pp. 2655–2660.
[19] J. Kofman, X. Wu, T. Luu, and S. Verma, "Teleoperation of a robot manipulator using a vision-based human-robot interface," IEEE Trans. Ind. Electron., vol. 52, no. 5, pp. 1206–1219, Oct. 2005.
[20] J. Borenstein and Y. Koren, "Teleautonomous guidance for mobile robots," IEEE Trans. Syst., Man, Cybern., vol. 20, no. 6, pp. 1437–1443, Nov./Dec. 1990.
[21] S. T. Venkataraman and S. Hayati, "Shared/traded control of telerobots under time delay," Comput. Electr. Eng., vol. 19, no. 6, pp. 481–494, 1993.
[22] G. Hirzinger, B. Brunner, J. Dietrich, and J. Heindl, "Sensor-based space robotics: ROTEX and its telerobotic features," IEEE Trans. Robot. Autom., vol. 9, no. 5, pp. 649–661, Oct. 1993.
[23] D. J. Bruemmer, D. A. Few, R. L. Boring, J. L. Marble, M. C. Walton, and C. W. Nielsen, "Shared understanding for collaborative control," IEEE Trans. Syst., Man, Cybern. A, Syst. Humans, vol. 35, no. 4, pp. 494–504, Jul. 2005.
[24] B. Khademian and K. Hashtrudi-Zaad, "Shared control architectures for haptic training: Performance and coupled stability analysis," Int. J. Robot. Res., vol. 30, no. 13, pp. 1627–1642, 2011.
[25] B. Khademian, J. Apkarian, and K. Hashtrudi-Zaad, "Assessment of environmental effects on collaborative haptic guidance," Presence, Teleoperators Virtual Environ., vol. 20, no. 3, pp. 191–206, 2011.
[26] B. Khademian and K. Hashtrudi-Zaad, "Dual-user teleoperation systems: New multilateral shared control architecture and kinesthetic performance measures," IEEE/ASME Trans. Mechatronics, vol. 17, no. 5, pp. 895–906, Oct. 2012.
[27] B. Khademian and K. Hashtrudi-Zaad, "Performance issues in collaborative haptic training," in Proc. IEEE Int. Conf. Robot. Autom., 2007, pp. 3257–3262.
[28] A. Hansson and M. Servin, "Semi-autonomous shared control of large-scale manipulator arms," Control Eng. Practice, vol. 18, no. 9, pp. 1069–1076, 2010.
[29] H. Yu, M. Spenko, and S. Dubowsky, "An adaptive shared control system for an intelligent mobility aid for the elderly," Auton. Robots, vol. 15, no. 1, pp. 53–66, 2003.
[30] C. Urdiales, J. M. Peula, M. Fdez-Carmona, C. Barrué, E. J. Pérez, I. Sánchez-Tato, J. C. Del Toro, F. Galluppi, U. Cortés, R. Annichiaricco, C. Caltagirone, and F. Sandoval, "A new multi-criteria optimization strategy for shared control in wheelchair assisted navigation," Auton. Robots, vol. 30, no. 2, pp. 179–197, 2011.
[31] A. Poncela, C. Urdiales, E. J. Pérez, and F. Sandoval, "A new efficiency-weighted strategy for continuous human/robot cooperation in navigation," IEEE Trans. Syst., Man, Cybern. A, Syst. Humans, vol. 39, no. 3, pp. 486–500, May 2009.
[32] Q. Li, W. Chen, and J. Wang, "Dynamic shared control for human-wheelchair cooperation," in Proc. IEEE Int. Conf. Robot. Autom., May 2011, pp. 4278–4283.
[33] W. Ren and R. Beard, "Satisfying approach to human-in-the-loop safeguarded control," in Proc. Amer. Control Conf., Jun. 2005, vol. 7, pp. 4985–4990.
[34] Performance PeopleBot Operations Manual, MOBILEROBOTS Company, Amherst, NH, USA, 2007.
[35] P. Morin and C. Samson, "Trajectory tracking for nonholonomic vehicles," in Robot Motion and Control: Recent Developments (LNCIS, vol. 335). Berlin, Germany: Springer, 2006, pp. 3–23.