
IEEE/ASME TRANSACTIONS ON MECHATRONICS, VOL. 19, NO. 6, DECEMBER 2014

Adaptive Shared Control for a Novel Mobile Assistive Robot
Huanran Wang, Student Member, IEEE, and Xiaoping P. Liu, Senior Member, IEEE

Abstract: This paper presents a new adaptive servo-level shared control scheme for a mobile assistive robot that aims at assisting seniors and disabled people in transporting heavy objects in a complex environment with obstacles. Several technical problems and challenges related to the assistive robotic system and the shared controller are addressed. Specifically, a nonlinear tracking controller is developed for the robot to follow the user, and an obstacle avoidance controller is developed based on the deformable virtual zone principle for the robot to avoid obstacles. The adaptive servo-level shared controller combines the tracking controller's and the obstacle avoidance controller's outputs to generate a new shared control output to command the robot. Experiments show that the user can guide the movement of the robot safely and smoothly in the complex environment with the developed controllers.

Index Terms: Assistive robot, convex analysis, shared control.

I. INTRODUCTION
WHEN people lose some level of mobility, sensing capabilities, or cognitive abilities because of age or disabilities, they cannot complete activities of daily living (ADLs).
One possible solution is to build suitable and practical assistive
robotic systems to assist this group of people with their ADLs.
Given the current rapidly aging population and the existence of a large number of people with different kinds of disabilities, assistive robotic systems find many applications and show great potential.
In particular, assistive robotic systems are designed for people
who have partly or completely lost their mobility or sensing
capabilities. These systems enable people who would otherwise
need help to live independently, which is a very important factor in the quality of modern life [1]. Kassler [2], Broekens [3],
and Flandorfer [4] provide excellent surveys on the history of
assistive robots, the research progress, and the user acceptance
of assistive robots.
From the current literature, there are three main categories of
assistive robots: manipulation aids, mobility aids, and cognitive

Manuscript received April 28, 2013; revised September 8, 2013 and December 10, 2013; accepted December 20, 2013. Date of publication January 28,
2014; date of current version June 13, 2014. Recommended by Technical Editor R. Oboe. This work was supported in part by the National Basic Research
Program of China (2011CB302400), in part by the National Natural Science
Foundation of China under Grants 61175072 and 51165033, in part by the 863
Program (2013AA013804), and in part by the Natural Sciences and Engineering
Research Council of Canada.
The authors are with the Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S5B6, Canada (e-mail:
hrwang@sce.carleton.ca; xpliu@sce.carleton.ca).
Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TMECH.2014.2299213

aids. Manipulation aids: Manipulation aids are usually composed of a light-weight robot arm to manipulate objects and a
computer used to interface with the human. ProVAR [5], AfMASTER/RAID [6], Giving-A-Hand [7], MANUS [8], UCF
MANUS [9], and Care-O-bot [10] are good examples of this
type of assistive robot. Mobility aids: These robots are built on either a walker or a wheelchair to help people who need mobility assistance. NavChair [11], Hephaestus [12], Wheelesley [13], and Kares PAM-AID [14] are successful systems for the intended group of people. Cognitive aids: These robots mainly focus on assistive tasks for people with mental health problems. The baby harp seal robot PARO [15] and CosmoBot [16] show the potential value for this health problem.
One problem for assistive robotic systems is that there is no
universal or general solution to all kinds of different requirements, especially from the clinical perspective. This means that
the design and development of such a system must be application or task dependent. For example, in current manipulation
aids, the robot works in a limited workspace and manipulates
light-weight objects. Therefore, if the user wants to transport
heavy objects in a relatively large space such as shopping in
a grocery store, we have to design a novel assistive robot to
assist the user. A prototype of a novel mobile assistive robot for
seniors and disabled people has been developed at STAR Lab,
Carleton University [17]. This robot prototype assists seniors
and disabled people who have partly or completely lost their
mobility or sensory capabilities, but need to move relatively
heavy objects. This assistive robot is similar to an electric pallet
truck, which helps workers to lift and move heavier and stacked
pallets in a warehouse.
Seniors and disabled people, the target users of these robots, have limited mobility and reduced sensory and cognitive capabilities. Therefore, safe operation of the system under human control cannot
be guaranteed. For example, seniors who lose some visual capacity may not notice obstacles while operating the assistive
robot. Disabled people who lose some mobility may not react
fast enough to command the robot to avoid obstacles. A fully
automatic controller could solve these problems. However, assistive robots should provide help only when it is needed. When
a fully automatic controller takes over all control authority, the
user feels the robot is out of control and may try to reclaim
control of the robot. For assistive robots, the user's rejection of
the robot is an unsatisfactory situation and may even cause danger. Therefore, it is a challenging problem to design a suitable
controller for a mobile assistive robot.
In order to compensate for the target user's limited control ability and respect the user's self-esteem, the concept of shared control is
introduced into the assistive robot. The shared controller only

1083-4435 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


assists the user when help is needed. The concept of shared


control is used extensively in human-robot interaction systems, from teleoperation systems and medical training systems to assistive robotic systems [18]. There are two categories of shared control: task-level shared control and servo-level shared control.
Task-level shared control is macrolevel control. The user gives
general orders to the robot to complete a specific task such
as passing through a door, moving along a narrow corridor,
avoiding obstacles, etc. The robot will carry out specific task
orders such as where and when to go straight, or where and
when to turn around. Macrolevel shared control is found in
teleoperation applications [19]-[23]. In these applications, at
the beginning, the user takes over full control of the mobile
robot. But when a given situation is triggered, the robot takes
over the control from the user and performs the appropriate
action. For example, when an obstacle appears, the robot will
take full control authority from the user to avoid the obstacle.
Servo-level shared control is microlevel control. The control
signal is generated at every sample point. The control input
for the robot platform is the combination of the human and
autonomous control inputs. At any sample point, the user is
involved in controlling the movement of the robot. Equation (1) shows this idea:

u_s = (1 − α_s) u_h + α_s u_r.    (1)

Here, u_s is the shared control input for the robot's actuators, u_h and u_r are the human and autonomous inputs, respectively, and α_s is the allocation weight that adjusts the proportions of u_h and u_r in u_s.

The allocation weight α_s can be a fixed or a real-time changing value. Based on these two kinds of allocation weights, servo-level shared control can be further divided into two categories: fixed servo-level shared control and adaptive servo-level shared control.
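As a minimal sketch of how (1) is evaluated at each sample point, the following function blends the two inputs. The function name and the clipping of the weight to its domain are illustrative assumptions, not details from the paper.

```python
import numpy as np

def shared_control(u_h, u_r, alpha_s):
    """Servo-level blend u_s = (1 - alpha_s) * u_h + alpha_s * u_r.

    u_h, u_r: human and autonomous inputs, e.g. [translational, rotational].
    alpha_s:  allocation weight; 0 gives pure human control, 1 pure autonomy.
    """
    alpha_s = float(np.clip(alpha_s, 0.0, 1.0))  # keep the weight in its domain
    return (1.0 - alpha_s) * np.asarray(u_h, float) + alpha_s * np.asarray(u_r, float)

# with alpha_s = 0.25 the human input stays dominant
u_s = shared_control([0.4, 0.1], [0.0, -0.3], 0.25)
```

At α_s = 0 the user is in full control; raising α_s hands authority to the autonomous controller, which is exactly the knob an adaptive allocation law turns.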
Fixed servo-level shared control is used in a situation with
prior knowledge for the two inputs. For example, this kind of
shared control can be used in dual-user haptic training for medical surgery [24]-[27]. The trainer knows the level of the trainee and can use the allocation weight α_s to adjust the dominance to train the trainee at different levels.
Adaptive servo-level shared control is used in a situation
where some inputs may bring unstable results or the environment may change dramatically. The allocation weight changes
adaptively according to given rules to adjust the proportion of
the human and autonomous inputs in the shared control input.
For example, in a large-scale manipulator for gripping and lifting heavy objects developed at Umeå University, Sweden [28], α_s is given by the magnitude of the operator's reference velocity, the operator's distance to the path, and the duration of interaction. This adaptive weight gives the optimal proportions for the inputs to complete the manipulation tasks. In assistive robots, adaptive servo-level shared control is very important to ensure the stable and safe operation of the robot platform. H. Yu [29] uses an adaptive law in which α_s is calculated from the history of the user's performance indexes. C. Urdiales [30] and A. Poncela [31] use the average of the performance indexes to calculate α_s. Q. Li [32] uses a minimax multiobjective optimization algorithm to calculate α_s.


There are two problems in existing adaptive servo-level
shared control algorithms: variations in design method and lack
of stability analysis. The variation of these adaptive servo-level
shared control algorithms is based on the objective of the developed system. For example, the personal aid in [29] is meant to compensate for the user's mobility and monitor the user's health condition, so the history of performance indexes is very important in that research. The smart wheelchairs in [30]-[32] are used to eliminate onerous maneuvering actions and dangers like collisions and falls, so a combination of performance indexes such as safety, smoothness, and direction is used to calculate α_s. Stability analysis of shared control algorithms is important from both the practical and the control-theoretic points of view.
For practical assistive robot systems, users are usually seniors
and disabled people who have limited sensory and mobility
capability. An unsafe and unstable system may place the user
in a dangerous situation. From the control-theoretic perspective, a
solid analysis of the controller helps to optimize the performance of the system. Therefore, stability analysis for systems
with a shared controller is essential before the release of the
robot. There is little discussion of such analysis in the literature. Ren [33] investigates the stability of his shared control scheme using the control Lyapunov function method.
As mentioned previously, shared control algorithms vary from
each other based on their applications. Therefore, the stability
analysis methods also vary slightly in different shared control
algorithms. However, the common problem in stability analysis for adaptive servo-level shared control is to find a suitable
range for the allocation weight to guarantee the safety and stable
operation of the system. To solve this common problem, some
fundamental framework should be investigated.
This paper presents an adaptive servo-level shared controller
for our mobile assistive robot with specific design considerations
and stability guarantees. The proposed assistive robot works in
an environment with obstacles. The proposed shared controller
should be able to assist the user to move freely and smoothly and
simultaneously avoid the randomly placed obstacles. Also, the
stability of the controller should be guaranteed to ensure the safe
operation of the robot. To meet these requirements, a nonlinear
tracking controller is developed to control the robot to follow
the user. An obstacle avoidance controller is developed based
on the deformable virtual zone (DVZ) principle. The adaptive
servo-level shared controller adaptively allocates the weights of the two controllers' outputs and commands the movement of
the mobile robot by the shared control output. Convex analysis
is applied to analyze the stability of the shared controller.
The main contribution of this paper has two parts: One part
is the development of a customized adaptive servo-level shared
control for the prototype of a novel mobile assistive robot. The
nonlinear tracking controller and the DVZ-based obstacle avoidance algorithm in this shared controller are highly customized based on our assistive robot's particular application. The other
part is the stability analysis of the shared controller by convex analysis. For the stability analysis, until now, little research
attention has been paid to this problem for the shared control

Fig. 2. Adaptive servo-level shared control framework.

block sends a control signal with a coefficient α_s. This control signal is calculated by the obstacle avoidance algorithm in Section IV. Both the human control and autonomous control signals are collected by the shared control block, which outputs the shared control signal to the mobile robot. An adaptation law based on the human and autonomous control inputs is applied to change the allocation weight α_s for each control input in the shared control block.
III. CONTROL ALGORITHM FOR THE NONHOLONOMIC
MOBILE ROBOT
Fig. 1. Overview of the developed mobile robotic system.

design, especially in the assistive robot field. The introduction


of convex analysis in this paper provides a fundamental tool for stability analysis of shared controllers in assistive robot systems with complex and changing behaviors.
The organization of the paper is as follows: Section II gives an overview
of the proposed shared control diagram. Section III describes
the details of the nonlinear controller for the mobile robot. Section IV introduces the obstacle avoidance algorithm based on
the DVZ principle. Section V presents the adaptive servo-level
shared controller. The experimental results and the analysis are
shown in Section VI. The final part is the conclusion and future
work.
II. OVERVIEW OF THE SYSTEM
An overview of the proposed assistive robotic system is shown
in Fig. 1. The system has two main hardware components: a mobile robot and a haptic interface. The mobile robot is a PeopleBot robot from MOBILEROBOTS, which is a very powerful robotic platform suitable for human-robot interaction. It is nonholonomic with two different types of wheels: two driving wheels
and one omnidirectional wheel for balance [34]. There are 16
sonar sensors placed around the base of the mobile robot which
will be used in the obstacle avoidance algorithm. The haptic interface is a Phantom Omni haptic device from SensAble Technologies, which is light-weight and can be easily mounted on top of the robot. The basket holding the heavy objects is placed at the mobile robot's base.
The proposed adaptive servo-level shared control framework
is shown in Fig. 2. The human control block sends the control signal with a coefficient 1 − α_s. This control signal is given by the tracking controller in Section III. The autonomous control

One of the technical problems in the development of the


proposed assistive robotic system is to design a stable tracking
controller for the mobile robot to follow the user. There are
two kinds of models for mobile robots. One is a kinematics model with the translational and rotational velocities as control inputs. The other is a dynamic model with motor torques as control inputs. Because the PeopleBot base is a differential-drive nonholonomic mobile robot, the control signals given by the robot's software interface are translational and rotational velocities rather than motor torques. The controller is therefore designed based on a kinematics model of the mobile robot.
The kinematics model of the mobile robot is nonlinear.
Therefore, nonlinear control design techniques are applied to
develop the tracking controller. Generally, the tracking controllers are designed based on the Lyapunov stability theory [35].
Intelligent methods such as neural networks [36] and fuzzy
logic [37], [38] are also applied together with the Lyapunov stability theory. However, in our current application, the intelligent
methods are not suitable. Most neural-network-based tracking controllers require offline learning. A fuzzy logic controller needs suitable membership functions to be defined in advance.
Backstepping [39], sliding mode [40], and transverse function
[41] techniques are also applied to design the controller. These
approaches could give a stable tracking controller for the mobile
robot. However, due to the limited computation power of the onboard computer, some of them are too complicated to be implemented
in our system. Therefore, we need an appropriate approach to
design the tracking controller for our system.
In our application, a kinematics model is used to describe the
mobile base of the assistive robot. The Phantom Omni is used as
a control input device to control the mobile robot. Therefore,
the tracking controller design should consider the features of
the model and the input. Some of the tracking controllers are
designed based on Cartesian coordinates. Some of the tracking


controllers are designed based on polar coordinates. For our application, the Phantom Omni base is a polar robot; therefore, we design the tracking controller in polar coordinates.

The coordinate arrangement of the system is shown in Fig. 3. The points P_h = [x_h, y_h]^T and P_m = [x_m, y_m]^T are the reference positions of the human's hand (also the position of the Phantom Omni's stylus) and the mobile robot in the world coordinate frame XY, respectively.

Fig. 3. Coordinate of the mobile robot and haptic device.

Consider the following kinematics model for the nonholonomic mobile robot:

Ṗ_m = [ẋ_m; ẏ_m; θ̇_m] = [cos θ_m, 0; sin θ_m, 0; 0, 1] [v_m; ω_m]    (2)

where θ_m is the angle between the robot heading and the positive X-axis, v_m is the translational velocity, and ω_m is the rotational velocity of the robot.

We define the errors as

[r_e; θ_e; φ_e; ψ_e] = [sqrt((x_h − x_m)² + (y_h − y_m)²); θ_r − θ_m; θ_re − θ_m; θ_re − θ_r].    (3)

Here, r_e is the distance between the robot's current position and the human's position. θ_e is the angle between the human input's direction (θ_r) and the mobile robot's orientation (θ_m). θ_re is the angle between r_e and the positive X-axis. φ_e is the error between the angle of r_e (θ_re) and the mobile robot's orientation (θ_m). ψ_e is the error between the direction of r_e (θ_re) and the human input's direction (θ_r).

If we assume the human's translational and rotational velocities are v_r and ω_r, respectively, the derivatives of the errors φ_e and θ_e can be written as

φ̇_e = θ̇_re − θ̇_m = (v_m sin φ_e)/r_e − (v_r sin ψ_e)/r_e − ω_m    (4)

θ̇_e = ω_r − ω_m.    (5)

We give the following candidate Lyapunov function:

V = V_1 + V_2 = (1/2) r_e² + (1/2) φ_e² + (1/2) θ_e².    (6)

The derivative of V_1 = (1/2) r_e² is

V̇_1 = r_e ṙ_e = r_e (v_r cos ψ_e − v_m cos φ_e).    (7)

If we take v_m as

v_m = (v_r cos ψ_e)/cos φ_e + λ_v r_e cos φ_e    (8)

then V̇_1 = −λ_v r_e² cos² φ_e < 0, where λ_v (λ_v > 0) is the gain for v_m, which controls the amplitude of v_m.

The derivative of V_2 = (1/2) φ_e² + (1/2) θ_e² is

V̇_2 = φ_e [θ̇_re − ω_m + (ω_r − ω_m) θ_e/φ_e].    (9)

Here, θ̇_re is the rotational velocity of r_e.

If we take ω_m as

ω_m = (θ̇_re φ_e + ω_r θ_e + λ_ω ψ_e²) / (φ_e + θ_e)    (10)

then V̇_2 = −λ_ω ψ_e² < 0, where λ_ω (λ_ω > 0) is the gain for ω_m, which controls the amplitude of ω_m.

By combining (8) and (10), we make the system stable (V̇ < 0) and obtain the controller for the mobile robot.

Compared with the conventional approach that we applied in our previous work [17], the tracking controller in this paper has a faster response time. In the conventional approach, the Lyapunov function is chosen mainly based on the position errors, and a coordinate transformation is needed to obtain them; sensor noise is introduced in this transformation process. The Lyapunov function in this paper is chosen mainly based on the angle errors, and the angle errors in the controllers (8) and (10) can be derived directly from the encoders of the Phantom Omni. This is a very convenient feature: little coordinate transformation is needed, so less sensor noise is introduced, less filtering is required, and the delay of the controller is smaller. Therefore, the tracking controller in this paper has a faster response time.
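The tracking laws (8) and (10) fit in a short function. This is a sketch only: the gain names lam_v and lam_w, the use of ψ_e = φ_e − θ_e from the error definitions, and the guard against a vanishing denominator φ_e + θ_e are illustrative assumptions, not details given in the paper.

```python
import numpy as np

def tracking_controller(r_e, theta_e, phi_e, v_r, omega_r, theta_re_dot,
                        lam_v=0.8, lam_w=0.8, eps=1e-6):
    """Sketch of the tracking laws (8) and (10).

    r_e:            distance between the robot and the human's hand position
    theta_e, phi_e: angle errors from (3); psi_e = phi_e - theta_e
    v_r, omega_r:   human translational and rotational velocities
    theta_re_dot:   rotational velocity of r_e
    """
    psi_e = phi_e - theta_e
    # (8): translational velocity; gives V1_dot = -lam_v * r_e^2 * cos^2(phi_e)
    v_m = v_r * np.cos(psi_e) / np.cos(phi_e) + lam_v * r_e * np.cos(phi_e)
    # (10): rotational velocity; gives V2_dot = -lam_w * psi_e^2
    denom = phi_e + theta_e
    if abs(denom) < eps:          # guard the singular configuration (assumption)
        omega_m = omega_r
    else:
        omega_m = (theta_re_dot * phi_e + omega_r * theta_e
                   + lam_w * psi_e ** 2) / denom
    return v_m, omega_m
```

Both Lyapunov derivatives can be checked numerically: with the outputs above, r_e ṙ_e equals −λ_v r_e² cos² φ_e and V̇_2 equals −λ_ω ψ_e² for any error state away from the guard.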
IV. OBSTACLE AVOIDANCE ALGORITHM
BASED ON DVZ PRINCIPLE
In order to make our mobile assistive robot easy to deploy, we assume that we do not have prior knowledge of the environment. In other words, we do not know the shape of the room, the positions of the obstacles, etc. The obstacle avoidance algorithm should be able to detect obstacles using the robot's range sensors in real time. Therefore, conventional approaches that need a map of the environment are not applicable in our system. In addition, in order to analyze the stability of the shared controller, the obstacle avoidance controller should be Lyapunov stable.
For these reasons, a DVZ-principle-based real-time obstacle avoidance controller is developed in this paper. The DVZ principle was first proposed by Zapata [42]-[44]

Fig. 4. DVZ principle diagram.

Fig. 5. DVZ shape diagram.

to design the obstacle avoidance algorithm for a robotic manipulator. The main idea of this principle is to introduce a virtual zone around the robot, which helps the robot interact with the environment. The robot is commanded by the deformation of this virtual zone to stay out of areas of potential risk. Unlike other techniques with static zones, the DVZ is a dynamic zone: its shape and geometry depend on the mobile robot's translational and rotational velocities.
In this section, we first introduce the general statement and calculation of the DVZ principle in Sections IV-A and IV-B. Then, we implement this principle in the obstacle avoidance algorithm of our system, as described in Sections IV-C, D, and E.

A. General Statement for the DVZ Principle

The different DVZ definitions can be found in Fig. 4. The controlled DVZ is defined as Ξ_h, which is controlled by U, where U = [v_m, ω_m] is the velocity vector of the robot. We have the following function:

Ξ_h = Π(U).    (11)

When the mobile robot is moving, the sensors on the mobile robot start working. If there are no obstacles, the sensing boundary is defined as Γ. If there are obstacles, the sensing boundary is defined as Γ_d. The difference between these two boundaries is defined as I, the deformation of the sensing boundary:

I = Γ_d − Γ.    (12)

We define Ξ_I as a DVZ depending on the sensor boundary deformation I and controlled by U:

Ξ_I = Δ(I, U).    (13)

The deformation of the DVZ can be written as

Ξ = Ξ_I − Ξ_h = Δ(I, U) − Π(U).    (14)

The deformation function Ξ is the difference between the sensor-boundary-deformation term Ξ_I and the controlled DVZ Ξ_h. From (14), we know that if we want to make the robot avoid obstacles, we should decrease Ξ. When Ξ decreases, Ξ_I and Ξ_h get closer to each other and there are fewer obstacles within the DVZ around the robot. Therefore, if we want to use (14) to design the obstacle avoidance controller, we have to differentiate Ξ along its gradient direction to control v_m and ω_m for the mobile robot. With such a controller, Ξ can be minimized to avoid obstacles.

To calculate the control signal for the mobile robot to avoid obstacles, differentiate Ξ with respect to U:

Ξ̇ = ∂_U[Δ] U̇ − ∂_U[Π] U̇ + ∂_I[Δ] İ.    (15)

Here, ∂_U is the differentiation operator with respect to U and ∂_I is the differentiation operator with respect to I. Equation (15) is a functional expression and has to be parameterized to be differentiable. This parameterization process is to find a specific mathematical expression for the controlled DVZ.

B. Parameterized Controlled DVZ


The nonholonomic mobile robot has the constraint that the robot's translational velocity is always orthogonal to the driven wheel axis. Because of this constraint, an elliptical shape (shown in Fig. 5) is chosen as the shape of the controlled DVZ. This shape focuses mainly on the front of the robot and partly on both sides of the robot. When the robot turns, the direction of the DVZ changes according to the robot's rotational velocity. This change is suitable and practical for the robot to sense its environment, just as when we turn a vehicle, the focus is on the turning side rather than straight ahead.

We assume [x_D, y_D]^T is a point on the ellipse with axes a_x and b_y. Unlike the current literature on the DVZ, a polar coordinate system is applied to formulate the mathematical expression of the DVZ. By using polar coordinates, the calculation of the derivative can be largely simplified. The equations describing the shape of the controlled DVZ can be expressed as

x_D = a_x cos φ_D cos ψ − b_y sin φ_D sin ψ
y_D = a_x cos φ_D sin ψ + b_y sin φ_D cos ψ.    (16)

Here, a_x and b_y are the semiminor and semimajor axes of the ellipse, respectively. They are directly proportional to the robot's translational velocity: a_x = k_ax v_m and b_y = k_by v_m, where k_ax and k_by (k_ax, k_by > 0) are the proportional ratios. These two equations mean that when the robot moves faster, the ellipse


becomes larger. φ_D is the angle between the robot's heading direction and the line that connects the point on the ellipse to the origin, and ψ is the angle between the DVZ's b_y axis and the robot's heading direction. ψ is directly proportional to the rotational velocity: ψ = k_ψ ω_m, with k_ψ (k_ψ > 0) the proportional ratio. The origin of this ellipse is the mobile robot.

The distance between the robot and a point on the DVZ is defined as d_h. Because the DVZ equations are defined in polar coordinates, we can easily calculate d_h as

d_h(φ_D) = sqrt(x_D² + y_D²) = sqrt(a_x² cos² φ_D + b_y² sin² φ_D).    (17)
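Numerically, (17) is a one-liner. The gain values below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def dvz_radius(phi_D, v_m, k_ax=1.0, k_by=2.0):
    """Distance d_h(phi_D) from the robot to the controlled-DVZ boundary, per (17).

    The semi-axes grow with translational speed: a_x = k_ax * v_m, b_y = k_by * v_m,
    so the zone stretches out further in front of a faster robot.
    """
    a_x, b_y = k_ax * v_m, k_by * v_m
    return np.sqrt(a_x**2 * np.cos(phi_D)**2 + b_y**2 * np.sin(phi_D)**2)
```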
C. Deformation of the Controlled DVZ
With the mathematical expression of the controlled DVZ, we can precisely describe the DVZ deformation. An intrusion ratio is introduced in (18) to record the deformation of the DVZ around the mobile robot:

I_D = ∫ (d_h(φ_D) − d(φ_D)) / d_h(φ_D) dφ_D.    (18)

Here, d(φ_D) is the distance between the robot and the obstacle, which is given by the sonar sensors on the robot. The intrusion ratio quantifies the DVZ deformation: we can design the controller of the mobile robot to minimize I_D to ensure the robot avoids the obstacles.

In addition, we introduce an average angle θ̄_D to indicate the current orientation of the obstacles relative to the mobile robot:

θ̄_D = ∫ (d_h(φ_D) − d(φ_D)) φ_D dφ_D / ∫ (d_h(φ_D) − d(φ_D)) dφ_D + θ_m.    (19)

From (19), we can see that the average angle θ̄_D is the weighted average of φ_D. This value gives the direction where most of the obstacles are located. Using the controller of the mobile robot, we can command the robot to move away from this direction.
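With a ring of range readings, (18) and (19) reduce to discrete sums. The uniform-spacing assumption and the clipping of readings beyond the DVZ boundary are implementation choices for this sketch, not statements from the paper.

```python
import numpy as np

def intrusion_ratio(phi, d_h, d):
    """Discretized intrusion ratio I_D of (18).

    phi: sample bearings (e.g. the 16 sonar directions), uniformly spaced;
    d_h: DVZ radius at each bearing; d: measured obstacle distance.
    Readings beyond the DVZ boundary do not count as deformation.
    """
    dphi = phi[1] - phi[0]
    defect = np.clip(d_h - d, 0.0, None) / d_h
    return float(np.sum(defect) * dphi)

def mean_obstacle_angle(phi, d_h, d, theta_m):
    """Weighted average angle of (19): the direction where most deformation lies."""
    w = np.clip(d_h - d, 0.0, None)
    if w.sum() == 0.0:
        return theta_m                  # no deformation, no preferred direction
    return float(np.sum(w * phi) / np.sum(w)) + theta_m
```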

D. Obstacle Avoidance Controller

The following Lyapunov function candidate is chosen to design the controller:

V_D = V_{D_I} + V_{D_θ} = (1/2) I_D² + (1/2) (θ_m + θ̄_D)².    (20)

The derivative of V_D is

V̇_D = V̇_{D_I} + V̇_{D_θ} = I_D İ_D + (θ_m + θ̄_D)(ω_m + θ̄̇_D)    (21)

and V̇_{D_I} can be written as

V̇_{D_I} = I_D İ_D = I_D (J_{I_D}^{x_m} v_{x_m} + J_{I_D}^{y_m} v_{y_m} + J_{I_D}^{p_ob} v_{ob}).    (22)

Here, J_{I_D}^{x_m} and J_{I_D}^{y_m} are the Jacobian functions of the intrusion ratio's time derivative with respect to the X-axis and Y-axis velocities of the robot; we write v_{x_m} = ẋ_m and v_{y_m} = ẏ_m. J_{I_D}^{p_ob} is the Jacobian function of the intrusion ratio's time derivative with respect to the obstacle's velocity v_{ob}. Because the sample time of the sonar sensors is very short, the obstacles can be assumed to be still when the sonar sensors detect them, so v_{ob} = 0.

From (21) and (22), we can choose the translational velocity controller in each axis as

v_{x_m} = −k_{v_{x_m}} J_{I_D}^{x_m}    (23)

and

v_{y_m} = −k_{v_{y_m}} J_{I_D}^{y_m}.    (24)

Here, k_{v_{x_m}} and k_{v_{y_m}} are the gains for v_{x_m} and v_{y_m}, respectively; they control the amplitudes of v_{x_m} and v_{y_m}. The Jacobian functions J_{I_D}^{x_m} and J_{I_D}^{y_m} can be expressed as

J_{I_D}^{x_m} = ∂I_D/∂x_m = −∫ (x_m − x_{ob}(φ_D)) / (d_h(φ_D) d(φ_D)) dφ_D

J_{I_D}^{y_m} = ∂I_D/∂y_m = −∫ (y_m − y_{ob}(φ_D)) / (d_h(φ_D) d(φ_D)) dφ_D.    (25)

In (25), [x_m, y_m] is the position of the mobile robot and [x_{ob}(φ_D), y_{ob}(φ_D)] is the position of the obstacle at angle φ_D relative to the robot. The derivation of (25) is the same as in [43].

If we combine the translational velocity controllers in each axis, we obtain the translational velocity v_o of the obstacle avoidance controller:

v_o = v_{x_m} cos θ_m + v_{y_m} sin θ_m.    (26)

The rotational velocity ω_o of the obstacle avoidance controller is

ω_o = −k_ω (θ_m + θ̄_D) − θ̄̇_D.    (27)

Here, k_ω is the gain that controls the amplitude of ω_o.
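Putting (23)-(27) together, the avoidance command can be sketched as below. The gain names and values are assumptions, and the signs follow the reconstruction used here: descend the intrusion-ratio gradient and turn away from the mean obstacle direction.

```python
import numpy as np

def avoidance_controller(J_x, J_y, theta_m, theta_bar_D, theta_bar_D_dot,
                         k_vx=0.5, k_vy=0.5, k_w=1.0):
    """Obstacle avoidance command (v_o, omega_o) per (23)-(27).

    J_x, J_y: Jacobians of the intrusion ratio w.r.t. robot x/y position, (25).
    theta_bar_D, theta_bar_D_dot: mean obstacle angle (19) and its rate.
    """
    v_x = -k_vx * J_x                                    # (23)
    v_y = -k_vy * J_y                                    # (24)
    v_o = v_x * np.cos(theta_m) + v_y * np.sin(theta_m)  # (26)
    omega_o = -k_w * (theta_m + theta_bar_D) - theta_bar_D_dot  # (27)
    return v_o, omega_o
```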
V. STABILITY ANALYSIS AND DESIGN FOR THE ADAPTIVE SERVO-LEVEL SHARED CONTROLLER

This section addresses the adaptive servo-level shared control


stability analysis and design for the proposed mobile assistive
robot. The objective of this shared controller is to dynamically assign the control weight between the human and the autonomous control inputs according to the real-time situation to produce the final control signal. The calculation of the allocation weight is important to guarantee the stability and performance of the adaptive shared controller. Because our assistive robot system is nonlinear, conventional stability analysis approaches cannot be used. Convex analysis with set theory, which is well suited to describing the characteristics of nonlinear systems, can help us analyze the shared controller in our system.
In this section, first, we use convex analysis to analyze the
stability of the shared controller. Then, based on this analysis, a
calculation method for the allocation weight is given.


A. Stability Analysis of the Shared Controller


The shared controller can be formulated as

u_s = (1 − α_s) u_h + α_s u_r.    (28)

Here, u_s = [v_s, ω_s] is the shared controller's output for the mobile robot (v_s and ω_s are the translational and rotational velocities). u_h = [v_m, ω_m] and u_r = [v_o, ω_o] are the human and autonomous control inputs, respectively: u_h is the tracking controller of Section III and u_r is the obstacle avoidance controller of Section IV. α_s is the allocation weight with domain 0 < α_s < 1.

A generalized expression for the feasible human input set U_h can be defined as U_h = {u_h : V̇_h(u_h) < 0}. The human input u_h is one possible solution, so u_h ∈ U_h. Because in a real situation the stable range of the tracking controller is continuous, we can assume the set U_h is convex. The feasible autonomous control input set U_r can similarly be defined as U_r = {u_r : V̇_r(u_r) < 0}. The obstacle avoidance controller output u_r is one possible solution, so u_r ∈ U_r. Also, because in a real situation the stable range of the obstacle avoidance controller is continuous, we can assume the set U_r is convex. We can see that u_s is a point on the closed line segment passing through u_h and u_r. The set of points on this closed line segment can be defined as U_sl = {u_s : u_s = (1 − α_s)u_h + α_s u_r, u_h ∈ U_h, u_r ∈ U_r, 0 < α_s < 1}.
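The reasoning above suggests a simple numeric check: sample the closed segment between u_h and u_r and see which allocation weights land in the required feasible set(s). The predicate below is a toy stand-in for V̇_r(u) < 0; everything about it is an illustrative assumption.

```python
import numpy as np

def feasible_alpha_range(u_h, u_r, in_U_r, in_U_h=None, n=1001):
    """Sample u_s = (1 - a) u_h + a u_r for a in [0, 1] and return the
    interval of weights whose blended input lies in the required feasible
    set(s). For convex U_h and U_r the feasible weights form one interval."""
    u_h, u_r = np.asarray(u_h, float), np.asarray(u_r, float)
    alphas = np.linspace(0.0, 1.0, n)
    ok = np.array([in_U_r((1 - a) * u_h + a * u_r)
                   and (in_U_h is None or in_U_h((1 - a) * u_h + a * u_r))
                   for a in alphas])
    if not ok.any():
        return None   # feasible sets do not intersect: no stabilizing weight
    idx = np.flatnonzero(ok)
    return float(alphas[idx[0]]), float(alphas[idx[-1]])

# toy feasible set: a disc of radius 1 around u_r, a stand-in for U_r
in_disc = lambda u: np.linalg.norm(u - np.array([1.0, 0.0])) < 1.0
rng = feasible_alpha_range([-1.0, 0.0], [1.0, 0.0], in_disc)
```

Here the returned interval plays the role of (α_sl, α_su): any sampled weight inside it yields a blended input in the feasible set.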
There are four possible relationships between U_h and U_r in real situations of the developed assistive robot.
1) U_h ⊂ U_r. This is a stable situation. The feasible set of human inputs is a subset of the feasible set of autonomous control inputs: the human input not only makes the tracking error smaller but also avoids obstacles. The user acts perfectly and makes no mistakes. Therefore, we simply set λ_s = 0 to let the user control the system.
2) U_r ⊂ U_h. The feasible set of autonomous control inputs is a subset of the feasible set of human inputs. Some of the human inputs can achieve both goals, but others cannot. In a real application, this situation means there are many obstacles around the robot, and it is very hard for the user to find a control input that achieves both goals. Therefore, only by assigning an allocation weight λ_s can the shared control signal compensate for the user's deficiency and achieve both goals. From a control theory point of view, this amounts to finding a range of λ_s that stabilizes the system. The feasible set of u_s is U_s = U_r ∩ U_sl, which is the line segment lying in U_r. The allocation weight λ_s that can stabilize the shared controller should satisfy λ_sl < λ_s < 1, where λ_sl is the lower bound of λ_s. Any λ_s within this range is a feasible solution for the shared controller; otherwise, the shared controller cannot achieve both goals. Fig. 6 shows this situation.
3) U_r ∩ U_h ≠ ∅, U_r ⊄ U_h, and U_h ⊄ U_r. In this situation, the feasible set of human inputs and the feasible set of autonomous control inputs intersect. In a real application, this situation is similar to the second one, but more complicated. The feasible set of u_s is U_s = U_r ∩ U_h ∩ U_sl, which is the line segment lying in U_r ∩ U_h. The stable allocation weight λ_s should satisfy λ_sl < λ_s < λ_su, where λ_su is the upper bound of λ_s. Fig. 7 shows this situation.

Fig. 6. Diagram of the stability analysis for shared control: U_r ⊂ U_h.

Fig. 7. Diagram of the stability analysis for shared control: U_r ∩ U_h ≠ ∅.
4) U_r ∩ U_h = ∅. This is an unstable situation. The feasible set of human inputs and the feasible set of autonomous control inputs do not intersect, so no value of the allocation weight can achieve both goals (following the user and avoiding obstacles). This situation may occur when the user does not notice the danger or the obstacles. It is the most dangerous situation and may cause a collision. Therefore, we stop the movement of the robot (u_s = 0) immediately.
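The four cases above can be condensed into a decision rule. The sketch below is illustrative only: it models the convex feasible sets as closed 1-D intervals so that the containment tests become trivial, whereas the paper's sets live in the (v, ω) command space:

```python
def classify_case(Uh, Ur):
    """Classify the relationship between the feasible sets (Section V-A).
    Uh, Ur: convex feasible sets, modeled here as closed intervals (lo, hi)
    for a 1-D illustration. Returns (case_number, prescribed_action)."""
    (h_lo, h_hi), (r_lo, r_hi) = Uh, Ur
    if max(h_lo, r_lo) > min(h_hi, r_hi):     # Case 4: Uh and Ur disjoint
        return 4, "stop robot (u_s = 0)"
    if r_lo <= h_lo and h_hi <= r_hi:         # Case 1: Uh subset of Ur
        return 1, "lambda_s = 0 (user controls)"
    if h_lo <= r_lo and r_hi <= h_hi:         # Case 2: Ur subset of Uh
        return 2, "bisect for lambda_sl; stable range (lambda_sl, 1)"
    return 3, "bisect for lambda_sl and lambda_su"  # sets merely intersect
```

For example, `classify_case((0.2, 0.4), (0.0, 1.0))` returns case 1 (every human input is already obstacle-safe), while disjoint intervals return case 4 and the robot is stopped.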
B. Algorithm to Calculate the Allocation Weight

Based on the stability analysis of the adaptive shared controller, we develop an algorithm to calculate the allocation weight. From the stability analysis, we know that the upper and lower bounds are the key values for choosing the allocation weight. Because the feasible set of each control input is convex, there is only one point lying on both the boundary of the feasible set (U_r or U_h) and the closed line segment of u_s. Therefore, we can use a bisection method to find λ_su and λ_sl. To calculate λ_s, we set λ_s = (λ_su + λ_sl)/2. This choice places λ_s inside the stable range with some margin to guarantee stability.


Fig. 8. Experimental environment.

Fig. 9. User tracking experiment without obstacles.

To illustrate the whole idea of this algorithm, it is summarized as pseudocode in Algorithm 1.
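A minimal sketch consistent with this description (not the authors' Algorithm 1 itself): the feasibility predicates `in_Ur` and `in_Uh` are assumptions standing in for the stability conditions V_r(u) < 0 and V_h(u) < 0, and `tol`/`max_iter` are the tolerance and maximum-iteration knobs that must be tuned to meet the controller's update rate:

```python
def bisect_boundary(feasible, u_h, u_r, lam_in, lam_out, tol=1e-3, max_iter=20):
    """Bisect along u(lam) = (1 - lam)*u_h + lam*u_r for the boundary
    crossing between a feasible lam_in and an infeasible lam_out.
    Convexity of the feasible set guarantees a single crossing."""
    u = lambda lam: (1.0 - lam) * u_h + lam * u_r
    for _ in range(max_iter):
        mid = 0.5 * (lam_in + lam_out)
        if feasible(u(mid)):
            lam_in = mid       # keep the feasible side
        else:
            lam_out = mid      # shrink toward the boundary
        if abs(lam_in - lam_out) < tol:
            break
    return 0.5 * (lam_in + lam_out)

def allocation_weight(in_Ur, in_Uh, u_h, u_r, tol=1e-3, max_iter=20):
    """lambda_s = (lambda_su + lambda_sl) / 2, per Section V-B."""
    # lambda_sl: where the segment enters U_r (u_r at lam = 1 is feasible).
    lam_sl = bisect_boundary(in_Ur, u_h, u_r, 1.0, 0.0, tol, max_iter)
    # lambda_su: where the segment leaves U_h (u_h at lam = 0 is feasible);
    # if it never leaves (case 2), this converges to 1.
    lam_su = bisect_boundary(in_Uh, u_h, u_r, 0.0, 1.0, tol, max_iter)
    return 0.5 * (lam_sl + lam_su)
```

In a toy 1-D example with u_h = 0, u_r = 1, U_r = {u ≥ 0.6}, and U_h = {u ≤ 0.8}, the bounds come out near 0.6 and 0.8, so λ_s ≈ 0.7, the midpoint of the stable range.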
VI. EXPERIMENTAL RESULTS

The prototype of the mobile assistive robot and the corresponding algorithms were developed and designed in the previous sections. To verify these algorithms, experiments are carried out on our assistive robotic system. The experimental environment is a flat surface with walls, obstacles, etc., built from paper boxes (shown in Fig. 8).
We carry out the following experiments:
1) user tracking;
2) wall following;
3) door passing;
4) narrow corridor passing;
5) complex environment navigation.
The first experiment shows the result for the tracking controller alone: the robot follows the movement of the user. This experiment shows how much the user's control ability deteriorates when there are no obstacles. The other four experiments show how the shared controller helps the robot navigate in the real
Fig. 10. Wall following experiment: (a) human and robot trajectory; (b) λ_s in the experiment. (Timestamps are shown beside the trajectories; big-font numbers are the timestamps for the robot's trajectory, and small-font numbers are the timestamps for the human's trajectory.)

world. Experiments (2)–(4) are simple actions performed while the robot navigates. These simple actions can be combined to form more complex navigating actions; in other words, complex navigating actions can be decomposed into these simple actions. Therefore, we need to know the performance of the robot


Fig. 11. Narrow corridor passing experiment (corridor width: 880 mm): (a) human and robot trajectory; (b) λ_s in the experiment.

in these simple actions and make the corresponding adjustments to the algorithms. After these simple-action experiments, we carry out an experiment in a complex environment to test the overall performance of the robot.
A. User Tracking Experiment
Fig. 9 shows how the tracking controller helps the robot follow the user. The user moves the stylus to control the mobile robot along a roughly circular trajectory. The solid line is the robot's position. The dashed line is the position of the user's hand holding the Phantom stylus. Although the dashed line cannot represent the absolute position of the user, it still illustrates the position of the user to a certain extent. To simplify the exposition, we take the position of the user's hand as the position of the user. From the results, we can conclude that the tracking controller works well on the human input. From Fig. 9(a) and (d), the position error finally converges to zero and the robot moves to the desired destination. The spike in Fig. 9(b) is caused by the robot's orientation wrapping from −180° to 180°, which is actually the same orientation of the robot in the world coordinate frame.
B. Wall Following Experiment
In this experiment, the paper boxes form three walls that connect together, as shown in Fig. 10. When we start the experiment, the user holds the stylus of the Phantom Omni and controls the robot to move along the wall. We notice that the robot cannot get very close to the wall. This is because of the effect of the DVZ around the robot in the obstacle avoidance algorithm. When the robot gets too close to the wall, which is a dangerous action, this virtual zone expels the robot from the potential danger. When the robot goes straight and meets the wall on the right side, we can see more obvious evidence of the effectiveness of the DVZ and the shared controller. Here, the user intends to command the robot to move toward the wall. The DVZ gives the avoidance command to the shared controller. The shared controller calculates the allocation weight λ_s and gives the control command to the mobile robot. We can see that λ_s is around 0.7 at this point [shown in Fig. 10(b)]. The obstacle avoidance algorithm takes most of the control of the robot, and the robot turns to a safer direction immediately. When the robot meets the other wall, the same process happens: λ_s increases to help the robot move out of danger. The dynamic process of the change in λ_s can be found in Fig. 10(b). We may also notice that there are some flat parts in λ_s. This phenomenon is caused by the numerical accuracy of Algorithm 1. Since a bisection method is involved in this algorithm, we need to adjust the maximum number of iterations and the tolerance to meet the fixed update rate of 100 Hz.

Fig. 12. Narrow corridor passing experiment (corridor width: 680 mm): (a) human and robot trajectory; (b) λ_s in the experiment.


Fig. 13. Door passing experiment: (a) human and robot trajectory; (b) λ_s in the experiment.

This gives less numerical accuracy for Algorithm 1; therefore, some flat parts exist in λ_s. This effect can be eliminated by upgrading the CPU of the notebook, which will be done in our future work.
C. Narrow Corridor Passing Experiment
In this experiment, we perform two separate runs with different corridor widths: around 880 mm in the first run and around 680 mm in the second. The translational velocity of the robot is higher in the second run. In both runs, the robot safely reaches the destination. We also find an interesting phenomenon: comparing the two runs, when the robot moves faster in a narrower corridor, its trajectory is less smooth [see Figs. 11(a) and 12(a)]. The reason for this phenomenon is the change of the DVZ's size. The DVZ enlarges when the robot moves fast, so the robot interacts with the narrower corridor through a larger DVZ. This larger DVZ gives a higher value of the obstacle avoidance control command, and the robot turns faster with a higher rotational velocity. Therefore, the trajectory in the second run is less smooth. This is the same phenomenon as a person's trajectory not being straight when walking very fast in a narrow corridor. Because of this, the user should move slowly when passing through a narrow corridor to avoid an unsmooth trajectory of the mobile robot.

Fig. 14. Complex environment experiment: (a) human and robot trajectory; (b) λ_s in the experiment.
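This speed dependence can be illustrated with a toy model (the linear zone growth and all constants here are invented for illustration; the actual DVZ dynamics are defined in Section IV):

```python
def avoidance_command(v, d_wall, r0=0.3, k_r=0.8, k_w=2.0):
    """Toy DVZ illustration: the zone radius grows with speed, so at the
    same wall distance a faster robot sees a deeper intrusion and gets a
    stronger rotational avoidance command. All constants are invented."""
    r_dvz = r0 + k_r * v                 # zone radius enlarges with velocity
    intrusion = max(0.0, r_dvz - d_wall) # how far the wall intrudes the zone
    return k_w * intrusion               # rotational avoidance command

slow = avoidance_command(v=0.2, d_wall=0.4)  # small intrusion, gentle turn
fast = avoidance_command(v=0.8, d_wall=0.4)  # larger intrusion, sharper turn
```

At the same wall distance, the faster robot receives the larger turn command, which is why the 680-mm run produces the less smooth trajectory.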
D. Door Passing Experiment
In this experiment, the paper boxes form a door with a width of around 880 mm. The initial position of the robot is parallel to the door. The user commands the robot to turn right and pass through the door. We can see from Fig. 13 that the user turns the robot too late, so the robot may collide with the right side of the door. At this moment, λ_s increases and the robot turns to avoid the collision. Then, the user safely commands the robot to move through the door.
E. Complex Environment Navigation Experiment
In this experiment, the user and the robot navigate a more complex environment, which contains a door, an obstacle, several walls, and a narrow corridor. As we can see from the trajectory of the robot [see Fig. 14(a)] and the value of λ_s [see Fig. 14(b)], λ_s increases when the user is not able to command the robot in a safe direction. The shared controller helps the user pass the door, avoid the obstacle, follow the wall, and pass the narrow corridor. From the overall performance, we can conclude that the designed shared controller meets the design requirements of the system and performs well in a complex environment.
VII. CONCLUSION
This paper presents a prototype of a novel assistive robot for transporting heavy objects with an adaptive servo-level shared control, which has great potential for healthcare applications. The system consists of two parts: a mobile robot, MOBILEROBOTS' PeopleBot, and a haptic device, SensAble's Phantom Omni. The mobile robot works as the base of the system, carrying the haptic device and loads. To control the movement of the mobile robot using the haptic device, we developed a tracking controller for the mobile robot. To avoid the obstacles in the environment, an obstacle avoidance controller was designed based on the DVZ principle. An adaptive servo-level shared controller mixes these two controllers' outputs into a shared control output to command the robot. This shared control output can compensate for the user's control deficiency and respect the user's self-esteem at the same time. Convex analysis is applied to analyze the stability of this shared controller. The experiments show the effectiveness of the designed control algorithms. Future work will focus on applying the developed system to clinical cases and improving the system based on specific application requirements.
REFERENCES
[1] G. Lacey, "User involvement in the design and evaluation of a smart mobility aid," J. Rehabil. Res. Develop., vol. 37, no. 6, pp. 709–723, 2000.
[2] M. Kassler, "Robotics for health care: A review of the literature," Robotica, vol. 11, no. 6, pp. 495–516, 1993.
[3] J. Broekens, M. Heerink, and H. Rosendal, "Assistive social robots in elderly care: A review," Gerontechnology, vol. 8, no. 2, pp. 94–103, 2009.
[4] P. Flandorfer, "Population ageing and socially assistive robots for elderly persons: The importance of sociodemographic factors for user acceptance," Int. J. Population Res., vol. 2012, pp. 1–13, 2012.
[5] J. J. Wagner, M. Wickizer, H. F. M. Van der Loos, and L. J. Leifer, "User testing and design iteration of the ProVAR user interface," in Proc. IEEE Int. Workshop Robot Human Commun., 1999, pp. 18–22.
[6] H. Van der Loos, J. Hammel, D. Lees, D. Chang, and I. Perkash, "Field evaluation of a robot workstation for quadriplegic office workers," Eur. Rev. Biomed., vol. 5, pp. 317–319, 1990.
[7] M. Johnson, E. Guglielmelli, G. D. Lauro, C. Laschi, M. Carrozza, and P. Dario, "Giving-a-Hand system: The development of a task-specific robot appliance," Adv. Rehabil. Robot., vol. 306, pp. 127–141, 2004.
[8] H. H. Kwee, "Integrated control of MANUS manipulator and wheelchair enhanced by environmental docking," Robotica, vol. 16, no. 5, pp. 491–498, 1998.
[9] D.-J. Kim, Z. Wang, and A. Behal, "Motion segmentation and control design for UCF-MANUS: An intelligent assistive robotic manipulator," IEEE/ASME Trans. Mechatronics, vol. 17, no. 5, pp. 936–948, Oct. 2012.
[10] B. Graf, M. Hans, and R. D. Schraft, "Care-O-bot II: Development of a next generation robotic home assistant," Auton. Robots, vol. 16, no. 2, pp. 193–205, 2004.
[11] S. Levine, D. Bell, L. Jaros, R. Simpson, Y. Koren, and J. Borenstein, "The NavChair assistive wheelchair navigation system," IEEE Trans. Rehabil. Eng., vol. 7, no. 4, pp. 443–451, Dec. 1999.
[12] R. C. Simpson, D. Poirot, and F. Baxter, "The Hephaestus smart wheelchair system," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 10, no. 2, pp. 118–122, Jun. 2002.


[13] H. A. Yanco, "Wheelesley: A robotic wheelchair system: Indoor navigation and user interface," in Assistive Technology and Artificial Intelligence: Applications in Robotics, User Interfaces and Natural Language Processing. London, U.K.: Springer-Verlag, 1998, pp. 256–268.
[14] Z. Bien, M.-J. Chung, P.-H. Chang, D.-S. Kwon, D.-J. Kim, J.-S. Han, J.-H. Kim, D.-H. Kim, H.-S. Park, S.-H. Kang, K. Lee, and S.-C. Lim, "Integration of a rehabilitation robotic system (KARES II) with human-friendly man-machine interaction units," Auton. Robots, vol. 16, pp. 165–191, Mar. 2004.
[15] K. Wada, T. Shibata, T. Saito, and K. Tanie, "Effects of three months robot assisted activity to depression of elderly people who stay at a health service facility for the aged," in Proc. SICE Annu. Conf., 2004, pp. 2609–2614.
[16] A. J. Brisben, A. D. Lockerd, and C. Lathan, "Design evolution of an interactive robot for therapy," Telemedicine e-Health, vol. 10, no. 2, pp. 252–259, 2004.
[17] H. Wang and X. Liu, "Haptic interaction for mobile assistive robots," IEEE Trans. Instrum. Meas., vol. 60, no. 11, pp. 3501–3509, Nov. 2011.
[18] K. A. Tahboub, "Natural and manmade shared-control systems: An overview," in Proc. IEEE Int. Conf. Robot. Autom., 2001, vol. 3, pp. 2655–2660.
[19] J. Kofman, X. Wu, T. Luu, and S. Verma, "Teleoperation of a robot manipulator using a vision-based human-robot interface," IEEE Trans. Ind. Electron., vol. 52, no. 5, pp. 1206–1219, Oct. 2005.
[20] J. Borenstein and Y. Koren, "Teleautonomous guidance for mobile robots," IEEE Trans. Syst., Man, Cybern., vol. 20, no. 6, pp. 1437–1443, Nov./Dec. 1990.
[21] S. T. Venkataraman and S. Hayati, "Shared/traded control of telerobots under time delay," Comput. Electr. Eng., vol. 19, no. 6, pp. 481–494, 1993.
[22] G. Hirzinger, B. Brunner, J. Dietrich, and J. Heindl, "Sensor-based space robotics: ROTEX and its telerobotic features," IEEE Trans. Robot. Autom., vol. 9, no. 5, pp. 649–661, Oct. 1993.
[23] D. J. Bruemmer, D. A. Few, R. L. Boring, J. L. Marble, M. C. Walton, and C. W. Nielsen, "Shared understanding for collaborative control," IEEE Trans. Syst., Man, Cybern. A, Syst. Humans, vol. 35, no. 4, pp. 494–504, Jul. 2005.
[24] B. Khademian and K. Hashtrudi-Zaad, "Shared control architectures for haptic training: Performance and coupled stability analysis," Int. J. Robot. Res., vol. 30, no. 13, pp. 1627–1642, 2011.
[25] B. Khademian, J. Apkarian, and K. Hashtrudi-Zaad, "Assessment of environmental effects on collaborative haptic guidance," Presence: Teleoperators Virtual Environ., vol. 20, no. 3, pp. 191–206, 2011.
[26] B. Khademian and K. Hashtrudi-Zaad, "Dual-user teleoperation systems: New multilateral shared control architecture and kinesthetic performance measures," IEEE/ASME Trans. Mechatronics, vol. 17, no. 5, pp. 895–906, Oct. 2012.
[27] B. Khademian and K. Hashtrudi-Zaad, "Performance issues in collaborative haptic training," in Proc. IEEE Int. Conf. Robot. Autom., 2007, pp. 3257–3262.
[28] A. Hansson and M. Servin, "Semi-autonomous shared control of large-scale manipulator arms," Control Eng. Practice, vol. 18, no. 9, pp. 1069–1076, 2010.
[29] H. Yu, M. Spenko, and S. Dubowsky, "An adaptive shared control system for an intelligent mobility aid for the elderly," Auton. Robots, vol. 15, no. 1, pp. 53–66, 2003.
[30] C. Urdiales, J. M. Peula, M. Fdez-Carmona, C. Barrué, E. J. Pérez, I. Sánchez-Tato, J. C. Del Toro, F. Galluppi, U. Cortés, R. Annicchiarico, C. Caltagirone, and F. Sandoval, "A new multi-criteria optimization strategy for shared control in wheelchair assisted navigation," Auton. Robots, vol. 30, no. 2, pp. 179–197, 2011.
[31] A. Poncela, C. Urdiales, E. J. Pérez, and F. Sandoval, "A new efficiency-weighted strategy for continuous human/robot cooperation in navigation," IEEE Trans. Syst., Man, Cybern. A, Syst. Humans, vol. 39, no. 3, pp. 486–500, May 2009.
[32] Q. Li, W. Chen, and J. Wang, "Dynamic shared control for human-wheelchair cooperation," in Proc. IEEE Int. Conf. Robot. Autom., May 2011, pp. 4278–4283.
[33] W. Ren and R. Beard, "Satisficing approach to human-in-the-loop safeguarded control," in Proc. Amer. Control Conf., Jun. 2005, vol. 7, pp. 4985–4990.
[34] Performance PeopleBot Operations Manual, MOBILEROBOTS Company, Amherst, NH, USA, 2007.
[35] P. Morin and C. Samson, "Trajectory tracking for nonholonomic vehicles," in Robot Motion and Control: Recent Developments (LNCIS, vol. 335). Berlin, Germany: Springer, 2006, pp. 3–23.


[36] R. Fierro and F. L. Lewis, "Control of a nonholonomic mobile robot using neural networks," IEEE Trans. Neural Netw., vol. 9, no. 4, pp. 589–600, Jul. 1998.
[37] H. A. Hagras, "A hierarchical type-2 fuzzy logic control architecture for autonomous mobile robots," IEEE Trans. Fuzzy Syst., vol. 12, no. 4, pp. 524–539, Aug. 2004.
[38] T. Das and I. N. Kar, "Design and implementation of an adaptive fuzzy logic-based controller for wheeled mobile robots," IEEE Trans. Control Syst. Technol., vol. 14, no. 3, pp. 501–510, May 2006.
[39] Z. Jiang and H. Nijmeijer, "Tracking control of mobile robots: A case study in backstepping," Automatica, vol. 33, no. 7, pp. 1393–1399, 1997.
[40] J. Yang and J. Kim, "Sliding mode control for trajectory tracking of nonholonomic wheeled mobile robots," IEEE Trans. Robot. Autom., vol. 15, no. 3, pp. 578–587, Jun. 1999.
[41] P. Morin and C. Samson, "Practical stabilization of driftless systems on Lie groups: The transverse function approach," IEEE Trans. Autom. Control, vol. 48, no. 9, pp. 1496–1508, Sep. 2003.
[42] R. Zapata, P. Lepinay, and P. Thompson, "Reactive behaviors of fast mobile robots," J. Robot. Syst., vol. 11, no. 1, pp. 13–20, 1994.
[43] R. Zapata, A. Cacitti, and P. Lépinay, "DVZ-based collision avoidance control of non-holonomic mobile manipulators," J. Eur. Syst. Automat., vol. 38, no. 5, pp. 559–588, 2004.
[44] L. Lapierre, R. Zapata, and P. Lepinay, "Combined path-following and obstacle avoidance control of a wheeled robot," Int. J. Robot. Res., vol. 26, no. 4, pp. 361–375, 2007.

Huanran Wang (S'06) received the B.Sc. and M.Sc. degrees, with a major in electrical engineering and pattern recognition, from Harbin Engineering University, Harbin, China, in 2005 and 2008, respectively. He is currently working toward the Ph.D. degree in electrical engineering at Carleton University, Ottawa, ON, Canada.
His research interests include assistive robotics, haptics, and optimization.

Xiaoping P. Liu (SM'06) received the B.Sc. and M.Sc. degrees from Northern Jiaotong University, Beijing, China, in 1992 and 1995, respectively, and the Ph.D. degree from the University of Alberta, Edmonton, AB, Canada, in 2002.
He has been with the Department of Systems and Computer Engineering, Carleton University, Ottawa, ON, Canada, since July 2002, and he is currently a Professor and Canada Research Chair. His research interests include interactive networked systems and teleoperation, haptics, micromanipulation, robotics, intelligent systems, context-aware intelligent networks, and their applications to biomedical engineering. He has published more than 200 research articles. He serves as an Associate Editor for several journals including IEEE ACCESS, IEEE/ASME TRANSACTIONS ON MECHATRONICS, IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, Intelligent Service Robotics, International Journal of Robotics and Automation, Control and Intelligent Systems, and International Journal of Advanced Media and Communication. He is a licensed member of the Professional Engineers of Ontario (P.Eng.). He has served on the organization committees of numerous conferences, including as the General Chair of the 2008 IEEE International Workshop on Haptic Audio Visual Environments and their Applications and the General Chair of the 2005 IEEE International Conference on Mechatronics and Automation.
Dr. Liu received a 2007 Carleton Research Achievement Award, a 2006 Province of Ontario Early Researcher Award, a 2006 Carty Research Fellowship, the Best Conference Paper Award of the 2006 IEEE International Conference on Mechatronics and Automation, and a 2003 Province of Ontario Distinguished Researcher Award.
