
Journal of Intelligent and Robotic Systems 35: 171–191, 2002.

© 2002 Kluwer Academic Publishers. Printed in the Netherlands.

Development of a Sensor Fusion Strategy for Robotic Application Based on Geometric Optimization

G. C. NANDI
Indian Institute of Information Technology, Allahabad-211 002, India;
e-mail: gcnandi@yahoo.com; gcnandi@iiita.ac.in

DEBJANI MITRA
Electronics Engineering Department, Indian School of Mines, Dhanbad-826 004, India;
e-mail: debjani7@yahoo.com

(Received: 29 January 2001; in final form: 24 January 2002)


Abstract. Fusion of multi-sensor information is an important technology, which is growing exponentially due to its tremendous application potential in many areas. Effective fusion of data from sensors is very critical in increasing an intelligent system's capability to accomplish complex tasks. Appropriate fusion techniques need to be developed especially when a system requires redundant sensors to be used. The more redundancy there is in the sensors, the greater the computational complexity of controlling the system and the higher its intelligence level. This research presents a strategy developed for multiple sensor fusion, based on geometric optimization. Each sensor's uncertainty has been modeled using classical Lagrangian optimization techniques. However, the uniqueness and effectiveness of the present technique lies in the fact that, starting from the optimized value as an initial estimate, the accuracy of the sensory information has been further improved to within any predefined bounded range by developing two architectures – FFA (fission–fusion architecture) and FDD (fusion in the differential domain). Sufficient evidence and analysis have been provided in the paper to show its effectiveness in various applications.

Key words: uncertainty ellipsoid, sensor fusion, fission–fusion architecture, fusion in the differential
domain, multiple baseline stereo.

1. Introduction
Information fusion encompasses the theory, techniques, and tools conceived and
employed for a synergistic combination of information acquired from multiple
sources (like sensors, databases and even information gathered by humans) into
one representational format. The purpose of this synergy exploitation is to make
the resulting decision or action much better (qualitatively and/or quantitatively)
than would be possible by using the sources individually. Information fusion ex-
ists naturally as biological sensor fusion [18, 22] in the human and animal world
to achieve more precise assessment of the surrounding environment, for threat
identification and target recognition [31, 37]. Fusion of information and data from
multiple sensors [13] has a widespread application in a variety of intelligent and
highly automated systems [27].
In the military arena it is used for command and control in air warfare, avionics,
electronic warfare, ocean surveillance, remotely piloted vehicles, air-to-air and
surface-to-air defense, battlefield intelligence, target acquisition, strategic warning
and defense systems, detection, tracking and identification of targets and aircraft,
and similar operations [6, 12, 34]. Sensors like radar, electronic support measures,
infrared, IFF, electro-optic imagers, MTI radar, ground-based acoustic sensors, etc.,
as discussed in [35], are the ones for which fusion techniques are most commonly adopted.
Remote sensing systems using aerial photo mapping for identification and location
purposes, such as those developed in [7, 19] for monitoring agricultural and
natural resources, weather and natural disasters, also have to use extensive information
fusion. They mostly use imaging systems with multi-spectral sensors.
For mobile robots, which are extensively mounted with multi-sensor suites,
methods of integrating data from different sensors operating simultaneously
are needed for the robot's self-location, map making, path computation, motion planning
and motion execution [4, 24, 26].
Information fusion from multiple sensors is extremely advantageous for on-line,
condition-based maintenance and monitoring of complex mechanical equipment
like turbomachinery, helicopter gear-trains and other industrial manufacturing
equipment, as discussed in [14], by reducing cost and improving safety and reliability.
Here sensors like accelerometers, temperature and pressure gauges, and acoustic
and infrared devices are mostly used.
In some medical applications, as discussed in [17], data are fused extensively
from sensors like NMR (nuclear magnetic resonance) and acoustic imaging
devices to obtain improved diagnostic capability and reduce false diagnoses.
In most applications the information to be fused usually comes from multiple
sensors monitored over a common period of time or from a single sensor monitored
over an extended period of time.
To increase the capabilities of intelligent machines and systems, they have to
acquire, interpret and integrate information from a variety of sensors. Motion control
of intelligent robots performing inspection and manipulation tasks, complex
automated operations, obstacle avoidance and navigation in dynamic and unknown
environments is all based on feedback from the sensors [5, 15, 32] – both external
and internal, like visual, tactile, force/torque, etc. The sensors provide the robotic
system relevant information regarding features of interest in the environment
for intelligent interaction and operation in unstructured environments, without
the help of a human operator. Effective fusion of data from the sensors is thus very
critical in increasing the system's capability to accomplish complex tasks.
Fusion of multi-sensor data provides significant advantages over single source
data, as we are able to obtain information more accurately concerning features
that are too difficult or impossible to know with individual sensors [21]. Primar-
ily, statistical advantages [9, 11] are gained through fusing the redundancy and
complementarity in the information. Several examples of different applications, al-
gorithms and architectures developed due to these advantages have been presented
in [1, 2]. Complementary information from multiple sensors allows the perception of
features in the environment that are impossible to perceive using just the
information from a single sensor. Redundant information is provided by a group
of sensors when each sensor perceives, possibly with a different fidelity, the same
features in the environment.
The conventional approach in the use of redundant sensors, especially in the
area of robotic applications, is to select the one piece of sensory information that looks
more appropriate for the situation than the others. For example, the joint sensors of a
robot manipulator may be used to map between Cartesian and joint space and also
to compute the position of the elbow. A redundant sensor such as a vision camera
is required to be mounted on the robot gripper to supply the same information for
many precision manipulations, like robot-assisted laser surgery, manipulating
objects in the space shuttle cargo bay, etc. [16, 30, 40].
Fusion of redundant information can reduce overall uncertainty and thus in-
crease the accuracy with which the features are perceived by the robotic system.
Also it increases reliability in case of sensor error or failure.
Such fusion of sensory readings as suggested in [20] can either be at low level
(used for direct integration of sensory data resulting in parameter and state esti-
mates) or at high level (used for indirect integration of sensory data in hierarchi-
cal architectures, through command arbitration and integration of control signals
suggested by different modules).
The inherent complexity in fusion arises due to the nonlinearity between the
low-level sensory data from specific sensors and the high-level sensory information
to be obtained by processing the sensory data. This comes from both the inherent
structural nonlinearity and the computational nonlinearity. When sensors contribute
only part of the desired information, the nonlinear formulation can be generalized to fuse
information from all the sensors. The following section focuses on this aspect.

2. Development of Generalized Fusion Approach Based on Geometric Optimization
To date, a number of architectures have been developed for sensor fusion.
Some architectures are specific, some are quite general. Too much generalization
would cost too much complexity, which may not be justified. Information fusion,
and the techniques developed for optimal information processing in distributed multi-sensor
environments through intelligent integration of the multi-sensor data, have
gained popularity over the past decade [3, 23, 25, 36]. In [8] Dasarathy interestingly
explained the relevance of two terms from nuclear physics, "fusion"
and "fission", in the context of sensory information processing. According to him,
the information generated in the environment can be thought of as undergoing
decomposition into its components by the sensors: that is, sensor-caused fission.
This information fragmentation (fission) has to be appropriately counteracted by
a sensor or information fusion process. This supports the postulate that fusion is
a fission-inversion process. This idea seemed interesting for developing new
fusion strategies [28, 29] and requires further attention.
In the present approach, first a fusion-based sensor integration architecture has
been developed, using some of the mathematical tools illustrated in "Advanced
Robotics: Redundancy and Optimization" by Nakamura. Each sensor's uncertainty
has been represented by an uncertainty ellipsoid. Through this geometry of uncertainty,
the nonlinearity has been treated in a fairly generalized fashion so as to include
both structural as well as computational nonlinearity. In the present investigation,
Gaussian noise has been added only to the raw (low-level) sensory data, which
simplifies the mathematical formulation and at the same time allows more realistic
non-Gaussian disturbances to be induced in the higher-level sensory information.
The sensory information from a vision camera and an optical encoder
has been fused so as to minimize the volume of the uncertainty ellipsoids. This
fusion process, being theoretically optimal (since it is based on the Lagrangian
optimization method), gives a minimized uncertainty. Next, a new fission–fusion based
sensor integration architecture with feedback has been developed to further reduce
the already minimized uncertainty to any desired pre-assigned value. This
architecture fuses information after making a consensus between direct fusion and
fusion of individual sensory information. The latter provides better information
especially when the nonlinear sensing structures of the sensor models being fused
and the covariance matrices of the additive uncertainty incorporated in their data
are widely different (as in our fusion results using a joint angle sensor and a vision
sensor on a robot manipulator). Lastly, we use feedback from the higher-level
fused information and process it in the differential domain by the geometric
optimization fusion method to eliminate the uncertainty that still exists in our
fused information due to inherent errors in the sensors.
The major objectives of this paper are to
• determine the propagation of the low level uncertainty from sensory data to
the high level information associated with it,
• construct the uncertainty ellipsoid for each sensor model and fuse the uncer-
tainty ellipsoids in the geometrical domain using Lagrangian Optimization
Technique and determine the optimal weightage parameters corresponding to
the minimized volume of the uncertainty ellipsoid,
• develop a fission–fusion architecture and fusion in the differential domain
(FDD) for further minimizing the variance in the high level sensory infor-
mation.

2.1. PROPAGATION OF UNCERTAINTY


Each sensory measurement normally involves many sets of parameters representing
the global pose, the object features in both model and transformed space, and
also specific sensory features [38]. There are many different methods for determining
the transformation from sensor coordinates to model coordinates, and the error
associated with that computation will clearly depend on the specific method.
Here we choose a fairly generalized scheme and derive specific error bounds on
the model transformation for that scheme.

We are given a set of possible poses of the sensed data, each one consisting of a set of
triples $(p_i, \hat{n}_i, f_i)$, where $p_i$ is the vector representing the sensed position, $\hat{n}_i$ is the
vector representing the sensed normal, and $f_i$ is the face assigned to this sensed
datum for that particular pose. We want to determine the actual transformation from
model coordinates to sensed coordinates corresponding to the pose.
The transformations have been computed for two different types of sensors:
• Sensor 1: Joint Position Sensor,
• Sensor 2: Camera Model Sensor.

2.2. UNCERTAINTY ELLIPSOID OF SENSORY INFORMATION


Any information processing system in general can be described by a set of pa-
rameters. Each parameter is usually measured by single or multiple sensors or
estimated by some computer programs that use these sensory measurements. The
resulting parameter values could possibly be widely varying, depending mainly
on the nature of the sensing models. Hence, one of the obvious goals would be
to determine the parameter representing the information, $X_i \in \mathbb{R}^n$, from a set of
sensory observational data, $D_i \in \mathbb{R}^{m_i}$, assuming that $X_i$ and $D_i$ are related through
a known nonlinear vector function,
$$F_i(X_i, D_i) = 0. \qquad (1)$$

Here $i = 1, \dots, N$, $N$ is the number of sensor units, $m_i$ is the number of independent
measurements, and $n$ is the dimension of the information. (1) may be used to
define the mapping
$$X_i = f_i(D_i) \quad \text{or} \quad D_i = g_i(X_i). \qquad (2)$$
Let the disturbance or uncertainty included in the sensory data be additive and be
represented by
$$D_i = \bar{D}_i + \Delta D_i. \qquad (3)$$
Here $\bar{D}_i, \Delta D_i \in \mathbb{R}^{m_i}$ are the undisturbed low-level data and the disturbance,
respectively. Assuming a Gaussian disturbance for $\Delta D_i$, we get
$$E[\Delta D_i] = 0. \qquad (4)$$
The covariance matrix for the $i$th sensor is
$$V[\Delta D_i] = Q_i = \mathrm{diag}\big(\sigma_{ji}^2\big) \in \mathbb{R}^{m_i \times m_i}, \quad j = 1, \dots, m_i. \qquad (5)$$


From (2) and (3),
$$X_i = f_i(\bar{D}_i + \Delta D_i) \approx f_i(\bar{D}_i) + J_i(\bar{D}_i)\,\Delta D_i, \qquad (6)$$
where $J_i(\bar{D}_i) \in \mathbb{R}^{n \times m_i}$ is the Jacobian matrix of $f_i$ with respect to $D_i$.

When all the sensors sense the same vector $X_i$, its mean $\bar{X}_i$ and covariance
matrix $V[X_i]$ can be derived using Equations (4) and (6) as
$$E[X_i] = \bar{X}_i = f_i(\bar{D}_i), \qquad (7)$$
$$V[X_i] = E\big[(X_i - \bar{X}_i)(X_i - \bar{X}_i)^T\big] = E\big[J_i \Delta D_i \Delta D_i^T J_i^T\big] = J_i Q_i J_i^T. \qquad (8)$$

(7) means that, if we repeat the measurement an infinitely large number of times and
compute the $X_i$'s, their average will converge to the true value of $X_i$. This is a
natural result of neglecting the global deterministic calibration errors, which can
be identified and compensated beforehand by careful calibration. The noise that is
considered in this analysis is assumed to be local and stochastic. Although both are
sources of uncertainty, they should be treated separately.
(8) shows that the covariance matrix of $X_i$ is no longer diagonal, since the
Jacobian matrix is not diagonal in general. This implies that the correlation of the
$X_{ij}$ ($j = 1, \dots, n$), i.e., the $j$th elements of $X_i$, is included in the model although
the $\Delta D_{ij}$ ($j = 1, \dots, m_i$) are assumed to be uncorrelated.

It is to be noted that for a full-rank $J_i$, the resultant matrix of (8) is positive
definite, since $Q_i$ is positive definite from Equation (5). Now $J_i Q_i J_i^T$ being a
symmetric positive definite matrix, its singular value decomposition is given by
$J_i Q_i J_i^T = U_i A_i U_i^T$, where
$$U_i = (e_{i1}, e_{i2}, \dots, e_{in}) \in \mathbb{R}^{n \times n}, \qquad
e_{ij}^T e_{ik} = \begin{cases} 1 & \text{for } j = k, \\ 0 & \text{for } j \neq k, \end{cases} \qquad (9)$$
$$A_i = \mathrm{diag}(a_{i1}, a_{i2}, \dots, a_{in}), \qquad a_{i1} \geq a_{i2} \geq \cdots \geq a_{in} \geq 0.$$
Therefore, $\sqrt{a_{ij}}$ represents the uncertainty of $X_i$ in the direction of the unit vector $e_{ij}$.
If we check the scalar variance in all the directions, the collection of the vectors
whose directions are given by these unit vectors and whose magnitudes are the corresponding
uncertainties forms an ellipsoid, with the $e_{ij}$ as the directions of the principal axes
and $2\sqrt{a_{ij}}$ as their lengths. This ellipsoid is called the uncertainty ellipsoid. Here $e_{i1}$
and $\sqrt{a_{i1}}$ correspond to the most uncertain direction, and $e_{in}$ and $\sqrt{a_{in}}$ correspond
to the least uncertain direction. In the next section a strategy is developed to
fuse the different uncertainty ellipsoids with a view to minimizing the overall uncertainty.
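The construction of (8) and (9) can be illustrated by the following minimal NumPy sketch. It propagates a data covariance through a sensor Jacobian and extracts the ellipsoid axes by singular value decomposition; the two-link forward-kinematics Jacobian, link lengths and joint angles used here are assumptions made purely for illustration, while the diagonal $Q$ follows the joint-sensor values quoted in Section 3.

```python
# Sketch of Eqs. (8)-(9): propagate the low-level covariance Q_i through the
# sensor Jacobian and take the SVD of J Q J^T to obtain the uncertainty ellipsoid.
import numpy as np

def uncertainty_ellipsoid(J, Q):
    """Return principal directions e_ij (columns) and semi-axis lengths sqrt(a_ij)."""
    V = J @ Q @ J.T                     # Eq. (8): V[X_i] = J_i Q_i J_i^T
    U, a, _ = np.linalg.svd(V)          # Eq. (9): V = U_i A_i U_i^T, a sorted descending
    return U, np.sqrt(a)

if __name__ == "__main__":
    l1, l2, th1, th2 = 1.0, 0.8, 0.3, 0.7          # assumed link lengths / joint angles
    J = np.array([[-l1*np.sin(th1) - l2*np.sin(th1+th2), -l2*np.sin(th1+th2)],
                  [ l1*np.cos(th1) + l2*np.cos(th1+th2),  l2*np.cos(th1+th2)]])
    Q = np.diag([0.0068, 0.0049])                   # joint-angle noise variances (Section 3)
    E, axes = uncertainty_ellipsoid(J, Q)
    print("principal directions:\n", E, "\nsemi-axis lengths:", axes)
```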

2.3. MINIMIZING UNCERTAINTY BY GEOMETRIC FUSION


Given a set of uncertainty ellipsoids associated with each sensor as determined
from (9), the problem is to assign weightage parameters $W_i$ to each sensory
system so as to geometrically minimize the volume of the fused uncertainty
ellipsoid.
Hence the fused information $X_f$ will be the linear combination
$$X_f = \sum_{i=1}^{N} W_i X_i, \qquad W_i \in \mathbb{R}^{n \times n}. \qquad (10)$$
The mean of the fused information will be
$$E[X_f] = \sum_{i=1}^{N} W_i E[X_i] = \sum_{i=1}^{N} W_i \bar{X}_i. \qquad (11)$$
The global calibration errors having been assumed to be compensated, $\bar{X}_i = \bar{X}_f$
for all $i$, where $\bar{X}_f$ is the true value of $X_f$, so that
$$E[X_f] = \bar{X}_f. \qquad (12)$$

We have the constraint
$$\sum_{i=1}^{N} W_i = I_n, \qquad \text{where } I_n \in \mathbb{R}^{n \times n} \text{ is the identity matrix.} \qquad (13)$$
Using $\bar{X}_i = \bar{X}_f$ and the earlier equations, the covariance matrix of $X_f$ is given by
$$V[X_f] = E\left[\left(\sum_{i=1}^{N} W_i J_i \Delta D_i\right)\left(\sum_{i=1}^{N} W_i J_i \Delta D_i\right)^T\right]
= \sum_{i=1}^{N} W_i J_i Q_i J_i^T W_i^T = W_f Q_f W_f^T \in \mathbb{R}^{n \times n},$$
where $W_f = (W_1\ W_2\ \cdots\ W_N) \in \mathbb{R}^{n \times Nn}$ and
$$Q_f = \begin{pmatrix} J_1 Q_1 J_1^T & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & J_N Q_N J_N^T \end{pmatrix} \in \mathbb{R}^{Nn \times Nn}. \qquad (14)$$
The shape and size of the uncertainty ellipsoid of the fused information thus depend
upon the choice of the weightage parameters. The singular value decomposition of $V[X_f]$ is
$$V[X_f] = W_f Q_f W_f^T = U_f A_f U_f^T,$$
$$U_f = (e_{f1}, \dots, e_{fn}) \in \mathbb{R}^{n \times n}, \quad e_{fj} \in \mathbb{R}^n, \qquad
A_f = \mathrm{diag}(a_{f1}, \dots, a_{fn}), \quad a_{f1} \geq \cdots \geq a_{fn} > 0. \qquad (15)$$
Here $2\sqrt{a_{fk}}$ gives the length of the $k$th longest principal axis of the uncertainty
ellipsoid of the fused information $X_f$, and $e_{fk}$ represents its direction. The geometric
volume of this ellipsoid with $2\sqrt{a_{fk}}$ as the axis lengths is
$$\text{Volume} = \frac{\pi^{n/2}}{\Gamma(1 + n/2)} \left(\prod_{k=1}^{n} a_{fk}\right)^{1/2}, \qquad (16)$$
where $\Gamma$ is the gamma function.


The determinant of a matrix can be computed as the product of its singular
values:
$$\det\big(W_f Q_f W_f^T\big) = \det\big(U_f A_f U_f^T\big) = \prod_{k=1}^{n} a_{fk}, \qquad (17)$$
$$\text{Volume} = \frac{\pi^{n/2}}{\Gamma(1 + n/2)} \sqrt{\det\big(W_f Q_f W_f^T\big)}. \qquad (18)$$
The volume of the fused uncertainty ellipsoid can therefore be minimized by minimizing
$\det(W_f Q_f W_f^T)$ subject to the constraint (13).
Solving this using the method of geometric optimization, the weightage
parameters for the geometrically optimized fusion are derived as
$$W_i = \left(\sum_{j=1}^{N} \big(J_j Q_j J_j^T\big)^{-1}\right)^{-1} \big(J_i Q_i J_i^T\big)^{-1}. \qquad (19)$$
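As a sketch of how (13), (14), (18) and (19) translate into computation, the fragment below forms the optimal weightage matrices from the individual $J_i Q_i J_i^T$ blocks and reports the fused covariance and ellipsoid volume. The two example covariance blocks are assumed for illustration only.

```python
# Sketch of the geometric fusion: W_i from Eq. (19), fused covariance from
# Eq. (14), and the ellipsoid volume of Eq. (18).
import numpy as np
from math import pi, gamma

def geometric_fusion(cov_blocks):
    """cov_blocks: list of the n x n matrices J_i Q_i J_i^T, one per sensor."""
    info = [np.linalg.inv(C) for C in cov_blocks]          # information matrices
    total_inv = np.linalg.inv(sum(info))
    W = [total_inv @ I for I in info]                      # Eq. (19); sum(W) = I_n (Eq. 13)
    V_f = sum(Wi @ Ci @ Wi.T for Wi, Ci in zip(W, cov_blocks))   # Eq. (14)
    return W, V_f

def ellipsoid_volume(V_f):
    n = V_f.shape[0]                                       # Eq. (18)
    return pi**(n / 2) / gamma(1 + n / 2) * np.sqrt(np.linalg.det(V_f))

if __name__ == "__main__":
    C1 = np.array([[4e-3, 1e-3], [1e-3, 3e-3]])            # assumed J_1 Q_1 J_1^T
    C2 = np.array([[2e-3, -5e-4], [-5e-4, 5e-3]])          # assumed J_2 Q_2 J_2^T
    W, V_f = geometric_fusion([C1, C2])
    print("fused covariance:\n", V_f)
    print("fused uncertainty-ellipse area:", ellipsoid_volume(V_f))
```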

3. Geometric Fusion of Camera Model and Joint Sensor Model


Here we consider a scenario where a robot hand is equipped with a vision
camera to monitor its mapping with respect to an object placed in the Cartesian space.
For the vision sensor, it is common practice to choose the center of the image
as the camera center, and invariably the latter may be off by up to several pixels
for most cameras. This, along with other factors, causes uncertainty in the image
position relative to the camera center, and this uncertainty propagates to the corresponding
Cartesian-space information acquired by it. For some specialized jobs, like
robotized surgery, this inaccuracy will not be acceptable.
For a particular arm configuration, the inverse kinematics problem usually has
several possible solutions. Even though an appropriate solution is selected through
suitable techniques, it will still incorporate uncertainty or error due to the
uncertainty in the sensory information specifying the desired end-effector position.
Even otherwise, the joint angles, being measured data, will be inherently inaccurate.

Thus any vision-based autonomous task such as placement, manipulation, motion
planning, path planning, obstacle avoidance, etc., can be approached as the
problem of interpreting position information from two sensor models giving information
based on noisy sensory data. For this interpretation, the fusion strategies
developed in the previous section have been applied in the following manner.
For a 2-degree-of-freedom planar manipulator (the extension to a 3-D model is
straightforward), the mapping between the sensory data and the Cartesian position
can be expressed as
$$X = l_1 \cos\theta_1 + l_2 \cos(\theta_1 + \theta_2), \qquad Y = l_1 \sin\theta_1 + l_2 \sin(\theta_1 + \theta_2), \qquad (20)$$
and this sensor has been treated as sensor 1.
$$x = \frac{\lambda\big[(X - X_0)\cos\theta + (Y - Y_0)\sin\theta - r_1\big]}
{-(X - X_0)\sin\theta\sin\alpha + (Y - Y_0)\cos\theta\sin\alpha - (Z - Z_0)\cos\alpha + r_3 + \lambda}, \qquad (21)$$
$$y = \frac{\lambda\big[-(X - X_0)\sin\theta\cos\alpha + (Y - Y_0)\cos\theta\cos\alpha + (Z - Z_0)\sin\alpha - r_2\big]}
{-(X - X_0)\sin\theta\sin\alpha + (Y - Y_0)\cos\theta\sin\alpha - (Z - Z_0)\cos\alpha + r_3 + \lambda}. \qquad (22)$$
The general camera model [10], defined by (21) and (22), has been treated as
sensor 2. Here $(x, y)$ are the image-plane coordinates, $(X, Y, Z)$ are the Cartesian
world coordinates, and the remaining symbols are the camera parameters (focal length $\lambda$,
pan and tilt angles $\theta$ and $\alpha$, and position/offset terms) of the model in [10].
Inaccuracy or disturbances were modeled as
$$\theta_{1,\mathrm{meas}} = \theta_{1,\mathrm{act}} + \Delta\theta_1, \qquad \theta_{2,\mathrm{meas}} = \theta_{2,\mathrm{act}} + \Delta\theta_2 \quad \text{for sensor 1, and}$$
$$x_{\mathrm{meas}} = x_{\mathrm{act}} + \Delta x, \qquad y_{\mathrm{meas}} = y_{\mathrm{act}} + \Delta y \quad \text{for sensor 2.}$$
They were simulated through random number generators limiting the relative error
percentage to a specified limit, and these were used to obtain the covariance matrices for the
two sensors from 100 such generated errors.
The Jacobian matrices were computed from (20)–(22), and using (8) the covariance
matrices of the sensory information from sensor 1 and sensor 2 were
obtained. Next, (13), (14) and (19) were used to fuse the uncertainty ellipses of
these two sensors, to derive the weightage matrices and to obtain the covariance
matrix of the fused information.
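A possible simulation of this procedure is sketched below: bounded relative errors are drawn, the sample covariance is estimated from 100 such draws as described above, propagated through the joint-sensor Jacobian via (8), and then fused with a camera-sensor block using the weights of (19). All numerical values (link lengths, joint angles, error bound, and the camera-sensor block) are illustrative assumptions, not the values used in the reported experiments.

```python
# Sketch of the Section 3 procedure for sensor 1 (joint sensor): simulate
# bounded relative errors, estimate Q_1 from 100 samples, propagate it with
# the Jacobian of Eq. (20), and fuse with an assumed camera-sensor block.
import numpy as np

rng = np.random.default_rng(0)
l1, l2 = 1.0, 0.8                          # assumed link lengths
th = np.array([0.3, 0.7])                  # assumed "actual" joint angles

rel_err = 0.01                             # 1% relative-error bound (assumption)
samples = th * (1.0 + rng.uniform(-rel_err, rel_err, size=(100, 2)))
Q1 = np.cov(samples - th, rowvar=False)    # covariance of the joint-angle noise

J1 = np.array([[-l1*np.sin(th[0]) - l2*np.sin(th.sum()), -l2*np.sin(th.sum())],
               [ l1*np.cos(th[0]) + l2*np.cos(th.sum()),  l2*np.cos(th.sum())]])
C1 = J1 @ Q1 @ J1.T                        # Eq. (8) for sensor 1

# For sensor 2 (camera model, Eqs. (21)-(22)) the same recipe applies; here an
# illustrative propagated covariance block is simply assumed.
C2 = np.diag([2.0e-4, 2.0e-4])

info = [np.linalg.inv(C) for C in (C1, C2)]
W = [np.linalg.inv(sum(info)) @ I for I in info]               # Eq. (19)
V_fused = sum(Wi @ Ci @ Wi.T for Wi, Ci in zip(W, (C1, C2)))   # Eq. (14)
print("trace before fusion:", np.trace(C1), np.trace(C2),
      "after fusion:", np.trace(V_fused))
```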
During fusion, as we had optimized (minimized) the area of the fused uncertainty
ellipse, there remains an absolute finite error even after fusion.
Figure 1 shows how, for five arbitrary end-effector locations, this absolute error
varies with the different net percentage errors introduced in the individual sensory
data.
Figure 1.

Figure 2.

Figure 3.

Figure 4.

Figure 5.

In the next step, the same information was fused after considering the individual
dimensions separately. The absolute error was seen to decrease substantially when
fusion was done after separating the sensory information at the individual sensory
levels (fission–fusion), as indicated in Figure 2. For multi-dimensional information,
different dimensions of the information are affected differently by the
uncertainty propagation. This signifies the possibility of better fusion results
by considering each dimension of the information separately. Figure 3 shows that
by a proper variation of the additive noise in the differential domain we are able to
minimize the absolute error almost to zero by repeated fusion in this domain for a
certain number of iterations. Details of the underlying strategy are discussed
in the next section.
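One plausible reading of this per-dimension (fission–fusion) step is sketched below: instead of a single matrix-weighted fusion of the full two-dimensional information, each coordinate is fused on its own with scalar inverse-variance weights. The covariance blocks and estimates are assumed, and the independence of the two sensors' errors is an assumption of the sketch.

```python
# Hedged sketch of a per-dimension (fission-fusion) variant: each coordinate is
# fused separately with scalar weights; sensor errors assumed independent.
import numpy as np

def fuse_per_dimension(cov_blocks, estimates):
    """cov_blocks: list of n x n covariances; estimates: list of n-vectors."""
    n = estimates[0].shape[0]
    fused = np.zeros(n)
    fused_var = np.zeros(n)
    for k in range(n):                                     # treat each dimension alone
        w = np.array([1.0 / C[k, k] for C in cov_blocks])
        w /= w.sum()                                       # scalar weights summing to 1
        fused[k] = sum(wi * x[k] for wi, x in zip(w, estimates))
        fused_var[k] = sum(wi**2 * C[k, k] for wi, C in zip(w, cov_blocks))
    return fused, fused_var

if __name__ == "__main__":
    C1 = np.array([[4e-3, 1e-3], [1e-3, 3e-3]])            # assumed sensor-1 block
    C2 = np.array([[2e-3, -5e-4], [-5e-4, 5e-3]])          # assumed sensor-2 block
    x1, x2 = np.array([1.52, 0.98]), np.array([1.50, 1.01])
    print(fuse_per_dimension([C1, C2], [x1, x2]))
```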
Figure 4 shows the plot of the trace of the covariance matrix of the position in-
formation obtained from the camera vision sensor for different values of Gaussian
error in the sensory data whose covariance matrix was Q = diag(0.00010968,
0.00010968).
Figure 5 represents the plot of trace of covariance matrix of position information
from the joint sensor for different sets of joint angles whose covariance matrix was
Q = diag(0.0068, 0.0049).

Figure 6.

Figure 7.

Figure 8.

These plots clearly indicate the strong dependence of the fusion on the location
in the workspace and on the observational measurements of the sensory data.
Hence a particular workspace with twelve arbitrary points, as shown in Figure 6,
was chosen for analyzing some more specific results.
Figure 7 shows the trace of the covariance matrix of the position information
for sensor 1, sensor 2 and the fused information. The fused information is seen
to have a smaller variance for all the 12 location points. Through singular value
decomposition of all these information covariance matrices, the uncertainty ellipses
were obtained both in magnitude and direction. Figure 8 shows the area of these
ellipses for sensor 1, sensor 2 and the fused information. This evidently shows that
the total uncertainty of the fused information reduces at each point. For a given
system of sensors, the amount of reduction would mainly depend on the accuracy
of the developed noise model of the low-level data. The results, however, are very
significant for precise positioning and similar applications.

4. Proposition of FDD (Fusion in the Differential Domain)


In most multisensor based robotic systems, information acquisition from the en-
vironment for some specific task performance is usually conducted in more than
one phase. In the first phase, “macro” information is acquired by detecting the
environmental scene from far away, and a decision is made whether or not to acquire
more information. If more information is required, the system “zooms” to obtain
“micro” information, taking a closer look at the scene of interest. If still more
information is desired, the system proceeds to the next closer stage and so on.
Obviously different types of sensors are used in each stage, and the ability of the
sensor models to transform and manipulate the probabilistic uncertainties of the
environment normally improves as the phases get closer and closer.
Motivated by this idea, we propose a technique of fusion in the differential
domain (FDD) for further reducing the uncertainty that remains in the sensory
information even after adopting the fusion methodology described in Section 2. In
this approach, the absence of dynamic uncertainties in the differential domain has
been assumed since fine manipulations of the sensory data are expected to give less
erroneous information. Let $X_{df}$ be the residual consensus error or uncertainty that
remains in our sensory information after geometric fusion through the weightage
parameters derived in (19). If we redefine the original error function in the neighborhood
of the fused optimal weightage parameters, $\sum_{i=1}^{N} W_i = I_n$, it should be
possible to find another $X_{df}$ which monotonically increases and/or decreases
around the error function. It is quite logical to expect that the sensors in the
neighborhood of the goal point will issue more accurate and less erroneous information.
Let us represent the sensory information, sensory data and noise in the differential
domain, for the $i$th sensor ($i = 1, \dots, N$), by $X_{di}$, $D_{di}$ and $n_{di}$, respectively;
$N$ is the total number of sensory units. The noise, as random measurement errors,
can be expressed as additive to the mapping of (2) in the following manner:
$$D_{di} = g_i(X_{di}) + n_{di}. \qquad (23)$$
The noise $n_{di}$ can be assumed to be a multivariate random vector with an $N \times N$ positive
definite covariance matrix $Q_{di}$,
$$Q_{di} = E\big[(n_{di} - E[n_{di}])(n_{di} - E[n_{di}])^T\big]. \qquad (24)$$
Treating $X_{di}$ as an unknown non-random vector and $n_{di}$ as having zero mean and a
Gaussian distribution, the conditional density function of $D_{di}$ given $X_{di}$ will be
$$p(D_{di} \mid X_{di}) = \frac{1}{(2\pi)^{N/2} |Q_{di}|^{1/2}}
\exp\left(-\frac{1}{2}\big(D_{di} - g_i(X_{di})\big)^T Q_{di}^{-1} \big(D_{di} - g_i(X_{di})\big)\right). \qquad (25)$$
Since $Q_{di}$ is positive definite and symmetric, its inverse exists. We intend to find
the value of $X_{di}$ which maximizes (25), for which we can determine the maximum
likelihood estimator. This estimator hence has to minimize the expression
$$K(X_{di}) = \big(D_{di} - g_i(X_{di})\big)^T Q_{di}^{-1} \big(D_{di} - g_i(X_{di})\big). \qquad (26)$$
Minimization of the above expression for estimator determination would be valid
even for additive errors that cannot be assumed Gaussian. Although the $g_i(X_{di})$'s in
general are nonlinear vector functions, expanding them in the differential
domain in a Taylor series about a reference point $X_{do}$ linearizes them. To a
reasonable approximation, only the first two terms are retained:
$$g_i(X_{di}) = g_i(X_{do}) + G(X_{di} - X_{do}), \qquad (27)$$
where $X_{di}, X_{do} \in \mathbb{R}^n$, $n$ being the dimension of the sensory information, and $G \in
\mathbb{R}^{N \times n}$ is the matrix of derivatives evaluated at $X_{do}$:
$$G = \begin{pmatrix}
\dfrac{\partial g_1}{\partial X_{d1}} & \cdots & \dfrac{\partial g_1}{\partial X_{dn}} \\
\vdots & \ddots & \vdots \\
\dfrac{\partial g_N}{\partial X_{d1}} & \cdots & \dfrac{\partial g_N}{\partial X_{dn}}
\end{pmatrix}. \qquad (28)$$
Each row of this matrix is the gradient vector of one of the components of $g_i(X_{di})$.
The vector $X_{do}$ has been taken as an initial estimate of $X_{di}$, determined from the
preliminary fusion results using Equations (14) and (19). The value of $X_{do}$ can
also be obtained from a previous iteration of some other estimation procedure or
from a priori information, if available.
In the subsequent analysis it has been assumed that $X_{do}$ is sufficiently close to
$X_{di}$ so that (27) is a reasonably accurate approximation.
Using (27), we can write
$$D_{di} - g_i(X_{di}) = D_{di} - g_i(X_{do}) - G(X_{di} - X_{do})
= D_{di} - g_i(X_{do}) + GX_{do} - GX_{di} = D'_{di} - GX_{di}, \qquad (29)$$
where
$$D'_{di} = D_{di} - g_i(X_{do}) + GX_{do}. \qquad (30)$$
Hence (26) is expressed as
$$K(X_{di}) = (D'_{di} - GX_{di})^T Q_{di}^{-1} (D'_{di} - GX_{di}). \qquad (31)$$
To minimize this, the gradient of $K(X_{di})$ has to be calculated and solved for the
value of $X_{di}$ such that
$$\mathrm{grad}\big(K(X_{di})\big) = \left(\frac{\partial K}{\partial X_{d1}}\ \frac{\partial K}{\partial X_{d2}}\ \cdots\ \frac{\partial K}{\partial X_{dn}}\right)^T = 0. \qquad (32)$$
This gradient is computed at $X_{di} = \hat{X}_{di}$.
The $Q_{di}$'s being symmetric matrices, $Q_{di}^T = Q_{di}$, and hence $(Q_{di}^{-1})^T = (Q_{di}^T)^{-1}
= Q_{di}^{-1}$, thereby implying that $Q_{di}^{-1}$ is a symmetric matrix as well. Therefore, from
(32), we get
$$2 G^T Q_{di}^{-1} G \hat{X}_{di} - 2 G^T Q_{di}^{-1} D'_{di} = 0. \qquad (33)$$
Assuming the matrix $G^T Q_{di}^{-1} G$ to be non-singular, (33) is solved as
$$\hat{X}_{di} = \big(G^T Q_{di}^{-1} G\big)^{-1} G^T Q_{di}^{-1} D'_{di}
= \big(G^T Q_{di}^{-1} G\big)^{-1} G^T Q_{di}^{-1}\big(D_{di} - g_i(X_{do}) + GX_{do}\big)$$
$$= X_{do} + \big(G^T Q_{di}^{-1} G\big)^{-1} G^T Q_{di}^{-1}\big(D_{di} - g_i(X_{do})\big). \qquad (34)$$
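A compact sketch of this update is given below: a single application of (34) is a weighted least-squares (Gauss–Newton style) correction about the reference point, and it is repeated while the differential-domain noise is re-drawn. The matrix $G$, the covariance $Q_{di}$, the residual model and the starting residual are illustrative assumptions only.

```python
# Sketch of the differential-domain update of Eq. (34):
#   X_hat = X_do + (G^T Q^-1 G)^-1 G^T Q^-1 (D_di - g_i(X_do)).
import numpy as np

def fdd_update(X_do, residual, G, Q_di):
    """One application of Eq. (34); 'residual' plays the role of D_di - g_i(X_do)."""
    Qi = np.linalg.inv(Q_di)
    gain = np.linalg.inv(G.T @ Qi @ G) @ G.T @ Qi
    return X_do + gain @ residual

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    G = np.array([[ 0.9, -0.3],        # assumed derivative matrix (Eq. (28)):
                  [ 0.2,  1.1],        # 4 low-level channels vs. 2-D information
                  [ 1.0,  0.1],
                  [-0.4,  0.8]])
    Q_di = np.diag([1e-4, 1e-4, 2e-4, 2e-4])
    X_do = np.array([0.02, -0.015])    # residual error after geometric fusion (assumed)
    for it in range(50):               # manipulate the differential-domain noise
        residual = rng.normal(0.0, np.sqrt(np.diag(Q_di)))
        X_hat = fdd_update(X_do, residual, G, Q_di)
        # each iteration yields a candidate correction around the optimized value
    print("last correction:", X_hat - X_do)
```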
In the simulation study with the sensor models as defined in the previous section,
the above iterations were performed by taking Xdo = Xdf , the absolute error
remaining in the fused information. This was made known from the uncertainty
ellipsoid of the fused information. The matrix
$$G = \begin{pmatrix}
\partial\theta_1/\partial X & \partial\theta_1/\partial Y \\
\partial\theta_2/\partial X & \partial\theta_2/\partial Y \\
\partial x/\partial X & \partial x/\partial Y \\
\partial y/\partial X & \partial y/\partial Y
\end{pmatrix}$$
was computed through (20)–(22). $[D_{di} - g_i(X_{do})]$ was substituted with manipulative
random noise whose covariance matrix was taken to be $Q_{di}$. This should be the net
error in the low-level sensory data in the differential domain, and multiplying it
by the respective Jacobian matrices should give the corresponding errors in the
sensory information. The latter must represent the correction adjustment factors for
the individual sensory information readings. The plots in Figure 9 show that it is
possible to manipulate the noise in the differential domain such that the variance
changes in the vicinity of the optimized uncertainty, and thereby to obtain these adjustment
factors for the individual sensory readings. In the first plot of Figure 9, the dotted
line corresponds to the variance of the X-coordinate fused information at a particular
location point (before FDD). In the same plot, we see how the variance changes over
the 50 iterations performed as per (34). Between iteration numbers 35 and 42, we find that
it varies closely around the original variance. Hence in this region a particular iteration
number may be selected, so that, corresponding to that iteration, the correction
adjustment factor for the X-dimension information can be obtained for both
sensors. The adjustment factors in the X-dimension information predicted for sensor 1
(S1) and sensor 2 (S2) for all the iterations have also been shown in Figure 9 as
'deltaX', and depending on the iteration number they can be appropriately selected.
Thus, on repeating the fusion process with the corrective adjustment terms obtained
from the differential domain, the accuracy of point placement tasks can be significantly
improved and its uncertainty can be minimized to pre-assigned values.

5. Fusion of Depth Information Using Multiple Baseline Stereo


In stereo matching using multiple baselines, images with different baselines are
obtained by lateral displacement of the camera, and by adding the SSD values from
multiple stereo pairs the global mismatch is reduced. However, there is a trade-off between
accuracy (correctness) and precision in this type of matching. In [33, 39] significant
contributions toward obtaining increased precision and removing ambiguity have been
discussed. However, none of them considered noise in the baseline measurements. In
our view, noise in baseline measurements is inevitable, and by using our fusion algorithm,
as discussed above, we have successfully counteracted the effect of baseline
noise and could further improve the distance estimate without increasing the number
of baselines. Analyzing the statistical characteristics of the processed intensity
function (pif) near the correct match, the variance of the estimated distance is
$$V_{d(i)} = \frac{2\sigma_{in}^2}{BL_i^2\, f^2 \sum_{j \in W} \big(g'(x + j)\big)^2}. \qquad (35)$$
Here $\sigma_{in}^2$ is the variance of the Gaussian white image noise, $BL_i$ is the $i$th baseline
measurement, $f$ is the focal length, and $g(x)$ is the image intensity function near the
matching position, $g'$ denoting its derivative. The summation is taken over a window $W$
at a pixel position $x$ of the image.

Figure 10.

Figure 11.

Figure 10 shows, for different baselines, how the error in the pif values varies with the
pixel position $x$ when noise in the baseline is taken into account. It is seen to
significantly affect the sum of the pif functions to be used in estimating the depth
information. Figure 11 shows the variation in the precision estimate of stereo matching,
taking random noise in three baselines of ratio 1 : 2 : 3. The fourth plot shows a
significant reduction in the variance after fusion of the three baselines.

During simulations a cosine intensity function was used as the image intensity and
the window size over which the functions were evaluated was taken as five.
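The variance model of (35) and the effect of combining baselines can be sketched as follows. The cosine intensity pattern and 5-pixel window follow the simulation described above, while the baseline lengths, image-noise level and focal length are assumed values; the final step uses scalar inverse-variance fusion, i.e. the one-dimensional analogue of (19), as a stand-in for the full fusion algorithm.

```python
# Sketch of Eq. (35) and of combining the depth variances from several baselines.
import numpy as np

def depth_variance(baseline, f, sigma_n, g_deriv, x, window=5):
    """Eq. (35): Vd(i) = 2*sigma_n^2 / (BL_i^2 f^2 sum_{j in W} g'(x+j)^2)."""
    offsets = np.arange(window) - window // 2
    denom = baseline**2 * f**2 * np.sum(g_deriv(x + offsets)**2)
    return 2.0 * sigma_n**2 / denom

if __name__ == "__main__":
    g_deriv = lambda x: -np.sin(x)              # derivative of a cosine intensity pattern
    f, sigma_n, x = 0.01, 0.5, 1.0              # assumed focal length, noise level, pixel
    baselines = np.array([1.0, 2.0, 3.0])       # ratio 1 : 2 : 3 as in Figure 11
    V = np.array([depth_variance(b, f, sigma_n, g_deriv, x) for b in baselines])
    V_fused = 1.0 / np.sum(1.0 / V)             # scalar inverse-variance fusion (cf. Eq. 19)
    print("per-baseline variances:", V, "fused variance:", V_fused)
```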

6. Conclusions
In this paper we have presented a sensor fusion strategy based on geometric optimization
using the Lagrangian method and used it to fuse information from both
external and internal sensors of a robot manipulator. Here a camera sensor
mounted on the robot gripper has been chosen as the external sensor, and optical encoders
mounted on the robot joints have been considered as the internal sensor – both specifying
the same attribute, i.e., the desired location of the robot gripper in Cartesian
space. This is a typical robot positioning problem, which has been formulated
here as a sensor fusion problem, with very significant application to any type
of robotized vision-based manipulation task. The fusion results obtained clearly
indicate that the accuracy of manipulators can be improved significantly
by adopting our fusion strategy. More specifically, we have developed two
new strategies that improve upon the performance available from existing
fusion methodologies in terms of reducing the residual uncertainty. The first approach
is to consider each dimension of the information separately and then apply
the geometric fusion method. The absolute error and uncertainty in this case have
been shown to be smaller when this was adopted a priori as a coarse correction before
attempting the actual fusion. This "Fission–Fusion" approach has proved to
be very useful in the consideration of multi-dimensional information and when the
covariance matrices of the individual sensors are close to singular.
In the second approach, we have proposed the strategy of "Fusion in the Differential
Domain" (FDD) as a means to further reduce the uncertainty that remains
in the fused information, which can even raise the precision to the nanotechnology
level. The simulation results strongly indicate that through this strategy a correction
factor for the individual sensory information can be predicted that actually
represents a smaller uncertainty in the overall information than that obtained
through the usual fusion process.
It has also been shown that, in the case of the stereo matching problem, the precision
estimate of depth information from multiple baselines is strongly affected by baseline
noise, and that by application of our fusion strategies the variance can be made smaller
and thus the uncertainty of correct matching can be reduced significantly. As future
work, artificial intelligence approaches like artificial neural network and fuzzy
logic models of the fusion strategies outlined here will be taken up.

Acknowledgements
This research is sponsored by MHRD, Govt. of India, through project No. MHRD
(31)99-2000/116/EMM.

References

1. Abidi, M. A. and Gonzalez, R. C.: Data Fusion in Robotics and Machine Intelligence,
Academic Press, Boston, MA, 1992.
2. Bar-Shalom, Y.: Multitarget-Multisensor Tracking: Advanced Applications, Artech House,
London, 1990.
3. Bhanu, B. and Jones, T.: Image understanding research for automatic target recognition, IEEE
Aerospace Electron Systems Mag. 8 (1993), 15–23.
4. Borthwick, S. and Durrant-Whyte, H.: Dynamic localization of autonomous guided vehicles,
in: Proc. of 1994 IEEE Internat. Conf. on Multisensor Fusion, 1994, Las Vegas, NV, pp. 92–97.
5. Briot, M., Talou, J. C., and Bauzil, G.: The multisensors which help a mobile robot find its
place, Sensor Rev. 1(1) (1981), 15–19.
6. Comparato, V. G.: Fusion – the key to tactical mission success, in: C. W. Weaver (ed.), Sensor
Fusion, Proc. SPIE 931, Orlando, FL, April 1988, pp. 2–7.
7. Daniel, M. M. and Willsky, A. S.: A multiresolution methodology for signal-level fusion and
data assimilation with applications to remote sensing, Proc. IEEE 85 (1997), 164–180.
8. Dasarathy, B. V.: Sensor fusion potential exploitation – innovative architectures and illustrative
applications, Proc. IEEE 85(1) (1997).
9. Durrant-Whyte, H. F.: Sensor models and multisensor integration, Internat. J. Robotics Res.
7(6) (1988), 97–113.
10. Fu, K. S., Gonzalez, R. C., and Lee, C. S. G.: Robotics, Control, Sensing, Vision, and
Intelligence, Intl. edn, McGraw Hill, New York, 1987.
11. Hager, G. D.: Task Directed Sensor Fusion and Planning, Kluwer Academic, Boston, MA,
1990.
12. Hall, D.L., Linn, R. J., and Llinas, J.: A survey of data fusion systems, in: Proc. SPIE Conf. on
Data Structure and Target Classification, Orlando, FL, April 1991, vol. 1470, pp. 13–36.
13. Hall, D. L. and Llinas, J.: An introduction to multisensor data fusion, Proc. IEEE 85(1) (1997).
14. Hansen, R. J., Hall, D. L., and Kurtz, S. K.: A new approach to the challenge of machinery
prognostics, Trans. ASME J. Engrg. Gas Turbines Power (April 1995), 320–325.
15. Harmon, L. D.: Automated tactile sensing, Internat. J. Robotics Res. 1(2) (1982).
16. Herman, H. and Schempf, H.: Serpentine manipulator planning and control for NASA Space
Shuttle payload servicing, Carnegie Mellon University, The Robotics Institute, Technical Report
No. RI-TR-92-10.
17. Hill, D., Edwards, P., and Hawkes, D.: Fusing medical images, Image Processing 6(2) (1994),
22–24.
18. Howard, I. P.: Human Visual Orientation, Wiley, Chichester, UK, 1982, Chapter 11.
19. Gesing, W. S. and Reid, D. B.: An integrated multisensor aircraft track recovery system for
remote sensing, IEEE Trans. Automat. Control 28(3) (1983), 356–363.
20. Kam, M., Zhu, X., and Kalata, P.: Sensor fusion for mobile robot navigation, Proc. IEEE 85(1)
(1997), 108–119.
21. Klein, L. A.: Sensor and data fusion concepts and applications, in: SPIE Opt. Engineering
Press, Tutorial Texts 14 (1993).
22. Kreithen, M. L.: Orientational strategies in birds: A tribute to W. T. Keeton, in: Behavioral
Energetics: The Art of Survival in Vertebrates, Ohio State Univ., Columbus, OH, 1983, pp. 3–
28.
23. Leclere, F. and Plamondon, R.: Automatic signature verification: The state of the art 1989–
1993, Internat. J. Pattern Recognition Artificial Intell. 8(3) (1994), 643–660.
24. Li, W.: Fuzzy logic based robot navigation in uncertain environment by multisensor integration,
in: Proc. of IEEE Internat. Conf. on Multisensor Fusion and Integration for Intelligent Systems,
1994, pp. 259–265.

25. Liggins, M. E., Kadar, I. et al.: Distributed fusion architectures and algorithms for target
tracking, Proc. of the IEEE 85(1) (1997).
26. Lopez-Orozco, J. A. et al.: An asynchronous, robust, and distributed multi-sensor fusion system
for mobile robots, Internat. J. Robotics Res. 19(10) (2000), 914–932.
27. Luo, R. C. and Kay, M. G.: Multi-sensor integration and fusion in intelligent systems, IEEE
Trans. Systems Man Cybernet. 19(5) (1989).
28. Nandi, G. C., Mitra, D., and Mukhopadhyay, A. K.: Information fusion from multiple sensors
for robotic applications, in: MATLAB India Millennium Conference, Bangalore, India, 15–17
November 2000, pp. 139–150.
29. Nandi, G. C. and Mitra, D.: Development of a sensor integration strategy based on geometric
optimization, SPIE Proc. 4385 (April 2001), 282–291.
30. Neisius, B., Dautzenberg, P., and Trapp, R.: Robotic manipulator for endoscopic handling of
surgical effectors and cameras, in: Proc. of the 1st Internat. Symp. on Medical Robotics and
Computer Assisted Surgery, Vol. 1, 1994, pp. 169–175.
31. Newman, E. A. and Hartline, P. H.: The infrared ‘vision’ of snakes, Sci. Amer. 246(3) (1982),
116–127.
32. Nitzan, D. et al.: Use of sensors in robot systems, in: Proc. of Internat. Conf. on Adv. Robotics,
Tokyo, Japan, September 1983, pp. 123–132.
33. Okutomi, M. and Kanade, T.: A multiple-baseline stereo, IEEE Trans. Pattern Anal. Mach.
Intelligence 15(4) (1993).
34. Proc. of 1994 7th Natl. Symp. on Sensor Fusion, ERIM, Ann Arbor, MI, 1994.
35. Proc. Data Fusion Syst. Conf., Johns Hopkins University, Naval Air Development Center,
Warminster, PA, 1986–1994.
36. Rosenblatt, J. K. and Thorpe, C. E.: Combining multiple goals in a behavior-based
architecture, in: Proc. of Internat. Conf. on Intelligent Robots and Systems, Vol. 1, 1995,
pp. 136–141.
37. Simmons, J. A. et al.: Composition of biosonar signals for target recognition by echolocating
bats, Neural Networks 8(7/8) (1995), 1239–1262.
38. Trivedi, M. M. et al.: Developing robotic systems with multiple sensors, IEEE Trans. Systems
Man Cybernet. 20(6) (1990).
39. Tsai, R. Y.: Multiple frame image point matching and 3D surface reconstruction, IEEE Trans.
Pattern Anal. Mach. Intelligence 5(2) (1983).
40. Ueno, M., Ross, W., and Friedman, M.: TORCS: A Teleoperated Robot Control System for the
self mobile space manipulator, CMU-RI-TR-91-07.