
15 Positioning

Daniel Fleischer and Christian Pich
15.1 Introduction
Many applications of sensor and ad hoc networks profit from knowledge of the geographic positions of all, or just some, of the sensor nodes. E.g., geometric routing can use positional information to pass packets efficiently to target nodes. Tasks like measuring temperature or humidity levels of a certain area can only be fulfilled if each sensor node knows its geographic position, or at least an approximation of it. Attaching a GPS (global positioning system) receiver to each sensor node is an undesirable solution, since GPS is quite costly and clumsy compared to the small sensor nodes. Furthermore, the reception of GPS signals might be disturbed when there is no direct connection to the satellites due to some obstacles (buildings, trees, . . . ).
Thus, there is a need to obtain the positions of sensor nodes from the network structure alone, without each sensor node being equipped with GPS. Quite a lot of approaches exist. Some of them demand the existence of a few sensor nodes that know their position (landmark nodes). Others do not need any absolute positions at all and compute virtual coordinates instead, i.e. coordinates that do not try to mirror the original coordinates exactly, but rather aim to improve algorithms that rely on positional information. For geometric routing, e.g., this has been shown to be sufficient [328]. The approaches can be further subdivided into methods where
– each sensor node only knows its neighbors ([290], Section 15.3.1),
– each sensor node furthermore knows the distance to all of its neighbors ([20], Section 15.3.2),
– each sensor node knows the angle between each two of its neighbors ([52], Section 15.3.6).
In general, the underlying graph model for sensor networks is given by unit disk graphs (in the following we will also consider the more realistic model of quasi unit disk graphs). Each sensor node is supposed to have a finite scan radius r, within which neighbors can be detected. Hence, by scaling, there is an edge between two nodes if and only if the Euclidean distance between them is at most 1. The problem UNIT DISK GRAPH RECOGNITION is to decide whether a given graph is a unit disk graph. This problem was first shown to be NP-hard in [50]. This implies that there cannot exist a polynomial time algorithm that computes the positions of sensor nodes given only their adjacencies (unless P = NP), because otherwise this algorithm would compute valid coordinates for a given graph if and only if it is a unit disk graph. Despite this hardness result there exist a lot of heuristics to determine the positions of sensor nodes that work quite well.

D. Wagner and R. Wattenhofer (Eds.): Algorithms for Sensor and Ad Hoc Networks, LNCS 4621, pp. 283–304, 2007.
© Springer-Verlag Berlin Heidelberg 2007
Outline of this chapter. In Section 15.2, proof sketches of the NP-hardness of UNIT DISK GRAPH RECOGNITION and some related problems are shown. Section 15.3 introduces some heuristics for positioning and for obtaining virtual coordinates. Finally, Section 15.4 concludes with some chapter notes.
15.2 Hardness Results
In this section we consider the complexity of the problem of computing the positions of nodes in a sensor network within various scenarios. In Section 15.2.1 only the network G = (V, E) is known, Section 15.2.2 deals with approximations, and Sections 15.2.3 and 15.2.4 additionally consider distance and angle information, respectively. We will use the following definitions.
Definition 15.2.1 (Embedding). An embedding of a graph G = (V, E) in the Euclidean plane is a mapping p : V → R^2.
Definition 15.2.2 (Realization). A realization of a unit disk graph G = (V, E) in the Euclidean plane is an embedding p of G, such that

{u, v} ∈ E ⟹ dist(p(u), p(v)) ≤ 1 , (15.1)
{u, v} ∉ E ⟹ dist(p(u), p(v)) > 1 . (15.2)
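Conditions (15.1) and (15.2) are easy to check mechanically for a concrete embedding. The following Python sketch (ours, not from the chapter; the function name is_realization and the argument layout are our choices) tests a given embedding against both conditions:

```python
import math

def is_realization(pos, edges, n):
    """Check conditions (15.1)/(15.2): adjacent nodes at distance <= 1,
    non-adjacent nodes at distance > 1.  `pos` maps node -> (x, y)."""
    E = {frozenset(e) for e in edges}
    for u in range(n):
        for v in range(u + 1, n):
            d = math.dist(pos[u], pos[v])
            if frozenset((u, v)) in E:
                if d > 1:           # condition (15.1) violated
                    return False
            elif d <= 1:            # condition (15.2) violated
                return False
    return True

# A path on three nodes: the non-adjacent endpoints must be > 1 apart.
pos = {0: (0.0, 0.0), 1: (0.9, 0.0), 2: (1.8, 0.0)}
assert is_realization(pos, [(0, 1), (1, 2)], 3)
```

Note that this only verifies a given embedding; by Theorem 15.2.4, finding one is the hard part.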
15.2.1 Unit Disk Graph Recognition
We derive that the problem of computing the positions of nodes in a sensor
network, given only its adjacencies, is NP-hard. The basic decision problem
with respect to positioning is UNIT DISK GRAPH RECOGNITION.
Problem 15.2.3 (UNIT DISK GRAPH RECOGNITION). Given a
graph G = (V, E). Is G a unit disk graph?
Theorem 15.2.4 ([50]). UNIT DISK GRAPH RECOGNITION is NP-
hard.
Fig. 15.1. Realization of a unit disk graph. Finding such an embedding is NP-hard
Proof. We present a proof sketch of the reduction from 3SAT with the additional condition that each variable is contained in at most 3 clauses, which leaves the problem NP-complete. The basic building block of the proof is called a cage, see Fig. 15.2. A cage is a cycle with some additional independent nodes, called beads, on a short path that is attached to two nodes of the cycle. Note that only a bounded number of nodes can be embedded inside the cycle, because otherwise some non-adjacent nodes would be too close. If several cages (and some extra nodes that ensure the beads can only flip in two directions) are connected to a chain, see Fig. 15.3, embedding the leftmost beads inside the cycle leads to an embedding of the rightmost beads outside the cycle. This effect is used to propagate information between variables and clauses of an instance I of 3SAT. Apart from the cages used for the chains, there is a different type of cage for each clause, see Fig. 15.4. Not all the beads of a clause cage can be embedded inside the cycle. The beads that have to be embedded outside the cycle take care that this clause is true. For each variable, there is a structure of several cages, see Fig. 15.5. The 5 nodes are embedded either in the left or right cycle, forcing the other beads of the corresponding cycle to be embedded outside. This corresponds to a truth setting of the variable. Together with a structure of cages for crossings of chains, all basic gadgets are now available to create a graph G that models an instance I of 3SAT; G is a UDG if and only if I is satisfiable.
Corollary 15.2.5. There is no polynomial time algorithm that computes a
realization of a given unit disk graph (unless P = NP).
Fig. 15.2. Basic building block is a cage
Fig. 15.3. Information propagation through chains of cages
Fig. 15.4. Modeling a clause. Not all beads can be embedded inside the cycle
Proof. Assume there is such a polynomial time algorithm A, and run A on an arbitrary graph G. If A does not terminate within its given upper time bound with respect to the input size, or the computed embedding does not fulfill conditions (15.1) and (15.2) (which can be checked in polynomial time), G is not a UDG. Otherwise, if the embedding fulfills both conditions, G is a UDG. Hence, a UDG can be recognized in polynomial time, a contradiction to Theorem 15.2.4.
Note that Corollary 15.2.5 implies that there cannot exist a polynomial time algorithm that computes the positions of nodes in a sensor network, given just its adjacencies, since these positions form a realization. Computing the actual coordinates of the sensor nodes is thus at least as hard as finding a realization of a UDG, which was proved to be NP-hard.
Fig. 15.5. Modeling a variable. Embedding the inner 5 black nodes on the left- or right-hand side corresponds to a truth setting
15.2.2 Unit Disk Graph Approximation
The last section showed that it is NP-hard to decide whether a given graph is a unit disk graph. In this section we introduce d-quasi unit disk graphs, where d is a constant in [0, 1], and show that their recognition is also NP-hard (for d ≥ 1/√2, Theorem 15.2.10). Theorem 15.2.9 states that for a unit disk graph no approximate embedding (in the sense of Definition 15.2.8) with quality q ≤ √(3/2) can be found in polynomial time.
Definition 15.2.6 (d-Quasi Unit Disk Graph). A graph G = (V, E) is called a d-quasi unit disk graph (d-QUDG, 0 ≤ d ≤ 1), if there is an embedding p such that

{u, v} ∈ E ⟹ dist(p(u), p(v)) ≤ 1 , (15.3)
{u, v} ∉ E ⟹ dist(p(u), p(v)) > d . (15.4)

Note that this definition does not determine whether there is an edge between two nodes at a distance in (d, 1]. This yields a more realistic model of sensor networks, since nodes whose distance lies within that interval may still, or may no longer, detect each other.

Definition 15.2.7 (Realization of a d-QUDG). An embedding p of a d-QUDG G = (V, E) is called a realization of G if it fulfills both (15.3) and (15.4).
Definition 15.2.8 (Quality). The quality q(p) of an embedding p of a graph G = (V, E) is defined by

q(p) = max_{{u,v}∈E} dist(u, v) / min_{{u,v}∉E} dist(u, v) .
Thus, the quality of a realization of a UDG is smaller than 1 (a good embedding has a small value for its quality). Note that the two concepts d-QUDG and quality are very similar. Let p be an embedding of a graph G with quality q, where w.l.o.g. the maximum edge length is 1 (scale p by a factor of 1/max_{{u,v}∈E} dist(u, v)). Then p is a realization of G as a (1/q − ε)-QUDG for every ε > 0. On the other hand, if p is an embedding of a graph G as a d-QUDG, then p is an embedding of quality q(p) < 1/d.
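Definition 15.2.8 translates directly into code. The following Python sketch (ours, not from the chapter; the function name quality is our choice) computes q(p) for a small embedding:

```python
import math
from itertools import combinations

def quality(pos, edges):
    """Quality q(p) of Definition 15.2.8: the longest edge length divided
    by the shortest distance between any non-adjacent pair of nodes."""
    E = {frozenset(e) for e in edges}
    longest_edge = max(math.dist(pos[u], pos[v]) for u, v in edges)
    shortest_gap = min(math.dist(pos[u], pos[v])
                       for u, v in combinations(pos, 2)
                       if frozenset((u, v)) not in E)
    return longest_edge / shortest_gap

# Path 0-1-2 with edges of length 0.8; the only non-edge {0,2} has length 1.6.
pos = {0: (0.0, 0.0), 1: (0.8, 0.0), 2: (1.6, 0.0)}
q = quality(pos, [(0, 1), (1, 2)])
```

Here q = 0.8/1.6 = 0.5 < 1, as must hold for any UDG realization.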
Theorem 15.2.9 ([235]). Given a UDG G, it is NP-hard to find an embedding p such that q(p) ≤ √(3/2) − ε(n), where lim_{n→∞} ε(n) = 0.

Theorem 15.2.10 ([235]). Given a graph G, it is NP-hard to decide whether G can be realized as a d-quasi unit disk graph with d ≥ 1/√2.
Proofs of these theorems can be found in [235] and are omitted here.
Note that Theorem 15.2.10 is more interesting from a practical point of view than Theorem 15.2.9. If sensors at a distance in (d, 1] may or may not be connected, the sensor network is no longer a UDG as assumed for Theorem 15.2.9, but only a d-QUDG.
Corollary 15.2.11. Given a graph G, it is NP-hard to find an embedding p such that q(p) < √2 .
15.2.3 Distance Information
Fig. 15.6. Scenario: each node knows the distances to its neighbors
Assume for this section that distance information is available, i.e. each node knows the distances to all of its neighbors. Even in this scenario, it is NP-hard to find a realization.
Theorem 15.2.12 ([20]). Given a graph G and distances between all pairs of adjacent nodes, it is NP-hard to decide whether G can be realized as a unit disk graph.
Proof. We very briefly sketch the proof of [20], a reduction from CIRCUIT SATISFIABILITY. The basic building block is called a binary hinge, see Fig. 15.7, i.e. a structure that has exactly two embeddings (modulo translation, rotation and reflection). Note that each triangle has exactly one embedding when distances are given. Binary hinges together with some more rigid structures are then used to form NOT and implication gates, and junctions to connect the gates. These gates form a universal set, hence for an instance I of CIRCUIT SATISFIABILITY one can construct a graph G modeling the circuit that has a realization if and only if I is satisfiable.
Fig. 15.7. A binary hinge has exactly two embeddings. Upper and lower bars consisting of several triangles are rigid, i.e. there is only one embedding
15.2.4 Local Angle Information
Assume for this section that local angle information is available, i.e. each node v with two neighbors u and w knows the angle ∠(u, v, w). But also in this scenario it remains NP-hard to find a realization. Note that the knowledge of both distances and angles immediately delivers the actual coordinates modulo translation, rotation and reflection (if G is connected).
Theorem 15.2.13 ([52]). Given a graph G and local angles for each node,
it is NP-hard to decide whether G can be realized as a unit disk graph.
Fig. 15.8. Scenario: each node knows the local angles between its neighbors (displayed only for the black node)
Proof. We very roughly present a proof sketch of a reduction from 3SAT found in [52]. Three basic structures are used to build up a graph G with given local angle constraints, corresponding to an instance I of 3SAT, such that G is realizable if and only if I is satisfiable. The basic building blocks are springs, amplifiers and propagators, see Fig. 15.9. A spring is not fixed in its length, e.g. the spring of Fig. 15.9 can have an arbitrary length from the interval (1, 4]. An amplifier fixes the ratio between two edge lengths, and a propagator is used to propagate edge lengths horizontally and vertically. Using these 3 basic building blocks, one can construct 0/1 blocks that correspond to a variable, and clause components, which are connected in such a way that there is a realization if and only if a given instance of 3SAT is satisfiable.
15.3 Algorithms
The last section showed that computing the positions of sensor nodes is NP-hard in general. Even if we consider approximations or additional information is available, the problem remains NP-hard. Nevertheless, this section presents several heuristics that deliver quite good results in practice.
15.3.1 Virtual Coordinates
This section roughly describes the algorithms by Moscibroda et al. [290] and Rao et al. [328]. The algorithm by Moscibroda et al. was the first algorithm to compute a realization of a given UDG with provable approximation ratio and polynomial running time. Apart from the adjacencies this algorithm needs no further input; in particular, no distance or local angle information needs to be available. Since no absolute positions are available, this algorithm can only compute virtual coordinates, i.e. nodes are assigned to positions in an arbitrary coordinate system, where close nodes in this system are supposed to be close in real coordinates. Often, this is sufficient to improve algorithms that depend on geographical positions, e.g. geometric routing [328].
The algorithm can be divided into four steps.
– Step 1: solve an LP to obtain a metric on the nodes,
– Step 2: compute a volume respecting embedding into R^n,
– Step 3: random projection into R^2,
– Step 4: local refinement of the random projection.
We briefly go through these steps.
Step 1. A metric μ on the input graph is computed, satisfying certain additional constraints, e.g. μ(u, v) ≤ 1 for all {u, v} ∈ E and μ(u, v) ≤ √n for all pairs of nodes. These constraints, together with equations ensuring that μ satisfies positiveness and the triangle inequality, and some spreading constraints given by

Σ_{v∈S} μ(u, v) ≥ c · |S|^{3/2} , for all independent sets S ⊆ V, u ∈ V, c const. (15.5)
form the LP that has to be solved. The spreading constraints (15.5) ensure that the sum of pairwise distances of nodes of independent sets is sufficiently large, such that these nodes will be well spread over the plane in the following steps. Note that there are possibly exponentially many constraints, and it is not trivial to reduce their number. For details we refer to [290].
Step 2. Volume respecting embeddings into high-dimensional spaces were introduced in [124]. When nodes from this high-dimensional space are projected into a random lower dimensional space, arbitrary volumes spanned by tuples of nodes are well preserved, i.e. roughly speaking, large volumes are supposed to be projected to large volumes, and small volumes to small volumes. This step finds an embedding of the nodes of the given graph into R^n that has the property that the Euclidean distances of the nodes in R^n closely match the metric μ, and that volumes (and distances, respectively) will be well preserved when randomly projecting to a lower dimensional subspace. The next step consists of such a random projection into R^2.
Step 3. A vector in R^n is expected to be scaled by a factor of √(2/n) by a projection to a random subspace of dimension 2, and the probability is concentrated around this expectation. Hence, the distances between the nodes in R^n and between their projections to a random subspace will be well preserved up to a scaling factor of √(2/n).
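The scaling behavior of such a random projection can be illustrated numerically. The following Python sketch (ours; it demonstrates only the projection step, not the volume-respecting embedding of Step 2) projects Gaussian vectors from R^n onto a random 2-dimensional subspace and compares the average length ratio with √(2/n):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# A random 2-dimensional subspace of R^n: orthonormalize two Gaussian vectors.
Q, _ = np.linalg.qr(rng.standard_normal((n, 2)))

# Project many vectors; their lengths shrink by roughly sqrt(2/n) on average.
vecs = rng.standard_normal((200, n))
ratios = np.linalg.norm(vecs @ Q, axis=1) / np.linalg.norm(vecs, axis=1)
mean_ratio = ratios.mean()
```

The observed mean ratio lies close to √(2/400) ≈ 0.07, illustrating the concentration claimed above.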
Step 4. The final step consists of a local refinement process. First, the projected nodes of step 3 are enclosed in a grid of cell width 1/√n. For each cell C a maximal independent set is computed. Let M be the maximum cardinality over all these independent sets. The grid is further refined to cells of width 1/(√n · M). The nodes of each maximal independent set S of a cell C are then arbitrarily assigned to different centers of cells of the refined grid. Nodes in a cell C that are not in the maximal independent set S are arbitrarily assigned to one of their neighbors v in S and positioned on a circle of radius 1/(3√n · M) around v. Non-neighbored nodes are equally distributed on the circle, while neighbored nodes may be placed on the same position. Finally, the plane is scaled by a factor of √n · M, such that the cell width of the refined grid is 1.
Theorem 15.3.1 ([290]). Steps 1 to 4 yield a polynomial time algorithm whose quality is in O(log^{2.5} n · √(log log n)) with high probability.
Another heuristic algorithm can be found in [328]. There do not exist any proven bounds on the quality of this algorithm, but experimental studies show quite good results. The algorithm can be seen as the last member of a sequence of three algorithms with decreasing amounts of available information. While the first algorithm needs all positions of the perimeter nodes, the second only assumes that the perimeter nodes are known (to be perimeter nodes), and the last needs no additional information apart from the adjacencies of the given graph.
Positions of perimeter nodes are known. If the positions of the perimeter nodes (i.e. the nodes that span the convex hull of the network) are known, the remaining nodes can be set to positions according to the following spring-embedder model. Assume that each edge of the given graph acts like a spring whose attracting force is proportional to the distance between its end nodes. In this model a node v will be stable in its position (x_v, y_v) when all forces affecting it sum up to zero, i.e. when v's position is the barycenter of its neighbors:

x_v = Σ_{u:{u,v}∈E} x_u / d(v) ,  y_v = Σ_{u:{u,v}∈E} y_u / d(v) , (15.6)

where d(v) denotes the degree of v. To set all remaining nodes to stable positions, their positions are all initialized by an arbitrary point inside the convex hull of the perimeter nodes, and then equations (15.6) are iteratively applied to these nodes, i.e. an iteration round consists of computing updated positions for all nodes based on their old positions. The iteration will converge to stable positions, and this forms the final embedding of all nodes. This procedure will not put any nodes outside of the convex hull. Alternatively, the positions can also be computed by solving those lines of the two linear equation systems

Lx = 0 ,  Ly = 0 ,

where L is the Laplacian matrix of the given graph, restricted to the lines corresponding to nodes that are not perimeter nodes. If the given graph is connected, this yields a unique solution (see, e.g., [367]).
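The iterative application of equations (15.6) can be sketched as follows (ours; the function name barycentric_layout and the toy graph are our choices). Free nodes repeatedly move to the barycenter of their neighbors while perimeter positions stay fixed:

```python
import numpy as np

def barycentric_layout(n, edges, fixed, iters=500):
    """Iterate equations (15.6): every free node moves to the barycenter
    of its neighbors; `fixed` maps perimeter nodes to known positions."""
    nbrs = [[] for _ in range(n)]
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    pos = np.zeros((n, 2))
    for v, p in fixed.items():
        pos[v] = p
    for _ in range(iters):
        for v in range(n):
            if v not in fixed:
                pos[v] = pos[nbrs[v]].mean(axis=0)   # barycenter update
    return pos

# A node joined to four fixed perimeter corners ends up at their barycenter.
corners = {0: (0, 0), 1: (2, 0), 2: (2, 2), 3: (0, 2)}
pos = barycentric_layout(5, [(4, i) for i in range(4)], corners)
```

For the interior node the iteration converges to (1, 1), the barycenter of the four corners, and in general it never places a node outside the convex hull of the fixed positions.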
Perimeter nodes are known. If it is just known which nodes are perimeter nodes, but no positions of them are available, virtual coordinates for these perimeter nodes are first computed, and then the previous algorithm is applied. So, how can virtual coordinates for the perimeter nodes be obtained? First, each perimeter node v determines the hop-distances, i.e. the number of edges of a shortest path, to all other perimeter nodes by simply flooding the network with a message like (v, 1), where the index is increased by one when forwarding this message, and where only the message with the least index from a specific node v is forwarded/accepted. Depending on the number of iterations, the following step should either be done by all perimeter nodes in parallel, or all the distance information is passed to one designated node that executes the following step. Again we use a spring-embedder model, but this time a spring is relaxed when its length equals the hop-distance of the previous step. This can again be solved iteratively, and after a sufficient number of iterations the positions are used as virtual coordinates for the perimeter nodes.
No additional information. If the perimeter nodes are not known, it is first tried to identify them, and afterwards the previous procedure is applied. To identify perimeter nodes, one designated bootstrap node v floods the entire network, such that each node knows its distance to v. Then each node that is farthest away among all its distance-2-neighbors decides to be a perimeter node.
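The flooding-based perimeter detection can be sketched as follows (ours; BFS plays the role of the flooding, and the names perimeter_nodes and adj are our choices). A node declares itself a perimeter node when no node in its 2-hop neighborhood is farther from the bootstrap node:

```python
from collections import deque

def perimeter_nodes(adj, bootstrap=0):
    """Each node whose hop distance to the bootstrap node is maximal among
    its 2-hop neighborhood declares itself a perimeter node."""
    n = len(adj)
    dist = [-1] * n
    dist[bootstrap] = 0
    q = deque([bootstrap])
    while q:                                   # BFS = the flooding step
        u = q.popleft()
        for v in adj[u]:
            if dist[v] < 0:
                dist[v] = dist[u] + 1
                q.append(v)
    peri = []
    for v in range(n):
        two_hop = {w for u in adj[v] for w in adj[u]} | set(adj[v])
        two_hop.discard(v)
        if all(dist[v] >= dist[w] for w in two_hop):
            peri.append(v)
    return peri

# On a path 0-1-2-3, flooding from node 0 marks node 3 (the far end).
path = [[1], [0, 2], [1, 3], [2]]
```

In a centralized simulation like this, each BFS level corresponds to one round of message forwarding in the network.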
15.3.2 Distance Information
For the next sections it is assumed that distances between adjacent nodes are available, i.e. sensor nodes are able to measure the distance to their neighbors, e.g. via signal strength. A straightforward method is to obtain positions by triangulation, see e.g. [341]: a node v can compute its position when it has 3 neighbors that know their positions (and are not collinear), because there is only one position satisfying the 3 distances to these neighbors. If initially there are 3 nodes knowing their position, this method can eventually be applied incrementally to deliver positions for all nodes of the network. If n nodes are placed independently, uniformly at random on a square of side length √(n/(8 log n)), this is possible with high probability [115]. Anyhow, the disadvantage of this incremental method is that it is extremely sensitive to errors in the distance measurements, because of error propagation.
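The triangulation step admits a closed-form solution: subtracting the circle equations pairwise linearizes them into a 2×2 system. A Python sketch (ours; the function name trilaterate is our choice):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Recover a position from three non-collinear anchor positions and
    the measured distances to them, by linearizing the circle equations."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # Subtracting ||p - a1||^2 = d1^2 from the other two circle equations
    # cancels the quadratic terms and leaves a linear 2x2 system.
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

anchors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
true_p = np.array([0.3, 0.4])
dists = [np.linalg.norm(true_p - a) for a in anchors]
```

With exact distances the true position is recovered; with noisy distances the error of this step is exactly what propagates through the incremental scheme described above.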
Fig. 15.11. Two realizations of a UDG with the same distances between adjacent
nodes
15.3.3 Force Based Optimization
In 2003, Priyantha et al. were the first to propose an algorithm that assigns coordinates to the nodes of a sensor network by using only the measured distances d_ij between every pair of adjacent nodes n_i, n_j with {n_i, n_j} ∈ E, and that works fully distributed and without landmark nodes (sometimes also called anchor nodes); it is hence called AFL (Anchor Free Location). The algorithm basically uses a distributed mass-spring optimization method that tries to adjust the location of every node such that the global energy of the system (penalizing differences between measured and calculated distances) is minimized.
Through simulation, the authors observe that the crucial point in the optimization process is the initial arrangement of the system. When there are multiple solutions to the system of distance constraints, they mainly differ in foldovers, meaning that there are larger parts of the graph that fold over other parts without violating any of the distance constraints. To prevent the optimization from converging into a folded local minimum, a fold-free initial arrangement is needed.
Hence, the algorithm consists of two phases: In the first phase, a heuristic method constructs a fold-free placement that is already close to the real one. Then, in the second phase, this placement is refined by the optimization process.
Phase 1: Initial fold-free placement. A heuristic selects five reference nodes: n_1, n_2, n_3, n_4 should be on the periphery of the sensor network, while n_5 should be roughly in the center. First, let h_ij = h_ji denote the hop count, i.e. the graph-theoretical distance, between n_i and n_j, and assume that the maximum radio range of every node equals 1. For tie-breaks, nodes can be assigned unique identifiers.
– Select a node n_0 randomly, then choose n_1 to maximize h_01.
– Select a node n_2 maximizing h_12.
– From all n_3 minimizing |h_13 − h_23|, choose the one maximizing h_13 + h_23.
– From all n_4 minimizing |h_14 − h_24|, choose the one maximizing h_34.
– From all n_5 minimizing |h_15 − h_25|, choose the one minimizing |h_35 − h_45|.
Fig. 15.12. Finding reference nodes n_1 to n_5. Subfigures (a) through (e) correspond to the five steps, and (f) shows the resulting reference nodes. The relevant shortest paths are drawn with bold lines; grey nodes have been found in previous steps, black nodes are found in the current step.
Thus, the roughly antipodal pairs (n_1, n_2) and (n_3, n_4) should be perpendicular, and n_5 should be in the center. Fig. 15.12 depicts an example of finding the reference nodes; the graph is due to [322]. The hop counts from nodes n_1, n_2, n_3, n_4 to all other nodes n_i are then used to approximate the polar coordinates (ρ_i, θ_i), thereby arranging the n_i in concentric circles around n_5:

ρ_i = h_5i ,  θ_i = arctan( (h_1i − h_2i) / (h_3i − h_4i) )

This method is not guaranteed to produce embeddings without foldovers, but the authors state that it can indeed reduce the probability of convergence towards local minima. Fig. 15.13 shows an initial embedding generated by AFL. Interestingly enough, it could also be used for computing rough virtual coordinates.
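The hop counts h_ij and the polar estimate can be sketched as follows (ours, not from the AFL paper; we use atan2 instead of arctan to avoid a division by zero when h_3i = h_4i):

```python
import math
from collections import deque

def hops(adj, src):
    """BFS hop counts from `src` (the flooding each reference node does)."""
    d = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def initial_position(h1, h2, h3, h4, h5):
    """AFL's polar estimate: rho_i = h_5i and theta_i from the two roughly
    perpendicular reference pairs, returned as Cartesian coordinates."""
    rho = h5
    theta = math.atan2(h1 - h2, h3 - h4)
    return rho * math.cos(theta), rho * math.sin(theta)
```

Nodes with identical hop-count tuples receive identical positions, which explains the coinciding nodes visible in Fig. 15.13.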
Fig. 15.13. Left: Randomly generated sensor network with initial positions. Right: Initial embedding constructed by AFL. It can be seen how the nodes are placed in a concentric fashion, many of them having identical positions due to equal distances to the five reference nodes. The images are taken from the original article.
Phase 2: Mass-spring optimization. Once the initial embedding is calculated, the distributed mass-spring optimization is started. At any time, each node n_i knows
– an estimate p_i of its own position,
– the measured distances d_ij to all its neighbors n_j,
– the estimates p_j of the positions of all its neighbors n_j,
– and hence the estimated distances d̂_ij = ‖p_j − p_i‖ to all its neighbors n_j,
and periodically sends its own estimated position p_i to all of its neighbors n_j.
Then, expressed by means of the spring model, the spring between n_i and n_j exerts a force

F_ij = ( d̂_ij − d_ij ) · (p_j − p_i) / ‖p_j − p_i‖

on node n_i towards n_j. Summing up the force vectors of all springs attached to n_i results in the net force vector

F_i = Σ_j F_ij .
The differences between estimated and measured distances are quadratically penalized by the energy

E_i = Σ_j ( d̂_ij − d_ij )²

of node n_i, so the total energy E of the system is

E = Σ_i E_i .
The energy E_i of node n_i, and hence the global energy E, can be reduced by moving the node in the direction of its net force vector F_i. This approach is known as gradient descent. To prevent the optimization process from running into a local minimum, the amount by which the node is moved must not be too large, but in turn, a too small value will result in slow convergence. The authors propose to normalize the amount of movement by deg(n_i), the number of neighbors of n_i:

p_i ← p_i + F_i / (2 deg(n_i))

The distributed optimization is iterated until the change in position p_i stays below a previously defined threshold value for every node n_i. The resulting embedding still has degrees of freedom in rotation and reflection, but can be pinned down when at least three nodes know their absolute positions.
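One round of the mass-spring refinement can be sketched as follows (ours; the names afl_step, adj and dist are our choices, and nodes at coinciding estimated positions are not handled):

```python
import numpy as np

def afl_step(pos, adj, dist):
    """One round of AFL's mass-spring refinement: every node moves along
    its net spring force F_i, damped by twice its degree."""
    new = pos.copy()
    for i, nbrs in enumerate(adj):
        F = np.zeros(2)
        for j in nbrs:
            delta = pos[j] - pos[i]
            est = np.linalg.norm(delta)              # estimated distance
            F += (est - dist[i, j]) * delta / est    # spring force F_ij
        new[i] = pos[i] + F / (2 * len(nbrs))        # damped update
    return new
```

For two nodes placed 2 apart with measured distance 1, one step moves both estimates half of the way towards each other, reducing the energy of the system.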
15.3.4 Spectral Methods
Another algorithm using distance information was introduced by Gotsman and Koren in 2004 [166]; it is based on the distributed computation of certain eigenvectors of a matrix associated with the sensor network.
It is very similar to the AFL method described above in that it first computes an initial embedding, which is then refined by a distributed optimization technique. However, it tries to overcome some of AFL's disadvantages by using more sophisticated techniques in both phases. For example, it uses only direct communication between adjacent nodes instead of multi-hop messages. Moreover, it seems to be more stable against errors in distance measurements.
Phase 1: Initialization. For ease of presentation, we keep to the original article and start by presenting the one-dimensional layout, which is then extended to higher-dimensional cases. Since the algorithm is heavily based on methods from linear algebra, it makes sense to write the x-coordinates as a vector x ∈ R^n, with x_i being the x-coordinate assigned to node n_i.
Again, the key to success in constructing a good initial configuration that is free of foldovers is to use an energy function whose minimization keeps adjacent nodes close together and non-adjacent nodes apart, and which can be optimized in a distributed fashion. Given the measured distances d_ij between every adjacent pair {i, j} ∈ E, the energy function for a given placement x is defined as

E(x) = Σ_{{i,j}∈E} w_ij (x_i − x_j)² / Σ_{i<j} (x_i − x_j)² ,

where the w_ij in the numerator denote the weights of the node pairs i, j. Such a weight can be interpreted as the similarity of two adjacent nodes i, j, derived from the measured distance d_ij; the default setting is w_ij = exp(−d_ij). The denominator is to be maximized, thereby trying to achieve a good scattering of all nodes over the whole placement by taking into account all pairs of nodes, regardless of there being a connecting edge.
The problem of finding a placement x minimizing E(x) can be directly solved using results from standard linear algebra: Let the square matrix W have entries w_ij, and let D denote the diagonal matrix whose ith diagonal entry is the sum of the entries in the ith row of W: D_ii = Σ_j w_ij. Then the global minimum of E(x) (which is known to exist and to be zero, since the Euclidean distances are derived from a measurement) is attained by the eigenvector of the related Laplacian matrix L = D − W associated with the smallest positive eigenvalue. However, the closely related transition matrix D^{-1}W is used in practice. This is mainly because, unlike for the Laplacian, the largest eigenvalue of the transition matrix is 1, which is convenient for numerical computations.
The desired eigenvector is computed by iteratively replacing each node value x_i by the average of x_i and the weighted mean of the values x_j of its adjacent nodes j:

x_i ← (1/2) ( x_i + Σ_j w_ij x_j / Σ_j w_ij ) ,

where the coefficient 1/2 damps the growth of x. This method of iterated averaging is called power iteration, and in general it will converge to the eigenvector associated with the eigenvalue of highest absolute value. Reversing the order of eigenvectors by using I − D^{-1}W (I being the identity matrix) in the power iteration yields the desired lowest eigenvectors of the original matrix D^{-1}W.
The top eigenvector of D^{-1}W is v_1 = 1_n = (1, 1, . . . , 1) and is associated with the eigenvalue 1, so we are rather interested in the second eigenvector v_2, which is the actual solution. Thus, we first compute v_2 by starting the power iteration with a vector u that is D-orthogonal to v_1 (i.e., u^T D v_1 = 0; see below). Then, the next highest eigenvector v_3 can be computed by starting the process with a vector D-orthogonal to v_2. In the end, we will be able to use v_2 as x-coordinates and v_3 as y-coordinates for the initial embedding.
Since computations can only be done locally, there is a need for a distributed way of orthogonalizing against x when computing y. This is done by initializing the power iteration with y = Au, where u is a random vector, and A is a local matrix (with non-zero entries only for adjacent nodes) defined as

A_ij = −x_j / D_ii  if {i, j} ∈ E ,
A_ij = 0  if {i, j} ∉ E, i ≠ j ,
A_ii = −Σ_{k≠i} A_ik ,

which ensures that A^T D x = 0, and thus y^T D x = (Au)^T D x = u^T (A^T D x) = 0, i.e. y is D-orthogonal to x. This vector is D-orthogonal to x = v_2, but not to v_1 = 1_n. The v_3 component must hence be prevented from vanishing against the v_1 component in the power iteration. This can be achieved by carefully scaling and translating y in the process. See [166] for details.
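The damped, D-orthogonalized power iteration can be sketched as follows (ours; the function name second_eigenvector, the random seed, and the iteration count are our choices). On a path graph, the computed second eigenvector is monotone along the path, as expected for a one-dimensional layout:

```python
import numpy as np

def second_eigenvector(W, iters=3000):
    """Damped power iteration from the text, x <- (x + D^{-1} W x)/2,
    kept D-orthogonal to the top eigenvector v_1 = (1, ..., 1)."""
    d = W.sum(axis=1)
    x = np.random.default_rng(1).standard_normal(len(d))
    for _ in range(iters):
        x -= (d @ x) / d.sum()          # D-orthogonalize against 1_n
        x = 0.5 * (x + (W @ x) / d)     # damped transition-matrix step
        x /= np.abs(x).max()            # keep the iterate bounded
    return x

# A path graph on 5 nodes with unit edge weights.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
v2 = second_eigenvector(W)
```

The damping by 1/2 shifts the spectrum of the transition matrix into [0, 1], which is what keeps the iteration from oscillating on bipartite graphs such as this path.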
Fig. 15.14. Left: An initial spectral embedding of the graph in Fig. 15.13, found by the eigenprojection. Right: The embedding after the optimization procedure, which is very close to the actual configuration. Image from the original article.
Phase 2: Energy minimization. In contrast to the AFL approach, which uses gradient descent to minimize the local energy in each step, a distinct approach called majorization is used. This is mainly because it does not depend on the scale, and it does not require any heuristically fixed damping factor for the optimization step size, as opposed to the mass-spring optimization.
The objective function to be minimized when reconstructing the true node coordinates is the localized stress at node i,

    stress_i(p) = Σ_{j: {i,j} ∈ E} ( ||p_i − p_j|| − d_ij )² .

Once a good initial configuration is found, there is a good chance for the optimization to converge to the global minimum, in which ||p_i − p_j|| = d_ij for all {i,j} ∈ E, i.e., stress(p) = 0, and that is guaranteed to exist because the measured distances have been derived from an existing (yet unknown) placement.
Majorization is very common in multidimensional optimization and is based on the fact that it is possible to find a sequence of layouts p(0), p(1), p(2), . . . with non-increasing stress, converging to the desired solution layout. The main idea is to use a majorizing function M(p, q) of stress(p), satisfying stress(p) ≤ M(p, q) for all layouts q, and stress(p) = M(p, p). Starting at p(0), the function M_0(p, p(0)) is minimized as a function of p. The resulting minimum p(1) is then used to minimize the new majorizing function M_1(p, p(1)), and so on. This is repeated until convergence; in practice, the iteration is terminated when the absolute difference between successive layouts falls below a given threshold.
Improving a layout p(t) to p(t + 1) is done using a distributed version of Jacobi iteration at each node i, namely

    p_i ← (1 / deg(i)) · Σ_{j: {i,j} ∈ E} ( p_j(t) + d_ij · (p_i(t) − p_j(t)) · inv(||p_j(t) − p_i(t)||) ) ,

where inv(x) := x^{-1} for x ≠ 0, and inv(0) := 0. Setting p_i(t + 1) to this value for every node i yields the next layout p(t + 1). The sequence of the p(t), t ∈ N (each computed by such an iterative process), indeed converges to the minimum, as shown, e.g., in [166] or [146].
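A minimal sketch of one synchronous round of this update follows; the dense data layout and the choice of a full round per call are our own assumptions for illustration, whereas the real algorithm runs the update locally at each node.

```python
import numpy as np

def majorization_step(p, adj, d):
    """One synchronous round of the localized majorization (Jacobi) update
    p_i <- (1/deg(i)) * sum_{j in N(i)} [ p_j + d_ij (p_i - p_j) inv(||p_j - p_i||) ].
    p: (n, 2) array of positions, adj: dict node -> list of neighbors,
    d: (n, n) array of measured distances."""
    inv = lambda s: 1.0 / s if s != 0 else 0.0
    new_p = p.copy()
    for i, nbrs in adj.items():
        acc = np.zeros(p.shape[1])
        for j in nbrs:
            acc += p[j] + d[i, j] * (p[i] - p[j]) * inv(np.linalg.norm(p[j] - p[i]))
        new_p[i] = acc / len(nbrs)
    return new_p
```

Because each node only needs its neighbors' current positions and the measured distances to them, the round maps directly onto one message exchange per edge.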
15.3.5 Multidimensional Scaling
In 2004, Shang and Ruml [345] presented another distributed localization method (without landmark nodes) based on multidimensional scaling, an extension of a centralized version called MDS-MAP presented by the same authors earlier.
The method consists of three phases: First, each node constructs a local map of its two-hop neighborhood. Second, all local maps are successively merged into a global map. Third, the computed map is further refined. Thus, the first and second step can be viewed as an initialization similar to the ones in the algorithms described above, and the final refinement step corresponds to the optimization processes.
When only connectivity information is available, i.e., if each node knows which other nodes are within reach, the method can provide virtual coordinates that can be used, e.g., for routing. When the measured distances are also known, they can be used for refining both the local and the resulting global maps to get an estimate of absolute coordinates. In addition, [345] presents an approach to dealing with only partial measurements (when distances between nodes are known only in parts of the ad hoc network).
Constructing the local maps. In the first phase of the algorithm, each node constructs the subgraph induced by the set of nodes at most k steps away. In the original article, k = 2 proved to be a good compromise between accuracy and computation costs.
To construct such subgraphs, it suffices for each node to collect the directly available information about all of its neighbors (who they are and, if available, how far away they are) and then pass this collection on to all of its neighbors. Thus, each node can construct the distance matrix related to its subgraph and, by multidimensional scaling using the first two eigenvectors, generate a local embedding.
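The multidimensional scaling step on the local distance matrix can be sketched with classical (Torgerson) MDS; this generic version is an assumption of ours, not the original MDS-MAP code.

```python
import numpy as np

def classical_mds(dist, dim=2):
    """Classical MDS sketch: double-center the squared distance matrix
    and embed using the eigenvectors of the `dim` largest eigenvalues."""
    n = len(dist)
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J             # Gram matrix of centered points
    w, V = np.linalg.eigh(B)                   # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]            # pick the `dim` largest
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

For exact Euclidean inter-point distances in the plane, this recovers the configuration up to translation, rotation, and reflection; with noisy or shortest-path distances, it yields only the initial guess that the subsequent refinement improves.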
Once this embedding is computed, it gets refined using the measured distances. The energy to be minimized in this step is a weighted version of the energy used in the AFL algorithm,

    E = Σ_{i<j} w_ij ( d_ij − d̂_ij )² ,

where d̂_ij represents the estimated distance resulting from the positions assigned to nodes i and j, and d_ij is the measured distance or, if it is not available or j is more than one hop away from i, the shortest-path distance, i.e., d_ij = h_ij.
The weights w_ij can be used to lower the importance of nodes that are not direct neighbors, for example:

    w_ij = 1      if j is a direct neighbor of i,
           1/4    if j is two hops away from i,
           0      otherwise.
Since the MDS and the renement are computed locally and no further node-
to-node communication is needed, any standard optimization method can be
used, such as gradient descent, simulated annealing, etc.
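For illustration, plain gradient descent on this weighted energy might look as follows; the step size, iteration count, and dense-matrix layout are assumptions of this sketch, and any of the optimizers mentioned above could be substituted.

```python
import numpy as np

def refine(p, d, w, steps=2000, eta=0.02):
    """Gradient descent on E = sum_{i<j} w_ij (d_ij - dhat_ij)^2,
    where dhat_ij = ||p_i - p_j||.
    p: (n, 2) initial positions; d, w: symmetric (n, n) matrices."""
    p = p.copy()
    for _ in range(steps):
        diff = p[:, None, :] - p[None, :, :]   # diff[i, j] = p_i - p_j
        dhat = np.linalg.norm(diff, axis=2)
        np.fill_diagonal(dhat, 1.0)            # avoid division by zero
        coef = 2.0 * w * (dhat - d) / dhat     # per-pair gradient factor
        np.fill_diagonal(coef, 0.0)
        p -= eta * (coef[:, :, None] * diff).sum(axis=1)
    return p
```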
Merging the local maps into a global map. We first describe how two partial maps are merged into one larger map. Then, we briefly discuss different strategies for constructing the entire global map from the local patches.

Consider two partial maps p, q, which can be read as position vectors over R² of varying length. Let I denote the set of nodes present in both p and q. Then our goal is to find a linear transformation T : R² → R² that maps the nodes in I from their positions assigned by q as closely as possible to the positions assigned by p, by minimizing the sum of squared errors

    Σ_{i ∈ I} || p_i − T(q_i) ||² .
This linear transformation T translates, reflects, rotates, and scales the one map to fit best (in the sense given above) over the other, as can be seen in
Fig. 15.15. Merging the two local maps (a) and (b). Nodes that are present in both
maps are depicted by grey color; the source nodes are drawn in black. Removing the
white nodes results in maps (c) and (d), which are merged and augmented by the
white nodes, yielding map (e).
Fig. 15.15. The authors of the original article report that using only non-scaling transformations works slightly better than also allowing scaling. The position r_i in the resulting new map r is then given as

    r_i = (p_i + T(q_i)) / 2    if i ∈ I,
          p_i                   if i was present in p but not in q,
          T(q_i)                if i was present in q but not in p.
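The minimizing transformation T can be computed in closed form with orthogonal Procrustes analysis via an SVD. The following sketch is our own illustration of the non-scaling variant (rotation/reflection plus translation), not the original code.

```python
import numpy as np

def fit_transform(P, Q):
    """Find an orthogonal matrix R (rotation/reflection) and translation t
    minimizing sum_i ||P_i - (R Q_i + t)||^2 over the shared nodes.
    P, Q: (m, 2) arrays of the shared nodes' positions in the two maps."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((Q - cq).T @ (P - cp))
    R = Vt.T @ U.T                  # optimal orthogonal matrix
    t = cp - R @ cq
    return lambda q: R @ q + t      # the transformation T
```

Applying the returned T to all nodes of q, and averaging with p on the shared set as in the rule above, produces the merged map r.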
The straightforward way of composing the local patches into a global map is a greedy approach: A core map is successively extended by sewing on new patches until the local patches of all nodes have been integrated. A new patch can be selected randomly, or by choosing the candidate patch with a maximum number of nodes in common with the core map.
However, since merging operations in an ad hoc network can be carried out in parallel, another more balanced and efficient way of constructing the global map is distributed map merging. Here, merge operations form a binary merge tree in which maps of the same size are merged. This strategy appears less error-prone, usually works faster, and requires less communication between nodes. Once the global map is composed by integrating the local patches, it can be further refined by using one of the techniques presented in this section.
15.3.6 Linear Programming
For the scenario that each sensor with two or more neighbors knows the local angles between them (i.e., each sensor node is able to measure the difference angle of the signals from two of its neighbors), Bruck et al. [52] presented a heuristic based on linear programming. Section 15.2.4 showed that obtaining an exact solution in general is NP-hard.
Fig. 15.16. Two realizations of a UDG with the same local angles
Observe that local angles can easily be transformed into absolute angles α_{e_i}, i.e., angles between adjacent nodes and some x-axis that is defined, e.g., by one designated edge e_1, provided the given graph is connected.
Lemma 15.3.2. If local angles between adjacent nodes of a unit disk graph are known, all pairs of crossing edges can be determined.

Proof. We check whether two edges e_1 = {u_1, v_1} and e_2 = {u_2, v_2} cross. By simple geometry, two edges can only cross when there is a node, say u_1, that is connected to all three other nodes. Now, if e_1 does not cut the triangle formed by e_2 and u_1 (which can be checked by the angles at u_1), then e_1 and e_2 do not cross. Otherwise, it can be checked by the angles at u_2 or v_2 whether v_1 is inside the triangle (e_1 and e_2 do not cross) or not (e_1 and e_2 cross).
The variables of the following linear program are the edge lengths ℓ(e_i). If they are also known, all positions can be computed in a straightforward way (modulo translation and rotation).

Edge length constraint: for all edges,

    0 ≤ ℓ(e_i) ≤ 1 .
Cycle constraint: for any cycle e_1, e_2, . . . , e_p,

    Σ_{i=1..p} ℓ(e_i) cos α_{e_i} = 0   and   Σ_{i=1..p} ℓ(e_i) sin α_{e_i} = 0 .
Non-adjacent node pair constraint: for any path e_1, e_2 whose end nodes are not adjacent,

    ℓ(e_1) + ℓ(e_2) > 1 .
Crossing edge constraint: for any two crossing edges e_1 = {u_1, v_1} and e_2 = {u_2, v_2}, where u_1 is a node connected to all three other nodes,

    ℓ(e_1) ≥ ℓ(e_2) · sin(α) / sin(α + β) ,

where α denotes the angle between u_1, u_2, v_2 and β denotes the angle between u_2, u_1, v_1.
These four types of constraints are used as basic constraints. Their number can be reduced by the following observation. Assume three edges e_1, e_2, e_3 form a triangle. Then their lengths are not independent of each other. Keeping just ℓ(e_1) and fixing ℓ(e_2) = c_2 ℓ(e_1) and ℓ(e_3) = c_3 ℓ(e_1) for some constants c_2, c_3 reduces the number of variables. This can further be applied to larger rigid subgraphs, e.g., a sequence of triangles where succeeding triangles share an edge. Another advantage of this method is that it delivers tighter bounds: for any two non-adjacent nodes of a rigid subgraph, their distance can be expressed as c·ℓ(e_1), where ℓ(e_1) is the independent edge length of this subgraph. Since this distance has to be greater than 1, we can give a tighter edge length constraint 1/c < ℓ(e_1) ≤ 1. For dense graphs these methods drastically reduce the number of variables (and thus constraints). Note that the number of cycle constraints is bounded by the size of a cycle basis, since it is sufficient to ensure these constraints on pairwise independent cycles instead of all (possibly exponentially many) cycles.
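To make the linear program concrete, here is a hypothetical toy instance, solved with SciPy's linprog: a single triangular cycle whose absolute edge angles are 0, 120, and 240 degrees (an equilateral triangle). The maximization objective is our own choice for illustration, since the basic constraints alone only define a feasible region.

```python
import numpy as np
from scipy.optimize import linprog

# The two cycle constraints force all three lengths to be equal; maximizing
# the total length then drives each l(e_i) to its upper bound of 1.
ang = np.deg2rad([0.0, 120.0, 240.0])
A_eq = np.vstack([np.cos(ang), np.sin(ang)])   # cycle constraints: cos and sin rows
res = linprog(c=-np.ones(3),                   # maximize l(e_1) + l(e_2) + l(e_3)
              A_eq=A_eq, b_eq=np.zeros(2),
              bounds=[(0.0, 1.0)] * 3)
lengths = res.x                                # expected: all lengths equal to 1
```

In a realistic instance, one constraint row is added per basic constraint (strict inequalities replaced by "≥ 1 + ε" for a small ε), and the rigid-subgraph substitutions above shrink the variable set before the solver is called.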
15.4 Chapter Notes
This chapter gave an overview of the complexity of computing positions of nodes in a sensor network under various scenarios (only adjacencies given [50], d-quasi unit disk graphs [235], given angles [52], given distances [20], . . . ) and presented several heuristics [290, 328, 166, 345, 52]. Although the problem is NP-hard in each scenario, the heuristics deliver quite good results. In general, they work well when the network is sufficiently dense, since density means that more information is available for computing positions. Additional knowledge such as angles or distances drastically improves the results of the heuristics. Nevertheless, sparse networks remain a challenge.