Abstract—In this paper, we propose and study distributed blind adaptive algorithms for wireless sensor network applications. Specifically, we derive distributed forms of the blind least mean square (LMS) and recursive least squares (RLS) algorithms based on the constant modulus (CM) criterion. We assume that the inter-sensor communication is single-hop over a Hamiltonian cycle in order to save power and communication resources. The distributed blind adaptive algorithms run in the network with the collaboration of nodes in time and space to estimate the parameters of an unknown system or a physical phenomenon. Simulation results demonstrate the effectiveness of the proposed algorithms and show their superior performance over the corresponding non-cooperative adaptive algorithms.

Keywords: Distributed adaptive algorithms, Wireless sensor networks, Incremental network topology, Constant modulus criterion

I. INTRODUCTION

Decentralized signal processing offers significant advantages over its centralized counterpart [1]. In a centralized approach, in order to reach a consensus on the underlying signal parameters of interest, each sensor must communicate with the fusion center. This causes network congestion and results in a waste of communication resources, such as power and bandwidth. More importantly, any malfunction in the fusion center may cause a network breakdown. By developing robust decentralized signal processing algorithms, we can distribute the computation among the local nodes, reduce the communication overhead in the exchange of information, and remove the dependence of the network on the fusion center. Within this framework of cooperative, in-network distributed processing, there has been much recent interest in the study of new distributed adaptive algorithms for the solution of parameter estimation problems in which the underlying signal statistics are unknown or time-varying. Clearly, adaptivity can help the network track variations in the desired signal parameters over time as new measurements become available. More importantly, as a result of distributed adaptive processing, a sensor network becomes robust against changes in the network environment, network topology and node failure.

Recently, there have been several advances in distributed adaptive signal processing for sensor network applications. In [2], [3] and [4], distributed adaptive LMS and RLS algorithms are proposed for parameter estimation in networks with incremental or diffusion topology. These techniques are developed under the assumption of ideal (i.e., distortionless) inter-sensor channels for the exchange of information in the distributed cooperation. In [5] and [6], the authors have proposed distributed LMS and RLS algorithms, respectively, for non-ideal inter-sensor wireless channels by incorporating additive noise.

These algorithms, which were initially developed for parameter estimation, can be applied more generally to obtain distributed solutions to various problems of adaptive filtering. When used in this way, these algorithms are classified as non-blind, or training-based, since they require a reference signal to drive the adaptation process. In practice, the use of a reference signal might entail significant costs (especially reduced bandwidth efficiency), and in many cases it is physically infeasible. Therefore, developing blind distributed adaptive algorithms is indispensable and a logical next step in this line of research. Generally, blind adaptation is possible in scenarios where there exists side information about the transmitted signal, also called signal restoration properties.

In this work, we develop new adaptive algorithms for distributed blind equalization that exploit the constant envelope property of the received signals. Specifically, we focus on a basic signal model in which each sensor has access to a filtered copy of a constant envelope signal contaminated by additive noise. We assume that the unknown filtering applied to the desired signal is identical for each sensor, up to an independent phase shift. We derive distributed forms of the blind LMS and RLS algorithms which allow the sensors to cooperate over wireless links to identify the common adaptive equalizer weights needed for envelope restoration. To save power and bandwidth, the new distributed algorithms use an incremental approach for inter-sensor communications, i.e., a single-hop Hamiltonian cycle. The effectiveness of the proposed algorithms is demonstrated by simulations, which show a significant performance gain in signal restoration compared to the non-cooperative algorithms.

II. SYSTEM MODEL AND PROBLEM FORMULATION

The system model under consideration is shown in block diagram form in Fig. 1. We consider a sub-network of N neighboring sensors (nodes) geographically distributed over
an area where a physical phenomenon of interest is being monitored. Each sensor measures the distorted signal coming from the output of an unknown system, modeled as a linear, (possibly) time-varying filter with the constant envelope source signal s(i) as input. We assume that the unknown filtering applied to the desired signal is identical for each sensor, but that the measurements are made in the presence of an independent phase shift and additive measurement noise at each sensor.

[Fig. 1: System model for distributed blind adaptive equalization. The unknown system or phenomenon, driven by s(i), feeds Sensor-1 through Sensor-N; each branch applies a phase shift e^{jθ_k} and adds noise v_k(i) to produce the measurement u_k(i), and a dashed single-hop Hamiltonian cycle links the sensors.]

Specifically, the signal measured by sensor k at discrete time i, denoted u_k(i), bears the following relation to the system parameters and input s(i):

    u_k(i) = e^{jθ_k} Σ_{l=0}^{L} b(i,l) s(i−l) + v_k(i)    (1)

where b(i,l), l = 0, ..., L, denote the impulse response coefficients of the unknown system for lag l at time i, L is the assumed system order, v_k(i) is an additive noise component at the kth sensor, and θ_k represents the phase shift of the signal measured by the kth sensor. These unknown phase shifts, which are assumed to remain constant over the integration time of the adaptation process, are modeled as independent and identically distributed (i.i.d.) random variables uniformly distributed over [0, 2π]. The additive noise terms {v_k(i)} are modeled as i.i.d. white noise sequences, with each sample having a complex circularly symmetric Gaussian distribution, i.e., v_k(i) ∼ CN(0, σ_k²), where σ_k² denotes the measurement noise power at the kth sensor. The above system model formulation is suitable for adaptive system modeling, system identification and channel equalization.

Because of the distortion induced by the unknown system and the additive noise, the measured signal u_k(i) at the kth sensor will generally not exhibit the constant modulus property of the input. The problem of interest here is to devise a blind adaptive equalizer, in the form of a time-varying finite impulse response (FIR) filter with global coefficient vector w(i) = [w(i,0), w(i,1), ..., w(i,M−1)]^T ∈ C^{M×1}, where M denotes the filter length, that can be used at each sensor to restore the constant modulus property in its measurement u_k(i). Assuming slow time variations in the unknown system and in the adaptive process, we can represent them in terms of their corresponding time-varying system functions B_i(z) = Σ_{l=0}^{L} b(i,l) z^{−l} and W_i(z) = Σ_{l=0}^{M−1} w(i,l) z^{−l}, respectively, where z^{−1} denotes the unit delay operator. To perform the desired equalization task adequately, the adaptive solution should ideally satisfy the condition W_i(z) = 1/B_i(z). In practice, because of measurement noise and lag in the adaptive process, this condition can only be approximately satisfied.

In a traditional, i.e., non-cooperative, approach, each sensor would run its own copy of a standard blind adaptive algorithm for constant modulus restoration, such as the LMS-CMA [7] or the RLS-CMA [8]. However, this approach does not exploit the available means of communication between the sensors and is therefore sub-optimal. In this paper, we seek a distributed solution to the above blind adaptive equalization problem in which each sensor maintains and locally updates its own copy of the adaptive equalizer weights (that can be used to filter its measurement signal), but cooperates through the exchange of information over wireless links in seeking a globally optimal solution (i.e., across the set of N sensors).

Let ψ_k(i) ∈ C^{M×1} denote the local adaptive equalizer weight vector of sensor k at time i. To save power and bandwidth, we assume an incremental approach for inter-sensor communications, i.e., a single-hop pre-defined Hamiltonian cycle, as shown by the dashed line in Fig. 1. At each step in this cycle, repeated once per iteration over the adaptation time index i, the kth sensor recursively updates its weight vector, i.e., ψ_{k−1}(i) → ψ_k(i), by making use of the updated weight vector ψ_{k−1}(i) received from its predecessor in the cycle, and then communicates the result of this update to its successor. The choice and definition of the sequence of sensors visited in a cycle is based on link and availability considerations that fall outside the scope of this work. Here, the wireless channels used in inter-sensor communication are assumed perfect (noise-free and distortionless), but generalizations in the style of [5] and [6] can be envisaged. In the following sections, we develop the proposed distributed blind adaptive LMS-CMA and RLS-CMA.

III. DISTRIBUTED LMS-CMA

The new distributed algorithms for blind adaptation will be derived by breaking down the centralized CM-based optimization problem into a set of local optimization problems, in which the only coupling is through the exchange of a node's updated weight vector to its successor in the Hamiltonian cycle. This approach is first applied to derive a distributed LMS-CMA in this section, and then extended to derive a distributed RLS-CMA in the next section.

We begin by considering a centralized LMS formulation for the CM-based adaptation in a sensor network. With reference to Fig. 1, the output of the equalizer at node k at time instant i is given by:

    y_k(i) = u_k(i) ψ_k(i)    (2)

where u_k(i) = [u_k(i), u_k(i−1), ..., u_k(i−M+1)] is the local data vector at node k.
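To make the model concrete, the following minimal NumPy sketch generates measurements according to (1) and evaluates the equalizer output (2). The scenario (N = 3 sensors, a fixed time-invariant channel b = [1, 0.4, 0.2], filter length M = 4, QPSK-like source) is our own illustrative choice, not a setup taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, L, M, T = 3, 2, 4, 200          # sensors, system order, filter length, samples
alpha = 1.0                        # target modulus (alpha_k = 1 for all k)

# Constant-envelope (QPSK-like) source s(i): |s(i)| = 1 for all i
s = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, size=T))

b = np.array([1.0, 0.4, 0.2])        # unknown system b(i,l), time-invariant here
theta = rng.uniform(0, 2 * np.pi, N) # per-sensor phase shifts theta_k
sigma2 = 0.01                        # measurement noise power sigma_k^2

# Measurements u_k(i) = e^{j theta_k} sum_l b(l) s(i-l) + v_k(i)   -- eq (1)
x = np.convolve(s, b)[:T]
v = np.sqrt(sigma2 / 2) * (rng.standard_normal((N, T))
                           + 1j * rng.standard_normal((N, T)))
u = np.exp(1j * theta)[:, None] * x[None, :] + v

def data_vector(u_k, i, M):
    """Local data vector u_k(i) = [u_k(i), ..., u_k(i-M+1)] (zeros before i=0)."""
    idx = i - np.arange(M)
    out = np.zeros(M, dtype=complex)
    valid = idx >= 0
    out[valid] = u_k[idx[valid]]
    return out

# Equalizer output y_k(i) = u_k(i) psi_k(i) for an example weight vector -- eq (2)
psi = np.zeros(M, dtype=complex)
psi[0] = 1.0                         # pass-through initialization
y0 = data_vector(u[0], 50, M) @ psi
print(abs(y0))   # modulus of the raw, unequalized output; adaptation drives it toward alpha
```

Before adaptation the printed modulus fluctuates with the channel-induced distortion; restoring it to alpha is precisely the task of the algorithms derived next.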
By collecting the local data vectors in the central processor, we form a global data matrix U(i) ≜ [u_1(i)^T, u_2(i)^T, ..., u_N(i)^T]^T for further processing. In expanded form, the latter can be written as:

    U(i) ≜ [ u_1(i)   u_1(i−1)   ...   u_1(i−M+1)
             u_2(i)   u_2(i−1)   ...   u_2(i−M+1)
               ⋮          ⋮       ⋱         ⋮
             u_N(i)   u_N(i−1)   ...   u_N(i−M+1) ]    (3)

For the CM criterion, the global cost function at the central processor is formulated as:

    J(w) = E[ ‖α − |y(i)|^p‖^q ]    (4)

where p and q are positive real numbers, y(i) ≜ [y_1(i), y_2(i), ..., y_N(i)]^T and α ≜ [α_1, α_2, ..., α_N]^T. The kth entry of α, α_k, is a positive real number that represents the desired constant modulus value to be restored at the kth node. For the sake of generality, we keep the subscript k in our derivation, although we shall later assume α_k = 1 for k = 1, ..., N when presenting simulation results in Section V. The use of the parameters p and q allows additional flexibility in the problem solution (see e.g. [8]). The traditional mean square error (MSE)-based CM cost function corresponds to the choice p = 1 and q = 2. Here, the use of q = 2 (i.e., MSE-based CM) is favored since, as we explain below, it enables a partial decomposition of the cost function into a sum of simple terms, which in turn is amenable to distributed adaptive processing.

The global equalizer coefficients, denoted by w_o ∈ C^{M×1}, can be found by minimizing the above cost function; i.e.,

    w_o = arg min_{w ∈ C^M} J(w) = arg min_{w ∈ C^M} E[ ‖α − |U(i)w|^p‖^q ]    (5)

In (4), the absolute value in |U(i)w|^p must be interpreted element-wise, i.e., |U(i)w|^p = [|u_1(i)w|^p, |u_2(i)w|^p, ..., |u_N(i)w|^p]^T. By expanding the squared Euclidean norm in (4) when q = 2, we can write:

    J(w) = E[ ‖α − |U(i)w|^p‖² ] = Σ_{k=1}^{N} J_k(w)

where J_k(w) = E|α_k − |u_k(i)w|^p|² can be interpreted as the local objective function at node k. In a centralized scheme, the steepest descent iterative solution to the above optimization problem can be expressed in terms of the partial derivatives of the local objective functions as:

    w(i) = w(i−1) − μ ∇J(w(i−1)) = w(i−1) − μ Σ_{k=1}^{N} ∇J_k(w(i−1))    (6)

where 0 < μ ≪ 1 is the step size of the steepest descent iteration. After calculating the partial derivative, we obtain:

    w(i) = w(i−1) + μ Σ_{k=1}^{N} E[ u_k^H(i) y_k(i) |y_k(i)|^{p−2} (α_k − |y_k(i)|^p) ]    (7)

Proceeding as in [2], the steepest descent update formula in (7) can be implemented in a distributed manner by cooperation of the local nodes as given below:

    ψ_0(i) ← w(i−1)
    ψ_k(i) = ψ_{k−1}(i) + μ E[ u_k^H(i) y_k(i) |y_k(i)|^{p−2} (α_k − |y_k(i)|^p) ],  k = 1, 2, ..., N    (8)
    w(i) ← ψ_N(i)

In the distributed steepest descent algorithm (8), we need to perform N iterations over the spatial dimension k, i.e., from node k = 1 to node k = N in a predefined cycle. During the ith such cycle, node k uses the updated estimate received from its predecessor in the cycle, i.e., ψ_{k−1}(i), to update its current estimate ψ_k(i), which is then transmitted to its successor. That is, the global update (7) is realized via a sequence of N single-hop wireless information exchanges between adjacent nodes in the cycle. The final distributed LMS-CMA can now be obtained by approximating the gradient in (8) with its stochastic version using instantaneous data at time instant i. For compactness of presentation, we introduce the error function at node k, defined as e_k(i) = y_k(i) |y_k(i)|^{p−2} (α_k − |y_k(i)|^p). The results are summarized in Algorithm 1, which is somewhat similar in structure to the non-blind distributed LMS algorithm developed in [2]. The effectiveness of the algorithm will be demonstrated through numerical simulations in Section V. In the next section, we derive an incremental distributed RLS-CMA using a similar approach.

Algorithm 1 Distributed LMS-CMA
    ψ_0(i) ← w(i−1)
    for k = 1 : N do
        y_k(i) = u_k(i) ψ_{k−1}(i)
        e_k(i) = y_k(i) |y_k(i)|^{p−2} (α_k − |y_k(i)|^p)
        ψ_k(i) = ψ_{k−1}(i) + μ u_k^H(i) e_k(i)
    end for
    w(i) ← ψ_N(i)

IV. DISTRIBUTED RLS-CMA

The general form of the weighted least squares (WLS) cost function for CM signal restoration at a central processor can be expressed as:

    J(w, i) = Σ_{l=0}^{i} λ^{i−l} ‖α − |U(l)w|^p‖²    (9)

where 0 < λ ≤ 1 is the forgetting factor, U(l) is the data matrix given in (3), and w is the global equalizer weight vector. This cost function provides a weighted sum of the modulus errors at the different nodes, from time l = 0 to the current time l = i, with past errors weighted by λ^{i−l}.
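For concreteness, the incremental cycle of Algorithm 1 can be sketched in NumPy as follows. The simulation scenario (N = 4 sensors, channel taps, noise level, step size μ) is an illustrative assumption of ours, not the paper's Section V setup.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, T, mu, p, alpha = 4, 5, 5000, 2e-3, 1, 1.0

# System model of Section II: common channel, per-sensor phase shift and noise
s = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, size=T))   # constant modulus
x = np.convolve(s, [1.0, 0.35, 0.15])[:T]
theta = rng.uniform(0, 2 * np.pi, N)
u = np.exp(1j * theta)[:, None] * x + \
    0.05 * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))

def regressor(row, i):
    """Local data vector u_k(i) = [u_k(i), ..., u_k(i-M+1)], zeros before i=0."""
    out = np.zeros(M, dtype=complex)
    m = min(i + 1, M)
    out[:m] = row[i::-1][:m]
    return out

w = np.zeros(M, dtype=complex)
w[0] = 1.0                                  # pass-through start for w(i-1)
disp = []                                   # dispersion (alpha - |y|)^2 at node 1
for i in range(T):
    psi = w.copy()                          # psi_0(i) <- w(i-1)
    for k in range(N):                      # incremental cycle over the nodes
        uk = regressor(u[k], i)
        yk = uk @ psi                       # y_k(i) = u_k(i) psi_{k-1}(i)
        if abs(yk) > 1e-12:                 # guard the |y|^{p-2} factor for p = 1
            ek = yk * abs(yk) ** (p - 2) * (alpha - abs(yk) ** p)
        else:
            ek = 0.0
        psi = psi + mu * np.conj(uk) * ek   # psi_k(i) = psi_{k-1}(i) + mu u_k^H e_k
        if k == 0:
            disp.append((alpha - abs(yk)) ** 2)
    w = psi                                 # w(i) <- psi_N(i)

print(np.mean(disp[:500]), np.mean(disp[-500:]))  # dispersion shrinks as i grows
```

Because the CM cost is insensitive to the per-sensor phase shifts θ_k, a single common weight vector can serve all nodes in this sketch, mirroring the discussion of Section II.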
In this work, based on the value of the parameter p, we derive two different versions of the distributed RLS-CMA. In the first case, we set p = 1 and develop a first version of the distributed RLS-CMA without making any assumption about the signal environment; in the second version, p can take any positive real value, but we must then assume that the signal environment is slowly varying or stationary.

A. Distributed RLS-CMA for p = 1

In this case, the global objective function in (9) can be written in terms of the local data as:

    J(w, i) = Σ_{k=1}^{N} Σ_{l=0}^{i} λ^{i−l} |α_k − |u_k(l)w||²    (10)

The stationary point of this cost function can be found by computing its partial derivative and equating it to zero, which yields:

    Σ_{k,l} λ^{i−l} u_k^H(l) u_k(l) w = Σ_{k,l} λ^{i−l} φ_k(l) u_k^H(l) α_k    (11)

where the summation Σ_{k,l} is over the range 1 ≤ k ≤ N and 0 ≤ l ≤ i, and we have introduced

    φ_k(i) = y_k(i) / |y_k(i)|    (12)

Equivalently, (11) can be expressed in matrix form as

    R(i) w = r(i)    (13)

where we define

    R(i) = Σ_{l=0}^{i} λ^{i−l} U^H(l) U(l),    r(i) = Σ_{l=0}^{i} λ^{i−l} Ũ^H(l) α    (14)

and Ũ^H(i) = [φ_1(i) u_1^H(i), φ_2(i) u_2^H(i), ..., φ_N(i) u_N^H(i)]. Therefore, the WLS solution at the current time i can be computed as w(i) = R^{−1}(i) r(i). Alternatively, the optimal weight vector in (13) can be recursively updated by means of the following relation:

    w(i) = w(i−1) + R^{−1}(i) [ Ũ^H(i) α − U^H(i) U(i) w(i−1) ]    (15)

As a first step towards the derivation of a distributed RLS solution for the CM problem, we focus on the efficient updating of the required inverse correlation matrix R^{−1}(i) in (15). Indeed, to avoid costly matrix inversion, we can compute R^{−1}(i) recursively, in a distributed and incremental manner, as explained below. Using the definition of the sample correlation matrix in [9], we know that R(i) = λ R(i−1) + U^H(i) U(i). Equivalently, by expanding the product U^H(i) U(i) in terms of the local sensor observations, we obtain:

    R(i) = λ R(i−1) + Σ_{k=1}^{N} u_k^H(i) u_k(i)    (16)

Equation (16) can be iteratively updated in time and space using only the local data at sensor k, by proceeding as follows:

    R_0(i) ← λ R(i−1)
    R_k(i) = R_{k−1}(i) + u_k^H(i) u_k(i),  k = 1, 2, ..., N    (17)
    R(i) ← R_N(i)

We note that there is no physical sensor corresponding to the index k = 0; the latter is introduced only for convenience in joining both ends of the incremental cycle of spatial updates over index k as time is incremented from i−1 to i, i.e., R_0(i) ← λ R_N(i−1).

Note that each update in (17) only involves a rank-one additive term. Therefore, the inverse of the sample correlation matrix given in (17) can be computed locally according to Woodbury's identity. As a result, the inverse of the global correlation matrix can be calculated in a distributed fashion by using the set of local data as:

    R_0^{−1}(i) ← λ^{−1} R^{−1}(i−1)
    R_k^{−1}(i) = R_{k−1}^{−1}(i) − [ R_{k−1}^{−1}(i) u_k^H(i) u_k(i) R_{k−1}^{−1}(i) ] / [ 1 + u_k(i) R_{k−1}^{−1}(i) u_k^H(i) ],  k = 1, 2, ..., N    (18)
    R^{−1}(i) ← R_N^{−1}(i)

We note that, to update its local estimate of the inverse correlation matrix with this approach, sensor k only makes use of its local observation vector u_k(i) along with the inverse correlation matrix estimate of its predecessor in the incremental cycle, i.e., R_{k−1}^{−1}(i).

The recursive formula given in (15) can also be updated in a distributed manner based on the local data at the kth sensor, for k = 1, ..., N. Indeed, by expanding the term {Ũ^H(i)α − U^H(i) U(i) w(i−1)}, we obtain the following recursion formula:

    w(i) = w(i−1) + R^{−1}(i) Σ_{k=1}^{N} u_k^H(i) e_k(i)    (19)

where we define the modulus error at node k as

    e_k(i) = φ_k(i) α_k − u_k(i) w(i−1)    (20)

Equation (19) can be implemented in a distributed manner as:

    ψ_0(i) ← w(i−1)
    e_k(i) = φ_k(i) α_k − u_k(i) w(i−1)
    ψ_k(i) = ψ_{k−1}(i) + R^{−1}(i) u_k^H(i) e_k(i),  k = 1, 2, ..., N    (21)
    w(i) ← ψ_N(i)

Finally, we can arrive at a fully distributed, incremental algorithm by substituting w(i−1) and R^{−1}(i) with ψ_{k−1}(i) and R_k^{−1}(i), respectively, with the latter quantity being updated in a distributed manner as in (18). Similar to [10], the substitution of w(i−1) by ψ_{k−1}(i) leads to better adaptive performance, whereas substituting R^{−1}(i) with the local update R_k^{−1}(i) causes some performance degradation.
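Since each step of (17) adds only a rank-one term, the incremental inverse update (18) is algebraically exact. The short NumPy sketch below, using arbitrary illustrative data of our own choosing, checks the cycle of rank-one Woodbury corrections against direct inversion.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, lam = 5, 4, 0.98

# Previous global correlation matrix R(i-1): any Hermitian positive definite matrix
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R_prev = A @ A.conj().T + M * np.eye(M)

# Current local data vectors u_k(i), one 1 x M row per sensor
U = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))

# Direct update:  R(i) = lam R(i-1) + sum_k u_k^H u_k            -- eq (16)
R_direct = lam * R_prev + U.conj().T @ U
R_inv_direct = np.linalg.inv(R_direct)

# Incremental update of the inverse via Woodbury's identity      -- eq (18)
P = np.linalg.inv(R_prev) / lam            # R_0^{-1}(i) <- lam^{-1} R^{-1}(i-1)
for k in range(N):                         # one rank-one correction per sensor
    uk = U[k:k + 1]                        # row vector u_k(i)
    Pu = P @ uk.conj().T                   # R_{k-1}^{-1}(i) u_k^H(i), M x 1
    P = P - (Pu @ (uk @ P)) / (1 + (uk @ Pu).item())
R_inv_incremental = P                      # R^{-1}(i) <- R_N^{-1}(i)

print(np.max(np.abs(R_inv_incremental - R_inv_direct)))  # machine-precision agreement
```

This confirms that circulating R_k^{−1}(i) around the cycle reproduces the centralized inverse exactly; the performance degradation mentioned above stems only from the further substitution of R^{−1}(i) by the partial update inside the weight recursion.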
By applying these modifications, we obtain the first version of the distributed RLS-CMA, given in Algorithm 2. In this algorithm, during the ith cycle, sensor k−1 forwards its updated local weight vector estimate ψ_{k−1}(i) and inverse correlation matrix estimate R_{k−1}^{−1}(i) to sensor k, where the corresponding estimates are updated using only the local observation u_k(i).

Algorithm 2 Distributed adaptive RLS-CMA when p = 1
    ψ_0(i) ← w(i−1);  R_0^{−1}(i) ← λ^{−1} R^{−1}(i−1)
    for k = 1 : N do
        R_k^{−1}(i) = R_{k−1}^{−1}(i) − [ R_{k−1}^{−1}(i) u_k^H(i) u_k(i) R_{k−1}^{−1}(i) ] / [ 1 + u_k(i) R_{k−1}^{−1}(i) u_k^H(i) ]
        e_k(i) = φ_k(i) α_k − u_k(i) ψ_{k−1}(i)
        ψ_k(i) = ψ_{k−1}(i) + R_k^{−1}(i) u_k^H(i) e_k(i)
    end for
    w(i) ← ψ_N(i);  R^{−1}(i) ← R_N^{−1}(i)

B. Distributed RLS-CMA for a general value of p

In this case, the global objective function (9) takes the following form:

    J(w, i) = Σ_{k=1}^{N} J_k(w, i)    (22)

where Z(i) = [z_1^T(i), z_2^T(i), ..., z_N^T(i)]^T is the modified data matrix. Computing the partial derivative of (27) and equating it to zero yields:

    Σ_{l=0}^{i} λ^{i−l} Z^H(l) Z(l) w = Σ_{l=0}^{i} λ^{i−l} Z^H(l) α    (28)

The solution of (28) can be given as w(i) = R_z^{−1}(i) r_z(i), where R_z(i) = Σ_{l=0}^{i} λ^{i−l} Z^H(l) Z(l) and r_z(i) = Σ_{l=0}^{i} λ^{i−l} Z^H(l) α. In the same way as shown in Section IV-A, the optimal weights w(i) can be updated by the recursive formula given below:

    w(i) = w(i−1) + R_z^{−1}(i) Z^H(i) [ α − Z(i) w(i−1) ]    (29)

By following the same procedure as in Section IV-A, this calculation can be performed in a distributed manner as follows:

    ψ_0(i) ← w(i−1)
    z_k(i) = |u_k(i) w(i−1)|^{p−2} w^H(i−1) u_k^H(i) u_k(i)
    e_k(i) = α_k − z_k(i) w(i−1)    (30)
    ψ_k(i) = ψ_{k−1}(i) + R_z^{−1}(i) z_k^H(i) e_k(i),  k = 1, 2, ..., N
    w(i) ← ψ_N(i)

Again, the global autocorrelation matrix R_z^{−1}(i) can be updated based on the local data.
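Putting the p = 1 pieces together, Algorithm 2 can be sketched in NumPy as follows. All parameter values (N, M, λ, the δ^{−1}I initialization of R^{−1}) are assumptions for demonstration, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, T, lam, alpha = 4, 5, 1500, 0.99, 1.0

# System model of Section II: common channel, per-sensor phase shift and noise
s = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, size=T))
x = np.convolve(s, [1.0, 0.35, 0.15])[:T]
theta = rng.uniform(0, 2 * np.pi, N)
u = np.exp(1j * theta)[:, None] * x + \
    0.03 * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))

def regressor(row, i):
    """Local data vector u_k(i), with zeros before i = 0."""
    out = np.zeros(M, dtype=complex)
    m = min(i + 1, M)
    out[:m] = row[i::-1][:m]
    return out

w = np.zeros(M, dtype=complex)
w[0] = 1.0                                  # pass-through start for w(i-1)
Rinv = np.eye(M, dtype=complex) / 0.1       # R^{-1}(i-1) = delta^{-1} I start
disp = []                                   # modulus error at node 1
for i in range(T):
    psi, P = w.copy(), Rinv / lam           # psi_0(i), R_0^{-1}(i)
    for k in range(N):                      # Hamiltonian cycle over the sensors
        uk = regressor(u[k], i)[None, :]    # 1 x M local data vector
        Pu = P @ uk.conj().T
        P = P - (Pu @ (uk @ P)) / (1 + (uk @ Pu).item())   # eq (18)
        yk = (uk @ psi).item()
        phi = yk / abs(yk) if abs(yk) > 1e-12 else 1.0     # eq (12)
        ek = phi * alpha - yk                              # modulus error, eq (20)
        psi = psi + (P @ uk.conj().T).ravel() * ek         # weight update
        if k == 0:
            disp.append((alpha - abs(yk)) ** 2)
    w, Rinv = psi, P                        # w(i) <- psi_N(i), R^{-1}(i) <- R_N^{-1}(i)

print(np.mean(disp[:100]), np.mean(disp[-100:]))  # modulus error shrinks over the run
```

As expected for an RLS-type recursion, the gain vector P u_k^H shrinks as data accumulate, giving fast initial adaptation followed by small tracking steps.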