
Distributed Blind Adaptive Algorithms Based on
Constant Modulus for Wireless Sensor Networks

Reza Abdolee and Benoit Champagne
Electrical and Computer Engineering, McGill University
3480 University Street, Montreal, PQ, Canada H3A 2A7
Email: reza.abdolee@mail.mcgill.ca, benoit.champagne@mcgill.ca

Abstract—In this paper, we propose and study distributed blind adaptive algorithms for wireless sensor network applications. Specifically, we derive distributed forms of the blind least mean square (LMS) and recursive least squares (RLS) algorithms based on the constant modulus (CM) criterion. We assume that the inter-sensor communication is single-hop over a Hamiltonian cycle, in order to save power and communication resources. The distributed blind adaptive algorithms run in the network with the collaboration of nodes in time and space to estimate the parameters of an unknown system or physical phenomenon. Simulation results demonstrate the effectiveness of the proposed algorithms, and show their superior performance over the corresponding non-cooperative adaptive algorithms.

Keywords: Distributed adaptive algorithms, Wireless sensor networks, Incremental network topology, Constant modulus criterion

I. INTRODUCTION

Decentralized signal processing offers significant advantages over its centralized counterpart [1]. In a centralized approach, in order to reach a consensus on the underlying signal parameters of interest, each sensor must communicate with the fusion center. This causes network congestion and results in a waste of communication resources, such as power and bandwidth. More importantly, any malfunction in the fusion center may cause a network breakdown. By developing robust decentralized signal processing algorithms, we can distribute the computation between the local nodes, reduce the communication overhead in the exchange of information, and remove the dependence of the network on the fusion center. Within the above framework of cooperative, in-network distributed processing, there has been much interest lately in the study of new distributed adaptive algorithms for the solution of parameter estimation problems in which the underlying signal statistics are unknown or time-varying. Clearly, adaptivity can help the network track variations in the desired signal parameters over time as new measurements become available. More importantly, as a result of distributed adaptive processing, a sensor network becomes robust against changes in the network environment, network topology and node failure.

Recently, there have been notable advances in distributed adaptive signal processing for sensor network applications. In [2], [3] and [4], distributed adaptive LMS and RLS algorithms are proposed for parameter estimation in networks with incremental or diffusion topology. These techniques are developed under the assumption of ideal (i.e. distortionless) inter-sensor channels for the exchange of information in the distributed cooperation. In [5] and [6], the authors have proposed distributed LMS and RLS algorithms, respectively, for non-ideal inter-sensor wireless channels by incorporating additive noise.

These algorithms, which were initially developed for parameter estimation, can be applied more generally to obtain distributed solutions to various problems of adaptive filtering. When used in this way, they are classified as non-blind, or training-based, since they require a reference signal to drive the adaptation process. In practice, the use of a reference signal might entail significant costs (especially reduced bandwidth efficiency) and in many cases it is physically infeasible. Developing blind distributed adaptive algorithms is therefore a natural next step in this line of research. Generally, blind adaptation is possible in scenarios where there exists side information about the transmitted signal, also referred to as signal restoration properties.

In this work, we develop new adaptive algorithms for distributed blind equalization that exploit the constant envelope property of the received signals. Specifically, we focus on a basic signal model in which each sensor has access to a filtered copy of a constant envelope signal contaminated by additive noise. We assume that the unknown filtering applied to the desired signal is identical for each sensor, up to an independent phase shift. We derive distributed forms of the blind LMS and RLS algorithms which allow the sensors to cooperate over wireless links to identify the common adaptive equalizer weights needed for envelope restoration. To save power and bandwidth, the new distributed algorithms use an incremental approach for inter-sensor communications, i.e. a single-hop Hamiltonian cycle. The effectiveness of the proposed algorithms is demonstrated by simulations, which show a significant performance gain in signal restoration compared to the non-cooperative algorithms.

II. SYSTEM MODEL AND PROBLEM FORMULATION

The system model under consideration is shown in block diagram form in Fig. 1. We consider a sub-network of N neighboring sensors (nodes) geographically distributed over an area where a physical phenomenon of interest is being monitored.
Fig. 1: System model for distributed blind adaptive equalization (sensors 1, ..., N linked by a single-hop Hamiltonian cycle, shown dashed in the original block diagram).

Each sensor measures the distorted signal coming from the output of an unknown system, modeled as a linear, (possibly) time-varying filter with a constant envelope source signal s(i) as input. We assume that the unknown filtering applied to the desired signal is identical for each sensor, but that the measurements are made in the presence of an independent phase shift and additive measurement noise at each sensor. Specifically, the signal measured by sensor k at discrete time i, denoted u_k(i), bears the following relation to the system parameters and input s(i):

    u_k(i) = e^{jθ_k} Σ_{l=0}^{L} β(i,l) s(i−l) + v_k(i)    (1)

where β(i,l), l = 0, ..., L, denote the impulse response coefficients of the unknown system for lag l at time i, L is the assumed system order, v_k(i) is an additive noise component at the kth sensor, and θ_k represents the phase shift of the signal measured by the kth sensor. These unknown phase shifts, which are assumed to remain constant over the integration time of the adaptation process, are modeled as independent and identically distributed (i.i.d.) random variables uniformly distributed over [0, 2π). The additive noise terms {v_k(i)} are modeled as i.i.d. white noise sequences, with each sample having a complex circular symmetric Gaussian distribution, i.e. v_k(i) ~ CN(0, σ_k²), where σ_k² denotes the measurement noise power at the kth sensor. The above system model formulation is suitable for adaptive system modeling, system identification and channel equalization.

Because of the distortion induced by the unknown system and the additive noise, the measured signal u_k(i) at the kth sensor will generally not exhibit the constant modulus property of the input. The problem of interest here is to devise a blind adaptive equalizer, in the form of a time-varying finite impulse response (FIR) filter with global coefficient vector w(i) = [w(i,0), w(i,1), ..., w(i,M−1)]^T ∈ C^{M×1}, where M denotes the filter length, that can be used at each sensor to restore the constant modulus property in its measurement u_k(i). Assuming slow time variations in the unknown system and the adaptive process, we can represent them in terms of their corresponding time-varying system functions B_i(z) = Σ_{l=0}^{L} β(i,l) z^{−l} and W_i(z) = Σ_{l=0}^{M−1} w(i,l) z^{−l}, respectively, where z^{−1} denotes the unit delay operator. To perform the desired equalization task adequately, the adaptive solution should ideally satisfy the condition W_i(z) = 1/B_i(z). In practice, because of measurement noise and lag in the adaptive process, this condition can only be approximately satisfied.

In a traditional, i.e. non-cooperative, approach, each sensor would run its own copy of a standard blind adaptive algorithm for constant modulus restoration, such as the LMS-CMA [7] or the RLS-CMA [8]. However, this approach does not exploit the available means of communication between the sensors and is therefore sub-optimal. In this paper, we seek a distributed solution to the above blind adaptive equalization problem in which each sensor maintains and locally updates its own copy of the adaptive equalizer weights (that can be used to filter its measurement signal), but cooperates through the exchange of information over wireless links in seeking a globally optimal solution (i.e. across the set of N sensors).

Let ψ_k(i) ∈ C^{M×1} denote the local adaptive equalizer weight vector of sensor k at time i. To save power and bandwidth, we assume an incremental approach for inter-sensor communications, i.e. a single-hop, pre-defined Hamiltonian cycle, as shown by the dashed line in Fig. 1. At each step in this cycle, repeated once per iteration over the adaptation time index i, the kth sensor recursively updates its weight vector, i.e. ψ_k(i−1) → ψ_k(i), by making use of the updated weight vector ψ_{k−1}(i) from its predecessor in the cycle, and then communicates the result of this update to its successor. The choice and definition of the sequence of sensors visited in a cycle is based on link and availability considerations that fall outside the scope of this work. Here, the wireless channels used in inter-sensor communication are assumed perfect (noise-free and distortionless), but generalizations in the style of [5] and [6] can be envisaged. In the following sections, we develop the proposed distributed blind adaptive LMS-CMA and RLS-CMA.

III. DISTRIBUTED LMS-CMA

The new distributed algorithms for blind adaptation will be derived by breaking down the centralized CM-based optimization problem into a set of local optimization problems, in which the only coupling is through the exchange of a node's updated weight vector to its successor in the Hamiltonian cycle. This approach is first applied to derive a distributed LMS-CMA in this section, and then extended to derive a distributed RLS-CMA in the next section.

We begin by considering a centralized LMS formulation for the CM-based adaptation in a sensor network. With reference to Fig. 1, the output of the equalizer at node k at time instant i is given by:

    y_k(i) = u_k(i) ψ_k(i)    (2)

where u_k(i) = [u_k(i), u_k(i−1), ..., u_k(i−M+1)] is the local data vector at node k.
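To make the model concrete, the following self-contained numpy sketch generates measurements according to (1) and evaluates the equalizer output (2) at one sensor. All numerical values here (N = 3 sensors, system order L = 2, filter length M = 4, a QPSK source) are illustrative assumptions of ours, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, M, T = 3, 2, 4, 200        # sensors, system order L, equalizer length M, samples
sigma2 = 0.01                    # measurement noise power (same for every sensor here)

beta = rng.standard_normal(L + 1) + 1j * rng.standard_normal(L + 1)  # unknown beta(l)
theta = rng.uniform(0.0, 2 * np.pi, size=N)                          # per-sensor phases

# Constant-envelope source: QPSK symbols, so |s(i)| = 1 for all i.
s = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, size=T)))

# Measurement model (1): u_k(i) = e^{j theta_k} sum_l beta(l) s(i-l) + v_k(i)
filtered = np.convolve(s, beta)[:T]
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))
u = np.exp(1j * theta)[:, None] * filtered[None, :] + noise

def data_vector(u_k, i, M):
    """Local data row vector [u_k(i), u_k(i-1), ..., u_k(i-M+1)], zero-padded for i-l < 0."""
    idx = i - np.arange(M)
    return np.where(idx >= 0, u_k[np.clip(idx, 0, None)], 0.0)

# Equalizer output (2) at sensor 1 and time i = 50, with a trivial initial weight vector.
psi = np.zeros(M, dtype=complex)
psi[0] = 1.0
y = data_vector(u[0], 50, M) @ psi
```

With the trivial weight vector psi = [1, 0, ..., 0], the equalizer simply passes the current sample through, so y equals the raw measurement u_1(50); the algorithms of the next sections adapt psi so that |y| approaches the source envelope.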
By collecting the local data vectors at a central processor, we can form a global data matrix U(i) ≜ [u_1(i)^T, u_2(i)^T, ..., u_N(i)^T]^T for further processing. In expanded form, the latter can be written as:

    U(i) ≜ [ u_1(i)   u_1(i−1)   ...   u_1(i−M+1)
             u_2(i)   u_2(i−1)   ...   u_2(i−M+1)
               ⋮         ⋮        ⋱        ⋮
             u_N(i)   u_N(i−1)   ...   u_N(i−M+1) ]    (3)

For the CM criterion, the global cost function at the central processor is formulated as:

    J(w) = E[ ‖γ − |y(i)|^p‖^q ]    (4)

where p and q are positive real numbers, y(i) ≜ [y_1(i), y_2(i), ..., y_N(i)]^T and γ ≜ [γ_1, γ_2, ..., γ_N]^T. The kth entry of γ, γ_k, is a positive real number that represents the desired constant modulus value to be restored at the kth node. For the sake of generality, we keep the subscript k in our derivations, although we shall later assume γ_k = 1 for k = 1, ..., N when presenting simulation results in Section V. The use of the parameters p and q allows additional flexibility in the problem solution (see e.g. [8]). The traditional mean square error (MSE)-based CM cost function corresponds to the choice p = 1 and q = 2. Here, the use of q = 2 (i.e. MSE-based CM) is favored since, as we explain below, it enables a partial decomposition of the cost function into a sum of simple terms, which in turn is amenable to distributed adaptive processing.

The global equalizer coefficients, denoted by w^o ∈ C^{M×1}, can be found by minimizing the above cost function; i.e.,

    w^o = arg min_{w ∈ C^M} J(w) = arg min_{w ∈ C^M} E[ ‖γ − |U(i)w|^p‖^q ]    (5)

In (4), the absolute value in |U(i)w|^p must be interpreted element-wise, i.e. |U(i)w|^p = [|u_1(i)w|^p, |u_2(i)w|^p, ..., |u_N(i)w|^p]^T. By expanding the squared Euclidean norm in (4) when q = 2, we can write:

    J(w) = E[ ‖γ − |U(i)w|^p‖² ] = Σ_{k=1}^{N} J_k(w)

where J_k(w) = E[ |γ_k − |u_k(i)w|^p|² ] can be interpreted as the local objective function at node k. In a centralized scheme, the steepest descent iterative solution to the above optimization problem can be expressed in terms of the partial derivatives of the local objective functions as:

    w(i) = w(i−1) − μ ∇J(w(i−1)) = w(i−1) − μ Σ_{k=1}^{N} ∇J_k(w(i−1))    (6)

where 0 < μ ≪ 1 is the step size of the steepest descent iteration. After calculating the partial derivative, we obtain:

    w(i) = w(i−1) + μ Σ_{k=1}^{N} E[ u_k^H(i) y_k(i) |y_k(i)|^{p−2} (γ_k − |y_k(i)|^p) ]    (7)

Proceeding as in [2], the steepest descent update formula in (7) can be implemented in a distributed manner by cooperation of the local nodes as follows:

    ψ_0(i) ← w(i−1)
    ψ_k(i) = ψ_{k−1}(i) + μ E[ u_k^H(i) y_k(i) |y_k(i)|^{p−2} (γ_k − |y_k(i)|^p) ],  k = 1, 2, ..., N    (8)
    w(i) ← ψ_N(i)

In the distributed steepest descent algorithm (8), we need to perform N iterations over the spatial dimension k, i.e. from node k = 1 to node k = N, in a predefined cycle. During the ith such cycle, node k uses the updated estimate received from its predecessor in the cycle, i.e. ψ_{k−1}(i), to update its current estimate ψ_k(i), which is then transmitted to its successor. That is, update (8) is realized via a sequence of N single-hop wireless information exchanges between adjacent nodes in the cycle. The final distributed LMS-CMA can now be obtained by approximating the gradient in (8) with its stochastic version, using the instantaneous data at time instant i. For compactness of presentation, we introduce the error function at node k, defined as e_k(i) = y_k(i) |y_k(i)|^{p−2} (γ_k − |y_k(i)|^p). The results are summarized in Algorithm 1, which is somewhat similar in structure to the non-blind distributed LMS algorithm developed in [2]. The effectiveness of the algorithm will be demonstrated through numerical simulations in Section V. In the next section, we derive an incremental distributed RLS-CMA using a similar approach.

Algorithm 1 Distributed LMS-CMA
    ψ_0(i) ← w(i−1)
    for k = 1 : N do
        y_k(i) = u_k(i) ψ_{k−1}(i)
        e_k(i) = y_k(i) |y_k(i)|^{p−2} (γ_k − |y_k(i)|^p)
        ψ_k(i) = ψ_{k−1}(i) + μ u_k^H(i) e_k(i)
    end for
    w(i) ← ψ_N(i)

IV. DISTRIBUTED RLS-CMA

The general form of the weighted least squares (WLS) cost function for CM signal restoration at a central processor can be expressed as:

    J(w, i) = Σ_{l=0}^{i} λ^{i−l} ‖γ − |U(l)w|^p‖²    (9)

where 0 < λ ≤ 1 is the forgetting factor, U(l) is the data matrix given in (3), and w is the global equalizer weight vector. This cost function provides a weighted sum of the modulus errors at the different nodes, from time l = 0 to the current time l = i, with past errors weighted by λ^{i−l}.
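For illustration, one adaptation cycle of Algorithm 1 can be sketched in numpy as below. This is our own single-machine simulation of the incremental pass, not the paper's reference implementation; the function name and default parameter values are assumptions, and the local error uses the predecessor's estimate ψ_{k−1}(i), following the update order in the algorithm box.

```python
import numpy as np

def distributed_lms_cma(U_rows, w_prev, mu=1e-3, gamma=None, p=2):
    """One adaptation cycle (time i) of Algorithm 1, distributed LMS-CMA.

    U_rows : (N, M) array, row k holding the local data vector u_k(i).
    w_prev : (M,) global weight vector w(i-1).
    Returns w(i) = psi_N(i) after one incremental pass around the cycle.
    """
    N, M = U_rows.shape
    gamma = np.ones(N) if gamma is None else np.asarray(gamma)
    psi = w_prev.astype(complex).copy()          # psi_0(i) <- w(i-1)
    for k in range(N):                           # visit sensors along the Hamiltonian cycle
        u_k = U_rows[k]
        y_k = u_k @ psi                          # local equalizer output
        # e_k(i) = y_k |y_k|^{p-2} (gamma_k - |y_k|^p)
        e_k = y_k * np.abs(y_k) ** (p - 2) * (gamma[k] - np.abs(y_k) ** p)
        psi = psi + mu * np.conj(u_k) * e_k      # psi_k = psi_{k-1} + mu u_k^H e_k
    return psi                                   # w(i) <- psi_N(i)
```

In a real deployment the loop body would run on sensor k, with psi being the quantity transmitted to the successor; here the loop simply plays the role of the wireless hand-off.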
In this work, depending on the value of the parameter p, we derive two different versions of the distributed RLS-CMA. In the first case, we set p = 1 and develop a first version of the distributed RLS-CMA without making any assumption about the signal environment; in the second version, p can take any positive real value, but in that case we need to assume that the signal environment is stationary or slowly varying.

A. Distributed RLS-CMA for p = 1

In this case, the global objective function in (9) can be written in terms of the local data as:

    J(w, i) = Σ_{k=1}^{N} Σ_{l=0}^{i} λ^{i−l} |γ_k − |u_k(l)w||²    (10)

The stationary point of this cost function can be found by computing its partial derivative and equating it to zero, which yields:

    Σ_{k,l} λ^{i−l} u_k^H(l) u_k(l) w = Σ_{k,l} λ^{i−l} φ_k(l) u_k^H(l) γ_k    (11)

where the summation Σ_{k,l} is over the range 1 ≤ k ≤ N and 0 ≤ l ≤ i, and we have introduced

    φ_k(i) = y_k(i) / |y_k(i)|    (12)

Equivalently, (11) can be expressed in matrix form as

    R(i) w = r(i)    (13)

where we define

    R(i) = Σ_{l=0}^{i} λ^{i−l} U^H(l) U(l),    r(i) = Σ_{l=0}^{i} λ^{i−l} Ũ^H(l) γ    (14)

and Ũ^H(i) = [φ_1(i) u_1^H(i), φ_2(i) u_2^H(i), ..., φ_N(i) u_N^H(i)]. Therefore, the WLS solution at the current time i can be computed as w(i) = R^{−1}(i) r(i). Alternatively, the optimal weight vector in (13) can be recursively updated by means of the following relation:

    w(i) = w(i−1) + R^{−1}(i) [ Ũ^H(i) γ − U^H(i) U(i) w(i−1) ]    (15)

As a first step towards the derivation of a distributed RLS solution to the CM problem, we focus on the efficient updating of the required inverse correlation matrix R^{−1}(i) in (15). Indeed, to avoid costly matrix inversion, we can compute R^{−1}(i) recursively, in a distributed and incremental manner, as explained below. Using the definition of the sample correlation matrix in [9], we know that R(i) = λ R(i−1) + U^H(i) U(i). Equivalently, by expanding the product U^H(i) U(i) in terms of the local sensor observations, we obtain:

    R(i) = λ R(i−1) + Σ_{k=1}^{N} u_k^H(i) u_k(i)    (16)

Equation (16) can be iteratively updated in time and space, using only the local data at sensor k, by proceeding as follows:

    R_0(i) ← λ R(i−1)
    R_k(i) = R_{k−1}(i) + u_k^H(i) u_k(i),  k = 1, 2, ..., N    (17)
    R(i) ← R_N(i)

We note that there is no physical sensor corresponding to the index k = 0; the latter is introduced only for convenience in joining both ends of the incremental cycle of spatial updates over index k as time is incremented from i−1 to i, i.e., R_0(i) ← λ R_N(i−1).

Note that each update in (17) only involves a rank-one additive term. Therefore, the inverse of the sample correlation matrix given in (17) can be computed locally according to Woodbury's identity. As a result, the inverse of the global correlation matrix can be calculated in a distributed fashion, using the set of local data, as:

    R_0^{−1}(i) ← λ^{−1} R^{−1}(i−1)
    R_k^{−1}(i) = R_{k−1}^{−1}(i) − [ R_{k−1}^{−1}(i) u_k^H(i) u_k(i) R_{k−1}^{−1}(i) ] / [ 1 + u_k(i) R_{k−1}^{−1}(i) u_k^H(i) ],  k = 1, 2, ..., N    (18)
    R^{−1}(i) ← R_N^{−1}(i)

We note that, to update its local estimate of the inverse correlation matrix with this approach, sensor k only makes use of its local observation vector u_k(i) along with the inverse correlation matrix estimate of its predecessor in the incremental cycle, i.e. R_{k−1}^{−1}(i).

The recursive formula given in (15) can also be updated in a distributed manner based on the local data at the kth sensor, for k = 1, ..., N. Indeed, by expanding the term { Ũ^H(i) γ − U^H(i) U(i) w(i−1) }, we obtain the following recursion formula:

    w(i) = w(i−1) + R^{−1}(i) Σ_{k=1}^{N} u_k^H(i) e_k(i)    (19)

where we define the modulus error at node k as

    e_k(i) = φ_k(i) γ_k − u_k(i) w(i−1)    (20)

Equation (19) can be implemented in a distributed manner as:

    ψ_0(i) ← w(i−1)
    e_k(i) = φ_k(i) γ_k − u_k(i) w(i−1)
    ψ_k(i) = ψ_{k−1}(i) + R^{−1}(i) u_k^H(i) e_k(i),  k = 1, 2, ..., N    (21)
    w(i) ← ψ_N(i)

Finally, we can arrive at a fully distributed, incremental algorithm by substituting w(i−1) and R^{−1}(i) with ψ_{k−1}(i) and R_k^{−1}(i), respectively, with the latter quantity being updated in a distributed manner as in (18). Similar to [10], the substitution of w(i−1) by ψ_{k−1}(i) leads to better adaptive performance, whereas substituting R^{−1}(i) with the local update R_k^{−1}(i) causes some performance degradation.
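The spatial recursion (18) is a chain of N rank-one Sherman-Morrison corrections, one per sensor. The toy check below (our own construction, with random complex data and assumed sizes) verifies that chaining these corrections reproduces the direct inverse of R(i) from (17):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, lam = 4, 3, 0.96

R_prev = np.eye(M, dtype=complex)       # R(i-1), assumed available from the previous cycle
U_rows = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))

# Direct update (17): R(i) = lam * R(i-1) + sum_k u_k^H(i) u_k(i)
R = lam * R_prev + sum(np.outer(np.conj(u), u) for u in U_rows)

# Distributed update (18): one rank-one Sherman-Morrison correction per sensor
P = np.linalg.inv(R_prev) / lam         # R_0^{-1}(i) <- lam^{-1} R^{-1}(i-1)
for u in U_rows:
    Pu = P @ np.conj(u)                 # R_{k-1}^{-1}(i) u_k^H(i)
    P = P - np.outer(Pu, u @ P) / (1 + u @ Pu)

assert np.allclose(P, np.linalg.inv(R))  # the chained updates match the direct inverse
```

Each correction costs O(M²) at the sensor that applies it, which is the point of (18): no node ever inverts an M x M matrix.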
By applying these modifications, we obtain the first version of the distributed RLS-CMA, given in Algorithm 2. In this algorithm, during the ith cycle, sensor k−1 forwards its updated local weight vector estimate ψ_{k−1}(i) and inverse correlation matrix estimate R_{k−1}^{−1}(i) to sensor k, where the corresponding estimates are updated using only the local observation u_k(i).

Algorithm 2 Distributed adaptive RLS-CMA when p = 1
    ψ_0(i) ← w(i−1);  R_0^{−1}(i) ← λ^{−1} R^{−1}(i−1)
    for k = 1 : N do
        R_k^{−1}(i) = R_{k−1}^{−1}(i) − [ R_{k−1}^{−1}(i) u_k^H(i) u_k(i) R_{k−1}^{−1}(i) ] / [ 1 + u_k(i) R_{k−1}^{−1}(i) u_k^H(i) ]
        e_k(i) = φ_k(i) γ_k − u_k(i) ψ_{k−1}(i)
        ψ_k(i) = ψ_{k−1}(i) + R_k^{−1}(i) u_k^H(i) e_k(i)
    end for
    w(i) ← ψ_N(i);  R^{−1}(i) ← R_N^{−1}(i)

B. Distributed RLS-CMA for general values of p

In this case, the global objective function (9) takes the following form:

    J(w, i) = Σ_{k=1}^{N} J_k(w, i)    (22)

where

    J_k(w, i) = Σ_{l=0}^{i} λ^{i−l} |γ_k − |u_k(l)w|^p|²    (23)

is the local cost function at node k. Here, each local cost function can be transformed into the conventional RLS cost function by applying the technique suggested in [8]. Following this approach, if we assume the signal environment is stationary or slowly varying, then the difference between u_k(i)w(i−1) and u_k(i)w(i) is negligible. Hence, the local cost function in (23) can be rearranged as:

    J_k(w, i) = Σ_{l=0}^{i} λ^{i−l} |γ_k − |u_k(l)w(l−1)|^{p−2} w^H(l−1) u_k^H(l) u_k(l) w|²    (24)

This can be expressed more compactly as:

    J_k(w, i) = Σ_{l=0}^{i} λ^{i−l} |γ_k − z_k(l) w|²    (25)

where we define

    z_k(l) = |u_k(l)w(l−1)|^{p−2} w^H(l−1) u_k^H(l) u_k(l)    (26)

As a result of this approximation, the global cost function takes the following form:

    J(w, i) = Σ_{l=0}^{i} λ^{i−l} ‖γ − Z(l)w‖²    (27)

where Z(i) = [z_1^T(i), z_2^T(i), ..., z_N^T(i)]^T is the modified data matrix. Computing the partial derivative of (27) and equating it to zero yields:

    Σ_{l=0}^{i} λ^{i−l} Z^H(l) Z(l) w = Σ_{l=0}^{i} λ^{i−l} Z^H(l) γ    (28)

The solution of (28) can be expressed as w(i) = R_z^{−1}(i) r_z(i), where R_z(i) = Σ_{l=0}^{i} λ^{i−l} Z^H(l) Z(l) and r_z(i) = Σ_{l=0}^{i} λ^{i−l} Z^H(l) γ. In the same way as shown in Section IV-A, the optimal weights w(i) can be updated by the recursive formula given below:

    w(i) = w(i−1) + R_z^{−1}(i) Z^H(i) [ γ − Z(i) w(i−1) ]    (29)

By following the same procedure as in Section IV-A, this calculation can be performed in a distributed manner as follows:

    ψ_0(i) ← w(i−1)
    z_k(i) = |u_k(i)w(i−1)|^{p−2} w^H(i−1) u_k^H(i) u_k(i)
    e_k(i) = γ_k − z_k(i) w(i−1)    (30)
    ψ_k(i) = ψ_{k−1}(i) + R_z^{−1}(i) z_k^H(i) e_k(i),  k = 1, 2, ..., N
    w(i) ← ψ_N(i)

Again, the inverse of the global autocorrelation matrix, R_z^{−1}(i), can be updated based on the local data as:

    R_0^{−1}(i) ← λ^{−1} R_z^{−1}(i−1)
    R_k^{−1}(i) = R_{k−1}^{−1}(i) − [ R_{k−1}^{−1}(i) z_k^H(i) z_k(i) R_{k−1}^{−1}(i) ] / [ 1 + z_k(i) R_{k−1}^{−1}(i) z_k^H(i) ],  k = 1, 2, ..., N    (31)
    R_z^{−1}(i) ← R_N^{−1}(i)

Finally, in the recursion part of (30), we can substitute w(i−1) and R_z^{−1}(i) with ψ_{k−1}(i) and R_k^{−1}(i), respectively, to attain the second version of the distributed RLS-CMA, which is summarized in Algorithm 3.

Algorithm 3 Distributed adaptive RLS-CMA, general p
    ψ_0(i) ← w(i−1);  R_0^{−1}(i) ← λ^{−1} R_z^{−1}(i−1)
    for k = 1 : N do
        z_k(i) = |u_k(i) ψ_{k−1}(i)|^{p−2} ψ_{k−1}^H(i) u_k^H(i) u_k(i)
        R_k^{−1}(i) = R_{k−1}^{−1}(i) − [ R_{k−1}^{−1}(i) z_k^H(i) z_k(i) R_{k−1}^{−1}(i) ] / [ 1 + z_k(i) R_{k−1}^{−1}(i) z_k^H(i) ]
        e_k(i) = γ_k − z_k(i) ψ_{k−1}(i)
        ψ_k(i) = ψ_{k−1}(i) + R_k^{−1}(i) z_k^H(i) e_k(i)
    end for
    w(i) ← ψ_N(i);  R_z^{−1}(i) ← R_N^{−1}(i)

V. SIMULATION RESULTS

In our simulations, we use the system model described in Section II. In particular, we consider a quadrature amplitude modulation (QAM) communication framework with independent source signal samples s(i) uniformly distributed over a unit-magnitude QAM constellation [11]; accordingly, the value of γ_k is set to 1 for k = 1, ..., N.
The unknown system in Fig. 1 is modeled as a time-invariant FIR filter of length M = 10 with a randomly generated parameter vector β, where each entry is drawn from an i.i.d. complex circular Gaussian distribution with zero mean and unit variance. We consider a network of N = 5 distributed sensors with an identical signal-to-noise ratio (SNR) of 20 dB.

In our simulations, we compare the proposed distributed versions of the LMS-CMA and RLS-CMA to their non-distributed counterparts, i.e. those in which the sensor nodes individually attempt to process their inputs without benefiting from any exchange of information with other nodes in the network. The performance of the developed algorithms is evaluated based on the mean square error (MSE) criterion. Both the distributed and non-distributed LMS-CMA run with an equal step size of μ = 0.0001. In the RLS-based algorithms, the forgetting factor is set to λ = 0.96, and R_0^{−1}(0) = εI with the parameter ε = 0.01. Both the LMS- and RLS-based algorithms are initialized with the weight vector w(−1) = [1, 0, ..., 0]^T.

The results shown in Figs. 2 and 3 are averaged over 500 independent runs, with different system parameters selected as above for each run. The graphs in Fig. 2 indicate that, following an initial period of rapid learning, the distributed LMS-CMA and RLS-CMA reach their steady-state level of residual error faster than the non-distributed LMS-CMA and RLS-CMA, respectively. Moreover, as a result of the spatial diversity introduced by the local nodes, the distributed LMS-CMA and RLS-CMA offer better steady-state performance (i.e. lower residual error) compared to the non-distributed algorithms. Note that for Fig. 2, the values of p and q are both set to two. Finally, the effect of the choice of the parameter p is illustrated in Fig. 3, where we observe that for this particular scenario, the best performance is obtained with p = 1.5.

Fig. 2: MSE of distributed and non-distributed adaptive blind algorithms

Fig. 3: MSE of the distributed RLS-CMA for different values of p

VI. CONCLUSION

In this paper, we have developed distributed LMS-CMA and RLS-CMA algorithms for wireless sensor network applications. In our model, the developed blind algorithms run in the network in a distributed and adaptive manner over the joint time and space domains to estimate and track the parameters of an unknown underlying system. Simulation results demonstrate the effectiveness of the proposed algorithms, and show their superior performance over the corresponding non-distributed adaptive algorithms.

In this work, we have used a system identification example to develop the proposed algorithms. However, the estimation scenario under consideration can be generalized to more complex situations by modifying the underlying system model and making changes to the adaptive process running on the individual nodes; this avenue is currently under investigation.

REFERENCES

[1] M. Rabbat and R. Nowak, "Distributed optimization in sensor networks," in Proc. Int. Symp. on Information Processing in Sensor Networks, 2004, pp. 20-27.
[2] A. Sayed and C. Lopes, "Adaptive processing over distributed networks," IEICE Trans. Fundamentals of Electronics, Communications and Computer Sciences, vol. 90, no. 8, pp. 1504-1510, 2007.
[3] C. Lopes and A. Sayed, "Diffusion least-mean squares over adaptive networks," in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, vol. 3, 2007, pp. 917-920.
[4] F. Cattivelli, C. Lopes, and A. Sayed, "Diffusion recursive least-squares for distributed estimation over adaptive networks," IEEE Trans. Signal Process., vol. 56, no. 5, pp. 1865-1877, 2008.
[5] I. Schizas, G. Mateos, and G. Giannakis, "Distributed LMS for consensus-based in-network adaptive processing," IEEE Trans. Signal Process., vol. 57, no. 6, pp. 2365-2382, 2009.
[6] G. Mateos, I. Schizas, and G. Giannakis, "Distributed recursive least-squares for consensus-based in-network adaptive estimation," IEEE Trans. Signal Process., vol. 57, no. 11, pp. 4583-4588, 2009.
[7] R. Johnson, P. Schniter, T. Endres, J. Behm, D. Brown, and R. Casas, "Blind equalization using the constant modulus criterion: A review," Proceedings of the IEEE, vol. 86, no. 10, pp. 1927-1950, 1998.
[8] Y. Chen, T. Le-Ngoc, B. Champagne, and C. Xu, "Recursive least squares constant modulus algorithm for blind adaptive array," IEEE Trans. Signal Process., vol. 52, no. 5, pp. 1452-1456, 2004.
[9] S. Haykin, Adaptive Filter Theory. Prentice-Hall, 1996.
[10] A. Sayed and C. Lopes, "Distributed processing over adaptive networks," in Proc. IEEE Int. Symp. on Signal Processing and Its Applications, 2007, pp. 1-3.
[11] J. Proakis and M. Salehi, Digital Communications. McGraw-Hill, 1995.