JOURNAL OF LATEX CLASS FILES, VOL. 6, NO. 1, JANUARY 2007
Abstract—We develop a computationally efficient approximation of the maximum likelihood (ML) detector for 16-ary quadrature amplitude modulation (16-QAM) in multiple-input multiple-output (MIMO) systems. The detector is based on solving a convex relaxation of the ML problem by an affine-scaling cyclic Barzilai-Borwein method for the box constrained relaxation (BCR) problem. Simulation results for a random MIMO system show that the proposed approach outperforms the conventional decorrelator detector as well as the semidefinite relaxation (SDR) detector. We also note that the complexity of the proposed approach is lower than that of those detectors. In the case of 8 antennas and 4 users, about 99% fewer computations are required compared to the SDR and ML detectors.

Index Terms—Maximum likelihood (ML) detection, MIMO systems, affine-scaling, cyclic Barzilai-Borwein (CBB) method, box constrained optimization.

I. INTRODUCTION

The most common suboptimal detectors are the linear receivers, i.e., the matched filter (MF), the decorrelator or zero-forcing (ZF) detector, and the minimum mean-squared error (MMSE) detector, together with decision feedback equalization (DFE) and the semidefinite relaxation (SDR) detector. There are many other suboptimal detection schemes, ranging from lattice-based algorithms and alternating variable methods to expectation maximization and many more [10].

The ZF algorithm is a straightforward approach to signal detection. The receiver with the ZF detector uses the estimated channel matrix to detect the transmitted signal as follows:

    s̃ = (H^H H)^{-1} H^H y = H^+ y,

where y is the received signal, H^H and H^+ denote the Hermitian conjugate and the pseudo-inverse, respectively, and s̃ is the estimate of the vector s of transmitted symbols. Then each element of s̃ is moved to the nearest constellation point.
The received signal is modeled as

    y = Hs + w,    (1)

where H is the N × K channel matrix, s is the length-K vector of transmitted symbols, and w is a length-N complex random vector with a normal distribution of zero mean and covariance σ²I. The symbols of s belong to some known complex constellation. In this paper, we consider a 16-QAM constellation, i.e., the real part and the imaginary part of s_i for i = 1, . . . , K belong to the set {±1, ±3}.

In order to avoid handling complex-valued variables, it is more convenient to use the following decoupled model:

    y = Hs + w,    (2)

where

    y = [Re{y}; Im{y}],  s = [Re{s}; Im{s}],  w = [Re{w}; Im{w}],

and

    H = [Re{H}  −Im{H}; Im{H}  Re{H}].

Using these definitions, we can formulate the ML detector of the transmitted symbols as

    ML:  min ‖y − Hs‖²  subject to s_i ∈ {±1, ±3},  i = 1, . . . , 2K.    (3)

The ML detector is a combinatorial problem and can be solved in a brute-force fashion by searching over all 4^{2K} = 16^K possibilities. Clearly, as K increases, the brute-force search becomes prohibitively expensive.

As an alternative to a brute-force search, we compute an initial approximation to a solution of ML by solving the following continuous (relaxed) box-constrained optimization problem:

    RML:  min f(s) := ‖y − Hs‖²  subject to −3 ≤ s_i ≤ 3,  i = 1, . . . , 2K.    (4)

In RML, we ignore the integer constraints of ML and only require that each s_i lie between −3 and +3. Let A = H^T H, b = H^T y, and x = s. Then the relaxation RML of the ML problem is equivalent to the following quadratic programming problem with box constraints:

    min (1/2) x^T A x − b^T x  subject to −3 ≤ x_i ≤ 3,  i = 1, . . . , 2K.    (5)

We will solve the relaxation (5) by the AS_CBB method, which is described in the next section. Since a solution of (5) typically has noninteger components, we need to transform our noninteger solution to a point feasible for ML. We move closer to a feasible point for ML by solving another quadratic programming problem with a penalty term. Let y denote an optimal solution to (5); let y_i^+ denote the smallest of −1, +1, +3 which is greater than or equal to y_i, and let y_i^− denote the largest of −3, −1, +1 which is less than or equal to y_i. We consider the penalized problem:

    min (1/2) x^T A x − b^T x + p Σ_{i=1}^{2K} (y_i^+ − x_i)(x_i − y_i^−)
    subject to −3 ≤ x_i ≤ 3,  i = 1, . . . , 2K.    (6)

In our numerical experiments, the penalty parameter p was 0.5 times the largest diagonal element of A. When we solve the penalized problem (6), the optimal x must make the original ML quadratic small while keeping the components of x near feasible points for ML.

The solution to (6) is typically closer to a feasible point for ML than the solution to (5). Nonetheless, a solution z to (6) is typically fractional. We move each component of z to a feasible point for ML by a rounding process. In other words, each component z_i is replaced by quantize(z_i), where quantize(α) rounds α to the nearest element in the set {±1, ±3}.

The approximation quantize(z) to a solution of ML may not be a local minimum of the discrete ML problem. Hence, to obtain a discrete local minimizing approximation to a solution of ML, we perform cycles of discrete coordinate descent where the starting point x is a quantized solution to (6). The discrete coordinate descent algorithm is the following:

    for i = 1 : 2K
        t̂ = arg min { f(x_i(t)) : t ∈ {−3, −1, 1, 3} }
        x_i = t̂    (replace the i-th component of x by t̂)
    end

where

    x_i(t) = (x_1, . . . , x_{i−1}, t, x_{i+1}, . . . , x_{2K}).

Since f is a quadratic function, the computational complexity of this loop is essentially on the order of the computational cost of one function evaluation of (5). Hence, its computational expense is negligible compared with solving (5).

In summary, our quadratic programming algorithm for obtaining an approximation to an ML solution is the following:

1. Compute a solution x̂ to the box constrained quadratic programming problem (5).
2. Using x̂ as a starting guess, compute a solution z to the penalized quadratic box constrained optimization problem (6).
3. Apply the quantization operator to each component of z.
4. Perform discrete coordinate descent starting from x = quantize(z).

III. THE AFFINE-SCALING CYCLIC BB METHOD

In this section, we describe the optimization algorithm AS_CBB [4]–[7] that we use to solve either (5) or (6). The CBB method can be applied to an unconstrained problem

    min f(x),  x ∈ R^n,

where f is continuously differentiable and R^n denotes Euclidean n-space. Suppose that x_0 is an initial point, x_k is the current point, and g_k is the gradient of f at x_k; then gradient methods calculate the next point from

    x_{k+1} = x_k − α_k g_k,

where α_k is a stepsize computed by some line search algorithm. In the steepest descent (SD) method, the stepsize is chosen such that f is minimized along the search direction −g_k:

    α_k^{SD} ∈ arg min_{α ∈ R} f(x_k − α g_k).

It is well known that the steepest descent method can be very slow when the Hessian of f is ill-conditioned at a local minimum.
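This slow zig-zag behavior is easy to reproduce. For a quadratic objective of the form (5), f(x) = ½ xᵀAx − bᵀx with gradient g = Ax − b, the exact line-search stepsize has the closed form α = gᵀg / (gᵀAg). The sketch below is ours, not the paper's; it shows that with a condition number of 1000, one hundred exact SD steps barely reduce the error:

```python
import numpy as np

def sd_exact(A, b, x0, iters):
    """Steepest descent with exact line search on f(x) = 0.5 x^T A x - b^T x.
    For a quadratic, the exact minimizer along -g is alpha = g^T g / (g^T A g)."""
    x = x0.copy()
    for _ in range(iters):
        g = A @ x - b                       # gradient of the quadratic
        x = x - (g @ g) / (g @ (A @ g)) * g
    return x

# Hessian with condition number kappa = 1000; the minimizer is x* = (1, 1).
A = np.diag([1.0, 1000.0])
x_star = np.array([1.0, 1.0])
b = A @ x_star
# Worst-case start: error proportional to (1, 1/kappa) in the eigenbasis,
# for which every step contracts the error by only (kappa-1)/(kappa+1) ~ 0.998.
x0 = x_star + np.array([9.0, 9.0 / 1000.0])
x100 = sd_exact(A, b, x0, 100)
```

Starting at distance 9 from the minimizer, the iterate after 100 steps is still at distance about 7.4; the cyclic reuse of stepsizes discussed next is designed to break exactly this pattern.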
On the other hand, it has been shown that if the exact steepest descent step is reused in a cyclic fashion, then the convergence is accelerated. Given an integer m ≥ 2, which we call the cyclic length, the cyclic steepest descent (CSD) method recomputes the exact SD stepsize once every m iterations and reuses it for the intermediate iterations.

The AS_CBB method addresses the constrained problem

    min { f(x) : x ≥ 0 },    (9)

where f : R^n → R is a real-valued, continuously differentiable function defined over the domain x ≥ 0. AS_CBB is valid for problems with both upper and lower bound constraints; however, to simplify the discussion, we focus on the nonnegativity constraint x ≥ 0.

The algorithm starts at a point x_1 in the interior of the feasible set and generates a sequence x_k, k ≥ 2, by the following rule:

    x_{k+1} = x_k + d_k,    (10)

where the i-th component of d_k is given by

    d_k^i = − ( 1 / (λ_k + g_i^+(x_k) / x_k^i) ) g_i(x_k).    (11)

Here λ_k is a positive scalar, g_i(x) is the i-th component of the gradient ∇f(x), and t^+ = max{0, t} for any scalar t. We compute λ_k using a cyclic version [5] of the Barzilai-Borwein (BB) stepsize rule [2]. That is, we first define

    λ_k^{BB} := arg min_{λ ≥ λ_0} ‖λ t_{k−1} − v_{k−1}‖²
              = max { λ_0, (t_{k−1}^T v_{k−1}) / (t_{k−1}^T t_{k−1}) },    (12)

where k ≥ 2, λ_0 > 0 is a fixed parameter, and, as in the standard BB rule, t_{k−1} = x_k − x_{k−1} and v_{k−1} = g_k − g_{k−1}. The starting parameter value λ_1^{BB} can be chosen freely, subject to the constraint λ_1 ≥ λ_0; for example,

    λ_1^{BB} = max { λ_0, ‖g_1‖_∞ }.

The nonmonotone line search is based on the maximum of recent function values,

    f_k^{max} = max { f(x_{k−i}) : 0 ≤ i ≤ min(k − 1, M − 1) },    (13)

where M > 0 is a fixed integer. The AS_CBB algorithm with a nonmonotone line search can be described as follows:

Affine-scaling CBB algorithm with line search
    Initialize k = 1, x_1 = starting guess, and f_0^r = f(x_1).
    While x_k is not a stationary point
        1. Let d_k be given by (11).
        2. Choose f_k^r so that f(x_k) ≤ f_k^r ≤ max{ f_{k−1}^r, f_k^{max} } and f_k^r ≤ f_k^{max} infinitely often.
        3. Let f_R be either f_k^r or min{ f_k^{max}, f_k^r }. If f(x_k + d_k) ≤ f_R + δ g_k^T d_k, then α_k = 1.
        4. If f(x_k + d_k) > f_R + δ g_k^T d_k, then α_k = η^j, where j > 0 is the smallest integer such that

               f(x_k + η^j d_k) ≤ f_R + η^j δ g_k^T d_k.    (14)

        5. Set x_{k+1} = x_k + α_k d_k and k = k + 1.
    End

Here the parameters δ and η used in the Armijo line search of Step 4 must satisfy δ ∈ (0, 1) and η ∈ (0, 1).

If the iterates generated by the AS_CBB method converge to a nondegenerate local minimizer and the second order sufficient optimality condition holds, then the local convergence rate is
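To make the iteration concrete, here is a compact Python sketch of the affine-scaling method for min{f(x) : x ≥ 0}. It is our simplification, not the authors' code: the BB parameter (12) is recomputed at every step instead of cyclically, and the reference value f_R is simply f(x_k) (a monotone Armijo search) rather than the nonmonotone rule of Steps 2-3:

```python
import numpy as np

def as_cbb(grad, f, x1, lam0=1e-3, delta=1e-4, eta=0.5, tol=1e-8, max_iter=500):
    """Sketch of the affine-scaling BB method for min{f(x): x >= 0}.

    Simplifications vs. the paper: the BB parameter is refreshed every
    iteration (no cycling) and the line search is monotone Armijo.
    Start x1 must lie in the interior of the feasible set (x1 > 0).
    """
    x = x1.copy()
    g = grad(x)
    lam = max(lam0, np.linalg.norm(g, np.inf))  # lambda_1 = max{lam0, ||g_1||_inf}
    for _ in range(max_iter):
        gp = np.maximum(g, 0.0)                 # g_i^+ = max{0, g_i}
        d = -g / (lam + gp / x)                 # affine-scaling direction, eq. (11)
        if np.linalg.norm(d, np.inf) < tol:
            break
        alpha, fx, gTd = 1.0, f(x), g @ d
        while f(x + alpha * d) > fx + alpha * delta * gTd:
            alpha *= eta                        # Armijo backtracking, eq. (14)
        x_new = x + alpha * d                   # full step keeps x > 0 by construction
        g_new = grad(x_new)
        t, v = x_new - x, g_new - g
        lam = max(lam0, (t @ v) / (t @ t))      # BB rule, eq. (12)
        x, g = x_new, g_new
    return x
```

For example, minimizing ½‖x − c‖² with c = (2, −1) over x ≥ 0 returns approximately (2, 0): the unconstrained component converges via the BB step, while the constrained component is driven to the boundary by the affine scaling.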
IV. SIMULATION RESULTS
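The simulation figures are not reproduced in this excerpt. As a hedged illustration only, here is a minimal harness of the kind such experiments typically use: it draws a random complex channel, forms the real-valued model (2), and compares the symbol error rate (SER) of ZF detection against brute-force ML. All function names are ours, not the paper's:

```python
import numpy as np
from itertools import product

LEVELS = np.array([-3.0, -1.0, 1.0, 3.0])   # real/imag parts of 16-QAM

def quantize(x):
    """Round each entry to the nearest element of {-3, -1, 1, 3}."""
    return LEVELS[np.abs(x[:, None] - LEVELS).argmin(axis=1)]

def simulate(K, N, sigma, trials, rng):
    """Symbol error rates of ZF and brute-force ML on the real model (2)."""
    cands = np.array(list(product(LEVELS, repeat=2 * K)))   # all 16^K candidates
    err_zf = err_ml = 0
    for _ in range(trials):
        Hc = (rng.standard_normal((N, K))
              + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
        # Real-valued decoupled channel, as in (2)
        H = np.block([[Hc.real, -Hc.imag], [Hc.imag, Hc.real]])
        s = rng.choice(LEVELS, size=2 * K)
        y = H @ s + sigma * rng.standard_normal(2 * N)
        err_zf += np.count_nonzero(quantize(np.linalg.pinv(H) @ y) != s)
        best = np.argmin(((y - cands @ H.T) ** 2).sum(axis=1))
        err_ml += np.count_nonzero(cands[best] != s)
    return err_zf / (trials * 2 * K), err_ml / (trials * 2 * K)
```

Sweeping sigma over a range of values yields the usual SER-versus-SNR comparison; the brute-force ML branch is only tractable for small K (here 16^K hypotheses), which is precisely the motivation for the relaxation-based detector of Section II.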