Σ_{i=1}^{m} R_i. The LDPC codewords are then modulated to an M-ary PAM constellation to obtain N symbols. The first αN symbols, denoted by X_s1, are transmitted during T_1 subject to a power constraint E[X_s1^2] ≤ P_s1, where X_s1 is the random variable associated with the independent and identically distributed (i.i.d.) sequence X_s1. The remaining (1 − α)N symbols are transmitted during T_2 and satisfy the constraint E[X_s2^2] ≤ P_s2.
Due to the average power constraint at the source, we have αP_s1 + (1 − α)P_s2 ≤ P_s. The length-αN signal sequences received at the relay and destination during T_1 are given as

Y_r = c_sr X_s1 + Z_r   and   Y_d1 = c_sd X_s1 + Z_d1,
respectively, where Z_r and Z_d1 are i.i.d. zero-mean, unit-variance Gaussian noise sequences. The relay quantizes Y_r using an L-level USQ to obtain a sequence W of quantization indices. Let k_0, …, k_L be the quantization region boundaries. If q is the quantization step size, we have k_0 = −∞, k_i = (i − L/2) q for i = 1, …, L−1, and k_L = +∞. The quantizer output W = w ∈ {0, …, L−1} if the received signal Y_r ∈ {x : x ∈ ℝ, k_w ≤ x < k_{w+1}}. The quantization indices are SW coded with Y_d1 as the decoder side-information and provided error protection to form a length-(1 − α)N codeword sequence X_r drawn from an M-ary PAM constellation subject to a power constraint E[X_r^2] ≤ P_r/(1 − α). Note that since the relay does not transmit anything during T_1, normalizing the power constraint by (1 − α) ensures that the average power consumption at the relay is P_r.
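A minimal sketch of this USQ (our own illustration, not the authors' implementation; the function name is ours):

```python
import numpy as np

def usq_quantize(y, L, q):
    """L-level uniform scalar quantizer with boundaries
    k_0 = -inf, k_i = (i - L/2) * q for i = 1..L-1, and k_L = +inf.
    Returns indices w in {0, ..., L-1} with k_w <= y < k_{w+1}."""
    # Interior boundaries k_1, ..., k_{L-1}
    k = (np.arange(1, L) - L / 2.0) * q
    # searchsorted with side="right" counts boundaries at or below each sample,
    # which is exactly the index w of the region containing the sample
    return np.searchsorted(k, y, side="right")

# Example: L = 4, q = 1 gives interior boundaries -1, 0, +1
y = np.array([-2.5, -0.3, 0.2, 3.0])
w = usq_quantize(y, L=4, q=1.0)  # -> [0, 1, 2, 3]
```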
The codeword X_r is then transmitted to the destination during T_2, at the same time as X_s2 is transmitted.¹ The destination receives the superposition of both signals:

Y_d2 = c_sd X_s2 + c_rd X_r + Z_d2,

where Z_d2 once again is an i.i.d. zero-mean, unit-variance Gaussian noise sequence.

¹Since an M-ary PAM constellation represents one quadrature component of an M²-QAM constellation, the results presented in this paper can be easily extended to the case of QAM as well.
The destination first attempts to recover the quantization indices W by treating the transmission X_s2 from the source as interference. It can do so if [5], [7]

α H(W | Y_d1) ≤ (1 − α) I(Y_d2; X_r).   (1)
The information term on the right-hand side of (1) is the constrained capacity of an AWGN channel with an M-ary input and an M-ary interference, and can be computed numerically as (assuming the noise to be of unit variance)

C(S, I) = m − (1/M) Σ_{i=1}^{M} ∫ φ_i(y) log_2 ( Σ_{j=1}^{M} φ_j(y) / φ_i(y) ) dy   (2)

with

φ_i(y) = Σ_{k=1}^{M} (1/M) f_g(y − √S x_i − √I x_k).

Here x_i represents the i-th point of the unit-energy M-ary PAM constellation, S and I are the received source and interferer powers, respectively, and f_g(z) is the zero-mean, unit-variance Gaussian probability density function evaluated at z. When I = 0, (2) reduces to the constrained capacity of an AWGN channel with an M-ary input.
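Equation (2) can be evaluated by discretizing the integral on a fine grid. The following is our own sketch of such an evaluation (the function names and grid parameters are our choices, not the authors'), for unit-variance noise and a unit-energy M-PAM constellation:

```python
import numpy as np

def pam_points(M):
    """Unit-energy M-ary PAM constellation points."""
    x = np.arange(M) * 2.0 - (M - 1)        # ..., -3, -1, +1, +3, ...
    return x / np.sqrt(np.mean(x ** 2))     # normalize to unit energy

def constrained_capacity(M, S, I=0.0, y_max=30.0, n=20001):
    """Evaluate (2) on a grid: constrained capacity (bits/channel use) of a
    unit-variance AWGN channel with an M-PAM input at power S and an
    M-PAM interferer at power I."""
    x = pam_points(M)
    y = np.linspace(-y_max, y_max, n)
    # phi_i(y) = (1/M) * sum_k f_g(y - sqrt(S) x_i - sqrt(I) x_k)
    means = np.sqrt(S) * x[:, None] + np.sqrt(I) * x[None, :]
    f_g = lambda z: np.exp(-z ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
    phi = f_g(y[None, None, :] - means[:, :, None]).mean(axis=1)  # shape (M, n)
    phi = np.maximum(phi, 1e-300)           # guard against underflow in the tails
    integrand = (phi * np.log2(phi.sum(axis=0) / phi)).sum(axis=0)
    return np.log2(M) - integrand.sum() * (y[1] - y[0]) / M
```

With I = 0 and large S this approaches log2(M) bits, and with S = 0 it returns zero, matching the limiting behavior of (2).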
After recovering W (and consequently X_r), the destination cancels the interference caused by X_r and attempts to recover the source message jointly from Y_d1, Y_d2 and W. The destination is capable of recovering the source message if the transmission rate satisfies

R ≤ α I(W, Y_d1; X_s1) + (1 − α) I(X_s2; Y_d2 | X_r).   (3)

The information terms in (3) can be evaluated numerically using (2) and the conditional probability density function f(y_r | y_d1); we leave out the details because of space limitations. It should be noted that the achievable rate expression in (3) corresponds to a particular power allocation P_s1, P_s2, the quantization parameters L and q, and the half-duplexing parameter α that satisfy the constraint (1) and the average power constraint αP_s1 + (1 − α)P_s2 ≤ P_s. Thus one needs to search over these parameters to maximize the achievable rate under the given constraints.
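This parameter search can be organized as a plain grid search. The sketch below is purely illustrative: `rate_fn` and `cons_fn` are hypothetical callbacks standing in for the numerical evaluations of (3) and (1), and the grid ranges and resolutions are arbitrary choices of ours, not the ones used in the paper.

```python
import itertools
import numpy as np

def optimize_cf_parameters(rate_fn, cons_fn, Ps, n_grid=10):
    """Grid search over the half-duplexing parameter alpha, the quantization
    step q, and the power split (Ps1, Ps2), subject to the average power
    constraint alpha*Ps1 + (1-alpha)*Ps2 <= Ps and the recovery constraint (1).

    rate_fn(alpha, q, Ps1, Ps2) -> achievable rate from (3)    [hypothetical]
    cons_fn(alpha, q, Ps1, Ps2) -> True if constraint (1) holds [hypothetical]
    """
    best = (0.0, None)
    alphas = np.linspace(0.1, 0.9, n_grid)
    qs = np.linspace(0.5, 4.0, n_grid)
    splits = np.linspace(0.0, 1.0, n_grid)  # fraction of Ps spent during T1
    for alpha, q, s in itertools.product(alphas, qs, splits):
        Ps1 = s * Ps / alpha                # per-symbol power during T1
        Ps2 = (1 - s) * Ps / (1 - alpha)    # per-symbol power during T2
        if cons_fn(alpha, q, Ps1, Ps2):
            r = rate_fn(alpha, q, Ps1, Ps2)
            if r > best[0]:
                best = (r, (alpha, q, Ps1, Ps2))
    return best
```

By construction, every candidate satisfies αP_s1 + (1 − α)P_s2 = P_s, so the average power budget is always met with equality.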
A. Numerical Results and Observations
In Fig. 1, we present the achievable rates of the CF strategy versus P_s, where all transmissions from the source as well as the relay are modulated onto an M = 4-ary PAM constellation. In generating the results, we assume that L = M = 4, and numerically search over α, q, and the power allocation between P_s1 and P_s2 that yields the maximum achievable rate in (3) while satisfying the constraint (1) (in addition to the constraint αP_s1 + (1 − α)P_s2 ≤ P_s). Some observations that can be made from these numerical results are:
• At an overall transmission rate of 1.0 b/s, the CF strategy outperforms DF by a margin of 1.15 dB, whereas the gain over direct transmission is approximately 1.2 dB. Note that in order to have a fair comparison, we have assumed that for the direct transmission case, the source transmits with a power equal to P_s + P_r.
• It was shown in [5] that a binary quantizer (L = 2, q = ∞) sufficed for the case when the transmissions from the source and relay were modulated onto a binary constellation (M = 2), i.e., no significant gains were observed with L > 2. Results in Fig. 1, however, indicate that a binary quantizer does not suffice for higher-order constellations. For example, for M = 4, we observe that in order to achieve a transmission rate of 1.5 b/s, employing a binary quantizer at the relay requires 0.89 dB more transmission power than the case with L = 4. At the same time, our numerical results (not shown in the figure) seem to indicate that going beyond L = 4 does not yield any noticeable gains.
• A similar observation is made for the case of M = 8. As shown in Fig. 2, achieving a transmission rate of 2.0 b/s with a binary quantizer requires 1.2 dB more power at the source than the case with L = 4; the L = 4 quantizer, on the other hand, requires 0.36 dB more power than the case with L = 8. We have also found numerically that going beyond L = 8 in this case does not give any further noticeable gains.
[Figure 1 plot: achievable rate (b/s) versus P_s (dB); curves: CF (L = 4), CF (L = 2) [5], DF, and direct transmission with P = P_s + P_r.]
Fig. 1. CF achievable rates versus the source power P_s for M = 4 and their comparison with DF and direct transmission rates. The other parameters are set to P_r = −6 dB, d_rd = 0.2 m, and d_sr = 0.95 m.
Based on these observations, as well as those from [5], we can reasonably conclude that an M-level quantizer should suffice for the case when the transmissions from the source and relay are modulated onto an M-ary PAM constellation. An intuitive explanation behind this notion is the fact that the number of bits per quantization index for an M-level quantizer is log_2 M, the same as the maximum channel capacity (in b/s) on the relay-to-destination link over which the quantization indices need to be transmitted. Thus, it should be sufficient to employ an M-level quantizer when the relay employs an M-ary PAM constellation.
[Figure 2 plot: achievable rate (b/s) versus P_s (dB); curves: CF (L = 8), CF (L = 4), and CF (L = 2) [5].]
Fig. 2. A comparison of CF achievable rates for several L. The parameters are set to M = 8, P_r = −3 dB, d_rd = 0.15 m, and d_sr = 0.95 m.
IV. PRACTICAL ML-CF RELAYING SYSTEM
Whereas in the past researchers focused on minimizing Euclidean distance and maximizing asymptotic gains, recent research in the domain of coded modulation has shown that schemes like multi-level coding (MLC) [14] and bit-interleaved coded modulation (BICM) [15] can achieve capacity while providing both power and bandwidth efficiency. In particular, multi-stage decoding (MSD), where each code/level is decoded individually instead of jointly, has been shown to approach channel capacity with limited complexity [16]. Thus, we implement the practical ML-CF relaying scheme with the help of multi-level LDPC codes to encode the message at the source, and multi-level IRA codes to provide joint compression and error protection at the relay. In the following subsections, we describe our ML-CF coding scheme in detail.
[Figure 3 block diagram: the message is split into m bit-streams, each encoded by one of LDPC Codes 1 through m; the codewords pass through P2S blocks into an M-PAM modulator, and the outputs are scaled by √P_s1 and √P_s2 to form X_s1 and X_s2.]
Fig. 3. Encoding at the source node using m LDPC codes. The P2S blocks indicate parallel-to-serial conversion.
A. Message Encoding
A block diagram of the multi-level message encoder at the source node is shown in Fig. 3. The message to be transmitted to the destination is partitioned into m bit-streams. Each bit-stream is encoded with a length-N LDPC code of rate R_i, i = 1, …, m, so that the length of the i-th message bit-stream is NR_i. The sum of the individual code rates gives the overall transmission rate in b/s, i.e., Σ_{i=1}^{m} R_i = R. The resulting codewords are serially fed m bits at a time into a unit-energy
[Figure 4 block diagram: Y_r enters the scalar quantizer, whose output bit-plane sequences W_1, …, W_l are each encoded by an IRA code, passed through P2S blocks, and mapped by M-PAM modulators scaled by √(P_r/(1 − α)) to the symbol sequences X_r1, …, X_rl of lengths β_1N, …, β_lN.]
Fig. 4. Multilevel DJSCC encoding at the relay using l IRA codes.
M-PAM modulator, i.e., the k-th bit of all codewords forms the k-th symbol of the PAM sequence, k = 1, …, N. The first αN symbols of this PAM sequence are scaled by √P_s1 to form the sequence X_s1, which satisfies the average power constraint of P_s1 and is transmitted to the relay and the destination during T_1. The remaining (1 − α)N symbols are scaled by √P_s2 to form the sequence X_s2, which satisfies an average power constraint of P_s2 and is transmitted during T_2.
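The bit-to-symbol mapping described above can be sketched as follows. This is our own illustration, and we assume a natural binary labeling of the constellation points, which is one possible choice rather than necessarily the labeling used in the paper:

```python
import numpy as np

def map_to_pam(bit_matrix):
    """Map an (m, N) matrix of codeword bits to N symbols of a unit-energy
    2^m-ary PAM constellation; column k (the k-th bit of every codeword)
    forms the k-th symbol. A natural binary labeling is assumed here."""
    m, N = bit_matrix.shape
    M = 2 ** m
    # Integer label of each symbol from its m bits (row i carries weight 2^i)
    labels = (bit_matrix * (2 ** np.arange(m))[:, None]).sum(axis=0)
    x = np.arange(M) * 2.0 - (M - 1)        # ..., -3, -1, +1, +3, ...
    x /= np.sqrt(np.mean(x ** 2))           # normalize to unit energy
    return x[labels]

# m = 2 codewords of length N = 4 produce 4-PAM symbols
bits = np.array([[0, 1, 0, 1],
                 [0, 0, 1, 1]])
sym = map_to_pam(bits)                      # unit-energy 4-PAM symbols
```

Scaling the resulting sequence by √P_s1 or √P_s2 then yields X_s1 or X_s2.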
B. Multilevel Distributed Joint Source Channel Coding
As mentioned earlier, the relay quantizes the received sequence Y_r using an L-level quantizer, the quantization step-size of which is chosen to maximize the overall transmission rate in (3). The quantizer outputs a sequence W consisting of αN quantization indices, each of length l = log_2 L bits. The sequence W now needs to be compressed using SW coding with Y_d1 as the decoder side-information. At the same time, channel coding is also required to protect its transmission against noise on the relay-to-destination link. Instead of providing separate SW and channel coding, we resort to Distributed Joint Source Channel Coding (DJSCC) [5], in which SW coding and error protection are implemented in a joint manner. The challenge, however, is that for L > 2, each quantization index is composed of l > 1 bits, as opposed to the binary case in [5] where the quantization indices were one bit each. Taking this into account, we propose to use multiple binary IRA codes to implement multi-level DJSCC, the details of which are given below.
Encoding: We split the quantization index sequence W into l bit-plane sequences W_1, …, W_l, each of length αN; W_1 corresponds to the sequence comprising the least-significant bits of the original quantization sequence, whereas W_l corresponds to the most-significant bits. One possibility could have been to encode each of the l quantization bit-planes with m IRA codes, the parity bits of which are then mapped to an M-PAM constellation (similar to Fig. 3). However, this approach requires the use of l·m IRA codes,
[Figure 5 diagram: the IRA decoding graph for bit-plane i, with systematic nodes (αN of them), check nodes, and parity nodes (mβ_iN of them); the a-priori LLRs for the systematic nodes are calculated from Y_d1 and Ŵ_1, …, Ŵ_{i−1}, while the soft demodulator exchanges extrinsic LLRs L_e with the parity nodes using Y_d2.]
Fig. 5. DJSCC decoder for the i-th quantization bit-plane.
which becomes prohibitive when both l and m are large. Instead, we use a single IRA code for each bit-plane, as shown in Fig. 4. Bit-plane i, i = 1, …, l, is encoded with a code that has mβ_iN parity bits (the appropriate choice of the β_i's is discussed later). These parity bits are then mapped m bits at a time to a length-β_iN symbol sequence X_ri (with an average power of P_r/(1 − α)). These symbol sequences are transmitted to the destination one after the other, starting with X_r1 and ending with X_rl. Since the total number of transmissions from the relay is (1 − α)N symbols, we have the constraint Σ_{i=1}^{l} β_i = 1 − α.
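The bit-plane split at the start of the encoder is a simple bit-manipulation step; a possible sketch (our own, with the least-significant plane corresponding to W_1 as described above):

```python
import numpy as np

def split_bit_planes(W, L):
    """Split a sequence of quantization indices in {0, ..., L-1} into
    l = log2(L) bit-plane sequences. Row 0 of the returned array holds the
    least-significant bits (W_1 in the text) and the last row the
    most-significant bits (W_l)."""
    l = int(np.log2(L))
    return np.array([(W >> i) & 1 for i in range(l)])

W = np.array([0, 1, 2, 3])        # L = 4, so l = 2 bit-planes
planes = split_bit_planes(W, 4)
# planes -> [[0, 1, 0, 1], [0, 0, 1, 1]]
```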
Decoding: Note that the systematic bits of the IRA code are not transmitted over the physical channel. However, since Y_d1 at the destination is correlated with W, one can think of the systematic bits as being transmitted over a virtual correlation channel with Y_d1 as the output. The decoding of the bit-planes is done in stages, starting with W_1 and ending with W_l. Thus, when attempting to decode W_i, the calculation of the log-likelihood ratios (LLRs) corresponding to the systematic bits uses not only Y_d1 but also Ŵ_1, …, Ŵ_{i−1}, the decoded versions of the respective quantization index bit-planes from the previous stages, as shown in Fig. 5. This decoding strategy follows directly from the chain rule of entropy, using which the information-theoretic constraint (1) necessary for recovery of the quantization index sequence W can be rewritten as

α Σ_{i=1}^{l} H(W_i | Y_d1, W_1, …, W_{i−1}) ≤ Σ_{i=1}^{l} β_i I(X_r; Y_d2).   (4)
For each bit-plane i, i = 1, …, l, we choose β_i to satisfy the individual constraint

α H(W_i | Y_d1, W_1, …, W_{i−1}) ≤ β_i I(X_r; Y_d2).   (5)

This ensures that the overall conditional entropy of W given Y_d1 satisfies (4). Thus, if the codes used are capacity-achieving, the proposed methodology should guarantee that the quantization index sequence W is recoverable.
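Given the conditional bit-plane entropies and the relay-link mutual information from the numerical analysis, the smallest β_i satisfying (5) follows directly. The sketch below is our own illustration, and the input numbers are hypothetical placeholders, not values from the paper:

```python
import numpy as np

def choose_betas(alpha, H_cond, I_rd):
    """Pick the smallest beta_i satisfying (5):
    alpha * H(W_i | Y_d1, W_1..W_{i-1}) <= beta_i * I(X_r; Y_d2).
    H_cond lists the conditional bit-plane entropies (bits); I_rd is the
    relay-to-destination mutual information per symbol. Feasibility of the
    overall scheme additionally requires sum(beta) <= 1 - alpha, cf. (4)."""
    betas = alpha * np.asarray(H_cond, dtype=float) / I_rd
    if betas.sum() > 1 - alpha:
        raise ValueError("constraint (4) violated: no feasible allocation")
    return betas

# Hypothetical numbers for L = 4 (two bit-planes):
betas = choose_betas(alpha=0.57, H_cond=[0.7, 0.4], I_rd=1.5)
```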
Noting that multiple parity bits of an IRA code are mapped to the same modulated symbol, we employ an iterative soft-demodulation strategy [15]. The iterative strategy for recovering the quantization indices is summarized below:
1: Repeat for all bit-planes i = 1, …, l.
2: Initialize the extrinsic LLRs (from the parity nodes to the soft demodulator): L_e = 0.
3: Use Y_d1 and Ŵ_1, …, Ŵ_{i−1} to calculate the a-priori LLRs for the systematic nodes.
4: While (stopping criterion not met)²
5: (Soft demodulation) Use Y_d2 and L_e to calculate the a-priori LLRs for the parity nodes.
6: Run one iteration of the belief propagation (BP) algorithm on the IRA decoding graph (L_e is updated).
7: end while
8: Obtain Ŵ_i by hard-thresholding the a-posteriori LLRs from the systematic nodes.

After decoding all bit-planes, the DJSCC decoder passes the estimates Ŵ_1, …, Ŵ_l to the message decoder.
C. Message Decoding
The destination uses MSD on the m LDPC decoding graphs to recover the corresponding codewords, and hence the original message sequence. We have two types of variable nodes at each stage of the LDPC decoder. The first type corresponds to the symbols received during T_1. The a-priori LLRs for these nodes are calculated using Y_d1, Ŵ_1, …, Ŵ_l, and the decoded codewords corresponding to the previous stages. The second type of variable nodes are those which correspond to T_2. For these nodes, Y_d2 and the decoded codewords corresponding to the previous stages are used to evaluate the a-priori LLRs.
D. Code Design
For optimizing the degree distributions of the LDPC and IRA codes, we first use the information-theoretic analysis of Section III to evaluate (for a given relay position and power P_r) the optimum parameters α, q, P_s1 and P_s2 that are required to achieve a target transmission rate of R b/s. The analysis also yields the target rates R_i, i = 1, …, m, for the individual LDPC codes, as well as the β_i, i = 1, …, l, that govern the target rates of the individual IRA codes.
The degree distributions for the multi-level LDPC and IRA codes are designed using the EXIT chart strategy [17] with a Gaussian approximation [13]. The approach we use is a direct consequence of the chain rule of mutual information/entropy, i.e., we assume perfect knowledge of prior bit-planes and no information about subsequent ones. This simplifies the design process in the sense that the individual codes can be designed one by one, in a serial fashion. For each LDPC level, we take into account the fact that there are two types of variable nodes with different SNR characteristics: the first type corresponding to T_1 and the other to T_2. The design process is similar to the one in [5], so we do not include the details in this paper. In the following, we briefly explain how the degree distributions of a single level of IRA codes are optimized; the procedure has to be repeated for all l levels. Each IRA code has two types of variable nodes, the systematic nodes and the parity nodes.
²In our simulations, we stop when either the maximum number of iterations has been exceeded or the correct codeword has been decoded.
[Figure 6 diagram: information flow between the systematic nodes, check nodes, parity nodes, and the soft demodulator, with the quantities I_sc, I_cs, I_ch, I^i_pc, I^i_cp, I^i_pd, and I^i_dp marked on the corresponding edges.]
Fig. 6. Information flow for IRA codes.
The parity nodes are divided into m groups; corresponding bits from each group map to the same M-ary symbol. As shown in Fig. 5, each parity node is connected to two consecutive check nodes and vice-versa, with as many parity nodes as check nodes. Thus the check nodes can also be divided into m types. We fix the check nodes within group i, i = 1, …, m, to have a regular degree of d_i, and then design the systematic node degree distribution λ(x) = Σ_{d=1}^{D} λ^j_d x^{d−1}, where D is the maximum systematic node degree. Since each group contains an equal number of check nodes, the overall check node degree distribution is given as ρ(x) = Σ_{i=1}^{m} ρ_i x^{d_i−1} with ρ_i = d_i (Σ_{j=1}^{m} d_j)^{−1}
. Let I_sc be the a-priori information from the systematic nodes to the check nodes, as shown in Fig. 6. If I^i_pc is the information flow from the parity to the check nodes in group i, the information from the check to the parity nodes can be evaluated using the approximate check-to-bit-node duality [12] as

I^i_cp ≈ 1 − J( d_i J^{−1}(1 − I_sc) + J^{−1}(1 − I^i_pc) )   (6)

where J(μ) is the information that a log-likelihood ratio drawn from a Gaussian distribution of mean μ and variance 2μ conveys about the bit it represents. The information from the parity nodes in group i to the soft demodulator is given as I^i_pd = J( 2 J^{−1}(I^i_cp) )
. For informations I^j_pd, j = 1, …, m, and given channel conditions, we evaluate the EXIT function of the soft demodulator using Monte-Carlo simulations [18] to obtain I^i_dp, and consequently

I^i_pc = J( J^{−1}(I^i_cp) + J^{−1}(I^i_dp) )   (7)
for all i = 1, …, m. This completes one iteration of decoding from the check nodes to the parity nodes to the soft demodulator, and back from the parity nodes to the check nodes. In order to simplify the optimization process, we assume that the iterations on this side of the decoding graph (the right-hand side of the check nodes in Fig. 6) continue until a fixed point is reached. In other words, for a given I_sc, we initially assume I_pc to be zero, and then continue the iterations specified by (6) and (7) until the point where all of I^1_pc, …, I^m_pc converge to fixed values. If Î^i_pc denotes the fixed point, the check-node to systematic-node information is given as

I_cs ≈ 1 − (1/m) Σ_{i=1}^{m} J( (d_i − 1) J^{−1}(1 − I_sc) + 2 J^{−1}(1 − Î^i_pc) ).   (8)
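The function J(·) has no closed form, but it is straightforward to evaluate numerically. Below is a possible sketch (the helper names are ours) that computes J(μ) = 1 − E[log2(1 + e^{−Λ})] for Λ ~ N(μ, 2μ) by numerical integration and inverts it by bisection:

```python
import numpy as np

def J(mu, n=4001):
    """Mutual information carried by an LLR ~ N(mu, 2*mu) about its bit."""
    if mu <= 0:
        return 0.0
    sigma = np.sqrt(2.0 * mu)
    lam = np.linspace(mu - 10 * sigma, mu + 10 * sigma, n)
    # Gaussian pdf with mean mu and variance 2*mu
    pdf = np.exp(-(lam - mu) ** 2 / (4.0 * mu)) / np.sqrt(4.0 * np.pi * mu)
    d = lam[1] - lam[0]
    return 1.0 - np.sum(pdf * np.log2(1.0 + np.exp(-lam))) * d

def J_inv(I, lo=1e-8, hi=200.0, tol=1e-9):
    """Invert J by bisection (J is monotonically increasing in mu)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if J(mid) < I:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the means of independent Gaussian LLR messages add, the arguments of J in (6)-(8) are simple sums of J^{−1} terms.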
For convergence of the IRA code at the j-th level, we need to satisfy the constraint

Σ_{d=1}^{D} λ^j_d J( (d − 1) J^{−1}(I_cs) + J^{−1}(I_ch) ) > I_sc   (9)

for all I_sc ∈ [0, 1), where I_ch is the information on the systematic nodes obtained from the (virtual correlation) channel. When designing the i-th level IRA code, we have I_ch = I(W_i; Y_d1 | W_1, …, W_{i−1}). Note that I_cs in (9) is in fact a function of I_sc, but we omit that dependence for notational convenience. In addition to (9), we have the trivial constraints Σ_{d=1}^{D} λ^j_d = 1 and λ^j_d > 0 for all d = 1, …, D. Under these constraints, the IRA code rate needs to be maximized, which is equivalent to maximizing Σ_{d=1}^{D} λ^j_d / d. By discretizing I_sc on the interval [0, 1), the optimization can be easily solved using linear programming.
V. SIMULATION RESULTS
In this section, we present simulation results for a 4-PAM CF relaying system with transmission rates of 1.0 and 1.5 b/s. The setup we consider corresponds to d_rd = 0.2 m, d_sr = 0.95 m and P_r = −6 dB. We list the optimized information-theoretic parameters required to achieve the target transmission rates in Table I. Note that if the above information-
TABLE I
OPTIMIZED PARAMETERS FOR 4-PAM CF RELAYING WITH d_sr = 0.95 m, d_rd = 0.2 m AND P_r = −6 dB. ALL POWERS ARE SPECIFIED IN dB AND RATES IN b/s.

Rate | α    | β_0  | β_1  | P_s1 | P_s2 | q    | R_1  | R_2
1.0  | 0.57 | 0.26 | 0.17 | 4.63 | 2.61 | 1.55 | 0.63 | 0.83
1.5  | 0.65 | 0.28 | 0.09 | 8.65 | 7.25 | 2.44 | 0.37 | 0.67
theoretic parameters are used to design the LDPC and IRA codes, the practical coding losses would imply that the rates of the optimized codes are less than those required. Therefore, we keep α, P_s2, β_0, and β_1 fixed at the theoretical values and increase P_s1 gradually until codes of the required rates are achieved. The power P_s2 is not increased from its theoretical minimum so as not to increase the interference that X_s2 causes while decoding W [5].
We simulate the optimized degree distributions at a finite block-length of N = 2 × 10^5 symbols. In this case too, we fix all parameters except P_s1 to the information-theoretic values and gradually increase this power until the desired bit-error rate (BER) of 10^−5 is achieved. Simulation results indicate that the overall power P_s required to achieve the target BER at a transmission rate of 1.0 b/s is only 0.56 dB more than the theoretical limit. At a transmission rate of 1.5 b/s, the proposed ML-CF scheme suffers a loss of only 0.63 dB from the theoretical bound.
VI. CONCLUSIONS
We have presented an ML-CF strategy for a half-duplex Gaussian relay network where the transmissions from the source and the relay are drawn from an M-ary PAM constellation. The compression of the signal received at the relay is achieved by quantizing it before applying SW compression. Numerical evaluation of the information-theoretic analysis indicates that it is sufficient to consider an M-level quantizer, i.e., one does not gain much by going beyond M levels. At the same time, one suffers a significant degradation in performance by considering fewer than M levels. A coding scheme using LDPC and IRA codes was also presented. Multi-level LDPC codes were used to encode the source message, whereas multiple IRA codes were used to implement multi-level DJSCC of the quantization indices. Simulation of the proposed methodology indicates performance close to the theoretical bound.
REFERENCES
[1] T. Cover and A. El Gamal, "Capacity theorems for the relay channel," IEEE Transactions on Information Theory, vol. 25, no. 5, pp. 572-584, Sep. 1979.
[2] A. Host-Madsen, "Capacity bounds for cooperative diversity," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1522-1544, Apr. 2006.
[3] Z. Liu, V. Stankovic, and Z. Xiong, "Practical compress-forward code design for the half-duplex relay channel," in Proc. Conf. on Information Sciences and Systems, 2005.
[4] Z. Liu, V. Stankovic, and Z. Xiong, "Wyner-Ziv coding for the half-duplex relay channel," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 5, 2005, pp. v-1113.
[5] M. Uppal, Z. Liu, V. Stankovic, and Z. Xiong, "Compress-forward coding with BPSK modulation for the half-duplex Gaussian relay channel," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4467-4481, Nov. 2009.
[6] M. Uppal, G. Yue, X. Wang, and Z. Xiong, "A rateless coded protocol for half-duplex wireless relay channels," IEEE Transactions on Signal Processing, vol. 59, no. 1, pp. 209-222, Jan. 2011.
[7] D. Slepian and J. Wolf, "Noiseless coding of correlated information sources," IEEE Transactions on Information Theory, vol. 19, no. 4, pp. 471-480, Jul. 1973.
[8] R. Blasco-Serrano, "Coding strategies for compress-and-forward relaying," Licentiate Thesis, Royal Institute of Technology (KTH), Stockholm, Sweden, 2010.
[9] A. Avestimehr, S. Diggavi, and D. Tse, "Wireless network information flow: A deterministic approach," IEEE Transactions on Information Theory, vol. 57, no. 4, pp. 1872-1905, Apr. 2011.
[10] V. Nagpal, I.-H. Wang, M. Jorgovanovic, D. Tse, and B. Nikolic, "Quantize-map-and-forward relaying: Coding and system design," in Proc. 48th Annual Allerton Conference on Communication, Control, and Computing, 2010, pp. 443-450.
[11] S. H. Lim, Y.-H. Kim, A. El Gamal, and S.-Y. Chung, "Noisy network coding," IEEE Transactions on Information Theory, vol. 57, no. 5, pp. 3132-3152, May 2011.
[12] A. Ashikhmin, G. Kramer, and S. ten Brink, "Extrinsic information transfer functions: Model and erasure channel properties," IEEE Transactions on Information Theory, vol. 50, no. 11, pp. 2657-2673, 2004.
[13] S.-Y. Chung, T. Richardson, and R. Urbanke, "Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation," IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 657-670, 2001.
[14] H. Imai and S. Hirakawa, "A new multilevel coding method using error-correcting codes," IEEE Transactions on Information Theory, vol. 23, no. 3, pp. 371-377, May 1977.
[15] G. Caire, G. Taricco, and E. Biglieri, "Bit-interleaved coded modulation," IEEE Transactions on Information Theory, vol. 44, no. 3, pp. 927-946, May 1998.
[16] U. Wachsmann, R. Fischer, and J. Huber, "Multilevel codes: Theoretical concepts and practical design rules," IEEE Transactions on Information Theory, vol. 45, no. 5, pp. 1361-1391, Jul. 1999.
[17] S. ten Brink, "Designing iterative decoding schemes with the extrinsic information transfer chart," AEU International Journal of Electronics and Communications, vol. 54, no. 6, pp. 389-398, 2000.
[18] C.-C. Wang, S. Kulkarni, and H. Poor, "Density evolution for asymmetric memoryless channels," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4216-4236, 2005.