ABSTRACT

Several properties of OFDM have made it an essential modulation scheme for high-speed transmission links, but one major drawback of OFDM is its large peak-to-average power ratio (PAPR). Here we review some of the PAPR reduction techniques and compare the performance of the clipping-and-filtering, partial transmit sequence and selected mapping methods for QAM-modulated OFDM. We also show analytically the relation between passband and baseband PAPR, and the criteria for optimum design of the phase rotation table for the selected mapping technique. Finally, we propose using the Chu sequence as the phase sequence in the SLM technique to obtain a reduction in PAPR.

General Terms
Peak-to-average power reduction, Orthogonal Frequency Division Multiplexing, Quadrature Amplitude Modulation

Keywords
Clipping and filtering; PTS (partial transmit sequence); SLM (selected mapping); Chu-SLM

1. INTRODUCTION
The OFDM signal amplitude varies widely, giving a high PAPR. As a consequence, high-power radio-frequency amplifiers introduce inter-modulation between the different subcarriers and additional interference, causing an increase in the bit error rate. RF power amplifiers should therefore operate over a very large linear region, to keep the signal peaks out of the non-linear region of the power amplifier, where they cause in-band distortion (inter-modulation among the subcarriers) and out-of-band radiation. To avoid this, the power amplifiers must be operated with a large power back-off, which leads to very inefficient amplification and an increase in transmitter power. This has driven the invention of various PAPR reduction techniques. This article discusses coding, clipping and filtering [1, 2], peak windowing [3], the partial transmit sequence [4] and the selected mapping technique [5], and describes the basic principle of each. Any PAPR reduction technique is selected at some cost: PAPR reduction capability, coding overhead, synchronization between transmitter and receiver, increase in transmit power, increase in bit error rate at the receiver, data rate loss, computational complexity, and in-band and out-of-band distortion. Here we have studied, through simulation results, the performance of the clipping-and-filtering, PTS and SLM based PAPR reduction techniques, and cited the selection criteria for these techniques based on the above parameters.

While calculating the PAPR, the actual time-domain OFDM signal, which is in analog form, must be taken into account, since the IFFT output samples miss some of the signal peaks. If the PAPR is calculated from these sample values, the result will be lower than the actual PAPR: an optimistic figure far from the real one, even though these sample values are sufficient for signal reconstruction. To address this, oversampling is performed. After oversampling, the denser set of samples is close to the real analog signal, and a PAPR calculation based on them gives the true PAPR. PAPR is not a problem for constant-amplitude signals; it is a problem only for non-constant-amplitude signals. Since systems such as MIMO-OFDM are based on OFDM, they also need PAPR reduction.

This article is organized as follows. In Section 2, a brief discussion of baseband and passband PAPR is followed by a discussion of various PAPR reduction techniques with their merits and demerits. In Section 3, the performance of three techniques (clipping and filtering, partial transmit sequence and selected mapping) is compared through simulation results. In Section 4, we first give the optimal design criteria for the phase rotation table of the SLM technique, then compare the performance of the Chu sequence with the other sequences used so far [6-8], and compare the results of classical SLM with Chu-SLM for QAM modulation schemes. Finally, conclusions are drawn in Section 5, followed by the references.

2. PAPR AND ITS REDUCTION IN OFDM SYSTEMS
The complex discrete-time baseband equivalent time-domain OFDM signal can be expressed as

$$x(n) = \mathrm{IFFT}\{X(k)\} = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} X(k)\, e^{\,j 2\pi n k / N}, \qquad n = 0, 1, \ldots, N-1 \qquad (1)$$
International Journal of Computer Applications (0975 – 8887)
Volume 35– No.6, December 2011
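The PAPR of the signal in eq. (1) and the oversampling point made above can be checked numerically. Below is a minimal sketch (Python; QPSK subcarriers, N = 64 and an oversampling factor L = 4 are illustrative assumptions, and the IFFT normalization convention does not affect the PAPR):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, trials = 64, 4, 2000

# Random QPSK symbols on N subcarriers, for many OFDM symbols at once.
X = (rng.choice([-1, 1], (trials, N)) + 1j * rng.choice([-1, 1], (trials, N))) / np.sqrt(2)

# Critically sampled OFDM symbols: plain N-point IFFT as in eq. (1).
x_nyq = np.fft.ifft(X, axis=1)

# L-times oversampling: zero-pad the spectrum in the middle, then an LN-point IFFT.
# The factor L makes the original Nyquist-rate samples an exact subset of the result.
X_pad = np.concatenate(
    [X[:, :N // 2], np.zeros((trials, (L - 1) * N)), X[:, N // 2:]], axis=1)
x_over = np.fft.ifft(X_pad, axis=1) * L

def papr(x):
    """Linear PAPR per row: peak power over mean power of the envelope."""
    p = np.abs(x) ** 2
    return p.max(axis=1) / p.mean(axis=1)

papr_nyq, papr_over = papr(x_nyq), papr(x_over)

# CCDF of the Nyquist-rate PAPR against the standard N-independent-samples
# approximation 1 - (1 - exp(-z))^N, for a linear threshold z (6.0 is about 7.8 dB).
z = 6.0
ccdf_emp = np.mean(papr_nyq > z)
ccdf_approx = 1 - (1 - np.exp(-z)) ** N
print(ccdf_emp, ccdf_approx)
```

The run illustrates both claims in the text: the PAPR measured on the oversampled waveform is never smaller than the Nyquist-rate estimate, and the empirical CCDF lies close to the analytical approximation.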
For the passband signal $S(t)$, the PAPR is defined as

$$\mathrm{PAPR}\{S(t)\} = \frac{\max |S(t)|^2}{E\left[|S(t)|^2\right]} = \frac{\max \left|\,\mathrm{Re}\{\tilde{s}(t)\, e^{\,j 2\pi f_c t}\}\right|^2}{E\left[\left|\,\mathrm{Re}\{\tilde{s}(t)\, e^{\,j 2\pi f_c t}\}\right|^2\right]} \qquad (5)$$

In the baseband case, the peak-to-mean envelope power ratio (PMEPR) is defined as

$$\mathrm{PMEPR}\{\tilde{s}(t)\} = \frac{\max |\tilde{s}(t)|^2}{E\left[|\tilde{s}(t)|^2\right]} \qquad (6)$$

where $\tilde{s}(t)$ is the complex baseband equivalent of the passband signal.

In discrete time, the signal of (1) oversampled by a factor $L$ can be written, with the symbol vector zero-padded to length $LN$, as

$$x(m) = \frac{1}{\sqrt{LN}} \sum_{k=0}^{LN-1} X(k)\, e^{\,j 2\pi m k / (LN)} \qquad (8)$$

and the PAPR is then evaluated over all the oversampled instants:

$$\mathrm{PAPR}\{x(m)\} = \frac{\max_m |x(m)|^2}{E\left[|x(m)|^2\right]}, \qquad m = 0, 1, \ldots, NL - 1 \qquad (9)$$

Now, to find the probability that the crest factor (CF) exceeds a level $z$, we have to consider the complementary CDF (CCDF). The amplitude of each time-domain sample is approximately Rayleigh distributed,

$$f_Z(z) = \frac{z}{\sigma^2} \exp\!\left(-\frac{z^2}{2\sigma^2}\right) = 2z\, e^{-z^2} \quad \left(\sigma^2 = \tfrac{1}{2}\right) \qquad (2)$$

and, treating the $N$ samples as mutually independent,

$$F_{Z_{\max}}(z) = P(Z_{\max} \le z) = P(Z_0 \le z)\, P(Z_1 \le z) \cdots P(Z_{N-1} \le z)$$

so that the CCDF of the PAPR is $1 - \left(1 - e^{-z^2}\right)^N$.

Coding-based reduction could not work for higher-order bit rates, and its reduction of PAPR is at the expense of the coding rate.

2.1 Clipping and Filtering
The non-linear clipping process introduces in-band distortion, causing degradation in the bit-error-rate performance of the system, and also causes out-of-band noise, which reduces the spectral efficiency. Using filtering with clipping [11], the out-of-band noise is reduced but the in-band noise is not. To reduce the in-band noise, each OFDM block is oversampled by padding the original input with zeros and then taking a longer IFFT. Use of forward error correction (FEC) codes with clipping and filtering [2] can reduce both kinds of noise and improves the BER performance and the spectral efficiency.

2.2 Peak Windowing
This method [3] removes the rarely occurring peak values. It provides a large reduction in PAPR along with the advantages of easy implementation, independence of the number of subcarriers and an undisturbed coding rate, at the cost of an increase in BER and out-of-band noise. Peak windowing with FEC codes [2] can compensate for the increase in BER. Any windowing can be
used, provided it has good spectral properties; examples are the cosine, Kaiser and Hamming windows. By removing peaks, the PAPR cannot be reduced beyond a certain limit, since the average value of the OFDM signal also decreases, which increases the PAPR again. The OFDM signal exhibits "bottoms" similar to its peaks; by raising these bottoms above a certain level, the average value of the OFDM signal can be shifted up, thereby reducing the PAPR. After this, the sample values are amplified, and the whole method is referred to as peak windowing with pre-amplification.

In the partial transmit sequence technique [4], the data block is partitioned into $P$ disjoint subblocks $X_p$, and, by the linearity of the IFFT, the transmitted signal is

$$x = \mathrm{IFFT}\left\{\sum_{p=1}^{P} b_p X_p\right\} = \sum_{p=1}^{P} b_p\, \mathrm{IFFT}\{X_p\} = \sum_{p=1}^{P} b_p x_p \qquad (10)$$

where the phase factors $b_p$ are chosen to minimize the PAPR of $x$.

3. COMPARISON OF PAPR REDUCTION TECHNIQUES
Here we have compared the performance of clipping-and-filtering, PTS and classical SLM based OFDM. Figure 1(a-b) shows the results for N = 255 and 510. The simulation is done taking the clipping ratio to be 0.8 for both N values; the clipping ratio is the ratio of the clip level (M) to the r.m.s. power of the OFDM signal. From these figures we can see that, as N increases, the CCDF plots for all techniques occur at a larger distance from the vertical axis; the ordering of the techniques is the same in both cases, and clipping and filtering gives the worst performance.

4. OPTIMAL SLM AND PERFORMANCE OF THE CHU SEQUENCE
After applying SLM, the OFDM signal is expressed [5], for $0 \le t < NT$ and $v = 0, 1, \ldots, V-1$, as

$$x_v(t) = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} X_n\, b_{v,n}\, e^{\,j 2\pi n \Delta f\, t} \qquad (11)$$
where $b_{v,n} = |b_{v,n}|\, e^{\,j\phi_{v,n}}$ and the $V$ uncorrelated phase-shift vectors of length $N$ are defined as

$$B_v = \left[\, b_{v,n} \,\right]_{n=0}^{N-1} \qquad (12)$$

The frequency-domain (unmodified) OFDM signal is given by the vector $\{X_k\}$, $k = 0, 1, 2, \ldots, N-1$, where $k$ is the subcarrier index and $N$ is the number of subcarriers. During the $V$ multiple representations of the same OFDM signal, the average power is unchanged, i.e.

$$E\left[\,|X_n^{(v)}|^2\right] = E\left[\,|X_k|^2\right] = \sigma^2 \qquad (13)$$

Equation (13) assumes that $X_k$ has zero mean and variance $\sigma^2$; hence all $\{x_n^{(v)}\}$, $v = 0, 1, \ldots, V-1$, contain the same information about $X_k$. If, for each $v = 0, 1, \ldots, V-1$ and each $n = 0, 1, \ldots, N-1$, $\{x_n^{(v)}\}$ is independently and identically distributed complex Gaussian with independent real and imaginary parts, the optimum design of the phase rotation table is obtained under the condition of asymptotic mutual independence between $\{x_n\}^{(v)}$ and $\{x_n\}^{(l)}$ for all $v \ne l$, with $E[e^{\,j\phi}] = 0$ for $\phi$ uniformly distributed in $[0, 2\pi)$. This is the optimal SLM condition.

The Chu sequence is given by

$$Y_i(k) = \begin{cases} e^{\,j\pi k i^2 / N}, & N \text{ even} \\ e^{\,j\pi k i (i+1) / N}, & N \text{ odd} \end{cases} \qquad \gcd(k, N) = 1 \qquad (14)$$

Fig. 2a CCDF of 16-QAM OFDM with Classical-SLM

With the Chu sequence used as the phase sequence, we have got better performance than the classical SLM. The complexity is much lower with Chu-SLM, since there is no need to send side information to the receiver. Since the passband PAPR of Chu-SLM is very low, it can be a promising PAPR reduction technique for high-data-rate passband applications. The computational time complexity is also much lower compared with all the other sequences, so it is practicable to use in a complex communication network.
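The Chu sequence of eq. (14) has unit-magnitude elements, so it can be used directly as an SLM phase sequence. A minimal sketch (Python; N = 64, V = 8 and the particular coprime roots k are illustrative assumptions, not the paper's simulation setup):

```python
import math
import numpy as np

def chu_sequence(N, k=1):
    """Chu sequence of eq. (14); requires gcd(k, N) == 1."""
    assert math.gcd(k, N) == 1
    i = np.arange(N)
    if N % 2 == 0:
        return np.exp(1j * np.pi * k * i * i / N)
    return np.exp(1j * np.pi * k * i * (i + 1) / N)

def papr(x):
    """Linear PAPR: peak power over mean power of the envelope."""
    p = np.abs(x) ** 2
    return p.max() / p.mean()

rng = np.random.default_rng(3)
N, V = 64, 8
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

# V phase sequences from Chu sequences with distinct roots k coprime to N;
# all elements have unit magnitude, so the subcarrier powers are unchanged.
roots = [k for k in range(1, 2 * N) if math.gcd(k, N) == 1][:V]
candidates = [np.fft.ifft(X * chu_sequence(N, k)) for k in roots]
best = min(candidates, key=papr)

print(10 * np.log10(papr(candidates[0])), 10 * np.log10(papr(best)))
```

Each candidate is generated deterministically from a known root k, and the minimum-PAPR candidate among the V alternatives is selected for transmission.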
Int. Jr. of Advanced Computer Engineering & Architecture
Vol. 2 No. 2 (June-December, 2012)
Abstract
It is well known that orthogonal frequency division multiplexing (OFDM) is a promising technique for achieving high data rates in multipath fading environments; hence, advanced technologies like LTE and WiMAX use it as their physical layer. The well-known disadvantage of OFDM is its high peak-to-average power ratio (PAPR). PAPR reduction using the selected mapping (SLM) technique is analyzed here. This PAPR reduction technique requires an extra side information (SI) index to be sent along with the transmitted OFDM signal, and an error in detecting these extra bits at the receiver leads to data loss. Without sending the SI index, detection is still possible using a sub-optimal algorithm at the receiver. This paper analyses the PAPR reduction performance using the complementary cumulative distribution function (CCDF) plot and the probability of SI detection error as per the criteria of WiMAX (IEEE 802.16e). The BER (bit error rate) performance is also analyzed.

Keywords: Orthogonal frequency division multiplexing (OFDM); peak to average power ratio (PAPR); selected mapping technique (SLM); complementary cumulative distribution function (CCDF); side information (SI)
1 INTRODUCTION
The IEEE 802.16 standard defines several wireless metropolitan area network (WMAN) technologies. WiMAX is a certification applied to 802.16 products tested by the WiMAX Forum. IEEE 802.16d stands for fixed WiMAX and IEEE 802.16e for mobile WiMAX. In mobile WiMAX the downlink modulation schemes are QPSK, 16-QAM and 64-QAM, and the uplink schemes are QPSK, 16-QAM and 64-QAM (optional). Here the simulation work is described using the modulation schemes QPSK and 16-QAM.
To satisfy the high data rate requirement in a frequency-selective fading environment, WiMAX uses OFDM as its physical layer [7]. The main disadvantage of OFDM is its high PAPR (peak-to-average power ratio). For a linear modulation scheme, a linear power amplifier is used at the transmitter. As the PAPR increases, the probability of the operating point of the linear power amplifier shifting into the saturation region at some instant becomes higher. This shifting of the operating point leads to in-band distortion and out-of-band radiation, which can be avoided by increasing the dynamic range of the power amplifier, but that in turn increases the size and cost of the power amplifier. Considering the power-constraint problem as well, it is necessary to reduce this PAPR. There are many techniques [3], such as clipping and filtering, coding, partial transmit sequence (PTS), selected mapping (SLM), tone reservation (TR) and tone injection (TI), for the reduction of PAPR, of which SLM [1] is a promising one; it is also known as conventional SLM. According to this technique, the main data block is divided into several independent blocks, each is converted into an OFDM symbol, and finally the symbol with the lowest PAPR is transmitted. Here the PAPR reduction and bit error rate analyses are done considering baseband transmission.

For the baseband complex signal, the PAPR is defined as the ratio between the maximum power and the average power of its envelope:

$$\mathrm{PAPR} = \frac{\max |x(t)|^2}{E\left[|x(t)|^2\right]} \qquad (1)$$

This paper is organized as follows. In Section 2, the SLM technique is presented. In Section 3, the simulation results for the WiMAX parameters are analyzed. Finally, the conclusion is presented in Section 4.
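As a small numerical sketch of this baseband PAPR definition for the two modulation schemes used here (Python; N = 256 and the unit-average-power constellations are illustrative assumptions):

```python
import numpy as np

def papr_db(x):
    """PAPR of the complex baseband envelope: max power / mean power, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(7)
N = 256

qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
levels = np.array([-3, -1, 1, 3]) / np.sqrt(10)      # unit-average-power 16-QAM levels
qam16 = rng.choice(levels, N) + 1j * rng.choice(levels, N)

for name, X in [("QPSK", qpsk), ("16-QAM", qam16)]:
    x = np.fft.ifft(X)                               # baseband OFDM envelope
    print(name, round(papr_db(x), 2))
```

For either constellation, a single OFDM symbol with 256 subcarriers typically lands in the vicinity of 8 to 12 dB of PAPR, which is the problem the SLM technique attacks.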
2 SLM TECHNIQUE
The conventional selected mapping (SLM) [1] is one of the promising techniques for reduction of the PAPR. The aim of this method is to generate many independent OFDM blocks from a single data block and then select the one having minimum PAPR. The independent OFDM sequences are obtained by finding independent phase sequences. Let us consider $U$ phase sequences, each with sequence length $N$ (the number of subcarriers):

$$\Phi^{(u)} = \left[\phi_0^{(u)}, \phi_1^{(u)}, \ldots, \phi_{N-1}^{(u)}\right], \qquad u \in \{1, 2, \ldots, U\} \qquad (2)$$

where

$$b_n^{(u)} = e^{\,j\phi_n^{(u)}} \qquad (3)$$

The random phases $\phi_n^{(u)}$ in each phase sequence are uniformly distributed in $[0, 2\pi)$, satisfying the condition [4, 6] $E[e^{\,j\phi}] = 0$. In this way the $U$ alternative phase sequences can be found. According to Fig. 1, the $U$ alternative candidate vectors are generated by multiplying each phase sequence with the original data block. The next job is to find the IDFT of the alternative candidate vectors using the IFFT algorithm, which yields $U$ alternative OFDM signals. Out of all these OFDM signals, the one with minimum PAPR is transmitted.

The information about the phase sequence used to obtain the minimum-PAPR OFDM signal should be sent along with the selected signal. This extra information is known as the side information (SI) index, which is a set of $\lceil \log_2 U \rceil$ bits. To detect the transmitted data exactly, the receiver must detect this SI index perfectly; the whole transmitted data block may be lost if the SI index is detected erroneously. With a slight modification of this classical SLM technique, the transmission of the SI index can be avoided. That modification is described in the next section.
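The candidate generation and the role of the SI index can be sketched as follows (Python; U = 16 random phase sequences with phases uniform in [0, 2π), noiseless reception, and all variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(4)
N, U = 64, 16
si_bits = int(np.ceil(np.log2(U)))             # SI index needs ceil(log2 U) bits

# U random phase sequences; phases uniform in [0, 2*pi) so that E[exp(j*phi)] = 0.
B = np.exp(1j * rng.uniform(0, 2 * np.pi, (U, N)))

X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
cands = np.fft.ifft(X * B, axis=1)             # the U alternative OFDM signals
p = np.abs(cands) ** 2
u_star = int(np.argmin(p.max(axis=1) / p.mean(axis=1)))   # minimum-PAPR candidate
x_tx = cands[u_star]

# Receiver (noiseless): undo the phase sequence indicated by the SI index.
X_hat_ok = np.fft.fft(x_tx) / B[u_star]             # correct SI -> data recovered
X_hat_bad = np.fft.fft(x_tx) / B[(u_star + 1) % U]  # wrong SI -> whole block lost

print(si_bits, np.allclose(X_hat_ok, X), np.allclose(X_hat_bad, X))
```

With the correct SI index the data block is recovered exactly, while dividing by the wrong phase sequence scrambles the entire block, which is the data-loss risk described above.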
2.1 MODIFIED SLM TECHNIQUE
Fig. 1 describes the concept of the classical SLM [1]. The modification is made in the design of the phase sequences. In the classical SLM technique the phases are chosen at random; the new SLM technique [2] instead constructs each phase sequence from a repeated pattern governed by an extension factor (the condition on the extension factor is given in [2]). With this design of the phase sequences, the detection of the SI index at the receiver becomes easy, and it can be done using the sub-optimal algorithm [2]. According to this sub-optimal algorithm, out of the $U$ phase sequences, the one that gives the minimum energy difference between the transmitted and the received signal is selected; this selected phase sequence should be the correct one, giving the information about the transmitted data block. There may be some probability of error in detecting the correct phase sequence, which is analyzed here as the probability of SI index error with respect to different values of the extension factor. The bit error rate performance is also analyzed for a fixed value of the extension factor.
3 SIMULATION RESULTS
To analyze the different results, the transmission channel is considered to be quasi-static frequency-selective Rayleigh fading with equal-power taps. The model for a nonlinear solid-state power amplifier (SSPA) [5] is considered at the transmitter output. Rapp's SSPA model relates the input amplitude $A$ to the output amplitude as

$$g(A) = \frac{K A}{\left[1 + \left(K A / A_0\right)^{2p}\right]^{1/(2p)}}$$

where $K$ is the small-signal gain, $A_0$ the saturation amplitude and $p$ the smoothness parameter.

The modulation schemes considered for the simulation work are QPSK and 16-QAM. To analyze the PAPR reduction performance, the oversampling factor [8] is taken into account: the actual data transmission is in the analog domain, but we analyze in the discrete-time domain, so to get the correct PAPR value the oversampling factor must be considered. The discrete-time baseband OFDM signal is given as

$$x[n] = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} X_k\, e^{\,j 2\pi n k / (LN)}, \qquad n = 0, 1, \ldots, LN - 1$$

where $L$ is the oversampling ratio; this is also known as the $LN$-point inverse fast Fourier transform (IFFT). The power amplifier is applied before transmitting the OFDM signal having minimum PAPR; for the simulation model of the power amplifier, Rapp's model of a solid-state power amplifier (SSPA) [5] is used, with smoothness parameter $p$, small-signal gain $K$ and an input back-off (IBO) of 7 dB. For the channel, equal-power taps with complex Gaussian time-domain coefficients are assumed.

Fig. 1 shows the complementary cumulative distribution function (CCDF) of the PAPR reduction obtained by using the modified SLM technique for 16-QAM modulation, for a fixed extension factor and the used numbers of data subcarriers. An increased number of subcarriers leads to an increase in the PAPR value: as the number of subcarriers grows, the IFFT size increases, which leads to more additions and multiplications and that leads to high PAPR.
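Rapp's AM/AM characteristic can be sketched as below (Python; this is the standard form of the SSPA model cited as [5], with purely illustrative parameter values: unity small-signal gain and saturation amplitude, and smoothness p = 2):

```python
import numpy as np

def rapp_sspa(x, gain=1.0, a_sat=1.0, p=2.0):
    """Rapp SSPA model: AM/AM compression, no AM/PM distortion.
    Output amplitude = gain*A / (1 + (gain*A/a_sat)^(2p))^(1/(2p))."""
    a = np.abs(x)
    g = gain * a / (1.0 + (gain * a / a_sat) ** (2 * p)) ** (1.0 / (2 * p))
    return g * np.exp(1j * np.angle(x))          # keep the input phase unchanged

# Small inputs pass nearly linearly; large inputs are compressed toward a_sat.
x = np.array([0.01, 0.1, 1.0, 10.0], dtype=complex)
y = rapp_sspa(x, gain=1.0, a_sat=1.0, p=2.0)
print(np.abs(y))
```

A larger smoothness parameter p makes the transition into saturation sharper, approaching an ideal soft limiter; this is why a high-PAPR OFDM signal driven near saturation suffers in-band distortion and out-of-band radiation.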
3.1 Probability of SI detection error
In addition to PAPR reduction in WiMAX using the modified SLM technique, our aim is to avoid the transmission of the SI index along with the selected OFDM symbol. From Fig. 2 it can be seen that, as the extension factor increases, the probability of error in detecting the SI index at the receiver is lower for QPSK modulation than for 16-QAM. We also find that increasing the number of subcarriers results in a lower probability of detection error; this is because the pattern defined by the extension factor is repeated many times in a phase sequence. The sub-optimal algorithm [2] is used for the detection of the SI index at the receiver. This algorithm performs better for QPSK than for 16-QAM because the energy per symbol of QPSK is constant.
Fig.2 CCDF for the PAPR obtained using modified SLM technique for the data subcarriers of WIMAX
3.2 Bit error rate performance
The BER performance is analyzed considering a fixed value of the extension factor. For QPSK the BER graphs are the same because the energy per symbol is constant, but for 16-QAM this holds good only at higher SNR (signal-to-noise ratio).
4 Conclusion
In WiMAX our aim is to obtain a high data rate together with a long communication range. By using OFDM as the physical layer we obtain a high data rate, and the PAPR is decreased using the new SLM technique. Our simulations of the OFDM scheme with QPSK and 16-QAM modulation show that this technique performs well for a large number of subcarriers; in particular, the probability of SI detection error improves as the extension factor and/or the number of subcarriers increases. If we use this technique in WiMAX, we pay a significant price in receiver complexity due to the use of an SI detection block.
References
[1] R. W. Bauml, R. F. H. Fischer, and J. B. Huber, "Reducing the peak-to-average power ratio of multicarrier modulation by selected mapping," Electron. Lett., vol. 32, no. 22, pp. 2056-2057, Oct. 1996.
[2] S. Y. Le Goff, S. S. Al-Samahi, B. K. Khoo, C. C. Tsimenidis, and B. S. Sharif, "Selected mapping without side information for PAPR reduction in OFDM," IEEE Trans. Wireless Commun., vol. 8, no. 7, pp. 3320-3325, Jul. 2009.
[3] S. H. Han and J. H. Lee, "An overview of peak-to-average power ratio reduction techniques for multicarrier transmission," IEEE Wireless Commun., vol. 12, no. 2, Apr. 2005.
[4] G. T. Zhou and L. Peng, "Optimality condition for selected mapping in OFDM," IEEE Trans. Signal Process., vol. 54, no. 8, pp. 3159-3164, Aug. 2006.
[5] E. Costa, M. Midrio, and S. Pupolin, "Impact of amplifier nonlinearities on OFDM transmission system performance," IEEE Commun. Lett., vol. 3, no. 2, pp. 37-39, Feb. 1999.
[6] A. D. S. Jayalath, C. Tellambura, and H. Wu, "Reduced complexity PTS and new phase sequences for SLM to reduce PAP of an OFDM signal," in Proc. IEEE VTC 2000, vol. 3, pp. 1914-1917.
[7] J. Andrews, A. Ghosh, and R. Muhamed, "Fundamentals of WiMAX: Understanding Broadband Wireless Networking," Pearson, 2008.
[8] Y. S. Cho, J. Kim, W. Y. Yang, and C.-G. Kang, "MIMO-OFDM Wireless Communications with MATLAB," John Wiley & Sons, 2010.
International Journal of Computer Applications (0975 – 8887)
Volume 56– No.4, October 2012
ABSTRACT

OFDM with the quadrature amplitude modulation (QAM) technique can be used for high-speed optical applications. As the order of modulation increases, the bit error rate (BER) increases. Forward error correction (FEC) coding such as LDPC coding is generally used to improve the BER performance. LDPC provides a large minimum distance, and the power efficiency of an LDPC code increases significantly with the code length. In this paper we compare the soft-decision and hard-decision decoding algorithms for LDPC codes. A long irregular LDPC code is simulated over the BIAWGN channel, demonstrating that LDPC coded OFDM with soft-decision decoding provides a much lower bit error rate as well as a larger gain in transmitter power, thus making the link more power efficient than with hard-decision decoding. Through simulation, we show the advantages of using this long irregular LDPC coded OFDM link in optical wireless communication (OWC).

General Terms
Orthogonal Frequency Division Multiplexing, Low Density Parity check coding, Quadrature Amplitude Modulation.

Keywords
Long irregular LDPC code, BIAWGN channel, soft decision decoding, message-passing (MP) algorithm, error rate floors.

1. INTRODUCTION
OFDM provides an effective and low-complexity means of eliminating inter-symbol interference for transmission over frequency-selective fading channels. In OFDM the subcarrier frequencies are chosen in such a way that the signals are mathematically orthogonal over one OFDM symbol period [2]. During the past decades, channel coding has been used extensively in most digital transmission systems, from those requiring only error detection to those needing very high coding gains. In the past, optical communication systems ignored channel coding, until it became clear that it could be a powerful, yet inexpensive, tool to add margins against line impairments such as amplified spontaneous emission (ASE) noise, channel crosstalk, nonlinear pulse distortion, and fiber aging-induced losses [3, 7]. Nowadays, channel coding is standard practice in many optical communication links.

For any channel, the bandwidth, data rate, noise and error rate are related to each other. The greater the bandwidth, the greater the cost; all transmission channels of practical interest are limited in bandwidth by the physical properties of the transmission medium. To use bandwidth efficiently in a digital transmission, it is required to obtain the highest possible data rate at a particular limit on the error rate, and noise is the main constraint on that data rate. If binary signals are transmitted, the supported data rate is at most twice the bandwidth; but using multilevel signaling, the data rate can be increased by a factor of log2 M, where M is the number of signal levels. Here we have used QAM-modulated OFDM to achieve a high data rate.

As the data rate increases, the bit duration gets shorter, so more bits are affected by a given pattern of noise; hence a higher data rate leads to a higher error rate. The remedy is to increase the signal-to-noise ratio (SNR), which sets the upper bound on the achievable data rate. Shannon's formula assumes only white (thermal) noise, and it does not account for impulse noise or for distortion due to attenuation and delay. While Shannon's formula represents the theoretical maximum that can be achieved, in practice much lower rates are reached; it does not suggest a particular scheme, but rather provides a yardstick for finding a suitable signal code to achieve error-free transmission [12]. In this paper we use a long irregular LDPC code, which addresses the questions raised by Shannon's theorem when applied to several practical communication applications.

The rest of the paper is organized as follows. Section 2 describes the problem formulation. An overview of LDPC codes is given in Section 3. A comparison of the performance of LDPC coded OFDM is made in Section 4. Finally, a few conclusions are drawn in Section 5.

2. PROBLEM FORMULATIONS
For a signal transmitted without coding, an SNR of around 10.5 to 11 dB is required to achieve a BER of 10^-6. Use of a Reed-Solomon (RS) code gave a coding gain of 4 dB, leaving around 6 dB of gain still untapped; even BCH and convolutional codes reached only about 5 dB and could not get down to within 1 or 2 dB. So, for a long time there was a belief that it is not possible to do better than this 4 or 5 dB of coding gain. Recently, Turbo and LDPC codes were discovered to achieve large coding gains. A comparative study of existing coding techniques is presented in Table 1.

Though Turbo and LDPC codes both belong to the family of compound codes [7, 8], LDPC codes have the following advantages over Turbo codes. In a Turbo code, due to the presence of low-weight codewords, there is a chance of a codeword being mistaken for a nearby codeword because of channel noise; the construction of an LDPC code avoids the presence of low-weight codewords. Here we use a very long irregular code which fully exploits the distance properties of LDPC codes. The presence of low-weight codewords causes the error-floor region of Turbo codes to lie around a bit error rate (BER) of 10^-5 to 10^-6, while in contrast LDPC codes have error floors around 10^-6 to 10^-8.
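The bandwidth, signaling-level and SNR relations above can be made concrete with assumed figures (Python; a 1 MHz channel and 20 dB SNR are purely illustrative, not values from this paper):

```python
import math

B = 1e6                                     # channel bandwidth in Hz (assumed)
snr_db = 20.0
snr = 10 ** (snr_db / 10)

nyquist_binary = 2 * B                      # binary signaling: at most 2B bits/s
M = 16
nyquist_multilevel = 2 * B * math.log2(M)   # multilevel gain: factor log2(M)
shannon = B * math.log2(1 + snr)            # Shannon capacity: bound set by SNR

print(nyquist_binary, nyquist_multilevel, shannon)
```

Multilevel signaling raises the Nyquist-limited rate by a factor of log2 M (here 8 Mb/s for 16 levels), but Shannon's formula caps what any scheme can actually deliver at this SNR (here about 6.66 Mb/s), which is exactly the yardstick role described above.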
For an irregular low-density parity-check code [6], the degrees of each set of nodes are chosen according to some distribution. In the construction of an irregular LDPC code, the first step involves selecting a profile that describes the desired number of columns of each weight and the desired number of rows of each weight. The second step is a construction method, i.e. an algorithm for putting edges between the vertices in a way that satisfies the constraints; the edges are placed "completely at random" subject to the profile constraints. The number of 1's in the irregular LDPC code matrix is determined by this profile.

Classical block codes are decoded with the ML (maximum-likelihood) decoding algorithm. Reviewing the decoding techniques, one can state that if no extra information is known about the codeword C other than the information from the channel, the ML decoder is the best one for choosing the correct codeword. But if the decoder has a priori information about C, it is desirable to use a more sophisticated decoding scheme such as a maximum a posteriori (MAP) or block-MAP decoder. For these decoders, the value of K (the number of message bits) must not be very large, since large block lengths result in large parity-check and generator matrices, making decoding practically impossible.
Instead, the iterative decoding algorithms used for LDPC codes can handle large block lengths and produce accurate estimates of the probability P(Ci | channel output for Ci) using repeated low-complexity processes. The irregular structure of the parity-check matrix of LDPC codes enables them to show high performance with respect to the Shannon limit [11].

3.1 Encoding of LDPC codes
The set of valid codewords consists of those which satisfy the parity-check constraints. The generator matrix is used to map the messages to the codewords [11, 13, 14]. The (j, i)-th entry of G is 1 if the j-th message bit plays a role in determining the i-th codeword bit. All possible linear combinations of the rows of G give the set of codewords for the code with generator G. The generator matrix G satisfies the following equation:

$$G H^T = 0 \qquad (4)$$

The steps for encoding after designing the parity-check matrix are as follows:

i. Put H in row-echelon form to get a new matrix H_gr.
ii. Convert H_gr to reduced row-echelon form, denoted H_grr.
iii. Put H_grr into standard form, H_grr = [A | I_{N-K}], where A is an (N-K) by K binary matrix.
iv. Form the systematic generator matrix as G = [I_K | A^T], which satisfies (4).
v. Map the message to the codeword by the relation C = mG, where m is the message vector to be encoded.

3.2 Decoding of LDPC codes
To introduce LDPC decoding algorithms, we note that Gallager [14] also provided a decoding algorithm that is typically near optimal. The iterative decoding algorithms used in LDPC decoding estimate the probability distribution of variables in graph-based models and come under different names depending on the context. In the message-passing algorithm, messages pass back and forth between the bit and check nodes until equation (3) is satisfied.

The hard-decision message-passing algorithm is known as the bit-flipping algorithm, and the passed messages are binary in nature [11, 13]. A binary hard decision about each received bit is made by the detector; this is the scheme implemented in [1]. The problem with this type of decoding is that the decoder performance is not good, because it does not take the likelihood values into account; it only takes a binary decision about the occurrence of each transmitted bit. Hence the bit error rate (BER) plots do not have good error-rate floors.

The sum-product algorithm (SPA) is the soft-decision message-passing algorithm. It is similar to the bit-flipping algorithm except that the messages passed between bit nodes and check nodes are probabilities. It has two versions: a probability-domain version that computes the a posteriori probability (APP), and a log-domain version that computes log-likelihood ratios (LLRs). Both versions use likelihood values in decoding, but the log-domain version is better, since in the log domain the complex multiplications are converted to additions, resulting in a low-cost implementation of the decoder with more stable performance [9, 11, 13]. Our effort here utilizes the log-domain SPA, so our discussion is limited to it, as explained below.

3.2.1 Sum-product algorithm (soft-decision algorithm)
If we transmit a codeword with N bits, the APP is the probability that a given bit in the transmitted codeword equals 1 or 0, given the channel output for that bit. The APP ratio, or likelihood ratio (LR), is given by

$$l(c_j) \triangleq \frac{\Pr(c_j = 0 \mid \text{channel output})}{\Pr(c_j = 1 \mid \text{channel output})} \qquad (5)$$

and the log-likelihood ratio is

$$L(c_j) \triangleq \log \frac{\Pr(c_j = 0 \mid \text{channel output})}{\Pr(c_j = 1 \mid \text{channel output})} \qquad (6)$$

The output of the bandpass demapper [1] is L(c_j). In each iteration, three quantities are updated using the following equations.
$$L(r_{ij}) = 2 \tanh^{-1}\!\left( \prod_{j' \in V_i \setminus j} \tanh\!\left( \tfrac{1}{2} L(q_{j'i}) \right) \right)$$

$$L(q_{ji}) = L(c_j) + \sum_{i' \in C_j \setminus i} L(r_{i'j})$$

$$L(Q_j) = L(c_j) + \sum_{i \in C_j} L(r_{ij}) \qquad (7)$$

Table 1. Comparative study of coding techniques

Coding Technique    | Historical Timeline | Coding Gain (approx.) | Distance from Shannon limit (approx.)
Convolutional Code  | 1955                | 5 dB                  | 5 dB
Regular LDPC        | 1960                | 9 dB                  | 1~1.5 dB
Turbo Code          | 1993                | 9.3 dB                | 0.7 dB
Irregular LDPC      | 1999                | 10.5 dB               | 0.0045 dB
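Putting the encoding of Section 3.1 and one update round of eq. (7) together, here is a toy sketch (Python; a small (7, 4) Hamming-style parity-check matrix stands in for a long irregular LDPC matrix, and the channel LLRs are synthetic):

```python
import numpy as np

# Toy parity-check matrix already in standard form H = [A | I] (step iii);
# a real irregular LDPC matrix would be large and sparse.
A = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 1, 1]])
M, K = A.shape                            # M = N - K parity checks
H = np.hstack([A, np.eye(M, dtype=int)])

# Steps iv/v: systematic generator G = [I_K | A^T], then C = mG (mod 2); G H^T = 0 (eq. (4)).
G = np.hstack([np.eye(K, dtype=int), A.T])
assert not ((G @ H.T) % 2).any()
m = np.array([1, 0, 1, 1])
c = (m @ G) % 2                           # codeword: satisfies every parity check

# Synthetic channel LLRs L(c_j): positive means bit 0; one bit is wrong with low confidence.
L_c = (1 - 2 * c.astype(float)) * 4.0
L_c[2] *= -0.25                           # bit 2 received with the wrong sign, weakly

# One log-domain SPA iteration (eq. (7)).
L_q = H * L_c                             # initialise bit-to-check messages with L(c_j)
L_r = np.zeros_like(L_q)
for i in range(M):                        # check-node update: L(r_ij)
    idx = np.flatnonzero(H[i])
    for j in idx:
        t = np.prod(np.tanh(L_q[i, idx[idx != j]] / 2.0))
        L_r[i, j] = 2.0 * np.arctanh(np.clip(t, -0.999999, 0.999999))
L_Q = L_c + (H * L_r).sum(axis=0)         # total LLR: L(Q_j) = L(c_j) + sum_i L(r_ij)
c_hat = (L_Q < 0).astype(int)             # hard decision on the total LLRs
print(c_hat, (c_hat == c).all())          # the unreliable bit is corrected
```

Even one iteration corrects the low-confidence bit here; real decoders repeat the bit-node and check-node updates until all parity checks are satisfied or an iteration limit is reached.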
4. PERFORMANCE COMPARISION OF
LDPC CODED OFDM
The simulation environment is same as in [1] except that the
LDPC decoding type is soft decision type using log domain
SPA as discussed in Section 3.
This system is simulated entirely following the block diagram in Figure 1 of reference [1], by introducing the LDPC encoder block at label ‘A’ and the LDPC decoder block at label ‘Q’, and taking an AWGN channel. The simulation is carried out using Matlab. We consider an irregular LDPC code with a parity check matrix of size 32400 by 64800, stored as a sparse logical matrix. The system was simulated for OFDM with 8, 16, 32 and 64 QAM. Columns 32401 to 64800 of the parity check matrix form a lower triangular matrix; only the elements on its main diagonal and the sub-diagonal immediately below it are 1’s. Since the last N-K columns of the parity check matrix are of lower triangular type, the encoding is referred to as the forward substitution method, as discussed in Section 3. The code rate is 1/2.

Figure 1: BER of (32400, 64800) LDPC coded OFDM for 16, 32 and 64 QAM with Hard-decision Decoding
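The forward substitution encoding described above can be sketched as follows, assuming H = [A | T] with T lower triangular, carrying 1's on its main diagonal and on the sub-diagonal immediately below it. The toy sub-matrix A below is an illustrative assumption, not the paper's actual 32400-column sub-matrix.

```python
def encode_forward_substitution(A, m):
    """Encode message m (list of bits) for H = [A | T], where T is lower
    bidiagonal (1's on the main diagonal and the sub-diagonal below it).
    The parity bits p solve A*m + T*p = 0 (mod 2) by forward substitution:
    row 0 gives p_0 = s_0, and row i gives p_i = s_i + p_{i-1} (mod 2)."""
    n_k = len(A)                       # number of parity bits (N - K)
    # Syndrome of the message part: s = A * m (mod 2)
    s = [sum(a * b for a, b in zip(row, m)) % 2 for row in A]
    p = [0] * n_k
    p[0] = s[0]                        # row 0 of T involves p_0 only
    for i in range(1, n_k):
        p[i] = (s[i] + p[i - 1]) % 2   # row i: p_i + p_{i-1} = s_i
    return m + p                       # systematic codeword [message | parity]

# Toy A (3 parity checks on a 3-bit message); illustrative only.
A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
print(encode_forward_substitution(A, [1, 0, 1]))  # → [1, 0, 1, 1, 0, 0]
```

Because T is bidiagonal, each parity bit follows from the previous one in constant work, so encoding is linear in the block length; this is what makes the structure attractive for a 64800-bit code.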
In reference [1], the LDPC decoder is of hard decision type, and here the LDPC decoder is of soft decision type. The information is binary in nature; the channel can thus be referred to as a BIAWGN channel. Fig. 1 shows the BER performance of 16, 32 and 64 QAM modulated LDPC coded OFDM in hard-decision mode [1]. The number of iterations performed at the decoder is 40. As the number of iterations increases, a more closely converged value is obtained. Fig. 2 and Fig. 3 show the BER performance with soft decision LDPC decoding for decoder iterations of 10 and 20 respectively. It is observed from the three figures that in the lower SNR region, LDPC coded 16-QAM modulated OFDM gives an error floor around 10^-5 with 40 decoder iterations of hard decision decoding, nearly between 12-14 dB of SNR, while with soft decision decoding it is around 10^-7 at approximately 8 dB of SNR with only 10 decoder iterations, and around 10^-9 at nearly 8 dB of SNR with 20 decoder iterations. This clearly illustrates that with soft decision decoding, increasing the decoder iterations also improves the error floor performance. Hence, with soft decision decoding there is a gain in power compared to the hard decision case, with a smaller number of decoder iterations.

Figure 2: BER of LDPC (32400, 64800) coded OFDM for 8 and 16 QAM with Soft Decision Decoding
5. CONCLUSION
With soft-decision decoding, the BER performance of this long Irregular LDPC code is much better than that with hard decision decoding for all the higher order QAM modulated OFDM signals. The error floor region with this long block length Irregular LDPC code is within 10^-5 to 10^-6 with hard decision decoding, within 10^-7 to 10^-8 with soft decision decoding of 10 decoder iterations, and 10^-5 to 10^-9 with soft decision decoding of 20 decoder iterations. This is a better result as far as coding gain and error floor region are considered. The QAM modulated OFDM signal can be a good match to the higher information carrying capacity of optical carriers. But the BER increases as the modulation order increases. Using a long Irregular LDPC code can reduce the BER while approaching the Shannon limit. Hence, we can use this code with higher order QAM modulated OFDM to improve the performance of the optical AWGN channel in free space optical communication. With the advantage of OFDM in dealing with Inter Symbol Interference (ISI), this work can also be extended to deal with ISI due to chromatic dispersion and polarization mode dispersion in optical communication systems. Further, due to the good error floor region with soft decision decoding, it can be used to mitigate the effect of four wave mixing on BER performance. In another scenario, without taking weather effects into account, the deep-space channel is a perfect additive white Gaussian noise channel. Also, deep-space communications are not constrained in block length. Therefore, we can use this long irregular code in deep-space communication applications to get better results, since a larger block length results in a higher minimum distance between codewords, thus increasing the error correction capability.

Fig. 3: BER of LDPC (32400, 64800) coded OFDM for 16, 32 and 64 QAM with Soft Decision Decoding

6. FUTURE RESEARCH DIRECTION
With the above method of encoding, the encoding complexity can become prohibitive as we move to long codes of length of the order of 10^5 or 10^6. Use of a structured parity check matrix can be useful in reducing this implementation complexity. This is a current topic of investigation.

7. REFERENCES
[1] Mishra, M.; Patra, S.K.; Turuk, A.K., “Performance of Power efficient LDPC coded OFDM over AWGN channel”, RAIT 2012, pp. 185-191.
[2] Eldomabiala, Mathias, Coinchon, Karim Maouche, “Study of OFDM Modulation,” Ierucom Institute, December 1999.
[3] Enrico Forestieri, “Optical Communication Theory and Techniques”, Springer, 2005.
[4] C. Berrou and A. Glavieux, “Near optimum error correcting coding and decoding: Turbo codes,” IEEE Trans. Commun., vol. 44, pp. 1261-1271, Oct. 1996.
[5] R. Gallager, “Low-density parity-check codes,” IEEE Trans. Inform. Theory, vol. 8, no. 1, pp. 21-28, Jan. 1962.
[6] T. J. Richardson and R. L. Urbanke, “Efficient encoding of low-density parity-check codes,” IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 638-656, Feb. 2001.
[7] William Shieh and Ivan Djordjevic, “OFDM for Optical Communications”, Academic Press, October 2009.
[8] Shu Lin and Daniel J. Costello, Jr., “Error Control Coding”, second edition, Prentice Hall, 2004.
[9] Ferrari and Raheli, “LDPC Coded Modulations”, Springer, 2009.
[10] R. E. Blahut, “Algebraic Codes for Data Transmission”, 1st edition, Cambridge University Press, 2003.
[11] Bernhard M.J. Leiner, “LDPC Codes - a brief Tutorial”, April 8, 2005.
[12] William Stallings, “Wireless Communications and Networks”, Pearson Education, 2002.
[13] William E. Ryan, “An Introduction to LDPC Codes”, August 19, 2003.
[14] Robert G. Gallager, “Low-Density Parity-Check Codes”, 1963.
[15] Amin Shokrollahi, “LDPC Codes: An Introduction”, April 2, 2003.

8. AUTHORS PROFILE
Ms. Madhusmita Mishra: Received her B.E. Degree in Electronics and Communication Engineering from Utkal University, Orissa in 1997. She completed her M.E. Degree in Communication Control and Networking from R.G.P.V., Bhopal in 2005. She has been serving the National Institute of Technology, Rourkela, India as a Research Scholar in the Department of Electronics and Communication Engineering since 2009. Her specialization is focused on Communication System Design.

Prof. Sarat Kumar Patra: Received his B.Sc. (Engg.) from UCE Burla in the Electronics and Telecommunication Engineering discipline. After graduation he served India's prestigious Defence Research and Development Organisation (DRDO) as a scientist. He completed his M.Tech. at NIT Rourkela (formerly REC Rourkela) with a specialization in Communication Engineering in 1992. He received his PhD from the University of Edinburgh, UK in 1998. He has been associated with different professional bodies, as a senior member of IEEE and a Life Member of IETE (India), IE (India), CSI (India) and ISTE (India). He has published more than 70 international journal and conference papers. Currently he is working as a Professor in the Department of Electronics & Communication Engineering at NIT Rourkela. His current research areas include mobile and wireless communication, communication signal processing and soft computing.

Prof. Ashok Kumar Turuk: Dr. Ashok Kumar Turuk received his BE and ME in Computer Science and Engineering from the National Institute of Technology, Rourkela (formerly Regional Engineering College, Rourkela) in the years 1992 and 2000 respectively. He obtained his PhD from IIT Kharagpur in the year 2005. Currently he is working as an Associate Professor in the Department of Computer Science & Engineering at NIT Rourkela. His research interests include Ad-Hoc Networks, Optical Networks, Sensor Networks, Distributed Systems and Grid Computing.
International Journal of Applied Information Systems (IJAIS) – ISSN : 2249-0868
Foundation of Computer Science FCS, New York, USA
Volume 4– No.8, December 2012 – www.ijais.org
ABSTRACT
A long Irregular LDPC code that performs at rates extremely close to the Shannon capacity has been taken. The code has carefully chosen degree patterns. Simulations have been done with Hard and Soft-decision decoding to compare the performance of this code at the rates 1/2, 1/3, 1/4, 2/3, 2/5, 3/4, 3/5, 4/5, 5/6, 8/9 and 9/10 over various channels: the AWGN channel, the Rayleigh fading channel and the Rician fading channel. The weak dependence of BER performance on the channel is explored here, along with the concept of the computational cutoff rate, which represents an upper limit on the rate of transmission for practically instrumentable reliable communications.

General Terms
Orthogonal frequency division multiplexing (OFDM), Quadrature Amplitude Modulation (QAM)

Keywords
LDPC (Low Density Parity Check), Upper Bound, Cut off Rate, Bhattacharyya bound, Hard-decision, Soft-decision

1. INTRODUCTION
For any channel, the bandwidth, data rate, noise and error rate are related to each other. The greater the bandwidth, the greater will be the cost. All transmission channels of any practical interest are limited in bandwidth due to the constrained physical properties of the transmission medium. For digital data transmission, in order to use bandwidth efficiently, it is required to get the highest possible data rate at a particular limit of error rate for a given bandwidth, to which the main constraint is the noise. If binary signals are transmitted, then the supported data rate will be twice the bandwidth. But, using multilevel signaling, the data rate can be increased by a factor of log2 M, where M is the number of signal levels [5]. Now, as the data rate increases, the bits become shorter in duration and as a result more bits are affected by a given pattern of noise, concluding that a higher data rate leads to a higher error rate. The solution now is to increase the signal to noise ratio (SNR), which sets the upper bound on the achievable data rate. Shannon's formula assumes only white noise (thermal noise) and does not account for impulse noise or distortion due to attenuation and delay. While Shannon's formula represents the theoretical maximum that can be achieved, in practice much lower rates are achieved. Similarly, it does not suggest, but rather provides, a yardstick for finding a suitable signal code to achieve error free transmission. Here we have used a long Irregular LDPC code which answers all the questions arising from Shannon's theorem, and it has excellently used the distance properties of LDPC codes [1,2]. Though we can communicate in principle at rates near channel capacity with arbitrarily small error probability, the parameter Rc (cut off rate) represents an upper limit on the rate for reliable practical communication [4,9,10]. Rc acts as a compact figure of merit for a modulation and demodulation system employing a channel coding technique. The rest of the paper is organized as follows. The second part explores the cut off rate as a means of assessing modulation and coding options. The third part gives the details of the encoding and decoding methods used here and discusses simulation results, followed by the conclusion section.

2. CUT OFF RATE TOWARDS ASSESSING MODULATION AND CODING OPTIONS
In a generic model for the point to point digital communication system, the information source is modeled probabilistically and messages are viewed as outputs of some random experiments [4]. For the action of the channel on the input signal, a well defined mathematical model is assumed, and this includes stochastic and deterministic aspects. In analog systems the mean-square error between source and destination waveforms is taken as the criterion, whereas performance is measured by symbol error probability or message error probability in discrete communication. These performance measurement criteria are referred to as fidelity criteria. For every combination of source model and fidelity criterion, a rate distortion function R(i) can be assigned, specified in bits per unit of time, that depends only on the source description and on the fidelity criterion. The argument i of the rate distortion function is the smallest expected or average distortion achievable by any system representing the source with R(i) bits per unit source time. The solution i* is obtained from

R(i*) = S    (1)

No system, however complicated it may be, can have an average distortion of less than i*. If the i* resulting from the above is unacceptably large, then we have to provide either greater channel capacity (S) or a slower source symbol production rate. The reason for adopting the cut off coding rate is broadly in the sense of achieving highly reliable
where P_2(y_1 \to y_2) is the probability of the event that y_1 is transmitted but y_2 has higher likelihood. The summation here can be interpreted as an n-dimensional sum including all z's, and not just those in the decision region meant for the codeword y_2. P_B(y_1, y_2) is the Bhattacharyya bound; it does not require channel symmetry or memoryless behavior. The negative logarithm of the Bhattacharyya bound is known as the Bhattacharyya distance and is given by

d_B(y_1, y_2) = -\log_2 P_B(y_1, y_2)    (5)

Hence the two codeword upper bound on error probability is

P_2(y_1 \to y_2) \le P_B(y_1, y_2) = 2^{-d_B(y_1, y_2)}    (6)

The probability measure assigned to a codeword y_i is

P(y_i) = \prod_{j=0}^{n-1} P(y_{ij})    (10)

Hence, the probability measure assigned to the selection of a given pair of codewords is

P(y_1, y_2) = P(y_1) P(y_2) = \prod_{j=1}^{n} P(y_{1j}) P(y_{2j})    (11)

Taking the bound to be symmetric in its arguments and averaging P_2(y_1 \to y_2) over P(y_1, y_2), the upper bound on the two codeword error probability with a randomly selected pair of codewords is the ensemble average error probability below:
\bar{P}_2 \le \prod_{j=0}^{n-1} \sum_{y_{1j}} \sum_{y_{2j}} P(y_{1j}) P(y_{2j}) \sum_{z_j} \big[ P(z_j \mid y_{1j}) P(z_j \mid y_{2j}) \big]^{1/2}    (12)

Simplifying the above equation with the fact that each term in the product is independent of the position index j, and taking the subscripted variables as dummy variables, we get the simplified form of the above equation as below:

\bar{P}_2(y_1, y_2) \le \prod_{j=0}^{n-1} \sum_{z} \Big[ \sum_{y} P(y) P(z \mid y)^{1/2} \Big]^2    (13)

To represent equation (13) more simply, we define a quantity:

R_c(P) \triangleq -\log_2 \sum_{z} \Big[ \sum_{y} P(y) P(z \mid y)^{1/2} \Big]^2    (14)

Using R_c(P), the bound on error probability for the ensemble of two codeword codes is given by

\bar{P}_2(y_i, y_j) \le 2^{-n R_c(P)}    (15)

The cut off rate is obtained by maximizing over the input distribution P:

R_c \triangleq \max_P \Big\{ -\log_2 \sum_{z} \Big[ \sum_{y} P(y) P(z \mid y)^{1/2} \Big]^2 \Big\}    (16)

Now from the above concept, (15) can be written with R_c in place of R_c(P):

\bar{P}_2(y_i, y_j) \le 2^{-n R_c}

The received signal then passes through the LDPC decoder (Hard-decision or Soft-decision) [1,2], which gives the decoded message. Here the simulation is implemented in the baseband domain using Matlab coding. The LDPC code is an irregular LDPC code with a (32400, 64800) parity check matrix, stored as a sparse logical matrix. The system was simulated over the three channels with the various rates 1/2, 1/3, 1/4, 2/3, 2/5, 3/4, 3/5, 4/5, 5/6, 8/9 and 9/10 respectively. The LDPC decoder is of Hard and Soft-decision type [1,2]. The information is binary in nature. The encoding and decoding strategies are given below.

3.1 Encoding of LDPC codes
The set of valid codewords is the set of those which satisfy the parity check constraints given below, where H and C are the parity check matrix and the codeword respectively [6,7]:

H C^T = 0    (18)

But the mapping of messages to these codewords through the use of a generator matrix shows how to encode the message. The (j, i)-th entry of G (the generator matrix) is 1 if the j-th message bit plays a role in determining the i-th codeword bit. The set of all possible linear combinations of the rows of G gives the set of codewords for the code with generator G. Hence G satisfies the following equation:

G H^T = 0    (19)

The steps for encoding after designing the parity check matrix are as below:
Then put H_grr into standard form: H_grr = [A | I_{N-K}], where A is an (N-K) by K binary matrix and I_{N-K} is the identity matrix of order N-K.
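The cut off rate defined in (14) and maximized in (16) can be evaluated directly for a simple channel. The sketch below computes R_c for a binary symmetric channel under the uniform input distribution (the maximizing choice for a symmetric channel); the crossover probabilities used are illustrative assumptions, not values from this work.

```python
import math

def cutoff_rate_bsc(eps):
    """Cut off rate Rc of a binary symmetric channel with crossover
    probability eps, evaluated from equation (14) with the uniform input
    distribution P(0) = P(1) = 1/2."""
    total = 0.0
    for z in (0, 1):
        inner = 0.0
        for y in (0, 1):
            p_z_given_y = 1 - eps if z == y else eps
            inner += 0.5 * math.sqrt(p_z_given_y)  # sum_y P(y) P(z|y)^(1/2)
        total += inner ** 2                        # sum_z [...]^2
    return -math.log2(total)

# Matches the closed form Rc = 1 - log2(1 + 2*sqrt(eps*(1 - eps))):
# a noiseless BSC gives Rc = 1 bit/use, a useless one (eps = 0.5) gives 0.
for eps in (0.0, 0.01, 0.1):
    print(eps, cutoff_rate_bsc(eps))
```

For eps between these extremes, R_c sits strictly below the channel capacity, which is what makes it a conservative yardstick for practical coded transmission.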
such tiers can be established likewise. Then decoding starts, proceeding from the leaves of the tree, and by the time the decoder reaches the root of the tree (C_N), other erroneous bits may have been corrected. Figures 1, 2 and 3 above compare the performance of the long irregular LDPC coded QAM modulated signal at the various defined rates with this Hard-decision algorithm.

For soft-decision decoding, three parameters are updated using the following equations for each iteration [8,12,13]:

L(r_{ij}) = 2 \tanh^{-1} \Big( \prod_{j' \in V_i \setminus j} \tanh\big( \tfrac{1}{2} L(q_{j'i}) \big) \Big)

L(q_{ji}) = L(c_j) + \sum_{i' \in C_j \setminus i} L(r_{i'j})
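The hard-decision decoding referred to above can be sketched as a simple bit-flipping decoder in the spirit of Gallager's hard-decision algorithm [11]. This is a minimal sketch: the small (7,4) Hamming-style parity check matrix and received word below are illustrative stand-ins for the long LDPC matrix, not part of this work's simulation.

```python
def bit_flip_decode(H, r, max_iter=20):
    """Hard-decision bit-flipping decoding: repeatedly flip the bit that
    participates in the largest number of failed parity checks, until
    H c^T = 0 (equation (18)) or the iteration limit is reached."""
    c = list(r)
    m, n = len(H), len(H[0])
    for _ in range(max_iter):
        # Evaluate all parity checks (rows of H) on the current word.
        failed = [i for i in range(m)
                  if sum(H[i][j] & c[j] for j in range(n)) % 2 == 1]
        if not failed:
            return c                   # all checks satisfied
        # Count failed checks touching each bit; flip the worst offender.
        votes = [sum(H[i][j] for i in failed) for j in range(n)]
        c[votes.index(max(votes))] ^= 1
    return c

# Toy (7,4) Hamming-style parity check matrix; illustrative only.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
received = [0, 0, 0, 0, 1, 0, 0]       # all-zero codeword with one bit error
print(bit_flip_decode(H, received))    # → [0, 0, 0, 0, 0, 0, 0]
```

On this toy code, any single bit error is corrected in one flip; sparse long LDPC matrices make the same majority-vote idea effective over many iterations.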
code is better. After 7.5 dB, the rate 1/4, 1/3 and 1/2 codes give nearly the same performance up to 8.5 dB, keeping the error floor between 10^-3 and 10^-4. From 8.5 dB up to 9.5 dB, the performance of the rate 1/3, 1/2 and 2/5 codes is better than all others, giving an error floor of nearly 10^-6. Between 9.5-10 dB, the rate 1/3 and 3/5 codes give the better performance, keeping the error floor between 10^-7 and 10^-8. Between 10-10.5 dB, the rate 3/5 code gives a better error floor than rate 1/3, keeping it nearly at 10^-9. At 12 dB, the rate 2/5 code gives a better error floor (nearly 10^-10) than all other rates. But here also the rate 9/10 code gives the worst performance in both the lower and higher SNR regions, thus giving an idea of the cut-off rate in this case for practically instrumentable reliable communications.

Similarly, with soft-decision decoding over the Rayleigh fading channel, it is clearly visible that the rate 1/2 and 1/3 codes give better performance between 7-8 dB, keeping the error floor between 10^-5 and 10^-6. With the rate 9/10 code the performance is the worst, thus giving an idea of the cut-off rate for practically instrumentable reliable communications.

With soft-decision decoding over the Rician fading channel, it is clearly visible that between 6-8 dB the rate 1/4 code gives better performance, and between 8-9 dB the rate 1/3 and 2/5 codes give better performance, keeping the error floor between 10^-5 and 10^-6. Similarly, with the rate 9/10 code the performance is the worst, thus giving an idea of the cut-off rate in this case for practically instrumentable reliable communications.

5. REFERENCES
[1] Mishra, M.; Patra, S.K.; Turuk, A.K., “Performance of Power efficient LDPC coded OFDM over AWGN channel”, RAIT, March 2012, pp. 185-191.
[2] Mishra, M.; Patra, S.K.; Turuk, A.K., “Long Irregular LDPC Coded OFDM with Soft Decision”, IJCA (0975-8887), vol. 56, no. 4, October 2012.
[3] Todd K. Moon, “Error Correction Coding”, Wiley-Interscience, 2006.
[4] Stephen G. Wilson, “Digital Modulation and Coding”, Pearson Education, 2003.
[5] William Stallings, “Wireless Communications and Networks”, Pearson Education, 2002.
[6] T. J. Richardson and R. L. Urbanke, “Efficient encoding of low-density parity-check codes,” IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 638-656, Feb. 2001.
[7] Shu Lin and Daniel J. Costello, Jr., “Error Control Coding”, second edition, Prentice Hall, 2004.
[8] Ferrari and Raheli, “LDPC Coded Modulations”, Springer, 2009.
[9] Michele, Gianluigi and Riccardo, “Does the Performance of LDPC Codes Depend on the Channel?”, IEEE Transactions on Communications, 54(12), December 2006.
[10] Ungerboeck, G., “Channel coding with Amplitude/Phase modulation,” IEEE Communications Magazine, 25(5-21), 1987.
[11] R. Gallager, “Low-density parity-check codes,” IEEE Trans. Inform. Theory, vol. 8, no. 1, pp. 21-28, Jan. 1962.
[12] Bernhard M.J. Leiner, “LDPC Codes - a brief Tutorial”, April 8, 2005.
[13] William E. Ryan, “An Introduction to LDPC Codes”, August 19, 2003.

AUTHORS PROFILE
Ms. Madhusmita Mishra: Received her B.E. Degree in Electronics and Communication Engineering from Utkal University, Orissa in 1997. She completed her M.E. Degree in Communication Control and Networking from R.G.P.V., Bhopal in 2005. She has been serving the National Institute of Technology, Rourkela, India as a Research Scholar in the Department of Electronics and Communication Engineering since 2009. Her specialization is focused on Communication System Design.

Prof. Sarat Kumar Patra: Received his B.Sc. (Engg.) from UCE Burla in the Electronics and Telecommunication Engineering discipline. After graduation he served India's prestigious Defence Research and Development Organisation (DRDO) as a scientist. He completed his M.Tech. at NIT Rourkela (formerly REC Rourkela) with a specialization in Communication Engineering in 1992. He received his PhD from the University of Edinburgh, UK in 1998. He has been associated with different professional bodies, as a senior member of IEEE and a Life Member of IETE (India), IE (India), CSI (India) and ISTE (India). He has published more than 70 international journal and conference papers. Currently he is working as a Professor in the Department of Electronics & Communication Engineering at NIT Rourkela. His current research areas include mobile and wireless communication, communication signal processing and soft computing.

Prof. Ashok Kumar Turuk: Dr. Ashok Kumar Turuk received his BE and ME in Computer Science and Engineering from the National Institute of Technology, Rourkela (formerly Regional Engineering College, Rourkela) in the years 1992 and 2000 respectively. He obtained his PhD from IIT Kharagpur in the year 2005. Currently he is working as an Associate Professor in the Department of Computer Science & Engineering at NIT Rourkela. His research interests include Ad-Hoc Networks, Optical Networks, Sensor Networks, Distributed Systems and Grid Computing.