
International Journal of Computer Applications (0975 – 8887)

Volume 35– No.6, December 2011

Analysis of Peak-to-Average Power Ratio Reduction
Techniques for OFDM using a New Phase Sequence

Madhusmita Mishra, Dept. of Electronics and Communication, National Institute of Technology, Rourkela-769008, India
Sarat Kumar Patra, Dept. of Electronics and Communication, National Institute of Technology, Rourkela-769008, India
Ashok Kumar Turuk, Dept. of Computer Science and Engineering, National Institute of Technology, Rourkela-769008, India

ABSTRACT
Several properties of OFDM have made it an essential modulation scheme for high-speed transmission links, but one major drawback of OFDM is its large peak-to-average power ratio (PAPR). Here we review some of the PAPR reduction techniques and compare the performance of the clipping and filtering, partial transmit sequence and selected mapping methods for QAM-modulated OFDM. We also show analytically the relation between passband and baseband PAPR, and the criteria for optimum design of the phase rotation table for the selected mapping technique. Finally, we propose using the Chu sequence as the phase sequence in the SLM technique to obtain a reduction in PAPR.

General Terms
Peak-to-average power reduction, Orthogonal Frequency Division Multiplexing, Quadrature Amplitude Modulation

Keywords
Clipping and filtering; PTS (partial transmit sequence); SLM (selected mapping); Chu-SLM

1. INTRODUCTION
The OFDM signal amplitude varies widely, giving a high PAPR; as a consequence, high-power radio-frequency amplifiers introduce inter-modulation between the different subcarriers and additional interference, causing an increase in bit error rate. Therefore, RF power amplifiers should operate over a very large linear region to prevent the signal peaks from entering the non-linear region of the amplifier, which causes in-band distortion (inter-modulation among the subcarriers) and out-of-band radiation. To avoid this, power amplifiers must be operated with a large power back-off, which leads to very inefficient amplification and increased transmitter power. This has driven the invention of various PAPR reduction techniques. This article discusses coding, clipping and filtering [1, 2], peak windowing [3], the partial transmit sequence [4] and the selected mapping technique [5], and describes the basic principle of each. The selection of any PAPR reduction technique involves trade-offs among PAPR reduction capability, coding overhead, synchronization between transmitter and receiver, increase in transmit power, bit error rate at the receiver, data rate loss, computational complexity, and in-band and out-of-band distortion. Here we study, through simulation results, the performance of the clipping and filtering, PTS and SLM based PAPR reduction techniques, and cite the selection criteria for these techniques based on various parameters.

While calculating the PAPR, the actual time-domain OFDM signal, which is in analog form, must be taken into account, since the IFFT output samples will miss some of the signal peaks. If we calculate the PAPR using only these sample values, the calculated PAPR will be less than the actual PAPR. This is an optimistic result that is far from the real one, even though these sample values are enough for signal reconstruction. To address this issue, oversampling is performed. After oversampling, the denser samples are close to the real analog signal, and the PAPR calculated from them gives the true PAPR. PAPR is not a problem for constant-amplitude signals; it is a problem for non-constant-amplitude signals. Since OFDMA and MIMO-OFDM are based on OFDM, they also need PAPR reduction.

This article is organized as follows. In section 2, a brief discussion of baseband and passband PAPR is followed by a discussion of various PAPR reduction techniques with their merits and demerits. In section 3, the performance of three techniques, clipping and filtering, partial transmit sequence and selected mapping, is compared through simulation results. In section 4, we first present the optimal design criteria for the phase rotation table of the SLM technique, then compare the performance of the Chu sequence with all other sequences used so far [6-8], and compare the results of classical SLM with Chu-SLM for QAM modulation schemes. Finally, conclusions are drawn in section 5, followed by the references.

2. PAPR AND ITS REDUCTION IN OFDM SYSTEMS
The complex discrete-time baseband equivalent time-domain OFDM signal can be expressed as:

x[n] = IFFT{X[k]} = (1/N) Σ_{k=0}^{N−1} X[k]·exp(j2πnk/N)    (1)
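The gap between the PAPR computed from Nyquist-rate IFFT samples and the PAPR of the oversampled (near-analog) signal discussed above can be checked with a short sketch. The block below assumes numpy; the choices N = 256, the 16-QAM levels and L = 4 are illustrative, not the paper's simulation parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """PAPR of a complex sequence: max instantaneous power over mean power, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def ofdm_symbol(X, L=1):
    """Discrete-time baseband OFDM symbol in the style of equation (1).

    With L > 1 the spectrum is zero-padded before the IFFT, giving the
    L-times interpolated signal used for the 'true' PAPR estimate."""
    N = len(X)
    # place the occupied bins at the band edges, zeros in the middle
    Xp = np.concatenate([X[: N // 2], np.zeros((L - 1) * N, complex), X[N // 2 :]])
    return np.fft.ifft(Xp)  # numpy's ifft already includes the 1/(L*N) factor

# Random 16-QAM symbols on N subcarriers
N = 256
levels = np.array([-3, -1, 1, 3])
X = rng.choice(levels, N) + 1j * rng.choice(levels, N)

papr_nyquist = papr_db(ofdm_symbol(X, L=1))
papr_over = papr_db(ofdm_symbol(X, L=4))
# The critically sampled PAPR misses peaks lying between samples,
# so the oversampled estimate can never be smaller.
assert papr_over >= papr_nyquist - 1e-9
```

Since frequency-domain zero-padding interpolation passes through the original samples (scaled), the oversampled PAPR is provably no smaller than the Nyquist-rate one, which is exactly the "optimistic result" the text warns about.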


where n = 0, 1, 2, …, N−1; N is the number of subcarriers; X[k] denotes the kth modulated phase shift keying (PSK) or quadrature amplitude modulated (QAM) symbol, X[k] = a_k + j·b_k, with a_k and b_k the real and imaginary components of X[k]. Assuming a_k and b_k to be independent of each other, if we separate x[n] in equation (1) into its real and imaginary components, evaluate their expectation and variance, and then apply the central limit theorem for large N, the probability distributions of x[n]_I and x[n]_Q follow the Gaussian distribution. The amplitude of the OFDM signal therefore has a Rayleigh distribution, with a variance of N times the variance of one complex sinusoid. Let {Z_n} be the magnitudes of the complex samples. Assuming that the average power of the complex passband OFDM signal S(t) is equal to one, the {Z_n} are normalized Rayleigh random variables with the probability density function [9]:

f_Z(z) = (z/σ²)·exp(−z²/(2σ²)) = 2z·exp(−z²)    (2)

With E and D denoting the expectation and variance operators, the following holds for n = 0, 1, …, N−1 under the condition E[a_k] = E[b_k] = 0 and D[a_k] = D[b_k] = σ²:

E[Z_n²] = 2σ² = 1    (3)

Max{Z_n} = Z_max is the crest factor (CF), and the cumulative distribution function (CDF) of Z_max is given by

F_Zmax(z) = P(Z_max ≤ z) = P(Z_0 ≤ z)·P(Z_1 ≤ z)···P(Z_{N−1} ≤ z) = (1 − exp(−z²))^N    (4)

For the passband signal CF = (PAPR)^{1/2}, and for the baseband envelope CF = (PMEPR)^{1/2}. The PAPR of a complex passband OFDM signal is given by:

PAPR{S(t)} = max|S(t)|² / E[|S(t)|²] = max|Re{s(t)·exp(j2πf_c t)}|² / E[|Re{s(t)·exp(j2πf_c t)}|²]    (5)

In the baseband case, the peak-to-mean envelope power ratio (PMEPR) is defined as:

PMEPR{s(t)} = max|s(t)|² / E[|s(t)|²]    (6)

where s(t) is the complex baseband equivalent of the passband signal. Now, to find the probability that the CF exceeds z, we consider the complementary CDF (CCDF):

CCDF = P(Z_max > z) = 1 − P(Z_max ≤ z) = 1 − F_Zmax(z) = 1 − (1 − exp(−z²))^N    (7)

The PAPR of equation (5) deals with the passband signal S(t), whose carrier frequency f_c is much higher than the inverse of one symbol period of s(t); hence the continuous-time baseband OFDM signal and its corresponding passband signal have almost the same PAPR. However, the PAPR of the discrete-time baseband signal x[n] in equation (1) may not be the same as that of the continuous-time baseband signal x(t), and will be lower, since x[n] may not contain all the peaks of x(t). The PAPR of x(t) can be estimated from x[n] by interpolating x[n] by a factor L (≥ 4). The IFFT output signal can be expressed in terms of the L-times interpolated version as:

x'[m] = (1/(L·N)) Σ_{k=0}^{L·N−1} X'[k]·exp(j2πmk/(L·N))    (8)

where X'[k] = X[k] for 0 ≤ k < N/2 and X'[k] = 0 elsewhere, and N, Δf and X[k] are the number of subcarriers, the subcarrier spacing and the complex symbol carried on subcarrier k, respectively. For x'[m], the PAPR is defined as:

PAPR{x'[m]} = max_{m=0,1,…,L·N−1} |x'[m]|² / E[|x'[m]|²]    (9)

Various PAPR reduction techniques are discussed below.

2.1 Coding, Clipping and Filtering
In the coding technique [1-2], the codewords with minimum PAPR are found from a given set of codewords, and the input data blocks are mapped to these selected codewords. The encoding and decoding complexity increases for a large number of carriers, the method does not work for higher-order bit rates, and the reduction in PAPR comes at the expense of coding rate. The non-linear clipping process introduces in-band distortion, degrading the bit error rate performance of the system, and also causes out-of-band noise, which reduces the spectral efficiency. Using filtering with clipping [11], the out-of-band noise is reduced but the in-band noise is not. To reduce the in-band noise, each OFDM block is oversampled by padding the original input with zeros and then taking a longer IFFT. Use of forward error correction (FEC) codes with clipping and filtering [2] can reduce both kinds of noise and improves the BER performance and spectral efficiency.

2.2 Peak Windowing
This method [3] is concerned with removing the rarely occurring peak values. It provides a large reduction in PAPR along with the advantages of easy implementation, independence from the number of subcarriers and an undisturbed coding rate, at the cost of an increase in BER and out-of-band noise. Peak windowing with FEC codes [2] can compensate for the increase in BER. Any window can be used, provided it has good spectral properties; examples are the cosine, Kaiser and Hamming windows. By removing peaks, the PAPR cannot be reduced beyond a certain limit, since the average value of the OFDM signal also decreases, which increases the PAPR again. The OFDM signal exhibits "bottoms" similar to peaks; by raising these bottoms above a certain level, the average value of the OFDM signal can be shifted up, thus reducing the PAPR. The sample values are then amplified, and the complete method is referred to as peak windowing with pre-amplification.
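The clipping-and-filtering idea of section 2.1 can be sketched as follows. This is an illustrative sketch, not the paper's simulation code: the clip ratio 0.8 matches the value later used in section 3, it is applied here to the r.m.s. amplitude, and the "zero every out-of-band FFT bin" filter is a deliberate simplification.

```python
import numpy as np

rng = np.random.default_rng(1)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip_and_filter(x, n_used, clip_ratio=0.8):
    """Amplitude clipping followed by out-of-band filtering.

    Clip level = clip_ratio * rms amplitude; the filter zeroes every FFT
    bin outside the n_used occupied subcarriers, removing the out-of-band
    spectral regrowth created by the clipper (in-band distortion remains)."""
    rms = np.sqrt(np.mean(np.abs(x) ** 2))
    a = np.abs(x)
    scale = np.minimum(1.0, clip_ratio * rms / np.maximum(a, 1e-12))
    clipped = x * scale
    S = np.fft.fft(clipped)
    mask = np.zeros(len(x))
    mask[: n_used // 2] = 1      # occupied positive-frequency bins
    mask[-(n_used // 2):] = 1    # occupied negative-frequency bins
    return np.fft.ifft(S * mask)

N, L = 256, 4  # N used subcarriers, 4x oversampling
X = rng.choice([-3, -1, 1, 3], N) + 1j * rng.choice([-3, -1, 1, 3], N)
x = np.fft.ifft(np.concatenate([X[: N // 2], np.zeros((L - 1) * N), X[N // 2:]]))
y = clip_and_filter(x, n_used=N)
assert papr_db(y) < papr_db(x)  # PAPR reduced, at the cost of in-band distortion
```

Filtering causes some peak regrowth relative to the hard-clipped signal, which is why clipping and filtering is often iterated in practice; the single pass above is enough to show the PAPR drop.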


2.3 Partial Transmit Sequence
In this method [4, 12, 13, 14] the input data vector is partitioned into P sub-blocks, and the IFFT of each sub-block is taken. Each IFFT output is then scrambled (its phase rotated independently) through multiplication by a corresponding complex phase factor b_p = exp(jφ_p), p = 1, 2, …, P, at which point the OFDM signal is given by:

x = IFFT{Σ_{p=1}^{P} b_p·X_p} = Σ_{p=1}^{P} b_p·IFFT{X_p} = Σ_{p=1}^{P} b_p·x_p    (10)

where {x_p} are the partial transmit sequences and {X_p} are the consecutively located data sub-blocks. The phase vector is chosen such that the PAPR is minimized. Initially all phase factors b_p are set to 1 for p = 1, …, P, and the resulting PAPR is taken as PAPR_min. In the second step, the PAPR is recomputed with b_p = −1. If the PAPR exceeds PAPR_min, b_p is set back to 1; otherwise PAPR_min is updated to the new PAPR. This continues until p = P, and the process stops with the resulting optimal phase factors. The PAPR performance of this technique is affected by the number of sub-blocks, the sub-block partitioning method and the allowed phase factors.

2.4 Selective Mapping Technique
In this method [5, 14, 15, 16] the parallel input data vector is multiplied by V different phase sequences (each of length N) to create V modified data blocks with different phases before the IFFT operation. After the IFFT operation, the modified data block having minimum PAPR is selected for transmission. Information about the selected phase sequence must be transmitted to the receiver as side information, and this is the reason for the added complexity. SLM can be used for any number of subcarriers and for any signal constellation. It provides significant gain with moderate complexity. Channel coding is needed to protect the side information [6].

3. COMPARISON OF PAPR REDUCTION TECHNIQUES
Here we compare the performance of clipping and filtering, PTS and classical-SLM based OFDM. Figures 1(a) and 1(b) show the results for N = 255 and N = 510. The simulation takes the clipping ratio, i.e. the ratio of the clip level (M) to the r.m.s. power of the OFDM signal, to be 0.8 for both values of N. From these figures we can see that as N increases, the CCDF plots for all techniques move further from the vertical axis; the SLM technique gives the best performance in both cases, and clipping and filtering gives the worst.

Fig. 1a CCDF comparison for N = 255, 16-QAM OFDM
Fig. 1b CCDF comparison for N = 510, 16-QAM OFDM

Table 1. Comparison of various phase sequences

Serial no.  Sequence    Baseband PAPR (dB)   Passband PAPR (dB)   Computational time (s)
1           Chu         0                    4.27                 0.026929
2           Hadamard    2.84                 4.28                 0.066156
3           Hilbert     10.53                10.53                0.052609
4           Circulant   11.24                11.24                0.171689
5           Riemann     5.11                 5.95                 0.235032
6           Newman      9.07                 9.07                 0.027521
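The 0 dB baseband figure for the Chu sequence in Table 1 can be reproduced directly: a Chu sequence [10] is a constant-amplitude polyphase sequence whose (I)DFT also has constant magnitude, so its Nyquist-rate PAPR is exactly 0 dB, while oversampling reveals a non-zero passband PAPR. A minimal check, with N = 256 and k = 1 as illustrative choices:

```python
import numpy as np

def chu(N, k=1):
    """Polyphase Chu sequence [10] for even N, with gcd(k, N) = 1."""
    i = np.arange(N)
    return np.exp(1j * np.pi * k * i * i / N)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

N = 256
c = chu(N)
x = np.fft.ifft(c)  # Nyquist-rate OFDM symbol built from the Chu sequence
# A Chu sequence is CAZAC (constant amplitude, zero autocorrelation), and
# its (I)DFT also has constant magnitude, so the baseband PAPR is 0 dB,
# matching the first row of Table 1.
assert papr_db(x) < 1e-6

# With 4x oversampling the continuous-time peaks between samples reappear,
# so the oversampled (passband-relevant) PAPR is a few dB rather than 0 dB.
x4 = np.fft.ifft(np.concatenate([c[: N // 2], np.zeros(3 * N), c[N // 2:]]))
assert papr_db(x4) > 1.0
```

This is exactly why Table 1 reports 0 dB for the Chu sequence without oversampling and a non-zero value (4.27 dB) with an oversampling factor of 4.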


4. OPTIMAL SLM AND PERFORMANCE OF THE CHU SEQUENCE
After applying SLM, the OFDM signal is expressed [5], for 0 ≤ t < NT and v = 0, 1, …, V−1, as:

x_v(t) = (1/√N) Σ_{n=0}^{N−1} x_n·b_{v,n}·exp(j2πnΔf·t)    (11)

where b_{v,n} = |b_{v,n}|·exp(jφ_{v,n}) and the V uncorrelated length-N phase shift vectors are defined as:

B_v = [b_{v,n}], n = 0, 1, …, N−1    (12)

The frequency-domain (unmodified) OFDM signal is given by the vector {X_k}, for k = 0, 1, 2, …, N−1, where k is the subcarrier index and N is the number of subcarriers. That the average power is unchanged over the V multiple representations of the same OFDM signal means that:

E[|x_n^v|²] = E[|X_k|²] = σ²    (13)

Equation (13) assumes that X_k has zero mean and variance σ²; hence all {x_n^v}, for v = 0, 1, …, V−1, contain the same information about X_k. If for each v = 0, 1, …, V−1 and each n = 0, 1, …, N−1, {x_n^v} is independently and identically distributed complex Gaussian with independent real and imaginary parts, the optimum design of the phase rotation table is obtained under asymptotic mutual independence between {x_n}^v and {x_n}^l for all v ≠ l, with E[exp(jφ)] = 0 for φ uniformly distributed in [0, 2π). This is the optimal SLM condition.

The Chu sequence is given by:

Y_i(k) = exp(jπk·i²/N),        for N even
Y_i(k) = exp(jπk·i(i+1)/N),    for N odd, gcd(k, N) = 1    (14)

The proposed Chu sequence [10] gives 0 dB PAPR without oversampling and 4.27 dB with an oversampling factor of 4. Table 1 compares these values with those for the Hadamard, Hilbert, Circulant, Riemann and Newman sequences, selected from the rows of the corresponding matrices. The Chu sequence gives the same PAPR with normalization as well, so there is no need for the normalization required in [6] for the Riemann matrix. It gives a lower PAPR than all the sequences in [6-8], and its computational time complexity is also lower than that of all the sequences in the literature. Since the Chu matrix has a particular structure, there is no need to send coded side information for accurate detection of the signal; the receiver itself can generate the Chu sequence for decoding. Using the Chu sequence as the phase sequence in SLM, we show the performance of the 16-QAM modulated OFDM signal with Chu-SLM and compare it with the 16-QAM modulated OFDM signal with classical SLM for different values of N. Figures 2(a) and 2(b) show these results; Chu-SLM performs much better than classical SLM.

Fig. 2a CCDF of 16-QAM OFDM with classical SLM
Fig. 2b CCDF of 16-QAM OFDM with Chu-SLM

5. CONCLUSION
The SLM technique is promising for use with higher numbers of subcarriers. The true objective of the SLM OFDM scheme is to reduce the probability of the crest factor exceeding some threshold level, rather than to reduce the crest factor of every alternative symbol sequence. A phase sequence set that makes as many crest factors of the alternative symbol sequences as possible look statistically independent can perform well. With Chu-SLM we obtained better performance than with classical SLM, at much lower complexity, since there is no need to send side information to the receiver. Since the passband PAPR of Chu-SLM is very low, it can be a promising PAPR reduction technique for high-data-rate passband applications. Its computational time complexity is also much lower than that of all the other sequences, so it is practicable for use in a complex communication network.

6. REFERENCES
[1] Jones, A.E., Wilkinson, T.A. and Barton, S.K.: "Block coding scheme for reduction of peak to mean envelope power ratio of multicarrier transmission schemes", Electronics Letters, 30(25), Dec. 1994.
[2] Wulich, D. and Goldfeld, L.: "Reduction of peak factor in orthogonal multicarrier modulation by amplitude limiting and coding", IEEE Transactions on Communications, 47(1), Jan. 1999.
[3] Van Nee, R. and de Wild, A.: "Reducing the peak-to-average power ratio of OFDM", IEEE VTC, 1998.
[4] Cimini, L.J. and Sollenberger, N.R.: "Peak-to-average power ratio reduction of an OFDM signal using partial transmit sequences", IEEE Communications Letters, 4(3), pp. 86-88.
[5] Bäuml, R.W., Fischer, R.F.H. and Huber, J.B.: "Reducing the peak-to-average power ratio of multicarrier modulation by selected mapping", Electronics Letters, 32(22), 1996.


[6] Irukulapati, N.V., Chakka, V.K. and Jain, A.: "SLM based PAPR reduction of OFDM signal using new phase sequence", Electronics Letters, 45(24), 2009.
[7] Palanivelan, M., Anand, S. and Gunasekaran, M.: "Matrix based low complexity PAPR reduction in OFDM systems", IJECT, 2(2), 2011.
[8] Jayalath, A.D.S., Tellambura, C. and Wu, H.: "Reduced complexity PTS and new phase sequences for SLM to reduce PAP of an OFDM signal", IEEE VTC, 2000.
[9] Jiang, T. and Wu, Y.: "An overview: peak-to-average power ratio reduction techniques for OFDM signals", IEEE Transactions on Broadcasting, 54(2), June 2008.
[10] Chu, D.C.: "Polyphase codes with good periodic correlation properties", IEEE Transactions on Information Theory, 18(4), pp. 531-532, 1972.
[11] Li, X. and Cimini, L.J.: "Effects of clipping and filtering on the performance of OFDM", IEEE Communications Letters, 2(5), May 1998.
[12] Lim, D.W., Heo, S.J., No, J.S. and Chung, H.: "A new PTS OFDM scheme with low complexity for PAPR reduction", IEEE Transactions on Broadcasting, 52(1), pp. 77-82, Mar. 2006.
[13] Xiao, Y., Lei, X., Wen, Q. and Li, S.: "A class of low complexity PTS techniques for PAPR reduction in OFDM systems", IEEE Signal Processing Letters, 14(10), pp. 680-683, Oct. 2007.
[14] Baxley, R.J. and Zhou, G.T.: "Comparing selected mapping and partial transmit sequence for PAR reduction", IEEE Transactions on Broadcasting, 53(4), pp. 797-803, Dec. 2007.
[15] Heo, S.J., Noh, H.S., No, J.S. and Shin, D.J.: "A modified SLM scheme with low complexity for PAPR reduction in OFDM systems", IEEE Transactions on Broadcasting, 53(4), pp. 804-808, Dec. 2007.
[16] Wang, C.L. and Ouyang, Y.: "Low-complexity selected mapping schemes for peak-to-average power ratio reduction in OFDM systems", IEEE Transactions on Signal Processing, 53(12), pp. 4652-4660, Dec. 2005.

7. AUTHORS PROFILE

Ms. Madhusmita Mishra received the B.E. degree in Electronics and Communication Engineering from Utkal University, Orissa, in 1997, and the M.E. degree in Communication Control and Networking from R.G.P.V., Bhopal, in 2005. She has been with the National Institute of Technology Rourkela, India, as a Research Scholar in the Department of Electronics and Communication Engineering since 2009. Her specialization is communication system design.

Prof. Sarat Kumar Patra received the B.Sc. (Engg.) degree from UCE Burla in the Electronics and Telecommunication Engineering discipline. After graduation he served India's prestigious Defence Research and Development Organisation (DRDO) as a scientist. He completed his M.Tech. at NIT Rourkela (formerly REC Rourkela) with a specialization in Communication Engineering in 1992, and received his PhD from the University of Edinburgh, UK, in 1998. He is a senior member of the IEEE and a life member of the IETE (India), IE (India), CSI (India) and ISTE (India). He has published more than 70 international journal and conference papers. He is currently a Professor in the Department of Electronics & Communication Engineering at NIT Rourkela. His current research areas include mobile and wireless communication, communication signal processing and soft computing.

Prof. Ashok Kumar Turuk received his BE and ME in Computer Science and Engineering from the National Institute of Technology, Rourkela (formerly Regional Engineering College, Rourkela) in 1992 and 2000, respectively, and his PhD from IIT Kharagpur in 2005. He is currently an Associate Professor in the Department of Computer Science & Engineering at NIT Rourkela. His research interests include ad-hoc networks, optical networks, sensor networks, distributed systems and grid computing.

Int. Jr. of Advanced Computer Engineering & Architecture
Vol. 2 No. 2 (June-December, 2012)

Selected Mapping Based PAPR Reduction in WiMAX Without Sending the


Side Information
Himanshu Bhusan Mishra, Madhusmita Mishra and Sarat Kumar Patra

Dept. of Electronics and Communication
National Institute of Technology
Rourkela-769008, India
mishra.himanshubhusan@gmail.com, madhusmita.nit@gmail.com, skpatra@nitrkl.ac.in

Abstract

It is well known that orthogonal frequency division multiplexing (OFDM) is a promising
technique for achieving high data rates in multipath fading environments; hence advanced
technologies like LTE and WiMAX use it in their physical layer. The well-known
disadvantage of OFDM is its high peak-to-average power ratio (PAPR). PAPR reduction
using the selected mapping (SLM) technique is analyzed here. This PAPR reduction
technique normally requires an extra side information (SI) index to be sent along with the
transmitted OFDM signal, and an error in detecting these extra bits at the receiver leads to
data loss. Without sending this SI index, its detection is still possible by following a
sub-optimal algorithm at the receiver. This paper analyzes the PAPR reduction
performance using the complementary cumulative distribution function (CCDF) plot and the
probability of SI detection error as per the criteria of WiMAX (IEEE 802.16e). The BER
(bit error rate) performance is also analyzed.

Keywords: Orthogonal frequency division multiplexing (OFDM); Peak to average power ratio
(PAPR); selected mapping technique (SLM); complementary cumulative distribution function
(CCDF); Side Information (SI)

1 INTRODUCTION

The IEEE 802.16 standard defines several wireless metropolitan area network (WMAN) technologies. WiMAX is a
certification applied to 802.16 products tested by the WiMAX Forum. IEEE 802.16d stands for fixed WiMAX
and IEEE 802.16e for mobile WiMAX. In mobile WiMAX the downlink modulation schemes are QPSK,
16-QAM and 64-QAM, and the uplink schemes are QPSK, 16-QAM and 64-QAM (optional). Here the
simulation work is described using the QPSK and 16-QAM modulation schemes.
To achieve high data rates in a frequency-selective fading environment, WiMAX adopts OFDM in its
physical layer [7]. The main disadvantage of OFDM is its high PAPR (peak-to-average power ratio). For a
linear modulation scheme, a linear power amplifier is used at the transmitter. As the PAPR increases, the
probability of the operating point of the linear power amplifier shifting into the saturation region at some
time instant grows. This shift of operating point leads to in-band distortion and out-of-band radiation,
which can be avoided by increasing the dynamic range of the power amplifier, but that in turn increases the
size and cost of the amplifier. The power constraint also makes it necessary to reduce the PAPR. Many
techniques [3] exist for the reduction of PAPR, such as clipping and filtering, coding, partial transmit
sequence (PTS), selected mapping (SLM), tone reservation (TR) and tone injection (TI), of which SLM [1]
is a promising one. This SLM is also known as conventional SLM.

According to this technique, the main data block is divided into several independent
blocks, each is converted into an OFDM symbol, and finally the symbol with the lowest PAPR is
transmitted. Here the PAPR reduction and bit error rate analyses are done considering baseband
transmission.
For the baseband complex signal x(t), the PAPR is defined as the ratio between the maximum power and the
average power of its envelope:

PAPR = max|x(t)|² / E[|x(t)|²]    (1)

This paper is organized as follows. In Section 2, the SLM technique is presented. In Section 3, the simulation
results for the WiMAX parameters are analyzed. Finally, the conclusion is presented in Section 4.

2 SLM TECHNIQUE

The conventional selected mapping (SLM) [1] is one of the promising techniques for reduction of the PAPR. The
aim of this method is to generate many independent OFDM blocks from a single data block and then select the one
having minimum PAPR. The independent OFDM sequences can be found by finding independent phase sequences. Let
us consider U phase sequences, each of length N (i.e., the number of subcarriers). The nth point of the uth
phase sequence is given as

P_n^u = exp(jΦ_n^u)    (2)

where n ∈ {0, 1, …, N−1} and u ∈ {1, …, U}, N being the number of subcarriers. In this classical SLM technique

|P_n^u| = 1    (3)

Fig.1 The Block diagram for SLM technique

The random phases Φ_n^u in each phase sequence are uniformly distributed in [0, 2π), satisfying the
condition [4, 6] E[exp(jΦ)] = 0. In this way, U alternative phase sequences can be
found. According to fig.1, the U alternative candidate vectors can be generated by
multiplying each phase sequence with the original data block. The next step is to take the IDFT of the alternative
candidate vectors using the IFFT algorithm, which yields U alternative OFDM signals. Out of all these OFDM signals,
the one with minimum PAPR is transmitted.
The information about the phase sequence used to select the minimum-PAPR OFDM signal
should be sent along with the selected signal. This extra information is known as the side information
(SI) index, which is a set of ⌈log₂ U⌉ bits. To detect the exact transmitted data at the
receiver, the perfect detection of this SI index is required; there is a chance of losing the whole
transmitted data block if this SI index is detected erroneously. With a slight modification of this classical
SLM technique, the transmission of the SI index can be avoided. That modification is described in the following
subsection.
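The classical SLM flow described above can be sketched in a few lines; the block below is an illustrative sketch (the 16-QAM symbol values, U = 8 and N = 256 are arbitrary choices, not the paper's WiMAX parameters).

```python
import numpy as np

rng = np.random.default_rng(2)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def slm_transmit(X, U, rng):
    """Classical SLM: multiply the data block by U random phase sequences,
    take the IFFT of each candidate, and keep the one with minimum PAPR.
    Returns the selected signal and the SI index u the receiver needs."""
    N = len(X)
    # phases uniform in [0, 2*pi); keep the first sequence all-ones so the
    # unmodified symbol is always one of the candidates
    phases = np.exp(1j * 2 * np.pi * rng.random((U, N)))
    phases[0, :] = 1.0
    candidates = np.fft.ifft(phases * X, axis=1)
    paprs = [papr_db(c) for c in candidates]
    u = int(np.argmin(paprs))
    return candidates[u], u

N, U = 256, 8
X = rng.choice([-3, -1, 1, 3], N) + 1j * rng.choice([-3, -1, 1, 3], N)
x_plain = np.fft.ifft(X)
x_slm, si = slm_transmit(X, U, rng)
# The minimum over U candidates can never exceed the unmodified symbol's PAPR
assert papr_db(x_slm) <= papr_db(x_plain) + 1e-9
assert 0 <= si < U  # the SI index here costs ceil(log2(U)) = 3 bits
```

Because the all-ones sequence is kept as one candidate, the selected PAPR is guaranteed not to be worse than that of the original block, which is the basic SLM trade of U IFFTs plus ⌈log₂ U⌉ SI bits for a lower PAPR.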

2.1 MODIFIED SLM TECHNIQUE

Fig.1 describes the concept of the classical SLM [1]. The modification is made in the design of the phase
sequences. According to the classical SLM technique, |P_n^u| = 1. Following the new SLM technique [2],
a modified scheme can be considered for the construction of the phase sequences: at some points
n ∈ {0, 1, …, N−1} one puts P_n^u = C·exp(jΦ_n^u), where C is the
extension factor satisfying the condition C > 1. With this design of the phase
sequences, detection of the SI index at the receiver becomes possible without transmitting it. The detection can
be done using the sub-optimal algorithm [2]: out of the U phase sequences,
the one that gives the minimum energy difference between the received signal and the candidate
transmitted signals is selected. This selected phase sequence should be the correct sequence, which identifies the
transmitted data block. There may be some probability of error in detecting the correct phase sequence, which
is analyzed here as the probability of SI index error with respect to different values of C. The analysis
of bit error rate performance is done for a fixed value of C. All of the above are
analyzed considering the parameters of WiMAX.
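As a rough illustration of the idea (not the exact algorithm of [2]), the sketch below builds phase sequences whose entries are unit-modulus except for a subset of positions scaled by an extension factor C > 1, and recovers the SI index with a toy minimum-distance detector over an ideal channel; all names, the QPSK constellation, C = 1.5 and the 20 dB SNR are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def make_sequences(U, N, C, rng):
    """Modified SLM phase sequences: unit-modulus entries, except that a
    per-sequence subset of positions is scaled by the extension factor C > 1."""
    P = np.exp(1j * 2 * np.pi * rng.random((U, N)))
    for u in range(U):
        P[u, rng.choice(N, N // 4, replace=False)] *= C
    return P

def detect_si(Y, P):
    """Toy sub-optimal detector: undo each candidate sequence and score the
    distance to the nearest QPSK points; return the best-fitting index."""
    scores = []
    for p in P:
        Z = Y / p
        d = np.abs(Z[:, None] - QPSK[None, :]).min(axis=1)
        scores.append(np.sum(d ** 2))
    return int(np.argmin(scores))

N, U, C, snr_db = 256, 4, 1.5, 20
X = QPSK[rng.integers(0, 4, N)]
u_tx = int(rng.integers(0, U))
P = make_sequences(U, N, C, rng)
Y = P[u_tx] * X                                  # frequency domain, ideal channel
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
Y_noisy = Y + noise * 10 ** (-snr_db / 20)
assert detect_si(Y_noisy, P) == u_tx             # SI recovered, no SI bits sent
```

The scaled positions are what make wrong candidates score badly: dividing by the wrong sequence leaves both phase and amplitude (C or 1/C) mismatches relative to the constellation, which is why larger C and more subcarriers lower the SI detection error probability, as the paper's results show.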

3 SIMULATION RESULTS

To analyze the different results, the transmission channel is modeled as quasi-static frequency-selective Rayleigh
fading with equal power taps. A model of a nonlinear solid-state power amplifier (SSPA) [5] is also considered
at the transmitter output. The number of used data subcarriers and the sampling frequency are taken as per the
WiMAX specification, and the modulation schemes considered for the simulation work are QPSK and 16-QAM.

To analyze the PAPR reduction performance, the oversampling factor [8] is taken into account: the actual data
transmission is in the analog domain, while the analysis is in the discrete-time domain, so to get the correct
PAPR value an oversampling factor must be considered. The discrete-time baseband OFDM signal is given as

x[m] = (1/√N) Σ_{k=0}^{N−1} X_k·exp(j2πmk/(LN)),  m = 0, 1, …, LN−1

where L is the oversampling ratio. This is also known as the LN-point inverse fast Fourier transform (IFFT).
The power amplifier is applied before transmitting the OFDM signal having minimum PAPR. For the simulation
model of the power amplifier, Rapp's model of a solid-state power amplifier (SSPA) [5] is used, with its
smoothness parameter, small-signal gain and an input back-off (IBO) of 7 dB. For the channel, equal power
taps are assumed, with the time-domain coefficient of the zth tap being a complex zero-mean Gaussian sample
representing the fading experienced by that tap. The noise added is additive white Gaussian noise (AWGN)
with mean μ = 0 and variance σ² = N₀, where N₀ denotes the power spectral density (PSD).

3.1 PAPR Reduction Performance

Fig.2 shows the complementary cumulative distribution function (CCDF) of the PAPR reduction obtained
by using the modified SLM technique for 16-QAM modulation, with the extension factor taken as C = 1.2 and
the used data subcarriers as per WiMAX. An increased number of subcarriers leads to an increase in the
PAPR value: as N increases, the IFFT size increases, which leads to more additions
and multiplications and that leads to high PAPR.
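Rapp's SSPA model mentioned above can be sketched as follows; the 7 dB input back-off follows the text, while the smoothness parameter p = 3 and unit small-signal gain are assumed values, since the exact figures are not legible in the source.

```python
import numpy as np

def rapp_sspa(x, p=3.0, ibo_db=7.0, gain=1.0):
    """Rapp's solid-state power amplifier model [5]: AM/AM compression with
    smoothness parameter p and no AM/PM conversion. The saturation amplitude
    is set from the input back-off (IBO) relative to the mean input power."""
    a = np.abs(x)
    a_sat = np.sqrt(np.mean(a ** 2)) * 10 ** (ibo_db / 20)  # saturation amplitude
    g = gain * a / (1 + (gain * a / a_sat) ** (2 * p)) ** (1 / (2 * p))
    return g * np.exp(1j * np.angle(x))  # compressed magnitude, phase preserved

rng = np.random.default_rng(4)
x = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) / np.sqrt(2)
y = rapp_sspa(x)
a_sat = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (7 / 20)
# Small amplitudes pass almost linearly; large peaks are compressed toward a_sat
assert np.all(np.abs(y) <= np.abs(x) + 1e-12)   # pure compression, never expansion
assert np.abs(y).max() <= a_sat + 1e-9          # output bounded by saturation level
```

This is why the OFDM symbol with minimum PAPR is selected before the amplifier: the lower the peaks relative to a_sat, the less of the signal falls into the compressive region and the smaller the resulting in-band distortion and out-of-band radiation.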

3.2 Probability of SI Detection Error

In addition to PAPR reduction in WiMAX using the modified SLM technique, our aim is to avoid the
transmission of the SI index along with the selected OFDM symbol. From fig.3 it can be seen
that, as the extension factor increases, the probability of error in detecting the SI index at the receiver is
lower for QPSK modulation than for 16-QAM. We also find that increasing the
number of subcarriers results in a lower probability of detection error; this is due to the fact that the factor C is
repeated many times in a phase sequence. The sub-optimal algorithm [2] is used for the detection of the SI
index at the receiver. This algorithm performs better for QPSK than for 16-QAM, because the
energy per symbol for QPSK is constant.

Fig.2 CCDF of the PAPR obtained using the modified SLM technique for the data subcarriers of WiMAX

Fig.3 Probability of error in detecting the SI index with respect to increasing C

3.3 Bit Error Rate Performance

Fig.4 BER performance for N = 360

Bit error rate analysis is very important in communication systems. The plot of this analysis is shown in
fig.4 for N = 360, again considering quasi-static frequency-selective Rayleigh fading. For each
modulation scheme two curves are shown: one assumes perfect SI index detection at the
receiver, and the other applies the sub-optimal algorithm at the receiver. For QPSK modulation the two
curves coincide, because the energy per symbol is constant; for 16-QAM they agree only at higher SNR (signal-
to-noise ratio). This BER performance is analyzed considering C = 1.2; taking a smaller value of
C degrades the BER performance.

4 Conclusion

In WiMAX the aim is to obtain a high data rate together with a long communication range. By using OFDM in the physical layer we obtain the high data rate, while the new SLM technique decreases the PAPR. Our study, based on OFDM with QPSK and 16-QAM modulation, shows that this technique performs well for a large number of subcarriers. In particular, the probability of SI detection error improves as the extension factor and/or the number of subcarriers increases. If we use this technique in WiMAX, however, we pay a significant price in receiver complexity due to the use of an SI detection block.

References

[1] R. W. Bauml, R. F. H. Fischer, and J. B. Huber, "Reducing the peak-to-average power ratio of multicarrier modulation by selected mapping," Electron. Lett., vol. 32, no. 22, pp. 2056-2057, Oct. 1996.
[2] Stephane Y. Le Goff, Samer S. Al-Samahi, Boon Kien Khoo, Charalampos C. Tsimenidis, and Bayan S. Sharif, "Selected mapping without side information for PAPR reduction in OFDM," IEEE Trans. Wireless Commun., vol. 8, no. 7, pp. 3320-3325, Jul. 2009.
[3] Seung Hee Han and Jae Hong Lee, "An overview of peak-to-average power ratio reduction techniques for multicarrier transmission," IEEE Wireless Commun., vol. 12, no. 2, Apr. 2005.
[4] G. Tong Zhou and Liang Peng, "Optimality condition for selected mapping in OFDM," IEEE Trans. Signal Process., vol. 54, no. 8, pp. 3159-3164, Aug. 2006.
[5] Elena Costa, Michele Midrio, and Silvano Pupolin, "Impact of amplifier nonlinearities on OFDM transmission system performance," IEEE Commun. Lett., vol. 3, no. 2, pp. 37-39, Feb. 1999.
[6] A. D. S. Jayalath, C. Tellambura, and H. Wu, "Reduced complexity PTS and new phase sequences for SLM to reduce PAP of an OFDM signal," Proc. IEEE VTC 2000, vol. 3, pp. 1914-1917.
[7] Jeffrey Andrews, Arunabha Ghosh, and Rias Muhamed, "Fundamentals of WiMAX: Understanding Broadband Wireless Networking," Pearson, 2008.
[8] Yong Soo Cho, Jaekwon Kim, Won Young Yang, and Chung-Gu Kang, "MIMO-OFDM Wireless Communications with MATLAB," John Wiley & Sons, 2010.

International Journal of Computer Applications (0975 – 8887)
Volume 56– No.4, October 2012

Long Irregular LDPC Coded OFDM with Soft Decision

Madhusmita Mishra Sarat Kumar Patra Ashok Kumar Turuk


Department of Electronics and Department of Electronics and Department of Computer
Communication Engineering Communication Engineering Science and Engineering
National Institute of Technology National Institute of Technology National Institute of Technology
Rourkela-769008, India Rourkela-769008, India Rourkela-769008, India

ABSTRACT
OFDM with quadrature amplitude modulation (QAM) can be used for high-speed optical applications. As the order of modulation increases, the bit error rate (BER) increases. Forward error correction (FEC) coding such as LDPC coding is generally used to improve the BER performance. LDPC codes provide a large minimum distance, and the power efficiency of an LDPC code increases significantly with the code length. In this paper we compare the soft-decision and hard-decision decoding algorithms of LDPC codes. A long irregular LDPC code is simulated over the BIAWGN channel, demonstrating that LDPC coded OFDM with soft-decision decoding provides a much lower bit error rate as well as a larger gain in transmitter power, thus making the link more power efficient than with hard-decision decoding. Through simulation, we show the advantages of using this long irregular LDPC coded OFDM link in optical wireless communication (OWC).

General Terms
Orthogonal Frequency Division Multiplexing, Low Density Parity check coding, Quadrature Amplitude Modulation.

Keywords
Long Irregular LDPC code, BIAWGN channel, Soft decision decoding, Message-passing (MP) algorithm, Error rate floors.

1. INTRODUCTION
OFDM provides an effective and low-complexity means of eliminating inter-symbol interference for transmission over frequency-selective fading channels. In OFDM the subcarrier frequencies are chosen in such a way that the signals are mathematically orthogonal over one OFDM symbol period [2]. During the past decades, channel coding has been used extensively in most digital transmission systems, from those requiring only error detection to those needing very high coding gains. In the past, optical communication systems ignored channel coding, until it became clear that it could be a powerful, yet inexpensive, tool to add margins against line impairments such as amplified spontaneous emission (ASE) noise, channel crosstalk, nonlinear pulse distortion, and fiber aging-induced losses [3, 7]. Nowadays, channel coding is standard practice in many optical communication links.

For any channel, the bandwidth, data rate, noise and error rate are related to each other. The greater the bandwidth, the greater the cost. All transmission channels of practical interest are limited in bandwidth due to the constrained physical properties of the transmission medium. In order to use bandwidth efficiently in a digital transmission, it is required to obtain the highest possible data rate at a particular limit of error rate, and noise is the main constraint on achieving this. If binary signals are transmitted, then the supported data rate will be twice the bandwidth. Using multilevel signaling, however, the data rate can be increased by a factor of log2 M, where M is the number of signal levels. Here we have used QAM modulated OFDM to achieve a high data rate.

As the data rate increases, the duration of a bit gets shorter, as a result of which more bits are affected by a given pattern of noise; hence a higher data rate leads to a higher error rate. The remedy is to increase the signal-to-noise ratio (SNR), which sets the upper bound on the achievable data rate. Shannon's formula assumes only white noise (thermal noise); it does not account for impulse noise or for distortion due to attenuation and delay. While Shannon's formula represents the theoretical maximum that can be achieved, in practice much lower rates are achieved. It does not suggest a particular code, but rather provides a yardstick for finding a suitable signal code to achieve error-free transmission [12]. In this paper we have used a long irregular LDPC code, which answers the questions arising from Shannon's theorem when applied to several practical communication applications.

The rest of the paper is organized as follows: Section 2 describes the problem formulations. An overview of LDPC codes is given in Section 3. The performance of LDPC coded OFDM is compared in Section 4. Finally, conclusions are drawn in Section 5.

2. PROBLEM FORMULATIONS
For a signal transmitted without coding, an SNR of around 10.5 to 11 dB is required to achieve a BER of 10^-6. Use of a Reed-Solomon (RS) code yielded about 4 dB of coding gain, leaving roughly 6 dB of gain still untapped. Even BCH and convolutional codes could get down to about 5 dB but were not able to come within 1 or 2 dB. So, for a long time there was a belief that it was not possible to do any better than this 4 or 5 dB of coding gain. More recently, Turbo and LDPC codes were discovered to achieve large coding gains. A comparative study of existing coding techniques is presented in Table 1.

Though Turbo and LDPC codes both belong to the family of compound codes [7, 8], LDPC codes have the following advantages over Turbo codes. Firstly, in a Turbo code, due to the presence of low-weight codewords, there is a chance of mistaking a codeword for a nearby codeword because of channel noise; the construction of an LDPC code avoids the presence of low-weight codewords.

Here we have used a very long irregular code which makes excellent use of the distance properties of LDPC codes. The presence of low-weight codewords causes the error floor region of Turbo codes to lie around a bit error rate (BER) of 10^-5 to 10^-6, whereas LDPC codes have an error floor region around 10^-6 to 10^-8.
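The Nyquist and Shannon rate formulas quoted in the introduction can be checked with a few lines of arithmetic. This is a sketch; the 3 kHz bandwidth and 30 dB SNR are example values, not figures from the paper.

```python
import math

bandwidth_hz = 3000            # example voice-grade channel bandwidth
snr_db = 30.0                  # example signal-to-noise ratio
snr_linear = 10 ** (snr_db / 10)

# Nyquist: C = 2 B log2(M) for M-level signaling; M = 2 gives the
# "twice the bandwidth" binary rate mentioned above.
for m in (2, 4, 16):
    print(m, 2 * bandwidth_hz * math.log2(m))   # 6000.0, 12000.0, 24000.0 bit/s

# Shannon: C = B log2(1 + SNR) is the white-noise-only upper bound.
shannon = bandwidth_hz * math.log2(1 + snr_linear)
print(round(shannon))          # 29902 bit/s
```

The gap between such theoretical limits and the rates practical uncoded systems achieve is exactly the margin that channel coding, and the LDPC codes studied here, try to close.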


Secondly, the use of the BCJR algorithm adds to the computational complexity of Turbo codes, since the number of computations in each recursion of the BCJR algorithm scales linearly with the number of states in the trellis. Commonly used Turbo codes have 16 states in the trellis, whereas LDPC decoders use a simple parity check trellis having only 2 states. Also, the extreme sparseness of the parity check matrix results in a low-complexity LDPC decoding algorithm.

Thirdly, being parallelizable, LDPC decoding is faster than Turbo decoding. Fourthly, the LDPC decoder declares a decoding failure whenever it is unable to decode correctly, whereas Turbo decoders perform extra computations to apply a stopping criterion, which depends on the establishment of a threshold. Finally, an LDPC code of almost any rate and block length can be designed simply by specifying a target parity check matrix, while the Turbo code rate depends on a puncturing schedule.

In LDPC codes the parity check matrix is generated randomly, and it answers the two questions raised by Shannon's paper [15]. The first is: "How can codes be constructed which satisfy the upper bound of the channel coding theorem?" The second is: "How can maximum likelihood decoding be performed efficiently for the code?" The answer to the first question is the random construction of the code, which LDPC codes have, and the answer to the second is the use of a code with a large block length. This is the motivation behind using a long irregular LDPC code for making the link power efficient by achieving a larger coding gain.

3. OVERVIEW OF LDPC CODES
Low-density parity-check (LDPC) codes are linear block codes specified by a parity check matrix H containing mostly 0's and only a small number of 1's.

A regular (N, wc, wr) LDPC code [5, 14] is a code of block length N with an Mp × N parity check matrix in which each column contains a small fixed number wc ≥ 3 of 1's and each row contains a small fixed number wr ≥ wc of 1's. Low density implies that wc << Mp and wr << N. The number of ones in the regular parity check matrix is given by

wc · N = wr · Mp        (1)

For an irregular low-density parity-check code [6], the degrees of each set of nodes are chosen according to some distribution. In the construction of an irregular LDPC code, the first step involves selecting a profile that describes the desired number of columns of each weight and the desired number of rows of each weight. The second step is a construction method, i.e. an algorithm for placing edges between the vertices in a way that satisfies the constraints; the edges are placed "completely at random" subject to the profile constraints. The number of 1's in the irregular LDPC code matrix is given by

Mp Σi (wi · i) = N Σi (ui · i)        (2)

where Mp is the number of parity check constraints, N is the length of the code, and ui and wi are the column and row degree distributions respectively. If H and C are the parity check matrix and the codeword respectively, then the codeword constraint is given by

H C^T = 0        (3)

where each row of H corresponds to a parity check equation and each column of H corresponds to a bit in the codeword. The (j, i)th entry of H is 1 if the ith codeword bit is included in the jth parity check equation. More than one parity check matrix may satisfy the above constraint, and hence there may be more than one parity check matrix describing a particular code; two parity check matrices representing the same code need not have the same number of rows. If there is no mathematical relationship between the rows of an H matrix, it is said to have linearly independent rows, and otherwise to have dependent rows. The more parity check constraints there are, the fewer the codewords that satisfy them, provided the parity check equations are linearly independent.

LDPC codes are distinguished from classical block codes [9, 10] by the sparseness of their parity check matrix. This property of H is essential for an iterative decoding complexity that increases only linearly with the code length. Classical block codes can also work with iterative algorithms, but finding a sparse H matrix for existing codes is impractical.

In the case of LDPC codes, a parity check matrix with the required properties is designed first; the code is then encoded with a suitable existing encoding algorithm and decoded iteratively, using a Tanner graph representation of the parity check matrix, with any one of the existing iterative algorithms.

Classical block codes are decoded with the ML (maximum likelihood) decoding algorithm. Reviewing the decoding techniques, one can state that if no extra information is known about the codeword C other than the information from the channel, the ML decoder is the best choice for selecting the correct codeword. But if the decoder has a priori information about C, then it is desirable to use a more sophisticated decoder such as the maximum a posteriori (MAP) or block-MAP decoder. For these decoders the value of K (the number of message bits) must not be very large, since large block lengths result in large parity check and generator matrices and make decoding completely impractical.

Instead, the iterative decoding algorithms used for LDPC codes can handle large block lengths and produce accurate estimates of the probability Pr(Ci | channel output for Ci) using repeated low-complexity processes. The irregular structure of the parity check matrix of LDPC codes enables them to show high performance with respect to the Shannon limit [11].

3.1 Encoding of LDPC codes
The set of valid codewords is the set of words that satisfy the parity check constraints. The generator matrix is used to map the messages to the codewords [11, 13, 14]. The (j, i)th entry of G is 1 if the jth message bit plays a role in determining the ith codeword bit. All possible linear combinations of the rows of G give the set of codewords for the code with generator G. The generator matrix G satisfies the following equation:

G H^T = 0        (4)

The steps for encoding, after designing the parity check matrix, are as follows:

i. Put H in row-echelon form to get a new matrix Hgr.
ii. Convert Hgr to reduced row-echelon form, denoted Hgrr.
iii. Put Hgrr into the standard form Hgrr = [A  I(N-K)], where A is an (N - K) by K binary matrix and I(N-K) is the identity matrix of order N - K.
iv. The generator matrix is then G = [I(K)  A^T].
v. The message is mapped to the codeword by the relation C = mG, where m is the message vector to be encoded.

3.2 Decoding of LDPC codes
As an introduction to LDPC decoding algorithms, we note that Gallager [14] also provided a decoding algorithm that is typically near optimal. The iterative decoding algorithms used in LDPC decoding estimate the probability distribution of variables in graph-based models and go under different names depending on the circumstances; collectively they are termed message-passing (MP) algorithms [11, 13].

The task of the decoder is to detect and correct the bits flipped during transmission over the channel. Every received word that does not satisfy equation (3) above is not a codeword, and hence the design of H plays an important role in reducing the complexity of decoding errors. In a message-passing algorithm, messages pass back and forth between the bit nodes and check nodes until equation (3) is satisfied.

The hard-decision message-passing algorithm is known as the bit-flipping algorithm, and the passed messages are binary in nature [11, 13]. A binary hard decision about each received bit is made by the detector; this is implemented in [1]. The problem with this type of decoding is that the decoder performance is not good: it does not take the likelihood values into account but instead makes a binary decision about the occurrence of each transmitted bit. Hence the bit error rate (BER) plots do not have good error rate floors.

The sum-product algorithm (SPA) is the soft-decision message-passing algorithm. It is similar to the bit-flipping algorithm except that the messages passed between the bit nodes and check nodes are probabilities. It has two versions: a probability-domain version that computes a posteriori probabilities (APP) and a log-domain version that computes log-likelihood ratios (LLRs). Both versions use likelihood values in decoding, but the log-domain version is preferable since in the log domain the complex multiplications are converted to additions, resulting in a low-cost implementation of the decoder with a more stable performance [9, 11, 13]. Our work utilizes the log-domain SPA, so the discussion here is limited to the log-domain SPA, which is explained below.

3.2.1 Sum-product Algorithm (Soft-decision Algorithm)
If a codeword of N bits is transmitted, the APP is the probability that a given bit in the transmitted codeword equals 1 or 0, given the channel output for that bit. The APP ratio, or likelihood ratio (LR), is given by

l(cj) = Pr(cj = 0 | channel output) / Pr(cj = 1 | channel output)        (5)

The log-APP ratio, or log-likelihood ratio (LLR), is then given by

L(cj) = log [ Pr(cj = 0 | channel output) / Pr(cj = 1 | channel output) ]        (6)

The output of the bandpass demapper [1] is L(cj), where cj is the jth bit of the transmitted codeword C. The three key parameters in this algorithm are L(rij), L(qji) and L(Qj). L(qji) is initialized as L(qji) = L(cj), and these three parameters are updated using the following equations in each iteration.

 
L(rij) = 2 atanh ( Π_{j'∈Vi\j} tanh( (1/2) L(qj'i) ) )

L(qji) = L(cj) + Σ_{i'∈Cj\i} L(ri'j)

L(Qj) = L(cj) + Σ_{i∈Cj} L(rij)        (7)

where Vi denotes the set of bit nodes connected to check node i and Cj the set of check nodes connected to bit node j.

Table 1. Comparative study of coding techniques

Coding Technique   | Historical Timeline | Coding Gain (approx) | Distance from Shannon limit (approx)
Convolutional Code | 1955                | 5 dB                 | 5 dB
Regular LDPC       | 1960                | 9 dB                 | 1~1.5 dB
Turbo Code         | 1993                | 9.3 dB               | 0.7 dB
Irregular LDPC     | 1999                | 10.5 dB              | 0.0045 dB
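Sections 3.1 and 3.2 can be tied together in a short sketch: encoding with G = [I(K) A^T] per steps i-v (using a small (7, 4) Hamming-style H that is already in the standard form [A I], so steps i-iii are trivial) and decoding with the log-domain SPA updates of equation (7). The matrix, the noise level and the injected error are illustrative values, not the paper's (32400, 64800) setup.

```python
import numpy as np

# Toy (7,4) parity check matrix already in standard form [A | I(N-K)],
# so steps i-iii of Section 3.1 are already done.
A = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 1, 1]], dtype=np.uint8)
H = np.hstack([A, np.eye(3, dtype=np.uint8)])
G = np.hstack([np.eye(4, dtype=np.uint8), A.T])    # step iv: G = [I(K)  A^T]
assert not ((G @ H.T) % 2).any()                   # equation (4): G H^T = 0

m = np.array([1, 0, 1, 1], dtype=np.uint8)
c = (m @ G) % 2                                    # step v: C = mG

# BPSK over AWGN: 0 -> +1, 1 -> -1; channel LLR L(cj) = 2y/sigma^2 keeps
# the convention L = log(P0/P1) used in equations (5)-(6).
sigma2 = 0.5
y = 1.0 - 2.0 * c.astype(float)
y[1] = -0.1                                        # inject a mild error on one bit
Lc = 2 * y / sigma2

def spa_decode(H, Lc, iters=20):
    """Log-domain sum-product decoding per the update equations (7)."""
    _, n = H.shape
    Lq = H * Lc                                    # init: L(qji) = L(cj)
    for _ in range(iters):
        t = np.where(H, np.tanh(np.clip(Lq, -30, 30) / 2), 1.0)
        prod = t.prod(axis=1, keepdims=True)
        extr = np.clip(prod / np.where(t == 0, 1.0, t), -0.999999, 0.999999)
        Lr = H * 2 * np.arctanh(extr)              # check-node update L(rij)
        col = Lr.sum(axis=0)
        Lq = H * (Lc + col - Lr)                   # bit-node update L(qji)
        hard = ((Lc + col) < 0).astype(np.uint8)   # decision on L(Qj)
        if not ((H @ hard) % 2).any():             # all checks satisfied
            break
    return hard

print(spa_decode(H, Lc).tolist() == c.tolist())    # True: the error is fixed
```

The corrupted bit gets a weak channel LLR, but the two parity checks it participates in push strong extrinsic information back through the check-node rule, so one iteration is enough to flip the decision on this toy example.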
4. PERFORMANCE COMPARISON OF LDPC CODED OFDM
The simulation environment is the same as in [1], except that the LDPC decoding is of the soft-decision type using the log-domain SPA discussed in Section 3. The system is simulated entirely with reference to the block diagram in Figure 1 of [1], by introducing the LDPC encoder block at label 'A' and the LDPC decoder block at label 'Q' and taking an AWGN channel. The simulation is carried out using MATLAB. We consider an irregular LDPC code with a parity check matrix of size 32400 by 64800, stored as a sparse logical matrix. Columns 32401 to 64800 form a lower triangular matrix; only the elements on its main diagonal and the subdiagonal immediately below it are 1's. Since the last N - K columns of the parity check matrix are of lower triangular type, the encoding is referred to as the forward substitution method, as discussed in Section 3. The code rate is 1/2. The system was simulated for OFDM with 8, 16, 32 and 64 QAM.

In reference [1] the LDPC decoder is of the hard-decision type, whereas here it is of the soft-decision type. The information is binary in nature, so the channel can be referred to as a BIAWGN channel. Figure 1 shows the BER performance of 16, 32 and 64 QAM modulated LDPC coded OFDM in hard-decision mode [1]; the number of iterations performed at the decoder is 40. As the number of iterations increases, a better converged value is obtained. Figures 2 and 3 show the BER performance with soft-decision LDPC decoding for 10 and 20 decoder iterations respectively. It is observed from the three figures that in the lower SNR region the LDPC coded 16-QAM modulated OFDM signal gives an error floor around 10^-5 with 40 iterations of hard-decision decoding, roughly between 12 and 14 dB of SNR, while with soft-decision decoding the floor is around 10^-7 at approximately 8 dB of SNR with only 10 decoder iterations, and around 10^-9 at nearly 8 dB of SNR with 20 decoder iterations. This clearly illustrates that with soft-decision decoding, increasing the decoder iterations also improves the error floor performance. Hence soft-decision decoding yields a gain in power over the hard-decision case with fewer decoder iterations.

Figure 1: BER of (32400, 64800) LDPC coded OFDM for 16, 32 and 64 QAM with hard-decision decoding

Figure 2: BER of (32400, 64800) LDPC coded OFDM for 8 and 16 QAM with soft-decision decoding

Figure 3: BER of (32400, 64800) LDPC coded OFDM for 16, 32 and 64 QAM with soft-decision decoding

5. CONCLUSION
With soft-decision decoding the BER performance of this long irregular LDPC code is much better than with hard-decision decoding for all the higher-order QAM modulated OFDM signals. The error floor region with this long block-length irregular LDPC code is within 10^-5 to 10^-6 with hard-decision decoding and within 10^-7 to 10^-8 with soft-decision


decoding with 10 decoder iterations, and within 10^-5 to 10^-9 with soft-decision decoding with 20 decoder iterations. This is a better result as far as coding gain and the error floor region are concerned. The QAM modulated OFDM signal can be a good match to the higher information-carrying capacity of optical carriers, but the BER increases as the modulation order increases. Using a long irregular LDPC code can reduce the BER while approaching the Shannon limit. Hence we can use this code with higher-order QAM modulated OFDM for improving the performance of the optical AWGN channel in free-space optical communication. With the advantage of OFDM in dealing with inter-symbol interference (ISI), this work can also be extended to deal with the ISI due to chromatic dispersion and polarization mode dispersion in optical communication systems. Further, due to the good error floor region with soft decision, it can be used to avoid the effect of four-wave mixing on BER performance. In another scenario, without taking weather effects into account, the deep-space channel is a nearly perfect additive white Gaussian noise channel, and deep-space communications do not care about block lengths. Therefore we can use this long irregular code in deep-space communication applications to get better results, since a larger block length results in a higher minimum distance between codewords and thus increases the error correction capability.

6. FUTURE RESEARCH DIRECTION
With the above method of encoding, the encoding complexity can become prohibitive as we move to long codes of length of the order of 10^5 or 10^6. Use of a structured parity check matrix can help reduce this implementation complexity. This is the current topic of investigation.

7. REFERENCES
[1] M. Mishra, S. K. Patra, and A. K. Turuk, "Performance of power efficient LDPC coded OFDM over AWGN channel," RAIT 2012, pp. 185-191.
[2] Eldo Mabiala, Mathias Coinchon, and Karim Maouche, "Study of OFDM Modulation," Eurecom Institute, December 1999.
[3] Enrico Forestieri, "Optical Communication Theory and Techniques," Springer, 2005.
[4] C. Berrou and A. Glavieux, "Near optimum error correcting coding and decoding: Turbo codes," IEEE Trans. Commun., vol. 44, pp. 1261-1271, Oct. 1996.
[5] R. Gallager, "Low-density parity-check codes," IRE Trans. Inform. Theory, vol. 8, no. 1, pp. 21-28, Jan. 1962.
[6] T. J. Richardson and R. L. Urbanke, "Efficient encoding of low-density parity-check codes," IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 638-656, Feb. 2001.
[7] William Shieh and Ivan Djordjevic, "OFDM for Optical Communications," Academic Press, October 2009.
[8] Shu Lin and Daniel J. Costello, Jr., "Error Control Coding," second edition, Prentice Hall, 2004.
[9] Ferrari and Raheli, "LDPC Coded Modulations," Springer, 2009.
[10] R. E. Blahut, "Algebraic Codes for Data Transmission," 1st edition, Cambridge University Press, 2003.
[11] Bernhard M. J. Leiner, "LDPC Codes - a Brief Tutorial," April 8, 2005.
[12] William Stallings, "Wireless Communications and Networks," Pearson Education, 2002.
[13] William E. Ryan, "An Introduction to LDPC Codes," August 19, 2003.
[14] Robert G. Gallager, "Low-Density Parity-Check Codes," 1963.
[15] Amin Shokrollahi, "LDPC Codes: An Introduction," April 2, 2003.

8. AUTHORS PROFILE
Ms. Madhusmita Mishra received the B.E. degree in Electronics and Communication Engineering from Utkal University, Orissa, in 1997. She completed her M.E. degree in Communication, Control and Networking from R.G.P.V., Bhopal, in 2005. She has been with the National Institute of Technology, Rourkela, India, as a Research Scholar in the Department of Electronics and Communication Engineering since 2009. Her specialization is Communication System Design.

Prof. Sarat Kumar Patra received the B.Sc. (Engg.) degree from UCE Burla in the Electronics and Telecommunication Engineering discipline. After graduation he served India's prestigious Defence Research and Development Organisation (DRDO) as a scientist. He completed his M.Tech. at NIT Rourkela (formerly known as REC Rourkela) with specialization in Communication Engineering in 1992, and received his PhD from the University of Edinburgh, UK, in 1998. He is associated with different professional bodies as a senior member of IEEE and a life member of IETE (India), IE (India), CSI (India) and ISTE (India). He has published more than 70 international journal and conference papers. Currently he is working as a Professor in the Department of Electronics & Communication Engineering at NIT Rourkela. His current research areas include mobile and wireless communication, communication signal processing and soft computing.

Prof. Ashok Kumar Turuk received his BE and ME in Computer Science and Engineering from the National Institute of Technology, Rourkela (formerly Regional Engineering College, Rourkela) in 1992 and 2000 respectively. He obtained his PhD from IIT Kharagpur in 2005. Currently he is working as an Associate Professor in the Department of Computer Science & Engineering at NIT Rourkela. His research interests include ad-hoc networks, optical networks, sensor networks, distributed systems and grid computing.

International Journal of Applied Information Systems (IJAIS) – ISSN : 2249-0868
Foundation of Computer Science FCS, New York, USA
Volume 4– No.8, December 2012 – www.ijais.org

Performance Analysis of Ensemble of Long Irregular


LDPC Code over various Channels with Cut off Rate

Madhusmita Mishra Sarat Kumar Patra Ashok Kumar Turuk


Dept. of Electronics and Dept. of Electronics and Dept. of Computer Science
Communication Communication Engineering
National Institute Of National Institute Of National Institute Of
Technology Technology Technology
Rourkela-769008, India Rourkela-769008, India Rourkela-769008, India

ABSTRACT
A long irregular LDPC code that performs at rates extremely close to the Shannon capacity has been taken. The code has carefully chosen degree patterns. Simulations have been done with hard- and soft-decision decoding to compare the performance of this code at the rates 1/2, 1/3, 1/4, 2/3, 2/5, 3/4, 3/5, 4/5, 5/6, 8/9 and 9/10 over various channels: the AWGN channel, the Rayleigh fading channel and the Rician fading channel. The weak dependence of the BER performance on the channel is explored here, together with the concept of the computational cutoff rate, which represents an upper limit on the rate of transmission for practically implementable reliable communications.

General Terms
Orthogonal frequency division multiplexing (OFDM), Quadrature Amplitude Modulation (QAM)

Keywords
LDPC (Low Density Parity Check), Upper Bound, Cut-off Rate, Bhattacharyya bound, Hard-decision, Soft-decision

1. INTRODUCTION
For any channel, the bandwidth, data rate, noise and error rate are related to each other. The greater the bandwidth, the greater the cost. All transmission channels of practical interest are limited in bandwidth due to the constrained physical properties of the transmission medium. For digital data transmission, in order to use bandwidth efficiently it is required to obtain the highest possible data rate at a particular limit of error rate for a given bandwidth, and the main constraint on this is noise. If binary signals are transmitted, the supported data rate will be twice the bandwidth; using multilevel signaling, the data rate can be increased by a factor of log2 M, where M is the number of signal levels [5]. As the data rate increases, the bits become shorter in duration, and as a result more bits are affected by a given pattern of noise, so a higher data rate leads to a higher error rate. The solution is to increase the signal-to-noise ratio (SNR), which sets the upper bound on the achievable data rate. Shannon's formula assumes only white noise (thermal noise); it does not account for impulse noise or for distortion due to attenuation and delay. While Shannon's formula represents the theoretical maximum that can be achieved, in practice much lower rates are achieved. It does not suggest a particular code, but rather provides a yardstick for finding a suitable signal code to achieve error-free transmission. Here we have used a long irregular LDPC code which answers the questions arising from Shannon's theorem and which makes excellent use of the distance properties of LDPC codes [1, 2]. Though we can in principle communicate at rates near the channel capacity with arbitrarily small error probability, the parameter Rc (cut-off rate) represents an upper limit on the rate for reliable practical communication [4, 9, 10]. Rc acts as a compact figure of merit for a modulation and demodulation system employing a channel coding technique. The rest of the paper is as follows. The second part explores the cut-off rate as a means of assessing modulation and coding options. The third part gives the details of the encoding and decoding methods used here and discusses the simulation results, followed by the conclusion section.

2. CUT OFF RATE TOWARDS ASSESSING MODULATION AND CODING OPTIONS
In a generic model of a point-to-point digital communication system, the information source is modeled probabilistically and messages are viewed as outputs of some random experiment [4]. For the action of the channel on the input signal, a well-defined mathematical model is assumed, which includes stochastic and deterministic aspects. In analog systems the mean-square error between source and destination waveforms is taken as the criterion, whereas in discrete communication the performance is measured by the symbol error probability or message error probability. These performance measures are referred to as fidelity criteria. For every combination of source model and fidelity criterion, a rate distortion function R(i) can be assigned, specified in bits per unit of time, which depends only on the source description and on the fidelity criterion. The argument i of the rate distortion function is the smallest expected (average) distortion achievable by any system representing the source with R(i) bits per unit source time. The solution i* is obtained from

R(i*) = S        (1)

No system, however complicated, can have an average distortion of less than i*. If the i* resulting from the above is unacceptably large, then we must either provide greater channel capacity (S) or slow the source symbol production rate. The reason for adopting the cut-off coding rate is, broadly, to achieve highly reliable

26
International Journal of Applied Information Systems (IJAIS) – ISSN : 2249-0868
Foundation of Computer Science FCS, New York, USA
Volume 4– No.8, December 2012 – www.ijais.org
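The rate relations quoted above can be checked numerically. The sketch below is our own illustration (the function names are ours, not from the paper): it evaluates the noiseless multilevel rate 2*B*log2(M) and Shannon's white-noise capacity B*log2(1 + SNR).

```python
from math import log2

def nyquist_rate(bandwidth_hz: float, levels: int) -> float:
    """Noiseless maximum data rate C = 2 * B * log2(M) for M-level signaling."""
    return 2 * bandwidth_hz * log2(levels)

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity C = B * log2(1 + SNR), white (thermal) noise only."""
    return bandwidth_hz * log2(1 + snr_linear)

# Binary signaling over a 3 kHz channel supports twice the bandwidth: 6 kbit/s.
print(nyquist_rate(3000, 2))               # 6000.0
# 16 levels multiply the rate by log2(16) = 4.
print(nyquist_rate(3000, 16))              # 24000.0
# Shannon limit for the same channel at 20 dB SNR (linear SNR = 100).
print(round(shannon_capacity(3000, 100)))  # 19975
```

Note that the Shannon figure is an upper bound only; as the text observes, practical schemes achieve much lower rates, which is the motivation for channel coding below.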
Channel coding is useful in any kind of noisy-channel transmission problem, and it offers particularly impressive gains on fading channels. If a block code C has a list of Q codewords, each an n-tuple whose entries are drawn from an alphabet of size k, then, assuming that the message source selects messages equiprobably and independently, the entropy of the codeword selection process will be log2 Q bits per message. Hence the exchanged information rate will be given by

    R = (log2 Q) / n  bits per codeword symbol    (2)

where Q is the number of possible codewords for a given rate. Equivalently, the number of possible codewords for a given rate will be

    Q = 2^(nR)    (3)

The next step here is to reach a decision on designing an intelligent coded communication system.

2.1 Concept of Upper Bound
An upper bound must be set on the evaluation of error probability, and the Bhattacharyya bound is such a bound. It is given by the expression

    P2(y1 -> y2) <= sum_z [P(z | y1)]^(1/2) [P(z | y2)]^(1/2) = PB(y1, y2)    (4)

where P2(y1 -> y2) is the probability of the event that y1 is transmitted but y2 has higher likelihood. The summation here can be interpreted as an n-dimensional sum including all z's, and not just those in the decision region meant for the codeword y2. PB(y1, y2) is the Bhattacharyya bound, and it requires neither channel symmetry nor memoryless behavior. The negative logarithm of the Bhattacharyya bound is known as the Bhattacharyya distance and is given by

    dB(y1, y2) = -log2 PB(y1, y2)    (5)

Hence the two-codeword upper bound on error probability is

    P2(y1 -> y2) <= 2^(-dB(y1, y2))    (6)

This bound is surprisingly tight for most channels of interest. Now, if the channel is memoryless, then, denoting the output variable as Z, we have

    P(z | y) = prod_{j=0}^{n-1} P(z_j | y_j)    (7)

Substituting (7) into (4) and expanding the n-fold sum, we can write the bound as a product of scalar summations:

    PB(y1, y2) = prod_{j=0}^{n-1} sum_{k=0}^{Q-1} [P(z_k | y_1j) P(z_k | y_2j)]^(1/2)    (8)

In this equation the bound on the error probability is a symmetric function of its two arguments. Denoting the inner summation by b_j, which is built from the channel transition probabilities for the jth symbol position, we can notice that b_j depends only on the choice of the two code symbols in that position. So the bound, in simpler form, is

    PB(y1, y2) = prod_{j=0}^{n-1} b_j    (9)

2.2 Concept of Cut-off Rate Rc
Now, without concentrating on any two specific codewords y1 and y2, select a two-codeword code at random from the ensemble of all two-codeword codes of length n, and assume that the code symbols of a given codeword are generated independently. The probability assigned to the n-tuples will then be

    P(y_i) = prod_{j=0}^{n-1} P(y_ij)    (10)

Hence, the probability measure assigned to the selection of a given code will be

    P(y1, y2) = P(y1) P(y2) = prod_{j=0}^{n-1} P(y_1j) P(y_2j)    (11)

Taking the two-codeword bound, which is symmetric in its arguments, and averaging P2(y1 -> y2) over P(y1, y2), the upper bound on the two-codeword error probability with a randomly selected pair of codewords becomes the ensemble average error probability given in (12).
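As a concrete illustration of the pairwise bound (4)-(6) and of the per-position factors b_j in (9), consider a binary symmetric channel (BSC) with crossover probability p: for a position where the two codewords differ, b_j = 2*sqrt(p*(1-p)), and where they agree, b_j = 1. The sketch below is our own example, not part of the paper's development; it assumes the two codewords differ in d positions.

```python
from math import sqrt, log2

def bhattacharyya_bound_bsc(p: float, d: int) -> float:
    """PB(y1, y2) of eq. (9) for a BSC with crossover p, for two codewords
    at Hamming distance d: each differing position contributes
    b_j = 2*sqrt(p*(1-p)); each agreeing position contributes b_j = 1."""
    return (2 * sqrt(p * (1 - p))) ** d

# Crossover probability 0.1, codewords at Hamming distance 8:
pb = bhattacharyya_bound_bsc(0.1, 8)   # 0.6**8, about 0.0168
# Bhattacharyya distance of eq. (5) and the bound of eq. (6):
db = -log2(pb)
assert abs(2 ** (-db) - pb) < 1e-12    # (6) restates (4) via the distance
```

The bound shrinks exponentially in the distance d, which is the mechanism the ensemble argument of Sec. 2.2 exploits on average.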
    P2(y1, y2) = prod_{j=0}^{n-1} sum_{z_j} sum_{y_1j} sum_{y_2j} P(y_1j) P(y_2j) [P(z_j | y_1j) P(z_j | y_2j)]^(1/2)    (12)

Simplifying the above equation, using the fact that each term in the product is independent of the position index j and treating the subscripted variables as dummy variables, we get the simplified form

    P2(y1, y2) = prod_{j=0}^{n-1} sum_z [ sum_y P(y) [P(z | y)]^(1/2) ]^2    (13)

To represent equation (13) more simply, we define the quantity

    Rc(P) = -log2 { sum_z [ sum_y P(y) [P(z | y)]^(1/2) ]^2 }    (14)

Using Rc(P), the bound on error probability for the ensemble of two-codeword codes is given by

    P2(y_i, y_j) <= 2^(-n Rc(P))    (15)

In a larger code, where n is large, provided the codeword probability structure is unchanged, we can generalize the result to any pair of codewords. The distribution on code symbols, that is P(y), can be chosen freely so as to obtain the smallest upper bound. Thus we define the cut-off rate

    Rc = max_{P(y)} [ -log2 { sum_z [ sum_y P(y) [P(z | y)]^(1/2) ]^2 } ]    (16)

With this, (15) can be written as

    P2(y_i, y_j) <= 2^(-n Rc)    (17)

3. SIMULATION ENVIRONMENT AND RESULTS
Here we have used a long irregular LDPC code, which answers the questions raised by Shannon's theorem [1,2,11]. An LDPC code of almost any rate and block length can be designed purely from the specification of a target parity check matrix. The system is simulated at the transmitter side by encoding the serial data stream with the LDPC encoder, then converting it into a QAM modulated OFDM wave and passing it through the AWGN channel, the Rayleigh fading channel or the Rician channel. At the receiver side, the QAM demodulator calculates the LLRs (log-likelihood ratios), followed by the LDPC decoder (hard-decision or soft-decision) [1,2], which gives the decoded message. The simulation is implemented in the baseband domain using Matlab. The LDPC code is an irregular LDPC code with a (32400, 64800) parity check matrix, stored as a sparse logical matrix. The system was simulated over the three channels with the rates 1/2, 1/3, 1/4, 2/3, 2/5, 3/4, 3/5, 4/5, 5/6, 8/9 and 9/10. The LDPC decoder is of the hard-decision or soft-decision type [1,2]. The information is binary in nature. The encoding and decoding strategies are given below.

3.1 Encoding of LDPC codes
The set of valid codewords are those which satisfy the parity check constraint given below, where H and C are the parity check matrix and the codeword respectively [6,7]:

    H C^T = 0    (18)

The mapping of messages to these codewords, through the use of the generator matrix, shows how to encode the message. The (j, i)th entry of G (the generator matrix) is '1' if the jth message bit plays a role in determining the ith codeword bit. The set of all possible linear combinations of the rows of G gives the set of codewords for the code with generator G. Hence G satisfies the equation

    G H^T = 0    (19)

The steps for encoding, after designing the parity check matrix, are as below:
- Put H in row-echelon form to get a new matrix H_gr.
- Convert H_gr to reduced row-echelon form, denoted H_grr.
- Put H_grr into the standard form H_grr = [A  I_(N-K)], where A is an (N-K) x K binary matrix and I_(N-K) is the identity matrix of order N-K.
- The generator matrix is then G = [I_K  A^T].
- The message is mapped to the codeword by the relation C = mG, where m is the message vector to be encoded.
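The encoding steps above can be sketched on a toy code. The following is our own illustration with a hand-picked 3 x 6 parity-check matrix already in the standard form [A  I_(N-K)]; the simulated (32400, 64800) code is handled the same way, only at scale.

```python
# Toy (6, 3) code: H in standard form [A | I3], so G = [I3 | A^T] (Sec. 3.1).
A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
N_K = len(A)       # N - K = 3 parity checks
K = len(A[0])      # K = 3 message bits

# Build H = [A | I] and G = [I | A^T]; all arithmetic is over GF(2).
H = [A[r] + [1 if c == r else 0 for c in range(N_K)] for r in range(N_K)]
G = [[1 if c == r else 0 for c in range(K)] + [A[j][r] for j in range(N_K)]
     for r in range(K)]

def encode(m):
    """C = mG over GF(2): the message-to-codeword mapping of Sec. 3.1."""
    return [sum(m[r] * G[r][c] for r in range(K)) % 2 for c in range(len(G[0]))]

def syndrome(c):
    """H * c^T over GF(2); all-zero iff c satisfies eq. (18)."""
    return [sum(H[r][j] * c[j] for j in range(len(c))) % 2 for r in range(N_K)]

cw = encode([1, 0, 1])
assert syndrome(cw) == [0, 0, 0]   # a valid codeword satisfies H C^T = 0
cw[0] ^= 1                         # flip one bit
assert syndrome(cw) != [0, 0, 0]   # the parity checks detect the error
```

A nonzero syndrome is exactly the error-detection test the decoder of Sec. 3.2 starts from.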
3.2 Decoding of LDPC codes
Gallager has also provided a decoding algorithm that is typically near optimal [11]. The iterative decoding algorithms used in LDPC decoding estimate the probability distributions of variables in graph-based models and come under different names depending on the circumstances, but collectively they are termed message-passing (MP) algorithms [12,13]. The task of the decoder is to detect and correct the bits flipped during transmission over the channel. Every received word that does not satisfy equation (18) above will not be a codeword.

Fig1: Comparison of BER performance of LDPC coded QAM modulated OFDM wave with hard-decision decoding over the AWGN channel with various rates

Fig2: Comparison of BER performance of LDPC coded QAM modulated OFDM wave with hard-decision decoding over the Rayleigh fading channel with various rates

Fig3: Comparison of BER performance of LDPC coded QAM modulated OFDM wave with hard-decision decoding over the Rician fading channel with various rates

Fig4: Comparison of WER performance of LDPC coded QAM modulated OFDM wave with soft-decision decoding over the AWGN channel with various rates

Fig5: Comparison of WER performance of LDPC coded QAM modulated OFDM wave with soft-decision decoding over the Rayleigh fading channel with various rates

In a message-passing algorithm, messages pass back and forth between the bit nodes and the check nodes. The hard-decision message-passing algorithm is known as the bit-flipping algorithm; the messages passed are binary in nature, and this algorithm is explained below. The sum-product algorithm (SPA) is the soft-decision message-passing algorithm. It is similar to the bit-flipping algorithm except that the messages passed between bit nodes and check nodes are probabilities [8].
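The hard-decision (bit-flipping) variant just mentioned can be sketched as follows. This is our own simplified illustration, flipping every bit for which a majority of its parity checks fail; it is not the decoder used in the simulations, and the small H below is a hypothetical toy matrix.

```python
def bit_flip_decode(H, r, max_iters=20):
    """Hard-decision bit-flipping: flip every bit for which a majority of
    its parity checks fail; stop once H * c^T = 0 (all checks satisfied)."""
    c = list(r)
    n, m = len(c), len(H)
    degree = [sum(H[i][j] for i in range(m)) for j in range(n)]
    for _ in range(max_iters):
        syn = [sum(H[i][j] * c[j] for j in range(n)) % 2 for i in range(m)]
        if not any(syn):
            break                       # c is a codeword: done
        # per bit, count the failed checks it participates in
        fails = [sum(syn[i] for i in range(m) if H[i][j]) for j in range(n)]
        c = [c[j] ^ (1 if 2 * fails[j] > degree[j] else 0) for j in range(n)]
    return c

# Hypothetical 3 x 6 parity-check matrix; rows are checks over GF(2).
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
sent = [1, 0, 1, 1, 1, 0]               # satisfies H * sent^T = 0
received = sent[:]; received[3] ^= 1    # the channel flips one bit
assert bit_flip_decode(H, received) == sent
```

On the large sparse matrices of real LDPC codes the same flip rule is applied to thousands of bits per iteration; the sum-product algorithm of Sec. 3.2.2 replaces these binary votes with log-likelihood ratios.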
Fig.1 compares the bit error rate (BER) performance of the long irregular LDPC coded QAM modulated OFDM, with the various rates defined above and hard-decision decoding, over the AWGN channel. Figures 2 and 3 do the same over the Rayleigh and Rician fading channels respectively. Fig.4 compares the word error rate (WER) performance of the long irregular LDPC coded QAM modulated OFDM, with the various rates defined above and soft-decision decoding, over the AWGN channel. Figures 5 and 6 do the same over the Rayleigh and Rician fading channels respectively.

Fig6: Comparison of WER performance of LDPC coded QAM modulated OFDM wave with soft-decision decoding over the Rician fading channel with various rates

3.2.1 Bit-flipping Algorithm (Hard-decision Algorithm)
For each bit c_N, the checks which are influenced by that bit are computed first. Then, if the number of nonzero checks exceeds some threshold, that particular bit is decided to be incorrect and is corrected by flipping it. This simple scheme is capable of correcting more than one error. Suppose that c_N is in error along with other bits influencing its checks. Then, assuming no cycles in the Tanner graph, arrange the graph as a tree with c_N as the root and mark the bits which are in error [3]. The bits connected to the checks of the root node are said to be in tier 1; the bits connected to the checks of the first tier are said to be in tier 2, and many such tiers can be established likewise. Decoding then proceeds from the leaves of the tree, and by the time the decoder reaches the root of the tree (c_N), the other erroneous bits may already have been corrected. Figures 1, 2 and 3 above compare the performance of the long irregular LDPC coded QAM modulated signal, with the various rates defined above, under this hard-decision algorithm.

3.2.2 Sum-product Algorithm (Soft-decision Algorithm)
If we are transmitting a codeword with N bits, then the a posteriori probability (APP) is the probability that a given bit in the transmitted codeword is equal to 1 or 0, given the channel output for that bit. The APP ratio, or likelihood ratio (LR), is given by

    l(c_j) = Pr(c_j = 0 | channel output) / Pr(c_j = 1 | channel output)    (20)

Then the log-APP ratio, or log-likelihood ratio (LLR), will be

    L(c_j) = log [ Pr(c_j = 0 | channel output) / Pr(c_j = 1 | channel output) ]    (21)

The three key parameters in this algorithm are L(r_ij), L(q_ji) and L(Q_j). L(q_ji) is initialized as L(q_ji) = L(c_j), and these three parameters are updated with the following equations in each iteration [8,12,13]:

    L(r_ij) = 2 atanh [ prod_{j' in V_i \ j} tanh ( L(q_j'i) / 2 ) ]

    L(q_ji) = L(c_j) + sum_{i' in C_j \ i} L(r_i'j)

    L(Q_j) = L(c_j) + sum_{i in C_j} L(r_ij)    (22)

Here V_i \ j denotes the set of bits participating in check i other than bit j, and C_j \ i the set of checks involving bit j other than check i.

4. CONCLUSION
The simulation results show that, with rate 1/4, the performance of the code is better than with all other rates, in all three channel cases, with hard-decision decoding at lower values of SNR; the error floor region lies, more or less, between 10^-4 and 10^-5 in all three channel cases, at SNR values between 20 and 30 dB. The result is obtained at a slightly lower value of SNR in the AWGN channel case than in the other two fading channel cases, so the performance of the code with hard-decision decoding is slightly dependent on the channel type across the various rates.

Similarly, with rate 9/10 the BER performance degrades relative to all other rates in all three channel cases, and the error floor region again lies, more or less, between 10^-4 and 10^-5 in all three channel cases. So we can say that the performance of the code is slightly dependent on the channel type. Hence the cut-off rate here is 9/10 for all three channel types discussed. Thus, through simulation, we obtained a cut-off rate with hard-decision decoding which will act as an information base for practically realizable reliable communication.

With soft-decision decoding, the scenario is a little different. For the AWGN channel case, up to 7.5 dB the rate 1/4 code performs best.
After 7.5 dB, the rate 1/4, 1/3 and 1/2 codes give nearly the same performance up to 8.5 dB, keeping the error floor between 10^-3 and 10^-4. From 8.5 dB up to 9.5 dB, the performance of the rate 1/3, 1/2 and 2/5 codes is better than all others, giving an error floor of nearly 10^-6. Between 9.5 and 10 dB, the rate 1/3 and 3/5 codes give the better performance, keeping the error floor between 10^-7 and 10^-8. Between 10 and 10.5 dB, the rate 3/5 code gives a better error floor than rate 1/3, keeping it at nearly 10^-9. At 12 dB, the rate 2/5 code gives a better error floor (nearly 10^-10) than all other rates. But here also the rate 9/10 code gives the worst performance, in both the lower and the higher SNR regions, thus giving an idea of the cut-off rate in this case for practically realizable reliable communication.

Similarly, with soft-decision decoding over the Rayleigh fading channel, it is clearly visible that the rate 1/2 and 1/3 codes give better performance between 7 and 8 dB, keeping the error floor between 10^-5 and 10^-6. With the rate 9/10 code the performance is again the worst, thus giving an idea of the cut-off rate for practically realizable reliable communication.

With soft-decision decoding over the Rician fading channel, it is clearly visible that between 6 and 8 dB the rate 1/4 code gives better performance, and between 8 and 9 dB the rate 1/3 and 2/5 codes give better performance, keeping the error floor between 10^-5 and 10^-6. With the rate 9/10 code the performance is again the worst, thus giving an idea of the cut-off rate in this case for practically realizable reliable communication.

5. REFERENCES
[1] Mishra, M., Patra, S.K. and Turuk, A.K., "Performance of Power Efficient LDPC Coded OFDM over AWGN Channel", RAIT, March 2012, pp. 185-191.
[2] Mishra, M., Patra, S.K. and Turuk, A.K., "Long Irregular LDPC Coded OFDM with Soft Decision", IJCA (0975-8887), vol. 56, no. 4, October 2012.
[3] Todd K. Moon, "Error Correction Coding", Wiley-Interscience, 2006.
[4] Stephen G. Wilson, "Digital Modulation and Coding", Pearson Education, 2003.
[5] Stallings, W., "Wireless Communications and Networks", Pearson Education, 2002.
[6] T. J. Richardson and R. L. Urbanke, "Efficient encoding of low-density parity-check codes", IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 638-656, Feb. 2001.
[7] Shu Lin and Daniel J. Costello, Jr., "Error Control Coding", second edition, Prentice Hall, 2004.
[8] Ferrari and Raheli, "LDPC Coded Modulations", Springer, 2009.
[9] Michele, Gianluigi and Riccardo, "Does the Performance of LDPC Codes Depend on the Channel?", IEEE Transactions on Communications, 54(12), December 2006.
[10] Ungerboeck, G., "Channel Coding with Amplitude/Phase Modulation", IEEE Communications Magazine, 25, pp. 5-21, 1987.
[11] R. Gallager, "Low-density parity-check codes", IEEE Trans. Inform. Theory, vol. 8, no. 1, pp. 21-28, Jan. 1962.
[12] Bernhard M. J. Leiner, "LDPC Codes - a Brief Tutorial", April 8, 2005.
[13] William E. Ryan, "An Introduction to LDPC Codes", August 19, 2003.

AUTHORS PROFILE

Ms. Madhusmita Mishra received the B.E. degree in Electronics and Communication Engineering from Utkal University, Orissa, in 1997, and the M.E. degree in Communication, Control and Networking from R.G.P.V., Bhopal, in 2005. She has been serving the National Institute of Technology, Rourkela, India, as a research scholar in the Department of Electronics and Communication Engineering since 2009. Her specialization is focused on communication system design.

Prof. Sarat Kumar Patra received the B.Sc. (Engg.) degree from UCE Burla in the Electronics and Telecommunication Engineering discipline. After graduation he served India's prestigious Defence Research and Development Organisation (DRDO) as a scientist. He completed his M.Tech. at NIT Rourkela (formerly known as REC Rourkela) with a specialization in Communication Engineering in 1992, and received his PhD from the University of Edinburgh, UK, in 1998. He is a senior member of IEEE and a life member of IETE (India), IE (India), CSI (India) and ISTE (India), and has published more than 70 international journal and conference papers. Currently he is a Professor in the Department of Electronics and Communication Engineering at NIT Rourkela. His current research areas include mobile and wireless communication, communication signal processing and soft computing.

Prof. Ashok Kumar Turuk received his B.E. and M.E. in Computer Science and Engineering from the National Institute of Technology, Rourkela (formerly Regional Engineering College, Rourkela) in 1992 and 2000 respectively, and obtained his PhD from IIT Kharagpur in 2005. Currently he is an Associate Professor in the Department of Computer Science and Engineering at NIT Rourkela. His research interests include ad-hoc networks, optical networks, sensor networks, distributed systems and grid computing.