
LDPC CODES

Theoretical Analysis
Sanjeev Kumar (AP, Deptt. of ECE, ACET, Amritsar), Ragini Gupta (M.Tech, ACET, Amritsar), Jagvinder Kaur (M.Tech, ACET, Amritsar)

1. Introduction
Low Density Parity Check (LDPC) codes are linear error-correcting codes, used for
transmitting a message over a noisy transmission channel, and are constructed from a sparse
bipartite graph. They are a class of recently re-discovered, highly efficient linear block
codes. The name comes from the characteristic of their parity-check matrix, which contains
only a few 1's in comparison to the number of 0's [1]. Their main advantage is that they
provide performance very close to channel capacity for many different channels, together
with decoding algorithms of linear time complexity [3]. LDPC codes are capacity-approaching
codes, which means that practical constructions exist that allow the noise threshold to be set
very close to the theoretical maximum (the Shannon limit) for a symmetric memoryless
channel. The noise threshold defines an upper bound on the channel noise, up to which the
probability of lost information can be made as small as desired. Using iterative belief
propagation techniques, LDPC codes can be decoded in time linear in their block length.

2. History
LDPC codes were invented by Robert Gallager in his 1960 MIT Ph.D. dissertation. They were
ignored for a long time due to
i. the high computational complexity they required at the time,
ii. the introduction of Reed-Solomon codes, and
iii. the fact that the concatenated RS and convolutional codes were considered perfectly
suitable for error control coding.
LDPC codes were impractical to implement when first developed by Gallager and were
forgotten, but they were rediscovered in 1996 by MacKay and Neal [3, 4] and analyzed further
by Richardson and Urbanke [5]. Turbo codes, another class of capacity-approaching codes
discovered in 1993, became the coding scheme of choice in the late 1990s, used for
applications such as deep-space satellite communications. However, in the last few years, the
advances in low-density parity-check codes have seen them surpass turbo codes in terms of
error floor and performance in the higher code-rate range, leaving turbo codes better suited
for the lower code rates [7].

3. Representation of LDPC codes


The structure of a code is completely described by the generator matrix G or the parity check
matrix H. The ability to correct symbol errors in a codeword is determined by the minimum
distance dmin: a code with minimum distance dmin can correct up to ⌊(dmin − 1)/2⌋ errors.
- dmin is the minimum Hamming weight over all non-zero codewords (since the rows of G are
codewords, it is at most the least row weight of G).
- dmin is the least number of columns in H that sum up to 0.
Basically there are two different possibilities to represent LDPC codes. Like all linear block
codes they can be described via matrices. The second possibility is a graphical representation.

3.1 Matrix Representation
Let's look at an example of a low-density parity-check matrix first. The matrix defined in
equation (1) is a parity check matrix with m = 4 rows and n = 8 columns for an (8, 4) code. We
can now define two numbers describing this matrix: Wr for the number of 1's in each row and
Wc for the number of 1's in each column. For a matrix to be called low-density, the two
conditions Wc << m and Wr << n must be satisfied.

     0 1 0 1 1 0 0 1
H =  1 1 1 0 0 1 0 0      (1)
     0 0 1 0 0 1 1 1
     1 0 0 1 1 0 1 0

As we can see, in this H matrix every column has Wc = 2 ones and every row has Wr = 4 ones.
In such a small example the weights are only modestly smaller than the matrix dimensions; in
the much longer matrices of practical LDPC codes the conditions Wc << m and Wr << n hold
clearly. These weights can be checked directly, as in the sketch below.
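As a quick illustration, here is a minimal sketch (assuming NumPy; this snippet is ours, not part of the original presentation) that computes the row and column weights of the matrix in equation (1):

```python
import numpy as np

# Parity-check matrix from equation (1): m = 4 checks, n = 8 code bits.
H = np.array([[0, 1, 0, 1, 1, 0, 0, 1],
              [1, 1, 1, 0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0, 1, 1, 1],
              [1, 0, 0, 1, 1, 0, 1, 0]])

m, n = H.shape
print(H.sum(axis=1))   # row weights:    [4 4 4 4]         -> Wr = 4
print(H.sum(axis=0))   # column weights: [2 2 2 2 2 2 2 2] -> Wc = 2
```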

3.2 Graphical Representation


Tanner introduced an effective graphical representation for LDPC codes [2]. Not only do these
graphs provide a complete representation of the code, they also help to describe the decoding
algorithm, as explained later on in this paper.

Figure 1: Tanner graph corresponding to the parity check matrix in equation (1).

Tanner graphs are bipartite graphs: graphs (nodes or vertices connected by undirected edges)
whose nodes are separated into two classes, with edges only allowed between nodes of
different classes. The two types of nodes in a Tanner graph are the variable nodes (V-nodes,
also called bit nodes) and the check nodes (C-nodes, also called function nodes). Figure 1 is
an example of such a Tanner graph and represents the same code as the matrix H. The creation
of such a graph is rather straightforward. It consists of m check nodes (the number of parity
bits) and n variable nodes (the number of bits in a codeword). The Tanner graph of a code is
drawn according to the following rule: "Check node Ci is connected to variable node Vj if
the element Hij of H is a 1." A short sketch that extracts these connections directly from H
follows.
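Continuing the sketch above (H, m and n as defined there), the connection rule can be applied mechanically to list the edges of the Tanner graph:

```python
# Edge (Ci, Vj) exists exactly when H[i][j] == 1; equation (1) yields 16 edges.
edges = [(f"C{i}", f"V{j}") for i in range(m) for j in range(n) if H[i, j] == 1]
print(edges[:4])   # [('C0', 'V1'), ('C0', 'V3'), ('C0', 'V4'), ('C0', 'V7')]
```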

4. Regular and irregular LDPC codes
An LDPC code is called regular if Wc is constant for every column and Wr = Wc · (n/m) is also
constant for every row. The example matrix from equation (1) is regular with Wc = 2 and Wr
= 4. The regularity of this code can also be seen in the graphical representation: every V-node
has the same number of incident edges, and the same holds for the C-nodes. If H is low-density
but the number of 1's in each row or column is not constant, the code is called an irregular
LDPC code. A one-line regularity test is sketched below.
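Continuing the same sketch, regularity can be tested by checking that all column weights agree and all row weights agree:

```python
def is_regular(H):
    # Constant column weight Wc and constant row weight Wr.
    return len(set(H.sum(axis=0))) == 1 and len(set(H.sum(axis=1))) == 1

print(is_regular(H))   # True: Wc = 2 and Wr = Wc * (n/m) = 2 * (8/4) = 4
```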

5. Encoding LDPC codes


Encoding with LDPC codes is straightforward. A simple method of encoding is explained in
the following steps:

Step 1. Choose the right parity check matrix H for encoding the message.
H must be a sparse zero-one matrix with n columns and n-k rows, and the last n-k columns of
H must form an invertible matrix over GF(2).
Note here
n is the number of bits in the codeword generated,
k is the number of bits in the message to be encoded, and
n-k (denoted q) is the number of parity check bits to be added to the message.

Step 2. After choosing the right parity check matrix H, the next step is to derive the generator
matrix G corresponding to H. The parity check matrix can be written in systematic form as the
q × n matrix H = [PT : Iq], and the generator matrix as the k × n matrix G = [Ik : P], where P
is a k × q matrix. So by applying simple row operations over GF(2) to the H matrix we can
easily find the generator matrix G.

Step 3. The final step of encoding is the multiplication of the message to be encoded by the
generator matrix G. This multiplication results in a codeword of n binary bits.
Encoding is most easily understood by working through some examples; here we take
examples of LDPC codes with different code rates r. The code rate r can be calculated as
r = k/n
A sketch of this three-step procedure in code is given below.
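The following is a minimal sketch of these three steps over GF(2) (assuming NumPy; the helper names are ours for illustration, not a standard library API). As Step 1 requires, it assumes the last q columns of H are invertible:

```python
import numpy as np

def systematic_form(H):
    """Row-reduce H over GF(2) so that its last q columns become I_q."""
    H = H.copy() % 2
    q, n = H.shape
    k = n - q
    for i in range(q):                        # turn column k+i into a unit vector
        col = k + i
        pivot = next(r for r in range(i, q) if H[r, col] == 1)
        H[[i, pivot]] = H[[pivot, i]]         # move a pivot row into place
        for r in range(q):
            if r != i and H[r, col] == 1:
                H[r] ^= H[i]                  # clear the other 1's in this column
    return H

def generator_from_H(H):
    """Build G = [I_k : P] from the systematic form H = [P^T : I_q]."""
    q, n = H.shape
    k = n - q
    P_t = systematic_form(H)[:, :k]           # the P^T block
    return np.hstack([np.eye(k, dtype=int), P_t.T])

def encode(G, message):
    """Step 3: codeword = message . G (mod 2)."""
    return (np.array(message) @ G) % 2
```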

Example 1. Code rate r = 1/2


Let n = 6 and k = 3 so that the code rate is 3/6 = 1/2
Let the parity check matrix with n columns and (n-k) rows be chosen as:

H = 1 1 1 1 0 0
0 0 1 1 0 1
1 0 0 1 1 0

Message = 1 1 0

Row reduction over GF(2) (replace row 2 by the sum of rows 1 and 3, and row 3 by the sum
of rows 1 and 2) brings the last three columns of H to the identity:

     1 1 1 1 0 0        1 1 1 1 0 0
H =  0 0 1 1 0 1   →    0 1 1 0 1 0   = [PT : I]
     1 0 0 1 1 0        1 1 0 0 0 1

so that

               1 0 0 1 0 1
G = [I : P] =  0 1 0 1 1 1
               0 0 1 1 1 0

Finally, by multiplying all eight possible 3-bit strings by G, all eight valid codewords are
obtained. For example, the codeword for the bit-string '110' is obtained by:

               1 0 0 1 0 1
( 1 1 0 )  ·   0 1 0 1 1 1   =   ( 1 1 0 0 1 0 )
 message       0 0 1 1 1 0          codeword
                G matrix
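As a usage example, the illustrative helpers sketched earlier reproduce this result:

```python
H1 = np.array([[1, 1, 1, 1, 0, 0],
               [0, 0, 1, 1, 0, 1],
               [1, 0, 0, 1, 1, 0]])
G1 = generator_from_H(H1)
print(G1)                      # rows 100101, 010111, 001110, as derived above
print(encode(G1, [1, 1, 0]))   # [1 1 0 0 1 0]
```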

Example 2. Code rate r = 1/3


Let n = 6 and k = 2 so that the code rate is 2/6 = 1/3
Let the parity check matrix with n columns and (n-k) rows be chosen as:

1 0 1 0 0 0
H= 0 1 0 1 0 0
1 0 1 0 1 0
0 1 0 0 0 1

Message = 1 1

The n-bit codeword generated by the method described above is


Codeword = (1 1 1 1 0 1)

Example 3. Code rate r = 2/3


Let n = 9 and k = 6 so that the code rate is 6/9 = 2/3
Let the parity check matrix with n columns and (n-k) rows be chosen as:

1 1 1 1 0 0 1 1 1
H= 0 0 1 1 0 1 0 1 0
1 0 0 1 1 0 1 0 0

Message = 1 0 1 1 0 1

The n-bit codeword generated by the method described above is

Codeword = (1 0 1 1 0 1 0 1 0)

6. Decoding LDPC codes


Iterative Decoding of LDPC Codes
For any linear block code, a vector c is a valid codeword if and only if
cHT = 0.
Over a binary symmetric channel (BSC), the received word is the codeword c added (mod 2)
to an error vector e. The decoder needs to find e and flip the corresponding bits. A small
syndrome-check sketch is given below.
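Continuing the Example 1 sketch (again with our illustrative helpers), the validity test cHT = 0 can be checked as follows:

```python
def syndrome(H, r):
    """Syndrome r . H^T (mod 2); the zero vector iff r is a valid codeword."""
    return (np.array(r) @ H.T) % 2

c = encode(G1, [1, 1, 0])
print(syndrome(H1, c))    # [0 0 0] -> valid codeword
c[5] ^= 1                 # a BSC error: flip one bit
print(syndrome(H1, c))    # [0 1 0] -> non-zero, an error is detected
```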
The decoding algorithm is based on linear algebra. Graph-based algorithms include:
1. the sum-product algorithm for general graph-based codes;
2. the MAP (BCJR) algorithm for trellis graph-based codes;
3. the message passing algorithm for bipartite graph-based codes.
The most powerful approach is belief propagation: each check node in the Tanner graph (or,
equivalently, each row of the parity check matrix, which is just a different representation of
the same entity) sends the connected codeword nodes the bit value it believes to be correct,
together with a calculated probability. Once the probability that a given bit is zero or one is
good enough, i.e. the error probability is below the requested value (by Shannon's theorem,
a code that allows the error probability to be reduced arbitrarily always exists for any rate
below the channel capacity), the codeword calculation (decoding) is complete. There are two
mechanisms, hard- and soft-decision algorithms; the former is simpler, but the latter is
frequently faster. Let's take an example of the hard-decision algorithm.

Step 1. Let the parity check matrix be the (not very sparse) matrix from equation (1):

0 1 0 1 1 0 0 1
1 1 1 0 0 1 0 0
0 0 1 0 0 1 1 1
1 0 0 1 1 0 1 0

Transmitted codeword: 1 0 0 1 0 1 0 1

The check word, calculated by multiplying the matrix and the codeword, is: 0 0 0 0

Suppose that during transmission of the codeword and check word over the channel, the
codeword is corrupted so that its last bit (the 8th bit, V7) is flipped:

Received codeword: 1 0 0 1 0 1 0 0

Step 2. Draw the Tanner graph corresponding to the parity check matrix H.

Step 3. Here the decoding algorithm starts: each codeword node V first sends its received bit
to every check node C it is connected to.
The C0 node receives the bits 0, 1, 0, 0 from V1, V3, V4 and V7.
The C1 node receives the bits 1, 0, 0, 1 from V0, V1, V2 and V5.
The C2 node receives the bits 0, 1, 0, 0 from V2, V5, V6 and V7.
The C3 node receives the bits 1, 1, 0, 0 from V0, V3, V4 and V6.
(Note that C0 and C2 receive the corrupted value 0 from V7.)

Step 4. The next step is to calculate the answer each check node sends back to its code nodes
V0 to V7.
The received check word is 0 0 0 0 (calculated above), so a set of simple equations starts here.
For each connected codeword node, a check node takes the three bits received from the other
nodes, XORs them (sums modulo 2), and sends back the bit Y that the node would have to take
for the check to equal the received check bit of 0.
For the first check node C0, using the bits 0, 1, 0, 0 from above we get:
Y0 ^ 1 ^ 0 ^ 0 = 0, so Y0 = 1 (to V1)
0 ^ Y1 ^ 0 ^ 0 = 0, so Y1 = 0 (to V3)
0 ^ 1 ^ Y2 ^ 0 = 0, so Y2 = 1 (to V4)
0 ^ 1 ^ 0 ^ Y3 = 0, so Y3 = 1 (to V7)
For the second check node C1, using the bits 1, 0, 0, 1 we get:
Y0 ^ 0 ^ 0 ^ 1 = 0, so Y0 = 1 (to V0)
1 ^ Y1 ^ 0 ^ 1 = 0, so Y1 = 0 (to V1)
1 ^ 0 ^ Y2 ^ 1 = 0, so Y2 = 0 (to V2)
1 ^ 0 ^ 0 ^ Y3 = 0, so Y3 = 1 (to V5)
For the third check node C2, using the bits 0, 1, 0, 0 we get:
Y0 ^ 1 ^ 0 ^ 0 = 0, so Y0 = 1 (to V2)
0 ^ Y1 ^ 0 ^ 0 = 0, so Y1 = 0 (to V5)
0 ^ 1 ^ Y2 ^ 0 = 0, so Y2 = 1 (to V6)
0 ^ 1 ^ 0 ^ Y3 = 0, so Y3 = 1 (to V7)
For the fourth check node C3, using the bits 1, 1, 0, 0 we get:
Y0 ^ 1 ^ 0 ^ 0 = 0, so Y0 = 1 (to V0)
1 ^ Y1 ^ 0 ^ 0 = 0, so Y1 = 1 (to V3)
1 ^ 1 ^ Y2 ^ 0 = 0, so Y2 = 0 (to V4)
1 ^ 1 ^ 0 ^ Y3 = 0, so Y3 = 0 (to V6)

Step 5. The next step is to write the bits computed by each of the four check nodes C0 to C3
in matrix form:

      V1 V3 V4 V7
C0 =   1  0  1  1
      V0 V1 V2 V5
C1 =   1  0  0  1
      V2 V5 V6 V7
C2 =   1  0  1  1
      V0 V3 V4 V6
C3 =   1  1  0  0

Each check node then sends its Yi to the corresponding code bit Vi. After all check nodes are
processed, the codeword nodes hold the following sets of bits:
V0: 1 from C1, 1 from C3, 1 from the originally received codeword.
V1: 1 from C0, 0 from C1, 0 from the originally received codeword.
V2: 0 from C1, 1 from C2, 0 from the originally received codeword.
V3: 0 from C0, 1 from C3, 1 from the originally received codeword.
V4: 1 from C0, 0 from C3, 0 from the originally received codeword.
V5: 1 from C1, 0 from C2, 1 from the originally received codeword.
V6: 1 from C2, 0 from C3, 0 from the originally received codeword.
V7: 1 from C0, 1 from C2, 0 from the originally received codeword.

The above results can be written in tabular form as shown below:

V-node   Received bit   Messages from check nodes   Decision
V0       1              C1→1   C3→1                 1
V1       0              C0→1   C1→0                 0
V2       0              C1→0   C2→1                 0
V3       1              C0→0   C3→1                 1
V4       0              C0→1   C3→0                 0
V5       1              C1→1   C2→0                 1
V6       0              C2→1   C3→0                 0
V7       0              C0→1   C2→1                 1

Table 1: Decoding step in which the V-nodes use the answer messages from the C-nodes, together with the
received bit, to perform a majority vote on each bit value.

Step 6. Taking the majority vote for each bit (i.e. whichever value has more 'votes' among the
three entries in each row of the table above), we get a new codeword:
1 0 0 1 0 1 0 1
which is exactly the transmitted codeword: the flipped bit V7 has been corrected.

The same steps are then repeated until the codeword stops changing; in our case this already
happens after the first iteration. The soft-decision algorithm uses essentially the same logic,
but it operates on the probabilities of each bit being 1 or 0; these probabilities are recalculated
in every iteration, and the loop stops once each bit's probability exceeds the requested value
(equivalently, once the error probability falls below the requested value). Real-world codes
use much larger codewords (up to several thousand bits), but the logic is always the same.
This completes the presentation of the iterative decoding algorithm for low-density parity-check
codes; a compact implementation sketch follows.
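Below is a compact sketch of the hard-decision (majority-vote) decoder described in Steps 3 to 6 (assuming NumPy; written for clarity rather than speed, and self-contained):

```python
import numpy as np

def hard_decision_decode(H, received, max_iters=20):
    """Each check node sends each neighbour the XOR of its other neighbours'
    bits; each variable node then takes a majority vote over its own received
    bit and these incoming messages. Repeat until the word stops changing."""
    r = list(received)
    m, n = H.shape
    for _ in range(max_iters):
        new_r = []
        for j in range(n):                           # variable node Vj
            votes = [r[j]]                           # its own channel bit
            for i in range(m):                       # every check Ci on Vj
                if H[i, j] == 1:
                    others = [r[jj] for jj in range(n)
                              if H[i, jj] == 1 and jj != j]
                    votes.append(sum(others) % 2)    # bit that satisfies Ci
            new_r.append(int(sum(votes) > len(votes) / 2))
        if new_r == r:                               # codeword stopped changing
            break
        r = new_r
    return r

# The worked example: the last bit of 1 0 0 1 0 1 0 1 was flipped in transit.
H = np.array([[0, 1, 0, 1, 1, 0, 0, 1],
              [1, 1, 1, 0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0, 1, 1, 1],
              [1, 0, 0, 1, 1, 0, 1, 0]])
print(hard_decision_decode(H, [1, 0, 0, 1, 0, 1, 0, 0]))
# -> [1, 0, 0, 1, 0, 1, 0, 1]: the error is corrected in one iteration
```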

7. Summary
This tutorial presents a clear understanding of Low Density Parity Check codes, making them
simpler and easier to understand and implement. LDPC codes are finding increasing use in
applications where reliable and highly efficient information transfer over bandwidth- or
return-channel-constrained links is desired in the presence of data-corrupting noise. Recently,
LDPC codes have been considered for many industrial standards of next generation
communication systems such as DVB-S2, WLAN (802.11n), WiMAX (802.16e), and
10GBase-T (802.3an). LDPC codes show good performance at high SNR (signal-to-noise
ratio), but turbo codes show better performance at low SNR. LDPC codes perform even better
than turbo codes for very large block lengths (n > 10^5) and can come within 0.1 dB of the
Shannon capacity [8, 9].
Low-density parity-check codes have been studied intensively in the last few years, and huge
progress has been made in the understanding of, and the ability to design, iterative coding
systems. The iterative decoding approach is already used in turbo codes, but the structure of
LDPC codes gives even better results: in many cases they allow a higher code rate and also a
lower error floor. Furthermore, they make it possible to implement parallelizable decoders.
The main disadvantages are that the encoders are somewhat more complex and that the code
length has to be rather long to yield good results.

References
[1] R. G. Gallager, Low-Density Parity-Check Codes, M.I.T. Press, Cambridge, MA, 1963. (Also, R. G.
Gallager, "Low-density parity-check codes," IRE Trans. Inform. Theory, vol. IT-8, pp. 21-28, Jan. 1962.)
[2] R. M. Tanner, "A recursive approach to low complexity codes," IEEE Trans. Inform. Theory, vol. 27,
pp. 533-547, Sept. 1981.
[3] D. MacKay and R. Neal, "Good codes based on very sparse matrices," in Cryptography and Coding, 5th
IMA Conf., C. Boyd, Ed., Lecture Notes in Computer Science, 1995.
[4] D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Trans.
Inform. Theory, vol. 45, no. 2, pp. 399-431, 1999.
[5] T. Richardson and R. Urbanke, "The capacity of low-density parity-check codes under message-passing
decoding," IEEE Trans. Inform. Theory, vol. 47, pp. 599-618, 2001.
[6] M. Chiani and A. Ventura, "Design and performance evaluation of some high-rate irregular low-
density parity-check codes," Proc. IEEE Globecom, Nov. 2001.
[7] V. V. Zyablov and M. S. Pinsker, "Estimation of error-correction complexity of Gallager low-density
codes," Probl. Inform. Transm., vol. 11, pp. 18-28, 1976.
[8] R. Schriebman, "Error Correcting Code," April 13, 2006.
[9] T. Lestable, E. Zimmerman, M.-H. Hamon, and S. Stiglmayr, "Block-LDPC codes vs. duo-binary
turbo-codes for European next generation wireless systems."
