
Sum-Product decoding of convolutional codes

Toshiyuki SHOHON, Yuuichi OGAWA, Haruo OGIWARA


Oyama National College of Technology, Oyama, Tochigi, Japan
Nagaoka University of Technology, Nagaoka, Niigata, Japan
E-mail: masa@comm.nagaokaut.ac.jp, ogiwara@vos.nagaokaut.ac.jp

Abstract: This article proposes two methods to improve the sum-product soft-in/soft-out decoding performance of convolutional codes. The first method transforms the parity check equation so as to remove cycles of length four from the Tanner graph of a convolutional code, and performs the sum-product algorithm (SPA) with the transformed parity check equation. This method improves the performance of the (7, 5)₈ convolutional code (CC1). For the (45, 73)₈ convolutional code (CC2), however, the method is not effective. The second proposed method uses a parity check equation of higher order than the normal parity check equation for SPA decoding. This method improves the performance of both convolutional codes (CC1, CC2). The resulting performance is close to that of the BCJR algorithm, at a lower decoding complexity.

2. Removing cycles of length four in the Tanner graph

2.1 Parity check equation of convolutional codes
As an example, consider a convolutional code with rate 1/2 and generator matrix G(D) given by

G(D) = [1  G1(D)/G0(D)].   (1)

Let X(D) and P(D) denote the polynomials of an information sequence and a parity sequence, respectively. For brevity, X(D) and P(D) are also written as X and P. The parity check equation is given by

H(X, P) = G1(D)X(D) + G0(D)P(D).   (2)

1. Introduction
Soft-input and soft-output (SISO) decoding of convolutional codes is used for decoding turbo codes [1]. It is also applied to turbo equalization [2], [3] with a convolutional code. The most widely used SISO algorithm for convolutional codes is the modified BCJR algorithm. Its drawback is that the decoding complexity is too large for high-speed decoding. The sum-product algorithm (SPA) used for low density parity check (LDPC) codes has low decoding complexity and is suited to parallel processing, so it is attractive to apply the SPA to the decoding of convolutional codes. However, a straightforward application of the SPA to convolutional codes does not achieve good performance.

This article proposes two methods to improve the performance of SPA decoding of convolutional codes. The first method converts a Tanner graph with four-cycles into one without four-cycles. A similar method has been proposed for block codes [4]; this article derives an equivalent and more direct method that makes good use of the structure of convolutional codes. With this method, the performance of the (7, 5)₈ code (CC1) is improved but that of the (45, 73)₈ code (CC2) is not. This implies that four-cycles are not always the primary cause of poor SPA decoding performance. To improve the performance of CC2, a second method is proposed: SPA decoding with a parity check equation of higher order than the normal parity check equation. By using the two methods, the performance of SPA decoding becomes close to that of BCJR decoding. An application of the SPA to the trellis diagram of a convolutional code has been proposed; it derives the BCJR algorithm as an instance of the SPA [5]. The methods proposed in this article are quite different, as they operate on the Tanner graph of the parity check equations of the convolutional codes.

This article is organized as follows. Section 2 discusses the removal of four-cycles from the Tanner graph in polynomial form, and Section 3 proposes the use of a higher order parity check equation for SPA decoding. Section 4 compares the decoding complexity with that of the BCJR algorithm.

The parity check equation satisfies H(X, P) = 0 iff (X, P) is a code word. The order of the parity check equation H(X, P) is the maximum order of G0(D) and G1(D). In this article, the order of H(X, P) is denoted by ν0.
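To make the parity check relation concrete, the following sketch (a minimal Python illustration with hypothetical helper names, not code from the paper) generates the parity sequence of CC1 with the recursive systematic encoder implied by eq. (1) and confirms that eq. (2) evaluates to zero at every time instance.

```python
# Minimal sketch: generate the parity sequence of the rate-1/2 recursive
# systematic code G(D) = [1, G1(D)/G0(D)] and verify that the parity check
# equation (2), H(X, P) = G1(D)X(D) + G0(D)P(D), is zero at every time
# instance.  Taps are listed from D^0 upward.
import random

def encode_parity(x, g0, g1):
    """Return parity bits p such that G0(D)P(D) = G1(D)X(D) over GF(2).
    Assumes g0[0] == 1 (the feedback polynomial has a constant term)."""
    p = []
    for t in range(len(x)):
        # contribution of G1(D)X(D) at time t
        s = sum(g1[i] * x[t - i] for i in range(len(g1)) if t - i >= 0)
        # feedback terms of G0(D)P(D) (all taps except D^0)
        s += sum(g0[j] * p[t - j] for j in range(1, len(g0)) if t - j >= 0)
        p.append(s % 2)
    return p

def check_equation(x, p, g0, g1):
    """Evaluate H(X, P) = G1(D)X(D) + G0(D)P(D) at each time instance."""
    out = []
    for t in range(len(x)):
        s = sum(g1[i] * x[t - i] for i in range(len(g1)) if t - i >= 0)
        s += sum(g0[j] * p[t - j] for j in range(len(g0)) if t - j >= 0)
        out.append(s % 2)
    return out

g0 = [1, 1, 1]          # G0(D) = 1 + D + D^2  (7 in octal)
g1 = [1, 0, 1]          # G1(D) = 1 + D^2      (5 in octal)
x = [random.randint(0, 1) for _ in range(20)]
p = encode_parity(x, g0, g1)
assert all(v == 0 for v in check_equation(x, p, g0, g1))
print("H(X, P) = 0 at every time instance")
```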

2.2 Algorithm

A method for removing four-cycles (RmFC) in a Tanner graph has been proposed for block codes [4]. That discussion is based on a parity check matrix. In this subsection the same method is developed for convolutional codes using polynomials, which is more suited to convolutional codes. The code CC1 is taken as an example. Its generator matrix is G(D) = [1 (D² + 1)/(D² + D + 1)], and its parity check equation H(X, P) is given by

H(X, P) = (D² + 1)X(D) + (D² + D + 1)P(D).   (3)

Figure 1 shows the Tanner graph of the code. There are four-cycles, as shown by bold lines; an information bit xk and a parity bit pk at the same time instance k belong to a four-cycle. To remove the cycles, an auxiliary bit node sequence A(D), given by

A(D) = X(D) + P(D),   (4)

is added.


Fig. 1  Tanner graph of the [1 (D² + 1)/(D² + D + 1)] code.

Fig. 2  Tanner graph after removing cycles of length four.

In addition, an auxiliary check node sequence given by

H1(X, P, A) = X(D) + P(D) + A(D)   (5)


is added. As a result, the cycles are removed. Substituting eq. (4) into eq. (3), we have

H2(X, P, A) = D P(D) + (D² + 1)A(D).   (6)

Equations (5) and (6) are the new parity check equations, and the corresponding Tanner graph is shown in Fig. 2. The four-cycles are removed and the girth is 8. In the same way, the parity check equations for CC2 after removing the cycles are given by

H1(X, P, A) = (D + 1)X(D) + D² P(D) + A(D)   (7)
H2(X, P, A) = D⁵ X(D) + P(D) + (D³ + 1)A(D)   (8)

and the girth becomes 6.
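As a quick consistency check (my own verification sketch, not material from the paper), the snippet below confirms that eqs. (5)-(6) for CC1 and eqs. (7)-(8) for CC2 are equivalent to the original parity check equations once the auxiliary sequence A(D) is eliminated. Polynomials in D are stored as integers, bit i being the coefficient of D^i.

```python
# Verify over GF(2) that the transformed check equations reduce to the
# original checks when A(D) is substituted back.

def pmul(a, b):
    """Carry-less (GF(2)) polynomial multiplication."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

# --- CC1: eqs (4)/(5) give A = X + P; substitute into eq (6) ----------------
# H2 = D*P + (D^2+1)*A, where A contributes X and P each with coefficient 1.
ax, ap = 0b1, 0b1               # coefficients of X and P inside A
cx = pmul(0b101, ax)                        # (D^2+1)*1
cp = 0b010 ^ pmul(0b101, ap)                # D + (D^2+1)*1
assert (cx, cp) == (0b101, 0b111)           # eq (3): (D^2+1, D^2+D+1)

# --- CC2: eq (7) gives A = (D+1)*X + D^2*P; substitute into eq (8) ----------
# H2 = D^5*X + P + (D^3+1)*A
ax, ap = 0b11, 0b100
cx = 0b100000 ^ pmul(0b1001, ax)            # D^5 + (D^3+1)(D+1)
cp = 0b1 ^ pmul(0b1001, ap)                 # 1 + (D^3+1)*D^2
assert (cx, cp) == (0o73, 0o45)             # original CC2 check: G1 = 73_8, G0 = 45_8
print("RmFC check equations are equivalent to the original ones")
```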


2.3 Bit error rate performance


Figure 3 shows the bit error rate (BER) performance of CC1 and CC2 after RmFC, obtained by simulation over the additive white Gaussian noise (AWGN) channel. In the simulation, the number of information bits per block is 1024, the number of transmitted blocks is 10⁴, and the maximum number of SPA decoding iterations is 100. The figure shows that the performance of CC1 with the SPA is only 0.1 dB inferior to that of the BCJR algorithm, whereas the performance of CC2 with the SPA remains much worse than that of the BCJR algorithm. This result shows that RmFC improves the performance of some convolutional codes but not of others.
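For readers who want to reproduce such an experiment, the sketch below (an illustration under my own assumptions, not the authors' code) builds the binary parity check matrix that the SPA operates on for a terminated block of the transformed CC1 code, one row per check from eqs. (5) and (6); a standard LDPC sum-product decoder can then be run on this matrix, with the auxiliary bits a_t treated as unobserved variables whose channel LLR is zero. The block length follows the simulation above; the termination handling is a simplification.

```python
# Sketch: parity check matrix for a terminated block of the RmFC-transformed
# CC1 code.  Variable order per time instance t is (x_t, a_t, p_t); terms with
# a negative time index are dropped, i.e. an all-zero starting state is assumed.
import numpy as np

def build_h(num_info_bits):
    n_vars = 3 * num_info_bits
    rows = []
    for t in range(num_info_bits):
        # check H1 at time t:  x_t + a_t + p_t = 0          (eq. (5))
        r1 = np.zeros(n_vars, dtype=np.uint8)
        r1[3 * t + 0] = r1[3 * t + 1] = r1[3 * t + 2] = 1
        rows.append(r1)
        # check H2 at time t:  p_{t-1} + a_{t-2} + a_t = 0   (eq. (6))
        r2 = np.zeros(n_vars, dtype=np.uint8)
        r2[3 * t + 1] = 1                       # a_t
        if t >= 1:
            r2[3 * (t - 1) + 2] = 1             # p_{t-1}
        if t >= 2:
            r2[3 * (t - 2) + 1] = 1             # a_{t-2}
        rows.append(r2)
    return np.array(rows)

H = build_h(1024)      # 2048 checks x 3072 variable nodes, as in the simulation
print(H.shape, "row weights:", np.unique(H.sum(axis=1)))
```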

Fig. 3  Bit error rate performance of sum-product decoding (original and RmFC) and BCJR decoding over the AWGN channel: (a) CC1, (b) CC2.

3. Decoding based on a higher order parity check equation

3.1 Higher order parity check equation
To improve the performance of CC2, a second method is proposed: SPA decoding with a parity check equation of higher order than the normal parity check equation. The new parity check equation is given by

H'(X, P) = M(D)H(X, P)   (9)
         = M(D)G1(D)X(D) + M(D)G0(D)P(D)   (10)
         = G1'(D)X(D) + G0'(D)P(D),   (11)

where

G0'(D) = M(D)G0(D)   (12)
G1'(D) = M(D)G1(D)   (13)

and M(D) is a non-zero polynomial in D (it contains no terms in X or P).

Table 1  HOPCE for CC1 (G0, G1 in octal).

ν    nfc   G0     G1
2    2     7      5
3    5     11     17
4    3     33     21
5    5     43     71
6    5     153    101
7    6     321    207
8    9     653    401
9    8     1503   1031
10   15    3113   2241

Table 2  HOPCE for CC2 (G0, G1 in octal).

ν    nfc   G0       G1
5    11    45       73
6    31    157      111
7    23    261      327
8    18    507      625
9    16    1427     1045
10   5     3101     2007
11   11    5303     6011
12   11    14441    10047
13   12    24033    31001
14   14    61033    42001
15   9     120003   144111
16   7     304003   210111
17   17    640201   442107
18   19    1450003  1014111



We call H(X, P) the original parity check equation. The derived check equation is equivalent to the original one, since

H'(X, P) = M(D)H(X, P) = 0  iff  H(X, P) = 0.   (14)

From eqs. (11)-(13), the order of H'(X, P) is higher than that of H(X, P); therefore, we call H'(X, P) the higher order parity check equation (HOPCE). From eqs. (11)-(13), the order ν of H'(X, P) is controlled by the order of M(D). Table 1 and Table 2 show HOPCEs for CC1 and CC2, respectively. Since a method to obtain the optimum H'(X, P) is not known, the check equations listed were obtained by the following heuristic method. First, obtain H'(X, P) for all M(D) of the predefined order. Next, select the H'(X, P) with the minimum nfc, where nfc is the number of four-cycles per check node (defined in Section 3.2).
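As an illustration of the construction (my own sketch; the paper does not list the multiplier polynomials M(D) behind Tables 1 and 2), multiplying the generators of CC1 by M(D) = D² + 1 over GF(2) reproduces the ν = 4 entry (G0, G1) = (33, 21)₈ of Table 1.

```python
# Sketch: build a HOPCE by multiplying both generator polynomials by M(D)
# over GF(2), as in eqs. (12)-(13).  Polynomials are integers, bit i = coeff of D^i.

def pmul(a, b):
    """Carry-less (GF(2)) polynomial multiplication."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

g0, g1 = 0o7, 0o5      # CC1: G0 = D^2+D+1, G1 = D^2+1
m = 0b101              # one possible multiplier, M(D) = D^2 + 1 (my assumption)
print(oct(pmul(m, g0)), oct(pmul(m, g1)))   # -> 0o33 0o21, the nu = 4 row of Table 1
```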

Fig. 4  Bit error rate performance versus the order of the HOPCE: (a) CC1, (b) CC2.

3.2 Number of four-cycles per check node

Let Ct denote the check node at time instance t. The number of four-cycles per check node concerning Ct, denoted nfc,t, is defined as the number of four-cycles constructed by Ck and Ct with t ≥ k + 1. Since the count for one check node does not include four-cycles already counted for an earlier one, the four-cycles at each time instance are counted without duplication. In addition, since nfc,t does not change with t for a convolutional code, the suffix t can be omitted and we simply write nfc.

Picking up CC1 as an example, we count nfc. Equation (2) becomes

H(X, P) = (D² + 1)X(D) + (D² + D + 1)P(D),   (15)

and the Tanner graph is given by Fig. 1. The four-cycles constructed by Ck and Ct (t ≥ k + 1) are
1. Ck - pk-1 - Ck+1 - pk - Ck,
2. Ck - xk - Ck+2 - pk - Ck.
Therefore, we have nfc = 2.
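This count, and the heuristic search described in Section 3.1, are easy to mechanize. The sketch below (my own implementation, not the authors' code) counts four-cycles per check node from the tap positions of G0 and G1 and then scans all multipliers M(D) of a given degree. One restriction is my own assumption: M(D) is required to have a non-zero constant term, since otherwise the resulting check equation is merely a delayed copy of a lower-order one. With that assumption it reproduces nfc = 2 for the original CC1 check and the ν = 4 row of Table 1.

```python
# Sketch of the four-cycle count and the heuristic HOPCE search.
# Two check nodes spaced d time instances apart share one variable node for
# every pair of taps of G0 (or of G1) at distance d, and every pair of shared
# variable nodes forms one four-cycle.

def support(poly):
    return {i for i in range(poly.bit_length()) if (poly >> i) & 1}

def pmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def nfc(g0, g1):
    s0, s1 = support(g0), support(g1)
    order = max(max(s0), max(s1))
    total = 0
    for d in range(1, order + 1):
        shared = sum(1 for i in s0 if i + d in s0) + sum(1 for i in s1 if i + d in s1)
        total += shared * (shared - 1) // 2      # C(shared, 2) four-cycles
    return total

g0, g1 = 0o7, 0o5                                # CC1
print(nfc(g0, g1))                               # -> 2, as counted in the text

def best_hopce(g0, g1, deg_m):
    """Scan all M(D) of exact degree deg_m with M(0) = 1 and keep the minimum nfc."""
    best = None
    for bits in range(2 ** (deg_m - 1)):
        m = 1 | (bits << 1) | (1 << deg_m)
        cand = (nfc(pmul(m, g0), pmul(m, g1)), pmul(m, g0), pmul(m, g1))
        best = min(best, cand) if best else cand
    return best

n4, h0, h1 = best_hopce(g0, g1, 2)
print(n4, oct(h0), oct(h1))                      # -> 3 0o33 0o21, the nu = 4 row of Table 1
```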

3.3 BER performance for order of HOPCE


Figure 4 shows the BER performance versus the order of the HOPCE. The simulations are carried out at Eb/N0 = 5.0 dB with at most 100 iterations. The leftmost point corresponds to the original parity check equation. The figure shows that the performance is improved by using the HOPCE, although the BER does not decrease monotonically with the order. The best performance is obtained with ν = 4, nfc = 3, (G0, G1) = (33, 21)₈ for CC1, and with ν = 10, nfc = 5, (G0, G1) = (3101, 2007)₈ for CC2.


3.4 BER performance

Figure 5 shows the performance of SPA decoding using the best parity check equation obtained in the previous subsection. The simulation conditions are the same as in the previous section. By using the HOPCEs, the performance is improved by 0.9 dB and 1.3 dB compared with conventional SPA decoding, and is only 0.1 dB and 0.3 dB inferior to BCJR decoding at BER = 10⁻⁴, for CC1 and CC2, respectively.

Fig. 5  Bit error rate performance of SPA decoding (original and HOPCE) and BCJR decoding: (a) CC1, (b) CC2.

4. Decoding complexity

The decoding complexity of the proposed algorithm is compared with that of the BCJR algorithm. Table 3 shows the number of operations needed for the SISO decoding of one bit. In the table, k and n denote the numbers of input bits and of output bits per time instance, respectively, m denotes the number of delay elements in the encoder, and dc denotes the degree of a check node. The average number of SPA iterations at Eb/N0 = 6 dB was 2.7 for CC1 and 1.4 for CC2. Taking the iterations into account, the number of SPA operations becomes 105.3 for CC1 and 74.2 for CC2. For CC1, the number of operations of the SPA is comparable to that of the BCJR algorithm; for CC2, it is 0.14 times that of the BCJR algorithm.

Table 3  Total number of operations for SISO decoding of one bit.

                                              sum-product   BCJR
addition                                      dc - 2        (3·2^k - 1)·2^m + k·2^n + 2n - 1
multiplication                                4dc - 1       (1 + 2^(k+1))·2^(m+1) + (2k + n - 1)·2^n + 6n + 1
special operation (tanh, tanh^-1, log, exp)   2dc           2^n + k·2^n + 1
total: CC1                                    39            101
total: CC2                                    53            521
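To see where the totals in Table 3 come from, the following sketch evaluates the formulas with the code parameters. The check node degrees dc = 6 for CC1 and dc = 8 for CC2 are my own inference (the total tap weights of the best HOPCEs (33, 21)₈ and (3101, 2007)₈); the remaining parameters follow from the code definitions.

```python
# Sketch: evaluate the operation-count formulas of Table 3 (per decoded bit).
def spa_ops(dc):
    add, mul, special = dc - 2, 4 * dc - 1, 2 * dc
    return add + mul + special

def bcjr_ops(k, n, m):
    add = (3 * 2**k - 1) * 2**m + k * 2**n + 2 * n - 1
    mul = (1 + 2**(k + 1)) * 2**(m + 1) + (2 * k + n - 1) * 2**n + 6 * n + 1
    special = 2**n + k * 2**n + 1          # tanh, tanh^-1, log, exp
    return add + mul + special

# CC1: rate 1/2 (k=1, n=2), memory m=2, dc=6 (assumed: tap weights of (33,21)_8)
# CC2: rate 1/2 (k=1, n=2), memory m=5, dc=8 (assumed: tap weights of (3101,2007)_8)
print(spa_ops(6), bcjr_ops(1, 2, 2))   # -> 39 101
print(spa_ops(8), bcjr_ops(1, 2, 5))   # -> 53 521
# With the average iteration counts quoted in the text (2.7 and 1.4):
print(round(2.7 * spa_ops(6), 1), round(1.4 * spa_ops(8), 1))   # -> 105.3 74.2
```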

5. Conclusion


Sum-product decoding of convolutional codes has been discussed. The conventional approach is known to give poor performance, and the reason has been attributed to the existence of four-cycles in the Tanner graph. A method to remove the cycles was discussed in polynomial form, which is more suited to convolutional codes than the matrix form previously proposed for block codes. The method was applied to two convolutional codes which are optimum codes for turbo codes. The performance of one code (CC1) is improved by the proposed method and comes within 0.1 dB of BCJR decoding; the performance of the other code (CC2) remains insufficient. To improve it, this paper proposed SPA decoding with a parity check equation of higher order than the normal parity check equation. With this method, the performance comes within 0.1 dB and 0.3 dB of BCJR decoding for CC1 and CC2, respectively.


References
[1] C. Berrou and A. Glavieux, "Near optimum error correcting coding and decoding: Turbo-codes," IEEE Trans. Commun., vol. 44, no. 9, pp. 1261-1271, Sep. 1996.
[2] C. Douillard, M. Jézéquel, C. Berrou, A. Picart, P. Didier, and A. Glavieux, "Iterative correction of intersymbol interference: Turbo-equalization," European Trans. Telecommun., vol. 6, no. 5, pp. 507-511, 1995.
[3] C. Laot, A. Glavieux, and J. Labat, "Turbo equalization: Adaptive equalization and channel decoding jointly optimized," IEEE J. Sel. Areas Commun., vol. 19, no. 9, pp. 1744-1752, Sep. 2001.
[4] S. Sankaranarayanan and B. Vasic, "Iterative decoding of linear block codes: A parity-check orthogonalization approach," IEEE Trans. Inform. Theory, vol. 51, no. 9, pp. 3347-3353, Sep. 2005.
[5] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, "Factor graphs and the sum-product algorithm," IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 498-519, 2001.


