
University of Kansas | School of Engineering

Turbo Codes
Vidya T Ramachandran April 22, 2008

Department of Electrical Engineering and Computer Science


Abstract
In 1993, a group of researchers in France, Berrou, Glavieux, and Thitimajshima, presented a new class of error-correcting codes, termed Turbo Codes. These codes were shown to achieve a Bit Error Rate (BER) performance within 0.7 dB of the Shannon capacity limit. Turbo codes promise the attainment of the Holy Grail of communication theory: reliable communication at rates approaching channel capacity. They have a very wide range of applications, mainly in wireless communications, ranging from third-generation mobile systems to deep-space exploration.


Outline
Introduction
Channel Capacity
Why turbo codes perform so well
Review of Convolutional codes
  RSC Encoding
Turbo Code Architecture
  Encoder, Interleaver, Decoder
  Example: UMTS Turbo Encoder-Interleaver-Decoder
Performance
Practical Issues
Applications
Conclusion


Introduction
Powerful class of error-correcting codes with iterative decoding
Discovered by Berrou, Glavieux and Thitimajshima; presented in 1993 at ICC, Geneva, Switzerland
Advantages
  Come closest to the Shannon capacity limit, the maximum achievable data transfer rate over a noisy channel
  For a given BER, transmit power can be decreased
Drawbacks
  High decoder complexity
  Relatively high latency due to interleaving and iterations when decoding
Best suited to applications where power saving is required or only low SNR is available

(Photos: Claude Berrou, Alain Glavieux)

Coding Timeline

The coding world can be divided into two eras: BTC and ATC, Before and After Turbo Codes.
From: http://userver.ftw.at/%7Ejossy/turbo/2004/lecture01.pdf


Channel Capacity: Shannon's Limit


Claude Shannon, "A mathematical theory of communication," Bell System Technical Journal, 1948
Every channel has associated with it a capacity C, measured in bits per channel use (modulated symbol)
The channel capacity is an upper bound on the information rate r
There exists a code of rate r < C that achieves reliable communication
Shannon showed that the bit error probability approaches zero as the block length n of the code goes to infinity when a rate r < C code is selected at random
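For reference, the limit quoted later on the iteration-performance slide follows from the standard real-valued AWGN capacity expression (this formula is added here for context and is not on the original slide):

$$ C = \tfrac{1}{2}\log_2(1 + \mathrm{SNR}), \qquad \mathrm{SNR} = \frac{2RE_b}{N_0} \;\;\Longrightarrow\;\; \frac{E_b}{N_0} \ge \frac{2^{2R}-1}{2R} $$

For R = 1/2 this gives Eb/N0 ≥ 1, i.e. 0 dB, which is the Shannon limit cited in the Berrou et al. simulation results shown later.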

(Photos: Shannon playing with a mechanical mouse; Shannon in his office at Bell Labs)


Why Turbo codes perform so well


Linear code: a code for which the modulo-2 sum of two valid code words (XOR-ing each bit position) is also a valid code word
A good linear code has mostly high-weight code words (except for the all-zeros code word); high-weight code words are distinct, so they are easier for the decoder to distinguish
A turbo encoder uses an Interleaver to reduce the number of low-weight code words: one of the two encoders will occasionally produce a low-weight output, but the probability that both encoders simultaneously produce a low-weight output is extremely small

Random codes achieve the best performance


Shannon showed that as n goes to infinity, random codes achieve channel capacity. However, random codes are not feasible because a code must contain enough structure that decoding can be realized with actual hardware.

With turbo codes:


The non-uniform Interleaver adds apparent randomness to the code, yet turbo codes retain enough structure that decoding is feasible.

Review: Convolutional codes


Input data is shifted into and along a shift register, k bits at a time
k binary inputs, n binary outputs
K-1 delay elements (linear shift register)
Coefficients are either 1 or 0; the only operation is modulo-2 addition (XOR)
Code rate r = k / n
K = constraint length, the number of input bits that each output bit depends on
Non-systematic codes: the encoder's input bits do not appear directly at its output; addressed by using systematic RSC codes

Figure: Rate r = 1/2 convolutional encoder with constraint length K = 3

From: M.C. Valenti, Turbo Codes and Iterative Processing, in Proc. IEEE New Zealand Wireless Communications Symposium, (Auckland New Zealand), Nov. 1998
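To make the encoder operation concrete, here is a minimal sketch of a rate-1/2, K = 3 feedforward convolutional encoder. The generator taps (7, 5) in octal are the common textbook choice and are assumed here; the exact taps of the figure's encoder are not given in the text.

```python
# Hedged sketch (not from the slides): rate-1/2, K = 3 feedforward convolutional encoder.
# Generators (7, 5) octal, i.e. taps 111 and 101, are assumed.

def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    """Encode a list of bits; returns the two output streams interleaved."""
    state = 0                       # current bit plus the K-1 = 2 delay elements
    out = []
    for b in bits + [0] * (K - 1):  # append K-1 zero tail bits to flush the register
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(bin(state & g1).count("1") % 2)  # XOR (mod-2 sum) of the g1 taps
        out.append(bin(state & g2).count("1") % 2)  # XOR (mod-2 sum) of the g2 taps
    return out

print(conv_encode([1, 0, 1, 1]))    # 12 coded bits for 4 data bits plus 2 tail bits (r = 1/2)
```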


RSC - Recursive Systematic Convolutional Encoding


An RSC encoder is constructed from a standard convolutional encoder by feeding back one of the outputs. An RSC code is systematic: the input bits appear directly in the output.

An RSC encoder is an Infinite Impulse Response (IIR) filter.


An arbitrary input will cause a good (high-weight) output with high probability; some inputs will still cause bad (low-weight) outputs.

From: M.C. Valenti, Turbo Codes and Iterative Processing, in Proc. IEEE New Zealand Wireless Communications Symposium, (Auckland New Zealand), Nov. 1998.
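The feedback structure can be illustrated with a small sketch. This assumes the classic (1, 5/7)-octal RSC example (feedback 1 + D + D^2, feedforward 1 + D^2), not necessarily the encoder in the cited figure.

```python
# Hedged sketch (not from the slides): rate-1/2 recursive systematic convolutional (RSC)
# encoder with feedback polynomial 1+D+D^2 and feedforward polynomial 1+D^2 (assumed).

def rsc_encode(bits):
    s1 = s2 = 0                      # two delay elements (K = 3)
    systematic, parity = [], []
    for u in bits:
        fb = u ^ s1 ^ s2             # recursive feedback: input XOR state taps (1 + D + D^2)
        p = fb ^ s2                  # feedforward parity taps (1 + D^2)
        systematic.append(u)         # systematic: the input bit appears directly at the output
        parity.append(p)
        s1, s2 = fb, s1              # shift the register
    return systematic, parity

print(rsc_encode([1, 0, 1, 1, 0]))
```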


Turbo Encoder
Two constituent RSC encoders operate in parallel on interleaved versions of the same information sequence u to be transmitted. A non-uniform Interleaver scrambles the ordering of bits at the input of the second encoder.
Uses a pseudo-random interleaving pattern.

The code rate can be increased via a puncturing technique, which selects the coded bits according to a particular pattern.

From: A study of turbo codes for UMTS third generation cellular standard by Teodor Iliev, University of Rousse, ACM International Conference Proceedings, 2007, Volume 285.


Parallel Concatenated Encoding


It consists of two conventional feedback shift-register-based convolutional encoders whose inputs are separated by an Interleaver
The key to this technique lies in the recursive nature of the encoders and the impact of the Interleaver on the data bits
The two constituent encoders code the same information sequence u, but in a different order
For each input binary information symbol ui, we keep the systematic output xsi = ui of the first RSC encoder, and the parity outputs x1pi and x2pi of both RSC encoders
All these symbols are then multiplexed to form the turbo-coded sequence {..., ui, x1pi, x2pi, ui+1, x1pi+1, x2pi+1, ui+2, x1pi+2, x2pi+2, ...}; the code rate is thus R = 1/3
The code rate can be increased via a puncturing technique, for example to {..., ui, x1pi, ui+1, x1pi+1, ui+2, x1pi+2, ...} (see the sketch below)
Tail bits (a sequence of 3 zeros) are used to return the encoders to the all-zeros state
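The multiplexing and puncturing described above can be sketched as follows. The RSC parity function and the interleaving pattern are toy placeholders, not the actual constituent code or interleaver of any standard.

```python
# Hedged sketch: parallel concatenated (turbo) encoding with multiplexing and puncturing.
# rsc_parity() is the toy (1, 5/7) RSC from the previous sketch; the interleaver is hypothetical.

def rsc_parity(bits):
    s1 = s2 = 0
    out = []
    for u in bits:
        fb = u ^ s1 ^ s2
        out.append(fb ^ s2)
        s1, s2 = fb, s1
    return out

def turbo_encode(bits, interleave):
    p1 = rsc_parity(bits)                              # parity of the first RSC encoder
    p2 = rsc_parity([bits[i] for i in interleave])     # second RSC sees the interleaved bits
    rate13 = [b for trio in zip(bits, p1, p2) for b in trio]   # ..., u_i, x1p_i, x2p_i, ...
    rate12 = [b for pair in zip(bits, p1) for b in pair]       # punctured as on the slide
    return rate13, rate12

bits = [1, 0, 1, 1, 0, 0, 1, 0]
toy_interleave = [3, 7, 1, 5, 0, 6, 2, 4]              # hypothetical scrambling pattern
full, punctured = turbo_encode(bits, toy_interleave)
print(len(full), len(punctured))                       # 24 vs 16 coded bits: R = 1/3 vs 1/2
```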

UMTS Turbo Encoder

From: M.C. Valenti and J. Sun, Turbo Codes, Chapter 12 of Handbook of RF and Wireless Technologies, (editor: F. Dowla), Newnes, 2004, Pages 375-78.


Output Stream Format


Tail input: the three feedback bits generated immediately after encoding the k-bit data word (these return each encoder to the all-zeros state)
Output stream format:
X1 Z1 Z'1, X2 Z2 Z'2, ..., XL ZL Z'L; XL+1 ZL+1, XL+2 ZL+2, XL+3 ZL+3; X'L+1 Z'L+1, X'L+2 Z'L+2, X'L+3 Z'L+3 (primed symbols belong to the lower encoder)

L data bits and their associated 2L parity bits (total of 3L bits)

3 tail bits for upper encoder & their 3 parity bits

3 tail bits for lower encoder & their 3 parity bits

Total number of coded bits = 3L + 12


Code rate: r = L / (3L + 12) ≈ 1/3
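A quick numerical check of the rate expression (the values of L are chosen for illustration; 5114 is the largest UMTS frame size mentioned later in these slides):

$$ r = \frac{L}{3L + 12}: \quad L = 40 \Rightarrow r = \frac{40}{132} \approx 0.303, \qquad L = 5114 \Rightarrow r = \frac{5114}{15354} \approx 0.333 $$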


UMTS Interleaver:
A device that rearranges the order of the data bits in a prescribed, but irregular, manner
Although the same set of data bits is present at the output of the Interleaver, the order of these bits has been changed, much like a shuffled deck of cards
Without the Interleaver, the two constituent encoders would receive the data in the exact same order and thus (assuming identical constituent encoders) their outputs would be the same; with it, the output of the second encoder will almost surely be different from the output of the first encoder
Quite different from the rectangular interleavers that are commonly used in wireless systems to help break up deep fades:
a rectangular channel Interleaver tries to space the data out according to a regular pattern, whereas a turbo code Interleaver tries to randomize the ordering of the data in an irregular manner


UMTS Interleaver: Inserting Data into Matrix

Data is fed row-wise into an R-by-C matrix, with R = 5, 10, or 20 and 8 ≤ C ≤ 256. If L < RC, the matrix is padded with dummy characters.
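A simplified sketch of the block-interleaver idea on this and the next two slides is given below. The intra-row permutation used here is a toy placeholder; the actual UMTS intra-row rules (and the R = 20 inter-row rule) from the 3GPP specification are not reproduced.

```python
# Simplified illustration (not the actual 3GPP TS 25.212 algorithm): fill an R-by-C matrix
# row-wise, apply toy intra-row and inter-row permutations, then read column-wise.

def block_interleave(indices, R, C):
    padded = indices + [None] * (R * C - len(indices))        # pad with dummy entries
    matrix = [padded[r * C:(r + 1) * C] for r in range(R)]    # fill row-wise
    for r in range(R):
        matrix[r] = matrix[r][::-1]                           # toy intra-row permutation
    matrix = matrix[::-1]                                     # inter-row: reflect about middle row
    out = [matrix[r][c] for c in range(C) for r in range(R)]  # read column-wise
    return [i for i in out if i is not None]                  # prune the dummy entries

print(block_interleave(list(range(40)), R=5, C=8))
```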

Slide from : Iterative Solutions Coded Modulation Library (ISCML) - Theory of Operation ppt, Oct. 3, 2005, Matthew Valenti at http://www.iterativesolutions.com/idownload.htm


UMTS Interleaver: Reading Data From Matrix


Intra-Row Permutations
Data is permuted within each row.

Data is read from the matrix column-wise. Example mapping from the slide: output bit 1 = input bit 40, output 2 = input 26, output 3 = input 18, ..., output 38 = input 24, output 39 = input 16, output 40 = input 8.

Slide from : Iterative Solutions Coded Modulation Library (ISCML) Theory of Operation ppt, Oct. 3, 2005, Matthew Valenti at http://www.iterativesolutions.com/idownload.htm


UMTS Interleaver: Inter-Row Permutations

Rows are permuted. If R = 5 or 10, the matrix is reflected about the middle row. For R=20 the rule is more complicated and depends on L.

Slide from : Iterative Solutions Coded Modulation Library (ISCML) Theory of Operation ppt, Oct. 3, 2005, Matthew Valenti at http://www.iterativesolutions.com/idownload.htm


Turbo Decoder

Divide-and-conquer approach
From: M.C. Valenti and J. Sun, Turbo Codes, Chapter 12 of Handbook of RF and Wireless Technologies, (editor: F. Dowla), Newnes, 2004, Pages 375-78.


Iterative Decoding
Ui: modulating code bit, 0 or 1 (hard value)
Yi: corresponding received signal, any real value (soft value)
The Log-Likelihood Ratio (LLR) is used as the input to the decoder:

Λ_R(Ui) = ln[ P(Yi | Ui = 1) / P(Yi | Ui = 0) ]

For BPSK over an AWGN channel with noise variance σ²,

LLR = Λ_R(Ui) = 2·Yi / σ²

For each data bit Xi, the decoder computes the following LLR:

Λ(Xi) = ln[ P(Xi = 1 | Y1 ... Yn) / P(Xi = 0 | Y1 ... Yn) ]

Soft-input soft-output (SISO) processors compute two LLR estimates, one associated with each constituent code

Λ(Xi) = Λ1(Xi) + Λ2(Xi)
Final LLR estimate = LLR from the first (upper) decoder + LLR from the second (lower) decoder
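A small numerical illustration of the channel LLR formula above, assuming unit-energy BPSK (0 mapped to -1, 1 mapped to +1) over an AWGN channel; the bit values and noise variance are chosen only for illustration.

```python
# Hedged illustration of the channel LLR for BPSK over AWGN with noise variance sigma^2.

import random

def channel_llr(y, sigma2):
    """Lambda_R(U) = ln P(y|U=1)/P(y|U=0) = 2*y / sigma^2 for unit-energy BPSK."""
    return 2.0 * y / sigma2

random.seed(1)
sigma2 = 0.5
bits = [1, 0, 1, 1, 0]
received = [(2 * b - 1) + random.gauss(0.0, sigma2 ** 0.5) for b in bits]  # soft values
llrs = [channel_llr(y, sigma2) for y in received]
hard = [1 if l > 0 else 0 for l in llrs]        # the sign of the LLR gives the hard decision
print([round(l, 2) for l in llrs], hard)
```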

SISO Processor
Iterative decoding
The first SISO processor passes its LLR output to the input of the second SISO processor, and vice versa
Performance improves by sharing LLR estimates between the two SISO processors
This feedback operation is reminiscent of the feedback between the exhaust turbine and intake compressor in a turbocharged engine, hence the name "turbo"
Extrinsic information w(Xi) is used to prevent positive feedback: the systematic input of each SISO is subtracted from its output prior to sharing information with the other decoder

The SISO processor uses a trellis diagram to represent all possible sequences of encoder states; it sweeps through the labeled trellis in a prescribed manner to obtain an LLR estimate of each data bit using either:
  the Soft-Output Viterbi Algorithm (SOVA), or
  the maximum a posteriori (MAP) algorithm
In practice, the SISO processor uses a logarithmic version of the MAP algorithm called log-MAP


Log-MAP Algorithm:
Log-MAP is similar to the Viterbi algorithm.
Multiplications become additions; additions become the max* operator (Jacobi logarithm):

max*(x, y) = ln(e^x + e^y) = max(x, y) + ln(1 + e^(-|y - x|)) = max(x, y) + fc(|y - x|)

Implementation:
Sweep through the trellis in the forward direction using a modified Viterbi algorithm
Sweep through the trellis in the backward direction using a modified Viterbi algorithm
Determine the LLR for each trellis section
Determine the output extrinsic information for each trellis section

Flavors of the log-MAP algorithm include max-log-MAP, constant-log-MAP, and linear-log-MAP
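A brief sketch of the max* operator and two of its approximate flavors follows. The constant-log-MAP threshold and offset (1.5 and 0.5) are commonly cited values assumed here, not taken from the slides.

```python
# Hedged sketch of the max* (Jacobi logarithm) operator and two approximations.

import math

def max_star_exact(x, y):
    """log-MAP: exact Jacobi logarithm ln(e^x + e^y)."""
    return max(x, y) + math.log1p(math.exp(-abs(y - x)))

def max_star_maxlog(x, y):
    """max-log-MAP: drop the correction term entirely."""
    return max(x, y)

def max_star_constlog(x, y, T=1.5, C=0.5):
    """constant-log-MAP: replace the correction term by a constant below a threshold (assumed values)."""
    return max(x, y) + (C if abs(y - x) < T else 0.0)

for a, b in [(0.2, 0.3), (1.0, 4.0)]:
    print(max_star_exact(a, b), max_star_maxlog(a, b), max_star_constlog(a, b))
```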



Characteristics of Turbo Codes


Turbo codes have extraordinary performance at low SNR.
Very close to the Shannon limit, due to the low multiplicity of low-weight code words.

However, turbo codes have a BER floor.


This is due to their low minimum distance.

Performance improves for larger block sizes.


Larger block sizes mean more latency (delay). However, larger block sizes are not more complex to decode. The BER floor is lower for larger frame/Interleaver sizes

The complexity of a constraint length K_TC turbo code is about the same as that of a K = K_CC convolutional code, where:
K_CC ≈ 2 + K_TC + log2(number of decoder iterations)
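As an illustrative use of this rule of thumb (the numbers below are chosen for illustration, not taken from the slides):

$$ K_{CC} \approx 2 + K_{TC} + \log_2(\text{iterations}) = 2 + 4 + \log_2 8 = 9 $$

so a K_TC = 4 turbo code (the UMTS constituent constraint length) decoded with 8 iterations costs roughly as much as decoding a K = 9 convolutional code.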


Performance as a function of number of iterations


Simulation setup: R = 1/2; K = 5; N = 256 x 256 Interleaver matrix; G1 = {1 1 1 1 1}; G2 = {1 0 0 0 1}

Expected (Shannon's limit, binary modulation, R = 1/2):
  at Eb/N0 = 0 dB, BER (Pe) → 0 (i.e. below 10^-5)

Obtained: for SNR > 0 dB, BER decreases with the number of decoding iterations p
  for p = 18, at Eb/N0 = 0.7 dB, BER (Pe) ≈ 10^-5

Performance is within 0.7 dB of Shannon's limit!



From: C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding: turbo codes," Proc. IEEE International Conference on Communications, Geneva, Switzerland, 1993, pp. 1064-1070


Performance as a function of the Interleaver size


R = 1/2; K = 5; 18 decoder iterations
As frame (Interleaver) size increases, performance improves; however, decoder latency also increases
Low BER therefore comes at the price of high latency: a tradeoff between performance and latency
This tradeoff is exploited in wireless communication systems.

From: M.C. Valenti, Turbo Codes and Iterative Processing, in Proc. IEEE New Zealand Wireless Communications Symposium, (Auckland New Zealand), Nov. 1998.


Performance comparison of two rate-1/2 codes


Convolutional code
K = 15 ; free distance* (dfree) = 18

Turbo code
K = 5 ; L = 65,536 ; free distance = 6

Error floor effect at low BER for turbo codes
Increasing dfree improves bit error performance; turbo codes have a comparatively lower dfree, with only a small number of code words at the free distance
Hence, it may be better to use convolutional codes at high SNR values
From: M.C. Valenti, Turbo Codes and Iterative Processing, in Proc. IEEE New Zealand Wireless Communications Symposium, (Auckland New Zealand), Nov. 1998.

*dfree: minimum Hamming weight over all non-zero code words


Practical Implementation Issues


Error floor
The BER curve begins to flatten at higher SNR due to the presence of a few low-weight code words that become significant only at high SNR; this hinders the ability of a turbo code to achieve extremely small BERs
Solution: serially concatenated convolutional codes (SCCC) give excellent performance at high SNR, with the error floor pushed down to BER ≈ 10^-10, but perform worse at low SNR

Latency
Increased delay due to large Interleaver sizes in the encoder/decoder
Encoder/decoder delay = (information bit period) x (frame/Interleaver size in bits)
Example: for 8 kbps speech transmission with N = 65,536 bits, delay = 65536 / 8 = 8192 ms ≈ 8 s, an unacceptable delay for voice services


Issues
Complexity
If the max-log-MAP algorithm is used, each half-iteration requires the Viterbi algorithm to be executed twice (one forward and one backward sweep)
Example: if 8 full iterations are executed, the Viterbi algorithm is invoked 32 times, in contrast to the decoding of a conventional convolutional code, which requires only a single Viterbi pass
Solution: turbo decoding normally runs for a fixed number of iterations, but the decoder often converges early; stopping once it has converged can greatly increase throughput
In a simulation it is reasonable to halt automatically once the entire frame has been completely corrected (i.e. the error count for the frame goes to zero), as sketched below
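A structural sketch of that early-stopping idea for a simulation, where the transmitted frame is known; run_half_iterations is a hypothetical stand-in for the two SISO passes of one full iteration.

```python
# Hedged sketch (not the slides' decoder): halt iterations once the frame is fully corrected.

def decode_with_early_stop(run_half_iterations, transmitted_bits, max_iterations=8):
    hard_bits = None
    for iteration in range(1, max_iterations + 1):
        hard_bits = run_half_iterations()        # one full iteration = two SISO passes
        if hard_bits == transmitted_bits:        # frame completely corrected: converged
            return hard_bits, iteration          # stop early, saving the remaining iterations
    return hard_bits, max_iterations             # fall back to the fixed iteration count

# Toy usage: a fake decoder whose hard decisions become correct after three iterations.
truth = [1, 0, 1, 1]
progress = iter([[0, 0, 1, 1], [1, 0, 0, 1], [1, 0, 1, 1], [1, 0, 1, 1]])
print(decode_with_early_stop(lambda: next(progress), truth))   # -> ([1, 0, 1, 1], 3)
```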

Issues
Memory Limitations
Storing the entire beta (backward) trellis can require a significant amount of memory: 8L state metrics
For L = 5114 there will have to be 40,912 stored values; the values only need to be single-precision floats

An alternative is to use a sliding-window approach:
The metrics for only a portion of the code trellis are saved in memory; rather than running the MAP algorithm over the entire trellis, it is run only over each window

Channel estimation
For an AWGN channel, the SNR must be known; for a fading channel with random amplitude fluctuations, the per-bit gain of the channel must also be known
Estimation is complicated by the fact that turbo codes typically operate at very low SNR

Applications
Development of other new codes
Serially Concatenated Block Codes
LDPC: Low-Density Parity-Check codes (Gallager codes)
RA: Repeat-Accumulate codes

Deep-space exploration
At the cost of lower bandwidth efficiency; very low BER is required and delay is not important
Used in the Mars Pathfinder mission, 1997
FEC coding in UMTS: the third-generation (3G) mobile radio standard, applied both to speech services, where latency must be minimized (BER = 4 x 10^-2), and to data services that must provide very low BER (10^-5)

Turbo iterative decoding principle


allows separate decoding and synchronization in receivers without increasing complexity.


Conclusions
A practical means of approaching the Shannon capacity bound of a communication channel
Parallel concatenated encoding; iterative decoding
Interleaver design is at the heart of turbo coding
Significant latency is introduced: a larger Interleaver size means a longer decoding delay but gives a lower bit error rate

Best suited for target BERs in the range of 10^-4 to 10^-6
Applications ranging from mobile phones to satellite systems
Not the ultimate error-correction codes, but groundbreaking, giving rise to new codes


References
[1] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding: turbo codes," Proc. IEEE International Conference on Communications, Geneva, Switzerland, 1993, pp. 1064-1070.
[2] M. C. Valenti and J. Sun, "Turbo Codes," Chapter 12 of Handbook of RF and Wireless Technologies (ed. F. Dowla), Newnes, 2004, pp. 375-78.
[3] A. Burr, "Turbo-codes: the ultimate error control codes?," Electronics & Communication Engineering Journal, vol. 13, no. 4, pp. 155-165, Aug. 2001.
[4] M. C. Valenti, "Turbo Codes and Iterative Processing," Proc. IEEE New Zealand Wireless Communications Symposium, Auckland, New Zealand, Nov. 1998. http://www.csee.wvu.edu/~mvalenti/turbo.html
[5] Iterative Solutions Coded Modulation Library (ISCML), an open-source toolbox for simulation of modern communication systems; "Theory of Operation" presentation, M. Valenti, Oct. 3, 2005. http://www.iterativesolutions.com/idownload.htm
[6] B. Sklar, "A primer on turbo code concepts," IEEE Communications Magazine, vol. 35, no. 12, pp. 94-102, Dec. 1997.
[7] J. Sayir and G. Lechner, "Theory and Design of Turbo and Related Codes," lecture at TU Vienna, Vienna Research Center for Telecommunications (FTW).
[8] S. A. Barbulescu and S. S. Pietrobon, "Turbo codes: A tutorial on a new class of powerful error correcting coding schemes, Part 2: Decoder design and performance," J. Elec. and Electron. Eng., Australia, vol. 19, pp. 143-152, Sep. 1999.
[9] L.-N. Lee, A. R. Hammons, Jr., F.-W. Sun, and M. Eroz, "Application and standardization of turbo codes in third-generation high-speed wireless data services," IEEE Transactions on Vehicular Technology, vol. 49, no. 6, pp. 2198-2207, Nov. 2000.
[10] R. Achiba, M. Mortazavi, and W. Fizell, "Turbo code performance and design trade-offs," Proc. MILCOM 2000, vol. 1, pp. 174-180, Oct. 2000.
[11] Class presentation slides on LDPC Codes and Trellis Coded Modulation by Dr. Sam Shanmughan, EECS 865 (Wireless Communication), Fall 2007.
