

Content
1. INTRODUCTION
2. CONVOLUTIONAL CODING
   2.1 Definition
   2.2 Representations of Convolutional Code
       2.2.1 Tree diagram
       2.2.2 Trellis diagram
       2.2.3 State diagram
   2.3 The Algorithm of Convolutional Code
3. THE VITERBI DECODER
   3.1 The Viterbi algorithm
   3.2 Computing the path metrics
   3.3 Finding the Most Likely Path
4. CHANNEL CODING WITH 16-QAM
REFERENCES

1. INTRODUCTION
In telecommunication and coding theory, digital data is increasingly transmitted over noisy channels. Channel coding is a technique used for controlling errors in data transmission. Many techniques and algorithms are available for this purpose, but the convolutional encoder paired with the Viterbi decoder is among the most efficient, giving improved coding gain and enhanced performance. Convolutional encoding with Viterbi decoding is a Forward Error Correction technique that is particularly suited to a channel in which the transmitted signal is corrupted mainly by Additive White Gaussian Noise (AWGN). Forward Error Correction can be divided into two categories: block codes and convolutional codes. Block codes operate on fixed-size blocks of bits, while convolutional codes work on blocks of arbitrary length. The convolutional encoder is frequently used in digital communication systems.

2. CONVOLUTIONAL CODING
2.1 Definition
A convolutional code is defined by three integers: n, k, and K, where the ratio k/n is termed the code rate, n is the number of output bits, k is the number of input bits, and K is called the constraint length. The constraint length signifies the number of k-bit shifts over which a single information bit can influence the encoder output. The longer the constraint length, the larger the number of parity bits that are influenced by any given message bit. Because the parity bits are the only bits sent over the channel, a larger constraint length generally implies greater resilience to bit errors. The trade-off, though, is that it takes considerably longer to decode codes of long constraint length, so one cannot increase the constraint length arbitrarily and expect fast decoding.
If a convolutional code produces r parity bits per window and slides the window forward by one bit at a time, its rate (when calculated over long messages) is 1/r. The greater the value of r, the higher the resilience to bit errors, but the trade-off is that a proportionally higher amount of communication bandwidth is devoted to coding overhead. In practice, we would like to pick r and the constraint length to be as small as possible while providing a low enough resulting probability of bit error.
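As a concrete illustration of these parameters, the shift-register encoding loop can be sketched in a few lines of Python. This is a minimal sketch, not the report's implementation: it assumes the classic rate-1/2, K = 3 code with generator taps 111 and 101 (binary), i.e. n = 2 output bits per k = 1 input bit.

```python
# Minimal convolutional encoder sketch (assumed code: rate 1/2, K = 3,
# generator taps 111 and 101 in binary -- n = 2 output bits per input bit).

def conv_encode(bits, generators=(0b111, 0b101), K=3):
    """Emit len(generators) parity bits per input bit."""
    state = 0                                # holds the K-1 previous input bits
    out = []
    for b in bits:
        window = (b << (K - 1)) | state      # [current bit, K-1 previous bits]
        for g in generators:
            out.append(bin(window & g).count("1") % 2)  # parity = popcount mod 2
        state = window >> 1                  # shift register: drop the oldest bit
    return out

print(conv_encode([1, 0, 1]))                # each input bit yields 2 output bits
```

With K = 3, each input bit influences three consecutive output pairs, which is exactly the "number of k-bit shifts" described above.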

Fig. 0. Convolutional code with two parity bits per message bit (r = 2) and constraint length K = 3.
This figure shows the structure of a convolutional encoder for the given generator polynomials and code rate, where U1 and U2 are the first and second code symbols, respectively.
The output branch word and the input bits fed to the register depend on the code rate. Here a two-bit output branch word is produced and one input bit is fed to the register at each clock cycle.

Fig. 1. Architecture of the Convolutional Encoder

Let G1(x) be the generator polynomial for the upper connection and G2(x) the generator polynomial for the lower connection, as shown in equation (1):

G1(x) = 1 + x + x^4 + x^5 + x^6
                                      (1)
G2(x) = 1 + x + x^3 + x^4 + x^6

2.2 Representations of Convolutional Code


There are three alternative methods that are often used to describe a convolutional
code:
1. Tree diagram
2. Trellis diagram
3. State diagram

2.2.1. Tree diagram


The tree diagram in Figure 2 repeats itself after the third stage.

Fig. 2. The Tree diagram
This is consistent with the fact that the constraint length is K = 3. The output sequence at each stage is determined by the input bit and the two previous input bits. In other words, we may say that the 3-bit output sequence for each input bit is determined by the input bit and the four possible states of the shift register, denoted as a = 00, b = 01, c = 10, and d = 11.

2.2.2 Trellis diagram


The convolutional encoder has a shift register made up of two memory cells. With the message bit stored in each memory cell being 0 or 1, it follows that this encoder can assume any one of 2^2 = 4 possible states.

Fig. 3. The Trellis diagram

The trellis depicts the evolution of the convolutional encoder's state across time. The message sequence (10011) produces the encoded output sequence (11, 10, 11, 11, 01), which agrees with our previous result.
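The trellis example above can be checked mechanically. The snippet below is a sketch that assumes the standard K = 3, rate-1/2 code with generator taps 111 and 101 (binary); these taps reproduce the encoded sequence stated in the text.

```python
# Re-encoding the trellis example: message (10011) -> (11, 10, 11, 11, 01).
# Assumed taps: 111 and 101 (binary), K = 3, encoder starting in state 00.

def encode(bits, gens=(0b111, 0b101)):
    state, out = 0, []
    for b in bits:
        window = (b << 2) | state            # [u_t, u_{t-1}, u_{t-2}]
        out.append(tuple(bin(window & g).count("1") % 2 for g in gens))
        state = window >> 1
    return out

print(encode([1, 0, 0, 1, 1]))   # [(1, 1), (1, 0), (1, 1), (1, 1), (0, 1)]
```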

2.2.3 State diagram


The nodes of the figure represent the four possible states of the encoder a, b, c, and
d, with each node having two incoming branches and two outgoing branches,
following the graphical rule described previously.

Fig. 4. The State diagram


The binary label on each branch represents the encoder's output as it moves from one state to another. Suppose, for example, the current state of the encoder is (01), which is represented by node c. The application of input symbol 1 to the encoder results in the state (10) and the encoded output (00).
The trellis and the state diagrams each have 2^(K-1) possible states. There are two branches entering each state and two branches leaving each state.
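The state diagram can be tabulated directly. The sketch below (same assumed taps, 111 and 101) prints each of the 2^(K-1) = 4 states with its two outgoing branches, labeled input/output as in Fig. 4.

```python
# Tabulating the state diagram: 2**(K-1) states, two outgoing branches each.
# Assumed encoder: K = 3, generator taps 111 and 101 (binary).

K, GENS = 3, (0b111, 0b101)

def step(state, bit):
    """Return (next_state, output bits) for one input bit from a given state."""
    window = (bit << (K - 1)) | state
    out = tuple(bin(window & g).count("1") % 2 for g in GENS)
    return window >> 1, out

for s in range(2 ** (K - 1)):
    for bit in (0, 1):
        ns, (v1, v2) = step(s, bit)
        print(f"{s:02b} --{bit}/{v1}{v2}--> {ns:02b}")
```

For example, state 01 with input 1 moves to state 10 with output 00, matching the example in the text.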

2.3 The Algorithm of Convolutional Code

Fig. 5. An encoder for a binary rate R=1/2 convolutional code.



The information digits u = u_0 u_1 ... are not, as in the previous section, separated into blocks. Instead, they form an infinite sequence that is shifted into a register of length, or memory, m = 2 in our example. The encoder has two linear output functions. The two output sequences v^(1) = v_0^(1) v_1^(1) ... and v^(2) = v_0^(2) v_1^(2) ... are interleaved by a serializer to form a single output sequence v_0^(1) v_0^(2) v_1^(1) v_1^(2) ... that is transmitted over the channel. For each information digit that enters the encoder, two channel digits are emitted. Thus, the code rate of this encoder is R = 1/2 bits/channel use.
Assuming that the content of the register is zero at time t = 0, we notice that the two output sequences can be viewed as a convolution of the input sequence with the two sequences 11100... and 10100..., respectively. These latter sequences specify the linear output functions; that is, they specify the encoder. The fact that the output sequences can be described by convolutions is why such codes are called convolutional codes. In a general rate R = b/c (where b <= c) binary convolutional encoder (without feedback), the causal, that is, zero for time t < 0, information sequence

u = u_0 u_1 ... = u_0^(1) u_0^(2) ... u_0^(b) u_1^(1) u_1^(2) ... u_1^(b) ...   (1)
is encoded as the causal code sequence

v = v_0 v_1 ... = v_0^(1) v_0^(2) ... v_0^(c) v_1^(1) v_1^(2) ... v_1^(c) ...   (2)

where

v_t = f(u_t, u_{t-1}, ..., u_{t-m})   (3)

The parameter m is called the encoder memory. The function f is required to be a linear function from F_2^{b(m+1)} to F_2^c. It is often convenient to write such a function in matrix form:

v_t = u_t G_0 + u_{t-1} G_1 + ... + u_{t-m} G_m   (4)

where G_i, 0 <= i <= m, is a binary b x c matrix.
From (4) it then follows that

(v_0 v_1 ...) = (u_0 u_1 ...) G   (5)

where

G = ( G_0 G_1 ... G_m
          G_0 G_1 ... G_m
              ...            )

and where here and hereafter the parts of matrices left blank are assumed to be filled in with zeros. We call G the generator matrix and G_i, 0 <= i <= m, the generator submatrices.
Figure 6 illustrates a general convolutional encoder (without feedback).

Fig. 6. A general convolutional encoder (without feedback).

Example: the rate R = 1/2 encoder of Fig. 5, whose impulse responses are 111 and 101, has the generator submatrices G_0 = (1 1), G_1 = (1 0), G_2 = (1 1).
The encoder of a binary convolutional code with rate 1/n, measured in bits per symbol, can be viewed as a finite-state machine. It consists of an M-stage shift register, n modulo-2 adders, and a multiplexer that serializes the outputs.
The code rate is given by

r = k / (n(k + M))  bits/symbol

where k is the length of the message sequence. When k >> M, the code rate becomes

r ≈ 1/n
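The effect of termination on the rate is easy to tabulate; a small sketch, assuming n = 2 adders and memory M = 2 to match the earlier example:

```python
# Rate of a terminated rate-1/n convolutional code: r = k / (n * (k + M)).
# As the message length k grows, r approaches the asymptotic rate 1/n.

def code_rate(k, n=2, M=2):
    return k / (n * (k + M))

for k in (5, 100, 10_000):
    print(k, code_rate(k))           # tends toward 1/n = 0.5
```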

3. THE VITERBI DECODER


3.1 The Viterbi algorithm
The Viterbi Decoder uses two metrics: the branch metric (BM) and the
path metric (PM). The branch metric is a measure of the distance between
what was transmitted and what was received, and is defined for each arc in
the trellis. In hard decision decoding, where we are given a sequence of
digitized parity bits, the branch metric is the Hamming distance between the
expected parity bits and the received ones.

Fig. 7. The branch metric for hard decision decoding.


The receiver gets the parity bits 00. For each state transition, the number on the arc shows the branch metric for that transition. Two of the branch metrics are 0, corresponding to the only states and transitions where the corresponding Hamming distance is 0. The other, non-zero branch metrics correspond to cases where there are bit errors.
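For hard-decision decoding the branch metric is just a Hamming distance, which is a one-liner; this sketch reproduces the two cases called out above, assuming two parity bits per branch as in Fig. 7.

```python
# Hard-decision branch metric: Hamming distance between the expected parity
# bits for a transition and the bits actually received.

def branch_metric(expected, received):
    return sum(e != r for e, r in zip(expected, received))

print(branch_metric((1, 1), (0, 0)))   # 2 -> two bit errors on this branch
print(branch_metric((0, 0), (0, 0)))   # 0 -> an exact match
```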
The path metric is a value associated with a state in the trellis (i.e., a value associated with each node). For hard decision decoding, it corresponds to the Hamming distance over the most likely path from the initial state to the current state in the trellis. By most likely, we mean the path with the smallest Hamming distance between the initial state and the current state, measured over all possible paths between the two states. The path with the smallest Hamming distance minimizes the total number of bit errors, and is most likely when the BER is low.
The key insight in the Viterbi algorithm is that the receiver can compute the path metric for a (state, time) pair incrementally, using the path metrics of previously computed states and the branch metrics.
The Viterbi decoder uses the Viterbi algorithm; Figure 8 shows all the intermediate steps of the implemented algorithm. It consists of two decision blocks and some internal steps to decode the received data. Branch metrics (BM) are calculated from the received symbols and the expected pattern; the maximum number of possible branch metrics is four. The calculated BMs are stored in registers, which interface with feedback from the state ends; all the intermediate steps are executed for all the possible states.


Fig. 8. Flowchart of Viterbi Decoder

3.2 Computing the path metrics


Suppose the receiver has computed the path metric PM[s, i] for each state s (of which there are 2^(K-1), where K is the constraint length) at time step i. The value of PM[s, i] is the total number of bit errors detected when comparing the received parity bits to the most likely transmitted message, considering all messages that could have been sent by the transmitter until time step i (starting from state 00, which we will take by convention to be the starting state always).
Among all the possible states at time step i, the most likely state is the one with the smallest path metric. If there is more than one such state, they are all equally good possibilities.
We determine the path metric at time step i + 1, PM[s, i + 1], for each state s. If the transmitter is at state s at time step i + 1, then it must have been in only one of two possible states at time step i. These two predecessor states, labeled α and β, are always the same for a given state. In fact, they depend only on the constraint length of the code and not on the parity functions.


Any message sequence that leaves the transmitter in state s at time i + 1 must have left the transmitter in state α or state β at time i.
One of the following two properties must hold:
1. The transmitter was in state 10 at time i and the ith message bit was a 0. If that is the case, then the transmitter sent 11 as the parity bits and there were two bit errors, because we received the bits 00. Then the path metric of the new state, PM[01, i + 1], is equal to PM[10, i] + 2, because the new state is 01 and the corresponding path metric is larger by 2 because there are 2 errors.
2. The other (mutually exclusive) possibility is that the transmitter was in state 11 at time i and the ith message bit was a 0. If that is the case, then the transmitter sent 01 as the parity bits and there was one bit error, because we received 00. The path metric of the new state, PM[01, i + 1], is equal to PM[11, i] + 1.
Formalizing the above intuition, we can easily see that:

PM[s, i + 1] = min(PM[α, i] + BM[α -> s], PM[β, i] + BM[β -> s])

where α and β are the two predecessor states.
In the decoding algorithm, it is important to remember which arc corresponded to the minimum, because we need to traverse this path from the final state to the initial one, keeping track of the arcs we used, and then finally reverse the order of the bits to produce the most likely message.

3.3 Finding the Most Likely Path


We can now describe how the decoder finds the most likely path. Initially, state 00 has a cost of 0 and the other 2^(K-1) - 1 states have a cost of infinity.
The main loop of the algorithm consists of two main steps: calculating the branch metric for the next set of parity bits, and computing the path metric for the next column. The path metric computation may be thought of as an add-compare-select procedure:
1. Add the branch metric to the path metric for the old state.
2. Compare the sums for paths arriving at the new state (there are only two such paths to compare at each new state because there are only two incoming arcs from the previous column).
3. Select the path with the smallest value, breaking ties arbitrarily. This path corresponds to the one with the fewest errors.
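The whole procedure, branch metrics, the add-compare-select recurrence, and the final traceback, fits in a short routine. This is a sketch under the same assumption used earlier (K = 3, taps 111/101, start state 00); it is not the report's implementation.

```python
# Hard-decision Viterbi decoder sketch: add-compare-select plus traceback.
# Assumed code: K = 3, rate 1/2, generator taps 111 and 101 (binary).

K, GENS = 3, (0b111, 0b101)
NSTATES = 2 ** (K - 1)
INF = float("inf")

def step(state, bit):
    window = (bit << (K - 1)) | state
    out = tuple(bin(window & g).count("1") % 2 for g in GENS)
    return window >> 1, out

# For every (state, input bit): the successor state and expected parity bits.
TRANS = {(s, b): step(s, b) for s in range(NSTATES) for b in (0, 1)}

def viterbi_decode(parity_pairs):
    pm = [0.0] + [INF] * (NSTATES - 1)       # start in state 00 by convention
    history = []                             # per time step: back-pointer table
    for rcvd in parity_pairs:
        new_pm = [INF] * NSTATES
        back = [None] * NSTATES
        for s in range(NSTATES):
            if pm[s] == INF:
                continue
            for b in (0, 1):                 # add-compare-select
                ns, expected = TRANS[(s, b)]
                bm = sum(e != r for e, r in zip(expected, rcvd))
                if pm[s] + bm < new_pm[ns]:  # keep the smaller metric (ties: first)
                    new_pm[ns] = pm[s] + bm
                    back[ns] = (s, b)
        history.append(back)
        pm = new_pm
    state = pm.index(min(pm))                # most likely final state
    bits = []
    for back in reversed(history):           # traceback, then reverse the bits
        state, b = back[state]
        bits.append(b)
    return bits[::-1]

received = [(1, 1), (1, 0), (1, 1), (1, 1), (0, 1)]   # the trellis example
print(viterbi_decode(received))                        # [1, 0, 0, 1, 1]
```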


4. CHANNEL CODING WITH 16-QAM


We assume the additive white Gaussian noise (AWGN) channel with one-sided noise spectral density N_0. Over each quadrature channel, called the I or Q channel, 16-QAM has four levels: ±V_1 and ±V_2, where V_2 = 3V_1.
We consider two natural mappings between the rate 3/4 code encoder output and the 16-QAM constellation that can be easily implemented; the first mapping is denoted M_1 and the second M_2. The two mappings assign the four encoder output bits to the I- and Q-channel levels in different orders.

Fig. 9 . The 16-QAM coding


In order to generate coded signal sequences with a large squared Euclidean distance (SED) from binary coded sequences with a large Hamming distance (HD), we use the Gray code over each quadrature channel. One obvious reason for using a Gray code is as follows: since the HD between any two adjacent signal points is 1 for a Gray code, for an equally spaced N-level (with N equal to a power of 2) amplitude modulation (N-AM) signal with minimum SED δ^2, two binary sequences of log2 N bits with a HD of d_H will generate two signal points having a SED of at least d_H δ^2 through the Gray code. If we use a Gray code for each quadrature channel of a 2^(2l)-ary QAM to map a binary code with a free HD


The coding gain is measured in E_b/N_0, where E_b is the energy per bit. If we choose uncoded QPSK as the reference, which has a SED of 4E_b, the energy per symbol for the 16-QAM using a rate 3/4 code is

E = 3E_b = (4/16)(2V_1^2) + (8/16)(V_1^2 + V_2^2) + (4/16)(2V_2^2) = 10V_1^2

so that, normalized by E_b,

V_1^2 = E/10 = 3E_b/10 = 0.3E_b
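The symbol-energy arithmetic above can be checked directly (levels ±V_1 and ±V_2 = ±3V_1; 4 inner, 8 edge, and 4 corner constellation points):

```python
# Average 16-QAM symbol energy with per-rail levels ±V1, ±V2 = ±3*V1:
# 4 points of energy 2*V1^2, 8 of V1^2 + V2^2, 4 of 2*V2^2, averaged over 16.

V1 = 1.0
V2 = 3 * V1
E = (4 * 2 * V1**2 + 8 * (V1**2 + V2**2) + 4 * 2 * V2**2) / 16
print(E)                 # 10 * V1**2

Eb = E / 3               # rate-3/4 code on 16-QAM: 3 information bits/symbol
print(V1**2 / Eb)        # approx. 0.3 -> V1^2 = 0.3 * Eb, as in the text
```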
With the use of the Gray code, the overall coded QAM (CQAM) becomes nonlinear, although the binary code used is linear. That is, the distance structure of the CQAM depends on the transmitted signal sequence. If the transmitted sequence consists of inner points, then over one quadrature channel an incorrect signal point corresponding to a 2-bit sequence with a HD of 1 from a transmitted signal point can only have a SED equal to 4V_1^2. If a transmitted signal is one of the four corner points, then over one quadrature channel an incorrect signal point corresponding to a 2-bit sequence with a HD of 1 from the transmitted signal point can have a SED equal to either 4V_1^2 or 4V_2^2 = 36V_1^2. Denote by B_d the total number of information bit errors produced by all incorrect paths; B_d was found to be 27 for M_1 and 15 for M_2. For a large E_b/N_0, the decoded BER P_b is dominated by the minimum-distance error events and is approximately proportional to B_d Q(·), where Q(x) = (1/sqrt(2π)) ∫ from x to ∞ of e^(-t^2/2) dt is the Gaussian tail function. The coded 16-QAM can realize a 3.8 dB coding gain in E_b/N_0 at P_b = 10^-5 for the bit error rate.

REFERENCES
1. Rolf Johannesson and Kamil Sh. Zigangirov, Fundamentals of Convolutional Coding, 2nd ed.
2. Tri T. Ha, Theory and Design of Digital Communication Systems.
3. Chih-Peng Li, Convolutional Codes (lecture notes).
4. Qiang Wang and Larry Y. Onotera, Coded QAM Using a Binary Convolutional Code.
5. https://en.wikipedia.org/wiki/Convolutional_code
6. John G. Proakis, Digital Communications.
7. Viterbi Decoding of Convolutional Codes, MIT 6.02 Draft Lecture Notes.

