Contents
1. INTRODUCTION
2. CONVOLUTION CODING
   2.1 Definition
   2.2 Representations of Convolutional Code
      2.2.1 Tree diagram
      2.2.2 Trellis diagram
      2.2.3 State diagram
1. INTRODUCTION
In telecommunication and coding theory, digital data transmission over noisy channels is steadily increasing. Channel coding is a technique used for controlling errors in data transmission. There are many techniques and algorithms available for this process, but the convolutional encoder and Viterbi decoder are among the most efficient: they give improved coding gain and enhanced performance. Convolutional encoding with Viterbi decoding is a forward error correction technique that is particularly suited to a channel in which the transmitted signal is corrupted mainly by additive white Gaussian noise (AWGN). Forward error correction can be divided into two categories: block codes and convolutional codes. Block codes operate on fixed-size blocks of bits, while convolutional codes work on bit streams of arbitrary length. The convolutional encoder is frequently used in digital communication systems.
2. CONVOLUTION CODING
2.1 Definition
A convolutional code is defined by three integers: n, k, and K, where the ratio k/n is termed the code rate, n is the number of output bits, k is the number of input bits, and K is called the constraint length. The constraint length signifies the number of k-bit shifts over which a single information bit can influence the encoder output. The longer the constraint length, the larger the number of parity bits that are influenced by any given message bit. Because the parity bits are the only bits sent over the channel, a larger constraint length generally implies greater resilience to bit errors. The trade-off, though, is that it takes considerably longer to decode codes of long constraint length, so one cannot increase the constraint length arbitrarily and expect fast decoding.
For a convolutional code that produces r parity bits per window and slides the window forward by one bit at a time, the rate (when calculated over long messages) is 1/r. The greater the value of r, the higher the resilience to bit errors, but the trade-off is that a proportionally higher amount of communication bandwidth is devoted to coding overhead. In practice, we would like to pick r and the constraint length to be as small as possible while providing a low enough resulting probability of bit error.
Fig. 0. Convolutional code with two parity bits per message bit (r = 2) and constraint length K = 3.
This figure shows the structure of a convolutional encoder for the given generator polynomials and code rate, where U1 and U2 are the first and second code symbols, respectively.
The output branch word and the input bits fed to the register depend on the code rate. Here, a two-bit output branch word is produced and one input bit is fed to the register at each clock cycle.
Let G1(x) be the generator polynomial for the upper connection and G2(x) the generator polynomial for the lower connection, as shown in equation (1):

G1(x) = 1 + x + x^4 + x^5 + x^6
G2(x) = 1 + x + x^3 + x^4 + x^6        (1)
Fig. 2.
This is consistent with the fact that the constraint length K=3.
The output sequence at each stage is determined by the input bit and the two previous input bits. In other words, we may say that the 3-bit output sequence for each input bit is determined by the input bit and the four possible states of the shift register, denoted as a = 00, b = 01, c = 10, and d = 11.
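As a concrete sketch, the shift-register encoding described above can be implemented in a few lines. This example hard-wires the common textbook rate-1/2, K = 3 generators g1 = 111 and g2 = 101 (octal 7, 5); these taps are an illustrative assumption, not necessarily the polynomials of equation (1).

```python
# Minimal rate-1/2, K=3 convolutional encoder.
# Generators g1 = 111, g2 = 101 (octal 7, 5) are a common textbook
# choice for K = 3 and are assumed here for illustration.

def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Encode a bit list; emits two code symbols (U1, U2) per input bit."""
    state = 0  # shift-register contents: the K-1 = 2 previous input bits
    out = []
    for b in bits:
        window = (b << (k - 1)) | state       # current bit + K-1 past bits
        u1 = bin(window & g1).count("1") % 2  # modulo-2 sum over g1 taps
        u2 = bin(window & g2).count("1") % 2  # modulo-2 sum over g2 taps
        out += [u1, u2]
        state = window >> 1                   # shift the register by one bit
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```

Each input bit produces the two code symbols U1 and U2, and the register advances by one bit per clock cycle, matching the two-bit output branch word described above.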
The causal information sequence

u = u_0 u_1 ... u_t ..., where u_t = (u_t^(1) u_t^(2) ... u_t^(b)),   (1)

is encoded as the causal code sequence

v = v_0 v_1 ... v_t ..., where v_t = (v_t^(1) v_t^(2) ... v_t^(c)),   (2)

where

v_t = f(u_t, u_{t-1}, ..., u_{t-m}).   (3)

The parameter m is called the encoder memory. The function f is required to be a linear function, so that

v_t = u_t G_0 + u_{t-1} G_1 + ... + u_{t-m} G_m,   (4)

where G_i, 0 <= i <= m, is a binary b x c matrix. From (4) it then follows that

(v_0 v_1 ...) = (u_0 u_1 ...) G,   (5)

or v = uG, where

G = ( G_0  G_1  ...  G_m
           G_0  G_1  ...  G_m
                     ...           )

and where here and hereafter the parts of matrices left blank are assumed to be filled in with zeros. We call G the generator matrix and G_i, 0 <= i <= m, the generator submatrices.
In figure (5) we illustrate a general convolutional encoder (without feedback).
The encoder of a binary convolutional code with rate 1/n, measured in bits per symbol, may be viewed as a finite-state machine. It consists of an M-stage shift register, n modulo-2 adders, and a multiplexer that serializes the outputs. The code rate is given by

r = L / (n(L + M))

where L is the number of message bits; for L >> M, the code rate approaches 1/n.
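As a quick numeric check, the following sketch (assuming the standard expression r = L/(n(L + M)) for a rate-1/n code with an M-stage register) shows the rate approaching 1/n as the message length L grows:

```python
# Code rate r = L / (n (L + M)): L message bits are encoded into
# n (L + M) code bits, the extra M tail bits flushing the register.
def code_rate(L, n=2, M=2):
    return L / (n * (L + M))

for L in (10, 100, 10000):
    print(L, code_rate(L))  # tends to 1/n = 0.5 as L grows
```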
The received bits are 00. For each state transition, the number on the arc shows the branch metric for that transition. Two of the branch metrics are 0, corresponding to the only states and transitions for which the Hamming distance is 0. The other, non-zero branch metrics correspond to cases where there are bit errors.
The path metric is a value associated with a state in the trellis (i.e., a value associated with each node). For hard-decision decoding, it corresponds to the Hamming distance over the most likely path from the initial state to the
current state in the trellis. By most likely, we mean the path with smallest
current state in the trellis. By most likely, we mean the path with smallest
Hamming distance between the initial state and the current state, measured
over all possible paths between the two states. The path with the smallest
Hamming distance minimizes the total number of bit errors, and is most likely
when the BER is low.
The key insight in the Viterbi algorithm is that the receiver can compute the
path metric for a (state, time) pair incrementally using the path metrics of
previously computed states and the branch metrics.
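The incremental path-metric update can be sketched as a minimal hard-decision Viterbi decoder. It assumes a rate-1/2, K = 3 code with illustrative generators g1 = 111, g2 = 101 (octal 7, 5; not necessarily the exact code of the figures); path metrics are accumulated Hamming distances, updated per received symbol pair exactly as described above.

```python
# Hard-decision Viterbi decoder for an assumed rate-1/2, K=3 code
# with illustrative generators g1 = 111, g2 = 101 (octal 7, 5).

def expected_output(state, b, g1=0b111, g2=0b101):
    """Code symbols the encoder would emit from `state` on input bit b."""
    window = (b << 2) | state
    u1 = bin(window & g1).count("1") % 2
    u2 = bin(window & g2).count("1") % 2
    return (u1, u2), window >> 1          # (branch label, next state)

def viterbi_decode(symbols):
    """symbols: flat list of received code bits, two per message bit."""
    INF = float("inf")
    pm = [0, INF, INF, INF]               # path metric per state; start in 00
    paths = [[], [], [], []]              # survivor input sequence per state
    for i in range(0, len(symbols), 2):
        r = (symbols[i], symbols[i + 1])
        new_pm = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if pm[s] == INF:
                continue
            for b in (0, 1):
                out, ns = expected_output(s, b)
                # branch metric: Hamming distance to the received pair
                bm = (out[0] != r[0]) + (out[1] != r[1])
                if pm[s] + bm < new_pm[ns]:   # keep the survivor path
                    new_pm[ns] = pm[s] + bm
                    new_paths[ns] = paths[s] + [b]
        pm, paths = new_pm, new_paths
    best = min(range(4), key=lambda s: pm[s])
    return paths[best]

# 11 10 00 01 with its fifth bit flipped still decodes to 1 0 1 1:
print(viterbi_decode([1, 1, 1, 0, 0, 1, 0, 1]))  # -> [1, 0, 1, 1]
```

Each trellis column is computed only from the previous column's path metrics plus the branch metrics, which is the incremental insight stated above.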
Viterbi decoding uses the Viterbi algorithm; figure (7) shows all the intermediate steps of the implemented algorithm.
It consists of two decision blocks and some internal steps to decode the received data. Branch metrics are calculated from the received symbols and the pattern; the maximum number of possible branch metrics (BM) is four.
The calculated BM are stored in registers, which interface with feedback from the state ends; all the intermediate steps are executed for all the possible states.
The coding gain is expressed in terms of Eb, where Eb is the energy per bit. If we choose uncoded QPSK as the reference, which has an SED of 4Eb, the energy per symbol for 16-QAM using a rate-3/4 code is

E = 3Eb = (4/16)(2V1^2) + (8/16)(V1^2 + V2^2) + (4/16)(2V2^2) = 10V1^2,

taking V2 = 3V1.
The SED normalized by the QPSK reference is then

4V1^2 / (4Eb) = V1^2 / Eb = 3/10 = 0.3.
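The energy figures can be checked numerically. This sketch assumes the usual 16-QAM level spacing V2 = 3V1 (amplitudes ±V1, ±3V1 per quadrature axis) and 3 information bits per symbol, consistent with E = 3Eb:

```python
# Numeric check of the 16-QAM average symbol energy, assuming the
# standard spacing V2 = 3*V1 (levels ±V1, ±3V1 on each axis).
V1 = 1.0
V2 = 3 * V1

# Average over the 16 points: 4 inner points with energy 2*V1^2,
# 8 edge points with V1^2 + V2^2, and 4 corner points with 2*V2^2.
E = (4 * 2 * V1**2 + 8 * (V1**2 + V2**2) + 4 * 2 * V2**2) / 16
print(E)                        # 10 * V1^2

Eb = E / 3                      # 3 information bits/symbol (rate-3/4 code)
print(4 * V1**2 / (4 * Eb))     # SED normalized by QPSK's 4*Eb: 0.3
```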
With the use of a Gray code, the overall coded QAM (CQAM) becomes nonlinear, although the binary code used is linear. That is, the distance structure of the CQAM depends on the transmitted signal sequence. If the transmitted sequence consists of W points, then over one quadrature channel, an incorrect signal point corresponding to a 2-bit sequence with a Hamming distance of 1 from a transmitted signal point can only have an SED equal to 4V1^2. If a transmitted signal is one of the four corner points, then over one quadrature channel, an incorrect signal point corresponding to a 2-bit sequence with a Hamming distance of 1 from the transmitted signal point can have an SED equal to either 4V1^2 or 4V2^2 = 36V1^2. Denote N as the total number of information bit errors produced by all incorrect paths; N was found to be 27 for the first case and 15 for the second.