Convolutional Code
[Encoder block diagram: k bits are shifted in at a time through a K-stage register, producing n output bits.]
Convolutional codes
k = number of bits shifted into the encoder at one time (k = 1 is usually used)
n = number of encoder output bits corresponding to the k information bits
r = k/n = code rate
K = constraint length (encoder memory)
Each encoded bit is a function of the present input bits and their past ones.
Generator Sequence

g0^(1) = 1, g1^(1) = 0, g2^(1) = 1, g3^(1) = 1, and
g0^(2) = 1, g1^(2) = 1, g2^(2) = 1, g3^(2) = 0, g4^(2) = 1.
Input  Present  Next  Output
0      00       00    00
1      00       10    11
0      01       00    11
1      01       10    00
0      10       01    01
1      10       11    10
0      11       01    10
1      11       11    01
State Diagram
[State diagram of the encoder: four states 00, 01, 10, 11, with each branch labeled input(output), e.g. 1(11), 0(00).]
Encoding Process
Input: 1 0 1 1 1 0 0    Output: 11 01 00 10 01 10 11
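The encoding above can be reproduced with a short shift-register sketch. The generator taps g1 = (1, 0, 1) and g2 = (1, 1, 1) are an assumption inferred from the example (they reproduce the output shown), not code from the slides.

```python
# Minimal sketch of a rate-1/2 feedforward convolutional encoder.
# Taps g1 = (1,0,1) and g2 = (1,1,1) are assumed; they reproduce the example output.
def conv_encode(bits, g1=(1, 0, 1), g2=(1, 1, 1)):
    state = [0, 0]                       # shift register contents (u[l-1], u[l-2])
    out = []
    for u in bits:
        window = [u] + state             # (u[l], u[l-1], u[l-2])
        v1 = sum(a & b for a, b in zip(g1, window)) % 2
        v2 = sum(a & b for a, b in zip(g2, window)) % 2
        out.append((v1, v2))
        state = [u, state[0]]            # shift the new input bit in
    return out

pairs = conv_encode([1, 0, 1, 1, 1, 0, 0])
print(" ".join(f"{a}{b}" for a, b in pairs))  # -> 11 01 00 10 01 10 11
```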
[Trellis diagrams tracing the encoding of the input 1 0 1 1 1 0 0 step by step through the states 00, 01, 10, 11, with branches labeled input(output).]
Convolutional Codes
Convolutional codes differ from block codes in that the encoder contains memory, and the n encoder outputs at any time unit depend not only on the k inputs but also on m previous input blocks. An (n, k, m) convolutional code can be implemented with a k-input, n-output linear sequential circuit with input memory m. Typically, n and k are small integers with k < n, but the memory order m must be made large to achieve low error probabilities. In the important special case when k = 1, the information sequence is not divided into blocks and can be processed continuously. Convolutional codes were first introduced by Elias in 1955 as an alternative to block codes.
Convolutional Codes
Shortly thereafter, Wozencraft proposed sequential decoding as an efficient decoding scheme for convolutional codes, and experimental studies soon began to appear. In 1963, Massey proposed a less efficient but simpler-to-implement decoding method called threshold decoding. Then in 1967, Viterbi proposed a maximum likelihood decoding scheme that was relatively easy to implement for codes with small memory orders. This scheme, called Viterbi decoding, together with improved versions of sequential decoding, led to the application of convolutional codes to deep-space and satellite communication in the early 1970s.
Convolutional Code
A convolutional code is generated by passing the information sequence to be transmitted through a linear finite-state shift register. In general, the shift register consists of K (k-bit) stages and n linear algebraic function generators.
Convolutional Code
Convolutional codes
k = number of bits shifted into the encoder at one time (k = 1 is usually used)
n = number of encoder output bits corresponding to the k information bits
Rc = k/n = code rate
K = constraint length (encoder memory)
Each encoded bit is a function of the present input bits and their past ones.
The encoder consists of an m = 3-stage shift register together with n = 2 modulo-2 adders and a multiplexer for serializing the encoder outputs.
The mod-2 adders can be implemented as EXCLUSIVE-OR gates.
Since mod-2 addition is a linear operation, the encoder is a linear feedforward shift register. All convolutional encoders can be implemented using a linear feedforward shift register of this type.
v = (1 1, 0 1, 0 0, 0 1, 0 1, 0 1, 0 0, 1 1).
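Each output stream here is the mod-2 convolution of u with a generator sequence. A sketch, assuming generator sequences g^(1) = (1 0 1 1) and g^(2) = (1 1 1 1) (these reproduce the sequence v above):

```python
# Generator-sequence view: each output stream is the GF(2) convolution of the
# information sequence u with a generator sequence. g1 = (1 0 1 1) and
# g2 = (1 1 1 1) are assumed for this (2, 1, 3) example.
def conv(u, g):
    """Mod-2 (GF(2)) convolution of two binary sequences."""
    out = [0] * (len(u) + len(g) - 1)
    for i, ui in enumerate(u):
        for j, gj in enumerate(g):
            out[i + j] ^= ui & gj
    return out

u = [1, 0, 1, 1, 1]
v1 = conv(u, [1, 0, 1, 1])   # -> [1, 0, 0, 0, 0, 0, 0, 1]
v2 = conv(u, [1, 1, 1, 1])   # -> [1, 1, 0, 1, 1, 1, 0, 1]
v = [b for pair in zip(v1, v2) for b in pair]
print(v)  # interleaved pairs: 11 01 00 01 01 01 00 11
```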
The generator matrix has the form

G = | g0^(1)g0^(2)  g1^(1)g1^(2)  ...  gm^(1)gm^(2)                                   |
    |               g0^(1)g0^(2)  ...  g(m-1)^(1)g(m-1)^(2)  gm^(1)gm^(2)             |
    |                             ...                                                 |

where the blank areas are all zeros. The encoding equations can be rewritten in matrix form as v = uG. G is called the generator matrix of the code. Note that each row of G is identical to the preceding row but shifted n = 2 places to the right, and G is a semi-infinite matrix, corresponding to the fact that the information sequence u is of arbitrary length.
Example 10.2
If u = (1 0 1 1 1), then

v = uG = (1 0 1 1 1) ·

  | 11 01 11 11                |
  |    11 01 11 11             |
  |       11 01 11 11          |
  |          11 01 11 11       |
  |             11 01 11 11    |

= (1 1, 0 1, 0 0, 0 1, 0 1, 0 1, 0 0, 1 1).
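The matrix product v = uG can be checked numerically with a truncated generator matrix. Assuming g^(1) = (1 0 1 1) and g^(2) = (1 1 1 1), each row of G is the interleaved pattern 11 01 11 11 shifted n = 2 places per row; a sketch:

```python
# Sketch of v = uG over GF(2) for the (2, 1, 3) code of Example 10.2.
# Each row of G is the interleaved pair pattern 11 01 11 11, shifted 2 places.
u = [1, 0, 1, 1, 1]
row = [1, 1, 0, 1, 1, 1, 1, 1]          # 11 01 11 11
width = 2 * (len(u) + 3)                # (L + m) output pairs, n = 2 bits each
G = [[0] * (2 * i) + row + [0] * (width - 2 * i - len(row))
     for i in range(len(u))]
v = [sum(u[i] & G[i][j] for i in range(len(u))) % 2 for j in range(width)]
print(v)  # -> 1 1, 0 1, 0 0, 0 1, 0 1, 0 1, 0 0, 1 1 (as a flat bit list)
```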
v^(1) = (1 0 1) * (1 1) + (1 1 0) * (0 1) = (1 0 0 1)
v^(2) = (1 0 1) * (0 1) + (1 1 0) * (1 0) = (1 0 0 1)
v^(3) = (1 0 1) * (1 1) + (1 1 0) * (1 0) = (0 0 1 1)
and
v = (1 1 0, 0 0 0 , 0 0 1, 1 1 1).
An example of a (4, 3, 2) convolutional encoder in which the shift register lengths are 0, 1, and 2.
v^(1)(D) = u(D) g^(1)(D)
v^(2)(D) = u(D) g^(2)(D),
where u(D) = u0 + u1 D + u2 D^2 + ··· is the information sequence.
g^(1)(D) = g0^(1) + g1^(1) D + ··· + gm^(1) D^m
g^(2)(D) = g0^(2) + g1^(2) D + ··· + gm^(2) D^m
and all operations are modulo-2. After multiplexing, the code word becomes
v(D) = v^(1)(D^2) + D v^(2)(D^2).
The indeterminate D can be interpreted as a delay operator, the power of D denoting the number of time units a bit is delayed with respect to the initial bit.
m = max_{1<=j<=n} deg g^(j)(D)
K_i = max_{1<=j<=n} deg g_i^(j)(D),  1 <= i <= k,
where g_i^(j)(D) is the generator polynomial relating the ith input to the jth output, and the encoder memory order m is
m = max_{1<=i<=k} K_i = max_{1<=j<=n, 1<=i<=k} deg g_i^(j)(D).
In terms of the k × n polynomial generator matrix

G(D) = | g_1^(1)(D)  g_1^(2)(D)  ...  g_1^(n)(D) |
       | g_2^(1)(D)  g_2^(2)(D)  ...  g_2^(n)(D) |
       |     ...                                  |
       | g_k^(1)(D)  g_k^(2)(D)  ...  g_k^(n)(D) |

the encoding equations become

V(D) = U(D) G(D)

where U(D) = (u^(1)(D), u^(2)(D), ..., u^(k)(D)) is the k-tuple of input sequences and V(D) = (v^(1)(D), v^(2)(D), ..., v^(n)(D)) is the n-tuple of output sequences. After multiplexing, the code word becomes

v(D) = v^(1)(D^n) + D v^(2)(D^n) + ··· + D^(n-1) v^(n)(D^n).
v(D) = (1 + D^9) + (1 + D^9) D + (D^6 + D^9) D^2 = 1 + D + D^8 + D^9 + D^10 + D^11.
v(D) = Σ_{i=1}^{k} u^(i)(D^n) g_i(D)
where
g_i(D) = g_i^(1)(D^n) + D g_i^(2)(D^n) + ··· + D^(n-1) g_i^(n)(D^n),  1 <= i <= k.
g(D) = g^(1)(D^2) + D g^(2)(D^2) = 1 + D + D^3 + D^4 + D^5 + D^6 + D^7
and for u(D) = 1 + D^2 + D^3 + D^4, the code word is
v(D) = u(D^2) g(D) = (1 + D^4 + D^6 + D^8)(1 + D + D^3 + D^4 + D^5 + D^6 + D^7) = 1 + D + D^3 + D^7 + D^9 + D^11 + D^14 + D^15.
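The polynomial arithmetic can be checked with polynomials represented as sets of exponents over GF(2); g(D) and u(D) are taken from the example above.

```python
# GF(2) polynomial arithmetic with polynomials stored as sets of exponents:
# multiplying toggles membership, which implements mod-2 coefficient addition.
def mul(p, q):
    out = set()
    for a in p:
        for b in q:
            out ^= {a + b}          # toggle exponent a+b (mod-2 coefficient)
    return out

g = {0, 1, 3, 4, 5, 6, 7}           # g(D) = 1 + D + D^3 + D^4 + D^5 + D^6 + D^7
u = {0, 2, 3, 4}                    # u(D) = 1 + D^2 + D^3 + D^4
u_sq = {2 * e for e in u}           # u(D^2)
v = mul(u_sq, g)
print(sorted(v))  # -> [0, 1, 3, 7, 9, 11, 14, 15]
```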
The encoder state at time l is the shift-register contents
σ_l = (u_{l-1}^(1) u_{l-2}^(1) ··· u_{l-K1}^(1),  u_{l-1}^(2) ··· u_{l-K2}^(2),  ...,  u_{l-1}^(k) ··· u_{l-Kk}^(k)).
i = b0 2^0 + b1 2^1 + ··· + b_{K-1} 2^{K-1}
Structural Properties of Convolutional Codes
Modified encoder state diagram of a (2, 1, 3) code.
The path representing the state sequence S0 S1 S3 S7 S6 S5 S2 S4 S0 has path gain X^2 · X^1 · X^1 · X^1 · X^2 · X^1 · X^2 · X^2 = X^12.
Structural Properties of Convolutional Codes
Modified encoder state diagram of a (3, 2, 1) code.
The path representing the state sequence S0 S1 S3 S2 S0 has path gain X^2 · X^1 · X^0 · X^1 = X^4.
where Σ Ci is the sum of the gains of all loops, Σ Ci' Cj' is the sum of the products of the gains of all pairs of nontouching loops, and Σ Ci'' Cj'' Cl'' is the sum of the products of the gains of all triples of nontouching loops.
T(X) = (Σ_i F_i Δ_i) / Δ

where the sum in the numerator is over all forward paths, F_i is the gain of the ith forward path, and Δ_i is computed in the same way as Δ but using only that portion of the graph not touching the ith forward path.
There are 11 loops in the state diagram, with gains C1, C2, ..., C11, and ten pairs of nontouching loops:

Loop pair 1: (loop 2, loop 8)
Loop pair 2: (loop 3, loop 11)
Loop pair 3: (loop 4, loop 8)
Loop pair 4: (loop 4, loop 11)
Loop pair 5: (loop 6, loop 11)
Loop pair 6: (loop 7, loop 9)
Loop pair 7: (loop 7, loop 10)
Loop pair 8: (loop 7, loop 11)
Loop pair 9: (loop 8, loop 11)
Loop pair 10: (loop 10, loop 11)

Summing the loop gains and the pair-gain products gives

Δ = 1 - Σ Ci + Σ Ci' Cj' = 1 - 2X - X^3.
Forward path 1: S0 S1 S3 S7 S6 S5 S2 S4 S0   (F1 = X^12)
Forward path 2: S0 S1 S3 S7 S6 S4 S0         (F2 = X^7)
Forward path 3: S0 S1 S3 S6 S5 S2 S4 S0      (F3 = X^11)
Forward path 4: S0 S1 S3 S6 S4 S0            (F4 = X^6)
Forward path 5: S0 S1 S2 S5 S3 S7 S6 S4 S0   (F5 = X^8)
Forward path 6: S0 S1 S2 S5 S3 S6 S4 S0      (F6 = X^7)
Forward path 7: S0 S1 S2 S4 S0               (F7 = X^7)
T(X) provides a complete description of the weight distribution of all nonzero code words that diverge from and remerge with state S0 exactly once. In this case, there is one such code word of weight 6, three of weight 7, five of weight 8, and so on.
Δ = 1 - 2X - 2X^2 - X^3 + X^4 + X^5.
Σ_i F_i Δ_i = X^5 (1 - X - X^2 - X^3 + X^5) + X^4 (1 - X^2) + X^6 · 1 + X^5 (1 - X^3) + X^4 · 1 + X^3 (1 - X - X^2) + X^6 (1 - X^2) + X^6 · 1 + X^5 (1 - X) + X^8 · 1 + X^4 (1 - X - X^2 - X^3 + X^4) + X^7 (1 - X^3) + X^6 · 1 + X^3 (1 - X) + X^6 · 1 = 2X^3 + X^4 + X^5 + X^6 - X^7 - X^8.
This code contains two nonzero code words of weight 3, five of weight 4, fifteen of weight 5, and so on.
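As a numerical check, expanding T(X) = (Σ Fi Δi)/Δ as a power series reproduces these counts; the numerator and denominator polynomials below are the ones computed above for this code.

```python
# Power-series expansion of T(X) = num(X)/den(X) via the linear recurrence
# t_k = num_k - sum_j den_j * t_{k-j}, reading off the weight distribution.
num = {3: 2, 4: 1, 5: 1, 6: 1, 7: -1, 8: -1}   # 2X^3 + X^4 + X^5 + X^6 - X^7 - X^8
den = {0: 1, 1: -2, 2: -2, 3: -1, 4: 1, 5: 1}  # 1 - 2X - 2X^2 - X^3 + X^4 + X^5
t = []
for k in range(9):
    c = num.get(k, 0) - sum(den.get(j, 0) * t[k - j] for j in range(1, k + 1))
    t.append(c)
print(t[3:6])  # -> [2, 5, 15]: two words of weight 3, five of weight 4, fifteen of weight 5
```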
T(X, Y, Z) = Σ_{i,j,l} A_{i,j,l} X^i Y^j Z^l.
The coefficient Ai,j,l denotes the number of code words with weight i, whose associated information sequence has weight j, and whose length is l branches.
Structural Properties of Convolutional Codes
The augmented state diagram for the (2, 1, 3) code.
[Expansion of T(X, Y, Z) for the (2, 1, 3) code, obtained from the augmented state diagram.]
Xb = D Xc + D Xd
Xc = D^3 Xa + D Xb
Xd = D^2 Xc + D^2 Xd
Xe = D^2 Xb

The transfer function for the code is defined as T(D) = Xe/Xa. By solving the state equations, we obtain

T(D) = D^6 / (1 - 2D^2) = D^6 + 2D^8 + 4D^10 + 8D^12 + ··· = Σ_{d=6}^{∞} a_d D^d

where a_d = 2^{(d-6)/2} for even d and a_d = 0 for odd d.
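A small power-series sketch confirms the coefficients a_d of T(D) = D^6/(1 - 2D^2):

```python
# Expand T(D) = D^6 / (1 - 2D^2) as a power series with the recurrence
# a_k = num_k - sum_j den_j * a_{k-j}; even d >= 6 give 1, 2, 4, 8, ...
den = {0: 1, 2: -2}                 # denominator 1 - 2D^2
a = []
for k in range(13):
    num_k = 1 if k == 6 else 0      # numerator D^6
    a.append(num_k - sum(den.get(j, 0) * a[k - j] for j in range(1, k + 1)))
print(a[6:13:2])  # coefficients at d = 6, 8, 10, 12 -> [1, 2, 4, 8]
```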
Xb = J D Xc + J D Xd
Xc = J N D^3 Xa + J N D Xb
Xd = J N D^2 Xc + J N D^2 Xd
Xe = J D^2 Xb

Upon solving these equations for the ratio Xe/Xa, we obtain the transfer function:

T(D, N, J) = J^3 N D^6 / (1 - J N D^2 (1 + J))
           = J^3 N D^6 + J^4 N^2 D^8 + J^5 N^2 D^8 + J^5 N^3 D^10 + 2 J^6 N^3 D^10 + J^7 N^3 D^10 + ···
Reference: John G. Proakis, Digital Communications, Fourth Edition, pp. 477-482, McGraw-Hill, 2001.
G = | I P0   0 P1   0 P2   ···   0 Pm                           |
    |        I P0   0 P1   ···   0 P(m-1)   0 Pm                |
    |               I P0   ···   0 P(m-2)   0 P(m-1)   0 Pm     |
    |                      ···                                  |

where I is the k × k identity matrix, 0 is the k × k all-zero matrix, and Pl is the k × (n - k) matrix

Pl = | g_{1,l}^(k+1)  g_{1,l}^(k+2)  ···  g_{1,l}^(n) |
     | g_{2,l}^(k+1)  g_{2,l}^(k+2)  ···  g_{2,l}^(n) |
     |      ···                                       |
     | g_{k,l}^(k+1)  g_{k,l}^(k+2)  ···  g_{k,l}^(n) |
G(D) = [1   1 + D + D^3].
The (2, 1, 3) systematic code requires only one modulo-2 adder with three inputs.
The (3, 2, 2) systematic encoder requires only two stages of encoder memory rather than four.
GCD[g^(1)(D), ..., g^(n)(D)] = D^l for some l >= 0.
Structural Properties of Convolutional Codes
In choosing nonsystematic codes for use in a communication system, it is important to avoid the selection of catastrophic codes. Only a fraction 1/(2^n - 1) of (n, 1, m) nonsystematic codes are catastrophic. A similar result for (n, k, m) codes with k > 1 is still lacking.
Introduction
In 1967, Viterbi introduced a decoding algorithm for convolutional codes which has since become known as the Viterbi algorithm. Later, Omura showed that the Viterbi algorithm was equivalent to finding the shortest path through a weighted graph. Forney recognized that it was in fact a maximum likelihood decoding algorithm for convolutional codes; that is, the decoder output selected is always the code word that gives the largest value of the log-likelihood function.
and an information sequence of length L = 5. The trellis diagram contains L + m + 1 time units or levels, and these are labeled from 0 to L + m in Figure (a). Assuming that the encoder always starts in state S0 and returns to state S0, the first m time units correspond to the encoder's departure from state S0, and the last m time units correspond to the encoder's return to state S0.
P(r | v) = Π_{i=0}^{N-1} P(r_i | v_i)
where P(r_i | v_i) is a channel transition probability. This is a minimum error probability decoding rule when all code words are equally likely.
M(r | v) = Σ_{i=0}^{N-1} M(r_i | v_i).
The decision made with the log-likelihood function as metric is called a soft decision. If the channel is an AWGN channel, soft-decision decoding amounts to finding the path with minimum Euclidean distance.
The following algorithm, when applied to the received sequence r from a DMC, finds the path through the trellis with the largest metric (i.e., the maximum likelihood path). The algorithm processes r in an iterative manner. At each step, it compares the metrics of all paths entering each state, and stores the path with the largest metric, called the survivor, together with its metric.
M(r | v') >= M(r | v),  for all v ≠ v'.
d(r, v) = Σ_{i=0}^{L+m-1} d(r_i, v_i) = Σ_{i=0}^{N-1} d(r_i, v_i).
In applying the Viterbi algorithm to the BSC, d(r_i, v_i) becomes the branch metric, d(r_i, v_i) per bit becomes the bit metric, and the algorithm must find the path through the trellis with the smallest metric (i.e., the path closest to r in Hamming distance). The details of the algorithm are exactly the same, except that the Hamming distance replaces the log-likelihood function as the metric and the survivor at each state is the path with the smallest metric. This kind of decoding is called hard-decision decoding.
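A minimal hard-decision Viterbi sketch. It uses a rate-1/2 code with assumed taps (1, 0, 1) and (1, 1, 1) rather than the (3, 1, 2) code of the figures; with one channel error, the survivor ending in the terminated state recovers the information sequence.

```python
# Hard-decision Viterbi decoding on a rate-1/2 code (assumed taps 101 and 111).
# Survivor = minimum-Hamming-distance path into each state at each time step.
def branch(state, u):
    """Next state and output pair for input u from state (u[l-1], u[l-2])."""
    v1 = u ^ state[1]
    v2 = u ^ state[0] ^ state[1]
    return (u, state[0]), (v1, v2)

def viterbi(r_pairs):
    INF = float("inf")
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metric = {s: (0 if s == (0, 0) else INF) for s in states}   # start in 00
    paths = {s: [] for s in states}
    for r in r_pairs:
        new_m = {s: INF for s in states}
        new_p = {s: [] for s in states}
        for s in states:
            if metric[s] == INF:
                continue
            for u in (0, 1):
                ns, out = branch(s, u)
                d = (out[0] != r[0]) + (out[1] != r[1])   # Hamming branch metric
                if metric[s] + d < new_m[ns]:             # keep the survivor
                    new_m[ns] = metric[s] + d
                    new_p[ns] = paths[s] + [u]
        metric, paths = new_m, new_p
    return paths[(0, 0)]        # assume the trellis is terminated in state 00

sent = [(1, 1), (0, 1), (0, 0), (1, 0), (0, 1), (1, 0), (1, 1)]  # encodes 1 0 1 1 1 0 0
recv = [(0, 1)] + sent[1:]                                       # one channel error
print(viterbi(recv))  # -> [1, 0, 1, 1, 1, 0, 0]
```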
v = (1 1 1, 0 1 0, 1 1 0, 0 1 1, 1 1 1, 1 0 1, 0 1 1)
is shown as the highlighted path in the figure, and the decoded information sequence is u = (1 1 0 0 1) . The fact that the final survivor has a metric of 7 means that no other path through the trellis differs from r in fewer than seven positions. Note that at some states neither path is crossed out. This indicates a tie in the metric values of the two paths entering that state. If the final survivor goes through any of these states, there is more than one maximum likelihood path (i.e., more than one path whose distance from r is minimum).
The puncturing matrices are selected to satisfy a rate-compatibility criterion, where the basic requirement is that lower-rate codes (higher redundancy) should transmit the same coded bits as all higher-rate codes plus additional bits. The resulting codes obtained from a single rate 1/n convolutional code are called rate-compatible punctured convolutional (RCPC) codes. In applying RCPC codes to systems that require unequal error protection of the information sequence, we may format the groups of bits into a frame structure, and each frame is terminated after the last group of information bits by K - 1 zeros.
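An illustrative puncturing sketch; the matrix P = [[1, 1], [1, 0]] is an assumed example (not from the slides) that deletes every fourth coded bit of a rate-1/2 stream, giving a rate-2/3 punctured code.

```python
# Puncturing a rate-1/2 coded stream with period-2 matrix P (assumed example):
# P[j][t] = 1 means transmit output j at time t mod 2; 3 of 4 bits survive -> rate 2/3.
P = [[1, 1], [1, 0]]

def puncture(pairs):
    out = []
    for t, (v1, v2) in enumerate(pairs):
        if P[0][t % 2]:
            out.append(v1)
        if P[1][t % 2]:
            out.append(v2)
    return out

print(puncture([(1, 1), (0, 1), (0, 0), (1, 0)]))  # -> [1, 1, 0, 0, 0, 1]
```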
Practical Considerations
The minimum free distance dfree can be increased either by decreasing the code rate or by increasing the constraint length, or both. If hard-decision decoding is used, the performance (coding gain) is reduced by approximately 2 dB for the AWGN channel when compared with soft-decision decoding.
Practical Considerations
Two important issues in the implementation of Viterbi decoding are:
1. The effect of path memory truncation, which is a desirable feature that ensures a fixed decoding delay.
2. The degree of quantization of the input signal to the Viterbi decoder.
Path memory truncation to about five constraint lengths has been found to result in negligible performance loss.
Practical Considerations
Bit error probability for rate Viterbi decoding with eight-level quantized inputs to the decoder and 32-bit path memory.
Practical Considerations
Performance of rate codes with hard-decision Viterbi decoding and 32-bit path memory truncation.
Practical Considerations
Performance of rate , K=5 code with eight-, four-, and two-level quantization at the input to the Viterbi decoder. Path truncation length=32 bits.
Practical Considerations
Performance of rate , K=5 code with 32-, 16-, and 8-bit path memory truncation and eight- and two-level quantization.