Example: x1 = [2 0 2 1] and x2 = [1 0 2 0]
Hamming weight: wH(x1) = 3 and wH(x2) = 2
Hamming distance: dH(x1, x2) = wH(x1-x2) = wH([1 0 0 1]) = 2
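The two quantities can be computed directly; a minimal sketch over GF(q) (function names are illustrative):

```python
def hamming_weight(x):
    """w_H(x): number of non-zero symbols."""
    return sum(1 for s in x if s != 0)

def hamming_distance(x1, x2, q):
    """d_H(x1, x2) = w_H(x1 - x2), subtraction taken in GF(q), i.e. mod q."""
    return hamming_weight([(a - b) % q for a, b in zip(x1, x2)])

x1, x2 = [2, 0, 2, 1], [1, 0, 2, 0]
print(hamming_weight(x1), hamming_weight(x2))   # 3 2
print(hamming_distance(x1, x2, q=3))            # 2
```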
Algebra for Finite Fields
Finite Algebra (groups)
Set M: An arbitrary collection of elements without any predefined
operations between the set elements
The set can be finite (e.g. the set of all integers smaller than 4, i.e. {0,1,2,3})
or infinite (e.g. the set of all real numbers)
The cardinality |M| of the set is the number of objects within the set
Examples:
Matrices with integer elements: ring with identity under matrix addition and
multiplication. I is multiplicative identity. Matrix multiplication is not commutative
The integers under modulo q addition and multiplication form a commutative ring
with identity (but without multiplicative inverses in general)
Finite Algebra (fields)
Field (F, +, ·): Set F of elements with operations + and ·
Conditions:
(F, +) is a commutative group with identity element “0”
(F \ {0}, ·) is a commutative group with identity element “1”
→ a multiplicative inverse exists for all elements of F \ {0}
· is distributive over +: a·(b + c) = (a·b) + (a·c)
Vector Spaces
Vector Space: A vector space V over a field F denotes a set of
vectors with the definition of an addition “+” and a scalar multiplication “·”
Vector: n-tuple of elements ai from the ground field F: a = [a0 a1 … an−1] with ai ∈ F
Vector addition: a + b = [a0+b0  a1+b1  …  an−1+bn−1]
Scalar multiplication: α·a = [α·a0  α·a1  …  α·an−1]
Conditions:
V is closed: a + b ∈ V holds for all a, b ∈ V
α·a ∈ V holds for all α ∈ F and a ∈ V
Rules:
a + b = b + a,  α·(a + b) = α·a + α·b
(a + b) + c = a + (b + c),  (α + β)·a = α·a + β·a
α·(β·a) = (α·β)·a,  0·a = 0
a + (−a) = 0,  1·a = a
Linear combination: b = α1·a1 + α2·a2 + … + αn·an
Vector Spaces
Inner product: a·b = Σ_{i=0}^{n−1} ai·bi = a0·b0 + a1·b1 + … + an−1·bn−1
Rules: a·b = b·a,  (α·a)·b = α·(a·b),  c·(a + b) = (c·a) + (c·b)
Spanning Set: The collection of vectors {v0, v1, …, vn−1} is a spanning set of V, if
each vector in V can be represented by a linear combination of {v0, v1, …, vn−1}
Basis: A spanning set for V with minimal cardinality.
A vector space of dimension k (dim V = k) has a basis of k elements {v0, v1, …, vk−1}
For every vector a ∈ V there is a unique representation
a = Σ_{i=0}^{k−1} αi·vi = α0·v0 + α1·v1 + … + αk−1·vk−1 = [α0 α1 … αk−1]·V
where the rows of the matrix V contain the basis vectors, i.e. they span the vector space
Addition: a(D) + b(D) = (a0 + b0) + (a1 + b1)·D + … + (am−1 + bm−1)·D^{m−1}
Multiplication: a(D)·b(D) = a0·b0 + (a1·b0 + a0·b1)·D + … + am−1·bn−1·D^{m+n−2}
Irreducible polynomial:
A polynomial p(D) of degree m with coefficients pi ∈ GF(p) is called irreducible, if it cannot be
factorized into polynomials of degree smaller than m. Therefore, p(D) has no zeros in GF(p).
The term irreducible always corresponds to a certain field.
Primitive polynomial:
For each prime number p and for each m there exists an irreducible
polynomial p(D) with coefficients pi ∈ GF(p) with the property:
All powers z^1, …, z^{q−1} of an element z ∈ GF(p^m) with p(z) = 0 (i.e., z is a root of p(D))
are different from each other and z^0 = z^{q−1} = 1 holds (q = p^m).
z is called a primitive element and p(D) a primitive polynomial.
Example: p = 2 and m = 2
p(D) = D^2 + D + 1 is a primitive polynomial over GF(2)
z is a root of p(D): p(z) = 0 = z^2 + z + 1 → condition z^2 = z + 1
q − 1 = p^m − 1 = 2^2 − 1 = 3 elements unequal 0:
z^0 = 1
z^1 = z
z^2 = z + 1
z^3 = z·z^2 = z·(z + 1) = z^2 + z = (z + 1) + z = 1 = z^0
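The cycle z^0, z^1, z^2, z^3 = z^0 above can be reproduced by repeatedly multiplying by z and reducing with z^2 = z + 1; a small sketch (the coefficient-pair representation is an implementation choice):

```python
# GF(4) generated by p(D) = D^2 + D + 1 over GF(2): elements are pairs
# (c1, c0) meaning c1*z + c0; the reduction rule z^2 = z + 1 is p(z) = 0.
def mul_by_z(c1, c0):
    # (c1*z + c0)*z = c1*z^2 + c0*z = c1*(z + 1) + c0*z over GF(2)
    return ((c1 + c0) % 2, c1)

elem = (0, 1)                  # z^0 = 1
powers = [elem]
for _ in range(3):
    elem = mul_by_z(*elem)
    powers.append(elem)
# z^0 = 1, z^1 = z, z^2 = z + 1, z^3 = 1 again
print(powers)  # [(0, 1), (1, 0), (1, 1), (0, 1)]
```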
Extension Fields
Extension field GF(pm)
p(D) is a primitive polynomial of degree m with coefficients pi ∈ GF(p), and z with
p(z) = 0 is the primitive element belonging to p(D). The extension field GF(p^m) can be
spanned by the powers z^0, …, z^{q−2}:
GF(p^m) = {0, z^1, z^2, …, z^{q−2}, z^{q−1} = z^0 = 1} = { p0 + p1·z + … + pm−1·z^{m−1} | pi ∈ GF(p) }
(exponential representation ↔ polynomial representation)
Table of primitive polynomials for GF(2m)
(given are the non-vanishing exponents)
q = 4:   (0,1,2)
q = 8:   (0,1,3), (0,2,3)
q = 16:  (0,1,4), (0,3,4), (0,1,2,3,4)
q = 32:  (0,2,5), (0,3,5), (0,1,2,3,5), (0,1,2,4,5), (0,1,3,4,5), (0,2,3,4,5)
q = 64:  (0,1,6), (0,3,6), (0,5,6), (0,1,2,4,6), (0,1,2,5,6), (0,1,3,4,6), (0,1,4,5,6), (0,2,3,5,6), (0,2,4,5,6)
Example for Calculation in GF(8) – Exponential Representation
Primitive polynomial: p(D) = D^3 + D + 1 → condition: z^3 = z + 1
z^1 = z          z^5 = z^3 + z^2 = z + 1 + z^2 = z^2 + z + 1
z^2 = z^2        z^6 = z^4 + z^3 = z^2 + z + z + 1 = z^2 + 1
z^3 = z + 1      z^7 = z^5 + z^4 = z^2 + z + 1 + z^2 + z = 1
z^4 = z^2 + z
Binary representation of the exponent:
binary   expon.   polyn.
000      0        0
001      z^1      z
010      z^2      z^2
011      z^3      z+1
100      z^4      z^2+z
101      z^5      z^2+z+1
110      z^6      z^2+1
111      z^7      1
Example:
a = [011 101 011 100] ↔ [z^3  z^5  z^3  z^4]
b = [100 111 011 000] ↔ [z^4  1    z^3  0]
Multiplication (inner product, exponents add mod q−1 = 7):
a · b = z^3·z^4 + z^5·1 + z^3·z^3 + z^4·0 = z^7 + z^5 + z^6 + 0
      = 1 + (z^2+z+1) + (z^2+1) = z + 1 = z^3 ↔ 011
Addition (element-wise):
a + b = [z^3+z^4  z^5+1  z^3+z^3  z^4+0] = [z^2+1  z^2+z  0  z^2+z]
      = [z^6  z^4  0  z^4] ↔ [110 100 000 100]
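Exponential-representation arithmetic is usually implemented with log/antilog tables; a sketch for GF(8) with p(D) = D^3 + D + 1 that reproduces the inner product a·b = z^3 of the example (the table-building helper is an assumption, not from the slides):

```python
# GF(8) built from p(D) = D^3 + D + 1, i.e. z^3 = z + 1. Elements are 3-bit
# integers b2 b1 b0 representing b2*z^2 + b1*z + b0 (an implementation choice).
def build_gf8_tables():
    exp_tab = [0] * 7              # exp_tab[i] = z^i as bit pattern
    x = 1                          # z^0
    for i in range(7):
        exp_tab[i] = x
        x <<= 1                    # multiply by z
        if x & 0b1000:             # reduce with z^3 = z + 1 (pattern 0b011)
            x = (x ^ 0b1000) ^ 0b011
    log_tab = {v: i for i, v in enumerate(exp_tab)}
    return exp_tab, log_tab

exp_tab, log_tab = build_gf8_tables()

def gf8_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return exp_tab[(log_tab[a] + log_tab[b]) % 7]

# inner product of a = [z^3, z^5, z^3, z^4] and b = [z^4, 1, z^3, 0]
a = [exp_tab[3], exp_tab[5], exp_tab[3], exp_tab[4]]
b = [exp_tab[4], exp_tab[0], exp_tab[3], 0]
acc = 0
for ai, bi in zip(a, b):
    acc ^= gf8_mul(ai, bi)         # addition in GF(2^m) is bitwise XOR
print(acc == exp_tab[3])           # True: a·b = z^3 = z + 1
```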
Example for Calculation in GF(8) – Polynomial Representation
Primitive polynomial: p(D) = D^3 + D + 1 → condition: z^3 = z + 1
z^1 = z          z^5 = z^3 + z^2 = z + 1 + z^2 = z^2 + z + 1
z^2 = z^2        z^6 = z^4 + z^3 = z^2 + z + z + 1 = z^2 + 1
z^3 = z + 1      z^7 = z^5 + z^4 = z^2 + z + 1 + z^2 + z = 1
z^4 = z^2 + z
Binary representation of the coefficients p2·z^2 + p1·z + p0:
binary   polyn.     expon.
000      0          0
001      1          z^0
010      z          z^1
011      z+1        z^3
100      z^2        z^2
101      z^2+1      z^6
110      z^2+z      z^4
111      z^2+z+1    z^5
Example:
a = [011 101 011 100] ↔ [z+1  z^2+1  z+1  z^2]
b = [100 111 011 000] ↔ [z^2  z^2+z+1  z+1  0]
Multiplication (inner product):
a · b = (z+1)·z^2 + (z^2+1)·(z^2+z+1) + (z+1)·(z+1) + z^2·0
      = (z^2+z+1) + (z^2+z) + (z^2+1) + 0 = z^2 ↔ 100
Addition (element-wise):
a + b = [(z+1)+z^2  (z^2+1)+(z^2+z+1)  (z+1)+(z+1)  z^2+0]
      = [z^2+z+1  z  0  z^2] ↔ [111 010 000 100]
Properties of Linear Codes
and
Decoding Principles
Distance Properties for (n, k, dmin)q codes
Recall:
Hamming weight wH(x): number of non-zero symbols in a code word
Hamming distance dH(x1, x2) = wH(x1 − x2); channel model: y = x + e with e ∈ GF(q)^n
Minimum Hamming distance: dmin = min dH(a, b) over all code words a ≠ b
Linearity → dmin = min wH(a) over all code words a ≠ 0
(Sketch: for dmin = 4, t = 1 errors are correctable and t′ = 3 errors are detectable)
Sphere packing bound (Hamming bound): q^{n−k} ≥ Σ_{r=0}^{t} (n choose r)·(q−1)^r
Proof: With n−k parity symbols exactly q^{n−k} syndromes exist. For correcting all error
vectors e with wH(e) ≤ t, each e has to be assigned to one of the q^{n−k} syndromes. The right-
hand side (r.h.s.) is the number of possible error vectors of weight at most t.
To correct t errors, the number of syndromes must be at least as large as the number of error patterns.
Equality holds for perfect codes, i.e. the number of syndromes corresponds to the number
of correctable error patterns, e.g. the (7,4,3)2 Hamming code and the (23,12,7)2 Golay code.
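The bound and the perfect-code equality can be checked numerically; a sketch:

```python
from math import comb

def hamming_bound(n, k, t, q=2):
    """Return (q^(n-k), sum_{r=0}^t C(n,r)*(q-1)^r); lhs >= rhs must hold,
    with equality exactly for perfect codes."""
    lhs = q ** (n - k)
    rhs = sum(comb(n, r) * (q - 1) ** r for r in range(t + 1))
    return lhs, rhs

print(hamming_bound(7, 4, 1))    # (8, 8)       -> perfect (7,4,3) Hamming code
print(hamming_bound(23, 12, 3))  # (2048, 2048) -> perfect (23,12,7) Golay code
```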
Distance Properties for (n, k, dmin)q codes
Plotkin bound: dmin ≤ n·(q−1)/q · q^k/(q^k − 1)
Proof:
• Average weight of each symbol in GF(q) is (q−1)/q
• Average weight of a code word of length n is n·(q−1)/q
• Without the all-zero word the average weight is increased by the factor q^k/(q^k − 1) → r.h.s.
• The average weight is always larger than or equal to the minimum weight (dmin)
Gilbert-Varshamov bound:
The previous bounds provide estimates for the minimum distance, but no guarantee
for the existence of a real code
The Gilbert-Varshamov bound proves the existence of a real code (but gives no code construction)
There exists an (n, k, dmin)q code if the following condition is fulfilled:
Σ_{r=0}^{dmin−2} (n−1 choose r)·(q−1)^r < q^{n−k}
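The existence condition can be evaluated directly; a sketch:

```python
from math import comb

def gv_code_exists(n, k, dmin, q=2):
    """Gilbert-Varshamov: an (n, k, dmin)_q code exists if
    sum_{r=0}^{dmin-2} C(n-1, r) * (q-1)^r < q^(n-k)."""
    lhs = sum(comb(n - 1, r) * (q - 1) ** r for r in range(dmin - 1))
    return lhs < q ** (n - k)

print(gv_code_exists(7, 4, 3))   # True  -> e.g. the (7,4,3) Hamming code exists
print(gv_code_exists(7, 4, 4))   # False -> no existence guarantee for dmin = 4
```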
Distance Properties and IOWEF
The minimum distance alone is often not sufficient for a tight performance bound
Distance spectrum: weight distribution of the code
Due to linearity, wH(x) and not the distance between all pairs of code words is investigated:
A(D) = Σ_{d=0}^{n} Ad·D^d = Σ_{x} D^{wH(x)} = 1 + Σ_{d=dmin}^{n} Ad·D^d
Ad = number of code words x with wH(x) = d;  D = placeholder variable
Depends only on the code, but not on the encoder (u → x) → word error probability Pw
Σ_{d=0}^{n} Ad = q^k = number of information words = number of code words
The IOWEF is suitable for calculating the bit error rate Pb, as the connection between
information words u and code words x is considered → the IOWEF also depends on the encoder
Distance Properties and IOWEF
Example: the (7,4)2 Hamming code consists of q^k = 2^4 = 16 code words

code word    wH(u)  wH(x)   |   code word    wH(u)  wH(x)
0000 000     0      0       |   1000 011     1      3
0001 111     1      4       |   1001 100     2      3
0010 110     1      3       |   1010 101     2      4
0011 001     2      3       |   1011 010     3      4
0100 101     1      3       |   1100 110     2      4
0101 010     2      3       |   1101 001     3      4
0110 011     2      4       |   1110 000     3      3
0111 100     3      4       |   1111 111     4      7

Distance spectrum: A(D) = 1 + 7·D^3 + 7·D^4 + D^7
d:  0  1  2  3  4  5  6  7
Ad: 1  0  0  7  7  0  0  1

Input Output Weight Enumerating Function (IOWEF):
A(W, D) = 1 + 3·W·D^3 + 3·W^2·D^3 + W^3·D^3 + W·D^4 + 3·W^2·D^4 + 3·W^3·D^4 + W^4·D^7

IOWEF coefficients Aw,d:
       d=0  d=1  d=2  d=3  d=4  d=5  d=6  d=7
w=0     1    0    0    0    0    0    0    0
w=1     0    0    0    3    1    0    0    0
w=2     0    0    0    3    3    0    0    0
w=3     0    0    0    1    3    0    0    0
w=4     0    0    0    0    0    0    0    1
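The distance spectrum can be reproduced by enumerating all 16 code words with a systematic generator matrix of the code (the matrix below is one common choice, consistent with the code word table):

```python
from itertools import product

# Generator matrix of the (7,4) Hamming code in systematic form [I4 | P]
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]

spectrum = {}
for u in product([0, 1], repeat=4):
    x = [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]
    d = sum(x)                         # Hamming weight of the code word
    spectrum[d] = spectrum.get(d, 0) + 1
print(sorted(spectrum.items()))        # [(0, 1), (3, 7), (4, 7), (7, 1)]
```

The result matches A(D) = 1 + 7D^3 + 7D^4 + D^7.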
General Decoding Principles
Maximum-a-posteriori (MAP) criterion
The optimum decoding criterion determines the code word x that maximizes the
a-posteriori probability Pr{x|y}:
x̂ = argmax_x Pr{x|y} = argmax_x Pr{y|x}·Pr{x}/Pr{y} = argmax_x Pr{y|x}·Pr{x}
→ depends on the transition probabilities Pr{y|x} and on the a-priori probabilities Pr{x}
Maximum-likelihood (ML) criterion (equally likely code words):
choose the code word x with the minimum distance to the receive word y
→ minimizes the word error probability Pw
The effort of MAP/ML increases exponentially with k (q^k comparisons)
It is possible to correct more than ⌊(dmin−1)/2⌋ errors
General Decoding Principles
In the sequel we always consider the word error rate Pw and not the bit error rate Pb
Error Detection for Discrete Symmetric Channel
Received code word: y = x + e, with e ≠ 0 if a transmission error occurred
(Discrete symmetric channel: a symbol is received correctly with probability 1 − Pe;
each of the q−1 wrong symbols occurs with probability Pe/(q−1).)
Probability of a specific error vector e with weight wH(e):
Pr{e} = ( Pe/(q−1) )^{wH(e)} · (1 − Pe)^{n − wH(e)}
Probability of an undetected error:
Pue = Σ_{d=dmin}^{n} Ad·( Pe/(q−1) )^d·(1 − Pe)^{n−d}
For q = 2 (BSC): Pue = Σ_{d=dmin}^{n} Ad·Pe^d·(1 − Pe)^{n−d}
Error Correction for Binary Symmetric Channel
Word error probability (probability of non-detectable or erroneously corrected errors)
depends on the decoding principle
Simple derivation for Bounded Minimum Distance (BMD) decoding:
– serves as an upper bound for BDD and MLD
– exact for perfect codes
Probability of correct decoding for a t-error correcting (n, k, dmin) code:
Pcorrect = Σ_{d=0}^{t} (n choose d)·Pe^d·(1−Pe)^{n−d}
(all errors with wH(e) ≤ t = ⌊(dmin−1)/2⌋ can be corrected)
Word error probability:
Pw = 1 − Pcorrect = Σ_{d=t+1}^{n} (n choose d)·Pe^d·(1−Pe)^{n−d}
→ upper bound for BDD and MLD!
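The BMD word error probability can be evaluated directly; a sketch:

```python
from math import comb

def bmd_word_error(n, t, pe):
    """P_w = sum_{d=t+1}^{n} C(n,d) * pe^d * (1-pe)^(n-d) for BMD decoding."""
    return sum(comb(n, d) * pe ** d * (1 - pe) ** (n - d)
               for d in range(t + 1, n + 1))

# (7,4,3) Hamming code, t = 1, channel symbol error probability pe = 0.01
pw = bmd_word_error(7, 1, 0.01)
print(round(pw, 6))  # 0.002031
```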
Error Correction for Soft-Output Channels (1)
Assume that x(i) was transmitted and the received signal is y = x(i) + n
Probability of a decoding error:
Pe{x(i)} = Pr{decoding error | x(i)} = Pr{ y ∉ Ωi | x(i) }
with the decision region
Ωi = { y | Pr{y|x(i)} ≥ Pr{y|x(j)} for all x(j), j ≠ i }
i.e. the set of all y that are closer to x(i) than to any other x(j) ≠ x(i)
(minimum squared Euclidean distance)
→ all y ∈ Ωi lead to a correct decision for MLD
Complementary set:
Ω̄i = Ω \ Ωi = { y | Pr{y|x(j)} > Pr{y|x(i)} for some j ≠ i }
Error Correction for Soft-Output Channels (2)
Complementary set:
Ω̄i = Ω \ Ωi = ∪_{j≠i} Ωi,j with Ωi,j = { y | Pr{y|x(j)} ≥ Pr{y|x(i)} }
Bound for the decoding error:
Pe{x(i)} = Pr{ y ∈ ∪_{j≠i} Ωi,j | x(i) } ≤ Σ_{j=1, j≠i}^{q^k} Pr{ y ∈ Ωi,j | x(i) }   (Union Bound)
(The sets Ωi,j are in general not disjoint.)
Interpretation:
The receive signal can be contained in several Ωi,j
→ the sum over the single sets is larger than the union set
Equality holds if all sets Ωi,j are disjoint, i.e. each y occurs in one set Ωi,j only
Error Correction for Soft-Output Channels (3)
Antipodal transmission: xr = ±√(Es/Ts)
MLD: Pr{ y ∈ Ωi,j | x(i) } = Pr{ ‖y − x(j)‖² ≤ ‖y − x(i)‖² | x(i) }
Disturbance by noise: y = x(i) + n →
Pr{ y ∈ Ωi,j | x(i) } = Pr{ Σ_{r=0}^{n−1} (xr(i) − xr(j) + nr)² ≤ Σ_{r=0}^{n−1} nr² }
                      = Pr{ Σ_{r=0}^{n−1} nr·(xr(j) − xr(i)) ≥ (1/2)·Σ_{r=0}^{n−1} (xr(i) − xr(j))² }
Error Correction for Soft-Output Channels (4)
ξ = Σ_{r=0}^{n−1} nr·(xr(i) − xr(j))
Only differing positions of x(i) and x(j) are of importance (all other terms are zero)
→ only dH(x(i), x(j)) positions are considered
ξ is a Gaussian distributed random variable with
mean: μξ = 0
variance: σξ² = dH(x(i), x(j))·(2·√(Es/Ts))²·N0/(2Ts) = 2·dH(x(i), x(j))·Es·N0/Ts²
The threshold is constant:
(1/2)·Σ_{r=0}^{n−1} (xr(i) − xr(j))² = (1/2)·dH(x(i), x(j))·(2·√(Es/Ts))² = 2·dH(x(i), x(j))·Es/Ts
Error Correction for Soft-Output Channels (5)
Integration over the Gaussian distributed ξ delivers (using the symmetry of ξ):
Pr{ y ∈ Ωi,j | x(i) } = Pr{ ξ ≥ 2·dH(x(i), x(j))·Es/Ts }
= ∫_{2·dH(x(i),x(j))·Es/Ts}^{∞} 1/√(2π·σξ²) · e^{−ξ²/(2σξ²)} dξ
= 1/√π · ∫_{√(dH(x(i),x(j))·Es/N0)}^{∞} e^{−ξ²} dξ
= (1/2)·erfc( √( dH(x(i), x(j))·Es/N0 ) )
Error Correction for Soft-Output Channels (6)
Union Bound for the word error probability (independent of x(i)):
Pw ≤ (1/2)·Σ_{j=1, j≠i}^{q^k} erfc( √( dH(x(i), x(j))·Es/N0 ) ) = (1/2)·Σ_{d=dmin}^{n} Ad·erfc( √( d·Es/N0 ) )
Approximation for Pw
With the general bound
erfc(√(x + y)) ≤ e^{−y}·erfc(√x)
we yield
erfc(√(d·Es/N0)) = erfc( √( Es/N0 + (d−1)·Es/N0 ) ) ≤ e^{−(d−1)·Es/N0}·erfc(√(Es/N0))
→ Pw ⪅ (1/2)·erfc(√(Es/N0))·e^{Es/N0}·[A(D) − 1] evaluated at D = e^{−Es/N0}
Estimation of Error Probability for (7,4)2-Hamming Code
t = 1, dmin = 3 and A(D) = 1 + 7D^3 + 7D^4 + D^7, with Pe = (1/2)·erfc(√(Rc·Eb/N0))
Hard decision (BMD):
Pw = 1 − Σ_{d=0}^{t} (n choose d)·Pe^d·(1−Pe)^{n−d} = 1 − (1−Pe)^7 − 7·Pe·(1−Pe)^6
Soft decision (MLD, union bound):
Pw ≤ (1/2)·Σ_{d=dmin}^{n} Ad·erfc(√(d·Es/N0))
   = (7/2)·erfc(√(3·Es/N0)) + (7/2)·erfc(√(4·Es/N0)) + (1/2)·erfc(√(7·Es/N0))
(Figure: Pw versus Eb/N0 in dB for hard-decision BMD and soft-decision MLD;
with hard decision at the input of the decoder, information is lost.)
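The two estimates can be compared numerically; a sketch (Rc = 4/7 and Es = Rc·Eb are assumed):

```python
from math import erfc, sqrt

# Pw estimates for the (7,4) Hamming code: hard decision (BMD) vs. the
# soft-decision union bound with A(D) = 1 + 7D^3 + 7D^4 + D^7.
Ad = {3: 7, 4: 7, 7: 1}

def pw_hard(ebn0):
    pe = 0.5 * erfc(sqrt(4 / 7 * ebn0))
    return 1 - (1 - pe) ** 7 - 7 * pe * (1 - pe) ** 6

def pw_soft(ebn0):
    esn0 = 4 / 7 * ebn0                # Es/N0 = Rc * Eb/N0
    return 0.5 * sum(a * erfc(sqrt(d * esn0)) for d, a in Ad.items())

ebn0 = 10 ** (6 / 10)                  # Eb/N0 = 6 dB (linear)
print(pw_soft(ebn0) < pw_hard(ebn0))   # True: soft decision performs better
```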
Description of Linear Block Codes by
Matrices
Description of Linear Block Codes by Matrices (1)
Information word u = [u0 u1 … uk−1] and code word x = [x0 x1 … xn−1]
Generator matrix of dimension k×n: each row contains a valid code word
G = [gi,j] with gi,j ∈ GF(q)
– rows are linearly independent
– they span the code (vector space) → basis of the code space
Encoding: x = u·G mod q → linear combination of the rows of G with coefficients ui
Code: Γ = { x = u·G mod q | u ∈ GF(q)^k }   (in the sequel, all calculations are carried out mod q)
Systematic encoder (u explicitly part of x: x = [u p]):
Gaussian normal form: G = [ I_{k×k} | P_{k×(n−k)} ]
→ the parity symbols are given by linear combinations of the information symbols
Description of Linear Block Codes by Matrices (2)
Parity check matrix of dimension (n−k)×n:
H = [hi,j] with hi,j ∈ GF(q)
– rows are linearly independent
– they span a vector space orthogonal to the rows of G
Gaussian normal form (systematic encoder, i.e. x = [u p]):
H = [ −P^T_{k×(n−k)} | I_{(n−k)×(n−k)} ]
Code: Γ = { x ∈ GF(q)^n | x·H^T mod q = 0 } → Γ is the null space of H
Dual code: Γ′ = { b = v·H | v ∈ GF(q)^{n−k} } with code words b of the dual code
Recall: H is of dimension (n−k)×n
– a code word b contains n symbols, but the information word v consists of only n−k symbols
– the dual code space contains only q^{n−k} elements
If n−k is much smaller than k, it can be favorable to perform decoding for the original
code with respect to the dual code
Examples of Codes (1)
Repetition code: (n, 1, n) code
Γ = {00…0, 11…1} (binary case), Rc = 1/n and dmin = n
G = [1 1 … 1] (1×n),  H = [1 | I_{n−1}] ((n−1)×n)
Single parity check code: (n, n−1, 2) code, the dual of the repetition code
Rc = (n−1)/n and dmin = 2
G = [I_{n−1} | 1] ((n−1)×n),  H = [1 1 … 1] (1×n)
Examples of Codes (2)
Hamming code:
The length of an (n, n−r, 3)q Hamming code with r parity symbols is given by n = (q^r − 1)/(q − 1)
Hamming codes are perfect codes, i.e. the number of distinct syndromes equals
the number of correctable error patterns.
All Hamming codes have a minimum distance of dmin = 3, whereas the code rate
tends to Rc = 1 for n → ∞:
Rc = k/n = [ (q^r − 1)/(q − 1) − r ] / [ (q^r − 1)/(q − 1) ]
         = 1 − r·(q − 1)/(q^r − 1) = 1 − r·(q − 1)·q^{−r}/(1 − q^{−r}) → 1
For q = 2, the columns of the parity check matrix represent all 2^r − 1 possible
nonzero binary vectors of length r
q = 2: (3,1), (7,4), (15,11), (31,26), (63,57), (127,120)
Examples of Codes (3)
Hamming code:
Example: (7, 4, 3)2 Hamming code

G = [ 1 0 0 0 0 1 1        H = [ 0 1 1 1 1 0 0
      0 1 0 0 1 0 1              1 0 1 1 0 1 0
      0 0 1 0 1 1 0              1 1 0 1 0 0 1 ]
      0 0 0 1 1 1 1 ]

Simplex code:
The simplex code is obtained by exchanging G and H (dual codes)
All pairs of different code words have the same pairwise Hamming distance dH = 4 → simplex
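The orthogonality of the (7,4) Hamming code matrices can be verified numerically (the systematic matrices below are one common choice):

```python
# The (7,4) Hamming code matrices in systematic form: G = [I4 | P], H = [P^T | I3]
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]
H = [[0,1,1,1,1,0,0],
     [1,0,1,1,0,1,0],
     [1,1,0,1,0,0,1]]

# every row of G must be orthogonal to every row of H over GF(2): G * H^T = 0
ok = all(sum(g[j] * h[j] for j in range(7)) % 2 == 0 for g in G for h in H)
print(ok)  # True
```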
Standard Array and Syndrome Decoding (1)
Assumption: y = x + e mod q is the received vector
Decoding by calculating the syndrome s = [s0 s1 … sn−k−1]:
s = y·H^T = (x + e)·H^T = x·H^T + e·H^T = 0 + e·H^T = e·H^T
The syndrome depends only on the error vector e, but not on the transmitted code word x!
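For the (7,4) Hamming code the syndrome identifies a single error uniquely, since all columns of H are distinct and nonzero; a decoding sketch:

```python
# Syndrome decoding sketch for the binary (7,4) Hamming code: s = y * H^T
# identifies a single error via a precomputed syndrome table.
H = [[0,1,1,1,1,0,0],
     [1,0,1,1,0,1,0],
     [1,1,0,1,0,0,1]]

def syndrome(y):
    return tuple(sum(y[j] * h[j] for j in range(7)) % 2 for h in H)

table = {}                             # syndrome -> error position
for pos in range(7):
    e = [0] * 7
    e[pos] = 1
    table[syndrome(e)] = pos

x = [1, 0, 0, 0, 0, 1, 1]              # a valid code word
y = x[:]
y[2] ^= 1                              # single transmission error
s = syndrome(y)
y[table[s]] ^= 1                       # correct the indicated position
print(s, y == x)                       # (1, 1, 0) True
```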
Construction of an (n′, k′, d′min) code from a given (n, k, dmin) code → “new codes
from old codes”
Expansion: appending additional parity check symbols
n′ > n, k′ = k → R′c < Rc, d′min ≥ dmin
Every code word is a multiple of the generator polynomial: R_{g(D)}{ a(D)·g(D) } = 0
Example: n = 7, m = 3:
x = [0 0 1 1 0 1 0] ↔ x(D) = D^2 + D^3 + D^5
→ cyclic shift by 3: R_{D^7−1}{ D^3·x(D) } = D + D^5 + D^6 ↔ x′ = [0 1 0 0 0 1 1]
Generator polynomial: g(D) = g0 + g1·D + … + gn−k·D^{n−k}
Example of a systematic generator matrix for g(D) = 1 + D^2 + D^3 (n = 7, k = 4):
the rows g(D), D·g(D), D^2·g(D), D^3·g(D) span the code; adding suitable rows
(Gaussian elimination) yields the systematic form
G = [ 1 0 0 0 1 0 1      = g(D) + D^2·g(D) + D^3·g(D)
      0 1 0 0 1 1 1      = D·g(D) + D^3·g(D)
      0 0 1 0 1 1 0      = D^2·g(D)
      0 0 0 1 0 1 1 ]    = D^3·g(D)
Similar to the matrix description, the code can also be described using a parity
check polynomial h(D), a monic polynomial (hk = 1):
g(D)·h(D) = D^n − 1,  degree h(D) = k,  h(D) = h0 + h1·D + … + D^k
Code: Γ = { x(D) | R_{D^n−1}{ x(D)·h(D) } = 0 }
Relation to the parity check matrix:
H = [ hk … h1 h0 0 … 0         ↔ h̃(D)
      0 hk … h1 h0 0 …         ↔ D·h̃(D)
      ⋮                         ⋮
      0 … 0 hk … h1 h0 ]       ↔ D^{n−k−1}·h̃(D)
with the reciprocal polynomial h̃(D) = D^k·h(D^{−1}) = h0·D^k + h1·D^{k−1} + … + hk−1·D + 1
p(D) = R_{g(D)}{ u(D)·D^{n−k} }
(the systematic code word is x(D) = u(D)·D^{n−k} − p(D), so that g(D) divides x(D))
Writing the remainder with the Horner scheme:
p(D) = R_{g(D)}{ u0·D^{n−k} + D·[ u1·D^{n−k} + D·[ … + D·[ uk−1·D^{n−k} ] … ] ] }   (2)
Applying the modulo operation to the product:
R_{g(D)}{ D·a(D) } = R_{g(D)}{ R_{g(D)}{D} · R_{g(D)}{a(D)} } = R_{g(D)}{ D·R_{g(D)}{a(D)} }   (3)
yields
R_{g(D)}{ ui·D^{n−k} + D·a(D) } = R_{g(D)}{ ui·D^{n−k} + D·R_{g(D)}{a(D)} }   (4)
Properties of the inner remainders r(i)(D) → recursive structure:
r(0)(D) = 0,   r(i)(D) = R_{g(D)}{ uk−i·D^{n−k} + D·r(i−1)(D) },   r(k)(D) = p(D)
Systematic Encoding with Generator Polynomial (4)
The polynomials r(i)(D) = Σ_{j=0}^{n−k−1} rj(i)·D^j can be recursively calculated:
r(i)(D) = R_{g(D)}{ uk−i·D^{n−k} + D·r(i−1)(D) }
        = R_{g(D)}{ uk−i·D^{n−k} + Σ_{j=0}^{n−k−1} r(i−1)_j·D^{j+1} }
        = R_{g(D)}{ (uk−i + r(i−1)_{n−k−1})·D^{n−k} } + Σ_{j=1}^{n−k−1} r(i−1)_{j−1}·D^j
The remainder of D^{n−k} follows from the monic generator polynomial
g(D) = Σ_{i=0}^{n−k} gi·D^i with gn−k = 1:
g(D) = D^{n−k} + Σ_{i=0}^{n−k−1} gi·D^i  →  R_{g(D)}{ D^{n−k} } = −Σ_{i=0}^{n−k−1} gi·D^i
Systematic Encoding with Generator Polynomial (5)
It follows with R_{g(D)}{D^{n−k}} = −Σ_{j=0}^{n−k−1} gj·D^j:
r(i)(D) = R_{g(D)}{ (uk−i + r(i−1)_{n−k−1})·D^{n−k} } + Σ_{j=1}^{n−k−1} r(i−1)_{j−1}·D^j
        = −(uk−i + r(i−1)_{n−k−1})·Σ_{j=0}^{n−k−1} gj·D^j + Σ_{j=1}^{n−k−1} r(i−1)_{j−1}·D^j
        = Σ_{j=0}^{n−k−1} [ r(i−1)_{j−1} − gj·(uk−i + r(i−1)_{n−k−1}) ]·D^j   with r(i−1)_{−1} = 0
Systematic Encoding with Generator Polynomial (6)
r(i)(D) = Σ_{j=0}^{n−k−1} [ r(i−1)_{j−1} − gj·(uk−i + r(i−1)_{n−k−1}) ]·D^j ;  r(i−1)_{−1} = 0
i = 1:
r(1)(D) = Σ_{j=0}^{n−k−1} [ r(0)_{j−1} − gj·(uk−1 + r(0)_{n−k−1}) ]·D^j
i = 2:
r(2)(D) = Σ_{j=0}^{n−k−1} [ r(1)_{j−1} − gj·(uk−2 + r(1)_{n−k−1}) ]·D^j
        = [ −g0·(uk−2 + r(1)_{n−k−1}) ] + … + [ r(1)_{n−k−2} − gn−k−1·(uk−2 + r(1)_{n−k−1}) ]·D^{n−k−1}
Shift register interpretation:
→ calculate the feedback from the next symbol uk−2 and the (n−k)-th cell r(1)_{n−k−1}
→ multiply the result in each branch with the corresponding negative coefficient −gj
→ add the products to the old contents of the previous cells r(1)_{j−1}
i > 2: repeat until all k information symbols are processed
i = k: the shift register contains the parity symbols
→ append the parity symbols to the information word
Systematic Encoding with Generator Polynomial (8)
General structure of the shift register
Calculation for the j-th register cell: rj(i) = r(i−1)_{j−1} − gj·(uk−i + r(i−1)_{n−k−1})
(Figure: feedback shift register with n−k cells and feedback taps −g0, …, −gn−k−1;
the information symbols uk−1, …, u1, u0 are shifted in one by one, afterwards the
cells contain the parity symbols.)
Example: u = [1 0 1 0] → x = [0 0 1 | 1 0 1 0]
Systematic Encoding with Parity Check Polynomial (1)
Equivalent representation with the parity check polynomial (without derivation):
xm = −Σ_{i=0}^{k−1} hi·x_{m−k+i} = f(x_{m−1}, …, x_{m−k})   with x = [x0 … x_{n−k−1} | x_{n−k} … x_{n−1}]
(n−k parity symbols | k information symbols; follows from R_{D^n−1}{ x(D)·h(D) } = 0)
Systematic Encoding with Parity Check Polynomial (2)
(Figure: shift register realization of the parity check recursion with coefficients h0, …, hk−1.)
Example: u = [1 0 1 0] → x = [0 0 1 | 1 0 1 0]
Possible Choices for Encoder Implementation
Matrix operation: x = u·G
→ n·k = n²·Rc operations (additions and multiplications)
Syndrome calculation with the generator polynomial:
s(D) = R_{g(D)}{ y(D) } = R_{g(D)}{ x(D) + e(D) } = R_{g(D)}{ x(D) } + R_{g(D)}{ e(D) } = R_{g(D)}{ e(D) }
(Figure: division shift register with feedback taps −g0, −g1, …, −gn−k−1 processing
the received symbols y0, y1, ….)
Algebraic decoding
Generally very demanding mathematical approaches
Application of BMD for Reed-Solomon and BCH codes leads to efficient approach
Non-Algebraic decoding
Calculation of syndrome by exploiting code structure
Illustrative calculation of the error pattern from the syndrome
Computational effort increases with number of correctable errors
Examples: Meggitt decoder, majority logic decoder, threshold decoder, ...
Principle of Non-Algebraic Decoding
Principle:
An adjusted cyclic shift of y(D) leads to an error pattern containing the largest exponent D^{n−1}
Lend: number of error patterns that contain the largest exponent D^{n−1}
Each correctable error e(D) can be described by a cyclic shift of one of these Lend error
patterns
→ save only the Lend syndromes s(D) in a list together with the corresponding error patterns e(D)
Basic approach for iterative decoding:
Calculate the syndrome s(D) for the receive word y(D)
for m = 0:n−1
• if s(D) is in the list: correct y(D) by the corresponding error pattern e(D)
• else: shift y(D) by one position and calculate the new syndrome for R_{D^n−1}{ D·y(D) }
end
If no syndrome from the list was found at all, the error pattern is not correctable
Several variants are possible
Summary: Cyclic Block Codes
Polynomial     Symbol                                      Degree (max.)
Information    u(D) = u0 + u1·D + … + uk−1·D^{k−1}         k−1
Code           x(D) = x0 + x1·D + … + xn−1·D^{n−1}         n−1     (R_{g(D)}{x(D)} = 0)
Generator      g(D) = g0 + g1·D + … + D^{n−k}              n−k
Non-systematic encoding: x(D) = u(D)·g(D)
Systematic encoding: x(D) = u(D)·D^{n−k} − p(D) with p(D) = R_{g(D)}{ u(D)·D^{n−k} }
Syndrome calculation: s(D) = R_{g(D)}{ y(D) }
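Systematic cyclic encoding by polynomial division can be sketched over GF(2); g(D) = 1 + D + D^3 is an assumed example generator, which reproduces the code word x = [0 0 1 | 1 0 1 0] from the shift-register slides:

```python
# Systematic cyclic encoding over GF(2): x(D) = u(D)*D^(n-k) - p(D) with
# p(D) = remainder of u(D)*D^(n-k) mod g(D); over GF(2), "-" equals "+".
# Polynomials are coefficient lists, lowest degree first.
def poly_mod(a, g):
    a = a[:]
    for i in range(len(a) - 1, len(g) - 2, -1):   # clear degrees >= deg(g)
        if a[i]:
            for j, gj in enumerate(g):
                a[i - len(g) + 1 + j] ^= gj
    return a[:len(g) - 1]

def encode(u, g):
    nk = len(g) - 1
    p = poly_mod([0] * nk + u, g)      # parity of u(D)*D^(n-k)
    return p + u                       # x = [parity | information]

g = [1, 1, 0, 1]                       # g(D) = 1 + D + D^3
print(encode([1, 0, 1, 0], g))         # [0, 0, 1, 1, 0, 1, 0]
```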
Reed-Solomon-Codes
and
BCH-Codes
Reed-Solomon and BCH-Codes
Until now no analytical construction method for really good codes has been
presented
Wish: Construction of codes with defined properties, e.g. minimum distance
RS- and BCH-Codes are very powerful codes with some advantages
Here we restrict ourselves to non-binary RS-Codes and binary BCH-Codes
They can be constructed in an analytical way
The minimum distance dmin is known and can be used as a design parameter
for RS-Codes the complete weight distribution is known
RS-Codes are MDS codes → they satisfy the Singleton bound with equality
Both codes are very powerful, if the block length n is not too large
The codes can be adapted to the error structure of the channel
RS-Codes for burst errors and BCH-Codes for single errors
Decoding with respect to the BMD method is easily possible
Simple soft-decision information (erasures, BSEC) can be used in the decoder
Compact description in spectral domain
Spectral Transformation on Galois Fields (1)
Recall: the Discrete Fourier Transform (DFT) of the vector a = [a0 … an−1]
with ai ∈ ℂ is the vector A = [A0 … An−1] with elements
Ai = Σ_{κ=0}^{n−1} aκ·e^{−j·2π·iκ/n},  i = 0, …, n−1, with the n-th root of unity e^{−j·2π/n}
Analogously, on Galois fields the n-th root of unity is replaced by a primitive element z:
A(D) is called the DFT of a(D) if the i-th element of A(D) is given by Ai = a(z^{−i})
Spectral Transformation on Galois Fields (2)
Discrete spectral transformation on Galois fields:
A(D) = DFT{ a(D) }:  Ai = a(z^{−i}) = Σ_{κ=0}^{n−1} aκ·z^{−iκ}
a(D) = IDFT{ A(D) }: ai = −A(z^{i}) = −Σ_{κ=0}^{n−1} Aκ·z^{iκ}
(the factor 1/n equals −1, since n = q − 1 ≡ −1 in GF(q))
Properties:
C(D) = R_{D^n−1}{ A(D)·B(D) }  ↔  ci = ai·bi   (product of polynomials ↔ element-wise product)
C(D) = R_{D^n−1}{ D^b·A(D) }   ↔  ci = z^{ib}·ai   (cyclic shift ↔ modulation)
Example for Spectral Transformation on GF
GF(5) = {0, 1, 2, 3, 4} with primitive element z = 3
z^1 = 3, z^2 = 4, z^3 = 2, z^4 = 1 = z^0 and z^{−1} = 2
A(D) = DFT{ a(D) } = 2 + 2·D^2 + 4·D^3
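Under the convention Ai = a(z^(−i)), the spectrum A(D) = 2 + 2D^2 + 4D^3 shown above is obtained e.g. for a = [2 2 0 3]; the original input a(D) is not preserved on the slide, so this vector is an assumption chosen to match the result:

```python
# DFT / IDFT on GF(5) with primitive element z = 3, n = q - 1 = 4, using the
# convention A_i = a(z^(-i)).
q, z, n = 5, 3, 4
zinv = 2                               # z^(-1), since 3 * 2 = 6 = 1 mod 5

def dft(a):
    return [sum(ak * pow(zinv, i * k, q) for k, ak in enumerate(a)) % q
            for i in range(n)]

def idft(A):
    # the prefactor 1/n = 1/4 = -1 = 4 in GF(5)
    return [4 * sum(Ai * pow(z, i * k, q) for i, Ai in enumerate(A)) % q
            for k in range(n)]

a = [2, 2, 0, 3]
A = dft(a)
print(A)             # [2, 0, 2, 4]  ->  A(D) = 2 + 2*D^2 + 4*D^3
print(idft(A) == a)  # True
```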
Definition of Reed-Solomon (RS)-Codes (1)
Goal: Construct a code of block length n and code rate Rc = k/n with elements
in GF(p^m) which can correct t errors (design distance d = dmin)
d = 2t + 1 is required (for odd d the relation n − k = d − 1 = 2t holds → MDS code)
Philosophy to construct such a code:
Define a polynomial (in the frequency domain) of degree k−1 with coefficients Xi ∈ GF(p^m):
X(D) = X0 + X1·D + … + Xk−1·D^{k−1} = Xk−1·(D − β1)·(D − β2)·…·(D − βk−1)
Fundamental theorem of algebra: X(D) has at most k−1 different roots βi ∈ GF(p^m)
Take n different nonzero elements α0, …, αn−1 ∈ GF(p^m), insert them into X(D) and take
the results as the elements of the code word x of length n:
x = [x0 x1 … xn−1] with xi = X(αi)
The code word x has a minimum weight of wH(x) ≥ d = n − k + 1:
• X(D) has at most k−1 roots → at most k−1 coefficients xi are zero
• As x consists of n coefficients, the remaining n − (k−1) elements have to be nonzero
Definition of Reed-Solomon (RS)-Codes (2)
In order to construct a code, all q^k = p^{mk} different polynomials X(D) with
maximum degree k−1 are taken, and the n different nonzero elements αi are inserted to
construct the set of q^k code words x(D)
Due to the linearity, such a code has minimum distance dmin = n − k + 1
The code word length n corresponds to the number of nonzero elements in the finite field.
The choice of the extension field GF(p^m) determines the block length n = p^m − 1 !
Code rate: Rc = k/n = (p^m − dmin)/(p^m − 1)
Generator Polynomial for Reed-Solomon-Codes
Cyclic code: x(D) = u(D)·g(D) with degree g(D) = n − k
As X(D) is of degree k−1, the remaining n−k coefficients of the length-n vector X are
zero:
X(D) = X0 + X1·D + … + Xk−1·D^{k−1} ↔ X = [X0 X1 … Xk−1 0 … 0]
Due to the relation Xi = x(z^{−i}), n−k consecutive powers of z are zeros of x(D):
x(z^{−k}) = x(z^{−(k+1)}) = … = x(z^{−(n−1)}) = 0 → g(z^{−k}) = … = g(z^{−(n−1)}) = 0
As this relation is true for all x(D) (independent of u(D)), it has to be fulfilled by g(D)
Consequently, g(D) can be factorized into the following n−k terms:
g(D) = Π_{i=k}^{n−1} (D − z^{−i}) = Π_{i=k}^{n−1} (D − z^{n−i}) = Π_{i=1}^{n−k} (D − z^{i})
Generator and Parity Check Polynomial for RS-Codes
Generator polynomial: g(D) = Π_{i=k}^{n−1} (D − z^{−i}) = Π_{i=k}^{n−1} (D − z^{n−i}) = Π_{i=1}^{n−k} (D − z^{i})
General connection*: g(D)·h(D) = D^n − 1 = Π_{i=0}^{n−1} (D − z^{i})
Parity check polynomial: h(D) = Π_{i=0}^{k−1} (D − z^{−i}) = Π_{i=0}^{k−1} (D − z^{n−i}) = Π_{i=n−k+1}^{n} (D − z^{i})
Generalization
The roots of g(D) and h(D) can lie at arbitrary consecutive positions, e.g.
g(D) = Π_{i=0}^{n−k−1} (D − z^{i}),  h(D) = Π_{i=n−k}^{n−1} (D − z^{i})
Hint*: With z^n = 1 the relation z^n − 1 = 0 follows. As all powers z^ν, 0 ≤ ν ≤ n−1,
fulfill (z^ν)^n = 1 and are different from each other, they are the n distinct roots of
D^n − 1, so the factorization is given by
D^n − 1 = Π_{i=0}^{n−1} (D − z^{i})
(Final) Definition of Reed-Solomon (RS)-Codes
Definition of Reed-Solomon-Codes
For an arbitrary prime number p, an integer m and an arbitrary design
distance d = dmin, a RS-Code is defined as an (n, k, dmin)q = (p^m − 1, p^m − d, d)q code.
The code consists of all time-domain words x(D) ↔ [x0 … xn−1] with
coefficients in GF(p^m), such that the corresponding frequency-domain words
X(D) ↔ [X0 … Xn−1] are zero in a cyclic sequence of n−k = d−1 consecutive
positions:
x(D) ↔ X(D) = R_{D^n−1}{ D^{β+d−1}·B(D) } with degree B(D) ≤ k−1
Interpretation: Xβ = Xβ+1 = … = Xβ+d−2 = 0 ↔ x(z^{−β}) = x(z^{−β−1}) = … = x(z^{−β−(d−2)}) = 0
The parameter β denotes the starting position of the n−k consecutive zeros
The complete distance spectrum is given in analytical form:
A(D) = Σ_{d=0}^{n} Ad·D^d with Ad = (n choose d)·(q − 1)·Σ_{j=0}^{d−dmin} (−1)^j·(d−1 choose j)·q^{d−dmin−j}
RS-Codes are MDS-Codes and dmin = d holds: dmin = d = n − k + 1
Examples for Reed-Solomon (RS)-Codes (1)
Counting in the Galois field GF(8): p = 2, m = 3
Primitive polynomial: p(D) = D^3 + D + 1
Condition: z^3 = z + 1
z^1 = z          z^5 = z^3 + z^2 = z + 1 + z^2 = z^2 + z + 1
z^2 = z^2        z^6 = z^4 + z^3 = z^2 + z + z + 1 = z^2 + 1
z^3 = z + 1      z^7 = z^5 + z^4 = z^2 + z + 1 + z^2 + z = 1
z^4 = z^2 + z
GF(8) = {0, z^0, z^1, z^2, z^3, z^4, z^5, z^6}
→ each element can be described by a binary triple
(7, 3, 5)8 RS code: D^7 − 1 = g(D)·h(D) = Π_{i=1}^{7} (D − z^{i})
Generator polynomial:
g(D) = (D − z^1)·(D − z^2)·(D − z^3)·(D − z^4) = D^4 + z^3·D^3 + D^2 + z·D + z^3
Parity check polynomial:
h(D) = (D − z^5)·(D − z^6)·(D − z^7) = D^3 + z^3·D^2 + z^2·D + z^4
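The expansion of g(D) can be checked by multiplying out the linear factors with GF(8) log/antilog tables; a sketch:

```python
# GF(8) with z^3 = z + 1; 3-bit integers encode b2*z^2 + b1*z + b0.
exp_tab = [1, 2, 4, 3, 6, 7, 5]        # exp_tab[i] = z^i
log_tab = {v: i for i, v in enumerate(exp_tab)}

def gf8_mul(a, b):
    return 0 if 0 in (a, b) else exp_tab[(log_tab[a] + log_tab[b]) % 7]

# multiply out g(D) = (D - z^1)(D - z^2)(D - z^3)(D - z^4); in characteristic 2
# subtraction equals addition (XOR); coefficients stored lowest degree first
g = [1]
for i in range(1, 5):
    root = exp_tab[i]
    g = [0] + g                          # D * g_old(D)
    for j in range(len(g) - 1):
        g[j] ^= gf8_mul(g[j + 1], root)  # + root * g_old(D)
print(g)  # [3, 2, 1, 3, 1] = z^3 + z*D + D^2 + z^3*D^3 + D^4
```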
BCH-Codes: Preliminaries (1)
Codes by Bose, Ray-Chaudhuri and Hocquenghem (BCH) with xi ∈ GF(p)
→ we restrict ourselves to binary BCH codes, i.e. p = 2, xi ∈ GF(2)
Splitting fields (cyclotomic cosets): The number set
{0, 1, …, n−1} can be split into disjoint sets Ki, so-called cyclotomic cosets.
For n = q − 1 with q = p^m the coset Ki is given by
Ki = { i·p^j mod n | j = 0, 1, … }   (the index i denotes the smallest number in Ki)
K0 = {0},  ∪i Ki = {0, 1, …, n−1}   (the union of all cosets contains all numbers)
Example for GF(8), coset K3 = {3, 6, 5}: m3(D) = (D − z^3)·(D − z^5)·(D − z^6) = D^3 + D^2 + 1
Properties:
Due to the choice of complete cosets, gi ∈ GF(p) is guaranteed.
The target minimum distance d is reached if the chosen set of exponents contains d − 1 successive numbers.
Concerning the real minimum distance of the code, d ≤ dmin ≤ n − k + 1 holds.
The number of information bits of the code is k = n minus the number of chosen exponents.
Definition of BCH-Codes (2)
As d−1 consecutive numbers among the chosen exponents are demanded, the code contains all code
words where the roots of x(D) correspond to d−1 consecutive powers of the
primitive element z:
Γ = { x(D) ∈ GF(p)[D] | x(z^β) = x(z^{β+1}) = … = x(z^{β+d−2}) = 0 }
(15,11,5)16-RS-Code
dmin = d = 5 → 2 symbol errors can be corrected (in GF(16))
Binary representation: each code symbol contains 4 bits → burst errors up to a length
of 2·4 = 8 bits are correctable
e = [0000 1111 1111 0000] → correctable (burst of 8 bits, 2 code symbols affected)
e = [0001 1111 1000 0000] → not correctable (3 code symbols affected)
e = [0001 0010 1000 0000] → not correctable (3 single bit errors in 3 code symbols)
Comparison of RS- and BCH-Codes (2)
(63, 45, 7)2 BCH code
Word length n and code rate Rc are comparable to the (15,11,5)16 RS code
Capable of always correcting t = 3 single errors (due to dmin = 7)
→ this is not guaranteed by the RS code
Conclusion
RS codes are better suited for correcting burst errors
Binary BCH codes should be used for channels yielding single errors
The choice of the coding scheme depends on the error properties of the
communication channel
Table of primitive BCH-Codes
Entries are (k, t) for each block length n:
n = 7:    (4, 1)
n = 15:   (11, 1), (7, 2), (5, 3)
n = 31:   (26, 1), (21, 2), (16, 3), (11, 5), (6, 7)
n = 63:   (57, 1), (51, 2), (45, 3), (39, 4), (36, 5), (30, 6), (24, 7), (18, 10), (16, 11), (10, 13), (7, 15)
n = 127:  (120, 1), (113, 2), (106, 3), (99, 4), (92, 5), (85, 6), (78, 7), (71, 9), (64, 10), (57, 11), (50, 13), (43, 14), (36, 15), (29, 21), (22, 23), (15, 27), (8, 31)
n = 255:  (247, 1), (239, 2), (231, 3), (223, 4), (215, 5), (207, 6), (199, 7), (191, 8), (187, 9), (179, 10), (171, 11), (163, 12), (155, 13), (147, 14), (139, 15), (131, 18), (123, 19), (115, 21), (107, 22), (99, 23), (91, 25), (87, 26), (79, 27), (71, 29), (63, 30), (55, 31), (47, 42), (45, 43), (37, 45), (29, 47), (21, 55), (13, 59), (9, 63)
n = 511:  (502, 1), (493, 2), (484, 3), (475, 4), (466, 5), (457, 6), (448, 7), (439, 8), (430, 9), (421, 10), (412, 11), (403, 12), (394, 13), (385, 14), …, (259, 30), …, (130, 55), …
n = 1023: (1013, 1), (1003, 2), (993, 3), (983, 4), (973, 5), (963, 6), (953, 7), (943, 8), (933, 9), (923, 10), (913, 11), (903, 12), …, (768, 26), …, (513, 57), …, (258, 106), …
Bit error rates for BCH codes
Word error probability for Bounded Minimum Distance (BMD) decoding for a
t-error correcting (n, k, dmin)-code was derived in the beginning of this chapter:
Pw = Σ_{d=t+1}^{n} (n choose d)·Pe^d·(1 − Pe)^{n−d}
with Pe = (1/2)·erfc(√(Rc·Eb/N0))
(Figure: bit error rate Pb versus Eb/N0 in dB (and not versus Es/N0) for uncoded
transmission and BCH codes with Rc = 0.97 (t = 1), 0.94 (t = 2), 0.87 (t = 4),
0.75 (t = 8), 0.51 (t = 18), 0.36 (t = 25), 0.18 (t = 42), 0.08 (t = 55).)
At first the BER decreases with decreasing Rc due to the increasing dmin and t
From t ≥ 25 on the performance worsens: Rc drops faster than dmin and t rise,
and Rc affects Eb/N0
→ BCH codes are asymptotically bad
Bit error rates for BCH codes: Code rate Rc = 1/2
(Figure: bit error rate Pb versus Eb/N0 in dB for uncoded transmission and BCH codes
with Rc ≈ 1/2: (7,4) t = 1, (15,7) t = 2, (31,16) t = 3, (63,30) t = 6, (127,64) t = 10,
(255,131) t = 19, (511,259) t = 30, (1023,513) t = 57.)
Comparison of codes with different block length n but constant rate Rc ≈ 1/2
→ for constant Rc the performance increases significantly with the block length n
Bit error rates for BCH codes: Code rate Rc = 1/4
(Figure: bit error rate Pb versus Eb/N0 in dB for uncoded transmission and BCH codes
with Rc ≈ 1/4: (31,6) t = 7, (63,16) t = 11, (127,29) t = 21, (255,63) t = 30,
(511,130) t = 55, (1023,258) t = 106.)
Comparison of codes with different block length n but constant rate Rc ≈ 1/4
→ for constant Rc the performance increases significantly with the block length n
Bit error rates for BCH codes: Code rate Rc = 3/4
(Figure: bit error rate Pb versus Eb/N0 in dB for uncoded transmission and BCH codes
with Rc ≈ 3/4: (31,21) t = 2, (63,45) t = 3, (127,99) t = 4, (255,191) t = 8,
(511,385) t = 14, (1023,768) t = 26.)
Comparison of codes with different block length n but constant rate Rc ≈ 3/4
→ for constant Rc the performance increases significantly with the block length n
Decoding of BCH and RS-Codes
Maximum-likelihood decoding of BCH- and RS-Codes is too complex, but
Bounded Minimum Distance (BMD) decoding leads to efficient algorithms
(hard input symbols are supposed, or erasure information → BEC or BSEC)
Assumptions: y = x + e ↔ Y = X + E
– Symbols of x and e are q-ary for RS-Codes and p-ary for BCH-Codes
– Symbols of X and E are always q-ary
– Design distance d = 2t + 1
– No error vector with more than t errors occurs (otherwise the decoding fails)
– The positions 0 up to d−2 contain the parity frequencies:
X = [0 … 0 X2t X2t+1 … Xn−1]
→ The first 2t positions of the receive word Y explicitly contain symbols of the error word E:
Y = [E0 … E2t−1  X2t+E2t … Xn−1+En−1]
Principle of Decoding
Syndrome: the first 2t symbols of Y (note: this is not the DFT of s(D)):
S(D) = Σ_{i=0}^{2t−1} Si·D^i with Si = Ei = Yi = y(z^{−i})
Error locator polynomial C(D): defined such that its cyclic convolution with the
error spectrum vanishes:
Σ_{κ} Cκ·E_{(i−κ) mod n} = 0 for all i
Calculation of the error location polynomial C(D)
Key equation (Newton's identities) with C0 = 1:
Si + Σ_{κ=1}^{t} Cκ·Si−κ = 0 for i = t, …, 2t−1
Solution:
Assumption: at most t errors can occur
Arrange the t×t syndrome matrix St,t and check for singularity
• If St,t is singular, less than t errors occurred
  – Reduce the dimension of the matrix by neglecting the last row and the last column
  – Check for singularity → if it is still singular, repeat neglecting the last row and
    last column, …
• If Sτ,τ is not singular: the dimension τ of Sτ,τ corresponds to the number of errors
Recursive extension: with the key equation the remaining error frequencies are
obtained recursively: Ei = f(Ei−1, …, Ei−t)
Chien search (error location): evaluate C(D) for all powers of z
→ set of error locations I = { i | C(z^i) = 0 }
Error values (part of the IDFT): ei = −E(z^i) for i ∈ I → error polynomial e(D)
Correction: x̂(D) = y(D) − e(D)