
Chapter 2.

Linear Block Codes


 Algebra for finite fields
 Definition of set, group, ring, field, Galois field
 Extension field
 Distance Properties of Block Codes
 Hamming Distance, Performance Bounds
 Distance Spectrum and IOWEF
 Decoding Principles and error rate performance
 Matrix description of block codes
 Description with generator and parity check matrices
 Cyclic block codes
 Description with generator and parity check polynomials
 Implementation using shift register
 Reed-Solomon and BCH codes
Principle of Channel Coding
 If all 2^n binary words x = [x0 x1 … x_{n-1}] of length n are used for data transmission, no
error detection or correction is possible
 To enable error detection/correction, the transmit alphabet (the set of used transmit
vectors x, i.e. the code words) has to be restricted

 Defining the subset Γ of all code words x ∈ Γ is not a trivial problem

 Goal of code definition is to maximize the distance properties of the code


 Maximize the minimum distance of all code words
 Minimize the number of code words with small distance
 In general, no optimal codes are known

 Efficient encoding and decoding structures for codes are desired


 usually algebraic structures are used
 with this restriction it is unlikely to find the best possible codes
Principle of Channel Coding
[Figure: block diagram — information word u enters the channel encoder, which outputs code word x]
 Information word: u = [u0 u1 … u_{k-1}] ∈ GF(q)^k
 Code word: x = [x0 x1 … x_{n-1}] ∈ GF(q)^n
 The symbols u_i, x_j ∈ GF(q) can take q different values
 Code: the set Γ of all code words x defines the block code → (n,k)_q code
 The code defines a vector space Γ of cardinality |Γ| = q^k < q^n
 k-dimensional subset Γ with q^k elements out of the n-dimensional space GF(q)^n
 n−k symbols are added for error detection/correction → no new information → redundancy
 Encoder: device that maps the information word u onto the code word x by adding
redundancy → code rate Rc = k / n
 The code Γ (same set of code words x) can be generated by different encoders
 Distance properties and word error probability depend only on the code (not true for the BER)
 Basic encoder properties
 Systematic encoder: the code word x explicitly contains the information word u, e.g. x = [u p]
 Non-systematic encoder: the code word x does not explicitly contain the information word u
 Each block code can be generated by a systematic encoder
Principle of Channel Coding
 Linearity: a linear combination of two code words always yields a code word:
x1 + x2 = x3 ∈ Γ for all x1, x2 ∈ Γ
 Hint: addition and multiplication are carried out in GF(q)^n
 Each linear block code contains the all-zero word: x1 − x1 = 0 ∈ Γ for x1 ∈ Γ
 Hamming weight wH(x1): number of non-zero symbols in a code word
 Hamming distance dH(x1, x2): number of differing symbols between two code
words x1 and x2:
dH(x1, x2) = wH(x1 − x2)
 Example: x1 = [2 0 2 1] and x2 = [1 0 2 0]
 Hamming weight: wH(x1) = 3 and wH(x2) = 2
 Hamming distance: dH(x1, x2) = wH(x1-x2) = wH([1 0 0 1]) = 2
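The example above can be reproduced with a minimal Python sketch (function names are mine, not from the slides; the example is taken over GF(3), so subtraction is mod 3):

```python
def hamming_weight(x):
    """Number of non-zero symbols in a word."""
    return sum(1 for s in x if s != 0)

def hamming_distance(x1, x2, q):
    """d_H(x1, x2) = w_H(x1 - x2), with subtraction carried out in GF(q)."""
    diff = [(a - b) % q for a, b in zip(x1, x2)]
    return hamming_weight(diff)

x1 = [2, 0, 2, 1]
x2 = [1, 0, 2, 0]
print(hamming_weight(x1))           # 3
print(hamming_weight(x2))           # 2
print(hamming_distance(x1, x2, 3))  # 2, since x1 - x2 = [1 0 0 1]
```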
Algebra for Finite Fields
Finite Algebra (groups)
 Set (): An arbitrary choice of elements without any predefined
operations between the set elements
 The set can be finite (e.g. the set of all integer numbers smaller than 4, i.e. {0,1,2,3})
or infinite (e.g. set of all real numbers )
 The cardinality || of the set defines the number of objects within the set

 Group (, ): Set  of elements with binary operation 


Conditions:
  is closed under : ab for all a,b
  is associative: a  (b  c) = (a  b)  c for all a,b,c 
 Existence of one identity element e: ae=a for each a  
 Each element a in  has an inverse a-1: a  a-1 = a-1  a = e
binary operation: operates
Commutative group (abelian groups): ab=ba on two set elements at a
time, yielding a third element
Finite Algebra (groups)
 Example 1: Infinite group
The integers form an infinite commutative group under integer addition,
but not under multiplication (no multiplicative inverse)
 Examples: (ℤ,+), (ℚ,+), (ℝ,+)
 Example 2: Finite group
 Achieved by applying modular arithmetic to the set of integers:
a + b ≡ c mod m ⇔ c = R_m[a + b], i.e. a + b = d·m + c with c, d ∈ ℤ
(x = c mod m ⇔ c = R_m[x] is the remainder on division of x (dividend) by m (modulus, divisor))
The set {0, 1, 2, …, m−1} forms a commutative group of order m
under modulo-m addition for any positive m.
 Finite groups are of primary interest for channel coding
 The order of a group is defined to be the cardinality of the group
 Example: m = 4 → 𝔾 = {0, 1, 2, 3} with addition table:
+ | 0 1 2 3
0 | 0 1 2 3
1 | 1 2 3 0
2 | 2 3 0 1
3 | 3 0 1 2
Finite Algebra (groups)
 Example 3: Modular multiplication: a · b ≡ c mod m ⇔ c = R_m[a · b]; "1" is the identity element
 In contrast to modulo addition, modular multiplication cannot be used to form a
finite group from the integers for arbitrary moduli m
 Example: 𝔾 = {1, 2, 3, 4, 5, 6, 7} under modulo-8 multiplication:
• 2 · 4 = 0 mod 8 → as 0 is not in 𝔾, the operation is not closed over 𝔾 → no group
 Example: 𝔾 = {0, 1, 2, 3, 4, 5, 6, 7} under modulo 8: there is no x such that x · 0 = 1 mod 8,
i.e. no inverse element for "0"
The set {1, 2, …, p−1} forms a commutative group of order (p−1) under
modulo-p multiplication if and only if p is a prime integer.
 Proof:
 p is not prime: p = a·b with 1 < a, b < p → a·b = 0 mod p → not closed
 p is prime: no such pair of elements a, b exists → closure is satisfied, as
all pairs of elements 1 < a, b < p fulfill a·b ≠ 0 mod p
 For any x ∈ 𝔾, the products {x·1, x·2, …, x·(p−1)} are distinct
 Example 4: Group of order 6 under modulo-7 multiplication: 𝔾 = {1, 2, 3, 4, 5, 6}
· | 1 2 3 4 5 6
1 | 1 2 3 4 5 6
2 | 2 4 6 1 3 5
3 | 3 6 2 5 1 4
4 | 4 1 5 2 6 3
5 | 5 3 1 6 4 2
6 | 6 5 4 3 2 1
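The "prime modulus" condition can be checked by brute force; a small Python sketch (my own helper, not from the slides) tests closure and the existence of inverses for {1, …, m−1} under modulo-m multiplication:

```python
def is_mult_group(m):
    """Check whether {1, ..., m-1} is closed and has inverses under mod-m multiplication."""
    elems = set(range(1, m))
    closed = all((a * b) % m in elems for a in elems for b in elems)
    has_inv = all(any((a * b) % m == 1 for b in elems) for a in elems)
    return closed and has_inv

print(is_mult_group(7))  # True  (7 is prime)
print(is_mult_group(8))  # False (2 * 4 = 0 mod 8, not closed)
```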
Finite Algebra (ring)
 Ring (, , ): Set  of elements with two binary operations  and 
Conditions:
 (, ) is a commutative group  identity element “0”
  is closed under  Multiplication need not
  is distributive: a  (b  c) = a  b  a  c • to have an inverse
• to be commutative
  is associative: a  (b  c) = (a  b)  c
• to have an identity element
Commutative ring: a  b = b  a
Ring with identity: a  1 = a  identity element “1”

 Examples:
 Matrices with integer elements: ring with identity under matrix addition and
multiplication. I is multiplicative identity. Matrix multiplication is not commutative
 The integers under modulo q addition and multiplication form a commutative ring
with identity (but without multiplicative inverse)
Finite Algebra (fields)
 Field (, , ): Set  of elements with operations  and 
Conditions:
 (, ) is a commutative group  identity element “0”
 ( \ {0}, ) is a commutative group  identity element “1”
 multiplicative inverse for all elements  \ {0}
  is distributive: a  (b  c) = (a  b)  (a  c)

“A field is a commutative ring with identity in which every element has a


multiplicative inverse”
 Examples: Infinite Fields
 Rational numbers (,+,·)
 The real and the complex numbers (,+,·), (,+,·)
 The integers do not form a field (most integers do not have an integer multiplicative
inverse)  , , 
Galois Field
 Galois field GF(q) (finite field)
 Field with a limited number (i.e., q) of elements, e.g. GF(2), GF(3), GF(8) = GF(2³)
GF(2):  + | 0 1      · | 0 1
        0 | 0 1      0 | 0 0
        1 | 1 0      1 | 0 1
 q must be prime (q = p → "prime field") or a power of a prime (q = p^m, m ∈ ℕ)
 Representation of the elements of GF(q)
 For q = p the Galois field GF(p) is given by the integer numbers {0, 1, …, q−1}
 For q = p^m the Galois field GF(q) cannot be represented by {0, 1, …, q−1}!
Instead GF(p^m) is represented by
• all polynomials p(D) = p0 + p1·D + … + p_{m−1}·D^{m−1} of degree < m with coefficients p_i ∈ GF(p)
• powers of one element z ∈ GF(q) (the number of nonzero elements equals q−1 = p^m−1):
GF(p^m) = { Σ_{i=0}^{m−1} p_i·D^i | p0, …, p_{m−1} ∈ GF(p) } = { 0, z¹, z², …, z^{q−2}, z^{q−1} = z⁰ = 1 }
Vector Spaces
 Vector space 𝕍: a vector space 𝕍 over a field 𝔽 denotes a set of
vectors with the definition of an addition "+" and a scalar multiplication "·"
 Vector: n-tuple of elements a_i from the ground field 𝔽: a = [a0 a1 … a_{n−1}], a_i ∈ 𝔽
 Vector addition: a + b = [a0+b0  a1+b1  …  a_{n−1}+b_{n−1}]
 Scalar multiplication: α · a = [α·a0  α·a1  …  α·a_{n−1}]
Conditions:
 𝕍 is closed: a + b ∈ 𝕍 holds for all a, b ∈ 𝕍
 α · a ∈ 𝕍 holds for all α ∈ 𝔽 and a ∈ 𝕍
 Rules:
a + b = b + a                  α · (a + b) = α · a + α · b
(a + b) + c = a + (b + c)      (α + β) · a = α · a + β · a
a + 0 = a                      (α · β) · a = α · (β · a)
a + (−a) = 0                   1 · a = a
 Linear combination: b = α1·a1 + α2·a2 + … + αn·an
Vector Spaces
 Inner product: a · b = Σ_{i=0}^{n−1} a_i·b_i = a0·b0 + a1·b1 + … + a_{n−1}·b_{n−1}
with the rules a · b = b · a,  α·(a · b) = (α·a) · b,  c · (a + b) = (c · a) + (c · b)
 Spanning set: the collection of vectors {v0, v1, …, v_{n−1}} is a spanning set of 𝕍 if
each vector in 𝕍 can be represented by a linear combination of {v0, v1, …, v_{n−1}}
 Basis: a spanning set for 𝕍 with minimal cardinality.
 A vector space 𝕍 of dimension k (dim 𝕍 = k) has a basis of k elements {v0, v1, …, v_{k−1}}
 For every vector a ∈ 𝕍 there is a unique representation
a = Σ_{i=0}^{k−1} α_i·v_i = α0·v0 + α1·v1 + … + α_{k−1}·v_{k−1} = [α0 α1 … α_{k−1}] · V = α · V
where the rows of V contain the basis vectors v0, …, v_{k−1}, i.e. they span the
vector space 𝕍
 Example: {[1000],[0100],[0010],[0001]} is a basis of GF(2)⁴, dim 𝕍 = 4

How to describe Galois Fields?
The integers {0, 1, 2, …, p−1}, where p is a prime, form the
Galois field GF(p) under modulo-p addition and multiplication.
 Under addition modulo p the elements form an additive commutative group
 Under multiplication modulo p the elements GF(p) \ {0} form a multiplicative group
 If + and · distribute as they do in integer arithmetic, then we have a field
 Example: GF(5) = {0, 1, 2, 3, 4} under mod-5 arithmetic:
+ | 0 1 2 3 4      · | 0 1 2 3 4
0 | 0 1 2 3 4      0 | 0 0 0 0 0
1 | 1 2 3 4 0      1 | 0 1 2 3 4
2 | 2 3 4 0 1      2 | 0 2 4 1 3
3 | 3 4 0 1 2      3 | 0 3 1 4 2
4 | 4 0 1 2 3      4 | 0 4 3 2 1
 GF(5) \ {0} = {1, 2, 3, 4} = {2⁴, 2¹, 2³, 2²} → all nonzero elements
are achieved as powers of the element z = 2
 Primitive element: a primitive element of the field GF(q) is an element z such
that every field element except zero can be expressed as a power of z:
GF(q) = { 0, z¹, z², …, z^{q−2}, z^{q−1} = z⁰ = 1 }
How to describe Galois Fields?
 Recall: 𝔾 = {1, 2, …, p−1} forms a commutative group of order (p−1) under
modulo-p multiplication if and only if p is a prime integer
 GF(4) with modulo-4 addition and multiplication?
+ | 0 1 2 3      · | 0 1 2 3
0 | 0 1 2 3      0 | 0 0 0 0
1 | 1 2 3 0      1 | 0 1 2 3
2 | 2 3 0 1      2 | 0 2 0 2
3 | 3 0 1 2      3 | 0 3 2 1
Calculating modulo 4 does not yield a Galois field: 2 · 2 = 0 mod 4, so there is
no unique multiplicative inverse (2⁻¹ = ?)
 GF(4) does exist — but attention: its addition and multiplication
are not modulo-4 addition and not modulo-4 multiplication!
+ | 0 1 2 3      · | 0 1 2 3
0 | 0 1 2 3      0 | 0 0 0 0
1 | 1 0 3 2      1 | 0 1 2 3
2 | 2 3 0 1      2 | 0 2 3 1
3 | 3 2 1 0      3 | 0 3 1 2
Strange at first sight: 1+1 = 0 but 1+2 = 3, and 2+2 = 0 but 2·2 = 3
→ construction rules are required
How to describe Galois Fields?
 1. Solution: represent the elements of the set by abstract symbols, e.g. {0, 1, a, ā}
+ | 0 1 a ā      · | 0 1 a ā
0 | 0 1 a ā      0 | 0 0 0 0
1 | 1 0 ā a      1 | 0 1 a ā
a | a ā 0 1      a | 0 a ā 1
ā | ā a 1 0      ā | 0 ā 1 a
Observations:
• x + x = 0 for all x ∈ GF(4)
• ā = a² = 1 + a
• a³ = ā³ = 1 (a and ā are primitive elements)
• {a⁰, a¹, a²} = {1, a, ā} and {ā⁰, ā¹, ā²} = {1, ā, a}
 2. Solution: define bit-tuples to denote the different elements → vector space
GF(4) = {00, 10, 01, 11}
 + | 00 10 01 11      · | 00 10 01 11
00 | 00 10 01 11     00 | 00 00 00 00
10 | 10 00 11 01     10 | 00 10 01 11
01 | 01 11 00 10     01 | 00 01 11 10
11 | 11 01 10 00     11 | 00 11 10 01
Observations:
• addition is component-wise mod-2 vector addition
• [1 0] · [0 1] = [0 1] and [1 1] · [1 1] = [0 1] (multiplication needs an extra rule)
How to describe Galois Fields?
 3. Solution: use polynomials to describe the set elements: GF(4) = {0, 1, z, 1+z}
with primitive polynomial p(D) = D² + D + 1 and root z: p(z) = 0 = z² + z + 1 → z² = z + 1
  + | 0   1   z   1+z      · | 0   1   z   1+z
  0 | 0   1   z   1+z      0 | 0   0   0   0
  1 | 1   0   1+z z        1 | 0   1   z   1+z
  z | z   1+z 0   1        z | 0   z   1+z 1
1+z | 1+z z   1   0      1+z | 0   1+z 1   z
 4. Solution: use the powers of one element: GF(4) = {0, 1, z, z²}
with primitive element z
 + | 0  1  z  z²      · | 0  1  z  z²
 0 | 0  1  z  z²      0 | 0  0  0  0
 1 | 1  0  z² z       1 | 0  1  z  z²
 z | z  z² 0  1       z | 0  z  z² 1
z² | z² z  1  0      z² | 0  z² 1  z
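The tables above can be generated mechanically from the primitive polynomial. A Python sketch (my own bit-mask representation: an element's bits are its polynomial coefficients, so 3 = 0b11 = 1+z; addition is XOR, multiplication is carry-less multiplication followed by reduction modulo p(D)):

```python
def gf_mult(a, b, prim, m):
    """Carry-less multiply a*b, then reduce modulo the primitive polynomial (bitmask prim)."""
    res = 0
    for i in range(m):
        if (b >> i) & 1:
            res ^= a << i
    # reduce: subtract (XOR) shifted copies of prim while degree >= m
    for i in range(2 * m - 2, m - 1, -1):
        if (res >> i) & 1:
            res ^= prim << (i - m)
    return res

PRIM = 0b111  # p(D) = D^2 + D + 1 for GF(4)
add_table = [[a ^ b for b in range(4)] for a in range(4)]
mul_table = [[gf_mult(a, b, PRIM, 2) for b in range(4)] for a in range(4)]
print(add_table)  # matches the + table: e.g. 1+1 = 0, 1+2 = 3
print(mul_table)  # matches the . table: e.g. 2*2 = 3, i.e. z*z = 1+z
```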
Extension Fields
 Polynomial: a polynomial p(D) of degree < m with coefficients p_i ∈ GF(p) is denoted by
p(D) = p0 + p1·D + … + p_{m−1}·D^{m−1} = Σ_{i=0}^{m−1} p_i·D^i  with p_i ∈ GF(p),  p(D) ∈ GF(p)[D]
 Addition: a(D) + b(D) = (a0+b0) + (a1+b1)·D + … + (a_{m−1}+b_{m−1})·D^{m−1}
 Multiplication (for deg a(D) = m−1 and deg b(D) = n−1):
a(D) · b(D) = a0·b0 + (a1·b0 + a0·b1)·D + … + a_{m−1}·b_{n−1}·D^{m+n−2}
 Irreducible polynomial:
A polynomial p(D) of degree m with coefficients p_i ∈ GF(p) is called irreducible if it cannot be
factorized into polynomials of degree smaller than m. Hence, p(D) has no zeros in GF(p).
The term irreducible always refers to a certain field.
 Attention: the absence of zeros is necessary but not sufficient for irreducibility.
 Example: p(D) = D⁴+D²+1 = (D²+D+1)² is reducible, although p(0) ≠ 0 and p(1) ≠ 0
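The counter-example can be checked by brute force. A Python sketch (my own helpers; binary polynomials are stored as int bitmasks, bit i = coefficient of D^i) tests divisibility by all lower-degree polynomials:

```python
def polydiv_mod(num, den):
    """Remainder of GF(2) polynomial division (ints as coefficient bitmasks)."""
    dden = den.bit_length() - 1
    while num and num.bit_length() - 1 >= dden:
        num ^= den << (num.bit_length() - 1 - dden)
    return num

def is_irreducible(p):
    """True if no polynomial of degree 1 .. deg(p)-1 divides p (over GF(2))."""
    deg = p.bit_length() - 1
    return all(polydiv_mod(p, d) != 0 for d in range(2, 1 << deg))

p = 0b10101                      # D^4 + D^2 + 1
print(p & 1, bin(p).count('1') % 2)  # 1 1 -> p(0) != 0 and p(1) != 0 (no zeros in GF(2))
print(is_irreducible(p))             # False: divisible by D^2 + D + 1
print(is_irreducible(0b111))         # True:  D^2 + D + 1 is irreducible
```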
Extension Fields
 Primitive polynomial:
For each prime number p and for each m there exists an irreducible
polynomial p(D) with coefficients p_i ∈ GF(p) with the property:
all powers z¹, …, z^{q−1} of an element z ∈ GF(p^m) with p(z) = 0 (i.e., z is a root of p(D))
are different from each other, and z⁰ = z^{q−1} = 1 holds.
z is called a primitive element and p(D) a primitive polynomial.
 Example: p = 2 and m = 2
 p(D) = D²+D+1 is a primitive polynomial in GF(2)
 z is a root of p(D) → p(z) = 0 = z² + z + 1 → condition z² = z + 1
 q−1 = p^m − 1 = 2²−1 = 3 elements unequal to 0:
z⁰ = 1,  z¹ = z,  z² = z + 1,  z³ = z · z² = z² + z = (z + 1) + z = 1 = z⁰
Extension Fields
 Extension field GF(p^m):
Let p(D) be a primitive polynomial of degree m with coefficients p_i ∈ GF(p), and let
z ∈ GF(p^m) be its primitive element (root). The extension field GF(p^m) can be spanned by
the powers z⁰, …, z^{q−2}:
GF(p^m) = { 0, z¹, z², …, z^{q−2}, z^{q−1} = z⁰ = 1 } = { Σ_{i=0}^{m−1} p_i·z^i | p0, …, p_{m−1} ∈ GF(p) }
(exponential representation)                  (polynomial representation)
 Table of primitive polynomials for GF(2^m)
(given are the non-vanishing exponents):
q = 4:  (0,1,2)
q = 8:  (0,1,3), (0,2,3)
q = 16: (0,1,4), (0,1,2,3,4), (0,3,4)
q = 32: (0,2,5), (0,2,3,4,5), (0,1,2,4,5), (0,1,2,3,5), (0,1,3,4,5), (0,3,5)
q = 64: (0,1,6), (0,1,2,4,6), (0,1,2,5,6), (0,3,6), (0,2,3,5,6), (0,1,3,4,6), (0,2,4,5,6), (0,1,4,5,6), (0,5,6)
Example for Calculation in GF(8) – Exponential Representation
 Primitive polynomial: p(D) = D³ + D + 1 → condition z³ = z + 1
z¹ = z          z⁵ = z³ + z² = z² + z + 1
z² = z²         z⁶ = z⁴ + z³ = z² + z + z + 1 = z² + 1
z³ = z + 1      z⁷ = z⁵ + z⁴ = z² + z + 1 + z² + z = 1
z⁴ = z² + z
 Binary representation of the exponent:
000 → 0, 001 → z¹, 010 → z², 011 → z³, 100 → z⁴, 101 → z⁵, 110 → z⁶, 111 → z⁷ = 1
 Example: a = [011 101 011 100] = [z³ z⁵ z³ z⁴] and b = [100 111 011 000] = [z⁴ 1 z³ 0]
 Multiplication (inner product):
a · b = z³·z⁴ + z⁵·1 + z³·z³ + z⁴·0 = z⁷ + z⁵ + z⁶ = 1 + (z² + z + 1) + (z² + 1) = z + 1 = z³ → 011
 Addition (component-wise):
a + b = [z³+z⁴  z⁵+1  z³+z³  z⁴+0] = [(z+1)+(z²+z)  (z²+z+1)+1  0  z⁴]
      = [z²+1  z²+z  0  z⁴] = [z⁶ z⁴ 0 z⁴] → [110 100 000 100]
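The exponential representation maps multiplication to addition of exponents. A Python sketch (my own log/antilog tables, elements stored as 3-bit coefficient masks) reproduces the inner product a · b = z³ from the slide:

```python
# antilog[i] = polynomial representation (bits c2 c1 c0) of z^i, with z^3 = z + 1
antilog = [1]                         # z^0 = 1
for _ in range(6):
    v = antilog[-1] << 1              # multiply by z
    if v & 0b1000:                    # a z^3 term appeared: replace z^3 by z + 1
        v = (v ^ 0b1000) ^ 0b011
    antilog.append(v)
log = {v: i for i, v in enumerate(antilog)}

def mul(a, b):                        # multiply via exponents: z^i * z^j = z^((i+j) mod 7)
    if a == 0 or b == 0:
        return 0
    return antilog[(log[a] + log[b]) % 7]

a = [antilog[3], antilog[5], antilog[3], antilog[4]]   # [z^3 z^5 z^3 z^4]
b = [antilog[4], antilog[0], antilog[3], 0]            # [z^4 1   z^3 0]
acc = 0
for ai, bi in zip(a, b):
    acc ^= mul(ai, bi)                # addition in GF(2^m) is bitwise XOR
print(acc == antilog[3])              # True: a . b = z^3 = z + 1 -> 011
```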
Example for Calculation in GF(8) – Polynomial Representation
 Primitive polynomial: p(D) = D³ + D + 1 → condition z³ = z + 1 (powers as before)
 Binary representation of the coefficients p2·z² + p1·z + p0:
000 → 0, 001 → 1 = z⁰, 010 → z, 011 → z+1 = z³, 100 → z², 101 → z²+1 = z⁶, 110 → z²+z = z⁴, 111 → z²+z+1 = z⁵
 Example: a = [011 101 011 100] = [z+1  z²+1  z+1  z²] and b = [100 111 011 000] = [z²  z²+z+1  z+1  0]
 Multiplication (inner product):
a · b = (z+1)·z² + (z²+1)·(z²+z+1) + (z+1)·(z+1) + z²·0
      = (z³+z²) + (z⁴+z³+z+1) + (z²+1) + 0
      = z⁴ + z = (z²+z) + z = z² → 100
 Addition (component-wise):
a + b = [(z+1)+z²  (z²+1)+(z²+z+1)  (z+1)+(z+1)  z²+0] = [z²+z+1  z  0  z²] → [111 010 000 100]
Properties of Linear Codes
and
Decoding Principles
Distance Properties for (n, k, dmin)q codes
 Recall:
 Hamming weight wH(x1): number of non-zero symbols in a code word
 Hamming distance dH(x1, x2) = wH(x1 − x2)
 Received word: y = x + e with e ∈ GF(q)^n
 Minimum Hamming distance: dmin = min_{a,b∈Γ, a≠b} dH(a, b) = min_{a∈Γ, a≠0} wH(a) (by linearity)
 Minimum distance dmin and bounds w.r.t. wH(e)
 Number of detectable errors: t′ = dmin − 1
 Number of correctable errors: t = ⌊(dmin − 1)/2⌋
(e.g. dmin = 4 → t = 1, t′ = 3;  dmin = 5 → t = 2, t′ = 4)
 Further errors can possibly be detected or corrected, but this is not guaranteed
 A code can simultaneously correct t errors and detect t′ > t errors if t + t′ + 1 ≤ dmin
Distance Properties for (n, k, dmin)q codes
 Singleton bound: dmin ≤ n − k + 1
 Proof: all code words differ in at least dmin positions. If we delete the first dmin−1
positions, all shortened code words of length n − dmin + 1 are still different
→ there are q^k different shortened code words in the space of q^{n−dmin+1} words.
Therefore, k ≤ n − dmin + 1 is required.
 dmin is always smaller than or equal to the number of parity symbols plus one
 Equality holds for MDS codes (Maximum Distance Separable): repetition codes
are the only binary MDS codes; non-binary Reed-Solomon codes are MDS codes
 Sphere packing bound (Hamming bound): q^{n−k} ≥ Σ_{r=0}^{t} (n choose r) · (q−1)^r
 Proof: with n−k parity symbols exactly q^{n−k} syndromes exist. For correcting all error
vectors e with wH(e) ≤ t, each e has to be assigned to one of the q^{n−k} syndromes. The right-
hand side (r.h.s.) is the number of error vectors of weight at most t.
 To correct t errors, the number of syndromes must be at least as large as the number of error patterns
 Equality holds for perfect codes, i.e. the number of syndromes corresponds to the number
of correctable error patterns → (7,4,3)₂ Hamming code, (23,12,7) Golay code
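Both bounds are easy to evaluate numerically; a Python sketch (my own helper functions) checks them for the (7,4,3)₂ Hamming code, which meets the sphere-packing bound with equality:

```python
from math import comb

def singleton_ok(n, k, dmin):
    """Singleton bound: dmin <= n - k + 1."""
    return dmin <= n - k + 1

def hamming_bound_ok(n, k, dmin, q=2):
    """Sphere packing bound: q^(n-k) >= sum_{r=0}^{t} C(n,r) (q-1)^r."""
    t = (dmin - 1) // 2
    return q ** (n - k) >= sum(comb(n, r) * (q - 1) ** r for r in range(t + 1))

n, k, dmin = 7, 4, 3
print(singleton_ok(n, k, dmin))      # True
print(hamming_bound_ok(n, k, dmin))  # True, with equality: 2^3 = 8 = 1 + 7 (perfect code)
```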
Distance Properties for (n, k, dmin)q codes
n  q  1 q k 1 n  q  1
 Plotkin bound: d min  
q 1
k
q
 Proof:
• Average weight of each symbol in GF(q) is (q -1)/q
• Average weight of code word of length n is n(q -1)/q
• Without the all-zero word the average weight is increased by a factor of qk /(qk -1)  r.h.s
• Average weight is always larger or equal to minimum weight (dmin)

 Gilbert-Varshamov bound:
 Previous bounds achieve estimation for the minimum distance, but not a guarantee
for the existence of a real code
 Gilbert-Varshamov bound proofs the existence of real code (no code construction)
 There exists a (n, k, dmin)q code if the following condition is fulfilled

 
d min  2
n 1 q 1 r
r  
nk
q 
r 0
Distance Properties and IOWEF
 The minimum distance alone is often not sufficient for a tight performance bound
 Distance spectrum: weight distribution of the code
 Due to linearity, the weights wH(x), and not the distances between all pairs of code words, are investigated:
A(D) = Σ_{d=0}^{n} A_d·D^d = Σ_{x∈Γ} D^{wH(x)} = 1 + Σ_{d=dmin}^{n} A_d·D^d
with A_d = number of code words x with wH(x) = d, and D a formal placeholder variable
 Depends only on the code, not on the encoder (u → x) → determines the word error probability Pw
 Σ_{d=0}^{n} A_d = q^k = number of information words = number of code words
 Input Output Weight Enumerating Function (IOWEF):
A(W, D) = Σ_{w=0}^{k} Σ_{d=0}^{n} A_{w,d}·W^w·D^d
with A_{w,d} = number of code words x with input weight wH(u) = w and output weight wH(x) = d
 Suitable for calculating the bit error rate Pb, as the connection between information
words u and code words x is considered → the IOWEF depends also on the encoder
Distance Properties and IOWEF
 Example: the (7,4)₂ Hamming code consists of 2^k = 2⁴ = 16 code words
u → code word x  (wH(u), wH(x)):
0000 → 0000000 (0, 0)    1000 → 1000011 (1, 3)
0001 → 0001111 (1, 4)    1001 → 1001100 (2, 3)
0010 → 0010110 (1, 3)    1010 → 1010101 (2, 4)
0011 → 0011001 (2, 3)    1011 → 1011010 (3, 4)
0100 → 0100101 (1, 3)    1100 → 1100110 (2, 4)
0101 → 0101010 (2, 3)    1101 → 1101001 (3, 4)
0110 → 0110011 (2, 4)    1110 → 1110000 (3, 3)
0111 → 0111100 (3, 4)    1111 → 1111111 (4, 7)
 Distance spectrum: A_0 = 1, A_3 = 7, A_4 = 7, A_7 = 1 (all other A_d = 0):
A(D) = 1 + 7D³ + 7D⁴ + D⁷
 Input Output Weight Enumerating Function:
A(W, D) = 1 + (3W + 3W² + W³)·D³ + (W + 3W² + 3W³)·D⁴ + W⁴·D⁷
IOWEF coefficients A_{w,d}:
      d=0  d=3  d=4  d=7   (all other d: 0)
w=0    1    0    0    0
w=1    0    3    1    0
w=2    0    3    3    0
w=3    0    1    3    0
w=4    0    0    0    1
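The table above can be generated by enumerating all 16 code words; a Python sketch (using the systematic generator matrix implied by the code-word list, variable names my own):

```python
from itertools import product
from collections import Counter

# systematic generator matrix of the (7,4) Hamming code from the table above
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]

Ad = Counter()     # distance spectrum coefficients A_d
Awd = Counter()    # IOWEF coefficients A_{w,d}
for u in product([0, 1], repeat=4):
    x = [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]
    w, d = sum(u), sum(x)
    Ad[d] += 1
    Awd[(w, d)] += 1

print(dict(sorted(Ad.items())))               # {0: 1, 3: 7, 4: 7, 7: 1}
print(Awd[(1, 3)], Awd[(2, 3)], Awd[(3, 3)])  # 3 3 1
```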
General Decoding Principles
 Maximum-a-posteriori (MAP) criterion
 The optimum decoding criterion determines the code word x that maximizes the
a-posteriori probability Pr{x|y}:
x̂ = argmax_{x∈Γ} Pr{x|y} = argmax_{x∈Γ} Pr{y|x}·Pr{x}/Pr{y} = argmax_{x∈Γ} Pr{y|x}·Pr{x}
 depends on the transition probabilities Pr{y|x} and on the a-priori probabilities Pr{x}
 If x = g(u) is the encoding function, the estimated information bits are û = g⁻¹(x̂)
 Maximum-Likelihood Decoding (MLD)
 For equally likely code words, or if the decoder has no knowledge about Pr{x}:
x̂ = argmax_{x∈Γ} Pr{y|x}
 Choose the code word x with the minimum distance to the received word y
 minimizes the word error probability Pw
 The effort of MAP/ML decoding increases exponentially with k (q^k comparisons)
 It is possible to correct more than ⌊(dmin−1)/2⌋ errors
General Decoding Principles
[Figure: received words y around the code words, for three decoding strategies]
 MLD (Maximum Likelihood Decoding): every received word y is assigned to the
closest code word → all y are decoded
 BDD (Bounded Distance Decoding): a decoding ball of radius t is placed around each
code word x; only those y within one decoding ball are decoded, i.e. assigned to the
corresponding x; an error is declared for y lying in several balls or in no ball
→ small drawback compared to MLD
 BMD (Bounded Minimum Distance Decoding): radius t = ⌊(dmin−1)/2⌋ → the decoding
balls are disjoint; y outside all balls is not correctable; BMD equals MLD for
perfect codes
Error Rate Performance of Linear Codes
Error Rate Performance of Linear Codes
 Bounds for the error rate performance of linear block codes
 Error detection: probability of an undetected error
 Error correction: probability of an erroneous correction or of a non-correctable
error
 for hard decision (i.e. e ∈ GF(q)^n → y ∈ GF(q)^n)
 for soft decision (without quantization → noise vector n → y = x + n)
 In the sequel we always consider the word error rate Pw, not the bit error rate Pb
Error Detection for Discrete Symmetric Channel
[Figure: q-ary symmetric channel — each input symbol X_i is received correctly (Y_i) with probability 1−Pe and changed into each of the q−1 other symbols with probability Pe/(q−1)]
 Received code word: y = x + e
 with e ≠ 0 if a transmission error occurred
 Probability of a specific error pattern e with weight wH(e):
Pr{e} = (Pe/(q−1))^{wH(e)} · (1 − Pe)^{n−wH(e)}
 Probability of an undetected error (e such that y ∈ Γ):
 an error is undetectable if e ∈ Γ → wH(e) ≥ dmin → consider all nonzero code words x as error patterns e:
P_ue = Σ_{x∈Γ\{0}} (Pe/(q−1))^{wH(x)} · (1 − Pe)^{n−wH(x)} = Σ_{d=dmin}^{n} A_d · (Pe/(q−1))^d · (1 − Pe)^{n−d}
 requires the whole distance spectrum A(D)
 For q = 2 (BSC): P_ue = Σ_{d=dmin}^{n} A_d · Pe^d · (1 − Pe)^{n−d}
Error Correction for Binary Symmetric Channel
 The word error probability (probability of a non-detectable or erroneously corrected error)
depends on the decoding principle
 Simple derivation for Bounded Minimum Distance (BMD) decoding
 serves as an upper bound for BDD and MLD → exact for perfect codes
 Probability of correct decoding for a t-error-correcting (n, k, dmin) code:
P_correct = Σ_{d=0}^{t} (n choose d) · (1 − Pe)^{n−d} · Pe^d
(all error patterns with wH(e) ≤ t = ⌊(dmin−1)/2⌋ can be corrected)
 Probability of a decoding error:
Pw = 1 − P_correct = 1 − Σ_{d=0}^{t} (n choose d) · (1 − Pe)^{n−d} · Pe^d
                  = Σ_{d=t+1}^{n} (n choose d) · (1 − Pe)^{n−d} · Pe^d
 requires only dmin, not the whole distance spectrum
 upper bound for BDD and MLD!
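The BMD word error probability is again directly computable; a Python sketch (my own function name):

```python
from math import comb

def p_word_bmd(n, t, pe):
    """P_w = 1 - sum_{d=0}^{t} C(n,d) (1-pe)^(n-d) pe^d  (BMD, t-error correction)."""
    return 1 - sum(comb(n, d) * (1 - pe) ** (n - d) * pe ** d for d in range(t + 1))

# (7,4,3) Hamming code, t = 1: at pe = 0.01 about 2e-3
print(p_word_bmd(7, 1, 0.01))
```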
Error Correction for Soft-Output Channels (1)
 Assume that x^(i) was transmitted and the received signal is y = x^(i) + n
 Probability of a decoding error:
Pe(x^(i)) = Pr{decoding error | x^(i)} = Pr{y ∉ 𝔻_i | x^(i)}
with the decision region
𝔻_i = { y : Pr{y|x^(i)} ≥ Pr{y|x^(j)} for all x^(j) ∈ Γ, j ≠ i }
 i.e. the set of all y that are closer to x^(i) than to any other x^(j) ≠ x^(i)
 all y ∈ 𝔻_i lead to a correct decision for MLD
 minimum squared Euclidean distance
 Complementary set:
𝔻̄_i = 𝔻_out \ 𝔻_i = { y : Pr{y|x^(j)} ≥ Pr{y|x^(i)} for some j ≠ i }
Error Correction for Soft-Output Channels (2)
 The complementary set can be written as a union of pairwise regions:
𝔻̄_i = 𝔻_out \ 𝔻_i = ∪_{j=1, j≠i}^{q^k} 𝔻̄_{i,j}  with  𝔻̄_{i,j} = { y : Pr{y|x^(j)} ≥ Pr{y|x^(i)} }
 Bound for the decoding error (Union Bound):
Pe(x^(i)) = Pr{ y ∈ ∪_{j≠i} 𝔻̄_{i,j} | x^(i) } ≤ Σ_{j=1, j≠i}^{q^k} Pr{ y ∈ 𝔻̄_{i,j} | x^(i) }
(the sets 𝔻̄_{i,j} are in general not disjoint)
 Interpretation:
 the received signal can be contained in several 𝔻̄_{i,j}
 → the sum over the single sets is larger than the probability of the union
 equality holds if all sets 𝔻̄_{i,j} are disjoint, i.e. each y occurs in one set 𝔻̄_{i,j} only
Error Correction for Soft-Output Channels (3)
 Antipodal transmission: x_r = ±√(Es/Ts)
 MLD: the pairwise error event is
Pr{ y ∈ 𝔻̄_{i,j} | x^(i) } = Pr{ ‖y − x^(j)‖² ≤ ‖y − x^(i)‖² | x^(i) }
 Disturbance by noise: y = x^(i) + n
Pr{ y ∈ 𝔻̄_{i,j} | x^(i) } = Pr{ ‖x^(i) − x^(j) + n‖² ≤ ‖n‖² }
= Pr{ Σ_{r=0}^{n−1} (x_r^(i) − x_r^(j) + n_r)² ≤ Σ_{r=0}^{n−1} n_r² }
= Pr{ Σ_{r=0}^{n−1} n_r·(x_r^(i) − x_r^(j)) ≤ −(1/2)·Σ_{r=0}^{n−1} (x_r^(i) − x_r^(j))² }
= Pr{ ξ ≤ −δ }
Error Correction for Soft-Output Channels (4)
n 1
    nr   xr( i )  xr( j ) 
r 0
 Only differing positions of x(i) and x(j) are of importance (all other positions are zero)
 only dH(x(i), x(j)) positions are considered
  is Gaussian distributed random variable with
 mean  = 0
 variance
2
N0 / 2  Es 
 
2
  dH  x , x    2
(i ) ( j)
  2d H  x , x   Es N 0 / Ts
(i ) ( j) 2

Ts  Ts 
1 n 1 ( i )
  is constant:       xr  xr 
2
( j)

2 r 0
2
1  Es 
    dH  x , x    2
(i ) ( j)
  2  d H  x , x   Es / Ts
(i ) ( j)

2  Ts 
Error Correction for Soft-Output Channels (5)
 Integration over the Gaussian density of ξ delivers (by symmetry, Pr{ξ ≤ −δ} = Pr{ξ ≥ δ}):
Pr{ y ∈ 𝔻̄_{i,j} | x^(i) } = 1/√(2πσ_ξ²) · ∫_{2dH(x^(i),x^(j))Es/Ts}^{∞} e^{−ξ²/(2σ_ξ²)} dξ
Substituting u = ξ/(√2·σ_ξ) with σ_ξ² = 2·dH(x^(i),x^(j))·Es·N0/Ts² shifts the lower limit to
δ/(√2·σ_ξ) = (2·dH·Es/Ts) / √(4·dH·Es·N0/Ts²) = √(dH(x^(i), x^(j))·Es/N0)
so that
Pr{ y ∈ 𝔻̄_{i,j} | x^(i) } = 1/√π · ∫_{√(dH(x^(i),x^(j))Es/N0)}^{∞} e^{−u²} du
                          = (1/2)·erfc( √(dH(x^(i), x^(j))·Es/N0) )
Error Correction for Soft-Output Channels (6)
 Union Bound for the word error probability (independent of x^(i)):
Pw = Pe(x^(i)) ≤ (1/2)·Σ_{j=1, j≠i}^{q^k} erfc( √(dH(x^(i), x^(j))·Es/N0) )
             = (1/2)·Σ_{d=dmin}^{n} A_d·erfc( √(d·Es/N0) )
 Approximation for Pw
 With the general bound erfc(√(x + y)) ≤ e^{−y}·erfc(√x) we obtain
erfc(√(d·Es/N0)) = erfc(√(Es/N0 + (d−1)·Es/N0)) ≤ e^{−(d−1)·Es/N0}·erfc(√(Es/N0))
and therefore
Pe(x^(i)) ≤ (1/2)·erfc(√(Es/N0)) · e^{Es/N0} · A(D)|_{D = e^{−Es/N0}}
Estimation of Error Probability for (7,4)2-Hamming Code
 t = 1, dmin = 3 and A(D) = 1 + 7D³ + 7D⁴ + D⁷; BSC crossover probability after
hard decision: Pe = (1/2)·erfc(√(Rc·Eb/N0))
 Hard decision (BMD):
Pw = 1 − Σ_{d=0}^{t} (n choose d)·(1 − Pe)^{n−d}·Pe^d = 1 − (1 − Pe)⁷ − 7·Pe·(1 − Pe)⁶
 Soft decision (MLD, union bound):
Pw ≤ (1/2)·Σ_{d=dmin}^{n} A_d·erfc(√(d·Es/N0))
   = (7/2)·erfc(√(3Es/N0)) + (7/2)·erfc(√(4Es/N0)) + (1/2)·erfc(√(7Es/N0))
[Figure: Pw versus Eb/N0 in dB (2 … 12 dB, Pw from 10⁰ down to 10⁻⁸), curves "Hard, BMD" and "Soft - MLD"; the soft-decision curve lies below — with a hard decision at the input of the decoder, information is lost]
Description of Linear Block Codes by
Matrices
Description of Linear Block Codes by Matrices (1)
 Information word u = [u0 u1 … u_{k−1}] and code word x = [x0 x1 … x_{n−1}]
 Generator matrix of dimension k×n: each row contains a valid code word
G = [ g_{0,0}   …  g_{0,n−1}
      ⋮              ⋮
      g_{k−1,0} …  g_{k−1,n−1} ],  g_{i,j} ∈ GF(q)
 the rows are linearly independent
 they span the code (vector space) → basis of the code space
 Encoding: x = u·G mod q → linear combination of the rows of G with coefficients u_i
 Code: Γ = { x = u·G mod q | u ∈ GF(q)^k }
(in the sequel, all calculations are carried out mod q)
 Systematic encoder (u explicitly part of x: x = [u p])
Gaussian normal form: G = [ I_{k×k}  P_{k×(n−k)} ]
 the parity symbols are given by linear combinations of the information symbols
Description of Linear Block Codes by Matrices (2)
 Parity check matrix of dimension (n−k)×n:
H = [ h_{0,0}     …  h_{0,n−1}
      ⋮                ⋮
      h_{n−k−1,0} …  h_{n−k−1,n−1} ],  h_{i,j} ∈ GF(q)
 the rows are linearly independent
 they span a vector space orthogonal to the rows of G
 Gaussian normal form (systematic encoder, i.e. x = [u p]):
H = [ −P^T_{k×(n−k)}  I_{(n−k)×(n−k)} ]
 Code: x·H^T mod q = 0 → Γ = { x ∈ GF(q)^n | x·H^T mod q = 0 }
→ Γ is the null space of H
 Generator and parity check matrix: G·H^T = 0
 e.g. in Gaussian normal form:
G·H^T = [ I_{k×k}  P_{k×(n−k)} ] · [ −P_{k×(n−k)} ; I_{(n−k)×(n−k)} ] = −P_{k×(n−k)} + P_{k×(n−k)} = 0
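Encoding and the parity check G·H^T = 0 can be verified directly; a Python sketch for the binary case, using the systematic (7,4) Hamming matrices that appear later in this chapter (over GF(2), −P = P):

```python
# systematic (7,4) Hamming code: G = [I | P], H = [P^T | I]
P = [[0,1,1], [1,0,1], [1,1,0], [1,1,1]]
G = [[int(i == j) for j in range(4)] + P[i] for i in range(4)]
H = [[P[i][r] for i in range(4)] + [int(r == c) for c in range(3)] for r in range(3)]

def encode(u):
    """x = u * G mod 2."""
    return [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def syndrome(y):
    """s = y * H^T mod 2."""
    return [sum(y[j] * H[r][j] for j in range(7)) % 2 for r in range(3)]

x = encode([1, 0, 1, 1])
print(x)            # [1, 0, 1, 1, 0, 1, 0]: u appears in the first 4 positions
print(syndrome(x))  # [0, 0, 0]: every code word lies in the null space of H
```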
Elementary Matrix Operations
 Generator matrix:
 Changing order of columns leads to equivalent codes (changing symbol order in
code words)
• Columns of G can be arranged arbitrarily
• Equivalent codes have identical distance properties, only mapping from u to x is different
• However: error correction capabilities may vary between equivalent codes (e.g. burst
errors)
 Linear combinations of the rows of G do not affect the code
 Elementary matrix operations allow the Gaussian normal form: G = [ I_{k×k}  P_{k×(n−k)} ]
 Parity check matrix:
 Changing order of columns leads to equivalent codes (cf. generator matrix)
 The minimum distance dmin of the code corresponds to the minimum number of linearly
dependent columns in H! (No equivalent relation for G is known.)
Dual Code
 If H is used as generator matrix, a code Γ′ orthogonal to the original
code Γ is defined
 The dual code is defined by
Γ′ = { b ∈ GF(q)^n | b·x^T = 0 for all x ∈ Γ } = { b ∈ GF(q)^n | b·G^T = 0 }
   = { b = v·H | v ∈ GF(q)^{n−k} }
with code words b = v·H of the dual code
 Recall: H is of dimension (n−k)×n
 a code word b contains n symbols, but
 the information word v consists of only n−k symbols
 the code space contains only q^{n−k} elements
 If n−k is much smaller than k, it can be favorable to perform decoding for the original
code Γ with respect to the dual code Γ′
Examples of Codes (1)
 Repetition code: (n, 1, n) code
G = [1 1 … 1] (a single row of n ones)
H = [ 1  I_{n−1} ] (each of the n−1 rows checks x0 + x_i = 0)
Γ = { [0 0 … 0], [1 1 … 1] },  Rc = 1/n and dmin = n
 Single parity check (SPC) code: (n, n−1, 2) code
 The repetition code is the dual code of the single parity check code
→ generator and parity check matrix are exchanged
G = [ I_{n−1}  1 ] (identity plus an all-ones column),  H = [1 1 … 1]
Rc = (n−1)/n and dmin = 2
Examples of Codes (2)
 Hamming code:
qr 1
The length of a (n, n-r ,3)q-Hamming code of rank r is defined by n 
q 1
Hamming codes are perfect codes, i.e. the number of distinct syndromes equals
the number of correctable errors.
All Hamming codes have a minimum distance of dmin = 3, whereas the code rate
tends to Rc = 1 for n→.
k (q r  1) /(q  1)  r (q r  1)  r (q  1) (1  q  r )  r (q  1)q  r
Rc     r

r 
1
n (q  1) /(q  1)
r
(q  1)
r
(1  q )

Columns of parity check matrix represent all 2r-1 possible binary vectors of
length r
 q = 2: (3,1), (7,4), (15,11), (31,26), (63,57), (127,120)
Examples of Codes (3)
 Hamming code:
 Example: (7, 4, 3)₂ Hamming code
G = [ 1 0 0 0 0 1 1          H = [ 0 1 1 1 1 0 0
      0 1 0 0 1 0 1                1 0 1 1 0 1 0
      0 0 1 0 1 1 0                1 1 0 1 0 0 1 ]
      0 0 0 1 1 1 1 ]
 Perfect code: the columns of H represent all 2^{n−k} − 1 nonzero syndromes
 For an appropriately ordered H, the value of the syndrome s corresponds to the position
of the error in y → but then the encoder is no longer systematic!
 Simplex code:
 The simplex code is obtained by exchanging G and H (dual codes)
 All code words have the same pairwise Hamming distance (dH = 4 for the code dual to the
(7,4)₂ Hamming code) → "simplex"
Standard Array and Syndrome Decoding (1)
 Assumption: y = x + e mod q is the received vector
 Decoding by calculating the syndrome s = [s0 s1 … s_{n−k−1}]:
s = y·H^T = (x + e)·H^T = x·H^T + e·H^T = 0 + e·H^T = e·H^T
 The syndrome depends only on the error vector e, not on the transmitted code word x!
 Error detection: check on s = 0 → no detectable error (e ∈ Γ, e.g. e = 0)
s ≠ 0 → error detected (e ∉ Γ)
 Error correction: q^n − q^k possible errors e, but only q^{n−k} possible syndromes s
 different error events e lead to the same syndrome s
 no one-to-one correspondence → not every error is correctable
Standard Array and Syndrome Decoding (2)
 Standard array decoding:
 Partition set of all receive words y into cosets Γμ of those words
   that generate the same syndrome sμ and store cosets in a table:

     Γμ = { e ∈ GF(q)^n | e · H^T = sμ }        Γ0 = 𝒞 due to s = 0

 Determine coset leader eμ, e.g. word with minimum Hamming weight wH(eμ)
 If syndrome s ≠ 0, search received word y in cosets Γμ, 1 ≤ μ ≤ q^(n-k) - 1, and subtract
   coset leader from y:  x̂ = y - eμ   →  realization of Maximum-Likelihood decoding
 Syndrome decoding:
 Instead of storing whole cosets, just save syndromes sμ and corresponding coset
   leaders eμ
 If syndrome s ≠ 0, simply determine coset leader eμ associated with syndrome s and
   correct error by subtracting eμ from received word:  x̂ = y - eμ
Syndrome Decoding for (7,4)-Hamming Code

         | 0 1 1 1 1 0 0 |
     H = | 1 0 1 1 0 1 0 |
         | 1 1 0 1 0 0 1 |

 [Standard array table: the 2^4 = 16 code words form the coset Γ0; each further
   coset contains the 16 words y = x + e for one fixed coset leader e]
 Only 2^(7-4) = 8 different syndromes, but 2^7 - 2^4 = 128 - 16 = 112 possible
   error patterns
 dmin = 3 → t = 1 error correctable
 Coset leaders represented by the 7 vectors with wH(e) = 1:

     syndrome   coset leader
       001        0000001
       010        0000010
       011        1000000
       100        0000100
       101        0100000
       110        0010000
       111        0001000
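The table above can be generated and used directly; a sketch of syndrome decoding for this (7,4) Hamming code, where the coset-leader table maps each nonzero syndrome (= a column of H) to the erroneous bit position:

```python
# Sketch of syndrome decoding for the (7,4) Hamming code with the parity check
# matrix H from this slide; the coset leaders are exactly the seven weight-1
# error patterns, so the table maps each syndrome to one bit position.

H = [[0, 1, 1, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def syndrome(y):
    """s = y * H^T over GF(2)."""
    return tuple(sum(h * b for h, b in zip(row, y)) % 2 for row in H)

# Column j of H is the syndrome of a single error at position j.
leader = {tuple(row[j] for row in H): j for j in range(7)}

def decode(y):
    """Correct at most one bit error; return the corrected word."""
    s = syndrome(y)
    x = list(y)
    if any(s):                    # s != 0 -> subtract the coset leader
        x[leader[s]] ^= 1
    return x

x = [0, 1, 0, 0, 1, 0, 1]         # a code word (row of the systematic G)
y = list(x); y[4] ^= 1            # single bit error at position 4
print(decode(y) == x)             # True
```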
Modification of Linear Codes
 Construction of a (n', k', d'min) code from a given (n, k, dmin) code: “new codes
   from old codes”
 Expansion: appending additional parity check symbols
     n' > n,  k' = k      →   R'c < Rc,   d'min ≥ dmin
 Puncturing: removing redundancy from code word
     n' < n,  k' = k      →   R'c > Rc,   d'min ≤ dmin
 Lengthening: appending additional information symbols
     n' > n,  k' > k,  n' - k' = n - k    →   R'c > Rc,   d'min ≤ dmin
 Shortening: removing information symbols from code word
     n' < n,  k' < k,  n' - k' = n - k    →   R'c < Rc,   d'min ≥ dmin
Cyclic Codes
Cyclic Block Codes (1)
 Restriction to linear cyclic block codes
   → they can be described with generator and parity check matrix
 Property of cyclic codes: cyclic shift generates new valid code word
     [x0 x1 … xn-1] ∈ 𝒞   →   [xn-1 x0 x1 … xn-2] ∈ 𝒞
 Better (more compact) description by polynomials
                                           n-1
     [x0 x1 … xn-1] ∈ GF(q)^n   ↔   x(D) =  Σ  xi · D^i ∈ GFq[D]n-1   with xi ∈ GF(q)
                                           i=0
 Example: [1 0 1 1 0 1 1]   ↔   x(D) = 1 + D^2 + D^3 + D^5 + D^6
 GFq[D]: set of all polynomials of arbitrary rank, coefficients xi ∈ GF(q)
 GFq[D]r: set of all polynomials of maximum rank r, coefficients xi ∈ GF(q)
 Cyclic shift by m positions with respect to polynomials
     [xn-m … xn-1 x0 … xn-m-1]   ↔   R_{D^n-1}[ D^m · x(D) ]
Cyclic Block Codes (2)
 Remainder of polynomial division
     b(D) = u(D) · g(D) + r(D)   →   R_{g(D)}[b(D)] = r(D)   with rank r(D) < rank g(D)
 Calculation modulo g(D) is equivalent to setting g(D) = 0 → simplifies calculation!
 Rules for polynomial division:
 Additivity:          R_{g(D)}[a(D) + b(D)] = R_{g(D)}[a(D)] + R_{g(D)}[b(D)]
 Multiplication:      R_{g(D)}[a(D) · b(D)] = R_{g(D)}[ R_{g(D)}[a(D)] · R_{g(D)}[b(D)] ]
 Multiple of divisor: R_{g(D)}[a(D) · g(D)] = 0
 Rank:                rank a(D) < rank g(D)  →  R_{g(D)}[a(D)] = a(D)
 Modulo D^n - 1:      R_{D^n-1}[D^m] = D^(Rn[m])   because  D^m = D^(Rn[m]) · (D^n)^⌊m/n⌋
                      and R_{D^n-1}[D^n] = 1
Cyclic Block Codes (3)
 Proof of cyclic shift relation for m = 1:
     R_{D^n-1}[D · x(D)] = R_{D^n-1}[ x0·D + x1·D^2 + … + xn-2·D^(n-1) + xn-1·D^n ]
                         = x0·D + x1·D^2 + … + xn-2·D^(n-1) + xn-1
                         ↔ [xn-1 x0 x1 … xn-2]
   with R_{D^n-1}[D^n] = 1

 Example: n = 7, m = 3
     x = [0 0 1 1 0 1 0]   ↔   x(D) = D^2 + D^3 + D^5
     x'(D) = R_{D^7-1}[D^3 · x(D)] = R_{D^7-1}[D^5 + D^6 + D^8] = D + D^5 + D^6
     using R_{D^7-1}[D^8] = D^(R7[8]) = D^1
     →   x' = [0 1 0 0 0 1 1]
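The shift relation amounts to rotating the coefficient vector; a minimal sketch (hypothetical helper name `poly_shift`) that reproduces the n = 7, m = 3 example:

```python
# Sketch of the cyclic-shift relation: multiplying x(D) by D^m modulo D^n - 1
# rotates the coefficient vector by m positions (example from the slide, n = 7).

def poly_shift(x, m):
    """Coefficients of R_{D^n-1}[D^m * x(D)] for a length-n coefficient list x."""
    n = len(x)
    out = [0] * n
    for i, c in enumerate(x):
        out[(i + m) % n] ^= c    # D^(i+m) -> D^((i+m) mod n), GF(2) addition
    return out

x = [0, 0, 1, 1, 0, 1, 0]        # x(D) = D^2 + D^3 + D^5
print(poly_shift(x, 3))           # [0, 1, 0, 0, 0, 1, 1] = D + D^5 + D^6
```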
Generator and Parity Check Polynomial (1)
 For each cyclic (n,k) code 𝒞 exactly one generator polynomial g(D) of rank n-k
   exists. With information polynomial u(D) = u0 + … + uk-1·D^(k-1) of rank k-1 the
   code is given by
     𝒞 = { x(D) = u(D) · g(D) | u(D) ∈ GFq[D]k-1 }
 Generator polynomial: g(D) = g0 + g1·D + … + gn-k·D^(n-k)
 Normalization: gn-k = 1  →  g(D) = g0 + g1·D + … + D^(n-k)   monic polynomial
 Generator polynomial g(D) does not uniquely describe a code
   → additional parameter like code length n or number of info symbols k is necessary
 Let g(D) be a generator polynomial of rank n-k. If 𝒞 is a cyclic code, then
   R_{g(D)}[D^n - 1] = 0 is fulfilled, i.e. g(D) divides D^n - 1 with vanishing remainder!
 The same g(D) can construct different cyclic codes with the same number of parity
   symbols, as long as R_{g(D)}[D^n - 1] = 0 is fulfilled
Generator and Parity Check Polynomial (2)
 Code:
     𝒞 = { x(D) = u(D) · g(D) | u(D) ∈ GFq[D]k-1 }         Code is given by all possible linear
       = { x(D) ∈ GFq[D]n-1 | R_{g(D)}[x(D)] = 0 }          combinations of cyclically shifted
                                                            versions of g(D) → R_{g(D)}[x(D)] = 0
 Cyclic codes are linear block codes
 Code can also be described by a generator matrix G
 Constructing generator matrix G on basis of g(D) (generally in nonsystematic form)

         | g0 g1 ⋯ gn-k               |       |  g(D)          |
     G = |    g0 g1 ⋯ gn-k            |   ↔   |  D·g(D)        |
         |        ⋱          ⋱        |       |  ⋮             |
         |           g0 g1 ⋯ gn-k     |       |  D^(k-1)·g(D)  |

 Systematic form can be achieved by elementary row operations → example
Example: Systematic (7,4) Cyclic Code
 Generator polynomial: g(D) = 1 + D^2 + D^3, k = 4, n = 7
 Construction of systematic generator matrix

   Step 1: rows are the shifts D^i·g(D), i = 0, …, 3

     | 1 0 1 1 0 0 0 |   g(D)
     | 0 1 0 1 1 0 0 |   D·g(D)
     | 0 0 1 0 1 1 0 |   D^2·g(D)
     | 0 0 0 1 0 1 1 |   D^3·g(D)

   Steps 2-4: clear the entries left of the diagonal by row additions

     | 1 0 0 0 1 0 1 |   g(D) + D^2·g(D) + D^3·g(D)
     | 0 1 0 0 1 1 1 |   D·g(D) + D^3·g(D)
     | 0 0 1 0 1 1 0 |   D^2·g(D)
     | 0 0 0 1 0 1 1 |   D^3·g(D)

 Cyclic and systematic (7,4,3) Hamming code
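The row operations above can be replayed mechanically; a small sketch that builds the nonsystematic matrix from the shifts of g(D) and applies the same GF(2) row additions:

```python
# Sketch: build the nonsystematic generator matrix of the cyclic code with
# g(D) = 1 + D^2 + D^3 from the shifts g(D), ..., D^3 g(D) and bring it into
# systematic form by the row additions listed on the slide.

g = [1, 0, 1, 1]                 # g(D) = 1 + D^2 + D^3
n, k = 7, 4

# Rows D^i * g(D), i = 0..k-1
G = [[0] * i + g + [0] * (n - len(g) - i) for i in range(k)]

def add_rows(M, dst, src):
    """Row addition over GF(2)."""
    M[dst] = [(a + b) % 2 for a, b in zip(M[dst], M[src])]

add_rows(G, 1, 3)                # row1 <- D g(D) + D^3 g(D)
add_rows(G, 0, 2)                # row0 <- g(D) + D^2 g(D)
add_rows(G, 0, 3)                # row0 <- ... + D^3 g(D)

for row in G:
    print(row)                   # systematic form [I | P]
```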
Generator and Parity Check Polynomial (3)
 Well-known conditions for matrices:
     G · H^T mod q = 0          x · H^T mod q = 0
 Similar to the matrix description the code can also be described using a parity
   check polynomial h(D): monic polynomial (hk = 1)
     h(D) · g(D) = D^n - 1   →   rank h(D) = k          h(D) = h0 + h1·D + … + D^k
 Code:
     𝒞 = { x(D) ∈ GFq[D]n-1 | R_{D^n-1}[x(D) · h(D)] = 0 }
 Relation:
     R_{D^n-1}[x(D) · h(D)] = R_{D^n-1}[u(D) · g(D) · h(D)] = R_{D^n-1}[u(D) · (D^n - 1)] = 0
Generator and Parity Check Polynomial (4)
 Parity check polynomial h(D):
 Constructing parity check matrix H on basis of h(D)

         | hk hk-1 ⋯ h0                |       |  h̃(D)            |
     H = |    hk hk-1 ⋯ h0             |   ↔   |  D·h̃(D)          |
         |         ⋱           ⋱       |       |  ⋮                |
         |            hk hk-1 ⋯ h0     |       |  D^(n-k-1)·h̃(D)  |

   with reciprocal polynomial  h̃(D) = D^k · h(D^-1) = h0·D^k + h1·D^(k-1) + … + hk-1·D + 1

 Dual code: exchange generator polynomial and reciprocal parity check polynomial
     g⊥(D) = h̃(D) = D^k · h(D^-1)
     h⊥(D) = g̃(D) = D^(n-k) · g(D^-1)
Systematic Encoding with Generator Polynomial (1)
 Non-systematic encoding of information polynomial u(D) = u0 + … + uk-1·D^(k-1):
     x(D) = u(D) · g(D)
 Systematic encoding approach: adding n-k parity symbols by separate
   polynomial p(D) of rank n-k-1 to linearly shifted version of u(D)
     x(D) = p(D) + u(D) · D^(n-k)    →   x = [ p(D) | u(D) ]  with n-k parity and k info symbols
 Calculating p(D):
     R_{g(D)}[x(D)] = R_{g(D)}[p(D) + u(D)·D^(n-k)] = R_{g(D)}[p(D)] + R_{g(D)}[u(D)·D^(n-k)]
                    = p(D) + R_{g(D)}[u(D)·D^(n-k)]   =!   0
     →   p(D) = -R_{g(D)}[ u(D) · D^(n-k) ]
 Efficient calculation by using linear recursive shift register!
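Before turning to the shift register, the parity polynomial can be computed by plain polynomial division; a minimal sketch in GF(2) (where the minus sign can be dropped), using the (7,4) Hamming generator g(D) = 1 + D + D³:

```python
# Sketch of systematic encoding for a binary cyclic code: compute
# p(D) = -R_{g(D)}[u(D) * D^(n-k)] by polynomial division (minus equals plus
# in GF(2)) and prepend it to the shifted information word.

def poly_mod(a, g):
    """Remainder of a(D) divided by g(D) over GF(2); coefficient lists, low first."""
    a = list(a)
    dg = len(g) - 1
    for i in range(len(a) - 1, dg - 1, -1):
        if a[i]:
            for j, gj in enumerate(g):
                a[i - dg + j] ^= gj
    return a[:dg]

def encode_systematic(u, g, n):
    """x = [p | u] with p(D) = R_g[u(D) * D^(n-k)] in GF(2)."""
    k = n - (len(g) - 1)
    assert len(u) == k
    shifted = [0] * (n - k) + u           # u(D) * D^(n-k)
    p = poly_mod(shifted, g)              # parity polynomial p(D)
    return p + u

g = [1, 1, 0, 1]                          # g(D) = 1 + D + D^3 ((7,4) Hamming)
print(encode_systematic([1, 0, 1, 0], g, 7))   # [0, 0, 1, 1, 0, 1, 0]
```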
Systematic Encoding with Generator Polynomial (2)
 Horner's scheme:
     u(D)·D^(n-k) = u0·D^(n-k) + D·( u1·D^(n-k) + D·( u2·D^(n-k) + … + D·( uk-1·D^(n-k) ) … ) )   (1)
 Inserting (1) into the last equation of the last slide, p(D) = -R_{g(D)}[u(D)·D^(n-k)]:
     p(D) = -R_{g(D)}[ u0·D^(n-k) + D·( u1·D^(n-k) + … + D·( uk-2·D^(n-k) + D·uk-1·D^(n-k) ) … ) ]   (2)
 Applying modulo division to the product:
     R_{g(D)}[D · a(D)] = R_{g(D)}[ R_{g(D)}[D] · R_{g(D)}[a(D)] ] = R_{g(D)}[ D · R_{g(D)}[a(D)] ]   (3)
   yields
     R_{g(D)}[ ui·D^(n-k) + D·a(D) ] = R_{g(D)}[ ui·D^(n-k) + D·R_{g(D)}[a(D)] ]   (4)
 Applying (4) to (2) leads to description of linear shift register
Systematic Encoding with Generator Polynomial (3)
 With R[·] = R_{g(D)}[·] the following equation holds:
     p(D) = -R[ u(D)·D^(n-k) ]
          = -R[ u0·D^(n-k) + D·R[ u1·D^(n-k) + … + D·R[ uk-2·D^(n-k) + D·R[ uk-1·D^(n-k) ] ] … ] ]
                                                                          \_______________/
                                                                              r^(1)(D)
                                                     \________________________________________/
                                                                     r^(2)(D)
                                                              ⋯ r^(k-1)(D), r^(k)(D)
 Properties of polynomials r^(i)(D) → recursive structure:
     r^(0)(D) = 0,    r^(i)(D) = R_{g(D)}[ uk-i·D^(n-k) + D·r^(i-1)(D) ],    p(D) = -r^(k)(D)
   (in GF(2) the sign can be dropped: p(D) = r^(k)(D))
Systematic Encoding with Generator Polynomial (4)
 Polynomials r^(i)(D) can be recursively calculated:
                n-k-1
     r^(i)(D) =   Σ   rj^(i)·D^j  =  R_{g(D)}[ uk-i·D^(n-k) + D·r^(i-1)(D) ]
                 j=0
                                              n-k-1
              =  R_{g(D)}[ uk-i·D^(n-k) + D ·   Σ   rj^(i-1)·D^j ]
                                               j=0
                                                              n-k-1
              =  R_{g(D)}[ ( uk-i + rn-k-1^(i-1) )·D^(n-k) ] +  Σ   rj-1^(i-1)·D^j
                                                               j=1
 Division modulo g(D) is equivalent with setting g(D) = 0:
   because of gn-k = 1 this corresponds to the (repeated) substitution
                     n-k-1
     D^(n-k)  =  -     Σ   gi·D^i
                      i=0
 The same substitution follows from x(D) = u(D)·g(D) ≡ 0 mod g(D):
                     n-k-1
     x(D) = u(D) ·     Σ   gi·D^i  +  u(D)·D^(n-k)
                      i=0
Systematic Encoding with Generator Polynomial (5)
 It follows:
                                  n-k-1
     D^(n-k)·u(D)  ≡  -u(D) ·       Σ   gi·D^i        (mod g(D))
                                   i=0
 Polynomials r^(i)(D) can be recursively calculated:
                                                              n-k-1
     r^(i)(D) = R_{g(D)}[ ( uk-i + rn-k-1^(i-1) )·D^(n-k) ] +   Σ   rj-1^(i-1)·D^j
                                                               j=1
                                          n-k-1            n-k-1
              = -( uk-i + rn-k-1^(i-1) ) ·  Σ   gj·D^j   +   Σ   rj-1^(i-1)·D^j
                                           j=0              j=1
                n-k-1
              =   Σ   [ rj-1^(i-1) - gj·( uk-i + rn-k-1^(i-1) ) ]·D^j ;       r-1^(i-1) = 0
                 j=0
Systematic Encoding with Generator Polynomial (6)
                n-k-1
     r^(i)(D) =   Σ   [ rj-1^(i-1) - gj·( uk-i + rn-k-1^(i-1) ) ]·D^j ;       r-1^(i-1) = 0
                 j=0
 Interpretation as linear shift register:
 Initialization: r0^(0) = … = rn-k-1^(0) = 0
 i = 1:
                n-k-1
     r^(1)(D) =   Σ   [ rj-1^(0) - gj·( uk-1 + rn-k-1^(0) ) ]·D^j
                 j=0
              = -g0·uk-1 - g1·uk-1·D - … - gn-k-1·uk-1·D^(n-k-1)
   - power of D determines branch of shift register
   - the last information symbol uk-1 is multiplied in each branch with the
     corresponding negative coefficient -gj of the generator polynomial
   → register cell rj^(1) contains -uk-1·gj (= uk-1·gj in GF(2))
Systematic Encoding with Generator Polynomial (7)
                n-k-1
     r^(i)(D) =   Σ   [ rj-1^(i-1) - gj·( uk-i + rn-k-1^(i-1) ) ]·D^j ;       r-1^(i-1) = 0
                 j=0
 i = 2:
                n-k-1
     r^(2)(D) =   Σ   [ rj-1^(1) - gj·( uk-2 + rn-k-1^(1) ) ]·D^j
                 j=0
              = [ r-1^(1) - g0·( uk-2 + rn-k-1^(1) ) ] + … + [ rn-k-2^(1) - gn-k-1·( uk-2 + rn-k-1^(1) ) ]·D^(n-k-1)
   → combine next symbol uk-2 with the (n-k)-th cell rn-k-1^(1) (XOR in GF(2))
   → multiply result in each branch with corresponding negative coefficient -gj
   → add products to old contents of previous cells rj-1^(1)
 i > 2: repeat until all k information symbols are processed
 i = k: shift register contains parity symbols
   → append parity symbols at information word
Systematic Encoding with Generator Polynomial (8)
 General structure of shift register
 Calculation for the j-th register cell:   rj^(i) = rj-1^(i-1) - gj·( uk-i + rn-k-1^(i-1) )

   [Figure: feedback shift register with n-k cells r0^(i) … rn-k-1^(i); the
    information symbols uk-1, …, u1, u0 enter serially, are combined with the
    last cell rn-k-1^(i) and fed back via the branches -g0, -g1, …, -gn-k-1]
Systematic Encoding with Generator Polynomial (9)
 Structure of shift register for (7,4)-Hamming-Code
     g(D) = 1 + D + D^3

   [Figure: register with 3 cells and feedback taps g0 = 1, g1 = 1, g2 = 0;
    the table of register states for the input sequence u3, u2, u1, u0 ends
    with the parity word 0 0 1 in the register]

 Example: u = [1 0 1 0], x = [0 0 1 | 1 0 1 0]
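The register can be simulated step by step; a sketch in GF(2) (where the negative taps reduce to XOR), clocking in u_{k-1} first and reproducing the example above:

```python
# Sketch of the linear feedback shift-register encoder for g(D) = 1 + D + D^3:
# per clock, feedback = u_in + r_{n-k-1} (XOR in GF(2)) is fed into the
# branches g_j, and the cells shift: r_j <- r_{j-1} + g_j * feedback.

def encode_shift_register(u, g):
    """Systematic encoding; g is the coefficient list [g_0 ... g_{n-k}]."""
    m = len(g) - 1                    # n - k register cells
    r = [0] * m
    for sym in reversed(u):           # feed u_{k-1} first
        fb = sym ^ r[m - 1]
        r = [g[0] & fb] + [r[j - 1] ^ (g[j] & fb) for j in range(1, m)]
    return r + u                      # register finally holds the parity symbols

g = [1, 1, 0, 1]                      # g(D) = 1 + D + D^3
print(encode_shift_register([1, 0, 1, 0], g))   # [0, 0, 1, 1, 0, 1, 0]
```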
Systematic Encoding with Parity Check Polynomial (1)
 Equivalent representation with parity check polynomial (without derivation)
              k-1
     xm  =  -  Σ  hi·xm+k-i  =  f( xm+1, …, xm+k )     with  x = [ x0 … xn-k-1 | xn-k … xn-1 ]
              i=0                                                 n-k parity      k information
                                                                  symbols         symbols
 General structure of shift register

   [Figure: register of length k holding xm+1 … xm+k with taps -hk-1, …, -h0;
    initialization with information word u(D); in each step one parity symbol
    xm, m = n-k-1, …, 0 is generated]

     R_{D^n-1}[x(D) · h(D)] = 0
Systematic Encoding with Parity Check Polynomial (2)
 Structure of shift register for (7,4)-Hamming-Code
     h(D) = 1 + D + D^2 + D^4

   [Figure: register of length k = 4, initialized with the information word;
    the state sequence generates the parity symbols x2, x1, x0]

 Example: u = [1 0 1 0], x = [0 0 1 | 1 0 1 0]
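The recursion x_m = -Σ h_i·x_{m+k-i} can be evaluated directly; a sketch in GF(2) (minus = plus) that reproduces the example with h(D) = 1 + D + D² + D⁴:

```python
# Sketch of systematic encoding with the parity check polynomial of the
# (7,4) Hamming code: place the information word in x_{n-k} ... x_{n-1} and
# generate x_m = -sum_i h_i * x_{m+k-i} (XOR in GF(2)) for m = n-k-1, ..., 0.

def encode_with_h(u, h, n):
    """h is the coefficient list [h_0 ... h_k] with h_k = 1."""
    k = len(h) - 1
    x = [0] * (n - k) + u
    for m in range(n - k - 1, -1, -1):
        x[m] = 0
        for i in range(k):            # i = 0 .. k-1
            x[m] ^= h[i] & x[m + k - i]
    return x

h = [1, 1, 1, 0, 1]                   # h(D) = 1 + D + D^2 + D^4
print(encode_with_h([1, 0, 1, 0], h, 7))   # [0, 0, 1, 1, 0, 1, 0]
```

The result matches the encoder based on g(D) = 1 + D + D³, as it must, since both describe the same (7,4) Hamming code.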
Possible Choices for Encoder Implementation
 Matrix operation: x = u·G
 n·k = n2·Rc operations (additions and multiplications)

 Polynomial multiplication x(D) = u(D)·g(D) (non-systematic)


 (n-k)·k = n2·(1-Rc) ·Rc operations

 Shift register with g(D)


 Shift register of length n-k, k shifts are carried out
 (n-k)·k = n2·(1-Rc) ·Rc operations
 Can be parallelized by application of n-k processors

 Shift register with h(D)


 Shift register of length k, n-k shifts are carried out
 (n-k)·k = n2·(1-Rc) ·Rc operations
 Parallelization cannot be achieved
Calculation of Syndrome with Shift Register
 Representation of syndrome as polynomial s(D) = s0 + … + sn-k-1·D^(n-k-1) of rank ≤ n-k-1
     s(D) = R_{g(D)}[y(D)] = R_{g(D)}[x(D) + e(D)] = R_{g(D)}[x(D)] + R_{g(D)}[e(D)]
          = R_{g(D)}[e(D)]
 General structure of shift register

   [Figure: the received symbols y0, y1, …, yn-1 are shifted into the same
    feedback register (branches -g0, -g1, …, -gn-k-1); after n shifts the cells
    r0 … rn-k-1 contain the syndrome coefficients]
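A minimal sketch of the syndrome property (computed here by polynomial division rather than the register itself, which yields the same remainder), for the (7,4) cyclic Hamming code with g(D) = 1 + D + D³:

```python
# Sketch of s(D) = R_g[y(D)] = R_g[e(D)] for the (7,4) cyclic Hamming code:
# the code word contribution vanishes, the syndrome depends only on e(D).

def poly_mod(a, g):
    """Remainder of a(D) mod g(D) over GF(2); coefficient lists, low first."""
    a = list(a)
    dg = len(g) - 1
    for i in range(len(a) - 1, dg - 1, -1):
        if a[i]:
            for j, gj in enumerate(g):
                a[i - dg + j] ^= gj
    return (a + [0] * dg)[:dg]

g = [1, 1, 0, 1]                       # g(D) = 1 + D + D^3
x = [0, 0, 1, 1, 0, 1, 0]              # code word -> R_g[x(D)] = 0
e = [0, 0, 0, 0, 1, 0, 0]              # single error e(D) = D^4
y = [xi ^ ei for xi, ei in zip(x, e)]

print(poly_mod(x, g))                    # [0, 0, 0]
print(poly_mod(y, g) == poly_mod(e, g))  # True
```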
Decoding Performance for Burst Errors
 Cyclic codes suited especially for detection of burst errors
 s(D) = 0: no error or error is not detectable          s(D) does not necessarily
 s(D) ≠ 0: detectable error                             agree with s = y·H^T, as H is
                                                        not uniquely defined!
 Burst error:
 Erroneous symbols are concentrated in a certain part of the received word
 Burst error of length t': t' successive symbols are erroneous with high probability
 Not every symbol within this part needs to be erroneous: wH(e) ≤ t'
 Cyclic errors can start at end of code word and continue at beginning

   All cyclic (n,k)q-codes are able to detect all burst errors up to a length of
   t' ≤ n-k. They fail for longer burst errors only at a rate of

               q^-(n-k-1) / (q-1)     for t' = n-k+1
     Pue  =  {
               q^-(n-k)               for t' ≥ n-k+2
Cyclic Redundancy Check (CRC) Codes
 CRC codes are cyclic (2^r-1, 2^r-r-2, 4)2 codes whose generator polynomial has the
   form g(D) = (1+D)·p(D), where p(D) is a primitive polynomial of rank r.

 Properties of cyclic redundancy check codes
 All error patterns with weight wH(e) ≤ 3 are detected.
 All error patterns with odd weight are detected.
 All burst errors up to a length of r + 1 are detected.
 Only a rate of 2^-r of bursts with length r + 2 cannot be detected.
 Only a rate of 2^-(r+1) of bursts with length larger than r + 2 cannot be detected.

 Example: CRC code with 16 parity bits (r = n-k-1 = 15) detects
 100 % of burst errors with length ≤ 16.
 99.9969 % of burst errors with length 17.
 99.9985 % of burst errors with length ≥ 18.

 Wide area of applications, e.g.
 Protection of compressed files (ZIP)
 Detection of errors for concealment in GSM speech transmission
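The burst-detection property can be verified exhaustively for a toy case; a sketch with r = 3 (an assumption chosen for a small search space, not an example from the slides): g(D) = (1+D)·p(D) with p(D) = 1 + D + D³ primitive, giving g(D) = 1 + D² + D³ + D⁴ and a (7,3,4) code, where every cyclic burst of length ≤ r + 1 = 4 must produce a nonzero syndrome.

```python
# Sketch: brute-force check of burst detection for a small CRC-style code with
# r = 3: g(D) = (1+D)(1+D+D^3) = 1 + D^2 + D^3 + D^4 on length n = 7. Every
# cyclic burst error of length <= r + 1 = 4 must yield s(D) = R_g[e(D)] != 0.

from itertools import product

def poly_mod(a, g):
    a = list(a)
    dg = len(g) - 1
    for i in range(len(a) - 1, dg - 1, -1):
        if a[i]:
            for j, gj in enumerate(g):
                a[i - dg + j] ^= gj
    return a[:dg]

n, g = 7, [1, 0, 1, 1, 1]            # g(D) = 1 + D^2 + D^3 + D^4

detected = True
for start in range(n):               # cyclic burst start position
    for bits in product([0, 1], repeat=4):
        if bits[0] == 0:             # burst begins with an erroneous symbol
            continue
        e = [0] * n
        for off, b in enumerate(bits):
            e[(start + off) % n] ^= b
        if not any(poly_mod(e, g)):  # syndrome 0 -> burst would be undetected
            detected = False
print(detected)                      # True
```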
Algebraic and Non-Algebraic Decoding
 Always same procedure for decoding of cyclic codes
 Calculation of syndrome s(D)
 Determine the error pattern e(D) from s(D)
(e.g. by storing all s(D) and corresponding e(D) → very expensive)
 Error correction by subtracting error pattern e(D) from received word y(D)

 Algebraic decoding
 Generally very demanding mathematical approaches
 Application of BMD for Reed-Solomon and BCH codes leads to efficient approach

 Non-Algebraic decoding
 Calculation of syndrome by exploiting code structure
 Demonstrative calculation of error pattern out of syndrome
 Computational effort increases with number of correctable errors
 Examples: Meggit decoder, majority logic decoder, threshold decoder, ...
Principle of Non-Algebraic Decoding
 Principle:
 An adjusted cyclic shift of y(D) leads to an error pattern containing the largest
   exponent D^(n-1)
 Lend: number of correctable error patterns that contain the largest exponent D^(n-1)
 Each correctable error e(D) can be described by a cyclic shift of one of these Lend
   error patterns
   → save only Lend syndromes s(D) in a list 𝓛 and the corresponding error patterns e(D)
 Basic approach for iterative decoding:
 Calculate syndrome s(D) for receive word y(D)
 for m = 0:n-1
   • if s(D) ∈ 𝓛 → correct y(D) by corresponding error pattern e(D)
   • else → shift y(D) by one position and calculate new syndrome for R_{D^n-1}[D·y(D)]
 end
 If no syndrome was found in 𝓛 at all → error pattern is not correctable
 Several variants are possible
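The loop above can be sketched for the cyclic (7,4) Hamming code with g(D) = 1 + D + D³, where Lend = 1: only the syndrome of e(D) = D^(n-1) is stored, and instead of shifting y(D) itself the equivalent update s(D) ← R_g[D·s(D)] is used (an implementation shortcut, not spelled out on the slide):

```python
# Sketch of the iterative (Meggitt-type) decoding loop for the cyclic (7,4)
# Hamming code: store only the syndrome of e(D) = D^(n-1); all other single
# errors are reached by cyclic shifts, i.e. by multiplying s(D) with D mod g(D).

def poly_mod(a, g):
    a = list(a)
    dg = len(g) - 1
    for i in range(len(a) - 1, dg - 1, -1):
        if a[i]:
            for j, gj in enumerate(g):
                a[i - dg + j] ^= gj
    return (a + [0] * dg)[:dg]

n, g = 7, [1, 1, 0, 1]                   # g(D) = 1 + D + D^3
s_ref = poly_mod([0] * (n - 1) + [1], g)  # syndrome of e(D) = D^(n-1)

def decode(y):
    """Correct at most one error; return the corrected word."""
    s = poly_mod(y, g)
    for m in range(n):
        if s == s_ref:                   # error sits at position n-1-m
            x = list(y)
            x[n - 1 - m] ^= 1
            return x
        s = poly_mod([0] + s, g)         # syndrome of shifted word: R_g[D*s(D)]
    return list(y)                       # s stayed 0, or pattern not correctable

x = [0, 0, 1, 1, 0, 1, 0]                # code word (u = [1 0 1 0])
y = list(x); y[2] ^= 1
print(decode(y) == x)                    # True
```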
Summary: Cyclic Block Codes

   Polynomial    | Symbol                                      | Rank (max)
   --------------|---------------------------------------------|-----------
   Information   | u(D) = u0 + u1·D + … + uk-1·D^(k-1)         | k-1
   Code          | x(D) = x0 + x1·D + … + xn-1·D^(n-1)         | n-1
   Generator     | g(D) = g0 + g1·D + … + D^(n-k)              | n-k
   Check         | h(D) = h0 + h1·D + … + D^k                  | k
   Syndrome      | s(D) = s0 + s1·D + … + sn-k-1·D^(n-k-1)     | n-k-1

   with  R_{g(D)}[x(D)] = 0,   R_{D^n-1}[x(D)·h(D)] = 0,   h(D)·g(D) = D^n - 1

 Cyclic shift by m positions      R_{D^n-1}[D^m · x(D)]
 Non-systematic encoding          x(D) = u(D) · g(D)
 Systematic encoding              x(D) = p(D) + u(D)·D^(n-k)   with   p(D) = -R_{g(D)}[u(D)·D^(n-k)]
 Syndrome calculation             s(D) = R_{g(D)}[y(D)]
Reed-Solomon-Codes
and
BCH-Codes
Reed-Solomon and BCH-Codes
 Until now no analytical construction method for really good codes has been
presented
 Wish: Construction of codes with defined properties, e.g. minimum distance
 RS- and BCH-Codes are very powerful codes with some advantages
 Here we restrict ourselves to non-binary RS-Codes and binary BCH-Codes
 They can be constructed in an analytical way
 The minimum distance dmin is known and can be used as a design parameter
 for RS-Codes the complete weight distribution is known
 RS-Codes are MDS codes  satisfy the Singleton bound with equality
 Both codes are very powerful, if the block length n is not too large
 The codes can be adapted to the error structure of the channel
 RS-Codes for burst errors and BCH-Codes for single errors
 Decoding with respect to the BMD method is easily possible
 Simple soft-decision information (erasures, BSEC) can be used in the decoder
 Compact description in spectral domain
Spectral Transformation on Galois Fields (1)
 Recall: Discrete Fourier Transformation (DFT) of the vector a = [a0 … an-1]
   with aκ ∈ ℂ is a vector A = [A0 … An-1] with elements
           n-1
     Ai  =  Σ  aκ · w^(iκ)      with n-th root of unity  w = e^(-j2π/n),  w^n = 1
           κ=0
 Now, we define the DFT of a vector a of length n with elements in GF(p^m)
 Use primitive element z of order n = p^m - 1, i.e. z^n = z^0 = 1, and thus z is an
   n-th root of unity, as the “kernel” of the DFT for GF(p^m)
 If we define two polynomials with coefficients in GF(p^m)
            n-1                                               n-1
     a(D) =  Σ  ai·D^i  ↔  a = [a0 a1 … an-1]    and   A(D) =  Σ  Ai·D^i  ↔  A = [A0 A1 … An-1]
            i=0                                               i=0
   then A(D) is called the DFT of a(D) if the i-th element of A(D) is given by
                      n-1
     Ai = a(z^-i)  =   Σ  aκ · z^-iκ
                      κ=0
Spectral Transformation on Galois Fields (2)
 Discrete spectral transformation on Galois fields:
                                               n-1
     A(D) = DFT[a(D)]:     Ai = a(z^-i)    =    Σ  aκ · z^-iκ
                                               κ=0
                                                       n-1
     a(D) = IDFT[A(D)]:    ai = n^-1·A(z^i) = n^-1 ·    Σ  Aμ · z^iμ      (n^-1 = 1 in GF(2^m))
                                                       μ=0
 Important correspondence between vectors and polynomials:
     a(z^-i) = 0  ↔  Ai = 0    if z^-i is a root of a(D) → i-th component of A is 0
     A(z^i) = 0   ↔  ai = 0    if z^i is a root of A(D) → i-th component of a is 0
 Cyclic convolution:                                   Cyclic shift by b symbols:
     c(D) = R_{D^n-1}[a(D)·b(D)]  ↔  Ci = Ai·Bi        c(D) = R_{D^n-1}[D^b·a(D)]  ↔  Ci = z^-ib·Ai
     C(D) = R_{D^n-1}[A(D)·B(D)]  ↔  ci = ai·bi        C(D) = R_{D^n-1}[D^b·A(D)]  ↔  ci = z^ib·ai
Example for Spectral Transformation on GF
 GF(5) = {0, 1, 2, 3, 4} with primitive element z = 3
 z^1 = 3, z^2 = 4, z^3 = 2, z^4 = 1 = z^0 and z^-1 = 2
 Example for a(D) = 2 + 2D + 3D^3  ↔  a = [2 2 0 3]:

     A0 = a(z^0)  = 2 + 2 + 0 + 3                  = 7       = 2 mod 5
     A1 = a(z^-1) = 2 + 2·2 + 0·2^2 + 3·2^3        = 2+4+24  = 0 mod 5
     A2 = a(z^-2) = 2 + 2·4 + 0·4^2 + 3·4^3        = 2+8+192 = 2 mod 5
     A3 = a(z^-3) = 2 + 2·3 + 0·3^2 + 3·3^3        = 2+6+81  = 4 mod 5

     A(D) = DFT[a(D)] = 2 + 2D^2 + 4D^3
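The example can be reproduced in a few lines; a minimal sketch of the DFT/IDFT pair over GF(5) (the modular-inverse call `pow(n, -1, p)` needs Python 3.8+):

```python
# Sketch: DFT over GF(5) with primitive element z = 3 (n = 4, z^-1 = 2),
# reproducing A(D) = 2 + 2D^2 + 4D^3 for a(D) = 2 + 2D + 3D^3, plus the
# inverse transform with the factor n^-1.

p, n, z = 5, 4, 3
z_inv = 2                              # 3 * 2 = 6 = 1 mod 5

def dft(a):
    """A_i = a(z^-i) = sum_k a_k * z^(-i*k) mod p."""
    return [sum(a_k * pow(z_inv, i * k, p) for k, a_k in enumerate(a)) % p
            for i in range(n)]

def idft(A):
    """a_k = n^-1 * A(z^k) mod p."""
    n_inv = pow(n, -1, p)
    return [n_inv * sum(A_i * pow(z, i * k, p) for i, A_i in enumerate(A)) % p
            for k in range(n)]

a = [2, 2, 0, 3]                       # a(D) = 2 + 2D + 3D^3
print(dft(a))                           # [2, 0, 2, 4]
print(idft(dft(a)) == a)                # True
```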
Definition of Reed-Solomon (RS)-Codes (1)
 Goal: Construct a code 𝒞 of block length n and code rate Rc = k/n with elements
   in GF(p^m) which can correct t errors (design distance d = dmin)
 d = 2t+1 is required (if d is odd, the relation n-k = d-1 = 2t holds → MDS code)
 Philosophy to construct such a code
 Define polynomial (in frequency domain) of rank ≤ k-1 with coefficients Xi ∈ GF(p^m)
     X(D) = X0 + X1·D + … + Xk-1·D^(k-1) = Xk-1·(D - β1)·(D - β2) ⋯ (D - βk-1)
 Fundamental theorem of algebra: X(D) has at most k-1 different roots βi ∈ GF(p^m)
 Take n different nonzero elements α0, …, αn-1 ∈ GF(p^m), insert them in X(D) and take
   the results as elements of the code word x of length n
     x = [x0 x1 … xn-1]    with    xi = X(αi)
 The code word x has a minimum weight of wH(x) ≥ d = n-k+1
   • X(D) has at most k-1 roots → at most k-1 coefficients xi are zero
   • As x consists of n coefficients, the remaining n-(k-1) elements have to be nonzero
Definition of Reed-Solomon (RS)-Codes (2)
 In order to construct a code 𝒞, all q^k = p^mk different polynomials X(D) with
   maximum rank k-1 are taken and n different nonzero elements αi are inserted to
   construct the set of q^k code words x(D)
 Due to the linearity, such a code has minimum distance dmin = n-k+1
 Usually, the αi are chosen as powers of a primitive element z of order n = q-1 = p^m-1
   → z^0 = 1, z^1, z^2, …, z^(n-1) and z^n = z^0 = 1
 Definition of RS-Codes in Frequency Domain
   We assume an element z ∈ GF(q) of order n, i.e. n is the smallest number
   with z^n = 1.
   The (n, k, dmin)q-Reed-Solomon code 𝒞 is defined by the set of all polynomials
   X(D) with degree X(D) ≤ k-1 and Xi ∈ GF(q).
   The code words x ∈ 𝒞 are constructed by xi = X(z^i).
   The smallest Hamming distance of the code is dmin = n - k + 1.
Basic Properties of Reed-Solomon-Codes
 Code word length: n = p^m - 1
 Code word length n corresponds to number of nonzero elements in the finite field.
 The choice of the extension field GF(p^m) determines the block length n!
 Dimension of the code: |𝒞| = q^k = p^mk
 Information word length: k = p^m - dmin
   (from d = n - k + 1 = p^m - 1 - k + 1 = p^m - k)
 Number of parity check symbols: n - k = d - 1 = 2t
 Code rate:  Rc = k/n = (p^m - dmin) / (p^m - 1)
Generator Polynomial for Reed-Solomon-Codes
 Cyclic code: x(D) = u(D) · g(D) with rank g(D) = n-k
 As X(D) is of rank k-1, the remaining n-k coefficients of the length-n vector X are zero
     X(D) = X0 + X1·D + … + Xk-1·D^(k-1)   ↔   X = [X0 X1 … Xk-1 0 … 0]
 In frequency domain the code words of a RS-Code contain n-k = d-1 consecutive
   zeros, known as parity frequencies
     Xi = 0   for   k ≤ i ≤ n-1
 Due to the relation Xi = x(z^-i), n-k consecutive powers of z are zeros of x(D)
     x(z^-k) = x(z^-(k+1)) = … = x(z^-(n-1)) = 0     →     g(z^-k) = … = g(z^-(n-1)) = 0
 As this relation is true for all x(D) (independent of u(D)), it has to be fulfilled by g(D)
 Consequently, g(D) can be factorized into the following n-k terms
             n-1                  n-1                          n-k
     g(D) =   Π  (D - z^-i)   =    Π  (D - z^(n-i))   =         Π  (D - z^i)
             i=k                  i=k     (z^n = 1)            i=1
Generator and Parity Check Polynomial for RS-Codes
                                  n-1                  n-1                        n-k
 Generator polynomial:   g(D) =    Π  (D - z^-i)   =    Π  (D - z^(n-i))   =       Π  (D - z^i)
                                  i=k                  i=k    (z^n = 1)           i=1
                                                  n-1
 General connection*:   g(D)·h(D) = D^n - 1   =    Π  (D - z^-i)
                                                  i=0
                                      k-1                  k-1                         n
 Parity check polynomial:   h(D) =     Π  (D - z^-i)   =    Π  (D - z^(n-i))   =        Π    (D - z^i)
                                      i=0                  i=0    (z^n = 1)         i=n-k+1
 Generalization
 The roots of g(D) and h(D) can be on arbitrary consecutive positions
             n-k+ℓ-1                          n+ℓ-1
     g(D) =     Π    (D - z^i)        h(D) =    Π     (D - z^i)
               i=ℓ                           i=n-k+ℓ
 Hint*: With z^n = 1 the relation (z^i)^n - 1 = (z^n)^i - 1 = 1 - 1 = 0 follows for
   every power z^i. As the powers z^i, 0 ≤ i ≤ n-1, are n different roots of D^n - 1,
   the factorization of D^n - 1 is given by
                n-1
     D^n - 1 =   Π  (D - z^i)
                i=0
(Final) Definition of Reed-Solomon (RS)-Codes
 Definition of Reed-Solomon-Codes
   For an arbitrary prime number p, an integer m and an arbitrary design
   distance d = dmin, a RS-Code is defined as a (n, k, dmin)q = (p^m-1, p^m-d, d)q code.
   The code consists of all time-domain words x(D) ↔ [x0 … xn-1] with
   coefficients in GF(p^m), such that the corresponding frequency-domain words
   X(D) ↔ [X0 … Xn-1] are zero in a cyclic sequence of n-k = d-1 consecutive
   positions.
     𝒞 = { x(D) | X(D) = R_{D^n-1}[ D^(ℓ+d-1) · B(D) ]   with rank B(D) ≤ k-1 }
 Interpretation
     Xℓ = Xℓ+1 = … = Xℓ+d-2 = 0    ↔    x(z^-ℓ) = x(z^-(ℓ+1)) = … = x(z^-(ℓ+d-2)) = 0
 Parameter ℓ denotes the starting position of the n-k consecutive zeros
 The complete distance spectrum is given in analytical form
            n                                          d-dmin
     A(D) =  Σ  Ad·D^d    with    Ad = C(n,d)·(q-1) ·    Σ   (-1)^j · C(d-1, j) · q^(d-dmin-j)
            d=0                                         j=0
 RS-Codes are MDS-Codes and dmin = d holds:   dmin = d = n - k + 1
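The analytical weight distribution can be checked numerically; a sketch that evaluates the formula for the (7,5,3)₈ RS code and verifies that all q^k code words are accounted for:

```python
# Sketch: evaluate the analytical weight distribution of the MDS (7,5,3)_8
# RS code and check that the weights of all q^k code words are accounted for.

from math import comb

def mds_weight_distribution(n, k, q):
    d_min = n - k + 1
    A = [0] * (n + 1)
    A[0] = 1                             # the all-zero code word
    for d in range(d_min, n + 1):
        A[d] = comb(n, d) * (q - 1) * sum(
            (-1) ** j * comb(d - 1, j) * q ** (d - d_min - j)
            for j in range(d - d_min + 1))
    return A

A = mds_weight_distribution(7, 5, 8)
print(A)                                 # [1, 0, 0, 245, 1225, 5586, 12838, 12873]
print(sum(A) == 8 ** 5)                  # True: all 32768 code words covered
```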
Examples for Reed-Solomon (RS)-Codes (1)
 Counting in Galois field GF(8): p = 2, m = 3
 Primitive polynomial: p(D) = D^3 + D + 1
 Condition: z^3 = z + 1
     z^1 = z                 z^5 = z^3 + z^2 = z^2 + z + 1
     z^2 = z^2               z^6 = z^4 + z^3 = z^2 + z + z + 1 = z^2 + 1
     z^3 = z + 1             z^7 = z^5 + z^4 = z^2 + z + 1 + z^2 + z = 1
     z^4 = z^2 + z
 Elements of GF(8):
     GF(8) = { 0, 1, z, 1+z, z^2, 1+z^2, z+z^2, 1+z+z^2 }
           = { 0, z^0, z^1, z^3, z^2, z^6, z^4, z^5 }
 Each element can be described by a binary triple
 Length of code words: n = p^m - 1 = 2^3 - 1 = 7 (number of nonzero elements in GF(p^m))
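The counting table can be generated mechanically; a minimal sketch that builds the power table of GF(8) from the primitive polynomial p(D) = D³ + D + 1 (bit mask 0b1011, lowest bit = constant term):

```python
# Sketch: generate the power table of GF(8) from p(D) = D^3 + D + 1 by
# repeatedly multiplying by z (a left shift) and reducing with z^3 = z + 1;
# this reproduces the table z^3 = z + 1, ..., z^7 = 1 from the slide.

def gf8_powers():
    powers, v = [], 1
    for _ in range(7):
        powers.append(v)
        v <<= 1                  # multiply by z
        if v & 0b1000:           # degree 3 reached -> subtract p(D)
            v ^= 0b1011
    return powers                # [z^0, z^1, ..., z^6] as bit masks

print(gf8_powers())              # [1, 2, 4, 3, 6, 7, 5]
```

Reading the bit masks as (z², z, 1)-triples gives exactly the table above, e.g. z³ = 0b011 = z + 1 and z⁶ = 0b101 = z² + 1.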
Examples for Reed-Solomon (RS)-Codes (2)
 Example 1: t = 1 error correctable → dmin = 2t + 1 = 3
 Dimension of code: k = p^m - dmin = 2^3 - 3 = 5 → |𝒞| = q^k = 8^5 = 32768 code words
 Code rate: Rc = k/n = 5/7 = 0.714
   (7,5,3)8 code; in binary representation a (21,15,3)2 code
 Generator polynomial:
     g(D) = (D - z^1)(D - z^2) = D^2 + z^4·D + z^3
 Parity check polynomial:
     h(D) = (D - z^3)(D - z^4)(D - z^5)(D - z^6)(D - z^7)
          = D^5 + z^4·D^4 + D^3 + z^5·D^2 + z^5·D + z^4

 Example 2: t = 2 errors correctable → dmin = 2t + 1 = 5
 Dimension of code: k = p^m - dmin = 2^3 - 5 = 3 → |𝒞| = q^k = 8^3 = 512 code words
 Code rate: Rc = k/n = 3/7 = 0.429
 Generator polynomial:
     g(D) = (D - z^1)(D - z^2)(D - z^3)(D - z^4) = D^4 + z^3·D^3 + D^2 + z·D + z^3
 Parity check polynomial:
     h(D) = (D - z^5)(D - z^6)(D - z^7) = D^3 + z^3·D^2 + z^2·D + z^4
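The generator polynomial of Example 2 can be multiplied out programmatically; a sketch with GF(8) arithmetic via exponent/logarithm tables (hypothetical helper names `gf_mul`, `poly_mul`):

```python
# Sketch: compute g(D) = (D - z)(D - z^2)(D - z^3)(D - z^4) for the (7,3,5)_8
# RS code in GF(8) with p(D) = D^3 + D + 1, and compare with the result
# g(D) = D^4 + z^3 D^3 + D^2 + z D + z^3 (note: minus = plus in GF(2^m)).

EXP = [1, 2, 4, 3, 6, 7, 5]                   # z^0 ... z^6 as bit masks
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    """Multiplication in GF(8) via discrete logarithms."""
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 7]

def poly_mul(a, b):
    """Polynomial multiplication over GF(8); addition is XOR."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= gf_mul(ai, bj)
    return out

g = [1]
for i in range(1, 5):                         # roots z^1 ... z^4
    g = poly_mul(g, [EXP[i], 1])              # factor (D - z^i) = (z^i + D)
print(g)                                       # [3, 2, 1, 3, 1] = z^3, z, 1, z^3, 1
```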
BCH-Codes: Preliminaries (1)
 Codes by Bose, Ray-Chaudhuri, Hocquenghem (BCH) with xi ∈ GF(p)
 We restrict ourselves to binary BCH codes, i.e. p = 2 → xi ∈ GF(2)
 Splitting fields (cyclotomic cosets): the number set {0, 1, …, n-1} can be split
   into disjoint sets Ki, so-called splitting fields. For n = q^λ - 1 with q = p^m the
   splitting field Ki is given by
     Ki = { i·q^j mod n,  j = 0, 1, …, λ-1 }      index i denotes the smallest number in Ki
 Some basic properties of splitting fields
 Ki ∩ Kj≠i = Ø: no element is member of several splitting fields
 K0 = {0};  ∪i Ki = {0, 1, …, n-1}: union of all splitting fields contains all numbers
 Interesting for BCH codes:
 When the elements of Ki are used as exponents of a primitive element z of order n
   and the powers z^κ, κ ∈ Ki, are used as roots of the factorized polynomial mi(D),
   this mi(D) contains only coefficients in the prime field GF(p) and is irreducible
   with respect to GF(p).
BCH-Codes: Preliminaries (2)
 Construction rule:   Ki = { i·q^j mod n,  j = 0, 1, …, λ-1 }
 Example: splitting fields for {0, 1, …, 6}, i.e. n = 7:
 Parameter: n = q^λ - 1 leads with p = 2 and m = 1 to q = p^m = 2 and λ = 3
 K0 = { 0 }
 K1 = { 2^j mod 7, j = 0, 1, 2 } = { 1, 2, 4 }
 K3 = { 3·2^j mod 7, j = 0, 1, 2 } = { 3, 5, 6 }
 Polynomials of splitting fields have coefficients in GF(2)
     m1(D) = (D - z^1)(D - z^2)(D - z^4) = D^3 + D + 1
     m3(D) = (D - z^3)(D - z^5)(D - z^6) = D^3 + D^2 + 1
 “Equivalent” example: for a ∈ ℂ all coefficients of the polynomial
     (D - a)(D - a*) = D^2 - 2·Re{a}·D + |a|^2
   are real!
Definition of BCH-Codes (1)
 Definition of BCH-Codes:
   We assume Ki as the splitting fields for n = p^m - 1 and z to be a primitive
   element of GF(p^m). Furthermore, Λ = ∪i Ki represents the union of an
   arbitrary number of splitting fields.
   A BCH code of primitive length n is determined by the generator polynomial
     g(D) =  Π  (D - z^κ)  =  Π  mi(D)
            κ∈Λ               i
   where index i runs over all splitting fields contributing to Λ.
 Properties:
 Due to the choice of κ ∈ Λ, gi ∈ GF(p) is guaranteed.
 The target minimum distance d is reached if Λ contains d - 1 successive numbers.
 Concerning the real minimum distance of the code, d ≤ dmin ≤ n - k + 1 holds.
 The number of information bits of the code is k = n - |Λ|.
Definition of BCH-Codes (2)
 As d-1 consecutive numbers in Λ are demanded, the code 𝒞 contains all code
   words where the roots of x(D) correspond to d-1 consecutive powers of the
   primitive element z
     𝒞 = { x(D) ∈ GFp[D] | x(z^ℓ) = x(z^(ℓ+1)) = … = x(z^(ℓ+d-2)) = 0 }
 Consequently, X(D) contains zeros in at least d-1 (cyclic) consecutive positions
     𝒞 = { x(D) | Xn-ℓ = … = Xn-ℓ-d+2 = 0,  X2i = Xi^2 }
   where the conjugacy constraint X2i mod n = Xi^2 guarantees xi ∈ GF(p) (here p = 2)
 In contrast to RS-codes the position of the roots (zeros) affects dmin
 Properties
 Choice of arbitrary mi(D) affects dimension of code and code rate → flexibility
 Not necessarily MDS-Codes, as dmin ≥ d and k ≤ n - dmin + 1 ≤ n - d + 1
 dmin ≥ d can not be exploited by BMD decoding
 With choice of d, the parameters k, Rc and dmin are not fixed (to be calculated)
 Existence of non-primitive BCH-Codes, i.e. n < p^m - 1
Examples for BCH-Codes
 Example: p = 2, n = 7, t = 1 error correctable → dmin = 3
 d - 1 = 2 successive numbers in Γ are required
 Cyclotomic cosets for p = 2, n = 7: K0 = { 0 }, K1 = { 1, 2, 4 }, K3 = { 3, 5, 6 }
 GF(8) with z^3 = z + 1:
   z^1 = z           z^5 = z^2 + z + 1
   z^2 = z^2         z^6 = z^2 + 1
   z^3 = z + 1       z^7 = 1
   z^4 = z^2 + z
 Minimal polynomials of the cyclotomic cosets:
   m1(D) = (D - z^1)(D - z^2)(D - z^4) = D^3 + D + 1
   m3(D) = (D - z^3)(D - z^5)(D - z^6) = D^3 + D^2 + 1
 Both polynomials fulfill the requirements for the generator polynomial
 We obtain the cyclic (7,4,3)-Hamming code
 Dimension of code: k = n - 3 = 4 → |C| = 2^4 = 16 code words
 Code rate: Rc = k / n = 4 / 7
 Example: t = 2 errors correctable → dmin = 5
 g(D) = m1(D)·m3(D) = D^6 + D^5 + D^4 + D^3 + D^2 + D + 1
 k = n - 6 = 1 → repetition code
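The construction above can be checked numerically. A minimal sketch in Python, assuming the GF(8) representation z^3 = z + 1 with elements coded as 3-bit integers and z = 2 (the names `EXP`, `LOG`, `gmul`, `polymul`, `min_poly` are illustrative helpers, not from the lecture):

```python
# GF(8) built from p(D) = D^3 + D + 1; elements are ints 0..7, primitive z = 2
EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    if v & 0b1000:
        v ^= 0b1011          # reduce modulo D^3 + D + 1
    EXP.append(v)
LOG = {EXP[i]: i for i in range(7)}

def gmul(a, b):
    """Multiplication in GF(8) via exp/log tables."""
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 7]

def polymul(a, b):
    """Polynomial product, coefficients in GF(8), lowest degree first."""
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] ^= gmul(ai, bj)   # addition in GF(2^m) is XOR
    return r

def min_poly(coset):
    """prod over the coset of (D - z^kappa); '-' equals '+' in GF(2^m)."""
    p = [1]
    for kappa in coset:
        p = polymul(p, [EXP[kappa % 7], 1])
    return p

m1 = min_poly([1, 2, 4])     # coset K1
m3 = min_poly([3, 5, 6])     # coset K3
print(m1)                    # [1, 1, 0, 1]  -> D^3 + D + 1
print(m3)                    # [1, 0, 1, 1]  -> D^3 + D^2 + 1
print(polymul(m1, m3))       # [1, 1, 1, 1, 1, 1, 1] -> g(D) of the (7,1) repetition code
```

The conjugate roots in each coset make all product coefficients collapse into GF(2), as the slide's property gi ∈ GF(p) promises.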
Comparison of RS- and BCH-Codes (1)
 BCH and RS codes are special cases of each other → transformable
 E.g., by restricting RS codes to xi ∈ GF(2), a BCH code is obtained; thus BCH codes
form a binary subset of all RS codes → dmin^BCH ≥ dmin^RS = d
 BCH codes can also be defined on GF(p^mr) → correction of burst errors of length m;
the primitive length is n = p^mr - 1 → for r = 1 the RS code is obtained
 All 1-error-correcting BCH codes are cyclic Hamming codes

 (15,11,5)16-RS-Code
 dmin = d = 5 → 2 symbol errors can be corrected (in GF(16))
 Binary representation: each code symbol contains 4 bits → burst errors up to length
t = 8 bits are correctable
   e = ( 0000 1111 1111 0000 )  → correctable (2 symbol errors)
   e = ( 0001 1111 1000 0000 )  → not correctable (3 symbol errors)
   e = ( 0001 0010 1000 0000 )  → not correctable (3 symbol errors)
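The symbol-error counting behind these three patterns can be sketched as follows (`symbol_errors` is an illustrative helper, not part of the lecture; it assumes 4 bits per GF(16) symbol):

```python
def symbol_errors(e_bits, m=4):
    """Count the nonzero GF(2^m) symbols in a binary error pattern."""
    return sum(any(e_bits[i:i + m]) for i in range(0, len(e_bits), m))

T = 2  # the (15,11,5)-RS code corrects t = 2 symbol errors

e1 = [0,0,0,0, 1,1,1,1, 1,1,1,1, 0,0,0,0]   # 8-bit burst inside 2 symbols
e2 = [0,0,0,1, 1,1,1,1, 1,0,0,0, 0,0,0,0]   # burst spread over 3 symbols
e3 = [0,0,0,1, 0,0,1,0, 1,0,0,0, 0,0,0,0]   # 3 single errors in 3 symbols

for e in (e1, e2, e3):
    s = symbol_errors(e)
    print(s, "correctable" if s <= T else "not correctable")
```

The point of the example survives the sketch: only the number of affected GF(16) symbols matters to the RS decoder, not the number of erroneous bits.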
Comparison of RS- and BCH-Codes (2)
 (64,45,7)2-BCH-Code
 Word length n and code rate Rc are comparable to the (15,11,5)16-RS-Code
 Always capable of correcting t = 3 single errors (due to dmin = 7)
 → this is not guaranteed by the RS code (3 bit errors may hit 3 different symbols)

 Conclusion
 RS codes are better suited for correcting burst errors
 Binary BCH codes should be used for channels causing independent single errors

The choice of the coding scheme depends on the error properties of the
communication channel
Table of primitive BCH-Codes
n k t n k t n k t n k t
7 4 1 127 85 6 255 123 19 511 403 12
78 7 115 21 394 13
15 11 1 71 9 107 22 385 14
7 2 64 10 99 23 . .
5 3 57 11 91 25 259 30
50 13 87 26 . .
31 26 1 43 14 79 27 130 55
21 2 36 15 71 29 . .
16 3 29 21 63 30 . .
11 5 22 23 55 31 1023 1013 1
6 7 15 27 47 42 1003 2
8 31 45 43 993 3
63 57 1 37 45 983 4
51 2 255 247 1 29 47 973 5
45 3 239 2 21 55 963 6
39 4 231 3 13 59 953 7
36 5 223 4 9 63 943 8
30 6 215 5 933 9
24 7 207 6 511 502 1 923 10
18 10 199 7 493 2 913 11
16 11 191 8 484 3 903 12
10 13 187 9 475 4 . .
7 15 179 10 466 5 768 26
171 11 457 6 . .
127 120 1 163 12 448 7 513 57
113 2 155 13 439 8 . .
106 3 147 14 430 9 258 106
99 4 139 15 421 10 . .
92 5 131 18 412 11 . .
Bit error rates for BCH codes
 Word error probability for Bounded Minimum Distance (BMD) decoding for a
t-error correcting (n, k, dmin)-code was derived in the beginning of this chapter:

   Pw ≤ Σ_{d=t+1}^{n} (n over d) · (1 - Pe)^{n-d} · Pe^d

 Bit error probability (BER) can be approximated by

   Pb ≈ Σ_{d=t+1}^{n} min( 1, (d+t)/k ) · (n over d) · (1 - Pe)^{n-d} · Pe^d

   with Pe = 1/2 · erfc( sqrt( Rc · Eb/N0 ) )

 BER depends on the code parameters n, k, t and on Eb/N0


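The two expressions above can be evaluated directly; a minimal sketch (the function names `Pe`, `Pw_bound`, `Pb_approx` are illustrative, not from the lecture):

```python
import math

def Pe(Rc, ebn0_db):
    """BSC crossover probability for coded BPSK on the AWGN channel."""
    return 0.5 * math.erfc(math.sqrt(Rc * 10 ** (ebn0_db / 10)))

def Pw_bound(n, k, t, ebn0_db):
    """Bound on the word error probability under BMD decoding."""
    pe = Pe(k / n, ebn0_db)
    return sum(math.comb(n, d) * (1 - pe) ** (n - d) * pe ** d
               for d in range(t + 1, n + 1))

def Pb_approx(n, k, t, ebn0_db):
    """Approximate bit error probability under BMD decoding."""
    pe = Pe(k / n, ebn0_db)
    return sum(min(1, (d + t) / k) * math.comb(n, d)
               * (1 - pe) ** (n - d) * pe ** d
               for d in range(t + 1, n + 1))

# (7,4) Hamming code, t = 1
for ebn0 in (4.0, 6.0, 8.0):
    print(ebn0, Pw_bound(7, 4, 1, ebn0), Pb_approx(7, 4, 1, ebn0))
```

Sweeping Eb/N0 and the code parameters with these two functions reproduces the qualitative behavior of the BER curves on the following slides.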
Bit error rates for BCH codes: length n = 255
 [Figure: Pb vs. Eb/N0 in dB (3 … 13 dB); curves for uncoded transmission and for
  Rc = 0.97, t = 1; Rc = 0.94, t = 2; Rc = 0.87, t = 4; Rc = 0.75, t = 8;
  Rc = 0.51, t = 18; Rc = 0.36, t = 25; Rc = 0.18, t = 42; Rc = 0.08, t = 55]
 BER is plotted vs. Eb/N0 (and not vs. Es/N0) → Rc affects Eb/N0
 At first the BER decreases with decreasing Rc due to increasing dmin and t
 From t ≥ 25 on the performance worsens: Rc drops faster than dmin and t rise
 BCH codes are asymptotically bad
Bit error rates for BCH codes: Code rate Rc = 1/2
 [Figure: Pb vs. Eb/N0 in dB (3 … 13 dB); curves for uncoded transmission and for
  (7,4), t = 1; (15,7), t = 2; (31,16), t = 3; (63,30), t = 6; (127,64), t = 10;
  (255,131), t = 19; (511,259), t = 30; (1023,513), t = 57]
 Comparison of codes with different block length n but constant rate Rc ≈ 1/2
 For constant Rc the performance increases significantly with block length n
Bit error rates for BCH codes: Code rate Rc = 1/4
 [Figure: Pb vs. Eb/N0 in dB (3 … 13 dB); curves for uncoded transmission and for
  (31,6), t = 7; (63,16), t = 11; (127,29), t = 21; (255,63), t = 30;
  (511,130), t = 55; (1023,258), t = 106]
 Comparison of codes with different block length n but constant rate Rc ≈ 1/4
 For constant Rc the performance increases significantly with block length n
Bit error rates for BCH codes: Code rate Rc = 3/4
 [Figure: Pb vs. Eb/N0 in dB (3 … 13 dB); curves for uncoded transmission and for
  (31,21), t = 2; (63,45), t = 3; (127,99), t = 4; (255,191), t = 8;
  (511,385), t = 14; (1023,768), t = 26]
 Comparison of codes with different block length n but constant rate Rc ≈ 3/4
 For constant Rc the performance increases significantly with block length n
Decoding of BCH and RS-Codes
 Maximum-Likelihood decoding of BCH and RS codes is too complex, but
Bounded Minimum Distance (BMD) decoding leads to efficient algorithms
(hard input symbols are assumed, or erasure information → BEC or BSEC)

 Assumptions: y = x + e  ↔  Y = X + E
 Symbols of x and e are q-ary for RS codes and p-ary for BCH codes
 Symbols of X and E are always q-ary
 Design distance d = 2t + 1
 No error vector with more than t errors occurs, i.e. ν ≤ t (otherwise decoding fails)
 The positions 0 up to d-2 contain the parity frequencies
   X = [ 0 … 0  X_2t  X_{2t+1} … X_{n-1} ]
 The first 2t positions of the receive word Y explicitly contain symbols of the error word E
   Y = [ E_0 … E_{2t-1}  X_2t + E_2t … X_{n-1} + E_{n-1} ]
Principle of Decoding
 Syndrome: first 2t symbols of Y (this is not the DFT of s(D))

   S(D) = Σ_{i=0}^{2t-1} S_i D^i   with   S_i = E_i = Y_i = Σ_{κ=0}^{n-1} y_κ z^{-iκ}

 Principal steps of decoding
 Transform (part of) the receive word y into Y in the frequency domain (using the DFT)
 Syndrome S corresponds to Y at the parity frequencies
 If S = 0, no detectable error occurred; if S ≠ 0:
   • Error detection: end of the decoding process
   • Error correction: calculate error location polynomial C(D) and error polynomial E(D)
     (determine the error frequencies E_2t, …, E_{n-1} which result in e(D) with minimum
     weight wH(e(D)))
 Transformation of E(D) into the time domain → e(D)
 Calculation of the estimated code word
   x̂(D) = y(D) - e(D)
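A minimal numeric illustration of the syndrome step in GF(8): for concreteness the received polynomial is evaluated at the zero frequencies z^1, z^2 of the (7,4) Hamming code from the earlier example (whether the parity frequencies sit at exponents +i or -i depends on the DFT sign convention; `EXP`, `poly_eval` are illustrative names):

```python
# GF(8) from p(D) = D^3 + D + 1; elements are ints 0..7, primitive z = 2
EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    EXP.append(v ^ 0b1011 if v & 0b1000 else v)
LOG = {EXP[i]: i for i in range(7)}

def poly_eval(y, e):
    """Evaluate the binary polynomial y(D) at z^e -- one DFT coefficient."""
    acc = 0
    for k, yk in enumerate(y):
        if yk:
            acc ^= EXP[(e * k) % 7]
    return acc

x = [1, 1, 0, 1, 0, 0, 0]                          # code word g(D) = 1 + D + D^3
assert all(poly_eval(x, j) == 0 for j in (1, 2))   # zero at the parity frequencies

y = x[:]; y[5] ^= 1                                # single bit error at position 5
S = [poly_eval(y, j) for j in (1, 2)]
print(S)            # [7, 3]: S_1 = z^5, S_2 = z^10 = S_1^2
print(LOG[S[0]])    # 5 -> for a single error, log(S_1) is the error position
```

The last line shows the 1-error special case of the later machinery: for a Hamming code the syndrome already points at the error location.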
Calculation of the error location polynomial C(D) (1)
 Task: For a given syndrome, determine the error vector with minimum weight wH(e)
 Search for a suitable polynomial e(D) of minimum degree
 Determine the erroneous positions i
 Determine the values of the ei

 Error location set: indicates the erroneous locations in y

   Λ = { i | ei ≠ 0, i = 0, …, n-1 }   with   |Λ| = ν ≤ t

 Error location polynomial, with normalization C_0 = 1:

   C(D) = ∏_{i∈Λ} (D - z^i)   →   C(D) = ∏_{i∈Λ} (1 - z^{-i} D) = 1 + C_1 D + … + C_ν D^ν

 If no error occurred, Λ = ∅ and C(D) = 1
 Roots of the error location polynomial C(D) are given by z^i with i ∈ Λ
 If C(D) is known, the erroneous positions can be calculated: Λ = { i | C(z^i) = 0 }
Calculation of the error location polynomial C(D) (2)
 Chien search: check C(z^i) = 0 for all powers i of z to determine the roots in GF
 Example: GF(8) with p(D) = D^3 + D + 1 = 0 → z^3 = z + 1
   (z^1 = z, z^2 = z^2, z^3 = z+1, z^4 = z^2+z, z^5 = z^2+z+1, z^6 = z^2+1, z^7 = 1)
 e(D) = 1 + D^3 → e = (1 0 0 1 0 …) → C(D) = (1 - D)(1 - z^{-3} D)
 Given: C(D) = 1 - (1 + z^{-3}) D + z^{-3} D^2 = 1 - (1 + z^4) D + z^4 D^2
   • Relations: z^7 = z^3 · z^4 = 1 → z^{-3} = z^4;  z^10 = z^7 · z^3 = 1 · z^3 = z^3
 C(z^0) = 1 - (1+z^4) z^0 + z^4 (z^0)^2 = 1 + 1 + z^4 + z^4 = 0
 C(z^1) = 1 - (1+z^4) z^1 + z^4 (z^1)^2 = 1 + z + z^5 + z^6 = 1 ≠ 0
 C(z^2) = 1 - (1+z^4) z^2 + z^4 (z^2)^2 = 1 + z^2 + z^6 + z = z ≠ 0
 C(z^3) = 1 - (1+z^4) z^3 + z^4 (z^3)^2 = 1 + z^3 + z^7 + z^10 = 1 + z^3 + 1 + z^3 = 0
 → roots at z^0 and z^3, i.e. Λ = {0, 3}

 How to determine C(D) from S(D)?
 Recall the relation for e(D) = IDFT(E(D)):
   e(z^{-i}) = 0 ⟺ E_i = 0   and   E(z^i) = 0 ⟺ e_i = 0
 With c(D) = IDFT(C(D)):
 i ∈ Λ: erroneous position → z^i is root of C(D) → c_i = 0
 i ∉ Λ: correct position → e_i = 0
 → in every position: e_i · c_i = 0
Calculation of the error location polynomial C(D) (3)
 Interpretation in the frequency domain:
   e_i · c_i = 0 for all i  ⟷  R_{D^n-1}[ C(D) · E(D) ] = 0
 All coefficients of R_{D^n-1}[C(D)·E(D)] must equal zero!
 Due to the modulo D^n - 1 operation, only powers between 0 and n-1 occur
 Higher powers of C(D)·E(D) are mapped into this range via (i + j) mod n
 The coefficient of the i-th power (i = 0, …, n-1) is given by

   Σ_{κ=0}^{ν} C_κ · E_{(i-κ) mod n} = 0

 Analysis only at the positions ν ≤ i ≤ 2ν - 1:
 only the coefficients E_0, …, E_{2ν-1} of the error polynomial are used
 these correspond to the syndrome S defined at the parity frequencies (E_i = S_i for 0 ≤ i ≤ 2ν - 1)

   Σ_{κ=0}^{ν} C_κ · S_{i-κ} = 0   for i = ν, …, 2ν - 1
Calculation of the error location polynomial C(D) (4)
 Key equation or Newton equation (C_0 = 1):

   S_i = - Σ_{κ=1}^{ν} C_κ · S_{i-κ}   for i = ν, …, 2ν - 1

 Solve this equation for all i (ν times in total)
 → equation system with ν equations and ν unknowns C_1, …, C_ν
 → can be solved in principle

 Matrix description of the key equation:

   [ S_ν      ]       [ S_0      S_1   …  S_{ν-1}  ]   [ C_ν     ]
   [ S_{ν+1}  ]  =  - [ S_1      S_2   …  S_ν      ] · [ C_{ν-1} ]
   [    ⋮     ]       [  ⋮                  ⋮      ]   [    ⋮    ]
   [ S_{2ν-1} ]       [ S_{ν-1}  S_ν   …  S_{2ν-2} ]   [ C_1     ]
                      └──────────── S_{ν,ν} ────────┘
Solving the Key-Equation
 Problem: The dimension ν of S_{ν,ν} is not known, as the receiver does not know the
number of errors ν

 Solution:
 Assumption: at most t errors occurred
 Arrange S_{t,t} and check for singularity:
   • If S_{t,t} is singular → fewer than t errors occurred
     – Reduce the dimension of the matrix by discarding the last row and the last column
     – Check for singularity again → if it is still singular, repeat discarding the last
       row and column, …
   • If S_{ν,ν} is not singular: the dimension of S_{ν,ν} corresponds to the number of errors ν

 Solve the key equation with the determined number of errors ν

 Direct solution of the equation system is costly; therefore, algorithms with reduced
complexity should be used
 → Berlekamp-Massey algorithm
Berlekamp-Massey-Algorithm
 Interpretation of the key equation as a shift register of length t with feedback:

   S_i = - Σ_{κ=1}^{ν} C_κ · S_{i-κ}

 [Figure: feedback shift register holding S_{i-1}, …, S_{i-t}, taps -C_1, …, -C_t,
  whose summed output produces S_i]
 Task: determine coefficients C_κ of minimum number (ν) yielding the syndrome S
 → synthesis of a filter
 Solution: iterative algorithm
 d - 1 = 2t symbols of the syndrome are known (at the parity frequencies, i.e. S_0, …, S_{2t-1})
 Initialize the shift register of length t with the first t symbols of the syndrome → S_0, …, S_{t-1}
 Within t cyclic shifts the coefficients C_κ are determined such that the remaining
symbols S_t, …, S_{2t-1} are produced at the output
 → error location polynomial C(D) is determined
 Using the Chien search, the error location set Λ can be found
 → erroneous positions in y are known
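A compact sketch of the iteration in the standard Massey formulation (not the lecture's shift-register notation); fed with the four syndromes of the running example e(D) = 1 + D^3, it reproduces the locator obtained from the key-equation matrix:

```python
# GF(8) from p(D) = D^3 + D + 1; elements are ints 0..7, primitive z = 2
EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    EXP.append(v ^ 0b1011 if v & 0b1000 else v)
LOG = {EXP[i]: i for i in range(7)}

def gmul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 7]

def ginv(a):
    return EXP[(7 - LOG[a]) % 7]

def berlekamp_massey(S):
    """Shortest LFSR C(D) (C[0] = 1) that reproduces the syndrome sequence S."""
    C, B = [1], [1]        # current and previous connection polynomials
    L, m, b = 0, 1, 1      # register length, shifts since last update, last discrepancy
    for n, s in enumerate(S):
        d = s              # discrepancy between LFSR prediction and S[n]
        for i in range(1, L + 1):
            d ^= gmul(C[i], S[n - i])
        if d == 0:
            m += 1
            continue
        T = C[:]
        coef = gmul(d, ginv(b))
        C += [0] * (len(B) + m - len(C))     # make room for D^m * B(D)
        for i, Bi in enumerate(B):
            C[i + m] ^= gmul(coef, Bi)       # C(D) -= coef * D^m * B(D)
        if 2 * L <= n:                       # register length must grow
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C

S = [2, 4, 5, 6]               # S_1..S_4 of e(D) = 1 + D^3
print(berlekamp_massey(S))     # [1, 2, 3] -> C(D) = 1 + z D + z^3 D^2
```

The algorithm needs O(t·2t) field operations instead of the O(t^3) of a direct matrix solution, which is why it is preferred in practice.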
Calculation of the Error Polynomial E(D)
 RS codes are non-binary → inverting the error positions is not sufficient,
the error symbols have to be calculated
 Different approaches exist in the time and in the frequency domain
 Simple approach in the frequency domain:
 Due to C_0 = 1 the following relation holds:

   E_i = - Σ_{κ=1}^{ν} C_κ · E_{(i-κ) mod n}   →   E_i = f( E_{i-1}, …, E_{i-ν} )

 [Figure: recursive shift register holding E_{i-1}, …, E_{i-t} with taps -C_1, …, -C_t
  generating E_i]
 Due to E_0 = S_0, …, E_{2t-1} = S_{2t-1}, the remaining n - 2t error symbols E_i can be
determined successively → implementation by a recursive shift register
 Transformation of E(D) into the time domain e(D) and error correction x̂(D) = y(D) - e(D)
 To reduce the complexity of the transformation, only the nonzero elements e_i are calculated
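Continuing the running two-error example (locator C(D) = 1 + zD + z^3D^2, known frequencies E_1…E_4), the recursion fills in the remaining frequencies and a final IDFT recovers e; the indexing E_j = e(z^j) follows the earlier sketches and may differ from the slide's sign convention:

```python
# GF(8) from p(D) = D^3 + D + 1; elements are ints 0..7, primitive z = 2
EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    EXP.append(v ^ 0b1011 if v & 0b1000 else v)
LOG = {EXP[i]: i for i in range(7)}

def gmul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 7]

n = 7
C = [1, 2, 3]                                         # locator of e(D) = 1 + D^3
E = {j: 1 ^ EXP[(3 * j) % 7] for j in (1, 2, 3, 4)}   # known E_j = e(z^j)

# recursion E_i = C1*E_{i-1} + C2*E_{i-2}  (indices mod n, '-' = '+')
for i in (5, 6, 7):                                   # E_7 wraps around to E_0
    E[i % n] = gmul(C[1], E[(i - 1) % n]) ^ gmul(C[2], E[(i - 2) % n])

# IDFT: e_k = E(z^{-k}); the 1/n factor vanishes since n = 7 is odd (n mod 2 = 1)
e = []
for k in range(n):
    acc = 0
    for j in range(n):
        acc ^= gmul(E[j], EXP[(-j * k) % 7])
    e.append(acc)
print(e)   # [1, 0, 0, 1, 0, 0, 0] -> e(D) = 1 + D^3 as expected
```

In a real decoder the IDFT is evaluated only at the positions flagged by the Chien search, exactly as the slide's last remark suggests.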
Decoding of RS- and BCH-Codes
 [Block diagram of the complete decoder, processing the received word y(D):]
 1. Calculation of the syndrome S_0, …, S_{2t-1} (part of the DFT of y)
 2. Solving the key equation → error location polynomial C(D)
 3. Chien search → error location set I = { i | C(z^i) = 0 }
 4. Recursive completion: E_i = - Σ_{κ=1}^{ν} C_κ · E_{i-κ} → remaining error frequencies
 5. Part of the IDFT: e_i = E(z^i) for i ∈ I → part of e(D)
 6. Error correction: x̂(D) = y(D) - e(D)