
EE 387, John Gill, Stanford University

Notes #7, Handout #24

Definition of BCH codes


BCH codes are cyclic codes over GF(q) (the channel alphabet) that are defined
by a (d-1) × n check matrix over GF(q^m) (the decoder alphabet):

      [ 1   α^b         α^(2b)         ···  α^((n-1)b)       ]
      [ 1   α^(b+1)     α^(2(b+1))     ···  α^((n-1)(b+1))   ]
  H = [ .      .            .           .        .           ]
      [ 1   α^(b+d-2)   α^(2(b+d-2))   ···  α^((n-1)(b+d-2)) ]

Design parameters:
  α is an element of GF(q^m) of order n
  b is any integer (0 ≤ b < n is sufficient)
  d is an integer with 2 ≤ d ≤ n (d = 1 and d = n + 1 are trivial cases)
Rows of H are the first n powers of consecutive powers of α.
EE 387 Notes #7, Page 1

Special cases of BCH codes


A primitive BCH code is a BCH code defined using a primitive element α.
If α is a primitive element of GF(q^m), then the blocklength is n = q^m - 1.
This is the maximum possible blocklength for decoder alphabet GF(q^m).
A narrow-sense BCH code is a BCH code with b = 1.
Some decoding formulas simplify when b = 1. However, b ≠ 1 is usually used.
The parity-check matrix for a t-error-correcting primitive narrow-sense BCH code is

      [ 1   α       α^2      ···  α^(n-1)     ]
      [ 1   α^2     α^4      ···  α^(2(n-1))  ]
  H = [ .    .       .        .       .       ]
      [ 1   α^(2t)  α^(4t)   ···  α^(2t(n-1)) ]

where n = q^m - 1 and α is an n-th root of unity in GF(q^m).

Each row of H is a row of the finite field Fourier transform matrix of size n.
Codewords are n-tuples whose spectra have 0s at 2t consecutive frequencies.
EE 387 Notes #7, Page 2

Reed-Solomon codes
Reed-Solomon codes are BCH codes where decoder alphabet = channel alphabet.
Minimal polynomials over GF(Q) of elements of GF(Q) have degree 1.
Thus the generator polynomial of a t-error-correcting Reed-Solomon code is
  g(x) = (x - α^b)(x - α^(b+1)) ··· (x - α^(b+2t-1))
       = g0 + g1 x + ··· + g_(2t-1) x^(2t-1) + x^(2t) ,
where g0, g1, . . . , g_(2t-1) are elements of GF(Q).
The minimum distance is 2t + 1, independent of the choice of α and b.
Usually α is chosen to be primitive in order to maximize blocklength.
The base exponent b can be chosen to reduce encoder/decoder complexity.

EE 387 Notes #7, Page 3

Reed-Solomon code: example


Audio compact discs and CD-ROMs use 2EC Reed-Solomon codes over GF(2^8).
The primitive polynomial used to define field arithmetic is x^8 + x^4 + x^3 + x^2 + 1.
The base exponent is b = 0.
The generator polynomial of the (255,251) Reed-Solomon code is
  g(x) = (x + 1)(x + α)(x + α^2)(x + α^3)
       = x^4 + α^75 x^3 + α^249 x^2 + α^78 x + α^6
       = x^4 + 0F x^3 + 36 x^2 + 78 x + 40   (hexadecimal) .
In the hexadecimal representations of the coefficients, the most significant bit is on
the left; that is, 1 = 01, α = 02, α^2 = 04, and so on.
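These coefficients can be reproduced mechanically. A minimal sketch (not part of the notes): GF(256) elements are stored as bytes, multiplied with the shift-and-reduce loop for the primitive polynomial 0x11D stated above, and g(x) is built up one root at a time.

```python
def gmul(a, b):
    """Multiply in GF(256) defined by x^8 + x^4 + x^3 + x^2 + 1 (0x11D)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:          # reduce when degree reaches 8
            a ^= 0x11D
    return r

g = [1]                        # g(x) = 1; coefficients listed low degree first
root = 1                       # roots 1, a, a^2, a^3 with a = 0x02
for _ in range(4):
    g = [0] + g                # multiply g(x) by (x + root):
    for i in range(len(g) - 1):
        g[i] ^= gmul(g[i + 1], root)
    root = gmul(root, 2)
print([hex(c) for c in g])     # ['0x40', '0x78', '0x36', '0xf', '0x1']
```

Reading the list from high degree down gives x^4 + 0F x^3 + 36 x^2 + 78 x + 40, matching the slide.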

There are no prime trinomials of degree 8; in fact, there are no prime trinomials of degree 8m for any m.

EE 387 Notes #7, Page 4

(15,7,5) BCH code: parity-check matrix


Parity-check matrix of (15, 7, 5) binary BCH code:

      [ 1   α     α^2   α^3   ···  α^14 ]
  H = [ 1   α^3   α^6   α^9   ···  α^42 ]

    = the 8 × 15 binary matrix obtained by replacing each power of α by its
      4-bit column vector over GF(2)   (binary array not reproduced here).
Since rows of H are linearly independent, there are 2^8 syndromes. There are

  1 + (15 choose 1) + (15 choose 2) = 121 < 2^7 < 2^8

error patterns of weight ≤ 2. This code does not achieve the Hamming bound.
A systematic parity-check matrix can be found using the generator polynomial.
There is no (15,8) binary linear block code with minimum distance 5 .
EE 387 Notes #7, Page 5

(15,7,5) BCH code: generator polynomial


Generator polynomial is lcm of minimal polynomials of α, α^2, α^3, α^4:
  g(x) = f1(x) f3(x) = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1)
       = x^8 + x^7 + x^6 + x^4 + 1 = 1 + x^4 + x^6 + x^7 + x^8
BCH codes are cyclic, hence have shift register encoders and syndrome circuits:
  (shift register encoder diagram: input m(x), output c(x); not reproduced)

Modified error trapping can be used for the (15, 7, 5) binary BCH code.
Any 2-bit error pattern can be rotated into the 8 check positions. However, two error trapping passes may be needed.
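The product f1(x)f3(x) above can be verified with a few lines of Python (a sketch, not part of the notes): GF(2) polynomials are represented as integers, bit i holding the coefficient of x^i, and multiplied carry-lessly.

```python
# GF(2)[x] polynomials as Python ints: bit i is the coefficient of x^i.
def gf2_mul(a, b):
    """Carry-less product of two polynomials over GF(2)."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # add (XOR) the current shifted copy of a
        a <<= 1
        b >>= 1
    return r

f1 = 0b10011                # x^4 + x + 1
f3 = 0b11111                # x^4 + x^3 + x^2 + x + 1
g = gf2_mul(f1, f3)
print(bin(g))               # 0b111010001, i.e. x^8 + x^7 + x^6 + x^4 + 1
```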

EE 387 Notes #7, Page 6

(15,5,7) BCH code


The generator polynomial of a 3EC BCH code is defined by zeroes α, α^3, α^5:
  g(x) = f1(x) f3(x) f5(x) = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1)(x^2 + x + 1)
       = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1
Parity-check matrix is 3 × 15 over GF(2^4) or 12 × 15 over GF(2):

      [ 1   α     α^2    ···  α^14 ]
  H = [ 1   α^3   α^6    ···  α^42 ]
      [ 1   α^5   α^10   ···  α^70 ]

    = the 12 × 15 binary matrix obtained by expanding each power of α into its
      4-bit column vector   (binary array not reproduced here).

The last two rows are linearly redundant ⇒ 10 check equations ⇒ k = 5.


EE 387 Notes #7, Page 7

(15,5,7) BCH code: redundant rows


H has two redundant rows:
bottom row is zero
next to last row is same as previous row
H has 10 independent rows and defines a (15, 5) binary cyclic code.
Parity-check polynomial h(x) = (x^4 + x^3 + 1)(x + 1) includes all the prime
divisors of x^15 - 1 that are not included in g(x).
The dual of this BCH code is a (15, 10) expurgated Hamming code with d = 4.
The (15, 5) BCH code is obtained from the (15, 4) maximum-length code by
augmentation: the complements of the original codewords are included.
The weight enumerator is A(x) = 1 + 15x^7 + 15x^8 + x^15 .

EE 387 Notes #7, Page 8

(31,16,7) BCH code


Zeroes of codewords are α, α^3, α^5 in GF(2^5). Parity-check matrix:

      [ 1   α     α^2    ···  α^30  ]
  H = [ 1   α^3   α^6    ···  α^90  ]
      [ 1   α^5   α^10   ···  α^150 ]

    = the 15 × 31 binary matrix obtained by expanding each power of α into its
      5-bit column vector   (binary array not reproduced here).
Generator polynomial: x^15 + x^11 + x^10 + x^9 + x^8 + x^7 + x^5 + x^3 + x^2 + x + 1.


It is not obvious that every set of 6 columns of H is linearly independent!
For blocklength 31, all binary BCH codes with d = 7 have 15 check bits.
The expanded code with (n, k, d ) = (32, 16, 8) is a Reed-Muller code.

EE 387 Notes #7, Page 9

BCH codes with decoder alphabet GF(16)


Suppose that α is primitive in GF(16).
The following parity-check matrix defines a primitive, narrow-sense BCH code over
each channel alphabet that is a subfield of GF(16).

      [ 1   α     α^2   ···  α^14 ]
  H = [ 1   α^2   α^4   ···  α^28 ]
      [ 1   α^3   α^6   ···  α^42 ]
      [ 1   α^4   α^8   ···  α^56 ]
The three possible channel alphabets are GF(2), GF(2^2), and GF(2^4).
The BCH codes corresponding to these channel alphabets are
(15,7) binary BCH code over GF(2) (presented earlier in lecture)
(15,9) BCH code over GF(4)
(15,11) Reed-Solomon code over GF(16)
The blocklengths in symbols are 15; blocklengths in bits are 15, 30, and 60.
EE 387 Notes #7, Page 10

Channel alphabet GF(16)


The four rows of H are linearly independent over GF(2^4) (BCH bound, later).
Thus H defines a (15, 11) code over GF(2^4) with minimum distance 5.
This code is a (15, 11) 2EC Reed-Solomon code over GF(16).
Using the table of powers of α (Blahut p. 86), we can find the generator polynomial:
  g(x) = (x + α)(x + α^2)(x + α^3)(x + α^4)
       = α^10 + α^3 x + α^6 x^2 + α^13 x^3 + x^4 = 7 + 8x + E x^2 + D x^3 + x^4   (hex)
Coefficients of the generator polynomial are computed using GF(2^4) arithmetic.
Coefficients can be expressed either in exponential or binary representation.
exponential notation simplifies multiplication (add exponents mod 15)
binary notation simplifies addition (exclusive-or of 4-bit values)
Hardware implementations use bit vectors and often log/antilog tables.
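The GF(16) coefficients above can be checked the same way as in the GF(256) example. A sketch (not part of the notes), assuming the field defined by x^4 + x + 1 with α = 0b0010:

```python
def gmul(a, b):
    """Multiply in GF(16) defined by x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:           # reduce when degree reaches 4
            a ^= 0b10011
    return r

g, root = [1], 2               # roots a, a^2, a^3, a^4; coefficients low degree first
for _ in range(4):
    g = [0] + g                # multiply g(x) by (x + root)
    for i in range(len(g) - 1):
        g[i] ^= gmul(g[i + 1], root)
    root = gmul(root, 2)
print(g)                       # [7, 8, 12, 13, 1] = a^10, a^3, a^6, a^13, 1
```

In hex the list reads 7, 8, E, D, 1, matching the slide.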

EE 387 Notes #7, Page 11

Channel alphabet GF(4)


Let GF(4) be {0, 1, β, β̄}, where β̄ = β + 1 = β^2.
GF(16) consists of 2-tuples over GF(4) using primitive polynomial x^2 + x + β.
H defines a BCH code over the subfield GF(4). Expanding each GF(16) entry of H
into a column 2-tuple over GF(4) gives an 8 × 15 matrix over GF(4)
(array not reproduced here).
The final two rows, corresponding to the conjugate α^4 of α over GF(4), are redundant.
(Row 7 equals row 1 + row 2, while row 8 equals row 2.)
Thus g(x) = f1(x) f2(x) f3(x) has degree 6 ⇒ (15, 9, 5) code over GF(4).
EE 387 Notes #7, Page 12

Channel alphabet GF(2)


This 16 15 matrix has redundant rows over GF(2). Every row in the second and
fourth blocks is a linear combination of the first four rows.

Expanding each GF(16) entry of H into a 4-bit column over GF(2) gives the
16 × 15 binary matrix (array not reproduced here).

When we delete the redundant rows of H , we obtain the parity-check matrix of the
(15, 7) 2EC BCH code over GF(2) shown earlier.
EE 387 Notes #7, Page 13

Vandermonde matrix
Definition: The Vandermonde matrix V(X1, . . . , Xℓ) is

      [ 1          1          ···  1          ]
      [ X1         X2         ···  Xℓ         ]
  V = [ .          .           .   .          ]
      [ X1^(ℓ-1)   X2^(ℓ-1)   ···  Xℓ^(ℓ-1)   ]

One application of Vandermonde matrices is polynomial interpolation.
Given values of f(x) of degree ≤ ℓ-1 at distinct points X1, . . . , Xℓ,

  Yi = f(Xi) = f0 + f1 Xi + ··· + f_(ℓ-1) Xi^(ℓ-1)   (i = 1, . . . , ℓ)

the coefficients of f(x) can be found by solving the matrix equation

  [ Y1  Y2  . . .  Yℓ ] = [ f0  f1  . . .  f_(ℓ-1) ] V(X1, . . . , Xℓ) .

EE 387 Notes #7, Page 14

Nonsingular Vandermonde matrix


Lemma: The Vandermonde matrix V(X1, . . . , Xℓ) is nonsingular if and only if the
parameters X1, . . . , Xℓ are distinct. In fact,

  det V(X1, . . . , Xℓ) = ∏_{i>j} (Xi - Xj) = ∏_{i=1}^{ℓ} ∏_{j=1}^{i-1} (Xi - Xj) .

Proof: The determinant is a polynomial in ℓ variables, X1, . . . , Xℓ.
As a polynomial in Xi, its zeroes are Xj for j ≠ i.
Thus Xi - Xj is a factor of the determinant for every pair (i, j) with i > j.
These are all the factors because the degree of det V(X1, . . . , Xℓ) is

  0 + 1 + ··· + (ℓ-2) + (ℓ-1) = (ℓ choose 2) = ℓ(ℓ-1)/2 .

The coefficient of the main diagonal monomial

  ∏_{i=1}^{ℓ} Xi^(i-1) = 1 · X2 · X3^2 ··· Xℓ^(ℓ-1)

equals 1 in both the determinant and the above formula for the determinant.
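The determinant identity can be spot-checked numerically; the sketch below (an illustration with arbitrary integer points, not part of the notes) compares an exact Leibniz-expansion determinant against the product formula.

```python
from itertools import permutations

def det(M):
    """Exact Leibniz determinant; fine for small matrices."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):                 # sign = parity of the permutation
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = sign
        for i in range(n):
            prod *= M[i][perm[i]]
        total += prod
    return total

X = [2, 3, 5, 7]
V = [[x ** i for x in X] for i in range(len(X))]   # row i holds X1^i ... X4^i
lhs = det(V)
rhs = 1
for i in range(len(X)):                    # product of (Xi - Xj) over i > j
    for j in range(i):
        rhs *= X[i] - X[j]
print(lhs, rhs)                            # 240 240
```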
EE 387 Notes #7, Page 15

BCH bound
Theorem: A BCH code whose parity-check matrix has d - 1 rows has dmin ≥ d.
Proof: Every set of d - 1 columns of H is linearly independent over GF(q^m).
To see this, consider the submatrix consisting of columns i1, . . . , i_(d-1):

      [ α^(i1 b)         ···  α^(i_(d-1) b)       ]
  det [ α^(i1 (b+1))     ···  α^(i_(d-1) (b+1))   ]
      [ ...                   ...                 ]
      [ α^(i1 (b+d-2))   ···  α^(i_(d-1) (b+d-2)) ]

                                           [ 1              ···  1                  ]
    = ( α^(i1 b) ··· α^(i_(d-1) b) ) det   [ α^(i1)         ···  α^(i_(d-1))        ]  ≠ 0
                                           [ ...                 ...                ]
                                           [ α^((d-2) i1)   ···  α^((d-2) i_(d-1)) ]

This determinant is nonzero because α^(i1) ≠ 0, . . . , α^(i_(d-1)) ≠ 0 and the second
matrix is a Vandermonde matrix with distinct columns.
EE 387 Notes #7, Page 16

Design of BCH codes


Codewords of a BCH code have zeroes that are d - 1 consecutive powers of α.
Conjugates over channel alphabet GF(q) are also zeroes.
The degree of the generator polynomial is the total number of conjugates.
Example: Channel alphabet GF(2), decoder alphabet GF(2^6).
The first six conjugacy classes, represented by exponents:
  {0}   {1, 2, 4, 8, 16, 32}   {3, 6, 12, 24, 48, 33}
  {5, 10, 20, 40, 17, 34}   {7, 14, 28, 56, 49, 35}   {9, 18, 36}
d = 5 requires 4 powers. Exponents {1, 2, 3, 4} ⇒ 12 conjugates.
d = 9 requires 8 powers. Exponents {1, . . . , 8} ⇒ 24 conjugates.
d = 11 requires 10 powers. Exponents {1, . . . , 10} ⇒ 27 conjugates.
d = 4 requires 3 powers. Exponents {1, 2, 3} ⇒ 12 conjugates.
  Better: {0, 1, 2} ⇒ 7 conjugates (expurgated code)
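The conjugacy classes (cyclotomic cosets) above can be generated directly: the conjugates of α^s over GF(2) have exponents s, 2s, 4s, . . . reduced mod 63. A short sketch, not part of the notes:

```python
def coset(s, q=2, n=63):
    """Cyclotomic coset of s mod n: exponents of the conjugates of alpha^s."""
    c, x = [], s % n
    while x not in c:
        c.append(x)
        x = (x * q) % n
    return c

for s in (0, 1, 3, 5, 7, 9):
    print(s, coset(s))
# 0 [0]
# 1 [1, 2, 4, 8, 16, 32]
# 3 [3, 6, 12, 24, 48, 33]
# 5 [5, 10, 20, 40, 17, 34]
# 7 [7, 14, 28, 56, 49, 35]
# 9 [9, 18, 36]
```

Summing coset sizes reproduces the degree counts: for d = 5 (exponents 1-4), len(coset(1)) + len(coset(3)) = 12.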
EE 387 Notes #7, Page 17

GF(256): powers of primitive element


GF(256) generated by x^8 + x^4 + x^3 + x^2 + 1; the entry in row i, column j is
the hexadecimal representation of α^(i+j):

  i\j   0   16   32   48   64   80   96  112  128  144  160  176  192  208  224  240
   0   01   4C   9D   46   5F   FD   D9   81   85   A8   E6   E3   82   51   12   2C
   1   02   98   27   8C   BE   E7   AF   1F   17   4D   D1   DB   19   A2   24   58
   2   04   2D   4E   05   61   D3   43   3E   2E   9A   BF   AB   32   59   48   B0
   3   08   5A   9C   0A   C2   BB   86   7C   5C   29   63   4B   64   B2   90   7D
   4   10   B4   25   14   99   6B   11   F8   B8   52   C6   96   C8   79   3D   FA
   5   20   75   4A   28   2F   D6   22   ED   6D   A4   91   31   8D   F2   7A   E9
   6   40   EA   94   50   5E   B1   44   C7   DA   55   3F   62   07   F9   F4   CF
   7   80   C9   35   A0   BC   7F   88   93   A9   AA   7E   C4   0E   EF   F5   83
   8   1D   8F   6A   5D   65   FE   0D   3B   4F   49   FC   95   1C   C3   F7   1B
   9   3A   03   D4   BA   CA   E1   1A   76   9E   92   E5   37   38   9B   F3   36
  10   74   06   B5   69   89   DF   34   EC   21   39   D7   6E   70   2B   FB   6C
  11   E8   0C   77   D2   0F   A3   68   C5   42   72   B3   DC   E0   56   EB   D8
  12   CD   18   EE   B9   1E   5B   D0   97   84   E4   7B   A5   DD   AC   CB   AD
  13   87   30   C1   6F   3C   B6   BD   33   15   D5   F6   57   A7   45   8B   47
  14   13   60   9F   DE   78   71   67   66   2A   B7   F1   AE   53   8A   0B   8E
  15   26   C0   23   A1   F0   E2   CE   CC   54   73   FF   41   A6   09   16   01

EE 387 Notes #7, Page 18

Decoder alphabet GF(256)


Narrow-sense primitive 2EC BCH codes over GF(2), GF(22), GF(24), GF(28)
can be defined by the same parity-check matrix:

      [ 1   α     α^2   ···  α^254  ]
  H = [ 1   α^2   α^4   ···  α^508  ]
      [ 1   α^3   α^6   ···  α^762  ]
      [ 1   α^4   α^8   ···  α^1016 ]

Generator polynomials lcm(f1(x), . . . , f4(x)) have coefficients from subfields.

  GF(2^2) = {0, 1, α^85, α^170} = {00, 01, D6, D7}
  GF(2^4) = {0, 1, α^17, . . . , α^238} = span{01, 0B, 98, D6}

  Subfield   Degree   Polynomial coefficients
  GF(2^8)      4      01 1E D8 E7 74
  GF(2^4)      8      01 D6 01 DD 0B 98 98 98 D7
  GF(2^2)     12      01 01 00 D7 00 00 00 D6 D7 D7 01 D7 01
  GF(2)       16      1 0 1 1 0 1 1 1 1 0 1 1 0 0 0 1 1
EE 387 Notes #7, Page 19

Encoding and syndrome circuits


Binary BCH codes are defined using GF(2^m) but are still cyclic over GF(2).
Shift registers can be used for encoding and for syndrome computation.
The (31, 21) binary primitive 2EC BCH code with generator polynomial
  (x^5 + x^2 + 1)(x^5 + x^4 + x^3 + x^2 + 1) = 1 + x^3 + x^5 + x^6 + x^8 + x^9 + x^10
has the following shift register encoder.
  (encoder diagram: input m(x), output c(x); not reproduced)
The syndrome s(x) = r(x) mod g(x) used for error detection has a similar circuit.

EE 387 Notes #7, Page 20

Encoder for (255,251) Reed-Solomon code


(encoder circuit diagram: message bytes m0, . . . , m7 enter four GF(2^8) constant
scalers for the g(x) coefficients α^6, α^78, α^249, α^75, each implemented as an
8 × 8 binary matrix, labeled 6, 78, 249, 75 in the figure; diagram not reproduced)

EE 387 Notes #7, Page 21

Reed-Solomon encoder
A Reed-Solomon code with d = 8 has the following generator polynomial:
  g(x) = (x + α^-3)(x + α^-2)(x + α^-1)(x + 1)(x + α^+1)(x + α^+2)(x + α^+3)
       = x^7 + 6b x^6 + 09 x^5 + 9e x^4 + 9e x^3 + 09 x^2 + 6b x + 1   (hex)
Since the reciprocals of its zeroes are also zeroes, g(x) is its own mirror image.
Thus the encoder corresponding to g(x) has only 3 distinct scalers.
  (encoder diagram with scalers 9e, 09, 6b; input m(x), output c(x); not reproduced)

EE 387 Notes #7, Page 22

Generator matrix for (255,223) 4EC BCH code

EE 387 Notes #7, Page 23

Decoding algorithms for BCH codes


Decoding BCH and Reed-Solomon codes consists of the following major steps.
1. Compute partial syndromes Si = r(α^i) for i = b, . . . , b + d - 2.
2. Find coefficients Λ1, . . . , Λν of the error-locator polynomial
     Λ(x) = ∏_{i=1}^{ν} (1 - x Xi)
   by solving linear equations with constant coefficients Sb, . . . , S_(b+d-2).
3. Find the zeroes X1^-1, . . . , Xν^-1 of Λ(x). If there are ν symbol errors, they are
   in locations i1, . . . , iν where X1 = α^(i1), . . . , Xν = α^(iν).
4. Solve linear equations, whose constant coefficients are powers of Xi, for the
   error magnitudes Y1, . . . , Yν. (Not needed for channel alphabet GF(2).)
Efficient procedures for solving the linear systems of equations in steps 2 and 4:
  Berlekamp, Massey (step 2)
  Forney (step 4)
  Sugiyama-Kasahara-Hirasawa-Namekawa (Euclidean) (steps 2 and 4)

EE 387 Notes #7, Page 24

Error locations and magnitudes


Suppose there are ν ≤ t errors in locations i1, . . . , iν.
Let the error magnitudes be e_(i1), . . . , e_(iν) (values in GF(q), the channel alphabet).
The error polynomial is
  e(x) = e_(i1) x^(i1) + ··· + e_(iν) x^(iν) .
The senseword r(x) can be written r(x) = c(x) + e(x).
The partial syndromes are values in the decoder alphabet GF(q^m):
  Sj = r(α^j) = c(α^j) + e(α^j) = e(α^j)
     = e_(i1) α^(j·i1) + ··· + e_(iν) α^(j·iν) = e_(i1) (α^(i1))^j + ··· + e_(iν) (α^(iν))^j .
Change of variables:
  error locators:     X1 = α^(i1), . . . , Xν = α^(iν)
  error magnitudes:   Y1 = e_(i1), . . . , Yν = e_(iν)   (just renaming)

EE 387 Notes #7, Page 25

Syndrome equations
Error locators are elements of the decoder alphabet GF(q^m).
Error magnitudes are elements of the channel alphabet GF(q).
Important special case: Yi = 1 for channel alphabet GF(2).
For today, assume a narrow-sense BCH code (b = 1) with d = 2t + 1.
Partial syndromes are constants in a system of 2t equations in 2ν unknowns:
  S1  = Y1 X1     + ··· + Yν Xν
  S2  = Y1 X1^2   + ··· + Yν Xν^2
  ...
  S2t = Y1 X1^(2t) + ··· + Yν Xν^(2t)
This is an algebraic system of equations of degree 2t.
Goal: reduce to a one-variable polynomial equation with ν solutions.

EE 387 Notes #7, Page 26

Error-locator polynomial
The error-locator polynomial Λ(x) is defined by
  Λ(x) = (1 - x X1)(1 - x X2) ··· (1 - x Xν)
       = ∏_{i=1}^{ν} (1 - x Xi)
       = ∏_{i=1}^{ν} (-Xi) · ∏_{i=1}^{ν} (x - Xi^-1)
       = 1 + Λ1 x + ··· + Λν x^ν .
The zeroes of Λ(x) are X1^-1, . . . , Xν^-1, the reciprocals of the error locators.   elegant!
The degree ν of Λ(x) is the number of errors.
The decoder must determine ν as well as the error locations.
The Peterson-Gorenstein-Zierler decoder can be used to find Λ(x) from the Sj.
PGZ is not efficient for large t but is easy to understand.
EE 387 Notes #7, Page 27

PGZ decoder example


Syndromes for a 2EC narrow-sense BCH code with decoder alphabet GF(2^m):
  Sj = Y1 X1^j + Y2 X2^j ,   j = 1, . . . , 4
Suppose two errors. Then the zeroes of Λ(x) = 1 + Λ1 x + Λ2 x^2 are X1^-1, X2^-1:
  0 = 1 + Λ1 X1^-1 + Λ2 X1^-2
  0 = 1 + Λ1 X2^-1 + Λ2 X2^-2
Multiply the first equation by Y1 X1^3 and the second by Y2 X2^3:
  Y1 X1^3 + Λ1 Y1 X1^2 + Λ2 Y1 X1 = 0
  Y2 X2^3 + Λ1 Y2 X2^2 + Λ2 Y2 X2 = 0
Summing,
  (Y1 X1^3 + Y2 X2^3) + Λ1 (Y1 X1^2 + Y2 X2^2) + Λ2 (Y1 X1 + Y2 X2) = 0
   \______ S3 ______/       \______ S2 ______/       \_____ S1 _____/
Similarly, multiplying by Yi Xi^4 and summing gives another equation:
  (Y1 X1^4 + Y2 X2^4) + Λ1 (Y1 X1^3 + Y2 X2^3) + Λ2 (Y1 X1^2 + Y2 X2^2) = 0
   \______ S4 ______/       \______ S3 ______/       \______ S2 ______/
We have obtained two linear equations in the unknowns Λ1, Λ2:
  S3 + S2 Λ1 + S1 Λ2 = 0         [ S1  S2 ] [ Λ2 ]   [ S3 ]
                              ⇔  [ S2  S3 ] [ Λ1 ] = [ S4 ] .
  S4 + S3 Λ1 + S2 Λ2 = 0
EE 387 Notes #7, Page 28

PGZ decoder example (2)


The determinant of the coefficient matrix is
  S1 S3 + S2^2 = Y1 Y2 (X1 X2^3 + X1^3 X2) = Y1 Y2 X1 X2 (X1 + X2)^2 ≠ 0
because Yi ≠ 0, Xi ≠ 0, and X1 ≠ X2. So we can solve for Λ1, Λ2.

  M2 = [ S1  S2 ]          M2^-1 = (1/Δ) [ S3  S2 ]
       [ S2  S3 ]                        [ S2  S1 ]

where Δ = det M2 = S1 S3 + S2^2. The coefficients of Λ(x) are given by

  [ Λ2 ]         [ S3  S2 ] [ S3 ]         [ S3^2 + S2 S4  ]
  [ Λ1 ] = (1/Δ) [ S2  S1 ] [ S4 ] = (1/Δ) [ S2 S3 + S1 S4 ]

The error locator polynomial is Λ(x) = 1 + Λ1 x + Λ2 x^2, where

  Λ1 = (S2 S3 + S1 S4) / (S1 S3 + S2^2) ,   Λ2 = (S3^2 + S2 S4) / (S1 S3 + S2^2) .

The common denominator Δ = S1 S3 + S2^2 need be computed only once.
Computation of Λ1, Λ2 uses 8 multiplications and one inversion in GF(2^m).
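As a numerical check of these formulas, the sketch below (an illustration, not from the notes) plants two errors in a binary 2EC code at positions 3 and 10, so Y1 = Y2 = 1, computes S1, . . . , S4 over GF(16) with primitive polynomial x^4 + x + 1, and confirms that the formulas return Λ1 = X1 + X2 and Λ2 = X1 X2.

```python
def gmul(a, b):                      # multiply in GF(16), poly x^4 + x + 1
    r = 0
    while b:
        if b & 1: r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10: a ^= 0b10011
    return r

def gpow(a, n):
    r = 1
    for _ in range(n): r = gmul(r, a)
    return r

ginv = lambda a: gpow(a, 14)         # a^15 = 1, so a^-1 = a^14

X1, X2 = gpow(2, 3), gpow(2, 10)     # locators alpha^3, alpha^10; magnitudes Y = 1
S = [0] + [gpow(X1, j) ^ gpow(X2, j) for j in range(1, 5)]   # S[1..4]
D  = gmul(S[1], S[3]) ^ gmul(S[2], S[2])                     # S1*S3 + S2^2
L1 = gmul(gmul(S[2], S[3]) ^ gmul(S[1], S[4]), ginv(D))
L2 = gmul(gmul(S[3], S[3]) ^ gmul(S[2], S[4]), ginv(D))
assert L1 == X1 ^ X2 and L2 == gmul(X1, X2)
print(L1, L2)                        # 15 13, i.e. alpha^12 and alpha^13
```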

EE 387 Notes #7, Page 29

PGZ decoder example (3)


Next find the two zeroes X1^-1, X2^-1 of Λ(x) (perhaps by exhaustive search).
If Λ(x) does not have two distinct zeroes, an uncorrectable error has occurred.
Finally find the error magnitudes:

  [ Y1 ]   [ X1    X2   ]^-1 [ S1 ]
  [ Y2 ] = [ X1^2  X2^2 ]    [ S2 ]

         = (1 / (X1 X2 (X1 + X2)))  [ X2^2  X2 ] [ S1 ]
                                    [ X1^2  X1 ] [ S2 ]

The matrix-vector product gives the error magnitudes:

  Y1 = (X2^2 S1 + X2 S2) / (X1 X2 (X1 + X2)) = (X2 S1 + S2) / (X1 (X1 + X2))
  Y2 = (X1^2 S1 + X1 S2) / (X1 X2 (X1 + X2)) = (X1 S1 + S2) / (X2 (X1 + X2))

Computation of Y1, Y2 takes about 6 multiplications and 2 reciprocals.


EE 387 Notes #7, Page 30

PGZ decoder example (4)


If M2 is singular, that is, S1 S3 + S2^2 = 0, then we solve the simpler equation
  M1 [ Λ1 ] = [ S2 ]  ⇔  [ S1 ][ Λ1 ] = [ S2 ] .
The error locator polynomial has degree 1:
  Λ(x) = 1 + Λ1 x = 1 + (S2/S1) x   ⇒   X1^-1 = S1/S2 .
If S2 ≠ 0 then the single error locator is the reciprocal of the zero of Λ(x):
  X1 = S2/S1 = (Y1 X1^2) / (Y1 X1) .
The error magnitude is obtained from S1 = Y1 X1:
  Y1 = S1 / X1 = S1^2 / S2 = (Y1 X1)^2 / (Y1 X1^2) .
Finally we check S4 = Y1 X1^4. If not, an uncorrectable error has been detected.


EE 387 Notes #7, Page 31

PGZ in general
By definition of the error locator polynomial, Λ(Xi^-1) = 0:
  1 + Λ1 Xi^-1 + ··· + Λν Xi^-ν = 0   (i = 1, . . . , ν)
Multiply this equation by Yi Xi^(j+ν) for any j ≥ 1:
  Yi Xi^(j+ν) + Λ1 Yi Xi^(j+ν-1) + ··· + Λν Yi Xi^j = 0
This equation has only positive powers of Xi. Now sum over i:
  Σ_{i=1}^{ν} Yi Xi^(j+ν) + Λ1 Σ_{i=1}^{ν} Yi Xi^(j+ν-1) + ··· + Λν Σ_{i=1}^{ν} Yi Xi^j = 0
Thus if j ≥ 1 and j + ν ≤ 2t, that is, 1 ≤ j ≤ 2t - ν,
  S_(j+ν) + Λ1 S_(j+ν-1) + ··· + Λν Sj = 0 .
We have obtained 2t - ν linear equations in the ν unknowns Λ1, . . . , Λν:
  Sj Λν + S_(j+1) Λ_(ν-1) + ··· + S_(j+ν-1) Λ1 = -S_(j+ν)
EE 387 Notes #7, Page 32

Linear equations for 1, . . . ,


The first ν linear equations for Λ1, . . . , Λν have a coefficient matrix:

  [ S1   S2       ···  Sν       ] [ Λν       ]     [ S_(ν+1) ]
  [ S2   S3       ···  S_(ν+1)  ] [ Λ_(ν-1)  ]     [ S_(ν+2) ]
  [ .    .         .   .        ] [ .        ] = - [ .       ]
  [ Sν   S_(ν+1)  ···  S_(2ν-1) ] [ Λ1       ]     [ S_(2ν)  ]

For any μ = 1, 2, . . . , t, let Mμ be the matrix

       [ S1   S2       ···  Sμ       ]
  Mμ = [ S2   S3       ···  S_(μ+1)  ]
       [ .    .         .   .        ]
       [ Sμ   S_(μ+1)  ···  S_(2μ-1) ]

Lemma: Suppose that there are ν ≤ t symbol errors. Then Mν is nonsingular, but
Mμ is singular for μ > ν.
Matrices that are constant along anti-diagonals are called Hankel matrices.
EE 387 Notes #7, Page 33

Determining number of errors


Proof: The syndrome equations are satisfied if we define Xi = 0 when ν < i ≤ t.
Each entry of Mμ is S_(j+k-1) = Σ_{i=1}^{μ} Xi^(j-1) (Yi Xi) Xi^(k-1), so Mμ factors as

       [ 1          ···  1          ] [ Y1 X1              ] [ 1   X1   ···  X1^(μ-1) ]
  Mμ = [ X1         ···  Xμ         ] [       Y2 X2        ] [ 1   X2   ···  X2^(μ-1) ]
       [ .           .   .          ] [            ...     ] [ .   .     .   .        ]
       [ X1^(μ-1)   ···  Xμ^(μ-1)   ] [            Yμ Xμ   ] [ 1   Xμ   ···  Xμ^(μ-1) ]

If i ≤ ν then Xi ≠ 0 and Yi ≠ 0. Therefore Mν is the product of nonsingular
Vandermonde and diagonal matrices and is nonsingular.
But if μ > ν the middle matrix has a zero element Yμ Xμ = 0 on its diagonal.
The middle matrix is singular for μ > ν and therefore Mμ is singular.
EE 387 Notes #7, Page 34

Peterson-Gorenstein-Zierler (PGZ) decoder: summary


1. Compute partial syndromes Sj = r(α^j).
2. Find the largest ν ≤ t such that det Mν ≠ 0.
3. Solve the following linear system for the coefficients of Λ(x):
     Mν [ Λν, . . . , Λ1 ]^T = -[ S_(ν+1), . . . , S_(2ν) ]^T
4. Find X1^-1, . . . , Xν^-1, the zeroes of Λ(x), in GF(q^m), the decoder alphabet.
   If Λ(x) has fewer than ν distinct zeroes, an uncorrectable error has occurred.
5. Solve the following system of linear equations for the error magnitudes Y1, . . . , Yν:
     Y1 X1      + ··· + Yν Xν      = S1
     Y1 X1^2    + ··· + Yν Xν^2    = S2
     ...
     Y1 X1^(2t) + ··· + Yν Xν^(2t) = S2t
The Forney algorithm, Yi = Ω(Xi^-1) / Λ'(Xi^-1), is a closed-form solution for step 5.

EE 387 Notes #7, Page 35

3EC Reed-Solomon code


Narrow-sense BCH codes usually do not have the simplest generator polynomial or
parity-check matrix.
For that reason, Reed-Solomon codes are usually defined using b = 0.
The following matrix defines a three-error-correcting Reed-Solomon code:

      [ 1   1     1      1      ···  1          ]
      [ 1   α     α^2    α^3    ···  α^(n-1)    ]
  H = [ 1   α^2   α^4    α^6    ···  α^(2(n-1)) ]
      [ 1   α^3   α^6    α^9    ···  α^(3(n-1)) ]
      [ 1   α^4   α^8    α^12   ···  α^(4(n-1)) ]
      [ 1   α^5   α^10   α^15   ···  α^(5(n-1)) ]

The generator polynomial is
  g(x) = (x + 1)(x + α)(x + α^2)(x + α^3)(x + α^4)(x + α^5)
Another trick to reduce encoder complexity is to choose b so that the generator
polynomial is reversible: inverses of zeroes are also zeroes, so half as many
scalers are needed in the encoding circuit.
EE 387 Notes #7, Page 36

3EC Reed-Solomon decoding (1)


The partial syndromes defined by Sj = r(α^j) for j = 0, . . . , 5 satisfy the equations
  S0 = Y1        + Y2        + Y3
  S1 = Y1 X1     + Y2 X2     + Y3 X3
  S2 = Y1 X1^2   + Y2 X2^2   + Y3 X3^2
  S3 = Y1 X1^3   + Y2 X2^3   + Y3 X3^3
  S4 = Y1 X1^4   + Y2 X2^4   + Y3 X3^4
  S5 = Y1 X1^5   + Y2 X2^5   + Y3 X3^5
where X1, X2, X3 are error location numbers and Y1, Y2, Y3 are error magnitudes.
The coefficients of the error locator polynomial Λ(x) satisfy the linear equations:

  [ S0  S1  S2 ] [ Λ3 ]   [ S3 ]
  [ S1  S2  S3 ] [ Λ2 ] = [ S4 ]
  [ S2  S3  S4 ] [ Λ1 ]   [ S5 ]
EE 387 Notes #7, Page 37

3EC Reed-Solomon decoding (2)


If there are three errors, then the solutions can be found using Cramer's rule:
  Λ0 = S2 (S1 S3 + S2 S2) + S3 (S0 S3 + S1 S2) + S4 (S0 S2 + S1 S1)
  Λ1 = S3 (S1 S3 + S2 S2) + S4 (S0 S3 + S1 S2) + S5 (S0 S2 + S1 S1)
  Λ2 = S3 (S1 S4 + S2 S3) + S4 (S0 S4 + S2 S2) + S5 (S0 S3 + S1 S2)
  Λ3 = S3 (S2 S4 + S3 S3) + S4 (S1 S4 + S2 S3) + S5 (S1 S3 + S2 S2)
Note that we can choose Λ0 = 1 by dividing the other coefficients by Λ0.
Let X1^-1, X2^-1, X3^-1 be the zeroes of
  Λ(x) = Λ0 + Λ1 x + Λ2 x^2 + Λ3 x^3 .
The locations of the incorrect symbols are i1, i2, i3, where
  X1 = α^(i1) ,   X2 = α^(i2) ,   X3 = α^(i3) .

EE 387 Notes #7, Page 38

3EC Reed-Solomon decoding (3)


Finally, the error magnitudes Y1, Y2, Y3 can be found by solving equations that use
the first three syndrome components S0, S1, S2:

  Y1 = (S2 + S1 (X2 + X3) + S0 X2 X3) / ((X1 + X2)(X1 + X3))

  Y2 = (S2 + S1 (X1 + X3) + S0 X1 X3) / ((X2 + X1)(X2 + X3))

  Y3 = (S2 + S1 (X1 + X2) + S0 X1 X2) / ((X3 + X1)(X3 + X2))

Starting from the partial syndromes S0, S1, . . . , S5, approximately 30 Galois field
multiplications and 3 Galois field divisions are needed to perform decoding.
This estimate does not count the effort needed to find the zeroes of Λ(x).

EE 387 Notes #7, Page 39

Partial syndrome circuits for GF(32)


Horner's method: the partial syndromes S1 = r(α) and S3 = r(α^3) are computed by
circuits that repeatedly multiply the accumulated value by α or α^3 and add the
next symbol of r(x).
Multiplication by α and by α^3 uses 5 × 5 binary matrices M_α and M_(α^3) over
GF(2), derived from α^5 + α^2 + 1 = 0.
  (circuit diagrams and scaler matrices not reproduced)
EE 387 Notes #7, Page 40

Partial syndrome circuit for (255,251) R-S code

(figure: partial syndrome circuit for the (255,251) R-S code; the scalers for
α^0, α^1, α^2, α^3 are 8 × 8 binary matrices operating on the bytes r0, . . . , r7;
not reproduced)

EE 387 Notes #7, Page 41

Chien search
The Chien search is a clever method for finding zeroes of the error locator
polynomial by brute force.
The Chien search evaluates Λ(α^i) for i = 1, 2, . . . , n using constant
multiplications instead of general multiplications.
Key idea: use state variables Q1, . . . , Qt such that at time i
  Qj = Λj α^(ji) ,   j = 1, . . . , t .
Each state variable is updated by multiplication by a constant:
  Qj ← Qj · α^j ,   i = 1, . . . , n .
The sum of the state variables at time i is
  Σ_{j=1}^{t} Qj = Λ(α^i) - 1 .
An error location is identified whenever this sum equals -1.
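The update rule can be sketched in software (an illustration, not from the notes): the example reuses GF(16) with x^4 + x + 1 and the locator Λ(x) = 1 + α^12 x + α^13 x^2 of errors at positions 3 and 10.

```python
def gmul(a, b):                       # multiply in GF(16), poly x^4 + x + 1
    r = 0
    while b:
        if b & 1: r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10: a ^= 0b10011
    return r

lam = [15, 13]                        # Lambda_1 = alpha^12, Lambda_2 = alpha^13
alpha_j = [2, gmul(2, 2)]             # constant multipliers alpha^1, alpha^2
Q = lam[:]                            # at time i, Q[j] = Lambda_j * alpha^(j*i)
locs = []
for i in range(1, 16):                # n = 15; i = 1, ..., n
    Q = [gmul(Q[j], alpha_j[j]) for j in range(2)]
    if Q[0] ^ Q[1] == 1:              # sum = Lambda(alpha^i) - 1; -1 = 1 over GF(2^m)
        locs.append((15 - i) % 15)    # zero alpha^i  =>  locator alpha^(n-i)
print(sorted(locs))                   # [3, 10]
```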

EE 387 Notes #7, Page 42

Chien search circuit #1


Memory elements are initialized with the coefficients of the error locator
polynomial, i.e., Λj = 0 for j = ν + 1, . . . , t.
  (circuit diagram: scalers for α^1, . . . , α^t feeding a sum that drives /ERRLOC;
  not reproduced)
Output signal ERRLOC is true when Λ(α^i) = 0.
Since the zeroes of Λ(x) are the reciprocals of the error location numbers, ERRLOC
is true for values of i such that α^-i = α^(n-i) = Xl.
As i runs from 1 to n, error locations are detected from msb down to lsb.
The Chien search can also be run backwards, using scalers for α^-1, . . . , α^-t.
EE 387 Notes #7, Page 43

Chien search circuit #2


Double-speed Chien search: evaluate Λ(α^(2i)) and Λ(α^(2i+1)) at the same time.
  (circuit diagram with scalers α^2, . . . , α^(2t); outputs /ERRLOC0, /ERRLOC1;
  not reproduced)
ERRLOC0 (ERRLOC1) is asserted when an even (odd) error location is found.

This circuit is more efficient than two separate copies of the Chien search engine
because the memory storage elements are shared.
EE 387 Notes #7, Page 44

Chien search circuit #3


Scaler α^(2j) usually requires more gates than scaler α^j for small values of j.
We can reduce the cost of the double-speed Chien search by using α^(2j) = α^j · α^j.
  (circuit diagram; outputs /ERRLOC1, /ERRLOC0; not reproduced)
The cascade of two scalers for α^i may be slightly slower than one scaler for α^(2i).

EE 387 Notes #7, Page 45

PGZ decoder: review


1. Compute partial syndromes Sj = r(α^j).
2. Solve a linear system of equations for the coefficients of Λ(x):
     Mν [ Λν, . . . , Λ1 ]^T = -[ S_(ν+1), . . . , S_(2ν) ]^T
   where ν is the largest number ≤ t such that det Mν ≠ 0.
3. Find the zeroes of Λ(x), which are X1^-1, . . . , Xν^-1, the reciprocals of the
   error locators X1 = α^(i1), . . . , Xν = α^(iν).
4. Solve a system of linear equations for the error magnitudes Y1, . . . , Yν:
     Y1 X1      + ··· + Yν Xν      = S1
     Y1 X1^2    + ··· + Yν Xν^2    = S2
     ...
     Y1 X1^(2t) + ··· + Yν Xν^(2t) = S2t
The Forney algorithm (1965) is a simple closed formula for Y1, . . . , Yν.

EE 387 Notes #7, Page 46

Forney Algorithm
Consider a BCH code defined by the zeroes α^b, α^(b+1), . . . , α^(b+2t-1).
Forney algorithm: the error magnitude Yi corresponding to error locator Xi is
  Yi = Xi^(1-b) Ω(Xi^-1) / Λ'(Xi^-1) ,
where Λ'(x) is the formal derivative of the error-locator polynomial,
  Λ'(x) = Σ_{i=1}^{ν} i Λi x^(i-1) ,
and Ω(x) is the error evaluator polynomial, S(x)Λ(x) mod x^(2t).
The Forney algorithm is slightly simpler for narrow-sense BCH codes (b = 1):
  Yi = Ω(Xi^-1) / Λ'(Xi^-1) .
Fact: Forney's algorithm uses about 2ν^2 multiplications to compute all error magnitudes.
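A worked narrow-sense example (an illustration, not from the notes): over GF(16) with x^4 + x + 1, plant two errors with made-up magnitudes Y1 = α^2, Y2 = α^7 at locators X1 = α^3, X2 = α^5, form Ω(x) from the first two coefficients of S(x)Λ(x), and check that Forney returns the planted magnitudes. Over GF(2^m) the formal derivative of Λ(x) = 1 + Λ1 x + Λ2 x^2 is just the constant Λ1.

```python
def gmul(a, b):                        # multiply in GF(16), poly x^4 + x + 1
    r = 0
    while b:
        if b & 1: r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10: a ^= 0b10011
    return r

def gpow(a, n):
    r = 1
    for _ in range(n % 15): r = gmul(r, a)
    return r

ginv = lambda a: gpow(a, 14)

Y, X = [4, 11], [gpow(2, 3), gpow(2, 5)]        # Y1 = a^2, Y2 = a^7; X1 = a^3, X2 = a^5
S = [gmul(Y[0], gpow(X[0], j)) ^ gmul(Y[1], gpow(X[1], j)) for j in (1, 2, 3, 4)]
L1, L2 = X[0] ^ X[1], gmul(X[0], X[1])          # Lambda(x) = 1 + L1 x + L2 x^2
O0, O1 = S[0], S[1] ^ gmul(S[0], L1)            # Omega = S*Lambda mod x^4, degree < 2
for i in (0, 1):                                # Forney: Yi = Omega(Xi^-1) / Lambda'(Xi^-1)
    Xi_inv = ginv(X[i])
    Yi = gmul(O0 ^ gmul(O1, Xi_inv), ginv(L1))  # Lambda'(x) = L1 over GF(2^m)
    assert Yi == Y[i]
print("magnitudes recovered")
```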
EE 387 Notes #7, Page 47

Partial syndrome polynomial


Definition: The partial syndrome polynomial for a narrow-sense BCH code is the
generating function of the sequence S1, S2, . . . , S2t:
  S(x) = S1 + S2 x + S3 x^2 + ··· + S2t x^(2t-1) .
For the BCH code defined by α^b, . . . , α^(b+2t-1), the partial syndrome polynomial is
  S(x) = Sb + S_(b+1) x + ··· + S_(b+2t-1) x^(2t-1) .
The PGZ decoder uses linear equations for the coefficients of Λ(x), j = 1, . . . , 2t - ν:
  Sj Λν + ··· + S_(j+ν-1) Λ1 + S_(j+ν) = 0                  narrow-sense codes
  S_(b+j-1) Λν + ··· + S_(b+j+ν-2) Λ1 + S_(b+j+ν-1) = 0     general BCH codes
In both cases, the left-hand side is the coefficient of x^(ν+j-1) in the polynomial
product S(x)Λ(x).
We can define partial syndromes Si for every i > 0. However, the decoder can compute only the first 2t values.
EE 387 Notes #7, Page 48

Error evaluator polynomial


Definition: The error evaluator polynomial Ω(x) is defined by the key equation:
  Ω(x) = S(x)Λ(x) mod x^(2t) ,
where S(x) is the partial syndrome polynomial and Λ(x) is the error-locator polynomial.
The coefficient of x^(ν+j-1) in S(x)Λ(x) is 0 for 1 ≤ j ≤ 2t - ν by the PGZ equations.
Therefore deg(S(x)Λ(x) mod x^(2t)) < ν if there are ν ≤ t errors.
The error evaluator polynomial can be computed explicitly from Λ(x):
  Ω0 = Sb
  Ω1 = S_(b+1) + Sb Λ1
  Ω2 = S_(b+2) + S_(b+1) Λ1 + Sb Λ2
  ...
  Ω_(ν-1) = S_(b+ν-1) + S_(b+ν-2) Λ1 + ··· + Sb Λ_(ν-1)
Multiply-accumulates needed: 0 + 1 + ··· + (ν-1) = ½ ν(ν-1) ≈ ½ ν^2


EE 387 Notes #7, Page 49

Formal derivative
We can obtain a closed formula for Yi in terms of Λ(x), Ω(x), and Xi.
First we need the notion of the formal derivative of a polynomial.
Definition: The formal derivative of
  f(x) = f0 + f1 x + f2 x^2 + ··· + fn x^n
is the polynomial
  f'(x) = f1 + 2 f2 x + 3 f3 x^2 + ··· + n fn x^(n-1) .
Most of the familiar properties of derivatives hold. In particular, the product rules:
  (f(x) g(x))' = f'(x) g(x) + f(x) g'(x)
  ( ∏_{i=1}^{n} fi(x) )' = Σ_{i=1}^{n} fi'(x) ∏_{j≠i} fj(x)
Formal derivatives of polynomials are defined algebraically, not by taking limits.


EE 387 Notes #7, Page 50

Properties of formal derivatives


Fact: A zero α of a polynomial f(x) over GF(q) is a repeated zero iff f'(α) = 0.
Proof: If α is a zero of f(x), then x - α is a factor of f(x):
  f(x) = f1(x)(x - α)  ⇒  f'(x) = f1(x) + f1'(x)(x - α)  ⇒  f'(α) = f1(α) .
Thus α is a repeated zero, i.e., f(x) has factor (x - α)^2, if and only if f'(α) = 0.
Over GF(2^m), the formal derivative has only even powers of the indeterminate:
  f'(x) = f1 + 2 f2 x + 3 f3 x^2 + ··· + n fn x^(n-1) = f1 + 3 f3 x^2 + 5 f5 x^4 + ···
since 2 = 1 + 1 = 0, 4 = 2 + 2 = 2(1 + 1) = 0, and so on.
So the formal derivative Λ'(x) has at most ⌈ν/2⌉ nonzero coefficients.
Since Λ'(x) is a polynomial in x^2 of degree < ν/2, we can compute Λ'(β) using
one squaring and about ν/2 multiply-accumulate operations.
Note that f''(x) = 0 for all polynomials over GF(2^m).
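The vanishing of even-power terms can be seen directly in code. In the sketch below (an illustration, not from the notes) the "integer times field element" product i·f_i is computed as f_i added to itself i times in the field's additive group, so no multiplication table is needed; the coefficients are arbitrary GF(16) elements.

```python
def formal_derivative(f, add, zero=0):
    """f: coefficient list f[0..n], low degree first.
    The coefficient i*f_i means f_i added to itself i times."""
    out = []
    for i in range(1, len(f)):
        s = zero
        for _ in range(i):            # i * f_i in the additive group
            s = add(s, f[i])
        out.append(s)
    return out

xor = lambda a, b: a ^ b              # addition in GF(2^m) is XOR
# f(x) = 3 + 5x + 7x^2 + 9x^3 + 11x^4 over GF(16)
print(formal_derivative([3, 5, 7, 9, 11], xor))   # [5, 0, 9, 0]
```

The result 5 + 9x^2 contains only even powers of x, as claimed.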


EE 387 Notes #7, Page 51

Forney algorithm: derivation (1)


We can express the error evaluator Ω(x) in terms of the error location numbers Xi
and the error magnitudes Yi.
First we derive a closed formula for S(x).

  S(x) = Σ_{j=0}^{2t-1} S_(b+j) x^j
       = Σ_{j=0}^{2t-1} Σ_{i=1}^{ν} Yi Xi^(b+j) x^j
       = Σ_{i=1}^{ν} Yi Xi^b Σ_{j=0}^{2t-1} Xi^j x^j
       = Σ_{i=1}^{ν} Yi Xi^b (1 - (Xi x)^(2t)) / (1 - Xi x) .

Next we use the definition Λ(x) = ∏_{l=1}^{ν} (1 - Xl x) to compute S(x)Λ(x).
EE 387 Notes #7, Page 52

Forney algorithm: derivation (2)


  S(x)Λ(x) = Σ_{i=1}^{ν} Yi Xi^b (1 - (Xi x)^(2t)) / (1 - Xi x) · ∏_{l=1}^{ν} (1 - Xl x)
           = Σ_{i=1}^{ν} Yi Xi^b (1 - (Xi x)^(2t)) ∏_{l≠i} (1 - Xl x)
           = Σ_{i=1}^{ν} Yi Xi^b ∏_{l≠i} (1 - Xl x)
             - Σ_{i=1}^{ν} Yi Xi^b (Xi x)^(2t) ∏_{l≠i} (1 - Xl x)

Every term of the second sum in the final expression is divisible by x^(2t).
Thus the remainder modulo x^(2t) of the second sum is 0.
Therefore
  Ω(x) = S(x)Λ(x) mod x^(2t) = Σ_{i=1}^{ν} Yi Xi^b ∏_{l≠i} (1 - Xl x) .
EE 387 Notes #7, Page 53

Forney algorithm: derivation (3)


We just found Ω(x) in terms of Xi and Yi. Next use the product formula for Λ'(x):
  Λ'(x) = ( ∏_{l=1}^{ν} (1 - Xl x) )' = Σ_{l=1}^{ν} (-Xl) ∏_{j≠l} (1 - Xj x) .
When we evaluate Λ'(x) at Xi^-1, only one term in the sum is nonzero:
  Λ'(Xi^-1) = -Xi ∏_{j≠i} (1 - Xj Xi^-1) .
Similarly, the value of Ω(x) at Xi^-1 includes only one term from the sum:
  Ω(Xi^-1) = Σ_{l=1}^{ν} Yl Xl^b ∏_{j≠l} (1 - Xj Xi^-1) = Yi Xi^b ∏_{j≠i} (1 - Xj Xi^-1) .
Thus
  Yi = Xi^-(b-1) Ω(Xi^-1) / Λ'(Xi^-1) = Xi^(1-b) Ω(Xi^-1) / Λ'(Xi^-1)
(over GF(2^m), where -1 = +1, the minus sign from Λ' may be dropped).

EE 387 Notes #7, Page 54

Forney algorithm during Chien search

Error magnitudes Y_i can be computed by a Chien-search-like circuit:

[Figure: circuit with two banks of registers clocked once per location i, producing
the m-bit values Ω(α^i) and Λ'(α^i), followed by a divider and a multiplier by α^{i(b−1)}.]

At each time i, the values of Ω(α^i) and Λ'(α^i) are available.
Thus Y_i can be computed by one division and one multiplication by α^{i(b−1)} .
EE 387 Notes #7, Page 55

Forney algorithm: summary

Suppose that a BCH code is defined by zeroes α^b, α^{b+1}, …, α^{b+2t−1} .
Suppose that the error-locator polynomial Λ(x) has degree ν.
The error evaluator polynomial Ω(x) consists of the first ν terms of S(x)Λ(x).
The formal derivative is Λ'(x) = Σ_{i=1}^{ν} iΛ_i x^{i−1} .

Then the error magnitude Y_i corresponding to error location number X_i is

  Y_i = −X_i^{1−b} Ω(X_i^{−1}) / Λ'(X_i^{−1})

(the minus sign vanishes over GF(2^m)).
Computation of the coefficients of Ω(x) uses ≈ ν²/2 multiplications.
Computation of each Y_i needs ν + (ν − 1) + 2 = 2ν + 1 multiplications plus one reciprocal.
Forney's algorithm finds all ν error magnitudes using ≈ 2.5ν² multiplications.
When GF(2^m) is the decoder alphabet, Λ'(x) has only ⌈ν/2⌉ nonzero coefficients,
which reduces the total operation count to ≈ 2ν² multiplications.
EE 387 Notes #7, Page 56
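The Forney formula can be exercised end to end in a few lines. This is a sketch, not the notes' implementation: it assumes GF(2^8) with primitive polynomial 0x11d, a narrow-sense (b = 1) 2EC Reed-Solomon code, and made-up error positions and magnitudes. It builds the syndromes, forms Λ(x) and Ω(x) = S(x)Λ(x) mod x^{2t} directly from the planted locators, and checks that the formula returns the planted magnitudes (the minus sign vanishes in characteristic 2):

```python
# GF(2^8) tables; primitive polynomial x^8+x^4+x^3+x^2+1 (0x11d) is an
# assumption -- the notes do not pin down the field polynomial.
EXP, LOG = [0] * 512, [0] * 256
v = 1
for i in range(255):
    EXP[i], LOG[v] = v, i
    v <<= 1
    if v & 0x100:
        v ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def ginv(a):
    return EXP[255 - LOG[a]]

def poly_mul(p, q):                     # coefficient lists, low degree first
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, c in enumerate(q):
            r[i + j] ^= gmul(a, c)
    return r

def poly_eval(p, x):
    y, xp = 0, 1
    for c in p:
        y ^= gmul(c, xp)
        xp = gmul(xp, x)
    return y

t, b = 2, 1
errors = {3: 0x0F, 10: 0xA0}            # position -> magnitude (test values)
X = [EXP[j] for j in errors]            # error location numbers X_i = alpha^j
Y = list(errors.values())

S = [0] * (2 * t)                       # partial syndromes S_{b+j}, j = 0..2t-1
for j in range(2 * t):
    for Xi, Yi in zip(X, Y):
        S[j] ^= gmul(Yi, EXP[LOG[Xi] * (b + j) % 255])

Lam = [1]                               # Lambda(x) = prod (1 - X_i x)
for Xi in X:
    Lam = poly_mul(Lam, [1, Xi])
Om = poly_mul(S, Lam)[:2 * t]           # Omega(x) = S(x)Lambda(x) mod x^{2t}
LamD = [c if i % 2 else 0 for i, c in enumerate(Lam)][1:]   # Lambda'(x), char 2

recovered = {}
for j, Xi in zip(errors, X):
    Xinv = ginv(Xi)
    recovered[j] = gmul(EXP[LOG[Xi] * (1 - b) % 255],
                        gmul(poly_eval(Om, Xinv), ginv(poly_eval(LamD, Xinv))))
assert recovered == errors              # Forney returns the planted magnitudes
```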

Euclidean BCH decoding algorithm

Sugiyama, Kasahara, Hirasawa, Namekawa (1975). The key equation is

  Ω(x) = S(x)Λ(x) mod x^{2t}  ⟺  Ω(x) = S(x)Λ(x) + b(x)x^{2t}

for some polynomial b(x) of degree < ν.
Suppose that the extended Euclidean algorithm is used to calculate gcd(S(x), x^{2t}).
For i = 1, 2, …:

  r_i(x) = r_{i−2}(x) − Q_i(x) r_{i−1}(x) = a_i(x)S(x) + b_i(x)x^{2t} ,
  a_i(x) = a_{i−2}(x) − Q_i(x) a_{i−1}(x) ,
  b_i(x) = b_{i−2}(x) − Q_i(x) b_{i−1}(x) .

At some step i the remainder r_i(x) has degree < t.¹
The first such remainder is r_i(x) = δΩ(x), where δ is the constant term of a_i(x).
The error-locator polynomial Λ(x) = δ^{−1} a_i(x) is a polynomial of least degree such
that deg(S(x)Λ(x) mod x^{2t}) < t.

¹ Unless S_i = 0 for i > t while not all S_i = 0 , in which case an uncorrectable error has occurred.
EE 387 Notes #7, Page 57

Euclidean algorithm: pseudocode

1. Compute syndromes: S_j = r(α^j) , j = 1, …, 2t .
2. Initialize:

  s(x) ← x^{2t} ;   t(x) ← Σ_{j=1}^{2t} S_j x^{j−1} ;   A(x) ← [ 1 0 ; 0 1 ] .

3. While deg t(x) ≥ t :

  Q(x) ← ⌊s(x)/t(x)⌋ ;
  [ s(x) ; t(x) ] ← [ 0 1 ; 1 −Q(x) ] [ s(x) ; t(x) ] ;
  A(x) ← [ 0 1 ; 1 −Q(x) ] A(x) ;

4. Finalize:

  δ ← A₂₂(0) ;   Λ(x) ← δ^{−1} A₂₂(x) ;   Ω(x) ← δ^{−1} t(x) ;

The quotient ⌊s(x)/t(x)⌋ is defined by s(x) = ⌊s(x)/t(x)⌋ t(x) + r(x) , deg r(x) < deg t(x) .
EE 387 Notes #7, Page 58
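The iteration can be sketched directly in Python. Assumptions: GF(2^8) with primitive polynomial 0x11d, narrow-sense (b = 1) syndromes, and a made-up 3-error pattern; the loop stops as soon as deg r_i(x) < t and normalizes by the constant term δ of a_i(x), as in the notes:

```python
EXP, LOG = [0] * 512, [0] * 256         # GF(2^8), polynomial 0x11d (assumed)
v = 1
for i in range(255):
    EXP[i], LOG[v] = v, i
    v <<= 1
    if v & 0x100:
        v ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]
def gmul(a, b): return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]
def ginv(a): return EXP[255 - LOG[a]]
def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, c in enumerate(q):
            r[i + j] ^= gmul(a, c)
    return r
def poly_add(p, q):                     # char 2: subtraction = addition = xor
    r = [0] * max(len(p), len(q))
    for i, c in enumerate(p): r[i] ^= c
    for i, c in enumerate(q): r[i] ^= c
    return r
def poly_eval(p, x):
    y, xp = 0, 1
    for c in p:
        y ^= gmul(c, xp); xp = gmul(xp, x)
    return y
def deg(p):
    d = len(p) - 1
    while d >= 0 and p[d] == 0: d -= 1
    return d
def poly_divmod(a, b):                  # returns (quotient, remainder)
    a, db = a[:], deg(b)
    binv = ginv(b[db])
    q = [0] * max(1, len(a) - db)
    for i in range(deg(a) - db, -1, -1):
        c = gmul(a[i + db], binv)
        q[i] = c
        for j in range(db + 1):
            a[i + j] ^= gmul(c, b[j])
    return q, a

def sugiyama(S, t):
    """Extended Euclid on (x^{2t}, S(x)); stop when deg r < t."""
    r_prev, r_cur = [0] * (2 * t) + [1], S[:]
    a_prev, a_cur = [0], [1]            # cofactors of S(x) in r = aS + b x^{2t}
    while deg(r_cur) >= t:
        q, r = poly_divmod(r_prev, r_cur)
        a = poly_add(a_prev, poly_mul(q, a_cur))
        r_prev, r_cur, a_prev, a_cur = r_cur, r, a_cur, a
    dinv = ginv(a_cur[0])               # delta = constant term of a_i(x)
    return [gmul(dinv, c) for c in a_cur], [gmul(dinv, c) for c in r_cur]

pos, mag = [1, 5, 12], [0x21, 0x07, 0xFE]   # planted errors (test values)
S = [0] * 6
for j in range(6):
    for p, y in zip(pos, mag):
        S[j] ^= gmul(y, EXP[p * (1 + j) % 255])
Lam, Om = sugiyama(S, 3)
assert deg(Lam) == 3
for p in pos:                           # zeros of Lambda(x) are the X_i^{-1}
    assert poly_eval(Lam, ginv(EXP[p])) == 0
```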

Euclidean algorithm tableau

[Figure: tableau showing the 2t syndrome symbols S₁, …, S_{2t} flowing through a
sequence of degree-1 quotient cells with coefficients q₁, q₀, ending with Λ(x) and Ω(x).]

Usually the quotient Q_i(x) is linear. In this case

  r_i(x) = r_{i−2}(x) − q_{i1} x r_{i−1}(x) − q_{i0} r_{i−1}(x) .

The coefficients of Q_i(x) can be found from the first 2 coefficients of r_{i−2}(x) and r_{i−1}(x).
EE 387 Notes #7, Page 59

Euclidean algorithm: example (1)

6EC Reed-Solomon code over GF(2^8):

One error:
[Tableau of remainders r_i(x), quotients Q_i(x), and auxiliary polynomials a_i(x), in hex, omitted.]

  Ω(x) = 29/CE = 25 ,   Λ(x) = (CE A7)/CE = 01 71

Op count: mul = 29 , div = 5

Three errors:
[Tableau of r_i(x), Q_i(x), a_i(x) omitted.]

  Ω(x) = 10 b3 f5 ,   Λ(x) = 01 5c d7 4f

Op count: mul = 91 , div = 13
EE 387 Notes #7, Page 60

Euclidean algorithm: example (2)

6EC Reed-Solomon code over GF(2^8):

Six errors:
[Tableau of remainders r_i(x), quotients Q_i(x), and auxiliary polynomials a_i(x), in hex, omitted.]

  Ω(x) = 04 F4 16 CC F2 BA ,   Λ(x) = 01 7C 95 B7 09 DA 82

Op count: mul = 169 , div = 25
EE 387 Notes #7, Page 61

Euclidean algorithm: computational cost

The Euclidean algorithm produces remainders such that deg r_i(x) < deg r_{i−1}(x):
  the initial remainder S(x) has degree ≤ 2t − 1
  the final remainder δΩ(x) has degree ≤ t − 1
Therefore at most t major steps are needed.
Each major step is a polynomial division followed by a polynomial multiplication:

  r_{i−2}(x) = Q_i(x) r_{i−1}(x) + r_i(x) ,
  a_i(x) = a_{i−2}(x) − Q_i(x) a_{i−1}(x) .

At step i approximately 2(2t − i) multiplications are used to find Q_i(x) and 2i
multiplications to find a_i(x). Total multiplications per step ≈ 4t.
Overall cost to find Λ(x) and Ω(x): ≈ 4t² multiplications and t reciprocals.

Solving M_t [ Λ_t , …, Λ_1 ]^T = [ S_{t+1} , …, S_{2t} ]^T directly takes ≈ t³/6 operations.
EE 387 Notes #7, Page 62

Berlekamp decoding algorithm

Berlekamp (1967) invented an efficient iterative procedure for solving the linear
equations with coefficient matrices

        [ S₁   S₂      ⋯  S_ν      ]
        [ S₂   S₃      ⋯  S_{ν+1}  ]
  M_ν = [ S₃   S₄      ⋯  S_{ν+2}  ] ,
        [  ⋮    ⋮      ⋱   ⋮       ]
        [ S_ν  S_{ν+1} ⋯  S_{2ν−1} ]

where S_j = Σ_{i=1}^{ν} Y_i X_i^j is a partial syndrome.
Each M_ν is found by using results of computations for some previous M_μ plus
an additional O(ν) operations.
Summing over ν from 1 to t gives total cost O(t²) operations.
In Berlekamp's original algorithm, a table with 2t rows stored intermediate results.
EE 387 Notes #7, Page 63

Massey decoding algorithm: shift register synthesis

Massey (1969) showed that finding the error-locator polynomial Λ(x) is equivalent
to a shift-register synthesis problem:

  Given a sequence S₁, S₂, …, S_{2t} , find the shortest tap sequence Λ₁, Λ₂, …, Λ_L that
  generates S_{L+1}, …, S_{2t} starting from S₁, …, S_L in a shift register of size L.

[Figure: linear feedback shift register with taps −Λ₁, …, −Λ_L generating the syndrome sequence.]

Recall that if the number of errors is ν then

  S_j Λ_ν + S_{j+1} Λ_{ν−1} + ⋯ + S_{j+ν−1} Λ_1 = −S_{j+ν}

for j = 1, …, 2t − ν . The PGZ system of equations is a convolution S ∗ Λ .
EE 387 Notes #7, Page 64

Berlekamp-Massey algorithm: overview

Partial syndromes S₁, …, S_{2t} are examined one at a time, for k = 1, …, 2t.
At the end of the k-th step, Λ^(k)(x) of degree L satisfies the first k equations:

  S_{k−i} + Λ₁^(k) S_{k−i−1} + ⋯ + Λ_L^(k) S_{k−i−L} = 0 ,   i = 0, …, k − L − 1 .

If Λ^(k−1)(x) satisfies the k-th equation, then obviously Λ^(k)(x) = Λ^(k−1)(x) .
The key and surprising idea: when Λ^(k−1)(x) does not work (the sum is Δ^(k) ≠ 0),
update it as follows:

  Λ^(k)(x) = Λ^(k−1)(x) − Δ^(k) T(x) ,

where T(x) = (1/Δ^(r)) x^{k−r} Λ^(r)(x) is the last failing Λ^(r)(x), shifted and scaled.
When the degree of Λ(x) has increased, we save Λ^(k−1)(x) for future steps:

  T(x) = (1/Δ^(k)) x Λ^(k−1)(x) .

Δ^(k) is called the discrepancy at step k .
EE 387 Notes #7, Page 65

Berlekamp-Massey algorithm: pseudocode

  Λ(x) = 1 ;                          /* "connection polynomial" */
  L = 0 ;                             /* L always equals deg Λ(x) */
  T(x) = x ;                          /* "correction polynomial" */
  for ( k = 1 ; k <= 2t ; k++ ) {
      Δ = Σ_{i=0}^{L} Λ_i S_{k−i} ;   /* S_k + Λ₁S_{k−1} + ⋯ + Λ_L S_{k−L} */
      if ( Δ == 0 ) {
          N(x) = Λ(x) ;               /* keep same Λ(x) if Δ == 0 */
      } else {
          N(x) = Λ(x) − Δ·T(x) ;      /* new Λ(x) has 0 discrepancy */
          if ( L < k − L ) {
              L = k − L ;
              T(x) = Δ^{−1} Λ(x) ;    /* new correction polynomial */
          }
      }
      T(x) = x·T(x) ;                 /* shift correction polynomial */
      Λ(x) = N(x) ;                   /* possibly new value of Λ(x) */
  }
EE 387 Notes #7, Page 66
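The pseudocode translates almost line for line into Python. This sketch assumes GF(2^8) with primitive polynomial 0x11d and syndromes supplied as a 0-based list with S[0] = S_1; the planted 3-error pattern is a made-up test case:

```python
EXP, LOG = [0] * 512, [0] * 256         # GF(2^8), polynomial 0x11d (assumed)
v = 1
for i in range(255):
    EXP[i], LOG[v] = v, i
    v <<= 1
    if v & 0x100:
        v ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]
def gmul(a, b): return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]
def ginv(a): return EXP[255 - LOG[a]]
def poly_add(p, q):                     # char 2: subtraction = addition = xor
    r = [0] * max(len(p), len(q))
    for i, c in enumerate(p): r[i] ^= c
    for i, c in enumerate(q): r[i] ^= c
    return r
def poly_eval(p, x):
    y, xp = 0, 1
    for c in p:
        y ^= gmul(c, xp); xp = gmul(xp, x)
    return y

def berlekamp_massey(S, t):
    """S is 0-based: S[0] = S_1.  Returns (Lambda(x), L)."""
    Lam, T, L = [1], [0, 1], 0          # Lambda = 1, T = x, L = 0
    for k in range(1, 2 * t + 1):
        d = 0                           # discrepancy Delta
        for i, c in enumerate(Lam):
            if k - 1 - i >= 0:
                d ^= gmul(c, S[k - 1 - i])
        if d:
            N = poly_add(Lam, [gmul(d, c) for c in T])   # Lambda - Delta*T
            if 2 * L < k:               # register must lengthen: L < k - L
                L, T = k - L, [gmul(ginv(d), c) for c in Lam]
            Lam = N
        T = [0] + T                     # T(x) <- x T(x)
    return Lam, L

pos, mag = [2, 7, 50], [0x31, 0x8E, 0x05]   # planted errors (test values)
S = [0] * 6
for j in range(6):
    for p, y in zip(pos, mag):
        S[j] ^= gmul(y, EXP[p * (1 + j) % 255])
Lam, L = berlekamp_massey(S, 3)
assert L == 3                           # degree equals the number of errors
for p in pos:                           # zeros of Lambda(x) are the X_i^{-1}
    assert poly_eval(Lam, ginv(EXP[p])) == 0
```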

Berlekamp-Massey tableau

The following figure shows a typical computation for a 6EC BCH code.

[Figure: for k = 1, …, 12, the stored coefficients of Λ(x) and T(x); the number of
coefficients grows each time the register length L increases.]
EE 387 Notes #7, Page 67

Berlekamp-Massey example (1)

6EC Reed-Solomon code over GF(2^8). One error.
S = 6f 81 63 f9 74 6f 81 63 f9 74 6f 81

[Tableau of Λ^(k)(x) and T^(k)(x) for k = 1, …, 12 omitted; after k = 2 the estimate
settles at Λ(x) = 01 0A and never changes again.]

Op count: mul = 28 , div = 1
EE 387 Notes #7, Page 68

Berlekamp-Massey example (2)

6EC Reed-Solomon code over GF(2^8). Two errors.
S = b0 91 cc d1 99 26 0a 8a 70 67 96 c9

[Tableau of Λ^(k)(x) and T^(k)(x) for k = 1, …, 12 omitted; the estimate reaches
Λ(x) = 01 44 87 at k = 4 and never changes again.]

Op count: mul = 45 , div = 2
EE 387 Notes #7, Page 69

Berlekamp-Massey example (3)

6EC Reed-Solomon code over GF(2^8). Six errors.
S = bc 30 bb 24 81 74 e5 a7 bd 2b 95 34

[Tableau of Λ^(k)(x) and T^(k)(x) for k = 1, …, 12 omitted; the estimate changes at
nearly every step, ending with a Λ(x) of degree 6.]

Op count: mul = 123 , div = 6
EE 387 Notes #7, Page 70

Berlekamp-Massey example (4)

6EC Reed-Solomon code over GF(2^8). Seven errors.
S = f1 9f 5e 6e 5c 52 b2 46 02 99 b2 17

[Tableau of Λ^(k)(x) and T^(k)(x) for k = 1, …, 12 omitted.]

Op count: mul = 123 , div = 6

If there are ≥ 7 errors, the Berlekamp-Massey algorithm usually produces a polynomial
Λ(x) of degree 6. But Λ(x) has 6 zeroes in GF(2^m) with probability ≈ 1/6! ≈ the
conditional probability of miscorrection.
EE 387 Notes #7, Page 71

Berlekamp-Massey algorithm: program flow

[Figure: two plots of the register length L versus step k, one run reaching ν = 4 and
one reaching ν = 3; L stays constant while the discrepancy is zero and jumps to k − L
when the register must lengthen.]
EE 387 Notes #7, Page 72

Berlekamp-Massey: computational cost

The Berlekamp-Massey algorithm keeps a current estimate of
  the connection polynomial Λ(x)
  the correction polynomial T(x).
When necessary, Λ(x) and T(x) are updated by a parallel assignment:

  [ Λ(x) ; T(x) ] ← [ Λ(x) − Δ·T(x) ; Δ^{−1} x Λ(x) ]

Storage requirements: ≈ 2t decoder alphabet symbols, for Λ(x) and T(x).
Worst case running time (multiply/divide):

  2 + 4 + ⋯ + 4t ≈ 4t² .

The running time with a fixed number of multipliers is O(t²).
If t multipliers are available, the algorithm can be performed in O(t) steps.
Some authors refer to this as linear run time.
EE 387 Notes #7, Page 73

Solving error-locator polynomials: degree 2

Polynomials over GF(2^m) of degree ≤ 4 can be factored using linear methods.
Consider an error-locator polynomial of degree 2:

  Λ(x) = 1 + Λ₁x + Λ₂x² .

Squaring is a linear transformation of GF(2^m) over the scalar field GF(2).
Therefore the equation Λ(x) = 0 can be rewritten as

  x(Λ₂S + Λ₁I) = 1 ,

where x is the unknown m-tuple, S is the m × m matrix over GF(2) that
represents squaring, and 1 is the m-tuple (1, 0, …, 0).
If there are two distinct solutions, they are the zeroes of Λ(x).
The squaring matrix S can be precomputed, so the coefficients of the m × m
matrix A = Λ₂S + Λ₁I can be computed in O(m²) bit operations.
Solving the system requires O(m³) bit operations or O(m²) word operations.
EE 387 Notes #7, Page 74

Solving error-locator polynomials faster: degree 2

Use the change of variables x = (Λ₁/Λ₂)u . Then Λ(x) = 0 becomes

  Λ(x) = Λ₂x² + Λ₁x + 1
       = Λ₂(Λ₁/Λ₂)²u² + Λ₁(Λ₁/Λ₂)u + 1
       = (Λ₁²/Λ₂)u² + (Λ₁²/Λ₂)u + 1
       = (Λ₁²/Λ₂)(u² + u + Λ₂/Λ₁²) .

The simplified equation is of the form u² + u + c = 0 with c = Λ₂/Λ₁².
It can be solved using the precomputed pseudo-inverse of S + I .
If U₁ is a zero of u² + u + Λ₂/Λ₁², then X₁ = (Λ₁/Λ₂)U₁ is a zero of Λ(x), as is

  X₂ = (Λ₁/Λ₂)(U₁ + 1) = X₁ + Λ₁/Λ₂ .

If 2^m is not too large, we can store a table of zeroes of u² + u + c .
EE 387 Notes #7, Page 75
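A table-lookup version of the last remark is easy to sketch: precompute one root of u² + u + c for each c in the image of u ↦ u² + u, then map roots back through x = (Λ₁/Λ₂)u. GF(2^8) with primitive polynomial 0x11d is assumed here; the example locator polynomial is a made-up test case:

```python
EXP, LOG = [0] * 512, [0] * 256         # GF(2^8), polynomial 0x11d (assumed)
v = 1
for i in range(255):
    EXP[i], LOG[v] = v, i
    v <<= 1
    if v & 0x100:
        v ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]
def gmul(a, b): return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]
def ginv(a): return EXP[255 - LOG[a]]

# one stored root of u^2 + u + c for each c in the image of u -> u^2 + u
QROOT = {}
for u in range(256):
    QROOT.setdefault(gmul(u, u) ^ u, u)

def solve_quadratic(l1, l2):
    """Zeros of Lambda(x) = 1 + l1 x + l2 x^2 over GF(2^8), l1, l2 != 0."""
    s = gmul(l1, ginv(l2))              # substitution x = (l1/l2) u
    c = gmul(l2, ginv(gmul(l1, l1)))    # reduces to u^2 + u + c = 0
    if c not in QROOT:
        return None                     # no zeros in the field
    u = QROOT[c]
    return gmul(s, u), gmul(s, u ^ 1)   # the two roots differ by 1 in u

# u -> u^2 + u is 2-to-1, so exactly half the field values c are solvable
assert len(QROOT) == 128
# Lambda(x) = (1 + 5x)(1 + 9x) has zeros 5^{-1} and 9^{-1}
roots = solve_quadratic(5 ^ 9, gmul(5, 9))
assert set(roots) == {ginv(5), ginv(9)}
```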

Erasure correction

Erasures are special received symbols used to represent uncertainty. Examples:
  Demodulator erases a symbol when signal quality is poor.
  Lower level decoder erases symbols of a codeword that has an uncorrectable error.

Theorem: A block code can correct up to d − 1 erasures.
Proof: If the number of erasures is less than d, then there is only one codeword
that agrees with the received sequence in the unerased positions.

[Figure: codewords c₁ and c₂ differing in d positions; a received sequence r with
fewer than d erasures agrees with only one of them.]

Conversely, if two codewords differ in exactly d symbols, the received sequence
obtained by erasing the differing symbols cannot be decoded.

Fact: A block code can correct t errors and ρ erasures iff d ≥ 2t + ρ + 1 .
EE 387 Notes #7, Page 76

Erasure correction for linear block codes

Erasures can be corrected for linear block codes by solving linear equations.
The equation cH^T = 0 gives n − k equations for the erasure values.
Any d − 1 columns of H are linearly independent, so the equations can be solved
when ρ < d ≤ n − k + 1.

Example: Let r = [ 0 0 ? ? 0 1 0 ] be the received sequence for a (7,4) Hamming code with

      [ 1 0 0 1 0 1 1 ]
  H = [ 0 1 0 1 1 1 0 ] .
      [ 0 0 1 0 1 1 1 ]

The parity-check matrix yields three equations for the erased bits x and y:

  0 = 1·0 + 0·0 + 0·x + 1·y + 0·0 + 1·1 + 1·0 = 1 + y
  0 = 0·0 + 1·0 + 0·x + 1·y + 1·0 + 1·1 + 0·0 = 1 + y
  0 = 0·0 + 0·0 + 1·x + 0·y + 1·0 + 1·1 + 1·0 = 1 + x

Therefore x = 1, y = 1 and the decoded codeword is c = [ 0 0 1 1 0 1 0 ].
EE 387 Notes #7, Page 77
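The example can be reproduced mechanically. The sketch below brute-forces the erased bits, which is fine for fewer than d erasures, where the solution is unique; H is the check matrix used in the example:

```python
import itertools

# parity-check matrix of the (7,4) Hamming code used in the example above
H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

def fill_erasures(r, erased):
    """Fill the erased positions of r so that cH^T = 0.

    Brute force over the erased bits; the solution is unique whenever
    fewer than d positions are erased (erased columns of H independent).
    """
    for bits in itertools.product([0, 1], repeat=len(erased)):
        c = r[:]
        for p, bit in zip(erased, bits):
            c[p] = bit
        if all(sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H):
            return c
    return None

# r = [0 0 ? ? 0 1 0] with the erasures at positions 2 and 3
c = fill_erasures([0, 0, 0, 0, 0, 1, 0], erased=[2, 3])
assert c == [0, 0, 1, 1, 0, 1, 0]       # x = 1, y = 1 as in the example
```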

Erasure correction for BCH codes (1)

Consider a BCH code defined by parameters (α, n, b, d).
Suppose ρ < d erasures in locations j₁, …, j_ρ and no errors.
Define the erasure locators

  U_l = α^{j_l} ,   l = 1, …, ρ .

The syndrome equations for the erasure magnitudes are

  S₁     = E₁U₁^b     + E₂U₂^b     + ⋯ + E_ρU_ρ^b
  S₂     = E₁U₁^{b+1} + E₂U₂^{b+1} + ⋯ + E_ρU_ρ^{b+1}
   ⋮
  S_{d−1} = E₁U₁^{b+d−2} + E₂U₂^{b+d−2} + ⋯ + E_ρU_ρ^{b+d−2}

This system of linear equations has a unique solution for E₁, E₂, …, E_ρ because
the coefficient matrix is a column-scaled Vandermonde matrix.
The Forney algorithm provides a faster solution.
EE 387 Notes #7, Page 78

Erasure correction for BCH codes (2)

The erasure locator polynomial is defined by

  Γ(x) = ∏_{l=1}^{ρ} (1 − U_l x) = 1 + Γ₁x + Γ₂x² + ⋯ + Γ_ρ x^ρ

Unlike the error locator polynomial, the values of U_l are known.
The coefficients of Γ(x) can be computed by polynomial multiplication:

  ∏_{l=1}^{i} (1 − U_l x) = (1 − U_i x) ∏_{l=1}^{i−1} (1 − U_l x) .

For each i we use i − 1 multiply-accumulates. Total operations: ½ρ(ρ − 1).

The Forney algorithm gives the values of the errors in the erasure locations:

  E_l = U_l^{1−b} Ω(U_l^{−1}) / Γ'(U_l^{−1}) ,   l = 1, …, ρ ,

where Ω(x) = S(x)Γ(x) mod x^{2t} has degree ≤ ρ − 1:

  Ω₀ = S₁ ,  Ω₁ = S₂ + S₁Γ₁ ,  … ,  Ω_{ρ−1} = S_ρ + S_{ρ−1}Γ₁ + ⋯ + S₁Γ_{ρ−1}

Computing the erasure magnitudes takes ≈ 2.5ρ² multiply-accumulates.
EE 387 Notes #7, Page 79
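The incremental product above is a two-line loop. GF(2^8) with primitive polynomial 0x11d is assumed, and the erasure locators in the test are arbitrary values:

```python
EXP, LOG = [0] * 512, [0] * 256         # GF(2^8), polynomial 0x11d (assumed)
v = 1
for i in range(255):
    EXP[i], LOG[v] = v, i
    v <<= 1
    if v & 0x100:
        v ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]
def gmul(a, b): return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def erasure_locator(U):
    """Gamma(x) = prod_l (1 - U_l x); minus is plus over GF(2^m)."""
    G = [1]
    for Ul in U:
        # multiply the running product by (1 + Ul x):
        # new coefficient i is G[i] + Ul * G[i-1]
        G = [g ^ gmul(Ul, gp) for g, gp in zip(G + [0], [0] + G)]
    return G

assert erasure_locator([]) == [1]       # empty product
# (1 + 2x)(1 + 4x) = 1 + 6x + 8x^2 in GF(2^8)
assert erasure_locator([2, 4]) == [1, 6, 8]
```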

Erasure correction example: wireless network

Random bit errors, large collision rate. Maximum packet size: 600 bytes.
Encoding procedure for concatenated code:
  Divide data into three equal rows.
  Create two check rows by (5,3) shortened Reed-Solomon code on columns.
  Encode rows with shortened (255,239) 2EC BCH code on fragments of ≤ 29 data bytes.

Example: 58-byte frame. Subframes have 20 bytes and 2 BCH check bytes.

[Figure: three subframes and two checkframes; each row is a BCH codeword with row
checks, and each column is a (5,3) Reed-Solomon column codeword.]
EE 387 Notes #7, Page 80

Decoding procedure for concatenated code

The BCH code corrects 2 bit errors in 31 bytes, since

  29 · 8 + 16 = 232 + 16 = 248 ≤ 255 = 2⁸ − 1 .

A subframe with 200 bytes requires ⌈200/29⌉ = 7 fragments.
Exercise: Find the probability that a frame is lost because of random errors.
Row miscorrections and burst errors are corrected using the (5,3) column code. Up to
two lost subframes can be replaced.
Erasure correction requires solving linear equations.
We precompute the inverses of the coefficient matrices for the C(5,2) = 10 possible
combinations of two lost subframes.
Each byte in a missing subframe is computed using 3 Galois field multiplications.
Software correction takes 10 to 15 M68000 machine instructions per byte.
Trick to reduce time: store logarithms of the precomputed matrix constants.
When only one subframe is lost, it can be replaced by XORing the other subframes.
EE 387 Notes #7, Page 81
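The last remark, replacing a single lost subframe by XOR, works because the check row is the XOR of the data rows. A miniature demonstration with toy 2-byte rows standing in for subframes:

```python
rows = [b'\x11\x22', b'\x33\x44', b'\x55\x66']       # three data subframes (toy)
parity = bytes(a ^ b ^ c for a, b, c in zip(*rows))  # XOR checkframe

# pretend subframe 1 is lost; XOR the survivors and the parity row
survivors = [rows[0], rows[2], parity]
recovered = bytes(x ^ y ^ z for x, y, z in zip(*survivors))
assert recovered == rows[1]                          # the lost row is restored
```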

Error and erasure decoding: binary case

If there are ρ erasures in a binary senseword r, then t = ⌊½(d − 1 − ρ)⌋ errors can
be corrected using an errors-only decoder:
1. Let c⁰ and c¹ be the codewords obtained by decoding the n-tuples r⁰ and r¹
   obtained from r by replacing all erasures with zeroes and ones, respectively.
2. Compare c⁰ and c¹ with r and let ĉ be the one that is closer to r.
   (Either c⁰ or c¹ or both might be undefined because of decoder failure.
   An undefined cⁱ is ignored.)

[Figure: the erased positions of r are filled with all zeroes (r⁰) and all ones (r¹);
one of the two fills disagrees with the transmitted codeword c in at most ½ρ of the
erased positions.]

If the number of errors is ≤ ½(d − 1 − ρ), then for the better fill rⁱ

  d_H(c, rⁱ) ≤ ½ρ + ½(d − 1 − ρ) = ½(d − 1) .

This shows that rⁱ is within the decoding sphere of c, so ĉ = c.
EE 387 Notes #7, Page 82
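The two-decodings trick is easy to demonstrate with a toy errors-only decoder. The sketch uses the (5,1) binary repetition code (d = 5) with majority vote as the errors-only decoder; erasures are represented by None:

```python
def majority_decode(r):
    """Errors-only decoder for the (5,1) binary repetition code (d = 5)."""
    bit = 1 if sum(r) > 2 else 0
    return [bit] * 5

def errors_and_erasures(r):
    """r entries are 0, 1, or None (erasure)."""
    candidates = []
    for fill in (0, 1):                 # decode r^0 and r^1
        rf = [fill if x is None else x for x in r]
        c = majority_decode(rf)
        # distance to r counted over the unerased positions only
        dist = sum(1 for ci, ri in zip(c, r) if ri is not None and ci != ri)
        candidates.append((dist, c))
    return min(candidates)[1]           # keep the closer decoding

# rho = 2 erasures plus t = (d - 1 - rho)/2 = 1 error is still correctable
assert errors_and_erasures([0, None, None, 1, 1]) == [1, 1, 1, 1, 1]
assert errors_and_erasures([0, 0, None, 0, 1]) == [0, 0, 0, 0, 0]
```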

Error and erasure correction: Berlekamp-Massey

1. Compute the erasure locator polynomial Γ(x) = ∏_{l=1}^{ρ} (1 − U_l x).
2. Compute the partial syndrome polynomial S(x), using 0 for erased locations.
3. Compute the modified syndrome polynomial T(x) = S(x)Γ(x) mod x^{2t} . The modified
   syndromes are T₁, …, T_{2t} .
4. Run the Berlekamp-Massey algorithm with the modified syndromes T₁, …, T_{2t}
   to find the error-locator polynomial Λ(x) of degree ≤ ½(2t − ρ).
5. Use the modified key equation to find the error evaluator polynomial Ω(x):

  Ω(x) = S(x)Γ(x)Λ(x) mod x^{2t} = S(x)Ψ(x) mod x^{2t} ,

   where Ψ(x) = Γ(x)Λ(x) is the error-and-erasure locator polynomial.
6. Use the modified Forney algorithm to compute the error and erasure magnitudes:

  Y_i = X_i^{1−b} Ω(X_i^{−1}) / Ψ'(X_i^{−1}) ,   E_l = U_l^{1−b} Ω(U_l^{−1}) / Ψ'(U_l^{−1})

   for i = 1, …, ν and l = 1, …, ρ.
EE 387 Notes #7, Page 83

Erasure correction application: variable redundancy

Some communications systems use varying amounts of error protection:
  An ECC subsystem may be customized for a specific application.
  An adaptive system may increase or decrease check symbols as needed.

Obvious approach: use different Reed-Solomon code generator polynomials:

  g_t(x) = (x + α)(x + α²) ⋯ (x + α^{2t}) ,   t = 1, 2, …, T .

Problem: encoders for different generator polynomials require many scalers.
Clever solution: use the generator polynomial for maximum error correction but
transmit only 2t check symbols; that is, delete ρ = 2T − 2t check symbols.

[Figure: codeword layout with information symbols, then 2t transmitted checks, then
ρ deleted checks.]

Then use errors-and-erasures decoding, where the missing check symbols are erased.
EE 387 Notes #7, Page 84

Syndrome modification (1)

The modified syndrome polynomial for a Reed-Solomon code is easy to find:
1. Deleted checks are considered to be in locations 1, 2, …, ρ.
2. Compute the modified syndromes using the circuit below.
3. Use the modified syndromes T₁, …, T_{2t} in the Berlekamp-Massey algorithm.
4. Find the zeroes of Λ(x).
5. Compute Ψ(x) = Γ(x)Λ(x) and Ω(x) = S(x)Γ(x)Λ(x) mod x^{2t} .
6. Use the Forney algorithm to find the error magnitudes and erasure magnitudes.

Erasure correction can be used by an encoder to generate check symbols from
partial syndromes of the message symbols.
Partial syndromes may be easier to compute because the circuits are uncoupled,
hence fewer long wires are needed.
EE 387 Notes #7, Page 85

Syndrome modification (2)

  T(x) = S(x) ∏_{i=1}^{ρ} (1 + α^i x)   (decoder alphabet GF(2^m))

[Figure: feed-forward circuit computing modified syndromes T₁, …, T₄ from partial
syndromes S₁, …, S₈; each of the ρ = 4 stages multiplies by (1 + α^i x).]
EE 387 Notes #7, Page 86
