
Error Correcting Codes

Hamid Mohammadi
Communication II
08/04/2014
Overview
As long as the information transfer rate over the channel is below the channel capacity, it is possible to construct a code such that the error probability can be made arbitrarily small.
Costs of increased reliability:
1- the effective transfer rate decreases
2- the code becomes more complex
Overview
Types of errors (distortion, noise, interference)
Single-bit errors
Occur mostly in parallel communications (for example, corruption of one of the eight bits to be sent)
Burst errors
Occur mostly in serial communications (their length depends on the duration of the noise and on the data transfer rate)
Overview
The key to achieving error-free digital communication in
the presence of noise is the addition of appropriate
redundancy to the original data bits
Block codes
Every block of k data digits is converted to a longer block of n digits (n > k)
Convolutional codes
The coded sequence of n digits depends not only on the current k data digits but also on the (N-1) earlier blocks of data digits (the encoder has memory)
Shannon's Theorem
Shannon's noisy channel coding theorem:
For a noisy channel with capacity C and any rate R < C, there exists a code of rate R such that the probability of decoding error can be made arbitrarily small.
Shannon's Theorem
Code length n: the number of bits in a codeword
Data digits: k
Check digits: m = n - k, with n > k
Code rate: R = k/n
Such a code is called an (n, k) code.
The higher the number of check bits m (and hence the more reliable the code), the lower the information rate R.
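For example, for the (7, 4) Hamming code discussed later, m = 3 and the code rate is R = 4/7, approximately 0.57.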
Shannon's Theorem
Data digits (d_1, d_2, d_3, ..., d_k): a k-digit vector
Codeword (c_1, c_2, c_3, ..., c_n): an n-digit vector
Number of errors we want to detect or correct: t
Total number of n-digit words: 2^n
Total number of data words (and hence codewords actually used): 2^k
We want to transmit d_j using the codeword c_j.
If the channel noise corrupts c_j in t positions or fewer, the received word will lie somewhere inside the Hamming sphere of radius t centered on c_j.
In other words, if the received word lies inside the Hamming sphere of radius t around c_j, we decide that the true transmitted codeword was c_j.
Hamming distance
The Hamming distance between two codewords is the number of bit positions in which they differ.
Geometrically, it is the number of edges in a shortest path between the two vertices of the n-dimensional cube that correspond to the codewords.
Hamming distance
The minimum distance d of a code is the smallest distance between any two distinct codewords.
For the (3, 2) parity-check code, d = 2.
The (4, 1) repetition code (consisting of the codewords 0000 and 1111) has minimum distance 4: any two errors can be detected and, in addition, any single error can be corrected.
In general, a code with minimum distance d will detect up to d/2 errors and correct up to t = (d - 1)/2 errors (rounded down), or
d_min = 2t + 1
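As a minimal sketch of these definitions (Python, with illustrative function names not taken from the slides), the Hamming distance and the minimum distance of a small code can be computed directly; the (4, 1) repetition code above serves as the test case.

from itertools import combinations

def hamming_distance(a, b):
    # Number of bit positions in which two equal-length words differ.
    return sum(x != y for x, y in zip(a, b))

def minimum_distance(code):
    # Smallest Hamming distance between any pair of distinct codewords.
    return min(hamming_distance(a, b) for a, b in combinations(code, 2))

# (4, 1) repetition code: d_min = 4, so t = (4 - 1) // 2 = 1 error can be corrected.
repetition_code = ["0000", "1111"]
d_min = minimum_distance(repetition_code)
print(d_min, (d_min - 1) // 2)   # 4 1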
Parity Check Codes
Example of a parity-checking block code: the RS232 serial format, which can append a single parity bit to each transmitted character.
Hamming distance
(n, k) block codes
Information rate: R = k/n
So, the higher the number of check bits (and hence the more reliable the
code), the lower the information rate.
The problem is now to devise a code which maximizes reliability and
information rate, while still allowing detection of transmission errors.
Hamming Bound
The number of ways in which up to t errors can occur in an n-digit codeword (C(n, i) denotes the binomial coefficient "n choose i"):
C(n, 1) + C(n, 2) + ... + C(n, t)
Thus for each codeword we must leave C(n, 1) + C(n, 2) + ... + C(n, t) vertices (words) unused.
We have 2^k codewords, so we must leave a total of 2^k [C(n, 1) + ... + C(n, t)] words unused.
Therefore the total number of words must be at least 2^k [1 + C(n, 1) + ... + C(n, t)].
But the total number of words available is 2^n.
So we require:
2^k [1 + C(n, 1) + C(n, 2) + ... + C(n, t)] <= 2^n      (15.3b)
Hamming Bound
The Hamming bound is a necessary but not, in general, a sufficient condition for the existence of a t-error-correcting code.
For single-error-correcting codes, however, it is sufficient as well.
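The counting argument can be checked numerically; the sketch below (Python, illustrative function name) evaluates the bound 2^k [1 + C(n,1) + ... + C(n,t)] <= 2^n for a given (n, k) code and t correctable errors.

from math import comb

def satisfies_hamming_bound(n, k, t):
    # Sphere-packing (Hamming) bound: 2^k * sum_{i=0..t} C(n, i) <= 2^n
    sphere_size = sum(comb(n, i) for i in range(t + 1))
    return (2 ** k) * sphere_size <= 2 ** n

print(satisfies_hamming_bound(7, 4, 1))   # True: 16 * 8 = 128 = 2^7 (equality)
print(satisfies_hamming_bound(7, 4, 2))   # False: 16 * 29 = 464 > 128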
Hamming Codes
A code for which this inequality becomes an equality is known as a perfect code.
Binary, single-error-correcting, perfect codes are called Hamming codes.
t = 1, so d_min = 2t + 1 = 3.
From Eq. (15.3b) with t = 1 we have:
2^k (1 + n) = 2^n, i.e. 1 + n = 2^(n - k) = 2^m, or n = 2^m - 1
So Hamming codes are (n, k) codes for which n = 2^m - 1 and d_min = 3.
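A short sketch (Python; the loop range is an arbitrary choice for illustration) listing the first few members of the Hamming code family from n = 2^m - 1, k = n - m, and confirming that each meets the bound with equality, i.e. is perfect:

for m in range(2, 6):                 # m check digits
    n = 2 ** m - 1                    # code length
    k = n - m                         # data digits
    # Hamming bound with t = 1 met with equality: 2^k * (1 + n) = 2^n
    print((n, k), "perfect:", 2 ** k * (1 + n) == 2 ** n)
# (3, 1) perfect: True
# (7, 4) perfect: True
# (15, 11) perfect: True
# (31, 26) perfect: True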
Hamming Codes
Example: (7, 4) systematic Hamming code
The parity digits are obtained as c_p = d P. For the data word d = [0 1 1 0]:

            | 1 1 1 |
[0 1 1 0] x | 0 1 1 | = [1 1 0]   (parity digits P1 P2 P3)
            | 1 0 1 |
            | 1 1 0 |

so the full codeword is 0 1 1 0 1 1 0.
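The multiplication above can be reproduced with a short sketch (Python, modulo-2 arithmetic; the matrix P is the 4 x 3 array shown on the slide):

# Parity-generator matrix P of the (7, 4) systematic Hamming code (from the slide).
P = [[1, 1, 1],
     [0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]

def encode(d):
    # Systematic encoding: the codeword is the data digits followed by c_p = d P (mod 2).
    parity = [sum(d[i] * P[i][j] for i in range(4)) % 2 for j in range(3)]
    return d + parity

print(encode([0, 1, 1, 0]))   # [0, 1, 1, 0, 1, 1, 0], parity digits 1 1 0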
Linear Block Codes
c = (c_1, c_2, c_3, ..., c_k, ..., c_n): an n-digit codeword vector
d = (d_1, d_2, d_3, ..., d_k): a k-digit data vector
c_1 = d_1, c_2 = d_2, ..., c_k = d_k
The remaining digits c_(k+1) to c_n are linear combinations of d_1, d_2, ..., d_k.
Systematic code:
The leading k digits of the codeword are the data digits, and the remaining m = n - k digits are the parity-check digits, formed by linear combinations of the data digits.
Linear Block Codes
In matrix form the encoding is c = d G, where G is the k x n generator matrix
G = [I_k  P]
with I_k the k x k identity matrix and P the k x m matrix of parity-check coefficients.
Linear Block Codes
Note that the distance between any
two codewords is at least 3. Hence,
the code can correct at least one error.
d_min = 2t + 1 = 3, so t = 1.
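A sketch of this check (Python), using the (7, 4) parity matrix from the earlier example as a stand-in for the code tabulated on the slide: it builds G = [I_k P], enumerates all 2^k codewords, and confirms that d_min = 3.

from itertools import product, combinations

P = [[1, 1, 1], [0, 1, 1], [1, 0, 1], [1, 1, 0]]
k, m = 4, 3
# Generator matrix G = [I_k  P]: row i is the i-th unit vector followed by row i of P.
G = [[1 if j == i else 0 for j in range(k)] + P[i] for i in range(k)]

def encode(d):
    # c = d G (mod 2)
    return [sum(d[i] * G[i][j] for i in range(k)) % 2 for j in range(k + m)]

codewords = [encode(list(d)) for d in product([0, 1], repeat=k)]
d_min = min(sum(a != b for a, b in zip(u, v)) for u, v in combinations(codewords, 2))
print(len(codewords), d_min)   # 16 3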
Linear Block Codes, Decoding
c_p = d P      (15.8)
Parity-check matrix: H = [P^T  I_m]  (an m x n matrix)
Linear Block Codes, Decoding
So every codeword must satisfy equation (15.10a):
c H^T = 0
Assume the received word r contains an error, so that r = c + e, for example:
r = 1 0 1 1 0 1
c = 1 0 0 1 0 1,  e = 0 0 1 0 0 0
Syndrome: s = r H^T
In case of an error, s = r H^T = (c + e) H^T = e H^T will no longer be zero.
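A sketch (Python, again using the (7, 4) example code rather than the 6-digit code on the slide) of forming H^T, checking c H^T = 0 for a valid codeword, and seeing the syndrome become nonzero when an error is added:

P = [[1, 1, 1], [0, 1, 1], [1, 0, 1], [1, 1, 0]]
k, m = 4, 3
n = k + m
# H = [P^T  I_m], so H^T is P stacked on top of the m x m identity (an n x m array).
HT = P + [[1 if j == i else 0 for j in range(m)] for i in range(m)]

def syndrome(word):
    # s = word H^T (mod 2); zero for every valid codeword.
    return [sum(word[i] * HT[i][j] for i in range(n)) % 2 for j in range(m)]

c = [0, 1, 1, 0, 1, 1, 0]        # codeword from the (7, 4) example
r = [0, 1, 0, 0, 1, 1, 0]        # same word with an error in the third digit
print(syndrome(c))               # [0, 0, 0]
print(syndrome(r))               # [1, 0, 1] -> nonzero, an error is detected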
Linear Block Codes, Decoding
s = e_i H^T
We want to find e_i, but solving this equation does not give a unique answer for e_i:
the same syndrome is produced by 2^k different error vectors.
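The ambiguity is resolved by choosing the most probable error pattern with the given syndrome; for single-error correction these are the single-bit patterns. A self-contained sketch (Python, continuing the (7, 4) example) of a syndrome look-up decoder:

P = [[1, 1, 1], [0, 1, 1], [1, 0, 1], [1, 1, 0]]
k, m = 4, 3
n = k + m
HT = P + [[1 if j == i else 0 for j in range(m)] for i in range(m)]   # H^T, n x m

def syndrome(word):
    return tuple(sum(word[i] * HT[i][j] for i in range(n)) % 2 for j in range(m))

# Syndrome table: of the 2^k error vectors sharing a syndrome, store the most
# likely one -- the all-zero pattern and the n single-bit patterns.
table = {syndrome([0] * n): [0] * n}
for pos in range(n):
    e = [1 if i == pos else 0 for i in range(n)]
    table[syndrome(e)] = e

def correct(r):
    e = table[syndrome(r)]                      # most likely error pattern
    return [(ri + ei) % 2 for ri, ei in zip(r, e)]

print(correct([0, 1, 0, 0, 1, 1, 0]))           # [0, 1, 1, 0, 1, 1, 0] -- error fixed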
Cyclic Redundancy Check
Cyclic Codes
Systematic Cyclic Codes
Decoding
