Coding Theory and its Applications
Hung-Lin Fu
Dept. of Applied Mathematics
National Chiao Tung University
Hsin Chu, Taiwan
Basic Ideas
Message Transmission
Correctness and Security
Save time and expense
The study of security is the main job of cryptography.
Coding theory deals not only with the correctness of transmission but also with the speed of transmission.
The Flow of Transmission
Message
Encode
Modulation
Through Noisy Channel
Demodulation
Decode
Original Message
Examples
Grades A, B, C, and D
Use digits 0 and 1 to encode
A : 00
B : 01
C : 10
D : 11
00
Send A
Receiving
Following demodulation and decoding, we expect to receive the original message A.
Unfortunately, it is possible to make
errors due to the noise.
Probability of Errors
Let p denote the error probability of
sending 0 and receiving 1.
In a symmetric channel, sending 1 and
receiving 0 also has error probability p.
If t digits are transmitted, then the probability of making s errors is
C(t,s) p^s (1-p)^(t-s).
The probability of making errors is
C(t,1) p (1-p)^(t-1) + C(t,2) p^2 (1-p)^(t-2) + ... + p^t.
Symmetric Channel
[Diagram: binary symmetric channel. 0 and 1 are each received correctly with probability 1-p and flipped with probability p.]
It happens!
Let p = 0.01.
It looks small. But in fact this is a very large number in real-world transmission: a million digits may be transmitted in a minute, so we get about 10,000 error digits per minute.
Therefore, if we use 00, 01, 10, and 11 for A, B, C, and D, then errors in transmitting words occur! The probability of making an error in a word
is 2x(0.01)x(0.99) + (0.01)^2 = 0.0199.
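This arithmetic can be checked with a short script (a sketch; `word_error_prob` is an illustrative helper name, not from the slides):

```python
from math import comb

def word_error_prob(p, t):
    """Probability that at least one of t transmitted digits is flipped
    on a binary symmetric channel with crossover probability p."""
    return sum(comb(t, s) * p**s * (1 - p)**(t - s) for s in range(1, t + 1))

# Two-digit words with p = 0.01, as on the slide:
print(word_error_prob(0.01, 2))  # 2(0.01)(0.99) + (0.01)^2 = 0.0199
```

Equivalently, the complement 1 - (1-p)^t gives the same number.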
An Improvement
Parity check digits
00 -> 000
01 -> 011
10 -> 101
11 -> 110
The probability of making errors without noticing is smaller:
C(3,2)x(0.01)^2x(0.99) + (0.01)^3 = 0.000298.
We can add more digits instead of just one.
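A minimal sketch of the parity-digit scheme (`add_parity` is an illustrative helper, not from the slides):

```python
from math import comb

def add_parity(bits):
    """Append an even-parity check digit: '01' -> '011', '10' -> '101'."""
    return bits + str(bits.count('1') % 2)

print([add_parity(b) for b in ('00', '01', '10', '11')])
# ['000', '011', '101', '110']

# Probability of two or more errors in the n = 3 transmitted digits --
# a single parity digit is only guaranteed to catch one error.
p, n = 0.01, 3
prob = sum(comb(n, s) * p**s * (1 - p)**(n - s) for s in range(2, n + 1))
print(prob)  # ~0.000298
```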
Error Correction
When an error occurs, we may not be able to tell which digit is in error.
So, we ask for retransmission.
Retransmission is not always possible.
Hamming Distance
The message we send can be expressed as an n-dimensional vector over the finite field GF(2) if the message has n digits.
E.g. 010101
(0,1,0,1,0,1)
Let GF(2) = K.
K^n is a set of 2^n vectors.
A New Metric
Maximum Likelihood Decoding
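Hamming distance and maximum-likelihood (nearest-codeword) decoding can be sketched as follows; the function names are mine, not from the slides:

```python
def hamming(u, v):
    """Number of positions in which two equal-length binary words differ."""
    return sum(a != b for a, b in zip(u, v))

def ml_decode(received, codewords):
    """Maximum-likelihood decoding over a binary symmetric channel with
    p < 1/2: pick the codeword nearest to the received word."""
    return min(codewords, key=lambda c: hamming(received, c))

print(hamming('010101', '000000'))         # 3
# Length-3 repetition code {000, 111} has distance 3, so it corrects 1 error:
print(ml_decode('010', ['000', '111']))    # '000'
```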
Linear Codes
Weights of Codewords
Each vector of a code is called a
codeword.
The weight of a codeword is the
number of 1s in the codeword.
E.g. wt(101011) = 4.
Proposition. The distance of a linear
code is equal to the minimum weight
of a non-zero codeword.
Main Theorem
Theorem. A code with distance d can detect d-1
errors and correct [(d-1)/2] errors.
Proof. Let v be a received word and w a codeword of the code C with d(v,w) <= [(d-1)/2]. For any other codeword y in C, the triangle inequality gives d(v,y) >= d(y,w) - d(v,w) >= d - [(d-1)/2] > [(d-1)/2] >= d(v,w). Hence w is the unique nearest codeword, and v is decoded correctly.
[Diagram: codewords u, w at distance at least d, with received word v near w]
Better Codes
Two Constructions
Use a Steiner triple system of order 7.
{1,2,4}, {2,3,5}, {3,4,6}, {4,5,7}, {5,6,1}, {6,7,2},
{7,1,3}.
1101000 0010111 0000000
0110100 1001011 1111111
0011010 1100101
0001101 1110010
1000110 0111001
0100011 1011100
1010001 0101110
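The sixteen codewords above (the seven triples, their complements, the zero word, and the all-ones word) can be checked mechanically; this sketch confirms there are 16 distinct words and computes the minimum distance:

```python
from itertools import combinations

triples = [{1,2,4}, {2,3,5}, {3,4,6}, {4,5,7}, {5,6,1}, {6,7,2}, {7,1,3}]
blocks = [''.join('1' if i in t else '0' for i in range(1, 8)) for t in triples]
complements = [''.join('0' if b == '1' else '1' for b in w) for w in blocks]
code = blocks + complements + ['0000000', '1111111']

def dist(u, v):
    return sum(a != b for a, b in zip(u, v))

print(len(set(code)))                                      # 16
print(min(dist(u, v) for u, v in combinations(code, 2)))   # 3
```

Distance 3 means this code corrects any single error while sending 7 digits for 4 digits of information, an improvement on the parity-check code.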
BCH Codes
A Different Point of View
Quiz
Consider R_7.
x^7 + 1 = (1+x)(1+x+x^3)(1+x^2+x^3) (?)
(Hint: 1 = -1, (1+x)^2 = 1+x^2.)
The set of all polynomials in R_7 which are multiples of 1+x+x^3 forms a linear code with 16 codewords. This is essentially the same code as the one constructed above.
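The factorization in the quiz can be verified by multiplying coefficient vectors over GF(2); `mul_gf2` is an illustrative helper:

```python
def mul_gf2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists,
    lowest degree first; addition of coefficients is XOR."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

one_plus_x = [1, 1]        # 1 + x
f          = [1, 1, 0, 1]  # 1 + x + x^3
g          = [1, 0, 1, 1]  # 1 + x^2 + x^3
product = mul_gf2(mul_gf2(one_plus_x, f), g)
print(product)  # [1, 0, 0, 0, 0, 0, 0, 1], i.e. 1 + x^7
```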
Reed-Solomon Codes
Design of Compact Discs
(Key Contributions)
Story (Continued)
Keep Going
A Brief Overview
Concealment
Error-Correcting Ability
EFM modulation
[Figure 2: pits]
[Figure 4]
Encoding
Encoded by C(227):(28,24,5)-RS:
The resulting 24-byte word (remember, it has an included two-block delay -- so some symbols in this word are from blocks two blocks behind) has 4 bytes of parity added. This particular parity is called "Q" parity. Parity errors found in this part of the algorithm are called C1 errors. More on the Q parity later.
4-frame delay interleaved:
Now, the resulting 24 + 4Q = 28-byte word is interleaved. Each of the 28 bytes is delayed by a different period, each an integral multiple of 4 blocks. So the first byte might be delayed by 4 blocks, the second by 8 blocks, the third by 12 blocks, and so on. The interleaving spreads the word over a total of 28 x 4 = 112 blocks.
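The delay-line idea can be sketched as follows (a toy model with made-up frame contents; real CIRC interleaving has further details such as the odd-even delays mentioned elsewhere):

```python
def interleave(frames, unit=4):
    """Delay-line interleaver sketch: byte i of each frame is delayed by
    unit*(i+1) frames, as in 'the first byte delayed by 4 blocks, the
    second by 8, ...'. A burst of damaged output blocks then touches at
    most a few bytes of any one original word."""
    width = len(frames[0])
    out = [[None] * width for _ in range(len(frames) + unit * width)]
    for t, frame in enumerate(frames):
        for i, byte in enumerate(frame):
            out[t + unit * (i + 1)][i] = byte
    return out

# Two toy 3-byte frames, delay unit 2: frame 'a' is spread over output
# blocks 2, 4, and 6 instead of sitting in a single block.
out = interleave([['a0', 'a1', 'a2'], ['b0', 'b1', 'b2']], unit=2)
```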
Another RS code
Encoded by C(223):(32,28,5)-RS:
The resulting 28-byte words are again subjected to a parity operation. This generates four more parity bytes, called P bytes, which are placed at the end of the 28-byte data word. The word is now a total of 28 + 4 = 32 bytes long. Parity errors found in this part of the algorithm are called C2 errors.
Finally, another odd-even delay is performed -- but this time the delay is just a single block. Both the P and Q parity bytes are inverted (turning 1s into 0s and 0s into 1s) to assist data readout during muting.
EFM
One frame after EFM (eight-to-fourteen) modulation:
1 sync word: 24 bits
1 subcode signal: 14 bits
6*2*2*14 data bits: 336 bits (14 comes from 8)
8*14 parity bits: 112 bits
34*3 merge bits: 102 bits
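The bit counts above add up to the standard 588 channel bits per frame, which a quick check confirms:

```python
# Bit budget of one EFM frame, from the table above.
sync    = 24           # sync word
subcode = 14           # subcode signal
data    = 6 * 2 * 2 * 14  # 6 samples x 2 channels x 2 bytes, 8 -> 14 via EFM
parity  = 8 * 14       # 8 parity bytes
merge   = 34 * 3       # merging bits between symbols
print(sync + subcode + data + parity + merge)  # 588
```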
Music:
Final Words