
Outline

• Transmitters (Chapters 3 and 4, Source Coding and Modulation) (weeks 1 and 2)
• Receivers (Chapter 5) (weeks 3 and 4)
• Received Signal Synchronization (Chapter 6) (week 5)
• Channel Capacity (Chapter 7) (week 6)
• Error Correction Codes (Chapter 8) (weeks 7 and 8)
• Equalization (Bandwidth-Constrained Channels) (Chapter 10) (week 9)
• Adaptive Equalization (Chapter 11) (weeks 10 and 11)
• Spread Spectrum (Chapter 13) (week 12)
• Fading and Multipath (Chapter 14) (week 12)
Channel Capacity (Chapter 7) (week 6)
• Discrete Memoryless Channels
• Random Codes
• Block Codes
• Trellis Codes
Channel Models
• Discrete Memoryless Channel
– Discrete-discrete
• Binary channel, M-ary channel
– Discrete-continuous
• M-ary channel with soft-decision (analog)
– Continuous-continuous
• Modulated waveform channels (QAM)
Discrete Memoryless Channel
• Discrete-discrete
– Binary channel, M-ary channel

Probability transition matrix

P = \begin{bmatrix}
P(Y = y_1 \mid X = x_1) & \cdots & \cdot \\
\vdots & P(y_i \mid x_j) = p_{ji} & \vdots \\
\cdot & \cdots & p_{q-1,\,Q-1}
\end{bmatrix}
Discrete Memoryless Channel
• Discrete-continuous
– M-ary channel with soft-decision (analog)
output: the q discrete inputs x_0, x_1, \ldots, x_{q-1} produce a continuous output y. With AWGN,

p(y \mid x_k) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(y - x_k)^2 / 2\sigma^2}

and the channel is described by the vector of conditional pdfs

P = \begin{bmatrix} p(y \mid X = x_1) \\ \vdots \\ p(y \mid X = x_k) \\ \vdots \end{bmatrix}
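A quick illustration (my own sketch, not from the slides) of computing these soft-decision likelihoods for an assumed 4-ary PAM alphabet; the level values and noise standard deviation are arbitrary:

import numpy as np

def likelihoods(y, levels, sigma):
    # p(y | x_k) for each level x_k under AWGN with standard deviation sigma
    return np.exp(-(y - levels) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

levels = np.array([-3.0, -1.0, 1.0, 3.0])  # assumed 4-ary PAM amplitudes
sigma = 0.8                                # assumed noise standard deviation
y = 0.7                                    # one received (soft) sample
print(likelihoods(y, levels, sigma))       # evidence for each candidate symbol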
Discrete Memoryless Channel
• Continuous-continuous
– Modulated waveform channels (QAM)
– Assume band-limited waveforms, bandwidth = W
• Sampling at the Nyquist rate = 2W samples/s
– Then over an interval of T seconds (N = 2WT samples) use an orthonormal function expansion:

x(t) = \sum_{i=1}^{N} x_i f_i(t), \qquad n(t) = \sum_{i=1}^{N} n_i f_i(t), \qquad y(t) = \sum_{i=1}^{N} y_i f_i(t)
Discrete Memoryless Channel
• Continuous-continuous
– Using an orthonormal function expansion, each received coefficient is the projection of y(t) = x(t) + n(t) onto the corresponding basis function:

y_i = \int_0^T y(t)\, f_i^*(t)\, dt = \int_0^T \bigl[x(t) + n(t)\bigr] f_i^*(t)\, dt = x_i + n_i

so that

y(t) = \sum_{i=1}^{N} y_i f_i(t) = \sum_{i=1}^{N} (x_i + n_i)\, f_i(t)
Discrete Memoryless Channel
• Continuous-continuous
– Using the orthonormal function expansion we get an equivalent discrete-time channel:

y_i = x_i + n_i, \qquad i = 1, \ldots, N

with independent Gaussian noise in each dimension:

p(y_i \mid x_i) = \frac{1}{\sqrt{2\pi}\,\sigma_i}\, e^{-(y_i - x_i)^2 / 2\sigma_i^2}
Capacity of binary symmetric channel
• BSC: X ∈ {0,1}, Y ∈ {0,1}
– A transmitted bit is received correctly with probability 1 − p and flipped with probability p:

P = \begin{bmatrix} 1-p & p \\ p & 1-p \end{bmatrix}
Capacity of binary symmetric channel
• Average Mutual Information

I(X;Y) = P(X=0)\,P(Y=0 \mid X=0)\log\frac{P(Y=0 \mid X=0)}{P(Y=0)}
       + P(X=0)\,P(Y=1 \mid X=0)\log\frac{P(Y=1 \mid X=0)}{P(Y=1)}
       + P(X=1)\,P(Y=0 \mid X=1)\log\frac{P(Y=0 \mid X=1)}{P(Y=0)}
       + P(X=1)\,P(Y=1 \mid X=1)\log\frac{P(Y=1 \mid X=1)}{P(Y=1)}

     = P(X=0)(1-p)\log\frac{1-p}{(1-p)P(X=0)+pP(X=1)}
       + P(X=0)\,p\log\frac{p}{pP(X=0)+(1-p)P(X=1)}
       + P(X=1)\,p\log\frac{p}{(1-p)P(X=0)+pP(X=1)}
       + P(X=1)(1-p)\log\frac{1-p}{pP(X=0)+(1-p)P(X=1)}
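A short numerical check (my own sketch, not from the slides): evaluating this expression over a grid of input priors confirms that I(X;Y) peaks at P(X=0) = 1/2 for any crossover probability p.

import numpy as np

def bsc_mutual_info(p, p0):
    # I(X;Y) in bits for a BSC with crossover p and input prior P(X=0) = p0
    p1 = 1.0 - p0
    terms = [
        (p0 * (1 - p), (1 - p) / ((1 - p) * p0 + p * p1)),
        (p0 * p,       p / (p * p0 + (1 - p) * p1)),
        (p1 * p,       p / ((1 - p) * p0 + p * p1)),
        (p1 * (1 - p), (1 - p) / (p * p0 + (1 - p) * p1)),
    ]
    return sum(w * np.log2(r) for w, r in terms if w > 0)

p = 0.1
priors = np.linspace(0.01, 0.99, 99)
best = max(priors, key=lambda p0: bsc_mutual_info(p, p0))
print(best, bsc_mutual_info(p, 0.5))   # maximum at 0.5; I = 1 - H(0.1) ≈ 0.531 bits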
Capacity of binary symmetric channel
• Channel Capacity is Maximum Information
– Earlier we showed the maximum occurs for P(X=1) = P(X=0) = 1/2:

C = \max_{P(X)} I(X;Y)
  = \Bigl[\, P(X=0)(1-p)\log\tfrac{1-p}{(1-p)P(X=0)+pP(X=1)}
        + P(X=0)\,p\log\tfrac{p}{pP(X=0)+(1-p)P(X=1)}
        + P(X=1)\,p\log\tfrac{p}{(1-p)P(X=0)+pP(X=1)}
        + P(X=1)(1-p)\log\tfrac{1-p}{pP(X=0)+(1-p)P(X=1)} \Bigr]_{P(X=0)=P(X=1)=1/2}
  = (1-p)\log_2 2(1-p) + p\log_2 2p
Capacity of binary symmetric channel
• Channel Capacity
– When p = 1 every bit is inverted, but the information is still perfect: simply invert the bits back! The capacity is symmetric in p about 1/2:

C = (1-p)\log_2 2(1-p) + p\log_2 2p
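A minimal sketch of this capacity formula (my own, not from the slides); note C(0) = C(1) = 1 and C(1/2) = 0:

import numpy as np

def bsc_capacity(p):
    # C = (1-p)*log2(2(1-p)) + p*log2(2p) = 1 - H(p), in bits per channel use
    return sum(q * np.log2(2 * q) for q in (p, 1.0 - p) if q > 0)

for p in (0.0, 0.11, 0.5, 1.0):
    print(p, bsc_capacity(p))   # 1.0, ~0.5, 0.0, 1.0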
Capacity of binary symmetric channel
• Effect of SNR on Capacity
– Binary PAM signal (digital signal amplitude 2A)
The two transmitted signals are s_1(t) = A\,g(t) and s_2(t) = -A\,g(t), each carrying bit energy \varepsilon_b. With AWGN, the demodulator output r has conditional pdfs

p(r \mid s_1) = \frac{1}{\sqrt{\pi N_0}}\, e^{-(r - \sqrt{\varepsilon_b})^2 / N_0}, \qquad
p(r \mid s_2) = \frac{1}{\sqrt{\pi N_0}}\, e^{-(r + \sqrt{\varepsilon_b})^2 / N_0}

Hard decisions on r make the channel a BSC with crossover probability p.
Capacity of binary symmetric channel
• Effect of SNR
– Binary PAM signal (digital signal amplitude 2A)
P(e \mid s_1) = \int_{-\infty}^{0} \frac{1}{\sqrt{\pi N_0}}\, e^{-(r - \sqrt{\varepsilon_b})^2 / N_0}\, dr
             = Q\!\left(\sqrt{\frac{2\varepsilon_b}{N_0}}\right)
             = P(e \mid s_2)
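A quick check of this expression (my own sketch; the bit energy and noise density values are arbitrary assumptions): the Gaussian tail integral matches Q(sqrt(2*eb/N0)) computed from erfc.

import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def Q(x):
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / np.sqrt(2))

eb, N0 = 1.0, 0.5                                        # assumed bit energy and noise density
pdf = lambda r: np.exp(-(r - np.sqrt(eb)) ** 2 / N0) / np.sqrt(np.pi * N0)
p_direct, _ = quad(pdf, -np.inf, 0.0)                    # P(e | s1) by direct integration
print(p_direct, Q(np.sqrt(2 * eb / N0)))                 # both ~0.0228 for eb/N0 = 2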
Capacity of binary symmetric channel
• Effect of SNR
– Binary PAM signal (digital signal amplitude 2A)

P_b = \tfrac12 P(e \mid s_1) + \tfrac12 P(e \mid s_2)
    = Q\!\left(\sqrt{\frac{2\varepsilon_b}{N_0}}\right)
    = Q\!\left(\sqrt{2\,\mathrm{SNR}_b}\right)
    = Q\!\left(\sqrt{2\gamma_b}\right)
    = Q\!\left(2A\sqrt{\frac{1}{2N_0}}\right)

(with \varepsilon_b = A^2, taking g(t) to have unit energy)
Capacity of binary symmetric channel
• Effect of SNR
– Binary PAM signal (digital signal amplitude 2A)
With rms noise \sigma = \sqrt{N_0/2} (slide note: not sure about this; does it depend on bandwidth?):

P_b = Q\!\left(2A\sqrt{\frac{1}{2N_0}}\right)
    = Q\!\left(2A\sqrt{\frac{1}{4\sigma^2}}\right)
    = Q\!\left(\frac12\cdot\frac{2A}{\sigma}\right)
    = Q\!\left(\frac12\cdot\frac{2A}{\text{rms noise}}\right)
Capacity of binary symmetric channel
• Effect of SNR
– Binary PAM signal (digital signal amplitude 2A)

P_b = p = Q\!\left(\frac12\cdot\frac{\text{Amplitude}}{\text{rms noise}}\right)
        = \frac12\,\mathrm{erfc}\!\left(\frac{1}{\sqrt{2}}\cdot\frac12\cdot\frac{\text{Amplitude}}{\text{rms noise}}\right)

C = (1-p)\log_2 2(1-p) + p\log_2 2p
Capacity of binary symmetric channel
• Effect of SNR
– Binary PAM signal (digital signal amplitude 2A)
P_b = \frac12\,\mathrm{erfc}\!\left(\frac{1}{\sqrt{2}}\cdot\frac12\cdot\frac{\text{Amplitude}}{\text{rms noise}}\right)

SNR   Pb
 1    0.308538
 2    0.158655
 3    0.066807
 4    0.02275
 5    0.00621
 6    0.00135
 7    0.000233
 8    3.17E-05
 9    3.4E-06
10    2.87E-07
11    1.9E-08
12    9.87E-10
13    4.02E-11
14    1.28E-12
15    3.19E-14
16    6.11E-16

[Plot: Pb (BER) vs SNR for the binary channel; x-axis SNR = A/rms noise from 0 to 17, y-axis BER on a log scale down to 1E-16]
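These values can be regenerated in a few lines (my own sketch; the SNR column is treated here as Amplitude/rms noise, so Pb = Q(SNR/2) as in the formula above):

import numpy as np
from scipy.special import erfc

snr = np.arange(1, 17)                      # Amplitude / rms noise (assumed definition)
pb = 0.5 * erfc((0.5 * snr) / np.sqrt(2))   # Pb = Q(SNR/2)
for s, p in zip(snr, pb):
    print(f"{s:2d}  {p:.6g}")               # 1 -> 0.308538 ... 16 -> 6.11e-16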
Capacity of binary symmetric channel
• Effect of SNR
– Binary PAM signal (digital signal amplitude 2A)
C = (1-p)\log_2 2(1-p) + p\log_2 2p, \qquad
P_b = \frac12\,\mathrm{erfc}\!\left(\frac{1}{\sqrt{2}}\cdot\frac12\cdot\frac{\text{Amplitude}}{\text{rms noise}}\right)

SNR   Pb          C
 0    0.5         0
 1    0.308538    0.108522
 2    0.158655    0.368917
 3    0.066807    0.646106
 4    0.02275     0.843385
 5    0.00621     0.945544
 6    0.00135     0.985185
 7    0.000233    0.996857
 8    3.17E-05    0.999481
 9    3.4E-06     0.999933
10    2.87E-07    0.999993
11    1.9E-08     0.999999
12    9.87E-10    1
13    4.02E-11    1
14    1.28E-12    1
15    3.19E-14    1
16    6.11E-16    1

[Plot: Capacity C and BER vs SNR for the binary channel; x-axis SNR = A/rms noise from 0 to 17]

Note (from the slide): the channel is essentially at capacity by SNR ≈ 7, so using much higher SNR just to drive the BER down wastes a lot of SNR!
Capacity of binary symmetric channel
• Effect of SNRb
– Binary PAM signal (digital signal amplitude 2A)
C = (1-p)\log_2 2(1-p) + p\log_2 2p, \qquad
P_b = p = Q\!\left(\sqrt{2\gamma_b}\right) = \frac12\,\mathrm{erfc}\!\left(\sqrt{\gamma_b}\right)

where \gamma_b = \varepsilon_b / N_0 is the SNR per bit.

SNRb (dB)    Pb          C
  -20        0.443769    0.009143
  -18        0.429346    0.014452
  -16        0.411325    0.022809
  -14        0.388906    0.03591
  -12        0.361207    0.056319
  -10        0.32736     0.087793
   -8        0.286715    0.135561
   -6        0.239229    0.206245
   -4        0.186114    0.306729
   -2        0.130645    0.440797
    0        0.07865     0.602597
    2        0.037506    0.769261
    4        0.012501    0.90305
    6        0.002388    0.975757
    8        0.000191    0.997366
   10        3.87E-06    0.999925
   12        9.01E-09    1
   14        6.81E-13    1

[Plot: Capacity C and BER vs SNR per bit (dB) for the binary channel; x-axis from -20 to 12 dB]
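The same table can be regenerated numerically (my own sketch, combining the two formulas above):

import numpy as np
from scipy.special import erfc

def bsc_capacity(p):
    # C = 1 - H(p), in bits per channel use
    return sum(q * np.log2(2 * q) for q in (p, 1.0 - p) if q > 0)

for snr_db in range(-20, 16, 2):
    gamma_b = 10 ** (snr_db / 10)           # SNR per bit (linear)
    pb = 0.5 * erfc(np.sqrt(gamma_b))       # Pb = Q(sqrt(2 * gamma_b))
    print(f"{snr_db:4d}  {pb:.6g}  {bsc_capacity(pb):.6f}")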
Channel Capacity of Discrete
Memoryless Channel
• Discrete-discrete
– Binary channel, M-ary channel

Probability transition matrix

P = \begin{bmatrix}
P(Y = y_1 \mid X = x_1) & \cdots & \cdot \\
\vdots & P(y_i \mid x_j) = p_{ji} & \vdots \\
\cdot & \cdots & p_{q-1,\,Q-1}
\end{bmatrix}
Channel Capacity of Discrete
Memoryless Channel
Average Mutual Information

q 1 Q 1
P(Y  yi | X  x j )
I ( X ; Y )   P( X  x j ) P(Y  yi | X  x j ) log
j  0 i 1 P(Y  y j )
q 1 Q 1
P ( yi | x j )
  P( x j ) P( yi | x j ) log
j  0 i 1 P( y j )
Channel Capacity of Discrete
Memoryless Channel
Channel Capacity is Maximum Information
– The maximum occurs for P(x_j) = 1/q for all j only if the channel is symmetric; otherwise the maximizing input distribution P(x_j) must be worked out:

C = \max_{P(x_j)} I(X;Y)
  = \max_{P(x_j)} \sum_{j=0}^{q-1}\sum_{i=0}^{Q-1} P(X=x_j)\,P(Y=y_i \mid X=x_j)\,
    \log\frac{P(Y=y_i \mid X=x_j)}{P(Y=y_i)}
  = \max_{P(x_j)} \sum_{j=0}^{q-1}\sum_{i=0}^{Q-1} P(x_j)\,P(y_i \mid x_j)\,
    \log\frac{P(y_i \mid x_j)}{P(y_i)}

subject to P(x_j) \ge 0 and \sum_{j=0}^{q-1} P(x_j) = 1.
Channel Capacity Discrete
Memoryless Channel
• Discrete-continuous (discrete inputs x_0, x_1, \ldots, x_{q-1}, continuous output y)
• Channel Capacity

C = \max_{P(x_i)} I(X;Y)
  = \max_{P(x_i)} \sum_{i=0}^{q-1} \int_{-\infty}^{\infty} P(X=x_i)\, p(Y=y \mid X=x_i)\,
    \log\frac{p(Y=y \mid X=x_i)}{p(Y=y)}\, dy

where

p(Y=y) = \sum_{i=0}^{q-1} P(X=x_i)\, p(Y=y \mid X=x_i)
Channel Capacity Discrete
Memoryless Channel
• Discrete-continuous (discrete inputs x_0, x_1, \ldots, x_{q-1}, continuous output y)
• Channel Capacity with AWGN

p(y \mid x_k) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(y - x_k)^2 / 2\sigma^2}

C = \max_{P(x_i)} \sum_{i=0}^{q-1} \int_{-\infty}^{\infty} P(X=x_i)\,
    \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(y - x_i)^2 / 2\sigma^2}\,
    \log_2\frac{\frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(y - x_i)^2 / 2\sigma^2}}
               {\sum_{i'=0}^{q-1} P(X=x_{i'})\, \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(y - x_{i'})^2 / 2\sigma^2}}\, dy
Channel Capacity Discrete
Memoryless Channel
• Binary symmetric PAM with continuous (soft-decision) output
• Maximum information when the two levels are equally likely:

P(X = A) = P(X = -A) = \tfrac12

C = \frac12 \sum_{x \in \{+A,\,-A\}} \int_{-\infty}^{\infty}
    \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(y - x)^2 / 2\sigma^2}\,
    \log_2\frac{2\, e^{-(y - x)^2 / 2\sigma^2}}
               {e^{-(y - A)^2 / 2\sigma^2} + e^{-(y + A)^2 / 2\sigma^2}}\, dy
Channel Capacity Discrete
Memoryless Channel
• Binary symmetric PAM with continuous (soft-decision) output
• Maximum information when the levels are equiprobable; by symmetry the two terms are equal, so the capacity reduces to a single integral:

C = \int_{-\infty}^{\infty}
    \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(y - A)^2 / 2\sigma^2}\,
    \log_2\frac{2\, e^{-(y - A)^2 / 2\sigma^2}}
               {e^{-(y - A)^2 / 2\sigma^2} + e^{-(y + A)^2 / 2\sigma^2}}\, dy
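A numerical sketch (my own, not from the slides) of this soft-decision capacity evaluated by quadrature, compared with the hard-decision (BSC) capacity at the same A/sigma:

import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def soft_capacity(A, sigma):
    # Binary-input AWGN capacity with levels +/-A and noise std sigma, in bits/use
    def integrand(y):
        pdf = np.exp(-(y - A) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
        # log2( 2 p(y|+A) / (p(y|+A) + p(y|-A)) ) = 1 - log2(1 + exp(-2*A*y/sigma^2))
        return pdf * (1.0 - np.logaddexp(0.0, -2 * A * y / sigma ** 2) / np.log(2))
    val, _ = quad(integrand, -np.inf, np.inf)
    return val

def hard_capacity(A, sigma):
    # Capacity after hard decisions: a BSC with crossover p = Q(A/sigma)
    p = 0.5 * erfc(A / (sigma * np.sqrt(2)))
    return sum(q * np.log2(2 * q) for q in (p, 1.0 - p) if q > 0)

for snr_db in (-5, 0, 5, 10):
    A = 10 ** (snr_db / 20)    # sigma = 1, so A^2/sigma^2 expressed in dB equals snr_db
    print(snr_db, round(soft_capacity(A, 1.0), 4), round(hard_capacity(A, 1.0), 4))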
Channel Capacity Discrete
Memoryless Channel
• Binary symmetric PAM with continuous (soft-decision) output
• Versus binary symmetric discrete (hard-decision) channel

The table of SNRb (dB), Pb and C is the same hard-decision table given above; plotting both shows the soft-decision capacity lying above the hard-decision capacity at every SNR per bit.

[Plot: Capacity C and BER vs SNR per bit (dB) for the binary channel]
Discrete Memoryless Channel
• Continuous-continuous
– Modulated waveform channels (QAM)
– Assume band-limited waveforms, bandwidth = W
• Sampling at the Nyquist rate = 2W samples/s
– Then over an interval of T seconds (N = 2WT samples) use an orthonormal function expansion:

x(t) = \sum_{i=1}^{N} x_i f_i(t), \qquad n(t) = \sum_{i=1}^{N} n_i f_i(t), \qquad y(t) = \sum_{i=1}^{N} y_i f_i(t)
Discrete Memoryless Channel
• Continuous-continuous
– Using the orthonormal function expansion we get an equivalent discrete-time channel:

y_i = x_i + n_i, \qquad
p(y_i \mid x_i) = \frac{1}{\sqrt{2\pi}\,\sigma_i}\, e^{-(y_i - x_i)^2 / 2\sigma_i^2}
Discrete Memoryless Channel
• Continuous-continuous, with p(y_i \mid x_i) = \frac{1}{\sqrt{2\pi}\,\sigma_i}\, e^{-(y_i - x_i)^2 / 2\sigma_i^2}
• Capacity is (Shannon):

C = \lim_{T\to\infty} \max_{p(\mathbf{x})} \frac{1}{T}\, I(\mathbf{X}_N; \mathbf{Y}_N), \qquad N = 2WT

I(\mathbf{X}_N; \mathbf{Y}_N)
 = \int_{\mathbf{X}_N}\!\int_{\mathbf{Y}_N} p(\mathbf{y}_N \mid \mathbf{x}_N)\, p(\mathbf{x}_N)\,
   \log\frac{p(\mathbf{y}_N \mid \mathbf{x}_N)}{p(\mathbf{y}_N)}\, d\mathbf{x}_N\, d\mathbf{y}_N
 = \sum_{i=1}^{N} \int\!\int p(y_i \mid x_i)\, p(x_i)\,
   \log\frac{p(y_i \mid x_i)}{p(y_i)}\, dy_i\, dx_i

(the sum form holds for independent inputs over the memoryless channel)
Discrete Memoryless Channel
• Continuous-continuous
• Maximum Information when:
p(x_i) = \frac{1}{\sqrt{2\pi}\,\sigma_x}\, e^{-x_i^2 / 2\sigma_x^2}
\qquad\text{(statistically independent, zero-mean Gaussian inputs)}

then

\max_{p(x)} I(\mathbf{X}_N; \mathbf{Y}_N)
 = \sum_{i=1}^{N} \frac12\log\!\left(1 + \frac{2\sigma_x^2}{N_0}\right)
 = \frac{N}{2}\log\!\left(1 + \frac{2\sigma_x^2}{N_0}\right)
 = WT\log\!\left(1 + \frac{2\sigma_x^2}{N_0}\right)
Discrete Memoryless Channel
• Continuous-continuous
• Constrain average power in x(t):
P_{av} = \frac{1}{T}\int_0^T E[x^2(t)]\, dt
       = \frac{1}{T}\sum_{i=1}^{N} E(x_i^2)
       = \frac{N\sigma_x^2}{T} = 2W\sigma_x^2
\qquad\Rightarrow\qquad
\sigma_x^2 = \frac{P_{av}}{2W}
Discrete Memoryless Channel
• Continuous-continuous
• Thus Capacity is:

C = \lim_{T\to\infty} \max_{p(\mathbf{x})} \frac{1}{T}\, I(\mathbf{X}_N; \mathbf{Y}_N)
  = \lim_{T\to\infty} W\log\!\left(1 + \frac{2\sigma_x^2}{N_0}\right)
  = W\log\!\left(1 + \frac{P_{av}}{W N_0}\right)
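A two-line numerical illustration (my own; the bandwidth, power and noise-density values are arbitrary assumptions):

import numpy as np

W = 3_000.0        # assumed bandwidth, Hz
N0 = 1e-8          # assumed noise power spectral density, W/Hz
Pav = 1e-3         # assumed average signal power, W
C = W * np.log2(1 + Pav / (W * N0))   # capacity in bits/s
print(C)           # about 1.5e4 bits/s for these numbers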
Discrete Memoryless Channel
• Continuous-continuous
• Thus Normalized Capacity is:

\frac{C}{W} = \log_2\!\left(1 + \frac{P_{av}}{W N_0}\right),
\quad\text{but } P_{av} = C\,\varepsilon_b, \text{ so}\quad
\frac{C}{W} = \log_2\!\left(1 + \frac{C}{W}\cdot\frac{\varepsilon_b}{N_0}\right)
\quad\Rightarrow\quad
\frac{\varepsilon_b}{N_0} = \frac{2^{C/W} - 1}{C/W}

εb/N0 (dB)    C/W
  -1.44036     0.1
  -1.36402     0.15
  -1.24869     0.225
  -1.07386     0.3375
  -0.8075      0.50625
  -0.39875     0.759375
   0.234937    1.139063
   1.230848    1.708594
   2.822545    2.562891
   5.41099     3.844336
   9.669259    5.766504
  16.65749     8.649756
  27.92605    12.97463
  45.69444    19.46195
  73.22669    29.19293
 115.4055     43.78939
 179.5542     65.68408

[Plot: normalized capacity C/W (log scale, 0.1 to 10) vs εb/N0 (dB) from -10 to 30 dB]
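The table can be reproduced by sweeping C/W (my own sketch); as C/W → 0 the required εb/N0 approaches ln 2, about -1.59 dB, the Shannon limit:

import numpy as np

cw = 0.1 * 1.5 ** np.arange(17)        # C/W values 0.1, 0.15, 0.225, ... (matches the C/W column above)
ebno = (2.0 ** cw - 1.0) / cw          # required Eb/N0 (linear) at each spectral efficiency
for c, e_db in zip(cw, 10 * np.log10(ebno)):
    print(f"{e_db:10.5f}  {c:.6g}")
print(10 * np.log10(np.log(2)))        # Shannon limit: about -1.59 dB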