Digital communication system
by Bernard Sklar
Important features of a DCS:
The transmitter sends a waveform from a finite set
of possible waveforms during a limited time
The channel distorts and attenuates the transmitted
signal
The receiver decides which waveform was
transmitted given the distorted/noisy received
signal. There is a limit to the time it has to do this
task.
The probability of an erroneous decision is an
important measure of system performance
Lecture 1 2
Digital versus analog
Advantages of digital communications:
Regenerator receiver
[Figure: the original pulse degrades along the propagation distance; a regenerative repeater restores it to a clean regenerated pulse.]
Lecture 1 3
Classification of signals
Deterministic and random signals
Deterministic signal: No uncertainty with respect to
the signal value at any time.
Random signal: Some degree of uncertainty in
signal values before they actually occur. Examples:
Thermal noise in electronic circuits due to the random
movement of electrons. See my notes on Noise
Reflection of radio waves from different layers of
ionosphere
Interference
Lecture 1 4
Classification of signals …
Periodic and non-periodic signals
A discrete signal
Analog signals
Lecture 1 5
Classification of signals ..
Energy and power signals
A signal is an energy signal if, and only if, it has nonzero
but finite energy for all time: 0 < E = integral of x^2(t) dt over all time < infinity
A signal is a power signal if, and only if, it has finite but
nonzero average power for all time: 0 < P = lim_{T->inf} (1/T) integral of x^2(t) dt over [-T/2, T/2] < infinity
Lecture 1 6
Random process
A random process is a collection (ensemble) of time functions,
or signals, corresponding to various outcomes of a random
experiment. For each outcome, there exists a deterministic
function, which is called a sample function or a realization.
[Figure: a random experiment maps each outcome to a real number (random variable) and to a sample function or realization (a deterministic function of time t).]
Lecture 1 7
Random process …
Strictly stationary: If none of the statistics of the random process are
affected by a shift in the time origin.
Ergodic: the ensemble averages (mean and autocorrelation) equal the
corresponding time averages. In other words, you get the same result from averaging over the
ensemble or over all time.
Lecture 1 8
Autocorrelation
Autocorrelation of an energy signal: Rx(tau) = integral of x(t) x*(t + tau) dt over all time
Lecture 1 9
Spectral density
Energy signals: energy spectral density (ESD), Psi_x(f) = |X(f)|^2
Power signals: power spectral density (PSD), Gx(f)
Random process:
Power spectral density (PSD): the Fourier transform of the autocorrelation function
Lecture 1 10
Properties of an autocorrelation function
For real-valued (and WSS in case of
random signals):
1. Autocorrelation and spectral density form a
Fourier transform pair. – see Linear systems,
noise
2. Autocorrelation is symmetric around zero.
3. Its maximum value occurs at the origin.
4. Its value at the origin is equal to the average
power or energy.
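Properties 2–4 above can be verified numerically. A hypothetical sketch (the Gaussian pulse is an illustrative choice, not from the slides):

```python
import numpy as np

dt = 1e-2
t = np.arange(-5, 5, dt)
x = np.exp(-t**2)                         # real-valued energy signal

# R_x(tau) via full linear correlation, scaled to approximate the integral
R = np.correlate(x, x, mode="full") * dt
mid = len(R) // 2                         # index of tau = 0

symmetric = np.allclose(R, R[::-1], atol=1e-9)   # property 2: R(tau) = R(-tau)
peak_at_origin = np.argmax(R) == mid             # property 3: max at tau = 0
E = np.sum(x**2) * dt                            # signal energy
matches_energy = abs(R[mid] - E) < 1e-9          # property 4: R(0) = E
```

All three flags come out True; property 1 (Fourier pair with the spectral density) could be checked the same way with `np.fft`.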
Lecture 1 11
Noise in communication systems
Thermal noise is described by a zero-mean, Gaussian random
process, n(t).
Its PSD is flat, hence it is called white noise: Gn(f) = N0/2 [W/Hz].
Its autocorrelation function is Rn(tau) = (N0/2) delta(tau).
sigma is the standard deviation and sigma^2 is the variance of the random
process; the probability density function is Gaussian.
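A hypothetical sketch of these properties: white Gaussian samples have zero mean, variance sigma^2, and a periodogram that is flat across frequency (the mapping of N0/2 onto the per-sample variance is an assumption of this discrete illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
n = rng.normal(0.0, sigma, 200_000)            # zero-mean Gaussian noise

mean_ok = abs(n.mean()) < 0.05                 # zero mean
var_ok = abs(n.var() - sigma**2) < 0.1         # variance = sigma^2

# Periodogram |FFT|^2 / N, averaged over 10 coarse frequency bins
psd = np.abs(np.fft.rfft(n))**2 / len(n)
bins = psd[: 10 * (len(psd) // 10)].reshape(10, -1).mean(axis=1)
flat = bins.max() / bins.min() < 1.1           # roughly flat across frequency
```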
Lecture 1 12
Signal transmission through linear systems
[Figure: input x(t) -> linear system h(t) -> output y(t)] - see my notes on linear systems
Deterministic signals: y(t) = x(t) * h(t), Y(f) = X(f) H(f)
Random signals: Gy(f) = Gx(f) |H(f)|^2
Lecture 1 13
Signal transmission … - cont’d
Ideal filters:
Non-causal!
Low-pass
Band-pass High-pass
Realizable filters:
RC filters Butterworth filter
Lecture 1 14
Bandwidth of signal
Baseband versus bandpass:
Baseband Bandpass
signal signal
Local oscillator
Bandwidth dilemma:
Bandlimited signals are not realizable!
Realizable signals have infinite bandwidth! We approximate
“Band-Limited” in our analysis!
Lecture 1 15
Bandwidth of signal …
Different definitions of bandwidth:
a) Half-power bandwidth d) Fractional power containment bandwidth
b) Noise equivalent bandwidth e) Bounded power spectral density
c) Null-to-null bandwidth f) Absolute bandwidth
[Figure: PSD of a bandpass signal annotated with the bandwidth definitions (a)-(f); the bounded-PSD example (e) uses a 50 dB attenuation level.]
Lecture 1 16
Formatting and transmission of baseband signal
A Digital Communication System
[Block diagram of a DCS: digital, textual, or analog source information is formatted (analog info is sampled, quantized and encoded into a bit stream), pulse-modulated into waveforms, and transmitted over the channel; at the receiver the waveforms are demodulated/detected and decoded (analog info is low-pass filtered) before delivery to the digital, textual, or analog sink.]
Lecture 2 17
Format analog signals
To transform an analog waveform into a form
that is compatible with a digital
communication system, the following steps
are taken:
1. Sampling – See my notes on Sampling
2. Quantization and encoding
3. Baseband transmission
Lecture 2 18
Sampling
See my notes on Fourier Series, Fourier Transform and Sampling
[Figure: x(t) and its spectrum |X(f)|; the sampled signal xs(t) and its spectrum |Xs(f)|, which repeats at multiples of the sampling rate.]
Lecture 2 19
Aliasing effect
[Figure: spectral replicas overlap when sampling below the Nyquist rate; an LP filter cannot recover the original spectrum once aliasing occurs.]
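A minimal numerical sketch of aliasing (an illustration, not from the slides): a 7 Hz tone sampled at fs = 10 Hz (below its Nyquist rate of 14 Hz) produces exactly the same samples as a 3 Hz tone, since 7 = 10 - 3.

```python
import numpy as np

fs = 10.0                               # sampling rate [Hz]
n = np.arange(64)                       # sample indices
x7 = np.cos(2 * np.pi * 7 * n / fs)     # undersampled 7 Hz tone
x3 = np.cos(2 * np.pi * 3 * n / fs)     # its 3 Hz alias
aliased = np.allclose(x7, x3)           # sample sequences are indistinguishable
```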
Lecture 2 20
Sampling theorem
[Figure: quantizer input ("In") vs. output ("Out") staircase characteristic; the quantized samples and the average quantization noise power.]
Lecture 2 22
Encoding (PCM)
Lecture 2 23
Quantization example
[Figure: x(t) and its quantized, encoded samples — e.g. codeword 111 at amplitude 3.1867, 100 at 0.4552, 010 at -1.3657.]
Lecture 2 24
Quantization error
Quantizing error: The difference between the input and output of
a quantizer: e(t) = x^(t) - x(t)
[Figure: additive-noise model of the quantizer — x(t) plus noise e(t) gives x^(t). The noise model is an approximation!]
Lecture 2 25
Quantization error …
Quantizing error:
Granular or linear errors happen for inputs within the dynamic
range of quantizer
Saturation errors happen for inputs outside the dynamic range
of quantizer
Saturation errors are larger than linear errors (also known as "overflow"
or "clipping")
Saturation errors can be avoided by proper tuning of AGC
Saturation errors need to be handled by Overflow Detection!
Quantization noise variance:
sigma_q^2 = E{[x - q(x)]^2} = integral of e^2(x) p(x) dx = sigma_Lin^2 + sigma_Sat^2
sigma_Lin^2 = sum over l = 1..L/2 of (q_l^2 / 12) p(x_l) q_l
Uniform quantizer: sigma_Lin^2 = q^2 / 12
Lecture 2 26
Uniform and non-uniform quant.
Uniform (linear) quantizing:
No assumption about amplitude statistics and correlation
properties of the input.
Not using the user-related specifications
Robust to small changes in input statistics, since it is not finely tuned to a
specific set of input parameters
Simple implementation
Application of linear quantizer:
Signal processing, graphic and display applications, process
control applications
Non-uniform quantizing:
Using the input statistics to tune quantizer parameters
Larger SNR than uniform quantizing with same number of levels
Non-uniform intervals in the dynamic range with same quantization
noise variance
Application of non-uniform quantizer:
Commonly used for speech
Examples are mu-law (US) and A-law (international)
Lecture 2 27
Non-uniform quantization
It is achieved by uniformly quantizing the “compressed” signal.
(actually, modern A/D converters use Uniform quantizing at 12-13 bits
and compand digitally)
At the receiver, an inverse compression characteristic, called
“expansion” is employed to avoid signal distortion.
compression + expansion = companding
[Block diagram: x(t) -> Compress y = C(x) -> y(t) -> Quantize -> y^(t) -> Expand -> x^(t); compressor and quantizer at the transmitter, expander at the receiver.]
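A hypothetical sketch of mu-law companding (mu = 255, as in the US standard): compress, quantize uniformly, then expand at the receiver. The 8-bit step size and the test amplitudes are illustrative assumptions.

```python
import numpy as np

MU = 255.0

def compress(x):
    # mu-law compressor C(x), for |x| <= 1
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def expand(y):
    # inverse characteristic ("expansion") used at the receiver
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

q = 2.0 / 2**8                                   # 8-bit uniform step on [-1, 1]
x = np.array([0.001, -0.02, 0.5, -0.9])          # mostly weak amplitudes, as in speech
yq = q * np.round(compress(x) / q)               # uniform quantizing of the compressed signal
xhat = expand(yq)

roundtrip_ok = np.allclose(expand(compress(x)), x, atol=1e-12)  # C and C^-1 invert
weak_rel_err = abs(xhat[0] - x[0]) / abs(x[0])   # small relative error even at 1e-3
```

Without companding, an input of 0.001 would fall entirely inside one step of the 8-bit uniform quantizer; with it, the relative error stays within a few percent.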
Lecture 2 28
Statistics of speech amplitudes
In speech, weak signals are more frequent than strong ones.
[Figure: pdf of the normalized magnitude of a speech signal — small amplitudes are far more probable than large ones.]
Using equal step sizes (uniform quantizer) gives a low (S/N)q for weak
signals and a high (S/N)q for strong signals.
Adjusting the step size of the quantizer by taking the speech statistics into account
improves the average SNR over the input range.
Lecture 2 29
Baseband transmission
Lecture 2 30
PCM waveforms
[Figure: PCM waveform examples over bit intervals 0...5T with levels +-V: Unipolar-RZ, Bipolar-RZ, Miller, and Dicode NRZ.]
Lecture 2 31
PCM waveforms …
Criteria for comparing and selecting PCM
waveforms:
Spectral characteristics (power spectral density and
bandwidth efficiency)
Bit synchronization capability
Error detection capability
Interference and noise immunity
Implementation cost and complexity
Lecture 2 32
Spectra of PCM waveforms
Lecture 2 33
M-ary pulse modulation
Lecture 2 34
PAM example
Lecture 2 35
Formatting and transmission of baseband signal
Information (data) rate: Rb = 1/Tb [bits/s]
Symbol rate: R = 1/T [symbols/s]
For real-time transmission: Rb = m R, where m = log2(M) is the number of bits per symbol
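A minimal sketch of the rate relations above (the function name and parameter values are illustrative):

```python
import math

def rates(Tb, M):
    m = int(math.log2(M))        # bits per symbol
    Rb = 1.0 / Tb                # bit rate [bits/s]
    R = Rb / m                   # symbol rate [symbols/s], from Rb = m * R
    return Rb, R

Rb2, R2 = rates(Tb=1e-3, M=2)    # binary: every bit is a symbol
Rb4, R4 = rates(Tb=1e-3, M=4)    # 4-ary: two bits per symbol halves the symbol rate
```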
Lecture 3 36
Quantization example
[Figure: x(t) and its quantized, encoded samples — codeword 111 at amplitude 3.1867, 100 at 0.4552, 010 at -1.3657.]
Lecture 3 37
Example of M-ary PAM
[Figure: binary PAM with amplitudes +-A mapped to '1'/'0', and 4-ary PAM with amplitudes 3B, B, -B, -3B mapped to the dibits '11', '01', '00', '10', each over a symbol duration T.]
Lecture 3 38
Example of M-ary PAM …
0 Ts 2Ts
2.2762 V 1.3657 V
0 Tb 2Tb 3Tb 4Tb 5Tb 6Tb
1 1 0 1 0 1
Rb=1/Tb=3/Ts
R=1/T=1/Tb=3/Ts
0 T 2T 3T 4T 5T 6T
Rb=1/Tb=3/Ts
R=1/T=1/2Tb=3/2Ts=1.5/Ts
0 T 2T 3T
Lecture 3 39
Today we are going to talk about:
Receiver structure
Demodulation (and sampling)
Detection
First step for designing the receiver
Matched filter receiver
Correlator receiver
Lecture 3 40
Demodulation and detection
[Block diagram: message mi -> Format -> Pulse modulate gi(t) -> Bandpass modulate si(t), i = 1,...,M -> channel hc(t) with noise n(t) -> Demodulate & sample z(T) -> Detect -> m^i.]
Lecture 3 42
Example: Channel impact …
hc(t) = delta(t) + 0.5 delta(t - 0.75T)
Lecture 3 43
Receiver tasks
Demodulation and sampling:
Waveform recovery and preparing the received
signal for detection:
Improving the signal power to the noise power (SNR)
using matched filter
Reducing ISI using equalizer
Sampling the recovered waveform
Detection:
Estimate the transmitted symbol based on the
received sample
Lecture 3 44
Receiver structure
[Block diagram: r(t) -> frequency down-conversion -> receiving filter -> equalizing filter -> sample z(T) -> threshold comparison -> m^i.]
Lecture 3 45
Baseband and bandpass
Bandpass model of detection process is
equivalent to baseband model because:
The received bandpass waveform is first
transformed to a baseband waveform.
Equivalence theorem:
Performing bandpass linear signal processing followed by
heterodyning the signal to the baseband, yields the same
results as heterodyning the bandpass signal to the
baseband , followed by a baseband linear signal
processing.
Lecture 3 46
Steps in designing the receiver
Find optimum solution for receiver design with the
following goals:
1. Maximize SNR
2. Minimize ISI
Steps in design:
Model the received signal
Find separate solutions for each of the goals.
First, we focus on designing a receiver which
maximizes the SNR.
Lecture 3 47
Design the receiver filter to maximize the SNR
Simplify the model:
Received signal in AWGN: r(t) = si(t) * hc(t) + n(t), where n(t) is AWGN
Ideal channel: hc(t) = delta(t), so that r(t) = si(t) + n(t)
Lecture 3 48
Matched filter receiver
Problem:
Design the receiver filter h(t) such that the SNR is
maximized at the sampling time when si(t), i = 1,...,M,
is transmitted.
Solution:
The optimum filter, is the Matched filter, given by
h(t) = hopt(t) = si*(T - t)
H(f) = Hopt(f) = Si*(f) exp(-j 2 pi f T)
[Figure: si(t) on [0, T] and its time-reversed, delayed copy hopt(t).]
Lecture 3 49
Example of matched filter
[Figure: si(t) is a rectangular pulse of amplitude A/sqrt(T) on [0, T]; the matched filter hopt(t) has the same shape; the output y(t) = si(t) * hopt(t) is triangular on [0, 2T], peaking at A^2 at t = T.]
Lecture 3 50
Properties of the matched filter
The Fourier transform of a matched filter output with the matched signal as input
is, except for a time delay factor, proportional to the ESD of the input signal.
Z(f) = S(f) S*(f) exp(-j 2 pi f T) = |S(f)|^2 exp(-j 2 pi f T)
The output signal of a matched filter is proportional to a shifted version of the
autocorrelation function of the input signal to which the filter is matched.
z(t) = Rs(t - T)  and  z(T) = Rs(0) = Es
The output SNR of a matched filter depends only on the ratio of the signal energy
to the PSD of the white noise at the filter input.
max (S/N)_T = Es / (N0/2) = 2 Es / N0
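The second property — the matched filter output peaks at the signal energy Es at t = T — can be checked numerically. A hypothetical sketch with a rectangular pulse (an illustrative choice):

```python
import numpy as np

dt = 1e-3
T = 1.0
t = np.arange(0, T, dt)
s = np.full_like(t, 2.0)                  # rectangular pulse, amplitude 2
Es = np.sum(s**2) * dt                    # signal energy (= 4.0)

h = s[::-1]                               # matched filter h(t) = s(T - t)
y = np.convolve(s, h) * dt                # filter output = shifted autocorrelation

peak_at_T = np.argmax(y) == len(t) - 1    # peak occurs at t = T
peak_is_Es = abs(y.max() - Es) < 1e-9     # z(T) = Rs(0) = Es
```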
Lecture 3 51
Correlator receiver
The matched filter output at the sampling time,
can be realized as the correlator output.
z(T) = hopt(T) * r(T) = integral over [0, T] of r(tau) si*(tau) d tau = <r(t), si(t)>
Lecture 3 52
Implementation of matched filter receiver
[Figure: bank of M matched filters si*(T - t), i = 1,...,M, each sampled at t = T to form the observation vector z.]
Matched filter output: zi = r(t) * si*(T - t) evaluated at t = T, i = 1,...,M
z = (z1(T), z2(T), ..., zM(T)) = (z1, z2, ..., zM)
Lecture 3 53
Implementation of correlator receiver
Bank of M correlators
[Figure: bank of M correlators with reference signals s1(t), ..., sM(t), each integrated over [0, T] to form the observation vector z.]
Correlator output: zi = integral over [0, T] of r(t) si(t) dt, i = 1,...,M
z = (z1(T), z2(T), ..., zM(T)) = (z1, z2, ..., zM)
Lecture 3 54
Implementation example of matched filter
receivers
[Figure: bank of 2 matched filters for antipodal rectangular pulses s1(t) = A/sqrt(T) and s2(t) = -A/sqrt(T) on [0, T]; each filter is sampled at t = T to produce z1(T) and z2(T), forming z.]
Lecture 3 55
Receiver job
Demodulation and sampling:
Waveform recovery and preparing the received
signal for detection:
Improving the signal power to the noise power (SNR)
using matched filter
Reducing ISI using equalizer
Sampling the recovered waveform
Detection:
Estimate the transmitted symbol based on the
received sample
Lecture 4 56
Receiver structure
Digital Receiver
Step 1 – waveform to sample transformation Step 2 – decision making
[Block diagram: r(t) -> frequency down-conversion -> receiving filter -> equalizing filter -> sample z(T) -> threshold comparison -> m^i.]
Lecture 4 57
Implementation of matched filter receiver
[Figure: bank of M matched filters si*(T - t), i = 1,...,M, each sampled at t = T to form the observation vector z.]
Matched filter output: zi = r(t) * si*(T - t) evaluated at t = T, i = 1,...,M
z = (z1(T), z2(T), ..., zM(T)) = (z1, z2, ..., zM)
Lecture 4 58
Implementation of correlator receiver
Bank of M correlators
[Figure: bank of M correlators with reference signals s1(t), ..., sM(t), each integrated over [0, T] to form the observation vector z.]
Correlator output: zi = integral over [0, T] of r(t) si(t) dt, i = 1,...,M
z = (z1(T), z2(T), ..., zM(T)) = (z1, z2, ..., zM)
Lecture 4 59
Today, we are going to talk about:
Detection:
Estimate the transmitted symbol based on the
received sample
Signal space used for detection
Orthogonal N-dimensional space
Signal to waveform transformation and vice versa
Lecture 4 60
Signal space
What is a signal space?
Vector representations of signals in an N-dimensional
orthogonal space
Why do we need a signal space?
It is a means to convert signals to vectors and vice versa.
It is a means to calculate signals energy and Euclidean
distances between signals.
Why are we interested in Euclidean distances between
signals?
For detection purposes: The received signal is transformed to
a received vector. The signal which has the minimum
Euclidean distance to the received signal is estimated as the
transmitted signal.
Lecture 4 61
Schematic example of a signal space
[Figure: two-dimensional signal space with basis functions psi1(t), psi2(t), signal vectors s1 = (a11, a12), s2 = (a21, a22), s3 = (a31, a32) and received vector z = (z1, z2).]
Transmitted signal alternatives:
si(t) = ai1 psi1(t) + ai2 psi2(t)  <->  si = (ai1, ai2), i = 1, 2, 3
Received signal at matched filter output:
z(t) = z1 psi1(t) + z2 psi2(t)  <->  z = (z1, z2)
Lecture 4 62
Signal space
To form a signal space, first we need to know
the inner product between two signals
(functions):
Inner (scalar) product: <x(t), y(t)> = integral of x(t) y*(t) dt over all time
Analogous to the "dot" product of discrete n-space vectors; equals the cross-correlation between x(t) and y(t).
Signal distance: d(x, y) = ||x(t) - y(t)||
[Figure: Euclidean distances d(si, z), i = 1, 2, 3, between the received vector z = (z1, z2) and the signal vectors si, with signal energies Ei = ||si||^2.]
Lecture 4 65
Orthogonal signal space
An N-dimensional orthogonal signal space is characterized by
N linearly independent functions {psi_j(t)}, j = 1,...,N, called basis
functions. The basis functions must satisfy the orthogonality
condition
<psi_i(t), psi_j(t)> = integral over [0, T] of psi_i(t) psi_j*(t) dt = Ki delta_ij,  0 <= t < T,  i, j = 1,...,N
where delta_ij = 1 for i = j and delta_ij = 0 otherwise.
Lecture 4 66
Example of an orthonormal basis
Example: 2-dimensional orthonormal signal space
psi1(t) = sqrt(2/T) cos(2 pi t / T), 0 <= t < T
psi2(t) = sqrt(2/T) sin(2 pi t / T), 0 <= t < T
<psi1(t), psi2(t)> = integral over [0, T] of psi1(t) psi2(t) dt = 0
||psi1(t)|| = ||psi2(t)|| = 1
Example: 1-dimensional orthonormal signal space
psi1(t) = 1/sqrt(T) on [0, T],  ||psi1(t)|| = 1
Lecture 4 67
Signal space …
Any arbitrary finite set of waveforms {si(t)}, i = 1,...,M,
where each member of the set is of duration T, can be
expressed as a linear combination of N orthogonal
waveforms {psi_j(t)}, j = 1,...,N, where N <= M:
si(t) = sum over j = 1..N of aij psi_j(t),  i = 1,...,M,  N <= M
where
aij = (1/Kj) <si(t), psi_j(t)> = (1/Kj) integral over [0, T] of si(t) psi_j*(t) dt,  j = 1,...,N,  0 <= t < T
Vector representation of the waveform: si = (ai1, ai2, ..., aiN)
Waveform energy: Ei = sum over j = 1..N of Kj aij^2
Lecture 4 68
Signal space …
si(t) = sum over j = 1..N of aij psi_j(t)  <->  si = (ai1, ai2, ..., aiN)
[Figure: waveform-to-vector conversion (a bank of correlators with psi1(t), ..., psiN(t) producing ai1, ..., aiN) and vector-to-waveform conversion (weighting each psi_j(t) by aij and summing to rebuild si(t)).]
Lecture 4 69
Example of projecting signals to an
orthonormal signal space
[Figure: signal vectors s1 = (a11, a12), s2 = (a21, a22), s3 = (a31, a32) in the plane spanned by psi1(t) and psi2(t).]
Transmitted signal alternatives:
si(t) = ai1 psi1(t) + ai2 psi2(t)  <->  si = (ai1, ai2), i = 1, 2, 3
aij = integral over [0, T] of si(t) psi_j(t) dt,  j = 1,...,N,  i = 1,...,M,  0 <= t < T
Lecture 4 70
Signal space – cont’d
To find an orthonormal basis functions for a given
set of signals, the Gram-Schmidt procedure can be
used.
Gram-Schmidt procedure:
Given a signal set {si(t)}, i = 1,...,M, compute an orthonormal basis {psi_j(t)}, j = 1,...,N:
1. Define psi1(t) = s1(t)/sqrt(E1) = s1(t)/||s1(t)||
2. For i = 2,...,M compute di(t) = si(t) - sum over j = 1..i-1 of <si(t), psi_j(t)> psi_j(t)
If di(t) != 0, let psi_i(t) = di(t)/||di(t)||; if di(t) = 0, no new basis function is generated.
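The procedure above can be sketched on sampled waveforms, with inner products approximated by sums times dt (the three test signals are illustrative, not from the slides):

```python
import numpy as np

def gram_schmidt(signals, dt):
    basis = []
    for s in signals:
        d = s.astype(float).copy()
        for psi in basis:
            d -= np.sum(s * psi) * dt * psi      # subtract projection <s, psi> psi
        norm = np.sqrt(np.sum(d**2) * dt)
        if norm > 1e-9:                          # d_i(t) != 0: new basis function
            basis.append(d / norm)
    return np.array(basis)

dt = 1e-3
t = np.arange(0, 1, dt)
s1 = np.ones_like(t)
s2 = -np.ones_like(t)                            # s2 = -s1: adds no dimension
s3 = np.where(t < 0.5, 1.0, -1.0)                # orthogonal to s1
basis = gram_schmidt([s1, s2, s3], dt)           # N = 2 basis functions for M = 3
```

The result is orthonormal: unit energy per basis function, zero cross inner product.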
Lecture 4 71
Example of Gram-Schmidt procedure
Find the basis functions and plot the signal space for the following
transmitted signals:
[Figure: s1(t) = A/sqrt(T) and s2(t) = -A/sqrt(T), both on [0, T].]
psi1(t) = s1(t)/sqrt(E1),  E1 = A^2
<s2(t), psi1(t)> = integral over [0, T] of s2(t) psi1(t) dt = -A
d2(t) = s2(t) - (-A) psi1(t) = 0
The signal space is therefore one-dimensional: s1 = A and s2 = -A on the psi1(t) axis.
Lecture 4 72
Implementation of the matched filter receiver
[Figure: bank of N matched filters psi_j(T - t), j = 1,...,N, sampled at t = T to form the observation vector z.]
zj = r(t) * psi_j(T - t) evaluated at t = T,  j = 1,...,N
si(t) = sum over j = 1..N of aij psi_j(t),  i = 1,...,M,  N <= M
z = (z1, z2, ..., zN)
Lecture 4 73
Implementation of the correlator receiver
Bank of N correlators
[Figure: bank of N correlators with basis functions psi1(t), ..., psiN(t), each integrated over [0, T] to form z.]
zj = integral over [0, T] of r(t) psi_j(t) dt,  j = 1,...,N
si(t) = sum over j = 1..N of aij psi_j(t),  i = 1,...,M,  N <= M
z = (z1, z2, ..., zN)
Lecture 4 74
Example of matched filter receivers using
basis functions
[Figure: s1(t) = A/sqrt(T) and s2(t) = -A/sqrt(T) on [0, T]; psi1(t) = 1/sqrt(T); a single filter matched to psi1(t) and sampled at t = T produces z1, so z = z1.]
Lecture 4 75
White noise in the orthonormal signal space
n(t) = sum over j = 1..N of nj psi_j(t) + n~(t),  where nj = <n(t), psi_j(t)>, j = 1,...,N
The coefficients {nj} are independent zero-mean
Gaussian random variables with
variance var(nj) = N0/2; n~(t) is the noise component lying outside the signal space.
Lecture 4 76
Detection of signal in AWGN
Detection problem:
Given the observation vector z, perform a mapping
from z to an estimate m^ of the transmitted symbol
mi, such that the average probability of error in the
decision is minimized.
[Block diagram: mi -> Modulator -> si -> (+ noise n) -> z -> Decision rule -> m^.]
Lecture 5 77
Statistics of the observation Vector
AWGN channel model: z = si + n
The signal vector si = (ai1, ai2, ..., aiN) is deterministic.
The elements of the noise vector n = (n1, n2, ..., nN) are i.i.d.
Gaussian random variables with zero mean and
variance N0/2. The noise vector pdf is
p_n(n) = (1/(pi N0)^(N/2)) exp(-||n||^2 / N0)
The elements of the observed vector z = (z1, z2, ..., zN) are
independent Gaussian random variables. Its pdf is
p_z(z | si) = (1/(pi N0)^(N/2)) exp(-||z - si||^2 / N0)
Lecture 5 78
Detection
Optimum decision rule (maximum a posteriori
probability):
Set m^ = mi if Pr(mi sent | z) >= Pr(mk sent | z) for all k != i,  where k = 1,...,M
Applying Bayes' rule gives:
Set m^ = mi if pk p_z(z | mk) / p_z(z) is maximum for k = i
Lecture 5 79
Detection …
Vector z lies inside region Zi if ln[pk p_z(z | mk) / p_z(z)] is maximum for k = i.
That means m^ = mi.
Lecture 5 80
Detection (ML rule)
For equally probable symbols, the optimum
decision rule (maximum a posteriori probability)
is simplified to:
Set m^ = mi if p_z(z | mk) is maximum for k = i
or equivalently:
Set m^ = mi if ln[p_z(z | mk)] is maximum for k = i
Vector z lies inside region Zi if ln[p_z(z | mk)] is maximum for k = i.
That means m^ = mi.
Lecture 5 82
Detection rule (ML)…
It can be simplified to:
Vector z lies inside region Zi if ||z - sk|| is minimum for k = i
or equivalently:
Vector z lies inside region Zi if sum over j = 1..N of zj akj - Ek/2 is maximum for k = i
where Ek is the energy of sk(t).
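The equivalence of the two ML rules above — minimum distance ||z - sk|| versus maximum correlation metric <z, sk> - Ek/2 — can be checked directly. A hypothetical sketch with QPSK-like signal vectors:

```python
import numpy as np

S = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # signal vectors
z = np.array([0.8, 0.3])                          # received vector

# Rule 1: minimum Euclidean distance
dist_rule = int(np.argmin([np.linalg.norm(z - s) for s in S]))

# Rule 2: maximum correlation metric <z, s_k> - E_k / 2
E = np.sum(S**2, axis=1)                          # symbol energies E_k
corr_rule = int(np.argmax(S @ z - E / 2))

rules_agree = dist_rule == corr_rule              # both pick the same symbol
```

Expanding ||z - sk||^2 = ||z||^2 - 2<z, sk> + Ek shows why the rules coincide: ||z||^2 is common to all k.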
Lecture 5 83
Maximum likelihood detector block
diagram
[Block diagram: the observation vector z is applied to M branches; branch k computes <z, sk> - Ek/2, k = 1,...,M, and the branch with the largest metric determines m^.]
Lecture 5 84
Schematic example of the ML decision regions
[Figure: the psi1-psi2 plane is divided into decision regions Z1,...,Z4 around the signal vectors s1,...,s4.]
Lecture 5 85
Average probability of symbol error
Erroneous decision: For the transmitted symbol mi, or
equivalently the signal vector si, an error in decision occurs if
the observation vector z does not fall inside region Zi.
Probability of erroneous decision for a transmitted symbol:
Pe(mi) = Pr(m^ != mi and mi sent)
Pr(m^ != mi) = Pr(mi sent) Pr(z does not lie inside Zi | mi sent)
Probability of correct decision for a transmitted symbol:
Pr(m^ = mi) = Pr(mi sent) Pr(z lies inside Zi | mi sent)
Pc(mi) = Pr(z lies inside Zi | mi sent) = integral over Zi of p_z(z | mi) dz
Pe(mi) = 1 - Pc(mi)
Lecture 5 86
Av. prob. of symbol error …
Average probability of symbol error:
P_E(M) = sum over i = 1..M of Pr(m^ != mi)
For equally probable symbols:
P_E(M) = (1/M) sum over i = 1..M of Pe(mi) = 1 - (1/M) sum over i = 1..M of Pc(mi)
       = 1 - (1/M) sum over i = 1..M of integral over Zi of p_z(z | mi) dz
Lecture 5 87
Example for binary PAM
[Figure: conditional pdfs p_z(z | m2) and p_z(z | m1) centered at s2 = -sqrt(Eb) and s1 = +sqrt(Eb) on the psi1(t) axis.]
Pe(m1) = Pe(m2) = Q( (||s1 - s2|| / 2) / sqrt(N0/2) )
P_B = P_E(2) = Q( sqrt(2 Eb / N0) )
Lecture 5 88
Union bound
Union bound
The probability of a finite union of events is upper bounded
by the sum of the probabilities of the individual events.
Pe(mi) <= sum over k != i of P2(sk, si),  k = 1,...,M
where P2(sk, si) is the pairwise probability of deciding sk when si was sent.
Lecture 5 89
Example of union bound
[Figure: four signal vectors s1,...,s4 with decision regions Z1,...,Z4.]
Pe(m1) = integral over Z2 U Z3 U Z4 of p_r(r | m1) dr
Union bound:
Pe(m1) <= sum over k = 2..4 of P2(sk, s1)
[Figure: each pairwise term integrates p_r(r | m1) over the half-plane Ak closer to sk than to s1:
P2(s2, s1) = integral over A2 of p_r(r | m1) dr, and likewise for P2(s3, s1) over A3 and P2(s4, s1) over A4.]
Lecture 5 90
Upper bound based on minimum distance
P2(sk, si) = Pr(z is closer to sk than to si, when si is sent)
= integral from dik/2 to infinity of (1/sqrt(pi N0)) exp(-u^2 / N0) du = Q( (dik/2) / sqrt(N0/2) )
dik = ||si - sk||
P_E(M) <= (1/M) sum over i of sum over k != i of P2(sk, si) <= (M - 1) Q( (dmin/2) / sqrt(N0/2) )
Minimum distance in the signal space: dmin = min over i != k of dik
Lecture 5 91
Example of upper bound on av. Symbol
error prob. based on union bound
[Figure: four signal vectors on the psi1, psi2 axes with ||si|| = sqrt(Es), i = 1,...,4; adjacent distances d(1,2) = d(2,3) = d(3,4) = d(1,4) = sqrt(2 Es), so dmin = sqrt(2 Es).]
Lecture 5 92
Eb/No figure of merit in digital
communications
SNR or S/N is the average signal power to the
average noise power. SNR should be modified
in terms of bit-energy in DCS, because:
Signals are transmitted within a symbol duration
and hence are energy signals (with zero average power).
A bit-level figure of merit facilitates comparison of
different DCSs transmitting different numbers of bits
per symbol.
Eb/N0 = (S Tb)/(N/W) = (S/N)(W/Rb)
Rb: bit rate
W: bandwidth
Lecture 5 93
Example of Symbol error prob. For PAM
signals
Binary PAM
[Figure: points s2 = -sqrt(Eb) and s1 = +sqrt(Eb) on psi1(t).]
4-ary PAM
[Figure: four points s4,...,s1 equally spaced about zero on psi1(t), with psi1(t) = 1/sqrt(T) on [0, T].]
Lecture 5 94
Inter-Symbol Interference (ISI)
ISI in the detection process due to the filtering
effects of the system
Overall equivalent system transfer function
H(f) = Ht(f) Hc(f) Hr(f)
creates echoes and hence time dispersion, which
causes ISI at the sampling time:
zk = sk + nk + sum over i != k of si
Lecture 6 95
Inter-symbol interference
Baseband system model
[Figure: data xk (period T) -> Tx filter ht(t), Ht(f) -> channel hc(t), Hc(f) with additive noise n(t) -> Rx filter hr(t), Hr(f) -> r(t), sampled at t = kT to give zk -> Detector -> x^k.]
Equivalent model: the cascade is replaced by one equivalent system h(t), H(f), with filtered noise n^(t) added at its output:
H(f) = Ht(f) Hc(f) Hr(f)
Lecture 6 96
Nyquist bandwidth constraint
Nyquist bandwidth constraint:
The theoretical minimum required system bandwidth to detect Rs
[symbols/s] without ISI is Rs/2 [Hz].
Equivalently, a system with bandwidth W=1/2T=Rs/2 [Hz] can
support a maximum transmission rate of 2W=1/T=Rs [symbols/s]
without ISI.
W = 1/(2T) = Rs/2 [Hz]
Bandwidth efficiency, R/W [bits/s/Hz]:
An important measure in DCSs representing data throughput per
hertz of bandwidth.
Showing how efficiently the bandwidth resources are used by
signaling techniques.
Lecture 6 97
Ideal Nyquist pulse (filter)
[Figure: the ideal Nyquist filter H(f) is a brick-wall of height T on |f| <= 1/(2T); the ideal Nyquist pulse h(t) = sinc(t/T) has zero crossings at nonzero multiples of T.]
W = 1/(2T)
Lecture 6 98
Nyquist pulses (filters)
Nyquist pulses (filters):
Pulses (filters) which result in no ISI at the
sampling time.
Nyquist filter:
Its transfer function in frequency domain is obtained
by convolving a rectangular function with any real
even-symmetric frequency function
Nyquist pulse:
Its shape can be represented by a sinc(t/T) function
multiplied by another time function.
Example of Nyquist filters: Raised-Cosine filter
Lecture 6 99
Pulse shaping to reduce ISI
Goals and trade-off in pulse-shaping
Reduce ISI
Efficient bandwidth utilization
Robustness to timing error (small side lobes)
Lecture 6 100
The raised cosine filter
Raised-Cosine Filter
A Nyquist pulse (No ISI at the sampling time)
H(f) = 1                                          for |f| < 2 W0 - W
H(f) = cos^2( (pi/4) (|f| + W - 2 W0)/(W - W0) )  for 2 W0 - W < |f| < W
H(f) = 0                                          for |f| > W
h(t) = 2 W0 sinc(2 W0 t) cos[2 pi (W - W0) t] / (1 - [4 (W - W0) t]^2)
Excess bandwidth: W - W0
Roll-off factor: r = (W - W0)/W0,  0 <= r <= 1
Lecture 6 101
The Raised cosine filter – cont’d
[Figure: |H_RC(f)| and h_RC(t) for roll-off factors r = 0, r = 0.5 and r = 1; larger r widens the spectrum and damps the time-domain tails.]
Baseband (SSB): W = (1/2)(1 + r) Rs
Passband (DSB): W = (1 + r) Rs
Lecture 6 102
Pulse shaping and equalization to
remove ISI
No ISI at the sampling time:
H_RC(f) = Ht(f) Hc(f) Hr(f) He(f)
He(f) = 1/Hc(f): the equalizer takes care of the ISI
caused by the channel.
Lecture 6 103
Example of pulse shaping
Square-root Raised-Cosine (SRRC) pulse shaping
Amp. [V]
Third pulse
t/T
First pulse
Second pulse
Data symbol
Lecture 6 104
Example of pulse shaping …
Raised Cosine pulse at the output of matched filter
Amp. [V]
t/T
Lecture 6 105
Eye pattern
Eye pattern: Display on an oscilloscope which
sweeps the system response to a baseband signal at
the rate 1/T (T: symbol duration)
[Figure: on the amplitude scale the eye shows the distortion due to ISI and the noise margin; on the time scale it shows the sensitivity to timing error and the timing jitter.]
Lecture 6 106
Example of eye pattern:
Binary-PAM, SRRC pulse
Perfect channel (no noise and no ISI)
Lecture 6 107
Example of eye pattern:
Binary-PAM, SRRC pulse …
AWGN (Eb/N0=20 dB) and no ISI
Lecture 6 108
Example of eye pattern:
Binary-PAM, SRRC pulse …
AWGN (Eb/N0=10 dB) and no ISI
Lecture 6 109
Equalization – cont’d
[Block diagram: r(t) -> frequency down-conversion -> receiving filter -> equalizing filter -> sample z(T) -> threshold comparison -> m^i.]
Lecture 6 110
Equalization
ISI due to filtering effect of the communications
channel (e.g. wireless channels)
Channels behave like band-limited filters
Hc(f) = |Hc(f)| exp(j theta_c(f))
Lecture 6 111
Equalization: Channel examples
Example of a frequency selective, slowly changing (slow fading)
channel for a user at 35 km/h
Lecture 6 112
Equalization: Channel examples …
Example of a frequency selective, fast changing (fast fading)
channel for a user at 35 km/h
Lecture 6 113
Example of eye pattern with ISI:
Binary-PAM, SRRC pulse
Non-ideal channel and no noise
hc(t) = delta(t) + 0.7 delta(t - T)
Lecture 6 114
Example of eye pattern with ISI:
Binary-PAM, SRRC pulse …
AWGN (Eb/N0=20 dB) and ISI
hc(t) = delta(t) + 0.7 delta(t - T)
Lecture 6 115
Example of eye pattern with ISI:
Binary-PAM, SRRC pulse …
AWGN (Eb/N0=10 dB) and ISI
hc(t) = delta(t) + 0.7 delta(t - T)
Lecture 6 116
Equalizing filters …
Baseband system model
[Figure: impulses sum of ak delta(t - kT) -> Tx filter ht(t), Ht(f) -> channel hc(t), Hc(f) with noise n(t) -> Rx filter hr(t), Hr(f) -> equalizer he(t), He(f) -> z(t), sampled at t = kT to give zk -> Detector -> a^k.]
Equivalent model: x(t) passes through the equivalent system h(t), with
H(f) = Ht(f) Hc(f) Hr(f)
plus filtered noise n^(t) = n(t) * hr(t), and then through the equalizer he(t).
Lecture 6 117
Equalization – cont’d
Equalization using
MLSE (Maximum likelihood sequence estimation)
Filtering – See notes on
z-Transform and Digital Filters
Transversal filtering
Zero-forcing equalizer
Decision feedback
Using the past decisions to remove the ISI contributed
by them
Adaptive equalizer
Lecture 6 118
Equalization by transversal filtering
Transversal filter:
A weighted tapped delay line that reduces the effect
of ISI by proper adjustment of the filter taps:
z(t) = sum over n = -N..N of cn x(t - n tau),  n = -N,...,N,  k = -2N,...,2N
[Figure: delay line with taps c(-N), ..., c(N) and a coefficient-adjustment loop producing z(t).]
Lecture 6 119
Transversal equalizing filter …
Zero-forcing equalizer:
The filter taps are adjusted such that the equalizer output
is forced to be zero at N sample points on each side of the peak:
Adjust {cn} so that z(k) = 1 for k = 0 and z(k) = 0 for k = +-1, ..., +-N
Minimum mean-square-error (MSE) equalizer:
Adjust {cn}, n = -N,...,N, to minimize E[(z(kT) - ak)^2]
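A minimal sketch of the zero-forcing adjustment above for a 3-tap equalizer (N = 1): the taps solve a small linear system so that the equalized pulse is 1 at k = 0 and 0 at one sample on each side. The channel-pulse samples are an illustrative assumption.

```python
import numpy as np

x = {-2: 0.0, -1: 0.2, 0: 1.0, 1: 0.3, 2: 0.0}    # sampled channel pulse with ISI
N = 1

# z(k) = sum_n c_n x(k - n); build the system for k, n in {-1, 0, 1}
A = np.array([[x[k - n] for n in range(-N, N + 1)] for k in range(-N, N + 1)])
b = np.array([0.0, 1.0, 0.0])                      # force z(-1)=0, z(0)=1, z(1)=0
c = np.linalg.solve(A, b)                          # zero-forcing taps

z = A @ c                                          # equalized samples at k = -1, 0, 1
forced = np.allclose(z, b, atol=1e-12)
```

Note that zero forcing only controls 2N+1 sample points; residual ISI can remain outside that window, which is why the MSE criterion is often preferred in noise.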
Lecture 6 120
Example of equalizer
2-PAM with SRRC pulse shaping; matched filter outputs at the sampling time
Non-ideal channel:
hc(t) = delta(t) + 0.3 delta(t - T)
One-tap DFE
ISI-no noise,
No equalizer
ISI-no noise,
DFE equalizer
ISI- noise
No equalizer
ISI- noise
DFE equalizer
Lecture 6 121
Block diagram of a DCS
Digital modulation
Channel
Digital demodulation
Lecture 7 122
Bandpass modulation
Bandpass modulation: The process of converting a
data signal to a sinusoidal waveform whose
amplitude, phase or frequency, or a combination of
them, is varied in accordance with the data to be
transmitted.
Bandpass signal:
si(t) = gT(t) sqrt(2 Ei / T) cos(wc t + phi_i(t)),  0 <= t <= T,  i = 1,...,M
where gT (t ) is the baseband pulse shape with energy E g.
We assume here (otherwise will be stated):
g (t ) is a rectangular pulse shape with unit energy.
T
Gray coding is used for mapping bits to symbols.
Es denotes the average symbol energy, given by Es = (1/M) sum over i = 1..M of Ei
Lecture 7 123
Demodulation and detection
Demodulation: The received signal is converted to
baseband, filtered and sampled.
Detection: Sampled values are used for detection
using a decision rule such as the ML detection rule.
[Figure: r(t) enters a bank of N correlators with psi1(t), ..., psiN(t); the outputs z1, ..., zN form z, which feeds the decision circuits (ML detector) producing m^.]
Lecture 7 124
Coherent detection
Coherent detection
requires carrier phase recovery at the receiver and
hence, circuits to perform phase estimation.
Sources of carrier-phase mismatch at the receiver:
Propagation delay causes carrier-phase offset in the
received signal.
The oscillators at the receiver which generate the carrier
signal, are not usually phased locked to the transmitted
carrier.
Lecture 7 125
Coherent detection ..
Circuits such as a Phase-Locked Loop (PLL) are
implemented at the receiver for carrier-phase
estimation (theta^):
[Figure: r(t) = gT(t) sqrt(2 Ei / T) cos(wc t + phi_i + theta) + n(t) is split into I and Q branches. A PLL-locked oscillator supplies sqrt(2/T) cos(wc t + theta^) to the I branch and, via a 90 deg shift, sqrt(2/T) sin(wc t + theta^) to the Q branch; both references are used by the correlators.]
Lecture 7 126
Bandpass Modulation Schemes
Lecture 7 127
One dimensional modulation,
demodulation and detection
Amplitude Shift Keying (ASK) modulation:
si(t) = sqrt(2 Ei / T) cos(wc t) = ai psi1(t),  i = 1,...,M
psi1(t) = sqrt(2/T) cos(wc t),  ai = sqrt(Ei)
On-off keying (M = 2): "0" at s2 = 0 and "1" at s1 = sqrt(E1) on the psi1(t) axis.
Lecture 7 128
One dimensional mod.,…
M-ary Pulse Amplitude modulation (M-PAM)
st)
i( a
i
2
cos
ct
T
4-PAM:
si (t ) ai 1 (t ) i 1, , M “00” “01” “11” “10”
s1 s2 s3 s4
cosc t
2 1 (t )
1 (t ) 3 Eg Eg Eg 3 Eg
T 0
ai ( 2i 1 M ) E g
Ei s i E g 2i 1 M
2 2
( M 2 1)
Es Eg
3
Lecture 7 129
Example of bandpass modulation:
Binary PAM
Lecture 7 130
One dimensional mod.,...–cont’d
Coherent detection of M-PAM
[Figure: a single correlator with psi1(t) over [0, T] produces z1; the ML detector compares z1 with M - 1 thresholds to output m^.]
Lecture 7 131
Two dimensional modulation,
demodulation and detection (M-PSK)
M-ary Phase Shift Keying (M-PSK)
si(t) = sqrt(2 Es / T) cos(wc t + 2 pi i / M),  i = 1,...,M
si(t) = ai1 psi1(t) + ai2 psi2(t)
psi1(t) = sqrt(2/T) cos(wc t),  psi2(t) = sqrt(2/T) sin(wc t)
ai1 = sqrt(Es) cos(2 pi i / M),  ai2 = sqrt(Es) sin(2 pi i / M)
Ei = ||si||^2 = Es
Lecture 7 132
Two dimensional mod.,… (MPSK)
[Figure: Gray-coded constellations — BPSK (M = 2) with points +-sqrt(Eb) on psi1(t); QPSK (M = 4) and 8-PSK (M = 8) with points on a circle of radius sqrt(Es).]
Lecture 7 133
Two dimensional mod.,…(MPSK)
Coherent detection of M-PSK
[Figure: correlators with psi1(t) and psi2(t) produce z1 and z2; compute theta^ = arctan(z2/z1), then choose the symbol whose phase phi_i gives the smallest |phi_i - theta^| to output m^.]
Lecture 7 134
Two dimensional mod.,… (M-QAM)
M-ary Quadrature Amplitude Modulation (M-QAM)
si(t) = sqrt(2 Ei / T) cos(wc t + phi_i) = ai1 psi1(t) + ai2 psi2(t),  i = 1,...,M
psi1(t) = sqrt(2/T) cos(wc t),  psi2(t) = sqrt(2/T) sin(wc t)
where ai1 and ai2 are PAM symbols and Es = (2 (M - 1)/3) Eg
[Figure: square M-QAM lattice with coordinates (ai1, ai2) drawn from {+-1, +-3, ..., +-(sqrt(M) - 1)} in each dimension.]
Lecture 7 135
Two dimensional mod.,… (M-QAM)
16-QAM
[Figure: Gray-coded 16-QAM constellation with coordinates in {+-1, +-3}^2; the top row "0000", "0001", "0011", "0010" sits at ai2 = 3 and the bottom row "0100", "0101", "0111", "0110" at ai2 = -3.]
Lecture 7 136
Two dimensional mod.,… (M-QAM)
[Figure: I and Q correlators with psi1(t) and psi2(t) produce z1 and z2; each feeds an ML detector that compares against sqrt(M) - 1 thresholds; a parallel-to-serial converter outputs m^.]
Lecture 7 137
Multi-dimensional modulation, demodulation &
detection
M-ary Frequency Shift Keying (M-FSK)
si(t) = sqrt(2 Es / T) cos(wi t),  i = 1,...,M, with tone spacing delta-f = 1/(2T)
si(t) = sum over j = 1..M of aij psi_j(t),  psi_i(t) = sqrt(2/T) cos(wi t)
aij = sqrt(Es) for i = j and 0 otherwise;  Ei = ||si||^2 = Es
[Figure: for M = 3, the signals s1, s2, s3 sit at distance sqrt(Es) along the mutually orthogonal axes psi1, psi2, psi3.]
Lecture 7 138
Multi-dimensional mod.,…(M-FSK)
[Figure: bank of M correlators with psi1(t), ..., psiM(t) forms z; the ML detector chooses the largest element of the observed vector to output m^.]
Lecture 7 139
Non-coherent detection
Non-coherent detection:
No need for a reference in phase with the received
carrier
Less complexity compared to coherent detection at
the price of higher error rate.
Lecture 7 140
Non-coherent detection …
Differential coherent detection
Differential encoding of the message
The symbol phase changes if the current bit is different
from the previous bit.

s_i(t) = sqrt(2E/T) cos(w_0 t + phi_i(t)),   0 <= t <= T,  i = 1, ..., M

theta(kT) = theta((k−1)T) + phi_i(kT)

Example (DBPSK, phase 0 for bit 1 and pi for bit 0):

Symbol index k:      0  1  2  3  4  5  6  7
Data bits m_k:          1  1  0  1  0  1  1
Diff. encoded bits:  1  1  1  0  0  1  1  1
Symbol phase:        0  0  0  pi pi 0  0  0

[Figure: DBPSK constellation — s2 ("0") and s1 ("1") on psi1]
Lecture 7 141
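The encoding table above can be reproduced with a short sketch. It assumes one common convention, consistent with the table: the new encoded bit is the XNOR of the previous encoded bit and the data bit (the function name and reference bit are mine, not from the slides):

```python
def diff_encode(bits, reference=1):
    """Differentially encode a bit list (DBPSK convention: bit 1 -> phase 0,
    bit 0 -> phase pi). The first output is the reference bit d_0."""
    encoded = [reference]
    for m in bits:
        # XNOR of the previous encoded bit and the current data bit
        encoded.append(1 - (encoded[-1] ^ m))
    return encoded

data = [1, 1, 0, 1, 0, 1, 1]
print(diff_encode(data))  # [1, 1, 1, 0, 0, 1, 1, 1], matching the table
```

A receiver can recover the data by comparing consecutive encoded bits, which is what makes the scheme insensitive to a constant carrier-phase offset.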
Non-coherent detection …
Coherent detection for diff encoded mod.
assumes slow variation in carrier-phase mismatch during two symbol
intervals.
correlates the received signal with basis functions
uses the phase difference between the current received vector and
previously estimated symbol
r(t) = sqrt(2E/T) cos(w_0 t + theta + phi_i(nT)) + n(t),   (n−1)T <= t <= nT

The decision is based on the phase difference

[phi_i(nT) + theta] − [phi_j((n−1)T) + theta] = phi_i(nT) − phi_j((n−1)T)

so the unknown carrier phase theta cancels as long as it is constant over
the two symbol intervals.

[Figure: signal space showing the current received vector (a2, b2), the previous vector (a1, b1), and the phase difference between them]

Lecture 7 142
Non-coherent detection …
Optimum differentially coherent detector
[Figure: optimum differentially coherent detector — correlator with psi1(t), integrated over (0, T); the output is combined with its T-delayed version, and a decision stage yields m̂]

Sub-optimum differentially coherent detector

[Figure: sub-optimum detector — the received signal is multiplied directly by its T-delayed version, integrated over (0, T), and a decision stage yields m̂]
Lecture 7 143
Non-coherent detection …
Energy detection
Non-coherent detection for orthogonal signals (e.g. M-
FSK)
Lecture 7 144
Non-coherent detection …
Non-coherent detection of BFSK
[Figure: quadrature receiver for non-coherent BFSK — for each tone w_i, correlators with sqrt(2/T)cos(w_i t) and sqrt(2/T)sin(w_i t) produce z_i1 and z_i2; the envelopes z_i = sqrt(z_i1^2 + z_i2^2) are formed and subtracted to give z(T) = z_1 − z_2]

Decision stage:
if z(T) > 0, m̂ = 1
if z(T) < 0, m̂ = 0
Lecture 7 145
Example of two dim. modulation
[Figure: example two-dimensional constellations.
16-QAM: 4×4 grid at coordinates (±1, ±3) × (±1, ±3), Gray labels "0000" through "1110".
8PSK: eight points at radius sqrt(E_s), Gray labels "000" through "101".
QPSK: four points at radius sqrt(E_s), Gray labels "00", "01", "11", "10".]
Lecture 8 147
Error probability of bandpass modulation
[Figure: bank of N correlators with psi1(t), ..., psi_N(t), each integrated over (0, T), produces the observation vector r = (r1, ..., r_N); decision circuits compare z with thresholds to form m̂]
Lecture 8 148
Error probability …
The matched filter outputs (observation vector r) form the
detector input, and the decision variable is a
function of r, i.e. z = f(r).
For M-PAM, M-QAM and M-FSK with coherent detection, z = r.
For M-PSK with coherent detection, z = the phase of r.

Pr(correct detection | s_i sent)
  = Pr(r lies inside Z_i | s_i sent)
  = Pr(z satisfies condition C | s_i sent)
Hence, we need to know the statistics of z, which
depends on the modulation scheme and the
detection type.
Lecture 8 149
Error probability …
AWGN channel model: r = s_i + n
The signal vector s_i = (a_i1, a_i2, ..., a_iN) is deterministic.
The elements of the noise vector n = (n_1, n_2, ..., n_N) are i.i.d.
Gaussian random variables with zero mean and
variance N_0/2. The noise vector's pdf is

p_n(n) = (1/(pi*N_0))^(N/2) exp(−||n||^2 / N_0)

The elements of the observed vector r = (r_1, r_2, ..., r_N) are
independent Gaussian random variables. Its pdf is

p_r(r | s_i) = (1/(pi*N_0))^(N/2) exp(−||r − s_i||^2 / N_0)
Lecture 8 150
Error probability …
[Figure: BPSK constellation — antipodal points s1 ("1") and s2 ("0") at ±sqrt(E_b) on psi1, separated by d = 2 sqrt(E_b)]

P_B = Q( sqrt(2 E_b / N_0) )    (antipodal BPSK)
P_B = Q( sqrt(E_b / N_0) )      (orthogonal BFSK, coherent detection)
Lecture 8 151
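The BPSK expression above is easy to evaluate numerically, since Q(x) = 0.5*erfc(x/sqrt(2)) is available via the standard library. A minimal sketch (function names are mine):

```python
import math

def qfunc(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_pb(ebn0_db):
    """BPSK bit error probability P_B = Q(sqrt(2*Eb/N0))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)   # dB -> linear
    return qfunc(math.sqrt(2.0 * ebn0))

print(bpsk_pb(9.6))  # roughly 1e-5, the classic BPSK benchmark point
```

The same `qfunc` helper applies to all the coherent-detection error formulas later in these slides.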
Error probability …
Non-coherent detection of BFSK
Decision variable: difference of envelopes,

z = z_1 − z_2,   with z_i = sqrt(r_i1^2 + r_i2^2)

[Figure: quadrature receiver for non-coherent BFSK — in-phase and quadrature correlators for each tone produce r_i1 and r_i2; the envelopes are formed and subtracted to give z]

Decision rule:
if z > 0, m̂ = 1
if z < 0, m̂ = 0
Lecture 8 152
Error probability – cont’d
Non-coherent detection of BFSK …
P_B = (1/2) Pr(z_1 > z_2 | s_2) + (1/2) Pr(z_2 > z_1 | s_1) = Pr(z_2 > z_1 | s_1)

Pr(z_2 > z_1 | s_1) = E_{z_1}[ Pr(z_2 > z_1 | s_1, z_1) ]

= Int_0^inf [ Int_{z_1}^inf p(z_2 | s_1) dz_2 ] p(z_1 | s_1) dz_1

where z_2 has a Rayleigh pdf and z_1 a Rician pdf. Evaluating the
integral gives

P_B = (1/2) exp( −E_b / (2 N_0) )

Similarly, for non-coherent detection of DBPSK:

P_B = (1/2) exp( −E_b / N_0 )
Lecture 8 153
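The two closed-form results above can be compared directly; a quick sketch (helper names are mine) shows that DBPSK needs 3 dB less E_b/N_0 than non-coherent BFSK for the same error rate:

```python
import math

def pb_ncfsk(ebn0):
    """Non-coherent BFSK: P_B = 0.5 * exp(-Eb/(2*N0)), Eb/N0 given linear."""
    return 0.5 * math.exp(-ebn0 / 2.0)

def pb_dbpsk(ebn0):
    """Differentially coherent DBPSK: P_B = 0.5 * exp(-Eb/N0)."""
    return 0.5 * math.exp(-ebn0)

# Halving the DBPSK Eb/N0 (a 3 dB reduction) matches the BFSK error rate.
print(pb_ncfsk(10.0), pb_dbpsk(5.0))
```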
Error probability ….
Coherent detection of M-PAM
Decision variable: z = r_1

[Figure: 4-PAM constellation — s1 "00" at −3 sqrt(E_g), s2 "01" at −sqrt(E_g), s3 "11" at sqrt(E_g), s4 "10" at 3 sqrt(E_g) on psi1]

[Figure: coherent M-PAM receiver — correlator with psi1(t), integrated over (0, T), followed by an ML detector comparing r_1 with M−1 thresholds to form m̂]
Lecture 8 154
Error probability ….
Coherent detection of M-PAM ….
An error happens if the noise n_1 = r_1 − s_m exceeds in amplitude
one-half of the distance between adjacent symbols. For symbols
on the border, an error can happen only in one direction. Hence:

P(e | s_m) = Pr( |n_1| > sqrt(E_g) )   for 2 <= m <= M−1

P(e | s_1) = P(e | s_M) = Pr( n_1 > sqrt(E_g) )

P_E(M) = (1/M) sum_{m=1}^{M} P(e | s_m) = (2(M−1)/M) Pr( n_1 > sqrt(E_g) )

= (2(M−1)/M) Int_{sqrt(E_g)}^inf p(n_1) dn_1 = (2(M−1)/M) Q( sqrt(2 E_g / N_0) )

where n_1 is Gaussian with zero mean and variance N_0/2.
With E_s = (log_2 M) E_b and E_g = 3 log_2 M * E_b / (M^2 − 1):

P_E(M) = (2(M−1)/M) Q( sqrt( (6 log_2 M / (M^2 − 1)) * E_b / N_0 ) )
Lecture 8 155
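As a numeric sanity check on the final M-PAM expression above, a small sketch (helper names are mine, not from the slides). For M = 2 the formula collapses to the BPSK result Q(sqrt(2*Eb/N0)):

```python
import math

def qfunc(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pam_ser(M, ebn0):
    """Coherent M-PAM symbol error probability, Eb/N0 given linear."""
    k = math.log2(M)
    arg = math.sqrt(6.0 * k / (M * M - 1.0) * ebn0)
    return 2.0 * (M - 1.0) / M * qfunc(arg)

# M = 2 reduces to BPSK; larger M costs error rate at the same Eb/N0.
print(pam_ser(2, 4.0), qfunc(math.sqrt(8.0)))
print(pam_ser(4, 4.0))
```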
Error probability …
Coherent detection
[Figure: 16-QAM constellation — points s1...s16, Gray labels "0000" through "1110"]

[Figure: coherent M-QAM receiver — I and Q correlators with psi1(t) and psi2(t) produce r_1 and r_2; each feeds an ML detector comparing with sqrt(M)−1 thresholds; a parallel-to-serial converter forms m̂]
Lecture 8 156
Error probability …
Coherent detection of M-QAM …
M-QAM can be viewed as the combination of two sqrt(M)-ary PAM
modulations on the I and Q branches, respectively.
A symbol is correct only if no error occurs on either the I or the Q
branch.
Considering the symmetry of the signal space and the
orthogonality of the I and Q branches:

P_E(M) = 1 − P_C(M) = 1 − Pr(no error detected on the I and Q branches)

Pr(no error on I and Q) = Pr(no error on I) * Pr(no error on Q)
                        = [Pr(no error on I)]^2

Pr(no error on I) = 1 − P_E(sqrt(M))
(average probability of symbol error for sqrt(M)-PAM)

P_E(M) ≈ 4 (1 − 1/sqrt(M)) Q( sqrt( (3 log_2 M / (M − 1)) * E_b / N_0 ) )
Lecture 8 157
Error probability …
Coherent detection
of M-PSK

[Figure: 8-PSK constellation — points s1...s8 at radius sqrt(E_s), Gray labels "000", "001", "011", "010", "110", "111", "101", "100"]

[Figure: coherent M-PSK receiver — correlators with psi1(t) and psi2(t) produce r_1 and r_2; compute the phase estimate = arctan(r_2/r_1) and choose the symbol phase phi_i with the smallest |phi_i − estimate|]

Decision variable: z = the phase of r
Lecture 8 158
Error probability …
Coherent detection of MPSK …
The detector compares the phase of the observation vector with M−1
thresholds.
Due to the circular symmetry of the signal space, we have:

P_E(M) = 1 − P_C(M) = 1 − (1/M) sum_{m=1}^{M} P(c | s_m) = 1 − P(c | s_1)

= 1 − Int_{−pi/M}^{pi/M} p(psi) dpsi

where, for large E_s/N_0,

p(psi) ≈ sqrt(E_s/(pi*N_0)) cos(psi) exp( −(E_s/N_0) sin^2(psi) ),  |psi| <= pi/2

It can be shown that

P_E(M) ≈ 2 Q( sqrt(2 E_s / N_0) sin(pi/M) )

or

P_E(M) ≈ 2 Q( sqrt(2 log_2 M * E_b / N_0) sin(pi/M) )
Lecture 8 159
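The high-SNR M-PSK approximation above can be sketched in a few lines (helper names are mine). It makes the qualitative point explicit: larger M packs the phases closer together, so the error rate grows at a fixed E_b/N_0:

```python
import math

def qfunc(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mpsk_ser(M, ebn0):
    """High-SNR approximation P_E ~ 2 Q(sqrt(2*log2(M)*Eb/N0) * sin(pi/M))."""
    k = math.log2(M)
    return 2.0 * qfunc(math.sqrt(2.0 * k * ebn0) * math.sin(math.pi / M))

# At Eb/N0 = 10 (linear, i.e. 10 dB), 8-PSK is noticeably worse than QPSK.
print(mpsk_ser(4, 10.0), mpsk_ser(8, 10.0))
```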
Error probability …
Coherent detection of M-FSK
[Figure: coherent M-FSK receiver — a bank of M correlators produces r = (r1, ..., r_M); the ML detector chooses the largest element in the observed vector to form m̂]
Lecture 8 160
Error probability …
Coherent detection of M-FSK …
The dimension of the signal space is M. An upper
bound for the average symbol error probability can be
obtained by using the union bound. Hence:

P_E(M) <= (M − 1) Q( sqrt(E_s / N_0) )

or, equivalently,

P_E(M) <= (M − 1) Q( sqrt(log_2 M * E_b / N_0) )
Lecture 8 161
Bit error probability versus symbol error
probability
Number of bits per symbol: k = log_2 M
For orthogonal M-ary signaling (M-FSK):

P_B / P_E = 2^(k−1) / (2^k − 1) = (M/2) / (M − 1)

lim_{k→inf} P_B / P_E = 1/2

For Gray-coded non-orthogonal signaling (e.g. M-PSK):

P_B ≈ P_E / k   for P_E << 1
Lecture 8 162
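The orthogonal-signaling ratio above is easy to tabulate; a minimal sketch (the function name is mine) shows it starting at 1 for k = 1 and approaching 1/2 as k grows:

```python
def pb_over_pe_orthogonal(k):
    """P_B / P_E for orthogonal M-ary signaling (M-FSK), with M = 2**k."""
    M = 2 ** k
    return (M / 2) / (M - 1)

# For Gray-coded M-PSK one would instead use P_B ~ P_E / k.
for k in (1, 2, 5, 10):
    print(k, pb_over_pe_orthogonal(k))
```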
Probability of symbol error for binary
modulation
Note!
• "The same average symbol energy for different sizes of signal space"

[Plot: P_E versus E_b/N_0 (dB) for binary modulation schemes]
Lecture 8 163
Probability of symbol error for M-PSK
Note!
• "The same average symbol energy for different sizes of signal space"

[Plot: P_E versus E_b/N_0 (dB) for M-PSK]
Lecture 8 164
Probability of symbol error for M-FSK
Note!
• "The same average symbol energy for different sizes of signal space"

[Plot: P_E versus E_b/N_0 (dB) for M-FSK]
Lecture 8 165
Probability of symbol error for M-PAM
Note!
• "The same average symbol energy for different sizes of signal space"

[Plot: P_E versus E_b/N_0 (dB) for M-PAM]
Lecture 8 166
Probability of symbol error for M-QAM
Note!
• "The same average symbol energy for different sizes of signal space"

[Plot: P_E versus E_b/N_0 (dB) for M-QAM]
Lecture 8 167
Example of samples of matched filter output
for some bandpass modulation schemes
Lecture 8 168
Block diagram of a DCS
Digital modulation
Channel
Digital demodulation
Lecture 9 169
What is channel coding?
Channel coding:
Transforming signals to improve communications
performance by increasing the robustness against
channel impairments (noise, interference, fading, ...)
Waveform coding: Transforming waveforms to better
waveforms
Structured sequences: Transforming data sequences into
better sequences, having structured redundancy.
-“Better” in the sense of making the decision process less
subject to errors.
Lecture 9 170
Error control techniques
Automatic Repeat reQuest (ARQ)
Full-duplex connection, error detection codes
The receiver sends feedback to the transmitter
indicating whether an error is detected in the received
packet (Negative Acknowledgement (NACK)) or not
(Acknowledgement (ACK)).
The transmitter retransmits the previously sent
packet if it receives NACK.
Forward Error Correction (FEC)
Simplex connection, error correction codes
The receiver tries to correct some errors
Hybrid ARQ (ARQ+FEC)
Full-duplex, error detection and correction codes
Lecture 9 171
Why use error correction coding?
Error performance vs. bandwidth
Power vs. bandwidth
Data rate vs. bandwidth

[Plot: P_B versus E_b/N_0 — coded and uncoded curves; the coding gain is the reduction in required E_b/N_0 at a given P_B]
Lecture 9 172
Channel models
Discrete memory-less channels
Discrete input, discrete output
Binary Symmetric channels
Binary input, binary output
Gaussian channels
Discrete input, continuous output
Lecture 9 173
Linear block codes
Lecture 9 174
Some definitions
Binary field :
The set {0,1}, under modulo 2 binary addition and
multiplication forms a field.
Addition:        Multiplication:
0 + 0 = 0        0 . 0 = 0
0 + 1 = 1        0 . 1 = 0
1 + 0 = 1        1 . 0 = 0
1 + 1 = 0        1 . 1 = 1

The binary field is also called the Galois field, GF(2).
Lecture 9 175
Some definitions…
Fields :
Let F be a set of objects on which two operations ‘+’
and ‘.’ are defined.
F is said to be a field if and only if
1. F forms a commutative group under the + operation. The
additive identity element is labeled "0".
   For all a, b in F:  a + b = b + a, and a + b is in F.
2. F − {0} forms a commutative group under the . operation. The
multiplicative identity element is labeled "1".
   For all a, b in F:  a . b = b . a, and a . b is in F.
3. The . operation distributes over +:
   a . (b + c) = (a . b) + (a . c)
Lecture 9 176
Some definitions…
Vector space:
Let V be a set of vectors and F a fields of
elements called scalars. V forms a vector space
over F if:
1. Commutative: for all u, v in V:  u + v = v + u, and u + v is in V.
2. For all a in F and v in V:  a . v is in V.
3. Distributive: (a + b) . v = a . v + b . v  and  a . (u + v) = a . u + a . v.
4. Associative: for all a, b in F and v in V:  (a . b) . v = a . (b . v).
5. For all v in V:  1 . v = v.
Lecture 9 177
Some definitions…
Examples of vector spaces
The set of binary n-tuples, denoted by V_n. For example:

V_4 = {(0000), (0001), (0010), (0011), (0100), (0101), (0110), (0111),
       (1000), (1001), (1010), (1011), (1100), (1101), (1110), (1111)}

Vector subspace:
A subset S of the vector space V_n is called a
subspace if:
The all-zero vector is in S.
The sum of any two vectors in S is also in S.
Example:
{(0000), (0101), (1010), (1111)} is a subspace of V_4.
Lecture 9 178
Some definitions…
Spanning set:
A collection of vectors G = {v_1, v_2, ..., v_n} is said to
be a spanning set for V, or to span V, if
linear combinations of the vectors in G include all
vectors in the vector space V.
Example:
{(1000), (0110), (1100), (0011), (1001)} spans V_4.

Bases:
A spanning set for V that has minimal cardinality is
called a basis for V.
Cardinality of a set is the number of objects in the set.
Example:
{(1000), (0100), (0010), (0001)} is a basis for V_4.
Lecture 9 179
Linear block codes
A linear block code is a mapping V_k → C, where C is a k-dimensional subspace of V_n.
Members of C are called code-words.
The all-zero codeword is a codeword.
Any linear combination of code-words is a codeword.
Lecture 9 180
Linear block codes – cont’d
[Figure: mapping from V_k into V_n — the image C ⊂ V_n is spanned by the bases of C]
Lecture 9 181
Linear block codes – cont’d
Channel
Data block (k bits) → Channel encoder → Codeword (n bits)

n − k redundant bits
Code rate: R_c = k/n
Lecture 9 182
Linear block codes – cont’d
The Hamming weight of the vector U,
denoted by w(U), is the number of non-zero
elements in U.
The Hamming distance between two vectors
U and V, is the number of elements in which
they differ.
d(U, V) = w(U + V)   (modulo-2 addition)
Lecture 9 183
Linear block codes – cont’d
Error-detection capability: e = d_min − 1
Error-correction capability: t = floor( (d_min − 1)/2 )
Lecture 9 184
Linear block codes – cont’d
P_B ≈ (1/n) sum_{j=t+1}^{n} j * C(n, j) * p^j * (1 − p)^(n−j)
Lecture 9 185
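The bound above is a one-line sum; a small sketch (the function name and the example parameters are mine) evaluates it for a t = 1 code over a BSC:

```python
from math import comb

def block_code_pb(n, t, p):
    """Bit error probability bound:
    (1/n) * sum_{j=t+1}^{n} j * C(n, j) * p^j * (1-p)^(n-j)."""
    return sum(j * comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(t + 1, n + 1)) / n

# Example: a (7,4) single-error-correcting code over a BSC with p = 0.01.
print(block_code_pb(7, 1, 0.01))
```

The result is well below the raw crossover probability p, which is the whole point of the code.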
Linear block codes – cont’d
Discrete, memoryless, symmetric channel model:

Tx. bit 1 → Rx. bit 1 with probability 1−p, Rx. bit 0 with probability p
Tx. bit 0 → Rx. bit 0 with probability 1−p, Rx. bit 1 with probability p

p is the transition (crossover) probability, determined by the modulation,
the detection and the channel quality. For example, for M-PSK modulation
on AWGN channels (M > 2):

p ≈ (2 / log_2 M) Q( sqrt(2 log_2 M * E_c / N_0) sin(pi/M) )

where E_c = R_c E_b is the energy per coded bit.
Lecture 9 186
Linear block codes –cont’d
[Figure: mapping from V_k into V_n — the image C ⊂ V_n is spanned by the bases of C]

A matrix G is constructed by taking as its rows the
vectors of the basis {V_1, V_2, ..., V_k}:

    [ V_1 ]   [ v_11  v_12  ...  v_1n ]
G = [ V_2 ] = [ v_21  v_22  ...  v_2n ]
    [ ... ]   [  ...   ...   ...  ... ]
    [ V_k ]   [ v_k1  v_k2  ...  v_kn ]
Lecture 9 187
Linear block codes – cont’d
Encoding in (n,k) block code
U = mG

(u_1, u_2, ..., u_n) = (m_1, m_2, ..., m_k) G
                     = m_1 V_1 + m_2 V_2 + ... + m_k V_k
Lecture 9 188
Linear block codes – cont’d
Example: Block code (6,3)
    [ V_1 ]   [ 1 1 0 1 0 0 ]
G = [ V_2 ] = [ 0 1 1 0 1 0 ]
    [ V_3 ]   [ 1 0 1 0 0 1 ]

Message   Codeword
000       000000
100       110100
010       011010
110       101110
001       101001
101       011101
011       110011
111       000111
Lecture 9 189
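The table above follows directly from U = mG over GF(2): XOR together the rows of G selected by the message bits. A minimal sketch (the function name is mine):

```python
def encode(m, G):
    """U = mG over GF(2): XOR the rows of G selected by the message bits."""
    U = [0] * len(G[0])
    for bit, row in zip(m, G):
        if bit:
            U = [u ^ r for u, r in zip(U, row)]
    return U

G = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

print(encode([1, 1, 0], G))  # [1, 0, 1, 1, 1, 0], matching the table
```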
Linear block codes – cont’d
Systematic block code (n,k)
For a systematic code, the first (or last) k elements in
the codeword are information bits.

G = [ P | I_k ]

I_k: k×k identity matrix
P: k×(n−k) matrix

U = (u_1, u_2, ..., u_n) = (p_1, p_2, ..., p_{n−k}, m_1, m_2, ..., m_k)
                            parity bits           message bits
Lecture 9 190
Linear block codes – cont’d
The parity check matrix: H = [ I_{n−k} | P^T ], which satisfies G H^T = 0.
Lecture 9 191
Linear block codes – cont’d
r = U + e

r = (r_1, r_2, ..., r_n): received codeword or vector
e = (e_1, e_2, ..., e_n): error pattern or vector

Syndrome testing:

S = r H^T = (U + e) H^T = e H^T
Lecture 9 192
Linear block codes – cont’d
Standard array
For row i = 2, 3, ..., 2^(n−k), find a vector in V_n of
minimum weight that is not already listed in the array.
Call this pattern e_i and form the i-th row as the
corresponding coset:

U_1 (= 0)      U_2              ...   U_{2^k}
e_2            e_2 + U_2        ...   e_2 + U_{2^k}
...
e_{2^(n−k)}    e_{2^(n−k)} + U_2 ... e_{2^(n−k)} + U_{2^k}

(first column: coset leaders)
Lecture 9 193
Linear block codes – cont’d
Note that

Û = r + ê = (U + e) + ê = U + (e + ê)

If ê = e, the error is corrected.
If ê ≠ e, an undetectable decoding error occurs.
Lecture 9 194
Linear block codes – cont’d
Example: Standard array for the (6,3) code
Codewords (first row) and cosets; the first column contains the coset leaders:

000000  110100  011010  101110  101001  011101  110011  000111
000001  110101  011011  101111  101000  011100  110010  000110
000010  110110  011000  101100  101011  011111  110001  000101
000100  110000  011110  101010  101101  011001  110111  000011
001000  111100  010010  100110  100001  010101  111011  001111
010000  100100  001010  111110  111001  001101  100011  010111
100000  010100  111010  001110  001001  111101  010011  100111
010001  100101  001011  111111  111000  001100  100010  010110
Lecture 9 195
Linear block codes – cont’d
Lecture 9 196
Hamming codes
Hamming codes
Hamming codes are a subclass of linear block codes
and belong to the category of perfect codes.
Hamming codes are expressed as a function of a
single integer m >= 2:

Code length:                 n = 2^m − 1
Number of information bits:  k = 2^m − m − 1
Number of parity bits:       n − k = m
Error correction capability: t = 1
Lecture 9 197
Hamming codes
Example: Systematic Hamming code (7,4)
H = [ I_3 | P^T ] = [ 1 0 0 0 1 1 1 ]
                    [ 0 1 0 1 0 1 1 ]
                    [ 0 0 1 1 1 0 1 ]

G = [ P | I_4 ] = [ 0 1 1 1 0 0 0 ]
                  [ 1 0 1 0 1 0 0 ]
                  [ 1 1 0 0 0 1 0 ]
                  [ 1 1 1 0 0 0 1 ]
Lecture 9 198
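Single-error correction with the (7,4) matrices above can be sketched directly: the syndrome S = rH^T of a one-bit error equals the column of H at the error position, so that bit is flipped (function names are mine):

```python
H = [[1, 0, 0, 0, 1, 1, 1],
     [0, 1, 0, 1, 0, 1, 1],
     [0, 0, 1, 1, 1, 0, 1]]

def syndrome(r):
    """S = r H^T over GF(2)."""
    return tuple(sum(h * x for h, x in zip(row, r)) % 2 for row in H)

def correct(r):
    """Correct a single bit error: the syndrome matches a column of H."""
    s = syndrome(r)
    if s == (0, 0, 0):
        return list(r)
    cols = list(zip(*H))          # column j of H = syndrome of an error at j
    j = cols.index(s)
    fixed = list(r)
    fixed[j] ^= 1
    return fixed

U = [0, 1, 1, 1, 0, 0, 0]         # first row of G is itself a codeword
r = U.copy(); r[4] ^= 1           # inject a single bit error
print(correct(r) == U)            # True
```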
Cyclic block codes
Cyclic codes are a subclass of linear block
codes.
Encoding and syndrome calculation are easily
performed using feedback shift-registers.
Hence, relatively long block codes can be
implemented with a reasonable complexity.
BCH and Reed-Solomon codes are cyclic
codes.
Lecture 9 199
Cyclic block codes
U = (u_0, u_1, u_2, ..., u_{n−1})

"i" cyclic shifts of U:

U^(i) = (u_{n−i}, u_{n−i+1}, ..., u_{n−1}, u_0, u_1, ..., u_{n−i−1})

Example:
U = (1101)
U^(1) = (1110),  U^(2) = (0111),  U^(3) = (1011),  U^(4) = (1101) = U
Lecture 9 200
Cyclic block codes
The algebraic structure of cyclic codes suggests expressing
codewords in polynomial form:

U(X) = u_0 + u_1 X + u_2 X^2 + ... + u_{n−1} X^{n−1}    (degree n−1)

Relationship between a codeword and its cyclic shifts:

X U(X) = u_0 X + u_1 X^2 + ... + u_{n−2} X^{n−1} + u_{n−1} X^n
       = u_{n−1} + u_0 X + ... + u_{n−2} X^{n−1} + u_{n−1} (X^n + 1)
       = U^(1)(X) + u_{n−1} (X^n + 1)

Hence:

U^(1)(X) = X U(X)  modulo (X^n + 1)

and, by extension,

U^(i)(X) = X^i U(X)  modulo (X^n + 1)
Lecture 9 201
Cyclic block codes
Basic properties of Cyclic codes:
Let C be a binary (n,k) linear cyclic code
1. Within the set of code polynomials in C, there is a
unique monic polynomial g(X) of minimal degree
r < n. g(X) is called the generator polynomial:

g(X) = g_0 + g_1 X + ... + g_r X^r

2. Every code polynomial U(X) in C can be expressed
uniquely as U(X) = m(X) g(X).
Lecture 9 202
Cyclic block codes
The orthogonality of G and H in polynomial form is
expressed as g(X) h(X) = X^n + 1. This means that
h(X) = (X^n + 1) / g(X) is also a factor of X^n + 1.

The row i, i = 1, ..., k, of the generator matrix is formed by
the coefficients of the "i − 1" cyclic shift of the generator
polynomial:

    [ g(X)         ]   [ g_0  g_1  ...  g_r                 ]
G = [ X g(X)       ] = [      g_0  g_1  ...  g_r            ]
    [ ...          ]   [            ...                     ]
    [ X^{k−1} g(X) ]   [                 g_0  g_1  ...  g_r ]
Lecture 9 203
Cyclic block codes
Lecture 9 204
Cyclic block codes
Example: For the systematic (7,4) Cyclic code
with generator polynomial g(X) = 1 + X + X^3:
For the message m = (1 0 1 1), i.e. m(X) = 1 + X^2 + X^3,
form the codeword polynomial:

U(X) = p(X) + X^3 m(X) = 1 + X^3 + X^5 + X^6

U = (1 0 0 | 1 0 1 1)
     parity bits | message bits
Lecture 9 205
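The parity polynomial in the example above is p(X) = X^3 m(X) mod g(X); the division can be sketched with bit lists indexed by power of X (function and variable names are mine):

```python
def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division; bit lists, index = power of X."""
    r = dividend[:]
    for i in range(len(r) - 1, len(divisor) - 2, -1):
        if r[i]:  # cancel the leading term with a shifted copy of the divisor
            for j, d in enumerate(divisor):
                r[i - len(divisor) + 1 + j] ^= d
    return r[:len(divisor) - 1]

g = [1, 1, 0, 1]                  # g(X) = 1 + X + X^3
m = [1, 0, 1, 1]                  # m(X) = 1 + X^2 + X^3
shifted = [0, 0, 0] + m           # X^3 m(X)
p = poly_mod(shifted, g)          # parity bits p(X)
U = p + m                         # systematic codeword
print(U)  # [1, 0, 0, 1, 0, 1, 1] -> U(X) = 1 + X^3 + X^5 + X^6
```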
Cyclic block codes
Find the generator and parity check matrices, G and H,
respectively.
g(X) = 1 + 1.X + 0.X^2 + 1.X^3   →   (g_0, g_1, g_2, g_3) = (1 1 0 1)

    [ 1 1 0 1 0 0 0 ]   Not in systematic form.
G = [ 0 1 1 0 1 0 0 ]   We do the following:
    [ 0 0 1 1 0 1 0 ]   row(3) ← row(1) + row(3)
    [ 0 0 0 1 1 0 1 ]   row(4) ← row(1) + row(2) + row(4)

    [ 1 1 0 1 0 0 0 ]        [ 1 0 0 1 0 1 1 ]
G = [ 0 1 1 0 1 0 0 ]    H = [ 0 1 0 1 1 1 0 ]
    [ 1 1 1 0 0 1 0 ]        [ 0 0 1 0 1 1 1 ]
    [ 1 0 1 0 0 0 1 ]

      [ P | I_4 ]              [ I_3 | P^T ]
Lecture 9 206
Cyclic block codes
Lecture 9 207
Example of the block codes
[Plot: P_B versus E_b/N_0 (dB) for block-coded 8PSK and QPSK]
Lecture 9 208
Convolutional codes
Convolutional codes offer an approach to error control
coding substantially different from that of block codes.
A convolutional encoder:
encodes the entire data stream, into a single codeword.
does not need to segment the data stream into blocks of fixed
size (Convolutional codes are often forced to block structure by periodic
truncation).
is a machine with memory.
This fundamental difference in approach imparts a
different nature to the design and evaluation of the code.
Block codes are based on algebraic/combinatorial
techniques.
Convolutional codes are based on construction techniques.
Lecture 10 209
Convolutional codes-cont’d
A Convolutional code is specified by three
parameters, (n, k, K) or (k/n, K), where
R_c = k/n is the coding rate, determining the number
of data bits per coded bit.
In practice, usually k = 1 is chosen, and we assume that
from now on.
K is the constraint length of the encoder, where
the encoder has K − 1 memory elements.
There are different definitions of constraint length in the
literature.
Lecture 10 210
Block diagram of the DCS
Input sequence → [Conv. encoder] → Codeword sequence → Channel → Demodulator

Codeword sequence: U_i = (u_1i, ..., u_ji, ..., u_ni)
— the i-th branch word (n coded bits)

Demodulator outputs: Z_i = (z_1i, ..., z_ji, ..., z_ni)
— n outputs per branch word
Lecture 10 211
A Rate ½ Convolutional encoder
Convolutional encoder (rate ½, K=3)
3 shift-registers where the first one takes the
incoming data bit and the rest, form the memory of
the encoder.
Lecture 10 212
A Rate ½ Convolutional encoder
[Figure: encoder register contents and branch words (u1, u2) at successive times — e.g. at t_3 the register holds 1 0 1 and at t_4 it holds 0 1 0]
Lecture 10 213
A Rate ½ Convolutional encoder
m = (101)  →  Encoder  →  U = (11 10 00 10 11)
Lecture 10 214
Effective code rate
Initialize the memory before encoding the first bit (all-
zero)
Clear out the memory after encoding the last bit (all-
zero)
Hence, a tail of zero-bits is appended to data bits.
Lecture 10 215
Encoder representation
Vector representation:
We define n binary vectors, each with K elements (one
vector for each modulo-2 adder). The i-th element
in each vector is "1" if the i-th stage of the shift
register is connected to the corresponding modulo-2
adder, and "0" otherwise.
Example:

g_1 = (1 1 1)
g_2 = (1 0 1)

[Figure: rate 1/2, K = 3 encoder — message m feeds the shift register; the top adder (g_1) outputs u1, the bottom adder (g_2) outputs u2]
Lecture 10 216
Encoder representation – cont’d
Impulse response representaiton:
The response of encoder to a single “one” bit that
goes through it.
Example:

Register contents   Branch word (u1 u2)
100                 1 1
010                 1 0
001                 1 1

Input sequence:  1 0 0
Output sequence: 11 10 11

For m = (1 0 1), superposition of shifted impulse responses gives:

Input m   Output
1         11 10 11
0            00 00 00
1               11 10 11
Modulo-2 sum:  11 10 00 10 11
Lecture 10 217
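The encoder above can be sketched directly as a shift register flushed with a zero tail (function name is mine; the generators are the slide's g1 = (111), g2 = (101)):

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2, K=3 convolutional encoder with zero-tail termination."""
    state = [0, 0]                      # K-1 = 2 memory elements
    out = []
    for b in list(bits) + [0, 0]:       # append K-1 tail zeros
        window = [b] + state            # current input plus register contents
        out.append(sum(x * y for x, y in zip(g1, window)) % 2)
        out.append(sum(x * y for x, y in zip(g2, window)) % 2)
        state = [b] + state[:-1]        # shift
    return out

print(conv_encode([1, 0, 1]))  # [1,1, 1,0, 0,0, 1,0, 1,1] as on the slide
```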
Encoder representation – cont’d
Polynomial representation:
We define n generator polynomials, one for each
modulo-2 adder. Each polynomial is of degree K-1 or
less and describes the connection of the shift
registers to the corresponding modulo-2 adder.
Example:
g_1(X) = g_0^(1) + g_1^(1) X + g_2^(1) X^2 = 1 + X + X^2
g_2(X) = g_0^(2) + g_1^(2) X + g_2^(2) X^2 = 1 + X^2
Lecture 10 218
Encoder representation –cont’d
In more detail, for m = (1 0 1), i.e. m(X) = 1 + X^2:

m(X) g_1(X) = (1 + X^2)(1 + X + X^2) = 1 + X + X^3 + X^4
m(X) g_2(X) = (1 + X^2)(1 + X^2)     = 1 + X^4

m(X) g_1(X) = 1 + 1.X + 0.X^2 + 1.X^3 + 1.X^4
m(X) g_2(X) = 1 + 0.X + 0.X^2 + 0.X^3 + 1.X^4

U(X) = (1,1) + (1,0) X + (0,0) X^2 + (1,0) X^3 + (1,1) X^4

U = 11 10 00 10 11
Lecture 10 219
State diagram
A finite-state machine only encounters a finite
number of states.
State of a machine: the smallest amount of
information that, together with a current input
to the machine, can predict the output of the
machine.
In a Convolutional encoder, the state is
represented by the content of the memory.
Hence, there are 2^(K−1) states.
Lecture 10 220
State diagram – cont’d
A state diagram is a way to represent the
encoder.
A state diagram contains all the states and all
possible transitions between them.
Only two transitions initiating from a state
Only two transitions ending up in a state
Lecture 10 221
State diagram – cont’d
[State diagram for the rate 1/2, K = 3 code — transitions labeled input/output:
S0 = 00:  0/00 → S0,  1/11 → S2
S1 = 01:  0/11 → S0,  1/00 → S2
S2 = 10:  0/10 → S1,  1/01 → S3
S3 = 11:  0/01 → S1,  1/10 → S3]
Lecture 10 223
Trellis –cont’d
A trellis diagram for the example code
Input bits: 1 0 1, tail bits: 0 0
Output bits: 11 10 00 10 11

[Trellis diagram over times t_1 ... t_6 — each section carries the branch labels 0/00 and 1/11 (from S0), 0/11 and 1/00 (from S1), 0/10 and 1/01 (from S2), 0/01 and 1/10 (from S3)]
Lecture 10 224
Optimum decoding
If the input sequence messages are equally likely, the
optimum decoder which minimizes the probability of
error is the Maximum likelihood decoder.
Lecture 11 233
ML decoding for memory-less channels
Due to the independent channel statistics for
memoryless channels, the likelihood function becomes

p(Z | U^(m)) = prod_i p(Z_i | U_i^(m)) = prod_i prod_{j=1}^{n} p(z_ji | u_ji^(m))

and the log-likelihood function becomes

gamma_U = log p(Z | U^(m)) = sum_i sum_{j=1}^{n} log p(z_ji | u_ji^(m))

Path metric: gamma_U
Branch metric: sum_{j=1}^{n} log p(z_ji | u_ji^(m)) for one branch i
Bit metric: log p(z_ji | u_ji^(m))

The path metric up to time index "i" is called the partial path metric.
Lecture 11 234
Binary symmetric channels (BSC)
[Figure: BSC — input 1 → output 1 with probability 1−p, output 0 with probability p; input 0 symmetric]

p(1|0) = p(0|1) = p,   p(1|1) = p(0|0) = 1 − p

If d_m = d(Z, U^(m)) is the Hamming distance between Z and
U^(m), then

p(Z | U^(m)) = p^{d_m} (1 − p)^{L_n − d_m}    (L_n: size of the coded sequence)

log p(Z | U^(m)) = −d_m log( (1 − p)/p ) + L_n log(1 − p)

ML decoding rule:
Choose the path with minimum Hamming distance
from the received sequence.
Lecture 11 235
AWGN channels
Lecture 11 236
Soft and hard decisions
In hard decision:
The demodulator makes a firm or hard decision
whether a one or a zero was transmitted and
provides no other information for the decoder such
as how reliable the decision is.
Lecture 11 237
Soft and hard decision-cont’d
In Soft decision:
The demodulator provides the decoder with some
side information together with the decision.
The side information provides the decoder with a
measure of confidence for the decision.
The demodulator outputs which are called soft-
bits, are quantized to more than two levels.
Decoding based on soft-bits, is called the
“soft-decision decoding”.
On AWGN channels about a 2 dB gain, and on fading
channels about a 6 dB gain, is obtained by using
soft-decision instead of hard-decision decoding.
Lecture 11 238
The Viterbi algorithm
The Viterbi algorithm performs Maximum likelihood
decoding.
It finds a path through the trellis with the largest
metric (maximum correlation or minimum distance).
It processes the demodulator outputs in an iterative
manner.
At each step in the trellis, it compares the metric of all
paths entering each state, and keeps only the path with
the smallest metric, called the survivor, together with its
metric.
It proceeds in the trellis by eliminating the least likely
paths.
It reduces the decoding complexity to L * 2^(K−1)!
Lecture 11 239
The Viterbi algorithm - cont’d
Viterbi algorithm:
A. Do the following set up:
For a data block of L bits, form the trellis. The trellis has
L+K−1 sections or levels; it starts at time t_1 and ends
at time t_{L+K}.
Label all the branches in the trellis with their
corresponding branch metric.
For each state S(t_i) in {0, 1, ..., 2^(K−1) − 1} in the trellis at
time t_i, define a parameter Gamma(S(t_i), t_i).
B. Then, do the following:
Lecture 11 240
The Viterbi algorithm - cont’d
1. Set Gamma(0, t_1) = 0 and i = 2.
2. At time t_i, compute the partial path metrics for all
the paths entering each state.
3. Set Gamma(S(t_i), t_i) equal to the best partial path metric
entering each state at time t_i.
Keep the survivor path and delete the dead paths
from the trellis.
4. If i < L + K, increase i by 1 and return to step 2.
C. Start at state zero at time t_{L+K}. Follow the
surviving branches backwards through the
trellis. The path found is unique and
corresponds to the ML codeword.
Lecture 11 241
Example of Hard decision Viterbi
decoding
m = (101)
U = (11 10 00 10 11)
Z = (11 10 11 10 01)

[Trellis over times t_1 ... t_6]
Lecture 11 242
Example of Hard decision Viterbi
decoding-cont’d
Label all the branches with their branch metric
(Hamming distance between the branch word and the received
branch word); partial path metrics are Gamma(S(t_i), t_i).

[Trellis with branch metrics — e.g. the all-zero path accumulates branch distances 2, 1, 2, 1, 1]
Lecture 11 243
Example of Hard decision Viterbi
decoding-cont’d
i = 2

[Trellis with partial path metrics after step i = 2]
Lecture 11 244
Example of Hard decision Viterbi
decoding-cont’d
i = 3

[Trellis with partial path metrics after step i = 3]
Lecture 11 245
Example of Hard decision Viterbi
decoding-cont’d
i = 4

[Trellis with partial path metrics after step i = 4; dead paths deleted]
Lecture 11 246
Example of Hard decision Viterbi
decoding-cont’d
i = 5

[Trellis with partial path metrics after step i = 5]
Lecture 11 247
Example of Hard decision Viterbi
decoding-cont’d
i = 6

[Trellis with partial path metrics after step i = 6]
Lecture 11 248
Example of Hard decision Viterbi decoding-
cont’d
Trace back and then:

m̂ = (100)
Û = (11 10 11 00 00)

[Trellis with the surviving path highlighted]
Lecture 11 249
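The worked traceback above can be reproduced with a short hard-decision Viterbi sketch for the rate-1/2, K = 3 code (g1 = 111, g2 = 101); function names are mine:

```python
def branch_out(state, b):
    """Branch word and next state for state = (s1, s2) and input bit b."""
    u1 = (b + state[0] + state[1]) % 2  # adder g1 = 111
    u2 = (b + state[1]) % 2             # adder g2 = 101
    return (u1, u2), (b, state[0])

def viterbi(z, nbits):
    """Hard-decision Viterbi decoding of a zero-tail-terminated sequence."""
    branches = [tuple(z[2 * i:2 * i + 2]) for i in range(len(z) // 2)]
    metrics = {(0, 0): (0, [])}         # state -> (path metric, decoded bits)
    for i, rx in enumerate(branches):
        new = {}
        for state, (pm, path) in metrics.items():
            for b in (0, 1) if i < nbits else (0,):   # tail forces zeros
                out, nxt = branch_out(state, b)
                d = (out[0] ^ rx[0]) + (out[1] ^ rx[1])  # Hamming branch metric
                if nxt not in new or pm + d < new[nxt][0]:
                    new[nxt] = (pm + d, path + [b])      # keep the survivor
        metrics = new
    return metrics[(0, 0)][1][:nbits]   # traceback ends at state zero

print(viterbi([1, 1, 1, 0, 1, 1, 1, 0, 0, 1], 3))  # [1, 0, 0], as above
```

The decoded (1 0 0) path is at Hamming distance 2 from Z, closer than the transmitted (1 0 1) path at distance 3, which is exactly why the decoder errs here.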
Example of soft-decision Viterbi decoding
m = (101),  U = (11 10 00 10 11)
m̂ = (101),  Û = (11 10 00 10 11)

[Trellis for soft-decision decoding — the received vector Z has soft components of magnitude 1 and 1/3; branch metrics are correlations, and partial path metrics Gamma(S(t_i), t_i) such as 5/3, 10/3, 13/3, 14/3 are accumulated; the largest-metric path is kept]
Lecture 11 250
Trellis of an example ½ Conv. code
[Figure: trellis of the example rate 1/2, K = 3 convolutional code]
Lecture 12 252
Soft and hard decision decoding
In hard decision:
The demodulator makes a firm or hard decision
whether one or zero was transmitted and provides
no other information for the decoder such as how
reliable the decision is.
In Soft decision:
The demodulator provides the decoder with some
side information together with the decision. The
side information provides the decoder with a
measure of confidence for the decision.
Lecture 12 253
Soft and hard decision decoding …
ML soft-decisions decoding rule:
Choose the path in the trellis with minimum
Euclidean distance from the received sequence
Lecture 12 254
The Viterbi algorithm
Lecture 12 255
Example of hard-decision Viterbi
decoding
Z = (11 10 11 10 01)
m = (101),  U = (11 10 00 10 11)
m̂ = (100),  Û = (11 10 11 00 00)

[Trellis with branch metrics and partial path metrics Gamma(S(t_i), t_i)]
Lecture 12 256
Today, we are going to talk about:
The properties of Convolutional codes:
Free distance
Transfer function
Systematic Conv. codes
Catastrophic Conv. codes
Error performance
Interleaving
Concatenated codes
Error correction scheme in Compact disc
Lecture 12 258
Free distance of Convolutional codes
Distance properties:
Since a Convolutional encoder generates codewords of
various sizes (as opposed to block codes), the following
approach is used to find the minimum distance between all
pairs of codewords:
Since the code is linear, the minimum distance of the code is
the minimum distance between each of the codewords and the
all-zero codeword.
This is the minimum distance in the set of all arbitrarily long
paths along the trellis that diverge from and re-merge with the all-zero
path.
It is called the minimum free distance, or the free distance of the
code, denoted by d_free or d_f.
Lecture 12 259
Free distance …
[Trellis showing the minimum-weight path that diverges from and re-merges with the all-zero path; for the example code, d_f = 5]
Lecture 12 260
Transfer function of Convolutional codes
Transfer function:
The transfer function of the generating function is a
tool which provides information about the weight
distribution of the codewords.
The weight distribution specifies weights of different paths
in the trellis (codewords) with their corresponding lengths
and amount of information.
T(D, L, N) = sum over paths of  D^i L^j N^l,  i >= d_f

D, L, N: placeholders
i: distance of the path from the all-zero path
j: number of branches that the path takes until it re-merges with the
   all-zero path
l: weight of the information bits corresponding to the path
Lecture 12 261
Transfer function …
[Modified state diagram with states a = 00 (start), b = 10, c = 01, d = 11, e = 00 (end) and branch gains:
a→b: D^2 L N,  b→c: D L,  c→e: D^2 L,  b→d: D L N,  d→d: D L N,  d→c: D L,  c→b: L N]
Lecture 12 262
Transfer function …
Solve T(D, L, N) = X_e / X_a:

T(D, L, N) = D^5 L^3 N / (1 − D L (1 + L) N)
           = D^5 L^3 N + D^6 L^4 N^2 + D^6 L^5 N^2 + ...

One path with weight 5, length 3 and data weight 1
One path with weight 6, length 4 and data weight 2
One path with weight 6, length 5 and data weight 2
Lecture 12 263
Systematic Convolutional codes
A Conv. coder at rate k/n is systematic if the
k input bits appear as part of the n-bit branch
word.
Input Output
Lecture 12 264
Catastrophic Convolutional codes
Catastrophic error propagations in Conv. code:
A finite number of errors in the coded bits cause an
infinite number of errors in the decoded data bits.
A Convolutional code is catastrophic if there is a
closed loop in the state diagram with zero
weight.
Systematic codes are not catastrophic:
At least one branch of output word is generated by
input bits.
Small fraction of non-systematic codes are
catastrophic.
Lecture 12 265
Catastrophic Conv. …
Example of a catastrophic Conv. code:
Assume the all-zero codeword is transmitted.
Three errors happen on the coded bits, so that the decoder
takes the wrong path a b d d ... d d c e.
This path has only 6 ones, no matter how many times it stays in the
loop at node d.
It results in many erroneous decoded data bits.

[State diagram with branch outputs labeled on each transition and a zero-weight (output 00) self-loop at node d]
Lecture 12 266
Performance bounds for Conv. codes
Error performance of the Conv. codes is
analyzed based on the average bit error
probability (not the average codeword error
probability), because
Codewords have variable sizes due to different
sizes of the input.
For large blocks, the codeword error probability may
converge to one, while the bit error probability may
remain constant.
Lecture 12 267
Performance bounds …
Analysis is based on:
Assuming the all-zero codeword is transmitted
Evaluating the probability of an “error event”
(usually using bounds such as union bound).
An “error event” occurs at a time instant in the trellis if a
non-zero path leaves the all-zero path and re-merges to it
at a later time.
Lecture 12 268
Performance bounds …
Bounds on bit error probability for
memoryless channels:
Hard-decision decoding:

P_B <= dT(D, L, N)/dN   evaluated at   N = 1, L = 1, D = 2*sqrt(p(1 - p))
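For the K = 3 code whose transfer function was derived earlier, setting L = 1 gives T(D, N) = D^5 N / (1 - 2DN), so dT/dN at N = 1 equals D^5 / (1 - 2D)^2. A Python sketch (my evaluation of that closed form, not from the lecture) of the resulting bound with D = 2*sqrt(p(1-p)):

```python
import math

def pb_bound_hard(p):
    """Hard-decision bit-error bound for the K=3, rate-1/2 code:
    dT/dN at N=1, L=1 is D^5 / (1 - 2D)^2 with D = 2*sqrt(p(1-p)).
    Valid only while 2D < 1, i.e. for small crossover probability p."""
    D = 2.0 * math.sqrt(p * (1.0 - p))
    assert 2.0 * D < 1.0, "bound diverges for this p"
    return D**5 / (1.0 - 2.0 * D) ** 2

for p in (1e-2, 1e-3, 1e-4):
    print(f"p = {p:.0e}  ->  PB <= {pb_bound_hard(p):.3e}")
```

As expected, the bound tightens rapidly as the channel crossover probability p decreases.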
Lecture 12 269
Performance bounds …
Error correction capability of convolutional codes,
given by t = floor((df - 1)/2), depends on:
If the decoding is performed long enough (within 3 to 5
times the constraint length)
How the errors are distributed (bursty or random)
For a given code rate, increasing the constraint
length usually increases the free distance.
For a given constraint length, decreasing the
coding rate usually increases the free distance.
The coding gain is upper bounded:

coding gain <= 10 log10(Rc * df)  [dB]
Lecture 12 270
Performance bounds …
Basic coding gain (dB) for soft-decision Viterbi
decoding
Uncoded      PB      Coding gain (dB)
Eb/N0 (dB)           K=7   K=8   K=6   K=7
6.8          10^-3   4.2   4.4   3.5   3.8
9.6          10^-5   5.7   5.9   4.6   5.1
11.3         10^-7   6.2   6.5   5.3   5.8
Upper bound          7.0   7.3   6.0   7.0
Lecture 12 271
Interleaving
Convolutional codes are suitable for memoryless
channels with random error events.
Lecture 12 272
Interleaving …
Interleaving is achieved by spreading the
coded symbols in time before transmission.
The reverse is done at the receiver by
deinterleaving the received sequence.
Interleaving makes bursty errors look random,
so convolutional codes can be used on bursty channels.
Types of interleaving:
Block interleaving
Convolutional or cross interleaving
Lecture 12 273
Interleaving …
Consider a code with t = 1 and 3-bit codewords.
Without interleaving, a burst error of length 3 cannot be corrected:

A1 A2 A3 B1 B2 B3 C1 C2 C3   (burst of 3 -> 2 errors in one codeword)

Interleaver -> channel -> Deinterleaver:

A1 B1 C1 A2 B2 C2 A3 B3 C3  ->  A1 A2 A3 B1 B2 B3 C1 C2 C3
(the same burst leaves 1 error per codeword -> correctable)
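The example above can be reproduced with a simple block interleaver: write the sequence into a matrix by rows and read it out by columns. A Python sketch (illustration, not the lecture's code):

```python
def block_interleave(symbols, rows, cols):
    """Write row by row into a rows x cols array, read column by column."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(symbols, rows, cols):
    # Deinterleaving is interleaving with the matrix dimensions swapped.
    return block_interleave(symbols, cols, rows)

tx = ["A1", "A2", "A3", "B1", "B2", "B3", "C1", "C2", "C3"]
sent = block_interleave(tx, rows=3, cols=3)
print(sent)  # ['A1', 'B1', 'C1', 'A2', 'B2', 'C2', 'A3', 'B3', 'C3']

# A burst hitting 3 consecutive transmitted symbols lands in
# 3 different codewords after deinterleaving.
assert block_deinterleave(sent, 3, 3) == tx
```

A deeper matrix (more rows) protects against proportionally longer bursts, at the cost of more delay.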
Lecture 12 274
Concatenated codes
A concatenated code uses two levels of coding: an
inner code and an outer code (of higher rate).
Popular concatenated codes: Convolutional codes with
Viterbi decoding as the inner code and Reed-Solomon codes
as the outer code
The purpose is to reduce the overall complexity, yet
achieve the required error performance.
[Block diagram: channel -> Demodulator -> Inner decoder -> Deinterleaver
-> Outer decoder -> Output data]
Lecture 12 275
Practical example: Compact disc
Lecture 12 276
Compact disc – cont’d
Encoder: interleave -> C2 encode -> D* interleave -> C1 encode -> D interleave
Decoder: D deinterleave -> C1 decode -> D* deinterleave -> C2 decode -> deinterleave
Lecture 12 277
Goals in designing a DCS
Goals:
Maximizing the transmission bit rate
Minimizing probability of bit error
Minimizing the required power
Minimizing required system bandwidth
Maximizing system utilization
Minimizing system complexity
Lecture 13 278
Error probability plane
(example for coherent MPSK and MFSK)

[Figure: bit error probability versus Eb/N0 [dB].
Left panel: M-PSK (bandwidth-efficient), curves for k = 1 to 5;
error performance degrades as k increases.
Right panel: M-FSK (power-efficient), curves for k = 1 to 5;
error performance improves as k increases.]
Lecture 13 279
Limitations in designing a DCS
Limitations:
The Nyquist theoretical minimum bandwidth
requirement
The Shannon-Hartley capacity theorem (and the
Shannon limit)
Government regulations
Technological limitations
Other system requirements (e.g. satellite orbits)
Lecture 13 280
Nyquist minimum bandwidth requirement
The theoretical minimum bandwidth needed
for baseband transmission of Rs symbols per
second is Rs/2 hertz.
h(t) = sinc(t/T),   H(f): ideal brick-wall filter of one-sided bandwidth 1/(2T)

[Figure: rectangular H(f) over |f| <= 1/(2T); sinc pulse h(t) with
zero crossings at nonzero multiples of T]
Lecture 13 281
Shannon limit
Channel capacity: the maximum data rate at
which error-free communication over the channel is
possible.
Channel capacity of the AWGN channel (Shannon-
Hartley capacity theorem):

C = W log2(1 + S/N)  [bits/s]

W [Hz]          : bandwidth
S = Eb*C [Watt] : average received signal power
N = N0*W [Watt] : average noise power
Lecture 13 282
Shannon limit …
The Shannon theorem puts a limit on the
transmission data rate, not on the error
probability:
It is theoretically possible to transmit information at any
rate Rb <= C with an arbitrarily small error probability,
by using a sufficiently complicated coding scheme.
Lecture 13 283
Shannon limit …
[Figure: normalized channel capacity C/W [bits/s/Hz] versus SNR [dB].
The region above the capacity curve is unattainable; the region below
it is the practical region.]
Lecture 13 284
Shannon limit …
C = W log2(1 + S/(N0*W)),  S = Eb*C
=>  C/W = log2(1 + (Eb/N0)(C/W))

As W -> infinity (C/W -> 0), we get the Shannon limit:

Eb/N0 -> 1/log2(e) = 0.693 = -1.6 [dB]
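The limit can be checked numerically: with x = C/W, the relation C/W = log2(1 + (Eb/N0)(C/W)) gives Eb/N0 = (2^x - 1)/x, which tends to ln 2 as x -> 0. A short Python check (mine, not from the lecture):

```python
import math

def ebn0_required(x):
    """Minimum Eb/N0 (linear) for spectral efficiency x = C/W [bits/s/Hz],
    from C/W = log2(1 + (Eb/N0) * C/W)."""
    return (2.0**x - 1.0) / x

for x in (2.0, 1.0, 0.1, 1e-6):
    lin = ebn0_required(x)
    print(f"C/W = {x:<6g}  Eb/N0 = {lin:.4f} = {10*math.log10(lin):+.2f} dB")

# As C/W -> 0 the requirement tends to ln 2 = 0.693 (-1.59 dB): the Shannon limit.
```

Note that at C/W = 1 the required Eb/N0 is exactly 1 (0 dB), a convenient sanity check.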
Lecture 13 285
Shannon limit …
[Figure: bandwidth-efficiency plane, W/C [Hz/bits/s] versus Eb/N0 [dB]
at PB = 10^-5. MPSK and MQAM operating points (M = 2, 4, 8, 16) lie
toward the bandwidth-limited region; MFSK points lie toward the
power-limited region. The practical region is R < C, bounded by the
Shannon limit at -1.6 dB.]
Lecture 13 287
Power and bandwidth limited systems
Bandwidth-limited systems:
save bandwidth at the expense of power (for example by
using spectrally efficient modulation schemes)
Power-limited systems:
save power at the expense of bandwidth (for example by
using error-correction coding schemes)
Lecture 13 288
M-ary signaling
Bandwidth efficiency:

Rb/W = log2(M)/(W*Ts) = 1/(W*Tb)  [bits/s/Hz]
Lecture 13 289
Design example of uncoded systems
Design goals:
1. The bit error probability at the modulator output must meet the
system error requirement.
2. The transmission bandwidth must not exceed the available
channel bandwidth.
Input -> M-ary modulator

R [bits/s],   Rs = R / log2(M)  [symbols/s]

Output <- M-ary demodulator

Pr/N0 = Eb*R/N0 = Es*Rs/N0

P_E(M) = f(Es/N0),   PB = g(P_E(M))
Lecture 13 290
Design example of uncoded systems …
Rb > Wc  =>  band-limited channel  =>  MPSK modulation, M = 8

Rs = Rb / log2(M) = 9600/3 = 3200 [sym/s]  <=  Wc = 4000 [Hz]

Es/N0 = (log2 M)(Eb/N0) = (log2 M)(1/Rb)(Pr/N0) = 62.67

P_E(M = 8) ~ 2 Q( sqrt(2 Es/N0) * sin(pi/M) ) = 2.2 x 10^-5

PB ~ P_E(M) / log2(M) = 7.3 x 10^-6  <=  10^-5
Lecture 13 291
Design example of uncoded systems …
Choose a modulation scheme that meets the following
system requirements:

An AWGN channel with Wc = 45 [kHz], Pr/N0 = 48 [dB-Hz],
Rb = 9600 [bits/s], PB <= 10^-5

Eb/N0 = (1/Rb)(Pr/N0) = 6.61 = 8.2 [dB]

Rb < Wc and relatively small Eb/N0  =>  power-limited channel  =>  MFSK, M = 16

W_MFSK = M*Rs = M*Rb/log2(M) = 16 x 9600/4 = 38.4 [ksym/s]  <=  Wc = 45 [kHz]

Es/N0 = (log2 M)(Eb/N0) = 26.44

P_E(M = 16) <= ((M - 1)/2) exp(-Es/(2 N0)) = 1.4 x 10^-5

PB = (2^(k-1)/(2^k - 1)) P_E(M) = 7.3 x 10^-6  <=  10^-5
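Both uncoded design checks can be reproduced numerically. A Python sketch (my arithmetic on the slides' numbers; small rounding differences from the slides are expected):

```python
import math

def q(x):  # Gaussian tail function Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# --- Band-limited case: MPSK, M = 8, Pr/N0 = 53 dB-Hz, Rb = 9600 bit/s ---
Rb, M = 9600.0, 8
k = math.log2(M)
ebn0 = 10**(53 / 10) / Rb                        # Eb/N0 (linear)
esn0 = k * ebn0                                  # ~62.5
pe_mpsk = 2 * q(math.sqrt(2 * esn0) * math.sin(math.pi / M))
pb_mpsk = pe_mpsk / k                            # ~7e-6
print(f"MPSK: Es/N0 = {esn0:.2f}, PB ~ {pb_mpsk:.2e}")

# --- Power-limited case: MFSK, M = 16, Pr/N0 = 48 dB-Hz, Rb = 9600 bit/s ---
M = 16
k = math.log2(M)
ebn0 = 10**(48 / 10) / Rb                        # ~6.6 (8.2 dB)
esn0 = k * ebn0                                  # ~26.4
pe_mfsk = (M - 1) / 2 * math.exp(-esn0 / 2)      # union bound
pb_mfsk = 2**(k - 1) / (2**k - 1) * pe_mfsk      # ~7e-6
print(f"MFSK: Es/N0 = {esn0:.2f}, PB ~ {pb_mfsk:.2e}")

assert pb_mpsk <= 1e-5 and pb_mfsk <= 1e-5       # both meet PB <= 10^-5
```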
Lecture 13 292
Design example of coded systems
Design goals:
1. The bit error probability at the decoder output must meet the
system error requirement.
2. The rate of the code must not expand the required transmission
bandwidth beyond the available channel bandwidth.
3. The code should be as simple as possible. Generally, the shorter
the code, the simpler will be its implementation.
Input -> Encoder -> M-ary modulator

R [bits/s],   Rc = (n/k) R [bits/s],   Rs = Rc / log2(M)  [sym/s]

Output <- Decoder <- M-ary demodulator

PB = f(pc)

Pr/N0 = Eb*Rb/N0 = Ec*Rc/N0 = Es*Rs/N0

P_E(M) = f(Es/N0),   pc = g(P_E(M))
Lecture 13 293
Design example of coded systems …
Choose a modulation/coding scheme that meets the following
system requirements:

An AWGN channel with Wc = 4000 [Hz], Pr/N0 = 53 [dB-Hz],
Rb = 9600 [bits/s], PB <= 10^-9

Rb > Wc  =>  band-limited channel  =>  MPSK modulation, M = 8

Rs = Rb / log2(M) = 9600/3 = 3200 [sym/s]  <=  Wc = 4000 [Hz]

Uncoded: PB ~ P_E(M) / log2(M) = 7.3 x 10^-6  >  10^-9
(not low enough: channel coding is required)
The requirements are similar to the bandwidth-limited uncoded
system, except that the target bit error probability is much lower.
Lecture 13 294
Design example of coded systems
Uncoded 8-PSK satisfies the bandwidth constraint, but
not the bit error probability constraint; much higher
power would be required:

PB = 10^-9  requires  (Eb/N0)_uncoded = 16 dB
Lecture 13 295
Design example of coded systems
For simplicity, we use BCH codes.
The required coding gain is:
G (dB) = (Eb/N0)_uncoded (dB) - (Eb/N0)_coded (dB) = 16 - 13.2 = 2.8 dB
coded
Among the BCH codes, we choose the one which provides the
required coding gain and bandwidth expansion with minimum
amount of redundancy.
Lecture 13 296
Design example of coded systems …
Bandwidth compatible BCH codes
Lecture 13 297
Design example of coded systems …
Verify that the combination of 8-PSK and the (63,51)
BCH code meets the requirements:
Rs = (n/k) Rb / log2(M) = (63/51)(9600/3) = 3953 [sym/s]  <=  Wc = 4000 [Hz]

Es/N0 = (1/Rs)(Pr/N0) = 50.47

P_E(M = 8) ~ 2 Q( sqrt(2 Es/N0) * sin(pi/M) ) = 1.2 x 10^-4

pc ~ P_E(M) / log2(M) = 4 x 10^-5

PB ~ (1/n) * sum_{j = t+1}^{n} j * C(n, j) * pc^j * (1 - pc)^(n-j)
   = 1.2 x 10^-10  <=  10^-9
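The coded-system check can also be reproduced end to end. A Python sketch (my arithmetic; the slide rounds intermediate values slightly differently):

```python
import math

def q(x):  # Gaussian tail function Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

n, k, t = 63, 51, 2           # (63,51) BCH code corrects t = 2 errors
M, Rb = 8, 9600.0
pr_n0 = 10**(53 / 10)         # Pr/N0 = 53 dB-Hz

Rs = (n / k) * Rb / math.log2(M)       # ~3953 sym/s <= Wc = 4000 Hz
esn0 = pr_n0 / Rs                      # ~50.5
pe = 2 * q(math.sqrt(2 * esn0) * math.sin(math.pi / M))  # ~1.2e-4
pc = pe / math.log2(M)                 # coded-bit error probability ~4e-5

# Block-code bound: PB ~ (1/n) * sum_{j=t+1}^{n} j * C(n,j) * pc^j * (1-pc)^(n-j)
pb = sum(j * math.comb(n, j) * pc**j * (1 - pc)**(n - j)
         for j in range(t + 1, n + 1)) / n
print(f"Rs = {Rs:.0f} sym/s, pc = {pc:.2e}, PB = {pb:.2e}")
assert pb <= 1e-9             # meets the PB <= 10^-9 requirement
```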
Lecture 13 298
Effects of error-correcting codes on error
performance
Error-correcting codes at fixed SNR
influence the error performance in two ways:
1. Improving effect:
The larger the redundancy, the greater the error-
correction capability
2. Degrading effect:
Energy reduction per channel symbol or coded bits for
real-time applications due to faster signaling.
The degrading effect vanishes for non-real time
applications when delay is tolerable, since the
channel symbol energy is not reduced.
Lecture 13 299
Bandwidth efficient modulation schemes
Lecture 13 300
Course summary
In the big picture, we studied:
Fundamental issues in designing a digital
communication system (DCS)
Basic techniques: formatting, coding, modulation
Design goals:
Probability of error and delay constraints
Lecture 13 301
Block diagram of a DCS
Digital modulation
Channel
Digital demodulation
Lecture 13 302
Course summary – cont’d
In detail, we studied:
1. Basic definitions and concepts
Signals classification and linear systems
Random processes and their statistics
WSS, cyclostationary and ergodic processes
Autocorrelation and power spectral density
Power and energy spectral density
Noise in communication systems (AWGN)
Bandwidth of signal
2. Formatting
Continuous sources
Nyquist sampling theorem and aliasing
Uniform and non-uniform quantization
Lecture 13 303
Course summary – cont’d
3. Channel coding
Linear block codes (cyclic codes and Hamming codes)
Encoding and decoding structure
Generator and parity-check matrices (or
polynomials), syndrome, standard array
Codes properties:
Linear property of the code, Hamming distance,
minimum distance, error-correction capability,
coding gain, bandwidth expansion due to
redundant bits, systematic codes
Lecture 13 304
Course summary – cont’d
Convolutional codes
Encoder and decoder structure
Hard and soft decisions
Coding gain, Hamming distance, Euclidean distance,
effects of free distance, code rate and encoder
memory on the performance (probability of error and
bandwidth)
Lecture 13 305
Course summary – cont’d
4. Modulation
Baseband modulation
Signal space, Euclidean distance
Orthogonal basis functions
Matched filter to maximize SNR
Equalization to reduce channel-induced ISI
Pulse shaping to reduce ISI due to filtering at the
transmitter and receiver
Minimum Nyquist bandwidth, ideal Nyquist pulse
shapes, raised cosine pulse shape
Lecture 13 306
Course summary – cont’d
Baseband detection
Structure of the optimum receiver
Optimum detection (MAP)
Maximum likelihood detection for equally likely symbols
Average bit error probability
Union bound on error probability
Lecture 13 307
Course summary – cont’d
Passband modulation
Modulation schemes
Lecture 13 308
Course summary – cont’d
5. Trade-off between modulation and coding
Channel models
Discrete inputs, discrete outputs
Memoryless channels : BSC
Channels with memory
Discrete input, continuous output
AWGN channels
Shannon limits for information transmission rate
Comparison between different modulation and coding
schemes
Probability of error, required bandwidth, delay
Trade-offs between power and bandwidth
Uncoded and coded systems
Lecture 13 309
Information about the exam:
Exam date:
8th of March 2008 (Saturday)
Allowed material:
Any calculator (no computers)
Mathematics handbook
Swedish-English dictionary
A list of formulae that will be available with the
exam.
Lecture 13 310