Chapter 4
Optimum Receiver for AWGN Channels

Wireless Information Transmission System Lab.
Institute of Communications Engineering
National Sun Yat-sen University

Contents

4.1 Waveform and Vector Channel Models
4.2 Waveform and Vector AWGN Channels
4.3 Optimal Detection and Error Probability for Band-Limited Signaling
4.4 Optimal Detection and Error Probability for Power-Limited Signaling
4.5 Optimal Detection in Presence of Uncertainty: Non-Coherent Detection
4.6 A Comparison of Digital Signaling Methods
4.10 Performance Analysis for Wireline and Radio Communication Systems

Chapter 4.1
Waveform and Vector Channel Models


Waveform & Vector Channel Models (1)

In this chapter, we study the effect of noise on the reliability of the modulation systems studied in Chapter 3.
We assume that the transmitter sends digital information by use of M signal waveforms $\{s_m(t),\ m = 1, 2, \ldots, M\}$. Each waveform is transmitted within the symbol interval of duration T, i.e. $0 \le t \le T$.
The additive white Gaussian noise (AWGN) channel model:
$$r(t) = s_m(t) + n(t)$$
$s_m(t)$: transmitted signal
$n(t)$: sample function of an AWGN process with PSD $\Phi_{nn}(f) = N_0/2$ (W/Hz)

Waveform & Vector Channel Models (2)

Based on the observed signal r(t), the receiver makes a decision about which message m, $1 \le m \le M$, was transmitted.
Optimum decision: minimize the error probability $P_e = P[\hat{m} \neq m]$.
Any orthonormal basis $\{\phi_j(t), 1 \le j \le N\}$ can be used for the expansion of a zero-mean white Gaussian process (Problem 2.8-1); the resulting coefficients are i.i.d. zero-mean Gaussian random variables with variance $N_0/2$.
In particular, $\{\phi_j(t), 1 \le j \le N\}$ can be used for the expansion of the noise n(t).
Using $\{\phi_j(t), 1 \le j \le N\}$, the waveform channel $r(t) = s_m(t) + n(t)$ has the vector form
$$\mathbf{r} = \mathbf{s}_m + \mathbf{n}$$
All vectors are N-dimensional, and the components of n are i.i.d. zero-mean Gaussian with variance $N_0/2$.

Waveform & Vector Channel Models (3)

It is convenient to subdivide the receiver into two parts: the signal demodulator and the detector.
The function of the signal demodulator is to convert the received waveform r(t) into an N-dimensional vector $\mathbf{r} = [r_1\ r_2\ \cdots\ r_N]$, where N is the dimension of the transmitted signal waveforms.
The function of the detector is to decide which of the M possible signal waveforms was transmitted, based on the vector r.

Waveform & Vector Channel Models (4)

Two realizations of the signal demodulator are described in the next two sections:
One is based on the use of signal correlators.
The second is based on the use of matched filters.
The optimum detector that follows the signal demodulator is designed to minimize the probability of error.

Chapter 4.1-1
Optimal Detection for General Vector Channel

4.1-1 Optimal Detection for General Vector Channel (1)

AWGN channel model: $\mathbf{r} = \mathbf{s}_m + \mathbf{n}$
Message m is chosen from the set {1, 2, ..., M} with probability $P_m$.
Components of n are i.i.d. $N(0, N_0/2)$; the PDF of the noise vector n is
$$p(\mathbf{n}) = \frac{1}{(\pi N_0)^{N/2}} \exp\!\left(-\frac{\sum_{j=1}^{N} n_j^2}{N_0}\right) = \frac{1}{(\pi N_0)^{N/2}}\, e^{-\|\mathbf{n}\|^2/N_0}$$
A more general vector channel model:
$\mathbf{s}_m$ is selected from $\{\mathbf{s}_m,\ 1 \le m \le M\}$ according to the a priori probability $P_m$.
The received vector r statistically depends on the transmitted vector through the conditional PDF $p(\mathbf{r}\,|\,\mathbf{s}_m)$.

4.1-1 Optimal Detection for General Vector Channel (2)

Based on the observation r, the receiver decides which message was transmitted, $\hat{m} \in \{1, 2, \ldots, M\}$.
Decision function: $g(\mathbf{r})$, a function from $\mathbb{R}^N$ into the messages $\{1, 2, \ldots, M\}$.
Given that r is received, the probability of correct detection is
$$P[\text{correct decision}\,|\,\mathbf{r}] = P[\hat{m}\ \text{sent}\,|\,\mathbf{r}]$$
The overall probability of correct detection is
$$P[\text{correct decision}] = \int P[\text{correct decision}\,|\,\mathbf{r}]\, p(\mathbf{r})\, d\mathbf{r} = \int P[\hat{m}\ \text{sent}\,|\,\mathbf{r}]\, p(\mathbf{r})\, d\mathbf{r}$$
Maximizing P[correct detection] means maximizing $P[m\,|\,\mathbf{r}]$ for each r.
Optimal decision rule:
$$\hat{m} = g_{\mathrm{opt}}(\mathbf{r}) = \arg\max_{1 \le m \le M} P[m\,|\,\mathbf{r}] = \arg\max_{1 \le m \le M} P[\mathbf{s}_m\,|\,\mathbf{r}]$$

MAP and ML Receivers

Optimum decision rule: $\hat{m} = g_{\mathrm{opt}}(\mathbf{r}) = \arg\max_{1 \le m \le M} P[\mathbf{s}_m\,|\,\mathbf{r}]$
This is the maximum a posteriori probability (MAP) rule.
The MAP rule can be simplified:
$$\hat{m} = g_{\mathrm{opt}}(\mathbf{r}) = \arg\max_{1 \le m \le M} \frac{p(\mathbf{r}, \mathbf{s}_m)}{p(\mathbf{r})} = \arg\max_{1 \le m \le M} \frac{p(\mathbf{r}\,|\,\mathbf{s}_m)\, p(\mathbf{s}_m)}{p(\mathbf{r})} = \arg\max_{1 \le m \le M} P_m\, p(\mathbf{r}\,|\,\mathbf{s}_m)$$
When the messages are equiprobable a priori, $P_1 = \cdots = P_M = 1/M$:
$$\hat{m} = g_{\mathrm{opt}}(\mathbf{r}) = \arg\max_{1 \le m \le M} p(\mathbf{r}\,|\,\mathbf{s}_m)$$
$p(\mathbf{r}\,|\,\mathbf{s}_m)$ is called the likelihood of message m; this is the maximum-likelihood (ML) receiver.
Note: $p(\mathbf{r}\,|\,\mathbf{s}_m)$ is the channel conditional PDF.

The Decision Region

Any detector is a mapping $\mathbb{R}^N \to \{1, 2, \ldots, M\}$.
It partitions the output space $\mathbb{R}^N$ into M regions $(D_1, D_2, \ldots, D_M)$: if $\mathbf{r} \in D_m$, then $\hat{m} = g(\mathbf{r}) = m$.
$D_m$ is the decision region for message m.
For a MAP detector,
$$D_m = \{\mathbf{r} \in \mathbb{R}^N : P[m\,|\,\mathbf{r}] > P[m'\,|\,\mathbf{r}],\ \text{for all } 1 \le m' \le M \text{ and } m' \neq m\}$$
If more than one message achieves the maximum a posteriori probability, r is arbitrarily assigned to one of the corresponding decision regions.

The Error Probability (1)

When $\mathbf{s}_m$ is transmitted, an error occurs when r is not in $D_m$.
Symbol error probability of a receiver with decision regions $\{D_m, 1 \le m \le M\}$:
$$P_e = \sum_{m=1}^{M} P_m\, P[\mathbf{r} \notin D_m\,|\,\mathbf{s}_m \text{ sent}] = \sum_{m=1}^{M} P_m\, P_{e|m}$$
$P_{e|m}$ is the error probability when message m is transmitted:
$$P_{e|m} = \int_{D_m^c} p(\mathbf{r}\,|\,\mathbf{s}_m)\, d\mathbf{r} = \sum_{\substack{1 \le m' \le M \\ m' \neq m}} \int_{D_{m'}} p(\mathbf{r}\,|\,\mathbf{s}_m)\, d\mathbf{r}$$
Symbol error probability (or message error probability):
$$P_e = \sum_{m=1}^{M} P_m \sum_{\substack{1 \le m' \le M \\ m' \neq m}} \int_{D_{m'}} p(\mathbf{r}\,|\,\mathbf{s}_m)\, d\mathbf{r}$$

The Error Probability (2)

Another type of error probability is the bit error probability $P_b$: the error probability in the transmission of a single bit.
It requires knowledge of how the different bit sequences are mapped to signal points.
Finding the bit error probability is not easy unless the constellation exhibits certain symmetry properties.
Relation between the symbol error probability and the bit error probability (k bits per symbol):
$$\frac{P_e}{k} \le P_b \le P_e$$

Sufficient Statistics (An Example)

Assumption (1): the observation r can be written in terms of $\mathbf{r}_1$ and $\mathbf{r}_2$, i.e. $\mathbf{r} = (\mathbf{r}_1, \mathbf{r}_2)$.
Assumption (2): $p(\mathbf{r}_1, \mathbf{r}_2\,|\,\mathbf{s}_m) = p(\mathbf{r}_1\,|\,\mathbf{s}_m)\, p(\mathbf{r}_2\,|\,\mathbf{r}_1)$.
The MAP detection becomes
$$\hat{m} = \arg\max_{1 \le m \le M} P_m\, p(\mathbf{r}\,|\,\mathbf{s}_m) = \arg\max_{1 \le m \le M} P_m\, p(\mathbf{r}_1, \mathbf{r}_2\,|\,\mathbf{s}_m) = \arg\max_{1 \le m \le M} P_m\, p(\mathbf{r}_1\,|\,\mathbf{s}_m)\, p(\mathbf{r}_2\,|\,\mathbf{r}_1) = \arg\max_{1 \le m \le M} P_m\, p(\mathbf{r}_1\,|\,\mathbf{s}_m)$$
Under these assumptions the optimal detection is based only on $\mathbf{r}_1$: $\mathbf{r}_1$ is a sufficient statistic for the detection of $\mathbf{s}_m$, and $\mathbf{r}_2$ can be ignored ($\mathbf{r}_2$ is irrelevant data or irrelevant information).
Recognizing sufficient statistics helps to reduce the complexity of the detection by ignoring irrelevant data.

Preprocessing at the Receiver (1)

$\mathbf{s}_m \rightarrow$ Channel $\rightarrow \mathbf{r} \rightarrow G(\mathbf{r}) = \boldsymbol{\rho} \rightarrow$ Detector

Assume that the receiver applies an invertible operation $\boldsymbol{\rho} = G(\mathbf{r})$ before detection.
The optimal detection is
$$\hat{m} = \arg\max_{1 \le m \le M} P_m\, p(\mathbf{r}, \boldsymbol{\rho}\,|\,\mathbf{s}_m) = \arg\max_{1 \le m \le M} P_m\, p(\mathbf{r}\,|\,\mathbf{s}_m)\, p(\boldsymbol{\rho}\,|\,\mathbf{r}) = \arg\max_{1 \le m \le M} P_m\, p(\mathbf{r}\,|\,\mathbf{s}_m)$$
since, when r is given, $\boldsymbol{\rho}$ does not depend on $\mathbf{s}_m$.
The optimal detector based on the observation of $\boldsymbol{\rho}$ makes the same decision as the optimal detector based on the observation of r.
The invertible preprocessing does not change the optimality of the receiver.

Preprocessing at the Receiver (2)

Ex. 4.1-3: Assume that the received vector is of the form
$$\mathbf{r} = \mathbf{s}_m + \mathbf{n}$$
where n is colored noise. Let us further assume that there exists an invertible whitening operator, denoted by W, such that $\mathbf{v} = W\mathbf{n}$ is a white vector.
Consider
$$\boldsymbol{\rho} = W\mathbf{r} = W\mathbf{s}_m + \mathbf{v}$$
This is equivalent to a channel with white noise for detection purposes, with no degradation in performance.
The linear operator W is called a whitening filter.
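As a minimal illustrative sketch (not from the text), a whitening operator can be built from the noise covariance matrix via a Cholesky factorization; the covariance values and variable names below are assumptions for the example only.

```python
import numpy as np

# Sketch of Ex. 4.1-3: whitening colored noise before detection.
# The covariance matrix C_n is an assumed example, not from the text.
rng = np.random.default_rng(0)
C_n = np.array([[1.0, 0.6, 0.2],
                [0.6, 1.0, 0.6],
                [0.2, 0.6, 1.0]])          # covariance of the colored noise n

L = np.linalg.cholesky(C_n)                # C_n = L L^T
W = np.linalg.inv(L)                       # whitening operator: Cov[W n] = I

s_m = np.array([1.0, -1.0, 1.0])           # an example transmitted vector
n = L @ rng.standard_normal(3)             # one colored-noise sample
r = s_m + n

rho = W @ r                                # rho = W s_m + v, with v = W n white
# Detection can proceed on rho against the whitened signals W @ s_m
# with no loss of optimality, since W is invertible.
```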

Chapter 4.2
Waveform and Vector AWGN Channels


Waveform & Vector AWGN Channels (1)

Waveform AWGN channel:
$$r(t) = s_m(t) + n(t)$$
$s_m(t) \in \{s_1(t), s_2(t), \ldots, s_M(t)\}$ with prior probability $P_m$.
n(t): zero-mean white Gaussian noise with PSD $N_0/2$.
By the Gram-Schmidt procedure, we derive an orthonormal basis $\{\phi_j(t), 1 \le j \le N\}$ and the vector representation of the signals $\{\mathbf{s}_m, 1 \le m \le M\}$.
The noise process n(t) is decomposed into two components:
$$n_1(t) = \sum_{j=1}^{N} n_j \phi_j(t), \quad \text{where } n_j = \langle n(t), \phi_j(t)\rangle$$
$$n_2(t) = n(t) - n_1(t)$$

Waveform & Vector AWGN Channels (2)

$$s_m(t) = \sum_{j=1}^{N} s_{mj}\phi_j(t), \quad \text{where } s_{mj} = \langle s_m(t), \phi_j(t)\rangle$$
$$r(t) = \sum_{j=1}^{N}(s_{mj} + n_j)\phi_j(t) + n_2(t)$$
Define $r_j = s_{mj} + n_j$, where
$$r_j = \langle s_m(t), \phi_j(t)\rangle + \langle n(t), \phi_j(t)\rangle = \langle s_m(t) + n(t), \phi_j(t)\rangle = \langle r(t), \phi_j(t)\rangle$$
So
$$r(t) = \sum_{j=1}^{N} r_j\phi_j(t) + n_2(t), \quad \text{where } r_j = \langle r(t), \phi_j(t)\rangle$$
The noise components $\{n_j\}$ are i.i.d. zero-mean Gaussian with variance $N_0/2$.

Waveform & Vector AWGN Channels (3)

Prove that the noise components $\{n_j\}$ are i.i.d. zero-mean Gaussian with variance $N_0/2$, where $n_j = \int n(t)\phi_j(t)\,dt$.
Zero mean:
$$E[n_j] = E\!\left[\int n(t)\phi_j(t)\,dt\right] = \int E[n(t)]\,\phi_j(t)\,dt = 0$$
Covariance (n(t) is white):
$$\mathrm{COV}[n_i n_j] = E[n_i n_j] - E[n_i]E[n_j] = E\!\left[\int n(t)\phi_i(t)\,dt\int n(s)\phi_j(s)\,ds\right] = \iint E[n(t)n(s)]\,\phi_i(t)\phi_j(s)\,dt\,ds$$
$$= \frac{N_0}{2}\iint \delta(t-s)\phi_i(t)\phi_j(s)\,dt\,ds = \frac{N_0}{2}\int\phi_i(s)\phi_j(s)\,ds = \begin{cases} N_0/2, & i = j \\ 0, & i \neq j \end{cases}$$

Waveform & Vector AWGN Channels (4)

$$\mathrm{COV}[n_j\, n_2(t)] = E[n_j\, n_2(t)] = E[n_j\, n(t)] - E[n_j\, n_1(t)]$$
$$= E\!\left[n(t)\int n(s)\phi_j(s)\,ds\right] - E\!\left[n_j\sum_{i=1}^{N} n_i\phi_i(t)\right] = \int\frac{N_0}{2}\delta(t-s)\phi_j(s)\,ds - \frac{N_0}{2}\phi_j(t) = \frac{N_0}{2}\phi_j(t) - \frac{N_0}{2}\phi_j(t) = 0$$
$n_2(t)$ is uncorrelated with $\{n_j\}$; since all are Gaussian, $n_2(t)$ and $n_1(t)$ are independent.

Waveform & Vector AWGN Channels (5)

Since $n_2(t)$ is independent of $s_m(t)$ and $n_1(t)$, in
$$r(t) = \sum_{j=1}^{N}(s_{mj} + n_j)\phi_j(t) + n_2(t)$$
only the first component carries information; the second component is irrelevant data and can be ignored.
The AWGN waveform channel
$$r(t) = s_m(t) + n(t), \quad 1 \le m \le M$$
is therefore equivalent to the N-dimensional vector channel
$$\mathbf{r} = \mathbf{s}_m + \mathbf{n}, \quad 1 \le m \le M$$

Chapter 4.2-1
Optimal Detection for the Vector AWGN Channel

4.2-1 Optimal Detection for the Vector AWGN Channel (1)

The MAP detector for the AWGN channel $\mathbf{r} = \mathbf{s}_m + \mathbf{n}$, with $\mathbf{n} \sim N\!\left(\mathbf{0}, \frac{N_0}{2}\mathbf{I}\right)$, is
$$\hat{m} = \arg\max_{1 \le m \le M}[P_m\, p(\mathbf{r}\,|\,\mathbf{s}_m)] = \arg\max_{1 \le m \le M} P_m\, p_n(\mathbf{r} - \mathbf{s}_m) = \arg\max_{1 \le m \le M} P_m\,\frac{1}{(\pi N_0)^{N/2}}\, e^{-\|\mathbf{r} - \mathbf{s}_m\|^2/N_0}$$
Since $(\pi N_0)^{-N/2}$ is a constant and $\ln(x)$ is increasing,
$$\hat{m} = \arg\max_{1 \le m \le M} P_m\, e^{-\|\mathbf{r} - \mathbf{s}_m\|^2/N_0} = \arg\max_{1 \le m \le M}\left[\ln P_m - \frac{\|\mathbf{r} - \mathbf{s}_m\|^2}{N_0}\right]$$

4.2-1 Optimal Detection for the Vector AWGN Channel (2)

$$\hat{m} = \arg\max_{1 \le m \le M}\left[\ln P_m - \frac{\|\mathbf{r} - \mathbf{s}_m\|^2}{N_0}\right]$$
Multiplying by $N_0/2$:
$$\hat{m} = \arg\max_{1 \le m \le M}\left[\frac{N_0}{2}\ln P_m - \frac{1}{2}\|\mathbf{r} - \mathbf{s}_m\|^2\right]$$
Expanding $\|\mathbf{r} - \mathbf{s}_m\|^2 = \|\mathbf{r}\|^2 + \|\mathbf{s}_m\|^2 - 2\,\mathbf{r}\cdot\mathbf{s}_m$, with $\|\mathbf{s}_m\|^2 = E_m$, and dropping the common term $\|\mathbf{r}\|^2$:
$$\hat{m} = \arg\max_{1 \le m \le M}\left[\frac{N_0}{2}\ln P_m - \frac{1}{2}E_m + \mathbf{r}\cdot\mathbf{s}_m\right] = \arg\max_{1 \le m \le M}\left[\eta_m + \mathbf{r}\cdot\mathbf{s}_m\right] \quad \text{(MAP)}$$
where the bias term is
$$\eta_m = \frac{N_0}{2}\ln P_m - \frac{1}{2}E_m$$

4.2-1 Optimal Detection for the Vector AWGN Channel (3)

MAP decision rule for the AWGN vector channel:
$$\hat{m} = \arg\max_{1 \le m \le M}\left[\eta_m + \mathbf{r}\cdot\mathbf{s}_m\right], \quad \eta_m = \frac{N_0}{2}\ln P_m - \frac{1}{2}E_m$$
If $P_m = 1/M$ for all m, the optimal decision becomes
$$\hat{m} = \arg\max_{1 \le m \le M}\left[\frac{N_0}{2}\ln P_m - \frac{1}{2}\|\mathbf{r} - \mathbf{s}_m\|^2\right] = \arg\min_{1 \le m \le M}\|\mathbf{r} - \mathbf{s}_m\|$$
This is the nearest-neighbor or minimum-distance detector.
For equiprobable signals in the AWGN channel: MAP = ML = minimum distance.
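As a hedged sketch of the rule above (not from the text), the snippet below implements $\hat{m} = \arg\max_m[\eta_m + \mathbf{r}\cdot\mathbf{s}_m]$ and checks that, for equiprobable equal-energy signals, it coincides with nearest-neighbor detection; the QPSK-like constellation and numbers are assumed examples.

```python
import numpy as np

# Sketch of the MAP rule  m_hat = argmax_m [eta_m + r . s_m],
# eta_m = (N0/2) ln P_m - E_m/2, and its equiprobable special case
# (minimum-distance detection). Constellation and values are assumed.
def map_detect(r, S, priors, N0):
    E = np.sum(S**2, axis=1)                      # signal energies E_m
    eta = 0.5 * N0 * np.log(priors) - 0.5 * E     # bias terms eta_m
    return int(np.argmax(eta + S @ r))            # argmax of eta_m + r . s_m

def min_distance_detect(r, S):
    return int(np.argmin(np.linalg.norm(S - r, axis=1)))

S = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
priors = np.full(4, 0.25)
r = np.array([0.8, 0.7])                          # received vector
assert map_detect(r, S, priors, N0=1.0) == min_distance_detect(r, S)
```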

4.2-1 Optimal Detection for the Vector AWGN Channel (4)

For the minimum-distance detector, the boundary between decision regions $D_m$ and $D_{m'}$ is equidistant from $\mathbf{s}_m$ and $\mathbf{s}_{m'}$.
Figure (not reproduced): a 2-dimensional constellation (N = 2) with 4 signal points (M = 4) and the corresponding decision regions.
When the signals are equiprobable and have equal energy,
$$\eta_m = \frac{N_0}{2}\ln P_m - \frac{1}{2}E_m \ \text{is independent of } m, \quad \text{so} \quad \hat{m} = \arg\max_{1 \le m \le M}\mathbf{r}\cdot\mathbf{s}_m$$

4.2-1 Optimal Detection for the Vector AWGN Channel (5)

In general, the decision region is
$$D_m = \{\mathbf{r} \in \mathbb{R}^N : \mathbf{r}\cdot\mathbf{s}_m + \eta_m > \mathbf{r}\cdot\mathbf{s}_{m'} + \eta_{m'},\ \text{for all } 1 \le m' \le M \text{ and } m' \neq m\} \quad (4.2\text{-}20)$$
Each region is described by at most M-1 inequalities; each boundary
$$\mathbf{r}\cdot(\mathbf{s}_m - \mathbf{s}_{m'}) > \eta_{m'} - \eta_m$$
is the equation of a hyperplane.
In the AWGN channel, $\mathbf{r}\cdot\mathbf{s}_m = \int r(t)s_m(t)\,dt$ and $E_m = \|\mathbf{s}_m\|^2 = \int s_m^2(t)\,dt$, so
$$\text{MAP:}\ \ \hat{m} = \arg\max_{1 \le m \le M}\left[\frac{N_0}{2}\ln P_m + \int r(t)s_m(t)\,dt - \frac{1}{2}\int s_m^2(t)\,dt\right]$$
$$\text{ML:}\ \ \hat{m} = \arg\max_{1 \le m \le M}\left[\int r(t)s_m(t)\,dt - \frac{1}{2}\int s_m^2(t)\,dt\right]$$

4.2-1 Optimal Detection for the Vector AWGN Channel (6)

Distance metric: the Euclidean distance between r and $\mathbf{s}_m$,
$$D(\mathbf{r}, \mathbf{s}_m) = \|\mathbf{r} - \mathbf{s}_m\|^2$$
Modified distance metric: the distance metric with the common term $\|\mathbf{r}\|^2$ removed,
$$D'(\mathbf{r}, \mathbf{s}_m) = -2\,\mathbf{r}\cdot\mathbf{s}_m + \|\mathbf{s}_m\|^2$$
Correlation metric: the negative of the modified distance metric,
$$C(\mathbf{r}, \mathbf{s}_m) = 2\,\mathbf{r}\cdot\mathbf{s}_m - \|\mathbf{s}_m\|^2 = 2\int r(t)s_m(t)\,dt - \int s_m^2(t)\,dt$$
With these definitions,
$$\text{MAP:}\ \ \hat{m} = \arg\max_{1 \le m \le M}[N_0\ln P_m - D'(\mathbf{r}, \mathbf{s}_m)] = \arg\max_{1 \le m \le M}[N_0\ln P_m + C(\mathbf{r}, \mathbf{s}_m)]$$
$$\text{ML:}\ \ \hat{m} = \arg\max_{1 \le m \le M} C(\mathbf{r}, \mathbf{s}_m)$$

Optimal Detection for Binary Antipodal Signaling (1)

In binary antipodal signaling,
$s_1(t) = s(t)$, with $p_1 = p$
$s_2(t) = -s(t)$, with $p_2 = 1 - p$
The vector representation (N = 1) is
$$s_1 = \sqrt{E_s} = \sqrt{E_b}, \quad s_2 = -\sqrt{E_s} = -\sqrt{E_b}$$
From (4.2-20),
$$D_1 = \left\{r : r\sqrt{E_b} + \frac{N_0}{2}\ln p - \frac{1}{2}E_b > -r\sqrt{E_b} + \frac{N_0}{2}\ln(1-p) - \frac{1}{2}E_b\right\} = \left\{r : r > \frac{N_0}{4\sqrt{E_b}}\ln\frac{1-p}{p}\right\} = \{r : r > r_{th}\}$$
where the threshold is $r_{th} = \dfrac{N_0}{4\sqrt{E_b}}\ln\dfrac{1-p}{p}$.

Optimal Detection for Binary Antipodal Signaling (2)

$$D_1 = \{r : r > r_{th}\}, \quad r_{th} = \frac{N_0}{4\sqrt{E_b}}\ln\frac{1-p}{p}$$
When $p \to 0$, $r_{th} \to \infty$: the entire real line becomes $D_2$.
When $p \to 1$, $r_{th} \to -\infty$: the entire real line becomes $D_1$.
When $p = 1/2$, $r_{th} = 0$: the minimum-distance rule.
Error probability of the MAP receiver:
$$P_e = \sum_{m=1}^{2} P_m\sum_{\substack{1 \le m' \le 2 \\ m' \neq m}}\int_{D_{m'}} p(r\,|\,s_m)\,dr = p\int_{D_2} p\!\left(r\,\middle|\,s = \sqrt{E_b}\right)dr + (1-p)\int_{D_1} p\!\left(r\,\middle|\,s = -\sqrt{E_b}\right)dr$$
$$= p\int_{-\infty}^{r_{th}} p\!\left(r\,\middle|\,s = \sqrt{E_b}\right)dr + (1-p)\int_{r_{th}}^{\infty} p\!\left(r\,\middle|\,s = -\sqrt{E_b}\right)dr$$

Optimal Detection for Binary Antipodal Signaling (3)

$$P_e = p\,P\!\left[N(\sqrt{E_b}, N_0/2) < r_{th}\right] + (1-p)\,P\!\left[N(-\sqrt{E_b}, N_0/2) > r_{th}\right]$$
Using $Q(x) = P[N(0,1) > x]$ and $Q(-x) = 1 - Q(x)$,
$$P[N(\sqrt{E_b}, N_0/2) < r_{th}] = 1 - P[N(\sqrt{E_b}, N_0/2) > r_{th}] = 1 - Q\!\left(\frac{r_{th} - \sqrt{E_b}}{\sqrt{N_0/2}}\right) = Q\!\left(\frac{\sqrt{E_b} - r_{th}}{\sqrt{N_0/2}}\right)$$
so
$$P_e = p\,Q\!\left(\frac{\sqrt{E_b} - r_{th}}{\sqrt{N_0/2}}\right) + (1-p)\,Q\!\left(\frac{r_{th} + \sqrt{E_b}}{\sqrt{N_0/2}}\right)$$
When $p = 1/2$, $r_{th} = 0$, and the error probability simplifies to
$$P_e = Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right)$$
Since the system is binary, $P_e = P_b$.
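A minimal sketch (not from the text) that checks $P_e = Q(\sqrt{2E_b/N_0})$ by Monte Carlo for the equiprobable case; the values of $E_b$, $N_0$, and the trial count are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

# Sketch: verify P_e = Q(sqrt(2 Eb/N0)) for equiprobable binary antipodal
# signaling by Monte Carlo. Eb, N0 and the trial count are assumed values.
rng = np.random.default_rng(1)
Eb, N0, trials = 1.0, 0.5, 200_000

bits = rng.integers(0, 2, trials)
s = np.where(bits == 0, np.sqrt(Eb), -np.sqrt(Eb))   # s1 = +sqrt(Eb), s2 = -sqrt(Eb)
r = s + rng.normal(0.0, np.sqrt(N0 / 2), trials)     # AWGN with variance N0/2
decisions = np.where(r > 0, 0, 1)                    # threshold r_th = 0 (p = 1/2)

Pe_sim = np.mean(decisions != bits)
Pe_theory = norm.sf(np.sqrt(2 * Eb / N0))            # Q(x) = norm.sf(x)
print(f"simulated {Pe_sim:.4f}  vs  theory {Pe_theory:.4f}")
```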

Error Probability for Equiprobable Binary Signaling Schemes (1)

In an AWGN channel the transmitter sends either $s_1(t)$ or $s_2(t)$; assume the two signals are equiprobable.
For equiprobable signals in an AWGN channel, the decision regions are separated by the perpendicular bisector of the line connecting $\mathbf{s}_1$ and $\mathbf{s}_2$, and the error probabilities when $s_1$ or $s_2$ is transmitted are equal.
When $s_1$ is transmitted, an error occurs when r falls in $D_2$, i.e. when the projection of $\mathbf{r} - \mathbf{s}_1$ onto $\mathbf{s}_2 - \mathbf{s}_1$ is greater than $d_{12}/2$, where $d_{12} = \|\mathbf{s}_2 - \mathbf{s}_1\|$:
$$P_b = P\!\left[\frac{\mathbf{n}\cdot(\mathbf{s}_2 - \mathbf{s}_1)}{d_{12}} > \frac{d_{12}}{2}\right] = P\!\left[\mathbf{n}\cdot(\mathbf{s}_2 - \mathbf{s}_1) > \frac{d_{12}^2}{2}\right]$$
where $(\mathbf{s}_2 - \mathbf{s}_1)/d_{12}$ is a unit vector and $\mathbf{n} = \mathbf{r} - \mathbf{s}_1$.

Error Probability for Equiprobable Binary Signaling Schemes (2)

Since $\mathbf{n}\cdot(\mathbf{s}_2 - \mathbf{s}_1) \sim N(0,\ d_{12}^2 N_0/2)$,
$$P_b = P\!\left[\mathbf{n}\cdot(\mathbf{s}_2 - \mathbf{s}_1) > d_{12}^2/2\right] = Q\!\left(\frac{d_{12}^2/2}{\sqrt{d_{12}^2 N_0/2}}\right) = Q\!\left(\frac{d_{12}}{\sqrt{2N_0}}\right)$$
using $P[X > \alpha] = Q\!\left(\frac{\alpha - m}{\sigma}\right)$ for $X \sim N(m, \sigma^2)$ (2.3-12).
Since Q(x) is decreasing, the minimum error probability corresponds to the maximum $d_{12}$, where
$$d_{12}^2 = \int\left(s_1(t) - s_2(t)\right)^2 dt$$
When the equiprobable signals have the same energy, $E_{s1} = E_{s2} = E$,
$$d_{12}^2 = E_{s1} + E_{s2} - 2\langle s_1(t), s_2(t)\rangle = 2E(1 - \rho)$$
where $-1 \le \rho \le 1$ is the cross-correlation coefficient.
$d_{12}$ is maximized when $\rho = -1$: antipodal signals.

Optimal Detection for Binary Orthogonal Signaling (1)

For binary orthogonal signals,
$$\int s_i(t)s_j(t)\,dt = \begin{cases} E_b, & i = j \\ 0, & i \neq j \end{cases} \qquad 1 \le i, j \le 2$$
Choosing $\phi_j(t) = s_j(t)/\sqrt{E_b}$, the vector representation is
$$\mathbf{s}_1 = \left(\sqrt{E_b},\ 0\right), \quad \mathbf{s}_2 = \left(0,\ \sqrt{E_b}\right)$$
When the signals are equiprobable (see figure), the error probability is (with $d_{12} = \sqrt{2E_b}$)
$$P_b = Q\!\left(\sqrt{d_{12}^2/2N_0}\right) = Q\!\left(\sqrt{E_b/N_0}\right)$$
For the same $P_b$, binary orthogonal signaling requires twice the energy per bit of binary antipodal signaling: a binary orthogonal signaling system requires twice the energy per bit of a binary antipodal signaling system to provide the same error probability.

Optimal Detection for Binary Orthogonal Signaling (2)

Figure (not reproduced): error probability vs. SNR per bit for binary orthogonal and binary antipodal signaling systems.
Signal-to-noise ratio (SNR) per bit: $\gamma_b = E_b/N_0$.

Chapter 4.2-2
Implementation of the Optimal Receiver for AWGN Channels

4.2-2 Implementation of the Optimal Receiver for AWGN Channels

We present two different implementations of the MAP receiver for the AWGN channel:
the correlation receiver, and
the matched filter receiver.
Both structures are equivalent in performance and result in the minimum error probability.

The Correlation Receiver (1)

The MAP decision in the AWGN channel (from 4.2-17) is
$$\hat{m} = \arg\max_{1 \le m \le M}\left[\eta_m + \mathbf{r}\cdot\mathbf{s}_m\right], \quad \text{where } \eta_m = \frac{N_0}{2}\ln P_m - \frac{1}{2}E_m$$
and r is derived from $r_j = \int r(t)\phi_j(t)\,dt$ (the correlation receiver):
1) Correlate r(t) with the basis functions to form r, and compute the inner product of r with $\mathbf{s}_m$, $1 \le m \le M$.
2) Add the bias term $\eta_m$.
3) Choose the m that maximizes the result.

The Correlation Receiver (2)

Figure (not reproduced): block diagram of the correlation receiver.
The bias terms $\eta_m$ and the signal vectors $\mathbf{s}_m$ can be computed once and stored in memory.

The Correlation Receiver (3)

Another implementation correlates r(t) directly with each signal waveform:
$$\hat{m} = \arg\max_{1 \le m \le M}\left[\eta_m + \int r(t)s_m(t)\,dt\right], \quad \text{where } \eta_m = \frac{N_0}{2}\ln P_m - \frac{1}{2}E_m$$
This requires M correlators; since usually M > N, it is less preferred.

The Matched Filter Receiver (1)

In both correlation receivers, we compute
$$r_x = \int r(t)x(t)\,dt$$
where x(t) is $\phi_j(t)$ or $s_m(t)$.
Define $h(t) = x(T - t)$ for an arbitrary T: the filter matched to x(t).
If r(t) is applied to h(t), the output y(t) is
$$y(t) = r(t) * h(t) = \int r(\tau)h(t - \tau)\,d\tau = \int r(\tau)x(T - t + \tau)\,d\tau$$
using $h(t - \tau) = x(T - (t - \tau)) = x(T - t + \tau)$.
At t = T,
$$r_x = y(T) = \int r(\tau)x(\tau)\,d\tau$$
so $r_x$ can be obtained by sampling the output of the matched filter at t = T.
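A small discrete-time sketch (not from the text) showing that the matched filter output sampled at the end of the symbol equals the correlator output; the rectangular pulse and noise level are assumed examples.

```python
import numpy as np

# Discrete-time sketch: the matched filter h[n] = x[N-1-n], sampled at the
# end of the symbol, reproduces the correlator output sum(r[n] x[n]).
rng = np.random.default_rng(2)
N = 64
x = np.ones(N) / np.sqrt(N)              # unit-energy pulse x[n]
r = x + 0.3 * rng.standard_normal(N)     # received samples r[n] = x[n] + noise

h = x[::-1]                              # matched filter impulse response
y = np.convolve(r, h)                    # filter output y[n]
r_x_filter = y[N - 1]                    # sample at the end of the pulse (t = T)
r_x_corr = np.dot(r, x)                  # direct correlator output

assert np.isclose(r_x_filter, r_x_corr)
```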

The Matched Filter Receiver (2)

Figure (not reproduced): a matched filter implementation of the optimal receiver.

The Matched Filter Receiver (3)

Frequency-domain interpretation. Property I:
The filter matched to the signal s(t) is $h(t) = s(T - t)$. Taking the Fourier transform,
$$H(f) = S^*(f)\,e^{-j2\pi fT}$$
i.e. the conjugate of the signal spectrum multiplied by a sampling delay of T, so
$$|H(f)| = |S(f)|, \quad \angle H(f) = -\angle S(f) - 2\pi fT$$

The Matched Filter Receiver (4)

Property II: signal-to-noise maximizing property.
Assume that r(t) = s(t) + n(t) is passed through a filter h(t), and the output $y(t) = y_s(t) + v(t)$ is sampled at time T.
Signal part: $\mathcal{F}\{y_s(t)\} = H(f)S(f)$, so
$$y_s(T) = \int H(f)S(f)e^{j2\pi fT}\,df$$
Zero-mean Gaussian noise: $S_v(f) = (N_0/2)|H(f)|^2$, so
$$\mathrm{VAR}[v(T)] = \frac{N_0}{2}\int|H(f)|^2\,df = \frac{N_0}{2}E_h$$
where $E_h$ is the energy in h(t), using Rayleigh's theorem: $\int|x(t)|^2\,dt = \int|X(f)|^2\,df = E_x$.

The Matched Filter Receiver (5)

The SNR at the output of the filter H(f) is
$$\mathrm{SNR}_O = \frac{y_s^2(T)}{\mathrm{VAR}[v(T)]}$$
From the Cauchy-Schwarz inequality,
$$y_s^2(T) = \left|\int H(f)S(f)e^{j2\pi fT}\,df\right|^2 \le \int|H(f)|^2\,df\int\left|S(f)e^{j2\pi fT}\right|^2\,df = E_h E_s$$
(using Rayleigh's theorem), with equality iff $H(f) = \alpha S^*(f)e^{-j2\pi fT}$, $\alpha \in \mathbb{C}$. Hence
$$\mathrm{SNR}_O \le \frac{E_s E_h}{(N_0/2)E_h} = \frac{2E_s}{N_0}$$
The matched filter $h(t) = s(T - t)$, i.e. $H(f) = S^*(f)e^{-j2\pi fT}$, maximizes the output SNR.

Matched Filter

Time-domain property of the matched filter:
If a signal s(t) is corrupted by AWGN, the filter with an impulse response matched to s(t) maximizes the output signal-to-noise ratio (SNR).
Proof: let us assume the received signal r(t) consists of the signal s(t) and AWGN n(t) with zero mean and $\Phi_{nn}(f) = \frac{1}{2}N_0$ W/Hz.
Suppose the signal r(t) is passed through a filter with impulse response h(t), $0 \le t \le T$, and its output is sampled at time t = T. The output signal of the filter is
$$y(t) = r(t)*h(t) = \int_0^t r(\tau)h(t-\tau)\,d\tau = \int_0^t s(\tau)h(t-\tau)\,d\tau + \int_0^t n(\tau)h(t-\tau)\,d\tau$$

Matched Filter

Proof (cont.):
At the sampling instant t = T,
$$y(T) = \int_0^T s(\tau)h(T-\tau)\,d\tau + \int_0^T n(\tau)h(T-\tau)\,d\tau = y_s(T) + y_n(T)$$
The problem is to select the filter impulse response that maximizes the output SNR, defined as
$$\mathrm{SNR}_0 = \frac{y_s^2(T)}{E[y_n^2(T)]}$$
The noise term is
$$E[y_n^2(T)] = \int_0^T\!\!\int_0^T E[n(\tau)n(t)]\,h(T-\tau)h(T-t)\,dt\,d\tau = \frac{1}{2}N_0\int_0^T\!\!\int_0^T\delta(t-\tau)h(T-\tau)h(T-t)\,dt\,d\tau = \frac{1}{2}N_0\int_0^T h^2(T-t)\,dt$$

Matched Filter

Proof (cont.):
Substituting $y_s(T)$ and $E[y_n^2(T)]$ into $\mathrm{SNR}_0$ (with the change of variable $\tau' = T - \tau$):
$$\mathrm{SNR}_0 = \frac{\left[\int_0^T s(\tau)h(T-\tau)\,d\tau\right]^2}{\frac{1}{2}N_0\int_0^T h^2(T-t)\,dt} = \frac{\left[\int_0^T h(\tau')s(T-\tau')\,d\tau'\right]^2}{\frac{1}{2}N_0\int_0^T h^2(T-t)\,dt}$$
The denominator of the SNR depends on the energy in h(t).
The maximum output SNR over h(t) is obtained by maximizing the numerator subject to the constraint that the denominator is held constant.

Matched Filter

Proof (cont.):
Cauchy-Schwarz inequality: if $g_1(t)$ and $g_2(t)$ are finite-energy signals, then
$$\left[\int_{-\infty}^{\infty} g_1(t)g_2(t)\,dt\right]^2 \le \int_{-\infty}^{\infty} g_1^2(t)\,dt\int_{-\infty}^{\infty} g_2^2(t)\,dt$$
with equality when $g_1(t) = Cg_2(t)$ for any arbitrary constant C.
If we set $g_1(t) = h(t)$ and $g_2(t) = s(T-t)$, it is clear that the SNR is maximized when $h(t) = Cs(T-t)$.

Matched Filter

Proof (cont.):
The output (maximum) SNR obtained with the matched filter $h(t) = s(T-t)$, i.e. $h(T-\tau) = s(\tau)$, is
$$\mathrm{SNR}_0 = \frac{\left[\int_0^T s(\tau)h(T-\tau)\,d\tau\right]^2}{\frac{1}{2}N_0\int_0^T h^2(T-t)\,dt} = \frac{C^2\left[\int_0^T s^2(\tau)\,d\tau\right]^2}{\frac{1}{2}N_0C^2\int_0^T s^2(t)\,dt} = \frac{2}{N_0}\int_0^T s^2(t)\,dt = \frac{2E_s}{N_0}$$
Note that the output SNR from the matched filter depends on the energy of the waveform s(t), but not on the detailed characteristics of s(t).

The Matched Filter Receiver (6)

Ex. 4.2-1: M = 4 biorthogonal signals are constructed from the two signals in Fig. (a) for transmission over an AWGN channel. The noise is zero mean with PSD $N_0/2$.
The dimension is N = 2, and the basis functions are
$$\phi_1(t) = \sqrt{2/T}, \quad 0 \le t \le T/2, \qquad \phi_2(t) = \sqrt{2/T}, \quad T/2 \le t \le T$$
The impulse responses of the two matched filters (Fig. (b)) are
$$h_1(t) = \phi_1(T-t) = \sqrt{2/T}, \quad T/2 \le t \le T, \qquad h_2(t) = \phi_2(T-t) = \sqrt{2/T}, \quad 0 \le t \le T/2$$
with $y(t) = s(t)*h(t) = \int s(\tau)h(t-\tau)\,d\tau$.

The Matched Filter Receiver (7)

If $s_1(t)$ is transmitted, the noise-free responses of the two matched filters (Fig. (c)), sampled at t = T, are
$$y_{1s}(T) = A\sqrt{T/2} = \sqrt{E}, \qquad y_{2s}(T) = 0$$
If $s_1(t)$ is transmitted, the received vector formed from the two matched filter outputs at the sampling instant t = T is
$$\mathbf{r} = (r_1, r_2) = \left(\sqrt{E} + n_1,\ n_2\right)$$
where the noise components are $n_1 = y_{1n}(T)$ and $n_2 = y_{2n}(T)$, with
$$y_{kn}(T) = \int_0^T n(t)\phi_k(t)\,dt, \quad k = 1, 2$$
a) $E[n_k] = E[y_{kn}(T)] = 0$
b) $\mathrm{VAR}[n_k] = (N_0/2)E_{\phi_k} = N_0/2$ (from 4.2-52: $\mathrm{VAR}[v(T)] = (N_0/2)E_h$)
The SNR for the first matched filter is
$$\mathrm{SNR}_O = \frac{(\sqrt{E})^2}{N_0/2} = \frac{2E}{N_0}$$

Chapter 4.2-3
A Union Bound on the Probability of Error of ML Detection

4.2-3 A Union Bound on the Probability of Error of ML Detection (1)

When the signals are equiprobable, $P_m = 1/M$, and the ML decision is optimal. The error probability becomes
$$P_e = \frac{1}{M}\sum_{m=1}^{M} P_{e|m} = \frac{1}{M}\sum_{m=1}^{M}\sum_{\substack{1 \le m' \le M \\ m' \neq m}}\int_{D_{m'}} p(\mathbf{r}\,|\,\mathbf{s}_m)\,d\mathbf{r}$$
For the AWGN channel,
$$P_{e|m} = \sum_{\substack{1 \le m' \le M \\ m' \neq m}}\int_{D_{m'}} p(\mathbf{r}\,|\,\mathbf{s}_m)\,d\mathbf{r} = \sum_{\substack{1 \le m' \le M \\ m' \neq m}}\int_{D_{m'}} p_n(\mathbf{r} - \mathbf{s}_m)\,d\mathbf{r} = \sum_{\substack{1 \le m' \le M \\ m' \neq m}}\int_{D_{m'}}\frac{1}{(\pi N_0)^{N/2}}e^{-\|\mathbf{r}-\mathbf{s}_m\|^2/N_0}\,d\mathbf{r} \quad (4.2\text{-}63)$$
For most constellations these integrals do not have a closed form.
It is convenient to have upper bounds for the error probability.
The union bound is the simplest and most widely used bound; it is quite tight, particularly at high SNR.

4.2-3 A Union Bound on the Probability of Error of ML Detection (2)

In general, the decision region $D_{m'}$ under ML detection is
$$D_{m'} = \{\mathbf{r} \in \mathbb{R}^N : p(\mathbf{r}\,|\,\mathbf{s}_{m'}) > p(\mathbf{r}\,|\,\mathbf{s}_k),\ \text{for all } 1 \le k \le M \text{ and } k \neq m'\}$$
Define $D_{mm'} = \{\mathbf{r} : p(\mathbf{r}\,|\,\mathbf{s}_{m'}) > p(\mathbf{r}\,|\,\mathbf{s}_m)\}$: the decision region for m' in a binary equiprobable system with signals $\mathbf{s}_m$ and $\mathbf{s}_{m'}$.
Clearly $D_{m'} \subseteq D_{mm'}$, so
$$P_{e|m} = \sum_{\substack{1 \le m' \le M \\ m' \neq m}}\int_{D_{m'}} p(\mathbf{r}\,|\,\mathbf{s}_m)\,d\mathbf{r} \le \sum_{\substack{1 \le m' \le M \\ m' \neq m}}\int_{D_{mm'}} p(\mathbf{r}\,|\,\mathbf{s}_m)\,d\mathbf{r} = \sum_{\substack{1 \le m' \le M \\ m' \neq m}} P_{mm'} \quad (4.2\text{-}67)$$
where $P_{mm'} = \int_{D_{mm'}} p(\mathbf{r}\,|\,\mathbf{s}_m)\,d\mathbf{r}$ is the pairwise error probability. Hence
$$P_e \le \frac{1}{M}\sum_{m=1}^{M}\sum_{\substack{1 \le m' \le M \\ m' \neq m}} P_{mm'}$$

4.2-3 A Union Bound on the Probability of Error of ML Detection (3)

In an AWGN channel the pairwise error probability is that of a binary scheme:
$$P_{mm'} = Q\!\left(\sqrt{\frac{d_{mm'}^2}{2N_0}}\right), \qquad d_{mm'} = \|\mathbf{s}_m - \mathbf{s}_{m'}\|$$
so
$$P_e \le \frac{1}{M}\sum_{m=1}^{M}\sum_{\substack{1 \le m' \le M \\ m' \neq m}} Q\!\left(\sqrt{\frac{d_{mm'}^2}{2N_0}}\right) \le \frac{1}{2M}\sum_{m=1}^{M}\sum_{\substack{1 \le m' \le M \\ m' \neq m}} e^{-\frac{d_{mm'}^2}{4N_0}}$$
using $Q(x) \le \frac{1}{2}e^{-x^2/2}$ (4.2-37). This is the union bound for an AWGN channel.
Distance enumerator function for a constellation:
$$T(X) = \sum_{\text{all distinct } d\text{'s}} a_d\,X^{d^2}$$
where $a_d$ is the number of ordered pairs (m, m') such that $m \neq m'$ and $\|\mathbf{s}_m - \mathbf{s}_{m'}\| = d$.

4.2-3 A Union Bound on the Probability of Error of ML Detection (4)

Union bound:
$$P_e \le \frac{1}{2M}\sum_{m=1}^{M}\sum_{\substack{1 \le m' \le M \\ m' \neq m}} e^{-\frac{d_{mm'}^2}{4N_0}} = \frac{1}{2M}\,T(X)\Big|_{X = e^{-\frac{1}{4N_0}}}$$
Minimum distance: $d_{\min} = \min_{1 \le m, m' \le M,\ m \neq m'}\|\mathbf{s}_m - \mathbf{s}_{m'}\|$.
Since Q(x) is decreasing,
$$Q\!\left(\sqrt{\frac{d_{mm'}^2}{2N_0}}\right) \le Q\!\left(\sqrt{\frac{d_{\min}^2}{2N_0}}\right)$$
so the error probability satisfies the looser form of the union bound
$$P_e \le \frac{1}{M}\sum_{m=1}^{M}\sum_{\substack{1 \le m' \le M \\ m' \neq m}} Q\!\left(\sqrt{\frac{d_{mm'}^2}{2N_0}}\right) \le (M-1)\,Q\!\left(\sqrt{\frac{d_{\min}^2}{2N_0}}\right) \le \frac{M-1}{2}\,e^{-\frac{d_{\min}^2}{4N_0}}$$
A good constellation provides the maximum possible minimum distance.
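A hedged sketch (not from the text) that evaluates the union bound directly from a constellation's pairwise distances, alongside the looser $d_{\min}$ form; the example constellation (4-PAM-like points) and $N_0$ value are assumptions.

```python
import numpy as np
from scipy.stats import norm

# Sketch of the union bound  P_e <= (1/M) sum_m sum_{m'!=m} Q(d_mm'/sqrt(2 N0))
# and the looser form (M-1) Q(d_min/sqrt(2 N0)). Example values are assumed.
def union_bound(S, N0):
    M = S.shape[0]
    total = 0.0
    for m in range(M):
        d = np.linalg.norm(S - S[m], axis=1)
        d = d[np.arange(M) != m]                       # distances to other points
        total += np.sum(norm.sf(d / np.sqrt(2 * N0)))  # Q(x) = norm.sf(x)
    return total / M

S = np.array([[-3.0], [-1.0], [1.0], [3.0]])
N0, M, d_min = 0.5, S.shape[0], 2.0
print("union bound      :", union_bound(S, N0))
print("looser d_min form:", (M - 1) * norm.sf(d_min / np.sqrt(2 * N0)))
```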

4.2-3 A Union Bound on the Probability of Error of ML Detection (5)

Ex. 4.2-2: Consider a 16-QAM constellation (M = 16).
From Chapter 3.2 (Eq. 3.2-44), the minimum distance is
$$d_{\min} = \sqrt{\frac{6\log_2 M}{M-1}E_{b,\mathrm{avg}}} = \sqrt{\frac{8}{5}E_{b,\mathrm{avg}}}$$
There are in total 16 × 15 = 240 ordered pairs, hence 240 pairwise distances.
Distance enumerator function (with $d = d_{\min}$):
$$T(X) = 48X^{d^2} + 36X^{2d^2} + 32X^{4d^2} + 48X^{5d^2} + 16X^{8d^2} + 16X^{9d^2} + 24X^{10d^2} + 16X^{13d^2} + 4X^{18d^2}$$
Upper bound on the error probability:
$$P_e \le \frac{1}{32}\,T\!\left(e^{-\frac{1}{4N_0}}\right)$$

4.2-3 A Union Bound on the Probability of Error of ML Detection (6)

Looser bound:
$$P_e \le \frac{M-1}{2}e^{-\frac{d_{\min}^2}{4N_0}} = \frac{15}{2}e^{-\frac{2E_{b,\mathrm{avg}}}{5N_0}}$$
When the SNR is large, $T(X) \approx 48X^{d^2}$, so
$$P_e \le \frac{1}{32}\,T\!\left(e^{-\frac{1}{4N_0}}\right) \approx \frac{48}{32}e^{-\frac{d_{\min}^2}{4N_0}} = \frac{3}{2}e^{-\frac{2E_{b,\mathrm{avg}}}{5N_0}}$$
Exact error probability (see Example 4.3-1):
$$P_e = 3Q\!\left(\sqrt{\frac{4E_{b,\mathrm{avg}}}{5N_0}}\right) - \frac{9}{4}Q^2\!\left(\sqrt{\frac{4E_{b,\mathrm{avg}}}{5N_0}}\right)$$

Lower Bound on the Probability of Error (1)

In an equiprobable M-ary signaling scheme,
$$P_e = \frac{1}{M}\sum_{m=1}^{M} P[\text{Error}\,|\,m \text{ sent}] = \frac{1}{M}\sum_{m=1}^{M}\int_{D_m^c} p(\mathbf{r}\,|\,\mathbf{s}_m)\,d\mathbf{r} \ge \frac{1}{M}\sum_{m=1}^{M}\int_{D_{mm'}} p(\mathbf{r}\,|\,\mathbf{s}_m)\,d\mathbf{r} = \frac{1}{M}\sum_{m=1}^{M} Q\!\left(\frac{d_{mm'}}{\sqrt{2N_0}}\right), \ \ \text{for any } m' \neq m$$
since $D_{mm'} \subseteq D_m^c$.
To derive the tightest lower bound, maximize the right-hand side over m' (i.e. find the m' such that $d_{mm'}$ is minimized):
$$\frac{1}{M}\sum_{m=1}^{M}\max_{m' \neq m} Q\!\left(\frac{d_{mm'}}{\sqrt{2N_0}}\right) = \frac{1}{M}\sum_{m=1}^{M} Q\!\left(\frac{d_{\min}^m}{\sqrt{2N_0}}\right)$$
where $d_{\min}^m$ denotes the distance from $\mathbf{s}_m$ to its nearest neighbor. Hence
$$P_e \ge \frac{1}{M}\sum_{m=1}^{M} Q\!\left(\frac{d_{\min}^m}{\sqrt{2N_0}}\right)$$

Lower Bound on the Probability of Error (2)

Since $d_{\min}^m \ge d_{\min}$ ($d_{\min}^m$ is the distance from $\mathbf{s}_m$ to its nearest neighbor in the constellation),
$$Q\!\left(\frac{d_{\min}^m}{\sqrt{2N_0}}\right) \ge \begin{cases} Q\!\left(d_{\min}/\sqrt{2N_0}\right), & \text{at least one signal at distance } d_{\min} \text{ from } \mathbf{s}_m \\ 0, & \text{otherwise} \end{cases}$$
so
$$P_e \ge \frac{1}{M}\sum_{\substack{1 \le m \le M \\ \exists\, m' \neq m:\ \|\mathbf{s}_m - \mathbf{s}_{m'}\| = d_{\min}}} Q\!\left(\frac{d_{\min}}{\sqrt{2N_0}}\right)$$
With $N_{\min}$ = number of points in the constellation such that $\exists\, m' \neq m : \|\mathbf{s}_m - \mathbf{s}_{m'}\| = d_{\min}$,
$$\frac{N_{\min}}{M}\,Q\!\left(\frac{d_{\min}}{\sqrt{2N_0}}\right) \le P_e \le (M-1)\,Q\!\left(\frac{d_{\min}}{\sqrt{2N_0}}\right)$$

Chapter 4.3
Optimal Detection and Error Probability for Band-Limited Signaling

Chapter 4.3-1 Optimal Detection & Error Prob. for ASK or PAM Signaling

In this section we study signaling schemes that are mainly characterized by their low bandwidth requirements.
These signaling schemes have a low dimensionality which is independent of the number of transmitted signals, and, as we will see, their power efficiency decreases when the number of messages increases.
This family of signaling schemes includes ASK, PSK, and QAM.

Chapter 4.3-1 Optimal Detection & Error Prob. for ASK or PAM Signaling

For an ASK (PAM) signaling scheme (3.2-22):
$$d_{\min} = \sqrt{\frac{12\log_2 M}{M^2 - 1}E_{b,\mathrm{avg}}}$$
The constellation points are $\{\pm d_{\min}/2,\ \pm 3d_{\min}/2,\ \ldots,\ \pm(M-1)d_{\min}/2\}$.
There are two types of signal points:
M-2 inner points: a detection error occurs if $|n| > d_{\min}/2$.
2 outer points: the error probability is half that of the inner points.
Let $P_{ei}$ and $P_{eo}$ be the error probabilities of the inner and outer points:
$$P_{ei} = P\!\left[|n| > \frac{1}{2}d_{\min}\right] = 2Q\!\left(\frac{d_{\min}}{\sqrt{2N_0}}\right), \qquad P_{eo} = \frac{1}{2}P_{ei} = Q\!\left(\frac{d_{\min}}{\sqrt{2N_0}}\right)$$

Chapter 4.3-1 Optimal Detection & Error Prob. for ASK or PAM Signaling

Symbol error probability:
$$P_e = \frac{1}{M}\sum_{m=1}^{M} P[\text{error}\,|\,m \text{ sent}] = \frac{1}{M}\left[2(M-2)Q\!\left(\frac{d_{\min}}{\sqrt{2N_0}}\right) + 2Q\!\left(\frac{d_{\min}}{\sqrt{2N_0}}\right)\right] = \frac{2(M-1)}{M}Q\!\left(\frac{d_{\min}}{\sqrt{2N_0}}\right)$$
Substituting $d_{\min} = \sqrt{\frac{12\log_2 M}{M^2-1}E_{b,\mathrm{avg}}}$,
$$P_e = 2\left(1 - \frac{1}{M}\right)Q\!\left(\sqrt{\frac{6\log_2 M}{M^2-1}\frac{E_{b,\mathrm{avg}}}{N_0}}\right) \approx 2Q\!\left(\sqrt{\frac{6\log_2 M}{M^2-1}\frac{E_{b,\mathrm{avg}}}{N_0}}\right) \quad \text{for large } M$$
The argument of Q decreases with M: doubling M (increasing the rate by 1 bit/transmission) requires roughly 4 times the SNR/bit to keep the same performance.
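A minimal sketch (not from the text) evaluating the M-PAM symbol error probability formula above; the $E_b/N_0$ grid is an assumed example.

```python
import numpy as np
from scipy.stats import norm

# Sketch: symbol error probability of M-ary PAM,
# P_e = 2(1 - 1/M) Q( sqrt(6 log2(M)/(M^2 - 1) * Eb/N0) ).
def pam_ser(M, EbN0_dB):
    EbN0 = 10 ** (np.asarray(EbN0_dB) / 10)
    arg = np.sqrt(6 * np.log2(M) / (M**2 - 1) * EbN0)
    return 2 * (1 - 1 / M) * norm.sf(arg)      # Q(x) = norm.sf(x)

for M in (2, 4, 8, 16):
    print(M, pam_ser(M, EbN0_dB=[6.0, 10.0, 14.0]))
```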

Chapter 4.3-1 Optimal Detection & Error Prob. for ASK or PAM Signaling

Figure (not reproduced): symbol error probability for ASK or PAM signaling.
For large M, the gap between the curves for M and 2M is roughly 6 dB.

Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

In M-ary PSK signaling, assume the signals are equiprobable:
the minimum-distance decision is optimal, and
the error probability equals the error probability when $s_1$ is transmitted.
When $\mathbf{s}_1 = (\sqrt{E}, 0)$ is transmitted, the received signal is
$$\mathbf{r} = (r_1, r_2) = \left(\sqrt{E} + n_1,\ n_2\right)$$
$r_1 \sim N(\sqrt{E}, N_0/2)$ and $r_2 \sim N(0, N_0/2)$ are independent:
$$p(r_1, r_2) = \frac{1}{\pi N_0}\exp\!\left(-\frac{(r_1 - \sqrt{E})^2 + r_2^2}{N_0}\right)$$
In polar coordinates, $V = \sqrt{r_1^2 + r_2^2}$, $\Theta = \arctan(r_2/r_1)$:
$$p_{V,\Theta}(v, \theta) = \frac{v}{\pi N_0}\exp\!\left(-\frac{v^2 + E - 2\sqrt{E}\,v\cos\theta}{N_0}\right)$$

Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

The marginal PDF of Θ is
$$p_\Theta(\theta) = \int_0^\infty p_{V,\Theta}(v, \theta)\,dv = \frac{1}{2\pi}e^{-\gamma_s\sin^2\theta}\int_0^\infty v\,e^{-\left(v - \sqrt{2\gamma_s}\cos\theta\right)^2/2}\,dv$$
where the symbol SNR is $\gamma_s = E/N_0$.
As $\gamma_s$ increases, $p_\Theta(\theta)$ becomes more peaked around θ = 0.

Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

Decision region: $D_1 = \{\theta : -\pi/M < \theta \le \pi/M\}$.
The error probability is
$$P_e = 1 - \int_{-\pi/M}^{\pi/M} p_\Theta(\theta)\,d\theta$$
It does not have a simple form except for M = 2 or 4.
When M = 2 (binary antipodal signaling):
$$P_b = Q\!\left(\sqrt{2E_b/N_0}\right)$$
When M = 4 (two binary phase modulations on quadrature carriers):
$$P_c = (1 - P_b)^2 = \left[1 - Q\!\left(\sqrt{2E_b/N_0}\right)\right]^2$$
$$P_e = 1 - P_c = 2Q\!\left(\sqrt{2E_b/N_0}\right)\left[1 - \frac{1}{2}Q\!\left(\sqrt{2E_b/N_0}\right)\right]$$

Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

Figure (not reproduced): symbol error probability of M-PSK; as M increases, the required SNR increases.

Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

For large SNR ($E/N_0 \gg 1$), $p_\Theta(\theta)$ is approximated by
$$p_\Theta(\theta) \approx \sqrt{\gamma_s/\pi}\,\cos\theta\,e^{-\gamma_s\sin^2\theta}, \quad |\theta| \le \pi/2$$
The error probability is then approximated (with the substitution $u = \sqrt{2\gamma_s}\sin\theta$) by
$$P_e \approx 1 - \int_{-\pi/M}^{\pi/M}\sqrt{\gamma_s/\pi}\,\cos\theta\,e^{-\gamma_s\sin^2\theta}\,d\theta = 2Q\!\left(\sqrt{2\gamma_s}\sin\frac{\pi}{M}\right) = 2Q\!\left(\sqrt{(2\log_2 M)\frac{E_b}{N_0}}\sin\frac{\pi}{M}\right)$$
using $\gamma_s = \dfrac{E}{N_0} = (\log_2 M)\dfrac{E_b}{N_0}$.
When M = 2 or 4: $P_e \approx 2Q\!\left(\sqrt{2E_b/N_0}\right)$.
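A short sketch (not from the text) evaluating the high-SNR M-PSK approximation above; the $E_b/N_0$ value is an assumed example.

```python
import numpy as np
from scipy.stats import norm

# Sketch: high-SNR approximation of the M-PSK symbol error probability,
# P_e ~= 2 Q( sqrt(2 log2(M) Eb/N0) * sin(pi/M) ).
def psk_ser_approx(M, EbN0_dB):
    EbN0 = 10 ** (EbN0_dB / 10)
    return 2 * norm.sf(np.sqrt(2 * np.log2(M) * EbN0) * np.sin(np.pi / M))

for M in (2, 4, 8, 16):
    print(M, psk_ser_approx(M, EbN0_dB=10.0))
```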

Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

For large M and large SNR, $\sin(\pi/M) \approx \pi/M$, so the error probability is approximated by
$$P_e \approx 2Q\!\left(\sqrt{\frac{2\pi^2\log_2 M}{M^2}\frac{E_b}{N_0}}\right) \quad \text{for large } M$$
For large M, doubling M reduces the effective SNR by 6 dB.
When a Gray code is used in the mapping, the most probable errors are erroneous selections of a phase adjacent to the true phase, so
$$P_b \approx \frac{1}{k}P_e$$

Differentially Encoded PSK Signaling

In practice, the carrier phase is extracted from the received signal by performing a nonlinear operation, which introduces a phase ambiguity.
For BPSK:
The received signal is first squared.
The resulting double-frequency component is filtered.
The signal is divided by 2 in frequency to extract an estimate of the carrier frequency and phase.
This operation results in a phase ambiguity of 180° in the carrier phase.
For QPSK, there are phase ambiguities of ±90° and 180° in the phase estimate.
Consequently, we do not have an absolute estimate of the carrier phase for demodulation.

Differentially Encoded PSK Signaling

The phase ambiguity can be overcome by encoding the information in phase differences between successive signals.
In BPSK:
Bit 1 is transmitted by shifting the phase of the carrier by 180°.
Bit 0 is transmitted by a zero phase shift.
In QPSK, the phase shifts are 0°, 90°, 180°, and -90°, corresponding to the bit pairs 00, 01, 11, and 10, respectively.
The PSK signals resulting from this encoding process are said to be differentially encoded.
The detector is a simple phase comparator that compares the phases of the demodulated signal over two consecutive intervals to extract the information.

Differentially Encoded PSK Signaling

Coherent demodulation of differentially encoded PSK results in a higher error probability than that derived for absolute phase encoding.
With differentially encoded PSK, an error in the demodulated phase of the signal in any given interval usually results in decoding errors over two consecutive signaling intervals.
The error probability of differentially encoded M-ary PSK is approximately twice the error probability of M-ary PSK with absolute phase encoding.
This amounts to only a relatively small loss in SNR.

Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

In the detection of QAM signals, we need two filters matched to
$$\phi_1(t) = \sqrt{2/E_g}\,g(t)\cos 2\pi f_c t, \qquad \phi_2(t) = -\sqrt{2/E_g}\,g(t)\sin 2\pi f_c t$$
The outputs of the matched filters give $\mathbf{r} = (r_1, r_2)$.
Compute $C(\mathbf{r}, \mathbf{s}_m) = 2\,\mathbf{r}\cdot\mathbf{s}_m - E_m$ (see 4.2-28) and select $\hat{m} = \arg\max_{1 \le m \le M} C(\mathbf{r}, \mathbf{s}_m)$.
To determine $P_e$ we must specify the signal constellation.
For M = 4, Figs. (a) and (b) are possible constellations; assume both have $d_{\min} = 2A$.
(a) Four-phase constellation of radius $r = \sqrt{2}A$: $E_{\mathrm{avg}} = 2A^2$.
(b) Two-amplitude constellation with $A_1 = A$, $A_2 = \sqrt{3}A$: $E_{\mathrm{avg}} = \frac{1}{4}\left[2(3A^2) + 2A^2\right] = 2A^2$.

Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

When M = 8, there are four possible constellations, shown in Figs. (a)-(d).
Signal points $(A_{mc}, A_{ms})$, all with $d_{\min} = 2A$.
Average energy:
$$E_{\mathrm{avg}} = \frac{1}{M}\sum_{m=1}^{M}\left(A_{mc}^2 + A_{ms}^2\right) = \frac{A^2}{M}\sum_{m=1}^{M}\left(a_{mc}^2 + a_{ms}^2\right)$$
(a) and (c): $E_{\mathrm{avg}} = 6A^2$
(b): $E_{\mathrm{avg}} = 6.83A^2$
(d): $E_{\mathrm{avg}} = 4.73A^2$
Constellation (d) requires the least energy.

Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

Rectangular QAM:
Generated by two PAM signals on in-phase and quadrature carriers.
Easily demodulated.
For M ≥ 16, it requires only slightly more energy than the best M-QAM constellation.
When k is even, the constellation is square, with minimum distance
$$d_{\min} = \sqrt{\frac{6\log_2 M}{M-1}E_{b,\mathrm{avg}}}$$
It can be considered as two independent $\sqrt{M}$-ary PAM constellations.
An error occurs if either $n_1$ or $n_2$ is large enough to cause an error.
The probability of a correct decision is
$$P_{c,M\text{-QAM}} = P_{c,\sqrt{M}\text{-PAM}}^2 = \left(1 - P_{e,\sqrt{M}\text{-PAM}}\right)^2$$

Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

Probability of error of square M-QAM:
$$P_{e,M\text{-QAM}} = 1 - \left(1 - P_{e,\sqrt{M}\text{-PAM}}\right)^2 = 2P_{e,\sqrt{M}\text{-PAM}}\left(1 - \frac{1}{2}P_{e,\sqrt{M}\text{-PAM}}\right)$$
The error probability of the $\sqrt{M}$-ary PAM is (from (4.3-4) and (4.3-5))
$$P_{e,\sqrt{M}\text{-PAM}} = 2\left(1 - \frac{1}{\sqrt{M}}\right)Q\!\left(\frac{d_{\min}}{\sqrt{2N_0}}\right) = 2\left(1 - \frac{1}{\sqrt{M}}\right)Q\!\left(\sqrt{\frac{3\log_2 M}{M-1}\frac{E_{b,\mathrm{avg}}}{N_0}}\right)$$
Thus, the error probability of square M-QAM is
$$P_{e,M\text{-QAM}} = 4\left(1 - \frac{1}{\sqrt{M}}\right)Q\!\left(\sqrt{\frac{3\log_2 M}{M-1}\frac{E_{b,\mathrm{avg}}}{N_0}}\right)\left[1 - \left(1 - \frac{1}{\sqrt{M}}\right)Q\!\left(\sqrt{\frac{3\log_2 M}{M-1}\frac{E_{b,\mathrm{avg}}}{N_0}}\right)\right] \le 4Q\!\left(\sqrt{\frac{3\log_2 M}{M-1}\frac{E_{b,\mathrm{avg}}}{N_0}}\right)$$
The upper bound is quite tight for large M.
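A short sketch (not from the text) evaluating the square M-QAM symbol error probability from the two-PAM decomposition above; the $E_b/N_0$ value is an assumed example.

```python
import numpy as np
from scipy.stats import norm

# Sketch: square M-QAM built from two sqrt(M)-ary PAM constellations,
#   P_PAM  = 2(1 - 1/sqrt(M)) Q( sqrt(3 log2(M)/(M-1) * Eb/N0) ),
#   P_QAM  = 1 - (1 - P_PAM)^2.
def qam_ser(M, EbN0_dB):
    EbN0 = 10 ** (EbN0_dB / 10)
    p_pam = 2 * (1 - 1 / np.sqrt(M)) * norm.sf(
        np.sqrt(3 * np.log2(M) / (M - 1) * EbN0))
    return 1 - (1 - p_pam) ** 2

for M in (4, 16, 64):
    print(M, qam_ser(M, EbN0_dB=12.0))
```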

Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

The penalty for increasing the transmission rate is about 3 dB/bit for QAM, but 6 dB/bit for PAM and PSK.
QAM is more power efficient than PAM and PSK.
The advantage of PSK is its constant-envelope property.
More comparisons are shown in the text (page 200).

Chapter 4.3-4 Demodulation and Detection

ASK, PSK, and QAM have one- or two-dimensional constellations.
Basis functions of PSK and QAM:
$$\phi_1(t) = \sqrt{2/E_g}\,g(t)\cos 2\pi f_c t, \qquad \phi_2(t) = -\sqrt{2/E_g}\,g(t)\sin 2\pi f_c t$$
Basis function of PAM:
$$\phi_1(t) = \sqrt{2/E_g}\,g(t)\cos 2\pi f_c t$$
Since r(t) and the basis functions are bandpass, direct processing requires a high sampling rate.
To relieve the requirement on the sampling rate:
First, demodulate the signals to obtain the lowpass equivalent signals.
Then, perform signal detection at baseband.

Chapter 4.3-4 Demodulation and Detection

From Chapter 2.1 (2.1-21 and 2.1-24):
$$E_x = E_{x_l}/2, \qquad \langle x(t), y(t)\rangle = \frac{1}{2}\mathrm{Re}\{\langle x_l(t), y_l(t)\rangle\}$$
The optimal detection rule (MAP) becomes
$$\hat{m} = \arg\max_{1 \le m \le M}\left[\mathbf{r}\cdot\mathbf{s}_m + \frac{N_0}{2}\ln P_m - \frac{1}{2}E_m\right] = \arg\max_{1 \le m \le M}\left(\mathrm{Re}[\mathbf{r}_l\cdot\mathbf{s}_{ml}^*] + N_0\ln P_m - \frac{1}{2}E_{ml}\right)$$
$$= \arg\max_{1 \le m \le M}\left[\mathrm{Re}\!\left(\int r_l(t)s_{ml}^*(t)\,dt\right) + N_0\ln P_m - \frac{1}{2}\int|s_{ml}(t)|^2\,dt\right]$$
The ML decision rule is
$$\hat{m} = \arg\max_{1 \le m \le M}\left[\mathrm{Re}\!\left(\int r_l(t)s_{ml}^*(t)\,dt\right) - \frac{1}{2}\int|s_{ml}(t)|^2\,dt\right]$$

Chapter 4.3-4 Demodulation and Detection

Figures (not reproduced): the complex matched filter, and the detailed structure of a complex matched filter in terms of its in-phase and quadrature components.
Throughout this discussion we have assumed that the receiver has complete knowledge of the carrier frequency and phase.

Chapter 4.4
Optimal Detection and Error Probability for Power-Limited Signaling

Chapter 4.4-1 Optimal Detection & Error Prob. for Orthogonal Signaling

In an equal-energy orthogonal signaling scheme, N = M and
$$\mathbf{s}_1 = (\sqrt{E}, 0, \ldots, 0), \quad \mathbf{s}_2 = (0, \sqrt{E}, \ldots, 0), \quad \ldots, \quad \mathbf{s}_M = (0, 0, \ldots, \sqrt{E})$$
For equiprobable, equal-energy orthogonal signals, the optimum detector selects the largest cross-correlation between r and $\mathbf{s}_m$:
$$\hat{m} = \arg\max_{1 \le m \le M}\mathbf{r}\cdot\mathbf{s}_m$$
The constellation is symmetric and the distance between any two signal points is $\sqrt{2E}$, so the error probability is independent of the transmitted signal.

Chapter 4.4-1 Optimal Detection & Error Prob. for Orthogonal Signaling

Suppose that $\mathbf{s}_1$ is transmitted; the received vector is
$$\mathbf{r} = (\sqrt{E} + n_1, n_2, \ldots, n_M)$$
where E is the symbol energy and $(n_1, n_2, \ldots, n_M)$ are i.i.d. zero-mean Gaussian random variables with $\sigma_n^2 = N_0/2$.
Define the random variables $R_m = \mathbf{r}\cdot\mathbf{s}_m$, $1 \le m \le M$:
$$R_1 = (\sqrt{E} + n_1, n_2, \ldots, n_M)\cdot(\sqrt{E}, 0, \ldots, 0) = E + \sqrt{E}\,n_1, \qquad R_m = \sqrt{E}\,n_m, \ \ 2 \le m \le M$$
A correct decision is made if $R_1 > R_m$ for m = 2, 3, ..., M:
$$P_c = P[R_1 > R_2, \ldots, R_1 > R_M\,|\,\mathbf{s}_1 \text{ sent}] = P[n_2 < \sqrt{E} + n_1, \ldots, n_M < \sqrt{E} + n_1\,|\,\mathbf{s}_1 \text{ sent}]$$
Conditioning on $n_1 = n$ and using the fact that $n_2, n_3, \ldots, n_M$ are i.i.d.:
$$P_c = \int_{-\infty}^{\infty}\left(P[n_2 < n + \sqrt{E}\,|\,\mathbf{s}_1 \text{ sent}, n_1 = n]\right)^{M-1} p_{n_1}(n)\,dn$$

Chapter 4.4-1 Optimal Detection & Error Prob. for Orthogonal Signaling

Since $n_2 \sim N(0, N_0/2)$,
$$P[n_2 < n + \sqrt{E}\,|\,\mathbf{s}_1 \text{ sent}, n_1 = n] = 1 - Q\!\left(\frac{n + \sqrt{E}}{\sqrt{N_0/2}}\right)$$
$$P_c = \int_{-\infty}^{\infty}\left[1 - Q\!\left(\frac{n + \sqrt{E}}{\sqrt{N_0/2}}\right)\right]^{M-1}\frac{1}{\sqrt{\pi N_0}}e^{-n^2/N_0}\,dn$$
With the substitution $x = \dfrac{n + \sqrt{E}}{\sqrt{N_0/2}}$,
$$P_c = \int_{-\infty}^{\infty}\left(1 - Q(x)\right)^{M-1}\frac{1}{\sqrt{2\pi}}e^{-\left(x - \sqrt{2E/N_0}\right)^2/2}\,dx$$
$$P_e = 1 - P_c = 1 - \int_{-\infty}^{\infty}\left(1 - Q(x)\right)^{M-1}\frac{1}{\sqrt{2\pi}}e^{-\left(x - \sqrt{2E/N_0}\right)^2/2}\,dx$$
By symmetry,
$$P[\mathbf{s}_m \text{ received}\,|\,\mathbf{s}_1 \text{ sent}] = \frac{P_e}{M-1} = \frac{P_e}{2^k - 1}, \quad 2 \le m \le M$$
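A hedged sketch (not from the text) evaluating the $P_c$ integral above numerically, using $1 - Q(x) = \Phi(x)$; the choice of $E/N_0$ values is an assumption for illustration.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Sketch: numerical evaluation of
#   P_c = int (1 - Q(x))^(M-1) * phi(x - sqrt(2E/N0)) dx,   P_e = 1 - P_c,
# for M-ary orthogonal signaling.
def orthogonal_pe(M, E_over_N0):
    shift = np.sqrt(2 * E_over_N0)
    integrand = lambda x: norm.cdf(x) ** (M - 1) * norm.pdf(x - shift)
    Pc, _ = quad(integrand, -np.inf, np.inf)
    return 1 - Pc

for M in (2, 4, 16, 64):
    k = np.log2(M)
    print(M, orthogonal_pe(M, E_over_N0=k * 4.0))   # E = k*Eb with Eb/N0 = 4
```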

Chapter 4.4-1 Optimal Detection & Error Prob. for Orthogonal Signaling

Assume that $\mathbf{s}_1$ corresponds to a data sequence of length k whose first bit is 0.
The probability that the first bit is detected as 1 equals the probability of detecting one of the $2^{k-1}$ messages $\{\mathbf{s}_m : \text{first bit} = 1\}$:
$$P_b = 2^{k-1}\frac{P_e}{2^k - 1} = \frac{2^{k-1}}{2^k - 1}P_e \approx \frac{P_e}{2}$$
The last approximation holds for k ≫ 1.
Figure (not reproduced): probability of bit error vs. SNR per bit.
Increasing M reduces the required SNR, in contrast with ASK, PSK, and QAM.

Error Probability in FSK Signaling

FSK signaling is a special case of orthogonal signaling when
$$\Delta f = \frac{l}{2T}, \quad \text{for any positive integer } l$$
In binary FSK, a frequency separation that guarantees orthogonality does not minimize the error probability.
For binary FSK, the error probability is minimized when (see Problem 4.18)
$$\Delta f = \frac{0.715}{T}$$

A Union Bound on the Probability of Error in Orthogonal Signaling

From Sec. 4.2-3, the union bound for the AWGN channel is
$$P_e \le \frac{M-1}{2}e^{-\frac{d_{\min}^2}{4N_0}}$$
In orthogonal signaling, $d_{\min} = \sqrt{2E}$, so
$$P_e \le \frac{M-1}{2}e^{-\frac{E}{2N_0}} < Me^{-\frac{E}{2N_0}}$$
Using $M = 2^k$ and $E = kE_b$,
$$P_e < 2^k e^{-\frac{kE_b}{2N_0}} = e^{-\frac{k}{2}\left(\frac{E_b}{N_0} - 2\ln 2\right)}$$
If $E_b/N_0 > 2\ln 2 = 1.39$ (1.42 dB), then $P_e \to 0$ as $k \to \infty$.
If the SNR per bit exceeds 1.42 dB, reliable communication is possible (sufficient, but not necessary).

A Union Bound on the Probability of Error in Orthogonal Signaling

A necessary and sufficient condition for reliable communication is
$$\frac{E_b}{N_0} > \ln 2 = 0.693 \ \ (-1.6\ \mathrm{dB})$$
The -1.6 dB bound is obtained from a tighter bound on the error probability:
$$P_e \le \begin{cases} e^{-\frac{k}{2}\left(E_b/N_0 - 2\ln 2\right)}, & E_b/N_0 \ge 4\ln 2 \\ 2e^{-k\left(\sqrt{E_b/N_0} - \sqrt{\ln 2}\right)^2}, & \ln 2 \le E_b/N_0 \le 4\ln 2 \end{cases}$$
The minimum value of SNR per bit needed, i.e. -1.6 dB, is the Shannon limit.
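A small sketch (not from the text) evaluating the simple exponential union bound above to illustrate the threshold behavior around $2\ln 2$ (1.42 dB); the $E_b/N_0$ grid and values of k are assumptions.

```python
import numpy as np

# Sketch: P_e < exp(-k (Eb/N0 - 2 ln 2)/2) for M = 2^k orthogonal signals.
# Above the 1.42 dB threshold the bound decays with k; below it, it grows.
def orthogonal_union_bound(k, EbN0):
    return np.exp(-0.5 * k * (EbN0 - 2 * np.log(2)))

for EbN0_dB in (0.0, 1.42, 3.0, 6.0):
    EbN0 = 10 ** (EbN0_dB / 10)
    bounds = [orthogonal_union_bound(k, EbN0) for k in (5, 10, 20)]
    print(EbN0_dB, ["%.3g" % b for b in bounds])
```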

Chapter 4.4-2 Optimal Detection & Error Prob. for Bi-orthogonal Signaling

A set of M = $2^k$ biorthogonal signals is constructed from N = M/2 orthogonal signals by including the negatives of these signals.
It requires only M/2 cross-correlators or matched filters.
Vector representation of biorthogonal signals:
$$\mathbf{s}_1 = -\mathbf{s}_{N+1} = (\sqrt{E}, 0, \ldots, 0), \quad \mathbf{s}_2 = -\mathbf{s}_{N+2} = (0, \sqrt{E}, \ldots, 0), \quad \ldots, \quad \mathbf{s}_N = -\mathbf{s}_{2N} = (0, 0, \ldots, \sqrt{E})$$
Assume that $\mathbf{s}_1$ is transmitted; the received signal vector is
$$\mathbf{r} = (\sqrt{E} + n_1, n_2, \ldots, n_N)$$
where $\{n_m\}$ are zero-mean, i.i.d. Gaussian random variables with $\sigma_n^2 = N_0/2$.

Chapter 4.4-2 Optimal Detection & Error Prob. for Bi-orthogonal Signaling

Since all signals are equiprobable and have equal energy, the optimum detector decides in favor of the m with the largest magnitude of
$$C(\mathbf{r}, \mathbf{s}_m) = \mathbf{r}\cdot\mathbf{s}_m, \quad 1 \le m \le M/2$$
and then uses the sign to decide whether $s_m(t)$ or $-s_m(t)$ was transmitted.
$$P[\text{correct decision}] = P\!\left[r_1 = \sqrt{E} + n_1 > 0,\ r_1 > |r_m| = |n_m|,\ m = 2, 3, \ldots, M/2\right]$$
But
$$P[|n_m| < r_1\,|\,r_1 > 0] = \frac{1}{\sqrt{\pi N_0}}\int_{-r_1}^{r_1} e^{-x^2/N_0}\,dx = \frac{1}{\sqrt{2\pi}}\int_{-r_1/\sqrt{N_0/2}}^{r_1/\sqrt{N_0/2}} e^{-x^2/2}\,dx$$
The probability of a correct decision is therefore (with $r_1 \sim N(\sqrt{E}, N_0/2)$ and the substitution $v = (r_1 - \sqrt{E})\sqrt{2/N_0}$)
$$P_c = \int_0^\infty\left(\frac{1}{\sqrt{2\pi}}\int_{-r_1\sqrt{2/N_0}}^{r_1\sqrt{2/N_0}} e^{-x^2/2}\,dx\right)^{(M/2)-1} p(r_1)\,dr_1 = \int_{-\sqrt{2E/N_0}}^{\infty}\left(\frac{1}{\sqrt{2\pi}}\int_{-(v+\sqrt{2E/N_0})}^{v+\sqrt{2E/N_0}} e^{-x^2/2}\,dx\right)^{(M/2)-1}\frac{1}{\sqrt{2\pi}}e^{-v^2/2}\,dv$$

Chapter 4.4-2 Optimal Detection & Error Prob. for Bi-orthogonal Signaling

Symbol error probability: $P_e = 1 - P_c$.
Figure (not reproduced): $P_e$ vs. $E_b/N_0$ (with $E = kE_b$); the curves approach the Shannon limit as M increases.

Chapter 4.4-3 Optimal Detection & Error Prob. for Simplex Signaling

Simplex signals are obtained by shifting a set of orthogonal signals by the average of these orthogonal signals.
The geometry of the simplex signals is exactly the same as that of the original orthogonal signals, so the error probability has the same form as for the original orthogonal signals.
Since simplex signals have lower energy, the energy in the expression for the error probability is scaled, i.e.
$$P_e = 1 - P_c = 1 - \int_{-\infty}^{\infty}\left(1 - Q(x)\right)^{M-1}\frac{1}{\sqrt{2\pi}}\exp\!\left[-\frac{1}{2}\left(x - \sqrt{\frac{M}{M-1}\frac{2E}{N_0}}\right)^2\right]dx$$
This is a relative gain of $10\log_{10}\frac{M}{M-1}$ dB over orthogonal signaling:
M = 2 gives a 3 dB gain; M = 10 gives a 0.46 dB gain.

Chapter 4.5
Optimal Detection in Presence of Uncertainty: Noncoherent Detection

Chapter 4.5 Optimal Detection in Presence of Uncertainty: Non-coherent Detection

Previous sections assume that the signals $\{s_m(t)\}$, or the orthonormal basis $\{\phi_j(t)\}$, are available at the receiver.
In many cases this assumption is not valid:
Transmission over the channel introduces a random attenuation or a random phase shift to the signal.
The receiver has imperfect knowledge of the signals when the transmitter and receiver are not perfectly synchronized:
although the transmitter knows $\{s_m(t)\}$, due to asynchronism the receiver can only use $\{s_m(t - t_d)\}$, where $t_d$ is the random time slip between the transmitter and receiver clocks.
Consider transmission over an AWGN channel with a random parameter θ:
$$r(t) = s_m(t; \theta) + n(t)$$
By the Karhunen-Loeve expansion theorem (2.8-2), we can find an orthonormal basis such that
$$\mathbf{r} = \mathbf{s}_{m,\theta} + \mathbf{n}$$

Chapter 4.5 Optimal Detection in Presence of Uncertainty: Non-coherent Detection

The optimal (MAP) detection rule is (see 4.2-15)
$$\hat{m} = \arg\max_{1 \le m \le M} P_m\,p(\mathbf{r}\,|\,m) = \arg\max_{1 \le m \le M} P_m\int p(\mathbf{r}\,|\,m, \theta)\,p(\theta)\,d\theta = \arg\max_{1 \le m \le M} P_m\int p_n(\mathbf{r} - \mathbf{s}_{m,\theta})\,p(\theta)\,d\theta$$
This decision rule determines the decision regions.
The minimum error probability is
$$P_e = \sum_{m=1}^{M} P_m\int_{D_m^c}\left(\int p(\mathbf{r}\,|\,m, \theta)\,p(\theta)\,d\theta\right)d\mathbf{r} = \sum_{m=1}^{M} P_m\sum_{\substack{m'=1 \\ m' \neq m}}^{M}\int_{D_{m'}}\left(\int p_n(\mathbf{r} - \mathbf{s}_{m,\theta})\,p(\theta)\,d\theta\right)d\mathbf{r} \quad (4.5\text{-}3)$$

(4.5-3)

Chaper 4.5 Optimal Detection in Presence of Uncertainty:


Non--coherent Detection
Non

( ) Consider a binaryy antipodal


(ex)
p
signaling
g
g system
y
w. equiprobable
q p
signals s1(t)=s(t) & s2(t)=s(t) in an AWGN channel w. noise PSD N0/2

The channel is modeled as


r (t ) = Asm (t ) + n(t )

A>0: random gain with PDF p(A)


A<0: random ggain with PDF p(
p(A)=0
)
p(r | m, A) = pn( r A sm )
Optimal decision region for s1(t) is

{
= {r : e

D1 = r : e

( r A Eb ) 2 / N 0

A2 Eb / N 0

= {r : r > 0}

p ( A) dA > e

( r + A Eb ) 2 / N 0

2 rA
A Eb / N 0

2 rA
A Eb / N 0

) p( A)dA > 0} A>0

2 rA Eb / N 0

=e

p ( A)dA

4rA
A Eb / N 0

2 rA Eb / N 0

>1

=>4rA Eb / N 0 > 0
101 => r > 0

>0

(e 2 rA

Eb / N 0

2 rA Eb / N 0

)>0

Chaper 4.5 Optimal Detection in Presence of Uncertainty:


Non--coherent Detection
Non

The error probability is


( r + A Eb )
1

N0
Pb =
e
dr p ( A)dA
0 0

N0

= P N ( A Eb , 0 ) > 0 p ( A)dA
0
2


A Eb
P N (0,1) >
p( A)dA

N 0 / 2

(
= E Q ( A

)
2 E / N )

= Q A 2 Eb / N 0 p ( A)dA
0

102

Chapter 4.5-1 Noncoherent Detection of Carrier Modulated Signals

For carrier-modulated signals, $\{s_m(t)\}$ are bandpass with lowpass equivalents $s_{ml}(t)$:
$$s_m(t) = \mathrm{Re}\!\left[s_{ml}(t)e^{j2\pi f_c t}\right]$$
In the AWGN channel,
$$r(t) = s_m(t - t_d) + n(t)$$
where $t_d$ is the random time asynchronism between transmitter and receiver:
$$r(t) = \mathrm{Re}\!\left[s_{ml}(t - t_d)e^{j2\pi f_c(t - t_d)}\right] + n(t) = \mathrm{Re}\!\left[s_{ml}(t - t_d)e^{-j2\pi f_c t_d}e^{j2\pi f_c t}\right] + n(t)$$
The lowpass equivalent of $s_m(t - t_d)$ is $s_{ml}(t - t_d)e^{-j2\pi f_c t_d}$.
In practice, $t_d \ll T_s$, so $s_{ml}(t - t_d) \approx s_{ml}(t)$.
However, the random phase shift $\phi = -2\pi f_c t_d$ can be large, since $f_c$ is large.
This leads to noncoherent detection.

Chapter 4.5-1 Noncoherent Detection of Carrier Modulated Signals

In the noncoherent case,
$$r(t) = \mathrm{Re}\!\left[r_l(t)e^{j2\pi f_c t}\right] = \mathrm{Re}\!\left[\left(e^{j\phi}s_{ml}(t) + n_l(t)\right)e^{j2\pi f_c t}\right]$$
The baseband channel model is
$$r_l(t) = e^{j\phi}s_{ml}(t) + n_l(t)$$
with vector equivalent form
$$\mathbf{r}_l = e^{j\phi}\mathbf{s}_{ml} + \mathbf{n}_l$$
The optimum noncoherent detection (from 4.5-3, with φ uniform on $[0, 2\pi)$ and the components of $\mathbf{n}_l$ zero-mean complex Gaussian with real and imaginary parts each of variance $2N_0$, see 2.9-13) is
$$\hat{m} = \arg\max_{1 \le m \le M}\frac{P_m}{2\pi}\int_0^{2\pi} p_{n_l}(\mathbf{r}_l - e^{j\phi}\mathbf{s}_{ml})\,d\phi = \arg\max_{1 \le m \le M}\frac{P_m}{2\pi}\int_0^{2\pi}\frac{1}{(4\pi N_0)^N}e^{-\|\mathbf{r}_l - e^{j\phi}\mathbf{s}_{ml}\|^2/(4N_0)}\,d\phi$$

Chapter 4.5-1 Noncoherent Detection of Carrier Modulated Signals

Expand the exponent using $\|\mathbf{r}_l - e^{j\phi}\mathbf{s}_{ml}\|^2 = \|\mathbf{r}_l\|^2 - 2\,\mathrm{Re}\{\mathbf{r}_l\cdot e^{j\phi}\mathbf{s}_{ml}\} + \|\mathbf{s}_{ml}\|^2$, with $\|\mathbf{s}_{ml}\|^2 = 2E_m$ (since $s_m(t) = \mathrm{Re}[s_{ml}(t)e^{j2\pi f_c t}]$ and $E_m = \frac{1}{2}\|\mathbf{s}_{ml}\|^2$). Dropping the factor $e^{-\|\mathbf{r}_l\|^2/(4N_0)}$, which is common to all m:
$$\hat{m} = \arg\max_{1 \le m \le M}\frac{P_m}{2}\,e^{-\frac{E_m}{2N_0}}\cdot\frac{1}{2\pi}\int_0^{2\pi}e^{\frac{1}{2N_0}\mathrm{Re}\left[(\mathbf{r}_l\cdot\mathbf{s}_{ml})e^{-j\phi}\right]}\,d\phi$$
Writing $\mathbf{r}_l\cdot\mathbf{s}_{ml} = |\mathbf{r}_l\cdot\mathbf{s}_{ml}|\,e^{j\theta}$, where θ is the phase of $\mathbf{r}_l\cdot\mathbf{s}_{ml}$,
$$\hat{m} = \arg\max_{1 \le m \le M}\frac{P_m}{2}\,e^{-\frac{E_m}{2N_0}}\cdot\frac{1}{2\pi}\int_0^{2\pi}e^{\frac{|\mathbf{r}_l\cdot\mathbf{s}_{ml}|}{2N_0}\cos(\phi - \theta)}\,d\phi = \arg\max_{1 \le m \le M} P_m\,e^{-\frac{E_m}{2N_0}}\,I_0\!\left(\frac{|\mathbf{r}_l\cdot\mathbf{s}_{ml}|}{2N_0}\right)$$
where
$$I_0(x) = \frac{1}{2\pi}\int_0^{2\pi}e^{x\cos\phi}\,d\phi$$
is the modified Bessel function of the first kind and order zero.

Chapter 4.5-1 Noncoherent Detection of Carrier Modulated Signals

If the signals are equiprobable and have equal energy,
$$\hat{m} = \arg\max_{1 \le m \le M} I_0\!\left(\frac{|\mathbf{r}_l\cdot\mathbf{s}_{ml}|}{2N_0}\right) = \arg\max_{1 \le m \le M}|\mathbf{r}_l\cdot\mathbf{s}_{ml}| = \arg\max_{1 \le m \le M}\left|\int r_l(t)s_{ml}^*(t)\,dt\right|$$
since $I_0(x)$ is increasing. This is the envelope detector.
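A discrete-time sketch (not from the text) of the envelope detector above, using binary orthogonal lowpass signals; the signal set, random phase, and noise level are assumed examples.

```python
import numpy as np

# Sketch of the noncoherent envelope detector
#   m_hat = argmax_m |<r_l, s_ml>| = argmax_m |sum_n r_l[n] conj(s_ml[n])|.
rng = np.random.default_rng(3)
N = 32
S_l = np.zeros((2, N), dtype=complex)
S_l[0, :N // 2] = 1.0                      # s_1l: energy in the first half
S_l[1, N // 2:] = 1.0                      # s_2l: energy in the second half

m_true = 0
phi = rng.uniform(0, 2 * np.pi)            # unknown carrier phase
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * 0.3
r_l = np.exp(1j * phi) * S_l[m_true] + noise

metrics = np.abs(S_l.conj() @ r_l)         # |<r_l, s_ml>| for each m
m_hat = int(np.argmax(metrics))
print("decided message:", m_hat)
```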

4.6 Comparison of Digital Modulation Methods

One can compare the digital modulation methods on the basis of the SNR required to achieve a specified probability of error.
However, such a comparison would not be very meaningful unless it were made on the basis of some constraint, such as a fixed data rate of transmission or a fixed bandwidth.
For multiphase (PSK) signals, the channel bandwidth required is simply the bandwidth of the equivalent lowpass signal pulse g(t) with duration T and bandwidth W, which is approximately equal to the reciprocal of T.
Since $T = k/R = (\log_2 M)/R$, it follows that
$$W = \frac{R}{\log_2 M}$$

4.6 Comparison of Digital Modulation Methods

As M is increased, the channel bandwidth required, when the bit rate R is fixed, decreases. The bandwidth efficiency is measured by the bit-rate-to-bandwidth ratio, which for PSK is
$$\frac{R}{W} = \log_2 M$$
The bandwidth-efficient method for transmitting PAM is single-sideband. The channel bandwidth required to transmit the signal is then approximately equal to 1/2T, and
$$\frac{R}{W} = 2\log_2 M$$
which is a factor of 2 better than PSK.
For QAM, we have two orthogonal carriers, with each carrier carrying a PAM signal.

4.6 Comparison of Digital Modulation Methods

Thus, we double the rate relative to PAM. However, the QAM signal must be transmitted via double-sideband. Consequently, QAM and PAM have the same bandwidth efficiency when the bandwidth is referenced to the bandpass signal.
As for orthogonal signals, if the M = $2^k$ orthogonal signals are constructed by means of orthogonal carriers with minimum frequency separation of 1/2T, the bandwidth required for transmission of k = $\log_2 M$ information bits is
$$W = \frac{M}{2T} = \frac{M}{2(k/R)} = \frac{M}{2\log_2 M}R$$
In this case, the bandwidth increases as M increases.
In the case of biorthogonal signals, the required bandwidth is one-half of that for orthogonal signals.
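A brief sketch (not from the text) tabulating the bandwidth efficiencies R/W implied by the formulas above; the set of M values is an assumption for illustration.

```python
import numpy as np

# Sketch: R/W for the schemes compared in this section.
#   PSK (and bandpass QAM/PAM): log2(M);  SSB PAM: 2 log2(M);
#   orthogonal: 2 log2(M)/M;  biorthogonal: 4 log2(M)/M.
for M in (2, 4, 8, 16, 32, 64):
    k = np.log2(M)
    print(f"M={M:3d}  PSK={k:5.2f}  SSB-PAM={2*k:5.2f}  "
          f"orthogonal={2*k/M:5.3f}  biorthogonal={4*k/M:5.3f}")
```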

4.6 Comparison of Digital Modulation Methods

A compact and meaningful comparison of modulation methods is one based on the normalized data rate R/W (bits per second per hertz of bandwidth) versus the SNR per bit ($E_b/N_0$) required to achieve a given error probability.
In the case of PAM, QAM, and PSK, increasing M results in a higher bit-rate-to-bandwidth ratio R/W.

4.6 Comparison of Digital Modulation Methods

However, the cost of achieving the higher data rate is an increase in the SNR per bit.
Consequently, these modulation methods are appropriate for communication channels that are bandwidth limited, where we desire R/W > 1 and where there is sufficiently high SNR to support increases in M.
Telephone channels and digital microwave radio channels are examples of such band-limited channels.
In contrast, M-ary orthogonal signals yield R/W ≤ 1. As M increases, R/W decreases due to an increase in the required channel bandwidth.
The SNR per bit required to achieve a given error probability decreases as M increases.

4.6 Comparison of Digital Modulation Methods

Consequently, M-ary orthogonal signals are appropriate for power-limited channels that have sufficiently large bandwidth to accommodate a large number of signals.
As M → ∞, the error probability can be made as small as desired, provided that the SNR per bit exceeds 0.693 (-1.6 dB). This is the minimum SNR per bit required to achieve reliable transmission in the limit as the channel bandwidth W → ∞ and the corresponding R/W → 0.
The figure above also shows the normalized capacity of the band-limited AWGN channel, which is due to Shannon (1948).
The ratio C/W, where C (= R) is the capacity in bits/s, represents the highest achievable bit-rate-to-bandwidth ratio on this channel.
Hence, it serves as the upper bound on the bandwidth efficiency of any type of modulation.
