
Digital Modulation: Part V

Satyajit Thakor
IIT Mandi

17 November, 2016
Optimum detection for binary antipodal

I Assumption: subsequent message signals are independent of each other (memoryless property).
I Thus, the detector only needs to consider its input in a given bit interval when making a decision on the transmitted signal.
I Output of the demodulator is

  y = s_m + n, \quad m = 1, 2

where s_m = \pm\sqrt{E_b} (s_1 = +\sqrt{E_b}, s_2 = -\sqrt{E_b}) and n is zero-mean Gaussian with variance \sigma^2 = E[n^2] = N_0/2.
I For some threshold \alpha, if y > \alpha then the detector declares that s_1(t) was transmitted. Else, it declares that s_2(t) was transmitted.
I We will discuss the optimality of such a scheme and the optimal threshold.
Optimum detection for binary antipodal

I For the binary antipodal signals, the average probability of error as a function of the threshold \alpha is

  P_e(\alpha) = P(s_1) \int_{-\infty}^{\alpha} f(y \mid s_1)\, dy + P(s_2) \int_{\alpha}^{\infty} f(y \mid s_2)\, dy

where P(s_1) and P(s_2) are the a priori probabilities of the transmitted signals.
I To find the optimal \alpha minimizing the probability of error, differentiate P_e(\alpha) with respect to \alpha and equate to zero:

  P(s_1) f(\alpha \mid s_1) = P(s_2) f(\alpha \mid s_2)

  \frac{f(\alpha \mid s_1)}{f(\alpha \mid s_2)} = \frac{P(s_2)}{P(s_1)}
Optimum detection for binary antipodal

I Continuing,

  \frac{e^{-(\alpha - \sqrt{E_b})^2/N_0}}{e^{-(\alpha + \sqrt{E_b})^2/N_0}} = \frac{P(s_2)}{P(s_1)}

  e^{4\alpha\sqrt{E_b}/N_0} = \frac{P(s_2)}{P(s_1)}

  \alpha = \frac{N_0}{4\sqrt{E_b}} \ln \frac{P(s_2)}{P(s_1)}

I Note that, if P(s_1) > P(s_2), then \alpha < 0, and if P(s_2) > P(s_1), then \alpha > 0.
I In practice, the two signals are usually equally probable, i.e., the a priori probabilities P(s_1) = P(s_2) = 1/2, and hence \alpha = 0.
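As a small numeric sketch of the threshold formula above (the values E_b = 1, N_0 = 0.5, P(s_1) = 0.7 are hypothetical, chosen only for illustration):

```python
import math

# Hypothetical example values (not from the slides)
Eb, N0 = 1.0, 0.5
P_s1, P_s2 = 0.7, 0.3  # unequal a priori probabilities

# Optimal threshold: alpha = (N0 / (4*sqrt(Eb))) * ln(P(s2)/P(s1))
alpha = (N0 / (4 * math.sqrt(Eb))) * math.log(P_s2 / P_s1)
print(alpha)  # negative, since P(s1) > P(s2)

# With equal priors the threshold collapses to zero
alpha_equal = (N0 / (4 * math.sqrt(Eb))) * math.log(0.5 / 0.5)
```

The sign of the result matches the note above: the threshold shifts away from the more probable signal.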
Optimum detection for binary antipodal

I Then, the average probability of error is

  P_e = \frac{1}{2} \int_{-\infty}^{0} f(y \mid s_1)\, dy + \frac{1}{2} \int_{0}^{\infty} f(y \mid s_2)\, dy

      = \int_{-\infty}^{0} f(y \mid s_1)\, dy \quad \text{(by symmetry)}

      = \int_{-\infty}^{0} \frac{1}{\sqrt{\pi N_0}} e^{-(y - \sqrt{E_b})^2/N_0}\, dy

      = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{-\sqrt{2E_b/N_0}} e^{-x^2/2}\, dx \quad \text{by letting } x = \frac{y - \sqrt{E_b}}{\sqrt{N_0/2}}

      = Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right) = Q\!\left(\sqrt{\frac{d_{12}^2}{2N_0}}\right)

I Note that d_{12} = 2\sqrt{E_b} is the distance between the constellation points for binary antipodal signaling.
Optimum detection for binary antipodal

I Q(x) describes the area under the tail of the Gaussian PDF.
I The error probability tends to zero exponentially as the SNR increases, by the upper bound

  Q(x) \le \frac{1}{2} e^{-x^2/2} \quad \text{for all } x \ge 0.
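A short numeric check of the antipodal error probability and the exponential tail bound above (a minimal sketch; the SNR values are hypothetical):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(X > x) for X ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Antipodal error probability Pe = Q(sqrt(2*Eb/N0)) for a sample SNR
Eb, N0 = 1.0, 0.5          # hypothetical values
x = math.sqrt(2 * Eb / N0)
Pe = Q(x)

# Upper bound Q(x) <= (1/2) * exp(-x**2 / 2) holds for x >= 0
assert Pe <= 0.5 * math.exp(-x**2 / 2)
```

The bound is loose for small x but captures the exponential decay of P_e with SNR.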
Optimum detection for binary orthogonal

I Output of the demodulator: y = (y_1, y_2)
I Recall: when s_1(t) is transmitted, y = (\sqrt{E_b} + n_1, n_2)
I For equiprobable signals, i.e., P(s_1) = P(s_2) = 1/2, the optimum detector simply compares y_1 with y_2.
I If y_1 > y_2, the detector declares that s_1(t) was transmitted; else, it declares that s_2(t) was transmitted.
I We will shortly discuss the optimality of this scheme.
I Based on this decision rule, assuming that s_1(t) was transmitted, the probability of error is the probability that y_1 - y_2 < 0.
I Since y_1 and y_2 are Gaussian with equal variance \sigma^2 = N_0/2 and statistically independent, the difference is

  z = y_1 - y_2 = \sqrt{E_b} + n_1 - n_2
Optimum detection for binary orthogonal

I It can be proved that

  f(z) = \int_{-\infty}^{\infty} f_{Y_1}(z + y_2) f_{Y_2}(y_2)\, dy_2 = \frac{1}{\sqrt{2\pi N_0}} e^{-(z - \sqrt{E_b})^2/2N_0}

I Thus, z is also Gaussian, with mean \sqrt{E_b} and variance \sigma^2 = N_0.
I The average probability of error (for equiprobable messages) is

  P_e = \Pr(z < 0) = \int_{-\infty}^{0} f(z)\, dz = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{-\sqrt{E_b/N_0}} e^{-x^2/2}\, dx

      = Q\!\left(\sqrt{\frac{E_b}{N_0}}\right) = Q\!\left(\sqrt{\frac{d_{12}^2}{2N_0}}\right)

where now d_{12} = \sqrt{2E_b} is the distance between the orthogonal constellation points.

I For the same error probability Pe , the binary antipodal signals


require a factor of two (3 dB) less signal energy than orthogonal
signals.
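The 3 dB claim can be checked numerically: doubling the energy of the orthogonal scheme reproduces the antipodal error probability (a minimal sketch; E_b and N_0 values are hypothetical):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(X > x) for X ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

Eb, N0 = 2.0, 1.0  # hypothetical values

Pe_antipodal = Q(math.sqrt(2 * Eb / N0))   # Q(sqrt(2 Eb / N0))
Pe_orthogonal = Q(math.sqrt(Eb / N0))      # Q(sqrt(Eb / N0))

# Orthogonal signaling with twice the energy matches antipodal at energy Eb:
Pe_orth_2x = Q(math.sqrt((2 * Eb) / N0))
assert math.isclose(Pe_orth_2x, Pe_antipodal)
assert Pe_antipodal < Pe_orthogonal        # antipodal wins at equal Eb
```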
Comparison
Optimal decision rules: MAP and ML

I Consider a general M-ary signaling scheme: for m = 1, 2, \ldots, M,

  y = s_m + n, \quad s_m = (s_{m1}, s_{m2}, \ldots, s_{mN}), \quad n = (n_1, n_2, \ldots, n_N)

I The noise components are uncorrelated Gaussian with \sigma^2 = N_0/2 and hence i.i.d.:

  f(n) = \prod_{i=1}^{N} f(n_i) = \frac{1}{(\pi N_0)^{N/2}} e^{-\sum_{i=1}^{N} n_i^2 / N_0}

I Hence, \{y_k \mid s_{mk}\} too are independent Gaussian and the conditional PDF f(y \mid s_m) factorizes as

  f(y \mid s_m) = \prod_{k=1}^{N} f(y_k \mid s_{mk}), \quad m = 1, 2, \ldots, M

  where f(y_k \mid s_{mk}) = \frac{1}{\sqrt{\pi N_0}} e^{-(y_k - s_{mk})^2/N_0}, \quad k = 1, 2, \ldots, N
Optimal decision rules: MAP and ML

  f(y \mid s_m) = \frac{1}{(\pi N_0)^{N/2}} \exp\!\left[-\sum_{k=1}^{N} (y_k - s_{mk})^2 / N_0\right]

               = \frac{1}{(\pi N_0)^{N/2}} \exp\!\left[-\| y - s_m \|^2 / N_0\right], \quad m = 1, 2, \ldots, M
I Example: signal constellation, noise cloud, and received vector
Optimal decision rules: MAP and ML

I Posterior probabilities are defined as:

P (sm | y) = P (signal sm is transmitted | y) m = 1, 2, . . . , M

I Decision based on maximization of posterior probability


maximizes the probability of correct decision (prove soon...)
I After receiving y, the receiver chooses the sm that maximizes
P (sm |y).
I This decision criterion is called the maximum a posteriori
probability (MAP) criterion.
I How to compute P (sm | y) to make optimal decision?
Optimal decision rules: MAP and ML

I How to compute P(s_m \mid y) to make the optimal decision?

  P(s_m \mid y) = \frac{f(y \mid s_m) P(s_m)}{f(y)} \quad \text{(by Bayes rule)}

  where f(y) = \sum_{m=1}^{M} f(y \mid s_m) P(s_m)

I Thus, a posteriori probabilities can be computed from a priori


probabilities P (sm ) and conditional PDFs f (y | sm ).
I Note that, when messages are equiprobable, i.e., P (sm ) = 1/M ,
the decision rule based on finding the signal that maximizes
P (sm | y) is equivalent to finding the signal that maximizes
f (y | sm ). Why?
I The conditional PDF f (y | sm ) (or any monotonic function of
it) is usually called the likelihood function.
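A minimal sketch of the MAP computation for the binary antipodal case, using the Gaussian likelihood from the slides (the priors and noise level below are hypothetical; f(y) is omitted since it does not affect the argmax):

```python
import math

# Hypothetical setup: binary antipodal, s1 = +sqrt(Eb), s2 = -sqrt(Eb)
Eb, N0 = 1.0, 1.0
priors = {1: 0.7, 2: 0.3}                # P(s1), P(s2), chosen for illustration
points = {1: math.sqrt(Eb), 2: -math.sqrt(Eb)}

def likelihood(y, s):
    """f(y | s) for AWGN with variance N0/2 (one dimension)."""
    return math.exp(-(y - s) ** 2 / N0) / math.sqrt(math.pi * N0)

def map_decide(y):
    """Choose m maximizing P(s_m) * f(y | s_m); f(y) cancels in the argmax."""
    return max(priors, key=lambda m: priors[m] * likelihood(y, points[m]))

print(map_decide(0.0))  # at y = 0 the likelihoods tie, so the prior picks 1
```

With equal priors the same code reduces to the ML rule, as noted on the next slide.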
Optimal decision rules: MAP and ML

I The decision criterion based on the maximum of f(y \mid s_m) over the M signals is called the maximum-likelihood (ML) criterion.
I Note that ML = MAP when the message signals are equiprobable.
I The likelihood function is

  f(y \mid s_m) = \frac{1}{(\pi N_0)^{N/2}} \exp\!\left[-\sum_{k=1}^{N} (y_k - s_{mk})^2 / N_0\right].

I For ease of computation, the log-likelihood function is used (log is monotonic):

  \ln f(y \mid s_m) = -\frac{N}{2} \ln(\pi N_0) - \frac{1}{N_0} \sum_{k=1}^{N} (y_k - s_{mk})^2
Optimal decision rules: MAP and ML

I The maximization over s_m is equivalent to finding the signal that minimizes the (square of the) Euclidean distance

  D(y, s_m) = \sum_{k=1}^{N} (y_k - s_{mk})^2

I D(y, sm ), m = 1, 2, . . . , M are called the distance metrics.


I Hence, for the AWGN channel, the decision rule based on the
ML criterion reduces to finding the signal sm that is closest in
distance to the received signal vector y.
I This decision rule is also known as minimum distance detection.
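The minimum-distance rule is a one-line nearest-neighbor search over the constellation. A sketch with a hypothetical 2-D, 4-point constellation (not from the slides):

```python
# Minimum-distance (ML) detection: pick the constellation point closest to y.
# The constellation below is a hypothetical 4-point example for illustration.
constellation = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]

def dist2(y, s):
    """Squared Euclidean distance D(y, s) = sum_k (y_k - s_k)^2."""
    return sum((yk - sk) ** 2 for yk, sk in zip(y, s))

def ml_detect(y):
    """Return the index m of the signal s_m minimizing D(y, s_m)."""
    return min(range(len(constellation)), key=lambda m: dist2(y, constellation[m]))

print(ml_detect((0.9, 0.2)))  # closest to (1, 0) -> index 0
```

Since the monotone terms of the log-likelihood are common to all m, this argmin is exactly the ML decision for the AWGN channel.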
A union bound on probability of error

I For binary equiprobable signaling over an AWGN channel, regardless of the signaling scheme, the error probability is

  P_e = Q\!\left(\frac{d}{\sqrt{2N_0}}\right)

where d is the Euclidean distance between the two points in the constellation.
I We will derive an upper bound on the error probability for a general M-ary scheme.
I P_{e|m}: probability of error when s_m(t) is sent
I E_i: event that s_i is detected at the receiver
I Then,

  P_{e|m} = P\!\left(\bigcup_{i=1, i \ne m}^{M} E_i \,\Big|\, s_m(t) \text{ sent}\right) \le \sum_{i=1, i \ne m}^{M} P(E_i \mid s_m(t) \text{ sent})
A union bound on probability of error

I Using minimum-distance detection, s_i is detected when y is closer to s_i than to s_m:

  D(y, s_i) < D(y, s_m)

I Thus,

  P(E_i \mid s_m(t) \text{ sent}) \le P(D(y, s_i) < D(y, s_m))

I But P(D(y, s_i) < D(y, s_m)) is the probability of error in a binary equiprobable signaling system:

  P(D(y, s_i) < D(y, s_m)) = Q\!\left(\frac{d_{mi}}{\sqrt{2N_0}}\right)

I Hence,

  P_{e|m} \le \sum_{i=1, i \ne m}^{M} Q\!\left(\frac{d_{mi}}{\sqrt{2N_0}}\right)
A union bound on probability of error

I We can obtain an even simpler upper bound by defining

  d_{\min} = \min_{1 \le m, m' \le M,\; m' \ne m} d_{mm'}

I Then, for any i,

  Q\!\left(\frac{d_{mi}}{\sqrt{2N_0}}\right) \le Q\!\left(\frac{d_{\min}}{\sqrt{2N_0}}\right)

I And hence,

  P_{e|m} \le \sum_{i=1, i \ne m}^{M} Q\!\left(\frac{d_{\min}}{\sqrt{2N_0}}\right) = (M - 1)\, Q\!\left(\frac{d_{\min}}{\sqrt{2N_0}}\right) \le \frac{M - 1}{2}\, e^{-d_{\min}^2/4N_0}
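A numeric sketch of these bounds for a hypothetical 4-point unit-circle constellation (values chosen for illustration), comparing the pairwise-distance union bound with the looser d_min forms:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(X > x) for X ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Hypothetical 4-point constellation on the unit circle; N0 is illustrative.
points = [(1, 0), (0, 1), (-1, 0), (0, -1)]
N0 = 0.5
M = len(points)

m = 0  # bound Pe|m for the first signal
union = sum(Q(math.dist(points[m], points[i]) / math.sqrt(2 * N0))
            for i in range(M) if i != m)

d_min = min(math.dist(points[a], points[b])
            for a in range(M) for b in range(M) if a != b)
loose = (M - 1) * Q(d_min / math.sqrt(2 * N0))

assert union <= loose                                     # dmin bound is looser
assert loose <= (M - 1) / 2 * math.exp(-d_min**2 / (4 * N0))  # exponential form
```

Each term in the union bound is at most Q(d_min / sqrt(2 N_0)), which is why the bounds nest as asserted.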
Homework problems

I Examples: 8.3.8, 8.3.9, 8.4.1-8.4.3


I Other example and exercise problems from the reference books

I All the best for the final exams!
