
Department of Electrical Engineering

EE 703: Digital Message Transmission (Autumn 2014)


Course Instructor: Prof. Abhay Karandikar
Problem Set 2 Solutions
1. In a communication system, one of $M$ equally likely messages is transmitted using the voltages $s_i = i\Delta$, $i = 0, 1, 2, \ldots, M-1$. The received voltage is $r = s_i + n$, where $n$ is a zero-mean Gaussian random variable with variance $\sigma^2$.
(a) Determine the optimum decision region.
(b) Compute the probability of error.
Solution:
Since all the messages are equally likely, $P(m_i) = \frac{1}{M}$, $i = 0, \ldots, M-1$. Also
$$f_{R|m_i}(r|m_i) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(r-s_i)^2}{2\sigma^2}}.$$
From the maximum a posteriori probability (MAP) rule,
declare that message $m_k$ was transmitted if
$$P(m_k|r) \geq P(m_i|r) \quad \forall\, i \neq k$$
$$\Leftrightarrow\; \frac{f_{R|m_k}(r|m_k)\,P(m_k)}{f_R(r)} \geq \frac{f_{R|m_i}(r|m_i)\,P(m_i)}{f_R(r)} \quad \forall\, i \neq k$$
$$\Leftrightarrow\; f_{R|m_k}(r|m_k) \geq f_{R|m_i}(r|m_i) \quad \forall\, i \neq k$$
$$\Leftrightarrow\; \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(r-s_k)^2}{2\sigma^2}} \geq \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(r-s_i)^2}{2\sigma^2}} \quad \forall\, i \neq k$$
$$\Leftrightarrow\; |r - s_k| \leq |r - s_i| \quad \forall\, i \neq k,$$
which is the minimum distance decoding rule. Thus the decision rule is
if $r < \frac{\Delta}{2}$, declare $m_0$;
otherwise, if $r > \frac{(2M-3)\Delta}{2}$, declare $m_{M-1}$;
otherwise, if $j\Delta - \frac{\Delta}{2} < r < j\Delta + \frac{\Delta}{2}$ for some $j = 1, 2, \ldots, M-2$, declare $m_j$.
The decision regions are shown in Figure 1.
The probability of error is given by
$$P_e = \sum_{i=0}^{M-1} P(m_i)\, P_{e|m_i} = \frac{1}{M} \sum_{i=0}^{M-1} P_{e|m_i}.$$
Figure 1: Decision regions for Q1.
Now,
$$P_{e|m_0} = P\!\left(R > \tfrac{\Delta}{2} \,\middle|\, s_0 \text{ was transmitted}\right) = \int_{\Delta/2}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{r^2}{2\sigma^2}}\, dr = Q\!\left(\frac{\Delta}{2\sigma}\right),$$
where $Q(x) := \int_x^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\, dy$. Similarly, $P_{e|m_{M-1}} = Q\!\left(\frac{\Delta}{2\sigma}\right)$.
For $1 \leq j \leq M-2$,
$$P_{e|m_j} = P\!\left(R \in \left(-\infty,\, (j - \tfrac{1}{2})\Delta\right) \cup \left((j + \tfrac{1}{2})\Delta,\, \infty\right)\right)$$
$$= \int_{-\infty}^{(j-\frac{1}{2})\Delta} \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(r-j\Delta)^2}{2\sigma^2}}\, dr + \int_{(j+\frac{1}{2})\Delta}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(r-j\Delta)^2}{2\sigma^2}}\, dr = 2Q\!\left(\frac{\Delta}{2\sigma}\right).$$
Finally, we obtain
$$P_e = \frac{2(M-1)}{M}\, Q\!\left(\frac{\Delta}{2\sigma}\right).$$
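This closed-form result is easy to sanity-check by simulation. The sketch below is a minimal Monte Carlo check in Python (using NumPy and SciPy); the values of $M$, $\Delta$ and $\sigma$ are assumed for illustration, and the minimum distance rule is implemented by rounding $r/\Delta$ to the nearest valid message index.

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    # Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / np.sqrt(2.0))

M, Delta, sigma, trials = 4, 1.0, 0.4, 1_000_000   # assumed example values
rng = np.random.default_rng(0)

i = rng.integers(0, M, size=trials)                  # equally likely messages
r = i * Delta + rng.normal(0.0, sigma, size=trials)  # r = s_i + n
# Minimum-distance decoding: nearest level, clipped to {0, ..., M-1}
i_hat = np.clip(np.rint(r / Delta), 0, M - 1).astype(int)

print("simulated P_e :", np.mean(i_hat != i))
print("analytical P_e:", 2 * (M - 1) / M * Q(Delta / (2 * sigma)))
```

The two printed values should agree to within Monte Carlo noise.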
2. A communication system is used to transmit two equally likely messages $m_0$ and $m_1$. The channel output is a continuous random variable $R$. The conditional density functions $f_R(r|m_0)$ and $f_R(r|m_1)$ are as shown.
(a) Determine the optimum decision rule.
(b) Determine the probability of error.
(Figure: $f_R(r|m_0)$ is a triangle on $[0, 2]$ peaking at height $1$ at $r = 1$; $f_R(r|m_1)$ is uniform with height $\frac{1}{4}$ on $[-1, 3]$.)
Solution:
$$f_R(r|m_0) = \begin{cases} r, & 0 \leq r \leq 1 \\ 2 - r, & 1 \leq r \leq 2 \\ 0, & \text{otherwise,} \end{cases} \qquad f_R(r|m_1) = \begin{cases} \frac{1}{4}, & -1 \leq r \leq 3 \\ 0, & \text{otherwise.} \end{cases}$$
Since both inputs are equally likely, the optimal decision rule is the ML rule. So the optimal detector sets $\hat{m}(r) = m_0$ iff
$$f_R(r|m_0) \geq f_R(r|m_1)$$
$$\Leftrightarrow\; r \geq \frac{1}{4} \;\;(0 \leq r \leq 1) \quad \text{and} \quad 2 - r \geq \frac{1}{4} \;\;(1 \leq r \leq 2)$$
$$\Leftrightarrow\; \frac{1}{4} \leq r \leq \frac{7}{4}.$$
So, the optimum decision rule is
$$\hat{m}(r) = \begin{cases} m_0, & \frac{1}{4} \leq r \leq \frac{7}{4} \\ m_1, & -1 \leq r < \frac{1}{4} \ \text{or}\ \frac{7}{4} < r \leq 3. \end{cases}$$
If $P_c$ represents the probability of correct decision, then the probability of error is
$$P_e = 1 - P_c = 1 - \frac{1}{2}\,P(\text{correct}|m_0) - \frac{1}{2}\,P(\text{correct}|m_1)$$
$$= 1 - \frac{1}{2}\left(\int_{1/4}^{1} r\, dr + \int_{1}^{7/4} (2 - r)\, dr + \int_{-1}^{1/4} \frac{1}{4}\, dr + \int_{7/4}^{3} \frac{1}{4}\, dr\right) = \frac{7}{32}.$$
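The value $7/32 = 0.21875$ can be verified with a short Monte Carlo sketch; the sampled densities below match those given above, while the sample size and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 1_000_000

m = rng.integers(0, 2, size=trials)                  # equally likely messages
r0 = rng.triangular(0.0, 1.0, 2.0, size=trials)      # f_R(r|m0): triangle on [0, 2]
r1 = rng.uniform(-1.0, 3.0, size=trials)             # f_R(r|m1): height 1/4 on [-1, 3]
r = np.where(m == 0, r0, r1)

m_hat = np.where((r >= 0.25) & (r <= 1.75), 0, 1)    # decide m0 on [1/4, 7/4]

print("simulated P_e :", np.mean(m_hat != m))
print("analytical P_e:", 7 / 32)
```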
3. Consider the transmission of voltage levels $-3$ and $1$ with probabilities $\frac{2}{3}$ and $\frac{1}{3}$ respectively. Consider the received voltage to be corrupted by a zero-mean additive Gaussian random variable with unit variance. Sketch the optimum decision regions.
Solution: From the MAP rule we know that the decision will be given in favour of $m_i$ if
$$f_R(r|m_i)\,P(m_i) \geq f_R(r|m_k)\,P(m_k) \quad \forall\, k.$$
Here, with $m_0$ denoting level $-3$ and $m_1$ denoting level $1$, $\hat{m} = m_0$ if
$$f_R(r|m_0)\,P(m_0) \geq f_R(r|m_1)\,P(m_1)$$
$$\Leftrightarrow\; \frac{1}{\sqrt{2\pi}}\, e^{-\frac{(r+3)^2}{2}} \cdot \frac{2}{3} \geq \frac{1}{\sqrt{2\pi}}\, e^{-\frac{(r-1)^2}{2}} \cdot \frac{1}{3}$$
$$\Leftrightarrow\; -\frac{(r+3)^2}{2} + \ln 2 \geq -\frac{(r-1)^2}{2}$$
$$\Leftrightarrow\; r \leq -1 + \frac{\ln 2}{4}. \qquad (1)$$
So the decision rule is
$$\hat{m} = \begin{cases} m_0 & \text{if } r \leq -1 + \frac{\ln 2}{4} \\ m_1 & \text{otherwise.} \end{cases} \qquad (2)$$
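As a quick numerical check, one can sweep candidate thresholds and confirm that the error probability is minimized near the MAP threshold $-1 + \frac{\ln 2}{4}$; the search grid below is an arbitrary choice.

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def Pe(t):
    # Decide m0 (level -3) when r <= t; errors: (m0, r > t) and (m1, r <= t).
    return (2 / 3) * Q(t + 3) + (1 / 3) * (1 - Q(t - 1))

t_map = -1 + np.log(2) / 4
ts = np.linspace(-3.0, 1.0, 4001)          # arbitrary search grid
print("grid minimizer:", ts[np.argmin(Pe(ts))])
print("MAP threshold :", t_map)            # both ~ -0.8267
```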
4. A communication system transmits $+a$ and $-a$ with probabilities $\frac{1}{3}$ and $\frac{2}{3}$ over a channel which adds a zero-mean, unit-variance Gaussian random variable to the transmitted signal.
(a) The receiver uses the following structure.
(Figure: the received voltage $r$ is compared against a threshold at $0$; the receiver declares $+a$ if $r \geq 0$ and $-a$ if $r < 0$.)
Is this an optimal receiver? Discuss.
(b) Determine the probability of error for the receiver shown above and compare it with the optimal receiver performance if this is not the optimal receiver.
Solution:
(a) From the MAP rule, $\hat{m} = m_0$ if
$$f_R(r|m_0)\,P(m_0) \geq f_R(r|m_1)\,P(m_1)$$
$$\Leftrightarrow\; \frac{1}{\sqrt{2\pi}}\, e^{-\frac{(r-a)^2}{2}} \cdot \frac{1}{3} \geq \frac{1}{\sqrt{2\pi}}\, e^{-\frac{(r+a)^2}{2}} \cdot \frac{2}{3}$$
$$\Leftrightarrow\; (r+a)^2 - (r-a)^2 \geq 2\ln 2$$
$$\Leftrightarrow\; r \geq \frac{\ln 2}{2a}.$$
So the optimum decision rule is
$$\hat{m} = \begin{cases} m_0 & \text{if } r \geq \frac{\ln 2}{2a} \\ m_1 & \text{otherwise,} \end{cases}$$
so this is not an optimal receiver.
(b) For the optimal decision rule,
$$P(\hat{m} = m_1|m_0) = \int_{-\infty}^{\frac{\ln 2}{2a}} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{(r-a)^2}{2}}\, dr = Q\!\left(a - \frac{\ln 2}{2a}\right)$$
and we find
$$P(\hat{m} = m_0|m_1) = \int_{\frac{\ln 2}{2a}}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{(r+a)^2}{2}}\, dr = Q\!\left(a + \frac{\ln 2}{2a}\right).$$
So the total error is
$$P_e = P(m_0)\,P(\hat{m} = m_1|m_0) + P(m_1)\,P(\hat{m} = m_0|m_1) = \frac{1}{3}\,Q\!\left(a - \frac{\ln 2}{2a}\right) + \frac{2}{3}\,Q\!\left(a + \frac{\ln 2}{2a}\right).$$
But for the given decision rule,
$$P(\hat{m} = m_0|m_1) = \int_{0}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{(r+a)^2}{2}}\, dr = Q(a),$$
$$P(\hat{m} = m_1|m_0) = \int_{-\infty}^{0} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{(r-a)^2}{2}}\, dr = Q(a).$$
So the total error is
$$P'_e = \frac{1}{3}\,Q(a) + \frac{2}{3}\,Q(a) = Q(a).$$
Now we have to show that the error probability for the optimal decision rule is smaller. We know that, for large enough $x$,
$$Q(x) \approx e^{-\frac{x^2}{2}}.$$
Hence
$$P_e = \frac{1}{3}\,Q\!\left(a - \frac{\ln 2}{2a}\right) + \frac{2}{3}\,Q\!\left(a + \frac{\ln 2}{2a}\right) \approx \frac{1}{3}\, e^{-\frac{1}{2}\left(a - \frac{\ln 2}{2a}\right)^2} + \frac{2}{3}\, e^{-\frac{1}{2}\left(a + \frac{\ln 2}{2a}\right)^2}$$
$$= e^{-\frac{a^2}{2}}\left(\frac{1}{3}\, e^{\frac{1}{2}\ln 2} + \frac{2}{3}\, e^{-\frac{1}{2}\ln 2}\right) e^{-k^2}, \quad \text{where } e^{-k^2} = e^{-\frac{(\ln 2)^2}{8a^2}},$$
$$= e^{-\left(\frac{a^2}{2} + k^2\right)}\left(\frac{1}{3}\sqrt{2} + \frac{2}{3}\cdot\frac{1}{\sqrt{2}}\right) = \frac{2\sqrt{2}}{3}\, e^{-\left(\frac{a^2}{2} + k^2\right)} \leq e^{-\frac{a^2}{2}}.$$
But for the given decision rule, we may approximate the error probability as
$$P'_e = Q(a) \approx e^{-\frac{a^2}{2}}.$$
So
$$P_e \leq P'_e, \qquad (3)$$
i.e., for the given decision rule the error probability is larger than for the optimal decision rule.
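The two receivers can also be compared exactly, without the large-$x$ approximation, by evaluating both error expressions numerically; the value of $a$ below is an assumed example.

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

a = 1.5                                            # assumed example value
t = np.log(2) / (2 * a)                            # MAP threshold

Pe_given = Q(a)                                    # threshold-at-zero receiver
Pe_map = (1 / 3) * Q(a - t) + (2 / 3) * Q(a + t)   # MAP receiver

print("given receiver P'_e:", Pe_given)
print("MAP receiver    P_e:", Pe_map)              # strictly smaller
```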
5. An observation $R$ is defined as follows:
$$H_0: \quad R = \alpha S + N$$
$$H_1: \quad R = N$$
where $S$ and $N$ are independent zero-mean Gaussian random variables with variances $\sigma_S^2$ and $\sigma_N^2$, and $\alpha$ is a constant.
(a) Determine and sketch the optimum MAP rule.
(b) Calculate the probability of error for the MAP rule.
(c) Repeat the above steps when the random variable $S$ has mean value $\mu$.
(d) Determine the error probability when $\sigma_S \to 0$.
Solution: Let the two hypotheses given be $H_0$ and $H_1$ respectively. The observed variable $r$ under $H_0$ has mean $0$ and variance $\alpha^2\sigma_S^2 + \sigma_N^2 = \sigma_R^2$ (say), and under $H_1$ has mean $0$ and variance $\sigma_N^2$. Assume $P(m_0) = p_0$ and $P(m_1) = p_1$.
(a) From the MAP rule, $\hat{m} = m_0$ when
$$f_R(r|m_0)\,P(m_0) \geq f_R(r|m_1)\,P(m_1)$$
$$\Leftrightarrow\; \frac{1}{\sqrt{2\pi\sigma_R^2}}\, e^{-\frac{r^2}{2\sigma_R^2}} \cdot p_0 \geq \frac{1}{\sqrt{2\pi\sigma_N^2}}\, e^{-\frac{r^2}{2\sigma_N^2}} \cdot p_1$$
$$\Leftrightarrow\; \frac{p_0}{\sigma_R}\, e^{-\frac{r^2}{2\sigma_R^2}} \geq \frac{p_1}{\sigma_N}\, e^{-\frac{r^2}{2\sigma_N^2}}$$
$$\Leftrightarrow\; r^2 \geq \frac{2\sigma_N^2\sigma_R^2}{\sigma_R^2 - \sigma_N^2}\, \ln\!\left(\frac{p_1\sigma_R}{p_0\sigma_N}\right) = \frac{2\sigma_N^2\sigma_R^2}{\alpha^2\sigma_S^2}\, \ln\!\left(\frac{p_1\sigma_R}{p_0\sigma_N}\right).$$
If $p_1\sigma_R > p_0\sigma_N$, then the decision will be taken in favour of $m_0$ if
$$r \geq \frac{\sigma_N\sigma_R}{\alpha\sigma_S}\sqrt{2\ln\!\left(\frac{p_1\sigma_R}{p_0\sigma_N}\right)} \quad \text{or} \quad r \leq -\frac{\sigma_N\sigma_R}{\alpha\sigma_S}\sqrt{2\ln\!\left(\frac{p_1\sigma_R}{p_0\sigma_N}\right)}.$$
Say $r_0 = \frac{\sigma_N\sigma_R}{\alpha\sigma_S}\sqrt{2\ln\!\left(\frac{p_1\sigma_R}{p_0\sigma_N}\right)}$. So the decision rule is
$$\hat{m} = \begin{cases} m_0, & \text{when } r \geq r_0 \text{ or } r \leq -r_0 \\ m_1, & \text{otherwise.} \end{cases}$$
But if $p_1\sigma_R \leq p_0\sigma_N$, then the decision will be taken in favour of $m_0$ for all $r$. The decision rule then becomes
$$\hat{m} = m_0 \quad \forall\, r. \qquad (4)$$
(b) For the first case,
$$P(\hat{m} = m_0|m_1) = \int_{-\infty}^{-r_0} \frac{1}{\sqrt{2\pi\sigma_N^2}}\, e^{-\frac{r^2}{2\sigma_N^2}}\, dr + \int_{r_0}^{\infty} \frac{1}{\sqrt{2\pi\sigma_N^2}}\, e^{-\frac{r^2}{2\sigma_N^2}}\, dr = 2Q\!\left(\frac{r_0}{\sigma_N}\right). \qquad (5)$$
$$P(\hat{m} = m_1|m_0) = \int_{-r_0}^{r_0} \frac{1}{\sqrt{2\pi\sigma_R^2}}\, e^{-\frac{r^2}{2\sigma_R^2}}\, dr = Q\!\left(-\frac{r_0}{\sigma_R}\right) - Q\!\left(\frac{r_0}{\sigma_R}\right) = 1 - 2Q\!\left(\frac{r_0}{\sigma_R}\right).$$
So the total error is
$$P_e = p_0\left(1 - 2Q\!\left(\frac{r_0}{\sigma_R}\right)\right) + 2p_1\, Q\!\left(\frac{r_0}{\sigma_N}\right).$$
Now for the second case,
$$P(\hat{m} = m_0|m_1) = 1 \quad \text{and} \quad P(\hat{m} = m_1|m_0) = 0.$$
So the error is
$$P_e = P(\hat{m} = m_0|m_1)\,P(m_1) + P(\hat{m} = m_1|m_0)\,P(m_0) = 1 \cdot p_1 + 0 \cdot p_0 = p_1.$$
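A short simulation can confirm the error expression for the first case. All parameter values below are assumed for illustration and are chosen so that $p_1\sigma_R > p_0\sigma_N$ holds (here with equal priors).

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

alpha, sig_S, sig_N, p0, p1 = 1.0, 2.0, 1.0, 0.5, 0.5   # assumed values
sig_R = np.sqrt(alpha**2 * sig_S**2 + sig_N**2)
r0 = (sig_N * sig_R / (alpha * sig_S)) * np.sqrt(
    2 * np.log(p1 * sig_R / (p0 * sig_N)))

rng = np.random.default_rng(2)
trials = 1_000_000
m = rng.integers(0, 2, size=trials)       # 0 -> H0 (R = alpha*S + N), 1 -> H1 (R = N)
r = np.where(m == 0,
             rng.normal(0.0, sig_R, size=trials),
             rng.normal(0.0, sig_N, size=trials))
m_hat = np.where(np.abs(r) >= r0, 0, 1)   # decide m0 when |r| >= r0

Pe_sim = np.mean(m_hat != m)
Pe_ana = p0 * (1 - 2 * Q(r0 / sig_R)) + 2 * p1 * Q(r0 / sig_N)
print("simulated :", Pe_sim)
print("analytical:", Pe_ana)
```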
(c) When $S$ has mean $\mu$, the observed variable $r$ under $H_0$ has mean $\alpha\mu$ and variance $\alpha^2\sigma_S^2 + \sigma_N^2 = \sigma_R^2$ (say), and under $H_1$ has mean $0$ and variance $\sigma_N^2$.
From the MAP rule, $\hat{m} = m_0$ when
$$f_R(r|m_0)\,P(m_0) \geq f_R(r|m_1)\,P(m_1)$$
$$\Leftrightarrow\; \frac{1}{\sqrt{2\pi\sigma_R^2}}\, e^{-\frac{(r-\alpha\mu)^2}{2\sigma_R^2}} \cdot p_0 \geq \frac{1}{\sqrt{2\pi\sigma_N^2}}\, e^{-\frac{r^2}{2\sigma_N^2}} \cdot p_1$$
$$\Leftrightarrow\; \frac{p_0}{\sigma_R}\, e^{-\frac{(r-\alpha\mu)^2}{2\sigma_R^2}} \geq \frac{p_1}{\sigma_N}\, e^{-\frac{r^2}{2\sigma_N^2}}. \qquad (6)$$
From this we can get
$$r^2\,\frac{\sigma_R^2 - \sigma_N^2}{\sigma_R^2\sigma_N^2} + r\,\frac{2\alpha\mu}{\sigma_R^2} - \left(\frac{\alpha^2\mu^2}{\sigma_R^2} + 2\ln\!\left(\frac{p_1\sigma_R}{p_0\sigma_N}\right)\right) \geq 0. \qquad (7)$$
This is equivalent to
$$ar^2 + br + c \geq 0, \quad \text{where } a = \frac{\sigma_R^2 - \sigma_N^2}{\sigma_R^2\sigma_N^2},\quad b = \frac{2\alpha\mu}{\sigma_R^2},\quad c = -\left(\frac{\alpha^2\mu^2}{\sigma_R^2} + 2\ln\!\left(\frac{p_1\sigma_R}{p_0\sigma_N}\right)\right). \qquad (8)$$
Suppose $r_1$ and $r_2$ are the two roots of $ar^2 + br + c = 0$ with $r_1 \leq r_2$. Since $a > 0$,
$$(r - r_1)(r - r_2) \geq 0 \;\Leftrightarrow\; r \leq r_1 \text{ or } r \geq r_2. \qquad (9)$$
So the decision rule is
$$\hat{m} = \begin{cases} m_0, & \text{when } r \geq r_2 \text{ or } r \leq r_1 \\ m_1, & \text{otherwise.} \end{cases}$$
$$P(\hat{m} = m_0|m_1) = \int_{r_2}^{\infty} \frac{1}{\sqrt{2\pi\sigma_N^2}}\, e^{-\frac{r^2}{2\sigma_N^2}}\, dr + \int_{-\infty}^{r_1} \frac{1}{\sqrt{2\pi\sigma_N^2}}\, e^{-\frac{r^2}{2\sigma_N^2}}\, dr = Q\!\left(\frac{r_2}{\sigma_N}\right) + Q\!\left(-\frac{r_1}{\sigma_N}\right) = 1 + Q\!\left(\frac{r_2}{\sigma_N}\right) - Q\!\left(\frac{r_1}{\sigma_N}\right)$$
$$P(\hat{m} = m_1|m_0) = \int_{r_1}^{r_2} \frac{1}{\sqrt{2\pi\sigma_R^2}}\, e^{-\frac{(r-\alpha\mu)^2}{2\sigma_R^2}}\, dr = Q\!\left(\frac{r_1 - \alpha\mu}{\sigma_R}\right) - Q\!\left(\frac{r_2 - \alpha\mu}{\sigma_R}\right)$$
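For concrete parameter values, the thresholds $r_1 \leq r_2$ are just the roots of $ar^2 + br + c = 0$ and can be computed numerically; the sketch below uses assumed example values.

```python
import numpy as np

# Assumed example values
alpha, mu, sig_S, sig_N, p0, p1 = 1.0, 1.0, 2.0, 1.0, 0.5, 0.5
sig_R2 = alpha**2 * sig_S**2 + sig_N**2
sig_N2 = sig_N**2

a = (sig_R2 - sig_N2) / (sig_R2 * sig_N2)
b = 2 * alpha * mu / sig_R2
c = -(alpha**2 * mu**2 / sig_R2
      + 2 * np.log(p1 * np.sqrt(sig_R2) / (p0 * sig_N)))

r1, r2 = np.sort(np.roots([a, b, c]))     # quadratic roots, r1 <= r2
print("decide m0 when r <=", r1, "or r >=", r2)
```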
(d) When $\sigma_S \to 0$, we may use the approximation
$$\sigma_R^2 = \alpha^2\sigma_S^2 + \sigma_N^2 \approx \sigma_N^2, \quad \text{i.e., } \sigma_R \approx \sigma_N.$$
The quadratic inequality in $r$ then becomes linear (taking $\alpha\mu > 0$):
$$r\,\frac{2\alpha\mu}{\sigma_N^2} - \left(\frac{\alpha^2\mu^2}{\sigma_N^2} + 2\ln\!\left(\frac{p_1}{p_0}\right)\right) \geq 0 \;\Leftrightarrow\; r \geq \frac{\alpha\mu}{2} + \frac{\sigma_N^2}{\alpha\mu}\,\ln\!\left(\frac{p_1}{p_0}\right) = r_3 \text{ (say)}.$$
So the decision rule is
$$\hat{m} = \begin{cases} m_0, & \text{if } r \geq r_3 \\ m_1, & \text{otherwise.} \end{cases}$$
$$P(\hat{m} = m_0|m_1) = \int_{r_3}^{\infty} \frac{1}{\sqrt{2\pi\sigma_N^2}}\, e^{-\frac{r^2}{2\sigma_N^2}}\, dr = Q\!\left(\frac{r_3}{\sigma_N}\right)$$
$$P(\hat{m} = m_1|m_0) = \int_{-\infty}^{r_3} \frac{1}{\sqrt{2\pi\sigma_N^2}}\, e^{-\frac{(r-\alpha\mu)^2}{2\sigma_N^2}}\, dr = 1 - Q\!\left(\frac{r_3 - \alpha\mu}{\sigma_N}\right) = Q\!\left(\frac{\alpha\mu - r_3}{\sigma_N}\right)$$
So the total error is
$$P_e = P(\hat{m} = m_0|m_1)\,P(m_1) + P(\hat{m} = m_1|m_0)\,P(m_0) = p_1\, Q\!\left(\frac{r_3}{\sigma_N}\right) + p_0\, Q\!\left(\frac{\alpha\mu - r_3}{\sigma_N}\right),$$
with $r_3$, $\sigma_N$, $p_0$ and $p_1$ as defined above.
6. Consider a binary communication system where message $m_0$ is transmitted by sending $S_0 = 0$ and message $m_1$ by a random variable $S$ with a known probability density function $f_S(s)$. Consider the following binary hypothesis testing problem:
$$H_0\ (m_0): \quad R = N$$
$$H_1\ (m_1): \quad R = S + N$$
where
$$f_S(s) = \begin{cases} a e^{-as}, & s \geq 0 \\ 0, & s < 0, \end{cases} \qquad f_N(n) = \begin{cases} b e^{-bn}, & n \geq 0 \\ 0, & n < 0. \end{cases}$$
Assume $a > 0$, $b > 0$ and $a \neq b$. Find the decision rule that minimizes the probability of error if $m_0$ and $m_1$ are equally likely.
Solution:
$$f_R(r|m_0) = \begin{cases} b e^{-br}, & r \geq 0 \\ 0, & r < 0. \end{cases}$$
Assuming $S$ and $N$ to be independent,
$$f_R(r|m_1) = (f_S * f_N)(r) = \int f_S(\lambda)\, f_N(r - \lambda)\, d\lambda = \begin{cases} \frac{ab}{b-a}\left(e^{-ar} - e^{-br}\right), & r \geq 0 \\ 0, & \text{otherwise.} \end{cases}$$
Decide hypothesis $H_0$ if
$$f_R(r|m_0) \geq f_R(r|m_1) \;\Leftrightarrow\; r \leq \frac{1}{a-b}\,\ln\!\left(\frac{a}{b}\right).$$
Therefore, decide
$$H_0, \quad \text{if } 0 \leq r \leq \frac{1}{a-b}\,\ln\!\left(\frac{a}{b}\right);$$
$$H_1, \quad \text{if } r > \frac{1}{a-b}\,\ln\!\left(\frac{a}{b}\right).$$
It should be noted that the result is independent of whether a > b or a < b.
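A Monte Carlo sketch can confirm both the threshold and its optimality: sample $R$ under each hypothesis from the exponential densities above, then compare the empirical error rate at the derived threshold against perturbed thresholds. The values of $a$ and $b$ are assumed examples.

```python
import numpy as np

a, b = 2.0, 0.5                           # assumed values, a != b
tau = np.log(a / b) / (a - b)             # derived threshold (positive for any a != b)

rng = np.random.default_rng(3)
trials = 1_000_000
m = rng.integers(0, 2, size=trials)       # equally likely hypotheses
n = rng.exponential(1 / b, size=trials)   # N ~ Exp(rate b), scale = 1/b
s = rng.exponential(1 / a, size=trials)   # S ~ Exp(rate a)
r = n + np.where(m == 1, s, 0.0)          # H0: R = N,  H1: R = S + N

for t in (0.8 * tau, tau, 1.2 * tau):     # tau should give the smallest error
    m_hat = (r > t).astype(int)           # decide H1 when r > t
    print(f"threshold {t:.4f}: P_e = {np.mean(m_hat != m):.5f}")
```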