
ELEC 5360: Digital Communications

Midterm Examination
October 7, 2011, 6:30-8:30 PM
Open Book, Open Notes
(There are 4 problems, worth 20+30+25+25 points)
1. (20 pts) Short Questions.
(a) (5 pts) Compared with orthogonal modulation, what is the advantage of bi-orthogonal modulation
and simplex signaling, respectively?
(b) (5 pts) What are the main advantages of CPM (Continuous Phase Modulation)?
(c) (5 pts) Why do we need phase recovery and symbol synchronization? Which one is more important
for the QAM signals?
(d) (5 pts) What is the optimal detector for signaling with memory? What is the practical way to
implement the optimal detector?
Solution
(a) Bi-orthogonal modulation provides a higher data rate for the same number of dimensions. Simplex signaling achieves the same minimum distance with lower energy.
(b) CPM has lower side lobes and constant amplitude.
(c) As we have propagation delay and carrier oset in practical communication systems, carrier re-
covery and symbol synchronization are needed for coherent demodulation. Carrier phase recovery
is more important for QAM signals, as the carrier phase oset will cause power penalty and
cross-talk between I and Q channels.
(d) For signaling with memory, MLSD is optimal, and it can be implemented by the Viterbi algorithm.
2. (30 pts) Consider a modulation scheme whose constellation is shown in Fig. 1. An AWGN channel with power spectral density $N_0/2$ is assumed. Answer the following questions:

[Figure omitted: the eight constellation points $s_1, \ldots, s_8$, with axis ticks at $\pm d$ and $\pm 2d$.]
Figure 1: A specific modulation scheme.
(a) (4 pts) How many bits can we transmit in each symbol?
(b) (4 pts) Find the average symbol energy.
(c) (6 pts) Assuming equally likely symbols, what is the optimal detector and what is the decision
region for each symbol? Plot these regions.
(d) (8 pts) What is the pair-wise error probability $P_e(s_2|s_1)$, i.e., the probability of detecting $s_2$ while transmitting $s_1$? What is $P_e(s_4|s_1)$?
(e) (8 pts) Derive the union bound for $P(\mathrm{error}|s_1)$.
Solution
(a) $M = 8$, so $k = \log_2 M = 3$ bits in each symbol.
(b) The average symbol energy is
$$E_{\mathrm{avg}} = \frac{1}{M} \sum_{i=1}^{M} E_i = \frac{1}{8} \left[ 4 d^2 + 4 (2d)^2 \right] = \frac{5}{2} d^2.$$
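As a quick numerical sanity check of the average-energy expression (a sketch with $d$ normalized to 1; the constellation has four points at energy $d^2$ and four at $(2d)^2$):

```python
# Average symbol energy of the 8-point constellation, with d = 1:
# four points at energy d^2 and four points at energy (2d)^2.
d = 1.0
energies = 4 * [d**2] + 4 * [(2 * d)**2]
E_avg = sum(energies) / len(energies)
print(E_avg)  # 2.5, i.e. (5/2) d^2
```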
(c) The optimal detector is the ML detector, and the optimal decision boundaries are shown in Fig.
2.
(d) The pair-wise error probabilities are
$$P_e(s_2|s_1) = Q\!\left(\sqrt{\frac{d_{12}^2}{2N_0}}\right) = Q\!\left(\sqrt{\frac{d^2}{2N_0}}\right),$$
$$P_e(s_4|s_1) = Q\!\left(\sqrt{\frac{d_{14}^2}{2N_0}}\right) = Q\!\left(\sqrt{\frac{4d^2}{2N_0}}\right).$$

[Figure omitted: the eight points $s_1, \ldots, s_8$ with their ML decision boundaries.]
Figure 2: Optimal decision regions.
(e) Similarly, we have
$$P_e(s_3|s_1) = Q\!\left(\sqrt{\frac{5d^2}{2N_0}}\right), \quad P_e(s_5|s_1) = Q\!\left(\sqrt{\frac{9d^2}{2N_0}}\right), \quad P_e(s_6|s_1) = Q\!\left(\sqrt{\frac{8d^2}{2N_0}}\right).$$
Then, due to the symmetry of the constellation, the union bound on the error probability given $s_1$ is
$$P(\mathrm{error}|s_1) \le \sum_{k=2}^{8} P_e(s_k|s_1) = P_e(s_2|s_1) + 2P_e(s_4|s_1) + 2P_e(s_3|s_1) + P_e(s_5|s_1) + P_e(s_6|s_1)$$
$$= Q\!\left(\sqrt{\frac{d^2}{2N_0}}\right) + 2Q\!\left(\sqrt{\frac{4d^2}{2N_0}}\right) + 2Q\!\left(\sqrt{\frac{5d^2}{2N_0}}\right) + Q\!\left(\sqrt{\frac{9d^2}{2N_0}}\right) + Q\!\left(\sqrt{\frac{8d^2}{2N_0}}\right). \qquad (1)$$
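The union bound in (1) can be evaluated numerically using $Q(x) = \frac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$. In this sketch, the function names are mine and the values of $d$ and $N_0$ are arbitrary illustrative choices:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound(d, N0):
    """Union bound on P(error | s1) from Eq. (1): squared distances from s1
    to the other seven points, with their multiplicities."""
    terms = [(d**2, 1), (4 * d**2, 2), (5 * d**2, 2), (9 * d**2, 1), (8 * d**2, 1)]
    return sum(m * Q(math.sqrt(d2 / (2 * N0))) for d2, m in terms)

print(union_bound(d=1.0, N0=0.1))   # the bound shrinks as N0 decreases
```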
3. (25 pts) Consider standard BPSK modulation with two equally likely constellation points a distance $d$ apart. The received signal is $y = x + n$, where the additive noise $n$ is uniformly distributed between $-d$ and $d$, i.e., $n \sim U[-d, d]$.
(a) (3 pts) What is the pdf (probability density function) of n?
(b) (5 pts) Specify the MAP and ML decision regions.
(c) (4 pts) What is the value of SNR?
(d) (5 pts) What is the probability of error of the MAP detector?
(e) (8 pts) Now consider unequal prior probabilities with $P(x = d/2) = 0.75$ and $P(x = -d/2) = 0.25$. Determine the MAP decision region and calculate the error probability.
Solution
(a) $f_n(z) = \frac{1}{2d}$ for $z \in [-d, d]$, and $f_n(z) = 0$ otherwise.
(b) Decide $\hat{x} = +1$ if $y \ge d/2$, and $\hat{x} = -1$ if $y \le -d/2$. The decision in the region $y \in (-d/2, d/2)$ does not matter, since both likelihoods are equal there; with equal priors, the MAP and ML decision regions coincide.
(c) $E_b = \frac{d^2}{4}$, and the noise variance is $\frac{1}{2d} \int_{-d}^{d} n^2 \, dn = \frac{d^2}{3}$, so
$$\mathrm{SNR} = \frac{d^2/4}{d^2/3} = \frac{3}{4}.$$
(d) The error probability is
$$P_e = \frac{1}{2} P(\hat{x} = -1 \,|\, x = +1) + \frac{1}{2} P(\hat{x} = +1 \,|\, x = -1) = \frac{1}{4},$$
regardless of how ties in $(-d/2, d/2)$ are resolved, since the two conditional error probabilities always sum to $1/2$.
(e) The decision region is $\hat{x} = +1$ if $y \ge -d/2$, and $\hat{x} = -1$ if $y < -d/2$. (Note that this is the MAP decision region for $P(x = d/2) = 0.5 + \epsilon$, for any $\epsilon > 0$.) The error probability is
$$P_e = \frac{1}{4} P(\hat{x} = +1 \,|\, x = -1) + \frac{3}{4} P(\hat{x} = -1 \,|\, x = +1) = \frac{1}{8} + 0 = \frac{1}{8}.$$
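Parts (d) and (e) can be cross-checked with a short Monte Carlo sketch (assumptions: $d = 1$, the function name and trial count are mine, and the equal-prior case uses threshold $0$, one of the equivalent tie-breaking choices):

```python
import random

def error_rate(p_plus, threshold, d=1.0, trials=200_000, seed=1):
    """Empirical BPSK error rate: x = +d/2 or -d/2, y = x + n with n ~ U[-d, d],
    and the detector decides x_hat = +d/2 iff y >= threshold."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        x = d / 2 if rng.random() < p_plus else -d / 2
        y = x + rng.uniform(-d, d)
        x_hat = d / 2 if y >= threshold else -d / 2
        errors += (x_hat != x)
    return errors / trials

# Equal priors: any threshold in [-d/2, d/2] gives P_e = 1/4.
print(error_rate(p_plus=0.5, threshold=0.0))    # close to 0.25
# P(x = +d/2) = 0.75: the MAP threshold is -d/2, giving P_e = 1/8.
print(error_rate(p_plus=0.75, threshold=-0.5))  # close to 0.125
```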
4. (25 pts) Consider a simple channel encoder: the repetition code. For example, with a rate-$\frac{1}{3}$ repetition code, the information bit $x$ is encoded into $xxx$, for $x = 0$ or $1$. Assume equal prior probabilities, i.e., $P(s = 0) = P(s = 1) = 0.5$, and assume a binary symmetric channel with error probability $f \triangleq P(0|1) = P(1|0) < 0.5$, as shown in Fig. 3. The decoding rule is shown in Table 1, which is a majority-vote decoding algorithm.
[Figure omitted: the BSC, where each input bit is received correctly with probability $1-f$ and flipped with probability $f$.]
Figure 3: The binary symmetric channel (BSC).
Table 1: Decoding rule for the repetition code

Received sequence $r$:      000  001  010  100  101  110  011  111
Decoded message $\hat{s}$:   0    0    0    0    1    1    1    1
(a) (5 pts) Assuming the message sequence is [0, 0, 1, 0, 1, 1, 0], what is the encoded sequence with a
rate 1/3 repetition code? If the received sequence at the destination is [0 0 0 0 0 1 1 1 1 0 0 0 0 1 0 1 1 1 0 0 1],
which sent bits are in error?
(b) (5 pts) Derive the likelihood ratios $\frac{P(r|s=1)}{P(r|s=0)}$ for the different received sequences $r$ of a rate-1/3 repetition code.
(c) (5 pts) Show that the simple majority-vote decoder is the optimal decoder with the given assump-
tions, in the sense that it minimizes the error probability.
(d) (5 pts) Show that the rate 1/3 repetition code reduces the error probability compared to the
uncoded system.
(e) (5 pts) Derive the bit error probability of a general rate 1/N repetition code for odd N.
Table 2: Likelihood ratios for the rate-1/3 repetition code, where $\gamma = \frac{1-f}{f}$

Received sequence $r$:                         000            001            010            100            101        110        011        111
Likelihood ratio $\frac{P(r|s=1)}{P(r|s=0)}$:  $\gamma^{-3}$  $\gamma^{-1}$  $\gamma^{-1}$  $\gamma^{-1}$  $\gamma$   $\gamma$   $\gamma$   $\gamma^{3}$
Solution
(a) The encoded sequence is [000000111000111111000]. The decoded sequence is [0, 0, 1, 0, 0, 1, 0], so the fifth bit is in error.
(b) As shown in Table 2, where $\gamma = \frac{1-f}{f}$.
(c) With equal prior probabilities, the MAP detector is the same as the ML detector, which minimizes the error probability. Since $\gamma = \frac{1-f}{f} > 1$ for $f < 0.5$, the likelihood ratios in Table 2 exceed 1 exactly for the sequences with a majority of 1s, so the majority-vote decoder is the ML detector and is therefore optimal.
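The argument in part (c) can be checked numerically: the ML decision (likelihood ratio above 1) coincides with the majority vote for every received sequence. A minimal sketch, where $f = 0.2$ is an arbitrary illustrative value:

```python
from itertools import product

f = 0.2                      # any f < 0.5 works here
gamma = (1 - f) / f          # gamma > 1 whenever f < 0.5

lr_table = {}
for bits in product('01', repeat=3):
    r = ''.join(bits)
    ones = r.count('1')
    # P(r|s=1)/P(r|s=0) = gamma^(ones - zeros) = gamma^(2*ones - 3)
    lr_table[r] = gamma ** (2 * ones - 3)
    ml = 1 if lr_table[r] > 1 else 0    # ML decision via the likelihood ratio
    majority = 1 if ones >= 2 else 0    # majority-vote decision
    assert ml == majority               # the two rules agree on all 8 sequences

print(lr_table)   # gamma^-3, three gamma^-1, three gamma, gamma^3
```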
(d) The error probability of the coded system is $P_e = 3f^2(1-f) + f^3$ (see part (e) for the result on general $N$), so we need to show
$$3f^2(1-f) + f^3 < f,$$
which is equivalent to
$$2f^2 - 3f + 1 = 2(f-1)\left(f - \frac{1}{2}\right) > 0.$$
Since $f < 0.5$, both factors are negative, so the inequality always holds.
(e) The error probability is
$$P_e(N) = \sum_{n=(N+1)/2}^{N} \binom{N}{n} f^n (1-f)^{N-n}.$$
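The closed form in part (e) can be cross-checked against an exhaustive sum over all $2^N$ BSC flip patterns (a sketch; the function names are mine):

```python
from itertools import product
from math import comb

def p_error(N, f):
    """Closed-form P_e of a rate-1/N repetition code (odd N): decoding
    fails iff more than half of the N transmitted bits are flipped."""
    return sum(comb(N, n) * f**n * (1 - f)**(N - n)
               for n in range((N + 1) // 2, N + 1))

def p_error_exhaustive(N, f):
    """Same quantity, summing the probability of every error pattern."""
    total = 0.0
    for flips in product([0, 1], repeat=N):
        if sum(flips) > N / 2:           # majority flipped -> decoding error
            p = 1.0
            for b in flips:
                p *= f if b else (1 - f)
            total += p
    return total

f = 0.1
print(p_error(3, f))                                  # 3 f^2 (1-f) + f^3, about 0.028
print(abs(p_error(3, f) - p_error_exhaustive(3, f)))  # agreement between the two
print(p_error(3, f) < f)                              # True: the code beats the uncoded BSC
```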