
Digital Communication Exercises

Contents
1 Converting a Digital Signal to an Analog Signal

2 Decision Criteria and Hypothesis Testing

3 Generalized Decision Criteria

4 Vector Communication Channels

5 Signal Space Representation

6 Optimal Receiver for the Waveform Channel

7 The Probability of Error

8 Bit Error Probability

9 Connection with the Concept of Capacity

1 Converting a Digital Signal to an Analog Signal
1. [1, Problem 4.15].
Consider a four-phase PSK signal represented by the equivalent lowpass signal
$$u(t) = \sum_n I_n g(t - nT)$$
where $I_n$ takes on one of the four possible values $\sqrt{1/2}\,(\pm 1 \pm j)$ with equal probability. The sequence of information symbols $\{I_n\}$ is statistically independent (i.i.d.).

(a) Determine the power density spectrum of u(t) when
$$g(t) = \begin{cases} A, & 0 \le t \le T, \\ 0, & \text{otherwise.} \end{cases}$$
(b) Repeat (1a) when
$$g(t) = \begin{cases} A\sin(\pi t/T), & 0 \le t \le T, \\ 0, & \text{otherwise.} \end{cases}$$
(c) Compare the spectra obtained in (1a) and (1b) in terms of the 3 dB bandwidth and the bandwidth to the first spectral zero. Here you may find the frequency numerically.

Solution:
We have $S_U(f) = \frac{1}{T}|G(f)|^2\sum_{m=-\infty}^{\infty} C_I(m)e^{-j2\pi f mT}$, with $E(I_n) = 0$ and $E(|I_n|^2) = 1$; hence
$$C_I(m) = \begin{cases} 1, & m = 0, \\ 0, & m \ne 0, \end{cases}$$
therefore $\sum_{m=-\infty}^{\infty} C_I(m)e^{-j2\pi f mT} = 1 \Rightarrow S_U(f) = \frac{1}{T}|G(f)|^2$.

(a) For the rectangular pulse:
$$G(f) = AT\,\frac{\sin \pi f T}{\pi f T}\,e^{-j2\pi f T/2} \;\Rightarrow\; |G(f)|^2 = A^2T^2\,\frac{\sin^2 \pi f T}{(\pi f T)^2}$$
where the factor $e^{-j2\pi f T/2}$ is due to the $T/2$ shift of the rectangular pulse from the center. Hence:
$$S_U(f) = A^2T\,\frac{\sin^2 \pi f T}{(\pi f T)^2}$$
(b) For the sinusoidal pulse: $G(f) = \int_0^T A\sin(\pi t/T)\exp(-j2\pi f t)\,dt$. Using the identity $\sin x = \frac{e^{jx}-e^{-jx}}{2j}$ it is easily shown that:
$$G(f) = \frac{2AT}{\pi}\,\frac{\cos \pi T f}{1 - 4T^2f^2}\,e^{-j2\pi f T/2} \;\Rightarrow\; |G(f)|^2 = \left(\frac{2AT}{\pi}\right)^2\frac{\cos^2 \pi T f}{(1 - 4T^2f^2)^2}$$
Hence:
$$S_U(f) = T\left(\frac{2A}{\pi}\right)^2\frac{\cos^2 \pi T f}{(1 - 4T^2f^2)^2}$$

(c) The 3 dB frequency for (1a) is the solution of
$$\frac{\sin^2 \pi f_{3\mathrm{dB}} T}{(\pi f_{3\mathrm{dB}} T)^2} = \frac12 \;\Rightarrow\; f_{3\mathrm{dB}} \approx \frac{0.44}{T}$$
(this solution is obtained graphically or numerically), while the 3 dB frequency for the sinusoidal pulse in (1b) is $f_{3\mathrm{dB}} \approx \frac{0.59}{T}$.
The rectangular pulse spectrum has its first spectral null at $f = 1/T$, whereas the spectrum of the sinusoidal pulse has its first null at $f = 3/(2T)$. Clearly the spectrum of the rectangular pulse has a narrower main lobe; however, it has higher sidelobes.
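These 3 dB frequencies are easy to reproduce numerically. A minimal sketch (assuming Python with numpy and scipy, and T normalized to 1) solves the half-power equation for both pulses with a bracketing root finder:

    import numpy as np
    from scipy.optimize import brentq

    T = 1.0  # normalize the symbol duration

    def rect_psd(f):              # normalized |G(f)|^2 for the rectangular pulse
        return np.sinc(f * T) ** 2          # np.sinc(x) = sin(pi x)/(pi x)

    def sine_psd(f):              # normalized |G(f)|^2 for the half-sine pulse
        return (np.cos(np.pi * f * T) / (1.0 - 4.0 * (f * T) ** 2)) ** 2

    # Solve normalized_psd(f) = 1/2 on a bracket below the first spectral null.
    f3db_rect = brentq(lambda f: rect_psd(f) - 0.5, 1e-6, 0.9 / T)
    f3db_sine = brentq(lambda f: sine_psd(f) - 0.5, 1e-6, 1.4 / T)

    print("rectangular pulse: f_3dB ~ %.3f/T" % f3db_rect)   # ~ 0.44/T
    print("half-sine pulse:   f_3dB ~ %.3f/T" % f3db_sine)   # ~ 0.59/T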

2. [1, Problem 4.21].


The lowpass equivalent representation of a PAM signal is
$$u(t) = \sum_n I_n g(t - nT)$$
Suppose g(t) is a rectangular pulse and
$$I_n = a_n - a_{n-2}$$
where $\{a_n\}$ is a sequence of uncorrelated ($E\{a_na_m\} = 0$ for $n \ne m$) binary-valued $(1, -1)$ random variables that occur with equal probability.

(a) Determine the autocorrelation function of the sequence {In }.


(b) Determine the power density spectrum of u(t).
(c) Repeat (2b) if the possible values of an are (0, 1).

Solution:

(a)
$$C_I(m) = E\{I_{n+m}I_n\} = E\{(a_{n+m} - a_{n+m-2})(a_n - a_{n-2})\} = \begin{cases} 2, & m = 0, \\ -1, & m = \pm 2, \\ 0, & \text{otherwise,} \end{cases}$$
that is, $C_I(m) = 2\delta(m) - \delta(m-2) - \delta(m+2)$.

(b) $S_U(f) = \frac{1}{T}|G(f)|^2\sum_{m=-\infty}^{\infty} C_I(m)e^{-j2\pi f mT}$, where
$$\sum_{m=-\infty}^{\infty} C_I(m)e^{-j2\pi f mT} = 4\sin^2 2\pi f T,$$
and
$$|G(f)|^2 = (AT)^2\left(\frac{\sin \pi f T}{\pi f T}\right)^2.$$
Therefore:
$$S_U(f) = 4A^2T\left(\frac{\sin \pi f T}{\pi f T}\right)^2\sin^2 2\pi f T$$
(c) If $\{a_n\}$ takes the values (0, 1) with equal probability, then $E\{a_n\} = 1/2$ and $E\{a_{n+m}a_n\} = \frac14[1 + \delta(m)]$. Then:
$$C_I(m) = \frac14\left[2\delta(m) - \delta(m-2) - \delta(m+2)\right] \;\Rightarrow\; \Phi_{ii}(f) = \sin^2 2\pi f T$$
$$S_U(f) = A^2T\left(\frac{\sin \pi f T}{\pi f T}\right)^2\sin^2 2\pi f T$$
Thus, we obtain the same result as in (2b), but the magnitude of the various quantities is reduced by a factor of 4.

3. [2, Problem 1.16].


A zero-mean stationary process x(t) is applied to a linear filter whose impulse response is a truncated exponential:
$$h(t) = \begin{cases} ae^{-at}, & 0 \le t \le T, \\ 0, & \text{otherwise.} \end{cases}$$
Show that the power spectral density of the filter output y(t) is given by
$$S_Y(f) = \frac{a^2}{a^2 + 4\pi^2f^2}\left(1 - 2\exp(-aT)\cos 2\pi f T + \exp(-2aT)\right)S_X(f)$$
where $S_X(f)$ is the power spectral density of the filter input.
Solution:
The frequency response of the filter is:
$$H(f) = \int_{-\infty}^{\infty} h(t)\exp(-j2\pi ft)\,dt = \int_0^T a\exp(-at)\exp(-j2\pi ft)\,dt = \int_0^T a\exp(-(a + j2\pi f)t)\,dt = \frac{a}{a + j2\pi f}\left[1 - e^{-aT}(\cos 2\pi fT - j\sin 2\pi fT)\right].$$
The squared magnitude response is:
$$|H(f)|^2 = \frac{a^2}{a^2 + 4\pi^2f^2}\left(1 - 2e^{-aT}\cos 2\pi fT + e^{-2aT}\right)$$
and the required PSD follows from $S_Y(f) = |H(f)|^2 S_X(f)$.
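As a quick numerical cross-check of the closed form (a sketch; a = 2 and T = 1 are arbitrary example values), one can integrate the truncated exponential directly and compare the squared magnitude with the bracketed expression:

    import numpy as np
    from scipy.integrate import quad

    a, T = 2.0, 1.0   # arbitrary example parameters

    def H_numeric(f):
        # direct numerical evaluation of H(f) = int_0^T a e^{-at} e^{-j2pi f t} dt
        re, _ = quad(lambda t: a * np.exp(-a * t) * np.cos(2 * np.pi * f * t), 0, T)
        im, _ = quad(lambda t: -a * np.exp(-a * t) * np.sin(2 * np.pi * f * t), 0, T)
        return re + 1j * im

    def H2_closed(f):
        # closed-form |H(f)|^2 from the solution above
        return a**2 / (a**2 + 4 * np.pi**2 * f**2) * (
            1 - 2 * np.exp(-a * T) * np.cos(2 * np.pi * f * T) + np.exp(-2 * a * T))

    for f in [0.0, 0.3, 1.0, 2.5]:
        print(f, abs(H_numeric(f))**2, H2_closed(f))   # the two columns agree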


4. [1, Problem 4.32].
The information sequence $\{a_n\}$ is a sequence of i.i.d. random variables, each taking the values +1 and −1 with equal probability. This sequence is to be transmitted at baseband by a biphase coding scheme, described by
$$s(t) = \sum_n a_n g(t - nT)$$
where g(t) is defined by
$$g(t) = \begin{cases} 1, & 0 \le t \le T/2, \\ -1, & T/2 \le t \le T. \end{cases}$$
(a) Find the power spectral density of s(t).
(b) Assume that it is desirable to have a zero in the power spectrum at f = 1/T. To this end we use a precoding scheme, introducing $b_n = a_n + ka_{n-1}$, where k is some constant, and then transmit the $\{b_n\}$ sequence using the same g(t). Is it possible to choose k to produce a frequency null at f = 1/T? If yes, what is the appropriate value and the resulting power spectrum?
(c) Now assume we want to have zeros at all multiples of $f_0 = 1/4T$. Is it possible to obtain these zeros with an appropriate choice of k in the previous part? If not, what kind of precoding do you suggest to produce the desired nulls?

Solution:
(a) Since $\mu_a = 0$ and $\sigma_a^2 = 1$, we have $S_S(f) = \frac{1}{T}|G(f)|^2$.
$$G(f) = \frac{T}{2}\frac{\sin(\pi fT/2)}{\pi fT/2}e^{-j2\pi fT/4} - \frac{T}{2}\frac{\sin(\pi fT/2)}{\pi fT/2}e^{-j2\pi f\,3T/4} = \frac{T}{2}\frac{\sin(\pi fT/2)}{\pi fT/2}e^{-j\pi fT}\left(2j\sin(\pi fT/2)\right) = jT\frac{\sin^2(\pi fT/2)}{\pi fT/2}e^{-j\pi fT}$$
$$\Rightarrow\; |G(f)|^2 = T^2\left(\frac{\sin^2(\pi fT/2)}{\pi fT/2}\right)^2 \;\Rightarrow\; S_S(f) = T\left(\frac{\sin^2(\pi fT/2)}{\pi fT/2}\right)^2$$

(b) For a non-independent information sequence the power spectrum of s(t) is given by $S_S(f) = \frac{1}{T}|G(f)|^2\sum_{m=-\infty}^{\infty} C_B(m)e^{-j2\pi fmT}$.
$$C_B(m) = E\{b_{n+m}b_n\} = E\{a_{n+m}a_n\} + kE\{a_{n+m-1}a_n\} + kE\{a_{n+m}a_{n-1}\} + k^2E\{a_{n+m-1}a_{n-1}\} = \begin{cases} 1 + k^2, & m = 0, \\ k, & m = \pm 1, \\ 0, & \text{otherwise.} \end{cases}$$
Hence:
$$\sum_{m=-\infty}^{\infty} C_B(m)e^{-j2\pi fmT} = 1 + k^2 + 2k\cos 2\pi fT$$
We want:
$$S_S(1/T) = 0 \;\Rightarrow\; \left.\sum_{m=-\infty}^{\infty} C_B(m)e^{-j2\pi fmT}\right|_{f=1/T} = 0 \;\Rightarrow\; 1 + k^2 + 2k = 0 \;\Rightarrow\; k = -1$$
and the resulting power spectrum is:
$$S_S(f) = 4T\left(\frac{\sin^2 \pi fT/2}{\pi fT/2}\right)^2\sin^2 \pi fT$$
(c) The requirement for zeros at $f = l/4T$, $l = \pm1, \pm2, \ldots$, means $1 + k^2 + 2k\cos \pi l/2 = 0$, which cannot be satisfied for all l. We can avoid this by using precoding of the form $b_n = a_n + ka_{n-4}$. Then
$$C_B(m) = \begin{cases} 1 + k^2, & m = 0, \\ k, & m = \pm 4, \\ 0, & \text{otherwise,} \end{cases} \qquad \sum_{m=-\infty}^{\infty} C_B(m)e^{-j2\pi fmT} = 1 + k^2 + 2k\cos 2\pi f\,4T,$$
and a value of k = −1 nulls this spectrum at all multiples of 1/4T.

5. [1, Problem 4.29].

Show that 16-QAM on {±1, ±3} × {±1, ±3} can be represented as a superposition of two 4PSK signals, where each component is amplified separately before summing; i.e., let
$$s(t) = G[A_n\cos 2\pi ft + B_n\sin 2\pi ft] + [C_n\cos 2\pi ft + D_n\sin 2\pi ft]$$
where $\{A_n\}, \{B_n\}, \{C_n\}$ and $\{D_n\}$ are statistically independent binary sequences with elements from the set {+1, −1}, and G is the amplifier gain. You need to show that s(t) can also be written as
$$s(t) = I_n\cos 2\pi ft + Q_n\sin 2\pi ft$$
and determine $I_n$ and $Q_n$ in terms of $A_n$, $B_n$, $C_n$ and $D_n$.


Solution:
The 16-QAM signal is represented as $s(t) = I_n\cos 2\pi ft + Q_n\sin 2\pi ft$, where $I_n \in \{\pm1, \pm3\}$ and $Q_n \in \{\pm1, \pm3\}$. A superposition of two 4-QAM (4-PSK) signals is:
$$s(t) = G[A_n\cos 2\pi ft + B_n\sin 2\pi ft] + [C_n\cos 2\pi ft + D_n\sin 2\pi ft]$$
where $A_n, B_n, C_n, D_n \in \{\pm1\}$. Clearly $I_n = GA_n + C_n$ and $Q_n = GB_n + D_n$. From these equations it is easy to see that G = 2 gives the required equivalence.

2 Decision Criteria and Hypothesis Testing
Remark 1. Hypothesis testing is another common name for a decision problem: you have to decide between two or more hypotheses, say $H_0, H_1, H_2, \ldots$, where $H_i$ can be interpreted as "the unknown parameter has value i". Decoding a constellation with K symbols can be interpreted as selecting the correct hypothesis from $H_0, H_1, \ldots, H_{K-1}$, where $H_i$ is the hypothesis that $S_i$ was transmitted.

1. Consider an equal-probability binary source, p(0) = p(1) = 1/2, and a continuous-output channel:
$$f_{R|M}(r|\text{"1"}) = ae^{-ar}, \quad r \ge 0, \qquad f_{R|M}(r|\text{"0"}) = be^{-br}, \quad r \ge 0, \quad b > a > 0$$

(a) Find a constant K such that the optimal decision rule is $r \underset{0}{\overset{1}{\gtrless}} K$.
(b) Find the respective error probability.
Solution:

(a) Optimal decision rule:
$$p(0)f_{R|M}(r|\text{"0"}) \underset{1}{\overset{0}{\gtrless}} p(1)f_{R|M}(r|\text{"1"})$$
Using the defined channel distributions:
$$be^{-br} \underset{1}{\overset{0}{\gtrless}} ae^{-ar} \;\Rightarrow\; 1 \underset{1}{\overset{0}{\gtrless}} \frac{a}{b}e^{-(a-b)r} \;\Rightarrow\; 0 \underset{1}{\overset{0}{\gtrless}} \ln\left(\frac{a}{b}\right) + (b-a)r \;\Rightarrow\; r \underset{0}{\overset{1}{\gtrless}} \frac{\ln(a/b)}{a-b} = K$$

(b)
$$p(e) = p(0)\Pr\{r > K|0\} + p(1)\Pr\{r < K|1\} = \frac12\left[\int_K^{\infty} be^{-bt}\,dt + \int_0^K ae^{-at}\,dt\right] = \frac12\left[e^{-bK} + 1 - e^{-aK}\right]$$
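A numerical sanity check of the threshold and error probability (a sketch; a = 1 and b = 3 are arbitrary example values) against a Monte Carlo simulation of the channel:

    import numpy as np

    rng = np.random.default_rng(0)
    a, b = 1.0, 3.0                      # example parameters, b > a > 0
    K = np.log(a / b) / (a - b)          # optimal threshold from the derivation
    p_e = 0.5 * (np.exp(-b * K) + 1 - np.exp(-a * K))

    n = 200_000
    bits = rng.integers(0, 2, n)                      # equiprobable source
    scale = np.where(bits == 1, 1 / a, 1 / b)         # exponential rate a for "1", b for "0"
    r = rng.exponential(scale)
    decisions = (r > K).astype(int)                   # decide "1" when r > K
    print(K, p_e, np.mean(decisions != bits))         # analytic vs simulated error rate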
2. Consider a binary source, $\Pr\{x = -2\} = 2/3$, $\Pr\{x = 1\} = 1/3$, and the following channel:
$$y = A\cdot x, \quad A \sim \mathcal{N}(1, 1),$$
where x and A are independent.
(a) Find the optimal decision rule.
(b) Calculate the respective error probability.
Solution:

(a) First we find the conditional distribution of y given x:
$$(Y|-2) \sim \mathcal{N}(-2, 4), \qquad (Y|1) \sim \mathcal{N}(1, 1)$$
Hence the decision rule is:
$$\frac{2}{3}\frac{1}{\sqrt{8\pi}}\exp\left(-\frac{(y+2)^2}{8}\right) \underset{1}{\overset{-2}{\gtrless}} \frac{1}{3}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{(y-1)^2}{2}\right) \;\Rightarrow\; -\frac{(y+2)^2}{8} \underset{1}{\overset{-2}{\gtrless}} -\frac{(y-1)^2}{2} \;\Rightarrow\; 3y(y-4) \underset{1}{\overset{-2}{\gtrless}} 0$$
$$\Rightarrow\; \hat{x}(y) = \begin{cases} -2, & y < 0 \text{ or } y > 4, \\ 1, & \text{otherwise.} \end{cases}$$

(b)
$$p(e) = \frac23\int_0^4 f(y|-2)\,dy + \frac13\left[\int_{-\infty}^0 f(y|1)\,dy + \int_4^{\infty} f(y|1)\,dy\right]$$
$$= \frac23\left[Q\left(\frac{0+2}{2}\right) - Q\left(\frac{4+2}{2}\right)\right] + \frac13\left[1 - Q\left(\frac{0-1}{1}\right) + Q\left(\frac{4-1}{1}\right)\right] = Q(1) - \frac13 Q(3) \cong 0.15821
$$
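The result can be confirmed by simulating the channel y = A·x directly (a sketch; the Q-function is expressed through erfc):

    import numpy as np
    from scipy.special import erfc

    def Q(x):
        return 0.5 * erfc(x / np.sqrt(2))

    p_analytic = Q(1) - Q(3) / 3          # ~ 0.15821

    rng = np.random.default_rng(1)
    n = 500_000
    x = rng.choice([-2, 1], size=n, p=[2/3, 1/3])
    y = rng.normal(1.0, 1.0, size=n) * x              # A ~ N(1,1), independent of x
    x_hat = np.where((y < 0) | (y > 4), -2, 1)        # decision regions from the solution
    print(p_analytic, np.mean(x_hat != x))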
3. Decision rules for binary channels.

(a) The Binary Symmetric Channel (BSC) has binary (0 or 1) inputs and outputs. It
outputs each bit correctly with probability 1 − p and incorrectly with probability p. Assume
0 and 1 are equally likely inputs. State the MAP and ML decision rules for the BSC when
p < 1/2. How are the decision rules different when p > 1/2?
(b) The Binary Erasure Channel (BEC) has binary inputs as with the BSC. However there
are three possible outputs. Given an input of 0, the output is 0 with probability 1 − p1 and 2
with probability p1 . Given an input of 1, the output is 1 with probability 1 − p2 and 2 with
probability p2 . Assume 0 and 1 are equally likely inputs. State the MAP and ML decision
rules for the BEC when $p_1 < p_2 < 1/2$. How are the decision rules different when $p_2 < p_1 < 1/2$?

Solution:

(a) For equally likely inputs the MAP and ML decision rules are identical. In each case we wish to maximize $p_{y|x}(y|x_i)$ over the possible choices of $x_i$. The decision rules are:
$$p < \tfrac12 \;\Rightarrow\; \hat{X} = Y, \qquad p > \tfrac12 \;\Rightarrow\; \hat{X} = 1 - Y$$
(b) Again, since we have equiprobable signals, the MAP and ML decision rules are the same. The decision rules are:
$$p_1 < p_2 < \tfrac12 \;\Rightarrow\; \hat{X} = \begin{cases} Y, & Y = 0, 1, \\ 1, & Y = 2, \end{cases} \qquad p_2 < p_1 < \tfrac12 \;\Rightarrow\; \hat{X} = \begin{cases} Y, & Y = 0, 1, \\ 0, & Y = 2. \end{cases}$$

4. In a binary hypothesis testing problem, the observation Z is Rayleigh distributed under both hypotheses, with a different parameter under each, that is,
$$f(z|H_i) = \frac{z}{\sigma_i^2}\exp\left(-\frac{z^2}{2\sigma_i^2}\right), \quad z \ge 0, \quad i = 0, 1.$$
You need to decide whether the observed variable Z was generated with $\sigma_0^2$ or with $\sigma_1^2$, namely choose between $H_0$ and $H_1$.

(a) Obtain the decision rule for the minimum probability of error criterion. Assume that $H_0$ and $H_1$ are equiprobable.
(b) Extend your results to N independent observations, and derive the expressions for the resulting probability of error.
Note: If $R_i \sim \mathrm{Rayleigh}(\sigma)$ then $Y = \sum_{i=1}^N R_i^2$ has a gamma distribution with parameters N and $2\sigma^2$: $Y \sim \Gamma(N, 2\sigma^2)$.

Solution:

(a)
$$\log f(z|H_i) = \log z - \log\sigma_i^2 - \frac{z^2}{2\sigma_i^2}$$
$$\Rightarrow\; \log f(z|H_1) - \log f(z|H_0) = \log\frac{\sigma_0^2}{\sigma_1^2} + z^2\left(\frac{1}{2\sigma_0^2} - \frac{1}{2\sigma_1^2}\right) \underset{H_0}{\overset{H_1}{\gtrless}} 0 \;\Rightarrow\; z^2 \underset{H_0}{\overset{H_1}{\gtrless}} 2\log\left(\frac{\sigma_1^2}{\sigma_0^2}\right)\frac{\sigma_1^2\sigma_0^2}{\sigma_1^2 - \sigma_0^2} = \gamma$$
Since z ≥ 0, the following decision rule is obtained:
$$\hat{H} = \begin{cases} H_1, & z \ge \sqrt{\gamma}, \\ H_0, & z < \sqrt{\gamma}. \end{cases}$$

(b) Denote the likelihood ratio $\mathrm{LRT} \triangleq \frac{f(z|H_1)}{f(z|H_0)}$ (Likelihood Ratio Test); hence
$$\log \mathrm{LRT} = \log f(z|H_1) - \log f(z|H_0)$$
For N i.i.d. observations:
$$\log f(\mathbf{z}|H_i) = \sum_{n=0}^{N-1}\log f(z_n|H_i) = \sum_{n=0}^{N-1}\left[\log z_n - \log\sigma_i^2 - \frac{z_n^2}{2\sigma_i^2}\right] = -N\log\sigma_i^2 + \sum_{n=0}^{N-1}\left[\log z_n - \frac{z_n^2}{2\sigma_i^2}\right]$$
The log-LRT is:
$$-N\log\frac{\sigma_1^2}{\sigma_0^2} + \sum_{n=0}^{N-1}z_n^2\left(\frac{1}{2\sigma_0^2} - \frac{1}{2\sigma_1^2}\right) \underset{H_0}{\overset{H_1}{\gtrless}} 0 \;\Rightarrow\; \sum_{n=0}^{N-1}z_n^2 \underset{H_0}{\overset{H_1}{\gtrless}} 2N\log\left(\frac{\sigma_1^2}{\sigma_0^2}\right)\frac{\sigma_1^2\sigma_0^2}{\sigma_1^2 - \sigma_0^2} = \tilde\gamma$$
Define $Y = \sum_{n=0}^{N-1}z_n^2$; then $Y|H_i \sim \Gamma(N, 2\sigma_i^2)$.
$$P_{FA} = \Pr\{\text{decide } H_1 \mid H_0 \text{ transmitted}\} = \Pr\{Y > \tilde\gamma|H_0\} = 1 - \frac{\gamma(N, \tilde\gamma/2\sigma_0^2)}{\Gamma(N)}$$
$$P_M = \Pr\{\text{decide } H_0 \mid H_1 \text{ transmitted}\} = 1 - \Pr\{Y > \tilde\gamma|H_1\} = \frac{\gamma(N, \tilde\gamma/2\sigma_1^2)}{\Gamma(N)}$$
where $\gamma(s, x) = \int_0^x t^{s-1}e^{-t}\,dt$ is the lower incomplete gamma function.
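For part (b), P_FA and P_M are directly computable with the regularized lower incomplete gamma function (scipy.special.gammainc). A sketch with arbitrary example values σ0² = 1, σ1² = 2 and N = 8 observations:

    import numpy as np
    from scipy.special import gammainc   # regularized lower incomplete gamma P(a, x)

    s0, s1, N = 1.0, 2.0, 8              # sigma_0^2, sigma_1^2, number of observations
    gamma_t = 2 * N * np.log(s1 / s0) * (s1 * s0) / (s1 - s0)   # threshold on sum of z_n^2

    P_FA = 1 - gammainc(N, gamma_t / (2 * s0))   # Pr{Y > threshold | H0}
    P_M = gammainc(N, gamma_t / (2 * s1))        # Pr{Y < threshold | H1}
    print(gamma_t, P_FA, P_M)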
3 Generalized Decision Criteria
1. Bayes decision criteria.
Consider an equiprobable binary symmetric source m ∈ {0, 1}. For the observation, R, the conditional probability density functions are
$$f_{R|M}(r|M=0) = \begin{cases} \frac12, & |r| < 1, \\ 0, & \text{otherwise,} \end{cases} \qquad f_{R|M}(r|M=1) = \frac12 e^{-|r|}$$
(a) Obtain the decision rule for the minimum probability of error criterion and the corresponding minimal probability of error.
(b) For the cost matrix $C = \begin{pmatrix} 0 & 2\alpha \\ \alpha & 0 \end{pmatrix}$, obtain the optimal generalized decision rule and the error probability.

Solution:

(a)
$$|r| > 1: \quad f_{R|M}(r|M=0) = 0 \;\Rightarrow\; \hat{m} = 1.$$
$$|r| < 1: \quad \frac{\frac12 e^{-|r|}}{\frac12} \underset{0}{\overset{1}{\gtrless}} 1 \;\Rightarrow\; -|r| \underset{0}{\overset{1}{\gtrless}} 0 \;\Rightarrow\; \hat{m} = 0.$$
The probability of error:
$$p(e) = p(0)\cdot 0 + p(1)\int_{-1}^1 \frac12 e^{-|r|}\,dr = \frac12\left[1 - e^{-1}\right]$$

(b) The decision rule:
$$\frac{f_{R|M}(r|M=1)}{f_{R|M}(r|M=0)} \underset{0}{\overset{1}{\gtrless}} \frac{p(0)}{p(1)}\cdot\frac{C_{10} - C_{00}}{C_{01} - C_{11}} = \frac{\alpha}{2\alpha} = \frac12$$
$$|r| > 1: \quad f_{R|M}(r|M=0) = 0 \;\Rightarrow\; \hat{m} = 1$$
$$|r| < 1: \quad \frac{\frac12 e^{-|r|}}{\frac12} \underset{0}{\overset{1}{\gtrless}} \frac12 \;\Rightarrow\; -|r| \underset{0}{\overset{1}{\gtrless}} -\ln 2 \;\Rightarrow\; \hat{m} = \begin{cases} 1, & |r| < \ln 2 \text{ or } |r| > 1, \\ 0, & \ln 2 < |r| < 1. \end{cases}$$
Probability of error:
$$P_{FA} = \Pr\{\hat{m}=1|m=0\} = \int_{-\ln 2}^{\ln 2}\frac12\,dr = \ln 2, \qquad P_M = \Pr\{\hat{m}=0|m=1\} = \int_{\ln 2 < |r| < 1}\frac12 e^{-|r|}\,dr = \frac12 - e^{-1}$$
$$p(e) = p(0)P_{FA} + p(1)P_M = \frac12\left[\ln 2 + \frac12 - e^{-1}\right]$$

2. Non-Gaussian additive noise.
Consider the source m ∈ {1, −1}, Pr{m = 1} = 0.9, Pr{m = −1} = 0.1. The observation, y, obeys
$$y = m + N, \quad N \sim U[-2, 2]$$
(a) Obtain the decision rule for the minimum probability of error criterion and the minimal probability of error.
(b) For the cost matrix $C = \begin{pmatrix} 0 & 1 \\ 100 & 0 \end{pmatrix}$, obtain the optimal Bayes decision rule and the error probability.

Solution:

(a)
$$f(y|1) = \begin{cases} \frac14, & -1 < y < 3, \\ 0, & \text{otherwise,} \end{cases} \qquad f(y|-1) = \begin{cases} \frac14, & -3 < y < 1, \\ 0, & \text{otherwise.} \end{cases}$$
In the overlap region −1 < y < 1 both densities equal 1/4, so the MAP rule decides according to the larger prior, p(1) > p(−1); hence
$$\hat{m} = \begin{cases} -1, & -3 < y < -1, \\ 1, & -1 < y < 3. \end{cases}$$
The probability of error:
$$p(e) = p(1)\cdot 0 + p(-1)\int_{-1}^1 \frac14\,dy = 0.05$$

(b) The decision rule:
$$\frac{f(y|1)}{f(y|-1)} \underset{-1}{\overset{1}{\gtrless}} \frac{p(-1)}{p(1)}\cdot\frac{100}{1} \;\Rightarrow\; p(1)f(y|1) \underset{-1}{\overset{1}{\gtrless}} 100\,p(-1)f(y|-1) \;\Rightarrow\; \hat{m} = \begin{cases} -1, & -3 < y < 1, \\ 1, & 1 < y < 3. \end{cases}$$
The probability of error:
$$p(e) = p(-1)\cdot 0 + p(1)\int_{-1}^1 \frac14\,dy = 0.45$$
4 Vector Communication Channels
Remark 2. Vectors are denoted with boldface letters, e.g. x, y.

1. General Gaussian vector channel.


Consider the Gaussian vector channel with sources $p(m_0) = q$, $p(m_1) = 1 - q$, $\mathbf{s}_0 = [1, 1]^T$, $\mathbf{s}_1 = [-1, -1]^T$. For sending $m_0$ the transmitter sends $\mathbf{s}_0$, and for sending $m_1$ the transmitter sends $\mathbf{s}_1$. The observation, $\mathbf{r}$, obeys
$$\mathbf{r} = \mathbf{s}_i + \mathbf{n}, \qquad \mathbf{n} = [n_1, n_2]^T, \quad \mathbf{n} \sim \mathcal{N}(0, \Lambda_n), \quad \Lambda_n = \begin{pmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{pmatrix}$$
The noise vector $\mathbf{n}$ and the messages $m_i$ are independent.

(a) Obtain the optimal decision rule using the MAP criterion, and examine it for the following cases:
i. $q = \frac12$, $\sigma_1 = \sigma_2$.
ii. $q = \frac12$, $\sigma_1^2 = 2\sigma_2^2$.
iii. $q = \frac13$, $\sigma_1^2 = 2\sigma_2^2$.
(b) Derive the error probability for the obtained decision rule.

Solution:

(a) The conditional probability distribution is $\mathbf{R}|\mathbf{s}_i \sim \mathcal{N}(\mathbf{s}_i, \Lambda_n)$:
$$f(\mathbf{r}|\mathbf{s}_i) = \frac{1}{\sqrt{(2\pi)^2\det\Lambda_n}}\exp\left(-\frac12(\mathbf{r}-\mathbf{s}_i)^T\Lambda_n^{-1}(\mathbf{r}-\mathbf{s}_i)\right)$$
The MAP optimal decision rule:
$$p(m_0)f(\mathbf{r}|\mathbf{s}_0) \underset{m_1}{\overset{m_0}{\gtrless}} p(m_1)f(\mathbf{r}|\mathbf{s}_1)$$
$$q\exp\left(-\frac12(\mathbf{r}-\mathbf{s}_0)^T\Lambda_n^{-1}(\mathbf{r}-\mathbf{s}_0)\right) \underset{m_1}{\overset{m_0}{\gtrless}} (1-q)\exp\left(-\frac12(\mathbf{r}-\mathbf{s}_1)^T\Lambda_n^{-1}(\mathbf{r}-\mathbf{s}_1)\right)$$
$$(\mathbf{r}-\mathbf{s}_1)^T\Lambda_n^{-1}(\mathbf{r}-\mathbf{s}_1) - (\mathbf{r}-\mathbf{s}_0)^T\Lambda_n^{-1}(\mathbf{r}-\mathbf{s}_0) \underset{m_1}{\overset{m_0}{\gtrless}} 2\ln\frac{1-q}{q}$$
Assigning $\mathbf{r}^T = [x, y]$:
$$\frac{(x+1)^2}{\sigma_1^2} + \frac{(y+1)^2}{\sigma_2^2} - \frac{(x-1)^2}{\sigma_1^2} - \frac{(y-1)^2}{\sigma_2^2} \underset{m_1}{\overset{m_0}{\gtrless}} 2\ln\frac{1-q}{q} \;\Rightarrow\; \frac{x}{\sigma_1^2} + \frac{y}{\sigma_2^2} \underset{m_1}{\overset{m_0}{\gtrless}} \frac12\ln\frac{1-q}{q}$$

i. For the case $q = \frac12$, $\sigma_1 = \sigma_2$ the decision rule becomes
$$x + y \underset{m_1}{\overset{m_0}{\gtrless}} 0$$
ii. For the case $q = \frac12$, $\sigma_1^2 = 2\sigma_2^2$ the decision rule becomes
$$x + 2y \underset{m_1}{\overset{m_0}{\gtrless}} 0$$
iii. For the case $q = \frac13$, $\sigma_1^2 = 2\sigma_2^2$ the decision rule becomes
$$x + 2y \underset{m_1}{\overset{m_0}{\gtrless}} \sigma_2^2\ln 2$$

(b) Denote $K \triangleq \frac12\ln\frac{1-q}{q}$ and define $z = \frac{x}{\sigma_1^2} + \frac{y}{\sigma_2^2}$. The conditional distribution of Z is
$$Z|\mathbf{s}_i \sim \mathcal{N}\left((-1)^i\,\frac{\sigma_1^2+\sigma_2^2}{\sigma_1^2\sigma_2^2},\; \frac{\sigma_1^2+\sigma_2^2}{\sigma_1^2\sigma_2^2}\right), \quad i = 0, 1$$
The decision rule in terms of z and K:
$$z \underset{m_1}{\overset{m_0}{\gtrless}} K$$
The error probability:
$$p(e) = p(m_0)\Pr\{z < K|m_0\} + p(m_1)\Pr\{z > K|m_1\}$$
Assigning the conditional distribution:
$$\Pr\{z < K|m_0\} = 1 - Q\left(\frac{K - \frac{\sigma_1^2+\sigma_2^2}{\sigma_1^2\sigma_2^2}}{\sqrt{\frac{\sigma_1^2+\sigma_2^2}{\sigma_1^2\sigma_2^2}}}\right), \qquad \Pr\{z > K|m_1\} = Q\left(\frac{K + \frac{\sigma_1^2+\sigma_2^2}{\sigma_1^2\sigma_2^2}}{\sqrt{\frac{\sigma_1^2+\sigma_2^2}{\sigma_1^2\sigma_2^2}}}\right)$$
For the case $q = \frac12$, $\sigma_1 = \sigma_2$, the error probability equals $Q\left(\sqrt{\frac{2}{\sigma_1^2}}\right)$.
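A numerical sketch of the error probability (q = 1/3, σ1² = 2, σ2² = 1 are arbitrary example values): evaluate the closed form and compare with a Monte Carlo run of the channel.

    import numpy as np
    from scipy.special import erfc

    def Q(x):
        return 0.5 * erfc(x / np.sqrt(2))

    q, s1, s2 = 1/3, 2.0, 1.0            # prior of m0 and the two noise variances
    K = 0.5 * np.log((1 - q) / q)
    mu = (s1 + s2) / (s1 * s2)           # |mean| and variance of z = x/s1 + y/s2
    sd = np.sqrt(mu)

    p_analytic = q * (1 - Q((K - mu) / sd)) + (1 - q) * Q((K + mu) / sd)

    rng = np.random.default_rng(2)
    n = 400_000
    m = rng.random(n) < q                                   # True -> message m0
    s = np.where(m[:, None], 1.0, -1.0) * np.ones((n, 2))   # s0 = [1,1], s1 = [-1,-1]
    r = s + rng.normal(0, [np.sqrt(s1), np.sqrt(s2)], size=(n, 2))
    z = r[:, 0] / s1 + r[:, 1] / s2
    m_hat = z > K                                           # decide m0 when z > K
    print(p_analytic, np.mean(m_hat != m))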

2. Non-Gaussian additive vector channel.

Consider a binary hypothesis testing problem in which the sources $\mathbf{s}_0 = [1, 2, 3]$, $\mathbf{s}_1 = [1, -1, -3]$ are equiprobable. The observation, $\mathbf{r}$, obeys
$$\mathbf{r} = \mathbf{s}_i + \mathbf{n}, \quad \mathbf{n} = [n_0, n_1, n_2],$$
where the elements of $\mathbf{n}$ are i.i.d. with probability density function
$$f_{N_k}(n_k) = \frac12 e^{-|n_k|}$$
Obtain the optimal decision rule using the MAP criterion.
Solution:
The optimal decision rule using the MAP criterion:
$$p(\mathbf{s}_0)f(\mathbf{r}|\mathbf{s}_0) \underset{1}{\overset{0}{\gtrless}} p(\mathbf{s}_1)f(\mathbf{r}|\mathbf{s}_1) \;\Rightarrow\; f(\mathbf{r}|\mathbf{s}_0) \underset{1}{\overset{0}{\gtrless}} f(\mathbf{r}|\mathbf{s}_1)$$
The conditional probability density function:
$$f(\mathbf{r}|\mathbf{s}_i) = \prod_{k=0}^{2} f_N(n_k = r_k - s_{i,k}) = \frac12 e^{-|r_0-s_{i,0}|}\cdot\frac12 e^{-|r_1-s_{i,1}|}\cdot\frac12 e^{-|r_2-s_{i,2}|} = \frac18 e^{-\left[|r_0-s_{i,0}| + |r_1-s_{i,1}| + |r_2-s_{i,2}|\right]}$$
Assigning the $\mathbf{s}_i$ elements yields
$$|r_0-1| + |r_1-2| + |r_2-3| \underset{0}{\overset{1}{\gtrless}} |r_0-1| + |r_1+1| + |r_2+3| \;\Rightarrow\; |r_1-2| + |r_2-3| \underset{0}{\overset{1}{\gtrless}} |r_1+1| + |r_2+3|$$
Note that the above decision rule compares the sums of absolute differences (an $\ell_1$ distance) under the two hypotheses, unlike the Gaussian vector channel, in which the Euclidean distance is compared.
3. Gaussian two-channel.
Consider the following two-channel problem, in which the observations under the two hypotheses are
$$H_0: \begin{bmatrix} Z_1 \\ Z_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & \frac12 \end{bmatrix}\begin{bmatrix} V_1 \\ V_2 \end{bmatrix} + \begin{bmatrix} -1 \\ -\frac12 \end{bmatrix}, \qquad H_1: \begin{bmatrix} Z_1 \\ Z_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & \frac12 \end{bmatrix}\begin{bmatrix} V_1 \\ V_2 \end{bmatrix} + \begin{bmatrix} 1 \\ \frac12 \end{bmatrix}$$
where $V_1$ and $V_2$ are independent, zero-mean Gaussian variables with variance $\sigma^2$.

(a) Find the minimum probability of error receiver if both hypotheses are equally likely. Simplify the receiver structure.
(b) Find the minimum probability of error.
Solution:
Let $\mathbf{Z} = \begin{bmatrix} Z_1 \\ Z_2 \end{bmatrix}$. The conditional distribution of $\mathbf{Z}$ is
$$\mathbf{Z}|H_0 \sim \mathcal{N}(\boldsymbol{\mu}_0, \Lambda), \quad \mathbf{Z}|H_1 \sim \mathcal{N}(\boldsymbol{\mu}_1, \Lambda), \qquad \boldsymbol{\mu}_0 = \begin{bmatrix} -1 \\ -\frac12 \end{bmatrix}, \quad \boldsymbol{\mu}_1 = \begin{bmatrix} 1 \\ \frac12 \end{bmatrix}, \quad \Lambda = \sigma^2\begin{bmatrix} 1 & 0 \\ 0 & \frac14 \end{bmatrix}$$
(a) The decision rule:
$$\frac{f(\mathbf{z}|H_1)}{f(\mathbf{z}|H_0)} \underset{H_0}{\overset{H_1}{\gtrless}} \frac{p(H_0)}{p(H_1)} = 1 \;\Rightarrow\; \log f(\mathbf{z}|H_1) - \log f(\mathbf{z}|H_0) \underset{H_0}{\overset{H_1}{\gtrless}} 0 \;\Rightarrow\; \frac{2}{\sigma^2}(z_1 + 2z_2) \underset{H_0}{\overset{H_1}{\gtrless}} 0 \;\Rightarrow\; z_1 + 2z_2 \underset{H_0}{\overset{H_1}{\gtrless}} 0$$

(b) Define $X = Z_1 + 2Z_2$. Since $V_1, V_2$ are independent, $Z_1, Z_2$ are independent as well. A linear combination of $Z_1, Z_2$ yields a Gaussian R.V. with the following parameters:
$$E\{X|H_0\} = -2, \quad E\{X|H_1\} = 2, \quad \mathrm{Var}\{X|H_0\} = \mathrm{Var}\{X|H_1\} = 2\sigma^2$$
The error-event probabilities are
$$P_{FA} = \Pr\{\hat{H} = H_1|H = H_0\} = \int_0^{\infty} f(x|H_0)\,dx = Q\left(\frac{2}{\sqrt{2\sigma^2}}\right), \qquad P_M = \Pr\{\hat{H} = H_0|H = H_1\} = \int_{-\infty}^{0} f(x|H_1)\,dx = Q\left(\frac{2}{\sqrt{2\sigma^2}}\right),$$
so the minimum probability of error is $p(e) = \frac12(P_{FA} + P_M) = Q\left(\frac{\sqrt{2}}{\sigma}\right)$.

5 Signal Space Representation
1. [1, Problem 4.9].
Consider a set of M orthogonal signal waveforms $s_m(t)$, $1 \le m \le M$, $0 \le t \le T$ (orthogonal in the sense that $\langle s_j(t), s_k(t)\rangle = 0$ for all $j \ne k$), all of which have the same energy $\varepsilon = \int_{-\infty}^{\infty}|s_m(t)|^2\,dt$. Define a new set of waveforms as
$$s_m'(t) = s_m(t) - \frac{1}{M}\sum_{k=1}^{M}s_k(t), \quad 1 \le m \le M, \; 0 \le t \le T$$
Show that the M signal waveforms $\{s_m'(t)\}$ have equal energy, given by
$$\varepsilon' = \frac{M-1}{M}\varepsilon,$$
and are equally correlated, with correlation coefficient
$$\rho_{mn} = \frac{1}{\varepsilon'}\int_0^T s_m'(t)s_n'(t)\,dt = -\frac{1}{M-1}.$$

Solution:
The energy of the signal waveform $s_m'(t)$ is:
$$\varepsilon' = \int_{-\infty}^{\infty}|s_m'(t)|^2\,dt = \int_{-\infty}^{\infty}\left(s_m(t) - \frac1M\sum_{k=1}^{M}s_k(t)\right)^2 dt$$
$$= \int_{-\infty}^{\infty}s_m^2(t)\,dt + \frac{1}{M^2}\sum_{k=1}^{M}\sum_{l=1}^{M}\int_{-\infty}^{\infty}s_k(t)s_l(t)\,dt - \frac{2}{M}\sum_{k=1}^{M}\int_{-\infty}^{\infty}s_m(t)s_k(t)\,dt$$
$$= \varepsilon + \frac{1}{M^2}\sum_{k=1}^{M}\sum_{l=1}^{M}\varepsilon\,\delta_{kl} - \frac{2}{M}\varepsilon = \varepsilon + \frac1M\varepsilon - \frac2M\varepsilon = \frac{M-1}{M}\varepsilon$$
The correlation coefficient is given by:
$$\rho_{mn} = \frac{1}{\varepsilon'}\int_{-\infty}^{\infty}s_m'(t)s_n'(t)\,dt = \frac{1}{\varepsilon'}\int_{-\infty}^{\infty}\left(s_m(t) - \frac1M\sum_{k=1}^{M}s_k(t)\right)\left(s_n(t) - \frac1M\sum_{l=1}^{M}s_l(t)\right)dt$$
$$= \frac{1}{\varepsilon'}\left(\frac{1}{M^2}M\varepsilon - \frac{2}{M}\varepsilon\right) = \frac{\frac1M\varepsilon - \frac2M\varepsilon}{\frac{M-1}{M}\varepsilon} = -\frac{1}{M-1}$$

2. [1, Problem 4.10].


Consider the following three waveforms
$$f_1(t) = \begin{cases} \frac12, & 0 \le t < 2, \\ -\frac12, & 2 \le t < 4, \\ 0, & \text{otherwise,} \end{cases} \qquad f_2(t) = \begin{cases} \frac12, & 0 \le t < 4, \\ 0, & \text{otherwise,} \end{cases} \qquad f_3(t) = \begin{cases} \frac12, & 0 \le t < 1,\; 2 \le t < 3, \\ -\frac12, & 1 \le t < 2,\; 3 \le t < 4, \\ 0, & \text{otherwise.} \end{cases}$$

(a) Show that these waveforms are orthonormal.
(b) Check whether you can express x(t) as a weighted linear combination of $f_n(t)$, n = 1, 2, 3, if
$$x(t) = \begin{cases} -1, & 0 \le t < 1, \\ 1, & 1 \le t < 3, \\ -1, & 3 \le t < 4, \\ 0, & \text{otherwise,} \end{cases}$$
and if so, determine the weighting coefficients; otherwise explain why it is not possible.

Solution:

(a) To show that the waveforms $f_n(t)$, n = 1, 2, 3, are orthogonal we have to prove that
$$\int_{-\infty}^{\infty}f_n(t)f_m(t)\,dt = 0, \quad m \ne n.$$
For n = 1, m = 2:
$$\int_{-\infty}^{\infty}f_1(t)f_2(t)\,dt = \int_0^2 f_1(t)f_2(t)\,dt + \int_2^4 f_1(t)f_2(t)\,dt = \frac14\int_0^2 dt - \frac14\int_2^4 dt = 0$$
For n = 1, m = 3:
$$\int_{-\infty}^{\infty}f_1(t)f_3(t)\,dt = \frac14\int_0^1 dt - \frac14\int_1^2 dt - \frac14\int_2^3 dt + \frac14\int_3^4 dt = 0$$
For n = 2, m = 3:
$$\int_{-\infty}^{\infty}f_2(t)f_3(t)\,dt = \frac14\int_0^1 dt - \frac14\int_1^2 dt + \frac14\int_2^3 dt - \frac14\int_3^4 dt = 0$$
Thus, the signals $f_n(t)$ are orthogonal. It is also straightforward to prove that the signals have unit energy:
$$\int_{-\infty}^{\infty}|f_n(t)|^2\,dt = 1, \quad n = 1, 2, 3.$$
Hence, they are orthonormal.
(b) We first determine the weighting coefficients
$$x_n = \int_{-\infty}^{\infty}x(t)f_n(t)\,dt, \quad n = 1, 2, 3$$
$$x_1 = \int_0^4 x(t)f_1(t)\,dt = -\frac12\int_0^1 dt + \frac12\int_1^2 dt - \frac12\int_2^3 dt + \frac12\int_3^4 dt = 0$$
$$x_2 = \int_0^4 x(t)f_2(t)\,dt = \frac12\int_0^4 x(t)\,dt = 0$$
$$x_3 = \int_0^4 x(t)f_3(t)\,dt = -\frac12\int_0^1 dt - \frac12\int_1^2 dt + \frac12\int_2^3 dt + \frac12\int_3^4 dt = 0$$
As observed, x(t) is orthogonal to each of the signal waveforms $f_n(t)$, n = 1, 2, 3, and thus it cannot be represented as a linear combination of these functions.
3. [1, Problem 4.11].
Consider the following four waveforms
$$s_1(t) = \begin{cases} 2, & 0 \le t < 1, \\ -1, & 1 \le t < 4, \\ 0, & \text{otherwise,} \end{cases} \qquad s_2(t) = \begin{cases} -2, & 0 \le t < 1, \\ 1, & 1 \le t < 3, \\ 0, & \text{otherwise,} \end{cases}$$
$$s_3(t) = \begin{cases} 1, & 0 \le t < 1,\; 2 \le t < 3, \\ -1, & 1 \le t < 2,\; 3 \le t < 4, \\ 0, & \text{otherwise,} \end{cases} \qquad s_4(t) = \begin{cases} 1, & 0 \le t < 1, \\ -2, & 1 \le t < 3, \\ 2, & 3 \le t < 4, \\ 0, & \text{otherwise.} \end{cases}$$

(a) Determine the dimensionality of the waveforms and a set of basis functions.
(b) Use the basis functions to represent the four waveforms by vectors s1, s2, s3 and s4.
(c) Determine the minimum distance between any pair of vectors.
Solution:
(a) As an orthonormal set of basis functions we consider the set
$$f_1(t) = \begin{cases} 1, & 0 \le t < 1, \\ 0, & \text{otherwise,} \end{cases} \quad f_2(t) = \begin{cases} 1, & 1 \le t < 2, \\ 0, & \text{otherwise,} \end{cases} \quad f_3(t) = \begin{cases} 1, & 2 \le t < 3, \\ 0, & \text{otherwise,} \end{cases} \quad f_4(t) = \begin{cases} 1, & 3 \le t < 4, \\ 0, & \text{otherwise.} \end{cases}$$
In matrix notation, the four waveforms can be represented as
$$\begin{bmatrix} s_1(t) \\ s_2(t) \\ s_3(t) \\ s_4(t) \end{bmatrix} = \begin{bmatrix} 2 & -1 & -1 & -1 \\ -2 & 1 & 1 & 0 \\ 1 & -1 & 1 & -1 \\ 1 & -2 & -2 & 2 \end{bmatrix}\begin{bmatrix} f_1(t) \\ f_2(t) \\ f_3(t) \\ f_4(t) \end{bmatrix}$$
Note that the rank of the transformation matrix is 4 and therefore the dimensionality of the waveforms is 4.
(b) The representation vectors are
$$\mathbf{s}_1 = \begin{bmatrix} 2 & -1 & -1 & -1 \end{bmatrix}, \quad \mathbf{s}_2 = \begin{bmatrix} -2 & 1 & 1 & 0 \end{bmatrix}, \quad \mathbf{s}_3 = \begin{bmatrix} 1 & -1 & 1 & -1 \end{bmatrix}, \quad \mathbf{s}_4 = \begin{bmatrix} 1 & -2 & -2 & 2 \end{bmatrix}$$
(c) The distance between the first and the second vector is:
$$d_{1,2} = \sqrt{\|\mathbf{s}_1 - \mathbf{s}_2\|^2} = \sqrt{\left\|\begin{bmatrix} 4 & -2 & -2 & -1 \end{bmatrix}\right\|^2} = \sqrt{25}$$
Similarly we find that:
$$d_{1,3} = \sqrt{5}, \quad d_{1,4} = \sqrt{12}, \quad d_{2,3} = \sqrt{14}, \quad d_{2,4} = \sqrt{31}, \quad d_{3,4} = \sqrt{19}$$
Thus, the minimum distance between any pair of vectors is $d_{min} = d_{1,3} = \sqrt{5}$.
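The rank and the pairwise distances are easy to verify numerically (a minimal sketch using the vectors above):

    import numpy as np
    from itertools import combinations

    s = {1: np.array([2, -1, -1, -1]),
         2: np.array([-2, 1, 1, 0]),
         3: np.array([1, -1, 1, -1]),
         4: np.array([1, -2, -2, 2])}

    print("rank =", np.linalg.matrix_rank(np.vstack(list(s.values()))))  # 4 -> dimensionality 4
    for i, j in combinations(s, 2):
        print("d_%d%d^2 = %d" % (i, j, np.sum((s[i] - s[j])**2)))
    # minimum squared distance is 5, between s1 and s3, so d_min = sqrt(5)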

4. [2, Problem 5.4].


(a) Using the Gram-Schmidt orthogonalization procedure, find a set of orthonormal basis functions to represent the following signals:
$$s_1(t) = \begin{cases} 2, & 0 \le t < 1, \\ 0, & \text{otherwise,} \end{cases} \qquad s_2(t) = \begin{cases} -4, & 0 \le t < 2, \\ 0, & \text{otherwise,} \end{cases} \qquad s_3(t) = \begin{cases} 3, & 0 \le t < 3, \\ 0, & \text{otherwise.} \end{cases}$$

(b) Express each of the signals si (t), i = 1, 2, 3 in terms of the basis functions found in (4a).
Solution:

(a) The energy of $s_1(t)$ and the first basis function are
$$E_1 = \int_0^1 |s_1(t)|^2\,dt = \int_0^1 2^2\,dt = 4 \;\Rightarrow\; \phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}} = \begin{cases} 1, & 0 \le t < 1, \\ 0, & \text{otherwise.} \end{cases}$$
Define
$$s_{21} = \int_0^T s_2(t)\phi_1(t)\,dt = \int_0^1 (-4)\cdot 1\,dt = -4, \qquad g_2(t) = s_2(t) - s_{21}\phi_1(t) = \begin{cases} -4, & 1 \le t < 2, \\ 0, & \text{otherwise.} \end{cases}$$
Hence, the second basis function is
$$\phi_2(t) = \frac{g_2(t)}{\sqrt{\int_0^T g_2^2(t)\,dt}} = \begin{cases} -1, & 1 \le t < 2, \\ 0, & \text{otherwise.} \end{cases}$$
Define
$$s_{31} = \int_0^T s_3(t)\phi_1(t)\,dt = \int_0^1 3\cdot 1\,dt = 3, \qquad s_{32} = \int_0^T s_3(t)\phi_2(t)\,dt = \int_1^2 3\cdot(-1)\,dt = -3,$$
$$g_3(t) = s_3(t) - s_{31}\phi_1(t) - s_{32}\phi_2(t) = \begin{cases} 3, & 2 \le t < 3, \\ 0, & \text{otherwise.} \end{cases}$$
Hence, the third basis function is
$$\phi_3(t) = \frac{g_3(t)}{\sqrt{\int_0^T g_3^2(t)\,dt}} = \begin{cases} 1, & 2 \le t < 3, \\ 0, & \text{otherwise.} \end{cases}$$
(b)
$$s_1(t) = 2\phi_1(t), \qquad s_2(t) = -4\phi_1(t) + 4\phi_2(t), \qquad s_3(t) = 3\phi_1(t) - 3\phi_2(t) + 3\phi_3(t)$$
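The same Gram-Schmidt procedure can be run numerically on finely sampled versions of the waveforms (a sketch; inner products are approximated by Riemann sums with step dt):

    import numpy as np

    dt = 1e-3
    t = np.arange(0, 3, dt)
    s1 = np.where(t < 1, 2.0, 0.0)
    s2 = np.where(t < 2, -4.0, 0.0)
    s3 = np.where(t < 3, 3.0, 0.0)

    def inner(x, y):
        return np.sum(x * y) * dt

    basis = []
    for s in (s1, s2, s3):
        g = s - sum(inner(s, phi) * phi for phi in basis)   # remove projections on earlier phis
        norm = np.sqrt(inner(g, g))
        if norm > 1e-9:                                      # keep only non-degenerate directions
            basis.append(g / norm)

    coeffs = np.array([[inner(s, phi) for phi in basis] for s in (s1, s2, s3)])
    print(np.round(coeffs, 3))    # rows ~ [2,0,0], [-4,4,0], [3,-3,3]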

5. Optimum receiver.
Suppose one of M equiprobable signals $x_i(t)$, i = 0, ..., M − 1, is to be transmitted during a period of time T over an AWGN channel. Moreover, each signal is identical to all the others in the subinterval $[t_1, t_2]$, where $0 < t_1 < t_2 < T$.
(a) Show that the optimum receiver may ignore the subinterval $[t_1, t_2]$.
(b) Equivalently, show that if the vectors $\mathbf{x}_i^T = [x_{i1}\; x_{i2}\; \ldots\; x_{iN}]$ of length N all have the same projection in one dimension, i.e., there exists k such that $x_{ik} = x_{jk}$ for all $i, j \in \{0, \ldots, M-1\}$, then this dimension may be ignored.
(c) Does this result necessarily hold true if the noise is Gaussian but not white? Explain.
Solution:

(a) Since the data signals $x_i(t)$ are equiprobable, the optimum decision rule is the Maximum Likelihood (ML) rule, given (in vector form) by $\min_i \|\mathbf{y} - \mathbf{x}_i\|^2$. From the invariance of the inner product, the ML rule is equivalent to
$$\min_i \int_0^T |y(t) - x_i(t)|^2\,dt$$
The integral is then written as a sum of three integrals,
$$\int_0^T|y(t)-x_i(t)|^2\,dt = \int_0^{t_1}|y(t)-x_i(t)|^2\,dt + \int_{t_1}^{t_2}|y(t)-x_i(t)|^2\,dt + \int_{t_2}^{T}|y(t)-x_i(t)|^2\,dt$$
Since the second integral, over the interval $[t_1, t_2]$, is constant as a function of i, the optimum decision rule reduces to
$$\min_i\left[\int_0^{t_1}|y(t)-x_i(t)|^2\,dt + \int_{t_2}^{T}|y(t)-x_i(t)|^2\,dt\right]$$
and therefore the optimum receiver may ignore the interval $[t_1, t_2]$.
(b) In an appropriate orthonormal basis of dimension N ≤ M, the vectors $\mathbf{x}_i$ and $\mathbf{y}$ are given by
$$\mathbf{x}_i^T = \begin{bmatrix} x_{i1} & x_{i2} & \ldots & x_{iN} \end{bmatrix}, \qquad \mathbf{y}^T = \begin{bmatrix} y_1 & y_2 & \ldots & y_N \end{bmatrix}$$
Assume that $x_{im} = x_{1m}$ for all i; the optimum decision rule becomes
$$\min_i\sum_{k=1}^{N}|y_k - x_{ik}|^2 \;\Leftrightarrow\; \min_i\left[\sum_{k=1,\,k\ne m}^{N}|y_k - x_{ik}|^2 + |y_m - x_{im}|^2\right]$$
Since $|y_m - x_{im}|^2$ is constant for all i, the optimum decision rule becomes
$$\min_i\sum_{k=1,\,k\ne m}^{N}|y_k - x_{ik}|^2$$
Therefore, the projection $x_m$ may be ignored by the optimum receiver.


(c) The result does not hold true if the noise is colored Gaussian noise. This is due to the fact
that the noise along one component is correlated with other components and hence might
not be irrelevant. In such a case, all components turn out to be relevant. Equivalently, by
duality, the same result holds in the time domain.

6 Optimal Receiver for the Waveform Channel
1. [1, Problem 5.4].
A binary digital communication system employs the signals
$$s_0(t) = 0, \quad 0 \le t \le T, \qquad s_1(t) = \begin{cases} A, & 0 \le t \le T, \\ 0, & \text{otherwise,} \end{cases}$$
for transmitting the information. This is called on-off signaling. The demodulator cross-correlates the received signal r(t) with $s_i(t)$, i = 0, 1, and samples the output of the correlator at t = T.
(a) Determine the optimum detector for an AWGN channel and the optimum threshold, assuming that the signals are equally probable.
(b) Determine the probability of error as a function of the SNR. How does on-off signaling compare with antipodal signaling?
Solution:
(a) The correlation-type demodulator employs the filter
$$f(t) = \begin{cases} \frac{1}{\sqrt{T}}, & 0 \le t \le T, \\ 0, & \text{otherwise.} \end{cases}$$
Hence, the sampled output of the cross-correlator is
$$r = s_i + n, \quad i = 0, 1,$$
where $s_0 = 0$, $s_1 = A\sqrt{T}$, and the noise term n is a zero-mean Gaussian random variable with variance $\sigma_n^2 = \frac{N_0}{2}$. The probability density functions of the sampled output are
$$f(r|s_0) = \frac{1}{\sqrt{\pi N_0}}e^{-\frac{r^2}{N_0}}, \qquad f(r|s_1) = \frac{1}{\sqrt{\pi N_0}}e^{-\frac{(r - A\sqrt{T})^2}{N_0}}$$
The minimum-error decision rule is
$$\frac{f(r|s_1)}{f(r|s_0)} \underset{s_0}{\overset{s_1}{\gtrless}} 1 \;\Rightarrow\; r \underset{s_0}{\overset{s_1}{\gtrless}} \frac12 A\sqrt{T}$$

(b) The average probability of error is
$$p(e) = \frac12\int_{\frac12 A\sqrt{T}}^{\infty} f(r|s_0)\,dr + \frac12\int_{-\infty}^{\frac12 A\sqrt{T}} f(r|s_1)\,dr$$
$$= \frac12\int_{\frac12\sqrt{\frac{2}{N_0}}A\sqrt{T}}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\,dx + \frac12\int_{-\infty}^{-\frac12\sqrt{\frac{2}{N_0}}A\sqrt{T}}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\,dx = Q\left(\frac12\sqrt{\frac{2}{N_0}}A\sqrt{T}\right) = Q\left(\sqrt{\mathrm{SNR}}\right)$$
where
$$\mathrm{SNR} = \frac{\frac12 A^2T}{N_0}.$$
Thus, on-off signaling requires a factor of two more energy to achieve the same probability of error as antipodal signaling.
2. [2, Problem 5.11].
Consider the optimal detection of the sinusoidal signal
$$s(t) = \sin\left(\frac{8\pi t}{T}\right), \quad 0 \le t \le T,$$
in additive white Gaussian noise.

(a) Determine the correlator output (at t = T) assuming a noiseless input.
(b) Determine the corresponding matched filter output, assuming that the filter includes a delay T to make it causal.
(c) Hence show that these two outputs are the same at time instant t = T.
Solution:
For the noiseless case, the received signal is r(t) = s(t), 0 ≤ t ≤ T.

(a) The correlator output is:
$$y(T) = \int_0^T r(\tau)s(\tau)\,d\tau = \int_0^T s^2(\tau)\,d\tau = \int_0^T \sin^2\left(\frac{8\pi\tau}{T}\right)d\tau = \frac{T}{2}$$

(b) The matched filter is defined by the impulse response h(t) = s(T − t). The matched filter output is therefore:
$$y(t) = \int_{-\infty}^{\infty} r(\lambda)h(t-\lambda)\,d\lambda = \int_{-\infty}^{\infty} s(\lambda)s(T - t + \lambda)\,d\lambda = \int_0^T \sin\left(\frac{8\pi\lambda}{T}\right)\sin\left(\frac{8\pi(T-t+\lambda)}{T}\right)d\lambda$$
$$= \frac12\int_0^T\cos\left(\frac{8\pi(T-t)}{T}\right)d\lambda - \frac12\int_0^T\cos\left(\frac{8\pi(T-t+2\lambda)}{T}\right)d\lambda = \frac{T}{2}\cos\left(\frac{8\pi(T-t)}{T}\right),$$
where the second integral vanishes because it extends over an integer number of periods of the cosine.

(c) When the matched filter output is sampled at t = T, we get
$$y(T) = \frac{T}{2},$$
which is exactly the same as the correlator output determined in (2a).

3. SNR Maximization with a Matched Filter.


Prove the following theorem:
For the real system shown in Figure 1, the filter h(t) that maximizes the signal-to-noise ratio at
sample time Ts is given by the matched filter h(t) = x(Ts − t).

Figure 1: SNR maximization by a matched filter: x(t) plus noise n(t) is passed through h(t) and sampled at t = Ts.

Solution:
Compute the SNR at sample time t = Ts as follows:
$$\text{Signal Energy} = \left[x(t)*h(t)\big|_{t=T_s}\right]^2 = \left[\int_{-\infty}^{\infty}x(t)h(T_s-t)\,dt\right]^2 = \left[\langle x(t), h(T_s-t)\rangle\right]^2$$
The sampled noise at the matched filter output has energy (mean square)
$$\text{Noise Energy} = E\left\{\int_{-\infty}^{\infty}n(t)h(T_s-t)\,dt\int_{-\infty}^{\infty}n(s)h(T_s-s)\,ds\right\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{N_0}{2}\delta(t-s)h(T_s-t)h(T_s-s)\,dt\,ds = \frac{N_0}{2}\int_{-\infty}^{\infty}h^2(T_s-t)\,dt = \frac{N_0}{2}\|h\|^2$$
The signal-to-noise ratio, defined as the ratio of the signal power to the noise power, equals
$$\mathrm{SNR} = \frac{2\left[\langle x(t), h(T_s-t)\rangle\right]^2}{N_0\|h\|^2}$$
The Cauchy-Schwarz inequality states that
$$\left[\langle x(t), h(T_s-t)\rangle\right]^2 \le \|x\|^2\|h\|^2$$
with equality if and only if $x(t) = kh(T_s-t)$, where k is some arbitrary constant. Thus, by inspection, the SNR is maximized over all choices of h(t) when $h(t) = x(T_s-t)$. The filter h(t) is matched to x(t), and the corresponding maximum SNR (for any k) is
$$\mathrm{SNR}_{max} = \frac{2}{N_0}\|x\|^2$$
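A small numerical illustration of the theorem (a sketch; the half-sine pulse and the candidate filters are arbitrary choices): among several filters, only the one matched to x(t) attains the bound 2‖x‖²/N0 at the sampling instant.

    import numpy as np

    dt, Ts, N0 = 1e-3, 1.0, 0.1
    t = np.arange(0, Ts, dt)
    x = np.sin(np.pi * t / Ts)                         # example pulse on [0, Ts)

    def output_snr(h):
        h_flip = h[::-1]                               # h(Ts - t) on the same time grid
        sig = np.sum(x * h_flip) * dt                  # noiseless output sample at t = Ts
        noise_var = (N0 / 2) * np.sum(h**2) * dt       # variance of the filtered noise sample
        return sig**2 / noise_var

    candidates = {
        "matched h(t) = x(Ts - t)": x[::-1],
        "rectangular filter":       np.ones_like(t),
        "triangular filter":        np.minimum(t, Ts - t),
    }
    for name, h in candidates.items():
        print("%s: SNR = %.3f" % (name, output_snr(h)))
    print("theoretical maximum 2/N0 * ||x||^2 =", 2 / N0 * np.sum(x**2) * dt)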

4. The optimal receiver.

Consider the signals $s_0(t)$, $s_1(t)$ with the respective probabilities $p_0$, $p_1$:
$$s_0(t) = \begin{cases} \sqrt{\frac{E}{T}}, & 0 \le t < aT, \\ -\sqrt{\frac{E}{T}}, & aT \le t < T, \\ 0, & \text{otherwise,} \end{cases} \qquad s_1(t) = \begin{cases} \sqrt{\frac{2E}{T}}\cos\frac{2\pi t}{T}, & 0 \le t < T, \\ 0, & \text{otherwise.} \end{cases}$$
The observation, r(t), obeys
$$r(t) = s_i(t) + n(t), \quad i = 0, 1, \qquad E\{n(t)n(\tau)\} = \frac{N_0}{2}\delta(t-\tau), \quad n(t) \sim \mathcal{N}\left(0, \frac{N_0}{2}\delta(t-\tau)\right).$$
(a) Find the optimal receiver for the above two signals; write the solution in terms of $s_0(t)$ and $s_1(t)$.
(b) Find the error probability of the optimal receiver for equiprobable signals.
(c) Find the parameter a which minimizes the error probability.

Solution:

(a) We use a type II receiver, which uses filters matched to the signals $s_i(t)$, i = 0, 1. The optimal receiver is depicted in Figure 2.

Figure 2: Optimal receiver, type II: r(t) is passed through the matched filters $h_0(t)$ and $h_1(t)$, each sampled at t = T; the bias terms $\frac{N_0}{2}\ln p_i - \frac12 E$ are added to form $y_0$ and $y_1$, and the larger output is selected.

where $h_0(t) = s_0(T-t)$, $h_1(t) = s_1(T-t)$.
The Max block in Figure 2 can be implemented as follows:
$$y = y_0 - y_1 \underset{s_1(t)}{\overset{s_0(t)}{\gtrless}} 0$$
The R.V. y obeys
$$y = \left[h_0(t)*r(t)\right]_{t=T} + \frac{N_0}{2}\ln p_0 - \frac{E}{2} - \left[h_1(t)*r(t)\right]_{t=T} - \frac{N_0}{2}\ln p_1 + \frac{E}{2} = \frac{N_0}{2}\ln\frac{p_0}{p_1} + \left[(h_0(t) - h_1(t))*r(t)\right]_{t=T}$$
Hence the optimal receiver can be implemented using one convolution operation instead of two, as depicted in Figure 3.

Figure 3: Equivalent receiver: r(t) is passed through the single filter $h_0(t) - h_1(t)$, sampled at t = T, the bias $\frac{N_0}{2}\ln\frac{p_0}{p_1}$ is added, and the sign of the result drives the decision.
(b) For an equiprobable binary constellation in an AWGN channel, the probability of error is given by
$$p(e) = Q\left(\frac{d/2}{\sigma}\right), \qquad d = \|s_0 - s_1\|, \qquad d^2 = \|s_0 - s_1\|^2 = \|s_0\|^2 + \|s_1\|^2 - 2\langle s_0, s_1\rangle$$

where σ 2 is the noise variance.

The correlation coefficient between the two signals, ρ, equals
$$\rho = \frac{\langle s_0, s_1\rangle}{\|s_0\|\|s_1\|} = \frac{\langle s_0, s_1\rangle}{E}$$
and for equal-energy signals
$$d^2 = 2E - 2\langle s_0, s_1\rangle = 2E(1-\rho) \;\Rightarrow\; d = \sqrt{2E(1-\rho)} \;\Rightarrow\; p(e) = Q\left(\sqrt{\frac{E(1-\rho)}{N_0}}\right)$$

(c) ρ is the only quantity in p(e) affected by a. An explicit calculation of ρ yields
$$\langle s_0, s_1\rangle = \int_0^T s_0(t)s_1(t)\,dt = \int_0^{aT}\sqrt{\frac{E}{T}}\sqrt{\frac{2E}{T}}\cos\frac{2\pi t}{T}\,dt - \int_{aT}^{T}\sqrt{\frac{E}{T}}\sqrt{\frac{2E}{T}}\cos\frac{2\pi t}{T}\,dt$$
$$= \frac{\sqrt{2}E}{2\pi}\sin 2\pi a + \frac{\sqrt{2}E}{2\pi}\sin 2\pi a = \frac{\sqrt{2}E}{\pi}\sin 2\pi a \;\Rightarrow\; \rho = \frac{\sqrt{2}}{\pi}\sin 2\pi a$$
$$\Rightarrow\; p(e) = Q\left(\sqrt{\frac{E\left(1 - \frac{\sqrt{2}}{\pi}\sin 2\pi a\right)}{N_0}}\right)$$
In order to minimize the probability of error, we maximize the argument of the Q-function:
$$\sin 2\pi a = -1 \;\Rightarrow\; a = \frac34$$
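A quick numerical confirmation that a = 3/4 minimizes p(e) (a sketch; E/N0 = 4 is an arbitrary example value):

    import numpy as np
    from scipy.special import erfc

    def Q(x):
        return 0.5 * erfc(x / np.sqrt(2))

    E_over_N0 = 4.0                                        # arbitrary example SNR
    a = np.linspace(0, 1, 10_001)
    rho = np.sqrt(2) / np.pi * np.sin(2 * np.pi * a)
    pe = Q(np.sqrt(E_over_N0 * (1 - rho)))
    print("argmin p(e):", a[np.argmin(pe)])                # -> 0.75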

7 The Probability of Error
1. [1, Problem 5.10].
A ternary communication system transmits one of three signals, s(t), 0, or −s(t), every T seconds. The received signal is either r(t) = s(t) + z(t), r(t) = z(t), or r(t) = −s(t) + z(t), where z(t) is white Gaussian noise with E{z(t)} = 0 and $\Phi_{zz}(\tau) = \frac12 E\{z(t)z^*(\tau)\} = N_0\delta(t-\tau)$. The optimum receiver computes the correlation metric
$$U = \mathrm{Re}\left[\int_0^T r(t)s^*(t)\,dt\right]$$
and compares U with a threshold A and a threshold −A. If U > A, the decision is made that s(t) was sent. If U < −A, the decision is made in favor of −s(t). If −A ≤ U ≤ A, the decision is made in favor of 0.
(a) Determine the three conditional probabilities of error: p(e|s(t)), p(e|0) and p(e|−s(t)).
(b) Determine the average probability of error p(e) as a function of the threshold A, assuming that the three symbols are equally probable a priori.
(c) Determine the value of A that minimizes p(e).
Solution:

(a) $U = \mathrm{Re}\left[\int_0^T r(t)s^*(t)\,dt\right]$, where r(t) equals s(t) + z(t), −s(t) + z(t), or z(t), depending on which signal was sent. If we assume that s(t) was sent:
$$U = \mathrm{Re}\left[\int_0^T s(t)s^*(t)\,dt\right] + \mathrm{Re}\left[\int_0^T z(t)s^*(t)\,dt\right] = 2E + N$$
where $E = \frac12\int_0^T s(t)s^*(t)\,dt$ is a constant and $N = \mathrm{Re}\left[\int_0^T z(t)s^*(t)\,dt\right]$ is a Gaussian random variable with zero mean and variance $2EN_0$. Hence, given that s(t) was sent, the probability of error is:
$$p_1(e) = \Pr\{N < A - 2E\} = Q\left(\frac{2E - A}{\sqrt{2EN_0}}\right)$$
When −s(t) is transmitted, U = −2E + N, and the corresponding conditional error probability is:
$$p_2(e) = \Pr\{N > -A + 2E\} = Q\left(\frac{2E - A}{\sqrt{2EN_0}}\right)$$
Finally, when 0 is transmitted, U = N, and the corresponding error probability is:
$$p_3(e) = \Pr\{N > A \text{ or } N < -A\} = 2Q\left(\frac{A}{\sqrt{2EN_0}}\right)$$

(b)
$$p(e) = \frac13\left[p_1(e) + p_2(e) + p_3(e)\right] = \frac23\left[Q\left(\frac{2E-A}{\sqrt{2EN_0}}\right) + Q\left(\frac{A}{\sqrt{2EN_0}}\right)\right]$$

(c) In order to minimize p(e):
$$\frac{dp(e)}{dA} = 0 \;\Rightarrow\; A = E,$$
where we differentiate $Q(x) = \frac{1}{\sqrt{2\pi}}\int_x^{\infty}e^{-t^2/2}\,dt$ with respect to x using the Leibniz rule: $\frac{d}{dx}\int_{f(x)}^{\infty}g(a)\,da = -\frac{df}{dx}g(f(x))$.
Using this threshold:
$$p(e) = \frac43 Q\left(\sqrt{\frac{E}{2N_0}}\right)$$

2. [1, Problem 5.19].

Consider a signal detector with an input
$$r = \pm A + n, \quad A > 0,$$
where +A and −A occur with equal probability and the noise variable n is characterized by the Laplacian p.d.f.
$$f(n) = \frac{1}{\sqrt{2}\,\sigma}e^{-\frac{\sqrt{2}|n|}{\sigma}}$$
(a) Determine the probability of error as a function of the parameters A and σ.
(b) Determine the SNR required to achieve an error probability of $10^{-5}$. How does the SNR compare with the result for a Gaussian p.d.f.?

Solution:

(a) Let $\lambda = \frac{\sqrt{2}}{\sigma}$. The optimal receiver uses the criterion:
$$\frac{f(r|A)}{f(r|-A)} = e^{-\lambda[|r-A| - |r+A|]} \underset{-A}{\overset{A}{\gtrless}} 1 \;\Rightarrow\; r \underset{-A}{\overset{A}{\gtrless}} 0$$
The average probability of error is:
$$p(e) = \frac12\Pr\{\text{Error}|A\} + \frac12\Pr\{\text{Error}|-A\} = \frac12\int_{-\infty}^{0}f(r|A)\,dr + \frac12\int_{0}^{\infty}f(r|-A)\,dr$$
$$= \frac12\int_{-\infty}^{0}\frac{\lambda}{2}e^{-\lambda|r-A|}\,dr + \frac12\int_{0}^{\infty}\frac{\lambda}{2}e^{-\lambda|r+A|}\,dr = \frac{\lambda}{4}\int_{-\infty}^{-A}e^{-\lambda|x|}\,dx + \frac{\lambda}{4}\int_{A}^{\infty}e^{-\lambda|x|}\,dx = \frac12 e^{-\lambda A} = \frac12 e^{-\frac{\sqrt{2}A}{\sigma}}$$

(b) The variance of the noise is $\sigma^2$, hence the SNR is:
$$\mathrm{SNR} = \frac{A^2}{\sigma^2}$$
and the probability of error is given by:
$$p(e) = \frac12 e^{-\sqrt{2\,\mathrm{SNR}}}$$
For $p(e) = 10^{-5}$ we obtain:
$$\ln(2\cdot 10^{-5}) = -\sqrt{2\,\mathrm{SNR}} \;\Rightarrow\; \mathrm{SNR} = 17.674\text{ dB}$$
If the noise were Gaussian, then the probability of error for antipodal signaling is:
$$p(e) = Q\left(\sqrt{\mathrm{SNR}}\right)$$
where SNR is the signal-to-noise ratio at the output of the matched filter. With $p(e) = 10^{-5}$ we find $\sqrt{\mathrm{SNR}} = 4.26$ and therefore SNR = 12.594 dB. Thus the required signal-to-noise ratio is 5 dB less when the additive noise is Gaussian.
3. [1, Problem 5.38].
The discrete sequence
$$r_k = \sqrt{E_b}\,c_k + n_k, \quad k = 1, 2, \ldots, n,$$
represents the output sequence of samples from a demodulator, where $c_k = \pm1$ are elements of one of two possible code words, $C_1 = [1\;1\;\ldots\;1]$ and $C_2 = [1\;1\;\ldots\;1\;-1\;\ldots\;-1]$. The code word $C_2$ has w elements that are +1 and n − w elements that are −1, where w is a positive integer. The noise sequence $\{n_k\}$ is white Gaussian with variance $\sigma^2$.
(a) What is the optimum ML detector for the two possible transmitted signals?
(b) Determine the probability of error as a function of the parameters $\sigma^2$, $E_b$, w.
(c) What is the value of w that minimizes the error probability?
Solution:
(a) The optimal ML detector selects the sequence $C_i$ that minimizes the quantity:
$$D(\mathbf{r}, C_i) = \sum_{k=1}^{n}\left(r_k - \sqrt{E_b}\,c_{ik}\right)^2$$
The metrics of the two possible transmitted sequences are
$$D(\mathbf{r}, C_1) = \sum_{k=1}^{w}\left(r_k - \sqrt{E_b}\right)^2 + \sum_{k=w+1}^{n}\left(r_k - \sqrt{E_b}\right)^2, \qquad D(\mathbf{r}, C_2) = \sum_{k=1}^{w}\left(r_k - \sqrt{E_b}\right)^2 + \sum_{k=w+1}^{n}\left(r_k + \sqrt{E_b}\right)^2$$
Since the first term on the right-hand side is common to the two expressions, the optimal ML detector can base its decision only on the last n − w received elements of r. That is,
$$\sum_{k=w+1}^{n}\left(r_k - \sqrt{E_b}\right)^2 - \sum_{k=w+1}^{n}\left(r_k + \sqrt{E_b}\right)^2 \underset{C_1}{\overset{C_2}{\gtrless}} 0,$$
or equivalently,
$$\sum_{k=w+1}^{n} r_k \underset{C_2}{\overset{C_1}{\gtrless}} 0.$$

(b) Since $r_k = \sqrt{E_b}\,c_{ik} + n_k$, the probability of error $\Pr\{\mathrm{Error}|C_1\}$ is
$$\Pr\{\mathrm{Error}|C_1\} = \Pr\left\{\sqrt{E_b}(n-w) + \sum_{k=w+1}^{n}n_k < 0\right\} = \Pr\left\{\sum_{k=w+1}^{n}n_k < -(n-w)\sqrt{E_b}\right\}$$
The R.V. $u = \sum_{k=w+1}^{n}n_k$ is zero-mean Gaussian with variance $\sigma_u^2 = (n-w)\sigma^2$. Hence
$$\Pr\{\mathrm{Error}|C_1\} = \frac{1}{\sqrt{2\pi\sigma_u^2}}\int_{-\infty}^{-(n-w)\sqrt{E_b}}\exp\left(-\frac{x^2}{2\sigma_u^2}\right)dx = Q\left(\sqrt{\frac{E_b(n-w)}{\sigma^2}}\right)$$
Similarly we find that $\Pr\{\mathrm{Error}|C_2\} = \Pr\{\mathrm{Error}|C_1\}$, and since the two sequences are equiprobable,
$$p(e) = Q\left(\sqrt{\frac{E_b(n-w)}{\sigma^2}}\right)$$
(c) The probability of error p(e) is minimized when $\frac{E_b(n-w)}{\sigma^2}$ is maximized, that is, for w = 0. This implies that $C_1 = -C_2$, and thus the distance between the two sequences is the maximum possible.

4. Sub-optimal receiver.

Consider a binary system transmitting the signals $s_0(t)$, $s_1(t)$ with equal probability:
$$s_0(t) = \begin{cases} \sqrt{\frac{2E}{T}}\sin\frac{2\pi t}{T}, & 0 \le t \le T, \\ 0, & \text{otherwise,} \end{cases} \qquad s_1(t) = \begin{cases} \sqrt{\frac{2E}{T}}\cos\frac{2\pi t}{T}, & 0 \le t \le T, \\ 0, & \text{otherwise.} \end{cases}$$
The observation, r(t), obeys
$$r(t) = s_i(t) + n(t), \quad i = 0, 1,$$
where n(t) is white Gaussian noise with E{n(t)} = 0 and $E\{n(t)n(\tau)\} = \frac{N_0}{2}\delta(t-\tau)$.

(a) Sketch an optimal and efficient (in the sense of a minimal number of filters) receiver. What is the error probability when this receiver is used?
(b) What is the error probability of the following receiver?
$$\int_0^{T/2} r(t)\,dt \underset{s_1}{\overset{s_0}{\gtrless}} 0$$
(c) Consider the following receiver:
$$\int_0^{aT} r(t)\,dt \underset{s_1}{\overset{s_0}{\gtrless}} K, \quad 0 \le a \le 1,$$
where K is the optimal threshold for $\int_0^{aT} r(t)\,dt$. Find the a which minimizes the probability of error. A numerical solution may be used.

Figure 4: Optimal receiver, type II: r(t) is passed through the matched filters $s_0(T-t)$ and $s_1(T-t)$, each sampled at t = T, and the larger output is selected.

Solution:

(a) The signals are equiprobable and have equal energy. We use the type II receiver depicted in Figure 4.
The distance between the signals is
$$d^2 = \int_0^T \frac{2E}{T}\left(\sin\frac{2\pi t}{T} - \cos\frac{2\pi t}{T}\right)^2 dt = 2E \;\Rightarrow\; d = \sqrt{2E}$$
The receiver depicted in Figure 4 is equivalent to the following (and more efficient) receiver, depicted in Figure 5.

Figure 5: Efficient optimal receiver: r(t) is passed through the single filter $s_0(T-t) - s_1(T-t)$, sampled at t = T, and the output is compared with the threshold 0.

For a binary system with equiprobable signals $s_0(t)$ and $s_1(t)$ the probability of error is given by
$$p(e) = Q\left(\frac{d}{2\sigma}\right) = Q\left(\frac{d}{2\sqrt{N_0/2}}\right) = Q\left(\frac{d}{\sqrt{2N_0}}\right)$$
where d, the distance between the signals, is given by $d = \|s_0(t) - s_1(t)\| = \|\mathbf{s}_0 - \mathbf{s}_1\|$. Hence, the probability of error is
$$p(e) = Q\left(\frac{d}{\sqrt{2N_0}}\right) = Q\left(\sqrt{\frac{E}{N_0}}\right)$$
(b) Define the random variable $Y = \int_0^{T/2} r(t)\,dt$. Y obeys
$$Y|s_0 = \int_0^{T/2}s_0(t)\,dt + \int_0^{T/2}n(t)\,dt, \qquad Y|s_1 = \int_0^{T/2}s_1(t)\,dt + \int_0^{T/2}n(t)\,dt$$
Define the random variable $N = \int_0^{T/2}n(t)\,dt$. N is a zero-mean Gaussian random variable with variance
$$\mathrm{Var}\{N\} = E\left\{\int_0^{T/2}\!\!\int_0^{T/2}n(\tau)n(\lambda)\,d\tau\,d\lambda\right\} = \int_0^{T/2}\!\!\int_0^{T/2}\frac{N_0}{2}\delta(\tau-\lambda)\,d\tau\,d\lambda = \frac{N_0T}{4}$$
$Y|s_i$ is a Gaussian random variable (note that Y itself is not Gaussian, but a Gaussian mixture!) with mean:
$$E\{Y|s_0\} = \int_0^{T/2}s_0(t)\,dt = \frac{\sqrt{2ET}}{\pi}, \qquad E\{Y|s_1\} = \int_0^{T/2}s_1(t)\,dt = 0$$
The variance of $Y|s_i$ is identical in both cases and equal to the variance of N. For the given decision rule the error probability is:
$$p(e) = p(s_0)\Pr\{Y < 0|s_0\} + p(s_1)\Pr\{Y > 0|s_1\} = \frac12 Q\left(\frac{2}{\pi}\sqrt{\frac{2E}{N_0}}\right) + \frac14$$

(c) We use the same derivation procedure as in the previous item. Define the random variables Y, N as follows:
$$Y = \int_0^{aT}r(t)\,dt, \qquad N = \int_0^{aT}n(t)\,dt, \qquad E\{N\} = 0, \quad \mathrm{Var}\{N\} = \frac{aTN_0}{2}$$
$$E\{Y|s_0\} = \sqrt{\frac{2E}{T}}\int_0^{aT}\sin\frac{2\pi t}{T}\,dt = \frac{\sqrt{2ET}}{2\pi}\left(1 - \cos 2\pi a\right), \qquad E\{Y|s_1\} = \sqrt{\frac{2E}{T}}\int_0^{aT}\cos\frac{2\pi t}{T}\,dt = \frac{\sqrt{2ET}}{2\pi}\sin 2\pi a$$
$$\mathrm{Var}\{Y|s_0\} = \mathrm{Var}\{Y|s_1\} = \mathrm{Var}\{N\}$$
The distance between the conditional means of $Y|s_0$ and $Y|s_1$ equals
$$d = \frac{\sqrt{2ET}}{2\pi}\left(1 - \cos 2\pi a - \sin 2\pi a\right)$$
For the optimal threshold K the probability of error equals $Q\left(\frac{|d|}{2\sigma}\right)$. Hence the probability of error equals
$$p(e) = Q\left(\frac{1}{2\pi}\sqrt{\frac{E}{N_0}}\,\frac{1}{\sqrt{a}}\left|1 - \cos 2\pi a - \sin 2\pi a\right|\right),$$
which is minimized when $\frac{1}{\sqrt{a}}\left|1 - \cos 2\pi a - \sin 2\pi a\right|$ is maximized.
Let $a_{opt}$ denote the a which maximizes the above expression. A numerical solution yields
$$a_{opt} \cong 0.5885$$
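The numerical optimization in (c) reduces to maximizing the Q-function argument over a grid (a sketch; the constant prefactor is dropped since it does not affect the maximizer):

    import numpy as np

    a = np.linspace(1e-4, 1.0, 200_001)
    metric = np.abs(1 - np.cos(2 * np.pi * a) - np.sin(2 * np.pi * a)) / np.sqrt(a)
    print("a_opt ~", a[np.argmax(metric)])    # ~ 0.5885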

8 Bit Error Probability
1. [3, Example 6.2].
Compare the probability of bit error for 8PSK and 16PSK, in an AWGN channel, assuming $\gamma_b = \frac{E_b}{N_0} = 15$ dB and equal a-priori probabilities. Use the following approximations:
• The nearest-neighbor approximation given in class.
• $\gamma_b \approx \frac{\gamma_s}{\log_2 M}$.
• The approximation for $P_{e,bit}$ given in class.
Solution:
The nearest-neighbor approximation for the probability of error, in an AWGN channel, for an M-PSK constellation is
$$P_e \approx 2Q\left(\sqrt{2\gamma_s}\sin\frac{\pi}{M}\right).$$
The approximation for $P_{e,bit}$ (under Gray mapping, at high enough SNR) is
$$P_{e,bit} \approx \frac{P_e}{\log_2 M}.$$

For 8PSK we have $\gamma_s = (\log_2 8)\cdot 10^{15/10} = 94.87$. Hence
$$P_e \approx 2Q\left(\sqrt{189.74}\sin(\pi/8)\right) = 1.355\cdot 10^{-7}.$$
Using the approximation for $P_{e,bit}$ we get
$$P_{e,bit} = \frac{P_e}{3} = 4.52\cdot 10^{-8}.$$
For 16PSK we have $\gamma_s = (\log_2 16)\cdot 10^{15/10} = 126.49$. Hence
$$P_e \approx 2Q\left(\sqrt{252.98}\sin(\pi/16)\right) = 1.916\cdot 10^{-3}.$$
Using the approximation for $P_{e,bit}$ we get
$$P_{e,bit} = \frac{P_e}{4} = 4.79\cdot 10^{-4}.$$

Note that Pe,bit is much larger for 16PSK than for 8PSK for the same γb . This result is expected,
since 16PSK packs more bits per symbol into a given constellation, so for a fixed energy-per-bit
the minimum distance between constellation points will be smaller.
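The numbers above can be reproduced with a few lines (a sketch using the same approximations):

    import numpy as np
    from scipy.special import erfc

    def Q(x):
        return 0.5 * erfc(x / np.sqrt(2))

    gamma_b = 10 ** (15 / 10)                       # 15 dB energy per bit over N0
    for M in (8, 16):
        k = np.log2(M)
        gamma_s = k * gamma_b                       # SNR per symbol
        Pe = 2 * Q(np.sqrt(2 * gamma_s) * np.sin(np.pi / M))
        print("%dPSK: Pe ~ %.3e, Pe_bit ~ %.3e" % (M, Pe, Pe / k))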
2. Bit error probability for rectangular constellation.
Let $p_0(t)$ and $p_1(t)$ be two orthonormal functions, different from zero in the time interval [0, T]. The equiprobable signals defined in Figure 6 are transmitted through a zero-mean AWGN channel with noise PSD equal to $N_0/2$.
(a) Calculate $P_e$ for the optimal receiver.
(b) Calculate $P_{e,bit}$ for the optimal receiver (optimal in the sense of minimal $P_e$).
(c) Approximate $P_{e,bit}$ for high SNR ($\frac{d}{2} \gg \sqrt{\frac{N_0}{2}}$). Explain.

Figure 6: Eight signals in a rectangular constellation on the $(p_0, p_1)$ plane: the points $(\pm d/2, \pm d/2)$ and $(\pm 3d/2, \pm d/2)$. The top row (ordinate $+d/2$) is labeled, from left to right, (010), (011), (001), (000); the bottom row (ordinate $-d/2$) is labeled (110), (111), (101), (100).

Solution:
Let n0 denote the noise projection on p0 (t) and n1 the noise projection on p1 (t). Clearly ni ∼
N (0, N0 /2), i = 0, 1.

(a) Let Pc denote the probability for correct symbol decision; hence Pe = 1 − Pc .
  2
d/2
Pr{correct decision |(000) was transmitted} = 1−Q p
N0 /2
(a)
= Pr{correct decision |(100) was transmitted}
(b)
= Pr{correct decision |(010) was transmitted}
(c)
= Pr{correct decision |(110) was transmitted}
= P1 .

where (a), (b) and (c) are due to the constellation symmetry.

    
d/2 d/2
Pr{correct decision |(001) was transmitted} = 1−Q p 1 − 2Q p
N0 /2 N0 /2
(a)
= Pr{correct decision |(001) was transmitted}
(b)
= Pr{correct decision |(011) was transmitted}
(c)
= Pr{correct decision |(111) was transmitted}
= P2 .

Hence
$$P_c = \frac12\left[\left(1 - Q\left(\frac{d/2}{\sqrt{N_0/2}}\right)\right)^2 + \left(1 - Q\left(\frac{d/2}{\sqrt{N_0/2}}\right)\right)\left(1 - 2Q\left(\frac{d/2}{\sqrt{N_0/2}}\right)\right)\right]$$
$$\Rightarrow\; P_e = 1 - P_c = \frac12\left[5Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) - 3Q^2\left(\frac{d/2}{\sqrt{N_0/2}}\right)\right].$$

(b) Let $b_0$ denote the MSB, $b_2$ the LSB, and $b_1$ the middle bit (for the top-left constellation point in Figure 6, $(b_0, b_1, b_2) = (010)$). Let $b_i(s)$, i = 0, 1, 2, denote the ith bit of the constellation point s. (With a slight abuse of notation, $P_1, \ldots, P_5$ below denote bit-error probabilities, not the symbol probabilities of part (a).)
$$\Pr\{\text{error in }b_2\,|\,(000)\} = \sum_{\tilde s:\,b_2(\tilde s)\ne 0}\Pr\{\tilde s\text{ was received}\,|\,(000)\} = \Pr\left\{-\frac{5d}{2} < n_0 < -\frac{d}{2}\right\} = \Pr\left\{\frac{d}{2} < n_0 < \frac{5d}{2}\right\} = Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) - Q\left(\frac{5d/2}{\sqrt{N_0/2}}\right)$$
$$\stackrel{(a)}{=} \Pr\{\text{error in }b_2\,|\,(100)\} \stackrel{(b)}{=} \Pr\{\text{error in }b_2\,|\,(010)\} \stackrel{(c)}{=} \Pr\{\text{error in }b_2\,|\,(110)\} = P_1,$$
where (a), (b) and (c) are due to the constellation symmetry.
$$\Pr\{\text{error in }b_2\,|\,(001)\} = \sum_{\tilde s:\,b_2(\tilde s)\ne 1}\Pr\{\tilde s\text{ was received}\,|\,(001)\} = \Pr\left\{n_0 < -\frac{3d}{2}\right\} + \Pr\left\{\frac{d}{2} < n_0\right\} = Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) + Q\left(\frac{3d/2}{\sqrt{N_0/2}}\right)$$
$$= \Pr\{\text{error in }b_2\,|\,(101)\} = \Pr\{\text{error in }b_2\,|\,(011)\} = \Pr\{\text{error in }b_2\,|\,(111)\} = P_2.$$

Using similar arguments we can calculate the bit error probability for $b_1$:
$$\Pr\{\text{error in }b_1\,|\,(000)\} = Q\left(\frac{3d/2}{\sqrt{N_0/2}}\right) = \Pr\{\text{error in }b_1\,|\,(100)\} = \Pr\{\text{error in }b_1\,|\,(010)\} = \Pr\{\text{error in }b_1\,|\,(110)\} = P_3.$$
$$\Pr\{\text{error in }b_1\,|\,(001)\} = Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) = \Pr\{\text{error in }b_1\,|\,(101)\} = \Pr\{\text{error in }b_1\,|\,(011)\} = \Pr\{\text{error in }b_1\,|\,(111)\} = P_4.$$
The bit error probability for $b_0$ equals
$$\Pr\{\text{error in }b_0\,|\,(000)\} = Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) = P_5.$$
Due to the constellation symmetry and the bit mapping, the bit error probability for $b_0$ is the same for all the constellation points.
Let $P_{e,b_i}$, i = 0, 1, 2, denote the averaged (over all signals) bit error probability of the ith bit; then
$$P_{e,b_0} = P_5, \qquad P_{e,b_1} = \frac12(P_3 + P_4), \qquad P_{e,b_2} = \frac12(P_1 + P_2).$$
The averaged bit error probability, $P_{e,bit}$, is given by
$$P_{e,bit} = \frac13\sum_{i=0}^{2}P_{e,b_i} = \frac56 Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) + \frac13 Q\left(\frac{3d/2}{\sqrt{N_0/2}}\right) - \frac16 Q\left(\frac{5d/2}{\sqrt{N_0/2}}\right)$$
(c) For $\frac{d}{2} \gg \sqrt{\frac{N_0}{2}}$:
$$P_{e,bit} \cong \frac56 Q\left(\frac{d/2}{\sqrt{N_0/2}}\right), \qquad P_e \cong \frac52 Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) \;\Rightarrow\; P_{e,bit} \to \frac{P_e}{3}.$$

Note that $\frac{P_e}{\log_2 M}$ is the lower bound for $P_{e,bit}$.
9 Connection with the Concept of Capacity
1. [2, Problem 9.29].
A voice-grade channel of the telephone network has a bandwidth of 3.4 kHz. Assume real-valued symbols.
(a) Calculate the capacity of the telephone channel for a signal-to-noise ratio of 30 dB.
(b) Calculate the minimum signal-to-noise ratio required to support information transmission through the telephone channel at the rate of 4800 bits/sec.

Solution:
(a) The channel bandwidth is W = 3.4 kHz. The received signal-to-noise ratio is SNR = $10^3$ = 30 dB. Hence the channel capacity is
$$C = W\log_2(1 + \mathrm{SNR}) = 3.4\cdot 10^3\cdot\log_2(1 + 10^3) = 33.9\cdot 10^3\;\left[\frac{\text{bits}}{\text{sec}}\right].$$
(b) The required SNR is the solution of the following equation:
$$4800 = 3.4\cdot 10^3\cdot\log_2(1 + \mathrm{SNR}) \;\Rightarrow\; \mathrm{SNR} = 1.66 \approx 2.2\text{ dB}.$$
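Both parts are one-line computations (a sketch):

    import numpy as np

    W = 3.4e3                                           # bandwidth in Hz
    print("C =", W * np.log2(1 + 1e3), "bits/sec")      # (a): ~33.9e3 bits/sec
    snr = 2 ** (4800 / W) - 1                           # (b): invert C = W log2(1 + SNR)
    print("SNR =", snr, "=", 10 * np.log10(snr), "dB")  # ~1.66, ~2.2 dB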
2. [1, Problem 7.17].
Channel $C_1$ is an additive white Gaussian noise channel with a bandwidth W, average transmitter power P, and noise power spectral density $\frac{N_0}{2}$. Channel $C_2$ is an additive Gaussian noise channel with the same bandwidth and average power as channel $C_1$ but with noise power spectral density $S_n(f)$. It is further assumed that the total noise power for both channels is the same; that is,
$$\int_{-W}^{W}S_n(f)\,df = \int_{-W}^{W}\frac{N_0}{2}\,df = N_0W.$$
Which channel do you think has larger capacity? Give an intuitive reasoning.
Solution:
The capacity of the additive white Gaussian noise channel is:
$$C = W\log_2\left(1 + \frac{P}{N_0W}\right).$$

For the nonwhite Gaussian noise channel, although the noise power is equal to the noise power in
the white Gaussian noise channel, the capacity is higher. The reason is that since noise samples
are correlated, knowledge of the previous noise samples provides partial information on the future
noise samples and therefore reduces their effective variance.
3. Capacity of ISI channel.
Consider a channel with Inter-Symbol Interference (ISI) defined as follows:
$$y_k = \sum_{i=0}^{L-1} h_i x_{k-i} + z_k.$$
The channel input obeys an average power constraint $E\{x_k^2\} \le P$, and the noise $z_k$ is i.i.d. Gaussian distributed: $z_k \sim \mathcal{N}(0, \sigma_z^2)$. Assume that $H(e^{j2\pi f})$ has no zeros and show that the channel capacity is
$$C = \frac12\int_{-W}^{W}\log\left(1 + \frac{\left[\Delta - \sigma_z^2/|H(e^{j2\pi f})|^2\right]^+}{\sigma_z^2/|H(e^{j2\pi f})|^2}\right)df,$$
where $\Delta$ is a constant selected such that
$$\int_{-W}^{W}\left[\Delta - \frac{\sigma_z^2}{|H(e^{j2\pi f})|^2}\right]^+ df = P.$$

You may use the following theorem.

Theorem 1. Let the transmitter have a maximum average power constraint of P [Watts]. The capacity of an additive Gaussian noise channel with noise power spectrum N(f) [Watts/Hz] is given by
$$C = \frac12\int_{-\pi}^{\pi}\log_2\left(1 + \frac{\left[\nu - N(f)\right]^+}{N(f)}\right)df \quad \left[\frac{\text{bits}}{\text{sec}}\right],$$
where $\nu$ is chosen so that $\int\left[\nu - N(f)\right]^+\,df = P$.

Solution:
Since $H(e^{j2\pi f})$ has no zeros, the ISI "filter" is invertible. Inverting the channel results in
$$\tilde Y(e^{j2\pi f}) = \frac{Y(e^{j2\pi f})}{H(e^{j2\pi f})} = X(e^{j2\pi f}) + \frac{Z(e^{j2\pi f})}{H(e^{j2\pi f})} = X(e^{j2\pi f}) + \tilde Z(e^{j2\pi f}).$$
This is a problem of a colored Gaussian noise channel with no ISI. The noise PSD is
$$S_{\tilde Z\tilde Z}(e^{j2\pi f}) = \frac{\sigma_z^2}{|H(e^{j2\pi f})|^2}.$$
The capacity of this channel, using Theorem 1, is given by
$$C = \frac12\int_{-W}^{W}\log\left(1 + \frac{\left[\Delta - \sigma_z^2/|H(e^{j2\pi f})|^2\right]^+}{\sigma_z^2/|H(e^{j2\pi f})|^2}\right)df,$$
where $\Delta$ is a constant selected such that
$$\int_{-W}^{W}\left[\Delta - \frac{\sigma_z^2}{|H(e^{j2\pi f})|^2}\right]^+ df = P.$$
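In practice Δ is found by water-filling. A sketch of the procedure (the two-tap channel h = [1, 0.5], unit noise variance, power budget P = 4, and the normalized band f ∈ [−1/2, 1/2] are all arbitrary example choices): bisect on Δ until the power constraint is met, then evaluate the capacity integral on a frequency grid.

    import numpy as np

    h = np.array([1.0, 0.5])                    # example ISI taps
    sigma_z2, P = 1.0, 4.0                      # noise variance and power budget
    f = np.linspace(-0.5, 0.5, 4001)
    H2 = np.abs(np.exp(-2j * np.pi * np.outer(f, np.arange(len(h)))) @ h) ** 2
    N = sigma_z2 / H2                           # effective noise spectrum after channel inversion

    def allocated_power(delta):
        return np.trapz(np.maximum(delta - N, 0.0), f)

    lo, hi = 0.0, N.max() + P + 1.0             # bracket for the water level Delta
    for _ in range(100):                        # bisection on the power constraint
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if allocated_power(mid) < P else (lo, mid)
    delta = 0.5 * (lo + hi)

    C = 0.5 * np.trapz(np.log2(1 + np.maximum(delta - N, 0.0) / N), f)
    print("Delta =", delta, " C =", C, "bits per channel use")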

References
[1] J. G. Proakis, Digital Communications, 4th Edition, John Wiley and Sons, 2000.
[2] S. Haykin, Communication Systems, 4th Edition, John Wiley and Sons, 2000.

[3] A. Goldsmith, Wireless Communications, Cambridge University Press, 2006.
