
Digital Communications

Chapter 2: Deterministic and Random Signal Analysis


Po-Ning Chen, Professor
Institute of Communication Engineering
National Chiao-Tung University, Taiwan
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 1 / 106
2.1 Bandpass and lowpass signal representation
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 2 / 106
2.1 Bandpass and lowpass signal representation

Definition (Bandpass signal)
A bandpass signal x(t) is a real signal whose frequency content is located around a central frequency f_0, i.e.
    X(f) = 0 for all | |f| − f_0 | > W.

[Figure: spectrum X(f), nonzero only for −f_0 − W < f < −f_0 + W and f_0 − W < f < f_0 + W.]

f_0 may not be the carrier frequency f_c!

The spectrum of a bandpass signal is Hermitian symmetric, i.e., X(−f) = X*(f). (Why? Hint: Fourier transform of a real signal.)
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 3 / 106
2.1 Bandpass and lowpass signal representation

Since the spectrum is Hermitian symmetric, we only need to retain half of the spectrum
    X_+(f) = X(f) u_{−1}(f)
(named the analytic signal or pre-envelope) in order to analyze it, where
    u_{−1}(f) = 1 for f > 0,  1/2 for f = 0,  0 for f < 0.
Note: X(f) = X_+(f) + X_+*(−f).

A bandpass signal is very "real", but may contain unnecessary content such as the carrier frequency f_c that has nothing to do with the digital information transmitted. So it is more convenient to remove this carrier frequency and transform x(t) into its lowpass equivalent signal x_l(t) before analyzing the digital content.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 4 / 106
2.1 Bandpass and lowpass signal representation - Baseband and bandpass signals

Definition (Baseband signal)
A lowpass or baseband (equivalent) signal x_l(t) is a complex signal (because its spectrum is not necessarily Hermitian symmetric!) whose spectrum is located around the zero frequency, i.e.
    X_l(f) = 0 for all |f| > W.
It is generally written as
    x_l(t) = x_i(t) + j x_q(t)
where
    x_i(t) is called the in-phase signal
    x_q(t) is called the quadrature signal
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 5 / 106
Baseband signal
Our goal is to relate x_l(t) to x(t) and vice versa.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 6 / 106
From x(t) to its lowpass equivalent x_l(t)
Definition of bandwidth: the bandwidth of a signal is one half of the entire range of frequencies over which the spectrum is (essentially) nonzero. Hence, W is the bandwidth of the lowpass signal we just defined, while 2W is the bandwidth of the bandpass signal by our definition.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 7 / 106
Analytic signal
Let's start from the analytic signal x_+(t):
    x_+(t) = ∫ X_+(f) e^{j2πft} df
           = ∫ X(f) u_{−1}(f) e^{j2πft} df
           = F^{−1}{ X(f) u_{−1}(f) }          (F^{−1} = inverse Fourier transform)
           = F^{−1}{X(f)} ⋆ F^{−1}{u_{−1}(f)}
           = x(t) ⋆ [ (1/2) δ(t) + j/(2πt) ]
           = (1/2) x(t) + (j/2) x̂(t),
where x̂(t) = x(t) ⋆ (1/(πt)) = (1/π) ∫ x(τ)/(t − τ) dτ is a real-valued signal (the Hilbert transform of x(t)).
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 8 / 106
Appendix: Extended Fourier transform
    F^{−1}{ 2 u_{−1}(f) } = F^{−1}{ 1 + sgn(f) } = F^{−1}{1} + F^{−1}{sgn(f)} = δ(t) + j/(πt)
Since ∫ |sgn(f)| df = ∞, the inverse Fourier transform of sgn(f) does not exist in the standard sense! We therefore have to derive its inverse Fourier transform in the extended sense:
    if (∀ f) S(f) = lim_{n→∞} S_n(f) and (∀ n) ∫ |S_n(f)| df < ∞,
    then F^{−1}{S(f)} = lim_{n→∞} F^{−1}{S_n(f)}.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 9 / 106
Appendix: Extended Fourier transform
Since lim_{a↓0} e^{−a|f|} sgn(f) = sgn(f),
    lim_{a↓0} ∫ e^{−a|f|} sgn(f) e^{j2πft} df
      = lim_{a↓0} [ −∫_{−∞}^{0} e^{f(a + j2πt)} df + ∫_{0}^{∞} e^{−f(a − j2πt)} df ]
      = lim_{a↓0} [ −1/(a + j2πt) + 1/(a − j2πt) ]
      = lim_{a↓0} [ j4πt / (a² + 4π²t²) ]
      = 0 for t = 0, and j/(πt) for t ≠ 0.
Hence, F^{−1}{2 u_{−1}(f)} = F^{−1}{1} + F^{−1}{sgn(f)} = δ(t) + j/(πt).
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 10 / 106
From x_+(t) to x_l(t)
[Figure: bandpass spectrum X(f) occupying ±f_0 ∓ W, and the lowpass spectrum X_l(f) occupying −W < f < W.]
We then observe
    X_l(f) = 2 X_+(f + f_0).
This implies
    x_l(t) = F^{−1}{X_l(f)} = F^{−1}{2 X_+(f + f_0)} = 2 x_+(t) e^{−j2πf_0 t} = ( x(t) + j x̂(t) ) e^{−j2πf_0 t}.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 11 / 106
As a result,
    x(t) + j x̂(t) = x_l(t) e^{j2πf_0 t},
which gives
    x(t) = Re{ x(t) + j x̂(t) } = Re{ x_l(t) e^{j2πf_0 t} }.
By x_l(t) = x_i(t) + j x_q(t),
    x(t) = Re{ ( x_i(t) + j x_q(t) ) e^{j2πf_0 t} } = x_i(t) cos(2πf_0 t) − x_q(t) sin(2πf_0 t).
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 12 / 106
From X_l(f) to X(f)
From x(t) = Re{ x_l(t) e^{j2πf_0 t} }, we obtain
    X(f) = ∫ x(t) e^{−j2πft} dt
         = ∫ Re{ x_l(t) e^{j2πf_0 t} } e^{−j2πft} dt
         = ∫ (1/2) [ x_l(t) e^{j2πf_0 t} + ( x_l(t) e^{j2πf_0 t} )* ] e^{−j2πft} dt
         = (1/2) ∫ x_l(t) e^{−j2π(f − f_0)t} dt + (1/2) ∫ x_l*(t) e^{−j2π(f + f_0)t} dt
         = (1/2) [ X_l(f − f_0) + X_l*(−f − f_0) ],
where X_l*(−f) = [ ∫ x_l(t) e^{−j2π(−f)t} dt ]* = ∫ x_l*(t) e^{−j2πft} dt.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 13 / 106
Summary
Terminologies & relations
Bandpass signal x(t):
    x(t) = Re{ x_l(t) e^{j2πf_0 t} },    X(f) = (1/2) [ X_l(f − f_0) + X_l*(−f − f_0) ]
Analytic signal or pre-envelope: x_+(t) and X_+(f)
Lowpass equivalent signal or complex envelope:
    x_l(t) = ( x(t) + j x̂(t) ) e^{−j2πf_0 t},    X_l(f) = 2 X(f + f_0) u_{−1}(f + f_0)
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 14 / 106
Useful to know
Terminologies & relations
From x_l(t) = x_i(t) + j x_q(t) = ( x(t) + j x̂(t) ) e^{−j2πf_0 t},
    x_i(t) = Re{ ( x(t) + j x̂(t) ) e^{−j2πf_0 t} }
    x_q(t) = Im{ ( x(t) + j x̂(t) ) e^{−j2πf_0 t} }
Also, from x(t) + j x̂(t) = x_l(t) e^{j2πf_0 t},
    x(t) = Re{ x_l(t) e^{j2πf_0 t} }
    x̂(t) = Im{ x_l(t) e^{j2πf_0 t} }
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 15 / 106
Useful to know
Terminologies & relations
From x_l(t) = x_i(t) + j x_q(t) = ( x(t) + j x̂(t) ) e^{−j2πf_0 t},
    x_i(t) = Re{ ( x(t) + j x̂(t) ) e^{−j2πf_0 t} }
    x_q(t) = Im{ ( x(t) + j x̂(t) ) e^{−j2πf_0 t} }
Also, from x(t) + j x̂(t) = x_l(t) e^{j2πf_0 t} = ( x_i(t) + j x_q(t) ) e^{j2πf_0 t},
    x(t) = Re{ ( x_i(t) + j x_q(t) ) e^{j2πf_0 t} }
    x̂(t) = Im{ ( x_i(t) + j x_q(t) ) e^{j2πf_0 t} }
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 16 / 106
Useful to know
Terminologies & relations
pre-envelope: x_+(t)
complex envelope: x_l(t)
envelope: r_x(t) = sqrt( x_i²(t) + x_q²(t) ) = | x_l(t) |
phase: θ_x(t) = arctan[ x_q(t) / x_i(t) ]
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 17 / 106
Modulator/demodulator and Hilbert transformer
Usually, we modulate and demodulate with respect to the carrier frequency f_c, which may not be equal to the center frequency f_0.
    x_l(t) → x(t) = Re{ x_l(t) e^{j2πf_c t} }          (modulation)
    x(t) → x_l(t) = ( x(t) + j x̂(t) ) e^{−j2πf_c t}    (demodulation)
The demodulation requires generating x̂(t), the Hilbert transform of x(t).
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 18 / 106
The Hilbert transform is basically a 90-degree phase shifter.
    H(f) = F{ 1/(πt) } = −j sgn(f) = −j for f > 0,  0 for f = 0,  j for f < 0.
Recall that on page 10 we have shown F^{−1}{sgn(f)} = j/(πt) for t ≠ 0; hence
    F{ 1/(πt) } = (1/j) sgn(f) = −j sgn(f).
Tip:
    x_+(t) = (1/2)[ x(t) + j x̂(t) ]  ⇒  X_+(f) = (1/2)[ X(f) + j X̂(f) ] = X(f) for f > 0, 0 for f < 0
    ⇒  X̂(f) = X(f) H(f) = −j X(f) for f > 0, and j X(f) for f < 0, i.e. H(f) = −j for f > 0, j for f < 0.
Example: sin(2πf_c t) = cos(2πf_c t) ⋆ h(t) = cos(2πf_c t − π/2).
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 19 / 106
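The following is a small numerical sketch (not from the slides) of the analytic signal and Hilbert transform using SciPy; the tone frequency fc, sampling rate fs, and tolerances are arbitrary choices for the demonstration.

import numpy as np
from scipy.signal import hilbert

fs = 1000.0                      # sampling rate (Hz), assumed
fc = 50.0                        # tone frequency (Hz), assumed
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * fc * t)

xa = hilbert(x)                  # analytic signal x(t) + j*xhat(t)
xhat = xa.imag                   # numerical Hilbert transform of x(t)

# The Hilbert transform of cos(2*pi*fc*t) is sin(2*pi*fc*t); here the tone fits an
# integer number of periods, so the FFT-based result is essentially exact.
print("max |xhat - sin|:", np.max(np.abs(xhat - np.sin(2 * np.pi * fc * t))))

# Complex envelope w.r.t. fc:  x_l(t) = (x + j*xhat) e^{-j 2 pi fc t}  (equals 1 for a pure cosine)
xl = xa * np.exp(-2j * np.pi * fc * t)
print("x_l ~ 1 everywhere:", np.allclose(xl, 1, atol=1e-6))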
Energy considerations
Definition (Energy of a signal)
The energy E_s of a (complex) signal s(t) is
    E_s = ∫ |s(t)|² dt.
Hence,
    E_x = ∫ |x(t)|² dt,    E_{x_+} = ∫ |x_+(t)|² dt,    E_{x_l} = ∫ |x_l(t)|² dt.
We are interested in the connections among E_x, E_{x_+}, and E_{x_l}.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 20 / 106
From Parseval's theorem we see
    E_x = ∫ |x(t)|² dt = ∫ |X(f)|² df.
In Table 2.0-1:
    Parseval's theorem:  ∫ x(t) y*(t) dt = ∫ X(f) Y*(f) df
    Rayleigh's theorem:  ∫ |x(t)|² dt = ∫ |X(f)|² df
Secondly,
    X(f) = (1/2) X_l(f − f_c)  [ = X_+(f) ]  + (1/2) X_l*(−f − f_c)  [ = X_+*(−f) ].
Thirdly, f_c > W and
    X_l(f − f_c) X_l*(−f − f_c) = 4 X_+(f) X_+*(−f) = 0 for all f.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 21 / 106
It then shows
    E_x = ∫ | (1/2) X_l(f − f_c) + (1/2) X_l*(−f − f_c) |² df = (1/4) E_{x_l} + (1/4) E_{x_l} = (1/2) E_{x_l}
and
    E_x = ∫ | X_+(f) + X_+*(−f) |² df = E_{x_+} + E_{x_+} = 2 E_{x_+}.
Theorem (Energy considerations)
    E_{x_l} = 2 E_x = 4 E_{x_+}
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 22 / 106
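A quick numerical check of this energy relation (my own sketch; the pulse shape, fc and fs are arbitrary assumptions):

import numpy as np
from scipy.signal import hilbert

fs, fc = 2000.0, 200.0
t = np.arange(0, 1, 1 / fs)
g = np.exp(-((t - 0.5) ** 2) / (2 * 0.02 ** 2))      # slowly varying real envelope
x = g * np.cos(2 * np.pi * fc * t)                   # bandpass signal

xp = 0.5 * hilbert(x)                                # pre-envelope x_+(t) = (x + j*xhat)/2
xl = 2 * xp * np.exp(-2j * np.pi * fc * t)           # complex envelope x_l(t)

dt = 1 / fs
Ex  = np.sum(np.abs(x) ** 2) * dt
Exp = np.sum(np.abs(xp) ** 2) * dt
Exl = np.sum(np.abs(xl) ** 2) * dt
print(Exl, 2 * Ex, 4 * Exp)                          # the three values agree up to discretization error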
Extension of energy considerations
Definition (Inner product)
We define the inner product of two (complex) signals x(t) and y(t) as
    ⟨x(t), y(t)⟩ = ∫ x(t) y*(t) dt.
Parseval's relation immediately gives ⟨x(t), y(t)⟩ = ⟨X(f), Y(f)⟩.
    E_x = ⟨x(t), x(t)⟩ = ⟨X(f), X(f)⟩
    E_{x_l} = ⟨x_l(t), x_l(t)⟩ = ⟨X_l(f), X_l(f)⟩
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 23 / 106
We can similarly prove that
    ⟨x(t), y(t)⟩ = ⟨X(f), Y(f)⟩
      = ⟨ (1/2) X_l(f − f_c) + (1/2) X_l*(−f − f_c),  (1/2) Y_l(f − f_c) + (1/2) Y_l*(−f − f_c) ⟩
      = (1/4) ⟨X_l(f − f_c), Y_l(f − f_c)⟩ + (1/4) ⟨X_l(f − f_c), Y_l*(−f − f_c)⟩   [ = 0 ]
        + (1/4) ⟨X_l*(−f − f_c), Y_l(f − f_c)⟩   [ = 0 ]  + (1/4) ⟨X_l*(−f − f_c), Y_l*(−f − f_c)⟩
      = (1/4) ⟨x_l(t), y_l(t)⟩ + (1/4) ⟨x_l(t), y_l(t)⟩*
      = (1/2) Re{ ⟨x_l(t), y_l(t)⟩ }.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 24 / 106
Cross-correlation of two signals
Definition (Cross-correlation)
The cross-correlation of two signals x(t) and y(t) is defined as
    ρ_{x,y} = ⟨x(t), y(t)⟩ / ( sqrt⟨x(t), x(t)⟩ · sqrt⟨y(t), y(t)⟩ ) = ⟨x(t), y(t)⟩ / sqrt(E_x E_y).
Definition (Orthogonality)
Two signals x(t) and y(t) are said to be orthogonal if ρ_{x,y} = 0.
The previous slide then shows ρ_{x,y} = Re{ ρ_{x_l, y_l} }.
    ρ_{x_l, y_l} = 0  ⇒  ρ_{x,y} = 0,   but   ρ_{x,y} = 0  does not imply  ρ_{x_l, y_l} = 0.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 25 / 106
2.1-4 Lowpass equivalent of a bandpass system
Definition (Bandpass system)
A bandpass system is an LTI system with real impulse response h(t) whose transfer function is located around a frequency f_c.
Using a similar concept, we set the lowpass equivalent impulse response such that
    h(t) = Re{ h_l(t) e^{j2πf_c t} }
and
    H(f) = (1/2) [ H_l(f − f_c) + H_l*(−f − f_c) ].
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 26 / 106
Baseband input-output relation
Let x(t) be a bandpass input signal and let
    y(t) = h(t) ⋆ x(t), or equivalently Y(f) = H(f) X(f).
Then, we know
    x(t) = Re{ x_l(t) e^{j2πf_c t} },  h(t) = Re{ h_l(t) e^{j2πf_c t} },  y(t) = Re{ y_l(t) e^{j2πf_c t} }
and
Theorem (Baseband input-output relation)
    y(t) = h(t) ⋆ x(t)   ⟺   y_l(t) = (1/2) h_l(t) ⋆ x_l(t)
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 27 / 106
Proof:
For f ≠ −f_c (or specifically, for u_{−1}(f + f_c) = u_{−1}²(f + f_c); note (1/2) = u_{−1}(0) ≠ u_{−1}²(0) = 1/4),
    Y_l(f) = 2 Y(f + f_c) u_{−1}(f + f_c)
           = 2 H(f + f_c) X(f + f_c) u_{−1}(f + f_c)
           = (1/2) [ 2 H(f + f_c) u_{−1}(f + f_c) ] [ 2 X(f + f_c) u_{−1}(f + f_c) ]
           = (1/2) H_l(f) X_l(f),
and the case f = −f_c is valid if Y_l(−f_c) = X_l(−f_c) = 0.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 28 / 106
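A sketch (my own, with assumed pulse shapes and frequencies) checking the baseband I/O relation y_l(t) = (1/2) h_l(t) ⋆ x_l(t) against direct bandpass filtering; the signals are only approximately band-limited, so agreement is approximate.

import numpy as np
from scipy.signal import hilbert, fftconvolve

fs, fc = 8000.0, 1000.0
t = np.arange(0, 0.2, 1 / fs)
dt = 1 / fs

def lowpass_equivalent(v):
    """Complex envelope w.r.t. fc: (v + j*vhat) e^{-j 2 pi fc t}."""
    return hilbert(v) * np.exp(-2j * np.pi * fc * t)

x = np.exp(-((t - 0.1) ** 2) / (2 * 0.005 ** 2)) * np.cos(2 * np.pi * fc * t)
h = np.exp(-t / 0.01) * np.cos(2 * np.pi * fc * t)           # bandpass impulse response (assumed)

y = fftconvolve(x, h)[: len(t)] * dt                          # direct bandpass convolution
xl, hl = lowpass_equivalent(x), lowpass_equivalent(h)
yl = 0.5 * fftconvolve(xl, hl)[: len(t)] * dt                 # baseband convolution
y_from_baseband = np.real(yl * np.exp(2j * np.pi * fc * t))

rel_err = np.max(np.abs(y - y_from_baseband)) / np.max(np.abs(y))
print("relative error:", rel_err)                             # small (approximation + edge effects)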
The above applies to a deterministic system. How about a stochastic system?
[Block diagram: x(t) → h(t) → y(t), and X(t) → h(t) → Y(t).]
The text abuses the notation by using X(f) as the spectrum of x(t) but using X(t) as the stochastic counterpart of x(t).
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 29 / 106
2.7 Random processes
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 30 / 106
Random process
Definition
A random process is a set of indexed random variables {X(t), t ∈ T}, where T is often called the index set.
Classification
1. If T is a finite set ⇒ random vector
2. If T = Z or Z⁺ ⇒ discrete random process
3. If T = R or R⁺ ⇒ continuous random process
4. If T = R², Z², …, Rⁿ, Zⁿ ⇒ random field
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 31 / 106
Examples of random processes
Example
Let U be a random variable uniformly distributed over [−π, π]. Then
    X(t) = cos(2πf_c t + U)
is a random process.
Example
Let B be a random variable taking values in {−1, +1}. Then
    X(t) = cos(2πf_c t) if B = −1,  sin(2πf_c t) if B = +1
         = cos( 2πf_c t − (π/4)(B + 1) )
is a random process.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 32 / 106
Statistical properties of random processes
For any integer k > 0 and any t_1, t_2, …, t_k ∈ T, the finite-dimensional cumulative distribution function (cdf) for X(t) is
    F_X(t_1, …, t_k; x_1, …, x_k) = Pr{ X(t_1) ≤ x_1, …, X(t_k) ≤ x_k }.
As the event [X(t) ≤ ∞] (resp. [X(t) ≤ −∞]) is always regarded as true (resp. false),
    lim_{x_s → ∞} F_X(t_1, …, t_k; x_1, …, x_k)
      = F_X(t_1, …, t_{s−1}, t_{s+1}, …, t_k; x_1, …, x_{s−1}, x_{s+1}, …, x_k)
and
    lim_{x_s → −∞} F_X(t_1, …, t_k; x_1, …, x_k) = 0.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 33 / 106
Definition
Let X(t) be a random process; then the mean function is
    m_X(t) = E[X(t)],
the (auto)correlation function is
    R_X(t_1, t_2) = E[ X(t_1) X*(t_2) ],
and the (auto)covariance function is
    K_X(t_1, t_2) = E[ ( X(t_1) − m_X(t_1) ) ( X(t_2) − m_X(t_2) )* ].
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 34 / 106
Definition
Let X(t) and Y(t) be two random processes; then the cross-correlation function is
    R_{X,Y}(t_1, t_2) = E[ X(t_1) Y*(t_2) ],
and the cross-covariance function is
    K_{X,Y}(t_1, t_2) = E[ ( X(t_1) − m_X(t_1) ) ( Y(t_2) − m_Y(t_2) )* ].
Proposition
    R_{X,Y}(t_1, t_2) = K_{X,Y}(t_1, t_2) + m_X(t_1) m_Y*(t_2)
    R_{Y,X}(t_2, t_1) = R_{X,Y}*(t_1, t_2),    R_X(t_2, t_1) = R_X*(t_1, t_2)
    K_{Y,X}(t_2, t_1) = K_{X,Y}*(t_1, t_2),    K_X(t_2, t_1) = K_X*(t_1, t_2)
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 35 / 106
Stationary random processes
Definition
A random process X(t) is said to be (strictly) or strict-sense stationary (SSS) if its finite-dimensional joint distribution function is shift-invariant, i.e., for any integer k > 0, any t_1, …, t_k ∈ T and any τ,
    F_X(t_1 + τ, …, t_k + τ; x_1, …, x_k) = F_X(t_1, …, t_k; x_1, …, x_k).
Definition
A random process X(t) is said to be weakly or wide-sense stationary (WSS) if its mean function and (auto)correlation function are shift-invariant, i.e., for any t_1, t_2 ∈ T and any τ,
    m_X(t + τ) = m_X(t)  and  R_X(t_1 + τ, t_2 + τ) = R_X(t_1, t_2).
The above condition is equivalent to
    m_X(t) = constant  and  R_X(t_1, t_2) = R_X(t_1 − t_2).
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 36 / 106
Wide-sense stationary random processes
Definition
Two random processes X(t) and Y(t) are said to be jointly wide-sense stationary if
    both X(t) and Y(t) are WSS;
    the means are constant and R_{X,Y}(t_1, t_2) = R_{X,Y}(t_1 − t_2).
Proposition
For jointly WSS X(t) and Y(t),
    R_{Y,X}(t_2, t_1) = R_{X,Y}*(t_1, t_2)   ⇒   R_{X,Y}(τ) = R_{Y,X}*(−τ)
    K_{Y,X}(t_2, t_1) = K_{X,Y}*(t_1, t_2)   ⇒   K_{X,Y}(τ) = K_{Y,X}*(−τ)
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 37 / 106
Gaussian random process
Definition
A random process {X(t), t ∈ T} is said to be Gaussian if for any integer k > 0 and for any t_1, …, t_k ∈ T, the finite-dimensional joint cdf
    F_X(t_1, …, t_k; x_1, …, x_k) = Pr[ X(t_1) ≤ x_1, …, X(t_k) ≤ x_k ]
is Gaussian.
Remark
The joint cdf of a Gaussian process is fully determined by its mean function and its (auto)covariance function.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 38 / 106
Gaussian random process
Definition
Two real random processes {X(t), t ∈ T_X} and {Y(t), t ∈ T_Y} are said to be jointly Gaussian if for any integers j, k > 0 and for any s_1, …, s_j ∈ T_X and t_1, …, t_k ∈ T_Y, the finite-dimensional joint cdf
    Pr[ X(s_1) ≤ x_1, …, X(s_j) ≤ x_j, Y(t_1) ≤ y_1, …, Y(t_k) ≤ y_k ]
is Gaussian.
Definition
A complex process is Gaussian if its real and imaginary processes are jointly Gaussian.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 39 / 106
Gaussian random process
Remark
For jointly (in general complex) Gaussian processes, uncorrelatedness, defined as
    R_{X,Y}(t_1, t_2) = E[ X(t_1) Y*(t_2) ] = E[X(t_1)] E[Y*(t_2)] = m_X(t_1) m_Y*(t_2),
implies independence, i.e.,
    Pr[ X(s_1) ≤ x_1, …, X(s_j) ≤ x_j, Y(t_1) ≤ y_1, …, Y(t_k) ≤ y_k ]
      = Pr[ X(s_1) ≤ x_1, …, X(s_j) ≤ x_j ] · Pr[ Y(t_1) ≤ y_1, …, Y(t_k) ≤ y_k ].
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 40 / 106
Theorem
If a Gaussian random process X(t) is WSS, then it is SSS.
Proof:
For any k > 0, consider the sampled random vector
    X_k = [ X(t_1), X(t_2), …, X(t_k) ]^T.
The mean vector and covariance matrix of X_k are, respectively,
    m_{X_k} = E[X_k] = [ E[X(t_1)], E[X(t_2)], …, E[X(t_k)] ]^T = m_X(0) · 1
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 41 / 106
and
    K_{X_k} = [ K_X(0)           K_X(t_1 − t_2)   …   K_X(t_1 − t_k)
                K_X(t_2 − t_1)   K_X(0)           …   K_X(t_2 − t_k)
                …                                      K_X(0)        ],
i.e., the (i, j) entry is K_X(t_i − t_j).
It can be shown that for a new sampled random vector
    [ X(t_1 + τ), X(t_2 + τ), …, X(t_k + τ) ]^T
the mean vector and covariance matrix remain the same. Hence, X(t) is SSS.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 42 / 106
Power spectral density
Definition
Let R_X(τ) be the correlation function of a WSS random process X(t). The power spectral density (PSD) or power spectrum of X(t) is defined as
    S_X(f) = ∫ R_X(τ) e^{−j2πfτ} dτ.
Let R_{X,Y}(τ) be the cross-correlation function of two jointly WSS random processes X(t) and Y(t); then the cross spectral density (CSD) is
    S_{X,Y}(f) = ∫ R_{X,Y}(τ) e^{−j2πfτ} dτ.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 43 / 106
Properties of PSD
The PSD (in units of watts per Hz) describes the distribution/density of power as a function of frequency. Analogously, the probability density function (pdf) describes the distribution/density of probability as a function of outcome.
The integration of the PSD gives the power of the random process over the considered range of frequencies. Analogously, the integration of the pdf gives the probability over the considered range of outcomes.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 44 / 106
Theorem
S_X(f) is non-negative and real (which matches the fact that the power of a signal cannot be negative).
Proof: S_X(f) is real because
    S_X(f) = ∫ R_X(τ) e^{−j2πfτ} dτ
           = ∫ R_X(−s) e^{j2πfs} ds        (s = −τ)
           = ∫ R_X*(s) e^{j2πfs} ds
           = [ ∫ R_X(s) e^{−j2πfs} ds ]*
           = S_X*(f).
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 45 / 106
S_X(f) is non-negative because of the following (we only prove this based on the assumption that T < ∞ and X(t) = 0 outside [−T, T]):
    S_X(f) = ∫ E[ X(t + τ) X*(t) ] e^{−j2πfτ} dτ
           = E[ X*(t) ∫ X(t + τ) e^{−j2πfτ} dτ ]        (s = t + τ)
           = E[ X*(t) ∫ X(s) e^{−j2πf(s − t)} ds ]
           = E[ X*(t) X̃(f) e^{j2πft} ],   where X̃(f) = F{X(t)}.
Since the above is a constant independent of t (by WSS),
    S_X(f) = (1/2T) ∫_{−T}^{T} E[ X*(t) X̃(f) e^{j2πft} ] dt
           = (1/2T) E[ X̃(f) ∫_{−T}^{T} X*(t) e^{j2πft} dt ]
           = (1/2T) E[ X̃(f) X̃*(f) ]
           = (1/2T) E[ |X̃(f)|² ] ≥ 0.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 46 / 106
Wiener-Khintchine theorem
Theorem (Wiener-Khintchine)
Let {X(t), t ∈ R} be a WSS random process. Define
    X_T(t) = X(t) if |t| ≤ T, and 0 otherwise,
and set
    X̃_T(f) = ∫ X_T(t) e^{−j2πft} dt = ∫_{−T}^{T} X(t) e^{−j2πft} dt.
If S_X(f) exists (i.e., R_X(τ) has a Fourier transform), then
    S_X(f) = lim_{T→∞} (1/2T) E[ |X̃_T(f)|² ].
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 47 / 106
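A sketch of this theorem in discrete time (my own illustration, assuming an AR(1) process with known PSD): the expectation is approximated by averaging periodograms of independent finite-length realizations.

import numpy as np

rng = np.random.default_rng(0)
a, sigma2 = 0.8, 1.0             # AR(1): X[n] = a X[n-1] + W[n], var(W) = sigma2 (assumed)
N, trials = 4096, 200

psd_est = np.zeros(N)
for _ in range(trials):
    w = rng.normal(scale=np.sqrt(sigma2), size=N)
    x = np.zeros(N)
    for n in range(1, N):
        x[n] = a * x[n - 1] + w[n]
    psd_est += np.abs(np.fft.fft(x)) ** 2 / N      # periodogram of one realization
psd_est /= trials

f = np.fft.fftfreq(N)                               # normalized frequency (cycles/sample)
psd_true = sigma2 / np.abs(1 - a * np.exp(-2j * np.pi * f)) ** 2
print(np.max(np.abs(psd_est - psd_true) / psd_true))   # estimation error; shrinks as trials grows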
Variations of PSD definitions
Power density spectrum, alternative definition: the Fourier transform of the auto-covariance function (e.g., Robert M. Gray and Lee D. Davisson, Random Processes: A Mathematical Approach for Engineers, p. 193).
I remark that, from the viewpoint of digital communications, the text's definition is more appropriate, since the auto-covariance function is unaffected by a mean shift; however, random signals with different means consume different powers.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 48 / 106
What can we say about, e.g., the PSD of stochastic system input and output?
[Block diagrams:
  deterministic: x(t) → h(t) → y(t), with lowpass equivalents x_l(t) → (1/2) h_l(t) → y_l(t),
  where □(t) = Re{ □_l(t) e^{j2πf_c t} } and □_l(t) = ( □(t) + j □̂(t) ) e^{−j2πf_c t}, □ ∈ {x, y, h};
  stochastic: X(t) → h(t) → Y(t), with lowpass equivalents X_l(t) → (1/2) h_l(t) → Y_l(t),
  where □(t) = Re{ □_l(t) e^{j2πf_c t} } and □_l(t) = ( □(t) + j □̂(t) ) e^{−j2πf_c t}, □ ∈ {X, Y, h}.]
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 49 / 106
2.9 Bandpass and lowpass random processes
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 50 / 106
Definition (Bandpass random signal)
A bandpass (WSS) stochastic signal X(t) is a real random process whose PSD is located around the central frequency f_0, i.e.
    S_X(f) = 0 for all | |f| − f_0 | > W.
[Figure: PSD S_X(f), nonzero only for −f_0 − W < f < −f_0 + W and f_0 − W < f < f_0 + W.]
f_0 may not be the carrier frequency f_c!
We know
    X(t) = Re{ X_l(t) e^{j2πf_0 t} }
    X_l(t) = ( X(t) + j X̂(t) ) e^{−j2πf_0 t}
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 51 / 106
Assumption
The bandpass signal X(t) is WSS.
In addition, its complex lowpass equivalent process X_l(t) is WSS. In other words,
    X_i(t) and X_q(t) are WSS;
    X_i(t) and X_q(t) are jointly WSS.
Under this fundamental assumption, we obtain the following properties:
P1) If X(t) is zero-mean, both X_i(t) and X_q(t) are zero-mean, because
    m_X = m_{X_i} cos(2πf_c t) − m_{X_q} sin(2πf_c t).
P2) R_{X_i}(τ) = R_{X_q}(τ)   and   R_{X_i,X_q}(τ) = −R_{X_q,X_i}(τ).
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 52 / 106
Proof of P2):
    R_X(τ) = E[ X(t + τ) X(t) ]
           = E[ Re{ X_l(t + τ) e^{j2πf_c(t + τ)} } Re{ X_l(t) e^{j2πf_c t} } ]
           = E[ ( X_i(t + τ) cos(2πf_c(t + τ)) − X_q(t + τ) sin(2πf_c(t + τ)) )
                ( X_i(t) cos(2πf_c t) − X_q(t) sin(2πf_c t) ) ]
           = [ R_{X_i}(τ) + R_{X_q}(τ) ]/2 · cos(2πf_c τ)
             + [ R_{X_i,X_q}(τ) − R_{X_q,X_i}(τ) ]/2 · sin(2πf_c τ)
             + [ R_{X_i}(τ) − R_{X_q}(τ) ]/2 · cos(2πf_c(2t + τ))          ( must be ≡ 0 )
             − [ R_{X_i,X_q}(τ) + R_{X_q,X_i}(τ) ]/2 · sin(2πf_c(2t + τ))   ( must be ≡ 0 )
Since R_X(τ) cannot depend on t, the last two terms must vanish, which gives P2).
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 53 / 106
P3) R_X(τ) = Re{ (1/2) R_{X_l}(τ) e^{j2πf_c τ} }.
Proof: Observe from P2),
    R_{X_l}(τ) = E[ X_l(t + τ) X_l*(t) ]
               = E[ ( X_i(t + τ) + j X_q(t + τ) ) ( X_i(t) − j X_q(t) ) ]
               = R_{X_i}(τ) + R_{X_q}(τ) − j R_{X_i,X_q}(τ) + j R_{X_q,X_i}(τ)
               = 2 R_{X_i}(τ) + j 2 R_{X_q,X_i}(τ).
Hence, also from P2),
    R_X(τ) = R_{X_i}(τ) cos(2πf_c τ) − R_{X_q,X_i}(τ) sin(2πf_c τ)
           = Re{ (1/2) R_{X_l}(τ) e^{j2πf_c τ} }.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 54 / 106
P4) S_X(f) = (1/4) [ S_{X_l}(f − f_c) + S_{X_l}(−f − f_c) ].
Proof: a direct consequence of P3).
Note (cf. the deterministic case):
    amplitude:        X̃(f) = (1/2) [ X̃_l(f − f_c) + X̃_l*(−f − f_c) ]
    amplitude square: |X̃(f)|² = (1/4) | X̃_l(f − f_c) + X̃_l*(−f − f_c) |²
                               = (1/4) [ |X̃_l(f − f_c)|² + |X̃_l(−f − f_c)|² ]
    Wiener-Khintchine: S_X(f) ∝ E[ |X̃(f)|² ].
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 55 / 106
P5) X_i(t) and X_q(t) are uncorrelated (at the same time instant) if one of them has zero mean.
Proof: From P2),
    R_{X_i,X_q}(τ) = −R_{X_q,X_i}(τ) = −R_{X_i,X_q}(−τ).
Hence R_{X_i,X_q}(0) = 0, i.e.
    E[ X_i(t) X_q(t) ] = 0 = E[X_i(t)] E[X_q(t)].
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 56 / 106
P6) If S_{X_l}(f) = S_{X_l}(−f), then X_i(t + τ) and X_q(t) are uncorrelated for any τ if one of them has zero mean.
Proof: From the proof of P3),
    R_{X_l}(τ) = 2 R_{X_i}(τ) + j 2 R_{X_q,X_i}(τ).
S_{X_l}(f) = S_{X_l}(−f) implies that R_{X_l}(τ) is real; hence R_{X_q,X_i}(τ) = 0 for any τ.
We next discuss the PSD of a system.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 57 / 106
[X(t) → h(t) → Y(t)]
    Y(t) = ∫ h(τ) X(t − τ) dτ,    m_Y = m_X ∫ h(τ) dτ
    R_{X,Y}(τ) = E[ X(t + τ) ( ∫ h(u) X(t − u) du )* ]
               = ∫ h*(u) R_X(τ + u) du = ∫ h*(−v) R_X(τ − v) dv
               = R_X(τ) ⋆ h*(−τ)
    R_Y(τ) = E[ ( ∫ h(u) X(t + τ − u) du ) ( ∫ h(v) X(t − v) dv )* ]
           = ∫ h(u) [ ∫ h*(v) R_X((τ − u) + v) dv ] du
           = ∫ h(u) R_{X,Y}(τ − u) du
           = R_{X,Y}(τ) ⋆ h(τ) = R_X(τ) ⋆ h*(−τ) ⋆ h(τ).
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 58 / 106
Thus,
    S_{X,Y}(f) = S_X(f) H*(f),    since ∫ h*(−τ) e^{−j2πfτ} dτ = H*(f),
and
    S_Y(f) = S_{X,Y}(f) H(f) = S_X(f) |H(f)|².
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 59 / 106
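A numerical sketch of S_Y(f) = S_X(f)|H(f)|² (my own; the FIR taps and noise level are arbitrary assumptions), comparing an estimated output PSD with the theoretical one for white noise through an FIR filter.

import numpy as np
from scipy.signal import lfilter, welch, freqz

rng = np.random.default_rng(1)
N0_over_2 = 1.0                                     # two-sided PSD of the white input (assumed)
taps = np.array([0.3, 0.5, 0.5, 0.3])               # FIR impulse response (assumed)

x = rng.normal(scale=np.sqrt(N0_over_2), size=2**18)     # white noise, fs = 1
y = lfilter(taps, 1.0, x)

f, Sy_est = welch(y, fs=1.0, nperseg=4096, return_onesided=False)
_, H = freqz(taps, worN=f * 2 * np.pi)              # H(f) at the same normalized frequencies
Sy_theory = N0_over_2 * np.abs(H) ** 2

print(np.max(np.abs(Sy_est - Sy_theory)))           # agreement up to averaging/estimation error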
White process
Definition (White process)
A (WSS) process W(t) is called a white process if its PSD is constant for all frequencies:
    S_W(f) = N_0 / 2.
This constant is usually denoted by N_0/2 because the PSD is two-sided. So the power spectral density is actually N_0 per Hz (N_0/2 at f = f_0 and N_0/2 at f = −f_0).
The autocorrelation function is R_W(τ) = (N_0/2) δ(τ), where δ(τ) is the Dirac delta function.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 60 / 106
Why negative frequency?
Some sample answers:
It is just a convenient imaginary construct created by humans to correspond to the imaginary domain of a complex signal (that is why we call it the imaginary part).
By giving the spectrum at f_0 and −f_0 separately (which may not be symmetric), we can tell the amount of real part and imaginary part in the time domain corresponding to this frequency. For example, if the spectrum is conjugate symmetric, we know the imaginary part is 0.
Notably, in communications, the imaginary part is the part that will be modulated by (or transmitted with carrier) −sin(2πf_c t); on the contrary, the real part is the part that will be modulated by (or transmitted with carrier) cos(2πf_c t).
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 61 / 106
Why the δ(·) function?
Definition (Dirac delta function)
Define the Dirac delta function δ(t) as
    δ(t) = ∞ for t = 0, and 0 for t ≠ 0,
which satisfies the replication property, i.e., for every continuous point of g(t),
    g(t) = ∫ g(τ) δ(t − τ) dτ.
Hence, by the replication property,
    ∫ δ(t) dt = ∫ δ(t − τ) dτ = 1.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 62 / 106
Note that it seems δ(t) = 2δ(t) = { ∞, t = 0; 0, t ≠ 0 }; but with g_1(t) = 1 and g_2(t) = 2 continuous at all points,
    1 = ∫ g_1(t) δ(t) dt ≠ ∫ g_2(t) δ(t) dt = 2.
So mathematicians do not like this "function", as it contradicts the intuition that
    f(t) = g(t) for t ∈ R except for countably many points  ⇒  ∫ f(t) dt = ∫ g(t) dt   (if ∫ f(t) dt is finite).
Hence, δ(t) and 2δ(t) are two different Dirac delta functions by definition. (The multiplicative constant cannot be omitted!) Very artificial indeed.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 63 / 106
Comment: x + a = y + a ⇒ x = y is incorrect if a = ∞. As a result, saying ∞ = ∞ (or δ(t) = 2δ(t)) is not a rigorously defined statement.
Summary: the Dirac delta function, like ∞, is simply a concept defined only through its replication property.
Hence, a white process W(t) that has autocorrelation function R_W(τ) = (N_0/2) δ(τ) is just a convenient and simplified notion for theoretical research about a real-world phenomenon. Usually, N_0 = kT, where T is the ambient temperature in kelvins and k is Boltzmann's constant.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 64 / 106
Discrete-time random processes
The properties of a discrete-time process {X[n], n ∈ Z⁺} can be obtained using the sampling notion via the Dirac delta function.
X[n] = X(nT), a sample at t = nT from a continuous-time process X(t), where we assume T = 1 for convenience.
The autocorrelation function of a discrete-time process is given by
    R_X[m] = E{ X[n + m] X[n] } = E{ X(n + m) X(n) } = R_X(m),
a sample from R_X(τ).
[Figure: stem plot of the samples R_X(0), R_X(1), …, R_X(13) taken from the continuous autocorrelation function.]
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 65 / 106
    S_X[f] = ∫ [ Σ_{n=−∞}^{∞} R_X(t) δ(t − n) ] e^{−j2πft} dt
           = Σ_{n=−∞}^{∞} ∫ R_X(t) e^{−j2πft} δ(t − n) dt
           = Σ_{n=−∞}^{∞} R_X(n) e^{−j2πfn}        (replication property)
           = Σ_{n=−∞}^{∞} R_X[n] e^{−j2πfn}        (Fourier series)
Hence, by Fourier series,
    R_X[n] = ∫_{−1/2}^{1/2} S_X[f] e^{j2πfn} df        ( cf.  R_X(τ) = ∫ S_X(f) e^{j2πfτ} df ).
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 66 / 106
2.8 Series expansion of random processes
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 67 / 106
2.8-1 Sampling a band-limited random process
Deterministic case
A deterministic signal x(t) is called band-limited if X(f) = 0 for all |f| > W.
Shannon-Nyquist theorem: x(t) can be perfectly reconstructed if the sampling rate is f_s ≥ 2W, and
    x(t) = Σ_{n=−∞}^{∞} x( n/(2W) ) sinc( 2W ( t − n/(2W) ) ).
Note that the above is only sufficient, not necessary.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 68 / 106
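A small sketch (my own) of this sinc reconstruction formula, applied to a deterministic test signal whose content lies strictly inside |f| < W; the infinite series is truncated, so the error is limited by truncation.

import numpy as np

W = 10.0                                    # bandwidth (assumed): X(f) = 0 for |f| > W
def x(t):                                   # test signal with tones at 3 Hz and 7 Hz
    return np.cos(2 * np.pi * 3.0 * t) + 0.5 * np.sin(2 * np.pi * 7.0 * t)

n = np.arange(-2000, 2001)                  # truncate the (infinite) sample series
samples = x(n / (2 * W))

t = np.linspace(-1, 1, 501)                 # reconstruct well inside the truncation window
# x(t) = sum_n x(n/2W) sinc(2W (t - n/2W));  np.sinc(u) = sin(pi u)/(pi u)
x_rec = samples @ np.sinc(2 * W * (t[None, :] - n[:, None] / (2 * W)))
print(np.max(np.abs(x_rec - x(t))))         # small, limited by truncating the series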
Stochastic case
A WSS stochastic process X(t) is said to be band-limited if its PSD satisfies S_X(f) = 0 for all |f| > W.
It follows that
    R_X(τ) = Σ_{n=−∞}^{∞} R_X( n/(2W) ) sinc( 2W ( τ − n/(2W) ) ).
In fact, the random process X(t) can be reconstructed from its (random) samples in the mean-square sense.
Theorem
    E| X(t) − Σ_{n=−∞}^{∞} X( n/(2W) ) sinc( 2W ( t − n/(2W) ) ) |² = 0
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 69 / 106
The random samples
Problems of using these random samples:
These random samples { X(n/(2W)) }_{n=−∞}^{∞} are in general correlated unless X(t) is zero-mean white:
    E[ X(n/(2W)) X*(m/(2W)) ] = R_X( (n − m)/(2W) )
      ≠ E[ X(n/(2W)) ] E[ X*(m/(2W)) ] = |m_X|².
If X(t) is zero-mean white,
    E[ X(n/(2W)) X*(m/(2W)) ] = R_X( (n − m)/(2W) ) = (N_0/2) δ( (n − m)/(2W) )
      = E[ X(n/(2W)) ] E[ X*(m/(2W)) ] = |m_X|² = 0    except for n = m.
Thus, we will introduce the uncorrelated KL expansion in slide 87.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 70 / 106
2.9 Bandpass and lowpass random processes (revisited)
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 71 / 106
Definition (Filtered white noise)
A process N(t) is called a filtered white noise if its PSD equals
    S_N(f) = N_0/2 for | |f| − f_c | ≤ W, and 0 otherwise.
Applying P4), S_X(f) = (1/4)[ S_{X_l}(f − f_c) + S_{X_l}(−f − f_c) ], we learn that the PSD of the lowpass equivalent process N_l(t) of N(t) is
    S_{N_l}(f) = 2 N_0 for |f| ≤ W, and 0 otherwise.
From P6), S_{N_l}(f) = S_{N_l}(−f) implies that N_i(t + τ) and N_q(t) are uncorrelated for any τ if one of them has zero mean.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 72 / 106
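An empirical sketch of this fact (my own, with assumed sampling rate, carrier, bandwidth, and filter length): bandpass-filtered white noise has a lowpass-equivalent PSD symmetric about zero, so the measured correlation between I and Q comes out near zero, both at the same instant and at a nonzero lag.

import numpy as np
from scipy.signal import hilbert, firwin, lfilter

rng = np.random.default_rng(3)
fs, fc, W = 8000.0, 1000.0, 200.0
t = np.arange(2 ** 18) / fs

w = rng.normal(size=t.size)
taps = firwin(401, [fc - W, fc + W], fs=fs, pass_zero=False)   # bandpass filter around fc
x = lfilter(taps, 1.0, w)                                      # filtered white noise

xl = hilbert(x) * np.exp(-2j * np.pi * fc * t)                 # complex envelope
xi, xq = xl.real, xl.imag
print("corr(I, Q) at lag 0 :", round(np.corrcoef(xi, xq)[0, 1], 4))          # ~ 0
print("corr(I, Q) at lag 80:", round(np.corrcoef(xi[:-80], xq[80:])[0, 1], 4))  # ~ 0 (estimation noise)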
Now we explore more properties of the PSDs of the bandlimited X(t) and the complex X_l(t).
P0-1) By the fundamental assumption on slide 52, X(t) and X̂(t) are jointly WSS.
    R_{X,X̂}(τ) and R_{X̂}(τ) are functions of τ only, because X̂(t) is the Hilbert transform of X(t), i.e.,
        R_{X,X̂}(τ) = R_X(τ) ⋆ h*(−τ) = −R_X(τ) ⋆ h(τ)    (since h*(−τ) = −h(τ))
    and R_{X̂}(τ) = R_{X,X̂}(τ) ⋆ h(τ).
P0-2) X_i(t) = Re{ ( X(t) + j X̂(t) ) e^{−j2πf_c t} } is WSS by the fundamental assumption.
P2')  R_X(τ) = R_{X̂}(τ)   and   R_{X,X̂}(τ) = −R_{X̂,X}(τ).
    (X(t) + j X̂(t) is the lowpass equivalent signal of X_i(t)! — in analogy with: X_i(t) + j X_q(t) is the lowpass equivalent signal of X(t)!)
    Also, R_{X̂,X}(τ) = R̂_X(τ), where R̂_X(τ) is the Hilbert transform output due to input R_X(τ).
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 73 / 106
P3') R_{X_i}(τ) = Re{ (1/2) R_{(X + jX̂)}(τ) e^{−j2πf_c τ} }:
    R_{X_i}(τ) = Re{ (1/2) R_{(X + jX̂)}(τ) e^{−j2πf_c τ} }
               = Re{ ( R_X(τ) + j R_{X̂,X}(τ) ) e^{−j2πf_c τ} }
               = R_X(τ) cos(2πf_c τ) + R̂_X(τ) sin(2πf_c τ).
Note that Ŝ_X(f) = S_X(f) H_Hilbert(f) = S_X(f) ( −j sgn(f) ).
P4') S_{X_i}(f) = S_X(f − f_c) + S_X(f + f_c)   ( = S_{X_q}(f) )   for |f| ≤ f_c:
    S_{X_i}(f) = (1/2) ( S_X(f − f_c) + S_X(f + f_c) )
                 + (1/2) ( −sgn(f − f_c) S_X(f − f_c) + sgn(f + f_c) S_X(f + f_c) )
               = S_X(f − f_c) + S_X(f + f_c)    for |f| ≤ f_c.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 74 / 106
P4'') S_{X_q,X_i}(f) = j [ S_X(f − f_c) − S_X(f + f_c) ]    for |f| ≤ f_c.
Terminologies & relations
    R_X(τ) = Re{ (1/2) R_{X_l}(τ) e^{j2πf_c τ} }                                   (P3)
    R_{X̂,X}(τ) = R_X(τ) ⋆ h_Hilbert(τ)   [P0-1]   = Im{ (1/2) R_{X_l}(τ) e^{j2πf_c τ} }
Then:
    (1/2) R_{X_l}(τ) = R_{X_i}(τ) + j R_{X_q,X_i}(τ)   [proof of P3]   = ( R_X(τ) + j R_{X̂,X}(τ) ) e^{−j2πf_c τ}
    R_{X_i}(τ) = Re{ ( R_X(τ) + j R_{X̂,X}(τ) ) e^{−j2πf_c τ} }                    (P3')
    R_{X_q,X_i}(τ) = Im{ ( R_X(τ) + j R_{X̂,X}(τ) ) e^{−j2πf_c τ} }
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 75 / 106
Proof: Hence,
    R_{X_q,X_i}(τ) = Im{ ( R_X(τ) + j R_{X̂,X}(τ) ) e^{−j2πf_c τ} }
                   = −R_X(τ) sin(2πf_c τ) + R_{X̂,X}(τ) cos(2πf_c τ)
                   = −R_X(τ) sin(2πf_c τ) + R̂_X(τ) cos(2πf_c τ).
The property P4'') can then be proved similarly to P4').
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 76 / 106
2.2 Signal space representation
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 77 / 106
Key idea & motivation
The lowpass equivalent representation removes the dependence of system performance analysis on the carrier frequency.
Equivalent vectorization of the (discrete or continuous) signals further removes the waveform redundancy in the analysis of system performance.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 78 / 106
Vector space concepts
Inner product: ⟨v_1, v_2⟩ = Σ_{i=1}^{n} v_{1,i} v_{2,i}* = v_2^H v_1    (H denotes Hermitian transpose)
Orthogonal if ⟨v_1, v_2⟩ = 0
Norm: ‖v‖ = sqrt⟨v, v⟩
Orthonormal: ⟨v_1, v_2⟩ = 0 and ‖v_1‖ = ‖v_2‖ = 1
Linearly independent: Σ_{i=1}^{k} a_i v_i = 0  iff  a_i = 0 for all i
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 79 / 106
Vector space concepts
Triangle inequality: ‖v_1 + v_2‖ ≤ ‖v_1‖ + ‖v_2‖
Cauchy-Schwarz inequality: |⟨v_1, v_2⟩| ≤ ‖v_1‖ ‖v_2‖, with equality iff v_1 = a v_2 for some a
Norm square of sum: ‖v_1 + v_2‖² = ‖v_1‖² + ‖v_2‖² + ⟨v_1, v_2⟩ + ⟨v_2, v_1⟩
Pythagorean: if ⟨v_1, v_2⟩ = 0, then ‖v_1 + v_2‖² = ‖v_1‖² + ‖v_2‖²
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 80 / 106
Eigen-decomposition
1. Matrix transformation w.r.t. matrix A: v′ = A v
2. Eigenvalues of a square matrix A are the solutions {λ} of the characteristic polynomial det(A − λI) = 0
3. The eigenvector for eigenvalue λ is the solution v of A v = λ v
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 81 / 106
Signal space concept
How do we extend the signal space concept to a (complex) function/signal z(t) defined over [0, T)?
Answer: we can start by defining the inner product for complex functions.
Inner product: ⟨z_1(t), z_2(t)⟩ = ∫_0^T z_1(t) z_2*(t) dt
Orthogonal if ⟨z_1(t), z_2(t)⟩ = 0
Norm: ‖z(t)‖ = sqrt⟨z(t), z(t)⟩
Orthonormal: ⟨z_1(t), z_2(t)⟩ = 0 and ‖z_1(t)‖ = ‖z_2(t)‖ = 1
Linearly independent: Σ_{i=1}^{k} a_i z_i(t) = 0  iff  a_i = 0 for all a_i ∈ C
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 82 / 106
Triangle inequality: ‖z_1(t) + z_2(t)‖ ≤ ‖z_1(t)‖ + ‖z_2(t)‖
Cauchy-Schwarz inequality: |⟨z_1(t), z_2(t)⟩| ≤ ‖z_1(t)‖ ‖z_2(t)‖, with equality iff z_1(t) = a z_2(t) for some a ∈ C
Norm square of sum: ‖z_1(t) + z_2(t)‖² = ‖z_1(t)‖² + ‖z_2(t)‖² + ⟨z_1(t), z_2(t)⟩ + ⟨z_2(t), z_1(t)⟩
Pythagorean property: if ⟨z_1(t), z_2(t)⟩ = 0, then ‖z_1(t) + z_2(t)‖² = ‖z_1(t)‖² + ‖z_2(t)‖²
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 83 / 106
Transformation w.r.t. a function C(t, s):
    z′(t) = ∫_0^T C(t, s) z(s) ds
This is in parallel to
    v → v′,  where v′_t = Σ_{s=1}^{n} A_{t,s} v_s,  i.e.  v′ = A v.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 84 / 106
Eigenvalues and eigenfunctions
Given a complex continuous function C(t, s) over [0, T)², the eigenvalues and eigenfunctions are the {λ_k} and {φ_k(t)} such that
    ∫_0^T C(t, s) φ_k(s) ds = λ_k φ_k(t)        (in parallel to A v = λ v).
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 85 / 106
Mercer's theorem
Theorem (Mercer's theorem)
Given a complex continuous function C(t, s) over [0, T]² that is Hermitian symmetric (i.e., C(t, s) = C*(s, t)) and nonnegative definite (i.e., Σ_i Σ_j a_i C(t_i, t_j) a_j* ≥ 0 for any {a_i} and {t_i}), the eigenvalues {λ_k} are real, and C(t, s) has the eigen-decomposition
    C(t, s) = Σ_{k=1}^{∞} λ_k φ_k(t) φ_k*(s).
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 86 / 106
Karhunen-Loève theorem
Theorem (Karhunen-Loève theorem)
Let {Z(t), t ∈ [0, T)} be a zero-mean random process with a continuous autocorrelation function R_Z(t, s) = E[Z(t) Z*(s)]. Then Z(t) can be written as
    Z(t)  =(m.s.)  Σ_{k=1}^{∞} Z_k φ_k(t),    0 ≤ t ≤ T,
where the equality is in the mean-square sense,
    Z_k = ⟨Z(t), φ_k(t)⟩ = ∫_0^T Z(t) φ_k*(t) dt,
and {φ_k(t)} are the orthonormal eigenfunctions of R_Z(t, s).
Merit of the KL expansion: the {Z_k} are uncorrelated. (But the samples {Z(k/(2W))} are not uncorrelated even if Z(t) is bandlimited!)
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 87 / 106
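A discretized sketch of the KL idea (my own construction, with an assumed kernel R_Z(t, s) = exp(−|t − s|)): on a time grid, the eigenvectors of the covariance matrix play the role of the eigenfunctions φ_k(t), and the projection coefficients Z_k come out (empirically) uncorrelated with variances equal to the eigenvalues.

import numpy as np

rng = np.random.default_rng(2)
T, N = 1.0, 200
t = np.linspace(0, T, N)
dt = t[1] - t[0]

R = np.exp(-np.abs(t[:, None] - t[None, :]))        # autocorrelation kernel R_Z(t, s) (assumed)
lam, V = np.linalg.eigh(R * dt)                     # discretized integral operator
order = np.argsort(lam)[::-1]
lam, Phi = lam[order], V[:, order] / np.sqrt(dt)    # columns ~ orthonormal eigenfunctions

# Draw zero-mean Gaussian sample paths with covariance R and project onto phi_k.
Z = rng.multivariate_normal(np.zeros(N), R, size=5000)    # rows are sample paths
coeff = Z @ Phi[:, :5] * dt                               # Z_k = <Z(t), phi_k(t)>

print(np.round(np.cov(coeff.T), 3))    # nearly diag(lambda_1..lambda_5): uncorrelated coefficients
print(np.round(lam[:5], 3))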
Proof.
    E[ Z_i Z_j* ] = E[ ( ∫_0^T Z(t) φ_i*(t) dt ) ( ∫_0^T Z(s) φ_j*(s) ds )* ]
                 = ∫_0^T [ ∫_0^T R_Z(t, s) φ_j(s) ds ] φ_i*(t) dt
                 = ∫_0^T λ_j φ_j(t) φ_i*(t) dt
                 = λ_j if i = j,  and 0 ( = E[Z_i] E[Z_j*] ) if i ≠ j.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 88 / 106
Lemma
For a given orthonormal set {φ_k(t)}, how do we minimize the energy of the error signal e(t) = s(t) − ŝ(t) for ŝ(t) spanned by (i.e., expressed as a linear combination of) {φ_k(t)}?
Assume ŝ(t) = Σ_k a_k φ_k(t); then
    ‖e(t)‖² = ‖s(t) − ŝ(t)‖²
            = ‖ s(t) − Σ_{k≠i} a_k φ_k(t) − a_i φ_i(t) ‖²
            = ‖ s(t) − Σ_{k≠i} a_k φ_k(t) ‖² + ‖ a_i φ_i(t) ‖²
              − ⟨ s(t) − Σ_{k≠i} a_k φ_k(t), a_i φ_i(t) ⟩ − ⟨ a_i φ_i(t), s(t) − Σ_{k≠i} a_k φ_k(t) ⟩
            = ‖ s(t) − Σ_{k≠i} a_k φ_k(t) ‖² + |a_i|² − a_i* ⟨s(t), φ_i(t)⟩ − a_i ⟨φ_i(t), s(t)⟩.
By taking derivatives w.r.t. Re{a_i} and Im{a_i}, we obtain
    a_i = ⟨s(t), φ_i(t)⟩.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 89 / 106
Definition
If every finite-energy signal s(t) (i.e., ‖s(t)‖² < ∞) satisfies
    ‖e(t)‖² = ‖ s(t) − Σ_k ⟨s(t), φ_k(t)⟩ φ_k(t) ‖² = 0
(equivalently,
    s(t)  =(L2)  Σ_k ⟨s(t), φ_k(t)⟩ φ_k(t) = Σ_k a_k φ_k(t),
in the sense that the norm of the difference between the left-hand side and the right-hand side is zero), then the set of orthonormal functions {φ_k(t)} is said to be complete.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 90 / 106
Example (Fourier series)
    { sqrt(2/T) cos(2πkt/T),  sqrt(2/T) sin(2πkt/T) :  0 ≤ k ∈ Z }
is a complete orthonormal set for signals defined over [0, T) with a finite number of discontinuities.
For a complete orthonormal basis, the energy of s(t) is equal to
    ‖s(t)‖² = ⟨ Σ_j a_j φ_j(t), Σ_k a_k φ_k(t) ⟩
            = Σ_j Σ_k a_j a_k* ⟨φ_j(t), φ_k(t)⟩
            = Σ_j a_j a_j* = Σ_j |a_j|².
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 91 / 106
Given a deterministic function s(t) and a complete orthonormal basis {φ_k(t)} (possibly countably infinite), s(t) can be written as
    s(t)  =(L2)  Σ_{k=0}^{∞} a_k φ_k(t),    0 ≤ t ≤ T,
where
    a_k = ⟨s(t), φ_k(t)⟩ = ∫_0^T s(t) φ_k*(t) dt.
In addition,
    ‖s(t)‖² = Σ_k |a_k|².
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 92 / 106
Remark
In terms of energy (and error rate):
A bandpass signal s(t) can be equivalently analyzed through the lowpass equivalent signal s_l(t) without the burden of the carrier frequency f_c;
A lowpass equivalent signal s_l(t) can be equivalently analyzed through the (countably many) coefficients {a_k = ⟨s_l(t), φ_k(t)⟩} without the burden of continuous waveforms.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 93 / 106
Gram-Schmidt procedure
Given a set of functions v_1(t), v_2(t), …, v_k(t):
1. φ_1(t) = v_1(t) / ‖v_1(t)‖
2. Compute, for i = 2, 3, …, k (discarding any i for which γ_i(t) = 0),
       γ_i(t) = v_i(t) − Σ_{j=1}^{i−1} ⟨v_i(t), φ_j(t)⟩ φ_j(t)
   and set φ_i(t) = γ_i(t) / ‖γ_i(t)‖.
This gives an orthonormal basis φ_1(t), φ_2(t), …, φ_{k′}(t), where k′ ≤ k.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 94 / 106
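A sketch of this procedure on sampled waveforms (the inner product ⟨v_1, v_2⟩ = ∫ v_1 v_2* dt is approximated by a Riemann sum); the piecewise-constant test signals below are my reading of the figure on slide 95 and should be treated as assumptions.

import numpy as np

dt = 1e-3
t = np.arange(0, 3, dt)

def inner(a, b):
    return np.sum(a * np.conj(b)) * dt

def gram_schmidt(signals, tol=1e-9):
    basis = []
    for v in signals:
        gamma = v.astype(complex).copy()
        for phi in basis:
            gamma -= inner(v, phi) * phi          # remove the part already spanned
        norm = np.sqrt(inner(gamma, gamma).real)
        if norm > tol:                            # keep only nonzero residuals
            basis.append(gamma / norm)
    return basis

step = lambda a, b: ((t >= a) & (t < b)).astype(float)
v1 = step(0, 2)
v2 = step(0, 1) - step(1, 2)
v3 = step(0, 2) - step(2, 3)
v4 = -step(0, 3)

basis = gram_schmidt([v1, v2, v3, v4])
print("basis size:", len(basis))                                          # 3 (v4 is dependent)
print("v4 coefficients:", [round(inner(v4, p).real, 3) for p in basis])   # ~ (-sqrt(2), 0, 1)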
Example
Find a Gram-Schmidt orthonormal basis for the following signals.
[Figure: four waveforms v_1(t), …, v_4(t), piecewise constant on [0, 3].]
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 95 / 106
Sol.
    φ_1(t) = v_1(t) / ‖v_1(t)‖ = v_1(t) / sqrt(2)
    γ_2(t) = v_2(t) − ⟨v_2(t), φ_1(t)⟩ φ_1(t) = v_2(t) − [ ∫_0^3 v_2(t) φ_1*(t) dt ] φ_1(t) = v_2(t)
    Hence φ_2(t) = γ_2(t) / ‖γ_2(t)‖ = v_2(t) / sqrt(2).
    γ_3(t) = v_3(t) − ⟨v_3(t), φ_1(t)⟩ φ_1(t) − ⟨v_3(t), φ_2(t)⟩ φ_2(t)
           = v_3(t) − sqrt(2) φ_1(t) − 0 · φ_2(t) = −1 for 2 ≤ t ≤ 3, and 0 otherwise.
    Hence φ_3(t) = γ_3(t) / ‖γ_3(t)‖.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 96 / 106
    γ_4(t) = v_4(t) − ⟨v_4(t), φ_1(t)⟩ φ_1(t) − ⟨v_4(t), φ_2(t)⟩ φ_2(t) − ⟨v_4(t), φ_3(t)⟩ φ_3(t)
           = v_4(t) − (−sqrt(2)) φ_1(t) − (0) φ_2(t) − φ_3(t) = 0.
Orthonormal basis = { φ_1(t), φ_2(t), φ_3(t) }, where 3 ≤ 4.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 97 / 106
Example
Represent the signals in slide 95 in terms of the orthonormal basis obtained in the same example.
Sol.
    v_1(t) = sqrt(2) φ_1(t) + 0 φ_2(t) + 0 φ_3(t)   ⟷   ( sqrt(2), 0, 0 )
    v_2(t) = 0 φ_1(t) + sqrt(2) φ_2(t) + 0 φ_3(t)   ⟷   ( 0, sqrt(2), 0 )
    v_3(t) = sqrt(2) φ_1(t) + 0 φ_2(t) + 1 φ_3(t)   ⟷   ( sqrt(2), 0, 1 )
    v_4(t) = −sqrt(2) φ_1(t) + 0 φ_2(t) + 1 φ_3(t)  ⟷   ( −sqrt(2), 0, 1 )
The vectors are named the signal space representations or constellations of the signals.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 98 / 106
Remark
The orthonormal basis is not unique. For example, for k = 1, 2, 3, re-define
    φ_k(t) = 1 for k − 1 ≤ t ≤ k, and 0 otherwise.
Then
    v_1(t) ⟷ (+1, +1, 0)
    v_2(t) ⟷ (+1, −1, 0)
    v_3(t) ⟷ (+1, +1, −1)
    v_4(t) ⟷ (−1, −1, −1)
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 99 / 106
Euclidean distance
    s_1(t) ⟷ (a_1, a_2, …, a_n) for some complete basis
    s_2(t) ⟷ (b_1, b_2, …, b_n) for the same complete basis
    d_{12} = Euclidean distance between s_1(t) and s_2(t)
           = sqrt( Σ_{i=1}^{n} (a_i − b_i)² )
           = ‖ s_1(t) − s_2(t) ‖    ( = sqrt( ∫_0^T | s_1(t) − s_2(t) |² dt ) )
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 100 / 106
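A quick check (using the constellation values reconstructed above, which rest on my reading of the slide-95 figure): the Euclidean distance between the vectors equals the L2 distance between the waveforms.

import numpy as np

dt = 1e-3
t = np.arange(0, 3, dt)
step = lambda a, b: ((t >= a) & (t < b)).astype(float)

v3 = step(0, 2) - step(2, 3)          # waveform with constellation ( sqrt(2), 0, 1 )
v4 = -step(0, 3)                      # waveform with constellation ( -sqrt(2), 0, 1 )

d_waveform = np.sqrt(np.sum((v3 - v4) ** 2) * dt)
d_vector = np.linalg.norm(np.array([np.sqrt(2), 0, 1]) - np.array([-np.sqrt(2), 0, 1]))
print(round(d_waveform, 4), round(d_vector, 4))    # both equal 2*sqrt(2) ~ 2.8284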
Bandpass and lowpass orthonormal basis
Now let's change our focus from [0, T) to (−∞, ∞).
A time-limited signal cannot be bandlimited to W.
A bandlimited signal cannot be time-limited to T.
Hence, in order to talk about ideal bandlimited signals, we have to deal with signals with unlimited time span.
Re-define the inner product as
    ⟨f(t), g(t)⟩ = ∫_{−∞}^{∞} f(t) g*(t) dt.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 101 / 106
Let s_{1,l}(t) and s_{2,l}(t) be the lowpass equivalent signals of the bandpass signals s_1(t) and s_2(t):
    S_{1,l}(f) = S_{2,l}(f) = 0 for |f| > f_B
    s_i(t) = Re{ s_{i,l}(t) e^{j2πf_c t} },  where f_c ≥ f_B.
Then, as we have proved in slide 24,
    ⟨s_1(t), s_2(t)⟩ = (1/2) Re{ ⟨s_{1,l}(t), s_{2,l}(t)⟩ }.
Proposition
If ⟨s_{1,l}(t), s_{2,l}(t)⟩ = 0, then ⟨s_1(t), s_2(t)⟩ = 0.
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 102 / 106
Proposition
If {φ_{n,l}(t)} is a complete basis for the set of lowpass signals, then
    φ_n(t) = Re{ ( sqrt(2) φ_{n,l}(t) ) e^{j2πf_c t} }
    φ̃_n(t) = −Im{ ( sqrt(2) φ_{n,l}(t) ) e^{j2πf_c t} } = Re{ ( j sqrt(2) φ_{n,l}(t) ) e^{j2πf_c t} }
is a complete orthonormal set for the set of bandpass signals.
Proof: First, orthonormality can be proved by
    ⟨φ_n(t), φ_m(t)⟩ = (1/2) Re{ ⟨ sqrt(2) φ_{n,l}(t), sqrt(2) φ_{m,l}(t) ⟩ } = 1 if n = m, 0 if n ≠ m
    ⟨φ̃_n(t), φ̃_m(t)⟩ = (1/2) Re{ ⟨ j sqrt(2) φ_{n,l}(t), j sqrt(2) φ_{m,l}(t) ⟩ } = 1 if n = m, 0 if n ≠ m
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 103 / 106
and
    ⟨φ_n(t), φ̃_m(t)⟩ = (1/2) Re{ ⟨ sqrt(2) φ_{n,l}(t), j sqrt(2) φ_{m,l}(t) ⟩ }
                      = Re{ −j ⟨φ_{n,l}(t), φ_{m,l}(t)⟩ }
                      = Re{−j} = 0 if n = m, and 0 if n ≠ m.
Now, with
    s(t) = Re{ s_l(t) e^{j2πf_c t} },   ŝ(t) = Re{ ŝ_l(t) e^{j2πf_c t} },
    ŝ_l(t)  =(L2)  Σ_n a_{n,l} φ_{n,l}(t)   with   a_{n,l} = ⟨s_l(t), φ_{n,l}(t)⟩,
    ‖ s_l(t) − ŝ_l(t) ‖² = 0,
we have
    ‖ s(t) − ŝ(t) ‖² = (1/2) ‖ s_l(t) − ŝ_l(t) ‖² = 0
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 104 / 106
and
    ŝ(t) = Re{ ŝ_l(t) e^{j2πf_c t} }
         = Re{ Σ_n a_{n,l} φ_{n,l}(t) e^{j2πf_c t} }
         = Σ_n [ Re{ a_{n,l}/sqrt(2) } Re{ sqrt(2) φ_{n,l}(t) e^{j2πf_c t} }
                 − Im{ a_{n,l}/sqrt(2) } Im{ sqrt(2) φ_{n,l}(t) e^{j2πf_c t} } ]
         = Σ_n [ Re{ a_{n,l}/sqrt(2) } φ_n(t) + Im{ a_{n,l}/sqrt(2) } φ̃_n(t) ].
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 105 / 106
What you learn from Chapter 2
Random processes
    WSS
    autocorrelation and cross-correlation functions
    PSD and CSD
    white and filtered white processes
Relation between (bandlimited) bandpass and lowpass equivalent deterministic signals
Relation between (bandlimited) bandpass and lowpass equivalent random signals
    properties of the autocorrelation function and power spectral density
    role of the Hilbert transform
Signal space concept
Digital Communications: Chapter 2 Ver. 2010.11.09 Po-Ning Chen 106 / 106