Jan Cernocky, UPGM FIT VUT Brno, cernocky@fit.vutbr.cz
The estimate

R[k] = \frac{1}{N} \sum_{n=0}^{N-1-|k|} x[n]\, x[n+|k|],

where N is the number of samples we have at our disposal, is called the biased estimate.
As the signals are shifted with respect to each other, R[k] is estimated from only N-|k| samples, but we always divide by N, so the values decrease toward the edges.
R[k] = \frac{1}{N-|k|} \sum_{n=0}^{N-1-|k|} x[n]\, x[n+|k|]

is the unbiased estimate, where the division is done by the actual number of overlapping samples. On the other hand, the coefficients at the edges (k close to N-1 or to -N+1) are very badly estimated, as only a few samples are available for them. Therefore, we prefer the biased estimate.
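The two estimates can be compared on a short NumPy sketch (NumPy and the toy signal are my additions, not part of the lecture):

```python
import numpy as np

def autocorr(x, biased=True):
    """Estimate R[k], k = 0..N-1, from one realization.

    biased:   divide by N      -> values taper toward the edges
    unbiased: divide by N - k  -> noisy at the edges (few samples overlap)
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    R = np.empty(N)
    for k in range(N):
        s = np.dot(x[:N - k], x[k:])      # sum_{n=0}^{N-1-k} x[n] x[n+k]
        R[k] = s / N if biased else s / (N - k)
    return R

x = np.array([1.0, -1.0, 1.0, -1.0])
print(autocorr(x, biased=True))           # [ 1.   -0.75  0.5  -0.25]
print(autocorr(x, biased=False))          # [ 1.   -1.    1.   -1.  ]
```

Note how the biased values taper linearly toward the edge while the unbiased ones keep full magnitude but rest on fewer and fewer samples.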
we cannot use the Fourier transform directly, as random signals have infinite energy (the FT can be applied to such signals only in some special cases).
We will consider only ergodic random signals and work with one realization x(t).
Power spectral density (PSD) derivation.
Let us define an interval of length T, from -T/2 to +T/2, and the spectrum of the signal truncated to it:

X_T(j\omega) = \int_{-T/2}^{+T/2} x_T(t)\, e^{-j\omega t}\, dt
We will define the energy spectral density (see the lecture on the Fourier transform):

\int_{-\infty}^{+\infty} x_T^2(t)\, dt = \frac{1}{2\pi} \int_{-\infty}^{+\infty} X_T(j\omega)\, X_T(-j\omega)\, d\omega = \int_{-\infty}^{+\infty} L_T(j\omega)\, d\omega,

where

L_T(j\omega) = \frac{|X_T(j\omega)|^2}{2\pi}.
We will try to stretch T to \infty. In this case, however, the energy (and also its density) would grow to infinity. We therefore define the (double-sided) power spectral density by dividing the energy by T (analogy: P = E/T):

G_T(j\omega) = \frac{L_T(j\omega)}{T}.

Now the stretching can be done, and the power spectral density can be computed not only for the interval of length T, but for the whole signal:

G(j\omega) = \lim_{T \to \infty} G_T(j\omega) = \lim_{T \to \infty} \frac{|X_T(j\omega)|^2}{2\pi T}.
Properties of G(j\omega)
For the spectral function of a real signal, X_T(-j\omega) = X_T^*(j\omega). G(j\omega) is given by squared magnitudes of these values, so it is purely real and even.
In communication technology, the mean value is often a = 0. Then the mean power is equal to the variance, P = D, and the effective (RMS) value is equal to the standard deviation: X_{ef} = \sigma.
Wiener-Khinchin equations
The power spectral density is linked to the auto-correlation function R(\tau) by the FT (usually we compute the PSD this way; it is more tractable than increasing T):

G(j\omega) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} R(\tau)\, e^{-j\omega\tau}\, d\tau,
\qquad
R(\tau) = \int_{-\infty}^{+\infty} G(j\omega)\, e^{+j\omega\tau}\, d\omega.

For a discrete-time signal, the PSD is defined similarly from the auto-correlation coefficients:

G(e^{j\omega}) = \sum_{k=-\infty}^{+\infty} R[k]\, e^{-j\omega k}
(Question: which angular frequency appears in this equation?) G(e^{j\omega}) is the discrete-time Fourier transform (DTFT) of the auto-correlation coefficients. Once we estimate these (ensemble or temporal estimate), we can compute G(e^{j\omega}).
Back from G(e^{j\omega}) to the auto-correlation coefficients (inverse DTFT):

R[k] = \frac{1}{2\pi} \int_{-\pi}^{+\pi} G(e^{j\omega})\, e^{+j\omega k}\, d\omega
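The DTFT pair above can be checked numerically; here is a small sketch (the particular coefficients R[k] are made up for illustration):

```python
import numpy as np

# Hypothetical auto-correlation coefficients R[k], zero outside |k| <= 2.
R = {0: 1.0, 1: 0.5, -1: 0.5, 2: 0.25, -2: 0.25}

w = np.linspace(-np.pi, np.pi, 4096, endpoint=False)

# Forward: G(e^{jw}) = sum_k R[k] e^{-jwk}  (real and even for real, even R[k])
G = sum(Rk * np.exp(-1j * k * w) for k, Rk in R.items()).real

# Back: R[k] = (1/2pi) int_{-pi}^{pi} G(e^{jw}) e^{+jwk} dw  (Riemann sum)
dw = w[1] - w[0]
R0 = np.sum(G * np.exp(1j * w * 0)).real * dw / (2 * np.pi)
R1 = np.sum(G * np.exp(1j * w * 1)).real * dw / (2 * np.pi)
print(round(R0, 6), round(R1, 6))         # 1.0 0.5 -- the original R[0], R[1]
```

The uniform grid over one period makes the discretized inverse integral exact for integer k, so the original coefficients come back.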
[Figures: PSD estimates of the flowing-water signal, plotted against frequency f (Hz) and normalized angular frequency]
The flowing water has a peak in the power at 700 Hz (a resonance of a tube in my ex-apartment?).
The PSD can be estimated from the DFT of one N-sample segment:

X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-j\frac{2\pi}{N}kn},

evaluated at the discrete frequencies \omega_k = \frac{2\pi}{N}k:

G(e^{j\omega_k}) = \frac{1}{N} |X[k]|^2.
This estimate is usually very unreliable (noisy), so we often use the Welch method: averaging of PSD estimates over several time segments. The signal is divided into M segments of N samples each, and the DFT is computed for each:
X_m[k] = \sum_{n=0}^{N-1} x_m[n]\, e^{-j\frac{2\pi}{N}kn} \quad \text{for } 0 \le m \le M-1,

and the estimates are averaged:

G_W(e^{j\omega_k}) = \frac{1}{M} \sum_{m=0}^{M-1} \frac{1}{N} |X_m[k]|^2.
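The formulas above can be sketched in NumPy as follows (white-noise input, non-overlapping rectangular segments, and NumPy itself are assumptions of this sketch, not the lecture's setup):

```python
import numpy as np

def welch_psd(x, N):
    """Average the per-segment PSD estimates |X_m[k]|^2 / N over
    M = len(x) // N non-overlapping segments of N samples."""
    M = len(x) // N
    segs = x[:M * N].reshape(M, N)
    return (np.abs(np.fft.fft(segs, axis=1)) ** 2 / N).mean(axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal(320 * 1068)       # unit-variance white noise
G = welch_psd(x, 320)
print(G.mean())                            # close to 1, the true (flat) PSD
```

Each single-segment estimate fluctuates with a standard deviation comparable to its mean; averaging M segments shrinks the fluctuation by roughly 1/sqrt(M).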
Example of PSD estimate from one 320-sample segment and average from 1068 such
segments:
[Figure: PSD estimate from one realization vs. Welch averaging]
Example: filtering of one realization of the flowing water by the filter H(z) = 1 - 0.9z^{-1}. Input signal and its PSD:
[Figures: input signal and its PSD; magnitude response |H(e^{j\omega})|; output signal and its PSD]
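The effect can be sketched numerically: the output PSD is the input PSD multiplied by |H(e^{j\omega})|^2 (white noise stands in for the water recording; that substitution, and NumPy, are my assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(320 * 500)               # stand-in input, unit variance
y = x - 0.9 * np.concatenate(([0.0], x[:-1]))    # y[n] = x[n] - 0.9 x[n-1]

def welch_psd(sig, N=320):
    M = len(sig) // N
    segs = sig[:M * N].reshape(M, N)
    return (np.abs(np.fft.fft(segs, axis=1)) ** 2 / N).mean(axis=0)

Gx, Gy = welch_psd(x), welch_psd(y)
w = 2 * np.pi * np.arange(320) / 320
H2 = np.abs(1 - 0.9 * np.exp(-1j * w)) ** 2      # |H(e^{jw})|^2: high-pass
print(Gy.mean())                                  # output power, about 1.81
```

Since the input PSD is flat (about 1), Gy follows H2: small near \omega = 0 and largest near \omega = \pi, and the mean output power approaches 1 + 0.9^2 = 1.81.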
[Figure: Gaussian white noise with \mu = 0, \sigma = 1: one realization, probability density p(x), and distribution function F(x)]
Autocorrelation coefficients:
R[0] = a^2 + D, \qquad R[k] = a^2 \ \text{for } k \ne 0
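These coefficients can be verified on a generated realization (the particular a, D, and sample count are arbitrary choices for this sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
a, D = 0.5, 1.0                                # mean and variance (dispersion)
x = a + np.sqrt(D) * rng.standard_normal(200_000)

R0 = np.mean(x * x)                            # temporal estimate of R[0]
R5 = np.mean(x[:-5] * x[5:])                   # temporal estimate of R[5]
print(round(R0, 2), round(R5, 2))              # about a^2 + D = 1.25 and a^2 = 0.25
```

For k != 0, the samples are independent, so only the squared mean a^2 survives; at k = 0 the variance D is added on top.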
[Figures: estimated auto-correlation coefficients of the white noise; PSD estimate from one realization and with Welch averaging]
For continuous time, a true white noise cannot be generated (if G(j\omega) were non-zero for all \omega, the noise would have infinite power...).
Quantization
We cannot represent samples of a discrete signal x[n] with arbitrary precision; hence, quantization. Most often we round to a fixed number of L quantization levels, numbered 0 to L-1. If we have b bits at our disposal, L = 2^b.
Uniform quantization places the quantization levels q_0 ... q_{L-1} uniformly from the minimum value of the signal, x_{min}, to its maximum, x_{max}, with step

\Delta = \frac{x_{max} - x_{min}}{L}.

Quantization: for a value x[n], the index of the best quantization level is given by

i[n] = \arg \min_{l=0...L-1} |x[n] - q_l|.
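A direct sketch of this quantizer (the mid-bin placement of the levels q_l is an assumption; the lecture only says they are uniform between x_min and x_max):

```python
import numpy as np

def uniform_quantize(x, b, xmin, xmax):
    """Quantize to L = 2**b uniform levels; return indices i[n] and values q[i[n]]."""
    L = 2 ** b
    delta = (xmax - xmin) / L
    q = xmin + delta * (np.arange(L) + 0.5)             # assumed: levels at bin centers
    i = np.abs(x[:, None] - q[None, :]).argmin(axis=1)  # i[n] = argmin_l |x[n] - q_l|
    return i, q[i]

x = np.linspace(-1.0, 1.0, 1000)
i, xq = uniform_quantize(x, b=4, xmin=-1.0, xmax=1.0)
e = xq - x
print(np.max(np.abs(e)))                                # at most delta/2 = 0.0625
```

The brute-force argmin mirrors the formula above; for uniform mid-bin levels the error never exceeds \Delta/2.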
26
40
60
80
100
120
140
160
180
20
40
60
80
100
120
140
160
180
20
40
60
80
100
120
140
160
180
4
2
0
2
0.5
0.5
27
To learn how the signal was distorted by quantization, we compute the power of the useful signal P_s and compare it to the power of the noise P_e: the signal-to-noise ratio (SNR):

SNR = 10 \log_{10} \frac{P_s}{P_e} \quad [\mathrm{dB}].
For computing the power of the error signal, we make use of the theory of random processes: we do not know the values of e[n], but we know that they lie in the interval [-\Delta/2, +\Delta/2] and that they are uniformly distributed. The probability density function of e[n] is therefore:
constant on this interval, with height 1/\Delta, so that \int p_e(g)\, dg = 1. This process has a zero mean value (easily verified), so its power will be equal to the variance:

P_e = D_e = \int_{-\infty}^{+\infty} g^2 p_e(g)\, dg = \frac{1}{\Delta} \int_{-\Delta/2}^{+\Delta/2} g^2\, dg = \frac{1}{\Delta} \left[ \frac{g^3}{3} \right]_{-\Delta/2}^{+\Delta/2} = \frac{1}{3\Delta} \left( \frac{\Delta^3}{8} + \frac{\Delta^3}{8} \right) = \frac{\Delta^2}{12}.
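The \Delta^2/12 result is easy to confirm by simulation (the specific \Delta and sample count are arbitrary):

```python
import numpy as np

delta = 0.125
rng = np.random.default_rng(3)
e = rng.uniform(-delta / 2, delta / 2, 1_000_000)  # uniform error samples
print(np.mean(e * e), delta ** 2 / 12)             # both about 1.302e-3
```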
For a harmonic signal with amplitude A, the range is x_{max} - x_{min} = 2A, so \Delta = 2A/L and

P_e = \frac{\Delta^2}{12} = \frac{4A^2}{12L^2} = \frac{A^2}{3L^2}.

Signal-to-noise ratio (the power of the harmonic signal is P_s = A^2/2):

SNR = 10 \log_{10} \frac{P_s}{P_e} = 10 \log_{10} \frac{A^2/2}{A^2/(3L^2)} = 10 \log_{10} \frac{3L^2}{2}.

Substituting L = 2^b:

SNR = 10 \log_{10} \left( \frac{3}{2} (2^b)^2 \right) = 10 \log_{10} \frac{3}{2} + 10 \log_{10} 2^{2b} = 1.76 + 20b \log_{10} 2 \approx 1.76 + 6b \ \mathrm{dB}.

The constant 1.76 depends on the character of the signal (cosine, noise), but it always holds that adding/removing 1 bit improves/degrades the SNR by 6 dB.
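The 1.76 + 6b rule can be checked empirically; a sketch assuming a full-scale cosine and the mid-bin uniform quantizer (both my choices, not specified in the notes):

```python
import numpy as np

b = 8
L = 2 ** b
A = 1.0
s = A * np.cos(0.01234 * np.arange(100_000))    # full-scale harmonic signal

delta = 2 * A / L                               # x_max - x_min = 2A
q = -A + delta * (np.arange(L) + 0.5)           # assumed mid-bin levels
i = np.clip(np.floor((s + A) / delta), 0, L - 1).astype(int)
e = q[i] - s                                    # quantization error

snr = 10 * np.log10(np.mean(s ** 2) / np.mean(e ** 2))
print(round(snr, 1), 1.76 + 6 * b)              # measured vs. 49.76 dB
```

Repeating with b = 9 should add roughly 6 dB to the measured value.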
In practice, the SNR is estimated directly from the signal and error samples:

SNR_{exp} = 10 \log_{10} \frac{\frac{1}{N} \sum_{n=0}^{N-1} s^2[n]}{\frac{1}{N} \sum_{n=0}^{N-1} e^2[n]} = 19.36 \ \mathrm{dB}.