Signal Processing
These are short notes extracted from the Signal Processing course (in French)
delivered by Prof. J. Mars, Prof. J. Chanussot and Asst. Prof. P. Granjon at Grenoble
Institute of Technology.
A CD-ROM (Signal Processing for Geosciences) with 82 lessons is also used to illustrate
this course extensively. For more information, contact Prof. J. Mars.
INTRODUCTION
To transmit information we use a physical quantity. This information is time dependent
and shows variations (a speech signal is a temporal variation of acoustic pressure,
an image has grey-level variations, a mobile phone carries a converted electromagnetic
signal). A signal can be natural (speech, star vibration, earthquake) or artificial (motor
engine vibration, etc.). The signal propagates through a transmission channel (atmosphere,
underwater medium, cable). It is recorded on a sensor (antenna, geophone,
hydrophone). During propagation, the signal can be modified by distortion and/or by
additive noise coming from the environment. It is therefore necessary to process the signal
in order to recover the original signal without noise.
This course is divided into several parts.
DETERMINISTIC SIGNAL
I. Axiom
Classically, to describe a signal, we use the temporal representation (the natural
representation). A signal x(t) is a real (or complex) function of the real variable t:
- with bounded modulus,
- continuous.
The signals so defined have a vector space structure (stability under addition and scalar
multiplication: for example, the sum of two voltage functions is a voltage function).
In this space, we can define:
- a scalar product, noted $\langle x, y \rangle$
- an associated norm: $\|x\| = \sqrt{\langle x, x \rangle}$
- a distance: $d(x, y) = \|x - y\|$

On a discrete basis $e_1(t), \dots, e_n(t)$, a signal can be decomposed as
$$x(t) = \sum_{i=1}^{n} \alpha_i \, e_i(t)$$
If the space is not of finite dimension, we can find an infinite discrete basis
$e_1(t), \dots, e_n(t), \dots$ such that, for all $\varepsilon > 0$, there exists $N$ with
$$\left\| x(t) - \sum_{i=1}^{N} \alpha_i \, e_i(t) \right\| < \varepsilon$$
II. Representation
Temporal representation is not always the best way to describe a signal. For example, a
cosine function is defined by its frequency, phase and amplitude (3 parameters). We can
pass from one type of representation to another by bijective transformations.

(Figure: the vector space with scalar product $\langle x, y \rangle$, norm $\|x\|$ and distance $d(x, y)$, containing the representations $x(t)$, $X(\nu)$, ... linked by transformations.)

Going from the temporal representation x(t) to the $X(\nu)$ representation can be written as
$$X(\nu) = \int K(\nu, t)\, x(t)\, dt$$
where $K(\nu, t)$ is the kernel of the transform, with inverse
$$x(t) = \int K^{-1}(\nu, t)\, X(\nu)\, d\nu$$
Signal classes:
- $x(t) \in L^1$ if $\int |x(t)|\, dt$ converges;
- $x(t) \in L^2$ if $\int |x(t)|^2\, dt$ converges (finite-energy signals).

For $x(t) \in L^2$, we can describe the quantity $E_{12} = \int_{t_1}^{t_2} |x(t)|^2\, dt$ as the energy of the signal between $t_1$ and $t_2$. The integrand $|x(t)|^2$ is the energy density (instantaneous power), and the total energy is
$$E_x = \int_{-\infty}^{+\infty} |x(t)|^2\, dt$$
The scalar product of two finite-energy signals is $\int x(t)\, y^*(t)\, dt$.
The cross-correlation function for delay $\tau$ between x(t) and y(t) is the scalar product $\langle x, y_\tau \rangle$:
$$\varphi_{xy}(\tau) = \int x(t)\, y^*(t - \tau)\, dt$$
If x(t) = y(t), we obtain the autocorrelation function of x(t), defined as
$$\varphi_{xx}(\tau) = \int x(t)\, x^*(t - \tau)\, dt$$
Correlation properties:
1. $\varphi_{xx}(0) = \int |x(t)|^2\, dt = E_x$.
2. With the change of variable $t - \tau = u$: $\varphi_{xx}(\tau) = \int x(u + \tau)\, x^*(u)\, du$, so $\varphi_{xx}(-\tau) = \varphi_{xx}^*(\tau)$: Hermitian symmetry.
3. By the Schwartz inequality, $|\varphi_{xx}(\tau)| \le \varphi_{xx}(0) = E_x$; for a real signal, $\varphi_{xx}(\tau)$ is symmetric.
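Properties 1 and 3 can be checked numerically. A minimal sketch (the decaying exponential pulse is an assumed example of a finite-energy signal):

```python
import numpy as np

# Assumed example signal: a decaying exponential pulse (finite energy).
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
x = np.exp(-t)

# Energy E_x = integral of |x(t)|^2 dt, approximated by a Riemann sum.
E_x = np.sum(np.abs(x) ** 2) * dt          # analytically 1/2 for exp(-t), t >= 0

# Autocorrelation phi_xx(tau) = integral of x(t) x*(t - tau) dt.
phi = np.correlate(x, x, mode="full") * dt
tau0 = len(x) - 1                          # index corresponding to tau = 0

# Property 1: phi_xx(0) = E_x.  Property 3: |phi_xx(tau)| <= phi_xx(0).
print(phi[tau0], E_x)
```

The peak of the autocorrelation sits at zero lag and equals the energy, as the Schwartz inequality predicts.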
Frequency representation:
Signals from $L^2$ admit a frequency representation by Fourier transform:
$$X(\nu) = \int_{-\infty}^{+\infty} x(t)\, e^{-2j\pi\nu t}\, dt$$
Properties:
1. Parseval theorem:
$$\int x(t)\, y^*(t)\, dt = \int X(\nu)\, Y^*(\nu)\, d\nu \qquad\text{and}\qquad \int |x(t)|^2\, dt = \int |X(\nu)|^2\, d\nu = E_x$$
In fact:
$$X(\nu) = \int x(t)\, e^{-2j\pi\nu t}\, dt \quad\Rightarrow\quad X^*(\nu) = \int x^*(t)\, e^{+2j\pi\nu t}\, dt$$
2. Correlation theorem. Take $a(t) = x(t)$ and
$$b(t) = \int Y(\nu)\, e^{+2j\pi\nu (t - \tau)}\, d\nu = y(t - \tau)$$
whose transform is $B(\nu) = Y(\nu)\, e^{-2j\pi\nu\tau}$. Parseval then gives
$$\varphi_{xy}(\tau) = \int x(t)\, y^*(t - \tau)\, dt = \int X(\nu)\, Y^*(\nu)\, e^{+2j\pi\nu\tau}\, d\nu$$
hence
$$\mathrm{TF}[\varphi_{xy}(\tau)] = X(\nu)\, Y^*(\nu)$$
3. If $x = y$, we find the spectral energy density, which is the Fourier transform of the autocorrelation:
$$\mathrm{TF}[\varphi_{xx}(\tau)] = |X(\nu)|^2$$
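Parseval's relation can be illustrated in discrete form (a sketch; the 1/N factor reflects NumPy's unnormalized FFT convention, not the continuous-time formula):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024
x = rng.standard_normal(N)

X = np.fft.fft(x)
energy_time = np.sum(np.abs(x) ** 2)       # sum of |x[n]|^2
energy_freq = np.sum(np.abs(X) ** 2) / N   # (1/N) sum of |X[k]|^2

print(energy_time, energy_freq)
```

The two sums agree to machine precision, the discrete analogue of $\int |x|^2 dt = \int |X|^2 d\nu$.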
4. Energy of $y(t) = x_1(t) + x_2(t)$:
$$E_y = \int |y(t)|^2\, dt = E_{x_1} + E_{x_2} + 2\,\mathrm{Re}\int x_1(t)\, x_2^*(t)\, dt = E_{x_1} + E_{x_2} + 2\,\mathrm{Re}\,\langle x_1, x_2 \rangle$$
By Parseval, $\int x_1(t)\, x_2^*(t)\, dt = \int X_1(\nu)\, X_2^*(\nu)\, d\nu$. The product $X_1(\nu)\, X_2^*(\nu)$ is the cross spectral density (densité spectrale d'interaction).
Observations: x(t) and x(t-θ) have the same spectral energy density:
$$\mathrm{TF}[x(t)] = X(\nu) \;\Rightarrow\; \text{dse} = |X(\nu)|^2$$
$$\mathrm{TF}[x(t-\theta)] = X(\nu)\, e^{-2j\pi\nu\theta} \;\Rightarrow\; \text{dse} = \left| X(\nu)\, e^{-2j\pi\nu\theta} \right|^2 = |X(\nu)|^2$$
Finite power signals. We define
$$x_T(t) = \begin{cases} x(t) & \text{if } t \in [-T/2, T/2] \\ 0 & \text{if } t \notin [-T/2, T/2] \end{cases}$$
and
$$P_T = \frac{E_T}{T} = \frac{1}{T} \int_{-T/2}^{+T/2} |x_T(t)|^2\, dt$$
We notice that $P_T \le A^2$ for all $t$, because $|x(t)| \le A$ ($x(t)$ is bounded), so $P_T$ cannot diverge
when T goes to infinity.
We now consider the case where $P_x = \lim_{T \to \infty} P_T$ is finite. This limit is called the average power of x(t).
Frequency representation:
$$X_T(\nu) = \int x_T(t)\, e^{-2j\pi\nu t}\, dt = \int_{-T/2}^{+T/2} x(t)\, e^{-2j\pi\nu t}\, dt$$
By Parseval,
$$\frac{1}{T} \int_{-T/2}^{+T/2} |x(t)|^2\, dt = \frac{1}{T} \int |X_T(\nu)|^2\, d\nu$$
so the quantity $|X_T(\nu)|^2 / T$ describes the distribution of power over frequency.
Power of a sum:
$$P_M(x_1 + x_2) = \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{+T/2} |x_1(t) + x_2(t)|^2\, dt = P_M x_1 + P_M x_2 + 2\,\mathrm{Re} \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{+T/2} x_1(t)\, x_2^*(t)\, dt$$
The last term, which is a power, is the scalar product $\langle x_1, x_2 \rangle$ between $x_1$ and $x_2$. The squared norm of x is $\|x\|^2 = P_x$.
With Parseval we can write the same expression in the frequency domain:
$$\lim_{T\to\infty} \frac{1}{T} \int x_T(t)\, y_T^*(t)\, dt = \lim_{T\to\infty} \frac{1}{T} \int X_T(\nu)\, Y_T^*(\nu)\, d\nu$$
so the cross power spectral density is $\displaystyle \lim_{T\to\infty} \frac{1}{T}\, X_T(\nu)\, Y_T^*(\nu)$.
Correlation function:
We call correlation function
$$\varphi_{xy}(\tau) = \langle x, y_\tau \rangle = \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{+T/2} x(t)\, y^*(t - \tau)\, dt$$
and autocorrelation
$$\varphi_{xx}(\tau) = \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{+T/2} x(t)\, x^*(t - \tau)\, dt$$
LINEAR FILTERING
V. Linear filtering (and homogeneous)
A linear filter maps an input x(t) to an output y(t) through a linear differential equation:
$$\sum_{q=0}^{n} a_q \frac{d^q y(t)}{dt^q} = \sum_{r=0}^{m} b_r \frac{d^r x(t)}{dt^r}$$
The filter is homogeneous (time-invariant) if the coefficients $a_q$ and $b_r$ do not depend on time; n is the filter order. Taking Fourier transforms, and using $\mathrm{TF}[y^{(q)}(t)] = (2j\pi\nu)^q\, Y(\nu)$:
$$Y(\nu) = G(\nu)\, X(\nu), \qquad G(\nu) = \frac{\sum_{r=0}^{m} b_r\, (2j\pi\nu)^r}{\sum_{q=0}^{n} a_q\, (2j\pi\nu)^q}$$
The output PSD is linked to the input PSD by the squared modulus of the filter complex gain:
$$\Phi_{yy}(\nu) = |G(\nu)|^2\, \Phi_{xx}(\nu)$$
In particular, $\Phi_{xx}(\nu) = 0$ implies $\Phi_{yy}(\nu) = 0$.
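The rational gain $G(\nu)$ can be evaluated directly from the coefficients. A sketch, with an assumed first-order low-pass $y(t) + \tau\, y'(t) = x(t)$:

```python
import numpy as np

def G(nu, a, b):
    """Complex gain G(nu) = sum_r b_r (2j pi nu)^r / sum_q a_q (2j pi nu)^q."""
    s = 2j * np.pi * nu
    num = sum(br * s**r for r, br in enumerate(b))
    den = sum(aq * s**q for q, aq in enumerate(a))
    return num / den

# Assumed example: first-order low-pass y(t) + tau*y'(t) = x(t).
tau = 1e-3
a = [1.0, tau]                   # a_0, a_1
b = [1.0]                        # b_0

nu_c = 1.0 / (2 * np.pi * tau)   # cutoff frequency: |G(nu_c)| = 1/sqrt(2)
print(abs(G(0.0, a, b)), abs(G(nu_c, a, b)))
```

At $\nu = 0$ the gain is 1, and at the cutoff $\nu_c = 1/(2\pi\tau)$ its modulus drops to $1/\sqrt{2}$, as expected for a first-order filter.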
Energy of y:
$$E_Y = \int |Y(\nu)|^2\, d\nu = \int |G(\nu)|^2\, |X(\nu)|^2\, d\nu$$
If x(t) is a finite-energy signal, then the output y(t) is a finite-energy signal.
VII. Harmonic response
Consider a filter with impulse response h(t) and complex gain $G(\nu)$, so that $y(t) = F_h[x(t)]$.
- For a complex exponential input $x(t) = e^{2j\pi\nu_0 t}$, $X(\nu) = \delta(\nu - \nu_0)$, so
$$Y(\nu) = G(\nu)\, \delta(\nu - \nu_0) = G(\nu_0)\, \delta(\nu - \nu_0) \quad\Rightarrow\quad y(t) = G(\nu_0)\, e^{2j\pi\nu_0 t}$$
- For a real sinusoid $x(t) = A \cos(2\pi\nu_0 t + \varphi)$, writing $G(\nu_0) = K(\nu_0)\, e^{j\theta(\nu_0)}$ and using the Hermitian symmetry $G(-\nu_0) = K(\nu_0)\, e^{-j\theta(\nu_0)}$:
$$Y(\nu) = \frac{A}{2}\, K(\nu_0) \left[ \delta(\nu - \nu_0)\, e^{j[\theta(\nu_0) + \varphi]} + \delta(\nu + \nu_0)\, e^{-j[\theta(\nu_0) + \varphi]} \right]$$
hence
$$y(t) = A\, K(\nu_0)\, \cos\!\big(2\pi\nu_0 t + \varphi + \theta(\nu_0)\big)$$
We observe that the output is still a sinusoid at the same frequency as the input, but modified by an amplitude factor $K(\nu_0)$ and a phase shift $\theta(\nu_0)$. We can plot the modulus and phase at different frequency steps (this is the harmonic decomposition).
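This gain/phase behaviour can be checked by filtering a sinusoid in the frequency domain (a sketch: the first-order low-pass is an assumed example, and the tone is chosen to fall exactly on an FFT bin so circular filtering is exact):

```python
import numpy as np

fs, f0, A, phi0 = 1000.0, 50.0, 2.0, 0.3
t = np.arange(0.0, 1.0, 1.0 / fs)
x = A * np.cos(2 * np.pi * f0 * t + phi0)

# Assumed filter: first-order low-pass G(nu) = 1 / (1 + 2j pi nu tau).
tau = 2e-3
nu = np.fft.fftfreq(len(t), 1.0 / fs)
G = 1.0 / (1.0 + 2j * np.pi * nu * tau)
y = np.real(np.fft.ifft(np.fft.fft(x) * G))

# Predicted output: same frequency, scaled by K(nu0), shifted by theta(nu0).
G0 = 1.0 / (1.0 + 2j * np.pi * f0 * tau)
y_pred = A * abs(G0) * np.cos(2 * np.pi * f0 * t + phi0 + np.angle(G0))
err = np.max(np.abs(y - y_pred))
print(err)
```

The filtered sinusoid matches $A\,K(\nu_0)\cos(2\pi\nu_0 t + \varphi + \theta(\nu_0))$ to machine precision.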
Filtering a finite-power signal: with $y(t) = \int h(u)\, x(t-u)\, du$, the correlation function of y(t) is
$$\varphi_{yy}(\tau) = \lim_{T\to\infty} \frac{1}{T} \int_{(T)} \left[ \iint h(u)\, h^*(v)\, x(t-u)\, x^*(t-\tau-v)\, du\, dv \right] dt$$
Changing variables and letting $T \to \infty$, the output PSD follows:
$$\Phi_{YY}(\nu) = G(\nu)\, G^*(\nu)\, \Phi_{XX}(\nu) = |G(\nu)|^2\, \Phi_{XX}(\nu)$$
VIII. Wideband signal
Definition: x(t) is wideband if x(t) is a finite-energy signal whose spectrum $X(\nu)$ extends over a wide band of frequencies.
III. Shannon theorem
All signals are sampled before analysis, so it is very important to understand the sampling effect.
Let $x(t) \in BL^2$. We call sampled signal
$$x_e(t) = x(t)\, \delta_T(t), \qquad \delta_T(t) = \sum_{k=-\infty}^{+\infty} \delta(t - kT)$$
where $\delta_T(t)$ is a Dirac comb (whose Fourier transform is a Dirac comb of spacing 1/T, weighted by 1/T). Using the comb properties:
$$x_e(t) = \sum_{k=-\infty}^{+\infty} x(t)\, \delta(t - kT) = \sum_{k=-\infty}^{+\infty} x(kT)\, \delta(t - kT)$$
By Fourier transform, the multiplication becomes a convolution:
$$X_e(\nu) = X(\nu) * \frac{1}{T} \sum_{p=-\infty}^{+\infty} \delta\!\left(\nu - \frac{p}{T}\right)$$
and we obtain the periodized spectrum:
$$X_e(\nu) = \frac{1}{T} \sum_{p=-\infty}^{+\infty} X\!\left(\nu - \frac{p}{T}\right)$$
If $G(\nu)$ is the ideal low-pass filter defined by $G(\nu) = 0$ if $|\nu| > 1/2T$ and $G(\nu) = T$ for $|\nu| \le 1/2T$, we obtain
$$X_e(\nu)\, G(\nu) = X(\nu)$$
provided the replicas $X(\nu - p/T)$ do not overlap, i.e. the sampling frequency 1/T exceeds twice the maximum frequency of x(t).
(Figure: x(t) is sampled into $x_e(t)$, processed on the computer, then ideal low-pass filtering over $[-1/2T, 1/2T]$ restores x(t).)
In the time domain, this ideal low-pass filtering corresponds to the Shannon interpolation formula:
$$x(t) = \sum_{k=-\infty}^{+\infty} x(kT)\, \frac{\sin\!\big(\pi (t - kT)/T\big)}{\pi (t - kT)/T}$$
(Figure: spectrum of the sampled sinus, with replicas at $\pm\nu_0$ around each multiple of the sampling frequency $\nu_e$.)
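The interpolation formula can be tested on a tone below the Nyquist frequency (a sketch with assumed values; the sum is truncated to a finite window, so the match is only approximate):

```python
import numpy as np

T = 1e-3                                   # sampling period (nu_e = 1000 Hz)
k = np.arange(-2000, 2000)                 # finite block of sample indices
f0 = 123.0                                 # tone well below 1/(2T) = 500 Hz
samples = np.cos(2 * np.pi * f0 * k * T)

def reconstruct(t):
    """Truncated Shannon interpolation: sum_k x(kT) sinc((t - kT)/T)."""
    # np.sinc(u) = sin(pi u)/(pi u), matching the formula above.
    return np.sum(samples * np.sinc((t - k * T) / T))

t0 = 0.25e-3                               # an instant between two samples
x_true = np.cos(2 * np.pi * f0 * t0)
x_rec = reconstruct(t0)
print(x_rec, x_true)
```

The reconstructed value at an off-grid instant agrees with the continuous signal, up to the truncation error of the finite sum.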
IV. Amplitude modulation
Let $x_{am}(t) = x(t)\, \cos(2\pi\nu_0 t)$, with $\nu_0$ larger than the maximum frequency of x(t). Its spectrum is
$$X_{am}(\nu) = \frac{1}{2} \left[ X(\nu - \nu_0) + X(\nu + \nu_0) \right]$$
(Figure: $X(\nu)$ around 0, and $X_{am}(\nu)$ with two half-amplitude replicas around $-\nu_0$ and $+\nu_0$.)
The energy of the modulated signal is
$$E_{x_{am}} = \int |X_{am}(\nu)|^2\, d\nu = \frac{1}{2} \int x^2(t)\, \big(1 + \cos(4\pi\nu_0 t)\big)\, dt = \frac{E_x}{2}$$
(the second term is null, because $x^2(t)$ has no spectral content around $2\nu_0$). Its autocorrelation is obtained by inverse Fourier transform of the spectral density:
$$\varphi_{x_{am}}(\tau) = \mathrm{TF}^{-1}\big[ |X_{am}(\nu)|^2 \big] = \frac{1}{4}\, \varphi_{xx}(\tau)\, 2\cos(2\pi\nu_0\tau) = \frac{1}{2}\, \varphi_{xx}(\tau)\, \cos(2\pi\nu_0\tau)$$
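A numerical check (sketch) of the energy halving, with an assumed narrowband Gaussian pulse and a carrier well above its band:

```python
import numpy as np

dt = 1e-4
t = np.arange(0.0, 1.0, dt)
x = np.exp(-((t - 0.5) ** 2) / (2 * 0.01**2))   # narrow Gaussian pulse
nu0 = 2000.0                                    # carrier far above signal band
x_am = x * np.cos(2 * np.pi * nu0 * t)

E_x = np.sum(x**2) * dt
E_am = np.sum(x_am**2) * dt                     # should be close to E_x / 2
print(E_am, E_x / 2)
```

The cross term $\int x^2(t)\cos(4\pi\nu_0 t)\,dt$ is negligible here because the pulse spectrum contains nothing near $2\nu_0$.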
X. Stationarity
x(t) is stationary in the strict sense if its distribution functions do not change under a time shift:
$$F_{2n}(x_1, t_1; x_2, t_2; \dots; x_n, t_n) = F_{2n}(x_1, t_1 + \tau; x_2, t_2 + \tau; \dots; x_n, t_n + \tau)$$
This condition must hold for all $n$, $t_i$, $\tau$.
A. Stationarity for n = 1:
In the case n = 1, stationarity can be written $F_2(x_1, t_1) = F_2(x_1, t_1 + \tau)$ for all $t_1, \tau$.
In particular, for $\tau = -t_1$, we obtain $F_2(x_1, t_1) = F_2(x_1, 0)$: the distribution function does not depend on time. The same holds for the probability density function (p.d.f.): $f_2(x_1, t_1) = f_2(x_1, 0)$.
B. Stationarity for n = 2:
For n = 2, $(x(t_1), x(t_2))$ is a two-dimensional random variable. Its distribution function is
$$F_4(x_1, t_1; x_2, t_2) = P\big( x(t_1) < x_1,\ x(t_2) < x_2 \big)$$
and its p.d.f. is
$$f_4(x_1, t_1, x_2, t_2) = \frac{\partial^2 F_4}{\partial x_1\, \partial x_2}$$
In the same way, we can calculate the second-order cross moment:
$$E\{x(t_1)\, x(t_2)\} = \iint x_1 x_2\, f_4(x_1, t_1, x_2, t_2)\, dx_1\, dx_2$$
This quantity gives information on the possible dependence between $x(t_1)$ and $x(t_2)$.
Prerequisite: if u and v are two independent random variables, the p.d.f. of the couple is separable (it equals the product of the two p.d.f.s): $f_{uv}(u, v) = f_u(u)\, f_v(v)$. With $E\{uv\} = \iint uv\, f_{uv}(u, v)\, du\, dv$, independence implies $E\{uv\} = E\{u\}\, E\{v\}$.
A signal is second-order stationary if
$$E\{x(t_1)\} = \text{const} \qquad\text{and}\qquad E\{x(t_1)\, x(t_2)\} = \varphi_{xx}(t_2 - t_1)$$
Joint stationarity:
Consider two random signals x(t) and y(t), observed at instants $t_1, \dots, t_n$ and $t'_1, \dots, t'_m$. Then $\big(x(t_1), \dots, x(t_n), y(t'_1), \dots, y(t'_m)\big)$ is an (n+m)-dimensional random variable, with distribution function $F_{2(n+m)}(x_1, t_1, \dots, x_n, t_n, y_1, t'_1, \dots, y_m, t'_m)$ for all $t_i, t'_j, n, m$.
These processes are jointly stationary if:
$$F_{2(n+m)}(x_1, t_1, \dots, y_m, t'_m) = F_{2(n+m)}(x_1, t_1 + \tau, \dots, y_m, t'_m + \tau)$$
Case n = m = 1: $(x(t_1), y(t'_1))$ is a random couple. If they are jointly stationary, with $\tau = -t'_1$ we obtain:
$$F_4(x_1, t_1, y_1, t'_1) = F_4(x_1, t_1 - t'_1, y_1, 0)$$
We can deduce $E\{x(t_1)\, y(t'_1)\} = \varphi_{xy}(t_1 - t'_1)$: the cross-correlation function between x(t) and y(t) depends only on the time delay. Conversely, if $E\{x(t_1)\, y(t'_1)\} = \varphi_{xy}(t_1 - t'_1)$, then x(t) and y(t) are jointly stationary at the second order.
XII. Independence
Definition: we say that x(t) and y(t) are independent if and only if the n-dimensional random variable $[x(t_1), \dots, x(t_n)]$ is independent of the m-dimensional random variable $[y(t'_1), \dots, y(t'_m)]$, for all $n, m, t_i, t'_k$.
XIII. Gaussian process
A scalar random variable is Gaussian if its p.d.f. has the form $f_x(x) = K\, e^{-x^2 / 2\sigma^2}$. A random process is Gaussian if every n-dimensional p.d.f. has the form
$$f(x_1, x_2, \dots, x_n) = K\, e^{-\frac{1}{2} Q(x_1, x_2, \dots, x_n)}$$
where Q is a positive quadratic form.
XIV. Power spectral density of a random signal
The main idea is to define the energy quantities by statistical averaging over the realizations. We define the Average Power Spectral Density (APSD) of a random signal as
$$\Phi_{xx}(\nu) = E\{\Phi_{kk}(\nu)\}$$
where $\Phi_{kk}(\nu)$ is the PSD of each realization $x_k(t)$ of x(t). With
$$X_T(\nu) = \int_{-T}^{+T} x(t)\, e^{-2j\pi\nu t}\, dt$$
this allows writing:
$$\Phi_{xx}(\nu) = E\left\{ \lim_{T\to\infty} \frac{1}{2T} \left| X_T(\nu) \right|^2 \right\} = \lim_{T\to\infty} \frac{1}{2T}\, E\left\{ \left| X_T(\nu) \right|^2 \right\}$$
With two signals x(t) and y(t), we can express the Cross Power Spectral Density as:
$$\Phi_{xy}(\nu) = \lim_{T\to\infty} \frac{1}{2T}\, E\left\{ X_T(\nu)\, Y_T^*(\nu) \right\}$$
and by identification this is consistent with the previous expression, because $E\{X_T(\nu)\, X_T^*(\nu)\} = E\{|X_T(\nu)|^2\}$.
XV. Wiener-Khinchine theorem
Theorem: the PSD (respectively CPSD) of a stationary random signal x(t) (respectively of x(t) and y(t)) is the Fourier transform of the autocorrelation function (respectively of the cross-correlation function):
$$\Phi_{XX}(\nu) = \mathrm{TF}[\varphi_{xx}(\tau)] \qquad\text{and}\qquad \Phi_{XY}(\nu) = \mathrm{TF}[\varphi_{xy}(\tau)]$$
Proof sketch: let x(t) be a stationary random signal with finite average power. We have $\Phi_{xx}(\nu) = E\{\Phi_{kk}(\nu)\}$ with, for each realization,
$$\Phi_{kk}(\nu) = \mathrm{TF}\!\left[ \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{+T} x_k(t)\, x_k^*(t-\tau)\, dt \right]$$
so
$$\Phi_{xx}(\nu) = \lim_{T\to\infty} \frac{1}{2T} \int\!\!\int E\{ x_k(t)\, x_k^*(t-\tau) \}\, dt\; e^{-2j\pi\nu\tau}\, d\tau = \mathrm{TF}[\varphi_{xx}(\tau)]$$
The proof is similar for two stationary and jointly stationary random signals.
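A sketch of the theorem in discrete form, for white noise: its autocorrelation is $\sigma^2 \delta[k]$, whose Fourier transform is the constant $\sigma^2$, so the averaged periodogram should be flat at $\sigma^2$ (all values here are assumed for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 4.0
N, K = 256, 2000                        # segment length, number of realizations
x = np.sqrt(sigma2) * rng.standard_normal((K, N))

# Averaged periodogram: estimate of E{|X_T(nu)|^2} / N over K realizations.
periodograms = np.abs(np.fft.fft(x, axis=1)) ** 2 / N
psd_estimate = periodograms.mean(axis=0)
print(psd_estimate.mean())
```

Averaging over realizations flattens the very noisy single-realization periodograms toward the theoretical constant PSD.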
XVI. Coherency
The coherence function between x(t) and y(t) is defined as
$$C_{XY}(\nu) = \frac{\Phi_{XY}(\nu)}{\sqrt{\Phi_{XX}(\nu)\, \Phi_{YY}(\nu)}}$$
Warning: in this expression, the signals are supposed to be random, second-order stationary and centered (statistical average = 0).
Property: $|C_{XY}(\nu)| \le 1$.
Proof: we call short-time Fourier transform
$$X_T(\nu) = \int_{-T/2}^{+T/2} x(t)\, e^{-2j\pi\nu t}\, dt$$
In the space of random variables, we define the scalar product $\langle u, v \rangle = E[u\, v^*]$. Applying the Schwartz inequality to the random variables $X_T(\nu)$ and $Y_T(\nu)$:
$$\left| E[X_T(\nu)\, Y_T^*(\nu)] \right| \le \sqrt{E\big[|X_T(\nu)|^2\big]}\, \sqrt{E\big[|Y_T(\nu)|^2\big]}$$
After division by T and passing to the limit, this gives $|\Phi_{XY}(\nu)| \le \sqrt{\Phi_{XX}(\nu)\, \Phi_{YY}(\nu)}$, i.e. $|C_{XY}(\nu)| \le 1$.
If $|C_{XY}(\nu)| = 1$ for every $\nu$, the Schwartz inequality becomes an equality and $Y_T(\nu)$ is proportional to $X_T(\nu)$: there is a linear and homogeneous system between X and Y,
$$Y_T(\nu) = \alpha(\nu)\, X_T(\nu)$$
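The bound can be checked with a crude Welch-style estimate (a sketch, NumPy only; y is an assumed noisy copy of x, for which the theoretical squared coherence is $1/(1 + 0.25) = 0.8$ at every frequency):

```python
import numpy as np

rng = np.random.default_rng(2)
K, L = 200, 256                             # number of segments, segment length
x = rng.standard_normal((K, L))
y = x + 0.5 * rng.standard_normal((K, L))   # linear relation plus noise

X = np.fft.fft(x, axis=1)
Y = np.fft.fft(y, axis=1)
Sxy = np.mean(X * np.conj(Y), axis=0)       # cross spectral density estimate
Sxx = np.mean(np.abs(X) ** 2, axis=0)
Syy = np.mean(np.abs(Y) ** 2, axis=0)
Cxy2 = np.abs(Sxy) ** 2 / (Sxx * Syy)       # magnitude-squared coherence
print(Cxy2.min(), Cxy2.max(), Cxy2.mean())
```

By the Schwartz inequality applied to the segment averages, the estimate never exceeds 1, and it hovers around the theoretical 0.8.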
Random signal through a linear filter:
The output can be written as a convolution product: $y(t) = \int h(\theta)\, x(t - \theta)\, d\theta$.
Mean of the output: $E\{y(t)\} = M_x \int h(\theta)\, d\theta$ is a constant. As $G(\nu)$ is the Fourier transform of $h(\theta)$, we obtain
$$E[y(t)] = M_x \int h(\theta)\, e^{-2j\pi 0\, \theta}\, d\theta = M_x\, G(0)$$
Input-output cross-correlation:
$$E\{y(t)\, x^*(t - \tau)\} = \int h(\theta)\, E\{\underbrace{x(t - \theta)\, x^*(t - \tau)}_{\text{lag } \tau - \theta}\}\, d\theta$$
If x(t) is a second-order stationary signal, we can see that the right-hand side does not depend on time. That means we obtain joint stationarity between input and output of the filter:
$$E\{y(t)\, x^*(t - \tau)\} = \int h(\theta)\, \varphi_{xx}(\tau - \theta)\, d\theta \quad\Rightarrow\quad \varphi_{xy}(\tau) = h * \varphi_{xx}(\tau)$$
We now calculate the cross-correlation function between two outputs, defined by $y_1(t) = \int h_1(\theta)\, x_1(t - \theta)\, d\theta$ and $y_2(t) = \int h_2(\theta')\, x_2(t - \theta')\, d\theta'$:
$$E\{y_1(t)\, y_2^*(t - \tau)\} = \iint h_1(\theta)\, h_2^*(\theta')\, E\{x_1(t - \theta)\, x_2^*(t - \tau - \theta')\}\, d\theta\, d\theta'$$
We recognize the cross-correlation function of the two inputs in the double integral. Under these conditions, the right-hand side is not time dependent: the two outputs are jointly second-order stationary, with
$$\varphi_{y_1 y_2}(\tau) = \iint h_1(\theta)\, h_2^*(\theta')\, \varphi_{x_1 x_2}(\tau + \theta' - \theta)\, d\theta\, d\theta'$$
In practice this formula is not used directly, so we look for an equivalent, simpler expression. Taking the Fourier transform of both sides, we obtain:
$$\Phi_{y_1 y_2}(\nu) = G_1(\nu)\, G_2^*(\nu)\, \Phi_{x_1 x_2}(\nu)$$
We can see that if the two filters have non-overlapping bandwidths, the product $G_1(\nu)\, G_2^*(\nu) = 0$, so under this condition $\Phi_{y_1 y_2}(\nu) = 0$: the outputs of the two filters are decorrelated.
Additionally, if the two inputs are jointly Gaussian, decorrelation implies independence
of the two outputs.
Particular case: $x_1 = x_2 = x$.
(Figure: the same signal x(t) feeds two filters $h_1$ and $h_2$, with outputs $y_1$ and $y_2$.)
If x is second-order stationary, we have:
$$\Phi_{y_1 y_2}(\nu) = G_1(\nu)\, G_2^*(\nu)\, \Phi_{XX}(\nu)$$
Spectral analysis by filtering: we want to estimate $\Phi_{XX}(\nu_0)$. We filter the signal by a band-pass filter with central frequency $\nu_0$ and bandwidth $\Delta\nu$. The power of the output y(t) is equal to the integral of its Power Spectral Density (the dashed areas around $+\nu_0$ and $-\nu_0$ on the figure):
$$\Phi_{YY}(\nu) = |G(\nu)|^2\, \Phi_{XX}(\nu), \qquad P_Y = \int \Phi_{YY}(\nu)\, d\nu \approx 2\, \Delta\nu\, \Phi_{XX}(\nu_0)$$
Estimating $P_Y$ by time integration of $y^2(t)$ over a duration T gives
$$\hat{\Phi}_{XX}(\nu_0) = \frac{1}{2\, \Delta\nu\, T} \int_{(T)} y^2(t)\, dt$$
We can see that the estimation depends on the $\Delta\nu \cdot T$ product. Characterization of this estimator by its bias and variance shows that the variance depends on the same product: to reduce the variance, this product must be as large as possible. $\Delta\nu$ is the frequency resolution of the spectral analyzer; T is the integration time. This technique is called FQI spectral analysis (Filtering, Quadration, Integration).
To estimate a P.S.D. value at another frequency $\nu_1$, the same procedure is used with a filter centered on $\nu_1$; the computation for all frequencies is made in parallel (filter bank).
By choosing adjacent filters (non-overlapping bandwidths whose union covers the full spectrum), we describe the power of x(t). There is no power exchange between the outputs, because there is no interference: their Cross Power Spectral Densities are null.
Band-limited noise: consider x(t) with a flat PSD over a band B:
$$\Phi_{XX}(\nu) = N_0 \text{ if } |\nu| \le B/2, \qquad \Phi_{XX}(\nu) = 0 \text{ if } |\nu| > B/2$$
Its autocorrelation is the Fourier transform of this rectangle:
$$\varphi_{xx}(\tau) = \frac{N_0 \sin(\pi B \tau)}{\pi\tau} = B N_0\, \mathrm{sinc}(\pi B \tau), \qquad \varphi_{xx}(0) = B N_0$$
with $\mathrm{sinc}(u) = \sin(u)/u$; the autocorrelation vanishes at multiples of 1/B.
XX.
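The sinc autocorrelation can be recovered numerically as the inverse transform of the rectangular PSD (a sketch; the values of B and N0 are assumptions):

```python
import numpy as np

B, N0 = 100.0, 2.0
nu = np.linspace(-B / 2, B / 2, 4001)
dnu = nu[1] - nu[0]

def phi_num(tau):
    """phi_xx(tau) = integral of N0 e^{2j pi nu tau} over [-B/2, B/2]."""
    y = N0 * np.cos(2 * np.pi * nu * tau)       # imaginary part cancels
    return np.sum((y[1:] + y[:-1]) / 2) * dnu   # trapezoid rule

taus = np.linspace(-0.1, 0.1, 11)
phi_ana = B * N0 * np.sinc(B * taus)            # np.sinc(u) = sin(pi u)/(pi u)
err = max(abs(phi_num(ti) - pa) for ti, pa in zip(taus, phi_ana))
print(err, phi_num(0.0))
```

At zero lag the integral reduces to the area $B N_0$ of the rectangle, and at other lags the numerical integral follows the sinc curve.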
White noise
If the bandwidth B of x(t) goes to infinity, the signal x(t) approaches the theoretical
model called white noise: $b(t) = \lim_{B\to\infty} x(t)$.
Its Average Power Spectral Density is the constant function $\Phi_{bb}(\nu) = N_0$. That means the average power of
this signal is infinite: white noise is not realistic and should be seen as a theoretical model.
The corresponding autocorrelation function tends to a Dirac at lag 0:
$$\varphi_{bb}(\tau) = N_0\, \delta(\tau)$$
Consequently, two observations of this signal at two different times are uncorrelated.
We call this signal white noise with microscopic correlation.
XXI. Generation of random signals
We define a signal by the system which creates it (an input e(t) and a filter). We want to define the input and a filter class allowing us to generate all second-order stationary random signals. This is possible using the linear and homogeneous filter class. Choosing as input a second-order stationary signal e(t), we obtain:
$$\Phi_{XX}(\nu) = |G(\nu)|^2\, \Phi_{ee}(\nu)$$
In particular, with a white-noise input ($\Phi_{ee}(\nu) = N_0$), we obtain $\Phi_{XX}(\nu) = N_0\, |G(\nu)|^2$: the filter gain $|G(\nu)|$ follows from the desired $\Phi_{XX}(\nu)$.
XXII. Identification of the impulse response h(t)
A. 1st method: direct approach
By definition, the impulse response is the Dirac response of the filter. This method is strictly
impossible to realize, as a Dirac cannot be created (infinite amplitude).
B. 2nd method: step response
In control engineering, people use the step response, which is the response to a
step function. Technically it is a more suitable approach, but the step is applied over a
limited time duration (it looks like a boxcar function), so we obtain an approximation of the step
response. The impulse response is the derivative of the step response, and differentiation makes this an
imprecise technique for recovering the impulse response.
C. 3rd method: identification by cross-correlation between input and output of the filter
As seen previously, this technique uses white noise as the input of the filter. To recover the
impulse response, it is necessary to compute the cross-correlation between input and
output:
$$\varphi_{se}(\tau) = h * \varphi_{ee}(\tau) = N_0\, h(\tau)$$
or, in the frequency domain, $\Phi_{se}(\nu) = G(\nu)\, \Phi_{ee}(\nu) = N_0\, G(\nu)$.
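A numerical sketch of the third method, with an assumed exponential impulse response and unit-variance white noise ($N_0 = 1$):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000
e = rng.standard_normal(N)                 # white-noise input, N0 = 1

h = np.exp(-np.arange(50) / 10.0)          # assumed impulse response
s = np.convolve(e, h)[:N]                  # filter output s(t)

# phi_se(tau) = E{ s(t) e(t - tau) }, estimated by time averaging.
taus = np.arange(50)
phi_se = np.array([np.mean(s[t:] * e[:N - t]) for t in taus])

err = np.max(np.abs(phi_se - h))           # should recover h, since N0 = 1
print(err)
```

The estimated cross-correlation reproduces h(τ) up to the statistical fluctuation of the finite time average, which shrinks as the record length grows.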