
Signal Processing

for geologists and geophysicists


@ Institut Teknologi Bandung

Prof. Jérôme Mars

Department Images and Signal
Grenoble Image Parole Signal Automatique Lab. (GIPSA-lab)
http://www.gipsa-lab.grenoble-inp.fr/

École Nationale Supérieure de l'Eau, de l'Énergie et de l'Environnement
961, Rue de la Houille Blanche, BP 46, 38402 Saint Martin d'Hères Cedex, FRANCE
Tél. +33 (0)4 76 62 62 00  Fax +33 (0)4 76 82 63 01
Internet: http://ense3.grenoble-inp.fr  E-mail: jerome.mars@grenoble-inp.fr


Signal Processing

This course is a short set of notes extracted from the Signal Processing course (in French)
delivered by Prof. J. Mars, Prof. J. Chanussot and Asst. Prof. P. Granjon at Grenoble
Institute of Technology.
A CD-ROM (Signal Processing for Geosciences) with 82 lessons is also used to illustrate
this course extensively. For more information, contact Prof. J. Mars.

INTRODUCTION
To send information we use a physical quantity. This information is time dependent and
has variations (a speech signal is a temporal variation of acoustic pressure, an image
has grey-level variations, a mobile phone carries a converted electromagnetic signal).
A signal can be natural (speech, star vibration, earthquake) or artificial (motor engine
vibration, etc.). The signal propagates through a transmission channel (atmosphere,
underwater medium, cable). It is recorded on a sensor (antenna, geophone, hydrophone).
During propagation, the signal can be modified by distortion and/or by additive noise
coming from the environment. It is therefore necessary to process the recorded signal in
order to recover the original signal without noise.

This course is divided into two parts:

- Deterministic signals, whose temporal evolution is known (past, present, future).

- Random signals, for which a deterministic description is impossible (no knowledge of
  the phenomena, unknown parameters).

DETERMINISTIC SIGNAL
I. Axiom
Classically, to describe a signal, we use the temporal representation (the natural
representation).

A signal x(t) is a real (or complex) function of the real variable t:
- with bounded modulus,
- continuous.

The set of all such signals has a vector-space structure (stability under addition and
multiplication by a scalar: for example, the sum of two voltage functions is a voltage
function). In this space, we can define a scalar product, noted ⟨x, y⟩, an associated
norm:

    ‖x‖ = √⟨x, x⟩

and a distance:

    d(x, y) = ‖x − y‖

If this space has finite dimension n, each vector can be written as a linear combination
over a basis {e_i(t)}:

    x(t) = Σ_{i=1}^{n} α_i e_i(t)

If this space has infinite dimension, we can find an infinite discrete basis e1(t), …,
en(t), … such that ∀ε > 0, ∀x(t), ∃N finite with

    ‖x(t) − Σ_{i=1}^{N} α_i e_i(t)‖ < ε, and we can write: x(t) = Σ_{i=1}^{∞} α_i e_i(t)

II. Representation
Temporal representation is not always the best way to describe a signal. Example: a
cosine function is completely defined by its frequency, phase and amplitude (3
parameters). We can pass from one type of representation to another by bijective
transformations, which preserve the vector-space structure (scalar product ⟨x, y⟩,
norm ‖x‖, distance d(x, y)).

Going from the temporal representation x(t) to the X(ν) representation can be written as:

    X(ν) = ∫ K(ν, t) x(t) dt

where K(ν, t) is the kernel ("noyau") of the transformation. The inverse transformation is:

    x(t) = ∫_Ω K⁻¹(ν, t) X(ν) dν

Classical examples are the Laplace and Fourier transforms:

    Laplace: K(s, t) = e^{−st},     Ω = [0, +∞[,    ν = s
    Fourier: K(ν, t) = e^{−2jπνt},  Ω = ]−∞, +∞[,   ν real

Recall: x(t) admits a Fourier transform (FT) if

    x(t) ∈ L1 : ∫ |x(t)| dt converges (CVG)
    x(t) ∈ L2 : ∫ |x(t)|² dt converges (CVG)

III. Finite Energy space: (L2)


Definition: a real function x(t) has finite energy if ∫_{−∞}^{+∞} x²(t) dt converges. We
then say x(t) ∈ L2. The quantity

    E12 = ∫_{t1}^{t2} x²(t) dt

is the energy of the signal between t1 and t2. The energy of the signal is:

    E_x = ∫ |x(t)|² dt

where |x(t)|² is the energy density (instantaneous power). The extension to the complex
case is natural by using the square modulus.

We define the scalar product between x(t) and y(t) as: ⟨x(t), y(t)⟩ = ∫ x(t) y*(t) dt

In this case, the energy is the square norm: ‖x(t)‖² = ∫ |x(t)|² dt = E_x

The distance between x(t) and y(t) is: d(x, y) = √( ∫ |x(t) − y(t)|² dt )

The cross-correlation function for delay τ between x(t) and y(t) is the scalar product
⟨x, y_τ⟩ between x(t) and y(t−τ):

    Γ_xy(τ) = ∫ x(t) y*(t−τ) dt

τ is the temporal delay between the two signals.

If x(t) = y(t), we obtain the autocorrelation function of x(t), defined as:

    Γ_xx(τ) = ∫ x(t) x*(t−τ) dt

Correlation properties:

1. Γ_xx(0) = ∫ |x(t)|² dt = E_x : the correlation function at τ = 0 is the energy of the
   signal.

2. With the change of variable t − τ = u: Γ_xx(−τ) = ∫ x(u) x*(u + τ) du, so

       Γ_xx(−τ) = Γ*_xx(τ) : hermitian symmetry.

3. We observe that ∫ |x(t) − x(t−τ)|² dt ≥ 0 (integral of positive values). Expanding:
   2Γ_xx(0) − 2 Re ∫ x(t) x*(t−τ) dt ≥ 0, so Γ_xx(0) ≥ Re ∫ x(t) x*(t−τ) dt, hence

       for all τ: Γ_xx(0) ≥ Re[Γ_xx(τ)]

   In the case where x(t) is a real function: Γ_xx(0) = E_x, Γ_xx(τ) is symmetric, and
   the correlation function is maximum at 0: Γ_xx(0) is a global maximum.

4. For the cross-correlation function: Γ_xy(τ) = Γ*_yx(−τ).
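The correlation properties above can be checked numerically on a discrete-time signal. A minimal numpy sketch (the signal is illustrative, and the sampling step is taken as 1):

```python
import numpy as np

# Discrete-time check of the correlation properties above.
rng = np.random.default_rng(0)
x = rng.standard_normal(256)

# Energy E_x = sum |x[n]|^2  (discrete analogue of the integral)
E_x = np.sum(np.abs(x) ** 2)

# Autocorrelation Gamma_xx[tau] for all integer lags
gamma = np.correlate(x, x, mode="full")
zero_lag = len(x) - 1            # index of tau = 0 in the 'full' output

assert np.isclose(gamma[zero_lag], E_x)   # property 1: Gamma_xx(0) = E_x
assert np.allclose(gamma, gamma[::-1])    # property 2: symmetry (real signal)
assert np.argmax(gamma) == zero_lag       # property 3: global maximum at 0
```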

Frequential representation:
A signal x(t) ∈ L2 admits a frequential representation X(ν) by Fourier transform,
defined as:

    X(ν) = lim_{T→∞} ∫_{−T/2}^{+T/2} x(t) e^{−2jπνt} dt

Properties:
1. Parseval theorem: ∫ x(t) y*(t) dt = ∫ X(ν) Y*(ν) dν

   With x(t) = y(t), we have:

       ∫ |x(t)|² dt = ∫ |X(ν)|² dν = E_x

   The energy is the same in the temporal and in the frequential representation.
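A quick discrete analogue of the Parseval theorem, using the FFT (with numpy's convention, the frequency-side sum must be divided by N):

```python
import numpy as np

# Discrete Parseval check: sum |x[n]|^2 == (1/N) sum |X[k]|^2
rng = np.random.default_rng(1)
x = rng.standard_normal(128)
X = np.fft.fft(x)

E_time = np.sum(np.abs(x) ** 2)
E_freq = np.sum(np.abs(X) ** 2) / len(x)
assert np.isclose(E_time, E_freq)
```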


|X(ν)|², a real and positive function, is the energy spectral density (D.S.E., densité
spectrale d'énergie).

If x(t) is a real function, |X(ν)|² is symmetric. In fact:

    X(ν) = ∫ x(t) e^{−2jπνt} dt  and  X(−ν) = ∫ x(t) e^{+2jπνt} dt = X*(ν)

so the function |X(−ν)|² = X(−ν) X*(−ν) = X*(ν) X(ν) = |X(ν)|² is indeed symmetric.

2. Plancherel theorem: a convolution product becomes an ordinary product by Fourier
   transform:

       TF[(a ∗ b)(t)] = A(ν) · B(ν)

   By using this formula with A(ν) = X(ν) and B(ν) = Y*(ν), that is a(t) = x(t) and

       b(t) = ∫ Y*(ν) e^{+2jπνt} dν = y*(−t)

   the convolution product is:

       (a ∗ b)(τ) = ∫ a(t) b(τ − t) dt = ∫ x(t) y*(t − τ) dt = Γ_xy(τ)

   and therefore: TF[Γ_xy(τ)] = X(ν) Y*(ν)

3. If x = y, we find the energy spectral density, which is the Fourier transform of the
   autocorrelation function: TF[Γ_xx(τ)] = |X(ν)|².
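The Plancherel theorem has an exact discrete counterpart, provided the FFT length is large enough to avoid circular wrap-around. A small sketch (the sequences are illustrative):

```python
import numpy as np

# Convolution theorem, discrete analogue: the FFT of a (zero-padded) linear
# convolution equals the product of the FFTs.
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, -1.0, 0.25])
conv_direct = np.convolve(a, b)            # linear convolution, length 5
N = 8                                      # FFT size >= 5: no wrap-around
conv_fft = np.fft.ifft(np.fft.fft(a, N) * np.fft.fft(b, N)).real[:len(conv_direct)]
assert np.allclose(conv_direct, conv_fft)
```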

4. Energy of y(t) = x1(t) + x2(t):

       E_y = ∫ |y(t)|² dt = E_x1 + E_x2 + 2 Re ∫ x1(t) x2*(t) dt
       E_y = E_x1 + E_x2 + 2 Re ⟨x1, x2⟩

   We observe that E_y = E_x1 + E_x2 if ⟨x1, x2⟩ = 0: if x1 and x2 are orthogonal, there
   is no exchange of energy between the signals. The interaction energy is defined in
   time or in frequency (Parseval theorem) as:

       E12 = ∫ x1(t) x2*(t) dt = ∫ X1(ν) X2*(ν) dν

   where x1(t) x2*(t) is the instantaneous interaction power and X1(ν) X2*(ν) is the
   interaction spectral density: the product X1(ν) X2*(ν) is the cross spectral density.

Observations: x(t) and x(t−θ) have the same energy spectral density. Indeed, if
X(ν) = TF[x(t)], with E.S.D. |X(ν)|², then:

    TF{x(t−θ)} = X(ν) e^{−2jπνθ}, whose E.S.D. is |X(ν) e^{−2jπνθ}|² = |X(ν)|²

IV. Finite Average Power signals

Definition: if the signal does not have finite energy, ∫ |x(t)|² dt does not exist. We
define:

    x_T(t) = x(t) if t ∈ [−T/2, T/2]
    x_T(t) = 0    if t ∉ [−T/2, T/2]

Under these conditions x_T(t) ∈ L2; it is the observation of the signal x(t) on a time
duration T.

We define the average power P_T of x(t) on [−T/2, T/2] by:

    P_T = E_T / T = (1/T) ∫_{−T/2}^{+T/2} |x_T(t)|² dt

We notice that P_T ≤ A², ∀T, because |x(t)| ≤ A (x(t) is bounded), so P_T cannot diverge
when T goes to infinity.

We now suppose that P_x = lim_{T→∞} P_T is finite. This limit is called the average
power of x(t).
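As a worked example, the average power of the periodic signal A cos(2πν₀t) is A²/2. A numerical sketch over a window containing whole periods (values illustrative):

```python
import numpy as np

# Average power of A*cos(2*pi*nu0*t) over whole periods tends to A^2/2.
A, nu0 = 2.0, 5.0
T = 10.0                                   # 50 whole periods
t = np.linspace(0.0, T, 100000, endpoint=False)
x = A * np.cos(2 * np.pi * nu0 * t)
P_T = np.mean(x ** 2)                      # (1/T) * integral of |x|^2
assert np.isclose(P_T, A ** 2 / 2, rtol=1e-4)
```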

Frequency representation:
x_T(t) admits a Fourier transform:

    X_T(ν) = ∫ x_T(t) e^{−2jπνt} dt = ∫_{−T/2}^{+T/2} x(t) e^{−2jπνt} dt

In the same way, with another signal y(t), y_T(t) ∈ L2 and y_T(t) → Y_T(ν). By using the
Parseval theorem, we have:

    ∫ x_T(t) y_T*(t) dt = ∫ X_T(ν) Y_T*(ν) dν,
    i.e. ∫_{−T/2}^{+T/2} x(t) y*(t) dt = ∫ X_T(ν) Y_T*(ν) dν

Average power spectral density (d.s.p.m.):
With x = y, and dividing by T, we have:

    (1/T) ∫_{−T/2}^{+T/2} |x(t)|² dt = ∫ |X_T(ν)|²/T dν

If T goes to infinity, the first integral is the average power of x(t), so |X_T(ν)|²/T
is a power density. We can define:

    Γ_xx(ν) = lim_{T→∞} |X_T(ν)|²/T

as the average power spectral density of x(t).

Cross average power spectral density:
Consider the average power of the sum of two signals x1(t) and x2(t):

    P_M[x1+x2] = lim_{T→∞} (1/T) ∫_{−T/2}^{+T/2} |x1(t) + x2(t)|² dt
    P_M[x1+x2] = P_M[x1] + P_M[x2] + 2 Re lim_{T→∞} (1/T) ∫_{−T/2}^{+T/2} x1(t) x2*(t) dt

We see that lim_{T→∞} (1/T) ∫_{−T/2}^{+T/2} x1(t) x2*(t) dt = ⟨x1, x2⟩, which is the
interaction power between x1 and x2: this is the scalar product for this class of
signals. The square norm of x is ‖x‖² = P_x.

With Parseval we can write the same expression in the frequency domain:

    lim_{T→∞} (1/T) ∫ x_T(t) y_T*(t) dt = lim_{T→∞} (1/T) ∫ X_T(ν) Y_T*(ν) dν

We can therefore define the cross average power spectral density by
lim_{T→∞} (1/T) X_T(ν) Y_T*(ν).

Correlation function:
We call cross-correlation function Γ_xy(τ) = ⟨x, y_τ⟩, with:

    Γ_xy(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{+T/2} x(t) y*(t−τ) dt

and autocorrelation function:

    Γ_xx(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{+T/2} x(t) x*(t−τ) dt

Properties of the correlation function:
1. Γ_xx(0) = P_x
2. Γ_xx(−τ) = Γ*_xx(τ) (hermitian symmetry)
3. Γ_xx(0) ≥ Re[Γ_xx(τ)], ∀τ
4. TF[Γ_xx(τ)] = Average Power Spectral Density;
   TF[Γ_xy(τ)] = Cross Power Spectral Density for the cross-correlation function.

LINEAR FILTERING
V. Linear (and homogeneous) filtering

    x(t) → [ L.H. filter ] → y(t)

The output y(t) of a linear filter excited by an input signal x(t) can be written as a
differential equation:

    Σ_{q=0}^{n} a_q d^q y(t)/dt^q = Σ_{r=0}^{m} b_r d^r x(t)/dt^r

The filter is homogeneous (time-invariant) if the coefficients a_q and b_r do not depend
on time. n is the filter order.

VI. Case of finite energy signals

If x(t) is a finite energy signal, then X(ν) exists. By using the Fourier transform
properties, we obtain:

    TF[x^{(r)}(t)] = (2jπν)^r X(ν)  and  TF[y^{(q)}(t)] = (2jπν)^q Y(ν)

The filter equation becomes Y(ν) Σ_{q=0}^{n} a_q (2jπν)^q = X(ν) Σ_{r=0}^{m} b_r (2jπν)^r,
and we obtain:

    Y(ν) = G(ν) X(ν),  with  G(ν) = [Σ_{r=0}^{m} b_r (2jπν)^r] / [Σ_{q=0}^{n} a_q (2jπν)^q]

with G(ν) the complex gain.


The output y(t) of a filter can also be written as a convolution product between x(t)
and h(t), the impulse response:

    y(t) = (h ∗ x)(t) = ∫ h(θ) x(t−θ) dθ

h(t), the inverse Fourier transform of G(ν), is the system impulse response:
h(t) = TF⁻¹{G(ν)}.

We can write the following equations:

    Γ_YY(ν) = |Y(ν)|² = |G(ν)|² |X(ν)|² = |G(ν)|² Γ_XX(ν) : the output E.S.D. is equal
    to the product of the input E.S.D. and the square modulus of the filter complex
    gain; Γ_xx(ν) = 0 implies Γ_yy(ν) = 0.

    Arg Y(ν) = Arg G(ν) + Arg X(ν) : the filter induces a phase shift in each frequency
    channel.

Energy of y: E_Y = ∫ |Y(ν)|² dν = ∫ |G(ν)|² |X(ν)|² dν. If |G(ν)| < A, then
E_Y < A² E_X: if x(t) is a finite energy signal, then the output y(t) is a finite
energy signal.

VII. Case of Average Power signals

The relation Y(ν) = G(ν) X(ν) is still correct even if X(ν) is defined in the sense of
distributions. We observe that:

    Y(ν) = G(ν) if X(ν) = 1,  i.e.  y(t) = h(t) if x(t) = δ(t)

Eigenfunctions of a linear filter:
Harmonic signals x(t) = e^{2jπν₀t} are eigenfunctions of linear filters.

    x(t) → [ h(t), G(ν) ] → y(t) = F_h[x(t)]

In fact we obtain X(ν) = δ(ν − ν₀) and Y(ν) = G(ν) X(ν), so:

    Y(ν) = G(ν) δ(ν − ν₀) = G(ν₀) δ(ν − ν₀)
    y(t) = G(ν₀) e^{2jπν₀t}
    F_h[x(t)] = G(ν₀) x(t), where F_h is the transformation by the filter.

The filtered output is proportional to the input.

Case of a real filter on a real signal (harmonic decomposition):

Let h(t) and x(t) be real signals with x(t) = A cos(2πν₀t + φ); we set φ = 2πν₀t₀, so
x(t) = A cos(2πν₀(t + t₀)). Its Fourier transform is:

    X(ν) = (A/2) [δ(ν − ν₀) + δ(ν + ν₀)] e^{2jπνt₀}

G(ν) = K(ν) e^{jΦ(ν)} is the gain of a real filter, so its modulus K(ν) = |G(ν)| is
symmetric and its argument Φ(ν) = Arg G(ν) is antisymmetric. The Fourier transform of
the output is Y(ν) = X(ν) G(ν), so:

    Y(ν) = (A/2) [δ(ν − ν₀) + δ(ν + ν₀)] e^{2jπνt₀} K(ν) e^{jΦ(ν)}
    Y(ν) = (A/2) [δ(ν − ν₀) K(ν₀) e^{jΦ(ν₀)} e^{jφ} + δ(ν + ν₀) K(ν₀) e^{−jΦ(ν₀)} e^{−jφ}]

and in the time domain:

    y(t) = A K(ν₀) cos[2πν₀t + φ + Φ(ν₀)],  with  x(t) = A cos(2πν₀t + φ)

We observe that the output is still a sinusoid at the same frequency as the input, but
modified by an amplitude factor K(ν₀) and a phase shift Φ(ν₀). We can plot the modulus
and phase at different frequency steps (this is the harmonic decomposition).
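The amplitude factor K(ν₀) can be verified numerically: a discrete sinusoid through an FIR filter comes out as a sinusoid scaled by |G(ν₀)|. A sketch with an illustrative 3-tap filter (not from the notes):

```python
import numpy as np

# Harmonic decomposition sketch: a sinusoid through an LTI filter stays a
# sinusoid at the same frequency, with amplitude scaled by K(nu0) = |G(nu0)|.
h = np.array([0.25, 0.5, 0.25])            # illustrative low-pass FIR
nu0 = 0.1                                  # cycles per sample
n = np.arange(2000)
x = np.cos(2 * np.pi * nu0 * n)
y = np.convolve(x, h, mode="same")

# Complex gain at nu0: G(nu0) = sum_k h[k] exp(-2j pi nu0 k)
G = np.sum(h * np.exp(-2j * np.pi * nu0 * np.arange(len(h))))

# Output amplitude from the RMS over whole periods (edges cut off)
amp_out = np.sqrt(2 * np.mean(y[100:1900] ** 2))
assert np.isclose(amp_out, np.abs(G), rtol=1e-4)
```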

Input/output relationship:

We want to establish the expression of the P.S.D. of the output y(t).

    x(t) → [ h, G ] → y(t)

The convolution product is y(t) = ∫ h(u) x(t−u) du and y*(t−τ) = ∫ h*(v) x*(t−τ−v) dv.
So the correlation function of y(t) is:

    Γ_yy(τ) = lim_{T→∞} (1/T) ∫_{(T)} y(t) y*(t−τ) dt

and its P.S.D. is:

    Γ_YY(ν) = lim_{T→∞} (1/T) ∫∫∫∫ h(u) h*(v) x(t−u) x*(t−τ−v) du dv dt e^{−2jπντ} dτ

We compute this term with the change of variables:

    θ = t − u, u₁ = u, v₁ = v, τ' = τ − u + v  (so t = θ + u₁ and τ = u₁ − v₁ + τ')

The Jacobian of this change of variables equals 1. So:

    Γ_YY(ν) = lim (1/T) ∫∫∫∫ h(u₁) h*(v₁) x(θ) x*(θ−τ') e^{−2jπν(u₁ − v₁ + τ')} du₁ dv₁ dθ dτ'

And when T → ∞, the integral separates:

    Γ_YY(ν) = G(ν) G*(ν) Γ_XX(ν),  i.e.  Γ_YY(ν) = |G(ν)|² Γ_XX(ν)
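For the finite-energy case the same relation holds exactly in discrete time, which makes it easy to check: with enough zero-padding, the FFT of the filtered signal is the product of the FFTs. A sketch (signal and taps illustrative):

```python
import numpy as np

# Finite-energy discrete check of Gamma_YY(nu) = |G(nu)|^2 Gamma_XX(nu).
x = np.array([1.0, 2.0, -1.0, 0.5])        # illustrative input
h = np.array([0.5, 0.25])                  # illustrative impulse response
y = np.convolve(x, h)                      # linear convolution, length 5

N = 8                                      # FFT size >= len(y): no wrap-around
X = np.fft.fft(x, N)
G = np.fft.fft(h, N)
Y = np.fft.fft(y, N)
assert np.allclose(np.abs(Y) ** 2, np.abs(G) ** 2 * np.abs(X) ** 2)
```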

VIII. Wideband signal

Definition: x(t) is wideband (band-limited) if:
- x(t) is a finite energy signal,
- X(ν) = 0 for |ν| > ν_max/2 : the support of X(ν) is limited.

III. Shannon theorem

All signals are sampled before analysis, so it is very important to understand the
sampling effect.

Let x(t) ∈ BL2 (band-limited, finite energy). We call sampled signal:

    x_e(t) = x(t) · δ_T(t),  where  δ_T(t) = Σ_{k=−∞}^{+∞} δ(t − kT)

is a Dirac comb (whose Fourier transform is also a Dirac comb, with inverse spacing 1/T).

The Fourier transform of the sampled signal is computed as follows:

    x_e(t) = x(t) Σ_{k=−∞}^{+∞} δ(t − kT) = Σ_{k=−∞}^{+∞} x(t) δ(t − kT)

Using the property F(t) δ(t − t₀) = F(t₀) δ(t − t₀), we obtain:

    x_e(t) = Σ_{k=−∞}^{+∞} x(kT) δ(t − kT)

By Fourier transform:

    X_e(ν) = X(ν) ∗ (1/T) Σ_{p=−∞}^{+∞} δ(ν − p/T)

Using the property F(ν) ∗ δ(ν − ν₀) = F(ν − ν₀), we obtain:

    X_e(ν) = (1/T) Σ_{p=−∞}^{+∞} X(ν − p/T)

The sampled-signal spectrum is obtained by periodization of the initial spectrum.

For 1/T > ν_max, the patterns X(ν − p/T) do not overlap, and it is possible to recover
x(t) by applying an ideal low-pass filter on X_e(ν) (keeping the p = 0 pattern):

    x(t) → [ sampler ] → x_e(t) → [ low-pass filter, cutoff 1/2T ] → x(t) → computer

If G(ν) is equal to T·Π_{1/T}(ν), i.e. G(ν) = 0 if |ν| > 1/2T and G(ν) = T for
|ν| ≤ 1/2T, we obtain:

    X_e(ν) · G(ν) = (1/T) Σ_{p} X(ν − p/T) · G(ν) = X(ν)

Interpolation to recover the temporal signal:

X_e(ν) G(ν) = X(ν) corresponds in time to x(t) = x_e(t) ∗ g(t), with:

    g(t) = TF⁻¹[T Π_{1/T}(ν)] = sin(πt/T) / (πt/T)

Since x_e(t) = Σ_k x(kT) δ(t − kT), we obtain the Shannon interpolation formula:

    x(t) = Σ_{k=−∞}^{+∞} x(kT) · sin(π(t − kT)/T) / (π(t − kT)/T)

Recovery is exact if we know all the samples.
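The interpolation formula can be tried directly. A sketch with a 3 Hz cosine sampled at 10 Hz; since the sum is necessarily truncated, the recovery is only approximate:

```python
import numpy as np

# Shannon interpolation sketch: sample a band-limited cosine at 1/T > 2*nu0
# and rebuild an intermediate value from the samples (truncated sum).
nu0 = 3.0                  # signal frequency, Hz
T = 0.1                    # sampling period, 1/T = 10 Hz
k = np.arange(-500, 501)
samples = np.cos(2 * np.pi * nu0 * k * T)

def reconstruct(t):
    # x(t) = sum_k x(kT) sinc((t - kT)/T); np.sinc(u) = sin(pi u)/(pi u)
    return np.sum(samples * np.sinc((t - k * T) / T))

t = 0.0137                 # an arbitrary instant between two samples
assert abs(reconstruct(t) - np.cos(2 * np.pi * nu0 * t)) < 1e-2
```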

If the signal has infinite bandwidth, to avoid the aliasing effect it is necessary to
filter it with an analog low-pass filter before the sampling operation.

Example: sampling of x(t) = A cos(2πν₀t) gives a line spectrum periodized at multiples
of ν_e = 1/T (spectrum of the sampled sinusoid).

IV. Amplitude modulation

Transmission by amplitude modulation is defined as: x_am(t) = x(t) cos(2πν₀t), with its
Fourier transform:

    X_am(ν) = (1/2) [X(ν − ν₀) + X(ν + ν₀)]

For ν₀ larger than the bandwidth of x(t), the two terms do not overlap, and the E.S.D. is:

    |X_am(ν)|² = (1/4) [|X(ν − ν₀)|² + |X(ν + ν₀)|²]

This is the modulation effect. The energy of x(t) is E_x = ∫ |X(ν)|² dν, and the energy
of the modulated signal is:

    E_x_am = ∫ |X_am(ν)|² dν = E_x / 2

We obtain the same result in the time domain:

    E_x_am = ∫ x_am²(t) dt = ∫ x²(t) cos²(2πν₀t) dt = (1/2) ∫ x²(t) (1 + cos(4πν₀t)) dt
    E_x_am = E_x/2 + (1/2) ∫ x²(t) cos(4πν₀t) dt  (the second term is null)

Correlation of the modulated signal:

    Γ_x_am(τ) = TF⁻¹{|X_am(ν)|²} = (1/4) Γ_xx(τ) · 2 cos(2πν₀τ) = (1/2) Γ_xx(τ) cos(2πν₀τ)
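The time-domain energy computation above can be checked numerically: when the carrier is far above the signal band, the cos(4πν₀t) term averages out. A sketch (pulse shape, carrier and sampling rate are illustrative):

```python
import numpy as np

# AM energy sketch: E_am = E_x / 2 when nu0 is well above the signal band.
fs = 1000.0
t = np.arange(0.0, 2.0, 1 / fs)
x = np.exp(-((t - 1.0) ** 2) / 0.01)       # smooth low-frequency pulse
nu0 = 200.0
x_am = x * np.cos(2 * np.pi * nu0 * t)

E_x = np.sum(x ** 2) / fs                  # discrete approximations of the
E_am = np.sum(x_am ** 2) / fs              # energy integrals
assert np.isclose(E_am, E_x / 2, rtol=1e-3)
```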

STATIONARY RANDOM SIGNAL

Overview:
Some complex phenomena cannot be characterized by deterministic equations. A random
model is necessary to describe them and to extract information on these signals in a
statistical way. The random character is based on the non-reproducibility of the
observed phenomena. For example, two experiments made in the same conditions do not give
strictly the same result, but we can observe similar evolutions or close variations in
the results. Such signals are not predictable: past knowledge cannot give information on
the future without error.

IX. Random Signal

A signal is a random signal if its observation at times (t_i)_{i=1..n}, i.e. x(t₁),
x(t₂), …, x(t_n), is an n-dimensional random variable, ∀t_i, ∀n.
Such a signal has a repartition (distribution) function:

    F_2n(t₁, x₁; t₂, x₂; …; t_n, x_n) = P(X(t₁) < x₁ ∩ X(t₂) < x₂ ∩ … ∩ X(t_n) < x_n)

If this function exists ∀t_i and ∀n, the random signal is said to have a temporal law
("loi temporelle").

X. Stationarity

x(t) is stationary in the strict sense if F_2n does not change under a time shift θ:

    F_2n(t₁, x₁; t₂, x₂; …; t_n, x_n) = F_2n(t₁+θ, x₁; t₂+θ, x₂; …; t_n+θ, x_n)

This condition should hold ∀n, ∀t_i, ∀θ.
Study at one time:


for n = 1, x(t1) is a one-dimensional random variable, t1 :
F2 (x1, t1 ) = P{x(t1) < x1} is called first repartition function.
F2 (x1, t1)
Derivative of this function
= f2 (x1, t1 ) gives the density of probability (noted
x1
d.d.p.). From this d.d.p., we can calculate the expected mathematical value (or the
statistical averaged value of the random variable X(t1) :
E{x (t1 )} = x1 f2 (x1, t1)dx1 = Mx (t1 )

if t1 is varying, this quantity characterizes the evolution of statistical average in time.

A. Stationary for n = 1 :
In case of n=1, stationary can be written: F2 (x1, t1 ) = F2 (x1, t1 + ) , t1
In particular, for = -t1, we obtain: F2 (x1, t1) = F2 (x1,0 )
That means repartition function is non-time dependant.
f2 (x1, t1 ) = f2 (x1,0 ) [d.d.p.]
It the same for the d.d.p.
Page 17 sur 28

Signal Processing

And for the expected mathematical value


E{x(t1 )} = cste
If a signal is stationary, its statistical average must be non time dependant
Mx (t1) = cste = Mx .

B. Stationarity for n = 2:
For n = 2, (x(t₁), x(t₂)) is a two-dimensional random variable. Its repartition function
is:

    F₄(x₁, t₁; x₂, t₂) = P(x(t₁) < x₁ ∩ x(t₂) < x₂)

and its d.d.p. is: ∂²F₄ / (∂x₁ ∂x₂) = f₄(x₁, t₁; x₂, t₂).
In the same way, we can calculate the second-order cross moment:

    E{x(t₁) x(t₂)} = ∫∫ x₁ x₂ f₄(x₁, t₁; x₂, t₂) dx₁ dx₂

This quantity gives information on the possible dependence between x(t₁) and x(t₂).

Prerequisite: if u and v are two independent random variables, the d.d.p. of the couple
is separable (it equals the product of the individual d.d.p.s): f_uv(u, v) = f_u(u) f_v(v).
With E{uv} = ∫∫ uv f_uv(u, v) du dv, independence implies E{uv} = E{u} E{v}.

Strict-sense stationarity can be written: F₄(x₁, t₁; x₂, t₂) = F₄(x₁, t₁+θ; x₂, t₂+θ).
For θ = −t₁, we obtain: F₄(x₁, t₁; x₂, t₂) = F₄(x₁, 0; x₂, t₂−t₁).
The repartition function depends only on t₂−t₁, not on t₂ and t₁ independently. We
deduce the same property for the d.d.p. of the couple, f₄(x₁, 0; x₂, t₂−t₁), and for the
cross moment:

    E{x(t₁) x(t₂)} = Γ_xx(t₂ − t₁)

where Γ_xx is the autocorrelation function of x.

To summarize: strict-sense stationarity implies many properties, but we keep only the
two following ones. Every signal which has these two properties is said to be
second-order stationary:

    n = 1 : E{x(t₁)} = constant
    n = 2 : E{x(t₁) x(t₂)} = Γ_xx(t₂ − t₁)

XI. Study with two random signals

Let x(t) and y(t) be two random signals. We observe x(t) at times t₁, t₂, …, t_n and
y(t) at times t'₁, t'₂, …, t'_m.

(x(t₁), …, x(t_n), y(t'₁), …, y(t'_m)) is an (n+m)-dimensional random variable. Its
repartition function is given by F_2(n+m)(x₁, t₁, …, x_n, t_n; y₁, t'₁, …, y_m, t'_m),
∀t_i, ∀t'_j, ∀n, ∀m.
These processes are jointly stationary if:

    F_2(n+m)(x₁, t₁, …, y_m, t'_m) = F_2(n+m)(x₁, t₁+θ, …, y_m, t'_m+θ)

Case n = m = 1:
(x(t₁), y(t'₁)) is a random couple. If they are jointly stationary, with θ = −t'₁ we
obtain:

    F₄(x₁, t₁; y₁, t'₁) = F₄(x₁, t₁−t'₁; y₁, 0)

We deduce E{x(t₁) y(t'₁)} = Γ_xy(t₁ − t'₁): the cross-correlation function between x(t)
and y(t) depends only on the time delay.
Conversely, if E{x(t₁) y(t'₁)} = Γ_xy(t₁ − t'₁), then x(t) and y(t) are jointly
stationary at the second order.

XII. Independence

Definition: x(t) and y(t) are independent if and only if the n-dimensional random
variable [x(t₁), …, x(t_n)] is independent of the m-dimensional random variable
[y(t'₁), …, y(t'_m)], ∀n, ∀m, ∀t_i, ∀t'_k.

XIII. Gaussian process

A scalar random variable is Gaussian if its d.d.p. has the form: f_x(x) = K e^{−x²/2σ²}.
An n-dimensional random variable is Gaussian if its d.d.p. is:

    f_x(x₁, x₂, …, x_n) = K e^{−Q(x₁, x₂, …, x_n)/2}

where Q is a quadratic form. A random signal is Gaussian if its observation
(x(t₁), …, x(t_n)) is an n-dimensional Gaussian random variable.

FREQUENTIAL REPRESENTATION OF RANDOM SIGNALS

XIV. Average power spectral density of a random signal x(t)

The main idea is to define the energy quantities by statistical averaging over the
realizations. So we define the Average Power Spectral Density (A.P.S.D.) of a random
signal as:

    Γ_xx(ν) = E{Γ_kk(ν)}

where Γ_kk(ν) is the A.P.S.D. of each realization x_k(t) of x(t). Writing
X_T(ν) = ∫_{−T}^{+T} x(t) e^{−2jπνt} dt, the A.P.S.D. is:

    Γ_xx(ν) = E{ lim_{T→∞} (1/2T) |X_T(ν)|² } = lim_{T→∞} (1/2T) E{|X_T(ν)|²}

With two signals x(t) and y(t), we can express the Cross Power Spectral Density as:

    Γ_xy(ν) = lim_{T→∞} (1/2T) E{X_T(ν) Y_T*(ν)}

and by identification Γ_xx(ν) = lim_{T→∞} (1/2T) E{|X_T(ν)|²}, because
E{|X_T(ν)|²} = E{X_T(ν) X_T*(ν)}.

XV. Wiener-Khinchine theorem

There exists a specific relationship between a power spectral density and a correlation
function.

Theorem: the P.S.D. (respectively C.P.S.D.) of a stationary random signal x(t)
(respectively of x(t) and y(t)) is the Fourier transform of the autocorrelation function
(respectively of the cross-correlation function):

    Γ_XX(ν) = TF[Γ_xx(τ)]  and  Γ_XY(ν) = TF[Γ_xy(τ)]

Proof: let x(t) be a stationary random signal with finite average power. We have
Γ_xx(ν) = E{Γ_kk(ν)} and:

    Γ_kk(ν) = ∫ γ_k(τ) e^{−2jπντ} dτ,
    with γ_k(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{+T} x_k(t) x_k*(t−τ) dt

So we obtain:

    Γ_xx(ν) = ∫ lim_{T→∞} (1/2T) ∫_{−T}^{+T} E{x_k(t) x_k*(t−τ)} dt e^{−2jπντ} dτ

By stationarity, we obtain for all k: E{x_k(t) x_k*(t−τ)} = γ(τ). So:

    Γ_xx(ν) = ∫ lim_{T→∞} (1/2T) ∫_{−T}^{+T} γ(τ) dt e^{−2jπντ} dτ = ∫ γ(τ) e^{−2jπντ} dτ

The proof is similar for two jointly stationary random signals.
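In discrete time, the Wiener-Khinchine relation has an exact finite-sample counterpart: the periodogram is the DFT of the biased sample autocorrelation. A deterministic sketch:

```python
import numpy as np

# Discrete Wiener-Khinchine identity: the periodogram equals the DFT of the
# (biased) sample autocorrelation -- an exact check, no averaging needed.
rng = np.random.default_rng(4)
N = 64
x = rng.standard_normal(N)

gamma = np.correlate(x, x, mode="full") / N      # lags -(N-1) .. (N-1)
M = 2 * N - 1                                    # FFT grid, no wrap-around
periodogram = np.abs(np.fft.fft(x, M)) ** 2 / N

# Put lag 0 at index 0 (negative lags wrap to the end), then transform
psd_from_gamma = np.fft.fft(np.roll(gamma, -(N - 1)), M).real
assert np.allclose(periodogram, psd_from_gamma)
```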

XVI. Coherency

Coherency is a function described in the frequency domain; it can be compared to the
correlation coefficient in time. Coherency quantifies the relationship existing between
two signals. It is defined as:

    C_XY(ν) = Γ_XY(ν) / √(Γ_XX(ν) Γ_YY(ν))

describing the statistical relation at each frequency.

Warning: in this expression, the signals are supposed to be random, second-order
stationary and centered (statistical average = 0).

Property: |C_XY(ν)| ≤ 1.
Proof: we call short-time Fourier transform X_T(ν) = ∫_{−T/2}^{+T/2} x(t) e^{−2jπνt} dt,
and apply the Schwarz inequality to the random variables X_T(ν) and Y_T(ν). In the
random-variable space, we define the scalar product as ⟨u, v⟩ = E[u v*]. The Schwarz
inequality can then be written (after division by T):

    |E[X_T(ν) Y_T*(ν)]|² / T² ≤ (E[|X_T(ν)|²]/T) · (E[|Y_T(ν)|²]/T)

If T goes to infinity, we obtain: |Γ_XY(ν)|² ≤ Γ_XX(ν) Γ_YY(ν).

Remarks:
- If X and Y are independent, then C_XY(ν) = 0.
- If C_XY(ν) = 0, the signals are not necessarily independent, but they are uncorrelated.
- If |C_XY(ν)| = 1 for every ν, the Schwarz inequality becomes an equality and Y_T(ν) is
  proportional to X_T(ν): there is a linear and homogeneous system between X and Y,
  Y_T(ν) = Λ(ν) X_T(ν).
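A coherency estimate can be sketched with numpy alone, by averaging auto- and cross-spectra over segments (segment sizes and the filter are illustrative). Since y below is a pure filtering of x, C_XY should stay close to 1 at every frequency and, by the Schwarz inequality, can never exceed 1:

```python
import numpy as np

# Coherency sketch with averaged auto/cross spectra.
rng = np.random.default_rng(5)
h = np.array([1.0, 0.5, 0.2])                    # illustrative L.H. filter
N, K = 128, 200                                  # segment length, nb segments
Sxx = np.zeros(N)
Syy = np.zeros(N)
Sxy = np.zeros(N, dtype=complex)
for _ in range(K):
    x = rng.standard_normal(N + len(h) - 1)
    y = np.convolve(x, h, mode="valid")          # length N
    X = np.fft.fft(x[len(h) - 1:])               # aligned input segment
    Y = np.fft.fft(y)
    Sxx += np.abs(X) ** 2
    Syy += np.abs(Y) ** 2
    Sxy += X * np.conj(Y)
C = np.abs(Sxy) / np.sqrt(Sxx * Syy)
assert np.all(C <= 1.0 + 1e-9)                   # Schwarz inequality
assert C.mean() > 0.9                            # strong linear relation
```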

LINEAR AND HOMOGENEOUS FILTERING OF SECOND-ORDER STATIONARY PROCESSES

In this chapter we determine the statistical properties of the output of a filter when
the input is a random signal.

XVII. 1st order characterization (mean formula)

Let x(t) be a random signal filtered by a linear and homogeneous filter. For each
realization of x(t), the output of the filter y(t) is:

    y(t) = ∫_{−∞}^{+∞} h(θ) x(t−θ) dθ = ∫ x(θ) h(t−θ) dθ

So the expected value of y(t) is:

    E[y(t)] = ∫ E{h(θ) x(t−θ)} dθ = ∫ h(θ) E{x(t−θ)} dθ

If we suppose that x(t) is stationary at the first order, E{x(t−θ)} = M_x = constant.
Then:

    E{y(t)} = M_x ∫_{−∞}^{+∞} h(θ) dθ, which is a constant.

As G(ν) is the Fourier transform of h(θ), we obtain:

    E[y(t)] = M_x ∫ h(θ) e^{−2jπ·0·θ} dθ = M_x · G(0)

The output of an L.H. filter {h(θ), G(ν)} excited by a first-order stationary random
signal is stationary at the first order, with M_y = M_x G(0).
- If the input is centered, then the output is also centered: M_x = 0 ⇒ M_y = 0.
- A filter with null gain at frequency zero gives a centered output even if the input is
  not centered: G(0) = 0 ⇒ M_y = 0.
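The mean formula M_y = M_x G(0) can be checked on a discrete simulation, where G(0) is simply the sum of the impulse-response taps (all values illustrative):

```python
import numpy as np

# First-order sketch: the output mean of an L.H. filter is M_y = M_x * G(0).
rng = np.random.default_rng(6)
h = np.array([0.5, 0.3, 0.2, -0.1])        # illustrative impulse response
M_x = 2.0
x = M_x + rng.standard_normal(500000)      # first-order stationary input
y = np.convolve(x, h, mode="valid")
G0 = np.sum(h)                             # complex gain at frequency zero
assert np.isclose(np.mean(y), M_x * G0, atol=0.02)
```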

XVIII. 2nd order characterization

A. Input/output relationship
Let an L.H. filter have impulse response h(θ); we calculate the cross-correlation
between the input and the output of the filter.

    X → [ {h(θ), G(ν)} ] → Y

The output can be written as a convolution product: y(t) = ∫ h(θ) x(t−θ) dθ.
If the input is a complex signal, we obtain:

    y(t) x*(t−τ) = ∫ h(θ) x(t−θ) x*(t−τ) dθ

The cross-correlation is given by taking the expected value:

    E{y(t) x*(t−τ)} = ∫ h(θ) E{x(t−θ) x*(t−τ)} dθ

If x(t) is a second-order stationary signal, E{x(t−θ) x*(t−τ)} = Γ_xx(τ−θ): the
right-hand side does not depend on t, which means the input and the output of the filter
are jointly stationary:

    E{y(t) x*(t−τ)} = ∫ h(θ) Γ_xx(τ−θ) dθ  ⇒  Γ_xy(τ) = (h ∗ Γ_xx)(τ)

The Fourier transform of the last equation gives: Γ_XY(ν) = G(ν) Γ_XX(ν).

Application: identification of an unknown L.H. filter.
If X is a normalized white noise:
- its autocorrelation is a Dirac at position 0: Γ_xx(τ) = δ₀(τ),
- its A.P.S.D. is a constant equal to 1.
In these conditions, we obtain Γ_xy(τ) = h(τ) and Γ_XY(ν) = G(ν).
To identify a filter (i.e. obtain its impulse response or its complex gain), we
calculate the cross-correlation between input and output when the input is a white
noise. Practically, it is sufficient to use a noise that is white only in the bandwidth
of the filter: that means Γ_XX(ν) is equal to 1 wherever G(ν) is different from 0.
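This identification procedure is easy to simulate: with (approximately) normalized white noise at the input, the estimated input/output cross-correlation reproduces the impulse response. A sketch (the "unknown" taps are illustrative):

```python
import numpy as np

# Identification sketch: Gamma_xy(tau) = h(tau) for normalized white noise.
rng = np.random.default_rng(7)
h_true = np.array([0.8, 0.4, -0.2, 0.1])   # "unknown" filter
N = 200000
x = rng.standard_normal(N)                 # white noise, unit power
y = np.convolve(x, h_true)[:N]             # causal filtering, same length

# Gamma_xy(tau) = E{ y(t) x(t - tau) } estimated for tau = 0..3
h_est = np.array([np.mean(y[k:] * x[:N - k]) for k in range(len(h_true))])
assert np.allclose(h_est, h_true, atol=0.02)
```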
B. Interference formula
We consider two L.H. filters defined respectively by {h₁(θ), G₁(ν)} and {h₂(θ), G₂(ν)},
and suppose that the two inputs x₁(t) and x₂(t) are jointly second-order stationary,
with cross-correlation function Γ_x₁x₂(τ) = E{x₁(t) x₂*(t−τ)}.

    X₁ → [F₁] → Y₁    X₂ → [F₂] → Y₂

We propose to calculate the cross-correlation function between the two outputs. These
two outputs are defined by:

    y₁(t) = ∫ h₁(θ) x₁(t−θ) dθ,  y₂(t) = ∫ h₂(θ') x₂(t−θ') dθ'

This cross-correlation function is given by:

    E{y₁(t) y₂*(t−τ)} = ∫∫ h₁(θ) h₂*(θ') E{x₁(t−θ) x₂*(t−τ−θ')} dθ dθ'
                      = ∫∫ h₁(θ) h₂*(θ') Γ_x₁x₂(τ − θ + θ') dθ dθ'

We find the cross-correlation function of the two inputs in the double integral. The
right-hand side does not depend on t, so the two outputs are jointly second-order
stationary. In practice this formula is not used directly; we look for an equivalent,
simpler frequency-domain expression.

Taking the Fourier transform of Γ_y₁y₂(τ) = ∫∫ h₁(θ) h₂*(θ') Γ_x₁x₂(τ − θ + θ') dθ dθ',
we obtain:

    Γ_Y₁Y₂(ν) = ∫∫∫ h₁(θ) h₂*(θ') Γ_x₁x₂(τ − θ + θ') e^{−2jπντ} dθ dθ' dτ

To calculate this triple integral, we use the change of variables u₁ = θ, u₂ = θ' and
u₃ = τ − θ + θ', so that τ = u₃ + u₁ − u₂. The new expression is:

    Γ_Y₁Y₂(ν) = ∫∫∫ h₁(u₁) h₂*(u₂) Γ_x₁x₂(u₃) e^{−2jπν(u₁ − u₂ + u₃)} du₁ du₂ du₃

This integral is separable: we can write it as the product of three simple integrals:

    Γ_Y₁Y₂(ν) = ∫ h₁(u₁) e^{−2jπνu₁} du₁ · ∫ h₂*(u₂) e^{+2jπνu₂} du₂ · ∫ Γ_x₁x₂(u₃) e^{−2jπνu₃} du₃

At the end, we obtain:

    Γ_Y₁Y₂(ν) = G₁(ν) G₂*(ν) Γ_X₁X₂(ν)

By taking the inverse Fourier transform, we obtain the following expression, called the
interference formula (due to its application in optics):

    Γ_y₁y₂(τ) = (h₁ ∗ h₂* ∗ Γ_x₁x₂)(τ)  (with h₂* taken at −τ)

We can see that if the two filters have non-overlapping bandwidths, the product
G₁(ν) G₂*(ν) = 0, so Γ_y₁y₂(τ) = 0: the outputs of the two filters are decorrelated.
Additionally, if the two inputs are jointly Gaussian, decorrelation implies independence
of the two outputs.
Particular cases:

- x₁ = x₂ = x, two different filters h₁ and h₂: if x is second-order stationary, we have

      Γ_Y₁Y₂(ν) = G₁(ν) G₂*(ν) Γ_XX(ν)

- x₁ = x₂ = x and h₁ = h₂ = h: if x is second-order stationary, we have

      Γ_YY(ν) = |G(ν)|² Γ_XX(ν)

The average power spectral density of the output of an L.H. filter is equal to the
A.P.S.D. of the input multiplied by the square modulus of the complex gain of the filter.

C. Spectral analysis (Filtering, Quadration, Integration)

We consider a second-order stationary random signal x(t), with unknown average power
spectral density Γ_XX(ν). We want to estimate this A.P.S.D. at a given frequency ν₀.

We filter this signal with a band-pass filter of central frequency ν₀ and bandwidth Δν.
The power of the output y(t) is equal to the integral of its power spectral density
(the areas around +ν₀ and −ν₀):

    Γ_YY(ν) = |G(ν)|² Γ_XX(ν),  P_Y = ∫ Γ_YY(ν) dν

If Δν is small, we can approximate each of these two areas by a rectangle (width Δν,
height Γ_XX(ν₀)). We obtain the estimation P_Y ≈ 2 Δν Γ_XX(ν₀). The power of y(t) can
also be found by squaring and integrating over a duration T. In this way, we obtain an
estimation of Γ_XX(ν₀) (if Δν is small enough and T large enough) by the formula:

    Γ̂_XX(ν₀) = P_Y / (2 Δν),  with  P_Y ≈ (1/T) ∫_0^T y²(t) dt

The quality of this estimation depends on the product Δν·T. Characterization of this
estimator by its bias and variance shows that the variance depends on the same product:
to reduce the variance, this product must be as large as possible. Δν is the frequency
resolution of the spectral analyzer; T is the integration time. This technique is called
FQI spectral analysis (Filtering, Quadration, Integration).
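A minimal FQI sketch on unit white noise, whose true A.P.S.D. is 1 at every frequency (the FIR band-pass below is an illustrative truncated ideal filter, not a design from the notes):

```python
import numpy as np

# FQI sketch: band-pass Filtering around nu0, Quadration, Integration.
rng = np.random.default_rng(8)
x = rng.standard_normal(200000)            # unit white noise, PSD = 1

nu0, dnu = 0.2, 0.05                       # center frequency, bandwidth
n = np.arange(-400, 401)
h = 2 * np.cos(2 * np.pi * nu0 * n) * dnu * np.sinc(dnu * n)  # band-pass FIR

y = np.convolve(x, h, mode="valid")        # Filtering
P_y = np.mean(y ** 2)                      # Quadration + Integration over T
psd_est = P_y / (2 * dnu)                  # rectangle approximation
assert np.isclose(psd_est, 1.0, rtol=0.08)
```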
To estimate the P.S.D. at another frequency ν₁, the same procedure is used with a filter
centered on ν₁. The computation for all frequencies is made in parallel (filter bank).

By choosing adjacent filters (non-overlapping bandwidths, but such that the summation of
all filters covers the full spectrum), we describe the whole power of x(t). There is no
power exchange between the outputs, because there are no interferences: their cross
power spectral densities are null.

WIDEBAND SIGNAL / WHITE NOISE

XIX. Wideband signal (theoretical model)

Definition: a random signal is called a wideband signal if:
- x(t) is a second-order stationary signal,
- Γ_XX(ν) = N₀ if |ν| ≤ B/2,
- Γ_XX(ν) = 0 if |ν| > B/2.

Then its autocorrelation function is a cardinal sine (inverse Fourier transform of a
boxcar):

    Γ_xx(τ) = N₀ B sinc(Bτ) = N₀ sin(πBτ)/(πτ),  with  Γ_xx(0) = B N₀

and zeros at τ = 1/B, 2/B, 3/B, …
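The cardinal-sine autocorrelation can be observed on simulated band-limited noise (white noise through a truncated ideal low-pass filter; here N₀ = 1 and all sizes are illustrative):

```python
import numpy as np

# Band-limited noise sketch: autocorrelation approaches N0 * B * sinc(B*tau).
rng = np.random.default_rng(9)
B = 0.2                                    # two-sided bandwidth, cycles/sample
n = np.arange(-200, 201)
h = B * np.sinc(B * n)                     # truncated ideal low-pass
b = np.convolve(rng.standard_normal(200000), h, mode="valid")

lags = np.arange(21)
gamma = np.array([np.mean(b[k:] * b[:len(b) - k]) for k in lags])
# Gamma_bb(0) = B, zeros near tau = 1/B, 2/B, ...
assert np.allclose(gamma, B * np.sinc(B * lags), atol=0.02)
```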

XX. White noise

If the bandwidth B of x(t) goes to infinity, the signal x(t) tends to the theoretical
model called white noise:

    b(t) = lim_{B→∞} x(t),  Γ_bb(ν) = N₀ for all ν

Its average power spectral density is a constant function. That means the average power
of this signal is infinite: white noise is not realistic and should be seen as a
theoretical model. The corresponding autocorrelation function tends to a Dirac at time
position 0:

    Γ_bb(τ) = N₀ δ₀(τ) = N₀ δ(τ)

Consequently, two observations of this signal at two different times are uncorrelated.
We call this signal a white noise with microscopic correlation.

XXI. Signal-System Approach

Microscopic-correlation white noise in front of a filter.

Definition: a noise has microscopic correlation with respect to a filter if the average
power spectral density of this noise is a constant function over the filter bandwidth.
Such a noise acts as a white noise for the filter.

Signal-system approach:
In the first part of this course, we characterized a signal by its characteristic
quantities (correlation function, power spectral density, etc.). Another way, which
allows some generalizations, is to define a signal as the output of a particular filter:

    e(t) → [ ? ] → x(t)

We define a signal by the system which creates it (the input e(t) and the filter). We
want to define an input and a filter class allowing us to generate all second-order
stationary random signals. This is possible by using the linear and homogeneous filter
class. By choosing as input a second-order stationary signal e(t), we obtain:

    Γ_XX(ν) = |G(ν)|² Γ_ee(ν)

We see that if Γ_ee(ν₀) = 0 then Γ_XX(ν₀) = 0, because G(ν₀) cannot be infinite (filter
stability). So it is necessary that Γ_ee(ν) is never equal to zero. In order to use the
smallest possible number of parameters, it is classical to take as input a normalized
white noise, Γ_ee(ν) = 1. We obtain:

    Γ_XX(ν) = |G(ν)|²

Knowledge of G(ν) gives the A.P.S.D. The opposite is not true: knowledge of the A.P.S.D.
gives only the modulus of G(ν), with no information on its phase.

XXII. Identification of an L.H. filter

Filter identification means finding its impulse response h(t) and/or its complex gain.

A. 1st method: direct approach
By definition, the impulse response is the response of the filter to a Dirac. This
method is strictly impossible to realize, as a Dirac cannot be created (infinite
amplitude).

B. 2nd method: step response
In control and automation, people use the step response, which is the response to a
step function. Technically it is a more suitable approach, but the step is applied over
a limited time duration (it looks like a boxcar function), so we obtain only an
approximation of the step response. The impulse response is the derivative of the step
response, so this is not a very precise technique for recovering the impulse response.

C. 3rd method: identification by cross-correlation between input and output of the
filter.
As seen previously, this technique uses a white noise as input of the filter. To recover
the impulse response, we compute the cross-correlation between input and output:

    e(t) → [ h(t) ] → s(t)
    Γ_se(τ) = (h ∗ Γ_ee)(τ)

With a normalized white noise at the input (Γ_ee(τ) = δ(τ)):

    Γ_se(τ) = h(τ)  and  Γ_se(ν) = G(ν)

Impulse response = cross-correlation between input and output.

As it is impossible to create a theoretical white noise, we prefer to use a noise that
is white in the filter bandwidth.
