
Adaptive Signal Processing

Problem: equalise, through an FIR filter, the distorting effect of a communication channel that may be changing with time.
If the channel were fixed, then a possible solution could be based on the Wiener filter approach.
In that case we need to know the correlation matrix of the transmitted signal and the cross-correlation vector between the input and the desired response.
When the filter is operating in an unknown environment, these required quantities need to be estimated from the accumulated data.

The problem is particularly acute when not only the environment is changing but also the data involved are non-stationary.
In such cases we need to follow the behaviour of the signals over time and adapt the correlation parameters as the environment changes.
This essentially produces a temporally adaptive filter.

A possible framework is:

[Block diagram: the input {x[n]} drives an adaptive filter with weights w; the filter output d̂[n] is compared with the desired response d[n] to form the error e[n], which drives the adaptation algorithm.]

Applications are many:

- Digital communications
- Channel equalisation
- Adaptive noise cancellation
- Adaptive echo cancellation
- System identification
- Smart antenna systems
- Blind system equalisation
- And many, many others

Applications


Echo Cancellers in Local Loops

[Block diagram: two terminals (Tx1/Rx1 and Tx2/Rx2) are connected through hybrids across the local loop; at each end an adaptive echo canceller, driven by an adaptive algorithm, forms an estimate of the echo and subtracts it from the received signal at a summing junction.]

Adaptive Noise Canceller

[Block diagram: the primary input is signal plus noise; a correlated noise reference passes through an adaptive FIR filter, whose output is subtracted from the primary input; the residual drives the adaptive algorithm.]


System Identification

[Block diagram: the same signal drives both an unknown system and an adaptive FIR filter; the difference between their outputs is the error that drives the adaptive algorithm.]


System Equalisation

[Block diagram: the signal passes through the unknown system followed by the adaptive FIR filter; the filter output is compared with a delayed version of the original signal, and the error drives the adaptive algorithm.]


Adaptive Predictors

[Block diagram: a delayed version of the signal feeds the adaptive FIR filter, whose output predicts the current sample; the prediction error drives the adaptive algorithm.]


Adaptive Arrays

[Block diagram: an array of sensors feeds a linear combiner with adjustable weights, steering the response towards the desired signal and away from the interference.]

Basic principles:
1) Form an objective function (performance criterion).
2) Find the gradient of the objective function with respect to the FIR filter weights.
3) There are several different approaches that can be used at this point.
4) Form a differential/difference equation from the gradient.

Let the desired signal be $d[n]$, the input signal $x[n]$ and the output $y[n]$.
Now form the vectors
$$\mathbf{x}[n] = \begin{bmatrix} x[n] & x[n-1] & \dots & x[n-m+1] \end{bmatrix}^T$$
$$\mathbf{h} = \begin{bmatrix} h[0] & h[1] & \dots & h[m-1] \end{bmatrix}^T$$
so that
$$y[n] = \mathbf{x}[n]^T \mathbf{h}$$
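As a quick illustration (not from the slides), a minimal NumPy sketch of forming the regressor and the output $y[n] = \mathbf{x}[n]^T\mathbf{h}$, with made-up signal values and weights:

```python
import numpy as np

m = 3                                    # filter length
x = np.arange(10, dtype=float)           # hypothetical input signal x[n]
h = np.array([0.5, 0.3, 0.2])            # hypothetical weights h[0..m-1]

n = 5
x_vec = x[n:n-m:-1]                      # regressor [x[n], x[n-1], x[n-2]]
y_n = x_vec @ h                          # output y[n] = x[n]^T h
```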


Then form the objective function
$$J(\mathbf{h}) = E\{\,|d[n] - y[n]|^2\,\}$$
$$J(\mathbf{h}) = \sigma_d^2 - \mathbf{p}^T\mathbf{h} - \mathbf{h}^T\mathbf{p} + \mathbf{h}^T\mathbf{R}\mathbf{h}$$
where
$$\mathbf{R} = E\{\mathbf{x}[n]\mathbf{x}[n]^T\}, \qquad \mathbf{p} = E\{\mathbf{x}[n]d[n]\}$$

We wish to minimise this function at the instant $n$. Using steepest descent we write
$$\mathbf{h}[n+1] = \mathbf{h}[n] - \frac{1}{2}\,\mu\,\frac{\partial J(\mathbf{h}[n])}{\partial \mathbf{h}[n]}$$
But
$$\frac{\partial J(\mathbf{h})}{\partial \mathbf{h}} = -2\mathbf{p} + 2\mathbf{R}\mathbf{h}$$

So the weight-update equation is
$$\mathbf{h}[n+1] = \mathbf{h}[n] + \mu\,(\mathbf{p} - \mathbf{R}\mathbf{h}[n])$$
Since the objective function is quadratic, this expression converges for a suitable step size.
The equation is not practical: if we knew $\mathbf{R}$ and $\mathbf{p}$ a priori, we could find the required solution (Wiener) directly as
$$\mathbf{h}_{\mathrm{opt}} = \mathbf{R}^{-1}\mathbf{p}$$
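For illustration, a minimal sketch of computing the Wiener solution from sample estimates of $\mathbf{R}$ and $\mathbf{p}$; the data sizes and the "true" weights are hypothetical, and np.linalg.solve is used rather than forming the explicit inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
X = rng.standard_normal((10000, m))      # rows are regressor vectors x[n]^T
h_true = np.array([0.5, -0.3, 0.2, 0.1]) # hypothetical "true" weights
d = X @ h_true + 0.01 * rng.standard_normal(10000)  # desired response

R = X.T @ X / len(X)                     # sample estimate of R = E{x x^T}
p = X.T @ d / len(X)                     # sample estimate of p = E{x d}
h_opt = np.linalg.solve(R, p)            # Wiener solution: solve R h = p
```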


However, these matrices are not known. Approximate expressions are obtained by ignoring the expectations in the earlier complete forms:
$$\hat{\mathbf{R}}[n] = \mathbf{x}[n]\mathbf{x}[n]^T, \qquad \hat{\mathbf{p}}[n] = \mathbf{x}[n]d[n]$$
This is very crude. However, because the update equation accumulates such quantities progressively, we expect the crude form to improve.
The LMS Algorithm


Thus we have
$$\mathbf{h}[n+1] = \mathbf{h}[n] + \mu\,\mathbf{x}[n]\,(d[n] - \mathbf{x}[n]^T\mathbf{h}[n])$$
where the error is
$$e[n] = d[n] - \mathbf{x}[n]^T\mathbf{h}[n] = d[n] - y[n]$$
and hence we can write
$$\mathbf{h}[n+1] = \mathbf{h}[n] + \mu\,\mathbf{x}[n]\,e[n]$$
This is sometimes called stochastic gradient descent.
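A minimal sketch of the LMS recursion, assuming NumPy arrays; the function name and zero initialisation are our own choices, and the step size mu must respect the stability bound derived in the convergence analysis below:

```python
import numpy as np

def lms(x, d, m, mu):
    """LMS adaptive FIR filter: h[n+1] = h[n] + mu * x[n] * e[n]."""
    h = np.zeros(m)                      # weight vector, initialised to zero
    e = np.zeros(len(x))                 # error signal e[n]
    for n in range(m, len(x)):
        x_vec = x[n:n-m:-1]              # regressor [x[n], ..., x[n-m+1]]
        y = x_vec @ h                    # filter output y[n]
        e[n] = d[n] - y                  # error d[n] - y[n]
        h = h + mu * x_vec * e[n]        # stochastic-gradient update
    return h, e
```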

Convergence

The parameter $\mu$ is the step size, and it should be selected carefully: if it is too small, convergence takes too long; if it is too large, the algorithm can become unstable.
Write the autocorrelation matrix in the eigen-factorised form
$$\mathbf{R} = \mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^T$$

where $\mathbf{Q}$ is orthogonal and $\boldsymbol{\Lambda}$ is diagonal, containing the eigenvalues.
The error in the weights with respect to their optimal values is given by (using the Wiener solution for $\mathbf{p}$)
$$\mathbf{h}[n+1] - \mathbf{h}_{\mathrm{opt}} = \mathbf{h}[n] - \mathbf{h}_{\mathrm{opt}} + \mu\,(\mathbf{R}\mathbf{h}_{\mathrm{opt}} - \mathbf{R}\mathbf{h}[n])$$
We obtain
$$\mathbf{e}_h[n+1] = \mathbf{e}_h[n] - \mu\,\mathbf{R}\,\mathbf{e}_h[n]$$

Or equivalently
$$\mathbf{e}_h[n+1] = (\mathbf{I} - \mu\,\mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^T)\,\mathbf{e}_h[n]$$
i.e.
$$\mathbf{Q}^T\mathbf{e}_h[n+1] = \mathbf{Q}^T(\mathbf{I} - \mu\,\mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^T)\,\mathbf{e}_h[n] = (\mathbf{Q}^T - \mu\,\boldsymbol{\Lambda}\mathbf{Q}^T)\,\mathbf{e}_h[n]$$
Thus we have
$$\mathbf{Q}^T\mathbf{e}_h[n+1] = (\mathbf{I} - \mu\,\boldsymbol{\Lambda})\,\mathbf{Q}^T\mathbf{e}_h[n]$$
Form a new variable
$$\mathbf{v}[n] = \mathbf{Q}^T\mathbf{e}_h[n]$$


So that
$$\mathbf{v}[n+1] = (\mathbf{I} - \mu\,\boldsymbol{\Lambda})\,\mathbf{v}[n]$$
Thus each element of this new variable depends on its previous value via a scaling constant $(1 - \mu\lambda_i)$.
The equation will therefore have an exponential form in the time domain, and the largest coefficient on the right-hand side will dominate.


We require that
$$|1 - \mu\,\lambda_{\max}| < 1$$
or
$$0 < \mu < \frac{2}{\lambda_{\max}}$$
In practice we take a much smaller value than this.
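As a small illustration, the bound can be evaluated from the eigenvalues of an example autocorrelation matrix (the matrix values are made up):

```python
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])              # example autocorrelation matrix
lam_max = np.linalg.eigvalsh(R).max()   # largest eigenvalue of R
mu_bound = 2.0 / lam_max                # stability bound 0 < mu < 2/lambda_max
mu = 0.1 * mu_bound                     # practical choice well below the bound
```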
Estimates


Then it can be seen that, as $n \to \infty$, the weight-update equation yields
$$E\{\mathbf{h}[n+1]\} = E\{\mathbf{h}[n]\}$$
and on taking expectations of both sides of it we have
$$E\{\mathbf{h}[n+1]\} = E\{\mathbf{h}[n]\} + \mu\,E\{\mathbf{x}[n]\,(d[n] - \mathbf{x}[n]^T\mathbf{h}[n])\}$$
or
$$\mathbf{0} = E\{\mathbf{x}[n]d[n] - \mathbf{x}[n]\mathbf{x}[n]^T\mathbf{h}[n]\}$$

Limiting forms


This indicates that the solution ultimately tends to the Wiener form, i.e. the estimate is unbiased.

Misadjustment


The misadjustment measures the excess mean-square error in the objective function due to gradient noise.
Assuming uncorrelatedness, set
$$J_{\min} = \sigma_d^2 - \mathbf{p}^T\mathbf{h}_{\mathrm{opt}}$$
where $\sigma_d^2$ is the variance of the desired response and $\mathbf{p}^T\mathbf{h}_{\mathrm{opt}}$ is zero when uncorrelated.
Then, denoting the excess error by $J_{XS}$, the misadjustment is defined as
$$M = (J_{\mathrm{LMS}}(\infty) - J_{\min}) / J_{\min}$$

It can be shown that the misadjustment is given by
$$M = J_{XS}/J_{\min} = \sum_{i=1}^{m} \frac{\mu\lambda_i}{1 - \mu\lambda_i}$$
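For illustration, the sum can be evaluated directly from the eigenvalues of an example $\mathbf{R}$ (values hypothetical):

```python
import numpy as np

lam = np.linalg.eigvalsh(np.array([[1.0, 0.5],
                                   [0.5, 1.0]]))   # eigenvalues of example R
mu = 0.05
M = np.sum(mu * lam / (1.0 - mu * lam))            # misadjustment J_XS / J_min
```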

Normalised LMS


To make the step size respond to the signal, one needs
$$\mathbf{h}[n+1] = \mathbf{h}[n] + \frac{\tilde{\mu}}{1 + \|\mathbf{x}[n]\|^2}\,\mathbf{x}[n]\,e[n]$$
In this case
$$0 < \tilde{\mu} < 1$$
and the misadjustment is proportional to the step size.
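A minimal sketch of one normalised-LMS step, assuming NumPy arrays; the function name is our own:

```python
import numpy as np

def nlms_update(h, x_vec, d_n, mu_bar):
    """One NLMS step with step size mu_bar / (1 + ||x[n]||^2)."""
    e_n = d_n - x_vec @ h                            # a priori error e[n]
    h = h + (mu_bar / (1.0 + x_vec @ x_vec)) * x_vec * e_n
    return h, e_n
```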

Transform-based LMS

[Block diagram: the input {x[n]} is first transformed; the adaptive filter w operates on the transformed data and its output d̂[n] is compared with d[n] to form the error e[n], which drives the algorithm; an inverse transform recovers the time-domain output.]
Least Squares Adaptive


With
$$\mathbf{R}[n] = \sum_{i=1}^{n}\mathbf{x}[i]\,\mathbf{x}[i]^T, \qquad \mathbf{p}[n] = \sum_{i=1}^{n}\mathbf{x}[i]\,d[i]$$
we have the least-squares solution
$$\mathbf{h}[n] = \mathbf{R}[n]^{-1}\,\mathbf{p}[n]$$
However, this is computationally very intensive to implement. Alternative forms make use of recursive estimates of the matrices involved.
Recursive Least Squares


Firstly we note that
$$\mathbf{p}[n] = \mathbf{p}[n-1] + \mathbf{x}[n]\,d[n]$$
$$\mathbf{R}[n] = \mathbf{R}[n-1] + \mathbf{x}[n]\,\mathbf{x}[n]^T$$
We now use the matrix inversion lemma (or the Sherman–Morrison formula).


Let
$$\mathbf{P}[n] = \mathbf{R}[n]^{-1}$$
and
$$\mathbf{k}[n] = \frac{\mathbf{R}[n-1]^{-1}\,\mathbf{x}[n]}{1 + \mathbf{x}[n]^T\,\mathbf{R}[n-1]^{-1}\,\mathbf{x}[n]}$$
Then
$$\mathbf{P}[n] = \mathbf{P}[n-1] - \mathbf{k}[n]\,\mathbf{x}[n]^T\,\mathbf{P}[n-1]$$
The quantity $\mathbf{k}[n]$ is known as the Kalman gain.


Now use $\mathbf{k}[n] = \mathbf{P}[n]\,\mathbf{x}[n]$ in the computation of the filter weights:
$$\mathbf{h}[n] = \mathbf{P}[n]\,\mathbf{p}[n] = \mathbf{P}[n]\,(\mathbf{p}[n-1] + \mathbf{x}[n]\,d[n])$$
From the earlier expression for the updates of $\mathbf{P}[n]$ we have
$$\mathbf{P}[n]\,\mathbf{p}[n-1] = \mathbf{P}[n-1]\,\mathbf{p}[n-1] - \mathbf{k}[n]\,\mathbf{x}[n]^T\,\mathbf{P}[n-1]\,\mathbf{p}[n-1]$$
and hence
$$\mathbf{h}[n] = \mathbf{h}[n-1] + \mathbf{k}[n]\,(d[n] - \mathbf{x}[n]^T\,\mathbf{h}[n-1])$$
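A sketch of one RLS iteration following the recursions above; the customary initialisation $\mathbf{P}[0] = \delta^{-1}\mathbf{I}$ with small $\delta$ is an assumption, not something stated in the slides:

```python
import numpy as np

def rls_update(h, P, x_vec, d_n):
    """One RLS step: h[n] = h[n-1] + k[n] (d[n] - x[n]^T h[n-1])."""
    Px = P @ x_vec
    k = Px / (1.0 + x_vec @ Px)          # Kalman gain k[n]
    e_n = d_n - x_vec @ h                # a priori error
    h = h + k * e_n                      # weight update
    P = P - np.outer(k, x_vec @ P)       # update of P[n] = R[n]^{-1}
    return h, P

# Typical initialisation (assumed): h = 0, P = (1/delta) * I, delta small.
```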


Kalman Filters


The Kalman filter is a sequential estimation problem normally derived from either the Bayes approach or the innovations approach.
Essentially they lead to the same equations as RLS, but the underlying assumptions are different.

The problem is normally stated as: given a sequence of noisy observations, estimate the sequence of state vectors of a linear system driven by noise.

Standard formulation:
$$\mathbf{x}[n+1] = \mathbf{A}\,\mathbf{x}[n] + \mathbf{w}[n]$$
$$\mathbf{y}[n] = \mathbf{C}[n]\,\mathbf{x}[n] + \boldsymbol{\nu}[n]$$
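A compact sketch of one Kalman predict/update cycle for this state-space model; the noise covariances Q and R_v and all matrix shapes are assumptions for the example:

```python
import numpy as np

def kalman_step(x_est, P, y, A, C, Q, R_v):
    """One cycle for x[n+1] = A x[n] + w[n], y[n] = C x[n] + v[n]."""
    # Predict
    x_pred = A @ x_est                   # state prediction
    P_pred = A @ P @ A.T + Q             # predicted error covariance
    # Update
    S = C @ P_pred @ C.T + R_v           # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x_est)) - K @ C) @ P_pred
    return x_new, P_new
```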

Kalman filters may be seen as RLS with the following correspondence:

State space              Kalman                     RLS
State-update matrix      A[n]                       I
State-noise variance     Q[n] = E{w[n]w[n]^T}       0
Observation matrix       C[n]                       x[n]^T
Observations             y[n]                       d[n]
State estimate           x[n]                       h[n]


Cholesky Factorisation


In situations where storage and, to some extent, computational demand are at a premium, one can use the Cholesky factorisation technique for a positive-definite matrix.
Express $\mathbf{R} = \mathbf{L}\mathbf{L}^T$, where $\mathbf{L}$ is lower triangular.
There are many techniques for determining the factorisation.
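A minimal sketch using NumPy's built-in routine on a made-up positive-definite matrix:

```python
import numpy as np

R = np.array([[4.0, 2.0],
              [2.0, 3.0]])               # example positive-definite matrix
L = np.linalg.cholesky(R)                # lower-triangular factor L
assert np.allclose(L @ L.T, R)           # verify R = L L^T
```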