
Kalman's Beautiful Filter

(an introduction)
George Kantor
presented to

Sensor Based Planning Lab Carnegie Mellon University December 8, 2000

What does a Kalman Filter do, anyway?


Given the linear dynamical system:

    x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
    y(k)   = H(k) x(k) + w(k)

where
- x(k) is the n-dimensional state vector (unknown)
- u(k) is the m-dimensional input vector (known)
- y(k) is the p-dimensional output vector (known, measured)
- F(k), G(k), H(k) are appropriately dimensioned system matrices (known)
- v(k), w(k) are zero-mean, white Gaussian noise with (known) covariance matrices Q(k), R(k)

the Kalman Filter is a recursion that provides the best estimate of the state vector x.
Kalman Filter Introduction Carnegie Mellon University December 8, 2000

What's so great about that?


    x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
    y(k)   = H(k) x(k) + w(k)

- noise smoothing (improve noisy measurements)
- state estimation (for state feedback)
- recursive (computes next estimate using only most recent measurement)

How does it work?


    x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
    y(k)   = H(k) x(k) + w(k)

1. prediction based on last estimate:

    x̂(k+1|k) = F(k) x̂(k|k) + G(k) u(k)
    ŷ(k+1)   = H(k) x̂(k+1|k)

2. calculate correction based on prediction and current measurement:

    Δx = f( y(k+1), x̂(k+1|k) )

3. update prediction:

    x̂(k+1|k+1) = x̂(k+1|k) + Δx

Finding the correction (no noise!)


    y = Hx

Given the prediction x̂ and the output y, find Δx so that x̂ + Δx is the "best" estimate of x.

The measurement constrains x to the hyperplane

    { x | Hx = y }

and the minimum-norm correction onto that hyperplane is

    Δx = H^T (H H^T)^-1 (y - H x̂)
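This correction is easy to compute directly. A minimal NumPy sketch with made-up numbers (H, x̂, and y below are illustrative, not from the slides): the step Δx = H^T (H H^T)^-1 (y - H x̂) moves the prediction onto the hyperplane { x | Hx = y }.

```python
import numpy as np

# Illustrative values: a 1x2 output map and a 2-state prediction.
H = np.array([[1.0, 1.0]])     # we measure the sum of the two state components
x_hat = np.array([2.0, 0.0])   # prediction
y = np.array([3.0])            # measured output (noise-free)

# Minimum-norm correction onto the hyperplane { x | Hx = y }
dx = H.T @ np.linalg.solve(H @ H.T, y - H @ x_hat)
x_new = x_hat + dx

print(x_new)          # corrected estimate
print(H @ x_new - y)  # ~0: the corrected estimate now agrees with the measurement
```

Because the correction is the smallest step (in the ordinary Euclidean norm) that explains the measurement, it splits the residual evenly between the two equally-weighted components.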

A Geometric Interpretation
[Figure: the prediction x̂ is projected orthogonally onto the hyperplane { x | Hx = y }; the corrected estimate is the closest point on the hyperplane to x̂.]

A Simple State Observer


System:

    x(k+1) = F x(k) + G u(k) + v(k)
    y(k)   = H x(k)

Observer:

1. prediction:

    x̂(k+1|k) = F x̂(k|k) + G u(k)

2. compute correction:

    Δx = H^T (H H^T)^-1 ( y(k+1) - H x̂(k+1|k) )

3. update:

    x̂(k+1|k+1) = x̂(k+1|k) + Δx

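The three observer steps can be run in a loop. A hedged sketch, assuming a made-up noise-free system (F, H, and the initial values below are illustrative): even starting from a deliberately wrong estimate, the predict/correct/update recursion drives the estimation error to zero.

```python
import numpy as np

# Hypothetical noise-free system (values made up for illustration):
# state = [position, velocity], and the sensor reads their sum, y = position + velocity.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
H = np.array([[1.0, 1.0]])

x = np.array([0.0, 1.0])       # true state
x_hat = np.array([5.0, -3.0])  # deliberately wrong initial estimate

for k in range(50):
    x = F @ x                  # true dynamics (u = 0, v = 0 for this sketch)
    y = H @ x                  # noise-free measurement
    x_pred = F @ x_hat                                   # 1. prediction
    dx = H.T @ np.linalg.solve(H @ H.T, y - H @ x_pred)  # 2. correction
    x_hat = x_pred + dx                                  # 3. update

print(np.abs(x - x_hat))  # estimation error has decayed essentially to zero
```

Note the choice of H matters here: with this H the error dynamics are a contraction, so the naive pseudoinverse gain happens to converge; in general this observer only corrects the measured directions, which is exactly the weakness the covariance-weighted correction later in the slides addresses.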

Estimating a distribution for x


Our estimate of x is not exact! We can do better by estimating a joint Gaussian distribution p(x):

    p(x) = 1 / ( (2π)^(n/2) |P|^(1/2) ) * exp( -1/2 (x - x̂)^T P^-1 (x - x̂) )

where

    P = E[ (x - x̂)(x - x̂)^T ]

is the covariance matrix.


Finding the correction (geometric intuition)


Given the prediction x̂, covariance P, and output y, find Δx so that x̂ + Δx is the "best" (i.e. most probable) estimate of x.

The measurement again constrains x to the hyperplane { x | Hx = y }, and

    p(x) = 1 / ( (2π)^(n/2) |P|^(1/2) ) * exp( -1/2 (x - x̂)^T P^-1 (x - x̂) )

The most probable x is the one that:
1. satisfies x = x̂ + Δx with Hx = y
2. minimizes Δx^T P^-1 Δx


A new kind of distance


Suppose we define a new inner product on R^n:

    <x1, x2> = x1^T P^-1 x2

(this replaces the old inner product x1^T x2). Then we can define a new norm by

    ||x||^2 = <x, x> = x^T P^-1 x

The x in the hyperplane { x | Hx = y } that minimizes ||Δx|| is the orthogonal projection of x̂ onto the hyperplane, so Δx is orthogonal to the hyperplane's tangent directions:

    <ν, Δx> = 0  for every ν in T = null(H)

    <ν, Δx> = ν^T P^-1 Δx = 0  for all ν ∈ null(H)   iff   Δx ∈ column(P H^T)
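The column(P H^T) claim is easy to check numerically. A sketch with made-up random matrices: any correction of the form Δx = P H^T λ is orthogonal, in the <a, b> = a^T P^-1 b inner product, to every direction in null(H).

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
P = A @ A.T + 3.0 * np.eye(3)      # a random symmetric positive-definite covariance
H = rng.standard_normal((1, 3))    # a random 1x3 output map

dx = P @ H.T @ rng.standard_normal((1,))   # arbitrary element of column(P H^T)

# Basis for null(H) from the SVD: the last rows of Vt span the null space.
_, _, Vt = np.linalg.svd(H)
null_basis = Vt[1:]

for nu in null_basis:
    # <nu, dx> = nu^T P^-1 dx should vanish: P^-1 dx = H^T lam, and H nu = 0
    print(nu @ np.linalg.solve(P, dx))
```

The printed values are zero up to rounding, for exactly the algebraic reason in the comment: P^-1 Δx lands in column(H^T), which is orthogonal (in the ordinary sense) to null(H).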

Finding the correction (for real this time!)


Assuming that Δx is linear in the innovation ν = y - H x̂:

    Δx = P H^T K ν

The condition y = H (x̂ + Δx) gives H Δx = y - H x̂ = ν. Substitution yields:

    H Δx = ν = H P H^T K ν    ⇒    K = (H P H^T)^-1

    Δx = P H^T (H P H^T)^-1 ν

A Better State Observer


We can create a better state observer following the same 3 steps, but now we must also estimate the covariance matrix P. We start with x̂(k|k) and P(k|k).

Step 1: Prediction

    x̂(k+1|k) = F x̂(k|k) + G u(k)

What about P? From the definition:

    P(k|k) = E[ (x(k) - x̂(k|k)) (x(k) - x̂(k|k))^T ]

and

    P(k+1|k) = E[ (x(k+1) - x̂(k+1|k)) (x(k+1) - x̂(k+1|k))^T ]

Continuing Step 1
To make life a little easier, let's shift notation slightly: write x_k for x(k), x̂_k for x̂(k|k), x̂⁻_{k+1} for x̂(k+1|k), and P⁻_{k+1} for P(k+1|k).

    P⁻_{k+1} = E[ (x_{k+1} - x̂⁻_{k+1}) (x_{k+1} - x̂⁻_{k+1})^T ]
             = E[ (F x_k + G u_k + v_k - F x̂_k - G u_k) (F x_k + G u_k + v_k - F x̂_k - G u_k)^T ]
             = E[ (F (x_k - x̂_k) + v_k) (F (x_k - x̂_k) + v_k)^T ]
             = F E[ (x_k - x̂_k)(x_k - x̂_k)^T ] F^T + E[ v_k v_k^T ]

(the cross terms F (x_k - x̂_k) v_k^T vanish in expectation because v_k is zero-mean and independent of the estimation error)

             = F P_k F^T + Q

That is,

    P(k+1|k) = F P(k|k) F^T + Q
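A quick Monte Carlo sanity check of this propagation rule (the F, P, and Q values below are made up for illustration): draw samples of the estimation error and the process noise, push them through the dynamics, and compare the empirical covariance with F P F^T + Q.

```python
import numpy as np

rng = np.random.default_rng(2)
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
P = np.array([[2.0, 0.3],
              [0.3, 1.0]])       # covariance of the current estimation error
Q = 0.05 * np.eye(2)             # process-noise covariance

n = 200_000
e = rng.multivariate_normal(np.zeros(2), P, size=n)   # samples of x(k) - x_hat(k|k)
v = rng.multivariate_normal(np.zeros(2), Q, size=n)   # samples of v(k)
e_pred = e @ F.T + v                                  # samples of x(k+1) - x_hat(k+1|k)

P_mc = np.cov(e_pred.T)          # empirical covariance of the predicted error
P_formula = F @ P @ F.T + Q
print(np.max(np.abs(P_mc - P_formula)))  # small: only sampling error remains
```

The agreement is limited only by the sampling error of the empirical covariance, which shrinks like 1/sqrt(n).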

Step 2: Computing the correction


From step 1 we get x̂(k+1|k) and P(k+1|k). Now we use these to compute Δx:

    Δx = P(k+1|k) H^T ( H P(k+1|k) H^T )^-1 ( y(k+1) - H x̂(k+1|k) )

For ease of notation, define W and ν so that Δx = W ν:

    W = P(k+1|k) H^T ( H P(k+1|k) H^T )^-1
    ν = y(k+1) - H x̂(k+1|k)

Step 3: Update
    x̂(k+1|k+1) = x̂(k+1|k) + W ν

What about the covariance? From the definition,

    P_{k+1} = E[ (x(k+1) - x̂(k+1|k+1)) (x(k+1) - x̂(k+1|k+1))^T ]
            = E[ (x(k+1) - x̂(k+1|k) - W ν) (x(k+1) - x̂(k+1|k) - W ν)^T ]

which works out to (just take my word for it):

    P(k+1|k+1) = P(k+1|k) - W H P(k+1|k) H^T W^T

Just take my word for it


Write e = x(k+1) - x̂(k+1|k) for the prediction error, so that ν = H e (there is no output noise here) and P(k+1|k) = E[ e e^T ]. Then:

    P(k+1|k+1) = E[ (e - W ν)(e - W ν)^T ]
               = E[ (e - W H e)(e - W H e)^T ]
               = E[ e e^T ] - W H E[ e e^T ] - E[ e e^T ] H^T W^T + W H E[ e e^T ] H^T W^T
               = P(k+1|k) - 2 W H P(k+1|k) + W H P(k+1|k) H^T W^T

(the two middle terms are equal because W H P(k+1|k) is symmetric). Substituting W = P(k+1|k) H^T ( H P(k+1|k) H^T )^-1 into the middle term:

    W H P(k+1|k) = P(k+1|k) H^T ( H P(k+1|k) H^T )^-1 H P(k+1|k)
                 = W ( H P(k+1|k) H^T ) W^T
                 = W H P(k+1|k) H^T W^T

so

    P(k+1|k+1) = P(k+1|k) - 2 W H P(k+1|k) H^T W^T + W H P(k+1|k) H^T W^T
               = P(k+1|k) - W H P(k+1|k) H^T W^T

Better State Observer Summary


System:

    x(k+1) = F x(k) + G u(k) + v(k)
    y(k)   = H x(k)

Observer:

1. Predict

    x̂(k+1|k) = F x̂(k|k) + G u(k)
    P(k+1|k)  = F P(k|k) F^T + Q

2. Correction

    W  = P(k+1|k) H^T ( H P(k+1|k) H^T )^-1
    Δx = W ( y(k+1) - H x̂(k+1|k) )

3. Update

    x̂(k+1|k+1) = x̂(k+1|k) + Δx
    P(k+1|k+1)  = P(k+1|k) - W H P(k+1|k) H^T W^T

Finding the correction (with output noise)


    y = Hx + w

Since you don't have a hyperplane to aim for, you can't solve this with algebra! You have to solve an optimization problem.

That's exactly what Kalman did! Here's his answer:

    Δx = P H^T ( H P H^T + R )^-1 ( y - H x̂ )

LTI Kalman Filter Summary


System:

    x(k+1) = F x(k) + G u(k) + v(k)
    y(k)   = H x(k) + w(k)

Kalman Filter:

1. Predict

    x̂(k+1|k) = F x̂(k|k) + G u(k)
    P(k+1|k)  = F P(k|k) F^T + Q

2. Correction

    S  = H P(k+1|k) H^T + R
    W  = P(k+1|k) H^T S^-1
    Δx = W ( y(k+1) - H x̂(k+1|k) )

3. Update

    x̂(k+1|k+1) = x̂(k+1|k) + Δx
    P(k+1|k+1)  = P(k+1|k) - W S W^T
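The whole summary fits in a few lines of NumPy. A minimal sketch, assuming a made-up constant-velocity tracking problem (the matrices F, G, H, Q, R and all numeric values are illustrative, not from the slides):

```python
import numpy as np

def kalman_step(x_hat, P, u, y, F, G, H, Q, R):
    """One predict / correct / update iteration of the recursion above."""
    # 1. predict
    x_pred = F @ x_hat + G @ u
    P_pred = F @ P @ F.T + Q
    # 2. correction
    S = H @ P_pred @ H.T + R
    W = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + W @ (y - H @ x_pred)
    # 3. update
    P_new = P_pred - W @ S @ W.T
    return x_new, P_new

# Made-up example: track [position, velocity] from noisy position measurements.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
G = np.zeros((2, 1))
H = np.array([[1.0, 0.0]])
Q = 0.001 * np.eye(2)            # process-noise covariance
R = np.array([[0.5]])            # measurement-noise covariance

rng = np.random.default_rng(3)
x = np.array([0.0, 1.0])         # true state
x_hat = np.zeros(2)              # initial estimate
P = 10.0 * np.eye(2)             # initial (large) uncertainty

for k in range(100):
    u = np.zeros(1)
    x = F @ x + rng.multivariate_normal(np.zeros(2), Q)   # v(k)
    y = H @ x + rng.multivariate_normal(np.zeros(1), R)   # w(k)
    x_hat, P = kalman_step(x_hat, P, u, y, F, G, H, Q, R)

print(np.abs(x - x_hat))  # estimation error after 100 steps
```

Despite position measurements whose noise standard deviation is about 0.7, the filter recovers both position and the never-measured velocity, because F couples velocity into the observed position.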
