
Proceedings of 34th Allerton Conference on Comm., Control and Computing, Monticello, IL, Oct. 1996

Tracking of Time-Varying Parameters using Optimal Bounding Ellipsoid Algorithms


S. Kapoor, S. Gollamudi, S. Nagaraj and Y. F. Huang
Laboratory for Image and Signal Analysis
Department of Electrical Engineering
University of Notre Dame, Notre Dame, IN 46556

Abstract

This paper analyzes the performance of an optimal bounding ellipsoid (OBE) algorithm for tracking time-varying parameters with incrementally bounded time variations. A linear state-space model is used, with the time-varying parameters represented by the state vector. The OBE algorithm exhibits a selective update property for the time- and observation-update equations, and necessary and sufficient conditions for state tracking are derived. The interpretability of the optimization criterion is also investigated, along with simulation results.

1 Introduction
Tracking of time-varying parameters is an important problem, from both theoretical and practical viewpoints, in adaptive signal processing, communication and control systems. An elegant, convenient and general framework for formulating the problem is provided by linear state-space equations. In this paper, we use the discrete-time state equation framework and present an optimal bounding ellipsoid (OBE) algorithm for tracking time-varying parameters. OBE algorithms strive to obtain a feasible set, rather than a unique parameter estimate. This is facilitated by bounded-error (rather than statistical) assumptions on the noise. We refer the reader to the literature on set-membership identification, and optimal bounding ellipsoid algorithms in particular, for the motivation, justification and comparison of bounded versus statistical descriptions of the noise model and their ramifications on parameter estimates [3, 4]. Using state equations, the time-varying parameter vector is represented by the state vector and its time evolution is modeled by the time-update equation. The observation update is the same as in the conventional multi-input multi-output model. For instance, the well-known first-order Markov process model [6], used commonly for modeling time-varying parameter systems, is a special case of the state-space model. The problem of state estimation has been examined by numerous researchers for over two decades. The original state bounding algorithms are attributable to [13, 12] and [1].


Recently, [9] has derived minimal-volume and minimal-trace ellipsoids from the family of ellipsoids proposed in [13], and has also generalized the solution proposed in [4]. When statistical noise models are used, the Kalman filter is known to be the linear minimum-variance state estimator. In this paper, however, we confine our attention to the state bounding approach. This paper is organized as follows: Section 2 formulates the problem; an OBE algorithm and its features are derived and analyzed in Section 3; and Section 4 presents some simulation results. All proofs are contained in the Appendix.

2 Problem Formulation
Consider the discrete-time, linear state equation model at time k of the form,

x_k = A_{k-1} x_{k-1} + w_{k-1}
y_k = C_k x_k + v_k    (1)

where
x_k : N × 1 state vector representing the time-varying parameter vector to be estimated
y_k : L × 1 observation vector
A_k and C_k : known matrices of dimensions N × N and L × N respectively
w_k : N × 1 noise vector in the time-update equation
v_k : L × 1 noise vector in the observation-update equation

All quantities are assumed real for convenience, and the extension to complex variables is straightforward. It is assumed here that w_k and v_k belong to ellipsoidal sets for all k according to,

W_k = {w_k ∈ R^N : w_k^T W^{-1} w_k ≤ γ_w²}
V_k = {v_k ∈ R^L : v_k^T V^{-1} v_k ≤ γ_v²}    (2)

Parameter estimation via OBE algorithms typically proceeds by alternately using the time- and observation-update equations to recursively compute a state estimate. At time k, the time-update equation is used to form the feasible set for the predicted state. This is done by a vector sum of two ellipsoids: the bounding ellipsoid for the state estimate at time k−1 and the ellipsoid bounding w_{k-1}. The observation equation is then used to update the predicted state estimate by an ellipsoidal intersection of the ellipsoid from the previous step and the ellipsoid obtained by using the boundedness of v_k. In general, the ellipsoidal summing and intersection operations do not yield ellipsoids and thus have to be overbounded "tightly" in some sense. Assume that the initial state is contained in the ellipsoid E(x̂_0, P_0, σ_0²) given by

E(x̂_0, P_0, σ_0²) = {x ∈ R^N : [x − x̂_0]^T P_0^{-1} [x − x̂_0] ≤ σ_0²}
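To make the setup concrete, the following Python sketch generates one step of data from the model (1) and checks the ellipsoidal membership conditions (2). It is illustrative only: the dimensions, the shape matrices W and V, and the bounds γ_w² = N and γ_v² = L are assumptions chosen to match componentwise-uniform noise, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, L = 3, 2                       # state and observation dimensions (illustrative)
A = 0.98 * np.eye(N)              # a time-invariant A_k, as in the simulation section
C = rng.standard_normal((L, N))   # a fixed observation matrix for this sketch
W = (0.02 ** 2) * np.eye(N)       # assumed shape matrices of the noise ellipsoids
V = (0.05 ** 2) * np.eye(L)

def in_ellipsoid(e, S, bound):
    """Check e^T S^{-1} e <= bound, i.e., membership in the noise ellipsoid."""
    return float(e @ np.linalg.solve(S, e)) <= bound

# One step of the state-space model (1)
x_prev = rng.standard_normal(N)
w = rng.uniform(-0.02, 0.02, N)   # bounded process noise
v = rng.uniform(-0.05, 0.05, L)   # bounded observation noise
x = A @ x_prev + w
y = C @ x + v

print(in_ellipsoid(w, W, N), in_ellipsoid(v, V, L))   # True True for these bounds
```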

3 An OBE Algorithm and Tracking Characteristics


Let E(x̂_k, P_k, σ_k²) denote the bounding ellipsoid at time k. The time update at time k is carried out by first linearly transforming the ellipsoid at time k−1, E(x̂_{k-1}, P_{k-1}, σ_{k-1}²), to E(A_{k-1} x̂_{k-1}, A_{k-1} P_{k-1} A_{k-1}^T, σ_{k-1}²). This is followed by a vector sum of the resulting ellipsoid and W_{k-1} to yield an overbounding ellipsoid E(x̂_{k/k-1}, P_{k/k-1}, σ_{k/k-1}²). It is notable that, similar to the selective observation update, the ellipsoidal summing may not be required at each k, leading to a selective time update. The necessary and sufficient conditions for selective time update are presented in Section 3.3.
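The vector sum of two ellipsoids is not itself an ellipsoid, so an overbounding ellipsoid must be chosen. The sketch below uses one standard parameterized family of overbounds with a trace-minimizing parameter; the paper only states that the sum is overbounded "tightly", so this particular choice, and all variable names, are assumptions.

```python
import numpy as np

def vector_sum_overbound(center, S1, S2):
    """Overbound the vector (Minkowski) sum of the ellipsoids
    {x : (x - center)^T S1^{-1} (x - center) <= 1} and {w : w^T S2^{-1} w <= 1}
    by a member of the family S(p) = (1 + 1/p) S1 + (1 + p) S2, p > 0.
    The trace-minimizing p below is an assumed design choice.
    """
    p = np.sqrt(np.trace(S1) / np.trace(S2))
    return center, (1.0 + 1.0 / p) * S1 + (1.0 + p) * S2

# Example: predicted-state ellipsoid plus process-noise ellipsoid.  The sigma^2
# and gamma_w^2 scalings are folded into the shape matrices so that both sets
# are normalized to a bound of 1.
N = 3
A = 0.98 * np.eye(N)
P_prev, sigma2_prev = np.eye(N), 0.5
W, gamma_w2 = (0.02 ** 2) * np.eye(N), float(N)
x_pred = A @ np.ones(N)

center, S_pred = vector_sum_overbound(x_pred, sigma2_prev * (A @ P_prev @ A.T), gamma_w2 * W)
```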

3.1 Observation Update

Let x̂_{k/k-1} = A_{k-1} x̂_{k-1}, P_{k/k-1} = A_{k-1} P_{k-1} A_{k-1}^T and σ_{k/k-1}² = σ_{k-1}². The observation update at time k is carried out by seeking the intersection of E(x̂_{k/k-1}, P_{k/k-1}, σ_{k/k-1}²) and S_k defined by

S_k = {x ∈ R^N : (y_k − C_k x)^T V^{-1} (y_k − C_k x) ≤ γ²}

where γ² ≥ γ_v². Decomposing V^{-1} = V_e V_e^T, the above can be rewritten as

S_k = {x ∈ R^N : (y_k' − C_k' x)^T (y_k' − C_k' x) ≤ γ²}

where y_k' = V_e^T y_k and C_k' = V_e^T C_k. This is now a convenient form for deriving a recursive observation update algorithm. Let ||·|| denote the l_2 norm of a vector. Then, an ellipsoid that contains E(x̂_{k/k-1}, P_{k/k-1}, σ_{k/k-1}²) ∩ S_k is given by
E(x̂_k, P_k, σ_k²) = {x ∈ R^N : (1 − λ_k)[x − x̂_{k/k-1}]^T P_{k/k-1}^{-1} [x − x̂_{k/k-1}] + λ_k ||y_k' − C_k' x||² ≤ (1 − λ_k)σ_{k/k-1}² + λ_k γ²}    (3)

where λ_k is a real number in [0, 1]. We now show that there exists a symmetric positive definite P_k and a positive σ_k² such that

E(x̂_k, P_k, σ_k²) = {x ∈ R^N : [x − x̂_k]^T P_k^{-1} [x − x̂_k] ≤ σ_k²}    (4)

is a well defined ellipsoid [8].

Theorem 1. Consider the inequalities (3) and (4) above. Denoting δ_k ≜ y_k' − C_k' x̂_{k/k-1}, G_k ≜ C_k' P_{k/k-1} C_k'^T and Q_k ≜ (1 − λ_k)I_L + λ_k G_k, we have the following recursive update equations,

P_k = (1/(1 − λ_k)) [P_{k/k-1} − λ_k P_{k/k-1} C_k'^T Q_k^{-1} C_k' P_{k/k-1}],    (5)

x̂_k = x̂_{k/k-1} + λ_k P_k C_k'^T δ_k    and    σ_k² = (1 − λ_k)σ_{k/k-1}² + λ_k γ² − λ_k(1 − λ_k) δ_k^T Q_k^{-1} δ_k    (6)
The proof follows straightforwardly from (3) and (4) and is omitted for brevity. The update parameter λ_k is optimized at each step by minimizing a tight upper bound on σ_k², which can be considered to be a bound on the estimation error, or as a scaling factor of the matrix P_k at time k [2]. An upper bound on σ_k², denoted by σ_k'², is given by

σ_k'² = (1 − λ_k)σ_{k/k-1}² + λ_k γ² − (λ_k(1 − λ_k) δ_k^T δ_k)/((1 − λ_k) + λ_k g_k)    (7)


where g_k is the 2-norm of G_k. Denoting the optimal λ_k by λ_k^o (which lies in [0, α] for some α < 1) and defining the quantity β_k ≜ (γ² − σ_{k/k-1}²)/(δ_k^T δ_k), we have,

Theorem 2. Minimization of σ_k'² with respect to λ_k leads to the following update condition:
(1) if β_k ≥ 1, then λ_k^o = 0;
(2) else λ_k^o = min(α, ν_k), where

ν_k = (1 − β_k)/2    if g_k = 1
ν_k = (1/(g_k − 1)) [sqrt(g_k/(1 + β_k(g_k − 1))) − 1]    if 1 + β_k(g_k − 1) > 0
ν_k = α    if 1 + β_k(g_k − 1) ≤ 0

The above result follows straightforwardly from the derivation given in [2]. We see that the discerning update criterion does not require the computation of the singular values of G_k. To determine whether or not an update is required, only the prediction error δ_k has to be computed.
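A compact sketch of the resulting selective observation update is given below, written against the reconstructed equations (3)-(7) and Theorem 2. The whitening step via a Cholesky factor of V^{-1}, the numerical guards, and the clipping of λ_k to [0, α] are implementation choices, not details taken from the paper.

```python
import numpy as np

def obe_observation_update(x_pred, P_pred, sigma2_pred, y, C, V, gamma2, alpha=0.5):
    """One selective OBE observation update following the reconstructed
    equations (3)-(7) and the update rule of Theorem 2 (a sketch)."""
    # Whitening: V^{-1} = Ve Ve^T, y' = Ve^T y, C' = Ve^T C
    Ve = np.linalg.cholesky(np.linalg.inv(V))
    yp, Cp = Ve.T @ y, Ve.T @ C

    delta = yp - Cp @ x_pred                          # prediction error delta_k
    G = Cp @ P_pred @ Cp.T                            # G_k = C'_k P_{k/k-1} C'_k^T
    g = np.linalg.norm(G, 2)                          # g_k: 2-norm of G_k

    err2 = float(delta @ delta)
    if err2 < 1e-12:                                  # negligible prediction error: skip
        return x_pred, P_pred, sigma2_pred, 0.0
    beta = (gamma2 - sigma2_pred) / err2              # beta_k

    if beta >= 1.0:                                   # Theorem 2, case (1): no update
        return x_pred, P_pred, sigma2_pred, 0.0
    if np.isclose(g, 1.0):
        nu = 0.5 * (1.0 - beta)
    elif 1.0 + beta * (g - 1.0) > 0.0:
        nu = (np.sqrt(g / (1.0 + beta * (g - 1.0))) - 1.0) / (g - 1.0)
    else:
        nu = alpha                                    # sigma'^2 decreases in lambda: take the cap
    lam = float(np.clip(min(alpha, nu), 0.0, alpha))

    L_ = G.shape[0]
    Qinv = np.linalg.inv((1.0 - lam) * np.eye(L_) + lam * G)                  # Q_k^{-1}
    P = (P_pred - lam * P_pred @ Cp.T @ Qinv @ Cp @ P_pred) / (1.0 - lam)     # (5)
    x = x_pred + lam * P @ Cp.T @ delta                                       # (6)
    sigma2 = (1.0 - lam) * sigma2_pred + lam * gamma2 \
             - lam * (1.0 - lam) * float(delta @ Qinv @ delta)
    return x, P, sigma2, lam
```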

3.2 Interpretability of σ_k² minimization

We now investigate the interpretability of σ_k² minimization, i.e., its relation to the minimization of interpretable measures such as the determinant and trace of the positive definite matrix σ_k² P_k which characterizes the bounding ellipsoid. These measures are proportional to the volume and the sum of semi-axes of the bounding ellipsoid, respectively. The issue of relating σ_k² minimization to determinant or trace minimization has also been addressed in [3]. It, however, deals primarily with the relation between the update criteria resulting from the different minimization procedures. Defining

B_k ≜ σ_k² P_k,    (8)

Consider the ellipsoidal determinant and trace [4]

μ_k^vol ≜ det(B_k),    μ_k^tr ≜ tr(B_k)    (9)

Also consider the following determinant and trace measures [4, 3]

κ_k^vol ≜ μ_k^vol / μ_{k-1}^vol = det(B_{k-1}^{-1} B_k),    κ_k^tr ≜ μ_k^tr / μ_{k-1}^tr = tr(B_k)/tr(B_{k-1})    (10)

The design parameter α used in Theorem 2 is not necessarily fixed for all time and may be chosen to be time-varying.

Theorem 3. By minimizing σ_k'² to compute the optimal observation update factor λ_k, the following relations hold for the ellipsoidal determinant and trace measures defined in (9) and (10):
1. μ_k^vol ≤ c_v (σ_k²)^N for some constant c_v
2. μ_k^tr ≤ c_t σ_k² for some constant c_t. Also, for L = 1,
3. κ_k^vol ≤ [σ_k²/((1 − λ_k)σ_{k/k-1}²)]^N
4. κ_k^tr ≤ σ_k²/((1 − λ_k)σ_{k/k-1}²)

If the update parameter λ_k is bounded at each data snapshot by a pre-determined design parameter α_k (0 < α_k < 1) such that α_k → 0 as k → ∞, then the upper bounds in items 3 and 4 of Theorem 3 above become exact in the limit. Theorem 3 reveals that minimizing σ_k² for all k results in minimizing upper bounds on interpretable ellipsoidal volume and trace measures. These results show that, although with σ_k² minimization the determinant and trace in (9) are not monotonically decreasing (as is the case for the more common determinant and trace minimization OBE algorithms [4, 3, 9]), μ_k^vol and μ_k^tr are upper bounded by monotonically decreasing upper bounds at each step.
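For L = 1, the bounds in items 3 and 4 are easy to spot-check numerically. The sketch below uses illustrative values only, and it assumes no intervening vector sum, so that P_{k/k-1} = P_{k-1} and σ_{k/k-1}² = σ_{k-1}²; it runs one observation update via (5) and compares κ_k^vol and κ_k^tr with the stated upper bounds.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
lam = 0.3                                   # an admissible update factor in (0, 1)
gamma2 = 1.0
sigma2_prev = 0.8

M = rng.standard_normal((N, N))
P_prev = M @ M.T + N * np.eye(N)            # a symmetric positive definite P_{k-1}
Cp = rng.standard_normal((1, N))            # L = 1 (scalar observation)

G = (Cp @ P_prev @ Cp.T).item()             # G_k (scalar for L = 1)
Q = (1 - lam) + lam * G
delta = 0.4                                 # a prediction error consistent with gamma2

P_k = (P_prev - lam * (P_prev @ Cp.T @ Cp @ P_prev) / Q) / (1 - lam)               # (5)
sigma2_k = (1 - lam) * sigma2_prev + lam * gamma2 - lam * (1 - lam) * delta**2 / Q

B_prev, B_k = sigma2_prev * P_prev, sigma2_k * P_k
kappa_vol = np.linalg.det(B_k) / np.linalg.det(B_prev)
kappa_tr = np.trace(B_k) / np.trace(B_prev)
bound = sigma2_k / ((1 - lam) * sigma2_prev)

print(kappa_vol <= bound**N, kappa_tr <= bound)     # both True
```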

3.3 Time Update

Theorem 2 results in a selective observation update which can be exploited for significant computational savings [3, 5]. The following results demonstrate the conditions under which the OBE algorithm successfully tracks time variations in the state vector. The true state x_k ∈ E(x̂_k, P_k, σ_k²) if x_k ∈ E(x̂_{k/k-1}, P_{k/k-1}, σ_{k/k-1}²). Thus, a sufficient condition for tracking the true state is
(x̂_{k/k-1} − x_k)^T P_{k/k-1}^{-1} (x̂_{k/k-1} − x_k) ≤ σ_{k/k-1}²

which can be rewritten as,

(x̂_{k-1} − A_{k-1}^{-1} x_k)^T P_{k-1}^{-1} (x̂_{k-1} − A_{k-1}^{-1} x_k) ≤ σ_{k-1}²

Further, the true state can belong to a region outside the bounding ellipsoid without loss of tracking. This region is described by the following:

Theorem 4. The true state x_k ∈ E(x̂_k, P_k, σ_k²) if and only if,

(x̂_{k-1} − A_{k-1}^{-1} x_k)^T P_{k-1}^{-1} (x̂_{k-1} − A_{k-1}^{-1} x_k) ≤ σ_{k-1}² + (λ_k/(1 − λ_k)) (γ² − ||v_k'||²)

Thus, increasing σ_{k-1}², the update parameter λ_k, or γ² can enhance the tracking capability. More specifically, the following sufficient condition emerges straightforwardly from the above.

Theorem 5. If x_{k-1} ∈ E(x̂_{k-1}, P_{k-1}, σ_{k-1}²) and λ_k ≠ 0, then x_k ∈ E(x̂_k, P_k, σ_k²) if,

||w_{k-1}|| ≤ (1/(σ_max(A_{k-1}^{-1}) sqrt(σ_min(P_{k-1}^{-1})))) [ sqrt( σ_{k-1}² + (λ_k/(1 − λ_k)) (γ² − γ_v²) σ_min(P_{k-1}^{-1})/σ_max(P_{k-1}^{-1}) ) − σ_{k-1} ]    (11)

[Figure 1: (a) μ_k^vol, μ_k^tr and σ_k² versus time (snapshots). (b) Mean squared error of the parameter estimate versus time.]
where σ_max(A_{k-1}^{-1}) is the maximum singular value of A_{k-1}^{-1}, σ_min(P_{k-1}^{-1}) and σ_max(P_{k-1}^{-1}) are the minimum and maximum singular values of P_{k-1}^{-1}, and γ_v² is the actual bound on the observation noise. If Theorem 5 is satisfied at time k, there is no need for an ellipsoid summing operation, resulting in a selective time update. Since Theorem 5 is merely a sufficient condition for successful tracking, its use will result in a conservative tracking strategy. Nevertheless, it yields further insight into the tracking behavior of the OBE algorithm. The above analysis sheds light on the behavior of the OBE algorithm when the parameter vector is time-varying. It also explains the basis for various schemes proposed in the literature for tracking. For instance, Schweppe [13] proposed a scheme to add a positive definite matrix to P_k for all k. This has the effect of increasing the eigenvalues of P_k and thus increasing the right-hand side of (11), making the sufficient condition easier to satisfy. Rao and Huang [11] carried out a detailed analysis of the parameter tracking problem for the single-input single-output case and proposed an increase in σ_k² to prevent the ellipsoid from shrinking to zero. Also, simply using a larger than necessary noise bound γ² is seen to be beneficial. However, this benign effect has to be traded off against the resulting increase in estimation error.
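In practice, the condition (11) can be evaluated online to decide whether the ellipsoid summing step is needed. The following sketch evaluates the right-hand side of the reconstructed inequality (11); the function and variable names, and the illustrative numbers in the example call, are assumptions.

```python
import numpy as np

def time_update_can_be_skipped(w_bound, A_prev, P_prev, sigma2_prev,
                               lam, gamma2, gamma2_v):
    """Compare a known bound on ||w_{k-1}|| with the right-hand side of the
    reconstructed condition (11).  Returning True means the sufficient
    condition for tracking holds and the ellipsoid-summing (time-update)
    step may be skipped; names and usage are illustrative only."""
    A_inv = np.linalg.inv(A_prev)
    P_inv = np.linalg.inv(P_prev)
    s_max_Ainv = np.linalg.norm(A_inv, 2)             # largest singular value of A^{-1}
    svals = np.linalg.svd(P_inv, compute_uv=False)
    s_max_Pinv, s_min_Pinv = svals[0], svals[-1]

    slack = (lam / (1.0 - lam)) * (gamma2 - gamma2_v) * s_min_Pinv / s_max_Pinv
    rhs = (np.sqrt(sigma2_prev + slack) - np.sqrt(sigma2_prev)) \
          / (s_max_Ainv * np.sqrt(s_min_Pinv))
    return w_bound <= rhs

# Illustrative call with simulation-style numbers from Section 4
N = 3
ok = time_update_can_be_skipped(w_bound=0.02 * np.sqrt(N),
                                A_prev=0.98 * np.eye(N),
                                P_prev=0.5 * np.eye(N),
                                sigma2_prev=0.4,
                                lam=0.3, gamma2=1.0, gamma2_v=0.25)
```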

4 Simulations

Consider a first-order Markov process model [6] with N = 3 and L = 2; A_k = aI_N for all k with a = 0.98; each component of w_k and v_k is taken to be uniformly distributed in the intervals [−0.02, 0.02] and [−0.05, 0.05], respectively; C_k is drawn from an N-dimensional Gaussian random process. Figure 1(a) shows a plot of μ_k^vol, μ_k^tr and σ_k² versus time for a typical simulation run. The bump in all the indicators near time k = 80 is due to increasing σ_k² by 0.25 to prevent it from becoming negative owing to drift of the parameter being tracked. The mean squared error of the parameter estimate for the first 50 time instants is averaged over 100 independent trials and is plotted in Figure 1(b).
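A minimal sketch of this data-generation setup follows. The initial state, the random seed, and drawing an independent Gaussian C_k at every step are assumptions; the paper only states the distributions and dimensions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, L, a, T = 3, 2, 0.98, 100

x = rng.standard_normal(N)            # initial true state (assumed)
A = a * np.eye(N)                     # A_k = a I_N for all k
states, observations, C_seq = [], [], []

for k in range(T):
    w = rng.uniform(-0.02, 0.02, N)   # process noise, uniform per component
    v = rng.uniform(-0.05, 0.05, L)   # observation noise, uniform per component
    x = A @ x + w                     # first-order Markov (time-update) model
    C = rng.standard_normal((L, N))   # C_k drawn i.i.d. Gaussian each step (assumed)
    y = C @ x + v
    states.append(x.copy()); observations.append(y); C_seq.append(C)
```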

5 Conclusions


This paper has considered the problem of tracking time-varying parameters in a state-space framework using an OBE algorithm. OBE algorithms are well suited for time-varying parameter estimation due to their inherent structure and mathematical tractability. An OBE algorithm and its tracking features have been presented, along with an investigation of the interpretability of the σ_k² minimization criterion.

Appendix

Proof of Theorem 3:

1. It can be shown that, under a persistence of excitation assumption on the input data, the matrix P_k is upper bounded [2] as

P_k ≤ ρ I_N

for some constant ρ. For instance, ρ = sup_k {σ_max(P_k)}. Thus,

det(B_k) = (σ_k²)^N det(P_k) ≤ c_v (σ_k²)^N

with c_v = ρ^N.

2. Similar to item 1 above,

tr(B_k) = σ_k² tr(P_k) ≤ c_t σ_k²

with c_t = Nρ.

3. Using (8) and (5), it follows that,


B_k = (1/(1 − λ_k)) (σ_k²/σ_{k/k-1}²) [ B_{k-1} − (λ_k B_{k-1} C_k'^T C_k' B_{k-1}) / (σ_{k/k-1}² (1 − λ_k + λ_k G_k)) ]

Using (10),

κ_k^vol = det( (σ_k²/((1 − λ_k)σ_{k/k-1}²)) [ I_N − (λ_k C_k'^T C_k' B_{k-1}) / (σ_{k/k-1}² (1 − λ_k + λ_k G_k)) ] )

Using the identity det(aI + bc^T) = a^{N-1}(a + c^T b), where a is a real constant and b, c ∈ R^N, we have,

κ_k^vol = [σ_k²/((1 − λ_k)σ_{k/k-1}²)]^{N-1} [ σ_k²/((1 − λ_k)σ_{k/k-1}²) − (λ_k σ_k² C_k' B_{k-1} C_k'^T) / ((1 − λ_k)(σ_{k/k-1}²)² (1 − λ_k + λ_k G_k)) ]

which reduces to

κ_k^vol = [σ_k²/((1 − λ_k)σ_{k/k-1}²)]^N [ 1 − (λ_k G_k)/(1 − λ_k + λ_k G_k) ]

Since λ_k G_k/(1 − λ_k + λ_k G_k) ≥ 0 and P_k is symmetric positive definite, it follows that,

κ_k^vol ≤ [σ_k²/((1 − λ_k)σ_{k/k-1}²)]^N    (12)

The error in the upper bound is given by,

Δ_k = [σ_k²/σ_{k/k-1}²]^N [ 1/(1 − λ_k)^N − (1 − λ_k)/((1 − λ_k)^N (1 − λ_k + λ_k G_k)) ]

As per the convergence of the OBE algorithm, λ_k → 0 as k → ∞, thus making the bound asymptotically exact.

4. Using (10), we have

tr(B_k) = (σ_k²/((1 − λ_k)σ_{k/k-1}²)) [ tr(B_{k-1}) − tr( (λ_k B_{k-1} C_k'^T C_k' B_{k-1}) / (σ_{k/k-1}² (1 − λ_k + λ_k G_k)) ) ]

Using the identity tr(xy^T) = x^T y,

κ_k^tr = (σ_k²/((1 − λ_k)σ_{k/k-1}²)) [ 1 − (λ_k C_k' B_{k-1}² C_k'^T) / (tr(B_{k-1}) σ_{k/k-1}² (1 − λ_k + λ_k G_k)) ]

Rewriting,

κ_k^tr = (σ_k²/((1 − λ_k)σ_{k/k-1}²)) [ 1 − (λ_k H_k)/((1 − λ_k + λ_k G_k) T_{k-1}) ]

where H_k ≜ C_k' P_{k-1}² C_k'^T and T_{k-1} ≜ tr(P_{k-1}). Since P_k is symmetric positive definite, we obtain,

κ_k^tr ≤ σ_k²/((1 − λ_k)σ_{k/k-1}²)

Similar to (12), the asymptotic exactness of the upper bound on κ_k^tr is obtained.

Sketch of Proof of Theorem 4: The necessary and sufficient condition for x_k ∈ E(x̂_k, P_k, σ_k²) is

(x̂_k − x_k)^T P_k^{-1} (x̂_k − x_k) ≤ σ_k²    (13)

Using (1) and (5), and after extended manipulations, we obtain

(x̂_k − x_k)^T P_k^{-1} (x̂_k − x_k) − σ_k² = (1 − λ_k)[z_k^T P_{k/k-1}^{-1} z_k − σ_{k/k-1}²] + λ_k [||v_k'||² − γ²]

where z_k = A_{k-1} x̂_{k-1} − x_k for convenience. Using (13), and after rearrangement, the desired expression is obtained.


Sketch of Proof of Theorem 5: Using (1) and Theorem 4, we have

x̃_{k-1}^T P_{k-1}^{-1} x̃_{k-1} − 2 x̃_{k-1}^T P_{k-1}^{-1} A_{k-1}^{-1} w_{k-1} + (A_{k-1}^{-1} w_{k-1})^T P_{k-1}^{-1} A_{k-1}^{-1} w_{k-1} ≤ σ_{k-1}² + (λ_k/(1 − λ_k))(γ² − ||v_k'||²)

where x̃_{k-1} = x̂_{k-1} − x_{k-1} for convenience. Assuming that x_{k-1} ∈ E(x̂_{k-1}, P_{k-1}, σ_{k-1}²), a sufficient condition for the above is

2 ||x̃_{k-1}|| σ_max(P_{k-1}^{-1}) σ_max(A_{k-1}^{-1}) ||w_{k-1}|| + ||w_{k-1}||² σ_max(P_{k-1}^{-1}) σ_max²(A_{k-1}^{-1}) ≤ (λ_k/(1 − λ_k))(γ² − γ_v²)

Again, since x_{k-1} ∈ E(x̂_{k-1}, P_{k-1}, σ_{k-1}²), ||x̃_{k-1}||² ≤ σ_{k-1}²/σ_min(P_{k-1}^{-1}). Substituting and solving the quadratic for ||w_{k-1}|| yields the desired result.

References
[1] D. P. Bertsekas and I. B. Rhodes, "Recursive state estimation for a set-membership description of uncertainty," IEEE Trans. on Automatic Control, Vol. 16, pp. 117-128, April 1971.
[2] S. Dasgupta and Y. F. Huang, "Asymptotically convergent modified recursive least squares with data dependent updating and forgetting factor for systems with bounded noise," IEEE Transactions on Information Theory, Vol. 33, No. 3, pp. 383-392, May 1987.
[3] J. R. Deller, M. Nayeri and S. F. Odeh, "Least-square identification with error bounds for real-time signal processing and control," Proc. IEEE, Vol. 81, No. 6, pp. 813-849, June 1993.
[4] E. Fogel and Y. F. Huang, "On the value of information in system identification - bounded noise case," Automatica, Vol. 18, No. 2, pp. 229-238, March 1982.
[5] S. Gollamudi and Y. F. Huang, "Updater-Shared Adaptive Parallel Equalization (U-SHAPE) using set-membership identification," Proc. IEEE Intl. Conf. Circuits and Systems, Atlanta, GA, USA, May 13-15, 1996.
[6] S. Haykin, Adaptive Filter Theory, Prentice Hall, 1996.
[7] Y. F. Huang and J. R. Deller, "On the tracking capabilities of optimal bounding ellipsoid algorithms," Proc. 30th Ann. Allerton Conf. on Comm., Control and Comp., pp. 50-59, October 1992.
[8] S. Gollamudi, S. Nagaraj, S. Kapoor and Y. F. Huang, "Set-membership state estimation with optimal bounding ellipsoids," Proc. Intl. Symp. on Information Theory and its Applications, pp. 262-265, Victoria, B.C., Canada, September 17-20, 1996.


[9] D. G. Maksarov and J. P. Norton, "State bounding with ellipsoidal set description of the uncertainty," Int. Journal of Control, September 1996.
[10] J. P. Norton and S. H. Mo, "Parameter bounding for time-varying systems," Mathematics and Computers in Simulation, Vol. 32, pp. 527-534, 1990.
[11] A. K. Rao and Y. F. Huang, "Tracking characteristics of an OBE parameter estimation algorithm," IEEE Transactions on Signal Processing, Vol. 41, No. 3, pp. 1140-1148, March 1993.
[12] F. M. Schlaepfer and F. C. Schweppe, "Continuous time state estimation under disturbances bounded by convex sets," IEEE Trans. on Automatic Control, Vol. 17, pp. 197-205, 1972.
[13] F. C. Schweppe, "Recursive state estimation: Unknown but bounded errors and system inputs," IEEE Trans. on Automatic Control, Vol. 13, pp. 22-28, 1968.

