
A Tight Bound for the Joint Covariance of Two Random Vectors with Unknown but Constrained Cross-Correlation

Uwe D. Hanebeck, Kai Briechle, Joachim Horn

Institute of Automatic Control Engineering, Technische Universität München, 80290 München, Germany, Uwe.Hanebeck@ieee.org, Kai.Briechle@ei.tum.de

Siemens AG, Corporate Technology, Information and Communications, 81730 München, Germany, Joachim.Horn@ieee.org

Abstract
This paper derives a fundamental result for processing two correlated random vectors with unknown cross-correlation, where a constraint on the maximum absolute correlation coefficient is given. A tight upper bound for the joint covariance matrix is derived on the basis of the individual covariances and the correlation constraint. For symmetric constraints, the bounding covariance matrix naturally possesses zero cross covariances, which further increases its usefulness in applications. Performance is demonstrated by recursively propagating a state through a linear dynamical system suffering from stochastic noise correlated with the system state.

Introduction

In many applications, correlated random vectors have to be processed, which requires their joint statistics to be available [1]. However, the cross covariances may either be too expensive to maintain or simply not available. Unfortunately, neglecting the cross covariances by setting them to zero gives wrong results [2, 5]. Hence, an upper bound for the joint covariance matrix is desired that is compatible with all possible cross covariances. For the case that the correlation between the considered random vectors is unconstrained, i.e., the maximum absolute correlation coefficient is less than or equal to one ($|r| \le 1$), a covariance bound exists [3, 4]. However, the existing covariance bound is too conservative, i.e., not tight enough, when a constraint of the form $|r| \le r_{\max} < 1$ is available. Hence, the purpose of this paper is to derive a tight bound for the case of constrained correlation.

The problem of bounding two correlated random vectors with a given cross-correlation constraint is formulated in Sec. 2. An appropriate bound is derived in Sec. 3 and then discussed in detail in Sec. 4. The advantage of using the new bound in applications is demonstrated in Sec. 5, where a state is recursively propagated through a linear dynamic system corrupted by correlated noise.

Problem Formulation

We are given two random vectors $x \in \mathbb{R}^N$, $y \in \mathbb{R}^M$ with expected values $E\{x\} = \hat{x}$, $E\{y\} = \hat{y}$ and individual covariances $\mathrm{Cov}\{x\} = E^{xx}$, $\mathrm{Cov}\{y\} = E^{yy}$, where $x$ and $y$ are assumed to be correlated. Their cross covariances $\mathrm{Cov}\{x, y\} = E^{xy}$ and $\mathrm{Cov}\{y, x\} = E^{yx}$, however, are not explicitly known. It is only known that the correlation coefficient $r$ is limited according to

$$ |r| \le r_{\max} . \quad (1) $$
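For scalar $x$ and $y$, the constraint (1) simply limits $|E^{xy}| / \sqrt{E^{xx} E^{yy}}$. A quick Monte-Carlo sanity check of this setup might look as follows (a minimal sketch; all numerical values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
r_true, r_max = 0.3, 0.5          # true (unknown) correlation and its known bound
Exx, Eyy = 9.0, 4.0               # individual variances
Exy = r_true * np.sqrt(Exx * Eyy) # cross covariance implied by r_true

# Draw samples from the (in practice unknown) joint distribution
cov = np.array([[Exx, Exy], [Exy, Eyy]])
samples = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)

# Empirical correlation coefficient must respect |r| <= r_max
r_hat = np.corrcoef(samples.T)[0, 1]
print(abs(r_hat) <= r_max)  # True
```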

Hence, a constraint for the cross covariances is given by

$$ E^{yx} \left(E^{xx}\right)^{-1} E^{xy} \le r_{\max}^2 \, E^{yy} , \quad (2) $$

where, in general, for two positive definite matrices $A$ and $B$, an expression of the form $A > B$ ($A \ge B$) is interpreted as $A - B$ positive definite (positive semidefinite). By defining the matrix

$$ C = r_{\max}^2 \, E^{yy} - E^{yx} \left(E^{xx}\right)^{-1} E^{xy} , $$

verification of (2) can be performed by Sylvester's criterion according to $\det\left(C(1{:}i, 1{:}i)\right) \ge 0$ for $i = 1, \ldots, M$.
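As a sanity check, the verification of (2) via Sylvester's criterion can be sketched numerically. The helper function and the example numbers below are illustrative assumptions, not from the paper:

```python
import numpy as np

def satisfies_constraint(Exx, Eyy, Exy, r_max):
    """Check (2): E^yx (E^xx)^{-1} E^xy <= r_max^2 E^yy.

    Forms C = r_max^2 E^yy - E^yx (E^xx)^{-1} E^xy and tests, as in the
    paper, that all leading principal minors det(C[:i, :i]) are >= 0.
    """
    Eyx = Exy.T
    C = r_max**2 * Eyy - Eyx @ np.linalg.solve(Exx, Exy)
    M = C.shape[0]
    return all(np.linalg.det(C[:i, :i]) >= -1e-12 for i in range(1, M + 1))

# Scalar case in the spirit of Example 2.1: E^xx = 9, E^yy = 4
Exx = np.array([[9.0]])
Eyy = np.array([[4.0]])
r_max = 0.5

Exy_ok = np.array([[2.4]])   # corresponds to |r| = 0.4 <= r_max
Exy_bad = np.array([[3.6]])  # corresponds to |r| = 0.6 >  r_max
print(satisfies_constraint(Exx, Eyy, Exy_ok, r_max))   # True
print(satisfies_constraint(Exx, Eyy, Exy_bad, r_max))  # False
```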

Figure 1: Some members of the family of possible covariances (1-sigma-bounds) for different constraints on the maximum absolute correlation coefficient, $|r| < 0.01$, $0.2$, $0.4$, $0.6$, $0.8$, $0.99$. The unconstrained case corresponds to $|r| \le 1$.

Example 2.1 For two scalar random variables $x$ and $y$ with individual variances $E^{xx} = 9$ and $E^{yy} = 4$, some members of the family of possible joint covariance matrices for different constraints on the maximum absolute correlation coefficient are visualized in Fig. 1 by plotting the respective 1-sigma-bounds.¹ The unconstrained case would correspond to $|r| \le 1$.

The goal is now to find a family of bounding covariances $B$ with $B \ge E(r)$ for all possible joint covariances $E(r)$ defined by

$$ E(r) = \begin{pmatrix} E^{xx} & E^{xy} \\ E^{yx} & E^{yy} \end{pmatrix} \quad (3) $$

with $r$ according to (1) and $E^{xy}$, $E^{yx}$ such that (2) holds.

Derivation of Covariance Bound

For deriving the desired covariance bound, we use the fact that the union of the 1-sigma-bounds of all possible joint covariances forms a convex set aligned with the coordinate axes. Hence, the cross covariances of the bounding covariance matrix have to be zero matrices. For the simplest case of two scalar random variables $x$ and $y$, this is visualized in Fig. 1. In addition, for achieving an upper bound, the covariance matrices $E^{xx}$ and $E^{yy}$ have to be individually scaled. Combining both conditions yields

$$ B = \begin{pmatrix} k_x E^{xx} & 0 \\ 0 & k_y E^{yy} \end{pmatrix} , \quad (4) $$

where the scale factors $k_x$, $k_y$ have to be selected in such a way that (3) holds.

Theorem 3.1 The scale factors $k_x$, $k_y$ in (4) are given by

$$ k_x = \frac{1}{\kappa - \lambda} , \quad k_y = \frac{1}{\kappa + \lambda} \quad (5) $$

with

$$ \lambda^2 \le \frac{(1 - \kappa)^2 - r_{\max}^2 \kappa^2}{1 - r_{\max}^2} \quad (6) $$

and

$$ 0.5 \le \kappa \le \frac{1}{1 + r_{\max}} . \quad (7) $$

¹ 1-sigma-bounds will be used throughout the paper without loss of generality.
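Assuming the scale factors of Theorem 3.1 as reconstructed above (with $\kappa(\lambda)$ taken from Lemma 3.1 below), the bound (4) can be checked numerically against all admissible cross covariances in the scalar case. A minimal sketch with illustrative numbers:

```python
import numpy as np

def kappa(lam, r_max):
    # kappa(lambda) from Lemma 3.1 (as reconstructed in this text)
    return (1.0 - np.sqrt(r_max**2 + lam**2 * (1.0 - r_max**2) ** 2)) / (1.0 - r_max**2)

def bound(Exx, Eyy, lam, r_max):
    # Block-diagonal bound B of (4) with scale factors (5)
    k = kappa(lam, r_max)
    kx, ky = 1.0 / (k - lam), 1.0 / (k + lam)
    N, M = Exx.shape[0], Eyy.shape[0]
    return np.block([[kx * Exx, np.zeros((N, M))],
                     [np.zeros((M, N)), ky * Eyy]])

Exx = np.array([[9.0]])
Eyy = np.array([[4.0]])
r_max = 0.5
B = bound(Exx, Eyy, lam=0.1, r_max=r_max)

# B must dominate E(r) for every admissible correlation |r| <= r_max,
# i.e. B - E(r) must be positive semidefinite on the whole grid.
ok = True
for r in np.linspace(-r_max, r_max, 101):
    Exy = r * np.sqrt(Exx * Eyy)
    E = np.block([[Exx, Exy], [Exy.T, Eyy]])
    ok &= bool(np.all(np.linalg.eigvalsh(B - E) >= -1e-9))
print(ok)  # True
```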

Proof. For proving (3), the difference matrix

$$ D = B(\kappa, \lambda) - E(r) = \begin{pmatrix} \left(\frac{1}{\kappa - \lambda} - 1\right) E^{xx} & -E^{xy} \\ -E^{yx} & \left(\frac{1}{\kappa + \lambda} - 1\right) E^{yy} \end{pmatrix} $$

is considered. According to Sylvester's criterion, the matrix $D$ is positive semidefinite if the determinants of all submatrices $D(1{:}N{+}i, 1{:}N{+}i)$ for $i = 1, \ldots, M$ are larger than or equal to zero. $\left(\frac{1}{\kappa - \lambda} - 1\right) E^{xx}$ is positive definite and does not need to be tested. The determinants are given by

$$ \left| D(1{:}N{+}i, 1{:}N{+}i) \right| = \left| \left(\tfrac{1}{\kappa - \lambda} - 1\right) E^{xx} \right| \cdot \left| \left(\tfrac{1}{\kappa + \lambda} - 1\right) E^{yy}(1{:}i, 1{:}i) - \left(\tfrac{1}{\kappa - \lambda} - 1\right)^{-1} E^{yx}(1{:}i, 1{:}N) \left(E^{xx}\right)^{-1} E^{xy}(1{:}N, 1{:}i) \right| . $$

With (2), a sufficient condition is

$$ \left( \left(\tfrac{1}{\kappa + \lambda} - 1\right) - \frac{r_{\max}^2}{\tfrac{1}{\kappa - \lambda} - 1} \right) E^{yy}(1{:}i, 1{:}i) \ge 0 $$

for $i = 1, \ldots, M$, which is equivalent to

$$ \left(\frac{1}{\kappa - \lambda} - 1\right) \left(\frac{1}{\kappa + \lambda} - 1\right) \ge r_{\max}^2 $$

and yields (6). The constraint on $\kappa$ in (7) then follows by claiming a nonnegative right-hand side in (6).

The parameter set for $\kappa$ and $\lambda$ from the Theorem is redundant in the sense that it specifies scaled variants of a bounding covariance with the same form and orientation. Hence, it is sufficient to restrict attention to the smallest of these scaled variants. The appropriate parameter values are specified in the following Lemma.

Lemma 3.1 A family of bounding covariances $B(\lambda)$ depending on a parameter $\lambda$ is given by (4) with $k_x$, $k_y$ in (5). The parameter $\lambda$ may vary according to $|\lambda| < 0.5$. $\kappa$ is a function of $\lambda$ given by

$$ \kappa(\lambda) = \frac{1 - \sqrt{r_{\max}^2 + \lambda^2 \left(1 - r_{\max}^2\right)^2}}{1 - r_{\max}^2} . $$

The admissible values for $\kappa(\lambda)$ and $\lambda$ resulting from Lemma 3.1 are visualized for different values of $r_{\max}$ in Fig. 2.

Figure 2: The admissible values for $\kappa(\lambda)$ and $\lambda$ resulting from Lemma 3.1, shown for $r_{\max} = 0.0$, $0.25$, $0.5$, $0.75$, and $0.95$.

Discussion of Result

The resulting family of bounding covariance matrices $B(\lambda)$ given in Lemma 3.1 is now discussed regarding optimality, selection of one member, and more complicated correlation constraints.

Optimality: An important feature of the new approach is that every member of the family of bounding covariance matrices $B(\lambda)$ bounds every possible covariance $E(r)$ for $r$ according to (1). Even more, consider the union $U(r_{\max})$ of the 1-sigma-bounds of all possible covariance matrices fulfilling the given correlation constraint and the intersection $I(\lambda)$ of the 1-sigma-bounds of the proposed family of bounding covariances. The previously known bound from [3] ensures that the set $U(r_{\max})$ is always a subset of the set $I(\lambda)$ according to

$$ U(r_{\max}) = \bigcup_{|r| \le r_{\max}} E(r) \subseteq \bigcap_{\lambda} B(\lambda) = I(\lambda) . \quad (8) $$

However, it is apparent from the example given in Fig. 3 that this approximation is not tight for $r_{\max} < 1$. In contrast, for the new bound the set $U(r_{\max})$ is, for every $r_{\max}$, equivalent to the set $I(\lambda)$, i.e.,

$$ U(r_{\max}) \equiv I(\lambda) , \quad (9) $$

which is visualized in Fig. 4.

Selection of $\lambda$: Up to now, the complete family of bounding covariance matrices $B(\lambda)$ has been considered. Of course, when applying the new bound, a specific value $\lambda^{*}$ has to be selected, which results in a single joint covariance matrix $B(\lambda^{*})$.

More complex constraints: The case of more general constraints on the correlation coefficient of the form $-1 \le r_{\min} \le r \le r_{\max} \le 1$ is not considered here, since it is regarded to be of minor practical importance. Furthermore, it leads to a more complicated family of bounding covariance matrices $B(\lambda)$ with nonzero cross covariances.
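The tightness property can be spot-checked numerically: with $\kappa(\lambda)$ from Lemma 3.1 (as reconstructed here), each member $B(\lambda)$ satisfies $(k_x - 1)(k_y - 1) = r_{\max}^2$ and hence touches the extreme admissible member $E(r_{\max})$. A minimal sketch with illustrative numbers:

```python
import numpy as np

def kappa(lam, r_max):
    # kappa(lambda) from Lemma 3.1 (as reconstructed in this text)
    return (1 - np.sqrt(r_max**2 + lam**2 * (1 - r_max**2) ** 2)) / (1 - r_max**2)

# Scalar case of Example 2.1 with an illustrative constraint r_max = 0.6
r_max = 0.6
tight = True
for lam in np.linspace(-0.45, 0.45, 19):
    k = kappa(lam, r_max)
    kx, ky = 1 / (k - lam), 1 / (k + lam)
    # Equality (kx - 1)(ky - 1) = r_max^2 means B(lambda) is tight:
    # the psd condition of the proof holds with equality at |r| = r_max.
    tight &= abs((kx - 1) * (ky - 1) - r_max**2) < 1e-9
print(tight)  # True
```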

Application Example

For demonstrating the performance of the new bound, a typical application problem is solved: propagating a given state recursively through a linear system. The system equation is given by $x_{k+1} = x_k + w_{k+1}$, where the noise vector $w_k$ is correlated with the state $x_k$. The level of correlation between $w_k$ and $x_k$ is unknown but constrained by (1) with a given $r_{\max}$. The individual covariances of the initial state and of the (time-invariant) noise are selected as

$$ E^{xx}_0 = \begin{pmatrix} 3 & 2 \\ 2 & 3 \end{pmatrix} \quad \text{and} \quad E^{ww}_k = \begin{pmatrix} 1 & 0.9 \\ 0.9 & 1 \end{pmatrix} , $$

respectively. The goal is to recursively calculate a bounding covariance matrix for all possible covariance matrices $E^{xx}_k(r_k)$ of the state $x_k$ over several time steps. For reference purposes, the covariance matrices $E^{xx}_k(r_k)$ are calculated according to

$$ E^{xx}_{k+1}(r_{k+1}) = T \begin{pmatrix} E^{xx}_k(r_k) & E^{xw}_{k+1}(r_{k+1}) \\ E^{wx}_{k+1}(r_{k+1}) & E^{ww}_{k+1} \end{pmatrix} T^{\mathrm{T}} $$

for $k = 0, 1, 2, \ldots$ and all possible cross covariances $E^{xw}_{k+1}(r_{k+1})$, $E^{wx}_{k+1}(r_{k+1})$ compatible with the given bound (1), where $T$ is given by

$$ T = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{pmatrix} . $$

Applying the new bound gives

$$ E^{xx}_{k+1}(\lambda_{k+1}) = \frac{E^{xx}_k(\lambda_k)}{\kappa_{k+1} - \lambda_{k+1}} + \frac{E^{ww}_{k+1}}{\kappa_{k+1} + \lambda_{k+1}} $$

for $k = 0, 1, 2, \ldots$ and $\kappa_k$, $\lambda_k$ from Lemma 3.1. For comparison purposes, the existing bound [3] has been applied to the propagation problem. Results are shown in Fig. 5 for $r_{\max} = 0.2$ and in Fig. 6 for $r_{\max} = 0.6$. The shaded regions correspond to the convex hull of the 1-sigma-bounds of all possible joint covariance matrices. In addition, the resulting 1-sigma-bounds of applying the new and the existing bound are plotted. It is obvious that the new bound gives much less conservative results, since the existing bound does not exploit the given correlation constraints.

Conclusions

The problem of calculating an upper bound for the joint covariance matrix of two correlated random vectors has been considered for the case that the level of correlation is limited, i.e., the maximum absolute correlation coefficient is less than a prespecified value. This problem has been solved by scaling the individual covariance matrices in such a way that the joint covariance matrix provides a tight upper bound for the set of all possible true joint covariances fulfilling the correlation constraint. The new bound generalizes and enhances a known result for unconstrained correlation between two random vectors, which gives rather conservative results when a correlation constraint is available. Simulations demonstrate the advantage of the new bound. As a byproduct, the new covariance bound yields an uncorrelated representation of the joint statistics of the two random vectors under consideration. This provides the basis for the derivation of a state estimation algorithm in the presence of correlated noise with a prespecified maximum correlation level, which generalizes the results in [3].

References

[1] B. D. O. Anderson and J. B. Moore, Optimal Filtering, Prentice-Hall, 1979.

[2] J. A. Castellanos, J. D. Tardós, and G. Schmidt, "Building a Global Map of the Environment of a Mobile Robot: The Importance of Correlations," Proc. of the 1997 IEEE International Conference on Robotics and Automation (ICRA'97), Albuquerque, New Mexico, USA, 1997, pp. 1053–1059.

[3] U. D. Hanebeck and K. Briechle, "New Results for Stochastic Prediction and Filtering with Unknown Correlations," this conference.

[4] S. J. Julier and J. K. Uhlmann, "A Non-divergent Estimation Algorithm in the Presence of Unknown Correlations," Proc. of the 1997 American Control Conference (ACC'97), Albuquerque, New Mexico, 1997, pp. 2369–2373.

[5] J. A. Castellanos, J. M. Martínez, J. Neira, and J. D. Tardós, "Simultaneous Map Building for Mobile Robots: A Multisensor Fusion Approach," Proc. of the 1998 IEEE International Conference on Robotics and Automation (ICRA'98), Leuven, Belgium, 1998, pp. 1244–1249.

Figure 3: Results of applying the existing bound to the joint covariance of two scalar random variables x and y with individual covariances in accordance with Example 2.1. The result is not tight when the correlation is constrained (the approximation error is shown by the shaded area).

Figure 4: Results of applying the proposed new bound to the joint covariance of two scalar random variables x and y with individual covariances in accordance with Example 2.1. Here the result is tight for all constraints.

Figure 5: Results of the recursive propagation of a given state through a linear system model suffering from additive noise correlated with the system state ($r_{\max} = 0.2$), shown for time steps $k = 1, \ldots, 6$. Shaded: the convex hull of all possible true covariances.
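The bound propagation underlying Fig. 5 can be sketched in a few lines; $\lambda_k = 0$ is chosen here purely for illustration (it yields the symmetric member of the family, $k_x = k_y = 1 + r_{\max}$), and the formulas follow Theorem 3.1 and Lemma 3.1 as reconstructed above:

```python
import numpy as np

def kappa(lam, r_max):
    # kappa(lambda) from Lemma 3.1 (as reconstructed in this text)
    return (1 - np.sqrt(r_max**2 + lam**2 * (1 - r_max**2) ** 2)) / (1 - r_max**2)

# Setup of the application example
Exx = np.array([[3.0, 2.0], [2.0, 3.0]])  # initial state covariance E^xx_0
Eww = np.array([[1.0, 0.9], [0.9, 1.0]])  # time-invariant noise covariance E^ww_k
r_max = 0.2
lam = 0.0  # one member of the family; illustrative choice

for k in range(6):
    kap = kappa(lam, r_max)
    kx, ky = 1 / (kap - lam), 1 / (kap + lam)
    # x_{k+1} = x_k + w_{k+1}: bound the covariance of the sum by
    # E^xx_{k+1} = kx * E^xx_k + ky * E^ww_{k+1}
    Exx = kx * Exx + ky * Eww
    # the bound must stay symmetric positive definite at every step
    assert np.all(np.linalg.eigvalsh(Exx) > 0)
print(np.round(Exx, 2))
```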
Figure 6: Results of the recursive propagation of a given state through a linear system model suffering from additive noise correlated with the system state ($r_{\max} = 0.6$), shown for time steps $k = 1, \ldots, 6$. Shaded: the convex hull of all possible true covariances.
