Kuo-Chu Chang
Fairfax, Virginia
Outline
• Multisensor Fusion
Introduction
[Figure: two opposing combat forces mirror each other; on each side, sensors feed detection, identification, tracking and fusion, followed by situation assessment, course of actions, and decision making, with orders and communications closing the loop.]
Data Association and Multitarget Tracking
Multitarget Tracking Problem
• Sensors
– Noisy measurements with ambiguous origins
– False alarms, clutter, etc.
– Less than perfect detection
– State-dependent target detection (FOV, masking, MTI, etc.)
Data Association Problem
[Figure: two targets with predicted measurements ẑ₁, ẑ₂ and validated measurements z₁, z₂, z₃ falling in overlapping gates.]
Validation Gate:
V(k+1) = { z : [z − ẑ(k+1|k)]′ S(k+1)⁻¹ [z − ẑ(k+1|k)] ≤ γ }
where p[z(k+1) | Z^k] = N[z(k+1); ẑ(k+1|k), S(k+1)]
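As a minimal sketch of this gate test (the function name and the 2-D example values are illustrative, not from the slides; γ = 9.21 is roughly the 99% χ² point for 2 degrees of freedom):

```python
import numpy as np

def in_gate(z, z_pred, S, gamma):
    """Ellipsoidal validation gate: accept z when the Mahalanobis
    distance to the predicted measurement is at most gamma."""
    v = z - z_pred                      # innovation z - z_hat(k+1|k)
    d2 = v @ np.linalg.solve(S, v)      # v' S^{-1} v
    return bool(d2 <= gamma)

z_pred = np.array([10.0, 5.0])
S = np.array([[4.0, 0.0], [0.0, 1.0]])   # innovation covariance S(k+1)
print(in_gate(np.array([11.0, 5.5]), z_pred, S, 9.21))   # True
print(in_gate(np.array([30.0, 5.0]), z_pred, S, 9.21))   # False
```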
Multitarget Tracking Algorithms
• Target-Oriented Approach
Fixed Computational Requirement with Known Number of
Targets (e.g., PDA, JPDA)
Require Separate Track Initiation Modules
• Track-Oriented Approach
Treat Each Track Individually
Need Association and Evaluation Modules to Initiate,
Evaluate, and Maintain Tracks
Nearest Neighbor Algorithm
• NN Algorithm
– Validate measurements
– Select the nearest measurement to the predicted
measurement based on the distance measure
D(z) = [z − ẑ(k+1|k)]′ S(k+1)⁻¹ [z − ẑ(k+1|k)] = ν′ S(k+1)⁻¹ ν
– Update the target state with the measurement as if
it were the correct one, i.e., use standard Kalman
filter
• Remarks
– Non-Bayesian association technique
– Could select the strongest measurement if the
signal intensity information is available
– Simple but tends to be “overconfident”
– Will lose target even with moderate clutter density
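The selection step might be sketched as follows (a hypothetical helper, assuming the gated measurements for one track are already collected; gating and selection use the same statistical distance):

```python
import numpy as np

def nearest_neighbor(z_list, z_pred, S, gamma):
    """Return the validated measurement minimizing
    D(z) = v' S^{-1} v, or None if no measurement falls in the gate."""
    S_inv = np.linalg.inv(S)
    best, best_d = None, gamma          # gate threshold doubles as cutoff
    for z in z_list:
        v = z - z_pred
        d = v @ S_inv @ v
        if d <= best_d:
            best, best_d = z, d
    return best

z_pred = np.array([0.0, 0.0])
S = np.eye(2)
meas = [np.array([0.5, 0.2]), np.array([2.0, 2.0]), np.array([-0.1, 0.3])]
print(nearest_neighbor(meas, z_pred, S, gamma=9.21))
```

The selected measurement would then feed a standard Kalman update, as the slide notes.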
Example - Scenario
Tracking with Nearest Neighbor - Results
Probabilistic Data Association
• Assumptions
– Single target in clutter
– Track has been initiated with past summarized as
p[x(k) | Z^{k−1}] = N[x(k); x̂(k|k−1), P(k|k−1)]
– At most one validated measurement is target-originated
– Target is detected over time with known
probability
• Approach
– Form all feasible association events with validated
measurements
– Compute association probability of each event
– Combine innovations with probability weights
– Update the state estimate with the “combined”
innovation
PDA Filter
Given a set of validated measurements Z(k) = {z_i(k)}_{i=1}^{m(k)}
and cumulative measurement sets Z^k = {Z(j)}_{j=1}^{k}

Association events (mutually exclusive):
θ_i(k) = {z_i(k) is the target-originated measurement}, i = 1, …, m(k)
θ_0(k) = {none of the measurements is target-originated}

⇒ x̂(k|k) = E[x(k)|Z^k] = ∑_{i=0}^{m(k)} E[x(k)|θ_i(k), Z^k] P{θ_i(k)|Z^k} = ∑_{i=0}^{m(k)} x̂_i(k|k) β_i(k)
PDA - State Estimation
State update: x̂(k|k) = x̂(k|k−1) + W(k) ν(k)
Combined innovation: ν(k) = ∑_{i=1}^{m(k)} β_i(k) ν_i(k)

Covariance update:
P(k|k) = β_0(k) P(k|k−1) + [1 − β_0(k)] P^c(k|k) + P̃(k)
where P^c(k|k) = P(k|k−1) − W(k) S(k) W(k)′ is the covariance of the state updated with the correct measurement, and
P̃(k) ≡ W(k) [∑_{i=1}^{m(k)} β_i(k) ν_i(k) ν_i(k)′ − ν(k) ν(k)′] W(k)′
PDA - Association Probabilities
Conditional probabilities:
β_i(k) = P{θ_i(k)|Z^k} = P{θ_i(k)|Z(k), m(k), Z^{k−1}}
= (1/C) p[Z(k)|θ_i(k), m(k), Z^{k−1}] P{θ_i(k)|m(k), Z^{k−1}},  i = 0, 1, …, m(k)

⇒ β_i(k) = e_i η,  i = 1, …, m(k),  where e_i ≡ exp(−½ ν_i(k)′ S(k)⁻¹ ν_i(k))
  β_0(k) = b η,  where b = λ |2π S(k)|^{1/2} (1 − P_D P_G) / P_D
and η = (b + ∑_{j=1}^{m(k)} e_j)⁻¹

Remarks:
1. The pdf of incorrect measurements is assumed to be uniform in the validation region
2. A Poisson model is used for the number of false measurements:
P(m_F = m) = e^{−λV} (λV)^m / m!,  V: volume of validation region,
P_D: detection probability, P_G: probability that the target-originated measurement falls in the gate
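Putting the pieces above together, one PDAF measurement update might be sketched as below (the function name, arguments, and example numbers are assumptions; `lam` is the clutter density λ):

```python
import numpy as np

def pda_update(x_pred, P_pred, z_list, H, R, PD=0.9, PG=0.99, lam=0.1):
    """One PDA update: association probabilities beta_i, combined
    innovation, and the covariance with origin-uncertainty terms."""
    z_pred = H @ x_pred
    S = H @ P_pred @ H.T + R
    W = P_pred @ H.T @ np.linalg.inv(S)                 # filter gain
    nus = [z - z_pred for z in z_list]                  # innovations nu_i
    e = np.array([np.exp(-0.5 * v @ np.linalg.solve(S, v)) for v in nus])
    b = lam * np.sqrt(np.linalg.det(2 * np.pi * S)) * (1 - PD * PG) / PD
    beta = np.append(e, b) / (b + e.sum())              # beta_1..m, then beta_0
    beta_i, beta0 = beta[:-1], beta[-1]
    nu = sum(bi * v for bi, v in zip(beta_i, nus))      # combined innovation
    x_upd = x_pred + W @ nu
    Pc = P_pred - W @ S @ W.T                           # correct-measurement cov
    spread = sum(bi * np.outer(v, v) for bi, v in zip(beta_i, nus)) - np.outer(nu, nu)
    P_upd = beta0 * P_pred + (1 - beta0) * Pc + W @ spread @ W.T
    return x_upd, P_upd, beta
```

With two validated measurements, the closer one receives the larger β, and the β's (including β₀) sum to one.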
PDA - Summary
[Flow diagram of one PDA processing cycle: the state estimate x̂(k−1|k−1) and covariance P(k−1|k−1) at t_{k−1} are predicted to x̂(k|k−1); the association probabilities β_i(k) are evaluated, the innovations are combined into ν(k), and the update applies the filter gain W(k) together with the measurement-origin-uncertainty term P̃(k).]
Tracking with PDA Algorithm - Results
Tracking Capability Comparison
[Figure: percentage of lost tracks (0–100) vs. expected number of false returns per gate (2–4), comparing NNSF and PDAF.]
Joint Probabilistic Data Association
• Assumptions
– A known number of targets established in clutter
– Tracks have been initiated with the past summarized as
p[x(k) | Z^{k−1}] = N[x(k); x̂(k|k−1), P(k|k−1)]
– Persistent interference from neighboring targets
– Each target can have different dynamics and PD
• Approach
– Form all feasible joint association events with
validated measurements
– Compute association probability of each event
jointly across targets
– Update the state estimate for each target as in
PDAF
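The "feasible joint association events" can be enumerated directly; a small sketch (names assumed) for targets whose validation gates overlap:

```python
from itertools import product

def joint_events(gated):
    """Enumerate feasible joint association events.
    gated[t] lists the measurement indices validated by target t;
    each target gets one gated measurement or None (the 'no detection'
    event), and no measurement may be shared by two targets."""
    options = [[None] + list(g) for g in gated]
    events = []
    for assign in product(*options):
        used = [m for m in assign if m is not None]
        if len(used) == len(set(used)):       # measurements not reused
            events.append(assign)
    return events

# Two crossing targets both gating measurement 1:
print(joint_events([[0, 1], [1, 2]]))
```

JPDA then evaluates the probability of each joint event and sums over events to obtain each target's marginal association probabilities.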
Tracking Crossing Targets with
Nearest Neighbor Algorithm
[Figure: truth vs. track estimates; scale 0–20 nautical miles.]
Tracking Crossing Targets with
PDA Filter
[Figure: truth vs. track estimates; scale 0–20 nautical miles.]
Tracking Crossing Targets with
JPDA Filter
[Figure: truth vs. track estimates; scale 0–20 nautical miles.]
Multiple Hypothesis Tracking
• Multiple Hypothesis Tracking (MHT)
– Associate sequences of measurements
– Evaluate probabilities of all association hypotheses
– Require management schemes to limit growth of
hypotheses
• Approach
– Use of multiple hypotheses to delay decisions when
situation is unclear
– Track initiation and continuation treated in one integrated
framework
Tracks and Hypotheses
• Tracks: subsets of the cumulative measurement set, where each
track has at most one measurement from each data set
Example:
Hypothesis Formulation
Exhaustive Search
– Find all feasible associations between the current measurements and the existing tracks
– Each measurement could be hypothesized as having originated from an established track, a false alarm, or a new track
– Based on a track-to-measurement correlation (TMCR) table

Example TMCR table (x = feasible association, 0 = infeasible):

              M31   M32
T1             0     x
T2             x     0
New target     x     x
False alarm    x     x
Hypothesis Evaluation
Basic Bayesian Formula
P(Λ|Z^k) = [P(Z(k), Λ | Z^{k−1}, Λ⁻) / P(Z(k)|Z^{k−1})] P(Λ⁻|Z^{k−1})
where Λ is a hypothesis, and Λ⁻ is the predecessor of Λ

P(Λ|Z^k) = (1/C) P(Λ⁻|Z^{k−1}) L_FA(z(k)|Λ) ∏_{τ∈Λ} L_τ(y|Z^{k−1})
where
L_τ(y|Z^{k−1}) = ∫ p(y|x, Z^{k−1}) P_D(x) p(x|Z^{k−1}) dx ⇒ association likelihood
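A sketch of this recursion over hypothesis probabilities (illustrative names; each child hypothesis carries its parent's index and the product of its association and false-alarm likelihoods, and the division plays the role of the 1/C normalization):

```python
import numpy as np

def extend_hypotheses(parent_probs, children):
    """children = [(parent_index, likelihood_product), ...].
    Multiply each parent probability by the child's likelihood
    product, then normalize across all children."""
    w = np.array([parent_probs[p] * lik for p, lik in children])
    return w / w.sum()

probs = extend_hypotheses([0.7, 0.3], [(0, 0.5), (0, 0.1), (1, 0.4)])
print(probs)    # normalized posterior over three child hypotheses
```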
Hypothesis Management
• Clustering
– Group tracks and measurements into clusters
– No association across clusters
• Combining
– Combine hypotheses with same number of tracks and
similar tracks
– Combine similar tracks
• Pruning
– Fixed threshold, fixed breadth
– Fixed percentage
MHT – Summary
[Figure: MHT processing flow, with measurements and a target dynamic model as inputs.]
MHT Design Issues
• Hypothesis Formation
– Bottleneck of the MHT processing
– Need efficient algorithms
– Use good heuristics
• Data Association
– NN, PDA, Optimal assignment, etc.
– N-Best Assignment algorithm
– Use Entropy Measure to control the process,
Greedy NN
Distributed Estimation and Tracking
• Centralized and Distributed Estimation
• Fusion Architecture
• Information Filters
• Linear and Nonlinear Fusion
• Information Flow and Information
Graph
• Distributed Tracking
Example: ForceNet
The Operational Construct and Architecture
Framework for Naval Warfare
Centralized and
Distributed Estimation
• Centralized Estimation
- Linear MMSE estimate
- Linear estimation in dynamic systems
- Kalman filter
• Distributed Estimation
- Hierarchical estimation
- Information filter
- Information graph
- Distributed Kalman filter
Centralized Architecture
[Figure: all sensors (S) report to a single central data association and tracking node.]
Distributed Architecture
[Figure: networked sensors (S), processing nodes (P), and a fusion node (F).]
Hierarchical Architecture
[Figure: sensors feed local agents; each agent's local estimate is combined into a global estimate.]
Hierarchical Fusion
Local Kalman Filter
Mapping from x̂_i(k|k) and P_i(k|k) to x̂_i(k+1|k+1) and P_i(k+1|k+1)

Predicted state (time update):
x̂_i(k+1|k) = F(k) x̂_i(k|k)
P_i(k+1|k) = F(k) P_i(k|k) F(k)′ + G(k) Q(k) G(k)′

Updated state (measurement update):
x̂_i(k+1|k+1) = x̂_i(k+1|k) + W_i(k+1) ν_i(k+1)
P_i(k+1|k+1) = P_i(k+1|k) − W_i(k+1) H_i(k+1) P_i(k+1|k)

where
filter gain: W_i(k+1) = P_i(k+1|k) H_i(k+1)′ S_i(k+1)⁻¹
innovation: ν_i(k+1) = z_i(k+1) − ẑ_i(k+1|k)
predicted measurement: ẑ_i(k+1|k) = H_i(k+1) x̂_i(k+1|k)
measurement prediction covariance: S_i(k+1) = H_i(k+1) P_i(k+1|k) H_i(k+1)′ + R_i(k+1)
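One local-filter cycle, transcribed from the equations above (a sketch; matrix shapes for F, G, Q, H, R are the caller's responsibility):

```python
import numpy as np

def local_kf_step(x, P, z, F, Q, H, R, G=None):
    """One predict/update cycle of local filter i."""
    if G is None:
        G = np.eye(Q.shape[0])
    # time update
    x_pred = F @ x
    P_pred = F @ P @ F.T + G @ Q @ G.T
    # measurement update
    S = H @ P_pred @ H.T + R                  # innovation covariance
    W = P_pred @ H.T @ np.linalg.inv(S)       # filter gain
    nu = z - H @ x_pred                       # innovation
    return x_pred + W @ nu, P_pred - W @ H @ P_pred
```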
Processing Cycle
Fusion Equations
P(k+1|k+1)⁻¹ = P(k+1|k)⁻¹ + ∑_{i=1}^{N} [P_i(k+1|k+1)⁻¹ − P_i(k+1|k)⁻¹]
P(k+1|k+1)⁻¹ x̂(k+1|k+1) = P(k+1|k)⁻¹ x̂(k+1|k) + ∑_{i=1}^{N} [P_i(k+1|k+1)⁻¹ x̂_i(k+1|k+1) − P_i(k+1|k)⁻¹ x̂_i(k+1|k)]
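These fusion equations translate almost line-for-line into information (inverse-covariance) form; a sketch with assumed names, where each local tuple holds the updated and predicted estimate/covariance pairs:

```python
import numpy as np

def hierarchical_fuse(x_pred, P_pred, locals_):
    """Fuse N local estimates: start from the common prediction in
    information form, then add each site's information gain.
    locals_ = [(x_i_upd, P_i_upd, x_i_pred, P_i_pred), ...]."""
    Y = np.linalg.inv(P_pred)           # fused information matrix
    y = Y @ x_pred                      # fused information state
    for xi, Pi, xip, Pip in locals_:
        Yi, Yip = np.linalg.inv(Pi), np.linalg.inv(Pip)
        Y += Yi - Yip
        y += Yi @ xi - Yip @ xip
    P_fused = np.linalg.inv(Y)
    return P_fused @ y, P_fused
```

For a scalar state with prior variance 2 and two local sensors of unit measurement noise, this reproduces the centralized answer.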
Fusion Architecture Comparison
• Hierarchical
– Without feedback: suboptimal but
economical
– With feedback: optimal but expensive
• Centralized
– With intersensor fusion: sensors need to be
synchronized
– Complete centralized: simple and optimal but
not reliable
• Distributed
– More complicated and often suboptimal
– Flexible and reliable
Backup
Track Oriented Approach
• Track Oriented
– Unknown number of targets in clutter
– Tracks can be initiated with a single measurement,
cold start, warm start
– Track is the basic entity, no multiple hypotheses
formed
• Approach
– Initiate tracks with unassigned measurements
– Clustering based on existing tracks
– Form and select a data association event in each cluster to be processed
– Score each track based on its association history
– Prune tracks with scores below a threshold and
combine similar tracks
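The per-track score mentioned above is commonly a recursive log-likelihood ratio; the specific form below follows a standard textbook recursion and is an assumption, not taken from these slides:

```python
import numpy as np

def update_track_score(score, nu, S, PD, lam):
    """Add one scan's log-likelihood ratio to a track score:
    Gaussian measurement likelihood times PD, against the clutter
    density lam; tracks falling below a threshold get pruned."""
    d2 = nu @ np.linalg.solve(S, nu)
    n = len(nu)
    log_lik = -0.5 * (d2 + n * np.log(2 * np.pi) + np.log(np.linalg.det(S)))
    return score + log_lik + np.log(PD / lam)

good = update_track_score(0.0, np.zeros(2), np.eye(2), 0.9, 0.1)
bad = update_track_score(0.0, np.array([5.0, 5.0]), np.eye(2), 0.9, 0.1)
print(good > bad)    # small innovations raise the score
```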
Algorithm Summary
[Flow diagram: clustering; then, for each cluster, data association followed by track management.]
Single Scan vs. Multi-Scans
• Single Scan
– Associate current observations with prior sensor tracks
– Efficient and works well with sparse track density
– Could perform badly under poor conditions
• Multi-Scans
– Data association over multiple time steps
– Reduce track mis-association in dense target environment
– Computation and performance depend on window size
• Multiple Hypothesis Tracking
– Unified framework for tracking and data association
– Theoretically optimal under unlimited resources
– Require intelligent hypothesis management scheme
Track Oriented MHT
• Track Trees / Track Hypotheses
– Construct a track tree for each potential target
– The root represents the birth of the target
– Each branch represents a different dynamic model and report association
– Each path from the root to a leaf represents a potential track
– Each report could also initiate a new target track
– Low-likelihood tracks are pruned and confirmed tracks are displayed
• Global Hypotheses
– Combining tracks from different target trees, at most one
track each
– Hypothesis likelihood is evaluated based on the likelihood of
the tracks
• Advantages
– More effective in managing tracks
– Can incorporate multiple target dynamic models naturally
Track Hypothesis and Global Hypothesis
[Figure: one track tree per potential target (Target 1 … Target n). Branches represent a maneuver model (m1), constant velocity (m2), missed detection, or track termination; each root-to-leaf path is a track hypothesis. Global hypotheses 1 and 2 each select at most one track from every tree.]