Multi-Sensor Fusion: Covariance Intersection Using Tracks as Measurements


The Covariance Intersection Algorithm from Julier and Uhlmann [1] is a popular algorithm for track-to-track fusion in target tracking systems. This approach is highly appealing due to its robustness, simple structure, and applicability to any tracking system that uses Gaussians as the basis for tracking. Generalisations to non-Gaussian systems have been proposed based on the exponential mixture density structure of the algorithm. The approach is based on a simple rule called the Chernoff Fusion Rule. However, due to the non-Bayesian formulation of the rule, it cannot be integrated straightforwardly into multi-target tracking algorithms which are based on Bayesian formulations.

A new Bayesian formulation for covariance intersection was recently proposed [2] which allows the approach to be integrated into multi-target tracking algorithms. The new formulation recasts the fusion rule as a Bayesian update rule and calculates a normalisation constant that enables integration into different multi-target tracking algorithms.

In this example we demonstrate the approach with different multi-target trackers in a multi-platform scenario where the sensors output estimated target tracks instead of raw measurements. In real-life situations, such sensors make multi-target tracking more accessible because users do not need to implement target filtering and tracking algorithms themselves. However, when multiple sensors observe the same target space and all produce estimated tracks, as demonstrated in this example, it is not immediately clear how to combine this information into a single set of tracks. This is where covariance intersection comes in.

The concept of covariance intersection relies on the aforementioned Chernoff fusion rule [3]:

\[p_{\omega}(x_{k}) = \frac{p_{1}(x_{k})^{\omega}p_{2}(x_{k})^{1-\omega}}{\int p_{1}(x)^{\omega}p_{2}(x)^{1-\omega}dx}\]

In situations where \(p_1(x)\) and \(p_2(x)\) are multivariate Gaussian distributions, this formula is equal to the Covariance Intersection Algorithm from Julier and Uhlmann. In the Covariance Intersection Algorithm, the weighting parameter, \(\omega \in [0, 1]\) is chosen using an optimization algorithm. In this example, we have set it to \(0.5\) for simplicity.
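Although we fix \(\omega = 0.5\) here, a common criterion is to choose \(\omega\) to minimise the determinant (or trace) of the fused covariance. A small sketch of such a grid search (the helper names below are our own, not Stone Soup's):

```python
import numpy as np

def fused_covariance(A, B, omega):
    """Covariance intersection fused covariance D(omega)."""
    return np.linalg.inv(omega * np.linalg.inv(A) + (1 - omega) * np.linalg.inv(B))

def best_omega(A, B, grid=np.linspace(0.0, 1.0, 101)):
    """Pick the omega that minimises the determinant of the fused covariance."""
    return min(grid, key=lambda w: np.linalg.det(fused_covariance(A, B, w)))

A = np.diag([1.0, 100.0])   # confident in x, uncertain in y
B = np.diag([100.0, 1.0])   # the reverse
omega = best_omega(A, B)    # symmetric case, so the optimum is 0.5
```

In this symmetric example the search returns \(\omega = 0.5\), which motivates the fixed value used in the rest of the example.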

We also introduce the following identity. Given two Gaussians, \(N(x ; a, A)\) and \(N(x ; b, B)\) with the same dimension, we have:

\[\left({\frac{\mathcal{N}(x ; a, A)}{\mathcal{N}(x ; b, B)}}\right)^\omega \mathcal{N}(x ; b, B) \propto \mathcal{N}(a ; b, V)\, \mathcal{N}(x ; d, D)\]


\[\begin{split}D &= \left(\omega A^{-1}+(1-\omega) B^{-1}\right)^{-1} \\ d &= D\left(\omega A^{-1} {a}+(1-\omega ) B^{-1} {b}\right) \\ V &= A/\omega + B/(1-\omega )\end{split}\]
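For Gaussian state estimates this identity gives the covariance intersection update directly. A minimal numpy sketch (our own helper, not part of Stone Soup):

```python
import numpy as np

def covariance_intersection(a, A, b, B, omega=0.5):
    """Fuse two Gaussian estimates (a, A) and (b, B) with weight omega."""
    A_inv, B_inv = np.linalg.inv(A), np.linalg.inv(B)
    D = np.linalg.inv(omega * A_inv + (1 - omega) * B_inv)   # fused covariance
    d = D @ (omega * A_inv @ a + (1 - omega) * B_inv @ b)    # fused mean
    # Covariance of the Gaussian factor N(a; b, V), which supplies the
    # normalisation constant in the Bayesian formulation
    V = A / omega + B / (1 - omega)
    return d, D, V

a, A = np.array([1.0, 2.0]), np.diag([4.0, 1.0])
b, B = np.array([3.0, 0.0]), np.diag([2.0, 2.0])
d, D, V = covariance_intersection(a, A, b, B)
```

With \(\omega = 0.5\) and equal covariances, the fused mean reduces to the simple average of the two estimates.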

This example considers the Gaussian mixture probability hypothesis density (GM-PHD) algorithm as the tracker for the track-to-track fusion. The following table shows the formulas used in the regular GM-PHD, and the GM-PHD covariance intersector algorithm.

Table: formulas for the posterior/fused intensity, Gaussian components, updated weight, updated mean, updated covariance, innovation, Kalman gain, and innovation covariance in the regular GM-PHD and the GM-PHD covariance intersector algorithm.

The specifics of implementing the Covariance Intersection Algorithm in several popular multi-target tracking algorithms were expanded upon recently by Clark et al. [4]. The work includes a discussion of Stone Soup and is used as the basis for this example.

The rest of this example will continue as follows:
  • Create a simulator for the ground truth

  • Create 2 radar simulators, one on the ground and one that is airborne

  • Make a JPDA tracker for the first radar, and a Gaussian mixture linear complexity with cumulants (GM-LCC) tracker for the second. These will mimic the situation where the radar sensors output tracks instead of raw measurements.

  • Create a GM-PHD tracker that will perform measurement fusion, using all measurements from both radars. This is created to compare with the covariance intersection method.

  • Create a GM-PHD tracker that will perform track fusion via covariance intersection using the ChernoffUpdater class.

  • Create a metric manager to generate metrics for each of the four trackers

  • Set up the detection feeders. The track fusion tracker will also use the Tracks2GaussianDetectionFeeder class.

  • Run the simulation

  • Plot the resulting tracks and the metrics over time

from copy import deepcopy
import numpy as np
from datetime import datetime

start_time = datetime.now().replace(microsecond=0)
num_steps = 50

1: Create a Ground Truth Simulator

We will simulate the paths of two targets using the MultiTargetGroundTruthSimulator. We can dictate the starting states of the two targets using the preexisting_states parameter. The targets start at [-100, -200, 500] and [0, 300, 500] respectively. Their initial velocities are [4, 0.5, 0] and [5, -0.5, 0] and they move according to a constant velocity transition model with noise.

from stonesoup.models.transition.linear import CombinedLinearGaussianTransitionModel, \
    ConstantVelocity
truth_transition_model = CombinedLinearGaussianTransitionModel(
    (ConstantVelocity(0.5), ConstantVelocity(0.5), ConstantVelocity(0.5)))

from stonesoup.simulator.simple import MultiTargetGroundTruthSimulator
from stonesoup.types.state import GaussianState
gt_simulator = MultiTargetGroundTruthSimulator(
    transition_model=truth_transition_model,
    initial_state=GaussianState([0, 0, 0, 0, 500, 0], np.diag([100, 1, 100, 1, 100, 1]),
                                timestamp=start_time),
    number_steps=num_steps,
    preexisting_states=[[-100, 4, -200, 0.5, 500, 0], [0, 5, 300, -0.5, 500, 0]])

2: Create Two Radars and a Detection Simulation

The two radars can share the same clutter model.

from stonesoup.models.clutter.clutter import ClutterModel
clutter_model = ClutterModel(
    clutter_rate=5,  # representative value
    dist_params=((-600.0, 600.0), (-600.0, 600.0), (250.0, 750.0)))

The first radar will be airborne, at an altitude of approximately 3000 m. It makes detections with an elevation, bearing, and range measurement model. By setting the max_range to 3500, we can ensure that it does not make detections of the other radar (which will be far away on the ground). We will later do a similar thing with the second radar. This mimics a real-life scenario where each radar is outside the field-of-view of the other.

from stonesoup.sensor.radar.radar import RadarElevationBearingRange
from stonesoup.types.array import CovarianceMatrix
from stonesoup.types.array import StateVector
from stonesoup.platform.base import MovingPlatform
from stonesoup.types.state import State

radar1 = RadarElevationBearingRange(
    ndim_state=6,
    position_mapping=(0, 2, 4),
    noise_covar=CovarianceMatrix(np.diag([np.deg2rad(0.005), np.deg2rad(0.005), 0.05])),
    mounting_offset=StateVector([10, 0, 0]),
    max_range=3500,
    clutter_model=deepcopy(clutter_model))

# Mount the radar onto a moving platform. The platform starts at [-250, 50, 3000]
# with velocity [1, 5, 0] and moves according to a constant velocity model with low noise
sensor1_initial_loc = StateVector([[-250], [1], [50], [5], [3000], [0]])
initial_state = State(sensor1_initial_loc, start_time)
sensor1_transition_model = CombinedLinearGaussianTransitionModel(
    [ConstantVelocity(0.3), ConstantVelocity(0.3), ConstantVelocity(0.3)])
sensor1_platform = MovingPlatform(
    states=initial_state,
    position_mapping=(0, 2, 4),
    velocity_mapping=(1, 3, 5),
    transition_model=sensor1_transition_model,
    sensors=[radar1])

The second radar will be stationary on the ground at the point [2000, 50, 0]. This radar also measures in 3D using bearing, range, and elevation.

radar2_noise_covar = CovarianceMatrix(np.diag([np.deg2rad(0.005), np.deg2rad(0.005), 0.05]))
radar2 = RadarElevationBearingRange(
    ndim_state=6,
    position_mapping=(0, 2, 4),
    noise_covar=radar2_noise_covar,
    max_range=3000,  # keeps the airborne platform outside this radar's field of view
    clutter_model=deepcopy(clutter_model))

# Make a platform and mount the radar
from stonesoup.platform.base import FixedPlatform
sensor2_platform = FixedPlatform(
    State([2000, 0, 50, 0, 0, 0], timestamp=start_time),
    position_mapping=[0, 2, 4],
    sensors=[radar2])

Now we can pass the platforms into a detection simulator. At each timestep, the simulator will return the detections from the sensor1_platform, then the detections from the sensor2_platform.

As we’ll be using the same simulations and detections in multiple trackers and for plotting, itertools.tee() is used to create independent iterators to use in the different components.

from itertools import tee
from stonesoup.feeder.multi import MultiDataFeeder
from stonesoup.simulator.platform import PlatformDetectionSimulator

gt_sims = tee(gt_simulator, 2)
radar1_simulator = PlatformDetectionSimulator(
    groundtruth=gt_sims[0], platforms=[sensor1_platform])
radar1_plotting, radar1_simulator, s1_detector = tee(radar1_simulator, 3)

radar2_simulator = PlatformDetectionSimulator(
    groundtruth=gt_sims[1], platforms=[sensor2_platform])
radar2_plotting, radar2_simulator, s2_detector = tee(radar2_simulator, 3)

s1s2_detector = MultiDataFeeder([radar1_simulator, radar2_simulator])

Let’s briefly visualize the truths and measurements before we move on. Note that the final simulation will not have the same truths because the ground truth generator is randomized, but this gives an idea of what it will look like. The detections from the first (airborne) sensor are plotted in blue and those from the second sensor in red. The clutter from both sensors is plotted in yellow, and the sensor locations are marked with green Xs.

from stonesoup.plotter import Plotter, Dimension

# Lists to hold the detections from each sensor and the path of the airborne radar
s1_detections = set()
s2_detections = set()
radar1_path = []
truths = set()

# Iterate over the time steps, extracting the detections, truths, and airborne sensor path
for (time, s1ds), (_, s2ds) in zip(radar1_plotting, radar2_plotting):
    s1_detections.update(s1ds)
    s2_detections.update(s2ds)
    radar1_path.append(sensor1_platform.position)
truths = set(gt_simulator.groundtruth_paths)

# Plot the truths and detections
plotter = Plotter(dimension=Dimension.THREE)
plotter.plot_ground_truths(truths, [0, 2, 4])
plotter.plot_measurements(s1_detections, [0, 2, 4], color='blue')
plotter.plot_measurements(s2_detections, [0, 2, 4], color='red')

# Plot the radar positions
plotter.ax.plot(*zip(*radar1_path), marker='x', color='green')
plotter.ax.plot(2000, 50, 0, marker='x', color='green')

3: Make Trackers for the Radars

The airborne radar will be tracking using a JPDA tracker, and the stationary one will use a GM-LCC. These trackers will not be given the platform detection simulation objects as parameters; we will feed the measurements later to ensure that the same measurements are used in the fusion trackers. To start, we can calculate the clutter spatial density.
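The clutter spatial density is the expected number of clutter returns per scan divided by the volume of the surveillance region. Assuming, for illustration, a clutter rate of 5 returns per scan over the region used by the clutter model above:

```python
import numpy as np

clutter_rate = 5  # assumed expected clutter returns per scan
dist_params = ((-600.0, 600.0), (-600.0, 600.0), (250.0, 750.0))
surveillance_volume = np.prod([hi - lo for lo, hi in dist_params])  # 1200 x 1200 x 500
clutter_spatial_density = clutter_rate / surveillance_volume
```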

JPDA Tracker

from stonesoup.hypothesiser.probability import PDAHypothesiser
from stonesoup.updater.kalman import ExtendedKalmanUpdater
from stonesoup.predictor.kalman import ExtendedKalmanPredictor
from stonesoup.dataassociator.probability import JPDA
from stonesoup.deleter.error import CovarianceBasedDeleter
from stonesoup.initiator.simple import MultiMeasurementInitiator
from stonesoup.tracker.simple import MultiTargetMixtureTracker

# Updater
jpda_updater = ExtendedKalmanUpdater(measurement_model=None)

# Data Associator
predictor = ExtendedKalmanPredictor(truth_transition_model)
hypothesiser = PDAHypothesiser(
    predictor=predictor,
    updater=jpda_updater,
    clutter_spatial_density=clutter_spatial_density,
    prob_detect=0.9)
data_associator = JPDA(hypothesiser=hypothesiser)

# Deleter
covariance_limit_for_delete = 500
deleter = CovarianceBasedDeleter(covar_trace_thresh=covariance_limit_for_delete)

# Initiator
s_prior_state = GaussianState([0, 0, 0, 0, 500, 0], np.diag([0, 50, 0, 50, 0, 50]))
from stonesoup.hypothesiser.distance import DistanceHypothesiser
from stonesoup.measures import Mahalanobis
hypothesiser = DistanceHypothesiser(
    predictor, jpda_updater, measure=Mahalanobis(), missed_distance=5)

from stonesoup.dataassociator.neighbour import GNNWith2DAssignment
initiator_associator = GNNWith2DAssignment(hypothesiser)
initiator_deleter = CovarianceBasedDeleter(covar_trace_thresh=500)
initiator = MultiMeasurementInitiator(
    prior_state=s_prior_state,
    measurement_model=None,
    deleter=initiator_deleter,
    data_associator=initiator_associator,
    updater=jpda_updater,
    min_points=2)

jpda_tracker = MultiTargetMixtureTracker(
    initiator=initiator,
    deleter=deleter,
    detector=s1_detector,
    data_associator=data_associator,
    updater=jpda_updater)
jpda_tracker, s1_tracker = tee(jpda_tracker, 2)

GM-LCC Tracker

from stonesoup.updater.pointprocess import LCCUpdater
from stonesoup.hypothesiser.distance import DistanceHypothesiser
from stonesoup.measures import Mahalanobis
from stonesoup.hypothesiser.gaussianmixture import GaussianMixtureHypothesiser
from stonesoup.mixturereducer.gaussianmixture import GaussianMixtureReducer
from stonesoup.types.state import TaggedWeightedGaussianState
from stonesoup.tracker.pointprocess import PointProcessMultiTargetTracker

# Updater
kalman_updater = ExtendedKalmanUpdater(measurement_model=None)
updater = LCCUpdater(
    updater=kalman_updater,
    clutter_spatial_density=clutter_spatial_density,
    normalisation=True,
    prob_detection=0.9,
    prob_survival=0.9)

# Hypothesiser
kalman_predictor = ExtendedKalmanPredictor(truth_transition_model)
base_hypothesiser = DistanceHypothesiser(
    kalman_predictor, kalman_updater, measure=Mahalanobis(), missed_distance=15)
hypothesiser = GaussianMixtureHypothesiser(
    base_hypothesiser, order_by_detection=True)

# Reducer
reducer = GaussianMixtureReducer(
    prune_threshold=1e-3,
    pruning=True,
    merge_threshold=200,
    merging=True)

# Birth component
birth_covar = CovarianceMatrix(np.diag([10000, 10, 10000, 10, 10000, 10]))
birth_component = TaggedWeightedGaussianState(
    state_vector=[0, 0, 0, 0, 500, 0],
    covar=birth_covar,
    weight=0.5,
    tag=TaggedWeightedGaussianState.BIRTH,
    timestamp=start_time)

# Tracker
gmlcc_tracker = PointProcessMultiTargetTracker(
    detector=s2_detector,
    hypothesiser=hypothesiser,
    updater=updater,
    reducer=deepcopy(reducer),
    birth_component=deepcopy(birth_component),
    extraction_threshold=0.9)
gmlcc_tracker, s2_tracker = tee(gmlcc_tracker, 2)

4: Make GM-PHD Tracker For Measurement Fusion

This tracker can use many of the same elements as the GM-LCC one. A sketch of how it can be assembled from those pieces (the parameter values mirror those above and are representative rather than canonical):

from stonesoup.updater.pointprocess import PHDUpdater

# Updater: a standard PHD updater wrapping the extended Kalman updater
phd_updater = PHDUpdater(
    kalman_updater,
    clutter_spatial_density=clutter_spatial_density,
    prob_detection=0.9,
    prob_survival=0.9)

# Reuse the hypothesiser, reducer, and birth component built for the GM-LCC tracker
meas_fusion_tracker = PointProcessMultiTargetTracker(
    detector=s1s2_detector,
    hypothesiser=deepcopy(hypothesiser),
    updater=phd_updater,
    reducer=deepcopy(reducer),
    birth_component=deepcopy(birth_component),
    extraction_threshold=0.9)

5: Define a GM-PHD Tracker for Track Fusion

Track fusion using covariance intersection is implemented in Stone Soup using the ChernoffUpdater class. For use in a GM-PHD, we insert the ChernoffUpdater as the base updater, instead of a typical KalmanUpdater. The clutter_spatial_density parameter now refers to the estimated intensity of false tracks. Since the previous tracker will (hopefully) have ignored some of the clutter, we can use a smaller intensity than in the previous trackers. The omega parameter is also adjustable. We will set it to 0.5 for now.

The remaining tracker parameters have been kept the same as the measurement fusion tracker except where noted. This will ensure a fair comparison of the results.

from stonesoup.updater.chernoff import ChernoffUpdater
from stonesoup.updater.pointprocess import PHDUpdater
from stonesoup.measures import Euclidean

# Updater
ch_updater = ChernoffUpdater(measurement_model=None, omega=0.5)
updater = PHDUpdater(
    ch_updater,
    clutter_spatial_density=1e-10,  # false tracks are rarer than raw clutter
    prob_detection=0.9,
    prob_survival=0.9)

# Hypothesiser
# The states being used as measurements are in Cartesian space. We will use Euclidean distance in
# the :class:`~.DistanceHypothesiser`, meaning that we need a bigger missed distance than the
# previous hypothesiser which used the Mahalanobis distance.
kalman_predictor = ExtendedKalmanPredictor(truth_transition_model)
base_hypothesiser = DistanceHypothesiser(
    kalman_predictor, ch_updater, measure=Euclidean(), missed_distance=300)
hypothesiser = GaussianMixtureHypothesiser(base_hypothesiser, order_by_detection=True)

# Reducer
# The states tend to have low weights when they are first initialized using this method, so we will
# keep the pruning threshold low.
ch_reducer = GaussianMixtureReducer(
    prune_threshold=1e-10,
    pruning=True,
    merge_threshold=200,
    merging=True)

# Birth component
birth_covar = CovarianceMatrix(np.diag([100000, 100, 100000, 100, 100000, 100]))
ch_birth_component = TaggedWeightedGaussianState(
    state_vector=[0, 0, 0, 0, 500, 0],
    covar=birth_covar,
    weight=0.5,
    tag=TaggedWeightedGaussianState.BIRTH,
    timestamp=start_time)

# Make tracker
from stonesoup.feeder.track import Tracks2GaussianDetectionFeeder

track_fusion_tracker = PointProcessMultiTargetTracker(
    detector=Tracks2GaussianDetectionFeeder(MultiDataFeeder([s1_tracker, s2_tracker])),
    hypothesiser=hypothesiser,
    updater=updater,
    reducer=deepcopy(ch_reducer),
    birth_component=deepcopy(ch_birth_component),
    extraction_threshold=0.9)

6: Make Metric Manager

We will generate metrics of each of the four trackers for comparison.

from stonesoup.metricgenerator.basicmetrics import BasicMetrics
from stonesoup.metricgenerator.ospametric import OSPAMetric
from stonesoup.metricgenerator.tracktotruthmetrics import SIAPMetrics
from stonesoup.metricgenerator.uncertaintymetric import SumofCovarianceNormsMetric

metric_generators = []

track_labels = ['jpda', 'gmlcc', 'meas_fusion', 'track_fusion']
labels = ['Airborne Radar (JPDAF)', 'Ground Radar (GM-LCC)', 'Measurement Fusion (GM-PHD)',
          'Covariance Intersection (GM-PHD)']

for track_label, label in zip(track_labels, labels):
    metric_generators.extend([
        BasicMetrics(generator_name=f'basic {label}',
                     tracks_key=f'{track_label}_tracks', truths_key='truths'),
        OSPAMetric(c=10, p=1, measure=Euclidean([0, 2, 4]),
                   generator_name=f'OSPA {label}',
                   tracks_key=f'{track_label}_tracks', truths_key='truths'),
        SIAPMetrics(position_measure=Euclidean(), velocity_measure=Euclidean(),
                    generator_name=f'SIAP {label}',
                    tracks_key=f'{track_label}_tracks', truths_key='truths'),
        SumofCovarianceNormsMetric(generator_name=f'Uncertainty {label}',
                                   tracks_key=f'{track_label}_tracks')])

from stonesoup.dataassociator.tracktotrack import TrackToTruth
associator = TrackToTruth(association_threshold=30)
from stonesoup.metricgenerator.manager import MultiManager

metric_manager = MultiManager(metric_generators, associator=associator)

7: Set Up the Detection Feeders

As one final step before running the simulation, we set up the detection feeders so that the two radar trackers and the measurement fusion tracker all receive the same measurements at each timestep.

The track fusion tracker will also use the Tracks2GaussianDetectionFeeder class to feed the tracks as measurements. At each time step, the resultant live tracks from the JPDA and GM-LCC trackers will be put into a Tracks2GaussianDetectionFeeder. The feeder will take the most recent state from each track and turn it into a GaussianDetection object. The set of detection objects will be returned and passed into the tracker.
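The feeder's behaviour can be sketched in plain Python (a conceptual illustration with hypothetical class names, not the Stone Soup implementation):

```python
from dataclasses import dataclass, field

@dataclass
class TrackState:
    mean: list    # state vector
    covar: list   # covariance matrix

@dataclass
class SimpleTrack:
    states: list = field(default_factory=list)

def tracks_to_detections(tracks):
    """Turn the most recent state of each track into a Gaussian 'detection'.

    Mirrors the role of Tracks2GaussianDetectionFeeder: the fusion tracker then
    treats each (mean, covar) pair as a measurement with its own noise covariance.
    """
    detections = []
    for track in tracks:
        latest = track.states[-1]
        detections.append((latest.mean, latest.covar))
    return detections

track = SimpleTrack(states=[TrackState([0.0, 1.0], [[1.0, 0.0], [0.0, 1.0]]),
                            TrackState([0.5, 1.1], [[0.5, 0.0], [0.0, 0.5]])])
dets = tracks_to_detections([track])
```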

8: Run Simulation

jpda_tracks, gmlcc_tracks = set(), set()
meas_fusion_tracks, track_fusion_tracks = set(), set()

meas_fusion_tracker_iter = iter(meas_fusion_tracker)
track_fusion_tracker_iter = iter(track_fusion_tracker)

for t in range(num_steps):

    # Run JPDA tracker from sensor 1
    _, sensor1_tracks = next(jpda_tracker)
    jpda_tracks.update(sensor1_tracks)

    # Run GM-LCC tracker from sensor 2
    _, sensor2_tracks = next(gmlcc_tracker)
    gmlcc_tracks.update(sensor2_tracks)

    # Run the GM-PHD trackers for measurement fusion and track fusion. Each one
    # gets called twice per timestep, once for each set of detections or tracks.
    # This ensures there is only one detection per target per update.
    for _ in (0, 1):
        _, tracks = next(meas_fusion_tracker_iter)
        meas_fusion_tracks.update(tracks)
        _, tracks = next(track_fusion_tracker_iter)
        track_fusion_tracks.update(tracks)

detections = s1_detections | s2_detections
metric_manager.add_data({'truths': truths, 'detections': detections}, overwrite=False)

# Remove tracks that have just one state in them as they were probably from clutter
jpda_tracks = {track for track in jpda_tracks if len(track) > 1}
gmlcc_tracks = {track for track in gmlcc_tracks if len(track) > 1}
meas_fusion_tracks = {track for track in meas_fusion_tracks if len(track) > 1}
track_fusion_tracks = {track for track in track_fusion_tracks if len(track) > 1}

# Add track data to the metric manager
metric_manager.add_data({'jpda_tracks': jpda_tracks,
                         'gmlcc_tracks': gmlcc_tracks,
                         'meas_fusion_tracks': meas_fusion_tracks,
                         'track_fusion_tracks': track_fusion_tracks
                         }, overwrite=False)

9: Plot the Results

Next, we will plot all of the resulting tracks and measurements. This will be done in two plots. The first plot will show all of the data, and the second plot will show a closer view of one resultant track.

plotter1, plotter2 = Plotter(), Plotter()
for plotter in [plotter1, plotter2]:
    plotter.plot_ground_truths(truths, [0, 2])
    plotter.plot_measurements(s1_detections, [0, 2], color='orange', marker='*',
                              measurements_label='Measurements - Airborne Radar')
    plotter.plot_measurements(s2_detections, [0, 2], color='blue', marker='*',
                              measurements_label='Measurements - Ground Radar')
    plotter.plot_tracks(jpda_tracks, [0, 2], color='red',
                        track_label='Tracks - Airborne Radar (JPDAF)')
    plotter.plot_tracks(gmlcc_tracks, [0, 2], color='purple',
                        track_label='Tracks - Ground Radar (GM-LCC)')
    plotter.plot_tracks(meas_fusion_tracks, [0, 2], color='green',
                        track_label='Tracks - Measurement Fusion (GM-PHD)')
    plotter.plot_tracks(track_fusion_tracks, [0, 2], color='pink',
                        track_label='Tracks - Covariance Intersection (GM-PHD)')

    # Format the legend a bit. Set the position outside of the plot, and
    # swap the order of the clutter and ground radar measurements
    pos = plotter.ax.get_position()
    plotter.ax.set_position([pos.x0, pos.y0, pos.width * 0.7, pos.height])
    k = list(plotter.legend_dict.keys())
    k[2], k[3] = k[3], k[2]
    v = list(plotter.legend_dict.values())
    v[2], v[3] = v[3], v[2]
    plotter.ax.legend(v, k, loc='lower center', bbox_to_anchor=(0.5, -0.5))

track = sorted(track_fusion_tracks, key=len)[-1]  # Longest track
x_min = min([state.state_vector[0] for state in track])
x_max = max([state.state_vector[0] for state in track])
y_min = min([state.state_vector[2] for state in track])
y_max = max([state.state_vector[2] for state in track])
plotter2.ax.set_xlim(x_min-50, x_max+50)
plotter2.ax.set_ylim(y_min-50, y_max+50)

Now we will plot the metrics. First, we generate the metrics from the metric manager.

metrics = metric_manager.generate_metrics()

Now we can plot them. The SIAP and OSPA metrics can be done together in a loop. The Track-To-Truth ratio needs to be done separately so that it can be calculated at each timestep.

from stonesoup.plotter import MetricPlotter

fig = MetricPlotter()
fig.plot_metrics(metrics, color=['blue', 'orange', 'green', 'red'], linestyle='--')

Plot Track to Truth Ratio

import matplotlib.pyplot as plt
fig, ax = plt.subplots()
times = metric_manager.list_timestamps(metric_manager.generators[2])

# Iterate through the trackers. For each one, go through the list of all timesteps
# and calculate the ratio at that time
for track_label, label in zip(track_labels, labels):
    ratios = []
    for time in times:
        num_tracks = SIAPMetrics.num_tracks_at_time(metric_manager.states_sets[f'{track_label}_tracks'], timestamp=time)
        num_truths = SIAPMetrics.num_truths_at_time(metric_manager.states_sets['truths'], timestamp=time)
        ratios.append(num_tracks / num_truths)
    ax.plot(times, ratios, linewidth=2, label=label, linestyle='--')

ax.set_title('Track-to-Truth Ratio')
ax.set_xlabel('Time $(s)$')
ax.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))

