Multi-Sensor Moving Platform Simulation Example

This example demonstrates how multiple sensors can be mounted on a single moving platform, and how another moving platform can be exploited as a sensor target.

Building a Simulated Multi-Sensor Moving Platform

The focus of this example is to show how to set up and configure a simulation environment that provides a multi-sensor moving platform; as such, the application of a tracker will not be covered in detail. For more information about trackers and how to configure them, reviewing the tutorials and demonstrations is recommended.

This example makes use of Stone Soup MovingPlatform, MultiTransitionMovingPlatform and Sensor objects.

In order to configure the platforms, sensors and the simulation we will need to import some specific Stone Soup objects. As these have been introduced in previous tutorials, they are imported upfront. New functionality within this example is imported at the relevant point in order to draw attention to the new features.

# Some general imports and set up
from datetime import datetime
from datetime import timedelta
from matplotlib import pyplot as plt

import numpy as np

# Stone Soup imports:
from stonesoup.types.state import State, GaussianState
from stonesoup.types.array import StateVector
from stonesoup.types.array import CovarianceMatrix
from stonesoup.models.transition.linear import (
    CombinedLinearGaussianTransitionModel, ConstantVelocity)
from stonesoup.predictor.particle import ParticlePredictor
from stonesoup.resampler.particle import SystematicResampler
from stonesoup.updater.particle import ParticleUpdater
from stonesoup.measures import Mahalanobis
from stonesoup.hypothesiser.distance import DistanceHypothesiser
from stonesoup.dataassociator.neighbour import GNNWith2DAssignment
from stonesoup.tracker.simple import SingleTargetTracker

# Define the simulation start time
start_time = datetime.now()

Create a multi-sensor platform

We have previously demonstrated how to create a FixedPlatform which exploited a RadarRangeBearingElevation Sensor in order to detect and track targets generated within a MultiTargetGroundTruthSimulator.

In this example we are going to create a moving platform mounted with a pair of sensors. The platform moves within a 6-dimensional state space according to the following initial state vector \(\mathbf{x}\):

\[\begin{split}\mathbf{x} = \begin{bmatrix} x\\ \dot{x}\\ y\\ \dot{y}\\ z\\ \dot{z} \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 0\\ 50\\ 8000\\ 0 \end{bmatrix}\end{split}\]

The platform will be initialised with a near-constant velocity model which has been parameterised to have zero noise. Therefore the platform location at time \(k\) is given by \(F_{k}x_{k-1}\), where \(F_{k}\) is given by:

\[\begin{split}F_{k} = \begin{bmatrix} 1 & \triangle k & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & \triangle k & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & \triangle k \\ 0 & 0 & 0 & 0 & 0 & 1\\ \end{bmatrix}\end{split}\]
# First import the Moving platform
from stonesoup.platform.base import MovingPlatform

# Define the initial platform position, in this case the origin
initial_loc = StateVector([[0], [0], [0], [50], [8000], [0]])
initial_state = State(initial_loc, start_time)

# Define transition model and position for 3D platform
transition_model = CombinedLinearGaussianTransitionModel(
    [ConstantVelocity(0.), ConstantVelocity(0.), ConstantVelocity(0.)])

# Create our moving platform
sensor_platform = MovingPlatform(states=initial_state,
                                 position_mapping=(0, 2, 4),
                                 velocity_mapping=(1, 3, 5),
                                 transition_model=transition_model)
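Since the ConstantVelocity components are given zero noise, the platform motion reduces to the deterministic update \(F_{k}x_{k-1}\) shown above. As a quick sanity check, the following NumPy-only sketch (independent of Stone Soup) builds \(F_k\) for a one-second step and applies it to the initial state:

```python
import numpy as np

def cv_transition_matrix(dt):
    """Constant velocity transition matrix F_k for a 6D state
    [x, vx, y, vy, z, vz]: one 2x2 block per axis."""
    block = np.array([[1., dt],
                      [0., 1.]])
    return np.kron(np.eye(3), block)

x0 = np.array([0., 0., 0., 50., 8000., 0.])  # initial platform state
F = cv_transition_matrix(1.0)                # one-second time step
x1 = F @ x0

print(x1)  # platform advances 50 m along y; everything else unchanged
```

After one second the platform has simply moved 50 m in the \(y\) direction, consistent with the initial velocity.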

With our platform generated we now need to build a set of sensors which will be mounted onto the platform. In this case we will exploit a RadarElevationBearingRangeRate and a PassiveElevationBearing sensor (e.g. an optical sensor, which has no capability to directly measure range).

First we will create a radar which is capable of measuring bearing (\(\phi\)), elevation (\(\theta\)), range (\(r\)) and range-rate (\(\dot{r}\)) of the target platform.

# Import a range rate bearing elevation capable radar
from stonesoup.sensor.radar.radar import RadarElevationBearingRangeRate

# Create a radar sensor
radar_noise_covar = CovarianceMatrix(np.diag(
    np.array([np.deg2rad(3),  # Elevation
              np.deg2rad(3),  # Bearing
              100.,  # Range
              25.])))  # Range Rate

# radar mountings
radar_mounting_offsets = StateVector([10, 0, 0])  # e.g. nose cone
radar_rotation_offsets = StateVector([0, 0, 0])

# Mount the radar onto the platform

radar = RadarElevationBearingRangeRate(ndim_state=6,
                                       position_mapping=(0, 2, 4),
                                       velocity_mapping=(1, 3, 5),
                                       noise_covar=radar_noise_covar,
                                       mounting_offset=radar_mounting_offsets,
                                       rotation_offset=radar_rotation_offsets,
                                       )
sensor_platform.add_sensor(radar)

Our second sensor is a passive sensor, capable of measuring the bearing (\(\phi\)) and elevation (\(\theta\)) of the target platform. For the purposes of this example we will assume that the passive sensor is an imager. The imager sensor model is described by the following equations:

\[\mathbf{z}_k = h(\mathbf{x}_k, \mathbf{v}_k)\]

where:

  • \(\mathbf{z}_k\) is a measurement vector of the form:

\[\begin{split}\mathbf{z}_k = \begin{bmatrix} \theta \\ \phi \end{bmatrix}\end{split}\]
  • \(h\) is a non-linear model function of the form:

\[\begin{split}h(\mathbf{x}_k,\mathbf{v}_k) = \begin{bmatrix} \arcsin(\mathcal{z}/\sqrt{\mathcal{x}^2 + \mathcal{y}^2 + \mathcal{z}^2}) \\ \mathrm{atan2}(\mathcal{y},\mathcal{x}) \end{bmatrix} + \mathbf{v}_k\end{split}\]
  • \(\mathbf{v}_k\) is Gaussian distributed measurement noise with covariance \(R\), i.e.:

\[\mathbf{v}_k \sim \mathcal{N}(0, R)\]
\[\begin{split}R = \begin{bmatrix} \sigma_{\theta}^2 & 0 \\ 0 & \sigma_{\phi}^2 \\ \end{bmatrix}\end{split}\]
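The noise-free part of this measurement function can be sketched with plain NumPy (the function name here is illustrative, not a Stone Soup API):

```python
import numpy as np

def passive_measurement(target, sensor):
    """Map the target position relative to the sensor to
    (elevation, bearing), mirroring the model function h above."""
    dx, dy, dz = np.asarray(target, float) - np.asarray(sensor, float)
    r = np.sqrt(dx**2 + dy**2 + dz**2)
    elevation = np.arcsin(dz / r)   # angle above the x-y plane
    bearing = np.arctan2(dy, dx)    # angle in the x-y plane
    return elevation, bearing

# A target due "north" (+y) of the sensor at the same height:
el, az = passive_measurement([0., 1000., 0.], [0., 0., 0.])
print(el, az)  # elevation 0, bearing pi/2
```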
# Import a passive sensor capability
from stonesoup.sensor.passive import PassiveElevationBearing

imager_noise_covar = CovarianceMatrix(np.diag(np.array([np.deg2rad(0.05),  # Elevation
                                                        np.deg2rad(0.05)])))  # Bearing

# imager mounting offset
imager_mounting_offsets = StateVector([0, 8, -1])  # e.g. wing mounted imaging pod
imager_rotation_offsets = StateVector([0, 0, 0])

# Mount the imager onto the platform
imager = PassiveElevationBearing(ndim_state=6,
                                 mapping=(0, 2, 4),
                                 noise_covar=imager_noise_covar,
                                 mounting_offset=imager_mounting_offsets,
                                 rotation_offset=imager_rotation_offsets,
                                 )
sensor_platform.add_sensor(imager)

Notice that we have added sensors at specific locations on the aircraft, defined by the mounting_offset parameter. The values in this array are defined in the platform's local coordinate frame of reference. So in this case an offset of \([0, 8, -1]\) means the sensor is located 8 meters to the right of and 1 meter below the center point of the platform.
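As an illustration of how a mounting offset relates a sensor to the platform, the following sketch simply adds the body-frame offset to the platform position. This is only valid while the platform orientation is aligned with the global axes; in practice Stone Soup also rotates the offsets to match the platform orientation.

```python
import numpy as np

platform_position = np.array([0., 0., 8000.])  # (x, y, z) from the state above
radar_offset = np.array([10., 0., 0.])         # nose cone
imager_offset = np.array([0., 8., -1.])        # wing-mounted imaging pod

# Assumption: platform axes aligned with global axes (no rotation applied)
radar_global = platform_position + radar_offset
imager_global = platform_position + imager_offset

print(radar_global)   # [10., 0., 8000.]
print(imager_global)  # [0., 8., 7999.]
```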

Now that we have mounted the two sensors we can see that the platform object has both associated with it:

sensor_platform.sensors

Out:

(RadarElevationBearingRangeRate(position_mapping=(0, 2, 4), noise_covar=CovarianceMatrix([[5.23598776e-02, 0.00000000e+00, 0.00000000e+00,
                   0.00000000e+00],
                  [0.00000000e+00, 5.23598776e-02, 0.00000000e+00,
                   0.00000000e+00],
                  [0.00000000e+00, 0.00000000e+00, 1.00000000e+02,
                   0.00000000e+00],
                  [0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
                   2.50000000e+01]]), rotation_offset=StateVector([[0],
             [0],
             [0]]), mounting_offset=StateVector([[10],
             [ 0],
             [ 0]]), movement_controller=MovingMovable(states=[State(state_vector=StateVector([[   0],
             [   0],
             [   0],
             [  50],
             [8000],
             [   0]]), timestamp=datetime.datetime(2021, 6, 10, 14, 20, 47, 523541))], position_mapping=(0, 2, 4), transition_model=CombinedLinearGaussianTransitionModel(model_list=[ConstantVelocity(noise_diff_coeff=0.0, seed=None), ConstantVelocity(noise_diff_coeff=0.0, seed=None), ConstantVelocity(noise_diff_coeff=0.0, seed=None)], seed=None), velocity_mapping=(1, 3, 5)), ndim_state=6, velocity_mapping=(1, 3, 5)), PassiveElevationBearing(ndim_state=6, mapping=(0, 2, 4), noise_covar=CovarianceMatrix([[0.00087266, 0.        ],
                  [0.        , 0.00087266]]), rotation_offset=StateVector([[0],
             [0],
             [0]]), mounting_offset=StateVector([[ 0],
             [ 8],
             [-1]]), movement_controller=MovingMovable(states=[State(state_vector=StateVector([[   0],
             [   0],
             [   0],
             [  50],
             [8000],
             [   0]]), timestamp=datetime.datetime(2021, 6, 10, 14, 20, 47, 523541))], position_mapping=(0, 2, 4), transition_model=CombinedLinearGaussianTransitionModel(model_list=[ConstantVelocity(noise_diff_coeff=0.0, seed=None), ConstantVelocity(noise_diff_coeff=0.0, seed=None), ConstantVelocity(noise_diff_coeff=0.0, seed=None)], seed=None), velocity_mapping=(1, 3, 5))))

Create a Target Platform

There are two ways of generating a target in Stone Soup. Firstly, we can use the inbuilt ground-truth generator functionality, demonstrated in the previous example, which creates a random target based on our selected parameters. The second method provides a means to generate a target which performs specific behaviours; this is the approach we will take here.

In order to create a target which moves in pre-defined sequences we exploit the fact that platforms can be used as sensor targets within a simulation, coupled with the MultiTransitionMovingPlatform which enables a platform to be provided with a pre-defined list of transition models and transition times. The platform will continue to loop over the transition sequence provided until the simulation ends.

When simulating sensor platforms it is important to note that within the simulation Stone Soup treats all platforms as potential targets. Therefore, if we created multiple sensor platforms they would each sense all other platforms within the simulation (sensor-target geometry dependent).

For this example we will create an air target which will fly a sequence of straight and level followed by a coordinated turn in the \(x-y\) plane. This is configured such that the target will perform each manoeuvre for 8 seconds, and it will turn through 45 degrees over the course of the turn manoeuvre.

# Import a Constant Turn model to enable target to perform basic manoeuvre
from stonesoup.models.transition.linear import ConstantTurn

straight_level = CombinedLinearGaussianTransitionModel(
    [ConstantVelocity(0.), ConstantVelocity(0.), ConstantVelocity(0.)])

# Configure the aircraft turn behaviour
turn_noise_diff_coeffs = np.array([0., 0.])

turn_rate = np.pi/32  # specified in radians per second

turn_model = ConstantTurn(turn_noise_diff_coeffs=turn_noise_diff_coeffs, turn_rate=turn_rate)

# Configure turn model to maintain current altitude
turning = CombinedLinearGaussianTransitionModel(
    [turn_model, ConstantVelocity(0.)])

manoeuvre_list = [straight_level, turning]
manoeuvre_times = [timedelta(seconds=8),
                   timedelta(seconds=8)]
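The 45-degree turn stated earlier follows directly from the configured rate and duration:

```python
import math

turn_rate = math.pi / 32   # rad/s, as configured above
turn_duration = 8          # seconds per manoeuvre

turn_angle = math.degrees(turn_rate * turn_duration)
print(turn_angle)  # 45.0 degrees over the course of the turn
```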

Now that we have created a list of manoeuvre behaviours and durations we can build our multi-transition moving platform. Because we intend for this platform to be a target we do not need to attach any sensors to it.

# Import a multi-transition moving platform
from stonesoup.platform.base import MultiTransitionMovingPlatform

initial_target_location = StateVector([[0], [-40], [1800], [0], [8000], [0]])
initial_target_state = State(initial_target_location, start_time)
target = MultiTransitionMovingPlatform(transition_models=manoeuvre_list,
                                       transition_times=manoeuvre_times,
                                       states=initial_target_state,
                                       position_mapping=(0, 2, 4),
                                       velocity_mapping=(1, 3, 5),
                                       sensors=None)

Creating the simulator

Now that we have built our sensor platform and a target platform we need to wrap them in a simulator. Because we do not want any additional ground truth objects, which most simulators in Stone Soup generate, we use a DummyGroundTruthSimulator which returns a set of empty ground truth paths with timestamps. These are then fed into a PlatformDetectionSimulator along with the two platforms we have already built.

# Import the required simulators
from stonesoup.simulator.simple import DummyGroundTruthSimulator
from stonesoup.simulator.platform import PlatformDetectionSimulator

We now need to create an array of timestamps which starts at start_time and allows the simulator to run for 24 one-second time steps.

times = np.arange(0, 24, 1)  # 24 time steps at one-second intervals

timestamps = [start_time + timedelta(seconds=float(elapsed_time)) for elapsed_time in times]

truths = DummyGroundTruthSimulator(times=timestamps)
sim = PlatformDetectionSimulator(groundtruth=truths, platforms=[sensor_platform, target])

Create a Tracker

Now that we have set up our sensor platform, target and simulation, we need to create a tracker. For this example we will use a Particle Filter, as this enables us to handle the non-linear nature of the imaging sensor. We will use an inflated constant noise model to account for target motion uncertainty.

Note that we don’t add a measurement model to the updater; this is because each sensor attaches its measurement model to each detection it generates. The tracker handles this internally by checking for a measurement model with each detection it receives and applying only the relevant measurement model.

target_transition_model = CombinedLinearGaussianTransitionModel(
    [ConstantVelocity(5), ConstantVelocity(5), ConstantVelocity(1)])

# First add a Particle Predictor
predictor = ParticlePredictor(target_transition_model)

# Now create a resampler and particle updater
resampler = SystematicResampler()
updater = ParticleUpdater(measurement_model=None,
                          resampler=resampler)

# Create a particle initiator
from stonesoup.initiator.simple import GaussianParticleInitiator, SinglePointInitiator
single_point_initiator = SinglePointInitiator(
    GaussianState([[0], [-40], [2000], [0], [8000], [0]], np.diag([10000, 1000, 10000, 1000, 10000, 1000])),
    None)

initiator = GaussianParticleInitiator(number_particles=500,
                                      initiator=single_point_initiator)

hypothesiser = DistanceHypothesiser(predictor, updater, measure=Mahalanobis(), missed_distance=np.inf)
data_associator = GNNWith2DAssignment(hypothesiser)

from stonesoup.deleter.time import UpdateTimeStepsDeleter
deleter = UpdateTimeStepsDeleter(time_steps_since_update=10)

# Create the single-target particle filter tracker
tracker = SingleTargetTracker(
    initiator=initiator,
    deleter=deleter,
    detector=sim,
    data_associator=data_associator,
    updater=updater
)
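For readers unfamiliar with the SystematicResampler used above, the underlying algorithm can be sketched in a few lines of NumPy. This is an illustration of systematic resampling in general, not Stone Soup's implementation:

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Draw one uniform offset, then take n evenly spaced points
    through the cumulative weights; return the selected indices."""
    rng = np.random.default_rng() if rng is None else rng
    weights = np.asarray(weights, float)
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

indices = systematic_resample([0.7, 0.1, 0.1, 0.1],
                              rng=np.random.default_rng(0))
print(indices)  # most offspring come from the high-weight particle
```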

The final step is to iterate our tracker over the simulation and plot the results. Because we have a bearing-only sensor it does not make sense to plot the detections without animating the resulting plot. This animation shows the sensor platform (blue) moving towards the true target position (red). The estimated target position is shown in black, radar detections are shown in yellow, while the bearing-only imager detections are coloured green.

from matplotlib import animation
import matplotlib

matplotlib.rcParams['animation.html'] = 'jshtml'

from stonesoup.models.measurement.nonlinear import CartesianToElevationBearingRangeRate
from stonesoup.functions import sphere2cart

fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(1, 1, 1)


frames = []
for time, ctracks in tracker:
    artists = []

    ax.set_xlabel("$East$")
    ax.set_ylabel("$North$")
    ax.set_ylim(0, 2250)
    ax.set_xlim(-1000, 1000)
    X = [state.state_vector[0] for state in sensor_platform]
    Y = [state.state_vector[2] for state in sensor_platform]
    artists.extend(ax.plot(X, Y, color='b'))

    for detection in sim.detections:
        if isinstance(detection.measurement_model, CartesianToElevationBearingRangeRate):
            x, y = detection.measurement_model.inverse_function(detection)[[0, 2]]
            color = 'y'
        else:
            r = 10000000
            # extract the platform rotation offsets
            _, el_offset, az_offset = sensor_platform.orientation
            # obtain measurement angles and map to cartesian
            e, a = detection.state_vector
            x, y, _ = sphere2cart(r, a + az_offset, e + el_offset)
            color = 'g'
        X = [sensor_platform.state_vector[0], x]
        Y = [sensor_platform.state_vector[2], y]
        artists.extend(ax.plot(X, Y, color=color))

    X = [state.state_vector[0] for state in target]
    Y = [state.state_vector[2] for state in target]
    artists.extend(ax.plot(X, Y, color='r'))

    for track in ctracks:
        X = [state.state_vector[0] for state in track]
        Y = [state.state_vector[2] for state in track]
        artists.extend(ax.plot(X, Y, color='k'))

    frames.append(artists)

animation.ArtistAnimation(fig, frames)
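The sphere2cart call used when projecting the bearing-only detections performs a spherical-to-Cartesian mapping along the lines of the sketch below (the function name and angle convention here are stated as assumptions for illustration):

```python
import math

def sphere_to_cart(r, azimuth, elevation):
    """Map (range, azimuth, elevation) to Cartesian (x, y, z), with
    azimuth measured in the x-y plane from the x-axis and elevation
    measured up from that plane."""
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return x, y, z

# A zero-elevation ray at 90 degrees azimuth points along +y:
print(sphere_to_cart(1.0, math.pi / 2, 0.0))
```

With a very large range (as in the plotting loop above) this turns each bearing-only measurement into a long line drawn from the sensor for display purposes.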