{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# 5 - Data association - clutter tutorial\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tracking a single target in the presence of clutter and missed detections\n\nTracking is frequently complicated by the presence of detections which are not\nassociated with the target of interest. These may arise from sensor-generated noise,\nreturns off intervening physical objects, or environmental effects. We refer to them\ncollectively as *clutter* if they have only nuisance value and need to be filtered out.\n\nIn this tutorial we introduce the use of data association algorithms in Stone Soup and\ndemonstrate how they can mitigate confusion due to clutter. To begin with we use a\n**nearest neighbour** method, which is conceptually simple and associates a prediction and a\ndetection based only on their proximity as quantified by a particular metric.\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Background\nThe principal difficulty in practical multi-object tracking is usually not to do with the state\nof an individual object. Rather, it is that the potential association of measurements to\npredictions is fraught with ambiguity. To illustrate this point, consider two examples.\n\nThe first is the full set of associations of two targets (crosses) with two measurements\n(stars): green associations assume that each measurement is generated by one target only and\nthat each target generates at most one measurement; yellow associations include measurements\ngenerated by more than one target; pink associations include targets generating more than one\nmeasurement. The second example counts the ways of associating up to 5 targets with up to 10\nmeasurements, depending on whether one-to-one, many-to-one or many-to-many associations are\nallowed. In this latter instance the number of potential associations *at a single time\ninstance* tops out at about $10^{15}$.\n\nClearly it would be prohibitive to assess each of these options at each timestep. For this\nreason a number of data association schemes exist. We'll set up a scenario and then introduce\nthe nearest neighbour algorithm as a first means to address the association issue.\n\n"
]
},
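{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough sanity check on these numbers (a back-of-the-envelope sketch, not part of Stone\nSoup), in the many-to-many case each of the $5 \\times 10$ target-measurement pairs is either\nassociated or not, giving $2^{50} \\approx 10^{15}$ hypotheses at a single time instance:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Many-to-many: each of the 5 x 10 target-measurement pairs is\n# independently associated or not.\nmany_to_many = 2 ** (5 * 10)\nprint(f\"{many_to_many:.1e}\")  # ~1.1e+15"
]
},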
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up a simulation\nAs in previous tutorials, we start with a target moving linearly in the 2D Cartesian plane.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\nfrom scipy.stats import uniform\nfrom datetime import datetime\nfrom datetime import timedelta\n\nfrom stonesoup.models.transition.linear import CombinedLinearGaussianTransitionModel, \\\n ConstantVelocity\nfrom stonesoup.types.groundtruth import GroundTruthPath, GroundTruthState\n\nnp.random.seed(1991)\n\nstart_time = datetime.now()\ntransition_model = CombinedLinearGaussianTransitionModel([ConstantVelocity(0.005),\n ConstantVelocity(0.005)])\ntruth = GroundTruthPath([GroundTruthState([0, 1, 0, 1], timestamp=start_time)])\nfor k in range(1, 21):\n truth.append(GroundTruthState(\n transition_model.function(truth[k-1], noise=True, time_interval=timedelta(seconds=1)),\n timestamp=start_time+timedelta(seconds=k)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Probability of detection\nFor the first time we introduce the possibility that, at any time-step, our sensor receives no\ndetection from the target (i.e. $p_d < 1$).\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"prob_det = 0.9"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Simulate clutter\nNext we generate some measurements and, since $p_{fa} > 0$, add clutter at each time-step. We\nuse the :class:`~.TrueDetection` and :class:`~.Clutter` subclasses of :class:`~.Detection` to\nhelp with identifying data types in plots later. A random number of clutter points (up to ten\nper time-step) is generated and uniformly distributed across a $\\pm 10$ rectangular region\ncentred on the true position of the target.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from stonesoup.types.detection import TrueDetection\nfrom stonesoup.types.detection import Clutter\nfrom stonesoup.models.measurement.linear import LinearGaussian\nmeasurement_model = LinearGaussian(\n ndim_state=4,\n mapping=(0, 2),\n noise_covar=np.array([[0.75, 0],\n [0, 0.75]])\n )\nall_measurements = []\nfor state in truth:\n measurement_set = set()\n\n # Generate actual detection from the state with a 1-p_d chance that no detection is received.\n if np.random.rand() <= prob_det:\n measurement = measurement_model.function(state, noise=True)\n measurement_set.add(TrueDetection(state_vector=measurement,\n groundtruth_path=truth,\n timestamp=state.timestamp,\n measurement_model=measurement_model))\n\n # Generate clutter at this time-step\n truth_x = state.state_vector[0]\n truth_y = state.state_vector[2]\n for _ in range(np.random.randint(10)):\n x = uniform.rvs(truth_x - 10, 20)\n y = uniform.rvs(truth_y - 10, 20)\n measurement_set.add(Clutter(np.array([[x], [y]]), timestamp=state.timestamp,\n measurement_model=measurement_model))\n\n all_measurements.append(measurement_set)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Plot the ground truth and measurements with clutter.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Plot ground truth.\nfrom stonesoup.plotter import Plotterly\nplotter = Plotterly()\nplotter.plot_ground_truths(truth, [0, 2])\n\n# Plot true detections and clutter.\nplotter.plot_measurements(all_measurements, [0, 2])\n\nplotter.fig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Distance Hypothesiser and Nearest Neighbour\n\nPerhaps the simplest way to associate a detection with a prediction is to measure a 'distance'\nto each detection and hypothesise that the detection with the lowest distance\nis correctly associated with that prediction.\n\nAn appropriate distance metric for states described by Gaussian distributions is the\n*Mahalanobis distance*. This quantifies the distance of a point relative to a given\ndistribution.\nIn the case of a point $\\mathbf{x} = [x_{1}, ..., x_{N}]^T$, and distribution with mean\n$\\boldsymbol{\\mu} = [\\mu_{1}, ..., \\mu_{N}]^T$ and covariance matrix $P$, the\nMahalanobis distance of $\\mathbf{x}$ from the distribution is given by:\n\n\\begin{align}\\sqrt{(\\mathbf{x} - \\boldsymbol{\\mu})^T P^{-1} (\\mathbf{x} - \\boldsymbol{\\mu})}\\end{align}\n\nwhich equates to the multi-dimensional measure of how many standard deviations a point is away\nfrom the mean.\n\n\n"
]
},
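{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make this concrete, here is a small sketch (plain NumPy, with an invented point and\ncovariance) that evaluates the Mahalanobis distance directly from the formula above:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"x = np.array([1., 2.])                # point\nmu = np.array([0., 0.])               # distribution mean\nP = np.array([[4., 0.],\n              [0., 1.]])              # covariance matrix\n\ndiff = x - mu\nmahalanobis = np.sqrt(diff @ np.linalg.inv(P) @ diff)\nprint(mahalanobis)  # sqrt(0.25 + 4) ~= 2.06"
]
},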
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We're going to create a hypothesiser that ranks detections against the predicted measurement\naccording to the Mahalanobis distance, ignoring those that fall outside of $3$ standard\ndeviations of the predicted measurement's mean. To do this we create a\n:class:`~.DistanceHypothesiser`, which pairs incoming detections with track predictions, and pass\nit a :class:`~.Measure` class which (in this instance) calculates the Mahalanobis distance.\n\nThe hypothesiser must use a predicted state given by the predictor, create a measurement\nprediction using the updater, and compare this to a detection given a specific metric. Hence, it\ntakes the predictor, updater, measure (metric) and missed distance (gate) as its arguments. We\ntherefore need to create a predictor and updater, and to initialise a measure.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from stonesoup.predictor.kalman import KalmanPredictor\npredictor = KalmanPredictor(transition_model)\nfrom stonesoup.updater.kalman import KalmanUpdater\nupdater = KalmanUpdater(measurement_model)\n\nfrom stonesoup.hypothesiser.distance import DistanceHypothesiser\nfrom stonesoup.measures import Mahalanobis\nhypothesiser = DistanceHypothesiser(predictor, updater, measure=Mahalanobis(), missed_distance=3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we use the :class:`~.NearestNeighbour` data associator, which picks the hypothesis pair\n(predicted measurement and detection) with the highest 'score' (in this instance, the pair\nthat is closest together).\n\nFor example, where three possible detections are considered for association (some of which may\nbe clutter), the detection with the best score of $0.4$ is the one selected by the nearest\nneighbour algorithm.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from stonesoup.dataassociator.neighbour import NearestNeighbour\ndata_associator = NearestNeighbour(hypothesiser)"
]
},
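{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a toy illustration of this greedy selection (the distances below are invented for the\nsketch), the nearest neighbour rule simply keeps the gated detection with the smallest\nMahalanobis distance:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Hypothetical distances from one prediction to three candidate detections.\ndistances = [0.4, 1.2, 2.7]\nmissed_distance = 3  # gate: detections beyond this are ignored\ngated = [d for d in distances if d < missed_distance]\nprint(min(gated))  # 0.4 -> the detection the nearest neighbour keeps"
]
},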
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run the Kalman filter with the associator\nWith these components, we can run the simulated data and clutter through the Kalman filter.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Create prior\nfrom stonesoup.types.state import GaussianState\nprior = GaussianState([[0], [1], [0], [1]], np.diag([1.5, 0.5, 1.5, 0.5]), timestamp=start_time)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Loop through the predict, hypothesise, associate and update steps.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from stonesoup.types.track import Track\n\ntrack = Track([prior])\nfor n, measurements in enumerate(all_measurements):\n hypotheses = data_associator.associate([track],\n measurements,\n start_time + timedelta(seconds=n))\n hypothesis = hypotheses[track]\n\n if hypothesis.measurement:\n post = updater.update(hypothesis)\n track.append(post)\n else: # When data associator says no detections are good enough, we'll keep the prediction\n track.append(hypothesis.prediction)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Plot the resulting track.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"plotter.plot_tracks(track, [0, 2], uncertainty=True)\nplotter.fig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you experiment with the clutter and detection parameters, you'll notice that there are often\ninstances where the estimate drifts away from the ground truth path. This is known as *track\nseduction*, and is a common feature of 'greedy' methods of association such as the nearest\nneighbour algorithm.\n\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.0"
}
},
"nbformat": 4,
"nbformat_minor": 0
}