{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# 1 - An introduction to Stone Soup: using the Kalman filter\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook is designed to introduce some of the basic features of Stone Soup using a single\ntarget scenario and a Kalman filter as an example.\n\n## Background and notation\n\nLet $p(\\mathbf{x}_{k})$, the probability distribution over\n$\\mathbf{x}_{k} \\in \\mathbb{R}^n$, represent a hidden state at some discrete point\n$k$. The $k$ is most often interpreted as a timestep, but could be any sequential\nindex (hence we don't use $t$). The measurement is given by\n$\\mathbf{z}_{k} \\in \\mathbb{R}^m$.\n\nIn Stone Soup, objects derived from the :class:`~.State` class carry information on a variety of\nstates, whether they be hidden states, observations, or ground truth. Minimally, a state will\ncomprise a :class:`~.StateVector` and a timestamp. These can be extended. For example, the\n:class:`~.GaussianState` is parameterised by a mean state vector and a covariance matrix. (We'll\nsee this in action later.)\n\nOur goal is to infer $p(\\mathbf{x}_{k})$, given a sequence of measurements\n$\\mathbf{z}_{1}, ..., \\mathbf{z}_{k}$ (which we'll write as $\\mathbf{z}_{1:k}$). In\ngeneral $\\mathbf{z}$ can include clutter and false alarms. We'll defer those complications\nto later tutorials and assume, for the moment, that all measurements are generated by a target,\nand that detecting the target at each timestep is certain ($p_d = 1$ and\n$p_{fa} = 0$).\n\n### Prediction\n\nWe proceed under the Markovian assumption that $p(\\mathbf{x}_k) = \\int_{-\\infty} ^{\\infty}\np(\\mathbf{x}_k|\\mathbf{x}_{k-1}) p(\\mathbf{x}_{k-1}) d \\mathbf{x}_{k-1}$, meaning that the\ndistribution over the state of an object at time $k$ can be predicted entirely from its\nstate at the previous time $k-1$. 
If our understanding of $p(\\mathbf{x}_{k-1})$ was\ninformed by a series of measurements up to and including timestep $k-1$, we can write\n\n\\begin{align}p(\\mathbf{x}_k|\\mathbf{z}_{1:k-1}) =\n \\int_{-\\infty}^{\\infty} p(\\mathbf{x}_k|\\mathbf{x}_{k-1})\n p(\\mathbf{x}_{k-1}|\\mathbf{z}_{1:k-1})d \\mathbf{x}_{k-1}\\end{align}\n\nThis is known as the *Chapman-Kolmogorov* equation. In Stone Soup we refer to this process as\n*prediction* and to an object that undertakes it as a :class:`~.Predictor`. A predictor requires\na *state transition model*, namely a function $f$ such that\n$\\mathbf{x}_{k|k-1} = f(\\mathbf{x}_{k-1}, \\mathbf{w}_k)$, where $\\mathbf{w}_k$ is a\nnoise term. Stone Soup has transition models derived from the :class:`~.TransitionModel` class.\n\n### Update\n\nWe assume a sensor measurement is generated by some stochastic process represented by a function,\n$\\mathbf{z}_k = h(\\mathbf{x}_k, \\boldsymbol{\\nu}_k)$ where $\\boldsymbol{\\nu}_k$ is\nthe noise.\n\nThe goal of the update process is to generate the *posterior state estimate*, from the prediction\nand the measurement. It does this by way of Bayes' rule,\n\n\\begin{align}p(\\mathbf{x}_k | \\mathbf{z}_{1:k}) =\n \\frac{ p(\\mathbf{z}_{k} | \\mathbf{x}_k) p(\\mathbf{x}_k | \\mathbf{z}_{1:k-1})}\n {p(\\mathbf{z}_k | \\mathbf{z}_{1:k-1})}\\end{align}\n\nwhere $p(\\mathbf{x}_k | \\mathbf{z}_{1:k-1})$ is the output of the prediction stage,\n$p(\\mathbf{z}_{k} | \\mathbf{x}_k)$ is known as the likelihood, and\n$p(\\mathbf{z}_k | \\mathbf{z}_{1:k-1})$ the evidence. In Stone Soup, this calculation is\nundertaken by the :class:`~.Updater` class. Updaters use a :class:`~.MeasurementModel` class\nwhich models the effect of $h(\\cdot)$.\n\n*Figure: a single instance of the predict-update process, showing a prior distribution (bottom\nleft), a prediction (dotted ellipse), and the posterior (top right) calculated after a\nmeasurement update.* 
We then proceed recursively, the posterior distribution at\n$k$ becoming the prior for the next measurement timestep, and so on.\n\n"
]
},
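{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a one-dimensional linear-Gaussian model, both the Chapman-Kolmogorov integral and the Bayes'\nrule update have closed forms, which makes a tiny worked example possible. The sketch below uses\nplain Python rather than the Stone Soup API, and the variable names are our own (illustrative,\nnot part of the library).\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Prior p(x_{k-1} | z_{1:k-1}) ~ N(mu, var)\nmu, var = 0.0, 1.0\n\n# Prediction through x_k = x_{k-1} + w_k, w_k ~ N(0, q):\n# the Chapman-Kolmogorov integral of two Gaussians is again Gaussian\nq = 0.5\nmu_pred, var_pred = mu, var + q\n\n# Update with measurement z_k = x_k + nu_k, nu_k ~ N(0, r),\n# via Bayes' rule (a product of Gaussians, renormalised)\nz, r = 1.2, 0.4\ngain = var_pred / (var_pred + r)\nmu_post = mu_pred + gain * (z - mu_pred)\nvar_post = (1 - gain) * var_pred\nprint(mu_post, var_post)"
]
},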
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## A nearly-constant velocity example\n\nWe're going to set up a simple scenario in which a target moves at constant velocity with the\naddition of some random noise (referred to as a *nearly constant velocity* model).\n\nAs is customary in Python scripts, we begin with some imports. (These ones allow us access to\nmathematical and timing functions.)\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\nfrom datetime import datetime, timedelta"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Simulate a target\n\nWe consider a 2d Cartesian scheme where the state vector is\n$[x \\ \\dot{x} \\ y \\ \\dot{y}]^T$. That is, we'll model the target motion as a position\nand velocity component in each dimension. The units used are unimportant, but do need to be\nconsistent.\n\nTo start we'll create a simple truth path, sampling at 1\nsecond intervals. We'll do this by employing one of Stone Soup's native transition models.\n\nThese inputs are required:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from stonesoup.types.groundtruth import GroundTruthPath, GroundTruthState\nfrom stonesoup.models.transition.linear import CombinedLinearGaussianTransitionModel, \\\n ConstantVelocity\n\n# And the clock starts\nstart_time = datetime.now().replace(microsecond=0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We note that it can sometimes be useful to fix the seed of our random number generator in order\nto probe a particular example repeatedly.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"np.random.seed(1991)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The :class:`~.ConstantVelocity` class creates a one-dimensional constant velocity model with\nGaussian noise. For this simulation $\\mathbf{x}_k = F_k \\mathbf{x}_{k-1} + \\mathbf{w}_k$,\n$\\mathbf{w}_k \\sim \\mathcal{N}(0,Q)$, with\n\n\\begin{align}F_{k} &= \\begin{bmatrix}\n 1 & \\triangle t \\\\\n 0 & 1 \\\\\n \\end{bmatrix} \\\\\n Q_k &= \\begin{bmatrix}\n \\frac{\\triangle t^3}{3} & \\frac{\\triangle t^2}{2} \\\\\n \\frac{\\triangle t^2}{2} & \\triangle t \\\\\n \\end{bmatrix} q\\end{align}\n\nwhere $q$, the input parameter to :class:`~.ConstantVelocity`, is the magnitude of the\nnoise per $\\triangle t$-sized timestep.\n\n"
]
},
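{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick check on the formulae above, we can build $F_k$ and $Q_k$ by hand with NumPy for a\ngiven timestep and noise magnitude. This mirrors what :class:`~.ConstantVelocity` computes\ninternally, but the helper function below is our own sketch, not part of the library.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\n\ndef cv_matrices(dt, q):\n    \"\"\"Transition matrix and noise covariance of a 1d constant velocity model.\"\"\"\n    F = np.array([[1., dt],\n                  [0., 1.]])\n    Q = q * np.array([[dt**3 / 3, dt**2 / 2],\n                      [dt**2 / 2, dt]])\n    return F, Q\n\nF, Q = cv_matrices(1.0, 0.05)\nprint(F)\nprint(Q)"
]
},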
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The :class:`~.CombinedLinearGaussianTransitionModel` class takes a number\nof 1d models and combines them into a linear Gaussian model of arbitrary dimension, $D$.\n\n\\begin{align}F_{k}^{D} &= \\begin{bmatrix}\n F_k^{1} & & \\mathbf{0} \\\\\n & \\ddots & \\\\\n \\mathbf{0} & & F_k^d \\\\\n \\end{bmatrix}\\\\\n Q_{k}^{D} &= \\begin{bmatrix}\n Q_k^{1} & & \\mathbf{0} \\\\\n & \\ddots & \\\\\n \\mathbf{0} & & Q_k^d \\\\\n \\end{bmatrix}\\end{align}\n\nWe want a 2d simulation, so we'll do:\n\n"
]
},
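{
"cell_type": "markdown",
"metadata": {},
"source": [
"The combination is simply a block-diagonal stacking of the 1d matrices. A minimal sketch follows;\nthe `block_diag` helper is our own (for instance, `scipy.linalg.block_diag` does the same job).\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\n\ndef block_diag(*blocks):\n    \"\"\"Stack square matrices along the diagonal, zeros elsewhere.\"\"\"\n    n = sum(b.shape[0] for b in blocks)\n    out = np.zeros((n, n))\n    i = 0\n    for b in blocks:\n        out[i:i + b.shape[0], i:i + b.shape[0]] = b\n        i += b.shape[0]\n    return out\n\ndt = 1.0\nF1 = np.array([[1., dt],\n               [0., 1.]])\nF_2d = block_diag(F1, F1)  # 4x4 transition matrix for [x, xdot, y, ydot]\nprint(F_2d)"
]
},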
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"q_x = 0.05\nq_y = 0.05\ntransition_model = CombinedLinearGaussianTransitionModel([ConstantVelocity(q_x),\n ConstantVelocity(q_y)])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A 'truth path' is created, starting at (0, 0) and moving to the north-east at one distance unit\nper timestep in each dimension.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"timesteps = [start_time]\ntruth = GroundTruthPath([GroundTruthState([0, 1, 0, 1], timestamp=timesteps[0])])\n\nnum_steps = 20\nfor k in range(1, num_steps + 1):\n\n timesteps.append(start_time+timedelta(seconds=k)) # add next timestep to list of timesteps\n truth.append(GroundTruthState(\n transition_model.function(truth[k-1], noise=True, time_interval=timedelta(seconds=1)),\n timestamp=timesteps[k]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Thus the ground truth is generated and we can plot the result.\n\nStone Soup has a few in-built plotting classes which can be used to plot\nground truths, measurements and tracks in a consistent format. An animated plotter that uses\nPlotly graph objects can be accessed via the class :class:`AnimatedPlotterly` from Stone Soup\nas below.\n\nNote that the animated plotter requires a list of timesteps as an input, and that 'tail_length'\nis set to 0.3. This means that each data point will be on display for 30% of the total\nsimulation time. Also note that the mapping argument is [0, 2] because those are the x and\ny position indices from our state vector.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from stonesoup.plotter import AnimatedPlotterly\nplotter = AnimatedPlotterly(timesteps, tail_length=0.3)\nplotter.plot_ground_truths(truth, [0, 2])\nplotter.fig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can check the $F_k$ and $Q_k$ matrices (generated over a 1s period).\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"transition_model.matrix(time_interval=timedelta(seconds=1))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"transition_model.covar(time_interval=timedelta(seconds=1))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At this point you can play with the various parameters and see how they affect the simulated\noutput.\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Simulate measurements\n\nWe'll use one of Stone Soup's measurement models in order to generate\nmeasurements from the ground truth. For the moment we assume a 'linear' sensor which detects the\nposition, but not velocity, of a target, such that\n$\\mathbf{z}_k = H_k \\mathbf{x}_k + \\boldsymbol{\\nu}_k$,\n$\\boldsymbol{\\nu}_k \\sim \\mathcal{N}(0,R)$, with\n\n\\begin{align}H_k &= \\begin{bmatrix}\n 1 & 0 & 0 & 0\\\\\n 0 & 0 & 1 & 0\\\\\n \\end{bmatrix}\\\\\n R &= \\begin{bmatrix}\n 1 & 0\\\\\n 0 & 1\\\\\n \\end{bmatrix} \\omega\\end{align}\n\nwhere $\\omega$ is set to 5 initially (but again, feel free to play around).\n\n"
]
},
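{
"cell_type": "markdown",
"metadata": {},
"source": [
"In matrix terms, a measurement is generated as follows. This is a plain NumPy sketch with\n$\\omega = 5$; Stone Soup's :class:`~.LinearGaussian` model (used below) wraps the same\narithmetic, and the state values here are made up purely for illustration.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\n\nrng = np.random.default_rng(1991)\n\nH = np.array([[1., 0., 0., 0.],\n              [0., 0., 1., 0.]])  # select the x and y positions\nomega = 5\nR = omega * np.eye(2)\n\nx = np.array([10., 1., 12., 1.])  # an illustrative true state [x, xdot, y, ydot]\nnu = rng.multivariate_normal(np.zeros(2), R)  # measurement noise\nz = H @ x + nu\nprint(z)"
]
},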
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We're going to need a :class:`~.Detection` type to\nstore the detections, and a :class:`~.LinearGaussian` measurement model.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from stonesoup.types.detection import Detection\nfrom stonesoup.models.measurement.linear import LinearGaussian"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The linear Gaussian measurement model is set up by indicating the number of dimensions in the\nstate vector and the dimensions that are measured (so specifying $H_k$) and the noise\ncovariance matrix $R$.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"measurement_model = LinearGaussian(\n ndim_state=4, # Number of state dimensions (position and velocity in 2D)\n mapping=(0, 2), # Mapping measurement vector index to state index\n noise_covar=np.array([[5, 0], # Covariance matrix for Gaussian PDF\n [0, 5]])\n )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Check that the output is as we expect\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"measurement_model.matrix()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"measurement_model.covar()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Generate the measurements\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"measurements = []\nfor state in truth:\n measurement = measurement_model.function(state, noise=True)\n measurements.append(Detection(measurement,\n timestamp=state.timestamp,\n measurement_model=measurement_model))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Plot the result, again mapping the x and y position values\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"plotter.plot_measurements(measurements, [0, 2])\nplotter.fig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At this stage you should have a moderately linear ground truth path (dotted line) with a series\nof simulated measurements overplotted (blue circles). Take a moment to fiddle with the numbers in\n$Q$ and $R$ to see what they do to the path and measurements.\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Construct a Kalman filter\n\nWe're now ready to build a tracker. We'll use a Kalman filter as it's conceptually the simplest\nto start with. The Kalman filter is described extensively elsewhere [#]_$^,$ [#]_,\nso for the moment we just assert that the prediction step proceeds as:\n\n\\begin{align}\\mathbf{x}_{k|k-1} &= F_{k}\\mathbf{x}_{k-1} + B_{k}\\mathbf{u}_{k}\\\\\n P_{k|k-1} &= F_{k}P_{k-1}F_{k}^T + Q_{k},\\end{align}\n\n$B_k \\mathbf{u}_k$ is a control term which we'll ignore for now (we assume we don't\ninfluence the target directly).\n\nThe update step is:\n\n\\begin{align}\\mathbf{x}_{k} &= \\mathbf{x}_{k|k-1} + K_k(\\mathbf{z}_k - H_{k}\\mathbf{x}_{k|k-1})\\\\\n P_{k} &= P_{k|k-1} - K_k H_{k} P_{k|k-1}\\\\\\end{align}\n\nwhere,\n\n\\begin{align}K_k &= P_{k|k-1} H_{k}^T S_k^{-1}\\\\\n S_k &= H_{k} P_{k|k-1} H_{k}^T + R_{k}\\end{align}\n\n$\\mathbf{z}_k - H_{k}\\mathbf{x}_{k|k-1}$ is known as the *innovation* and $S_k$ the\n*innovation covariance*; $K_k$ is the *Kalman gain*.\n\n"
]
},
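{
"cell_type": "markdown",
"metadata": {},
"source": [
"These equations translate almost line-for-line into NumPy. The sketch below runs one\npredict-update cycle on a 1d position-velocity state for brevity. It is our own illustration of\nthe algebra; in a moment we'll let :class:`~.KalmanPredictor` and :class:`~.KalmanUpdater` do\nthis work for us.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\n\ndef kf_step(x, P, z, F, Q, H, R):\n    \"\"\"One Kalman filter predict-update cycle (no control term).\"\"\"\n    # Predict\n    x_pred = F @ x\n    P_pred = F @ P @ F.T + Q\n    # Update\n    S = H @ P_pred @ H.T + R             # innovation covariance\n    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain\n    y = z - H @ x_pred                   # innovation\n    return x_pred + K @ y, P_pred - K @ H @ P_pred\n\ndt = 1.0\nF = np.array([[1., dt], [0., 1.]])\nQ = 0.05 * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])\nH = np.array([[1., 0.]])\nR = np.array([[5.]])\n\nx, P = np.array([0., 1.]), np.diag([1.5, 0.5])\nx_post, P_post = kf_step(x, P, np.array([1.3]), F, Q, H, R)\nprint(x_post)"
]
},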
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Constructing a predictor and updater in Stone Soup is simple. In a nice division of\nresponsibility, a :class:`~.Predictor` takes a :class:`~.TransitionModel` as input and\nan :class:`~.Updater` takes a :class:`~.MeasurementModel` as input. Note that for now we're using\nthe same models used to generate the ground truth and the simulated measurements. This won't\nusually be possible, and it's an interesting exercise to explore what happens when these\nparameters are mismatched.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from stonesoup.predictor.kalman import KalmanPredictor\npredictor = KalmanPredictor(transition_model)\n\nfrom stonesoup.updater.kalman import KalmanUpdater\nupdater = KalmanUpdater(measurement_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run the Kalman filter\nNow we have the components, we can execute the Kalman filter estimator on the simulated data.\n\nIn order to start, we'll need to create the first prior estimate. We're going to use the\n:class:`~.GaussianState` we mentioned earlier. As the name suggests, this parameterises the state\nas $\\mathcal{N}(\\mathbf{x}_0, P_0)$. By happy chance the initial values are chosen to match\nthe truth quite well. You might want to manipulate these to see what happens.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from stonesoup.types.state import GaussianState\nprior = GaussianState([[0], [1], [0], [1]], np.diag([1.5, 0.5, 1.5, 0.5]), timestamp=start_time)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this instance data association is done somewhat implicitly. There is one prediction and\none detection per timestep, so there's no need to think too deeply about it. Nonetheless, Stone\nSoup discourages such (undesirable) implicit practice and requires that a :class:`~.Prediction`\nand a :class:`~.Detection` be associated explicitly. This is done by way of a\n:class:`~.Hypothesis`; the simplest is a :class:`~.SingleHypothesis`, which associates a single\npredicted state with a single detection. There is much more detail on how the\n:class:`~.Hypothesis` class is used in later tutorials.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from stonesoup.types.hypothesis import SingleHypothesis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With this, we'll now loop through our measurements, predicting and updating at each timestep.\nUncontroversially, a Predictor has a :meth:`predict` method and an Updater an :meth:`update`\nmethod to do this. Storing the information is facilitated by the top-level :class:`~.Track`\nclass, which holds a sequence of states.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from stonesoup.types.track import Track\ntrack = Track()\nfor measurement in measurements:\n prediction = predictor.predict(prior, timestamp=measurement.timestamp)\n hypothesis = SingleHypothesis(prediction, measurement) # Group a prediction and measurement\n post = updater.update(hypothesis)\n track.append(post)\n prior = track[-1]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Plot the resulting track, including uncertainty ellipses\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"plotter.plot_tracks(track, [0, 2], uncertainty=True)\nplotter.fig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Key points\n1. Stone Soup is built on a variety of types of :class:`~.State` object. These can be used to\n represent hidden states, observations, estimates, ground truth, and more.\n2. Bayesian recursion is undertaken by the successive applications of predict and update methods\n using a :class:`~.Predictor` and an :class:`~.Updater`. Explicit association of predicted\n states with measurements is necessary. Broadly speaking, predictors apply a\n :class:`~.TransitionModel`, data associators use a\n :class:`~.Hypothesiser` to associate a prediction with a measurement, and updaters use this\n association together with the :class:`~.MeasurementModel` to calculate the posterior state\n estimate.\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References\n.. [#] Kalman 1960, A New Approach to Linear Filtering and Prediction Problems, Transactions of\n       the ASME, Journal of Basic Engineering, 82 (series D), 35\n       (https://pdfs.semanticscholar.org/bb55/c1c619c30f939fc792b049172926a4a0c0f7.pdf)\n.. [#] Anderson & Moore 1979, Optimal Filtering,\n       (http://users.cecs.anu.edu.au/~john/papers/BOOK/B02.PDF)\n\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}