Updaters

class stonesoup.updater.base.Updater(measurement_model: MeasurementModel)[source]

Bases: Base

Updater base class

An updater is used to update the predicted state, utilising a measurement and a MeasurementModel. The general observation model is

\[\mathbf{z} = h(\mathbf{x}, \mathbf{\sigma})\]

where \(\mathbf{x}\) is the state, \(\mathbf{\sigma}\), the measurement noise and \(\mathbf{z}\) the resulting measurement.

Parameters:

measurement_model (MeasurementModel) – measurement model

measurement_model: MeasurementModel

measurement model

abstract predict_measurement(predicted_state, measurement_model=None, measurement_noise=True, **kwargs)[source]

Get measurement prediction from state prediction

Parameters:
  • predicted_state (StatePrediction) – The state prediction

  • measurement_model (MeasurementModel, optional) – The measurement model used to generate the measurement prediction. Should be used in cases where the measurement model is dependent on the received measurement. The default is None, in which case the updater will use the measurement model specified on initialisation

  • measurement_noise (bool) – Whether to include measurement noise in the predicted measurement. Default True

Returns:

The predicted measurement

Return type:

MeasurementPrediction

abstract update(hypothesis, **kwargs)[source]

Update state using prediction and measurement.

Parameters:

hypothesis (Hypothesis) – Hypothesis with predicted state and associated detection used for updating.

Returns:

The state posterior

Return type:

State

Kalman

class stonesoup.updater.kalman.KalmanUpdater(measurement_model: LinearGaussian = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False)[source]

Bases: Updater

A class which embodies Kalman-type updaters; it also performs the measurement update step as in the standard Kalman filter.

The Kalman updaters assume \(h(\mathbf{x}) = H \mathbf{x}\) with additive noise \(\sigma = \mathcal{N}(0,R)\). Daughter classes can override this to specify a more general measurement model \(h(\mathbf{x})\).

update() first calls the predict_measurement() function, which calculates the predicted measurement, innovation covariance and measurement cross-covariance,

\[ \begin{align}\begin{aligned}\mathbf{z}_{k|k-1} &= H_k \mathbf{x}_{k|k-1}\\S_k &= H_k P_{k|k-1} H_k^T + R_k\\\Upsilon_k &= P_{k|k-1} H_k^T\end{aligned}\end{align} \]

where \(P_{k|k-1}\) is the predicted state covariance. predict_measurement() returns a GaussianMeasurementPrediction. The Kalman gain is then calculated as,

\[K_k = \Upsilon_k S_k^{-1}\]

and the posterior state mean and covariance are,

\[ \begin{align}\begin{aligned}\mathbf{x}_{k|k} &= \mathbf{x}_{k|k-1} + K_k (\mathbf{z}_k - H_k \mathbf{x}_{k|k-1})\\P_{k|k} &= P_{k|k-1} - K_k S_k K_k^T\end{aligned}\end{align} \]

These are returned as a GaussianStateUpdate object.
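
As a minimal usage sketch (the state dimension, mapping and noise values below are illustrative, not prescribed by the class): a four-dimensional position-velocity state observed through a position-only LinearGaussian model and updated with a single detection.

    import datetime
    import numpy as np

    from stonesoup.models.measurement.linear import LinearGaussian
    from stonesoup.types.array import CovarianceMatrix, StateVector
    from stonesoup.types.detection import Detection
    from stonesoup.types.hypothesis import SingleHypothesis
    from stonesoup.types.prediction import GaussianStatePrediction
    from stonesoup.updater.kalman import KalmanUpdater

    timestamp = datetime.datetime.now()

    # Observe position (indices 0 and 2) of an (x, vx, y, vy) state; values are illustrative
    measurement_model = LinearGaussian(ndim_state=4, mapping=(0, 2),
                                       noise_covar=CovarianceMatrix(np.diag([0.25, 0.25])))
    updater = KalmanUpdater(measurement_model)

    prediction = GaussianStatePrediction(StateVector([0., 1., 0., 1.]),
                                         CovarianceMatrix(np.eye(4)),
                                         timestamp=timestamp)
    detection = Detection(StateVector([0.2, 0.1]), timestamp=timestamp,
                          measurement_model=measurement_model)

    # z_{k|k-1}, S_k and the cross covariance come from predict_measurement()
    measurement_prediction = updater.predict_measurement(prediction)
    # The posterior GaussianStateUpdate
    posterior = updater.update(SingleHypothesis(prediction, detection))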

Parameters:
  • measurement_model (LinearGaussian, optional) – A linear Gaussian measurement model. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown.

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

measurement_model: LinearGaussian

A linear Gaussian measurement model. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown.

force_symmetric_covariance: bool

A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

use_joseph_cov: bool

Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

predict_measurement(predicted_state, measurement_model=None, measurement_noise=True, **kwargs)[source]

Predict the measurement implied by the predicted state mean

Parameters:
  • predicted_state (GaussianState) – The predicted state \(\mathbf{x}_{k|k-1}\), \(P_{k|k-1}\)

  • measurement_model (MeasurementModel) – The measurement model. If omitted, the model in the updater object is used

  • measurement_noise (bool) – Whether to include measurement noise \(R\) with innovation covariance. Default True

  • **kwargs (various) – These are passed to function() and matrix()

Returns:

The measurement prediction, \(\mathbf{z}_{k|k-1}\)

Return type:

GaussianMeasurementPrediction

update(hypothesis, **kwargs)[source]

The Kalman update method. Given a hypothesised association between a predicted state or predicted measurement and an actual measurement, calculate the posterior state.

Parameters:
  • hypothesis (SingleHypothesis) – the prediction-measurement association hypothesis. This hypothesis may carry a predicted measurement, or a predicted state. In the latter case a predicted measurement will be calculated.

  • **kwargs (various) – These are passed to predict_measurement()

Returns:

The posterior state Gaussian with mean \(\mathbf{x}_{k|k}\) and covariance \(P_{k|k}\)

Return type:

GaussianStateUpdate

class stonesoup.updater.kalman.ExtendedKalmanUpdater(measurement_model: MeasurementModel = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False)[source]

Bases: KalmanUpdater

The Extended Kalman Filter version of the Kalman Updater. Inherits most of the functionality from KalmanUpdater.

The difference is that the measurement model may now be non-linear, though must be differentiable to return the linearisation of \(h(\mathbf{x})\) via the matrix \(H\) accessible via jacobian().
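
For instance (a hedged sketch; the bearing-range model and noise values are illustrative), the updater can be paired with a nonlinear measurement model that supplies its own Jacobian:

    import numpy as np

    from stonesoup.models.measurement.nonlinear import CartesianToBearingRange
    from stonesoup.updater.kalman import ExtendedKalmanUpdater

    # Bearing-range sensor observing position (indices 0 and 2) of an (x, vx, y, vy) state
    measurement_model = CartesianToBearingRange(
        ndim_state=4, mapping=(0, 2),
        noise_covar=np.diag([np.radians(0.5)**2, 1.0]))
    updater = ExtendedKalmanUpdater(measurement_model)
    # update() linearises the model about the prediction via measurement_model.jacobian()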

Parameters:
  • measurement_model (MeasurementModel, optional) – A measurement model. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown. Must be linear or capable of implementing the jacobian().

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

measurement_model: MeasurementModel

A measurement model. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown. Must be linear or capable of implementing the jacobian().

class stonesoup.updater.kalman.UnscentedKalmanUpdater(measurement_model: MeasurementModel = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False, alpha: float = 0.5, beta: float = 2, kappa: float = None)[source]

Bases: KalmanUpdater

The Unscented Kalman Filter version of the Kalman Updater. Inherits most of the functionality from KalmanUpdater.

In this case the predict_measurement() function uses the unscented_transform() function to estimate a (Gaussian) predicted measurement. This is then updated via the standard Kalman update equations.
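
A brief construction sketch with the sigma-point parameters written out explicitly (the bearing-range model is illustrative; the parameter values shown are the documented defaults):

    import numpy as np

    from stonesoup.models.measurement.nonlinear import CartesianToBearingRange
    from stonesoup.updater.kalman import UnscentedKalmanUpdater

    measurement_model = CartesianToBearingRange(
        ndim_state=4, mapping=(0, 2),
        noise_covar=np.diag([np.radians(0.5)**2, 1.0]))
    # alpha, beta and kappa control the sigma point spread (see the parameter list below)
    updater = UnscentedKalmanUpdater(measurement_model, alpha=0.5, beta=2, kappa=None)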

Parameters:
  • measurement_model (MeasurementModel, optional) – The measurement model to be used. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown.

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

  • alpha (float, optional) – Primary sigma point spread scaling parameter. Default is 0.5.

  • beta (float, optional) – Used to incorporate prior knowledge of the distribution. If the true distribution is Gaussian, the value of 2 is optimal. Default is 2

  • kappa (float, optional) – Secondary spread scaling parameter. Default is calculated as 3-Ns

measurement_model: MeasurementModel

The measurement model to be used. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown.

alpha: float

Primary sigma point spread scaling parameter. Default is 0.5.

beta: float

Used to incorporate prior knowledge of the distribution. If the true distribution is Gaussian, the value of 2 is optimal. Default is 2

kappa: float

Secondary spread scaling parameter. Default is calculated as 3-Ns

predict_measurement(predicted_state, measurement_model=None, measurement_noise=True, **kwargs)[source]

Unscented Kalman Filter measurement prediction step. Uses the unscented transform to estimate a Gauss-distributed predicted measurement.

Parameters:
  • predicted_state (GaussianStatePrediction) – A predicted state

  • measurement_model (MeasurementModel, optional) – The measurement model used to generate the measurement prediction. This should be used in cases where the measurement model is dependent on the received measurement (the default is None, in which case the updater will use the measurement model specified on initialisation)

  • measurement_noise (bool) – Whether to include measurement noise \(R\) with innovation covariance

Returns:

The measurement prediction

Return type:

GaussianMeasurementPrediction

class stonesoup.updater.kalman.SqrtKalmanUpdater(measurement_model: MeasurementModel = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False, qr_method: bool = False)[source]

Bases: ExtendedKalmanUpdater

The Square root version of the Kalman Updater.

The input State is a SqrtGaussianState, which means that the covariance of the predicted state is stored in square root form. This can be achieved by keeping the covar attribute as \(L\), where the ‘full’ covariance matrix is \(P_{k|k-1} = L_{k|k-1} L^T_{k|k-1}\) (Eq. 1).

In its basic form \(L\) is the lower triangular matrix returned via Cholesky factorisation. There’s no reason why other forms that satisfy Eq 1 above can’t be used.
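
A short NumPy illustration of Eq. 1, showing the Cholesky factor and one alternative square-root form (values are illustrative):

    import numpy as np

    # Illustrative 'full' covariance and its lower-triangular Cholesky factor (Eq. 1)
    P = np.array([[4.0, 1.0],
                  [1.0, 3.0]])
    L = np.linalg.cholesky(P)
    assert np.allclose(L @ L.T, P)

    # Any L' = L Q with Q orthogonal also satisfies Eq. 1, so other square-root
    # forms are equally admissible
    Q, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(2, 2)))
    L_alt = L @ Q
    assert np.allclose(L_alt @ L_alt.T, P)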

References

  1. Schmidt, S.F. 1970, Computational techniques in Kalman filtering, NATO advisory group for aerospace research and development, London 1970

  2. Andrews, A. 1968, A square root formulation of the Kalman covariance equations, AIAA Journal, 6:6, 1165-1166

Parameters:
  • measurement_model (MeasurementModel, optional) – A measurement model. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown. Must be linear or capable of implementing the jacobian().

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

  • qr_method (bool, optional) – A switch to do the update via a QR decomposition, rather than using the (vector form of the) Potter method.

qr_method: bool

A switch to do the update via a QR decomposition, rather than using the (vector form of the) Potter method.

class stonesoup.updater.kalman.IteratedKalmanUpdater(measurement_model: MeasurementModel = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False, tolerance: float = 1e-06, measure: Measure = Euclidean(mapping=None, mapping2=None), max_iterations: int = 1000)[source]

Bases: ExtendedKalmanUpdater

This version of the Kalman updater runs an iteration over the linearisation of the sensor function in order to refine the posterior state estimate. Specifically,

\[ \begin{align}\begin{aligned}\mathbf{x}_{k,i+1} &= \mathbf{x}_{k|k-1} + K_i [\mathbf{z} - h(\mathbf{x}_{k,i}) - H_i (\mathbf{x}_{k|k-1} - \mathbf{x}_{k,i}) ]\\P_{k,i+1} &= (I - K_i H_i) P_{k|k-1}\end{aligned}\end{align} \]

where,

\[ \begin{align}\begin{aligned}H_i &= h^{\prime}(\mathbf{x}_{k,i}),\\K_i &= P_{k|k-1} H_i^T (H_i P_{k|k-1} H_i^T + R)^{-1}\end{aligned}\end{align} \]

and

\[ \begin{align}\begin{aligned}\mathbf{x}_{k,0} &= \mathbf{x}_{k|k-1}\\P_{k,0} &= P_{k|k-1}\end{aligned}\end{align} \]

It inherits from the ExtendedKalmanUpdater as it uses the same linearisation of the sensor function via the _measurement_matrix() function.
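
A hedged NumPy sketch of this iteration (h and jacobian below are stand-ins for the measurement model's function and Jacobian; the stopping rule mirrors the tolerance, measure and max_iterations parameters listed below):

    import numpy as np

    def iterated_update(x_pred, P_pred, z, h, jacobian, R, tolerance=1e-6, max_iterations=1000):
        # x_pred, P_pred: predicted mean and covariance; z: measurement vector
        x_i = x_pred
        for _ in range(max_iterations):
            H_i = jacobian(x_i)
            K_i = P_pred @ H_i.T @ np.linalg.inv(H_i @ P_pred @ H_i.T + R)
            x_next = x_pred + K_i @ (z - h(x_i) - H_i @ (x_pred - x_i))
            if np.linalg.norm(x_next - x_i) < tolerance:  # Euclidean stopping criterion
                x_i = x_next
                break
            x_i = x_next
        # Posterior covariance from the final linearisation point
        P_post = (np.eye(len(x_pred)) - K_i @ H_i) @ P_pred
        return x_i, P_post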

Parameters:
  • measurement_model (MeasurementModel, optional) – A measurement model. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown. Must be linear or capable of implementing the jacobian().

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

  • tolerance (float, optional) – The value of the difference in the measure used as a stopping criterion.

  • measure (Measure, optional) – The measure to use to test the iteration stopping criterion. Defaults to the Euclidean distance between current and prior posterior state estimate.

  • max_iterations (int, optional) – Number of iterations before while loop is exited and a non-convergence warning is returned

tolerance: float

The value of the difference in the measure used as a stopping criterion.

measure: Measure

The measure to use to test the iteration stopping criterion. Defaults to the Euclidean distance between current and prior posterior state estimate.

max_iterations: int

Number of iterations before while loop is exited and a non-convergence warning is returned

update(hypothesis, **kwargs)[source]

The iterated Kalman update method. Given a hypothesised association between a predicted state or predicted measurement and an actual measurement, calculate the posterior state.

Parameters:
  • hypothesis (SingleHypothesis) – the prediction-measurement association hypothesis. This hypothesis may carry a predicted measurement, or a predicted state. In the latter case a predicted measurement will be calculated.

  • **kwargs (various) – These are passed to the measurement model function

Returns:

The posterior state Gaussian with mean \(\mathbf{x}_{k|k}\) and covariance \(P_{k|k}\)

Return type:

GaussianStateUpdate

class stonesoup.updater.kalman.SchmidtKalmanUpdater(measurement_model: MeasurementModel = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False, consider: ndarray = None)[source]

Bases: ExtendedKalmanUpdater

A class which extends the standard Kalman filter to employ the Schmidt-Kalman version of the update. The key thing here is that the state vector is split into parameters to be estimated, and those which are merely ‘considered’. The consider parameters are not updated, though their relative covariances are maintained through the process. The state vector, covariance and measurement matrix are defined as,

\[ \begin{align}\begin{aligned}\mathbf{x}^T &\triangleq [\mathbf{s}^T \ \mathbf{p}^T]\\H &= [H_s \ H_p]\end{aligned}\end{align} \]
\[\begin{split}P &= \begin{bmatrix} P_{ss} & P_{sp} \\ P_{ps} & P_{pp} \end{bmatrix}\end{split}\]

where the consider parameters are denoted \(p\) and those to be estimated \(s\). Note that though they are separated in the definition above, they may be interleaved in practice. The update proceeds as:

\[ \begin{align}\begin{aligned}K_s &= (P_{ss,k|k-1} H_s^T + P_{sp,k|k-1} H_p^T) S^{-1},\\\mathbf{s}_{k|k} &= \mathbf{s}_{k|k-1} + K_s (\mathbf{z} - H_s \mathbf{s}_{k|k-1} - H_p \mathbf{p}_{k|k-1}),\\\mathbf{p}_{k|k} &= \mathbf{p}_{k|k-1},\end{aligned}\end{align} \]
\[\begin{split}P_{k|k} &= \begin{bmatrix} P_{ss,k|k-1} - K_s S K_s^T & P_{sp,k|k-1} - K_s H \begin{bmatrix} P_{sp,k|k-1} \\ P_{pp,k|k-1} \end{bmatrix} \\ P_{ps,k|k-1} - \begin{bmatrix} P_{sp,k|k-1} \\ P_{pp,k|k-1} \end{bmatrix}^T H^T K_s^T & P_{pp,k|k-1} \end{bmatrix}\end{split}\]

Note

Due to the excellent efficiency of NumPy’s matrix algebra tools, the savings gained by extracting a sub-matrix over performing the calculation on full matrices, are relatively minor. This class therefore functions most effectively as a tutorial example of the Schmidt-Kalman updater. Efficiencies could be made by enforcing view operations rather than copies, using the square-root form or, most promisingly, by employing Cython.
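
As a hypothetical construction sketch (state dimension, mapping and noise values are illustrative): a four-dimensional state in which the last two elements are ‘consider’ parameters and only the first two are observed and estimated.

    import numpy as np

    from stonesoup.models.measurement.linear import LinearGaussian
    from stonesoup.updater.kalman import SchmidtKalmanUpdater

    # Observe the two estimated elements only; the final two elements are 'considered'
    measurement_model = LinearGaussian(ndim_state=4, mapping=(0, 1),
                                       noise_covar=np.diag([0.5, 0.5]))
    updater = SchmidtKalmanUpdater(measurement_model=measurement_model,
                                   consider=np.array([False, False, True, True]))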

References

[1] S. F. Schmidt, “Application of State-Space Methods to Navigation Problems,” Advances in Control Systems, Vol. 3, 1966, pp. 293–340

[2] Zanetti, R. & D’Souza, C. (2013). Recursive Implementations of the Schmidt-Kalman ‘Consider’ Filter. The Journal of the Astronautical Sciences. 60. 672-685. 10.1007/s40295-015-0068-7.

Parameters:
  • measurement_model (MeasurementModel, optional) – A measurement model. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown. Must be linear or capable of implementing the jacobian().

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

  • consider (numpy.ndarray, optional) – The boolean vector of ‘consider’ parameters. True indicates considered, False are state parameters to be estimated. If undefined these default to all False, i.e. the standard Kalman filter.

consider: ndarray

The boolean vector of ‘consider’ parameters. True indicates considered, False are state parameters to be estimated. If undefined these default to all False, i.e. the standard Kalman filter.

class stonesoup.updater.kalman.CubatureKalmanUpdater(measurement_model: MeasurementModel = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False, alpha: float = 1.0)[source]

Bases: KalmanUpdater

The cubature Kalman filter version of the Kalman updater. Inherits most of its functionality from KalmanUpdater.

The predict_measurement() function uses the cubature_transform() function to estimate a (Gaussian) predicted measurement. This is then updated via the standard Kalman update equations.

Parameters:
  • measurement_model (MeasurementModel, optional) – The measurement model to be used. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown.

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

  • alpha (float, optional) – Scaling parameter. Default is 1.0. Lower values select points closer to the mean and vice versa.

measurement_model: MeasurementModel

The measurement model to be used. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown.

alpha: float

Scaling parameter. Default is 1.0. Lower values select points closer to the mean and vice versa.

predict_measurement(predicted_state, measurement_model=None, measurement_noise=True, **kwargs)[source]

Cubature Kalman Filter measurement prediction step. Uses the cubature transform to estimate a Gauss-distributed predicted measurement.

Parameters:
  • predicted_state (GaussianStatePrediction) – A predicted state

  • measurement_model (MeasurementModel, optional) – The measurement model used to generate the measurement prediction. This should be used in cases where the measurement model is dependent on the received measurement (the default is None, in which case the updater will use the measurement model specified on initialisation)

  • measurement_noise (bool) – Whether to include measurement noise \(R\) with innovation covariance. Default True

Returns:

The measurement prediction

Return type:

GaussianMeasurementPrediction

Particle

class stonesoup.updater.particle.ParticleUpdater(measurement_model: MeasurementModel, resampler: Resampler = None, regulariser: Regulariser = None, constraint_func: Callable = None)[source]

Bases: Updater

Particle Updater

Perform an update by multiplying particle weights by the PDF of the measurement model (either the detection's measurement model or the updater's measurement_model), and normalising the weights. If provided, a resampler will be used to take a new sample of particles (this is called every time, and it is up to the resampler to decide if resampling is required).
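
A brief construction sketch (the measurement model is illustrative; the resampler shown is one of several available):

    import numpy as np

    from stonesoup.models.measurement.linear import LinearGaussian
    from stonesoup.resampler.particle import SystematicResampler
    from stonesoup.updater.particle import ParticleUpdater

    # Position-only model of an (x, vx, y, vy) state; values are illustrative
    measurement_model = LinearGaussian(ndim_state=4, mapping=(0, 2),
                                       noise_covar=np.diag([0.25, 0.25]))
    # The resampler is invoked on every update; update() is then called with a
    # SingleHypothesis pairing a particle prediction and a detection, as for other updaters
    updater = ParticleUpdater(measurement_model, resampler=SystematicResampler())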

Parameters:
  • measurement_model (MeasurementModel) – measurement model

  • resampler (Resampler, optional) – Resampler to prevent particle degeneracy

  • regulariser (Regulariser, optional) – Regulariser to prevent particle impoverishment. The regulariser is normally used after resampling. If a Resampler is defined, then regularisation will only take place if the particles have been resampled. If the Resampler is not defined but a Regulariser is, then regularisation will be conducted under the assumption that the user intends for this to occur.

  • constraint_func (Callable, optional) – Callable, user defined function for applying constraints to the states. This is done by setting the weights of particles to 0 for particles that are not correctly constrained. This function provides indices of the unconstrained particles and should accept a ParticleState object and return an array-like object of logical indices.

resampler: Resampler

Resampler to prevent particle degeneracy

regulariser: Regulariser

Regulariser to prevent particle impoverishment. The regulariser is normally used after resampling. If a Resampler is defined, then regularisation will only take place if the particles have been resampled. If the Resampler is not defined but a Regulariser is, then regularisation will be conducted under the assumption that the user intends for this to occur.

constraint_func: Callable

Callable, user defined function for applying constraints to the states. This is done by setting the weights of particles to 0 for particles that are not correctly constrained. This function provides indices of the unconstrained particles and should accept a ParticleState object and return an array-like object of logical indices.

update(hypothesis, **kwargs)[source]

Particle Filter update step

Parameters:

hypothesis (Hypothesis) – Hypothesis with predicted state and associated detection used for updating.

Returns:

The state posterior

Return type:

ParticleState

predict_measurement(predicted_state, measurement_model=None, measurement_noise=True, **kwargs)[source]

Get measurement prediction from state prediction

Parameters:
  • predicted_state (StatePrediction) – The state prediction

  • measurement_model (MeasurementModel, optional) – The measurement model used to generate the measurement prediction. Should be used in cases where the measurement model is dependent on the received measurement. The default is None, in which case the updater will use the measurement model specified on initialisation

  • measurement_noise (bool) – Whether to include measurement noise in the predicted measurement. Default True

Returns:

The predicted measurement

Return type:

MeasurementPrediction

class stonesoup.updater.particle.GromovFlowParticleUpdater(measurement_model: MeasurementModel)[source]

Bases: Updater

Gromov Flow Particle Updater

This is an implementation of the Gromov method for stochastic particle flow filters [2]. The Euler-Maruyama method is used for integration, over 20 steps, using an exponentially increasing step size.

Parameters:

measurement_model (MeasurementModel) – measurement model

update(hypothesis, **kwargs)[source]

Update state using prediction and measurement.

Parameters:

hypothesis (Hypothesis) – Hypothesis with predicted state and associated detection used for updating.

Returns:

The state posterior

Return type:

State

predict_measurement(predicted_state, measurement_model=None, measurement_noise=True, **kwargs)

Get measurement prediction from state prediction

Parameters:
  • predicted_state (StatePrediction) – The state prediction

  • measurement_model (MeasurementModel, optional) – The measurement model used to generate the measurement prediction. Should be used in cases where the measurement model is dependent on the received measurement. The default is None, in which case the updater will use the measurement model specified on initialisation

  • measurement_noise (bool) – Whether to include measurement noise in the predicted measurement. Default True

Returns:

The predicted measurement

Return type:

MeasurementPrediction

class stonesoup.updater.particle.GromovFlowKalmanParticleUpdater(measurement_model: MeasurementModel, kalman_updater: KalmanUpdater = None)[source]

Bases: GromovFlowParticleUpdater

Gromov Flow Parallel Kalman Particle Updater

This is a wrapper around the GromovFlowParticleUpdater which can use an ExtendedKalmanUpdater or UnscentedKalmanUpdater in parallel in order to maintain a state covariance, as proposed in [3]. In this implementation, the mean of the ParticleState is used for the EKF/UKF update.

This should be used in conjunction with the ParticleFlowKalmanPredictor.

Parameters:
  • measurement_model (MeasurementModel) – measurement model

  • kalman_updater (KalmanUpdater, optional) – Kalman updater to use. Default is None, in which case a new instance of ExtendedKalmanUpdater will be created utilising the same measurement model.

kalman_updater: KalmanUpdater

Kalman updater to use. Default is None, in which case a new instance of ExtendedKalmanUpdater will be created utilising the same measurement model.

update(hypothesis, **kwargs)[source]

Update state using prediction and measurement.

Parameters:

hypothesis (Hypothesis) – Hypothesis with predicted state and associated detection used for updating.

Returns:

The state posterior

Return type:

State

predict_measurement(predicted_state, *args, **kwargs)[source]

Get measurement prediction from state prediction

Parameters:
  • predicted_state (StatePrediction) – The state prediction

  • measurement_model (MeasurementModel, optional) – The measurement model used to generate the measurement prediction. Should be used in cases where the measurement model is dependent on the received measurement. The default is None, in which case the updater will use the measurement model specified on initialisation

  • measurement_noise (bool) – Whether to include measurement noise in the predicted measurement. Default True

Returns:

The predicted measurement

Return type:

MeasurementPrediction

class stonesoup.updater.particle.MultiModelParticleUpdater(measurement_model: MeasurementModel, predictor: MultiModelPredictor, resampler: Resampler = None, regulariser: Regulariser = None, constraint_func: Callable = None)[source]

Bases: ParticleUpdater

Particle Updater for the Multi Model system

Parameters:
  • measurement_model (MeasurementModel) – measurement model

  • predictor (MultiModelPredictor) – Predictor which holds the transition matrix

  • resampler (Resampler, optional) – Resampler to prevent particle degeneracy

  • regulariser (Regulariser, optional) – Regulariser to prevent particle impoverishment. The regulariser is normally used after resampling. If a Resampler is defined, then regularisation will only take place if the particles have been resampled. If the Resampler is not defined but a Regulariser is, then regularisation will be conducted under the assumption that the user intends for this to occur.

  • constraint_func (Callable, optional) – Callable, user defined function for applying constraints to the states. This is done by setting the weights of particles to 0 for particles that are not correctly constrained. This function provides indices of the unconstrained particles and should accept a ParticleState object and return an array-like object of logical indices.

predictor: MultiModelPredictor

Predictor which holds the transition matrix

update(hypothesis, **kwargs)[source]

Particle Filter update step

Parameters:

hypothesis (Hypothesis) – Hypothesis with predicted state and associated detection used for updating.

Returns:

The state posterior

Return type:

MultiModelParticleStateUpdate

class stonesoup.updater.particle.RaoBlackwellisedParticleUpdater(measurement_model: MeasurementModel, predictor: RaoBlackwellisedMultiModelPredictor, resampler: Resampler = None, regulariser: Regulariser = None, constraint_func: Callable = None)[source]

Bases: MultiModelParticleUpdater

Particle Updater for the Rao-Blackwellised scheme

Parameters:
  • measurement_model (MeasurementModel) – measurement model

  • predictor (RaoBlackwellisedMultiModelPredictor) – Predictor which holds the transition matrix, models and mappings

  • resampler (Resampler, optional) – Resampler to prevent particle degeneracy

  • regulariser (Regulariser, optional) – Regulariser to prevent particle impoverishment. The regulariser is normally used after resampling. If a Resampler is defined, then regularisation will only take place if the particles have been resampled. If the Resampler is not defined but a Regulariser is, then regularisation will be conducted under the assumption that the user intends for this to occur.

  • constraint_func (Callable, optional) – Callable, user defined function for applying constraints to the states. This is done by setting the weights of particles to 0 for particles that are not correctly constrained. This function provides indices of the unconstrained particles and should accept a ParticleState object and return an array-like object of logical indices.

predictor: RaoBlackwellisedMultiModelPredictor

Predictor which holds the transition matrix, models and mappings

update(hypothesis, **kwargs)[source]

Particle Filter update step

Parameters:

hypothesis (Hypothesis) – Hypothesis with predicted state and associated detection used for updating.

Returns:

The state posterior

Return type:

RaoBlackwellisedParticleStateUpdate

static calculate_model_probabilities(prediction, predictor)[source]

Calculates the new model probabilities based on the ones calculated in the previous time step

class stonesoup.updater.particle.BernoulliParticleUpdater(measurement_model: MeasurementModel, resampler: Resampler = None, regulariser: Regulariser = None, constraint_func: Callable = None, birth_probability: float = 0.01, survival_probability: float = 0.98, clutter_rate: int = 1, clutter_distribution: float = None, detection_probability: float = None, nsurv_particles: float = None)[source]

Bases: ParticleUpdater

Bernoulli Particle Filter Updater class

An implementation of a particle filter updater utilising the Bernoulli filter formulation that estimates the spatial distribution of a single target and estimates its existence, as described in [1].

Due to the nature of the Bernoulli particle filter prediction process, resampling is required at every time step to reduce the number of particles back down to the desired size.

Parameters:
  • measurement_model (MeasurementModel) – measurement model

  • resampler (Resampler, optional) – Resampler to prevent particle degeneracy

  • regulariser (Regulariser, optional) – Regulariser to prevent particle impoverishment. The regulariser is normally used after resampling. If a Resampler is defined, then regularisation will only take place if the particles have been resampled. If the Resampler is not defined but a Regulariser is, then regularisation will be conducted under the assumption that the user intends for this to occur.

  • constraint_func (Callable, optional) – Callable, user defined function for applying constraints to the states. This is done by setting the weights of particles to 0 for particles that are not correctly constrained. This function provides indices of the unconstrained particles and should accept a ParticleState object and return an array-like object of logical indices.

  • birth_probability (float, optional) – Probability of target birth.

  • survival_probability (float, optional) – Probability of target survival

  • clutter_rate (int, optional) – Average number of clutter measurements per time step. Implementation assumes number of clutter measurements follows a Poisson distribution

  • clutter_distribution (float, optional) – Distribution used to describe clutter measurements. This is usually assumed uniform in the measurement space.

  • detection_probability (float, optional) – Probability of detection assigned to the generated samples of the birth distribution. If None, it will inherit from the input.

  • nsurv_particles (float, optional) – Number of particles describing the surviving distribution, which will be output from the update algorithm.

birth_probability: float

Probability of target birth.

survival_probability: float

Probability of target survival

clutter_rate: int

Average number of clutter measurements per time step. Implementation assumes number of clutter measurements follows a Poisson distribution

clutter_distribution: float

Distribution used to describe clutter measurements. This is usually assumed uniform in the measurement space.

detection_probability: float

Probability of detection assigned to the generated samples of the birth distribution. If None, it will inherit from the input.

nsurv_particles: float

Number of particles describing the surviving distribution, which will be output from the update algorithm.

update(hypotheses, **kwargs)[source]

Bernoulli Particle Filter update step

Parameters:

hypotheses (MultipleHypothesis) – Hypotheses containing the sequence of single hypotheses that contain the prediction and unassociated measurements.

Returns:

The state posterior.

Return type:

BernoulliParticleStateUpdate

class stonesoup.updater.particle.SMCPHDUpdater(measurement_model: MeasurementModel, clutter_intensity: float, resampler: Resampler = None, regulariser: Regulariser = None, constraint_func: Callable = None, prob_detect: Probability = Probability(0.85), num_samples: int = None)[source]

Bases: ParticleUpdater

Sequential Monte Carlo Probability Hypothesis Density (SMC-PHD) Updater class

An implementation of a particle updater that estimates only the first-order moment (i.e. the Probability Hypothesis Density) of the multi-target state density based on [4] and [5].

Note

  • It is assumed that the proposal distribution is the same as the dynamics

  • Target “spawning” is not implemented

Parameters:
  • measurement_model (MeasurementModel) – measurement model

  • clutter_intensity (float) – Average number of clutter measurements per time step, per unit volume

  • resampler (Resampler, optional) – Resampler to prevent particle degeneracy

  • regulariser (Regulariser, optional) – Regulariser to prevent particle impoverishment. The regulariser is normally used after resampling. If a Resampler is defined, then regularisation will only take place if the particles have been resampled. If the Resampler is not defined but a Regulariser is, then regularisation will be conducted under the assumption that the user intends for this to occur.

  • constraint_func (Callable, optional) – Callable, user defined function for applying constraints to the states. This is done by setting the weights of particles to 0 for particles that are not correctly constrained. This function provides indices of the unconstrained particles and should accept a ParticleState object and return an array-like object of logical indices.

  • prob_detect (Probability, optional) – Target Detection Probability

  • num_samples (int, optional) – The number of particles to be output by the updater, after resampling. If the corresponding predictor has been configured in 'expansion' mode, users should set this to the number of particles they want to output, otherwise the number of particles will continuously grow. Default is None, which will output the same number of particles as the input prediction.

prob_detect: Probability

Target Detection Probability

clutter_intensity: float

Average number of clutter measurements per time step, per unit volume

num_samples: int

The number of particles to be output by the updater, after resampling. If the corresponding predictor has been configured in 'expansion' mode, users should set this to the number of particles they want to output, otherwise the number of particles will continuously grow. Default is None, which will output the same number of particles as the input prediction.

update(hypotheses, **kwargs)[source]

SMC-PHD update step

Parameters:

hypotheses (MultipleHypothesis) – A container of SingleHypothesis objects. All hypotheses are assumed to have the same prediction (and hence same timestamp).

Returns:

The state posterior

Return type:

ParticleStateUpdate

get_log_weights_per_hypothesis(hypotheses)[source]

Calculate the log particle weights per hypothesis

Parameters:

hypotheses (MultipleHypothesis) – A container of SingleHypothesis objects. All hypotheses are assumed to have the same prediction (and hence same timestamp).

Returns:

The log weights per hypothesis, where the first dimension is the number of particles and the second dimension is the number of hypotheses. The first hypothesis (column) is always the missed detection hypothesis.

Return type:

ndarray

Kernel

class stonesoup.updater.kernel.AdaptiveKernelKalmanUpdater(measurement_model: MeasurementModel, kernel: Kernel = None, lambda_updater: float = 0.001)[source]

Bases: Updater

The adaptive kernel Kalman updater uses the predictions from the predictor to generate the measurement particles and update the posterior kernel weight vector and covariance matrix. Additionally, the updater generates new proposal particles at every step to refine the state estimate.

Parameters:
  • measurement_model (MeasurementModel) – measurement model

  • kernel (Kernel, optional) – Default is None. If None, the default QuadraticKernel is used.

  • lambda_updater (float, optional) – Regularisation parameter. Default is 1e-3

lambda_updater: float

Regularisation parameter. Default is 1e-3

kernel: Kernel

Default is None. If None, the default QuadraticKernel is used.

predict_measurement(state_prediction, measurement_model=None, **kwargs)[source]

Get measurement prediction from state prediction

Parameters:
  • predicted_state (StatePrediction) – The state prediction

  • measurement_model (MeasurementModel, optional) – The measurement model used to generate the measurement prediction. Should be used in cases where the measurement model is dependent on the received measurement. The default is None, in which case the updater will use the measurement model specified on initialisation

  • measurement_noise (bool) – Whether to include measurement noise in the predicted measurement. Default True

Returns:

The predicted measurement

Return type:

MeasurementPrediction

update(hypothesis, **kwargs)[source]

The adaptive kernel Kalman update method. Given a hypothesised association between a predicted state or predicted measurement and an actual measurement, calculate the posterior state.

Parameters:
  • hypothesis (SingleHypothesis) – the prediction-measurement association hypothesis. This hypothesis may carry a predicted measurement, or a predicted state. In the latter case a predicted measurement will be calculated.

  • **kwargs (various) – These are passed to predict_measurement()

Returns:

The posterior state Gaussian with mean \(\mathbf{x}_{k|k}\) and covariance \(P_{k|k}\)

Return type:

KernelParticleStateUpdate

Ensemble

class stonesoup.updater.ensemble.EnsembleUpdater(measurement_model: MeasurementModel = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False)[source]

Bases: KalmanUpdater

Ensemble Kalman Filter Updater class. The EnKF is a hybrid of the Kalman updating scheme and the Monte Carlo approach of the particle filter.

Deliberately structured to resemble the Vanilla Kalman Filter, update() first calls the predict_measurement() function, which calculates the predicted measurement, innovation covariance and measurement cross-covariance. Note, however, that these are not propagated explicitly; they are derived from the sample covariance of the ensemble itself.

The EnKF equations are simpler when written in the following formalism. Note that h is not necessarily a matrix, but may be a nonlinear measurement function.

\[ \begin{align}\begin{aligned}\mathbf{A}_k &= \hat{X} - E(X)\\\mathbf{HA}_k &= h(\hat{X} - E(X))\end{aligned}\end{align} \]

The cross covariance and measurement covariance are given by:

\[ \begin{align}\begin{aligned}P_{xz} &= \frac{1}{M-1} \mathbf{A}_k \mathbf{HA}_k^T\\P_{zz} &= \frac{1}{M-1} \mathbf{HA}_k \mathbf{HA}_k^T + R\end{aligned}\end{align} \]

The Kalman gain is then calculated via:

\[K_{k} = P_{xz} P_{zz}^{-1}\]

and the posterior state mean and covariance are,

\[\mathbf{x}_{k|k} = \mathbf{x}_{k|k-1} + K_k (\mathbf{z}_k - H_k \mathbf{x}_{k|k-1})\]

This is returned as an EnsembleStateUpdate object.
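
A NumPy sketch of how these sample-covariance quantities might be formed from an ensemble (illustrative dimensions; a linear h is used for brevity):

    import numpy as np

    rng = np.random.default_rng(0)

    # Ensemble of M two-dimensional state vectors stored as columns; here h(x) = H x
    M = 100
    H = np.array([[1.0, 0.0]])
    R = np.array([[0.25]])
    X = rng.normal(size=(2, M))                        # predicted ensemble
    A = X - X.mean(axis=1, keepdims=True)              # state anomalies A_k
    HA = H @ X - (H @ X).mean(axis=1, keepdims=True)   # measurement anomalies HA_k

    P_xz = A @ HA.T / (M - 1)         # cross covariance
    P_zz = HA @ HA.T / (M - 1) + R    # innovation covariance
    K = P_xz @ np.linalg.inv(P_zz)    # Kalman gain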

References

1. J. Hiles, S. M. O’Rourke, R. Niu and E. P. Blasch, “Implementation of Ensemble Kalman Filters in Stone-Soup,” International Conference on Information Fusion, (2021)

2. Mandel, Jan. “A brief tutorial on the ensemble Kalman filter.” arXiv preprint arXiv:0901.3725 (2009).

Parameters:
  • measurement_model (MeasurementModel, optional) – A measurement model. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown.

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

measurement_model: MeasurementModel

A measurement model. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown.

predict_measurement(predicted_state, measurement_model=None, measurement_noise=True, **kwargs)[source]

Predict the measurement implied by the predicted state mean

Parameters:
  • predicted_state (State) – The predicted state \(\mathbf{x}_{k|k-1}\)

  • measurement_model (MeasurementModel) – The measurement model. If omitted, the model in the updater object is used

  • measurement_noise (bool) – Whether to include measurement noise \(R\) when generating ensemble. Default True

Returns:

update(hypothesis, **kwargs)[source]

The Ensemble Kalman update method. The Ensemble Kalman filter simply uses the Kalman Update scheme to evolve a set or Ensemble of state vectors as a group. This ensemble of vectors contains all the information on the system state.

Parameters:

hypothesis (SingleHypothesis) – the prediction-measurement association hypothesis. This hypothesis may carry a predicted measurement, or a predicted state. In the latter case a predicted measurement will be calculated.

Returns:

The posterior state which contains an ensemble of state vectors and a timestamp.

Return type:

EnsembleStateUpdate

class stonesoup.updater.ensemble.EnsembleSqrtUpdater(measurement_model: MeasurementModel = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False)[source]

Bases: EnsembleUpdater

The Ensemble Square Root filter propagates the mean and square root covariance through time, and samples a new ensemble. This has the advantage of not requiring perturbation of the measurement which reduces sampling error. The posterior mean is calculated via:

\[\mathbf{x}_{k|k} = \mathbf{x}_{k|k-1} + K_k (\mathbf{z}_k - H_k \mathbf{x}_{k|k-1})\]

The Kalman gain is calculated via:

\[K_{k} = P_{xz} P_{zz}^{-1}\]

The cross covariance and measurement covariance, respectively, are approximated via the sample square root covariances:

\[ \begin{align}\begin{aligned}P_{xz} \approx \tilde{P}_k (\tilde{Z}_k)^T\\P_{zz} \approx \tilde{Z}_k (\tilde{Z}_k)^T + R_k\end{aligned}\end{align} \]

and the posterior covariance is propagated through time via:

\[\mathbf{P}_{k|k} = \tilde{P}^- B (\tilde{P}^- B)^T\]

Where \(\tilde{P}^-\) represents the prediction square root covariance and B is the matrix square root of:

\[B = \mathbf{I} - (\tilde{Z}_k)^T [P_{zz}]^{-1} \tilde{Z}_k\]

The posterior mean and covariance are used to sample a new ensemble. The resulting state is returned via an EnsembleStateUpdate object.

References

1. J. Hiles, S. M. O’Rourke, R. Niu and E. P. Blasch, “Implementation of Ensemble Kalman Filters in Stone-Soup”, International Conference on Information Fusion, (2021)

2. Livings, Dance, S. L., & Nichols, N. K. “Unbiased ensemble square root filters.” Physica. D, 237(8), 1021–1028. (2008)

Parameters:
  • measurement_model (MeasurementModel, optional) – A measurement model. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown.

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

update(hypothesis, **kwargs)[source]

The Ensemble Square Root Kalman update method. The Ensemble Square Root filter propagates the mean and square root covariance through time, and samples a new ensemble. This has the advantage of not perturbing the measurement with statistical noise, and thus is less prone to sampling error for small ensembles.

Parameters:

hypothesis (SingleHypothesis) – the prediction-measurement association hypothesis. This hypothesis may carry a predicted measurement, or a predicted state. In the latter case a predicted measurement will be calculated.

Returns:

The posterior state which contains an ensemble of state vectors and a timestamp.

Return type:

EnsembleStateUpdate

class stonesoup.updater.ensemble.LinearisedEnsembleUpdater(measurement_model: MeasurementModel = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False, inflation_factor: float = 1.0)[source]

Bases: EnsembleUpdater

Implementation of ‘The Linearized EnKF Update’ algorithm from “Ensemble Kalman Filter with Bayesian Recursive Update” by Kristen Michaelson, Andrey A. Popov and Renato Zanetti. Similar to the EnsembleUpdater, but calculates a separate Kalman gain for each ensemble member; this alternative gain calculation involves linearisation of the measurement. An additional step is introduced to perform inflation.

References

1. K. Michaelson, A. A. Popov and R. Zanetti, “Ensemble Kalman Filter with Bayesian Recursive Update”

Parameters:
  • measurement_model (MeasurementModel, optional) – A measurement model. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown.

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

  • inflation_factor (float, optional) – Parameter to control inflation

inflation_factor: float

Parameter to control inflation

update(hypothesis, **kwargs)[source]

The LinearisedEnsembleUpdater update method. This method includes an additional step over the EnsembleUpdater update step to perform inflation.

Parameters:

hypothesis (SingleHypothesis) – the prediction-measurement association hypothesis. This hypothesis may carry a predicted measurement, or a predicted state. In the latter case a predicted measurement will be calculated.

Returns:

The posterior state which contains an ensemble of state vectors and a timestamp.

Return type:

EnsembleStateUpdate

Recursive

class stonesoup.updater.recursive.BayesianRecursiveUpdater(measurement_model: MeasurementModel = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False, number_steps: int = 1)[source]

Bases: ExtendedKalmanUpdater

Recursive extension of the ExtendedKalmanUpdater.

Parameters:
  • measurement_model (MeasurementModel, optional) – A measurement model. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown. Must be linear or capable of implementing the jacobian().

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

  • number_steps (int, optional) – Number of recursive steps

number_steps: int

Number of recursive steps

update(hypothesis, **kwargs)[source]

The Kalman update method. Given a hypothesised association between a predicted state or predicted measurement and an actual measurement, calculate the posterior state.

Parameters:
  • hypothesis (SingleHypothesis) – the prediction-measurement association hypothesis. This hypothesis may carry a predicted measurement, or a predicted state. In the latter case a predicted measurement will be calculated.

  • **kwargs (various) – These are passed to predict_measurement()

Returns:

The posterior state Gaussian with mean \(\mathbf{x}_{k|k}\) and covariance \(P_{k|k}\)

Return type:

GaussianStateUpdate

class stonesoup.updater.recursive.RecursiveEnsembleUpdater(number_steps: int, measurement_model: MeasurementModel = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False)[source]

Bases: ExtendedKalmanUpdater, EnsembleUpdater

Recursive version of EnsembleUpdater. Uses the calculated posterior ensemble as the prior ensemble to recursively update number_steps times.

Parameters:
  • number_steps (int) – Number of recursive steps

  • measurement_model (MeasurementModel, optional) – A measurement model. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown. Must be linear or capable of implementing the jacobian().

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

number_steps: int

Number of recursive steps

update(hypothesis, **kwargs)[source]

The RecursiveEnsembleUpdater update method. The Ensemble Kalman filter simply uses the Kalman update scheme to evolve a set, or ensemble, of state vectors as a group. This ensemble of vectors contains all the information on the system state. The calculated posterior ensemble is used as the prior ensemble to recursively update number_steps times.

Parameters:

hypothesis (SingleHypothesis) – the prediction-measurement association hypothesis. This hypothesis may carry a predicted measurement, or a predicted state. In the latter case a predicted measurement will be calculated.

Returns:

The posterior state which contains an ensemble of state vectors and a timestamp.

Return type:

EnsembleStateUpdate

class stonesoup.updater.recursive.RecursiveLinearisedEnsembleUpdater(number_steps: int, measurement_model: MeasurementModel = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False, inflation_factor: float = 1.0)[source]

Bases: ExtendedKalmanUpdater, EnsembleUpdater

Implementation of ‘The Bayesian Recursive Update Linearized EnKF’ algorithm from “Ensemble Kalman Filter with Bayesian Recursive Update” by Kristen Michaelson, Andrey A. Popov and Renato Zanetti. Recursive version of the LinearisedEnsembleUpdater that recursively iterates over the update step for a given number of steps.

References

1. K. Michaelson, A. A. Popov and R. Zanetti, “Ensemble Kalman Filter with Bayesian Recursive Update”

Parameters:
  • number_steps (int) – Number of recursive steps

  • measurement_model (MeasurementModel, optional) – A measurement model. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown. Must be linear or capable of implementing the jacobian().

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

  • inflation_factor (float, optional) – Parameter to control inflation

number_steps: int

Number of recursive steps

inflation_factor: float

Parameter to control inflation

update(hypothesis, **kwargs)[source]

The RecursiveLinearisedEnsembleUpdater update method. Uses an alternative form of Kalman gain to calculate a value for each member of the ensemble. Iterates over the update step recursively to improve upon error caused by linearisation.

Parameters:

hypothesis (SingleHypothesis) – the prediction-measurement association hypothesis. This hypothesis may carry a predicted measurement, or a predicted state. In the latter case a predicted measurement will be calculated.

Returns:

The posterior state which contains an ensemble of state vectors and a timestamp.

Return type:

EnsembleStateUpdate

class stonesoup.updater.recursive.VariableStepBayesianRecursiveUpdater(measurement_model: MeasurementModel = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False, number_steps: int = 1)[source]

Bases: BayesianRecursiveUpdater

Extension of the BayesianRecursiveUpdater. The BayesianRecursiveUpdater uses equal measurement noise for each recursive step. The VariableStepBayesianRecursiveUpdater over-inflates the measurement noise in the earlier steps, requiring the use of a smaller number of steps.

References

1. K. Michaelson, A. A. Popov and R. Zanetti, “Bayesian Recursive Update for Ensemble Kalman Filters”

Parameters:
  • measurement_model (MeasurementModel, optional) – A measurement model. This need not be defined if a measurement model is provided in the measurement. If no model is specified on construction, or in the measurement, an error will be thrown. Must be linear or capable of implementing the jacobian().

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

  • number_steps (int, optional) – Number of recursive steps

number_steps: int

Number of recursive steps

class stonesoup.updater.recursive.ErrorControllerBayesianRecursiveUpdater(f: float, fmin: float, fmax: float, measurement_model: MeasurementModel = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False, number_steps: int = 1, atol: float = 0.001, rtol: float = 0.001)[source]

Bases: BayesianRecursiveUpdater

Extension of the variable-step Bayesian recursive update method which introduces error-controlling parameters. This method allows the step size to adjust according to the error value from the previous step. The implementation is based on Algorithm 3 of [1]. Default values for the parameters atol, rtol, f, fmin and fmax are copied from the values stated in the examples in [1].

References

1. K. Michaelson, A. A. Popov and R. Zanetti, “Bayesian Recursive Update for Ensemble Kalman Filters”

Parameters:
  • f (float) – Nominal value for step size scale factor

  • fmin (float) – Minimum value for step size scale factor

  • fmax (float) – Maximum value for step size scale factor

  • measurement_model (MeasurementModel, optional) – A measurement model. This need not be defined if a measurement model is provided in the measurement. If no model specified on construction, or in the measurement, then error will be thrown. Must be linear or otherwise implement jacobian().

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

  • number_steps (int, optional) – Number of recursive steps

  • atol (float, optional) – Absolute tolerance value

  • rtol (float, optional) – Relative tolerance value

atol: float

Absolute tolerance value

rtol: float

Relative tolerance value

f: float

Nominal value for step size scale factor

fmin: float

Minimum value for step size scale factor

fmax: float

Maximum value for step size scale factor

update(hypothesis, **kwargs)[source]

Update method of the ErrorControllerBayesianRecursiveUpdater. This method allows the step size to adjust according to the error value from the previous step.

Parameters:

hypothesis (SingleHypothesis) – the prediction-measurement association hypothesis. This hypothesis may carry a predicted measurement, or a predicted state. In the latter case a predicted measurement will be calculated.

Returns:

The posterior state Gaussian with mean \(\mathbf{x}_{k|k}\) and covariance \(P_{k|k}\)

Return type:

GaussianStateUpdate
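A minimal construction sketch follows; the linear measurement model's module path and every numerical value are illustrative assumptions rather than recommended settings:

import numpy as np
from stonesoup.models.measurement.linear import LinearGaussian
from stonesoup.updater.recursive import ErrorControllerBayesianRecursiveUpdater

# Illustrative position-only measurement of a state ordered [x, vx, y, vy]
measurement_model = LinearGaussian(
    ndim_state=4, mapping=(0, 2), noise_covar=np.diag([25.0, 25.0]))

# Error-controlled recursion: the step size scale factor f is kept within [fmin, fmax]
updater = ErrorControllerBayesianRecursiveUpdater(
    f=1.0, fmin=0.1, fmax=4.0,
    measurement_model=measurement_model,
    number_steps=10,
    atol=1e-3, rtol=1e-3)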

Iterated

class stonesoup.updater.iterated.DynamicallyIteratedUpdater(predictor: Predictor, updater: Updater, smoother: Smoother, tolerance: float = 1e-06, measure: Measure = KLDivergence(mapping=None, mapping2=None), max_iterations: int = 1000)[source]

Bases: Updater

Wrapper around a Predictor, Updater and Smoother. This updater takes a Prediction and updates as usual by calling its updater property. The updated state is then used to smooth the prior state, completing the first iteration. The second iteration begins by predicting using the smoothed prior. Iteration continues until convergence, or until a maximum number of iterations is reached.

Implementation of algorithm 2: Dynamically iterated filter, from “Iterated Filters for Nonlinear Transition Models”

References

1. Anton Kullberg, Isaac Skog, Gustaf Hendeby, “Iterated Filters for Nonlinear Transition Models”

Parameters:
  • predictor (Predictor) – Predictor to use for iterating over the predict step. Probably should be the same predictor used for the initial predict step

  • updater (Updater) – Updater to use for iterating over update step

  • smoother (Smoother) – Smoother used to smooth the prior

  • tolerance (float, optional) – The value of the difference in the measure used as a stopping criterion.

  • measure (Measure, optional) – The measure to use to test the iteration stopping criterion. Defaults to the Kullback–Leibler divergence between the current and prior posterior state estimates.

  • max_iterations (int, optional) – Number of iterations before the loop is exited and a non-convergence warning is issued

measurement_model: MeasurementModel = None
predictor: Predictor

Predictor to use for iterating over the predict step. Probably should be the same predictor used for the initial predict step

updater: Updater

Updater to use for iterating over update step

smoother: Smoother

Smoother used to smooth the prior

tolerance: float

The value of the difference in the measure used as a stopping criterion.

measure: Measure

The measure to use to test the iteration stopping criterion. Defaults to the Kullback–Leibler divergence between the current and prior posterior state estimates.

max_iterations: int

Number of iterations before the loop is exited and a non-convergence warning is issued

predict_measurement(*args, **kwargs)[source]

Get measurement prediction from state prediction

Parameters:
  • predicted_state (StatePrediction) – The state prediction

  • measurement_model (MeasurementModel, optional) – The measurement model used to generate the measurement prediction. Should be used in cases where the measurement model is dependent on the received measurement. The default is None, in which case the updater will use the measurement model specified on initialisation

  • measurement_noise (bool) – Whether to include measurement noise in the predicted measurement. Default is True

Returns:

The predicted measurement

Return type:

MeasurementPrediction

update(hypothesis, **kwargs)[source]

Update state using prediction and measurement.

Parameters:

hypothesis (Hypothesis) – Hypothesis with predicted state and associated detection used for updating.

Returns:

The state posterior

Return type:

State
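As an illustrative sketch of how the wrapper is assembled (the transition and measurement models, their module paths and the EKF predictor/updater/smoother choices are assumptions for the example, not requirements of this class):

import numpy as np
from stonesoup.models.transition.linear import (
    CombinedLinearGaussianTransitionModel, ConstantVelocity)
from stonesoup.models.measurement.nonlinear import CartesianToBearingRange
from stonesoup.predictor.kalman import ExtendedKalmanPredictor
from stonesoup.updater.kalman import ExtendedKalmanUpdater
from stonesoup.smoother.kalman import ExtendedKalmanSmoother
from stonesoup.updater.iterated import DynamicallyIteratedUpdater

transition_model = CombinedLinearGaussianTransitionModel(
    [ConstantVelocity(0.05), ConstantVelocity(0.05)])
measurement_model = CartesianToBearingRange(
    ndim_state=4, mapping=(0, 2),
    noise_covar=np.diag([np.radians(0.2)**2, 1.0]))

# Wrap a matching predict/update/smooth triple; iteration stops when the change
# in the chosen measure (KL divergence by default) falls below the tolerance
iterated_updater = DynamicallyIteratedUpdater(
    predictor=ExtendedKalmanPredictor(transition_model),
    updater=ExtendedKalmanUpdater(measurement_model),
    smoother=ExtendedKalmanSmoother(transition_model=transition_model),
    tolerance=1e-6,
    max_iterations=100)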

class stonesoup.updater.iterated.DynamicallyIteratedEKFUpdater(measurement_model: MeasurementModel, transition_model: TransitionModel, tolerance: float = 1e-06, measure: Measure = KLDivergence(mapping=None, mapping2=None), max_iterations: int = 1000)[source]

Bases: DynamicallyIteratedUpdater

Parameters:
  • measurement_model (MeasurementModel) – measurement model

  • transition_model (TransitionModel) – The transition model to be used.

  • tolerance (float, optional) – The value of the difference in the measure used as a stopping criterion.

  • measure (Measure, optional) – The measure to use to test the iteration stopping criterion. Defaults to the Kullback–Leibler divergence between the current and prior posterior state estimates.

  • max_iterations (int, optional) – Number of iterations before the loop is exited and a non-convergence warning is issued

measurement_model: MeasurementModel

measurement model

transition_model: TransitionModel

The transition model to be used.

updater: Updater = None
predictor: Predictor = None
smoother: Smoother = None
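The EKF variant takes just the two models and, given its defaulted predictor, updater and smoother attributes, presumably assembles the EKF machinery internally. A brief sketch, with models and values as illustrative assumptions:

import numpy as np
from stonesoup.models.transition.linear import (
    CombinedLinearGaussianTransitionModel, ConstantVelocity)
from stonesoup.models.measurement.nonlinear import CartesianToBearingRange
from stonesoup.updater.iterated import DynamicallyIteratedEKFUpdater

transition_model = CombinedLinearGaussianTransitionModel(
    [ConstantVelocity(0.05), ConstantVelocity(0.05)])
measurement_model = CartesianToBearingRange(
    ndim_state=4, mapping=(0, 2),
    noise_covar=np.diag([np.radians(0.2)**2, 1.0]))

# Only the two models are supplied; no explicit predictor, updater or smoother
updater = DynamicallyIteratedEKFUpdater(
    measurement_model=measurement_model,
    transition_model=transition_model,
    tolerance=1e-6,
    max_iterations=100)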

Information

class stonesoup.updater.information.InformationKalmanUpdater(measurement_model: LinearGaussian = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False)[source]

Bases: KalmanUpdater

A class which implements the update step of the information form of the Kalman filter. This is conceptually very simple. The update proceeds as:

\[ \begin{align}\begin{aligned}Y_{k|k} = Y_{k|k-1} + H^{T}_k R^{-1}_k H_k\\\mathbf{y}_{k|k} = \mathbf{y}_{k|k-1} + H^{T}_k R^{-1}_k \mathbf{z}_{k}\end{aligned}\end{align} \]

where \(\mathbf{y}_{k|k-1}\) is the predicted information state and \(Y_{k|k-1}\) the predicted information matrix which form the InformationStatePrediction object. The measurement matrix \(H_k\) and measurement covariance \(R_k\) are those in the Kalman filter (see tutorial 1). An InformationStateUpdate object is returned.

Note

Analogously with the InformationKalmanPredictor, the measurement model is queried for the existence of an inverse_covar() property. If absent, the covar() is inverted.

Parameters:
  • measurement_model (LinearGaussian, optional) – A linear Gaussian measurement model. This need not be defined if a measurement model is provided in the measurement. If no model specified on construction, or in the measurement, then error will be thrown.

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

measurement_model: LinearGaussian

A linear Gaussian measurement model. This need not be defined if a measurement model is provided in the measurement. If no model specified on construction, or in the measurement, then error will be thrown.

predict_measurement(predicted_state, measurement_model=None, measurement_noise=True, **kwargs)[source]

There’s no direct analogue of a predicted measurement in the information form. This method is therefore provided to return the predicted measurement as would the standard Kalman updater. This is mainly for compatibility as it’s not anticipated that it would be used in the usual operation of the information filter.

Parameters:
  • predicted_state (State) – The predicted state in information form \(\mathbf{y}_{k|k-1}\)

  • measurement_model (MeasurementModel) – The measurement model. If omitted, the model in the updater object is used

  • measurement_noise (bool) – Whether to include measurement noise \(R\) with innovation covariance. Default True

  • **kwargs (various) – These are passed to matrix()

Returns:

The measurement prediction, \(H \mathbf{x}_{k|k-1}\)

Return type:

GaussianMeasurementPrediction

update(hypothesis, **kwargs)[source]

The Information filter update (corrector) method. Given a hypothesised association between a predicted information state and an actual measurement, calculate the posterior information state.

Parameters:
  • hypothesis (SingleHypothesis) – the prediction-measurement association hypothesis. This hypothesis carries a predicted information state.

  • **kwargs (various) – These are passed to predict_measurement()

Returns:

The posterior information state with information state \(\mathbf{y}_{k|k}\) and precision \(Y_{k|k}\)

Return type:

InformationStateUpdate
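A short construction sketch (the LinearGaussian model path and values are illustrative assumptions):

import numpy as np
from stonesoup.models.measurement.linear import LinearGaussian
from stonesoup.updater.information import InformationKalmanUpdater

# Position-only linear Gaussian sensor
measurement_model = LinearGaussian(
    ndim_state=4, mapping=(0, 2), noise_covar=np.diag([1.0, 1.0]))

updater = InformationKalmanUpdater(measurement_model=measurement_model)
# update() expects the hypothesis to carry an InformationStatePrediction,
# typically produced by the companion InformationKalmanPredictor.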

Accumulated State Densities

class stonesoup.updater.asd.ASDKalmanUpdater(measurement_model: LinearGaussian = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False)[source]

Bases: KalmanUpdater

Accumulated State Densities Kalman Updater

A linear updater for accumulated state densities, for processing out-of-sequence measurements. This requires that the state be represented as an ASDGaussianState multi-state.

References

  1. W. Koch and F. Govaers, “On Accumulated State Densities with Applications to Out-of-Sequence Measurement Processing,” in IEEE Transactions on Aerospace and Electronic Systems, vol. 47, no. 4, pp. 2766-2778, October 2011, doi: 10.1109/TAES.2011.6034663.

Parameters:
  • measurement_model (LinearGaussian, optional) – A linear Gaussian measurement model. This need not be defined if a measurement model is provided in the measurement. If no model specified on construction, or in the measurement, then error will be thrown.

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

predict_measurement(predicted_state, measurement_model=None, measurement_noise=True, **kwargs)[source]

Predict the measurement implied by the predicted state mean

Parameters:
  • predicted_state (ASDState) – The predicted state \(\mathbf{x}_{k|k-1}\)

  • measurement_model (MeasurementModel) – The measurement model. If omitted, the model in the updater object is used

  • measurement_noise (bool) – Whether to include measurement noise \(R\) with innovation covariance

  • **kwargs (various) – These are passed to function() and matrix()

Returns:

The measurement prediction, \(\mathbf{z}_{k|k-1}\)

Return type:

ASDGaussianMeasurementPrediction

update(hypothesis, force_symmetric_covariance=False, **kwargs)[source]

The Kalman update method. Given a hypothesised association between a predicted state and an actual measurement, calculate the posterior state. The measurement prediction is calculated within this method, overwriting any measurement prediction carried by the hypothesis.

Parameters:
  • hypothesis (SingleHypothesis) – the prediction-measurement association hypothesis. This hypothesis may carry a predicted measurement, or a predicted state. In the latter case a predicted measurement will be calculated.

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False

  • **kwargs (various) – These are passed to predict_measurement()

Returns:

The posterior state Gaussian with mean \(\mathbf{x}_{k|k}\) and covariance \(P_{k|k}\)

Return type:

ASDGaussianStateUpdate
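Construction mirrors the standard KalmanUpdater; a sketch (the model path and values are illustrative assumptions):

import numpy as np
from stonesoup.models.measurement.linear import LinearGaussian
from stonesoup.updater.asd import ASDKalmanUpdater

measurement_model = LinearGaussian(
    ndim_state=4, mapping=(0, 2), noise_covar=np.diag([4.0, 4.0]))

updater = ASDKalmanUpdater(measurement_model=measurement_model)
# The hypothesis prediction must be an accumulated state density
# (ASDGaussianState multi-state), e.g. from an ASD-aware predictor.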

Point Process

class stonesoup.updater.pointprocess.PointProcessUpdater(updater: KalmanUpdater, clutter_spatial_density: float = 1e-26, normalisation: bool = True, prob_detection: Probability = 1, prob_survival: Probability = 1)[source]

Bases: Base

Base updater class for the implementation of any Gaussian Mixture (GM) point-process-derived multi-target filter, such as the Probability Hypothesis Density (PHD), Cardinalised Probability Hypothesis Density (CPHD) or Linear Complexity with Cumulants (LCC) filters

Parameters:
  • updater (KalmanUpdater) – Underlying updater used to perform the single target Kalman Update.

  • clutter_spatial_density (float, optional) – Spatial density of the clutter process uniformly distributed across the state space.

  • normalisation (bool, optional) – Flag for normalisation

  • prob_detection (Probability, optional) – Probability of a target being detected at the current timestep

  • prob_survival (Probability, optional) – Probability of a target surviving until the next timestep

updater: KalmanUpdater

Underlying updater used to perform the single target Kalman Update.

clutter_spatial_density: float

Spatial density of the clutter process uniformly distributed across the state space.

normalisation: bool

Flag for normalisation

prob_detection: Probability

Probability of a target being detected at the current timestep

prob_survival: Probability

Probability of a target surviving until the next timestep

update(hypotheses)[source]

Updates the current components in a GaussianMixture by applying the underlying KalmanUpdater updater to each component with the supplied measurements.

Parameters:

hypotheses (list of MultipleHypothesis) – Measurements obtained at time \(k+1\)

Returns:

updated_components – GaussianMixtureUpdate with components updated at time \(k+1\)

Return type:

GaussianMixtureUpdate

class stonesoup.updater.pointprocess.PHDUpdater(updater: KalmanUpdater, clutter_spatial_density: float = 1e-26, normalisation: bool = True, prob_detection: Probability = 1, prob_survival: Probability = 1)[source]

Bases: PointProcessUpdater

An implementation of the Gaussian Mixture Probability Hypothesis Density (GM-PHD) multi-target filter

References

[1] B.-N. Vo and W.-K. Ma, “The Gaussian Mixture Probability Hypothesis Density Filter,” IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4091–4104, 2006. https://ieeexplore.ieee.org/document/1710358.

[2] D. E. Clark, K. Panta and B. Vo, “The GM-PHD Filter Multiple Target Tracker,” 2006 9th International Conference on Information Fusion, 2006, pp. 1-8, doi: 10.1109/ICIF.2006.301809. https://ieeexplore.ieee.org/document/4086095.

Parameters:
  • updater (KalmanUpdater) – Underlying updater used to perform the single target Kalman Update.

  • clutter_spatial_density (float, optional) – Spatial density of the clutter process uniformly distributed across the state space.

  • normalisation (bool, optional) – Flag for normalisation

  • prob_detection (Probability, optional) – Probability of a target being detected at the current timestep

  • prob_survival (Probability, optional) – Probability of a target surviving until the next timestep
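For orientation, a sketch of wiring a PHD updater around a single-target Kalman updater (the measurement model path and all numerical values are illustrative assumptions):

import numpy as np
from stonesoup.models.measurement.linear import LinearGaussian
from stonesoup.updater.kalman import KalmanUpdater
from stonesoup.updater.pointprocess import PHDUpdater

measurement_model = LinearGaussian(
    ndim_state=4, mapping=(0, 2), noise_covar=np.diag([1.0, 1.0]))

# The point process updater delegates each single-target update to this updater
phd_updater = PHDUpdater(
    updater=KalmanUpdater(measurement_model),
    clutter_spatial_density=1e-5,
    prob_detection=0.9,
    prob_survival=0.99)
# phd_updater.update(hypotheses) then takes the list of MultipleHypothesis
# objects for the current scan and returns the updated mixture components.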

class stonesoup.updater.pointprocess.LCCUpdater(updater: KalmanUpdater, clutter_spatial_density: float = 1e-26, normalisation: bool = True, prob_detection: Probability = 1, prob_survival: Probability = 1, mean_number_of_false_alarms: float = 1, variance_of_false_alarms: float = 1)[source]

Bases: PointProcessUpdater

An implementation of the Gaussian Mixture Linear Complexity with Cumulants (GM-LCC) multi-target filter

References

[1] D. E. Clark and F. De Melo. “A Linear-Complexity Second-Order Multi-Object Filter via Factorial Cumulants”. In: 2018 21st International Conference on Information Fusion (FUSION). 2018. DOI: 10.23919/ICIF.2018.8455331. https://ieeexplore.ieee.org/document/8455331.

Parameters:
  • updater (KalmanUpdater) – Underlying updater used to perform the single target Kalman Update.

  • clutter_spatial_density (float, optional) – Spatial density of the clutter process uniformly distributed across the state space.

  • normalisation (bool, optional) – Flag for normalisation

  • prob_detection (Probability, optional) – Probability of a target being detected at the current timestep

  • prob_survival (Probability, optional) – Probability of a target surviving until the next timestep

  • mean_number_of_false_alarms (float, optional) – Mean number of false alarms (clutter) expected per timestep

  • variance_of_false_alarms (float, optional) – Variance on the number of false alarms (clutter) expected per timestep

mean_number_of_false_alarms: float

Mean number of false alarms (clutter) expected per timestep

variance_of_false_alarms: float

Variance on the number of false alarms (clutter) expected per timestep

AlphaBeta

class stonesoup.updater.alphabeta.AlphaBetaUpdater(measurement_model: MeasurementModel, alpha: float, beta: float, vmap: ndarray = None)[source]

Bases: Updater

Conceptually, the \(\alpha-\beta\) filter is similar to its Kalman cousins in that it operates recursively over predict and update steps. It assumes that a state vector is decomposable into quantities and the rates of change of those quantities. We refer to these as position \(p\) and velocity \(v\) respectively, though they aren’t confined to locations in space. The interval from \(t_{k-1} \rightarrow t_k\) is \(\Delta T\), and at \(k\) we gain a (noisy) measurement of the position, \(p^z_k\).

The recursion proceeds as:

  • Predict

\[ \begin{align}\begin{aligned}p_{k|k-1} &= p_{k-1} + \Delta T v_{k-1}\\v_{k|k-1} &= v_{k-1}\end{aligned}\end{align} \]
  • Update

\[ \begin{align}\begin{aligned}s_k &= p^z_k - p_{k|k-1} \: (\mathrm{innovation})\\p_k &= p_{k|k-1} + \alpha s_k\\v_k &= v_{k|k-1} + \frac{\beta}{\Delta T} s_k\end{aligned}\end{align} \]

The \(\alpha\) and \(\beta\) parameters which give the filter its name are small, \(0 < \alpha < 1\) and \(0 < \beta \leq 2\). Colloquially, the larger the values of the parameters, the more influence the measurements have over the transition model; \(\beta\) is usually much smaller than \(\alpha\).

As the prediction is just the application of a constant velocity model, there is no \(\alpha-\beta\) predictor provided in Stone Soup. It is assumed that the predictions passed to the hypothesis have been generated by a constant velocity model. Any application of a control model is also assumed to have taken place during the prediction stage.

This class assumes the velocity is in units of length per second. If different units are required, scale the prior appropriately.

The measurement model used should be linear, and should provide a ‘mapping’ to \(p\) via the mapping tuple and a binary measurement matrix which returns \(p\). This isn’t checked.

Parameters:
  • measurement_model (MeasurementModel) – measurement model

  • alpha (float) – The alpha parameter. Controls the weight given to the measurements over the transition model.

  • beta (float) – The beta parameter. Controls the amount of variation allowed in the velocity component.

  • vmap (numpy.ndarray, optional) – Binary map of the velocity elements in the state vector. If left default, the class will assume that the velocity elements interleave the position elements in the state vector.

alpha: float

The alpha parameter. Controls the weight given to the measurements over the transition model.

beta: float

The beta parameter. Controls the amount of variation allowed in the velocity component.

vmap: ndarray

Binary map of the velocity elements in the state vector. If left default, the class will assume that the velocity elements interleave the position elements in the state vector.

predict_measurement(prediction, measurement_model=None, measurement_noise=False, **kwargs)[source]

Return the predicted measurement

Parameters:
  • prediction (StatePrediction) – The state prediction

  • measurement_model (MeasurementModel) – The measurement model. If omitted, the model in the updater object is used

  • measurement_noise (bool) – Whether to include measurement noise; in this case only False is valid. Default is False

Returns:

The predicted measurement

Return type:

StateVector

update(hypothesis, time_interval, **kwargs)[source]

Calculate the inferred state following update

Parameters:
  • hypothesis (Hypothesis) – A hypothesis associates a measurement with a prediction

  • time_interval (timedelta) – The time interval over which the prediction has been made.

Returns:

The updated state

Return type:

StateUpdate
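A minimal end-to-end sketch under the default interleaved [x, vx, y, vy] ordering; the helper types, module paths and all numbers are illustrative assumptions:

from datetime import datetime, timedelta
import numpy as np
from stonesoup.models.measurement.linear import LinearGaussian
from stonesoup.types.prediction import StatePrediction
from stonesoup.types.detection import Detection
from stonesoup.types.hypothesis import SingleHypothesis
from stonesoup.updater.alphabeta import AlphaBetaUpdater

now = datetime.now()
dt = timedelta(seconds=1)

# Binary measurement matrix picking out the positions x and y
measurement_model = LinearGaussian(
    ndim_state=4, mapping=(0, 2), noise_covar=np.diag([1.0, 1.0]))

prediction = StatePrediction(
    np.array([[10.], [1.], [5.], [0.5]]), timestamp=now + dt)
detection = Detection(np.array([[10.8], [5.2]]), timestamp=now + dt,
                      measurement_model=measurement_model)

updater = AlphaBetaUpdater(measurement_model, alpha=0.9, beta=0.3)
posterior = updater.update(SingleHypothesis(prediction, detection),
                           time_interval=dt)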

Sliding Innovation Filter

class stonesoup.updater.slidinginnovation.SlidingInnovationUpdater(layer_width: ndarray, measurement_model: LinearGaussian = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False)[source]

Bases: KalmanUpdater

Sliding Innovation Filter Updater

The Sliding Innovation Filter (SIF) is a sub-optimal filter (in comparison to the Kalman filter) which uses a switching gain to provide robustness to estimation problems that may be ill-conditioned or contain modelling uncertainties or disturbances.

The main difference from Kalman filter is the calculation of the gain:

\[K_k = H_k^+ \overline{sat}(|\mathbf{z}_{k|k-1}|/\mathbf{\delta})\]

where \(\mathbf{\delta}\) is the sliding boundary layer width, \(H_k^+\) denotes the pseudo-inverse of the measurement matrix, and \(\overline{sat}\) the element-wise saturation function.

References

  1. S. A. Gadsden and M. Al-Shabi, “The Sliding Innovation Filter,” in IEEE Access, vol. 8, pp. 96129-96138, 2020, doi: 10.1109/ACCESS.2020.2995345.

Parameters:
  • layer_width (numpy.ndarray) – Sliding boundary layer width \(\mathbf{\delta}\). A tunable parameter in measurement space. An example initial value provided in the original paper is \(10 \times \text{diag}(R)\)

  • measurement_model (LinearGaussian, optional) – A linear Gaussian measurement model. This need not be defined if a measurement model is provided in the measurement. If no model specified on construction, or in the measurement, then error will be thrown.

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

layer_width: ndarray

Sliding boundary layer width \(\mathbf{\delta}\). A tunable parameter in measurement space. An example initial value provided in the original paper is \(10 \times \text{diag}(R)\)
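A construction sketch using the layer width suggested above (the model path and noise values are illustrative assumptions):

import numpy as np
from stonesoup.models.measurement.linear import LinearGaussian
from stonesoup.updater.slidinginnovation import SlidingInnovationUpdater

R = np.diag([4.0, 4.0])
measurement_model = LinearGaussian(ndim_state=4, mapping=(0, 2), noise_covar=R)

# Boundary layer width set to 10 * diag(R), per the example value quoted above
updater = SlidingInnovationUpdater(
    layer_width=10 * np.diag(R),
    measurement_model=measurement_model)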

class stonesoup.updater.slidinginnovation.ExtendedSlidingInnovationUpdater(layer_width: ndarray, measurement_model: LinearGaussian = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False)[source]

Bases: SlidingInnovationUpdater, ExtendedKalmanUpdater

Extended Sliding Innovation Filter Updater

This is the Extended version of the SlidingInnovationUpdater for non-linear measurement models.

References

  1. S. A. Gadsden and M. Al-Shabi, “The Sliding Innovation Filter,” in IEEE Access, vol. 8, pp. 96129-96138, 2020, doi: 10.1109/ACCESS.2020.2995345.

Parameters:
  • layer_width (numpy.ndarray) – Sliding boundary layer width \(\mathbf{\delta}\). A tunable parameter in measurement space. An example initial value provided in the original paper is \(10 \times \text{diag}(R)\)

  • measurement_model (LinearGaussian, optional) – A linear Gaussian measurement model. This need not be defined if a measurement model is provided in the measurement. If no model specified on construction, or in the measurement, then error will be thrown.

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

Categorical

class stonesoup.updater.categorical.HMMUpdater(measurement_model: MarkovianMeasurementModel = None)[source]

Bases: Updater

Hidden Markov model updater

Parameters:

measurement_model (MarkovianMeasurementModel, optional) – The measurement model used to predict measurement vectors. If no model is specified on construction, or in a measurement, then an error will be thrown.

measurement_model: MarkovianMeasurementModel

The measurement model used to predict measurement vectors. If no model is specified on construction, or in a measurement, then an error will be thrown.

update(hypothesis, **kwargs)[source]

The update method. Given a hypothesised association between a predicted state or predicted measurement and an actual measurement, calculate the posterior state.

\[\alpha_t^i = E^{ki}(F\alpha_{t-1})^i\]

Measurements are assumed to be discrete categories from a finite set of measurement categories \(Z = \{\zeta^n|n\in \mathbf{N}, n\le N\}\) (for some finite \(N\)). A measurement should be equivalent to a basis vector \(e^k\) (the N-tuple with all components equal to 0, except the k-th (indices starting at 0), which is 1). This indicates that the measured category is \(\zeta^k\).

The equation above can be simplified to:

\[\alpha_t = E^Ty_t \circ F\alpha_{t-1}\]

where \(\circ\) denotes element-wise (Hadamard) product.

Parameters:
  • hypothesis (SingleHypothesis) – the prediction-measurement association hypothesis. This hypothesis may carry a predicted measurement, or a predicted state. In the latter case a predicted measurement will be calculated.

  • **kwargs (various) – These are passed to predict_measurement().

Returns:

The posterior categorical state.

Return type:

CategoricalStateUpdate

predict_measurement(predicted_state, measurement_model=None, measurement_noise=False, **kwargs)[source]

Predict the measurement implied by the predicted state.

Parameters:
  • predicted_state (CategoricalState) – The predicted state.

  • measurement_model (MeasurementModel) – The measurement model. If omitted, the model in the updater object is used.

  • measurement_noise (bool) – Whether to include measurement noise. Default False

  • **kwargs (various) – These are passed to function().

Returns:

The measurement prediction.

Return type:

CategoricalMeasurementPrediction
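A rough sketch of pairing the updater with a Markovian measurement model; the module path, the emission_matrix parameter name and the matrix values are assumptions for illustration:

import numpy as np
from stonesoup.models.measurement.categorical import MarkovianMeasurementModel
from stonesoup.updater.categorical import HMMUpdater

# Hypothetical emission matrix relating three measurement categories to three
# hidden state categories (normalisation and orientation may vary by version)
E = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.2],
              [0.0, 0.1, 0.8]])

measurement_model = MarkovianMeasurementModel(emission_matrix=E)
updater = HMMUpdater(measurement_model)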

Composite

class stonesoup.updater.composite.CompositeUpdater(sub_updaters: Sequence[Updater])[source]

Bases: Updater

Composite updater type

A composition of sub-updaters (Updater).

Parameters:

sub_updaters (Sequence[Updater]) – Sequence of sub-updaters comprising the composite updater. Must not be empty.

sub_updaters: Sequence[Updater]

Sequence of sub-updaters comprising the composite updater. Must not be empty.

property measurement_model

measurement model

predict_measurement(*args, **kwargs)[source]

To obtain measurement predictions, the composite updater will use its sub-updaters’ predict_measurement methods and leave combining these to the CompositeHypothesis type.

update(hypothesis: CompositeHypothesis, **kwargs)[source]

Given a hypothesised association between a composite predicted state or composite predicted measurement and a composite measurement, calculate the composite posterior state.

Parameters:
  • hypothesis (CompositeHypothesis) – the prediction-measurement association hypothesis. This hypothesis may carry a composite predicted measurement, or a composite predicted state. In the latter case a measurement prediction is calculated for each sub-state of the composite hypothesis, which will then create its own composite measurement prediction.

  • **kwargs (various) – These are passed to the predict_measurement() method of each sub-updater

Returns:

The posterior composite state update

Return type:

CompositeUpdate
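A sketch of composing two sub-updaters, one per sub-state of a composite state (the sub-models, their dimensions and the ordering assumption are illustrative):

import numpy as np
from stonesoup.models.measurement.linear import LinearGaussian
from stonesoup.updater.kalman import KalmanUpdater, ExtendedKalmanUpdater
from stonesoup.updater.composite import CompositeUpdater

# One sub-updater per sub-state, assumed to follow the order of the composite state
position_updater = KalmanUpdater(
    LinearGaussian(ndim_state=4, mapping=(0, 2), noise_covar=np.diag([1.0, 1.0])))
altitude_updater = ExtendedKalmanUpdater(
    LinearGaussian(ndim_state=2, mapping=(0,), noise_covar=np.array([[0.5]])))

composite_updater = CompositeUpdater(
    sub_updaters=[position_updater, altitude_updater])
# update() expects a CompositeHypothesis with one sub-hypothesis per sub-updater.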

Chernoff

class stonesoup.updater.chernoff.ChernoffUpdater(measurement_model: MeasurementModel, omega: float = 0.5)[source]

A class which performs state updates using the Chernoff fusion rule. In this context, measurements come in the form of states with a mean and covariance (compared to traditional measurements which contain solely a mean). The measurements are expected to come as GaussianDetection objects.

The Chernoff fusion rule is written as [6]

\[p_{\omega}(x_{k}) = \frac{p_{1}(x_{k})^{\omega}p_{2}(x_{k})^{1-\omega}} {\int p_{1}(x)^{\omega}p_{2}(x)^{1-\omega} \mathrm{d} x}\]

where \(\omega\) is a weighting parameter in the range \((0,1]\), which can be found using an optimization algorithm.

In situations where \(p_1(x)\) and \(p_2(x)\) are multivariate Gaussian distributions, the above formula is equal to the Covariance Intersection Algorithm from Julier et al [7]. Let \((a,A)\) and \((b,B)\) be the means and covariances of the measurement and prediction respectively. The Covariance Intersection Algorithm was reformulated for use in Bayesian state estimation by Clark and Campbell [8], yielding formulas for the updated covariance and mean, \(D\) and \(d\), and the innovation covariance matrix, \(V\), as follows:

\[\begin{split}D &= \left ( \omega A^{-1} + (1-\omega)B^{-1} \right )\\ d &= D \left ( \omega A^{-1}a + (1-\omega)B^{-1}b \right )\\ V &= \frac{A}{1-\omega} + \frac{B}{\omega}\end{split}\]

In filters where gating is required, the gating region can be written using the innovation covariance matrix as:

\[\mathcal{V}(\gamma) = \left\{ (a,A) : (a-b)^T V^{-1} (a-b) \leq \gamma \right\}\]

The specifics of implementing the Covariance Intersection Algorithm in several popular multi-target tracking algorithms were expanded upon by Clark et al. [9]. The work includes a discussion of Stone Soup and can be used to apply this class to a tracking algorithm of choice.

Note

If you have tracks that you would like to use as measurements for this updater, the Tracks2GaussianDetectionFeeder class can be used to convert the tracks to the appropriate format.

Parameters:
  • measurement_model (MeasurementModel) – measurement model

  • omega (float, optional) – A weighting parameter in the range \((0,1]\)

omega: float

A weighting parameter in the range \((0,1]\)

predict_measurement(predicted_state, measurement_model=None, measurement_noise=True, **kwargs)[source]

This function predicts the measurement of a state in situations where measurements consist of a covariance and state vector.

Parameters:
  • predicted_state (GaussianState) – The predicted state \(\mathbf{x}_{k|k-1}\)

  • measurement_model (MeasurementModel) – The measurement model. If omitted, the updater will use the model that was specified on initialization.

  • measurement_noise (bool) – Whether to include measurement noise. Default is True. Where False, the predicted state covariance is used directly, without the omega weighting.

Returns:

The measurement prediction

Return type:

MeasurementPrediction

update(hypothesis, force_symmetric_covariance=False, **kwargs)[source]

Given a hypothesis, calculate the posterior mean and covariance.

Parameters:
  • hypothesis (Hypothesis) – Hypothesis with the predicted state and the actual/associated measurement which should be used for updating. If the hypothesis does not contain a measurement prediction, one will be calculated.

  • force_symmetric_covariance (bool) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

Returns:

The state posterior, saved in a generic Update object.

Return type:

Update
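A brief construction sketch; the full-state LinearGaussian model is an illustrative assumption for this state-as-measurement fusion setting:

import numpy as np
from stonesoup.models.measurement.linear import LinearGaussian
from stonesoup.updater.chernoff import ChernoffUpdater

# Measurements here are full Gaussian states (GaussianDetection objects), e.g.
# tracks converted with the Tracks2GaussianDetectionFeeder mentioned above
measurement_model = LinearGaussian(
    ndim_state=4, mapping=(0, 1, 2, 3), noise_covar=np.eye(4))

updater = ChernoffUpdater(measurement_model=measurement_model, omega=0.5)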

Probabilistic

class stonesoup.updater.probability.PDAUpdater(measurement_model: MeasurementModel = None, force_symmetric_covariance: bool = False, use_joseph_cov: bool = False)[source]

Bases: ExtendedKalmanUpdater

An updater which undertakes probabilistic data association (PDA), as defined in [10]. It differs slightly from the Kalman updater it inherits from in that, instead of a single hypothesis object, the update() method takes a hypotheses object returned by a PDA (or similar) data associator. Functionally this is a list of single hypothesis objects which group tracks together with associated measurements and probabilities.

The ExtendedKalmanUpdater is used in order to inherit the ability to cope with (slight) non-linearities. Other inheritance structures should be trivial to implement.

The update step proceeds as:

\[ \begin{align}\begin{aligned}\mathbf{x}_{k|k} &= \mathbf{x}_{k|k-1} + K_k \mathbf{y}_k\\P_{k|k} &= \beta_0 P_{k|k-1} + (1 - \beta_0) P_{k|k} + \tilde{P}\end{aligned}\end{align} \]

where \(K_k\) and \(P_{k|k}\) are the Kalman gain and posterior covariance respectively returned by the single-target Kalman update, and \(\beta_0\) is the probability of missed detection. In this instance \(\mathbf{y}_k\) is the combined innovation over \(m_k\) detections:

\[\mathbf{y}_k = \Sigma_{i=1}^{m_k} \beta_i \mathbf{y}_{k,i}.\]

The posterior covariance is composed of a term to account for the covariance due to missed detection, that due to the true detection, and a term (\(\tilde{P}\)) which quantifies the effect of the measurement origin uncertainty.

\[\tilde{P} \triangleq K_k [ \Sigma_{i=1}^{m_k} \beta_i \mathbf{y}_{k,i}\mathbf{y}_{k,i}^T - \mathbf{y}_k \mathbf{y}_k^T ] K_k^T\]

A method for updating via a Gaussian mixture reduction is also provided. In this latter case, each of the hypotheses, including that for a missed detection, is updated and then a weighted Gaussian reduction is used to resolve the hypotheses to a single Gaussian distribution. The reason this is equivalent to the innovation-based method is shown in [11].

Parameters:
  • measurement_model (MeasurementModel, optional) – A measurement model. This need not be defined if a measurement model is provided in the measurement. If no model specified on construction, or in the measurement, then error will be thrown. Must be linear or otherwise implement jacobian().

  • force_symmetric_covariance (bool, optional) – A flag to force the output covariance matrix to be symmetric by way of a simple geometric combination of the matrix and transpose. Default is False.

  • use_joseph_cov (bool, optional) – Bool dictating the method of covariance calculation. If use_joseph_cov is True then the Joseph form of the covariance equation is used.

update(hypotheses, gm_method=False, **kwargs)[source]

The update step.

Parameters:
  • hypotheses (MultipleHypothesis) –

    The prediction-measurement association hypotheses. This hypotheses object carries tracks, associated sets of measurements for each track together with a probability measure which enumerates the likelihood of each track-measurement pair. (This is most likely output by the PDA associator).

    In a single case (the missed detection hypothesis), the hypothesis will not have an associated measurement or measurement prediction.

  • gm_method (bool) – Use the innovation-based update method if False (default), or the GM-reduction (True).

  • **kwargs (various) – These are passed to predict_measurement()

Returns:

The update, \(\mathbf{x}_{k|k}, P_{k|k}\)

Return type:

GaussianUpdate
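A construction sketch (the measurement model path and values are illustrative assumptions); the hypotheses passed to update() are those produced by a PDA-style data associator:

import numpy as np
from stonesoup.models.measurement.linear import LinearGaussian
from stonesoup.updater.probability import PDAUpdater

measurement_model = LinearGaussian(
    ndim_state=4, mapping=(0, 2), noise_covar=np.diag([1.0, 1.0]))

pda_updater = PDAUpdater(measurement_model)
# e.g. posterior = pda_updater.update(hypotheses, gm_method=True)
# where `hypotheses` is the MultipleHypothesis from the PDA data associator.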