# Predictors

Base classes for Stone Soup Predictor interface

class stonesoup.predictor.base.Predictor(transition_model: TransitionModel, control_model: ControlModel = None)[source]

Predictor base class

A predictor is used to predict a new State given a prior State and a TransitionModel. In addition, a ControlModel may be used to model an external influence on the state.

$\mathbf{x}_{k|k-1} = f_k(\mathbf{x}_{k-1}, \mathbf{\nu}_k) + b_k(\mathbf{u}_k, \mathbf{\eta}_k)$

where $\mathbf{x}_{k-1}$ is the prior state, $f_k(\mathbf{x}_{k-1})$ is the transition function, $\mathbf{u}_k$ the control vector, $b_k(\mathbf{u}_k)$ the control input and $\mathbf{\nu}_k$ and $\mathbf{\eta}_k$ the transition and control model noise respectively.
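The general form above can be sketched in plain NumPy (an illustrative sketch only, not the Stone Soup API; the linear transition and control functions here are stand-ins):

```python
import numpy as np

def predict(x_prior, f, b, u, transition_noise, control_noise):
    """General prediction: x_{k|k-1} = f_k(x_{k-1}, nu_k) + b_k(u_k, eta_k)."""
    return f(x_prior, transition_noise) + b(u, control_noise)

# Stand-in linear transition and control functions
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # transition matrix
B = np.array([[0.5], [1.0]])             # control input matrix

x_prior = np.array([0.0, 1.0])
u = np.array([0.2])

x_pred = predict(
    x_prior,
    f=lambda x, nu: F @ x + nu,
    b=lambda u_, eta: B @ u_ + eta,
    u=u,
    transition_noise=np.zeros(2),        # noise-free for illustration
    control_noise=np.zeros(2),
)
```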

Parameters
transition_model: stonesoup.models.transition.base.TransitionModel

transition model

control_model: stonesoup.models.control.base.ControlModel

control model

abstract predict(prior, timestamp=None, **kwargs)[source]

The prediction function itself

Parameters
Returns

State prediction

Return type

StatePrediction

## Kalman

class stonesoup.predictor.kalman.KalmanPredictor(transition_model: LinearGaussianTransitionModel, control_model: LinearControlModel = None)[source]

A predictor class which forms the basis for the family of Kalman predictors. This class also serves as the (specific) Kalman Filter Predictor class. Here

$f_k( \mathbf{x}_{k-1}) = F_k \mathbf{x}_{k-1}, \ b_k( \mathbf{u}_k) = B_k \mathbf{u}_k \ \mathrm{and} \ \mathbf{\nu}_k \sim \mathcal{N}(0,Q_k)$

Notes

In the Kalman filter, transition and control models must be linear.

Raises

ValueError – If no TransitionModel is specified.

Parameters
transition_model: stonesoup.models.transition.linear.LinearGaussianTransitionModel

The transition model to be used.

control_model: stonesoup.models.control.linear.LinearControlModel

The control model to be used. Default None where the predictor will create a zero-effect linear ControlModel.

predict(prior, timestamp=None, **kwargs)[source]

The predict function

Parameters
Returns

$\mathbf{x}_{k|k-1}$, the predicted state, and the predicted state covariance $P_{k|k-1}$

Return type

GaussianStatePrediction
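The underlying linear-Gaussian prediction, $\mathbf{x}_{k|k-1} = F_k \mathbf{x}_{k-1}$ and $P_{k|k-1} = F_k P_{k-1} F_k^T + Q_k$, can be sketched in plain NumPy (the matrices below are illustrative stand-ins, not drawn from Stone Soup):

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity transition matrix
Q = np.array([[dt**3 / 3, dt**2 / 2],
              [dt**2 / 2, dt]]) * 0.05      # process noise covariance

x = np.array([0.0, 1.0])                    # prior mean
P = np.diag([1.5, 0.5])                     # prior covariance

x_pred = F @ x                              # predicted mean (no control input)
P_pred = F @ P @ F.T + Q                    # predicted covariance
```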

class stonesoup.predictor.kalman.ExtendedKalmanPredictor(transition_model: TransitionModel, control_model: ControlModel = None)[source]

ExtendedKalmanPredictor class

An implementation of the Extended Kalman Filter predictor. Here the transition and control functions may be non-linear; their transition and control matrices are approximated via Jacobian matrices. To this end the transition and control models, if non-linear, must provide a jacobian() method.

Parameters
transition_model: stonesoup.models.transition.base.TransitionModel

The transition model to be used.

control_model: stonesoup.models.control.base.ControlModel

The control model to be used. Default None where the predictor will create a zero-effect linear ControlModel.
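As a sketch of the idea (not Stone Soup's jacobian() implementation), a transition matrix can be approximated by finite differences and used in the standard covariance prediction; the non-linear transition function f below is a hypothetical stand-in:

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Numerically approximate the Jacobian of f at x (forward differences)."""
    fx = f(x)
    J = np.empty((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros(x.size)
        dx[i] = eps
        J[:, i] = (f(x + dx) - fx) / eps
    return J

# Stand-in non-linear transition function
def f(x):
    return np.array([x[0] + np.sin(x[1]), 0.9 * x[1]])

x = np.array([1.0, 0.5])
P = np.diag([0.2, 0.1])
Q = 0.01 * np.eye(2)

J = jacobian(f, x)
x_pred = f(x)                 # mean propagated through the non-linear function
P_pred = J @ P @ J.T + Q      # covariance via the Jacobian approximation
```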

class stonesoup.predictor.kalman.UnscentedKalmanPredictor(transition_model: TransitionModel, control_model: ControlModel = None, alpha: float = 0.5, beta: float = 2, kappa: float = 0)[source]

UnscentedKalmanPredictor class

Prediction is accomplished by calculating the sigma points from the Gaussian mean and covariance, propagating these through the (in general non-linear) transition function, and reconstructing the Gaussian from the results.

Parameters
• transition_model (TransitionModel) – The transition model to be used.

• control_model (ControlModel, optional) – The control model to be used. Default None where the predictor will create a zero-effect linear ControlModel.

• alpha (float, optional) – Primary sigma point spread scaling parameter. Default is 0.5.

• beta (float, optional) – Used to incorporate prior knowledge of the distribution. If the true distribution is Gaussian, the value of 2 is optimal. Default is 2.

• kappa (float, optional) – Secondary spread scaling parameter. Default is calculated as 3-Ns.

transition_model: stonesoup.models.transition.base.TransitionModel

The transition model to be used.

control_model: stonesoup.models.control.base.ControlModel

The control model to be used. Default None where the predictor will create a zero-effect linear ControlModel.

alpha: float

Primary sigma point spread scaling parameter. Default is 0.5.

beta: float

Used to incorporate prior knowledge of the distribution. If the true distribution is Gaussian, the value of 2 is optimal. Default is 2.

kappa: float

Secondary spread scaling parameter. Default is calculated as 3-Ns.

predict(prior, timestamp=None, **kwargs)[source]

The unscented version of the predict step

Parameters
Returns

The predicted state $\mathbf{x}_{k|k-1}$ and the predicted state covariance $P_{k|k-1}$

Return type

GaussianStatePrediction
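The sigma-point procedure can be sketched in plain NumPy (an illustrative implementation of the standard unscented transform, not Stone Soup's; for a linear transition function the result matches the exact Kalman prediction):

```python
import numpy as np

def unscented_predict(x, P, f, Q, alpha=0.5, beta=2.0, kappa=0.0):
    """Sigma-point prediction: sample, propagate through f, reconstruct."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    # 2n + 1 sigma points: the mean, plus/minus each scaled covariance column
    sigmas = np.vstack([x, x + S.T, x - S.T])
    wm = np.full(2 * n + 1, 1 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    propagated = np.array([f(s) for s in sigmas])
    x_pred = wm @ propagated
    diff = propagated - x_pred
    P_pred = (wc[:, None] * diff).T @ diff + Q
    return x_pred, P_pred

# For a linear transition function the unscented transform is exact
F = np.array([[1.0, 1.0], [0.0, 1.0]])
x = np.array([0.0, 1.0])
P = np.diag([1.5, 0.5])
Q = 0.05 * np.eye(2)
x_pred, P_pred = unscented_predict(x, P, lambda s: F @ s, Q)
```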

class stonesoup.predictor.kalman.SqrtKalmanPredictor(transition_model: LinearGaussianTransitionModel, control_model: LinearControlModel = None, qr_method: bool = False)[source]

The version of the Kalman predictor that operates on the square root parameterisation of the Gaussian state, SqrtGaussianState.

The prediction is undertaken in one of two ways. The default is to work in exactly the same way as the parent class, with the exception that the predicted covariance is subject to a Cholesky factorisation prior to initialisation of the SqrtGaussianState output. The alternative, accessible via the qr_method = True flag, is to predict via a modified Gram-Schmidt process. See [1] for details.

If the transition and control models provide the square root form of the covariance (as sqrt_covar in the case of the transition model and sqrt_control_noise for control models), then these are used directly. If not, they are created from the full matrices using the scipy.linalg sqrtm() method. (Unlike the Cholesky decomposition, this works on positive semi-definite matrices as well as positive definite ones.)

References

1. Maybeck, P.S. 1994, Stochastic Models, Estimation, and Control, Vol. 1, NavtechGPS, Springfield, VA.

Parameters
qr_method: bool

A switch to do the prediction via a QR decomposition, rather than using a Cholesky decomposition.
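The QR route can be sketched as follows (an illustrative NumPy sketch of the square-root idea, not Stone Soup's implementation): stacking the transformed prior square root over the process-noise square root and taking a QR decomposition yields a square root of the predicted covariance without ever forming the full matrix.

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])
P = np.array([[1.5, 0.3], [0.3, 0.5]])
Q = 0.05 * np.eye(2)

sqrt_P = np.linalg.cholesky(P)          # lower-triangular square root of prior
sqrt_Q = np.linalg.cholesky(Q)

# Stack (F sqrt_P)^T over sqrt_Q^T; then A^T A = F P F^T + Q,
# and A = QR gives R^T R = A^T A, so R^T is a valid square root.
A = np.vstack([(F @ sqrt_P).T, sqrt_Q.T])
_, R = np.linalg.qr(A)
sqrt_P_pred = R.T                       # square root of the predicted covariance

# Recovers the full predicted covariance
P_pred = sqrt_P_pred @ sqrt_P_pred.T    # equals F P F^T + Q
```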

## Particle

class stonesoup.predictor.particle.ParticlePredictor(transition_model: TransitionModel, control_model: ControlModel = None)[source]

ParticlePredictor class

An implementation of a Particle Filter predictor.

Parameters
predict(prior, control_input=None, timestamp=None, **kwargs)[source]

Particle Filter prediction step

Parameters
Returns

The predicted state

Return type

ParticleStatePrediction
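The prediction step can be sketched in plain NumPy (illustrative only, not the Stone Soup API; the linear transition here is a stand-in): each particle is pushed through the transition function with a fresh process-noise sample, while the weights are left unchanged.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

F = np.array([[1.0, 1.0], [0.0, 1.0]])       # stand-in transition matrix
Q = 0.05 * np.eye(2)                         # process noise covariance

# Prior particle cloud: N particles with uniform weights
N = 1000
particles = rng.multivariate_normal([0.0, 1.0], np.diag([1.5, 0.5]), size=N)
weights = np.full(N, 1.0 / N)

# Prediction: propagate each particle and add a process-noise sample;
# the weights are unchanged by the prediction step.
noise = rng.multivariate_normal(np.zeros(2), Q, size=N)
predicted = particles @ F.T + noise
```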

class stonesoup.predictor.particle.ParticleFlowKalmanPredictor(transition_model: TransitionModel, control_model: ControlModel = None, kalman_predictor: KalmanPredictor = None)[source]

Gromov Flow Parallel Kalman Particle Predictor

This is a wrapper around the GromovFlowParticlePredictor which can use an ExtendedKalmanPredictor or UnscentedKalmanPredictor in parallel in order to maintain a state covariance, as proposed in [1].

This should be used in conjunction with the ParticleFlowKalmanUpdater.

Parameters

References

1. Ding, Tao & Coates, Mark J., “Implementation of the Daum-Huang Exact-Flow Particle Filter”, 2012

kalman_predictor: stonesoup.predictor.kalman.KalmanPredictor

Kalman predictor to use. Default None where a new instance of ExtendedKalmanPredictor will be created utilising the same transition model.

predict(prior, *args, **kwargs)[source]

Particle Filter prediction step

Parameters
Returns

The predicted state

Return type

ParticleStatePrediction

## Categorical

class stonesoup.predictor.categorical.HMMPredictor(transition_model: MarkovianTransitionModel, control_model: ControlModel = None)[source]

Hidden Markov model predictor

Assumes the transition model is time-invariant, and therefore care should be taken when predicting forward to the same timestamp.

Parameters
transition_model: stonesoup.models.transition.categorical.MarkovianTransitionModel

The transition model used to predict states forward in time.

predict(prior, timestamp=None, **kwargs)[source]

Predicts a CategoricalState forward using the transition_model.

Parameters
Returns

The predicted state.

Return type

CategoricalStatePrediction

Notes

The Markovian transition model is time-invariant and the evaluated time_interval can be None.
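The underlying calculation can be sketched in plain NumPy (illustrative only; the column-stochastic convention for the transition matrix below is an assumption, not necessarily Stone Soup's):

```python
import numpy as np

# Stand-in Markovian transition matrix: entry (i, j) is the probability
# of moving to category i given category j (columns sum to 1).
T = np.array([[0.8, 0.3],
              [0.2, 0.7]])

prior = np.array([0.5, 0.5])      # prior categorical distribution

pred = T @ prior                  # predicted categorical distribution
pred = pred / pred.sum()          # renormalise against numerical drift
```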

## Composite

class stonesoup.predictor.composite.CompositePredictor(sub_predictors: Sequence[Predictor], control_model: ControlModel = None)[source]

Composite predictor type

A composition of ordered sub-predictors (Predictor). Independently predicts each sub-state of a CompositeState forward using a corresponding sub-predictor.

Parameters
• sub_predictors (Sequence[Predictor]) – Sequence of sub-predictors comprising the composite predictor. Must not be empty.

• control_model (ControlModel, optional) – control model

sub_predictors: Sequence[stonesoup.predictor.base.Predictor]

Sequence of sub-predictors comprising the composite predictor. Must not be empty.

property transition_model

transition model

predict(prior, timestamp=None, **kwargs)[source]

The predict function

Parameters
Returns

The predicted composite state

Return type

CompositeState
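The idea can be sketched in plain Python (illustrative only; the sub-predictors below are stand-in functions rather than Stone Soup Predictor objects): each sub-state is predicted independently by its corresponding sub-predictor and the results are recombined in order.

```python
import numpy as np

# Stand-in sub-predictors: each maps its sub-state forward in time.
sub_predictors = [
    lambda x: np.array([[1.0, 1.0], [0.0, 1.0]]) @ x,   # kinematic sub-state
    lambda p: p / p.sum(),                              # categorical sub-state
]

# A composite prior is an ordered sequence of sub-states,
# predicted independently, one sub-predictor per sub-state.
prior = [np.array([0.0, 1.0]), np.array([0.2, 0.6])]
prediction = [predict(sub) for predict, sub in zip(sub_predictors, prior)]
```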

## Information

class stonesoup.predictor.information.InformationKalmanPredictor(transition_model: LinearGaussianTransitionModel, control_model: LinearControlModel = None)[source]

A predictor class which uses the information form of the Kalman filter. The key concept is that ‘information’ is encoded as the information matrix, and the so-called ‘information state’, which are:

$\begin{aligned}
Y_{k-1} &= P^{-1}_{k-1} \\
\mathbf{y}_{k-1} &= P^{-1}_{k-1} \mathbf{x}_{k-1}
\end{aligned}$

The prediction then proceeds as [2]:

$\begin{aligned}
Y_{k|k-1} &= [F_k Y_{k-1}^{-1} F_k^T + Q_k]^{-1} \\
\mathbf{y}_{k|k-1} &= Y_{k|k-1} F_k Y_{k-1}^{-1} \mathbf{y}_{k-1} + Y_{k|k-1} B_k \mathbf{u}_k
\end{aligned}$

where the symbols have the same meaning as in the description of the Kalman filter (see e.g. tutorial 1) and the prediction equations can be derived from those of the Kalman filter. In order to cut down on the number of matrix inversions and to benefit from caching, these are usually recast as [3]:

$\begin{aligned}
M_k &= (F_k^{-1})^T Y_{k-1} F_k^{-1} \\
Y_{k|k-1} &= (I + M_k Q_k)^{-1} M_k \\
\mathbf{y}_{k|k-1} &= (I + M_k Q_k)^{-1} (F_k^{-1})^T \mathbf{y}_{k-1} + Y_{k|k-1} B_k \mathbf{u}_k
\end{aligned}$

The prior state must have a state vector $\mathbf{y}_{k-1}$ corresponding to $P_{k-1}^{-1} \mathbf{x}_{k-1}$ and a precision matrix, $Y_{k-1} = P_{k-1}^{-1}$. The InformationState class is provided for this purpose.

The TransitionModel is queried for the existence of an inverse_matrix() method, and if not present, matrix() is inverted. This gives one the opportunity to cache $$F_k^{-1}$$ and save computational resource.
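The recast equations can be checked numerically against the covariance form (an illustrative NumPy sketch with stand-in matrices, not Stone Soup's implementation):

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = 0.05 * np.eye(2)

# Prior in covariance form, converted to information form
x = np.array([0.0, 1.0])
P = np.diag([1.5, 0.5])
Y = np.linalg.inv(P)              # information (precision) matrix
y = Y @ x                         # information state

# Prediction in information form (no control input)
Fi = np.linalg.inv(F)             # cacheable inverse transition matrix
M = Fi.T @ Y @ Fi
IMQ = np.linalg.inv(np.eye(2) + M @ Q)
Y_pred = IMQ @ M
y_pred = IMQ @ Fi.T @ y

# Agrees with the covariance-form prediction
P_pred = F @ P @ F.T + Q
```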

Raises

ValueError – If no TransitionModel is specified.

References

2. Kim, Y-S, Hong, K-S 2003, Decentralized information filter in federated form, SICE annual conference

3. Makarenko, A., Durrant-Whyte, H. 2004, Decentralized data fusion and control in active sensor networks, in: The 7th International Conference on Information Fusion (Fusion’04), pp. 479-486

Parameters
transition_model: stonesoup.models.transition.linear.LinearGaussianTransitionModel

The transition model to be used.

control_model: stonesoup.models.control.linear.LinearControlModel

The control model to be used. Default None where the predictor will create a zero-effect linear ControlModel.

predict(prior, timestamp=None, **kwargs)[source]

The predict function

Parameters
Returns

$\mathbf{y}_{k|k-1}$, the predicted information state, and the predicted information matrix $Y_{k|k-1}$

Return type

InformationStatePrediction