Sensor Managers

class stonesoup.sensormanager.base.SensorManager(sensors: Set[Sensor], reward_function: Callable = None)[source]

Bases: Base, ABC

The sensor manager base class.

The purpose of a sensor manager is to return a mapping of sensors and sensor actions appropriate to a specific scenario and with a particular objective, or objectives, in mind. This involves using estimates of the situation and knowledge of the sensor system to calculate metrics associated with actions, and then determine optimal, or near optimal, actions to take.

There is considerable freedom in both the theory and practice of sensor management and these classes do not enforce a particular solution. A sensor manager may be ‘centralised’ in that it controls the actions of multiple sensors, or individual sensors may have their own managers which communicate with other sensor managers in a networked fashion.

Parameters
  • sensors (Set[Sensor]) – The sensor(s) which the sensor manager is managing. These must be capable of returning available actions.

  • reward_function (Callable, optional) – A function or class designed to work out the reward associated with an action or set of actions. For an example see RewardFunction. This may also incorporate a notion of the cost of making a measurement. The values returned may be scalar or vector in the case of multi-objective optimisation. Metrics may be of any type and in any units.

sensors: Set[Sensor]

The sensor(s) which the sensor manager is managing. These must be capable of returning available actions.

reward_function: Callable

A function or class designed to work out the reward associated with an action or set of actions. For an example see RewardFunction. This may also incorporate a notion of the cost of making a measurement. The values returned may be scalar or vector in the case of multi-objective optimisation. Metrics may be of any type and in any units.

abstract choose_actions(timestamp, nchoose, **kwargs)[source]

A method which returns a set of actions, designed to be enacted by a sensor, or sensors, chosen by some means. This will likely make use of optimisation algorithms.

Returns

Key-value pairs of the form ‘sensor: actions’. In the general case a sensor may be given a single action, or a list. The actions themselves are objects which must be interpretable by the sensor to which they are assigned.

Return type

dict {Sensor: [Action]}
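The shape of this returned mapping can be pictured with plain Python. The `Sensor` and `Action` classes below are hypothetical stand-ins for illustration only, not Stone Soup's own types:

```python
# A minimal sketch of the {sensor: [actions]} mapping that a sensor
# manager's choose_actions is expected to return. Sensor and Action here
# are invented stand-ins, not Stone Soup classes.

class Sensor:
    def __init__(self, name):
        self.name = name

class Action:
    def __init__(self, pointing_angle):
        self.pointing_angle = pointing_angle

def choose_actions(sensors, timestamp):
    """Return key-value pairs of the form sensor: [actions]."""
    # Here each sensor is simply assigned one default action; a real
    # manager would select actions using estimates of the situation.
    return {sensor: [Action(pointing_angle=0.0)] for sensor in sensors}

sensors = {Sensor("radar_a"), Sensor("radar_b")}
chosen = choose_actions(sensors, timestamp=None)
assert all(isinstance(actions, list) for actions in chosen.values())
```

The actions assigned to each sensor must be objects that sensor can interpret, which is why the mapping is keyed by the sensor itself.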

class stonesoup.sensormanager.base.RandomSensorManager(sensors: Set[Sensor], reward_function: Callable = None)[source]

Bases: SensorManager

As the name suggests, a sensor manager which returns a random choice of action or actions from the list available. Its practical purpose is to serve as a baseline to test against.

Parameters
  • sensors (Set[Sensor]) – The sensor(s) which the sensor manager is managing. These must be capable of returning available actions.

  • reward_function (Callable, optional) – A function or class designed to work out the reward associated with an action or set of actions. For an example see RewardFunction. This may also incorporate a notion of the cost of making a measurement. The values returned may be scalar or vector in the case of multi-objective optimisation. Metrics may be of any type and in any units.

choose_actions(tracks, timestamp, nchoose=1, **kwargs)[source]

Returns a randomly chosen [list of] action(s) from the action set for each sensor.

Parameters
  • tracks (set of Track) – Set of tracks at given time. Used in reward function.

  • timestamp (datetime.datetime) – Time until which the actions are carried out

  • nchoose (int) – Number of actions from the set to choose (default is 1)

Returns

The pairs of Sensor: [Action] selected

Return type

dict
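The random selection this manager performs can be sketched with Python's standard library. The string-valued sensors and actions below are illustrative stand-ins for Stone Soup objects:

```python
import random

# Sketch of RandomSensorManager's selection: for each sensor, nchoose
# actions are drawn at random from that sensor's available action set.

def choose_actions_randomly(action_sets, nchoose=1):
    """action_sets maps each sensor to its available actions."""
    return {sensor: random.sample(sorted(actions), nchoose)
            for sensor, actions in action_sets.items()}

action_sets = {"radar": {"look_north", "look_south", "look_east"}}
chosen = choose_actions_randomly(action_sets, nchoose=2)
assert len(chosen["radar"]) == 2
assert set(chosen["radar"]) <= action_sets["radar"]
```

Because the choice ignores any reward, this manager is only useful as a baseline against which reward-driven managers can be compared.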

class stonesoup.sensormanager.base.BruteForceSensorManager(sensors: Set[Sensor], reward_function: Callable = None)[source]

Bases: SensorManager

A sensor manager which returns a choice of action from those available. The sensor manager iterates through every possible configuration of sensors and actions and selects the configuration which returns the maximum reward as calculated by a reward function.

Parameters
  • sensors (Set[Sensor]) – The sensor(s) which the sensor manager is managing. These must be capable of returning available actions.

  • reward_function (Callable, optional) – A function or class designed to work out the reward associated with an action or set of actions. For an example see RewardFunction. This may also incorporate a notion of the cost of making a measurement. The values returned may be scalar or vector in the case of multi-objective optimisation. Metrics may be of any type and in any units.

choose_actions(tracks, timestamp, nchoose=1, **kwargs)[source]

Returns a chosen [list of] action(s) from the action set for each sensor. The chosen action(s) are selected by finding the configuration of sensors: actions which returns the maximum reward, as calculated by the reward function.

Parameters
  • tracks (set of Track) – Set of tracks at given time. Used in reward function.

  • timestamp (datetime.datetime) – Time until which the actions are carried out

  • nchoose (int) – Number of actions from the set to choose (default is 1)

Returns

The pairs of Sensor: [Action] selected

Return type

dict
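The exhaustive search can be sketched with `itertools.product`, which enumerates one action per sensor across all sensors. The reward function below, which simply sums action values, is a toy stand-in rather than a Stone Soup `RewardFunction`:

```python
import itertools

# Sketch of BruteForceSensorManager's search: every combination of one
# action per sensor is scored, and the best-scoring configuration kept.

def brute_force_choose(action_sets, reward_function):
    """action_sets maps each sensor to its available actions."""
    sensors = list(action_sets)
    best_config, best_reward = None, float("-inf")
    for combo in itertools.product(*(action_sets[s] for s in sensors)):
        config = dict(zip(sensors, combo))
        reward = reward_function(config)
        if reward > best_reward:
            best_config, best_reward = config, reward
    return best_config

action_sets = {"s1": [0, 1, 2], "s2": [10, 20]}
best = brute_force_choose(action_sets, lambda cfg: sum(cfg.values()))
assert best == {"s1": 2, "s2": 20}
```

Note the combinatorial cost: the number of configurations is the product of the action-set sizes, which is why optimisation-based managers exist for larger problems.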

class stonesoup.sensormanager.optimise.OptimizeBruteSensorManager(sensors: Set[Sensor], reward_function: Callable = None, number_of_grid_points: int = 10)[source]

Bases: _OptimizeSensorManager

A sensor manager built around the SciPy optimize.brute method. The sensor manager takes all possible configurations of sensors and actions and uses the optimising function to optimise a given reward function, returning the optimal configuration.

Scipy optimize provides functions which can minimize or maximize functions using a variety of algorithms. scipy.optimize.brute() minimizes a function over a given range using a brute force method: it computes the function’s value at each point of a multidimensional grid of points to find the global minimum.

This brute force method also applies a polishing function to the result of the brute force minimization. By default this is set as scipy.optimize.fmin() which minimizes a function using the downhill simplex algorithm.

Parameters
  • sensors (Set[Sensor]) – The sensor(s) which the sensor manager is managing. These must be capable of returning available actions.

  • reward_function (Callable, optional) – A function or class designed to work out the reward associated with an action or set of actions. For an example see RewardFunction. This may also incorporate a notion of the cost of making a measurement. The values returned may be scalar or vector in the case of multi-objective optimisation. Metrics may be of any type and in any units.

  • number_of_grid_points (int, optional) – Number of grid points to search along axis. See Ns in scipy.optimize.brute(). Default is 10.

number_of_grid_points: int

Number of grid points to search along axis. See Ns in scipy.optimize.brute(). Default is 10.

choose_actions(tracks, timestamp, nchoose=1, **kwargs)

Returns a chosen [list of] action(s) from the action set for each sensor. The chosen action(s) are selected by finding the configuration of sensors: actions which returns the maximum reward, as calculated by the reward function.

Parameters
  • tracks (set of Track) – Set of tracks at given time. Used in reward function.

  • timestamp (datetime.datetime) – Time until which the actions are carried out

  • nchoose (int) – Number of actions from the set to choose (default is 1)

Returns

The pairs of Sensor: [Action] selected

Return type

dict

reward_function: Callable

A function or class designed to work out the reward associated with an action or set of actions. For an example see RewardFunction. This may also incorporate a notion of the cost of making a measurement. The values returned may be scalar or vector in the case of multi-objective optimisation. Metrics may be of any type and in any units.

sensors: Set[Sensor]

The sensor(s) which the sensor manager is managing. These must be capable of returning available actions.
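Because scipy.optimize.brute() minimises, a reward to be maximised is negated before being passed in. The quadratic reward below is an invented stand-in for a reward function evaluated over a sensor's continuous action range:

```python
import numpy as np
from scipy import optimize

# Negated toy reward, peaked at x = 1.5: reward(x) = -(x - 1.5)**2.
def neg_reward(x):
    x = np.atleast_1d(x)  # brute may pass a scalar in the 1-D case
    return (x[0] - 1.5) ** 2

# Search x in [0, 3] on a grid of 10 points (number_of_grid_points maps
# onto Ns), then polish the best grid point with the default fmin routine.
best_x = np.atleast_1d(
    optimize.brute(neg_reward, ranges=((0.0, 3.0),), Ns=10))
assert abs(best_x[0] - 1.5) < 1e-3
```

The `Ns` grid density trades off search cost against the risk of the polishing step starting in the wrong basin.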

class stonesoup.sensormanager.optimise.OptimizeBasinHoppingSensorManager(sensors: Set[Sensor], reward_function: Callable = None)[source]

Bases: _OptimizeSensorManager

A sensor manager built around the SciPy optimize.basinhopping method. The sensor manager takes all possible configurations of sensors and actions and uses the optimising function to optimise a given reward function, returning the optimal configuration for the sensing system.

scipy.optimize.basinhopping() finds the global minimum of a function using the basin-hopping algorithm, a combination of a global stepping algorithm and local minimization at each step.

Parameters
  • sensors (Set[Sensor]) – The sensor(s) which the sensor manager is managing. These must be capable of returning available actions.

  • reward_function (Callable, optional) – A function or class designed to work out the reward associated with an action or set of actions. For an example see RewardFunction. This may also incorporate a notion of the cost of making a measurement. The values returned may be scalar or vector in the case of multi-objective optimisation. Metrics may be of any type and in any units.

choose_actions(tracks, timestamp, nchoose=1, **kwargs)

Returns a chosen [list of] action(s) from the action set for each sensor. The chosen action(s) are selected by finding the configuration of sensors: actions which returns the maximum reward, as calculated by the reward function.

Parameters
  • tracks (set of Track) – Set of tracks at given time. Used in reward function.

  • timestamp (datetime.datetime) – Time until which the actions are carried out

  • nchoose (int) – Number of actions from the set to choose (default is 1)

Returns

The pairs of Sensor: [Action] selected

Return type

dict

reward_function: Callable

A function or class designed to work out the reward associated with an action or set of actions. For an example see RewardFunction. This may also incorporate a notion of the cost of making a measurement. The values returned may be scalar or vector in the case of multi-objective optimisation. Metrics may be of any type and in any units.

sensors: Set[Sensor]

The sensor(s) which the sensor manager is managing. These must be capable of returning available actions.
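As with the brute-force optimiser, basinhopping minimises, so a reward to be maximised is negated. The double-well objective below is an invented example chosen because it has more than one local minimum, the situation basin hopping is designed for:

```python
from scipy import optimize

# Negated toy reward with local minima near x = +1 and x = -1; a purely
# local minimiser started at one well would never see the other.
def neg_reward(x):
    return (x[0] ** 2 - 1.0) ** 2 + 0.1 * x[0]

# Basin hopping combines random perturbations of the current point with a
# local minimisation after each step, accepting moves Metropolis-style.
res = optimize.basinhopping(neg_reward, x0=[1.0], niter=50, stepsize=1.0)
# Whichever well it settles in, the result lies near |x| = 1.
assert abs(abs(res.x[0]) - 1.0) < 0.1
```

The `stepsize` and `niter` arguments control how widely and how long the global stepping explores; defaults are rarely right for a given action space.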

Reward Functions

class stonesoup.sensormanager.reward.RewardFunction[source]

Bases: Base, ABC

The reward function base class.

A reward function is a callable used by a sensor manager to determine the best choice of action(s) for a sensor or group of sensors to take. For a given configuration of sensors and actions the reward function calculates a metric to evaluate how useful that choice of actions would be with a particular objective or objectives in mind. The sensor manager algorithm compares this metric for different possible configurations and chooses the appropriate sensing configuration to use at that time step.

__call__(config: Mapping[Sensor, Sequence[Action]], tracks: Set[Track], metric_time: datetime, *args, **kwargs)[source]

A method which returns a reward metric based on information about the state of the system, sensors and possible actions they can take. This requires a mapping of sensors to action(s) to be evaluated by reward function, a set of tracks at given time and the time at which the actions would be carried out until.

Returns

Calculated metric

Return type

float
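The callable interface a reward function implements can be sketched without Stone Soup's classes. Below, plain dicts and sets stand in for the config and tracks, and the reward, counting how many tracks a configuration could observe, is an invented illustrative objective:

```python
# Sketch of the RewardFunction callable signature: given a configuration
# {sensor: [actions]}, the current tracks and a metric time, return a
# scalar reward. All names and the objective itself are hypothetical.

def count_observable_reward(config, tracks, metric_time):
    """Return a scalar reward for a {sensor: [actions]} configuration."""
    reward = 0.0
    for sensor, actions in config.items():
        for action in actions:
            # Here each action is modelled as the set of track ids the
            # resulting pointing would cover.
            reward += len(action & tracks)
    return reward

tracks = {"t1", "t2", "t3"}
config = {"radar": [{"t1", "t2"}], "camera": [{"t3"}]}
assert count_observable_reward(config, tracks, metric_time=None) == 3.0
```

A sensor manager would call this once per candidate configuration and keep the configuration with the highest return value.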

__init__()

class stonesoup.sensormanager.reward.UncertaintyRewardFunction(predictor: KalmanPredictor, updater: ExtendedKalmanUpdater)[source]

Bases: RewardFunction

A reward function which calculates the potential reduction in the uncertainty of track estimates if a particular action is taken by a sensor or group of sensors.

Given a configuration of sensors and actions, a metric is calculated for the potential reduction in the uncertainty of the tracks that would occur if the sensing configuration were used to make an observation. A larger value indicates a greater reduction in uncertainty.

Parameters
  • predictor (KalmanPredictor) – Predictor used to predict the track to a new state.

  • updater (ExtendedKalmanUpdater) – Updater used to update the track to the new state.

predictor: KalmanPredictor

Predictor used to predict the track to a new state.

updater: ExtendedKalmanUpdater

Updater used to update the track to the new state.

__call__(config: Mapping[Sensor, Sequence[Action]], tracks: Set[Track], metric_time: datetime, *args, **kwargs)[source]

For a given configuration of sensors and actions this reward function calculates the potential uncertainty reduction of each track by computing the difference between the covariance matrix norms of the prediction and the posterior assuming a predicted measurement corresponding to that prediction.

This requires a mapping of sensors to action(s) to be evaluated by reward function, a set of tracks at given time and the time at which the actions would be carried out until.

The metric returned is the total potential reduction in uncertainty across all tracks.

Returns

Metric of uncertainty for given configuration

Return type

float

__init__(predictor: KalmanPredictor, updater: ExtendedKalmanUpdater)
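The per-track quantity this reward function accumulates can be illustrated with a standard Kalman covariance update. The matrices below are invented for illustration; in Stone Soup they would come from the configured predictor and updater:

```python
import numpy as np

# Sketch of a single track's contribution to the uncertainty reward: the
# difference between the norms of the predicted covariance and the
# posterior covariance after a hypothetical measurement.

predicted_cov = np.diag([4.0, 4.0])   # prior uncertainty (toy values)
H = np.array([[1.0, 0.0]])            # measurement model: position only
R = np.array([[1.0]])                 # measurement noise covariance

# Standard Kalman covariance update: P_post = (I - K H) P_pred
S = H @ predicted_cov @ H.T + R
K = predicted_cov @ H.T @ np.linalg.inv(S)
posterior_cov = (np.eye(2) - K @ H) @ predicted_cov

reward = np.linalg.norm(predicted_cov) - np.linalg.norm(posterior_cov)
assert reward > 0  # observing the track reduces its uncertainty
```

Summing this difference over all tracks gives the configuration's total potential uncertainty reduction; the covariance update does not depend on the measurement value itself, which is why the reward can be evaluated before any observation is made.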