Sensor Managers
- class stonesoup.sensormanager.base.SensorManager(sensors: Set[Sensor], reward_function: Callable = None)[source]
The sensor manager base class.
The purpose of a sensor manager is to return a mapping of sensors and sensor actions appropriate to a specific scenario and with a particular objective, or objectives, in mind. This involves using estimates of the situation and knowledge of the sensor system to calculate metrics associated with actions, and then determine optimal, or near optimal, actions to take.
There is considerable freedom in both the theory and practice of sensor management and these classes do not enforce a particular solution. A sensor manager may be ‘centralised’ in that it controls the actions of multiple sensors, or individual sensors may have their own managers which communicate with other sensor managers in a networked fashion.
- Parameters:
  - sensors (Set[Sensor]) – The sensor(s) which the sensor manager is managing. These must be capable of returning available actions.
  - reward_function (Callable, optional) – A function or class designed to work out the reward associated with an action or set of actions. For an example see RewardFunction. This may also incorporate a notion of the cost of making a measurement. The values returned may be scalar or vector in the case of multi-objective optimisation. Metrics may be of any type and in any units.
- sensors: Set[Sensor]
The sensor(s) which the sensor manager is managing. These must be capable of returning available actions.
- reward_function: Callable
A function or class designed to work out the reward associated with an action or set of actions. For an example see RewardFunction. This may also incorporate a notion of the cost of making a measurement. The values returned may be scalar or vector in the case of multi-objective optimisation. Metrics may be of any type and in any units.
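The contract above can be sketched in plain Python. This is a minimal illustrative stand-in, not the Stone Soup implementation; the class, sensor, and action names below are invented:

```python
import datetime
from typing import Callable, Dict, List, Set


class ToySensorManager:
    """Minimal stand-in for the sensor manager base class."""

    def __init__(self, sensors: Set[str], reward_function: Callable = None):
        self.sensors = sensors
        self.reward_function = reward_function

    def choose_actions(self, tracks: set, timestamp: datetime.datetime,
                       nchoose: int = 1) -> Dict[str, List[str]]:
        # Concrete managers return a mapping of sensor -> chosen action(s)
        raise NotImplementedError


class FirstActionManager(ToySensorManager):
    """Trivial concrete manager: always tasks the first available action."""

    def __init__(self, sensors, actions_by_sensor, reward_function=None):
        super().__init__(sensors, reward_function)
        self.actions_by_sensor = actions_by_sensor

    def choose_actions(self, tracks, timestamp, nchoose=1):
        return {s: self.actions_by_sensor[s][:nchoose] for s in self.sensors}


manager = FirstActionManager(
    sensors={"radar"},
    actions_by_sensor={"radar": ["look_north", "look_south"]},
)
config = manager.choose_actions(set(), datetime.datetime(2024, 1, 1))
```

The key point is the return shape: a mapping from each managed sensor to its chosen action(s).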
- class stonesoup.sensormanager.base.RandomSensorManager(sensors: Set[Sensor], reward_function: Callable = None)[source]
Bases: SensorManager
As the name suggests, a sensor manager which returns a random choice of action or actions from the list available. Its practical purpose is to serve as a baseline to test against.
- Parameters:
  - sensors (Set[Sensor]) – The sensor(s) which the sensor manager is managing. These must be capable of returning available actions.
  - reward_function (Callable, optional) – A function or class designed to work out the reward associated with an action or set of actions. For an example see RewardFunction. This may also incorporate a notion of the cost of making a measurement. The values returned may be scalar or vector in the case of multi-objective optimisation. Metrics may be of any type and in any units.
- choose_actions(tracks, timestamp, nchoose=1, **kwargs)[source]
Returns a randomly chosen [list of] action(s) from the action set for each sensor.
- Parameters:
  - tracks (set of Track) – Set of tracks at given time. Used in reward function.
  - timestamp (datetime.datetime) – Time until which the actions are carried out.
  - nchoose (int) – Number of actions from the set to choose (default is 1).
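The random selection can be sketched as follows. This is an illustrative stand-in using plain strings for sensors and actions, not Stone Soup objects:

```python
import random

# Available actions per sensor (invented names for illustration)
actions_by_sensor = {
    "sensor_a": ["dwell_0", "dwell_90", "dwell_180"],
    "sensor_b": ["low_power", "high_power"],
}

random.seed(1)  # fixed seed so the example is repeatable

nchoose = 1
# For each sensor, draw nchoose actions uniformly at random without replacement
chosen_config = {
    sensor: random.sample(actions, nchoose)
    for sensor, actions in actions_by_sensor.items()
}
```

No reward function is consulted, which is what makes this manager a useful baseline.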
- class stonesoup.sensormanager.base.BruteForceSensorManager(sensors: Set[Sensor], reward_function: Callable = None)[source]
Bases: SensorManager
A sensor manager which returns a choice of action from those available. The sensor manager iterates through every possible configuration of sensors and actions and selects the configuration which returns the maximum reward as calculated by a reward function.
- Parameters:
  - sensors (Set[Sensor]) – The sensor(s) which the sensor manager is managing. These must be capable of returning available actions.
  - reward_function (Callable, optional) – A function or class designed to work out the reward associated with an action or set of actions. For an example see RewardFunction. This may also incorporate a notion of the cost of making a measurement. The values returned may be scalar or vector in the case of multi-objective optimisation. Metrics may be of any type and in any units.
- choose_actions(tracks, timestamp, nchoose=1, **kwargs)[source]
Returns a chosen [list of] action(s) from the action set for each sensor. The chosen action(s) are selected by finding the configuration of sensors and actions which returns the maximum reward, as calculated by a reward function.
- Parameters:
  - tracks (set of Track) – Set of tracks at given time. Used in reward function.
  - timestamp (datetime.datetime) – Time until which the actions are carried out.
  - nchoose (int) – Number of actions from the set to choose (default is 1).
- class stonesoup.sensormanager.optimise.OptimizeBruteSensorManager(sensors: Set[Sensor], reward_function: Callable = None, n_grid_points: int = 10, generate_full_output: bool = False, finish: bool = False, disp: bool = False)[source]
Bases: _OptimizeSensorManager
A sensor manager built around the SciPy brute() method. The sensor manager takes all possible configurations of sensors and actions and uses the optimising function to optimise a given reward function, returning the optimal configuration.
SciPy optimize provides functions which can minimise or maximise functions using a variety of algorithms. brute() minimises a function over a given range by brute force: the function's value is computed at each point of a multidimensional grid in order to find the global minimum.
A default version of the optimiser is used, or on initialisation the sensor manager can be passed parameters to alter the configuration of the optimiser. Please see the SciPy documentation for full details on what each parameter does.
- Parameters:
  - sensors (Set[Sensor]) – The sensor(s) which the sensor manager is managing. These must be capable of returning available actions.
  - reward_function (Callable, optional) – A function or class designed to work out the reward associated with an action or set of actions. For an example see RewardFunction. This may also incorporate a notion of the cost of making a measurement. The values returned may be scalar or vector in the case of multi-objective optimisation. Metrics may be of any type and in any units.
  - n_grid_points (int, optional) – Number of grid points to search along each axis. See Ns in brute(). Default is 10.
  - generate_full_output (bool, optional) – If True, returns the evaluation grid and the objective function's values on it.
  - finish (bool, optional) – A polishing function can be applied to the result of the brute-force minimisation. If True this is set as fmin(), which minimises a function using the downhill simplex algorithm. By default no polishing function is applied.
  - disp (bool, optional) – Set to True to print convergence messages from the finish callable.
- generate_full_output: bool
If True, returns the evaluation grid and the objective function’s values on it.
- finish: bool
A polishing function can be applied to the result of the brute-force minimisation. If True this is set as fmin(), which minimises a function using the downhill simplex algorithm. By default no polishing function is applied.
- get_full_output()[source]
Returns the output generated when generate_full_output=True for the most recent time step. This is the evaluation grid and the reward function's values on it, as generated by the optimize.brute() method. See the SciPy documentation for full details.
- Returns:
full_output
- choose_actions(tracks, timestamp, nchoose=1, **kwargs)
Returns a chosen [list of] action(s) from the action set for each sensor. The chosen action(s) are selected by finding the configuration of sensors and actions which returns the maximum reward, as calculated by a reward function.
- Parameters:
  - tracks (set of Track) – Set of tracks at given time. Used in reward function.
  - timestamp (datetime.datetime) – Time until which the actions are carried out.
  - nchoose (int) – Number of actions from the set to choose (default is 1).
- reward_function: Callable
A function or class designed to work out the reward associated with an action or set of actions. For an example see RewardFunction. This may also incorporate a notion of the cost of making a measurement. The values returned may be scalar or vector in the case of multi-objective optimisation. Metrics may be of any type and in any units.
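The grid-search idea can be sketched in plain Python. Because SciPy's optimisers minimise, the reward is negated before the grid evaluation. The toy reward and the grid bounds below are invented; the real manager evaluates the configured reward_function over the sensors' action spaces:

```python
import math


def toy_reward(dwell_angle):
    """Invented reward, peaking at a dwell angle of 80 degrees."""
    return math.exp(-((dwell_angle - 80.0) ** 2) / 500.0)


def negated_reward(dwell_angle):
    # brute() minimises, so the maximum reward is the minimum negated reward
    return -toy_reward(dwell_angle)


lo, hi, n_grid_points = 0.0, 180.0, 10
step = (hi - lo) / (n_grid_points - 1)
grid = [lo + i * step for i in range(n_grid_points)]  # 0, 20, ..., 180

values = [negated_reward(x) for x in grid]
best_x = grid[values.index(min(values))]  # grid point with the highest reward
```

With finish=True, a local polisher such as fmin() would then refine the best grid point.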
- class stonesoup.sensormanager.optimise.OptimizeBasinHoppingSensorManager(sensors: Set[Sensor], reward_function: Callable = None, n_iter: int = 100, T: float = 1.0, stepsize: float = 0.5, interval: int = 50, disp: bool = False, niter_success: int = None)[source]
Bases: _OptimizeSensorManager
A sensor manager built around the SciPy optimize.basinhopping() method. The sensor manager takes all possible configurations of sensors and actions and uses the optimising function to optimise a given reward function, returning the optimal configuration for the sensing system.
basinhopping() finds the global minimum of a function using the basin-hopping algorithm, a combination of a global stepping algorithm and local minimisation at each step.
A default version of the optimiser is used, or on initialisation the sensor manager can be passed parameters to alter the configuration of the optimiser. Please see the SciPy documentation for full details on what each parameter does.
- Parameters:
  - sensors (Set[Sensor]) – The sensor(s) which the sensor manager is managing. These must be capable of returning available actions.
  - reward_function (Callable, optional) – A function or class designed to work out the reward associated with an action or set of actions. For an example see RewardFunction. This may also incorporate a notion of the cost of making a measurement. The values returned may be scalar or vector in the case of multi-objective optimisation. Metrics may be of any type and in any units.
  - n_iter (int, optional) – The number of basin-hopping iterations.
  - T (float, optional) – The “temperature” parameter for the accept or reject criterion. Higher temperatures mean larger jumps in function value will be accepted.
  - stepsize (float, optional) – Maximum step size for use in the random displacement.
  - interval (int, optional) – Interval for how often to update the stepsize.
  - disp (bool, optional) – Set to True to print status messages.
  - niter_success (int, optional) – Stop the run if the global minimum candidate remains the same for this number of iterations.
- T: float
The “temperature” parameter for the accept or reject criterion. Higher temperatures mean larger jumps in function value will be accepted.
- niter_success: int
Stop the run if the global minimum candidate remains the same for this number of iterations.
- choose_actions(tracks, timestamp, nchoose=1, **kwargs)
Returns a chosen [list of] action(s) from the action set for each sensor. The chosen action(s) are selected by finding the configuration of sensors and actions which returns the maximum reward, as calculated by a reward function.
- Parameters:
  - tracks (set of Track) – Set of tracks at given time. Used in reward function.
  - timestamp (datetime.datetime) – Time until which the actions are carried out.
  - nchoose (int) – Number of actions from the set to choose (default is 1).
- reward_function: Callable
A function or class designed to work out the reward associated with an action or set of actions. For an example see RewardFunction. This may also incorporate a notion of the cost of making a measurement. The values returned may be scalar or vector in the case of multi-objective optimisation. Metrics may be of any type and in any units.
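The basin-hopping loop can be sketched in plain Python: a random displacement bounded by stepsize, a crude local minimisation, and a Metropolis accept/reject governed by the temperature T. The objective and the local minimiser here are invented stand-ins, not the SciPy internals:

```python
import math
import random

random.seed(0)


def objective(x):
    """Toy objective: shallow basin near 0, global minimum near x = 1.8."""
    return 0.1 * (x ** 2) - math.exp(-((x - 2.0) ** 2))


def local_minimise(f, x, step=0.01, iters=200):
    """Crude hill-descending stand-in for the local minimisation step."""
    for _ in range(iters):
        if f(x + step) < f(x):
            x += step
        elif f(x - step) < f(x):
            x -= step
    return x


def basin_hop(f, x0, n_iter=50, T=1.0, stepsize=0.5):
    x = local_minimise(f, x0)
    best_x, best_f = x, f(x)
    for _ in range(n_iter):
        # Global step: random displacement bounded by stepsize, then descend
        candidate = local_minimise(f, x + random.uniform(-stepsize, stepsize))
        df = f(candidate) - f(x)
        # Metropolis criterion: always accept improvements; occasionally
        # accept worse basins, more often at higher temperature T
        if df < 0 or random.random() < math.exp(-df / T):
            x = candidate
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f


best_x, best_f = basin_hop(objective, x0=-3.0)
```

Even when started in the wrong basin (x0 = -3), the random hops let the search escape the shallow local minimum near zero.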
Reward Functions
- class stonesoup.sensormanager.reward.RewardFunction[source]
The reward function base class.
A reward function is a callable used by a sensor manager to determine the best choice of action(s) for a sensor or group of sensors to take. For a given configuration of sensors and actions the reward function calculates a metric to evaluate how useful that choice of actions would be with a particular objective or objectives in mind. The sensor manager algorithm compares this metric for different possible configurations and chooses the appropriate sensing configuration to use at that time step.
- __call__(config: Mapping[Sensor, Sequence[Action]], tracks: Set[Track], metric_time: datetime, *args, **kwargs)[source]
A method which returns a reward metric based on information about the state of the system, the sensors, and the possible actions they can take. This requires a mapping of sensors to action(s) to be evaluated by the reward function, a set of tracks at a given time, and the time until which the actions would be carried out.
- Returns:
Calculated metric
- __init__()
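A custom reward function follows the callable signature above. The sketch below uses plain strings in place of Sensor, Action, and Track objects; the class and its counting reward are invented for illustration:

```python
import datetime


class CountActionsReward:
    """Illustrative reward function: one unit of reward per tasked action."""

    def __call__(self, config, tracks, metric_time, *args, **kwargs):
        # A real reward function would score the config against the tracks;
        # here the metric is simply the number of actions tasked
        return float(sum(len(actions) for actions in config.values()))


reward_function = CountActionsReward()
metric = reward_function(
    config={"radar": ["look_north"], "camera": ["zoom", "pan"]},
    tracks=set(),
    metric_time=datetime.datetime(2024, 1, 1),
)
```

A sensor manager would call this once per candidate configuration and compare the returned metrics.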
- class stonesoup.sensormanager.reward.UncertaintyRewardFunction(predictor: KalmanPredictor, updater: ExtendedKalmanUpdater, method_sum: bool = True)[source]
Bases: RewardFunction
A reward function which calculates the potential reduction in the uncertainty of track estimates if a particular action is taken by a sensor or group of sensors.
Given a configuration of sensors and actions, a metric is calculated for the potential reduction in the uncertainty of the tracks that would occur if the sensing configuration were used to make an observation. A larger value indicates a greater reduction in uncertainty.
- Parameters:
  - predictor (KalmanPredictor) – Predictor used to predict the track to a new state.
  - updater (ExtendedKalmanUpdater) – Updater used to update the track to the new state.
  - method_sum (bool, optional) – Determines the method of calculating the reward. The default calculates the sum across all targets; otherwise the mean over all targets is calculated.
- predictor: KalmanPredictor
Predictor used to predict the track to a new state.
- updater: ExtendedKalmanUpdater
Updater used to update the track to the new state.
- method_sum: bool
Determines the method of calculating the reward. The default calculates the sum across all targets; otherwise the mean over all targets is calculated.
- __call__(config: Mapping[Sensor, Sequence[Action]], tracks: Set[Track], metric_time: datetime, *args, **kwargs)[source]
For a given configuration of sensors and actions this reward function calculates the potential uncertainty reduction of each track by computing the difference between the covariance matrix norms of the prediction and the posterior assuming a predicted measurement corresponding to that prediction.
This requires a mapping of sensors to action(s) to be evaluated by the reward function, a set of tracks at a given time, and the time until which the actions would be carried out.
The metric returned is the total potential reduction in uncertainty across all tracks.
- Returns:
Metric of uncertainty for given configuration
- __init__(predictor: KalmanPredictor, updater: ExtendedKalmanUpdater, method_sum: bool = True)
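The idea can be illustrated with a one-dimensional sketch: the reward is the drop in variance between the prediction and the posterior after a scalar Kalman update. The numbers are invented, and the real class uses the configured predictor and updater with full covariance matrices:

```python
prior_var = 4.0  # variance of the predicted track state
R = 1.0          # measurement noise variance of the candidate action

# Scalar Kalman update: gain K = P / (P + R); posterior variance (1 - K) * P
K = prior_var / (prior_var + R)
posterior_var = (1.0 - K) * prior_var

# Reward: potential reduction in uncertainty (larger is better)
uncertainty_reward = prior_var - posterior_var
```

Actions with lower measurement noise R yield a larger variance reduction, so the manager is steered toward the most informative observations.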