bindsnet.environment package

Submodules

bindsnet.environment.environment module

class bindsnet.environment.environment.Environment[source]

Bases: abc.ABC

Abstract environment class.

close() → None[source]

Abstract method header for close().

preprocess() → None[source]

Abstract method header for preprocess().

render() → None[source]

Abstract method header for render().

reset() → None[source]

Abstract method header for reset().

step(a: int) → Tuple[Any, ...][source]

Abstract method header for step().

Parameters: a – Integer action to take in the environment.
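
To make the contract concrete, here is a minimal sketch of a subclass that implements every abstract method. The ConstantEnvironment name and its zero-observation dynamics are hypothetical, for illustration only:

    from typing import Any, Tuple

    import torch

    from bindsnet.environment import Environment


    class ConstantEnvironment(Environment):
        # Toy environment whose observation is always a zero tensor.

        def __init__(self, obs_shape: Tuple[int, ...] = (10,)) -> None:
            self.obs_shape = obs_shape

        def reset(self) -> torch.Tensor:
            # Return the initial observation.
            return torch.zeros(self.obs_shape)

        def step(self, a: int) -> Tuple[Any, ...]:
            # Observation, reward, done flag, info dictionary.
            return torch.zeros(self.obs_shape), 0.0, False, {}

        def render(self) -> None:
            pass

        def close(self) -> None:
            pass

        def preprocess(self) -> None:
            pass
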
class bindsnet.environment.environment.GymEnvironment(name: str, encoder: bindsnet.encoding.encoders.Encoder = <bindsnet.encoding.encoders.NullEncoder object>, **kwargs)[source]

Bases: bindsnet.environment.environment.Environment

A wrapper around OpenAI gym environments.

Initializes the environment wrapper. This class assumes that the gym environment provides an observation that is either an image of shape HxW or CxHxW (a channel dimension is added to HxW tensors) or a 1D vector, in which case no dimensions are added.

Parameters:
  • name – The name of an OpenAI gym environment.
  • encoder – Function to encode observations into spike trains.

Keyword arguments:
  • max_prob (float) – Maximum spiking probability.
  • clip_rewards (bool) – Whether to clip rewards to their sign with np.sign.
  • history (int) – Number of observations to keep in the history.
  • delta (int) – Step size between observations saved in the history.
  • add_channel_dim (bool) – Whether to add a channel dimension to 2D (HxW) inputs.
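
A minimal construction sketch follows. It assumes a Gym installation with the Atari environments available ('BreakoutDeterministic-v4' is only an example id) and uses BernoulliEncoder from bindsnet.encoding.encoders; check the encoder arguments against your installed version:

    from bindsnet.encoding.encoders import BernoulliEncoder
    from bindsnet.environment import GymEnvironment

    # Encode each observation into 100 time steps of Bernoulli spikes.
    # The environment id assumes Gym's Atari extras are installed.
    env = GymEnvironment(
        'BreakoutDeterministic-v4',
        encoder=BernoulliEncoder(time=100, dt=1.0),
        history=4,
        delta=3,
    )
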
close() → None[source]

Wrapper around the OpenAI gym environment close() function.

preprocess() → None[source]

Pre-processing step for an observation from a gym environment.

render() → None[source]

Wrapper around the OpenAI gym environment render() function.

reset() → torch.Tensor[source]

Wrapper around the OpenAI gym environment reset() function.

Returns: Observation from the environment.
step(a: int) → Tuple[torch.Tensor, float, bool, Dict[Any, Any]][source]

Wrapper around the OpenAI gym environment step() function.

Parameters: a – Action to take in the environment.
Returns: Observation, reward, done flag, and information dictionary.
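
Taken together, reset(), step(), render(), and close() support a standard interaction loop. A sketch, assuming the wrapper exposes the underlying Gym action_space attribute (check your BindsNET version); env is the GymEnvironment constructed above:

    obs = env.reset()  # torch.Tensor observation

    done = False
    while not done:
        # Sample a random action; assumes env.action_space is exposed.
        action = env.action_space.sample()
        obs, reward, done, info = env.step(action)
        env.render()

    env.close()
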
update_history() → None[source]

Updates the observation history by storing the difference between the most recent observation and the sum of the previously stored observations. If there are not yet enough observations to take a difference, the observation is stored without differencing.

update_index() → None[source]

Updates the index used to track the history. For example, history = 4 and delta = 3 produce history keys {1, 4, 7, 10}; self.history_index advances in steps of self.delta and wraps around the keys of the history dictionary.
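
The bookkeeping that update_history() and update_index() describe can be illustrated with a simplified stand-in. This sketch is reconstructed from the docstrings above, not from the library source, so treat the details as assumptions:

    import torch

    class HistorySketch:
        # Simplified stand-in for GymEnvironment's history bookkeeping.
        def __init__(self, history: int = 4, delta: int = 3) -> None:
            self.delta = delta
            # history = 4, delta = 3 -> keys {1, 4, 7, 10}.
            self.history = {i: torch.Tensor() for i in range(1, history * delta, delta)}
            self.history_index = 1

        def update_history(self, obs: torch.Tensor) -> torch.Tensor:
            if any(v.numel() == 0 for v in self.history.values()):
                # Not enough stored observations yet: store obs unchanged.
                self.history[self.history_index] = obs
                return obs
            # Difference between the newest observation and the sum of
            # the previously stored ones.
            diff = obs - sum(self.history.values())
            self.history[self.history_index] = obs
            return diff

        def update_index(self) -> None:
            # Advance by delta and wrap around the history keys.
            if self.history_index + self.delta <= max(self.history):
                self.history_index += self.delta
            else:
                self.history_index = 1
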

Module contents

The package namespace re-exports Environment and GymEnvironment from bindsnet.environment.environment; see the class documentation above.