bindsnet.pipeline package

Submodules

bindsnet.pipeline.action module

bindsnet.pipeline.action.select_first_spike(pipeline: EnvironmentPipeline, **kwargs) int[source]

Selects the action corresponding to the neuron that spikes first. In case of equal spiking, selects randomly.

Parameters:

pipeline – EnvironmentPipeline with environment that has an integer action space and spike_record set.

Returns:

Action corresponding to the first spike in a similarly-sized output layer.

Keyword arguments:

Parameters:

output (str) – Name of output layer whose activity to base action selection on.

bindsnet.pipeline.action.select_highest(pipeline: EnvironmentPipeline, **kwargs) int[source]

Selects the action with the highest number of spikes. In case of equal spiking, selects randomly.

Parameters:

pipeline – EnvironmentPipeline with environment that has an integer action space and spike_record set.

Returns:

Action with the highest spike count over a similarly-sized output layer.

Keyword arguments:

Parameters:

output (str) – Name of output layer whose activity to base action selection on.

bindsnet.pipeline.action.select_multinomial(pipeline: EnvironmentPipeline, **kwargs) int[source]

Selects an action probabilistically based on spiking activity from a network layer.

Parameters:

pipeline – EnvironmentPipeline with environment that has an integer action space.

Returns:

Action sampled from a multinomial distribution over the activity of a similarly-sized output layer.

Keyword arguments:

Parameters:

output (str) – Name of output layer whose activity to base action selection on.

bindsnet.pipeline.action.select_random(pipeline: EnvironmentPipeline, **kwargs) int[source]

Selects an action randomly from the action space.

Parameters:

pipeline – EnvironmentPipeline with environment that has an integer action space.

Returns:

Action sampled uniformly at random from the pipeline's action space.

bindsnet.pipeline.action.select_softmax(pipeline: EnvironmentPipeline, **kwargs) int[source]

Selects an action using a softmax function over spiking activity from a network layer.

Parameters:

pipeline – EnvironmentPipeline with environment that has an integer action space and spike_record set.

Returns:

Action sampled from a softmax over the activity of a similarly-sized output layer.

Keyword arguments:

Parameters:

output (str) – Name of output layer whose activity to base action selection on.
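
These functions are not usually called directly; they are passed as the action_function of an EnvironmentPipeline, which forwards its output keyword to them on every step. The sketch below shows the contract such a function must satisfy by re-implementing a "highest activity" selector; the function name and the layer name "Output" are illustrative only.

import torch

from bindsnet.pipeline import EnvironmentPipeline


def select_most_active(pipeline: EnvironmentPipeline, **kwargs) -> int:
    # Illustrative custom selector: sum spikes per output neuron over the
    # recorded simulation window and return the index of the most active one.
    output = kwargs["output"]  # name of the output layer, e.g. "Output"
    spikes = pipeline.spike_record[output].sum(dim=0).squeeze()
    return int(torch.argmax(spikes).item())

A function written this way can be supplied via action_function=select_most_active, exactly like the built-in selectors above.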

bindsnet.pipeline.base_pipeline module

class bindsnet.pipeline.base_pipeline.BasePipeline(network: Network, **kwargs)[source]

Bases: object

A generic pipeline that handles high level functionality.

Initializes the pipeline.

Parameters:

network – Arbitrary network object; it will be managed by the BasePipeline class.

Keyword arguments:

Parameters:
  • save_interval (int) – How often to save the network to disk.

  • save_dir (str) – Directory to save network object to.

  • plot_config (Dict[str, Any]) – Dict containing the plot configuration. Includes length, type ("color" or "line"), and interval per plot type.

  • print_interval (int) – Interval to print text output.

  • allow_gpu (bool) – Allows automatic transfer to the GPU.

get_spike_data() Dict[str, Tensor][source]

Get the spike data from all layers in the pipeline’s network.

Returns:

A dictionary containing all spike monitors from the network.

get_voltage_data() Tuple[Dict[str, Tensor], Dict[str, Tensor]][source]

Get the voltage data and threshold value from all applicable layers in the pipeline’s network.

Returns:

Two dictionaries containing the voltage data and threshold values from the network.
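
For instance, after running a step, the recorded data can be pulled out for custom analysis. The snippet below is a sketch and assumes pipeline is an already-constructed BasePipeline subclass whose network has been run at least once.

# `pipeline` is an already-constructed BasePipeline subclass (assumption).
spikes = pipeline.get_spike_data()                # {layer_name: spike tensor}
voltages, thresholds = pipeline.get_voltage_data()

for layer, s in spikes.items():
    print(layer, tuple(s.shape), int(s.sum().item()), "spikes")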

init_fn() None[source]

Placeholder function for subclass-specific actions that need to happen during the construction of the BasePipeline.

plots(batch: Any, step_out: Any) None[source]

Create any plots and logs for a step given the input batch and step output.

Parameters:
  • batch – The current batch. This could be anything as long as the subclass agrees upon the format in some way.

  • step_out – The output from the step_() method.

reset_state_variables() None[source]

Reset the pipeline.

step(batch: Any, **kwargs) Any[source]

Single step of any pipeline at a high level.

Parameters:

batch – A batch of inputs to be handed to the step_() function. Standard in subclasses of BasePipeline.

Returns:

The output from the subclass's step_() method, which could be anything; it is passed on to plots() so plotting can accommodate arbitrary outputs.

step_(batch: Any, **kwargs) Any[source]

Perform a pass of the network given the input batch.

Parameters:

batch – The current batch. This could be anything as long as the subclass agrees upon the format in some way.

Returns:

Any output that is needed for recording purposes.

test() None[source]

A fully self-contained test function.

train() None[source]

A fully self-contained training loop.
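
BasePipeline is a template: step() takes care of device transfer, periodic printing, and saving, then delegates the actual work to step_() and hands the result to plots(). A minimal, purely illustrative subclass therefore only needs to fill in the hooks, as sketched below (the class name and the 10 ms run time are assumptions).

from typing import Any

from bindsnet.pipeline.base_pipeline import BasePipeline


class PrintingPipeline(BasePipeline):
    # Hypothetical subclass used only to illustrate the hook methods.

    def init_fn(self) -> None:
        # Subclass-specific setup, run at the end of BasePipeline.__init__.
        self.n_steps = 0

    def step_(self, batch: Any, **kwargs) -> Any:
        # Run the wrapped network on the batch; the batch is assumed to be a
        # dict of input tensors keyed by input layer name.
        self.network.run(inputs=batch, time=10)
        self.n_steps += 1
        return self.n_steps

    def plots(self, batch: Any, step_out: Any) -> None:
        # Receives the batch and whatever step_() returned.
        print(f"completed step {step_out}")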

bindsnet.pipeline.base_pipeline.recursive_to(item, device)[source]

Recursively transfers everything contained in item to the target device.

Parameters:
  • item – An individual tensor or container of tensors.

  • device – torch.device pointing to "cuda" or "cpu".

Returns:

A version of the item that has been sent to a device.
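
recursive_to is used internally to move a whole batch onto the pipeline's device; it walks nested containers of tensors. A quick usage sketch:

import torch

from bindsnet.pipeline.base_pipeline import recursive_to

batch = {
    "image": torch.rand(1, 28, 28),
    "extras": [torch.tensor([1]), torch.tensor([2])],
}

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
batch = recursive_to(batch, device)  # every contained tensor now lives on `device`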

bindsnet.pipeline.dataloader_pipeline module

class bindsnet.pipeline.dataloader_pipeline.DataLoaderPipeline(network: Network, train_ds: Dataset, test_ds: Dataset | None = None, **kwargs)[source]

Bases: BasePipeline

A generic DataLoader pipeline that leverages the torch.utils.data setup. It still needs to be subclassed to implement dataset-specific behavior for some functions. An example can be seen in TorchVisionDatasetPipeline.

Initializes the pipeline.

Parameters:
  • network – Arbitrary network object.

  • train_ds – Arbitrary torch.utils.data.Dataset object.

  • test_ds – Arbitrary torch.utils.data.Dataset object.

test() None[source]

A fully self-contained test function.

train() None[source]

Training loop that runs for the set number of epochs and creates a new DataLoader at each epoch.

class bindsnet.pipeline.dataloader_pipeline.TorchVisionDatasetPipeline(network: Network, train_ds: Dataset, pipeline_analyzer: PipelineAnalyzer | None = None, **kwargs)[source]

Bases: DataLoaderPipeline

An example implementation of DataLoaderPipeline that works with the datasets in bindsnet.datasets that wrap torchvision.datasets classes. These are documented in bindsnet/datasets/README.md. This specific class runs an unsupervised network.

Initializes the pipeline.

Parameters:
  • network – Arbitrary network object.

  • train_ds – A torchvision.datasets wrapper dataset from bindsnet.datasets.

Keyword arguments:

Parameters:

input_layer (str) – Layer of the network that receives input.
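
The sketch below shows how the pipeline is typically wired up. The DiehlAndCook2015 model, the 50 ms Poisson encoding window, and the data path are illustrative choices; the input layer of that model is named "X", which is what input_layer must match.

from torchvision import transforms

from bindsnet.datasets import MNIST
from bindsnet.encoding import PoissonEncoder
from bindsnet.models import DiehlAndCook2015
from bindsnet.pipeline import TorchVisionDatasetPipeline

# Unsupervised network whose input layer is named "X".
network = DiehlAndCook2015(n_inpt=28 * 28, n_neurons=100)

# bindsnet's MNIST wrapper takes an image encoder and a label encoder
# before the usual torchvision dataset arguments.
train_ds = MNIST(
    PoissonEncoder(time=50, dt=1.0),
    None,
    root="data/MNIST",
    download=True,
    transform=transforms.Compose([transforms.ToTensor()]),
)

pipeline = TorchVisionDatasetPipeline(network, train_ds, input_layer="X")
pipeline.train()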

init_fn() None[source]

Placeholder function for subclass-specific actions that need to happen during the construction of the BasePipeline.

plots(batch: Dict[str, Tensor], *args) None[source]

Create any plots and logs for a step given the input batch.

Parameters:

batch – A dictionary of the current batch. Includes image, label and encoded versions.

step_(batch: Dict[str, Tensor], **kwargs) None[source]

Perform a pass of the network given the input batch. Training is unsupervised, so everything is stored inside the network object and this method returns None.

Parameters:

batch – A dictionary of the current batch. Includes image, label and encoded versions.

test_step()[source]

bindsnet.pipeline.environment_pipeline module

class bindsnet.pipeline.environment_pipeline.EnvironmentPipeline(network: Network, environment: Environment, action_function: Callable | None = None, encoding: Callable | None = None, **kwargs)[source]

Bases: BasePipeline

Abstracts the interaction between Network, Environment, and environment feedback action.

Initializes the pipeline.

Parameters:
  • network – Arbitrary network object.

  • environment – Arbitrary environment.

  • action_function – Function to convert network outputs into environment inputs.

  • encoding – Function to encode the input.

Keyword arguments:

Parameters:
  • device (str) – PyTorch computing device.

  • encode_factor – Coefficient applied to the input before encoding.

  • num_episodes (int) – Number of episodes to train for. Defaults to 100.

  • output (str) – String name of the layer from which to take output.

  • render_interval (int) – Interval at which to render the environment.

  • reward_delay (int) – How many iterations to delay delivery of reward.

  • time (int) – Time for which to run the network. Defaults to the network's timestep.

  • overlay_input (int) – Overlay the last X previous inputs.

  • percent_of_random_action (float) – Chance of choosing a random action.

  • random_action_after (int) – Take a random action if the same output action has been selected this many times in a row.

env_step() Tuple[Tensor, float, bool, Dict][source]

Single step of the environment which includes rendering, getting and performing the action, and accumulating/delaying rewards.

Returns:

An OpenAI gym compatible tuple with modified reward and info.

init_fn() None[source]

Placeholder function for subclass-specific actions that need to happen during the construction of the BasePipeline.

plots(gym_batch: Tuple[Tensor, float, bool, Dict], *args) None[source]

Plot the encoded input, layer spikes, and layer voltages.

Parameters:

gym_batch – An OpenAI gym compatible tuple.

reset_state_variables() None[source]

Reset the pipeline.

step_(gym_batch: Tuple[Tensor, float, bool, Dict], **kwargs) None[source]

Run a single iteration of the network and update it and the reward list when done.

Parameters:

gym_batch – An OpenAI gym compatible tuple.

train(**kwargs) None[source]

Trains for the specified number of episodes. Each episode can be of arbitrary length.
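
A condensed usage sketch, loosely following BindsNET's Gym examples: the environment name, layer sizes, and hyperparameters below are illustrative and assume a Gym installation that provides "BreakoutDeterministic-v4" (80x80 preprocessed frames, 4 actions).

from bindsnet.encoding import bernoulli
from bindsnet.environment import GymEnvironment
from bindsnet.network import Network
from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import Connection
from bindsnet.pipeline import EnvironmentPipeline
from bindsnet.pipeline.action import select_softmax

# Two-layer network: 80x80 input -> 4 output neurons (one per action).
network = Network(dt=1.0)
inpt = Input(n=80 * 80, traces=True)
out = LIFNodes(n=4, refrac=0, traces=True)
network.add_layer(inpt, name="Input")
network.add_layer(out, name="Output")
network.add_connection(Connection(source=inpt, target=out), source="Input", target="Output")

# Environment with an integer action space, wrapped for BindsNET.
environment = GymEnvironment("BreakoutDeterministic-v4")
environment.reset()

pipeline = EnvironmentPipeline(
    network,
    environment,
    encoding=bernoulli,
    action_function=select_softmax,
    output="Output",   # layer whose spikes drive action selection
    time=100,
    num_episodes=10,
)
pipeline.train()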

Module contents

class bindsnet.pipeline.BasePipeline(network: Network, **kwargs)[source]

Bases: object

A generic pipeline that handles high level functionality.

Initializes the pipeline.

Parameters:

network – Arbitrary network object; it will be managed by the BasePipeline class.

Keyword arguments:

Parameters:
  • save_interval (int) – How often to save the network to disk.

  • save_dir (str) – Directory to save network object to.

  • plot_config (Dict[str, Any]) – Dict containing the plot configuration. Includes length, type ("color" or "line"), and interval per plot type.

  • print_interval (int) – Interval to print text output.

  • allow_gpu (bool) – Allows automatic transfer to the GPU.

get_spike_data() Dict[str, Tensor][source]

Get the spike data from all layers in the pipeline’s network.

Returns:

A dictionary containing all spike monitors from the network.

get_voltage_data() Tuple[Dict[str, Tensor], Dict[str, Tensor]][source]

Get the voltage data and threshold value from all applicable layers in the pipeline’s network.

Returns:

Two dictionaries containing the voltage data and threshold values from the network.

init_fn() None[source]

Placeholder function for subclass-specific actions that need to happen during the construction of the BasePipeline.

plots(batch: Any, step_out: Any) None[source]

Create any plots and logs for a step given the input batch and step output.

Parameters:
  • batch – The current batch. This could be anything as long as the subclass agrees upon the format in some way.

  • step_out – The output from the step_() method.

reset_state_variables() None[source]

Reset the pipeline.

step(batch: Any, **kwargs) Any[source]

Single step of any pipeline at a high level.

Parameters:

batch – A batch of inputs to be handed to the step_() function. Standard in subclasses of BasePipeline.

Returns:

The output from the subclass's step_() method, which could be anything; it is passed on to plots() so plotting can accommodate arbitrary outputs.

step_(batch: Any, **kwargs) Any[source]

Perform a pass of the network given the input batch.

Parameters:

batch – The current batch. This could be anything as long as the subclass agrees upon the format in some way.

Returns:

Any output that is needed for recording purposes.

test() None[source]

A fully self-contained test function.

train() None[source]

A fully self-contained training loop.

class bindsnet.pipeline.DataLoaderPipeline(network: Network, train_ds: Dataset, test_ds: Dataset | None = None, **kwargs)[source]

Bases: BasePipeline

A generic DataLoader pipeline that leverages the torch.utils.data setup. It still needs to be subclassed to implement dataset-specific behavior for some functions. An example can be seen in TorchVisionDatasetPipeline.

Initializes the pipeline.

Parameters:
  • network – Arbitrary network object.

  • train_ds – Arbitrary torch.utils.data.Dataset object.

  • test_ds – Arbitrary torch.utils.data.Dataset object.

test() None[source]

A fully self-contained test function.

train() None[source]

Training loop that runs for the set number of epochs and creates a new DataLoader at each epoch.

class bindsnet.pipeline.EnvironmentPipeline(network: Network, environment: Environment, action_function: Callable | None = None, encoding: Callable | None = None, **kwargs)[source]

Bases: BasePipeline

Abstracts the interaction between Network, Environment, and environment feedback action.

Initializes the pipeline.

Parameters:
  • network – Arbitrary network object.

  • environment – Arbitrary environment.

  • action_function – Function to convert network outputs into environment inputs.

  • encoding – Function to encode the input.

Keyword arguments:

Parameters:
  • device (str) – PyTorch computing device.

  • encode_factor – Coefficient applied to the input before encoding.

  • num_episodes (int) – Number of episodes to train for. Defaults to 100.

  • output (str) – String name of the layer from which to take output.

  • render_interval (int) – Interval at which to render the environment.

  • reward_delay (int) – How many iterations to delay delivery of reward.

  • time (int) – Time for which to run the network. Defaults to the network's timestep.

  • overlay_input (int) – Overlay the last X previous inputs.

  • percent_of_random_action (float) – Chance of choosing a random action.

  • random_action_after (int) – Take a random action if the same output action has been selected this many times in a row.

env_step() Tuple[Tensor, float, bool, Dict][source]

Single step of the environment which includes rendering, getting and performing the action, and accumulating/delaying rewards.

Returns:

An OpenAI gym compatible tuple with modified reward and info.

init_fn() None[source]

Placeholder function for subclass-specific actions that need to happen during the construction of the BasePipeline.

plots(gym_batch: Tuple[Tensor, float, bool, Dict], *args) None[source]

Plot the encoded input, layer spikes, and layer voltages.

Parameters:

gym_batch – An OpenAI gym compatible tuple.

reset_state_variables() None[source]

Reset the pipeline.

step_(gym_batch: Tuple[Tensor, float, bool, Dict], **kwargs) None[source]

Run a single iteration of the network and update it and the reward list when done.

Parameters:

gym_batch – An OpenAI gym compatible tuple.

train(**kwargs) None[source]

Trains for the specified number of episodes. Each episode can be of arbitrary length.

class bindsnet.pipeline.TorchVisionDatasetPipeline(network: Network, train_ds: Dataset, pipeline_analyzer: PipelineAnalyzer | None = None, **kwargs)[source]

Bases: DataLoaderPipeline

An example implementation of DataLoaderPipeline that works with the datasets in bindsnet.datasets that wrap torchvision.datasets classes. These are documented in bindsnet/datasets/README.md. This specific class runs an unsupervised network.

Initializes the pipeline.

Parameters:
  • network – Arbitrary network object.

  • train_ds – A torchvision.datasets wrapper dataset from bindsnet.datasets.

Keyword arguments:

Parameters:

input_layer (str) – Layer of the network that receives input.

init_fn() None[source]

Placeholder function for subclass-specific actions that need to happen during the construction of the BasePipeline.

plots(batch: Dict[str, Tensor], *args) None[source]

Create any plots and logs for a step given the input batch.

Parameters:

batch – A dictionary of the current batch. Includes image, label and encoded versions.

step_(batch: Dict[str, Tensor], **kwargs) None[source]

Perform a pass of the network given the input batch. Training is unsupervised, so everything is stored inside the network object and this method returns None.

Parameters:

batch – A dictionary of the current batch. Includes image, label and encoded versions.

test_step()[source]