bindsnet.conversion package

Submodules

bindsnet.conversion.conversion module

class bindsnet.conversion.conversion.FeatureExtractor(submodule)[source]

Bases: torch.nn.modules.module.Module

Special-purpose PyTorch module for the extraction of child modules’ activations.

Constructor for FeatureExtractor module.

Parameters:submodule – The module whose child modules’ activations are to be extracted.
forward(x: torch.Tensor) → Dict[torch.nn.modules.module.Module, torch.Tensor][source]

Forward pass of the feature extractor.

Parameters:x – Input data for the submodule.
Returns:A dictionary mapping each child module to its activation on the input.
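
Example (a minimal sketch of assumed usage; the model and shapes are illustrative, not part of the API above):

    import torch
    import torch.nn as nn
    from bindsnet.conversion import FeatureExtractor

    # Wrap a small fully connected model and collect per-layer activations
    # in a single forward pass.
    ann = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
    extractor = FeatureExtractor(ann)

    activations = extractor(torch.rand(1, 784))
    for key, activation in activations.items():
        print(key, tuple(activation.shape))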
class bindsnet.conversion.conversion.Permute(dims)[source]

Bases: torch.nn.modules.module.Module

PyTorch module for the explicit permutation of a tensor’s dimensions in a parent module’s forward pass (as opposed to torch.permute).

Constructor for Permute module.

Parameters:dims – Ordering of dimensions for permutation.
forward(x)[source]

Forward pass of permutation module.

Parameters:x – Input tensor to permute.
Returns:Permuted input tensor.
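
Example (a minimal sketch of assumed usage; shapes are illustrative):

    import torch
    import torch.nn as nn
    from bindsnet.conversion import Permute

    # Reorder a convolutional feature map from [N, C, H, W] to [N, H, W, C]
    # inside a Sequential model's forward pass.
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3),
        Permute((0, 2, 3, 1)),
    )
    out = model(torch.rand(1, 1, 28, 28))
    print(out.shape)  # torch.Size([1, 26, 26, 8])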
bindsnet.conversion.conversion.ann_to_snn(ann: Union[torch.nn.modules.module.Module, str], input_shape: Sequence[int], data: Optional[torch.Tensor] = None, percentile: float = 99.9, node_type: Optional[bindsnet.network.nodes.Nodes] = <class 'bindsnet.conversion.nodes.SubtractiveResetIFNodes'>, **kwargs) → bindsnet.network.network.Network[source]

Converts an artificial neural network (ANN) written as a torch.nn.Module into a near-equivalent spiking neural network.

Parameters:
  • ann – Artificial neural network implemented in PyTorch. Accepts either torch.nn.Module or path to network saved using torch.save().
  • input_shape – Shape of input data.
  • data – Data to use to perform data-based weight normalization of shape [n_examples, ...].
  • percentile – Percentile (in [0, 100]) of activations to scale by in data-based normalization scheme.
  • node_type – Class of Nodes to use in replacing torch.nn.Linear layers in original ANN.
Returns:

Spiking neural network implemented in PyTorch.
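
Example (a minimal sketch of assumed usage; the architecture, the flattened input_shape, and the normalization batch are illustrative):

    import torch
    import torch.nn as nn
    from bindsnet.conversion import ann_to_snn

    # A small fully connected classifier and a batch of (random) examples used
    # for data-based weight normalization.
    ann = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
    data = torch.rand(128, 784)

    snn = ann_to_snn(ann, input_shape=(784,), data=data, percentile=99.9)
    print(list(snn.layers.keys()))  # one spiking layer per convertible ANN layer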

bindsnet.conversion.conversion.data_based_normalization(ann: Union[torch.nn.modules.module.Module, str], data: torch.Tensor, percentile: float = 99.9)[source]

Use a dataset to rescale ANN weights and biases such that the maximum ReLU activation is less than 1.

Parameters:
  • ann – Artificial neural network implemented in PyTorch. Accepts either torch.nn.Module or path to network saved using torch.save().
  • data – Data to use to perform data-based weight normalization of shape [n_examples, ...].
  • percentile – Percentile (in [0, 100]) of activations to scale by in data-based normalization scheme.
Returns:

Artificial neural network with rescaled weights and biases according to activations on the dataset.
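
Example (a minimal sketch of assumed usage; the model and reference batch are illustrative):

    import torch
    import torch.nn as nn
    from bindsnet.conversion import data_based_normalization

    # Rescale weights and biases so that high-percentile ReLU activations on
    # the reference batch stay below 1; the returned model is the rescaled ANN.
    ann = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
    data = torch.rand(256, 784)

    normalized = data_based_normalization(ann, data=data, percentile=99.9)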

bindsnet.conversion.nodes module

class bindsnet.conversion.nodes.PassThroughNodes(n: Optional[int] = None, shape: Optional[Iterable[int]] = None, traces: bool = False, traces_additive: bool = False, tc_trace: Union[float, torch.Tensor] = 20.0, trace_scale: Union[float, torch.Tensor] = 1.0, sum_input: bool = False)[source]

Bases: bindsnet.network.nodes.Nodes

Layer of integrate-and-fire (IF) neurons using reset by subtraction.

Instantiates a layer of IF neurons.

Parameters:
  • n – The number of neurons in the layer.
  • shape – The dimensionality of the layer.
  • traces – Whether to record spike traces.
  • tc_trace – Time constant of spike trace decay.
  • sum_input – Whether to sum all inputs.
forward(x: torch.Tensor) → None[source]

Runs a single simulation step.

Parameters:x – Inputs to the layer.
reset_state_variables() → None[source]

Resets relevant state variables.
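
Example (a minimal sketch of assumed usage; in practice layers of this type are created internally by ann_to_snn rather than by hand):

    from bindsnet.network import Network
    from bindsnet.conversion import PassThroughNodes

    # Instantiate the layer directly and add it to a hand-built network;
    # the layer name "relay" is illustrative.
    net = Network()
    net.add_layer(PassThroughNodes(n=100), name="relay")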

class bindsnet.conversion.nodes.SubtractiveResetIFNodes(n: Optional[int] = None, shape: Optional[Iterable[int]] = None, traces: bool = False, traces_additive: bool = False, tc_trace: Union[float, torch.Tensor] = 20.0, trace_scale: Union[float, torch.Tensor] = 1.0, sum_input: bool = False, thresh: Union[float, torch.Tensor] = -52.0, reset: Union[float, torch.Tensor] = -65.0, refrac: Union[int, torch.Tensor] = 5, lbound: float = None, **kwargs)[source]

Bases: bindsnet.network.nodes.Nodes

Layer of integrate-and-fire (IF) neurons (https://bit.ly/2EOk6YN) using reset by subtraction.

Instantiates a layer of IF neurons with the subtractive reset mechanism from this paper.

Parameters:
  • n – The number of neurons in the layer.
  • shape – The dimensionality of the layer.
  • traces – Whether to record spike traces.
  • traces_additive – Whether to record spike traces additively.
  • tc_trace – Time constant of spike trace decay.
  • trace_scale – Scaling factor for spike trace.
  • sum_input – Whether to sum all inputs.
  • thresh – Spike threshold voltage.
  • reset – Post-spike reset voltage.
  • refrac – Refractory (non-firing) period of the neuron.
  • lbound – Lower bound of the voltage.
forward(x: torch.Tensor) → None[source]

Runs a single simulation step.

Parameters:x – Inputs to the layer.
reset_state_variables() → None[source]

Resets relevant state variables.

set_batch_size(batch_size) → None[source]

Sets mini-batch size. Called when layer is added to a network.

Parameters:batch_size – Mini-batch size.
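
Example (a minimal sketch of assumed usage; the threshold, reset, and refractory values are illustrative, not prescriptive):

    from bindsnet.network import Network
    from bindsnet.conversion import SubtractiveResetIFNodes

    # An IF layer with reset-by-subtraction, added to a network by hand.
    # ann_to_snn creates layers like this automatically when node_type is left
    # at its default.
    net = Network()
    layer = SubtractiveResetIFNodes(n=10, thresh=1.0, reset=0.0, refrac=0)
    net.add_layer(layer, name="output")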

bindsnet.conversion.topology module

class bindsnet.conversion.topology.ConstantPad2dConnection(source: bindsnet.network.nodes.Nodes, target: bindsnet.network.nodes.Nodes, padding: Tuple, nu: Union[float, Iterable[float], None] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.network.topology.AbstractConnection

Special-purpose connection for emulating the ConstantPad2d PyTorch module in spiking neural networks.

Constructor for ConstantPad2dConnection.

Parameters:
  • source – A layer of nodes from which the connection originates.
  • target – A layer of nodes to which the connection connects.
  • padding – Padding of input tensors; passed to torch.nn.functional.pad.
  • nu – Learning rate for both pre- and post-synaptic events.
  • weight_decay – Constant multiple to decay weights by on each iteration.

Keyword arguments:

Parameters:
  • update_rule (function) – Modifies connection parameters according to some rule.
  • wmin (float) – The minimum value on the connection weights.
  • wmax (float) – The maximum value on the connection weights.
  • norm (float) – Total weight per target neuron normalization.
compute(s: torch.Tensor)[source]

Pad input.

Parameters:s – Input.
Returns:Padded input.
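Example (a minimal sketch of assumed usage; the layer shapes, layer names, and padding tuple are illustrative):

    from bindsnet.network import Network
    from bindsnet.conversion import PassThroughNodes, ConstantPad2dConnection

    # Spikes from an 8 x 26 x 26 source are zero-padded to 8 x 28 x 28 at the
    # target; the padding tuple is passed through to torch.nn.functional.pad.
    net = Network()
    source = PassThroughNodes(shape=[8, 26, 26])
    target = PassThroughNodes(shape=[8, 28, 28])
    net.add_layer(source, name="pre")
    net.add_layer(target, name="post")
    net.add_connection(
        ConstantPad2dConnection(source=source, target=target, padding=(1, 1, 1, 1)),
        source="pre",
        target="post",
    )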
class bindsnet.conversion.topology.PermuteConnection(source: bindsnet.network.nodes.Nodes, target: bindsnet.network.nodes.Nodes, dims: Iterable[T_co], nu: Union[float, Iterable[float], None] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.network.topology.AbstractConnection

Special-purpose connection for emulating the custom Permute module in spiking neural networks.

Constructor for PermuteConnection.

Parameters:
  • source – A layer of nodes from which the connection originates.
  • target – A layer of nodes to which the connection connects.
  • dims – Order of dimensions to permute.
  • nu – Learning rate for both pre- and post-synaptic events.
  • weight_decay – Constant multiple to decay weights by on each iteration.

Keyword arguments:

Parameters:
  • update_rule (function) – Modifies connection parameters according to some rule.
  • wmin (float) – The minimum value on the connection weights.
  • wmax (float) – The maximum value on the connection weights.
  • norm (float) – Total weight per target neuron normalization.
compute(s: torch.Tensor) → torch.Tensor[source]

Permute input.

Parameters:s – Input.
Returns:Permuted input.
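Example (a minimal sketch of assumed usage; the layer shapes and names are illustrative):

    from bindsnet.network import Network
    from bindsnet.conversion import PassThroughNodes, PermuteConnection

    # Route spikes between layers while reordering dimensions from
    # [batch, C, H, W] to [batch, H, W, C]; dims includes the batch dimension.
    net = Network()
    source = PassThroughNodes(shape=[8, 26, 26])
    target = PassThroughNodes(shape=[26, 26, 8])
    net.add_layer(source, name="pre")
    net.add_layer(target, name="post")
    net.add_connection(
        PermuteConnection(source=source, target=target, dims=(0, 2, 3, 1)),
        source="pre",
        target="post",
    )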

Module contents

class bindsnet.conversion.Permute(dims)[source]

Bases: torch.nn.modules.module.Module

PyTorch module for the explicit permutation of a tensor’s dimensions in a parent module’s forward pass (as opposed to torch.permute).

Constructor for Permute module.

Parameters:dims – Ordering of dimensions for permutation.
forward(x)[source]

Forward pass of permutation module.

Parameters:x – Input tensor to permute.
Returns:Permuted input tensor.
class bindsnet.conversion.FeatureExtractor(submodule)[source]

Bases: torch.nn.modules.module.Module

Special-purpose PyTorch module for the extraction of child modules’ activations.

Constructor for FeatureExtractor module.

Parameters:submodule – The module whose child modules’ activations are to be extracted.
forward(x: torch.Tensor) → Dict[torch.nn.modules.module.Module, torch.Tensor][source]

Forward pass of the feature extractor.

Parameters:x – Input data for the submodule.
Returns:A dictionary mapping each child module to its activation on the input.
class bindsnet.conversion.SubtractiveResetIFNodes(n: Optional[int] = None, shape: Optional[Iterable[int]] = None, traces: bool = False, traces_additive: bool = False, tc_trace: Union[float, torch.Tensor] = 20.0, trace_scale: Union[float, torch.Tensor] = 1.0, sum_input: bool = False, thresh: Union[float, torch.Tensor] = -52.0, reset: Union[float, torch.Tensor] = -65.0, refrac: Union[int, torch.Tensor] = 5, lbound: float = None, **kwargs)[source]

Bases: bindsnet.network.nodes.Nodes

Layer of integrate-and-fire (IF) neurons (https://bit.ly/2EOk6YN) using reset by subtraction.

Instantiates a layer of IF neurons with the subtractive reset mechanism from this paper.

Parameters:
  • n – The number of neurons in the layer.
  • shape – The dimensionality of the layer.
  • traces – Whether to record spike traces.
  • traces_additive – Whether to record spike traces additively.
  • tc_trace – Time constant of spike trace decay.
  • trace_scale – Scaling factor for spike trace.
  • sum_input – Whether to sum all inputs.
  • thresh – Spike threshold voltage.
  • reset – Post-spike reset voltage.
  • refrac – Refractory (non-firing) period of the neuron.
  • lbound – Lower bound of the voltage.
forward(x: torch.Tensor) → None[source]

Runs a single simulation step.

Parameters:x – Inputs to the layer.
reset_state_variables() → None[source]

Resets relevant state variables.

set_batch_size(batch_size) → None[source]

Sets mini-batch size. Called when layer is added to a network.

Parameters:batch_size – Mini-batch size.
class bindsnet.conversion.PassThroughNodes(n: Optional[int] = None, shape: Optional[Iterable[int]] = None, traces: bool = False, traces_additive: bool = False, tc_trace: Union[float, torch.Tensor] = 20.0, trace_scale: Union[float, torch.Tensor] = 1.0, sum_input: bool = False)[source]

Bases: bindsnet.network.nodes.Nodes

Layer of integrate-and-fire (IF) neurons using reset by subtraction.

Instantiates a layer of IF neurons.

Parameters:
  • n – The number of neurons in the layer.
  • shape – The dimensionality of the layer.
  • traces – Whether to record spike traces.
  • tc_trace – Time constant of spike trace decay.
  • sum_input – Whether to sum all inputs.
forward(x: torch.Tensor) → None[source]

Runs a single simulation step.

Parameters:x – Inputs to the layer.
reset_state_variables() → None[source]

Resets relevant state variables.

class bindsnet.conversion.PermuteConnection(source: bindsnet.network.nodes.Nodes, target: bindsnet.network.nodes.Nodes, dims: Iterable[T_co], nu: Union[float, Iterable[float], None] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.network.topology.AbstractConnection

Special-purpose connection for emulating the custom Permute module in spiking neural networks.

Constructor for PermuteConnection.

Parameters:
  • source – A layer of nodes from which the connection originates.
  • target – A layer of nodes to which the connection connects.
  • dims – Order of dimensions to permute.
  • nu – Learning rate for both pre- and post-synaptic events.
  • weight_decay – Constant multiple to decay weights by on each iteration.

Keyword arguments:

Parameters:
  • update_rule (function) – Modifies connection parameters according to some rule.
  • wmin (float) – The minimum value on the connection weights.
  • wmax (float) – The maximum value on the connection weights.
  • norm (float) – Total weight per target neuron normalization.
compute(s: torch.Tensor) → torch.Tensor[source]

Permute input.

Parameters:s – Input.
Returns:Permuted input.
class bindsnet.conversion.ConstantPad2dConnection(source: bindsnet.network.nodes.Nodes, target: bindsnet.network.nodes.Nodes, padding: Tuple, nu: Union[float, Iterable[float], None] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.network.topology.AbstractConnection

Special-purpose connection for emulating the ConstantPad2d PyTorch module in spiking neural networks.

Constructor for ConstantPad2dConnection.

Parameters:
  • source – A layer of nodes from which the connection originates.
  • target – A layer of nodes to which the connection connects.
  • padding – Padding of input tensors; passed to torch.nn.functional.pad.
  • nu – Learning rate for both pre- and post-synaptic events.
  • weight_decay – Constant multiple to decay weights by on each iteration.

Keyword arguments:

Parameters:
  • update_rule (function) – Modifies connection parameters according to some rule.
  • wmin (float) – The minimum value on the connection weights.
  • wmax (float) – The maximum value on the connection weights.
  • norm (float) – Total weight per target neuron normalization.
compute(s: torch.Tensor)[source]

Pad input.

Parameters:s – Input.
Returns:Padded input.
bindsnet.conversion.data_based_normalization(ann: Union[torch.nn.modules.module.Module, str], data: torch.Tensor, percentile: float = 99.9)[source]

Use a dataset to rescale ANN weights and biases such that the maximum ReLU activation is less than 1.

Parameters:
  • ann – Artificial neural network implemented in PyTorch. Accepts either torch.nn.Module or path to network saved using torch.save().
  • data – Data to use to perform data-based weight normalization of shape [n_examples, ...].
  • percentile – Percentile (in [0, 100]) of activations to scale by in data-based normalization scheme.
Returns:

Artificial neural network with rescaled weights and biases according to activations on the dataset.

bindsnet.conversion.ann_to_snn(ann: Union[torch.nn.modules.module.Module, str], input_shape: Sequence[int], data: Optional[torch.Tensor] = None, percentile: float = 99.9, node_type: Optional[bindsnet.network.nodes.Nodes] = <class 'bindsnet.conversion.nodes.SubtractiveResetIFNodes'>, **kwargs) → bindsnet.network.network.Network[source]

Converts an artificial neural network (ANN) written as a torch.nn.Module into a near-equivalent spiking neural network.

Parameters:
  • ann – Artificial neural network implemented in PyTorch. Accepts either torch.nn.Module or path to network saved using torch.save().
  • input_shape – Shape of input data.
  • data – Data to use to perform data-based weight normalization of shape [n_examples, ...].
  • percentile – Percentile (in [0, 100]) of activations to scale by in data-based normalization scheme.
  • node_type – Class of Nodes to use in replacing torch.nn.Linear layers in original ANN.
Returns:

Spiking neural network implemented in PyTorch.
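
End-to-end example (a minimal sketch of assumed usage; the input layer name "Input", the [time, batch, n] input convention, and the monitor wiring are assumptions about the converted network rather than guarantees of the API above):

    import torch
    import torch.nn as nn
    from bindsnet.conversion import ann_to_snn
    from bindsnet.network.monitors import Monitor

    # Convert a small ANN, then drive the resulting spiking network with a
    # constant (rate-like) input for a fixed number of time steps.
    ann = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    snn = ann_to_snn(ann, input_shape=(784,), data=torch.rand(64, 784))

    time = 100
    out_name = list(snn.layers.keys())[-1]  # assumed: last layer is the output
    snn.add_monitor(Monitor(snn.layers[out_name], state_vars=["s"], time=time), name="out")

    x = torch.rand(1, 784)                                       # a single example
    snn.run(inputs={"Input": x.repeat(time, 1, 1)}, time=time)   # shape [time, 1, 784]

    spikes = snn.monitors["out"].get("s")   # recorded output spikes over time
    print(spikes.sum(0))                    # per-neuron spike counts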