bindsnet.network package

Submodules

bindsnet.network.monitors module

class bindsnet.network.monitors.AbstractMonitor[source]

Bases: abc.ABC

Abstract base class for state variable monitors.

class bindsnet.network.monitors.Monitor(obj: Union[bindsnet.network.nodes.Nodes, bindsnet.network.topology.AbstractConnection], state_vars: Iterable[str], time: Optional[int] = None, batch_size: int = 1, device: str = 'cpu')[source]

Bases: bindsnet.network.monitors.AbstractMonitor

Records state variables of interest.

Constructs a Monitor object.

Parameters:
  • obj – An object to record state variables from during network simulation.
  • state_vars – Iterable of strings indicating names of state variables to record.
  • time – If not None, pre-allocate memory for state variable recording.
  • batch_size – Mini-batch size of the recorded state variables.
  • device – Device on which to store the recordings; may differ from the network's device.
get(var: str) → torch.Tensor[source]

Return recording to user.

Parameters:var – State variable recording to return.
Returns:Tensor of shape [time, n_1, ..., n_k], where [n_1, ..., n_k] is the shape of the recorded state variable. Note: if time is None, get() returns the recording so far and empties the monitor's internal buffer.

record() → None[source]

Appends the current value of the recorded state variables to the recording.

reset_state_variables() → None[source]

Resets recordings to empty lists.
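
A minimal usage sketch (the layer sizes, names, and input spike train here are illustrative, and the recording shape follows the get() description above):

import torch

from bindsnet.network import Network, nodes, topology, monitors

network = Network(dt=1.0)
X = nodes.Input(100)
Y = nodes.LIFNodes(50)
network.add_layer(layer=X, name='X')
network.add_layer(layer=Y, name='Y')
network.add_connection(topology.Connection(source=X, target=Y, w=torch.rand(X.n, Y.n)), source='X', target='Y')

# Monitor spikes and voltages of the LIF layer; pre-allocate 250 timesteps.
M = monitors.Monitor(obj=Y, state_vars=['s', 'v'], time=250)
network.add_monitor(monitor=M, name='Y')

spikes_in = torch.bernoulli(0.5 * torch.rand(250, 100)).byte()
network.run(inputs={'X': spikes_in}, time=250)

s = M.get('s')  # Spike recording.
v = M.get('v')  # Voltage recording.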

class bindsnet.network.monitors.NetworkMonitor(network: Network, layers: Optional[Iterable[str]] = None, connections: Optional[Iterable[str]] = None, state_vars: Optional[Iterable[str]] = None, time: Optional[int] = None)[source]

Bases: bindsnet.network.monitors.AbstractMonitor

Record state variables of all layers and connections.

Constructs a NetworkMonitor object.

Parameters:
  • network – Network to record state variables from.
  • layers – Layers to record state variables from.
  • connections – Connections to record state variables from.
  • state_vars – List of strings indicating names of state variables to record.
  • time – If not None, pre-allocate memory for state variable recording.
get() → Dict[str, Dict[str, Union[bindsnet.network.nodes.Nodes, bindsnet.network.topology.AbstractConnection]]][source]

Return entire recording to user.

Returns:Dictionary of dictionaries of all layers’ and connections’ recorded state variables.
record() → None[source]

Appends the current value of the recorded state variables to the recording.

reset_state_variables() → None[source]

Resets recordings to empty torch.Tensors.

save(path: str, fmt: str = 'npz') → None[source]

Write the recording dictionary out to file.

Parameters:
  • path – The directory to which to write the monitor’s recording.
  • fmt – Type of file to write to disk. One of "pickle" or "npz".
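
A short sketch of recording an entire network and writing the result to disk (layer sizes, names, and the output file name are illustrative):

import torch

from bindsnet.network import Network, nodes, monitors

network = Network(dt=1.0)
network.add_layer(layer=nodes.Input(100), name='X')
network.add_layer(layer=nodes.LIFNodes(50), name='Y')

# Record spikes (and voltages, where present) from all layers for 250 timesteps.
M = monitors.NetworkMonitor(network, state_vars=['s', 'v'], time=250)
network.add_monitor(monitor=M, name='network_monitor')

spikes_in = torch.bernoulli(0.5 * torch.rand(250, 100)).byte()
network.run(inputs={'X': spikes_in}, time=250)

recording = M.get()                 # Nested dict: object name -> state variable -> recording.
M.save('recording.npz', fmt='npz')  # Write the recording dictionary to disk.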

bindsnet.network.network module

class bindsnet.network.network.Network(dt: float = 1.0, batch_size: int = 1, learning: bool = True, reward_fn: Optional[Type[bindsnet.learning.reward.AbstractReward]] = None)[source]

Bases: torch.nn.modules.module.Module

Central object of the bindsnet package. Responsible for the simulation and interaction of nodes and connections.

Example:

import torch
import matplotlib.pyplot as plt

from bindsnet         import encoding
from bindsnet.network import Network, nodes, topology, monitors

network = Network(dt=1.0)  # Instantiates network.

X = nodes.Input(100)  # Input layer.
Y = nodes.LIFNodes(100)  # Layer of LIF neurons.
C = topology.Connection(source=X, target=Y, w=torch.rand(X.n, Y.n))  # Connection from X to Y.

# Spike monitor objects.
M1 = monitors.Monitor(obj=X, state_vars=['s'])
M2 = monitors.Monitor(obj=Y, state_vars=['s'])

# Add everything to the network object.
network.add_layer(layer=X, name='X')
network.add_layer(layer=Y, name='Y')
network.add_connection(connection=C, source='X', target='Y')
network.add_monitor(monitor=M1, name='X')
network.add_monitor(monitor=M2, name='Y')

# Create Poisson-distributed spike train inputs.
data = 15 * torch.rand(100)  # Generate random Poisson rates for 100 input neurons.
train = encoding.poisson(datum=data, time=5000)  # Encode input as 5000ms Poisson spike trains.

# Simulate network on generated spike trains.
inputs = {'X' : train}  # Create inputs mapping.
network.run(inputs=inputs, time=5000)  # Run network simulation.

# Plot spikes of input and output layers.
spikes = {'X' : M1.get('s'), 'Y' : M2.get('s')}

fig, axes = plt.subplots(2, 1, figsize=(12, 7))
for i, layer in enumerate(spikes):
    axes[i].matshow(spikes[layer], cmap='binary')
    axes[i].set_title('%s spikes' % layer)
    axes[i].set_xlabel('Time'); axes[i].set_ylabel('Index of neuron')
    axes[i].set_xticks(()); axes[i].set_yticks(())
    axes[i].set_aspect('auto')

plt.tight_layout(); plt.show()

Initializes network object.

Parameters:
  • dt – Simulation timestep.
  • batch_size – Mini-batch size.
  • learning – Whether to allow connection updates. True by default.
  • reward_fn – Optional class allowing for modification of reward in case of reward-modulated learning.
add_connection(connection: bindsnet.network.topology.AbstractConnection, source: str, target: str) → None[source]

Adds a connection between layers of nodes to the network.

Parameters:
  • connection – An instance of class Connection.
  • source – Logical name of the connection’s source layer.
  • target – Logical name of the connection’s target layer.
add_layer(layer: bindsnet.network.nodes.Nodes, name: str) → None[source]

Adds a layer of nodes to the network.

Parameters:
  • layer – A subclass of the Nodes object.
  • name – Logical name of layer.
add_monitor(monitor: bindsnet.network.monitors.AbstractMonitor, name: str) → None[source]

Adds a monitor on a network object to the network.

Parameters:
  • monitor – An instance of class Monitor.
  • name – Logical name of monitor object.
clone() → bindsnet.network.network.Network[source]

Returns a cloned network object.

Returns:A copy of this network.
reset_state_variables() → None[source]

Reset state variables of objects in network.

run(inputs: Dict[str, torch.Tensor], time: int, one_step=False, **kwargs) → None[source]

Simulate network for given inputs and time.

Parameters:
  • inputs – Dictionary of Tensors of shape [time, *input_shape] or [time, batch_size, *input_shape].
  • time – Simulation time.
  • one_step – Whether to run the network in “feed-forward” mode, where inputs propagate all the way through the network in a single simulation time step. Layers are updated in the order they are added to the network.

Keyword arguments:

Parameters:
  • clamp (Dict[str, torch.Tensor]) – Mapping of layer names to boolean masks indicating which neurons should be clamped to spiking. The tensors have shape [n_neurons] or [time, n_neurons].
  • unclamp (Dict[str, torch.Tensor]) – Mapping of layer names to boolean masks indicating which neurons should be clamped to not spiking. The tensors have shape [n_neurons] or [time, n_neurons].
  • injects_v (Dict[str, torch.Tensor]) – Mapping of layer names to boolean masks indicating which neurons should have voltage injected. The tensors have shape [n_neurons] or [time, n_neurons].
  • reward (Union[float, torch.Tensor]) – Scalar value used in reward-modulated learning.
  • masks (Dict[Tuple[str], torch.Tensor]) – Mapping of connection names to boolean masks determining which weights to clamp to zero.
  • progress_bar (bool) – Show a progress bar while running the network.

Example:

import torch
import matplotlib.pyplot as plt

from bindsnet.network import Network
from bindsnet.network.nodes import Input
from bindsnet.network.monitors import Monitor

# Build simple network.
network = Network()
network.add_layer(Input(500), name='I')
network.add_monitor(Monitor(network.layers['I'], state_vars=['s']), 'I')

# Generate spikes by running Bernoulli trials on Uniform(0, 0.5) samples.
spikes = torch.bernoulli(0.5 * torch.rand(500, 500))

# Run network simulation.
network.run(inputs={'I' : spikes}, time=500)

# Look at input spiking activity.
spikes = network.monitors['I'].get('s')
plt.matshow(spikes, cmap='binary')
plt.xticks(()); plt.yticks(());
plt.xlabel('Time'); plt.ylabel('Neuron index')
plt.title('Input spiking')
plt.show()
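
The keyword arguments listed above are passed directly to run(). A rough sketch of clamp and reward, assuming a network with a 100-neuron layer named 'Y' whose connection uses a reward-modulated update rule, and an inputs dictionary as above:

# Force neuron 0 of layer 'Y' to spike on every timestep of the simulation.
clamp = {'Y': torch.zeros(100, dtype=torch.bool)}
clamp['Y'][0] = True

network.run(inputs=inputs, time=250, clamp=clamp, reward=1.0)
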
save(file_name: str) → None[source]

Serializes the network object to disk.

Parameters:file_name – Path to store serialized network object on disk.

Example:

import torch
import matplotlib.pyplot as plt

from pathlib          import Path
from bindsnet.network import Network, nodes, topology

# Build simple network.
network = Network(dt=1.0)

X = nodes.Input(100)  # Input layer.
Y = nodes.LIFNodes(100)  # Layer of LIF neurons.
C = topology.Connection(source=X, target=Y, w=torch.rand(X.n, Y.n))  # Connection from X to Y.

# Add everything to the network object.
network.add_layer(layer=X, name='X')
network.add_layer(layer=Y, name='Y')
network.add_connection(connection=C, source='X', target='Y')

# Save the network to disk.
network.save(str(Path.home()) + '/network.pt')
train(mode: bool = True) → torch.nn.modules.module.Module[source]

Sets the network in training mode.

Parameters:mode – Turn training on or off.
Returns:self as specified in torch.nn.Module.
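
For example, plasticity can be frozen while evaluating a trained network and re-enabled afterwards (a sketch, assuming a network and inputs dictionary as in the run() example above):

network.train(mode=False)             # Freeze connection updates during evaluation.
network.run(inputs=inputs, time=250)
network.train(mode=True)              # Re-enable plasticity for further training.
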
bindsnet.network.network.load(file_name: str, map_location: str = 'cpu', learning: bool = None) → bindsnet.network.network.Network[source]

Loads serialized network object from disk.

Parameters:
  • file_name – Path to serialized network object on disk.
  • map_location – One of "cpu" or "cuda". Defaults to "cpu".
  • learning – Whether to load with learning enabled. Defaults to the value saved on disk.
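
A save/load round trip might look as follows (a sketch; the file name is illustrative, and load is assumed to be re-exported by bindsnet.network, otherwise import it from bindsnet.network.network):

from bindsnet.network import Network, load, nodes

network = Network(dt=1.0)
network.add_layer(layer=nodes.LIFNodes(100), name='Y')
network.save('network.pt')

# Restore the network on CPU with learning disabled.
restored = load('network.pt', map_location='cpu', learning=False)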

bindsnet.network.nodes module

class bindsnet.network.nodes.AbstractInput[source]

Bases: abc.ABC

Abstract base class for groups of input neurons.

class bindsnet.network.nodes.AdaptiveLIFNodes(n: Optional[int] = None, shape: Optional[Iterable[int]] = None, traces: bool = False, traces_additive: bool = False, tc_trace: Union[float, torch.Tensor] = 20.0, trace_scale: Union[float, torch.Tensor] = 1.0, sum_input: bool = False, rest: Union[float, torch.Tensor] = -65.0, reset: Union[float, torch.Tensor] = -65.0, thresh: Union[float, torch.Tensor] = -52.0, refrac: Union[int, torch.Tensor] = 5, tc_decay: Union[float, torch.Tensor] = 100.0, theta_plus: Union[float, torch.Tensor] = 0.05, tc_theta_decay: Union[float, torch.Tensor] = 10000000.0, lbound: float = None, **kwargs)[source]

Bases: bindsnet.network.nodes.Nodes

Layer of leaky integrate-and-fire (LIF) neurons with adaptive thresholds. A neuron’s voltage threshold is increased by some constant each time it spikes; otherwise, it decays back toward its default value.

Instantiates a layer of LIF neurons with adaptive firing thresholds.

Parameters:
  • n – The number of neurons in the layer.
  • shape – The dimensionality of the layer.
  • traces – Whether to record spike traces.
  • traces_additive – Whether to record spike traces additively.
  • tc_trace – Time constant of spike trace decay.
  • trace_scale – Scaling factor for spike trace.
  • sum_input – Whether to sum all inputs.
  • rest – Resting membrane voltage.
  • reset – Post-spike reset voltage.
  • thresh – Spike threshold voltage.
  • refrac – Refractory (non-firing) period of the neuron.
  • tc_decay – Time constant of neuron voltage decay.
  • theta_plus – Voltage increase of threshold after spiking.
  • tc_theta_decay – Time constant of adaptive threshold decay.
  • lbound – Lower bound of the voltage.
compute_decays(dt) → None[source]

Sets the relevant decays.

forward(x: torch.Tensor) → None[source]

Runs a single simulation step.

Parameters:x – Inputs to the layer.
reset_state_variables() → None[source]

Resets relevant state variables.

set_batch_size(batch_size) → None[source]

Sets mini-batch size. Called when layer is added to a network.

Parameters:batch_size – Mini-batch size.
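
A short sketch of the adaptive threshold in action (layer sizes and constants are illustrative; the per-neuron threshold increment is assumed to be exposed as the attribute theta, which is not listed above):

import torch

from bindsnet.network import Network, nodes, topology

network = Network(dt=1.0)
X = nodes.Input(50)
Y = nodes.AdaptiveLIFNodes(50, theta_plus=0.05, tc_theta_decay=1e7)
network.add_layer(layer=X, name='X')
network.add_layer(layer=Y, name='Y')
network.add_connection(topology.Connection(source=X, target=Y, w=0.5 * torch.rand(50, 50)), source='X', target='Y')

# Neurons that spike accumulate theta, raising their effective threshold
# (thresh + theta); with a long tc_theta_decay the increments persist.
spikes_in = torch.bernoulli(0.3 * torch.ones(250, 50)).byte()
network.run(inputs={'X': spikes_in}, time=250)
print(Y.theta)  # Per-neuron adaptive threshold increments (assumed attribute).
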
class bindsnet.network.nodes.BoostedLIFNodes(n: Optional[int] = None, shape: Optional[Iterable[int]] = None, traces: bool = False, traces_additive: bool = False, tc_trace: Union[float, torch.Tensor] = 20.0, trace_scale: Union[float, torch.Tensor] = 1.0, sum_input: bool = False, thresh: Union[float, torch.Tensor] = 13.0, refrac: Union[int, torch.Tensor] = 5, tc_decay: Union[float, torch.Tensor] = 100.0, **kwargs)[source]

Bases: bindsnet.network.nodes.Nodes

Instantiates a layer of LIF neurons.

Parameters:
  • n – The number of neurons in the layer.
  • shape – The dimensionality of the layer.
  • traces – Whether to record spike traces.
  • traces_additive – Whether to record spike traces additively.
  • tc_trace – Time constant of spike trace decay.
  • trace_scale – Scaling factor for spike trace.
  • sum_input – Whether to sum all inputs.
  • thresh – Spike threshold voltage.
  • reset – Post-spike reset voltage.
  • refrac – Refractory (non-firing) period of the neuron.
  • tc_decay – Time constant of neuron voltage decay.
compute_decays(dt) → None[source]

Sets the relevant decays.

forward(x: torch.Tensor) → None[source]

Runs a single simulation step.

Parameters:x – Inputs to the layer.
reset_state_variables() → None[source]

Resets relevant state variables.

set_batch_size(batch_size) → None[source]

Sets mini-batch size. Called when layer is added to a network.

Parameters:batch_size – Mini-batch size.
class bindsnet.network.nodes.CSRMNodes(n: Optional[int] = None, shape: Optional[Iterable[int]] = None, traces: bool = False, traces_additive: bool = False, tc_trace: Union[float, torch.Tensor] = 20.0, trace_scale: Union[float, torch.Tensor] = 1.0, sum_input: bool = False, rest: Union[float, torch.Tensor] = -65.0, thresh: Union[float, torch.Tensor] = -52.0, responseKernel: str = 'ExponentialKernel', refractoryKernel: str = 'EtaKernel', tau: Union[float, torch.Tensor] = 1, res_window_size: Union[float, torch.Tensor] = 20, ref_window_size: Union[float, torch.Tensor] = 10, reset_const: Union[float, torch.Tensor] = 50, tc_decay: Union[float, torch.Tensor] = 100.0, theta_plus: Union[float, torch.Tensor] = 0.05, tc_theta_decay: Union[float, torch.Tensor] = 10000000.0, lbound: float = None, **kwargs)[source]

Bases: bindsnet.network.nodes.Nodes

A layer of Cumulative Spike Response Model (Gerstner and van Hemmen 1992, Gerstner et al. 1996) nodes. Refractoriness and adaptation are modeled by the combined effect of the spike afterpotentials of several previous spikes, rather than only the most recent spike.

Instantiates a layer of Cumulative Spike Response Model nodes.

Parameters:
  • n – The number of neurons in the layer.
  • shape – The dimensionality of the layer.
  • traces – Whether to record spike traces.
  • traces_additive – Whether to record spike traces additively.
  • tc_trace – Time constant of spike trace decay.
  • trace_scale – Scaling factor for spike trace.
  • sum_input – Whether to sum all inputs.
  • rest – Resting membrane voltage.
  • thresh – Spike threshold voltage.
  • refrac – Refractory (non-firing) period of the neuron.
  • tc_decay – Time constant of neuron voltage decay.
  • theta_plus – Voltage increase of threshold after spiking.
  • tc_theta_decay – Time constant of adaptive threshold decay.
  • lbound – Lower bound of the voltage.
AlphaKernel(dt)[source]
AlphaKernelSLAYER(dt)[source]
EtaKernel(dt)[source]
ExponentialKernel(dt)[source]
LaplacianKernel(dt)[source]
RectangularKernel(dt)[source]
TriangularKernel(dt)[source]
compute_decays(dt) → None[source]

Sets the relevant decays.

forward(x: torch.Tensor) → None[source]

Runs a single simulation step.

Parameters:x – Inputs to the layer.
reset_state_variables() → None[source]

Resets relevant state variables.

set_batch_size(batch_size) → None[source]

Sets mini-batch size. Called when layer is added to a network.

Parameters:batch_size – Mini-batch size.
class bindsnet.network.nodes.CurrentLIFNodes(n: Optional[int] = None, shape: Optional[Iterable[int]] = None, traces: bool = False, traces_additive: bool = False, tc_trace: Union[float, torch.Tensor] = 20.0, trace_scale: Union[float, torch.Tensor] = 1.0, sum_input: bool = False, thresh: Union[float, torch.Tensor] = -52.0, rest: Union[float, torch.Tensor] = -65.0, reset: Union[float, torch.Tensor] = -65.0, refrac: Union[int, torch.Tensor] = 5, tc_decay: Union[float, torch.Tensor] = 100.0, tc_i_decay: Union[float, torch.Tensor] = 2.0, lbound: float = None, **kwargs)[source]

Bases: bindsnet.network.nodes.Nodes

Layer of current-based leaky integrate-and-fire (LIF) neurons. Total synaptic input current is modeled as a decaying memory of input spikes multiplied by synaptic strengths.

Instantiates a layer of synaptic input current-based LIF neurons.

Parameters:
  • n – The number of neurons in the layer.
  • shape – The dimensionality of the layer.
  • traces – Whether to record spike traces.
  • traces_additive – Whether to record spike traces additively.
  • tc_trace – Time constant of spike trace decay.
  • trace_scale – Scaling factor for spike trace.
  • sum_input – Whether to sum all inputs.
  • thresh – Spike threshold voltage.
  • rest – Resting membrane voltage.
  • reset – Post-spike reset voltage.
  • refrac – Refractory (non-firing) period of the neuron.
  • tc_decay – Time constant of neuron voltage decay.
  • tc_i_decay – Time constant of synaptic input current decay.
  • lbound – Lower bound of the voltage.

compute_decays(dt) → None[source]

Sets the relevant decays.

forward(x: torch.Tensor) → None[source]

Runs a single simulation step.

Parameters:x – Inputs to the layer.
reset_state_variables() → None[source]

Resets relevant state variables.

set_batch_size(batch_size) → None[source]

Sets mini-batch size. Called when layer is added to a network.

Parameters:batch_size – Mini-batch size.
class bindsnet.network.nodes.DiehlAndCookNodes(n: Optional[int] = None, shape: Optional[Iterable[int]] = None, traces: bool = False, traces_additive: bool = False, tc_trace: Union[float, torch.Tensor] = 20.0, trace_scale: Union[float, torch.Tensor] = 1.0, sum_input: bool = False, thresh: Union[float, torch.Tensor] = -52.0, rest: Union[float, torch.Tensor] = -65.0, reset: Union[float, torch.Tensor] = -65.0, refrac: Union[int, torch.Tensor] = 5, tc_decay: Union[float, torch.Tensor] = 100.0, theta_plus: Union[float, torch.Tensor] = 0.05, tc_theta_decay: Union[float, torch.Tensor] = 10000000.0, lbound: float = None, one_spike: bool = True, **kwargs)[source]

Bases: bindsnet.network.nodes.Nodes

Layer of leaky integrate-and-fire (LIF) neurons with adaptive thresholds (modified for Diehl & Cook 2015 replication).

Instantiates a layer of Diehl & Cook 2015 neurons.

Parameters:
  • n – The number of neurons in the layer.
  • shape – The dimensionality of the layer.
  • traces – Whether to record spike traces.
  • traces_additive – Whether to record spike traces additively.
  • tc_trace – Time constant of spike trace decay.
  • trace_scale – Scaling factor for spike trace.
  • sum_input – Whether to sum all inputs.
  • thresh – Spike threshold voltage.
  • rest – Resting membrane voltage.
  • reset – Post-spike reset voltage.
  • refrac – Refractory (non-firing) period of the neuron.
  • tc_decay – Time constant of neuron voltage decay.
  • theta_plus – Voltage increase of threshold after spiking.
  • tc_theta_decay – Time constant of adaptive threshold decay.
  • lbound – Lower bound of the voltage.
  • one_spike – Whether to allow only one spike per timestep.
compute_decays(dt) → None[source]

Sets the relevant decays.

forward(x: torch.Tensor) → None[source]

Runs a single simulation step.

Parameters:x – Inputs to the layer.
reset_state_variables() → None[source]

Resets relevant state variables.

set_batch_size(batch_size) → None[source]

Sets mini-batch size. Called when layer is added to a network.

Parameters:batch_size – Mini-batch size.
class bindsnet.network.nodes.IFNodes(n: Optional[int] = None, shape: Optional[Iterable[int]] = None, traces: bool = False, traces_additive: bool = False, tc_trace: Union[float, torch.Tensor] = 20.0, trace_scale: Union[float, torch.Tensor] = 1.0, sum_input: bool = False, thresh: Union[float, torch.Tensor] = -52.0, reset: Union[float, torch.Tensor] = -65.0, refrac: Union[int, torch.Tensor] = 5, lbound: float = None, **kwargs)[source]

Bases: bindsnet.network.nodes.Nodes

Layer of integrate-and-fire (IF) neurons.

Instantiates a layer of IF neurons.

Parameters:
  • n – The number of neurons in the layer.
  • shape – The dimensionality of the layer.
  • traces – Whether to record spike traces.
  • traces_additive – Whether to record spike traces additively.
  • tc_trace – Time constant of spike trace decay.
  • trace_scale – Scaling factor for spike trace.
  • sum_input – Whether to sum all inputs.
  • thresh – Spike threshold voltage.
  • reset – Post-spike reset voltage.
  • refrac – Refractory (non-firing) period of the neuron.
  • lbound – Lower bound of the voltage.
forward(x: torch.Tensor) → None[source]

Runs a single simulation step.

Parameters:x – Inputs to the layer.
reset_state_variables() → None[source]

Resets relevant state variables.

set_batch_size(batch_size) → None[source]

Sets mini-batch size. Called when layer is added to a network.

Parameters:batch_size – Mini-batch size.
class bindsnet.network.nodes.Input(n: Optional[int] = None, shape: Optional[Iterable[int]] = None, traces: bool = False, traces_additive: bool = False, tc_trace: Union[float, torch.Tensor] = 20.0, trace_scale: Union[float, torch.Tensor] = 1.0, sum_input: bool = False, **kwargs)[source]

Bases: bindsnet.network.nodes.Nodes, bindsnet.network.nodes.AbstractInput

Layer of nodes with user-specified spiking behavior.

Instantiates a layer of input neurons.

Parameters:
  • n – The number of neurons in the layer.
  • shape – The dimensionality of the layer.
  • traces – Whether to record decaying spike traces.
  • traces_additive – Whether to record spike traces additively.
  • tc_trace – Time constant of spike trace decay.
  • trace_scale – Scaling factor for spike trace.
  • sum_input – Whether to sum all inputs.
forward(x: torch.Tensor) → None[source]

On each simulation step, set the spikes of the population equal to the inputs.

Parameters:x – Inputs to the layer.
reset_state_variables() → None[source]

Resets relevant state variables.

class bindsnet.network.nodes.IzhikevichNodes(n: Optional[int] = None, shape: Optional[Iterable[int]] = None, traces: bool = False, traces_additive: bool = False, tc_trace: Union[float, torch.Tensor] = 20.0, trace_scale: Union[float, torch.Tensor] = 1.0, sum_input: bool = False, excitatory: float = 1, thresh: Union[float, torch.Tensor] = 45.0, rest: Union[float, torch.Tensor] = -65.0, lbound: float = None, **kwargs)[source]

Bases: bindsnet.network.nodes.Nodes

Layer of Izhikevich neurons (https://www.izhikevich.org/publications/spikes.htm).

Instantiates a layer of Izhikevich neurons.

Parameters:
  • n – The number of neurons in the layer.
  • shape – The dimensionality of the layer.
  • traces – Whether to record spike traces.
  • traces_additive – Whether to record spike traces additively.
  • tc_trace – Time constant of spike trace decay.
  • trace_scale – Scaling factor for spike trace.
  • sum_input – Whether to sum all inputs.
  • excitatory – Percent of excitatory (vs. inhibitory) neurons in the layer; in range [0, 1].
  • thresh – Spike threshold voltage.
  • rest – Resting membrane voltage.
  • lbound – Lower bound of the voltage.
forward(x: torch.Tensor) → None[source]

Runs a single simulation step.

Parameters:x – Inputs to the layer.
reset_state_variables() → None[source]

Resets relevant state variables.

set_batch_size(batch_size) → None[source]

Sets mini-batch size. Called when layer is added to a network.

Parameters:batch_size – Mini-batch size.
class bindsnet.network.nodes.LIFNodes(n: Optional[int] = None, shape: Optional[Iterable[int]] = None, traces: bool = False, traces_additive: bool = False, tc_trace: Union[float, torch.Tensor] = 20.0, trace_scale: Union[float, torch.Tensor] = 1.0, sum_input: bool = False, thresh: Union[float, torch.Tensor] = -52.0, rest: Union[float, torch.Tensor] = -65.0, reset: Union[float, torch.Tensor] = -65.0, refrac: Union[int, torch.Tensor] = 5, tc_decay: Union[float, torch.Tensor] = 100.0, lbound: float = None, **kwargs)[source]

Bases: bindsnet.network.nodes.Nodes

Layer of leaky integrate-and-fire (LIF) neurons.

Instantiates a layer of LIF neurons.

Parameters:
  • n – The number of neurons in the layer.
  • shape – The dimensionality of the layer.
  • traces – Whether to record spike traces.
  • traces_additive – Whether to record spike traces additively.
  • tc_trace – Time constant of spike trace decay.
  • trace_scale – Scaling factor for spike trace.
  • sum_input – Whether to sum all inputs.
  • thresh – Spike threshold voltage.
  • rest – Resting membrane voltage.
  • reset – Post-spike reset voltage.
  • refrac – Refractory (non-firing) period of the neuron.
  • tc_decay – Time constant of neuron voltage decay.
  • lbound – Lower bound of the voltage.
compute_decays(dt) → None[source]

Sets the relevant decays.

forward(x: torch.Tensor) → None[source]

Runs a single simulation step.

Parameters:x – Inputs to the layer.
reset_state_variables() → None[source]

Resets relevant state variables.

set_batch_size(batch_size) → None[source]

Sets mini-batch size. Called when layer is added to a network.

Parameters:batch_size – Mini-batch size.
class bindsnet.network.nodes.McCullochPitts(n: Optional[int] = None, shape: Optional[Iterable[int]] = None, traces: bool = False, traces_additive: bool = False, tc_trace: Union[float, torch.Tensor] = 20.0, trace_scale: Union[float, torch.Tensor] = 1.0, sum_input: bool = False, thresh: Union[float, torch.Tensor] = 1.0, **kwargs)[source]

Bases: bindsnet.network.nodes.Nodes

Layer of McCulloch-Pitts neurons.

Instantiates a McCulloch-Pitts layer of neurons.

Parameters:
  • n – The number of neurons in the layer.
  • shape – The dimensionality of the layer.
  • traces – Whether to record spike traces.
  • traces_additive – Whether to record spike traces additively.
  • tc_trace – Time constant of spike trace decay.
  • trace_scale – Scaling factor for spike trace.
  • sum_input – Whether to sum all inputs.
  • thresh – Spike threshold voltage.
forward(x: torch.Tensor) → None[source]

Runs a single simulation step.

Parameters:x – Inputs to the layer.
reset_state_variables() → None[source]

Resets relevant state variables.

set_batch_size(batch_size) → None[source]

Sets mini-batch size. Called when layer is added to a network.

Parameters:batch_size – Mini-batch size.
class bindsnet.network.nodes.Nodes(n: Optional[int] = None, shape: Optional[Iterable[int]] = None, traces: bool = False, traces_additive: bool = False, tc_trace: Union[float, torch.Tensor] = 20.0, trace_scale: Union[float, torch.Tensor] = 1.0, sum_input: bool = False, learning: bool = True, **kwargs)[source]

Bases: torch.nn.modules.module.Module

Abstract base class for groups of neurons.

Abstract base class constructor.

Parameters:
  • n – The number of neurons in the layer.
  • shape – The dimensionality of the layer.
  • traces – Whether to record decaying spike traces.
  • traces_additive – Whether to record spike traces additively.
  • tc_trace – Time constant of spike trace decay.
  • trace_scale – Scaling factor for spike trace.
  • sum_input – Whether to sum all inputs.
  • learning – Whether to be in learning or testing.
compute_decays(dt) → None[source]

Abstract base class method for setting decays.

forward(x: torch.Tensor) → None[source]

Abstract base class method for a single simulation step.

Parameters:x – Inputs to the layer.
reset_state_variables() → None[source]

Abstract base class method for resetting state variables.

set_batch_size(batch_size) → None[source]

Sets mini-batch size. Called when layer is added to a network.

Parameters:batch_size – Mini-batch size.
train(mode: bool = True) → bindsnet.network.nodes.Nodes[source]

Sets the layer in training mode.

Parameters:mode (bool) – Turn training on or off.
Returns:self as specified in torch.nn.Module.
class bindsnet.network.nodes.SRM0Nodes(n: Optional[int] = None, shape: Optional[Iterable[int]] = None, traces: bool = False, traces_additive: bool = False, tc_trace: Union[float, torch.Tensor] = 20.0, trace_scale: Union[float, torch.Tensor] = 1.0, sum_input: bool = False, thresh: Union[float, torch.Tensor] = -50.0, rest: Union[float, torch.Tensor] = -70.0, reset: Union[float, torch.Tensor] = -70.0, refrac: Union[int, torch.Tensor] = 5, tc_decay: Union[float, torch.Tensor] = 10.0, lbound: float = None, eps_0: Union[float, torch.Tensor] = 1.0, rho_0: Union[float, torch.Tensor] = 1.0, d_thresh: Union[float, torch.Tensor] = 5.0, **kwargs)[source]

Bases: bindsnet.network.nodes.Nodes

Layer of simplified spike response model (SRM0) neurons with stochastic threshold (escape noise). Adapted from (Vasilaki et al., 2009).

Instantiates a layer of SRM0 neurons.

Parameters:
  • n – The number of neurons in the layer.
  • shape – The dimensionality of the layer.
  • traces – Whether to record spike traces.
  • traces_additive – Whether to record spike traces additively.
  • tc_trace – Time constant of spike trace decay.
  • trace_scale – Scaling factor for spike trace.
  • sum_input – Whether to sum all inputs.
  • thresh – Spike threshold voltage.
  • rest – Resting membrane voltage.
  • reset – Post-spike reset voltage.
  • refrac – Refractory (non-firing) period of the neuron.
  • tc_decay – Time constant of neuron voltage decay.
  • lbound – Lower bound of the voltage.
  • eps_0 – Scaling factor for pre-synaptic spike contributions.
  • rho_0 – Stochastic intensity at threshold.
  • d_thresh – Width of the threshold region.
compute_decays(dt) → None[source]

Sets the relevant decays.

forward(x: torch.Tensor) → None[source]

Runs a single simulation step.

Parameters:x – Inputs to the layer.
reset_state_variables() → None[source]

Resets relevant state variables.

set_batch_size(batch_size) → None[source]

Sets mini-batch size. Called when layer is added to a network.

Parameters:batch_size – Mini-batch size.

bindsnet.network.topology module

class bindsnet.network.topology.AbstractConnection(source: bindsnet.network.nodes.Nodes, target: bindsnet.network.nodes.Nodes, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: abc.ABC, torch.nn.modules.module.Module

Abstract base class for connections between Nodes.

Constructor for abstract base class for connection objects.

Parameters:
  • source – A layer of nodes from which the connection originates.
  • target – A layer of nodes to which the connection connects.
  • nu – Learning rate for both pre- and post-synaptic events. It also accepts a pair of tensors to individualize learning rates of each neuron. In this case, their shape should be the same size as the connection weights.
  • reduction – Method for reducing parameter updates along the minibatch dimension.
  • weight_decay – Constant multiple to decay weights by on each iteration.

Keyword arguments:

Parameters:
  • update_rule (LearningRule) – Modifies connection parameters according to some rule.
  • wmin (Union[float, torch.Tensor]) – Minimum allowed value(s) on the connection weights. Single value, or tensor of the same size as w.
  • wmax (Union[float, torch.Tensor]) – Maximum allowed value(s) on the connection weights. Single value, or tensor of the same size as w.
  • norm (float) – Total weight per target neuron normalization constant.
compute(s: torch.Tensor) → None[source]

Compute pre-activations of downstream neurons given spikes of upstream neurons.

Parameters:s – Incoming spikes.
reset_state_variables() → None[source]

Contains resetting logic for the connection.

update(**kwargs) → None[source]

Compute connection’s update rule.

Keyword arguments:

Parameters:
  • learning (bool) – Whether to allow connection updates.
  • mask (ByteTensor) – Boolean mask determining which weights to clamp to zero.
class bindsnet.network.topology.Connection(source: bindsnet.network.nodes.Nodes, target: bindsnet.network.nodes.Nodes, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.network.topology.AbstractConnection

Specifies synapses between one or two populations of neurons.

Instantiates a Connection object.

Parameters:
  • source – A layer of nodes from which the connection originates.
  • target – A layer of nodes to which the connection connects.
  • nu – Learning rate for both pre- and post-synaptic events. It also accepts a pair of tensors to individualize learning rates of each neuron. In this case, their shape should be the same size as the connection weights.
  • reduction – Method for reducing parameter updates along the minibatch dimension.
  • weight_decay – Constant multiple to decay weights by on each iteration.

Keyword arguments:

Parameters:
  • update_rule (LearningRule) – Modifies connection parameters according to some rule.
  • w (torch.Tensor) – Strengths of synapses.
  • b (torch.Tensor) – Target population bias.
  • wmin (Union[float, torch.Tensor]) – Minimum allowed value(s) on the connection weights. Single value, or tensor of the same size as w.
  • wmax (Union[float, torch.Tensor]) – Maximum allowed value(s) on the connection weights. Single value, or tensor of the same size as w.
  • norm (float) – Total weight per target neuron normalization constant.
compute(s: torch.Tensor) → torch.Tensor[source]

Compute pre-activations given spikes using connection weights.

Parameters:s – Incoming spikes.
Returns:Incoming spikes multiplied by synaptic weights (with or without decaying spike activation).
compute_window(s: torch.Tensor) → torch.Tensor[source]
normalize() → None[source]

Normalize weights so each target neuron has sum of connection weights equal to self.norm.

reset_state_variables() → None[source]

Contains resetting logic for the connection.

update(**kwargs) → None[source]

Compute connection’s update rule.
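
A sketch of a plastic, weight-bounded connection using the keyword arguments above (PostPre is the STDP rule from bindsnet.learning; sizes and constants are illustrative):

import torch

from bindsnet.learning import PostPre
from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import Connection

source = Input(100, traces=True)   # Spike traces are required by the PostPre rule.
target = LIFNodes(25, traces=True)

connection = Connection(
    source=source,
    target=target,
    w=0.3 * torch.rand(source.n, target.n),
    update_rule=PostPre,
    nu=(1e-4, 1e-2),   # (pre-synaptic, post-synaptic) learning rates.
    wmin=0.0,
    wmax=1.0,
    norm=78.4,         # Sum of weights per target neuron after normalize().
)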

class bindsnet.network.topology.Conv1dConnection(source: bindsnet.network.nodes.Nodes, target: bindsnet.network.nodes.Nodes, kernel_size: int, stride: int = 1, padding: int = 0, dilation: int = 1, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.network.topology.AbstractConnection

Specifies one-dimensional convolutional synapses between one or two populations of neurons.

Instantiates a Conv1dConnection object.

Parameters:
  • source – A layer of nodes from which the connection originates.
  • target – A layer of nodes to which the connection connects.
  • kernel_size – Size of the 1-D convolutional kernel.
  • stride – Stride for convolution.
  • padding – Padding for convolution.
  • dilation – Dilation for convolution.
  • nu – Learning rate for both pre- and post-synaptic events. It also accepts a pair of tensors to individualize learning rates of each neuron. In this case, their shape should be the same size as the connection weights.
  • reduction – Method for reducing parameter updates along the minibatch dimension.
  • weight_decay – Constant multiple to decay weights by on each iteration.

Keyword arguments:

Parameters:
  • update_rule (LearningRule) – Modifies connection parameters according to some rule.
  • w (torch.Tensor) – Strengths of synapses.
  • b (torch.Tensor) – Target population bias.
  • wmin (Union[float, torch.Tensor]) – Minimum allowed value(s) on the connection weights. Single value, or tensor of the same size as w.
  • wmax (Union[float, torch.Tensor]) – Maximum allowed value(s) on the connection weights. Single value, or tensor of the same size as w.
  • norm (float) – Total weight per target neuron normalization constant.
compute(s: torch.Tensor) → torch.Tensor[source]

Compute convolutional pre-activations given spikes using layer weights.

Parameters:s – Incoming spikes.
Returns:Incoming spikes multiplied by synaptic weights (with or without decaying spike activation).
normalize() → None[source]

Normalize weights along the first axis according to total weight per target neuron.

reset_state_variables() → None[source]

Contains resetting logic for the connection.

update(**kwargs) → None[source]

Compute connection’s update rule.

class bindsnet.network.topology.Conv2dConnection(source: bindsnet.network.nodes.Nodes, target: bindsnet.network.nodes.Nodes, kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int]] = 1, padding: Union[int, Tuple[int, int]] = 0, dilation: Union[int, Tuple[int, int]] = 1, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.network.topology.AbstractConnection

Specifies two-dimensional convolutional synapses between one or two populations of neurons.

Instantiates a Conv2dConnection object.

Parameters:
  • source – A layer of nodes from which the connection originates.
  • target – A layer of nodes to which the connection connects.
  • kernel_size – Horizontal and vertical size of convolutional kernels.
  • stride – Horizontal and vertical stride for convolution.
  • padding – Horizontal and vertical padding for convolution.
  • dilation – Horizontal and vertical dilation for convolution.
  • nu – Learning rate for both pre- and post-synaptic events. It also accepts a pair of tensors to individualize learning rates of each neuron. In this case, their shape should be the same size as the connection weights.
  • reduction – Method for reducing parameter updates along the minibatch dimension.
  • weight_decay – Constant multiple to decay weights by on each iteration.

Keyword arguments:

Parameters:
  • update_rule (LearningRule) – Modifies connection parameters according to some rule.
  • w (torch.Tensor) – Strengths of synapses.
  • b (torch.Tensor) – Target population bias.
  • wmin (Union[float, torch.Tensor]) – Minimum allowed value(s) on the connection weights. Single value, or tensor of the same size as w.
  • wmax (Union[float, torch.Tensor]) – Maximum allowed value(s) on the connection weights. Single value, or tensor of the same size as w.
  • norm (float) – Total weight per target neuron normalization constant.
compute(s: torch.Tensor) → torch.Tensor[source]

Compute convolutional pre-activations given spikes using layer weights.

Parameters:s – Incoming spikes.
Returns:Incoming spikes multiplied by synaptic weights (with or without decaying spike activation).
normalize() → None[source]

Normalize weights along the first axis according to total weight per target neuron.

reset_state_variables() → None[source]

Contains resetting logic for the connection.

update(**kwargs) → None[source]

Compute connection’s update rule.
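
A sketch of wiring a two-dimensional convolutional connection; the target layer's shape is assumed to have to match the convolution's output shape (sizes are illustrative):

import torch

from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import Conv2dConnection

# A 1x28x28 input convolved with 16 filters of size 5x5 at stride 1:
# output spatial size is (28 - 5) + 1 = 24, so the target layer has shape [16, 24, 24].
source = Input(shape=[1, 28, 28], traces=True)
target = LIFNodes(shape=[16, 24, 24], traces=True)

connection = Conv2dConnection(
    source=source,
    target=target,
    kernel_size=5,
    stride=1,
    w=0.05 * torch.rand(16, 1, 5, 5),  # [out_channels, in_channels, kernel_height, kernel_width].
    wmin=0.0,
    wmax=1.0,
)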

class bindsnet.network.topology.Conv3dConnection(source: bindsnet.network.nodes.Nodes, target: bindsnet.network.nodes.Nodes, kernel_size: Union[int, Tuple[int, int, int]], stride: Union[int, Tuple[int, int, int]] = 1, padding: Union[int, Tuple[int, int, int]] = 0, dilation: Union[int, Tuple[int, int, int]] = 1, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.network.topology.AbstractConnection

Specifies three-dimensional convolutional synapses between one or two populations of neurons.

Instantiates a Conv3dConnection object.

Parameters:
  • source – A layer of nodes from which the connection originates.
  • target – A layer of nodes to which the connection connects.
  • kernel_size – Depth-wise, horizontal, and vertical size of convolutional kernels.
  • stride – Depth-wise, horizontal, and vertical stride for convolution.
  • padding – Depth-wise, horizontal, and vertical padding for convolution.
  • dilation – Depth-wise, horizontal, and vertical dilation for convolution.
  • nu – Learning rate for both pre- and post-synaptic events. It also accepts a pair of tensors to individualize learning rates of each neuron. In this case, their shape should be the same size as the connection weights.
  • reduction – Method for reducing parameter updates along the minibatch dimension.
  • weight_decay – Constant multiple to decay weights by on each iteration.

Keyword arguments:

Parameters:
  • update_rule (LearningRule) – Modifies connection parameters according to some rule.
  • w (torch.Tensor) – Strengths of synapses.
  • b (torch.Tensor) – Target population bias.
  • wmin (Union[float, torch.Tensor]) – Minimum allowed value(s) on the connection weights. Single value, or tensor of the same size as w.
  • wmax (Union[float, torch.Tensor]) – Maximum allowed value(s) on the connection weights. Single value, or tensor of the same size as w.
  • norm (float) – Total weight per target neuron normalization constant.
compute(s: torch.Tensor) → torch.Tensor[source]

Compute convolutional pre-activations given spikes using layer weights.

Parameters:s – Incoming spikes.
Returns:Incoming spikes multiplied by synaptic weights (with or without decaying spike activation).
normalize() → None[source]

Normalize weights along the first axis according to total weight per target neuron.

reset_state_variables() → None[source]

Contains resetting logic for the connection.

update(**kwargs) → None[source]

Compute connection’s update rule.

class bindsnet.network.topology.LocalConnection(source: bindsnet.network.nodes.Nodes, target: bindsnet.network.nodes.Nodes, kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int]], n_filters: int, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.network.topology.AbstractConnection

Specifies a locally connected connection between one or two populations of neurons.

Instantiates a LocalConnection object. The source population should be of square size.

Neurons in the post-synaptic population are ordered by receptive field; that is, if there are n_conv neurons in each post-synaptic patch, then the first n_conv neurons in the post-synaptic population correspond to the first receptive field, the second n_conv to the second receptive field, and so on.

Parameters:
  • source – A layer of nodes from which the connection originates.
  • target – A layer of nodes to which the connection connects.
  • kernel_size – Horizontal and vertical size of convolutional kernels.
  • stride – Horizontal and vertical stride for convolution.
  • n_filters – Number of locally connected filters per pre-synaptic region.
  • nu – Learning rate for both pre- and post-synaptic events. It also accepts a pair of tensors to individualize learning rates of each neuron. In this case, their shape should be the same size as the connection weights.
  • reduction – Method for reducing parameter updates along the minibatch dimension.
  • weight_decay – Constant multiple to decay weights by on each iteration.

Keyword arguments:

Parameters:
  • update_rule (LearningRule) – Modifies connection parameters according to some rule.
  • w (torch.Tensor) – Strengths of synapses.
  • b (torch.Tensor) – Target population bias.
  • wmin (Union[float, torch.Tensor]) – Minimum allowed value(s) on the connection weights. Single value, or tensor of the same size as w.
  • wmax (Union[float, torch.Tensor]) – Maximum allowed value(s) on the connection weights. Single value, or tensor of the same size as w.
  • norm (float) – Total weight per target neuron normalization constant.
  • input_shape (Tuple[int, int]) – Shape of the input population if it is not [sqrt, sqrt].
compute(s: torch.Tensor) → torch.Tensor[source]

Compute pre-activations given spikes using layer weights.

Parameters:s – Incoming spikes.
Returns:Incoming spikes multiplied by synaptic weights (with or without decaying spike activation).
normalize() → None[source]

Normalize weights so each target neuron has sum of connection weights equal to self.norm.

reset_state_variables() → None[source]

Contains resetting logic for the connection.

update(**kwargs) → None[source]

Compute connection’s update rule.

Keyword arguments:

Parameters:mask (ByteTensor) – Boolean mask determining which weights to clamp to zero.
class bindsnet.network.topology.LocalConnection1D(source: bindsnet.network.nodes.Nodes, target: bindsnet.network.nodes.Nodes, kernel_size: int, stride: int, n_filters: int, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.network.topology.AbstractConnection

Specifies a one-dimensional local connection between one or two populations of neurons supporting multi-channel inputs with shape (C, H). The logic differs from the original LocalConnection implementation (which used masks with ordinary dense connections).

Instantiates a LocalConnection1D object. The source population can be multi-channel. Neurons in the post-synaptic population are ordered by receptive field, i.e., if there are n_conv neurons in each post-synaptic patch, then the first n_conv neurons in the post-synaptic population correspond to the first receptive field, the second n_conv to the second receptive field, and so on.

Parameters:
  • source – A layer of nodes from which the connection originates.
  • target – A layer of nodes to which the connection connects.
  • kernel_size – Size of convolutional kernels.
  • stride – Stride for convolution.
  • n_filters – Number of locally connected filters per pre-synaptic region.
  • nu – Learning rate for both pre- and post-synaptic events. It also accepts a pair of tensors to individualize learning rates of each neuron. In this case, their shape should be the same size as the connection weights.
  • reduction – Method for reducing parameter updates along the minibatch dimension.
  • weight_decay – Constant multiple to decay weights by on each iteration.

Keyword arguments:

Parameters:
  • update_rule (LearningRule) – Modifies connection parameters according to some rule.
  • w (torch.Tensor) – Strengths of synapses.
  • b (torch.Tensor) – Target population bias.
  • wmin (float) – Minimum allowed value on the connection weights.
  • wmax (float) – Maximum allowed value on the connection weights.
  • norm (float) – Total weight per target neuron normalization constant.

compute(s: torch.Tensor) → torch.Tensor[source]

Compute pre-activations given spikes using layer weights.

Parameters:s – Incoming spikes.
Returns:Incoming spikes multiplied by synaptic weights (with or without decaying spike activation).
normalize() → None[source]

Normalize weights so each target neuron has sum of connection weights equal to self.norm.

reset_state_variables() → None[source]

Contains resetting logic for the connection.

update(**kwargs) → None[source]

Compute connection’s update rule.

class bindsnet.network.topology.LocalConnection2D(source: bindsnet.network.nodes.Nodes, target: bindsnet.network.nodes.Nodes, kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int]], n_filters: int, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.network.topology.AbstractConnection

Specifies a two-dimensional local connection between one or two populations of neurons supporting multi-channel inputs with shape (C, H, W). The logic differs from the original LocalConnection implementation (which used masks with ordinary dense connections).

Instantiates a LocalConnection2D object. The source population can be multi-channel. Neurons in the post-synaptic population are ordered by receptive field, i.e., if there are n_conv neurons in each post-synaptic patch, then the first n_conv neurons in the post-synaptic population correspond to the first receptive field, the second n_conv to the second receptive field, and so on.

Parameters:
  • source – A layer of nodes from which the connection originates.
  • target – A layer of nodes to which the connection connects.
  • kernel_size – Horizontal and vertical size of convolutional kernels.
  • stride – Horizontal and vertical stride for convolution.
  • n_filters – Number of locally connected filters per pre-synaptic region.
  • nu – Learning rate for both pre- and post-synaptic events. It also accepts a pair of tensors to individualize learning rates of each neuron. In this case, their shape should be the same size as the connection weights.
  • reduction – Method for reducing parameter updates along the minibatch dimension.
  • weight_decay – Constant multiple to decay weights by on each iteration.

Keyword arguments:

Parameters:
  • update_rule (LearningRule) – Modifies connection parameters according to some rule.
  • w (torch.Tensor) – Strengths of synapses.
  • b (torch.Tensor) – Target population bias.
  • wmin (float) – Minimum allowed value on the connection weights.
  • wmax (float) – Maximum allowed value on the connection weights.
  • norm (float) – Total weight per target neuron normalization constant.

compute(s: torch.Tensor) → torch.Tensor[source]

Compute pre-activations given spikes using layer weights.

Parameters:s – Incoming spikes.
Returns:Incoming spikes multiplied by synaptic weights (with or without decaying spike activation).
normalize() → None[source]

Normalize weights so each target neuron has sum of connection weights equal to self.norm.

reset_state_variables() → None[source]

Contains resetting logic for the connection.

update(**kwargs) → None[source]

Compute connection’s update rule.

class bindsnet.network.topology.LocalConnection3D(source: bindsnet.network.nodes.Nodes, target: bindsnet.network.nodes.Nodes, kernel_size: Union[int, Tuple[int, int, int]], stride: Union[int, Tuple[int, int, int]], n_filters: int, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.network.topology.AbstractConnection

Specifies a three-dimensional local connection between one or two populations of neurons supporting multi-channel inputs with shape (C, H, W, D). The logic differs from the original LocalConnection implementation (which used masks with ordinary dense connections).

Instantiates a LocalConnection3D object. The source population can be multi-channel. Neurons in the post-synaptic population are ordered by receptive field, i.e., if there are n_conv neurons in each post-synaptic patch, then the first n_conv neurons in the post-synaptic population correspond to the first receptive field, the second n_conv to the second receptive field, and so on.

Parameters:
  • source – A layer of nodes from which the connection originates.
  • target – A layer of nodes to which the connection connects.
  • kernel_size – Horizontal, vertical, and depth-wise size of convolutional kernels.
  • stride – Horizontal, vertical, and depth-wise stride for convolution.
  • n_filters – Number of locally connected filters per pre-synaptic region.
  • nu – Learning rate for both pre- and post-synaptic events. It also accepts a pair of tensors to individualize learning rates of each neuron. In this case, their shape should be the same size as the connection weights.
  • reduction – Method for reducing parameter updates along the minibatch dimension.
  • weight_decay – Constant multiple to decay weights by on each iteration.

Keyword arguments:

Parameters:
  • update_rule (LearningRule) – Modifies connection parameters according to some rule.
  • w (torch.Tensor) – Strengths of synapses.
  • b (torch.Tensor) – Target population bias.
  • wmin (float) – Minimum allowed value on the connection weights.
  • wmax (float) – Maximum allowed value on the connection weights.
  • norm (float) – Total weight per target neuron normalization constant.

compute(s: torch.Tensor) → torch.Tensor[source]

Compute pre-activations given spikes using layer weights.

Parameters:s – Incoming spikes.
Returns:Incoming spikes multiplied by synaptic weights (with or without decaying spike activation).
normalize() → None[source]

Normalize weights so each target neuron has sum of connection weights equal to self.norm.

reset_state_variables() → None[source]

Contains resetting logic for the connection.

update(**kwargs) → None[source]

Compute connection’s update rule.

class bindsnet.network.topology.MaxPoo3dConnection(source: bindsnet.network.nodes.Nodes, target: bindsnet.network.nodes.Nodes, kernel_size: Union[int, Tuple[int, int, int]], stride: Union[int, Tuple[int, int, int]] = 1, padding: Union[int, Tuple[int, int, int]] = 0, dilation: Union[int, Tuple[int, int, int]] = 1, **kwargs)[source]

Bases: bindsnet.network.topology.AbstractConnection

Specifies max-pooling synapses between one or two populations of neurons by keeping online estimates of maximally firing neurons.

Instantiates a MaxPool3dConnection object.

Parameters:
  • source – A layer of nodes from which the connection originates.
  • target – A layer of nodes to which the connection connects.
  • kernel_size – Depth-wise, horizontal and vertical size of convolutional kernels.
  • stride – Depth-wise, horizontal and vertical stride for convolution.
  • padding – Depth-wise, horizontal and vertical padding for convolution.
  • dilation – Depth-wise, horizontal and vertical dilation for convolution.

Keyword arguments:

Parameters:decay – Decay rate of online estimates of average firing activity.
compute(s: torch.Tensor) → torch.Tensor[source]

Compute max-pool pre-activations given spikes using online firing rate estimates.

Parameters:s – Incoming spikes.
Returns:Incoming spikes multiplied by synaptic weights (with or without decaying spike activation).
normalize() → None[source]

No weights -> no normalization.

reset_state_variables() → None[source]

Contains resetting logic for the connection.

update(**kwargs) → None[source]

Compute connection’s update rule.

class bindsnet.network.topology.MaxPool1dConnection(source: bindsnet.network.nodes.Nodes, target: bindsnet.network.nodes.Nodes, kernel_size: int, stride: int = 1, padding: int = 0, dilation: int = 1, **kwargs)[source]

Bases: bindsnet.network.topology.AbstractConnection

Specifies max-pooling synapses between one or two populations of neurons by keeping online estimates of maximally firing neurons.

Instantiates a MaxPool1dConnection object.

Parameters:
  • source – A layer of nodes from which the connection originates.
  • target – A layer of nodes to which the connection connects.
  • kernel_size – the size of 1-D convolutional kernel.
  • stride – stride for convolution.
  • padding – padding for convolution.
  • dilation – dilation for convolution.

Keyword arguments:

Parameters:decay – Decay rate of online estimates of average firing activity.
compute(s: torch.Tensor) → torch.Tensor[source]

Compute max-pool pre-activations given spikes using online firing rate estimates.

Parameters:s – Incoming spikes.
Returns:Incoming spikes multiplied by synaptic weights (with or without decaying spike activation).
normalize() → None[source]

No weights -> no normalization.

reset_state_variables() → None[source]

Contains resetting logic for the connection.

update(**kwargs) → None[source]

Compute connection’s update rule.

class bindsnet.network.topology.MaxPool2dConnection(source: bindsnet.network.nodes.Nodes, target: bindsnet.network.nodes.Nodes, kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int]] = 1, padding: Union[int, Tuple[int, int]] = 0, dilation: Union[int, Tuple[int, int]] = 1, **kwargs)[source]

Bases: bindsnet.network.topology.AbstractConnection

Specifies max-pooling synapses between one or two populations of neurons by keeping online estimates of maximally firing neurons.

Instantiates a MaxPool2dConnection object.

Parameters:
  • source – A layer of nodes from which the connection originates.
  • target – A layer of nodes to which the connection connects.
  • kernel_size – Horizontal and vertical size of convolutional kernels.
  • stride – Horizontal and vertical stride for convolution.
  • padding – Horizontal and vertical padding for convolution.
  • dilation – Horizontal and vertical dilation for convolution.

Keyword arguments:

Parameters:decay – Decay rate of online estimates of average firing activity.
compute(s: torch.Tensor) → torch.Tensor[source]

Compute max-pool pre-activations given spikes using online firing rate estimates.

Parameters:s – Incoming spikes.
Returns:Incoming spikes multiplied by synaptic weights (with or without decaying spike activation).
normalize() → None[source]

No weights -> no normalization.

reset_state_variables() → None[source]

Contains resetting logic for the connection.

update(**kwargs) → None[source]

Compute connection’s update rule.
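
A hedged sketch of wiring a MaxPool2dConnection into a network follows; the 1x28x28 input shape, the 2x2 pooling window, and the decay value are illustrative choices, not requirements of the API.

from bindsnet.network import Network, nodes, topology

network = Network(dt=1.0)

# Illustrative shapes: a 1x28x28 input pooled down to 1x14x14.
X = nodes.Input(shape=(1, 28, 28))
Y = nodes.LIFNodes(shape=(1, 14, 14))

pool = topology.MaxPool2dConnection(
    source=X, target=Y, kernel_size=2, stride=2, decay=0.5
)

network.add_layer(layer=X, name='X')
network.add_layer(layer=Y, name='Y')
network.add_connection(connection=pool, source='X', target='Y')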

class bindsnet.network.topology.MeanFieldConnection(source: bindsnet.network.nodes.Nodes, target: bindsnet.network.nodes.Nodes, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.network.topology.AbstractConnection

A connection between one or two populations of neurons which computes a summary of the pre-synaptic population to use as weighted input to the post-synaptic population.

Instantiates a MeanFieldConnection object.

Parameters:
  • source – A layer of nodes from which the connection originates.
  • target – A layer of nodes to which the connection connects.
  • nu – Learning rate for both pre- and post-synaptic events. Also accepts a pair of tensors to individualize the learning rate of each neuron; in that case, their shape should match that of the connection weights.
  • weight_decay – Constant multiple to decay weights by on each iteration.

Keyword arguments:

Parameters:
  • update_rule (LearningRule) – Modifies connection parameters according to some rule.
  • w (Union[float, torch.Tensor]) – Strengths of synapses. Can be a single value or a tensor of size target.
  • wmin (Union[float, torch.Tensor]) – Minimum allowed value(s) on the connection weights. Single value, or tensor of the same size as w.
  • wmax (Union[float, torch.Tensor]) – Maximum allowed value(s) on the connection weights. Single value, or tensor of the same size as w.
  • norm (float) – Total weight per target neuron normalization constant.
compute(s: torch.Tensor) → torch.Tensor[source]

Compute pre-activations given spikes using layer weights.

Parameters:s – Incoming spikes.
Returns:Incoming spikes multiplied by synaptic weights (with or without decaying spike activation).
normalize() → None[source]

Normalize weights so each target neuron has sum of connection weights equal to self.norm.

reset_state_variables() → None[source]

Contains resetting logic for the connection.

update(**kwargs) → None[source]

Compute connection’s update rule.
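
As a minimal sketch, a MeanFieldConnection can broadcast a summary of a source population to a differently sized target population. The layer sizes and the weight value below are illustrative assumptions; the tensor-of-size-target form of w from the keyword list above is used.

import torch

from bindsnet.network import nodes, topology

# Illustrative sizes; w is given as a tensor of size target.
X = nodes.Input(100)
Y = nodes.LIFNodes(50)

C = topology.MeanFieldConnection(
    source=X, target=Y, w=0.1 * torch.ones(Y.n), wmin=0.0, wmax=1.0
)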

class bindsnet.network.topology.SparseConnection(source: bindsnet.network.nodes.Nodes, target: bindsnet.network.nodes.Nodes, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = None, **kwargs)[source]

Bases: bindsnet.network.topology.AbstractConnection

Specifies sparse synapses between one or two populations of neurons.

Instantiates a Connection object with sparse weights.

Parameters:
  • source – A layer of nodes from which the connection originates.
  • target – A layer of nodes to which the connection connects.
  • nu – Learning rate for both pre- and post-synaptic events. Also accepts a pair of tensors to individualize the learning rate of each neuron; in that case, their shape should match that of the connection weights.
  • reduction – Method for reducing parameter updates along the minibatch dimension.
  • weight_decay – Constant multiple to decay weights by on each iteration.

Keyword arguments:

Parameters:
  • w (torch.Tensor) – Strengths of synapses. Must be in torch.sparse format.
  • sparsity (float) – Fraction of sparse connections to use.
  • update_rule (LearningRule) – Modifies connection parameters according to some rule.
  • wmin (float) – Minimum allowed value on the connection weights.
  • wmax (float) – Maximum allowed value on the connection weights.
  • norm (float) – Total weight per target neuron normalization constant.
compute(s: torch.Tensor) → torch.Tensor[source]

Compute pre-activations given spikes using the sparse layer weights.

Parameters:s – Incoming spikes.
Returns:Incoming spikes multiplied by synaptic weights (with or without decaying spike activation).
normalize() → None[source]

Normalize weights along the first axis according to total weight per target neuron.

reset_state_variables() → None[source]

Contains resetting logic for the connection.

update(**kwargs) → None[source]

Compute connection’s update rule.
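
A minimal construction sketch follows. Per the keyword arguments above, either a ready-made torch.sparse weight tensor (w) or a sparsity fraction is supplied; the layer sizes and the 0.9 sparsity value here are illustrative assumptions.

from bindsnet.network import nodes, topology

X = nodes.Input(100)
Y = nodes.LIFNodes(100)

# Let the connection generate sparse random weights from the given
# sparsity fraction (illustrative value).
C = topology.SparseConnection(source=X, target=Y, sparsity=0.9, wmin=0.0, wmax=1.0)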

Module contents

class bindsnet.network.Network(dt: float = 1.0, batch_size: int = 1, learning: bool = True, reward_fn: Optional[Type[bindsnet.learning.reward.AbstractReward]] = None)[source]

Bases: torch.nn.modules.module.Module

Central object of the bindsnet package. Responsible for the simulation and interaction of nodes and connections.

Example:

import torch
import matplotlib.pyplot as plt

from bindsnet         import encoding
from bindsnet.network import Network, nodes, topology, monitors

network = Network(dt=1.0)  # Instantiates network.

X = nodes.Input(100)  # Input layer.
Y = nodes.LIFNodes(100)  # Layer of LIF neurons.
C = topology.Connection(source=X, target=Y, w=torch.rand(X.n, Y.n))  # Connection from X to Y.

# Spike monitor objects.
M1 = monitors.Monitor(obj=X, state_vars=['s'])
M2 = monitors.Monitor(obj=Y, state_vars=['s'])

# Add everything to the network object.
network.add_layer(layer=X, name='X')
network.add_layer(layer=Y, name='Y')
network.add_connection(connection=C, source='X', target='Y')
network.add_monitor(monitor=M1, name='X')
network.add_monitor(monitor=M2, name='Y')

# Create Poisson-distributed spike train inputs.
data = 15 * torch.rand(100)  # Generate random Poisson rates for 100 input neurons.
train = encoding.poisson(datum=data, time=5000)  # Encode input as 5000ms Poisson spike trains.

# Simulate network on generated spike trains.
inputs = {'X' : train}  # Create inputs mapping.
network.run(inputs=inputs, time=5000)  # Run network simulation.

# Plot spikes of input and output layers.
spikes = {'X' : M1.get('s'), 'Y' : M2.get('s')}

fig, axes = plt.subplots(2, 1, figsize=(12, 7))
for i, layer in enumerate(spikes):
    axes[i].matshow(spikes[layer], cmap='binary')
    axes[i].set_title('%s spikes' % layer)
    axes[i].set_xlabel('Time'); axes[i].set_ylabel('Index of neuron')
    axes[i].set_xticks(()); axes[i].set_yticks(())
    axes[i].set_aspect('auto')

plt.tight_layout(); plt.show()

Initializes network object.

Parameters:
  • dt – Simulation timestep.
  • batch_size – Mini-batch size.
  • learning – Whether to allow connection updates. True by default.
  • reward_fn – Optional class allowing for modification of reward in case of reward-modulated learning.
add_connection(connection: bindsnet.network.topology.AbstractConnection, source: str, target: str) → None[source]

Adds a connection between layers of nodes to the network.

Parameters:
  • connection – An instance of class Connection.
  • source – Logical name of the connection’s source layer.
  • target – Logical name of the connection’s target layer.
add_layer(layer: bindsnet.network.nodes.Nodes, name: str) → None[source]

Adds a layer of nodes to the network.

Parameters:
  • layer – A subclass of the Nodes object.
  • name – Logical name of layer.
add_monitor(monitor: bindsnet.network.monitors.AbstractMonitor, name: str) → None[source]

Adds a monitor on a network object to the network.

Parameters:
  • monitor – An instance of class Monitor.
  • name – Logical name of monitor object.
clone() → bindsnet.network.network.Network[source]

Returns a cloned network object.

Returns:A copy of this network.
reset_state_variables() → None[source]

Reset state variables of objects in network.

run(inputs: Dict[str, torch.Tensor], time: int, one_step=False, **kwargs) → None[source]

Simulate network for given inputs and time.

Parameters:
  • inputs – Dictionary of Tensors of shape [time, *input_shape] or [time, batch_size, *input_shape].
  • time – Simulation time.
  • one_step – Whether to run the network in “feed-forward” mode, where inputs propagate all the way through the network in a single simulation time step. Layers are updated in the order they are added to the network.

Keyword arguments:

Parameters:
  • clamp (Dict[str, torch.Tensor]) – Mapping of layer names to boolean masks of neurons to clamp to spiking. The tensors have shape [n_neurons] or [time, n_neurons]; see the clamp sketch after the example below.
  • unclamp (Dict[str, torch.Tensor]) – Mapping of layer names to boolean masks of neurons to clamp to not spiking. The tensors have shape [n_neurons] or [time, n_neurons].
  • injects_v (Dict[str, torch.Tensor]) – Mapping of layer names to boolean masks of neurons to which voltage should be added. The tensors have shape [n_neurons] or [time, n_neurons].
  • reward (Union[float, torch.Tensor]) – Scalar value used in reward-modulated learning.
  • masks (Dict[Tuple[str], torch.Tensor]) – Mapping of connection names to boolean masks determining which weights to clamp to zero.
  • progress_bar (bool) – Show a progress bar while running the network.

Example:

import torch
import matplotlib.pyplot as plt

from bindsnet.network import Network
from bindsnet.network.nodes import Input
from bindsnet.network.monitors import Monitor

# Build simple network.
network = Network()
network.add_layer(Input(500), name='I')
network.add_monitor(Monitor(network.layers['I'], state_vars=['s']), 'I')

# Generate spikes by running Bernoulli trials on Uniform(0, 0.5) samples.
spikes = torch.bernoulli(0.5 * torch.rand(500, 500))

# Run network simulation.
network.run(inputs={'I' : spikes}, time=500)

# Look at input spiking activity.
spikes = network.monitors['I'].get('s')
plt.matshow(spikes, cmap='binary')
plt.xticks(()); plt.yticks(());
plt.xlabel('Time'); plt.ylabel('Neuron index')
plt.title('Input spiking')
plt.show()
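
The keyword arguments listed above are passed directly to run. The sketch below is an illustrative addition (layer names, sizes, and simulation length are assumptions, not values prescribed by the API): it clamps one neuron of a downstream layer to spike on every timestep via a boolean mask of shape [n_neurons].

import torch

from bindsnet.network import Network
from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import Connection

# Build a two-layer network with a randomly weighted connection.
network = Network()
network.add_layer(Input(100), name='X')
network.add_layer(LIFNodes(10), name='Y')
network.add_connection(
    Connection(network.layers['X'], network.layers['Y']), source='X', target='Y'
)

time = 250
inputs = {'X': torch.bernoulli(0.1 * torch.ones(time, 100)).byte()}

# Force neuron 0 of layer 'Y' to spike at every timestep.
clamp = {'Y': torch.zeros(10, dtype=torch.bool)}
clamp['Y'][0] = True

network.run(inputs=inputs, time=time, clamp=clamp)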
save(file_name: str) → None[source]

Serializes the network object to disk.

Parameters:file_name – Path to store serialized network object on disk.

Example:

import torch
import matplotlib.pyplot as plt

from pathlib          import Path
from bindsnet.network import *
from bindsnet.network import topology

# Build simple network.
network = Network(dt=1.0)

X = nodes.Input(100)  # Input layer.
Y = nodes.LIFNodes(100)  # Layer of LIF neurons.
C = topology.Connection(source=X, target=Y, w=torch.rand(X.n, Y.n))  # Connection from X to Y.

# Add everything to the network object.
network.add_layer(layer=X, name='X')
network.add_layer(layer=Y, name='Y')
network.add_connection(connection=C, source='X', target='Y')

# Save the network to disk.
network.save(str(Path.home()) + '/network.pt')
train(mode: bool = True) → torch.nn.modules.module.Module[source]

Sets the network in training mode.

Parameters:mode – Turn training on or off.
Returns:self as specified in torch.nn.Module.
bindsnet.network.load(file_name: str, map_location: str = 'cpu', learning: bool = None) → bindsnet.network.network.Network[source]

Loads serialized network object from disk.

Parameters:
  • file_name – Path to serialized network object on disk.
  • map_location – One of "cpu" or "cuda". Defaults to "cpu".
  • learning – Whether to load with learning enabled. Default loads value from disk.
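
For completeness, a small sketch of reloading the network saved in the Network.save example above; the path mirrors that example and is purely illustrative.

from pathlib import Path

from bindsnet.network import load

# Reload on CPU with learning disabled, e.g. for inference-style runs.
network = load(str(Path.home()) + '/network.pt', map_location='cpu', learning=False)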