bindsnet.learning package

Submodules

bindsnet.learning.learning module

class bindsnet.learning.learning.Hebbian(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.learning.learning.LearningRule

Simple Hebbian learning rule. Pre- and post-synaptic updates are both positive.

Constructor for Hebbian learning rule.

Parameters:
  • connection – An AbstractConnection object whose weights the Hebbian learning rule will modify.
  • nu – Single or pair of learning rates for pre- and post-synaptic events, respectively. Also accepts a pair of tensors to individualize the learning rate of each neuron; in that case, their shape must match the shape of the connection weights.
  • reduction – Method for reducing parameter updates along the batch dimension.
  • weight_decay – Coefficient controlling rate of decay of the weights each iteration.
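The update can be sketched for a single synapse as follows. This is a minimal scalar sketch, not the bindsnet implementation; the function name and trace arguments are illustrative.

```python
# Scalar sketch of a Hebbian update for one synapse: both the pre- and
# the post-synaptic contributions are positive, and the weight passively
# decays by `weight_decay` each iteration.

def hebbian_update(w, pre_spike, post_spike, pre_trace, post_trace,
                   nu=(1e-4, 1e-4), weight_decay=0.0):
    w = w * (1.0 - weight_decay)          # weight decay each iteration
    w += nu[0] * post_trace * pre_spike   # positive update on a pre spike
    w += nu[1] * pre_trace * post_spike   # positive update on a post spike
    return w

w = hebbian_update(0.5, pre_spike=0.0, post_spike=1.0,
                   pre_trace=0.8, post_trace=0.3)
```

Because both terms carry a plus sign, correlated pre/post activity can only grow the weight; only `weight_decay` pulls it back down.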
class bindsnet.learning.learning.LearningRule(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: abc.ABC

Abstract base class for learning rules.

Abstract constructor for the LearningRule object.

Parameters:
  • connection – An AbstractConnection object.
  • nu – Single or pair of learning rates for pre- and post-synaptic events, respectively. Also accepts a pair of tensors to individualize the learning rate of each neuron; in that case, their shape must match the shape of the connection weights.
  • reduction – Method for reducing parameter updates along the batch dimension.
  • weight_decay – Coefficient controlling rate of decay of the weights each iteration.
update() → None[source]

Abstract method for a learning rule update.
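The subclassing contract can be mirrored in a self-contained sketch (hypothetical class names, modeled on the pattern rather than copied from the bindsnet code): every concrete rule must provide update().

```python
from abc import ABC, abstractmethod

class RuleBase(ABC):
    """Toy analogue of a LearningRule-style abstract base class."""

    def __init__(self, nu=None, weight_decay=0.0):
        # a single float learning rate is broadcast to a (pre, post) pair
        self.nu = (nu, nu) if isinstance(nu, float) else nu
        self.weight_decay = weight_decay

    @abstractmethod
    def update(self):
        """Concrete rules must implement the weight update."""

class ConstantRule(RuleBase):
    """Toy rule: returns a fixed weight increment, ignoring activity."""

    def update(self):
        return 0.01

rule = ConstantRule(nu=1e-2)
```

Instantiating the base class directly raises TypeError, which is how the ABC machinery enforces that subclasses supply update().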

class bindsnet.learning.learning.MSTDP(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.learning.learning.LearningRule

Reward-modulated STDP. Adapted from (Florian 2007).

Constructor for MSTDP learning rule.

Parameters:
  • connection – An AbstractConnection object whose weights the MSTDP learning rule will modify.
  • nu – Single or pair of learning rates for pre- and post-synaptic events, respectively. Also accepts a pair of tensors to individualize the learning rate of each neuron; in that case, their shape must match the shape of the connection weights.
  • reduction – Method for reducing parameter updates along the minibatch dimension.
  • weight_decay – Coefficient controlling rate of decay of the weights each iteration.

Keyword arguments:

Parameters:
  • tc_plus – Time constant for pre-synaptic firing trace.
  • tc_minus – Time constant for post-synaptic firing trace.
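A scalar sketch of the reward-modulated update (illustrative, not the bindsnet implementation): pre- and post-synaptic spike traces decay with tc_plus and tc_minus, and the resulting STDP term is gated by a global reward signal.

```python
import math

def mstdp_step(w, p_plus, p_minus, pre_spike, post_spike, reward,
               nu=1e-3, tc_plus=20.0, tc_minus=20.0, dt=1.0):
    p_plus = p_plus * math.exp(-dt / tc_plus) + pre_spike      # pre trace
    p_minus = p_minus * math.exp(-dt / tc_minus) + post_spike  # post trace
    # potentiate on post spikes, depress on pre spikes, scaled by reward
    dw = nu * reward * (p_plus * post_spike - p_minus * pre_spike)
    return w + dw, p_plus, p_minus

# pre spike at t=0, post spike at t=1: a causal pairing that gets rewarded
w, pp, pm = mstdp_step(0.5, 0.0, 0.0, pre_spike=1.0, post_spike=0.0, reward=1.0)
w, pp, pm = mstdp_step(w, pp, pm, pre_spike=0.0, post_spike=1.0, reward=1.0)
```

With reward = 0 the weight never moves, which is the defining difference from unmodulated STDP.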
class bindsnet.learning.learning.MSTDPET(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.learning.learning.LearningRule

Reward-modulated STDP with eligibility trace. Adapted from (Florian 2007).

Constructor for MSTDPET learning rule.

Parameters:
  • connection – An AbstractConnection object whose weights the MSTDPET learning rule will modify.
  • nu – Single or pair of learning rates for pre- and post-synaptic events, respectively. Also accepts a pair of tensors to individualize the learning rate of each neuron; in that case, their shape must match the shape of the connection weights.
  • reduction – Method for reducing parameter updates along the minibatch dimension.
  • weight_decay – Coefficient controlling rate of decay of the weights each iteration.

Keyword arguments:

Parameters:
  • tc_plus (float) – Time constant for pre-synaptic firing trace.
  • tc_minus (float) – Time constant for post-synaptic firing trace.
  • tc_e_trace (float) – Time constant for the eligibility trace.
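The eligibility-trace variant can be sketched the same way (illustrative scalar code, not the bindsnet implementation): the STDP term feeds an eligibility trace that decays with tc_e_trace, and the weight change is the reward times that trace.

```python
import math

def mstdpet_step(w, e, p_plus, p_minus, pre_spike, post_spike, reward,
                 nu=1e-3, tc_plus=20.0, tc_minus=20.0,
                 tc_e_trace=25.0, dt=1.0):
    p_plus = p_plus * math.exp(-dt / tc_plus) + pre_spike
    p_minus = p_minus * math.exp(-dt / tc_minus) + post_spike
    # the STDP term accumulates into a decaying eligibility trace
    e = e * math.exp(-dt / tc_e_trace) \
        + (p_plus * post_spike - p_minus * pre_spike)
    return w + nu * reward * e, e, p_plus, p_minus

# with zero reward the eligibility trace builds up but w is unchanged,
# so a later reward can still credit this pre/post pairing
w, e, pp, pm = mstdpet_step(0.5, 0.0, 1.0, 0.0,
                            pre_spike=0.0, post_spike=1.0, reward=0.0)
```

The trace lets reward arrive after the spike pairing and still be assigned to it, which plain MSTDP cannot do.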

class bindsnet.learning.learning.NoOp(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.learning.learning.LearningRule

Learning rule with no effect.

Abstract constructor for the LearningRule object.

Parameters:
  • connection – An AbstractConnection object.
  • nu – Single or pair of learning rates for pre- and post-synaptic events, respectively. Also accepts a pair of tensors to individualize the learning rate of each neuron; in that case, their shape must match the shape of the connection weights.
  • reduction – Method for reducing parameter updates along the batch dimension.
  • weight_decay – Coefficient controlling rate of decay of the weights each iteration.
update(**kwargs) → None[source]

Abstract method for a learning rule update.

class bindsnet.learning.learning.PostPre(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.learning.learning.LearningRule

Simple STDP rule involving both pre- and post-synaptic spiking activity. By default, pre-synaptic update is negative and the post-synaptic update is positive.

Constructor for PostPre learning rule.

Parameters:
  • connection – An AbstractConnection object whose weights the PostPre learning rule will modify.
  • nu – Single or pair of learning rates for pre- and post-synaptic events, respectively. Also accepts a pair of tensors to individualize the learning rate of each neuron; in that case, their shape must match the shape of the connection weights.
  • reduction – Method for reducing parameter updates along the batch dimension.
  • weight_decay – Coefficient controlling rate of decay of the weights each iteration.
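The default sign convention can be sketched for a single synapse (illustrative, not the bindsnet implementation): a pre-synaptic spike depresses the weight in proportion to the post-synaptic trace, and a post-synaptic spike potentiates it in proportion to the pre-synaptic trace.

```python
def post_pre_update(w, pre_spike, post_spike, pre_trace, post_trace,
                    nu=(1e-4, 1e-2)):
    w -= nu[0] * post_trace * pre_spike   # depression on a pre spike
    w += nu[1] * pre_trace * post_spike   # potentiation on a post spike
    return w

# a post spike while the pre trace is still high -> potentiation
w = post_pre_update(0.5, pre_spike=0.0, post_spike=1.0,
                    pre_trace=0.6, post_trace=0.0)
```

Because the traces decay between spikes, the magnitude of each update encodes how recently the other side fired, which is the usual pair-based STDP timing dependence.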
class bindsnet.learning.learning.Rmax(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.learning.learning.LearningRule

Reward-modulated learning rule derived from reward maximization principles. Adapted from (Vasilaki et al., 2009).

Constructor for R-max learning rule.

Parameters:
  • connection – An AbstractConnection object whose weights the R-max learning rule will modify.
  • nu – Single or pair of learning rates for pre- and post-synaptic events, respectively. Also accepts a pair of tensors to individualize the learning rate of each neuron; in that case, their shape must match the shape of the connection weights.
  • reduction – Method for reducing parameter updates along the minibatch dimension.
  • weight_decay – Coefficient controlling rate of decay of the weights each iteration.

Keyword arguments:

Parameters:
  • tc_c (float) – Time constant for balancing naive Hebbian and policy gradient learning.
  • tc_e_trace (float) – Time constant for the eligibility trace.
class bindsnet.learning.learning.WeightDependentPostPre(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.learning.learning.LearningRule

STDP rule involving both pre- and post-synaptic spiking activity. The post-synaptic update is positive and the pre-synaptic update is negative; both depend on the magnitude of the synaptic weights.

Constructor for WeightDependentPostPre learning rule.

Parameters:
  • connection – An AbstractConnection object whose weights the WeightDependentPostPre learning rule will modify.
  • nu – Single or pair of learning rates for pre- and post-synaptic events, respectively. Also accepts a pair of tensors to individualize the learning rate of each neuron; in that case, their shape must match the shape of the connection weights.
  • reduction – Method for reducing parameter updates along the batch dimension.
  • weight_decay – Coefficient controlling rate of decay of the weights each iteration.
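The weight dependence can be sketched with soft bounds (illustrative scalar code; wmin and wmax are assumed bound parameters, not taken from this page): depression scales with the distance to the lower bound and potentiation with the distance to the upper bound, so updates shrink as the weight approaches either bound.

```python
def wd_post_pre_update(w, pre_spike, post_spike, pre_trace, post_trace,
                       nu=(1e-4, 1e-2), wmin=0.0, wmax=1.0):
    # depression vanishes as w -> wmin, potentiation vanishes as w -> wmax
    w -= nu[0] * post_trace * pre_spike * (w - wmin)
    w += nu[1] * pre_trace * post_spike * (wmax - w)
    return w

# a weight already at the upper bound is no longer potentiated
w = wd_post_pre_update(1.0, pre_spike=0.0, post_spike=1.0,
                       pre_trace=0.6, post_trace=0.0)
```

This soft-bound form keeps weights inside [wmin, wmax] without a hard clamp, unlike the plain PostPre rule above.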

bindsnet.learning.reward module

class bindsnet.learning.reward.AbstractReward[source]

Bases: abc.ABC

Abstract base class for reward computation.

compute(**kwargs) → None[source]

Computes/modifies reward.

update(**kwargs) → None[source]

Updates internal variables needed to modify reward. Usually called once per episode.
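The contract can be mirrored in a self-contained sketch (hypothetical class names, modeled on the pattern rather than copied from bindsnet): compute() produces or modifies the reward each step, and update() refreshes internal state once per episode.

```python
from abc import ABC, abstractmethod

class RewardBase(ABC):
    """Toy analogue of an AbstractReward-style base class."""

    @abstractmethod
    def compute(self, **kwargs):
        """Compute/modify the reward for the current step."""

    @abstractmethod
    def update(self, **kwargs):
        """Refresh internal variables; usually called once per episode."""

class PassThroughReward(RewardBase):
    """Toy reward: returns the raw reward unchanged and keeps no state."""

    def compute(self, reward=0.0, **kwargs):
        return reward

    def update(self, **kwargs):
        pass

r = PassThroughReward()
```

Reward-modulated rules such as MSTDP consume whatever compute() returns, so a subclass like this is the extension point for shaping or predicting reward.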

class bindsnet.learning.reward.MovingAvgRPE(**kwargs)[source]

Bases: bindsnet.learning.reward.AbstractReward

Computes reward prediction error (RPE) based on an exponential moving average (EMA) of past rewards.

Constructor for EMA reward prediction error.

compute(**kwargs) → torch.Tensor[source]

Computes the reward prediction error using EMA.

Keyword arguments:

Parameters:
  • reward (Union[float, torch.Tensor]) – Current reward.

Returns: Reward prediction error.
update(**kwargs) → None[source]

Updates the EMAs. Called once per episode.

Keyword arguments:

Parameters:
  • accumulated_reward (Union[float, torch.Tensor]) – Reward accumulated over one episode.
  • steps (int) – Steps in that episode.
  • ema_window (float) – Width of the averaging window.
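The computation can be sketched as follows (illustrative names, not the bindsnet implementation): the predicted reward is an exponential moving average of the mean per-step reward of past episodes, and the RPE is the current reward minus that prediction.

```python
class EMARewardPredictionError:
    """Toy EMA-based reward prediction error."""

    def __init__(self):
        self.reward_predict = 0.0   # EMA of mean per-step reward

    def compute(self, reward):
        # reward prediction error = actual reward - predicted reward
        return reward - self.reward_predict

    def update(self, accumulated_reward, steps, ema_window=10.0):
        reward = accumulated_reward / steps   # mean reward per step
        alpha = 1.0 / ema_window              # EMA smoothing factor
        self.reward_predict += alpha * (reward - self.reward_predict)

rpe = EMARewardPredictionError()
rpe.update(accumulated_reward=10.0, steps=10, ema_window=10.0)
err = rpe.compute(1.0)
```

A larger ema_window makes the prediction adapt more slowly, so surprising episodes produce larger prediction errors for longer.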

Module contents

class bindsnet.learning.LearningRule(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: abc.ABC

Abstract base class for learning rules.

Abstract constructor for the LearningRule object.

Parameters:
  • connection – An AbstractConnection object.
  • nu – Single or pair of learning rates for pre- and post-synaptic events, respectively. Also accepts a pair of tensors to individualize the learning rate of each neuron; in that case, their shape must match the shape of the connection weights.
  • reduction – Method for reducing parameter updates along the batch dimension.
  • weight_decay – Coefficient controlling rate of decay of the weights each iteration.
update() → None[source]

Abstract method for a learning rule update.

class bindsnet.learning.NoOp(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.learning.learning.LearningRule

Learning rule with no effect.

Abstract constructor for the LearningRule object.

Parameters:
  • connection – An AbstractConnection object.
  • nu – Single or pair of learning rates for pre- and post-synaptic events, respectively. Also accepts a pair of tensors to individualize the learning rate of each neuron; in that case, their shape must match the shape of the connection weights.
  • reduction – Method for reducing parameter updates along the batch dimension.
  • weight_decay – Coefficient controlling rate of decay of the weights each iteration.
update(**kwargs) → None[source]

Abstract method for a learning rule update.

class bindsnet.learning.PostPre(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.learning.learning.LearningRule

Simple STDP rule involving both pre- and post-synaptic spiking activity. By default, pre-synaptic update is negative and the post-synaptic update is positive.

Constructor for PostPre learning rule.

Parameters:
  • connection – An AbstractConnection object whose weights the PostPre learning rule will modify.
  • nu – Single or pair of learning rates for pre- and post-synaptic events, respectively. Also accepts a pair of tensors to individualize the learning rate of each neuron; in that case, their shape must match the shape of the connection weights.
  • reduction – Method for reducing parameter updates along the batch dimension.
  • weight_decay – Coefficient controlling rate of decay of the weights each iteration.
class bindsnet.learning.WeightDependentPostPre(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.learning.learning.LearningRule

STDP rule involving both pre- and post-synaptic spiking activity. The post-synaptic update is positive and the pre-synaptic update is negative; both depend on the magnitude of the synaptic weights.

Constructor for WeightDependentPostPre learning rule.

Parameters:
  • connection – An AbstractConnection object whose weights the WeightDependentPostPre learning rule will modify.
  • nu – Single or pair of learning rates for pre- and post-synaptic events, respectively. Also accepts a pair of tensors to individualize the learning rate of each neuron; in that case, their shape must match the shape of the connection weights.
  • reduction – Method for reducing parameter updates along the batch dimension.
  • weight_decay – Coefficient controlling rate of decay of the weights each iteration.
class bindsnet.learning.Hebbian(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.learning.learning.LearningRule

Simple Hebbian learning rule. Pre- and post-synaptic updates are both positive.

Constructor for Hebbian learning rule.

Parameters:
  • connection – An AbstractConnection object whose weights the Hebbian learning rule will modify.
  • nu – Single or pair of learning rates for pre- and post-synaptic events, respectively. Also accepts a pair of tensors to individualize the learning rate of each neuron; in that case, their shape must match the shape of the connection weights.
  • reduction – Method for reducing parameter updates along the batch dimension.
  • weight_decay – Coefficient controlling rate of decay of the weights each iteration.
class bindsnet.learning.MSTDP(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.learning.learning.LearningRule

Reward-modulated STDP. Adapted from (Florian 2007).

Constructor for MSTDP learning rule.

Parameters:
  • connection – An AbstractConnection object whose weights the MSTDP learning rule will modify.
  • nu – Single or pair of learning rates for pre- and post-synaptic events, respectively. Also accepts a pair of tensors to individualize the learning rate of each neuron; in that case, their shape must match the shape of the connection weights.
  • reduction – Method for reducing parameter updates along the minibatch dimension.
  • weight_decay – Coefficient controlling rate of decay of the weights each iteration.

Keyword arguments:

Parameters:
  • tc_plus – Time constant for pre-synaptic firing trace.
  • tc_minus – Time constant for post-synaptic firing trace.
class bindsnet.learning.MSTDPET(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.learning.learning.LearningRule

Reward-modulated STDP with eligibility trace. Adapted from (Florian 2007).

Constructor for MSTDPET learning rule.

Parameters:
  • connection – An AbstractConnection object whose weights the MSTDPET learning rule will modify.
  • nu – Single or pair of learning rates for pre- and post-synaptic events, respectively. Also accepts a pair of tensors to individualize the learning rate of each neuron; in that case, their shape must match the shape of the connection weights.
  • reduction – Method for reducing parameter updates along the minibatch dimension.
  • weight_decay – Coefficient controlling rate of decay of the weights each iteration.

Keyword arguments:

Parameters:
  • tc_plus (float) – Time constant for pre-synaptic firing trace.
  • tc_minus (float) – Time constant for post-synaptic firing trace.
  • tc_e_trace (float) – Time constant for the eligibility trace.

class bindsnet.learning.Rmax(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]

Bases: bindsnet.learning.learning.LearningRule

Reward-modulated learning rule derived from reward maximization principles. Adapted from (Vasilaki et al., 2009).

Constructor for R-max learning rule.

Parameters:
  • connection – An AbstractConnection object whose weights the R-max learning rule will modify.
  • nu – Single or pair of learning rates for pre- and post-synaptic events, respectively. Also accepts a pair of tensors to individualize the learning rate of each neuron; in that case, their shape must match the shape of the connection weights.
  • reduction – Method for reducing parameter updates along the minibatch dimension.
  • weight_decay – Coefficient controlling rate of decay of the weights each iteration.

Keyword arguments:

Parameters:
  • tc_c (float) – Time constant for balancing naive Hebbian and policy gradient learning.
  • tc_e_trace (float) – Time constant for the eligibility trace.