bindsnet.learning package¶
Submodules¶
bindsnet.learning.learning module¶

class bindsnet.learning.learning.Hebbian(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]¶
Bases: bindsnet.learning.learning.LearningRule
Simple Hebbian learning rule. Pre- and postsynaptic updates are both positive.
Constructor for Hebbian learning rule.
Parameters:
 connection – An AbstractConnection object whose weights the Hebbian learning rule will modify.
 nu – Single or pair of learning rates for pre- and postsynaptic events. A pair of tensors may also be passed to give each neuron its own learning rate; in that case, their shape should match the shape of the connection weights.
 reduction – Method for reducing parameter updates along the batch dimension.
 weight_decay – Coefficient controlling the rate of weight decay each iteration.
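For intuition, the rule can be sketched for a single synapse in plain Python (an illustrative scalar sketch, not BindsNET's implementation; the function name and the `pre_trace`/`post_trace` arguments, standing in for the spike traces read from the connection's source and target, are hypothetical):

```python
def hebbian_update(w, pre_spike, post_spike, pre_trace, post_trace, nu=(1e-2, 1e-2)):
    """One Hebbian step for a single synapse: both contributions are positive."""
    nu_pre, nu_post = nu
    # A presynaptic spike paired with the postsynaptic trace increases the weight...
    w += nu_pre * pre_spike * post_trace
    # ...and so does a postsynaptic spike paired with the presynaptic trace.
    w += nu_post * post_spike * pre_trace
    return w
```

Either kind of spike can only push the weight upward, which is what distinguishes this rule from PostPre below.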

class bindsnet.learning.learning.LearningRule(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]¶
Bases: abc.ABC
Abstract base class for learning rules.
Abstract constructor for the LearningRule object.
Parameters:
 connection – An AbstractConnection object.
 nu – Single or pair of learning rates for pre- and postsynaptic events. A pair of tensors may also be passed to give each neuron its own learning rate; in that case, their shape should match the shape of the connection weights.
 reduction – Method for reducing parameter updates along the batch dimension.
 weight_decay – Coefficient controlling the rate of weight decay each iteration.
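The shape of this contract can be sketched with a toy ABC (all names hypothetical; BindsNET's actual class does more, but it carries the same constructor arguments documented above):

```python
from abc import ABC, abstractmethod

class ToyLearningRule(ABC):
    """Hypothetical stand-in mirroring the LearningRule constructor arguments."""

    def __init__(self, connection, nu=None, reduction=None, weight_decay=0.0):
        self.connection = connection
        # A single learning rate is broadcast into a (pre, post) pair.
        self.nu = tuple(nu) if isinstance(nu, (tuple, list)) else (nu, nu)
        self.reduction = reduction
        self.weight_decay = weight_decay

    @abstractmethod
    def update(self):
        """Concrete rules implement the per-step weight update here."""
```

Concrete rules such as Hebbian or PostPre then only need to supply `update`.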

class bindsnet.learning.learning.MSTDP(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]¶
Bases: bindsnet.learning.learning.LearningRule
Reward-modulated STDP. Adapted from (Florian 2007).
Constructor for MSTDP learning rule.
Parameters:
 connection – An AbstractConnection object whose weights the MSTDP learning rule will modify.
 nu – Single or pair of learning rates for pre- and postsynaptic events, respectively. A pair of tensors may also be passed to give each neuron its own learning rate; in that case, their shape should match the shape of the connection weights.
 reduction – Method for reducing parameter updates along the minibatch dimension.
 weight_decay – Coefficient controlling the rate of weight decay each iteration.
Keyword arguments:
 tc_plus (float) – Time constant for presynaptic firing trace.
 tc_minus (float) – Time constant for postsynaptic firing trace.
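A minimal scalar sketch of one reward-modulated STDP step, loosely after Florian (2007) and not BindsNET's code (function and variable names are hypothetical): the pre/post firing traces decay with `tc_plus`/`tc_minus`, and the STDP pairing term only changes the weight when gated by a scalar reward.

```python
import math

def mstdp_step(w, reward, p_plus, p_minus, pre_spike, post_spike,
               nu=1e-3, dt=1.0, tc_plus=20.0, tc_minus=20.0):
    # Decay the presynaptic trace and add the new presynaptic spike.
    p_plus = p_plus * math.exp(-dt / tc_plus) + pre_spike
    # Decay the postsynaptic trace and add the new postsynaptic spike.
    p_minus = p_minus * math.exp(-dt / tc_minus) + post_spike
    # Pairing term: potentiation (pre-trace x post-spike) minus
    # depression (post-trace x pre-spike), gated by the reward signal.
    w += nu * reward * (p_plus * post_spike - p_minus * pre_spike)
    return w, p_plus, p_minus
```

With `reward = 0` the weight is untouched no matter how the spikes pair, which is the defining difference from unmodulated STDP.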

class bindsnet.learning.learning.MSTDPET(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]¶
Bases: bindsnet.learning.learning.LearningRule
Reward-modulated STDP with eligibility trace. Adapted from (Florian 2007).
Constructor for MSTDPET learning rule.
Parameters:
 connection – An AbstractConnection object whose weights the MSTDPET learning rule will modify.
 nu – Single or pair of learning rates for pre- and postsynaptic events, respectively. A pair of tensors may also be passed to give each neuron its own learning rate; in that case, their shape should match the shape of the connection weights.
 reduction – Method for reducing parameter updates along the minibatch dimension.
 weight_decay – Coefficient controlling the rate of weight decay each iteration.
Keyword arguments:
 tc_plus (float) – Time constant for presynaptic firing trace.
 tc_minus (float) – Time constant for postsynaptic firing trace.
 tc_e_trace (float) – Time constant for the eligibility trace.
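The eligibility-trace variant can be sketched by extending the scalar MSTDP step (again an illustrative sketch with hypothetical names, not BindsNET's code): the pairing term is accumulated into a trace that decays with `tc_e_trace`, so a reward arriving later can still credit earlier spike pairings.

```python
import math

def mstdpet_step(w, reward, e_trace, p_plus, p_minus, pre_spike, post_spike,
                 nu=1e-3, dt=1.0, tc_plus=20.0, tc_minus=20.0, tc_e_trace=25.0):
    # Pre/post firing traces decay and accumulate incoming spikes.
    p_plus = p_plus * math.exp(-dt / tc_plus) + pre_spike
    p_minus = p_minus * math.exp(-dt / tc_minus) + post_spike
    # The eligibility trace low-pass filters the STDP pairing term...
    e_trace = (e_trace * math.exp(-dt / tc_e_trace)
               + (p_plus * post_spike - p_minus * pre_spike))
    # ...so reward delivered later can still credit earlier pairings.
    w += nu * reward * e_trace * dt
    return w, e_trace, p_plus, p_minus
```

Note that an unrewarded step leaves the weight unchanged but still writes the pairing into the eligibility trace.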

class bindsnet.learning.learning.NoOp(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]¶
Bases: bindsnet.learning.learning.LearningRule
Learning rule with no effect.
Abstract constructor for the LearningRule object.
Parameters:
 connection – An AbstractConnection object.
 nu – Single or pair of learning rates for pre- and postsynaptic events. A pair of tensors may also be passed to give each neuron its own learning rate; in that case, their shape should match the shape of the connection weights.
 reduction – Method for reducing parameter updates along the batch dimension.
 weight_decay – Coefficient controlling the rate of weight decay each iteration.

class bindsnet.learning.learning.PostPre(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]¶
Bases: bindsnet.learning.learning.LearningRule
Simple STDP rule involving both pre- and postsynaptic spiking activity. By default, the presynaptic update is negative and the postsynaptic update is positive.
Constructor for PostPre learning rule.
Parameters:
 connection – An AbstractConnection object whose weights the PostPre learning rule will modify.
 nu – Single or pair of learning rates for pre- and postsynaptic events. A pair of tensors may also be passed to give each neuron its own learning rate; in that case, their shape should match the shape of the connection weights.
 reduction – Method for reducing parameter updates along the batch dimension.
 weight_decay – Coefficient controlling the rate of weight decay each iteration.
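The sign convention can be sketched for a single synapse in plain Python (an illustrative sketch with hypothetical names, not BindsNET's implementation):

```python
def post_pre_update(w, pre_spike, post_spike, pre_trace, post_trace, nu=(1e-4, 1e-2)):
    """One PostPre-style STDP step for a single synapse."""
    nu_pre, nu_post = nu
    # Presynaptic spike arriving after recent postsynaptic activity: depression.
    w -= nu_pre * pre_spike * post_trace
    # Postsynaptic spike arriving after recent presynaptic activity: potentiation.
    w += nu_post * post_spike * pre_trace
    return w
```

So pre-before-post pairings strengthen the synapse while post-before-pre pairings weaken it, which is the classic STDP asymmetry.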

class bindsnet.learning.learning.Rmax(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]¶
Bases: bindsnet.learning.learning.LearningRule
Reward-modulated learning rule derived from reward maximization principles. Adapted from (Vasilaki et al., 2009).
Constructor for Rmax learning rule.
Parameters:
 connection – An AbstractConnection object whose weights the Rmax learning rule will modify.
 nu – Single or pair of learning rates for pre- and postsynaptic events, respectively. A pair of tensors may also be passed to give each neuron its own learning rate; in that case, their shape should match the shape of the connection weights.
 reduction – Method for reducing parameter updates along the minibatch dimension.
 weight_decay – Coefficient controlling the rate of weight decay each iteration.
Keyword arguments:
 tc_c (float) – Time constant for balancing naive Hebbian and policy gradient learning.
 tc_e_trace (float) – Time constant for the eligibility trace.

class bindsnet.learning.learning.WeightDependentPostPre(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]¶
Bases: bindsnet.learning.learning.LearningRule
STDP rule involving both pre- and postsynaptic spiking activity. The postsynaptic update is positive and the presynaptic update is negative, and both depend on the magnitude of the synaptic weights.
Constructor for WeightDependentPostPre learning rule.
Parameters:
 connection – An AbstractConnection object whose weights the WeightDependentPostPre learning rule will modify.
 nu – Single or pair of learning rates for pre- and postsynaptic events. A pair of tensors may also be passed to give each neuron its own learning rate; in that case, their shape should match the shape of the connection weights.
 reduction – Method for reducing parameter updates along the batch dimension.
 weight_decay – Coefficient controlling the rate of weight decay each iteration.
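One common way to make the updates weight-dependent is to scale each term by the distance to the corresponding weight bound, which can be sketched as follows (an illustrative sketch with hypothetical names and an assumed soft-bound scheme, not necessarily BindsNET's exact formula):

```python
def wd_post_pre_update(w, pre_spike, post_spike, pre_trace, post_trace,
                       nu=(1e-4, 1e-2), wmin=0.0, wmax=1.0):
    """Weight-dependent PostPre sketch: updates shrink near the weight bounds."""
    nu_pre, nu_post = nu
    # Depression scales with the distance to the lower bound...
    w -= nu_pre * pre_spike * post_trace * (w - wmin)
    # ...and potentiation with the distance to the upper bound.
    w += nu_post * post_spike * pre_trace * (wmax - w)
    return w
```

Weights near `wmax` receive vanishing potentiation and weights near `wmin` vanishing depression, so the weights stay inside the bounds without hard clipping.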
bindsnet.learning.reward module¶

class bindsnet.learning.reward.AbstractReward[source]¶
Bases: abc.ABC
Abstract base class for reward computation.

class bindsnet.learning.reward.MovingAvgRPE(**kwargs)[source]¶
Bases: bindsnet.learning.reward.AbstractReward
Computes reward prediction error (RPE) based on an exponential moving average (EMA) of past rewards.
Constructor for EMA reward prediction error.
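The computation can be sketched in a few lines (an illustrative sketch with hypothetical names; the smoothing factor `alpha` is an assumption, as BindsNET parameterizes the EMA via its own keyword arguments):

```python
def moving_avg_rpe(reward, ema, alpha=0.1):
    """Reward prediction error against an exponential moving average of rewards."""
    # RPE: how much the observed reward deviates from the running expectation.
    rpe = reward - ema
    # Fold the new reward into the exponential moving average.
    ema = ema + alpha * rpe
    return rpe, ema
```

A reward exactly matching the moving average yields zero RPE, so only surprising rewards drive modulated learning.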
Module contents¶

class bindsnet.learning.LearningRule(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]¶
Bases: abc.ABC
Abstract base class for learning rules.
Abstract constructor for the LearningRule object.
Parameters:
 connection – An AbstractConnection object.
 nu – Single or pair of learning rates for pre- and postsynaptic events. A pair of tensors may also be passed to give each neuron its own learning rate; in that case, their shape should match the shape of the connection weights.
 reduction – Method for reducing parameter updates along the batch dimension.
 weight_decay – Coefficient controlling the rate of weight decay each iteration.

class bindsnet.learning.NoOp(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]¶
Bases: bindsnet.learning.learning.LearningRule
Learning rule with no effect.
Abstract constructor for the LearningRule object.
Parameters:
 connection – An AbstractConnection object.
 nu – Single or pair of learning rates for pre- and postsynaptic events. A pair of tensors may also be passed to give each neuron its own learning rate; in that case, their shape should match the shape of the connection weights.
 reduction – Method for reducing parameter updates along the batch dimension.
 weight_decay – Coefficient controlling the rate of weight decay each iteration.

class bindsnet.learning.PostPre(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]¶
Bases: bindsnet.learning.learning.LearningRule
Simple STDP rule involving both pre- and postsynaptic spiking activity. By default, the presynaptic update is negative and the postsynaptic update is positive.
Constructor for PostPre learning rule.
Parameters:
 connection – An AbstractConnection object whose weights the PostPre learning rule will modify.
 nu – Single or pair of learning rates for pre- and postsynaptic events. A pair of tensors may also be passed to give each neuron its own learning rate; in that case, their shape should match the shape of the connection weights.
 reduction – Method for reducing parameter updates along the batch dimension.
 weight_decay – Coefficient controlling the rate of weight decay each iteration.

class bindsnet.learning.WeightDependentPostPre(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]¶
Bases: bindsnet.learning.learning.LearningRule
STDP rule involving both pre- and postsynaptic spiking activity. The postsynaptic update is positive and the presynaptic update is negative, and both depend on the magnitude of the synaptic weights.
Constructor for WeightDependentPostPre learning rule.
Parameters:
 connection – An AbstractConnection object whose weights the WeightDependentPostPre learning rule will modify.
 nu – Single or pair of learning rates for pre- and postsynaptic events. A pair of tensors may also be passed to give each neuron its own learning rate; in that case, their shape should match the shape of the connection weights.
 reduction – Method for reducing parameter updates along the batch dimension.
 weight_decay – Coefficient controlling the rate of weight decay each iteration.

class bindsnet.learning.Hebbian(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]¶
Bases: bindsnet.learning.learning.LearningRule
Simple Hebbian learning rule. Pre- and postsynaptic updates are both positive.
Constructor for Hebbian learning rule.
Parameters:
 connection – An AbstractConnection object whose weights the Hebbian learning rule will modify.
 nu – Single or pair of learning rates for pre- and postsynaptic events. A pair of tensors may also be passed to give each neuron its own learning rate; in that case, their shape should match the shape of the connection weights.
 reduction – Method for reducing parameter updates along the batch dimension.
 weight_decay – Coefficient controlling the rate of weight decay each iteration.

class bindsnet.learning.MSTDP(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]¶
Bases: bindsnet.learning.learning.LearningRule
Reward-modulated STDP. Adapted from (Florian 2007).
Constructor for MSTDP learning rule.
Parameters:
 connection – An AbstractConnection object whose weights the MSTDP learning rule will modify.
 nu – Single or pair of learning rates for pre- and postsynaptic events, respectively. A pair of tensors may also be passed to give each neuron its own learning rate; in that case, their shape should match the shape of the connection weights.
 reduction – Method for reducing parameter updates along the minibatch dimension.
 weight_decay – Coefficient controlling the rate of weight decay each iteration.
Keyword arguments:
 tc_plus (float) – Time constant for presynaptic firing trace.
 tc_minus (float) – Time constant for postsynaptic firing trace.

class bindsnet.learning.MSTDPET(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]¶
Bases: bindsnet.learning.learning.LearningRule
Reward-modulated STDP with eligibility trace. Adapted from (Florian 2007).
Constructor for MSTDPET learning rule.
Parameters:
 connection – An AbstractConnection object whose weights the MSTDPET learning rule will modify.
 nu – Single or pair of learning rates for pre- and postsynaptic events, respectively. A pair of tensors may also be passed to give each neuron its own learning rate; in that case, their shape should match the shape of the connection weights.
 reduction – Method for reducing parameter updates along the minibatch dimension.
 weight_decay – Coefficient controlling the rate of weight decay each iteration.
Keyword arguments:
 tc_plus (float) – Time constant for presynaptic firing trace.
 tc_minus (float) – Time constant for postsynaptic firing trace.
 tc_e_trace (float) – Time constant for the eligibility trace.

class bindsnet.learning.Rmax(connection: bindsnet.network.topology.AbstractConnection, nu: Union[float, Sequence[float], Sequence[torch.Tensor], None] = None, reduction: Optional[callable] = None, weight_decay: float = 0.0, **kwargs)[source]¶
Bases: bindsnet.learning.learning.LearningRule
Reward-modulated learning rule derived from reward maximization principles. Adapted from (Vasilaki et al., 2009).
Constructor for Rmax learning rule.
Parameters:
 connection – An AbstractConnection object whose weights the Rmax learning rule will modify.
 nu – Single or pair of learning rates for pre- and postsynaptic events, respectively. A pair of tensors may also be passed to give each neuron its own learning rate; in that case, their shape should match the shape of the connection weights.
 reduction – Method for reducing parameter updates along the minibatch dimension.
 weight_decay – Coefficient controlling the rate of weight decay each iteration.
Keyword arguments:
 tc_c (float) – Time constant for balancing naive Hebbian and policy gradient learning.
 tc_e_trace (float) – Time constant for the eligibility trace.