eprop_readout_bsshslm_2020 – Current-based leaky integrate readout neuron model with delta-shaped or exponentially filtered postsynaptic currents for e-prop plasticity¶
Description¶
eprop_readout_bsshslm_2020 is an implementation of an integrate-and-fire neuron model with delta-shaped postsynaptic currents, used as a readout neuron for eligibility propagation (e-prop) plasticity. E-prop plasticity was originally introduced and implemented in TensorFlow in [1]. The suffix _bsshslm_2020 follows the NEST convention of indicating in the model name the paper that introduced the model, by the first letters of the authors' last names and the publication year.
The membrane voltage time course \(v_j^t\) of the neuron \(j\) is given by:

\[v_j^t = \kappa v_j^{t-1} + \sum_i W_{ji}^\text{out} z_i^{t-1}\,, \qquad \kappa = e^{-\frac{\Delta t}{\tau_\text{m}}}\,,\]

where \(W_{ji}^\text{out}\) is the output synaptic weight matrix and \(z_i^{t-1}\) is the recurrent presynaptic spike state variable.
Descriptions of further parameters and variables can be found in the table below.
The spike state variable of a presynaptic neuron \(i\) is expressed by a Heaviside function:

\[z_i^t = H\left( v_i^t - v_\text{th} \right) \in \{ 0, 1 \}\,.\]

An additional state variable and the corresponding differential equation represent a piecewise constant external current.
See the documentation on the iaf_psc_delta neuron model for more information on the integration of the subthreshold dynamics.
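The membrane dynamics above can be sketched in plain Python (a minimal illustration of the discrete-time leaky integration with decay factor \(\kappa = e^{-\Delta t / \tau_\text{m}}\); the function name, weights, and spike trains are made-up examples, not part of the NEST API):

```python
import math

def readout_membrane_trace(spikes, w_out, tau_m=10.0, dt=1.0):
    """Discrete-time leaky integration of weighted presynaptic spikes.

    spikes: list of per-step spike vectors z_i^{t-1} (0/1 entries)
    w_out:  output weights W_ji^out for one readout neuron j
    """
    kappa = math.exp(-dt / tau_m)  # membrane decay factor per time step
    v, trace = 0.0, []
    for z in spikes:
        drive = sum(w * zi for w, zi in zip(w_out, z))
        # v_j^t = kappa * v_j^{t-1} + sum_i W_ji^out * z_i^{t-1}
        v = kappa * v + drive
        trace.append(v)
    return trace

# two presynaptic neurons, three time steps
trace = readout_membrane_trace([[1, 0], [0, 0], [0, 1]], w_out=[0.5, -0.2])
```

Between input spikes the voltage simply decays by \(\kappa\) per step; each incoming spike adds its synaptic weight to the membrane voltage.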
The change of the synaptic weight is calculated from the gradient \(g\) of the loss \(E\) with respect to the synaptic weight \(W_{ji}\), \(\frac{ \text{d} E }{ \text{d} W_{ji} }\), which depends on the low-pass filtered presynaptic spikes \(\bar{z}_i^{t-1}\) and the learning signal \(L_j^t\) emitted by the readout neurons:

\[\frac{ \text{d} E }{ \text{d} W_{ji} } = \sum_t L_j^t \bar{z}_i^{t-1}\,.\]

The presynaptic spike trains are low-pass filtered with the following exponential kernel:

\[\bar{z}_i^t = \mathcal{F}_\kappa\left( z_i^t \right) = \kappa\, \bar{z}_i^{t-1} + z_i^t\,.\]

Since readout neurons are leaky integrators without a spiking mechanism, the formula for computing the gradient contains neither a surrogate gradient / pseudo-derivative nor a firing-regularization term.
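The gradient accumulation can be sketched as follows (an illustrative plain-Python sketch, assuming the filtered trace follows the exponential kernel above and that \(L_j^t\) pairs with \(\bar{z}_i^{t-1}\); the function name and inputs are invented for illustration):

```python
import math

def eprop_readout_gradient(z_pre, learning_signal, tau_m=10.0, dt=1.0):
    """Sketch of dE/dW_ji = sum_t L_j^t * zbar_i^{t-1}, where
    zbar_i^t = kappa * zbar_i^{t-1} + z_i^t (exponential low-pass filter)."""
    kappa = math.exp(-dt / tau_m)
    zbar, grad = 0.0, 0.0
    for z, L in zip(z_pre, learning_signal):
        grad += L * zbar            # L_j^t pairs with the trace from t-1
        zbar = kappa * zbar + z     # low-pass filter the presynaptic spikes
    return grad

# one presynaptic spike train and a per-step learning signal
grad = eprop_readout_gradient([1, 0, 1], [0.0, 1.0, 1.0])
```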
The learning signal \(L_j^t\) is given by the non-plastic feedback weight matrix \(B_{jk}\) and the continuous error signal \(e_k^t\) emitted by readout neuron \(k\):

\[L_j^t = \sum_k B_{jk} e_k^t\,.\]
The error signal depends on the selected loss function. If the mean squared error loss is selected, then:

\[e_k^t = y_k^t - y_k^{*,t}\,,\]
where the readout signal \(y_k^t\) corresponds to the membrane voltage of readout neuron \(k\) and \(y_k^{*,t}\) is the real-valued target signal.
If the cross-entropy loss is selected, then:

\[e_k^t = \pi_k^t - \pi_k^{*,t}\,,\]
where the readout signal \(\pi_k^t\) corresponds to the softmax of the membrane voltage of readout neuron \(k\) and \(\pi_k^{*,t}\) is the one-hot encoded target signal.
Furthermore, the readout and target signal are zero before the onset of the learning window in each update interval.
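The two error signals and the resulting learning signal can be sketched in plain Python (the feedback weights, readout values, and targets below are made-up example numbers, not model defaults):

```python
import math

def mse_error(y, y_target):
    """e_k^t = y_k^t - y_k^{*,t} for the mean squared error loss."""
    return [yk - tk for yk, tk in zip(y, y_target)]

def cross_entropy_error(v, one_hot_target):
    """e_k^t = pi_k^t - pi_k^{*,t}, with pi_k^t the softmax of the
    readout membrane voltages (cross-entropy loss)."""
    exp_v = [math.exp(vk) for vk in v]
    norm = sum(exp_v)
    pi = [e / norm for e in exp_v]
    return [pk - tk for pk, tk in zip(pi, one_hot_target)]

def learning_signal(B_row, e):
    """L_j^t = sum_k B_jk * e_k^t for one recurrent neuron j."""
    return sum(b * ek for b, ek in zip(B_row, e))

e_mse = mse_error([1.0, 2.0], [0.5, 2.5])            # [0.5, -0.5]
e_ce = cross_entropy_error([0.0, 0.0], [1.0, 0.0])   # softmax is uniform here
L = learning_signal([1.0, -1.0], e_mse)
```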
For more information on e-prop plasticity, see the documentation on the other e-prop models.
Details on the event-based NEST implementation of e-prop can be found in [2].
Parameters¶
The following parameters can be set in the status dictionary.
Neuron parameters

| Parameter | Unit | Math equivalent | Default | Description |
|---|---|---|---|---|
| `C_m` | pF | \(C_\text{m}\) | 250.0 | Capacitance of the membrane |
| `E_L` | mV | \(E_\text{L}\) | 0.0 | Leak / resting membrane potential |
| `I_e` | pA | \(I_\text{e}\) | 0.0 | Constant external input current |
| `regular_spike_arrival` | Boolean | | True | If `True`, the input spikes arrive at the end of the time step, if `False` at the beginning (determines PSC scale) |
| `tau_m` | ms | \(\tau_\text{m}\) | 10.0 | Time constant of the membrane |
| `V_min` | mV | \(v_\text{min}\) | negative maximum value representable by a `double` | Absolute lower bound of the membrane voltage |
E-prop parameters

| Parameter | Unit | Math equivalent | Default | Description |
|---|---|---|---|---|
| `loss` | | \(E\) | `"mean_squared_error"` | Loss function [`"mean_squared_error"`, `"cross_entropy"`] |
Recordables¶
The following state variables evolve during simulation and can be recorded.
Neuron state variables and recordables

| State variable | Unit | Math equivalent | Initial value | Description |
|---|---|---|---|---|
| `V_m` | mV | \(v_j\) | 0.0 | Membrane voltage |
E-prop state variables and recordables

| State variable | Unit | Math equivalent | Initial value | Description |
|---|---|---|---|---|
| `error_signal` | mV | \(L_j\) | 0.0 | Error signal |
| `readout_signal` | mV | \(y_j\) | 0.0 | Readout signal |
| `readout_signal_unnorm` | mV | | 0.0 | Unnormalized readout signal |
| `target_signal` | mV | \(y^*_j\) | 0.0 | Target signal |
Usage¶
This model can only be used in combination with the other e-prop models, and the network architecture requires specific wiring, input, and output. Its usage is demonstrated in several supervised regression and classification tasks, reproducing, among others, the original proof-of-concept tasks in [1].
References¶
[1] Bellec G, Scherr F, Subramoney A, Hajek E, Salaj D, Legenstein R, Maass W (2020). A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications, 11:3625. https://doi.org/10.1038/s41467-020-17236-0

[2] Korcsak-Gorzo A, Stapmanns J, Espinoza Valverde JA, Dahmen D, van Albada SJ, Bolten M, Diesmann M. Event-based implementation of eligibility propagation (in preparation).
Sends¶
LearningSignalConnectionEvent, DelayedRateConnectionEvent
Receives¶
SpikeEvent, CurrentEvent, DelayedRateConnectionEvent, DataLoggingRequest
Examples using this model¶
Tutorial on learning to accumulate evidence with e-prop after Bellec et al. (2020)
Tutorial on learning to generate a lemniscate with e-prop after Bellec et al. (2020)
Tutorial on learning to generate handwritten text with e-prop after Bellec et al. (2020)
Tutorial on learning to generate sine waves with e-prop after Bellec et al. (2020)