eprop_synapse_bsshslm_2020 – Synapse type for e-prop plasticity
===============================================================

Description
+++++++++++

``eprop_synapse_bsshslm_2020`` is an implementation of a connector model to
create synapses between postsynaptic neurons :math:`j` and presynaptic
neurons :math:`i` for eligibility propagation (e-prop) plasticity.

E-prop plasticity was originally introduced and implemented in TensorFlow in
[1]_.

The suffix ``_bsshslm_2020`` follows the NEST convention to indicate in the
model name the paper that introduced it by the first letter of the authors'
last names and the publication year.

The e-prop synapse collects the presynaptic spikes needed for calculating the
weight update. When it is time to update, it triggers the calculation of the
gradient, which is specific to the postsynaptic neuron and is thus defined
there. Eventually, it optimizes the weight with the specified optimizer.

E-prop synapses require archiving of continuous quantities. Therefore, e-prop
synapses can only be connected to neuron models that are capable of archiving.
So far, compatible models are ``eprop_iaf_bsshslm_2020``,
``eprop_iaf_adapt_bsshslm_2020``, and ``eprop_readout_bsshslm_2020``.

For more information on e-prop plasticity, see the documentation on the other
e-prop models:

* :doc:`eprop_iaf_bsshslm_2020<../models/eprop_iaf_bsshslm_2020/>`
* :doc:`eprop_iaf_adapt_bsshslm_2020<../models/eprop_iaf_adapt_bsshslm_2020/>`
* :doc:`eprop_readout_bsshslm_2020<../models/eprop_readout_bsshslm_2020/>`
* :doc:`eprop_learning_signal_connection_bsshslm_2020<../models/eprop_learning_signal_connection_bsshslm_2020/>`

For more information on the optimizers, see the documentation of the weight
optimizer:

* :doc:`weight_optimizer<../models/weight_optimizer/>`

Details on the event-based NEST implementation of e-prop can be found in [2]_.

.. warning::

   This synaptic plasticity rule does not take
   :ref:`precise spike timing ` into account.
   When calculating the weight update, the precise spike time part of the
   timestamp is ignored.

Parameters
++++++++++

The following parameters can be set in the status dictionary.

================ ======= =============== ======= ======================================================
**Common synapse parameters**
-------------------------------------------------------------------------------------------------------
Parameter        Unit    Math equivalent Default Description
================ ======= =============== ======= ======================================================
average_gradient Boolean                 False   If True, average the gradient over the learning window
optimizer                                {}      Dictionary of optimizer parameters
================ ======= =============== ======= ======================================================

============= ==== ========================= ======= =========================================================
**Individual synapse parameters**
--------------------------------------------------------------------------------------------------------------
Parameter     Unit Math equivalent           Default Description
============= ==== ========================= ======= =========================================================
delay         ms   :math:`d_{ji}`            1.0     Dendritic delay
tau_m_readout ms   :math:`\tau_\text{m,out}` 10.0    Time constant for low-pass filtering of eligibility trace
weight        pA   :math:`W_{ji}`            1.0     Initial value of synaptic weight
============= ==== ========================= ======= =========================================================

Recordables
+++++++++++

The following variables can be recorded.

- synaptic weight ``weight``

Usage
+++++

This model can only be used in combination with the other e-prop models; the
network architecture requires specific wiring, input, and output.
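Such wiring is typically expressed through parameter dictionaries. The following plain-Python sketch shows the shape these dictionaries might take: the parameter names come from the tables above, while the optimizer entries (``type``, ``eta``) and the variable names are assumptions based on the weight optimizer documentation, not a verified script.

```python
# Hypothetical configuration sketch (not a verified NEST script).
# Parameter names follow the tables in this document; the optimizer
# entries are assumptions based on the weight_optimizer documentation.

# Common synapse properties, conceptually set once on the synapse model,
# e.g. with nest.SetDefaults("eprop_synapse_bsshslm_2020", params_common):
params_common = {
    "average_gradient": False,       # do not average over the learning window
    "optimizer": {
        "type": "gradient_descent",  # assumed optimizer type
        "eta": 1e-4,                 # assumed learning rate
    },
}

# Individual synapse properties, conceptually passed as a syn_spec,
# e.g. with nest.Connect(pre, post, "all_to_all", syn_spec=syn_spec):
syn_spec = {
    "synapse_model": "eprop_synapse_bsshslm_2020",
    "weight": 1.0,          # W_ji, initial synaptic weight (pA)
    "delay": 1.0,           # d_ji, dendritic delay (ms)
    "tau_m_readout": 10.0,  # low-pass filter time constant (ms)
}
```

The complete, authoritative wiring is given by the e-prop example scripts referenced below.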
The usage is demonstrated in several
:doc:`supervised regression and classification tasks <../auto_examples/eprop_plasticity/index>`,
reproducing, among others, the original proof-of-concept tasks in [1]_.

Transmits
+++++++++

SpikeEvent, DSSpikeEvent

References
++++++++++

.. [1] Bellec G, Scherr F, Subramoney A, Hajek E, Salaj D, Legenstein R,
       Maass W (2020). A solution to the learning dilemma for recurrent
       networks of spiking neurons. Nature Communications, 11:3625.
       https://doi.org/10.1038/s41467-020-17236-y

.. [2] Korcsak-Gorzo A, Stapmanns J, Espinoza Valverde JA, Dahmen D,
       van Albada SJ, Bolten M, Diesmann M. Event-based implementation of
       eligibility propagation (in preparation)

See also
++++++++

:doc:`Synapse `, :doc:`E-Prop Plasticity `

Examples using this model
++++++++++++++++++++++++++

.. listexamples:: eprop_synapse_bsshslm_2020
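As an illustration of the ``tau_m_readout`` parameter listed in the table above, the low-pass filtering of the eligibility trace can be sketched with an exponential kernel :math:`\kappa = e^{-\Delta t/\tau_\text{m,out}}`. The following is a plain-Python sketch of this standard filtering principle, assuming the exponential form; it is not the NEST implementation.

```python
import math

# Illustrative sketch (not NEST source code): exponential low-pass
# filtering of an eligibility trace, as controlled by tau_m_readout.
def lowpass_filter(trace, dt=1.0, tau_m_readout=10.0):
    """Return the filtered trace: f[t] = kappa * f[t-1] + trace[t]."""
    kappa = math.exp(-dt / tau_m_readout)  # assumed decay factor per step
    filtered, f = [], 0.0
    for e in trace:
        f = kappa * f + e
        filtered.append(f)
    return filtered

# A single unit pulse decays by a factor exp(-dt/tau_m_readout) each step.
print(lowpass_filter([1.0, 0.0, 0.0]))
```

A larger ``tau_m_readout`` thus makes the weight update integrate the eligibility trace over a longer history.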