Binary neuron model

class gainfunction_erfc

#include <erfc_neuron.h>
Name: erfc_neuron - Binary stochastic neuron with complementary error function as activation function.
Description:
The erfc_neuron is an implementation of a binary neuron that is irregularly updated at Poisson time points. At each update point the total synaptic input h into the neuron is summed up and passed through a gain function g, whose output is interpreted as the probability that the neuron is in the active (1) state.
The gain function g used here is
\[ g(h) = 0.5 \, \mathrm{erfc}\!\left( \frac{h - \theta}{\sqrt{2}\,\sigma} \right). \]

This corresponds to a McCulloch-Pitts neuron receiving additional Gaussian noise with mean 0 and standard deviation sigma. The time constant tau_m is defined as the mean inter-update interval, which is drawn from an exponential distribution with this parameter.

To reproduce simulations with asynchronous update (similar to [1,2]) with this neuron, the time constant must be chosen as tau_m = dt*N, where dt is the simulation time step and N the number of neurons in the original simulation with asynchronous update. This ensures that a neuron is updated on average every tau_m ms. Since in the original papers [1,2] neurons are coupled with zero delay, this implementation follows that definition. It uses the update scheme described in [3] to maintain causality: the incoming events in time step t_i are taken into account at the beginning of the time step to calculate the gain function and to decide upon a transition. To obtain delayed coupling with delay d, the user has to specify the delay d+h upon connection, where h is the simulation time step.
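The gain function above can be sketched in a few lines of Python. This is an illustrative stand-alone sketch of the documented formula, not the NEST implementation; the parameter defaults are arbitrary:

```python
import math

def erfc_gain(h, theta=0.0, sigma=1.0):
    """Activation probability g(h) = 0.5 * erfc((h - theta) / (sqrt(2) * sigma))."""
    return 0.5 * math.erfc((h - theta) / (math.sqrt(2.0) * sigma))

# Exactly at threshold the neuron is active with probability 1/2,
# and the gain is symmetric around that point: g(theta+x) + g(theta-x) = 1.
print(erfc_gain(0.0, theta=0.0, sigma=1.0))  # 0.5
```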
Remarks:
This neuron uses spike events in a special way to convey its binary state to the target: it sends a spike only when its state makes a transition. On an up-transition it sends a spike with multiplicity 2, on a down-transition a spike with multiplicity 1. The decoding scheme relies on spikes with multiplicity larger than 1 being delivered consecutively, also in a parallel setting. Creating double connections between binary neurons destroys the decoding scheme, as this effectively duplicates every event. When using random connection routines it is therefore advisable to set the property 'multapses' to false. The neuron accepts several sources of currents, e.g. from a noise_generator.
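The transition encoding described above can be decoded on the receiving side roughly as follows. This is an illustrative sketch, assuming spike times arrive sorted and coincident spikes encode multiplicity; in NEST the decoding happens inside the receiving binary neuron:

```python
from itertools import groupby

def decode_spike_train(spike_times):
    """Recover the binary state trajectory from a sorted list of spike times:
    two coincident spikes (multiplicity 2) signal an up-transition to state 1,
    a single spike (multiplicity 1) a down-transition to state 0."""
    transitions = []
    for t, group in groupby(spike_times):
        multiplicity = len(list(group))
        transitions.append((t, 1 if multiplicity >= 2 else 0))
    return transitions

print(decode_spike_train([1.0, 1.0, 2.5, 4.0, 4.0]))  # [(1.0, 1), (2.5, 0), (4.0, 1)]
```

This also shows why a duplicated connection breaks the scheme: every down-transition spike would arrive twice and be misread as an up-transition.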
Parameters:
tau_m   ms   Membrane time constant (mean inter-update interval)
theta   mV   Threshold for sigmoidal activation function
sigma   mV   1/sqrt(2 pi) x inverse of maximal slope
References:
[1] Ginzburg I, Sompolinsky H (1994). Theory of correlations in stochastic neural networks. Physical Review E 50(4):3171. DOI: https://doi.org/10.1103/PhysRevE.50.3171
[2] McCulloch W, Pitts W (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5:115-133. DOI: https://doi.org/10.1007/BF02478259
[3] Morrison A, Diesmann M (2007). Maintaining causality in discrete time neuronal simulations. In: Lectures in Supercomputational Neuroscience, p. 267. Peter beim Graben, Changsong Zhou, Marco Thiel, Juergen Kurths (Eds.), Springer. DOI: https://doi.org/10.1007/978-3-540-73159-7_10
Sends: SpikeEvent
Receives: SpikeEvent, PotentialRequest
FirstVersion: May 2016
Authors: Jakob Jordan, Tobias Kuehn
SeeAlso: mcculloch_pitts_neuron, ginzburg_neuron
class gainfunction_ginzburg

#include <ginzburg_neuron.h>
Name: ginzburg_neuron - Binary stochastic neuron with sigmoidal activation function.
Description:
The ginzburg_neuron is an implementation of a binary neuron that is irregularly updated at Poisson time points. At each update point the total synaptic input h into the neuron is summed up and passed through a gain function g, whose output is interpreted as the probability that the neuron is in the active (1) state.
The gain function g used here is \( g(h) = c_1 h + c_2 \cdot 0.5 (1 + \tanh(c_3 (h-\theta))) \), with the output clipped to [0,1]. This allows one to obtain affine-linear (c1 != 0, c2 != 0, c3 = 0) or sigmoidal (c1 = 0, c2 = 1, c3 != 0) gain functions. The latter choice corresponds to the definition in [1], which gives this neuron model its name. The choice c1 = 0, c2 = 1, c3 = beta/2 corresponds to the Glauber dynamics [2], \( g(h) = 1 / (1 + \exp(-\beta (h-\theta))) \).

The time constant \( \tau_m \) is defined as the mean inter-update interval, which is drawn from an exponential distribution with this parameter. To reproduce simulations with asynchronous update [1], the time constant must be chosen as \( \tau_m = dt \cdot N \), where dt is the simulation time step and N the number of neurons in the original simulation with asynchronous update. This ensures that a neuron is updated on average every \( \tau_m \) ms. Since in the original paper [1] neurons are coupled with zero delay, this implementation follows that definition. It uses the update scheme described in [3] to maintain causality: the incoming events in time step \( t_i \) are taken into account at the beginning of the time step to calculate the gain function and to decide upon a transition. To obtain delayed coupling with delay d, the user has to specify the delay d+h upon connection, where h is the simulation time step.
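The parameterization and its Glauber special case can be checked numerically. This is an illustrative sketch of the documented formula, not the NEST implementation:

```python
import math

def ginzburg_gain(h, theta=0.0, c1=0.0, c2=1.0, c3=1.0):
    """g(h) = c1*h + c2*0.5*(1 + tanh(c3*(h - theta))), clipped to [0, 1]."""
    g = c1 * h + c2 * 0.5 * (1.0 + math.tanh(c3 * (h - theta)))
    return min(max(g, 0.0), 1.0)

def glauber_gain(h, theta=0.0, beta=2.0):
    """Glauber form g(h) = 1 / (1 + exp(-beta*(h - theta)))."""
    return 1.0 / (1.0 + math.exp(-beta * (h - theta)))

# The choice c1 = 0, c2 = 1, c3 = beta/2 reproduces the Glauber dynamics,
# since 0.5*(1 + tanh(x)) == 1/(1 + exp(-2x)).
beta = 2.0
for h in (-1.0, 0.0, 0.3, 2.0):
    assert abs(ginzburg_gain(h, c3=beta / 2.0) - glauber_gain(h, beta=beta)) < 1e-12
```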
Remarks:
This neuron uses spike events in a special way to convey its binary state to the target: it sends a spike only when its state makes a transition. On an up-transition it sends a spike with multiplicity 2, on a down-transition a spike with multiplicity 1. The decoding scheme relies on spikes with multiplicity larger than 1 being delivered consecutively, also in a parallel setting. Creating double connections between binary neurons destroys the decoding scheme, as this effectively duplicates every event. When using random connection routines it is therefore advisable to set the property 'multapses' to false. The neuron accepts several sources of currents, e.g. from a noise_generator.
Parameters:
tau_m   ms              Membrane time constant (mean inter-update interval)
theta   mV              Threshold for sigmoidal activation function
c1      probability/mV  Linear gain factor
c2      probability     Prefactor of sigmoidal gain
c3      1/mV            Slope factor of sigmoidal gain
References:
[1] Ginzburg I, Sompolinsky H (1994). Theory of correlations in stochastic neural networks. Physical Review E 50(4):3171. DOI: https://doi.org/10.1103/PhysRevE.50.3171
[2] Hertz J, Krogh A, Palmer R (1991). Introduction to the theory of neural computation. Addison-Wesley Publishing Company.
[3] Morrison A, Diesmann M (2007). Maintaining causality in discrete time neuronal simulations. In: Lectures in Supercomputational Neuroscience, p. 267. Peter beim Graben, Changsong Zhou, Marco Thiel, Juergen Kurths (Eds.), Springer. DOI: https://doi.org/10.1007/978-3-540-73159-7_10
Sends: SpikeEvent
Receives: SpikeEvent, PotentialRequest
FirstVersion: February 2010
Author: Moritz Helias
SeeAlso: pp_psc_delta
class gainfunction_mcculloch_pitts

#include <mcculloch_pitts_neuron.h>
Name: mcculloch_pitts_neuron - Binary deterministic neuron with Heaviside activation function.
Description:
The mcculloch_pitts_neuron is an implementation of a binary neuron that is irregularly updated at Poisson time points [1]. At each update point the total synaptic input h into the neuron is summed up and passed through a Heaviside gain function g(h) = H(h - theta), whose output is 1 if the input is above the threshold theta and 0 if it is below.

The time constant tau_m is defined as the mean inter-update interval, which is drawn from an exponential distribution with this parameter. To reproduce simulations with asynchronous update [1], the time constant must be chosen as tau_m = dt*N, where dt is the simulation time step and N the number of neurons in the original simulation with asynchronous update. This ensures that a neuron is updated on average every tau_m ms. Since in the original paper [1] neurons are coupled with zero delay, this implementation follows that definition. It uses the update scheme described in [3] to maintain causality: the incoming events in time step t_i are taken into account at the beginning of the time step to calculate the gain function and to decide upon a transition. To obtain delayed coupling with delay d, the user has to specify the delay d+h upon connection, where h is the simulation time step.
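The Heaviside gain and the tau_m = dt*N rule can be sketched as follows. This is an illustrative sketch; the value returned exactly at h == theta is a convention of this sketch and not taken from the source:

```python
def heaviside_gain(h, theta=0.0):
    """Deterministic gain g(h) = H(h - theta): 1 above threshold, 0 below.
    (The behavior exactly at h == theta is a convention of this sketch.)"""
    return 1 if h > theta else 0

# To emulate an asynchronous-update network of N neurons simulated with time
# step dt, choose tau_m = dt * N, so each neuron is updated on average once
# per sweep over the whole network.
dt, N = 0.1, 1000
tau_m = dt * N  # about 100.0 ms

print(heaviside_gain(0.5), heaviside_gain(-0.5))  # 1 0
```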
Remarks:
This neuron uses spike events in a special way to convey its binary state to the target: it sends a spike only when its state makes a transition. On an up-transition it sends a spike with multiplicity 2, on a down-transition a spike with multiplicity 1. The decoding scheme relies on spikes with multiplicity larger than 1 being delivered consecutively, also in a parallel setting. Creating double connections between binary neurons destroys the decoding scheme, as this effectively duplicates every event. When using random connection routines it is therefore advisable to set the property 'multapses' to false. The neuron accepts several sources of currents, e.g. from a noise_generator.
Parameters:
tau_m   ms   Membrane time constant (mean inter-update interval)
theta   mV   Threshold of Heaviside activation function
References:
[1] McCulloch W, Pitts W (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5:115-133. DOI: https://doi.org/10.1007/BF02478259
[2] Hertz J, Krogh A, Palmer R (1991). Introduction to the theory of neural computation. Addison-Wesley Publishing Company.
[3] Morrison A, Diesmann M (2007). Maintaining causality in discrete time neuronal simulations. In: Lectures in Supercomputational Neuroscience, p. 267. Peter beim Graben, Changsong Zhou, Marco Thiel, Juergen Kurths (Eds.), Springer. DOI: https://doi.org/10.1007/978-3-540-73159-7_10
Sends: SpikeEvent
Receives: SpikeEvent, PotentialRequest
FirstVersion: February 2013
Author: Moritz Helias
SeeAlso: pp_psc_delta