Frequently asked questions

Installation

  1. If I compile NEST with :hxt_ref:`MPI` support, I get errors about ``SEEK_SET``, ``SEEK_CUR`` and ``SEEK_END`` being defined. This is a known issue in some MPI implementations. A solution is to add ``--with-debug="-DMPICH_IGNORE_CXX_SEEK"`` to the configure command line. More details about this problem can be found here.

  2. Configure warns that Makefile.in seems to ignore the ``--datarootdir`` setting and the installation fails because of permission errors. This problem is due to a change in autoconf 2.60, where the prefix directory for the NEST documentation can end up being empty during the installation. This leads to wrong installation paths for some components of NEST. If you have the GNU autotools installed, you can run ``./bootstrap.sh`` in the source directory followed by ``./configure``. If you don’t have the autotools, appending ``--datadir=PREFIX/share/nest`` with the same PREFIX as in the ``--prefix`` option should help.

  3. I get ‘Error: /ArgumentType in validate’ when compiling an extension. This is a known bug that has been fixed. Ask your local NEST dealer for a new pre-release; you need at least nest-1.9-7320.

  4. I get ‘collect2: ld returned 1 exit status, ld: -rpath can only be used when targeting Mac OS X 10.5 or later’. Please try to set the environment variable MACOSX_DEPLOYMENT_TARGET to 10.5 (``export MACOSX_DEPLOYMENT_TARGET=10.5``).

  5. IPython crashes with a strange error message as soon as I import ``nest``. If IPython crashes on ``import nest``, complaining about a non-aligned pointer being freed, you probably compiled NEST with a different version of g++ than the one used to build Python. Take a look at the information IPython prints when it starts up; it tells you which compiler was used. Then re-build NEST with the same compiler version.

  6. I get a segmentation fault when I use SciPy and PyNEST in the same script. Some versions of SciPy cause a segmentation fault when used together with PyNEST. A workaround for the problem is to import SciPy before PyNEST, as shown in the sketch below. See https://github.com/numpy/numpy/issues/2521 for the official bug report in NumPy.
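
     A minimal sketch of the workaround, assuming a plain Python script; only the import order matters:

       import scipy   # importing SciPy (or one of its submodules) first avoids the crash
       import nest    # PyNEST can then be imported and used as usual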

Where does data get stored

By default, the data files produced by NEST are stored in the directory from which NEST is called. The location can be changed by running nest.data_path = "/path/to/data" or by setting the environment variable NEST_DATA_PATH before NEST is started. Please note that the directory /path/to/data has to exist and will not be created. A common prefix for all data file names can be set by running nest.data_prefix = "prefix" or by setting the environment variable NEST_DATA_PREFIX. A short sketch is shown below.
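
A minimal sketch of setting both properties from Python; the directory and prefix names are placeholders, and the directory is created explicitly because NEST will not do so:

    import os
    import nest

    data_dir = "/path/to/data"            # placeholder path; adjust to your setup
    os.makedirs(data_dir, exist_ok=True)  # NEST expects the directory to exist

    nest.data_path = data_dir             # recorder output files are written here
    nest.data_prefix = "sim01_"           # placeholder prefix prepended to all data file names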

Neuron models

  1. I cannot see any of the conductance-based models. Where are they? Some neuron models need the GNU Scientific Library (GSL) to work, and the conductance-based models are among them. If your NEST installation does not have these models, you probably have no GSL or GSL development packages installed. To solve this problem, install the GSL and its development headers, then reconfigure and recompile NEST. You can verify the result with the quick check below.
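
     A quick check from Python is to look for one of these models in the list of available models. This is only a sketch: ``nest.Models()`` is available in most NEST releases, while newer versions may expose the list differently (e.g. ``nest.node_models``).

       import nest

       # 'iaf_cond_alpha' is one of the GSL-dependent, conductance-based models.
       if 'iaf_cond_alpha' in nest.Models():
           print('Conductance-based models are available.')
       else:
           print('NEST was built without the GSL; conductance-based models are missing.')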

Connections

  1. How can I create connections to multicompartment neurons? You need to create a synapse type with the proper receptor_type as in this example, which connects all 100 neurons in n to the first neuron in n:

    import nest

    # Look up the receptor IDs of the multicompartment model.
    syns = nest.GetDefaults('iaf_cond_alpha_mc')['receptor_types']
    # Create a synapse type targeting the distal excitatory receptor.
    nest.CopyModel('static_synapse', 'exc_dist_syn', {'receptor_type': syns['distal_exc']})
    n = nest.Create('iaf_cond_alpha_mc', 100)
    nest.Connect(n, n[:1], syn_spec='exc_dist_syn')
    nest.Simulate(10)
    

Questions and answers about precise neurons

  1. Q: Is it meaningful to compare the precise sequences of spikes generated by the simulations of a recurrent network using different solvers?

A: No, due to the chaotic nature of the dynamics, minor differences in the computer representation of the spike times lead to completely different spike sequences after a short time.

  2. Q: Does an event-driven algorithm which determines the precise spike times of a neuron by numerically evaluating a closed-form expression or an iterative procedure like Newton-Raphson lead to machine-independent spike sequences?

A: No. For example, if machine A uses “double” for the representation of floating point numbers and machine B uses “quad” precision, the spike sequences of the two simulations deviate after a short time. Even with the same representation of floating point values, results rapidly diverge if some library function like exp() is implemented in a slightly different way or the terms of mathematical expressions are reordered.

  3. Q: Given the non-reproducibility of spike sequences in network simulations, is there any meaningful way to talk about the accuracy of a solver?

A: Yes, even though network dynamics may be chaotic, for many neuron models relevant to Computational Neuroscience the dynamics of the single neuron is not. Examples are integrate-and-fire models with linear subthreshold dynamics and the AdEx model considered in Hanuschkin (2010). In these cases it is possible to study the accuracy of a solution of the single neuron dynamics.

  4. Q: Why are we investigating the performance of network simulations anyway?

A: A single neuron simulation is no challenge for modern processors in terms of memory consumption. The data fit into the fast cache memory and memory bandwidth is not an issue. In a network simulation, however, the run time of a simulation algorithm is to a large extent determined by the organization of the data flow between main memory and processor. Solvers may differ considerably in their demands on memory bandwidth. Therefore it is essential that integration algorithms are compared with respect to the run time of network simulations.

  5. Q: How can the efficiency of a solver be defined if accuracy is only accessible in single neuron simulations and run time is only of interest for network simulations?

A: Efficiency needs to be defined as the run time of a network simulation required to achieve a certain accuracy goal of a single neuron simulation with input statistics corresponding to the network simulation. This was developed and described in Morrison et al. (2007).

  6. Q: Given that network dynamics is chaotic anyway, why is it important that single neuron dynamics is accurately integrated?

A: Although the network dynamics is chaotic, in some cases mesoscopic measures of network activity can be affected by the quality of the single neuron solver. For example, Hansel et al. (1998) showed that a measure of network synchrony exhibits a considerable error if the single neuron dynamics is integrated using a grid-constrained algorithm. Without confidence in the precision of the single neuron solver we cannot interpret features observed on the network level or control for artifacts.

  7. Q: The biological system contains noise and any model is only an accurate description of nature to some degree. Why is it then important to be able to integrate a model with a precision of n digits?

A: This question is based on a mix-up between a scientific model and a simulation of the model. A simulation should always attempt to solve the equations of a model accurately, so that the scientist can be sure of the predictions of the model. Any noise terms or variability of parameters should be explicit constituents of the model, not of a particular simulation.

  8. Q: Does this mean that we should always simulate using the maximum precision implementations of neuron models?

A: No, for many scientific problems a limited precision is good enough. The fastest method delivering at least the required precision is the one of choice. In the case of chaotic dynamics there is generally no good reason to consider results produced by a neuron model implementation with high precision as being ‘more correct’ than those produced by a faster implementation with lower precision, as long as mesoscopic measures of interest remain unchanged. With a more accurate method at hand, the researcher can always carry out control simulations at higher precision to verify that the scientific results are robust with respect to the integration method.

  9. Q: Is there a fundamental difference between event-driven and time-driven algorithms in the reproducibility of the spike sequences of network simulations if the solvers do not miss any spikes?

A: No. In both cases the sequence of spike times is generally not reproducible by a different implementation or on a different machine because it depends on the details of the numerical implementation and the representation of floating point numbers.

  10. Q: Is there a fundamental difference in the accuracy of an event-driven algorithm and the time-driven algorithm presented in Hanuschkin (2010)?

A: Yes. In a class of integrate-and-fire neuron models with linear subthreshold dynamics the event-driven methods never miss a spike. The time-driven method presented in the study misses spikes with a low probability.

  11. Q: Is there a fundamental difference in the accuracy of an event-driven algorithm and the time-driven algorithm presented in Hanuschkin (2010) if the event-driven algorithm is used for a neuron model like the AdEx model, for which a spike prediction expression remains to be discovered?

A: No, in this case both types of algorithms rely on solvers moving forward with an adaptive step size which can theoretically miss spikes, but in practice does not, due to the explosive dynamics at threshold. As there is no difference in the accuracy, the faster algorithm should be chosen.

  12. Q: Why is the time-driven method for the AdEx model presented in Hanuschkin (2010) the preferred method if neither an event-driven nor a time-driven algorithm is known which theoretically excludes the loss of spikes?

A: The time-driven method is more efficient: it delivers the same accuracy in a shorter time because of a lower administrative overhead.

  13. Q: What is the rate at which spikes are missed in a typical large-scale neuronal network simulation of integrate-and-fire model neurons with linear subthreshold dynamics in the balanced state and a spike rate of around 10 Hz?

A: At a typical parameter setting for a simulation with around 10,000 neurons and 15 million synapses, the total rate at which spikes are missed is up to 5 spikes per second.

  14. Q: Is the time-driven method presented in Hanuschkin (2010) more general than the event-driven methods discussed?

A: Yes, the event-driven methods that do not miss any spikes are specific to a particular class of neuron models (current based with exponential synapses). In contrast, the time-driven method presented in the study is applicable to any neuron model with a threshold condition independently of the nature of the subthreshold dynamics.

  15. Q: What is the scalability of the proposed solution for large-scale network simulations in comparison to an event-driven scheme?

A: The scalability of the time-driven method presented in Hanuschkin (2010) is excellent. It is identical to that of the classical time-driven solver constraining spikes to a fixed computation time grid. In contrast, the classical event-driven scheme does not scale well because it requires a central queue. This can be improved if a decoupling technique based on the existence of a minimal delay (Morrison et al. 2005) is employed, see Lytton & Hines (2005).