Welcome to the NEST simulator documentation!¶
NEST is a simulator for spiking neural network models, ideal for networks of any size, for example:
Models of information processing e.g. in the visual or auditory cortex of mammals,
Models of network activity dynamics, e.g. laminar cortical networks or balanced random networks,
Models of learning and plasticity.
- New to NEST?
Start here at our Getting Started page
- Have an idea of the type of model you need?
Click on one of the images to access our model directory:
Create complex networks using the Topology Module or the Microcircuit Model.
- Need a different model?
Check out how you can create your own model here.
- Have a question or issue with NEST?
See our Getting Help page.
How the documentation is organized¶
Tutorials show you step by step instructions using NEST. If you haven’t used NEST before, the PyNEST tutorial is a good place to start.
Example Networks demonstrate the use of dozens of the neural network models implemented in NEST.
Topical Guides provide deeper insight into several topics and concepts from Parallel Computing to handling Gap Junction Simulations and setting up a topological network.
Reference Material provides a quick look up of definitions, functions and terms.
Contribute¶
Have you used NEST in an article or presentation? Let us know and we will add it to our list of publications. Find out how to cite NEST in your work.
If you have any comments or suggestions, please share them on our Mailing List.
Want to contribute code? Check out our Developer Space to get started!
For more information about our larger community and the history of NEST, check out the NEST Initiative website.
Links to other projects:¶
The NeuralEnsemble is a community-based initiative to promote and coordinate open-source software development in neuroscience. It hosts numerous software packages, including PyNN, a simulator-independent language for building neuronal network models, and Elephant (Electrophysiology Analysis Toolkit), a package for the analysis of neurophysiology data using Neo data structures.
Download NEST¶
For standard situations where you just want to use but not modify NEST, you don’t have to download the source code.
Distribution packages ease the installation on Debian/Ubuntu, Fedora, macOS and Conda.
See our installation instructions to find the right option for you.
Here you’ll find the latest versions of the source code and Live Media for download. If you use NEST for your project, don’t forget to cite NEST!
Download NEST source code¶
Get the source code of the latest release.
Follow the installation instructions for Linux or macOS.
See also
Previous versions and associated release notes can be found at https://github.com/nest/nest-simulator/releases/
Download the NEST live media for virtual machines¶
Live media is available in OVA format and is suitable, for example, for importing into VirtualBox. This is the option for you if you run Windows, or if you just want to try NEST without installing it on your computer. After downloading the virtual machine, check out the install instructions for Live Media.
Previous releases¶
We continuously aim to improve NEST, implement features, and fix bugs with every new version; thus, we encourage our users to use the most recent version of NEST.
Older Versions of Live Media
Download 2.14.0 (OVA, 2.5G)
Download 2.12.0 (OVA, 3.2G)
Download 2.10.0 (OVA, ~3.7G)
Download 2.8.0 (OVA, ~2.5G)
NEST is available under the GNU General Public License 2 or later. This means that you can
use NEST for your research,
modify and improve NEST according to your needs,
distribute NEST to others under the same license.
NEST Installation Instructions¶
Standard Installation Instructions¶
These installation instructions should work for most users, who do not need custom configurations for their systems. If you want to compile NEST from source, see section Advanced Installation Instructions.
Install NEST via the PPA repository.
Add the PPA repository for NEST and update apt:
sudo add-apt-repository ppa:nest-simulator/nest
sudo apt-get update
Install NEST:
sudo apt-get install nest
The NeuroFedora team has generously provided the latest versions of NEST on their platform. As NEST is available in the standard Fedora platform repositories, it can simply be installed using dnf:
sudo dnf install python3-nest
Find out more on the NeuroFedora site: https://docs.fedoraproject.org/en-US/neurofedora/nest/.
Create your conda environment and install NEST. We recommend that you create a dedicated environment for NEST, which should ensure there are no conflicts with previously installed packages.
We strongly recommend that you install all programs you’ll need (such as ipython or jupyter-lab) in the environment (ENVNAME) at the same time, by appending them to the command below. Installing packages later may override previously installed dependencies and potentially break packages! See managing environments in the Conda documentation for more information.
Without OpenMPI:
conda create --name ENVNAME -c conda-forge nest-simulator
With OpenMPI:
conda create --name ENVNAME -c conda-forge nest-simulator=*=mpi_openmpi*
The syntax for this install follows the pattern:
nest-simulator=<version>=<build_string>
For example, nest-simulator=2.20.0=mpi_openmpi* would request version 2.20.0 with the OpenMPI build (the version and build string here are illustrative).
Activate your environment:
conda activate ENVNAME
In addition to native installations from ready-made packages, we provide containerized versions of NEST in several formats:
Docker provides an isolated container to run applications. The NEST Docker container includes a complete install of NEST and is set up so you can create, modify, and run Jupyter Notebooks and save them on your host machine. (See the Note below for alternative ways to use the Docker container.)
If you do not have Docker installed, follow the Docker installation instructions for your system here: https://docs.docker.com/install/.
If you are using Linux, we strongly recommend you also create a Docker group to manage Docker as a non-root user. See instructions on the Docker website: https://docs.docker.com/install/linux/linux-postinstall/
Create a directory or change into a directory that you want to use for your Jupyter Notebooks.
mkdir my_nest_scripts
cd my_nest_scripts
Run the Docker container. Replace <version> with one of the latest NEST versions (e.g., 2.20.0) or use latest for the most recent build from the source code.
docker run --rm -e LOCAL_USER_ID=`id -u $USER` -v $(pwd):/opt/data -p 8080:8080 nestsim/nest:<version> notebook
Once completed, a link to a Jupyter Notebook will be generated, as shown below. You can then copy and paste the link into your browser.
You can now use the Jupyter Notebook as you normally would. Anything saved in the Notebook will be placed in the directory you started the Notebook from.
You can shut down the Notebook in the terminal by typing Ctrl-c twice. Once the Notebook is shut down, the container running NEST is removed.
Note
You can check for updates to the Docker build by typing:
docker pull nestsim/nest:<version>
Note
You can also create an instance of a terminal within the container itself and, for example, run Python scripts.
docker run --rm -it -e LOCAL_USER_ID=`id -u $USER` -v $(pwd):/opt/data -p 8080:8080 nestsim/nest:<version> /bin/bash
See the README to find out more, but note some functionality, such as DISPLAY, will not be available.
We have live media (.ova) if you want to run NEST in a virtual machine. This option is suitable for Windows users, since we don’t support NEST natively on Windows.
Download the live media here, and follow the instructions to set up the virtual machine.
Once NEST is installed, you can run it in Python, IPython, or a Jupyter Notebook.
For example, in the terminal type:
python
Once in Python you can type:
import nest
Note
If you get ImportError: No module named nest after running python, try running python3 instead.
or as a standalone application:
nest
If installation was successful, you should see the NEST splash screen in the terminal:
Installation is now complete!
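As a quick sanity check, you can also query the installed version from Python (a minimal sketch; nest.version() is assumed to be available in NEST 2.x):
import nest            # prints the NEST splash screen on first import
print(nest.version())  # prints a version string, e.g. 'NEST 2.20.0'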
Now we can start creating simulations!¶
If installation didn’t work, see the troubleshooting section.
Advanced Installation Instructions¶
If you need special configuration options or want to compile NEST yourself, follow these instructions.
Download the source code for the current release.
Follow instructions for Ubuntu/Debian Installation and take a look at our Configuration Options.
Get the latest developer version on GitHub. Fork NEST into your GitHub repository (see details on GitHub workflows here).
For further options on installing NEST on macOS, see mac_manual for Macs.
Instructions for high performance computers provides some instructions for certain machines. Please contact us if you need help with your system.
Ubuntu/Debian Installation¶
Installation from source¶
The following are the basic steps to compile and install NEST from source code:
If not already installed on your system, the following packages are recommended (see also the Dependencies section):
sudo apt-get install -y \
cython \
libgsl-dev \
libltdl-dev \
libncurses-dev \
libreadline-dev \
python-all-dev \
python-numpy \
python-scipy \
python-matplotlib \
python-nose \
openmpi-bin \
libopenmpi-dev
Unpack the tarball
tar -xzvf nest-simulator-x.y.z.tar.gz
Create a build directory:
mkdir nest-simulator-x.y.z-build
Change to the build directory:
cd nest-simulator-x.y.z-build
Configure NEST. You may need additional cmake options; you can find the configuration options here.
cmake -DCMAKE_INSTALL_PREFIX:PATH=</install/path> </path/to/NEST/src>
Note
If you want to use Python 3, add the configuration option
cmake -Dwith-python=3 -DCMAKE_INSTALL_PREFIX:PATH=</install/path> </path/to/NEST/src>
Note
</install/path> should be an absolute path.
Compile and install NEST:
make
make install
make installcheck
NEST should now be successfully installed on your system, and you should be able to import nest from a Python or IPython shell.
IMPORTANT!
If your operating system does not find the nest
executable or if Python does not find the nest
module, your path variables may not be set correctly. This may also be the case if Python cannot load the nest
module due to missing or incompatible libraries. In this case, please run:
source </path/to/nest_install_dir>/bin/nest_vars.sh
to set the necessary environment variables. You may want to include this line in your .bashrc
file, so that the environment variables are set automatically.
See the Getting started pages to find out how to get going with NEST or check out our example networks.
Dependencies¶
To build NEST, you need a recent version of CMake and libtool; the latter should be available for most systems and is probably already installed.
Note
NEST requires at least CMake v2.8.12, but we recommend v3.4 or later. You can type cmake --version on the command line to check your current version.
The GNU readline library is recommended if you use NEST interactively without Python. Although most Linux distributions have GNU readline installed, you still need to install its development package if you want to use GNU readline with NEST. GNU readline itself depends on libncurses (or libtermcap on older systems). Again, the development packages are needed to compile NEST.
The GNU Scientific Library is needed by several neuron models, in particular those with conductance-based synapses. If you want these models, please install the GNU Scientific Library along with its development packages.
If you want to use PyNEST, we recommend installing Python along with NumPy, SciPy, Matplotlib, and Cython, together with their development packages.
See the Configuration Options or the High Performance Computing instructions to further adjust settings for your system.
What gets installed where¶
By default, everything will be installed to the subdirectories /install/path/{bin,lib,share}, where /install/path is the install path given to cmake:
Executables: /install/path/bin
Dynamic libraries: /install/path/lib/
SLI libraries: /install/path/share/nest/sli
Documentation: /install/path/share/doc/nest
Examples: /install/path/share/doc/nest/examples
PyNEST: /install/path/lib/pythonX.Y/site-packages/nest
PyNEST examples: /install/path/share/doc/nest/examples/pynest
Extras: /install/path/share/nest/extras/
If you want to run the nest
executable or use the nest
Python module without providing explicit paths, you have to add the installation directory to your search paths. For example, if you are using bash:
export PATH=$PATH:/install/path/bin
export PYTHONPATH=/install/path/lib/pythonX.Y/site-packages:$PYTHONPATH
The script /install/path/bin/nest_vars.sh can be sourced in .bashrc and will set these paths for you. This also allows you to switch between NEST installations in a convenient manner.
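If you are ever unsure which NEST installation Python is actually picking up, a quick diagnostic (using Python's standard module attribute __file__) is:
import nest
print(nest.__file__)  # path of the PyNEST module found on the current PYTHONPATH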
Manual Installation on macOS¶
If you want to use PyNEST, you need to have a version of Python with some science packages installed, see the section Python on Mac for details.
The clang/clang++ compiler that ships with OS X/macOS does not support OpenMP threads and creates code that fails some tests. You therefore need to use GCC to compile NEST under OS X/macOS.
Installation instructions here have been tested under macOS 10.14 Mojave with Anaconda Python 3 and all other dependencies installed via Homebrew. They should also work with earlier versions of macOS.
Install Xcode from the AppStore.
Install the Xcode command line tools by executing the following line in the terminal and following the instructions in the windows that will pop up:
xcode-select --install
Install dependencies via Homebrew:
brew install gcc cmake gsl open-mpi libtool
Create a directory for building and installing NEST (you should always build NEST outside the source code directory; installing NEST in a “place of its own” makes it easy to remove NEST later).
Extract the NEST tarball as a subdirectory in that directory or clone NEST from GitHub into a subdirectory:
mkdir NEST   # directory for all NEST stuff
cd NEST
tar zxf nest-simulator-x.y.z.tar.gz
mkdir bld
cd bld
Configure and build NEST inside the build directory (replacing gcc-9 and g++-9 with the GCC compiler versions you installed with brew):
cmake -DCMAKE_INSTALL_PREFIX:PATH=</install/path> \
      -DCMAKE_C_COMPILER=gcc-9 \
      -DCMAKE_CXX_COMPILER=g++-9 \
      </path/to/NEST/src>
make -j4   # -j4 builds in parallel using 4 processes
make install
make installcheck
To compile NEST with MPI support, add -Dwith-mpi=ON as a cmake option.
Troubleshooting¶
If compiling NEST as described above fails with an error message like
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/sys/wait.h:110,
                 from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdlib.h:66,
                 from /usr/local/Cellar/gcc/9.2.0/include/c++/9.2.0/cstdlib:75,
                 from /usr/local/Cellar/gcc/9.2.0/include/c++/9.2.0/bits/stl_algo.h:59,
                 from /usr/local/Cellar/gcc/9.2.0/include/c++/9.2.0/algorithm:62,
                 from /Users/plesser/NEST/code/src/sli/dictutils.h:27,
                 from /Users/plesser/NEST/code/src/sli/dictutils.cc:23:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/sys/resource.h:443:34: error: expected initializer before '__OSX_AVAILABLE_STARTING'
  443 | int getiopolicy_np(int, int) __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_2_0);
      |                              ^~~~~~~~~~~~~~~~~~~~~~~~
you most likely have installed a version of Xcode prepared for the next version of macOS. You can attempt to fix this by running
sudo xcode-select -s /Library/Developer/CommandLineTools/
If this does not help, you can reset to the default XCode path using
sudo xcode-select -r
Python on Mac¶
The version of Python shipping with OS X/macOS is rather dated and does not include key packages such as NumPy. Therefore, you need to install Python via a channel that provides scientific packages.
One well-tested source is the Anaconda Python distribution for both Python 2 and 3. If you do not want to install the full Anaconda distribution, you can also install Miniconda and then install the packages needed by NEST by running:
conda install numpy scipy matplotlib ipython cython nose
Alternatively, you should be able to install the necessary Python packages via Homebrew, but this has not been tested.
High Performance Computer Systems Installation¶
Minimal configuration¶
NEST can be compiled without any external packages; such a configuration may be useful for initial porting to a new supercomputer. However, this implies several restrictions:
Some neuron and synapse models will not be available, as they depend on ODE solvers from the GNU Scientific Library.
The Python extension will not be available.
Multi-threading and parallel computing facilities will be disabled.
To configure NEST for compilation without external packages, use the following command:
cmake -DCMAKE_INSTALL_PREFIX:PATH=</install/path> \
-Dwith-python=OFF \
-Dwith-gsl=OFF \
-Dwith-readline=OFF \
-Dwith-ltdl=OFF \
-Dwith-openmp=OFF \
</path/to/nest/source>
See the Configuration Options to further adjust settings for your system.
Compiling for BlueGene/Q¶
NEST provides a cmake toolchain file for cross-compilation for BlueGene/Q. When configuring NEST, use the following cmake line:
cmake -DCMAKE_TOOLCHAIN_FILE=Platform/BlueGeneQ_XLC \
-DCMAKE_INSTALL_PREFIX:PATH=</install/path> \
-Dwith-python=OFF \
-Dstatic-libraries=ON \
</path/to/NEST/src>
If you compile dynamically, be aware that the BlueGene/Q system might not provide an ltdl
library. If you want to dynamically load an external user module, you have to
compile and install an ltdl
yourself and add -Dwith-ltdl=<ltdl-install-dir>
to the cmake
line. Otherwise add -Dwith-ltdl=OFF
.
Additionally, cmake’s MPI handling is brittle in the case of BGQ and certain libraries (flags to use SIONlib, for example). If you run into that, you must force cmake to use the compiler wrappers rather than letting it attempt to extract the proper flags for the underlying compiler, as in:
-DCMAKE_C_COMPILER=/bgsys/drivers/ppcfloor/comm/xl/bin/mpixlc_r
-DCMAKE_CXX_COMPILER=/bgsys/drivers/ppcfloor/comm/xl/bin/mpixlcxx_r
BlueGene/Q and PyNEST¶
Building PyNEST on BlueGene/Q requires you to compile dynamically, i.e. -Dstatic-libraries=OFF. Furthermore, you have to cythonize pynest/pynestkernel.pyx on a machine with Cython installed:
cythonize pynestkernel.pyx
Copy the generated file pynestkernel.cpp
into </path/to/NEST/src>/pynest
on
BlueGene/Q and point -Dwith-python=<...>
to a valid python version for cross
compilation, either Python 2:
-Dwith-python=/bgsys/tools/Python-2.7/bin/hostpython
or (much better) Python 3:
-Dwith-python=/bgsys/local/python3/3.4.2/bin/python3
CMake versions before 3.4 have long-standing bugs in finding the matching Python libraries. Thus, you also have to specify PYTHON_LIBRARY and PYTHON_INCLUDE_DIR if they are not found, or if the incorrect libraries are found, e.g.:
-DPYTHON_LIBRARY=/bgsys/tools/Python-2.7/lib64/libpython2.7.so.1.0
-DPYTHON_INCLUDE_DIR=/bgsys/tools/Python-2.7/include/python2.7
or (much better):
-DPYTHON_LIBRARY=/bgsys/local/python3/3.4.2/lib/libpython3.4m.a
-DPYTHON_INCLUDE_DIR=/bgsys/local/python3/3.4.2/include/python3.4m
A complete cmake
line for PyNEST could look like this:
module load gsl
cmake -DCMAKE_TOOLCHAIN_FILE=Platform/BlueGeneQ_XLC \
-DCMAKE_INSTALL_PREFIX=</install/path> \
-Dstatic-libraries=OFF \
-Dcythonize-pynest=OFF \
-DCMAKE_C_COMPILER=/bgsys/drivers/ppcfloor/comm/xl/bin/mpixlc_r \
-DCMAKE_CXX_COMPILER=/bgsys/drivers/ppcfloor/comm/xl/bin/mpixlcxx_r \
-Dwith-python=/bgsys/local/python3/3.4.2/bin/python3 \
-DPYTHON_LIBRARY=/bgsys/local/python3/3.4.2/lib/libpython3.4m.a \
-DPYTHON_INCLUDE_DIR=/bgsys/local/python3/3.4.2/include/python3.4m \
-Dwith-ltdl=OFF \
<nest-src>
Furthermore, for running PyNEST, make sure all python dependencies are installed and environment variables are set properly:
module load python3/3.4.2
# adds PyNEST to the PYTHONPATH
source <nest-install-dir>/bin/nest_vars.sh
# makes HOME and PYTHONPATH available for python
runjob \
--exp-env HOME \
--exp-env PATH \
--exp-env LD_LIBRARY_PATH \
--exp-env PYTHONUNBUFFERED \
--exp-env PYTHONPATH \
... \
: /bgsys/local/python3/3.4.2/bin/python3.4 script.py
BlueGene/Q and GCC¶
Compiling NEST with GCC (-DCMAKE_TOOLCHAIN_FILE=Platform/BlueGeneQ_GCC
)
might require you to use a GSL library compiled using GCC, otherwise undefined
symbols break your build. After the GSL is built with GCC and installed in
gsl-install-dir
, add -Dwith-gsl=<gsl-install-dir>
to the cmake
line.
BlueGene/Q and Non-Standard Allocators¶
To use NEST with non-standard allocators on BlueGene/Q (e.g., tcmalloc), you should compile NEST and the allocator with the same compiler, usually GCC. Since static linking is recommended on BlueGene/Q, the allocator also needs to be linked statically. This requires specifying linker flags and the allocator library as shown in the following example:
cmake -DCMAKE_TOOLCHAIN_FILE=Platform/BlueGeneQ_GCC \
-DCMAKE_INSTALL_PREFIX:PATH=$PWD/install \
-Dstatic-libraries=ON -Dwith-warning=OFF \
-DCMAKE_EXE_LINKER_FLAGS="-Wl,--allow-multiple-definition" \
-Dwith-libraries=$HOME/tcmalloc/install/lib/libtcmalloc.a
Compiling for Fujitsu Sparc64¶
- On the K Computer:
The preinstalled cmake version is 2.6, which is too old for NEST. Please install a newer version, for example:
wget https://cmake.org/files/v3.4/cmake-3.4.2.tar.gz
tar -xzf cmake-3.4.2.tar.gz
mv cmake-3.4.2 cmake.src
mkdir cmake.build
cd cmake.build
../cmake.src/bootstrap --prefix=$PWD/install --parallel=4
gmake -j4
gmake install
Also you might need a cross-compiled GNU Scientific Library (GSL). For GSL 2.1 this is a possible installation scenario:
wget ftp://ftp.gnu.org/gnu/gsl/gsl-2.1.tar.gz
tar -xzf gsl-2.1.tar.gz
mkdir gsl-2.1.build gsl-2.1.install
cd gsl-2.1.build
../gsl-2.1/configure --prefix=$PWD/../gsl-2.1.install/ \
    CC=mpifccpx \
    CXX=mpiFCCpx \
    CFLAGS="-Nnoline" \
    CXXFLAGS="--alternative_tokens -O3 -Kfast,openmp, -Nnoline, -Nquickdbg -NRtrap" \
    --host=sparc64-unknown-linux-gnu \
    --build=x86_64-unknown-linux-gnu
gmake -j4
gmake install
To install NEST, use the following cmake line:
cmake -DCMAKE_TOOLCHAIN_FILE=Platform/Fujitsu-Sparc64 \
      -DCMAKE_INSTALL_PREFIX:PATH=</install/path> \
      -Dwith-gsl=/path/to/gsl-2.1.install/ \
      -Dwith-optimize="-Kfast" \
      -Dwith-defines="-DUSE_PMA" \
      -Dwith-python=OFF \
      -Dwith-warning=OFF \
      </path/to/NEST/src>
make -j4
make install
The compilation can take quite some time for the file models/modelsmodule.cpp due to the generation of many template classes. To speed up the process, you can comment out all synapse models you do not need. The option -Kfast on the K computer enables many different options:
-O3 -Kdalign,eval,fast_matmul,fp_contract,fp_relaxed,ilfunc,lib,mfunc,ns,omitfp,prefetch_conditional,rdconv -x-
Be aware that, with the option -Kfast, an internal compiler error (probably an out-of-memory situation) can occur. One solution is to disable synapse models that you don’t use in models/modelsmodule.cpp. From current observations this might be related to the -x- option; you can give it a fixed value, e.g. -x1, and the compilation succeeds (the impact on performance was not analyzed):
-Dwith-optimize="-Kfast -x1"
NEST LIVE MEDIA Installation¶
Download and install a virtual machine if you do not already have one installed.
Note
Although the following instructions are for VirtualBox, you can use a different virtual machine, such as VMWare.
For Linux users, it is possible to install VirtualBox from the package repositories.
Debian:
sudo apt-get install virtualbox
Fedora:
sudo dnf install virtualbox
SuSe:
sudo zypper install virtualbox
Windows users can follow instructions on the website (see above).
NEST image setup¶
Download the NEST live medium.
Start VirtualBox and import the virtual machine image “lubuntu-16.04_nest-2.XX.0.ova” (File > Import Appliance).
Once imported, you can run the NEST image.
The user password is nest.
Notes¶
For better performance, you can increase the memory for the machine (Settings > System > Base Memory).
To allow fullscreen mode of the virtual machine, you also need to increase the video memory above 16MB (Settings > Display > Video Memory).
If you need to share folders between the virtual machine and your regular desktop environment, click on Settings. Choose Shared Folder and add the folder you wish to share. Make sure to mark automount.
To install Guest Additions, select Devices > Insert Guest Additions CD image… (top left of the VirtualBox Window). Then, open a terminal (Ctrl+Alt+t), go to “/media/nest/VBOXADDITIONS…/” and run “sudo bash VboxLinuxAdditions.run”.
To set the correct language layout for your keyboard (e.g., from “US” to “DE”), you can use the program “lxkeymap”, which you start by typing “lxkeymap” in the terminal.
Configuration Options¶
NEST is installed with cmake
(at least v2.8.12). In the simplest case, the commands:
cmake -DCMAKE_INSTALL_PREFIX:PATH=</install/path> </path/to/NEST/src>
make
make install
should build and install NEST to /install/path
, which should be an absolute
path.
Choice of CMake Version¶
We recommend using cmake v3.4 or later, even though installing NEST with cmake v2.8.12 will in most cases work properly.
For more detailed information, please see the section Python3 Binding (PyNEST) below.
Choice of compiler¶
The default compiler for NEST is GNU gcc/g++. Version 7 or higher is required, as earlier versions contain bugs that prevent NEST from compiling. NEST has also successfully been compiled with Clang 7 and the IBM XL C++ compiler.
To select a specific compiler, please add the following flags to your cmake
line:
-DCMAKE_C_COMPILER=<C-compiler> -DCMAKE_CXX_COMPILER=<C++-compiler>
Options for configuring NEST¶
NEST allows for several configuration options for custom builds:
Change NEST behavior:
-Dtics_per_ms=[number] Specify elementary unit of time. [default 1000.0]
-Dtics_per_step=[number] Specify resolution. [default 100]
-Dwith-ps-arrays=[OFF|ON] Use PS array construction semantics. [default=ON]
Add user modules:
-Dexternal-modules=[OFF|<list;of;modules>] External NEST modules to be linked
in, separated by ';'. [default=OFF]
Connect NEST with external projects:
-Dwith-libneurosim=[OFF|ON|</path/to/libneurosim>] Request the use of libneurosim.
Optionally give the directory,
where libneurosim is installed.
[default=OFF]
-Dwith-music=[OFF|ON|</path/to/music>] Request the use of MUSIC. Optionally
give the directory, where MUSIC is installed.
[default=OFF]
Change parallelization scheme:
-Dwith-mpi=[OFF|ON|</path/to/mpi>] Request compilation with MPI. Optionally
give directory with MPI installation.
[default=OFF]
-Dwith-openmp=[OFF|ON|<OpenMP-Flag>] Enable OpenMP multi-threading.
Optional: set OMP flag. [default=ON]
Set default libraries:
-Dwith-gsl=[OFF|ON|</path/to/gsl>] Find a gsl library. To set a specific
library, set install path.[default=ON]
-Dwith-readline=[OFF|ON|</path/to/readline>] Find a GNU Readline library. To set
a specific library, set install path.
[default=ON]
-Dwith-ltdl=[OFF|ON|</path/to/ltdl>] Find an ltdl library. To set a specific
ltdl, set install path. NEST uses the
ltdl for dynamic loading of external
user modules. [default=ON]
-Dwith-python=[OFF|ON|2|3] Build PyNEST. To set a specific Python
version, set 2 or 3. [default=ON]
-Dcythonize-pynest=[OFF|ON] Use Cython to cythonize pynestkernel.pyx.
If OFF, PyNEST has to be built from
a pre-cythonized pynestkernel.pyx.
[default=ON]
Change compilation behavior:
-Dstatic-libraries=[OFF|ON] Build static executable and libraries. [default=OFF]
-Dwith-optimize=[OFF|ON|<list;of;flags>] Enable user defined optimizations. Separate
multiple flags by ';'.
[default OFF, when ON, defaults to '-O3']
-Dwith-warning=[OFF|ON|<list;of;flags>] Enable user defined warnings. Separate
multiple flags by ';'.
[default ON, when ON, defaults to '-Wall']
-Dwith-debug=[OFF|ON|<list;of;flags>] Enable user defined debug flags. Separate
multiple flags by ';'.
[default OFF, when ON, defaults to '-g']
-Dwith-intel-compiler-flags=[<list;of;flags>] User defined flags for the Intel compiler.
Separate multiple flags by ';'.
[defaults to '-fp-model strict']
-Dwith-libraries=<list;of;libraries> Link additional libraries. Give full path.
Separate multiple libraries by ';'.
[default OFF]
-Dwith-includes=<list;of;includes> Add additional include paths. Give full
path without '-I'. Separate multiple include
paths by ';'. [default OFF]
-Dwith-defines=<list;of;defines> Additional defines, e.g. '-DXYZ=1'.
Separate multiple defines by ';'. [default OFF]
NO-DOC option¶
On systems where help extraction is slow, the call to make install can be replaced by make install-nodoc to skip the generation of help pages and indices. Using this option can help developers speed up development cycles, but it is not recommended for production use, as it renders the built-in help system useless.
Configuring NEST for Distributed Simulation with MPI¶
1. Try -Dwith-mpi=ON as an argument for cmake. If it works, fine.
2. If 1 does not work, or you want to use a non-standard MPI, try -Dwith-mpi=/path/to/my/mpi. The directory mpi should contain include, lib, and bin subdirectories for MPI.
3. If that does not work, but you know the correct compiler wrapper for your machine, try configuring with -DMPI_CXX_COMPILER=myC++_CompilerWrapper -DMPI_C_COMPILER=myC_CompilerWrapper -Dwith-mpi=ON
4. Otherwise, sorry, you need to fix your MPI installation.
Tell NEST about your MPI setup¶
If you compiled NEST with support for distributed computing via MPI, you
have to tell it how your mpirun
/mpiexec
command works by
defining the function mpirun in your ~/.nestrc file. This file already contains an example implementation that should work with the OpenMPI library.
Disabling the Python Bindings (PyNEST)¶
To disable Python bindings use:
-Dwith-python=OFF
as an argument to cmake
.
Please also see the file ../../pynest/README.md in the documentation directory for details.
Python3 Binding (PyNEST)¶
To force a Python3-binding in a mixed Python2/3 environment pass:
-Dwith-python=3
as an argument to cmake
.
cmake
usually autodetects your Python installation.
In some cases cmake might not be able to localize the Python interpreter and its corresponding libraries correctly. To circumvent such a problem, the following cmake built-in variables can be set manually and passed to cmake:
PYTHON_EXECUTABLE ..... path to the Python interpreter
PYTHON_LIBRARY ........ path to libpython
PYTHON_INCLUDE_DIR .... two include ...
PYTHON_INCLUDE_DIR2 ... directories
For example (note that -Dwith-python=ON is the default):
cmake -DCMAKE_INSTALL_PREFIX=</install/path> \
-DPYTHON_EXECUTABLE=/usr/bin/python3 \
-DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.4m.so \
-DPYTHON_INCLUDE_DIR=/usr/include/python3.4 \
-DPYTHON_INCLUDE_DIR2=/usr/include/x86_64-linux-gnu/python3.4m \
</path/to/NEST/src>
Compiling for Apple OSX/macOS¶
NEST currently cannot be compiled with the clang/clang++ compilers shipping with macOS. Therefore, you first need to install GCC 6.3 or later. The easiest way to install all required software is using Homebrew (from http://brew.sh):
brew install gcc cmake gsl open-mpi libtool
will install all required prerequisites. You can then configure NEST with
cmake -DCMAKE_INSTALL_PREFIX:PATH=</install/path> \
-DCMAKE_C_COMPILER=gcc-6 \
-DCMAKE_CXX_COMPILER=g++-6 \
</path/to/NEST/src>
For detailed information on installing NEST under OSX/macOS, please see the “macOS” section of https://www.nest-simulator.org/installation.
Choice of compiler¶
Most NEST developers use the GNU gcc/g++ compilers. We also regularly compile NEST using the IBM xlc/xlC compilers. You can check the version of your compiler with, e.g.:
g++ -v
To select a specific compiler, please add the following flags to your cmake
line:
-DCMAKE_C_COMPILER=<C-compiler> -DCMAKE_CXX_COMPILER=<C++-compiler>
Compiler-specific options¶
NEST has reasonable default compiler options for the most common compilers.
- When compiling with the Portland compiler:
Use the -Kieee flag to ensure that computations obey the IEEE754 standard for floating point numerics.
- When compiling with the Intel compiler:
To ensure that computations obey the IEEE754 standard for floating point numerics, the -fp-model strict flag is used by default, but it can be overridden with -Dwith-intel-compiler-flags="<intel-flags>".
Note
Installation instructions for NEST 2.10 and earlier are provided here, but we strongly encourage all our users to stay up to date with the most recent version of NEST. We cannot support outdated versions.
Getting Started¶
Have you already installed NEST?
Then let’s look at how to create a neural network simulation!
NEST is a command line tool for simulating neural networks. A NEST simulation tries to follow the logic of an electrophysiological experiment, the difference being that it takes place inside the computer rather than in the physical world.
You can use NEST interactively from the Python prompt, from within IPython or in a Jupyter Notebook. The latter is helpful when you are exploring PyNEST, trying to learn a new functionality or debugging a routine.
How does it work?¶
Let’s start with a basic script to simulate a simple neural network.
Run Python or a Jupyter Notebook and try out this example simulation in NEST:
Import required packages:
import nest
import nest.voltage_trace
nest.ResetKernel()
Create the neuron models you want to simulate:
neuron = nest.Create('iaf_psc_exp')
Create the devices to stimulate or observe the neurons in the simulation:
spikegenerator = nest.Create('spike_generator')
voltmeter = nest.Create('voltmeter')
Modify properties of the device:
nest.SetStatus(spikegenerator, {'spike_times': [10.0, 50.0]})
Connect neurons to devices and specify synapse (connection) properties:
nest.Connect(spikegenerator, neuron, syn_spec={'weight': 1e3})
nest.Connect(voltmeter, neuron)
Simulate the network for the given time in milliseconds:
nest.Simulate(100.0)
Display the voltage graph from the voltmeter:
nest.voltage_trace.from_device(voltmeter)
nest.voltage_trace.show()
You should see the following image as the output:

And that’s it! You have performed your first neuronal simulation in NEST!
Want to know more?¶
Check out our PyNEST tutorial, which provides full explanations on how to build your first neural network simulation in NEST.
We have a large collection of Example networks for you to explore.
Regularly used terms and default physical units in NEST are explained in the Glossary.
Troubleshooting¶
Here are some tips to help you find out why your installation of NEST didn’t work.
1. CMake error says a <package> was not found or <package> is too old¶
Please make sure you have followed the installation instructions found here and have installed the required dependencies.
Install the missing package or update the package to a more recent version.
Remove the contents of the build directory. This step ensures the old build-cache, which may be missing some of the recently installed or updated packages, is cleaned out and a fresh build is triggered.
rm -r /path/to/nest-simulator-x.y.z-build/*
Compile NEST again:
cmake -DCMAKE_INSTALL_PREFIX:PATH=</install/path> </path/to/NEST/src>
If the error still persists, you may have more than one installation of the <package>. A conflict may occur between different package binaries: your system may preferentially choose a system-wide installation of a package (e.g., /usr/bin/<package>) rather than a local environment installation (e.g., /home/user/ENV/bin/<package>).
Determine the path and version of the <package>:
- which <package>
searches the paths of executables set in the $PATH environment variable. It will tell you the path to the <package> binary that is being used in the current environment.
- type -a <package>
shows the complete list of directories in which your system found the binary file. The first result should be the location in your active environment.
- <package> --version
will tell you the version of the <package> binary.
Here is an example:
which python
The terminal will display the path to the binary it found:
/home/user/ENVNAME/bin/python
type -a python
The terminal will list the paths it found to the package binaries:
python is /home/user/ENVNAME/bin/python
python is /usr/bin/python
python --version
The terminal will display the version number:
Python 3.6.5
If it looks like you have an older version on your system:
Remove or update old versions of <package> (You may need to uninstall and reinstall the package)
If you do not have the <package> in your active environment:
Install the <package> while in your active environment.
Remove the contents of the build directory
rm -r /path/to/nest-simulator-x.y.z-build/*
Compile NEST again
cmake -DCMAKE_INSTALL_PREFIX:PATH=</install/path> </path/to/NEST/src>
2. When I try to import nest, I get an error in Python that says ‘No module named nest’ or ‘ImportError’¶
This error message means something in your environment is not set correctly, depending on how you installed NEST.
1. Check which Python version you are running¶
You must use Python 3 if you installed NEST with
the Ubuntu PPA,
the conda-forge package,
the Live Media, or
if you compiled NEST with Python 3 bindings.
Type python or ipython in the terminal. The Python version that is used will be displayed.
If the Python version displayed is 2.X, you need to run python3 or ipython3 instead of python or ipython.
If your Python version is correct and you still have the same error, then try one of the following options:
2a. If you compiled NEST from source¶
Your path variables may not be set correctly, in that case run:
source </path/to/nest_install_dir>/bin/nest_vars.sh
2b. If you installed NEST via the conda-forge package¶
Make sure you have activated the correct environment
To get a list of all your environments, run:
conda info -e
An asterisk (*) indicates the active environment.
Activate the correct environment if it’s not already:
conda activate ENVNAME
Try to import nest in Python.
Check that the correct package binary is used for NEST and Python. For example, in a terminal type:
which python
which nest
These commands will show you the path to the Python and NEST binaries that your environment is using. You may have more than one installation on your system. The path to the binary should be within your active environment:
/path/to/conda/envs/ENVNAME/bin/python
/path/to/conda/envs/ENVNAME/bin/nest
You can also view the list of packages in the active environment by running:
conda list
If the package is not in your environment, then it needs to be installed.
If something is missing, you can try conda install <package>, BUT be aware that this may break pre-installed packages! You may be better off creating a new Conda environment and installing NEST with all needed packages at one time! See the section on installation for Conda.
3. Docker crashes! Message from NotebookApp: “Running as root is not recommended. Use --allow-root to bypass.”¶
We strongly recommend that you do not run Docker as root!
If this happens, try to update the Docker build. In the terminal type:
docker pull nestsim/nest:<version>
replacing <version> with the actual version you want to use.
Then try the docker run command again:
docker run --rm -e LOCAL_USER_ID=`id -u $USER` -v $(pwd):/opt/data -p 8080:8080 nestsim/nest:<version> notebook
Can’t find an answer to your question?¶
We may have answered your question on GitHub or in our Mailing List!
Please check out our GitHub issues page or search the mailing list for your question.
Tutorials¶
Part 1: Neurons and simple neural networks¶
Introduction¶
In this handout we cover the first steps in using PyNEST to simulate neuronal networks. When you have worked through this material, you will know how to:
start PyNEST
create neurons and stimulating/recording devices
query and set their parameters
connect them to each other or to devices
simulate the network
extract the data from recording devices
For more information on the usage of PyNEST, please see the other sections of this primer:
More advanced examples can be found at Example Networks, or have a look at the source directory of your NEST installation in the subdirectory pynest/examples/.
PyNEST - an interface to the NEST simulator¶

Python Interface Figure.
The Python interpreter imports NEST as a module and
dynamically loads the NEST simulator kernel (pynestkernel.so
). The
core functionality is defined in hl_api.py
. A simulation script of
the user (mysimulation.py
) uses functions defined in this high-level
API. These functions generate code in SLI (Simulation Language
Interpreter), the native language of the interpreter of NEST. This
interpreter, in turn, controls the NEST simulation kernel.¶
The NEural Simulation Tool (NEST: www.nest-initiative.org) 1
is designed for the simulation of large heterogeneous networks of point
neurons. It is open source software released under the GPL licence. The
simulator comes with an interface to Python 2. Figure 1
illustrates the interaction between the user’s simulation script
(mysimulation.py
) and the NEST simulator. Eppler et al. 3
contains a technically detailed description of the implementation of this
interface and parts of this text are based on this reference. The
simulation kernel is written in C++ to obtain the highest possible performance
for the simulation.
You can use PyNEST interactively from the Python prompt or from within ipython. This is very helpful when you are exploring PyNEST, trying to learn a new functionality or debugging a routine. Once out of the exploratory mode, you will find it saves a lot of time to write your simulations in text files. These can in turn be run from the command line or from the Python or ipython prompt.
Whether working interactively, semi-interactively, or purely executing scripts, the first thing that needs to happen is importing NEST’s functionality into the Python interpreter.
import nest
It should be noted, however, that certain external packages must be imported before importing nest. These include scikit-learn and SciPy.
from sklearn.svm import LinearSVC
from scipy.special import erf
import nest
As with every other module for Python, the available functions can be prompted for.
dir(nest)
One such command is nest.Models()
or in ipython nest.Models?
, which will return a list of all
the available models you can use. If you want to obtain more information
about a particular command, you may use Python’s standard help system.
This will return the help text (docstring) explaining the use of this
particular function. There is a help system within NEST as well. You can
open the help pages in a browser using nest.helpdesk() and you can get the help page for a particular object using nest.help(object).
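For example, a short sketch of the commands mentioned above:
import nest
dir(nest)                    # list everything the nest module exposes
help(nest.Create)            # docstring via Python's standard help system
nest.helpdesk()              # opens the HTML help pages in a browser
nest.help('iaf_psc_alpha')   # built-in help page for a particular object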
Creating Nodes¶
A neural network in NEST consists of two basic element types: nodes and
connections. Nodes are either neurons, devices or sub-networks. Devices
are used to stimulate neurons or to record from them. Nodes can be
arranged in sub-networks to build hierarchical networks such as layers,
columns, and areas - we will get to this later in the course. For now we
will work in the default sub-network which is present when we start
NEST, known as the root node
.
To begin with, the root sub-network is empty. New nodes are created with
the command Create
, which takes as arguments the model name of the
desired node type, and optionally the number of nodes to be created and
the initialising parameters. The function returns a list of handles to
the new nodes, which you can assign to a variable for later use. These
handles are integer numbers, called ids. Many PyNEST functions expect
or return a list of ids (see command overview). Thus, it is
easy to apply functions to large sets of nodes with a single function
call.
After having imported NEST and also the Pylab interface to Matplotlib 4, which we will use to display the results, we can start creating nodes.
As a first example, we will create a neuron of type
iaf_psc_alpha
. This neuron is an integrate-and-fire neuron with
alpha-shaped postsynaptic currents. The function returns a list of the
ids of all the created neurons, in this case only one, which we store in
a variable called neuron
.
import pylab
import nest
neuron = nest.Create("iaf_psc_alpha")
We can now use the id to access the properties of this neuron.
Properties of nodes in NEST are generally accessed via Python
dictionaries of key-value pairs of the form {key: value}
. In order
to see which properties a neuron has, you may ask it for its status.
nest.GetStatus(neuron)
This will print out the corresponding dictionary in the Python console.
Many of these properties are not relevant for the dynamics of the
neuron. To find out what the interesting properties are, look at the
documentation of the model through the helpdesk. If you already know
which properties you are interested in, you can specify a key, or a list
of keys, as an optional argument to GetStatus
:
nest.GetStatus(neuron, "I_e")
nest.GetStatus(neuron, ["V_reset", "V_th"])
In the first case we query the value of the constant background current
I_e
; the result is given as a tuple with one element. In the second
case, we query the values of the reset potential and threshold of the
neuron, and receive the result as a nested tuple. If GetStatus
is
called for a list of nodes, the dimension of the outer tuple is the
length of the node list, and the dimension of the inner tuples is the
number of keys specified.
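For instance (a sketch; the printed values are the model defaults and merely illustrative):
two_neurons = nest.Create("iaf_psc_alpha", 2)
print(nest.GetStatus(two_neurons, ["V_reset", "V_th"]))
# outer tuple: one entry per node; inner tuples: one value per key,
# e.g. ((-70.0, -55.0), (-70.0, -55.0))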
To modify the properties in the dictionary, we use SetStatus
. In the
following example, the background current is set to 376.0pA, a value
causing the neuron to spike periodically.
nest.SetStatus(neuron, {"I_e": 376.0})
Note that we can set several properties at the same time by giving
multiple comma separated key:value pairs in the dictionary. Also be
aware that NEST is type sensitive - if a particular property is of type
double
, then you do need to explicitly write the decimal point:
nest.SetStatus(neuron, {"I_e": 376})
will result in an error. This conveniently protects us from making integer division errors, which are hard to catch.
Next we create a multimeter
, a device we can use to record the
membrane voltage of a neuron over time. We set its property withtime
such that it will also record the points in time at which it samples the
membrane voltage. The property record_from
expects a list of the
names of the variables we would like to record. The variables exposed to
the multimeter vary from model to model. For a specific model, you can
check the names of the exposed variables by looking at the neuron’s
property recordables
.
multimeter = nest.Create("multimeter")
nest.SetStatus(multimeter, {"withtime":True, "record_from":["V_m"]})
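To see which variables a given model exposes to the multimeter, you can query the recordables property directly (a one-line sketch using the neuron created above):
print(nest.GetStatus(neuron, "recordables"))  # e.g. contains 'V_m' for iaf_psc_alpha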
We now create a spikedetector
, another device that records the
spiking events produced by a neuron. We use the optional keyword
argument params
to set its properties. This is an alternative to
using SetStatus
. The property withgid
indicates whether the
spike detector is to record the source id from which it received the
event (i.e. the id of our neuron).
spikedetector = nest.Create("spike_detector",
params={"withgid": True, "withtime": True})
A short note on naming: here we have called the neuron neuron
, the
multimeter multimeter
and so on. Of course, you can assign your
created nodes to any variable names you like, but the script is easier
to read if you choose names that reflect the concepts in your
simulation.
Connecting nodes with default connections¶
Now we know how to create individual nodes, we can start connecting them to form a small network.
nest.Connect(multimeter, neuron)
nest.Connect(neuron, spikedetector)

Membrane potential of integrate-and-fire neuron with constant input current.¶

Spikes of the neuron.¶
The order in which the arguments to Connect
are specified reflects
the flow of events: if the neuron spikes, it sends an event to the spike
detector. Conversely, the multimeter periodically sends requests to the
neuron to ask for its membrane potential at that point in time. This can
be regarded as a perfect electrode stuck into the neuron.
Now we have connected the network, we can start the simulation. We have to inform the simulation kernel how long the simulation is to run. Here we choose 1000ms.
nest.Simulate(1000.0)
Congratulations, you have just simulated your first network in NEST!
Extracting and plotting data from devices¶
After the simulation has finished, we can obtain the data recorded by the multimeter.
dmm = nest.GetStatus(multimeter)[0]
Vms = dmm["events"]["V_m"]
ts = dmm["events"]["times"]
In the first line, we obtain the list of status dictionaries for all
queried nodes. Here, the variable multimeter
is the id of only one
node, so the returned list just contains one dictionary. We extract the
first element of this list by indexing it (hence the [0]
at the
end). This type of operation occurs quite frequently when using PyNEST,
as most functions are designed to take in and return lists, rather than
individual values. This is to make operations on groups of items (the
usual case when setting up neuronal network simulations) more
convenient.
This dictionary contains an entry named events
which holds the
recorded data. It is itself a dictionary with the entries V_m
and
times
, which we store separately in Vms
and ts
, in the
second and third line, respectively. If you are having trouble imagining
dictionaries of dictionaries and what you are extracting from where, try
first just printing dmm
to the screen to give you a better
understanding of its structure, and then in the next step extract the
dictionary events
, and so on.
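For instance, a quick way to explore the structure step by step:
print(dmm)                   # the full status dictionary of the multimeter
print(dmm["events"])         # just the recorded data
print(dmm["events"].keys())  # names of the recorded quantities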
Now we are ready to display the data in a figure. To this end, we make
use of pylab
.
import pylab
pylab.figure(1)
pylab.plot(ts, Vms)
The second line opens a figure (with the number 1), and the third line
actually produces the plot. You can’t see it yet because we have not
used pylab.show()
. Before we do that, we proceed analogously to
obtain and display the spikes from the spike detector.
dSD = nest.GetStatus(spikedetector,keys="events")[0]
evs = dSD["senders"]
ts = dSD["times"]
pylab.figure(2)
pylab.plot(ts, evs, ".")
pylab.show()
Here we extract the events more concisely by using the optional keyword
argument keys
to GetStatus
. This extracts the dictionary element
with the key events
rather than the whole status dictionary. The
output should look like Figure 2 and Figure 3.
If you want to execute this as a script, just paste all lines into a text file named, say, one-neuron.py. You can then run it from the command line by prefixing the file name with python, or from the Python or ipython prompt, by prefixing it with run.
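For reference, the complete script assembled from the snippets above might look like this:
import pylab
import nest

neuron = nest.Create("iaf_psc_alpha")
nest.SetStatus(neuron, {"I_e": 376.0})

multimeter = nest.Create("multimeter")
nest.SetStatus(multimeter, {"withtime": True, "record_from": ["V_m"]})
spikedetector = nest.Create("spike_detector",
                            params={"withgid": True, "withtime": True})

nest.Connect(multimeter, neuron)
nest.Connect(neuron, spikedetector)

nest.Simulate(1000.0)

# membrane potential trace
dmm = nest.GetStatus(multimeter)[0]
Vms = dmm["events"]["V_m"]
ts = dmm["events"]["times"]
pylab.figure(1)
pylab.plot(ts, Vms)

# spike raster
dSD = nest.GetStatus(spikedetector, keys="events")[0]
evs = dSD["senders"]
ts = dSD["times"]
pylab.figure(2)
pylab.plot(ts, evs, ".")
pylab.show()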
It is possible to collect information from multiple neurons on a single multimeter. This does complicate retrieving the information: the data for each of the n neurons will be stored and returned in an interleaved fashion. Luckily Python provides us with a handy array operation to split the data easily: array slicing with a step (sometimes called stride). To see this, you have to adapt the model created in the previous part. Save your code under a new name; in the next section you will also work on this code. Create an extra neuron with the background current given a different value:
neuron2 = nest.Create("iaf_psc_alpha")
nest.SetStatus(neuron2 , {"I_e": 370.0})
now connect this newly created neuron to the multimeter:
nest.Connect(multimeter, neuron2)
Run the simulation and plot the results; they will look incorrect. To fix this you must plot the two neuron traces separately. Replace the code that extracts the events from the multimeter with the following lines.
pylab.figure(2)
Vms1 = dmm["events"]["V_m"][::2]    # from index 0 to the end, every second entry
ts1 = dmm["events"]["times"][::2]
pylab.plot(ts1, Vms1)
Vms2 = dmm["events"]["V_m"][1::2]   # from index 1 to the end, every second entry
ts2 = dmm["events"]["times"][1::2]
pylab.plot(ts2, Vms2)
Additional information can be found at http://docs.scipy.org/doc/numpy-1.10.0/reference/arrays.indexing.html.
Connecting nodes with specific connections¶
A commonly used model of neural activity is the Poisson process. We now
adapt the previous example so that the neuron receives 2 Poisson spike
trains, one excitatory and the other inhibitory. Hence, we need a new
device, the poisson_generator
. After creating the neurons, we create
these two generators and set their rates to 80000Hz and 15000Hz,
respectively.
noise_ex = nest.Create("poisson_generator")
noise_in = nest.Create("poisson_generator")
nest.SetStatus(noise_ex, {"rate": 80000.0})
nest.SetStatus(noise_in, {"rate": 15000.0})
Additionally, the constant input current should be set to 0:
nest.SetStatus(neuron, {"I_e": 0.0})
Each event of the excitatory generator should produce a postsynaptic
current of 1.2pA amplitude, an inhibitory event of -2.0pA. The synaptic
weights can be defined in a dictionary, which is passed to the
Connect
function using the keyword syn_spec
(synapse
specifications). In general all parameters determining the synapse can
be specified in the synapse dictionary, such as "weight"
,
"delay"
, the synaptic model ("model"
) and parameters specific to
the synaptic model.
syn_dict_ex = {"weight": 1.2}
syn_dict_in = {"weight": -2.0}
nest.Connect(noise_ex, neuron, syn_spec=syn_dict_ex)
nest.Connect(noise_in, neuron, syn_spec=syn_dict_in)

Membrane potential of integrate-and-fire neuron with Poisson noise as input.¶

Spikes of the neuron with noise.¶
The rest of the code remains as before. You should see a membrane potential as in Figure 4 and Figure 5.
In the next part of the introduction (Part 2: Populations of neurons) we will look at more methods for connecting many neurons at once.
Two connected neurons¶

Postsynaptic potentials in neuron2 evoked by the spikes of neuron1¶
There is no additional magic involved in connecting neurons. To demonstrate this, we start from our original example of one neuron with a constant input current, and add a second neuron.
import pylab
import nest
neuron1 = nest.Create("iaf_psc_alpha")
nest.SetStatus(neuron1, {"I_e": 376.0})
neuron2 = nest.Create("iaf_psc_alpha")
multimeter = nest.Create("multimeter")
nest.SetStatus(multimeter, {"withtime": True, "record_from": ["V_m"]})
We now connect neuron1
to neuron2
, and record the membrane
potential from neuron2
so we can observe the postsynaptic potentials
caused by the spikes of neuron1
.
nest.Connect(neuron1, neuron2, syn_spec = {"weight":20.0})
nest.Connect(multimeter, neuron2)
Here the default delay of 1ms was used. If the delay is specified in addition to the weight, the following shortcut is available:
nest.Connect(neuron1, neuron2, syn_spec={"weight":20, "delay":1.0})
If you simulate the network and plot the membrane potential as before,
you should then see the postsynaptic potentials of neuron2
evoked by
the spikes of neuron1
as in Figure 6.
Command overview¶
These are the functions we introduced for the examples in this handout; the following sections of this introduction will add more.
Getting information about NEST¶
See the Getting Help Section
Nodes¶
Create(model, n=1, params=None)
Create n instances of type model in the current sub-network. Parameters for the new nodes can be given as params (a single dictionary, or a list of dictionaries with size n). If omitted, the model’s defaults are used.
GetStatus(nodes, keys=None)
Return a list of parameter dictionaries for the given list of nodes. If keys is given, a list of values is returned instead. keys may also be a list, in which case the returned list contains lists of values.
SetStatus(nodes, params, val=None)
Set the parameters of the given nodes to params, which may be a single dictionary, or a list of dictionaries of the same size as nodes. If val is given, params has to be the name of a property, which is set to val on the nodes. val can be a single value, or a list of the same size as nodes.
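For example, the three call forms side by side (a sketch; the values are illustrative):
nodes = nest.Create("iaf_psc_alpha", 2)
nest.SetStatus(nodes, {"I_e": 376.0})                    # one dictionary for all nodes
nest.SetStatus(nodes, [{"I_e": 376.0}, {"I_e": 370.0}])  # one dictionary per node
nest.SetStatus(nodes, "I_e", 376.0)                      # property name plus value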
Connections¶
This is an abbreviated version of the documentation for the Connect
function, please see NEST’s online help for the full version and
Connection Management for an introduction
and worked examples.
Connect(pre, post, conn_spec=None, syn_spec=None, model=None)
Connect pre neurons to post neurons. Neurons in pre and post are connected using the specified connectivity ("one_to_one" by default) and synapse type ("static_synapse" by default). Details depend on the connectivity rule. Note: Connect does not iterate over subnets, it only connects explicitly specified nodes.
pre - presynaptic neurons, given as a list of GIDs
post - postsynaptic neurons, given as a list of GIDs
conn_spec - name or dictionary specifying the connectivity rule, see below
syn_spec - name or dictionary specifying synapses, see below
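For example, using the string form of both specifications (a sketch; pre and post are node lists created with Create):
nest.Connect(pre, post, conn_spec="all_to_all", syn_spec="static_synapse")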
Connectivity¶
Connectivity is either specified as a string containing the name of a connectivity rule (default: "one_to_one") or as a dictionary specifying the rule and the rule-specific parameters (e.g. "indegree"), which must be given. In addition, switches allowing self-connections ("autapses", default: True) and multiple connections between a pair of neurons ("multapses", default: True) can be contained in the dictionary.
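For example, a minimal sketch of a connectivity dictionary combining a rule-specific parameter with these switches (pre and post stand for lists of GIDs, as in the Connect signature above):

conn_spec = {"rule": "fixed_indegree",  # connectivity rule
             "indegree": 5,             # rule-specific parameter
             "autapses": False,         # forbid self-connections
             "multapses": True}         # allow multiple connections per pair
nest.Connect(pre, post, conn_spec)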
Synapse¶
The synapse model and its properties can be inserted either as a string describing one synapse model (synapse models are listed in the synapsedict) or as a dictionary as described below. If no synapse model is specified, the default model "static_synapse" will be used.
Available keys in the synapse dictionary are "model", "weight", "delay", "receptor_type" and parameters specific to the chosen synapse model. All parameters are optional; if not specified, the default values determined by the current synapse model are used. "model" determines the synapse type, taken from pre-defined synapse types in NEST or manually specified synapses created via CopyModel(). All other parameters can be scalars or distributions. In the case of scalar parameters, all keys take doubles except for "receptor_type", which has to be initialised with an integer. Distributed parameters are initialised with yet another dictionary specifying the distribution ("distribution", such as "normal") and distribution-specific parameters (such as "mu" and "sigma").
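Putting these keys together, a sketch of a scalar and a distributed specification (all values hypothetical; pre and post stand for lists of GIDs):

# all scalar parameters are doubles, except receptor_type (integer)
syn_spec = {"model": "static_synapse", "weight": 2.5,
            "delay": 1.0, "receptor_type": 0}

# a distributed parameter is itself a dictionary naming the distribution
syn_spec_rand = {"model": "static_synapse",
                 "weight": {"distribution": "normal", "mu": 2.5, "sigma": 0.5}}
nest.Connect(pre, post, syn_spec=syn_spec_rand)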
Simulation control¶
Simulate(t)
Simulate the network for t milliseconds.
References¶
- 1
Gewaltig MO and Diesmann M. 2007. NEST (NEural Simulation Tool). Scholarpedia 2(4):1430.
- 2
Python Software Foundation. The Python programming language, 2008. http://www.python.org.
- 3
Eppler JM et al. 2009. PyNEST: A convenient interface to the NEST simulator. Frontiers in Neuroinformatics 2:12. DOI: 10.3389/neuro.11.012.2008.
- 4
Hunter JD. 2007. Matplotlib: A 2D graphics environment. Computing in Science & Engineering 9(3):90-95.
Part 2: Populations of neurons¶
Introduction¶
In this handout we look at creating and parameterising batches of neurons, and connecting them. When you have worked through this material, you will know how to:
create populations of neurons with specific parameters
set model parameters before creation
define models with customised parameters
randomise parameters after creation
make random connections between populations
set up devices to start, stop and save data to file
reset simulations
For more information on the usage of PyNEST, please see the other sections of this primer:
More advanced examples can be found at Example Networks, or have a look at the source directory of your NEST installation in the subdirectory pynest/examples/.
Creating parameterised populations of nodes¶
In the previous handout, we introduced the function
Create(model, n=1, params=None)
. Its mandatory argument is the model
name, which determines what type the nodes to be created should be. Its
two optional arguments are n
, which gives the number of nodes to be
created (default: 1) and params
, which is a dictionary giving the
parameters with which the nodes should be initialised. So the most basic
way of creating a batch of identically parameterised neurons is to
exploit the optional arguments of Create()
:
ndict = {"I_e": 200.0, "tau_m": 20.0}
neuronpop = nest.Create("iaf_psc_alpha", 100, params=ndict)
The variable neuronpop
is a tuple of all the ids of the created
neurons.
Parameterising the neurons at creation is more efficient than using
SetStatus()
after creation, so try to do this wherever possible.
We can also set the parameters of a neuron model before creation,
which allows us to define a simulation more concisely in many cases. If
many individual batches of neurons are to be produced, it is more
convenient to set the defaults of the model, so that all neurons created
from that model will automatically have the same parameters. The
defaults of a model can be queried with GetDefaults(model)
, and set
with SetDefaults(model, params)
, where params
is a dictionary
containing the desired parameter/value pairings. For example:
ndict = {"I_e": 200.0, "tau_m": 20.0}
nest.SetDefaults("iaf_psc_alpha", ndict)
neuronpop1 = nest.Create("iaf_psc_alpha", 100)
neuronpop2 = nest.Create("iaf_psc_alpha", 100)
neuronpop3 = nest.Create("iaf_psc_alpha", 100)
The three populations are now identically parameterised with the usual
model default values for all parameters except I_e
and tau_m
,
which have the values specified in the dictionary ndict
.
If batches of neurons should be of the same model but using different
parameters, it is handy to use CopyModel(existing, new, params=None)
to make a customised version of a neuron model with its own default
parameters. This function is an effective tool to help you write clearer
simulation scripts, as you can use the name of the model to indicate
what role it plays in the simulation. Set up your customised model in
two steps using SetDefaults()
:
edict = {"I_e": 200.0, "tau_m": 20.0}
nest.CopyModel("iaf_psc_alpha", "exc_iaf_psc_alpha")
nest.SetDefaults("exc_iaf_psc_alpha", edict)
or in one step:
idict = {"I_e": 300.0}
nest.CopyModel("iaf_psc_alpha", "inh_iaf_psc_alpha", params=idict)
Either way, the newly defined models can now be used to generate neuron
populations and will also be returned by the function Models()
.
epop1 = nest.Create("exc_iaf_psc_alpha", 100)
epop2 = nest.Create("exc_iaf_psc_alpha", 100)
ipop1 = nest.Create("inh_iaf_psc_alpha", 30)
ipop2 = nest.Create("inh_iaf_psc_alpha", 30)
It is also possible to create populations with an inhomogeneous set of parameters. You would typically create the complete set of parameters, depending on experimental constraints, and then create all the neurons in one go. To do this supply a list of dictionaries of the same length as the number of neurons (or synapses) created:
parameter_list = [{"I_e": 200.0, "tau_m": 20.0}, {"I_e": 150.0, "tau_m": 30.0}]
epop3 = nest.Create("exc_iaf_psc_alpha", 2, parameter_list)
Setting parameters for populations of neurons¶
It is not always possible to set all parameters for a neuron model at or before creation. A classic example of this is when some parameter should be drawn from a random distribution. Of course, it is always possible to make a loop over the population and set the status of each one:
import numpy
Vth = -55.
Vrest = -70.
for neuron in epop1:
    nest.SetStatus([neuron], {"V_m": Vrest + (Vth - Vrest) * numpy.random.rand()})
However, SetStatus()
expects a list of nodes and can set the
parameters for each of them, which is more efficient, and thus to be
preferred. One way to do it is to give a list of dictionaries which is
the same length as the number of nodes to be parameterised, for example
using a list comprehension:
dVms = [{"V_m": Vrest+(Vth-Vrest)\*numpy.random.rand()} for x in epop1]
nest.SetStatus(epop1, dVms)
If we only need to randomise one parameter then there is a more concise way by passing in the name of the parameter and a list of its desired values. Once again, the list must be the same size as the number of nodes to be parameterised:
Vms = Vrest + (Vth - Vrest) * numpy.random.rand(len(epop1))
nest.SetStatus(epop1, "V_m", Vms)
Note that we are being rather lax with random numbers here. Really we have to take more care with them, especially if we are using multiple threads or distributing over multiple machines. We will worry about this later.
Generating populations of neurons with deterministic connections¶
In the previous handout two neurons were connected using synapse specifications. In this section we extend this example to two populations of ten neurons each.
import pylab
import nest
pop1 = nest.Create("iaf_psc_alpha", 10)
nest.SetStatus(pop1, {"I_e": 376.0})
pop2 = nest.Create("iaf_psc_alpha", 10)
multimeter = nest.Create("multimeter", 10)
nest.SetStatus(multimeter, {"withtime":True, "record_from":["V_m"]})
If no connectivity pattern is specified, the populations are connected via the default rule, namely all_to_all: each neuron of pop1 is connected to every neuron in pop2, resulting in \(10^2\) connections.
nest.Connect(pop1, pop2, syn_spec={"weight":20.0})
Alternatively, the neurons can be connected with the one_to_one rule. This means that the first neuron in pop1 is connected to the first neuron in pop2, the second to the second, etc., creating ten connections in total.
nest.Connect(pop1, pop2, "one_to_one", syn_spec={"weight":20.0, "delay":1.0})
Finally, the multimeters are connected using the default rule:
nest.Connect(multimeter, pop2)
Here we have just used very simple connection schemes. Connectivity patterns requiring the specification of further parameters, such as in-degree or connection probabilities, must be defined in a dictionary containing the key rule and the key for the parameters associated with the rule. Please see Connection Management for an illustrated guide to the usage of Connect.
Connecting populations with random connections¶
In the previous handout we looked at the connectivity patterns
one_to_one
and all_to_all
. However, we often want to look at
networks with a sparser connectivity than all-to-all. Here we introduce
four connectivity patterns which generate random connections between two
populations of neurons.
The connection rule fixed_indegree
allows us to create n
random
connections for each neuron in the target population post
to a
randomly selected neuron from the source population pre
. The
variables weight
and delay
can be left unspecified, in which
case the default weight and delay are used. Alternatively we can set
them in the syn_spec
, so each created connection has the same
weight and delay. Here is an example:
d = 1.0
Je = 2.0
Ke = 20
Ji = -4.0
Ki = 12
conn_dict_ex = {"rule": "fixed_indegree", "indegree": Ke}
conn_dict_in = {"rule": "fixed_indegree", "indegree": Ki}
syn_dict_ex = {"delay": d, "weight": Je}
syn_dict_in = {"delay": d, "weight": Ji}
nest.Connect(epop1, ipop1, conn_dict_ex, syn_dict_ex)
nest.Connect(ipop1, epop1, conn_dict_in, syn_dict_in)
Now each neuron in the target population ipop1
has Ke
incoming
random connections chosen from the source population epop1
with
weight Je
and delay d
, and each neuron in the target population
epop1
has Ki
incoming random connections chosen from the source
population ipop1
with weight Ji
and delay d
.
The connectivity rule fixed_outdegree
works in analogous fashion,
with n
connections (keyword outdegree
) being randomly selected
from the target population post
for each neuron in the source
population pre
. For reasons of efficiency, particularly when
simulating in a distributed fashion, it is better to use
fixed_indegree
if possible.
Another connectivity pattern available is fixed_total_number
. Here
n
connections (keyword N
) are created by randomly drawing source
neurons from the populations pre
and target neurons from the
population post
.
When choosing the connectivity rule pairwise_bernoulli, connections are generated by iterating through all possible source-target pairs and creating each connection with probability p (keyword p).
In addition to the rule specific parameters indegree
, outdegree
,
N
and p
, the conn_spec
can contain the keywords autapses
and multapses
(set to False
or True
) allowing or forbidding
self-connections and multiple connections between two neurons,
respectively.
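A sketch of these two rules, reusing the populations created above (parameter values hypothetical):

# fixed_total_number: N connections drawn at random over all source-target pairs
conn_dict_total = {"rule": "fixed_total_number", "N": 500}
nest.Connect(epop1, epop2, conn_dict_total)

# pairwise_bernoulli: each pair connected with probability p;
# here we additionally forbid self-connections and multapses
conn_dict_bern = {"rule": "pairwise_bernoulli", "p": 0.1,
                  "autapses": False, "multapses": False}
nest.Connect(epop1, epop1, conn_dict_bern)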
Note that for all connectivity rules, it is perfectly legitimate to have
the same population simultaneously in the role of pre
and post
.
For more information on connecting neurons, please read the
documentation of the Connect
function and consult the guide at
Connection management.
Specifying the behaviour of devices¶
All devices implement a basic timing capacity; the parameter start (default: 0) determines the beginning of the device's activity and the parameter stop (default: \(∞\)) its end. These values are taken relative to the value of origin (default: 0). For example, the following creates a poisson_generator which is only active between 100 and 150 ms:
pg = nest.Create("poisson_generator")
nest.SetStatus(pg, {"start": 100.0, "stop": 150.0})
This functionality is useful for setting up experimental protocols with stimuli that start and stop at particular times.
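For instance, a minimal sketch of a protocol with two stimulus windows presented one after the other (the rate value is hypothetical):

pg1 = nest.Create("poisson_generator",
                  params={"rate": 8000.0, "start": 100.0, "stop": 150.0})
pg2 = nest.Create("poisson_generator",
                  params={"rate": 8000.0, "start": 300.0, "stop": 350.0})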
So far we have accessed the data recorded by devices directly, by extracting the value of events. However, for larger or longer simulations, we may prefer to write the data to file for later analysis instead. All recording devices allow the specification of where data is stored via the parameters to_memory (default: True), to_file (default: False) and to_screen (default: False).
The following code sets up a multimeter
to record data to a named
file:
recdict = {"to_memory" : False, "to_file" : True, "label" : "epop_mp"}
mm1 = nest.Create("multimeter", params=recdict)
If no name for the file is specified using the label parameter, NEST will generate one using the name of the device and its id. If the simulation is multithreaded or distributed, multiple files will be created, one for each process and/or thread. For more information on how to customise the behaviour and output format of recording devices, please read the documentation for RecordingDevice.
Resetting simulations¶
It often occurs that we need to reset a simulation. For example, if you
are developing a script, then you may need to run it from the
ipython
console multiple times before you are happy with its
behaviour. In this case, it is useful to use the function
ResetKernel()
. This gets rid of all nodes you have created, any
customised models you created, and resets the internal clock to 0.
The other main use of resetting is when you need to run a simulation in
a loop, for example to test different parameter settings. In this case
there is typically no need to throw out the whole network and to create and connect everything again; it is enough to re-parameterise the network. A good
strategy here is to create and connect your network outside the loop,
and then carry out the parametrisation, simulation and data collection
steps within the loop. Here it is often helpful to call the function
ResetNetwork()
within each loop iteration. It resets all nodes to
their default configuration and wipes the data from recording devices.
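A minimal sketch of this pattern (parameter values hypothetical):

neuron = nest.Create("iaf_psc_alpha")
sd = nest.Create("spike_detector")
nest.Connect(neuron, sd)

for I_e in [300.0, 350.0, 400.0]:
    nest.ResetNetwork()                   # wipe dynamic state and recorded data
    nest.SetStatus(neuron, {"I_e": I_e})  # re-parameterise, no rebuilding
    nest.Simulate(1000.0)
    print(I_e, nest.GetStatus(sd, "n_events")[0])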
Command overview¶
These are the new functions we introduced for the examples in this handout.
Getting and setting basic settings and parameters of NEST¶
GetKernelStatus(keys=None)
Obtain parameters of the simulation kernel. Returns:
Parameter dictionary if called without argument
Single parameter value if called with single parameter name
List of parameter values if called with list of parameter names
SetKernelStatus(params)
Set parameters for the simulation kernel.
Models¶
GetDefaults(model)
Return a dictionary with the default parameters of the given model, specified by a string.
SetDefaults(model, params)
Set the default parameters of the given model to the values specified in the params dictionary.
CopyModel(existing, new, params=None)
Create a new model by copying an existing one. Default parameters can be given as params, or else are taken from existing.
Simulation control¶
ResetKernel()
Reset the simulation kernel. This will destroy the network as well as all custom models created with CopyModel(). The parameters of built-in models are reset to their defaults. Calling this function is equivalent to restarting NEST.
ResetNetwork()
Reset all nodes and connections to the defaults of their respective model.
Part 3: Connecting networks with synapses¶
Introduction¶
In this handout we look at using synapse models to connect neurons. After you have worked through this material, you will know how to:
set synapse model parameters before creation
define synapse models with customised parameters
use synapse models in connection routines
query the synapse values after connection
set synapse values during and after connection
For more information on the usage of PyNEST, please see the other sections of this primer:
More advanced examples can be found at Example Networks, or have a look at the source directory of your NEST installation in the subdirectory pynest/examples/.
Parameterising synapse models¶
NEST provides a variety of different synapse models. You can see the available models by using the command Models(mtype="synapses"), which picks only the synapse models out of the list of all available models.
Synapse models can be parameterised analogously to neuron models. You
can discover the default parameter settings using GetDefaults(model)
and set them with SetDefaults(model,params)
:
nest.SetDefaults("stdp_synapse",{"tau_plus": 15.0})
Any synapse generated from this model will then have all the standard parameters except for tau_plus, which will have the value given above.
Moreover, we can also create customised variants of synapse models using
CopyModel()
, exactly as demonstrated for neuron models:
nest.CopyModel("stdp_synapse","layer1_stdp_synapse",{"Wmax": 90.0})
Now layer1_stdp_synapse
will appear in the list returned by
Models()
, and can be used anywhere that a built-in model name can be
used.
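For instance, a sketch of passing the copied model to a connection routine (the populations are hypothetical):

syn_dict = {"model": "layer1_stdp_synapse", "weight": 2.0}
nest.Connect(pre_pop, post_pop, "all_to_all", syn_dict)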
STDP synapses¶
For the majority of synapses, all of their parameters are accessible via
GetDefaults()
and SetDefaults()
. Synapse models implementing
spike-timing dependent plasticity are an exception to this, as their
dynamics are driven by the post-synaptic spike train as well as the
pre-synaptic one. As a consequence, the time constant of the depressing
window of STDP is a parameter of the post-synaptic neuron. It can be set
as follows:
nest.Create("iaf_psc_alpha", params={"tau_minus": 30.0})
or by using any of the other methods of parameterising neurons demonstrated in the first two parts of this introduction.
Connecting with synapse models¶
The synapse model as well as parameters associated with the synapse type can be set in the synapse specification dictionary accepted by the connection routine.
conn_dict = {"rule": "fixed_indegree", "indegree": K}
syn_dict = {"model": "stdp_synapse", "alpha": 1.0}
nest.Connect(epop1, epop2, conn_dict, syn_dict)
If no synapse model is given, connections are made using the model
static_synapse
.
Distributing synapse parameters¶
The synapse parameters are specified in the synapse dictionary which is
passed to the Connect
-function. If the parameter is set to a scalar
all connections will be drawn using the same parameter. Parameters can
be randomly distributed by assigning a dictionary to the parameter. The
dictionary has to contain the key distribution
setting the target
distribution of the parameters (for example normal
). Optionally,
parameters associated with the distribution can be set (for example
mu
). Here we show an example where the parameters alpha
and
weight
of the stdp synapse are uniformly distributed.
alpha_min = 0.1
alpha_max = 2.
w_min = 0.5
w_max = 5.
syn_dict = {"model": "stdp_synapse",
"alpha": {"distribution": "uniform", "low": alpha_min, "high": alpha_max},
"weight": {"distribution": "uniform", "low": w_min, "high": w_max},
"delay": 1.0}
nest.Connect(epop1, neuron, "all_to_all", syn_dict)
Available distributions and associated parameters are described in Connection Management, the most common ones are:
Distributions | Keys
---|---
normal | mu, sigma
lognormal | mu, sigma
uniform | low, high
uniform_int | low, high
binomial | n, p
exponential | lambda
gamma | order, scale
poisson | lambda
Querying the synapses¶
The function
GetConnections(source=None, target=None, synapse_model=None)
returns
a list of connection identifiers that match the given specifications.
There are no mandatory arguments. If it is called without any arguments,
it will return all the connections in the network. If source
is
specified, as a list of one or more nodes, the function will return all
outgoing connections from that population:
nest.GetConnections(epop1)
Similarly, we can find the incoming connections of a particular target
population by specifying target
as a list of one or more nodes:
nest.GetConnections(target=epop2)
will return all connections between all neurons in the network and neurons in epop2. Finally, the search can be restricted by specifying a given synapse model:
nest.GetConnections(synapse_model="stdp_synapse")
will return all the connections in the network which are of type stdp_synapse. The last two cases are slower than the first, as a full search of all connections has to be performed. The arguments source, target and synapse_model can be used individually, as above, or in any combination:
nest.GetConnections(epop1, epop2, "stdp_synapse")
will return all the connections that the neurons in epop1
have to
neurons in epop2
of type stdp_synapse
. Note that all these
querying commands will only return the local connections, i.e. those
represented on that particular MPI process in a distributed simulation.
Once we have the array of connections, we can extract data from it using
GetStatus()
. In the simplest case, this returns a list of
dictionaries, containing the parameters and variables for each
connection found by GetConnections
. However, usually we don’t want
all the information from a synapse, but some specific part of it. For
example, if we want to check we have connected the network as intended,
we might want to examine only the parameter target
of each
connection. We can extract just this information by using the optional
keys
argument of GetStatus()
:
conns = nest.GetConnections(epop1, synapse_model="stdp_synapse")
targets = nest.GetStatus(conns, "target")
The variable targets is now a list of all the target values of the connections found. If we are interested in more than one parameter, keys can be a list of keys as well:
conns = nest.GetConnections(epop1, synapse_model="stdp_synapse")
conn_vals = nest.GetStatus(conns, ["target","weight"])
The variable conn_vals
is now a list of lists, containing the
target
and weight
values for each connection found.
To get used to these methods of querying the synapses, it is recommended to try them out on a small network where all connections are known.
Coding style¶
As your simulations become more complex, it is very helpful to develop a clean coding style. This reduces the number of errors in the first place, but also assists you to debug your code and makes it easier for others to understand it (or even yourself after two weeks). Here are some pointers, some of which are common to programming in general and some of which are more NEST specific. Another source of useful advice is PEP-8, which, conveniently, can be automatically checked by many editors and IDEs.
Numbers and variables¶
Simulations typically have lots of numbers in them - we use them to set parameters for neuron models, to define the strengths of connections, the length of simulations and so on. Sometimes we want to use the same parameters in different scripts, or calculate some parameters based on the values of other parameters. It is not recommended to hardwire the numbers into your scripts, as this is error-prone: if you later decide to change the value of a given parameter, you have to go through all your code and check that you have changed every instance of it. This is particularly difficult to catch if the value is being used in different contexts, for example to set a weight in one place and to calculate the mean synaptic input in another.
A better approach is to set a variable to your parameter value, and then always use the variable name every time the value is needed. It is also hard to follow the code if the definitions of variables are spread throughout the script. If you have a parameters section in your script, and group the variable names according to function (e.g. neuronal parameters, synaptic parameters, stimulation parameters,…) then it is much easier to find and check them. Similarly, if you need to share parameters between simulation scripts, it is much less error-prone to define all the variable names in a separate parameters file, which the individual scripts can import. Thus a good rule of thumb is that numbers should only be visible in distinct parameter files or parameter sections, otherwise they should be represented by variables.
Repetitive code, copy-and-paste, functions¶
Often you need to repeat a section of code with minor modifications. For example, you have two multimeters and you wish to extract the recorded variable from each of them and then calculate its maximum. The temptation is to write the code once, then copy-and-paste it to its new location and make any necessary modifications:
dma = nest.GetStatus(ma, keys="events")[0]
Vma = dma["V_m"]
amax = max(Vma)
dmb = nest.GetStatus(mb, keys="events")[0]
Vmb = dmb["V_m"]
bmax = max(Vmb)
print(amax-bmax)
There are two problems with this. First, it makes the main section of your code longer and harder to follow. Secondly, it is error-prone. A certain percentage of the time you will forget to make all the necessary modifications after the copy-and-paste, and this will introduce errors into your code that are hard to find, not only because they are semantically correct and so don’t cause an obvious error, but also because your eye tends to drift over them:
dma = nest.GetStatus(multimeter1, keys="events")[0]
Vma = dma["V_m"]
amax = max(Vma)
dmb = nest.GetStatus(multimeter2, keys="events")[0]
Vmb = dmb["V_m"]
bmax = max(Vma)   # the copy-and-paste error: this should be Vmb
print(amax-bmax)
The best way to avoid this is to define a function:
def getMaxMemPot(Vdevice):
    dm = nest.GetStatus(Vdevice, keys="events")[0]
    return max(dm["V_m"])
Such helper functions can usefully be stored in their own section, analogous to the parameters section. Now we can write down the functionality in a more concise and less error-prone fashion:
amax = getMaxMemPot(multimeter1)
bmax = getMaxMemPot(multimeter2)
print(amax-bmax)
If you find that this clutters your code, as an alternative you can
write a lambda
function as an argument for map
, and enjoy the
feeling of smugness that will pervade the rest of your day. A good
policy is that if you find yourself about to copy-and-paste more than
one line of code, consider taking the few extra seconds required to
define a function. You will easily win this time back by spending less
time looking for errors.
Subsequences and loops¶
When preparing a simulation or collecting or analysing data, it commonly happens that we need to perform the same operation on each node (or a subset of nodes) in a population. As neurons receive their ids at the time of creation, it is possible to use your knowledge of these ids explicitly:
Nrec = 50
neuronpop = nest.Create("iaf_psc_alpha", 200)
sd = nest.Create("spike_detector")
nest.Connect(list(range(1, Nrec + 1)), sd, "all_to_all")
However, this is not at all recommended! As you develop your simulation, you may well add additional nodes; this means that your initially correct range boundaries are now incorrect, an error that is hard to catch. To get a subsequence of nodes, use a slice of the relevant population:
nest.Connect(neuronpop[:Nrec], sd, "all_to_all")
An even worse thing is to use knowledge about neuron ids to set up loops:
for n in range(1, len(neuronpop) + 1):
    nest.SetStatus([n], {"V_m": -67.0})
Not only is this error-prone like the previous example; the majority of PyNEST functions expect a list anyway. If you give them a list, you reduce the complexity of your main script (good) and push the loop down to the faster C++ kernel, where it will run more quickly (also good). Therefore, you should instead write:
nest.SetStatus(neuronpop, {"V_m": -67.0})
See Part 2 for more examples on operations on multiple neurons, such as setting the status from a random distribution and connecting populations.
If you really really need to loop over neurons, just loop over the population itself (or a slice of it) rather than introducing ranges:
for n in neuronpop:
    my_weird_function(n)
Thus we can conclude: instead of range operations, use slices of and loops over the neuronal population itself. In the case of loops, check first whether you can avoid it entirely by passing the entire population into the function - you usually can.
Command overview¶
These are the new functions we introduced for the examples in this handout.
Querying Synapses¶
GetConnections(source=None, target=None, synapse_model=None)
Return an array of connection identifiers.
Parameters:
source - list of source GIDs
target - list of target GIDs
synapse_model - string with the synapse model
If GetConnections is called without parameters, all connections in the network are returned. If a list of source neurons is given, only connections from these pre-synaptic neurons are returned. If a list of target neurons is given, only connections to these post-synaptic neurons are returned. If a synapse model is given, only connections with this synapse type are returned. Any combination of source, target and synapse_model parameters is permitted. Each connection id is a 5-tuple or, if available, a NumPy array with the following five entries: source-gid, target-gid, target-thread, synapse-id, port.
Note: Only connections with targets on the MPI process executing the command are returned.
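As a brief sketch, each returned identifier can be unpacked into the five fields listed above (the population name is hypothetical):

conns = nest.GetConnections(source=epop1)
for conn in conns:
    source_gid, target_gid, target_thread, synapse_id, port = conn
    print(source_gid, "->", target_gid)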
Part 4: Topologically structured networks¶
Introduction¶
This handout covers the use of NEST’s topology
library to construct
structured networks. When you have worked through this material you will
be able to:
Create populations of neurons with specific spatial locations
Define connectivity profiles between populations
Connect populations using profiles
Visualise the connectivity
For more information on the usage of PyNEST, please see the other sections of this primer:
More advanced examples can be found at Example Networks, or have a look at the source directory of your NEST installation in the subdirectory pynest/examples/.
Incorporating structure in networks of point neurons¶
If we use biologically detailed models of a neuron, then it’s easy to understand and implement the concepts of topology, as we already have dendritic arbors, axons, etc. which are the physical prerequisites for connectivity within the nervous system. However, we can still get a level of specificity using networks of point neurons.
Structure, both in the topological and everyday sense, can be thought of as a set of rules governing the location of objects and the connections between them. Within networks of point neurons, we can distinguish between three types of specificity:
Cell-type specificity – what sorts of cells are there?
Location specificity – where are the cells?
Projection specificity – which cells do they project to, and how?
In the previous handouts, we saw that we can create deterministic or
randomly selected connections between networks using Connect()
. If
we want to create network models that incorporate the spatial location
and spatial connectivity profiles, it is time to turn to the
topology
module. NOTE: Full documentation for usage of the
topology module is present in NEST Topology Users Manual (NTUM)
1, which in the following pages is referenced as a
full-source.
The nest.topology module¶
The nest.topology
module allows us to create populations of nodes
with a given spatial organisation, connection profiles which specify how
neurons are to be connected, and provides a high-level connection
routine. We can thus create structured networks by designing the
connection profiles to give the desired specificity for cell-type,
location and projection.
The generation of structured networks is carried out in three steps, each of which will be explained in the subsequent sections in more detail:
Defining layers, in which we assign the layout and types of the neurons within a layer of our network.
Defining connection profiles, where we generate the profiles that we wish our connections to have. Each connection dictionary specifies the properties for one class of connection, and contains parameters that allow us to tune the profile. These are related to the location-dependent likelihood of choosing a target (mask and kernel), and the cell-type specificity, i.e. which types of cell in a layer can participate in the connection class (sources and targets).
Connecting layers, in which we apply the connection dictionaries between layers, equivalent to population-specificity. Note that multiple dictionaries can be applied between two layers, just as a layer can be connected to itself.
Auxiliary, in which we visualise the results of the above steps either by nest.PrintNetwork() or visualization functions included in the topology module, and query the connections for further analysis.
Defining layers¶
The code for defining a layer follows this template:
import nest.topology as topp
my_layer_dict = {...} # see below for options
my_layer = topp.CreateLayer(my_layer_dict)
where my_layer_dict
will define the elements of the layer and their
locations.
The choice of nodes to fill the layer is specified using the elements key. For the moment, we'll only concern ourselves with creating simple layers, where each element is from a homogeneous population. The corresponding value for this dictionary entry should then be the model type of the neuron, which can either be an existing model in the NEST collection, or one that we've previously defined using CopyModel().
We next have to decide whether the nodes should be placed in a grid-based or free (off-grid) fashion, which is equivalent to asking "can the elements of our network be regularly and evenly placed within a 2D network, or do we need to tell them where they should be located?"
1 - On-grid¶
We have to explicitly specify the size and spacing of the grid, by the number of rows m and columns n as well as the extent (layer size). The grid spacing is then determined from these, and n×m elements are arranged symmetrically. Note that we can also specify a center for the grid; otherwise the default offset is the origin.
The following snippet produces grid
:
layer_dict_ex = {"extent" : [2.,2.], # the size of the layer in mm
"rows" : 10, # the number of rows in this layer ...
"columns" : 10, # ... and the number of columns
"elements" : "iaf_psc_alpha"} # the element at each (x,y) coordinate in the grid
2 - Off grid¶
We define only the elements, their positions and the extent. The number of elements created is equivalent to the length of the list of positions. This option allows much more flexibility in how we distribute neurons. Note that we should also specify the extent if the positions fall outside of the default (extent size = [1,1] and origin as the center). See Section 2.2 in NTUM for more details.
The following snippet produces free
:
import numpy as np
# grid with jitter
jit = 0.03
xs = np.arange(-0.5,.501,0.1)
poss = [[x,y] for y in xs for x in xs]
poss = [[p[0]+np.random.uniform(-jit,jit),p[1]+np.random.uniform(-jit,jit)] for p in poss]
layer_dict_ex = {"positions": poss,
"extent" : [1.1,1.1],
"elements" : "iaf_psc_alpha"}
Note: The topology module does support 3D layers, but this is outside the scope of this handout.
An overview of all the parameters that can be used, as well as whether they are primarily used for grid-based or free layers, follows:
Parameter | Grid | Description | Possible values
---|---|---|---
elements | Both | Type of model to be included in the network | Any model listed in nest.Models()
extent | Both | Size of the layer in mm. Default is [1., 1.] | 2D list
rows | On | Number of rows | int
columns | On | Number of columns | int
center | On | The center of the grid or free layer. Allows for grids to be structured independently of each other (see Fig. 2.3 in NTUM) | 2D list
positions | Off | List of positions for each of the neurons to be created | List of lists or tuples
edge_wrap | Both | Whether the layer should have a periodic boundary or not. Default: False | boolean
Advanced¶
Composite layers can also be created. This layer type extends the
grid-based layer and allows us to define a number of neurons and other
elements, such as poisson_generators
, at each grid location. A full
explanation is available in Section 2.5 of NTUM. The advantage of this approach is that, if we want a layer in which each element or subnetwork has the same composition of components, it's very easy to define a layer with these properties. For a simple example, let's consider a grid of elements, where each element comprises 4 pyramidal cells, 1 interneuron, 1 poisson generator and 1 noise generator. The corresponding code is:
nest.CopyModel("iaf_psc_alpha","pyr")
nest.CopyModel("iaf_psc_alpha","inh", {"V_th": -52.})
comp_layer = topp.CreateLayer({"rows":5,"columns":5,
"elements": ["pyr",4,"inh","poisson_generator","noise_generator"]})
Defining connection profiles¶
To define the types of connections that we want between populations of neurons, we specify a connection dictionary.
The only two mandatory parameters for any connection dictionary are
connection_type
and mask
. All others allow us to tune our
connectivity profiles by tuning the likelihood of a connection, the
synapse type, the weight and/or delay associated with a connection, or
the number of connections, as well as specifying restrictions on cell
types that can participate in the connection class.
Chapter 3 in NTUM deals comprehensively with all the different possibilities, and it’s suggested that you look there for learning about the different constraints, as well as reading through the different examples listed there. Here are some representative examples for setting up a connectivity profile, and the following table lists the parameters that can be used.
# Circular mask, gaussian kernel.
conn1 = { "connection_type":"divergent",
"mask": {"circular":{"radius":0.75}},
"kernel": {"gaussian":{"p_center":1.,"sigma":0.2}},
"allow_autapses":False
}
# Rectangular mask, constant kernel, non-centered anchor
conn2 = { "connection_type":"divergent",
"mask": {"rectangular":{"lower_left":[-0.5,-0.5],"upper_right":[0.5,0.5]},
"anchor": [0.5,0.5],
},
"kernel": 0.75,
"allow_autapses":False
}
# Donut mask, linear kernel that decreases with distance
# Commented out line would allow connection to target the pyr neurons (useful for composite layers)
conn3 = { "connection_type": "divergent",
"mask": {"doughnut":{"inner_radius":0.1,"outer_radius":0.95}},
"kernel": {"linear": {"c":1.,"a":-0.8}},
#"targets":"pyr"
}
# Rectangular mask, fixed number of connections, gaussian weights, linear delays
conn4 = { "connection_type":"divergent",
"mask": {"rectangular":{"lower_left":[-0.5,-0.5],"upper_right":[0.5,0.5]}},
"number_of_connections": 40,
"weights": {"gaussian":{"p_center":J,"sigma":0.25}},
"delays" : {"linear" :{"c":0.1,"a":0.2}},
"allow_autapses":False
}
Parameter | Description | Possible values
---|---|---
connection_type | Determines how nodes are selected when connections are made | convergent, divergent
mask | Spatially selected subset of neurons considered as (potential) targets | circular, rectangular, doughnut, grid
kernel | Function that determines the likelihood of a neuron being chosen as a target. Can be distance-dependent or -independent. | constant, uniform, linear, gaussian, exponential, gaussian2D
weights | Distribution of weight values of connections. Can be distance-dependent or -independent. NB: this value overrides any value currently used by synapse_model, and unless defined will default to 1! | constant, uniform, linear, gaussian, exponential
delays | Distribution of delay values for connections. Can be distance-dependent or -independent. NB: like weights, this value overrides any value currently used by synapse_model! | constant, uniform, linear, gaussian, exponential
synapse_model | Defines the type of synapse model to be included | any synapse model included in synapsedict
sources | Defines the sources (presynaptic) neurons for this connection | any neuron label that is currently user-defined
targets | Defines the target (postsynaptic) neurons for this connection | any neuron label that is currently user-defined
number_of_connections | Fixes the number of connections that this neuron is to send, ensuring a fixed out-degree distribution | int
allow_multapses | Whether we want to have multiple connections between the same source-target pair, or ensure unique connections | boolean
allow_autapses | Whether we want to allow a neuron to connect to itself | boolean
Connecting layers¶
Connecting layers is the easiest step: having defined a source layer, a
target layer and a connection dictionary, we simply use the function
topp.ConnectLayers()
:
ex_layer = topp.CreateLayer({"rows":5,"columns":5,"elements":"iaf_psc_alpha"})
in_layer = topp.CreateLayer({"rows":4,"columns":4,"elements":"iaf_psc_alpha"})
conn_dict_ex = {"connection_type":"divergent","mask":{"circular":{"radius":0.5}}}
# And now we connect E->I
topp.ConnectLayers(ex_layer,in_layer,conn_dict_ex)
Note that we can define several dictionaries, use the same dictionary multiple times and connect to the same layer:
# Extending the code from above ... we add a conndict for inhibitory neurons
conn_dict_in = {"connection_type":"divergent",
"mask":{"circular":{"radius":0.75}},"weights":-4.}
# And finish connecting the rest of the layers:
topp.ConnectLayers(ex_layer,ex_layer,conn_dict_ex) # Connect E->E
topp.ConnectLayers(in_layer,in_layer,conn_dict_in) # Connect I->I
topp.ConnectLayers(in_layer,ex_layer,conn_dict_in) # Connect I->E
Visualising and querying the network structure¶
There are two main methods that we can use for checking that our network was built correctly:
nest.PrintNetwork(depth=1)
which prints out all the neurons and subnetworks within the network in text form. This is a good way to inspect the hierarchy of composite layers;
create plots using functions in nest.topology (https://www.nest-simulator.org/pynest-topology/)
There are three functions that can be combined:
PlotLayer
PlotTargets
PlotKernel
which allow us to generate the plots used in NTUM and this handout. See Section 4.2 of NTUM for more details.
Other useful functions that may be of help, in addition to those already listed in NTUM Section 4.1, are:
Function | Description
---|---
nest.GetNodes(layer) | Returns GIDs of layer elements: either nodes or top-level subnets (for composite layers)
nest.GetLeaves(layer) | Returns GIDs of the leaves of a structure, which are always neurons rather than subnets
topp.GetPosition(gids) | Returns the positions of the elements specified in the input
nest.GetStatus(layer, "topology") | Returns the layer dictionary for a layer
References¶
- 1
Plesser HE and Enger H. NEST Topology User Manual, https://www.nest-simulator.org/wp-content/uploads/2015/04/Topology_UserManual.pdf
Introduction to the MUSIC Interface¶
The MUSIC interface, a standard by the INCF, allows the transmission of data between applications at runtime. It can be used to couple NEST with other simulators, with applications for stimulus generation and data analysis and visualization and with custom applications that also use the MUSIC interface.
Setup of System¶
To use MUSIC with NEST, we first need to ensure MUSIC is installed on our system and NEST is configured properly.
Please install MUSIC using the instructions on the MUSIC website.
When building NEST, you need to add the following configuration option to your cmake invocation:
cmake -Dwith-music=[ON </path/to/music>]
make
make install
A Quick Introduction to NEST and MUSIC¶
In this tutorial, we will show you how to use the MUSIC library together with NEST. We will cover how to use the library from PyNEST and from the SLI language interface. In addition, we’ll introduce the use of MUSIC in a C++ application and how to connect such an application to a NEST simulation.
Our aim is to show practical examples of how to use MUSIC, and highlight common pitfalls that can trip the unwary. Also, we assume only a minimal knowledge of Python, C++ and (especially) SLI, so the examples will favour clarity and simplicity over elegance and idiomatic constructs.
While the focus here is on MUSIC, we need to know a few things about how NEST works in order to understand how MUSIC interacts with it.
Go straight to part 1 of tutorial - connect 2 simulations using PyNEST
The Basics of NEST¶
A NEST network consists of three types of elements: neurons, devices, and connections between them.
Neurons are the basic building blocks, and in NEST they are generally spiking point neuron models. Devices are supporting units that for instance generate inputs to neurons or record data from them. The Poisson spike generator, the spike detector recording device and the MUSIC input and output proxies are all devices. Neurons and devices are collectively called nodes, and are connected using connections.
Connections are unidirectional and carry events between nodes. Each
neuron can get multiple input connections from any number of other
neurons. Neuron connections typically carry spike events, but other
kinds of events, such as voltages and currents, are also available for
recording devices. Synapses are not independent nodes, but are part of
the connection. Synapse models will typically modify the weight or
timing of the spike sent on to the neuron. All connections have a
synapse, by default the static_synapse
.

A: Two connected neurons \(N_a\) and \(N_b\), with a synapse \(S\) and a receptor \(R\). A spike with weight \(W_a\) is generated at \(t_0\). B: The spike traverses the synapse and is added to the queue in the receptor. C: The receptor processes the spike at time \(t_0 + d\).¶
Connections have a delay and a weight. All connections are implemented on the receiving side, and the interpretation of the parameters is ultimately up to the receiving node. In Figure 7 A, neuron \(N_a\) has sent a spike to \(N_b\) at time \(t\), over a connection with weight \(w_a\) and delay \(d\). The spike is sent through the synapse, then buffered on the receiving side until \(t+d\) (Figure 7 B). At that time it’s handed over to the neuron model receptor that converts the spike event to a current and applies it to the neuron model (Figure 7 C).
Adding MUSIC connections¶
In NEST you use MUSIC with a pair of extra devices called proxies that create a MUSIC connection between them across simulations. The pair effectively works just like a regular connection within a single simulation. Each connection between MUSIC proxies is called a port, and connected by name in the MUSIC configuration file.
Each MUSIC port can carry multiple numbered channels. The channel is the smallest unit of transmission, in that you can distinguish data flowing in different channels, but not within a single channel. Depending on the application a port may have one or many channels, and a single channel can carry the events from one single neuron model or the aggregate output of many neurons.

A: Two connected neurons \(N_a\) and \(N_b\), with delay \(d_n\) and weight \(w_n\). B: We’ve added a MUSIC connection with an output proxy \(P_a\) on one end, and an input proxy \(P_b\) on the other.¶
In Figure 8 A we see a regular NEST connection between two neurons \(N_a\) and \(N_b\). The connection carries a weight \(w_n\) and a delay \(d_n\). In Figure 8 B we have inserted a pair of MUSIC proxies into the connection, with an output proxy \(P_a\) on one end, and input proxy \(P_b\) on the other.
As we mentioned above, MUSIC proxies are devices, not regular neuron models. Like most devices, proxies ignore weight and delay parameters on incoming connections. Any delay applied to the connection from \(N_a\) to the output proxy \(P_a\) is thus silently ignored. MUSIC makes the inter-simulation transmission delays invisible to the models themselves, so the connection from \(P_a\) to \(P_b\) is effectively zero. The total delay and weight of the connection from \(N_a\) to \(N_b\) is thus that set on the \(P_b\) to \(N_b\) connection.

A MUSIC connection with two outputs and two inputs. A single output proxy sends two channels of data to an input event handler that divides the channels to the two input proxies. They connect the recipient neuron models.¶
When we have multiple channels, the structure looks something like in Figure 9. Now we have two neurons \(N_{a1}\) and \(N_{a2}\) that we want to connect to \(N_{b1}\) and \(N_{b2}\) respectively. As we mentioned above, NEST devices can accept connections from multiple separate devices, so we only need one output proxy \(P_a\). We connect each input to a different channel.
Nodes can only output one connection stream, so on the receiving side we need one input proxy \(P_b\) per input. Internally, there is a single MUSIC event handler device \(Ev\) that accepts all inputs from a port, then sends the appropriate channel inputs to each input proxy. These proxies each connect to the recipient neurons as above.
Publication¶
Djurfeldt M. et al. 2010. Run-time interoperability between neuronal network simulators based on the music framework. Neuroinformatics. 8(1):43–60. DOI: 10.1007/s12021-010-9064-z.
Connect two NEST simulations using MUSIC¶
Let’s look at an example of two NEST simulations connected through MUSIC. We’ll implement the simple network in Figure 9 from the introduction to this tutorial.
We need a sending process, a receiving process and a MUSIC configuration file:
 1  #!/usr/bin/env python
 2  import nest
 3
 4  nest.SetKernelStatus({"overwrite_files": True})
 5
 6  neurons = nest.Create('iaf_psc_alpha', 2, [{'I_e': 400.0}, {'I_e': 405.0}])
 7
 8  music_out = nest.Create('music_event_out_proxy', 1,
 9                          params = {'port_name': 'p_out'})
10
11  for i, n in enumerate(neurons):
12      nest.Connect([n], music_out, "one_to_one", {'music_channel': i})
13
14  sdetector = nest.Create("spike_detector")
15  nest.SetStatus(sdetector, {"withgid": True, "withtime": True, "to_file": True,
16                             "label": "send", "file_extension": "spikes"})
17
18  nest.Connect(neurons, sdetector)
19
20  nest.Simulate(1000.0)
The sending process is quite straightforward. We import the NEST library and set a useful kernel parameter. On line 6, we create two simple integrate-and-fire neuron models, one with a current input of 400 pA, and one with 405 pA, just so they will respond differently. If you use ipython to work interactively, you can check their current status dictionary with nest.GetStatus(neurons). The definitive documentation for NEST nodes is the header file, in this case models/iaf_psc_alpha.h in the NEST source.
We create a single music_event_out_proxy for our output on line 8, and set the port name. We loop over all the neurons on lines 11 and 12 and connect them to the proxy one by one, each one with a different output channel. As we saw earlier, each MUSIC port can have any number of channels. Since the proxy is a device, it ignores any weight or delay settings here.
Lastly, we create a spike detector, set the parameters (which we could
have done directly in the Create
call) and connect the
neurons to the spike detector so we can see what we’re sending. Then we
simulate for one second.
 1  #!/usr/bin/env python
 2  import nest
 3
 4  nest.SetKernelStatus({"overwrite_files": True})
 5
 6  music_in = nest.Create("music_event_in_proxy", 2,
 7                         params = {'port_name': 'p_in'})
 8
 9  for i, n in enumerate(music_in):
10      nest.SetStatus([n], {'music_channel': i})
11
12  nest.SetAcceptableLatency('p_in', 2.0)
13
14  parrots = nest.Create("parrot_neuron", 2)
15
16  sdetector = nest.Create("spike_detector")
17  nest.SetStatus(sdetector, {"withgid": True, "withtime": True, "to_file": True,
18                             "label": "receive", "file_extension": "spikes"})
19
20  nest.Connect(music_in, parrots, 'one_to_one', {"weight": 1.0, "delay": 2.0})
21  nest.Connect(parrots, sdetector)
22
23  nest.Simulate(1000.0)
The receiving process follows the same logic, but is just a little more
involved. We create two music_event_in_proxy
— one
per channel — on lines 6-7 and set the input port name. As we discussed
above, a NEST node can accept many inputs but only emit one stream of
data, so we need one input proxy per channel to be able to distinguish
the channels from each other. On lines 9-10 we set the input channel for
each input proxy.
The SetAcceptableLatency command on line 12 sets the maximum time, in milliseconds, that MUSIC is allowed to delay delivery of spikes transmitted through the named port. This should never be more than the minimum of the delays from the input proxies to their targets; that’s the 2.0 ms we set on line 20 in our case.
On line 14 we create a set of parrot neurons. They simply repeat the input they’re given. On lines 16-18 we create and configure a spike detector to save our inputs. We connect the input proxies one-to-one with the parrot neurons on line 20, then the parrot neurons to the spike detector on line 21. We will discuss the reasons for this in a moment. Finally we simulate for one second.
[from]
binary=./send.py
np=2

[to]
binary=./receive.py
np=2

from.p_out -> to.p_in [2]
The MUSIC configuration file structure is straightforward. We define one process from and one to. For each process we set the name of the binary we wish to run and the number of MPI processes it should use. On line 9 we finally define a connection from output port p_out in process from to input port p_in in process to, with two channels.
If our programs had taken command line options we could have added them
with the args
command:
binary=./send.py
args= --option -o somefile
Run the simulation on the command line like this:
mpirun -np 4 music python.music
You should get a screenful of information scrolling past, and then be
left with four new data files, named something like send-N-0.spikes
,
send-N-1.spikes
, receive-M-0.spikes
and receive-M-1.spikes
. The names
and suffixes are of course the same that we set in send.py
and
receive.py
above. The first numeral is the node ID of the spike detector
that recorded and saved the data, and the final numeral is the rank order of
each process that generated the file.
Collate the data files:
cat send-*spikes | sort -k 2 -n >send.spikes
cat receive-*spikes | sort -k 2 -n >receive.spikes
We concatenate the files and sort the output numerically (-n) by the second column (-k 2). Let's look at the beginning of the two files side by side:
send.spikes receive.spikes
2 26.100 4 28.100
1 27.800 3 29.800
2 54.200 4 56.200
1 57.600 3 59.600
2 82.300 4 84.300
1 87.400 3 89.400
2 110.40 4 112.40
1 117.20 3 119.20
As expected, the received spikes are two milliseconds later than the
sent spikes. The delay parameter for the connection from the input
proxies to the parrot neurons in receive.py
on line 20
accounts for the delay.
Also — and it may be obvious in a simple model like this — the neuron IDs on the sending side and the IDs on the receiving side have no fixed relationship. The sending neurons have ID 1 and 2, while the recipients have 3 and 4. If you need to map events in one simulation to events in another, you have to record this information by other means.
Continuous Inputs¶
MUSIC can send not just spike events, but also continuous inputs and messages. In NEST there are devices to receive, but not send, such inputs. The NEST documentation has a few examples such as this one below:
 1  #!/usr/bin/python
 2
 3  import nest
 4
 5  mcip = nest.Create('music_cont_in_proxy')
 6  nest.SetStatus(mcip, {'port_name': 'contdata'})
 7
 8  time = 0
 9  while time < 1000:
10      nest.Simulate(10)
11      data = nest.GetStatus(mcip, 'data')
12      print data
13      time += 10
The start mirrors our earlier receiving example: you create a continuous input proxy (a single input in this case) and set the port name.
NEST has no general facility to actually apply continuous-valued inputs directly into models. Its neurons deal only with spike events. To use the input you need to create a loop on lines 9-13 where you simulate for a short period, explicitly read the value on line 11, apply it to the simulation model, then simulate for a period again.
People sometimes try to use this pattern to control the rate of a Poisson generator from outside the simulation. You get the rate from outside as a continuous value, then apply it to the Poisson generator that in turn stimulates input neurons in your network.
The problem is that you need to suspend the simulation every cycle, drop out to the Python interpreter, run a bit of code, then call back in to the simulator core and restart the simulation again. This is acceptable if you do it every few hundred or thousand milliseconds or so, but with an input that may change every few milliseconds this becomes very, very slow.
A much better approach is to forgo the use of the NEST Poisson generator. Generate a Poisson sequence of spike events in the outside process, and send the spike events directly into the simulation like we did in our earlier python example. This is far more effective, and the outside process is not limited to the generators implemented in NEST but can create any kind of spiking input. In the next section we will take a look at how to do this.
MUSIC Connections in C++ and Python¶
The C++ interface¶
The C++ interface is the lowest-level interface and what you would use to implement a MUSIC interface in simulators. But it is not a complicated API, so you can easily use it for your own applications that connect to a MUSIC-enabled simulation.
Let's take a look at a pair of programs that send and receive spikes. These can be used as inputs or outputs to the NEST models we created above with no change to the code. C++ code tends to be somewhat long-winded, so we only show the relevant parts here. The C++ interface is divided into a setup phase and a runtime phase. Let's look at the setup phase first:
 1  MPI::Intracomm comm;
 2
 3  int main(int argc, char **argv)
 4  {
 5      MUSIC::Setup* setup = new MUSIC::Setup (argc, argv);
 6      comm = setup->communicator();
 7      double simt;                        // read simulation time from the
 8      setup->config ("simtime", &simt);   // MUSIC configuration file
 9
10      // set output port
11      MUSIC::EventOutputPort *outdata =
12          setup->publishEventOutput("p_out");
13
14      int nProcs = comm.Get_size();       // Number of mpi processes
15      int rank = comm.Get_rank();         // I am this process
16
17      int width = 0;                      // Get number of channels
18      if (outdata->hasWidth()) {          // from the MUSIC configuration
19          width = outdata->width();
20      }
21      // divide output channels evenly among MPI processes
22      int nLocal = width / nProcs;        // Number of channels per process
23      int rest = width % nProcs;
24      int firstId = nLocal * rank;        // index of lowest ID
25      if (rank < rest) {
26          firstId += rank;
27          nLocal += 1;
28      } else
29          firstId += rest;
30
31      MUSIC::LinearIndex outindex(firstId, nLocal);   // Create local index
32      outdata->map(&outindex, MUSIC::Index::GLOBAL);  // apply index to port
33
34      [ ... continued below ... ]
35  }
At lines 5-6 we initialize MUSIC and MPI. The communicator is common to all processes running under MUSIC, and you would use it instead of COMM_WORLD for your MPI processing.
Lines 7 and 8 illustrate something we haven't discussed so far: we can set and read free parameters in the MUSIC configuration file. We use that here to set the simulation time, although this is of limited use with a NEST simulation, as you can't read these configuration parameters from within NEST.
We set up an event output port and name it on lines 11 and 12, then get the number of MPI processes and our process rank for later use. In lines 17-19 we read the number of channels specified for this port in the configuration file. We don't need to set the channels explicitly beforehand like we do in the NEST interface.
We need to tell MUSIC which channels should be processed by which MPI processes. Lines 22-29 are the standard way to create a linear index map from channels to MPI processes. It divides the set of channels into equal-sized chunks, one per MPI process; if the channels don't divide evenly, the lower-numbered ranks each get one extra channel. firstId is the index of the lowest-numbered channel for the current MPI process, and nLocal is the number of channels allocated to it. For example, with width = 5 and two processes, rank 0 gets the three channels 0-2 and rank 1 gets channels 3-4.
On lines 31 and 32 we create the index map and then apply it to the output port we created on line 11. The Index::GLOBAL parameter says that each rank will refer to its channels by their global ID numbers. We could have used Index::LOCAL, and each rank would then refer to its own channels starting with 0. The linear index is the simplest way to map channels, but there is a permutation index type that lets you do arbitrary mappings if you want to.
The map method actually has one more optional argument: maxBuffered. Normally MUSIC decides on its own how much event data to buffer on the receiving side before actually transmitting it, depending on the connection structure, the amount of data generated, and other things. But if you want, you can set it explicitly:
outdata->map(&outindex, MUSIC::Index::GLOBAL, maxBuffered);
With a maxBuffered value of 1, for instance, MUSIC will send emitted spike events every cycle; with a value of 2 it would send data every other cycle. This parameter can be necessary if the receiving side is time-sensitive (perhaps the input controls some kind of physical hardware) and the data needs to arrive as soon as possible.
 1  [ ... continued from above ... ]
 2
 3      // Start runtime phase
 4      MUSIC::Runtime runtime = MUSIC::Runtime(setup, TICK);
 5      double tickt = runtime.time();
 6
 7      while (tickt < simt) {
 8          for (int idx = firstId; idx<(firstId+nLocal); idx++) {
 9              // send poisson spikes to every channel.
10              send_poisson(outdata, RATE*(idx+1), tickt, idx);
11          }
12          runtime.tick();         // Give control to MUSIC
13          tickt = runtime.time();
14      }
15
16      runtime.finalize();         // clean up and end
17  }
18  double frand(double rate) {return -(1./rate)*log(random()/double(RAND_MAX));}
19
20  void send_poisson(MUSIC::EventOutputPort* outport,
21                    double rate, double tickt, int index) {
22      double t = frand(rate);
23      while (t<TICK) {
24          outport -> insertEvent(tickt+t, MUSIC::GlobalIndex(index));
25          t = t + frand(rate);
26      }
27  }
The runtime phase is short. On line 4 we create the MUSIC runtime object and let it consume the setup. In the runtime loop on lines 7-14 we output data, then give control to MUSIC through its tick() function so it can communicate, until the simulation time exceeds the end time.
runtime.time() on lines 5 and 13 gives us the current time according to MUSIC. In lines 8-10 we loop through the channel indexes corresponding to our own rank (which we calculated during setup) and call a function, defined from line 20 onwards, that generates a Poisson spike train with the rate we request.
The actual event insertion happens on line 24, where we give it the time and the global index of the channel we target. The loop on line 8 runs through only the indexes that belong to this rank, but that is only for performance. We could loop through all channels and send events to all of them if we wanted; MUSIC silently ignores any events targeting a channel that does not belong to the current rank.
runtime.tick() gives control to MUSIC. Any inserted events will be sent to their destination, and any new incoming events will be received and available once the method returns. Be aware that this call is blocking and can take an arbitrary amount of time if MUSIC has to wait for another simulation to catch up. If you have other time-critical communications, you will need to put them in a different thread.
Once we reach the end of the simulation we call runtime.finalize(). MUSIC will shut down the communications and clean up after itself before exiting.
 1  MPI::Intracomm comm;
 2  FILE *fout;
 3
 4  struct eventtype {
 5      double t;
 6      int id;
 7  };
 8  std::queue <eventtype> in_q;
 9
10  class InHandler : public MUSIC::EventHandlerGlobalIndex {
11      public:
12      void operator () (double t, MUSIC::GlobalIndex id) {
13          struct eventtype ev = {t, (int)id};
14          in_q.push(ev);
15      }
16  };
17
18  int main(int argc, char **argv)
19  {
20      MUSIC::Setup* setup = new MUSIC::Setup (argc, argv);
21      comm = setup->communicator();
22
23      double simt;
24      setup->config ("simtime", &simt);
25
26      MUSIC::EventInputPort *indata =
27          setup->publishEventInput("p_in");
28
29      InHandler inhandler;
30
31      [ ... get processes, rank and channel width as in send.cpp ... ]
32
33      char *fname;
34      int dummy = asprintf(&fname, "output-%d.spk", rank);
35      fout = fopen(fname, "w");
36
37      [ ... calculate channel allocation as in send.cpp ... ]
38
39      MUSIC::LinearIndex inindex(firstId, nLocal);
40      indata->map(&inindex, &inhandler, IN_LATENCY);
41  }
The setup phase for the receiving application is mostly the same as for the sending one. The main difference is that we receive events through a callback function that we provide. During communication, MUSIC calls that function once for every incoming event, and the function stores those events until MUSIC is done and we can process them.
For storage we define a structure to hold time stamp and ID pairs on lines 4-7, and a queue of such structs on line 8. Lines 10-14 define our callback function. The meat of it is lines 13-14, where we create a new event struct with the time stamp and ID we received and push it onto our queue.
The actual setup code follows the same pattern as before: we create a setup object, get ourselves a communicator, read any config file parameters and create a named input port. We also declare an instance of our callback event handler on line 29. We get our process and rank information and calculate our per-rank channel allocation in the exact same way as before.
The map for an input port, which we create on line 40, needs two parameters that the output port map did not. We give it a reference to the callback function we defined earlier; when events appear on the port, they get passed to that callback. It also takes an optional latency parameter. This is the same latency that we set with the separate SetAcceptableLatency function in the NEST example earlier, and it works the same way. Just remember that the MUSIC unit of time is seconds, not milliseconds.
 1  int main(int argc, char **argv)
 2  {
 3      MUSIC::Runtime runtime = MUSIC::Runtime(setup, TICK);
 4      double tickt = runtime.time();
 5
 6      while (tickt < simt) {
 7          runtime.tick();         // Give control to MUSIC
 8          tickt = runtime.time();
 9          while (!in_q.empty()) {
10              struct eventtype ev = in_q.front();
11              fprintf (fout, "%d\t%.4f\n", ev.id, ev.t);
12              in_q.pop();
13          }
14      }
15      fclose(fout);
16      runtime.finalize();
17  }
The runtime is short. As before, we create a runtime object that consumes the setup, then loop until the MUSIC time exceeds our simulation time. We call runtime.tick() each time through the loop, on line 7, and process received events after the call to tick(). If you had a process with both sending and receiving ports, you would submit the sending data before the tick() call and process the received data after it in the same loop, as sketched at the end of this section.
The in_q input queue we defined earlier holds any new input events. We take the first element on line 10, process it (here we write it out to a file), and finally pop it off the queue. When the queue is empty we're done and go back around the main loop again.
Lastly we call runtime.finalize() as before.
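Schematically, a combined send-and-receive loop would look like this (Python-style pseudocode mirroring the pymusic interface described later; queue_outgoing_events and drain_incoming_events are hypothetical helpers, not part of the MUSIC API):

tickt = runtime.time()
while tickt < simtime:
    queue_outgoing_events(outdata, tickt)  # insert events before the tick
    runtime.tick()                         # MUSIC communicates here
    tickt = runtime.time()
    drain_incoming_events(in_q)            # handle events received during the tick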
Building the Code¶
We have to build our C++ code. The example code is already set up for the GNU Autotools, just to show how to do this for a MUSIC project. There are only two build-related files we need to care about (all the rest are autogenerated): configure.ac and Makefile.am.
 1  AC_INIT(simple, 1.0)
 2  AC_PREREQ([2.59])
 3  AM_INIT_AUTOMAKE([1.11 -Wall subdir-objects no-define foreign])
 4  AC_LANG([C++])
 5  AC_CONFIG_HEADERS([config.h])
 6  dnl # set OpenMPI compiler wrapper
 7  AC_PROG_CXX(mpicxx)
 8  AC_CHECK_LIB([music], [_init])
 9  AC_CHECK_HEADER([music.hh])
10  AC_CONFIG_FILES([Makefile])
11  AC_OUTPUT
The first three lines set the project name and version, the minimum version of the Autotools we require, and a list of options for Automake. Line 4 sets the current language, and line 5 says that we want a config.h file.
Line 7 tells autoconf to use the mpicxx MPI wrapper as the C++ compiler. Lines 8-9 tell it to test for the existence of the music library and to look for the music.hh include file.
bin_PROGRAMS = send recv
send_SOURCES = send.cpp
recv_SOURCES = recv.cpp
Makefile.am has only three lines: bin_PROGRAMS lists the binaries we want to build, while send_SOURCES and recv_SOURCES list the source files each one needs.
Your project should already be set up, but if you start from nothing, you need to generate the rest of the build files; you'll need the Autotools installed for that. The easiest way to generate all build files is to use autoreconf:
autoreconf --install --force
Then you can build with the usual sequence of commands:
./configure
make
Try the Code¶
We can run these programs just like we did with the NEST example, using a MUSIC configuration file:
simtime=1.0
[from]
binary=./send
np=2
[to]
binary=./recv
np=2

from.p_out -> to.p_in [2]
The structure is just the same as before. We have added a simtime parameter for the two applications to read, and the binaries are our two new programs. We run this the same way:
mpirun -np 4 music simple.music
You can change the simulation time by changing the simtime parameter at the top of the file. Also, these apps are written to deal with any number of channels, so you can change [2] to anything you like. If you have more channels than MPI processes for the recv app, you will get more than one channel recorded per output file, just as the channel allocation code specifies. If you have more MPI processes than input channels, some output files will be empty.
You can connect these with the NEST models that we wrote earlier. Copy them into the same directory. Then, in the cpp.music config file, change the binary parameter in [from] from binary=./send to binary=./send.py. You get two sets of output files. Concatenate them as before, and compare:
send.py             recv
2    26.100         1    0.0261
1    27.800         0    0.0278
2    54.200         1    0.0542
1    57.600         0    0.0576
2    82.300         1    0.0823
1    87.400         0    0.0874
2    110.40         1    0.1104
Indeed, we get the expected result. The IDs from the Python process on the left are the originating neurons; the IDs on the right are the MUSIC channels on the receiving side. And of course NEST deals in milliseconds while MUSIC uses seconds.
This section has covered most things you need in order to use MUSIC for straightforward user-level input and output applications. But there is a lot more to the MUSIC API, especially if you intend to implement it as a simulator interface, so you should consult the documentation for more details.
The pymusic interface¶
MUSIC has recently acquired a plain Python interface to go along with the C++ API. If you just want to connect with a simulation rather than add MUSIC capability to a simulator, this Python interface can be a lot more convenient than C++: you have Numpy, Scipy and other high-level libraries available, and you don't need to compile anything.
The interface is closely modelled on the C++ API; indeed, the steps to use it are almost exactly the same, so you can mostly refer to the C++ description for explanation. Below we only highlight the differences from the C++ API. The full example code is in the pymusic directory in the MUSIC repository.
#!/usr/bin/python

import music

[ ... ]

outdata.map(music.Index.GLOBAL,
            base=firstId,
            size=nlocal)

[ ... ]

runtime = setup.runtime(TICK)
tickt = runtime.time()
while tickt < simtime:
    for i in range(width):
        send_poisson(outdata, RATE, tickt, i)

    runtime.tick()
    tickt = runtime.time()
The sending code is almost completely identical to its C++ counterpart. Make sure python is used as the interpreter for the code (and make sure the file is executable), and import music in the expected way.
Unlike the C++ API, the index is not an object but simply a label indicating global or local indexing. The map() call thus needs to be given the first ID and the number of elements mapped to this rank directly. Also note that the map() functions have a somewhat unexpected parameter order, so it's best to use named parameters just in case.
The runtime looks the same as the C++ counterpart as well. We get the current simulation time, and repeatedly send new sets of events as long as the current time is smaller than the simulation time.
import Queue

in_q = Queue.Queue()

# Our input handler function
def inhandler(t, indextype, channel_id):
    in_q.put([t, channel_id])

[ ... ]

indata.map(inhandler,
           music.Index.GLOBAL,
           base=firstId,
           size=nlocal,
           accLatency=IN_LATENCY)

tickt = runtime.time()
while tickt < simtime:
    runtime.tick()
    tickt = runtime.time()

    while not in_q.empty():
        ev = in_q.get()
        f.write("{0}\t{1:8.4f}\n".format(ev[1], ev[0]))
Here is the structure for the receiving process, modelled on the C++ code. We use a Python Queue to implement our event queue.
The input handler function has the signature (float time, int indextype, int channel_id). The time and channel_id arguments are the event times and IDs as before; indextype is the type of the map index for this input, either music.Index.LOCAL or music.Index.GLOBAL.
The map() keyword for the acceptable latency is accLatency, and the maxBuffered keyword we mentioned in the previous section is, unsurprisingly, maxBuffered. The runtime is, again, the same as for C++.
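For instance, reusing the ports from the listings above, both keywords would be given like this (a sketch; remember that MUSIC times are in seconds):

indata.map(inhandler,
           music.Index.GLOBAL,
           base=firstId,
           size=nlocal,
           accLatency=2.0e-3)    # accept events up to 2 ms late

outdata.map(music.Index.GLOBAL,
            base=firstId,
            size=nlocal,
            maxBuffered=1)       # transmit inserted events every tick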
As the pymusic bindings are still quite new, the documentation is still lagging behind. This quick introduction should nevertheless be enough for you to get going with the bindings. And should you need further help, the authors are only an email away.
Practical Tips¶
Start MUSIC using mpirun¶
There is an alternative way to start a MUSIC simulation without the music binary. The logic for parsing the configuration file is built into the library itself, so we can start each binary explicitly using mpirun, giving the config file name and the corresponding app label as command line options:

mpirun -np 2 <binary> --music-config <config file> --app-label <label> : ...
So to start a simulation with the sendsimple.py and recv programs, we can do:
mpirun -np 2 ./sendsimple.py --music-config simplepy.music --app-label from : \
       -np 2 ./recv --music-config simplepy.music --app-label to

This looks long and cumbersome, of course, but it can be useful. Since the command line is parsed by the shell, you are not limited to what the music launcher can parse: the binary can be anything the shell can handle, including an explicit interpreter invocation or a shell script.

As a note, the config file no longer needs to contain the right binary names, but it does need a non-empty binary=<something> line for each process; the parser expects it and will complain (or crash) otherwise. Also, if you try to process command line options in your PyNEST script, it is very likely you will confuse MUSIC.
Disable Messages¶
NEST can be quite chatty as it connects things, especially with large networks. If we don’t want all that output, we can tell it to display only error messages:
nest.sli_run("M_ERROR setverbosity")

There is unfortunately no straightforward way to suppress the initial welcome message, and those messages add up quickly in the output of a simulation when you use more than a few hundred cores.
Comma as decimal point¶
Sorting output spikes may fail if you, like the authors, come from a country that uses a comma as the decimal separator and run your computer in your native language. The problem is that sort respects the language settings and expects the decimal separator to be a comma. When it sees the decimal point in the input, it assumes the numeral has ended and sorts only on the integer part.
The way to fix this is to set LC_ALL=C before running the sort command. In a script or in the terminal you can do:

export LC_ALL=C
cat output-* | sort -k 2 -n > output.spikes

Or, if you want to do this temporarily for only one command:

cat output-* | LC_ALL=C sort -k 2 -n > output.spikes
Build an Autotool-enabled project¶
To build an Autotool-enabled C/C++ project, you don’t actually need to be in the main directory. You can create a subdirectory and build everything from there. For instance, with the simple C++ MUSIC project in section C++ build, we can do this:
mkdir build
cd build
../configure
make
Why do that? Because all the files you generate when building the project end up under the build subdirectory, keeping the source directories completely clean and untouched. You can have multiple builds (debug, noMPI, and so on) with different build options enabled, and you can completely clean out a build simply by deleting the directory.

This is surely completely obvious to many of you, but this author is almost ashamed to admit just how many years it took before I realized you could do this. I sometimes actually kept two copies of projects checked out just so I could build a separate debug version.
Model Directory¶
Here you can find the list of all the models implemented in NEST for neurons, synapses, devices and topological networks.
We also have a list of user-contributed modules on Github for you to check out.
Neuron model categories
Synapse model categories
Device categories
Network models
NEST Example Networks¶
One neuron example¶
This script simulates a neuron driven by a constant external current and records its membrane potential.
First, we import all necessary modules for simulation, analysis and plotting. Additionally, we set the verbosity to suppress info messages and reset the kernel. Resetting the kernel allows you to execute the script several times in a Python shell without interferences from previous NEST simulations. Thus, without resetting the kernel the network status including connections between nodes, status of neurons, devices and intrinsic time clocks, is kept and influences the next simulations.
import nest
import nest.voltage_trace
nest.set_verbosity("M_WARNING")
nest.ResetKernel()
Second, the nodes (neurons and devices) are created using Create. We store the returned handles in variables for later reference. The Create function also allows you to create multiple nodes, e.g. nest.Create('iaf_psc_alpha', 5). The default parameters of the model can also be configured using Create by including a list of parameter dictionaries, e.g. nest.Create("iaf_psc_alpha", params=[{'I_e': 376.0}]) or nest.Create("voltmeter", [{"withgid": True, "withtime": True}]). In this example we will configure these parameters in an additional step, which is explained in the third section.
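For instance, the variants above can be combined freely; a short sketch (the values are illustrative only):

# five neurons with default parameters
neurons = nest.Create("iaf_psc_alpha", 5)
# one neuron with a non-default constant input current
driven = nest.Create("iaf_psc_alpha", params={"I_e": 376.0})
# a voltmeter that records global ids and sampling times
vm = nest.Create("voltmeter", params={"withgid": True, "withtime": True})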
neuron = nest.Create("iaf_psc_alpha")
voltmeter = nest.Create("voltmeter")
Third, the neuron and the voltmeter are configured using SetStatus, which expects a list of node handles and a list of parameter dictionaries. In this example we use SetStatus to configure the constant current input to the neuron. We also want to record the global id of the observed nodes, so we set the withgid flag of the voltmeter to True.
nest.SetStatus(neuron, "I_e", 376.0)
nest.SetStatus(voltmeter, [{"withgid": True}])
Fourth, the neuron is connected to the voltmeter. The command Connect has different variants. Plain Connect just takes the handles of pre- and post-synaptic nodes and uses the default values for weight and delay. Note that the connection direction for the voltmeter is reversed compared to the spike detector, because it observes the neuron instead of receiving events from it. Thus, Connect reflects the direction of signal flow in the simulation kernel rather than the physical process of inserting an electrode into the neuron. The latter semantics is presently not available in NEST.
nest.Connect(voltmeter, neuron)
Now we simulate the network using Simulate, which takes the desired simulation time in milliseconds.
nest.Simulate(1000.0)
Finally, we plot the neuron’s membrane potential as a function of time.
nest.voltage_trace.from_device(voltmeter)
One neuron with noise¶
This script simulates a neuron with input from the poisson_generator and records the neuron's membrane potential.
First, we import all necessary modules needed to simulate, analyze and plot our example. Additionally, we set the verbosity to only show warnings and reset the kernel. Resetting the kernel removes any nodes we may have created previously and resets the internal clock to zero. This allows us to execute the script several times in a Python shell without interference from previous NEST simulations.
import nest
import nest.voltage_trace
nest.set_verbosity("M_WARNING")
nest.ResetKernel()
Second, the nodes (the neuron, the two Poisson generators, and the voltmeter) are created using the Create function. We store the returned handles in variables for later reference.
We store the returned handles in variables for later reference.
neuron = nest.Create("iaf_psc_alpha")
noise = nest.Create("poisson_generator", 2)
voltmeter = nest.Create("voltmeter")
Third, the voltmeter and the Poisson generators are configured using SetStatus, which expects a list of node handles and a list of parameter dictionaries. Note that we do not need to set parameters for the neuron, since it has satisfactory defaults. We set the Poisson generators to 80000 Hz and 15000 Hz, respectively. For the voltmeter, we want to record the global id of the observed nodes, so we set its withgid flag to True. We also set its property withtime so it will also record the points in time at which it samples the membrane voltage.
nest.SetStatus(noise, [{"rate": 80000.0}, {"rate": 15000.0}])
nest.SetStatus(voltmeter, {"withgid": True, "withtime": True})
Fourth, the neuron is connected to the poisson_generator and to the voltmeter. We also specify the synaptic weight and delay in this step.
nest.Connect(noise, neuron, syn_spec={'weight': [[1.2, -1.0]], 'delay': 1.0})
nest.Connect(voltmeter, neuron)
Now we simulate the network using Simulate, which takes the desired simulation time in milliseconds.
nest.Simulate(1000.0)
Finally, we plot the neuron’s membrane potential as a function of time.
nest.voltage_trace.from_device(voltmeter)
Two neuron example¶
import pylab
import nest
import nest.voltage_trace
weight = 20.0
delay = 1.0
stim = 1000.0
neuron1 = nest.Create("iaf_psc_alpha")
neuron2 = nest.Create("iaf_psc_alpha")
voltmeter = nest.Create("voltmeter")
nest.SetStatus(neuron1, {"I_e": stim})
nest.Connect(neuron1, neuron2, syn_spec={'weight': weight, 'delay': delay})
nest.Connect(voltmeter, neuron2)
nest.Simulate(100.0)
nest.voltage_trace.from_device(voltmeter)
nest.voltage_trace.show()
Balanced neuron example¶
This script simulates a neuron driven by an excitatory and an inhibitory population of neurons firing Poisson spike trains. The aim is to find a firing rate for the inhibitory population that will make the neuron fire at the same rate as the excitatory population.
Optimization is performed using the bisection method from Scipy, simulating the network repeatedly.
This example is also shown in the article [1].
References¶
[1] Eppler JM, Helias M, Muller E, Diesmann M, Gewaltig MO (2009). PyNEST: A convenient interface to the NEST simulator. Front. Neuroinform. http://dx.doi.org/10.3389/neuro.11.012.2008
First, we import all necessary modules for simulation, analysis and plotting. Scipy should be imported before nest.
from scipy.optimize import bisect
import nest
import nest.voltage_trace
Additionally, we set the verbosity using set_verbosity to suppress info messages.
nest.set_verbosity("M_WARNING")
nest.ResetKernel()
Second, the simulation parameters are assigned to variables.
t_sim = 25000.0 # how long we simulate
n_ex = 16000 # size of the excitatory population
n_in = 4000 # size of the inhibitory population
r_ex = 5.0 # mean rate of the excitatory population
r_in = 20.5 # initial rate of the inhibitory population
epsc = 45.0 # peak amplitude of excitatory synaptic currents
ipsc = -45.0 # peak amplitude of inhibitory synaptic currents
d = 1.0 # synaptic delay
lower = 15.0 # lower bound of the search interval
upper = 25.0 # upper bound of the search interval
prec = 0.01 # how close the excitatory rates need to be
Third, the nodes are created using Create. We store the returned handles in variables for later reference.
neuron = nest.Create("iaf_psc_alpha")
noise = nest.Create("poisson_generator", 2)
voltmeter = nest.Create("voltmeter")
spikedetector = nest.Create("spike_detector")
Fourth, the excitatory poisson_generator (noise[0]) and the voltmeter are configured using SetStatus, which expects a list of node handles and a list of parameter dictionaries. The rate of the inhibitory Poisson generator is set later. Note that we need not set parameters for the neuron and the spike detector, since they have satisfactory defaults.
nest.SetStatus(noise, [{"rate": n_ex * r_ex}, {"rate": n_in * r_in}])
nest.SetStatus(voltmeter, {"withgid": True, "withtime": True})
Fifth, the iaf_psc_alpha neuron is connected to the spike_detector and the voltmeter, as are the two Poisson generators to the neuron. The command Connect has different variants. Plain Connect just takes the handles of pre- and post-synaptic nodes and uses the default values for weight and delay. It can also be called with a list of weights, as in the connection of the noise below. Note that the connection direction for the voltmeter is reversed compared to the spike_detector, because it observes the neuron instead of receiving events from it. Thus, Connect reflects the direction of signal flow in the simulation kernel rather than the physical process of inserting an electrode into the neuron. The latter semantics is presently not available in NEST.
nest.Connect(neuron, spikedetector)
nest.Connect(voltmeter, neuron)
nest.Connect(noise, neuron, syn_spec={'weight': [[epsc, ipsc]], 'delay': 1.0})
To determine the optimal rate of the neurons in the inhibitory population, the network is simulated several times for different values of the inhibitory rate while measuring the rate of the target neuron. This is done by calling Simulate until the rate of the target neuron matches the rate of the neurons in the excitatory population with a certain accuracy. The algorithm is implemented in two steps:
First, the function output_rate is defined to measure the firing rate of the target neuron for a given rate of the inhibitory neurons.
def output_rate(guess):
print("Inhibitory rate estimate: %5.2f Hz" % guess)
rate = float(abs(n_in * guess))
nest.SetStatus([noise[1]], "rate", rate)
nest.SetStatus(spikedetector, "n_events", 0)
nest.Simulate(t_sim)
out = nest.GetStatus(spikedetector, "n_events")[0] * 1000.0 / t_sim
print(" -> Neuron rate: %6.2f Hz (goal: %4.2f Hz)" % (out, r_ex))
return out
The function takes the firing rate of the inhibitory neurons as an argument. It scales the rate with the size of the inhibitory population and configures the inhibitory Poisson generator (noise[1]) accordingly. Then, the spike counter of the spike_detector is reset to zero and the network is simulated using Simulate, which takes the desired simulation time in milliseconds and advances the network state by this amount of time. During simulation, the spike_detector counts the spikes of the target neuron, and the total number is read out at the end of the simulation period. The return value of output_rate() is the firing rate of the target neuron in Hz.
Second, the scipy function bisect is used to determine the optimal firing rate of the neurons of the inhibitory population.
in_rate = bisect(lambda x: output_rate(x) - r_ex, lower, upper, xtol=prec)
print("Optimal rate for the inhibitory population: %.2f Hz" % in_rate)
The function bisect takes four arguments: first, a function whose zero crossing is to be determined. Here, the firing rate of the target neuron should equal the firing rate of the neurons of the excitatory population, so we define an anonymous function (using lambda) that returns the difference between the actual rate of the target neuron and the rate of the excitatory Poisson generator, given a rate for the inhibitory neurons. The next two arguments are the lower and upper bound of the interval in which to search for the zero crossing. The fourth argument of bisect is the desired precision (xtol) of the zero crossing.
Finally, we plot the target neuron’s membrane potential as a function of time.
nest.voltage_trace.from_device(voltmeter)
IAF Neuron example¶
A DC current is injected into the neuron using a current generator device. The membrane potential as well as the spiking activity are recorded by corresponding devices.
It can be observed how the current charges the membrane, a spike is emitted, the neuron becomes absolutely refractory, and finally starts to recover.
First, we import all necessary modules for simulation and plotting
import nest
import pylab
Second, the function build_network is defined to build the network and return the handles of the spike_detector and the voltmeter.
def build_network(dt):
nest.ResetKernel()
nest.SetKernelStatus({"local_num_threads": 1, "resolution": dt})
neuron = nest.Create('iaf_psc_alpha')
nest.SetStatus(neuron, "I_e", 376.0)
vm = nest.Create('voltmeter')
nest.SetStatus(vm, "withtime", True)
sd = nest.Create('spike_detector')
nest.Connect(vm, neuron)
nest.Connect(neuron, sd)
return vm, sd
The function build_network takes the resolution as its argument. First the kernel is reset and the number of threads is set to one, as well as the resolution to the specified value dt. The iaf_psc_alpha neuron is created and its handle is stored in the variable neuron. The status of the neuron is changed so it receives an external current. Next the voltmeter is created and its handle stored in vm, with the option withtime set so that the sampling times are given in the times vector of the events dictionary. Now the spike_detector is created and its handle is stored in sd.
The voltmeter and spike detector are then connected to the neuron. The Connect function takes the handles as input. The voltmeter is connected to the neuron, and the neuron to the spike detector, because the neuron sends spikes to the detector and the voltmeter 'observes' the neuron.
The neuron is simulated for three different resolutions and then the voltage trace is plotted
First, using build_network, the network is built and the handles of the spike_detector and the voltmeter are stored in vm and sd:
for dt in [0.1, 0.5, 1.0]:
print("Running simulation with dt=%.2f" % dt)
vm, sd = build_network(dt)
nest.Simulate(1000.0)
The network is simulated using Simulate, which takes the desired simulation time in milliseconds and advances the network state by this amount of time. During simulation, the spike_detector counts the spikes of the target neuron and the total number is read out at the end of the simulation period.
The values of the voltage recorded by the voltmeter are read out, with the membrane potentials stored in the potentials array and the corresponding times in the times array:
potentials = nest.GetStatus(vm, "events")[0]["V_m"]
times = nest.GetStatus(vm, "events")[0]["times"]
Using the pylab library the voltage trace is plotted over time
pylab.plot(times, potentials, label="dt=%.2f" % dt)
print(" Number of spikes: {0}".format(nest.GetStatus(sd, "n_events")[0]))
Finally the axes are labelled and a legend is generated:
pylab.legend(loc=3)
pylab.xlabel("time (ms)")
pylab.ylabel("V_m (mV)")
Repeated Stimulation¶
Simple example of how to repeat a stimulation protocol using the origin property of devices.
In this example, a poisson_generator generates a spike train that is recorded directly by a spike_detector, using the following paradigm:
A single trial lasts for 1000 ms.
Within each trial, the poisson_generator is active from 100 ms to 500 ms.
We achieve this by setting the start and stop properties of the generator to 100 ms and 500 ms, respectively, and setting the origin to the simulation time at the beginning of each trial. Start and stop are interpreted relative to the origin, as the sketch below illustrates.
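For example, the arithmetic for the third trial works out as follows (a sketch; the values match the parameters defined below):

origin = 2000.0               # simulation time when trial 3 starts (ms)
start, stop = 100.0, 500.0    # interpreted relative to origin
active_from = origin + start  # the generator turns on at 2100.0 ms
active_to = origin + stop     # and off again at 2500.0 ms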
First, the modules needed for simulation and analysis are imported.
import nest
import nest.raster_plot
Second, we set the parameters so the poisson_generator generates 1000 spikes per second and is active from 100 to 500 ms:
rate = 1000.0 # generator rate in spikes/s
start = 100.0 # start of stimulation, relative to trial start, in ms
stop = 500.0 # end of stimulation, relative to trial start, in ms
The simulation is supposed to take 1s (1000 ms) and is repeated 5 times
trial_duration = 1000.0 # trial duration, in ms
num_trials = 5 # number of trials to perform
Third, the network is set up. We reset the kernel and create a poisson_generator, whose handle is stored in pg. The parameters for rate and for the start and stop of activity are given as optional parameters in the form of a dictionary.
nest.ResetKernel()
pg = nest.Create('poisson_generator',
params={'rate': rate,
'start': start,
'stop': stop}
)
The spike_detector is created and the handle stored in sd.
sd = nest.Create('spike_detector')
The Connect function connects the nodes so spikes from pg are collected by the spike_detector sd:
nest.Connect(pg, sd)
Before each trial, we set the origin of the poisson_generator to the current simulation time. This automatically sets the start and stop times of the poisson_generator to the specified times with respect to the origin. The simulation is then carried out for the specified time in trial_duration.
for n in range(num_trials):
nest.SetStatus(pg, {'origin': nest.GetKernelStatus()['time']})
nest.Simulate(trial_duration)
Now we plot the result, including a histogram, using the nest.raster_plot function. Note: the histogram will show spikes seemingly located before 100 ms into each trial. This is due to sub-optimal automatic placement of the histogram bin borders.
nest.raster_plot.from_device(sd, hist=True, hist_binwidth=100.,
title='Repeated stimulation by Poisson generator')
Example of multimeter recording to file¶
This file demonstrates recording from an iaf_cond_alpha neuron using a multimeter and writing data to file. First, we import the necessary modules to simulate and plot this example. The simulation kernel is put back to its initial state using ResetKernel.
import nest
import numpy
import pylab
nest.ResetKernel()
With SetKernelStatus, global properties of the simulation kernel can be specified. The following properties are related to writing to file:
- overwrite_files is set to True to permit overwriting of an existing file.
- data_path is the path to which all data is written. It is given relative to the current working directory.
- data_prefix allows you to specify a common prefix for all data files.
nest.SetKernelStatus({"overwrite_files": True,
"data_path": "",
"data_prefix": ""})
For illustration, the recordables of the iaf_cond_alpha neuron model are displayed. This model is an implementation of a spiking neuron using integrate-and-fire dynamics with conductance-based synapses. Incoming spike events induce a post-synaptic change of conductance modelled by an alpha function.
print("iaf_cond_alpha recordables: {0}".format(
nest.GetDefaults("iaf_cond_alpha")["recordables"]))
A neuron, a multimeter as recording device and two spike generators for excitatory and inhibitory stimulation are instantiated. The command Create expects a model type and, optionally, the desired number of nodes and a dictionary of parameters to overwrite the default values of the model.
- For the neuron, the rise time of the excitatory synaptic alpha function tau_syn_ex (in ms) and the reset potential of the membrane V_reset (in mV) are specified.
- For the multimeter, the time interval for recording (interval, in ms) and a selection of measures to record (the membrane voltage V_m in mV and the excitatory and inhibitory synaptic conductances g_ex and g_in in nS) are set.
In addition, more parameters can be modified for writing to file:
- withgid is set to True to record the global id of the observed node(s) (default: False).
- to_file indicates whether to write the recordings to file and is set to True.
- label specifies an arbitrary label for the device. It is used instead of the name of the model in the output file name.
For the spike generators, the spike times in ms (spike_times) are given explicitly.
n = nest.Create("iaf_cond_alpha",
params={"tau_syn_ex": 1.0, "V_reset": -70.0})
m = nest.Create("multimeter",
params={"interval": 0.1,
"record_from": ["V_m", "g_ex", "g_in"],
"withgid": True,
"to_file": True,
"label": "my_multimeter"})
s_ex = nest.Create("spike_generator",
params={"spike_times": numpy.array([10.0, 20.0, 50.0])})
s_in = nest.Create("spike_generator",
params={"spike_times": numpy.array([15.0, 25.0, 55.0])})
Next, we connect the spike generators to the neuron with Connect. Synapse specifications can be provided in a dictionary. In this example of a conductance-based neuron, the synaptic weight weight is given in nS. Note that the values are positive for excitatory stimulation and negative for inhibitory connections.
nest.Connect(s_ex, n, syn_spec={"weight": 40.0})
nest.Connect(s_in, n, syn_spec={"weight": -20.0})
nest.Connect(m, n)
A network simulation with a duration of 100 ms is started with Simulate.
nest.Simulate(100.)
After the simulation, the recordings are obtained from the multimeter via the key events of the status dictionary accessed by GetStatus. times indicates the recording times stored for each data point; they are recorded if the parameter withtime of the multimeter is set to True, which is the default case.
events = nest.GetStatus(m)[0]["events"]
t = events["times"]
Finally, the time courses of the membrane voltage and the synaptic conductance are displayed.
pylab.clf()
pylab.subplot(211)
pylab.plot(t, events["V_m"])
pylab.axis([0, 100, -75, -53])
pylab.ylabel("membrane potential (mV)")
pylab.subplot(212)
pylab.plot(t, events["g_ex"], t, events["g_in"])
pylab.axis([0, 100, 0, 45])
pylab.xlabel("time (ms)")
pylab.ylabel("synaptic conductance (nS)")
pylab.legend(("g_exc", "g_inh"))
Sensitivity to perturbation¶
This script simulates a network in two successive trials, which are identical except for one extra input spike in the second realisation (a small perturbation). The network consists of recurrent, randomly connected excitatory and inhibitory neurons. Its activity is driven by an external Poisson input provided to all neurons independently. In order to ensure that the network is reset appropriately between the trials, we do the following steps:
resetting the network
resetting the random network generator
resetting the internal clock
deleting all entries in the spike detector
introducing a hyperpolarisation phase between the trials (in order to avoid that spikes remaining in the NEST memory after the first simulation are fed into the second simulation)
Importing all necessary modules for simulation, analysis and plotting.
import numpy
import pylab
import nest
Here we define all parameters necessary for building and simulating the network.
# We start with the global network parameters.
NE = 1000 # number of excitatory neurons
NI = 250 # number of inhibitory neurons
N = NE + NI # total number of neurons
KE = 100 # excitatory in-degree
KI = 25 # inhibitory in-degree
Parameters specific to the neurons in the network. The default values of the resting potential E_L and the spiking threshold V_th are used to set the limits of the initial potential of the neurons.
neuron_model = 'iaf_psc_delta'
neuron_params = nest.GetDefaults(neuron_model)
Vmin = neuron_params['E_L'] # minimum of initial potential distribution (mV)
Vmax = neuron_params['V_th'] # maximum of initial potential distribution (mV)
Synapse parameters. Changing the weights J in the network can lead to qualitatively different behaviors. If J is small (e.g. J = 0.1), we are likely to observe non-chaotic network behavior (after perturbation the network returns to its original activity). Increasing J (e.g. J = 5.5) leads to rather chaotic activity. Given that in this example the transition to chaos is probabilistic, we sometimes observe chaotic behavior for small weights (e.g. J = 0.5) and non-chaotic behavior for strong weights (e.g. J = 5.4).
J = 0.5 # excitatory synaptic weight (mV)
g = 6. # relative inhibitory weight
delay = 0.1 # spike transmission delay (ms)
# External input parameters.
Jext = 0.2 # PSP amplitude for external Poisson input (mV)
rate_ext = 6500. # rate of the external Poisson input
# Perturbation parameters.
t_stim = 400. # perturbation time (time of the extra spike)
Jstim = Jext # perturbation amplitude (mV)
# Simulation parameters.
T = 1000. # simulation time per trial (ms)
fade_out = 2.*delay # fade out time (ms)
dt = 0.01 # simulation time resolution (ms)
seed_NEST = 30 # seed of random number generator in Nest
seed_numpy = 30 # seed of random number generator in numpy
Before we build the network, we reset the simulation kernel to ensure that previous NEST simulations in the python shell will not disturb this simulation and set the simulation resolution (later defined synaptic delays cannot be smaller than the simulation resolution).
nest.ResetKernel()
nest.SetStatus([0], [{"resolution": dt}])
Now we start building the network and create excitatory and inhibitory nodes and connect them. According to the connectivity specification, each neuron is assigned random KE synapses from the excitatory population and random KI synapses from the inhibitory population.
nodes_ex = nest.Create(neuron_model, NE)
nodes_in = nest.Create(neuron_model, NI)
allnodes = nodes_ex+nodes_in
nest.Connect(nodes_ex, allnodes,
conn_spec={'rule': 'fixed_indegree', 'indegree': KE},
syn_spec={'weight': J, 'delay': dt})
nest.Connect(nodes_in, allnodes,
conn_spec={'rule': 'fixed_indegree', 'indegree': KI},
syn_spec={'weight': -g*J, 'delay': dt})
Afterwards we create a poisson_generator that provides spikes (the external input) to the neurons until time T is reached. Then a dc_generator, which is also connected to the whole population, provides a strong hyperpolarisation step for the short time period fade_out. The fade_out period has to last at least twice as long as the simulation resolution to suppress the neurons from firing.
ext = nest.Create("poisson_generator",
params={'rate': rate_ext, 'stop': T})
nest.Connect(ext, allnodes,
syn_spec={'weight': Jext, 'delay': dt})
suppr = nest.Create("dc_generator",
params={'amplitude': -1e16, 'start': T,
'stop': T+fade_out})
nest.Connect(suppr, allnodes)
spikedetector = nest.Create("spike_detector")
nest.Connect(allnodes, spikedetector)
We then create the spike_generator, which provides the extra spike (perturbation).
stimulus = nest.Create("spike_generator")
nest.SetStatus(stimulus, {'spike_times': []})
Finally, we run the two simulations successively. After each simulation the sender ids and spiketimes are stored in a list (senders, spiketimes).
senders = []
spiketimes = []
We need to reset the network, the random number generator, and the clock of the simulation kernel. In addition, we ensure that there is no spike left in the spike detector.
In the second trial, we add an extra input spike at time t_stim to the neuron that fires first after perturbation time t_stim. Thus, we make sure that the perturbation is transmitted to the network before it fades away in the perturbed neuron. (Single IAF-neurons are not chaotic.)
for trial in [0, 1]:
nest.ResetNetwork()
nest.SetStatus([0], [{"rng_seeds": [seed_NEST]}])
nest.SetStatus([0], {'time': 0.0})
nest.SetStatus(spikedetector, {'n_events': 0})
# We assign random initial membrane potentials to all neurons
numpy.random.seed(seed_numpy)
Vms = Vmin + (Vmax - Vmin) * numpy.random.rand(N)
nest.SetStatus(allnodes, "V_m", Vms)
if trial == 1:
id_stim = [senders[0][spiketimes[0] > t_stim][0]]
nest.Connect(stimulus, list(id_stim),
syn_spec={'weight': Jstim, 'delay': dt})
nest.SetStatus(stimulus, {'spike_times': [t_stim]})
# Now we simulate the network and add a fade out period to discard
# remaining spikes.
nest.Simulate(T)
nest.Simulate(fade_out)
# Storing the data.
senders += [nest.GetStatus(spikedetector, 'events')[0]['senders']]
spiketimes += [nest.GetStatus(spikedetector, 'events')[0]['times']]
We plot the spiking activity of the network (first trial in red, second trial in black).
pylab.figure(1)
pylab.clf()
pylab.plot(spiketimes[0], senders[0], 'ro', ms=4.)
pylab.plot(spiketimes[1], senders[1], 'ko', ms=2.)
pylab.xlabel('time (ms)')
pylab.ylabel('neuron id')
pylab.xlim((0, T))
pylab.ylim((0, N))
Plot weight matrices example¶
This example demonstrates how to extract the connection strength for all the synapses among two populations of neurons and gather these values in weight matrices for further analysis and visualization.
All connection types between these populations are considered, i.e., four weight matrices are created and plotted.
First, we import all necessary modules to extract, handle and plot the connectivity matrices
import numpy as np
import pylab
import nest
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1 import make_axes_locatable
We now specify a function to extract and plot weight matrices for all connections among E_neurons and I_neurons.
We initialize all the matrices, whose dimensionality is determined by the number of elements in each population. Since in this example, we have 2 populations (E/I), \(2^2\) possible synaptic connections exist (EE, EI, IE, II).
def plot_weight_matrices(E_neurons, I_neurons):
W_EE = np.zeros([len(E_neurons), len(E_neurons)])
W_EI = np.zeros([len(I_neurons), len(E_neurons)])
W_IE = np.zeros([len(E_neurons), len(I_neurons)])
W_II = np.zeros([len(I_neurons), len(I_neurons)])
a_EE = nest.GetConnections(E_neurons, E_neurons)
c_EE = nest.GetStatus(a_EE, keys='weight')
a_EI = nest.GetConnections(I_neurons, E_neurons)
c_EI = nest.GetStatus(a_EI, keys='weight')
a_IE = nest.GetConnections(E_neurons, I_neurons)
c_IE = nest.GetStatus(a_IE, keys='weight')
a_II = nest.GetConnections(I_neurons, I_neurons)
c_II = nest.GetStatus(a_II, keys='weight')
for idx, n in enumerate(a_EE):
W_EE[n[0] - min(E_neurons), n[1] - min(E_neurons)] += c_EE[idx]
for idx, n in enumerate(a_EI):
W_EI[n[0] - min(I_neurons), n[1] - min(E_neurons)] += c_EI[idx]
for idx, n in enumerate(a_IE):
W_IE[n[0] - min(E_neurons), n[1] - min(I_neurons)] += c_IE[idx]
for idx, n in enumerate(a_II):
W_II[n[0] - min(I_neurons), n[1] - min(I_neurons)] += c_II[idx]
fig = pylab.figure()
fig.suptitle('Weight matrices', fontsize=14)
gs = gridspec.GridSpec(4, 4)
ax1 = pylab.subplot(gs[:-1, :-1])
ax2 = pylab.subplot(gs[:-1, -1])
ax3 = pylab.subplot(gs[-1, :-1])
ax4 = pylab.subplot(gs[-1, -1])
plt1 = ax1.imshow(W_EE, cmap='jet')
divider = make_axes_locatable(ax1)
cax = divider.append_axes("right", "5%", pad="3%")
pylab.colorbar(plt1, cax=cax)
ax1.set_title('W_{EE}')
pylab.tight_layout()
plt2 = ax2.imshow(W_IE)
plt2.set_cmap('jet')
divider = make_axes_locatable(ax2)
cax = divider.append_axes("right", "5%", pad="3%")
pylab.colorbar(plt2, cax=cax)
ax2.set_title('W_{EI}')
pylab.tight_layout()
plt3 = ax3.imshow(W_EI)
plt3.set_cmap('jet')
divider = make_axes_locatable(ax3)
cax = divider.append_axes("right", "5%", pad="3%")
pylab.colorbar(plt3, cax=cax)
ax3.set_title('W_{IE}')
pylab.tight_layout()
plt4 = ax4.imshow(W_II)
plt4.set_cmap('jet')
divider = make_axes_locatable(ax4)
cax = divider.append_axes("right", "5%", pad="3%")
pylab.colorbar(plt4, cax=cax)
ax4.set_title('W_{II}')
pylab.tight_layout()
The script iterates through the list of all connections of each type. To populate the corresponding weight matrix, we identify the source-gid (first element of each connection object, n[0]) and the target-gid (second element of each connection object, n[1]). For each gid, we subtract the minimum gid within the corresponding population, to assure the matrix indices range from 0 to the size of the population.
After determining the matrix indices [i, j], for each connection object, the corresponding weight is added to the entry W[i,j]. The procedure is then repeated for all the different connection types.
We then plot the figure, specifying the properties we want. For example, we can display all the weight matrices in a single figure, which requires us to use GridSpec to specify the spatial arrangement of the axes. A subplot is subsequently created for each connection type. Using imshow, we can visualize the weight matrix in the corresponding axis; we can also specify the colormap for this image. Using make_axes_locatable from mpl_toolkits.axes_grid1, we can allocate a small extra space on the right of the current axis, which we reserve for a colorbar. We can set the title of each axis and adjust the axis subplot parameters. Finally, the last three steps are repeated for each synapse type.
IF curve example¶
This example illustrates how to measure the I-F curve of a neuron. The program creates a small group of neurons and injects a noisy current \(I(t) = I_{\mathrm{mean}} + I_{\mathrm{std}} W(t)\), where \(W(t)\) is a white noise process. The program systematically drives the current through a series of values in the two-dimensional \((I_{\mathrm{mean}}, I_{\mathrm{std}})\) space and measures the firing rate of the neurons.
In this example, we measure the I-F curve of the adaptive exponential integrate-and-fire neuron (aeif_cond_exp), but any other neuron model that accepts current inputs is possible. The model and its parameters are supplied when the IF_curve object is created.
import numpy
import nest
import shelve
Here we define which model and the neuron parameters to use for measuring the transfer function.
model = 'aeif_cond_exp'
params = {'a': 4.0,
'b': 80.8,
'V_th': -50.4,
'Delta_T': 2.0,
'I_e': 0.0,
'C_m': 281.0,
'g_L': 30.0,
'V_reset': -70.6,
'tau_w': 144.0,
't_ref': 5.0,
'V_peak': -40.0,
'E_L': -70.6,
'E_ex': 0.,
'E_in': -70.}
class IF_curve():
t_inter_trial = 200. # Interval between two successive measurement trials
t_sim = 1000. # Duration of a measurement trial
n_neurons = 100 # Number of neurons
n_threads = 4 # Number of threads to run the simulation
def __init__(self, model, params=False):
self.model = model
self.params = params
self.build()
self.connect()
def build(self):
#######################################################################
# We reset NEST to delete information from previous simulations
# and adjust the number of threads.
nest.ResetKernel()
nest.SetKernelStatus({'local_num_threads': self.n_threads})
#######################################################################
# We set the default parameters of the neuron model to those
# defined above and create neurons and devices.
if self.params:
nest.SetDefaults(self.model, self.params)
self.neuron = nest.Create(self.model, self.n_neurons)
self.noise = nest.Create('noise_generator')
self.spike_detector = nest.Create('spike_detector')
def connect(self):
#######################################################################
# We connect the noisy current to the neurons and the neurons to
# the spike detectors.
nest.Connect(self.noise, self.neuron, 'all_to_all')
nest.Connect(self.neuron, self.spike_detector, 'all_to_all')
def output_rate(self, mean, std):
self.build()
self.connect()
#######################################################################
# We adjust the parameters of the noise according to the current
# values.
nest.SetStatus(self.noise, [{'mean': mean, 'std': std, 'start': 0.0,
'stop': 1000., 'origin': 0.}])
# We simulate the network and calculate the rate.
nest.Simulate(self.t_sim)
rate = nest.GetStatus(self.spike_detector, 'n_events')[0] * 1000.0 \
/ (1. * self.n_neurons * self.t_sim)
return rate
def compute_transfer(self, i_mean=(400.0, 900.0, 50.0),
i_std=(0.0, 600.0, 50.0)):
#######################################################################
# We loop through all possible combinations of `(I_mean, I_sigma)`
# and measure the output rate of the neuron.
self.i_range = numpy.arange(*i_mean)
self.std_range = numpy.arange(*i_std)
self.rate = numpy.zeros((self.i_range.size, self.std_range.size))
nest.set_verbosity('M_WARNING')
for n, i in enumerate(self.i_range):
print('I = {0}'.format(i))
for m, std in enumerate(self.std_range):
self.rate[n, m] = self.output_rate(i, std)
transfer = IF_curve(model, params)
transfer.compute_transfer()
After the simulation is finished we store the data into a file for later analysis.
dat = shelve.open(model + '_transfer.dat')
dat['I_mean'] = transfer.i_range
dat['I_std'] = transfer.std_range
dat['rate'] = transfer.rate
dat.close()
Pulse packet example¶
This script compares the average and individual membrane potential excursions in response to a single pulse packet with an analytically acquired voltage trace (see Diesmann [1]). A pulse packet is a transient spike volley with a Gaussian rate profile. The user can specify the neural parameters, the parameters of the pulse packet and the number of trials.
References¶
[1] Diesmann M. 2002. Conditions for stable propagation of synchronous spiking in cortical neural networks: Single neuron dynamics and network properties. Dissertation. http://d-nb.info/968772781/34
First, we import all necessary modules for simulation, analysis and plotting.
import nest
import numpy
import pylab
import array
# Properties of pulse packet:
a = 100 # number of spikes in one pulse packet
sdev = 10. # width of pulse packet (ms)
weight = 0.1 # PSP amplitude (mV)
pulsetime = 500. # occurrence time (center) of pulse-packet (ms)
# Network and neuron characteristics:
n_neurons = 100 # number of neurons
cm = 200. # membrane capacitance (pF)
tau_s = 0.5 # synaptic time constant (ms)
tau_m = 20. # membrane time constant (ms)
V0 = 0.0 # resting potential (mV)
Vth = numpy.inf # firing threshold, high value to avoid spiking
# Simulation and analysis parameters:
simtime = 1000. # how long we simulate (ms)
simulation_resolution = 0.1 # (ms)
sampling_resolution = 1. # for voltmeter (ms)
convolution_resolution = 1. # for the analytics (ms)
# Some parameters in base units.
Cm = cm * 1e-12 # convert to Farad
Weight = weight * 1e-12 # convert to Ampere
Tau_s = tau_s * 1e-3 # convert to sec
Tau_m = tau_m * 1e-3 # convert to sec
Sdev = sdev * 1e-3 # convert to sec
Convolution_resolution = convolution_resolution * 1e-3 # convert to sec
This function calculates the membrane potential excursion in response to a single input spike (the equation is given, for example, in Diesmann [1], eq. 2.3). It expects:
- Time: a time array or a single time point (in sec)
- Tau_s and Tau_m: the synaptic and the membrane time constant (in sec)
- Cm: the membrane capacity (in Farad)
- Weight: the synaptic weight (in Ampere)
It returns the resulting membrane potential (in mV).
def make_psp(Time, Tau_s, Tau_m, Cm, Weight):
term1 = (1 / (Tau_s) - 1 / (Tau_m))
term2 = numpy.exp(-Time / (Tau_s))
term3 = numpy.exp(-Time / (Tau_m))
PSP = (Weight / Cm * numpy.exp(1) / Tau_s *
(((-Time * term2) / term1) + (term3 - term2) / term1 ** 2))
return PSP * 1e3
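Written out, make_psp() evaluates (reconstructed here from the code above, cf. [1], eq. 2.3)
\[ \mathrm{PSP}(t) = \frac{W}{C_m}\,\frac{e}{\tau_s} \left[ \frac{-t\,e^{-t/\tau_s}}{1/\tau_s - 1/\tau_m} + \frac{e^{-t/\tau_m} - e^{-t/\tau_s}}{\left(1/\tau_s - 1/\tau_m\right)^2} \right], \]
returned in mV (hence the factor of \(10^3\) in the last line).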
This function finds the exact location of the maximum of the PSP caused by a single input spike. The location is obtained by setting the first derivative of the equation for the PSP (see make_psp()) to zero. The resulting equation can be expressed in terms of a LambertW function, which is implemented in NEST as a .sli file. To access this function from PyNEST we call nest.ll_api.sli_func().
This function expects:
- Tau_s and Tau_m: the synaptic and membrane time constant (in sec)
It returns the location of the maximum (in sec).
def find_loc_pspmax(tau_s, tau_m):
var = tau_m / tau_s
lam = nest.ll_api.sli_func('LambertWm1', -numpy.exp(-1 / var) / var)
t_maxpsp = (-var * lam - 1) / var / (1 / tau_s - 1 / tau_m) * 1e-3
return t_maxpsp
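As an illustrative cross-check (not part of the original script), the same value can be computed with SciPy's Lambert W on the \(k=-1\) branch, which corresponds to NEST's LambertWm1:
import numpy
from scipy.special import lambertw

def find_loc_pspmax_scipy(tau_s, tau_m):
    # Mirrors find_loc_pspmax() above, with SciPy instead of sli_func.
    var = tau_m / tau_s
    lam = lambertw(-numpy.exp(-1 / var) / var, k=-1).real
    return (-var * lam - 1) / var / (1 / tau_s - 1 / tau_m) * 1e-3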
First, we construct a Gaussian kernel for a given standard deviation (sig) and mean value (mu). In this case the standard deviation is the width of the pulse packet (see [1]).
sig = Sdev
mu = 0.0
x = numpy.arange(-4 * sig, 4 * sig, Convolution_resolution)
term1 = 1 / (sig * numpy.sqrt(2 * numpy.pi))
term2 = numpy.exp(-(x - mu) ** 2 / (sig ** 2 * 2))
gauss = term1 * term2 * Convolution_resolution
Second, we calculate the PSP of a neuron due to a single spiking input (see Diesmann 2002, eq. 2.3). Since we do that in discrete time steps, we first construct an array (t_psp) that contains the time points we want to consider. Then the function make_psp() (which creates the PSP) takes the time array as its first argument.
t_psp = numpy.arange(0, 10 * (Tau_m + Tau_s), Convolution_resolution)
psp = make_psp(t_psp, Tau_s, Tau_m, Cm, Weight)
Now we want to normalize the PSP amplitude to one. We therefore divide the PSP by its maximum (see [1], section 6.1). The function find_loc_pspmax() returns the exact time point (t_pspmax) at which we expect the maximum to occur. The function make_psp() then calculates the corresponding PSP value, which is our PSP amplitude (psp_amp).
t_pspmax = find_loc_pspmax(Tau_s, Tau_m)
psp_amp = make_psp(t_pspmax, Tau_s, Tau_m, Cm, Weight)
psp_norm = psp / psp_amp
Now we have all ingredients to compute the membrane potential excursion U. This calculation implies a convolution of the Gaussian with the normalized PSP (see [1], eq. 6.9). In order to avoid an offset in the convolution, we need to add a pad of zeros on the left side of the normalized PSP. Later on we want to compare our analytical results with the simulation outcome, so we need a time vector (t_U) with the correct temporal resolution, which places the excursion of the potential at the correct time.
tmp = numpy.zeros(2 * len(psp_norm))
tmp[len(psp_norm) - 1:-1] += psp_norm
psp_norm = tmp
del tmp
U = a * psp_amp * pylab.convolve(gauss, psp_norm)
l = len(U)
t_U = (convolution_resolution * numpy.linspace(-l / 2., l / 2., l) +
pulsetime + 1.)
In this section we simulate a network of multiple neurons. All these neurons receive an individual pulse packet that is drawn from a Gaussian distribution.
We reset the kernel, define the simulation resolution and set the verbosity using set_verbosity to suppress info messages.
nest.ResetKernel()
nest.SetKernelStatus({'resolution': simulation_resolution})
nest.set_verbosity("M_WARNING")
Afterwards we create several neurons, the same number of pulse-packet generators and a voltmeter. All these nodes/devices have specific properties that are specified in device-specific dictionaries (here: neuron_pars for the neurons, ppg_pars for the pulse-packet generators and vm_pars for the voltmeter).
neuron_pars = {
'V_th': Vth,
'tau_m': tau_m,
'tau_syn_ex': tau_s,
'C_m': cm,
'E_L': V0,
'V_reset': V0,
'V_m': V0
}
neurons = nest.Create('iaf_psc_alpha', n_neurons, neuron_pars)
ppg_pars = {
'pulse_times': [pulsetime],
'activity': a,
'sdev': sdev
}
ppgs = nest.Create('pulsepacket_generator', n_neurons, ppg_pars)
vm_pars = {
'record_to': ['memory'],
'withtime': True,
'withgid': True,
'interval': sampling_resolution
}
vm = nest.Create('voltmeter', 1, vm_pars)
Now, we connect each pulse generator to one neuron via static synapses. We want to keep all properties of the static synapse constant except the synaptic weight, so we change the weight with the help of the command SetDefaults.
The command Connect connects all kinds of nodes/devices. Since multiple nodes/devices can be connected in different ways, e.g., each source connects to all targets, each source connects to a subset of targets, or each source connects to exactly one target, we have to specify the connection rule. In our case we use the one_to_one rule, since we connect one pulse generator (source) to one neuron (target).
In addition we also connect the voltmeter to the neurons.
nest.SetDefaults('static_synapse', {'weight': weight})
nest.Connect(ppgs, neurons, 'one_to_one')
nest.Connect(vm, neurons)
In the next step we run the simulation for a given duration in ms.
nest.Simulate(simtime)
Finally, we record the membrane potential, when it occurred and to which neuron it belongs. We obtain this information using the command nest.GetStatus(vm, 'events')[0]. The sender and the time point of a voltage data point at position x in the voltage array (V_m) can be found at the same position x in the sender array (senders) and the time array (times).
Vm = nest.GetStatus(vm, 'events')[0]['V_m']
times = nest.GetStatus(vm, 'events')[0]['times']
senders = nest.GetStatus(vm, 'events')[0]['senders']
Here we plot the membrane potential derived from the theory and from the simulation. Since we simulate multiple neurons that received slightly different pulse packets, we plot the individual and the averaged membrane potentials.
We plot the analytical solution U (the resting potential V0 shifts the membrane potential up or down).
pylab.plot(t_U, U + V0, 'r', lw=2, zorder=3, label='analytical solution')
Then we plot all individual membrane potentials. The time axis is the range of the simulation time in steps of ms.
Vm_single = [Vm[senders == ii] for ii in neurons]
simtimes = numpy.arange(1, simtime)
for idn in range(n_neurons):
if idn == 0:
pylab.plot(simtimes, Vm_single[idn], 'gray',
zorder=1, label='single potentials')
else:
pylab.plot(simtimes, Vm_single[idn], 'gray', zorder=1)
Finally, we plot the averaged membrane potential.
Vm_average = numpy.mean(Vm_single, axis=0)
pylab.plot(simtimes, Vm_average, 'b', lw=4,
zorder=2, label='averaged potential')
pylab.legend()
pylab.xlabel('time (ms)')
pylab.ylabel('membrane potential (mV)')
pylab.xlim((-5 * (tau_m + tau_s) + pulsetime,
10 * (tau_m + tau_s) + pulsetime))
Correlospinmatrix detector example¶
This script simulates two connected binary neurons, similar to the case studied in [1]. It measures and plots the auto- and cross-covariance functions of the individual neurons and between them, respectively.
References¶
- 1
Ginzburg and Sompolinsky (1994). Theory of correlations in stochastic neural networks. Phys. Rev. E 50(4) p. 3175. Fig. 1.
import pylab as pl
import nest
import numpy as np
m_x = 0.5
tau_m = 10.
h = 0.1
T = 1000000.
tau_max = 100.
csd = nest.Create("correlospinmatrix_detector")
nest.SetStatus(csd, {"N_channels": 2, "tau_max": tau_max, "Tstart": tau_max,
"delta_tau": h})
nest.SetDefaults('ginzburg_neuron', {'theta': 0.0, 'tau_m': tau_m,
'c_1': 0.0, 'c_2': 2. * m_x, 'c_3': 1.0})
n1 = nest.Create("ginzburg_neuron")
nest.SetDefaults("mcculloch_pitts_neuron", {'theta': 0.5, 'tau_m': tau_m})
n2 = nest.Create("mcculloch_pitts_neuron")
nest.Connect(n1, n2, syn_spec={"weight": 1.0})
nest.Connect(n1, csd, syn_spec={"receptor_type": 0})
nest.Connect(n2, csd, syn_spec={"receptor_type": 1})
nest.Simulate(T)
stat = nest.GetStatus(csd)[0]
c = stat["count_covariance"]
m = np.zeros(2, dtype=float)
for i in range(2):
m[i] = c[i][i][int(tau_max / h)] * (h / T)
print('mean activities =', m)
cmat = np.zeros((2, 2, int(2 * tau_max / h) + 1), dtype=float)
for i in range(2):
for j in range(2):
cmat[i, j] = c[i][j] * (h / T) - m[i] * m[j]
ts = np.arange(-tau_max, tau_max + h, h)
pl.title("auto- and cross covariance functions")
pl.plot(ts, cmat[0, 1], 'r', label=r"$c_{12}$")
pl.plot(ts, cmat[1, 0], 'b', label=r"$c_{21}$")
pl.plot(ts, cmat[0, 0], 'g', label=r"$c_{11}$")
pl.plot(ts, cmat[1, 1], 'y', label=r"$c_{22}$")
pl.xlabel(r"time $t \; \mathrm{ms}$")
pl.ylabel(r"$c$")
pl.legend()
pl.show()
Auto- and crosscorrelation functions for spike trains¶
A time bin of size tbin is centered around the time difference it represents. If the correlation function is calculated for tau in [-tau_max, tau_max], the pair events contributing to the left-most bin are those for which tau in [-tau_max-tbin/2, -tau_max+tbin/2), and so on.
The function corr_spikes_sorted() below correlates two spike trains with each other; it assumes the spike times to be ordered in time. tau > 0 means spike2 is later than spike1. Its arguments are:
- tau_max: maximum time lag of the correlation function in ms
- tbin: bin size in ms
- spike1: first spike train [tspike…]
- spike2: second spike train [tspike…]
import nest
from matplotlib.pylab import *
def corr_spikes_sorted(spike1, spike2, tbin, tau_max, h):
tau_max_i = int(tau_max / h)
tbin_i = int(tbin / h)
cross = zeros(int(2 * tau_max_i / tbin_i + 1), 'd')
j0 = 0
for spki in spike1:
j = j0
while j < len(spike2) and spike2[j] - spki < -tau_max_i - tbin_i / 2.0:
j += 1
j0 = j
while j < len(spike2) and spike2[j] - spki < tau_max_i + tbin_i / 2.0:
cross[int(
(spike2[j] - spki + tau_max_i + 0.5 * tbin_i) / tbin_i)] += 1.0
j += 1
return cross
nest.ResetKernel()
h = 0.1 # Computation step size in ms
T = 100000.0 # Total duration
delta_tau = 10.0
tau_max = 100.0
pc = 0.5
nu = 100.0
# grng_seed is 0 because test data was produced for seed = 0
nest.SetKernelStatus({'local_num_threads': 1, 'resolution': h,
'overwrite_files': True, 'grng_seed': 0})
# Set up network, connect and simulate
mg = nest.Create('mip_generator')
nest.SetStatus(mg, {'rate': nu, 'p_copy': pc})
cd = nest.Create('correlation_detector')
nest.SetStatus(cd, {'tau_max': tau_max, 'delta_tau': delta_tau})
sd = nest.Create('spike_detector')
nest.SetStatus(sd, {'withtime': True,
'withgid': True, 'time_in_steps': True})
pn1 = nest.Create('parrot_neuron')
pn2 = nest.Create('parrot_neuron')
nest.Connect(mg, pn1)
nest.Connect(mg, pn2)
nest.Connect(pn1, sd)
nest.Connect(pn2, sd)
nest.SetDefaults('static_synapse', {'weight': 1.0, 'receptor_type': 0})
nest.Connect(pn1, cd)
nest.SetDefaults('static_synapse', {'weight': 1.0, 'receptor_type': 1})
nest.Connect(pn2, cd)
nest.Simulate(T)
n_events = nest.GetStatus(cd)[0]['n_events']
n1 = n_events[0]
n2 = n_events[1]
lmbd1 = (n1 / (T - tau_max)) * 1000.0
lmbd2 = (n2 / (T - tau_max)) * 1000.0
h = 0.1
tau_max = 100.0 # ms correlation window
t_bin = 10.0 # ms bin size
events = nest.GetStatus(sd)[0]['events']
# Extract the spike times (in steps) of the two parrot neurons
sp1 = events['times'][events['senders'] == 4]
sp2 = events['times'][events['senders'] == 5]
# Find crosscorrelation
cross = corr_spikes_sorted(sp1, sp2, t_bin, tau_max, h)
print("Crosscorrelation:")
print(cross)
print("Sum of crosscorrelation:")
print(sum(cross))
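The same helper also yields the autocorrelation functions promised in the title, by correlating a spike train with itself; a minimal sketch (not part of the original script):
auto1 = corr_spikes_sorted(sp1, sp1, t_bin, tau_max, h)
auto2 = corr_spikes_sorted(sp2, sp2, t_bin, tau_max, h)
print("Autocorrelations:")
print(auto1)
print(auto2)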
Campbell & Siegert approximation example¶
Example script that applies Campbell’s theorem and Siegert’s rate approximation to an integrate-and-fire neuron.
This script calculates the firing rate of an integrate-and-fire neuron in response to a series of Poisson generators, each specified with a rate and a synaptic weight. The calculated rate is compared with a simulation using the iaf_psc_alpha model.
Authors¶
Schrader; Siegert implementation by T. Tetzlaff
First, we import all necessary modules for simulation and analysis. Scipy should be imported before nest.
from scipy.special import erf
from scipy.optimize import fmin
import numpy as np
import nest
We first set the parameters of the neurons, the noise and the simulation. The first parameter set uses a single Poisson source; the second (commented out below) uses two Poisson sources, each with half the rate of the single source. Both should lead to the same results.
weights = [0.1] # (mV) psp amplitudes
rates = [10000.] # (1/s) rate of Poisson sources
# weights = [0.1, 0.1] # (mV) psp amplitudes
# rates = [5000., 5000.] # (1/s) rate of Poisson sources
C_m = 250.0 # (pF) capacitance
E_L = -70.0 # (mV) resting potential
I_e = 0.0 # (nA) external current
V_reset = -70.0 # (mV) reset potential
V_th = -55.0 # (mV) firing threshold
t_ref = 2.0 # (ms) refractory period
tau_m = 10.0 # (ms) membrane time constant
tau_syn_ex = .5 # (ms) excitatory synaptic time constant
tau_syn_in = 2.0 # (ms) inhibitory synaptic time constant
simtime = 20000 # (ms) duration of simulation
n_neurons = 10 # number of simulated neurons
For convenience we define some units.
pF = 1e-12
ms = 1e-3
pA = 1e-12
mV = 1e-3
mu = 0.0
sigma2 = 0.0
J = []
assert(len(weights) == len(rates))
In the following we analytically compute the firing rate of the neuron based on Campbell’s theorem [1] and Siegert’s rate approximation [2].
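For reference: if PSPs of shape f(t) arrive as a Poisson process with rate \(\nu\), Campbell's theorem gives the stationary mean and variance of the free membrane potential as
\[ \mu = \nu \int_0^{\infty} f(t)\,\mathrm{d}t, \qquad \sigma^2 = \nu \int_0^{\infty} f(t)^2\,\mathrm{d}t, \]
with the contributions of independent Poisson sources adding up, as in the loop below.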
for rate, weight in zip(rates, weights):
if weight > 0:
tau_syn = tau_syn_ex
else:
tau_syn = tau_syn_in
t_psp = np.arange(0., 10. * (tau_m * ms + tau_syn * ms), 0.0001)
# We define the form of a single PSP, which allows us to match the
# maximal value to our chosen weight.
def psp(x):
return - ((C_m * pF) / (tau_syn * ms) * (1 / (C_m * pF)) *
(np.exp(1) / (tau_syn * ms)) *
(((-x * np.exp(-x / (tau_syn * ms))) /
(1 / (tau_syn * ms) - 1 / (tau_m * ms))) +
(np.exp(-x / (tau_m * ms)) - np.exp(-x / (tau_syn * ms))) /
((1 / (tau_syn * ms) - 1 / (tau_m * ms)) ** 2)))
min_result = fmin(psp, [0], full_output=1, disp=0)
# We need to calculate the PSC amplitude (i.e., the weight we set in NEST)
# from the PSP amplitude, that we have specified above.
fudge = -1. / min_result[1]
J.append(C_m * weight / (tau_syn) * fudge)
# We now use Campbell's theorem to calculate mean and variance of
# the input due to the Poisson sources. The mean and variance add up
# for each Poisson source.
mu += (rate * (J[-1] * pA) * (tau_syn * ms) *
np.exp(1) * (tau_m * ms) / (C_m * pF))
sigma2 += rate * (2 * tau_m * ms + tau_syn * ms) * \
(J[-1] * pA * tau_syn * ms * np.exp(1) * tau_m * ms /
(2 * (C_m * pF) * (tau_m * ms + tau_syn * ms))) ** 2
mu += (E_L * mV)
sigma = np.sqrt(sigma2)
Having calculated the mean and variance of the input, we can now employ Siegert’s rate approximation.
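The sum below is a Riemann-sum approximation of the integral in Siegert's formula, written here to match the code (with the lower integration bound at the resting potential \(E_L\), which coincides with the reset potential in this example):
\[ r = \left[\, t_{\mathrm{ref}} + \tau_m \sqrt{\pi} \int_{(E_L - \mu)/(\sqrt{2}\sigma)}^{(V_{\mathrm{th}} - \mu)/(\sqrt{2}\sigma)} e^{u^2} \bigl(1 + \mathrm{erf}(u)\bigr)\,\mathrm{d}u \right]^{-1} \]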
num_iterations = 100
upper = (V_th * mV - mu) / sigma / np.sqrt(2)
lower = (E_L * mV - mu) / sigma / np.sqrt(2)
interval = (upper - lower) / num_iterations
tmpsum = 0.0
for cu in range(0, num_iterations + 1):
u = lower + cu * interval
f = np.exp(u ** 2) * (1 + erf(u))
tmpsum += interval * np.sqrt(np.pi) * f
r = 1. / (t_ref * ms + tau_m * ms * tmpsum)
We now simulate neurons receiving Poisson spike trains as input, and compare the theoretical result to the empirical value.
nest.ResetKernel()
nest.set_verbosity('M_WARNING')
neurondict = {'V_th': V_th, 'tau_m': tau_m, 'tau_syn_ex': tau_syn_ex,
'tau_syn_in': tau_syn_in, 'C_m': C_m, 'E_L': E_L, 't_ref': t_ref,
'V_m': E_L, 'V_reset': E_L}
Neurons and devices are instantiated. We set a high threshold for the neuron whose free membrane potential we want to record. In addition we choose a small recording interval for the voltmeter to collect good statistics.
nest.SetDefaults('iaf_psc_alpha', neurondict)
n = nest.Create('iaf_psc_alpha', n_neurons)
n_free = nest.Create('iaf_psc_alpha', 1, [{'V_th': 1e12}])
pg = nest.Create('poisson_generator', len(rates),
[{'rate': float(rate_i)} for rate_i in rates])
vm = nest.Create('voltmeter', 1, [{'interval': .1}])
sd = nest.Create('spike_detector', 1)
We connect devices and neurons and start the simulation.
for i, currentpg in enumerate(pg):
nest.Connect([currentpg], n,
syn_spec={'weight': float(J[i]), 'delay': 0.1})
nest.Connect([currentpg], n_free,
syn_spec={'weight': J[i]})
nest.Connect(vm, n_free)
nest.Connect(n, sd)
nest.Simulate(simtime)
Here we read out the recorded membrane potential. The first 500 samples are omitted so that initial transients do not perturb our results. We then print the results from theory and simulation.
v_free = nest.GetStatus(vm, 'events')[0]['V_m'][500:-1]
print('mean membrane potential (actual / calculated): {0} / {1}'
.format(np.mean(v_free), mu * 1000))
print('variance (actual / calculated): {0} / {1}'
.format(np.var(v_free), sigma2 * 1e6))
print('firing rate (actual / calculated): {0} / {1}'
.format(nest.GetStatus(sd, 'n_events')[0] /
(n_neurons * simtime * ms), r))
Spike synchronization through subthreshold oscillation¶
This script reproduces the spike synchronization behavior of integrate-and-fire neurons in response to a subthreshold oscillation. This phenomenon is shown in Fig. 1 of [1].
Neurons receive a weak 35 Hz oscillation, a Gaussian noise current and an increasing DC. The time-locking capability is shown to depend on the input current given. The result is then plotted using pylab. All parameters are taken from the above paper.
References¶
- 1
Brody CD and Hopfield JJ (2003). Simple networks for spike-timing-based computation, with application to olfactory processing. Neuron 37, 843-852.
First, we import all necessary modules for simulation, analysis, and plotting.
import nest
import nest.raster_plot
Second, the simulation parameters are assigned to variables.
N = 1000 # number of neurons
bias_begin = 140. # minimal value for the bias current injection [pA]
bias_end = 200. # maximal value for the bias current injection [pA]
T = 600 # simulation time (ms)
# parameters for the alternating-current generator
driveparams = {'amplitude': 50., 'frequency': 35.}
# parameters for the noise generator
noiseparams = {'mean': 0.0, 'std': 200.}
neuronparams = {'tau_m': 20., # membrane time constant
'V_th': 20., # threshold potential
'E_L': 10., # membrane resting potential
't_ref': 2., # refractory period
'V_reset': 0., # reset potential
'C_m': 200., # membrane capacitance
'V_m': 0.} # initial membrane potential
Third, the nodes are created using Create. We store the returned handles in variables for later reference.
neurons = nest.Create('iaf_psc_alpha', N)
sd = nest.Create('spike_detector')
noise = nest.Create('noise_generator')
drive = nest.Create('ac_generator')
Set the parameters specified above for the generators using SetStatus.
nest.SetStatus(drive, driveparams)
nest.SetStatus(noise, noiseparams)
Set the parameters specified above for the neurons. Each neuron receives a constant bias current, whose amplitude increases linearly across the population: the first neuron receives a current of amplitude bias_begin, the last neuron one of amplitude bias_end.
nest.SetStatus(neurons, neuronparams)
nest.SetStatus(neurons, [{'I_e':
(n * (bias_end - bias_begin) / N + bias_begin)}
for n in neurons])
Set the parameters for the spike_detector: the recorded data should include information about the global IDs of the spiking neurons and the times of the individual spikes.
nest.SetStatus(sd, {"withgid": True, "withtime": True})
Connect the alternating-current and noise generators, as well as the spike detector, to the neurons.
nest.Connect(drive, neurons)
nest.Connect(noise, neurons)
nest.Connect(neurons, sd)
Simulate the network for time T.
nest.Simulate(T)
Plot the raster plot of the neuronal spiking activity.
nest.raster_plot.from_device(sd, hist=True)
Example using Hodgkin-Huxley neuron¶
This example produces a rate-response (FI) curve of the Hodgkin-Huxley neuron hh_psc_alpha in response to a range of different current (DC) stimulations. The result is plotted using matplotlib.
Since a DC input affects only the neuron’s channel dynamics, this routine does not yet check the correctness of the synaptic response.
import nest
import numpy as np
import matplotlib.pyplot as plt
nest.set_verbosity('M_WARNING')
nest.ResetKernel()
simtime = 1000
# Amplitude range, in pA
dcfrom = 0
dcstep = 20
dcto = 2000
h = 0.1 # simulation step size in ms
neuron = nest.Create('hh_psc_alpha')
sd = nest.Create('spike_detector')
nest.SetStatus(sd, {'to_memory': False})
nest.Connect(neuron, sd, syn_spec={'weight': 1.0, 'delay': h})
# Simulation loop
n_data = int(dcto / float(dcstep))
amplitudes = np.zeros(n_data)
event_freqs = np.zeros(n_data)
for i, amp in enumerate(range(dcfrom, dcto, dcstep)):
nest.SetStatus(neuron, {'I_e': float(amp)})
print("Simulating with current I={} pA".format(amp))
nest.Simulate(1000) # one second warm-up time for equilibrium state
nest.SetStatus(sd, {'n_events': 0}) # then reset spike counts
nest.Simulate(simtime) # another simulation call to record firing rate
n_events = nest.GetStatus(sd, 'n_events')[0]
amplitudes[i] = amp
event_freqs[i] = n_events / (simtime / 1000.)
plt.plot(amplitudes, event_freqs)
plt.show()
Numerical phase-plane analysis of the Hodgkin-Huxley neuron¶
hh_phaseplane makes a numerical phase-plane analysis of the Hodgkin-Huxley neuron (hh_psc_alpha). Dynamics is investigated in the V-n space (see the remark below). A constant DC can be specified, and its influence on the nullclines can be studied.
Remark¶
To make the two-dimensional analysis possible, the (four-dimensional) Hodgkin-Huxley formalism needs to be artificially reduced to two dimensions, in this case by ‘clamping’ the two other variables, m and h, to constant values (m_eq and h_eq).
import nest
import numpy as np
from matplotlib import pyplot as plt
amplitude = 100. # Set externally applied current amplitude in pA
dt = 0.1 # simulation step length [ms]
v_min = -100. # Min membrane potential
v_max = 42. # Max membrane potential
n_min = 0.1 # Min inactivation variable
n_max = 0.81 # Max inactivation variable
delta_v = 2. # Membrane potential step length
delta_n = 0.01 # Inactivation variable step length
V_vec = np.arange(v_min, v_max, delta_v)
n_vec = np.arange(n_min, n_max, delta_n)
num_v_steps = len(V_vec)
num_n_steps = len(n_vec)
nest.ResetKernel()
nest.set_verbosity('M_ERROR')
nest.SetKernelStatus({'resolution': dt})
neuron = nest.Create('hh_psc_alpha')
# Numerically obtain equilibrium state
nest.Simulate(1000)
m_eq = nest.GetStatus(neuron)[0]['Act_m']
h_eq = nest.GetStatus(neuron)[0]['Inact_h']
nest.SetStatus(neuron, {'I_e': amplitude}) # Apply external current
# Scan state space
print('Scanning phase space')
V_matrix = np.zeros([num_n_steps, num_v_steps])
n_matrix = np.zeros([num_n_steps, num_v_steps])
# pp_data will contain the phase-plane data as a vector field
pp_data = np.zeros([num_n_steps * num_v_steps, 4])
count = 0
for i, V in enumerate(V_vec):
for j, n in enumerate(n_vec):
# Set V_m and n
nest.SetStatus(neuron, {'V_m': V, 'Act_n': n,
'Act_m': m_eq, 'Inact_h': h_eq})
# Find state
V_m = nest.GetStatus(neuron)[0]['V_m']
Act_n = nest.GetStatus(neuron)[0]['Act_n']
# Simulate a short while
nest.Simulate(dt)
# Find difference between new state and old state
V_m_new = nest.GetStatus(neuron)[0]['V_m'] - V
Act_n_new = nest.GetStatus(neuron)[0]['Act_n'] - n
# Store in vector for later analysis
V_matrix[j, i] = abs(V_m_new)
n_matrix[j, i] = abs(Act_n_new)
pp_data[count] = np.array([V_m, Act_n, V_m_new, Act_n_new])
if count % 10 == 0:
# Write updated state next to old state
print('')
print('Vm: \t', V_m)
print('new Vm:\t', V_m_new)
print('Act_n:', Act_n)
print('new Act_n:', Act_n_new)
count += 1
# Set state for AP generation
nest.SetStatus(neuron, {'V_m': -34., 'Act_n': 0.2,
'Act_m': m_eq, 'Inact_h': h_eq})
print('')
print('AP-trajectory')
# ap will contain the trace of a single action potential as one possible
# numerical solution in the vector field
ap = np.zeros([1000, 2])
for i in range(1, 1001):
# Find state
V_m = nest.GetStatus(neuron)[0]['V_m']
Act_n = nest.GetStatus(neuron)[0]['Act_n']
if i % 10 == 0:
# Write new state next to old state
print('Vm: \t', V_m)
print('Act_n:', Act_n)
ap[i - 1] = np.array([V_m, Act_n])
# Simulate again
nest.SetStatus(neuron, {'Act_m': m_eq, 'Inact_h': h_eq})
nest.Simulate(dt)
# Make analysis
print('')
print('Plot analysis')
nullcline_V = []
nullcline_n = []
print('Searching nullclines')
for i in range(0, len(V_vec)):
index = np.nanargmin(V_matrix[:, i]) # scan over n for fixed V
if index != 0 and index != len(n_vec):
nullcline_V.append([V_vec[i], n_vec[index]])
index = np.nanargmin(n_matrix[:, i])
if index != 0 and index != len(n_vec):
nullcline_n.append([V_vec[i], n_vec[index]])
print('Plotting vector field')
factor = 0.1
for i in range(0, np.shape(pp_data)[0], 3):
plt.plot([pp_data[i][0], pp_data[i][0] + factor * pp_data[i][2]],
[pp_data[i][1], pp_data[i][1] + factor * pp_data[i][3]],
color=[0.6, 0.6, 0.6])
nullcline_V = np.array(nullcline_V)
nullcline_n = np.array(nullcline_n)
plt.plot(nullcline_V[:, 0], nullcline_V[:, 1], linewidth=2.0)
plt.plot(nullcline_n[:, 0], nullcline_n[:, 1], linewidth=2.0)
plt.xlim([V_vec[0], V_vec[-1]])
plt.ylim([n_vec[0], n_vec[-1]])
plt.plot(ap[:, 0], ap[:, 1], color='black', linewidth=1.0)
plt.xlabel('Membrane potential V [mV]')
plt.ylabel('Inactivation variable n')
plt.title('Phase space of the Hodgkin-Huxley Neuron')
plt.show()
Structural Plasticity example¶
This example shows a simple network of two populations where structural plasticity is used. The network has 1000 neurons, 80% excitatory and 20% inhibitory. The simulation starts without any connectivity. A set of homeostatic rules is defined, according to which structural plasticity will create and delete synapses dynamically during the simulation until a desired level of electrical activity is reached. The model of structural plasticity used here corresponds to the formulation presented in [1].
At the end of the simulation, a plot of the evolution of the connectivity in the network and the average calcium concentration in the neurons is created.
References¶
- 1
Butz, M., and van Ooyen, A. (2013). A simple rule for dendritic spine and axonal bouton formation can account for cortical reorganization after focal retinal lesions. PLoS Comput. Biol. 9 (10), e1003259.
First, we import all necessary modules.
import nest
import numpy
import matplotlib.pyplot as pl
import sys
We define the general simulation parameters.
class StructuralPlasticityExample:
def __init__(self):
# simulated time (ms)
self.t_sim = 200000.0
# simulation step (ms).
self.dt = 0.1
self.number_excitatory_neurons = 800
self.number_inhibitory_neurons = 200
# Structural_plasticity properties
self.update_interval = 1000
self.record_interval = 1000.0
# rate of background Poisson input
self.bg_rate = 10000.0
self.neuron_model = 'iaf_psc_exp'
In this implementation of structural plasticity, neurons grow connection points called synaptic elements. Synapses can be created between compatible synaptic elements. The growth of these elements is guided by homeostatic rules, defined as growth curves. Here we specify the growth curves for synaptic elements of excitatory and inhibitory neurons.
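As an aside, the Gaussian growth rule of [1] can be written down explicitly. Below is a standalone sketch (separate from the example class): an illustration of the rule as described in the paper, not of NEST's internal implementation.
import numpy

def gaussian_growth(ca, growth_rate, eta, eps):
    # Growth is maximal at the midpoint between eta and eps, zero at
    # eta and eps themselves, and negative (element removal) outside.
    xi = (eta + eps) / 2.0
    zeta = (eps - eta) / (2.0 * numpy.sqrt(numpy.log(2.0)))
    return growth_rate * (2 * numpy.exp(-((ca - xi) / zeta) ** 2) - 1)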
# Excitatory synaptic elements of excitatory neurons
self.growth_curve_e_e = {
'growth_curve': "gaussian",
'growth_rate': 0.0001, # (elements/ms)
'continuous': False,
'eta': 0.0, # Ca2+
'eps': 0.05, # Ca2+
}
# Inhibitory synaptic elements of excitatory neurons
self.growth_curve_e_i = {
'growth_curve': "gaussian",
'growth_rate': 0.0001, # (elements/ms)
'continuous': False,
'eta': 0.0, # Ca2+
'eps': self.growth_curve_e_e['eps'], # Ca2+
}
# Excitatory synaptic elements of inhibitory neurons
self.growth_curve_i_e = {
'growth_curve': "gaussian",
'growth_rate': 0.0004, # (elements/ms)
'continuous': False,
'eta': 0.0, # Ca2+
'eps': 0.2, # Ca2+
}
# Inhibitory synaptic elements of inhibitory neurons
self.growth_curve_i_i = {
'growth_curve': "gaussian",
'growth_rate': 0.0001, # (elements/ms)
'continuous': False,
'eta': 0.0, # Ca2+
'eps': self.growth_curve_i_e['eps'] # Ca2+
}
# Now we specify the neuron model.
self.model_params = {'tau_m': 10.0, # membrane time constant (ms)
# excitatory synaptic time constant (ms)
'tau_syn_ex': 0.5,
# inhibitory synaptic time constant (ms)
'tau_syn_in': 0.5,
't_ref': 2.0, # absolute refractory period (ms)
'E_L': -65.0, # resting membrane potential (mV)
'V_th': -50.0, # spike threshold (mV)
'C_m': 250.0, # membrane capacitance (pF)
'V_reset': -65.0 # reset potential (mV)
}
self.nodes_e = None
self.nodes_i = None
self.mean_ca_e = []
self.mean_ca_i = []
self.total_connections_e = []
self.total_connections_i = []
We initialize variables for the post-synaptic currents of the excitatory, inhibitory and external synapses. These values were calculated from a PSP amplitude of 1 mV for excitatory synapses, -1 mV for inhibitory synapses and 0.11 mV for external synapses.
self.psc_e = 585.0
self.psc_i = -585.0
self.psc_ext = 6.2
def prepare_simulation(self):
nest.ResetKernel()
nest.set_verbosity('M_ERROR')
We set global kernel parameters. Here we define the resolution for the simulation, which is also the time resolution for the update of the synaptic elements.
nest.SetKernelStatus(
{
'resolution': self.dt
}
)
Set the structural plasticity update interval, which determines how often the connectivity will be updated inside the network. It is important to note that synaptic elements and connections change on different time scales.
nest.SetStructuralPlasticityStatus({
'structural_plasticity_update_interval': self.update_interval,
})
Now we define the structural plasticity synapses. In this example we create two synapse models, one for excitatory and one for inhibitory synapses. Then we specify that excitatory synapses can only be created between a pre-synaptic element called Axon_ex and a post-synaptic element called Den_ex. In a similar manner, the synaptic elements for inhibitory synapses are defined.
nest.CopyModel('static_synapse', 'synapse_ex')
nest.SetDefaults('synapse_ex', {'weight': self.psc_e, 'delay': 1.0})
nest.CopyModel('static_synapse', 'synapse_in')
nest.SetDefaults('synapse_in', {'weight': self.psc_i, 'delay': 1.0})
nest.SetStructuralPlasticityStatus({
'structural_plasticity_synapses': {
'synapse_ex': {
'model': 'synapse_ex',
'post_synaptic_element': 'Den_ex',
'pre_synaptic_element': 'Axon_ex',
},
'synapse_in': {
'model': 'synapse_in',
'post_synaptic_element': 'Den_in',
'pre_synaptic_element': 'Axon_in',
},
}
})
def create_nodes(self):
Now we assign the growth curves to the corresponding synaptic elements.
synaptic_elements = {
'Den_ex': self.growth_curve_e_e,
'Den_in': self.growth_curve_e_i,
'Axon_ex': self.growth_curve_e_e,
}
synaptic_elements_i = {
'Den_ex': self.growth_curve_i_e,
'Den_in': self.growth_curve_i_i,
'Axon_in': self.growth_curve_i_i,
}
Then it is time to create the two populations: one with 80% of the total network size (excitatory neurons) and one with the remaining 20% (inhibitory neurons).
self.nodes_e = nest.Create('iaf_psc_alpha',
self.number_excitatory_neurons,
{'synaptic_elements': synaptic_elements})
self.nodes_i = nest.Create('iaf_psc_alpha',
self.number_inhibitory_neurons,
{'synaptic_elements': synaptic_elements_i})
nest.SetStatus(self.nodes_e, 'synaptic_elements', synaptic_elements)
nest.SetStatus(self.nodes_i, 'synaptic_elements', synaptic_elements_i)
def connect_external_input(self):
"""
We create and connect the Poisson generator for external input
"""
noise = nest.Create('poisson_generator')
nest.SetStatus(noise, {"rate": self.bg_rate})
nest.Connect(noise, self.nodes_e, 'all_to_all',
{'weight': self.psc_ext, 'delay': 1.0})
nest.Connect(noise, self.nodes_i, 'all_to_all',
{'weight': self.psc_ext, 'delay': 1.0})
In order to save the average calcium concentration in each population through time, we create the function record_ca. Here we use the GetStatus function to retrieve the value of Ca for every neuron in the network and then store the average.
def record_ca(self):
ca_e = nest.GetStatus(self.nodes_e, 'Ca') # Calcium concentration
self.mean_ca_e.append(numpy.mean(ca_e))
ca_i = nest.GetStatus(self.nodes_i, 'Ca') # Calcium concentration
self.mean_ca_i.append(numpy.mean(ca_i))
In order to save the state of the connectivity in the network through time, we create the function record_connectivity. Here we use the GetStatus function to retrieve the number of connected pre-synaptic elements of each neuron. The total number of excitatory connections is equal to the total number of connected excitatory pre-synaptic elements; the same applies to inhibitory connections.
def record_connectivity(self):
syn_elems_e = nest.GetStatus(self.nodes_e, 'synaptic_elements')
syn_elems_i = nest.GetStatus(self.nodes_i, 'synaptic_elements')
self.total_connections_e.append(sum(neuron['Axon_ex']['z_connected']
for neuron in syn_elems_e))
self.total_connections_i.append(sum(neuron['Axon_in']['z_connected']
for neuron in syn_elems_i))
We define a function to plot the recorded values at the end of the simulation.
def plot_data(self):
fig, ax1 = pl.subplots()
ax1.axhline(self.growth_curve_e_e['eps'],
linewidth=4.0, color='#9999FF')
ax1.plot(self.mean_ca_e, 'b',
label='Ca Concentration Excitatory Neurons', linewidth=2.0)
ax1.axhline(self.growth_curve_i_e['eps'],
linewidth=4.0, color='#FF9999')
ax1.plot(self.mean_ca_i, 'r',
label='Ca Concentration Inhibitory Neurons', linewidth=2.0)
ax1.set_ylim([0, 0.275])
ax1.set_xlabel("Time in [s]")
ax1.set_ylabel("Ca concentration")
ax2 = ax1.twinx()
ax2.plot(self.total_connections_e, 'm',
label='Excitatory connections', linewidth=2.0, linestyle='--')
ax2.plot(self.total_connections_i, 'k',
label='Inhibitory connections', linewidth=2.0, linestyle='--')
ax2.set_ylim([0, 2500])
ax2.set_ylabel("Connections")
ax1.legend(loc=1)
ax2.legend(loc=4)
pl.savefig('StructuralPlasticityExample.eps', format='eps')
It is time to specify how we want to perform the simulation. In this function we first enable structural plasticity in the network and then we simulate in steps. On each step we record the calcium concentration and the connectivity. At the end of the simulation, the plot of connections and calcium concentration through time is generated.
def simulate(self):
if nest.NumProcesses() > 1:
sys.exit("For simplicity, this example only works " +
"for a single process.")
nest.EnableStructuralPlasticity()
print("Starting simulation")
sim_steps = numpy.arange(0, self.t_sim, self.record_interval)
for i, step in enumerate(sim_steps):
nest.Simulate(self.record_interval)
self.record_ca()
self.record_connectivity()
if i % 20 == 0:
print("Progress: " + str(i // 2) + "%")
print("Simulation finished successfully")
Finally we take all the functions that we have defined and create the sequence for our example. We prepare the simulation, create the nodes for the network, connect the external input and then simulate. Please note that as we are simulating 200 biological seconds in this example, it will take a few minutes to complete.
if __name__ == '__main__':
example = StructuralPlasticityExample()
# Prepare simulation
example.prepare_simulation()
example.create_nodes()
example.connect_external_input()
# Start simulation
example.simulate()
example.plot_data()
Gap Junctions: Two neuron example¶
This script simulates two Hodgkin-Huxley neurons of type hh_psc_alpha_gap
connected by a gap junction. Both neurons receive a constant current of
100.0 pA. The neurons are initialized with different membrane potentials and
synchronize over time due to the gap-junction connection.
import nest
import pylab as pl
import numpy
nest.ResetKernel()
First we set the resolution of the simulation, create two neurons and create a voltmeter for recording.
nest.SetKernelStatus({'resolution': 0.05})
neuron = nest.Create('hh_psc_alpha_gap', 2)
vm = nest.Create('voltmeter', params={'to_file': False,
'withgid': True,
'withtime': True,
'interval': 0.1})
Then we set the constant current input, modify the initial membrane potential of one of the neurons and connect the neurons to the voltmeter.
nest.SetStatus(neuron, {'I_e': 100.})
nest.SetStatus([neuron[0]], {'V_m': -10.})
nest.Connect(vm, neuron, 'all_to_all')
In order to create the gap_junction connection, we employ the all_to_all connection rule: gap junctions are bidirectional connections, therefore we need to connect neuron[0] to neuron[1] and neuron[1] to neuron[0]:
nest.Connect(neuron, neuron,
{'rule': 'all_to_all', 'autapses': False},
{'model': 'gap_junction', 'weight': 0.5})
Finally we start the simulation and plot the membrane potentials of both neurons.
nest.Simulate(351.)
senders = nest.GetStatus(vm, 'events')[0]['senders']
times = nest.GetStatus(vm, 'events')[0]['times']
V = nest.GetStatus(vm, 'events')[0]['V_m']
pl.figure(1)
pl.plot(times[numpy.where(senders == 1)],
V[numpy.where(senders == 1)], 'r-')
pl.plot(times[numpy.where(senders == 2)],
V[numpy.where(senders == 2)], 'g-')
pl.xlabel('time (ms)')
pl.ylabel('membrane potential (mV)')
pl.show()
Gap Junctions: Inhibitory network example¶
This script simulates an inhibitory network of 500 Hodgkin-Huxley neurons.
Without the gap junctions (meaning for gap_weight = 0.0) the network shows an asynchronous irregular state that is caused by the external excitatory Poissonian drive being balanced by the inhibitory feedback within the network. With increasing gap_weight the network synchronizes:
For a lower gap weight of 0.3 nS the network remains in an asynchronous state. With a weight of 0.54 nS the network switches randomly between the asynchronous and the synchronous state, while for a gap weight of 0.7 nS a stable synchronous state is reached.
This example is also used as test case 2 (see Figures 9 and 10) in [1].
References¶
- 1
Hahne et al. (2015) A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations, Front. Neuroinform. http://dx.doi.org/10.3389/neuro.11.012.2008
import nest
import pylab as pl
import numpy
import random
n_neuron = 500
gap_per_neuron = 60
inh_per_neuron = 50
delay = 1.0
j_exc = 300.
j_inh = -50.
threads = 8
stepsize = 0.05
simtime = 501.
gap_weight = 0.3
nest.ResetKernel()
First we set the random seed, adjust the kernel settings and create hh_psc_alpha_gap neurons, a spike_detector and a poisson_generator.
random.seed(1)
nest.SetKernelStatus({'resolution': 0.05,
'total_num_virtual_procs': threads,
'print_time': True,
# Settings for waveform relaxation
# 'use_wfr': False uses communication in every step
# instead of an iterative solution
'use_wfr': True,
'wfr_comm_interval': 1.0,
'wfr_tol': 0.0001,
'wfr_max_iterations': 15,
'wfr_interpolation_order': 3})
neurons = nest.Create('hh_psc_alpha_gap', n_neuron)
sd = nest.Create("spike_detector", params={'to_file': False,
'to_memory': True})
pg = nest.Create("poisson_generator", params={'rate': 500.0})
Each neuron shall receive inh_per_neuron = 50 inhibitory synaptic inputs that are randomly selected from all other neurons, each with synaptic weight j_inh = -50.0 pA and a synaptic delay of 1.0 ms. Furthermore, each neuron shall receive an excitatory external Poissonian input of 500.0 Hz with synaptic weight j_exc = 300.0 pA and the same delay.
The desired connections are created with the following commands:
conn_dict = {'rule': 'fixed_indegree',
'indegree': inh_per_neuron,
'autapses': False,
'multapses': True}
syn_dict = {'model': 'static_synapse',
'weight': j_inh,
'delay': delay}
nest.Connect(neurons, neurons, conn_dict, syn_dict)
nest.Connect(pg, neurons, 'all_to_all', syn_spec={'model': 'static_synapse',
'weight': j_exc,
'delay': delay})
Then the neurons are connected to the spike_detector and the initial membrane potential of each neuron is set randomly between -40 and -80 mV.
nest.Connect(neurons, sd)
for i in range(n_neuron):
nest.SetStatus([neurons[i]], {'V_m': (-40. - 40. * random.random())})
Finally gap junctions are added to the network. \((60*500)/2\) gap_junction connections are added randomly, resulting in an average of 60 gap-junction connections per neuron. We must not use the fixed_indegree or fixed_outdegree functionality of nest.Connect() to create the connections, as gap_junction connections are bidirectional and we need to make sure that the same neurons are connected in both directions. This is achieved by creating the connections on the Python level with the random module of the Python Standard Library and connecting the neurons using the make_symmetric flag for one_to_one connections.
n_connection = int(n_neuron * gap_per_neuron / 2)
connections = numpy.transpose(
[random.sample(neurons, 2) for _ in range(n_connection)])
nest.Connect(connections[0], connections[1],
{'rule': 'one_to_one', 'make_symmetric': True},
{'model': 'gap_junction', 'weight': gap_weight})
In the end we start the simulation and plot the spike pattern.
nest.Simulate(simtime)
times = nest.GetStatus(sd, 'events')[0]['times']
spikes = nest.GetStatus(sd, 'events')[0]['senders']
n_spikes = nest.GetStatus(sd, 'n_events')[0]
hz_rate = (1000.0 * n_spikes / simtime) / n_neuron
pl.figure(1)
pl.plot(times, spikes, 'o')
pl.title('Average spike rate (Hz): %.2f' % hz_rate)
pl.xlabel('time (ms)')
pl.ylabel('neuron no')
pl.show()
Population of GIF neuron model with oscillatory behavior¶
This script simulates a population of generalized integrate-and-fire (GIF) model neurons driven by noise from a group of Poisson generators.
Due to spike-frequency adaptation, the GIF neurons tend to show oscillatory behavior on a time scale comparable to the time constants of the adaptation elements (stc and sfa).
Population dynamics are visualized in a raster plot and as the average firing rate.
References¶
- 1
Schwalger T, Degert M, Gerstner W (2017). Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size. PLoS Comput Biol. https://doi.org/10.1371/journal.pcbi.1005507
- 2
Mensi S, Naud R, Pozzorini C, Avermann M, Petersen CC and Gerstner W (2012). Parameter extraction and classification of three cortical neuron types reveals two distinct adaptation mechanisms. Journal of Neurophysiology. 107(6), pp.1756-1775.
Import all necessary modules for simulation and plotting.
import nest
import nest.raster_plot
import matplotlib.pyplot as plt
nest.ResetKernel()
Assigning the simulation parameters to variables.
dt = 0.1
simtime = 2000.0
Definition of the neural parameters for the GIF model. These parameters were obtained by fitting the model to experimental data [2].
neuron_params = {"C_m": 83.1,
"g_L": 3.7,
"E_L": -67.0,
"Delta_V": 1.4,
"V_T_star": -39.6,
"t_ref": 4.0,
"V_reset": -36.7,
"lambda_0": 1.0,
"q_stc": [56.7, -6.9],
"tau_stc": [57.8, 218.2],
"q_sfa": [11.7, 1.8],
"tau_sfa": [53.8, 640.0],
"tau_syn_ex": 10.0,
}
Definition of the parameters for the population of GIF neurons.
N_ex = 100 # size of the population
p_ex = 0.3 # connection probability inside the population
w_ex = 30.0 # synaptic weights inside the population (pA)
Definition of the parameters for the Poisson group and its connection to the GIF population.
N_noise = 50 # size of Poisson group
rate_noise = 10.0 # firing rate of Poisson neurons (Hz)
w_noise = 20.0 # synaptic weights from Poisson to population neurons (pA)
Configuration of the simulation kernel with the previously defined time resolution.
nest.SetKernelStatus({"resolution": dt})
Building a population of GIF neurons, a group of Poisson neurons and a spike detector device for capturing spike times of the population.
population = nest.Create("gif_psc_exp", N_ex, params=neuron_params)
noise = nest.Create("poisson_generator", N_noise, params={'rate': rate_noise})
spike_det = nest.Create("spike_detector")
Build connections within the population of GIF neurons, from the Poisson group to the population, and from the population to the spike detector.
nest.Connect(
population, population, {'rule': 'pairwise_bernoulli', 'p': p_ex},
syn_spec={"weight": w_ex}
)
nest.Connect(noise, population, 'all_to_all', syn_spec={"weight": w_noise})
nest.Connect(population, spike_det)
Simulation of the network.
nest.Simulate(simtime)
Plotting the results of simulation including raster plot and histogram of population activity.
nest.raster_plot.from_device(spike_det, hist=True)
plt.title('Population dynamics')
Population rate model of generalized integrate-and-fire neurons¶
This script simulates a finite network of generalized integrate-and-fire (GIF) neurons directly on the mesoscopic population level using the effective stochastic population rate dynamics derived in the paper 1. The stochastic population dynamics is implemented in the NEST model gif_pop_psc_exp. We demonstrate this model using the example of a Brunel network of two coupled populations, one excitatory and one inhibitory population.
Note that the population model represents the mesoscopic-level description of the corresponding microscopic network based on the NEST model gif_psc_exp.
References¶
- 1
Schwalger T, Degert M, Gerstner W (2017). Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size. PLoS Comput Biol. https://doi.org/10.1371/journal.pcbi.1005507
# Loading the necessary modules:
import numpy as np
import matplotlib.pyplot as plt
import nest
We first set the parameters of the microscopic model:
# all times given in milliseconds
dt = 0.5
dt_rec = 1.
# Simulation time
t_end = 2000.
# Parameters
size = 200
N = np.array([4, 1]) * size
M = len(N) # number of populations
# neuronal parameters
t_ref = 4. * np.ones(M) # absolute refractory period
tau_m = 20 * np.ones(M) # membrane time constant
mu = 24. * np.ones(M) # constant base current mu=R*(I0+Vrest)
c = 10. * np.ones(M) # base rate of exponential link function
Delta_u = 2.5 * np.ones(M) # softness of exponential link function
V_reset = 0. * np.ones(M) # Reset potential
V_th = 15. * np.ones(M) # baseline threshold (non-accumulating part)
tau_sfa_exc = [100., 1000.] # adaptation time constants of excitatory neurons
tau_sfa_inh = [100., 1000.] # adaptation time constants of inhibitory neurons
J_sfa_exc = [1000., 1000.] # size of feedback kernel theta
# (= area under exponential) in mV*ms
J_sfa_inh = [1000., 1000.] # in mV*ms
tau_theta = np.array([tau_sfa_exc, tau_sfa_inh])
J_theta = np.array([J_sfa_exc, J_sfa_inh])
# connectivity
J = 0.3 # excitatory synaptic weight in mV if number of input connections
# is C0 (see below)
g = 5. # inhibition-to-excitation ratio
pconn = 0.2 * np.ones((M, M))
delay = 1. * np.ones((M, M))
C0 = np.array([[800, 200], [800, 200]]) * 0.2 # constant reference matrix
C = np.vstack((N, N)) * pconn # numbers of input connections
J_syn = np.array([[J, -g * J], [J, -g * J]]) * \
C0 / C # final synaptic weights scaling as 1/C
taus1_ = [3., 6.] # time constants of exc./inh. post-synaptic currents (PSC's)
taus1 = np.array([taus1_ for k in range(M)])
# step current input
step = [[20.], [20.]] # jump size of mu in mV
tstep = np.array([[1500.], [1500.]]) # times of jumps
# synaptic time constants of excitatory and inhibitory connections
tau_ex = 3. # in ms
tau_in = 6. # in ms
Simulation on the mesoscopic level¶
To directly simulate the mesoscopic population activities (i.e. generating the activity of a finite-size population without simulating single neurons), we can build the populations using the NEST model gif_pop_psc_exp:
nest.set_verbosity("M_WARNING")
nest.ResetKernel()
nest.SetKernelStatus(
{'resolution': dt, 'print_time': True, 'local_num_threads': 1})
t0 = nest.GetKernelStatus('time')
nest_pops = nest.Create('gif_pop_psc_exp', M)
C_m = 250. # irrelevant value for membrane capacity, cancels out in simulation
g_L = C_m / tau_m
for i, nest_i in enumerate(nest_pops):
nest.SetStatus([nest_i], {
'C_m': C_m,
'I_e': mu[i] * g_L[i],
'lambda_0': c[i], # in Hz!
'Delta_V': Delta_u[i],
'tau_m': tau_m[i],
'tau_sfa': tau_theta[i],
'q_sfa': J_theta[i] / tau_theta[i], # [J_theta]= mV*ms -> [q_sfa]=mV
'V_T_star': V_th[i],
'V_reset': V_reset[i],
'len_kernel': -1, # -1 triggers automatic history size
'N': N[i],
't_ref': t_ref[i],
'tau_syn_ex': max([tau_ex, dt]),
'tau_syn_in': max([tau_in, dt]),
'E_L': 0.
})
# connect the populations
g_syn = np.ones_like(J_syn) # synaptic conductance
g_syn[:, 0] = C_m / tau_ex
g_syn[:, 1] = C_m / tau_in
for i, nest_i in enumerate(nest_pops):
for j, nest_j in enumerate(nest_pops):
nest.SetDefaults('static_synapse', {
'weight': J_syn[i, j] * g_syn[i, j] * pconn[i, j],
'delay': delay[i, j]})
nest.Connect([nest_j], [nest_i], 'all_to_all')
To record the instantaneous population rate Abar(t) we use a multimeter, and to get the population activity A_N(t) we use a spike detector:
# monitor the output using a multimeter, this only records with dt_rec!
nest_mm = nest.Create('multimeter')
nest.SetStatus(nest_mm, {'record_from': ['n_events', 'mean'],
'withgid': True,
'withtime': False,
'interval': dt_rec})
nest.Connect(nest_mm, nest_pops, 'all_to_all')
# monitor the output using a spike detector
nest_sd = []
for i, nest_i in enumerate(nest_pops):
nest_sd.append(nest.Create('spike_detector'))
nest.SetStatus(nest_sd[i], {'withgid': False,
'withtime': True,
'time_in_steps': True})
nest.SetDefaults('static_synapse', {'weight': 1.,
'delay': dt})
nest.Connect([nest_pops[i]], nest_sd[i], 'all_to_all')
All neurons in a given population will be stimulated with a step input current:
# set initial value (at t0+dt) of step current generator to zero
tstep = np.hstack((dt * np.ones((M, 1)), tstep))
step = np.hstack((np.zeros((M, 1)), step))
# create the step current devices
nest_stepcurrent = nest.Create('step_current_generator', M)
# set the parameters for the step currents
for i in range(M):
nest.SetStatus([nest_stepcurrent[i]], {
'amplitude_times': tstep[i] + t0,
'amplitude_values': step[i] * g_L[i], 'origin': t0, 'stop': t_end})
pop_ = nest_pops[i]
if type(nest_pops[i]) == int:
pop_ = [pop_]
nest.Connect([nest_stepcurrent[i]], pop_, syn_spec={'weight': 1.})
We can now start the simulation:
local_num_threads = 1
seed = 1
msd = local_num_threads * seed + 1 # master seed
nest.SetKernelStatus({'rng_seeds': list(range(msd, msd + local_num_threads))})
t = np.arange(0., t_end, dt_rec)
A_N = np.ones((t.size, M)) * np.nan
Abar = np.ones_like(A_N) * np.nan
# simulate 1 step longer to make sure all t are simulated
nest.Simulate(t_end + dt)
data_mm = nest.GetStatus(nest_mm)[0]['events']
for i, nest_i in enumerate(nest_pops):
a_i = data_mm['mean'][data_mm['senders'] == nest_i]
a = a_i / N[i] / dt
min_len = np.min([len(a), len(Abar)])
Abar[:min_len, i] = a[:min_len]
data_sd = nest.GetStatus(nest_sd[i], keys=['events'])[0][0]['times']
data_sd = data_sd * dt - t0
bins = np.concatenate((t, np.array([t[-1] + dt_rec])))
A = np.histogram(data_sd, bins=bins)[0] / float(N[i]) / dt_rec
A_N[:, i] = A
and plot the activity:
plt.figure(1)
plt.clf()
plt.subplot(2, 1, 1)
plt.plot(t, A_N * 1000) # plot population activities (in Hz)
plt.ylabel(r'$A_N$ [Hz]')
plt.title('Population activities (mesoscopic sim.)')
plt.subplot(2, 1, 2)
plt.plot(t, Abar * 1000) # plot instantaneous population rates (in Hz)
plt.ylabel(r'$\bar A$ [Hz]')
plt.xlabel('time [ms]')
Microscopic (“direct”) simulation¶
As mentioned above, the population model gif_pop_psc_exp directly simulates the mesoscopic population activities, i.e. without the need to simulate single neurons. On the other hand, if we want to know single-neuron activities, we must simulate on the microscopic level. This is possible by building a corresponding network of gif_psc_exp neuron models:
nest.ResetKernel()
nest.SetKernelStatus(
{'resolution': dt, 'print_time': True, 'local_num_threads': 1})
t0 = nest.GetKernelStatus('time')
nest_pops = []
for k in range(M):
nest_pops.append(nest.Create('gif_psc_exp', N[k]))
# set single neuron properties
for i, nest_i in enumerate(nest_pops):
nest.SetStatus(nest_i, {
'C_m': C_m,
'I_e': mu[i] * g_L[i],
'lambda_0': c[i], # in Hz!
'Delta_V': Delta_u[i],
'g_L': g_L[i],
'tau_sfa': tau_theta[i],
'q_sfa': J_theta[i] / tau_theta[i], # [J_theta]= mV*ms -> [q_sfa]=mV
'V_T_star': V_th[i],
'V_reset': V_reset[i],
't_ref': t_ref[i],
'tau_syn_ex': max([tau_ex, dt]),
'tau_syn_in': max([tau_in, dt]),
'E_L': 0.,
'V_m': 0.
})
# connect the populations
for i, nest_i in enumerate(nest_pops):
for j, nest_j in enumerate(nest_pops):
nest.SetDefaults('static_synapse', {
'weight': J_syn[i, j] * g_syn[i, j],
'delay': delay[i, j]})
if np.allclose(pconn[i, j], 1.):
conn_spec = {'rule': 'all_to_all'}
else:
conn_spec = {
'rule': 'fixed_indegree', 'indegree': int(pconn[i, j] * N[j])}
nest.Connect(nest_j, nest_i, conn_spec)
We want to record all spikes of each population in order to compute the mesoscopic population activities A_N(t) from the microscopic simulation. We also record the membrane potentials of five example neurons:
# monitor the output using a multimeter and a spike detector
nest_sd = []
for i, nest_i in enumerate(nest_pops):
nest_sd.append(nest.Create('spike_detector'))
nest.SetStatus(nest_sd[i], {'withgid': False,
'withtime': True, 'time_in_steps': True})
nest.SetDefaults('static_synapse', {'weight': 1., 'delay': dt})
# record all spikes from population to compute population activity
nest.Connect(nest_pops[i], nest_sd[i], 'all_to_all')
Nrecord = [5, 0] # for each population "i" the first Nrecord[i] neurons are
# recorded
nest_mm_Vm = []
for i, nest_i in enumerate(nest_pops):
nest_mm_Vm.append(nest.Create('multimeter'))
nest.SetStatus(nest_mm_Vm[i], {'record_from': ['V_m'],
'withgid': True, 'withtime': True,
'interval': dt_rec})
nest.Connect(nest_mm_Vm[i], list(
np.array(nest_pops[i])[:Nrecord[i]]), 'all_to_all')
As before, all neurons in a given population will be stimulated with a step input current. The following code block is identical to the one for the mesoscopic simulation above:
# create the step current devices if they do not exist already
nest_stepcurrent = nest.Create('step_current_generator', M)
# set the parameters for the step currents
for i in range(M):
nest.SetStatus([nest_stepcurrent[i]], {
'amplitude_times': tstep[i] + t0,
'amplitude_values': step[i] * g_L[i], 'origin': t0, 'stop': t_end})
# optionally a stopping time may be added by: 'stop': sim_T + t0
pop_ = nest_pops[i]
if type(nest_pops[i]) == int:
pop_ = [pop_]
nest.Connect([nest_stepcurrent[i]], pop_, syn_spec={'weight': 1.})
We can now start the microscopic simulation:
local_num_threads = 1
seed = 1
msd = local_num_threads * seed + 1 # master seed
nest.SetKernelStatus({'rng_seeds': list(range(msd, msd + local_num_threads))})
t = np.arange(0., t_end, dt_rec)
A_N = np.ones((t.size, M)) * np.nan
# simulate 1 step longer to make sure all t are simulated
nest.Simulate(t_end + dt)
Let’s retrieve the data of the spike detector and plot the activity of the excitatory population (in Hz):
for i, nest_i in enumerate(nest_pops):
    data_sd = nest.GetStatus(
        nest_sd[i], keys=['events'])[0][0]['times'] * dt - t0
    bins = np.concatenate((t, np.array([t[-1] + dt_rec])))
    A = np.histogram(data_sd, bins=bins)[0] / float(N[i]) / dt_rec
    A_N[:, i] = A * 1000  # in Hz

t = np.arange(dt, t_end + dt, dt_rec)
plt.figure(2)
plt.plot(t, A_N[:, 0])
plt.xlabel('time [ms]')
plt.ylabel('population activity [Hz]')
plt.title('Population activities (microscopic sim.)')
This should look similar to the population activity obtained from the mesoscopic simulation based on the NEST model gif_pop_psc_exp (cf. figure 1). Now we retrieve the data of the multimeter, which allows us to look at the membrane potentials of single neurons. Here we plot the voltage traces (in mV) of five example neurons:
voltage = []
for i in range(M):
    if Nrecord[i] > 0:
        senders = nest.GetStatus(nest_mm_Vm[i])[0]['events']['senders']
        v = nest.GetStatus(nest_mm_Vm[i])[0]['events']['V_m']
        voltage.append(
            np.array([v[np.where(senders == j)] for j in set(senders)]))
    else:
        voltage.append(np.array([]))

f, axarr = plt.subplots(Nrecord[0], sharex=True)
for i in range(Nrecord[0]):
    axarr[i].plot(voltage[0][i])
    axarr[i].set_yticks((0, 15, 30))
axarr[i].set_xlabel('time [ms]')
axarr[2].set_ylabel('membrane potential [mV]')
axarr[0].set_title('5 example GIF neurons (microscopic sim.)')
Note that this plots only the subthreshold membrane potentials but not the spikes (as with every leaky integrate-and-fire model).
plt.show()
Testing the adapting exponential integrate and fire model in NEST (Brette and Gerstner Fig 2C)¶
This example tests the adaptive exponential integrate-and-fire model (AdEx) according to Brette and Gerstner 1 and reproduces Figure 2C of the paper. Note that Brette and Gerstner give the value for b in nA. To be consistent with the other parameters in the equations, b must be converted to pA (picoamperes).
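As a quick sanity check of this conversion (a hypothetical snippet, not part of the downloadable example; the value 0.0805 nA is assumed from the paper and matches the 80.5 pA used below):
b_nA = 0.0805      # spike-triggered adaptation increment in nA (assumed paper value)
b_pA = b_nA * 1e3  # 1 nA = 1000 pA
print(b_pA)        # approx. 80.5, the value passed to SetStatus below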
References¶
- 1
Brette R and Gerstner W (2005). Adaptive exponential integrate-and-fire model as an effective description of neuronal activity J. Neurophysiology. https://doi.org/10.1152/jn.00686.2005
import nest
import nest.voltage_trace
import pylab
nest.ResetKernel()
First we make sure that the resolution of the simulation is 0.1 ms. This is important, since the slope of the action potential is very steep.
res = 0.1
nest.SetKernelStatus({"resolution": res})
neuron = nest.Create("aeif_cond_alpha")
a and b are parameters of the AdEx model. Their values come from the publication.
nest.SetStatus(neuron, {"a": 4.0, "b": 80.5})
Next we define the stimulus protocol. There are two DC generators, producing stimulus currents during two time-intervals.
dc = nest.Create("dc_generator", 2)
nest.SetStatus(dc, [{"amplitude": 500.0, "start": 0.0, "stop": 200.0},
{"amplitude": 800.0, "start": 500.0, "stop": 1000.0}])
We connect the DC generators.
nest.Connect(dc, neuron, 'all_to_all')
And add a voltmeter to record the membrane potentials.
voltmeter = nest.Create("voltmeter")
We set the voltmeter to record in small intervals of 0.1 ms and connect the voltmeter to the neuron.
nest.SetStatus(voltmeter, {'interval': 0.1, "withgid": True, "withtime": True})
nest.Connect(voltmeter, neuron)
Finally, we simulate for 1000 ms and plot a voltage trace to produce the figure.
nest.Simulate(1000.0)
nest.voltage_trace.from_device(voltmeter)
pylab.axis([0, 1000, -80, -20])
Testing the adapting exponential integrate and fire model in NEST (Brette and Gerstner Fig 3D)¶
This example tests the adaptive exponential integrate-and-fire model (AdEx) according to Brette and Gerstner 1 and reproduces Figure 3D of the paper.
Note that Brette and Gerstner give the value for b in nA. To be consistent with the other parameters in the equations, b must be converted to pA (picoamperes).
References¶
- 1
Brette R and Gerstner W (2005). Adaptive exponential integrate-and-fire model as an effective description of neuronal activity J. Neurophysiology. https://doi.org/10.1152/jn.00686.2005
import nest
import nest.voltage_trace
import pylab
nest.ResetKernel()
First we make sure that the resolution of the simulation is 0.1 ms. This is important, since the slope of the action potential is very steep.
res = 0.1
nest.SetKernelStatus({"resolution": res})
neuron = nest.Create("aeif_cond_exp")
Set the parameters of the neuron according to the paper.
nest.SetStatus(neuron, {"V_peak": 20., "E_L": -60.0, "a": 80.0, "b": 80.5,
"tau_w": 720.0})
Create and configure the stimulus which is a step current.
dc = nest.Create("dc_generator")
nest.SetStatus(dc, [{"amplitude": -800.0, "start": 0.0, "stop": 400.0}])
We connect the DC generator.
nest.Connect(dc, neuron, 'all_to_all')
And add a voltmeter to record the membrane potentials.
voltmeter = nest.Create("voltmeter")
We set the voltmeter to record in small intervals of 0.1 ms and connect the voltmeter to the neuron.
nest.SetStatus(voltmeter, {"withgid": True, "withtime": True, 'interval': 0.1})
nest.Connect(voltmeter, neuron)
Finally, we simulate for 1000 ms and plot a voltage trace to produce the figure.
nest.Simulate(1000.0)
nest.voltage_trace.from_device(voltmeter)
pylab.axis([0, 1000, -85, 0])
Multi-compartment neuron example¶
Simple example of how to use the three-compartment iaf_cond_alpha_mc
neuron model.
Three stimulation paradigms are illustrated:
externally applied current, one compartment at a time
spikes impinging on each compartment, one at a time
rheobase current injected to soma causing output spikes
Voltage and synaptic conductance traces are shown for all compartments.
First, we import all necessary modules to simulate, analyze and plot this example.
import nest
import pylab
nest.ResetKernel()
We then extract the receptor types and the list of recordable quantities from the neuron model. The receptor types uniquely identify the target compartment and receptor when establishing synaptic connections, while the recordables identify the compartment-specific quantities when assigning multimeters.
syns = nest.GetDefaults('iaf_cond_alpha_mc')['receptor_types']
print("iaf_cond_alpha_mc receptor_types: {0}".format(syns))
rqs = nest.GetDefaults('iaf_cond_alpha_mc')['recordables']
print("iaf_cond_alpha_mc recordables : {0}".format(rqs))
Next, the neuron model parameters are configured using SetDefaults.
nest.SetDefaults('iaf_cond_alpha_mc',
{'V_th': -60.0, # threshold potential
'V_reset': -65.0, # reset potential
't_ref': 10.0, # refractory period
'g_sp': 5.0, # somato-proximal coupling conductance
'soma': {'g_L': 12.0}, # somatic leak conductance
# proximal excitatory and inhibitory synaptic time constants
'proximal': {'tau_syn_ex': 1.0,
'tau_syn_in': 5.0},
'distal': {'C_m': 90.0} # distal capacitance
})
The nodes are created using Create. We store the returned handles in variables for later reference.
n = nest.Create('iaf_cond_alpha_mc')
A multimeter is created and connected to the neuron. The parameters specified for the multimeter include the list of quantities that should be recorded and the time interval at which quantities are measured.
mm = nest.Create('multimeter', params={'record_from': rqs, 'interval': 0.1})
nest.Connect(mm, n)
We create one current generator per compartment and configure a stimulus regime that drives distal, proximal and soma dendrites, in that order. Configuration of the current generator includes the definition of the start and stop times and the amplitude of the injected current.
cgs = nest.Create('dc_generator', 3)
nest.SetStatus(cgs,
[{'start': 250.0, 'stop': 300.0, 'amplitude': 50.0}, # soma
{'start': 150.0, 'stop': 200.0, 'amplitude': -50.0}, # proxim.
{'start': 50.0, 'stop': 100.0, 'amplitude': 100.0}]) # distal
Generators are then connected to the correct compartments. Specification of the receptor_type uniquely defines the target compartment and receptor.
nest.Connect([cgs[0]], n, syn_spec={'receptor_type': syns['soma_curr']})
nest.Connect([cgs[1]], n, syn_spec={'receptor_type': syns['proximal_curr']})
nest.Connect([cgs[2]], n, syn_spec={'receptor_type': syns['distal_curr']})
We create one excitatory and one inhibitory spike generator per compartment and configure a regime that drives distal, proximal and soma dendrites, in that order, alternating the excitatory and inhibitory spike generators.
sgs = nest.Create('spike_generator', 6)
nest.SetStatus(sgs,
[{'spike_times': [600.0, 620.0]}, # soma excitatory
{'spike_times': [610.0, 630.0]}, # soma inhibitory
{'spike_times': [500.0, 520.0]}, # proximal excitatory
{'spike_times': [510.0, 530.0]}, # proximal inhibitory
{'spike_times': [400.0, 420.0]}, # distal excitatory
{'spike_times': [410.0, 430.0]}]) # distal inhibitory
We connect the spike generators to the correct compartments in the same way as the current generators.
nest.Connect([sgs[0]], n, syn_spec={'receptor_type': syns['soma_exc']})
nest.Connect([sgs[1]], n, syn_spec={'receptor_type': syns['soma_inh']})
nest.Connect([sgs[2]], n, syn_spec={'receptor_type': syns['proximal_exc']})
nest.Connect([sgs[3]], n, syn_spec={'receptor_type': syns['proximal_inh']})
nest.Connect([sgs[4]], n, syn_spec={'receptor_type': syns['distal_exc']})
nest.Connect([sgs[5]], n, syn_spec={'receptor_type': syns['distal_inh']})
Run the simulation for 700 ms.
nest.Simulate(700)
Now we set the intrinsic current of the soma to 150 pA to make the neuron spike.
nest.SetStatus(n, {'soma': {'I_e': 150.0}})
We simulate the network for another 300 ms and retrieve the recorded data from the multimeter.
nest.Simulate(300)
rec = nest.GetStatus(mm)[0]['events']
We create an array with the time points when the quantities were actually recorded
t = rec['times']
We plot the time traces of the membrane potential recorded in the soma and in the proximal and distal dendrites (V_m.s, V_m.p and V_m.d).
pylab.figure()
pylab.subplot(211)
pylab.plot(t, rec['V_m.s'], t, rec['V_m.p'], t, rec['V_m.d'])
pylab.legend(('Soma', 'Proximal dendrite', 'Distal dendrite'),
loc='lower right')
pylab.axis([0, 1000, -76, -59])
pylab.ylabel('Membrane potential [mV]')
pylab.title('Responses of iaf_cond_alpha_mc neuron')
Finally, we plot the time traces of the synaptic conductance measured in each compartment.
pylab.subplot(212)
pylab.plot(t, rec['g_ex.s'], 'b-', t, rec['g_ex.p'], 'g-',
t, rec['g_ex.d'], 'r-')
pylab.plot(t, rec['g_in.s'], 'b--', t, rec['g_in.p'], 'g--',
t, rec['g_in.d'], 'r--')
pylab.legend(('g_ex.s', 'g_ex.p', 'g_ex.d', 'g_in.s', 'g_in.p', 'g_in.d'))
pylab.axis([350, 700, 0, 1.15])
pylab.xlabel('Time [ms]')
pylab.ylabel('Synaptic conductance [nS]')
Tsodyks depressing example¶
This script simulates two neurons. One is driven with DC input and connected to the other one with a depressing Tsodyks synapse. The membrane potential trace of the second neuron is recorded.
This example reproduces Figure 1A of 1.
This example is analogous to tsodyks_facilitating.py, except that different synapse parameters are used. Here, a large facilitation parameter U causes a fast saturation of the synaptic efficacy (Eq. 2.2), disabling a facilitating behavior.
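For orientation, a common iterative formulation of the model (reproduced from the literature, not from this script) updates the utilization u and the available resources x between consecutive spikes separated by \(\Delta t\) as
\(u_{n+1} = u_n e^{-\Delta t/\tau_{\mathrm{fac}}} + U (1 - u_n e^{-\Delta t/\tau_{\mathrm{fac}}})\) and
\(x_{n+1} = x_n (1 - u_{n+1}) e^{-\Delta t/\tau_{\mathrm{rec}}} + 1 - e^{-\Delta t/\tau_{\mathrm{rec}}}\),
with the amplitude of each PSC proportional to the product of u and x at the time of the spike. With Tau_fac = 0, as used below, u is pinned at U = 0.5, so every spike consumes half of the currently available resources and the synapse is purely depressing.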
References¶
- 1
Tsodyks M, Pawelzik K, Markram H (1998). Neural networks with dynamic synapses. Neural computation, http://dx.doi.org/10.1162/089976698300017502
See Also¶
Tsodyks facilitating example
First, we import all necessary modules for simulation and plotting.
import nest
import nest.voltage_trace
import pylab
from numpy import exp
Second, the simulation parameters are assigned to variables. The neuron and synapse parameters are stored in dictionaries.
h = 0.1 # simulation step size (ms)
Tau = 40. # membrane time constant
Theta = 15. # threshold
E_L = 0. # resting potential (also used as reset potential)
R = 0.1 # 100 M Ohm
C = Tau / R # Tau (ms)/R in NEST units
TauR = 2. # refractory time
Tau_psc = 3. # time constant of PSC (= Tau_inact)
Tau_rec = 800. # recovery time
Tau_fac = 0. # facilitation time
U = 0.5 # facilitation parameter U
A = 250. # PSC weight in pA
f = 20. / 1000. # frequency in Hz converted to 1/ms
Tend = 1200. # simulation time
TIstart = 50. # start time of dc
TIend = 1050. # end time of dc
I0 = Theta * C / Tau / (1 - exp(-(1 / f - TauR) / Tau)) # dc amplitude
neuron_param = {"tau_m": Tau,
"t_ref": TauR,
"tau_syn_ex": Tau_psc,
"tau_syn_in": Tau_psc,
"C_m": C,
"V_reset": E_L,
"E_L": E_L,
"V_m": E_L,
"V_th": Theta}
syn_param = {"tau_psc": Tau_psc,
"tau_rec": Tau_rec,
"tau_fac": Tau_fac,
"U": U,
"delay": 0.1,
"weight": A,
"u": 0.0,
"x": 1.0}
Third, we reset the kernel and set the resolution using SetKernelStatus.
nest.ResetKernel()
nest.SetKernelStatus({"resolution": h})
Fourth, the nodes are created using Create. We store the returned handles in variables for later reference.
neurons = nest.Create("iaf_psc_exp", 2)
dc_gen = nest.Create("dc_generator")
volts = nest.Create("voltmeter")
Fifth, the iaf_psc_exp neurons, the dc_generator and the voltmeter are configured using SetStatus, which expects a list of node handles and a parameter dictionary or a list of parameter dictionaries.
nest.SetStatus(neurons, neuron_param)
nest.SetStatus(dc_gen, {"amplitude": I0, "start": TIstart, "stop": TIend})
nest.SetStatus(volts, {"label": "voltmeter", "withtime": True, "withgid": True,
"interval": 1.})
Sixth, the dc_generator is connected to the first neuron (neurons[0]) and the voltmeter is connected to the second neuron (neurons[1]). The command Connect has different variants. Plain Connect just takes the handles of pre- and post-synaptic nodes and uses the default values for weight and delay. Note that the connection direction for the voltmeter reflects the signal flow in the simulation kernel, because it observes the neuron instead of receiving events from it.
nest.Connect(dc_gen, [neurons[0]])
nest.Connect(volts, [neurons[1]])
Seventh, the first neuron (neurons[0]) is connected to the second neuron (neurons[1]). The command CopyModel copies the tsodyks_synapse model to the new name syn with parameters syn_param. The manually defined model syn is used in the connection routine via the syn_spec parameter.
nest.CopyModel("tsodyks_synapse", "syn", syn_param)
nest.Connect([neurons[0]], [neurons[1]], syn_spec="syn")
Finally, we simulate the configuration using the command Simulate, where the simulation time Tend is passed as the argument. We plot the target neuron's membrane potential as a function of time.
nest.Simulate(Tend)
nest.voltage_trace.from_device(volts)
Tsodyks facilitating example¶
This script simulates two neurons. One is driven with DC input and connected to the other one with a facilitating Tsodyks synapse. The membrane potential trace of the second neuron is recorded.
This example reproduces Figure 1B of 1.
This example is analogous to tsodyks_depressing.py, except that different synapse parameters are used. Here, a small facilitation parameter U causes a slow saturation of the synaptic efficacy (Eq. 2.2), enabling a facilitating behavior.
References¶
- 1
Tsodyks M, Pawelzik K, Markram H (1998). Neural networks with dynamic synapses. Neural computation, http://dx.doi.org/10.1162/089976698300017502
See Also¶
Tsodyks depressing example
First, we import all necessary modules for simulation and plotting.
import nest
import nest.voltage_trace
import pylab
from numpy import exp
Second, the simulation parameters are assigned to variables. The neuron and synapse parameters are stored in dictionaries.
h = 0.1 # simulation step size (ms)
Tau = 40. # membrane time constant
Theta = 15. # threshold
E_L = 0. # resting potential (also used as reset potential)
R = 1. # membrane resistance (GOhm)
C = Tau / R # Tau (ms)/R in NEST units
TauR = 2. # refractory time
Tau_psc = 1.5 # time constant of PSC (= Tau_inact)
Tau_rec = 130. # recovery time
Tau_fac = 530. # facilitation time
U = 0.03 # facilitation parameter U
A = 1540. # PSC weight in pA
f = 20. / 1000. # frequency in Hz converted to 1/ms
Tend = 1200. # simulation time
TIstart = 50. # start time of dc
TIend = 1050. # end time of dc
I0 = Theta * C / Tau / (1 - exp(-(1 / f - TauR) / Tau)) # dc amplitude
neuron_param = {"tau_m": Tau,
"t_ref": TauR,
"tau_syn_ex": Tau_psc,
"tau_syn_in": Tau_psc,
"C_m": C,
"V_reset": E_L,
"E_L": E_L,
"V_m": E_L,
"V_th": Theta}
syn_param = {"tau_psc": Tau_psc,
"tau_rec": Tau_rec,
"tau_fac": Tau_fac,
"U": U,
"delay": 0.1,
"weight": A,
"u": 0.0,
"x": 1.0}
Third, we reset the kernel and set the resolution using SetKernelStatus.
nest.ResetKernel()
nest.SetKernelStatus({"resolution": h})
Fourth, the nodes are created using Create. We store the returned handles in variables for later reference.
neurons = nest.Create("iaf_psc_exp", 2)
dc_gen = nest.Create("dc_generator")
volts = nest.Create("voltmeter")
Fifth, the iaf_psc_exp neurons, the dc_generator and the voltmeter are configured using SetStatus, which expects a list of node handles and a parameter dictionary or a list of parameter dictionaries.
nest.SetStatus(neurons, neuron_param)
nest.SetStatus(dc_gen, {"amplitude": I0, "start": TIstart, "stop": TIend})
nest.SetStatus(volts, {"label": "voltmeter", "withtime": True, "withgid": True,
"interval": 1.})
Sixth, the dc_generator is connected to the first neuron (neurons[0]) and the voltmeter is connected to the second neuron (neurons[1]). The command Connect has different variants. Plain Connect just takes the handles of pre- and post-synaptic nodes and uses the default values for weight and delay. Note that the connection direction for the voltmeter reflects the signal flow in the simulation kernel, because it observes the neuron instead of receiving events from it.
nest.Connect(dc_gen, [neurons[0]])
nest.Connect(volts, [neurons[1]])
Seventh, the first neuron (neurons[0]) is connected to the second neuron (neurons[1]). The command CopyModel copies the tsodyks_synapse model to the new name syn with parameters syn_param. The manually defined model syn is used in the connection routine via the syn_spec parameter.
nest.CopyModel("tsodyks_synapse", "syn", syn_param)
nest.Connect([neurons[0]], [neurons[1]], syn_spec="syn")
Finally, we simulate the configuration using the command Simulate, where the simulation time Tend is passed as the argument. We plot the target neuron's membrane potential as a function of time.
nest.Simulate(Tend)
nest.voltage_trace.from_device(volts)
Example of the tsodyks2_synapse in NEST¶
This synapse model implements synaptic short-term depression and short-term facilitation according to 1 and 2. It solves Eq (2) from 1 and modulates U according to 2.
This connection merely scales the synaptic weight, based on the spike history and the parameters of the kinetic model. Thus, it is suitable for all types of synapses, that is, current- or conductance-based.
The parameter A_se from the publications is represented by the synaptic weight. The variable x in the synapse properties is the factor that scales the synaptic weight.
Parameters¶
The following parameters can be set in the status dictionary:
U - probability of release increment (U1) [0,1], default=0.5
u - Maximum probability of release (U_se) [0,1], default=0.5
x - current scaling factor of the weight, default=U
tau_rec - time constant for depression in ms, default=800 ms
tau_fac - time constant for facilitation in ms, default=0 (off)
Notes¶
Under identical conditions, the tsodyks2_synapse produces slightly lower peak amplitudes than the tsodyks_synapse. However, the qualitative behavior is identical.
This script compares the two synapse models.
References¶
- 1(1,2)
Tsodyks MV, and Markram H. (1997). The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. PNAS, 94(2), 719-23.
- 2
Fuhrmann G, Segev I, Markram H, and Tsodyks MV. (2002). Coding of temporal information by activity-dependent synapses. Journal of Neurophysiology, 87(1):140-148. https://doi.org/10.1152/jn.00258.2001
- 3
Maass W, and Markram H. (2002). Synapses as dynamic memory buffers. Neural Networks, 15(2), 155-161. http://dx.doi.org/10.1016/S0893-6080(01)00144-7
import nest
import nest.voltage_trace
nest.ResetKernel()
Parameter set for depression
dep_params = {"U": 0.67, "u": 0.67, 'x': 1.0, "tau_rec": 450.0,
"tau_fac": 0.0, "weight": 250.}
Parameter set for facilitation
fac_params = {"U": 0.1, "u": 0.1, 'x': 1.0, "tau_fac": 1000.,
"tau_rec": 100., "weight": 250.}
Now we assign the parameter set to the synapse models.
t1_params = fac_params        # for tsodyks2_synapse
t2_params = t1_params.copy()  # for tsodyks_synapse
nest.SetDefaults("tsodyks2_synapse", t1_params)
nest.SetDefaults("tsodyks_synapse", t2_params)
nest.SetDefaults("iaf_psc_exp", {"tau_syn_ex": 3.})
Create three neurons.
neuron = nest.Create("iaf_psc_exp", 3)
Neuron 1 produces spikes. Neurons 2 and 3 receive the spikes via the two synapse models.
nest.Connect([neuron[0]], [neuron[1]], syn_spec="tsodyks_synapse")
nest.Connect([neuron[0]], [neuron[2]], syn_spec="tsodyks2_synapse")
Now create two voltmeters to record the responses.
voltmeter = nest.Create("voltmeter", 2)
nest.SetStatus(voltmeter, {"withgid": True, "withtime": True})
Connect the voltmeters to the neurons.
nest.Connect([voltmeter[0]], [neuron[1]])
nest.Connect([voltmeter[1]], [neuron[2]])
Now simulate the standard STP protocol: a burst of spikes, followed by a pause and a recovery response.
nest.SetStatus([neuron[0]], "I_e", 376.0)
nest.Simulate(500.0)
nest.SetStatus([neuron[0]], "I_e", 0.0)
nest.Simulate(500.0)
nest.SetStatus([neuron[0]], "I_e", 376.0)
nest.Simulate(500.0)
Finally, generate voltage traces. Both are shown in the same plot and should be almost completely overlapping.
nest.voltage_trace.from_device([voltmeter[0]])
nest.voltage_trace.from_device([voltmeter[1]])
Example for the quantal_stp_synapse¶
The quantal_stp_synapse is a stochastic version of the Tsodyks-Markram model for synaptic short-term plasticity (STP).
This script compares the two variants of the Tsodyks/Markram synapse in NEST.
This synapse model implements synaptic short-term depression and short-term facilitation according to the quantal release model described by Fuhrmann et al. 1 and Loebel et al. 2.
Each presynaptic spike will stochastically activate a fraction of the available release sites. This fraction is binomially distributed and the release probability per site is governed by the Fuhrmann et al. (2002) model. The solution of the differential equations is taken from Maass and Markram 2002 3.
The connection weight is interpreted as the maximal weight that can be obtained if all n release sites are activated.
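The release mechanism can be pictured with a short standalone sketch (a hypothetical illustration using NumPy, independent of the NEST implementation): each presynaptic spike releases k out of the currently available sites, with k drawn from a binomial distribution, and the delivered amplitude scales with k/n.
import numpy as np

rng = np.random.default_rng(1)
n = 10        # total number of release sites
a = n         # sites currently available (recovered)
u = 0.5       # per-site release probability at this spike
weight = 1.0  # maximal weight, delivered only if all n sites release

k = rng.binomial(a, u)          # sites that actually release on this spike
psc_amplitude = weight * k / n  # delivered fraction of the maximal weight
a -= k                          # released sites must recover (cf. tau_rec)
print(k, psc_amplitude)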
Parameters¶
The following parameters can be set in the status dictionary:
U - Maximal fraction of available resources [0,1], default=0.5
u - available fraction of resources [0,1], default=0.5
p - probability that a vesicle is available, default = 1.0
n - total number of release sites, default = 1
a - number of available release sites, default = n
tau_rec - time constant for depression in ms, default=800 ms
tau_fac - time constant for facilitation in ms, default=0 (off)
References¶
- 1
Fuhrmann G, Segev I, Markram H, and Tsodyks MV. (2002). Coding of temporal information by activity-dependent synapses. Journal of Neurophysiology, 8. https://doi.org/10.1152/jn.00258.2001
- 2
Loebel, A., Silberberg, G., Helbig, D., Markram, H., Tsodyks, M. V, & Richardson, M. J. E. (2009). Multiquantal release underlies the distribution of synaptic efficacies in the neocortex. Frontiers in Computational Neuroscience, 3:27. doi:10.3389/neuro.10.027.
- 3
Maass W, and Markram H. (2002). Synapses as dynamic memory buffers. Neural Networks, 15(2), 155-161. http://dx.doi.org/10.1016/S0893-6080(01)00144-7
import nest
import nest.voltage_trace
import numpy
import pylab
nest.ResetKernel()
On average, the quantal_stp_synapse converges to the tsodyks2_synapse, so we can compare the two by running multiple trials.
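This convergence is expected: if a fraction x of the n sites is recovered and each releases independently with probability u, the expected released fraction per spike is \(\mathbb{E}[k]/n = u\,x\), which is exactly the deterministic scaling factor applied by the tsodyks2_synapse.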
First we define the number of trials as well as the number of release sites.
n_syn = 10.0 # number of synapses in a connection
n_trials = 100 # number of measurement trials
Next, we define parameter sets for facilitation
fac_params = {"U": 0.02, "u": 0.02, "tau_fac": 500.,
"tau_rec": 200., "weight": 1.}
Then, we assign the parameter set to the synapse models
t1_params = fac_params # for tsodyks2_synapse
t2_params = t1_params.copy() # for quantal_stp_synapse
t1_params['x'] = t1_params['U']
t2_params['n'] = n_syn
To make the responses comparable, we have to scale the weight by the number of synapses.
t2_params['weight'] = 1. / n_syn
Next, we change the defaults of the various models to our parameters.
nest.SetDefaults("tsodyks2_synapse", t1_params)
nest.SetDefaults("quantal_stp_synapse", t2_params)
nest.SetDefaults("iaf_psc_exp", {"tau_syn_ex": 3.})
We create three different neurons. Neuron one is the sender, the two other neurons receive the synapses.
neuron = nest.Create("iaf_psc_exp", 3)
The connection from neuron 1 to neuron 2 is a deterministic synapse.
nest.Connect([neuron[0]], [neuron[1]], syn_spec="tsodyks2_synapse")
The connection from neuron 1 to neuron 3 has a stochastic quantal_stp_synapse.
nest.Connect([neuron[0]], [neuron[2]], syn_spec="quantal_stp_synapse")
The voltmeter will show us the synaptic responses in neurons 2 and 3.
voltmeter = nest.Create("voltmeter", 2)
nest.SetStatus(voltmeter, {"withgid": True, "withtime": True})
One dry run to bring all synapses into their rest state. The default initialization does not achieve this. In large network simulations this problem is not noticeable, but in small simulations like this one, we would see it.
nest.SetStatus([neuron[0]], "I_e", 376.0)
nest.Simulate(500.0)
nest.SetStatus([neuron[0]], "I_e", 0.0)
nest.Simulate(1000.0)
Only now do we connect the voltmeter to the neurons.
nest.Connect([voltmeter[0]], [neuron[1]])
nest.Connect([voltmeter[1]], [neuron[2]])
This loop runs over the n_trials trials and performs a standard protocol of a high-rate response, followed by a pause and then a recovery response.
for t in range(n_trials):
    nest.SetStatus([neuron[0]], "I_e", 376.0)
    nest.Simulate(500.0)
    nest.SetStatus([neuron[0]], "I_e", 0.0)
    nest.Simulate(1000.0)
Flush the last voltmeter events from the queue by simulating one time-step.
nest.Simulate(.1)
Extract the reference trace.
vm = numpy.array(nest.GetStatus([voltmeter[1]], 'events')[0]['V_m'])
vm_reference = numpy.array(nest.GetStatus([voltmeter[0]], 'events')[0]['V_m'])
vm.shape = (n_trials, 1500)
vm_reference.shape = (n_trials, 1500)
Now compute the mean of all trials and plot against trials and references.
vm_mean = numpy.array([numpy.mean(vm[:, i]) for (i, j) in enumerate(vm[0, :])])
vm_ref_mean = numpy.array([numpy.mean(vm_reference[:, i])
for (i, j) in enumerate(vm_reference[0, :])])
pylab.plot(vm_mean)
pylab.plot(vm_ref_mean)
Finally, print the mean-squared error between the trial-average and the reference trace. The value should be < 10^-9.
print(numpy.mean((vm_ref_mean - vm_mean) ** 2))
Intrinsic currents spiking¶
This example illustrates a neuron receiving spiking input through several different receptors (AMPA, NMDA, GABA_A, GABA_B), provoking spike output. The model, ht_neuron, also has intrinsic currents (I_NaP, I_KNa, I_T, and I_h). It is a slightly simplified implementation of the neuron model proposed in 1.
The neuron is bombarded with spike trains from four Poisson generators, which are connected to the AMPA, NMDA, GABA_A, and GABA_B receptors, respectively.
References¶
- 1
Hill and Tononi (2005) Modeling sleep and wakefulness in the thalamocortical system. J Neurophysiol 93:1671 http://dx.doi.org/10.1152/jn.00915.2004.
See Also¶
Intrinsic currents subthreshold
We import all necessary modules for simulation, analysis and plotting.
import nest
import numpy as np
import matplotlib.pyplot as plt
Additionally, we set the verbosity using set_verbosity to suppress info messages. We also reset the kernel to be sure to start with a clean NEST.
nest.set_verbosity("M_WARNING")
nest.ResetKernel()
We define the simulation parameters:
The rate of the input spike trains
The weights of the different receptors (names must match receptor types)
The time to simulate
Note that all parameter values should be doubles, since NEST expects doubles.
rate_in = 100.
w_recep = {'AMPA': 30., 'NMDA': 30., 'GABA_A': 5., 'GABA_B': 10.}
t_sim = 250.
num_recep = len(w_recep)
We create
one neuron instance
one Poisson generator instance for each synapse type
one multimeter to record from the neuron the membrane potential, the threshold potential, the synaptic conductances, and the intrinsic currents
See Intrinsic currents subthreshold for more details on multimeter configuration.
nrn = nest.Create('ht_neuron')
p_gens = nest.Create('poisson_generator', 4,
params={'rate': rate_in})
mm = nest.Create('multimeter',
params={'interval': 0.1,
'record_from': ['V_m', 'theta',
'g_AMPA', 'g_NMDA',
'g_GABA_A', 'g_GABA_B',
'I_NaP', 'I_KNa', 'I_T', 'I_h']})
We now connect each Poisson generator with the neuron through a different receptor type.
First, we need to obtain the numerical codes for the receptor types from the model. The receptor_types entry of the default dictionary for the ht_neuron model is a dictionary mapping receptor names to codes.
In the loop, we use Python’s tuple unpacking mechanism to unpack dictionary entries from our w_recep dictionary.
Note that we need to pack the pg variable into a list before passing it to Connect, because iterating over the p_gens list makes pg a “naked” GID.
receptors = nest.GetDefaults('ht_neuron')['receptor_types']
for pg, (rec_name, rec_wgt) in zip(p_gens, w_recep.items()):
    nest.Connect([pg], nrn, syn_spec={'receptor_type': receptors[rec_name],
                                      'weight': rec_wgt})
We then connect the multimeter. Note that the multimeter is connected to the neuron, not the other way around.
nest.Connect(mm, nrn)
We are now ready to simulate.
nest.Simulate(t_sim)
We now fetch the data recorded by the multimeter. The data are returned as a dictionary with entry times containing timestamps for all recorded data, plus one entry per recorded quantity. All data is contained in the events entry of the status dictionary returned by the multimeter. Because all NEST functions return arrays, we need to pick out element 0 from the result of GetStatus.
data = nest.GetStatus(mm)[0]['events']
t = data['times']
The following function turns a name such as I_NaP into proper TeX code \(I_{\mathrm{NaP}}\) for a pretty label.
def texify_name(name):
    return r'${}_{{\mathrm{{{}}}}}$'.format(*name.split('_'))
The next step is to plot the results. We create a new figure, and add one subplot each for membrane and threshold potential, synaptic conductances, and intrinsic currents.
fig = plt.figure()

Vax = fig.add_subplot(311)
Vax.plot(t, data['V_m'], 'b', lw=2, label=r'$V_m$')
Vax.plot(t, data['theta'], 'g', lw=2, label=r'$\Theta$')
Vax.set_ylabel('Potential [mV]')
try:
    Vax.legend(fontsize='small')
except TypeError:
    Vax.legend()  # work-around for older Matplotlib versions
Vax.set_title('ht_neuron driven by Poisson processes')

Gax = fig.add_subplot(312)
for gname in ('g_AMPA', 'g_NMDA', 'g_GABA_A', 'g_GABA_B'):
    Gax.plot(t, data[gname], lw=2, label=texify_name(gname))
try:
    Gax.legend(fontsize='small')
except TypeError:
    Gax.legend()  # work-around for older Matplotlib versions
Gax.set_ylabel('Conductance [nS]')

Iax = fig.add_subplot(313)
for iname, color in (('I_h', 'maroon'), ('I_T', 'orange'),
                     ('I_NaP', 'crimson'), ('I_KNa', 'aqua')):
    Iax.plot(t, data[iname], color=color, lw=2, label=texify_name(iname))
try:
    Iax.legend(fontsize='small')
except TypeError:
    Iax.legend()  # work-around for older Matplotlib versions
Iax.set_ylabel('Current [pA]')
Iax.set_xlabel('Time [ms]')
Intrinsic currents subthreshold¶
This example illustrates how to record from a model with multiple
intrinsic currents and visualize the results. This is illustrated
using the ht_neuron
which has four intrinsic currents: I_NaP
,
I_KNa
, I_T
, and I_h
. It is a slightly simplified implementation of
neuron model proposed in 1.
The neuron is driven by DC current, which is alternated between depolarizing and hyperpolarizing. Hyperpolarization intervals become increasingly longer.
References¶
- 1
Hill and Tononi (2005) Modeling Sleep and Wakefulness in the Thalamocortical System J Neurophysiol 93:1671 http://dx.doi.org/10.1152/jn.00915.2004.
See Also¶
Intrinsic currents spiking
We import all necessary modules for simulation, analysis and plotting.
import nest
import numpy as np
import matplotlib.pyplot as plt
Additionally, we set the verbosity using set_verbosity to suppress info messages. We also reset the kernel to be sure to start with a clean NEST.
nest.set_verbosity("M_WARNING")
nest.ResetKernel()
We define simulation parameters:
The length of depolarization intervals
The length of hyperpolarization intervals
The amplitude for de- and hyperpolarizing currents
The end of the time window to plot
n_blocks = 5
t_block = 20.
t_dep = [t_block] * n_blocks
t_hyp = [t_block * 2 ** n for n in range(n_blocks)]
I_dep = 10.
I_hyp = -5.
t_end = 500.
We create the one neuron instance and the DC current generator and store the returned handles.
nrn = nest.Create('ht_neuron')
dc = nest.Create('dc_generator')
We create a multimeter to record
the membrane potential V_m
the threshold value theta
the intrinsic currents I_NaP, I_KNa, I_T, I_h
by passing these names in the record_from list.
To find out which quantities can be recorded from a given neuron, run:
nest.GetDefaults('ht_neuron')['recordables']
The result will contain an entry like:
<SLILiteral: V_m>
for each recordable quantity. You need to pass the value of the SLILiteral, in this case V_m, in the record_from list.
We want to record values with 0.1 ms resolution, so we set the recording interval as well; the default recording resolution is 1 ms.
# create multimeter and configure it to record all information
# we want at 0.1 ms resolution
mm = nest.Create('multimeter',
params={'interval': 0.1,
'record_from': ['V_m', 'theta',
'I_NaP', 'I_KNa', 'I_T', 'I_h']}
)
We connect the DC generator and the multimeter to the neuron. Note that the multimeter, just like the voltmeter, is connected to the neuron, not the neuron to the multimeter.
nest.Connect(dc, nrn)
nest.Connect(mm, nrn)
We are ready to simulate. We alternate between driving the neuron with depolarizing and hyperpolarizing currents. Before each simulation interval, we set the amplitude of the DC generator to the correct value.
for t_sim_dep, t_sim_hyp in zip(t_dep, t_hyp):
    nest.SetStatus(dc, {'amplitude': I_dep})
    nest.Simulate(t_sim_dep)
    nest.SetStatus(dc, {'amplitude': I_hyp})
    nest.Simulate(t_sim_hyp)
We now fetch the data recorded by the multimeter. The data are returned as a dictionary with entry times containing timestamps for all recorded data, plus one entry per recorded quantity. All data is contained in the events entry of the status dictionary returned by the multimeter. Because all NEST functions return arrays, we need to pick out element 0 from the result of GetStatus.
data = nest.GetStatus(mm)[0]['events']
t = data['times']
The next step is to plot the results. We create a new figure, add a single subplot and plot at first membrane potential and threshold.
fig = plt.figure()
Vax = fig.add_subplot(111)
Vax.plot(t, data['V_m'], 'b-', lw=2, label=r'$V_m$')
Vax.plot(t, data['theta'], 'g-', lw=2, label=r'$\Theta$')
Vax.set_ylim(-80., 0.)
Vax.set_ylabel('Voltage [mV]')
Vax.set_xlabel('Time [ms]')
To plot the input current, we need to create an input current trace. We construct it from the durations of the de- and hyperpolarizing inputs and add the delay in the connection between DC generator and neuron:
We find the delay by checking the status of the dc->nrn connection.
We find the resolution of the simulation from the kernel status.
Each current interval begins one time step after the previous interval, is delayed by the delay and effective for the given duration.
We build the time axis incrementally. We only add the delay when adding the first time point after t=0. All subsequent points are then automatically shifted by the delay.
delay = nest.GetStatus(nest.GetConnections(dc, nrn))[0]['delay']
dt = nest.GetKernelStatus('resolution')
t_dc, I_dc = [0], [0]

for td, th in zip(t_dep, t_hyp):
    t_prev = t_dc[-1]
    t_start_dep = t_prev + dt if t_prev > 0 else t_prev + dt + delay
    t_end_dep = t_start_dep + td
    t_start_hyp = t_end_dep + dt
    t_end_hyp = t_start_hyp + th
    t_dc.extend([t_start_dep, t_end_dep, t_start_hyp, t_end_hyp])
    I_dc.extend([I_dep, I_dep, I_hyp, I_hyp])
The following function turns a name such as I_NaP into proper TeX code \(I_{\mathrm{NaP}}\) for a pretty label.
def texify_name(name):
    return r'${}_{{\mathrm{{{}}}}}$'.format(*name.split('_'))
Next, we add a right vertical axis and plot the currents with respect to that axis.
Iax = Vax.twinx()
Iax.plot(t_dc, I_dc, 'k-', lw=2, label=texify_name('I_DC'))
for iname, color in (('I_h', 'maroon'), ('I_T', 'orange'),
                     ('I_NaP', 'crimson'), ('I_KNa', 'aqua')):
    Iax.plot(t, data[iname], color=color, lw=2, label=texify_name(iname))
Iax.set_xlim(0, t_end)
Iax.set_ylim(-10., 15.)
Iax.set_ylabel('Current [pA]')
Iax.set_title('ht_neuron driven by DC current')
We need to make a little extra effort to combine lines from the two axes into one legend.
lines_V, labels_V = Vax.get_legend_handles_labels()
lines_I, labels_I = Iax.get_legend_handles_labels()
try:
    Iax.legend(lines_V + lines_I, labels_V + labels_I, fontsize='small')
except TypeError:
    # work-around for older Matplotlib versions
    Iax.legend(lines_V + lines_I, labels_V + labels_I)
Note that I_KNa is not activated in this example because the neuron does not spike. I_T has only a very small amplitude.
Network of linear rate neurons¶
This script simulates an excitatory and an inhibitory population of lin_rate_ipn neurons with delayed excitatory and instantaneous inhibitory connections. The rate of all neurons is recorded using a multimeter. The resulting rate for one excitatory and one inhibitory neuron is plotted.
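Schematically, each lin_rate_ipn unit obeys a linear stochastic rate equation of the form (a simplified sketch of the NEST rate-model framework; symbols correspond to the parameter names used below) \(\tau \dot{X}_i(t) = -\lambda X_i(t) + \mu + \sigma \xi_i(t) + \sum_j w_{ij} X_j(t - d_{ij})\), where excitatory inputs arrive with delay d_e via rate_connection_delayed and inhibitory inputs act instantaneously via rate_connection_instantaneous.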
import nest
import pylab
import numpy
Assigning the simulation parameters to variables.
dt = 0.1 # the resolution in ms
T = 100.0 # Simulation time in ms
Definition of the number of neurons
order = 50
NE = int(4 * order) # number of excitatory neurons
NI = int(1 * order) # number of inhibitory neurons
N = int(NE+NI) # total number of neurons
Definition of the connections
d_e = 5. # delay of excitatory connections in ms
g = 5.0 # ratio inhibitory weight/excitatory weight
epsilon = 0.1 # connection probability
w = 0.1/numpy.sqrt(N) # excitatory connection strength
KE = int(epsilon * NE) # number of excitatory synapses per neuron (outdegree)
KI = int(epsilon * NI) # number of inhibitory synapses per neuron (outdegree)
K_tot = int(KI + KE) # total number of synapses per neuron
connection_rule = 'fixed_outdegree' # connection rule
Definition of the neuron model and its neuron parameters
neuron_model = 'lin_rate_ipn' # neuron model
neuron_params = {'linear_summation': True,
# type of non-linearity (not affecting linear rate models)
'tau': 10.0,
# time constant of neuronal dynamics in ms
'mu': 2.0,
# mean input
'sigma': 5.
# noise parameter
}
Configuration of the simulation kernel with the previously defined time resolution used in the simulation. Setting print_time to True prints the already processed simulation time as well as its percentage of the total simulation time.
nest.ResetKernel()
nest.SetKernelStatus({"resolution": dt, "use_wfr": False,
"print_time": True,
"overwrite_files": True})
print("Building network")
Configuration of the neuron model using SetDefaults.
nest.SetDefaults(neuron_model, neuron_params)
Creation of the nodes using Create.
n_e = nest.Create(neuron_model, NE)
n_i = nest.Create(neuron_model, NI)
To record from the rate neurons, a multimeter is created and the parameter record_from is set to rate, as well as the recording interval to dt.
mm = nest.Create('multimeter', params={'record_from': ['rate'],
'interval': dt})
Specify synapse and connection dictionaries:
Connections originating from excitatory neurons are associated with a delay d_e (rate_connection_delayed).
Connections originating from inhibitory neurons are not associated with a delay (rate_connection_instantaneous).
syn_e = {'weight': w, 'delay': d_e, 'model': 'rate_connection_delayed'}
syn_i = {'weight': -g*w, 'model': 'rate_connection_instantaneous'}
conn_e = {'rule': connection_rule, 'outdegree': KE}
conn_i = {'rule': connection_rule, 'outdegree': KI}
Connect rate units
nest.Connect(n_e, n_e, conn_e, syn_e)
nest.Connect(n_i, n_i, conn_i, syn_i)
nest.Connect(n_e, n_i, conn_i, syn_e)
nest.Connect(n_i, n_e, conn_e, syn_i)
Connect recording device to rate units
nest.Connect(mm, n_e+n_i)
Simulate the network
nest.Simulate(T)
Plot rates of one excitatory and one inhibitory neuron
data = nest.GetStatus(mm)[0]['events']
rate_ex = data['rate'][numpy.where(data['senders'] == n_e[0])]
rate_in = data['rate'][numpy.where(data['senders'] == n_i[0])]
times = data['times'][numpy.where(data['senders'] == n_e[0])]
pylab.figure()
pylab.plot(times, rate_ex, label='excitatory')
pylab.plot(times, rate_in, label='inhibitory')
pylab.xlabel('time (ms)')
pylab.ylabel('rate (a.u.)')
pylab.show()
Rate neuron decision making¶
A binary decision is implemented in the form of two rate neurons engaging in mutual inhibition.
Evidence for each decision is reflected by the mean input experienced by the respective neuron. The activity of each neuron is recorded using multimeter devices.
It can be observed how noise as well as the difference in evidence affects which neuron exhibits larger activity and hence which decision will be made.
import nest
import pylab
import numpy
First, the function build_network is defined to build the network and return the handles of the two decision units and the multimeter.
def build_network(sigma, dt):
    nest.ResetKernel()
    nest.SetKernelStatus({'resolution': dt, 'use_wfr': False})
    Params = {'lambda': 0.1, 'sigma': sigma, 'tau': 1., 'rectify_output': True}
    D1 = nest.Create('lin_rate_ipn', params=Params)
    D2 = nest.Create('lin_rate_ipn', params=Params)
    nest.Connect(D1, D2, 'all_to_all', {
        'model': 'rate_connection_instantaneous', 'weight': -0.2})
    nest.Connect(D2, D1, 'all_to_all', {
        'model': 'rate_connection_instantaneous', 'weight': -0.2})
    mm = nest.Create('multimeter')
    nest.SetStatus(mm, {'interval': dt, 'record_from': ['rate']})
    nest.Connect(mm, D1, syn_spec={'delay': dt})
    nest.Connect(mm, D2, syn_spec={'delay': dt})
    return D1, D2, mm
The function build_network takes the noise parameter sigma and the time resolution as arguments. First, the kernel is reset and use_wfr (waveform relaxation) is set to false, while the resolution is set to the specified value dt. Two rate neurons with linear activation functions are created and the handles are stored in the variables D1 and D2. The output of both decision units is rectified at zero. The two decision units are coupled via mutual inhibition. Next, the multimeter is created, the handle stored in mm, and the option record_from is set. The multimeter is then connected to the two units in order to ‘observe’ them. The Connect function takes the handles as input.
The decision making process is simulated for three different levels of noise and three differences in evidence for a given decision. The activity of both decision units is plotted for each scenario.
fig_size = [14, 8]
fig_rows = 3
fig_cols = 3
fig_plots = fig_rows * fig_cols
face = 'white'
edge = 'white'
ax = [None] * fig_plots
fig = pylab.figure(facecolor=face, edgecolor=edge, figsize=fig_size)
dt = 1e-3
sigma = [0.0, 0.1, 0.2]
dE = [0.0, 0.004, 0.008]
T = numpy.linspace(0, 200, int(200 / dt - 1))
for i in range(9):
    c = i % 3
    r = int(i / 3)
    D1, D2, mm = build_network(sigma[r], dt)
First, using build_network, the network is built and the handles of the decision units and the multimeter are stored in D1, D2 and mm.
    nest.Simulate(100.0)
    nest.SetStatus(D1, {'mu': 1. + dE[c]})
    nest.SetStatus(D2, {'mu': 1. - dE[c]})
    nest.Simulate(100.0)
The network is simulated using Simulate, which takes the desired simulation time in milliseconds and advances the network state by this amount of time. After an initial period in the absence of evidence for either decision, evidence is given by changing the mean input mu of each decision unit.
    data = nest.GetStatus(mm)
    senders = data[0]['events']['senders']
    voltages = data[0]['events']['rate']
The activity values (‘voltages’) are read out by the multimeter.
    ax[i] = fig.add_subplot(fig_rows, fig_cols, i + 1)
    ax[i].plot(T, voltages[numpy.where(senders == D1)],
               'b', linewidth=2, label="D1")
    ax[i].plot(T, voltages[numpy.where(senders == D2)],
               'r', linewidth=2, label="D2")
    ax[i].set_ylim([-.5, 12.])
    ax[i].get_xaxis().set_ticks([])
    ax[i].get_yaxis().set_ticks([])
    if c == 0:
        ax[i].set_ylabel("activity ($\sigma=%.1f$) " % (sigma[r]))
        ax[i].get_yaxis().set_ticks([0, 3, 6, 9, 12])
    if r == 0:
        ax[i].set_title("$\Delta E=%.3f$ " % (dE[c]))
    if c == 2:
        pylab.legend(loc=0)
    if r == 2:
        ax[i].get_xaxis().set_ticks([0, 50, 100, 150, 200])
        ax[i].set_xlabel('time (ms)')
The activity of the two units is plotted in each scenario.
In the absence of noise, the network will not make a decision if evidence for both choices is equal. With noise, this symmetry can be broken and a decision will be taken despite identical evidence.
As evidence for D1 relative to D2 increases, it becomes more likely that the corresponding decision will be taken. For small differences in the evidence for the two decisions, noise can lead to the ‘wrong’ decision.
pylab.show()
Comparing precise and grid-based neuron models¶
In traditional time-driven simulations, spikes are constrained to the time grid at a user-defined resolution. The precise spiking models overcome this by handling spikes in continuous time 1 and 2.
The precise spiking neuron models in NEST include iaf_psc_exp_ps, iaf_psc_alpha_ps, iaf_psc_alpha_presc and iaf_psc_delta_ps. More detailed information about the precise spiking models can be found here: https://www.nest-simulator.org/simulations-with-precise-spike-times/
This example compares the conventional grid-constrained model and the precise version for an integrate-and-fire neuron model with exponential post-synaptic currents 2.
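The grid constraint itself is easy to picture with a tiny standalone snippet (an illustration of the general idea, not NEST code): a threshold crossing that falls between two grid points is reported at the end of that step by the grid-based model, so the reported spike time degrades as the resolution coarsens.
import numpy as np

t_crossing = 3.14          # hypothetical threshold-crossing time in ms
for h in [0.1, 0.5, 1.0]:  # the resolutions used below
    t_grid = np.ceil(t_crossing / h) * h
    print(h, t_grid)       # approx. 3.2, 3.5 and 4.0 ms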
References¶
- 1
Morrison A, Straube S, Plesser HE, Diesmann M. 2007. Exact subthreshold integration with continuous spike times in discrete-time neural network simulations. Neural Computation. 19(1):47-79. https://doi.org/10.1162/neco.2007.19.1.47
- 2(1,2)
Hanuschkin A, Kunkel S, Helias M, Morrison A and Diesmann M. 2010. A general and efficient method for incorporating precise spike times in globally time-driven simulations. Frontiers in Neuroinformatics. 4:113. https://doi.org/10.3389/fninf.2010.00113
First, we import all necessary modules for simulation, analysis, and plotting.
import nest
import pylab
Second, we assign the simulation parameters to variables.
simtime = 100.0 # ms
stim_current = 700.0 # pA
resolutions = [0.1, 0.5, 1.0] # ms
Now, we simulate the two versions of the neuron models (discrete-time: iaf_psc_exp; precise: iaf_psc_exp_ps) for each of the defined resolutions. The neurons use their default parameters and we stimulate them by injecting a current using a dc_generator device. The membrane potential is recorded by a voltmeter and the spikes are recorded by a spike_detector, whose property precise_times is set to True. The data is stored in a dictionary for later use.
data = {}

for h in resolutions:
    data[h] = {}
    for model in ["iaf_psc_exp", "iaf_psc_exp_ps"]:
        nest.ResetKernel()
        nest.SetKernelStatus({'resolution': h})
        neuron = nest.Create(model)
        voltmeter = nest.Create("voltmeter", params={"interval": h})
        dc = nest.Create("dc_generator", params={"amplitude": stim_current})
        sd = nest.Create("spike_detector", params={"precise_times": True})
        nest.Connect(voltmeter, neuron)
        nest.Connect(dc, neuron)
        nest.Connect(neuron, sd)
        nest.Simulate(simtime)
        vm_status = nest.GetStatus(voltmeter, 'events')[0]
        sd_status = nest.GetStatus(sd, 'events')[0]
        data[h][model] = {"vm_times": vm_status['times'],
                          "vm_values": vm_status['V_m'],
                          "spikes": sd_status['times'],
                          "V_th": nest.GetStatus(neuron, 'V_th')[0]}
After the simulation, we plot the results. The figure illustrates the membrane potential excursion of the two models due to the injected current, simulated for 100 ms, with a different timestep in each panel. The blue line is the voltage trace of the discrete-time neuron, the red line is that of the precise spiking version of the same model.
Please note that the temporal differences between the traces in the different panels are caused by the different resolutions used.
colors = ["#3465a4", "#cc0000"]

for v, h in enumerate(sorted(data)):
    plot = pylab.subplot(len(data), 1, v + 1)
    plot.set_title("Resolution: {0} ms".format(h))
    for i, model in enumerate(data[h]):
        times = data[h][model]["vm_times"]
        potentials = data[h][model]["vm_values"]
        spikes = data[h][model]["spikes"]
        spikes_y = [data[h][model]["V_th"]] * len(spikes)
        plot.plot(times, potentials, "-", c=colors[i], ms=5, lw=2, label=model)
        plot.plot(spikes, spikes_y, ".", c=colors[i], ms=5, lw=2)
    if v == 2:
        plot.legend(loc=4)
    else:
        plot.set_xticklabels('')
Sinusoidal poisson generator example¶
This script demonstrates the use of the sinusoidal_poisson_generator and its different parameters and modes. The source code of the model can be found in models/sinusoidal_poisson_generator.h.
The script is structured into two parts and creates one common figure.
In Part 1, two instances of the sinusoidal_poisson_generator are created with different parameters. Part 2 illustrates the effect of the individual_spike_trains switch.
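For reference, the instantaneous rate profile of the generator is sinusoidal (parameter names as in the dictionaries below; frequency in Hz, phase in degrees): \(\lambda(t) = \mathrm{rate} + \mathrm{amplitude}\cdot\sin(2\pi \cdot \mathrm{frequency}\cdot t + \mathrm{phase})\), with negative values clipped to zero, which matters for the second parameter set below where the DC rate is 0.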
We import the modules required to simulate, analyze and plot this example.
import nest
import matplotlib.pyplot as plt
import numpy as np
nest.ResetKernel() # in case we run the script multiple times from iPython
We create two instances of the sinusoidal_poisson_generator with two different parameter sets using Create. Moreover, we create devices to record firing rates (multimeter) and spikes (spike_detector) and connect them to the generators using Connect.
nest.SetKernelStatus({'resolution': 0.01})
g = nest.Create('sinusoidal_poisson_generator', n=2,
params=[{'rate': 10000.0,
'amplitude': 5000.0,
'frequency': 10.0,
'phase': 0.0},
{'rate': 0.0,
'amplitude': 10000.0,
'frequency': 5.0,
'phase': 90.0}])
m = nest.Create('multimeter', n=2, params={'interval': 0.1, 'withgid': False,
'record_from': ['rate']})
s = nest.Create('spike_detector', n=2, params={'withgid': False})
nest.Connect(m, g, 'one_to_one')
nest.Connect(g, s, 'one_to_one')
print(nest.GetStatus(m))
nest.Simulate(200)
After simulating, the spikes are extracted from the spike_detector using GetStatus and plots are created with panels for the PST and ISI histograms.
colors = ['b', 'g']

for j in range(2):
    ev = nest.GetStatus([m[j]])[0]['events']
    t = ev['times']
    r = ev['rate']
    sp = nest.GetStatus([s[j]])[0]['events']['times']
    plt.subplot(221)
    h, e = np.histogram(sp, bins=np.arange(0., 201., 5.))
    plt.plot(t, r, color=colors[j])
    plt.step(e[:-1], h * 1000 / 5., color=colors[j], where='post')
    plt.title('PST histogram and firing rates')
    plt.ylabel('Spikes per second')
    plt.subplot(223)
    plt.hist(np.diff(sp), bins=np.arange(0., 1.005, 0.02),
             histtype='step', color=colors[j])
    plt.title('ISI histogram')
The kernel is reset and the number of threads set to 4.
nest.ResetKernel()
nest.SetKernelStatus({'local_num_threads': 4})
A sinusoidal_poisson_generator with individual_spike_trains set to True is created and connected to 20 parrot neurons, whose spikes are recorded by a spike_detector. After simulating, a raster plot of the spikes is created.
g = nest.Create('sinusoidal_poisson_generator',
params={'rate': 100.0, 'amplitude': 50.0,
'frequency': 10.0, 'phase': 0.0,
'individual_spike_trains': True})
p = nest.Create('parrot_neuron', 20)
s = nest.Create('spike_detector')
nest.Connect(g, p, 'all_to_all')
nest.Connect(p, s, 'all_to_all')
nest.Simulate(200)
ev = nest.GetStatus(s)[0]['events']
plt.subplot(222)
plt.plot(ev['times'], ev['senders'] - min(ev['senders']), 'o')
plt.ylim([-0.5, 19.5])
plt.yticks([])
plt.title('Individual spike trains for each target')
The kernel is reset again and the whole procedure is repeated for a sinusoidal_poisson_generator with individual_spike_trains set to False. The plot shows that in this case, all neurons receive the same spike train from the sinusoidal_poisson_generator.
nest.ResetKernel()
nest.SetKernelStatus({'local_num_threads': 4})
g = nest.Create('sinusoidal_poisson_generator',
params={'rate': 100.0, 'amplitude': 50.0,
'frequency': 10.0, 'phase': 0.0,
'individual_spike_trains': False})
p = nest.Create('parrot_neuron', 20)
s = nest.Create('spike_detector')
nest.Connect(g, p, 'all_to_all')
nest.Connect(p, s, 'all_to_all')
nest.Simulate(200)
ev = nest.GetStatus(s)[0]['events']
plt.subplot(224)
plt.plot(ev['times'], ev['senders'] - min(ev['senders']), 'o')
plt.ylim([-0.5, 19.5])
plt.yticks([])
plt.title('One spike train for all targets')
Sinusoidal gamma generator example¶
This script demonstrates the use of the sinusoidal_gamma_generator and its different parameters and modes. The source code of the model can be found in models/sinusoidal_gamma_generator.h.
The script is structured into two parts, each of which generates its own figure. In Part 1A, two generators are created with different orders of the underlying gamma process and their resulting PST (peristimulus time) and ISI (inter-spike interval) histograms are plotted. Part 1B illustrates the effect of the individual_spike_trains switch. In Part 2, the effects of different settings for rate, phase and frequency are demonstrated.
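As an aid for reading the ISI histograms (a standard renewal-theory fact, not part of the original text): a gamma process of order \(k\) has an ISI coefficient of variation of \(1/\sqrt{k}\), so the order-10 generator fires considerably more regularly than the order-2 generator at the same mean rate.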
First, we import all necessary modules to simulate, analyze and plot this example.
import nest
import matplotlib.pyplot as plt
import numpy as np
nest.ResetKernel() # in case we run the script multiple times from iPython
We first create a figure for the plot and set the resolution of NEST.
plt.figure()
nest.SetKernelStatus({'resolution': 0.01})
Then we create two instances of the sinusoidal_gamma_generator with two different orders of the underlying gamma process using Create. Moreover, we create devices to record firing rates (multimeter) and spikes (spike_detector) and connect them to the generators using Connect.
g = nest.Create('sinusoidal_gamma_generator', n=2,
params=[{'rate': 10000.0, 'amplitude': 5000.0,
'frequency': 10.0, 'phase': 0.0, 'order': 2.0},
{'rate': 10000.0, 'amplitude': 5000.0,
'frequency': 10.0, 'phase': 0.0, 'order': 10.0}])
m = nest.Create('multimeter', n=2, params={'interval': 0.1, 'withgid': False,
'record_from': ['rate']})
s = nest.Create('spike_detector', n=2, params={'withgid': False})
nest.Connect(m, g, 'one_to_one')
nest.Connect(g, s, 'one_to_one')
nest.Simulate(200)
After simulating, the spikes are extracted from the spike_detector using GetStatus and plots are created with panels for the PST and ISI histograms.
colors = ['b', 'g']

for j in range(2):
    ev = nest.GetStatus([m[j]])[0]['events']
    t = ev['times']
    r = ev['rate']
    sp = nest.GetStatus([s[j]])[0]['events']['times']
    plt.subplot(221)
    h, e = np.histogram(sp, bins=np.arange(0., 201., 5.))
    plt.plot(t, r, color=colors[j])
    plt.step(e[:-1], h * 1000 / 5., color=colors[j], where='post')
    plt.title('PST histogram and firing rates')
    plt.ylabel('Spikes per second')
    plt.subplot(223)
    plt.hist(np.diff(sp), bins=np.arange(0., 0.505, 0.01),
             histtype='step', color=colors[j])
    plt.title('ISI histogram')
The kernel is reset and the number of threads set to 4.
nest.ResetKernel()
nest.SetKernelStatus({'local_num_threads': 4})
First, a sinusoidal_gamma_generator with individual_spike_trains set to True is created and connected to 20 parrot neurons, whose spikes are recorded by a spike detector. After simulating, a raster plot of the spikes is created.
g = nest.Create('sinusoidal_gamma_generator',
params={'rate': 100.0, 'amplitude': 50.0,
'frequency': 10.0, 'phase': 0.0, 'order': 3.,
'individual_spike_trains': True})
p = nest.Create('parrot_neuron', 20)
s = nest.Create('spike_detector')
nest.Connect(g, p)
nest.Connect(p, s)
nest.Simulate(200)
ev = nest.GetStatus(s)[0]['events']
plt.subplot(222)
plt.plot(ev['times'], ev['senders'] - min(ev['senders']), 'o')
plt.ylim([-0.5, 19.5])
plt.yticks([])
plt.title('Individual spike trains for each target')
The kernel is reset again and the whole procedure is repeated for a sinusoidal_gamma_generator with individual_spike_trains set to False. The plot shows that in this case, all neurons receive the same spike train from the sinusoidal_gamma_generator.
nest.ResetKernel()
nest.SetKernelStatus({'local_num_threads': 4})
g = nest.Create('sinusoidal_gamma_generator',
params={'rate': 100.0, 'amplitude': 50.0,
'frequency': 10.0, 'phase': 0.0, 'order': 3.,
'individual_spike_trains': False})
p = nest.Create('parrot_neuron', 20)
s = nest.Create('spike_detector')
nest.Connect(g, p)
nest.Connect(p, s)
nest.Simulate(200)
ev = nest.GetStatus(s)[0]['events']
plt.subplot(224)
plt.plot(ev['times'], ev['senders'] - min(ev['senders']), 'o')
plt.ylim([-0.5, 19.5])
plt.yticks([])
plt.title('One spike train for all targets')
In part 2, multiple generators are created with different settings for rate, phase and frequency. First, we define an auxiliary function, which simulates n generators for t ms. After t/2, the parameter dictionary of the generators is changed from initial to after.
def step(t, n, initial, after, seed=1, dt=0.05):
    nest.ResetKernel()
    nest.SetStatus([0], [{"resolution": dt}])
    nest.SetStatus([0], [{"grng_seed": 256 * seed + 1}])
    nest.SetStatus([0], [{"rng_seeds": [256 * seed + 2]}])
    g = nest.Create('sinusoidal_gamma_generator', n, params=initial)
    sd = nest.Create('spike_detector')
    nest.Connect(g, sd)
    nest.Simulate(t / 2)
    nest.SetStatus(g, after)
    nest.Simulate(t / 2)
    return nest.GetStatus(sd, 'events')[0]
This function serves to plot a histogram of the emitted spikes.
def plot_hist(spikes):
    plt.hist(spikes['times'],
             bins=np.arange(0., max(spikes['times']) + 1.5, 1.),
             histtype='step')
t = 1000
n = 1000
dt = 1.0
steps = int(t / dt)
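# offset: phase added to the expected-rate sinusoid for the second half
# of the simulation (after the parameter change at t/2)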
offset = t / 1000. * 2 * np.pi
# We create a figure with a 2x3 grid.
grid = (2, 3)
fig = plt.figure(figsize=(15, 10))
We simulate a sinusoidal_gamma_generator with default parameter values, i.e. ac=0 and the DC value being changed from 20 to 50 after t/2, and plot the number of spikes per second over time.
plt.subplot(grid[0], grid[1], 1)
spikes = step(t, n,
{'rate': 20.0},
{'rate': 50.0, },
seed=123, dt=dt)
plot_hist(spikes)
exp = np.ones(steps)
exp[:int(steps / 2)] *= 20
exp[int(steps / 2):] *= 50
plt.plot(exp, 'r')
plt.title('DC rate: 20 -> 50')
plt.ylabel('Spikes per second')
We simulate a sinusoidal_gamma_generator with the DC value being changed from 80 to 40 after t/2 and plot the number of spikes per second over time.
plt.subplot(grid[0], grid[1], 2)
spikes = step(t, n,
{'order': 6.0, 'rate': 80.0, 'amplitude': 0.,
'frequency': 0., 'phase': 0.},
{'order': 6.0, 'rate': 40.0, 'amplitude': 0.,
'frequency': 0., 'phase': 0.},
seed=123, dt=dt)
plot_hist(spikes)
exp = np.ones(steps)
exp[:int(steps / 2)] *= 80
exp[int(steps / 2):] *= 40
plt.plot(exp, 'r')
plt.title('DC rate: 80 -> 40')
Next, we simulate a sinusoidal_gamma_generator with the AC value being changed from 40 to 20 after t/2 and plot the number of spikes per second over time.
plt.subplot(grid[0], grid[1], 3)
spikes = step(t, n,
{'order': 3.0, 'rate': 40.0, 'amplitude': 40.,
'frequency': 10., 'phase': 0.},
{'order': 3.0, 'rate': 40.0, 'amplitude': 20.,
'frequency': 10., 'phase': 0.},
seed=123, dt=dt)
plot_hist(spikes)
exp = np.zeros(int(steps))
exp[:int(steps / 2)] = (40. +
40. * np.sin(np.arange(0, t / 1000. * np.pi * 10,
t / 1000. * np.pi * 10. /
(steps / 2))))
exp[int(steps / 2):] = (40. + 20. * np.sin(np.arange(0, t / 1000. * np.pi * 10,
t / 1000. * np.pi * 10. /
(steps / 2)) + offset))
plt.plot(exp, 'r')
plt.title('Rate Modulation: 40 -> 20')
Finally, we simulate a sinusoidal_gamma_generator with both the DC rate and the modulation amplitude being changed from 20 to 50 after t/2 and plot the number of spikes per second over time.
plt.subplot(grid[0], grid[1], 4)
spikes = step(t, n,
{'order': 6.0, 'rate': 20.0, 'amplitude': 20.,
'frequency': 10., 'phase': 0.},
{'order': 6.0, 'rate': 50.0, 'amplitude': 50.,
'frequency': 10., 'phase': 0.},
seed=123, dt=dt)
plot_hist(spikes)
exp = np.zeros(int(steps))
exp[:int(steps / 2)] = (20. + 20. * np.sin(np.arange(0, t / 1000. * np.pi * 10,
t / 1000. * np.pi * 10. /
(steps / 2))))
exp[int(steps / 2):] = (50. + 50. * np.sin(np.arange(0, t / 1000. * np.pi * 10,
t / 1000. * np.pi * 10. /
(steps / 2)) + offset))
plt.plot(exp, 'r')
plt.title('DC Rate and Rate Modulation: 20 -> 50')
plt.ylabel('Spikes per second')
plt.xlabel('Time [ms]')
Simulate a sinusoidal_gamma_generator with the AC value being changed from 0 to 40 after t/2 and plot the number of spikes per second over time.
plt.subplot(grid[0], grid[1], 5)
spikes = step(t, n,
{'rate': 40.0, },
{'amplitude': 40.0, 'frequency': 20.},
seed=123, dt=1.)
plot_hist(spikes)
exp = np.zeros(int(steps))
exp[:int(steps / 2)] = 40. * np.ones(int(steps / 2))
exp[int(steps / 2):] = (40. + 40. * np.sin(np.arange(0, t / 1000. * np.pi * 20,
t / 1000. * np.pi * 20. /
(steps / 2))))
plt.plot(exp, 'r')
plt.title('Rate Modulation: 0 -> 40')
plt.xlabel('Time [ms]')
Simulate a sinusoidal_gamma_generator with a phase shift at t/2 and plot the number of spikes per second over time.
# Phase shift
plt.subplot(grid[0], grid[1], 6)
spikes = step(t, n,
{'order': 6.0, 'rate': 60.0, 'amplitude': 60.,
'frequency': 10., 'phase': 0.},
{'order': 6.0, 'rate': 60.0, 'amplitude': 60.,
'frequency': 10., 'phase': 180.},
seed=123, dt=1.)
plot_hist(spikes)
exp = np.zeros(int(steps))
exp[:int(steps / 2)] = (60. + 60. * np.sin(np.arange(0, t / 1000. * np.pi * 10,
t / 1000. * np.pi * 10. /
(steps / 2))))
exp[int(steps / 2):] = (60. + 60. * np.sin(np.arange(0, t / 1000. * np.pi * 10,
t / 1000. * np.pi * 10. /
(steps / 2)) +
offset + np.pi))
plt.plot(exp, 'r')
plt.title('Modulation Phase: 0 -> Pi')
plt.xlabel('Time [ms]')
Clopath Rule: Spike pairing experiment¶
This script simulates one aeif_psc_delta_clopath neuron that is connected with a Clopath connection 1. The synapse receives pairs of a pre- and a postsynaptic spike that are separated by either 10 ms (pre before post) or -10 ms (post before pre). The change of the synaptic weight is measured after five such pairs. This experiment is repeated five times with different repetition rates of the spike-pair sequence: 10 Hz, 20 Hz, 30 Hz, 40 Hz, and 50 Hz.
References¶
- 1
Clopath C, Büsing L, Vasilaki E, Gerstner W (2010). Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nature Neuroscience 13:3, 344–352
import numpy as np
import matplotlib.pyplot as pl
import nest
First we specify the neuron parameters. To enable the voltage-dependent prefactor A_LTD(u_bar_bar), add A_LTD_const: False to the dictionary.
nrn_params = {'V_m': -70.6,
'E_L': -70.6,
'C_m': 281.0,
'theta_minus': -70.6,
'theta_plus': -45.3,
'A_LTD': 14.0e-5,
'A_LTP': 8.0e-5,
'tau_minus': 10.0,
'tau_plus': 7.0,
'delay_u_bars': 4.0,
'a': 4.0,
'b': 0.0805,
'V_reset': -70.6 + 21.0,
'V_clamp': 33.0,
't_clamp': 2.0,
't_ref': 0.0,
}
Hardcoded spike times of presynaptic spike generator
spike_times_pre = [
# Presynaptic spike after the postsynaptic one (post-pre pairing)
[20., 120., 220., 320., 420.],
[20., 70., 120., 170., 220.],
[20., 53.3, 86.7, 120., 153.3],
[20., 45., 70., 95., 120.],
[20., 40., 60., 80., 100.],
# Presynaptic spike before the postsynaptic one (pre-post pairing)
[120., 220., 320., 420., 520., 620.],
[70., 120., 170., 220., 270., 320.],
[53.3, 86.6, 120., 153.3, 186.6, 220.],
[45., 70., 95., 120., 145., 170.],
[40., 60., 80., 100., 120., 140.]]
Hardcoded spike times of postsynaptic spike generator
spike_times_post = [
[10., 110., 210., 310., 410.],
[10., 60., 110., 160., 210.],
[10., 43.3, 76.7, 110., 143.3],
[10., 35., 60., 85., 110.],
[10., 30., 50., 70., 90.],
[130., 230., 330., 430., 530., 630.],
[80., 130., 180., 230., 280., 330.],
[63.3, 96.6, 130., 163.3, 196.6, 230.],
[55., 80., 105., 130., 155., 180.],
[50., 70., 90., 110., 130., 150.]]
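The hardcoded lists above follow a regular pattern: each repetition rate rho yields spike pairs repeated every 1000/rho ms with a fixed 10 ms separation inside a pair. Purely as an illustration (not part of the original script), such lists could be generated with a small helper; make_pair_times is a hypothetical name:
def make_pair_times(rho, n_pairs=5, pair_offset=10.0, t0=10.0):
    # Hypothetical helper: n_pairs spike pairs repeated at rho Hz,
    # with the second spike following the first by pair_offset ms.
    period = 1000.0 / rho
    first = [t0 + i * period for i in range(n_pairs)]
    second = [t + pair_offset for t in first]
    return first, second

# make_pair_times(30.0) -> ([10.0, 43.3, 76.7, 110.0, 143.3],
#                           [20.0, 53.3, 86.7, 120.0, 153.3]) up to rounding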
init_w = 0.5
syn_weights = []
resolution = 0.1
Loop over pairs of spike trains
for (s_t_pre, s_t_post) in zip(spike_times_pre, spike_times_post):
    nest.ResetKernel()
    nest.SetKernelStatus({"resolution": resolution})
    # Create one neuron
    nrn = nest.Create("aeif_psc_delta_clopath", 1, nrn_params)
    # We need a parrot neuron since spike generators can only
    # be connected with static connections
    prrt_nrn = nest.Create("parrot_neuron", 1)
    # Create and connect spike generators
    spike_gen_pre = nest.Create("spike_generator", 1, {
                                "spike_times": s_t_pre})
    nest.Connect(spike_gen_pre, prrt_nrn,
                 syn_spec={"delay": resolution})
    spike_gen_post = nest.Create("spike_generator", 1, {
                                 "spike_times": s_t_post})
    nest.Connect(spike_gen_post, nrn, syn_spec={
                 "delay": resolution, "weight": 80.0})
    # Create weight recorder
    wr = nest.Create('weight_recorder', 1)
    # Create Clopath connection with weight recorder
    nest.CopyModel("clopath_synapse", "clopath_synapse_rec",
                   {"weight_recorder": wr[0]})
    syn_dict = {"model": "clopath_synapse_rec",
                "weight": init_w, "delay": resolution}
    nest.Connect(prrt_nrn, nrn, syn_spec=syn_dict)
    # Simulation
    simulation_time = (10.0 + max(s_t_pre[-1], s_t_post[-1]))
    nest.Simulate(simulation_time)
    # Extract and save synaptic weights
    w_events = nest.GetStatus(wr)[0]["events"]
    weights = w_events["weights"]
    syn_weights.append(weights[-1])
syn_weights = np.array(syn_weights)
# scaling of the weights so that they are comparable to [1]
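# A value of 100 corresponds to an unchanged weight; the factor 15
# amplifies the relative change for comparison with [1]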
syn_weights = 100.0*15.0*(syn_weights - init_w)/init_w + 100.0
# Plot results
fig1, axA = pl.subplots(1, sharex=False)
axA.plot([10., 20., 30., 40., 50.], syn_weights[5:], color='b', lw=2.5, ls='-',
label="pre-post pairing")
axA.plot([10., 20., 30., 40., 50.], syn_weights[:5], color='g', lw=2.5, ls='-',
label="post-pre pairing")
axA.set_ylabel("normalized weight change")
axA.set_xlabel("rho (Hz)")
axA.legend()
axA.set_title("synaptic weight")
pl.show()
Clopath Rule: Bidirectional connections¶
This script simulates a small network of ten excitatory and three inhibitory aeif_psc_delta_clopath neurons. The neurons are randomly connected and driven by 500 Poisson generators. The synapses from the Poisson generators to the excitatory population and those among the neurons of the network are Clopath synapses. The rate of the Poisson generators is modulated with a Gaussian profile whose center shifts randomly every 100 ms between ten equally spaced positions.
This setup demonstrates that the Clopath synapse is able to establish bidirectional connections. The example is adapted from 1 (cf. fig. 5).
References¶
- 1
Clopath C, Büsing L, Vasilaki E, Gerstner W (2010). Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nature Neuroscience 13:3, 344–352
import nest
import numpy as np
import matplotlib.pyplot as pl
import random
Set the parameters
simulation_time = 1.0e4
resolution = 0.1
delay = resolution
# Poisson_generator parameters
pg_A = 30. # amplitude of Gaussian
pg_sigma = 10. # std deviation
nest.ResetKernel()
nest.SetKernelStatus({'resolution': resolution})
# Create neurons and devices
nrn_model = 'aeif_psc_delta_clopath'
nrn_params = {'V_m': -30.6,
'g_L': 30.0,
'w': 0.0,
'tau_plus': 7.0,
'tau_minus': 10.0,
'tau_w': 144.0,
'a': 4.0,
'C_m': 281.0,
'Delta_T': 2.0,
'V_peak': 20.0,
't_clamp': 2.0,
'A_LTP': 8.0e-6,
'A_LTD': 14.0e-6,
'A_LTD_const': False,
'b': 0.0805,
'u_ref_squared': 60.0**2}
pop_exc = nest.Create(nrn_model, 10, nrn_params)
pop_inh = nest.Create(nrn_model, 3, nrn_params)
We need parrot neurons since Poisson generators can only be connected with static connections
pop_input = nest.Create('parrot_neuron', 500) # helper neurons
pg = nest.Create('poisson_generator', 500)
wr = nest.Create('weight_recorder', 1)
First connect Poisson generators to helper neurons
nest.Connect(pg, pop_input, 'one_to_one', {'model': 'static_synapse',
'weight': 1.0, 'delay': delay})
Create all the connections
nest.CopyModel('clopath_synapse', 'clopath_input_to_exc',
{'Wmax': 3.0})
conn_dict_input_to_exc = {'rule': 'all_to_all'}
syn_dict_input_to_exc = {'model': 'clopath_input_to_exc',
'weight': {'distribution': 'uniform', 'low': 0.5,
'high': 2.0},
'delay': delay}
nest.Connect(pop_input, pop_exc, conn_dict_input_to_exc,
syn_dict_input_to_exc)
# Create input->inh connections
conn_dict_input_to_inh = {'rule': 'all_to_all'}
syn_dict_input_to_inh = {'model': 'static_synapse',
'weight': {'distribution': 'uniform', 'low': 0.0,
'high': 0.5},
'delay': delay}
nest.Connect(pop_input, pop_inh, conn_dict_input_to_inh, syn_dict_input_to_inh)
# Create exc->exc connections
nest.CopyModel('clopath_synapse', 'clopath_exc_to_exc',
{'Wmax': 0.75, 'weight_recorder': wr[0]})
syn_dict_exc_to_exc = {'model': 'clopath_exc_to_exc', 'weight': 0.25,
'delay': delay}
conn_dict_exc_to_exc = {'rule': 'all_to_all', 'autapses': False}
nest.Connect(pop_exc, pop_exc, conn_dict_exc_to_exc, syn_dict_exc_to_exc)
# Create exc->inh connections
syn_dict_exc_to_inh = {'model': 'static_synapse',
'weight': 1.0, 'delay': delay}
conn_dict_exc_to_inh = {'rule': 'fixed_indegree', 'indegree': 8}
nest.Connect(pop_exc, pop_inh, conn_dict_exc_to_inh, syn_dict_exc_to_inh)
# Create inh->exc connections
syn_dict_inh_to_exc = {'model': 'static_synapse',
'weight': 1.0, 'delay': delay}
conn_dict_inh_to_exc = {'rule': 'fixed_outdegree', 'outdegree': 6}
nest.Connect(pop_inh, pop_exc, conn_dict_inh_to_exc, syn_dict_inh_to_exc)
Randomize the initial membrane potential
for nrn in pop_exc:
    nest.SetStatus([nrn, ], {'V_m': np.random.normal(-60.0, 25.0)})
for nrn in pop_inh:
    nest.SetStatus([nrn, ], {'V_m': np.random.normal(-60.0, 25.0)})
The simulation is divided into intervals of 100 ms, after each of which the center of the Gaussian rate profile is shifted.
for i in range(int(simulation_time / 100.0)):
    # set the rates of the Poisson generators
    rates = np.empty(500)
    # pg_mu is randomly chosen out of 25, 75, 125, ..., 425, 475
    pg_mu = 25 + random.randint(0, 9) * 50
    for j in range(500):
        rates[j] = pg_A * \
            np.exp((-1 * (j - pg_mu) ** 2) / (2 * (pg_sigma) ** 2))
        nest.SetStatus([pg[j]], {'rate': rates[j] * 1.75})
    nest.Simulate(100.0)
Plot results
fig1, axA = pl.subplots(1, sharex=False)
# Plot synapse weights of the synapses within the excitatory population
# Sort weights according to sender and reshape
exc_conns = nest.GetConnections(pop_exc, pop_exc)
exc_conns_senders = np.array(nest.GetStatus(exc_conns, 'source'))
exc_conns_targets = np.array(nest.GetStatus(exc_conns, 'target'))
exc_conns_weights = np.array(nest.GetStatus(exc_conns, 'weight'))
idx_array = np.argsort(exc_conns_senders)
targets = np.reshape(exc_conns_targets[idx_array], (10, 10-1))
weights = np.reshape(exc_conns_weights[idx_array], (10, 10-1))
# Sort according to target
for i, (trgs, ws) in enumerate(zip(targets, weights)):
    idx_array = np.argsort(trgs)
    weights[i] = ws[idx_array]
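# Insert the 10x9 weight rows into a full 10x10 matrix, skipping the
# diagonal (there are no self-connections): entries to the right of the
# diagonal go to the upper triangle, the remaining ones to the lower one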
weight_matrix = np.zeros((10, 10))
tu9 = np.triu_indices_from(weights)
tl9 = np.tril_indices_from(weights, -1)
tu10 = np.triu_indices_from(weight_matrix, 1)
tl10 = np.tril_indices_from(weight_matrix, -1)
weight_matrix[tu10[0], tu10[1]] = weights[tu9[0], tu9[1]]
weight_matrix[tl10[0], tl10[1]] = weights[tl9[0], tl9[1]]
# Difference between initial and final value
init_w_matrix = np.ones((10, 10))*0.25
init_w_matrix -= np.identity(10)*0.25
caxA = axA.imshow(weight_matrix - init_w_matrix)
cbarB = fig1.colorbar(caxA, ax=axA)
axA.set_xticks([0, 2, 4, 6, 8])
axA.set_xticklabels(['1', '3', '5', '7', '9'])
axA.set_yticks([0, 2, 4, 6, 8])
axA.set_yticklabels(['1', '3', '5', '7', '9'])
axA.set_xlabel("to neuron")
axA.set_ylabel("from neuron")
axA.set_title("Change of syn weights before and after simulation")
pl.show()
Random balanced network (alpha synapses) connected with NumPy¶
This script simulates an excitatory and an inhibitory population on the basis of the network used in 1.
In contrast to brunel_alpha_nest.py, this variant uses NumPy to draw the random connections instead of NEST's built-in connection routines.
When connecting the network, custom synapse models are used, which allow for querying the number of created synapses. Spike detectors are used to determine the average firing rates of the neurons in the populations. Both the build time and the simulation time of the network are recorded.
References¶
- 1
Brunel N (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience 8, 183-208.
See Also¶
Random balanced network (alpha synapses) connected with NEST
Import all necessary modules for simulation, analysis and plotting. Scipy should be imported before nest.
from scipy.optimize import fsolve
import nest
import nest.raster_plot
import numpy
from numpy import exp
import time
Definition of functions used in this example. First, define the Lambert W function implemented in SLI. The second function computes the maximum of the postsynaptic potential for a synaptic input current of unit amplitude (1 pA) using the Lambert W function. This function will later be used to calibrate the synaptic weights.
def LambertWm1(x):
    nest.ll_api.sli_push(x)
    nest.ll_api.sli_run('LambertWm1')
    y = nest.ll_api.sli_pop()
    return y

def ComputePSPnorm(tauMem, CMem, tauSyn):
    a = (tauMem / tauSyn)
    b = (1.0 / tauSyn - 1.0 / tauMem)
    # time of maximum
    t_max = 1.0 / b * (-LambertWm1(-exp(-1.0 / a) / a) - 1.0 / a)
    # maximum of PSP for current of unit amplitude
    return (exp(1.0) / (tauSyn * CMem * b) *
            ((exp(-t_max / tauMem) - exp(-t_max / tauSyn)) / b -
             t_max * exp(-t_max / tauSyn)))
nest.ResetKernel()
Assigning the current time to a variable in order to determine the build time of the network.
startbuild = time.time()
Assigning the simulation parameters to variables.
dt = 0.1 # the resolution in ms
simtime = 1000.0 # Simulation time in ms
delay = 1.5 # synaptic delay in ms
Definition of the parameters crucial for asynchronous irregular firing of the neurons.
g = 5.0 # ratio inhibitory weight/excitatory weight
eta = 2.0 # external rate relative to threshold rate
epsilon = 0.1 # connection probability
Definition of the number of neurons in the network and the number of neuron recorded from
order = 2500
NE = 4 * order # number of excitatory neurons
NI = 1 * order # number of inhibitory neurons
N_neurons = NE + NI # number of neurons in total
N_rec = 50 # record from 50 neurons
Definition of connectivity parameter
CE = int(epsilon * NE) # number of excitatory synapses per neuron
CI = int(epsilon * NI) # number of inhibitory synapses per neuron
C_tot = int(CI + CE) # total number of synapses per neuron
Initialization of the parameters of the integrate-and-fire neuron and the synapses. The parameters of the neuron are stored in a dictionary. The synaptic currents are normalized such that the amplitude of the PSP is J.
tauSyn = 0.5 # synaptic time constant in ms
tauMem = 20.0 # time constant of membrane potential in ms
CMem = 250.0 # capacitance of membrane in pF
theta = 20.0 # membrane threshold potential in mV
neuron_params = {"C_m": CMem,
"tau_m": tauMem,
"tau_syn_ex": tauSyn,
"tau_syn_in": tauSyn,
"t_ref": 2.0,
"E_L": 0.0,
"V_reset": 0.0,
"V_m": 0.0,
"V_th": theta}
J = 0.1 # postsynaptic amplitude in mV
J_unit = ComputePSPnorm(tauMem, CMem, tauSyn)
J_ex = J / J_unit # amplitude of excitatory postsynaptic current
J_in = -g * J_ex # amplitude of inhibitory postsynaptic current
Definition of the threshold rate, which is the external rate needed to fix the membrane potential around its threshold; of the external firing rate; and of the rate of the Poisson generator, which is multiplied by the in-degree CE and converted to Hz by multiplication by 1000.
nu_th = (theta * CMem) / (J_ex * CE * numpy.exp(1) * tauMem * tauSyn)
nu_ex = eta * nu_th
p_rate = 1000.0 * nu_ex * CE
Configuration of the simulation kernel by the previously defined time resolution used in the simulation. Setting “print_time” to True prints the already processed simulation time as well as its percentage of the total simulation time.
nest.SetKernelStatus({"resolution": dt, "print_time": True,
"overwrite_files": True})
print("Building network")
Configuration of the models iaf_psc_alpha and poisson_generator using SetDefaults(). This function expects the model name as a string and the parameters in a dictionary. All instances of these models created after this point will have the properties specified in the dictionary by default.
nest.SetDefaults("iaf_psc_alpha", neuron_params)
nest.SetDefaults("poisson_generator", {"rate": p_rate})
Creation of the nodes using Create. We store the returned handles in variables for later reference. Here we create the excitatory and inhibitory neuron populations, as well as the Poisson generator and two spike detectors. The spike detectors will later be used to record excitatory and inhibitory spikes.
nodes_ex = nest.Create("iaf_psc_alpha", NE)
nodes_in = nest.Create("iaf_psc_alpha", NI)
noise = nest.Create("poisson_generator")
espikes = nest.Create("spike_detector")
ispikes = nest.Create("spike_detector")
Configuration of the spike detectors recording excitatory and inhibitory spikes using SetStatus, which expects a list of node handles and a list of parameter dictionaries. Setting the variable "to_file" to True ensures that the spikes will be recorded in a .gdf file starting with the string assigned to label. Setting "withtime" and "withgid" to True ensures that each spike is saved to file with the GID of the spiking neuron and the spike time in one line.
nest.SetStatus(espikes, [{"label": "brunel-py-ex",
"withtime": True,
"withgid": True,
"to_file": True}])
nest.SetStatus(ispikes, [{"label": "brunel-py-in",
"withtime": True,
"withgid": True,
"to_file": True}])
print("Connecting devices")
Definition of a synapse using CopyModel, which expects the model name of a pre-defined synapse, the name of the custom synapse and an optional parameter dictionary. The parameters defined in the dictionary will be the default parameters for the custom synapse. Here we define one synapse for the excitatory and one for the inhibitory connections, giving the previously defined weights and equal delays.
nest.CopyModel("static_synapse", "excitatory",
{"weight": J_ex, "delay": delay})
nest.CopyModel("static_synapse", "inhibitory",
{"weight": J_in, "delay": delay})
Connecting the previously defined Poisson generator to the excitatory and inhibitory neurons using the excitatory synapse. Since the Poisson generator is connected to all neurons in the population, the default rule (all_to_all) of Connect() is used. The synaptic properties are inserted via syn_spec, which expects a dictionary when defining multiple variables or a string when simply using a pre-defined synapse.
nest.Connect(noise, nodes_ex, syn_spec="excitatory")
nest.Connect(noise, nodes_in, syn_spec="excitatory")
Connecting the first N_rec nodes of the excitatory and inhibitory population to the associated spike detectors using excitatory synapses. Here the same shortcut for the specification of the synapse as defined above is used.
nest.Connect(nodes_ex[:N_rec], espikes, syn_spec="excitatory")
nest.Connect(nodes_in[:N_rec], ispikes, syn_spec="excitatory")
print("Connecting network")
Here, we create the connections from the excitatory neurons to all other neurons. We exploit the fact that the neurons have consecutive IDs, running from 1 to NE for the excitatory neurons and from NE+1 to NE+NI for the inhibitory neurons.
numpy.random.seed(1234)
sources_ex = numpy.random.randint(1, NE + 1, (N_neurons, CE))
sources_in = numpy.random.randint(NE + 1, N_neurons + 1, (N_neurons, CI))
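# sources_ex[n] holds the CE excitatory source GIDs (1..NE) for target
# neuron n + 1; sources_in[n] the CI inhibitory sources (NE+1..N_neurons)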
We now iterate over all neuron IDs, and connect the neuron to the sources from our array. The first loop connects the excitatory neurons and the second loop the inhibitory neurons.
for n in range(N_neurons):
    nest.Connect(list(sources_ex[n]), [n + 1], syn_spec="excitatory")
for n in range(N_neurons):
    nest.Connect(list(sources_in[n]), [n + 1], syn_spec="inhibitory")
Storage of the time point after the buildup of the network in a variable.
endbuild = time.time()
Simulation of the network.
print("Simulating")
nest.Simulate(simtime)
Storage of the time point after the simulation of the network in a variable.
endsimulate = time.time()
Reading out the total number of spikes received from the spike detector connected to the excitatory population and the inhibitory population.
events_ex = nest.GetStatus(espikes, "n_events")[0]
events_in = nest.GetStatus(ispikes, "n_events")[0]
Calculation of the average firing rate of the excitatory and the inhibitory neurons by dividing the total number of recorded spikes by the number of neurons recorded from and the simulation time. The multiplication by 1000.0 converts the unit 1/ms to 1/s=Hz.
rate_ex = events_ex / simtime * 1000.0 / N_rec
rate_in = events_in / simtime * 1000.0 / N_rec
Reading out the number of connections established using the excitatory and inhibitory synapse model. The numbers are summed up resulting in the total number of synapses.
num_synapses = (nest.GetDefaults("excitatory")["num_connections"] +
nest.GetDefaults("inhibitory")["num_connections"])
Establishing the time it took to build and simulate the network by taking the difference of the pre-defined time variables.
build_time = endbuild - startbuild
sim_time = endsimulate - endbuild
Printing the network properties, firing rates and building times.
print("Brunel network simulation (Python)")
print("Number of neurons : {0}".format(N_neurons))
print("Number of synapses: {0}".format(num_synapses))
print(" Exitatory : {0}".format(int(CE * N_neurons) + N_neurons))
print(" Inhibitory : {0}".format(int(CI * N_neurons)))
print("Excitatory rate : %.2f Hz" % rate_ex)
print("Inhibitory rate : %.2f Hz" % rate_in)
print("Building time : %.2f s" % build_time)
print("Simulation time : %.2f s" % sim_time)
Plot a raster of the excitatory neurons and a histogram.
nest.raster_plot.from_device(espikes, hist=True)
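When the script is executed non-interactively, an explicit matplotlib show() call may be needed to display the figure; a minimal addition (assuming matplotlib is available):
import matplotlib.pyplot as plt
plt.show()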
Random balanced network (alpha synapses) connected with NEST¶
This script simulates an excitatory and an inhibitory population on the basis of the network used in 1.
In contrast to brunel-alpha-numpy.py, this variant uses NEST's built-in connection routines to draw the random connections instead of NumPy.
When connecting the network, custom synapse models are used, which allow for querying the number of created synapses. Spike detectors are used to determine the average firing rates of the neurons in the populations. Both the build time and the simulation time of the network are recorded.
References¶
- 1
Brunel N (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience 8, 183-208.
See Also¶
Random balanced network (alpha synapses) connected with NumPy
Import all necessary modules for simulation, analysis and plotting. Scipy should be imported before nest.
from scipy.optimize import fsolve
import nest
import nest.raster_plot
import time
from numpy import exp
Definition of functions used in this example. First, define the Lambert W function implemented in SLI. The second function computes the maximum of the postsynaptic potential for a synaptic input current of unit amplitude (1 pA) using the Lambert W function. This function will later be used to calibrate the synaptic weights.
def LambertWm1(x):
    nest.ll_api.sli_push(x)
    nest.ll_api.sli_run('LambertWm1')
    y = nest.ll_api.sli_pop()
    return y

def ComputePSPnorm(tauMem, CMem, tauSyn):
    a = (tauMem / tauSyn)
    b = (1.0 / tauSyn - 1.0 / tauMem)
    # time of maximum
    t_max = 1.0 / b * (-LambertWm1(-exp(-1.0 / a) / a) - 1.0 / a)
    # maximum of PSP for current of unit amplitude
    return (exp(1.0) / (tauSyn * CMem * b) *
            ((exp(-t_max / tauMem) - exp(-t_max / tauSyn)) / b -
             t_max * exp(-t_max / tauSyn)))
nest.ResetKernel()
Assigning the current time to a variable in order to determine the build time of the network.
startbuild = time.time()
Assigning the simulation parameters to variables.
dt = 0.1 # the resolution in ms
simtime = 1000.0 # Simulation time in ms
delay = 1.5 # synaptic delay in ms
Definition of the parameters crucial for asynchronous irregular firing of the neurons.
g = 5.0 # ratio inhibitory weight/excitatory weight
eta = 2.0 # external rate relative to threshold rate
epsilon = 0.1 # connection probability
Definition of the number of neurons in the network and the number of neuron recorded from
order = 2500
NE = 4 * order # number of excitatory neurons
NI = 1 * order # number of inhibitory neurons
N_neurons = NE + NI # number of neurons in total
N_rec = 50 # record from 50 neurons
Definition of connectivity parameter
CE = int(epsilon * NE) # number of excitatory synapses per neuron
CI = int(epsilon * NI) # number of inhibitory synapses per neuron
C_tot = int(CI + CE) # total number of synapses per neuron
Initialization of the parameters of the integrate-and-fire neuron and the synapses. The parameters of the neuron are stored in a dictionary. The synaptic currents are normalized such that the amplitude of the PSP is J.
tauSyn = 0.5 # synaptic time constant in ms
tauMem = 20.0 # time constant of membrane potential in ms
CMem = 250.0 # capacitance of membrane in pF
theta = 20.0 # membrane threshold potential in mV
neuron_params = {"C_m": CMem,
"tau_m": tauMem,
"tau_syn_ex": tauSyn,
"tau_syn_in": tauSyn,
"t_ref": 2.0,
"E_L": 0.0,
"V_reset": 0.0,
"V_m": 0.0,
"V_th": theta}
J = 0.1 # postsynaptic amplitude in mV
J_unit = ComputePSPnorm(tauMem, CMem, tauSyn)
J_ex = J / J_unit # amplitude of excitatory postsynaptic current
J_in = -g * J_ex # amplitude of inhibitory postsynaptic current
Definition of the threshold rate, which is the external rate needed to fix the membrane potential around its threshold; of the external firing rate; and of the rate of the Poisson generator, which is multiplied by the in-degree CE and converted to Hz by multiplication by 1000.
nu_th = (theta * CMem) / (J_ex * CE * exp(1) * tauMem * tauSyn)
nu_ex = eta * nu_th
p_rate = 1000.0 * nu_ex * CE
Configuration of the simulation kernel with the previously defined time resolution used in the simulation. Setting print_time to True prints the already processed simulation time as well as its percentage of the total simulation time.
nest.SetKernelStatus({"resolution": dt, "print_time": True,
"overwrite_files": True})
print("Building network")
Configuration of the models iaf_psc_alpha and poisson_generator using SetDefaults. This function expects the model name as a string and the parameters in a dictionary. All instances of these models created after this point will have the properties specified in the dictionary by default.
nest.SetDefaults("iaf_psc_alpha", neuron_params)
nest.SetDefaults("poisson_generator", {"rate": p_rate})
Creation of the nodes using Create. We store the returned handles in variables for later reference. Here we create the excitatory and inhibitory neuron populations, as well as the Poisson generator and two spike detectors. The spike detectors will later be used to record excitatory and inhibitory spikes.
nodes_ex = nest.Create("iaf_psc_alpha", NE)
nodes_in = nest.Create("iaf_psc_alpha", NI)
noise = nest.Create("poisson_generator")
espikes = nest.Create("spike_detector")
ispikes = nest.Create("spike_detector")
Configuration of the spike detectors recording excitatory and inhibitory spikes using SetStatus, which expects a list of node handles and a list of parameter dictionaries. Setting the variable to_file to True ensures that the spikes will be recorded in a .gdf file starting with the string assigned to label. Setting withtime and withgid to True ensures that each spike is saved to file with the GID of the spiking neuron and the spike time in one line.
nest.SetStatus(espikes, [{"label": "brunel-py-ex",
"withtime": True,
"withgid": True,
"to_file": True}])
nest.SetStatus(ispikes, [{"label": "brunel-py-in",
"withtime": True,
"withgid": True,
"to_file": True}])
print("Connecting devices")
Definition of a synapse using CopyModel, which expects the model name of a pre-defined synapse, the name of the custom synapse and an optional parameter dictionary. The parameters defined in the dictionary will be the default parameters for the custom synapse. Here we define one synapse for the excitatory and one for the inhibitory connections, giving the previously defined weights and equal delays.
nest.CopyModel("static_synapse", "excitatory",
{"weight": J_ex, "delay": delay})
nest.CopyModel("static_synapse", "inhibitory",
{"weight": J_in, "delay": delay})
Connecting the previously defined Poisson generator to the excitatory and inhibitory neurons using the excitatory synapse. Since the Poisson generator is connected to all neurons in the population, the default rule (all_to_all) of Connect is used. The synaptic properties are inserted via syn_spec, which expects a dictionary when defining multiple variables or a string when simply using a pre-defined synapse.
nest.Connect(noise, nodes_ex, syn_spec="excitatory")
nest.Connect(noise, nodes_in, syn_spec="excitatory")
Connecting the first N_rec nodes of the excitatory and inhibitory population to the associated spike detectors using excitatory synapses. Here the same shortcut for the specification of the synapse as defined above is used.
nest.Connect(nodes_ex[:N_rec], espikes, syn_spec="excitatory")
nest.Connect(nodes_in[:N_rec], ispikes, syn_spec="excitatory")
print("Connecting network")
print("Excitatory connections")
Connecting the excitatory population to all neurons using the pre-defined excitatory synapse. Beforehand, the connection parameters are defined in a dictionary. Here we use the connection rule fixed_indegree, which requires the definition of the in-degree. Since the synapse specification is reduced to assigning the pre-defined excitatory synapse, it suffices to insert a string.
conn_params_ex = {'rule': 'fixed_indegree', 'indegree': CE}
nest.Connect(nodes_ex, nodes_ex + nodes_in, conn_params_ex, "excitatory")
print("Inhibitory connections")
Connecting the inhibitory population to all neurons using the pre-defined inhibitory synapse. The connection parameters as well as the synapse parameters are defined analogously to the connection from the excitatory population defined above.
conn_params_in = {'rule': 'fixed_indegree', 'indegree': CI}
nest.Connect(nodes_in, nodes_ex + nodes_in, conn_params_in, "inhibitory")
Storage of the time point after the buildup of the network in a variable.
endbuild = time.time()
Simulation of the network.
print("Simulating")
nest.Simulate(simtime)
Storage of the time point after the simulation of the network in a variable.
endsimulate = time.time()
Reading out the total number of spikes received from the spike detector connected to the excitatory population and the inhibitory population.
events_ex = nest.GetStatus(espikes, "n_events")[0]
events_in = nest.GetStatus(ispikes, "n_events")[0]
Calculation of the average firing rate of the excitatory and the inhibitory neurons by dividing the total number of recorded spikes by the number of neurons recorded from and the simulation time. The multiplication by 1000.0 converts the unit 1/ms to 1/s=Hz.
rate_ex = events_ex / simtime * 1000.0 / N_rec
rate_in = events_in / simtime * 1000.0 / N_rec
Reading out the number of connections established using the excitatory and inhibitory synapse model. The numbers are summed up resulting in the total number of synapses.
num_synapses = (nest.GetDefaults("excitatory")["num_connections"] +
nest.GetDefaults("inhibitory")["num_connections"])
Establishing the time it took to build and simulate the network by taking the difference of the pre-defined time variables.
build_time = endbuild - startbuild
sim_time = endsimulate - endbuild
Printing the network properties, firing rates and building times.
print("Brunel network simulation (Python)")
print("Number of neurons : {0}".format(N_neurons))
print("Number of synapses: {0}".format(num_synapses))
print(" Exitatory : {0}".format(int(CE * N_neurons) + N_neurons))
print(" Inhibitory : {0}".format(int(CI * N_neurons)))
print("Excitatory rate : %.2f Hz" % rate_ex)
print("Inhibitory rate : %.2f Hz" % rate_in)
print("Building time : %.2f s" % build_time)
print("Simulation time : %.2f s" % sim_time)
Plot a raster of the excitatory neurons and a histogram.
nest.raster_plot.from_device(espikes, hist=True)
Random balanced network (delta synapses)¶
This script simulates an excitatory and an inhibitory population on the basis of the network used in 1.
When connecting the network, custom synapse models are used, which allow for querying the number of created synapses. Spike detectors are used to determine the average firing rates of the neurons in the populations. Both the build time and the simulation time of the network are recorded.
References¶
- 1
Brunel N (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience 8, 183-208.
Import all necessary modules for simulation, analysis and plotting.
import nest
import nest.raster_plot
import time
from numpy import exp
nest.ResetKernel()
Assigning the current time to a variable in order to determine the build time of the network.
startbuild = time.time()
Assigning the simulation parameters to variables.
dt = 0.1 # the resolution in ms
simtime = 1000.0 # Simulation time in ms
delay = 1.5 # synaptic delay in ms
Definition of the parameters crucial for asynchronous irregular firing of the neurons.
g = 5.0 # ratio inhibitory weight/excitatory weight
eta = 2.0 # external rate relative to threshold rate
epsilon = 0.1 # connection probability
Definition of the number of neurons in the network and the number of neuron recorded from
order = 2500
NE = 4 * order # number of excitatory neurons
NI = 1 * order # number of inhibitory neurons
N_neurons = NE + NI # number of neurons in total
N_rec = 50 # record from 50 neurons
Definition of connectivity parameter
CE = int(epsilon * NE) # number of excitatory synapses per neuron
CI = int(epsilon * NI) # number of inhibitory synapses per neuron
C_tot = int(CI + CE) # total number of synapses per neuron
Initialization of the parameters of the integrate-and-fire neuron and the synapses. The parameters of the neuron are stored in a dictionary.
tauMem = 20.0 # time constant of membrane potential in ms
theta = 20.0 # membrane threshold potential in mV
neuron_params = {"C_m": 1.0,
"tau_m": tauMem,
"t_ref": 2.0,
"E_L": 0.0,
"V_reset": 0.0,
"V_m": 0.0,
"V_th": theta}
J = 0.1 # postsynaptic amplitude in mV
J_ex = J # amplitude of excitatory postsynaptic potential
J_in = -g * J_ex # amplitude of inhibitory postsynaptic potential
Definition of the threshold rate, which is the external rate needed to fix the membrane potential around its threshold; of the external firing rate; and of the rate of the Poisson generator, which is multiplied by the in-degree CE and converted to Hz by multiplication by 1000.
nu_th = theta / (J * CE * tauMem)
nu_ex = eta * nu_th
p_rate = 1000.0 * nu_ex * CE
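With the parameters defined above these quantities take simple values; a quick sanity check (not part of the original script):
# CE = 0.1 * 10000 = 1000 excitatory inputs per neuron
# nu_th = 20.0 / (0.1 * 1000 * 20.0) = 0.01 spikes/ms per input
# nu_ex = 2.0 * nu_th = 0.02 spikes/ms
# p_rate = 1000.0 * 0.02 * 1000 = 20000.0 spikes/s
assert abs(p_rate - 20000.0) < 1e-6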
Configuration of the simulation kernel with the previously defined time resolution used in the simulation. Setting print_time to True prints the already processed simulation time as well as its percentage of the total simulation time.
nest.SetKernelStatus({"resolution": dt, "print_time": True,
"overwrite_files": True})
print("Building network")
Configuration of the models iaf_psc_delta and poisson_generator using SetDefaults. This function expects the model name as a string and the parameters in a dictionary. All instances of these models created after this point will have the properties specified in the dictionary by default.
nest.SetDefaults("iaf_psc_delta", neuron_params)
nest.SetDefaults("poisson_generator", {"rate": p_rate})
Creation of the nodes using Create. We store the returned handles in variables for later reference. Here we create the excitatory and inhibitory neuron populations, as well as the Poisson generator and two spike detectors. The spike detectors will later be used to record excitatory and inhibitory spikes.
nodes_ex = nest.Create("iaf_psc_delta", NE)
nodes_in = nest.Create("iaf_psc_delta", NI)
noise = nest.Create("poisson_generator")
espikes = nest.Create("spike_detector")
ispikes = nest.Create("spike_detector")
Configuration of the spike detectors recording excitatory and inhibitory spikes using SetStatus, which expects a list of node handles and a list of parameter dictionaries. Setting the variable to_file to True ensures that the spikes will be recorded in a .gdf file starting with the string assigned to label. Setting withtime and withgid to True ensures that each spike is saved to file with the GID of the spiking neuron and the spike time in one line.
nest.SetStatus(espikes, [{"label": "brunel-py-ex",
"withtime": True,
"withgid": True,
"to_file": True}])
nest.SetStatus(ispikes, [{"label": "brunel-py-in",
"withtime": True,
"withgid": True,
"to_file": True}])
print("Connecting devices")
Definition of a synapse using CopyModel, which expects the model name of a pre-defined synapse, the name of the custom synapse and an optional parameter dictionary. The parameters defined in the dictionary will be the default parameters for the custom synapse. Here we define one synapse for the excitatory and one for the inhibitory connections, giving the previously defined weights and equal delays.
nest.CopyModel("static_synapse", "excitatory",
{"weight": J_ex, "delay": delay})
nest.CopyModel("static_synapse", "inhibitory",
{"weight": J_in, "delay": delay})
Connecting the previously defined Poisson generator to the excitatory and inhibitory neurons using the excitatory synapse. Since the Poisson generator is connected to all neurons in the population, the default rule (all_to_all) of Connect is used. The synaptic properties are inserted via syn_spec, which expects a dictionary when defining multiple variables or a string when simply using a pre-defined synapse.
nest.Connect(noise, nodes_ex, syn_spec="excitatory")
nest.Connect(noise, nodes_in, syn_spec="excitatory")
Connecting the first N_rec nodes of the excitatory and inhibitory population to the associated spike detectors using excitatory synapses. Here the same shortcut for the specification of the synapse as defined above is used.
nest.Connect(nodes_ex[:N_rec], espikes, syn_spec="excitatory")
nest.Connect(nodes_in[:N_rec], ispikes, syn_spec="excitatory")
print("Connecting network")
print("Excitatory connections")
Connecting the excitatory population to all neurons using the pre-defined excitatory synapse. Beforehand, the connection parameters are defined in a dictionary. Here we use the connection rule fixed_indegree, which requires the definition of the in-degree. Since the synapse specification is reduced to assigning the pre-defined excitatory synapse, it suffices to insert a string.
conn_params_ex = {'rule': 'fixed_indegree', 'indegree': CE}
nest.Connect(nodes_ex, nodes_ex + nodes_in, conn_params_ex, "excitatory")
print("Inhibitory connections")
Connecting the inhibitory population to all neurons using the pre-defined inhibitory synapse. The connection parameters as well as the synapse parameters are defined analogously to the connection from the excitatory population defined above.
conn_params_in = {'rule': 'fixed_indegree', 'indegree': CI}
nest.Connect(nodes_in, nodes_ex + nodes_in, conn_params_in, "inhibitory")
Storage of the time point after the buildup of the network in a variable.
endbuild = time.time()
Simulation of the network.
print("Simulating")
nest.Simulate(simtime)
Storage of the time point after the simulation of the network in a variable.
endsimulate = time.time()
Reading out the total number of spikes received from the spike detector connected to the excitatory population and the inhibitory population.
events_ex = nest.GetStatus(espikes, "n_events")[0]
events_in = nest.GetStatus(ispikes, "n_events")[0]
Calculation of the average firing rate of the excitatory and the inhibitory neurons by dividing the total number of recorded spikes by the number of neurons recorded from and the simulation time. The multiplication by 1000.0 converts the unit 1/ms to 1/s=Hz.
rate_ex = events_ex / simtime * 1000.0 / N_rec
rate_in = events_in / simtime * 1000.0 / N_rec
Reading out the number of connections established using the excitatory and inhibitory synapse model. The numbers are summed up resulting in the total number of synapses.
num_synapses = (nest.GetDefaults("excitatory")["num_connections"] +
nest.GetDefaults("inhibitory")["num_connections"])
Establishing the time it took to build and simulate the network by taking the difference of the pre-defined time variables.
build_time = endbuild - startbuild
sim_time = endsimulate - endbuild
Printing the network properties, firing rates and building times.
print("Brunel network simulation (Python)")
print("Number of neurons : {0}".format(N_neurons))
print("Number of synapses: {0}".format(num_synapses))
print(" Exitatory : {0}".format(int(CE * N_neurons) + N_neurons))
print(" Inhibitory : {0}".format(int(CI * N_neurons)))
print("Excitatory rate : %.2f Hz" % rate_ex)
print("Inhibitory rate : %.2f Hz" % rate_in)
print("Building time : %.2f s" % build_time)
print("Simulation time : %.2f s" % sim_time)
Plot a raster of the excitatory neurons and a histogram.
nest.raster_plot.from_device(espikes, hist=True)
Mean-field theory for random balanced network¶
This script performs a mean-field analysis of the spiking network of an excitatory and an inhibitory population of leaky integrate-and-fire neurons simulated in brunel_delta_nest.py. We refer to this spiking network of LIF neurons as 'SLIFN'.
The self-consistent equation for the population-averaged firing rates (eq. 27 in 1, 2) is solved by integrating a pseudo-time dynamics (eq. 30 in 1). The latter constitutes a network of rate neurons, which is simulated here. The asymptotic rates, i.e., the fixed points of the dynamics (eq. 30), are the prediction for the population- and time-averaged rates of the spiking simulation.
References¶
- 1
Hahne J, Dahmen D, Schuecker J, Frommer A, Bolten M, Helias M and Diesmann M. (2017). Integration of continuous-time dynamics in a spiking neural network simulator. Front. Neuroinform. 11:34. doi: 10.3389/fninf.2017.00034
- 2
Schuecker J, Schmidt M, van Albada SJ, Diesmann M. and Helias, M. (2017). Fundamental activity constraints lead to specific interpretations of the connectome. PLOS Computational Biology 13(2): e1005179. https://doi.org/10.1371/journal.pcbi.1005179
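Before turning to the NEST implementation, a minimal sketch of the pseudo-time idea may help: eq. 30 relaxes the rate vector until it satisfies the self-consistency condition of eq. 27. The transfer function phi below is a stand-in placeholder, not the Siegert formula that NEST's siegert_neuron implements internally:
import numpy as np

def phi(nu):
    # placeholder transfer function; the real model uses the Siegert
    # formula (eq. 27 in 1)
    return np.tanh(nu + 0.5)

nu = np.zeros(2)  # pseudo-rates of the excitatory and inhibitory population
ds = 0.1          # pseudo-time step
for _ in range(1000):
    nu = nu + ds * (-nu + phi(nu))  # eq. 30: d(nu)/ds = -nu + phi(nu)
# at convergence nu = phi(nu) holds, i.e. eq. 27 is satisfied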
import nest
import pylab
import numpy
nest.ResetKernel()
Assigning the simulation parameters to variables.
dt = 0.1 # the resolution in ms
simtime = 50.0 # Simulation time in ms
Definition of the network parameters in the SLIFN
g = 5.0 # ratio inhibitory weight/excitatory weight
eta = 2.0 # external rate relative to threshold rate
epsilon = 0.1 # connection probability
Definition of the number of neurons and connections in the SLIFN, needed for the connection strength in the siegert neuron network
order = 2500
NE = 4 * order # number of excitatory neurons
NI = 1 * order # number of inhibitory neurons
CE = int(epsilon * NE) # number of excitatory synapses per neuron
CI = int(epsilon * NI) # number of inhibitory synapses per neuron
C_tot = int(CI + CE) # total number of synapses per neuron
Initialization of the parameters of the siegert neuron and the connection strengths. The parameters are equivalent to those of the LIF neurons in the SLIFN.
tauMem = 20.0 # time constant of membrane potential in ms
theta = 20.0 # membrane threshold potential in mV
neuron_params = {'tau_m': tauMem,
't_ref': 2.0,
'theta': theta,
'V_reset': 0.0,
}
J = 0.1 # postsynaptic amplitude in mV in the SLIFN
J_ex = J # amplitude of excitatory postsynaptic potential
J_in = -g * J_ex # amplitude of inhibitory postsynaptic potential
# drift_factor in diffusion connections (see [1], eq. 28) for external
# drive, excitatory and inhibitory neurons
drift_factor_ext = tauMem * 1e-3 * J_ex
drift_factor_ex = tauMem * 1e-3 * CE * J_ex
drift_factor_in = tauMem * 1e-3 * CI * J_in
# diffusion_factor for diffusion connections (see [1], eq. 29)
diffusion_factor_ext = tauMem * 1e-3 * J_ex ** 2
diffusion_factor_ex = tauMem * 1e-3 * CE * J_ex ** 2
diffusion_factor_in = tauMem * 1e-3 * CI * J_in ** 2
External drive, this is equivalent to the drive in the SLIFN
nu_th = theta / (J * CE * tauMem)
nu_ex = eta * nu_th
p_rate = 1000.0 * nu_ex * CE
Configuration of the simulation kernel with the previously defined time resolution used in the simulation. Setting print_time to True prints the already processed simulation time as well as its percentage of the total simulation time.
nest.SetKernelStatus({"resolution": dt, "print_time": True,
"overwrite_files": True})
print("Building network")
Configuration of the model siegert_neuron using SetDefaults.
nest.SetDefaults("siegert_neuron", neuron_params)
Creation of the nodes using Create. One rate neuron represents the excitatory population of LIF neurons in the SLIFN and one the inhibitory population, assuming homogeneity of the populations.
siegert_ex = nest.Create("siegert_neuron", 1)
siegert_in = nest.Create("siegert_neuron", 1)
The Poisson drive in the SLIFN is replaced by a driving rate neuron, which does not receive input from other neurons. The activity of the rate neuron is controlled by setting mean to the rate of the corresponding Poisson generator in the SLIFN.
siegert_drive = nest.Create('siegert_neuron', 1, params={'mean': p_rate})
To record from the rate neurons, a multimeter is created and the parameter record_from is set to rate; the recording interval is set to dt.
multimeter = nest.Create(
'multimeter', params={'record_from': ['rate'], 'interval': dt})
Connections between siegert neurons are realized with the synapse model diffusion_connection. The parameters drift_factor and diffusion_factor reflect the prefactors in front of the rate variable in eqs. 27-29 in [1].
Connections originating from the driving neuron
syn_dict = {'drift_factor': drift_factor_ext,
'diffusion_factor': diffusion_factor_ext,
'model': 'diffusion_connection'}
nest.Connect(
siegert_drive, siegert_ex + siegert_in, 'all_to_all', syn_dict)
nest.Connect(multimeter, siegert_ex + siegert_in)
Connections originating from the excitatory neuron
syn_dict = {'drift_factor': drift_factor_ex, 'diffusion_factor':
diffusion_factor_ex, 'model': 'diffusion_connection'}
nest.Connect(siegert_ex, siegert_ex + siegert_in, 'all_to_all', syn_dict)
Connections originating from the inhibitory neuron
syn_dict = {'drift_factor': drift_factor_in, 'diffusion_factor':
diffusion_factor_in, 'model': 'diffusion_connection'}
nest.Connect(siegert_in, siegert_ex + siegert_in, 'all_to_all', syn_dict)
Simulate the network
nest.Simulate(simtime)
Analyze the activity data. The asymptotic rate of the siegert neuron corresponds to the population- and time-averaged activity in the SLIFN. For the symmetric network setup used here, the excitatory and inhibitory rates are identical. For comparison, execute the example brunel_delta_nest.py.
data = nest.GetStatus(multimeter)[0]['events']
rates_ex = data['rate'][numpy.where(data['senders'] == siegert_ex)]
rates_in = data['rate'][numpy.where(data['senders'] == siegert_in)]
times = data['times'][numpy.where(data['senders'] == siegert_in)]
print("Excitatory rate : %.2f Hz" % rates_ex[-1])
print("Inhibitory rate : %.2f Hz" % rates_in[-1])
Random balanced network (exp synapses, multiple time constants)¶
This script simulates an excitatory and an inhibitory population on the basis of the network used in 1.
The example demonstrates the usage of the multisynapse neuron model. Each spike arriving at the neuron triggers an exponential PSP. The time constant associated with the PSP is defined in the receptor-type array tau_syn of each neuron. The receptor types of all connections are uniformly distributed, resulting in uniformly distributed time constants of the PSPs.
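To make the receptor-port mechanism concrete, here is a minimal sketch in the NEST 2.x syntax used throughout this example (the parameter values are illustrative only, not taken from the script): receptor_type k selects the k-th entry of the neuron's tau_syn array.
nrn = nest.Create('iaf_psc_exp_multisynapse',
                  params={'tau_syn': [0.5, 5.0]})  # two receptor ports
sg = nest.Create('spike_generator', params={'spike_times': [10.0]})
# receptor_type 2 selects the second tau_syn entry (5.0 ms decay)
nest.Connect(sg, nrn, syn_spec={'weight': 1.0, 'receptor_type': 2})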
When connecting the network, custom synapse models are used, which allow for querying the number of created synapses. Spike detectors are used to determine the average firing rates of the neurons in the populations. Both the build time and the simulation time of the network are recorded.
References¶
- 1
Brunel N (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience 8, 183-208.
See Also¶
Random balanced network (alpha synapses) connected with NEST
Import all necessary modules for simulation, analysis and plotting.
import nest
import nest.raster_plot
import time
from numpy import exp
nest.ResetKernel()
Assigning the current time to a variable in order to determine the build time of the network.
startbuild = time.time()
Assigning the simulation parameters to variables.
dt = 0.1 # the resolution in ms
simtime = 1000.0 # Simulation time in ms
delay = 1.5 # synaptic delay in ms
Definition of the parameters crucial for asynchronous irregular firing of the neurons.
g = 5.0 # ratio inhibitory weight/excitatory weight
eta = 2.0 # external rate relative to threshold rate
epsilon = 0.1 # connection probability
Definition of the number of neurons in the network and the number of neuron recorded from
order = 2500
NE = 4 * order # number of excitatory neurons
NI = 1 * order # number of inhibitory neurons
N_neurons = NE + NI # number of neurons in total
N_rec = 50 # record from 50 neurons
Definition of connectivity parameter
CE = int(epsilon * NE) # number of excitatory synapses per neuron
CI = int(epsilon * NI) # number of inhibitory synapses per neuron
C_tot = int(CI + CE) # total number of synapses per neuron
Initialization of the parameters of the integrate-and-fire neuron and the synapses. The parameters of the neuron are stored in a dictionary.
tauMem = 20.0 # time constant of membrane potential in ms
theta = 20.0 # membrane threshold potential in mV
J = 0.1 # postsynaptic amplitude in mV
nr_ports = 100 # number of receptor types
# Create array of synaptic time constants for each neuron,
# ranging from 0.1 to 1.09 ms.
tau_syn = [0.1 + 0.01 * i for i in range(nr_ports)]
neuron_params = {"C_m": 1.0,
"tau_m": tauMem,
"t_ref": 2.0,
"E_L": 0.0,
"V_reset": 0.0,
"V_m": 0.0,
"V_th": theta,
"tau_syn": tau_syn}
J_ex = J # amplitude of excitatory postsynaptic current
J_in = -g * J_ex # amplitude of inhibitory postsynaptic current
Definition of the threshold rate, which is the external rate needed to fix the membrane potential around its threshold; of the external firing rate; and of the rate of the Poisson generator, which is multiplied by the in-degree CE and converted to Hz by multiplication by 1000.
nu_th = theta / (J * CE * tauMem)
nu_ex = eta * nu_th
p_rate = 1000.0 * nu_ex * CE
Configuration of the simulation kernel with the previously defined time resolution used in the simulation. Setting print_time to True prints the already processed simulation time as well as its percentage of the total simulation time.
nest.SetKernelStatus({"resolution": dt, "print_time": True,
"overwrite_files": True})
print("Building network")
Configuration of the models iaf_psc_exp_multisynapse and poisson_generator using SetDefaults. This function expects the model name as a string and the parameters in a dictionary. All instances of these models created after this point will have the properties specified in the dictionary by default.
nest.SetDefaults("iaf_psc_exp_multisynapse", neuron_params)
nest.SetDefaults("poisson_generator", {"rate": p_rate})
Creation of the nodes using Create. We store the returned handles in variables for later reference. Here we create the excitatory and inhibitory neuron populations, as well as the Poisson generator and two spike detectors. The spike detectors will later be used to record excitatory and inhibitory spikes.
nodes_ex = nest.Create("iaf_psc_exp_multisynapse", NE)
nodes_in = nest.Create("iaf_psc_exp_multisynapse", NI)
noise = nest.Create("poisson_generator")
espikes = nest.Create("spike_detector")
ispikes = nest.Create("spike_detector")
Configuration of the spike detectors recording excitatory and inhibitory spikes using SetStatus, which expects a list of node handles and a list of parameter dictionaries. Setting the variable to_file to True ensures that the spikes will be recorded in a .gdf file starting with the string assigned to label. Setting withtime and withgid to True ensures that each spike is saved to file with the GID of the spiking neuron and the spike time in one line.
nest.SetStatus(espikes, [{"label": "brunel-py-ex",
"withtime": True,
"withgid": True,
"to_file": True}])
nest.SetStatus(ispikes, [{"label": "brunel-py-in",
"withtime": True,
"withgid": True,
"to_file": True}])
print("Connecting devices")
Definition of a synapse using CopyModel, which expects the model name of a pre-defined synapse, the name of the new custom synapse and an optional parameter dictionary. The parameters defined in the dictionary will be the default parameters of the custom synapse. Here we define one synapse for the excitatory and one for the inhibitory connections, giving them the previously defined weights and equal delays.
nest.CopyModel("static_synapse", "excitatory",
{"weight": J_ex, "delay": delay})
nest.CopyModel("static_synapse", "inhibitory",
{"weight": J_in, "delay": delay})
Connecting the previously defined Poisson generator to the excitatory and inhibitory neurons using the excitatory synapse. Since the Poisson generator is connected to all neurons in the population, the default rule (all_to_all) of Connect is used. The synaptic properties are pre-defined in a dictionary and inserted via syn_spec. As synaptic model the pre-defined synapses "excitatory" and "inhibitory" are chosen, thus setting weight and delay. The receptor type is drawn from a distribution for each connection, which is specified in the synapse properties by assigning a dictionary to the keyword receptor_type; the dictionary includes the specification of the distribution and the associated parameters.
syn_params_ex = {"model": "excitatory",
"receptor_type": {"distribution": "uniform_int",
"low": 1, "high": nr_ports}
}
syn_params_in = {"model": "inhibitory",
"receptor_type": {"distribution": "uniform_int",
"low": 1, "high": nr_ports}
}
nest.Connect(noise, nodes_ex, syn_spec=syn_params_ex)
nest.Connect(noise, nodes_in, syn_spec=syn_params_ex)
Connecting the first N_rec nodes of the excitatory and inhibitory populations to the associated spike detectors using excitatory synapses. Here the same shortcut for the specification of the synapse as defined above is used.
nest.Connect(nodes_ex[:N_rec], espikes, syn_spec="excitatory")
nest.Connect(nodes_in[:N_rec], ispikes, syn_spec="excitatory")
print("Connecting network")
print("Excitatory connections")
Connecting the excitatory population to all neurons while distributing the receptor ports. Here we use the previously defined parameter dictionary syn_params_ex. Beforehand, the connection parameters are defined in a dictionary; we use the connection rule fixed_indegree, which requires the definition of the indegree.
conn_params_ex = {'rule': 'fixed_indegree', 'indegree': CE}
nest.Connect(nodes_ex, nodes_ex + nodes_in, conn_params_ex, syn_params_ex)
print("Inhibitory connections")
Connecting the inhibitory population to all neurons while distributing the receptor ports. Here we use the previously defined parameter dictionary syn_params_in. The connection parameters are defined analogously to the connection from the excitatory population above.
conn_params_in = {'rule': 'fixed_indegree', 'indegree': CI}
nest.Connect(nodes_in, nodes_ex + nodes_in, conn_params_in, syn_params_in)
Storage of the time point after the buildup of the network in a variable.
endbuild = time.time()
Simulation of the network.
print("Simulating")
nest.Simulate(simtime)
Storage of the time point after the simulation of the network in a variable.
endsimulate = time.time()
Reading out the total number of spikes received from the spike detector connected to the excitatory population and the inhibitory population.
events_ex = nest.GetStatus(espikes, "n_events")[0]
events_in = nest.GetStatus(ispikes, "n_events")[0]
Calculation of the average firing rate of the excitatory and the inhibitory neurons by dividing the total number of recorded spikes by the number of neurons recorded from and the simulation time. The multiplication by 1000.0 converts the unit 1/ms to 1/s=Hz.
rate_ex = events_ex / simtime * 1000.0 / N_rec
rate_in = events_in / simtime * 1000.0 / N_rec
Reading out the number of connections established using the excitatory and inhibitory synapse model. The numbers are summed up resulting in the total number of synapses.
num_synapses = (nest.GetDefaults("excitatory")["num_connections"] +
nest.GetDefaults("inhibitory")["num_connections"])
Establishing the time it took to build and simulate the network by taking the difference of the pre-defined time variables.
build_time = endbuild - startbuild
sim_time = endsimulate - endbuild
Printing the network properties, firing rates and building times.
print("Brunel network simulation (Python)")
print("Number of neurons : {0}".format(N_neurons))
print("Number of synapses: {0}".format(num_synapses))
print("       Excitatory  : {0}".format(int(CE * N_neurons) + N_neurons))
print(" Inhibitory : {0}".format(int(CI * N_neurons)))
print("Excitatory rate : %.2f Hz" % rate_ex)
print("Inhibitory rate : %.2f Hz" % rate_in)
print("Building time : %.2f s" % build_time)
print("Simulation time : %.2f s" % sim_time)
Plot a raster of the excitatory neurons and a histogram.
nest.raster_plot.from_device(espikes, hist=True)
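The same routine can be applied to the inhibitory spike detector to inspect the inhibitory population; a minimal sketch (not part of the original script, assuming matplotlib is available):
import matplotlib.pyplot as plt
nest.raster_plot.from_device(ispikes, hist=True)
plt.show()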
Use evolution strategies to find parameters for a random balanced network (alpha synapses)¶
This script uses an optimization algorithm to find the appropriate parameter values for the external drive “eta” and the relative ratio of excitation and inhibition “g” for a balanced random network that lead to particular population-averaged rates, coefficients of variation and correlations.
From an initial Gaussian search distribution parameterized with mean and standard deviation, network parameters are sampled. Network realizations of these parameters are simulated and evaluated according to an objective function that measures how close the activity statistics are to their desired values (~fitness). From these fitness values the approximate natural gradient of the fitness landscape is computed and used to update the parameters of the search distribution. This procedure is repeated until the maximal number of function evaluations is reached or the width of the search distribution becomes extremely small. We use the following fitness function (reconstructed here from the objective_function defined below):

\[ f = -\alpha\,(r - r^*)^2 - \beta\,(\mathrm{CV} - \mathrm{CV}^*)^2 - \gamma\,(\mathrm{corr} - \mathrm{corr}^*)^2 \]

where alpha, beta and gamma are weighting factors, r, CV and corr denote the population-averaged rate, coefficient of variation and correlation, and stars indicate target values.
The network contains an excitatory and an inhibitory population on the basis of the network used in 1.
The optimization algorithm (evolution strategies) is described in Wierstra et al. 2.
References¶
- 1
Brunel N (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience 8(3):183-208.
- 2
Wierstra D, Schaul T, Glasmachers T, Sun Y, Peters J, Schmidhuber J (2014). Natural evolution strategies. Journal of Machine Learning Research 15(1):949-980.
Authors¶
Jakob Jordan
from __future__ import print_function
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
import numpy as np
import nest
from numpy import exp
Analysis
def cut_warmup_time(spikes, warmup_time):
# Removes initial warmup time from recorded spikes
spikes['senders'] = spikes['senders'][
spikes['times'] > warmup_time]
spikes['times'] = spikes['times'][
spikes['times'] > warmup_time]
return spikes
def compute_rate(spikes, N_rec, sim_time):
# Computes average rate from recorded spikes
return (1. * len(spikes['times']) / N_rec / sim_time * 1e3)
def sort_spikes(spikes):
# Sorts recorded spikes by gid
unique_gids = sorted(np.unique(spikes['senders']))
spiketrains = []
for gid in unique_gids:
spiketrains.append(spikes['times'][spikes['senders'] == gid])
return unique_gids, spiketrains
def compute_cv(spiketrains):
# Computes coefficient of variation from sorted spikes
if spiketrains:
isis = np.hstack([np.diff(st) for st in spiketrains])
if len(isis) > 1:
return np.std(isis) / np.mean(isis)
else:
return 0.
else:
return 0.
def bin_spiketrains(spiketrains, t_min, t_max, t_bin):
# Bins sorted spikes
bins = np.arange(t_min, t_max, t_bin)
return bins, [np.histogram(s, bins=bins)[0] for s in spiketrains]
def compute_correlations(binned_spiketrains):
# Computes correlations from binned spiketrains
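# The expression below averages the off-diagonal entries of the
# correlation matrix: np.sum(cc) includes the n diagonal entries
# (each equal to 1), hence the subtraction of n and the division
# by the number of off-diagonal entries n * (n - 1).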
n = len(binned_spiketrains)
if n > 1:
cc = np.corrcoef(binned_spiketrains)
return 1. / (n * (n - 1.)) * (np.sum(cc) - n)
else:
return 0.
def compute_statistics(parameters, espikes, ispikes):
# Computes population-averaged rates coefficients of variation and
# correlations from recorded spikes of excitatory and inhibitory
# populations
espikes = cut_warmup_time(espikes, parameters['warmup_time'])
ispikes = cut_warmup_time(ispikes, parameters['warmup_time'])
erate = compute_rate(espikes, parameters['N_rec'], parameters['sim_time'])
irate = compute_rate(ispikes, parameters['N_rec'], parameters['sim_time'])
egids, espiketrains = sort_spikes(espikes)
igids, ispiketrains = sort_spikes(ispikes)
ecv = compute_cv(espiketrains)
icv = compute_cv(ispiketrains)
ecorr = compute_correlations(
bin_spiketrains(espiketrains, 0., parameters['sim_time'], 1.)[1])
icorr = compute_correlations(
bin_spiketrains(ispiketrains, 0., parameters['sim_time'], 1.)[1])
return (np.mean([erate, irate]),
np.mean([ecv, icv]),
np.mean([ecorr, icorr]))
Network simulation
def simulate(parameters):
# Simulates the network and returns recorded spikes for excitatory
# and inhibitory population
# Code taken from brunel_alpha_nest.py
def LambertWm1(x):
nest.ll_api.sli_push(x)
nest.ll_api.sli_run('LambertWm1')
y = nest.ll_api.sli_pop()
return y
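# Note: outside of a NEST session, the same branch of the Lambert W
# function is available as scipy.special.lambertw(x, k=-1).real
# (assuming SciPy is installed).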
def ComputePSPnorm(tauMem, CMem, tauSyn):
a = (tauMem / tauSyn)
b = (1.0 / tauSyn - 1.0 / tauMem)
# time of maximum
t_max = 1.0 / b * (-LambertWm1(-exp(-1.0 / a) / a) - 1.0 / a)
# maximum of PSP for current of unit amplitude
return (exp(1.0) / (tauSyn * CMem * b) *
((exp(-t_max / tauMem) - exp(-t_max / tauSyn)) / b -
t_max * exp(-t_max / tauSyn)))
# number of excitatory neurons
NE = int(parameters['gamma'] * parameters['N'])
# number of inhibitory neurons
NI = parameters['N'] - NE
# number of excitatory synapses per neuron
CE = int(parameters['epsilon'] * NE)
# number of inhibitory synapses per neuron
CI = int(parameters['epsilon'] * NI)
tauSyn = 0.5 # synaptic time constant in ms
tauMem = 20.0 # time constant of membrane potential in ms
CMem = 250.0 # capacitance of membrane in pF
theta = 20.0 # membrane threshold potential in mV
neuron_parameters = {
'C_m': CMem,
'tau_m': tauMem,
'tau_syn_ex': tauSyn,
'tau_syn_in': tauSyn,
't_ref': 2.0,
'E_L': 0.0,
'V_reset': 0.0,
'V_m': 0.0,
'V_th': theta
}
J = 0.1 # postsynaptic amplitude in mV
J_unit = ComputePSPnorm(tauMem, CMem, tauSyn)
J_ex = J / J_unit # amplitude of excitatory postsynaptic current
# amplitude of inhibitory postsynaptic current
J_in = -parameters['g'] * J_ex
nu_th = (theta * CMem) / (J_ex * CE * exp(1) * tauMem * tauSyn)
nu_ex = parameters['eta'] * nu_th
p_rate = 1000.0 * nu_ex * CE
nest.ResetKernel()
nest.set_verbosity('M_FATAL')
nest.SetKernelStatus({'rng_seeds': [parameters['seed']],
'resolution': parameters['dt']})
nest.SetDefaults('iaf_psc_alpha', neuron_parameters)
nest.SetDefaults('poisson_generator', {'rate': p_rate})
nodes_ex = nest.Create('iaf_psc_alpha', NE)
nodes_in = nest.Create('iaf_psc_alpha', NI)
noise = nest.Create('poisson_generator')
espikes = nest.Create('spike_detector')
ispikes = nest.Create('spike_detector')
nest.SetStatus(espikes, [{'label': 'brunel-py-ex',
'withtime': True,
'withgid': True,
'to_file': False}])
nest.SetStatus(ispikes, [{'label': 'brunel-py-in',
'withtime': True,
'withgid': True,
'to_file': False}])
nest.CopyModel('static_synapse', 'excitatory',
{'weight': J_ex, 'delay': parameters['delay']})
nest.CopyModel('static_synapse', 'inhibitory',
{'weight': J_in, 'delay': parameters['delay']})
nest.Connect(noise, nodes_ex, syn_spec='excitatory')
nest.Connect(noise, nodes_in, syn_spec='excitatory')
if parameters['N_rec'] > NE:
raise ValueError(
'Requested recording from {} neurons, \
but only {} in excitatory population'.format(
parameters['N_rec'], NE))
if parameters['N_rec'] > NI:
raise ValueError(
'Requested recording from {} neurons, \
but only {} in inhibitory population'.format(
parameters['N_rec'], NI))
nest.Connect(nodes_ex[:parameters['N_rec']], espikes)
nest.Connect(nodes_in[:parameters['N_rec']], ispikes)
conn_parameters_ex = {'rule': 'fixed_indegree', 'indegree': CE}
nest.Connect(
nodes_ex, nodes_ex + nodes_in, conn_parameters_ex, 'excitatory')
conn_parameters_in = {'rule': 'fixed_indegree', 'indegree': CI}
nest.Connect(
nodes_in, nodes_ex + nodes_in, conn_parameters_in, 'inhibitory')
nest.Simulate(parameters['sim_time'])
return (nest.GetStatus(espikes, 'events')[0],
nest.GetStatus(ispikes, 'events')[0])
Optimization
def default_population_size(dimensions):
# Returns a population size suited for the given number of dimensions
# See Wierstra et al. (2014)
return 4 + int(np.floor(3 * np.log(dimensions)))
def default_learning_rate_mu():
# Returns a default learning rate for the mean of the search distribution
# See Wierstra et al. (2014)
return 1
def default_learning_rate_sigma(dimensions):
# Returns a default learning rate for the standard deviation of the
# search distribution for the given number of dimensions
# See Wierstra et al. (2014)
return (3 + np.log(dimensions)) / (12. * np.sqrt(dimensions))
def compute_utility(fitness):
# Computes utility and order used for fitness shaping
# See Wierstra et al. (2014)
n = len(fitness)
order = np.argsort(fitness)[::-1]
fitness = fitness[order]
utility = [
np.max([0., np.log((n / 2) + 1) - np.log(k + 1)]) for k in range(n)]
utility = utility / np.sum(utility) - 1. / n
return order, utility
def optimize(func, mu, sigma, learning_rate_mu=None, learning_rate_sigma=None,
population_size=None, fitness_shaping=True,
mirrored_sampling=True, record_history=False,
max_generations=2000, min_sigma=1e-8, verbosity=0):
###########################################################################
# Optimizes an objective function via evolution strategies using the
# natural gradient of multinormal search distributions in natural
# coordinates. Does not consider covariances between parameters (
# "Separable natural evolution strategies").
# See Wierstra et al. (2014)
#
# Parameters
# ----------
# func: function
# The function to be maximized.
# mu: float
# Initial mean of the search distribution.
# sigma: float
# Initial standard deviation of the search distribution.
# learning_rate_mu: float
# Learning rate of mu.
# learning_rate_sigma: float
# Learning rate of sigma.
# population_size: int
# Number of individuals sampled in each generation.
# fitness_shaping: bool
# Whether to use fitness shaping, compensating for large
# deviations in fitness, see Wierstra et al. (2014).
# mirrored_sampling: bool
# Whether to use mirrored sampling, i.e., evaluating a mirrored
# sample for each sample, see Wierstra et al. (2014).
# record_history: bool
# Whether to record history of search distribution parameters,
# fitness values and individuals.
# max_generations: int
# Maximal number of generations.
# min_sigma: float
# Minimal value for standard deviation of search
# distribution. If any dimension has a value smaller than this,
# the search is stopped.
# verbosity: int
# Whether to continuously print progress information
# (values > 0 enable output).
#
# Returns
# -------
# dict
# Dictionary of final parameters of search distribution and
# history.
if not isinstance(mu, np.ndarray):
raise TypeError('mu needs to be of type np.ndarray')
if not isinstance(sigma, np.ndarray):
raise TypeError('sigma needs to be of type np.ndarray')
if learning_rate_mu is None:
learning_rate_mu = default_learning_rate_mu()
if learning_rate_sigma is None:
learning_rate_sigma = default_learning_rate_sigma(mu.size)
if population_size is None:
population_size = default_population_size(mu.size)
generation = 0
mu_history = []
sigma_history = []
pop_history = []
fitness_history = []
while True:
# create new population using the search distribution
s = np.random.normal(0, 1, size=(population_size,) + np.shape(mu))
z = mu + sigma * s
# add mirrored perturbations if enabled
if mirrored_sampling:
z = np.vstack([z, mu - sigma * s])
s = np.vstack([s, -s])
# evaluate fitness for every individual in population
fitness = np.fromiter((func(*zi) for zi in z), np.float)
# print status if enabled
if verbosity > 0:
print(
'# Generation {:d} | fitness {:.3f} | mu {} | sigma {}'.format(
generation, np.mean(fitness),
', '.join(str(np.round(mu_i, 3)) for mu_i in mu),
', '.join(str(np.round(sigma_i, 3)) for sigma_i in sigma)
))
# apply fitness shaping if enabled
if fitness_shaping:
order, utility = compute_utility(fitness)
s = s[order]
z = z[order]
else:
utility = fitness
# bookkeeping
if record_history:
mu_history.append(mu.copy())
sigma_history.append(sigma.copy())
pop_history.append(z.copy())
fitness_history.append(fitness)
# exit if max generations reached or search distributions are
# very narrow
if generation == max_generations or np.all(sigma < min_sigma):
break
# update parameter of search distribution via natural gradient
# descent in natural coordinates
mu += learning_rate_mu * sigma * np.dot(utility, s)
sigma *= np.exp(learning_rate_sigma / 2. * np.dot(utility, s**2 - 1))
generation += 1
return {
'mu': mu,
'sigma': sigma,
'fitness_history': np.array(fitness_history),
'mu_history': np.array(mu_history),
'sigma_history': np.array(sigma_history),
'pop_history': np.array(pop_history)
}
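Before applying optimize to the network, it can be instructive to test it on a toy objective. A minimal sketch with a hypothetical quadratic fitness (not part of the original script):
def toy_fitness(x, y):
    # global maximum at (1.0, -0.5)
    return -(x - 1.0) ** 2 - (y + 0.5) ** 2

toy_result = optimize(toy_fitness,
                      mu=np.array([0., 0.]),
                      sigma=np.array([1., 1.]),
                      max_generations=100)
print(toy_result['mu'])  # should approach [1.0, -0.5]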
def optimize_network(optimization_parameters, simulation_parameters):
# Searches for suitable network parameters to fulfill defined constraints
np.random.seed(simulation_parameters['seed'])
def objective_function(g, eta):
# Returns the fitness of a specific network parametrization
# create local copy of parameters that uses parameters given
# by optimization algorithm
simulation_parameters_local = simulation_parameters.copy()
simulation_parameters_local['g'] = g
simulation_parameters_local['eta'] = eta
# perform the network simulation
espikes, ispikes = simulate(simulation_parameters_local)
# analyse the result and compute fitness
rate, cv, corr = compute_statistics(
simulation_parameters, espikes, ispikes)
fitness = \
- optimization_parameters['fitness_weight_rate'] * (
rate - optimization_parameters['target_rate']) ** 2 \
- optimization_parameters['fitness_weight_cv'] * (
cv - optimization_parameters['target_cv']) ** 2 \
- optimization_parameters['fitness_weight_corr'] * (
corr - optimization_parameters['target_corr']) ** 2
return fitness
return optimize(
objective_function,
np.array(optimization_parameters['mu']),
np.array(optimization_parameters['sigma']),
max_generations=optimization_parameters['max_generations'],
record_history=True,
verbosity=optimization_parameters['verbosity']
)
Main
if __name__ == '__main__':
simulation_parameters = {
'seed': 123,
'dt': 0.1, # (ms) simulation resolution
'sim_time': 1000., # (ms) simulation duration
'warmup_time': 300., # (ms) duration ignored during analysis
'delay': 1.5, # (ms) synaptic delay
'g': None, # relative ratio of excitation and inhibition
'eta': None, # relative strength of external drive
'epsilon': 0.1, # average connectivity of network
'N': 400, # number of neurons in network
'gamma': 0.8, # relative size of excitatory and
# inhibitory population
'N_rec': 40, # number of neurons to record activity from
}
optimization_parameters = {
'verbosity': 1, # print progress over generations
'max_generations': 20, # maximal number of generations
'target_rate': 1.89, # (spikes/s) target rate
'target_corr': 0.0, # target correlation
'target_cv': 1., # target coefficient of variation
'mu': [1., 3.], # initial mean for search distribution
# (mu(g), mu(eta))
'sigma': [0.15, 0.05], # initial sigma for search
# distribution (sigma(g), sigma(eta))
# hyperparameters of the fitness function; these are used to
# compensate for the different typical scales of the
# individual measures, rate ~ O(1), cv ~ O(0.1), corr ~ O(0.01)
'fitness_weight_rate': 1., # relative weight of rate deviation
'fitness_weight_cv': 10., # relative weight of cv deviation
'fitness_weight_corr': 100., # relative weight of corr deviation
}
# optimize network parameters
optimization_result = optimize_network(optimization_parameters,
simulation_parameters)
simulation_parameters['g'] = optimization_result['mu'][0]
simulation_parameters['eta'] = optimization_result['mu'][1]
espikes, ispikes = simulate(simulation_parameters)
rate, cv, corr = compute_statistics(
simulation_parameters, espikes, ispikes)
print('Statistics after optimization:', end=' ')
print('Rate: {:.3f}, cv: {:.3f}, correlation: {:.3f}'.format(
rate, cv, corr))
# plot results
fig = plt.figure(figsize=(10, 4))
ax1 = fig.add_axes([0.06, 0.12, 0.25, 0.8])
ax2 = fig.add_axes([0.4, 0.12, 0.25, 0.8])
ax3 = fig.add_axes([0.74, 0.12, 0.25, 0.8])
ax1.set_xlabel('Time (ms)')
ax1.set_ylabel('Neuron id')
ax2.set_xlabel(r'Relative strength of inhibition $g$')
ax2.set_ylabel(r'Relative strength of external drive $\eta$')
ax3.set_xlabel('Generation')
ax3.set_ylabel('Fitness')
# raster plot
ax1.plot(espikes['times'], espikes['senders'], ls='', marker='.')
# search distributions and individuals
for mu, sigma in zip(optimization_result['mu_history'],
optimization_result['sigma_history']):
ellipse = Ellipse(
xy=mu, width=2 * sigma[0], height=2 * sigma[1], alpha=0.5, fc='k')
ellipse.set_clip_box(ax2.bbox)
ax2.add_artist(ellipse)
ax2.plot(optimization_result['mu_history'][:, 0],
optimization_result['mu_history'][:, 1],
marker='.', color='k', alpha=0.5)
for generation in optimization_result['pop_history']:
ax2.scatter(generation[:, 0], generation[:, 1])
# fitness over generations
ax3.errorbar(np.arange(len(optimization_result['fitness_history'])),
np.mean(optimization_result['fitness_history'], axis=1),
yerr=np.std(optimization_result['fitness_history'], axis=1))
fig.savefig('brunel_alpha_evolution_strategies.pdf')
Using CSA for connection setup¶
This example sets up a simple network in NEST using the Connection Set Algebra (CSA) instead of using the built-in connection routines.
Using the CSA requires NEST to be compiled with support for libneurosim. For details, see 1.
See Also¶
References¶
- 1
Djurfeldt M, Davison AP and Eppler JM (2014). Efficient generation of connectivity in neuronal networks from simulator-independent descriptions, Front. Neuroinform. http://dx.doi.org/10.3389/fninf.2014.00043
First, we import all necessary modules for simulation and plotting.
import nest
from nest import voltage_trace
from nest import visualization
Next, we check for the availability of the CSA Python module. If it does not import, we exit with an error message.
try:
import csa
haveCSA = True
except ImportError:
print("This example requires CSA to be installed in order to run.\n" +
"Please make sure you compiled NEST using\n" +
" -Dwith-libneurosim=[OFF|ON|</path/to/libneurosim>]\n" +
"and CSA and libneurosim are available.")
import sys
sys.exit()
To set up the connectivity, we create a random connection set with a probability of 0.1 and two associated values (10000.0 and 1.0) used as weight and delay, respectively.
cs = csa.cset(csa.random(0.1), 10000.0, 1.0)
Using the Create command from PyNEST, we create the neurons of the pre- and postsynaptic populations, each containing 16 neurons.
pre = nest.Create("iaf_psc_alpha", 16)
post = nest.Create("iaf_psc_alpha", 16)
We can now connect the populations using the CGConnect function. It takes the IDs of pre- and postsynaptic neurons (pre and post), the connection set (cs) and a dictionary that maps the parameters weight and delay to positions in the value set associated with the connection set.
nest.CGConnect(pre, post, cs, {"weight": 0, "delay": 1})
To stimulate the network, we create a poisson_generator and set it up to fire with a rate of 100000 spikes per second. It is connected to the neurons of the pre-synaptic population.
pg = nest.Create("poisson_generator", params={"rate": 100000.0})
nest.Connect(pg, pre, "all_to_all")
To measure and record the membrane potentials of the neurons, we create a voltmeter and connect it to all post-synaptic nodes.
vm = nest.Create("voltmeter")
nest.Connect(vm, post, "all_to_all")
We save the whole connection graph of the network as a PNG image using the plot_network function of the visualization submodule of PyNEST.
allnodes = pg + pre + post + vm
visualization.plot_network(allnodes, "csa_example_graph.png")
Finally, we simulate the network for 50 ms. The voltage traces of the post-synaptic nodes are plotted.
nest.Simulate(50.0)
voltage_trace.from_device(vm)
Using CSA with Topology layers¶
This example shows a brute-force way of specifying connections between NEST Topology layers using Connection Set Algebra instead of the built-in connection routines.
Using the CSA requires NEST to be compiled with support for libneurosim 1.
This example uses the function GetLeaves, which is deprecated. A deprecation warning is therefore issued. For details about deprecated functions, see documentation.
See Also¶
References¶
- 1
Djurfeldt M, Davison AP and Eppler JM (2014). Efficient generation of connectivity in neuronal networks from simulator-independent descriptions. Front. Neuroinform. http://dx.doi.org/10.3389/fninf.2014.00043
First, we import all necessary modules.
import nest
import nest.topology as topo
Next, we check for the availability of the CSA Python module. If it does not import, we exit with an error message.
try:
import csa
haveCSA = True
except ImportError:
print("This example requires CSA to be installed in order to run.\n" +
"Please make sure you compiled NEST using\n" +
" -Dwith-libneurosim=[OFF|ON|</path/to/libneurosim>]\n" +
"and CSA and libneurosim are available.")
import sys
sys.exit()
We define a factory that returns a CSA-style geometry function for the given layer. For each CSA index, the returned function returns the position of the corresponding neuron in space as a 2- or 3-element list. Note that this function stores a copy of the neuron positions internally, entailing memory overhead.
def geometryFunction(topologyLayer):
positions = topo.GetPosition(nest.GetLeaves(topologyLayer)[0])
def geometry_function(idx):
return positions[idx]
return geometry_function
We create two layers that have 20x20 neurons of type iaf_psc_alpha.
pop1 = topo.CreateLayer({'elements': 'iaf_psc_alpha',
'rows': 20, 'columns': 20})
pop2 = topo.CreateLayer({'elements': 'iaf_psc_alpha',
'rows': 20, 'columns': 20})
For each layer, we create a CSA-style geometry function and a CSA metric based on them.
g1 = geometryFunction(pop1)
g2 = geometryFunction(pop2)
d = csa.euclidMetric2d(g1, g2)
The connection set cs describes a Gaussian connectivity profile with sigma = 0.2 and cutoff at 0.5, and two values (10000.0 and 1.0) used as weight and delay, respectively.
cs = csa.cset(csa.random * (csa.gaussian(0.2, 0.5) * d), 10000.0, 1.0)
We can now connect the populations using the CGConnect function. It takes the IDs of pre- and postsynaptic neurons (pop1 and pop2), the connection set (cs) and a dictionary that maps the parameters weight and delay to positions in the value set associated with the connection set.
# This is a work-around until NEST 3.0 is released. It will issue a deprecation
# warning.
pop1_gids = nest.GetLeaves(pop1)[0]
pop2_gids = nest.GetLeaves(pop2)[0]
nest.CGConnect(pop1_gids, pop2_gids, cs, {"weight": 0, "delay": 1})
Finally, we use the PlotTargets function to show all targets in pop2 starting at the center neuron of pop1.
topo.PlotTargets(topo.FindCenterElement(pop1), pop2)
Random balanced network HPC benchmark¶
This script produces a balanced random network of scale*11250 neurons in which the excitatory-excitatory neurons exhibit STDP with multiplicative depression and power-law potentiation. A mutual equilibrium is obtained between the activity dynamics (low rate in asynchronous irregular regime) and the synaptic weight distribution (unimodal). The number of incoming connections per neuron is fixed and independent of network size (indegree=11250).
This is the standard network investigated in 1, 2, 3.
A note on scaling¶
This benchmark was originally developed for very large-scale simulations on supercomputers with more than 1 million neurons in the network and 11,250 incoming synapses per neuron. For such large networks, synaptic input to a single neuron will be only weakly correlated across inputs and network activity will remain stable over long periods of time.
The original network size corresponds to a scale parameter of 100 or more. In order to make it possible to test this benchmark script on desktop computers, the scale parameter is set to 1 below, while the number of 11,250 incoming synapses per neuron is retained. In this limit, correlations in the input to neurons are large and will lead to increasing synaptic weights. Over time, network dynamics will therefore become unstable and all neurons in the network will fire in synchrony, leading to extremely slow simulation speeds.
Therefore, the presimulation time is reduced to 50 ms below and the simulation time to 250 ms, while we usually use 100 ms presimulation and 1000 ms simulation time.
For meaningful use of this benchmark, you should use a scale > 10 and check that the firing rate reported at the end of the benchmark is below 10 spikes per second.
References¶
- 1
Morrison A, Aertsen A, Diesmann M (2007). Spike-timing-dependent plasticity in balanced random networks. Neural Comput 19(6):1437-67
- 2
Helias et al (2012). Supercomputers ready for use as discovery machines for neuroscience. Front. Neuroinform. 6:26
- 3
Kunkel et al (2014). Spiking network simulation code for petascale computers. Front. Neuroinform. 8:78
from __future__ import print_function # for Python 2
import numpy as np
import os
import sys
import time
import nest
import nest.raster_plot
M_INFO = 10
M_ERROR = 30
Parameter section: define all relevant parameters here; changes should be made here.
params = {
'nvp': 1, # total number of virtual processes
'scale': 1., # scaling factor of the network size
# total network size = scale*11250 neurons
'simtime': 250., # total simulation time in ms
'presimtime': 50., # simulation time until reaching equilibrium
'dt': 0.1, # simulation step
'record_spikes': True, # switch to record spikes of excitatory
# neurons to file
'path_name': '.', # path where all output files will be written
'log_file': 'log', # naming scheme for the log files
}
def convert_synapse_weight(tau_m, tau_syn, C_m):
"""
Computes conversion factor for synapse weight from mV to pA
This function is specific to the leaky integrate-and-fire neuron
model with alpha-shaped postsynaptic currents.
"""
# compute time to maximum of V_m after spike input
# to neuron at rest
a = tau_m / tau_syn
b = 1.0 / tau_syn - 1.0 / tau_m
t_rise = 1.0 / b * (-lambertwm1(-np.exp(-1.0 / a) / a).real - 1.0 / a)
v_max = np.exp(1.0) / (tau_syn * C_m * b) * (
(np.exp(-t_rise / tau_m) - np.exp(-t_rise / tau_syn)) /
b - t_rise * np.exp(-t_rise / tau_syn))
return 1. / v_max
For compatibility with earlier benchmarks, we require a rise time of t_rise = 1.700759 ms and we choose tau_syn to achieve this for the given tau_m. This requires numerical inversion of the expression for t_rise in convert_synapse_weight. We computed this value once and hard-code it here.
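As an illustration, the hard-coded value can be reproduced by inverting t_rise numerically; the following sketch assumes SciPy is available and is not part of the original benchmark (the helper name psp_rise_time is ours):
from scipy.optimize import brentq
from scipy.special import lambertw
import numpy as np

def psp_rise_time(tau_s, tau_m=10.0):
    # time to the maximum of the PSP for alpha-shaped currents
    a = tau_m / tau_s
    b = 1.0 / tau_s - 1.0 / tau_m
    return 1.0 / b * (-lambertw(-np.exp(-1.0 / a) / a, k=-1).real - 1.0 / a)

# find tau_syn such that the rise time equals 1.700759 ms
print(brentq(lambda x: psp_rise_time(x) - 1.700759, 0.01, 5.0))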
tau_syn = 0.32582722403722841
brunel_params = {
'NE': int(9000 * params['scale']), # number of excitatory neurons
'NI': int(2250 * params['scale']), # number of inhibitory neurons
'Nrec': 1000, # number of neurons to record spikes from
'model_params': { # Set variables for iaf_psc_alpha
'E_L': 0.0, # Resting membrane potential(mV)
'C_m': 250.0, # Capacity of the membrane(pF)
'tau_m': 10.0, # Membrane time constant(ms)
't_ref': 0.5, # Duration of refractory period(ms)
'V_th': 20.0, # Threshold(mV)
'V_reset': 0.0, # Reset Potential(mV)
# time const. postsynaptic excitatory currents(ms)
'tau_syn_ex': tau_syn,
# time const. postsynaptic inhibitory currents(ms)
'tau_syn_in': tau_syn,
'tau_minus': 30.0, # time constant for STDP(depression)
# V can be randomly initialized see below
'V_m': 5.7 # mean value of membrane potential
},
####################################################################
# Note that Kunkel et al. (2014) report different values. The values
# in the paper were used for the benchmarks on K, the values given
# here were used for the benchmark on JUQUEEN.
'randomize_Vm': True,
'mean_potential': 5.7,
'sigma_potential': 7.2,
'delay': 1.5, # synaptic delay, all connections(ms)
# synaptic weight
'JE': 0.14, # peak of EPSP
'sigma_w': 3.47, # standard dev. of E->E synapses(pA)
'g': -5.0,
'stdp_params': {
'delay': 1.5,
'alpha': 0.0513,
'lambda': 0.1, # STDP step size
'mu': 0.4, # STDP weight dependence exponent(potentiation)
'tau_plus': 15.0, # time constant for potentiation
},
'eta': 1.685, # scaling of external stimulus
'filestem': params['path_name']
}
Function Section
def build_network(logger):
"""Builds the network including setting of simulation and neuron
parameters, creation of neurons and connections
Requires an instance of Logger as argument
"""
tic = time.time() # start timer on construction
# unpack a few variables for convenience
NE = brunel_params['NE']
NI = brunel_params['NI']
model_params = brunel_params['model_params']
stdp_params = brunel_params['stdp_params']
# set global kernel parameters
nest.SetKernelStatus({
'total_num_virtual_procs': params['nvp'],
'resolution': params['dt'],
'overwrite_files': True})
nest.SetDefaults('iaf_psc_alpha', model_params)
nest.message(M_INFO, 'build_network', 'Creating excitatory population.')
E_neurons = nest.Create('iaf_psc_alpha', NE)
nest.message(M_INFO, 'build_network', 'Creating inhibitory population.')
I_neurons = nest.Create('iaf_psc_alpha', NI)
if brunel_params['randomize_Vm']:
nest.message(M_INFO, 'build_network',
'Randomizing membrane potentials.')
seed = nest.GetKernelStatus(
'rng_seeds')[-1] + 1 + nest.GetStatus([0], 'vp')[0]
rng = np.random.RandomState(seed=seed)
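# Each local neuron receives its own draw from the normal
# distribution, hence SetStatus is called per node.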
for node in get_local_nodes(E_neurons):
nest.SetStatus([node],
{'V_m': rng.normal(
brunel_params['mean_potential'],
brunel_params['sigma_potential'])})
for node in get_local_nodes(I_neurons):
nest.SetStatus([node],
{'V_m': rng.normal(
brunel_params['mean_potential'],
brunel_params['sigma_potential'])})
# number of incoming excitatory connections
CE = int(1. * NE / params['scale'])
# number of incoming inhibitory connections
CI = int(1. * NI / params['scale'])
nest.message(M_INFO, 'build_network',
'Creating excitatory stimulus generator.')
# Convert synapse weight from mV to pA
conversion_factor = convert_synapse_weight(
model_params['tau_m'], model_params['tau_syn_ex'], model_params['C_m'])
JE_pA = conversion_factor * brunel_params['JE']
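# External rate that on average drives the free membrane potential
# to threshold (threshold rate of the Brunel network for
# alpha-shaped synaptic currents).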
nu_thresh = model_params['V_th'] / (
CE * model_params['tau_m'] / model_params['C_m'] *
JE_pA * np.exp(1.) * tau_syn)
nu_ext = nu_thresh * brunel_params['eta']
E_stimulus = nest.Create('poisson_generator', 1, {
'rate': nu_ext * CE * 1000.})
nest.message(M_INFO, 'build_network',
'Creating excitatory spike detector.')
if params['record_spikes']:
detector_label = os.path.join(
brunel_params['filestem'],
'alpha_' + str(stdp_params['alpha']) + '_spikes')
E_detector = nest.Create('spike_detector', 1, {
'withtime': True, 'to_file': True, 'label': detector_label})
BuildNodeTime = time.time() - tic
logger.log(str(BuildNodeTime) + ' # build_time_nodes')
logger.log(str(memory_thisjob()) + ' # virt_mem_after_nodes')
tic = time.time()
nest.SetDefaults('static_synapse_hpc', {'delay': brunel_params['delay']})
nest.CopyModel('static_synapse_hpc', 'syn_std')
nest.CopyModel('static_synapse_hpc', 'syn_ex',
{'weight': JE_pA})
nest.CopyModel('static_synapse_hpc', 'syn_in',
{'weight': brunel_params['g'] * JE_pA})
stdp_params['weight'] = JE_pA
nest.SetDefaults('stdp_pl_synapse_hom_hpc', stdp_params)
nest.message(M_INFO, 'build_network', 'Connecting stimulus generators.')
# Connect Poisson generator to neuron
nest.Connect(E_stimulus, E_neurons, {'rule': 'all_to_all'},
{'model': 'syn_ex'})
nest.Connect(E_stimulus, I_neurons, {'rule': 'all_to_all'},
{'model': 'syn_ex'})
nest.message(M_INFO, 'build_network',
'Connecting excitatory -> excitatory population.')
nest.Connect(E_neurons, E_neurons,
{'rule': 'fixed_indegree', 'indegree': CE,
'autapses': False, 'multapses': True},
{'model': 'stdp_pl_synapse_hom_hpc'})
nest.message(M_INFO, 'build_network',
'Connecting inhibitory -> excitatory population.')
nest.Connect(I_neurons, E_neurons,
{'rule': 'fixed_indegree', 'indegree': CI,
'autapses': False, 'multapses': True},
{'model': 'syn_in'})
nest.message(M_INFO, 'build_network',
'Connecting excitatory -> inhibitory population.')
nest.Connect(E_neurons, I_neurons,
{'rule': 'fixed_indegree', 'indegree': CE,
'autapses': False, 'multapses': True},
{'model': 'syn_ex'})
nest.message(M_INFO, 'build_network',
'Connecting inhibitory -> inhibitory population.')
nest.Connect(I_neurons, I_neurons,
{'rule': 'fixed_indegree', 'indegree': CI,
'autapses': False, 'multapses': True},
{'model': 'syn_in'})
if params['record_spikes']:
local_neurons = list(get_local_nodes(E_neurons))
if len(local_neurons) < brunel_params['Nrec']:
nest.message(
M_ERROR, 'build_network',
"""Spikes can only be recorded from local neurons, but the
number of local neurons is smaller than the number of neurons
spikes should be recorded from. Aborting the simulation!""")
exit(1)
nest.message(M_INFO, 'build_network', 'Connecting spike detectors.')
nest.Connect(local_neurons[:brunel_params['Nrec']], E_detector,
'all_to_all', 'static_synapse_hpc')
# read out time used for building
BuildEdgeTime = time.time() - tic
logger.log(str(BuildEdgeTime) + ' # build_edge_time')
logger.log(str(memory_thisjob()) + ' # virt_mem_after_edges')
return E_detector if params['record_spikes'] else None
def run_simulation():
"""Performs a simulation, including network construction"""
# open log file
with Logger(params['log_file']) as logger:
nest.ResetKernel()
nest.set_verbosity(M_INFO)
logger.log(str(memory_thisjob()) + ' # virt_mem_0')
sdet = build_network(logger)
tic = time.time()
nest.Simulate(params['presimtime'])
PreparationTime = time.time() - tic
logger.log(str(memory_thisjob()) + ' # virt_mem_after_presim')
logger.log(str(PreparationTime) + ' # presim_time')
tic = time.time()
nest.Simulate(params['simtime'])
SimCPUTime = time.time() - tic
logger.log(str(memory_thisjob()) + ' # virt_mem_after_sim')
logger.log(str(SimCPUTime) + ' # sim_time')
if params['record_spikes']:
logger.log(str(compute_rate(sdet)) + ' # average rate')
print(nest.GetKernelStatus())
def compute_rate(sdet):
"""Compute local approximation of average firing rate
This approximation is based on the number of local nodes, number
of local spikes and total time. Since this also considers devices,
the actual firing rate is usually underestimated.
"""
n_local_spikes = nest.GetStatus(sdet, 'n_events')[0]
n_local_neurons = brunel_params['Nrec']
simtime = params['simtime']
return 1. * n_local_spikes / (n_local_neurons * simtime) * 1e3
def memory_thisjob():
"""Wrapper to obtain current memory usage"""
nest.ll_api.sr('memory_thisjob')
return nest.ll_api.spp()
def lambertwm1(x):
"""Wrapper for LambertWm1 function"""
nest.ll_api.sr('{} LambertWm1'.format(x))
return nest.ll_api.spp()
def get_local_nodes(nodes):
"""Generator for efficient looping over local nodes
Assumes nodes is a continuous list of gids [1, 2, 3, ...], e.g., as
returned by Create. Only works for nodes with proxies, i.e.,
regular neurons.
"""
nvp = nest.GetKernelStatus('total_num_virtual_procs') # step size
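# NEST assigns neurons to virtual processes round-robin, so once a
# local node has been found we can advance in steps of nvp.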
i = 0
while i < len(nodes):
if nest.GetStatus([nodes[i]], 'local')[0]:
yield nodes[i]
i += nvp
else:
i += 1
class Logger(object):
"""Logger context manager used to properly log memory and timing
information from network simulations.
"""
def __init__(self, file_name):
# copy output to cout for ranks 0..max_rank_cout-1
self.max_rank_cout = 5
# write to log files for ranks 0..max_rank_log-1
self.max_rank_log = 30
self.line_counter = 0
self.file_name = file_name
def __enter__(self):
if nest.Rank() < self.max_rank_log:
# convert rank to string, prepend 0 if necessary to make
# numbers equally wide for all ranks
rank = '{:0' + str(len(str(self.max_rank_log))) + '}'
fn = '{fn}_{rank}.dat'.format(
fn=self.file_name, rank=rank.format(nest.Rank()))
self.f = open(fn, 'w')
return self
def log(self, value):
if nest.Rank() < self.max_rank_log:
line = '{lc} {rank} {value} \n'.format(
lc=self.line_counter, rank=nest.Rank(), value=value)
self.f.write(line)
self.line_counter += 1
if nest.Rank() < self.max_rank_cout:
print(str(nest.Rank()) + ' ' + value + '\n', file=sys.stdout)
print(str(nest.Rank()) + ' ' + value + '\n', file=sys.stderr)
def __exit__(self, exc_type, exc_val, traceback):
if nest.Rank() < self.max_rank_log:
self.f.close()
if __name__ == '__main__':
run_simulation()
Microcircuit Example¶
Hendrik Rothe, Hannah Bos, Sacha van Albada
Description¶
This is a PyNEST implementation of the microcircuit model by Potjans and Diesmann (2014).
This example contains several files:
helpers.py
Helper functions for the simulation and evaluation of the microcircuit.
network.py
Gathers all parameters and connects the different nodes with each other.
network_params.py
Contains the parameters for the network.
sim_params.py
Contains the simulation parameters.
stimulus_params.py
Contains the parameters for the stimuli.
example.py
Use this script to try out the microcircuit.
How to use the Microcircuit model example:
To run the microcircuit on a local machine, we first have to check that the variables N_scaling and K_scaling in network_params.py are set to 0.1. N_scaling adjusts the number of neurons and K_scaling adjusts the number of connections to be simulated. The full network can be run by setting these values to 1. If this is done, the option to print the time progress should be set to False in the file sim_params.py. To run the simulation, use python example.py. The output will be saved in the directory data.
The code can be parallelized using OpenMP and MPI, if NEST has been built with these applications (Parallel computing with NEST). The number of threads (per MPI process) can be chosen by adjusting local_num_threads in sim_params.py. The number of MPI processes can be set by choosing a reasonable value for num_mpi_prc and then running the script with the following command:
mpirun -n num_mpi_prc python example.py
The default version of the simulation uses Poissonian input, which is defined in the file network_params.py, to excite the neuronal populations of the microcircuit. If no Poissonian input is provided, DC input is calculated, which should approximately compensate for the Poissonian input. It is also possible to add thalamic stimulation to the microcircuit or drive it with constant DC input. This can be defined in the file stimulus_params.py.
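For example, thalamic stimulation can be switched on by setting the corresponding flag before the network is constructed; a minimal sketch (the key thalamic_input is taken from network.py below; all other thalamic parameters must also be set in stimulus_params.py):
from stimulus_params import stim_dict
stim_dict['thalamic_input'] = True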
PyNEST microcircuit example¶
Example file to run the microcircuit.
This example uses the function GetNodes, which is deprecated. A deprecation warning is therefore issued. For details about deprecated functions, see documentation.
Import the necessary modules
import time
import numpy as np
import network
from network_params import net_dict
from sim_params import sim_dict
from stimulus_params import stim_dict
Initialize the network and pass parameters to it.
tic = time.time()
net = network.Network(sim_dict, net_dict, stim_dict)
toc = time.time() - tic
print("Time to initialize the network: %.2f s" % toc)
# Connect all nodes.
tic = time.time()
net.setup()
toc = time.time() - tic
print("Time to create the connections: %.2f s" % toc)
# Simulate.
tic = time.time()
net.simulate()
toc = time.time() - tic
print("Time to simulate: %.2f s" % toc)
Plot a raster plot of the spikes of the simulated neurons and the average spike rate of all populations. For visual purposes only spikes 100 ms before and 100 ms after the thalamic stimulus time are plotted here by default. The computation of spike rates discards the first 500 ms of the simulation to exclude initialization artifacts.
raster_plot_time_idx = np.array(
[stim_dict['th_start'] - 100.0, stim_dict['th_start'] + 100.0]
)
fire_rate_time_idx = np.array([500.0, sim_dict['t_sim']])
net.evaluate(raster_plot_time_idx, fire_rate_time_idx)
PyNEST microcircuit network¶
Main file for the microcircuit.
This example uses the function GetNodes, which is deprecated. A deprecation warning is therefore issued. For details about deprecated functions, see documentation.
Authors¶
Hendrik Rothe, Hannah Bos, Sacha van Albada; May 2016
import nest
import numpy as np
import os
from helpers import adj_w_ext_to_K
from helpers import synapses_th_matrix
from helpers import get_total_number_of_synapses
from helpers import get_weight
from helpers import plot_raster
from helpers import fire_rate
from helpers import boxplot
from helpers import compute_DC
class Network:
""" Handles the setup of the network parameters and
provides functions to connect the network and devices.
Arguments
---------
sim_dict
dictionary containing all parameters specific to the simulation
such as the directory the data is stored in and the seeds
(see: sim_params.py)
net_dict
dictionary containing all parameters specific to the neurons
and the network (see: network_params.py)
Keyword Arguments
-----------------
stim_dict
dictionary containing all parameters specific to the stimulus
(see: stimulus_params.py)
"""
def __init__(self, sim_dict, net_dict, stim_dict=None):
self.sim_dict = sim_dict
self.net_dict = net_dict
if stim_dict is not None:
self.stim_dict = stim_dict
else:
self.stim_dict = None
self.data_path = sim_dict['data_path']
if nest.Rank() == 0:
if os.path.isdir(self.sim_dict['data_path']):
print('data directory already exists')
else:
os.mkdir(self.sim_dict['data_path'])
print('data directory created')
print('Data will be written to %s' % self.data_path)
def setup_nest(self):
""" Hands parameters to the NEST-kernel.
Resets the NEST-kernel and passes parameters to it.
The number of seeds for the NEST-kernel is computed, based on the
total number of MPI processes and threads of each.
"""
nest.ResetKernel()
master_seed = self.sim_dict['master_seed']
if nest.Rank() == 0:
print('Master seed: %i ' % master_seed)
nest.SetKernelStatus(
{'local_num_threads': self.sim_dict['local_num_threads']}
)
N_tp = nest.GetKernelStatus(['total_num_virtual_procs'])[0]
if nest.Rank() == 0:
print('Number of total processes: %i' % N_tp)
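# Seed layout derived from master_seed (N_tp = number of
# virtual processes):
#   master_seed .. master_seed + N_tp - 1  -> Python RNGs (self.pyrngs)
#   master_seed + N_tp                     -> global NEST RNG (grng_seed)
#   master_seed + N_tp + 1 .. master_seed + 2 * N_tp -> per-VP NEST RNGs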
rng_seeds = list(
range(
master_seed + 1 + N_tp,
master_seed + 1 + (2 * N_tp)
)
)
grng_seed = master_seed + N_tp
if nest.Rank() == 0:
print(
'Seeds for random number generators of virtual processes: %r'
% rng_seeds
)
print('Global random number generator seed: %i' % grng_seed)
self.pyrngs = [np.random.RandomState(s) for s in list(range(
master_seed, master_seed + N_tp))]
self.sim_resolution = self.sim_dict['sim_resolution']
kernel_dict = {
'resolution': self.sim_resolution,
'grng_seed': grng_seed,
'rng_seeds': rng_seeds,
'overwrite_files': self.sim_dict['overwrite_files'],
'print_time': self.sim_dict['print_time'],
}
nest.SetKernelStatus(kernel_dict)
def create_populations(self):
""" Creates the neuronal populations.
The neuronal populations are created and the parameters are assigned
to them. The initial membrane potential of the neurons is drawn from a
normal distribution. Scaling of the number of neurons and of the
synapses is performed. If scaling is performed extra DC input is added
to the neuronal populations.
"""
self.N_full = self.net_dict['N_full']
self.N_scaling = self.net_dict['N_scaling']
self.K_scaling = self.net_dict['K_scaling']
self.synapses = get_total_number_of_synapses(self.net_dict)
self.synapses_scaled = self.synapses * self.K_scaling
self.nr_neurons = self.N_full * self.N_scaling
self.K_ext = self.net_dict['K_ext'] * self.K_scaling
self.w_from_PSP = get_weight(self.net_dict['PSP_e'], self.net_dict)
self.weight_mat = get_weight(
self.net_dict['PSP_mean_matrix'], self.net_dict
)
self.weight_mat_std = self.net_dict['PSP_std_matrix']
self.w_ext = self.w_from_PSP
if self.net_dict['poisson_input']:
self.DC_amp_e = np.zeros(len(self.net_dict['populations']))
else:
if nest.Rank() == 0:
print(
"""
no poisson input provided
calculating dc input to compensate
"""
)
self.DC_amp_e = compute_DC(self.net_dict, self.w_ext)
v0_type_options = ['original', 'optimized']
if self.net_dict['V0_type'] not in v0_type_options:
print(
'''
'{0}' is not a valid option, replacing it with '{1}'
Valid options are {2}
'''.format(self.net_dict['V0_type'],
v0_type_options[0],
v0_type_options)
)
self.net_dict['V0_type'] = v0_type_options[0]
if nest.Rank() == 0:
print(
'The number of neurons is scaled by a factor of: %.2f'
% self.N_scaling
)
print(
'The number of synapses is scaled by a factor of: %.2f'
% self.K_scaling
)
# Scaling of the synapses.
if self.K_scaling != 1:
synapses_indegree = self.synapses / (
self.N_full.reshape(len(self.N_full), 1) * self.N_scaling)
self.weight_mat, self.w_ext, self.DC_amp_e = adj_w_ext_to_K(
synapses_indegree, self.K_scaling, self.weight_mat,
self.w_from_PSP, self.DC_amp_e, self.net_dict, self.stim_dict
)
# Create cortical populations.
self.pops = []
pop_file = open(
os.path.join(self.data_path, 'population_GIDs.dat'), 'w+'
)
for i, pop in enumerate(self.net_dict['populations']):
population = nest.Create(
self.net_dict['neuron_model'], int(self.nr_neurons[i])
)
nest.SetStatus(
population, {
'tau_syn_ex': self.net_dict['neuron_params']['tau_syn_ex'],
'tau_syn_in': self.net_dict['neuron_params']['tau_syn_in'],
'E_L': self.net_dict['neuron_params']['E_L'],
'V_th': self.net_dict['neuron_params']['V_th'],
'V_reset': self.net_dict['neuron_params']['V_reset'],
't_ref': self.net_dict['neuron_params']['t_ref'],
'I_e': self.DC_amp_e[i]
}
)
if self.net_dict['V0_type'] == 'optimized':
for thread in \
np.arange(nest.GetKernelStatus('local_num_threads')):
# Using GetNodes is a work-around until NEST 3.0 is
# released. It will issue a deprecation warning.
local_nodes = nest.GetNodes(
[0], {
'model': self.net_dict['neuron_model'],
'thread': thread
}, local_only=True
)[0]
vp = nest.GetStatus(local_nodes)[0]['vp']
# vp is the same for all local nodes on the same thread
local_pop = list(set(local_nodes).intersection(population))
nest.SetStatus(
local_pop, 'V_m', self.pyrngs[vp].normal(
self.net_dict
['neuron_params']['V0_mean']['optimized'][i],
self.net_dict
['neuron_params']['V0_sd']['optimized'][i],
len(local_pop))
)
self.pops.append(population)
pop_file.write('%d %d \n' % (population[0], population[-1]))
pop_file.close()
if self.net_dict['V0_type'] == 'original':
for thread in np.arange(nest.GetKernelStatus('local_num_threads')):
local_nodes = nest.GetNodes(
[0], {
'model': self.net_dict['neuron_model'],
'thread': thread
}, local_only=True
)[0]
vp = nest.GetStatus(local_nodes)[0]['vp']
nest.SetStatus(
local_nodes, 'V_m', self.pyrngs[vp].normal(
self.net_dict['neuron_params']['V0_mean']['original'],
self.net_dict['neuron_params']['V0_sd']['original'],
len(local_nodes))
)
def create_devices(self):
""" Creates the recording devices.
Only devices which are given in net_dict['rec_dev'] are created.
"""
self.spike_detector = []
self.voltmeter = []
for i, pop in enumerate(self.pops):
if 'spike_detector' in self.net_dict['rec_dev']:
recdict = {
'withgid': True,
'withtime': True,
'to_memory': False,
'to_file': True,
'label': os.path.join(self.data_path, 'spike_detector')
}
dummy = nest.Create('spike_detector', params=recdict)
self.spike_detector.append(dummy)
if 'voltmeter' in self.net_dict['rec_dev']:
recdictmem = {
'interval': self.sim_dict['rec_V_int'],
'withgid': True,
'withtime': True,
'to_memory': False,
'to_file': True,
'label': os.path.join(self.data_path, 'voltmeter'),
'record_from': ['V_m'],
}
volt = nest.Create('voltmeter', params=recdictmem)
self.voltmeter.append(volt)
if 'spike_detector' in self.net_dict['rec_dev']:
if nest.Rank() == 0:
print('Spike detectors created')
if 'voltmeter' in self.net_dict['rec_dev']:
if nest.Rank() == 0:
print('Voltmeters created')
def create_thalamic_input(self):
""" This function creates the thalamic neuronal population if this
is specified in stimulus_params.py.
"""
if self.stim_dict['thalamic_input']:
if nest.Rank() == 0:
print('Thalamic input provided')
self.thalamic_population = nest.Create(
'parrot_neuron', self.stim_dict['n_thal']
)
self.thalamic_weight = get_weight(
self.stim_dict['PSP_th'], self.net_dict
)
self.stop_th = (
self.stim_dict['th_start'] + self.stim_dict['th_duration']
)
self.poisson_th = nest.Create('poisson_generator')
nest.SetStatus(
self.poisson_th, {
'rate': self.stim_dict['th_rate'],
'start': self.stim_dict['th_start'],
'stop': self.stop_th
}
)
nest.Connect(self.poisson_th, self.thalamic_population)
self.nr_synapses_th = synapses_th_matrix(
self.net_dict, self.stim_dict
)
if self.K_scaling != 1:
self.thalamic_weight = self.thalamic_weight / (
self.K_scaling ** 0.5)
self.nr_synapses_th = (self.nr_synapses_th * self.K_scaling)
else:
if nest.Rank() == 0:
print('Thalamic input not provided')
def create_poisson(self):
""" Creates the Poisson generators.
If Poissonian input is provided, the Poissonian generators are created
and the parameters needed are passed to the Poissonian generator.
"""
if self.net_dict['poisson_input']:
if nest.Rank() == 0:
print('Poisson background input created')
rate_ext = self.net_dict['bg_rate'] * self.K_ext
self.poisson = []
for i, target_pop in enumerate(self.pops):
poisson = nest.Create('poisson_generator')
nest.SetStatus(poisson, {'rate': rate_ext[i]})
self.poisson.append(poisson)
def create_dc_generator(self):
""" Creates a DC input generator.
If DC input is provided, the DC generators are created and the
necessary parameters are passed to them.
"""
if self.stim_dict['dc_input']:
if nest.Rank() == 0:
print('DC generator created')
dc_amp_stim = self.net_dict['K_ext'] * self.stim_dict['dc_amp']
self.dc = []
if nest.Rank() == 0:
print('DC_amp_stim', dc_amp_stim)
for i, target_pop in enumerate(self.pops):
dc = nest.Create(
'dc_generator', params={
'amplitude': dc_amp_stim[i],
'start': self.stim_dict['dc_start'],
'stop': (
self.stim_dict['dc_start'] +
self.stim_dict['dc_dur']
)
}
)
self.dc.append(dc)
def create_connections(self):
""" Creates the recurrent connections.
The recurrent connections between the neuronal populations are created.
"""
if nest.Rank() == 0:
print('Recurrent connections are established')
mean_delays = self.net_dict['mean_delay_matrix']
std_delays = self.net_dict['std_delay_matrix']
for i, target_pop in enumerate(self.pops):
for j, source_pop in enumerate(self.pops):
synapse_nr = int(self.synapses_scaled[i][j])
if synapse_nr >= 0.:
weight = self.weight_mat[i][j]
w_sd = abs(weight * self.weight_mat_std[i][j])
conn_dict_rec = {
'rule': 'fixed_total_number', 'N': synapse_nr
}
syn_dict = {
'model': 'static_synapse',
'weight': {
'distribution': 'normal_clipped', 'mu': weight,
'sigma': w_sd
},
'delay': {
'distribution': 'normal_clipped',
'mu': mean_delays[i][j], 'sigma': std_delays[i][j],
'low': self.sim_resolution
}
}
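# Clip the weight distribution so that weights keep their sign:
# inhibitory weights are bounded above by zero, excitatory
# weights below by zero.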
if weight < 0:
syn_dict['weight']['high'] = 0.0
else:
syn_dict['weight']['low'] = 0.0
nest.Connect(
source_pop, target_pop,
conn_spec=conn_dict_rec,
syn_spec=syn_dict
)
def connect_poisson(self):
""" Connects the Poisson generators to the microcircuit."""
if nest.Rank() == 0:
print('Poisson background input is connected')
for i, target_pop in enumerate(self.pops):
conn_dict_poisson = {'rule': 'all_to_all'}
syn_dict_poisson = {
'model': 'static_synapse',
'weight': self.w_ext,
'delay': self.net_dict['poisson_delay']
}
nest.Connect(
self.poisson[i], target_pop,
conn_spec=conn_dict_poisson,
syn_spec=syn_dict_poisson
)
def connect_thalamus(self):
""" Connects the thalamic population to the microcircuit."""
if nest.Rank() == 0:
print('Thalamus connection established')
for i, target_pop in enumerate(self.pops):
conn_dict_th = {
'rule': 'fixed_total_number',
'N': int(self.nr_synapses_th[i])
}
syn_dict_th = {
'weight': {
'distribution': 'normal_clipped',
'mu': self.thalamic_weight,
'sigma': (
self.thalamic_weight * self.net_dict['PSP_sd']
),
'low': 0.0
},
'delay': {
'distribution': 'normal_clipped',
'mu': self.stim_dict['delay_th'][i],
'sigma': self.stim_dict['delay_th_sd'][i],
'low': self.sim_resolution
}
}
nest.Connect(
self.thalamic_population, target_pop,
conn_spec=conn_dict_th, syn_spec=syn_dict_th
)
def connect_dc_generator(self):
""" Connects the DC generator to the microcircuit."""
if nest.Rank() == 0:
print('DC Generator connection established')
for i, target_pop in enumerate(self.pops):
if self.stim_dict['dc_input']:
nest.Connect(self.dc[i], target_pop)
def connect_devices(self):
""" Connects the recording devices to the microcircuit."""
if nest.Rank() == 0:
if ('spike_detector' in self.net_dict['rec_dev'] and
'voltmeter' not in self.net_dict['rec_dev']):
print('Spike detector connected')
elif ('spike_detector' not in self.net_dict['rec_dev'] and
'voltmeter' in self.net_dict['rec_dev']):
print('Voltmeter connected')
elif ('spike_detector' in self.net_dict['rec_dev'] and
'voltmeter' in self.net_dict['rec_dev']):
print('Spike detector and voltmeter connected')
else:
print('no recording devices connected')
for i, target_pop in enumerate(self.pops):
if 'voltmeter' in self.net_dict['rec_dev']:
nest.Connect(self.voltmeter[i], target_pop)
if 'spike_detector' in self.net_dict['rec_dev']:
nest.Connect(target_pop, self.spike_detector[i])
def setup(self):
""" Execute subfunctions of the network.
This function executes several subfunctions to create neuronal
populations, devices and inputs, connects the populations with
each other and with devices and input nodes.
"""
self.setup_nest()
self.create_populations()
self.create_devices()
self.create_thalamic_input()
self.create_poisson()
self.create_dc_generator()
self.create_connections()
if self.net_dict['poisson_input']:
self.connect_poisson()
if self.stim_dict['thalamic_input']:
self.connect_thalamus()
if self.stim_dict['dc_input']:
self.connect_dc_generator()
self.connect_devices()
def simulate(self):
""" Simulates the microcircuit."""
nest.Simulate(self.sim_dict['t_sim'])
def evaluate(self, raster_plot_time_idx, fire_rate_time_idx):
""" Displays output of the simulation.
Calculates the firing rate of each population,
creates a spike raster plot and a box plot of the
firing rates.
"""
if nest.Rank() == 0:
print(
'Interval to compute firing rates: %s ms'
% np.array2string(fire_rate_time_idx)
)
fire_rate(
self.data_path, 'spike_detector',
fire_rate_time_idx[0], fire_rate_time_idx[1]
)
print(
'Interval to plot spikes: %s ms'
% np.array2string(raster_plot_time_idx)
)
plot_raster(
self.data_path, 'spike_detector',
raster_plot_time_idx[0], raster_plot_time_idx[1]
)
boxplot(self.net_dict, self.data_path)
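For orientation, a minimal driver script tying these methods together might look as follows. This is a sketch, not part of the example above; the module and class names (network, Network) and the evaluation intervals are assumptions chosen to match the methods shown here:
import numpy as np
import network  # assumed module containing the Network class sketched above
from network_params import net_dict
from sim_params import sim_dict
from stimulus_params import stim_dict
# Build and connect the microcircuit, then run it.
net = network.Network(sim_dict, net_dict, stim_dict)
net.setup()
net.simulate()
# Evaluate spikes recorded after an initial transient.
raster_plot_interval = np.array([500.0, 700.0])
firing_rates_interval = np.array([500.0, sim_dict['t_sim']])
net.evaluate(raster_plot_interval, firing_rates_interval)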
Pynest microcircuit helpers¶
Helper functions for the simulation and evaluation of the microcircuit.
Authors¶
Hendrik Rothe, Hannah Bos, Sacha van Albada; May 2016
import numpy as np
import os
import sys
if 'DISPLAY' not in os.environ:
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
def compute_DC(net_dict, w_ext):
""" Computes DC input if no Poisson input is provided to the microcircuit.
Parameters
----------
net_dict
Parameters of the microcircuit.
w_ext
Weight of external connections.
Returns
-------
    DC
        DC input, which compensates for the missing Poisson input.
"""
DC = (
net_dict['bg_rate'] * net_dict['K_ext'] *
w_ext * net_dict['neuron_params']['tau_syn_E'] * 0.001
)
return DC
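# In formula form, the line above computes (with bg_rate in spikes/s,
# w_ext in pA and tau_syn_E in ms; the factor 0.001 converts ms to s,
# so the result is a current in pA):
#   DC = bg_rate * K_ext * w_ext * tau_syn_E * 0.001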
def get_weight(PSP_val, net_dict):
""" Computes weight to elicit a change in the membrane potential.
    This function computes the synaptic weight (postsynaptic current
    amplitude) that elicits a postsynaptic potential of size PSP_val,
    given the membrane and synapse parameters of the neuron.
Parameters
----------
PSP_val
Evoked postsynaptic potential.
net_dict
Dictionary containing parameters of the microcircuit.
Returns
-------
PSC_e
Weight value(s).
"""
C_m = net_dict['neuron_params']['C_m']
tau_m = net_dict['neuron_params']['tau_m']
tau_syn_ex = net_dict['neuron_params']['tau_syn_ex']
PSC_e_over_PSP_e = (((C_m) ** (-1) * tau_m * tau_syn_ex / (
tau_syn_ex - tau_m) * ((tau_m / tau_syn_ex) ** (
- tau_m / (tau_m - tau_syn_ex)) - (tau_m / tau_syn_ex) ** (
- tau_syn_ex / (tau_m - tau_syn_ex)))) ** (-1))
PSC_e = (PSC_e_over_PSP_e * PSP_val)
return PSC_e
def get_total_number_of_synapses(net_dict):
""" Returns the total number of synapses between all populations.
The first index (rows) of the output matrix is the target population
and the second (columns) the source population. If a scaling of the
synapses is intended this is done in the main simulation script and the
variable 'K_scaling' is ignored in this function.
Parameters
----------
net_dict
Dictionary containing parameters of the microcircuit.
N_full
Number of neurons in all populations.
number_N
Total number of populations.
conn_probs
Connection probabilities of the eight populations.
scaling
Factor that scales the number of neurons.
Returns
-------
K
Total number of synapses with
dimensions [len(populations), len(populations)].
"""
N_full = net_dict['N_full']
number_N = len(N_full)
conn_probs = net_dict['conn_probs']
scaling = net_dict['N_scaling']
prod = np.outer(N_full, N_full)
n_syn_temp = np.log(1. - conn_probs)/np.log((prod - 1.) / prod)
N_full_matrix = np.column_stack(
(N_full for i in list(range(number_N)))
)
# If the network is scaled the indegrees are calculated in the same
# fashion as in the original version of the circuit, which is
# written in sli.
K = (((n_syn_temp * (
N_full_matrix * scaling).astype(int)) / N_full_matrix).astype(int))
return K
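# The expression above implements the expected total number of synapses
# for random connectivity with connection probability C between
# populations of sizes N_pre and N_post, scaled to the reduced
# population sizes (note that log((prod - 1)/prod) = log(1 - 1/prod)):
#   K = ln(1 - C) / ln(1 - 1/(N_pre * N_post))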
def synapses_th_matrix(net_dict, stim_dict):
""" Computes number of synapses between thalamus and microcircuit.
    This function ignores the scaling factor for the number of synapses;
    if scaling is intended, it is performed in the main simulation script.
Parameters
----------
net_dict
Dictionary containing parameters of the microcircuit.
stim_dict
Dictionary containing parameters of stimulation settings.
N_full
Number of neurons in the eight populations.
number_N
Total number of populations.
conn_probs
Connection probabilities of the thalamus to the eight populations.
scaling
Factor that scales the number of neurons.
T_full
Number of thalamic neurons.
Returns
-------
K
Total number of synapses.
"""
N_full = net_dict['N_full']
number_N = len(N_full)
scaling = net_dict['N_scaling']
conn_probs = stim_dict['conn_probs_th']
T_full = stim_dict['n_thal']
prod = (T_full * N_full).astype(float)
n_syn_temp = np.log(1. - conn_probs)/np.log((prod - 1.)/prod)
K = (((n_syn_temp * (N_full * scaling).astype(int))/N_full).astype(int))
return K
def adj_w_ext_to_K(K_full, K_scaling, w, w_from_PSP, DC, net_dict, stim_dict):
""" Adjustment of weights to scaling is performed.
The recurrent and external weights are adjusted to the scaling
of the indegrees. Extra DC input is added to compensate the scaling
and preserve the mean and variance of the input.
Parameters
----------
K_full
Total number of connections between the eight populations.
K_scaling
Scaling factor for the connections.
w
Weight matrix of the connections of the eight populations.
w_from_PSP
Weight of the external connections.
DC
DC input to the eight populations.
net_dict
Dictionary containing parameters of the microcircuit.
stim_dict
Dictionary containing stimulation parameters.
tau_syn_E
Time constant of the external postsynaptic excitatory current.
full_mean_rates
Mean rates of the eight populations in the full scale version.
K_ext
Number of external connections to the eight populations.
bg_rate
Rate of the Poissonian spike generator.
Returns
-------
w_new
Adjusted weight matrix.
w_ext_new
Adjusted external weight.
I_ext
Extra DC input.
"""
tau_syn_E = net_dict['neuron_params']['tau_syn_E']
full_mean_rates = net_dict['full_mean_rates']
w_mean = w_from_PSP
K_ext = net_dict['K_ext']
bg_rate = net_dict['bg_rate']
w_new = w / np.sqrt(K_scaling)
I_ext = np.zeros(len(net_dict['populations']))
x1_all = w * K_full * full_mean_rates
x1_sum = np.sum(x1_all, axis=1)
if net_dict['poisson_input']:
x1_ext = w_mean * K_ext * bg_rate
w_ext_new = w_mean / np.sqrt(K_scaling)
I_ext = 0.001 * tau_syn_E * (
(1. - np.sqrt(K_scaling)) * x1_sum + (
1. - np.sqrt(K_scaling)) * x1_ext) + DC
else:
w_ext_new = w_from_PSP / np.sqrt(K_scaling)
I_ext = 0.001 * tau_syn_E * (
(1. - np.sqrt(K_scaling)) * x1_sum) + DC
return w_new, w_ext_new, I_ext
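# Summary of the compensation above: weights are scaled as
#   w_new = w / sqrt(K_scaling)
# and the loss of input fluctuations is compensated by an extra DC current
#   I_ext = 0.001 * tau_syn_E * (1 - sqrt(K_scaling)) * (x1_sum + x1_ext) + DC
# where x1_sum is the summed recurrent input (w * K_full * rates) and
# x1_ext the external Poisson drive (w_mean * K_ext * bg_rate).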
def read_name(path, name):
""" Reads names and ids of spike detector.
The names of the spike detectors are gathered and the lowest and
highest id of each spike detector is computed. If the simulation was
run on several threads or mpi-processes, one name per spike detector
per mpi-process/thread is extracted.
Parameters
------------
path
Path where the spike detector files are stored.
name
Name of the spike detector.
Returns
-------
files
Name of all spike detectors, which are located in the path.
gids
Lowest and highest ids of the spike detectors.
"""
    # Import filenames
files = []
for file in os.listdir(path):
if file.endswith('.gdf') and file.startswith(name):
temp = file.split('-')[0] + '-' + file.split('-')[1]
if temp not in files:
files.append(temp)
# Import GIDs
gidfile = open(path + 'population_GIDs.dat', 'r')
gids = []
for l in gidfile:
a = l.split()
gids.append([int(a[0]), int(a[1])])
files = sorted(files)
return files, gids
def load_spike_times(path, name, begin, end):
""" Loads spike times of each spike detector.
Parameters
-----------
path
Path where the files with the spike times are stored.
name
Name of the spike detector.
begin
Lower boundary value to load spike times.
end
Upper boundary value to load spike times.
Returns
-------
data
Dictionary containing spike times in the interval from 'begin'
to 'end'.
"""
files, gids = read_name(path, name)
data = {}
for i in list(range(len(files))):
all_names = os.listdir(path)
temp3 = [
all_names[x] for x in list(range(len(all_names)))
if all_names[x].endswith('gdf') and
all_names[x].startswith('spike') and
(all_names[x].split('-')[0] + '-' + all_names[x].split('-')[1]) in
files[i]
]
data_temp = [np.loadtxt(os.path.join(path, f)) for f in temp3]
data_concatenated = np.concatenate(data_temp)
data_raw = data_concatenated[np.argsort(data_concatenated[:, 1])]
idx = ((data_raw[:, 1] > begin) * (data_raw[:, 1] < end))
data[i] = data_raw[idx]
return data
def plot_raster(path, name, begin, end):
""" Creates a spike raster plot of the microcircuit.
Parameters
-----------
path
Path where the spike times are stored.
name
Name of the spike detector.
begin
Initial value of spike times to plot.
end
Final value of spike times to plot.
Returns
-------
None
"""
files, gids = read_name(path, name)
data_all = load_spike_times(path, name, begin, end)
highest_gid = gids[-1][-1]
gids_numpy = np.asarray(gids)
gids_numpy_changed = abs(gids_numpy - highest_gid) + 1
L23_label_pos = (gids_numpy_changed[0][0] + gids_numpy_changed[1][1])/2
L4_label_pos = (gids_numpy_changed[2][0] + gids_numpy_changed[3][1])/2
L5_label_pos = (gids_numpy_changed[4][0] + gids_numpy_changed[5][1])/2
L6_label_pos = (gids_numpy_changed[6][0] + gids_numpy_changed[7][1])/2
ylabels = ['L23', 'L4', 'L5', 'L6']
color_list = [
'#000000', '#888888', '#000000', '#888888',
'#000000', '#888888', '#000000', '#888888'
]
Fig1 = plt.figure(1, figsize=(8, 6))
for i in list(range(len(files))):
times = data_all[i][:, 1]
neurons = np.abs(data_all[i][:, 0] - highest_gid) + 1
plt.plot(times, neurons, '.', color=color_list[i])
plt.xlabel('time [ms]', fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(
[L23_label_pos, L4_label_pos, L5_label_pos, L6_label_pos],
ylabels, rotation=10, fontsize=18
)
plt.savefig(os.path.join(path, 'raster_plot.png'), dpi=300)
plt.show()
def fire_rate(path, name, begin, end):
""" Computes firing rate and standard deviation of it.
The firing rate of each neuron for each population is computed and stored
in a numpy file in the directory of the spike detectors. The mean firing
rate and its standard deviation is displayed for each population.
Parameters
-----------
path
Path where the spike times are stored.
name
Name of the spike detector.
begin
Initial value of spike times to calculate the firing rate.
end
Final value of spike times to calculate the firing rate.
Returns
-------
None
"""
files, gids = read_name(path, name)
data_all = load_spike_times(path, name, begin, end)
rates_averaged_all = []
rates_std_all = []
for h in list(range(len(files))):
n_fil = data_all[h][:, 0]
n_fil = n_fil.astype(int)
count_of_n = np.bincount(n_fil)
count_of_n_fil = count_of_n[gids[h][0]-1:gids[h][1]]
rate_each_n = count_of_n_fil * 1000. / (end - begin)
rate_averaged = np.mean(rate_each_n)
rate_std = np.std(rate_each_n)
rates_averaged_all.append(float('%.3f' % rate_averaged))
rates_std_all.append(float('%.3f' % rate_std))
np.save(os.path.join(path, ('rate' + str(h) + '.npy')), rate_each_n)
print('Mean rates: %r Hz' % rates_averaged_all)
print('Standard deviation of rates: %r Hz' % rates_std_all)
def boxplot(net_dict, path):
""" Creates a boxblot of the firing rates of the eight populations.
To create the boxplot, the firing rates of each population need to be
computed with the function 'fire_rate'.
Parameters
-----------
net_dict
Dictionary containing parameters of the microcircuit.
path
        Path where the firing rates are stored.
Returns
-------
None
"""
pops = net_dict['N_full']
reversed_order_list = list(range(len(pops) - 1, -1, -1))
list_rates_rev = []
for h in reversed_order_list:
list_rates_rev.append(
np.load(os.path.join(path, ('rate' + str(h) + '.npy')))
)
pop_names = net_dict['populations']
label_pos = list(range(len(pops), 0, -1))
color_list = ['#888888', '#000000']
medianprops = dict(linestyle='-', linewidth=2.5, color='firebrick')
fig, ax1 = plt.subplots(figsize=(10, 6))
bp = plt.boxplot(list_rates_rev, 0, 'rs', 0, medianprops=medianprops)
plt.setp(bp['boxes'], color='black')
plt.setp(bp['whiskers'], color='black')
plt.setp(bp['fliers'], color='red', marker='+')
for h in list(range(len(pops))):
boxX = []
boxY = []
box = bp['boxes'][h]
for j in list(range(5)):
boxX.append(box.get_xdata()[j])
boxY.append(box.get_ydata()[j])
boxCoords = list(zip(boxX, boxY))
k = h % 2
boxPolygon = Polygon(boxCoords, facecolor=color_list[k])
ax1.add_patch(boxPolygon)
plt.xlabel('firing rate [Hz]', fontsize=18)
plt.yticks(label_pos, pop_names, fontsize=18)
plt.xticks(fontsize=18)
plt.savefig(os.path.join(path, 'box_plot.png'), dpi=300)
plt.show()
Pynest microcircuit parameters¶
Network parameters for the microcircuit.
Hendrik Rothe, Hannah Bos, Sacha van Albada; May 2016
import numpy as np
def get_mean_delays(mean_delay_exc, mean_delay_inh, number_of_pop):
""" Creates matrix containing the delay of all connections.
Parameters
----------
mean_delay_exc
Delay of the excitatory connections.
mean_delay_inh
Delay of the inhibitory connections.
number_of_pop
Number of populations.
Returns
-------
mean_delays
Matrix specifying the mean delay of all connections.
"""
dim = number_of_pop
mean_delays = np.zeros((dim, dim))
mean_delays[:, 0:dim:2] = mean_delay_exc
mean_delays[:, 1:dim:2] = mean_delay_inh
return mean_delays
def get_std_delays(std_delay_exc, std_delay_inh, number_of_pop):
""" Creates matrix containing the standard deviations of all delays.
Parameters
----------
std_delay_exc
Standard deviation of excitatory delays.
std_delay_inh
Standard deviation of inhibitory delays.
number_of_pop
Number of populations in the microcircuit.
Returns
-------
std_delays
Matrix specifying the standard deviation of all delays.
"""
dim = number_of_pop
std_delays = np.zeros((dim, dim))
std_delays[:, 0:dim:2] = std_delay_exc
std_delays[:, 1:dim:2] = std_delay_inh
return std_delays
def get_mean_PSP_matrix(PSP_e, g, number_of_pop):
""" Creates a matrix of the mean evoked postsynaptic potential.
The function creates a matrix of the mean evoked postsynaptic
potentials between the recurrent connections of the microcircuit.
The weight of the connection from L4E to L23E is doubled.
Parameters
----------
PSP_e
Mean evoked potential.
g
Relative strength of the inhibitory to excitatory connection.
number_of_pop
Number of populations in the microcircuit.
Returns
-------
weights
Matrix of the weights for the recurrent connections.
"""
dim = number_of_pop
weights = np.zeros((dim, dim))
exc = PSP_e
inh = PSP_e * g
weights[:, 0:dim:2] = exc
weights[:, 1:dim:2] = inh
weights[0, 2] = exc * 2
return weights
def get_std_PSP_matrix(PSP_rel, number_of_pop):
""" Relative standard deviation matrix of postsynaptic potential created.
The relative standard deviation matrix of the evoked postsynaptic potential
for the recurrent connections of the microcircuit is created.
Parameters
----------
PSP_rel
Relative standard deviation of the evoked postsynaptic potential.
number_of_pop
Number of populations in the microcircuit.
Returns
-------
std_mat
Matrix of the standard deviation of postsynaptic potentials.
"""
dim = number_of_pop
std_mat = np.zeros((dim, dim))
std_mat[:, :] = PSP_rel
return std_mat
net_dict = {
# Neuron model.
'neuron_model': 'iaf_psc_exp',
# The default recording device is the spike_detector. If you also
# want to record the membrane potentials of the neurons, add
# 'voltmeter' to the list.
'rec_dev': ['spike_detector'],
# Names of the simulated populations.
'populations': ['L23E', 'L23I', 'L4E', 'L4I', 'L5E', 'L5I', 'L6E', 'L6I'],
# Number of neurons in the different populations. The order of the
# elements corresponds to the names of the variable 'populations'.
'N_full': np.array([20683, 5834, 21915, 5479, 4850, 1065, 14395, 2948]),
# Mean rates of the different populations in the non-scaled version
# of the microcircuit. Necessary for the scaling of the network.
# The order corresponds to the order in 'populations'.
'full_mean_rates':
np.array([0.971, 2.868, 4.746, 5.396, 8.142, 9.078, 0.991, 7.523]),
# Connection probabilities. The first index corresponds to the targets
# and the second to the sources.
'conn_probs':
np.array(
[[0.1009, 0.1689, 0.0437, 0.0818, 0.0323, 0., 0.0076, 0.],
[0.1346, 0.1371, 0.0316, 0.0515, 0.0755, 0., 0.0042, 0.],
[0.0077, 0.0059, 0.0497, 0.135, 0.0067, 0.0003, 0.0453, 0.],
[0.0691, 0.0029, 0.0794, 0.1597, 0.0033, 0., 0.1057, 0.],
[0.1004, 0.0622, 0.0505, 0.0057, 0.0831, 0.3726, 0.0204, 0.],
[0.0548, 0.0269, 0.0257, 0.0022, 0.06, 0.3158, 0.0086, 0.],
[0.0156, 0.0066, 0.0211, 0.0166, 0.0572, 0.0197, 0.0396, 0.2252],
[0.0364, 0.001, 0.0034, 0.0005, 0.0277, 0.008, 0.0658, 0.1443]]
),
# Number of external connections to the different populations.
# The order corresponds to the order in 'populations'.
'K_ext': np.array([1600, 1500, 2100, 1900, 2000, 1900, 2900, 2100]),
# Factor to scale the indegrees.
'K_scaling': 0.1,
# Factor to scale the number of neurons.
'N_scaling': 0.1,
# Mean amplitude of excitatory postsynaptic potential (in mV).
'PSP_e': 0.15,
# Relative standard deviation of the postsynaptic potential.
'PSP_sd': 0.1,
# Relative inhibitory synaptic strength (in relative units).
'g': -4,
# Rate of the Poissonian spike generator (in Hz).
'bg_rate': 8.,
# Turn Poisson input on or off (True or False).
'poisson_input': True,
# Delay of the Poisson generator (in ms).
'poisson_delay': 1.5,
# Mean delay of excitatory connections (in ms).
'mean_delay_exc': 1.5,
# Mean delay of inhibitory connections (in ms).
'mean_delay_inh': 0.75,
# Relative standard deviation of the delay of excitatory and
# inhibitory connections (in relative units).
'rel_std_delay': 0.5,
# Initial conditions for the membrane potential, options are:
# 'original': uniform mean and std for all populations.
# 'optimized': population-specific mean and std, allowing a reduction of
# the initial activity burst in the network.
# Choose either 'original' or 'optimized'.
'V0_type': 'original',
# Parameters of the neurons.
'neuron_params': {
# Membrane potential average for the neurons (in mV).
'V0_mean': {'original': -58.0,
'optimized': [-68.28, -63.16, -63.33, -63.45,
-63.11, -61.66, -66.72, -61.43]},
# Standard deviation of the average membrane potential (in mV).
'V0_sd': {'original': 10.0,
'optimized': [5.36, 4.57, 4.74, 4.94,
4.94, 4.55, 5.46, 4.48]},
        # Resting membrane potential of the neurons (in mV).
'E_L': -65.0,
# Threshold potential of the neurons (in mV).
'V_th': -50.0,
# Membrane potential after a spike (in mV).
'V_reset': -65.0,
# Membrane capacitance (in pF).
'C_m': 250.0,
# Membrane time constant (in ms).
'tau_m': 10.0,
# Time constant of postsynaptic excitatory currents (in ms).
'tau_syn_ex': 0.5,
# Time constant of postsynaptic inhibitory currents (in ms).
'tau_syn_in': 0.5,
# Time constant of external postsynaptic excitatory current (in ms).
'tau_syn_E': 0.5,
# Refractory period of the neurons after a spike (in ms).
't_ref': 2.0}
}
updated_dict = {
# PSP mean matrix.
'PSP_mean_matrix': get_mean_PSP_matrix(
net_dict['PSP_e'], net_dict['g'], len(net_dict['populations'])
),
# PSP std matrix.
'PSP_std_matrix': get_std_PSP_matrix(
net_dict['PSP_sd'], len(net_dict['populations'])
),
# mean delay matrix.
'mean_delay_matrix': get_mean_delays(
net_dict['mean_delay_exc'], net_dict['mean_delay_inh'],
len(net_dict['populations'])
),
# std delay matrix.
'std_delay_matrix': get_std_delays(
net_dict['mean_delay_exc'] * net_dict['rel_std_delay'],
net_dict['mean_delay_inh'] * net_dict['rel_std_delay'],
len(net_dict['populations'])
),
}
net_dict.update(updated_dict)
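As a quick sanity check of the helper functions above (a sketch, not part of the original script), the delay matrix alternates excitatory and inhibitory values across source columns, and the L4E-to-L23E mean PSP is doubled:
# Even source columns (E populations) get the excitatory delay of 1.5 ms,
# odd source columns (I populations) the inhibitory delay of 0.75 ms.
print(net_dict['mean_delay_matrix'][0])  # [1.5 0.75 1.5 0.75 1.5 0.75 1.5 0.75]
# The connection from L4E (column 2) to L23E (row 0) has a doubled PSP.
print(net_dict['PSP_mean_matrix'][0, 2])  # 0.3 = 2 * PSP_e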
Microcircuit stimulus parameters¶
Stimulus parameters for the microcircuit.
Hendrik Rothe, Hannah Bos, Sacha van Albada; May 2016
import numpy as np
from network_params import net_dict
stim_dict = {
# Turn thalamic input on or off (True or False).
'thalamic_input': False,
# Turn DC input on or off (True or False).
'dc_input': False,
# Number of thalamic neurons.
'n_thal': 902,
# Mean amplitude of the thalamic postsynaptic potential (in mV).
'PSP_th': 0.15,
# Standard deviation of the postsynaptic potential (in relative units).
'PSP_sd': 0.1,
# Start of the thalamic input (in ms).
'th_start': 700.0,
# Duration of the thalamic input (in ms).
'th_duration': 10.0,
# Rate of the thalamic input (in Hz).
'th_rate': 120.0,
# Start of the DC generator (in ms).
'dc_start': 0.0,
# Duration of the DC generator (in ms).
'dc_dur': 1000.0,
# Connection probabilities of the thalamus to the different populations.
# Order as in 'populations' in 'network_params.py'
'conn_probs_th':
np.array([0.0, 0.0, 0.0983, 0.0619, 0.0, 0.0, 0.0512, 0.0196]),
# Mean delay of the thalamic input (in ms).
'delay_th':
np.asarray([1.5 for i in list(range(len(net_dict['populations'])))]),
# Standard deviation of the thalamic delay (in ms).
'delay_th_sd':
np.asarray([0.75 for i in list(range(len(net_dict['populations'])))]),
# Amplitude of the DC generator (in pA).
'dc_amp': np.ones(len(net_dict['populations'])) * 0.3,
}
Microcircuit simulation parameters¶
Simulation parameters for the microcircuit.
Hendrik Rothe, Hannah Bos, Sacha van Albada; May 2016
import os
sim_dict = {
# Simulation time (in ms).
't_sim': 1000.0,
# Resolution of the simulation (in ms).
'sim_resolution': 0.1,
# Path to save the output data.
'data_path': os.path.join(os.getcwd(), 'data/'),
# Masterseed for NEST and NumPy.
'master_seed': 55,
# Number of threads per MPI process.
'local_num_threads': 1,
# Recording interval of the membrane potential (in ms).
'rec_V_int': 1.0,
# If True, data will be overwritten,
# If False, a NESTError is raised if the files already exist.
'overwrite_files': True,
    # Print the time progress. This should only be used when the simulation
    # is run on a local machine.
'print_time': True
}
MUSIC example¶
Requirements¶
MUSIC 1.1.15 or higher
NEST 2.14.0 or higher compiled with MPI and MUSIC
NumPy
Instructions¶
This example runs 2 NEST instances and one receiver instance. Neurons on the NEST instances are observed by the music_cont_out_proxy and their values are forwarded through MUSIC to the receiver.
mpiexec -np 3 music test.music
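The contents of test.music are not shown here. A minimal sketch of what such a configuration could look like, purely as an illustration (the application names, script names, and port width are assumptions chosen to match the scripts below; consult the MUSIC documentation for the exact syntax):
stoptime=1.0
[nest]
  binary=./nest_script.py
  np=2
[receiver]
  binary=./receiver.py
  np=1
nest.out -> receiver.in [2]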
Music example¶
This example runs 2 NEST instances and one receiver instance. Neurons on the NEST instances are observed by the music_cont_out_proxy and their values are forwarded through MUSIC to the receiver.
import nest
import music
import numpy
# Create a proxy that observes its targets and streams the recorded values
# to the MUSIC output port 'out'.
proxy = nest.Create('music_cont_out_proxy', 1)
nest.SetStatus(proxy, {'port_name': 'out'})
nest.SetStatus(proxy, {'record_from': ["V_m"], 'interval': 0.1})
# Two neurons driven by different DC currents serve as the observed sources.
neuron_grp = nest.Create('iaf_cond_exp', 2)
nest.SetStatus(proxy, {'targets': neuron_grp})
nest.SetStatus([neuron_grp[0]], "I_e", 300.)
nest.SetStatus([neuron_grp[1]], "I_e", 600.)
nest.Simulate(200)
Music example receiver script¶
import sys
import music
import numpy
from itertools import takewhile, dropwhile
setup = music.Setup()
stoptime = setup.config("stoptime")    # values read from the MUSIC config file
timestep = setup.config("timestep")
comm = setup.comm
rank = comm.Get_rank()
# Publish the continuous-data input port and map a buffer that MUSIC
# fills with the incoming values at every tick.
pin = setup.publishContInput("in")
data = numpy.array([0.0, 0.0], dtype=numpy.double)
pin.map(data, interpolate=False)
runtime = setup.runtime(timestep)
mintime = timestep
maxtime = stoptime + timestep
# Restrict the runtime clock to the interval [mintime, maxtime).
start = dropwhile(lambda t: t < mintime, runtime)
times = takewhile(lambda t: t < maxtime, start)
for time in times:
val = data
sys.stdout.write(
"t={}\treceiver {}: received {}\n".
format(time, rank, val))
Topology¶
The Topology Module is designed to build complex, layered networks in NEST.
If you’ve never used topologically-structured networks before, we recommend you check out the PyNEST tutorial on Topologically-structured Networks
We also recommend another tutorial using the Hill Tononi model
For a comprehensive guide to topology, please see our Topology User Manual
We have a large selection of examples using topology, check them out in our Examples using Topology section.
Topology User Manual¶
The Topology Module provides the NEST simulator [1] with a convenient interface for creating layers of neurons placed in space and connecting neurons in such layers with probabilities and properties depending on the relative placement of neurons. This permits the creation of complex networks with spatial structure.
This user manual provides an introduction to the functionality provided by the Topology Module. It is based exclusively on PyNEST, the Python interface to NEST. Users of the SLI interface should be able to map instructions to corresponding SLI code. This manual is not meant as a comprehensive reference manual. Please consult the online documentation in PyNEST for details; where appropriate, that documentation also points to relevant SLI documentation.
This manual describes the Topology Module included with NEST 2.16; the user interface and behavior of the module have not changed significantly since NEST 2.2.
In the next chapter of this manual, we introduce Topology layers, which place neurons in space. In Chapter 3 we then describe how to connect layers with each other, before discussing in Chapter 4 how you can inspect and visualize Topology networks. Chapter 5 deals with the more advanced topic of extending the Topology module with custom kernel functions and masks provided by C++ classes in an extension module.
You will find the Python scripts used in the examples in this manual in the NEST source code directory under doc/topology/user_manual_scripts.
Limitations and Disclaimer¶
- Undocumented features
The Topology Module provides a number of undocumented features, which you may discover by browsing the code. These features are highly experimental and should not be used for simulations, as they have not been validated.
Layers¶
The Topology Module (just Topology for short in the remainder of this document) organizes neuronal networks in layers. We will first illustrate how Topology places elements in simple layers, where each element is a single model neuron. Layers with composite elements are discussed in the following section.
We will illustrate the definition and use of layers using examples.
Topology distinguishes between two classes of layers:
- grid-based layers
in which each element is placed at a location in a regular grid;
- free layers
in which elements can be placed arbitrarily in the plane.
Grid-based layers allow for more efficient connection-generation under certain circumstances.
Grid-based Layers¶
We create a first, grid-based simple layer with the following commands:
import nest.topology as tp
l = tp.CreateLayer({'rows': 5,
'columns': 5,
'elements': 'iaf_psc_alpha'})
Simple grid-based layer centered about the origin. Blue circles mark layer elements, the thin square the extent of the layer. Row and column indices are shown in the right and top margins, respectively.¶
The layer is shown in Figure 10. Note the following properties:
The layer has five rows and five columns.
The 'elements' entry of the dictionary passed to CreateLayer determines the elements of the layer. In this case, the layer contains iaf_psc_alpha neurons.
The center of the layer is at the origin of the coordinate system, \((0,0)\).
The extent or size of the layer is \(1\times 1\). This is the default size for layers. The extent is marked by the thin square in Figure 10.
The grid spacing of the layer is \(dx = x_{\text{ext}}/n_c\) and \(dy = y_{\text{ext}}/n_r\), where \(n_c\) and \(n_r\) are the numbers of columns and rows and \(x_{\text{ext}}\), \(y_{\text{ext}}\) the extent of the layer. In the layer shown, we have \(dx=dy=0.2\), but the grid spacing may differ in x- and y-direction.
Layer elements are spaced by the grid spacing and are arranged symmetrically about the center.
The outermost layer elements are placed \(dx/2\) and \(dy/2\) from the borders of the extent.
Element positions in the coordinate system are given by \((x,y)\) pairs. The coordinate system follows the standard mathematical convention that the \(x\)-axis runs from left to right and the \(y\)-axis from bottom to top.
Each element of a grid-based layer has a row- and column-index in addition to its \((x,y)\)-coordinates. Indices are shown in the top and right margin of Figure 10. Note that row-indices follow matrix convention, i.e., run from top to bottom. Following pythonic conventions, indices run from 0.
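You can verify positions directly from PyNEST. The following is a small check, assuming the layer l created above and import nest alongside the topology module:
import nest
# GIDs of the layer elements (the layer node itself is excluded).
nodes = nest.GetLeaves(l)[0]
# Positions of the first three elements; for a 5x5 grid in the default
# 1x1 extent, neighboring positions differ by the grid spacing 0.2.
print(tp.GetPosition(nodes[:3]))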
Layers have a default extent of \(1\times 1\). You can specify a different extent of a layer, i.e., its size in \(x\)- and \(y\)-direction, by adding an 'extent' entry to the dictionary passed to CreateLayer:
l = tp.CreateLayer({'rows': 5,
'columns': 5,
'extent': [2.0, 0.5],
'elements': 'iaf_psc_alpha'})
The resulting layer is shown in Figure 11. The extent is always a two-element tuple of floats. In this example, we have grid spacings \(dx=0.4\) and \(dy=0.1\). Changing the extent does not affect grid indices.
The size of 'extent'
in \(x\)- and \(y\)-directions should
be numbers that can be expressed exactly as binary fractions. This is
automatically ensured for integer values. Otherwise, under rare
circumstances, subtle rounding errors may occur and trigger an
assertion, thus stopping NEST.
Layers are centered about the origin \((0,0)\) by default. This can
be changed through the 'center'
entry in the dictionary specifying
the layer. The following code creates layers centered about
\((0,0)\), \((-1,1)\), and \((1.5,0.5)\), respectively:
l1 = tp.CreateLayer({'rows': 5,
'columns': 5,
'elements': 'iaf_psc_alpha'})
l2 = tp.CreateLayer({'rows': 5,
'columns': 5,
'elements': 'iaf_psc_alpha',
'center': [-1., 1.]})
l3 = tp.CreateLayer({'rows': 5,
'columns': 5,
'elements': 'iaf_psc_alpha',
'center': [1.5, 0.5]})
Three layers centered, respectively, about \((0,0)\) (blue), \((-1,1)\) (green), and \((1.5,0.5)\) (red).¶
The center is given as a two-element tuple of floats. Changing the center does not affect grid indices: For each of the three layers in Figure 12, grid indices run from 0 to 4 through columns and rows, respectively, even though elements in these three layers have different positions in the global coordinate system.
The 'center'
coordinates should be numbers that can be expressed
exactly as binary fractions. For more information, see
Sec. 2.1.2.
To see how to construct a layer, consider the following example:
a layer with \(n_r\) rows and \(n_c\) columns;
spacing between nodes is \(d\) in \(x\)- and \(y\)-directions;
the left edge of the extent shall be at \(x=0\);
the extent shall be centered about \(y=0\).
From the grid-spacing relation above, we see that the extent of the layer must be
\((n_c d, n_r d)\). We now need to find the coordinates
\((c_x, c_y)\) of the center of the layer. To place the left edge of
the extent at \(x=0\), we must place the center of the layer at
\(c_x=n_c d / 2\) along the \(x\)-axis, i.e., half the extent
width to the right of \(x=0\). Since the layer is to be centered
about \(y=0\), we have \(c_y=0\). Thus, the center coordinates
are \((n_c d/2, 0)\). The layer is created with the following code
and shown in Figure 13:
nc, nr = 5, 3
d = 0.1
l = tp.CreateLayer({'columns': nc,
'rows': nr,
'elements': 'iaf_psc_alpha',
'extent': [nc * d, nr * d],
'center': [nc * d / 2., 0.]})
Layer with \(n_c=5\) columns and \(n_r=3\) rows, spacing \(d=0.1\) and the left edge of the extent at \(x=0\), centered about the \(y\)-axis. The cross marks the point on the extent placed at the origin \((0,0)\), the circle the center of the layer.¶
Free layers¶
Free layers do not restrict node positions to a grid, but allow free
placement within the extent. To this end, the user needs to specify the
positions of all nodes explicitly. The following code creates a layer of
50 iaf_psc_alpha
neurons uniformly distributed in a layer with
extent \(1\times 1\), i.e., spanning the square
\([-0.5,0.5]\times[-0.5,0.5]\):
import numpy as np
pos = [[np.random.uniform(-0.5, 0.5), np.random.uniform(-0.5, 0.5)]
for j in range(50)]
l = tp.CreateLayer({'positions': pos,
'elements': 'iaf_psc_alpha'})
A free layer with 50 elements uniformly distributed in an extent of size \(1\times 1\).¶
Note the following points:
For free layers, element positions are specified by the 'positions' entry in the dictionary passed to CreateLayer. 'positions' is mutually exclusive with the 'rows'/'columns' entries in the dictionary.
The 'positions' entry must be a Python list (or tuple) of element coordinates, i.e., of two-element tuples of floats giving the (\(x\), \(y\))-coordinates of the elements. One layer element is created per element in the 'positions' entry.
All layer element positions must be within the layer’s extent. Elements may be placed on the perimeter of the extent as long as no periodic boundary conditions are used; see Sec. 2.4.
Element positions in free layers are not shifted when specifying the 'center' of the layer. The user must make sure that the positions given lie within the extent when centered about the given center.
3D layers¶
Although the term “layer” suggests a 2-dimensional structure, the layers in NEST may in fact be 3-dimensional. The example from the previous section may be easily extended with another component in the coordinates for the positions:
import numpy as np
pos = [[np.random.uniform(-0.5, 0.5), np.random.uniform(-0.5, 0.5),
np.random.uniform(-0.5, 0.5)] for j in range(200)]
l = tp.CreateLayer({'positions': pos,
'elements': 'iaf_psc_alpha'})
A free 3D layer with 200 elements uniformly distributed in an extent of size \(1\times 1\times 1\).¶
Periodic boundary conditions¶
Simulations usually model systems much smaller than the biological networks we want to study. One problem this entails is that a significant proportion of neurons in a model network is close to the edges of the network with fewer neighbors than nodes properly inside the network. In the \(5\times 5\)-layer in Figure 10, e.g., 16 out of 25 nodes form the border of the layer.
One common approach to reducing the effect of boundaries on simulations is to introduce periodic boundary conditions, so that the rightmost elements on a grid are considered nearest neighbors to the leftmost elements, and the topmost to the bottommost. The flat layer becomes the surface of a torus. Figure 16 illustrates this for a one-dimensional layer, which turns from a line to a ring upon introduction of periodic boundary conditions.
You specify periodic boundary conditions for a layer using the dictionary entry edge_wrap:
lp = tp.CreateLayer({'rows': 1, 'columns': 5, 'extent': [5., 1.],
'elements': 'iaf_psc_alpha',
'edge_wrap': True})
Top left: Layer with single row and five columns without periodic boundary conditions. Numbers above elements show element coordinates. Colors shifting from blue to magenta mark increasing distance from the element at \((-2,0)\). Bottom left: Same layer, but with periodic boundary conditions. Note that the element at \((2,0)\) now is a nearest neighbor to the element at \((-2,0)\). Right: Layer with periodic boundary condition arranged on a circle to illustrate neighborhood relationships.¶
Note that the longest possible distance between two elements in a layer without periodic boundary conditions is \(\sqrt{x_{\text{ext}}^2 + y_{\text{ext}}^2}\), but only \(\frac{1}{2}\sqrt{x_{\text{ext}}^2 + y_{\text{ext}}^2}\) for a layer with periodic boundary conditions; \(x_{\text{ext}}\) and \(y_{\text{ext}}\) are the components of the extent size.
We will discuss the consequences of periodic boundary conditions more in Chapter 3.
From the perspective of NEST, a Topology layer is a special type of subnet. From the user perspective, the following points may be of interest:
Grid-based layers have the NEST model type topology_layer_grid, free layers the model type topology_layer_free.
The status dictionary of a layer has a 'topology' entry describing the layer properties (l is the layer created above):
print(nest.GetStatus(l)[0]['topology'])
{'center': (0.0, 0.0), 'columns': 5, 'depth': 1, 'edge_wrap': False, 'extent': (1.0, 1.0), 'rows': 5}
The ‘topology’ entry is read-only.
The NEST kernel sees the elements of the layer in the same way as the elements of any subnet. You will notice this when printing a network with a Topology layer:
nest.PrintNetwork(depth=3)
+-[0] root dim=[1 25]
|
+-[1] topology_layer_grid dim=[25]
|
+-[1]...[25] iaf_psc_alpha
The \(5\times 5\) layer created above appears here as a topology_layer_grid subnet of 25 iaf_psc_alpha neurons. Only Topology connection and visualization functions heed the spatial structure of the layer.
Layers with composite elements¶
So far, we have considered layers in which each element was a single model neuron. Topology can also create layers with composite elements, i.e., layers in which each element is a collection of model neurons, or, in general NEST network nodes.
Construction of layers with composite elements proceeds exactly as for layers with simple elements, except that the 'elements' entry of the dictionary passed to CreateLayer is a Python list or tuple. The following code creates a \(1\times 2\) layer (to keep the output from PrintNetwork() compact) in which each element consists of one 'iaf_cond_alpha' and one 'poisson_generator' node:
l = tp.CreateLayer({'rows': 1, 'columns': 2,
'elements': ['iaf_cond_alpha',
'poisson_generator']})
+-[0] root dim=[1 4]
|
+-[1] topology_layer_grid dim=[4]
|
+-[1]...[2] iaf_cond_alpha
+-[3]...[4] poisson_generator
The network consists of one topology_layer_grid with four elements: two iaf_cond_alpha and two poisson_generator nodes. The identical nodes are grouped, so that the subnet contains first one full layer of iaf_cond_alpha nodes followed by one full layer of poisson_generator nodes.
You can create network elements with several nodes of each type by following a model name with the number of nodes to be created:
l = tp.CreateLayer({'rows': 1, 'columns': 2,
'elements': ['iaf_cond_alpha', 10,
'poisson_generator',
'noise_generator', 2]})
+-[0] root dim=[1 26]
|
+-[1] topology_layer_grid dim=[26]
|
+-[1]...[20] iaf_cond_alpha
+-[21]...[22] poisson_generator
+-[23]...[26] noise_generator
In this case, each layer element consists of 10 iaf_cond_alpha neurons, one poisson_generator, and two noise_generator nodes.
Note the following points:
Each element of a layer has identical components.
All nodes within a composite element have identical positions, namely the position of the layer element.
When inspecting a layer as a subnet, the different nodes will appear in groups of identical nodes.
For grid-based layers, the function GetElement returns a list of nodes at a given grid position. See Chapter 4 for more on inspecting layers.
In a previous version of the topology module it was possible to create layers with nested, composite elements, but such nested networks gobble up a lot of memory for subnet constructs and provide no practical advantages, so this is no longer supported. See the next section for design recommendations for more complex layers.
A paper on a neural network model might describe the network as follows [2]:
The network consists of \(20\times 20\) microcolumns placed on a regular grid spanning \(0.5^\circ\times 0.5^\circ\) of visual space. Neurons within each microcolumn are organized into L2/3, L4, and L56 subpopulations. Each subpopulation consists of three pyramidal cells and one interneuron. All pyramidal cells are modeled as NEST iaf_psc_alpha neurons with default parameter values, while interneurons are iaf_psc_alpha neurons with threshold voltage \(V_{\text{th}}=-52\) mV.
How should you implement such a network using the Topology module? The recommended approach is to create different models for the neurons in each layer and then define the microcolumn as one composite element:
for lyr in ['L23', 'L4', 'L56']:
nest.CopyModel('iaf_psc_alpha', lyr + 'pyr')
nest.CopyModel('iaf_psc_alpha', lyr + 'in', {'V_th': -52.})
l = tp.CreateLayer({'rows': 20, 'columns': 20, 'extent': [0.5, 0.5],
'elements': ['L23pyr', 3, 'L23in',
'L4pyr', 3, 'L4in',
'L56pyr', 3, 'L56in']})
We will discuss in Chapter 3.1 how to connect selectively to different neuron models.
Connections¶
The most important feature of the Topology module is the ability to create connections between layers with considerable flexibility. In this chapter, we will illustrate how to specify and create connections. All connections are created using the ConnectLayers function.
Basic principles¶
We begin by introducing important terminology:
- Connection
In the context of connections between the elements of Topology layers, we often call the set of all connections between pairs of network nodes created by a single call to ConnectLayers a connection.
- Connection dictionary
A dictionary specifying the properties of a connection between two layers in a call to ConnectLayers.
- Source
The source of a single connection is the node sending signals (usually spikes). In a projection, the source layer is the layer from which source nodes are chosen.
- Target
The target of a single connection is the node receiving signals (usually spikes). In a projection, the target layer is the layer from which target nodes are chosen.
- Connection type
The connection type determines how nodes are selected when ConnectLayers creates connections between layers. It is either 'convergent' or 'divergent'.
- Convergent connection
When creating a convergent connection between layers, Topology visits each node in the target layer in turn and selects sources for it in the source layer. Masks and kernels are applied to the source layer, and periodic boundary conditions are applied in the source layer, provided that the source layer has periodic boundary conditions.
- Divergent connection
When creating a divergent connection, Topology visits each node in the source layer and selects target nodes from the target layer. Masks, kernels, and boundary conditions are applied in the target layer.
- Driver
When connecting two layers, the driver layer is the one in which each node is considered in turn.
- Pool
When connecting two layers, the pool layer is the one from which nodes are chosen for each node in the driver layer. I.e., we have:
Connection type    Driver          Pool
convergent         target layer    source layer
divergent          source layer    target layer
- Displacement
The displacement between a driver and a pool node is the shortest vector connecting the driver to the pool node, taking boundary conditions into account.
- Distance
The distance between a driver and a pool node is the length of their displacement. (A short PyNEST example follows this list.)
- Mask
The mask defines which pool nodes are at all considered as potential targets for each driver node. See Sec. 3.3 for details.
- Kernel
The kernel is a function returning a (possibly distance- or displacement-dependent) probability for creating a connection between a driver and a pool node. The default kernel is \(1\), i.e., connections are created with certainty. See Sec. 3.4 for details.
- Autapse
An autapse is a synapse (connection) from a node onto itself. Autapses are permitted by default, but can be disabled by adding 'allow_autapses': False to the connection dictionary.
- Multapse
Node A is connected to node B by a multapse if there are two or more synapses (connections) from A to B. Multapses are permitted by default, but can be disabled by adding 'allow_multapses': False to the connection dictionary.
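As mentioned above, displacement and distance can be queried directly in PyNEST. A small self-contained sketch, assuming NEST 2.x:
import nest
import nest.topology as tp
nest.ResetKernel()
l = tp.CreateLayer({'rows': 5, 'columns': 5, 'elements': 'iaf_psc_alpha'})
nodes = nest.GetLeaves(l)[0]
# Displacement is the shortest vector from the first to the last element;
# distance is its length. Both honor periodic boundaries if edge_wrap is set.
print(tp.Displacement(nodes[:1], nodes[-1:]))
print(tp.Distance(nodes[:1], nodes[-1:]))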
Connections between Topology layers are created by calling ConnectLayers with the following arguments [3]:
The source layer.
The target layer (can be identical to the source layer).
A connection dictionary that contains at least the following entry:
- 'connection_type'
either 'convergent' or 'divergent'.
In many cases, the connection dictionary will also contain
- 'mask'
a mask specification as described in Sec. 3.3.
Only neurons within the mask are considered as potential sources or targets. If no mask is given, all neurons in the respective layer are considered sources or targets.
Here is a simple example, cf. Figure 17:
l = tp.CreateLayer({'rows': 11, 'columns': 11, 'extent': [11., 11.],
'elements': 'iaf_psc_alpha'})
conndict = {'connection_type': 'divergent',
'mask': {'rectangular': {'lower_left': [-2., -1.],
'upper_right': [2., 1.]}}}
tp.ConnectLayers(l, l, conndict)
Left: Minimal connection example from a layer onto itself using a rectangular mask shown as red line for the node at \((0,0)\) (marked light red). The targets of this node are marked with red dots. The targets for the node at \((4,5)\) are marked with yellow dots. This node has fewer targets since it is at the corner and many potential targets are beyond the layer. Right: The effect of periodic boundary conditions is seen here. Source and target layer and connection dictionary were identical, except that periodic boundary conditions were used. The node at \((4,5)\) now has 15 targets, too, but they are spread across the corners of the layer. If we wrapped the layer to a torus, they would form a \(5\times 3\) rectangle centered on the node at \((4,5)\).¶
In this example, layer l
is both source and target layer. Connection
type is divergent, i.e., for each node in the layer we choose targets
according to the rectangular mask centered about each source node. Since
no connection kernel is specified, we connect to all nodes within the
mask. Note the effect of normal and periodic boundary conditions on the
connections created for different nodes in the layer, as illustrated in
Figure 17.
Mapping source and target layers¶
The application of masks and other functions depending on the distance or even the displacement between nodes in the source and target layers requires a mapping of coordinate systems between source and target layers. Topology applies the following coordinate mapping rules:
All layers have two-dimensional Euclidean coordinate systems.
No scaling or coordinate transformation can be applied between layers.
The displacement \(d(D,P)\) from node \(D\) in the driver layer to node \(P\) in the pool layer is measured by first mapping the position of \(D\) in the driver layer to the identical position in the pool layer and then computing the displacement from that position to \(P\). If the pool layer has periodic boundary conditions, they are taken into account. It does not matter for displacement computations whether the driver layer has periodic boundary conditions.
Masks¶
A mask describes which area of the pool layer shall be searched for nodes to connect for any given node in the driver layer. We will first describe geometrical masks defined for all layer types and then consider grid-based masks for grid-based layers. If no mask is specified, all nodes in the pool layer will be searched.
Note that the mask size should not exceed the size of the layer when using periodic boundary conditions, since the mask would “wrap around” in that case and pool nodes would be considered multiple times as targets.
If none of the mask types provided in the topology library meet your needs, you may add more mask types in a NEST extension module. This is covered in Chapter 5.
Topology currently provides four types of masks usable for 2-dimensional free and grid-based layers. They are illustrated in Figure 18. The masks are
- Rectangular
All nodes within a rectangular area are connected. The area is specified by its lower left and upper right corners, measured in the same unit as element coordinates. Example:
conndict = {'connection_type': 'divergent',
'mask': {'rectangular': {'lower_left': [-2., -1.],
'upper_right': [2., 1.]}}}
- Circular
All nodes within a circle are connected. The area is specified by its radius.
conndict = {'connection_type': 'divergent',
'mask': {'circular': {'radius': 2.0}}}
- Doughnut
All nodes between an inner and outer circle are connected. Note that nodes on the inner circle are not connected. The area is specified by the radii of the inner and outer circles.
conndict = {'connection_type': 'divergent',
'mask': {'doughnut': {'inner_radius': 1.5,
'outer_radius': 3.}}}
- Elliptical
All nodes within an ellipse are connected. The area is specified by its major and minor axes. Note that this mask was added to NEST with NEST 2.14.
conndict = {'connection_type': 'divergent',
'mask': {'elliptical': {'major_axis': 7.,
'minor_axis': 4.}}}
Masks for 2D layers. For all mask types, the driver node is marked by a wide light-red circle, the selected pool nodes by red dots and the masks by red lines. From left to right, top to bottom: rectangular, circular, doughnut and elliptical masks centered about the driver node.¶
By default, the masks are centered about the position of the driver
node, mapped into the pool layer. You can change the location of the
mask relative to the driver node by specifying an 'anchor'
entry in
the mask dictionary. The anchor is a 2D vector specifying the location
of the mask center relative to the driver node, as in the following
examples (cf. Figure 19).
conndict = {'connection_type': 'divergent',
'mask': {'rectangular': {'lower_left': [-2., -1.],
'upper_right': [2., 1.]},
'anchor': [-1.5, -1.5]}}
conndict = {'connection_type': 'divergent',
'mask': {'circular': {'radius': 2.0},
'anchor': [-2.0, 0.0]}}
conndict = {'connection_type': 'divergent',
'mask': {'doughnut': {'inner_radius': 1.5,
'outer_radius': 3.},
'anchor': [1.5, 1.5]}}
conndict = {'connection_type': 'divergent',
'mask': {'elliptical': {'major_axis': 7.,
'minor_axis': 4.},
'anchor': [2.0, -1.0]}}
The same masks as in Figure 18, but centered about
\((-1.5,-1.5)\), \((-2,0)\), \((1.5,1.5)\) and
\((2, -1)\), respectively, using the 'anchor'
parameter.¶
It is also possible to rotate the \(\textbf{rectangular}\) and \(\textbf{elliptical}\) masks, see the figure below. To do so, add an 'azimuth_angle' entry in the specific mask dictionary. The azimuth_angle is measured in degrees and is the rotational angle from the x-axis to the y-axis.
conndict = {'connection_type': 'divergent',
'mask': {'rectangular': {'lower_left': [-2., -1.],
'upper_right': [2., 1.],
'azimuth_angle': 120.}}}
conndict = {'connection_type': 'divergent',
'mask': {'elliptical': {'major_axis': 7.,
'minor_axis': 4.,
'azimuth_angle': 45.}}}
Rotated rectangle and elliptical mask from Figure 18 and Figure 19, where the rectangle mask is rotated \(120^\circ\) and the elliptical mask is rotated \(45^\circ\).¶
Similarly, there are three mask types that can be used for 3D layers:
- Box
All nodes within a cuboid volume are connected. The area is specified by its lower left and upper right corners, measured in the same unit as element coordinates. Example:
conndict = {'connection_type': 'divergent',
'mask': {'box': {'lower_left': [-2., -1., -1.],
'upper_right': [2., 1., 1.]}}}
- Spherical
All nodes within a sphere are connected. The area is specified by its radius.
conndict = {'connection_type': 'divergent',
'mask': {'spherical': {'radius': 2.5}}}
- Ellipsoidal
All nodes within an ellipsoid are connected. The area is specified by its major, minor, and polar axes. This mask has been part of NEST since NEST 2.14.
conndict = {'connection_type': 'divergent',
'mask': {'ellipsoidal': {'major_axis': 7.,
'minor_axis': 4.,
'polar_axis': 4.5}}}
As in the 2D case, you can change the location of the mask relative to the driver node by specifying a 3D vector in the 'anchor' entry in the mask dictionary. If you want to rotate the box or ellipsoidal masks, you can add an 'azimuth_angle' entry in the specific mask dictionary, for rotation from the x-axis towards the y-axis about the z-axis, or a 'polar_angle' entry, specifying the rotation angle in degrees from the z-axis about the (possibly rotated) x-axis. You can of course specify both at once; in that case, the mask is first rotated about the z-axis and then about the new x-axis. NEST currently does not support rotation in all three directions; rotation from the y-axis about the (possibly rotated) z-axis is missing.
Masks for 3D layers. For all mask types, the driver node is marked by a wide light-red circle, the selected pool nodes by red dots and the masks by red lines. From left to right: box and spherical masks centered about the driver node.¶
Grid-based layers can be connected using rectangular grid masks. For these, you specify the size of the mask not by lower left and upper right corner coordinates, but give their size in rows and columns, as in this example:
conndict = {'connection_type': 'divergent',
'mask': {'grid': {'rows': 3, 'columns': 5}}}
The resulting connections are shown in Figure 22. By default the top-left corner of a grid mask, i.e., the grid mask element with grid index \([0,0]\) [4], is aligned with the driver node. You can change this alignment by specifying an anchor for the mask:
conndict = {'connection_type': 'divergent',
'mask': {'grid': {'rows': 3, 'columns': 5},
'anchor': {'row': 1, 'column': 2}}}
You can even place the anchor outside the mask:
conndict = {'connection_type': 'divergent',
'mask': {'grid': {'rows': 3, 'columns': 5},
'anchor': {'row': -1, 'column': 2}}}
The resulting connection patterns are shown in Figure 22.
Grid masks for connections between grid-based layers. Left: \(5\times 3\) mask with default alignment at the upper left corner. Center: Same mask, but anchored to the center node at grid index \([1,2]\). Right: Same mask, but anchored to the upper left of the mask at grid index \([-1,2]\).¶
Connections specified using grid masks are generated more efficiently than connections specified using other mask types.
Note the following:
Grid-based masks are applied by considering grid indices. The position of nodes in physical coordinates is ignored.
In consequence, grid-based masks should only be used between layers with identical grid spacings.
The semantics of the 'anchor' property for grid-based masks differ significantly from those of the general masks described in Sec. 3.3.1. For general masks, the anchor is the center of the mask relative to the driver node. For grid-based masks, the anchor determines which mask element is aligned with the driver element.
Kernels¶
Many neuronal network models employ probabilistic connection rules. Topology supports probabilistic connections through kernels. A kernel is a function mapping the distance (or displacement) between a driver and a pool node to a connection probability. Topology then generates a connection according to this probability.
Probabilistic connections can be generated in two different ways using Topology:
- Free probabilistic connections
are the default. In this case, ConnectLayers considers each driver node \(D\) in turn. For each \(D\), it evaluates the kernel for each pool node \(P\) within the mask and creates a connection according to the resulting probability. This means in particular that each possible driver-pool pair is inspected exactly once and that there will be at most one connection between each driver-pool pair.
- Prescribed number of connections
can be obtained by specifying the number of connections to create per driver node. See Sec. 3.7 for details.
Available kernel functions are shown in the table below. More kernel functions may be created in a NEST extension module. This is covered in Chapter 5.
\(d\) is the distance and \((d_x,d_y)\) the displacement. All functions can be used to specify weights and delays, but only the constant and the distance-dependent functions, i.e., the entries from constant through gamma below, can be used as kernels.
- constant
constant \(p\in[0,1]\)
- linear
Parameters: a, c
\[p(d) = c + a d\]
- exponential
Parameters: a, c, tau
\[p(d) = c + a e^{-\frac{d}{\tau}}\]
- gaussian
Parameters: p_center, sigma, mean, c
\[p(d) = c + p_{\text{center}} e^{-\frac{(d-\mu)^2}{2\sigma^2}}\]
- gaussian2D
Parameters: p_center, sigma_x, sigma_y, mean_x, mean_y, rho, c
\[p(d_x,d_y) = c + p_{\text{center}} e^{-\frac{\frac{(d_x-\mu_x)^2}{\sigma_x^2} + \frac{(d_y-\mu_y)^2}{\sigma_y^2} - 2\rho\frac{(d_x-\mu_x)(d_y-\mu_y)}{\sigma_x \sigma_y}}{2(1-\rho^2)}}\]
- gamma
Parameters: kappa, theta
\[p(d) = \frac{d^{\kappa-1}e^{-\frac{d}{\theta}}}{\theta^\kappa\Gamma(\kappa)}\]
- uniform
Parameters: min, max
\(p\in [\text{min},\text{max})\) uniformly
- normal
Parameters: mean, sigma, min, max
\(p \in [\text{min},\text{max})\) normal with given mean and \(\sigma\)
- lognormal
Parameters: mu, sigma, min, max
\(p \in [\text{min},\text{max})\) lognormal with given \(\mu\) and \(\sigma\)

Illustration of various kernel functions. Top left: constant kernel, \(p=0.5\). Top center: Gaussian kernel, green dashed lines show \(\sigma\), \(2\sigma\), \(3\sigma\). Top right: Same Gaussian kernel anchored at \((1.5,1.5)\). Bottom left: Same Gaussian kernel, but all \(p<0.5\) treated as \(p=0\). Bottom center: 2D-Gaussian.¶
Several examples follow. They are illustrated in Figure 23.
- Constant
The simplest kernel is a fixed connection probability:
conndict = {'connection_type': 'divergent',
            'mask': {'circular': {'radius': 4.}},
            'kernel': 0.5}
- Gaussian
This kernel is distance dependent. In the example, connection probability is 1 for \(d=0\) and falls off with a “standard deviation” of \(\sigma=1\):
conndict = {'connection_type': 'divergent',
            'mask': {'circular': {'radius': 4.}},
            'kernel': {'gaussian': {'p_center': 1.0,
                                    'sigma': 1.}}}
- Eccentric Gaussian
In this example, both kernel and mask have been moved using anchors:
conndict = {'connection_type': 'divergent',
            'mask': {'circular': {'radius': 4.},
                     'anchor': [1.5, 1.5]},
            'kernel': {'gaussian': {'p_center': 1.0,
                                    'sigma': 1.,
                                    'anchor': [1.5, 1.5]}}}
Note that the anchor for the kernel is specified inside the dictionary containing the parameters for the Gaussian.
- Cut-off Gaussian
In this example, all probabilities less than \(0.5\) are set to zero:
conndict = {'connection_type': 'divergent',
            'mask': {'circular': {'radius': 4.}},
            'kernel': {'gaussian': {'p_center': 1.0,
                                    'sigma': 1.,
                                    'cutoff': 0.5}}}
- 2D Gaussian
We conclude with an example using a two-dimensional Gaussian, i.e., a Gaussian with different widths in \(x\)- and \(y\)-directions. This kernel depends on displacement, not only on distance:
conndict = {'connection_type': 'divergent',
            'mask': {'circular': {'radius': 4.}},
            'kernel': {'gaussian2D': {'p_center': 1.0,
                                      'sigma_x': 1.,
                                      'sigma_y': 3.}}}
Note that for pool layers with periodic boundary conditions, Topology always uses the shortest possible displacement vector from driver to pool neuron as argument to the kernel function.
Weights and delays¶
The functions presented in Table tbl_kernels can also be used to specify distance-dependent or randomized weights and delays for the connections created by ConnectLayers.

Figure 24 illustrates weights and delays generated using these functions with the following code examples. All examples use a “layer” of 51 nodes placed on a line; the line is centered about \((25,0)\), so that the leftmost node has coordinates \((0,0)\). The distance between neighboring elements is 1. The mask is rectangular, spans the entire layer and is centered about the driver node.
Linear example
ldict = {'rows': 1, 'columns': 51,
         'extent': [51., 1.], 'center': [25., 0.],
         'elements': 'iaf_psc_alpha'}
cdict = {'connection_type': 'divergent',
         'mask': {'rectangular': {'lower_left': [-25.5, -0.5],
                                  'upper_right': [25.5, 0.5]}},
         'weights': {'linear': {'c': 1.0,
                                'a': -0.05,
                                'cutoff': 0.0}},
         'delays': {'linear': {'c': 0.1, 'a': 0.02}}}
Results are shown in the top panel of Figure 24. Connection weights and delays are shown for the leftmost neuron as driver. Weights drop linearly from \(1\). From the node at \((20,0)\) on, the cutoff sets weights to 0. There are no connections to nodes beyond \((25,0)\), since the mask extends only 25 units to the right of the driver. Delays increase in a stepwise linear fashion, as NEST requires delays to be multiples of the simulation resolution.
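To verify the generated weights and delays directly, one can build the layer from ldict and cdict above and query the outgoing connections of the leftmost node. This is a minimal sketch, not part of the original example; it assumes the standard imports used throughout this manual.

import nest
import nest.topology as tp

nest.ResetKernel()
l = tp.CreateLayer(ldict)
tp.ConnectLayers(l, l, cdict)

# The first leaf of the one-row layer is the leftmost node.
driver = nest.GetLeaves(l)[0][0]
conns = nest.GetConnections(source=[driver])
for w, d in nest.GetStatus(conns, ['weight', 'delay']):
    print(w, d)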
Linear example with periodic boundary conditions
cdict = {'connection_type': 'divergent',
         'mask': {'rectangular': {'lower_left': [-25.5, -0.5],
                                  'upper_right': [25.5, 0.5]}},
         'weights': {'linear': {'c': 1.0,
                                'a': -0.05,
                                'cutoff': 0.0}},
         'delays': {'linear': {'c': 0.1, 'a': 0.02}}}
Results are shown in the middle panel of Figure 24. This example is identical to the previous, except that the (pool) layer has periodic boundary conditions. Therefore, the left half of the mask about the node at \((0,0)\) wraps back to the right half of the layer and that node connects to all nodes in the layer.
Various functions
cdict = {'connection_type': 'divergent',
         'mask': {'rectangular': {'lower_left': [-25.5, -0.5],
                                  'upper_right': [25.5, 0.5]}},
         'weights': {'exponential': {'a': 1., 'tau': 5.}}}
cdict = {'connection_type': 'divergent',
         'mask': {'rectangular': {'lower_left': [-25.5, -0.5],
                                  'upper_right': [25.5, 0.5]}},
         'weights': {'gaussian': {'p_center': 1., 'sigma': 5.}}}
Results are shown in the bottom panel of Figure 24. It shows linear, exponential and Gaussian weight functions for the node at \((25,0)\).
Randomized weights and delays
cdict = {'connection_type': 'divergent',
         'mask': {'rectangular': {'lower_left': [-25.5, -0.5],
                                  'upper_right': [25.5, 0.5]}},
         'weights': {'uniform': {'min': 0.2, 'max': 0.8}}}
By using the 'uniform' function for weights or delays, one can obtain randomized values for weights and delays, as shown by the red circles in the bottom panel of Figure 24. Weights and delays can currently only be randomized with uniform distribution.

Distance-dependent and randomized weights and delays. See text for details.¶
Periodic boundary conditions¶
Connections between layers with periodic boundary conditions are based on the following principles:
Periodic boundary conditions are always applied in the pool layer. It is irrelevant whether the driver layer has periodic boundary conditions or not.
By default, Topology does not accept masks that are wider than the pool layer when using periodic boundary conditions. Otherwise, one pool node could appear as multiple targets to the same driver node as the mask wraps several times around the layer. For layers with different extents in \(x\)- and \(y\)-directions this means that the maximum mask size is determined by the smaller extent.
Kernel, weight and delay functions always consider the shortest distance (displacement) between driver and pool node.
In most physical systems simulated using periodic boundary conditions, interactions between entities are short-range. Periodic boundary conditions are well-defined in such cases. In neuronal network models with long-range interactions, periodic boundary conditions may not make sense. In general, we recommend using periodic boundary conditions only when connection masks are significantly smaller than the layers they are applied to.
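As a minimal illustration of the wrapping behavior (an added sketch, not from the original manual), the following code creates a one-row layer with periodic boundary conditions and shows that Topology reports the wrapped, i.e., shortest, distance between the outermost nodes:

import nest
import nest.topology as tp

nest.ResetKernel()
l = tp.CreateLayer({'rows': 1, 'columns': 10, 'extent': [10., 1.],
                    'elements': 'iaf_psc_alpha', 'edge_wrap': True})
leaves = nest.GetLeaves(l)[0]
# Leftmost node sits at x=-4.5, rightmost at x=4.5; with edge_wrap the
# reported distance is the wrapped one, 1.0, not 9.0.
print(tp.Distance([leaves[0]], [leaves[-1]]))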
Prescribed number of connections¶
We have so far described how to connect layers by either connecting to all nodes inside the mask or by considering each pool node in turn and connecting it according to a given probability function. In both cases, the number of connections generated depends on mask and kernel.
Many network models in the literature, in contrast, prescribe a certain fan in (number of incoming connections) or fan out (number of outgoing connections) for each node. You can achieve this in Topology by prescribing the number of connections for each driver node. For convergent connections, where the target layer is the driver layer, you thus achieve a constant fan in; for divergent connections, a constant fan out.
Connection generation now proceeds in a different way than before:
For each driver node, ConnectLayers randomly selects a node from the mask region in the pool layer, and creates a connection with the probability prescribed by the kernel. This is repeated until the requested number of connections has been created. Thus, if all nodes in the mask shall be connected with equal probability, you should not specify any kernel.
If you specify a non-uniform kernel (e.g., Gaussian, linear, exponential), the connections will be distributed within the mask with the spatial profile given by the kernel.
If you prohibit multapses (cf. Sec. 3.1.1) and prescribe a number of connections greater than the number of pool nodes in the mask, ConnectLayers may get stuck in an infinite loop and NEST will hang. Keep in mind that the number of nodes within the mask may vary considerably for free layers with randomly placed nodes.
The following code generates a network of 1000 randomly placed nodes and connects them with a fixed fan out of 50 outgoing connections per node distributed with a profile linearly decaying from unit probability to zero probability at distance \(0.5\). Multiple connections (multapses) between pairs of nodes are allowed, self-connections (autapses) prohibited. The probability of finding a connection at a certain distance is then given by the product of the probability for finding nodes at a certain distance with the kernel value for this distance. For the kernel and parameter values below, the number of nodes at distance \(d\) grows proportionally to \(d\), while the kernel decays as \(1-2d\), so the expected, normalized distance distribution is
\[p(d) = 24\, d\, (1-2d), \qquad 0 \le d < \tfrac{1}{2}.\]
The resulting distribution of distances between connected nodes is shown in Figure 25.
import numpy as np

pos = [[np.random.uniform(-1., 1.), np.random.uniform(-1., 1.)]
       for j in range(1000)]
ldict = {'positions': pos, 'extent': [2., 2.],
         'elements': 'iaf_psc_alpha', 'edge_wrap': True}
cdict = {'connection_type': 'divergent',
         'mask': {'circular': {'radius': 1.0}},
         'kernel': {'linear': {'c': 1., 'a': -2., 'cutoff': 0.0}},
         'number_of_connections': 50,
         'allow_multapses': True, 'allow_autapses': False}
l = tp.CreateLayer(ldict)
tp.ConnectLayers(l, l, cdict)

Distribution of distances between source and target for a network of 1000 randomly placed nodes, a fixed fan out of 50 connections and a connection probability decaying linearly from 1 to 0 at \(d=0.5\). The red line is the expected distribution from Eq. eq_ptheo.¶
Functions determining weight and delay as function of distance/displacement work in just the same way as before when the number of connections is prescribed.
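For instance, a sketch (with illustrative values, not from the original text) combining a prescribed fan out with a distance-dependent weight profile:

cdict = {'connection_type': 'divergent',
         'mask': {'circular': {'radius': 0.5}},
         'number_of_connections': 20,
         'kernel': {'linear': {'c': 1., 'a': -2., 'cutoff': 0.0}},
         'weights': {'gaussian': {'p_center': 1.0, 'sigma': 0.25}},
         'allow_multapses': True}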
Connecting composite layers¶
Connections between layers with composite elements are based on the following principles:
All nodes within a composite element have the same coordinates, the coordinates of the element.
All nodes within a composite element are treated equally. If, e.g., an element of the pool layer contains three nodes and connection probability is 1, then connections with all three nodes will be created. For probabilistic connection schemes, each of the three nodes will be considered individually.
If only nodes of a given model within each element shall be considered as sources or targets, then this can be achieved by adding a 'sources' or 'targets' entry to the connection dictionary, which specifies the model to connect.
This is exemplified by the following code, which connects pyramidal cells (pyr) to interneurons (in) with a circular mask and uniform probability, and interneurons to pyramidal cells with a rectangular mask and unit probability.
nest.ResetKernel()
nest.CopyModel('iaf_psc_alpha', 'pyr')
nest.CopyModel('iaf_psc_alpha', 'in')
ldict = {'rows': 10, 'columns': 10, 'elements': ['pyr', 'in']}
cdict_p2i = {'connection_type': 'divergent',
             'mask': {'circular': {'radius': 0.5}},
             'kernel': 0.8,
             'sources': {'model': 'pyr'},
             'targets': {'model': 'in'}}
cdict_i2p = {'connection_type': 'divergent',
             'mask': {'rectangular': {'lower_left': [-0.2, -0.2],
                                      'upper_right': [0.2, 0.2]}},
             'sources': {'model': 'in'},
             'targets': {'model': 'pyr'}}
l = tp.CreateLayer(ldict)
tp.ConnectLayers(l, l, cdict_p2i)
tp.ConnectLayers(l, l, cdict_i2p)
Synapse models and properties¶
By default, ConnectLayers creates connections using the default synapse model in NEST, static_synapse. You can specify a different model by adding a 'synapse_model' entry to the connection dictionary, as in this example:
nest.ResetKernel()
nest.CopyModel('iaf_psc_alpha', 'pyr')
nest.CopyModel('iaf_psc_alpha', 'in')
nest.CopyModel('static_synapse', 'exc', {'weight': 2.0})
nest.CopyModel('static_synapse', 'inh', {'weight': -8.0})
ldict = {'rows': 10, 'columns': 10, 'elements': ['pyr', 'in']}
cdict_p2i = {'connection_type': 'divergent',
             'mask': {'circular': {'radius': 0.5}},
             'kernel': 0.8,
             'sources': {'model': 'pyr'},
             'targets': {'model': 'in'},
             'synapse_model': 'exc'}
cdict_i2p = {'connection_type': 'divergent',
             'mask': {'rectangular': {'lower_left': [-0.2, -0.2],
                                      'upper_right': [0.2, 0.2]}},
             'sources': {'model': 'in'},
             'targets': {'model': 'pyr'},
             'synapse_model': 'inh'}
l = tp.CreateLayer(ldict)
tp.ConnectLayers(l, l, cdict_p2i)
tp.ConnectLayers(l, l, cdict_i2p)
You have to use synapse models if you want to set, e.g., the receptor type of connections or parameters for plastic synapse models. These cannot be set in distance-dependent ways at present.
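As a sketch of setting a receptor type via a synapse model (the neuron model and all values here are illustrative assumptions, not part of the original text), one could copy static_synapse with a fixed receptor port and use it for a layer of multisynapse neurons:

nest.ResetKernel()
# A neuron model with several receptor ports; port 1 uses the first
# synaptic time constant in tau_syn (illustrative values).
nest.SetDefaults('iaf_psc_alpha_multisynapse', {'tau_syn': [2.0, 5.0]})
nest.CopyModel('static_synapse', 'port1_syn', {'receptor_type': 1})
ldict = {'rows': 5, 'columns': 5, 'elements': 'iaf_psc_alpha_multisynapse'}
cdict = {'connection_type': 'divergent',
         'mask': {'circular': {'radius': 0.5}},
         'synapse_model': 'port1_syn'}
l = tp.CreateLayer(ldict)
tp.ConnectLayers(l, l, cdict)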
Connecting devices to subregions of layers¶
It is possible to connect stimulation and recording devices only to specific subregions of layers. A simple way to achieve this is to create a layer which contains only the device, typically placed in its center. For connecting the device layer to a neuron layer, an appropriate mask needs to be specified, and optionally also an anchor for shifting the center of the mask. As demonstrated in the following example, stimulation devices require the divergent connection type:
nrn_layer = tp.CreateLayer({'rows': 20,
                            'columns': 20,
                            'elements': 'iaf_psc_alpha'})
stim = tp.CreateLayer({'rows': 1,
                       'columns': 1,
                       'elements': 'poisson_generator'})
cdict_stim = {'connection_type': 'divergent',
              'mask': {'circular': {'radius': 0.1},
                       'anchor': [0.2, 0.2]}}
tp.ConnectLayers(stim, nrn_layer, cdict_stim)
while recording devices require the convergent connection type (see also Sec. 3.11):
rec = tp.CreateLayer({'rows': 1,
                      'columns': 1,
                      'elements': 'spike_detector'})
cdict_rec = {'connection_type': 'convergent',
             'mask': {'circular': {'radius': 0.1},
                      'anchor': [-0.2, 0.2]}}
tp.ConnectLayers(nrn_layer, rec, cdict_rec)
Layers and recording devices¶
Generally, one should not create a layer of recording devices, especially spike detectors, to record from a topology layer. Instead, create a single spike detector, and connect all neurons in the layer to that spike detector using a normal connect command:
rec = nest.Create('spike_detector')
nrns = nest.GetLeaves(nrn_layer, local_only=True)[0]
nest.Connect(nrns, rec)
Connections to a layer of recording devices as described in Sec. 3.10, such as spike detectors, are only possible using the convergent connection type without a fixed number of connections. Note that voltmeter and multimeter do not suffer from this restriction, since they are connected as sources, not as targets.
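To retrieve the recorded spikes after simulating, one can query the spike detector's events. This is a minimal added sketch using rec from the code above; without any stimulus, the lists will simply be empty.

nest.Simulate(100.0)
# Each recorded spike is stored as a (sender GID, spike time) pair.
events = nest.GetStatus(rec, 'events')[0]
print(events['senders'])
print(events['times'])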
Inspecting Layers¶
We strongly recommend that you inspect the layers created by Topology to be sure that node placement and connectivity indeed turned out as expected. In this chapter, we describe some functions that NEST and Topology provide to query and visualize networks, layers, and connectivity.
Query functions¶
The following table presents some query functions provided by NEST (nest.) and Topology (tp.). For detailed information about these functions, please see the online Python and SLI documentation.
nest.PrintNetwork()
    Print structure of network or subnet from NEST perspective.
nest.GetConnections()
    Retrieve connections (all or for a given source or target); see also http://www.nest-simulator.org/connection_management.
nest.GetNodes()
    Applied to a layer, returns GIDs of the layer elements. For simple layers, these are the actual model neurons, for composite layers the top-level subnets.
nest.GetLeaves()
    Applied to a layer, returns GIDs of all actual model neurons, ignoring subnets.
tp.GetPosition()
    Return the spatial locations of nodes.
tp.GetLayer()
    Return the layer to which nodes belong.
tp.GetElement()
    Return the node(s) at the location(s) in the given grid-based layer(s).
tp.GetTargetNodes()
    Obtain targets of a list of sources in a given target layer.
tp.GetTargetPositions()
    Obtain positions of targets of a list of sources in a given target layer.
tp.FindNearestElement()
    Return the node(s) closest to the location(s) in the given layer(s).
tp.FindCenterElement()
    Return GID(s) of node closest to center of layer(s).
tp.Displacement()
    Obtain vector of lateral displacement between nodes, taking periodic boundary conditions into account.
tp.Distance()
    Obtain vector of lateral distances between nodes, taking periodic boundary conditions into account.
tp.DumpLayerNodes()
    Write layer element positions to file.
tp.DumpLayerConnections()
    Write connectivity information to file. This function may be very useful to check that Topology created the correct connection structure.
tp.SelectNodesByMask()
    Obtain GIDs of nodes/elements inside a masked area of a layer. Part of NEST since NEST 2.14.
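A few of these functions in action (an added sketch; the file name is arbitrary):

import nest
import nest.topology as tp

nest.ResetKernel()
l = tp.CreateLayer({'rows': 5, 'columns': 5, 'elements': 'iaf_psc_alpha'})
leaves = nest.GetLeaves(l)[0]

print(tp.GetPosition([leaves[0]]))             # position of the first node
print(tp.FindCenterElement(l))                 # GID of the center node
print(tp.Distance([leaves[0]], [leaves[-1]]))  # lateral distance
tp.DumpLayerNodes(l, 'layer_positions.txt')    # write positions to file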
Visualization functions¶
Topology provides three functions to visualize networks:

tp.PlotLayer()
    Plot nodes in a layer.
tp.PlotTargets()
    Plot all targets of a node in a given layer.
tp.PlotKernel()
    Add indication of mask and kernel to plot of layer. It does not wrap masks and kernels with respect to periodic boundary conditions. This function is usually called by PlotTargets.

\(21\times 21\) grid with divergent Gaussian projections onto itself. Blue circles mark layer elements, red circles connection targets of the center neuron (marked by large light-red circle). The large red circle is the mask, the dashed green lines mark \(\sigma\), \(2\sigma\) and \(3\sigma\) of the Gaussian kernel.¶
The following code shows a practical example: A \(21\times21\) network which connects to itself with divergent Gaussian connections. The resulting plot is shown in Figure 26. All elements and the targets of the center neuron are shown, as well as mask and kernel.
l = tp.CreateLayer({'rows': 21, 'columns': 21,
                    'elements': 'iaf_psc_alpha'})
conndict = {'connection_type': 'divergent',
            'mask': {'circular': {'radius': 0.4}},
            'kernel': {'gaussian': {'p_center': 1.0, 'sigma': 0.15}}}
tp.ConnectLayers(l, l, conndict)
fig = tp.PlotLayer(l, nodesize=80)
ctr = tp.FindCenterElement(l)
tp.PlotTargets(ctr, l, fig=fig,
               mask=conndict['mask'], kernel=conndict['kernel'],
               src_size=250, tgt_color='red', tgt_size=20,
               kernel_color='green')
Adding topology kernels and masks¶
This chapter will show examples of how to extend the topology module by adding custom kernel functions and masks. Some knowledge of the C++ programming language is needed for this. The functions will be added as a part of an extension module which is dynamically loaded into NEST. For more information on writing an extension module, see the section titled “Writing an Extension Module” in the NEST Developer Manual. The basic steps required to get started are:
From the NEST source directory, copy the directory examples/MyModule to somewhere outside the NEST source, build or install directories.

Change to the new location of MyModule and prepare by issuing

./bootstrap.sh

Leave MyModule and create a build directory for it, e.g., mmb next to it:

cd ..
mkdir mmb
cd mmb

Configure. The configure process uses the script nest-config to find out where NEST is installed, where the source code resides, and which compiler options were used for compiling NEST. If nest-config is not in your path, you need to provide it explicitly like this:

cmake -Dwith-nest=${NEST_INSTALL_DIR}/bin/nest-config ../MyModule

MyModule will then be installed to \${NEST_INSTALL_DIR}. This ensures that NEST will be able to find initializing SLI files for the module. You should not use the --prefix option to select a different installation destination. If you do, you must make sure to use addpath in SLI before loading the module to ensure that NEST will find the SLI initialization file for your module.

Compile:

make
make install
The previous command installed MyModule to the NEST installation directory, including help files generated from the source code.
Adding kernel functions¶
As an example, we will add a kernel function called 'affine2d', which will be linear (actually affine) in the displacement of the nodes, of the form
\[p(\mathbf{d}) = a d_x + b d_y + c.\]
The kernel functions are provided by C++ classes subclassed from nest::Parameter. To enable subclassing, add the following lines at the top of the file mymodule.h:
#include "topologymodule.h"
#include "parameter.h"
Then, add the class definition, e.g. near the bottom of the file before the brace closing the namespace mynest:
class Affine2DParameter : public nest::Parameter
{
public:
  Affine2DParameter(const DictionaryDatum& d)
    : Parameter(d)
    , a_(1.0)
    , b_(1.0)
    , c_(0.0)
  {
    updateValue<double>(d, "a", a_);
    updateValue<double>(d, "b", b_);
    updateValue<double>(d, "c", c_);
  }

  double raw_value(const nest::Position<2>& disp,
                   librandom::RngPtr&) const
  {
    return a_*disp[0] + b_*disp[1] + c_;
  }

  nest::Parameter* clone() const
  {
    return new Affine2DParameter(*this);
  }

private:
  double a_, b_, c_;
};
The class contains a constructor, which reads the value of the parameters \(a\), \(b\) and \(c\) from the dictionary provided by the user. The function updateValue will do nothing if the given key is not in the dictionary, and the default values \(a=b=1,\ c=0\) will be used.
The overridden method raw_value() will return the actual value of the kernel function for the displacement given as the first argument, which is of type nest::Position<2>. The template argument 2 refers to a 2-dimensional position. You can also implement a method taking a nest::Position<3> as the first argument if you want to support 3-dimensional layers. The second argument, a random number generator, is not used in this example.
The class also needs to have a clone() method, which will return a dynamically allocated copy of the object. We use the (default) copy constructor to implement this.
To make the custom function available to the Topology module, you need to register the class you have provided. To do this, add the line
nest::TopologyModule::register_parameter<Affine2DParameter>("affine2d");
to the function MyModule::init() in the file mymodule.cpp. Now compile and install the module by issuing

make
make install
To use the function, the module must be loaded into NEST using nest.Install(). Then, the function is available to be used in connections, e.g.:
nest.Install('mymodule')
l = tp.CreateLayer({'rows': 11, 'columns': 11, 'extent': [1., 1.],
                    'elements': 'iaf_psc_alpha'})
tp.ConnectLayers(l, l, {'connection_type': 'convergent',
                        'mask': {'circular': {'radius': 0.5}},
                        'kernel': {'affine2d': {'a': 1.0, 'b': 2.0, 'c': 0.5}}})
Adding masks¶
The process of adding a mask is similar to that of adding a kernel function. A subclass of nest::Mask<D> must be defined, where D is the dimension (2 or 3). In this case we will define a 2-dimensional elliptic mask by creating a class called EllipticMask. Note that elliptical masks are already part of NEST, see Sec. 3.3. That elliptical mask is defined in a different way than what we will do here though, so this can still be used as an introductory example. First, we must include another header file:
#include "mask.h"
Compared to the Parameter class discussed in the previous section, the Mask class has a few more methods that must be overridden:
class EllipticMask : public nest::Mask<2>
{
public:
  EllipticMask(const DictionaryDatum& d)
    : rx_(1.0), ry_(1.0)
  {
    updateValue<double>(d, "r_x", rx_);
    updateValue<double>(d, "r_y", ry_);
  }

  using Mask<2>::inside;

  // returns true if point is inside the ellipse
  bool inside(const nest::Position<2>& p) const
  {
    return p[0]*p[0]/rx_/rx_ + p[1]*p[1]/ry_/ry_ <= 1.0;
  }

  // returns true if the whole box is inside the ellipse
  bool inside(const nest::Box<2>& b) const
  {
    nest::Position<2> p = b.lower_left;

    // Test if all four corners are inside the mask.
    if (not inside(p)) return false;   // lower left
    p[0] = b.upper_right[0];
    if (not inside(p)) return false;   // lower right
    p[1] = b.upper_right[1];
    if (not inside(p)) return false;   // upper right
    p[0] = b.lower_left[0];
    if (not inside(p)) return false;   // upper left
    return true;
  }

  // returns bounding box of ellipse
  nest::Box<2> get_bbox() const
  {
    nest::Position<2> ll(-rx_, -ry_);
    nest::Position<2> ur(rx_, ry_);
    return nest::Box<2>(ll, ur);
  }

  nest::Mask<2>* clone() const
  {
    return new EllipticMask(*this);
  }

protected:
  double rx_, ry_;
};
The overridden methods include a test if a point is inside the mask, and for efficiency reasons also a test if a box is fully inside the mask. We implement the latter by testing if all the corners are inside, since our elliptic mask is convex. We must also define a function which returns a bounding box for the mask, i.e., a box completely surrounding the mask.
Similar to kernel functions, the mask class must be registered with the topology module, and this is done by adding a line to the function MyModule::init() in the file mymodule.cpp:
nest::TopologyModule::register_mask<EllipticMask>("elliptic");
After compiling and installing the module, the mask is available to be used in connections, e.g.
nest.Install('mymodule')
l = tp.CreateLayer({'rows': 11, 'columns': 11, 'extent': [1., 1.],
                    'elements': 'iaf_psc_alpha'})
tp.ConnectLayers(l, l, {'connection_type': 'convergent',
                        'mask': {'elliptic': {'r_x': 0.5, 'r_y': 0.25}}})
Changes between versions¶
In this chapter we give summaries of the most important changes in the Topology Module between different NEST versions, starting with the most recent ones.
Changes from Topology 2.14 to 2.16¶
The one important change in the Topology module from NEST version 2.14 to 2.16 was the inclusion of rotated masks:

Rotation of rectangular/box and elliptical/ellipsoidal masks is now possible. NEST offers rotation in two directions, from the x-axis towards the y-axis, and from the z-axis away from the y-axis. To specify the former, use the variable azimuth_angle; for the latter, use polar_angle.
Changes from Topology 2.12 to 2.14¶
This is a short summary of the most important changes in the Topology Module from NEST version 2.12 to 2.14.
Elliptical and ellipsoidal masks have been added to NEST with NEST 2.14. To specify the mask, the major_axis, minor_axis and (for ellipsoidal masks) polar_axis must be specified.

It is now possible to obtain the GIDs inside a masked area with the function SelectNodesByMask, as sketched below.
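A minimal sketch of SelectNodesByMask (layer and mask parameters are illustrative):

import nest
import nest.topology as tp

nest.ResetKernel()
l = tp.CreateLayer({'rows': 11, 'columns': 11, 'extent': [2., 2.],
                    'elements': 'iaf_psc_alpha'})
# Select all nodes within a circular region of radius 0.25
# centered at the origin of the layer.
mask = tp.CreateMask('circular', {'radius': 0.25})
gids = tp.SelectNodesByMask(l, [0.0, 0.0], mask)
print(gids)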
Changes from Topology 2.0 to 2.2¶
This is a short summary of the most important changes in the Topology Module from NEST version 2.0 to 2.2.
Nested layers are no longer supported.
Subnets are no longer used inside composite layers. A call to GetElement for a composite layer will now return a list of GIDs for the nodes at the position rather than a single subnet GID.
Positions in layers may now be 3-dimensional.
The functions GetPosition, Displacement and Distance now only work for nodes local to the current MPI process, if used in an MPI-parallel simulation.
It is now possible to add kernel functions and masks to the Topology module through an extension module. Please see Chapter 5 for examples.
Changes from Topology 1.9 to 2.0¶
This is a short summary of the most important changes in the NEST Topology Module from the 1.9-xxxx to the 2.0 version.
ConnectLayer is now called ConnectLayers
Several other functions changed names, and there are many new functions. Please see Ch. 4 for an overview.
All nest.topology functions now require lists of GIDs as input, not “naked” GIDs.

There are a number of new functions in nest.topology; I tried to write good doc strings for them.

For grid-based layers (i.e., those with /rows and /columns), we have changed the definition of “extent”: Previously, nodes were placed on the edges of the extent, so if you had an extent of 2 (in x-direction) and 3 nodes, these had x-coordinates -1, 0, 1. The grid constant was extent/(num_nodes - 1).

Now, we define the grid constant as extent/num_nodes, center the nodes about 0 and thus add a space of half a grid constant between the outermost nodes and the boundary of the extent. If you want three nodes at -1, 0, 1, you thus have to set the extent to 3, i.e., stretching from -1.5 to 1.5.

The main reason for this change was that Topology always added this padding silently when you used periodic boundary conditions (otherwise, neurons at the left and right edge would have been in identical locations, which is not what one wants). The sketch below illustrates the new convention.
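A minimal sketch (not part of the original changelog) of the new extent convention:

import nest
import nest.topology as tp

nest.ResetKernel()
# Three nodes with extent 3: grid constant 3/3 = 1, nodes centered
# about 0, so positions are -1, 0, 1, with half a grid constant of
# padding between the outermost nodes and the extent boundary.
l = tp.CreateLayer({'rows': 1, 'columns': 3, 'extent': [3., 1.],
                    'elements': 'iaf_psc_alpha'})
print(tp.GetPosition(nest.GetLeaves(l)[0]))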
The semantics of the anchor entry for kernel functions has changed: the anchor now specifies the center of the probability distribution relative to the driver node. This is consistent with the semantics for free masks, see Sec. 3.3 and 3.4.

Functions computing connection probabilities, weights and delays as functions of distance between source and target nodes now handle periodic boundary conditions correctly.
Masks with a diameter larger than the diameter of the layer they are applied to are now prohibited by default. This avoids multiple connections when a mask wraps around the layer more than once.
References¶
1. NEST is available under an open source license at www.nest-simulator.org.
2. See Nordlie et al. (2009) for suggestions on how to describe network models.
3. You can also use standard NEST connection functions to connect nodes in Topology layers.
4. See Sec. 2.1.1 for the distinction between layer coordinates and grid indices.
Topology Tutorial with Hill Tononi Model¶
NEST Topology Module: A Case-Based Tutorial¶
NOTE: The network generated by this script does generate dynamics in which the activity of the entire system, especially Rp and Vp, oscillates at approximately 5 Hz. This is different from the full model. Deviations are due to the different model type and the elimination of a number of connections, with no changes to the weights.
Introduction¶
This tutorial shows you how to implement a simplified version of the Hill-Tononi model of the early visual pathway using the NEST Topology module. The model is described in the paper
S. L. Hill and G. Tononi. Modeling Sleep and Wakefulness in the Thalamocortical System. J Neurophysiology 93:1671-1698 (2005). Freely available via doi 10.1152/jn.00915.2004.
We simplify the model somewhat both to keep this tutorial a bit shorter, and because some details of the Hill-Tononi model are not currently supported by NEST. Simplifications include:
1. We use the iaf_cond_alpha neuron model, which is simpler than the Hill-Tononi model.
2. As the iaf_cond_alpha neuron model only supports two synapses (labeled “ex” and “in”), we only include AMPA and GABA_A synapses.
3. We ignore the secondary pathway (Ts, Rs, Vs), since it adds just more of the same from a technical point of view.
4. Synaptic delays follow a Gaussian distribution in the HT model. This actually implies a Gaussian distribution clipped at some small, non-zero delay, since delays must be positive. Currently, there is a bug in the Topology module when using a clipped Gaussian distribution. We therefore draw delays from a uniform distribution.
Some further adaptations are given at the appropriate locations in the script.
This tutorial is divided in the following sections:
- Philosophy: Discusses the philosophy applied to model implementation in this tutorial
- Preparations: Necessary steps to use NEST and the Topology Module
- Configurable Parameters: Define adjustable network parameters
- Neuron Models: Define the neuron models needed by the network model
- Populations: Create Populations
- Synapse models: Define the synapse models used in the network model
- Connections: Create Connections
- Example simulation: Perform a small simulation for illustration. This section also discusses the setup for recording.
Philosophy¶
A network model has two essential components: populations and projections. We first use NEST’s CopyModel() mechanism to create specific models for all populations and subpopulations in the network, and then create the populations using the Topology module’s CreateLayer() function.
We use a two-stage process to create the connections, mainly because the same configurations are required for a number of projections: we first define dictionaries specifying the connections, then apply these dictionaries later.
The way in which we declare the network model here is an example. You should not consider it the last word: we expect to see significant development in strategies and tools for network descriptions in the future. The following contributions to CNS*09 seem particularly interesting:
Ralf Ansorg & Lars Schwabe. Declarative model description and code generation for hybrid individual- and population-based simulations of the early visual system (P57);
Sharon Crook, R. Angus Silver, & Padraig Gleeson. Describing and exchanging models of neurons and neuronal networks with NeuroML (F1);
as well as the following paper, which will appear in PLoS Computational Biology shortly:
Eilen Nordlie, Marc-Oliver Gewaltig, & Hans Ekkehard Plesser.
Towards reproducible descriptions of neuronal network models.
Preparations¶
Please make sure that your PYTHONPATH is set correctly, so that Python can find the NEST Python module.

Note: By default, the script does not show any graphics. Set SHOW_FIGURES to True to activate graphics.

This example uses the function GetLeaves, which is deprecated. A deprecation warning is therefore issued. For details about deprecated functions, see the documentation.
import pylab

SHOW_FIGURES = False

if not SHOW_FIGURES:
    pylab_show = pylab.show

    def nop(s=None):
        pass

    pylab.show = nop
else:
    pylab.ion()
# ! Introduction
# !=============
# ! This tutorial gives a brief case-based introduction to the NEST
# ! Topology module. It is by no means complete.
# ! Load pynest
import nest
# ! Load NEST Topology module (NEST 2.2)
import nest.topology as topo
# ! Make sure we start with a clean slate, even if we re-run the script
# ! in the same Python session.
nest.ResetKernel()
# ! Import math, we need Pi
import math
# ! Configurable Parameters
# ! =======================
# !
# ! Here we define those parameters that we take to be
# ! configurable. The choice of configurable parameters is obviously
# ! arbitrary, and in practice one would have far more configurable
# ! parameters. We restrict ourselves to:
# !
# ! - Network size in neurons ``N``, each layer is ``N x N``.
# ! - Network size in subtended visual angle ``visSize``, in degree.
# ! - Temporal frequency of drifting grating input ``f_dg``, in Hz.
# ! - Spatial wavelength and direction of drifting grating input,
# ! ``lambda_dg`` and ``phi_dg``, in degree/radian.
# ! - Background firing rate of retinal nodes and modulation amplitude,
# ! ``retDC`` and ``retAC``, in Hz.
# ! - Simulation duration ``simtime``; actual simulation is split into
# ! intervals of ``sim_interval`` length, so that the network state
# ! can be visualized in those intervals. Times are in ms.
Params = {'N': 40,
          'visSize': 8.0,
          'f_dg': 2.0,
          'lambda_dg': 2.0,
          'phi_dg': 0.0,
          'retDC': 30.0,
          'retAC': 30.0,
          'simtime': 100.0,
          'sim_interval': 5.0
          }
# ! Neuron Models
# ! =============
# !
# ! We declare models in two steps:
# !
# ! 1. We define a dictionary specifying the NEST neuron model to use
# ! as well as the parameters for that model.
# ! #. We create three copies of this dictionary with parameters
# ! adjusted to the three model variants specified in Table~2 of
# ! Hill & Tononi (2005) (cortical excitatory, cortical inhibitory,
# ! thalamic)
# !
# ! In addition, we declare the models for the stimulation and
# ! recording devices.
# !
# ! The general neuron model
# ! ------------------------
# !
# ! We use the ``iaf_cond_alpha`` neuron, which is an
# ! integrate-and-fire neuron with two conductance-based synapses which
# ! have alpha-function time course. Any input with positive weights
# ! will automatically be directed to the synapse labeled ``_ex``, any
# ! with negative weights to the synapse labeled ``_in``. We define
# ! **all** parameters explicitly here, so that no information is
# ! hidden in the model definition in NEST. ``V_m`` is the membrane
# ! potential to which the model neurons will be initialized.
# ! The model equations and parameters for the Hill-Tononi neuron model
# ! are given on pp. 1677f and Tables 2 and 3 in that paper. Note some
# ! peculiarities and adjustments:
# !
# ! - Hill & Tononi specify their model in terms of the membrane time
# ! constant, while the ``iaf_cond_alpha`` model is based on the
# ! membrane capacitance. Interestingly, conductances are unitless in
# ! the H&T model. We thus can use the time constant directly as
# ! membrane capacitance.
# ! - The model includes sodium and potassium leak conductances. We
# ! combine these into a single one as follows:
# $ \begin{equation}
# $ -g_{NaL}(V-E_{Na}) - g_{KL}(V-E_K)
# $ = -(g_{NaL}+g_{KL})\left(V-\frac{g_{NaL}E_{Na}+g_{KL}E_K}{g_{NaL}+g_{KL}}\right)
# $ \end{equation}
# ! - We write the resulting expressions for g_L and E_L explicitly
# ! below, to avoid errors in copying from our pocket calculator.
# ! - The paper gives a range of 1.0-1.85 for g_{KL}, we choose 1.5
# ! here.
# ! - The Hill-Tononi model has no explicit reset or refractory
# ! time. We arbitrarily set V_reset and t_ref.
# ! - The paper uses double exponential time courses for the synaptic
# ! conductances, with separate time constants for the rising and
# ! falling flanks. Alpha functions have only a single time
# ! constant: we use twice the rising time constant given by Hill and
# ! Tononi.
# ! - In the general model below, we use the values for the cortical
# ! excitatory cells as defaults. Values will then be adapted below.
# !
nest.CopyModel('iaf_cond_alpha', 'NeuronModel',
               params={'C_m': 16.0,
                       'E_L': (0.2 * 30.0 + 1.5 * -90.0) / (0.2 + 1.5),
                       'g_L': 0.2 + 1.5,
                       'E_ex': 0.0,
                       'E_in': -70.0,
                       'V_reset': -60.0,
                       'V_th': -51.0,
                       't_ref': 2.0,
                       'tau_syn_ex': 1.0,
                       'tau_syn_in': 2.0,
                       'I_e': 0.0,
                       'V_m': -70.0})
# ! Adaptation of models for different populations
# ! ----------------------------------------------
# ! We must copy the `NeuronModel` dictionary explicitly, otherwise
# ! Python would just create a reference.
# ! Cortical excitatory cells
# ! .........................
# ! Parameters are the same as above, so we need not adapt anything
nest.CopyModel('NeuronModel', 'CtxExNeuron')
# ! Cortical inhibitory cells
# ! .........................
nest.CopyModel('NeuronModel', 'CtxInNeuron',
               params={'C_m': 8.0,
                       'V_th': -53.0,
                       't_ref': 1.0})
# ! Thalamic cells
# ! ..............
nest.CopyModel('NeuronModel', 'ThalamicNeuron',
               params={'C_m': 8.0,
                       'V_th': -53.0,
                       't_ref': 1.0,
                       'E_in': -80.0})
# ! Input generating nodes
# ! ----------------------
# ! Input is generated by sinusoidally modulated Poisson generators,
# ! organized in a square layer of retina nodes. These nodes require a
# ! slightly more complicated initialization than all other elements of
# ! the network:
# !
# ! - Average firing rate ``rate``, firing rate modulation depth ``amplitude``,
# ! and temporal modulation frequency ``frequency`` are the same for all
# ! retinal nodes and are set directly below.
# ! - The temporal phase ``phase`` of each node depends on its position in
# ! the grating and can only be assigned after the retinal layer has
# ! been created. We therefore specify a function for initializing the
# ! ``phase``. This function will be called for each node.
def phaseInit(pos, lam, alpha):
    '''Initializer function for phase of drifting grating nodes.

    pos  : position (x,y) of node, in degree
    lam  : wavelength of grating, in degree
    alpha: angle of grating in radian, zero is horizontal

    Returns number to be used as phase of sinusoidal Poisson generator.
    '''
    return 360.0 / lam * (math.cos(alpha) * pos[0] + math.sin(alpha) * pos[1])
nest.CopyModel('sinusoidal_poisson_generator', 'RetinaNode',
               params={'amplitude': Params['retAC'],
                       'rate': Params['retDC'],
                       'frequency': Params['f_dg'],
                       'phase': 0.0,
                       'individual_spike_trains': False})
# ! Recording nodes
# ! ---------------
# ! We use the new ``multimeter`` device for recording from the model
# ! neurons. At present, ``iaf_cond_alpha`` is one of few models
# ! supporting ``multimeter`` recording. Support for more models will
# ! be added soon; until then, you need to use ``voltmeter`` to record
# ! from other models.
# !
# ! We configure the multimeter to record the membrane potential at
# ! certain intervals to memory only. We record the GID of
# ! the recorded neurons, but not the time.
nest.CopyModel('multimeter', 'RecordingNode',
               params={'interval': Params['sim_interval'],
                       'record_from': ['V_m'],
                       'record_to': ['memory'],
                       'withgid': True,
                       'withtime': False})
# ! Populations
# ! ===========
# ! We now create the neuron populations in the model, again in the
# ! form of Python dictionaries. We define them in order from eye via
# ! thalamus to cortex.
# !
# ! We first define a dictionary defining common properties for all
# ! populations
layerProps = {'rows': Params['N'],
              'columns': Params['N'],
              'extent': [Params['visSize'], Params['visSize']],
              'edge_wrap': True}
# ! This dictionary does not yet specify the elements to put into the
# ! layer, since they will differ from layer to layer. We will add them
# ! below by updating the ``'elements'`` dictionary entry for each
# ! population.
# ! Retina
# ! ------
layerProps.update({'elements': 'RetinaNode'})
retina = topo.CreateLayer(layerProps)
# retina_leaves is a work-around until NEST 3.0 is released
retina_leaves = nest.GetLeaves(retina)[0]
# ! Now set phases of retinal oscillators; we use a list comprehension instead
# ! of a loop.
[nest.SetStatus([n], {"phase": phaseInit(topo.GetPosition([n])[0],
                                         Params["lambda_dg"],
                                         Params["phi_dg"])})
 for n in retina_leaves]
# ! Thalamus
# ! --------
# ! We first introduce specific neuron models for the thalamic relay
# ! cells and interneurons. These have identical properties, but by
# ! treating them as different models, we can address them specifically
# ! when building connections.
# !
# ! We use a list comprehension to do the model copies.
[nest.CopyModel('ThalamicNeuron', SpecificModel) for SpecificModel in
('TpRelay', 'TpInter')]
# ! Now we can create the layer, with one relay cell and one
# ! interneuron per location:
layerProps.update({'elements': ['TpRelay', 'TpInter']})
Tp = topo.CreateLayer(layerProps)
# ! Reticular nucleus
# ! -----------------
# ! We follow the same approach as above, even though we have only a
# ! single neuron in each location.
[nest.CopyModel('ThalamicNeuron', SpecificModel) for SpecificModel in
('RpNeuron',)]
layerProps.update({'elements': 'RpNeuron'})
Rp = topo.CreateLayer(layerProps)
# ! Primary visual cortex
# ! ---------------------
# ! We follow again the same approach. We differentiate neuron types
# ! between layers and between pyramidal cells and interneurons. At
# ! each location, there are two pyramidal cells and one interneuron in
# ! each of layers 2-3, 4, and 5-6. Finally, we need to differentiate
# ! between vertically and horizontally tuned populations. When creating
# ! the populations, we create the vertically and the horizontally
# ! tuned populations as separate populations.
# ! We use list comprehensions to create all neuron types:
[nest.CopyModel('CtxExNeuron', layer + 'pyr')
for layer in ('L23', 'L4', 'L56')]
[nest.CopyModel('CtxInNeuron', layer + 'in')
for layer in ('L23', 'L4', 'L56')]
# ! Now we can create the populations, suffixes h and v indicate tuning
layerProps.update({'elements': ['L23pyr', 2, 'L23in', 1,
                                'L4pyr', 2, 'L4in', 1,
                                'L56pyr', 2, 'L56in', 1]})
Vp_h = topo.CreateLayer(layerProps)
Vp_v = topo.CreateLayer(layerProps)
# ! Collect all populations
# ! -----------------------
# ! For reference purposes, e.g., printing, we collect all populations
# ! in a tuple:
populations = (retina, Tp, Rp, Vp_h, Vp_v)
# ! Inspection
# ! ----------
# ! We can now look at the network using `PrintNetwork`:
nest.PrintNetwork()
# ! We can also try to plot a single layer in a network. For
# ! simplicity, we use Rp, which has only a single neuron per position.
topo.PlotLayer(Rp)
pylab.title('Layer Rp')
pylab.show()
# ! Synapse models
# ! ==============
# ! Actual synapse dynamics, e.g., properties such as the synaptic time
# ! course, time constants, reversal potentials, are properties of
# ! neuron models in NEST and we set them in section `Neuron models`_
# ! above. When we refer to *synapse models* in NEST, we actually mean
# ! connectors which store information about connection weights and
# ! delays, as well as port numbers at the target neuron (``rport``)
# ! and implement synaptic plasticity. The latter two aspects are not
# ! relevant here.
# !
# ! We just use NEST's ``static_synapse`` connector but copy it to
# ! synapse models ``AMPA`` and ``GABA_A`` for the sake of
# ! explicitness. Weights and delays are set as needed in section
# ! `Connections`_ below, as they are different from projection to
# ! projection. De facto, the sign of the synaptic weight decides
# ! whether input via a connection is handled by the ``_ex`` or the
# ! ``_in`` synapse.
nest.CopyModel('static_synapse', 'AMPA')
nest.CopyModel('static_synapse', 'GABA_A')
# ! Connections
# ! ====================
# ! Building connections is the most complex part of network
# ! construction. Connections are specified in Table 1 in the
# ! Hill-Tononi paper. As pointed out above, we only consider AMPA and
# ! GABA_A synapses here. Adding other synapses is tedious work, but
# ! should pose no new principal challenges. We also use a uniform
# ! instead of a Gaussian distribution for the weights.
# !
# ! The model has two identical primary visual cortex populations,
# ! ``Vp_v`` and ``Vp_h``, tuned to vertical and horizontal gratings,
# ! respectively. The *only* difference in the connection patterns
# ! between the two populations is that the thalamocortical input to
# ! layers L4 and L5-6 comes from a population of 8x2 and 2x8 grid
# ! locations, respectively. Furthermore, inhibitory connections in
# ! cortex go to the opposing orientation population as well as to
# ! their own.
# !
# ! To save us a lot of code doubling, we thus define property
# ! dictionaries for all connections first and then use them to connect
# ! both populations. We follow the subdivision of connections as in
# ! the Hill & Tononi paper.
# !
# ! **Note:** Hill & Tononi state that their model spans 8 degrees of
# ! visual angle and stimuli are specified according to this. On the
# ! other hand, all connection patterns are defined in terms of cell
# ! grid positions. Since the NEST Topology Module defines connection
# ! patterns in terms of the extent given in degrees, we need to apply
# ! the following scaling factor to all lengths in connections:
dpc = Params['visSize'] / (Params['N'] - 1)
# ! We will collect all same-orientation cortico-cortical connections in
ccConnections = []
# ! the cross-orientation cortico-cortical connections in
ccxConnections = []
# ! and all cortico-thalamic connections in
ctConnections = []
# ! Horizontal intralaminar
# ! -----------------------
# ! *Note:* "Horizontal" means "within the same cortical layer" in this
# ! case.
# !
# ! We first define a dictionary with the (most) common properties for
# ! horizontal intralaminar connections. We then create copies in which
# ! we adapt those values that need adapting.
horIntraBase = {"connection_type": "divergent",
"synapse_model": "AMPA",
"mask": {"circular": {"radius": 12.0 * dpc}},
"kernel": {"gaussian": {"p_center": 0.05, "sigma": 7.5 * dpc}},
"weights": 1.0,
"delays": {"uniform": {"min": 1.75, "max": 2.25}}}
# ! We use a loop to do the work for us. The loop runs over a list of
# ! dictionaries with all values that need updating.
for conn in [{"sources": {"model": "L23pyr"}, "targets": {"model": "L23pyr"}},
{"sources": {"model": "L23pyr"}, "targets": {"model": "L23in"}},
{"sources": {"model": "L4pyr"}, "targets": {"model": "L4pyr"},
"mask": {"circular": {"radius": 7.0 * dpc}}},
{"sources": {"model": "L4pyr"}, "targets": {"model": "L4in"},
"mask": {"circular": {"radius": 7.0 * dpc}}},
{"sources": {"model": "L56pyr"}, "targets": {"model": "L56pyr"}},
{"sources": {"model": "L56pyr"}, "targets": {"model": "L56in"}}]:
ndict = horIntraBase.copy()
ndict.update(conn)
ccConnections.append(ndict)
# ! Vertical intralaminar
# ! -----------------------
# ! *Note:* "Vertical" means "between cortical layers" in this
# ! case.
# !
# ! We proceed as above.
verIntraBase = {"connection_type": "divergent",
"synapse_model": "AMPA",
"mask": {"circular": {"radius": 2.0 * dpc}},
"kernel": {"gaussian": {"p_center": 1.0, "sigma": 7.5 * dpc}},
"weights": 2.0,
"delays": {"uniform": {"min": 1.75, "max": 2.25}}}
for conn in [{"sources": {"model": "L23pyr"}, "targets": {"model": "L56pyr"},
"weights": 1.0},
{"sources": {"model": "L23pyr"}, "targets": {"model": "L23in"},
"weights": 1.0},
{"sources": {"model": "L4pyr"}, "targets": {"model": "L23pyr"}},
{"sources": {"model": "L4pyr"}, "targets": {"model": "L23in"}},
{"sources": {"model": "L56pyr"}, "targets": {"model": "L23pyr"}},
{"sources": {"model": "L56pyr"}, "targets": {"model": "L23in"}},
{"sources": {"model": "L56pyr"}, "targets": {"model": "L4pyr"}},
{"sources": {"model": "L56pyr"}, "targets": {"model": "L4in"}}]:
ndict = verIntraBase.copy()
ndict.update(conn)
ccConnections.append(ndict)
# ! Intracortical inhibitory
# ! ------------------------
# !
# ! We proceed as above, with the following difference: each connection
# ! is added to the same-orientation and the cross-orientation list of
# ! connections.
# !
# ! **Note:** Weights increased from -1.0 to -2.0, to make up for missing GabaB
# !
# ! Note that we have to specify the **weight with negative sign** to make
# ! the connections inhibitory.
intraInhBase = {"connection_type": "divergent",
"synapse_model": "GABA_A",
"mask": {"circular": {"radius": 7.0 * dpc}},
"kernel": {"gaussian": {"p_center": 0.25, "sigma": 7.5 * dpc}},
"weights": -2.0,
"delays": {"uniform": {"min": 1.75, "max": 2.25}}}
# ! We use a loop to do the work for us. The loop runs over a list of
# ! dictionaries with all values that need updating.
for conn in [{"sources": {"model": "L23in"}, "targets": {"model": "L23pyr"}},
{"sources": {"model": "L23in"}, "targets": {"model": "L23in"}},
{"sources": {"model": "L4in"}, "targets": {"model": "L4pyr"}},
{"sources": {"model": "L4in"}, "targets": {"model": "L4in"}},
{"sources": {"model": "L56in"}, "targets": {"model": "L56pyr"}},
{"sources": {"model": "L56in"}, "targets": {"model": "L56in"}}]:
ndict = intraInhBase.copy()
ndict.update(conn)
ccConnections.append(ndict)
ccxConnections.append(ndict)
# ! Corticothalamic
# ! ---------------
corThalBase = {"connection_type": "divergent",
"synapse_model": "AMPA",
"mask": {"circular": {"radius": 5.0 * dpc}},
"kernel": {"gaussian": {"p_center": 0.5, "sigma": 7.5 * dpc}},
"weights": 1.0,
"delays": {"uniform": {"min": 7.5, "max": 8.5}}}
# ! We use a loop to do the work for us. The loop runs over a list of
# ! dictionaries with all values that need updating.
for conn in [{"sources": {"model": "L56pyr"},
"targets": {"model": "TpRelay"}},
{"sources": {"model": "L56pyr"},
"targets": {"model": "TpInter"}}]:
ndict = intraInhBase.copy()
ndict.update(conn)
ctConnections.append(ndict)
# ! Corticoreticular
# ! ----------------
# ! In this case, there is only a single connection, so we write the
# ! dictionary itself; it is very similar to the corThalBase, and to
# ! show that, we copy first, then update. We need no ``targets`` entry,
# ! since Rp has only one neuron per location.
corRet = corThalBase.copy()
corRet.update({"sources": {"model": "L56pyr"}, "weights": 2.5})
# ! Build all connections beginning in cortex
# ! -----------------------------------------
# ! Cortico-cortical, same orientation
print("Connecting: cortico-cortical, same orientation")
[topo.ConnectLayers(Vp_h, Vp_h, conn) for conn in ccConnections]
[topo.ConnectLayers(Vp_v, Vp_v, conn) for conn in ccConnections]
# ! Cortico-cortical, cross-orientation
print("Connecting: cortico-cortical, other orientation")
[topo.ConnectLayers(Vp_h, Vp_v, conn) for conn in ccxConnections]
[topo.ConnectLayers(Vp_v, Vp_h, conn) for conn in ccxConnections]
# ! Cortico-thalamic connections
print("Connecting: cortico-thalamic")
[topo.ConnectLayers(Vp_h, Tp, conn) for conn in ctConnections]
[topo.ConnectLayers(Vp_v, Tp, conn) for conn in ctConnections]
topo.ConnectLayers(Vp_h, Rp, corRet)
topo.ConnectLayers(Vp_v, Rp, corRet)
# ! Thalamo-cortical connections
# ! ----------------------------
# ! **Note:** According to the text on p. 1674, bottom right, of
# ! the Hill & Tononi paper, thalamocortical connections are
# ! created by selecting from the thalamic population for each
# ! L4 pyramidal cell, i.e., are *convergent* connections.
# !
# ! We first handle the rectangular thalamocortical connections.
thalCorRect = {"connection_type": "convergent",
"sources": {"model": "TpRelay"},
"synapse_model": "AMPA",
"weights": 5.0,
"delays": {"uniform": {"min": 2.75, "max": 3.25}}}
print("Connecting: thalamo-cortical")
# ! Horizontally tuned
thalCorRect.update(
    {"mask": {"rectangular": {"lower_left": [-4.0 * dpc, -1.0 * dpc],
                              "upper_right": [4.0 * dpc, 1.0 * dpc]}}})
for conn in [{"targets": {"model": "L4pyr"}, "kernel": 0.5},
             {"targets": {"model": "L56pyr"}, "kernel": 0.3}]:
    thalCorRect.update(conn)
    topo.ConnectLayers(Tp, Vp_h, thalCorRect)
# ! Vertically tuned
thalCorRect.update(
    {"mask": {"rectangular": {"lower_left": [-1.0 * dpc, -4.0 * dpc],
                              "upper_right": [1.0 * dpc, 4.0 * dpc]}}})
for conn in [{"targets": {"model": "L4pyr"}, "kernel": 0.5},
             {"targets": {"model": "L56pyr"}, "kernel": 0.3}]:
    thalCorRect.update(conn)
    topo.ConnectLayers(Tp, Vp_v, thalCorRect)
# ! Diffuse connections
thalCorDiff = {"connection_type": "convergent",
"sources": {"model": "TpRelay"},
"synapse_model": "AMPA",
"weights": 5.0,
"mask": {"circular": {"radius": 5.0 * dpc}},
"kernel": {"gaussian": {"p_center": 0.1, "sigma": 7.5 * dpc}},
"delays": {"uniform": {"min": 2.75, "max": 3.25}}}
for conn in [{"targets": {"model": "L4pyr"}},
{"targets": {"model": "L56pyr"}}]:
thalCorDiff.update(conn)
topo.ConnectLayers(Tp, Vp_h, thalCorDiff)
topo.ConnectLayers(Tp, Vp_v, thalCorDiff)
# ! Thalamic connections
# ! --------------------
# ! Connections inside thalamus, including Rp
# !
# ! *Note:* In Hill & Tononi, the inhibition between Rp cells is mediated by
# ! GABA_B receptors. We use GABA_A receptors here to provide some
# ! self-dampening of Rp.
# !
# ! **Note:** The following code had a serious bug in v. 0.1: During the first
# ! iteration of the loop, "synapse_model" and "weights" were set to "AMPA" and
# ! "0.1", respectively and remained unchanged, so that all connections were
# ! created as excitatory connections, even though they should have been
# ! inhibitory. We now specify synapse_model and weight explicitly for each
# ! connection to avoid this.
thalBase = {"connection_type": "divergent",
"delays": {"uniform": {"min": 1.75, "max": 2.25}}}
print("Connecting: intra-thalamic")
for src, tgt, conn in [(Tp, Rp, {"sources": {"model": "TpRelay"},
                                 "synapse_model": "AMPA",
                                 "mask": {"circular": {"radius": 2.0 * dpc}},
                                 "kernel": {"gaussian": {"p_center": 1.0,
                                                         "sigma": 7.5 * dpc}},
                                 "weights": 2.0}),
                       (Tp, Tp, {"sources": {"model": "TpInter"},
                                 "targets": {"model": "TpRelay"},
                                 "synapse_model": "GABA_A",
                                 "weights": -1.0,
                                 "mask": {"circular": {"radius": 2.0 * dpc}},
                                 "kernel": {"gaussian": {"p_center": 0.25,
                                                         "sigma": 7.5 * dpc}}}),
                       (Tp, Tp, {"sources": {"model": "TpInter"},
                                 "targets": {"model": "TpInter"},
                                 "synapse_model": "GABA_A",
                                 "weights": -1.0,
                                 "mask": {"circular": {"radius": 2.0 * dpc}},
                                 "kernel": {"gaussian": {"p_center": 0.25,
                                                         "sigma": 7.5 * dpc}}}),
                       (Rp, Tp, {"targets": {"model": "TpRelay"},
                                 "synapse_model": "GABA_A",
                                 "weights": -1.0,
                                 "mask": {"circular": {"radius": 12.0 * dpc}},
                                 "kernel": {"gaussian": {"p_center": 0.15,
                                                         "sigma": 7.5 * dpc}}}),
                       (Rp, Tp, {"targets": {"model": "TpInter"},
                                 "synapse_model": "GABA_A",
                                 "weights": -1.0,
                                 "mask": {"circular": {"radius": 12.0 * dpc}},
                                 "kernel": {"gaussian": {"p_center": 0.15,
                                                         "sigma": 7.5 * dpc}}}),
                       (Rp, Rp, {"targets": {"model": "RpNeuron"},
                                 "synapse_model": "GABA_A",
                                 "weights": -1.0,
                                 "mask": {"circular": {"radius": 12.0 * dpc}},
                                 "kernel": {"gaussian": {"p_center": 0.5,
                                                         "sigma": 7.5 * dpc}}})]:
    thalBase.update(conn)
    topo.ConnectLayers(src, tgt, thalBase)
# ! Thalamic input
# ! --------------
# ! Input to the thalamus from the retina.
# !
# ! **Note:** Hill & Tononi specify a delay of 0 ms for this connection.
# ! We use 1 ms here.
retThal = {"connection_type": "divergent",
"synapse_model": "AMPA",
"mask": {"circular": {"radius": 1.0 * dpc}},
"kernel": {"gaussian": {"p_center": 0.75, "sigma": 2.5 * dpc}},
"weights": 10.0,
"delays": 1.0}
print("Connecting: retino-thalamic")
for conn in [{"targets": {"model": "TpRelay"}},
{"targets": {"model": "TpInter"}}]:
retThal.update(conn)
topo.ConnectLayers(retina, Tp, retThal)
# ! Checks on connections
# ! ---------------------
# ! As a very simple check on the connections created, we inspect
# ! the connections from the central node of various layers.
# ! Connections from Retina to TpRelay
topo.PlotTargets(topo.FindCenterElement(retina), Tp, 'TpRelay', 'AMPA')
pylab.title('Connections Retina -> TpRelay')
pylab.show()
# ! Connections from TpRelay to L4pyr in Vp (horizontally tuned)
topo.PlotTargets(topo.FindCenterElement(Tp), Vp_h, 'L4pyr', 'AMPA')
pylab.title('Connections TpRelay -> Vp(h) L4pyr')
pylab.show()
# ! Connections from TpRelay to L4pyr in Vp (vertically tuned)
topo.PlotTargets(topo.FindCenterElement(Tp), Vp_v, 'L4pyr', 'AMPA')
pylab.title('Connections TpRelay -> Vp(v) L4pyr')
pylab.show()
# ! Recording devices
# ! =================
# ! This recording device setup is a bit makeshift. For each population
# ! we want to record from, we create one ``multimeter``, then select
# ! all nodes of the right model from the target population and
# ! connect. ``loc`` is the subplot location for the layer.
print("Connecting: Recording devices")
recorders = {}
for name, loc, population, model in [('TpRelay', 1, Tp, 'TpRelay'),
('Rp', 2, Rp, 'RpNeuron'),
('Vp_v L4pyr', 3, Vp_v, 'L4pyr'),
('Vp_h L4pyr', 4, Vp_h, 'L4pyr')]:
recorders[name] = (nest.Create('RecordingNode'), loc)
# population_leaves is a work-around until NEST 3.0 is released
population_leaves = nest.GetLeaves(population)[0]
tgts = [nd for nd in population_leaves
if nest.GetStatus([nd], 'model')[0] == model]
nest.Connect(recorders[name][0], tgts) # one recorder to all targets
# ! Example simulation
# ! ====================
# ! This simulation is set up to create a step-wise visualization of
# ! the membrane potential. To do so, we simulate ``sim_interval``
# ! milliseconds at a time, then read out data from the multimeters,
# ! clear data from the multimeters and plot the data as pseudocolor
# ! plots.
# ! show time during simulation
nest.SetStatus([0], {'print_time': True})
# ! lower and upper limits for color scale, for each of the four
# ! populations recorded.
vmn = [-80, -80, -80, -80]
vmx = [-50, -50, -50, -50]
nest.Simulate(Params['sim_interval'])
# ! loop over simulation intervals
for t in pylab.arange(Params['sim_interval'], Params['simtime'],
Params['sim_interval']):
# do the simulation
nest.Simulate(Params['sim_interval'])
# clear figure and choose colormap
pylab.clf()
pylab.jet()
# now plot data from each recorder in turn, assume four recorders
for name, r in recorders.items():
rec = r[0]
sp = r[1]
pylab.subplot(2, 2, sp)
d = nest.GetStatus(rec)[0]['events']['V_m']
if len(d) != Params['N'] ** 2:
# cortical layer with two neurons in each location, take average
d = 0.5 * (d[::2] + d[1::2])
# clear data from multimeter
nest.SetStatus(rec, {'n_events': 0})
pylab.imshow(pylab.reshape(d, (Params['N'], Params['N'])),
aspect='equal', interpolation='nearest',
extent=(0, Params['N'] + 1, 0, Params['N'] + 1),
vmin=vmn[sp - 1], vmax=vmx[sp - 1])
pylab.colorbar()
pylab.title(name + ', t = %6.1f ms' % nest.GetKernelStatus()['time'])
pylab.draw() # force drawing inside loop
pylab.show() # required by ``pyreport``
# ! just for some information at the end
print(nest.GetKernelStatus())
Examples using Topology¶
Create two layers with one pyramidal cell and one interneuron¶
Create two 30x30 layers with nodes composed of one pyramidal cell and one interneuron. Connect with two projections, one pyr->pyr, one pyr->in, and visualize.
BCCN Tutorial @ CNS*09 Hans Ekkehard Plesser, UMB
import nest
import nest.topology as topo
import pylab
pylab.ion()
nest.ResetKernel()
nest.set_verbosity('M_WARNING')
# create two test layers
nest.CopyModel('iaf_psc_alpha', 'pyr')
nest.CopyModel('iaf_psc_alpha', 'in')
a = topo.CreateLayer({'columns': 30, 'rows': 30, 'extent': [3.0, 3.0],
'elements': ['pyr', 'in']})
b = topo.CreateLayer({'columns': 30, 'rows': 30, 'extent': [3.0, 3.0],
'elements': ['pyr', 'in']})
topo.ConnectLayers(a, b, {'connection_type': 'divergent',
'sources': {'model': 'pyr'},
'targets': {'model': 'pyr'},
'mask': {'circular': {'radius': 0.5}},
'kernel': 0.5,
'weights': 1.0,
'delays': 1.0})
topo.ConnectLayers(a, b, {'connection_type': 'divergent',
'sources': {'model': 'pyr'},
'targets': {'model': 'in'},
'mask': {'circular': {'radius': 1.0}},
'kernel': 0.2,
'weights': 1.0,
'delays': 1.0})
pylab.clf()
# plot targets of neurons in different grid locations
for ctr in [[15, 15]]:
# obtain node id for center: pick first node of composite
ctr_id = topo.GetElement(a, ctr)
# get all projection targets of center neuron
tgts = [ci[1] for ci in nest.GetConnections(ctr_id)]
# get positions of targets
tpyr = pylab.array(tuple(zip(*[topo.GetPosition([n])[0] for n in tgts
                               if nest.GetStatus([n], 'model')[0] == 'pyr'])))
tin = pylab.array(tuple(zip(*[topo.GetPosition([n])[0] for n in tgts
                              if nest.GetStatus([n], 'model')[0] == 'in'])))
# scatter-plot
pylab.scatter(tpyr[0] - 0.02, tpyr[1] - 0.02, 20, 'b', zorder=10)
pylab.scatter(tin[0] + 0.02, tin[1] + 0.02, 20, 'r', zorder=10)
# mark locations with background grey circle
pylab.plot(tpyr[0], tpyr[1], 'o', markerfacecolor=(0.7, 0.7, 0.7),
markersize=10, markeredgewidth=0, zorder=1, label='_nolegend_')
pylab.plot(tin[0], tin[1], 'o', markerfacecolor=(0.7, 0.7, 0.7),
markersize=10, markeredgewidth=0, zorder=1, label='_nolegend_')
# mark sender position with transparent red circle
ctrpos = topo.GetPosition(ctr_id)[0]
pylab.gca().add_patch(pylab.Circle(ctrpos, radius=0.15, zorder=99,
fc='r', alpha=0.4, ec='none'))
# mark mask positions with open red/blue circles
pylab.gca().add_patch(pylab.Circle(ctrpos, radius=0.5, zorder=2,
fc='none', ec='b', lw=3))
pylab.gca().add_patch(pylab.Circle(ctrpos, radius=1.0, zorder=2,
fc='none', ec='r', lw=3))
# mark layer edge
pylab.gca().add_patch(pylab.Rectangle((-1.5, -1.5), 3.0, 3.0, zorder=1,
fc='none', ec='k', lw=3))
# beautify
pylab.axes().set_xticks(pylab.arange(-1.5, 1.55, 0.5))
pylab.axes().set_yticks(pylab.arange(-1.5, 1.55, 0.5))
pylab.grid(True)
pylab.axis([-1.6, 1.6, -1.6, 1.6])
pylab.axes().set_aspect('equal', 'box')
Create two layers of iaf_psc_alpha neurons from target perspective¶
Create two 30x30 layers of iaf_psc_alpha neurons and connect with convergent projection and rectangular mask, visualize connection from target perspective.
BCCN Tutorial @ CNS*09 Hans Ekkehard Plesser, UMB
import nest
import nest.topology as topo
import pylab
pylab.ion()
nest.ResetKernel()
nest.set_verbosity('M_WARNING')
# create two test layers
a = topo.CreateLayer({'columns': 30, 'rows': 30, 'extent': [3.0, 3.0],
'elements': 'iaf_psc_alpha', 'edge_wrap': True})
b = topo.CreateLayer({'columns': 30, 'rows': 30, 'extent': [3.0, 3.0],
'elements': 'iaf_psc_alpha', 'edge_wrap': True})
topo.ConnectLayers(a, b, {'connection_type': 'convergent',
'mask': {'rectangular': {'lower_left': [-0.2, -0.5],
'upper_right': [0.2, 0.5]}},
'kernel': 0.5,
'weights': {'uniform': {'min': 0.5, 'max': 2.0}},
'delays': 1.0})
pylab.clf()
# plot sources of neurons in different grid locations
for tgt_pos in [[15, 15], [0, 0]]:
# obtain node id for center
tgt = topo.GetElement(b, tgt_pos)
# obtain list of outgoing connections for ctr
# int() required to cast numpy.int64
spos = tuple(zip(*[topo.GetPosition([int(conn[0])])[0] for conn in
nest.GetConnections(target=tgt)]))
# scatter-plot
pylab.scatter(spos[0], spos[1], 20, zorder=10)
# mark sender position with transparent red circle
ctrpos = pylab.array(topo.GetPosition(tgt)[0])
pylab.gca().add_patch(pylab.Circle(ctrpos, radius=0.1, zorder=99,
fc='r', alpha=0.4, ec='none'))
# mark mask position with open red rectangle
pylab.gca().add_patch(
pylab.Rectangle(ctrpos - (0.2, 0.5), 0.4, 1.0, zorder=1,
fc='none', ec='r', lw=3))
# mark layer edge
pylab.gca().add_patch(pylab.Rectangle((-1.5, -1.5), 3.0, 3.0, zorder=1,
fc='none', ec='k', lw=3))
# beautify
pylab.axes().set_xticks(pylab.arange(-1.5, 1.55, 0.5))
pylab.axes().set_yticks(pylab.arange(-1.5, 1.55, 0.5))
pylab.grid(True)
pylab.axis([-2.0, 2.0, -2.0, 2.0])
pylab.axes().set_aspect('equal', 'box')
pylab.title('Connection sources')
Create two layers of iaf_psc_alpha neurons from source perspective¶
Create two 30x30 layers of iaf_psc_alpha neurons and connect with convergent projection and rectangular mask, visualize connections from source perspective.
BCCN Tutorial @ CNS*09 Hans Ekkehard Plesser, UMB
import pylab
import nest
import nest.topology as topo
pylab.ion()
nest.ResetKernel()
# create two test layers
a = topo.CreateLayer({'columns': 30, 'rows': 30, 'extent': [3.0, 3.0],
'elements': 'iaf_psc_alpha', 'edge_wrap': True})
b = topo.CreateLayer({'columns': 30, 'rows': 30, 'extent': [3.0, 3.0],
'elements': 'iaf_psc_alpha', 'edge_wrap': True})
conndict = {'connection_type': 'convergent',
'mask': {'rectangular': {'lower_left': [-0.2, -0.5],
'upper_right': [0.2, 0.5]}},
'kernel': 0.5,
'weights': {'uniform': {'min': 0.5, 'max': 2.0}},
'delays': 1.0}
topo.ConnectLayers(a, b, conndict)
# first, clear existing figure, get current figure
pylab.clf()
fig = pylab.gcf()
# plot targets of two source neurons into same figure, with mask
for src_pos in [[15, 15], [0, 0]]:
# obtain node id for center
src = topo.GetElement(a, src_pos)
topo.PlotTargets(src, b, mask=conndict['mask'], fig=fig)
# beautify
pylab.axes().set_xticks(pylab.arange(-1.5, 1.55, 0.5))
pylab.axes().set_yticks(pylab.arange(-1.5, 1.55, 0.5))
pylab.grid(True)
pylab.axis([-2.0, 2.0, -2.0, 2.0])
pylab.axes().set_aspect('equal', 'box')
pylab.title('Connection targets')
# pylab.savefig('conncon_targets.pdf')
Create two 30x30 layers of iaf_psc_alpha neurons (circular mask)¶
Connect with circular mask, flat probability, visualize.
BCCN Tutorial @ CNS*09 Hans Ekkehard Plesser, UMB
import nest
import nest.topology as topo
import pylab
pylab.ion()
nest.ResetKernel()
# create two test layers
a = topo.CreateLayer({'columns': 30, 'rows': 30, 'extent': [3.0, 3.0],
'elements': 'iaf_psc_alpha'})
b = topo.CreateLayer({'columns': 30, 'rows': 30, 'extent': [3.0, 3.0],
'elements': 'iaf_psc_alpha'})
conndict = {'connection_type': 'divergent',
'mask': {'circular': {'radius': 0.5}},
'kernel': 0.5,
'weights': {'uniform': {'min': 0.5, 'max': 2.0}},
'delays': 1.0}
topo.ConnectLayers(a, b, conndict)
# plot targets of neurons in different grid locations
# first, clear existing figure, get current figure
pylab.clf()
fig = pylab.gcf()
# plot targets of two source neurons into same figure, with mask
for src_pos in [[15, 15], [0, 0]]:
# obtain node id for center
src = topo.GetElement(a, src_pos)
topo.PlotTargets(src, b, mask=conndict['mask'], fig=fig)
# beautify
pylab.axes().set_xticks(pylab.arange(-1.5, 1.55, 0.5))
pylab.axes().set_yticks(pylab.arange(-1.5, 1.55, 0.5))
pylab.grid(True)
pylab.axis([-2.0, 2.0, -2.0, 2.0])
pylab.axes().set_aspect('equal', 'box')
pylab.title('Connection targets')
# pylab.savefig('connex.pdf')
Connect layers using Gaussian probabilistic kernel¶
Create two layers of 30x30 elements and connect them using a Gaussian probabilistic kernel, visualize.
BCCN Tutorial @ CNS*09 Hans Ekkehard Plesser, UMB
import pylab
import nest
import nest.topology as topo
pylab.ion()
nest.ResetKernel()
# create two test layers
a = topo.CreateLayer({'columns': 30, 'rows': 30, 'extent': [3.0, 3.0],
'elements': 'iaf_psc_alpha'})
b = topo.CreateLayer({'columns': 30, 'rows': 30, 'extent': [3.0, 3.0],
'elements': 'iaf_psc_alpha'})
conndict = {'connection_type': 'divergent',
'mask': {'circular': {'radius': 3.0}},
'kernel': {'gaussian': {'p_center': 1.0, 'sigma': 0.5}},
'weights': 1.0,
'delays': 1.0}
topo.ConnectLayers(a, b, conndict)
# plot targets of neurons in different grid locations
# first, clear existing figure, get current figure
pylab.clf()
fig = pylab.gcf()
# plot targets of two source neurons into same figure, with mask
# use different colors
for src_pos, color in [([15, 15], 'blue'), ([0, 0], 'green')]:
# obtain node id for center
src = topo.GetElement(a, src_pos)
topo.PlotTargets(src, b, mask=conndict['mask'], kernel=conndict['kernel'],
src_color=color, tgt_color=color, mask_color=color,
kernel_color=color, src_size=100,
fig=fig)
# beautify
pylab.axes().set_xticks(pylab.arange(-1.5, 1.55, 0.5))
pylab.axes().set_yticks(pylab.arange(-1.5, 1.55, 0.5))
pylab.grid(True)
pylab.axis([-2.0, 2.0, -2.0, 2.0])
pylab.axes().set_aspect('equal', 'box')
pylab.title('Connection targets, Gaussian kernel')
# pylab.savefig('gaussex.pdf')
Create layer of 4x3 iaf_psc_alpha neurons¶
BCCN Tutorial @ CNS*09 Hans Ekkehard Plesser, UMB
import nest
import pylab
import nest.topology as topo
pylab.ion()
nest.ResetKernel()
l1 = topo.CreateLayer({'columns': 4, 'rows': 3,
'extent': [2.0, 1.5],
'elements': 'iaf_psc_alpha'})
nest.PrintNetwork()
nest.PrintNetwork(2)
nest.PrintNetwork(2, l1)
topo.PlotLayer(l1, nodesize=50)
# beautify
pylab.axis([-1.0, 1.0, -0.75, 0.75])
pylab.axes().set_aspect('equal', 'box')
pylab.axes().set_xticks((-0.75, -0.25, 0.25, 0.75))
pylab.axes().set_yticks((-0.5, 0, 0.5))
pylab.grid(True)
pylab.xlabel('4 Columns, Extent: 2.0')
pylab.ylabel('3 Rows, Extent: 1.5')
# pylab.savefig('grid_iaf.png')
Create layer of 12 freely placed iaf_psc_alpha neurons¶
BCCN Tutorial @ CNS*09 Hans Ekkehard Plesser, UMB
import nest
import pylab
import random
import nest.topology as topo
pylab.ion()
nest.ResetKernel()
# generate list of 12 (x,y) pairs
pos = [[random.uniform(-0.75, 0.75), random.uniform(-0.5, 0.5)]
for j in range(12)]
l1 = topo.CreateLayer({'extent': [2., 1.5],
'positions': pos,
'elements': 'iaf_psc_alpha'})
nest.PrintNetwork()
nest.PrintNetwork(2)
nest.PrintNetwork(2, l1)
topo.PlotLayer(l1, nodesize=50)
# beautify
pylab.axis([-1.0, 1.0, -0.75, 0.75])
pylab.axes().set_aspect('equal', 'box')
pylab.axes().set_xticks((-0.75, -0.25, 0.25, 0.75))
pylab.axes().set_yticks((-0.5, 0, 0.5))
pylab.grid(True)
pylab.xlabel('Extent: 2.0')
pylab.ylabel('Extent: 1.5')
# pylab.savefig('grid_iaf_irr.png')
Create three layers of 4x3 iaf_psc_alpha neurons, each with different center¶
BCCN Tutorial @ CNS*09 Hans Ekkehard Plesser, UMB
import pylab
import time
import nest
import nest.topology as topo
pylab.ion()
for ctr in [(0.0, 0.0), (-2.0, 2.0), (0.5, 1.0)]:
nest.ResetKernel()
pylab.clf()
l1 = topo.CreateLayer({'columns': 4, 'rows': 3,
'extent': [2.0, 1.5],
'center': ctr,
'elements': 'iaf_psc_alpha'})
topo.PlotLayer(l1, nodesize=50, fig=pylab.gcf())
# beautify
pylab.axis([-3, 3, -3, 3])
pylab.axes().set_aspect('equal', 'box')
pylab.axes().set_xticks(pylab.arange(-3.0, 3.1, 1.0))
pylab.axes().set_yticks(pylab.arange(-3.0, 3.1, 1.0))
pylab.grid(True)
pylab.xlabel('4 Columns, Extent: 2.0, Center: %.1f' % ctr[0])
pylab.ylabel('3 Rows, Extent: 1.5, Center: %.1f' % ctr[1])
pylab.draw()
Guides¶
Here you can find a detailed look at a variety of topics in NEST.
Connection Management¶
From NEST 2.4 onwards, the old connection routines (i.e. (Random)ConvergentConnect, (Random)DivergentConnect and plain Connect) are replaced by one unified Connect function. In SLI, the old syntax of the function still works, while in PyNEST the old Connect() function is now available as OneToOneConnect(). However, simple cases that just create one-to-one connections between two lists of nodes still work with the new command without any need to change the code. Note that the topology module is not affected by these changes. The translation between the old and the new connect routines is described in Old Connection Routines.
The connectivity pattern is defined inside the Connect() function under the key 'rule'. The patterns available are described in Connection Rules. In addition, the synapse model can be specified within the Connect() function, and all synaptic parameters can be randomly distributed.
The Connect() function can be called in either of the following manners:
Connect(pre, post)
Connect(pre, post, conn_spec)
Connect(pre, post, conn_spec, syn_spec)
pre and post are lists of Global IDs defining the nodes of origin and termination.
conn_spec can either be a string containing the name of the connectivity rule (default: all_to_all) or a dictionary specifying the rule and the rule-specific parameters (e.g. indegree), which must be given.
In addition, the dictionary can contain switches allowing self-connections (autapses, default: True) and multiple connections between pairs of neurons (multapses, default: True). The scope of these switches is confined to a single Connect call: when connecting the same set of neurons multiple times with multapses set to False, one particular connection might still be established multiple times. The same applies to nodes being specified multiple times in the source or target vector; here, multapses set to False will result in one potential connection between each occurring node pair.
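For example, a minimal sketch (parameter values arbitrary) of a conn_spec dictionary that disables both switches when wiring a population onto itself, using the fixed_indegree rule described below:
n, N = 10, 3
A = Create("iaf_psc_alpha", n)
conn_dict = {'rule': 'fixed_indegree', 'indegree': N,
             'autapses': False, 'multapses': False}
Connect(A, A, conn_dict)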
syn_spec defines the synapse type and its properties. It can be given as a string defining the synapse model (default: 'static_synapse') or as a dictionary. By using the keyword variant (Connect(pre, post, syn_spec=syn_spec_dict)), the conn_spec can be omitted in the call to Connect and 'all_to_all' is assumed as the default. The exact usage of the synapse dictionary is described in Synapse Specification.
Connection Rules¶
Connection rules are specified using the conn_spec parameter, which can be a string naming a connection rule or a dictionary containing a rule specification. Only connection rules requiring no parameters can be given as strings; for all other rules, a dictionary specifying the rule and its parameters, such as in- or out-degrees, is required.
one-to-one¶
The ith node in pre is connected to the ith node in post. The node lists pre and post have to be of the same length.
Example:
One-to-one connections
n = 10
A = Create("iaf_psc_alpha", n)
B = Create("spike_detector", n)
Connect(A, B, 'one_to_one')
This rule can also take two Global IDs A and B instead of integer lists. A shortcut is provided if only two nodes are connected: the parameters weight and delay can then be given as third and fourth argument to the Connect() function.
Example:
weight = 1.5
delay = 0.5
Connect(A[0], B[0], weight, delay)
all-to-all¶
Each node in pre is connected to every node in post. Since all_to_all is the default, the 'rule' doesn't need to be specified.
Example:
n, m = 10, 12
A = Create("iaf_psc_alpha", n)
B = Create("iaf_psc_alpha", m)
Connect(A, B)
fixed-indegree¶
The nodes in pre are randomly connected with the nodes in post such that each node in post has a fixed indegree.
Example:
n, m, N = 10, 12, 2
A = Create("iaf_psc_alpha", n)
B = Create("iaf_psc_alpha", m)
conn_dict = {'rule': 'fixed_indegree', 'indegree': N}
Connect(A, B, conn_dict)
fixed-outdegree¶
The nodes in pre are randomly connected with the nodes in post such that each node in pre has a fixed outdegree.
Example:
n, m, N = 10, 12, 2
A = Create("iaf_psc_alpha", n)
B = Create("iaf_psc_alpha", m)
conn_dict = {'rule': 'fixed_outdegree', 'outdegree': N}
Connect(A, B, conn_dict)
fixed-total-number¶
The nodes in pre are randomly connected with the nodes in post such that the total number of connections equals N.
Example:
n, m, N = 10, 12, 30
A = Create("iaf_psc_alpha", n)
B = Create("iaf_psc_alpha", m)
conn_dict = {'rule': 'fixed_total_number', 'N': N}
Connect(A, B, conn_dict)
pairwise-bernoulli¶
For each possible pair of nodes from pre and post, a connection is created with probability p.
Example:
n, m, p = 10, 12, 0.2
A = Create("iaf_psc_alpha", n)
B = Create("iaf_psc_alpha", m)
conn_dict = {'rule': 'pairwise_bernoulli', 'p': p}
Connect(A, B, conn_dict)
Synapse Specification¶
The synapse properties can be given as a string or a dictionary. The string can be the name of a pre-defined synapse found in the synapsedict (see Synapse Types) or the name of a synapse defined manually via CopyModel().
Example:
n = 10
A = Create("iaf_psc_alpha", n)
B = Create("iaf_psc_alpha", n)
CopyModel("static_synapse","excitatory",{"weight":2.5, "delay":0.5})
Connect(A, B, syn_spec="excitatory")
Specifying the synapse properties in a dictionary allows for distributed synaptic parameters. In addition to the key model, the dictionary can contain specifications for weight, delay, receptor_type and parameters specific to the chosen synapse model. The specification of all parameters is optional. Unspecified parameters will use the default values determined by the current synapse model. All parameters can be scalars, arrays or distributions (specified as dictionaries). One synapse dictionary can contain an arbitrary combination of parameter types, as long as they agree with the connection routine (rule).
Scalar parameters must be given as floats except for the ‘receptor_type’ which has to be initialized as an integer. For more information on the receptor type see Receptor Types.
Example:
n = 10
neuron_dict = {'tau_syn': [0.3, 1.5]}
A = Create("iaf_psc_exp_multisynapse", n, neuron_dict)
B = Create("iaf_psc_exp_multisynapse", n, neuron_dict)
syn_dict = {"model": "static_synapse", "weight": 2.5, "delay": 0.5, "receptor_type": 1}
Connect(A, B, syn_spec=syn_dict)
Array parameters can be used in conjunction with the rules one_to_one, all_to_all, fixed_indegree and fixed_outdegree. The arrays can be specified as NumPy arrays or lists. As for the scalar parameters, all parameters but the receptor types must be specified as arrays of floats. For one_to_one, the array must have the same length as the population vector.
Example:
A = Create("iaf_psc_alpha", 2)
B = Create("spike_detector", 2)
conn_dict = {'rule': 'one_to_one'}
syn_dict = {'weight': [1.2, -3.5]}
Connect(A, B, conn_dict, syn_dict)
When connecting using all_to_all, the array must be of dimension len(post) x len(pre).
Example:
A = Create("iaf_psc_alpha", 3)
B = Create("iaf_psc_alpha", 2)
syn_dict = {'weight': [[1.2, -3.5, 2.5],[0.4, -0.2, 0.7]]}
Connect(A, B, syn_spec=syn_dict)
For fixed_indegree the array has to be a two-dimensional NumPy array with shape (len(post), indegree), where indegree is the number of incoming connections per target neuron. The rows therefore describe the target, and the columns the connections converging on the target neuron, regardless of the identity of the source neurons.
Example:
A = Create("iaf_psc_alpha", 5)
B = Create("iaf_psc_alpha", 3)
conn_dict = {'rule': 'fixed_indegree', 'indegree': 2}
syn_dict = {'weight': [[1.2, -3.5],[0.4, -0.2],[0.6, 2.2]]}
Connect(A, B, conn_spec=conn_dict, syn_spec=syn_dict)
For fixed_outdegree the array has to be a two-dimensional NumPy array with shape (len(pre), outdegree), where outdegree is the number of outgoing connections per source neuron. The rows therefore describe the source, and the columns the connections starting from the source neuron, regardless of the identity of the target neurons.
Example:
A = Create("iaf_psc_alpha", 2)
B = Create("iaf_psc_alpha", 5)
conn_dict = {'rule': 'fixed_outdegree', 'outdegree': 3}
syn_dict = {'weight': [[1.2, -3.5, 0.4], [-0.2, 0.6, 2.2]]}
Connect(A, B, conn_spec=conn_dict, syn_spec=syn_dict)
Distributed parameters¶
Distributed parameters are initialized with yet another dictionary specifying the ‘distribution’ and the distribution-specific parameters, whose specification is optional.
Available distributions are given in the rdevdict; the most common ones are:
Distributions and their keys:
- 'normal', 'mu', 'sigma'
- 'normal_clipped', 'mu', 'sigma', 'low', 'high'
- 'normal_clipped_to_boundary', 'mu', 'sigma', 'low', 'high'
- 'lognormal', 'mu', 'sigma'
- 'lognormal_clipped', 'mu', 'sigma', 'low', 'high'
- 'lognormal_clipped_to_boundary', 'mu', 'sigma', 'low', 'high'
- 'uniform', 'low', 'high'
- 'uniform_int', 'low', 'high'
- 'binomial', 'n', 'p'
- 'binomial_clipped', 'n', 'p', 'low', 'high'
- 'binomial_clipped_to_boundary', 'n', 'p', 'low', 'high'
- 'gsl_binomial', 'n', 'p'
- 'exponential', 'lambda'
- 'exponential_clipped', 'lambda', 'low', 'high'
- 'exponential_clipped_to_boundary', 'lambda', 'low', 'high'
- 'gamma', 'order', 'scale'
- 'gamma_clipped', 'order', 'scale', 'low', 'high'
- 'gamma_clipped_to_boundary', 'order', 'scale', 'low', 'high'
- 'poisson', 'lambda'
- 'poisson_clipped', 'lambda', 'low', 'high'
- 'poisson_clipped_to_boundary', 'lambda', 'low', 'high'
Example:
n = 10
A = Create("iaf_psc_alpha", n)
B = Create("iaf_psc_alpha", n)
syn_dict = {'model': 'stdp_synapse',
'weight': 2.5,
'delay': {'distribution': 'uniform', 'low': 0.8, 'high': 2.5},
'alpha': {'distribution': 'normal_clipped', 'low': 0.5, 'mu': 5.0, 'sigma': 1.0}
}
Connect(A, B, syn_spec=syn_dict)
In this example, the all_to_all connection rule is applied by default, using the stdp_synapse model. All synapses are created with weight 2.5 and a delay uniformly distributed in [0.8, 2.5], while the alpha parameter is drawn from a normal distribution with mean 5.0 and standard deviation 1.0; values below 0.5 are excluded by redrawing. The actual distribution is thus a slightly distorted Gaussian.
If the synapse is supposed to have a unique name and distributed parameters, it needs to be defined in two steps:
n = 10
A = Create("iaf_psc_alpha", n)
B = Create("iaf_psc_alpha", n)
CopyModel('stdp_synapse','excitatory',{'weight':2.5})
syn_dict = {'model': 'excitatory',
'weight': 2.5,
'delay': {'distribution': 'uniform', 'low': 0.8, 'high': 2.5},
'alpha': {'distribution': 'normal_clipped', 'low': 0.5, 'mu': 5.0, 'sigma': 1.0}
}
Connect(A, B, syn_spec=syn_dict)
For further information on the distributions see Random numbers in NEST.
Topological Connections¶
If the connect functions above are not sufficient, the topology provides more sophisticated functions. For example, it is possible to create receptive field structures and much more! See Topological Connections for more information.
Receptor Types¶
Each connection in NEST targets a specific receptor type on the
post-synaptic node. Receptor types are identified by integer numbers,
the default receptor type is 0. The meaning of the receptor type depends
on the model and is documented in the model documentation. To connect to
a non-standard receptor type, the parameter receptor_type
of the
additional argument params
is used in the call to the Connect
command. To illustrate the concept of receptor types, we give an example
using standard integrate-and-fire neurons as presynaptic nodes and a
multi-compartment integrate-and-fire neuron (iaf_cond_alpha_mc
) as
post-synaptic node.
A1, A2, A3, A4 = Create("iaf_psc_alpha", 4)
B = Create("iaf_cond_alpha_mc")
receptors = GetDefaults("iaf_cond_alpha_mc")["receptor_types"]
print(receptors)
{'soma_exc': 1,
 'soma_inh': 2,
 'soma_curr': 7,
 'proximal_exc': 3,
 'proximal_inh': 4,
 'proximal_curr': 8,
 'distal_exc': 5,
 'distal_inh': 6,
 'distal_curr': 9}
Connect([A1], B, syn_spec={"receptor_type": receptors["distal_inh"]})
Connect([A2], B, syn_spec={"receptor_type": receptors["proximal_inh"]})
Connect([A3], B, syn_spec={"receptor_type": receptors["proximal_exc"]})
Connect([A4], B, syn_spec={"receptor_type": receptors["soma_inh"]})
The code block above connects four standard integrate-and-fire neurons to different receptor types (somatic, proximal and distal; excitatory and inhibitory) of a multi-compartment integrate-and-fire neuron model. The result is illustrated in the figure.
Synapse Types¶
NEST supports multiple synapse types that are specified during connection setup. The default synapse type in NEST is static_synapse; its weight does not change over time. To allow learning and plasticity, it is possible to use other synapse types that implement long-term or short-term plasticity. A list of available types is accessible via the command Models("synapses"). The output of this command (as of revision 11199) is shown below:
['cont_delay_synapse',
'ht_synapse',
'quantal_stp_synapse',
'static_synapse',
'static_synapse_hom_wd',
'stdp_dopamine_synapse',
'stdp_facetshw_synapse_hom',
'stdp_pl_synapse_hom',
'stdp_synapse',
'stdp_synapse_hom',
'tsodyks2_synapse',
'tsodyks_synapse']
All synapses store their parameters on a per-connection basis. An exception to this scheme are the homogeneous synapse types (identified by the suffix _hom), which only store weight and delay once for all synapses of a type. This means that these are the same for all connections, and they can be used to save memory.
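As a minimal sketch of this (reusing the nodes A and B from the examples above): for a homogeneous type, the shared weight and delay are set once on the model via SetDefaults rather than per connection.
SetDefaults("static_synapse_hom_wd", {"weight": 2.5, "delay": 1.0})
Connect(A, B, syn_spec="static_synapse_hom_wd")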
The default values of a synapse type can be inspected using the command GetDefaults(), which takes the name of the synapse as an argument, and modified with SetDefaults(), which takes the name of the synapse type and a parameter dictionary as arguments.
print(GetDefaults("static_synapse"))
{'delay': 1.0,
'max_delay': -inf,
'min_delay': inf,
'num_connections': 0,
'num_connectors': 0,
'receptor_type': 0,
'synapsemodel': 'static_synapse',
'weight': 1.0}
SetDefaults("static_synapse", {"weight": 2.5})
For the creation of custom synapse types from already existing synapse types, the command CopyModel is used. It has an optional argument params to directly customize the new type during the copy operation; otherwise the defaults of the copied model are taken.
CopyModel("static_synapse", "inhibitory", {"weight": -2.5})
Connect(A, B, syn_spec="inhibitory")
Note: Not all nodes can be connected via all available synapse types. The events a synapse type is able to transmit are documented in the Transmits section of the model documentation.
Inspecting Connections¶
GetConnections(source=None, target=None, synapse_model=None): Return an array of identifiers for connections that match the given parameters. source and target need to be lists of global ids; synapse_model is a string representing a synapse model. If GetConnections is called without parameters, all connections in the network are returned. If a list of source neurons is given, only connections from these pre-synaptic neurons are returned. If a list of target neurons is given, only connections to these post-synaptic neurons are returned. If a synapse model is given, only connections with this synapse type are returned. Any combination of source, target and synapse_model parameters is permitted. Each connection id is a 5-tuple or, if available, a NumPy array with the following five entries: source-gid, target-gid, target-thread, synapse-id, port.
The result of GetConnections can be given as an argument to the GetStatus function, which will then return a list with the parameters of the connections:
n1 = Create("iaf_psc_alpha")
n2 = Create("iaf_psc_alpha")
Connect(n1, n2)
conn = GetConnections(n1)
print(GetStatus(conn))
[{'synapse_type': 'static_synapse',
'target': 2,
'weight': 1.0,
'delay': 1.0,
'source': 1,
'receptor': 0}]
Modifying existing Connections¶
To modify the properties of existing connections, one first has to obtain handles to the connections with GetConnections(). These can then be given as arguments to the SetStatus() function:
n1 = Create("iaf_psc_alpha")
n2 = Create("iaf_psc_alpha")
Connect(n1, n2)
conn = GetConnections(n1)
SetStatus(conn, {"weight": 2.0})
print(GetStatus(conn))
[{'synapse_type': 'static_synapse',
'target': 2,
'weight': 2.0,
'delay': 1.0,
'source': 1,
'receptor': 0}]
Running simulations¶
Introduction¶
To drive the simulation, neurons and devices (nodes) are updated in a time-driven fashion by calling a member function on each of them at regular intervals. The spacing of the grid is called the simulation resolution (default 0.1 ms) and can be set using SetKernelStatus:
SetKernelStatus({"resolution": 0.1})
Even though a neuron model can use smaller time steps internally, the membrane potential will only be visible to a multimeter on the outside at time points that are multiples of the simulation resolution.
In contrast to the update of nodes, an event-driven approach is used for the synapses, meaning that they are only updated when an event is transmitted through them (Morrison et al. 2005). To speed up the simulation and allow the efficient use of computer clusters, NEST uses a hybrid parallelization strategy. The following figure shows the basic loop that is run upon a call to Simulate:
Simulation Loop¶
The simulation loop. Light gray boxes denote thread parallel parts, dark gray boxes denote MPI parallel parts. U(St) is the update operator that propagates the internal state of a neuron or device.
Simulation resolution and update interval¶
Each connection in NEST has its own specific delay that defines the time it takes until an event reaches the target node. We define the minimum delay dmin as the smallest transmission delay and dmax as the largest delay in the network. From this definition follows that no node can influence another node during at least a time of dmin, i.e. the elements are effectively decoupled for this interval.
Definitions of minimum delay (dmin) and simulation resolution (h).¶
Two major optimizations in NEST are built on this decoupling:
Every neuron is updated in steps of the simulation resolution, but always for dmin time in one go, as to keep neurons in cache as long as possible.
MPI processes only communicate in intervals of dmin as to minimize communication costs.
These optimizations mean that the sizes of spike buffers in nodes and the buffers for inter-process communication depend on dmin+dmax, as histories that long back have to be kept. NEST will figure out the correct values of dmin and dmax based on the actual delays used during connection setup. Their actual values can be retrieved using GetKernelStatus:
GetKernelStatus("min_delay") # (A corresponding entry exists for max_delay)
Setting dmin and dmax manually¶
In linear simulation scripts that build a network, simulate it, carry out some post-processing and exit, the user does not have to worry about the delay extrema dmin and dmax, as they are set automatically to the correct values. However, NEST also allows subsequent calls to Simulate, which only work correctly if the content of the spike buffers is preserved over the simulations.
As mentioned above, the size of that buffer depends on dmin+dmax, and the easiest way to ensure its integrity is to not change its size after initialization. Thus, we freeze the delay extrema after the first call to Simulate. To still allow adding new connections in between calls to Simulate, the required boundaries of delays can be set manually as properties of the synapse model using SetDefaults:
SetDefaults("static_synapse", {"min_delay": 0.5, "max_delay": 2.5})
These settings should be used with care, though: setting the delay extrema too wide without need leads to decreased performance due to more update calls and communication cycles (small dmin), or increased memory consumption of NEST (large dmax).
Spike generation and precision¶
A neuron fires a spike when the membrane potential is above threshold at the end of an update interval (i.e., a multiple of the simulation resolution). For most models, the membrane potential is then reset to some fixed value and clamped to that value during the refractory time. This means that the last membrane potential value at the last time step before the spike can vary, while the potential right after the step will usually be the reset potential (some models may deviate from this). This also means that the membrane potential recording will never show values above the threshold. The time of the spike is always the time at the end of the interval during which the threshold was crossed.
NEST also has some models that determine the precise time of the threshold crossing during the interval. Please see the documentation on precise spike time neurons for details about neuron update in continuous time, and the documentation on connection management for how to set the delay when creating synapses.
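The following minimal sketch illustrates both points for a grid-based model; the DC amplitude is a hypothetical value chosen only to make the neuron fire:
nest.ResetKernel()
nrn = nest.Create('iaf_psc_alpha', params={'I_e': 1000.0})  # hypothetical drive
vm = nest.Create('multimeter', params={'record_from': ['V_m']})
sd = nest.Create('spike_detector')
nest.Connect(vm, nrn)
nest.Connect(nrn, sd)
nest.Simulate(100.0)
# the recorded potential never exceeds the threshold ...
print(max(nest.GetStatus(vm, 'events')[0]['V_m']))
# ... and spike times fall on the simulation grid
print(nest.GetStatus(sd, 'events')[0]['times'])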
Splitting a simulation into multiple intervals¶
In some cases, it may be useful to run a simulation in shorter intervals
to extract information while the simulation is running. The simplest way
of doing this is to simply loop over Simulate()
calls:
for _ in range(20):
nest.Simulate(10)
# extract and analyse data
would run a simulation in 20 rounds of 10 ms. With this solution, NEST takes
a number of preparatory and cleanup steps for each Simulate()
call.
This makes the solution robust and entirely reliable, but comes with a
performance cost.
A more efficient solution doing exactly the same thing is
nest.Prepare()
for _ in range(20):
nest.Run(10)
# extract and analyse data
nest.Cleanup()
For convenience, the RunManager() context manager can handle preparation and cleanup for you:
with nest.RunManager():
for _ in range(20):
nest.Run(10)
# extract and analyse data
Note
If you do not use RunManager(), you must call Prepare(), Run() and Cleanup() in that order.
You can call Run() any number of times inside a RunManager() context or between Prepare() and Cleanup() calls.
Calling SetStatus() inside a RunManager() context or between Prepare() and Cleanup() will lead to unpredictable results.
After calling Cleanup(), you need to call Prepare() again before calling Run().
Repeated simulations¶
The only reliable way to perform two simulations of a network from exactly the same starting point is to restart NEST or to call ResetKernel() and then build the network anew. If your simulations are rather large and you are working on a computer with a job queueing system, it may be most efficient to submit individual jobs or a job array to simulate network instances in parallel; don't forget to use different random seeds!
The following example performs simulations of a single neuron driven by a Poisson spike train using different seeds and output files for each run:
for n in range(10):
nest.ResetKernel()
nest.SetKernelStatus({'grng_seed': 100*n + 1,
'rng_seeds': [100*n + 2]})
pg = nest.Create('poisson_generator', params={'rate': 1000000.0})
nrn = nest.Create('iaf_psc_alpha')
sd = nest.Create('spike_detector',
params={'label': 'spikes-run{:02d}'.format(n),
'to_file': True})
nest.Connect(pg, nrn)
nest.Connect(nrn, sd)
nest.Simulate(100)
The ResetNetwork() function available in NEST 2 is incomplete in that it only resets the state of neurons and devices to default values and deletes spikes that are in the delivery pipeline. It does not reset plastic synapses or delete spikes from the spike buffers of neurons. We will therefore remove the function in NEST 3 and already now advise against using ResetNetwork().
Guide to parallel computing¶
What is parallelization?¶
Parallelization can improve the efficiency of running large-scale simulations by taking advantage of multicore/multiprocessor machines, computer clusters or supercomputers. Here we explain how parallelization is set up in NEST and how you can take advantage of it for your simulations.
NEST employs two methods for parallelization:
- Thread-parallel simulation
uses OpenMP
takes advantage of multicore and multiprocessor computers without the need for additional libraries
- Distributed simulation (or distributed computing)
uses the Message Passing Interface (MPI)
supports simulations over multiple computers
Both methods can be combined within a simulation.
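As a minimal sketch of such a combination (the script name simulation.py is hypothetical), request two threads per process inside the script:
# inside simulation.py
import nest
nest.SetKernelStatus({'local_num_threads': 2})
and start several MPI processes on the command line; threads and processes multiply, so four processes with two threads each yield eight virtual processes:
mpirun -np 4 python simulation.py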
See Plesser et al. (2007) for more information on NEST parallelization, and be sure to check the documentation on Random numbers in NEST.
Virtual processes¶
We use the concept of local and remote threads, called virtual processes. A virtual process (VP) is a thread residing in one of NEST's MPI processes. For both threaded and distributed parallelization, VPs simplify the handling of neuron and synapse distributions. Virtual processes are distributed round-robin (i.e. each VP is allocated equal time slices, without any being given priority) onto the MPI processes and counted continuously over all processes.
Basic scheme showing how threads (T) and virtual processes (VP) reside in MPI processes (P) in NEST¶
Node distributions¶
The distribution of nodes depends on the type of node.
The figure below shows a node distribution for a small network consisting of a spike_generator, four iaf_psc_alpha neurons, and a spike_detector in a scenario with two processes with two threads each.
sg=spike_generator, iaf=iaf_psc_alpha, sd=spike_detector. Numbers to the left and right indicate global ids. The proxy object in the figure is a conceptual way of keeping the id of the real node free on remote processes.¶
Note
The status dictionary of each node (i.e. neuron or device) contains three entries that are related to parallel computing:
local (boolean): indicating if the node exists on the local process or not
thread (integer): id of the local thread the node is assigned to
vp (integer): id of the virtual process the node is assigned to
Neuron distribution¶
Neurons are assigned to one of the virtual processes in a round-robin fashion. On all other virtual processes, no object is created. Proxies ensure the id of the real node on a given VP is kept free.
The virtual process \(id_{vp}\) on which a neuron with global id \(gid_{node}\) is allocated is given by \(id_{vp} = gid_{node} \bmod N_{vp}\), where \(N_{vp}\) is the total number of virtual processes in the simulation.
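For illustration only, a tiny sketch of this assignment rule (hypothetical gids, \(N_{vp}=4\)):
N_vp = 4
for gid in range(1, 9):
    print('gid %d is handled by vp %d' % (gid, gid % N_vp))
# gid 1 -> vp 1, gid 2 -> vp 2, gid 3 -> vp 3, gid 4 -> vp 0, gid 5 -> vp 1, ...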
Device Distribution¶
Devices are replicated once on each thread in order to balance the load and minimize their interaction. Devices thus do not have proxies on remote virtual processes.
For recording devices configured to record to a file (property to_file set to true), the distribution results in multiple data files, each containing the data from one thread. The file names are composed according to the following scheme:
[model|label]-gid-vp.[dat|gdf]
The first part is the name of the model (e.g. voltmeter or spike_detector) or, if set, the label of the recording device. Next is the global id (GID) of the recording device, followed by the id of the VP assigned to the recorder. Spike files have the file extension gdf and analog recordings from the multimeter have dat as file extension.
The label and file_extension of a recording device can be set like any other parameter of a node using SetStatus.
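For example, a short sketch (the label value is arbitrary): a spike_detector labelled 'spikes' with, say, GID 6 in a run with two virtual processes would write its data to spikes-6-0.gdf and spikes-6-1.gdf.
sd = nest.Create('spike_detector')
nest.SetStatus(sd, {'label': 'spikes', 'to_file': True})
# one data file per thread, named <label>-<gid>-<vp>.gdf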
Spike exchange and synapse update¶
Spike exchange in NEST takes different routes depending on the type of the sending and receiving node. There are two distinct cases.
Spikes between neurons¶
Spikes between neurons are always exchanged through the global spike exchange mechanism.
Neuron update and spike generation in the source neuron and spike delivery to the target neuron may be handled by different virtual processes, but the virtual process assigned to the target neuron always handles the corresponding spike delivery (see property vp in the status dictionary).
Spikes between neurons and devices¶
Spike exchange to or from neurons over connections that either originate or terminate at a device (e.g., spike_generator -> neuron or neuron -> spike_detector) bypasses the global spike exchange mechanism. Spikes are delivered locally within the virtual process from or to a replica of the device. In this case, both the pre- and postsynaptic nodes are handled by the virtual process to which the neuron is assigned.
Synaptic plasticity models¶
For synapse models supporting plasticity, synapse dynamics in the Connection object are always handled by the virtual process of the target node.
Using multiple threads¶
Thread-parallel simulation is compiled into NEST by default and should work on all macOS and Linux machines without additional requirements.
In order to keep results comparable and reproducible across different machines, the default mode is set to a single thread and multi-threading has to be turned on explicitly.
To use multiple threads for the simulation, the desired number of threads has to be set before any nodes or connections are created. The command for this is
nest.SetKernelStatus({"local_num_threads": T})
Usually, a good choice for T is the number of processor cores available on your machine.
Note
In some situations, oversubscribing (i.e., specifying a local_num_threads that is higher than the number of available cores on your machine) can yield a 20-30% improvement in simulation speed. Finding the optimal thread number for a specific situation might require a bit of experimenting.
Using distributed computing¶
Build requirements¶
To compile NEST for distributed computing, you will need a library implementation of MPI on your system. If you are on a cluster, you most likely have this already. If you use a pre-packaged MPI library, you will also need the corresponding development packages.
Note
Please be advised that NEST should currently only be run in a homogeneous MPI environment. Running in a heterogeneous environment can lead to unexpected results or even crashes. Please contact the NEST community if you require support for exotic setups.
Configure¶
If you are following the standard installation instructions, add the option -Dwith-mpi=ON when calling cmake. The build summary should report that MPI is linked.
Please see the Installation instructions for more information on installing NEST.
Run distributed simulations¶
Distributed simulations cannot be run interactively, which means that the simulation has to be provided as a script. However, the script can be the same as a script for any simulation. No changes are necessary for distributed simulation scripts: inter-process communication and node distribution are managed transparently inside of NEST.
To distribute a simulation onto 128 processes of a computer cluster, the command should look like this
mpirun -np 128 python simulation.py
Please refer to the MPI library documentation for details on the usage of mpirun.
Reproducibility¶
To achieve the same simulation results even when using different parallelization strategies, the number of virtual processes has to be kept constant. A simulation with a specific number of virtual processes will always yield the same results, no matter how they are distributed over threads and processes, given that the seeds for the random number generators of the different virtual processes are the same (see Random numbers in NEST).
In order to achieve a constant number of virtual processes, NEST provides the property total_num_virtual_procs to adapt the number of local threads (property local_num_threads, explained above) to the number of available processes.
The following listing contains a complete simulation script (simulation.py) with four neurons connected in a chain. The first neuron receives random input from a poisson_generator and the spikes of all four neurons are recorded to files.
from nest import *
SetKernelStatus({"total_num_virtual_procs": 4})
pg = Create("poisson_generator", params={"rate": 50000.0})
n = Create("iaf_psc_alpha", 4)
sd = Create("spike_detector", params={"to_file": True})
Connect(pg, [n[0]], syn_spec={'weight': 1000.0, 'delay': 1.0})
Connect([n[0]], [n[1]], syn_spec={'weight': 1000.0, 'delay': 1.0})
Connect([n[1]], [n[2]], syn_spec={'weight': 1000.0, 'delay': 1.0})
Connect([n[2]], [n[3]], syn_spec={'weight': 1000.0, 'delay': 1.0})
Connect(n, sd)
Simulate(100.0)
The script is run three times using different numbers of MPI processes, but 4 virtual processes in every run:
mkdir 4vp_1p; cd 4vp_1p
mpirun -np 1 python ../simulation.py
cd ..; mkdir 4vp_2p; cd 4vp_2p
mpirun -np 2 python ../simulation.py
cd ..; mkdir 4vp_4p; cd 4vp_4p
mpirun -np 4 python ../simulation.py
cd ..
diff 4vp_1p 4vp_2p
diff 4vp_1p 4vp_4p
Each variant of the experiment produces four data files, one for each virtual process (spike_detector-6-0.gdf, spike_detector-6-1.gdf, spike_detector-6-2.gdf, and spike_detector-6-3.gdf). Using diff on the three data directories shows that they all contain the same spikes, which means that the simulation results are indeed the same independently of the details of parallelization.
Random numbers¶
Introduction¶
Random numbers are used for a variety of purposes in neuronal network simulations, e.g.
to create randomized connections
to choose parameter values randomly
to inject noise into network simulations, e.g., in the form of Poissonian spike trains.
This document discusses how NEST provides random numbers for these purposes, how you can choose which random number generator (RNG) to use, and how to set the seeds of RNGs in NEST. We use the term "random number" here for ease of writing, even though we are always talking about pseudorandom numbers generated by some algorithm.
NEST is designed to support parallel simulation and this puts some constraints on the use and generation of random numbers. We discuss these in the next section, before going into the details of how to control RNGs in NEST.
On this page, we mainly discuss the use of random numbers in parallel NEST simulations, but the comments pertain equally to serial simulations (N_vp=1).
Random Numbers vs Random Deviates¶
NEST distinguishes between random number generators, provided by rngdict, and random deviate generators, provided by rdevdict. Random number generators only provide double-valued numbers uniformly distributed on [0, 1] and uniformly distributed integers in {0, 1, …, N}. Random deviate generators, on the other hand, provide random numbers drawn from a range of distributions, such as the normal or binomial distributions. In most cases, you will be using random deviate generators. They are in particular used to initialize properties during network construction, as described in the section on changes in NEST 2.4 and the Examples below.
Changes in random number generation in NEST 2.4¶
Random deviate generation has become significantly more powerful in NEST 2.4, to fully support the randomization of connection parameters offered by the revised Connect function, as described in Connection Management and illustrated by the Examples below. We have also made minor changes to achieve greater similarity between NEST, PyNN, and NumPy. For most users, these changes only add new features. Only existing scripts using the uniformint, normal_clipped, normal_clipped_left, or normal_clipped_right generators from NEST 2.2 need to be adapted as detailed below.
The changes are as follows:
Uniform integer generator
- renamed from uniformint to uniform_int
- parameters renamed to low and high
- returns uniformly distributed integers from {low, low+1, …, high}
Uniform continuous generator
- new generator uniform
- parameters low and high
- generates numbers uniformly distributed in [low, high)
Full parameter sets for generators
In the past, many random deviate generators returned values for fixed parameters, e.g., the normal generator could only return zero-mean, unit-variance normal random numbers. Now, all parameters for each generator can be set, in particular:
- normal: mu, sigma
- lognormal: mu, sigma
- exponential: lambda
- gamma: order, scale
Parameter values are checked more systematically for unsuitable values.
Clipped normal generators
- parameter names changed to mu and sigma
- clipping limits now called low and high
- _left and _right variants removed: for one-sided clipping, just set the boundary you want to clip at; the other defaults to positive or negative infinity
Clipped variants for most generators
For most random deviate generators, _clipped variants now exist. For all clipped variants, one can set a lower limit (low, default: -infinity) and an upper limit (high, default: +infinity). Clipped variants will then return numbers strictly in (low, high) for continuous distributions (e.g. normal, exponential) or in {low, low+1, …, high} for discrete distributions (e.g. poisson, binomial). This is achieved by redrawing numbers until an acceptable number is drawn. Note that the resulting distribution differs from the original one and that drawing may become very slow if (low, high) contains only very small probability mass. Clipped generator variants should therefore mostly be used to clip tails with very small probability mass when randomizing time constants or delays.
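As a sketch of the intended use (parameter values here are arbitrary), a one-sided clip that keeps randomized delays above a minimum while leaving the upper tail untouched:
n = 10
A = Create("iaf_psc_alpha", n)
B = Create("iaf_psc_alpha", n)
syn_dict = {'delay': {'distribution': 'lognormal_clipped',
                      'mu': 0.5, 'sigma': 0.5, 'low': 0.1}}
Connect(A, B, syn_spec=syn_dict)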
Clipped-to-boundary variants for most generators
To facilitate the reproduction of certain publications, NEST also provides _clipped_to_boundary variants of most generators. Clipped-to-boundary variants return the value low if a number smaller than low is drawn, and high if a number larger than high is drawn. We believe that these variants should not be used for new studies.
Basics of parallel simulation in NEST¶
For details of parallelization in NEST, please see Parallel Computing and Plesser et al (2007). Here, we just summarize a few basics.
NEST can parallelize simulations through multi-threading, distribution or a combination of the two.
A distributed simulation is spread across several processes under the control of MPI (Message Passing Interface). Each network node is local to exactly one process and complete information about the node is only available to that process. Information about each connection is stored by the process in which the connection target is local and is only available and changeable on that process.
Multi-threaded simulations run in a single process in a single computer. As a consequence, all nodes in a multi-threaded simulation are local.
Distribution and multi-threading can be combined by running identical numbers of threads in each process.
A serial simulation has a single process with a single seed.
From the NEST user perspective, distributed processes and threads are visible as virtual processes. A simulation distributed across \(M\) MPI processes with \(T\) threads each has \(N_{vp} = M \times T\) virtual processes. It is a basic design principle of NEST that simulations shall generate identical results when run with a fixed \(N_{vp}\), no matter how the virtual processes are broken down into MPI processes and threads.
Useful information can be obtained like this:
import nest
nest.NumProcesses()                                # number of MPI processes
nest.Rank()                                        # rank of MPI process executing command
nest.GetKernelStatus(['num_processes'])            # same as nest.NumProcesses()
nest.GetKernelStatus(['local_num_threads'])        # number of threads in present process (same for all processes)
nest.GetKernelStatus(['total_num_virtual_procs'])  # N_vp = M x T
When querying neurons, only very limited information is available for neurons on other MPI processes. Thus, before checking for specific information, you need to check if a node is local:
n = nest.Create('iaf_psc_alpha')
if nest.GetStatus(n, 'local')[0]:        # GetStatus() returns a list, pick element
    print(nest.GetStatus(n, 'vp'))       # virtual process "owning" the node
    print(nest.GetStatus(n, 'thread'))   # thread in calling process "owning" the node
Random numbers in parallel simulations¶
Ideally, all random numbers in a simulation should come from a single RNG. This would require shipping truckloads of random numbers from a central RNG process to all simulations processes and is thus impractical, if not outright prohibitively costly. Therefore, parallel simulation requires an RNG on each parallel process. Advances in RNG technology give us today a range of RNGs that can be used in parallel, with a quite high level of certainty that the resulting parallel streams of random numbers are non-overlapping and uncorrelated. While the former can be guaranteed, we are not aware of any generator for which the latter can be proven.
How many generators in a simulation¶
In a typical PyNEST simulation running on \(N_{vp}\) virtual processes, we will encounter \(2 N_{vp} + 1\) random number generators:
- one global NEST RNG, used e.g. to create connections with RandomDivergentConnect,
- \(N_{vp}\) per-process RNGs, used to create connections with RandomConvergentConnect and to provide random numbers to nodes generating random output, e.g. the poisson_generator,
- \(N_{vp}\) RNGs on the Python level, used to randomize node and connection parameters.
The generators on the Python level are not strictly necessary, as one could in principle access the per-VP RNGs built into NEST. This would require very tedious SLI coding, though. We therefore recommend at present that you use additional RNGs on the Python side.
Why a Global RNG in NEST¶
In some situations, randomized decisions on different virtual processes are not independent of each other. The most important case is randomized divergent connections. The problem here is as follows. For the sake of efficiency, NEST stores all connection information on the virtual process (VP) on which the target of a connection resides (the target process). Thus, all connections are generated by this target process. Now consider the task of generating 100 randomized divergent connections emanating from a given source neuron while using 4 VPs. Then there should be 25 targets on each VP on average, but actual numbers will fluctuate. If independent processes on all VPs tried to choose target neurons, we could never be sure that exactly 100 targets would be chosen in total.
NEST thus creates divergent connections using a global RNG. This random number generator provides the exact same sequence of random numbers on each virtual process. Using this global RNG, each VP chooses 100 targets from the entire network, but actually creates connections only for those targets that reside on the VP. In practice, the global RNG is implemented using one “clone” on each VP; NEST checks occasionally that all these clones are synchronized, i.e., indeed generate identical sequences.
Seeding the Random Generators¶
Each of the \(2N_{vp}+1\) random generators needs to be seeded with a different seed to generate a different random number sequence. We recommend that you choose a master seed msd and seed the \(2N_{vp}+1\) generators with seeds msd, msd+1, …, msd+2*N_vp. Master seeds for independent experiments must differ by at least \(2N_{vp}+1\); otherwise, the same sequence(s) would enter in several experiments.
Seeding the Python RNGs¶
You can create a properly seeded list of \(N_{vp}\) RNGs on the Python side using
import numpy
import nest

msd = 123456
N_vp = nest.GetKernelStatus(['total_num_virtual_procs'])[0]
pyrngs = [numpy.random.RandomState(s) for s in range(msd, msd + N_vp)]

msd is the master seed; choose your own!
Seeding the global RNG¶
The global NEST RNG is seeded with a single, positive integer number:

nest.SetKernelStatus({'grng_seed': msd + N_vp})
Seeding the per-process RNGs¶
The per-process RNGs are seeded by a list of \(N_{vp}\) positive integers:

nest.SetKernelStatus({'rng_seeds': list(range(msd + N_vp + 1, msd + 2 * N_vp + 1))})
Choosing the random generator type¶
Python and NumPy have the MersenneTwister MT19937ar random number generator built in. There is no simple way of choosing a different generator in NumPy, but as the MT19937ar appears to be a very robust generator, this should not cause significant problems.
NEST uses by default Knuth's lagged Fibonacci random number generator (The Art of Computer Programming, vol 2, 3rd ed, 9th printing or later, ch 3.6). If you want to use other generators, you can exchange them as described below. If you have built NEST without the GNU Scientific Library (GSL), you will only have the Mersenne Twister MT19937ar and Knuth's lagged Fibonacci generator available. Otherwise, you will also have some 60 generators from the GSL at your disposal (not all of them particularly good). You can see the full list of RNGs using
nest.sli_run('rngdict info')
Setting a different global RNG¶
To set a different global RNG in NEST, you have to pass a NEST random number generator object to the NEST kernel. This can currently only be done by writing some SLI code. The following code replaces the current global RNG with MT19937 seeded with 101:
nest.sli_run('0 << /grng rngdict/MT19937 :: 101 CreateRNG >> SetStatus')
The following happens here:

- rngdict/MT19937 :: fetches a "factory" for MT19937 from the rngdict.
- 101 CreateRNG uses the factory to create a single MT19937 generator with seed 101.
- This generator is then passed to the /grng status variable of the kernel. This is a "write only" variable that is invisible in GetKernelStatus().
Setting different per-process RNGs¶
One always needs to exchange all \(N_{vp}\) per-process RNGs at once. This is done by (assuming \(N_{vp}=2\)):
nest.sli_run('0 << /rngs [102 103] { rngdict/MT19937 :: exch CreateRNG } Map >> SetStatus')
The following happens here:

- [102 103] { rngdict/MT19937 :: exch CreateRNG } Map creates an array of two RNG objects seeded with 102 and 103, respectively.
- This array is then passed to the /rngs status variable of the kernel. This variable is invisible as well.
Examples¶
NOTE: These examples are not yet updated for NEST 2.4
No random variables in script¶
If no explicit random variables appear in your script, i.e., if randomness only enters your simulation through random stimulus generators such as poisson_generator or randomized connection routines such as RandomConvergentConnect, you do not need to worry about anything except choosing and setting your random seeds, and possibly exchanging the random number generators.
Randomizing the membrane potential¶
If you want to randomize the membrane potential (or any other property of a neuron), you need to take care that each node is updated by the process on which it is local, using the per-VP RNG for the VP to which the node belongs. This is achieved by the following code:
pyrngs = [numpy.random.RandomState(s) for s in range(msd, msd+N_vp)]
nodes = nest.Create('iaf_psc_delta', 10)
node_info = nest.GetStatus(nodes)
local_nodes = [(ni['global_id'], ni['vp']) for ni in node_info if ni['local']]
for gid, vp in local_nodes:
    nest.SetStatus([gid], {'V_m': pyrngs[vp].uniform(-70.0, -50.0)})
The first line generates \(N_{vp}\) properly seeded NumPy RNGs as discussed above. The next line creates 10 nodes, while the third line extracts status information about each node. For local nodes, this will be full information; for non-local nodes we only get the following fields: local, model and type. On the fourth line, we create a list of tuples, containing global ID and virtual process number for all local neurons. The for loop then sets the membrane potential of each local neuron to a value drawn from a uniform distribution on \([-70, -50]\), using the Python-side RNG for the VP to which the neuron belongs.
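As a quick check (not part of the original example), one can print the freshly randomized potentials on each rank; every rank only sees values for its own local nodes:

v_m = [nest.GetStatus([gid], 'V_m')[0] for gid, vp in local_nodes]
print(v_m)  # each rank prints only its local, randomized membrane potentials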
Randomizing convergent connections¶
We continue the above example by creating random convergent connections, \(C_E\) connections per target node. In the process, we randomize the connection weights:
C_E = 10
nest.CopyModel("static_synapse", "excitatory")
for tgt_gid, tgt_vp in local_nodes:
weights = pyrngs[tgt_vp].uniform(0.5, 1.5, C_E)
nest.RandomConvergentConnect(nodes, [tgt_gid], C_E,
weight=list(weights), delay=2.0,
model="excitatory")
Here we loop over all local nodes, considered as target nodes. For each target, we create an array of \(C_E\) randomly chosen weights, uniform on \([0.5, 1.5]\). We then call RandomConvergentConnect() with this weight list as argument. Note a few details:

- We need to put tgt_gid into brackets, as PyNEST functions always expect lists of GIDs.
- We need to convert the NumPy array weights to a plain Python list, as most PyNEST functions currently cannot handle array input.
- If we specify weight, we must also provide delay.
You can check the weights selected by

print(nest.GetStatus(nest.GetConnections(), ['source', 'target', 'weight']))

which will print a list containing a triple of source GID, target GID and weight for each connection in the network. If you want to see only a subset of connections, pass source, target, or synapse model to GetConnections(), as shown below.
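For example, to print only the excitatory connections whose target is the first of the created nodes (reusing nodes and the "excitatory" synapse model from the example above):

conns = nest.GetConnections(target=[nodes[0]], synapse_model='excitatory')
print(nest.GetStatus(conns, ['source', 'target', 'weight']))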
Randomizing divergent connections¶
Randomizing the weights (or delays or any other properties) of divergent
connections is more complicated than for convergent connections, because
the target for each connection is not known upon the call to
RandomDivergentConnect
. We therefore need to first create all
connections (which we can do with a single call, passing lists of nodes
and targets), and then need to manipulate all connections. This is not
only more complicated, but also significantly slower than the example
above.
nest.CopyModel('static_synapse', 'inhibitory', {'weight': 0.0, 'delay': 3.0})
nest.RandomDivergentConnect(nodes, nodes, C_E, model='inhibitory')
gid_vp_map = dict(local_nodes)
for src in nodes:
conns = nest.GetConnections(source=[src], synapse_model='inhibitory')
tgts = [conn[1] for conn in conns]
rweights = [{'weight': pyrngs[gid_vp_map[tgt]].uniform(-2.5, -0.5)}
for tgt in tgts]
nest.SetStatus(conns, rweights)
In this code, we first create all connections with weight 0. We then create gid_vp_map, mapping GIDs to VP numbers for all local nodes. For each node, considered as a source, we then find all outgoing inhibitory connections from that node and obtain a flat list of the targets of these connections. For each target we then choose a random weight as above, using the RNG pertaining to the VP of the target. Finally, we set these weights. Note that the code above is slow. Future versions of NEST will provide better solutions.
Testing scripts randomizing node or connection parameters¶
To ensure that you are consistently using the correct RNG for each node or connection, you should run your simulation several times with the same \(N_{vp}\), but using different numbers of MPI processes. To this end, add towards the beginning of your script
nest.SetKernelStatus({"total_num_virtual_procs": 4})
and ensure that spikes are logged to file in the current working directory. Then run the simulation with different numbers of MPI processes in separate directories
mkdir 41 42 44
cd 41
mpirun -np 1 python test.py
cd ../42
mpirun -np 2 python test.py
cd ../44
mpirun -np 4 python test.py
cd ..
These directories should now have identical content, something you can check with diff:

diff 41 42
diff 41 44
These commands should not generate any output. Obviously, this test checks only a necessary, but by no means sufficient condition for a correct simulation. (Oh yes, do make sure that these directories contain data! Nothing easier than to pass a diff-test on empty dirs.)
Analog recording with multimeter¶
As of r89xx, NEST replaces a range of analog recording devices, such as voltmeter, conductancemeter and aeif_w_meter, with a universal multimeter, which can record all analog quantities a model neuron makes available for recording. The multimeter works essentially like the old-style voltmeter, but with a few changes:
The /recordables list of a neuron model will tell you which quantities can be recorded:

In [3]: nest.GetDefaults('iaf_cond_alpha')['recordables']
Out[3]: ['V_m', 'g_ex', 'g_in', 't_ref_remaining']
You have to configure multimeter to record from a set of quantities:
nest.Create('multimeter', params={'record_from': ['V_m', 'g_ex']})
By default, the recording interval is 1 ms, but you can change this:

nest.Create('multimeter', params={'record_from': ['V_m', 'g_ex'], 'interval': 0.1})
The set of variables to record and the recording interval must be set before the multimeter is connected to any node, and cannot be changed afterwards.
After one has simulated a little, the events entry of the multimeter status dictionary will contain one NumPy array of data for each recordable.

Any node can only be recorded from by one multimeter.
Adapting scripts using voltmeter¶
Many NEST users have scripts that use voltmeter to record membrane potential. To ease the transition to the new-style analog recording, NEST still provides a device called voltmeter. It is simply a multimeter pre-configured to record the membrane potential V_m. It can be used exactly as the old voltmeter. The only change you need to make to your scripts is that you collect data from events/V_m instead of from events/potentials, e.g.
In [24]: nest.GetStatus(m, 'events')[0]['V_m']
Out[24]:
array([-70. , -70. , -70. , -70. ,
-70. , -70. , -70. , -70. ,
An example¶
As an example, here is the multimeter.py example from the PyNEST examples set:
import nest
import numpy as np
import pylab as pl
# display recordables for illustration
print('iaf_cond_alpha recordables: ', nest.GetDefaults('iaf_cond_alpha')['recordables'])
# create neuron and multimeter
n = nest.Create('iaf_cond_alpha', params = {'tau_syn_ex': 1.0, 'V_reset': -70.0})
m = nest.Create('multimeter', params = {'withtime': True, 'interval': 0.1, 'record_from': ['V_m', 'g_ex', 'g_in']})
# Create spike generators and connect
gex = nest.Create('spike_generator', params = {'spike_times': np.array([10.0, 20.0, 50.0])})
gin = nest.Create('spike_generator', params = {'spike_times': np.array([15.0, 25.0, 55.0])})
nest.Connect(gex, n, params={'weight': 40.0}) # excitatory
nest.Connect(gin, n, params={'weight': -20.0}) # inhibitory
nest.Connect(m, n)
# simulate
nest.Simulate(100)
# obtain and display data
events = nest.GetStatus(m)[0]['events']
t = events['times']
pl.subplot(211)
pl.plot(t, events['V_m'])
pl.axis([0, 100, -75, -53])
pl.ylabel('Membrane potential [mV]')
pl.subplot(212)
pl.plot(t, events['g_ex'], t, events['g_in'])
pl.axis([0, 100, 0, 45])
pl.xlabel('Time [ms]')
pl.ylabel('Synaptic conductance [nS]')
pl.legend(('g_exc', 'g_inh'))
Here is the result:

Example for using the multimeter¶
Simulations with gap junctions¶
Note: This documentation describes the usage of gap junctions in NEST 2.12. Documentation for NEST 2.10 can be found in Hahne et al. 2016. It is, however, recommended to use NEST 2.12 (or later), due to several improvements in terms of usability.
Introduction¶
Simulations with gap junctions are supported by the Hodgkin-Huxley neuron model hh_psc_alpha_gap. The synapse model to create a gap-junction connection is named gap_junction. Unlike chemical synapses, gap junctions are bidirectional connections. In order to create one accurate gap-junction connection, two NEST connections are required: for each created connection, a second connection with the exact same parameters in the opposite direction is required. NEST provides the possibility to create both connections with a single call to nest.Connect via the make_symmetric flag (default value: False) of the connection dictionary:
import nest
a = nest.Create('hh_psc_alpha_gap')
b = nest.Create('hh_psc_alpha_gap')
# Create gap junction between neurons a and b
nest.Connect(a, b, {'rule': 'one_to_one', 'make_symmetric': True},
{'model': 'gap_junction', 'weight': 0.5})
In this case the reverse connection is created internally. In order to prevent the creation of incomplete or non-symmetrical gap junctions, the creation of gap junctions is restricted to

- one_to_one connections with 'make_symmetric': True
- all_to_all connections with equal source and target populations and default or scalar parameters
Create random connections¶
NEST random connection rules like fixed_total_number, fixed_indegree, etc. cannot be employed for the creation of gap junctions. Therefore, random connections have to be created on the Python level, e.g. with the random module of the Python Standard Library:
import nest
import random
import numpy as np
# total number of neurons
n_neuron = 100
# total number of gap junctions
n_gap_junction = 3000
n = nest.Create('hh_psc_alpha_gap', n_neuron)
random.seed(0)
# draw n_gap_junction pairs of random samples from the list of all
# neurons and reshape the data into two corresponding lists of neurons
m = np.transpose(
[random.sample(n, 2) for _ in range(n_gap_junction)])
# connect obtained lists of neurons both ways
nest.Connect(m[0], m[1],
{'rule': 'one_to_one', 'make_symmetric': True},
{'model': 'gap_junction', 'weight': 0.5})
As each gap junction contributes to the total number of gap-junction connections of two neurons, it is hardly possible to create networks with a fixed number of gap junctions per neuron. With the above script it is, however, possible to control the approximate number of gap junctions per neuron. E.g., if one desires gap_per_neuron = 60, the total number of gap junctions should be chosen as n_gap_junction = n_neuron * gap_per_neuron / 2 (for the script above: 100 * 60 / 2 = 3000).
Note: The (necessary) drawback of creating the random connections on the Python level is the serialization of the connection procedure in terms of computation time and memory in distributed simulations. Each compute node participating in the simulation needs to draw the identical full set of random numbers and temporarily represent the total connectivity in the variable m. Therefore, it is advisable to use the internal random connection rules of NEST for the creation of connections whenever possible. For more details see Hahne et al. 2016.
Adjust settings of iterative solution scheme¶
For simulations with gap junctions, NEST uses an iterative solution scheme based on a numerical method called Jacobi waveform relaxation. The default settings of the iterative method are based on numerical results, benchmarks and previous experience with gap-junction simulations (see Hahne et al. 2015) and should only be changed with proper knowledge of the method. In general, the following parameters can be set via kernel parameters:
nest.SetKernelStatus({'use_wfr': True,
'wfr_comm_interval': 1.0,
'wfr_tol': 0.0001,
'wfr_max_iterations': 15,
'wfr_interpolation_order': 3})
For a detailed description of the parameters and their function see (Hahne et al. 2016, Table 2).
Simulations with precise spike times¶
The simulation resolution h and the minimum synaptic transmission delay dmin define the two major time intervals of the scheduling and simulation flow of NEST: neurons update their state variables in steps of h, whereas spikes are communicated and delivered to their targets in steps of dmin, where dmin is a multiple of h.
Traditionally, spikes are constrained to the simulation grid such that neurons can propagate their state variables for an entire h-step without interruption by incoming spikes. This enables faster simulations of neurons with linear sub-threshold dynamics as a precomputed propagator matrix for a time step of fixed size h can be employed (Rotter & Diesmann, 1999).
Neurons buffer the incoming spikes until they become due, where spikes can be lumped together provided that the corresponding synapses have the same post-synaptic dynamics. Within a dmin-interval, each neuron independently proceeds in steps of h: it retrieves the inputs that are due in the current time step from its spike buffers and updates its state variables such as the membrane potential.
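Both intervals can be inspected via the kernel status; a minimal sketch (using the standard kernel properties resolution and min_delay, an assumption of this sketch; the reported d_min is always a multiple of h):

import nest

nest.ResetKernel()
nest.SetKernelStatus({'resolution': 0.1})                  # h = 0.1 ms
print(nest.GetKernelStatus(['resolution', 'min_delay']))   # [h, d_min]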

Propagation of membrane potential in case of grid-constrained spiking. Filled dots indicate update of membrane potential; black cross indicates detection of threshold crossing. As visual guidance, dashed black curves indicate time course of membrane potential. For simplicity, d_min=2h.¶
If after an update the membrane potential is above the firing threshold, the neuron emits a spike and resets its membrane potential. Due to time discretization both spike and reset happen at the right border of the h-step in which the threshold crossing occurred; the spike is time stamped accordingly.
NEST also enables simulations with precise spike times, which are represented by an integer time stamp and a double precision offset. As incoming spikes divide the h-steps into substeps, a neuron needs to update its state variables for each substep.

Propagation of membrane potential in case of off-grid spiking. Dashed red line indicates precise time of threshold crossing.¶
If after an update the membrane potential is above the firing threshold, the neuron determines the precise offset of the outgoing spike with respect to the next point on the time grid. This grid point marks the spike’s time stamp. The neuron then emits the spike and resets its membrane potential.
Models with precise spike times in NEST¶
poisson_generator_ps creates Poissonian spike trains, where spike times have an integer time stamp and a double precision offset. It is hence dedicated to simulations with precise spike times. The device can also be connected to grid-constrained neuron models, which only use the time stamps of the spikes and ignore their offsets. However, spike generation with poisson_generator_ps is less efficient than with its grid-constrained counterpart poisson_generator.
parrot_neuron_ps repeats the incoming spikes just as its grid-constrained counterpart parrot_neuron, but it is able to represent precise spike times.
iaf_psc_delta_ps is an integrate-and-fire neuron model with delta-shaped post-synaptic currents that employs precise spike times; its grid-constrained counterpart is iaf_psc_delta. In this model, the precise location of an outgoing spike is determined analytically.
iaf_psc_alpha_ps and iaf_psc_alpha_presc are integrate-and-fire neuron models with alpha-shaped post-synaptic currents that employ precise spike times; their grid-constrained counterpart is iaf_psc_alpha. The neuron models have been developed in the context of Morrison et al. (2007). As both models employ interpolation in order to determine the precise location of an outgoing spike, the achieved precision depends on the simulation resolution h. The models differ in the way they process incoming spikes, which also affects the attained precision (see Morrison et al. (2007) for details).
iaf_psc_exp_ps is an integrate-and-fire neuron model with exponentially shaped post-synaptic currents that employs precise spike times; its grid-constrained counterpart is iaf_psc_exp. It has been developed in the context of Hanuschkin et al. (2010), which is a continuation of the work presented in Morrison et al. (2007). As the neuron model employs an iterative search in order to determine the precise location of an outgoing spike, the achieved precision does not depend on the simulation resolution h. The model can also be used through the PyNN interface.

The source code of these models is in the precise module of NEST.
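A minimal usage sketch combining a precise generator with a precise neuron model (the rate and weight values are illustrative, and the spike_detector property precise_times used to record offsets is an assumption of this sketch):

import nest

nest.ResetKernel()
pg = nest.Create('poisson_generator_ps', params={'rate': 1000.0})
n = nest.Create('iaf_psc_delta_ps')
sd = nest.Create('spike_detector', params={'precise_times': True})  # assumption: record precise offsets

nest.Connect(pg, n, syn_spec={'weight': 100.0, 'delay': 1.0})
nest.Connect(n, sd)
nest.Simulate(100.0)

print(nest.GetStatus(sd, 'events')[0]['times'])  # spike times no longer bound to the grid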
Using NEST with MUSIC¶
Introduction¶
NEST supports the MUSIC interface, a standard by the INCF, which allows the transmission of data between applications at runtime [1]. It can be used to couple NEST with other simulators, with applications for stimulus generation, data analysis and visualization, and with custom applications that also use the MUSIC interface.
Basically, all communication with MUSIC is mediated via proxies that receive/send data from/to remote applications using MUSIC. Different proxies are used for the different types of data. At the moment, NEST supports sending and receiving spike events, and receiving continuous data and string messages.
You can find the installation instructions for MUSIC on their Github Page: INCF/MUSIC
Reference¶
[1] Djurfeldt M, et al. 2010. Run-time interoperability between neuronal simulators based on the MUSIC framework. Neuroinformatics, 8. doi:10.1007/s12021-010-9064-z
Sending and receiving spike events¶
A minimal example for the exchange of spikes between two independent instances of NEST is given in the example examples/nest/music/minimalmusicsetup.music. It sends spikes using a music_event_out_proxy and receives the spikes using a music_event_in_proxy.
stoptime=0.01
[from]
binary=./minimalmusicsetup_sendnest.py
np=1
[to]
binary=./minimalmusicsetup_receivenest.py
np=1
from.spikes_out -> to.spikes_in [1]
This configuration file sets up two applications, from and to, which are both instances of NEST. The first runs a script to send spike events on the MUSIC port spikes_out to the second, which receives the events on the port spikes_in. The width of the port is 1.
The content of minimalmusicsetup_sendnest.py is contained in the following listing.
First, we import nest and set up a check to ensure MUSIC is installed before continuing.
import nest

if not nest.ll_api.sli_func("statusdict/have_music ::"):
    import sys
    print("NEST was not compiled with support for MUSIC, not running.")
    sys.exit()

nest.set_verbosity("M_ERROR")
Next, we create a spike_generator and set the spike times. We then create our neuron model (iaf_psc_alpha) and connect the neuron with the spike generator.
sg = nest.Create('spike_generator')
nest.SetStatus(sg, {'spike_times': [1.0, 1.5, 2.0]})
n = nest.Create('iaf_psc_alpha')
nest.Connect(sg, n, 'one_to_one', {'weight': 750.0, 'delay': 1.0})
We then create a voltmeter, which will measure the membrane potential, and connect it with the neuron.
vm = nest.Create('voltmeter')
nest.SetStatus(vm, {'to_memory': False, 'to_screen': True})
nest.Connect(vm, n)
Finally, we create a music_event_out_proxy, which forwards the spikes it receives directly to the MUSIC event output port spikes_out. The spike generator is connected to the music_event_out_proxy on channel 0 and the network is simulated for 10 milliseconds.
meop = nest.Create('music_event_out_proxy')
nest.SetStatus(meop, {'port_name': 'spikes_out'})
nest.Connect(sg, meop, 'one_to_one', {'music_channel': 0})
nest.Simulate(10)
The next listing contains the content of minimalmusicsetup_receivenest.py, which is set up similarly to the above script, but without the spike generator.
import nest

if not nest.ll_api.sli_func("statusdict/have_music ::"):
    import sys
    print("NEST was not compiled with support for MUSIC, not running.")
    sys.exit()
nest.set_verbosity("M_ERROR")
meip = nest.Create('music_event_in_proxy')
nest.SetStatus(meip, {'port_name': 'spikes_in', 'music_channel': 0})
n = nest.Create('iaf_psc_alpha')
nest.Connect(meip, n, 'one_to_one', {'weight': 750.0})
vm = nest.Create('voltmeter')
nest.SetStatus(vm, {'to_memory': False, 'to_screen': True})
nest.Connect(vm, n)
nest.Simulate(10)
Running the example using mpirun -np 2 music minimalmusicsetup.music yields the following output, which shows that the neurons in both processes receive the same input from the spike_generator in the first NEST process and show the same membrane potential trace.
2 1 -70
2 2 -70
2 3 -68.1559
2 4 -61.9174
2 5 -70
2 6 -70
2 7 -70
2 8 -65.2054
2 9 -62.1583
2 1 -70
2 2 -70
2 3 -68.1559
2 4 -61.9174
2 5 -70
2 6 -70
2 7 -70
2 8 -65.2054
2 9 -62.1583
Receiving string messages¶
Currently, NEST is only able to receive string messages, not to send them. We thus use MUSIC's messagesource program for the generation of messages in the following example. The configuration file (msgtest.music) is shown below:
stoptime=1.0
np=1
[from]
binary=messagesource
args=messages
[to]
binary=./msgtest.py
from.out -> to.msgdata [0]
This configuration file connects MUSIC's messagesource program to the port msgdata of a NEST instance. The messagesource program needs a data file, which contains the messages and the corresponding time stamps. For this example, we use the data file messages0.dat:
0.3 Hello
0.7 !
Note

In MUSIC, the default unit for the specification of times is seconds, while NEST uses milliseconds; the times 0.3 and 0.7 in messages0.dat thus appear as 300.0 and 700.0 in the NEST output below.
The script that sets up the receiving side (msgtest.py) of the example is shown in the following listing. We first import NEST and create an instance of the music_message_in_proxy. We then set the name of the port it listens on to msgdata. The network is simulated in steps of 10 ms.
#!/usr/bin/python

import nest

mmip = nest.Create('music_message_in_proxy')
nest.SetStatus(mmip, {'port_name': 'msgdata'})

# Simulate and get message data with a granularity of 10 ms:
time = 0
while time < 1000:
    nest.Simulate(10)
    data = nest.GetStatus(mmip, 'data')
    print(data)
    time += 10
We then run the example using
mpirun -np 2 music msgtest.music
which yields the following output:
Nov 23 11:18:23 music_message_in_proxy::calibrate() [Info]:
Mapping MUSIC input port 'msgdata' with width=0 and acceptable latency=0
ms.
Nov 23 11:18:23 NodeManager::prepare_nodes [Info]:
Preparing 1 node for simulation.
Nov 23 11:18:23 MUSICManager::enter_runtime [Info]:
Entering MUSIC runtime with tick = 0.1 ms
Nov 23 11:18:23 SimulationManager::start_updating_ [Info]:
Number of local nodes: 1
Simulation time (ms): 10
Number of OpenMP threads: 1
Number of MPI processes: 1
Nov 23 11:18:23 SimulationManager::run [Info]:
Simulation finished.
({'messages_times': array([], dtype=float64), 'messages': ()},)
Nov 23 11:18:23 NodeManager::prepare_nodes [Info]:
Preparing 1 node for simulation.
Nov 23 11:18:23 SimulationManager::start_updating_ [Info]:
Number of local nodes: 1
Simulation time (ms): 10
Number of OpenMP threads: 1
Number of MPI processes: 1
Nov 23 11:18:23 SimulationManager::run [Info]:
Simulation finished.
({'messages_times': array([], dtype=float64), 'messages': ()},)
.
.
Nov 23 11:18:23 NodeManager::prepare_nodes [Info]:
Preparing 1 node for simulation.
Nov 23 11:18:23 SimulationManager::start_updating_ [Info]:
Number of local nodes: 1
Simulation time (ms): 10
Number of OpenMP threads: 1
Number of MPI processes: 1
Nov 23 11:18:23 SimulationManager::run [Info]:
Simulation finished.
({'messages_times': array([ 300., 700.]), 'messages': ('Hello', '!')},)
Receiving continuous data¶
As in the case of string messages, NEST currently only supports receiving continuous data, but not sending. This means that we have to use another of MUSIC's test programs to generate the data for us. This time, we use constsource, which generates a sequence of numbers from 0 to w-1, where w is the width of the port. The MUSIC configuration file (conttest.music) is shown in the following listing:
stoptime=1.0
[from]
np=1
binary=constsource
[to]
np=1
binary=./conttest.py
from.contdata -> to.contdata [10]
The receiving side is again implemented using a PyNEST script (conttest.py). We first import NEST and create an instance of the music_cont_in_proxy. We set the name of the port it listens on to contdata. We then simulate the network in steps of 10 ms.
#!/usr/bin/python

import nest

mcip = nest.Create('music_cont_in_proxy')
nest.SetStatus(mcip, {'port_name': 'contdata'})

# Simulate and get vector data with a granularity of 10 ms:
time = 0
while time < 1000:
    nest.Simulate(10)
    data = nest.GetStatus(mcip, 'data')
    print(data)
    time += 10
The example is run using
mpirun -np 2 music conttest.music
which yields the following output:
Nov 23 11:33:26 music_cont_in_proxy::calibrate() [Info]:
Mapping MUSIC input port 'contdata' with width=10.
Nov 23 11:33:26 NodeManager::prepare_nodes [Info]:
Preparing 1 node for simulation.
Nov 23 11:33:26 MUSICManager::enter_runtime [Info]:
Entering MUSIC runtime with tick = 0.1 ms
Nov 23 11:33:28 SimulationManager::start_updating_ [Info]:
Number of local nodes: 1
Simulation time (ms): 10
Number of OpenMP threads: 1
Number of MPI processes: 1
Nov 23 11:33:28 SimulationManager::run [Info]:
Simulation finished.
(array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]),)
.
.
Nov 23 11:33:28 NodeManager::prepare_nodes [Info]:
Preparing 1 node for simulation.
Nov 23 11:33:28 SimulationManager::start_updating_ [Info]:
Number of local nodes: 1
Simulation time (ms): 10
Number of OpenMP threads: 1
Number of MPI processes: 1
Nov 23 11:33:28 SimulationManager::run [Info]:
Simulation finished.
(array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]),)
Getting Help¶
Have a specific question or problem with NEST?¶
Check out the troubleshooting section for common issues.
If your question is not answered there, ask our Mailing List.
Getting help on the command line interface¶
The helpdesk() command will launch the documentation pages in your browser. See Set up the integrated helpdesk to specify the browser of your choice.

To access the High-level Python API reference material you can use the commands:

# list all functions and attributes
dir(nest)

# get the docstring for a function in Python
help('nest.FunctionName')

# or in IPython
nest.FunctionName?
To access a specific C++ or SLI reference page for an object, command or parameter you can use the command:
nest.help('name')
Model Information¶
To get a complete list of the models available in NEST type:
nest.Models()
To get a list of only neuron models use:
nest.Models(mtype='nodes', sel=None)
To get a list of only synapse models use:
nest.Models(mtype='synapses', sel=None)
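The sel argument filters the returned names; for instance, to list only node models whose name contains a given substring (here 'iaf', an illustrative choice):

nest.Models(mtype='nodes', sel='iaf')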
To get details on model parameters and usage use:
nest.help('model_name')
Set up the integrated helpdesk¶
The command helpdesk needs to know which browser to launch in order to display the help pages. The browser is set as an option of helpdesk. Please see the file ~/.nestrc for an example setting firefox as browser. Please note that the command helpdesk does not work if you have compiled NEST with MPI support; in that case you have to enter the address of the helpdesk (file://$PREFIX/share/doc/nest) manually into the browser. Please replace $PREFIX with the prefix you chose during the configuration of NEST. If you did not explicitly specify one, it is most likely set to /usr or /usr/local, depending on what system you use.
PyNEST API¶
Here is a list of functions for the PyNEST interface.
Functions related to models¶
Functions for model handling
- nest.lib.hl_api_models.ConnectionRules()¶
Return a tuple of all available connection rules, sorted by name.
- Returns: Available connection rules
- Return type: tuple
- nest.lib.hl_api_models.CopyModel(existing, new, params=None)¶
Create a new model by copying an existing one.
- nest.lib.hl_api_models.GetDefaults(model, keys=None, output='')¶
Return default parameters of the given model, specified by a string.
- Parameters
model (str) – Name of the model
keys (str or list, optional) – String or a list of strings naming model properties. GetDefaults then returns a single value or a list of values belonging to the keys given.
output (str, optional) – Whether the returned data should be in a selected format (output='json'). Default is ''.
- Returns
dict – A dictionary of default parameters.
type – If keys is a string, the corresponding default parameter is returned.
list – If keys is a list of strings, a list of corresponding default parameters is returned.
str – If output is 'json', returns parameters in JSON format.
- Raises
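A short usage sketch (model and property names taken from the examples earlier in this document):

nest.GetDefaults('iaf_psc_alpha')                             # full dictionary of defaults
nest.GetDefaults('iaf_psc_alpha', 'V_reset')                  # single value
nest.GetDefaults('iaf_psc_alpha', ['V_reset', 'tau_syn_ex'])  # list of values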
- nest.lib.hl_api_models.Models(mtype='all', sel=None)¶
Return a tuple of model names, sorted by name.
All available models are neurons, devices and synapses.
- Parameters
- Returns: Available model names
- Return type: tuple
- Raises
ValueError
Notes
- Synapse model names ending with '_hpc' provide minimal memory requirements by using thread-local target neuron IDs and fixing the 'rport' to 0.
- Synapse model names ending with '_lbl' allow assigning an individual integer label ('synapse_label') to created synapses, at the cost of increased memory requirements.
- nest.lib.hl_api_models.SetDefaults(model, params, val=None)¶
Set the default parameter values of the given model.
New default values are used for all subsequently created instances of the model.
Functions related to the creation and retrieval of nodes (neurons, devices)¶
Functions for node handling
- nest.lib.hl_api_nodes.Create(model, n=1, params=None)¶
Create one or more nodes.
Generates n new network objects of the supplied model type. If n is not given, a single node is created. Note that if setting parameters of the nodes fails, the nodes will still have been created.
- Parameters
- Returns
Global IDs of created nodes
- Return type
- Raises
NESTError – If setting node parameters fails. However, the nodes will still have been created.
- nest.lib.hl_api_nodes.GetLID(gid)¶
Return the local id of a node with the global ID gid.
Deprecated since version 2.14: GetLID is deprecated and will be removed in NEST 3.0. Use index into GIDCollection instead.
Functions related to setting and getting parameters¶
Functions to get information on NEST.

- nest.lib.hl_api_info.authors()¶
Print the authors of NEST.
- nest.lib.hl_api_info.get_argv()¶
Return argv as seen by NEST.
This is similar to Python sys.argv but might have changed after MPI initialization.
- Returns: Argv, as seen by NEST
- Return type: list
- nest.lib.hl_api_info.GetStatus(nodes, keys=None, output='')¶
Return the parameter dictionaries of nodes or connections.
If keys is given, a list of values is returned instead. keys may also be a list, in which case the returned list contains lists of values.
- Parameters
nodes (list or tuple) – Either a list of global ids of nodes, or a tuple of connection handles as returned by GetConnections.
keys (str or list, optional) – String or a list of strings naming model properties. GetStatus then returns a single value or a list of values belonging to the keys given.
output (str, optional) – Whether the returned data should be in a selected format (output='json'). Default is ''.
- Returns
dict – All parameters
type – If keys is a string, the corresponding parameter is returned.
list – If keys is a list of strings, a list of corresponding parameters is returned.
str – If output is json, returns parameters in JSON format.
- Raises
TypeError – If nodes or keys are of the wrong form.
See also
- nest.lib.hl_api_info.help(obj=None, pager=None, return_text=False)¶
Show the help page for the given object using the given pager.
The default pager is more (see .nestrc).
- nest.lib.hl_api_info.helpdesk()¶
Open the NEST helpdesk in the browser.
Uses the system default browser.
- nest.lib.hl_api_info.message(level, sender, text)¶
Print a message using the message system of NEST.
- Parameters
level – Level
sender – Message sender
text (str) – Text to be sent in the message
- nest.lib.hl_api_info.SetStatus(nodes, params, val=None)¶
Set parameters of nodes or connections.
Parameters of nodes or connections, given in nodes, are set as specified by params. If val is given, params has to be a string with the name of an attribute, which is set to val on the nodes/connections. val can be a single value or a list of the same size as nodes.
- Parameters
nodes (list or tuple) – Either a list of global ids of nodes, or a tuple of connection handles as returned by GetConnections.
params (str or dict or list) – Dictionary of parameters or list of dictionaries of parameters of same length as nodes. If val is given, this has to be the name of a model property as a str.
val (str, optional) – If given, params has to be the name of a model property.
- Raises
TypeError – If nodes is not a list of nodes or synapses, or if the number of parameters doesn't match the number of nodes or synapses.
See also
- nest.lib.hl_api_info.sysinfo()¶
Print information on the platform on which NEST was compiled.
Functions related to connections¶
Functions for connection handling
- nest.lib.hl_api_connections.CGConnect(pre, post, cg, parameter_map=None, model='static_synapse')¶
Connect neurons using the Connection Generator Interface.
Potential pre-synaptic neurons are taken from pre, potential post-synaptic neurons are taken from post. The connection generator cg specifies the exact connectivity to be set up. The parameter_map can either be None or a dictionary that maps the keys weight and delay to their integer indices in the value set of the connection generator.
This function is only available if NEST was compiled with support for libneurosim.
For further information, see:
- The NEST documentation on using the CG Interface
- The GitHub repository and documentation for libneurosim at https://github.com/INCF/libneurosim/
- The publication about the Connection Generator Interface at https://doi.org/10.3389/fninf.2014.00043
- Parameters
pre (list or numpy.array) – must contain a list of GIDs
post (list or numpy.array) – must contain a list of GIDs
cg (connection generator) – libneurosim connection generator to use
parameter_map (dict, optional) – Maps names of values such as weight and delay to value set positions
model (str, optional) – Synapse model to use
- Raises
kernel.NESTError –
- nest.lib.hl_api_connections.CGParse(xml_filename)¶
Parse an XML file and return the corresponding connection generator cg.
The library to provide the parsing can be selected by CGSelectImplementation().
- Parameters
xml_filename (str) – Filename of the xml file to parse.
- Raises
kernel.NESTError –
- nest.lib.hl_api_connections.CGSelectImplementation(tag, library)¶
Select a library to provide a parser for XML files and associate an XML tag with the library.
XML files can be read by CGParse().
- nest.lib.hl_api_connections.Connect(pre, post, conn_spec=None, syn_spec=None, model=None)¶
Connect pre nodes to post nodes.
Nodes in pre and post are connected using the specified connectivity (all-to-all by default) and synapse type (static_synapse by default). Details depend on the connectivity rule.
- Parameters
pre (list) – Presynaptic nodes, as list of GIDs
post (list) – Postsynaptic nodes, as list of GIDs
conn_spec (str or dict, optional) – Specifies connectivity rule, see below
syn_spec (str or dict, optional) – Specifies synapse model, see below
model (str or dict, optional) – alias for syn_spec for backward compatibility
- Raises
kernel.NESTError –
Notes
Connect does not iterate over subnets, it only connects explicitly specified nodes.
Connectivity specification (conn_spec)
Available rules and associated parameters:
- 'all_to_all' (default)
- 'one_to_one'
- 'fixed_indegree', 'indegree'
- 'fixed_outdegree', 'outdegree'
- 'fixed_total_number', 'N'
- 'pairwise_bernoulli', 'p'
See Connection Rules for more details, including example usage.
Synapse specification (syn_spec)
The synapse model and its properties can be given either as a string identifying a specific synapse model (default: static_synapse) or as a dictionary specifying the synapse model and its parameters.
Available keys in the synapse specification dictionary are:
- 'model'
- 'weight'
- 'delay'
- 'receptor_type'
- any parameters specific to the selected synapse model
See Synapse Specification for details, including example usage.
All parameters are optional and if not specified, the default values of the synapse model will be used. The key 'model' identifies the synapse model; this can be one of NEST's built-in synapse models or a user-defined model created via CopyModel(). If model is not specified, the default model static_synapse will be used.
Any distributed parameter must be initialised with a further dictionary specifying the distribution type (distribution, e.g. normal) and any distribution-specific parameters (e.g. mu and sigma). See Distributed parameters for more info; a sketch follows after the snippets below.
To see all available distributions, run:
nest.sli_run('rdevdict info')
To get information on a particular distribution, e.g. ‘binomial’, run:
nest.help('rdevdict::binomial')
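A sketch of such a distributed parameter in a synapse specification (distribution name and parameter keys as described above; pre and post are placeholder GID lists for this example):

syn_spec = {'model': 'static_synapse',
            'delay': 1.0,
            'weight': {'distribution': 'normal', 'mu': 1.0, 'sigma': 0.1}}
nest.Connect(pre, post, 'all_to_all', syn_spec)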
See also
- nest.lib.hl_api_connections.DataConnect(pre, params=None, model='static_synapse')¶
Connect neurons from lists of connection data.
- Parameters
- Raises
Notes
Usage Variants
Variant 1:
Connect each neuron in pre to the targets given in params, using synapse type model
pre: [gid_1, ..., gid_n]
params: [{param_1}, ..., {param_n}]
model: 'synapse_model'
The dictionaries param_1 to param_n must contain at least the following keys:
- 'target'
- 'weight'
- 'delay'
Each key must resolve to a list or numpy.ndarray of values.
Depending on the synapse model, other parameters can be given in the same format. All arrays in params must have the same length as ‘target’.
Variant 2:
Connect neurons according to a list of synapse status dictionaries, as obtained from
GetStatus()
pre = [{synapse_state1}, ..., {synapse_state_n}]
params = None
model = None
During connection, status dictionary misses will not raise errors, even if the kernel property dict_miss_is_error is True.
- nest.lib.hl_api_connections.Disconnect(pre, post, conn_spec='one_to_one', syn_spec='static_synapse')¶
Disconnect pre neurons from post neurons.
Neurons in pre and post are disconnected using the specified disconnection rule (one-to-one by default) and synapse type (static_synapse by default). Details depend on the disconnection rule.
- Parameters
Notes
conn_spec
Apply the same rules as for connectivity specs in the Connect method.
Possible choices of the conn_spec are:
- 'one_to_one'
- 'all_to_all'
syn_spec
The synapse model and its properties can be inserted either as a string describing one synapse model (synapse models are listed in the synapsedict) or as a dictionary as described below.
Note that only the synapse type is checked when we disconnect and that if syn_spec is given as a non-empty dictionary, the model parameter must be present.
If no synapse model is specified, the default model static_synapse will be used.
Available keys in the synapse dictionary are:
- 'model'
- 'weight'
- 'delay'
- 'receptor_type'
- parameters specific to the synapse model chosen
All parameters are optional and if not specified will use the default values determined by the current synapse model.
model determines the synapse type, taken from pre-defined synapse types in NEST or manually specified synapses created via CopyModel().
All other parameters are not currently implemented.
Disconnect does not iterate over subnets, it only disconnects explicitly specified nodes.
- nest.lib.hl_api_connections.DisconnectOneToOne(source, target, syn_spec)¶
Disconnect a currently existing synapse.
Deprecated: DisconnectOneToOne is deprecated and will be removed in NEST 3.0. Use Disconnect instead.
- nest.lib.hl_api_connections.GetConnections(source=None, target=None, synapse_model=None, synapse_label=None)¶
Return an array of connection identifiers.
Any combination of source, target, synapse_model and synapse_label parameters is permitted.
- Parameters
source (list, optional) – Source GIDs, only connections from these pre-synaptic neurons are returned
target (list, optional) – Target GIDs, only connections to these post-synaptic neurons are returned
synapse_model (str, optional) – Only connections with this synapse type are returned
synapse_label (int, optional) – (non-negative) only connections with this synapse label are returned
- Returns
Connections as 5-tuples with entries (source-gid, target-gid, target-thread, synapse-id, port)
- Return type
array
- Raises
Notes
Only connections with targets on the MPI process executing the command are returned.
Functions related to simulation¶
Functions for simulation control
- nest.lib.hl_api_simulation.Cleanup()¶
Clean up resources after a Run call. Not needed for Simulate.
Closes state for a series of runs, such as flushing and closing files. A Prepare is needed after a Cleanup before any more calls to Run.
- nest.lib.hl_api_simulation.DisableStructuralPlasticity()¶
Disable structural plasticity for the network simulation.
See also
- nest.lib.hl_api_simulation.EnableStructuralPlasticity()¶
Enable structural plasticity for the network simulation.
See also
- nest.lib.hl_api_simulation.GetKernelStatus(keys=None)¶
Obtain parameters of the simulation kernel.
- Parameters
keys (str or list, optional) – Single parameter name or list of parameter names
- Returns
dict – Parameter dictionary, if called without argument
type – Single parameter value, if called with single parameter name
list – List of parameter values, if called with list of parameter names
- Raises
TypeError – If keys are of the wrong type.
See also
- nest.lib.hl_api_simulation.GetStructuralPlasticityStatus(keys=None)¶
Get the current structural plasticity parameters.
- Parameters
keys (str or list, optional) – Keys indicating the values of interest to be retrieved by the get call
See also
- nest.lib.hl_api_simulation.Install(module_name)¶
Load a dynamically linked NEST module.
- Parameters
module_name (str) – Name of the dynamically linked module
- Returns
NEST module identifier, required for unloading
- Return type
handle
Notes
Dynamically linked modules are searched in the LD_LIBRARY_PATH (DYLD_LIBRARY_PATH under OSX).
Example
- nest.lib.hl_api_simulation.Prepare()¶
Calibrate the system before a Run call. Not needed for Simulate.
Call before the first Run call, or before calling Run after changing the system, calling SetStatus or Cleanup.
- nest.lib.hl_api_simulation.ResetKernel()¶
Reset the simulation kernel.
This will destroy the network as well as all custom models created with CopyModel(). Calling this function is equivalent to restarting NEST.
In particular,
all network nodes
all connections
all user-defined neuron and synapse models
are deleted, and
time
random generators
are reset. The only exception is that dynamically loaded modules are not unloaded. This may change in a future version of NEST.
- nest.lib.hl_api_simulation.ResetNetwork()¶
Reset all nodes and connections to their original state.
Deprecated since version 2.18: ResetNetwork is deprecated and will be removed in NEST 3.0, because this function is not fully able to reset network and simulator state. The only reliable way to reset state is to call ResetKernel and then rebuild the network.
Resets the dynamic state of the entire network to its original state. The dynamic state comprises typically the membrane potential, synaptic currents, buffers holding input that has been delivered, but not yet become effective, and all events pending delivery. Node parameters, such as time constants and threshold potentials, are not affected.
However, note that
Time and random number generators are NOT reset.
Files belonging to recording devices (spike detector, multimeter, voltmeter, etc) are closed. You must change the file name before simulating again. Otherwise the files can be overwritten or you will receive an error.
ResetNetwork will reset the nodes to the state values stored in the model prototypes. So if you have used SetDefaults to change a state value of a model since simulating the first time, the network will NOT be reset to the status at T=0.
The dynamic state of synapses with internal dynamics (STDP, facilitation) is NOT reset at present. This will be implemented in a future version of NEST.
See also
- nest.lib.hl_api_simulation.Run(t)¶
Simulate the network for t milliseconds.
- Parameters
t (float) – Time to simulate in ms
Notes
Call between Prepare and Cleanup calls, or within a with RunManager clause.
Simulate(t) is equivalent to: t' = t/m; Prepare(); for _ in range(m): Run(t'); Cleanup()
Prepare must be called before Run to calibrate the system, and Cleanup must be called after Run to close files, clean up handles, and so on. After Cleanup, Prepare can and must be called before more Run calls. Any calls to SetStatus between Prepare and Cleanup have undefined behaviour.
See also
- nest.lib.hl_api_simulation.RunManager()¶
ContextManager for Run.
Calls Prepare before a series of Run calls, and calls Cleanup at the end.
E.g.:

with RunManager():
    for i in range(10):
        Run(100.0)  # Run() requires the simulation time t in ms
See also
- nest.lib.hl_api_simulation.SetKernelStatus(params)¶
Set parameters for the simulation kernel.
- Parameters
params (dict) – Dictionary of parameters to set.
See also
- nest.lib.hl_api_simulation.SetStructuralPlasticityStatus(params)¶
Set structural plasticity parameters for the network simulation.
- Parameters
params (dict) – Dictionary of structural plasticity parameters to set
See also
- nest.lib.hl_api_simulation.Simulate(t)¶
Simulate the network for t milliseconds.
- Parameters
t (float) – Time to simulate in ms
See also
RunManager(), ResumeSimulation()
Functions related to parallel computing¶
Functions for parallel computing
- nest.lib.hl_api_parallel_computing.NumProcesses()¶
Return the overall number of MPI processes.
- Returns: Number of overall MPI processes
- Return type: int
- nest.lib.hl_api_parallel_computing.Rank()¶
Return the MPI rank of the local process.
- Returns: MPI rank of the local process
- Return type: int
Note
DO NOT USE Rank() TO EXECUTE ANY FUNCTION IMPORTED FROM THE nest MODULE ON A SUBSET OF RANKS IN AN MPI-PARALLEL SIMULATION.
This will lead to unpredictable behavior. Symptoms may be an error message about non-synchronous global random number generators or deadlocks during simulation. In the worst case, the simulation may complete but generate nonsensical results.
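To illustrate, a sketch of the anti-pattern and a safe alternative (the choice of rank 0 is arbitrary):

# WRONG: NEST functions must be executed on all ranks;
# guarding them by rank leads to deadlocks or desynchronized RNGs
if nest.Rank() == 0:
    nest.Simulate(100.0)   # other ranks never enter Simulate

# Safe: restrict only non-NEST work, such as logging, to a single rank
if nest.Rank() == 0:
    print('simulation done')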
- nest.lib.hl_api_parallel_computing.SetAcceptableLatency(port_name, latency)¶
Set the acceptable latency (in ms) for a MUSIC port.
- nest.lib.hl_api_parallel_computing.SetMaxBuffered(port_name, size)¶
Set the maximum buffer size for a MUSIC port.
- nest.lib.hl_api_parallel_computing.SyncProcesses()¶
Synchronize all MPI processes.
Functions related to helper info¶
These are helper functions to ease the definition of the high-level API of the PyNEST wrapper.
- nest.lib.hl_api_helper.broadcast(item, length, allowed_types, name='item')¶
Broadcast item to given length.
- nest.lib.hl_api_helper.deprecated(alt_func_name, text=None)¶
Decorator for deprecated functions.
Shows a warning and calls the original function.
- nest.lib.hl_api_helper.get_help_filepath(hlpobj)¶
Get file path of help object.
Prints message if no help is available for hlpobj.
- Parameters
hlpobj (string) – Object to display help for
- Returns
Filepath of the help object or None if no help available
- Return type
string
- nest.lib.hl_api_helper.get_unistring_type()¶
Returns string type dependent on Python version.
- Returns
Depending on Python version
- Return type
str or basestring
- nest.lib.hl_api_helper.get_verbosity()¶
Return verbosity level of NEST's messages.
M_ALL=0, display all messages
M_INFO=10, display information messages and above
M_DEPRECATED=18, display deprecation warnings and above
M_WARNING=20, display warning messages and above
M_ERROR=30, display error messages and above
M_FATAL=40, display failure messages and above
- Returns
The current verbosity level
- Return type
- nest.lib.hl_api_helper.get_wrapped_text(text, width=80)¶
Formats a given multiline string to wrap at a given width, while preserving newlines (and removing excessive whitespace).
- nest.lib.hl_api_helper.is_coercible_to_sli_array(seq)¶
Checks whether a given object is coercible to a SLI array.
- nest.lib.hl_api_helper.is_iterable(seq)¶
Return True if the given object is an iterable, False otherwise.
- nest.lib.hl_api_helper.is_literal(obj)¶
Check whether obj is a "literal": a unicode string or SLI literal.
- nest.lib.hl_api_helper.is_sequence_of_connections(seq)¶
Checks whether the low-level API accepts seq as a sequence of connections.
- nest.lib.hl_api_helper.is_sequence_of_gids(seq)¶
Checks whether the argument is a potentially valid sequence of GIDs (non-negative integers).
- nest.lib.hl_api_helper.is_string(obj)¶
Check whether obj is a unicode string.
- nest.lib.hl_api_helper.load_help(hlpobj)¶
Returns documentation of the object.
- Parameters
hlpobj (object) – Object to display help for
- Returns
The documentation of the object or None if no help available
- Return type
string
- nest.lib.hl_api_helper.model_deprecation_warning(model)¶
Checks whether the model is to be removed in a future version of NEST. If so, a deprecation warning is issued.
- Parameters
model (str) – Name of model
- nest.lib.hl_api_helper.serializable(data)¶
Make data serializable for JSON.
- nest.lib.hl_api_helper.set_verbosity(level)¶
Change verbosity level for NEST's messages.
M_ALL=0, display all messages
M_INFO=10, display information messages and above
M_DEPRECATED=18, display deprecation warnings and above
M_WARNING=20, display warning messages and above
M_ERROR=30, display error messages and above
M_FATAL=40, display failure messages and above
- Parameters
level (str) – Can be one of ‘M_FATAL’, ‘M_ERROR’, ‘M_WARNING’, ‘M_DEPRECATED’, ‘M_INFO’ or ‘M_ALL’.
- nest.lib.hl_api_helper.show_deprecation_warning(func_name, alt_func_name=None, text=None)¶
Shows a deprecation warning for a function.
- nest.lib.hl_api_helper.show_help_with_pager(hlpobj, pager=None)¶
Output of doc in Python with pager or print.
- class nest.lib.hl_api_helper.SuppressedDeprecationWarning(no_dep_funcs)¶
Context manager turning off deprecation warnings for given methods.
Think thoroughly before use. This context should only be used as a way to make sure examples do not display deprecation warnings, that is, used in functions called from examples, and not as a way to make tedious deprecation warnings disappear.
- nest.lib.hl_api_helper.to_json(data)¶
Serialize data to JSON.
- nest.lib.hl_api_helper.uni_str¶
Alias of builtins.str.
NEST Community¶
Mailing List¶
The NEST users mailing list is intended to be a forum for questions on the usage of NEST, the exchange of code and general discussions about NEST. The philosophy is that all users profit by sharing their experience. All NEST core developers are subscribed to this list and will participate in the discussions as far as time allows.
By subscribing to the mailing list you will also get notified of all NEST related events!
Before submitting a question, please take a look at our guidelines for the NEST mailing list.
Open video meeting for users and developers¶
Every two weeks, we have an open video meeting to discuss current issues and developments in NEST. We welcome users with questions regarding their implementations or issues they want help solving to join. This is an opportunity to have discussions in real time with developers.
Information for dates and how to join can be found on our GitHub wiki
Publications using NEST¶
We have compiled a list of NEST-related peer-reviewed publications that we update regularly.
If you have used NEST in your research, let us know! Don’t forget to cite NEST in your work.
Have a talk or poster where you used NEST? Download our logo!
Become a NEST member¶
If you would like to be actively involved in the NEST Initiative and support its goals, please see our member page.
Contributing to NEST¶
NEST draws its strength from the many people that use and improve it. We are happy to consider your contributions (e.g., new models, bug or documentation fixes) for addition to the official version of NEST.
Report bugs and request features¶
If you find an error in the code or documentation or want to suggest a feature, submit an issue on GitHub.
Make sure to check that your issue has not already been reported there before creating a new one.
Make changes to code or documentation¶
You can find all the details for our development workflow on the NEST developer space.
For making changes to the PyNEST APIs, please see our PyNEST API template.
Contribute a Python example script¶
If you have a Python example network to contribute, please refer to our template to ensure you cover the required information.
Have a question?¶
If you want to get in contact with us, see our NEST Community page for ways you can reach us.
License¶
GNU GENERAL PUBLIC LICENSE¶
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
Preamble¶
The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software–to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation’s software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.
We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software.
Also, for each author’s protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors’ reputations.
Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone’s free use or not licensed at all.
The precise terms and conditions for copying, distribution and modification follow.
GNU GENERAL PUBLIC LICENSE¶
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION¶
0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The “Program”, below, refers to any such program or work, and a “work based on the Program” means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term “modification”.) Each licensee is addressed as “you”.
Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program’s source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program.
You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License.
c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program.
In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.
3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable.
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients’ exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License.
7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances.
It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.
This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and “any later version”, you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation.
10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.
NO WARRANTY¶
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.