ANNarchy 5.0.0

Network

Network class

The central object in ANNarchy is the Network instance, which holds all data structures apart from the neuron / synapse model definitions and global functions:

import ANNarchy as ann

neuron = ann.Neuron(
    parameters = dict(tau = 10.0),
    equations = [
        'tau * dv/dt  + v =  sum(exc)',
        'r = pos(v)'
    ]
)

synapse = ann.Synapse(
    parameters = dict(tau = 5000.0),
    equations = 'tau * dw/dt = pre.r * post.r ',
)

# Create the empty network
net = ann.Network()

# Create two populations
pop1 = net.create(10, neuron)
pop2 = net.create(10, neuron)

# Connect the two populations
proj = net.connect(pop1, pop2, 'exc', synapse)

# Monitor the second population
m = net.monitor(pop2, 'r')

Network.create() and Network.connect() are the main access points to create populations and projections. However, if you lose the references to pop1, pop2 and proj (for example because you created them inside a method without returning them), they become difficult to access afterwards.

One option is to iterate over the lists of populations and projections stored in the network, but you have to know what you are looking for:

for pop in net.get_populations():
    print(pop.r)

for proj in net.get_projections():
    print(proj.w)

It is also possible to provide a unique name to each population and projection at creation time, so they can be easily retrieved:

def create_network():
    net = ann.Network()
    pop1 = net.create(10, neuron, name='pop1')
    pop2 = net.create(10, neuron, name='pop2')
    proj = net.connect(pop1, pop2, 'exc', synapse, name='projection1')
    m = net.monitor(pop2, 'r', name='monitor')
    return net

net = create_network()
pop1 = net.get_population('pop1')
pop2 = net.get_population('pop2')
proj = net.get_projection('projection1')
m = net.get_monitor('monitor')

Another, safer option is to create your own class inheriting from ann.Network and store all populations and projections as attributes:

class SimpleNetwork (ann.Network):
    def __init__(self, N):
        self.pop1 = self.create(N, neuron)
        self.pop2 = self.create(N, neuron)
        self.proj = self.connect(self.pop1, self.pop2, 'exc', synapse)
        self.m    = self.monitor(self.pop2, 'r')

net = SimpleNetwork(10)
print(net.pop1.r)

You do not need to explicitly call the constructor of Network (Network.__init__(self)); this is done automatically. Creating a subclass of Network furthermore allows you to use the parallel_run() method, provided the whole network is constructed in the constructor.

Apart from that, the two approaches are equivalent; pick the one you prefer. Subclasses are easier to re-use, especially across files.
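
A subclass built this way can also be duplicated for batch simulations. The following is only a sketch: it assumes that parallel_run() accepts a simulation method and a number of network copies, similar to earlier ANNarchy releases, and the name run_trial is hypothetical; check the API reference for the exact signature:

def run_trial(net):
    # Simulate one copy of the network for a second and return the rates
    net.simulate(1000.)
    return net.pop1.r

net = SimpleNetwork(10)
net.compile() # compilation is described in the next section

# Assumed signature: simulate 4 identical copies in parallel
results = net.parallel_run(method=run_trial, number=4)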

Compiling the network

Once all the relevant information has been defined, one needs to actually compile the network, by calling the Network.compile() method:

net.compile()

The optimized C++ code is then generated and compiled, and the underlying objects are created and made available to the Python interface.

You can specify several arguments to compile(), including:

  • compiler: to select which C++ compiler will be used.
  • compiler_flags: to select which flags are passed to the compiler.
  • directory: absolute/relative path to the directory where files will be generated and compiled (default: annarchy/).

Compiler

ANNarchy requires a C++ compiler. On GNU/Linux, the default choice is g++, while on MacOS it is clang++. You can change the compiler (and its flags) either locally, during the call to net.compile() in your script:

net.compile(compiler="clang++", compiler_flags="-march=native -O3")

or globally by modifying the configuration file located at ~/.config/ANNarchy/annarchy.json:

{
    "openmp": {
        "compiler": "clang++",
        "flags": "-march=native -O3"
    }
}

Be careful with the flags: the optimization level -O3, for example, does not necessarily produce faster code. It does for most models, however, which is why it has been the default since the ANNarchy 4.7.x releases.

Even more caution is required when using the -ffast-math flag. It can increase performance, in particular in combination with SIMD, but it enables a set of optimizations that may violate IEEE 754 compliance. This is acceptable for many models, but the user should verify the results. For more details, see the g++ documentation: https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html

Note

In rare cases, the g++ compiler in use may fail to detect the CPU architecture (e.g. Intel's Tigerlake with g++ <= 9.4). This results in a compiler error, which can be fixed by removing the -march=native flag. To get access to AVX-512 SIMD instructions, add -mavx512f instead, as well as -ftree-vectorize if -O3 is not already used.

Directory

When calling compile(), the subfolder annarchy/ (or whatever is defined by the directory argument) will be created, and the generated code will be compiled. The first compilation may last a couple of seconds, but subsequent runs of the script are much faster. If no modification to the network has been made except for parameter values, it will not be recompiled, sparing this overhead.
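
For example, to generate and compile the code in a custom location (the path below is only illustrative):

net.compile(directory='./build/')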

ANNarchy tracks the changes in the script and re-generates the corresponding code. In some cases (e.g. after installing a new version of ANNarchy, or when tracking down bugs), it may be necessary to perform a fresh compilation of the network. You can either delete the annarchy/ subfolder and restart the script:

rm -rf annarchy/
python MyNetwork.py

pass the --clean flag to the script:

python MyNetwork.py --clean 

or tell compile() to start fresh:

net.compile(clean=True)

Simulating the network

After the network is compiled, the simulation can be run for the specified duration (in milliseconds) through the Network.simulate() method:

net.simulate(1000.0) # Simulate for 1 second

The provided duration should be a multiple of dt. If not, the number of simulation steps performed will be approximated.

In some cases, you may want to perform only one step of the simulation instead of specifying a duration. The Network.step() method can then be used:

net.step() # Simulate for 1 step
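
This is useful for custom control loops where the state of the network is inspected after every step. A minimal sketch, reusing the populations created above (the threshold and the step limit are arbitrary):

import numpy as np

steps = 0
while np.max(pop2.r) < 0.5 and steps < 10000:
    net.step() # Advance the simulation by a single dt
    steps += 1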

Setting the discretization step dt

An important value for the simulation is the discretization step dt. Its default value is 1 ms, which is usually fine for rate-coded networks, but may be too high for spiking networks, whose equations are stiffer. If dt is too high, numerical errors can become significant; if it is too low, the simulation will take an unnecessarily long time.

To set the discretization step, just pass the desired value to the constructor of Network:

net = ann.Network(dt=0.1)

It can also be set using the config() method, before projections are created:

net.config(dt=0.1)

However, changing its value after calling compile() will not have any effect.

You can always access the current value of dt with the attribute net.dt.
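
The number of steps performed by simulate() is simply the duration divided by dt:

net = ann.Network(dt=0.1)
# ... create populations and projections, then compile ...

net.simulate(1000.) # 1000. / 0.1 = 10000 steps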

If you create a subclass of Network, you can also provide dt to its constructor, even if your own constructor does not explicitly accept it:

class SimpleNetwork (ann.Network):
    def __init__(self, N):
        self.pop1 = self.create(N, neuron)
        self.pop2 = self.create(N, neuron)
        self.proj = self.connect(self.pop1, self.pop2, 'exc', synapse)

net = SimpleNetwork(N=10, dt=0.1)

Setting the seed of the random number generators

By default, the random number generators (RNG) are seeded with secrets.randbits(32), so each run of the simulation will be different. If you want deterministic simulations, you need to provide a fixed seed to the constructor of Network:

net = ann.Network(dt=0.1, seed=42)

If you define a subclass of Network, the seed can be passed to its constructor without further processing, just like dt:

class SimpleNetwork (ann.Network):
    def __init__(self, N):
        self.pop1 = self.create(N, neuron)
        self.pop2 = self.create(N, neuron)
        self.proj = self.connect(self.pop1, self.pop2, 'exc', synapse)

net = SimpleNetwork(N=10, dt=0.1, seed=42)

Note that this also sets the seed of the legacy numpy RNG, which is used whenever values are initialized with np.random.*.

If you use the new default RNG of numpy (rng = np.random.default_rng()) or ANNarchy's random distributions (see Random Distributions), you will have to seed it yourself. The seed of the network is accessible through the attribute net.seed:

rng = np.random.default_rng(seed=net.seed)

pop.r = ann.Uniform(min=0.0, max=1.0, rng=rng)

Note

Using the same seed with the OpenMP and CUDA backends will not lead to the same sequences of numbers!

Early-stopping

In some cases, you may want to stop the simulation when a criterion is fulfilled (for example, when a neural integrator exceeds a certain threshold) rather than after a fixed amount of time.

To do so, you can define a stop_condition when creating a Population:

pop1 = net.create( ... , stop_condition = "r > 1.0")

When calling the simulate_until() method instead of simulate():

t = net.simulate_until(max_duration=1000.0, populations=pop1)

the simulation will be stopped whenever the stop_condition of pop1 is met, i.e. when the firing rate of any neuron of pop1 is above 1.0. If the condition is never met, the simulation lasts at most max_duration. The method returns the effective duration of the simulation (to compute reaction times, for example).

The stop_condition can use any logical operation on the parameters and variables of the neuron associated with the population:

pop1 = net.create( ... , stop_condition = "(r > 1.0) and (mp < 2.0)")

By default, the simulation stops when at least one neuron in the population fulfills the criterion. If you want to stop the simulation when all neurons fulfill the condition, you can use the flag all after the condition:

pop1 = net.create( ... , stop_condition = "r > 1.0 : all")

The flag any is the default behavior and can be omitted.

The stop criterion can depend on several populations, by providing a list of populations to the populations argument instead of a single population:

t = net.simulate_until(max_duration=1000.0, populations=[pop1, pop2])

The simulation will then stop only when the criterion is met in both populations at the same time. If you want the simulation to stop as soon as at least one population meets its criterion, specify the operator argument:

t = net.simulate_until(max_duration=1000.0, populations=[pop1, pop2], operator='or')

The default value of operator is 'and': all populations must meet their criterion simultaneously.
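
Putting it together, a sketch of a simple reaction-time measurement, using the neuron defined at the beginning of this page (the population size and threshold are arbitrary):

pop1 = net.create(100, neuron, stop_condition="r > 1.0")
# ... connect the populations and compile ...

# Effective duration of the simulation, usable as a reaction time
t = net.simulate_until(max_duration=1000.0, populations=pop1)
print('Reaction time:', t, 'ms')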

Warning

Global operations (min, max, mean) are not possible inside the stop_condition. If you need them, store them in a variable in the equations argument of the neuron and use it as the condition:

equations = [
    'r = ...',
    'max_r = max(r)',
]
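
The stored variable can then be used as the stop condition:

pop1 = net.create( ... , stop_condition = "max_r > 1.0")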

Setting inputs periodically

In most cases, your simulation will be decomposed into a series of fixed-duration trials, where you basically set inputs at the beginning of the trial, run the simulation for a fixed duration, and possibly read out results at the end:

# Iterate over 100 trials
result = []
for trial in range(100):
    # Set inputs to the network
    pop.I = ann.Uniform(0.0, 1.0)
    # Simulate for 1 second
    net.simulate(1000.)
    # Save the output
    result.append(pop.r)

For convenience, we provide the decorator every, which registers a Python method to be called automatically during the simulation with a fixed period:

result = []

@ann.every(period=1000.)
def set_inputs(n):
    # Set inputs to the network
    pop.I = ann.Uniform(0.0, 1.0)
    # Save the output of the previous step
    if n > 0:
        result.append(pop.r)

net.simulate(100 * 1000.)

In this example, set_inputs() will be executed just before the steps corresponding to times t = 0., 1000., 2000., and so on until t = 100000.

The method can have any name, but must accept a single argument: the integer n, which is incremented at each call (i.e. it takes the values 0, 1, 2, ..., 99). This can for example be used to access data in a numpy array:

images = np.random.random((100, 640, 480))

@ann.every(period=1000.)
def set_inputs(n):
    # Set inputs to the network
    pop.I = images[n, :, :]

net.simulate(100 * 1000.)

One can define several methods that will be called in the order of their definition:

@ann.every(period=1000.)
def set_inputs(n):
    pop.I = 1.0

@ann.every(period=1000.)
def reset_inputs(n):
    pop.I = 0.0

In this example, set_inputs() will be called first, followed by reset_inputs(), so pop.I will end up being 0.0. The decorator every accepts an offset argument defining a delay within the period after which the method is called:

@ann.every(period=1000.)
def set_inputs(n):
    pop.I = 1.0

@ann.every(period=1000., offset=500.)
def reset_inputs(n):
    pop.I = 0.0

In this case, set_inputs() will be called at times 0, 1000, 2000... while reset_inputs() will be called at times 500, 1500, 2500..., making it easier to structure a trial. The offset can also be negative, in which case it is relative to the end of the trial:

@ann.every(period=1000., offset=-100.)
def reset_inputs(n):
    pop.I = 0.0

In this example, the method will be called at times 900, 1900, 2900 and so on. By definition, the offset cannot exceed the period; if it does, a modulo operation is applied (e.g. an offset of 1500 with a period of 1000 becomes 500).

Finally, the wait argument delays the first call to the method by a fixed interval:

@ann.every(period=1000., wait=5000.)
def reset_inputs(n):
    pop.I = 0.0

In this case, the method will be called at times 5000, 6000 and so on.

Between two calls to simulate(), the callbacks can be disabled or re-enabled using the following methods:

@ann.every(period=1000.)
def reset_inputs(n):
    pop.I = 0.0

# Simulate with callbacks
net.simulate(10000.)

# Disable callbacks
net.disable_callbacks()

# Simulate without callbacks
net.simulate(10000.)

# Re-enable callbacks
net.enable_callbacks()

# Simulate with callbacks
net.simulate(10000.)

Note that the period is always relative to the time when simulate() is called, so if no offset is defined, the callbacks will be called before the first step of a simulation, no matter how long the previous simulation lasted. In the current state, it is not yet possible to enable or disable callbacks selectively: it is all or none.

Callbacks can only be used with simulate(), not with step() or simulate_until().

Parallel computing with OpenMP

The default paradigm for an ANNarchy simulation is OpenMP, which automatically distributes the computations over the available CPU cores.

By default, ANNarchy will use a single thread for your simulation. Automatically using all available cores would not be optimal: small networks in particular tend to run faster with a smaller number of cores. For this reason, the OMP_NUM_THREADS environment variable has no effect in ANNarchy.

You can control the number of cores by passing the -j flag to the Python script:

python NeuralField.py -j2

It is the responsibility of the user to find out which number of cores is optimal for their network, by comparing simulation times. Once this optimal number is found, it can be hard-coded in the script by setting the num_threads argument of Network.config():

net.config(num_threads=2)
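
To find this optimal number, simulation times can be compared with Python's standard time module (the duration and thread count below are arbitrary):

import time

net.config(num_threads=2)
net.compile()

start = time.time()
net.simulate(10000.)
print(time.time() - start, 'seconds')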

Parallel computing with CUDA

To run your network on GPUs, you need to declare to ANNarchy that you want to use CUDA. One way to do so is to pass the --gpu flag to the command line:

python NeuralField.py --gpu

You can also set the paradigm argument of Network.config() to make it permanent:

net.config(paradigm="cuda")

If there are multiple GPUs on your machine, you can select the device by passing its ID to the --gpu flag on the command line:

python NeuralField.py --gpu=2

Alternatively, you can also pass the cuda_config dictionary argument to Network.compile():

net.compile(cuda_config={'device': 2})

The default GPU is defined in the configuration file ~/.config/ANNarchy/annarchy.json (device 0 unless you modify it):

{
    "cuda": {
        "device": 0,
        "path": "/usr/local/cuda"
    }
}

As the current implementation is a development version, some of the features provided by ANNarchy are not supported yet with CUDA:

  • weight sharing (convolutions),
  • non-uniform synaptic delays,
  • structural plasticity,
  • spiking neurons: a) with mean firing rate and b) continuous integration of inputs,
  • SpikeSourceArray.