Network
Network class

The main object in ANNarchy is the Network instance, which contains all data structures apart from the neuron / synapse models and the functions:
import ANNarchy as ann

neuron = ann.Neuron(
    parameters = dict(tau = 10.0),
    equations = [
        'tau * dv/dt + v = sum(exc)',
        'r = pos(v)'
    ]
)

synapse = ann.Synapse(
    parameters = dict(tau = 5000.0),
    equations = 'tau * dw/dt = pre.r * post.r',
)
# Create the empty network
net = ann.Network()

# Create two populations
pop1 = net.create(10, neuron)
pop2 = net.create(10, neuron)

# Connect the two populations
proj = net.connect(pop1, pop2, 'exc', synapse)

# Monitor the second population
m = net.monitor(pop2, 'r')
Network.create() and Network.connect() are the main access points to create populations and projections. However, if you lose the references to pop1, pop2 and proj (for example if you create them inside a method but do not return them), it becomes difficult to access them later.
One option is to iterate over the lists of populations and projections stored in the network, but you have to know what you are looking for:
for pop in net.get_populations():
    print(pop.r)

for proj in net.get_projections():
    print(proj.w)
It is also possible to provide a unique name to each population and projection at creation time, so they can be easily retrieved:
def create_network():
    net = ann.Network()
    pop1 = net.create(10, neuron, name='pop1')
    pop2 = net.create(10, neuron, name='pop2')
    proj = net.connect(pop1, pop2, 'exc', synapse, name='projection1')
    m = net.monitor(pop2, 'r', name='monitor')
    return net

net = create_network()
pop1 = net.get_population('pop1')
pop2 = net.get_population('pop2')
proj = net.get_projection('projection1')
m = net.get_monitor('monitor')
Another, safer option is to create your own class inheriting from ann.Network, and store all populations and projections as attributes:
class SimpleNetwork (ann.Network):
    def __init__(self, N):
        self.pop1 = self.create(N, neuron)
        self.pop2 = self.create(N, neuron)
        self.proj = self.connect(self.pop1, self.pop2, 'exc', synapse)
        self.m = self.monitor(self.pop2, 'r')

net = SimpleNetwork(10)
print(net.pop1.r)
You do not need to explicitly call the constructor of Network (Network.__init__(self)): it is done automatically. Creating a subclass of Network furthermore allows you to use the parallel_run() method, provided the whole construction of the network is done in the constructor. Apart from that, the two approaches are equivalent: pick the one you prefer. Subclasses are easier to re-use, especially across files.
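As an illustration, a parallel run over several copies of such a network could look like the following sketch. The exact signature of parallel_run() is described in its own documentation; here we assume it accepts a callback receiving the copy index and the network instance, plus the number of copies to instantiate:

# Sketch only: check the parallel_run() documentation for the exact signature.
def simulation(idx, net):
    # Simulate one copy of the network and return the recorded firing rates.
    net.simulate(1000.)
    return net.m.get('r')

# Assumed interface: run 4 copies of the network in parallel.
results = net.parallel_run(method=simulation, number=4)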
Compiling the network
Once all the relevant information has been defined, one needs to actually compile the network, by calling the Network.compile()
method:
net.compile()
The optimized C++ code will be generated, compiled, the underlying objects created and made available to the Python interface.
You can specify several arguments to compile(), including:

- compiler: to select which C++ compiler will be used.
- compiler_flags: to select which flags are passed to the compiler.
- directory: the absolute/relative path to the directory where the files will be generated and compiled (default: annarchy/).
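For example, to generate and compile the code in a custom subfolder instead of the default annarchy/ (the folder name here is only illustrative):

net.compile(directory='build/')   # illustrative path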
Compiler
ANNarchy requires a C++ compiler. On GNU/Linux, the default choice is g++, while on MacOS it is clang++. You can change the compiler (and its flags) either during the call to net.compile() in your script:
net.compile(compiler="clang++", compiler_flags="-march=native -O3")
or globally by modifying the configuration file located at ~/.config/ANNarchy/annarchy.json:
{
    "openmp": {
        "compiler": "clang++",
        "flags": "-march=native -O3"
    }
}
Be careful with the flags: the optimization level -O3 does not necessarily produce faster code, although it does for most models, which is why it is the default in the ANNarchy 4.7.x releases.
Even more caution is required when using the -ffast-math flag. It can increase performance, in particular in combination with SIMD, but it enables a set of optimizations which may violate IEEE 754 compliance (this is acceptable in many cases, but the user should verify the results). For more details, see the g++ documentation: https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
In rare cases, the g++ compiler in use may fail to detect the CPU architecture (e.g. Intel's Tiger Lake with g++ <= 9.4). This results in a compiler error which can be fixed by removing the -march=native flag. To get access to AVX-512 SIMD instructions, you then need to add -mavx512f instead, as well as -ftree-vectorize if -O3 is not already used.
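Following the advice above, the corresponding call could look like this sketch (assuming a CPU that supports AVX-512 and that -O3 is not used):

# -march=native removed; AVX-512 and vectorization requested explicitly
net.compile(compiler_flags="-mavx512f -ftree-vectorize")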
Directory
When calling compile(), the subfolder annarchy/ (or whatever is defined by directory) will be created, and the generated code will be compiled. The first compilation may last a couple of seconds, but further runs of the script are much faster. If no modification to the network has been made except for parameter values, it will not be recompiled, sparing us this overhead.
ANNarchy tracks the changes in the script and re-generates the corresponding code. In some cases (a new version of ANNarchy has been installed, bugs), it may be necessary to perform a fresh compilation of the network. You can either delete the annarchy/
subfolder and restart the script:
rm -rf annarchy/
python MyNetwork.py
pass the --clean
flag to the script:
python MyNetwork.py --clean
or tell compile()
to start fresh:
net.compile(clean=True)
Simulating the network
After the network is compiled, the simulation can be run for the specified duration (in milliseconds) through the Network.simulate()
method:
net.simulate(1000.0)  # Simulate for 1 second
The provided duration should be a multiple of dt
. If not, the number of simulation steps performed will be approximated.
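For example, assuming the network was created with dt = 0.1 ms (see below), the call

net.simulate(1000.0)   # 10000 steps of 0.1 ms

performs exactly 10000 steps, while a duration such as 1000.05 ms would be approximated to a whole number of steps.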
In some cases, you may want to perform only one step of the simulation instead of specifying the duration. The Network.step() method can then be used:
net.step()  # Simulate for 1 step
Setting the discretization step dt
An important value for the simulation is the discretization step dt. Its default value is 1 ms, which is usually fine for rate-coded networks, but may be too high for spiking networks, whose equations are stiffer. If dt is too high, the simulation can accumulate large numerical errors; if it is too low, the simulation will take an unnecessary amount of time.
To set the discretization step, just pass the desired value to the constructor of Network
:
net = ann.Network(dt=0.1)
It can also be set using the config()
method, before projections are created:
net.config(dt=0.1)
However, changing its value after calling compile()
will not have any effect.
You can always access the current value of dt through the attribute net.dt.
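For instance, after creating the network above:

print(net.dt)   # 0.1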
If you create a subclass of Network, you can also provide dt to its constructor, even if your subclass's constructor does not explicitly accept it:
class SimpleNetwork (ann.Network):
    def __init__(self, N):
        self.pop1 = self.create(N, neuron)
        self.pop2 = self.create(N, neuron)
        self.proj = self.connect(self.pop1, self.pop2, 'exc', synapse)

net = SimpleNetwork(N=10, dt=0.1)
Setting the seed of the random number generators
By default, the random number generators (RNG) are seeded with secrets.randbits(32), so the simulation will differ from run to run. If you want deterministic simulations, you need to provide a fixed seed to the constructor of Network:
net = ann.Network(dt=0.1, seed=42)
If you define a subclass of Network, pass the seed without further processing, as for dt:
class SimpleNetwork (ann.Network):
    def __init__(self, N):
        self.pop1 = self.create(N, neuron)
        self.pop2 = self.create(N, neuron)
        self.proj = self.connect(self.pop1, self.pop2, 'exc', synapse)

net = SimpleNetwork(N=10, dt=0.1, seed=42)
Note that this also sets the seed of the old RNG of numpy, which is used to initialize values produced by np.random.*
.
If you use the new default RNG of numpy (rng = np.random.default_rng()) or ANNarchy's random distributions (see Random Distributions), you will have to seed them yourself. The seed of the network is accessible through the attribute net.seed:
rng = np.random.default_rng(seed=net.seed)

pop.r = ann.Uniform(min=0.0, max=1.0, rng=rng)
Using the same seed with the OpenMP and CUDA backends will not lead to the same sequences of numbers!
Early-stopping
In some cases, it is desired to stop the simulation whenever a criterion is fulfilled (for example, a neural integrator exceeds a certain threshold), not after a fixed amount of time.
You can define a stop_condition when creating a Population:
pop1 = net.create( ... , stop_condition = "r > 1.0")
When calling the simulate_until()
method instead of simulate()
:
t = net.simulate_until(max_duration=1000.0, populations=pop1)
the simulation will be stopped whenever the stop_condition of pop1 is met, i.e. when the firing rate of any neuron of pop1 is above 1.0. If the condition is never met, the simulation will last at most max_duration. The method returns the effective duration of the simulation (to compute reaction times, for example).
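The return value can thus be used directly, for example to record a reaction time:

t = net.simulate_until(max_duration=1000.0, populations=pop1)
print("Reaction time:", t, "ms")   # t < 1000.0 if the condition was met early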
The stop_condition can use any logical operation on the parameters and variables of the neuron associated with the population:
pop1 = net.create( ... , stop_condition = "(r > 1.0) and (mp < 2.0)")
By default, the simulation stops when at least one neuron in the population fulfills the criterion. If you want to stop the simulation when all neurons fulfill the condition, you can use the flag all
after the condition:
pop1 = net.create( ... , stop_condition = "r > 1.0 : all")
The flag any
is the default behavior and can be omitted.
The stop criterion can depend on several populations, by providing a list of populations to the populations
argument instead of a single population:
t = net.simulate_until(max_duration=1000.0, populations=[pop1, pop2])
The simulation will then stop only when the criterion is met in both populations at the same time. If you want the simulation to stop when at least one population meets its criterion, you can specify the operator argument:
t = net.simulate_until(max_duration=1000.0, populations=[pop1, pop2], operator='or')
The default value of operator is 'and', i.e. the populations' criteria are combined with a logical AND.
Global operations (min, max, mean) are not possible inside the stop_condition
. If you need them, store them in a variable in the equations
argument of the neuron and use it as the condition:
equations = [
    'r = ...',
    'max_r = max(r)',
]
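The population-wide maximum stored in max_r can then be used as the stop condition, for example:

pop1 = net.create( ... , stop_condition = "max_r > 1.0")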
Setting inputs periodically
In most cases, your simulation will be decomposed into a series of fixed-duration trials, where you basically set inputs at the beginning of the trial, run the simulation for a fixed duration, and possibly read out results at the end:
# Iterate over 100 trials
result = []
for trial in range(100):
    # Set inputs to the network
    pop.I = ann.Uniform(0.0, 1.0)
    # Simulate for 1 second
    net.simulate(1000.)
    # Save the output
    result.append(pop.r)
For convenience, we provide the every decorator, which allows you to register a Python method and call it automatically during the simulation with a fixed period:
result = []

@ann.every(period=1000.)
def set_inputs(n):
    # Set inputs to the network
    pop.I = ann.Uniform(0.0, 1.0)
    # Save the output of the previous step
    if n > 0:
        result.append(pop.r)

net.simulate(100 * 1000.)
In this example, set_inputs() will be executed just before the steps corresponding to the times t = 0, 1000, 2000, and so on, until t = 99000. The method can have any name, but must accept a single argument: the integer n, which is incremented at each call (i.e. it takes the values 0, 1, 2, up to 99). This can for example be used to access data in a numpy array:
images = np.random.random((100, 640, 480))

@ann.every(period=1000.)
def set_inputs(n):
    # Set inputs to the network
    pop.I = images[n, :, :]

net.simulate(100 * 1000.)
One can define several methods that will be called in the order of their definition:
@ann.every(period=1000.)
def set_inputs(n):
    pop.I = 1.0

@ann.every(period=1000.)
def reset_inputs(n):
    pop.I = 0.0
In this example, set_inputs() will be called first, followed by reset_inputs(), so pop.I will finally be 0.0. The every decorator accepts an offset argument defining a delay within the period after which the method will be called:
@ann.every(period=1000.)
def set_inputs(n):
    pop.I = 1.0

@ann.every(period=1000., offset=500.)
def reset_inputs(n):
    pop.I = 0.0
In this case, set_inputs() will be called at times 0, 1000, 2000..., while reset_inputs() will be called at times 500, 1500, 2500..., allowing you to structure a trial more effectively. The offset can be negative, in which case it is relative to the end of the trial:
@ann.every(period=1000., offset=-100.)
def reset_inputs(n):
    pop.I = 0.0
In this example, the method will be called at times 900, 1900, 2900 and so on. By definition, the offset cannot be larger than the period; if you try to exceed it, a modulo operation is applied anyway (i.e. an offset of 1500 with a period of 1000 becomes 500).
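For instance, the following declaration therefore behaves exactly as if offset=500. had been passed:

@ann.every(period=1000., offset=1500.)   # equivalent to offset=500.
def set_inputs(n):
    pop.I = 1.0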
Finally, the wait argument allows you to delay the first call to the method by a fixed interval:
@ann.every(period=1000., wait=5000.)
def reset_inputs(n):
    pop.I = 0.0
In this case, the method will be called at times 5000, 6000 and so on.
Between two calls to simulate()
, the callbacks can be disabled or re-enabled using the following methods:
@ann.every(period=1000.)
def reset_inputs(n):
    pop.I = 0.0

# Simulate with callbacks
net.simulate(10000.)

# Disable callbacks
net.disable_callbacks()

# Simulate without callbacks
net.simulate(10000.)

# Re-enable callbacks
net.enable_callbacks()

# Simulate with callbacks
net.simulate(10000.)
Note that the period is always relative to the time when simulate() is called, so if no offset is defined, the callbacks will be called before the first step of a simulation, no matter how long the previous simulation lasted. In the current state, it is not yet possible to enable or disable callbacks selectively: it is all or none.
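To illustrate, splitting a simulation into two calls restarts the callback schedule, following the behavior described above:

@ann.every(period=1000.)
def set_inputs(n):
    pop.I = 1.0

net.simulate(500.)   # set_inputs() is called once, before the first step
net.simulate(500.)   # called again at the start of this call, not at t = 1000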
Callbacks can only be used with simulate()
, not with step()
or simulate_until()
.
Parallel computing with OpenMP
The default paradigm for an ANNarchy simulation is OpenMP, which automatically distributes the computations over the available CPU cores.

By default, ANNarchy uses a single thread for the simulation. Automatically using all available cores would not be optimal: small networks in particular tend to run faster with fewer cores. For this reason, the OMP_NUM_THREADS environment variable has no effect in ANNarchy.
You can control the number of cores by passing the -j
flag to the Python script:
python NeuralField.py -j2
It is the responsibility of the user to find out which number of cores is optimal for their network, by comparing simulation times. When this optimal number is found, it can be hard-coded in the script by setting the num_threads argument of Network.config():
net.config(num_threads=2)
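To compare simulation times for different numbers of threads, a simple wall-clock measurement is sufficient (a minimal sketch using the standard library):

import time

start = time.time()
net.simulate(10000.)
print(f"Simulated 10 s in {time.time() - start:.3f} s of wall-clock time")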
Parallel computing with CUDA
To run your network on GPUs, you need to declare to ANNarchy that you want to use CUDA. One way to do so is to pass the --gpu
flag to the command line:
python NeuralField.py --gpu
You can also set the paradigm
argument of Network.config()
to make it permanent:
="cuda") net.config(paradigm
If there are multiple GPUs on your machine, you can select the ID of the device by specifying it to the --gpu
flag on the command line:
python NeuralField.py --gpu=2
Alternatively, you can also pass the cuda_config
dictionary argument to Network.compile()
:
net.compile(cuda_config={'device': 2})
The default GPU is defined in the configuration file ~/.config/ANNarchy/annarchy.json
(0 unless you modify it).
{
    "cuda": {
        "device": 0,
        "path": "/usr/local/cuda"
    }
}
As the current implementation is a development version, some of the features provided by ANNarchy are not yet supported with CUDA:

- weight sharing (convolutions),
- non-uniform synaptic delays,
- structural plasticity,
- spiking neurons: a) with mean firing rate and b) continuous integration of inputs,
- SpikeSourceArray.