Parallel simulations and networks
A typical ANNarchy script represents a single network of populations and projections. Most of the work in computational neuroscience consists in running the same network again and again, varying some free parameters each time, until the fit to the data is publishable. The reset() method allows returning the network to its state before compilation, but scripting many runs this way quickly becomes tedious.
In order to run different networks using the same script, the Network object can be used to make copies of existing objects (populations, projections and monitors) and simulate them in parallel using parallel_run().
Let’s suppose the following dummy network is defined:
```python
pop1 = ann.PoissonPopulation(100, rates=10.0)
pop2 = ann.Population(100, Izhikevich)
proj = ann.Projection(pop1, pop2, 'exc')
proj.connect_fixed_probability(weights=5.0, probability=0.2)
m = ann.Monitor(pop2, 'spike')

ann.compile()
```
One would like to compare the firing patterns in pop2 when:

- There is no input to pop2.
- The Poisson input is at 10 Hz.
- The Poisson input is at 20 Hz.
parallel_run() uses the multiprocessing module to start parallel processes. On Linux, it should work directly, but there is an issue on macOS: since Python 3.8, 'spawn' is the default method to start processes there, and it does not work with parallel_run(). The following code forces the 'fork' method instead; it should only be run once in the script.
```python
import platform
if platform.system() == "Darwin":
    import multiprocessing as mp
    mp.set_start_method('fork')
```
Parallel simulation
parallel_run
The simplest method is to create a single network and to iterate over some parameter values, running identical simulations multiple times using parallel_run():
```python
pop1 = ann.PoissonPopulation(100, rates=10.0)
pop2 = ann.Population(100, Izhikevich)
proj = ann.Projection(pop1, pop2, 'exc')
proj.connect_fixed_probability(weights=5.0, probability=0.2)
m = ann.Monitor(pop2, 'spike')

ann.compile()

def simulation(idx, net):
    net.get(pop1).rates = 10. * idx
    net.simulate(1000.)
    return net.get(m).raster_plot()

results = ann.parallel_run(method=simulation, number=3)

t1, n1 = results[0]
t2, n2 = results[1]
t3, n3 = results[2]
```
The simulation() method will be called on three copies of the network, in different processes. Its first argument must be an integer corresponding to the index of the network (here idx takes the values 0, 1 and 2), and its second argument must be a Network object.
Populations, projections and monitors of a network must be accessed with:
```python
net.get(pop1)
net.get(pop2)
net.get(proj)
net.get(m)
```
Networks work only on copies of the corresponding objects, made at the time they are added to the network. It is no use to modify the rates parameter of pop1 after the networks are created: each network is now independent. Only net.get(pop1).rates allows changing rates for the current simulation. Similarly, it is useless to read variables from the original objects while the networks are simulated: they would still have their original values.
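To make this distinction explicit, here is a minimal sketch (the values are arbitrary):

```python
# Has no effect on the copies: each network created by parallel_run() works
# on its own copy of pop1, made when the network was created.
pop1.rates = 20.0

def simulation(idx, net):
    # The only way to change the rates used in this particular run:
    net.get(pop1).rates = 10. * idx
    net.simulate(1000.)
    return net.get(m).raster_plot()
```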
You do not have access to the internally-created networks after the simulation (they live in a separate memory space). The method must return the data you want to analyze (here the raster plot) or write them to disk (in separate files).
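As an example of the second option, here is a minimal sketch of a callback that writes its results to one file per network instead of returning them (the filename pattern is only illustrative):

```python
import numpy as np

def simulation(idx, net):
    net.get(pop1).rates = 10. * idx
    net.simulate(1000.)
    t, n = net.get(m).raster_plot()
    # Each process writes its own file: results_0.npz, results_1.npz, ...
    np.savez("results_%d.npz" % idx, t=t, n=n)
```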
parallel_run() returns a list of the values returned by the passed method:
```python
results = parallel_run(method=simulation, networks=[net1, net2, net3])

t1, n1 = results[0]
t2, n2 = results[1]
t3, n3 = results[2]
```
If you initialize some variables randomly in the main network, for example:
```python
pop2.v = -60. + 10. * np.random.random(100)
```
they will have the same value in all networks: they are not drawn again. You need to perform the random initialization on each network, inside the simulation method:
```python
def simulation(idx, net):
    net.get(pop1).rates = 10. * idx
    net.get(pop2).v = -60. + 10. * np.random.random(100)
    net.simulate(1000.)
    return net.get(m).raster_plot()
```
Conversely, connection methods with a random component (e.g. connect_fixed_probability(), or using weights=Uniform(0.0, 1.0)) are redrawn for each network.
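For example, a sketch of such a projection, reusing pop1 and pop2 from the network above:

```python
# Both the connectivity pattern and the uniformly-drawn weights are drawn
# independently in each network copy created by parallel_run().
proj = ann.Projection(pop1, pop2, 'exc')
proj.connect_fixed_probability(weights=ann.Uniform(0.0, 1.0), probability=0.2)
```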
Global simulation methods should not be called directly, even with the net_id parameter. The Network class overrides them (a short usage sketch follows the list):
```python
net.step()
net.simulate()
net.simulate_until()
net.reset()
net.get_time()
net.set_time(t)
net.get_current_step()
net.set_current_step(t)
net.set_seed(seed)
net.enable_learning()
net.disable_learning()
net.get_population(name)
```
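As a sketch of how these per-network methods can be used inside the simulation callback (the seed choice is only illustrative):

```python
def simulation(idx, net):
    net.set_seed(1000 + idx)      # a different, reproducible seed per network
    net.get(pop1).rates = 10. * idx
    net.simulate(1000.)
    # Time and step counters are tracked per network:
    # net.get_time() is now 1000.0 (ms) in this copy only.
    return net.get(m).raster_plot()
```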
Passing additional arguments
The first two mandatory arguments of the simulation callback are idx, the index of the network in the simulation, and net, the Network object. You can of course use other names, but these two arguments will always be passed.
idx can be used, for example, to access arrays of parameter values:
```python
rates = [0.0, 0.1, 0.2, 0.3, 0.4]

def simulation(idx, net):
    net.get(pop1).rates = rates[idx]
    ...

results = ann.parallel_run(method=simulation, number=len(rates))
```
Another option is to provide additional arguments to the simulation callback during the parallel_run() call:
```python
def simulation(idx, net, rates):
    net.get(pop1).rates = rates
    ...

rates = [0.0, 0.1, 0.2, 0.3, 0.4]
results = ann.parallel_run(method=simulation, number=len(rates), rates=rates)
```
These additional arguments must be lists of the same size as the number of networks (number or len(networks)). You can use as many additional arguments as you want:
```python
def simulation(idx, net, a, b, c, d):
    ...

results = ann.parallel_run(method=simulation, number=10, a=..., b=..., c=..., d=...)
```
In parallel_run(), the additional arguments can be passed in any order, but they must be named (e.g. a=list(range(10)), not list(range(10))).
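Putting it together, a minimal sketch of this pattern (the argument names a and b and the values are arbitrary):

```python
def simulation(idx, net, a, b):
    # a and b receive the idx-th element of the lists passed to parallel_run()
    net.get(pop1).rates = a
    net.simulate(b)
    return net.get(m).raster_plot()

results = ann.parallel_run(
    method=simulation,
    number=3,
    a=[0.0, 10.0, 20.0],       # one rate value (Hz) per network
    b=[1000., 1000., 1000.],   # simulation duration (ms) per network
)
```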
Multiple network instances
One can also create three different Network objects to implement the three conditions:
```python
net1 = Network()
net1.add([pop2, m])
net1.compile()
```
The network is created empty, and the population pop2 as well as the attached monitor are added to it through the add() method. This method takes a list of objects (populations, projections and monitors).
The network then has to be compiled by calling the compile() method specifically on that network. It can be simulated independently by calling simulate() or simulate_until() on it.
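For example, a small sketch simulating net1 on its own and retrieving its recordings from its copy of the monitor:

```python
net1.simulate(1000.)
t, n = net1.get(m).raster_plot()
```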
The basic network, with inputs at 10 Hz, can be simulated directly using the normal methods, or copied into a new network:
```python
net2 = Network()
net2.add([pop1, pop2, proj, m])
net2.compile()
```
Here, all defined objects are added to the network. It is easier to set the everything argument of the Network constructor to True, which has the same effect. We can use this for the third network:
```python
net3 = Network(everything=True)
net3.get(pop1).rates = 20.0
net3.compile()
```
Here, the population pop1 of the third network has to be accessed through the get() method. The data of the original pop1 will not be the same as that of net3.get(pop1): only the geometry and the neuron model are the same.
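A quick way to see this independence (a sketch; rates may be returned as an array over the 100 neurons):

```python
# net3 holds its own copy of pop1: changing one does not affect the other.
print(net3.get(pop1).rates)   # set to 20 Hz above
print(pop1.rates)             # still the 10 Hz defined at creation
```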
Once a network is compiled, it can be simulated, regardless of whether the other networks (including the “original” one) are also compiled:
```python
net1.simulate(1000.)
net2.simulate(1000.)
net3.simulate(1000.)
```
Spike recordings have to be accessed per network, through the copies of the monitor m:
```python
t1, n1 = net1.get(m).raster_plot()
t2, n2 = net2.get(m).raster_plot()
t3, n3 = net3.get(m).raster_plot()
```
One can also call the parallel_run() method and pass it a list of networks instead of number:
```python
ann.parallel_run(method=simulation, networks=[net1, net2, net3])
```
This will apply simulation() in parallel to the three networks, reducing the total computation time.