ANNarchy 5.0.0
Populations

Once the Neuron objects have been defined, the populations can be created. Let’s suppose we have defined the following rate-coded neuron:

import numpy as np
import ANNarchy as ann

LeakyIntegratorNeuron = ann.Neuron(
    parameters = dict(
        tau = 10.0,
        baseline = -0.2
    ),
    equations = [
        'tau * dv/dt + v = baseline + sum(exc)',
        'r = pos(v)',
    ]
)

Creating populations

Populations of neurons are contained in an instance of the Population class, which can be created by calling the create() method of a network:

net = ann.Network()

pop1 = net.create(geometry=100, neuron=LeakyIntegratorNeuron)
pop2 = net.create(geometry=(8, 8), neuron=LeakyIntegratorNeuron, name="pop2")

The rate-coded or spiking nature of the Neuron instance is irrelevant when creating the Population object. Population objects can also be created directly by providing the id of the network object, but this is not recommended.

Network.create() takes the following arguments:

  • geometry defines the number of neurons in the population, as well as its spatial structure (1D/2D/3D or more). For example, a two-dimensional population with 15*10 neurons takes the argument (15, 10), while a one-dimensional array of 100 neurons would take (100,) or simply 100.
  • neuron indicates the neuron type to use for this population (which must have been defined before). It requires a Neuron class or instance.
  • name is a unique string for each population in the network. If name is omitted, an internal name such as pop0 will be given (the number is incremented every time a new population is defined). Although this argument is optional, it is recommended to give an understandable name to each population: if you somehow “lose” the reference to the Population object in some part of your code, you can always retrieve it using the net.get_population(name) method.

After creation, each population has several attributes defined (corresponding to the parameters and variables of the Neuron type) and is assigned a fixed size (pop.size corresponding to the total number of neurons, here 100 for pop1 and 64 for pop2) and geometry (pop1.geometry, here (100, ) and (8, 8)).

Geometry and ranks

Each neuron in the population has both a set of coordinates (expressed relative to pop1.geometry) and a rank (from 0 to pop1.size - 1). Spatial coordinates are useful for visualization or for defining distance-dependent connection patterns, but ANNarchy internally uses flat arrays for performance reasons.

The coordinates use the matrix notation for multi-dimensional arrays, which is also used by Numpy (for a 2D matrix, the first index represents the row, the second the column). You can therefore safely use Numpy's reshape() method to switch between coordinate-based and rank-based representations of an array.
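For illustration, the conversion between the flat (rank-based) and geometry-shaped representations only requires plain Numpy; no ANNarchy objects are involved:

```python
import numpy as np

geometry = (8, 8)

# A flat array indexed by rank (0 .. size-1), as stored internally.
flat = np.arange(64)

# Reshape to the population's geometry: row-major (matrix) ordering,
# so the first index is the row, the second the column.
shaped = flat.reshape(geometry)

# Rank 15 sits at row 1, column 7.
assert shaped[1, 7] == flat[15]

# ravel() goes back to the rank-based representation.
assert np.array_equal(shaped.ravel(), flat)
```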

To convert the rank of a neuron to its coordinates (and vice-versa), you can use the ravel_multi_index and unravel_index methods of Numpy, but they can be quite slow. The Population class provides two more efficient methods to do this conversion:

  • coordinates_from_rank returns a tuple representing the coordinates of a neuron based on its rank (which must be between 0 and size - 1, otherwise an error is thrown).
  • rank_from_coordinates returns the rank corresponding to the coordinates.

For example, with pop2 having a geometry (8, 8):

pop2.coordinates_from_rank(15)
(1, 7)
pop2.rank_from_coordinates((4, 6))
38
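These results can be checked against the (slower) Numpy equivalents, here for the (8, 8) geometry of pop2:

```python
import numpy as np

geometry = (8, 8)

# Equivalent of pop2.coordinates_from_rank(15):
assert np.unravel_index(15, geometry) == (1, 7)

# Equivalent of pop2.rank_from_coordinates((4, 6)):
assert np.ravel_multi_index((4, 6), geometry) == 38
```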

Population attributes

The value of the parameters and variables of all neurons in a population can be accessed and modified through population attributes.

With the previously defined populations, you can list all their parameters and variables with:

pop2.attributes
['tau', 'baseline', 'v', 'r']
pop2.parameters
['tau', 'baseline']
pop2.variables
['v', 'r']

Reading their value is straightforward:

pop2.tau
10.0
pop2.r
array([[0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0.]])

Population-wise parameters/variables have a single value for the whole population, while neuron-specific ones return a Numpy array with the same geometry as the population.

Setting their value is also simple:

pop2.tau = 20.0
print(pop2.tau)
20.0
pop2.r = 1.0
print(pop2.r) 
[[1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]]
pop2.v = 0.5 * np.ones(pop2.geometry)
print(pop2.v)
[[0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5]
 [0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5]
 [0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5]
 [0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5]
 [0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5]
 [0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5]
 [0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5]
 [0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5]]
pop2.r = ann.Uniform(0.0, 1.0)
print(pop2.r)
[[0.88278574 0.28130904 0.43367077 0.50913694 0.06093721 0.39228607
  0.55457071 0.84540543]
 [0.73271108 0.10024575 0.15139512 0.67225055 0.10787216 0.94030627
  0.25132358 0.62874473]
 [0.5955857  0.6221603  0.19148943 0.18739162 0.33259606 0.27971882
  0.14893079 0.34920703]
 [0.89394619 0.92644123 0.80385068 0.08887383 0.41098601 0.81618563
  0.15934374 0.55867925]
 [0.12668823 0.76337302 0.48278515 0.94401798 0.88530427 0.76855142
  0.91091673 0.53636214]
 [0.34179086 0.29020004 0.11615279 0.96351097 0.2509132  0.50810183
  0.01549096 0.23028034]
 [0.10597681 0.06087826 0.53037316 0.32030348 0.48264167 0.07696145
  0.37961175 0.58398803]
 [0.20115746 0.69142218 0.56428511 0.88247939 0.13806341 0.44426022
  0.08976612 0.02018575]]

For population-wide attributes, you can only specify a single value (float, int or bool depending on the type of the parameter/variable). For neuron-specific attributes, you can provide either:

  • a single value which will be applied to all neurons of the population.
  • a list or a one-dimensional Numpy array of the same length as the number of neurons in the population. This information is provided by pop1.size.
  • a Numpy array of the same shape as the geometry of the population. This information is provided by pop1.geometry.
  • a random number generator object (Uniform, Normal...).
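For illustration (plain Numpy, no ANNarchy objects involved), the flat list form and the geometry-shaped form describe the same data for an (8, 8) population:

```python
import numpy as np

geometry = (8, 8)
size = int(np.prod(geometry))  # 64 neurons, as reported by pop.size

flat = [0.5] * size                # one value per neuron, in rank order
shaped = 0.5 * np.ones(geometry)   # same data, geometry-shaped

# Both forms assign identical neuron-wise values:
assert np.array_equal(np.reshape(flat, geometry), shaped)
```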
Note

If you prefer not to use Python attributes (for example when looping over a list of attribute names), you can also use the get(name) and set(values) methods of Population:

pop1.set({'v': 1.0, 'r': ann.Uniform(0.0, 1.0)})
print(pop1.get('v'))
print(pop1.get('r'))
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
 1. 1. 1. 1.]
[0.90182766 0.09259207 0.3726977  0.45817672 0.15112235 0.10496566
 0.77518394 0.8830195  0.85959747 0.26498527 0.81807916 0.76340563
 0.53191772 0.6958435  0.79753015 0.11778895 0.82693313 0.79647959
 0.63865791 0.16748874 0.7563963  0.11848339 0.0985155  0.1435983
 0.04366156 0.86150863 0.7020524  0.30669823 0.01796339 0.87956556
 0.28805126 0.7378753  0.29682299 0.26393512 0.63680801 0.93853053
 0.5639794  0.61534192 0.4516971  0.79543954 0.65003964 0.47950141
 0.82133482 0.72023134 0.16565911 0.09568645 0.78650168 0.44425343
 0.41165469 0.95472278 0.67191469 0.89705617 0.17705098 0.69071098
 0.51507371 0.91905904 0.71851021 0.06783573 0.82166457 0.852719
 0.25939713 0.75584488 0.38345775 0.15095496 0.88087791 0.53711281
 0.10640781 0.73838776 0.1699055  0.29030966 0.05204202 0.21367352
 0.4794544  0.53070043 0.59005974 0.4573081  0.70528585 0.23826384
 0.44877338 0.8787035  0.55371882 0.96479858 0.71630609 0.83334776
 0.74171017 0.13187172 0.30193958 0.94569879 0.11567426 0.98194533
 0.03968325 0.01941603 0.51437854 0.28151292 0.25298667 0.13527596
 0.70254359 0.68078721 0.65661374 0.4986296 ]

Accessing individual neurons

Individual neurons of a population can be accessed semantically through the IndividualNeuron class, which wraps the population's data for a specific neuron. Such a wrapper is returned by the Population.neuron() method, using either the rank of the neuron (from 0 to pop1.size - 1) or its coordinates in the population’s geometry:

print(pop2.neuron(2, 2))
Neuron of the population pop2 with rank 18 (coordinates (2, 2)).
Parameters:
  tau = 20.0
  baseline = -0.2

Variables:
  v = 0.5
  r = 0.19148943158453047

The individual neurons can be manipulated individually:

my_neuron = pop2.neuron(2, 2)
my_neuron.r = 1.0
print(my_neuron)
Neuron of the population pop2 with rank 18 (coordinates (2, 2)).
Parameters:
  tau = 20.0
  baseline = -0.2

Variables:
  v = 0.5
  r = 1.0
Warning

IndividualNeuron is only a wrapper for ease of use: the real data is stored in arrays for the whole population, so accessing individual neurons is much slower. It should be reserved for specific cases (i.e. only from time to time and for a limited set of neurons).
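When many neurons must be modified, a faster pattern is to read the whole attribute once, modify it with vectorized Numpy indexing, and write it back in a single assignment. The array below is only a stand-in for the value returned by pop2.r:

```python
import numpy as np

# Stand-in for the array returned by pop2.r (geometry (8, 8)).
r = np.zeros((8, 8))

# One vectorized assignment instead of a Python loop over
# IndividualNeuron wrappers:
r[2:5, 4] = 1.0

# In ANNarchy, the modified array is written back in one step: pop2.r = r
assert r[3, 4] == 1.0 and r[0, 0] == 0.0
```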

Accessing groups of neurons - PopulationView

Individual neurons can be grouped into PopulationView objects, which hold references to different neurons of the same population. One can create population views by “adding” several neurons together:

popview = pop2.neuron(2, 2) + pop2.neuron(3, 3) + pop2.neuron(4, 4)
popview.r = 1.0
print(pop2.r)
[[0.88278574 0.28130904 0.43367077 0.50913694 0.06093721 0.39228607
  0.55457071 0.84540543]
 [0.73271108 0.10024575 0.15139512 0.67225055 0.10787216 0.94030627
  0.25132358 0.62874473]
 [0.5955857  0.6221603  1.         0.18739162 0.33259606 0.27971882
  0.14893079 0.34920703]
 [0.89394619 0.92644123 0.80385068 1.         0.41098601 0.81618563
  0.15934374 0.55867925]
 [0.12668823 0.76337302 0.48278515 0.94401798 1.         0.76855142
  0.91091673 0.53636214]
 [0.34179086 0.29020004 0.11615279 0.96351097 0.2509132  0.50810183
  0.01549096 0.23028034]
 [0.10597681 0.06087826 0.53037316 0.32030348 0.48264167 0.07696145
  0.37961175 0.58398803]
 [0.20115746 0.69142218 0.56428511 0.88247939 0.13806341 0.44426022
  0.08976612 0.02018575]]

One can also use the slice operators to create PopulationViews:

popview = pop2[3, :]
popview.r = 1.0
print(pop2.r)
[[0.88278574 0.28130904 0.43367077 0.50913694 0.06093721 0.39228607
  0.55457071 0.84540543]
 [0.73271108 0.10024575 0.15139512 0.67225055 0.10787216 0.94030627
  0.25132358 0.62874473]
 [0.5955857  0.6221603  1.         0.18739162 0.33259606 0.27971882
  0.14893079 0.34920703]
 [1.         1.         1.         1.         1.         1.
  1.         1.        ]
 [0.12668823 0.76337302 0.48278515 0.94401798 1.         0.76855142
  0.91091673 0.53636214]
 [0.34179086 0.29020004 0.11615279 0.96351097 0.2509132  0.50810183
  0.01549096 0.23028034]
 [0.10597681 0.06087826 0.53037316 0.32030348 0.48264167 0.07696145
  0.37961175 0.58398803]
 [0.20115746 0.69142218 0.56428511 0.88247939 0.13806341 0.44426022
  0.08976612 0.02018575]]

or:

popview = pop2[2:5, 4] 
popview.r = 1.0
print(pop2.r)
[[0.88278574 0.28130904 0.43367077 0.50913694 0.06093721 0.39228607
  0.55457071 0.84540543]
 [0.73271108 0.10024575 0.15139512 0.67225055 0.10787216 0.94030627
  0.25132358 0.62874473]
 [0.5955857  0.6221603  1.         0.18739162 1.         0.27971882
  0.14893079 0.34920703]
 [1.         1.         1.         1.         1.         1.
  1.         1.        ]
 [0.12668823 0.76337302 0.48278515 0.94401798 1.         0.76855142
  0.91091673 0.53636214]
 [0.34179086 0.29020004 0.11615279 0.96351097 0.2509132  0.50810183
  0.01549096 0.23028034]
 [0.10597681 0.06087826 0.53037316 0.32030348 0.48264167 0.07696145
  0.37961175 0.58398803]
 [0.20115746 0.69142218 0.56428511 0.88247939 0.13806341 0.44426022
  0.08976612 0.02018575]]

PopulationView objects can be used to create projections.

Warning

Contrary to the equivalent in PyNN, PopulationViews in ANNarchy can only group neurons from the same population.

Functions

If you have defined a function inside a Neuron definition:

LeakyIntegratorNeuron = ann.Neuron(
    parameters = dict(   
        tau = 10.0,
        slope = 1.0,
        baseline = -0.2,
    ),
    equations = [
        'tau * dv/dt + v = baseline + sum(exc)',
        'r = sigmoid(v, slope)'
    ],
    functions = """
        sigmoid(x, k) = 1.0 / (1.0 + exp(-x*k))
    """
)

you can use this function in Python as if it were a method of the corresponding object:

pop = net.create(1000, LeakyIntegratorNeuron)

x = np.linspace(-1., 1., 100)
k = np.ones(100)
r = pop.sigmoid(x, k)

You can pass either a list or a 1D Numpy array to each argument (not a single value, nor a multi-dimensional array!).

The size of the arrays passed for each argument is arbitrary (it does not have to match the population’s size), but you have to make sure that they all have the same size. Errors are not caught, so be careful.
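For reference, the function evaluates element-wise exactly like its plain-Numpy counterpart (a re-implementation for checking, not the ANNarchy call itself):

```python
import numpy as np

def sigmoid(x, k):
    # Same definition as in the Neuron: 1.0 / (1.0 + exp(-x*k))
    return 1.0 / (1.0 + np.exp(-x * k))

x = np.linspace(-1.0, 1.0, 100)
k = np.ones(100)
r = sigmoid(x, k)

# The logistic function is bounded in (0, 1) and equals 0.5 at x = 0.
assert np.all((r > 0.0) & (r < 1.0))
assert abs(sigmoid(np.zeros(1), np.ones(1))[0] - 0.5) < 1e-12
```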


Copyright Julien Vitay, Helge Ülo Dinkelbach, Fred Hamker