ANNarchy 5.0.0
Random distributions

ANNarchy allows you to sample values from various probability distributions. The full list is also available in the reference section: RandomDistribution

  • Uniform: Uniform distribution between min and max.
  • DiscreteUniform: Discrete uniform distribution between min and max.
  • Normal: Normal distribution.
  • LogNormal: Log-normal distribution.
  • Exponential: Exponential distribution.
  • Gamma: Gamma distribution.
  • Binomial: Binomial distribution.
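
As noted further below, these classes are thin wrappers around numpy's random module. A sketch of the numpy counterparts using the modern Generator API (argument names here follow numpy, not necessarily ANNarchy's constructors):

```python
import numpy as np

rng = np.random.default_rng(42)

u  = rng.uniform(-1.0, 1.0)   # Uniform
du = rng.integers(0, 10)      # DiscreteUniform
n  = rng.normal(0.0, 1.0)     # Normal
ln = rng.lognormal(0.0, 1.0)  # LogNormal
e  = rng.exponential(1.0)     # Exponential
g  = rng.gamma(2.0, 1.0)      # Gamma
b  = rng.binomial(10, 0.5)    # Binomial
```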

Inside equations

The probability distributions can be used inside neural or synaptic equations to add noise. The arguments to the random distributions can be either fixed values or global parameters.

neuron = ann.Neuron(
    parameters = dict(
        noise_min = -0.1,
        noise_max = 0.1,
    ),
    equations = [
        ann.Variable('noise += Uniform(noise_min, noise_max)'),
    ]
)

Local parameters (with different values per neuron) and variables are not allowed as arguments, because the random number generators are initialized only once at network creation (re-initializing them at each step would impair performance too much).

Caution

If a global parameter is used, changing its value will not affect the generator after compilation (net.compile()).

It is therefore better practice to use normalized random generators and scale their outputs:

neuron = ann.Neuron(
    parameters = dict(
        noise_min = -0.1,
        noise_max = 0.1,
    ),
    equations = [
        ann.Variable('noise += noise_min + (noise_max - noise_min) * Uniform(0, 1)'),
    ]
)
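
The scaling identity behind this pattern can be checked with plain numpy, independently of ANNarchy: drawing from a unit uniform and applying the affine transform yields values in [noise_min, noise_max], and the bounds can be changed at any time without touching the generator.

```python
import numpy as np

rng = np.random.default_rng(0)
noise_min, noise_max = -0.1, 0.1

# Draw normalized samples, then scale them into the desired range.
u = rng.uniform(0.0, 1.0, size=1000)
scaled = noise_min + (noise_max - noise_min) * u
```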

Python classes

ANNarchy also exposes these distributions as Python classes deriving from ann.RandomDistribution:

rd = ann.Uniform(min=-1.0, max=1.0)

values = rd.get_values(shape=(5, 5))
print(values)
[[ 0.25392816  0.14471749 -0.00747622  0.41345586  0.79989638]
 [ 0.89247827 -0.63836764  0.0998276  -0.65194194  0.98338768]
 [-0.73334271 -0.34800483 -0.45135163  0.1185995  -0.70204679]
 [-0.77543027 -0.01609569  0.60578511  0.59234792 -0.24299255]
 [ 0.55650479 -0.41401422  0.59384748 -0.74732156 -0.43987775]]

These classes are only thin wrappers around numpy’s random module. The code above is fully equivalent to:

import numpy as np

rng = np.random.default_rng()

values = rng.uniform(-1.0, 1.0, (5,5))
print(values)
[[ 0.29939459  0.96038729 -0.84927714 -0.53572586  0.51605243]
 [ 0.3326965  -0.24123856  0.43025058 -0.64146947 -0.81698968]
 [ 0.52746254  0.71203767 -0.30977727 -0.16010762  0.56849491]
 [-0.92940265  0.08199731 -0.16920848  0.15867704  0.57933176]
 [-0.39845591  0.88109401  0.09422005  0.67960715  0.92903362]]

The main advantage of using these objects instead of numpy directly is that the get_values() method is only called once the network is instantiated (at the end of net.compile()).

For example, in the following code, the random values are only created after compilation and transferred to the C++ kernel. They do not consume RAM unnecessarily on the Python side.

net = ann.Network()
pop = net.create(100000000, ann.Neuron("r = 0.0"))
pop.r = ann.Uniform(min=0.0, max=1.0) 
# The values are not drawn yet
net.compile()
# The values are now drawn and stored in the C++ kernel.

If you want to control the seed of the random distributions, you should pass a numpy default_rng generator to them, initialized with the network’s seed:

net = ann.Network(seed=42) # or leave the seed to None to have it automatically set

rng = np.random.default_rng(seed=net.seed)

rd = ann.Uniform(min=0.0, max=1.0, rng=rng)

This is especially important when running simulations in parallel with parallel_run().
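
The reproducibility this provides rests on numpy's Generator semantics: two generators created with the same seed yield identical streams, so seeded simulations can be replayed exactly. A minimal sketch:

```python
import numpy as np

seed = 42  # e.g. the value of net.seed

# Two generators seeded identically produce the same sequence of draws.
rng_a = np.random.default_rng(seed)
rng_b = np.random.default_rng(seed)

draws_a = rng_a.uniform(0.0, 1.0, size=5)
draws_b = rng_b.uniform(0.0, 1.0, size=5)
```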

Handling RNG states

For debugging purposes, or to ensure the correctness of results, the handling of RNG states is important. As shown in the previous section, the RNG can be set for each distribution object individually at creation time. By default, each ANNarchy distribution object uses an independent RNG instance created by numpy.random.default_rng(). Note that in this case, the seed argument provided to the Network constructor is ignored!

Caution

The latter is an important change compared to releases prior to ANNarchy 5.0. In older releases, setting a seed initialized numpy.random.seed (the old-style NumPy RNG). As this API is no longer used in ANNarchy 5.0, numpy.random.seed() is not touched by ANNarchy anymore.

As outlined above, in some cases one wants to maintain a single generator, i.e. one global state (as in NumPy’s old random API). To allow this, one can pass the pre-initialized RNG generator default_rng, which is a member of the Network class and uses the seed provided to the Network constructor:

net = ann.Network(seed=42)  # initializes default_rng member of the class

rd = ann.Uniform(min=0.0, max=1.0, rng=net.default_rng)

This restores the same behavior as in older ANNarchy releases.
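
The difference between independent and shared generators can be illustrated with numpy alone (a sketch, not ANNarchy-specific):

```python
import numpy as np

# Independent generators (ANNarchy's default): each has its own state,
# so two objects seeded identically draw the same values in lockstep.
a = np.random.default_rng(42)
b = np.random.default_rng(42)
same = a.uniform() == b.uniform()

# A single shared generator (one global state, as in the old API):
# successive consumers advance the same stream and see different values.
shared = np.random.default_rng(42)
first = shared.uniform()
second = shared.uniform()
```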


Copyright Julien Vitay, Helge Ülo Dinkelbach, Fred Hamker