ANNarchy allows you to sample values from various probability distributions. The list is also available in the reference section: RandomDistribution.
Uniform: Uniform distribution between min and max.
DiscreteUniform: Discrete uniform distribution between min and max.
Normal: Normal distribution.
LogNormal: Log-normal distribution.
Exponential: Exponential distribution, according to the density function f(x; λ) = λ exp(−λ x) for x ≥ 0.
Gamma: Gamma distribution.
Binomial: Binomial distribution.
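As a quick illustration, a distribution is a regular Python object created with its parameters (a minimal sketch; only Uniform's min/max arguments appear on this page, and the argument passed to get_values() is an assumption about its signature):

```python
import ANNarchy as ann

# A minimal sketch: a distribution object is created with its parameters.
dist = ann.Uniform(min=0.0, max=1.0)

# Values can be drawn with get_values() (described below); passing the number
# of samples as an argument is an assumption about the signature.
values = dist.get_values(10)
```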
Inside equations
The probability distributions can be used inside neural or synaptic equations to add noise. The arguments to the random distributions can be either fixed values or global parameters.
Local parameters (which have different values per neuron) and variables are not allowed as arguments, as the random number generators are initialized only once at network creation (doing otherwise would impair performance too much).
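For instance, additive noise can be injected into a rate-coded neuron, with its amplitude given by a global parameter (a minimal sketch; the neuron definition and the parameter names tau and sigma are illustrative):

```python
import ANNarchy as ann

# A minimal sketch: sigma is a global (population-wide) parameter,
# so it can be used as an argument to the distribution.
NoisyNeuron = ann.Neuron(
    parameters="""
        tau = 10.0 : population
        sigma = 0.5 : population
    """,
    equations="""
        tau * dr/dt + r = sum(exc) + Normal(0.0, sigma)
    """
)
```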
Caution
If a global parameter is used, changing its value will not affect the generator after compilation (net.compile()).
It is therefore better practice to use a normalized random generator and scale its output with a parameter inside the equation.
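A minimal sketch of this pattern, with an illustrative global parameter sigma scaling a normalized Normal(0.0, 1.0), so that the noise amplitude can still be changed after net.compile():

```python
import ANNarchy as ann

# A minimal sketch: the generator stays normalized, and sigma scales it.
# Since sigma is a regular parameter, it can be modified after compilation.
NoisyNeuron = ann.Neuron(
    parameters="sigma = 0.5 : population",
    equations="r = sum(exc) + sigma * Normal(0.0, 1.0)"
)
```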
The main advantage of using these objects instead of numpy directly is that their get_values() method is only called when the network is instantiated (at the end of net.compile()).
For example, in the following code, the random values are only drawn after compilation and transferred directly to the C++ kernel, so they do not consume RAM unnecessarily on the Python side.
net = ann.Network()
pop = net.create(100000000, ann.Neuron("r = 0.0"))
pop.r = ann.Uniform(min=0.0, max=1.0) # The values are not drawn yet
net.compile()
# The values are now drawn and stored in the C++ kernel.
If you want to control the seed of the random distributions, you should pass numpy's default RNG to them, initialized with the network's seed:
import numpy as np

net = ann.Network(seed=42) # or leave the seed to None to have it set automatically
rng = np.random.default_rng(seed=net.seed)
rd = ann.Uniform(min=0.0, max=1.0, rng=rng)
This is especially important when running simulations in parallel with parallel_run().