Echo state networks

If you run this notebook in Colab, first uncomment and run this cell to install ANNarchy:

#!pip install ANNarchy
This notebook demonstrates how to implement a simple Echo state network (ESN) with ANNarchy. It is a rate-coded network, composed of a population of recurrently-connected neurons and a readout layer that will be trained offline using scikit-learn. The task will be a simple univariate regression.
Let’s start by importing ANNarchy. setup() sets various global parameters, such as the step size dt in milliseconds. By default, dt is 1.0, so the call is not necessary here.
import numpy as np
import matplotlib.pyplot as plt
import ANNarchy as ann
ANNarchy 5.0 (5.0.0) on darwin (posix).
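For illustration, this is how a different integration step could be requested right after the import (a minimal sketch; dt=0.1 ms is an arbitrary value, and the rest of this notebook keeps the default of 1.0 ms):

ann.setup(dt=0.1)  # use an integration step of 0.1 ms instead of the default 1.0 ms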
Each neuron in the reservoir obeys the following equations:
\tau \frac{dx(t)}{dt} + x(t) = \sum_\text{input} W^\text{IN} \, r^\text{IN}(t) + g \, \sum_\text{rec} W^\text{REC} \, r(t) + \xi(t)
r(t) = \tanh(x(t))
where \xi(t) is a uniformly distributed noise term. The weighted sums of inputs arriving through the 'in' and 'exc' projections are accessed as sum(in) and sum(exc) in the neuron's equations.
The neuron has three parameters and two variables:
ESN_Neuron = ann.Neuron(
    parameters = dict(
        tau = 30.0,
        g = 1.0,
        noise = 0.01,
    ),
    equations = [
        'tau * dx/dt + x = sum(in) + g * sum(exc) + noise * Uniform(-1, 1)',
        'r = tanh(x)',
    ]
)
Let’s create an empty network:
net = ann.Network()
The echo-state network will be a population of 400 neurons.
N = 400
pop = net.create(N, ESN_Neuron)
We can specify the value of the parameters from Python, which overrides the values defined in the neuron description. We can give single float values or numpy arrays of the correct shape:
pop.tau = 30.0
pop.g = 1.4
pop.noise = 0.01
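For example, each neuron could receive its own time constant by assigning an array of shape (N,) (a hypothetical variation; the range 20-40 ms is arbitrary and not used in the rest of this notebook):

pop.tau = np.random.uniform(20., 40., N)  # one time constant per neuron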
The input to the reservoir is a single value. We create a special population InputArray that does nothing except store a variable called r, which can be set externally.
inp = net.create(ann.InputArray(1))
inp.r = 0.0
Input weights are uniformly distributed between -1 and 1.
Wi = net.connect(inp, pop, 'in')
Wi.all_to_all(weights=ann.Uniform(-1.0, 1.0))
<ANNarchy.core.Projection.Projection at 0x11ede1580>
Recurrent weights are sampled from a normal distribution with mean 0 and variance g^2 / N. As the synaptic scaling g is already applied inside the neuron's equations, the weights themselves are drawn with a standard deviation of 1 / \sqrt{N}.
Wrec = net.connect(pop, pop, 'exc')
Wrec.all_to_all(weights=ann.Normal(0., 1/np.sqrt(N)))
<ANNarchy.core.Projection.Projection at 0x11e1be030>
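As a quick sanity check (not part of the original notebook), one can verify numerically that a random matrix sampled this way has a spectral radius close to 1, so that g alone sets the effective spectral radius of the recurrent connectivity:

# Sketch: empirical spectral radius of an N x N matrix with i.i.d. Normal(0, 1/N) entries
rng = np.random.default_rng(42)           # seed chosen arbitrarily
W = rng.normal(0., 1./np.sqrt(N), (N, N))
rho = np.max(np.abs(np.linalg.eigvals(W)))
print(rho)  # circular law: expected to be close to 1.0 for large N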
net.compile()
We create a monitor to record the evolution of the firing rates in the reservoir during the simulation.
m = net.monitor(pop, 'r')
n = net.monitor(inp, 'r')
A single trial lasts 3 seconds by default, with a step input between 100 and 200 ms. We define the trial in a function, so we can run it multiple times.
def trial(T=3000.):
    "Runs a single trial with a step input between 100 and 200 ms."
    # Reset firing rates
    inp.r = 0.0
    pop.x = 0.0
    pop.r = 0.0
    # Run the trial
    net.simulate(100.)
    inp.r = 1.0
    net.simulate(100.0) # initial stimulation
    inp.r = 0.0
    net.simulate(T - 200.)
    return m.get('r')
We run two trials successively to observe how chaotic the dynamics are depending on the value of g.
pop.g = 1.5
data1 = trial()
data2 = trial()
plt.figure(figsize=(12, 12))
plt.subplot(311)
plt.title("First trial")
for i in range(5):
    plt.plot(data1[:, i], lw=2)
plt.subplot(312)
plt.title("Second trial")
for i in range(5):
    plt.plot(data2[:, i], lw=2)
plt.subplot(313)
plt.title("Difference")
for i in range(5):
    plt.plot(data1[:, i] - data2[:, i], lw=2)
plt.tight_layout()
plt.show()
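To complement the plots, the divergence between the two trials can also be quantified (a small addition, not part of the original notebook). With g = 1.5 and the noise term, the two trajectories are expected to drift apart over time despite receiving the same input:

# Euclidean distance between the population states of the two trials at each time step
distance = np.linalg.norm(data1 - data2, axis=1)
print("Early divergence:", distance[:500].mean())
print("Late divergence: ", distance[-500:].mean())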
We can now train the readout neurons to reproduce a step signal after 2 seconds.
For simplicity, we just train an L1-regularized linear regression (LASSO) on the reservoir activity using scikit-learn.
target = np.zeros(3000)
target[2000:2500] = 1.0
from sklearn import linear_model
reg = linear_model.Lasso(alpha=0.001, max_iter=10000)
reg.fit(data1, target)
pred = reg.predict(data2)
plt.figure(figsize=(12, 8))
plt.plot(pred, lw=3)
plt.plot(target, lw=3)
plt.show()
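To put a number on the readout's performance, one could also compute the mean squared error between the prediction and the target on the second trial (a small addition, not part of the original notebook):

from sklearn.metrics import mean_squared_error
print(mean_squared_error(target, pred))  # lower is better; 0 would be a perfect fit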