Echo state networks

This notebook demonstrates how to implement a simple Echo state network (ESN) with ANNarchy. It is a simple rate-coded network, with a population of recurrently-connected neurons and a readout layer which will be learned offline using scikit-learn. The task will be a simple univariate regression.

If you run this notebook in colab, first uncomment and run this cell to install ANNarchy:

#!pip install ANNarchy
Let’s start by importing ANNarchy.

The clear() command is necessary in notebooks when recreating a network. If you re-run the cells creating a network without calling clear() first, populations will add up, and the results may not be what you expect.

setup() sets various parameters, such as the step size dt in milliseconds. By default, dt is 1.0, so the call is not necessary here.
import numpy as np
import matplotlib.pyplot as plt
import ANNarchy as ann
ann.clear()
ann.setup(dt=1.0)
ANNarchy 4.8 (4.8.2) on darwin (posix).
Each neuron in the reservoir obeys the following equations:
\tau \frac{dx(t)}{dt} + x(t) = \sum_\text{input} W^\text{IN} \, r^\text{IN}(t) + g \, \sum_\text{rec} W^\text{REC} \, r(t) + \xi(t)
r(t) = \tanh(x(t))
where \xi(t) is a uniform noise term.
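To make the dynamics concrete, here is a minimal numpy sketch (an illustration only, not ANNarchy code) of one explicit Euler step of these equations; the reservoir size and the weight matrices below are made up for the example:

# One Euler step of: tau * dx/dt + x = input  =>  x += dt/tau * (input - x)
N_demo = 10                                       # toy reservoir size (hypothetical)
dt, tau, g, noise = 1.0, 30.0, 1.0, 0.01
W_in = np.random.uniform(-1., 1., (N_demo, 1))    # W^IN
W_rec = np.random.normal(0., 1./np.sqrt(N_demo), (N_demo, N_demo))  # W^REC
x = np.zeros(N_demo)
r = np.tanh(x)
r_in = np.array([1.0])                            # input firing rate r^IN

drive = W_in @ r_in + g * (W_rec @ r) + noise * np.random.uniform(-1., 1., N_demo)
x += dt / tau * (drive - x)
r = np.tanh(x)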
The neuron has three parameters and two variables:
ESN_Neuron = ann.Neuron(
    parameters = """
        tau = 30.0 : population
        g = 1.0 : population
        noise = 0.01
    """,
    equations = """
        tau * dx/dt + x = sum(in) + g * sum(exc) + noise * Uniform(-1, 1)
        r = tanh(x)
    """
)
The echo-state network will be a population of 400 neurons.
N = 400
pop = ann.Population(N, ESN_Neuron)
We can specify the value of the parameters from Python; this overrides the values defined in the neuron description. We can give single float values or numpy arrays of the correct shape:
pop.tau = 30.0
pop.g = 1.4
pop.noise = 0.01
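For instance, an array with one value per neuron assigns heterogeneous parameters (a sketch, kept commented out so the scalar values above remain in effect):

# Illustrative only: an array of shape (N,) gives each neuron its own time constant.
# pop.tau = np.random.uniform(20.0, 40.0, N)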
The input to the reservoir is a single value, so we create a special population InputArray that does nothing except store a variable called r that can be set externally.
inp = ann.InputArray(1)
inp.r = 0.0
Input weights are uniformly distributed between -1 and 1.
Wi = ann.Projection(inp, pop, 'in')
Wi.connect_all_to_all(weights=ann.Uniform(-1.0, 1.0))
<ANNarchy.core.Projection.Projection at 0x13163c710>
Recurrent weights are sampled from a normal distribution with mean 0 and variance g^2 / N. Here, the synaptic scaling g is applied inside the neuron's equations, so the weights themselves only need a variance of 1/N.
Wrec = ann.Projection(pop, pop, 'exc')
Wrec.connect_all_to_all(weights=ann.Normal(0., 1/np.sqrt(N)))
<ANNarchy.core.Projection.Projection at 0x13163d490>
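As a quick sanity check of this scaling (a pure numpy sketch, independent of ANNarchy): the spectral radius of an N x N matrix with i.i.d. Normal(0, 1/N) entries is close to 1 for large N, so the effective spectral radius of the scaled recurrent matrix is approximately g.

# Empirical spectral radius of an N x N matrix with Normal(0, 1/N) entries.
W = np.random.normal(0., 1./np.sqrt(N), (N, N))
print(np.max(np.abs(np.linalg.eigvals(W))))  # close to 1.0 for N = 400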
ann.compile()
Compiling ... OK
We create monitors to record the evolution of the firing rates in the reservoir and in the input population during the simulation.
m = ann.Monitor(pop, 'r')
n = ann.Monitor(inp, 'r')
A single trial lasts 3 seconds by default, with a step input between 100 and 200 ms. We define the trial in a function, so we can run it multiple times.
def trial(T=3000.):
    "Runs a single trial and returns the recorded activity of the reservoir."
    # Reset the input and the firing rates
    inp.r = 0.0
    pop.x = 0.0
    pop.r = 0.0
    # Run the trial
    ann.simulate(100.)
    inp.r = 1.0
    ann.simulate(100.0) # initial stimulation
    inp.r = 0.0
    ann.simulate(T - 200.)
    return m.get('r')
We run two trials successively to see how chaotic the reservoir dynamics are depending on g.
pop.g = 1.5
data1 = trial()
data2 = trial()
plt.figure(figsize=(12, 12))
plt.subplot(311)
plt.title("First trial")
for i in range(5):
    plt.plot(data1[:, i], lw=2)
plt.subplot(312)
plt.title("Second trial")
for i in range(5):
    plt.plot(data2[:, i], lw=2)
plt.subplot(313)
plt.title("Difference")
for i in range(5):
    plt.plot(data1[:, i] - data2[:, i], lw=2)
plt.tight_layout()
plt.show()
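Beyond the plots, a quick quantitative check (a sketch, not part of the original analysis) is the average divergence between the two trials:

# With g > 1 the reservoir is in the chaotic regime: despite identical inputs,
# the noise makes the two trajectories diverge, so this value is clearly non-zero.
print(np.mean(np.abs(data1 - data2)))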
We can now train the readout neurons to reproduce a step signal after 2 seconds. For simplicity, we just train an L1-regularized linear regression (LASSO) on the reservoir activity using scikit-learn.
target = np.zeros(3000)
target[2000:2500] = 1.0
from sklearn import linear_model

reg = linear_model.Lasso(alpha=0.001, max_iter=10000)
reg.fit(data1, target)
pred = reg.predict(data2)
plt.figure(figsize=(12, 8))
plt.plot(pred, lw=3)
plt.plot(target, lw=3)
plt.show()
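To quantify the fit on the held-out trial, one can compute the mean squared error with scikit-learn (a small addition for illustration):

from sklearn.metrics import mean_squared_error

# Error of the readout prediction on the second trial.
print(mean_squared_error(target, pred))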