ANNarchy

Artificial Neural Networks architect

Outline

  1. Neurocomputational models

  2. Neuro-simulator ANNarchy

  3. Rate-coded networks

  4. Spiking networks

1 - Neurocomputational models

Computational neuroscience

  • Computational neuroscience is about explaining how the brain works at various levels (neural activity patterns, behavior, etc.) using biologically realistic neuro-computational models.

  • Different types of neural and synaptic mathematical models are used in the field, abstracting biological complexity at different levels.

  • There is no “right” level of biological plausibility for a model (you can always add more details), but you have to find a paradigm that allows you to:

  1. Explain the current experimental data.
  2. Make predictions that can be useful to experimentalists.

Source: Kriegeskorte and Douglas (2018)

Rate-coded and spiking neurons

  • Rate-coded neurons only represent the instantaneous firing rate of a neuron:

\tau \, \frac{d v(t)}{dt} + v(t) = \sum_{i=1}^d w_{i} \, r_i(t) + b

r(t) = f(v(t))

  • Spiking neurons emit binary spikes when their membrane potential exceeds a threshold (leaky integrate-and-fire, LIF):

C \, \frac{d v(t)}{dt} = - g_L \, (v(t) - V_L) + I(t)

\text{if} \; v(t) > V_T \; \text{emit a spike and reset.}
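
To make the two equations above concrete, here is a minimal NumPy sketch (plain Python, not ANNarchy) that integrates both models with the explicit Euler method used later in this course; all parameter values are arbitrary toy values:

import numpy as np

# --- Rate-coded neuron (explicit Euler) ---
tau, dt, b = 10.0, 1.0, 0.1                     # time constant (ms), step (ms), bias
w = np.array([0.5, -0.3, 0.8])                  # input weights (made up)
r_in = np.array([1.0, 0.5, 0.2])                # constant pre-synaptic firing rates
v, rates = 0.0, []
for _ in range(100):
    v += dt / tau * (np.dot(w, r_in) + b - v)   # tau dv/dt + v = sum_i w_i r_i + b
    rates.append(np.tanh(v))                    # f = tanh is one common choice

# --- LIF neuron (threshold + reset) ---
C, g_L, V_L, V_T, V_reset, I = 200.0, 10.0, -70.0, -50.0, -58.0, 300.0
dt = 0.1
v, spike_times = V_L, []
for step in range(int(100.0 / dt)):             # 100 ms of simulated time
    v += dt / C * (-g_L * (v - V_L) + I)        # C dv/dt = -g_L (v - V_L) + I(t)
    if v > V_T:                                 # threshold crossing
        spike_times.append(step * dt)           # record the spike time
        v = V_reset                             # reset the membrane potential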

Many different spiking neuron models are possible

Izhikevich quadratic integrate-and-fire neuron (Izhikevich, 2003):

\begin{cases} \displaystyle\frac{dv}{dt} = 0.04 \, v^2 + 5 \, v + 140 - u + I \\ \\ \displaystyle\frac{du}{dt} = a \, (b \, v - u) \\ \end{cases}

Adaptive exponential integrate-and-fire neuron (AdEx; Brette and Gerstner, 2005):

\begin{cases} \begin{aligned} C \, \frac{dv}{dt} = -g_L \, (v - E_L) + & g_L \, \Delta_T \, \exp(\frac{v - v_T}{\Delta_T}) \\ & + I - w \end{aligned}\\ \\ \tau_w \, \displaystyle\frac{dw}{dt} = a \, (v - E_L) - w\\ \end{cases}

Realistic neuron models can reproduce a variety of dynamics

  • Biological neurons do not all respond the same way to an input current: some fire regularly, some slow down over time, some emit bursts of spikes…

  • Modern spiking neuron models can reproduce these dynamics by changing just a few parameters.

Populations of neurons

  • Recurrent neural networks (e.g. randomly connected populations of neurons) can exhibit very rich dynamics even in the absence of inputs:

  • Oscillations at the population level.

  • Excitatory/inhibitory balance.

  • Spatio-temporal separation of inputs (reservoir computing).

Synaptic plasticity: Hebbian learning

  • Hebbian learning postulates that synapses strengthen based on the correlation between the activity of the pre- and post-synaptic neurons:

When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.

Donald Hebb, 1949

  • Weights increase proportionally to the product of the pre- and post-synaptic firing rates:

\frac{dw}{dt} = \eta \, r^\text{pre} \, r^\text{post}
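
In discrete time, this rule is simply an outer product of the firing-rate vectors; a minimal NumPy sketch with made-up rates:

import numpy as np

eta = 0.01                               # learning rate
r_pre = np.random.uniform(0., 1., 10)    # pre-synaptic firing rates (made up)
r_post = np.random.uniform(0., 1., 5)    # post-synaptic firing rates (made up)
W = np.zeros((5, 10))                    # weight matrix (post x pre)

W += eta * np.outer(r_post, r_pre)       # dw/dt = eta * r_pre * r_post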

Synaptic plasticity: Hebbian-based learning

BCM rule (Bienenstock, Cooper, Munro, 1982; Intrator and Cooper, 1992):

\frac{dw}{dt} = \eta \, r^\text{pre} \, r^\text{post} \, (r^\text{post} - \mathbb{E}[(r^\text{post})^2])

Covariance rule:

\frac{dw}{dt} = \eta \, r^\text{pre} \, (r^\text{post} - \mathbb{E}[r^\text{post}])

Oja learning rule (Oja, 1982):

\frac{dw}{dt}= \eta \, r^\text{pre} \, r^\text{post} - \alpha \, (r^\text{post})^2 \, w

or virtually anything depending only on the pre- and post-synaptic firing rates, e.g. Vitay and Hamker (2010):

\begin{aligned} \frac{dw}{dt} & = \eta \, ( \text{DA}(t) - \overline{\text{DA}}) \, (r^\text{post} - \mathbb{E}[r^\text{post}] )^+ \, (r^\text{pre} - \mathbb{E}[r^\text{pre}])- \alpha(t) \, ((r^\text{post} - \mathbb{E}[r^\text{post}] )^+ )^2 \, w \end{aligned}

STDP: Spike-timing dependent plasticity

  • Synaptic efficiencies actually evolve depending on the causal relationship between the pre- and post-synaptic firing patterns:

    • If the pre-synaptic neuron fires before the post-synaptic one, the weight is increased (long-term potentiation). Pre causes Post to fire.

    • If it fires after, the weight is decreased (long-term depression). Pre does not cause Post to fire.

STDP: Spike-timing dependent plasticity

  • The STDP (spike-timing dependent plasticity, Bi and Poo, 2001) plasticity rule describes how the weight of a synapse evolves when the pre-synaptic neuron fires at t_\text{pre} and the post-synaptic one fires at t_\text{post}.

\frac{dw}{dt} = \begin{cases} A^+ \, \exp\left(- \frac{t_\text{post} - t_\text{pre}}{\tau^+}\right) \; \text{if} \; t_\text{post} > t_\text{pre}\\ \\ A^- \, \exp\left(- \frac{t_\text{pre} - t_\text{post}}{\tau^-}\right) \; \text{if} \; t_\text{pre} > t_\text{post}\\ \end{cases}

  • STDP can be implemented online using traces, as sketched after this list.

  • More complex variants of STDP (triplet STDP) exist, but this is the main model of synaptic plasticity in spiking networks.
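
A minimal NumPy sketch of the trace-based implementation for a single synapse (spike times, time constants and amplitudes are arbitrary; the ANNarchy version appears in Section 4):

import numpy as np

dt, T = 1.0, 200                          # time step (ms), number of steps
tau_plus, tau_minus = 20.0, 20.0
A_plus, A_minus = 0.01, 0.012
pre_spikes, post_spikes = {20, 60, 100}, {25, 55, 150}   # made-up spike times (steps)

x = y = 0.0                               # pre- and post-synaptic traces
w = 0.5
for t in range(T):
    x -= dt / tau_plus * x                # both traces decay exponentially
    y -= dt / tau_minus * y
    if t in pre_spikes:
        x += A_plus                       # increment the pre-synaptic trace
        w = np.clip(w - y, 0.0, 1.0)      # LTD: post fired before pre
    if t in post_spikes:
        y += A_minus                      # increment the post-synaptic trace
        w = np.clip(w + x, 0.0, 1.0)      # LTP: pre fired before post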

Neuro-computational modeling

  • Populations of neurons can be combined into functional neuro-computational models that learn to solve various tasks.

  • One (or more) equations per neuron and per synapse have to be implemented. There can be thousands of neurons and millions of synapses, see Teichmann et al. (2021).

Basal Ganglia

Source: Villagrasa et al. (2018)

Hippocampus

Source: Gönner et al. (2017)

Dopaminergic system

Source: Vitay and Hamker (2014)

2 - Neuro-simulator ANNarchy

Neuro-simulators

  • Fixed libraries of models.

  • Code generation.
ANNarchy (Artificial Neural Networks architect)


Vitay et al. (2015)
ANNarchy: a code generation approach to neural simulations on parallel hardware.
Frontiers in Neuroinformatics 9. doi:10.3389/fninf.2015.00019


Installation

Installation guide: https://annarchy.github.io/Installation/

Using pip:

pip install ANNarchy

From source:

git clone https://github.com/ANNarchy/ANNarchy.git
cd ANNarchy
pip install -e .

Requirements (Linux and MacOS):

  • g++/clang++
  • python >= 3.6
  • numpy
  • sympy
  • cython

Features

  • Simulation of both rate-coded and spiking neural networks.

  • Only local biologically realistic mechanisms are possible (no backpropagation).

  • Equation-oriented description of neural/synaptic dynamics (à la Brian).

  • Code generation in C++, parallelized using OpenMP on CPU and CUDA on GPU (MPI is coming).

  • Synaptic, intrinsic and structural plasticity mechanisms.

Structure of a script

import ANNarchy as ann

# Create neuron types
neuron = ann.Neuron(...) 

# Create synapse types for transmission and/or plasticity
stdp = ann.Synapse(...) 

# Create populations of neurons
pop = ann.Population(1000, neuron) 

# Connect the populations through projections
proj = ann.Projection(pop, pop, 'exc', stdp) 
proj.connect_fixed_probability(weights=ann.Uniform(0.0, 1.0), probability=0.1)

# Generate and compile the code
ann.compile() 

# Record spiking activity
m = ann.Monitor(pop, ['spike']) 

# Simulate for 1 second
ann.simulate(1000.)

3 - Rate-coded networks

Echo-State Network

  • ESN rate-coded neurons follow first-order ODEs:

\tau \frac{dx(t)}{dt} + x(t) = \sum w^\text{in} \, r^\text{in}(t) + g \, \sum w^\text{rec} \, r(t) + \xi(t)

r(t) = \tanh(x(t))

  • Neural dynamics are described by the equation-oriented interface:
import ANNarchy as ann

ESN_Neuron = ann.Neuron(
    parameters = """
        tau = 30.0   : population   # Time constant
        g = 1.0      : population   # Scaling
        noise = 0.01 : population   # Noise level
    """,
    equations="""
        tau * dx/dt + x = sum(in) + g * sum(exc) + noise * Uniform(-1, 1)  : init=0.0
 
        r = tanh(x)
    """
)

Parameters

    parameters = """
        tau = 30.0   : population   # Time constant
        g = 1.0      : population   # Scaling
        noise = 0.01 : population   # Noise level
    """
  • All parameters used in the equations must be declared in the Neuron definition.

  • Parameters can have one value per neuron in the population (default) or be common to all neurons (flag population or projection).

  • Parameters and variables are double-precision floats by default, but the type can be specified with a flag (int, bool).
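
For instance, a flag after the definition changes the type; a hypothetical parameter block (the parameter names below are made up for illustration):

    parameters = """
        tau = 30.0     : population    # double-precision float (default)
        n_inputs = 10  : int           # integer parameter
        plastic = True : bool          # boolean parameter
    """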

Variables

    equations="""
        tau * dx/dt + x = sum(in) + g * sum(exc) + noise * Uniform(-1, 1) : init=0.0

        r = tanh(x)
    """
  • Variables are evaluated at each time step in the order of their declaration, except for coupled ODEs.

  • The output variable of a rate-coded neuron must be named r.

  • Variables can be updated with assignments (=, +=, etc) or by defining first order ODEs.

  • The math C library symbols can be used (tanh, cos, exp, etc).

  • Initial values at the start of the simulation can be specified with init (default: 0.0).

  • Lower/higher bounds on the values of the variables can be set with the min/max flags:

r = x : min=0.0 # ReLU
  • Additive noise can be drawn from several distributions, including Uniform, Normal, LogNormal, Exponential, Gamma

ODEs

  • First-order ODEs are parsed and manipulated using sympy:
    # All equivalent:
    tau * dx/dt + x = I
    tau * dx/dt = I - x
    dx/dt = (I - x)/tau
  • The generated C++ code applies a numerical method with a fixed step size dt (set globally, see below) for all neurons:
#pragma omp simd
for(unsigned int i = 0; i < size; i++){
    double _x = (I[i] - x[i])/tau;
    x[i] += dt*_x ;
    r[i] = tanh(x[i]);
}
  • Several numerical methods are available:

    • Explicit (forward) Euler (default):
    tau * dx/dt + x = I : explicit
    • Implicit (backward) Euler:
    tau * dx/dt + x = I : implicit
    • Exponential Euler (exact for linear ODE):
    tau * dx/dt + x = I : exponential
    • Midpoint (RK2):
    tau * dx/dt + x = I : midpoint
    • Runge-Kutta (RK4):
    tau * dx/dt + x = I : rk4
    • Event-driven (spiking synapses):
    tau * dx/dt + x = I : event-driven
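
All methods use the fixed step size dt mentioned above. It is set globally with ann.setup() before compilation (the default should be 1 ms):

import ANNarchy as ann

ann.setup(dt=0.1)   # integration step of 0.1 ms for the whole network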

Populations

  • Populations are created by specifying a number of neurons and a neuron type:
pop = ann.Population(1000, ESN_Neuron)
  • For visualization purposes or when using convolutional layers, a tuple geometry can be passed instead of the size:
pop = ann.Population((100, 100), ESN_Neuron)
  • All parameters and variables become attributes of the population (read and write) as numpy arrays:
pop.tau = np.linspace(20.0, 40.0, 1000)
pop.r = np.tanh(pop.x)
  • Slices of populations are called PopulationView and can be addressed separately:
pop = ann.Population(1000, ESN_Neuron)
E = pop[:800]
I = pop[800:]

Projections

  • Projections connect two populations (or views) in a uni-directional way.
proj_exc = ann.Projection(E, pop, 'exc')
proj_inh = ann.Projection(I, pop, 'inh')
  • Each target ('exc', 'inh', 'AMPA', 'NMDA', 'GABA') can be defined as needed and will be treated differently by the post-synaptic neurons.

  • The weighted sum of inputs for a specific target is accessed in the equations by sum(target):

equations="""
    tau * dx/dt + x = sum(exc) - sum(inh)

    r = tanh(x)
"""
  • It is therefore possible to model modulatory effects, divisive inhibition, etc.
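
For example, divisive inhibition could be sketched by dividing the excitatory input by the inhibitory one instead of subtracting it (an illustrative equation, not taken from a specific model):

equations="""
    tau * dx/dt + x = sum(exc) / (1.0 + sum(inh))

    r = tanh(x)
"""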

Connection methods

  • Projections must be populated with a connectivity matrix (who is connected to whom), a weight w and optionally a delay d (uniform or variable).

  • Several patterns are predefined:

proj.connect_all_to_all(weights=Normal(0.0, 1.0), delays=2.0, allow_self_connections=False)

proj.connect_one_to_one(weights=1.0, delays=Uniform(1.0, 10.0))

proj.connect_fixed_number_pre(number=20, weights=1.0)

proj.connect_fixed_number_post(number=20, weights=1.0)

proj.connect_fixed_probability(probability=0.2, weights=1.0)

proj.connect_gaussian(amp=1.0, sigma=0.2, limit=0.001)

proj.connect_dog(amp_pos=1.0, sigma_pos=0.2, amp_neg=0.3, sigma_neg=0.7, limit=0.001)
  • But you can also load Numpy arrays or Scipy sparse matrices. Example for synfire chains:
w = np.array([[None]*pre.size]*post.size)

for i in range(post.size):
    w[i, (i-1)%pre.size] = 1.0

proj.connect_from_matrix(w)
from scipy.sparse import lil_matrix

w = lil_matrix((pre.size, post.size))

for i in range(pre.size):
    w[i, (i+1)%post.size] = 1.0

proj.connect_from_sparse(w)

Compiling and running the simulation

  • Once all populations and projections are created, you have to generate the C++ code and compile it:
ann.compile()
  • You can now manipulate all parameters/variables from Python thanks to the Cython bindings.

  • A simulation is simply run for a fixed duration in milliseconds with:

ann.simulate(1000.) # 1 second
  • You can also run a simulation until a criterion is met (e.g. a first spike), see:

https://annarchy.github.io/manual/Simulation.html#early-stopping
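
A possible sketch, assuming the stop_condition argument of Population and ann.simulate_until() behave as described on that manual page (check the exact syntax there):

# Stop as soon as any neuron exceeds a firing rate of 1.0 (hypothetical criterion)
pop = ann.Population(1000, ESN_Neuron, stop_condition="r > 1.0 : any")

ann.compile()

# Runs for at most 1000 ms, less if the condition is met before
ann.simulate_until(max_duration=1000., population=pop)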

Monitoring

  • By default, a simulation is run in C++ without interaction with Python.

  • You may want to record some variables (neural or synaptic) during the simulation with a Monitor:

m = ann.Monitor(pop, ['v', 'r'])

n = ann.Monitor(proj, ['w'])
  • After the simulation, you can retrieve the recordings with:
recorded_v = m.get('v')

recorded_r = m.get('r')

recorded_w = n.get('w')

Calling get() flushes the underlying arrays.

Recording projections can quickly fill up the RAM…
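
For spiking populations, the recorded spikes can be turned into raster-plot data with the monitor's raster_plot() helper, assuming spikes were recorded as in the earlier example (ann.Monitor(pop, ['spike'])); the plotting itself is plain matplotlib:

import matplotlib.pyplot as plt

spikes = m.get('spike')        # dictionary: neuron rank -> list of spike times

t, n = m.raster_plot(spikes)   # arrays of spike times and neuron ranks

plt.plot(t, n, '.', markersize=1)
plt.xlabel('Time (ms)')
plt.ylabel('Neuron rank')
plt.show()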

Notebook: Echo-State Network


\tau \frac{dx(t)}{dt} + x(t) = \sum_\text{input} W^\text{IN} \, r^\text{IN}(t) + g \, \sum_\text{rec} W^\text{REC} \, r(t) + \xi(t)

r(t) = \tanh(x(t))

Plasticity in rate-coded networks

  • Synapses can also implement plasticity rules that will be evaluated after each neural update.

  • Example of the Intrator & Cooper BCM learning rule:

\Delta w = \eta \, r^\text{pre} \, r^\text{post} \, (r^\text{post} - \mathbb{E}[(r^\text{post})^2])

IBCM = ann.Synapse(
    parameters = """
        eta = 0.01 : projection
        tau = 2000.0 : projection
    """,
    equations = """
        tau * dtheta/dt + theta = post.r^2 : postsynaptic, exponential

        dw/dt = eta * post.r * (post.r - theta) * pre.r : min=0.0, explicit
    """,
    psp = "w * pre.r"
)
  • Each synapse can access pre- and post-synaptic variables with pre. and post..

  • The postsynaptic flag restricts a computation to once per post-synaptic neuron instead of once per synapse.

  • psp optionally defines what will be summed by the post-synaptic neuron (e.g. psp = "w * log(pre.r)").

Plastic projections

  • The synapse type just has to be passed to the Projection:
proj = ann.Projection(inp, pop, 'exc', synapse=IBCM)
  • Synaptic variables can be accessed as lists of lists for the whole projection:
proj.w
proj.theta

or for a single post-synaptic neuron:

proj[10].w

Notebook: IBCM learning rule


  • Variant of the BCM (Bienenstock, Cooper, Munro, 1982) learning rule.

  • The LTP and LTD depend on post-synaptic activity: homeostasis.

\Delta w = \eta \, r^\text{pre} \, r^\text{post} \, (r^\text{post} - \mathbb{E}[(r^\text{post})^2])

Notebook: Reward-modulated RC network of Miconi (2017)


e_{i, j}(t) = e_{i, j}(t-1) + (r_i (t) \, x_j(t))^3

\Delta w_{i, j} = - \eta \, e_{i, j}(T) \, (R - R_\text{mean})

4 - Spiking networks

Spiking neurons

  • Spiking neurons must define two additional fields:

    • spike: condition for emitting a spike.

    • reset: what happens after a spike is emitted (at the start of the refractory period).

  • A refractory period in ms can also be specified.

  • Example of the Leaky Integrate-and-Fire:

C \, \frac{d v(t)}{dt} = - g_L \, (v(t) - V_L) + I(t)

\text{if} \; v(t) > V_T \; \text{emit a spike and reset.}

LIF = ann.Neuron(
    parameters = """
        C = 200.
        g_L = 10.
        E_L = -70.
        v_T = 0.
        v_r = -58.
        I = 0.25
    """,
    equations = """
        C * dv/dt = g_L * (E_L - v) + I : init=E_L     
    """,
    spike = "v >= v_T",
    reset = "v = v_r",
    refractory = 2.0
)

Notebook: AdEx neuron - Adaptive exponential Integrate-and-fire


\tau \cdot \frac{dv (t)}{dt} = E_l - v(t) + g_\text{exc} (t) \, (E_\text{exc} - v(t)) + g_\text{inh} (t) \, (E_\text{inh} - v(t)) + I(t)

Conductances / currents

  • A pre-synaptic spike arriving at a spiking neuron increases its conductance/current g_target (e.g. g_exc or g_inh, depending on the projection).
LIF = ann.Neuron(
    parameters = "...",
    equations = """
        C*dv/dt = g_L*(E_L - v) + g_exc : init=E_L    
    """,
    spike = "v >= v_T",
    reset = "v = v_r",
    refractory = 2.0
)
  • Each spike instantaneously increments g_target by the synaptic efficiency w of the corresponding synapse.
g_target += w

Conductances / currents

  • For exponentially-decreasing or alpha-shaped synapses, ODEs have to be introduced for the conductance/current.

  • The exponential numerical method should be preferred, as integration is exact.

LIF = ann.Neuron(
    parameters = "...",
    equations = """
        C*dv/dt = g_L*(E_L - v) + g_exc : init=E_L   

        tau_exc * dg_exc/dt = - g_exc : exponential
    """,
    spike = "v >= v_T",
    reset = "v = v_r",
    refractory = 2.0
)

Notebook: Synaptic transmission


# Membrane potential
tau * dv/dt = (E_L - v) + g_a + g_b + alpha_c

# Exponentially decreasing
tau_b * dg_b/dt = -g_b

# Alpha-shaped
tau_c * dg_c/dt = -g_c
tau_c * dh_c/dt = g_c - h_c

Notebook: COBA - Conductance-based E/I network


\tau \, \frac{dv (t)}{dt} = E_l - v(t) + g_\text{exc} (t) \, (E_\text{exc} - v(t)) + g_\text{inh} (t) \, (E_\text{inh} - v(t)) + I(t)

Spiking synapses: Short-term plasticity (STP)

  • Spiking synapses can define a pre_spike field, defining what happens when a pre-synaptic spike arrives at the synapse.

  • g_target is an alias for the corresponding post-synaptic conductance: it will be replaced by g_exc or g_inh depending on how the synapse is used.

  • By default, a pre-synaptic spike increments the post-synaptic conductance from w: g_target += w

STP = ann.Synapse(
    parameters = """
        tau_rec = 100.0 : projection
        tau_facil = 0.01 : projection
        U = 0.5
    """,
    equations = """
        dx/dt = (1 - x)/tau_rec : init = 1.0, event-driven
        du/dt = (U - u)/tau_facil : init = 0.5, event-driven
    """,
    pre_spike="""
        g_target += w * u * x
        x *= (1 - u)
        u += U * (1 - u)
    """
)

Notebook: STP


Spiking synapses: Example of spike-timing dependent plasticity (STDP)

  • post_spike similarly defines what happens when a post-synaptic spike is emitted.
STDP = ann.Synapse(
    parameters = """
        tau_plus = 20.0 : projection ; tau_minus = 20.0 : projection
        A_plus = 0.01 : projection   ; A_minus = 0.01 : projection
        w_min = 0.0 : projection     ; w_max = 1.0 : projection
    """,
    equations = """
        tau_plus  * dx/dt = -x : event-driven # pre-synaptic trace
        tau_minus * dy/dt = -y : event-driven # post-synaptic trace
    """,
    pre_spike="""
        g_target += w
        x += A_plus * w_max
        w = clip(w + y, w_min , w_max)
    """,
    post_spike="""
        y -= A_minus * w_max
        w = clip(w + x, w_min , w_max)
    """)

Notebook: STDP


\begin{cases} \tau^+ \, \dfrac{d x(t)}{dt} = -x(t) \\ \\ \tau^- \, \dfrac{d y(t)}{dt} = -y(t) \\ \end{cases}

And much more…

  • Standard populations (SpikeSourceArray, TimedArray, PoissonPopulation, HomogeneousCorrelatedSpikeTrains; see the example after this list), OpenCV bindings.

  • Standard neurons:

    • LeakyIntegrator, Izhikevich, IF_curr_exp, IF_cond_exp, IF_curr_alpha, IF_cond_alpha, HH_cond_exp, EIF_cond_exp_isfa_ista, EIF_cond_alpha_isfa_ista
  • Standard synapses:

    • Hebb, Oja, IBCM, STP, STDP
  • Parallel simulations with parallel_run.

  • Convolutional and pooling layers.

  • Hybrid rate-coded / spiking networks.

  • Structural plasticity.

  • BOLD monitors.

  • Tensorboard visualization.

  • ANN (keras) to SNN conversion tool.
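
As a small example of a standard population, a Poisson spike source feeding the LIF population defined earlier might look like this, assuming the PoissonPopulation constructor takes a geometry and a rates argument in Hz (rates and connectivity below are arbitrary):

# 100 independent Poisson neurons firing at 30 Hz
inp = ann.PoissonPopulation(100, rates=30.0)

pop = ann.Population(1000, LIF)

# Sparse excitatory projection from the Poisson input
proj = ann.Projection(inp, pop, 'exc')
proj.connect_fixed_probability(weights=1.0, probability=0.1)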

RTFM: https://annarchy.github.io

References

Bi, G., and Poo, M. (2001). Synaptic Modification by Correlated Activity: Hebb’s Postulate Revisited. Annual Review of Neuroscience 24, 139–166. doi:10.1146/annurev.neuro.24.1.139.
Bienenstock, E. L., Cooper, L. N., and Munro, P. W. (1982). Theory for the development of neuron selectivity: Orientation specificity and binocular interaction in visual cortex. Journal of Neuroscience 2, 32–48.
Brette, R., and Gerstner, W. (2005). Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity. Journal of Neurophysiology 94, 3637–3642. doi:10.1152/jn.00686.2005.
Dayan, P., and Abbott, L. F. (2001). Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. The MIT Press.
Gönner, L., Vitay, J., and Hamker, F. H. (2017). Predictive Place-Cell Sequences for Goal-Finding Emerge from Goal Memory and the Cognitive Map: A Computational Model. Frontiers in Computational Neuroscience 11, 84–84. doi:10.3389/fncom.2017.00084.
Intrator, N., and Cooper, L. N. (1992). Objective function formulation of the BCM theory of visual cortical plasticity: Statistical connections, stability conditions. Neural Networks 5, 3–17. doi:10.1016/S0893-6080(05)80003-6.
Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE transactions on neural networks 14, 1569–72. doi:10.1109/TNN.2003.820440.
Jaeger, H. (2001). The "echo state" approach to analysing and training recurrent neural networks. Jacobs Universität Bremen.
Kriegeskorte, N., and Douglas, P. K. (2018). Cognitive computational neuroscience. Nature Neuroscience 21, 1148–1160. doi:10.1038/s41593-018-0210-5.
Miconi, T. (2017). Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks. eLife 6. doi:10.7554/elife.20899.
Oja, E. (1982). A simplified neuron model as a principal component analyzer. Journal of Mathematical Biology 15, 267–73.
Teichmann, M., Larisch, R., and Hamker, F. H. (2021). Performance of biologically grounded models of the early visual system on standard object recognition tasks. Neural Networks 144, 210–228. doi:10.1016/j.neunet.2021.08.009.
Tsodyks, M. V., and Markram, H. (1997). The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proceedings of the National Academy of Sciences 94, 719–723. doi:10.1073/pnas.94.2.719.
Villagrasa, F., Baladron, J., Vitay, J., Schroll, H., Antzoulatos, E. G., Miller, E. K., et al. (2018). On the Role of Cortex-Basal Ganglia Interactions for Category Learning: A Neurocomputational Approach. Journal of Neuroscience 38, 9551–9562. doi:10.1523/JNEUROSCI.0874-18.2018.
Vitay, J., Dinkelbach, H. Ü., and Hamker, F. H. (2015). ANNarchy: A code generation approach to neural simulations on parallel hardware. Frontiers in Neuroinformatics 9. doi:10.3389/fninf.2015.00019.
Vitay, J., and Hamker, F. H. (2010). A computational model of Basal Ganglia and its role in memory retrieval in rewarded visual memory tasks. Frontiers in computational neuroscience 4. doi:10.3389/fncom.2010.00013.
Vitay, J., and Hamker, F. H. (2014). Timing and expectation of reward: A neuro-computational model of the afferents to the ventral tegmental area. Frontiers in Neurorobotics 8. doi:10.3389/fnbot.2014.00004.
Vogels, T. P., and Abbott, L. F. (2005). Signal propagation and logic gating in networks of integrate-and-fire neurons. The Journal of neuroscience : the official journal of the Society for Neuroscience 25, 10786–95. doi:10.1523/JNEUROSCI.3508-05.2005.