ANNarchy 5.0.0


ANN-to-SNN conversion - CNN

Download Jupyter Notebook

This notebook demonstrates how to transform a CNN trained with tensorflow/keras into a spiking neural network (SNN) usable in ANNarchy.

The CNN is adapted from the original model used in:

Diehl et al. (2015) “Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing” Proceedings of IJCNN. doi: 10.1109/IJCNN.2015.7280696
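The conversion relies on rate coding: an integrate-and-fire neuron driven by a constant input fires at a rate roughly proportional to that input (and not at all for negative inputs), which mimics a ReLU activation. A minimal sketch of this principle in plain NumPy-free Python (not ANNarchy's implementation; the threshold, time step and inputs are illustrative):

```python
def iaf_spike_count(input_current, threshold=1.0, dt=1.0, steps=200):
    """Simulate a non-leaky integrate-and-fire neuron driven by a constant
    input and return the number of spikes emitted over `steps` time steps.
    The membrane potential is reset by subtraction after each spike."""
    v = 0.0
    spikes = 0
    for _ in range(steps):
        v += dt * input_current   # integrate the constant input
        if v >= threshold:        # threshold crossing emits a spike
            spikes += 1
            v -= threshold        # reset by subtraction
    return spikes

# The spike count grows linearly with positive inputs and stays at
# zero for non-positive ones, like a ReLU (inputs chosen to be exact
# in binary floating point, so the counts are exact):
counts = [iaf_spike_count(i) for i in (-0.5, 0.0, 0.125, 0.25, 0.5)]
```

Weight and threshold balancing, as proposed by Diehl et al., rescales the trained weights so that this linear spike-count regime covers the range of activations seen in the ANN.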

#!pip install ANNarchy
import numpy as np
import matplotlib.pyplot as plt

import tensorflow as tf
print(f"Tensorflow {tf.__version__}")
Tensorflow 2.17.0
# Download data
(X_train, t_train), (X_test, t_test) = tf.keras.datasets.mnist.load_data()

# Normalize inputs
X_train = X_train.astype('float32') / 255.
X_test = X_test.astype('float32') / 255.

# One-hot output vectors
T_train = tf.keras.utils.to_categorical(t_train, 10)
T_test = tf.keras.utils.to_categorical(t_test, 10)
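to_categorical turns each integer label into a one-hot row vector. For reference, the same transform in plain NumPy (a sketch, not the keras implementation):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Return a (len(labels), num_classes) matrix with a single 1 per row,
    placed at the column given by the corresponding label."""
    out = np.zeros((len(labels), num_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out

# one_hot([3, 0], 10) puts a 1 at column 3 of the first row
# and column 0 of the second row; all other entries are 0.
T = one_hot([3, 0], 10)
```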

Training an ANN in tensorflow/keras

The tensorflow.keras convolutional network is built using the functional API.

The CNN has three 5×5 convolutional layers with ReLU activation and no bias, each followed by 2×2 max-pooling; after the last pooling layer, dropout at 0.25 is applied before a softmax output layer with 10 neurons. We use the standard SGD optimizer and the categorical cross-entropy loss for classification.

def create_cnn():

    inputs = tf.keras.Input(shape=(28, 28, 1))
    x = tf.keras.layers.Conv2D(
        16,
        kernel_size=(5, 5),
        activation='relu',
        padding='same',
        use_bias=False)(inputs)
    x = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(x)
    x = tf.keras.layers.Conv2D(
        64,
        kernel_size=(5, 5),
        activation='relu',
        padding='same',
        use_bias=False)(x)
    x = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(x)
    x = tf.keras.layers.Conv2D(
        64,
        kernel_size=(5, 5),
        activation='relu',
        padding='same',
        use_bias=False)(x)
    x = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(x)
    x = tf.keras.layers.Dropout(0.25)(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(
        10,
        activation='softmax',
        use_bias=False)(x)

    # Create the functional model
    model = tf.keras.Model(inputs, x)
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

    # Loss function
    model.compile(
        loss='categorical_crossentropy', # loss function
        optimizer=optimizer,             # learning rule
        metrics=['accuracy']             # report accuracy
    )
    model.summary()

    return model
# Create model
model = create_cnn()

# Train model
history = model.fit(
    X_train, T_train,       # training data
    batch_size=128,          # batch size
    epochs=20,              # Maximum number of epochs
    validation_split=0.1,   # Percentage of training data used for validation
)

model.save("runs/cnn.keras")

# Test model
predictions_keras = model.predict(X_test, verbose=0)
test_loss, test_accuracy = model.evaluate(X_test, T_test, verbose=0)
print(f"Test accuracy: {test_accuracy}")
Model: "functional"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ input_layer (InputLayer)        │ (None, 28, 28, 1)      │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d (Conv2D)                 │ (None, 28, 28, 16)     │           400 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ max_pooling2d (MaxPooling2D)    │ (None, 14, 14, 16)     │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_1 (Conv2D)               │ (None, 14, 14, 64)     │        25,600 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ max_pooling2d_1 (MaxPooling2D)  │ (None, 7, 7, 64)       │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_2 (Conv2D)               │ (None, 7, 7, 64)       │       102,400 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ max_pooling2d_2 (MaxPooling2D)  │ (None, 3, 3, 64)       │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout (Dropout)               │ (None, 3, 3, 64)       │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ flatten (Flatten)               │ (None, 576)            │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense (Dense)                   │ (None, 10)             │         5,760 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
 Total params: 134,160 (524.06 KB)
 Trainable params: 134,160 (524.06 KB)
 Non-trainable params: 0 (0.00 B)

Epoch 1/20  - 422/422 - 20s 46ms/step - accuracy: 0.5509 - loss: 1.4493 - val_accuracy: 0.9063 - val_loss: 0.3440
Epoch 2/20  - 422/422 - 19s 45ms/step - accuracy: 0.9017 - loss: 0.3304 - val_accuracy: 0.9578 - val_loss: 0.1622
Epoch 3/20  - 422/422 - 19s 46ms/step - accuracy: 0.9361 - loss: 0.2141 - val_accuracy: 0.9685 - val_loss: 0.1189
Epoch 4/20  - 422/422 - 25s 60ms/step - accuracy: 0.9497 - loss: 0.1683 - val_accuracy: 0.9717 - val_loss: 0.1019
Epoch 5/20  - 422/422 - 26s 62ms/step - accuracy: 0.9571 - loss: 0.1434 - val_accuracy: 0.9752 - val_loss: 0.0902
Epoch 6/20  - 422/422 - 21s 50ms/step - accuracy: 0.9616 - loss: 0.1278 - val_accuracy: 0.9777 - val_loss: 0.0813
Epoch 7/20  - 422/422 - 25s 58ms/step - accuracy: 0.9653 - loss: 0.1156 - val_accuracy: 0.9792 - val_loss: 0.0772
Epoch 8/20  - 422/422 - 26s 62ms/step - accuracy: 0.9682 - loss: 0.1037 - val_accuracy: 0.9797 - val_loss: 0.0721
Epoch 9/20  - 422/422 - 26s 62ms/step - accuracy: 0.9699 - loss: 0.0979 - val_accuracy: 0.9805 - val_loss: 0.0670
Epoch 10/20 - 422/422 - 26s 63ms/step - accuracy: 0.9719 - loss: 0.0917 - val_accuracy: 0.9812 - val_loss: 0.0682
Epoch 11/20 - 422/422 - 27s 63ms/step - accuracy: 0.9736 - loss: 0.0852 - val_accuracy: 0.9817 - val_loss: 0.0650
Epoch 12/20 - 422/422 - 27s 64ms/step - accuracy: 0.9751 - loss: 0.0809 - val_accuracy: 0.9823 - val_loss: 0.0598
Epoch 13/20 - 422/422 - 27s 64ms/step - accuracy: 0.9755 - loss: 0.0777 - val_accuracy: 0.9827 - val_loss: 0.0571
Epoch 14/20 - 422/422 - 27s 63ms/step - accuracy: 0.9773 - loss: 0.0738 - val_accuracy: 0.9830 - val_loss: 0.0603
Epoch 15/20 - 422/422 - 29s 67ms/step - accuracy: 0.9783 - loss: 0.0712 - val_accuracy: 0.9835 - val_loss: 0.0549
Epoch 16/20 - 422/422 - 29s 68ms/step - accuracy: 0.9784 - loss: 0.0688 - val_accuracy: 0.9828 - val_loss: 0.0582
Epoch 17/20 - 422/422 - 28s 66ms/step - accuracy: 0.9794 - loss: 0.0662 - val_accuracy: 0.9852 - val_loss: 0.0512
Epoch 18/20 - 422/422 - 27s 64ms/step - accuracy: 0.9802 - loss: 0.0636 - val_accuracy: 0.9842 - val_loss: 0.0513
Epoch 19/20 - 422/422 - 27s 63ms/step - accuracy: 0.9816 - loss: 0.0595 - val_accuracy: 0.9852 - val_loss: 0.0513
Epoch 20/20 - 422/422 - 27s 63ms/step - accuracy: 0.9819 - loss: 0.0572 - val_accuracy: 0.9858 - val_loss: 0.0464

Test accuracy: 0.9868999719619751
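For reference, the accuracy reported by evaluate can be recomputed from the stored softmax predictions: the predicted class is the argmax of each output row. A sketch on hypothetical prediction rows (the trained model's outputs are not reproduced here):

```python
import numpy as np

def accuracy_from_softmax(predictions, labels):
    """Fraction of samples whose argmax over the class scores
    matches the integer label."""
    return float(np.mean(np.argmax(predictions, axis=1) == labels))

# Hypothetical 3-sample, 4-class softmax outputs:
preds = np.array([[0.1, 0.7, 0.1, 0.1],
                  [0.6, 0.2, 0.1, 0.1],
                  [0.2, 0.2, 0.5, 0.1]])
labels = np.array([1, 0, 3])   # the last sample is misclassified
acc = accuracy_from_softmax(preds, labels)
```

On the real data, `accuracy_from_softmax(predictions_keras, t_test)` would match the value returned by `model.evaluate`.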
plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.plot(history.history['loss'], '-r', label="Training")
plt.plot(history.history['val_loss'], '-b', label="Validation")
plt.xlabel('Epoch #')
plt.ylabel('Loss')
plt.legend()

plt.subplot(122)
plt.plot(history.history['accuracy'], '-r', label="Training")
plt.plot(history.history['val_accuracy'], '-b', label="Validation")
plt.xlabel('Epoch #')
plt.ylabel('Accuracy')
plt.legend()
plt.show()

Initialize the ANN-to-SNN converter

We now create an instance of the ANN-to-SNN conversion object: the inputs are encoded by intrinsically bursting (IB) neurons, the hidden layers use integrate-and-fire (IaF) neurons, and the prediction is read out from the spike counts of the output layer.

from ANNarchy.extensions.ann_to_snn_conversion import ANNtoSNNConverter

snn_converter = ANNtoSNNConverter(
    input_encoding='IB', 
    hidden_neuron='IaF',
    read_out='spike_count',
)
ANNarchy 5.0 (5.0.1) on linux (posix).
net = snn_converter.load_keras_model("runs/cnn.keras", show_info=True)
WARNING: Dense representation is an experimental feature for spiking models, we greatly appreciate bug reports. 
* Input layer: input_layer, (28, 28, 1)
* InputLayer skipped.
* Conv2D layer: conv2d, (28, 28, 16) 
* MaxPooling2D layer: max_pooling2d, (14, 14, 16) 
* Conv2D layer: conv2d_1, (14, 14, 64) 
* MaxPooling2D layer: max_pooling2d_1, (7, 7, 64) 
* Conv2D layer: conv2d_2, (7, 7, 64) 
* MaxPooling2D layer: max_pooling2d_2, (3, 3, 64) 
* Dropout skipped.
* Flatten skipped.
* Dense layer: dense, 10 
    weights: (10, 576)
    mean 0.00017201040463987738, std 0.0690472424030304
    min -0.22480590641498566, max 0.19217756390571594
predictions_snn = snn_converter.predict(X_test[:300], duration_per_sample=200)
100%|███████████████████████████████████████████████████████████████████████████████| 300/300 [09:49<00:00,  1.97s/it]
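With the 'spike_count' read-out, the predicted class for each sample is, as the name suggests, the output neuron that emitted the most spikes during the presentation window. A minimal sketch with hypothetical per-class spike counts (not the converter's internal code):

```python
import numpy as np

def spike_count_readout(spike_counts):
    """spike_counts: (n_samples, n_classes) array of spikes emitted by each
    output neuron during a sample presentation. Returns, per sample, the
    index of the most active output neuron."""
    return np.argmax(spike_counts, axis=1)

# Hypothetical counts for two samples over the 10-class output layer:
sample_counts = np.array([[0, 2, 31, 1, 0, 0, 3, 0, 1, 0],    # most spikes: class 2
                          [5, 0, 0, 0, 0, 0, 0, 28, 2, 0]])   # most spikes: class 7
predicted = spike_count_readout(sample_counts)
```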

Using the recorded predictions, we can now compute the accuracy over all presented samples with scikit-learn.

from sklearn.metrics import classification_report, accuracy_score

print(classification_report(t_test[:300], predictions_snn))
print("Test accuracy of the SNN:", accuracy_score(t_test[:300], predictions_snn))
              precision    recall  f1-score   support

           0       0.96      1.00      0.98        24
           1       1.00      1.00      1.00        41
           2       0.97      1.00      0.98        32
           3       1.00      1.00      1.00        24
           4       1.00      0.97      0.99        37
           5       1.00      1.00      1.00        29
           6       1.00      0.96      0.98        24
           7       1.00      1.00      1.00        34
           8       0.91      1.00      0.95        21
           9       1.00      0.94      0.97        34

    accuracy                           0.99       300
   macro avg       0.98      0.99      0.99       300
weighted avg       0.99      0.99      0.99       300

Test accuracy of the SNN: 0.9866666666666667
Copyright Julien Vitay, Helge Ülo Dinkelbach, Fred Hamker