#!pip install ANNarchy
ANN-to-SNN conversion - MLP
This notebook demonstrates how to transform a fully-connected neural network trained with tensorflow/keras into a spiking neural network (SNN) usable in ANNarchy.
The methods are adapted from the original models used in:
Diehl et al. (2015) “Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing” Proceedings of IJCNN. doi: 10.1109/IJCNN.2015.7280696
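The core idea behind the conversion (Diehl et al., 2015) is that a non-leaky integrate-and-fire neuron driven by a constant input fires at a rate proportional to that input, i.e. it approximates a ReLU unit. The following is only a minimal sketch of this principle; the threshold, duration and reset-by-subtraction scheme are illustrative assumptions, not ANNarchy's exact implementation:

```python
def iaf_rate(input_current, threshold=1.0, duration=200, dt=1.0):
    """Firing rate of a non-leaky integrate-and-fire neuron
    receiving a constant input current (hypothetical helper)."""
    v = 0.0
    spikes = 0
    steps = int(duration / dt)
    for _ in range(steps):
        v += input_current * dt
        if v >= threshold:
            spikes += 1
            v -= threshold  # reset by subtraction keeps the rate linear
    return spikes / steps

# The rate matches ReLU for inputs in [0, 1]: negative inputs never
# reach the threshold, positive inputs spike proportionally often.
for x in [-0.5, 0.0, 0.25, 0.5]:
    print(f"input {x:+.2f} -> rate {iaf_rate(x):.2f}, relu {max(x, 0.0):.2f}")
```

This linearity is why weight and threshold balancing suffices to transfer a ReLU network into the spiking domain.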
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
print(f"Tensorflow {tf.__version__}")
Tensorflow 2.17.0
First we need to download and process the MNIST dataset provided by tensorflow.
# Download data
(X_train, t_train), (X_test, t_test) = tf.keras.datasets.mnist.load_data()
# Normalize inputs
X_train = X_train.reshape(X_train.shape[0], 784).astype('float32') / 255.
X_test = X_test.reshape(X_test.shape[0], 784).astype('float32') / 255.
# One-hot output vectors
T_train = tf.keras.utils.to_categorical(t_train, 10)
T_test = tf.keras.utils.to_categorical(t_test, 10)
Training an ANN in tensorflow/keras
The tensorflow.keras network is built using the functional API.
The fully-connected network has two hidden layers of 128 neurons with ReLU activation, no bias, dropout at 0.5, and a softmax output layer with 10 neurons. We use the standard SGD optimizer and the categorical cross-entropy loss for classification.
def create_mlp():
    # Model
    inputs = tf.keras.layers.Input(shape=(784,))
    x = tf.keras.layers.Dense(128, use_bias=False, activation='relu')(inputs)
    x = tf.keras.layers.Dropout(0.5)(x)
    x = tf.keras.layers.Dense(128, use_bias=False, activation='relu')(x)
    x = tf.keras.layers.Dropout(0.5)(x)
    x = tf.keras.layers.Dense(10, use_bias=False, activation='softmax')(x)
    model = tf.keras.Model(inputs, x)
    # Optimizer
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.05)
    # Loss function
    model.compile(
        loss='categorical_crossentropy', # loss function
        optimizer=optimizer,             # learning rule
        metrics=['accuracy']             # show accuracy
    )
    print(model.summary())
    return model
We can now train the network and save the model in the native Keras format.
# Create model
model = create_mlp()
# Train model
history = model.fit(
X_train, T_train, # training data
batch_size=128, # batch size
epochs=20, # Maximum number of epochs
validation_split=0.1, # Percentage of training data used for validation
)
model.save("runs/mlp.keras")
# Test model
predictions_keras = model.predict(X_test, verbose=0)
test_loss, test_accuracy = model.evaluate(X_test, T_test, verbose=0)
print(f"Test accuracy: {test_accuracy}")
Model: "functional"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ input_layer (InputLayer)        │ (None, 784)            │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense (Dense)                   │ (None, 128)            │       100,352 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout (Dropout)               │ (None, 128)            │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_1 (Dense)                 │ (None, 128)            │        16,384 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout_1 (Dropout)             │ (None, 128)            │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_2 (Dense)                 │ (None, 10)             │         1,280 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 118,016 (461.00 KB)
Trainable params: 118,016 (461.00 KB)
Non-trainable params: 0 (0.00 B)
None
Epoch 1/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.6402 - loss: 1.0904 - val_accuracy: 0.9143 - val_loss: 0.3395
Epoch 2/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.8281 - loss: 0.5663 - val_accuracy: 0.9303 - val_loss: 0.2418
Epoch 3/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.8634 - loss: 0.4654 - val_accuracy: 0.9392 - val_loss: 0.2018
Epoch 4/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.8821 - loss: 0.4019 - val_accuracy: 0.9475 - val_loss: 0.1750
Epoch 5/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.8938 - loss: 0.3652 - val_accuracy: 0.9547 - val_loss: 0.1636
Epoch 6/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9008 - loss: 0.3387 - val_accuracy: 0.9587 - val_loss: 0.1469
Epoch 7/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9082 - loss: 0.3148 - val_accuracy: 0.9623 - val_loss: 0.1364
Epoch 8/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9147 - loss: 0.2970 - val_accuracy: 0.9627 - val_loss: 0.1321
Epoch 9/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9175 - loss: 0.2847 - val_accuracy: 0.9660 - val_loss: 0.1246
Epoch 10/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9234 - loss: 0.2678 - val_accuracy: 0.9670 - val_loss: 0.1202
Epoch 11/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9252 - loss: 0.2592 - val_accuracy: 0.9665 - val_loss: 0.1120
Epoch 12/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9282 - loss: 0.2477 - val_accuracy: 0.9697 - val_loss: 0.1086
Epoch 13/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9291 - loss: 0.2419 - val_accuracy: 0.9713 - val_loss: 0.1038
Epoch 14/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9327 - loss: 0.2330 - val_accuracy: 0.9707 - val_loss: 0.1029
Epoch 15/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9344 - loss: 0.2289 - val_accuracy: 0.9728 - val_loss: 0.0986
Epoch 16/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9369 - loss: 0.2199 - val_accuracy: 0.9732 - val_loss: 0.0965
Epoch 17/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9387 - loss: 0.2152 - val_accuracy: 0.9727 - val_loss: 0.0948
Epoch 18/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9392 - loss: 0.2116 - val_accuracy: 0.9737 - val_loss: 0.0918
Epoch 19/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9401 - loss: 0.2084 - val_accuracy: 0.9738 - val_loss: 0.0937
Epoch 20/20
422/422 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9409 - loss: 0.2031 - val_accuracy: 0.9752 - val_loss: 0.0914
Test accuracy: 0.967199981212616
plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.plot(history.history['loss'], '-r', label="Training")
plt.plot(history.history['val_loss'], '-b', label="Validation")
plt.xlabel('Epoch #')
plt.ylabel('Loss')
plt.legend()
plt.subplot(122)
plt.plot(history.history['accuracy'], '-r', label="Training")
plt.plot(history.history['val_accuracy'], '-b', label="Validation")
plt.xlabel('Epoch #')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
Initialize the ANN-to-SNN converter
We first create an instance of the ANN-to-SNN converter. Its input_encoding parameter selects the type of input encoding to use.
The available encodings are intrinsically bursting (IB), phase shift oscillation (PSO) and Poisson (poisson).
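To illustrate the simplest of these encodings: with Poisson coding, each pixel spikes stochastically at each time step with a probability proportional to its intensity. The sketch below shows the principle only; the function name and parameters are hypothetical and do not reflect ANNarchy's internal implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def poisson_encode(image, duration=200, max_rate=1.0):
    """Turn pixel intensities in [0, 1] into a spike train:
    at each time step, a pixel spikes with probability
    proportional to its intensity (hypothetical helper)."""
    p = np.clip(image * max_rate, 0.0, 1.0)
    # Boolean spike matrix of shape (duration, n_pixels)
    return rng.random((duration, p.size)) < p

# A bright pixel (0.9) spikes far more often than a dim one (0.1)
image = np.array([0.1, 0.9])
spikes = poisson_encode(image)
print(spikes.mean(axis=0))  # empirical rates, roughly [0.1, 0.9]
```

The downstream spiking layers then integrate these stochastic spike trains instead of the analog pixel values.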
from ANNarchy.extensions.ann_to_snn_conversion import ANNtoSNNConverter
snn_converter = ANNtoSNNConverter(
input_encoding='IB',
hidden_neuron='IaF',
read_out='spike_count',
)
ANNarchy 5.0 (5.0.1) on linux (posix).
After that, we provide the TensorFlow model stored as a .keras file to the conversion tool. The print-out of the structure of the imported network can be suppressed by passing show_info=False to load_keras_model.
net = snn_converter.load_keras_model("runs/mlp.keras", show_info=True)
WARNING: Dense representation is an experimental feature for spiking models, we greatly appreciate bug reports.
* Input layer: input_layer, (784,)
* InputLayer skipped.
* Dense layer: dense, 128
weights: (128, 784)
mean -0.0035419047344475985, std 0.052765462547540665
min -0.3936513364315033, max 0.19855806231498718
* Dropout skipped.
* Dense layer: dense_1, 128
weights: (128, 128)
mean 0.004353927448391914, std 0.10169931501150131
min -0.2755189538002014, max 0.3772040009498596
* Dropout skipped.
* Dense layer: dense_2, 10
weights: (10, 128)
mean -0.0013658732641488314, std 0.21586911380290985
min -0.5502126216888428, max 0.4490526020526886
When the network has been built successfully, we can perform a test using all MNIST test samples. duration_per_sample specifies the duration simulated for each image; here, 200 ms seems to be enough.
predictions_snn = snn_converter.predict(X_test, duration_per_sample=200)
100%|██████████████████████████████████████████████████████████████████████████| 10000/10000 [00:25<00:00, 393.25it/s]
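With the 'spike_count' read-out chosen above, the predicted class of a sample is simply the output neuron that emitted the most spikes during the simulation. A toy sketch with hypothetical spike counts:

```python
import numpy as np

# Hypothetical spike counts of the 10 output neurons for 3 test images
# (rows: samples, columns: digit classes)
spike_counts = np.array([
    [ 2,  1, 40,  3,  0,  1,  2,  0,  5,  1],   # most spikes for class 2
    [ 0, 55,  1,  2,  1,  0,  0,  3,  1,  0],   # most spikes for class 1
    [ 1,  0,  2,  1,  3,  2,  1, 48,  2,  6],   # most spikes for class 7
])

# 'spike_count' read-out: predicted class = neuron with the highest count
predictions = spike_counts.argmax(axis=1)
print(predictions)  # [2 1 7]
```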
Using the recorded predictions, we can now compute the accuracy using scikit-learn for all presented samples.
from sklearn.metrics import classification_report, accuracy_score
print(classification_report(t_test, predictions_snn))
print("Test accuracy of the SNN:", accuracy_score(t_test, predictions_snn))
              precision    recall  f1-score   support
0 0.97 0.99 0.98 980
1 0.98 0.98 0.98 1135
2 0.96 0.96 0.96 1032
3 0.95 0.95 0.95 1010
4 0.97 0.95 0.96 982
5 0.95 0.96 0.95 892
6 0.96 0.97 0.97 958
7 0.97 0.96 0.96 1028
8 0.96 0.95 0.95 974
9 0.95 0.95 0.95 1009
accuracy 0.96 10000
macro avg 0.96 0.96 0.96 10000
weighted avg 0.96 0.96 0.96 10000
Test accuracy of the SNN: 0.9631
For comparison, here is the performance of the original ANN in keras:
print(classification_report(t_test, predictions_keras.argmax(axis=1)))
print("Test accuracy of the ANN:", accuracy_score(t_test, predictions_keras.argmax(axis=1)))
              precision    recall  f1-score   support
0 0.97 0.99 0.98 980
1 0.98 0.99 0.98 1135
2 0.97 0.96 0.96 1032
3 0.96 0.97 0.96 1010
4 0.97 0.96 0.96 982
5 0.96 0.96 0.96 892
6 0.97 0.97 0.97 958
7 0.97 0.96 0.97 1028
8 0.97 0.95 0.96 974
9 0.97 0.95 0.96 1009
accuracy 0.97 10000
macro avg 0.97 0.97 0.97 10000
weighted avg 0.97 0.97 0.97 10000
Test accuracy of the ANN: 0.9672