Front Neurosci. 2025 May 30;19:1551143. doi: 10.3389/fnins.2025.1551143. eCollection 2025.

SpyKing: Privacy-preserving framework for Spiking Neural Networks

Farzad Nikfam et al.

Abstract

Artificial intelligence (AI) models, frequently built using deep neural networks (DNNs), have become integral to many aspects of modern life. However, the vast amount of data they process is not always secure, posing potential risks to privacy and safety. Fully Homomorphic Encryption (FHE) enables computations on encrypted data while preserving its confidentiality, making it a promising approach for privacy-preserving AI. This study evaluates the performance of FHE when applied to DNNs and compares it with Spiking Neural Networks (SNNs), which more closely resemble biological neurons and, under certain conditions, may achieve superior results. Using the SpyKing framework, we analyze key challenges in encrypted neural computations, particularly the limitations of FHE in handling non-linear operations. To ensure a comprehensive evaluation, we conducted experiments on the MNIST, FashionMNIST, and CIFAR10 datasets while systematically varying encryption parameters to optimize SNN performance. Our results show that FHE significantly increases computational costs but remains viable in terms of accuracy and data security. Furthermore, SNNs achieved up to 35% higher absolute accuracy than DNNs on encrypted data with low values of the plaintext modulus t. These findings highlight the potential of SNNs in privacy-preserving AI and underscore the growing need for secure yet efficient neural computing solutions.
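As a rough illustration of the constraint mentioned in the abstract, the minimal sketch below uses the Pyfhel BFV bindings (an assumption: the abstract does not name the FHE library, and the parameter names shown are Pyfhel's, not necessarily the paper's) to run the linear part of a layer on encrypted data and to show why a ReLU or spiking threshold has to be applied outside the encrypted domain.

```python
# Minimal sketch, assuming the Pyfhel BFV bindings; illustrative only.
import numpy as np
from Pyfhel import Pyfhel

HE = Pyfhel()
HE.contextGen(scheme='bfv', n=2**13, t_bits=20)  # n: polynomial modulus degree, t: plaintext modulus
HE.keyGen()

x = np.array([3, -1, 4, -2], dtype=np.int64)   # quantized client data
w = np.array([2,  5, -3,  1], dtype=np.int64)  # server-side weights

ctx_x = HE.encryptInt(x)       # the client encrypts its data
ptx_w = HE.encodeInt(w)        # weights stay in plaintext on the server

ctx_lin = ctx_x * ptx_w        # homomorphic multiply: the linear part of a layer
ctx_lin += ctx_x               # homomorphic add (e.g., a bias or skip term)

# A comparison such as ReLU or a spiking threshold is not a polynomial the
# server can evaluate under BFV, so the ciphertext goes back to the client,
# which decrypts, applies the non-linearity, and re-encrypts.
y = HE.decryptInt(ctx_lin)[:4]
print(y, np.maximum(y, 0))
```

This round-trip between client and server for every non-linear activation is the overhead illustrated later in Figure 15.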

Keywords: Deep Neural Network (DNN); Homomorphic Encryption (HE); LeNet5; Spiking Neural Network (SNN); machine learning; privacy-preserving; safety; security.

Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1
A summary flowchart of the SpyKing research project.
Figure 2
An example of a spiking neuron that activates only after receiving enough charge to surpass its threshold, then undergoes a refractory period before returning to the resting state (a toy sketch of this behavior follows the figure list).
Figure 3
An HE scheme with a clear separation between client and server, where the plaintext data and results are visible only to the client.
Figure 4
LeNet5 model with each layer and matrix size for FashionMNIST and MNIST training.
Figure 5
LeNet5 model with each layer and matrix size for CIFAR10 training.
Figure 6
In the Spiking-LeNet5, neurons fire stochastically across the temporal sequence (seqlength), so each time step captures only a portion of the total image; here, the Ankle Boot (label 9 in the FashionMNIST dataset).
Figure 7
On the left, the original Ankle Boot image (label 9 in the FashionMNIST dataset); on the right, the sum over the temporal sequence (seqlength) of Figure 6.
Figure 8
SpyKing experimental setup.
Figure 9
Accuracy and loss across training epochs during training and validation of LeNet5 and Spiking-LeNet5 on the FashionMNIST dataset.
Figure 10
Comparison matrix for t and m variation on the FashionMNIST dataset with the encrypted LeNet5 and Spiking-LeNet5 models.
Figure 11
FashionMNIST accuracy on encrypted LeNet5 for t variation with m set to 1,024.
Figure 12
FashionMNIST accuracy on encrypted Spiking-LeNet5 for t variation with m set to 1,024.
Figure 13
Comparison of FashionMNIST accuracy between the plaintext and encrypted versions of LeNet5 and Spiking-LeNet5 across t variations, counting only samples that both the plaintext and encrypted versions classified correctly.
Figure 14
Comparison of FashionMNIST accuracy between the plaintext and encrypted versions of LeNet5 and Spiking-LeNet5 across t variations, counting samples where the plaintext and encrypted versions agree, whether the classification is correct or incorrect.
Figure 15
Inside LeNet5, the data must be decrypted and re-encrypted four times because the ReLU activation function is not a linear operation.
Figure 16
Qualitative variation of the noise budget (NB) across the layers during the process.
Figure 17
Qualitative variation of the noise budget (NB) for each value of t.
Figure 18
Plaintext confusion matrix for FashionMNIST. The Shirt class is the one that misleads the model the most.
Figure 19
Encrypted confusion matrix for FashionMNIST with t and m variation. For low values of t, the results tend to concentrate on the labels that most resemble each other; Spiking-LeNet5 is less random than LeNet5 for low values of t.
Figure 20
Layer-by-layer errors with FashionMNIST and m = 1,024. The top red strip represents the errors in the layers of LeNet5, the blue strip in the middle represents the errors in the layers of Spiking-LeNet5, and the bottom strip represents the difference between the two. The bottom strip is predominantly red, indicating that Spiking-LeNet5 generally performs better.
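To complement the captions of Figures 2, 6, and 7, the toy sketch below shows a leaky integrate-and-fire neuron that fires only when its accumulated charge crosses a threshold, pauses for a refractory period, and whose spikes, summed over the sequence, yield a rate-coded value. It is illustrative only: the threshold, leak, refractory, and seqlength values are assumptions, not taken from the paper's code.

```python
# Toy leaky integrate-and-fire neuron; parameters are illustrative assumptions.
import numpy as np

def lif_neuron(inputs, threshold=1.0, leak=0.9, refractory=2):
    """Return the spike train of a single leaky integrate-and-fire neuron."""
    v, wait, spikes = 0.0, 0, []
    for i in inputs:
        if wait > 0:                       # refractory period after a spike
            wait -= 1
            spikes.append(0)
            continue
        v = leak * v + i                   # integrate the incoming charge with leakage
        if v >= threshold:                 # fire once the threshold is surpassed
            spikes.append(1)
            v, wait = 0.0, refractory      # reset to rest, then wait
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
seqlength = 20
train = lif_neuron(rng.uniform(0.0, 0.6, seqlength))
# Summing the spikes over the sequence gives a rate-coded value, analogous to
# summing the partial images of Figure 6 to recover the full input of Figure 7.
print(train, train.sum())
```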
