Front Neurosci. 2023 May 23;17:1091097. doi: 10.3389/fnins.2023.1091097. eCollection 2023.

VTSNN: a virtual temporal spiking neural network


Xue-Rui Qiu et al. Front Neurosci. 2023.

Abstract

Spiking neural networks (SNNs) have recently demonstrated outstanding performance in a variety of high-level tasks, such as image classification. However, advances in low-level tasks, such as image reconstruction, are rare. This may be due to the lack of promising image-encoding techniques and corresponding neuromorphic devices designed specifically for SNN-based low-level vision problems. This paper begins by proposing a simple yet effective undistorted weighted-encoding-decoding technique, which primarily consists of Undistorted Weighted-Encoding (UWE) and Undistorted Weighted-Decoding (UWD). The former converts a gray image into spike sequences for effective SNN learning, while the latter converts spike sequences back into images. We then design a new SNN training strategy, known as Independent-Temporal Backpropagation (ITBP), to avoid complex loss propagation across the spatial and temporal dimensions; experiments show that ITBP is superior to Spatio-Temporal Backpropagation (STBP). Finally, a so-called Virtual Temporal SNN (VTSNN) is formulated by incorporating the above approaches into a U-net architecture, fully exploiting its potent multiscale representation capability. Experimental results on several commonly used datasets, such as MNIST, F-MNIST, and CIFAR10, demonstrate that the proposed method achieves noise-removal performance superior to existing work. Compared to an ANN with the same architecture, VTSNN has a greater chance of achieving superiority while consuming ~1/274 of the energy. Moreover, with the given encoding-decoding strategy, a simple neuromorphic circuit can easily be constructed to maximize this low-carbon strategy.
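As described above, UWE maps each 8-bit gray pixel to an eight-step binary spike sequence (one bit plane per time step) and UWD inverts it exactly, which is what makes the pair "undistorted." A minimal sketch of this round trip, assuming MSB-first bit ordering and NumPy conventions (function names are illustrative, not from the paper's code):

```python
import numpy as np

def uwe(img_u8, bits=8):
    """Undistorted Weighted-Encoding (sketch): split each 8-bit pixel
    into a binary spike train, one bit plane per time step,
    most significant bit first."""
    planes = [(img_u8 >> b) & 1 for b in range(bits - 1, -1, -1)]
    return np.stack(planes).astype(np.float32)  # shape: (T, H, W)

def uwd(spikes, bits=8):
    """Undistorted Weighted-Decoding (sketch): recombine the bit planes
    with weights 2^b, inverting uwe() exactly."""
    weights = 2 ** np.arange(bits - 1, -1, -1)
    return np.tensordot(weights, spikes, axes=1).astype(np.uint8)

img = np.array([[200, 17], [0, 255]], dtype=np.uint8)
enc = uwe(img)                              # (8, 2, 2) binary spike tensor
dec = uwd(enc)                              # lossless reconstruction
```

The lossless round trip is the key property: unlike rate coding, no quantization error is introduced by the encoding itself.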

Keywords: Independent-Temporal Backpropagation; biologically-inspired artificial intelligence; neuromorphic circuits; spiking neural networks; undistorted weighted-encoding/decoding.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1
Spiking neuron model (Eshraghian et al., 2021). (A) Intracellular and extracellular mediums are divided by an isolating bilipid membrane. Gated ion channels allow ions such as Na+ to diffuse through the membrane. (B) Capacitive membrane and resistive ion channels constitute a resistor-capacitance circuit. A spike is generated when the membrane potential exceeds a threshold Vth. (C) Via the dendritic tree, input spikes generated by I are transmitted to the neuron body. Sufficient excitation will cause output spike emission. (D) Simulation depicting the membrane potential V(t) reaching Vth, resulting in output spikes.
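The threshold-and-fire dynamics in panels (B) and (D) can be sketched in a few lines. This is a generic discrete-time leaky integrate-and-fire update; the decay factor beta and the soft reset are common modeling choices, not details specified in this caption:

```python
def lif_step(v, i_in, beta=0.9, v_th=1.0):
    """One discrete-time step of a leaky integrate-and-fire neuron
    (sketch): leaky integration of the input current, a spike when
    the membrane potential crosses the threshold Vth, then a soft
    reset by subtracting the threshold."""
    v = beta * v + i_in          # leaky integration (RC-circuit analogue)
    spike = float(v >= v_th)     # fire when potential reaches Vth
    v = v - spike * v_th         # soft reset after a spike
    return spike, v

# Drive the neuron with a constant input current for 10 steps.
v, spikes = 0.0, []
for t in range(10):
    s, v = lif_step(v, i_in=0.4)
    spikes.append(s)
```

With this constant drive the membrane charges over a few steps, fires, resets, and repeats, reproducing the periodic spiking shown in panel (D).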
Figure 2
A toy example of our encoding. Here we demonstrate UWE with nine pixels as an example. For each pixel, the grayscale value is converted into an eight-bit spike sequence, with each bit represented by one time step.
Figure 3
Architecture of the proposed fully spiking neural network, using eight bits as an example. UWE generates sequences from an input image, and the sequences are fed into U-VTSNN. UWD then generates images from the operated sequences, completing the noise-removal process. The type and size of each layer are shown above.
Figure 4
The procedure of STBP and ITBP. For STBP, the operated sequence {ô7, ô6, ⋯, ô0} (denoted ôSeq) is transformed into ŷ via UWD, and the MSE between y and ŷ is calculated. For ITBP, y is transformed into the input sequence {o7, o6, ⋯, o0} (denoted oSeq) by UWE; the weighted MSE between ôSeq and oSeq is then calculated via Equation (11), where ôSeq is the operated sequence ready to be decoded.
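The two loss paths can be contrasted in code. The sketch below assumes the Equation (11) weights follow the bit significance 2^t (the exact weighting is defined in the paper, not in this caption) and uses plain NumPy:

```python
import numpy as np

T = 8
w = 2.0 ** np.arange(T - 1, -1, -1)  # bit weights {2^7, ..., 2^0}

def uwd(seq):
    # Decode a (T, ...) spike sequence back to pixel values.
    return np.tensordot(w, seq, axes=1)

def stbp_loss(o_hat_seq, y):
    # STBP-style: decode first, then MSE against the target image y,
    # so the loss couples all time steps through the decoder.
    return np.mean((uwd(o_hat_seq) - y) ** 2)

def itbp_loss(o_hat_seq, o_seq):
    # ITBP-style (sketch of Equation (11), weighting assumed to follow
    # bit significance): per-time-step MSE against the encoded target
    # sequence, keeping the loss independent across time steps.
    axes = tuple(range(1, o_hat_seq.ndim))
    per_step = np.mean((o_hat_seq - o_seq) ** 2, axis=axes)
    return np.sum(w * per_step) / np.sum(w)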
Figure 5
A digit from the MNIST dataset is reconstructed at different noise levels by the proposed VTSNN, incorporated into the commonly used U-net architecture with IF neurons.
Figure 6
Results of the classification task on the MNIST dataset at different noise factors. For any T, accuracy decreases as the noise level rises; however, even the worst case (T = 2, η = 0.8) achieves a quite good result (85.2%), and the best case (T = 8, η = 0.0) performs quite competitively (99.2%).
Figure 7
Performance of neurons in the final layer under various Vth conditions, with MSE as the evaluation metric. The circled and enlarged region illustrates the complexity of the performance around a specific Vth value (here, Vth = 0.1).
Figure 8
Neuromorphic decoding circuits. We use this simple neuromorphic DAC to realize our UWD. If a switch is on, the corresponding branch outputs 1; otherwise it outputs 0. This mechanism activates the spikes, and the series resistors transfer the real pixel value.
Figure 9
Neuromorphic encoding circuits. We use this simple neuromorphic SAR ADC to realize our UWE. Each real pixel value is converted into a pixel spike sequence.
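In software, the successive-approximation loop that such a SAR ADC performs in hardware looks as follows (a sketch; the comparator and reference-voltage details are abstracted away):

```python
def sar_encode(value, bits=8):
    """Software sketch of a SAR ADC's successive-approximation loop:
    test bits from MSB to LSB, keeping a bit whenever the accumulated
    level stays at or below the input value."""
    level, spikes = 0, []
    for b in range(bits - 1, -1, -1):
        trial = level + (1 << b)   # tentatively set the next bit
        keep = trial <= value      # comparator decision
        spikes.append(int(keep))
        if keep:
            level = trial
    return spikes                  # MSB-first bit/spike sequence
```

The resulting MSB-first bit sequence is exactly the spike sequence UWE assigns to that pixel, which is why a SAR ADC is a natural hardware realization of the encoder.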

