Fully parallel write/read in resistive synaptic array for accelerating on-chip learning
- PMID: 26491032
- DOI: 10.1088/0957-4484/26/45/455204
Abstract
A neuro-inspired computing paradigm that moves beyond the von Neumann architecture is emerging; it exploits massive parallelism and targets complex tasks that involve intelligence and learning. The cross-point array architecture with synaptic devices has been proposed for on-chip implementation of the weighted sum and weight update operations in learning algorithms. In this work, forming-free, silicon-process-compatible Ta/TaOx/TiO2/Ti synaptic devices are fabricated, in which more than 200 conductance levels can be tuned continuously by identical programming pulses. To demonstrate the parallelism of the cross-point array architecture, a novel fully parallel write scheme is designed and experimentally demonstrated in a small-scale crossbar array; it accelerates the weight update in the training process at a speed that is independent of the array size. Compared with the conventional row-by-row write scheme, it achieves a projected >30× speed-up and >30× improvement in energy efficiency in a large-scale array. When realistic synaptic device characteristics such as device variations are taken into account in an array-level simulation, the proposed array architecture achieves ∼95% recognition accuracy on MNIST handwritten digits, close to the accuracy achieved in software by the ideal sparse coding algorithm.
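The two crossbar operations the abstract refers to can be illustrated with a minimal NumPy sketch (not the paper's code; the array size, conductance range, and pulse step below are illustrative assumptions). The weighted sum is a matrix-vector product computed in a single read cycle via Kirchhoff's current law, and a fully parallel write applies an outer-product weight update to every cell in one cycle, whereas a row-by-row write needs one cycle per row, which is where the projected >30× speed-up in large arrays comes from.

```python
# Minimal sketch of crossbar weighted sum and fully parallel weight update.
# All device parameters here are illustrative assumptions, not the paper's values.
import numpy as np

rows, cols = 32, 32                        # hypothetical array size
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, (rows, cols))  # cell conductances (S)

# Weighted sum: read voltages drive the rows; each column current is
# I_j = sum_i V_i * G_ij, so all dot products arrive in one read cycle.
V = rng.uniform(0.0, 0.5, rows)            # read voltages (V)
I = V @ G                                  # all column currents at once

# Weight update: a rank-1 update dW = eta * outer(x, delta) is programmed
# into every cell in a single parallel write cycle; a row-by-row scheme
# would need `rows` cycles, so the speed-up grows with the array size.
eta, g_step = 0.01, 1e-7                   # learning rate, conductance per pulse
x = rng.uniform(0.0, 1.0, rows)            # pre-synaptic activity
delta = rng.uniform(-1.0, 1.0, cols)       # error term
pulses = np.round(eta * np.outer(x, delta) / g_step)   # pulses per cell
G = np.clip(G + pulses * g_step, 1e-6, 1e-4)           # one parallel write
```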
Similar articles
- Mixed-Precision Deep Learning Based on Computational Memory. Front Neurosci. 2020;14:406. doi: 10.3389/fnins.2020.00406. PMID: 32477047.
- Neural Network Training Acceleration With RRAM-Based Hybrid Synapses. Front Neurosci. 2021;15:690418. doi: 10.3389/fnins.2021.690418. PMID: 34248492.
- Parallel weight update protocol for a carbon nanotube synaptic transistor array for accelerating neuromorphic computing. Nanoscale. 2020;12(3):2040-2046. doi: 10.1039/c9nr08979a. PMID: 31912838.
- Synaptic dynamics: linear model and adaptation algorithm. Neural Netw. 2014;56:49-68. doi: 10.1016/j.neunet.2014.04.001. PMID: 24867390.
- Memristive crossbar arrays for brain-inspired computing. Nat Mater. 2019;18(4):309-323. doi: 10.1038/s41563-019-0291-x. PMID: 30894760.
Cited by
- Solving matrix equations in one step with cross-point resistive arrays. Proc Natl Acad Sci U S A. 2019;116(10):4123-4128. doi: 10.1073/pnas.1815682116. PMID: 30782810.
- Retention-aware zero-shifting technique for Tiki-Taka algorithm-based analog deep learning accelerator. Sci Adv. 2024;10(24):eadl3350. doi: 10.1126/sciadv.adl3350. PMID: 38875324.
- Large-Scale Neuromorphic Spiking Array Processors: A Quest to Mimic the Brain. Front Neurosci. 2018;12:891. doi: 10.3389/fnins.2018.00891. PMID: 30559644.
- Neural sampling machine with stochastic synapse allows brain-like learning and inference. Nat Commun. 2022;13(1):2571. doi: 10.1038/s41467-022-30305-8. PMID: 35546144.
- Hardware implementation of backpropagation using progressive gradient descent for in situ training of multilayer neural networks. Sci Adv. 2024;10(28):eado8999. doi: 10.1126/sciadv.ado8999. PMID: 38996020.