Front Neurosci. 2019 Jul 23;13:754.
doi: 10.3389/fnins.2019.00754. eCollection 2019.

Sparse Coding Using the Locally Competitive Algorithm on the TrueNorth Neurosynaptic System


Kaitlin L Fair et al. Front Neurosci. 2019.

Abstract

The Locally Competitive Algorithm (LCA) is a biologically plausible computational architecture for sparse coding, in which a signal is represented as a linear combination of elements from an over-complete dictionary. In this paper we map the LCA onto the brain-inspired IBM TrueNorth Neurosynaptic System. We discuss data structures and representation as well as the architecture of functional processing units that perform non-linear thresholding and vector-matrix multiplication. We also present the design of the micro-architectural units that facilitate the implementation of dynamics-based iterative algorithms. Experimental results with the LCA using the limited-precision, fixed-point arithmetic on TrueNorth compare favorably with results using floating-point computations on a general-purpose computer. The scaling of the LCA within the constraints of the TrueNorth is also discussed.
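The sparse-coding dynamics summarized above can be sketched in a few lines. This is a minimal pure-Python illustration of discrete-time LCA node updates with a soft-threshold non-linearity; the dictionary, signal, and parameter values are illustrative assumptions, not taken from the paper, which realizes the same dynamics with spiking neurons and fixed-point arithmetic on TrueNorth.

```python
# Minimal discrete-time LCA sketch (illustrative; parameters are assumptions).

def dot(a, b):
    return sum(x * z for x, z in zip(a, b))

def soft_threshold(u, lam):
    # Non-linear threshold: a node is active only when |u| exceeds lambda.
    if u > lam:
        return u - lam
    if u < -lam:
        return u + lam
    return 0.0

def lca(Phi, y, tau=10.0, lam=0.1, iters=200):
    """Phi: list of unit-norm dictionary atoms; y: input signal vector."""
    m = len(Phi)
    b = [dot(phi, y) for phi in Phi]   # driving input, Phi^T y
    u = [0.0] * m                      # internal node states
    for _ in range(iters):
        a = [soft_threshold(ui, lam) for ui in u]
        for k in range(m):
            # Lateral inhibition from every other active node.
            inhib = sum(dot(Phi[k], Phi[j]) * a[j] for j in range(m) if j != k)
            u[k] += (b[k] - u[k] - inhib) / tau
    return [soft_threshold(ui, lam) for ui in u]
```

With an orthonormal two-atom dictionary and an input equal to the first atom, the iteration converges to a single active coefficient of b − λ while the competing node stays silent, which is the sparse-coding behavior the LCA is designed to produce.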

Keywords: TrueNorth; brain-inspired; sparse-approximation; sparse-code; sparsity; spiking-neurons.


Figures

Figure 1. The corelet used to implement the LCA on the TrueNorth using our novel design methodology.
Figure 2. The summation computation is accurate for every iteration so long as values do not exceed the window size, w = 10.
Figure 3. The left neuron is the positive representation of a variable and the right neuron the negative representation of the same variable.
Figure 4. A core with repeated axons and neurons to accommodate positive and negative inputs and outputs.
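Figures 3 and 4 describe representing a signed variable as a pair of non-negative channels, since spike counts cannot be negative. A hypothetical sketch of that encoding (the function names are ours, not the paper's):

```python
# Hypothetical sketch of the signed-value encoding suggested by Figures 3-4:
# one non-negative "positive" channel and one non-negative "negative" channel.

def encode_signed(v):
    """Return the (positive, negative) channel pair for a signed value v."""
    return (max(v, 0), max(-v, 0))

def decode_signed(pos, neg):
    """Recover the signed value from the channel pair."""
    return pos - neg
```

At most one channel of the pair is nonzero at a time, so the core in Figure 4 doubles its axons and neurons to carry both channels.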
Figure 5. Restricted precision for vector-matrix multiplication if implemented using synaptic weights directly on the TrueNorth chip.
Figure 6. All layers of our vector-matrix multiply overlaid onto one crossbar array, representing one multiplication matrix element with a binary value of 146. Only the positive representations of inputs and outputs are shown.
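Figure 6 suggests spreading a multi-bit weight such as 146 (10010010 in binary) across crossbar layers, one per bit, with each layer's contribution scaled by its power of two. A hypothetical sketch of that binary decomposition (the helper names are ours):

```python
# Hypothetical sketch of the layered binary weighting suggested by Figure 6:
# TrueNorth synapses are low precision, so a multi-bit weight is split into
# per-bit layers, each contributing its bit scaled by a power of two.

def to_bit_layers(weight, n_bits=8):
    """Split a weight into per-bit layers, least-significant bit first."""
    return [(weight >> k) & 1 for k in range(n_bits)]

def from_bit_layers(layers):
    """Recombine per-bit layers into the original weight."""
    return sum(bit << k for k, bit in enumerate(layers))
```

For 146 the layers are [0, 1, 0, 0, 1, 0, 0, 1] (bits for 2, 16, and 128), so only three of the eight overlaid layers carry a nonzero synapse for this matrix element.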
Figure 7. Recurrence resulting in incorrect computations of the node state update.
Figure 8. Two subsequent LCA iterations to accommodate recurrence in the system. Triggers enable one path to calculate τ2u[n + 1] over w ticks while the other path sends the prior iteration's values τ2u[n] for use in the calculation.
Figure 9. Programmed core that produces inhibition triggers for the first blocks in both paths of the on-chip dynamic memory processing unit.
Figure 10. Programmed core that sends send and compute triggers to the second blocks in both paths of the on-chip dynamic memory processing unit.
Figure 11. The initial projection repeated on-chip using principles from our on-chip memory corelet.
Figure 12. The output spikes from the LCA corelet on the TrueNorth chip, representing positive and negative representations of τ2u.
Figure 13. The node dynamics of an LCA system with a 33 × 50 dictionary compared to a discrete LCA system. Input signals are y = 14 × Φ16 − 13 × Φ36 and parameters are τ = 13 and λ = 7.

