Review

Memory devices and applications for in-memory computing

Abu Sebastian et al. Nat Nanotechnol. 2020 Jul;15(7):529-544. doi: 10.1038/s41565-020-0655-z. Epub 2020 Mar 30.

Abstract

Traditional von Neumann computing systems involve separate processing and memory units. However, moving data between them is costly in terms of time and energy, and this problem is aggravated by the recent explosive growth in highly data-centric applications related to artificial intelligence. This calls for a radical departure from traditional systems, and one such non-von Neumann computational approach is in-memory computing, in which certain computational tasks are performed in place in the memory itself by exploiting the physical attributes of the memory devices. Both charge-based and resistance-based memory devices are being explored for in-memory computing. In this Review, we provide a broad overview of the key computational primitives enabled by these memory devices, as well as their applications spanning scientific computing, signal processing, optimization, machine learning, deep learning and stochastic computing.
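
As a concrete illustration of the kind of computational primitive such devices enable, the sketch below models an analog matrix-vector multiplication in a resistive crossbar, where Ohm's law and Kirchhoff's current law perform the multiply-accumulate in place. This is a minimal, idealized Python model for intuition only; the function name, the Gaussian conductance-noise term and the example values are illustrative assumptions, not details taken from the Review.

```python
import numpy as np

def crossbar_mvm(conductances, voltages, g_noise_std=0.0, rng=None):
    """Idealized analog matrix-vector multiply in a resistive crossbar.

    Each bit-line current is the Kirchhoff sum of Ohm's-law currents,
    I_j = sum_i V_i * G_ij, so the multiply-accumulate happens where
    the weights (conductances) are stored.

    conductances : (rows, cols) array of device conductances, in siemens
    voltages     : (rows,) array of read voltages applied to the word lines
    g_noise_std  : optional std of Gaussian conductance variation, a crude
                   stand-in for device-to-device variability (assumption)
    """
    rng = np.random.default_rng() if rng is None else rng
    g = conductances
    if g_noise_std > 0:
        g = g + rng.normal(0.0, g_noise_std, size=g.shape)
    # All columns are read out in one analog step, rather than with
    # rows * cols explicit multiply-accumulate operations.
    return voltages @ g

# Example: a 4x3 conductance matrix encoding a small weight matrix.
G = np.array([[1.0, 0.2, 0.0],
              [0.5, 1.5, 0.3],
              [0.0, 0.7, 1.1],
              [0.9, 0.0, 0.4]]) * 1e-6     # microsiemens
V = np.array([0.2, 0.1, 0.0, 0.3])         # read voltages, in volts

print(crossbar_mvm(G, V))                    # ideal bit-line currents (amps)
print(crossbar_mvm(G, V, g_noise_std=5e-8))  # with conductance noise
```

The noise term hints at why analog in-memory computing is attractive for error-tolerant workloads such as deep learning inference, while precision-critical tasks typically require compensation or mixed-precision schemes.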
