Self-rectifying resistive memory in passive crossbar arrays

Kanghyeok Jeon et al. Nat Commun. 2021 May 20;12(1):2968. doi: 10.1038/s41467-021-23180-2.

Abstract

Conventional computing architectures are poorly suited to the unique workload demands of deep learning, which has led to a surge in interest in memory-centric computing. Herein, a trilayer (Hf0.8Si0.2O2/Al2O3/Hf0.5Si0.5O2)-based self-rectifying resistive memory cell (SRMC) that exhibits (i) large selectivity (ca. 10^4), (ii) two-bit operation, (iii) low read power (4 and 0.8 nW for the low and high resistance states, respectively), (iv) low read latency (<10 μs), (v) excellent non-volatility (data retention >10^4 s at 85 °C), and (vi) complementary metal-oxide-semiconductor compatibility (maximum supply voltage ≤5 V) is introduced, which outperforms previously reported SRMCs. These characteristics render the SRMC highly suitable as the main memory for memory-centric computing, which can improve deep learning acceleration. Furthermore, the low programming power (ca. 18 nW), short programming latency (100 μs), and high endurance (>10^6 cycles) highlight the energy efficiency and reliability of our SRMC as random-access memory. The feasible operation of individual SRMCs in passive crossbar arrays of different sizes (30 × 30, 160 × 160, and 320 × 320) is attributed to the large asymmetry and nonlinearity in the current-voltage behavior of the proposed SRMC, verifying its potential for application in large-scale, high-density non-volatile memory for memory-centric computing.
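The quoted read powers are consistent with the 2 V read voltage and the 0.4–2 nA readable current margin reported in Fig. 1a, assuming read power is evaluated as P = V × I: 2 V × 2 nA ≈ 4 nW for the low resistance state and 2 V × 0.4 nA ≈ 0.8 nW for the high resistance state.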

Conflict of interest statement

The authors declare no competing interests.

Figures

Fig. 1
Fig. 1. Electrical characterization of the unit self-rectifying resistive memory cell (SRMC).
a DC I–V characteristics of 30 SRMCs. Arrows indicate the switching direction. The readable current margin verified at 2 V is 0.4–2 nA. b Resistance states programmed by varying the amplitude of the programming voltage pulse for three pulse widths (50 μs, 100 μs, and 1 ms). c Read-out current in response to a read-out pulse (2 V and 5 μs in amplitude and width, respectively). The current was evaluated from the voltage across the 1 MΩ internal resistor of the oscilloscope. d Memory retention characteristics of 20 SRMCs in the HRS and LRS as programmed and after baking (at 85 °C for 2 h). e Programming endurance of the SRMC using 4.2 V/100 μs set and −4.3 V/100 μs reset pulses. f Read disturb characteristics of the SRMC using a repetitive reading pulse of 2 V/10 μs (gray and red circles for the LRS and HRS, respectively).
Fig. 2
Fig. 2. Microstructural and chemical analyses.
a Cross-sectional high-resolution transmission electron microscope image of our SRMC. b Depth profiles of the elements, measured by Auger electron spectroscopy. c Atomic ratio (Hf+Si)/O along the depth of the SRMC. X-ray photoelectron spectra of d Hf 4f, e Si 2p, and f O 1s emission for HSO1 and HSO2.
Fig. 3
Fig. 3. Current behavior in the temperature domain and data retention for SRMC devices.
a Fitting of the Schottky emission equation to currents measured at various temperatures (45–85 °C) and ±2 V for the HRS and LRS. b Estimated barrier heights (ϕ1 and ϕ2, indicated in the inset) for the HRS and LRS. c Data retention for the proposed SRMC (Dev 3) at 85 °C compared with Dev 1 (Ru/HSO2/TiN) and Dev 2 (Ru/HSO1/HSO2/TiN). The as-programmed current level and the current level at 7200 s are denoted by I(0) and I(7200 s), respectively.
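For reference, Schottky-emission fits of this kind are usually based on the textbook thermionic-emission expression below; the caption does not give the authors' exact parameterization, so this is the standard form rather than their specific model:

J = A^{*} T^{2} \exp\!\left[-\frac{q\left(\phi_{B} - \sqrt{qE/4\pi\varepsilon_{r}\varepsilon_{0}}\right)}{k_{B} T}\right]

where A^{*} is the effective Richardson constant, \phi_{B} the barrier height (here ϕ1 or ϕ2), E the electric field across the barrier, \varepsilon_{r} the optical dielectric constant, and k_{B} Boltzmann's constant. Plotting \ln(J/T^{2}) against 1/T at fixed bias yields the effective (field-lowered) barrier height from the slope.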
Fig. 4
Fig. 4. Resistive switching simulation.
a One-dimensional configuration of the SRMC for simulation. b Simulated I–V loop (quasi-static behavior) compared with experimental data. c Simulated switching behaviors in response to voltage pulses of different widths and amplitudes. d Simulated LRS retention for the HSO2-only cell (Dev 1), HSO1/HSO2 cell (Dev 2), and HSO1/Al2O3/HSO2 SRMC (Dev 3). e, f Simulated oxygen vacancy distributions in the trilayer SRMC and the HSO1/HSO2 cell in the LRS (upper panel) and HRS (lower panel). The change of the distribution in each state was monitored over the time range 0–7200 s. g Retention of the areal density of oxygen vacancies in the LRS in each layer of the trilayer and bilayer cells.
Fig. 5
Fig. 5. Two-bit states of SRMC.
Two-bit states programmed using a the erase-and-program scheme and b the erase-free scheme on five SRMCs (indexed #1–#5). c Cumulative distribution of the amplitudes of the two-bit programming pulses. The average amplitude and standard deviation are denoted by m and σ, respectively. d SOP between two-bit states. e Retention of the two-bit states at 85 °C.
Fig. 6
Fig. 6. 30 × 30 CA of SRMCs.
a Top view of the CA layout. b Scanning electron microscope image of the array. The inset shows an atomic force microscope image of a unit SRMC. Schematic of c Scheme 1 and d Scheme 2. Appended I–V loops of 100 randomly chosen SRMCs (three loops for each SRMC), measured using e Scheme 1 and f Scheme 2. For Vop = 3 V, the voltage across the selected cell (red-filled circle), unselected group 1 cell (blue-filled circle), and unselected group 2 cell (black-filled circle) is indicated. The open-circuit current is also plotted (blue line). The currents read using Scheme 1 and Scheme 2 on the 100 SRMCs are shown in the distributions in g and h, respectively. Scheme 1: the mean current m and standard deviation σ for the HRS and LRS are (6.5 × 10^−11 A, 1.8 × 10^−11) and (9.3 × 10^−10 A, 5.1 × 10^−10), respectively. Scheme 2: (6.2 × 10^−11 A, 3.2 × 10^−11) and (1.0 × 10^−9 A, 4.1 × 10^−10) for the HRS and LRS, respectively.
Fig. 7
Fig. 7. 160 × 160 and 320 × 320 CAs of SRMCs.
a–d Illustrations of Schemes 1–4, with the voltage across different cells indicated by different colors. I–V loops of a selected cell embedded in the e 160 × 160 and f 320 × 320 CAs.
Fig. 8
Fig. 8. Acceleration of vector-matrix multiplication using the 30 × 30 CA.
a Configuration of a 30 × 30 matrix w mapped onto a CA of the same size. Vector x is encoded as voltage signals (‘0’ = 0 V, ‘1’ = 2 V) and applied to the row-lines (ROW[0]–ROW[29]). The resulting current vector j, as an intermediate product, enters the sense amplifiers (SAs) to be quantized. b Schematic timing diagrams of the row- and column-line signals. The inhibit voltages applied to unchosen column-lines are denoted by Vinhibit. c Statistics of the four states (HRS, L1, L2, L3) in four random matrices (w1–w4). d–g (upper panels) Conductance maps of the four random matrices (w1–w4) and (lower panels) measured current vectors j for the four matrices. We considered a vector x of ones. The measurement results are compared with the current vectors calculated from the currents measured on individual cells.
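As a minimal numerical sketch of the vector-matrix multiplication performed by the 30 × 30 CA, the Python snippet below reduces each SRMC to its read conductance and sums the column-line currents via Kirchhoff's current law; the names (crossbar_vmm, READ_VOLTAGE) and the example conductance values are illustrative assumptions, not quantities from the paper, and the SA quantization step is abstracted away.

    import numpy as np

    READ_VOLTAGE = 2.0  # a '1' bit drives a row-line with 2 V, a '0' bit with 0 V

    def crossbar_vmm(G, x_bits):
        """Column-line currents j = G^T · v (Kirchhoff's current law at each column)."""
        v = READ_VOLTAGE * np.asarray(x_bits, dtype=float)  # row-line voltages
        return G.T @ v                                      # currents entering the SAs (A)

    # Illustrative 30 x 30 conductance map: HRS and LRS conductances chosen so that
    # read currents at 2 V land near the ~0.06 nA / ~1 nA levels reported in Fig. 6.
    rng = np.random.default_rng(0)
    G = rng.choice([3.2e-11, 5.0e-10], size=(30, 30))  # siemens
    x = rng.integers(0, 2, size=30)                    # binary input vector
    j = crossbar_vmm(G, x)                             # intermediate current vector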
Fig. 9
Fig. 9. Acceleration of multibit vector-matrix multiplication.
a Configuration of a mapped weight matrix w (M × N) and multibit vector x (here, 3-bit). Elements x[i] are time-multiplexed, so that the multiplication delay is proportional to the bit-width of elements x[i]. b Timing diagrams of the signals to calculate w[:,i]·x for a given i with multibit elements including x[0] (=b101), x[1] (=b010), and x[29] (=b111). The resulting currents in the three time divisions (j[00], j[01], and j[02]) are first quantized by the SAs and subsequently multiplied by 1, 2^1, and 2^2, respectively, and summed in the processing elements (PEs).
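A sketch of this time-multiplexed (bit-serial) scheme under the same illustrative assumptions as the previous snippet: one bit-plane of x is applied per time division, and the per-division results are weighted by 2^k and accumulated, mirroring the ×1, ×2^1, ×2^2 weighting and summation described for the PEs (SA quantization is again abstracted away).

    import numpy as np

    READ_VOLTAGE = 2.0

    def crossbar_vmm(G, x_bits):
        return G.T @ (READ_VOLTAGE * np.asarray(x_bits, dtype=float))

    def multibit_vmm(G, x_values, n_bits=3):
        """Bit-serial VMM: one bit-plane per time division, then shift-and-add."""
        total = np.zeros(G.shape[1])
        for k in range(n_bits):                             # time divisions t0, t1, t2
            bit_plane = (np.asarray(x_values) >> k) & 1     # k-th bit of every element of x
            total += (2 ** k) * crossbar_vmm(G, bit_plane)  # PE weighting by 2^k
        return total

    rng = np.random.default_rng(0)
    G = rng.choice([3.2e-11, 5.0e-10], size=(30, 30))       # siemens, illustrative
    x = rng.integers(0, 8, size=30)                         # 3-bit inputs, e.g. b101 = 5
    result = multibit_vmm(G, x)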
