Review

Closed-Loop Neural Prostheses With On-Chip Intelligence: A Review and a Low-Latency Machine Learning Model for Brain State Detection

Bingzhao Zhu et al. IEEE Trans Biomed Circuits Syst. 2021 Oct;15(5):877-897. doi: 10.1109/TBCAS.2021.3112756. Epub 2021 Dec 9.

Abstract

The application of closed-loop approaches in systems neuroscience and therapeutic stimulation holds great promise for revolutionizing our understanding of the brain and for developing novel neuromodulation therapies to restore lost functions. Neural prostheses capable of multi-channel neural recording, on-site signal processing, rapid symptom detection, and closed-loop stimulation are critical to enabling such novel treatments. However, existing closed-loop neuromodulation devices are too simplistic and lack sufficient on-chip processing and intelligence. In this paper, we first discuss both commercial and investigational closed-loop neuromodulation devices for brain disorders. Next, we review state-of-the-art neural prostheses with on-chip machine learning, focusing on application-specific integrated circuits (ASICs). System requirements, performance and hardware comparisons, design trade-offs, and hardware optimization techniques are discussed. To facilitate a fair comparison and guide design choices among various on-chip classifiers, we propose a new energy-area (E-A) efficiency figure of merit that evaluates hardware efficiency and multi-channel scalability. Finally, we present several techniques to improve the key design metrics of tree-based on-chip classifiers, both in the context of ensemble methods and oblique structures. A novel Depth-Variant Tree Ensemble (DVTE) is proposed to reduce processing latency (e.g., by 2.5× on the seizure detection task). We further develop a cost-aware learning approach to jointly optimize the power and latency metrics. We show that algorithm-hardware co-design enables the energy- and memory-optimized design of tree-based models, while preserving high accuracy and low latency. Furthermore, we show that our proposed tree-based models feature a highly interpretable decision process that is essential for safety-critical applications such as closed-loop stimulation.
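The cost-aware learning idea summarized above can be written schematically as a task loss plus a weighted hardware cost. Below is a minimal sketch in Python, assuming a per-feature power and latency cost vector and a single trade-off coefficient C; the paper's exact formulation is not reproduced here.

```python
import numpy as np

def cost_aware_objective(task_loss, used_feature_mask,
                         power_cost, latency_cost, C=0.01):
    """Schematic cost-aware objective: task loss plus C times the summed,
    pre-normalized power and latency costs of the features the model uses.
    The cost vectors and C are illustrative assumptions, not the paper's values."""
    used = np.asarray(used_feature_mask, dtype=float)
    hw_cost = used @ (np.asarray(power_cost) + np.asarray(latency_cost))
    return task_loss + C * hw_cost
```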


Figures

Fig. 1:
Symbolic view of a closed-loop neural prosthesis. Multi-channel neural signals such as ECoG and LFP are recorded by cortical and deep-brain electrodes and sent to the implantable microchip. The on-chip biomarker extraction and ML processor detect the onset of symptoms and trigger a therapeutic neurostimulator.
Fig. 2:
Standard and emerging electrodes for neural recording and stimulation via noninvasive, minimally-invasive, and invasive technologies; (a) Standard scalp-EEG electrodes. (b) The Epios subscalp EEG device for chronic epilepsy monitoring [39]. (c) Standard and high-density ECoG [40]. (d) Stereo-EEG leads [41]. (e) Clinical DBS (Medtronic’s FDA-approved 3389, left), emerging directional DBS leads (8-channel direct STNAcute and 40-channel Medtronic-Sapiens, middle) and the Willsie and Dorval 1760-contact micro-DBS lead (right) [42].
Fig. 3:
Existing clinical or research-based closed-loop neuromodulation devices (with or without on-device ML); (a) The NeuroPace RNS device for epilepsy. (b) The AspireSR (Cyberonics, now known as LivaNova) device for epilepsy. (c) The Medtronic Percept PC device for movement disorders. (d) The Newronika AlphaDBS system for Parkinson’s disease. (e) The DyNeuMo Mk-1 system for movement disorders.
Fig. 4:
Hardware architectures and chip micrographs of ML-embedded neural prostheses for epilepsy: (a) linear dual-detector SVM classifier and closed-loop transcranial neurostimulator [13], (b) non-linear SVM-based seizure detector [14], (c) linear least square (LLS) classifier and closed-loop stimulator [17], (d) ridge regression classifier (RRC) and closed-loop stimulator [88], (e) gradient-boosted tree ensemble for seizure detection [12], (f) exponentially decaying-memory SVM and closed-loop stimulator [16], (g) AdaBoost decision tree classifier and closed-loop stimulator [29], (h) two-level coarse/fine classifier and closed-loop stimulator [89].
Fig. 5:
Hardware architectures and chip micrographs of ML-embedded neural prostheses for various applications: (a) linear SVM for epilepsy [11], (b) ANN for migraine state detection [18], (c) DNN for emotion detection in autism [100], (d) CNN for emotion detection [101].
Fig. 6:
A DVTE with eight decision trees. Unlike conventional tree ensembles that uniformly set the maximum depth on all trees, the maximum depths in a DVTE are different (1–4). The internal and leaf nodes are shown in blue and black, respectively.
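The caption describes the DVTE structure only at a high level. A minimal sketch of the idea is shown below, assuming eight independently trained trees with per-tree maximum depths of 1-4 whose probabilities are averaged; the paper's actual DVTE training (gradient boosting with cost-aware regularization) is not reproduced.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_dvte(X, y, depths=(1, 1, 2, 2, 3, 3, 4, 4), seed=0):
    """Fit one shallow tree per entry in `depths` (here: 8 trees, depths 1-4)."""
    return [DecisionTreeClassifier(max_depth=d, random_state=seed + i).fit(X, y)
            for i, d in enumerate(depths)]

def predict_dvte(trees, X, threshold=0.5):
    """Average the per-tree class probabilities: shallow trees have short
    decision paths (fast), deeper trees add discriminative power."""
    probs = np.mean([t.predict_proba(X)[:, 1] for t in trees], axis=0)
    return (probs >= threshold).astype(int)
```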
Fig. 7:
The outputs of decision trees in a DVTE. Latency is defined as the time difference between the expert-marked seizure onset and the state change of each tree’s output. d is the maximum depth of each tree.
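As a rough illustration of the latency definition used here, the sketch below computes the delay between an expert-marked onset and the first post-onset window in which a tree's output switches to the seizure state; the window length and variable names are assumptions, not values from the paper.

```python
import numpy as np

def detection_latency(tree_outputs, onset_time_s, window_s=1.0):
    """tree_outputs: one binary decision per analysis window for a single tree.
    onset_time_s: expert-marked seizure onset (seconds from recording start).
    Returns the delay (s) until the output first switches to 1 after onset,
    or None if the tree never detects the event."""
    onset_idx = int(onset_time_s // window_s)
    post_onset = np.asarray(tree_outputs[onset_idx:])
    hits = np.flatnonzero(post_onset == 1)
    return None if hits.size == 0 else float(hits[0]) * window_s
```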
Fig. 8:
Performance comparison of DVTE and conventional tree ensemble with a maximum depth of 4. DVTE reduced the latency by 2.5× with a marginal performance reduction (<3% in sensitivity and <1% in specificity). Error bars indicate the standard errors across patients.
Fig. 9:
Hardware cost as a function of the regularization coefficient C in DVTE. A larger C imposes stronger regularization and reduces the power/latency cost. The power cost was calculated as the average power consumption to extract features along the decision path. Latency was estimated as the average time to traverse a root-to-leaf decision path in the tree.
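The power and latency costs described in this caption can be estimated from a trained axis-aligned tree by walking each sample's root-to-leaf path and summing per-feature extraction costs. The sketch below uses scikit-learn's decision_path with hypothetical per-feature power numbers; it approximates the bookkeeping and is not the paper's hardware measurement.

```python
import numpy as np

def average_path_power(tree, X, feature_power_nw):
    """Average per-decision power: sum the (assumed) extraction power of every
    distinct feature tested along each sample's decision path, then average."""
    node_feature = tree.tree_.feature          # feature index per node (-2 = leaf)
    paths = tree.decision_path(X)              # sparse (n_samples, n_nodes) indicator
    costs = []
    for i in range(X.shape[0]):
        nodes = paths.indices[paths.indptr[i]:paths.indptr[i + 1]]
        feats = {node_feature[n] for n in nodes if node_feature[n] >= 0}
        costs.append(sum(feature_power_nw[f] for f in feats))
    return float(np.mean(costs))
```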
Fig. 10:
Seizure detection performance as a function of (a) power consumption and (b) latency. Shaded area indicates the standard errors across patients. The experiment was performed using DVTE and the following setting: 8 trees, depths varying from 1 to 4.
Fig. 11:
The number of extracted features in DVTE for different regularization coefficients. With a larger C, the cost-aware model tends to use hardware-friendly features (e.g., LLN, Var), while features requiring longer windows (δ, θ, α) are penalized. The power cost and latency for each C are shown in the legend, and the X-axis shows the individual feature costs. For C = 0.01, we achieved an average power cost of 268 nW and a latency of 0.52 s.
Fig. 12:
Hardware implementation of the proposed DVTE classifier: (a) system architecture, (b) layout, (c) area breakdown of the DVTE processor and a single decision tree, and (d) system power breakdown.
Fig. 13:
Comparison of ResOT and the axis-aligned tree ensemble on the seizure and tremor detection tasks. The conventional gradient-boosted ensemble (LightGBM [128]) and gradient boosting with power-efficient training (PEGB) were included. For PEGB, we used fixed-point thresholds and leaf weights, in contrast to the floating-point values in LightGBM [105]. Cross-subject standard errors are shown by error bars.
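The fixed-point thresholds and leaf weights used in PEGB (as opposed to LightGBM's floating-point values) can be approximated after training by simple quantization. A minimal sketch with an assumed signed Q7.8 format follows; the paper's actual power-efficient training procedure is not shown.

```python
import numpy as np

def to_fixed_point(values, frac_bits=8, total_bits=16):
    """Round values to a signed fixed-point grid with `frac_bits` fractional bits,
    saturating to `total_bits` (Q7.8 by default, an assumed format).
    Returns dequantized values usable as drop-in thresholds or leaf weights."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return np.clip(np.round(np.asarray(values) * scale), lo, hi) / scale
```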
Fig. 14:
The number of extracted features with different regularization coefficients in ResOT. With greater power- and latency-aware regularization terms, oblique trees prioritize low-power and low-latency features.
Fig. 15:
Parallel node evaluation scheme. (a) One internal node is evaluated per window. (b) Two layers (maximum 3 nodes) are concurrently evaluated per window. (c) All nodes are evaluated in parallel. The evaluated nodes are shown in color and bold lines represent the decision path. Node colors represent successive windows.
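The scheme in this figure can be reasoned about with a simple model: if L tree layers are evaluated per analysis window, a depth-D decision path completes in ceil(D / L) windows, while the number of speculatively evaluated nodes per window grows as 2^L − 1. The toy calculation below (an assumption-level model, not the chip's actual timing) reproduces the three cases in the caption for a depth-4 tree.

```python
import math

def parallel_eval_tradeoff(depth, layers_per_window):
    """Toy latency/power model: windows needed to reach a leaf vs. nodes
    evaluated per window when `layers_per_window` layers are computed at once."""
    windows_to_decision = math.ceil(depth / layers_per_window)
    nodes_per_window = 2 ** layers_per_window - 1   # full binary subtree
    return windows_to_decision, nodes_per_window

for L in (1, 2, 4):  # Fig. 15 (a), (b), (c) for a depth-4 tree
    w, n = parallel_eval_tradeoff(depth=4, layers_per_window=L)
    print(f"{L} layer(s)/window -> {w} window(s) latency, {n} node(s)/window")
```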
Fig. 16:
The power-latency trade-off with parallel node evaluation. With more nodes evaluated in parallel, latency is reduced at the cost of increased power consumption. Experiments were conducted with ResOT on the epilepsy task.
Fig. 17:
(a) Interpretation of the tremor detection process using a tree ensemble and Shapley additive explanations (SHAP). Features plotted in red predicted an increased risk of tremor, while those in blue were associated with a low tremor risk. (b) Visualization of the seizure detection process in an oblique tree for one arbitrary patient. We show the percentage of samples visiting each node and the power and latency required to evaluate each internal node. There are multiple “short paths” that allow dynamic early exiting. (c) Visualization of a cost-aware oblique tree, showing a significant reduction in the power cost of the root node.
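A comparable per-sample interpretation can be generated for any tree ensemble with the shap package. The sketch below assumes a fitted tree model (e.g., a lightgbm.LGBMClassifier) named `model` and a feature matrix `X`; it is not the authors' exact pipeline.

```python
import shap

# Exact Shapley values for tree models via the TreeExplainer algorithm.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # per-sample, per-feature contributions
shap.summary_plot(shap_values, X)        # red/blue attribution plot as in Fig. 17(a)
```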

References

    1. Little S, Pogosyan A, Neal S, Zavala B, Zrinzo L, Hariz M, Foltynie T, Limousin P, Ashkan K, FitzGerald J, et al., “Adaptive deep brain stimulation in advanced Parkinson disease,” Annals of Neurology, vol. 74, no. 3, pp. 449–457, 2013.
    2. Yao L, Brown P, and Shoaran M, “Improved detection of Parkinsonian resting tremor with feature engineering and Kalman filtering,” Clinical Neurophysiology, vol. 131, no. 1, pp. 274–284, 2020.
    3. Lo M-C and Widge AS, “Closed-loop neuromodulation systems: next-generation treatments for psychiatric illness,” International Review of Psychiatry, vol. 29, no. 2, pp. 191–204, 2017.
    4. Ezzyat Y, Wanda PA, Levy DF, Kadel A, Aka A, Pedisich I, Sperling MR, Sharan AD, Lega BC, Burks A, et al., “Closed-loop stimulation of temporal cortex rescues functional networks and improves memory,” Nature Communications, vol. 9, no. 1, pp. 1–8, 2018.
    5. Iturrate I, Pereira M, and Millán J. d. R., “Closed-loop electrical neurostimulation: challenges and opportunities,” Current Opinion in Biomedical Engineering, vol. 8, pp. 28–37, 2018.
