Front Neurosci. 2023 Jul 5;17:1156838.
doi: 10.3389/fnins.2023.1156838. eCollection 2023.

Expert and deep learning model identification of iEEG seizures and seizure onset times

Sharanya Arcot Desai et al. Front Neurosci.

Abstract

Hundreds of 90-s iEEG records are typically captured from each NeuroPace RNS System patient between clinic visits. While these records provide invaluable information about the patient's electrographic seizure and interictal activity patterns, manually classifying them as electrographic seizure/non-seizure activity and manually identifying the seizure onset channels and times is an extremely time-consuming process. A convolutional neural network-based Electrographic Seizure Classifier (ESC) model was developed in an earlier study. In this study, the classification model is tested against iEEG annotations provided by three expert reviewers board certified in epilepsy. The three experts individually annotated 3,874 iEEG channels from 36, 29, and 35 patients with leads in the mesiotemporal (MTL), neocortical (NEO), and MTL + NEO regions, respectively. The ESC model's seizure/non-seizure classifications agreed with the three reviewers at 88.7%, 89.6%, and 84.3%, similar to the rates at which the reviewers agreed with each other (86.4%-92.9%). On iEEG channels where all three experts agreed (83.2% of channels), the ESC model had an agreement score of 93.2%. Additionally, the ESC model's certainty scores tracked the combined reviewer certainty scores: when 0, 1, 2, and 3 (out of 3) reviewers annotated iEEG channels as electrographic seizures, the ESC model's seizure certainty scores were in the ranges [0.12-0.19], [0.32-0.42], [0.61-0.70], and [0.92-0.95], respectively. The ESC model was used as a starting-point model for training a second Seizure Onset Detection (SOD) model. For this task, seizure onset times were manually annotated on a relatively small number of iEEG channels (4,859 from 50 patients). Experiments showed that fine-tuning the ESC model with augmented data (30,768 iEEG channels) yielded better validation performance (on 20% of the manually annotated data) than training with only the original data (3.1 s vs. 4.4 s median absolute error). Similarly, using the ESC model weights as the starting point for fine-tuning, instead of other weight-initialization methods, provided a significant advantage in SOD model validation performance (3.1 s vs. 4.7 s and 3.5 s median absolute error). Finally, on iEEG channels where the three experts' seizure onset time annotations were within 1.5 s of each other, the SOD model's predicted onset time was within 1.7 s of the expert annotations.
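The two headline metrics above (percent agreement on seizure/non-seizure labels, and median absolute error of onset times) can be sketched as follows; the labels and onset times here are synthetic illustrations, not the study's data:

```python
import numpy as np

# Hypothetical per-channel labels: 1 = electrographic seizure, 0 = non-seizure.
model_labels = np.array([1, 0, 1, 1, 0, 1, 0, 0])
reviewer_labels = np.array([1, 0, 1, 0, 0, 1, 0, 1])

# Percent agreement between the model and one reviewer (6 of 8 match here).
agreement = np.mean(model_labels == reviewer_labels) * 100  # 75.0

# Hypothetical onset times (seconds) on channels both call seizures.
predicted_onsets = np.array([29.4, 41.1, 54.5])
annotated_onsets = np.array([30.0, 43.0, 52.0])
# Median of the per-channel absolute onset errors.
median_abs_error = np.median(np.abs(predicted_onsets - annotated_onsets))
print(agreement, median_abs_error)
```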

Keywords: EEG; big data; deep learning; epilepsy; seizure classification.


Conflict of interest statement

SD, WB, TT, CS, and MM are employees of NeuroPace. MA was an intern at NeuroPace when performing the analysis presented in this manuscript. JK, SB, and CT were consultants for NeuroPace when they independently annotated 1,000 iEEG records.

Figures

Figure 1
The RNS System (left). Two example 90-s scheduled iEEG records in time-series and spectrogram representation (right top). Two example 90-s long episode (LE) iEEG records in time-series and spectrogram representation (right bottom).
Figure 2
Example original and time-shifted (augmented) annotated iEEG channels used for training a seizure onset time detection model. The seizure onset time annotated in the original dataset was at 29.40 s. After time-shifting, the onset times in the augmented spectrograms were at 41.08, 54.47, and 64.36 s, respectively.
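The time-shift augmentation in this figure can be sketched roughly as below; the spectrogram shape, the circular wrap-around, and the helper name `time_shift_augment` are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def time_shift_augment(spectrogram, onset_s, shift_s, total_s=90.0):
    """Shift a (freq_bins, time_bins) spectrogram later in time and move
    the annotated seizure onset with it (columns wrap around circularly)."""
    n_time = spectrogram.shape[1]
    shift_bins = int(round(shift_s / total_s * n_time))
    return np.roll(spectrogram, shift_bins, axis=1), onset_s + shift_s

spec = np.random.rand(128, 900)  # hypothetical 90-s, 128-frequency-bin spectrogram
aug, new_onset = time_shift_augment(spec, onset_s=29.40, shift_s=11.68)
# new_onset is 41.08 s, matching the first shifted onset in the caption
```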
Figure 3
PR (Precision-Recall) curves of the 5 ESC models (each model trained on data from one cross-validation split) on a subset of iEEG channels in which all three reviewers’ annotations matched (left). PR curves and AUPRC of the ESC models against each individual reviewer (right).
Figure 4
Left top panel: model certainty of the ESC model trained on fold 1 of the training data (y-axis) vs combined reviewer certainty (x-axis). Other panels: similar performance on the other 4 folds of the model training.
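The comparison in this figure amounts to binning the model's certainty scores by how many reviewers called each channel a seizure; a minimal sketch with made-up scores (not the study's data):

```python
import numpy as np

# Hypothetical per-channel data: how many of 3 reviewers marked the channel
# as an electrographic seizure, and the model's seizure certainty score.
reviewer_votes = np.array([0, 0, 1, 1, 2, 2, 3, 3])
model_certainty = np.array([0.10, 0.20, 0.35, 0.40, 0.60, 0.70, 0.92, 0.95])

# Mean model certainty at each reviewer-agreement level (0-3 of 3 reviewers).
mean_certainty = {v: model_certainty[reviewer_votes == v].mean() for v in range(4)}
print(mean_certainty)
```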
Figure 5
Pairwise sensitivity and false positive rate comparisons show that the model’s performance (operating point shown here is 0.8) lies within the 95% confidence intervals (dotted lines) of the expert pairs, demonstrating that the ESC model’s performance is non-inferior to that of the experts according to the methods outlined in Scheuer et al. (2017).
Figure 6
(Top) Seizure Onset Detection (SOD) model validation performance vs. training epochs on the original (left) and augmented (center) datasets. The right panel shows the median absolute error, mean absolute error, and root mean squared error. In all cases, better model performance was observed when training used the original + augmented datasets. (Bottom) SOD model validation performance vs. training epochs when fine-tuning the electrographic seizure classifier (left), a ResNet50 model with random weight initialization (center), and a ResNet50 model with weights learned by training on the ImageNet dataset (right).
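The warm-start advantage shown in the bottom row can be illustrated in miniature with a linear model trained by gradient descent: for the same training budget, starting from weights near a previously learned solution ends with far lower error than a random start. This is a purely synthetic sketch of the initialization effect, not the paper's ResNet50 pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true  # synthetic regression targets

def train_mse(w0, steps=100, lr=0.05):
    """Run plain gradient descent from w0 and return the final MSE."""
    w = w0.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(X)
    return float(np.mean((X @ w - y) ** 2))

warm = w_true + 0.1 * rng.normal(size=10)  # "pretrained" init near the solution
cold = rng.normal(size=10)                 # random init, far from the solution
# With equal training budgets, the warm start ends with much lower error.
print(train_mse(warm), train_mse(cold))
```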
Figure 7
(Top) Seizure onset time prediction in 3 example iEEG channels. The pink vertical line shows the SOD model’s predicted onset time in each of the three spectrogram images. (Bottom) Left panel: median difference in onset times between SOD model and human reviewer (y-axis) vs. median difference in onset times between reviewers (x-axis). Numbers above data points show the number of iEEG channels used to compute the difference. Right panel: Histogram of difference between trained SOD model vs. human reviewer in seconds.
