Review
Front Neurol. 2024 Jul 11;15:1425490. doi: 10.3389/fneur.2024.1425490. eCollection 2024.

The present and future of seizure detection, prediction, and forecasting with machine learning, including the future impact on clinical trials

Wesley T Kerr et al. Front Neurol. 2024.

Abstract

Seizures have a profound impact on quality of life and mortality, in part because they can be challenging both to detect and to forecast. Seizure detection relies upon accurately differentiating transient neurological symptoms caused by abnormal epileptiform activity from similar symptoms with different causes. Seizure forecasting aims to identify when a person has a high or low likelihood of seizure, a task closely related to seizure prediction. Machine learning and artificial intelligence are data-driven techniques that, integrated with neurodiagnostic monitoring technologies, attempt to accomplish both of those tasks. In this narrative review, we describe both the existing software and hardware approaches for seizure detection and forecasting, as well as the concepts for how to evaluate the performance of new technologies for future application in clinical practice. These technologies include long-term monitoring both with and without electroencephalography (EEG), with reported performance that includes very high sensitivity and reduced false positive detection rates. In addition, we describe the implications of seizure detection and forecasting for the evaluation of novel treatments for seizures within clinical trials. Based on these existing data, long-term seizure detection and forecasting with machine learning and artificial intelligence could fundamentally change the clinical care of people with seizures, but multiple validation steps are necessary to rigorously demonstrate their benefits and costs relative to the current standard.
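To make concrete the two performance metrics the abstract emphasizes for long-term detection, here is a minimal sketch (not from the article; all names and the 30-second matching tolerance are illustrative assumptions) of how sensitivity and the rate of false positive detections per 24 hours might be computed from annotated seizure times and detector alarms.

```python
from typing import List, Tuple

def evaluate_detector(
    seizure_events: List[Tuple[float, float]],  # annotated (start, end) times in seconds
    detections: List[float],                    # detector alarm times in seconds
    recording_hours: float,                     # total monitored time
    tolerance: float = 30.0,                    # alarm within +/- 30 s of an event counts (assumption)
) -> Tuple[float, float]:
    """Return (sensitivity, false alarms per 24 h)."""
    detected = 0
    used = set()
    for start, end in seizure_events:
        for i, t in enumerate(detections):
            if i not in used and (start - tolerance) <= t <= (end + tolerance):
                detected += 1
                used.add(i)
                break
    sensitivity = detected / len(seizure_events) if seizure_events else float("nan")
    false_alarms = len(detections) - len(used)
    return sensitivity, false_alarms / (recording_hours / 24.0)

# Example: 3 annotated seizures, 4 alarms over 48 h of monitoring -> (1.0, 0.5 false alarms/day).
print(evaluate_detector([(100, 160), (5000, 5060), (9000, 9090)],
                        [110, 5055, 7000, 9100], recording_hours=48.0))
```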

Keywords: deficiency time; epilepsy; human-in-the-loop; internet of things; wearables.


Conflict of interest statement

WK has received compensation for review articles for Medlink Neurology and consulting for SK Life Sciences, Biohaven Pharmaceuticals, Cerebral Therapeutics, Jazz Pharmaceuticals, EpiTel, UCB Pharmaceuticals, Azurity Pharmaceuticals, and the Epilepsy Study Consortium; and has collaborative or data use agreements with Eisai, Janssen, Radius Health, and Neureka. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1
Machine learning training, testing, and validation flowsheet. The best parameters, β, of a model that maximize a chosen quantitative metric of performance are learned from the training data only. After all pre-processing steps are applied to the testing data without modification, the best hyperparameters, θ, of a model that maximize performance are learned based on the testing data, without modification of the learned parameters, β. Lastly, the expected (E) performance is measured on the validation data after applying all pre-processing steps and the model with the optimized parameters, β, and hyperparameters, θ. D# reflects a numbered subset of data; argmax reflects identifying the optimal argument (arg) that maximizes (max) the performance; the vertical line, |, means “given” in mathematical notation.
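As a concrete illustration of this flowsheet, the sketch below (an assumption-laden example, not code from the article: synthetic data, scikit-learn, logistic regression with regularization strength C standing in for the hyperparameter θ, and the learned coefficients standing in for the parameters β) walks through the three mutually exclusive subsets: training learns β, testing selects θ, and validation is touched only once to estimate expected performance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

# Three mutually exclusive subsets: train (learn beta), test (choose theta),
# validation (estimate expected performance once, with beta and theta frozen).
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.4, random_state=0)
X_test, X_val, y_test, y_val = train_test_split(X_hold, y_hold, test_size=0.5, random_state=0)

# Pre-processing is fit on training data only, then applied without modification.
scaler = StandardScaler().fit(X_train)

best_theta, best_score, best_model = None, -np.inf, None
for C in (0.01, 0.1, 1.0, 10.0):                                              # candidate theta
    model = LogisticRegression(C=C).fit(scaler.transform(X_train), y_train)   # learn beta
    score = model.score(scaler.transform(X_test), y_test)                     # argmax over theta
    if score > best_score:
        best_theta, best_score, best_model = C, score, model

expected_performance = best_model.score(scaler.transform(X_val), y_val)
print(f"theta={best_theta}, expected accuracy on unseen data={expected_performance:.2f}")
```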
Figure 2
Examples of the common errors of (A) “leakage” and (B) “peeking,” in which the validation data is not truly “unseen.” In (A), the validation data leaks into training by being used in feature selection to identify the features related to the outcome of interest. In (B), the best-performing ML model is chosen based on its performance on the “validation” data, so there is no data left to evaluate the performance of that best ML model on truly “unseen” data.
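The leakage error in panel (A) is easy to reproduce numerically. The sketch below (illustrative only: synthetic noise data with random labels, scikit-learn) contrasts feature selection fit on all data, which leaks validation information and inflates the cross-validated estimate, with a pipeline that refits selection inside each training fold.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 500))          # many noise features, few samples
y = rng.integers(0, 2, size=60)         # labels carry no real signal

# Leakage: features are selected using ALL data (including future validation
# folds), so the cross-validated accuracy is optimistically biased.
X_leaky = SelectKBest(f_classif, k=10).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(), X_leaky, y, cv=5).mean()

# Leak-free: selection is refit inside each training fold only.
pipe = make_pipeline(SelectKBest(f_classif, k=10), LogisticRegression())
honest = cross_val_score(pipe, X, y, cv=5).mean()

print(f"leaky estimate: {leaky:.2f}  vs  honest estimate: {honest:.2f} (~0.5 expected)")
```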
Figure 3
Illustration of the structure of a cyclic 10-fold cross-validation, where data is split into mutually exclusive subsets labelled D#. Model training occurs on training data only (black) and validation performance is estimated from validation data only (orange). In cyclic cross-validation, the subset serving as validation data cycles so that each subset of data is used for validation once and only once. Pooled performance across folds estimates the performance of the general approach on unseen data, but each of the 10 different models likely has different learned parameters, β. When hyperparameters, θ, need to be learned, nested cross-validation can further split the black data into training and testing subsets (pink in Figure 1).
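A minimal sketch of this structure (synthetic data and scikit-learn, chosen here as assumptions rather than anything used in the review): the outer 10-fold loop cycles each subset D# through the validation role once, while the nested inner search relearns the hyperparameter θ within each training fold before performance is pooled.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, GridSearchCV, cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))
y = (X[:, 0] > 0).astype(int)

# Outer loop: each of the 10 subsets D# serves as validation exactly once.
outer = KFold(n_splits=10, shuffle=True, random_state=0)

# Inner loop (nested CV): splits the training data again to choose C (theta).
inner_search = GridSearchCV(LogisticRegression(),
                            param_grid={"C": [0.1, 1.0, 10.0]},
                            cv=5)

scores = cross_val_score(inner_search, X, y, cv=outer)
print(f"pooled estimate across folds: {scores.mean():.2f} +/- {scores.std():.2f}")
```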
Figure 4
Illustration of the difference between splitting data into training and validation sets when the internal structure of the data is either maintained or modified. When the data include 10 seizures from each of 10 patients, indicated by SX.Y for Seizure Y from Patient X, it would be an error to use unstructured splitting (first panel). Two appropriate methods for splitting into training and validation sets are illustrated. In the middle panel, we show training on data from 9 patients and validating on the left-out patient. In the right panel, we illustrate pseudo-prospective validation, where the model is trained on the first 9 seizures from each patient and validated using the last seizure from each patient.
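Both structured splits can be expressed in a few lines. The sketch below (an illustrative assumption using synthetic feature rows and scikit-learn's LeaveOneGroupOut, not the authors' code) builds the leave-one-patient-out folds of the middle panel and the within-patient pseudo-prospective split of the right panel.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

n_patients, n_seizures = 10, 10
patient = np.repeat(np.arange(n_patients), n_seizures)     # SX.Y -> patient X
seizure_idx = np.tile(np.arange(n_seizures), n_patients)   # SX.Y -> seizure Y
X = np.random.default_rng(3).normal(size=(n_patients * n_seizures, 5))

# Middle panel: train on 9 patients, validate on the held-out patient.
for train_rows, val_rows in LeaveOneGroupOut().split(X, groups=patient):
    held_out = np.unique(patient[val_rows])[0]
    # ... fit on X[train_rows], evaluate on X[val_rows] for patient `held_out` ...

# Right panel: pseudo-prospective split by time within each patient.
train_rows = np.where(seizure_idx < n_seizures - 1)[0]     # first 9 seizures per patient
val_rows = np.where(seizure_idx == n_seizures - 1)[0]      # last seizure of each patient
print(len(train_rows), "training rows,", len(val_rows), "validation rows")
```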
Figure 5
Examples of the electrographic, myogenic, and electrocardiographic (ECG) signals seen for (A) a left temporal onset focal to bilateral tonic-clonic seizure and (B) a functional (nonepileptic) seizure with rhythmic artifacts. A challenge for seizure detection, prediction, and forecasting technologies is to differentiate these two types of events based on recording these signals with a combination of relevant sensors. The purple arrows highlight rhythmic artifact from side-to-side movement of the head against a pillow that appears to evolve like an electrographic seizure; it can be differentiated from an epileptic seizure because the high-amplitude field is in the posterior electrodes, whereas the amplitudes in the anterior electrodes are markedly lower. The red markings highlight the challenges of ECG monitoring: in (A), the tonic-clonic movements include the chest, and the muscle-generated signals obscure the relatively lower-amplitude signals from the heart; in (B), the ECG was not accurately recording during the seizure, which represents deficiency time.
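To make the notion of deficiency time in panel (B) quantitative, here is a minimal sketch (illustrative names and a deliberately crude flat-signal criterion; real devices use richer signal-quality indices) of estimating the fraction of a recording during which a sensor was not accurately capturing usable signal.

```python
import numpy as np

def deficiency_time_fraction(signal: np.ndarray, fs: float,
                             flat_threshold: float = 1e-3) -> float:
    """Fraction of the recording where the channel is effectively flat/unusable.

    Assumes one crude criterion: near-zero standard deviation within 1 s windows.
    """
    window = int(fs)                                # 1-second windows
    n_windows = len(signal) // window
    bad = 0
    for i in range(n_windows):
        seg = signal[i * window:(i + 1) * window]
        if np.std(seg) < flat_threshold:            # sensor disconnected or saturated flat
            bad += 1
    return bad / n_windows if n_windows else float("nan")

# Example: 60 s of ECG-like noise with a 15 s dropout in the middle -> 25%.
fs = 256.0
ecg = np.random.default_rng(4).normal(scale=0.1, size=int(60 * fs))
ecg[int(20 * fs):int(35 * fs)] = 0.0
print(f"deficiency time: {deficiency_time_fraction(ecg, fs):.0%}")
```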
Figure 6
Illustration of EEG-based approaches for seizure detection, seizure prediction, and seizure forecasting that differ from conventional scalp EEG. See Table 1 and text for citations of specific technologies.
Figure 7
A GPT-4 generated illustration of a person wearing various monitoring devices that could be used for seizure detection, prediction, and forecasting. The white blanket could represent a bed pad monitoring device. The watch and bicep monitoring devices highlight where other external sensors can be placed. The headphones represent devices that can be worn around or inside the ear or head. See Table 2 and text for further descriptions.
