Three simple steps to improve the interpretability of EEG-SVM studies
- PMID: 36169205
- DOI: 10.1152/jn.00221.2022
Abstract
Machine-learning systems that classify electroencephalography (EEG) data offer promising prospects for the diagnosis and prognosis of a wide variety of neurological and psychiatric conditions, but their clinical adoption remains low. We propose here that much of the difficulty in translating EEG machine-learning research to the clinic results from consistent inaccuracies in technical reporting, which severely impair the interpretability of the often high performance claims made by these studies. Taking as an example a major class of machine-learning algorithms used in EEG research, the support-vector machine (SVM), we highlight three important aspects of model development (normalization, hyperparameter optimization, and cross-validation) and show that, although these three aspects can make or break the performance of the system, they are left entirely undocumented in the overwhelming majority of the research literature. Providing a more systematic description of these aspects of model development constitutes three simple steps to improve the interpretability of EEG-SVM research and, ultimately, its clinical adoption.
Keywords: electroencephalography; how-to; reliability; reproducibility; support vector machines.
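Illustrative note (not from the article): a minimal Python/scikit-learn sketch of how the three aspects named in the abstract might be implemented and reported in a typical EEG-SVM workflow, with normalization fitted inside the cross-validation folds and hyperparameters tuned in a nested loop; the feature matrix, labels, and parameter grid below are placeholder assumptions, not values from the study.

    # Sketch only: placeholder EEG feature matrix (epochs x features) and labels.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import Pipeline
    from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 64))      # hypothetical EEG features
    y = rng.integers(0, 2, size=120)    # hypothetical binary labels

    # 1. Normalization: the scaler sits inside the pipeline, so it is fitted
    #    on training folds only and never sees the test folds (no leakage).
    pipe = Pipeline([("scale", StandardScaler()), ("svm", SVC(kernel="rbf"))])

    # 2. Hyperparameter optimization: grid over C and gamma, tuned in an
    #    inner cross-validation loop.
    grid = GridSearchCV(
        pipe,
        param_grid={"svm__C": [0.1, 1, 10, 100],
                    "svm__gamma": ["scale", 0.01, 0.001]},
        cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    )

    # 3. Cross-validation: an outer loop estimates the generalization
    #    performance of the whole tuning procedure (nested cross-validation).
    outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
    scores = cross_val_score(grid, X, y, cv=outer_cv)
    print(f"Nested CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

Reporting the scaler, the search grid, and both cross-validation schemes (as the print statement summarizes) is the kind of documentation the abstract argues is routinely missing.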