2025 8th Int Conf Data Sci Mach Learn Appl. 2025 Feb;2025:132-137.
doi: 10.1109/CDMA61895.2025.00028. Epub 2025 Mar 7.

Utilizing Pretrained Vision Transformers and Large Language Models for Epileptic Seizure Prediction

Paras Parani et al. 2025 8th Int Conf Data Sci Mach Learn Appl (2025). 2025 Feb.

Abstract

Repeated unprovoked seizures are a major concern for people with epilepsy. Predicting seizures before they occur interests both machine-learning scientists and clinicians, and it is an active area of research. The variability of EEG sensors, the types of seizures, and the specialized knowledge required to annotate the data complicate the large-scale annotation essential for supervised predictive models. To address these challenges, we propose using Vision Transformers (ViTs) and Large Language Models (LLMs) that were originally trained on publicly available image or text data. Our work leverages these pre-trained models by refining the input, embedding, and classification layers in a minimalistic fashion to predict seizures. Our results demonstrate that LLMs outperform ViTs in patient-independent seizure prediction, achieving a sensitivity of 79.02%, which is about 8% higher than the ViTs and about 12% higher than a custom-designed ResNet-based model. Our work demonstrates the feasibility of pre-trained models for seizure prediction and their potential for improving the quality of life of people with epilepsy. Our code and related materials are available open-source at: https://github.com/pcdslab/UtilLLM_EPS/.
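The transfer-learning recipe the abstract describes, freezing a large pre-trained backbone and retraining only a small classification head, can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the frozen backbone is stood in for by a fixed random projection, and the "preictal vs. interictal" data are synthetic. All names (`embed`, `W_frozen`, the dimensions) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained backbone (a ViT or LLM in the paper):
# a fixed projection whose weights are never updated.
D_IN, D_EMB = 64, 32
W_frozen = rng.normal(size=(D_IN, D_EMB))

def embed(x):
    """Frozen backbone: raw EEG-window features -> embeddings (no training)."""
    return np.tanh(x @ W_frozen)

# Synthetic toy data: interictal (label 0) vs. preictal (label 1) windows,
# separated by a small mean shift -- purely illustrative, not real EEG.
n = 200
X = rng.normal(size=(n, D_IN))
y = (rng.random(n) < 0.5).astype(float)
X += 0.8 * y[:, None]

# Trainable classification head: logistic regression on the frozen
# embeddings, fitted with plain gradient descent.
Z = embed(X)
w, b, lr = np.zeros(D_EMB), 0.0, 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))  # sigmoid
    w -= lr * (Z.T @ (p - y) / n)
    b -= lr * float(np.mean(p - y))

pred = (1.0 / (1.0 + np.exp(-(Z @ w + b))) > 0.5).astype(float)
acc = float(np.mean(pred == y))
print(f"training accuracy of the head: {acc:.2f}")
```

Only `w` and `b` are updated, so the expensive pre-trained weights stay fixed; this is what keeps the annotation and compute budget small in the approach the abstract outlines.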

Keywords: Electroencephalography (EEG); Epilepsy; Large Language Model (LLM); Seizure Prediction; Vision Transformer (ViT).


Figures

Fig. 1. Illustration of the phases in which data are augmented to make them suitable as input to a ViT.

Fig. 2. Illustration of the different stages of ViT-2, with the re-trained layers enclosed in dotted rectangles.

