Foundation Models on Wearable EEG using Self-Supervised Learning
- PMID: 41337335
- DOI: 10.1109/EMBC58623.2025.11254670
Abstract
Machine learning models can learn representations that generalize across tasks, but they typically rely on large, well-annotated datasets, which remain scarce in electroencephalography (EEG) analysis owing to signal variability, artifacts, and labeling costs. Wearable EEG devices have enabled large-scale data collection; however, most of these data remain unlabeled, limiting the scalability of supervised learning. Developing robust EEG feature representations that generalize across tasks therefore remains a challenge. In this study, self-supervised learning (SSL) was explored as a route to foundation models for EEG using the Muse Meditation Dataset (MMD). Contrastive learning was applied at both the participant and segment levels, under the hypothesis that participant-level contrastive learning captures inter-subject variability more effectively. Two deep learning architectures, ShallowNet and EEGConformer, were evaluated on downstream tasks, including age and sex classification. SSL-pretrained embeddings outperformed fully supervised models, particularly in low-label scenarios. Participant-level contrastive learning improved classification accuracy over segment-level pretraining, and EEGConformer, with its transformer-based self-attention, outperformed ShallowNet, demonstrating the effectiveness of attention mechanisms in EEG representation learning. These findings contribute to understanding how SSL and large-scale pretraining shape EEG feature extraction.
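To make the participant-level contrastive objective concrete, the following is a minimal PyTorch sketch, not the authors' released code: the batching scheme, the nt_xent_participant function, the temperature value, and the encoder interface are all illustrative assumptions. It pairs two EEG segments from the same participant as positives and treats segments from other participants in the batch as negatives, optimizing an NT-Xent (InfoNCE) loss so that same-participant embeddings are pulled together.

    # Hypothetical sketch of participant-level contrastive pretraining (SimCLR-style
    # NT-Xent); the actual pipeline in the paper may differ.
    import torch
    import torch.nn.functional as F

    def nt_xent_participant(z1: torch.Tensor, z2: torch.Tensor,
                            temperature: float = 0.1) -> torch.Tensor:
        """NT-Xent loss where z1[i] and z2[i] embed two EEG segments from the
        same participant; every other pairing in the batch acts as a negative."""
        z1 = F.normalize(z1, dim=1)          # (B, D) unit-norm embeddings
        z2 = F.normalize(z2, dim=1)
        z = torch.cat([z1, z2], dim=0)       # (2B, D)
        sim = z @ z.t() / temperature        # cosine-similarity logits
        sim.fill_diagonal_(float("-inf"))    # exclude self-similarity
        b = z1.size(0)
        # the positive for row i is its same-participant counterpart in the other view
        targets = torch.cat([torch.arange(b) + b, torch.arange(b)]).to(z.device)
        return F.cross_entropy(sim, targets)

    # Usage: an encoder (e.g., ShallowNet or EEGConformer backbone) maps raw EEG
    # segments of shape (B, channels, samples) to embeddings of shape (B, D).
    # seg_a, seg_b hold one segment pair per participant in the batch:
    #   loss = nt_xent_participant(encoder(seg_a), encoder(seg_b))

Swapping the pairing rule so that positives are two augmented views of the same segment, rather than two segments from the same participant, would yield the segment-level variant the abstract compares against.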