Foundation Models on Wearable EEG using Self-Supervised Learning

Jiansheng Niu et al. Annu Int Conf IEEE Eng Med Biol Soc. 2025 Jul:2025:1-5. doi: 10.1109/EMBC58623.2025.11254670.

Abstract

Machine learning models have been effective in learning generalizable representations across tasks but often rely on large, well-annotated datasets, which remain scarce in electroencephalography (EEG) analysis due to signal variability, artifacts, and labeling costs. Wearable EEG devices have enabled large-scale data collection; however, most data remain unlabeled, limiting the scalability of supervised learning approaches. Developing robust EEG feature representations that generalize across tasks remains a challenge. In this study, self-supervised learning (SSL) was explored as a method to develop foundation models for EEG using the Muse Meditation Dataset (MMD). Contrastive learning was applied at both participant and segment levels, with the hypothesis that participant-level contrastive learning captures inter-subject variability more effectively. Two deep learning architectures, ShallowNet and EEGConformer, were evaluated on downstream tasks, including age and sex classification. Results indicated that SSL-trained embeddings outperformed fully supervised models, particularly in low-label scenarios. Participant-level contrastive learning improved classification accuracy, and EEGConformer's transformer-based self-attention outperformed ShallowNet, demonstrating its effectiveness in EEG representation learning. These findings contribute to understanding how SSL and large-scale pretraining influence EEG feature extraction.
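The participant-level contrastive objective described above can be illustrated with a minimal sketch. The abstract does not specify the exact loss, so the following assumes an NT-Xent-style (SimCLR-like) formulation in which EEG segments from the same participant are treated as positive pairs and all other in-batch segments as negatives; the function name and the NumPy implementation are illustrative, not the authors' code.

```python
import numpy as np

def participant_contrastive_loss(embeddings, participant_ids, temperature=0.5):
    """Hypothetical participant-level NT-Xent-style loss.

    Segments sharing a participant ID are positives; every other
    in-batch segment is a negative. Not the paper's exact objective.
    """
    ids = np.asarray(participant_ids)
    # L2-normalize so the dot product is cosine similarity
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(ids)
    not_self = ~np.eye(n, dtype=bool)                  # exclude self-pairs
    pos = (ids[:, None] == ids[None, :]) & not_self    # same-participant pairs
    losses = []
    for i in range(n):
        # log of the denominator over all non-self pairs for anchor i
        log_denom = np.log(np.exp(sim[i][not_self[i]]).sum())
        pos_logits = sim[i][pos[i]]
        if pos_logits.size == 0:
            continue  # anchor has no same-participant positive in this batch
        losses.append(np.mean(log_denom - pos_logits))
    return float(np.mean(losses))
```

As a sanity check, embeddings that cluster by participant should yield a lower loss than the same embeddings paired with mismatched participant labels, which is the behavior the participant-level hypothesis relies on.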
