EEG-Based Music Emotion Prediction Using Supervised Feature Extraction for MIDI Generation
- PMID: 40096343
- PMCID: PMC11902679
- DOI: 10.3390/s25051471
Abstract
Advances in music emotion prediction are enabling AI-driven algorithmic composition capable of generating complex melodies. However, bridging the neural and auditory domains remains challenging: the semantic gap between brain-derived low-level features and high-level musical concepts makes alignment computationally demanding. This study proposes a deep learning framework that generates MIDI sequences aligned with labeled emotion predictions through supervised feature extraction from the neural and auditory domains. EEGNet processes the neural data, while an autoencoder-based piano-roll algorithm handles the auditory data. To address modality heterogeneity, Centered Kernel Alignment (CKA) is incorporated to enhance the separation of emotional states. Regression between the feature domains then reduces intra-subject variability in the extracted electroencephalography (EEG) patterns, and the latent auditory representations are clustered into denser partitions to improve MIDI reconstruction quality. Evaluation on real-world data using musical metrics shows that the proposed approach improves emotion classification along the arousal and valence dimensions and yields MIDI sequences that better preserve temporal alignment, tonal consistency, and structural integrity. Subject-specific analysis reveals that subjects who performed better on the imagery paradigm produced higher-quality MIDI outputs, as their neural patterns aligned more closely with the training data, whereas subjects with weaker performance exhibited less consistent auditory data.
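The Centered Kernel Alignment measure referenced above can be sketched as follows. The abstract does not specify the paper's kernel choice or where in the network CKA is applied, so this is a minimal illustration of standard linear CKA between two feature matrices (e.g., hypothetical EEG and auditory embeddings of the same trials); all variable names and shapes are assumptions.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two feature matrices.

    X: (n_samples, d1) and Y: (n_samples, d2) hold per-trial features from
    two modalities (e.g., EEG and auditory embeddings -- hypothetical here).
    Returns a similarity score in [0, 1]; 1 means the representations are
    identical up to an orthogonal transform and scaling.
    """
    # Center each feature dimension (the "centered" in CKA).
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Numerator: squared Frobenius norm of the cross-covariance (HSIC term).
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    # Denominator: self-similarity norms of each modality.
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return hsic / (norm_x * norm_y)
```

Because the score is invariant to orthogonal transforms and isotropic scaling of either feature space, it can compare representations of different dimensionality, which is what makes it suitable for heterogeneous neural/auditory modalities.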
Keywords: EEG; kernel methods; music emotion recognition; piano-roll algorithm.
Conflict of interest statement
The authors declare no conflicts of interest.