This is a preprint.
Higher-order Sonification of the Human Brain
- PMID: 40678226
- PMCID: PMC12270209
- DOI: 10.21203/rs.3.rs-6623643/v1
Abstract
Sonification, the process of translating data into sound, has recently gained traction as a tool both for disseminating scientific findings and for enabling visually impaired individuals to analyze data. Despite its potential, most current sonification methods remain limited to one-dimensional data, primarily due to the absence of practical, quantitative, and robust techniques for handling multi-dimensional datasets. We analyze structural magnetic resonance imaging (MRI) data of the human brain by integrating two- and three-point statistical measures in Fourier space: the power spectrum and the bispectrum. These quantify the spatial correlations of 3D voxel intensity distributions, yielding reduced bispectra that capture higher-order interactions. To showcase the potential of the sonification approach, we focus on a reduced bispectrum configuration which, when applied to the OASIS-3 dataset (864 imaging sessions), yields a brain age regression model with a mean absolute error (MAE) of 4.7 years. Finally, we apply sonification to the ensemble-averaged (median) outputs of this configuration across five age groups: 40-50, 50-60, 60-70, 70-80, and 80-100 years. The auditory output clearly reveals differences between these age groups, an observation further supported visually by inspection of the corresponding sheet music scores. Our results demonstrate that the information loss (quantified, e.g., by the normalized mean squared error) incurred when reconstructing the original bispectra from the sonified signal is minimal, specifically for the configurations sensitive to brain aging. This approach allows us to encode multi-dimensional data into time-series-like arrays suitable for sonification, creating new opportunities for scientific exploration and enhancing accessibility for a broader audience.
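To make the pipeline described in the abstract concrete, the minimal sketch below (Python, NumPy only) illustrates how two- and three-point Fourier statistics of a 3D voxel array can be reduced to a one-dimensional, time-series-like array and then mapped onto pitches. The shell binning, the equilateral-only bispectrum configuration, its normalization, and the MIDI pitch mapping are illustrative assumptions, not choices taken from the paper.

```python
# Minimal sketch (not the authors' pipeline): estimating a spherically averaged
# power spectrum and an equilateral reduced bispectrum for a 3D voxel array via
# FFT, then mapping the result onto a pitch sequence for sonification.
import numpy as np


def radial_bins(shape, n_bins=16):
    """Assign every Fourier-space voxel to a |k| shell (bin index 0..n_bins-1)."""
    freqs = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    k = np.sqrt(sum(f**2 for f in freqs))
    edges = np.linspace(0.0, k.max(), n_bins + 1)
    return np.digitize(k, edges[1:-1]), n_bins


def power_spectrum(volume, n_bins=16):
    """Two-point statistic: shell-averaged |FFT|^2 of the mean-subtracted field."""
    fk = np.fft.fftn(volume - volume.mean())
    bins, n = radial_bins(volume.shape, n_bins)
    power = np.abs(fk) ** 2
    return np.array([power[bins == i].mean() for i in range(n)])


def reduced_bispectrum_equilateral(volume, n_bins=16):
    """Three-point statistic for the equilateral triangle k1 = k2 = k3,
    normalized by 3 P(k)^2 (one common reduced-bispectrum convention)."""
    fk = np.fft.fftn(volume - volume.mean())
    bins, n = radial_bins(volume.shape, n_bins)
    pk = power_spectrum(volume, n_bins)
    out = np.zeros(n)
    for i in range(n):
        # Shell-filtered field back in real space; the mean of its cube is a
        # standard FFT-based estimator of the equilateral bispectrum.
        shell = np.fft.ifftn(np.where(bins == i, fk, 0)).real
        out[i] = (shell**3).mean() / (3.0 * pk[i] ** 2 + 1e-30)
    return out  # one value per k-shell: a 1D, time-series-like array


def to_midi_pitches(values, low=48, high=84):
    """Map a 1D feature array onto MIDI note numbers (illustrative mapping)."""
    span = values.max() - values.min()
    v = (values - values.min()) / (span + 1e-30)
    return np.round(low + v * (high - low)).astype(int)


# Example on a random volume standing in for a preprocessed MRI scan.
volume = np.random.default_rng(0).normal(size=(32, 32, 32))
print(to_midi_pitches(reduced_bispectrum_equilateral(volume)))
```

The key design point the sketch tries to convey is that binning the bispectrum over k-shells collapses a multi-dimensional statistic into a short ordered sequence, which is exactly the form a sonification engine (or a reconstruction check such as the normalized mean squared error) can consume directly.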
Conflict of interest statement
The authors declare no competing interests. Additional Declarations: No competing interests reported.