A multimodal dynamical variational autoencoder for audiovisual speech representation learning
- PMID: 38266474
- DOI: 10.1016/j.neunet.2024.106120
Abstract
High-dimensional data such as natural images or speech signals exhibit some form of regularity, preventing their dimensions from varying independently. This suggests that there exists a lower-dimensional latent representation from which the high-dimensional observed data were generated. Uncovering the hidden explanatory features of complex data is the goal of representation learning, and deep latent variable generative models have emerged as promising unsupervised approaches. In particular, the variational autoencoder (VAE), which is equipped with both a generative and an inference model, allows for the analysis, transformation, and generation of various types of data. Over the past few years, the VAE has been extended to deal with data that are either multimodal or dynamical (i.e., sequential). In this paper, we present a multimodal and dynamical VAE (MDVAE) applied to unsupervised audiovisual speech representation learning. The latent space is structured to dissociate the latent dynamical factors that are shared between the modalities from those that are specific to each modality. A static latent variable is also introduced to encode the information that is constant over time within an audiovisual speech sequence. The model is trained in an unsupervised manner on an audiovisual emotional speech dataset, in two stages. In the first stage, a vector quantized VAE (VQ-VAE) is learned independently for each modality, without temporal modeling. The second stage consists of learning the MDVAE model on the intermediate representation of the VQ-VAEs before quantization. The disentanglement between static versus dynamical and modality-specific versus modality-common information occurs during this second training stage. Extensive experiments are conducted to investigate how audiovisual speech latent factors are encoded in the latent space of MDVAE.
These experiments include manipulating audiovisual speech, audiovisual facial image denoising, and audiovisual speech emotion recognition. The results show that MDVAE effectively combines the audio and visual information in its latent space. They also show that the learned static representation of audiovisual speech can be used for emotion recognition with little labeled data, and with better accuracy compared with unimodal baselines and a state-of-the-art supervised model based on an audiovisual transformer architecture.
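The structured latent space described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's architecture: the latent dimensions, the linear decoders, and the feature size are all hypothetical, and the MDVAE's actual encoders, temporal priors, and VQ-VAE features are far richer. It only illustrates how a per-sequence static latent w, a shared dynamical latent z_av, and modality-specific dynamical latents z_a and z_v would feed each modality's decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; chosen only for illustration)
T = 5             # number of frames in the sequence
d_w = 8           # static latent, constant over the sequence
d_av = 4          # dynamical latent shared by audio and video
d_a, d_v = 3, 3   # modality-specific dynamical latents

# Sample the structured latent variables of an MDVAE-like generative model
w = rng.normal(size=d_w)            # one vector per sequence
z_av = rng.normal(size=(T, d_av))   # one vector per frame, shared
z_a = rng.normal(size=(T, d_a))     # audio-specific, per frame
z_v = rng.normal(size=(T, d_v))     # visual-specific, per frame

# Each modality's decoder sees the static latent, the shared dynamics,
# and only its own modality-specific dynamics. Here the decoders are
# stand-in random linear maps followed by a tanh nonlinearity.
W_a = rng.normal(size=(d_w + d_av + d_a, 16))
W_v = rng.normal(size=(d_w + d_av + d_v, 16))

audio_in = np.concatenate([np.tile(w, (T, 1)), z_av, z_a], axis=1)
video_in = np.concatenate([np.tile(w, (T, 1)), z_av, z_v], axis=1)
audio_feats = np.tanh(audio_in @ W_a)   # (T, 16) toy audio features
video_feats = np.tanh(video_in @ W_v)   # (T, 16) toy visual features

print(audio_feats.shape, video_feats.shape)
```

In the actual two-stage setup, these decoder outputs would correspond to the pre-quantization intermediate representations of the per-modality VQ-VAEs learned in the first stage, on which the MDVAE is then trained.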
Keywords: Audiovisual speech processing; Deep generative modeling; Disentangled representation learning; Multimodal and dynamical data; Variational autoencoder.
Copyright © 2024 The Authors. Published by Elsevier Ltd. All rights reserved.
Conflict of interest statement
Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.