A survey on deep multimodal learning for computer vision: advances, trends, applications, and datasets

Khaled Bayoudh et al. Vis Comput. 2022;38(8):2939-2970. doi: 10.1007/s00371-021-02166-7. Epub 2021 Jun 10.

Abstract

Research progress in multimodal learning has grown rapidly over the last decade in several areas, especially in computer vision. The growing potential of multimodal data streams and deep learning algorithms has contributed to the increasing ubiquity of deep multimodal learning, which involves developing models capable of processing and analyzing multimodal information in a unified way. Unstructured real-world data can inherently take many forms, also known as modalities, often including visual and textual content. Extracting relevant patterns from this kind of data remains a motivating goal for researchers in deep learning. In this paper, we seek to improve the computer vision community's understanding of the key concepts and algorithms of deep multimodal learning by exploring how to build deep models that integrate and combine heterogeneous visual cues across sensory modalities. In particular, we summarize six perspectives from the current literature on deep multimodal learning: multimodal data representation, multimodal fusion (both traditional and deep learning-based schemes), multitask learning, multimodal alignment, multimodal transfer learning, and zero-shot learning. We also survey current multimodal applications and present a collection of benchmark datasets for solving problems in various vision domains. Finally, we highlight the limitations and challenges of deep multimodal learning and provide insights and directions for future research.

Keywords: Applications; Computer vision; Datasets; Deep learning; Multimodal learning; Sensory modalities.


Conflict of interest statement

Conflict of interest: The authors declare that they have no conflict of interest.

Figures

Fig. 1. An example of a multimodal pipeline that includes three different modalities.
Fig. 2. A schematic illustration of the method used: the visual modality (video) involves the extraction of facial regions of interest, followed by a visual mapping representation scheme. The resulting representations are then temporally fused into a common space, and audio descriptions are generated in parallel. The two modalities are then combined using a multimodal fusion operation to predict the target class label (emotion) of the test sample.
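To make this kind of pipeline concrete, here is a minimal PyTorch sketch of a bimodal audio-visual emotion classifier along the lines of Fig. 2. All module choices and dimensions (the linear face encoder standing in for a CNN backbone, the 128-d features, the GRU used for temporal fusion, the 40-d audio descriptor) are illustrative assumptions, not the implementation surveyed in the paper.

```python
import torch
import torch.nn as nn

class AudioVisualEmotionNet(nn.Module):
    """Illustrative bimodal pipeline: per-frame face features are
    temporally fused with a GRU, an audio descriptor is encoded
    separately, and the two modality embeddings are fused to
    classify the emotion."""
    def __init__(self, num_emotions=7, feat_dim=128):
        super().__init__()
        # Visual branch: maps each cropped face frame to a feature vector
        # (a CNN backbone would be used in practice; a Linear stands in here).
        self.face_encoder = nn.Linear(112 * 112 * 3, feat_dim)
        # Temporal fusion of the per-frame features into a common space.
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)
        # Audio branch: encodes a precomputed descriptor (e.g., MFCCs).
        self.audio_encoder = nn.Linear(40, feat_dim)
        # Multimodal fusion by concatenation, then classification.
        self.classifier = nn.Linear(2 * feat_dim, num_emotions)

    def forward(self, face_frames, audio_desc):
        # face_frames: (batch, time, 112*112*3); audio_desc: (batch, 40)
        v = torch.relu(self.face_encoder(face_frames))
        _, h = self.temporal(v)            # h: (1, batch, feat_dim)
        v = h.squeeze(0)                   # temporally fused visual embedding
        a = torch.relu(self.audio_encoder(audio_desc))
        fused = torch.cat([v, a], dim=-1)  # multimodal fusion (concatenation)
        return self.classifier(fused)      # emotion logits

model = AudioVisualEmotionNet()
logits = model(torch.randn(2, 16, 112 * 112 * 3), torch.randn(2, 40))
```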
Fig. 3. Difference between visual and textual representation.
Fig. 4. Conventional methods for multimodal data fusion: a early fusion, b late fusion, c hybrid fusion.
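The fusion schemes of Fig. 4 differ mainly in where the modalities meet. A minimal sketch, assuming two feature vectors of sizes 64 and 32 and a generic classification head (all dimensions and layers are hypothetical):

```python
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """a) Early fusion: features are concatenated first, then a single
    model processes the joint representation."""
    def __init__(self, dim_a=64, dim_b=32, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_a + dim_b, 64),
                                 nn.ReLU(),
                                 nn.Linear(64, num_classes))

    def forward(self, xa, xb):
        return self.net(torch.cat([xa, xb], dim=-1))

class LateFusion(nn.Module):
    """b) Late fusion: each modality gets its own model, and only the
    per-modality decisions (here, logits) are combined at the end."""
    def __init__(self, dim_a=64, dim_b=32, num_classes=10):
        super().__init__()
        self.net_a = nn.Linear(dim_a, num_classes)
        self.net_b = nn.Linear(dim_b, num_classes)

    def forward(self, xa, xb):
        return self.net_a(xa) + self.net_b(xb)  # e.g., sum/average of logits

# Hybrid fusion (c) combines both ideas, e.g., fusing intermediate
# features while also combining per-modality outputs.
xa, xb = torch.randn(4, 64), torch.randn(4, 32)
print(EarlyFusion()(xa, xb).shape, LateFusion()(xa, xb).shape)
```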
Fig. 5. Structure of a bimodal DBN.
Fig. 6. Structure of a bimodal AE.
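A bimodal autoencoder as in Fig. 6 typically encodes each modality separately, merges the codes into a shared latent representation, and reconstructs both inputs from it. A minimal sketch under assumed dimensions (the exact architecture in the figure may differ):

```python
import torch
import torch.nn as nn

class BimodalAutoencoder(nn.Module):
    """Illustrative bimodal AE: per-modality encoders, a shared latent
    code, and per-modality decoders. All dimensions are assumptions."""
    def __init__(self, dim_a=64, dim_b=32, latent=16):
        super().__init__()
        self.enc_a = nn.Linear(dim_a, latent)
        self.enc_b = nn.Linear(dim_b, latent)
        self.shared = nn.Linear(2 * latent, latent)  # joint representation
        self.dec_a = nn.Linear(latent, dim_a)
        self.dec_b = nn.Linear(latent, dim_b)

    def forward(self, xa, xb):
        z = torch.relu(self.shared(torch.cat(
            [torch.relu(self.enc_a(xa)), torch.relu(self.enc_b(xb))],
            dim=-1)))
        return self.dec_a(z), self.dec_b(z)

# Training minimizes the sum of the two reconstruction losses.
ae = BimodalAutoencoder()
xa, xb = torch.randn(4, 64), torch.randn(4, 32)
ra, rb = ae(xa, xb)
loss = nn.functional.mse_loss(ra, xa) + nn.functional.mse_loss(rb, xb)
```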
Fig. 7. Structure of a bimodal CNN.
Fig. 8. A schematic illustration of bidirectional multimodal RNN (m-RNN) [223].
Fig. 9. A schematic illustration of multimodal GAN.
Fig. 10. A schematic illustration of the attention-based machine translation model.
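The attention mechanism underlying Fig. 10 can be summarized by the additive formulation of Bahdanau et al. [1]: the decoder state s_{t-1} is scored against each encoder annotation h_j, the scores are normalized with a softmax, and the resulting weights form a context vector for the next decoding step.

```latex
% Additive (Bahdanau-style) attention [1]; W_a, U_a, v_a are learned.
\begin{align}
  e_{tj}      &= v_a^{\top}\tanh\left(W_a s_{t-1} + U_a h_j\right)
                && \text{(alignment score)} \\
  \alpha_{tj} &= \frac{\exp(e_{tj})}{\sum_{k=1}^{T_x} \exp(e_{tk})}
                && \text{(attention weights over source positions)} \\
  c_t         &= \sum_{j=1}^{T_x} \alpha_{tj}\, h_j
                && \text{(context vector fed to the decoder)}
\end{align}
```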
Fig. 11. A meta-architecture in the case of two tasks A and B [109].
Fig. 12. An example of a multimodal transfer learning process.
Fig. 13. Difference in results between EQA and VQA tasks: a EQA [90], b VQA [129].
Fig. 14. Example of NST algorithm output, transferring the style of a painting onto a given image.
Fig. 15. Waymo self-driving car equipped with several on-board sensors [163].

References

    1. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. arXiv:1409.0473 (2016)
    2. Bengio, Y., Courville, A., Vincent, P.: Representation learning: a review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 35, 1798–1828 (2013). doi: 10.1109/TPAMI.2013.50
    3. Bayoudh, K.: From machine learning to deep learning, 1st edn. Ebook, ISBN: 9781387465606 (2017)
    4. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521, 436–444 (2015). doi: 10.1038/nature14539
    5. Lawrence, S., Giles, C.L.: Overfitting and neural networks: conjugate gradient and backpropagation. In: Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000), pp. 114–119 (2000)
