Bilinear pooling in video-QA: empirical challenges and motivational drift from neurological parallels

Thomas Winterbottom et al.

PeerJ Comput Sci. 2022 Jun 3;8:e974. doi: 10.7717/peerj-cs.974. eCollection 2022.

Abstract

Bilinear pooling (BLP) refers to a family of operations recently developed for fusing features from different modalities, predominantly for visual question answering (VQA) models. Successive BLP techniques have yielded higher performance with lower computational expense, yet at the same time they have drifted further from the original motivational justification of bilinear models, becoming instead empirically motivated by task performance. Furthermore, despite significant success in text-image fusion for VQA, BLP has not yet gained the same prominence in video question answering (video-QA). Though BLP methods have continued to perform well on video tasks when fusing vision and non-textual features, BLP has recently been overshadowed by other techniques for fusing vision and textual features in video-QA. We aim to add a new perspective to this empirical and motivational drift in BLP. We take a step back and discuss the motivational origins of BLP, highlighting the often-overlooked parallels to neurological theories (Dual Coding Theory and the two-stream model of vision). We seek to carefully and experimentally ascertain the empirical strengths and limitations of BLP as a multimodal text-vision fusion technique in video-QA using two models (the TVQA baseline and the heterogeneous-memory-enhanced 'HME' model) and four datasets (TVQA, TGif-QA, MSVD-QA, and EgoVQA). We examine the impact both of simply replacing feature concatenation in the existing models with BLP and of a modified version of the TVQA baseline, which we name the 'dual-stream' model, designed to accommodate BLP. We find that our relatively simple integration of BLP does not increase, and mostly harms, performance on these video-QA benchmarks. Drawing on our results, recent work applying BLP to video-QA, and recently proposed theoretical multimodal fusion taxonomies, we offer insight into why BLP-driven performance gains may be more difficult to achieve in video-QA benchmarks than in earlier VQA models. We share our perspective on, and suggest solutions for, the key issues we identify with BLP techniques for multimodal fusion in video-QA. Looking beyond the empirical justification of BLP techniques, we propose both alternatives and improvements to multimodal fusion by drawing neurological inspiration from Dual Coding Theory and the two-stream model of vision. We qualitatively highlight the potential for neurologically inspired approaches in video-QA by identifying the relative abundance of psycholinguistically 'concrete' words in the vocabularies of each of the text components (e.g., questions and answers) of the four video-QA datasets we experiment with.
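
For readers unfamiliar with the substitution being tested, the sketch below contrasts concatenation-based fusion with a factorised (MFB-style) bilinear pooling layer in PyTorch. It is a minimal illustration under our own assumptions, not the authors' implementation: the module names, feature dimensions, and the choice of the MFB variant are ours.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConcatFusion(nn.Module):
    # Baseline-style fusion: concatenate text and vision features, then project.
    def __init__(self, text_dim, vis_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(text_dim + vis_dim, out_dim)

    def forward(self, t, v):
        return self.proj(torch.cat([t, v], dim=-1))

class FactorisedBilinearFusion(nn.Module):
    # MFB-style low-rank bilinear pooling: project both modalities, multiply element-wise,
    # sum-pool over the rank-k factors, then power- and L2-normalise.
    def __init__(self, text_dim, vis_dim, out_dim, k=5):
        super().__init__()
        self.k, self.out_dim = k, out_dim
        self.t_proj = nn.Linear(text_dim, k * out_dim)
        self.v_proj = nn.Linear(vis_dim, k * out_dim)

    def forward(self, t, v):
        joint = self.t_proj(t) * self.v_proj(v)                      # rank-k bilinear interaction
        joint = joint.view(-1, self.out_dim, self.k).sum(dim=-1)     # sum-pool over the k factors
        joint = torch.sign(joint) * torch.sqrt(joint.abs() + 1e-12)  # signed square-root normalisation
        return F.normalize(joint, dim=-1)                            # L2 normalisation

# Toy usage: fuse a 300-d question embedding with a 2048-d visual feature for a batch of 4.
t, v = torch.randn(4, 300), torch.randn(4, 2048)
print(ConcatFusion(300, 2048, 512)(t, v).shape)               # torch.Size([4, 512])
print(FactorisedBilinearFusion(300, 2048, 512)(t, v).shape)   # torch.Size([4, 512])

The contrast the sketch makes explicit is that every output of the bilinear layer is a product of both modalities, whereas the concatenation baseline can in principle learn to ignore one modality by down-weighting its half of the projection (cf. Figure 9).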

Keywords: Bilinear pooling; Deep-CCA; Dual coding theory; Ego-VQA; MSVD-QA; Multimodal fusion; TGif-QA; TVQA; Two-stream model of vision; Video question answering.


Conflict of interest statement

Alistair McLean is employed at Carbon AI, Middlesbrough.

Figures

Figure 1. Visualisation of mode-n fibres and matricisation.
Figure 2. Block term decomposition (n = 3).
Figure 3. TVQA Model.
⊙/⊕ = Element-wise multiplication/addition, ⊡ = Context Matching (Seo et al., 2017; Yu et al., 2018a), β = BLP. Any feature stream may be enabled/disabled.
Figure 4. HME model.
Figure 5. ⊕ = Concatenation, β = BLP.
Figure 6. Baseline concatenation stream processor from TVQA model (left-A) vs our BLP stream processor (right-B).
⊙ = Element-wise multiplication, β = BLP, ⊡ = Context Matching.
Figure 7. Our Dual-Stream Model.
⊡ = Context Matching.
Figure 8. Baseline concatenation stream processor from TVQA model (left-A) vs our DCCA stream processor (right-B).
⊙ = Element-wise multiplication, ⊡ = Context Matching.
Figure 9. Visualisation of the differences between concatenation and bilinear representations for unimodal processing.
Concatenation (left-A) can theoretically allow unimodal features from text or vision to be processed independently of the other modality by reducing the other modality's weighted contribution (see 'V1 Only'). Bilinear representations (right-B) force multimodal interactions; it is less clear how useful 'unimodal' information is processed.
Figure 10. Visualisation of the 1st and 3rd cross-stream scenarios for the two-stream model of vision described by Milner (2017).
The early bilinear model proposed by Tenenbaum & Freeman (2000) strikingly resembles the 1st scenario (left-A). The 3rd, more recently favoured scenario features a continuous exchange of information across streams at multiple stages, and can be realised by introducing 'cross-talking' between deep learning features (right-B).
Figure 11. Visualisation of moving from less tangible visual features to more 'imagen-like' visual features, e.g., convolutional feature maps of an image.
Figure 12. The relative abundance of psycholinguistic 'concreteness' scores in the vocabularies of each source of text in the video-QA datasets we experiment with.
Stopwords are not included. Concreteness scores are taken from the following datasets: MT40k (Brysbaert, Warriner & Kuperman, 2013), USF (Nelson, Mcevoy & Schreiber, 1998), SimLex999 (Hill, Reichart & Korhonen, 2015), Clark-Paivio (Clark & Paivio, 2004), Toronto Word Pool (Friendly et al., 1982), Chinese Word Norm Corpus (Yee, 2017), MEGAHR-Crossling (Ljubešić, Fišer & Peti-Stantić, 2018), Glasgow Norms (Scott et al., 2017; Reilly & Kean, 2007), and the norms of Sianipar, Groenestijn & Dijkstra (2016). Scores range from 0 (most abstract) to 1 (most concrete); when a word appears in more than one dataset, its scores are averaged.
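
As a rough illustration of how the per-word statistics behind Figure 12 can be assembled, the sketch below pools concreteness scores from several norm datasets, averages the scores of words that appear in more than one dataset, and then looks up a vocabulary with stopwords removed. It is a hypothetical sketch in Python, not the authors' pipeline; the function names and toy data are ours.

from collections import defaultdict

def pool_concreteness(norm_datasets):
    # norm_datasets: list of dicts mapping word -> concreteness score in [0, 1].
    scores = defaultdict(list)
    for norms in norm_datasets:
        for word, score in norms.items():
            scores[word].append(score)
    # Average when a word appears in more than one norm dataset.
    return {word: sum(vals) / len(vals) for word, vals in scores.items()}

def vocab_concreteness(vocab, pooled, stopwords=frozenset()):
    # Return the pooled score for each non-stopword vocabulary item found in the norms.
    return {w: pooled[w] for w in vocab if w in pooled and w not in stopwords}

# Toy example with two made-up norm lists.
pooled = pool_concreteness([{"table": 0.95, "idea": 0.20}, {"table": 0.90, "run": 0.60}])
print(vocab_concreteness({"table", "idea", "the"}, pooled, stopwords={"the"}))
# e.g. {'table': 0.925, 'idea': 0.2} (key order may vary)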
