Biol Psychiatry Cogn Neurosci Neuroimaging. 2016 May;1(3):230-244. doi: 10.1016/j.bpsc.2015.12.005.

Multimodal fusion of brain imaging data: A key to finding the missing link(s) in complex mental illness


Vince D Calhoun et al. Biol Psychiatry Cogn Neurosci Neuroimaging. 2016 May.

Abstract

It is becoming increasingly clear that combining multimodal brain imaging data can provide more information about individual subjects by exploiting the rich joint information that exists across modalities. However, the number of studies that perform true multimodal fusion (i.e., capitalizing on joint information among modalities) is still remarkably small given the known benefits. In part, this is because multimodal studies require broader expertise in collecting, analyzing, and interpreting the results than do unimodal studies. In this paper, we start by introducing the basic reasons why multimodal data fusion is important, what it can do, and, importantly, how it can help us avoid incorrect conclusions and compensate for imperfect brain imaging studies. We also discuss the challenges that need to be confronted for such approaches to be more widely applied by the community. We then review the diverse studies that have used multimodal data fusion (focused primarily on psychosis) and introduce some of the existing analytic approaches. Finally, we discuss up-and-coming approaches to multimodal fusion, including deep learning and multimodal classification, which show considerable promise. Our conclusion is that multimodal data fusion is growing rapidly but remains underutilized. The complexity of the human brain, coupled with the incomplete measurement provided by existing imaging technology, makes multimodal fusion essential to mitigate misdirection and, hopefully, to provide a key to finding the missing link(s) in complex mental illness.

Keywords: brain function; connectivity; data fusion; independent component analysis; psychosis; schizophrenia.


Figures

Figure 1
A spectrum of data fusion approaches. Fusion, in increasing order of joint information provided, can range from simple visual inspection of two modalities (red and yellow circles), to overlaying them (e.g., PET/CT fusion), to analyzing them in series so that one modality informs another (e.g., fMRI-seeded EEG reconstruction), to a full joint analysis of multimodal relationships.
Figure 2
The benefit of a joint analysis is that we can capitalize on the joint distribution of the multimodal imaging data, which can improve our ability to discriminate health from disease. When we have two data sets, each with numerous variables, we could compute huge numbers of cross-correlations (adjusting for the requisite multiple comparisons). Here, multivariate approaches such as independent component analysis have a definite advantage: they provide a means to identify relationships between two very large data sets while simultaneously identifying the (hopefully) most relevant variables representing this information (i.e., simultaneously performing data reduction). In addition, multiple studies have shown improved individual-subject classification above and beyond unimodal approaches. Such studies are typically cross-validated (meaning algorithms are trained on part of the data and accuracy is evaluated on another part of the data) to avoid overfitting (figure modified from (10)).
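To make this workflow concrete, the sketch below stacks features from two modalities side by side, extracts joint independent components, and evaluates a cross-validated classifier on the resulting per-subject loadings. It is a minimal sketch, assuming simulated subject-by-feature matrices and scikit-learn as an illustrative toolkit; the variable names (fmri_feats, eeg_feats, y), feature sizes, and component count are assumptions, not the authors' pipeline.

```python
# Minimal sketch: joint-ICA-style fusion of two modalities followed by
# cross-validated classification. All data, names, and sizes are
# illustrative placeholders.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects = 60
fmri_feats = rng.standard_normal((n_subjects, 500))  # e.g., voxel-wise maps
eeg_feats = rng.standard_normal((n_subjects, 200))   # e.g., ERP amplitudes
y = rng.integers(0, 2, n_subjects)                   # 0 = control, 1 = patient

# Concatenate the modalities along the feature axis so each extracted
# component spans both modalities with a single per-subject loading.
joint = np.hstack([fmri_feats, eeg_feats])

# Cross-validated classification: the ICA reduction and the SVM are refit on
# each training fold, and accuracy is evaluated on held-out subjects, as the
# caption describes, to avoid overfitting.
clf = make_pipeline(StandardScaler(),
                    FastICA(n_components=10, random_state=0, max_iter=1000),
                    SVC(kernel="linear"))
print(cross_val_score(clf, joint, y, cv=5).mean())
```

Because the data here are random, the reported accuracy will hover near chance; the point is the structure of the pipeline (joint reduction plus held-out evaluation), not the numbers.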
Figure 3
Univariate vs. multivariate approaches for capturing information among multiple data types. The left panel shows a cloud of points from two data sets analyzed with a univariate approach, essentially a pairwise analysis of each set of points. Such an approach cannot capture related patterns across multiple sets of points, as indicated on the right side of the figure; this is a key advantage of a multivariate approach. As shown in the right panel, the identified patterns pool together multiple data points and thus can help identify patterns made up of a combination of relatively weak individual data points that together convey a significant finding. Weighted combinations of one modality are linked to weighted combinations of another modality, which can then be tested for associations with variables of interest (e.g., disease status, symptoms). The extracted information is typically (but not necessarily) a linear weighted combination of all variables. Each variable's weight indicates its contribution to the component and helps us interpret it; the variables (e.g., voxels) with the largest weights contribute the most (figure modified from (10)).
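One standard way to realize "weighted combinations of one modality linked to weighted combinations of another" is canonical correlation analysis (CCA). The sketch below is a minimal illustration under that assumption, using simulated data with a hidden shared factor; CCA and the names used here are an illustrative choice, not necessarily the specific method depicted in the figure.

```python
# Minimal sketch: link two modalities via CCA, then test the linked
# per-subject scores against a group label. Data and names are simulated
# placeholders.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_subjects = 80
shared = rng.standard_normal((n_subjects, 1))   # hidden factor linking modalities
mod_a = shared @ rng.standard_normal((1, 50)) + rng.standard_normal((n_subjects, 50))
mod_b = shared @ rng.standard_normal((1, 30)) + rng.standard_normal((n_subjects, 30))
group = shared[:, 0] > 0                        # stand-in for diagnosis

# Each canonical component is a weighted combination of one modality paired
# with a weighted combination of the other; the weights (cca.x_weights_,
# cca.y_weights_) show each variable's contribution and aid interpretation.
cca = CCA(n_components=2)
scores_a, scores_b = cca.fit_transform(mod_a, mod_b)  # per-subject scores

print("cross-modality correlation of first pair:",
      np.corrcoef(scores_a[:, 0], scores_b[:, 0])[0, 1])

# The linked subject-level scores can then be tested against variables of
# interest such as disease status or symptoms.
t, p = ttest_ind(scores_a[group, 0], scores_a[~group, 0])
print("group difference on first combination: t = %.2f, p = %.3f" % (t, p))
```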
Figure 4
(Left) Results showing high similarity between brain networks extracted from EEG and fMRI. (Right) Graph results computed from MEG and fMRI data collected from the same subjects for two tasks show dramatically different answers.
Figure 5
Summary of several multivariate voxel-wise data fusion approaches, covering the asymmetric/symmetric and blind/semi-blind categories discussed in the text.
Figure 6
Summary of seven blind and semi-blind data-driven methods for multimodal fusion. Figure modified and reprinted with permission from Sui et al. (21).
Figure 7
Summary of multimodal fusion studies found via PubMed. Note the rapid increase in all categories, including both two-way and N-way fusion.
Figure 8
A) Two-dimensional display of a deep learning analysis of brain imaging data for multiple models ranging from raw data to three levels (each dot represents an individual). Results show that separation between patients and controls (for both testing and training data) improves with model depth. B) Example of a multimodal deep learning architecture.
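For readers unfamiliar with the panel B idea, the sketch below shows the general shape of a two-branch multimodal deep network: one encoder per modality feeding a shared fusion layer for patient/control classification. It is a minimal sketch written in PyTorch under assumed input sizes and layer widths, not the architecture from the figure.

```python
# Minimal sketch of a two-branch multimodal classifier; all dimensions and
# names are illustrative assumptions.
import torch
import torch.nn as nn

class MultimodalNet(nn.Module):
    def __init__(self, dim_mri=500, dim_eeg=200, hidden=64):
        super().__init__()
        # One encoder per modality learns a modality-specific representation.
        self.enc_mri = nn.Sequential(nn.Linear(dim_mri, hidden), nn.ReLU())
        self.enc_eeg = nn.Sequential(nn.Linear(dim_eeg, hidden), nn.ReLU())
        # Fusion layers combine the two representations and output
        # patient/control logits; stacking more layers here (or in the
        # branches) corresponds to the deeper models compared in panel A.
        self.fusion = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 2))

    def forward(self, mri, eeg):
        joint = torch.cat([self.enc_mri(mri), self.enc_eeg(eeg)], dim=1)
        return self.fusion(joint)

model = MultimodalNet()
logits = model(torch.randn(8, 500), torch.randn(8, 200))  # batch of 8 subjects
print(logits.shape)  # torch.Size([8, 2])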

References

    1. Savopol F, Armenakis C. Merging of heterogeneous data for emergency mapping: data integration or data fusion? Proc. ISPRS; Buenos Aires, Argentina: 2002. pp. 615–620.
    2. Arndt C. Information gained by data fusion. Proc. SPIE. 1996.
    3. Calhoun VD, Adalı T. Feature-based fusion of medical imaging data. IEEE Trans Inf Technol Biomed. 2009;13:1–10. PMC2737598. - PMC - PubMed
    4. Kim DS, Ronen I, Formisano E, Kim KH, Kim M, van Zijl P, Ugurbil K, Mori S, Goebel R. Simultaneous mapping of functional maps and axonal connectivity in cat visual cortex. Proc. HBM; Sendai, Japan: 2003. - PubMed
    5. Ramnani N, Lee L, Mechelli A, Phillips C, Roebroeck A, Formisano E. Exploring brain connectivity: a new frontier in systems neuroscience. Functional Brain Connectivity, 4–6 April 2002, Düsseldorf, Germany. Trends Neurosci. 2002;25:496–497. - PubMed