. 2021 Jan 12;16(1):e0245230.
doi: 10.1371/journal.pone.0245230. eCollection 2021.

Multi-view classification with convolutional neural networks

Marco Seeland et al. PLoS One. 2021.

Erratum in

Abstract

Human decision making often relies on visual information gathered from different views or perspectives. In machine-learning-based image classification, however, we typically infer an object's class from just a single image of the object. Especially for challenging classification problems, the visual information conveyed by a single image may be insufficient for an accurate decision. We propose a classification scheme that fuses visual information captured in images depicting the same object from multiple perspectives. Convolutional neural networks are used to extract and encode visual features from the multiple views, and we propose strategies for fusing this information. More specifically, we investigate the following three strategies: (1) fusing convolutional feature maps at differing network depths; (2) fusion of bottleneck latent representations prior to classification; and (3) score fusion. We systematically evaluate these strategies on three datasets from different domains. Our findings emphasize the benefit of integrating information fusion into the network rather than performing it by post-processing of classification scores. Furthermore, we demonstrate through a case study that already trained networks can be easily extended by the best fusion strategy, outperforming other approaches by a large margin.
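The three fusion strategies can be illustrated with a minimal numpy sketch. All shapes and arrays below are toy stand-ins for the per-view CNN outputs (the paper uses ResNet-50, which is not reproduced here); the choice of max for feature-map fusion and mean for the other two is one plausible instantiation, not necessarily the exact operators used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the outputs a CNN would produce for V views
# (all shapes here are assumptions for illustration).
V, C, H, W = 3, 8, 4, 4                         # views, channels, spatial dims
n_classes = 5

feature_maps = rng.normal(size=(V, C, H, W))    # conv feature maps per view
bottlenecks = rng.normal(size=(V, 64))          # bottleneck latents per view
scores = rng.random(size=(V, n_classes))        # per-view class scores
scores /= scores.sum(axis=1, keepdims=True)     # normalize to softmax-like rows

# (1) Feature-map fusion: element-wise max across views at some network depth;
# the fused map would continue through the remaining shared layers.
fused_maps = feature_maps.max(axis=0)           # shape (C, H, W)

# (2) Bottleneck fusion: pool the latent representations of all views
# before the final classifier layer.
fused_latent = bottlenecks.mean(axis=0)         # shape (64,)

# (3) Score fusion (late fusion): average the per-view class scores
# after each view has been classified independently.
fused_scores = scores.mean(axis=0)              # shape (n_classes,)
prediction = int(fused_scores.argmax())
```

Strategies (1) and (2) integrate fusion into the network itself, which is what the abstract reports as more effective than the post-processing approach in (3).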

Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1. A collection of images is composed of multiple views depicting the same object instance from different perspectives.

Fig 2. Considered multi-view fusion strategies: (a) general architecture of a deep multi-view CNN; (b) investigated fusion strategies; and (c) fusion strategies mapped onto the ResNet-50 architecture. Vertical lines mark the insertion of a view-fusion layer.

Fig 3. Example collections of the three multi-view datasets: (a) CompCars, (b) PlantCLEF, and (c) AntWeb. Photographs of the ant specimen CASENT0281563 by Estella Ortega retrieved from www.AntWeb.org [32].

Fig 4. Distance matrices for the three datasets. Diagonal elements show intra-class distances, off-diagonal elements inter-class distances. Elements are sorted from well-separable to less-separable classes as computed from the class-wise silhouette scores.

Fig 5. Distribution of class-averaged top-1 classification accuracy for the single-view baseline and the multi-view classification strategies. White dots indicate median accuracy; black bars display interquartile ranges. Thin black lines indicate lower and upper adjacent values at 1.5× the interquartile range.
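Fig 4 sorts classes by their silhouette scores, which contrast intra-class and inter-class distances. A minimal sketch of that score on synthetic data (cluster centers, sample counts, and the Euclidean metric are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-D embeddings for 3 well-separated classes (toy data).
X = np.concatenate([rng.normal(loc=mu, size=(20, 2)) for mu in (0.0, 3.0, 6.0)])
labels = np.repeat([0, 1, 2], 20)

def silhouette(X, labels):
    """Mean silhouette coefficient: for each sample, (b - a) / max(a, b),
    where a is its mean intra-class distance and b the smallest mean
    distance to any other class. Values near 1 mean well-separable classes."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    n = len(X)
    s = np.empty(n)
    for i in range(n):
        same = (labels == labels[i]) & (np.arange(n) != i)
        a = D[i, same].mean()
        b = min(D[i, labels == c].mean() for c in set(labels) if c != labels[i])
        s[i] = (b - a) / max(a, b)
    return s.mean()

score = silhouette(X, labels)   # in [-1, 1]; higher = better separated
```

Averaging the per-sample coefficients within each class gives the class-wise scores used to order the matrix rows in Fig 4.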

References

    1. LeCun Y, Bengio Y, Hinton G. Deep Learning. Nature. 2015;521(7553):436–444. 10.1038/nature14539 - DOI - PubMed
    2. Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, et al. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision. 2015;115(3):211–252. 10.1007/s11263-015-0816-y - DOI
    3. Seeland M, Rzanny M, Boho D, Wäldchen J, Mäder P. Image-based classification of plant genus and family for trained and untrained plant species. BMC Bioinformatics. 2019;20(1):4. 10.1186/s12859-018-2474-x - DOI - PMC - PubMed
    4. Wäldchen J, Rzanny M, Seeland M, Mäder P. Automated plant species identification—Trends and future directions. PLOS Computational Biology. 2018;14(4):1–19. 10.1371/journal.pcbi.1005993 - DOI - PMC - PubMed
    5. Marques ACR, Raimundo MM, Cavalheiro EMB, Salles LFP, Lyra C, Von Zuben FJ. Ant genera identification using an ensemble of convolutional neural networks. PLOS ONE. 2018;13(1):1–13. 10.1371/journal.pone.0192011 - DOI - PMC - PubMed
