Dual-Coupled CNN-GCN-Based Classification for Hyperspectral and LiDAR Data

Lei Wang et al. Sensors (Basel). 2022 Jul 31;22(15):5735. doi: 10.3390/s22155735.
Abstract

Deep learning techniques have brought substantial performance gains to remote sensing image classification. Among them, convolutional neural networks (CNNs) can extract rich spatial and spectral features from hyperspectral images within short-range regions, whereas graph convolutional networks (GCNs) can model middle- and long-range spatial relations (or structural features) between samples through their graph structure. These complementary features make fine-grained classification of remote sensing images possible. In addition, hyperspectral (HS) images and light detection and ranging (LiDAR) images provide spatial-spectral information and elevation information of targets on the Earth's surface, respectively; such multi-source remote sensing data can further improve classification accuracy in complex scenes. This paper proposes a classification method for HS and LiDAR data based on a dual-coupled CNN-GCN structure. The model can be divided into a coupled CNN and a coupled GCN. The former employs a weight-sharing mechanism to structurally fuse and simplify the dual CNN models and extracts the spatial features from the HS and LiDAR data. The latter first concatenates the HS and LiDAR data to construct a uniform graph structure; the dual GCN models then perform structural fusion by sharing the graph structures and the weight matrices of some layers, extracting the structural information of each modality. Finally, the hybrid features are fed into a standard classifier for the pixel-level classification task under a unified feature fusion module. Extensive experiments on two real-world hyperspectral and LiDAR datasets demonstrate the effectiveness and superiority of the proposed method over state-of-the-art baselines such as two-branch CNN and context CNN. In particular, the overall accuracy on Trento (99.11%) is the best classification performance reported so far.
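The coupled-GCN idea described above (a shared graph built from the concatenated HS and LiDAR features, with some layer weights shared between the two branches before fusion) can be sketched as follows. This is a minimal NumPy illustration under assumed shapes, not the authors' implementation: the k-NN graph construction, the layer sizes, and the feature dimensions are all hypothetical.

```python
import numpy as np

def knn_graph(X, k=2):
    # Build a symmetric k-nearest-neighbour adjacency in feature space.
    n = X.shape[0]
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k + 1]  # skip self at position 0
        A[i, nbrs] = 1.0
    return np.maximum(A, A.T)  # symmetrize

def normalize_adjacency(A):
    # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def gcn_layer(A_norm, X, W):
    # One graph convolution followed by ReLU.
    return np.maximum(A_norm @ X @ W, 0.0)

rng = np.random.default_rng(0)
n = 6
hs = rng.standard_normal((n, 4))      # hypothetical HS spectral features
lidar = rng.standard_normal((n, 1))   # hypothetical LiDAR elevation feature

# Uniform graph: built once from the concatenated HS + LiDAR features
# and shared by both GCN branches.
A_norm = normalize_adjacency(knn_graph(np.concatenate([hs, lidar], axis=1)))

# Branch-specific first layers (input dims differ), then a shared
# weight matrix couples the two branches structurally.
W_hs = rng.standard_normal((4, 8))
W_lidar = rng.standard_normal((1, 8))
W_shared = rng.standard_normal((8, 8))

h_hs = gcn_layer(A_norm, gcn_layer(A_norm, hs, W_hs), W_shared)
h_lidar = gcn_layer(A_norm, gcn_layer(A_norm, lidar, W_lidar), W_shared)

# Feature fusion by concatenation before a downstream classifier.
fused = np.concatenate([h_hs, h_lidar], axis=1)
print(fused.shape)  # (6, 16)
```

Sharing `W_shared` across branches is what reduces the parameter count of the dual models while forcing both modalities into a common structural feature space; the real network adds the coupled CNN branch and a trained classifier head on top of the fused features.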

Keywords: convolutional neural network; graph convolutional network; hyperspectral; light detection and ranging.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. A general framework of the proposed DCCG.

Figure 2. Illustration of the structure of CCNet. Conv and Pool denote the convolution and maximum pooling operations, respectively; p is the given neighborhood size.

Figure 3. Illustration of the structure of CGNet. [,] indicates cascade operations.

Figure 4. Visualization of the Trento dataset: (a) false-color image (using bands 50, 45, and 5 as R, G, and B, respectively); (b) LiDAR image; (c) training samples; (d) test samples; (e) color code.

Figure 5. Visualization of the Houston dataset: (a) false-color image (using bands 64, 43, and 22 as R, G, and B, respectively); (b) LiDAR image; (c) training samples; (d) test samples; (e) color code.

Figure 6. Classification maps obtained by different methods for the Trento dataset: (a) CNN-H; (b) CNN-L; (c) miniGCN-H; (d) miniGCN-L; (e) CCNet-HL; (f) CGNet-HL; (g) DCCGNet-H; (h) DCCGNet-L; (i) DCCG-HL.

Figure 7. Loss curves obtained by different methods for the Trento dataset: (a) CNN-H; (b) CNN-L; (c) miniGCN-H; (d) miniGCN-L; (e) CCNet-HL; (f) CGNet-HL; (g) DCCGNet-H; (h) DCCGNet-L; (i) DCCG-HL.

Figure 8. Classification maps obtained by different methods for the Houston dataset: (a) CNN-H; (b) CNN-L; (c) miniGCN-H; (d) miniGCN-L; (e) CCNet-HL; (f) CGNet-HL; (g) DCCGNet-H; (h) DCCGNet-L; (i) DCCG-HL.

Figure 9. Loss curves obtained by different methods for the Houston dataset: (a) CNN-H; (b) CNN-L; (c) miniGCN-H; (d) miniGCN-L; (e) CCNet-HL; (f) CGNet-HL; (g) DCCGNet-H; (h) DCCGNet-L; (i) DCCG-HL.

Figure 10. Effect of different neighborhood sizes on OA.
