Class-aware feature attention-based semantic segmentation on hyperspectral images
- PMID: 39903744
- PMCID: PMC11793730
- DOI: 10.1371/journal.pone.0309997
Abstract
This research explores an innovative approach to segmenting hyperspectral images. A class-aware feature attention mechanism is combined with an enhanced attention-based network, and the resulting model, FAttNet, is proposed for semantic segmentation of hyperspectral images. It addresses challenges that traditional segmentation networks encounter on hyperspectral imagery: inaccurate edge segmentation, inconsistency across diverse target forms, and suboptimal predictive performance. First, the class-aware feature attention procedure improves the extraction and processing of distinct types of semantic information. Next, a spatial attention pyramid is applied in parallel branches to strengthen spatial correlations and extract contextual information from images at different scales. Finally, an encoder-decoder structure refines the segmentation results, improving the precision with which distinct land cover patterns are delineated. Experimental results demonstrate that FAttNet outperforms commonly used semantic segmentation networks. Specifically, on the GaoFen image dataset, FAttNet achieves a mean intersection over union (MIoU) of 77.03% and a segmentation accuracy of 87.26%, surpassing existing networks.
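The abstract names three components: class-aware feature attention, a parallel spatial attention pyramid, and encoder-decoder refinement. Below is a minimal PyTorch sketch of how such a pipeline could be wired together. The module names, channel widths, gating choices, and the MIoU helper are illustrative assumptions based only on the abstract, not the paper's actual FAttNet implementation.

```python
# A minimal sketch, assuming a standard PyTorch setup; all module designs
# here are illustrative guesses at the components the abstract describes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassAwareFeatureAttention(nn.Module):
    """Reweights features per semantic class using a coarse class prediction."""
    def __init__(self, channels, num_classes):
        super().__init__()
        self.classifier = nn.Conv2d(channels, num_classes, kernel_size=1)
        self.project = nn.Conv2d(num_classes, channels, kernel_size=1)

    def forward(self, x):
        class_maps = torch.softmax(self.classifier(x), dim=1)  # B x K x H x W
        attention = torch.sigmoid(self.project(class_maps))    # B x C x H x W
        return x * attention + x  # residual reweighting

class SpatialAttentionPyramid(nn.Module):
    """Parallel spatial-attention branches at several dilation rates,
    gathering context at different scales as the abstract describes."""
    def __init__(self, channels, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x):
        outs = []
        for branch in self.branches:
            feat = branch(x)
            # per-pixel spatial gate derived from channel statistics
            gate = torch.sigmoid(feat.mean(dim=1, keepdim=True))
            outs.append(feat * gate)
        return self.fuse(torch.cat(outs, dim=1))

class FAttNetSketch(nn.Module):
    """Encoder-decoder wrapper combining both attention modules."""
    def __init__(self, in_bands=103, num_classes=9, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.cafa = ClassAwareFeatureAttention(channels, num_classes)
        self.sap = SpatialAttentionPyramid(channels)
        self.decoder = nn.Conv2d(channels, num_classes, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feat = self.encoder(x)
        feat = self.sap(self.cafa(feat))
        logits = self.decoder(feat)
        return F.interpolate(logits, size=(h, w), mode="bilinear",
                             align_corners=False)

def mean_iou(pred, target, num_classes):
    """MIoU, the metric reported in the abstract: the per-class IoU averaged
    over classes (here, over classes present in prediction or ground truth)."""
    ious = []
    for k in range(num_classes):
        inter = ((pred == k) & (target == k)).sum().item()
        union = ((pred == k) | (target == k)).sum().item()
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / len(ious)

if __name__ == "__main__":
    x = torch.randn(2, 103, 64, 64)   # batch of hyperspectral patches
    model = FAttNetSketch()
    print(model(x).shape)             # torch.Size([2, 9, 64, 64])
```

The sketch keeps the class-aware gating ahead of the multi-scale spatial attention, matching the order the abstract gives (class-aware attention first, pyramid second, decoder refinement last); the actual network may combine these differently.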
Copyright: © 2025 Sevugan et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Conflict of interest statement
The authors have declared that no competing interests exist.
Similar articles
- Semantic Segmentation of Hyperspectral Remote Sensing Images Based on PSE-UNet Model. Sensors (Basel). 2022 Dec 10;22(24):9678. doi: 10.3390/s22249678. PMID: 36560046. Free PMC article.
- An One-step Triple Enhanced weakly supervised semantic segmentation using image-level labels. PLoS One. 2024 Oct 21;19(10):e0309126. doi: 10.1371/journal.pone.0309126. eCollection 2024. PMID: 39432517. Free PMC article.
- Semantic segmentation method of underwater images based on encoder-decoder architecture. PLoS One. 2022 Aug 25;17(8):e0272666. doi: 10.1371/journal.pone.0272666. eCollection 2022. PMID: 36006956. Free PMC article.
- Discriminative Feature Network Based on a Hierarchical Attention Mechanism for Semantic Hippocampus Segmentation. IEEE J Biomed Health Inform. 2021 Feb;25(2):504-513. doi: 10.1109/JBHI.2020.2994114. Epub 2021 Feb 5. PMID: 32406848.
- Multibranch semantic image segmentation model based on edge optimization and category perception. PLoS One. 2024 Dec 19;19(12):e0315621. doi: 10.1371/journal.pone.0315621. eCollection 2024. PMID: 39700236. Free PMC article.