A Residual-Inception U-Net (RIU-Net) Approach and Comparisons with U-Shaped CNN and Transformer Models for Building Segmentation from High-Resolution Satellite Images
- PMID: 36236721
- PMCID: PMC9570988
- DOI: 10.3390/s22197624
Abstract
Building segmentation is crucial for applications ranging from map production to urban planning. It remains a challenge due to CNNs' inability to model global context and Transformers' high memory requirements. In this study, 10 CNN- and Transformer-based models were implemented and compared. Alongside our proposed Residual-Inception U-Net (RIU-Net), U-Net, Residual U-Net, and Attention Residual U-Net, four CNN architectures (Inception, Inception-ResNet, Xception, and MobileNet) were implemented as encoders in U-Net-based models. Lastly, two Transformer-based approaches (Trans U-Net and Swin U-Net) were also used. The Massachusetts Buildings Dataset and the Inria Aerial Image Labeling Dataset were used for training and evaluation. On the Inria dataset, RIU-Net achieved the highest IoU score, F1 score, and test accuracy, with 0.6736, 0.7868, and 92.23%, respectively. On the Massachusetts Small dataset, Attention Residual U-Net achieved the highest IoU and F1 scores, with 0.6218 and 0.7606, and Trans U-Net reached the highest test accuracy, with 94.26%. On the Massachusetts Large dataset, Residual U-Net achieved the highest IoU and F1 scores, with 0.6165 and 0.7565, and Attention Residual U-Net attained the highest test accuracy, with 93.81%. The results showed that RIU-Net was significantly more successful on the Inria dataset. On the Massachusetts datasets, Residual U-Net, Attention Residual U-Net, and Trans U-Net provided successful results.
Keywords: CNN; Inception; Transformer; building segmentation; residual connections; satellite images.
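The IoU and F1 scores reported in the abstract are standard binary-segmentation metrics. A minimal sketch of how they can be computed from predicted and ground-truth building masks, using NumPy (this is an illustrative implementation, not the authors' evaluation code):

```python
import numpy as np

def iou_f1(pred, target):
    """Compute IoU and F1 (Dice) for a pair of binary segmentation masks.

    pred, target: arrays of 0/1 values of the same shape, where 1 marks
    a building pixel.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()      # true positives
    fp = np.logical_and(pred, ~target).sum()     # false positives
    fn = np.logical_and(~pred, target).sum()     # false negatives
    union = tp + fp + fn
    iou = tp / union if union else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return iou, f1
```

F1 here is equivalent to the Dice coefficient; for a single mask pair it relates to IoU as F1 = 2·IoU/(1 + IoU), though that identity does not hold for scores averaged over many images.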
Conflict of interest statement
The authors declare no conflict of interest.