Sensors (Basel). 2022 Apr 30;22(9):3440. doi: 10.3390/s22093440.

Multiscale and Hierarchical Feature-Aggregation Network for Segmenting Medical Images


Nagaraj Yamanakkanavar et al.

Abstract

We propose an encoder-decoder architecture using wide and deep convolutional layers combined with different aggregation modules for the segmentation of medical images. Initially, we obtain a rich representation of features that spans from low to high levels and from small to large scales by stacking multiple k × k kernels, where each k × k kernel operation is split into k × 1 and 1 × k convolutions. In addition, we introduce two feature-aggregation modules, multiscale feature aggregation (MFA) and hierarchical feature aggregation (HFA), to better fuse information across end-to-end network layers. The MFA module progressively aggregates features and enriches the feature representation, whereas the HFA module merges features iteratively and hierarchically to learn richer combinations of the feature hierarchy. Furthermore, because residual connections are advantageous for assembling very deep networks, we employ MFA-based long residual connections to avoid vanishing gradients along the aggregation paths. In addition, a guided block with multilevel convolution provides effective attention to the features copied from the encoder to the decoder to recover spatial information. Thus, the proposed method, which uses feature-aggregation modules combined with a guided skip connection, improves segmentation accuracy, achieving a high similarity index with the ground-truth segmentation maps. Experimental results indicate that the proposed model achieves superior segmentation performance to conventional methods for skin-lesion segmentation, with an average accuracy score of 0.97 on the ISIC-2018, PH2, and UFBA-UESC datasets.
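The abstract's factorization of a k × k convolution into a k × 1 followed by a 1 × k convolution is exact whenever the k × k kernel is rank-1 (an outer product of a column and a row vector); in general it trades expressiveness for fewer parameters (2k instead of k² weights per kernel). A minimal NumPy sketch of this equivalence, using an illustrative `conv2d` helper (not the authors' implementation):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
col = rng.standard_normal((3, 1))   # k x 1 kernel
row = rng.standard_normal((1, 3))   # 1 x k kernel
full = col @ row                    # rank-1 k x k kernel (outer product)

full_out = conv2d(img, full)                  # one 3 x 3 convolution
sep_out = conv2d(conv2d(img, col), row)       # 3 x 1 followed by 1 x 3

# For a rank-1 kernel the two paths are mathematically identical.
assert np.allclose(full_out, sep_out)
```

In a trained network the stacked k × 1 and 1 × k layers (usually with a nonlinearity between them) are learned directly rather than derived from a full kernel, so the factorized pair is a cheaper layer design rather than a lossless decomposition.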

Keywords: convolutional neural network; feature fusion; medical-image segmentation.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. Pipeline of the proposed method.

Figure 2. Preprocessing steps: (a) original image, (b) contrast-enhanced image, (c) hair mask, (d) hairless image with removed pixel information, (e) hairless image, and (f) gray-color image.

Figure 3. Overall architecture of the proposed model.

Figure 4. Qualitative comparison of the proposed method and conventional methods for a skin-lesion dataset. From left to right: (a) original input images; (b) preprocessed images; (c–j) input images with the overlay of ground truth (blue contour) and predicted outputs (red contour) indicating the segmentation results obtained by U-Net, M-Net, CE-Net, M-SegNet, RA-UNet, nnU-Net, CMM-Net, and the proposed method, respectively.

Figure 5. Qualitative comparison of the proposed method and existing methods for the UFBA-UESC dental dataset. From left to right: (a) original input image; (b) ground-truth segmentation map; (c–j) segmentation results obtained using U-Net, M-Net, CE-Net, M-SegNet, RA-UNet, nnU-Net, CMM-Net, and the proposed method, respectively.

Figure 6. Box plot of the accuracies of the proposed method and existing methods for the UFBA-UESC dental dataset.
