PeerJ Comput Sci. 2022 Jul 11;8:e1033. doi: 10.7717/peerj-cs.1033. eCollection 2022.

A retrospective study of 3D deep learning approach incorporating coordinate information to improve the segmentation of pre- and post-operative abdominal aortic aneurysm

Thanongchai Siriapisith et al. PeerJ Comput Sci.

Abstract

Abdominal aortic aneurysm (AAA) is one of the most common diseases worldwide. 3D segmentation of AAA provides useful information for surgical decisions and follow-up treatment. However, existing segmentation methods are time consuming and not practical for routine use. In this article, the segmentation task is addressed automatically with a deep learning based approach, which has been proven to solve several medical imaging problems with excellent performance. This article therefore proposes a new solution for AAA segmentation using deep learning with 3D convolutional neural network (CNN) architectures that also incorporate coordinate information. The tested CNNs are UNet, AG-DSV-UNet, VNet, ResNetMed and DenseVoxNet. The 3D CNNs are trained with a dataset of high-resolution (256 × 256) non-contrast and post-contrast CT images containing 64 slices from each of 200 patients. The dataset consists of contiguous CT slices, used without augmentation or post-processing. The experiments show that incorporating coordinate information improves the segmentation results. The best accuracies on non-contrast and contrast-enhanced images have average dice scores of 97.13% and 96.74%, respectively. Transfer learning from a network pre-trained on the pre-operative dataset to post-operative endovascular aneurysm repair (EVAR) images was also performed. The segmentation of post-operative EVAR using transfer learning on non-contrast and contrast-enhanced CT datasets achieved the best dice scores of 94.90% and 95.66%, respectively.
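The accuracies above are reported as Dice scores. As a point of reference only, the short Python sketch below shows the conventional Dice similarity coefficient for binary 3D masks; it is not the authors' evaluation code, and the 64 × 256 × 256 mask shape simply mirrors the volumes described in the abstract.

import numpy as np

def dice_score(pred, truth, eps=1e-7):
    # Dice similarity coefficient between two binary volumes of equal shape:
    # 2 * |A ∩ B| / (|A| + |B|), expressed as a fraction in [0, 1].
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy example with two random 64 x 256 x 256 masks (slices x height x width).
rng = np.random.default_rng(0)
prediction = rng.random((64, 256, 256)) > 0.5
ground_truth = rng.random((64, 256, 256)) > 0.5
print(f"Dice: {dice_score(prediction, ground_truth):.4f}")

On this scale, a reported score of 97.13% corresponds to a Dice value of 0.9713.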

Keywords: 3D segmentation; Abdominal aortic aneurysm; Computed tomography; Coordinate information; Deep learning; Transfer learning.


Conflict of interest statement

The authors declare that they have no competing interests.

Figures

Figure 1. Anatomy of AAA.
The anatomy of abdominal aortic aneurysm on abdominal CT images. The upper row shows an untreated AAA (arrow) (A) before (non-contrast) and (B) after (post-contrast) contrast administration. The lower row shows the AAA after endovascular aneurysm repair (arrow) (A) before and (B) after contrast administration. The non-contrast images have lower contrast resolution than the post-contrast images.
Figure 2. The coordinate data.
Initialization of the coordinate information values along the x, y and z axes.
Figure 3. Three types of coordinate information.
An illustration of how the coordinate information is created. CoMat1 concatenates the x-, y- and z-coordinate matrices into three channels. CoMat2 contains only the z-coordinate matrix (one channel). CoMat3 averages the x-, y- and z-coordinate matrices into a single channel.
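To make the three encodings concrete, the sketch below builds CoMat1, CoMat2 and CoMat3 from normalized coordinate ramps with NumPy. The [0, 1] normalization and the axis ordering are assumptions for illustration; the caption does not specify them.

import numpy as np

def coordinate_channels(depth, height, width):
    # Normalized coordinate ramps along each axis (assumed range [0, 1]).
    z = np.linspace(0.0, 1.0, depth)
    y = np.linspace(0.0, 1.0, height)
    x = np.linspace(0.0, 1.0, width)
    # Broadcast each ramp over the full (depth, height, width) volume.
    zz, yy, xx = np.meshgrid(z, y, x, indexing="ij")

    comat1 = np.stack([xx, yy, zz], axis=0)           # x, y, z as three channels
    comat2 = zz[np.newaxis, ...]                      # z-coordinate only, one channel
    comat3 = ((xx + yy + zz) / 3.0)[np.newaxis, ...]  # average of x, y, z, one channel
    return comat1, comat2, comat3

c1, c2, c3 = coordinate_channels(64, 256, 256)
print(c1.shape, c2.shape, c3.shape)  # (3, 64, 256, 256) (1, 64, 256, 256) (1, 64, 256, 256)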
Figure 4. Method framework.
Framework of full training on pre-operative abdominal aortic aneurysm (AAA): the pre-processing step selects 64 contiguous CT slices at the infrarenal segment of the abdominal aorta, which are then converted into a single 3D volume. The networks are trained as 3D CNNs in two separate experiments, with the coordinate information embedded into the input data as additional channels. The non-contrast and contrast-enhanced CT datasets are used to train each network, producing two models, A1 and A2, respectively.
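As a rough illustration of embedding coordinate information as extra input channels, the sketch below concatenates a CoMat1-style grid with a CT volume and passes the result through a small placeholder 3D network in PyTorch. The tiny Conv3d stack only stands in for the UNet, AG-DSV-UNet, VNet, ResNetMed and DenseVoxNet architectures actually tested, and the volume size is reduced from the paper's 64 × 256 × 256 to keep the toy example light.

import torch
import torch.nn as nn

# Synthetic stand-ins (reduced size): one CT volume plus normalized x/y/z grids.
d, h, w = 16, 64, 64
ct_volume = torch.rand(1, 1, d, h, w)          # (batch, channel, D, H, W)
zz, yy, xx = torch.meshgrid(
    torch.linspace(0, 1, d),
    torch.linspace(0, 1, h),
    torch.linspace(0, 1, w),
    indexing="ij",
)
coord = torch.stack([xx, yy, zz]).unsqueeze(0)  # CoMat1-style, three channels
inputs = torch.cat([ct_volume, coord], dim=1)   # 1 + 3 = 4 input channels

net = nn.Sequential(                            # placeholder, not one of the tested CNNs
    nn.Conv3d(4, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(8, 1, kernel_size=1),             # per-voxel AAA logit
)
print(net(inputs).shape)                        # torch.Size([1, 1, 16, 64, 64])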
Figure 5. Training curve.
Model training accuracy of UNet, AG-DSV-UNet, VNet, ResNetMed and DenseVoxNet on the non-contrast and contrast-enhanced pre-operative abdominal aortic aneurysm datasets. The left column is the non-contrast CT dataset and the right column is the post-contrast CT dataset. The label colors for non-coordinate, CoMat1, CoMat2 and CoMat3 are blue, light blue, green and orange, respectively.
Figure 6. Example of AAA result.
An example of CNN segmentation of abdominal aortic aneurysm. The upper row shows non-contrast CT images: source image (A), CNN segmentation (B) and ground truth (C). The lower row shows contrast-enhanced CT images: source image (D), CNN segmentation (E) and ground truth (F).
Figure 7. False positive example.
False positive prediction of AAA. The left column shows the non-contrast AAA with the standard UNet prediction on the axial view (A) and 3D volume rendering on the coronal view (D). There is a false prediction of a second AAA on the right side of the abdomen (*), which is actually a well-distended gallbladder. The middle column shows the same patient with the UNet+CoMat3 prediction on the axial view (B) and 3D volume rendering (E); the false prediction does not occur with UNet+CoMat3. The ground truth is shown in the right column on the non-contrast axial view (C) and 3D volume rendering (F).
Figure 8. Example of post-operative EVAR.
An example of CNN segmentation of post-operative EVAR of AAA, visualized on axial and 3D volume-rendered images. The upper row shows post-contrast axial CT images: source image (A), CNN segmentation (B) and ground truth (C). The lower row shows the post-contrast coronal reformatted source image (D) and coronal reformatted images with 3D volume rendering of the CNN segmentation (E) and the ground truth (F).
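The post-operative EVAR models shown here start from weights trained on the pre-operative dataset, as described in the abstract. Below is a hedged sketch of that transfer-learning step; the placeholder network, optimizer, loss and data loader are assumptions for illustration, not the authors' training configuration.

import torch
import torch.nn as nn

def make_net():
    # Placeholder 3D network with 4 input channels (CT + CoMat1-style coordinates);
    # stands in for the architectures tested in the paper.
    return nn.Sequential(
        nn.Conv3d(4, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv3d(8, 1, kernel_size=1),
    )

pretrained = make_net()                               # stands in for model A1 (pre-operative AAA)
evar_model = make_net()
evar_model.load_state_dict(pretrained.state_dict())   # initialize from the pre-operative weights

optimizer = torch.optim.Adam(evar_model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()
# Fine-tuning loop over the post-operative EVAR dataset (hypothetical loader):
# for volume, mask in evar_loader:
#     optimizer.zero_grad()
#     loss = loss_fn(evar_model(volume), mask)
#     loss.backward()
#     optimizer.step()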
