Med Phys. 2017 May;44(5):1846-1856. doi: 10.1002/mp.12214. Epub 2017 Apr 13.

Optimal reconstruction and quantitative image features for computer-aided diagnosis tools for breast CT

Juhun Lee et al. Med Phys. 2017 May.

Abstract

Purpose: The purpose of this study was to determine the optimal representative reconstruction and quantitative image feature set for a computer-aided diagnosis (CADx) scheme for dedicated breast computed tomography (bCT).

Methods: We used 93 bCT scans containing 102 breast lesions (62 malignant, 40 benign). Using an iterative image reconstruction (IIR) algorithm, we created 37 reconstructions with different image appearances for each case and added the clinical reconstruction for comparison, yielding 38 reconstructions per case. We used image sharpness, determined by the gradient of gray values in a parenchymal portion of the reconstructed breast, as a surrogate measure of image quality/appearance for the 38 reconstructions. After segmenting each breast lesion, we extracted 23 quantitative image features. Using leave-one-out cross-validation (LOOCV), we performed feature selection, classifier training, and testing with a linear discriminant analysis classifier. We then selected the representative reconstruction and feature set as those for which the classifier achieved the best diagnostic performance among all reconstructions and feature sets. Finally, we conducted an observer study with six radiologists using a subset of the breast lesions (N = 50) and, using 1000 bootstrap samples, compared the diagnostic performance of the trained classifier with that of the radiologists.
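As an illustration of the evaluation loop described above, the following is a minimal sketch (not the authors' code): it computes a gradient-based sharpness surrogate over a parenchymal region of interest and obtains leave-one-out decision scores from a linear discriminant analysis classifier. The array names and the ROI mask are assumptions, and the per-fold feature-selection step is omitted for brevity.

```python
# Minimal sketch (not the authors' implementation) of LOOCV evaluation with an
# LDA classifier and a gradient-based sharpness surrogate. Array names and the
# parenchymal ROI mask are illustrative; per-fold feature selection is omitted.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut


def sharpness_surrogate(volume, parenchyma_mask):
    """Mean gray-value gradient magnitude inside a parenchymal ROI."""
    gz, gy, gx = np.gradient(volume.astype(float))
    grad_mag = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
    return grad_mag[parenchyma_mask].mean()


def loocv_decision_scores(features, labels):
    """Continuous malignancy score for each lesion, trained on all other lesions."""
    features, labels = np.asarray(features), np.asarray(labels)
    scores = np.empty(len(labels), dtype=float)
    for train_idx, test_idx in LeaveOneOut().split(features):
        clf = LinearDiscriminantAnalysis()
        clf.fit(features[train_idx], labels[train_idx])
        # decision_function yields a continuous score suitable for ROC analysis
        scores[test_idx] = clf.decision_function(features[test_idx])
    return scores
```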

Results: The diagnostic performance of the trained classifier increased as the image sharpness of a given reconstruction increased. Among all combinations of reconstructions and quantitative image feature sets, we selected one of the sharp reconstructions and the three feature sets with the highest diagnostic performance under LOOCV as the representative reconstruction and feature set for the classifier. On this representative reconstruction and feature set, the classifier achieved better diagnostic performance, with an area under the ROC curve (AUC) of 0.94 (95% CI = [0.81, 0.98]), than the radiologists, whose maximum AUC was 0.78 (95% CI = [0.63, 0.90]). Moreover, the partial AUC at 90% sensitivity or higher of the classifier (pAUC = 0.085, 95% CI = [0.063, 0.094]) was statistically better (P < 0.0001) than that of the radiologists (maximum pAUC = 0.009, 95% CI = [0.003, 0.024]).
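For clarity, here is a hedged sketch of how the two ROC summary measures above can be computed: the full AUC, and the partial AUC at 90% sensitivity or higher, taken as the area between the empirical ROC curve and the horizontal line TPR = 0.90 (maximum possible value 0.10). Function and variable names are illustrative; the authors' exact implementation may differ.

```python
# Hedged sketch of the ROC summary measures reported above: full AUC and the
# partial AUC for sensitivity (TPR) >= 0.90, i.e., the area between the empirical
# ROC curve and the horizontal line TPR = 0.90 (maximum possible value 0.10).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve


def partial_auc_high_sensitivity(labels, scores, min_tpr=0.90):
    """Area between the ROC curve and the line TPR = min_tpr."""
    fpr, tpr, _ = roc_curve(labels, scores)
    clipped = np.maximum(tpr - min_tpr, 0.0)   # keep only the region above TPR = min_tpr
    return np.trapz(clipped, fpr)              # integrate over the FPR axis


# Example usage with the LOOCV scores from the sketch above (illustrative):
# auc = roc_auc_score(labels, scores)
# pauc = partial_auc_high_sensitivity(labels, scores)
```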

Conclusion: We found that the image sharpness measure can serve as a good surrogate for estimating the diagnostic performance of a given CADx algorithm. In addition, we found that there exists a reconstruction (i.e., a sharp reconstruction) and a feature set that maximize the diagnostic performance of the CADx algorithm. On this optimal representative reconstruction and feature set, the CADx algorithm outperformed the radiologists.

Keywords: CADx; breast CT; classification; curvature; image feature analysis.

Figures

Figure 1
This figure shows example breast volumes for malignant (top row) and benign (bottom row) lesion cases with the expert's manual outlines overlaid.
Figure 2
The left side shows an example of the coronal views of a breast for the 38 different reconstructions used in this study. We ordered the views in terms of their sharpness values (from left to right and from top to bottom, the image sharpness increases). The right side shows the scatter plot of image appearance values (i.e., noise and sharpness) for all 38 reconstructions. IIR1–3 and FDK refer to IIR and FDK reconstruction cases used for the observer study. IIROP indicates a candidate reconstruction we found in this study for a CADx algorithm.
Figure 3
Diagram showing how we divided each bootstrap sample (1000 samples in total) to train and test the classifier and to compare the classifier's performance with that of the radiologists.
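The bootstrap comparison outlined in Figure 3 could look roughly like the sketch below, assuming each bootstrap sample of the observer-study lesions is split into a training part for refitting the classifier and a test part on which both the classifier and a radiologist are scored; the 50/50 split, the single reader-score vector, and all names are illustrative assumptions.

```python
# Minimal sketch of the bootstrap comparison outlined in Fig. 3 (assumptions:
# 50/50 split of each bootstrap sample, one reader-score vector, illustrative names).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score


def bootstrap_auc_differences(features, labels, reader_scores,
                              n_boot=1000, train_frac=0.5, seed=0):
    """AUC(classifier) - AUC(radiologist) on the test part of each bootstrap sample."""
    rng = np.random.default_rng(seed)
    features = np.asarray(features)
    labels = np.asarray(labels)
    reader_scores = np.asarray(reader_scores)
    n = len(labels)
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # resample lesions with replacement
        split = int(train_frac * n)
        tr, te = idx[:split], idx[split:]
        if len(np.unique(labels[tr])) < 2 or len(np.unique(labels[te])) < 2:
            continue                           # skip resamples missing a class
        clf = LinearDiscriminantAnalysis().fit(features[tr], labels[tr])
        auc_cadx = roc_auc_score(labels[te], clf.decision_function(features[te]))
        auc_reader = roc_auc_score(labels[te], reader_scores[te])
        diffs.append(auc_cadx - auc_reader)
    return np.asarray(diffs)
```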
Figure 4
This figure shows the selected features for the classifier and its diagnostic performance on each reconstruction. (a) shows the AUC of the classifier on each reconstruction. (b) shows the sharpness of each reconstruction. (c) shows the selection frequency of each feature in the classifier for each reconstruction. Features #1–#4, #5–#11, #12–#16, #17–#20, and #21–#23 represent histogram, shape, margin, texture, and curvature features, respectively. As sharpness increased, the diagnostic performance of the classifier improved (a and b). Overall, the total curvature feature (feature #21) was selected 100% of the time for all reconstructions except the smoothest one. For smooth reconstructions, the classifier frequently used the shape and margin descriptors; for sharp reconstructions, it frequently used the margin and histogram descriptors. As the images became sharper, the number and types of selected features decreased and stabilized. [Colour figure can be viewed at wileyonlinelibrary.com]
Figure 5
This figure shows the scatter plots of the selected features (F4, F12, and F21) for the classifier on reconstruction #34. Malignant lesions tended to have higher margin gray value variation (F4) and total curvature (F21) values and lower average radial gradient (F12) values than benign lesions.
Figure 6
This figure shows the averaged empirical ROC curves of the CADx scheme and of the six radiologists. The CADx scheme achieved an average AUC of 0.94, higher than that of the radiologists on all four reconstructions (IIR1–3 and FDK, with AUCs of 0.76–0.78), although the differences did not reach statistical significance after correcting for multiple comparisons. For the partial AUC at 90% sensitivity or higher, i.e., the area between the ROC curve and the dashed line in the figures, the CADx scheme showed statistically better performance than the radiologists on all reconstructions.
