Comparative Study
Acta Orthop. 2019 Aug;90(4):394-400. doi: 10.1080/17453674.2019.1600125. Epub 2019 Apr 3.

Artificial intelligence detection of distal radius fractures: a comparison between the convolutional neural network and professional assessments


Kaifeng Gan et al. Acta Orthop. 2019 Aug.

Abstract

Background and purpose - Artificial intelligence has rapidly become a powerful method in image analysis with the use of convolutional neural networks (CNNs). We assessed the ability of a CNN, with a fast object detection algorithm first identifying the regions of interest, to detect distal radius fractures (DRFs) on anterior-posterior (AP) wrist radiographs.

Patients and methods - 2,340 AP wrist radiographs from 2,340 patients were enrolled in this study. We trained the CNN to analyze the wrist radiographs in the dataset. Feasibility of the object detection algorithm was evaluated by intersection over union (IoU). The diagnostic performance of the network was measured by the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and Youden Index; the results were compared with those of medical professional groups.

Results - The object detection model achieved a high average IoU, and no IoU was below 0.5. The AUC of the CNN on this test was 0.96. The network outperformed a group of radiologists in distinguishing images with DRFs from normal images in terms of accuracy, sensitivity, specificity, and Youden Index, and showed diagnostic performance similar to that of the orthopedists on these measures.

Interpretation - The network exhibited a diagnostic ability similar to that of the orthopedists and superior to that of the radiologists in distinguishing AP wrist radiographs with DRFs from normal images under limited conditions. Further studies are required to determine the feasibility of applying our method as an auxiliary tool in clinical practice under extended conditions.
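For reference, the two metrics named above have standard definitions that can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code; the function and variable names are assumptions, and boxes are assumed to be axis-aligned (x1, y1, x2, y2) tuples.

```python
def iou(box_a, box_b):
    """Intersection over union (IoU) of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the overlap rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def youden_index(sensitivity, specificity):
    """Youden Index J = sensitivity + specificity - 1."""
    return sensitivity + specificity - 1
```

An IoU of 0.5 or more, the threshold every detection in the study cleared, means the candidate box shares at least half of its combined area with the ground-truth box.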


Figures

Figure 1.
A wrist radiograph manually annotated with a red rectangle as the ground-truth bounding box and automatically annotated with a blue rectangle as the candidate bounding box. The red and blue rectangles represent the edges of the region of interest (ROI) detected by the orthopedists and by Faster R-CNN, respectively.
Figure 2.
The formula used to calculate intersection over union (IoU).
Figure 3.
A typical example of augmentation applied to one image from the annotated training dataset during the training of Inception-v4.
Figure 4.
Flow diagram of the training and test courses of Faster R-CNN (shown in green) and Inception-v4 (shown in red).
Figure 5.
The receiver operating characteristic (ROC) curve for the test output of the Inception-v4 model. The dots on the plot represent the sensitivity and 1-specificity of the human groups (the blue dot represents the orthopedists’ group; the red dot represents the radiologists’ group). The sensitivity/1-specificity dot of the radiologists’ group lies below the ROC curve of the Inception-v4 model, and the sensitivity/1-specificity dot of the orthopedists’ group lies above the ROC curve of the Inception-v4 model.
Figure 6.
The same wrist with a DRF in the anterior–posterior view radiograph (a) and in the lateral view radiograph (b). The hidden DRF in the anterior–posterior view was apparent in the lateral view (the fracture is shown by the red arrow).
Figure 7.
A typical example of augmentation applied to one image from the training dataset during the training of Faster R-CNN (the top left image is the original).
Figure 8.
The training process of Faster R-CNN with respect to the training samples in the training and validation datasets. A mean square error (MSE) close to 0 indicates accurate model performance.
Figure 9.
The training process of Faster R-CNN with respect to the iteration number in the training and validation datasets. A mean square error (MSE) close to 0 indicates accurate model performance.
Figure 10.
The training process of the Inception-v4 model with respect to the training samples in the training and validation datasets.
Figure 11.
The training process of the Inception-v4 model with respect to the iteration number in the training and validation datasets.
