Comparison of Deep Learning Approaches for Multi-Label Chest X-Ray Classification
- PMID: 31011155
- PMCID: PMC6476887
- DOI: 10.1038/s41598-019-42294-8
Abstract
The increased availability of labeled X-ray image archives (e.g., the ChestX-ray14 dataset) has triggered a growing interest in deep learning techniques. To provide better insight into the different approaches, and their applications to chest X-ray classification, we investigate a powerful network architecture in detail: the ResNet-50. Building on prior work in this domain, we consider transfer learning with and without fine-tuning as well as the training of a dedicated X-ray network from scratch. To leverage the high spatial resolution of X-ray data, we also include an extended ResNet-50 architecture and a network integrating non-image data (patient age, gender and acquisition type) in the classification process. In a concluding experiment, we also investigate multiple ResNet depths (i.e., ResNet-38 and ResNet-101). In a systematic evaluation, using 5-fold re-sampling and a multi-label loss function, we compare the performance of the different approaches for pathology classification by ROC statistics and analyze differences between the classifiers using rank correlation. Overall, we observe a considerable spread in the achieved performance and conclude that the X-ray-specific ResNet-38, integrating non-image data, yields the best overall results. Furthermore, class activation maps are used to understand the classification process, and a detailed analysis of the impact of non-image features is provided.
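To illustrate the multi-label setup and the fusion of non-image data described in the abstract, the following is a minimal PyTorch sketch, not the authors' released implementation. It assumes a standard torchvision ResNet-50 backbone at 224×224 input, three normalised non-image features (age, gender, acquisition/view type) concatenated to the globally pooled image features, and a per-class sigmoid binary cross-entropy loss for the 14 ChestX-ray14 labels; the exact fusion point, input resolution and feature encoding used in the paper may differ.

```python
# Hypothetical sketch (not the authors' code): ResNet-50 backbone for multi-label
# chest X-ray classification with late fusion of non-image data (patient age,
# gender, acquisition type) and a per-class binary cross-entropy loss.
import torch
import torch.nn as nn
from torchvision import models


class XRayResNetFusion(nn.Module):
    def __init__(self, num_classes: int = 14, num_nonimage: int = 3):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        # Keep everything up to and including global average pooling.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        feat_dim = backbone.fc.in_features  # 2048 for ResNet-50
        # Late fusion: pooled image features concatenated with non-image features.
        self.classifier = nn.Linear(feat_dim + num_nonimage, num_classes)

    def forward(self, image: torch.Tensor, nonimage: torch.Tensor) -> torch.Tensor:
        x = self.features(image).flatten(1)   # (B, 2048)
        x = torch.cat([x, nonimage], dim=1)   # (B, 2048 + num_nonimage)
        return self.classifier(x)             # raw logits, one per pathology


model = XRayResNetFusion()
criterion = nn.BCEWithLogitsLoss()  # multi-label: sigmoid + BCE per class

images = torch.randn(4, 3, 224, 224)            # grey X-rays replicated to 3 channels
nonimage = torch.randn(4, 3)                    # normalised age, gender, view type
targets = torch.randint(0, 2, (4, 14)).float()  # multi-hot pathology labels

loss = criterion(model(images, nonimage), targets)
loss.backward()
```

For evaluation along the lines of the paper, per-class ROC AUCs could be computed on the sigmoid of these logits (e.g., with sklearn.metrics.roc_auc_score) for each of the 5 re-sampling folds and then aggregated.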
Conflict of interest statement
M.G., H.N. and A.S. are employees of Philips Research, Hamburg, Germany. I.M.B. and T.K. declare no potential conflict of interest.