Eur Radiol Exp. 2020 May 5;4(1):30.
doi: 10.1186/s41747-020-00159-0.

Opening the black box of machine learning in radiology: can the proximity of annotated cases be a way?


Giuseppe Baselli et al. Eur Radiol Exp.

Abstract

Machine learning (ML) and deep learning (DL) systems, currently employed in medical image analysis, are data-driven models often regarded as black boxes. However, improved transparency is needed to translate automated decision-making into clinical practice. To this aim, we propose a strategy to open the black box by presenting to the radiologist the annotated cases (ACs) proximal to the current case (CC), making the decision rationale and its uncertainty more explicit. The ACs, used for training, validation, and testing in supervised methods and for validation and testing in unsupervised ones, could be provided as a support to the ML/DL tool. If the CC is localised in a classification space and proximal ACs are selected by a proper metric, these ACs could be shown to the radiologist in their original form as images, enriched with their annotations, thus allowing an immediate interpretation of the CC classification. Moreover, the density of ACs in the CC neighbourhood, their image saliency maps, classification confidence, demographics, and clinical information would be available to the radiologist, who would thus know the model output (what) and the salient image regions (where), enriched by ACs providing the classification rationale (why). Summarising, if a classifier is data-driven, let us make its interpretation data-driven too.
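The core of the proposed strategy — locating the current case in a classification space and selecting proximal annotated cases by a proper metric — can be sketched as a nearest-neighbour query. This is a minimal illustration, assuming Euclidean distance over precomputed feature vectors; the function name and the toy data are illustrative, not from the paper.

```python
import numpy as np

def nearest_annotated_cases(cc_feat, ac_feats, k=5):
    """Return indices and distances of the k annotated cases (ACs)
    closest to the current case (CC) in the classifier's feature space.
    Euclidean distance is one possible proximity metric."""
    dists = np.linalg.norm(ac_feats - cc_feat, axis=1)
    order = np.argsort(dists)[:k]
    return order, dists[order]

# Toy example: 100 annotated cases embedded in a 16-dimensional feature space.
rng = np.random.default_rng(0)
ac_feats = rng.normal(size=(100, 16))
cc_feat = ac_feats[7] + 0.01  # a current case very close to AC number 7

idx, d = nearest_annotated_cases(cc_feat, ac_feats, k=3)
print(idx[0])  # AC number 7 is the nearest neighbour
```

The returned indices would be used to fetch the original AC images, their annotations, and ancillary clinical data for display alongside the CC.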

Keywords: Artificial intelligence; Decision making (computer-assisted); Diagnosis; Machine learning; Radiology.


Conflict of interest statement

G.B. and M.C. declare that they have no competing interests related to the proposed study. F.S. is the Editor-in-Chief of European Radiology Experimental; for this reason, he was not involved in any way in the revision/decision process, which was completely managed by the Deputy Editor, Dr. Akos Varga-Szemes (Medical University of South Carolina, Charleston, SC, USA).

Figures

Fig. 1
Breast arterial calcifications (BAC) detection by a convolutional neural network (CNN). a Original image (positive for BAC presence). b Detail including the unsegmented BAC (white arrow). c Heat map provided by the CNN. d Annotated image (BAC in yellow). The heat map (c) has the reduced resolution of the images input to the CNN.
Fig. 2
The L and L-1 layers of a deep neural network
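The L-1 (penultimate) layer referenced in Fig. 2 is the natural feature space in which case proximity would be measured: the last layer L produces the classification, while the layer before it yields a feature vector per case. A minimal numpy sketch of this idea, with an illustrative two-layer network (weights and sizes are assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer network: the L-1 ("penultimate") layer output is the
# feature vector in which case proximity would be measured.
W1, b1 = rng.normal(size=(16, 8)), np.zeros(8)  # input -> layer L-1
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)   # layer L-1 -> output layer L

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)  # layer L-1 activations (ReLU)
    logits = h @ W2 + b2              # layer L (classification) output
    return h, logits

x = rng.normal(size=16)
features, logits = forward(x)
print(features.shape, logits.shape)  # (8,) (2,)
```

In a real DL framework, the same effect is typically obtained by reading out the activations of the penultimate layer during a forward pass; both the CC and all ACs would be embedded this way before measuring distances.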
Fig. 3
The current case (red triangle) is positioned in the output feature space of the L-1 layer. A neighbourhood region (red dashed circle) is fixed, and the training/validation annotated cases falling inside it (red circles) are considered to provide reference images, classification confidence, and ancillary information. Cases outside the neighbourhood are shown as grey circles.
Fig. 4
Possible locations of the current case (CC) in the feature space. a The CC (red triangle) falls into a region crowded with annotated cases (ACs) assumed to be equally classified with high confidence (red circles). b The CC falls into an uninhabited region, highlighting a lack of similar training or validation cases. c The CC falls into a crowded region, yet with differently classified ACs (red and orange circles), most likely with relatively low confidence.
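The three situations of Fig. 4 lend themselves to a simple neighbourhood diagnostic: count the ACs inside a fixed radius and check how consistently they are labelled. A minimal sketch, assuming Euclidean distance and illustrative thresholds (`min_count`, the 0.9 agreement cutoff, and all names are assumptions, not the paper's method):

```python
import numpy as np

def neighbourhood_report(cc_feat, ac_feats, ac_labels, radius, min_count=5):
    """Characterise the current case's neighbourhood, mirroring Fig. 4:
    (a) crowded and consistent, (b) uninhabited, (c) crowded but mixed."""
    d = np.linalg.norm(ac_feats - cc_feat, axis=1)
    inside = d <= radius
    n = int(inside.sum())
    if n < min_count:
        return "uninhabited", n          # case (b): too few similar ACs
    labels = ac_labels[inside]
    agreement = np.bincount(labels).max() / n  # fraction of the majority class
    if agreement >= 0.9:
        return "consistent", n           # case (a): high-confidence region
    return "mixed", n                    # case (c): conflicting annotations

# Toy data: one positive cluster near the origin, one negative cluster far away.
rng = np.random.default_rng(2)
ac_feats = np.vstack([rng.normal(0.0, 0.1, size=(20, 2)),
                      rng.normal(5.0, 0.1, size=(20, 2))])
ac_labels = np.array([1] * 20 + [0] * 20)

state, n = neighbourhood_report(np.zeros(2), ac_feats, ac_labels, radius=1.0)
state2, n2 = neighbourhood_report(np.array([10.0, 10.0]), ac_feats, ac_labels, radius=1.0)
print(state, n)    # consistent 20
print(state2, n2)  # uninhabited 0
```

The report could be shown to the radiologist alongside the retrieved AC images, making explicit whether the classification rests on many concordant precedents, on none, or on conflicting ones.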
Fig. 5
Schemes of the diagnostic process aided by machine learning tools to show process differences with (a) and without (b) communication barriers. The second option allows the clinician to retrieve information about classification results (what), object localisation (where), and added information on the decision-making process (why) derived from the annotated library

