Best Practices and Checklist for Reviewing Artificial Intelligence-Based Medical Imaging Papers: Classification
- PMID: 40465054
- DOI: 10.1007/s10278-025-01548-w
Abstract
Recent advances in Artificial Intelligence (AI) methodologies and their application to medical imaging have led to an explosion of related research programs using AI to produce state-of-the-art classification performance. Ideally, research culminates in dissemination of the findings in peer-reviewed journals. To date, acceptance or rejection criteria are often subjective; however, reproducible science requires reproducible review. The Machine Learning Education Sub-Committee of the Society for Imaging Informatics in Medicine (SIIM) has identified a knowledge gap and a need to establish guidelines for reviewing these studies. The present work, written from the machine learning practitioner's standpoint, follows a similar approach to our previous paper on segmentation. In this series, the committee addresses best practices to follow in AI-based studies and presents the required sections, with examples and discussion of requirements, to make the studies cohesive, reproducible, accurate, and self-contained. This entry in the series focuses on image classification. Elements such as dataset curation, data pre-processing steps, reference standard identification, data partitioning, model architecture, and training are discussed. Sections are presented as in a typical manuscript. The content describes the information necessary to ensure a study is of sufficient quality for publication consideration and, compared with other checklists, provides a focused approach for image classification tasks. The goal of this series is to provide resources that not only help improve the review process for AI-based medical imaging papers, but also facilitate a standard for the information that should be presented within all components of the research study.
Keywords: Artificial Intelligence; Best practices; Checklist; Classification; Medical imaging; Paper review.
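As a minimal illustration of one of the elements the checklist addresses (data partitioning), the sketch below shows patient-level splitting of an image classification dataset so that images from the same patient never land in both the training and test partitions. It is not taken from the paper; the column names, file paths, and parameter choices are illustrative assumptions.

```python
# Minimal sketch (illustrative only): patient-level data partitioning for an
# image classification study. Column names such as "patient_id", "label",
# and "image_path" are assumptions, not values from the paper under review.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit


def split_by_patient(df: pd.DataFrame, test_size: float = 0.2, seed: int = 42):
    """Split a dataset manifest into train/test partitions, grouping by patient
    so that no patient's images appear in both partitions."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
    train_idx, test_idx = next(splitter.split(df, groups=df["patient_id"]))
    return df.iloc[train_idx], df.iloc[test_idx]


if __name__ == "__main__":
    # Toy manifest standing in for a curated dataset table.
    manifest = pd.DataFrame({
        "image_path": [f"img_{i}.png" for i in range(8)],
        "patient_id": ["p1", "p1", "p2", "p2", "p3", "p3", "p4", "p4"],
        "label":      [0, 0, 1, 1, 0, 1, 1, 0],
    })
    train_df, test_df = split_by_patient(manifest)
    # No patient identifier should be shared across partitions.
    assert set(train_df["patient_id"]).isdisjoint(set(test_df["patient_id"]))
    print(len(train_df), "training images,", len(test_df), "test images")
```

Reviewers checking the data partitioning description of a manuscript would look for exactly this kind of grouping constraint (by patient, study, or site) to rule out data leakage between partitions.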
© 2025. The Author(s).
Conflict of interest statement
Declarations. Competing interests: Steven L. Blumer is employed by Bayer as Americas Director, Radiology Digital Medical Affairs. Bayer owns the Calantic and Blackford AI marketplaces. In addition, he owns stock equity in Rad AI. There are no other relevant financial or non-financial interests to disclose.
References
- Maleki F, Moy L, Forghani R, et al. RIDGE: Reproducibility, Integrity, Dependability, Generalizability, and Efficiency Assessment of Medical Image Segmentation Models. J Imaging Inform Med. 2024. https://doi.org/10.1007/s10278-024-01282-9
- Beam AL, Manrai AK, Ghassemi M. Challenges to the Reproducibility of Machine Learning Models in Health Care. JAMA. 2019:6–7. https://doi.org/10.1001/jama.2019.20866
- Bluemke DA, Moy L, Bredella MA, et al. Assessing Radiology Research on Artificial Intelligence: A Brief Guide for Authors, Reviewers, and Readers—From the Radiology Editorial Board. Radiology. 2019:192515. https://doi.org/10.1148/radiol.2019192515
- Mongan J, Moy L, Kahn CE. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): A Guide for Authors and Reviewers. Radiol Artif Intell. 2020;2(2):e200029. https://doi.org/10.1148/ryai.2020200029
- Liu Y, Chen PHC, Krause J, Peng L. How to Read Articles That Use Machine Learning: Users' Guides to the Medical Literature. JAMA. 2019;322(18):1806-1816. https://doi.org/10.1001/jama.2019.16489