Multimodal AI Combining Clinical and Imaging Inputs Improves Prostate Cancer Detection
- PMID: 39074400
- DOI: 10.1097/RLI.0000000000001102
Abstract
Objectives: Deep learning (DL) studies for the detection of clinically significant prostate cancer (csPCa) on magnetic resonance imaging (MRI) often overlook potentially relevant clinical parameters such as prostate-specific antigen, prostate volume, and age. This study explored the integration of clinical parameters and MRI-based DL to enhance diagnostic accuracy for csPCa on MRI.
Materials and methods: We retrospectively analyzed 932 biparametric prostate MRI examinations performed for suspected csPCa (ISUP ≥2) at 2 institutions. Each MRI scan was automatically analyzed by a previously developed DL model to detect and segment csPCa lesions. Three sets of features were extracted: DL lesion suspicion levels, clinical parameters (prostate-specific antigen, prostate volume, age), and MRI-based lesion volumes for all DL-detected lesions. Six multimodal artificial intelligence (AI) classifiers were trained for each combination of feature sets, employing both early (feature-level) and late (decision-level) information fusion methods. The diagnostic performance of each model was tested internally on 20% of center 1 data and externally on center 2 data (n = 529). Receiver operating characteristic comparisons determined the optimal feature combination and information fusion method and assessed the benefit of multimodal versus unimodal analysis. The optimal model performance was compared with a radiologist using PI-RADS.
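For readers unfamiliar with the two fusion strategies named above, the following is a minimal illustrative sketch (not the study's implementation) of early (feature-level) versus late (decision-level) fusion. It assumes scikit-learn; the classifier choice, array shapes, and feature names are placeholders rather than details reported in the paper.

```python
# Minimal sketch of early vs late information fusion for a binary csPCa label.
# All data here are synthetic placeholders; the study's models and features differ.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
dl_suspicion = rng.random((n, 1))   # per-exam DL lesion suspicion level (placeholder)
clinical = rng.random((n, 3))       # e.g., PSA, prostate volume, age (scaled, placeholder)
y = rng.integers(0, 2, n)           # csPCa label (ISUP >= 2), synthetic

# Early fusion: concatenate the feature sets and train a single classifier.
X_early = np.hstack([dl_suspicion, clinical])
early_model = LogisticRegression(max_iter=1000).fit(X_early, y)
p_early = early_model.predict_proba(X_early)[:, 1]

# Late fusion: train one classifier per feature set, then combine their
# predicted probabilities at decision level (here, a simple average).
m_dl = LogisticRegression(max_iter=1000).fit(dl_suspicion, y)
m_cl = LogisticRegression(max_iter=1000).fit(clinical, y)
p_late = (m_dl.predict_proba(dl_suspicion)[:, 1] +
          m_cl.predict_proba(clinical)[:, 1]) / 2
```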
Results: Internally, the multimodal AI integrating DL suspicion levels with clinical features via early fusion achieved the highest performance. Externally, it surpassed baselines using clinical parameters alone (area under the curve [AUC], 0.77 vs 0.67; P < 0.001) and DL suspicion levels alone (AUC, 0.77 vs 0.70; P = 0.006). Early fusion outperformed late fusion on external data (AUC, 0.77 vs 0.73; P = 0.005). No significant performance gaps were observed between multimodal AI and radiologist assessments (internal AUC, 0.87 vs 0.88; external AUC, 0.77 vs 0.75; both P > 0.05).
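The abstract does not specify which statistical test was used for the receiver operating characteristic comparisons. As a hedged illustration only, the sketch below shows one common way to compare two AUCs computed on the same test cases, via a bootstrap over cases; the function name and inputs are hypothetical.

```python
# Illustrative bootstrap comparison of two AUCs on the same cases.
# This is a generic sketch, not the test used in the study.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_diff(y, p_a, p_b, n_boot=2000, seed=0):
    """Observed AUC(p_a) - AUC(p_b) and a two-sided bootstrap p-value."""
    rng = np.random.default_rng(seed)
    y, p_a, p_b = np.asarray(y), np.asarray(p_a), np.asarray(p_b)
    observed = roc_auc_score(y, p_a) - roc_auc_score(y, p_b)
    diffs = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        if len(np.unique(y[idx])) < 2:       # AUC needs both classes present
            continue
        diffs.append(roc_auc_score(y[idx], p_a[idx]) -
                     roc_auc_score(y[idx], p_b[idx]))
    diffs = np.asarray(diffs)
    # Two-sided p-value: how often the resampled difference falls on either side of zero.
    p = min(1.0, 2 * min((diffs <= 0).mean(), (diffs >= 0).mean()))
    return observed, p
```

A usage call might look like `bootstrap_auc_diff(y_test, p_multimodal, p_clinical)`, where the prediction arrays are hypothetical model outputs on a held-out test set.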
Conclusions: Multimodal AI (combining DL suspicion levels and clinical parameters) outperforms clinical and MRI-only AI for csPCa detection. Early information fusion enhanced AI robustness in our multicenter setting. Incorporating lesion volumes did not enhance diagnostic efficacy.
Copyright © 2024 The Author(s). Published by Wolters Kluwer Health, Inc.
Conflict of interest statement
Conflicts of interest and sources of funding: C.R., T.C.K., D.Y., and H.H. are receiving a grant from Siemens Healthineers. H.H. is receiving a grant from Canon Medical Systems. For the remaining authors, none were declared.