This is a preprint.
Auditor Models to Suppress Poor AI Predictions Can Improve Human-AI Collaborative Performance
- PMID: 40666330
- PMCID: PMC12262782
- DOI: 10.1101/2025.06.24.25330212
Abstract
Objective: Healthcare decisions are increasingly made with the assistance of machine learning (ML). ML has been known to have unfairness - inconsistent outcomes across subpopulations. Clinicians interacting with these systems can perpetuate such unfairness by overreliance. Recent work exploring ML suppression - silencing predictions based on auditing the ML - shows promise in mitigating performance issues originating from overreliance. This study aims to evaluate the impact of suppression on collaboration fairness and evaluate ML uncertainty as desiderata to audit the ML.
Materials and methods: We used data from the Vanderbilt University Medical Center electronic health record (n = 58,817) and the MIMIC-IV-ED dataset (n = 363,145) to predict the likelihood of death or ICU transfer and the likelihood of 30-day readmission. Our simulation study used gradient-boosted trees as well as an artificially high-performing oracle model. We derived clinician decisions directly from the dataset and simulated clinician acceptance of ML predictions based on previous empirical work on acceptance of CDS alerts. We measured performance using the area under the receiver operating characteristic curve (AUROC) and algorithmic fairness using the absolute averaged odds difference.
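The fairness metric named above, the absolute averaged odds difference, is the mean of the absolute true-positive-rate gap and absolute false-positive-rate gap between two subpopulations (0 indicates perfectly equalized odds). A minimal sketch of that standard definition, assuming binary labels, binary predictions, and a binary group indicator (the function name and interface are illustrative, not the paper's code):

```python
import numpy as np

def averaged_odds_difference(y_true, y_pred, group):
    """Absolute averaged odds difference between two subpopulations.

    Computed as 0.5 * (|TPR_0 - TPR_1| + |FPR_0 - FPR_1|), where rates
    are taken within each group; 0 means equalized odds hold exactly.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = []
    for g in (0, 1):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        tpr = yp[yt == 1].mean()  # sensitivity within group g
        fpr = yp[yt == 0].mean()  # false-positive rate within group g
        rates.append((tpr, fpr))
    (tpr0, fpr0), (tpr1, fpr1) = rates
    return 0.5 * (abs(tpr0 - tpr1) + abs(fpr0 - fpr1))
```

For example, if one group has TPR 0.5 and FPR 0.0 while the other has TPR 1.0 and FPR 0.5, the metric is 0.5 * (0.5 + 0.5) = 0.5.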
Results: When the ML outperforms humans, suppression outperforms the human alone (p < 0.034) and at least does not degrade fairness. When the human outperforms the ML, suppression outperforms the human (p < 5.2 × 10⁻⁵), but the human is fairer than suppression (p < 0.0019). Finally, incorporating uncertainty quantification into suppression approaches can improve performance.
Conclusion: Suppression of poor-quality ML predictions through an auditor model shows promise in improving collaborative human-AI performance and fairness.
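The collaboration scheme the abstract describes can be sketched as a simulation step: an auditor decides per case whether the ML prediction is shown, and a simulated clinician accepts a shown prediction with some probability, otherwise falling back on their own decision. This is a hypothetical illustration of the workflow only; the function names, the auditor rule, and the acceptance rate `p_accept` are assumptions, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

def collaborative_decisions(human_pred, ml_pred, auditor_ok, p_accept=0.8):
    """Simulate human-AI collaboration with auditor-based suppression.

    auditor_ok is a boolean mask: True where the auditor lets the ML
    prediction through, False where it is suppressed. When a prediction
    is shown, the simulated clinician accepts it with probability
    p_accept (an assumed acceptance rate); otherwise the clinician's
    own decision stands.
    """
    human_pred = np.asarray(human_pred)
    ml_pred = np.asarray(ml_pred)
    accept = rng.random(len(human_pred)) < p_accept
    use_ml = np.asarray(auditor_ok) & accept
    return np.where(use_ml, ml_pred, human_pred)
```

With this structure, suppressing every prediction recovers human-alone performance, and never suppressing recovers the overreliance regime the abstract warns about; the auditor interpolates between the two.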
Keywords: artificial intelligence; human-AI collaboration; machine learning.
Conflict of interest statement
The authors have no conflicts of interest to disclose.