Metacognitive sensitivity: The key to calibrating trust and optimal decision making with AI
- PMID: 40417078
- PMCID: PMC12103939
- DOI: 10.1093/pnasnexus/pgaf133
Abstract
Knowing when to trust and incorporate the advice from artificially intelligent (AI) systems is of increasing importance in the modern world. Research indicates that when AI provides high confidence ratings, human users often correspondingly increase their trust in such judgments, but these increases in trust can occur even when AI fails to provide accurate information on a given task. In this piece, we argue that measures of metacognitive sensitivity provided by AI systems will likely play a critical role in (1) helping individuals to calibrate their level of trust in these systems and (2) optimally incorporating advice from AI into human-AI hybrid decision making. We draw upon a seminal finding in the perceptual decision-making literature that demonstrates the importance of metacognitive ratings for optimal joint decisions and outline a framework to test how different types of information provided by AI systems can guide decision making.
Keywords: artificial intelligence; joint decision making; metacognitive sensitivity; optimal decisions; trust calibration.
© The Author(s) 2025. Published by Oxford University Press on behalf of National Academy of Sciences.
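The abstract's claim that metacognitive ratings enable optimal joint decisions can be illustrated with a minimal simulation. This is a hypothetical sketch (not code from the article): two observers, a human and an AI, each receive noisy evidence about a binary signal, and their opinions are combined with precision weights (1/σ²), the statistically optimal rule under Gaussian noise. Knowing each agent's reliability, which is what metacognitive sensitivity conveys, is exactly what the weighting requires.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_accuracy(sigma_human, sigma_ai, n_trials=100_000):
    """Compare individual vs. reliability-weighted joint decision accuracy."""
    signal = rng.choice([-1.0, 1.0], size=n_trials)       # true category
    x_h = signal + rng.normal(0, sigma_human, n_trials)   # human evidence
    x_a = signal + rng.normal(0, sigma_ai, n_trials)      # AI evidence

    # Precision-weighted combination: each opinion is weighted by the
    # inverse of its noise variance, the optimal Gaussian cue-combination rule.
    w_h, w_a = 1.0 / sigma_human**2, 1.0 / sigma_ai**2
    joint = w_h * x_h + w_a * x_a

    acc = lambda x: float(np.mean(np.sign(x) == signal))
    return acc(x_h), acc(x_a), acc(joint)

human, ai, joint = simulate_accuracy(sigma_human=1.0, sigma_ai=1.2)
print(f"human {human:.3f}  AI {ai:.3f}  joint {joint:.3f}")
```

With these assumed noise levels the joint decision outperforms either agent alone; without calibrated reliability information (e.g. if the AI's stated confidence is inflated), the weights are wrong and this benefit shrinks or reverses, which is the miscalibration problem the piece addresses.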