AI pitfalls and what not to do: mitigating bias in AI
- PMID: 37698583
- PMCID: PMC10546443
- DOI: 10.1259/bjr.20230023
Abstract
Various forms of artificial intelligence (AI) applications are being deployed in many healthcare systems. As their use increases, we are learning how these models fail and how they can perpetuate bias. With these lessons, we need to prioritize bias evaluation and mitigation for radiology applications, while not ignoring changes in the larger enterprise AI deployment that may have downstream effects on model performance. In this paper, we provide an updated review of known pitfalls that cause AI bias and discuss strategies for mitigating them within the context of AI deployment in the larger healthcare enterprise. We frame these pitfalls within the AI lifecycle, from problem definition through data set selection and curation to model training and deployment, emphasizing that bias exists across a spectrum and is a sequela of a combination of human and machine factors.
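The bias evaluation the abstract calls for is often operationalized by comparing a model's error rates across demographic subgroups. As a minimal illustrative sketch (not the paper's own method), the snippet below computes the per-subgroup true-positive rate of a binary classifier and the largest pairwise gap between subgroups; the group labels and data are entirely hypothetical.

```python
# Minimal sketch of a subgroup bias audit for a binary classifier.
# Assumes binary labels/predictions and a demographic attribute per case;
# all data below are hypothetical, for illustration only.

def subgroup_tpr(labels, preds, groups):
    """Return the true-positive rate (sensitivity) for each subgroup."""
    rates = {}
    for g in set(groups):
        tp = sum(1 for y, p, s in zip(labels, preds, groups)
                 if s == g and y == 1 and p == 1)
        pos = sum(1 for y, s in zip(labels, groups) if s == g and y == 1)
        rates[g] = tp / pos if pos else float("nan")
    return rates

def tpr_gap(labels, preds, groups):
    """Largest pairwise TPR difference across subgroups (0 = parity)."""
    rates = [r for r in subgroup_tpr(labels, preds, groups).values()
             if r == r]  # drop NaN (subgroups with no positive cases)
    return max(rates) - min(rates) if rates else 0.0

# Hypothetical audit data from two sites, "A" and "B".
labels = [1, 1, 0, 1, 1, 0, 1, 1]
preds  = [1, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(subgroup_tpr(labels, preds, groups))  # e.g. {'A': 0.667, 'B': 0.333}
print(tpr_gap(labels, preds, groups))       # 0.333... — a sensitivity disparity
```

A large gap on a representative validation set is one concrete signal of the subgroup performance disparities discussed in this review; the same pattern extends to false-positive rates or other metrics depending on the clinical context.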
