Gender bias in text-to-image generative artificial intelligence depiction of Australian paramedics and first responders
- PMID: 39627045
- DOI: 10.1016/j.auec.2024.11.003
Abstract
Introduction: In Australia, almost 50 % of paramedics are female, yet they remain under-represented in stereotypical depictions of the profession. The potentially transformative value of generative artificial intelligence (AI) may be limited by stereotypical errors, misrepresentations and bias. Increasing use of text-to-image generative AI, such as DALL-E 3, could reinforce gender and ethnicity biases and therefore warrants objective evaluation.
Method: In March 2024, DALL-E 3 was used via GPT-4 to generate a series of individual and group images of Australian paramedics, ambulance officers, police officers and firefighters. In total, 82 images were produced, comprising 60 individual-character images and 22 multiple-character group images. All 326 depicted characters were independently analysed by three reviewers for apparent gender, age, skin tone and ethnicity.
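The abstract states that images were generated with DALL-E 3 via GPT-4; the interactive workflow itself is not detailed. Purely as an illustration, the sketch below shows how a comparable batch of prompts could be generated programmatically with the OpenAI Images API. The prompts, file handling and parameter choices are assumptions for illustration, not the authors' procedure.

```python
# Illustrative sketch only: the study generated images interactively via GPT-4/DALL-E 3.
# This shows a comparable programmatic approach using the OpenAI Images API.
# Prompts, model parameters and output handling are assumptions, not the authors' method.
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

# Hypothetical prompt set mirroring the four occupations studied
prompts = [
    "A photorealistic image of an Australian paramedic standing beside an ambulance",
    "A photorealistic image of an Australian ambulance officer",
    "A photorealistic image of an Australian police officer",
    "A photorealistic image of an Australian firefighter",
]

for i, prompt in enumerate(prompts):
    result = client.images.generate(
        model="dall-e-3",   # DALL-E 3 accepts only n=1 per request
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    print(i, result.data[0].url)  # URL of the generated image for later review
```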
Results: Among first responders, 90.8 % (N = 296) were depicted as male, 90.5 % (N = 295) as Caucasian, 95.7 % (N = 312) with a light skin tone, and 94.8 % (N = 309) as under 55 years of age. For paramedics and police, the gender distribution differed significantly from actual Australian workforce data (all p < 0.001). Among the images of individual paramedics and ambulance officers (N = 32), DALL-E 3 depicted 100 % as male, 100 % as Caucasian and 100 % with a light skin tone.
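As a quick check of the reported significance, the abstract's own figures can be compared against the workforce baseline with a simple goodness-of-fit test. The sketch below uses the 32 individual paramedic/ambulance officer images (all depicted as male) against an expected male share of roughly 50 %, per the introduction. The choice of a two-sided exact binomial test is an assumption; the abstract does not state which test the authors applied.

```python
# Illustrative check of the abstract's significance claim; the exact test used
# by the authors is not stated, so a two-sided exact binomial test is assumed.
from scipy.stats import binomtest

depicted_male = 32         # individual paramedic/ambulance officer images, all male
total_images = 32
expected_male_share = 0.5  # ~50 % of Australian paramedics are female (abstract)

result = binomtest(depicted_male, total_images, expected_male_share)
print(f"p-value = {result.pvalue:.2e}")  # far below 0.001, consistent with the abstract
```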
Conclusion: Gender and ethnicity bias is a significant limitation of text-to-image generative AI when DALL-E 3 depicts Australian first responders. Generated images disproportionately over-represent males, Caucasians and light skin tones, and do not reflect the diversity of paramedics in Australia today.
Keywords: Diversity; First responder; Generative artificial intelligence; Inclusivity.
Copyright © 2024 The Authors. Published by Elsevier Ltd. All rights reserved.
Conflict of interest statement
Declaration of Competing Interest There are no conflicts of interest or funding declarations to be made. Human ethics approval was not required for this AI-generated data.
