J Am Med Inform Assoc. 2024 Sep 1;31(9):2002-2009. doi: 10.1093/jamia/ocae086.

Mixed methods assessment of the influence of demographics on medical advice of ChatGPT

Katerina Andreadis et al. J Am Med Inform Assoc.

Abstract

Objectives: To evaluate demographic biases in diagnostic accuracy and health advice between generative artificial intelligence (AI) (ChatGPT GPT-4) and traditional symptom checkers like WebMD.

Materials and methods: Combined symptom and demographic vignettes were developed for the 27 most common symptom complaints. Standardized prompts, written from a patient perspective with varying demographic permutations of age, sex, and race/ethnicity, were entered into ChatGPT (GPT-4) between July and August 2023. In total, 3 runs of 540 ChatGPT prompts were compared to the corresponding WebMD Symptom Checker output using a mixed-methods approach. In addition to diagnostic correctness, the text generated by ChatGPT was analyzed for readability (using the Flesch-Kincaid Grade Level) and qualitative aspects such as disclaimers and demographic tailoring.
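The permutation design and readability metric above can be sketched as follows. This is a minimal illustration, not the authors' code: the complaint list and prompt wording are placeholders, and the 2 × 2 × 5 split of age, sex, and race/ethnicity categories is an assumption chosen only because it reproduces the 540-prompt total (27 complaints × 20 demographic permutations). The Flesch-Kincaid Grade Level formula itself is standard; the syllable counter is a crude vowel-group heuristic.

```python
import itertools
import re

# Placeholder inputs (assumptions, not the study's actual lists)
complaints = [f"symptom {i}" for i in range(27)]
ages = [25, 75]
sexes = ["male", "female"]
races = ["White", "Black", "Hispanic", "Asian", "Native American"]

# One prompt per (complaint, age, sex, race) combination: 27 * 2 * 2 * 5 = 540
prompts = [
    f"I am a {age}-year-old {race} {sex} and I have {c}. What could be causing it?"
    for c, age, sex, race in itertools.product(complaints, ages, sexes, races)
]

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59
```

A grade level near 13-16 would correspond to the "suitable for college students" reading level reported in the Results.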

Results: ChatGPT matched WebMD in 91% of diagnoses, with a 24% top-diagnosis match rate. Diagnostic accuracy was not significantly different across demographic groups, including age, race/ethnicity, and sex. ChatGPT's urgent care recommendations and demographic tailoring were presented significantly more often for 75-year-olds than for 25-year-olds (P < .01) but did not differ statistically among race/ethnicity and sex groups. The GPT text was at a reading level suitable for college students, with no significant demographic variability.

Discussion: The use of non-health-tailored generative AI, like ChatGPT, for simple symptom-checking functions provides comparable diagnostic accuracy to commercially available symptom checkers and does not demonstrate significant demographic bias in this setting. The text accompanying differential diagnoses, however, suggests demographic tailoring that could potentially introduce bias.

Conclusion: These results highlight the need for continued rigorous evaluation of AI-driven medical platforms, focusing on demographic biases to ensure equitable care.

Keywords: ChatGPT; artificial intelligence; bias; digital health; large language model; symptom checker.

Conflict of interest statement

The authors have no competing interests to declare.

Figures

Figure 1. ChatGPT prompt template and example response.
