Perspectives and Experiences With Large Language Models in Health Care: Survey Study

Jennifer Sumner et al. J Med Internet Res. 2025 May 1:27:e67383. doi: 10.2196/67383.

Abstract

Background: Large language models (LLMs) are transforming how data are used, including within the health care sector. However, frameworks such as the Unified Theory of Acceptance and Use of Technology highlight that successful implementation depends on understanding the factors that influence technology use.

Objective: This study aimed to (1) investigate users' uptake, perceptions, and experiences regarding LLMs in health care and (2) contextualize survey responses by demographics and professional profiles.

Methods: An electronic survey was administered to elicit the perspectives of stakeholders (health care providers and support functions) on LLMs, their experiences with LLMs, and the potential impact of LLMs on functional roles. Survey domains included demographics (6 questions), user experiences of LLMs (8 questions), motivations for using LLMs (6 questions), and perceived impact on functional roles (4 questions). The survey was launched electronically, targeting health care providers or support staff, health care students, and academics in health-related fields. Respondents were adults (>18 years) who were aware of LLMs.

Results: Responses were received from 1083 individuals, of whom 845 were analyzable. Of the 845 respondents, 221 had yet to use an LLM. Nonusers were more likely to be health care workers (P<.001), older (P<.001), and female (P<.01). Users primarily adopted LLMs for speed, convenience, and productivity. While 75% (470/624) agreed that the user experience was positive, 46% (294/624) found the generated content unhelpful. Regression analysis showed that the experience with LLMs was more likely to be positive if the user was male (odds ratio [OR] 1.62, CI 1.06-2.48), and increasing age was associated with a reduced likelihood of reporting LLM output as useful (OR 0.98, CI 0.96-0.99). Nonusers were less likely than LLM users to report that LLMs meet unmet needs (45%, 99/221 vs 65%, 407/624; OR 0.48, CI 0.35-0.65), and males were more likely to report that LLMs do address unmet needs (OR 1.64, CI 1.18-2.28). Furthermore, nonusers were less likely than LLM users to agree that LLMs will improve functional roles (63%, 140/221 vs 75%, 469/624; OR 0.60, CI 0.43-0.85). Free-text opinions highlighted concerns regarding autonomy, outperformance, and reduced demand for care. Respondents also predicted changes to human interactions, including fewer but higher quality interactions and a change in consumer needs as LLMs become more common, which would require provider adaptation.
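
For readers who want to see how odds ratios such as those above are typically obtained, the sketch below fits a binary logistic regression with the Python statsmodels package and exponentiates the coefficients and confidence-interval bounds. The data frame, variable names, and simulated values are hypothetical illustrations under assumed effect directions, not the study's data or analysis code.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulate a hypothetical survey extract (one row per respondent).
    # Variable names and effect sizes are illustrative only.
    rng = np.random.default_rng(0)
    n = 500
    male = rng.integers(0, 2, n)
    age = rng.integers(20, 70, n)
    log_odds = 0.2 + 0.5 * male - 0.02 * (age - 40)
    positive = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))
    df = pd.DataFrame({"positive_experience": positive, "male": male, "age": age})

    # Binary logistic regression of a positive LLM experience on sex and age.
    fit = smf.logit("positive_experience ~ male + age", data=df).fit(disp=0)

    # Exponentiating the coefficients gives odds ratios, and exponentiating the
    # confidence-interval bounds gives the interval around each OR (the form in
    # which the survey reports its results, e.g., OR 1.62, CI 1.06-2.48).
    odds_ratios = np.exp(fit.params)
    or_ci = np.exp(fit.conf_int())
    print(pd.concat([odds_ratios.rename("OR"), or_ci], axis=1))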

Conclusions: Despite the reported benefits of LLMs, nonusers (primarily health care workers, older individuals, and females) appeared more hesitant to adopt these tools. These findings underscore the need for targeted education and support to address adoption barriers and ensure the successful integration of LLMs in health care. Anticipated role changes, evolving human interactions, and the risk of a digital divide further emphasize the need for careful implementation and ongoing evaluation of LLMs in health care to ensure equity and sustainability.

Keywords: artificial intelligence; digital health; healthcare; healthcare worker; large language model; professional; survey; survey research; workforce.

Conflict of interest statement

Conflicts of Interest: None declared.

Figures

Figure 1. A heat map of question responses (percentage agreement) on user experience, overall and by individual demographic groups. LLM: large language model.
Figure 2. A heat map of question responses (percentage agreement) on motivations for using large language models, overall and by individual demographic groups. LLM: large language model.
Figure 3. A heat map of question responses (percentage agreement) on the perceived impact of large language models on functional roles, overall and by individual demographic groups.
Figure 4. Qualitative themes (inner circle) and subthemes (outer circle) on the perceived impact of large language models on human interactions, from the free-text survey data.
