AI-Enabled Medical Education: Threads of Change, Promising Futures, and Risky Realities Across Four Potential Future Worlds

Michelle I Knopp et al. JMIR Med Educ. 2023 Dec 25;9:e50373. doi: 10.2196/50373.

Abstract

Background: The rapid trajectory of artificial intelligence (AI) development and advancement is quickly outpacing society's ability to determine its future role. As AI continues to transform various aspects of our lives, one critical question arises for medical education: what will be the nature of education, teaching, and learning in a future world where the acquisition, retention, and application of knowledge in the traditional sense are fundamentally altered by AI?

Objective: The purpose of this perspective is to plan for the intersection of health care and medical education in the future.

Methods: We used GPT-4 and scenario-based strategic planning techniques to craft 4 hypothetical future worlds influenced by AI's integration into health care and medical education. This method, used by organizations such as Shell and the Accreditation Council for Graduate Medical Education, assesses readiness for alternative futures and effectively manages uncertainty, risk, and opportunity. The detailed scenarios provide insights into potential environments the medical profession may face and lay the foundation for hypothesis generation and idea-building regarding responsible AI implementation.

Results: The following 4 worlds were created using OpenAI's GPT model: AI Harmony, AI Conflict, The World of Ecological Balance, and Existential Risk. Risks include disinformation and misinformation, loss of privacy, widening inequity, erosion of human autonomy, and ethical dilemmas. Benefits involve improved efficiency, personalized interventions, enhanced collaboration, early detection, and accelerated research.

Conclusions: To ensure responsible AI use, the authors suggest focusing on 3 key areas: developing a robust ethical framework, fostering interdisciplinary collaboration, and investing in education and training. A strong ethical framework emphasizes patient safety, privacy, and autonomy while promoting equity and inclusivity. Interdisciplinary collaboration encourages cooperation among various experts in developing and implementing AI technologies, ensuring that they address the complex needs and challenges in health care and medical education. Investing in education and training prepares professionals and trainees with the necessary skills and knowledge to effectively use and critically evaluate AI technologies. The integration of AI in health care and medical education presents a critical juncture between transformative advancements and significant risks. By working together to address both immediate and long-term risks and consequences, we can ensure that AI integration leads to a more equitable, sustainable, and prosperous future for both health care and medical education. As we engage with AI technologies, our collective actions will ultimately determine whether the future of health care and medical education harnesses AI's power while ensuring the safety and well-being of humanity.

Keywords: ChatGPT; GPT-4; Open-AI; OpenAI; artificial intelligence; autonomous; autonomy; ethic; ethical; ethics; ethics and AI; future; future of healthcare; generative; medical education; privacy; scenario; scenario planning; strategic planning.


Conflict of interest statement

Conflicts of Interest: DW has received funding from the National Board of Medical Examiners for a project using natural language processing and a large language model to evaluate clinical reasoning in resident documentation. LT has received funding from the American Medical Association for a project using large language models to develop a platform that generates clinical skills practice scenarios.

Figures

Figure 1. An illustration of the 4 future worlds: AI Harmony, AI Conflict, The World of Ecological Balance, and Existential Risk, generated using the DALL-E AI model (OpenAI, 2020). AI: artificial intelligence.
