Stench of Errors or the Shine of Potential: The Challenge of (Ir)Responsible Use of ChatGPT in Speech-Language Pathology
- PMID: 40627744
- DOI: 10.1111/1460-6984.70088
Abstract
Background: Integrating large language models (LLMs), such as ChatGPT, into speech-language pathology (SLP) presents promising opportunities and notable challenges. While these tools can support diagnostics, streamline documentation and assist in therapy planning, they also raise concerns related to misinformation, cultural insensitivity, overreliance and ethical ambiguity. Current discourse often centres on technological capabilities, overlooking how future speech-language pathologists (SLPs) are being prepared to use such tools responsibly.
Aims: This paper examines the pedagogical, ethical and professional implications of integrating LLMs into SLP. It emphasizes the need to cultivate professional responsibility, ethical awareness and critical engagement amongst student SLPs, ensuring that such technologies are applied thoughtfully, appropriately and in accordance with evidence-based and contextually relevant therapeutic standards.
Methods: The paper combines a review of recent interdisciplinary research with reflective insights from academic practice. It presents documented cases of student SLPs' overreliance on ChatGPT, analyzes common pitfalls through a structured table of examples and synthesizes perspectives from SLP, education, data ethics and linguistics.
Main contribution: Reflective examples presented in the article illustrate challenges that arise when LLMs are used without sufficient oversight or a clear understanding of their limitations. Rather than questioning the value of LLMs, these cases emphasize the importance of ensuring that student SLPs are guided towards thoughtful, ethical and clinically sound use. To support this, the paper offers a set of pedagogical recommendations, including ethics integration, reflective assignments, case-based learning, peer critique and interdisciplinary collaboration, aimed at embedding critical engagement with tools such as ChatGPT into professional training.
Conclusions: LLMs are becoming an integral part of SLP. Their impact, however, will depend on how effectively student SLPs are trained to balance technological innovation with professional responsibility. Higher education institutions (HEIs) must take an active role in embedding responsible engagement with LLMs into pre-service training and SLP curricula. Through intentional and early preparation, the field can move beyond the risks associated with automation and towards a future shaped by reflective, informed and ethically grounded use of generative tools.
What this paper adds:
What is already known on this subject: Large language models (LLMs), including ChatGPT, are increasingly used in speech-language pathology (SLP) for tasks such as diagnostic support, therapy material generation and documentation. While prior research acknowledges both their utility and risks, limited attention has been paid to how student SLPs engage with these tools and how educational institutions prepare them for responsible use.
What this paper adds to existing knowledge: This paper identifies key challenges in how student SLPs interact with ChatGPT, including overreliance, lack of critical evaluation and ethical blind spots. It emphasizes the role of higher education in developing critical AI literacy aligned with clinical and ethical standards. The study offers specific, practice-oriented recommendations for embedding responsibility-focused engagement with LLMs into SLP curricula. These include ethics integration, reflective assignments, peer feedback and interdisciplinary dialogue.
What are the potential or actual clinical implications of this work? Without structured guidance, future SLPs may misuse LLMs in ways that compromise diagnostic accuracy, cultural appropriateness or therapeutic quality. Embedding reflective, ethics-focused training into SLP curricula can reduce these risks and ensure that generative tools like ChatGPT support rather than undermine clinical decision-making and patient care.
Keywords: ChatGPT; higher education institutions; large language models; responsibility; speech‐language pathology; student speech‐language pathologists.
© 2025 Royal College of Speech and Language Therapists.