The Epistemological Danger of Large Language Models
- PMID: 37812104
- PMCID: PMC11797371
- DOI: 10.1080/15265161.2023.2250294
Conflict of interest statement
No potential conflict of interest was reported by the author(s).
Comment in
- Generative-AI-Generated Challenges for Health Data Research. Am J Bioeth. 2023 Oct;23(10):1-5. doi: 10.1080/15265161.2023.2252311. PMID: 37831940.
Comment on
- What Should ChatGPT Mean for Bioethics? Am J Bioeth. 2023 Oct;23(10):8-16. doi: 10.1080/15265161.2023.2233357. PMID: 37440696.