The persuasive potential of AI-paraphrased information at scale
- PMID: 40697191
- PMCID: PMC12281505
- DOI: 10.1093/pnasnexus/pgaf207
Abstract
In this article, we study how AI-paraphrased messages can amplify the persuasive impact and scale of information campaigns. Building on social and cognitive theories of repetition and information processing, we model how CopyPasta, a common repetition tactic used by information campaigns, can be enhanced with large language models. We first extract CopyPasta from two prominent disinformation campaigns in the United States and use ChatGPT to paraphrase the original messages, generating AIPasta. Using natural language processing metrics, we then validate that AIPasta is more lexically diverse than CopyPasta while retaining the semantics of the original message. In a preregistered experiment comparing the persuasive potential of CopyPasta and AIPasta (N = 1,200), we find that AIPasta (but not CopyPasta) increases perceptions of consensus around the campaign's broad false narrative while maintaining sharing intent at levels similar to Control (CopyPasta reduces this intent). Additionally, AIPasta (vs. Control) increases belief in the campaign's exact false claim, depending on political orientation. Across most outcomes, however, we find little evidence of significant persuasive differences between AIPasta and CopyPasta. Nonetheless, current state-of-the-art AI-text detectors fail to detect AIPasta, opening the door for these operations to scale successfully. As AI-enabled information operations become more prominent, we anticipate a shift from traditional CopyPasta to AIPasta, which presents significant challenges for detection and mitigation.
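The abstract does not name the specific NLP metrics used. Below is a minimal sketch of how such a validation could be run, assuming token-level Jaccard distance as the lexical-diversity measure and sentence-embedding cosine similarity (via the open-source sentence-transformers library) as the semantic-retention measure; the embedding model, example strings, and function names are illustrative assumptions, not the paper's actual pipeline.

```python
# Hedged sketch, not the paper's method: check that a paraphrase (AIPasta)
# differs lexically from the original (CopyPasta) while staying semantically close.
from sentence_transformers import SentenceTransformer, util

def jaccard_distance(a: str, b: str) -> float:
    """1 - |A ∩ B| / |A ∪ B| over lowercased word tokens; higher = more lexically diverse."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(ta & tb) / len(ta | tb)

# Illustrative embedding model choice; the paper does not specify one.
model = SentenceTransformer("all-MiniLM-L6-v2")

def compare(original: str, paraphrase: str) -> dict:
    emb = model.encode([original, paraphrase])
    return {
        "lexical_distance": jaccard_distance(original, paraphrase),   # want high
        "semantic_similarity": float(util.cos_sim(emb[0], emb[1])),   # want high
    }

# Hypothetical example texts for illustration only.
copypasta = "The results cannot be trusted. Share this message everywhere."
aipasta = "Spread the word: these findings should not be taken at face value."
print(compare(copypasta, aipasta))
```

Under this sketch, an AIPasta variant would be considered validated when lexical distance is high (little verbatim overlap, so repetition detectors keyed to exact duplicates miss it) while embedding similarity remains high (the false claim is preserved).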
Keywords: cognitive heuristics; generative AI; illusory truth effect; information campaigns; persuasion.
© The Author(s) 2025. Published by Oxford University Press on behalf of National Academy of Sciences.