The persuasive potential of AI-paraphrased information at scale

Saloni Dash et al. PNAS Nexus. 2025 Jul 22;4(7):pgaf207. doi: 10.1093/pnasnexus/pgaf207. eCollection 2025 Jul.
Abstract

In this article, we study how AI-paraphrased messages can amplify the persuasive impact and scale of information campaigns. Building on social and cognitive theories of repetition and information processing, we model how CopyPasta, a common repetition tactic leveraged by information campaigns, can be enhanced using large language models. We first extract CopyPasta from two prominent disinformation campaigns in the United States and use ChatGPT to paraphrase the original message to generate AIPasta. Using natural language processing metrics, we then validate that AIPasta is more lexically diverse than CopyPasta while retaining the semantics of the original message. In a preregistered experiment comparing the persuasive potential of CopyPasta and AIPasta (N = 1,200), we find that AIPasta (but not CopyPasta) is effective at increasing perceptions of consensus in the broad false narrative of the campaign while maintaining sharing intent similar to Control (CopyPasta reduces such intent). Additionally, AIPasta (vs. Control) increases belief in the exact false claim of the campaign, depending on political orientation. However, across most outcomes, we find little evidence of significant persuasive differences between AIPasta and CopyPasta. Nonetheless, current state-of-the-art AI-text detectors fail to detect AIPasta, opening the door for these operations to scale successfully. As AI-enabled information operations become more prominent, we anticipate a shift from traditional CopyPasta to AIPasta, which presents significant challenges for detection and mitigation.
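The abstract does not name the specific natural language processing metrics used, so the following is a minimal sketch of one plausible way to quantify the two properties it describes, assuming pairwise unigram Jaccard overlap as an (inverse) proxy for lexical diversity and sentence-embedding cosine similarity for semantic similarity. The model name, helper functions, and example messages are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: quantifying lexical diversity and semantic similarity
# of repeated vs. paraphrased messages. Metric and model choices are
# illustrative assumptions, not necessarily those used in the paper.
from itertools import combinations
from sentence_transformers import SentenceTransformer, util

def pairwise_jaccard(texts):
    """Mean unigram Jaccard overlap across message pairs (lower = more lexically diverse)."""
    sets = [set(t.lower().split()) for t in texts]
    scores = [len(a & b) / len(a | b) for a, b in combinations(sets, 2)]
    return sum(scores) / len(scores)

def pairwise_semantic_similarity(texts, model_name="all-MiniLM-L6-v2"):
    """Mean cosine similarity of sentence embeddings across message pairs."""
    model = SentenceTransformer(model_name)
    emb = model.encode(texts, convert_to_tensor=True)
    sims = util.cos_sim(emb, emb)
    pairs = combinations(range(len(texts)), 2)
    vals = [sims[i][j].item() for i, j in pairs]
    return sum(vals) / len(vals)

# Toy example messages (hypothetical, for illustration only)
copypasta = ["The vote was rigged, share this now!"] * 3  # verbatim repeats
aipasta = [
    "The vote was rigged, share this now!",
    "This election was taken from us, please pass it along.",
    "Our ballots were manipulated, spread the word today.",
]

print("CopyPasta lexical overlap:", pairwise_jaccard(copypasta))   # 1.0: identical wording
print("AIPasta lexical overlap:  ", pairwise_jaccard(aipasta))     # lower: reworded
print("AIPasta semantic similarity:", pairwise_semantic_similarity(aipasta))  # high: same meaning
```

On toy inputs like these, verbatim CopyPasta yields maximal lexical overlap, whereas paraphrased AIPasta yields lower overlap with comparably high embedding similarity, the qualitative pattern reported in Fig. 1.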

Keywords: cognitive heuristics; generative AI; illusory truth effect; information campaigns; persuasion.


Figures

Fig. 1.
Metrics of lexical diversity and semantic similarity across conditions. AIPasta is more lexically diverse than CopyPasta while remaining semantically similar to it. Both CopyPasta and AIPasta are distinctly different from random posts. a) #StopTheSteal Metrics. b) #Plandemic Metrics.
Fig. 2.
Perceived truth across conditions. Participants exposed to AIPasta were marginally more likely than Control to believe the exact false claim (a); participants exposed to CopyPasta were marginally more likely than Control to believe the related false narrative (b); and Republican participants in the AIPasta condition were significantly more likely to believe the exact false claim (vs. Control) (c). a) Perceived Truth (Exact Claim). b) Perceived Truth (Related Narrative). c) Perceived Truth (Exact Claim) By Political Party.
Fig. 3.
Perceived intent to persuade across conditions, moderated by political party. Republican participants exposed to AIPasta or CopyPasta are significantly more likely to feel that they are being persuaded (vs. Control).
Fig. 4.
Perceived social consensus across conditions. Participants exposed to AIPasta perceive greater social consensus for the broad false narrative claim as compared to Control; 95% CIs are displayed (a). Republican participants show a greater increase in consensus on the broad false narrative claim after exposure to AIPasta (vs. Control) compared to Democrat participants (b). Participants less familiar with the targeted topic (cutoff point = 4.4) showed a greater increase in perceived consensus on the broad false claims after exposure to AIPasta (vs. Control) compared with participants who were more familiar with the issue (c). a) Perceived Social Consensus. b) Perceived Social Consensus (By Political Party). c) Johnson–Neyman Plot (By Issue Familiarity).
Fig. 5.
Sharing intention across conditions. Participants exposed to CopyPasta report a significantly lower likelihood of sharing a randomly selected CopyPasta post among the stimuli compared to those in the Control condition, and those exposed to AIPasta were marginally significantly more likely to share the messages as compared to CopyPasta.
Fig. 6.
Distribution of perplexity scores. Perplexity score distributions for AIPasta and CopyPasta almost completely overlap (see the sketch after the figure list). a) #StopTheSteal. b) #Plandemic.
Fig. 7.
Survey flow. The study used a three-condition (CopyPasta, AIPasta, and Control) between-subject design, with each participant making ratings on two topics (#StopTheSteal and #Plandemic) to reduce item effects.
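Fig. 6 compares perplexity distributions for AIPasta and CopyPasta, the kind of statistic many AI-text detectors rely on. The scoring model used in the paper is not stated here, so the sketch below shows only one common way such scores could be computed, assuming GPT-2 from the Hugging Face transformers library; the model choice and example messages are illustrative assumptions.

```python
# Hypothetical sketch: per-message perplexity under GPT-2, one way
# distributions like those in Fig. 6 might be produced. The actual
# scorer and settings used in the paper are not specified here.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Exponentiated mean token-level negative log-likelihood under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return torch.exp(loss).item()

# Toy messages (hypothetical, for illustration only)
print(perplexity("The vote was rigged, share this now!"))
print(perplexity("Our ballots were manipulated, spread the word today."))
```

Heavily overlapping perplexity distributions, as reported in Fig. 6, would suggest that a simple perplexity threshold cannot separate AIPasta from CopyPasta.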

