R Soc Open Sci. 2025 Jun 25;12(6):242148. doi: 10.1098/rsos.242148. eCollection 2025 Jun.

Countering AI-generated misinformation with pre-emptive source discreditation and debunking


Emily R Spearing et al. R Soc Open Sci. .

Abstract

Despite widespread concerns over AI-generated misinformation, its impact on people's reasoning and the effectiveness of countermeasures remain unclear. This study examined whether a pre-emptive, source-focused inoculation, designed to lower trust in AI-generated information, could reduce its influence on reasoning. This approach was compared with a retroactive, content-focused debunking, as well as a simple disclaimer that AI-generated information may be misleading, as often seen on real-world platforms. The malleability of trust in AI-generated information was additionally tested with an intervention designed to boost trust. Across two experiments (total N = 1223), a misleading AI-generated article influenced reasoning regardless of its alleged source (human or AI). In both experiments, the inoculation reduced general trust in AI-generated information but did not significantly reduce the misleading article's specific influence on reasoning. The additional trust-boosting and disclaimer interventions used in Experiment 1 also had no impact. By contrast, debunking of the misinformation in Experiment 2 effectively reduced its impact, although only a combination of inoculation and debunking eliminated misinformation influence entirely. These findings demonstrate that generative AI can be a persuasive source of misinformation, potentially requiring multiple countermeasures to negate its effects.

Keywords: continued influence effect; generative artificial intelligence; misinformation; source credibility.


Conflict of interest statement

We declare we have no competing interests.

Figures

Figure 1. Mean trust in AI-generated information across conditions in Experiment 1. Note: Misinfo., misinformation; error bars show 95% confidence intervals.

Figure 2. Mean misinformation reliance across conditions in Experiment 1. Note: Misinfo., misinformation; error bars show 95% confidence intervals.

Figure 3. Mean trust in AI-generated information across conditions in Experiment 2. Note: Misinfo., misinformation; error bars show 95% confidence intervals.

Figure 4. Mean misinformation reliance across conditions in Experiment 2. Note: Misinfo., misinformation; error bars show 95% confidence intervals.

References

    1. Alkaissi H, McFarlane SI. 2023. Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus 15, e35179. ( 10.7759/cureus.35179) - DOI - PMC - PubMed
    1. Buchanan J, Hill S, Shapoval O. 2024. ChatGPT hallucinates non-existent citations: evidence from economics. Am. Econ. 69, 80–87. ( 10.1177/05694345231218454) - DOI
    1. Gravel J, D’Amours-Gravel M, Osmanlliu E. 2023. Learning to fake it: limited responses and fabricated references provided by ChatGPT for medical questions. Mayo Clin. Proc. 1, 226–234. ( 10.1016/j.mcpdig.2023.05.004) - DOI - PMC - PubMed
    1. Heppell F, Bakir ME, Bontcheva K. 2024. Lying blindly: bypassing ChatGPT’s safeguards to generate hard-to-detect disinformation claims at scale. arXiv. See http://arxiv.org/abs/2402.08467.
    1. Shukla AK, Tripathi S. 2024. AI-generated misinformation in the election year 2024: measures of European Union. Front. Polit. Sci 6, 1451601. ( 10.3389/fpos.2024.1451601) - DOI

LinkOut - more resources