2025 May 12;20(5):e0322900.
doi: 10.1371/journal.pone.0322900. eCollection 2025.

The "multiple exposure effect" (MEE): How multiple exposures to similarly biased online content can cause increasingly larger shifts in opinions and voting preferences

Robert Epstein et al. PLoS One.

Abstract

In three randomized, controlled experiments performed on simulations of three popular online platforms (Google search, X/Twitter, and Alexa), with a total of 1,488 undecided, eligible US voters, we asked whether multiple exposures to similarly biased content on those platforms could shift opinions and voting preferences more than a single exposure could. All participants were first shown brief biographies of two political candidates, then asked about their voting preferences, then exposed to biased content on one of our three simulated platforms, and then asked again about their voting preferences. In all experiments, participants in different groups saw biased content favoring one candidate, his or her opponent, or neither. In all the experiments, our primary dependent variable was Vote Manipulation Power (VMP), the percentage increase in the number of participants inclined to vote for one candidate after having viewed content favoring that candidate. In Experiment 1 (on our Google simulator), the VMP increased with successive searches from 14.3% to 20.2% to 22.6%. In Experiment 2 (on our X/Twitter simulator), the VMP increased with successive exposures to biased tweets from 49.7% to 61.8% to 69.1%. In Experiment 3 (on our Alexa simulator), the VMP increased with successive exposures to biased replies from 72.1% to 91.2% to 98.6%. Corresponding shifts were also generally found for how much participants reported liking and trusting the candidates and for participants' overall impression of the candidates. Because multiple exposures to similarly biased content might be common on the internet, we conclude that our previous reports about the possible impact of biased content (always based on single exposures) might have underestimated its possible impact. Findings in our new experiments exemplify what we call the "multiple exposure effect" (MEE).
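The abstract defines VMP as the percentage increase in the number of participants inclined to vote for the favored candidate after exposure. The following minimal Python sketch shows that reading of the metric; the function name and the illustrative counts are our own and do not appear in the study.

```python
def vote_manipulation_power(pre_votes: int, post_votes: int) -> float:
    """Percentage increase in participants inclined to vote for the
    favored candidate after viewing content favoring that candidate
    (our assumed reading of VMP, based on the abstract's definition)."""
    if pre_votes <= 0:
        raise ValueError("pre-exposure count must be positive")
    return 100.0 * (post_votes - pre_votes) / pre_votes

# Illustrative numbers only (not taken from the study):
# 84 of 200 participants favor the candidate before exposure, 96 after.
print(round(vote_manipulation_power(84, 96), 1))  # 14.3
```

Under this reading, a VMP of 22.6% after a third exposure means roughly a fifth more participants favored the promoted candidate than had before any exposure.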


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Experiment 1: Example of Kadoodle search results page.
The search phrase in the search bar was pre-filled. Each of the five pages of search results included a list of six search results. The order of the results was different for each of the three groups. The order could favor Hillary Clinton (as shown above), Donald Trump, or neither Presidential candidate. See text for details. Reprinted from [26] under a CC BY license, with permission from the American Institute for Behavioral Research and Technology; original copyright 2024. This figure is similar but not identical to the original image and is therefore for illustrative purposes only.
Fig 2
Fig 2. Experiment 1: Selection and grouping of search results for each of the three exposures.
In the first exposure in multiple exposure conditions, 30 search results and corresponding web pages were selected from the total bank of 60. In the second exposure, 10 search results were taken from the first batch and combined with 20 new search results from the bank. The third exposure used 10 search results from the first batch (that had not been used in the second batch), 10 results from the second batch (that had not been used in the first batch), and the remaining 10 results from the bank (that had not previously been used in either of the first two batches).
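The selection scheme in Fig 2 partitions a bank of 60 search results into three overlapping batches of 30. The sketch below is our reconstruction of that scheme, not the authors' code; the function name, seeding, and shuffling are assumptions made to keep the example runnable.

```python
import random

def build_exposure_batches(bank, seed=0):
    """Reconstruct the Fig 2 scheme (our reading): 60 items -> three
    batches of 30, with the stated overlaps between exposures."""
    assert len(bank) == 60
    rng = random.Random(seed)
    items = list(bank)
    rng.shuffle(items)

    batch1 = items[:30]              # exposure 1: 30 of the 60
    carried_to_2 = batch1[:10]       # 10 reused from batch 1
    fresh_2 = items[30:50]           # 20 new items from the bank
    batch2 = carried_to_2 + fresh_2  # exposure 2: 30 items

    unused_from_1 = batch1[10:20]    # 10 from batch 1 not used in batch 2
    from_2_only = fresh_2[:10]       # 10 from batch 2 not in batch 1
    remaining = items[50:60]         # last 10, never used before
    batch3 = unused_from_1 + from_2_only + remaining  # exposure 3
    return batch1, batch2, batch3
```

This yields three 30-item batches in which exposures 2 and 3 each mix previously seen and fresh material, consistent with the caption's counts.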
Fig 3
Fig 3. Experiment 1: Single and multiple exposure bias groups using a search engine.
Participants were first assigned to either the single-exposure or the multiple-exposure condition. In the single-exposure condition, they were then randomly assigned to a pro-Trump, pro-Clinton, or control group (see text and Fig 4 for details). The same occurred in the multiple-exposure condition, but participants in that condition experienced three separate rounds of search, each with the same bias and each lasting a maximum of 15 minutes. Participants in the single-exposure condition experienced just one search.
Fig 4
Fig 4. Experiment 1: Selection and ordering of search results and corresponding web pages for the two bias groups and the control group.
In each of the three groups to which single exposure and multiple exposure participants were assigned, they had access to five pages of search results, with six search results per page. A: In Group 1 (pro-Trump), search results were displayed in an order favoring Donald Trump, then neither candidate, then Hillary Clinton, based on mean bias ratings that had been previously provided by independent raters (see text). B: In Group 2 (pro-Clinton), search results were placed in the opposite order. C: In Group 3 (control), pro-Trump and pro-Clinton search results alternated, as shown in the figure.
Fig 5
Fig 5. Experiment 2: The three groups.
In Group 1, participants were first exposed to 14 tweets, 2 of which were targeted messages containing a negative news alert about Scott Morrison’s opponent (and hence favored Morrison). The figure shows the positions of the targeted messages. That group was subsequently exposed to similarly biased content two more times. In Group 2, participants were also exposed, sequentially, to three groups of tweets, but these contained negative news alerts about Bill Shorten’s opponent (and hence favored Shorten). In Group 3, with each exposure, participants saw one negative targeted tweet about each candidate, with the order of those two tweets randomized. See text for details.
Fig 6
Fig 6. Experiment 2: Example of a biased targeted message on Twiddler.
The left-hand image shows a news alert (labeled “Twiddler Alert”) that includes a negative news item about candidate Bill Shorten. The right-hand image is the same, except that the opposing candidate’s name (Scott Morrison) is shown in that news alert.
Fig 7
Fig 7. Experiment 3: Example of Dyslexa, our Alexa simulator.
The lower right of the figure shows 5 of the 15 possible questions we showed to participants across three different exposures to Dyslexa.
Fig 8
Fig 8. Experiment 3: Dyslexa bias groups.
This diagram shows the procedure used with each of the three groups: the pro-Morrison, the pro-Shorten, and the control group. With each of the three exposures to Dyslexa’s questions and answers, each set of five questions was presented in a random order.

References

    1. Epstein R. The ultimate mind control machine: summary of a decade of empirical research on online search engines. San Francisco, CA: Western Psychological Association; 2024. Available from: https://aibrt.org/downloads/EPSTEIN_2024-WPA-The_Ultimate_Mind_Control_M...
    2. Epstein R, Peirson L. How we preserved more than 2.4 million online ephemeral experiences in the 2022 midterm elections, and what this content revealed about online election bias. Riverside, CA: Western Psychological Association; 2023. Available from: https://aibrt.org/downloads/EPSTEIN_&_Peirson_2023-WPAHow_We_Preserved_M...
    3. Epstein R, Bock S, Peirson L, Wang H, Voillot M. How we preserved more than 1.5 million online “ephemeral experiences” in the recent US elections, and what this content revealed about online election bias. Portland, OR: Western Psychological Association; 2022. Available from: https://aibrt.org/downloads/EPSTEIN_et_al_2022-WPA-How_We_Preserved_More...
    4. De Gregorio G. Democratising online content moderation: a constitutional framework. Computer Law & Security Review. 2020;36:105374. doi: 10.1016/j.clsr.2019.105374
    5. Lee E. Moderating content moderation: a framework for nonpartisanship in online governance. American University Law Review; 2021. [cited 2024 Dec 18]. Available from: https://aulawreview.org/blog/moderating-content-moderation-a-framework-f...
