Public Health Rep. 2026 Feb 26:333549261418596.
doi: 10.1177/00333549261418596. Online ahead of print.

Evaluation of Generative Artificial Intelligence Safeguards Against the Creation of Images and Videos Harmful to Public Health


Bianca Chu et al. Public Health Rep. .

Abstract

Objectives: As generative artificial intelligence (AI) continues to advance, an environment that lacks strong safeguards could create opportunities for misuse by malicious actors. This study aimed to evaluate the safeguards of publicly accessible generative AI applications against the creation of image and video content potentially harmful to public health.

Methods: We assessed the safeguards of 10 leading text-to-image models and 2 text-to-video models across 5 public health themes: promoting solariums as safe, stigmatizing overweight people, promoting alcohol use as safe during pregnancy, depicting vaping as healthy, and depicting smoking cigarettes as cool for teenagers. For each theme, we submitted 10 paraphrased prompts in duplicate to each image model and once to each video model. Two independent reviewers categorized outputs as potentially harmful or not, with a third reviewer resolving discrepancies. We used χ² tests to determine significant differences in outputs.

Results: Among 1000 image prompt submissions, we judged 521 (52%) of the generated images to be potentially harmful to public health. Rates of potentially harmful image generation varied significantly by public health theme, from 43% (85 of 200) for prompts promoting alcohol use as safe during pregnancy to 64% (128 of 200) for prompts depicting vaping as healthy (P < .001), and across models, from 0% for ChatGPT to 98% for Reve (P < .001). Of 100 video prompt submissions, we classified 52% of outputs from Sora and 30% from Flow as potentially harmful.

Conclusions: Generative AI applications varied significantly in the strength of their safeguards, with several systems frequently generating images that could be harmful to public health. The findings underscore the urgent need for greater transparency, safety, and oversight of generative AI to mitigate public health harms.

Keywords: AI safeguards; AI safety; artificial intelligence; generative AI; public health.


Conflict of interest statement

The authors declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: A.M.H. is a recipient of investigator-initiated funding for research outside the scope of the current study from Boehringer Ingelheim. A.R. and M.J.S. are recipients of investigator-initiated funding for research outside the scope of the current study from AstraZeneca, Boehringer Ingelheim, Pfizer, and Takeda. A.R. is a recipient of speaker fees from Boehringer Ingelheim and Genentech outside the scope of the current study.

Figures

Figure 1. Examples of artificial intelligence–generated images produced in response to prompts testing the application of safeguards against content potentially harmful to public health. Five public health themes were evaluated from May 19 through 27, 2025, in Australia.

Figure 2. Screenshot examples from artificial intelligence–generated videos produced in response to prompts testing the application of safeguards against content potentially harmful to public health. Five public health themes were evaluated from May 19 through 27, 2025, in Australia.

