J Imaging. 2025 Feb 28;11(3):73. doi: 10.3390/jimaging11030073.

Deepfake Media Forensics: Status and Future Challenges


Irene Amerini et al. J Imaging. 2025.

Abstract

The rise of AI-generated synthetic media, or deepfakes, has introduced unprecedented opportunities and challenges across various fields, including entertainment, cybersecurity, and digital communication. Built on advanced frameworks such as Generative Adversarial Networks (GANs) and Diffusion Models (DMs), deepfakes can produce highly realistic yet fabricated content. While these advancements enable creative and innovative applications, they also pose severe ethical, social, and security risks due to their potential misuse. The proliferation of deepfakes has triggered phenomena like "Impostor Bias", a growing skepticism toward the authenticity of multimedia content, further complicating trust in digital interactions. This paper describes a research project, FF4ALL (Detection of Deep Fake Media and Life-Long Media Authentication), for the detection and authentication of deepfakes, focusing on areas such as forensic attribution, passive and active authentication, and detection in real-world scenarios. By exploring both the strengths and limitations of current methodologies, we highlight critical research gaps and propose directions for future advancements to ensure media integrity and trustworthiness in an era increasingly dominated by synthetic media.

Keywords: audio deepfake detection; deepfake attribution and recognition; deepfake authentication techniques; deepfake detection; media forensics.


Conflict of interest statement

The authors declare no conflicts of interest.

Figures

Figure 1
Generic diagram of the deepfake creation process.
Figure 2
(a) The GAN framework consists of a Generator (G) that creates synthetic data $\bar{x}$ from random noise $z$, aiming to learn the data distribution $p_{\text{data}}$, and a Discriminator (D) that distinguishes between real and generated data. Both models are trained simultaneously in an adversarial manner. (b) The Diffusion Model [8] uses a fixed Markov chain to add Gaussian noise to data, approximating the posterior distribution $q(x_t \mid x_{t-1})$ for $t = 1, \dots, T$. The goal is to learn the reverse process $p_\theta(x_{t-1} \mid x_t)$ to generate data by reversing the noise-adding chain, where $x_1, \dots, x_T$ are latent variables with the same dimensionality as $x_0$.
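For concreteness, the two objectives behind these frameworks can be written out as follows. This is the standard formulation from the cited works (Goodfellow et al. for GANs, Ho et al. [8] for DDPMs), not notation reproduced from this paper; $\beta_t$ denotes the forward-process variance schedule.

\[
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
\]

\[
q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\big), \qquad
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\big(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\big)
\]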
Figure 3
Taxonomy of Deepfake Detection Methods. Detection techniques are categorized into four primary groups: general network-based approaches, methods focusing on visual artifacts, approaches targeting biological inconsistencies, and techniques leveraging texture and spatio-temporal consistency. Each category addresses specific characteristics of manipulated content.
Figure 4
Example of deepfake detection features. The full face (a) exhibits visual artifacts, such as unnatural pixel formations around the facial features (e.g., glasses and skin edges). These are indicative of irregularities introduced by generative algorithms or compression distortions. The zoomed-in region (b) around the mouth shows potential texture inconsistencies, such as unnatural blending of lip textures and a lack of smooth transitions typical of natural skin and lip patterns.
Figure 5
A conceptual pipeline for deepfake detection and model recognition, illustrating the process of identifying whether an image is real or synthetic, determining the generative architecture, and tracing the specific model instance used for creation.
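To make the pipeline's structure explicit, the sketch below shows the three stages as composable steps. This is a minimal illustration, not an implementation from the paper; the detector, architecture classifier, and fingerprint matcher are hypothetical placeholder callables.

# Minimal sketch of the three-stage pipeline in Figure 5.
# All names and components here are illustrative placeholders,
# not methods specified by the FF4ALL project.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AttributionResult:
    is_synthetic: bool
    architecture: Optional[str] = None   # e.g., "GAN" vs. "Diffusion"
    model_instance: Optional[str] = None # e.g., a specific generator checkpoint

def analyze(image, detector, arch_classifier, fingerprint_matcher) -> AttributionResult:
    """Stage 1: real vs. synthetic; Stage 2: generative architecture;
    Stage 3: trace the specific model instance (forensic attribution)."""
    if not detector(image):                      # Stage 1: binary detection
        return AttributionResult(is_synthetic=False)
    arch = arch_classifier(image)                # Stage 2: architecture attribution
    instance = fingerprint_matcher(image, arch)  # Stage 3: model-instance tracing
    return AttributionResult(True, arch, instance)

# Example with placeholder callables:
result = analyze(
    image=None,
    detector=lambda img: True,
    arch_classifier=lambda img: "GAN",
    fingerprint_matcher=lambda img, arch: "generator-instance-A",
)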
Figure 6
The TrueFace Dataset [158], comprising 80k fake images generated with StyleGAN models and 70k real images, of which 60k have also been shared on three distinct social networks.

References

    1. Goodfellow I., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A., Bengio Y. Generative Adversarial Nets. Adv. Neural Inf. Process. Syst. 2014;27.
    2. Ho J., Jain A., Abbeel P. Denoising Diffusion Probabilistic Models. Adv. Neural Inf. Process. Syst. 2020;33:6840–6851.
    3. Casu M., Guarnera L., Caponnetto P., Battiato S. GenAI Mirage: The Impostor Bias and the Deepfake Detection Challenge in the Era of Artificial Illusions. Forensic Sci. Int. Digit. Investig. 2024;50:301795. doi: 10.1016/j.fsidi.2024.301795.
    4. Guarnera L., Giudice O., Guarnera F., Ortis A., Puglisi G., Paratore A., Bui L.M., Fontani M., Coccomini D.A., Caldelli R., et al. The Face Deepfake Detection Challenge. J. Imaging. 2022;8:263. doi: 10.3390/jimaging8100263.
    5. Wang S.Y., Wang O., Zhang R., Owens A., Efros A.A. CNN-Generated Images Are Surprisingly Easy to Spot… for Now. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; Seattle, WA, USA, 13–19 June 2020; pp. 8695–8704.
