The Face Deepfake Detection Challenge

Luca Guarnera et al.

J Imaging. 2022 Sep 28;8(10):263. doi: 10.3390/jimaging8100263.
Abstract

Multimedia data manipulation and forgery have never been easier than they are today, thanks to the power of Artificial Intelligence (AI). AI-generated fake content, commonly called Deepfakes, has raised new issues and concerns, but also new challenges for the research community. The Deepfake detection task has been widely addressed, but unfortunately, approaches in the literature suffer from generalization issues. In this paper, the Face Deepfake Detection and Reconstruction Challenge is described. Two different tasks were proposed to the participants: (i) creating a Deepfake detector capable of working in an "in the wild" scenario; (ii) creating a method capable of reconstructing original images from Deepfakes. Real images from CelebA and FFHQ and Deepfake images created by StarGAN, StarGAN-v2, StyleGAN, StyleGAN2, AttGAN and GDWCT were collected for the competition. The winning teams were chosen according to the highest classification accuracy (Task I) and the lowest average Manhattan (L1) distance between reconstructed and original images (Task II). Deep Learning algorithms, particularly those based on the EfficientNet architecture, achieved the best results in Task I. No winner was declared for Task II. A detailed discussion of the teams' proposed methods, with the corresponding rankings, is presented in this paper.
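As a rough illustration of the two ranking criteria, the following sketch (an interpretation of the evaluation, not the official challenge code; the helper names and NumPy usage are assumptions) computes classification accuracy for Task I and the mean per-image Manhattan (L1) distance between original and reconstructed images for Task II.

```python
import numpy as np

def task1_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Task I ranking: fraction of correctly classified images (1 = Deepfake, 0 = real)."""
    return float(np.mean(y_true == y_pred))

def task2_avg_manhattan(originals: np.ndarray, reconstructions: np.ndarray) -> float:
    """Task II ranking (as interpreted here): mean per-image Manhattan (L1)
    distance between original and reconstructed pixels; lower is better."""
    diff = np.abs(originals.astype(np.float64) - reconstructions.astype(np.float64))
    return float(diff.sum(axis=(1, 2, 3)).mean())

# Toy usage with random data; shapes and value ranges are illustrative only.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=100)
y_pred = rng.integers(0, 2, size=100)
print("Task I accuracy:", task1_accuracy(y_true, y_pred))

orig = rng.random((4, 64, 64, 3))
rec = rng.random((4, 64, 64, 3))
print("Task II avg L1 distance:", task2_avg_manhattan(orig, rec))
```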

Keywords: deep learning; deepfake challenge; deepfake detection; deepfake reconstruction; discrete cosine transform; transformer networks.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
Generic scheme of a GAN architecture. The Generator and the Discriminator are its main components. The Generator's objective is to capture the data distribution of the training set, while the Discriminator's goal is to distinguish images produced by the Generator from the training data. When the Generator creates images with the same data distribution as the training set, the Discriminator can no longer solve its task and the training phase can be considered complete.
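The adversarial training loop described in this caption can be sketched as follows (a minimal illustration assuming PyTorch and toy fully connected Generator/Discriminator modules, not the architecture of any GAN used in the challenge):

```python
import torch
import torch.nn as nn

# Toy Generator and Discriminator for 28x28 (784-pixel) images; illustrative only.
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.randn(64, 784)  # stand-in for a batch of flattened training images

for step in range(100):
    # Discriminator step: push real images toward label 1 and generated images toward label 0.
    fake_batch = G(torch.randn(64, 100)).detach()
    d_loss = bce(D(real_batch), torch.ones(64, 1)) + bce(D(fake_batch), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the Discriminator label generated images as real.
    g_loss = bce(D(G(torch.randn(64, 100))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Training is considered converged when the Discriminator can no longer separate the two distributions, i.e., its accuracy on mixed real/generated batches approaches chance.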
Figure 2
Task I: Deepfake detection task. Given a set of Real and Deepfake images created by different GAN engines, the objective is to create a detector able to correctly classify Deepfake images in any scenario.
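The abstract notes that EfficientNet-based classifiers performed best on this task; a minimal sketch of such a binary real/Deepfake detector is shown below, assuming a recent torchvision (the backbone size, head layout, and preprocessing are assumptions and may differ from the winning teams' implementations).

```python
import torch
import torch.nn as nn
from torchvision import models

# EfficientNet-B0 backbone with a 2-way head (real vs. Deepfake); illustrative only.
model = models.efficientnet_b0(weights=None)     # requires torchvision >= 0.13
in_features = model.classifier[1].in_features    # 1280 for the B0 variant
model.classifier[1] = nn.Linear(in_features, 2)

model.eval()
with torch.no_grad():
    batch = torch.randn(4, 3, 224, 224)          # stand-in for preprocessed face crops
    probs = model(batch).softmax(dim=1)
    print(probs[:, 1])                           # assumed probability of the "Deepfake" class
```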
Figure 3
Examples of real (CelebA and FFHQ) and Deepfake images created by different GAN engines (AttGAN, StyleGAN, StyleGAN2, StarGAN, and GDWCT). Each column denotes the source of the images; the rows (Raw images and Manipulated images) show examples without and with attacks, respectively.
Figure 4
Task II: Source image reconstruction task.
Figure 5
Naming scheme of the source images, reference images, and Deepfake images.
Figure 6
Sample output of random-crop with size 128 × 128.
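A minimal sketch of this 128 × 128 random-crop preprocessing, assuming torchvision and a hypothetical input file (the actual augmentation pipeline used in the competition is not reproduced here):

```python
from PIL import Image
from torchvision import transforms

# Randomly crop a 128 x 128 patch from an input image (the image must be at least 128 x 128).
random_crop = transforms.RandomCrop(128)

img = Image.open("example_face.png")   # hypothetical input file
patch = random_crop(img)               # PIL image of size 128 x 128
patch.save("example_face_crop.png")
```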
Figure 7
Model employed by the DC-GAN (Amped Team).
Figure 8
Sample output results with confidence scores: a red label indicates fake, and a green label indicates real.
Figure 9
Convolutional Cross ViT architecture.
Figure 10
Construction of the representation used as input to the model, designed to be robust to the resizing and compression typical of Deepfakes exchanged on the web. General overview (a) and tensor representation (b) [55].
Figure 11
Real samples classified as fake by the “PRA Lab—Div. Biometria” method. In particular, (a) 1420.jpg, (b) 1794.jpg, (c) 3938.jpg, (d) 4184.jpg, and other real images from the competition dataset contain manipulations outside the facial region that mislead the detector.

References

    1. Goodfellow I., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A., Bengio Y. Generative Adversarial Nets; Proceedings of the Advances in Neural Information Processing Systems; Montreal, QC, Canada. 8–13 December 2014; pp. 2672–2680.
    2. Guarnera L., Giudice O., Battiato S. Deepfake Style Transfer Mixture: A First Forensic Ballistics Study on Synthetic Images; Proceedings of the International Conference on Image Analysis and Processing; Lecce, Italy. 23–27 May 2022; Berlin/Heidelberg, Germany: Springer; 2022. pp. 151–163.
    3. Giudice O., Paratore A., Moltisanti M., Battiato S. A Classification Engine for Image Ballistics of Social Data; Proceedings of the International Conference on Image Analysis and Processing; Catania, Italy. 11–15 September 2017; Berlin/Heidelberg, Germany: Springer; 2017. pp. 625–636.
    4. Wang X., Guo H., Hu S., Chang M.C., Lyu S. GAN-generated Faces Detection: A Survey and New Perspectives. arXiv. 2022. arXiv:2202.07145.
    5. Verdoliva L. Media Forensics and Deepfakes: An Overview. IEEE J. Sel. Top. Signal Process. 2020;14:910–932. doi: 10.1109/JSTSP.2020.3002101.
