Review

Forensic Sci Int Synerg. 2020 May 30;2:540-562. doi: 10.1016/j.fsisyn.2020.01.017. eCollection 2020.

Interpol review of imaging and video 2016-2019

Zeno Geradts et al.

Abstract

This review paper covers the forensic-relevant literature in imaging and video analysis from 2016 to 2019 as a part of the 19th Interpol International Forensic Science Managers Symposium. The review papers are also available at the Interpol website at: https://www.interpol.int/content/download/14458/file/Interpol%20Review%20Papers%202019.pdf.

Keywords: Image analysis; Imaging; Interpol; Video.


Figures

Fig. 1
Example of image manipulation that appeared in the press in July 2008. (a) The forged image displaying four missiles; only three of them are real, and two different sections (encircled in red and orange, respectively) are replicates of other image sections. (b) The original image showing only three missiles [6]. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)
Fig. 2
Left: A regular 3-layer (2 hidden and 1 output) neural network with one-dimensional layers. Right: A convolutional neural network with the neurons arranged in three dimensions. Every layer transforms the 3D input volume into a 3D output volume of neuron activations. The red input layer holds the image, with its width and height equal to the spatial dimensions of the image, and a depth of 3 (the colour channels Red, Green, Blue) [11]. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)
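The volume-to-volume behaviour described in this caption can be illustrated with a few lines of PyTorch; the input size (64 × 64) and the number of filters (16) below are assumptions chosen only for the example, not values from the reviewed papers.

```python
# Minimal sketch: a 3-channel (RGB) input volume is mapped by one
# convolutional layer to a 3D output volume whose depth equals the
# number of learned filters. Shapes here are illustrative assumptions.
import torch
import torch.nn as nn

x = torch.rand(1, 3, 64, 64)            # batch of one RGB image: depth 3, 64 x 64
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
y = conv(x)                             # output volume: depth 16, 64 x 64
print(tuple(x.shape), "->", tuple(y.shape))   # (1, 3, 64, 64) -> (1, 16, 64, 64)
```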
Fig. 3
CNN architecture as proposed by Kim and Lee [15], consisting of 1 HPF (high-pass filter), 2 convolutional layers, 2 max pooling layers, and 2 fully connected layers with a softmax function for classification. The network's input is a 256 × 256 grayscale image.
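As a rough, non-authoritative illustration of this layer sequence, the following PyTorch sketch stacks a fixed high-pass filter, two convolutional layers, two max pooling layers, and two fully connected layers with softmax for a 256 × 256 grayscale input. The filter counts, kernel sizes, and the particular high-pass kernel are assumptions; Kim and Lee's exact hyperparameters are not reproduced here.

```python
# Hedged sketch of the Fig. 3 layer sequence (HPF -> 2 conv -> 2 max pool
# -> 2 fully connected + softmax). All hyperparameters are assumptions.
import torch
import torch.nn as nn

class KimLeeStyleCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Fixed (non-trainable) high-pass filter; a Laplacian-style kernel
        # is assumed here as a stand-in for the HPF used in the paper.
        hpf_kernel = torch.tensor([[[[-1., -1., -1.],
                                     [-1.,  8., -1.],
                                     [-1., -1., -1.]]]])
        self.hpf = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
        self.hpf.weight = nn.Parameter(hpf_kernel, requires_grad=False)

        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                      # 256 -> 128
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                      # 128 -> 64
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 64, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):                         # x: (N, 1, 256, 256)
        x = self.hpf(x)
        x = self.features(x)
        return torch.softmax(self.classifier(x), dim=1)

model = KimLeeStyleCNN()
probs = model(torch.rand(1, 1, 256, 256))         # class probabilities
```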
Fig. 4
CNN architecture as proposed by Bayar and Stamm [9], consisting of 1 constrained convolutional layer, 2 convolutional layers, 2 max pooling layers, and 3 fully connected layers with a softmax function for classification. The network's input is a 227 × 227 grayscale image.
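The distinguishing element of this architecture is the constrained convolutional layer, whose filters are forced to act as prediction-error (residual) filters: the centre weight is fixed to -1 and the remaining weights are rescaled to sum to 1 after every training step. A minimal PyTorch sketch of that constraint is given below; the filter count and kernel size are illustrative assumptions.

```python
# Sketch of the weight constraint behind the constrained convolutional layer
# of Bayar and Stamm [9]: after each optimiser step, each filter's centre
# weight is set to -1 and the remaining weights are rescaled to sum to 1,
# so the layer suppresses image content and learns residual features.
# Filter count and kernel size are assumptions for illustration.
import torch
import torch.nn as nn

constrained = nn.Conv2d(1, 3, kernel_size=5, padding=2, bias=False)

def enforce_constraint(conv: nn.Conv2d) -> None:
    with torch.no_grad():
        w = conv.weight                        # shape: (out_ch, in_ch, 5, 5)
        c = w.shape[-1] // 2                   # centre index
        w[:, :, c, c] = 0.0
        w /= w.sum(dim=(2, 3), keepdim=True)   # remaining weights sum to 1
        w[:, :, c, c] = -1.0                   # centre weight fixed to -1

enforce_constraint(constrained)                # call after every training step
```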
Fig. 5
Baseline CNN architecture as proposed by Bayar and Stamm [10], consisting of 1 constrained convolutional layer, 3 convolutional layers with PReLU activation functions, 2 max pooling layers and 1 average pooling layer, and 3 fully connected layers with a softmax function for classification. The network's input is the 256 × 256 green channel of the image. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)
Fig. 6
CNN architecture as proposed by Bayar and Stamm [12], consisting of 1 constrained convolutional layer, 4 convolutional layers, 3 max pooling layers, 1 average pooling layer, and 3 fully connected layers with a softmax function or extremely randomised trees for classification. The network's input is a 256 × 256 grayscale image [12].
Fig. 7
The constrained weights of a 5 × 5 isotropic filter as proposed by Chen et al. [18].
Fig. 8
Rotation-invariant CNN architecture as proposed by Chen et al. [18], consisting of 1 constrained isotropic convolutional layer, 4 isotropic convolutional layers, 4 pooling layers, and 3 fully connected layers with a softmax function.
Fig. 9
CNN architecture as proposed by Chen et al. [7] with 8 layer groups, the first being the isotropic convolutional layer and the second to eighth being traditional convolutional layers. Additionally, it has 3 transition layers, 3 max pooling layers, and 1 fully connected layer.
Fig. 10
CNN architecture as proposed by Boroumand and Fridrich [28], consisting of 8 convolutional layers with ReLU activation and batch normalisation, 7 average pooling layers, and 1 fully connected layer with a softmax function.
Fig. 11
Graphical representation of the approach proposed by Meyer et al. [32]: (a) transfer learning and (b) multitask learning, both shown with an example share depth up to the first fully connected layer (fc1).
Fig. 12
[2].
Fig. 13
CNN architecture as proposed by Yu et al. The network's input is a 128 × 128 grayscale image (16,384 neurons). The architecture has 5 convolutional layers, 2 pooling layers, and 1 fully connected layer connected to the output layer through a softmax function [9].

References

    1. Birajdar Gajanan K., Mankar Vijay H. Digital image forgery detection using passive techniques: a survey. Digit. Invest. 2013;10(3):226–245.
    2. Bayar Belhassen, Stamm Matthew C. Proceedings of 5th ACM Workshop on Information Hiding and Multimedia Security. ACM; Philadelphia, Pennsylvania, USA: 2017. A generic approach towards image manipulation parameter estimation using convolutional neural networks; pp. 147–157.
    3. Redi Judith A., Taktak Wiem, Dugelay Jean-Luc. Digital image forensics: a booklet for beginners. Multimed. Tool. Appl. 2011;51(1):133–162.
    4. Piva Alessandro. An overview on image forensics. ISRN Signal Processing. 2013;2013:1–22.
    5. Stamm Matthew C., Wu Min, Ray Liu K.J. Information forensics: an overview of the first decade. IEEE Access. 2013;1:167–200.

