PLoS One. 2019 Jan 23;14(1):e0210257. doi: 10.1371/journal.pone.0210257. eCollection 2019.

A method for automatic forensic facial reconstruction based on dense statistics of soft tissue thickness

Thomas Gietzen et al. PLoS One. 2019.

Abstract

In this paper, we present a method for automated estimation of a human face from given skull remains. Our proposed method is based on three statistical models: a volumetric (tetrahedral) skull model encoding the variations of different skulls, a surface head model encoding the head variations, and a dense statistic of facial soft tissue thickness (FSTT). All data are automatically derived from computed tomography (CT) head scans and optical face scans. In order to obtain a proper dense FSTT statistic, we register a skull model to each skull extracted from a CT scan and determine, for each vertex of the skull model, the FSTT value towards the associated extracted skin surface. The FSTT values at predefined landmarks from our statistic agree well with data from the literature. To recover a face from skull remains, we first fit our skull model to the given skull. Next, at each vertex of the registered skull, we generate a sphere whose radius is the respective FSTT value obtained from our statistic. Finally, we fit a head model to the union of all spheres. The proposed automated method enables a probabilistic face estimation that facilitates forensic recovery even from incomplete skull remains. The FSTT statistic allows the generation of plausible head variants, which can be adjusted intuitively using principal component analysis. We validate our face recovery process using an anonymized head CT scan. The estimation generated from the given skull compares well visually with the skin surface extracted from the CT scan itself.
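
To make the three stages of the recovery pipeline concrete, the following is a minimal NumPy sketch. The objects skull_model, fstt_stats, and head_model, and their methods fit() and mean_per_vertex(), are hypothetical placeholders standing in for the paper's statistical models; they are not an existing API, and the sphere-sampling step is only one plausible way to approximate the union of spheres.

```python
import numpy as np

def estimate_face(skull_vertices, skull_model, fstt_stats, head_model):
    # 1) Fit the parametric skull model to the given skull remains.
    fitted_skull = skull_model.fit(skull_vertices)        # (n, 3) skull vertices

    # 2) Look up one FSTT value per skull vertex (here: the statistical mean)
    #    and treat it as the radius of a sphere centred at that vertex.
    radii = fstt_stats.mean_per_vertex()                  # (n,) values in mm

    # 3) Fit the parametric head model so that its skin surface approximates
    #    the outer envelope of the union of all spheres.
    target_cloud = sample_sphere_union(fitted_skull, radii)
    return head_model.fit(target_cloud)

def sample_sphere_union(centers, radii, n_dirs=64, seed=0):
    """Sample points on every sphere; a surface fitted to this cloud
    approximates the outer envelope of the union of spheres."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit directions
    pts = centers[:, None, :] + radii[:, None, None] * dirs[None, :, :]
    return pts.reshape(-1, 3)                             # (n * n_dirs, 3)
```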

Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Overview of our model generation processes.
Generation of a skull and a head model as well as a dense FSTT statistic from multimodal input data.
Fig 2
Fig 2. Skull variants along the two principal components with the largest eigenvalues.
We visualize s̄ + α₁u₁ + α₂u₂, where αᵢ = aᵢσᵢ, i = 1, 2, is the weight along the corresponding eigenvector uᵢ, combining the standard deviation σᵢ with a factor aᵢ ∈ {−2, 0, 2}.
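
The variants shown here follow directly from this linear model. Below is a minimal NumPy sketch, assuming the mean shape, the principal components, and the per-component standard deviations are already available; the array names are illustrative and not taken from the paper.

```python
import numpy as np

def shape_variant(mean_shape, eigvecs, sigmas, factors):
    """Return mean_shape + sum_i a_i * sigma_i * u_i.

    mean_shape : (3n,)   flattened mean vertex coordinates (placeholder name)
    eigvecs    : (3n, k) principal components u_i as columns
    sigmas     : (k,)    standard deviations sigma_i along each component
    factors    : (k,)    dimensionless weights a_i, e.g. -2, 0, or +2
    """
    alphas = np.asarray(factors) * np.asarray(sigmas)   # alpha_i = a_i * sigma_i
    return mean_shape + eigvecs @ alphas

# Example: the variant at a_1 = -2, a_2 = +2 along the first two components.
# variant = shape_variant(s_mean, U[:, :2], sigma[:2], [-2.0, 2.0])
```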
Fig 3
Fig 3. Statistic of the FSTT on a mean skull.
Mean and standard deviation of FSTT computed from the 43 CT scans.
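
Once every scan's FSTT values are registered to the common skull model, such per-vertex statistics reduce to simple column-wise moments. A minimal sketch under that assumption (variable names are illustrative, not from the paper):

```python
import numpy as np

def per_vertex_statistics(fstt):
    """Per-vertex FSTT statistics.

    fstt : (n_scans, n_vertices) array of FSTT values in mm, one row per
           registered CT scan (43 rows in the paper's data set); missing
           measurements can be encoded as NaN and are ignored per vertex.
    """
    mean = np.nanmean(fstt, axis=0)                 # (n_vertices,) mean FSTT
    std = np.nanstd(fstt, axis=0, ddof=1)           # sample standard deviation
    count = np.sum(~np.isnan(fstt), axis=0)         # samples per vertex (cf. Fig 4)
    return mean, std, count
```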
Fig 4
Fig 4. Basis for the statistical evaluation of the FSTT.
From left to right: example of a fitted skull (white) with the corresponding extracted skull (black wireframe), the validation mask (corresponding to the example on the left), and the number of samples used per vertex in the FSTT statistic of Fig 3.
Fig 5
Fig 5. FSTT for commonly used midline and bilateral landmarks.
FSTT at landmarks defined by [17] as produced by our method (red dots), in relation to pooled data from a recent meta-analysis [18] (weighted mean ± weighted standard deviation shown as blue error bars).
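
For reference, pooled values of this kind are commonly obtained by weighting each study's mean by its sample size. A minimal sketch under that assumption; the exact pooling used in [18] may differ, and the function below is only an illustration.

```python
import numpy as np

def pooled_weighted_stats(means, sds, ns):
    """Sample-size-weighted mean and SD across studies (one common pooling choice)."""
    means, sds, ns = map(np.asarray, (means, sds, ns))
    w = ns / ns.sum()                                   # per-study weights
    wmean = np.sum(w * means)                           # weighted mean
    # Pooled variance: within-study spread plus between-study spread.
    wvar = np.sum(w * (sds**2 + (means - wmean) ** 2))
    return wmean, np.sqrt(wvar)
```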
Fig 6
Fig 6. Head variants along the two principal components with the largest eigenvalues.
We visualize h̄ + β₁v₁ + β₂v₂, where βᵢ = bᵢσᵢ, i = 1, 2, is the weight along the corresponding eigenvector vᵢ, combining the standard deviation σᵢ with a factor bᵢ ∈ {−2, 0, 2}.
Fig 7
Fig 7. Processing steps of the automatic forensic facial reconstruction.
The reconstruction of a face from a given input skull utilizing the generated parametric skull model, the statistic of FSTT, and the parametric head model.
Fig 8
Fig 8. FSTT for a given individual visualized as sphere model.
At each skull vertex, a sphere whose radius is the actual FSTT value from the ground-truth data set is drawn. From left to right: example spheres for points on the midline, and the union of all spheres (green) with the original skin surface as overlay.
Fig 9
Fig 9. Landmarks for the automatic facial reconstruction.
From left to right: mean skull with preselected landmarks, sphere model based on mean FSTT with projected landmarks, and mean head with preselected landmarks. The landmarks consist of two midline landmarks and four bilateral landmarks, which are selected once on the parametric skull and head model after model generation. They follow the nomenclature proposed in [17]: the craniometric landmarks nasion, menton, mid-supraorbitale, and porion, as well as the capulometric landmarks ciliare lateralis and ciliare medialis, together with their corresponding counterparts on the skull and skin surface, respectively.
Fig 10
Fig 10. Variants of plausible FSTT distributions for the anonymized given skull.
Top: Partial sphere model variants along the two principal components with the largest eigenvalues. We visualize t̄ + γ₁w₁ + γ₂w₂, where γᵢ = cᵢσᵢ, i = 1, 2, is the weight along the corresponding eigenvector wᵢ, combining the standard deviation σᵢ with a factor cᵢ ∈ {−2, 0, 2}. Bottom: Head model fitted to these partial sphere models.
Fig 11
Fig 11. Skull fitting results for a given skull.
Extracted skull from CT (left) and fitted skull (right).
Fig 12
Fig 12. Head fittings with color-coded distances (in mm) to the original skin surface extracted from CT (last column).
First three columns, from left to right: head fitted to the sphere model based on a) mean FSTT (RMSE 4.04 mm), b) the best fit in PCA space (RMSE 1.99 mm), and c) the original FSTT (RMSE 1.32 mm).
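
The RMSE values reported here can be read as the root mean square of closest-point distances between the fitted head surface and the skin surface extracted from CT. A minimal sketch of such a measurement, assuming both surfaces are given as point sets and using a nearest-neighbour search; this is an illustrative metric, not necessarily the exact evaluation procedure of the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def rmse_to_surface(fitted_points, reference_points):
    """RMSE of closest-point distances from a fitted head surface (m, 3)
    to the reference skin surface extracted from CT (k, 3), both in mm."""
    dists, _ = cKDTree(reference_points).query(fitted_points)
    return float(np.sqrt(np.mean(dists**2)))
```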

References

    1. Claes P, Vandermeulen D, De Greef S, Willems G, Suetens P. Statistically Deformable Face Models for Cranio-Facial Reconstruction. Journal of Computing and Information Technology. 2006;14:21–30. doi: 10.2498/cit.2006.01.03
    2. Wilkinson C. Facial reconstruction—anatomical art or artistic anatomy? Journal of Anatomy. 2010;216(2):235–50. doi: 10.1111/j.1469-7580.2009.01182.x
    3. Turner W, Brown R, Kelliher T, Tu P, Taister M, Miller K. A novel method of automated skull registration for forensic facial approximation. Forensic Science International. 2005;154:149–158. doi: 10.1016/j.forsciint.2004.10.003
    4. Tu P, Book R, Liu X, Krahnstoever N, Adrian C, Williams P. Automatic Face Recognition from Skeletal Remains. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2007. p. 1–7.
    5. Romeiro R, Marroquim R, Esperança C, Breda A, Figueredo CM. Forensic Facial Reconstruction Using Mesh Template Deformation with Detail Transfer over HRBF. In: 27th SIBGRAPI Conference on Graphics, Patterns and Images; 2014. p. 266–273.
