Front Surg. 2022 May 16;9:878378. doi: 10.3389/fsurg.2022.878378. eCollection 2022.

Development and Validation of a Novel Methodological Pipeline to Integrate Neuroimaging and Photogrammetry for Immersive 3D Cadaveric Neurosurgical Simulation

Sahin Hanalioglu et al. Front Surg. 2022.

Abstract

Background: Visualizing and comprehending 3-dimensional (3D) neuroanatomy is challenging. Cadaver dissection is limited by low availability, high cost, and the need for specialized facilities. New technologies, including 3D rendering of neuroimaging, 3D pictures, and 3D videos, are filling this gap and facilitating learning, but they also have limitations. This proof-of-concept study explored the feasibility of combining the spatial accuracy of 3D reconstructed neuroimaging data with realistic texture and fine anatomical details from 3D photogrammetry to create high-fidelity cadaveric neurosurgical simulations.

Methods: Four fixed and injected cadaver heads underwent neuroimaging. To create 3D virtual models, surfaces were rendered and anatomical structures were segmented from magnetic resonance imaging (MRI) and computed tomography (CT) scans. A stepwise pterional craniotomy procedure was performed with synchronous neuronavigation and photogrammetry data collection. All points acquired in 3D navigational space were imported into and registered within the 3D virtual model space. A novel machine learning-assisted monocular-depth estimation tool was used to create 3D reconstructions of 2-dimensional (2D) photographs. The resulting depth maps were converted into 3D mesh geometry, which was merged with the brain surface anatomy of the 3D virtual model to test the accuracy of the reconstruction. Quantitative measurements were used to validate the spatial accuracy of the 3D reconstructions produced by the different techniques.
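As a minimal sketch of the depth-map-to-mesh step, assuming a pinhole camera model, a per-pixel depth map in millimeters, and NumPy as the tooling (all illustrative choices, not the authors' implementation), a 2D photograph's depth map can be back-projected into a triangulated surface as follows:

```python
# Illustrative sketch (not the authors' code): back-project a monocular depth
# map into a triangle mesh. Camera intrinsics (fx, fy, cx, cy) are assumed inputs.
import numpy as np

def depth_to_mesh(depth, fx, fy, cx, cy):
    """Convert an H x W depth map (mm) into vertices and triangle faces."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Pinhole back-projection of every pixel into a 3D point (camera frame).
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    vertices = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # Two triangles per pixel quad, indexing the flattened vertex grid.
    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    faces = np.concatenate([np.stack([a, b, c], 1), np.stack([b, d, c], 1)])
    return vertices, faces
```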

Results: Successful multilayered 3D virtual models were created using volumetric neuroimaging data. The monocular-depth estimation technique created qualitatively accurate 3D representations of photographs. When the 2 models were merged, 63% of the surface maps matched almost perfectly (mean deviation 0.7 mm, SD 1.9 mm; range, -7 to 7 mm). Maximal distortions were observed at the epicenter and toward the edges of the imaged surfaces. Virtual 3D models provided accurate virtual measurements (margin of error <1.5 mm), as validated by cross-measurements performed in the real-world setting.
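As a minimal sketch of how such a deviation summary can be computed from two already aligned surfaces (sampling the surfaces as point sets and using SciPy's k-d tree are assumptions; the nearest-neighbor distances below are unsigned, whereas the signed range above additionally requires surface normals):

```python
# Illustrative sketch (not the authors' code): coverage and spread of the
# deviation between two aligned surfaces, each sampled as a point set in mm.
import numpy as np
from scipy.spatial import cKDTree

def deviation_stats(photo_pts, model_pts, tol_mm=2.0):
    """photo_pts, model_pts: (N, 3) arrays of aligned surface samples (mm)."""
    d, _ = cKDTree(model_pts).query(photo_pts)   # nearest-neighbor distances
    pct_within = 100.0 * np.mean(d <= tol_mm)    # share of surface within tolerance
    return pct_within, d.mean(), d.std()
```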

Conclusion: The novel technique of co-registering neuroimaging and photogrammetry-based 3D models can (1) substantially supplement anatomical knowledge by adding detail and texture to 3D virtual models, (2) meaningfully improve the spatial accuracy of 3D photogrammetry, (3) allow for accurate quantitative measurements without the need for actual dissection, (4) digitalize the complete surface anatomy of a cadaver, and (5) be used in realistic surgical simulations to improve neurosurgical education.

Keywords: 3D rendering; depth estimation; neuroanatomy; neuroimaging; neurosurgical training; photogrammetry; virtual model.

Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1
The methodological pipeline to fully digitalize the cadavers. Used with permission from Barrow Neurological Institute, Phoenix, Arizona.
Figure 2
Segmentation and rendering process. (A) Neuroimaging data used for 3D rendering of segmented anatomical structures. (B) 3D rendering of segmented anatomical structures. Used with permission from Barrow Neurological Institute, Phoenix, Arizona.
Figure 3
Creation of virtual pterional craniotomy model. (A) Steps to create 3D pterional craniotomy model. (B) Synchronous data acquisition through neuronavigation. (C) Stepwise pterional craniotomy with images in the real-world setting (top panels) and 3D virtual model (bottom panels). Used with permission from Barrow Neurological Institute, Phoenix, Arizona.
Figure 4
Validation of monocular-depth estimation technique using the 3D virtual model. A single operative photograph was used to create a depth map and corresponding 3D mesh. This mesh was then imported into the 3D virtual model space as an object. Two models (3D mesh of the photographic depth map and 3D virtual model) were aligned, and the deviation map between 2 surfaces was created. The deviation map shows that almost two-thirds of the entire surface area matches almost perfectly between the 2 models (within 2-mm limits). The epicenter and edges show the highest degrees of distortion. Colored scale bar indicates the degree of distortion in millimeters. Used with permission from Barrow Neurological Institute, Phoenix, Arizona.
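The legend does not state how the two surfaces were aligned; one plausible approach is rigid iterative closest point (ICP) registration, sketched below with Open3D (the file names, sampling counts, and 5 mm correspondence radius are assumptions, not the authors' method):

```python
# Illustrative sketch (not the authors' method): rigidly align the
# photograph-derived mesh to the neuroimaging-based virtual model with ICP.
import open3d as o3d

photo = o3d.io.read_triangle_mesh("photo_depth_mesh.ply")     # hypothetical file name
model = o3d.io.read_triangle_mesh("virtual_model_brain.ply")  # hypothetical file name

# ICP operates on point clouds, so sample points from both surfaces.
src = photo.sample_points_uniformly(number_of_points=100_000)
tgt = model.sample_points_uniformly(number_of_points=100_000)

# Rigid point-to-point ICP; 5 mm is an assumed correspondence search radius.
icp = o3d.pipelines.registration.registration_icp(
    src, tgt, 5.0,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
src.transform(icp.transformation)  # bring the photograph mesh into model space
```

The aligned point clouds can then be passed to a deviation summary such as the one sketched after the Results paragraph to produce the color-coded map.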
Figure 5
Adding vasculature to 3D virtual models. (A) Vascular structures are visible on MRI (coronal, top left; axial, top right; sagittal, bottom left). However, the delineation of vascular structures in 3D models (bottom right) is not practical or accurate unless a contrast medium is used before the vessels are injected. (B) An alternative method is to artificially add or draw arteries (red lines) or veins (blue lines) using certain anatomical landmarks acquired via neuronavigation and photography (green dots). Used with permission from Barrow Neurological Institute, Phoenix, Arizona.
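For panel (B), a vessel drawn through navigated landmark points can be represented as a smooth centerline; a minimal sketch, assuming ordered (x, y, z) landmarks in model-space millimeters and SciPy's parametric spline fit (not the authors' tooling):

```python
# Illustrative sketch (not the authors' code): interpolate a smooth vessel
# centerline through ordered landmark points captured with the navigation probe.
import numpy as np
from scipy.interpolate import splprep, splev

def vessel_centerline(landmarks_mm, n_samples=200, smooth=0.0):
    """landmarks_mm: (N, 3) ordered points along one artery or vein (N >= 4)."""
    tck, _ = splprep(landmarks_mm.T, s=smooth)   # fit a parametric cubic spline
    u = np.linspace(0.0, 1.0, n_samples)
    return np.column_stack(splev(u, tck))        # (n_samples, 3) polyline in mm
```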
Figure 6
Measurements in the real-world setting using a photograph (A) correspond to those in the 3D virtual model (B). Ruler and values shown are in millimeters. Used with permission from Barrow Neurological Institute, Phoenix, Arizona.
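A virtual measurement of this kind reduces to the Euclidean distance between two points picked in model space; a minimal sketch with purely hypothetical coordinates:

```python
# Minimal sketch: a virtual measurement is the straight-line distance between
# two points picked on the 3D model (coordinates in mm; values are hypothetical).
import numpy as np

def virtual_distance_mm(p1, p2):
    return float(np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float)))

# Hypothetical landmarks compared against a hypothetical physical measurement.
error_mm = abs(virtual_distance_mm([12.1, -4.0, 33.2], [30.4, -7.5, 49.0]) - 24.0)
```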
Figure 7
Three rotations (columns) of the real-world specimen and the two 3D models. Cadaveric photographs are shown in the top row. Both the 3D images obtained from a single 2D photograph via the monocular-depth estimation technique (middle row) and the 3D virtual model generated from neuroimaging data (bottom row) can be rotated and matched in 3D space. Used with permission from Barrow Neurological Institute, Phoenix, Arizona.
