J Med Imaging (Bellingham). 2024 Jan;11(1):015001.
doi: 10.1117/1.JMI.11.1.015001. Epub 2024 Jan 8.

The Image-to-Physical Liver Registration Sparse Data Challenge: comparison of state-of-the-art using a common dataset


Jon S Heiselman et al. J Med Imaging (Bellingham). 2024 Jan.

Abstract

Purpose: Computational methods for image-to-physical registration during surgical guidance frequently rely on sparse point clouds obtained over a limited region of the organ surface. However, soft tissue deformation makes it difficult to accurately infer anatomical alignment from sparse descriptors of the organ surface. The Image-to-Physical Liver Registration Sparse Data Challenge, introduced at SPIE Medical Imaging 2019, seeks to characterize the performance of sparse data registration methods on a common dataset in order to benchmark current approaches and identify effective tactics and limitations that will continue to inform the evolution of image-to-physical registration algorithms.

Approach: Three rigid and five deformable registration methods were contributed to the challenge. The deformable approaches consisted of two deep learning and three biomechanical boundary condition reconstruction methods. These algorithms were compared on a common dataset of 112 registration scenarios derived from a tissue-mimicking phantom with 159 subsurface validation targets. Target registration errors (TRE) were evaluated under varying conditions of data extent, target location, and measurement noise. Jacobian determinants and strain magnitudes were compared to assess displacement field consistency.
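The central evaluation metric, TRE, is the Euclidean distance between each registered target position and its ground truth counterpart. A minimal sketch of how such per-target errors and their summary statistics could be computed (NumPy, with hypothetical synthetic arrays standing in for the challenge data):

```python
import numpy as np

def target_registration_error(predicted, ground_truth):
    """Per-target Euclidean distances (mm) between registered and true positions."""
    return np.linalg.norm(predicted - ground_truth, axis=1)

# Hypothetical example: 159 targets in 3-D, as in the challenge phantom,
# with simulated registration error of a few millimeters.
rng = np.random.default_rng(0)
truth = rng.uniform(0.0, 100.0, size=(159, 3))
pred = truth + rng.normal(0.0, 2.0, size=(159, 3))

tre = target_registration_error(pred, truth)
print(f"TRE: {tre.mean():.1f} +/- {tre.std():.1f} mm (median {np.median(tre):.1f} mm)")
```

Averaging these distances over the 159 targets of one registration scenario yields the per-scenario mean TRE summarized across the 112 scenarios in the results.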

Results: Rigid registration algorithms produced significant differences in TRE ranging from 3.8±2.4 mm to 7.7±4.5 mm, depending on the choice of technique. Two biomechanical methods yielded TRE of 3.1±1.8 mm and 3.3±1.9 mm, which outperformed optimal rigid registration of targets. These methods demonstrated good performance under varying degrees of surface data coverage and across all anatomical segments of the liver. Deep learning methods exhibited TRE ranging from 4.3±3.3 mm to 7.6±5.3 mm but are likely to improve with continued development. TRE was weakly correlated among methods, with greatest agreement and field consistency observed among the biomechanical approaches.

Conclusions: The choice of registration algorithm significantly impacts registration accuracy and the variability of deformation fields. Among current sparse-data-driven image-to-physical registration algorithms, biomechanical simulations that incorporate task-specific insight into boundary conditions appear to offer the best performance.

Keywords: accuracy; challenge; image guidance; liver; registration; sparse data.


Figures

Fig. 1
Sparse data challenge registration task. (a) A reference mesh of the non-deformed preoperative liver is provided; it must be registered to 112 patterns of intraoperative sparse data that were collected after one of four unknown deformations was applied to the organ. (b) The registration method generates dense displacement fields for mapping the preoperative liver to the intraoperative data frame (rendered with the rigid component removed). (c) Ground truth positions of 159 targets distributed throughout the liver were previously blinded to participants and serve as validation data. Registration performance was assessed via TRE, stratified across variations in clinically relevant factors including measurement noise, data extent, target location, and algorithm initialization. Furthermore, registration performance was compared according to consistency measures of the displacement field, and inter-method similarity was assessed via correlation analysis.
Fig. 2
The 112 patterns of intraoperative surface data provided in the sparse data challenge to drive the image-to-physical registration task. Each intraoperative sparse data pattern was clinically collected during image-guided liver surgery and mapped to the deformed organ phantom. Points associated with the anterior surface of the liver are shown in black, the falciform ligament in red, the right inferior ridge in green, and the left inferior ridge in blue.
Fig. 3
Registration results from the sparse data challenge corresponding to one of the 112 registration scenarios. The top center panel shows the intraoperative ground truth target positions (red) and deformed liver shape (gray), alongside the intraoperative sparse surface data pattern provided for the registration task (black). In all other panels, the resulting target positions predicted by each registration method (blue) are compared against the ground truth target positions (red). The deformed liver shape predicted by each registration method is also shown in gray.
Fig. 4
Distributions of average TRE for each of the 112 registration instances in the sparse data challenge. Means and standard deviations are plotted with red bars. Quantitative measures are reported as mean ± standard deviation (median) in units of mm.
Fig. 5
Mean TRE of rigid registration methods (a) and deformable registration methods (b) compared against the extent of sparse surface data coverage on the liver. Solid lines indicate moving average.
Fig. 6
Distribution of closest-point distances from each validation target position to the intraoperative data points that are provided to drive the registration. Inset illustrates anatomical Couinaud segments marked on the sparse data challenge liver mesh.
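The closest-point distance in Fig. 6 is the distance from each validation target to its nearest intraoperative data point. A brute-force sketch, assuming hypothetical target and surface point arrays rather than the actual challenge geometry:

```python
import numpy as np

def closest_point_distances(targets, surface_points):
    """Distance (mm) from each target to its nearest intraoperative surface point."""
    # Pairwise distance matrix (n_targets x n_points), then row-wise minimum.
    diffs = targets[:, None, :] - surface_points[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)

# Hypothetical stand-in data: 159 subsurface targets vs. a sparse surface patch.
rng = np.random.default_rng(1)
targets = rng.uniform(0.0, 100.0, size=(159, 3))
surface = rng.uniform(0.0, 100.0, size=(500, 3))
dists = closest_point_distances(targets, surface)
```

For larger point sets a spatial index (e.g. a k-d tree) would replace the pairwise matrix, but the brute-force form keeps the definition explicit.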
Fig. 7
Spider plot of average TRE for noise-free registrations (blue) and registrations with added measurement noise (red). Plot shows the mean value as a solid line surrounded by a shaded region of one standard deviation.
Fig. 8
Distribution of 112 average TRE values after deformable registration starting from varied initial poses (Optimal PBR, Manual ICP, and wICP), plotted for the biomechanical method of Heiselman (red), the biomechanical method of Ringel (green), and the deep learning method of Pfeiffer (blue).
Fig. 9
(a) Norm of the rotation-invariant Green strain tensor computed from displacement fields of the deformable registration methods. (b) Jacobian determinant of displacement fields. (c) Distribution of Jacobian determinants evaluated at all validation target locations in the set of sparse data challenge registrations.
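The consistency measures in Fig. 9 follow from the deformation gradient F = I + ∂u/∂X of a displacement field u: the rotation-invariant Green strain tensor is E = ½(FᵀF − I) and the local volume change is det F. A minimal sketch for a single point, assuming a 3×3 displacement gradient has already been estimated from the field:

```python
import numpy as np

def field_consistency(grad_u):
    """Green strain Frobenius norm and Jacobian determinant
    from a 3x3 displacement gradient du/dX."""
    F = np.eye(3) + grad_u                  # deformation gradient F = I + du/dX
    E = 0.5 * (F.T @ F - np.eye(3))         # rotation-invariant Green strain tensor
    return np.linalg.norm(E, 'fro'), np.linalg.det(F)

# Sanity check: a pure rotation produces zero strain and unit Jacobian
# (volume preserved), which is why E is the appropriate rigid-invariant measure.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
strain, jac = field_consistency(R - np.eye(3))
print(strain, jac)   # ~0.0, ~1.0
```

A Jacobian determinant near 1 indicates locally volume-preserving deformation; values far from 1 flag expansion, compression, or fold-over in the displacement field.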
Fig. 10
Correlation matrix of TRE among rigid registrations (blue outline) and deformable registrations (green outline). Biomechanical deformable registrations are emphasized by a dashed green outline.
