IEEE Trans Biomed Eng. 2014 Jan;61(1):149-61. doi: 10.1109/TBME.2013.2278619. Epub 2013 Aug 15.

Robustness and accuracy of feature-based single image 2-D-3-D registration without correspondences for image-guided intervention


Xin Kang et al. IEEE Trans Biomed Eng. 2014 Jan.

Abstract

2-D-to-3-D registration is critical and fundamental in image-guided interventions. It can be achieved from a single image using paired point correspondences between the object and the image. The common assumption that such correspondences can readily be established, however, does not necessarily hold for image-guided interventions. Intraoperative image clutter and imperfect feature extraction may introduce false detections, and, due to the physics of X-ray imaging, the 2-D image point features may be indistinguishable from each other and/or obscured by anatomy. These factors make it difficult to establish correspondences between image features and 3-D data points. In this paper, we propose an accurate, robust, and fast method that accomplishes 2-D-3-D registration from a single image, without the need to establish paired correspondences, in the presence of false detections. We formulate 2-D-3-D registration as a maximum likelihood estimation problem, which is then solved by coupling expectation maximization with particle swarm optimization. The proposed method was evaluated in a phantom study and a cadaver study. In the phantom study, it achieved subdegree rotation errors and submillimeter in-plane (X-Y plane) translation errors. In both studies, it outperformed state-of-the-art methods that do not use paired correspondences and achieved the same accuracy as a state-of-the-art globally optimal method that uses correct paired correspondences.
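As a rough illustration of the coupling described in the abstract, the Python sketch below alternates an EM-style soft-correspondence step with a particle swarm search over the 6-D pose. It is a minimal sketch under stated assumptions: the pinhole projection model, Euler-angle parameterization, Gaussian kernel width sigma, outlier weight, and PSO coefficients are illustrative choices, not the paper's exact formulation.

```python
# Minimal sketch of correspondence-free 2-D-3-D registration:
# an EM-style soft assignment between projected model points and detected
# image points, with the pose update performed by a simple particle swarm.
# All numerical settings below are illustrative assumptions.
import numpy as np

def project(points_3d, pose, focal=1000.0, pp=(512.0, 512.0)):
    """Pinhole projection of Nx3 points under pose = (rx, ry, rz, tx, ty, tz)."""
    rx, ry, rz, tx, ty, tz = pose
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    cam = points_3d @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])
    return focal * cam[:, :2] / cam[:, 2:3] + np.array(pp)

def correspondence_probs(model_2d, image_2d, sigma=5.0, outlier_w=1e-3):
    """E-step: soft assignment p[m, n] of model point m to detected image point n."""
    d2 = ((model_2d[:, None, :] - image_2d[None, :, :]) ** 2).sum(-1)
    lik = np.exp(-d2 / (2.0 * sigma ** 2))
    # Row-normalize with a small outlier mass so unmatched model points get low weight.
    return lik / (lik.sum(axis=1, keepdims=True) + outlier_w)

def expected_cost(pose, model_3d, image_2d, sigma=5.0):
    """Expected reprojection error under the soft correspondences (a surrogate for -E[log-likelihood])."""
    proj = project(model_3d, pose)
    p = correspondence_probs(proj, image_2d, sigma)
    d2 = ((proj[:, None, :] - image_2d[None, :, :]) ** 2).sum(-1)
    return float((p * d2).sum())

def pso_register(model_3d, image_2d, bounds, n_particles=40, n_iters=50, seed=0):
    """Pose update by particle swarm: search the 6-D pose space for the minimal expected cost."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, 6))   # particle positions (candidate poses)
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()
    pbest_cost = np.array([expected_cost(p, model_3d, image_2d) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 1))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([expected_cost(p, model_3d, image_2d) for p in x])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest
```

A call might look like pso_register(model_3d, detected_2d, bounds=[(-0.2, 0.2)]*3 + [(-20, 20), (-20, 20), (800, 1200)]), with angular bounds in radians and translations in millimeters; these ranges, like the function names, are purely hypothetical.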


Figures

Fig. 1
(Left) Plastic femur phantom with the fiducial affixed to it and (right) a typical X-ray image of them taken by the bench system. The Z-axis of the image points out of the paper. The nine beads on the fiducial were used in the experiments, whereas the larger beads affixed to the femur phantom were not used and served as outliers in our experiments.
Fig. 2
Setup of the cadaver study in a well-calibrated environment, and an example X-ray image. The FTRAC fiducial was affixed firmly to the right femur shaft of the cadaver hip. The flat-panel C-arm was used to acquire X-ray images by rotating approximately around the long axis of the right femur.
Fig. 3
Means and standard deviations of the registration errors in (top) rotations, in degrees, and (bottom) translations, in millimeters, on each of the 100 X-ray images over 50 trials. The solid, dash-dot, and dotted lines are the mean values, and the shaded regions show the standard deviations.
Fig. 4
Registration errors of our method in rotations and translations. In the left column, from top to bottom, the plots show the registration errors (in degrees) around the X-, Y-, and Z-axes over 50 trials on 100 images. In the right column, from top to bottom, the plots show the registration errors (in millimeters) along the X-, Y-, and Z-axes. All rotation errors around the X-, Y-, and Z-axes were less than 1° (indicated by the horizontal dashed lines), and all translation errors along the X- and Y-axes were less than 1 mm, whereas the translation errors along the Z-axis were relatively larger.
Fig. 5
Behavior of the proposed method. The detected beads, the initialization, and the final estimate are shown using blue plus signs (“+”), red asterisks (“*”), and red circles (“○”), respectively. Four correspondence maps below the image illustrate the behavior of our method at iterations #0 (initialization), #5, #10, and #50. The correspondence map is a visualization of the correspondence probabilities p_mn, with the horizontal axis indexing the image points and the vertical axis indexing the model points; the higher the value of a block in the map, the higher the correspondence probability. Two enlarged regions give close-up views of (b) the final estimate and (c) an outlier (a large bead on the femur phantom) that is very close to a FTRAC bead.
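A correspondence map of this kind can be rendered directly from the probability matrix, for example the one returned by correspondence_probs() in the earlier sketch. The helper below is a hypothetical illustration using matplotlib, not part of the paper's code.

```python
# Hedged sketch: render a correspondence map like those in Fig. 5 from a
# probability matrix p of shape (num_model_points, num_image_points).
import matplotlib.pyplot as plt

def show_correspondence_map(p, title="Correspondence map"):
    plt.imshow(p, cmap="gray", aspect="auto")   # brighter block = higher probability
    plt.xlabel("Image point index")             # horizontal axis: image points
    plt.ylabel("Model point index")             # vertical axis: model points
    plt.title(title)
    plt.colorbar(label="p_mn")
    plt.show()
```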
Fig. 6
Robustness of the proposed method to a large number of outliers (87 outliers versus 9 correct feature points). The rotation errors for all images were within ±0.5°. The translation errors along the X- and Y-axes were less than 1 mm, and the errors along the Z-axis were relatively large. Notably, comparison with Fig. 3 shows that the registration errors were at the same levels as when there were far fewer outliers.
Fig. 7
Comparison of the registration errors between the proposed method and gOp [41]. For most images, the average errors of the proposed method were the same as those of gOp, and for some images they were smaller. Note that gOp requires correct paired point correspondences, whereas no paired correspondences were used in the proposed method.
Fig. 8
RMS projection errors and example registration results of the different methods. In the image, the green points are the extracted feature points, and the cyan crosses, red plus signs, white circles, and yellow asterisks are the projections using the estimates of SoftPOSIT [25], MV-MI [39], gOp [41], and the proposed method, respectively. The yellow asterisks overlap the white circles, indicating that the proposed method had the same accuracy as gOp.

References

    1. Cleary K, Peters TM. Image-guided interventions: Technology review and clinical applications. Annu Rev Biomed Eng. 2010;12(1):119–142. - PubMed
    2. Markelj P, Tomaževič D, Likar B, Pernuš F. A review of 3D/2D registration methods for image-guided interventions. Med Image Anal. 2012;16(3):642–661. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1361841510000368. - PubMed
    3. Gueziec A, Kazanzides P, Williamson B, Taylor R. Anatomy-based registration of CT-scan and intraoperative X-ray images for guiding a surgical robot. IEEE Trans Med Imag. 1998 Oct;17(5):715–728. - PubMed
    4. Fleute M, Lavallée S. Nonrigid 3-D/2-D registration of images using statistical models. In: Taylor ACC, editor. Medical Image Computing and Computer-Assisted Intervention—MICCAI’99. Berlin, Germany: Springer-Verlag; 1999. pp. 138–147. vol. LNCS 1679.
    5. Feldmar J, Ayache N, Betting F. 3D-2D projective registration of free-form curves and surfaces. Comput Vis Image Understand. 1997;65(3):403–424.
