Sci Rep. 2025 Dec 13;15(1):43973.
doi: 10.1038/s41598-025-27784-2.

Intraoperative 3D reconstruction from sparse arbitrarily posed real X-rays



Sascha Jecklin et al. Sci Rep.

Abstract

Spine surgery is a high-risk intervention demanding precise execution, often supported by image-based navigation systems. Recently, supervised learning approaches have gained attention for reconstructing 3D spinal anatomy from sparse fluoroscopic data, significantly reducing reliance on radiation-intensive 3D imaging systems. However, these methods typically require large amounts of annotated training data and may struggle to generalize across varying patient anatomies or imaging conditions. Instance-learning approaches such as Gaussian splatting could offer an alternative by avoiding extensive annotation requirements. While Gaussian splatting has shown promise for novel view synthesis, its application to sparse, arbitrarily posed real intraoperative X-rays has remained largely unexplored. This work addresses this limitation by extending the R²-Gaussian splatting framework to reconstruct anatomically consistent 3D volumes under these challenging conditions. We introduce an anatomy-guided radiographic standardization step based on style transfer, which improves visual consistency across views and enhances reconstruction quality. Notably, our framework requires no pretraining, making it inherently adaptable to new patients and anatomies. We evaluated our approach on an ex-vivo dataset. Expert surgical evaluation confirmed the clinical utility of the 3D reconstructions for navigation, especially when using 20-30 views, and highlighted the benefit of standardization for anatomical clarity. Benchmarking with quantitative 2D metrics (PSNR/SSIM) confirmed performance trade-offs compared to idealized settings, but also validated the improvement gained from standardization over raw inputs. This work demonstrates the feasibility of instance-based volumetric reconstruction from arbitrary sparse-view X-rays, advancing intraoperative 3D imaging for surgical navigation. Code and data to reproduce our results are made available at https://github.com/MrMonk3y/IXGS.

Keywords: Computer-assisted orthopedic surgery; Domain adaptation; Gaussian splatting; Intraoperative 3D reconstruction; Sparse-view X-ray; Surgical navigation.


Conflict of interest statement

Declarations. Competing Interests: S.J. reports that financial support for his doctoral studies was provided by the Monique Dornonville de la Cour Foundation and an internal Balgrist University Hospital fund. M.F. reports a relationship with X23D AG that includes equity or stocks. P.F. reports a relationship with X23D AG that includes board membership and equity or stocks. P.F. and M.F. have patent #WO2023156608A1 pending to University of Zurich related to prior work. P.F., M.F., and S.J. have patent “A computer-implemented method, device, system and computer program product for processing anatomic imaging data” pending to University of Zurich related to prior work. The remaining authors (A.M., R.Z., L.C., C.J.L.) declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Ethics approval: The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the local ethical committee (KEK Zurich BASEC No. 2021-01083). The body donations were obtained from ScienceCare USA. All individuals had given the necessary consent prior to their death.

Figures

Fig. 1
Comparison between conventional circular acquisition paths, typical for CBCT/CT imaging or idealized synthetic DRR generation (left), and the irregular, arbitrary acquisition poses representative of real intraoperative settings (right).
Fig. 2
Pipeline overview. The training of Pix2Pix (blue) uses paired real X-rays and synthetic DRRs to learn style transfer. During inference (green), real X-rays with calibration information are converted to style-transferred images. These images, along with their poses, are passed to the Gaussian splatting network, which outputs 3D reconstructions. The resulting volume can be visualized as slices or rendered from arbitrary viewpoints (orange).
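To make the role of the synthetic DRRs concrete: a digitally reconstructed radiograph approximates an X-ray by integrating attenuation along rays through a CT volume. The sketch below uses a simple parallel-beam sum over one axis purely for illustration; the paper's pipeline uses calibrated, arbitrarily posed projection geometry, and the function name here is our own.

```python
import numpy as np

def parallel_drr(volume, axis=0):
    """Toy DRR: integrate attenuation along parallel rays by summing
    the volume over one axis. Real DRR generation traces cone-beam
    rays defined by the calibrated camera pose."""
    return volume.sum(axis=axis)

# A 3D "CT volume" containing one dense voxel column along the ray axis.
vol = np.zeros((4, 8, 8))
vol[:, 3, 5] = 1.0                 # dense structure aligned with the rays
drr = parallel_drr(vol, axis=0)
print(drr.shape)                   # → (8, 8)
print(drr[3, 5])                   # → 4.0 (accumulated attenuation)
```

Dense structures thus project to bright (high line-integral) pixels, which is the appearance Pix2Pix learns to reproduce from real X-rays.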
Fig. 3
Comparison of 3D reconstructions of the lumbar spine. Each block shows axial, coronal, and sagittal slices of the reconstructed volume, overlaid with alpha-blended masks of the segmented lumbar spine from the ground-truth CT for easier accuracy assessment. The columns compare reconstructions using 25 views (left) and 50 views (right). The top row shows reconstructions from the baseline, while the bottom row shows reconstructions from our approach.
Fig. 4
Comparison of slices from volumes reconstructed using different inputs and methods. From left to right: the ground-truth CT volume; reconstruction from 50 synthetic DRRs generated along a circular acquisition path; reconstruction from 50 real X-rays acquired from arbitrary poses; and reconstruction from 50 style-transferred X-rays (our approach).
Fig. 5
Comparison of 3D reconstructions of the lumbar spine. Each block shows AP, lateral, and isometric views of the reconstructed volume. The columns compare reconstructions using 25 views (left) and 50 views (right). The top row shows reconstructions from the baseline, while the bottom row shows reconstructions from our approach.
Fig. 6
Evaluation of reconstruction quality using Likert ratings from expert surgeons. (a) and (b) show ratings over 5-50 views, with (a) assessing 3D volumes and (b) assessing slice representations. See Sect. “Experiments and performance evaluation” for details on the different Likert scales used.
Fig. 7
Evaluation of novel view synthesis quality from unseen poses using PSNR and SSIM metrics over varying numbers of views. Higher scores indicate better reconstruction quality.
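For reference, both 2D metrics used here have standard closed forms. Below is a minimal NumPy sketch of PSNR and a simplified global (unwindowed) SSIM; the function names are our own, and published SSIM results typically use the standard sliding-window formulation, so this is illustrative only.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB (higher = closer to reference)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM over the whole image instead of the usual
    sliding window (illustration only)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
noisy = img + 0.1                       # uniform 0.1 error on a unit-range image
print(round(psnr(img, noisy), 1))       # → 20.0
print(round(ssim_global(img, img), 3))  # → 1.0
```

A uniform error of 0.1 on a unit-range image gives an MSE of 0.01, hence 10·log10(1/0.01) = 20 dB, which helps calibrate the PSNR curves in the figure.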

