Review
Eur J Nucl Med Mol Imaging. 2021 Dec;48(13):4201-4224.
doi: 10.1007/s00259-021-05445-6. Epub 2021 Jun 29.

How molecular imaging will enable robotic precision surgery: The role of artificial intelligence, augmented reality, and navigation


Thomas Wendler et al. Eur J Nucl Med Mol Imaging. 2021 Dec.

Abstract

Molecular imaging is one of the pillars of precision surgery. Its applications range from early diagnostics to therapy planning, execution, and the accurate assessment of outcomes. In particular, molecular imaging solutions are in high demand in minimally invasive surgical strategies, such as the rapidly growing field of robotic surgery. This review aims to connect the molecular imaging and nuclear medicine community to the rapidly expanding armory of surgical medical devices. Such devices encompass technologies ranging from artificial intelligence and computer-aided visualization (software) to innovative molecular imaging modalities and surgical navigation (hardware). We discuss these technologies according to their role at the different steps of the surgical workflow, i.e., from surgical decision-making and planning, through target localization and excision guidance, to (back table) surgical verification. This provides a glimpse of how innovations from these technology fields can realize an exciting future for the molecular imaging and surgery communities.

Keywords: Artificial intelligence; Augmented reality; Image-guided surgery; Molecular imaging; Precision surgery; Robotic surgery.


Conflict of interest statement

TW is a consultant for technology developments for the medical device companies SurgicEye and Crystal Photonics. FvL is a consultant for the medical device company Hamamatsu Photonics. The remaining authors do not have any conflict of interest to disclose.

Figures

Fig. 1
A Components of a robotic telemanipulator system. The surgeon operates the robot from a console connected to the robotic arms via a central data processing unit, where the video signal of the laparoscope is also processed. B Molecular images, such as PET, SPECT, or scintigraphy, and so-called metadata of the patient are fed to the data processing unit, where intraoperative information is merged and shown on a single central display. There, augmented reality (AR) image overlays can be shown together with the instruments and signals from the surgery (theoretical possibilities indicated). As a result of the procedure, the diseased tissue is removed, yielding an outcome, e.g., the status of the resection borders. C Molecular imaging-enhanced robotic surgery can be abstracted as surgical decision/planning, target localization, intraoperative decision/planning, excision, and surgical verification, all of which are interconnected
Fig. 2
Example of a convolutional neural network (CNN), a standard AI algorithm. CNNs can be applied to multiorgan image segmentation using molecular imaging and metadata. Here, the CNN reads anatomical images (CT or MR, gray) and molecular images (PET/SPECT/scintigraphy, red). After initial processing, it concatenates their features (pink). The network then reduces the input's dimensionality (i.e., brings the images from a size of, e.g., 128×128×128 = 2,097,152 values down to only 512 values representing both of them). These 512 parameters are further concatenated with the 100 metadata parameters (yellow) fed into the network's bottleneck. The 512 + 100 = 612 parameters contain a compressed high-level representation of the image and patient information. Based on the image and metadata, the network then solves the target task (here organ segmentation, orange) while increasing dimensionality again (i.e., upscaling from 612 parameters back to 128×128×128 = 2,097,152)
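The dimensionality bookkeeping in this caption can be sketched in plain Python. This is only an illustration of the sizes involved (encoder input, bottleneck, decoder output), not an implementation of the network; all names and constants are taken from the caption or are illustrative.

```python
# Shape bookkeeping for the encoder-bottleneck-decoder CNN of Fig. 2.
# No actual network is built; we only track the sizes the caption cites.

IMAGE_SHAPE = (128, 128, 128)   # anatomical (CT/MR) and molecular (PET/SPECT) volumes
N_IMAGE_FEATURES = 512          # compressed representation of both image inputs
N_METADATA = 100                # patient metadata parameters

def n_voxels(shape):
    """Number of scalar values in a volume of the given shape."""
    n = 1
    for dim in shape:
        n *= dim
    return n

# Encoder: each 128x128x128 volume is reduced to 512 feature values.
input_size = n_voxels(IMAGE_SHAPE)               # 2,097,152

# Bottleneck: image features concatenated with metadata.
bottleneck_size = N_IMAGE_FEATURES + N_METADATA  # 512 + 100 = 612

# Decoder: upscales back to full resolution for voxel-wise organ labels.
output_size = n_voxels(IMAGE_SHAPE)              # 2,097,152

print(input_size, bottleneck_size, output_size)
```

The roughly 3400-fold compression from input to bottleneck (2,097,152 → 612) is what forces the network to keep only a high-level representation of the image and patient information before the segmentation decoder expands it back to full resolution.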
Fig. 3
Possible registration options for bringing molecular images (SPECT/PET or MRI), via anatomical images (CT/MR), into the robot's coordinate system
Fig. 4
Different VR/AR visualization options applicable to robotic surgery. A Visualization of PET/CT in three planes (axial, coronal, and sagittal) and a 3D rendering, including an overlay of segmented organs. B VR view of a PET/CT image as used for guidance in PSMA-guided surgery. C VR view of intraoperative TRUS in the context of the TRUS probe and the robot instruments. Image courtesy of Tim Salcudean, UBC, Canada. D Segmented organs overlaid on the patient's body as AR for port placement planning [80]. E AR visualization of freehand SPECT images showing sentinel lymph nodes in endometrial cancer surgery
Fig. 5
Depth perception improvement methods for AR: A virtual mirror, B curvature-dependent transparency, C virtual shadows, D object subtraction; E, F example of a new paradigm of AR visualization in which only relevant information is overlaid on the real image, versus standard AR
Fig. 6
Non-exhaustive overview of current and possible future technologies for robotic intraoperative molecular imaging, separated by dimensionality and development status (green, commercially available; orange, research prototypes available; red, potential developments). Non-imaging devices are classified as zero-dimensional because they use a single-pixel detector rather than a line detector (which would be one-dimensional)
Fig. 7
Non-exhaustive overview of current and possible future technologies for back table specimen analysis/intraoperative pathological evaluation, separated by dimensionality and development status (green, commercially available; orange, research prototypes available). Non-imaging devices are classified as zero-dimensional because they use a single-pixel detector rather than a line detector (which would be one-dimensional)

References

    1. Lidsky ME, D’Angelica MI. An outlook on precision surgery. Eur J Surg Oncol. 2017;43(5):853–855. - PubMed
    2. Liu S, Hemal A. Techniques of robotic radical prostatectomy for the management of prostate cancer: which one, when and why. Transl Androl Urol. 2020;9(2):906–918. - PMC - PubMed
    3. Petersen LJ, Zacho HD. PSMA PET for primary lymph node staging of intermediate and high-risk prostate cancer: an expedited systematic review. Cancer Imaging. 2020;20(1):10. - PMC - PubMed
    4. Harbin AC, Eun DD. The role of extended pelvic lymphadenectomy with radical prostatectomy for high-risk prostate cancer. Urol Oncol. 2015;33(5):208–216. - PubMed
    5. Tsai S-H, Tseng P-T, Sherer BA, Lai Y-C, Lin P-Y, Wu C-K, Stoller ML. Open versus robotic partial nephrectomy: systematic review and meta-analysis of contemporary studies. Int J Med Robot. 2019;15(1):e1963. - PubMed
