3D Medical Image Segmentation with 3D Modelling

Mária Ždímalová et al. Bioengineering (Basel). 2026 Jan 29;13(2):160. doi: 10.3390/bioengineering13020160.

Abstract

Background/Objectives: The segmentation of three-dimensional radiological images is a fundamental task in medical image processing for isolating tumors from complex computed tomography (CT) or magnetic resonance imaging (MRI) datasets. It enables precise visualization, volumetry, and treatment monitoring, which are critical for oncology diagnostics and planning. Volumetric analysis surpasses standard criteria by detecting subtle tumor changes, thereby aiding adaptive therapies. The objective of this study was to develop an enhanced, interactive Graphcut algorithm for 3D DICOM segmentation, specifically designed to improve boundary accuracy and 3D modeling of breast and brain tumors in datasets with heterogeneous tissue intensities.

Methods: The standard Graphcut algorithm was augmented with a clustering mechanism (utilizing k = 2–5 clusters) to refine boundary detection in tissues with varying intensities. DICOM datasets were processed into 3D volumes using pixel-spacing and slice-thickness metadata. User-defined seeds, constrained by bounding boxes, initialized the tumor and background regions. The method was implemented in Python 3.13 using the PyMaxflow library for graph optimization and pydicom for data transformation.

Results: The proposed segmentation method outperformed standard thresholding and region-growing techniques, demonstrating reduced noise sensitivity and improved boundary definition. An average Dice Similarity Coefficient (DSC) of 0.92 ± 0.07 was achieved for brain tumors and 0.90 ± 0.05 for breast tumors. These results are comparable to state-of-the-art deep learning benchmarks (typically 0.84–0.95), achieved without the need for extensive pre-training. Integrating clustering reduced boundary edge errors by a mean of 7.5%. Therapeutic changes were quantified accurately (e.g., a reduction from 22,106 mm³ to 14,270 mm³ post-treatment) with an average processing time of 12–15 s per stack.

Conclusions: An efficient, precise 3D tumor segmentation tool suitable for diagnostics and planning is presented. This approach is a robust, data-efficient alternative to deep learning, particularly advantageous in clinical settings where the large annotated datasets required for training neural networks are unavailable.
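The volumetric figures quoted in the Results (e.g., 22,106 mm³ pre-treatment) follow from multiplying the segmented voxel count by the physical voxel size given by the DICOM pixel-spacing and slice-thickness metadata. A minimal sketch in pure Python; the study's datasets and pydicom loading are not reproduced here, and the mask and spacing values below are illustrative only:

```python
def segmented_volume_mm3(mask, pixel_spacing, slice_thickness):
    """Physical volume of a binary segmentation mask.

    mask            -- nested list [slice][row][col] of 0/1 labels
    pixel_spacing   -- (row_mm, col_mm), DICOM tag (0028,0030)
    slice_thickness -- mm, DICOM tag (0018,0050)
    """
    voxel_mm3 = pixel_spacing[0] * pixel_spacing[1] * slice_thickness
    n_voxels = sum(v for sl in mask for row in sl for v in row)
    return n_voxels * voxel_mm3

# Toy 2-slice mask with 3 foreground voxels at 1.0 x 1.0 x 2.5 mm spacing
mask = [[[0, 1], [1, 0]], [[0, 0], [1, 0]]]
print(segmented_volume_mm3(mask, (1.0, 1.0), 2.5))  # -> 7.5
```

Comparing this quantity before and after therapy gives the kind of regression measurement reported in the abstract.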

Keywords: 3D segmentation; DICOM data; Graphcut; image processing; tumor volumetry.


Conflict of interest statement

The authors declare no conflicts of interest.

Figures

Figure 1
Comparison of 3D models without segmentation and after segmentation of bones [8,9,10,11]. (A) 3D model before segmentation. (B) 3D model after bone segmentation.
Figure 2
Hounsfield scale. Defining the window on the original scale.
Figure 3
Comparison of transformation results for different views [9]. (A) Result of transformation without creating a window. (B) Result with a window size of 1000. (C) Result with a window size of 400, showing enhanced contrast.
Figure 4
Illustration of the minimum cut in 3D image segmentation, separating object nodes (Source) from background nodes (Sink). The red and green circles represent the seed points used to initialize the 3D graph-cut segmentation algorithm: green circles denote Source-linked seeds identifying the object (foreground) of interest, and red circles denote Sink-linked seeds representing the background. These markers act as hard constraints, and the algorithm calculates the minimum cut (optimal boundary) separating the two sets of nodes based on image intensity gradients.
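The source/sink formulation in this caption can be illustrated with a toy max-flow computation. The sketch below is pure Python (Edmonds-Karp), not the paper's PyMaxflow implementation; the pixel names and capacities are hypothetical. Seeded pixels are attached to the terminals with an effectively infinite capacity, so the cut can only pass through weak neighbour links (intensity edges):

```python
from collections import deque

def min_cut(capacity, source, sink):
    """Edmonds-Karp max-flow; returns (flow value, source-side node set)."""
    # Build residual graph, adding zero-capacity reverse edges.
    res = {}
    for u, nbrs in capacity.items():
        for v, c in nbrs.items():
            res.setdefault(u, {})[v] = res.get(u, {}).get(v, 0) + c
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent, queue = {source: None}, deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            break
        # Push the bottleneck capacity along the path found.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck
    # Source side of the min cut = nodes still reachable in the residual.
    reachable, stack = {source}, [source]
    while stack:
        u = stack.pop()
        for v, c in res[u].items():
            if c > 0 and v not in reachable:
                reachable.add(v)
                stack.append(v)
    return flow, reachable

INF = 10**9  # hard-constraint weight for seeded pixels
cap = {
    "S": {"p0": INF},   # p0: object seed (green circle)
    "p3": {"T": INF},   # p3: background seed (red circle)
    "p0": {"p1": 5},    # strong link: similar intensities
    "p1": {"p2": 1},    # weak link: intensity gradient -> cut here
    "p2": {"p3": 5},
}
flow, object_side = min_cut(cap, "S", "T")
print(sorted(object_side))  # -> ['S', 'p0', 'p1']
```

The cut severs the cheapest link (p1-p2), labelling p0 and p1 as object and p2 and p3 as background, exactly as the caption describes for the 3D case.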
Figure 5
Schematic of the background modeling problem. (A) A standard single-mean model averages intensities, losing detail. (B) The proposed clustering model separates background seeds into multiple intensity classes (e.g., white bones vs. dark air). The black and white circles represent background seed points categorized into distinct intensity classes: white circles correspond to high-intensity structures (e.g., bone), while black circles correspond to low-intensity regions (e.g., air). By clustering these seeds into multiple classes (Model B), the algorithm captures the heterogeneous nature of the background better than a single-mean approach (Model A), leading to higher segmentation accuracy.
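The single-mean vs. clustered background contrast in this caption can be illustrated with a one-dimensional k-means over seed intensities. This is a generic sketch (the paper does not specify its clustering algorithm), and the Hounsfield-style intensity values are hypothetical:

```python
def kmeans_1d(values, k, iters=20):
    """Cluster scalar intensities into k classes (Lloyd's algorithm)."""
    # Initialize centres by sampling the sorted values at regular strides.
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each seed to its nearest centre.
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        # Recompute centres as cluster means (keep old centre if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Background seeds sampled from dark air (~-1000 HU) and bright bone (~700 HU)
seeds = [-1005, -998, -990, -1010, 650, 700, 720, 680]
centers, clusters = kmeans_1d(seeds, k=2)
print(sorted(centers))          # one centre per tissue class
single_mean = sum(seeds) / len(seeds)
print(single_mean)              # lands between the classes, matching neither
```

The two cluster centres sit near the air and bone intensities, whereas the single mean represents neither tissue, which is precisely the failure mode Model (A) illustrates.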
Figure 6
Examples of boundary constraints. (a) Bounding box around a breast tumor. (b) Bounding box around a brain tumor, excluding the skull.
Figure 7
Software environment. The user interface allows adjustment of the cluster parameters (top left) and visualization of the segmentation result (right). The colors in the provided figures represent manual interaction markers (seeds) and the resulting segmentation calculated by the software. Image 1 (3D Graph Representation): The green nodes represent the object/foreground seeds (Source), while the red nodes represent the background seeds (Sink). These were manually marked within the software to define the terminal points for the graph-cut algorithm. The gray surface represents the calculated "minimum cut" or the final 3D boundary. Image 2 (Background Modeling): The black and white circles are representative background seeds sampled from different tissue types (e.g., white for bone, black for air). Model (B) illustrates how the software clusters these manual markers into distinct intensity classes to better handle background heterogeneity. Image 3 (Software Interface): This image shows the actual user interaction. The green scribble in the ‘Image Viewer’ represents the foreground seeds drawn by the user to mark the target lesion. The blue scribble represents the background seeds. In the ‘Segmentation Result’ window, the red/white area represents the final segmented object generated by the algorithm based on these manual inputs.
Figure 8
Impact of cluster count (k) on segmentation. (a) Manual ground truth. (b) Graphcut result (k = 1). (c–f) Graphcut results with increasing clusters (k = 2–5), showing tighter boundary adherence as k increases.
Figure 9
Comparison with Thresholding. (1a) Rib segmentation using Graphcut showing clean structure. (1b) Thresholding result showing significant noise. (2a) Skull segmentation using Graphcut. (2b) Thresholding result showing artifacts.
Figure 10
Comparison of segmentation methods on brain (top) and breast (bottom) tumors. (a) Proposed Graphcut method. (b) Manual segmentation (Gold Standard). (c) Region-growing method, showing leakage/under-segmentation compared to the proposed method. The lines visible in the 3D reconstructions trace the computed 3D boundaries of the segmented structures, i.e., the finalized minimum-cut surface where the algorithm separated object nodes from background; they serve as a visual verification of the spatial extent and surface continuity of the segmented anatomical model within the 3D volume.
Figure 11
3D model comparison. (a) Resulting tumor segmentation using the proposed Graphcut method. (b) Manual tumor segmentation (Gold Standard). The proposed method recovers the complex 3D shape with high fidelity; the visible lines again trace the computed minimum-cut boundary of the segmented structure, confirming its spatial extent and surface continuity.
Figure 12
Volumetric analysis of tumor response to treatment. (1a,2a) Pre-treatment tumor volumes. (1b,2b) Post-treatment tumor volumes, showing significant regression quantified by the software.
Figure 13
3D visualization of a segmented brain tumor using the proposed pipeline. (Top) 3D surface model rendering. (Bottom) Multi-planar reconstruction showing the tumor extent in axial, coronal, and sagittal views.

References

    1. Chen J., Pan T., Zhu Z., Liu L., Zhao N., Feng X., Zhang W., Wu Y., Cai C., Luo X., et al. A Deep Learning-Based Multimodal Medical Imaging Model for Breast Cancer Screening. Sci. Rep. 2025;15:14696. doi: 10.1038/s41598-025-99535-2.
    2. Pánik J., Kopáni M., Zeman J., Ješkovský M., Kaizer J., Povinec P.P. Determination of Metal Elements Concentrations in Human Brain Tissues Using PIXE and EDX Methods. J. Radioanal. Nucl. Chem. 2018;3:2313–2319. doi: 10.1007/s10967-018-6208-3.
    3. Povinec P.P., Kontuľ I., Ješkovský M., Kaizer J., Kvasniak J., Pánik J., Zeman J. Development and Applications of Accelerator Mass Spectrometry Methods for Measurement of 14C, 10Be and 26Al in the CENTA Laboratory. J. Radioanal. Nucl. Chem. 2024;333:3497–3509. doi: 10.1007/s10967-023-09294-5.
    4. Zeman J., Ješkovský M., Kaizer J., Pánik J., Kontuľ I., Staníček J., Povinec P.P. Analysis of Meteorite Samples Using PIXE Technique. J. Radioanal. Nucl. Chem. 2019;322:1897–1903. doi: 10.1007/s10967-019-06851-9.
    5. Kopáni M., Pánik J., Filová B., Bujdoš M., Míšek J., Kohan M., Jakuš J., Povinec P. PIXE Analysis of Iron in Rabbit Cerebellum after Exposure to Radiofrequency Electromagnetic Fields. Bratisl. Med. J. 2022;123:864–871. doi: 10.4149/BLL_2022_138.