2025 Dec 6;12(12):1332. doi: 10.3390/bioengineering12121332.

Integrating Foundation Model Features into Graph Neural Network and Fusing Predictions with Standard Fine-Tuned Models for Histology Image Classification


Nematollah Saeidi et al. Bioengineering (Basel).

Abstract

Histopathological image classification using computational methods such as fine-tuned convolutional neural networks (CNNs) has gained significant attention in recent years. Graph neural networks (GNNs) have also emerged as strong alternatives, often employing CNNs or vision transformers (ViTs) as node feature extractors. However, as these models are usually pre-trained on small-scale natural image datasets, their performance in histopathology tasks can be limited. The introduction of foundation models trained on large-scale histopathological data now enables more effective feature extraction for GNNs. In this work, we integrate recently developed foundation models as feature extractors within a lightweight GNN and compare their performance with standard fine-tuned CNN and ViT models. Furthermore, we explore a prediction fusion approach that combines the outputs of the best-performing GNN and fine-tuned model to evaluate the benefits of complementary representations. Results demonstrate that GNNs utilizing foundation model features outperform those trained with CNN or ViT features and achieve performance comparable to standard fine-tuned CNN and ViT models. The highest overall performance is obtained with the proposed prediction fusion strategy. Evaluated on three publicly available datasets, the best fusion achieved F1-scores of 98.04%, 96.51%, and 98.28%, and balanced accuracies of 98.03%, 96.50%, and 97.50% on PanNuke, BACH, and BreakHis, respectively.
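The abstract describes the pipeline only at a high level, so the following is a minimal sketch of the general idea rather than the authors' implementation: patch embeddings from a frozen encoder (a torchvision ResNet stands in here for a pathology foundation model such as UNI2), a cosine k-nearest-neighbour patch graph, a lightweight one-layer graph convolution with mean pooling, and late fusion that averages the softmax outputs of the GNN branch and a fine-tuned classifier. The helper names (patch_features, knn_adjacency, PatchGraphClassifier, fuse), the stand-in encoder, and the choice of k are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): frozen-encoder patch features -> k-NN
# patch graph -> lightweight GCN -> late fusion with a fine-tuned classifier.
# resnet18 is only a stand-in for a frozen pathology foundation model (e.g., UNI2).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18


def patch_features(image: torch.Tensor, patch: int = 224) -> torch.Tensor:
    """Tile a (3, H, W) image into non-overlapping patches and embed each one.

    Assumes H and W are divisible by `patch`.
    """
    encoder = resnet18(weights=None)       # stand-in for the frozen foundation model
    encoder.fc = torch.nn.Identity()       # keep the penultimate 512-d features
    encoder.eval()
    patches = (image.unfold(1, patch, patch)
                    .unfold(2, patch, patch)
                    .reshape(3, -1, patch, patch)
                    .permute(1, 0, 2, 3))  # (num_patches, 3, patch, patch)
    with torch.no_grad():
        return encoder(patches)            # (num_patches, feat_dim)


def knn_adjacency(x: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Symmetric, self-looped, degree-normalised adjacency from cosine k-NN."""
    xn = F.normalize(x, dim=1)
    sim = xn @ xn.T
    idx = sim.topk(k + 1, dim=1).indices                 # k neighbours + self
    adj = torch.zeros_like(sim).scatter_(1, idx, 1.0)
    adj = ((adj + adj.T) > 0).float()                    # symmetrise
    d = adj.sum(1).clamp(min=1.0).rsqrt()
    return d.unsqueeze(1) * adj * d.unsqueeze(0)         # D^-1/2 A D^-1/2


class PatchGraphClassifier(torch.nn.Module):
    """One graph-convolution layer, mean pooling, and a linear classification head."""

    def __init__(self, in_dim: int, hidden: int, n_classes: int):
        super().__init__()
        self.proj = torch.nn.Linear(in_dim, hidden)
        self.head = torch.nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = F.relu(adj @ self.proj(x))     # aggregate neighbouring patch features
        return self.head(h.mean(dim=0))    # image-level logits


def fuse(logits_gnn: torch.Tensor, logits_finetuned: torch.Tensor) -> torch.Tensor:
    """Late fusion: average the class probabilities of the two branches."""
    return 0.5 * (F.softmax(logits_gnn, dim=-1) + F.softmax(logits_finetuned, dim=-1))
```

Under these assumptions, fuse applies an unweighted average of the two branches' class probabilities; a weighted average or another combination rule could be substituted without changing the rest of the sketch.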

Keywords: computational pathology; deep learning; foundation model; graph neural network; image classification; medical image analysis.

Conflict of interest statement

The authors declare no conflicts of interest related to this work.

Figures

Figure 1. Example images from the PanNuke dataset (first row), the BACH dataset (second row), and the BreakHis dataset (third row).
Figure 2. General workflow of the proposed approach.
Figure 3. Visual comparison of the best-performing methods on the PanNuke, BACH, and BreakHis datasets. For each dataset, the performance of the best models from each part of the corresponding tables (Table 2, Table 3, and Table 4) is shown.
Figure 4. Attention heatmaps generated with the Grad-CAM method for two example images, shown for the GNN-UNI2 and GNN-ViT models. In both cases, GNN-UNI2 classified the images correctly, whereas GNN-ViT produced incorrect predictions. As illustrated, GNN-UNI2 focuses more strongly on clinically relevant tissue regions than GNN-ViT.
