Entropy (Basel). 2025 Jul 3;27(7):722. doi: 10.3390/e27070722.

Entropy-Regularized Attention for Explainable Histological Classification with Convolutional and Hybrid Models


Pedro L Miguel et al.

Abstract

Deep learning models such as convolutional neural networks (CNNs) and vision transformers (ViTs) perform well in histological image classification but often lack interpretability. We introduce a unified framework that adds an attention branch and CAM Fostering, an entropy-based regularizer, to improve Grad-CAM visualizations. Six backbone architectures (ResNet-50, DenseNet-201, EfficientNet-b0, ResNeXt-50, ConvNeXt, and CoatNet-small) were trained, with and without our modifications, on five H&E-stained datasets. We measured explanation quality using coherence, complexity, confidence drop, and their harmonic mean (ADCC). Our method increased ADCC in five of the six backbones: ResNet-50 saw the largest gain (+15.65%), while CoatNet-small, with a gain of +2.69%, achieved the highest overall score, peaking at 77.90% on the non-Hodgkin lymphoma set. Classification accuracy remained stable or improved in four models. These results show that combining attention and entropy regularization produces clearer, more informative heatmaps without degrading performance. Our contributions include a modular architecture applicable to both convolutional and hybrid models and a comprehensive, quantitative explainability evaluation suite.
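To make the two core ideas concrete, the sketches below illustrate them in PyTorch. Neither is the authors' released code: the function names, the sign of the entropy term, and the exact ADCC formulation are assumptions for illustration only.

First, an entropy-based CAM regularizer in the spirit of CAM Fostering: the class activation map is normalized into a spatial probability distribution and its Shannon entropy is combined with the classification loss.

    import torch
    import torch.nn.functional as F

    def cam_entropy(cam: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
        # Shannon entropy of a (B, H, W) activation map, treating each
        # sample's map as a spatial probability distribution.
        flat = cam.flatten(1)
        flat = flat - flat.min(dim=1, keepdim=True).values  # shift to >= 0
        p = flat / (flat.sum(dim=1, keepdim=True) + eps)    # normalize to sum to 1
        return -(p * (p + eps).log()).sum(dim=1).mean()     # mean over the batch

    def total_loss(logits, targets, cam, lam=0.1):
        # Cross-entropy plus a weighted entropy term on the CAM. The sign
        # convention (rewarding higher-entropy, more informative maps) and
        # the weight lam are assumptions, not values taken from the paper.
        return F.cross_entropy(logits, targets) - lam * cam_entropy(cam)

Second, the ADCC score: a minimal sketch assuming the common convention in the CAM-evaluation literature, where coherence is "higher is better" while complexity and confidence drop are "lower is better" and therefore enter the harmonic mean as (1 - x); the paper's exact formulation may differ.

    def adcc(coherence: float, complexity: float, confidence_drop: float) -> float:
        # Harmonic mean of coherence, (1 - complexity), and (1 - confidence
        # drop); all three inputs are assumed to lie in [0, 1].
        terms = [coherence, 1.0 - complexity, 1.0 - confidence_drop]
        return 3.0 / sum(1.0 / (t + 1e-8) for t in terms)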

Keywords: CAM Fostering; Grad-CAM; attention branches; convolutional neural networks; histological images; vision transformers.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. Proposed methodology integrating the ABN and CAM Fostering techniques.
Figure 2. Representative histological samples from each dataset.
Figure 3. Training process schematic of the proposed method: feature extractor, attention branch, and perception branch with CAM Fostering.
Figure 4. Visual comparison of Grad-CAM heatmaps produced by the baseline (left) and proposed (right) models. Rows correspond to different architectures: (a) ResNet-50, (b) DenseNet-201, (c) EfficientNet-b0, (d) ResNeXt-50, (e) ConvNeXt, and (f) CoatNet-small.


