A Fusion Model With Effective Multi-Scale Parallel Transformer for Cellular Segmentation
- PMID: 40811394
- DOI: 10.1109/TCBBIO.2025.3542123
Abstract
Cellular segmentation in fluorescence images is challenging due to uneven intensity distributions and hard-to-distinguish cell morphology. Existing segmentation models rarely account for variations in cell shape and size. We propose a novel multi-scale parallel Swin Transformer fusion network (MSPSTF-Net) for cellular segmentation that integrates cell morphological information. A multi-scale parallel Swin Transformer (MSPST) module is designed, consisting of four parallel branches at different scales. Each branch contains a self-attention block that learns features at a specific scale and captures scale-specific information. Moreover, a multi-scale parallel feature fusion (MSPFF) module and a global feature fusion (GFF) module are designed to effectively fuse the multi-scale morphological features. We compare the proposed MSPSTF-Net with existing advanced models on three biological cell datasets using three metrics: F1 score, AJI, and PQ. The comprehensive results show that MSPSTF-Net achieves higher segmentation performance and better generalization ability. Compared with the second-best method, MSPSTF-Net achieves average improvements of 1.091%, 2.268%, and 1.698% on the three metrics across the three datasets.
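The abstract describes four parallel scale-specific self-attention branches whose outputs are fused into a single feature map. The sketch below illustrates that general pattern only, not the authors' implementation: it uses plain global multi-head attention in place of Swin's windowed attention, and the branch scales, channel width, and the concat-plus-1x1-convolution fusion (standing in for the MSPFF/GFF modules) are all assumptions.

```python
# Minimal conceptual sketch (not the authors' code) of a multi-scale parallel
# self-attention block with a simple fusion step. Branch scales, channel size,
# and the concat + 1x1-conv fusion are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaleBranch(nn.Module):
    """One parallel branch: pool to a given scale, apply self-attention, upsample back."""

    def __init__(self, channels, num_heads, scale):
        super().__init__()
        self.scale = scale  # spatial downsampling factor for this branch
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):  # x: (B, C, H, W)
        b, c, h, w = x.shape
        # Downsample so this branch attends over a coarser, scale-specific grid.
        xs = F.adaptive_avg_pool2d(x, (max(h // self.scale, 1), max(w // self.scale, 1)))
        hs, ws = xs.shape[-2:]
        tokens = self.norm(xs.flatten(2).transpose(1, 2))  # (B, hs*ws, C)
        out, _ = self.attn(tokens, tokens, tokens)
        out = out.transpose(1, 2).reshape(b, c, hs, ws)
        # Upsample back to the input resolution so branch outputs can be fused.
        return F.interpolate(out, size=(h, w), mode="bilinear", align_corners=False)


class MultiScaleParallelBlock(nn.Module):
    """Four parallel scale branches fused by concatenation and a 1x1 convolution."""

    def __init__(self, channels=64, num_heads=4, scales=(2, 4, 8, 16)):
        super().__init__()
        self.branches = nn.ModuleList(ScaleBranch(channels, num_heads, s) for s in scales)
        self.fuse = nn.Conv2d(channels * len(scales), channels, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]  # scale-specific features
        return self.fuse(torch.cat(feats, dim=1)) + x    # fuse, keep a residual path


if __name__ == "__main__":
    block = MultiScaleParallelBlock(channels=64)
    dummy = torch.randn(1, 64, 64, 64)   # e.g. a feature map from a fluorescence image
    print(block(dummy).shape)            # torch.Size([1, 64, 64, 64])
```

The residual connection and bilinear upsampling are generic choices; the key idea retained from the abstract is that each branch attends at its own scale before the features are merged.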