This is a preprint.
A visual-omics foundation model to bridge histopathology image with transcriptomics
- PMID: 40321764
- PMCID: PMC12047990
- DOI: 10.21203/rs.3.rs-5183775/v1
Update in
- A visual-omics foundation model to bridge histopathology with spatial transcriptomics. Nat Methods. 2025 Jul;22(7):1568-1582. doi: 10.1038/s41592-025-02707-1. Epub 2025 May 29. PMID: 40442373. Free PMC article.
Abstract
Artificial intelligence has revolutionized computational biology. Recent developments in omics technologies, including single-cell RNA sequencing (scRNA-seq) and spatial transcriptomics (ST), provide detailed genomic data alongside tissue histology. However, current computational models focus on either omics or image analysis and lack integration of the two. To address this, we developed OmiCLIP, a visual-omics foundation model linking hematoxylin and eosin (H&E) images and transcriptomics using tissue patches from Visium data. We transformed transcriptomic data into "sentences" by concatenating the top-expressed gene symbols from each patch. We curated a dataset of 2.2 million paired tissue images and transcriptomic profiles across 32 organs to train OmiCLIP, integrating histology and transcriptomics. Building on OmiCLIP, our Loki platform offers five key functions: tissue alignment, annotation via bulk RNA-seq or marker genes, cell type decomposition, image-transcriptomics retrieval, and ST gene expression prediction from H&E images. Compared with 22 state-of-the-art models on 5 simulated, 19 public, and 4 in-house experimental datasets, Loki demonstrated consistent accuracy and robustness.
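The abstract's key preprocessing idea is turning a spot's expression profile into text so a CLIP-style text encoder can consume it. A minimal sketch of that step, assuming a per-spot mapping of gene symbols to expression counts (the function name, gene symbols, and counts below are illustrative, not taken from the paper):

```python
def expression_to_sentence(gene_counts, top_k=3):
    """Rank genes by expression and concatenate the top-k symbols into a 'sentence'."""
    ranked = sorted(gene_counts.items(), key=lambda kv: kv[1], reverse=True)
    return " ".join(gene for gene, _ in ranked[:top_k])

# Toy Visium spot: gene symbol -> expression count (illustrative values).
spot = {"EPCAM": 120, "KRT8": 95, "ACTB": 300, "COL1A1": 10}
print(expression_to_sentence(spot))  # "ACTB EPCAM KRT8"
```

In the paper's setup, each such sentence is paired with its H&E tissue patch and the two are embedded by contrastive training, analogous to CLIP's image-caption pairs.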
Conflict of interest statement
Competing interests: The authors declare no competing interests.
