IEEE Trans Vis Comput Graph. 2025 Aug;31(8):4257-4269.
doi: 10.1109/TVCG.2024.3398791.

Lightweight Deep Exemplar Colorization via Semantic Attention-Guided Laplacian Pyramid

Chengyi Zou et al. IEEE Trans Vis Comput Graph. 2025 Aug.

Abstract

Exemplar-based colorization aims to generate plausible colors for a grayscale image under the guidance of a color reference image. The main challenge is finding the correct semantic correspondence between the target and reference images; in existing methods, the colors of objects and the background are often confused. Moreover, these methods typically use simple encoder-decoder architectures or pyramid structures to extract features and lack appropriate fusion mechanisms, which results in the loss of high-frequency information or in high complexity. To address these problems, this article proposes a lightweight semantic attention-guided Laplacian pyramid network (SAGLP-Net) for deep exemplar-based colorization that exploits the inherent multi-scale properties of color representations. These properties are captured through a Laplacian pyramid, and semantic information is introduced as high-level guidance to align object and background information. Specifically, a semantic-guided non-local attention fusion module is designed to exploit long-range dependencies and fuse local and global features. Moreover, a Laplacian pyramid fusion module based on criss-cross attention is proposed to fuse high-frequency components in the large-scale domain. An unsupervised multi-scale, multi-loss training strategy is further introduced for network training, combining pixel loss, color histogram loss, total variation regularization, and adversarial loss. Experimental results demonstrate that our colorization method achieves better subjective and objective performance with lower complexity than state-of-the-art methods.
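The Laplacian pyramid decomposition underlying the network can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: 2x2 average pooling and nearest-neighbor upsampling stand in for the usual Gaussian filtering and decimation, and all function names are ours. Each pyramid level stores a high-frequency residual, with the coarsest level holding the low-frequency base, so the original image is recovered exactly by collapsing the pyramid coarse-to-fine.

```python
import numpy as np

def downsample(x):
    # 2x2 average pooling (a simple stand-in for Gaussian blur + decimation)
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x, shape):
    # nearest-neighbor upsampling back to the finer resolution
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def build_laplacian_pyramid(img, levels):
    # each entry is a high-frequency residual; the last entry is the
    # low-frequency base image
    pyramid = []
    current = img
    for _ in range(levels):
        low = downsample(current)
        pyramid.append(current - upsample(low, current.shape))
        current = low
    pyramid.append(current)
    return pyramid

def reconstruct(pyramid):
    # collapse the pyramid coarse-to-fine, adding residuals back in
    img = pyramid[-1]
    for residual in reversed(pyramid[:-1]):
        img = residual + upsample(img, residual.shape)
    return img
```

Because each residual is defined against the upsampled low-frequency image, this toy decomposition reconstructs the input exactly; in SAGLP-Net these levels are where the attention-based fusion modules operate.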
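The multi-loss training objective described above can be illustrated in a simplified NumPy form. The loss weights and function names here are hypothetical (the paper's actual values are not given in the abstract), the histogram term uses hard binning rather than a differentiable soft histogram, and the adversarial term is omitted because it requires a trained discriminator.

```python
import numpy as np

def pixel_loss(pred, target):
    # L1 distance between predicted and reference colors
    return np.abs(pred - target).mean()

def histogram_loss(pred, target, bins=32):
    # compare global color distributions (hard-binned illustration;
    # a practical histogram loss would use a differentiable soft variant)
    hp, _ = np.histogram(pred, bins=bins, range=(0.0, 1.0), density=True)
    ht, _ = np.histogram(target, bins=bins, range=(0.0, 1.0), density=True)
    return np.abs(hp - ht).mean()

def total_variation(pred):
    # penalize abrupt spatial changes to encourage smooth colorization
    dh = np.abs(np.diff(pred, axis=0)).mean()
    dw = np.abs(np.diff(pred, axis=1)).mean()
    return dh + dw

def total_loss(pred, target, w_pix=1.0, w_hist=0.1, w_tv=0.01):
    # weights are illustrative only; the adversarial term from the paper
    # is omitted here since it needs a discriminator network
    return (w_pix * pixel_loss(pred, target)
            + w_hist * histogram_loss(pred, target)
            + w_tv * total_variation(pred))
```

In training, such a combined objective balances per-pixel fidelity, global color-distribution matching, and spatial smoothness, with the adversarial term adding realism.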
