Supervision by Denoising
- PMID: 37505997
- PMCID: PMC12498241
- DOI: 10.1109/TPAMI.2023.3299789
Abstract
Learning-based image reconstruction models, such as those based on the U-Net, require a large set of labeled images if good generalization is to be guaranteed. In some imaging domains, however, labeled data with pixel- or voxel-level accuracy are scarce due to the cost of acquiring them. This problem is exacerbated further in domains like medical imaging, where there is no single ground truth label, resulting in large amounts of repeat variability in the labels. Training reconstruction networks to generalize better by learning from both labeled and unlabeled examples (semi-supervised learning) is therefore a problem of both practical and theoretical interest. However, traditional semi-supervised learning methods for image reconstruction often require handcrafting a differentiable regularizer specific to a given imaging problem, which can be extremely time-consuming. In this work, we propose "supervision by denoising" (SUD), a framework for supervising reconstruction models using their own denoised output as labels. SUD unifies stochastic averaging and spatial denoising techniques under a spatio-temporal denoising framework and alternates denoising and model weight update steps in an optimization framework for semi-supervision. As example applications, we apply SUD to two problems from biomedical imaging, anatomical brain reconstruction (3D) and cortical parcellation (2D), and demonstrate a significant improvement in reconstruction over supervised-only and ensembling baselines.
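As a rough illustration of the alternation the abstract describes, the sketch below shows one training step that (i) denoises the model's own predictions on unlabeled images, temporally via an exponential moving average and spatially via a simple smoothing filter, and then (ii) uses the denoised prediction as a pseudo-label in the weight update alongside the supervised loss. All names (sud_step, spatial_denoise, ema_target), the choice of losses, and the specific denoisers are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of a SUD-style training step (assumptions, not the paper's code).
import torch
import torch.nn.functional as F

def spatial_denoise(pred, kernel_size=3):
    """Stand-in spatial denoiser: local average smoothing of the prediction maps."""
    pad = kernel_size // 2
    return F.avg_pool2d(pred, kernel_size, stride=1, padding=pad)

def sud_step(model, opt, x_lab, y_lab, x_unlab, ema_target, alpha=0.99, lam=0.1):
    # Temporal denoising: exponential moving average of predictions on unlabeled data.
    with torch.no_grad():
        pred_u = model(x_unlab)
        ema_target.mul_(alpha).add_(pred_u, alpha=1.0 - alpha)
        # Spatial denoising of the averaged prediction yields the pseudo-label.
        pseudo = spatial_denoise(ema_target)

    # Weight update: supervised loss plus consistency with the denoised pseudo-label.
    opt.zero_grad()
    loss = F.mse_loss(model(x_lab), y_lab) + lam * F.mse_loss(model(x_unlab), pseudo)
    loss.backward()
    opt.step()
    return loss.item(), ema_target
```

In this sketch the exponential moving average plays the role of the stochastic (temporal) averaging mentioned in the abstract, while the smoothing filter stands in for the spatial denoiser; the two are composed into a single spatio-temporal denoising step that is alternated with ordinary gradient updates.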
References
- Metzler CA, Ikoma H, Peng Y, and Wetzstein G, "Deep Optics for Single-Shot High-Dynamic-Range Imaging," in Proc. CVPR, 2020, pp. 1375–1385.
- Kalra A, Taamazyan V, Rao SK, Venkataraman K, Raskar R, and Kadambi A, "Deep Polarization Cues for Transparent Object Segmentation," in Proc. CVPR, 2020, pp. 8602–8611.
- Ronneberger O, Fischer P, and Brox T, "U-Net: Convolutional Networks for Biomedical Image Segmentation," in Proc. MICCAI, 2015, pp. 234–241.
Grants and funding
- R01 AG064027/AG/NIA NIH HHS/United States
- R00 AG081493/AG/NIA NIH HHS/United States
- RF1 MH123195/MH/NIMH NIH HHS/United States
- RF1 MH121885/MH/NIMH NIH HHS/United States
- R01 NS105820/NS/NINDS NIH HHS/United States
- R01 EB023281/EB/NIBIB NIH HHS/United States
- R01 EB019956/EB/NIBIB NIH HHS/United States
- U01 MH117023/MH/NIMH NIH HHS/United States
- P41 EB015902/EB/NIBIB NIH HHS/United States
- R01 EB006758/EB/NIBIB NIH HHS/United States
- K99 AG081493/AG/NIA NIH HHS/United States
- R01 NS083534/NS/NINDS NIH HHS/United States