From Biased Selective Labels to Pseudo-Labels: An Expectation-Maximization Framework for Learning from Biased Decisions

Trenton Chang et al. Proc Mach Learn Res. 2024 Jul;235:6286-6324.

Abstract

Selective labels occur when label observations are subject to a decision-making process; e.g., diagnoses that depend on the administration of laboratory tests. We study a clinically-inspired selective label problem called disparate censorship, where labeling biases vary across subgroups and unlabeled individuals are imputed as "negative" (i.e., no diagnostic test = no illness). Machine learning models naïvely trained on such labels could amplify labeling bias. Inspired by causal models of selective labels, we propose Disparate Censorship Expectation-Maximization (DCEM), an algorithm for learning in the presence of disparate censorship. We theoretically analyze how DCEM mitigates the effects of disparate censorship on model performance. We validate DCEM on synthetic data, showing that it improves bias mitigation (area between ROC curves) without sacrificing discriminative performance (AUC) compared to baselines. We achieve similar results in a sepsis classification task using clinical data.
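As a concrete illustration of the pseudo-labeling idea in the abstract, the sketch below runs an EM-style loop that treats the labels of untested individuals as latent and re-estimates them from a fitted classifier. This is not the paper's DCEM algorithm: the initialization, the logistic model, and the duplicate-and-weight trick for fitting to soft labels are all illustrative assumptions.

```python
# A minimal sketch of an EM-style pseudo-labeling loop in the spirit of
# the abstract. NOT the paper's DCEM algorithm: DCEM's exact E- and
# M-steps are defined in the paper; this only illustrates treating the
# labels of untested individuals as latent variables.
import numpy as np
from sklearn.linear_model import LogisticRegression

def em_pseudo_label(x, y_obs, tested, n_iters=10):
    """x: covariates, y_obs: observed labels (0 wherever untested),
    tested: boolean testing/labeling indicator t."""
    # Initialize soft labels: keep observed labels where tested; start
    # untested individuals at the positive rate among the tested.
    q = y_obs.astype(float).copy()
    q[~tested] = y_obs[tested].mean()
    model = LogisticRegression()
    for _ in range(n_iters):
        # M-step (approximate): fit to soft labels by duplicating each
        # example as a positive and a negative, weighted by q and 1 - q.
        xw = np.concatenate([x, x])
        yw = np.concatenate([np.ones(len(x)), np.zeros(len(x))])
        w = np.concatenate([q, 1.0 - q])
        model.fit(xw, yw, sample_weight=w)
        # E-step (approximate): re-estimate latent labels for untested
        # individuals; tested individuals keep their observed labels.
        p = model.predict_proba(x)[:, 1]
        q = np.where(tested, y_obs, p)
    return model
```

The duplicate-and-weight step is just one common way to fit an off-the-shelf classifier to soft labels; the paper's objective, which also models the testing mechanism, should be consulted for the actual method.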


Figures

Figure 1.
Top: Causal model of disparate censorship (x: covariates, y: ground truth, ỹ: observed label, t: testing/labeling indicator, a: sensitive attribute). Shaded variables are fully observed. Bottom: Disparate Censorship Expectation-Maximization (DCEM). Dashed nodes are probabilistic estimates.
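Figure 1's causal graph can be simulated directly, which makes the censorship mechanism concrete: the observed label equals the true label only when a test is administered, and the testing decision depends on both the covariates and the sensitive attribute. The parameterization below (logistic forms, coefficient values) is an assumption for illustration only; the paper specifies the graph, not these equations.

```python
# Sketch of the Figure 1 causal model under an assumed (hypothetical)
# logistic parameterization. Only the graph structure comes from the
# paper; the coefficients below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

a = rng.integers(0, 2, size=n)        # a: sensitive attribute
x = rng.normal(size=(n, 2))           # x: covariates

# y: ground truth, depends on covariates only.
y = rng.random(n) < 1.0 / (1.0 + np.exp(-(x[:, 0] + x[:, 1])))

# t: testing/labeling decision, depends on covariates AND the sensitive
# attribute -- group a=1 is tested less often (disparate censorship).
t = rng.random(n) < 1.0 / (1.0 + np.exp(-(x[:, 0] - 1.0 * a)))

# Observed label: untested individuals are imputed as negative.
y_obs = np.where(t, y, 0)

print("P(test | a=0):", round(t[a == 0].mean(), 3))
print("P(test | a=1):", round(t[a == 1].mean(), 3))
print("P(y_obs=1) vs P(y=1):", round(y_obs.mean(), 3), round(y.mean(), 3))
```

In this sketch the observed labels understate the true positive rate more strongly in the less-tested group, which is the kind of subgroup-dependent labeling bias the abstract describes.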
Figure 2.
Comparison of ROC gap (left) and AUC (right) of selected models at q_y = 0.5, k = 1, q_t = 2. Each point represents a different s_Y. Our method (DCEM, magenta) mitigates bias while maintaining competitive AUC compared to baselines, with a tighter range and improved empirical worst-case for both metrics. “-”: median; the remaining markers denote the worst-case ROC gap and worst-case AUC, respectively.
Figure 3.
Relative frequencies of ROC gaps for DCEM vs. tested-only models at similar AUC (increasing to the right), pooling models across all k, q_y, q_t tested. Dashed lines = mean ROC gap by model. DCEM improves bias mitigation among models with similar AUC.
Figure 4.
ROC gaps (left) and AUC (right) of baselines and DCEM on the sepsis classification task at q_t = 1.5, k = 4. Each dot represents a different s_T. Our method (DCEM, magenta) maintains competitive or better bias mitigation and discriminative performance compared to baselines. “-”: median; the remaining markers denote the worst-case ROC gap and worst-case AUC, respectively.
