Diagn Progn Res. 2025 Jul 21;9(1):20. doi: 10.1186/s41512-025-00199-3.

A comparison of modeling approaches for static and dynamic prediction of central-line bloodstream infections using electronic health records (part 1): regression models

Shan Gao et al.

Abstract

Background: Hospitals register information in electronic health records (EHRs) continuously until discharge or death. As such, there is no censoring for in-hospital outcomes. We aimed to compare different static and dynamic regression modeling approaches to predict central line-associated bloodstream infections (CLABSIs) in EHRs while accounting for competing events precluding CLABSI.

Methods: We analyzed data from 30,862 catheter episodes at University Hospitals Leuven in 2012 and 2013 to predict the 7-day risk of CLABSI. Competing events were discharge and death. Static models using information at catheter onset included logistic, multinomial logistic, Cox, cause-specific hazard, and Fine-Gray regression. Dynamic models updated predictions daily up to 30 days after catheter onset (i.e., landmarks 0 to 30 days) and included landmark supermodel extensions of the static models, separate Fine-Gray models per landmark time, and regularized multi-task learning (RMTL). Model performance was assessed using 100 random 2:1 train-test splits.
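As an illustration of the landmarking idea behind the dynamic models, the Python sketch below stacks per-landmark records into a single dataset and fits one logistic regression ("supermodel") in which covariate effects are allowed to vary with landmark time. All data, column names (landmark, age, tpn, clabsi_7d), and the interaction structure are hypothetical placeholders; the study's actual supermodels also extend the multinomial, Cox, cause-specific hazard, and Fine-Gray models and use the real EHR covariates.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical long-format data: one row per catheter episode per landmark day,
# with a binary indicator for CLABSI within the next 7 days.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "landmark": rng.integers(0, 31, n),    # days since catheter onset (0-30)
    "age": rng.normal(65, 15, n),          # example baseline covariate
    "tpn": rng.integers(0, 2, n),          # example time-varying covariate
})
# Simulated outcome: CLABSI within 7 days of the landmark (placeholder mechanism)
logit = -3 + 0.02 * (df["age"] - 65) + 0.8 * df["tpn"] + 0.05 * df["landmark"]
df["clabsi_7d"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Landmark "supermodel": stack all landmark subsets and let covariate effects
# vary smoothly with landmark time via interaction terms.
X = pd.DataFrame({
    "age": df["age"],
    "tpn": df["tpn"],
    "lm": df["landmark"],
    "lm2": df["landmark"] ** 2,
    "age_x_lm": df["age"] * df["landmark"],
    "tpn_x_lm": df["tpn"] * df["landmark"],
})
supermodel = LogisticRegression(max_iter=1000).fit(X, df["clabsi_7d"])

# Dynamic prediction: 7-day CLABSI risk for one patient at landmark day 5
new = pd.DataFrame({"age": [70], "tpn": [1], "lm": [5], "lm2": [25],
                    "age_x_lm": [350], "tpn_x_lm": [5]})
print(supermodel.predict_proba(new)[:, 1])
```

Encoding landmark time as a covariate (with interactions) is what lets a single fitted model produce updated 7-day risks on any day after catheter onset.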

Results: The Cox model performed worst of all static models in terms of area under the receiver operating characteristic curve (AUROC) and calibration. Dynamic landmark supermodels reached peak AUROCs between 0.741 and 0.747 at landmark 5. The Cox landmark supermodel had the worst AUROCs (≤ 0.731) and calibration up to landmark 7. Separate Fine-Gray models per landmark performed worst for later landmarks, when the number of patients at risk was low.
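The evaluation scheme underlying these results can be sketched as follows: repeat a random 2:1 train-test split 100 times and, for each split, compute the AUROC and calibration slope of the out-of-sample risk predictions. The simulated data and logistic working model below are placeholders, not the study's models or EHR covariates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder covariates and binary outcome
rng = np.random.default_rng(1)
n = 3000
X = rng.normal(size=(n, 5))
p = 1 / (1 + np.exp(-(X @ np.array([0.8, -0.5, 0.3, 0.0, 0.2]) - 2)))
y = rng.binomial(1, p)

aurocs, slopes = [], []
for seed in range(100):                      # 100 random 2:1 train-test splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=1/3, random_state=seed, stratify=y)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    risk = np.clip(model.predict_proba(X_te)[:, 1], 1e-6, 1 - 1e-6)
    aurocs.append(roc_auc_score(y_te, risk))
    # Calibration slope: coefficient of the linear predictor (logit of the
    # predicted risk) in a logistic model refit on the test outcomes;
    # the ideal value is 1. C=1e6 approximately removes regularization.
    lp = np.log(risk / (1 - risk)).reshape(-1, 1)
    slopes.append(
        LogisticRegression(C=1e6, max_iter=1000).fit(lp, y_te).coef_[0, 0])

print(f"mean AUROC {np.mean(aurocs):.3f}, "
      f"mean calibration slope {np.mean(slopes):.3f}")
```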

Conclusions: Categorical and time-to-event approaches had similar performance in the static and dynamic settings, with the exception of the Cox models. Ignoring competing risks caused problems for risk prediction in the time-to-event framework (Cox) but not in the categorical framework (logistic regression).

Keywords: Central line–associated bloodstream infection; Dynamic model; Logistic regression; Risk prediction; Survival analysis.


Conflict of interest statement

Ethics approval and consent to participate: The study was approved by the Ethics Committee Research UZ/KU Leuven (EC Research, https://admin.kuleuven.be/raden/en/ethics-committee-research-uz-kuleuven#) on 19 January 2022 (S60891). The Ethics Committee Research UZ/KU Leuven waived the need to obtain informed consent from participants. All patient identifiers were coded using the pseudo-identifier in the data warehouse by the Management Information Reporting Department of UZ Leuven, in accordance with the General Data Protection Regulation (GDPR).

Competing interests: The authors declare that they have no conflicts of interest to disclose.

Figures

Fig. 1. Frequency of outcomes within 7 days for each of the landmark subsets (LM ≤ 30). The total height of the bar is the number at risk.

Fig. 2. Creation of train-test data for estimating performance.

Fig. 3. Comparison of performance metrics of dynamic models across landmarks. The Y-axis was truncated for clarity. Minimum mean observed AUROC was 0.535, minimum/maximum mean observed calibration slope was 0.093/1.742, maximum mean observed ECI was 0.386, and minimum mean observed scaled BS was −0.133.


