J Am Med Inform Assoc. 2021 Jun 12;28(6):1149-1158. doi: 10.1093/jamia/ocaa318.

A framework for making predictive models useful in practice


Kenneth Jung et al. J Am Med Inform Assoc.

Abstract

Objective: To analyze the impact of factors in healthcare delivery on the net benefit of triggering an Advanced Care Planning (ACP) workflow based on predictions of 12-month mortality.

Materials and methods: We built a predictive model of 12-month mortality using electronic health record data and evaluated the impact of healthcare delivery factors on the net benefit of triggering an ACP workflow based on the model's predictions. Factors included nonclinical reasons that make ACP inappropriate: limited capacity for ACP, inability to follow up due to patient discharge, and availability of an outpatient workflow to follow up on missed cases. We also quantified the relative benefits of increasing capacity for inpatient ACP versus outpatient ACP.

Results: Work capacity constraints and discharge timing can significantly reduce the net benefit of triggering the ACP workflow based on a model's predictions. However, the reduction can be mitigated by creating an outpatient ACP workflow. Given limited resources to either add inpatient ACP capacity or develop an outpatient ACP capability, the latter is likely to provide more benefit to patient care.

Discussion: The benefit of using a predictive model for identifying patients for interventions is highly dependent on the capacity to execute the workflow triggered by the model. We provide a framework for quantifying the impact of healthcare delivery factors and work capacity constraints on achieved benefit.

Conclusion: An analysis of the sensitivity of the net benefit realized by a predictive-model-triggered clinical workflow to various healthcare delivery factors is necessary for making predictive models useful in practice.

Keywords: machine learning, evaluation, utility assessment, workflow simulation, advanced care planning.


Figures

Figure 1.
The figure summarizes the effect of different factors on the realized net utility of triggering a care workflow based on a predictive model of 1-year mortality. In all plots the y-axis shows the achieved net utility relative to the best case, labeled "optimistic." The default state of treating nobody is the 0 point on the y-axis. The achieved utility is plotted as a percentage of the best-case scenario, in which every prediction is followed up by ACP. We also plot the relative net utility of treating everybody (Treat all) for comparison. A. Impact of rejection of recommendations for ACP for nonclinical reasons. The x-axis shows the rate of rejection of ACP due to nonclinical factors, ranging from 10% to 30%. The rejection rate translates to a linear reduction in net utility. B. Impact of capacity constraints on per-patient utility. The x-axis shows different capacity constraints for conducting ACP. Capacity constraints have a large impact on net utility, with a capacity of 1 capturing close to 50% of the best-case utility. Increasing capacity offers rapidly diminishing returns because there are few days on which more than 4 patients are recommended for ACP. C. Impact of failure to complete ACP due to discharge on per-patient utility. The x-axis shows the average number of days it takes to complete ACP. The relative net benefit ranges from 92% to 62.5% of the best-case estimate as the mean time to complete ACP ranges from 1 to 4 days. D. Impact of an outpatient rescue pathway on per-patient utility. The x-axis shows the effect of rescuing 0%, 50%, and 100% of the model's recommendations. Without rescue, the net utility is 65% of the optimistic estimate. At 50% rescue, we achieve 76% of the optimistic estimate. At 100% rescue, we achieve only 90.5% of the best-case scenario, because the outpatient rescue pathway cannot rescue ACP rejected for nonclinical reasons.
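The capacity effect described in panel B can be illustrated with a small simulation. The sketch below is not the paper's model: it assumes daily recommendation counts are roughly Poisson-distributed with an illustrative mean of 2 per day, that each completed ACP contributes one unit of utility, and that recommendations rejected for nonclinical reasons or left over capacity are simply lost. The function name and parameters are hypothetical.

```python
import random

def simulated_utility_capture(capacity, days=10_000, mean_daily_flags=2.0,
                              rejection_rate=0.0, seed=0):
    """Estimate the fraction of best-case utility captured when at most
    `capacity` ACP consults can be done per day and a fraction of
    recommendations is rejected for nonclinical reasons.

    Illustrative assumptions only: one unit of utility per completed ACP,
    and daily recommendation counts sampled as Binomial(10, mean/10),
    which approximates a Poisson distribution with the given mean.
    """
    rng = random.Random(seed)
    total_flagged = 0
    total_done = 0
    for _ in range(days):
        # approximate Poisson daily count via a binomial sample
        flagged = sum(rng.random() < mean_daily_flags / 10 for _ in range(10))
        # drop recommendations rejected for nonclinical reasons
        accepted = sum(rng.random() >= rejection_rate for _ in range(flagged))
        total_flagged += flagged
        total_done += min(accepted, capacity)  # daily work-capacity cap
    return total_done / max(total_flagged, 1)
```

Sweeping `capacity` reproduces the diminishing returns in panel B: with a mean of 2 recommendations per day, days with more than 4 flags are rare, so each added unit of capacity captures less additional utility than the last.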
Figure 2.
Trade-off between adding inpatient capacity for ACP versus outpatient capacity. The plot shows the change in mean per-patient utility as we increment inpatient capacity, starting from different initial inpatient capacities (solid red line). The dashed lines show the change in mean per-patient utility from having an outpatient pathway for ACP with 50% and 100% success rates. We find that at all starting inpatient capacities, an outpatient pathway with even a 50% success rate yields greater utility than adding inpatient capacity.
Figure 3.
Unit (per-patient) utility versus the probability threshold at which a patient is referred for follow-up. The boxed numbers are the number of patients to follow up with (true positives and false positives), or "work," at that threshold, expressed as a percentage. Work increases as more patients are referred for ACP consultation. There is a tension between maximizing total utility, which is the product of per-patient utility and the number of patients acted upon, and keeping the number of patients followed up below the hospital system's work capacity limit.
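The threshold trade-off in this figure follows the standard decision-curve form of net benefit, NB(t) = TP/n − (FP/n)·t/(1−t), in which false positives are penalized by the odds of the threshold t. A minimal sketch, with a hypothetical function name and toy inputs (the formula is the standard one, not code from the paper):

```python
def net_benefit_and_work(y_true, y_prob, threshold):
    """Decision-curve net benefit per patient at a probability threshold,
    plus the 'work' fraction: the share of patients flagged for follow-up
    (true positives + false positives)."""
    n = len(y_true)
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 0)
    odds = threshold / (1.0 - threshold)  # harm-to-benefit weight on false positives
    return tp / n - (fp / n) * odds, (tp + fp) / n
```

Sweeping `threshold` reproduces the qualitative shape of the figure: lowering the threshold increases work (more patients flagged) while per-patient net benefit falls, so the operating threshold must be chosen jointly with the system's daily ACP capacity rather than from model accuracy alone.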
Figure 4.
A 4-stage framework guiding the development and evaluation of a predictive model throughout its life cycle. The stages are: 1) problem specification and clarification, 2) development and validation of the model, 3) analysis of utility and of impacts on the clinical workflow triggered by the model, and 4) monitoring and maintenance of the deployed model, together with evaluation of the running system comprising the model-triggered workflow.

