Review

Advancing Quality Improvement with Regression Discontinuity Designs

Allan J Walkey et al. Ann Am Thorac Soc. 2018 May;15(5):523-529. doi: 10.1513/AnnalsATS.201712-942IP.

No abstract available

Keywords: causal inference; quality improvement; research design.


Figures

Figure 1.
Conceptual frameworks of randomized controlled trials (RCTs) and regression discontinuity designs (RDDs). (A) RCTs allocate interventions on the basis of random assignment. In this example, patients with a predicted readmission risk of more than 20% are randomized to receive an intervention or no intervention. When effective, the intervention shifts the association between predicted and observed mortality rates relative to the counterfactual, unexposed control group. (B) RDD in implementation and improvement science exploits a threshold rule on a continuous assignment variable (in this example, a readmission risk score greater than X) to assign the intervention to patients. The association between the assignment variable (in this example, predicted readmission risk) and the outcome of interest (observed readmission rate) is then evaluated for a discontinuity at the threshold where the intervention was provided. Unlike in RCTs, the counterfactual control group is not directly observed, because the intervention is offered to all eligible patients. However, estimates of counterfactual outcomes with and without the intervention are observed immediately above and below the threshold cutoff value, enabling causal inference at the threshold. With additional assumptions, causal effects can be projected to regions beyond the threshold.
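To make the threshold logic concrete, the sketch below simulates a threshold-assigned intervention and recovers its effect as the jump in the outcome at the cutoff, using local linear fits on either side. This is an illustrative sketch only: the 20% cutoff, the simulated effect size, the bandwidth, and all variable names are assumptions, not the authors' data or code.

```python
# Minimal RDD sketch (illustrative; cutoff, effect size, and bandwidth are assumed).
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Continuous assignment variable: predicted readmission risk on [0, 1].
risk = rng.uniform(0, 1, n)

# Threshold rule: patients above a 20% predicted risk receive the intervention.
cutoff = 0.20
treated = risk > cutoff

# Hypothetical outcome: readmission probability rises with predicted risk;
# the intervention lowers it by 0.05 at and beyond the threshold.
p_readmit = np.clip(0.10 + 0.6 * risk - 0.05 * treated, 0, 1)
readmitted = rng.binomial(1, p_readmit)

# Local linear fits within a bandwidth on each side of the cutoff.
bw = 0.05
below = (risk >= cutoff - bw) & (risk <= cutoff)
above = (risk > cutoff) & (risk <= cutoff + bw)
fit_below = np.polyfit(risk[below], readmitted[below], 1)
fit_above = np.polyfit(risk[above], readmitted[above], 1)

# The RD estimate is the discontinuity in the fitted outcome at the cutoff.
effect = np.polyval(fit_above, cutoff) - np.polyval(fit_below, cutoff)
print(f"Estimated effect at the threshold: {effect:.3f}")
```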
Figure 2.
Regression discontinuity (RD) designs can be paired with traditional implementation science approaches, such as evaluation of implementation fidelity (the degree to which an intervention is delivered as intended), to increase the value of both approaches. For example, when serial assessments of implementation fidelity (e.g., the proportion of eligible patients successfully contacted by a patient navigator) are paired with RD analyses of implementation effectiveness, the level of fidelity at which the intervention achieves local effectiveness (e.g., 75% in the figure) can be determined and used to guide resource allocation for subsequent or continued implementation efforts.
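As a rough illustration of pairing fidelity tracking with RD effect estimates, the snippet below scans a series of (fidelity, estimated effect) pairs for the lowest fidelity level at which the intervention appears locally effective. All values, including the 3-percentage-point cutoff for a "meaningful" effect, are hypothetical.

```python
# Hypothetical pairing of serial fidelity measurements with RD effect estimates.
# fidelity  = proportion of eligible patients contacted by the patient navigator
# rd_effect = RD-estimated change in readmission rate for the same period
fidelity = [0.40, 0.55, 0.75, 0.90]
rd_effect = [-0.005, -0.010, -0.040, -0.045]

# Smallest fidelity level at which the estimated reduction reaches at least
# 3 percentage points (an assumed threshold for "locally effective").
meaningful = -0.03
effective_levels = [f for f, e in zip(fidelity, rd_effect) if e <= meaningful]
print(f"Fidelity needed for local effectiveness: {min(effective_levels):.0%}")
```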
Figure 3.
Example of a plan–do–study–act framework that incorporates regression discontinuity design (RDD) to evaluate both implementation adoption and effectiveness. The boxes outlined in red are steps modified from a traditional plan–do–study–act cycle when RDDs are used to evaluate effectiveness. *Implementation outcomes include adoption, fidelity, cost, penetration, and sustainability, evaluated through traditional implementation science methods.
Figure 4.
Theoretical results of a regression discontinuity design (RDD) study seeking to evaluate multiple simultaneous interventions to reduce readmissions, implemented at different readmission risk cutoffs. In an RDD study, the continuous assignment score (here, risk of readmission) is plotted on the x-axis against the outcome of interest (here, readmission rate at each level of readmission risk) on the y-axis. A "discontinuity" at the assignment threshold in the relationship between the assignment score and the outcome visually demonstrates the effect of the intervention. In this example, there is a reduction in readmissions with interventions 1 and 3, but not with intervention 2. Detailed descriptions of statistical approaches to evaluating intervention effects in RDD can be found elsewhere (11, 17, 18, 22, 30, 31).
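A minimal sketch of the Figure 4 scenario follows, assuming three hypothetical interventions assigned at different readmission-risk cutoffs, with interventions 1 and 3 simulated to reduce readmissions and intervention 2 simulated to have no effect. The cutoff values, effect sizes, and bandwidth are illustrative assumptions; the discontinuity at each cutoff is estimated with local linear fits as in the earlier sketch.

```python
# Sketch of estimating discontinuities at several intervention cutoffs
# (cutoffs, effect sizes, and bandwidth are hypothetical).
import numpy as np

rng = np.random.default_rng(1)
n = 20000
risk = rng.uniform(0, 1, n)

cutoffs = {"intervention 1": 0.25, "intervention 2": 0.50, "intervention 3": 0.75}
effects = {"intervention 1": -0.04, "intervention 2": 0.00, "intervention 3": -0.06}

# Readmission probability rises with risk; each intervention applies its
# assumed effect at and beyond its own cutoff.
p = 0.10 + 0.5 * risk
for name, c in cutoffs.items():
    p = p + effects[name] * (risk > c)
readmitted = rng.binomial(1, np.clip(p, 0, 1))

def rd_estimate(x, y, cutoff, bw=0.05):
    """Local linear RD estimate: jump in the fitted outcome at the cutoff."""
    below = (x >= cutoff - bw) & (x <= cutoff)
    above = (x > cutoff) & (x <= cutoff + bw)
    fit_below = np.polyfit(x[below], y[below], 1)
    fit_above = np.polyfit(x[above], y[above], 1)
    return np.polyval(fit_above, cutoff) - np.polyval(fit_below, cutoff)

for name, c in cutoffs.items():
    est = rd_estimate(risk, readmitted, c)
    print(f"{name}: estimated discontinuity at cutoff {c} = {est:.3f}")
```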

References

    1. Morgenthaler TI, Aronsky AJ, Carden KA, Chervin RD, Thomas SM, Watson NF. Measurement of quality to improve care in sleep medicine. J Clin Sleep Med. 2015;11:279–291.
    2. Boulet LP, Bourbeau J, Skomro R, Gupta S. Major care gaps in asthma, sleep and chronic obstructive pulmonary disease: a road map for knowledge translation. Can Respir J. 2013;20:265–269.
    3. Bourbeau J, Sebaldt RJ, Day A, Bouchard J, Kaplan A, Hernandez P, et al. Practice patterns in the management of chronic obstructive pulmonary disease in primary practice: the CAGE study. Can Respir J. 2008;15:13–19.
    4. Bellani G, Laffey JG, Pham T, Fan E, Brochard L, Esteban A, et al.; LUNG SAFE Investigators; ESICM Trials Group. Epidemiology, patterns of care, and mortality for patients with acute respiratory distress syndrome in intensive care units in 50 countries. JAMA. 2016;315:788–800.
    5. Walkey AJ, Wiener RS. Risk factors for underuse of lung-protective ventilation in acute lung injury. J Crit Care. 2012;27:323.e1–323.e9.
