Implement Sci. 2021 Aug 17;16(1):81. doi: 10.1186/s13012-021-01145-9.

How well do critical care audit and feedback interventions adhere to best practice? Development and application of the REFLECT-52 evaluation tool

Madison Foster et al. Implement Sci.

Abstract

Background: Healthcare audit and feedback (A&F) interventions have been shown to be an effective means of changing healthcare professional behavior, but work is required to optimize them, as evidence suggests that A&F interventions are not improving over time. Recent published guidance has suggested an initial set of best practices that may help to increase intervention effectiveness, which focus on the "Nature of the desired action," "Nature of the data available for feedback," "Feedback display," and "Delivering the feedback intervention." We aimed to develop a generalizable evaluation tool that can be used to assess whether A&F interventions conform to these suggestions for best practice, and conducted initial testing of the tool through application to a sample of critical care A&F interventions.

Methods: We used a consensus-based approach to develop an evaluation tool from published guidance and subsequently applied the tool to conduct a secondary analysis of A&F interventions. To start, the 15 suggestions for improved feedback interventions published by Brehaut et al. were deconstructed into rateable items. Items were developed through iterative consensus meetings among researchers. These items were then piloted on 12 A&F studies (two reviewers met for consensus each time after independently applying the tool to four A&F intervention studies). After each consensus meeting, items were modified to improve clarity and specificity, and to help increase the reliability between coders. We then assessed the conformity to best practices of 17 critical care A&F interventions, sourced from a systematic review of A&F interventions on provider ordering of laboratory tests and transfusions in the critical care setting. Data for each criterion were extracted by one coder and confirmed by a second; results were then aggregated, presented graphically or in a table, and described narratively.

Results: In total, 52 criteria items were developed (38 ratable items and 14 descriptive items). Eight studies targeted lab test ordering behaviors, and 10 studies targeted blood transfusion ordering. Items focused on specifying the "Nature of the Desired Action" were adhered to most commonly: feedback was often presented in the context of an external priority (13/17), showed or described a discrepancy in performance (14/17), and in all cases it was reasonable for the recipients to be responsible for the change in behavior (17/17). Items focused on the "Nature of the Data Available for Feedback" were adhered to less often: only some interventions provided individual (5/17) or patient-level data (5/17), and few included aspirational comparators (2/17), or justifications for specificity of feedback (4/17), choice of comparator (0/9), or the interval between reports (3/13). Items focused on the "Nature of the Feedback Display" were reported poorly: just under half of interventions reported providing feedback in more than one way (8/17), and interventions rarely included pilot-testing of the feedback (1/17 unclear) or presentation of a visual display and summary message in close proximity to each other (1/13). Items focused on "Delivering the Feedback Intervention" were also poorly reported: feedback rarely reported use of barrier/enabler assessments (0/17), involved target members in the development of the feedback (0/17), or involved explicit design to be received and discussed in a social context (3/17); however, most interventions clearly indicated who was providing the feedback (11/17), involved a facilitator (8/12), or involved engaging in self-assessment around the target behavior prior to receipt of feedback (12/17).

Conclusions: Many of the theory-informed best practice items were not consistently applied in critical care, and these gaps suggest clear ways to improve interventions. Standardized reporting of detailed intervention descriptions and feedback templates may also help to further advance research in this field. The 52-item tool can serve as a basis for reliably assessing concordance with best practice guidance in existing A&F interventions trialed in other healthcare settings, and could be used to inform future A&F intervention development.

Trial registration: Not applicable.

Keywords: A&F; Audit; Critical care; Evaluation tool; Feedback.

Conflict of interest statement

The authors declare that they have no competing interests.

Figures

Fig. 1
Description of feedback interventions according to the ‘Nature of the Desired Action’ items (n = 17 interventions). Note: For items where the total number of interventions is less than 17, the item was rated as ‘Not Applicable’ in the remaining cases

Fig. 2
Description of feedback interventions according to the ‘Nature of the Data Available’ items (n = 17 interventions). Note: For items where the total number of interventions is less than 17, the item was rated as ‘Not Applicable’ in the remaining cases

Fig. 3
Description of feedback interventions according to the ‘Feedback Display’ items (n = 17 feedback interventions). Note: For items where the total number of interventions is less than 17, the item was rated as ‘Not Applicable’ in the remaining cases

Fig. 4
Description of feedback interventions according to the ‘Delivering the Feedback Intervention’ items (n = 17 feedback interventions). Note: For items where the total number of interventions is less than 17, the item was rated as ‘Not Applicable’ in the remaining cases

References

    1. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes (Review). Cochrane Database Syst Rev. 2012;6:1–227.
    2. Ivers NM, Grimshaw JM, Jamtvedt G, Flottorp S, O’Brien MA, French SD, Young J, Odgaard-Jensen J. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. J Gen Intern Med. 2014;29(11):1534–1541. doi: 10.1007/s11606-014-2913-y.
    3. Ivers NM, Grimshaw JM. Reducing research waste with implementation laboratories. Lancet. 2016;388(10044):547–548. doi: 10.1016/S0140-6736(16)31256-9.
    4. Brehaut JC, Colquhoun HL, Eva KW, Carroll K, Sales A, Michie S, Ivers N, Grimshaw JM. Practice feedback interventions: 15 suggestions for optimizing effectiveness. Ann Intern Med. 2016;164(6):435–441. doi: 10.7326/M15-2248.
    5. Gude WT, Roos-Blom MJ, van der Veer SN, de Jonge E, Peek N, Dongelmans DA, de Keizer NF. Electronic audit and feedback intervention with action implementation toolbox to improve pain management in intensive care: protocol for a laboratory experiment and cluster randomised trial. Implement Sci. 2017;12(1):68. doi: 10.1186/s13012-017-0594-8.
