Comparative Study
QJM. 2010 Feb;103(2):99-108. doi: 10.1093/qjmed/hcp169. Epub 2009 Dec 11.

Comparing and ranking hospitals based on outcome: results from The Netherlands Stroke Survey

H F Lingsma et al. QJM. 2010 Feb.

Abstract

Background: Measuring quality of care and ranking hospitals with outcome measures poses two major methodological challenges: case-mix adjustment and variation that exists by chance.

Aim: To compare methods for comparing and ranking hospitals that take these challenges into account.

Methods: The Netherlands Stroke Survey was conducted in 10 hospitals in the Netherlands between October 2002 and May 2003, with prospective and consecutive enrollment of patients with acute brain ischaemia. Poor outcome was defined as death or disability after 1 year (modified Rankin scale ≥3). We calculated fixed and random hospital effects on poor outcome, unadjusted and adjusted for patient characteristics. We compared the hospitals using the expected rank, a novel statistical measure incorporating the magnitude and the uncertainty of differences in outcome.
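The expected rank described above can be computed by Monte Carlo simulation: draw repeatedly from each hospital's estimated effect distribution, rank the hospitals within each draw, and average the ranks. The following is a minimal sketch of that idea; the effect estimates and standard errors below are invented for illustration and are not the survey's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical log-odds of poor outcome (theta) and standard errors for
# 10 hospitals -- illustrative values only, not from the study.
theta = np.array([-0.4, -0.2, -0.1, 0.0, 0.05, 0.1, 0.15, 0.2, 0.3, 0.5])
se = np.array([0.30, 0.15, 0.25, 0.10, 0.20, 0.35, 0.12, 0.28, 0.18, 0.40])

n_sim = 10_000
# Draw from each hospital's approximate sampling distribution.
draws = rng.normal(theta, se, size=(n_sim, len(theta)))

# Rank hospitals within each draw (rank 1 = lowest probability of poor
# outcome), then average over draws to obtain the expected rank.
ranks = draws.argsort(axis=1).argsort(axis=1) + 1
expected_rank = ranks.mean(axis=0)

print(np.round(expected_rank, 2))
```

Because each draw's ranks always sum to 55, the expected ranks average exactly 5.5, and hospitals with large standard errors are pulled toward that median rank, reproducing the shrinkage of ranks reported in the Results.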

Results: At 1 year after stroke, 268 of the total 505 patients (53%) had a poor outcome. There were substantial differences in outcome between hospitals in unadjusted analysis (χ² = 48, df = 9, P < 0.0001). Adjustment for 12 confounders halved the χ² (χ² = 24). The same pattern was observed in random effects analysis. Estimated performance of individual hospitals changed considerably between unadjusted and adjusted analysis. Further changes were seen with random effect estimation, especially for smaller hospitals. Ordering by expected rank led to shrinkage of the original ranks of 1-10 towards the median rank of 5.5 and to a different order of the hospitals, compared to ranking based on fixed effects.
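The random-effects shrinkage described above, where estimates for smaller hospitals move furthest, can be sketched as an empirical-Bayes weighted average of each hospital's estimate and the overall mean. The numbers below, including the between-hospital variance, are hypothetical and chosen only to make the mechanism visible.

```python
import numpy as np

# Fixed-effect estimates (log-odds of poor outcome) and standard errors;
# illustrative values, not the survey's data. Small hospitals -> large SE.
theta_hat = np.array([-0.4, -0.1, 0.0, 0.2, 0.5])
se = np.array([0.40, 0.15, 0.10, 0.30, 0.45])

mu = theta_hat.mean()  # overall mean effect
tau2 = 0.05            # assumed between-hospital variance

# Shrinkage factor B: the larger the SE relative to tau2 (i.e. the smaller
# the hospital), the stronger the pull toward the overall mean.
B = se**2 / (se**2 + tau2)
theta_shrunk = B * mu + (1 - B) * theta_hat

print(np.round(theta_shrunk, 3))
```

Every shrunken estimate lies between the hospital's own estimate and the overall mean, so extreme-looking results from small hospitals are tempered rather than taken at face value.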

Conclusion: In comparing and ranking hospitals, case-mix-adjusted random effect estimates and the expected ranks are more robust alternatives to traditional fixed effect estimates and simple rankings.

Figures

Figure 1. Differences between centres with unadjusted fixed effect estimates, unadjusted random effect estimates, adjusted fixed effect estimates and adjusted random effect estimates. A positive number indicates a higher probability of poor outcome. Dot size indicates sample size per centre.
Figure 2. Ranks (left y-axis) of 10 centres in fixed effect unadjusted, fixed effect adjusted, and random effects adjusted analyses and expected rank (ER). Dot size indicates sample size per centre.
