BMC Med. 2022 Feb 21;20(1):86. doi: 10.1186/s12916-022-02271-x.

Comparative assessment of methods for short-term forecasts of COVID-19 hospital admissions in England at the local level



Sophie Meakin et al. BMC Med. 2022.

Abstract

Background: Forecasting healthcare demand is essential in epidemic settings, both to inform situational awareness and to facilitate resource planning. Ideally, forecasts should be robust across time and locations. Throughout the COVID-19 pandemic in England, it has been an ongoing concern that demand for hospital care for COVID-19 patients would exceed available resources.

Methods: We made weekly forecasts of daily COVID-19 hospital admissions for National Health Service (NHS) Trusts in England between August 2020 and April 2021 using three disease-agnostic forecasting models: a mean ensemble of autoregressive time series models, a linear regression model with 7-day-lagged local cases as a predictor, and a scaled convolution of local cases and a delay distribution. We compared their point and probabilistic accuracy to a mean ensemble of all three models and to a simple baseline model of no change from the last day of admissions. We measured predictive performance using the weighted interval score (WIS) and considered how this changed in different scenarios (the length of the predictive horizon, the date on which the forecast was made, and by location), as well as how much admissions forecasts improved when future cases were known.
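The weighted interval score used here is, in the COVID-19 forecast evaluation literature, the score introduced by Bracher et al. (2021): it combines the absolute error of the predictive median with interval scores for a set of central prediction intervals. The Python sketch below illustrates that standard definition only; it is not the authors' evaluation code, and the example forecast values are hypothetical.

def interval_score(y, lower, upper, alpha):
    """Interval score of a central (1 - alpha) prediction interval [lower, upper]
    for an observed value y: interval width plus penalties if y falls outside."""
    return ((upper - lower)
            + (2.0 / alpha) * max(lower - y, 0.0)
            + (2.0 / alpha) * max(y - upper, 0.0))

def weighted_interval_score(y, median, intervals):
    """Weighted interval score (WIS) as defined by Bracher et al. (2021).

    `intervals` maps alpha to the (lower, upper) bounds of the central
    (1 - alpha) prediction interval, e.g. {0.5: (20, 31), 0.1: (12, 42)}
    for the 50% and 90% intervals. Lower WIS indicates a better forecast."""
    K = len(intervals)
    score = 0.5 * abs(y - median)  # weight 1/2 on the absolute error of the median
    for alpha, (lower, upper) in intervals.items():
        score += (alpha / 2.0) * interval_score(y, lower, upper, alpha)
    return score / (K + 0.5)

# Hypothetical example: 28 admissions observed against a forecast with
# median 24, 50% interval [20, 31] and 90% interval [12, 42].
print(weighted_interval_score(28, 24, {0.5: (20, 31), 0.1: (12, 42)}))  # 2.5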

Results: All models outperformed the baseline in the majority of scenarios. Forecasting accuracy varied by forecast date and location, depending on the trajectory of the outbreak, and all individual models had instances where they were the top- or bottom-ranked model. Forecasts produced by the mean ensemble were both the most accurate and the most consistently accurate amongst all the models considered. Forecasting accuracy improved when using future observed, rather than forecast, cases, especially at longer forecast horizons.

Conclusions: Assuming no change in current admissions is rarely better than including at least a trend. Using confirmed COVID-19 cases as a predictor can improve admissions forecasts in some scenarios, but this is variable and depends on the ability to make consistently good case forecasts. However, ensemble forecasts can provide consistently more accurate forecasts across time and locations. Given minimal requirements on data and computation, our admissions forecasting ensemble could be used to anticipate healthcare needs in future epidemic or pandemic settings.
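A mean ensemble of probabilistic forecasts is commonly built by averaging the component models' predictive quantiles level by level. The sketch below illustrates that construction under the assumption that each model reports the same quantile levels for the same Trust, forecast date and horizon; the names and values are illustrative rather than the authors' implementation.

import numpy as np

def mean_ensemble(model_quantiles):
    """Unweighted mean ensemble of probabilistic forecasts: average the
    predictive quantiles of the component models level by level.
    `model_quantiles` is a list of 1-D arrays, one per model, each holding
    quantiles at the same probability levels for the same target."""
    return np.stack(model_quantiles, axis=0).mean(axis=0)

# Hypothetical (5%, 25%, 50%, 75%, 95%) quantiles from three component models.
q_time_series = np.array([12.0, 18.0, 24.0, 31.0, 42.0])
q_regression = np.array([10.0, 15.0, 22.0, 28.0, 39.0])
q_convolution = np.array([14.0, 20.0, 25.0, 33.0, 45.0])
print(mean_ensemble([q_time_series, q_regression, q_convolution]))
# approximately [12.0, 17.67, 23.67, 30.67, 42.0]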

Keywords: COVID-19; Ensemble; Forecasting; Healthcare demand; Infectious disease; Outbreak; Real-time.


Conflict of interest statement

The authors declare they have no competing interests.

Figures

Fig. 1
Summary of COVID-19 hospital admissions in England during August 2020–April 2021. A Daily COVID-19 hospital admissions for England. B Weekly COVID-19 hospital admissions by NHS Trust (identified by 3-letter code) for the top 40 Trusts by total COVID-19 hospital admissions during August 2020–April 2021. C Daily COVID-19 hospital admissions for the top five Trusts by total COVID-19 hospital admissions. In all panels, the dashed lines denote the first (04 October 2020) and last (25 April 2021) forecast dates. Data published by NHS England [45]
Fig. 2
Overall forecasting performance of forecasting models. A Empirical coverage of the 50% and 90% prediction intervals at forecast horizons of 1–14 days. The dashed line indicates the target coverage level (50% or 90%). B Relative weighted interval score (rWIS) by forecast horizon (7 and 14 days). C Distribution of WIS rankings across all 7701 targets; for each target, rank 1 is assigned to the model with the lowest relative WIS (rWIS) and rank 5 to the model with the highest rWIS
Fig. 3
Forecasting accuracy by forecast date (7-day forecast horizon). A Relative WIS (rWIS) of the forecasting models for the 30 forecast dates. Lower rWIS values indicate better forecasts. B Mean absolute error (AE) of the forecasting models, calculated as the mean AE over all Trusts. C Mean daily Trust-level COVID-19 hospital admissions by week, for reference. All panels are for a 7-day forecast horizon; see Additional file 1: Fig. S5 for evaluation on a 14-day forecast horizon
Fig. 4
Forecasting accuracy by location (7-day forecast horizon). A Relative WIS values of each model (y-axis) compared to the baseline model of no change (x-axis). Ticks on the axes show the marginal distribution of rWIS values. Dashed grey line shows y=x, for reference: a point below the line indicates that the model outperformed the baseline model by rWIS for that Trust. B Distribution of WIS rankings across all 129 NHS Trusts; rank 1 is assigned to the model with the lowest relative WIS for a given Trust, and rank 5 to the model with the highest relative WIS. C Mean absolute error (MAE) of each model (y-axis) compared to the baseline model (x-axis). Ticks on the axes show the marginal distribution of MAE values. Dashed grey line shows y=x, for reference: a point below the line indicates that the model outperformed the baseline model by MAE for that Trust. All panels are for a 7-day forecast horizon; see Additional file 1: Fig. S6 for evaluation on a 14-day forecast horizon
