Sensors (Basel). 2024 Mar 12;24(6):1818. doi: 10.3390/s24061818.

Utilizing Wearable Device Data for Syndromic Surveillance: A Fever Detection Approach


Patrick Kasl et al. Sensors (Basel).

Abstract

Commercially available wearable devices (wearables) show promise for continuous physiological monitoring. Previous works have demonstrated that wearables can be used to detect the onset of acute infectious diseases, particularly those characterized by fever. We aimed to evaluate whether these devices could be used for the more general task of syndromic surveillance. We obtained wearable device data (Oura Ring) from 63,153 participants. We constructed a dataset using participants' wearable device data and their responses to daily online questionnaires. We included a day from a participant if they (1) completed the questionnaire, (2) either reported not experiencing fever with a self-collected body temperature below 38 °C (negative class) or reported experiencing fever with a self-collected body temperature at or above 38 °C (positive class), and (3) wore the wearable device on the nights before and after that day. We used wearable device data (i.e., skin temperature, heart rate, and sleep) from the nights before and after participants' fever day to train a tree-based classifier to detect self-reported fevers. We evaluated the performance of our model using a five-fold cross-validation scheme. In total, 16,794 participants provided at least one valid ground truth day; there were 724 fever days (positive class examples) from 463 participants and 342,430 non-fever days (negative class examples) from 16,687 participants. Our model exhibited an area under the receiver operating characteristic curve (AUROC) of 0.85 and an average precision (AP) of 0.25. At a sensitivity of 0.50, our calibrated model had a false positive rate of 0.8%. Our results suggest that it might be possible to leverage data from these devices at a public health level for live fever surveillance. Implementing these models could increase our ability to detect disease prevalence and spread in real time during infectious disease outbreaks.

Keywords: illness detection; syndromic surveillance; wearables.


Conflict of interest statement

Patent applications US App. No. 17/357,922, US App. No. 17/357,930, and PCT App. No. PCT/US21/39260 were filed as of July 2021 by Oura Health Oy on behalf of UCSD. Authors A.E.M. and B.L.S. are listed as the co-inventors of these applications. A.E.M. received remuneration for consulting work from Oura Ring Inc. but declares no non-financial competing interests. B.L.S. received remuneration for consulting work from, and has a financial interest in, Oura Ring Inc. but declares no non-financial competing interests. All other authors declare no financial or non-financial competing interests.

Figures

Figure 1
Instance selection and normalization procedure. An instance is valid if at least 7 of the 14 nights in the range of −28 to −14 relative to the ground truth day were retrievable. The mean (μ) and standard deviation (σ) from these nights were used to z-score normalize the wearable device metrics. We depict an example of a valid instance with its baseline period (−28 → −14) with retrievable data from 9 out of 14 nights (nights without retrievable data are indicated by a white cross). This instance is based on sleep summary features from the night before (night −1) and the night after (night 0) relative to the ground truth day.
Figure 2
Distribution of self-reported body temperatures: non-fever examples are in blue and fever examples are in orange.
Figure 3
Z-score-normalized wearable metrics from individuals, aligned by self-reported fever day (white hatched areas) and grouped by self-reported temperature on fever day. Individuals reporting temperatures in the range of (38–39 °C) are in blue (n = 621), and (39+ °C) are in red (n = 103). Lines represent the mean z-score normalized wearable metric across all participants in the respective group for each night, and shaded regions are the 95% confidence interval of the mean.
Figure 4
Performance of the fever detection classifier following a five-fold cross-validation scheme. Shaded areas indicate a 95% confidence interval. (a) The mean receiver operating characteristic (ROC) curve across iterations; the mean area under the curve was 0.85. (b) The mean precision–recall curve (PRC) across iterations; the average precision was 0.25. (c) The reliability plot (or calibration curve) across iterations; the mean Brier score was 0.0018. (d) Box plots indicating the classifier-predicted probability, binned by self-reported body temperature.
Figure 5
Explanation of the fever detection classifier. Features are ranked from most (top) to least (bottom) important based on the mean permuted importance across 30 permutations. NB: night before the [non-]fever day; NA: night after the [non-]fever day; days of the week (e.g., Sunday) indicate the ground truth day; error bars: 95% confidence interval of the mean.


    1. Chandrasekaran R., Katthula V., Moustakas E. Patterns of Use and Key Predictors for the Use of Wearable Health Care Devices by US Adults: Insights from a National Survey. J. Med. Internet Res. 2020;22:e22443. doi: 10.2196/22443. - DOI - PMC - PubMed