Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review

Yasmin Abdelaal et al. J Med Internet Res. 2024 Dec 24;26:e53863. doi: 10.2196/53863

Abstract

Background: Wearable technologies have become increasingly prominent in health care. However, intricate machine learning and deep learning algorithms often lead to the development of "black box" models, which lack transparency and comprehensibility for medical professionals and end users. In this context, the integration of explainable artificial intelligence (XAI) has emerged as a crucial solution. By providing insights into the inner workings of complex algorithms, XAI aims to foster trust and empower stakeholders to use wearable technologies responsibly.

Objective: This paper aims to review the recent literature and explore the application of explainability in wearables. By examining how XAI can enhance the interpretability of generated data and models, this review seeks to shed light on the possibilities that arise at the intersection of wearable technologies and XAI.

Methods: We collected publications from ACM Digital Library, IEEE Xplore, PubMed, SpringerLink, JMIR, Nature, and Scopus. The eligible studies included technology-based research involving wearable devices, sensors, or mobile phones focused on explainability, machine learning, or deep learning and that used quantified self data in medical contexts. Only peer-reviewed articles, proceedings, or book chapters published in English between 2018 and 2022 were considered. We excluded duplicates, reviews, books, workshops, courses, tutorials, and talks. We analyzed 25 research papers to gain insights into the current state of explainability in wearables in the health care context.

Results: Our findings revealed that wrist-worn wearables such as Fitbit and Empatica E4 are prevalent in health care applications. However, more emphasis must be placed on making the data generated by these devices explainable. Among various explainability methods, post hoc approaches stand out, with Shapley Additive Explanations as a prominent choice due to its adaptability. The outputs of explainability methods are commonly presented visually, often in the form of graphs or user-friendly reports. Nevertheless, our review highlights a limitation in user evaluation and underscores the importance of involving users in the development process.
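
To make the post hoc approach mentioned above concrete, the following minimal sketch shows how Shapley Additive Explanations (SHAP) can be applied to a gradient boosting classifier trained on wearable-style features and presented visually as a bar chart. The data, feature names, and model choice are illustrative assumptions for this sketch only and are not taken from any of the reviewed studies.

# Minimal, illustrative sketch of post hoc explainability with SHAP on
# synthetic wearable-style data (assumed features: resting heart rate,
# step count, sleep duration). Not drawn from the reviewed studies.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 500

# Synthetic "wearable" features.
X = pd.DataFrame({
    "resting_hr": rng.normal(65, 8, n),      # beats per minute
    "steps": rng.normal(7000, 2500, n),      # daily step count
    "sleep_hours": rng.normal(7, 1.2, n),    # nightly sleep duration
})

# Synthetic binary outcome loosely driven by the features.
logits = (0.08 * (X["resting_hr"] - 65)
          - 0.0002 * (X["steps"] - 7000)
          - 0.5 * (X["sleep_hours"] - 7))
y = (logits + rng.normal(0, 1, n) > 0).astype(int)

# "Black box" model: gradient boosting classifier.
model = GradientBoostingClassifier().fit(X, y)

# Post hoc explanation: TreeExplainer computes per-feature, per-sample
# SHAP values; the summary plot renders them as a bar chart, one common
# visual format for explainability outputs.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X, plot_type="bar")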

Conclusions: The integration of XAI into wearable health care technologies is crucial to address the issue of black box models. While wrist-worn wearables are widespread, there is a notable gap in making the data they generate explainable. Post hoc methods such as Shapley Additive Explanations have gained traction for their adaptability in explaining complex algorithms visually. However, user evaluation remains an area in which improvement is needed, and involving users in the development process can contribute to more transparent and reliable artificial intelligence models in health care applications. Further research in this area is essential to enhance the transparency and trustworthiness of artificial intelligence models used in wearable health care technology.

Keywords: XAI; analytics; deep learning; explainable artificial intelligence; health informatics; interpretation; machine learning; user experience; wearable; wearable data; wearable sensors.

Conflict of interest statement

Conflicts of Interest: None declared.

Figures

Figure 1. Relationship among artificial intelligence (AI), machine learning (ML), deep learning (DL), and explainable AI (XAI) [13].
Figure 2. Process of explainable artificial intelligence (AI) from wearable data, adapted from Saranya and Subhashini [13].
Figure 3. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram of the systematic review. DL: deep learning; ML: machine learning.
Figure 4. Distribution of included papers per year.
Figure 5. Overview of the explainability features.
Figure 6. UpSet diagram [45,46] of the various combinations of input data.
Figure 7. Overview of the explainability formats used in the studies included in this review.
Figure 8. Explainable gradient boosting of COVID-19 symptoms using bar charts [43].
Figure 9. Shapley Additive Explanations explainability using a bubble frequency plot [32].
Figure 10. Fitness tracker goals displayed as plain text [47].
Figure 11. (A) Stress-monitoring app [37] using a ring chart; (B) detecting hypoglycemia using wearables [33] displayed as text and emoticons; (C) blood pressure monitoring and lifestyle recommendations given as plain text and bar and line charts [38].
Figure 12. (A) Overview of problem types in the reviewed papers; (B) regression and classification problem types with the corresponding machine learning or deep learning model application. CNN: convolutional neural network; FCN: fully convolutional network; GNN: graph neural network; LSTM: long short-term memory; NSL: neural structured learning; RNN: recurrent neural network; SVM: support vector machine; TCN: temporal convolutional network.
Figure 13. Sources of the data collected.
Figure 14. Data collection medium.

References

    1. Iqbal MH, Aydin A, Brunckhorst O, Dasgupta P, Ahmed K. A review of wearable technology in medicine. J R Soc Med. 2016 Oct 11;109(10):372-80. doi: 10.1177/0141076816663560.
    2. Duckworth C, Guy MJ, Kumaran A, O'Kane AA, Ayobi A, Chapman A, Marshall P, Boniface M. Explainable machine learning for real-time hypoglycemia and hyperglycemia prediction and personalized control recommendations. J Diabetes Sci Technol. 2024 Jan 13;18(1):113-23. doi: 10.1177/19322968221103561.
    3. Barricelli BR, Casiraghi E, Gliozzo J, Petrini A, Valtolina S. Human digital twin for fitness management. IEEE Access. 2020;8:26637-64. doi: 10.1109/access.2020.2971576.
    4. Saeed W, Omlin C. Explainable AI (XAI): a systematic meta-survey of current challenges and future opportunities. Knowl Based Syst. 2023 Mar;263:110273. doi: 10.1016/j.knosys.2023.110273.
    5. Adadi A, Berrada M. Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access. 2018;6:52138-60. doi: 10.1109/access.2018.2870052.
