Proc ACM Interact Mob Wearable Ubiquitous Technol. 2023 Mar;7(1):17. doi: 10.1145/3580804. Epub 2023 Mar 28.

X-CHAR: A Concept-based Explainable Complex Human Activity Recognition Model


Jeya Vikranth Jeyakumar et al. Proc ACM Interact Mob Wearable Ubiquitous Technol. 2023 Mar.

Abstract

End-to-end deep learning models are increasingly applied to safety-critical human activity recognition (HAR) applications, e.g., healthcare monitoring and smart-home control, to reduce developer burden and to increase the performance and robustness of prediction models. However, integrating HAR models into safety-critical applications requires trust, and recent approaches have aimed to balance the performance of deep learning models with explainable decision-making for complex activity recognition. Prior works have exploited the compositionality of complex HAR (i.e., higher-level activities composed of lower-level activities) to form models with symbolic interfaces, such as concept-bottleneck architectures, that facilitate inherently interpretable models. However, feature engineering for symbolic concepts, as well as for the relationships between the concepts, requires precise annotation of lower-level activities by domain experts, usually with fixed time windows, all of which imposes a heavy and error-prone workload on the domain expert. In this paper, we introduce X-CHAR, an eXplainable Complex Human Activity Recognition model that does not require precise annotation of low-level activities, offers explanations in the form of human-understandable, high-level concepts, and maintains the robust performance of end-to-end deep learning models for time-series data. X-CHAR learns to model complex activity recognition as a sequence of concepts. For each classification, X-CHAR outputs a sequence of concepts and a counterfactual example as the explanation. We show that the sequence information of the concepts can be modeled using Connectionist Temporal Classification (CTC) loss without accurate start and end times for the low-level annotations in the training dataset, significantly reducing developer burden. We evaluate our model on several complex activity datasets and demonstrate that it offers explanations without compromising prediction accuracy compared to baseline models. Finally, we conducted a Mechanical Turk study to show that the explanations provided by our model are more understandable than those from existing methods for complex activity recognition.
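To make the CTC-based supervision described above concrete, the following is a minimal PyTorch sketch of the core training idea: a per-timestep concept classifier trained with CTC loss, so that only the ordered concept labels of each window are needed, never their start and end times. The architecture, layer sizes, and names here (ConceptEncoder, NUM_CONCEPTS, SENSOR_CHANNELS) are illustrative assumptions, not the paper's exact design.

```python
# Sketch only: CTC supervision over per-timestep concept predictions.
import torch
import torch.nn as nn

NUM_CONCEPTS = 10       # low-level concept vocabulary (assumed size)
SENSOR_CHANNELS = 6     # e.g., 3-axis accelerometer + 3-axis gyroscope

class ConceptEncoder(nn.Module):
    """Maps a raw sensor window to per-timestep concept log-probabilities."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(SENSOR_CHANNELS, 64, kernel_size=5, padding=2)
        self.rnn = nn.GRU(64, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(128, NUM_CONCEPTS + 1)  # +1 for the CTC blank

    def forward(self, x):                              # x: (batch, time, channels)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.rnn(h)
        return self.head(h).log_softmax(dim=-1)        # (batch, time, concepts+1)

model = ConceptEncoder()
ctc = nn.CTCLoss(blank=NUM_CONCEPTS, zero_infinity=True)

x = torch.randn(4, 200, SENSOR_CHANNELS)              # 4 windows, 200 timesteps
targets = torch.randint(0, NUM_CONCEPTS, (4, 5))      # unaligned concept sequences
log_probs = model(x).permute(1, 0, 2)                 # CTC expects (time, batch, C)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 200, dtype=torch.long),
           target_lengths=torch.full((4,), 5, dtype=torch.long))
loss.backward()
```

Note that `targets` carries only the order of the low-level concepts in each window; CTC marginalizes over all possible alignments, which is what removes the need for start/end annotations.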

Keywords: Activity recognition; Explainable AI; Interpretability; Neural networks.


Figures

Fig. 1. Examples of complex activities.
Fig. 2. An overview of the proposed end-to-end X-CHAR model during the training and inference phases.
Fig. 3. The overall X-CHAR model design showing the various layers in each module.
Fig. 4. The concept decoder: converts the per-timestep probability distribution over concepts to the most probable concept sequence for a given input (a minimal decoding sketch follows the figure list below).
Fig. 5. Confusion matrix of X-CHAR on the Nurse dataset.
Fig. 6. Confusion matrix of X-CHAR on the Opportunity dataset.
Fig. 7. Confusion matrix of X-CHAR on the CRAA dataset.
Fig. 8. Explanation provided by X-CHAR for a test input from the Complex Nurse Activities dataset.
Fig. 9. Explanation provided by GradCAM on a trained black-box model for a test input from the Complex Nurse Activities dataset.
Fig. 10. Explanation provided by ExMatchina on a trained black-box model for a test input from the Complex Nurse Activities dataset.
Fig. 11. Explanation provided by the Concept Bottleneck Model for a test input from the Complex Nurse Activities dataset.
Fig. 12. Classification and explanation provided by AROMA for a test input from the Complex Nurse Activities dataset.
Fig. 13. Explanation provided by DeXAR for two test inputs from the Complex Nurse Activities dataset.
Fig. 14. Explanation preferences from the Mechanical Turk study for models operating in the raw input space (i.e., GradCAM, ExMatchina, and X-CHAR).
Fig. 15. Explanation preferences from the Mechanical Turk study for models operating in the concept space (i.e., AROMA, Concept Bottleneck, DeXAR, and X-CHAR).
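To make the decoder in Fig. 4 concrete, below is a minimal sketch of greedy (best-path) CTC decoding: take the most probable concept at each timestep, collapse consecutive repeats, and drop blanks to recover the predicted concept sequence. Greedy decoding is an assumption here; the paper's decoder may differ (e.g., beam search).

```python
# Sketch only: best-path CTC decoding of per-timestep concept distributions.
import torch

def greedy_ctc_decode(log_probs: torch.Tensor, blank: int) -> list:
    """log_probs: (time, num_concepts + 1) scores for one input window."""
    path = log_probs.argmax(dim=-1).tolist()   # best concept index per timestep
    decoded, prev = [], blank
    for c in path:
        if c != blank and c != prev:           # collapse repeats, skip blanks
            decoded.append(c)
        prev = c
    return decoded
```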
