
Human Activity Recognition


Research Project Overviews

Learn more about the Biomedic.AI Lab's past and current research related to Human Activity Recognition.

Trust in Human Activity Recognition Deep Learning Models

Society has a responsibility to ensure that artificial intelligence systems can be trusted when tasked with making decisions. In this project, trust is explored through the lens of technical robustness and safety, which is one of the requirements of trustworthy AI [1].

Please note that the following is a brief description of a project that has already been published in [2]. In this project, we evaluate whether the performance of human activity recognition models changes when the acquisition of their input acceleration data changes. Two acquisition changes were explored: 1) a change in the recording device that collects the input acceleration data, and 2) a change in the recording session of the input acceleration data. Two wearable devices (Astroskin and Zephyr BioHarness) were used to record acceleration data over the course of two sessions. From this data, three human activity recognition models were created: 1) a model trained only on acceleration data from the Astroskin device (Model A), 2) a model trained only on acceleration data from the Zephyr BioHarness device (Model H), and 3) a model trained only on acceleration data from the Astroskin device during the first recording session (Model A Type 1). Our results suggest that performance degrades when the models are evaluated on acceleration data that comes from a different wearable device and/or recording session.

Inspired by the discriminator that distinguishes real samples from generated samples in [3], the project developed an out-of-domain generalizable discriminator. Its purpose is to indicate when incoming acceleration data differs from the acceleration data that was used to train the human activity recognition model. If individuals are alerted when incoming data differs from the training data, they may be able to anticipate the decrease in performance. A simplified sketch of this idea appears below.
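To make the discriminator idea concrete, here is a minimal sketch in Python/PyTorch. It is not the implementation from [2]: it is a plain supervised domain classifier, a simplification of the generalizable discriminator described above, and the window length, network architecture, and the synthetic Gaussian data standing in for windows from the two devices are illustrative assumptions only.

import torch
import torch.nn as nn

WINDOW = 128  # samples per acceleration window (assumed)
AXES = 3      # tri-axial accelerometer

class DomainDiscriminator(nn.Module):
    # Binary classifier: a high output means the window looks like the
    # data the HAR model was trained on; a low output flags a domain shift.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(WINDOW * AXES, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # logit; sigmoid gives in-domain probability
        )

    def forward(self, x):  # x: (batch, AXES, WINDOW)
        return self.net(x)

# Synthetic stand-ins for windows from the training device (e.g. Astroskin)
# and from a different device or session; real recordings would be used.
in_domain = torch.randn(256, AXES, WINDOW)
out_domain = torch.randn(256, AXES, WINDOW) * 1.5 + 0.3  # shifted statistics

x = torch.cat([in_domain, out_domain])
y = torch.cat([torch.ones(256, 1), torch.zeros(256, 1)])

model = DomainDiscriminator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(200):  # short training loop, for illustration only
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# A low mean in-domain probability on incoming windows warns that the
# HAR model may be operating outside its training distribution.
p = torch.sigmoid(model(out_domain)).mean().item()
print(f"mean in-domain probability on shifted data: {p:.2f}")

In deployment, such a discriminator would run alongside the HAR model, so that users can be alerted, and the anticipated drop in performance accounted for, whenever incoming acceleration data drifts away from the training distribution.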

This project falls under the Quantifying Trust in Autonomous Medical Advisory Systems project, done in collaboration with the Canadian Department of National Defence. More specifically, it falls under the Trust in Hardware and Software section, which is led by Dr. Thomas Doyle. We also express our gratitude to Dr. Jim Reilly and Dr. David Musson, who provided valued insights throughout the project.

Citations:

[1] High-Level Expert Group on Artificial Intelligence, “Ethics guidelines for trustworthy AI,” Brussels, 2019. [Online]. Available: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60651

[2] A. Simons, “Trust in Human Activity Recognition Deep Learning Models,” MASc thesis, School of Biomed. Eng., McMaster Univ., Hamilton, Canada, 2021. Accessed on: Dec. 6, 2021. [Online]. Available: https://macsphere.mcmaster.ca/bitstream/11375/27039/4/Ama_Thesis%20%288%29.pdf

[3] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,” Advances in Neural Information Processing Systems, vol. 27, 2014. Accessed on: Dec. 21, 2021. [Online]. Available: https://proceedings.neurips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf

 

Principal Investigator

 

Researchers