Developing a Multimodal Model for Activity Recognition Using Electroencephalography (EEG) and Human Activity Recognition (HAR) Technologies: A Pilot Study Utilizing High Intensity Interval Training (HIIT)
Abstract
This study evaluates the performance of machine learning models on Human Activity Recognition (HAR) and Electroencephalography (EEG) datasets collected from six subjects during high-intensity interval training (HIIT) activities. The Random Forest classifier achieved cross-validation accuracies of 95.76% on HAR data and 95.09% on EEG data, with corresponding test accuracies of 95.79% and 95.75%. XGBoost showed comparable performance on HAR data (95.80% CV, 95.72% test) and slightly better results on EEG data (96.73% CV, 96.52% test). The LSTM model trained on HAR data demonstrated exceptional performance, achieving 99.88% test accuracy. In the multimodal scenario, integrating HAR and EEG data markedly enhanced performance: while the Random Forest model achieved 92.43% CV accuracy and 94.02% test accuracy on the combined dataset, the XGBoost model excelled with 99.38% CV accuracy and 99.23% test accuracy. The MobileHART and HART systems showed promising but varied performance across activities and subjects, with overall accuracies of 84% and 81%, respectively. These findings demonstrate that EEG data provides valuable complementary information to traditional motion-sensor-based HAR, substantially improving activity recognition accuracy. This research highlights the potential of multimodal approaches for advancing real-time activity recognition systems in healthcare monitoring, rehabilitation, and personalized fitness tracking.
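For illustration, the sketch below shows a feature-level (early) fusion pipeline of the kind summarized above, evaluated with Random Forest and XGBoost under 5-fold cross-validation using scikit-learn and the xgboost package. The feature arrays, windowing, class count, and hyperparameters are illustrative assumptions, not the study's actual data, preprocessing, or settings.

# Minimal sketch of early fusion of HAR and EEG features (assumptions noted below).
# Assumed: pre-extracted, time-aligned feature windows X_har and X_eeg sharing a
# label vector y; hyperparameters are illustrative defaults, not the thesis settings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Placeholder data standing in for windowed motion-sensor (HAR) and EEG features.
n_windows, n_har_feats, n_eeg_feats, n_classes = 1200, 36, 64, 5
X_har = rng.normal(size=(n_windows, n_har_feats))
X_eeg = rng.normal(size=(n_windows, n_eeg_feats))
y = rng.integers(0, n_classes, size=n_windows)

# Early fusion: concatenate HAR and EEG features for each window.
X_multi = np.hstack([X_har, X_eeg])

X_train, X_test, y_train, y_test = train_test_split(
    X_multi, y, test_size=0.2, stratify=y, random_state=0
)

for name, model in [
    ("RandomForest", RandomForestClassifier(n_estimators=300, random_state=0)),
    ("XGBoost", XGBClassifier(n_estimators=300, eval_metric="mlogloss", random_state=0)),
]:
    # 5-fold cross-validation accuracy on the training split, then held-out test accuracy.
    cv_acc = cross_val_score(model, X_train, y_train, cv=5, scoring="accuracy").mean()
    model.fit(X_train, y_train)
    test_acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: CV accuracy={cv_acc:.4f}, test accuracy={test_acc:.4f}")

With real, subject-wise splits of HAR and EEG windows in place of the placeholder arrays, the same loop yields the CV and test accuracies reported above for the unimodal and combined datasets.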
