Human Activity Recognition using Deep Learning Models on Smartphones and Smartwatches Sensor Data

dc.contributor.author: Oluwalade, Bolu
dc.contributor.author: Neela, Sunil
dc.contributor.author: Wawira, Judy
dc.contributor.author: Adejumo, Tobiloba
dc.contributor.author: Purkayastha, Saptarshi
dc.contributor.department: BioHealth Informatics, School of Informatics and Computing
dc.date.accessioned: 2023-01-30T20:47:43Z
dc.date.available: 2023-01-30T20:47:43Z
dc.date.issued: 2021
dc.description.abstract: In recent years, human activity recognition has garnered considerable attention in both industrial and academic research because of the wide deployment of sensors, such as accelerometers and gyroscopes, in products such as smartphones and smartwatches. Activity recognition is currently applied in various fields where valuable information about an individual's functional ability and lifestyle is needed. In this study, we used the popular WISDM dataset for activity recognition. Using multivariate analysis of covariance (MANCOVA), we established a statistically significant difference (p < 0.05) between the data generated by the sensors embedded in smartphones and smartwatches, showing that the two device types do not capture data in the same way because of where they are worn on the body. We deployed several neural network architectures to classify 15 different hand-oriented and non-hand-oriented activities: long short-term memory (LSTM), bi-directional LSTM (BiLSTM), convolutional neural network (CNN), and convolutional LSTM (ConvLSTM). The developed models performed best with watch accelerometer data. We also found that the classification precision obtained with the convolutional input classifiers (CNN and ConvLSTM) was higher than that of the end-to-end LSTM classifier for 12 of the 15 activities. Additionally, the CNN model for the watch accelerometer classified non-hand-oriented activities better than hand-oriented activities.
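The abstract describes classifying activities from windowed sensor streams. The record contains no code, but a minimal, hypothetical sketch of the kind of fixed-length windowing commonly applied to tri-axial accelerometer data before feeding it to such classifiers might look like this (the window length and step below are illustrative choices, not values taken from the paper):

```python
from typing import List, Tuple

Sample = Tuple[float, float, float]  # one (x, y, z) accelerometer reading


def sliding_windows(samples: List[Sample],
                    window_size: int = 200,
                    step: int = 100) -> List[List[Sample]]:
    """Segment a tri-axial accelerometer stream into fixed-length,
    overlapping windows, a common preprocessing step in HAR pipelines.

    With a 20 Hz sampling rate, window_size=200 would correspond to a
    10-second window; these defaults are purely illustrative.
    """
    windows = []
    for start in range(0, len(samples) - window_size + 1, step):
        windows.append(samples[start:start + window_size])
    return windows


# Example: a dummy 500-sample stream yields 4 windows of 200 samples
# each, with 50% overlap (starts at 0, 100, 200, 300).
stream = [(0.0, 0.0, 9.8)] * 500
wins = sliding_windows(stream)
```

Each window would then be labeled with an activity and passed to a sequence or convolutional classifier of the kind compared in the paper.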
dc.eprint.version: Author's manuscript
dc.identifier.citation: Oluwalade, B., Neela, S., Wawira, J., Adejumo, T., & Purkayastha, S. (2021). Human Activity Recognition using Deep Learning Models on Smartphones and Smartwatches Sensor Data. Proceedings of the 14th International Joint Conference on Biomedical Engineering Systems and Technologies, 645–650. https://doi.org/10.5220/0010325906450650
dc.identifier.issn: 978-989-758-490-9
dc.identifier.uri: https://hdl.handle.net/1805/31045
dc.language.iso: en_US
dc.publisher: Scitepress
dc.relation.isversionof: 10.5220/0010325906450650
dc.relation.journal: Proceedings of the 14th International Joint Conference on Biomedical Engineering Systems and Technologies
dc.rights: Publisher Policy
dc.source: Author
dc.subject: Human Activities Recognition (HAR)
dc.subject: WISDM Dataset
dc.subject: Convolutional LSTM (ConvLSTM)
dc.title: Human Activity Recognition using Deep Learning Models on Smartphones and Smartwatches Sensor Data
dc.type: Article
Files
Original bundle
Name: Oluwalade2022Human-AAM.pdf
Size: 413.39 KB
Format: Adobe Portable Document Format
Description: Article
License bundle
Name: license.txt
Size: 1.99 KB
Description: Item-specific license agreed upon to submission