Learning Human Activity from Visual Data Using Deep Learning

Taha Alhersh, Heiner Stuckenschmidt, Atiq Ur Rehman, Samir Brahim Belhaouari*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

Advances in wearable technologies have the potential to revolutionize and improve people's lives. The gains go beyond the personal sphere, encompassing business and, by extension, the global economy. These technologies are incorporated into electronic devices that collect data from consumers' bodies and their immediate environment. Human activity recognition, which involves the use of various body sensors and modalities either separately or simultaneously, is one of the most important areas of wearable technology development. In real-life scenarios, the number of sensors deployed is dictated by practical and financial considerations. In the research for this article, we reviewed our earlier efforts and accordingly reduced the number of required sensors, limiting ourselves to first-person vision data for activity recognition. Nonetheless, our results surpass the state of the art by more than 4% in F1 score.

Original language: English
Article number: 9494357
Pages (from-to): 106245-106253
Number of pages: 9
Journal: IEEE Access
Volume: 9
DOIs
Publication status: Published - 2021

Keywords

  • Human activity recognition
  • deep learning
  • first-person vision

