TY - GEN
T1 - Action recognition using local visual descriptors and inertial data
AU - Alhersh, Taha
AU - Brahim Belhaouari, Samir
AU - Stuckenschmidt, Heiner
N1 - Publisher Copyright:
© Springer Nature Switzerland AG 2019.
PY - 2019
Y1 - 2019
N2 - Different body sensors and modalities can be used in human action recognition, either separately or simultaneously. In this work we use inertial measurement units (IMUs) positioned at the left and right hands together with first-person vision for human action recognition. A novel statistical feature extraction method is proposed, based on the curvature of the graph of a function and on tracking the left and right hand positions in space. Local visual descriptors are used as features for egocentric vision. An intermediate fusion between the IMUs and the visual sensor is performed. Despite using only two IMU sensors with egocentric vision, our classification achieves 99.61% accuracy in recognizing nine different actions. The feature extraction step can play a vital role in human action recognition with a limited number of sensors; hence, our method might indeed be promising.
AB - Different body sensors and modalities can be used in human action recognition, either separately or simultaneously. In this work we use inertial measurement units (IMUs) positioned at the left and right hands together with first-person vision for human action recognition. A novel statistical feature extraction method is proposed, based on the curvature of the graph of a function and on tracking the left and right hand positions in space. Local visual descriptors are used as features for egocentric vision. An intermediate fusion between the IMUs and the visual sensor is performed. Despite using only two IMU sensors with egocentric vision, our classification achieves 99.61% accuracy in recognizing nine different actions. The feature extraction step can play a vital role in human action recognition with a limited number of sensors; hence, our method might indeed be promising.
KW - Classification
KW - Feature extraction
KW - Human action recognition
KW - IMUs
KW - Sensor fusing
KW - Visual descriptors
UR - http://www.scopus.com/inward/record.url?scp=85076268224&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-34255-5_9
DO - 10.1007/978-3-030-34255-5_9
M3 - Conference contribution
AN - SCOPUS:85076268224
SN - 9783030342548
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 123
EP - 138
BT - Ambient Intelligence - 15th European Conference, AmI 2019, Proceedings
A2 - Chatzigiannakis, Ioannis
A2 - De Ruyter, Boris
A2 - Mavrommati, Irene
PB - Springer
T2 - 15th European Conference on Ambient Intelligence, AmI 2019
Y2 - 13 November 2019 through 15 November 2019
ER -