TY - GEN
T1 - Efficient Arabic Emotion Recognition Using Deep Neural Networks
AU - Hifny, Yasser
AU - Ali, Ahmed
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/5
Y1 - 2019/5
N2 - Emotion recognition from speech signals based on deep learning is an active research area. Convolutional neural networks (CNNs) may be the dominant method in this area. In this paper, we implement two neural architectures to address this problem. The first architecture is an attention-based CNN-LSTM-DNN model. In this novel architecture, the convolutional layers extract salient features and the bi-directional long short-term memory (BLSTM) layers handle the sequential phenomena of the speech signal. This is followed by an attention layer, which extracts a summary vector that is fed to the fully connected dense layer (DNN), which finally connects to a softmax output layer. The second architecture is based on a deep CNN model. The results on an Arabic speech emotion recognition task show that our innovative approach can lead to significant improvements (2.2% absolute improvement) over a strong deep CNN baseline system. On the other hand, the deep CNN models are significantly faster than the attention-based CNN-LSTM-DNN models in training and classification.
AB - Emotion recognition from speech signals based on deep learning is an active research area. Convolutional neural networks (CNNs) may be the dominant method in this area. In this paper, we implement two neural architectures to address this problem. The first architecture is an attention-based CNN-LSTM-DNN model. In this novel architecture, the convolutional layers extract salient features and the bi-directional long short-term memory (BLSTM) layers handle the sequential phenomena of the speech signal. This is followed by an attention layer, which extracts a summary vector that is fed to the fully connected dense layer (DNN), which finally connects to a softmax output layer. The second architecture is based on a deep CNN model. The results on an Arabic speech emotion recognition task show that our innovative approach can lead to significant improvements (2.2% absolute improvement) over a strong deep CNN baseline system. On the other hand, the deep CNN models are significantly faster than the attention-based CNN-LSTM-DNN models in training and classification.
KW - Speech emotion recognition
KW - attention
KW - bi-directional long short-term memory (BLSTM) networks
KW - convolutional neural networks (CNN)
KW - deep neural networks (DNN)
UR - http://www.scopus.com/inward/record.url?scp=85069004881&partnerID=8YFLogxK
U2 - 10.1109/ICASSP.2019.8683632
DO - 10.1109/ICASSP.2019.8683632
M3 - Conference contribution
AN - SCOPUS:85069004881
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 6710
EP - 6714
BT - 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019
Y2 - 12 May 2019 through 17 May 2019
ER -