ARTICLE
TITLE

Multimodal Emotional Classification Based on Meaningful Learning

Hajar Filali    
Jamal Riffi    
Chafik Boulealam    
Mohamed Adnane Mahraz and Hamid Tairi    

Abstract

Emotion recognition has become one of the most researched subjects in the scientific community, especially in the field of human-computer interaction. Decades of scientific research have been devoted to unimodal emotion analysis, whereas recent contributions concentrate on multimodal emotion recognition. These efforts have achieved great accuracy across diverse Deep Learning applications. To achieve better performance for multimodal emotion recognition systems, we exploit the effectiveness of the Meaningful Neural Network to predict emotions during a conversation. We propose Deep Learning-based feature extraction methods for the text and audio modalities, together with a bimodal modality created by fusing the text and audio features. The feature vectors from these three modalities feed a Meaningful Neural Network that learns each characteristic separately: its architecture dedicates a set of neurons to each component of the input vector before combining them all in the last layer. Our model was evaluated on MELD, a multimodal and multiparty dataset for emotion recognition in conversation. The proposed approach reached an accuracy of 86.69%, which significantly outperforms all current multimodal systems. Several evaluation techniques applied to our work demonstrate the robustness and superiority of our model over other state-of-the-art models on MELD.
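To make the branch-then-fuse idea concrete, the following is a minimal sketch, not the authors' code: one small sub-network per input stream (text features, audio features, and the bimodal text+audio fusion), with the streams combined only in the final classification layer. All layer sizes, the module names, and the feature dimensions are illustrative assumptions; only the 7-class output matches MELD's emotion set.

import torch
import torch.nn as nn

class MeaningfulStyleNet(nn.Module):
    """Illustrative branch-per-stream network fused in the last layer."""

    def __init__(self, text_dim=300, audio_dim=128, hidden=64, n_classes=7):
        super().__init__()
        # Dedicated neurons for each input stream, learned separately.
        self.text_branch = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.audio_branch = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        # Bimodal stream: the concatenated text+audio feature vector.
        self.bimodal_branch = nn.Sequential(
            nn.Linear(text_dim + audio_dim, hidden), nn.ReLU()
        )
        # The three streams are combined only in the final layer.
        self.classifier = nn.Linear(3 * hidden, n_classes)

    def forward(self, text_feat, audio_feat):
        bimodal_feat = torch.cat([text_feat, audio_feat], dim=-1)
        h = torch.cat(
            [
                self.text_branch(text_feat),
                self.audio_branch(audio_feat),
                self.bimodal_branch(bimodal_feat),
            ],
            dim=-1,
        )
        return self.classifier(h)  # logits over the 7 MELD emotion classes

# Usage with random stand-in features for a batch of 4 utterances:
model = MeaningfulStyleNet()
logits = model(torch.randn(4, 300), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 7])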

SUBJECTS
INFRASTRUCTURE
SIMILAR JOURNALS
Future Internet
