Applied Sciences, Vol. 13, Issue 3 (2023) — Article

Bimodal Fusion Network with Multi-Head Attention for Multimodal Sentiment Analysis

Rui Zhang    
Chengrong Xue    
Qingfu Qi    
Liyuan Lin    
Jing Zhang and Lun Zhang    

Abstract

The enrichment of social media expression has made multimodal sentiment analysis a research hotspot. However, modality heterogeneity brings great difficulties to effective cross-modal fusion, especially the modality alignment problem and the uncontrolled vector offset during fusion. In this paper, we propose a bimodal multi-head attention network (BMAN) based on text and audio, which adaptively captures intramodal utterance features and complex intermodal alignment relationships. Specifically, we first employ two independent unimodal encoders to extract the semantic features within each modality. Considering that different modalities deserve different weights, we further build a joint decoder that fuses the audio information into the text representation using learnable weights, avoiding an unreasonable vector offset. The obtained cross-modal representation is used to improve sentiment prediction performance. Experiments on both the aligned and unaligned CMU-MOSEI datasets show that our model outperforms multiple baselines and has outstanding advantages in solving the cross-modal alignment problem.
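To make the described design concrete, the following is a minimal, hypothetical sketch of a text–audio fusion module of this kind: two independent unimodal encoders, cross-modal multi-head attention that aligns audio to text without requiring explicit word-level alignment, and a learnable weight that bounds how far the audio signal can shift the text representation. All dimensions, layer counts, and the scalar gating parameter are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a bimodal multi-head attention fusion block (PyTorch).
# Feature sizes, layer counts, and the scalar gate are assumptions for illustration.
import torch
import torch.nn as nn


class BimodalFusionSketch(nn.Module):
    def __init__(self, text_dim=768, audio_dim=74, d_model=128, n_heads=4):
        super().__init__()
        # Independent unimodal encoders for the text and audio sequences.
        self.text_proj = nn.Linear(text_dim, d_model)
        self.audio_proj = nn.Linear(audio_dim, d_model)
        self.text_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=2)
        self.audio_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=2)
        # Joint decoder: text queries attend over audio keys/values,
        # so no explicit cross-modal alignment is required beforehand.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Learnable weight controlling how much the audio stream offsets the text vector.
        self.alpha = nn.Parameter(torch.tensor(0.5))
        self.head = nn.Linear(d_model, 1)  # sentiment intensity regression

    def forward(self, text, audio):
        t = self.text_enc(self.text_proj(text))      # (B, L_text, d_model)
        a = self.audio_enc(self.audio_proj(audio))   # (B, L_audio, d_model)
        fused, _ = self.cross_attn(query=t, key=a, value=a)
        # Weighted residual fusion keeps text dominant and bounds the vector offset.
        z = t + self.alpha * fused
        return self.head(z.mean(dim=1)).squeeze(-1)  # (B,)


# Usage example with unaligned sequence lengths per modality.
model = BimodalFusionSketch()
text = torch.randn(2, 20, 768)    # e.g., token-level text features
audio = torch.randn(2, 50, 74)    # e.g., frame-level acoustic features
print(model(text, audio).shape)   # torch.Size([2])
```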
