Applied Sciences, Vol. 13, No. 4 (2023)

Document-Level Event Role Filler Extraction Using Key-Value Memory Network

Hao Wang, Miao Li, Jianyong Duan, Li He and Qing Zhang

Abstract

Previous work has shown that end-to-end neural sequence models perform well on document-level event role filler extraction. However, such models cannot exploit global information, which leads to incomplete extraction of document-level event arguments: the inputs to the BiLSTM are single-word vectors that carry no contextual information, a limitation that is especially pronounced at the document level. To address this problem, we propose a key-value memory network to enhance document-level contextual information, and the overall model operates at two levels: sentence and document. At the sentence level, a BiLSTM captures key sentence information. At the document level, a key-value memory network enhances document-level representations by recording the words in an article that are sensitive to contextual similarity. The two levels of contextual information are combined through a fusion formula. Experiments on the MUC-4 dataset show that the model with the key-value memory network outperforms the other models.
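The abstract's core mechanism can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the memory read is a standard attention-weighted sum over key-value slots, the query stands in for a BiLSTM hidden state, and the convex-combination fusion with gate `g` is a hypothetical stand-in for the paper's unstated fusion formula.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def kv_memory_read(query, keys, values):
    # Key-value memory read: attend over keys, return the
    # attention-weighted sum of the corresponding values.
    scores = keys @ query        # (num_slots,)
    weights = softmax(scores)    # attention distribution over slots
    return weights @ values      # (dim,) document-level representation

def fuse(sentence_rep, doc_rep, g=0.5):
    # Hypothetical fusion: convex combination of the two levels.
    # The paper's actual fusion formula is not given in the abstract.
    return g * sentence_rep + (1.0 - g) * doc_rep

rng = np.random.default_rng(0)
dim, slots = 8, 5
keys = rng.normal(size=(slots, dim))    # memory keys (contextual cues)
values = rng.normal(size=(slots, dim))  # memory values (stored representations)
query = rng.normal(size=dim)            # stand-in for a BiLSTM hidden state

doc_rep = kv_memory_read(query, keys, values)
fused = fuse(query, doc_rep)
print(fused.shape)  # (8,)
```

The attention weights always sum to one, so the read is a proper weighted average of the stored values regardless of the number of memory slots.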
