Information, Vol. 12, No. 3 (2021)

Pre-Training on Mixed Data for Low-Resource Neural Machine Translation

Wenbo Zhang, Xiao Li, Yating Yang and Rui Dong

Abstract

The pre-training and fine-tuning paradigm has been shown to be effective for low-resource neural machine translation. In this paradigm, models pre-trained on monolingual data are used to initialize translation models, transferring knowledge from the monolingual data into the translation models. Recent pre-training models usually take sentences with randomly masked words as input and are trained to predict the masked words from the unmasked ones. In this paper, we propose a new pre-training method that still predicts masked words, but randomly replaces some of the unmasked words in the input with their translations in another language. The translation words come from bilingual data, so the pre-training data contains both monolingual and bilingual data. We conduct experiments on a Uyghur-Chinese corpus to evaluate our method. The experimental results show that our method gives the pre-training model better generalization ability and helps the translation model achieve better performance. Through a word translation task, we also demonstrate that our method enables the embeddings of the translation model to acquire more alignment knowledge.
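
To make the data construction step concrete, the sketch below mixes standard word masking with word-level translation replacement in a single input sentence. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name build_example, the masking and replacement probabilities, and the tiny toy lexicon are hypothetical, and in the paper the translation pairs are drawn from the bilingual corpus rather than a hand-written dictionary.

```python
import random

MASK = "[MASK]"
IGNORE = None  # positions that contribute no prediction loss

def build_example(tokens, lexicon, mask_prob=0.15, translate_prob=0.15):
    """Build one mixed-data pre-training example.

    Some words are masked and must be predicted (standard masked language
    modeling); some of the remaining, unmasked words are replaced in the
    input by a translation from a bilingual word lexicon, so the input
    mixes words from both languages.
    """
    inputs, targets = [], []
    for tok in tokens:
        r = random.random()
        if r < mask_prob:
            inputs.append(MASK)
            targets.append(tok)        # loss is computed on masked words
        elif r < mask_prob + translate_prob and tok in lexicon:
            inputs.append(random.choice(lexicon[tok]))
            targets.append(IGNORE)     # translated word is input-only
        else:
            inputs.append(tok)
            targets.append(IGNORE)
    return inputs, targets

# Toy usage with a hypothetical lexicon (in practice, word pairs
# extracted from the bilingual data).
lexicon = {"book": ["كىتاب"], "good": ["好"]}
print(build_example("this is a good book".split(), lexicon))
```

Because only the replacement step changes, the same masked-word prediction objective can be kept, while the bilingual words in the input give the pre-trained embeddings a chance to pick up cross-lingual alignment signal.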
