Recurrent graph encoder for syntax-aware neural machine translation

  • Liang Ding
  • Longyue Wang
  • Siyou Liu

Research output: Article, peer-reviewed

7 citations (Scopus)

Abstract

Self-attention networks (SAN) have achieved promising performance in a variety of NLP tasks, e.g. neural machine translation (NMT), as they can directly build dependencies among words. However, they are weaker at learning positional information than recurrent neural networks (RNN). Two natural questions arise: (1) Can we design an RNN-based component that is directly guided by syntactic dependencies? (2) Does such a syntax-enhanced sequence-modeling component benefit existing NMT architectures, e.g. RNN-based and Transformer-based NMT? To answer these questions, we propose a simple yet effective recurrent graph syntax encoder, dubbed RGSE, which exploits off-the-shelf syntactic dependencies together with the intrinsic recurrence property of RNNs, such that RGSE models syntactic dependencies and sequential information (i.e. word order) simultaneously. Experimental studies on various neural machine translation tasks demonstrate that RGSE-equipped RNN and Transformer models gain consistent, significant improvements over several strong syntax-aware baselines, with only a minuscule increase in parameters. Extensive analysis further illustrates that RGSE improves syntactic and semantic preservation over SAN and, additionally, shows superior robustness against syntactic noise compared with existing syntax-aware NMT models.
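The abstract does not describe RGSE's exact architecture, but the core idea (a recurrent encoder whose update at each word also conditions on the hidden state of that word's syntactic head) can be illustrated with a minimal toy sketch. Everything below is an assumption for illustration: the function names (`rgse_encode`), the tanh cell, and the zero-vector fallback for ROOT or rightward heads are hypothetical choices, not the paper's actual design.

```python
import math

def tanh_vec(v):
    # element-wise tanh over a plain-Python vector
    return [math.tanh(x) for x in v]

def matvec(W, v):
    # matrix-vector product with nested lists
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def vec_add(*vs):
    # element-wise sum of several vectors
    return [sum(xs) for xs in zip(*vs)]

def rgse_encode(embeddings, heads, Wx, Wh, Wg):
    """Toy recurrent graph encoding (hypothetical sketch, not the paper's model).

    Encodes a sentence left to right. Each hidden state mixes:
      - the current word embedding            (content),
      - the previous hidden state             (word order / recurrence),
      - the hidden state of the word's head   (syntactic dependency).
    `heads[t]` is the index of word t's dependency head (-1 for ROOT).
    """
    d = len(embeddings[0])
    hidden = []
    h_prev = [0.0] * d
    for t, x in enumerate(embeddings):
        head = heads[t]
        # the head's state is available only if the head precedes word t;
        # otherwise (ROOT, or a head to the right) fall back to zeros
        h_head = hidden[head] if 0 <= head < t else [0.0] * d
        h = tanh_vec(vec_add(matvec(Wx, x), matvec(Wh, h_prev), matvec(Wg, h_head)))
        hidden.append(h)
        h_prev = h
    return hidden
```

A sketch like this makes the abstract's claim concrete: the `h_prev` term carries the sequential signal that plain SAN lacks, while the `h_head` term injects the off-the-shelf dependency structure, so both are modeled in one pass.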

Original language: English
Pages (from-to): 1053-1062
Number of pages: 10
Journal: International Journal of Machine Learning and Cybernetics
Volume: 14
Issue number: 4
DOIs
Publication status: Published - April 2023
