Recurrent graph encoder for syntax-aware neural machine translation

Liang Ding, Longyue Wang, Siyou Liu

Research output: Contribution to journal › Article › peer-review


Self-attention networks (SAN) have achieved promising performance in a variety of NLP tasks, e.g. neural machine translation (NMT), as they can directly build dependencies among words. However, SANs are weaker at learning positional information than recurrent neural networks (RNN). Two natural questions arise: (1) Can we design an RNN-based component that is directly guided by syntax dependencies? (2) Does such a syntax-enhanced sequence modeling component benefit existing NMT architectures, e.g. RNN-based NMT and Transformer-based NMT? To answer these questions, we propose a simple yet effective recurrent graph syntax encoder, dubbed RGSE, which exploits off-the-shelf syntax dependencies together with its intrinsic recurrence, such that RGSE models syntactic dependencies and sequential information (i.e. word order) simultaneously. Experiments on various neural machine translation tasks demonstrate that RNN and Transformer models equipped with RGSE achieve consistent and significant improvements over several strong syntax-aware baselines, with only a minuscule increase in parameters. Further analysis illustrates that RGSE preserves syntactic and semantic information better than SAN and, additionally, is more robust to syntactic noise than existing syntax-aware NMT models.
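The core idea can be illustrated with a minimal sketch. The paper does not publish its exact equations here, so the functions below (`rgse_step`, `rgse_encode`) and the choice of combining the sequential recurrence with a mean over dependency-head states are illustrative assumptions, not the authors' implementation: at each position the hidden state is computed from the current input, the previous hidden state (word order), and the states of the word's syntactic heads from a dependency parse (syntax).

```python
import numpy as np

def rgse_step(x_t, h_prev, h_heads, Wx, Wh, Wg):
    """One hypothetical recurrent-graph step: fuse sequential and syntactic context.

    x_t:     input vector at position t
    h_prev:  hidden state of the previous word (sequential recurrence)
    h_heads: hidden states of t's dependency heads (syntactic neighbors)
    """
    # Aggregate syntactic neighbors; fall back to zeros if the word has no
    # in-sentence head yet (e.g. the root). Mean-pooling is an assumption.
    g = h_heads.mean(axis=0) if len(h_heads) else np.zeros_like(h_prev)
    return np.tanh(Wx @ x_t + Wh @ h_prev + Wg @ g)

def rgse_encode(X, heads, Wx, Wh, Wg):
    """Encode a sentence left to right.

    X:     (T, d_in) word representations
    heads: heads[t] is a list of dependency-head indices for word t,
           taken from an off-the-shelf parser; only already-encoded
           positions (< t) are used, keeping the pass strictly recurrent.
    """
    T = X.shape[0]
    d = Wh.shape[0]
    H = np.zeros((T, d))
    h = np.zeros(d)
    for t in range(T):
        h_heads = H[[i for i in heads[t] if i < t]]
        h = rgse_step(X[t], h, h_heads, Wx, Wh, Wg)
        H[t] = h
    return H
```

The resulting states `H` could then replace or augment the source-side representations fed to an RNN- or Transformer-based NMT decoder; since only three extra weight matrices are introduced, the parameter overhead stays small, consistent with the abstract's claim.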

Original language: English
Pages (from-to): 1053-1062
Number of pages: 10
Journal: International Journal of Machine Learning and Cybernetics
Issue number: 4
Publication status: Published - Apr 2023


  • Neural machine translation
  • Recurrent graph
  • Recurrent neural network
  • Self-attention network
  • Syntax-aware


