A benchmark dataset and evaluation methodology for Chinese zero pronoun translation

Mingzhou Xu, Longyue Wang, Siyou Liu, Derek F. Wong, Shuming Shi, Zhaopeng Tu

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

The phenomenon of zero pronoun (ZP) has attracted increasing interest in the machine translation community due to its importance and difficulty. However, previous studies generally evaluate the quality of ZP translation with the BLEU score on MT test sets, which is neither expressive nor sensitive enough for accurate assessment. To bridge the data and evaluation gaps, we propose a benchmark test set and an evaluation metric for targeted evaluation of Chinese ZP translation. The human-annotated test set covers five challenging genres, which reveal different characteristics of ZPs for comprehensive evaluation. We systematically revisit advanced models on ZP translation and identify current challenges for future exploration. We release data, code, and trained models, which we hope can significantly promote research in this field.

Original language: English
Pages (from-to): 1263-1293
Number of pages: 31
Journal: Language Resources and Evaluation
Volume: 57
Issue number: 3
DOIs
Publication status: Published - Sept 2023

Keywords

  • Benchmark dataset
  • Discourse
  • Evaluation metric
  • Machine translation
  • Zero pronoun
