Multi-source transformer with combined losses for automatic post editing

Ruchit Agrawal, Rajen Chatterjee, Matteo Negri, Marco Turchi

Abstract

Recent approaches to the Automatic Post-editing (APE) of Machine Translation (MT) have shown that the best results are obtained by neural multi-source models that correct the raw MT output by also considering information from the corresponding source sentence. To this end, we present for the first time a neural multi-source APE model based on the Transformer architecture. Moreover, we employ sequence-level loss functions in order to avoid exposure bias during training and to be consistent with the automatic evaluation metrics used for the task. These are the main features of our submissions to the WMT 2018 APE shared task (Chatterjee et al., 2018), where we participated both in the PBSMT sub-task (i.e. the correction of MT outputs from a phrase-based system) and in the NMT sub-task (i.e. the correction of neural outputs). In the first sub-task, our system improves over the baseline by up to -5.3 TER and +8.23 BLEU points, ranking second out of 11 submitted runs. In the second one, characterized by the higher quality of the initial translations, we report lower but statistically significant gains (up to -0.38 TER and +0.8 BLEU), ranking first out of 10 submissions.
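The abstract names two techniques that a short sketch can make concrete. Below is a minimal sketch, assuming PyTorch, of what a multi-source Transformer for APE could look like: one encoder for the source sentence, one for the raw MT output, and a decoder that cross-attends to both. The class name, the concatenation strategy for combining the two encoder memories, and all hyperparameters are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical multi-source Transformer sketch for APE (PyTorch).
# Positional encodings, masking, and padding are omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiSourceAPE(nn.Module):
    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # One encoder for the source sentence, one for the raw MT output.
        self.src_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.mt_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src, mt, pe_in):
        src_mem = self.src_encoder(self.embed(src))
        mt_mem = self.mt_encoder(self.embed(mt))
        # Illustrative multi-source combination: concatenate the two
        # encoder memories along the time axis so that the decoder's
        # cross-attention sees both the source and the MT output.
        memory = torch.cat([src_mem, mt_mem], dim=1)
        dec = self.decoder(self.embed(pe_in), memory)
        return self.out(dec)
```

For the sequence-level training signal, a common way to avoid exposure bias and align training with the evaluation metric is a REINFORCE/minimum-risk-style objective: sample post-edits from the model, score each with a sentence-level metric (e.g. 1 - TER or sentence BLEU), and weight its log-likelihood by that reward. The function below is a sketch under those assumptions, not the paper's exact loss.

```python
def sequence_level_loss(logits, sample, reward):
    """REINFORCE-style expected-risk loss (hypothetical sketch).

    logits: (batch, T, vocab) decoder outputs for the sampled post-edits
    sample: (batch, T) token ids sampled from the model
    reward: (batch,) sentence-level metric score, e.g. 1 - TER
    """
    logp = F.log_softmax(logits, dim=-1)
    tok_logp = logp.gather(-1, sample.unsqueeze(-1)).squeeze(-1)  # (batch, T)
    seq_logp = tok_logp.sum(dim=1)                                # (batch,)
    # Maximize the reward-weighted log-likelihood of sampled sequences
    # (equivalently, minimize its negative).
    return -(reward * seq_logp).mean()
```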
Publication year: 2018
ISBN: 978-1-948087-81-0
Files in this product:
WMT099.pdf (open access)
Type: Post-print document
License: Creative Commons
Size: 195.77 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/316429