
On Knowledge Distillation for Direct Speech Translation

Marco Gaido; Mattia Antonino Di Gangi; Matteo Negri; Marco Turchi
2020-01-01

Abstract

Direct speech translation (ST) has been shown to be a complex task requiring knowledge transfer from its sub-tasks: automatic speech recognition (ASR) and machine translation (MT). For MT, one of the most promising techniques to transfer knowledge is knowledge distillation. In this paper, we compare different solutions to distill knowledge in a sequence-to-sequence task like ST. Moreover, we analyze potential drawbacks of this approach and how to alleviate them while maintaining the benefits in terms of translation quality.
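The abstract refers to knowledge distillation applied to a sequence-to-sequence student. As a point of reference, the sketch below shows one common variant of this idea: word-level distillation, where an ST student is trained to match the per-token output distribution of an MT teacher in addition to the gold targets. The function name, the temperature, and the interpolation weight `alpha` are illustrative assumptions for this example and are not taken from the paper.

import torch
import torch.nn.functional as F


def word_level_kd_loss(student_logits, teacher_logits, targets,
                       pad_id, temperature=1.0, alpha=0.5):
    """Interpolate cross-entropy on the gold translation with KL divergence
    towards the teacher's per-token output distribution.

    student_logits, teacher_logits: (batch, tgt_len, vocab)
    targets: (batch, tgt_len) gold token ids
    """
    vocab = student_logits.size(-1)

    # Standard label loss on the reference translation.
    ce = F.cross_entropy(student_logits.view(-1, vocab),
                         targets.view(-1), ignore_index=pad_id)

    # Distillation loss: match the teacher's (optionally softened) distribution.
    t = temperature
    kd = F.kl_div(F.log_softmax(student_logits / t, dim=-1),
                  F.softmax(teacher_logits / t, dim=-1),
                  reduction="none").sum(-1)          # per-token KL
    mask = targets.ne(pad_id).float()                # ignore padding positions
    kd = (kd * mask).sum() / mask.sum() * (t * t)    # rescale for temperature

    return alpha * kd + (1.0 - alpha) * ce

In practice the teacher logits would come from a frozen MT model run on the reference transcripts, while the student consumes the audio; other variants discussed in the literature (e.g. sequence-level distillation, where the student is trained on the teacher's beam-search outputs) change only how the training targets are produced, not this loss structure.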
Files in this record:

File: paper_28.pdf (open access)
Type: Post-print document
License: Creative Commons
Size: 301.27 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/324624