Machine translation is the task of automatically translating a sentence from a source language into a different target language.
Results marked with * report the mean test score over the window of 21 consecutive evaluations with the best average dev-set BLEU, as in Chen et al. (2018).
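As a hedged illustration of that reporting protocol (the function name, array inputs, and use of NumPy are assumptions for the sketch, not the authors' released code), the window selection could look like this:

```python
import numpy as np

def best_window_test_score(dev_bleu, test_bleu, window=21):
    """Pick the window of `window` consecutive evaluations with the highest
    mean dev-set BLEU, then report the mean test-set BLEU over that window."""
    dev_bleu = np.asarray(dev_bleu, dtype=float)
    test_bleu = np.asarray(test_bleu, dtype=float)
    assert len(dev_bleu) == len(test_bleu) >= window

    # Mean dev BLEU for every run of `window` consecutive evaluations.
    window_means = np.convolve(dev_bleu, np.ones(window) / window, mode="valid")
    start = int(np.argmax(window_means))  # window with the best average dev BLEU

    # Report the mean test BLEU over the selected window.
    return float(test_bleu[start:start + window].mean())
```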
Models are evaluated on the English-German dataset of the Ninth Workshop on Statistical Machine Translation (WMT 2014) using BLEU.
Model | BLEU | Paper / Source |
---|---|---|
Transformer Big + BT (Edunov et al., 2018) | 35.0 | Understanding Back-Translation at Scale |
DeepL | 33.3 | DeepL Press release |
RNMT+ (Chen et al., 2018) | 28.5* | The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation |
Transformer Big (Vaswani et al., 2017) | 28.4 | Attention Is All You Need |
Transformer Base (Vaswani et al., 2017) | 27.3 | Attention Is All You Need |
MoE (Shazeer et al., 2017) | 26.03 | Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer |
ConvS2S (Gehring et al., 2017) | 25.16 | Convolutional Sequence to Sequence Learning |
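The scores above are corpus-level BLEU. A minimal sketch of how such a score can be computed with the sacrebleu library follows; the example sentences are placeholders, and the papers in the table may differ in tokenization and BLEU variant:

```python
import sacrebleu

# Placeholder system outputs and references (one reference set shown);
# a real WMT evaluation uses the official test set and detokenized output.
hypotheses = ["The cat sat on the mat .", "He went to the market yesterday ."]
references = [["The cat sat on the mat .", "He went to the market yesterday ."]]

# Corpus-level BLEU over the whole test set.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```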
Similarly, models are evaluated on the English-French dataset of the Ninth Workshop on Statistical Machine Translation (WMT 2014) using BLEU.
Model | BLEU | Paper / Source |
---|---|---|
DeepL | 45.9 | DeepL Press release |
Transformer Big + BT (Edunov et al., 2018) | 45.6 | Understanding Back-Translation at Scale |
RNMT+ (Chen et al., 2018) | 41.0* | The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation |
Transformer Big (Vaswani et al., 2017) | 41.0 | Attention Is All You Need |
MoE (Shazeer et al., 2017) | 40.56 | Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer |
ConvS2S (Gehring et al., 2017) | 40.46 | Convolutional Sequence to Sequence Learning |
Transformer Base (Vaswani et al., 2017) | 38.1 | Attention Is All You Need |