Translation metrics

Judging whether a translation is good is difficult to do automatically. A common metric for machine translation quality is the Bilingual Evaluation Understudy (BLEU), originally introduced by Papineni et al. in BLEU: a Method for Automatic Evaluation of Machine Translation (http://aclweb.org/anthology/P/P02/P02-1040.pdf). BLEU is a modified, n-gram-based form of precision: it measures how many n-grams in the predicted translation also appear in one or more reference translations, combined with a brevity penalty for translations that are too short. If you'd like to use BLEU to measure the quality of your translations, the TensorFlow team has published a script that computes a BLEU score given a corpus of ground-truth translations and machine-predicted translations. You can find that script at https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py.
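To make this concrete, here is a minimal sketch of computing a corpus-level BLEU score in Python using NLTK's corpus_bleu, as an alternative to the linked script. The example sentences and their tokenization are illustrative assumptions, not data from this chapter.

# A minimal sketch of corpus-level BLEU with NLTK (an alternative to the
# TensorFlow nmt bleu.py script linked above). Sentences are assumptions.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# One or more ground-truth (reference) translations per predicted sentence,
# each tokenized into words.
references = [
    [["the", "cat", "sat", "on", "the", "mat"]],
    [["there", "is", "a", "dog", "in", "the", "garden"]],
]

# Machine-predicted translations, tokenized the same way.
hypotheses = [
    ["the", "cat", "sat", "on", "a", "mat"],
    ["a", "dog", "is", "in", "the", "garden"],
]

# BLEU combines modified n-gram precisions (here up to 4-grams, equally
# weighted) with a brevity penalty; smoothing avoids zero scores when a
# higher-order n-gram never matches.
score = corpus_bleu(
    references,
    hypotheses,
    weights=(0.25, 0.25, 0.25, 0.25),
    smoothing_function=SmoothingFunction().method1,
)
print(f"Corpus BLEU: {score:.4f}")

A score of 1.0 means the predictions match the references exactly at the n-gram level; in practice, scores for good translations are well below that, so BLEU is most useful for comparing systems against each other on the same test set.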
