Commit
Fix an evaluation bug in fairseq-generate (facebookresearch#1158)
Summary:
Pull Request resolved: fairinternal/fairseq-py#1158

When using BPE in --sacrebleu mode, the scores were computed before BPE was removed (on the H- strings, not the D- strings). This is now fixed. In addition, added warnings that scoring with target-side BPE without --sacrebleu is a bad idea.

Reviewed By: myleott

Differential Revision: D21260024

fbshipit-source-id: f8cf9e3a42e501043b794c841297940ab9e2b75a
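The sketch below illustrates why scoring the H- strings is wrong. It is not fairseq code: it assumes subword-nmt-style "@@ " continuation markers and uses the sacrebleu Python API; the `remove_bpe` helper and the example sentences are purely illustrative.

```python
# Minimal sketch (assumptions: subword-nmt "@@ " BPE markers, sacrebleu installed).
# BLEU must be computed on detokenized D- strings, not BPE-encoded H- strings.
import sacrebleu


def remove_bpe(line: str, bpe_symbol: str = "@@ ") -> str:
    """Strip BPE continuation markers so scoring sees whole words."""
    return (line + " ").replace(bpe_symbol, "").rstrip()


hypo_bpe = "the qu@@ ick brown fo@@ x"  # H- string (still BPE-encoded)
ref = "the quick brown fox"

# Buggy behaviour: scoring the H- string penalizes every split token.
buggy = sacrebleu.corpus_bleu([hypo_bpe], [[ref]])

# Fixed behaviour: remove BPE first (D- string), then score.
fixed = sacrebleu.corpus_bleu([remove_bpe(hypo_bpe)], [[ref]])

print(f"BLEU on H- strings: {buggy.score:.2f}")
print(f"BLEU on D- strings: {fixed.score:.2f}")
```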