Commit
fairseq transformer: enable decoder_output_dim (facebookresearch#2096)
Summary:
Pull Request resolved: facebookresearch#2096

No change to existing behavior. Allows the use of an extra learned linear projection (bottleneck layer) before the output projection. This structure was already supported in `TransformerDecoder` via `args.decoder_output_dim` and used in architectures such as `transformer_lm`; this change surfaces a command-line option for the basic transformer architecture.

Reviewed By: cndn

Differential Revision: D21443249

fbshipit-source-id: cdf5806c97ce03a77befa14bc482c81c7b9c83a1
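To illustrate what the bottleneck layer does, here is a minimal, self-contained sketch of a decoder output head with an optional learned projection applied before the vocabulary projection. This is not fairseq's actual `TransformerDecoder` code; the class and parameter names (`DecoderOutputHead`, `embed_dim`, `output_dim`, `vocab_size`) are illustrative assumptions, and the projection is only added when the requested output dimension differs from the decoder's hidden size, mirroring the role `decoder_output_dim` plays in the summary above.

```python
import torch
import torch.nn as nn


class DecoderOutputHead(nn.Module):
    """Sketch of a decoder output head with an optional bottleneck
    (extra learned linear projection) before the output projection.
    Names are hypothetical, not taken from fairseq."""

    def __init__(self, embed_dim: int, output_dim: int, vocab_size: int):
        super().__init__()
        # Bottleneck is only needed when the decoder hidden size
        # differs from the requested output dimension.
        self.project_out = (
            nn.Linear(embed_dim, output_dim, bias=False)
            if output_dim != embed_dim
            else None
        )
        self.output_projection = nn.Linear(output_dim, vocab_size, bias=False)

    def forward(self, decoder_states: torch.Tensor) -> torch.Tensor:
        # decoder_states: (batch, tgt_len, embed_dim)
        x = decoder_states
        if self.project_out is not None:
            x = self.project_out(x)       # (batch, tgt_len, output_dim)
        return self.output_projection(x)  # (batch, tgt_len, vocab_size)


# Example: a 512-dim decoder bottlenecked to 256 before a 32k-word softmax.
head = DecoderOutputHead(embed_dim=512, output_dim=256, vocab_size=32000)
logits = head(torch.randn(2, 10, 512))
print(logits.shape)  # torch.Size([2, 10, 32000])
```

A smaller `output_dim` reduces the size of the final vocabulary projection (here 256 x 32000 instead of 512 x 32000), which is the usual motivation for such a bottleneck; setting it equal to the decoder's embedding dimension leaves the model unchanged, consistent with "no change to existing behavior".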