address #25
fix setup.py
fix torch.flip not liking bool type
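Older PyTorch versions raise an error when `torch.flip` is given a bool tensor. A common workaround (a sketch, assuming that is the change here; `flip_bool` is a hypothetical name) casts to int, flips, then casts back:

```python
import torch

def flip_bool(t, dims):
    # torch.flip on bool tensors fails on older pytorch versions,
    # so cast to int, flip, then cast back to bool
    return t.int().flip(dims).bool()

mask = torch.tensor([True, True, False])
flipped = flip_bool(mask, (0,))  # tensor([False, True, True])
```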
make sure mask is passed
fix potential bug of coarsening one extra level than needed
add token shifting experimental research feature, to improve on baseline
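Token shifting typically works by shifting a portion of each token's feature dimensions from the previous position, giving every token a cheap view of its neighbor before attention. A minimal sketch of the idea (the function name and the half-split are assumptions, not the repo's exact code):

```python
import torch
import torch.nn.functional as F

def shift_tokens(x, amount=1):
    # x: (batch, seq, dim)
    # split the feature dim in half; shift one half forward by
    # `amount` positions along the sequence, zero-padding the front
    x_shift, x_pass = x.chunk(2, dim=-1)
    x_shift = F.pad(x_shift, (0, 0, amount, -amount))
    return torch.cat((x_shift, x_pass), dim=-1)

x = torch.randn(1, 4, 8)
out = shift_tokens(x)  # same shape, half the features come from t-1
```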
add reversible networks, to allow for scaling depth
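Reversible residual layers (RevNet-style coupling, as popularized by Reformer) let inputs be recomputed from outputs during the backward pass, so activation memory stays constant as depth grows. A minimal sketch of the coupling only (class name is hypothetical; a real implementation also needs the custom backward that exploits `inverse`):

```python
import torch
from torch import nn

class ReversibleBlock(nn.Module):
    # coupling: y1 = x1 + f(x2), y2 = x2 + g(y1)
    # inputs are exactly recoverable from outputs
    def __init__(self, f, g):
        super().__init__()
        self.f, self.g = f, g

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        # run the coupling backwards to recover the inputs
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2
```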
use .max.values to be compatible with older versions of pytorch
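`tensor.max(dim)` returns a `(values, indices)` named tuple across a wide range of PyTorch versions, so taking `.values` avoids depending on newer reduction APIs. A small illustration (assuming that is the compatibility issue the message refers to):

```python
import torch

x = torch.tensor([[1., 3.],
                  [2., 0.]])
# .max(dim) gives a (values, indices) named tuple;
# .values extracts just the per-row maxima
row_max = x.max(dim=-1).values  # tensor([3., 2.])
```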
fix eos token for batched case
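Handling an eos token per batch row usually means masking out everything after the first eos in each sequence independently, rather than assuming all rows end at the same position. A hypothetical sketch of that per-row masking (the exact fix isn't visible from the message):

```python
import torch

def mask_after_eos(tokens, eos_id):
    # tokens: (batch, seq) of token ids
    is_eos = tokens == eos_id
    # cumulative count of eos tokens seen so far, inclusive
    seen = is_eos.cumsum(dim=-1)
    # keep positions before the first eos, and the first eos itself
    return (seen == 0) | ((seen == 1) & is_eos)

toks = torch.tensor([[5, 2, 9, 7],
                     [5, 9, 7, 3]])
keep = mask_after_eos(toks, eos_id=2)
# row 0 keeps up to and including eos; row 1 (no eos) keeps everything
```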