Starred repositories
A simple implementation of a deep linear Pytorch module
A Pytorch implementation of Global Self-Attention Network, a fully attentional backbone for vision tasks
Implementation of gMLP, an all-MLP replacement for Transformers, in Pytorch
Implementation of Transformer in Transformer, pixel-level attention paired with patch-level attention for image classification, in Pytorch
An All-MLP solution for Vision, from Google AI
Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch
Graph neural network message passing reframed as a Transformer with local attention
Implementation of ETSformer, a state-of-the-art time-series Transformer, in Pytorch
Implementation of Nvidia's NeuralPlexer, for end-to-end differentiable design of functional small molecules and ligand-binding proteins, in Pytorch
A simple cross attention that updates both the source and the target in one step (see the first sketch after this list)
Transformer based on a variant of attention that is linear in complexity with respect to sequence length (see the second sketch after this list)
Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto
Fast and memory-efficient exact attention
A concise but complete full-attention transformer with a set of promising experimental features from various papers
Vector (and Scalar) Quantization, in Pytorch
An implementation of local windowed attention for language modeling
Implementation of the TACL 2017 paper "Cross-Sentence N-ary Relation Extraction with Graph LSTMs" by Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih
A complete PyTorch codebase for image classification: training, prediction, TTA, model ensembling, model deployment, CNN feature extraction with SVM or random-forest classifiers, and model distillation
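The cross attention entry above describes a mechanism where a single pass updates both sequences. A minimal sketch of that idea follows; the class name, the single-head simplification, and the use of one shared similarity matrix normalized along each axis are assumptions for illustration, not the repository's exact API:

```python
import torch
from torch import nn

class BidirectionalCrossAttention(nn.Module):
    """Sketch: one (n, m) similarity matrix is softmax-normalized along
    each axis, so source and target are updated in a single step."""

    def __init__(self, dim):
        super().__init__()
        self.scale = dim ** -0.5
        self.to_qk_src = nn.Linear(dim, dim, bias=False)
        self.to_qk_tgt = nn.Linear(dim, dim, bias=False)
        self.to_v_src = nn.Linear(dim, dim, bias=False)
        self.to_v_tgt = nn.Linear(dim, dim, bias=False)

    def forward(self, src, tgt):
        # src: (batch, n, dim), tgt: (batch, m, dim)
        qk_src, qk_tgt = self.to_qk_src(src), self.to_qk_tgt(tgt)
        v_src, v_tgt = self.to_v_src(src), self.to_v_tgt(tgt)

        # shared similarity matrix, shape (batch, n, m)
        sim = torch.einsum('b i d, b j d -> b i j', qk_src, qk_tgt) * self.scale

        # src attends over tgt (normalize along j); tgt attends over src (normalize along i)
        attn_src = sim.softmax(dim=-1)
        attn_tgt = sim.softmax(dim=-2)

        src_out = torch.einsum('b i j, b j d -> b i d', attn_src, v_tgt)
        tgt_out = torch.einsum('b i j, b i d -> b j d', attn_tgt, v_src)
        return src_out, tgt_out

# usage: both sequences come back updated from one forward pass
attn = BidirectionalCrossAttention(dim=64)
src, tgt = torch.randn(1, 128, 64), torch.randn(1, 96, 64)
src_out, tgt_out = attn(src, tgt)  # shapes (1, 128, 64) and (1, 96, 64)
```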
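The linear-complexity attention entry refers to the general family of kernelized attention (e.g., Katharopoulos et al., 2020). A rough sketch under stated assumptions: non-causal, single-head, with an elu+1 feature map standing in for the softmax kernel; the function name and signature are hypothetical:

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Sketch of kernelized linear attention: applying a positive feature
    map phi to queries and keys lets the (K^T V) product be computed first,
    so cost grows linearly rather than quadratically with sequence length."""
    phi = lambda t: F.elu(t) + 1  # positive feature map in place of exp
    q, k = phi(q), phi(k)

    # (dim, dim) context formed once, independent of query length
    context = torch.einsum('b n d, b n e -> b d e', k, v)
    normalizer = torch.einsum('b n d, b d -> b n', q, k.sum(dim=1)) + eps
    out = torch.einsum('b n d, b d e -> b n e', q, context)
    return out / normalizer.unsqueeze(-1)
```

Because the context matrix is accumulated before the queries are touched, memory and compute scale with sequence length n rather than n².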