🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
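As a quick illustration of the library's high-level API, a minimal sketch using the `pipeline` helper; the default checkpoint is chosen by the library and downloaded on first use:

```python
from transformers import pipeline

# Create a sentiment-analysis pipeline; a default model is fetched on first call.
classifier = pipeline("sentiment-analysis")

# Run inference on a single sentence.
print(classifier("Transformers makes state-of-the-art NLP easy to use."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```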
YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
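A minimal inference sketch via `torch.hub`, following the pattern in the Ultralytics README; the image path below is a placeholder for any local file, URL, PIL image, or numpy array:

```python
import torch

# Load a pretrained YOLOv5s checkpoint from the Ultralytics hub.
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

# Run detection; "path/to/image.jpg" is a stand-in path.
results = model("path/to/image.jpg")
results.print()                        # summary of detections
detections = results.pandas().xyxy[0]  # boxes as a pandas DataFrame
```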
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
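A minimal construction sketch with the `vit_pytorch` package, mirroring the hyperparameters shown in that repo's README; the values here are illustrative:

```python
import torch
from vit_pytorch import ViT

# Instantiate a Vision Transformer; hyperparameters are illustrative.
model = ViT(
    image_size=256,
    patch_size=32,
    num_classes=1000,
    dim=1024,
    depth=6,
    heads=16,
    mlp_dim=2048,
)

img = torch.randn(1, 3, 256, 256)  # dummy batch of one RGB image
logits = model(img)                # shape: (1, 1000)
```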
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
ChatRWKV is like ChatGPT, but powered by the RWKV (100% RNN) language model, and open source.
A PyTorch implementation of the Transformer model in "Attention Is All You Need".
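Independent of this particular codebase, the core operation the paper introduces is scaled dot-product attention; a minimal PyTorch sketch of that formula (not this repo's API):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

q = k = v = torch.randn(2, 8, 16, 64)        # (batch, heads, seq_len, d_k)
out = scaled_dot_product_attention(q, k, v)  # same shape as v
```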
Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages
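A minimal Stanza sketch covering tokenization, tagging, and NER; the model download is a one-time step:

```python
import stanza

stanza.download("en")        # one-time English model download
nlp = stanza.Pipeline("en")  # tokenize, POS, lemma, depparse, NER by default

doc = nlp("Barack Obama was born in Hawaii.")
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos, word.head)  # token, POS tag, head index
for ent in doc.ents:
    print(ent.text, ent.type)                   # named entities
```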
Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.
Evolutionary Scale Modeling (esm): Pretrained language models for proteins
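A minimal embedding-extraction sketch following the pattern in the esm README; the checkpoint name and layer index track one published ESM-2 model and may differ across releases:

```python
import torch
import esm

# Load a pretrained protein language model (ESM-2, 650M parameters).
model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()

data = [("protein1", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")]
labels, strs, tokens = batch_converter(data)

with torch.no_grad():
    out = model(tokens, repr_layers=[33])  # layer 33 = final layer of this model
embeddings = out["representations"][33]    # per-residue embeddings
```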
PyTorch implementation of the Graph Attention Network model by Veličković et al. (2017, https://arxiv.org/abs/1710.10903)
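To make the mechanism concrete, a minimal single-head graph attention layer following the paper's formulation; this is an illustrative sketch, not code from that repository:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single-head GAT layer: e_ij = LeakyReLU(a^T [W h_i || W h_j])."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.W = nn.Linear(in_features, out_features, bias=False)
        self.a = nn.Linear(2 * out_features, 1, bias=False)

    def forward(self, h, adj):
        # h: (N, in_features); adj: (N, N) binary adjacency with self-loops.
        Wh = self.W(h)  # (N, F')
        N = Wh.size(0)
        # Pairwise concatenation [W h_i || W h_j] for all node pairs (i, j).
        e = self.a(torch.cat([Wh.unsqueeze(1).expand(N, N, -1),
                              Wh.unsqueeze(0).expand(N, N, -1)], dim=-1)).squeeze(-1)
        e = F.leaky_relu(e, negative_slope=0.2)
        e = e.masked_fill(adj == 0, float("-inf"))  # attend only over neighbors
        alpha = torch.softmax(e, dim=-1)            # attention coefficients
        return alpha @ Wh                           # aggregated node features
```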
Trainable, memory-efficient, and GPU-friendly PyTorch reproduction of AlphaFold 2
Official PyTorch implementation of SegFormer
Official codebase for Decision Transformer: Reinforcement Learning via Sequence Modeling.
This library provides common speech features for ASR including MFCCs and filterbank energies.
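A minimal sketch of extracting MFCCs and log filterbank energies with this library; the WAV path is a placeholder:

```python
import scipy.io.wavfile as wav
from python_speech_features import mfcc, logfbank

rate, signal = wav.read("speech.wav")           # placeholder path
mfcc_feat = mfcc(signal, samplerate=rate)       # (num_frames, 13) by default
fbank_feat = logfbank(signal, samplerate=rate)  # (num_frames, 26) by default
print(mfcc_feat.shape, fbank_feat.shape)
```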
Graphormer is a general-purpose deep learning backbone for molecular modeling.
Chinese speech recognition; Mandarin Automatic Speech Recognition.
PyTorch implementation of image classification models for CIFAR-10/CIFAR-100/MNIST/FashionMNIST/Kuzushiji-MNIST/ImageNet
[CVPR 2022 Oral, Best Student Paper] EPro-PnP: Generalized End-to-End Probabilistic Perspective-n-Points for Monocular Object Pose Estimation
Implementations of recent research prototypes/demonstrations using MONAI.
Strategies for Pre-training Graph Neural Networks
TensorFlow implementations of Graph Neural Networks
Standardized data set for machine learning of protein structure
Code for a series of work in LiDAR perception, including SST (CVPR 22), FSD (NeurIPS 22), FSD++ (TPAMI 23), FSDv2, and CTRL (ICCV 23, oral).
Deep InfoMax (DIM), or "Learning Deep Representations by Mutual Information Estimation and Maximization"
This repository collects papers accepted at top computer vision conferences, making it easy to search for related work.
Tasks Assessing Protein Embeddings (TAPE), a set of five biologically relevant semi-supervised learning tasks spread across different domains of protein biology.
Making Protein Design accessible to all via Google Colab!