Stars
[NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers
Code for the paper "Are Sixteen Heads Really Better than One?"
Implementation of the ICML 2020 paper "PoWER-BERT: Accelerating BERT Inference via Progressive Word-vector Elimination", a method to reduce BERT inference time.
Privacy-Preserving, Accurate and Efficient Inference for Transformers
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.
Implementation of a classification framework from the paper "Aggregated Residual Transformations for Deep Neural Networks"
OTOv1-v3 (Only Train Once; NeurIPS, ICLR, TMLR): DNN training and compression via structured pruning and operator erasing, covering CNNs, diffusion models, and LLMs
Microsoft SEAL is an easy-to-use and powerful homomorphic encryption library.
Base pretrained models and datasets in PyTorch (MNIST, SVHN, CIFAR10, CIFAR100, STL10, AlexNet, VGG16, VGG19, ResNet, Inception, SqueezeNet)
Manga & Anime Downloader for Linux, Windows & macOS
Code for training Keras ImageNet (ILSVRC2012) image classification models from scratch