Stars
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
🧑🏫 60+ Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), ga…
YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (V…
Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
OpenMMLab Detection Toolbox and Benchmark
Code and documentation to train Stanford's Alpaca models, and generate the data.
Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch and FLAX.
Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 500 universities from 70 countries including Stanford, MIT, Harvard, and Cambridge.
A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
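The "single transformer encoder" recipe begins by cutting the image into fixed-size patches that are flattened into tokens. A dependency-free sketch of that first step (illustrative only — the actual repo uses PyTorch/einops, and names here are hypothetical):

```python
def image_to_patches(img, patch):
    # img: H x W nested list of pixel values; patch: patch size P,
    # with H and W divisible by P (as ViT requires).
    # Returns a list of flattened P*P patches, row-major order.
    H, W = len(img), len(img[0])
    patches = []
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            patches.append([img[i + di][j + dj]
                            for di in range(patch)
                            for dj in range(patch)])
    return patches
```

Each flattened patch is then linearly projected to an embedding and fed to a standard transformer encoder alongside a learned class token.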
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Official inference repo for FLUX.1 models
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"
Fast and memory-efficient exact attention
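The memory saving behind that "exact attention" comes from the online-softmax recurrence: scores are consumed in a stream while a running max, normalizer, and weighted sum are maintained, so the full score matrix is never materialized. A minimal single-query sketch in plain Python (didactic only — the repo's implementation is tiled CUDA kernels):

```python
import math

def online_attention(q, K, V):
    # Streams over (key, value) pairs one at a time, keeping a running
    # max m, running normalizer l, and running weighted sum acc.
    # Produces exactly softmax(q . K^T / sqrt(d_k)) @ V.
    d_k = len(q)
    m = float("-inf")
    l = 0.0
    acc = [0.0] * len(V[0])
    for k, v in zip(K, V):
        s = sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
        m_new = max(m, s)
        # Rescale previous partial sums when the running max changes;
        # exp(-inf - m_new) == 0.0, which handles the first iteration.
        scale = math.exp(m - m_new)
        w = math.exp(s - m_new)
        l = l * scale + w
        acc = [a * scale + w * vj for a, vj in zip(acc, v)]
        m = m_new
    return [a / l for a in acc]
```

With identical keys the weights are uniform, so the output is the mean of the values — a handy sanity check for the recurrence.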
End-to-End Object Detection with Transformers
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
Semantic segmentation models with 500+ pretrained convolutional and transformer-based backbones.
PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
Pretrained ConvNets for pytorch: NASNet, ResNeXt, ResNet, InceptionV4, InceptionResnetV2, Xception, DPN, etc.
A PyTorch implementation of the Transformer model in "Attention is All You Need".
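The core operation of that paper, scaled dot-product attention softmax(QKᵀ/√d_k)V, can be sketched in dependency-free Python (a didactic sketch, not the repo's PyTorch implementation):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: lists of row vectors. For each query, score every key,
    # normalize with softmax, and take the weighted sum of values:
    # softmax(Q K^T / sqrt(d_k)) V.
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

The √d_k scaling keeps the dot products from pushing softmax into its saturated region as the key dimension grows.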
OpenMMLab Semantic Segmentation Toolbox and Benchmark.
Easy-to-use, modular, and extensible package of deep-learning-based CTR models.
Deep Learning and Reinforcement Learning Library for Scientists and Engineers