Stars
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools so that you can focus on what matters.
A natural language interface for computers
Interact with your documents using the power of GPT, 100% privately, no data leaks
Platform to experiment with the AI Software Engineer. Terminal-based. NOTE: Very different from https://gptengineer.app
The simplest, fastest repository for training/finetuning medium-sized GPTs.
LlamaIndex is a data framework for your LLM applications
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Glances, an eye on your system. A top/htop alternative for GNU/Linux, BSD, macOS, and Windows operating systems.
JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
Chat with your documents on your local device using GPT models. No data leaves your device; 100% private.
An LLM-powered knowledge curation system that researches a topic and generates a full-length report with citations.
Convert PDF to markdown + JSON quickly with high accuracy
Fast and flexible image augmentation library. Paper about the library: https://www.mdpi.com/2078-2489/11/2/125
SWE-agent takes a GitHub issue and tries to automatically fix it, using GPT-4, or your LM of choice. It can also be employed for offensive cybersecurity or competitive coding challenges. [NeurIPS 2…
Open source code for AlphaFold 2.
Ongoing research training transformer models at scale
Keras implementations of Generative Adversarial Networks.
A framework to enable multimodal models to operate a computer.
A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch
Code for the paper "Jukebox: A Generative Model for Music"
Easily migrate your codebase from one framework or language to another.
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed.
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
Access large language models from the command line
Socket.IO integration for Flask applications.
StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models
An open-source implementation of OpenAI's ChatGPT Code Interpreter