The code and models for the paper: Switti: Designing Scale-Wise Transformers for Text-to-Image Synthesis
Training-free Regional Prompting for Diffusion Transformers 🔥
A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and fully reproducible.
Clean PyTorch implementations of imitation and reward learning algorithms
The nnsight package enables interpreting and manipulating the internals of deep learning models.
Must-read Papers on Knowledge Editing for Large Language Models.
Code for the NeurIPS 2023 accepted paper: Counterfactual Conservative Q-Learning for Offline Multi-agent Reinforcement Learning.
V-Express aims to generate a talking-head video under the control of a reference image, an audio clip, and a sequence of V-Kps images.
Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
A PyTorch native library for large model training
ELLA: Equip Diffusion Models with LLM for Enhanced Semantic Alignment
Strange and odd Python snippets, explained
Code for the paper "Video-Based Human Pose Regression via Decoupled Space-Time Aggregation".
Official PyTorch implementation of the paper: Flow Matching in Latent Space
Scenic: A JAX Library for Computer Vision Research and Beyond
The fastest and easiest LLM security guardrails for CX AI agents and applications.
Ring attention implementation with flash attention
Python package for rematerialization-aware gradient checkpointing
Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers
StreamDiffusion: A Pipeline-Level Solution for Real-Time Interactive Generation
A Production-ready Reinforcement Learning AI Agent Library brought by the Applied Reinforcement Learning team at Meta.
Internal Docker Image used for Higgsfield.AI & invoker.