Quantized attention that achieves speedups of 2.1-3.1x and 2.7-5.1x over FlashAttention2 and xformers, respectively, without losing end-to-end metrics across various models.
Code for the paper "Planning with Diffusion for Flexible Behavior Synthesis"
Truncated Diffusion Model for Real-Time End-to-End Autonomous Driving
Samples for CUDA developers that demonstrate features in the CUDA Toolkit.
上海交通大学 LaTeX 论文模板 | Shanghai Jiao Tong University LaTeX Thesis Template
A generative world for general-purpose robotics & embodied AI learning.
Tensors and Dynamic neural networks in Python with strong GPU acceleration
A method to increase the speed and lower the memory footprint of existing vision transformers.
PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis
PyTorch implementation for "Parallel Sampling of Diffusion Models", NeurIPS 2023 Spotlight
Speed up Stable Diffusion with this one simple trick!
This project aims to reproduce Sora (OpenAI's text-to-video model); we hope the open-source community will contribute to it.
🤗 LeRobot: Making AI for Robotics more accessible with end-to-end learning
Octo is a transformer-based robot policy trained on a diverse mix of 800k robot trajectories.
[RSS 2023] Diffusion Policy Visuomotor Policy Learning via Action Diffusion
RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation
"Hung-yi Lee Deep Learning Tutorial" (recommended by Prof. Hung-yi Lee 👍, the "Apple Book" 🍎); PDF download: https://github.com/datawhalechina/leedl-tutorial/releases
EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022]
A PyTorch implementation of the paper "All are Worth Words: A ViT Backbone for Diffusion Models".
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"
All course resources for UC Berkeley's CS61C: Great Ideas in Computer Architecture.
Assignments for Stanford's CS231n computer vision course.