- The University of Hong Kong
- Pokfulam, Hong Kong, PRC
- (UTC +08:00)
- https://www.zhihu.com/people/wang-jia-hao-53-3
Starred repositories
🚀 Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton
Code for "Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion"
Fully open reproduction of DeepSeek-R1
[ICLR 2025] Reconstructive Visual Instruction Tuning
Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis
Model Compression Toolbox for Large Language Models and Diffusion Models
Code for the NeurIPS 2024 paper "QuaRot": end-to-end 4-bit inference of large language models
Code repo for the paper "SpinQuant: LLM quantization with learned rotations"
SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer
CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient
Janus-Series: Unified Multimodal Understanding and Generation Models
Region-Aware Text-to-Image Generation via Hard Binding and Soft Refinement 🔥
[ICLR 2025 Spotlight] SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models
A suite of image and video neural tokenizers
[NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training
[ICLR 2025 Spotlight 🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters
A collection of papers on autoregressive models in vision
The official implementation for "MonoFormer: One Transformer for Both Diffusion and Autoregression"
Official PyTorch Implementation of Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think (ICLR 2025)
An algorithm for static activation quantization of LLMs
Lumina-T2X is a unified framework for Text to Any Modality Generation
[ECCV 2024] Efficient Diffusion Transformer with Step-wise Dynamic Attention Mediators