Tencent
Shenzhen
https://xiaoming-yu.github.io

Stars
Extends OpenRLHF to support LMM RL training for reproducing DeepSeek-R1 on multimodal tasks.
Official Repo for Open-Reasoner-Zero
The official implementation of "REPARO: Compositional 3D Assets Generation with Differentiable 3D Layout Alignment".
DeepEP: an efficient expert-parallel communication library
A high-performance LLM inference API and Chat UI that integrates DeepSeek R1's CoT reasoning traces with Anthropic Claude models.
An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & RingAttention & RFT)
Fully open reproduction of DeepSeek-R1
Code for Parametric Retrieval Augmented Generation
[Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey
This repository is the official implementation of Disentangling Writer and Character Styles for Handwriting Generation (CVPR 2023)
[ICLR2025 Spotlight] SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models
HunyuanVideo: A Systematic Framework For Large Video Generation Model
This is the official code release for our work, Denoising Vision Transformers.
Diffusion model papers, survey, and taxonomy
Author's Implementation for E-LatentLPIPS
[CoRL 2024] Open-TeleVision: Teleoperation with Immersive Active Visual Feedback
We introduce a novel approach to parameter generation, named neural network parameter diffusion (p-diff), which employs a standard latent diffusion model to synthesize a new set of parameters.
Multi-camera calibration using one or more calibration patterns
Pixel-Perfect Structure-from-Motion with Featuremetric Refinement (ICCV 2021, Best Student Paper Award)
A General NeRF Acceleration Toolbox in PyTorch.
InstantID: Zero-shot Identity-Preserving Generation in Seconds 🔥
Collection of the Real-Time Rendering, 4th Edition (RTR4) bibliography / references
ChatGLM2-6B: An Open-Source Bilingual Chat LLM
An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries