National University of Singapore
https://charles-haonan-wang.me/
Starred repositories
🚀 Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton (a minimal sketch of the core linear-attention idea follows this list)
Data science interview questions with answers (the answers are still a work in progress)
Compilation of resources for aspiring data scientists
Answers to 120 commonly asked data science interview questions.
Codebase for Aria - an Open Multimodal Native MoE
An instruction data generation system for multimodal language models.
AnchorAttention: improved attention for long-context LLM training
🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc.
[ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View
Ring attention implementation with flash attention
`dattri` is a PyTorch library for developing, benchmarking, and deploying efficient data attribution algorithms.
We introduce a novel approach to parameter generation, neural network parameter diffusion (p-diff), which uses a standard latent diffusion model to synthesize a new set of network parameters.
Lossless Training Speedup by Unbiased Dynamic Data Pruning
This repository is the PyTorch implementation of dynamicAL (NeurIPS 2022)
[CVPR2024] Efficient Dataset Distillation via Minimax Diffusion
Welcome to the Awesome Feature Learning in Deep Learning Theory Reading Group! This repository serves as a collaborative platform for scholars, enthusiasts, and anyone interested in the theory of feature learning in deep learning.
EcoAssistant: using LLM assistants more affordably and accurately
[ICCV 2023] Subclass-balancing contrastive learning for long-tailed recognition
[ICCV 2023] MADAug: When to Learn What: Model-Adaptive Data Augmentation Curriculum
[ICCV2023] Dataset Quantization
Human Preference Score v2: A Solid Benchmark for Evaluating Human Preferences of Text-to-Image Synthesis
A curated list of the latest AI breakthroughs of 2022, ordered by release date, each with a clear video explanation, a link to a more in-depth article, and code.
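For the linear-attention entry above, here is a minimal sketch of the idea such libraries build on: replacing softmax(QK^T)V with phi(Q)(phi(K)^T V), which is linear rather than quadratic in sequence length. This is an illustrative, non-causal version with the common elu(x)+1 feature map (after Katharopoulos et al.); the function name and shapes are assumptions for the sketch, not the starred repo's API.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Non-causal linear attention sketch.

    q, k, v: (batch, heads, seq_len, dim). Instead of materializing the
    (seq_len x seq_len) attention matrix, we accumulate phi(K)^T V once
    and reuse it for every query, giving O(seq_len * dim^2) cost.
    """
    q = F.elu(q) + 1.0  # positive feature map phi(q)
    k = F.elu(k) + 1.0  # positive feature map phi(k)

    # kv[b,h] = sum_n phi(k_n) v_n^T  -> shape (batch, heads, dim, dim)
    kv = torch.einsum("bhnd,bhne->bhde", k, v)

    # Normalizer per query: 1 / (phi(q_n) . sum_m phi(k_m))
    z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + eps)

    # Output: phi(q_n)^T kv, rescaled by the normalizer
    return torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)

# Tiny smoke test
q = torch.randn(1, 2, 8, 16)
k = torch.randn(1, 2, 8, 16)
v = torch.randn(1, 2, 8, 16)
print(linear_attention(q, k, v).shape)  # torch.Size([1, 2, 8, 16])
```

Production implementations like the one starred above add causal masking, chunked recurrent forms, and fused Triton kernels on top of this basic factorization.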