Shanghai Jiao Tong University - China
Starred repositories
- SGLang is a fast serving framework for large language models and vision language models.
- GenRM-CoT: Data release for verification rationales.
- An easy-to-use, scalable, and high-performance RLHF framework (70B+ PPO full tuning & iterative DPO & LoRA & RingAttention & RFT).
- InsTag: A tool for data analysis in LLM supervised fine-tuning.
- T2Ranking: A large-scale Chinese benchmark for passage ranking.
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
- A collection of notebooks/recipes showcasing fun and effective ways of using Claude.
- Code for the paper "RankingGPT: Empowering Large Language Models in Text Ranking with Progressive Enhancement".
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends.
- Minimalistic large language model 3D-parallelism training.
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks.
- Efficient Triton kernels for LLM training.
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning.
- Generate textbook-quality synthetic LLM pretraining data.
- DSIR: A large-scale data selection framework for language model training.
- General technology for enabling AI capabilities with LLMs and MLLMs.
- An open-source trusted cloud-native registry project that stores, signs, and scans content.
- Finetune Llama 3.3, DeepSeek-R1, and reasoning LLMs 2x faster with 70% less memory.
- Build resilient language agents as graphs.
- Easy, fast, and cheap pretraining, fine-tuning, and serving for everyone.