- South Korea
- https://www.linkedin.com/in/kyeongpil/
Stars
A course on aligning smol models.
APPL: A Prompt Programming Language. Seamlessly integrate LLMs with programs.
SGLang is a fast serving framework for large language models and vision language models.
🦛 CHONK your texts with Chonkie ✨ - The no-nonsense RAG chunking library
[PGAI@CIKM 2023] PyTorch Implementation of LlamaRec: Two-Stage Recommendation using Large Language Models for Ranking
Crawls and analyzes the (2000~2023) charts of "melon", a Korean music streaming site.
Research Code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL"
A multi-domain reasoning benchmark for Korean language models
[NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models
A native PyTorch Library for large model training
Top2Vec learns jointly embedded topic, document and word vectors.
MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents [EMNLP 2024]
Efficient Triton Kernels for LLM Training
📰 Newspaper4k, a fork of the beloved Newspaper3k. Extraction of articles, titles, and metadata from news websites.
The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery 🧑‍🔬
Code for "FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models (ACL 2024)"
brcps12 / transformers
Forked from huggingface/transformers. 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Official repository for EXAONE built by LG AI Research
🧬 RegMix: Data Mixture as Regression for Language Model Pre-training
Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs"
Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verified research papers.
Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers
Retrieval and Retrieval-augmented LLMs
Utilities intended for use with Llama models.