Mastering Applied AI, One Concept at a Time
End-to-end generative AI industry projects built on LLMs, with deployment (Awesome LLM Projects)
Auto Data is a library designed for quick and effortless creation of datasets tailored for fine-tuning Large Language Models (LLMs).
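The entry above concerns building fine-tuning datasets. As a minimal illustration (not Auto Data's actual API), the snippet below writes a few instruction/response pairs in the widely used Alpaca-style JSONL format that most fine-tuning tooling accepts; the example records are placeholders.

```python
# Generic illustration of an instruction-tuning dataset in Alpaca-style JSONL.
# Not Auto Data's API; the records below are invented placeholders.
import json

examples = [
    {
        "instruction": "Summarize the following paragraph in one sentence.",
        "input": "Large language models are trained on vast text corpora...",
        "output": "LLMs learn language patterns from large text corpora.",
    },
]

with open("finetune_dataset.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        # One JSON object per line is the format most SFT loaders expect.
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```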
🚀 Easy, open-source LLM finetuning with one-line commands, seamless cloud integration, and popular optimization frameworks. ✨
A Gradio web UI for large language models. Supports LoRA/QLoRA fine-tuning, RAG (retrieval-augmented generation), and chat
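Several entries in this list revolve around LoRA/QLoRA fine-tuning. Below is a minimal sketch of attaching LoRA adapters to a 4-bit-loaded causal LM, assuming the Hugging Face transformers, peft, and bitsandbytes libraries; the checkpoint name and hyperparameters are illustrative, not taken from any repository listed here.

```python
# A minimal LoRA/QLoRA sketch (assumed checkpoint and hyperparameters).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "mistralai/Mistral-7B-v0.1"  # illustrative example checkpoint

# QLoRA: load the frozen base model in 4-bit to cut memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

# LoRA: train only small low-rank adapter matrices on top of the frozen weights.
lora_config = LoraConfig(
    r=16,                                 # rank of the adapter matrices
    lora_alpha=32,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```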
Fine-tune Mistral 7B to generate fashion style suggestions
[NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning
The small distributed language model toolkit; fine-tune state-of-the-art LLMs anywhere, rapidly
Medical Language Model fine-tuned using pretraining, instruction tuning, and Direct Preference Optimization (DPO). Progresses from general medical knowledge to specific instruction following, with experiments in preference alignment for improved medical text generation and understanding.
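As a rough outline of the DPO stage described above, the sketch below assumes the Hugging Face trl library (DPOTrainer/DPOConfig); argument names differ between trl versions, and the checkpoint and preference pairs are placeholders, not the project's data.

```python
# Minimal DPO outline, assuming Hugging Face trl; treat argument names as
# version-dependent and the checkpoint/data as hypothetical placeholders.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "your-instruction-tuned-medical-model"  # hypothetical checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Preference data: each row pairs a preferred and a rejected answer to a prompt.
train_dataset = Dataset.from_list([
    {
        "prompt": "What are common symptoms of anemia?",
        "chosen": "Common symptoms include fatigue, pallor, and shortness of breath.",
        "rejected": "Anemia has no noticeable symptoms.",
    },
])

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", beta=0.1),  # beta scales the implicit KL penalty
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```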
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
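GaLore's key idea is to keep optimizer state in a low-rank subspace of each weight matrix's gradient, which is where the memory saving comes from. The snippet below is a conceptual illustration of that projection step in plain PyTorch, not the official galore_torch API, and it omits bias correction and the periodic projector refresh.

```python
# Conceptual sketch of GaLore's core idea: Adam-style moments live in a
# rank-r subspace of the gradient instead of at full size.
import torch

def galore_step(weight, grad, proj, exp_avg, exp_avg_sq, lr=1e-3,
                betas=(0.9, 0.999), eps=1e-8):
    """One Adam-style update whose moments are stored in the compact space."""
    # Project the full gradient (m x n) down to the compact space (r x n).
    low_rank_grad = proj.T @ grad
    # Moment updates on the small matrix only (this is what saves memory).
    exp_avg.mul_(betas[0]).add_(low_rank_grad, alpha=1 - betas[0])
    exp_avg_sq.mul_(betas[1]).addcmul_(low_rank_grad, low_rank_grad,
                                       value=1 - betas[1])
    update = exp_avg / (exp_avg_sq.sqrt() + eps)
    # Project the update back to full size and apply it to the weight.
    weight.data.add_(proj @ update, alpha=-lr)

m, n, r = 1024, 1024, 8
weight = torch.nn.Parameter(torch.randn(m, n))
grad = torch.randn(m, n)  # stand-in for a real gradient
# Projector from the top-r left singular vectors of the gradient;
# the actual method refreshes this only every few hundred steps.
proj = torch.linalg.svd(grad, full_matrices=False).U[:, :r]
exp_avg = torch.zeros(r, n)
exp_avg_sq = torch.zeros(r, n)
galore_step(weight, grad, proj, exp_avg, exp_avg_sq)
```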
On Memorization of Large Language Models in Logical Reasoning
SEIKO is a novel reinforcement learning method to efficiently fine-tune diffusion models in an online setting. It outperforms all baselines (PPO, classifier-based guidance, direct reward backpropagation) for fine-tuning Stable Diffusion.
An open-source framework designed to adapt pre-trained large language models (LLMs), such as Llama, Mistral, and Mixtral, to a wide array of domains and languages.
[EMNLP 2024] Quantize LLMs to extremely low bit-widths and fine-tune the quantized models
Finetuning Google's Gemma Model for Translating Natural Language into SQL
Qwen-1.5-1.8B sentiment analysis with prompt optimization and QLoRA fine-tuning
Code for fine-tuning the Llama 2 LLM on a custom text dataset to produce film-character-styled responses
A curated list of Parameter Efficient Fine-tuning papers with a TL;DR
Code Wizard is a coding companion / code-generation tool powered by CodeLLama-v2-34B that automatically generates and enhances code based on best practices found in your GitHub repository.
Finetuning LLMs + Private Data (Video 1/10) Basic