Practical course about Large Language Models.
[SIGIR'24] Official implementation of MOELoRA.
Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation"
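For orientation, a minimal sketch of the DoRA reparameterization as described in the paper (not this repo's code): the pretrained weight is split into a magnitude and a direction, the low-rank update is applied to the direction, and the result is renormalized column-wise. Shapes and values are illustrative.

```python
import torch

d_out, d_in, r = 8, 8, 2
W0 = torch.randn(d_out, d_in)                       # frozen pretrained weight
B, A = torch.zeros(d_out, r), torch.randn(r, d_in)  # LoRA factors (B starts at zero)
m = W0.norm(dim=0, keepdim=True)                    # learned magnitude, init to ||W0||_c

V = W0 + B @ A                                      # low-rank update to the direction
W = m * V / V.norm(dim=0, keepdim=True)             # DoRA-adapted weight
print(torch.allclose(W, W0))                        # True while B is still zero
```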
An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT
Code for "NOLA: Compressing LoRA using Linear Combination of Random Basis".
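Roughly, the NOLA idea is to reparameterize each LoRA factor as a learned mixture of frozen random basis matrices, so only the mixing coefficients (and the random seed) need to be stored. A minimal sketch, with illustrative shapes and basis counts:

```python
import torch
import torch.nn as nn

class NOLALinear(nn.Module):
    def __init__(self, d_in=512, d_out=512, rank=4, n_basis=64, seed=0):
        super().__init__()
        g = torch.Generator().manual_seed(seed)  # basis is recoverable from the seed
        self.register_buffer("A", torch.randn(n_basis, d_out, rank, generator=g))
        self.register_buffer("B", torch.randn(n_basis, rank, d_in, generator=g))
        self.alpha = nn.Parameter(torch.zeros(n_basis))  # trained mixing coefficients
        self.beta = nn.Parameter(torch.zeros(n_basis))

    def delta_w(self):
        A = torch.einsum("k,kor->or", self.alpha, self.A)  # mix the random A basis
        B = torch.einsum("k,kri->ri", self.beta, self.B)   # mix the random B basis
        return A @ B                                       # resulting low-rank update

x = torch.randn(2, 512)
layer = NOLALinear()
print((x @ layer.delta_w().T).shape)  # torch.Size([2, 512])
```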
[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?"
Memory-efficient fine-tuning; supports fine-tuning a 7B model within 24 GB of GPU memory (rough budget below).
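A back-of-envelope budget shows why a 7B model can fit in 24 GB. This is illustrative arithmetic, assuming a 4-bit quantized base model, ~40M LoRA parameters, and Adam:

```python
params      = 7e9                      # base model parameters
base_4bit   = params * 0.5 / 2**30     # ~3.3 GiB of quantized weights
lora_params = 40e6                     # assumed adapter size
adapters    = lora_params * 2 / 2**30  # ~0.07 GiB of bf16 LoRA weights
optimizer   = lora_params * 8 / 2**30  # ~0.30 GiB of fp32 Adam moments
print(f"≈ {base_4bit + adapters + optimizer:.1f} GiB for weights + adapters + optimizer")
# The remaining ~20 GiB covers activations, gradients, and CUDA overhead,
# which scale with batch size and sequence length.
```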
CRE-LLM: A Domain-Specific Chinese Relation Extraction Framework with Fine-tuned Large Language Model
High-quality image generation model, powered by an NVIDIA A100.
A Python library for efficient and flexible cycle-consistency training of transformer models via iterative back-translation. Memory- and compute-efficient techniques such as PEFT adapter switching (sketched below) allow models 7.5x larger to be trained on the same hardware.
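As a rough illustration of adapter switching with the Hugging Face peft library (not this repository's own code; the base model, adapter names, and config values are assumptions): one shared frozen base model carries two LoRA adapters that are toggled per translation direction.

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model

base = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # illustrative base model
config = LoraConfig(r=8, lora_alpha=16, task_type="SEQ_2_SEQ_LM")

model = get_peft_model(base, config, adapter_name="forward")  # A -> B direction
model.add_adapter("backward", config)                         # B -> A direction

model.set_adapter("forward")   # activate the forward adapter for this step
# ... forward-translation training step ...
model.set_adapter("backward")  # swap adapters without loading a second model
# ... back-translation training step ...
```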
Mistral and Mixtral (MoE) from scratch
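A minimal sketch of the Mixtral-style top-2 expert routing at the heart of such an implementation (simplified for clarity: real Mixtral uses SwiGLU experts, a load-balancing loss, and batched dispatch; all sizes here are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    def __init__(self, dim=512, n_experts=8, hidden=2048):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):                       # x: (tokens, dim)
        logits = self.gate(x)                   # (tokens, n_experts)
        weights, idx = logits.topk(2, dim=-1)   # route each token to 2 experts
        weights = F.softmax(weights, dim=-1)    # renormalize over the chosen 2
        out = torch.zeros_like(x)
        for k in range(2):                      # dense loop for readability
            for e in range(len(self.experts)):
                mask = idx[:, k] == e           # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * self.experts[e](x[mask])
        return out

moe = Top2MoE()
print(moe(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```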
Fine-tune StarCoder2-3B for SQL tasks on limited resources with LoRA, which cuts the number of trainable parameters for faster training on smaller datasets. StarCoder2 is a family of code generation models (3B, 7B, and 15B) trained on 600+ programming languages from The Stack v2 plus natural-language text such as Wikipedia, arXiv, and GitHub issues.
PEFT is a powerful tool for training very large models in low-resource environments; combined with quantization, it puts LLM fine-tuning within reach of modest hardware.
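To make the claim concrete, here is a minimal QLoRA-style sketch combining 4-bit quantization with a LoRA adapter via transformers, bitsandbytes, and peft. The model name and hyperparameters are illustrative assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store base weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",            # assumed 7B base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # typically <1% of weights are trainable
```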
Finetuning Large Language Models
In this repo I share notes on various topics in NLP and LLMs.
AI community tutorials covering LoRA/QLoRA LLM fine-tuning, training GPT-2 from scratch, generative model architectures, content safety and control, model distillation, DreamBooth, transfer learning, and more, with real projects for practice.
Evaluation of the Kanarya and Trendyol models, with and without fine-tuning, on a Turkish tweet hate-speech detection dataset.
Fine-tuning Llama 3 8B to generate JSON for arithmetic questions, then processing the output to perform the calculations.
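The post-processing half might look like the sketch below; the JSON schema is an assumption for illustration, not the repo's actual format:

```python
import json
import operator

OPS = {"add": operator.add, "sub": operator.sub,
       "mul": operator.mul, "div": operator.truediv}

def evaluate(model_output: str):
    """Parse the model's JSON answer and execute the arithmetic."""
    spec = json.loads(model_output)              # e.g. '{"op": "mul", "a": 6, "b": 7}'
    return OPS[spec["op"]](spec["a"], spec["b"])

print(evaluate('{"op": "mul", "a": 6, "b": 7}'))  # 42
```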
Practical projects using LLMs, VLMs, and diffusion models.