Memory-efficient fine-tuning; supports fine-tuning a 7B model within 24 GB of GPU memory (see the sketch below).
transformers lora pytorch-implementation huggingface-transformers large-language-models chinese-llama-65b llama2 chinese-llama memory-efficient-tuning peft-fine-tuning-llm llama3
Updated May 26, 2024 - Python
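As a rough illustration of what the listing describes, the following is a minimal sketch of LoRA fine-tuning with Hugging Face transformers and peft, using 4-bit quantization so a 7B model fits within roughly 24 GB of GPU memory. The base model name, target modules, and hyperparameters are assumptions for illustration, not this repository's actual configuration.

```python
# Minimal sketch: LoRA fine-tuning of a 7B causal LM within ~24 GB of GPU memory.
# Model name and hyperparameters below are illustrative assumptions, not the repo's defaults.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "meta-llama/Llama-2-7b-hf"  # assumed base model

# 4-bit quantization keeps the frozen base weights small enough for a 24 GB card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Only small low-rank adapter matrices are trained; the 7B base model stays frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

Training the low-rank adapters instead of the full weight matrices, combined with the quantized base model, is what keeps peak memory low enough for a single 24 GB GPU.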