Insights: huggingface/peft
Overview
- 4 Merged pull requests
- 4 Open pull requests
- 0 Closed issues
- 3 New issues
4 Pull requests merged by 1 person
- FIX: Failing single GPU tests related to hotswapping (#2385, merged Feb 19, 2025)
- SEC Bump transformers version used in examples (#2374, merged Feb 19, 2025)
- CI Skip audio test on single GPU CI (#2380, merged Feb 18, 2025)
- FIX: Avoid caching in X-LoRA generate (#2384, merged Feb 18, 2025)
4 Pull requests opened by 2 people
- ENH Allow rank/alpha keys to be "fully qualified" (#2382, opened Feb 17, 2025)
- orthogonal lora layer init (#2389, opened Feb 20, 2025)
- FIX Model with nested all-linear target modules (#2391, opened Feb 20, 2025)
- ENH Make hotswap error on compile optional (#2393, opened Feb 21, 2025)
3 Issues opened by 2 people
- Bug: Using 2 LoRA configs with `target_modules='all-linear'` leads to nested LoRA layers (#2390, opened Feb 20, 2025)
- ValueError: Target module Qwen2_5_VisionTransformerPretrainedModel is not supported. (#2388, opened Feb 19, 2025)
- Bug when deleting adapters of a model with modules_to_save (#2381, opened Feb 17, 2025)
7 Unresolved conversations
Sometimes conversations continue on older items that aren't yet closed. Here is a list of all the issues and pull requests with unresolved conversations.
- Standalone Custom Tokens Tuner and integrated into LoRA (#2376, commented on Feb 21, 2025 • 39 new comments)
- Implementation of adapter? (#244, commented on Feb 15, 2025 • 0 new comments)
- Comparison of Different Fine-Tuning Techniques for Conversational AI (#2310, commented on Feb 17, 2025 • 0 new comments)
- Prefix Tuning dimension error with Qwen2 and missing vocab_size for PaliGemma2 (#2315, commented on Feb 17, 2025 • 0 new comments)
- prompt_tuning_peft tutorial raises cache layer error (#2379, commented on Feb 19, 2025 • 0 new comments)
- Peft version upgrade from 0.4.0 to 0.14.0 results in "No module named 'peft.utils.config'" error (#2339, commented on Feb 21, 2025 • 0 new comments)
- [FEAT] Add support for optimum-quanto (#2000, commented on Feb 20, 2025 • 0 new comments)