A paper list of parameter-efficient fine-tuning (PEFT) methods.
- Visual Tuning | [arxiv'23] | [paper]
- [LoRA] LoRA: Low-Rank Adaptation of Large Language Models | [ICLR'22] | [paper] [code]
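  A minimal sketch of the LoRA reparameterization (class name and hyperparameters are illustrative, not the reference implementation): the pretrained weight stays frozen and the update is the low-rank product BA, with B zero-initialized so training starts from the pretrained model.

  ```python
  import torch
  import torch.nn as nn

  class LoRALinear(nn.Module):
      """Sketch: h = W0 x + (alpha / r) * B A x, with W0 frozen."""
      def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
          super().__init__()
          self.base = base
          for p in self.base.parameters():
              p.requires_grad_(False)  # freeze the pretrained layer
          self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # Gaussian init
          self.B = nn.Parameter(torch.zeros(base.out_features, r))        # zero init => Delta W = 0 at start
          self.scale = alpha / r

      def forward(self, x):
          return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
  ```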
- [BitFit/Bias] BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models | [ACL'22] | [paper] [code]
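  BitFit's recipe is small enough to state in code; a sketch (matching on parameter names ending in `bias` is an assumption about the model's naming):

  ```python
  import torch

  def apply_bitfit(model: torch.nn.Module):
      """Freeze everything except bias terms; return the trainable parameters."""
      for name, param in model.named_parameters():
          param.requires_grad = name.endswith("bias")
      return [p for p in model.parameters() if p.requires_grad]

  # e.g. optimizer = torch.optim.AdamW(apply_bitfit(model), lr=1e-4)
  ```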
- [CoOp] Learning to Prompt for Vision-Language Models | [IJCV'22] | [paper] [code]
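  A rough sketch of CoOp's learnable context (`PromptLearner` is a hypothetical name; the real method also handles SOS/EOS tokens and feeds the result through CLIP's frozen text encoder): M shared context vectors are learned while the class-name embeddings stay fixed.

  ```python
  import torch
  import torch.nn as nn

  class PromptLearner(nn.Module):
      def __init__(self, n_ctx: int, dim: int, class_embeds: torch.Tensor):
          super().__init__()
          # class_embeds: (n_classes, n_name_tokens, dim), from the frozen token embedding
          self.ctx = nn.Parameter(torch.empty(n_ctx, dim).normal_(std=0.02))
          self.register_buffer("class_embeds", class_embeds)

      def forward(self):
          n_cls = self.class_embeds.shape[0]
          ctx = self.ctx.unsqueeze(0).expand(n_cls, -1, -1)  # share "[V]_1 ... [V]_M" across classes
          return torch.cat([ctx, self.class_embeds], dim=1)  # prompts for the frozen text encoder
  ```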
- [CoCoOp] Conditional Prompt Learning for Vision-Language Models | [CVPR'22] | [paper] [code]
- [AdaptFormer] AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition | [NeurIPS'22] | [paper] [code]
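  A simplified sketch of AdaptFormer's parallel adapter (the original attaches the branch around the LayerNorm'ed MLP inside each transformer block; the zero init of the up-projection is an assumption that makes the branch start as a no-op):

  ```python
  import torch
  import torch.nn as nn

  class AdaptMLP(nn.Module):
      """Sketch: frozen MLP block plus a trainable parallel bottleneck branch."""
      def __init__(self, mlp: nn.Module, dim: int, bottleneck: int = 64, s: float = 0.1):
          super().__init__()
          self.mlp = mlp
          for p in self.mlp.parameters():
              p.requires_grad_(False)        # pretrained MLP stays frozen
          self.down = nn.Linear(dim, bottleneck)
          self.up = nn.Linear(bottleneck, dim)
          nn.init.zeros_(self.up.weight)     # branch contributes nothing at init
          nn.init.zeros_(self.up.bias)
          self.s = s                         # scaling factor for the adapter branch

      def forward(self, x):
          return self.mlp(x) + self.s * self.up(torch.relu(self.down(x)))
  ```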
- [SSF] Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning | [NeurIPS'22] | [paper] [code]
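  SSF's core operation fits in a few lines; a sketch:

  ```python
  import torch
  import torch.nn as nn

  class SSF(nn.Module):
      """Sketch: per-channel scale and shift inserted after a frozen operation;
      gamma=1 and beta=0 at init, so tuning starts from the identity."""
      def __init__(self, dim: int):
          super().__init__()
          self.gamma = nn.Parameter(torch.ones(dim))
          self.beta = nn.Parameter(torch.zeros(dim))

      def forward(self, x):  # x: (..., dim)
          return x * self.gamma + self.beta
  ```

  Because the transform is linear, it can be folded into the preceding layer's weights at inference, which is the paper's re-parameterization argument for zero extra inference cost.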
- [FacT] FacT: Factor-Tuning for Lightweight Adaptation on Vision Transformer | [AAAI'23] | [paper]
- [RepAdapter] Towards Efficient Visual Adaption via Structural Re-parameterization | [arxiv'23] | [paper] [code]
- [AdaLoRA] AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning | [ICLR'23] | [paper] [code]
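  A sketch of AdaLoRA's SVD-style parameterization (the budget scheduler and importance scoring are omitted; names and the rank r are illustrative):

  ```python
  import torch
  import torch.nn as nn

  class SVDDelta(nn.Module):
      """Sketch: Delta W = P diag(lam) Q, mimicking an SVD; AdaLoRA prunes rank
      by zeroing entries of lam according to an importance score."""
      def __init__(self, out_f: int, in_f: int, r: int = 12):
          super().__init__()
          self.P = nn.Parameter(torch.randn(out_f, r) * 0.01)
          self.lam = nn.Parameter(torch.zeros(r))            # stand-ins for singular values
          self.Q = nn.Parameter(torch.randn(r, in_f) * 0.01)

      def delta(self):
          return (self.P * self.lam) @ self.Q                # lam scales the columns of P

      def orth_penalty(self):
          """Regularizer pushing P and Q toward orthogonality, as in the paper."""
          I = torch.eye(self.P.shape[1], device=self.P.device)
          return ((self.P.T @ self.P - I) ** 2).sum() + ((self.Q @ self.Q.T - I) ** 2).sum()
  ```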
- [SVDiff] SVDiff: Compact Parameter Space for Diffusion Fine-Tuning | [ICCV'23] | [paper] [code]
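  A sketch of SVDiff's spectral-shift idea under my reading of the paper: the singular vectors of each pretrained weight are frozen and only a shift on the singular values is trained.

  ```python
  import torch
  import torch.nn as nn

  class SVDiffWeight(nn.Module):
      """Sketch: train only a shift delta on the singular values of W."""
      def __init__(self, W: torch.Tensor):
          super().__init__()
          U, S, Vh = torch.linalg.svd(W, full_matrices=False)
          self.register_buffer("U", U)
          self.register_buffer("S", S)
          self.register_buffer("Vh", Vh)
          self.delta = nn.Parameter(torch.zeros_like(S))     # the only trainable tensor

      def weight(self):
          # ReLU keeps the updated spectrum non-negative
          return self.U @ torch.diag(torch.relu(self.S + self.delta)) @ self.Vh
  ```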
- [GatedPromptTuning] Improving Visual Prompt Tuning for Self-supervised Vision Transformers | [ICML'23] | [paper] [code]
- [PVP] PVP: Pre-trained Visual Parameter-Efficient Tuning | [arxiv'23] | [paper]
- [E2VPT] E2VPT: An Effective and Efficient Approach for Visual Prompt Tuning | [ICCV'23] | [paper] [code]
- [DVPT] Dynamic Visual Prompt Tuning for Parameter Efficient Transfer Learning | [PRCV'23] | [paper]
- [ARC] Efficient Adaptation of Large Vision Transformer via Adapter Re-Composing | [NeurIPS'23] | [paper] [code]
- [Flora] Flora: Low-Rank Adapters Are Secretly Gradient Compressors | [ICML'24] | [paper] [code]
- [DoRA] DoRA: Weight-Decomposed Low-Rank Adaptation | [ICML'24] | [paper] [code]
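  A sketch of DoRA's weight decomposition (class name and rank are illustrative): LoRA is applied to the direction of the weight, while a per-column magnitude vector m is trained separately.

  ```python
  import torch
  import torch.nn as nn

  class DoRALinear(nn.Module):
      """Sketch: W' = m * (W0 + B A) / ||W0 + B A||_c, with column-wise norms."""
      def __init__(self, W0: torch.Tensor, r: int = 8):
          super().__init__()
          self.register_buffer("W0", W0)                               # frozen, (out, in)
          self.A = nn.Parameter(torch.randn(r, W0.shape[1]) * 0.01)
          self.B = nn.Parameter(torch.zeros(W0.shape[0], r))
          self.m = nn.Parameter(W0.norm(dim=0, keepdim=True).clone())  # init to ||W0||_c

      def forward(self, x):
          V = self.W0 + self.B @ self.A                  # updated direction
          W = self.m * V / V.norm(dim=0, keepdim=True)   # column-normalize, then rescale
          return x @ W.T
  ```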
- [LoRA+] LoRA+: Efficient Low Rank Adaptation of Large Models | [ICML'24] | [paper]
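  LoRA+'s key recipe reduces to optimizer param groups; a sketch (matching on `lora_A`/`lora_B` assumes PEFT-style parameter names, and the ratio is one value the paper suggests):

  ```python
  import torch

  def lora_plus_groups(model: torch.nn.Module, lr: float = 2e-4, ratio: float = 16.0):
      """Sketch: give the B matrices a learning rate `ratio` times the A matrices'."""
      a = [p for n, p in model.named_parameters() if "lora_A" in n and p.requires_grad]
      b = [p for n, p in model.named_parameters() if "lora_B" in n and p.requires_grad]
      return [{"params": a, "lr": lr}, {"params": b, "lr": lr * ratio}]

  # e.g. optimizer = torch.optim.AdamW(lora_plus_groups(model))
  ```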
- [PiSSA] PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models | [NeurIPS'24] | [paper] [code]
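  A sketch of PiSSA's initialization (function name and rank are illustrative): the adapter starts from the top-r singular components of the pretrained weight, and the residual stays frozen.

  ```python
  import torch

  def pissa_init(W: torch.Tensor, r: int = 16):
      """Sketch: B A reconstructs the principal part of W; W_res is frozen."""
      U, S, Vh = torch.linalg.svd(W, full_matrices=False)
      s = S[:r].sqrt()
      B = U[:, :r] * s              # (out, r), trainable
      A = s.unsqueeze(1) * Vh[:r]   # (r, in), trainable
      W_res = W - B @ A             # frozen residual (minor components)
      return W_res, A, B
  ```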
- [TriLoRA] TriLoRA: Integrating SVD for Advanced Style Personalization in Text-to-Image Generation | [arxiv'24] | [paper]
- [Spectral Adapter] Spectral Adapter: Fine-Tuning in Spectral Space | [NeurIPS'24] | [paper] [code]
- [FLoRA] FLoRA: Low-Rank Core Space for N-dimension | [arxiv'24] | [paper] [code]
- [MiLoRA] MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning | [arxiv'24] | [paper]
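  MiLoRA is the mirror of the PiSSA split above; a sketch under that reading: adapt the minor singular components and freeze the principal part.

  ```python
  import torch

  def milora_init(W: torch.Tensor, r: int = 16):
      """Sketch: B A spans the bottom-r singular components; the rest is frozen."""
      U, S, Vh = torch.linalg.svd(W, full_matrices=False)
      s = S[-r:].sqrt()
      B = U[:, -r:] * s
      A = s.unsqueeze(1) * Vh[-r:]
      W_principal = W - B @ A       # frozen principal part
      return W_principal, A, B
  ```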
- [MoSLoRA] Mixture-of-Subspaces in Low-Rank Adaptation | [arxiv'24] | [paper] [code]
- [LoRA-GA] LoRA-GA: Low-Rank Adaptation with Gradient Approximation | [NeurIPS'24] | [paper] [code]
- [LoRA-Pro] LoRA-Pro: Are Low-Rank Adapters Properly Optimized? | [arxiv'24] | [paper] [code]
- [LoRA-Dash] Unleashing the Power of Task-Specific Directions in Parameter Efficient Fine-tuning | [arxiv'24] | [paper]