A curated list of papers and resources about LoRA of Large Language Models based on our survey paper: A Survey on LoRA of Large Language Models.
This repo will be continuously updated. Don't forget to star it and stay tuned!
Please cite the paper in Citations if you find the resource helpful for your research. Thanks!
Low-Rank Adaptation (LoRA), which updates the dense neural network layers with pluggable low-rank matrices, is one of the best-performing parameter-efficient fine-tuning paradigms. It also offers significant advantages in cross-task generalization and privacy preservation. As a result, LoRA has attracted much attention recently, and the related literature has grown exponentially, making a comprehensive overview of current progress necessary. This survey categorizes and reviews that progress from the perspectives of (1) downstream adaptation improving variants that improve LoRA's performance on downstream tasks; (2) cross-task generalization methods that mix multiple LoRA plugins to achieve cross-task generalization; (3) efficiency-improving methods that boost the computational efficiency of LoRA; (4) data privacy-preserving methods that use LoRA in federated learning; and (5) applications. In addition, the survey discusses future directions in this field.
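For readers new to the method, below is a minimal, illustrative PyTorch sketch of the idea described above: the pre-trained weight stays frozen while a pluggable low-rank update B·A (scaled by alpha/r) is trained. The class and argument names (`LoRALinear`, `r`, `alpha`) are our own shorthand for illustration, not taken from any particular implementation in the papers listed below.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen pre-trained linear layer with a pluggable low-rank update: y = W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        # A starts with small random values and B with zeros, so the adapter is a
        # no-op at initialization and is the only part that receives gradients.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


# Wrap an existing dense layer; only lora_A and lora_B are trained.
layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(4, 768))
print(out.shape)  # torch.Size([4, 768])
```

Because B is zero-initialized, fine-tuning starts exactly from the pre-trained model, and after training the product B·A can be merged back into the frozen weight, so the adapter adds no inference latency; this pluggability is what the cross-task generalization and serving works collected below build on.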
- A Survey on LoRA of Large Language Models
- LoRA of LLMs
- Contents
- Low-Rank Adaptation
- Downstream Adaptation Improving
- Cross-task Generalization
- Efficiency Improving
- LoRA for Federated Learning
- Applications of LoRA
- Contribution
- Citations
- LoRA: Low-Rank Adaptation of Large Language Models.
ICLR
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen [PDF] [Code], 2022
-
A Kernel-Based View of Language Model Fine-Tuning.
ICML
Malladi S., Wettig A., Yu D., Chen D., Arora S. [PDF] [Code], 2023 -
The Impact of LoRA on the Emergence of Clusters in Transformers.
arXiv
Koubbi H., Boussard M., Hernandez L. [PDF] [Code], 2024 -
LoRA Training in the NTK Regime Has No Spurious Local Minima.
arXiv
Jang U., Lee J. D., Ryu E. K. [PDF] [Code], 2024 -
Asymmetry in Low-Rank Adapters of Foundation Models.
arXiv
Zhu J., Greenewald K. H., Nadjahi K., Sáez de Ocáriz Borde H., Gabrielsson R. B., Choshen L., Ghassemi M., Yurochkin M., Solomon J. [PDF] [Code], 2024 -
The Expressive Power of Low-Rank Adaptation.
arXiv
Zeng Y., Lee K. [PDF] [Code], 2023
-
ReLoRA: High-rank training through low-rank updates.
NeurIPS Workshop
Lialin V, Muckatira S, Shivagunde N, Rumshisky A. [PDF] [Code], 2023 -
MoRA: High-rank updating for parameter-efficient fine-tuning.
arXiv
Jiang T, Huang S, Luo S, Zhang Z, Huang H, Wei F, Deng W, Sun F, Zhang Q, Wang D, et al. [PDF] [Code], 2024 -
Training neural networks from scratch with parallel low-rank adapters.
arXiv
Huh M, Cheung B, Bernstein J, Isola P, Agrawal P. [PDF] [Code], 2024 -
InfLoRA: Interference-free low-rank adaptation for continual learning.
arXiv
Liang Y, Li W. [PDF] [Code], 2024 -
GS-LoRA: Continual forgetting for pre-trained vision models.
arXiv
Zhao H, Ni B, Wang H, Fan J, Zhu F, Wang Y, Chen Y, Meng G, Zhang Z. [PDF] [Code], 2024 -
I-LoRA: Analyzing and reducing catastrophic forgetting in parameter-efficient tuning.
arXiv
Ren W, Li X, Wang L, Zhao T, Qin W. [PDF] [Code], 2024 -
LongLoRA: Efficient fine-tuning of long-context large language models.
arXiv
Y. Chen, S. Qian, H. Tang, X. Lai, Z. Liu, S. Han, J. Jia. [PDF] [Code], 2023 -
SinkLoRA: Enhanced efficiency and chat capabilities for long-context large language models.
arXiv
Zhang H. [PDF] [Code], 2023
-
ReLoRA: High-Rank Training Through Low-Rank Updates.
NeurIPS Workshop
Lialin V., Muckatira S., Shivagunde N., Rumshisky A. [PDF] [Code], 2023 -
Chain of LoRA: Efficient fine-tuning of language models via residual learning.
arXiv
Xia W, Qin C, Hazan E. [PDF], 2024 -
Mini-ensemble low-rank adapters for parameter-efficient fine-tuning.
arXiv
Ren P, Shi C, Wu S, Zhang M, Ren Z, de Rijke M, Chen Z, Pei J. [PDF] [Code], 2024
- FLoRA: Low-rank adapters are secretly gradient compressors.
arXiv
Hao Y, Cao Y, Mou L. [PDF] [Code], 2024
- Delta-LoRA: Fine-tuning high-rank parameters with the delta of low-rank matrices.
arXiv
Zi B, Qi X, Wang L, Wang J, Wong K, Zhang L. [PDF], 2023
-
AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning.
ICLR 2023
Zhang Q., Chen M., Bukharin A., He P., Cheng Y., Chen W., Zhao T. [PDF] [Code], 2023 -
SaLoRA: Structure-aware low-rank adaptation for parameter-efficient fine-tuning.
Mathematics
Hu Y, Xie Y, Wang T, Chen M, Pan Z. [PDF], 2023 -
IncreLoRA: Incremental Parameter Allocation Method for Parameter-Efficient Fine-Tuning.
arXiv
Zhang F., Li L., Chen J., Jiang Z., Wang B., Qian Y. [PDF] [Code], 2023
-
DoRA: Enhancing parameter-efficient fine-tuning with dynamic rank distribution.
arXiv
Mao Y, Huang K, Guan C, Bao G, Mo F, Xu J. [PDF] [Code], 2024 -
AutoLoRA: Automatically tuning matrix ranks in low-rank adaptation based on meta learning.
arXiv
Zhang R, Qiang R, Somayajula S A, Xie P. [PDF], 2024 -
SoRA: Sparse low-rank adaptation of pre-trained language models.
EMNLP
Ding N, Lv X, Wang Q, Chen Y, Zhou B, Liu Z, Sun M. [PDF] [Code], 2023 -
ALoRA: Allocating low-rank adaptation for fine-tuning large language models.
arXiv
Liu Z, Lyn J, Zhu W, Tian X, Graham Y. [PDF], 2024
- DyLoRA: Parameter-Efficient Tuning of Pre-trained Models Using Dynamic Search-Free Low-Rank Adaptation.
EACL 2023
Valipour M., Rezagholizadeh M., Kobyzev I., Ghodsi A. [PDF] [Code], 2023
-
The impact of initialization on LoRA finetuning dynamics.
arXiv
Hayou S, Ghosh N, Yu B. [PDF], 2024 -
PISSA: Principal singular values and singular vectors adaptation of large language models.
arXiv
Meng F, Wang Z, Zhang M. [PDF] [Code], 2024 -
MiLoRA: Harnessing minor singular components for parameter-efficient LLM finetuning.
arXiv
Wang H, Xiao Z, Li Y, Wang S, Chen G, Chen Y. [PDF], 2024 -
Mixture-of-Subspaces in Low-Rank Adaptation.
arXiv
Wu T, Wang J, Zhao Z, Wong N [PDF] [Code], 2024
-
Riemannian preconditioned LoRA for fine-tuning foundation models.
arXiv
Zhang F, Pilanci M. [PDF] [Code], 2024 -
LoRA+: Efficient low rank adaptation of large models.
arXiv
Hayou S, Ghosh N, Yu B. [PDF] [Code], 2024 -
ResLoRA: Identity residual mapping in low-rank adaption.
arXiv
Shi S, Huang S, Song M, Li Z, Zhang Z, Huang H, Wei F, Deng W, Sun F, Zhang Q. [PDF] [Code], 2024 -
SIBO: A simple booster for parameter-efficient fine-tuning.
arXiv
Wen Z, Zhang J, Fang Y. [PDF], 2024
-
BiLoRA: A bi-level optimization framework for overfitting-resilient low-rank adaptation of large pre-trained models.
arXiv
Qiang R, Zhang R, Xie P. [PDF], 2024 -
LoRA dropout as a sparsity regularizer for overfitting control.
arXiv
Lin Y, Ma X, Chu X, Jin Y, Yang Z, Wang Y, Mei H. [PDF], 2024 -
LoRA meets dropout under a unified framework.
arXiv
Wang S, Chen L, Jiang J, Xue B, Kong L, Wu C. [PDF] [Code], 2024
-
Laplace-LoRA: Bayesian low-rank adaptation for large language models.
arXiv
Yang A X, Robeyns M, Wang X, Aitchison L. [PDF] [Code], 2023 -
PILLOW: Enhancing efficient instruction fine-tuning via prompt matching.
EMNLP
Qi Z, Tan X, Shi S, Qu C, Xu Y, Qi Y. [PDF], 2023 -
STAR: Constraint LoRA with dynamic active learning for data-efficient fine-tuning of large language models.
arXiv
Zhang L, Wu J, Zhou D, Xu G. [PDF] [Code], 2024
-
LoRA Ensembles for large language model fine-tuning.
arXiv
Wang X, Aitchison L, Rudolph M. [PDF], 2023 -
LoRAretriever: Input-aware LoRA retrieval and composition for mixed tasks in the wild.
arXiv
Zhao Z, Gan L, Wang G, Zhou W, Yang H, Kuang K, Wu F. [PDF], 2024 -
Token-level adaptation of LoRA adapters for downstream task generalization.
AICCC
Belofsky J. [PDF] [Code], 2023 -
Effective and parameter-efficient reusing fine-tuned models.
arXiv
Jiang W, Lin B, Shi H, Zhang Y, Li Z, Kwok J T.[PDF] [Code], 2023 -
Composing parameter-efficient modules with arithmetic operations.
arXiv
Zhang J, Chen S, Liu J, He J.[PDF] [Code], 2023 -
Task arithmetic with LoRA for continual learning.
arXiv
Chitale R, Vaidya A, Kane A, Ghotkar A. [PDF], 2023
-
LoRAHub: Efficient cross-task generalization via dynamic LoRA composition.
arXiv
Huang C, Liu Q, Lin B Y, Pang T, Du C, Lin M. [PDF] [Code], 2023 -
ComPEFT: Compression for communicating parameter efficient updates via sparsification and quantization.
arXiv
Yadav P, Choshen L, Raffel C, Bansal M. [PDF] [Code], 2023 -
L-LoRA: Parameter efficient multi-task model fusion with partial linearization.
arXiv
Tang A, Shen L, Luo Y, Zhan Y, Hu H, Du B, Chen Y, Tao D. [PDF] [Code], 2023 -
MixLoRA: Multimodal instruction tuning with conditional mixture of LoRA.
arXiv
Shen Y, Xu Z, Wang Q, Cheng Y, Yin W, Huang L. [PDF], 2024 -
X-LoRA: Mixture of low-rank adapter experts, a flexible framework for large language models with applications in protein mechanics and design.
arXiv
Buehler E L, Buehler M J. [PDF], 2024
-
MoRAL: MoE augmented LoRA for LLMs’ lifelong learning.
arXiv
Yang S, Ali M A, Wang C, Hu L, Wang D. [PDF], 2024 -
LoRAMoE: Alleviate world knowledge forgetting in large language models via MoE-style plugin.
arXiv
Dou S, Zhou E, Liu Y, Gao S, Zhao J, Shen W, Zhou Y, Xi Z, Wang X, Fan X, Pu S, Zhu J, Zheng R, Gui T, Zhang Q, Huang X. [PDF] [Code], 2023 -
MoCLE: Mixture of cluster-conditional LoRA experts for vision-language instruction tuning.
arXiv
Gou Y, Liu Z, Chen K, Hong L, Xu H, Li A, Yeung D, Kwok J T, Zhang Y. [PDF][Code], 2023 -
MOELoRA: An MoE-based parameter efficient fine-tuning method for multi-task medical applications.
arXiv
Liu Q, Wu X, Zhao X, Zhu Y, Xu D, Tian F, Zheng Y. [PDF] [Code], 2023 -
Mixture-of-LoRAs: An efficient multitask tuning method for large language models.
LREC/COLING
Feng W, Hao C, Zhang Y, Han Y, Wang H. [PDF], 2024 -
MultiLoRA: Democratizing LoRA for better multi-task learning.
arXiv
Wang Y, Lin Y, Zeng X, Zhang G. [PDF], 2023 -
MLoRE: Multi-task dense prediction via mixture of low-rank experts.
arXiv
Yang Y, Jiang P, Hou Q, Zhang H, Chen J, Li B. [PDF] [Code], 2024 -
MTLoRA: Low-rank adaptation approach for efficient multi-task learning.
CVPR
Agiza A, Neseem M, Reda S. [PDF] [Code], 2024 -
MoLA: Higher layers need more LoRA experts.
arXiv
Gao C, Chen K, Rao J, Sun B, Liu R, Peng D, Zhang Y, Guo X, Yang J, Subrahmanian V S. [PDF] [Code], 2024 -
LLaVA-MoLE: Sparse mixture of LoRA experts for mitigating data conflicts in instruction finetuning MLLMs.
arXiv
Chen S, Jie Z, Ma L. [PDF], 2024 -
SiRA: Sparse mixture of low rank adaptation.
arXiv
Zhu Y, Wichers N, Lin C, Wang X, Chen T, Shu L, Lu H, Liu C, Luo L, Chen J, Meng L. [PDF], 2023 -
Octavius: Mitigating task interference in MLLMs via MoE.
arXiv
Chen Z, Wang Z, Wang Z, Liu H, Yin Z, Liu S, Sheng L, Ouyang W, Qiao Y, Shao J. [PDF] [Code], 2023 -
Fast LoRA: Batched low-rank adaptation of foundation models.
arXiv
Wen Y, Chaudhuri S. [PDF], 2023 -
I-LoRA: Analyzing and reducing catastrophic forgetting in parameter-efficient tuning.
arXiv
Ren W, Li X, Wang L, Zhao T, Qin W. [PDF] [Code], 2024
-
LoRA-SP: Streamlined Partial Parameter Adaptation for Resource Efficient Fine-Tuning of Large Language Models
arXiv
Y. Wu, Y. Xiang, S. Huo, Y. Gong, P. Liang. [PDF] 2024 -
LoRA-FA: Memory-Efficient Low-Rank Adaptation for Large Language Models Fine-Tuning
arXiv
L. Zhang, L. Zhang, S. Shi, X. Chu, B. Li. [PDF] 2023 -
AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models
arXiv
Z. Liu, S. Kundu, A. Li, J. Wan, L. Jiang, P. A. Beerel. [PDF] 2024 -
DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation
arXiv
S. Woo, B. Park, B. Kim, M. Jo, S. Kwon, D. Jeon, D. Lee. [PDF] [Code] 2024 -
LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters
arXiv
K. Bałazy, M. Banaei, K. Aberer, J. Tabor. [PDF] [Code] 2024 -
BYOM-LoRA: Effective and Parameter-Efficient Reusing Fine-Tuned Models
arXiv
W. Jiang, B. Lin, H. Shi, Y. Zhang, Z. Li, J. T. Kwok. [PDF] 2023
-
LoRA-Drop: Efficient LoRA Parameter Pruning Based on Output Evaluation
arXiv
H. Zhou, X. Lu, W. Xu, C. Zhu, T. Zhao. [PDF] 2024 -
LoRAPrune: Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning
arXiv
M. Zhang, H. Chen, C. Shen, Z. Yang, L. Ou, X. Zhuang, B. Zhu. [PDF] 2023 -
LoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery
arXiv
T. Chen, T. Ding, B. Yadav, I. Zharkov, L. Liang. [PDF] [Code] 2023 -
Parameter-Efficient Fine-Tuning with Layer Pruning on Free-Text Sequence-to-Sequence Modeling
arXiv
Y. Zhu, X. Yang, Y. Wu, W. Zhang. [PDF] [Code] 2023
-
VeRA: Vector-Based Random Matrix Adaptation
arXiv
D. J. Kopiczko, T. Blankevoort, Y. M. Asano. [PDF] 2023 -
VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks
arXiv
Y. Li, S. Han, S. Ji. [PDF] [Code] 2024 -
Parameter-Efficient Fine-Tuning with Discrete Fourier Transform
arXiv
Z. Gao, Q. Wang, A. Chen, Z. Liu, B. Wu, L. Chen, J. Li. [PDF] [Code] 2024
-
QLoRA: Efficient Fine-Tuning of Quantized LLMs
NeurIPS
T. Dettmers, A. Pagnoni, A. Holtzman, L. Zettlemoyer. [PDF] [Code] 2024 -
QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models
arXiv
Y. Xu, L. Xie, X. Gu, X. Chen, H. Chang, H. Zhang, Z. Chen, X. Zhang, Q. Tian. [PDF] [Code] 2023
-
LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models
arXiv
Y. Li, Y. Yu, C. Liang, P. He, N. Karampatziakis, W. Chen, T. Zhao. [PDF] [Code] 2023 -
ApiQ: Finetuning of 2-Bit Quantized Large Language Model
arXiv
B. Liao, C. Monz. [PDF] [Code] 2024 -
L4Q: Parameter Efficient Quantization-Aware Training on Large Language Models via LoRA-Wise LSQ
arXiv
H. Jeon, Y. Kim, J. Kim. [PDF] 2024
- ASPEN: High-Throughput LoRA Fine-Tuning of Large Language Models with a Single GPU
arXiv
Z. Ye, D. Li, J. Tian, T. Lan, J. Zuo, L. Duan, Y. Jiang, J. Sha, K. Zhang, M. Tang. [PDF] [Code] 2023
-
Punica: Multi-Tenant LoRA Serving
MLSys
L. Chen, Z. Ye, Y. Wu, D. Zhuo, L. Ceze, A. Krishnamurthy. [PDF] [Code] 2024 -
S-LoRA: Serving Thousands of Concurrent LoRA Adapters
arXiv
Y. Sheng, S. Cao, D. Li, C. Hooper, N. Lee, S. Yang, C.-C. Chou, B. Zheng, K. Keutzer. [PDF] [Code] 2023 -
CaraServe: CPU-Assisted and Rank-Aware LoRA Serving for Generative LLM Inference
arXiv
S. Li, H. Lu, T. Wu, M. Yu, Q. Weng, X. Chen, Y. Shan, B. Yuan, W. Wang. [PDF] 2024
-
SLoRA: Federated parameter efficient fine-tuning of language models.
arXiv
Babakniya S, Elkordy A R, Ezzeldin Y H, Liu Q, Song K, El-Khamy M, Avestimehr S. [PDF], 2023 -
FeDeRA: Efficient fine-tuning of language models in federated learning leveraging weight decomposition.
arXiv
Yan Y, Tang S, Shi Z, Yang Q. [PDF], 2024 -
Improving LoRA in privacy-preserving federated learning.
arXiv
Sun Y, Li Z, Li Y, Ding B. [PDF], 2024
-
FedMS: Federated learning with mixture of sparsely activated foundation models.
arXiv
Wu P, Li K, Wang T, Wang F. [PDF], 2023 -
Federated fine-tuning of large language models under heterogeneous language tasks and client resources.
arXiv
Bai J, Chen D, Qian B, Yao L, Li Y. [PDF] [Code], 2024 -
Heterogeneous LoRA for federated fine-tuning of on-device foundation models.
NeurIPS
Cho Y J, Liu L, Xu Z, Fahrezi A, Barnes M, Joshi G. [PDF], 2023
- pFedLoRA: Model-Heterogeneous Personalized Federated Learning with LoRA Tuning.
arXiv
Yi L, Yu H, Wang G, Liu X, Li X. [PDF], 2023
-
A fast, performant, secure distributed training framework for large language model.
arXiv
Huang W, Wang Y, Cheng A, Zhou A, Yu C, Wang L. [PDF], 2024 -
PrivateLoRA for efficient privacy-preserving LLM.
arXiv
Wang Y, Lin Y, Zeng X, Zhang G. [PDF], 2023
-
DialogueLLM: Context and Emotion Knowledge-Tuned Large Language Models for Emotion Recognition in Conversations.
arXiv
Zhang Y, Wang M, Wu Y, Tiwari P, Li Q, Wang B, Qin J. [PDF], 2024. -
Label Supervised LLaMA Finetuning.
arXiv
Li Z, Li X, Liu Y, Xie H, Li J, Wang F L, Li Q, Zhong X. [PDF][Code], 2023. -
Speaker Attribution in German Parliamentary Debates with QLoRA-Adapted Large Language Models.
arXiv
Bornheim T, Grieger N, Blaneck P G, Bialonski S. [PDF], 2024. -
AutoRE: Document-Level Relation Extraction with Large Language Models.
arXiv
Xue L, Zhang D, Dong Y, Tang J. [PDF] [Code], 2024. -
Steering Large Language Models for Machine Translation with Finetuning and In-Context Learning.
EMNLP
Alves D M, Guerreiro N M, Alves J, Pombal J, Rei R, de Souza J G C, Colombo P, Martins A F T. [PDF] [Code], 2023. -
Finetuning Large Language Models for Domain-Specific Machine Translation.
arXiv
Zheng J, Hong H, Wang X, Su J, Liang Y, Wu S. [PDF], 2024. -
Assessing Translation Capabilities of Large Language Models Involving English and Indian Languages.
arXiv
Mujadia V, Urlana A, Bhaskar Y, Pavani P A, Shravya K, Krishnamurthy P, Sharma D M. [PDF], 2023. -
Personalized LoRA for Human-Centered Text Understanding.
AAAI
Zhang Y, Wang J, Yu L, Xu D, Zhang X. [PDF] [Code], 2024. -
Y-tuning: An Efficient Tuning Paradigm for Large-Scale Pre-Trained Models via Label Representation Learning.
Frontiers of Computer Science
Liu Y, An C, Qiu X. [PDF], 2024.
-
Delving into parameter-efficient fine-tuning in code change learning: An empirical study.
arXiv
Liu S, Keung J, Yang Z, Liu F, Zhou Q, Liao Y. [PDF], 2024. -
An empirical study on jit defect prediction based on bert-style model.
arXiv
Guo Y, Gao X, Jiang B. [PDF], 2024. -
Parameter-efficient finetuning of transformers for source code.
arXiv
Ayupov S, Chirkova N. [PDF][Code], 2022. -
Repairllama: Efficient representations and fine-tuned adapters for program repair.
arXiv
Silva A, Fang S, Monperrus M. [PDF][Code], 2023. -
Analyzing the effectiveness of large language models on text-to-sql synthesis.
arXiv
Roberson R, Kaki G, Trivedi A. [PDF], 2024. -
Stelocoder: a decoder-only LLM for multi-language to python code translation.
arXiv
Pan J, Sadé A, Kim J, Soriano E, Sole G, Flamant S. [PDF][Code], 2023.
-
Perl: parameter efficient reinforcement learning from human feedback.
arXiv
H. Sidahmed, S. Phatale, A. Hutcheson, Z. Lin, Z. Chen, Z. Yu, J. Jin, R. Komarytsia, C. Ahlheim, Y. Zhu, S. Chaudhary, B. Li, S. Ganesh, B. Byrne, J. Hoffmann, H. Mansoor, W. Li, A. Rastogi, L. Dixon. [PDF][Code], 2024 -
Efficient RLHF: reducing the memory usage of PPO.
arXiv
M. Santacroce, Y. Lu, H. Yu, Y. Li, Y. Shen. [PDF], 2023 -
Exploring the impact of low-rank adaptation on the performance, efficiency, and regularization of RLHF.
arXiv
S. Sun, D. Gupta, M. Iyyer. [PDF][Code], 2023 -
Dmoerm: Recipes of mixture-of-experts for effective reward modeling.
arXiv
S. Quan. [PDF][Code], 2024 -
Improving reinforcement learning from human feedback with efficient reward model ensemble.
arXiv
S. Zhang, Z. Chen, S. Chen, Y. Shen, Z. Sun, C. Gan. [PDF], 2024 -
Uncertainty-penalized reinforcement learning from human feedback with diverse reward LoRA ensembles.
arXiv
Y. Zhai, H. Zhang, Y. Lei, Y. Yu, K. Xu, D. Feng, B. Ding, H. Wang. [PDF], 2024 -
Bayesian reward models for LLM alignment.
arXiv
A. X. Yang, M. Robeyns, T. Coste, J. Wang, H. Bou-Ammar, L. Aitchison. [PDF], 2024 -
Bayesian low-rank adaptation for large language models.
arXiv
A. X. Yang, M. Robeyns, X. Wang, L. Aitchison. [PDF][Code], 2023
-
Bioinstruct: Instruction tuning of large language models for biomedical natural language processing.
arXiv
Tran H, Yang Z, Yao Z, Yu H. [PDF][Code], 2023 -
Parameter-efficient fine-tuning of LLaMA for the clinical domain.
arXiv
Gema A P, Daines L, Minervini P, Alex B. [PDF][Code], 2023 -
Clinical camel: An open-source expert-level medical language model with dialogue-based knowledge encoding.
arXiv
Toma A, Lawler P R, Ba J, Krishnan R G, Rubin B B, Wang B. [PDF][Code], 2023 -
Suryakiran at MEDIQA-Sum 2023: Leveraging LoRA for clinical dialogue summarization.
CLEF
Suri K, Mishra P, Saha S, Singh A. [PDF], 2023 -
Assertion detection large language model in-context learning LoRA fine-tuning.
arXiv
Ji Y, Yu Z, Wang Y. [PDF][Code], 2024 -
Ivygpt: Interactive chinese pathway language model in medical domain.
CAAI
Wang R, Duan Y, Lam C, Chen J, Xu J, Chen H, Liu X, Pang P C, Tan T. [PDF], 2023 -
SM70: A large language model for medical devices.
arXiv
Bhatti A, Parmar S, Lee S. [PDF], 2023 -
Finllama: Financial sentiment classification for algorithmic trading applications.
arXiv
Konstantinidis T, Iacovides G, Xu M, Constantinides T G, Mandic D P. [PDF], 2024 -
Financial news analytics using fine-tuned llama 2 GPT model.
arXiv
Pavlyshenko B M. [PDF], 2023 -
Fingpt: Democratizing internet-scale data for financial large language models.
arXiv
Liu X, Wang G, Zha D. [PDF][Code], 2023 -
Ra-cfgpt: Chinese financial assistant with retrieval-augmented large language model.
Frontiers of Computer Science
Li J, Lei Y, Bian Y, Cheng D, Ding Z, Jiang C. [PDF], 2024 -
Db-gpt: Large language model meets database.
Data Science and Engineering
Zhou X, Sun Z, Li G. [PDF][Code], 2024
-
Diffstyler: Diffusion-based localized image style transfer.
arXiv
Li S. [PDF], 2024 -
Implicit style-content separation using B-LoRA.
arXiv
Frenkel Y, Vinker Y, Shamir A, Cohen-Or D. [PDF][Code], 2024 -
Facechain: A playground for human-centric artificial intelligence generated content.
arXiv
Liu Y, Yu C, Shang L, He Y, Wu Z, Wang X, Xu C, Xie H, Wang W, Zhao Y, Zhu L, Cheng C, Chen W, Yao Y, Zhou W, Xu J, Wang Q, Chen Y, Xie X, Sun B. [PDF][Code], 2023 -
Calliffusion: Chinese calligraphy generation and style transfer with diffusion modeling.
arXiv
Liao Q, Xia G, Wang Z. [PDF], 2023 -
Style transfer to calvin and hobbes comics using stable diffusion.
arXiv
Shrestha S, Venkataramanan A, et al. [PDF], 2023 -
Block-wise LoRA: Revisiting fine-grained LoRA for effective personalization and stylization in text-to-image generation.
arXiv
Li L, Zeng H, Yang C, Jia H, Xu D. [PDF], 2024 -
OMG: occlusion-friendly personalized multi-concept generation in diffusion models.
arXiv
Kong Z, Zhang Y, Yang T, Wang T, Zhang K, Wu B, Chen G, Liu W, Luo W. [PDF][Code], 2024 -
Space narrative: Generating images and 3d scenes of chinese garden from text using deep learning.
preprint
Shi J, Hua H. [PDF], 2023 -
Generating coherent comic with rich story using chatgpt and stable diffusion.
arXiv
Jin Z, Song Z. [PDF], 2023 -
Customizing 360-degree panoramas through text-to-image diffusion models.
WACV
Wang H, Xiang X, Fan Y, Xue J. [PDF][Code], 2024 -
Smooth diffusion: Crafting smooth latent spaces in diffusion models.
arXiv
Guo J, Xu X, Pu Y, Ni Z, Wang C, Vasu M, Song S, Huang G, Shi H. [PDF][Code], 2023 -
Resadapter: Domain consistent resolution adapter for diffusion models.
arXiv
Cheng J, Xie P, Xia X, Li J, Wu J, Ren Y, Li H, Xiao X, Zheng M, Fu L. [PDF][Code], 2024 -
Continual diffusion with stamina: Stack-and-mask incremental adapters.
CVPR
Smith J S, Hsu Y C, Kira Z, Shen Y, Jin H. [PDF], 2024 -
Dreamsync: Aligning text-to-image generation with image understanding feedback.
CVPR
Sun J, Fu D, Hu Y, Wang S, Rassin R, Juan D C, Alon D, Herrmann C, van Steenkiste S, Krishna R, et al. [PDF], 2023 -
StyleAdapter: A single-pass LoRA-free model for stylized image generation.
arXiv
Wang Z, Wang X, Xie L, Qi Z, Shan Y, Wang W, Luo P. [PDF], 2023 -
Mix-of-show: Decentralized low-rank adaptation for multi-concept customization of diffusion models.
NeurIPS
Gu Y, Wang X, Wu J Z, Shi Y, Chen Y, Fan Z, Xiao W, Zhao R, Chang S, Wu W, Ge Y, Shan Y, Shou M Z. [PDF][Code], 2023 -
LCM-LoRA: A universal stable-diffusion acceleration module.
arXiv
Luo S, Tan Y, Patil S, Gu D, von Platen P, Passos A, Huang L, Li J, Zhao H. [PDF][Code], 2023 -
LoRA-enhanced distillation on guided diffusion models.
arXiv
Golnari P A. [PDF], 2023 -
Customize-a-video: One-shot motion customization of text-to-video diffusion models.
arXiv
Ren Y, Zhou Y, Yang J, Shi J, Liu D, Liu F, Kwon M, Shrivastava A. [PDF], 2024 -
Dragvideo: Interactive drag-style video editing.
arXiv
Deng Y, Wang R, Zhang Y, Tai Y, Tang C. [PDF][Code], 2023 -
Rerender A video: Zero-shot text-guided video-to-video translation.
SIGGRAPH
Yang S, Zhou Y, Liu Z, Loy C C. [PDF][Code], 2023 -
Infusion: Inject and attention fusion for multi concept zero-shot text-based video editing.
ICCV
Khandelwal A. [PDF][Code], 2023 -
Stable video diffusion: Scaling latent video diffusion models to large datasets.
arXiv
Blattmann A, Dockhorn T, Kulal S, Mendelevitch D, Kilian M, Lorenz D, Levi Y, English Z, Voleti V, Letts A, et al. [PDF], 2023 -
Animatediff: Animate your personalized text-to-image diffusion models without specific tuning.
arXiv
Guo Y, Yang C, Rao A, Wang Y, Qiao Y, Lin D, Dai B. [PDF][Code], 2023 -
Dreamcontrol: Control-based text-to-3d generation with 3d self-prior.
arXiv
Huang T, Zeng Y, Zhang Z, Xu W, Xu H, Xu S, Lau R W H, Zuo W. [PDF][Code], 2023 -
X-dreamer: Creating high-quality 3d content by bridging the domain gap between text-to-2d and text-to-3d generation.
arXiv
Ma Y, Fan Y, Ji J, Wang H, Sun X, Jiang G, Shu A, Ji R. [PDF][Code], 2023 -
Boosting3d: High-fidelity image-to-3d by boosting 2d diffusion prior to 3d prior with progressive learning.
arXiv
Yu K, Liu J, Feng M, Cui M, Xie X. [PDF], 2023 -
As-plausible-as-possible: Plausibility-aware mesh deformation using 2d diffusion priors.
CVPR
Yoo S, Kim K, Kim V G, Sung M. [PDF][Code], 2024 -
Dragtex: Generative point-based texture editing on 3d mesh.
arXiv
Zhang Y, Xu Q, Zhang L. [PDF], 2024
-
Samlp: A customized segment anything model for license plate detection.
arXiv
Ding H, Gao J, Yuan Y, Wang Q. [PDF][Code], 2024 -
Sam-based instance segmentation models for the automation of structural damage detection.
arXiv
Ye Z, Lovell L, Faramarzi A, Ninic J. [PDF], 2024 -
Segment any cell: A sam-based auto-prompting fine-tuning framework for nuclei segmentation.
arXiv
Na S, Guo Y, Jiang F, Ma H, Huang J. [PDF], 2024 -
SAM-OCTA: prompting segment-anything for OCTA image segmentation.
arXiv
Chen X, Wang C, Ning H, Li S. [PDF][Code], 2023 -
Cheap lunch for medical image segmentation by fine-tuning SAM on few exemplars.
arXiv
Feng W, Zhu L, Yu L. [PDF], 2023 -
Customized segment anything model for medical image segmentation.
arXiv
Zhang K, Liu D. [PDF], 2023 -
SAM meets robotic surgery: An empirical study on generalization, robustness and adaptation.
MICCAI
Wang A, Islam M, Xu M, Zhang Y, Ren H. [PDF], 2023 -
Tracking meets LoRA: Faster training, larger model, stronger performance.
arXiv
Lin L, Fan H, Zhang Z, Wang Y, Xu Y, Ling H. [PDF], 2024 -
Enhancing general face forgery detection via vision transformer with low-rank adaptation.
MIPR
Kong C, Li H, Wang S. [PDF], 2023
- SALM: speech-augmented language model with in-context learning for speech recognition and translation.
arXiv
Chen Z, Huang H, Andrusenko A, Hrinchuk O, Puvvada KC, Li J, Ghosh S, Balam J, Ginsburg B. [PDF], 2023
-
InternLM-XComposer2: Mastering Free-Form Text-Image Composition and Comprehension in Vision-Language Large Model.
arXiv
Dong X, Zhang P, Zang Y, et al. [PDF][Code], 2024 -
mPlug-OWL: Modularization Empowers Large Language Models with Multimodality.
arXiv
Ye Q, Xu H, Xu G, Ye J, Yan M, Zhou Y, Wang J, Hu A, Shi P, Shi Y, Li C, Xu Y, Chen H, Tian J, Qi Q, Zhang J, Huang F. [PDF][Code], 2023 -
Collavo: Crayon Large Language and Vision Model.
arXiv
Lee B, Park B, Kim CW, Ro YM. [PDF][Code], 2024
-
Where visual speech meets language: VSP-LLM framework for efficient and context-aware visual speech processing.
arXiv
J. H. Yeo, S. Han, M. Kim, Y. M. Ro. [PDF][Code], 2024 -
Molca: Molecular graph-language modeling with cross-modal projector and uni-modal adapter.
EMNLP
Z. Liu, S. Li, Y. Luo, H. Fei, Y. Cao, K. Kawaguchi, X. Wang, T. Chua. [PDF][Code], 2023 -
TPLLM: A traffic prediction framework based on pretrained large language models.
arXiv
Y. Ren, Y. Chen, S. Liu, B. Wang, H. Yu, Z. Cui. [PDF], 2024
Contributions to this repository are welcome!
If you find any error or have relevant resources, feel free to open an issue or a pull request.
Paper format:
1. **[paper title].** `[venue]`
*[authors].* [[PDF]([pdf link])] [[Code]([code link])], published time, ![](https://img.shields.io/badge/[architecture]-blue) ![](https://img.shields.io/badge/[size]-red)
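For example, a filled-in entry following this format might look like the sketch below (the PDF/code links and badge labels are kept as placeholders; substitute the real URLs and badges for your paper):

1. **LoRA: Low-Rank Adaptation of Large Language Models.** `ICLR`
*Edward J. Hu, Yelong Shen, Phillip Wallis, et al.* [[PDF]([pdf link])] [[Code]([code link])], 2022, ![](https://img.shields.io/badge/[architecture]-blue) ![](https://img.shields.io/badge/[size]-red)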
Please cite the following paper if you find the resource helpful for your research.
@article{mao2024survey,
title={A Survey on LoRA of Large Language Models},
author={Mao, Yuren and Ge, Yuhang and Fan, Yijiang and Xu, Wenyi and Mi, Yu and Hu, Zhonghao and Gao, Yunjun},
journal={arXiv preprint arXiv:2407.11046},
year={2024}
}