Rasoul-Zahedifar/Parameter-Efficient-Fine-Tunning-of-LLMs

Table of Contents
  1. About The Project
  2. Usage
  3. Contact

About The Project

In this project, we explore five distinct approaches for Parameter-Efficient Fine-Tuning of Large Language Models (LLMs): Full Fine-Tuning, Soft Prompting, Adapters, Adapter Hub, and LoRA, applied to the T5-small model for a sentiment analysis task using the IMDB dataset. Here is a brief explanation of each method:
  • Full Fine-Tuning: This approach updates all model parameters during training, enabling maximum model adaptation but requiring significant computational resources and storage.

  • Soft Prompting: Instead of altering the model’s core parameters, this method learns a set of “soft prompts” that act as additional input tokens, steering the model towards desired outputs with minimal parameter changes.

  • Adapters: Adapters introduce small, trainable modules into each layer of the model, allowing the main model parameters to remain frozen. This approach is efficient in terms of storage and computation, as only the adapter parameters need updating.

  • Adapter Hub: An extension of the adapter concept, Adapter Hub is a modular framework that enables the sharing and reuse of adapters across tasks. This allows for flexible, plug-and-play fine-tuning of models on multiple tasks with minimal additional training.

  • LoRA (Low-Rank Adaptation): LoRA freezes the pretrained weights and trains small low-rank update matrices added alongside them, significantly reducing the number of trainable parameters. This method is highly parameter-efficient and adapts large language models well without substantial resource demands. A minimal from-scratch sketch of the adapter, soft-prompt, and LoRA mechanisms follows this list.
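
The mechanisms above can be summarized in a few lines of code. The following is a minimal, illustrative PyTorch sketch of a bottleneck adapter, a soft-prompt module, and a LoRA-augmented linear layer. It is not the repository's exact implementation; the dimensions and hyperparameters (T5-small's 512-dimensional hidden size, a bottleneck of 64, rank 8, alpha 16) are assumptions chosen for illustration.

```python
# Illustrative from-scratch sketch (not the repository's exact code) of three of
# the mechanisms described above. Hidden size 512 (T5-small), bottleneck size 64,
# rank r=8, and alpha=16 are assumed values for demonstration only.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, plus residual."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, x):
        # The residual connection preserves the frozen backbone's output.
        return x + self.up(self.act(self.down(x)))


class SoftPrompt(nn.Module):
    """Learnable prompt vectors prepended to the (frozen) token embeddings."""
    def __init__(self, num_prompt_tokens: int, hidden_dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, hidden_dim) * 0.01)

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, hidden) from the model's embedding layer.
        prompt = self.prompt.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                    # freeze the pretrained weights
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # starts as a no-op
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


# Example: wrap a 512-dimensional projection (T5-small's hidden size) with LoRA.
layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(2, 10, 512))                   # (batch, seq_len, hidden)
```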

(back to top)

Usage

The code for each method is implemented from scratch, with performance comparisons against the popular Hugging Face PEFT library. Each notebook includes in-depth explanations and links for further reading on how each method works. A comprehensive report detailing these approaches is also available as a PDF in the repository.
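
For orientation, the snippet below shows roughly how a PEFT-library baseline for LoRA on T5-small could be set up. It is a hedged sketch rather than the repository's exact configuration: the rank, alpha, target modules, and the text-to-text prompt format for IMDB sentiment are assumptions.

```python
# Hypothetical PEFT-library baseline for LoRA on T5-small; the hyperparameters and
# the "sentiment: ..." prompt format are illustrative assumptions, not the notebooks' settings.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                         # rank of the low-rank update matrices
    lora_alpha=16,               # scaling factor applied to the update
    lora_dropout=0.1,
    target_modules=["q", "v"],   # T5 attention query/value projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()   # only the LoRA matrices are trainable

# IMDB sentiment cast as text-to-text: review in, label word out (assumed format).
inputs = tokenizer("sentiment: This movie was a delight to watch.", return_tensors="pt")
labels = tokenizer("positive", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss   # standard seq2seq training loss
```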

(back to top)

Contact

Rasoul Zahedifar - [email protected]

GitHub Link: https://github.com/Rasoul-Zahedifar/Parameter-Efficient-Fine-Tunning-of-LLMs

(back to top)
