forked from roboflow/maestro

Streamline the fine-tuning process for multimodal models: Florence-2, PaliGemma 2, and Qwen2.5-VL


maestro

VLM fine-tuning for everyone




Hello

maestro is a streamlined tool to accelerate the fine-tuning of multimodal models. By encapsulating best practices from our core modules, maestro handles configuration, data loading, reproducibility, and training loop setup. It currently offers ready-to-use recipes for popular vision-language models such as Florence-2, PaliGemma 2, and Qwen2.5-VL.


Quickstart

Install

To begin, install the model-specific dependencies. Since some models may have clashing requirements, we recommend creating a dedicated Python environment for each model.

pip install "maestro[paligemma_2]"

CLI

Kick off fine-tuning with our command-line interface, which leverages the configuration and training routines defined in each model’s core module. Simply specify key parameters such as the dataset location, number of epochs, batch size, optimization strategy, and metrics.

maestro paligemma_2 train \
  --dataset "dataset/location" \
  --epochs 10 \
  --batch-size 4 \
  --optimization_strategy "qlora" \
  --metrics "edit_distance"
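The edit_distance metric is the Levenshtein distance between the model's text output and the reference string: the minimum number of single-character insertions, deletions, and substitutions needed to turn one into the other. As a rough sketch of what it measures (maestro's own implementation may normalize or batch differently), the classic dynamic-programming version looks like this:

```python
def edit_distance(a: str, b: str) -> int:
    # Levenshtein distance via dynamic programming, keeping
    # only the previous row of the DP table in memory.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution (free if chars match)
            ))
        prev = curr
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # → 3
```

A lower edit distance means the generated text is closer to the ground truth, which is why it is a common fit for OCR-style and structured-text extraction tasks.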

Python

For greater control, use the Python API to fine-tune your models. Import the train function from the corresponding module and define your configuration in a dictionary. The core modules take care of reproducibility, data preparation, and training setup.

from maestro.trainer.models.paligemma_2.core import train

config = {
    "dataset": "dataset/location",
    "epochs": 10,
    "batch_size": 4,
    "optimization_strategy": "qlora",
    "metrics": ["edit_distance"]
}

train(config)
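The config keys correspond directly to the CLI flags shown above. Purely as an illustration of that mapping (config_to_cli is not part of maestro, and the real CLI spells some flags with hyphens, e.g. --batch-size), the correspondence can be sketched as:

```python
def config_to_cli(model: str, config: dict) -> list[str]:
    """Illustrative only: render a maestro-style config dict
    as the equivalent CLI argument list."""
    args = ["maestro", model, "train"]
    for key, value in config.items():
        flag = "--" + key  # this sketch keeps underscores for every flag
        if isinstance(value, list):
            args += [flag, ",".join(value)]  # lists become comma-separated values
        else:
            args += [flag, str(value)]
    return args

print(config_to_cli("paligemma_2", {"epochs": 10, "metrics": ["edit_distance"]}))
```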

Cookbooks

Looking for a place to start? Try our cookbooks to learn how to fine-tune different VLMs on various vision tasks with maestro.

• Finetune Florence-2 for object detection with LoRA (open in Colab)
• Finetune PaliGemma 2 for JSON data extraction with LoRA (open in Colab)
• Finetune Qwen2.5-VL for JSON data extraction with QLoRA (open in Colab)

Contribution

We appreciate your input as we continue refining Maestro. Your feedback is invaluable in guiding our improvements. To learn how you can help, please check out our Contributing Guide. If you have any questions or ideas, feel free to start a conversation in our GitHub Discussions. Thank you for being a part of our journey!
