
Improving VQA Using MLLM


Train

After downloading the training datasets and specifying their paths in the dataset configs, we are ready for training!
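As a rough illustration, a dataset config entry usually just points the data builder at your local annotation files and image folders. The dataset name and key layout below (build_info, annotations, images, storage) follow the LAVIS-style configs that BLIVA builds on and are assumptions only; check the actual files under this repo's dataset config directory for the real field names.

# Hypothetical LAVIS-style dataset config sketch; dataset name and keys are assumptions
datasets:
  coco_vqa:
    build_info:
      annotations:
        train:
          storage: /path/to/vqa/annotations/train.json   # downloaded annotation file
      images:
        storage: /path/to/coco/images                     # root folder of downloaded images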

  1. Setting up the environment
conda create -n fusion python=3.9
git clone 
cd BLIVA
pip install -e .

If you encounter a packaging error, pin setuptools:

pip install setuptools==69.5.1
  2. Pretraining of Dm-Former
python train.py --cfg-path train_configs/pretrain_stage1.yaml
  3. Pretraining of the visual assistant branch

You should specify the model checkpoint path in the pretrained field of the config (a sketch follows the command below).

python train.py --cfg-path train_configs/pretrain_bliva_vicuna.yaml
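The snippet below is only a sketch of how that pretrained entry might look in the model section of a LAVIS-style train config; every key except pretrained is an assumption, so compare it against the actual train_configs/pretrain_bliva_vicuna.yaml.

# Hypothetical model section of the train config; key names besides pretrained are assumptions
model:
  arch: bliva_vicuna                              # architecture name (assumption)
  load_pretrained: True
  pretrained: /path/to/stage1_checkpoint.pth      # checkpoint from the previous stage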
  4. Instruction Finetuning
python 
