
The Evaluation Suite of Large Multimodal Models


Accelerating the development of large multimodal models (LMMs) with lmms-eval

🏠 LMMs-Lab Homepage | πŸŽ‰ Blog | πŸ“š Documentation | πŸ€— Huggingface Datasets | Discord_Thread discord/lmms-eval


Announcements

  • [2024-09] 🎉🎉 We welcome the new task MMSearch.

  • [2024-09] 🎉🎉 We welcome the new task MME-RealWorld.

  • [2024-09] ⚙️⚙️ We have upgraded lmms-eval to 0.2.3 with more tasks and features. We now support a compact set of language-task evaluations (code credit to lm-evaluation-harness), and we have removed the start-up registration of all models and tasks to reduce overhead; lmms-eval now only launches the tasks and models that are needed. Please check the release notes for more details.

  • [2024-08] 🎉🎉 We welcome the new models LLaVA-OneVision and Mantis, and the new tasks MVBench, LongVideoBench, and MMStar. We also provide the new SGLang Runtime API feature for the llava-onevision model; please refer to the doc for inference acceleration.

  • [2024-07] 🎉🎉 We have released the technical report and LiveBench!

  • [2024-07] 👨‍💻👨‍💻 lmms-eval/v0.2.1 has been upgraded to support more models, including LongVA, InternVL-2, VILA, and many more evaluation tasks, e.g. Details Captions, MLVU, WildVision-Bench, VITATECS, and LLaVA-Interleave-Bench.

  • [2024-06] 🎬🎬 lmms-eval/v0.2.0 has been upgraded to support video evaluations for video models like LLaVA-NeXT Video and Gemini 1.5 Pro across tasks such as EgoSchema, PerceptionTest, VideoMME, and more. Please refer to the blog for more details.

  • [2024-03] 📝📝 We have released the first version of lmms-eval; please refer to the blog for more details.

Why lmms-eval?

In today's world, we're on an exciting journey toward creating Artificial General Intelligence (AGI), much like the enthusiasm of the 1960s moon landing. This journey is powered by advanced large language models (LLMs) and large multimodal models (LMMs), which are complex systems capable of understanding, learning, and performing a wide variety of human tasks.

To gauge how advanced these models are, we use a variety of evaluation benchmarks. These benchmarks are tools that help us understand the capabilities of these models, showing us how close we are to achieving AGI.

However, finding and using these benchmarks is a big challenge. The necessary benchmarks and datasets are spread out and hidden in various places like Google Drive, Dropbox, and different school and research lab websites. It feels like we're on a treasure hunt, but the maps are scattered everywhere.

In the field of language models, a valuable precedent has been set by lm-evaluation-harness. It offers integrated data and model interfaces that enable rapid evaluation of language models, serves as the backend framework for the open-llm-leaderboard, and has gradually become the underlying ecosystem of the foundation-model era.

We humbly absorbed the exquisite and efficient design of lm-evaluation-harness and introduce lmms-eval, an evaluation framework meticulously crafted for consistent and efficient evaluation of LMMs.

Installation

For formal usage, you can install the package from PyPI by running the following command:

pip install lmms-eval
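
To quickly sanity-check the install, you can print the CLI help; this assumes only the python -m lmms_eval entry point used in the examples below:

# Confirm the package is importable and show the available command-line options
python3 -m lmms_eval --help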

For development, you can install the package by cloning the repository and running the following command:

git clone https://github.com/EvolvingLMMs-Lab/lmms-eval
cd lmms-eval
pip install -e .

If you want to test LLaVA, you will have to clone the LLaVA repo and install it:

# for llava 1.5
# git clone https://github.com/haotian-liu/LLaVA
# cd LLaVA
# pip install -e .

# for llava-next (1.6)
git clone https://github.com/LLaVA-VL/LLaVA-NeXT
cd LLaVA-NeXT
pip install -e .

Reproduction of LLaVA-1.5's paper results

You can check the environment install script and torch environment info to reproduce LLaVA-1.5's paper results. We found that differences in torch/CUDA versions can cause small variations in the results, so we provide results checked under different environments.

If you want to test on caption datasets such as COCO, RefCOCO, and NoCaps, you will need java==1.8.0 for the pycocoeval API to work. If you don't have it, you can install it with conda:

conda install openjdk=8

You can then check your Java version with java -version.

Comprehensive Evaluation Results of LLaVA Family Models

In the detailed results below, we aim to provide readers with information about the datasets included in lmms-eval and some specifics of each (we remain grateful for any corrections readers may have during our evaluation process).

We provide a Google Sheet for the detailed results of the LLaVA series models on different datasets. You can access the sheet here. It's a live sheet, and we are updating it with new results.

We also provide the raw data exported from Weights & Biases for the detailed results of the LLaVA series models on different datasets. You can access the raw data here.


If you want to test VILA, you should install the following dependencies:

pip install s2wrapper@git+https://github.com/bfshi/scaling_on_scales

Our development will continue on the main branch, and we encourage you to give us feedback on desired features and further improvements to the library, or to ask questions, in issues or PRs on GitHub.

Usage Examples

Evaluation of LLaVA on MME

python3 -m accelerate.commands.launch \
    --num_processes=8 \
    -m lmms_eval \
    --model llava \
    --model_args pretrained="liuhaotian/llava-v1.5-7b" \
    --tasks mme \
    --batch_size 1 \
    --log_samples \
    --log_samples_suffix llava_v1.5_mme \
    --output_path ./logs/

Evaluation of LLaVA on multiple datasets

python3 -m accelerate.commands.launch \
    --num_processes=8 \
    -m lmms_eval \
    --model llava \
    --model_args pretrained="liuhaotian/llava-v1.5-7b" \
    --tasks mme,mmbench_en \
    --batch_size 1 \
    --log_samples \
    --log_samples_suffix llava_v1.5_mme_mmbenchen \
    --output_path ./logs/

For other LLaVA variants, please change the conv_template in the model_args.

conv_template is an argument of the init function of llava in lmms_eval/models/llava.py; you can find the corresponding value in LLaVA's code, likely in the conv_templates dict in llava/conversation.py.
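
If you are unsure which template names exist, one way to inspect them is to print the keys of that dict from your LLaVA installation. This is a minimal sketch, assuming the dict is importable as llava.conversation.conv_templates; the module path may differ between LLaVA versions.

# List the conversation template names defined by the installed LLaVA code
# (assumes llava.conversation.conv_templates; adjust the import if your version differs)
python3 -c "from llava.conversation import conv_templates; print(sorted(conv_templates.keys()))"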

python3 -m accelerate.commands.launch \
    --num_processes=8 \
    -m lmms_eval \
    --model llava \
    --model_args pretrained="liuhaotian/llava-v1.6-mistral-7b,conv_template=mistral_instruct" \
    --tasks mme,mmbench_en \
    --batch_size 1 \
    --log_samples \
    --log_samples_suffix llava_v1.5_mme_mmbenchen \
    --output_path ./logs/

Evaluation of larger lmms (llava-v1.6-34b)

python3 -m accelerate.commands.launch \
    --num_processes=8 \
    -m lmms_eval \
    --model llava \
    --model_args pretrained="liuhaotian/llava-v1.6-34b,conv_template=mistral_direct" \
    --tasks mme,mmbench_en \
    --batch_size 1 \
    --log_samples \
    --log_samples_suffix llava_v1.5_mme_mmbenchen \
    --output_path ./logs/

Evaluation with a set of configurations, supporting evaluation of multiple models and datasets

python3 -m accelerate.commands.launch --num_processes=8 -m lmms_eval --config ./miscs/example_eval.yaml
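
The config file bundles the same flags you would otherwise pass on the command line, one entry per run. Below is a minimal sketch that writes a hypothetical config (my_eval.yaml) and launches it; it assumes the YAML fields mirror the CLI flags, so check miscs/example_eval.yaml in the repository for the exact schema.

# Write a hypothetical config with one entry per model/task combination
# (field names assumed to mirror the CLI flags; see miscs/example_eval.yaml for the canonical schema)
cat > ./miscs/my_eval.yaml <<'EOF'
- model: llava
  model_args: pretrained=liuhaotian/llava-v1.5-7b
  tasks: mme
  batch_size: 1
  log_samples: true
  log_samples_suffix: llava_v1.5_mme
  output_path: "./logs/"
- model: llava
  model_args: pretrained=liuhaotian/llava-v1.5-13b
  tasks: mme,mmbench_en
  batch_size: 1
  log_samples: true
  log_samples_suffix: llava_v1.5_13b
  output_path: "./logs/"
EOF

# Launch all runs defined in the config
python3 -m accelerate.commands.launch --num_processes=8 -m lmms_eval --config ./miscs/my_eval.yaml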

Evaluation of video model (llava-next-video-32B)

accelerate launch --num_processes 8 --main_process_port 12345 -m lmms_eval \
    --model llavavid \
    --model_args pretrained=lmms-lab/LLaVA-NeXT-Video-32B-Qwen,conv_template=qwen_1_5,video_decode_backend=decord,max_frames_num=32,mm_spatial_pool_mode=average,mm_newline_position=grid,mm_resampler_location=after \
    --tasks videomme \
    --batch_size 1 \
    --log_samples \
    --log_samples_suffix llava_vid_32B \
    --output_path ./logs/

Evaluation with naive model sharding for bigger model (llava-next-72b)

python3 -m lmms_eval \
    --model=llava \
    --model_args=pretrained=lmms-lab/llava-next-72b,conv_template=qwen_1_5,device_map=auto,model_name=llava_qwen \
    --tasks=pope,vizwiz_vqa_val,scienceqa_img \
    --batch_size=1 \
    --log_samples \
    --log_samples_suffix=llava_qwen \
    --output_path="./logs/" \
    --wandb_args=project=lmms-eval,job_type=eval,entity=llava-vl

Evaluation with SGLang for bigger model (llava-next-72b)

python3 -m lmms_eval \
	--model=llava_sglang \
	--model_args=pretrained=lmms-lab/llava-next-72b,tokenizer=lmms-lab/llavanext-qwen-tokenizer,conv_template=chatml-llava,tp_size=8,parallel=8 \
	--tasks=mme \
	--batch_size=1 \
	--log_samples \
	--log_samples_suffix=llava_qwen \
	--output_path=./logs/ \
	--verbosity=INFO

Supported models

Please check supported models for more details.

Supported tasks

Please check supported tasks for more details.
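
If you prefer to query your installed version directly, the CLI can usually enumerate the registered tasks; this mirrors lm-evaluation-harness behavior, so treat the flag as an assumption and fall back to the linked docs if your version differs.

# Print the names of all tasks registered in the current installation
# (assumes the lm-evaluation-harness-style "--tasks list" option is supported)
python3 -m lmms_eval --tasks list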

Add Customized Model and Dataset

Please refer to our documentation.

Acknowledgement

lmms_eval is a fork of lm-eval-harness. We recommend you read through the lm-eval-harness docs for relevant information.


Below are the changes we made to the original API:

  • Build context now only passes in the idx, and the image and doc are processed during the model-response phase. This is because the datasets now contain many images, and we can't store them in the doc as the original lm-eval-harness does; otherwise CPU memory would explode.
  • Instance.args (lmms_eval/api/instance.py) now contains a list of images to be input to lmms.
  • lm-eval-harness supports all HF language models with a single model class. Currently this is not possible for lmms because the input/output formats of lmms in HF are not yet unified. Therefore, we have to create a new class for each lmms model. This is not ideal, and we will try to unify them in the future.

During the initial stage of our project, we thank:


From v0.1 to v0.2, we thank the community for its support through pull requests (PRs):

Details are in lmms-eval/v0.2.0 release notes

Datasets:

  • VCR: Visual Caption Restoration (officially from the authors, MILA)
  • ConBench (officially from the authors, PKU/Bytedance)
  • MathVerse (officially from the authors, CUHK)
  • MM-UPD (officially from the authors, University of Tokyo)
  • WebSRC (from Hunter Heiden)
  • ScreenSpot (from Hunter Heiden)
  • RealworldQA (from Fanyi Pu, NTU)
  • Multi-lingual LLaVA-W (from Gagan Bhatia, UBC)

Models:

  • LLaVA-HF (officially from Huggingface)
  • Idefics-2 (from the lmms-lab team)
  • microsoft/Phi-3-Vision (officially from the authors, Microsoft)
  • LLaVA-SGlang (from the lmms-lab team)

Citations

@misc{lmms_eval2024,
    title={LMMs-Eval: Accelerating the Development of Large Multimodal Models},
    url={https://github.com/EvolvingLMMs-Lab/lmms-eval},
    author={Bo Li* and Peiyuan Zhang* and Kaichen Zhang* and Fanyi Pu* and Xinrun Du and Yuhao Dong and Haotian Liu and Yuanhan Zhang and Ge Zhang and Chunyuan Li and Ziwei Liu},
    publisher={Zenodo},
    version={v0.1.0},
    month={March},
    year={2024}
}
