A Toolkit for Evaluating Large Vision-Language Models.
🏆 OC Leaderboard • 📊 Datasets & Models • 🏗️ Quickstart • 🛠️ Development • 🎯 Goal • 🖊️ Citation
VLMEvalKit (python package name: vlmeval) is an open-source evaluation toolkit for large vision-language models (LVLMs). It enables one-command evaluation of LVLMs on various benchmarks, without the heavy workload of data preparation across multiple repositories. In VLMEvalKit, we adopt generation-based evaluation for all LVLMs, and provide evaluation results obtained with both exact matching and LLM-based answer extraction.
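A typical one-command run looks like the sketch below. The `--data` / `--model` flag names follow the pattern shown in the Quickstart (check there for the authoritative command line); the dataset name comes from the tables below and the model key from `vlmeval.config.supported_VLM`. The `subprocess` wrapper is only there to keep the example in Python; in practice you would run the same command directly from the shell.

```python
# A minimal sketch of a single evaluation run, invoking run.py from Python for illustration.
import subprocess

subprocess.run(
    [
        'python', 'run.py',
        '--data', 'MMBench_DEV_EN',        # a dataset name from the tables below
        '--model', 'idefics_9b_instruct',  # a model key from vlmeval.config.supported_VLM
    ],
    check=True,
)
```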
News
- [2024-08-06] We have supported the TaskMeAnything ImageQA-Random dataset, thanks to weikaih04 🔥🔥🔥
- [2024-08-05] We have supported a new evaluation strategy for AI2D, which does not mask the corresponding area when choices are uppercase letters; instead, the area is annotated with a rectangular contour. Set the dataset name to `AI2D_TEST_NO_MASK` to evaluate under this setting (the leaderboard still uses the previous setting)
- [2024-08-05] We have supported Mantis, thanks to BrenchCC 🔥🔥🔥
- [2024-08-05] We have supported Q-Bench and A-Bench, thanks to zzc-1998🔥🔥🔥
- [2024-07-29] We have supported Yi-Vision🔥🔥🔥
- [2024-07-27] The VLMEvalKit Technical Report has been accepted by ACM MM '24 OpenSource 🔥🔥🔥
- [2024-07-27] We have supported MMDU, one of the first multi-turn & multi-image benchmarks, thanks to Liuziyu77 🔥🔥🔥
- [2024-07-25] We have supported VILA, thanks to amitbcp; evaluation results coming soon 🔥🔥🔥
- [2024-07-25] We have supported Ovis1.5, thanks to runninglsy 🔥🔥🔥
- [2024-07-23] We have supported Video-LLaVA 🔥🔥🔥, the first Video-LLM supported by our repository! Use this fork to install Video-LLaVA (recommended), or install it via transformers!
The performance numbers on our official multi-modal leaderboards can be downloaded from here!
OpenVLM Leaderboard: Download All DETAILED Results.
Supported Image Understanding Datasets
- By default, all evaluation results are presented on the OpenVLM Leaderboard.
Dataset | Dataset Names (for run.py) | Task | Dataset | Dataset Names (for run.py) | Task |
---|---|---|---|---|---|
MMBench Series: MMBench, MMBench-CN, CCBench | MMBench_DEV_[EN/CN] MMBench_TEST_[EN/CN] MMBench_DEV_[EN/CN]_V11 MMBench_TEST_[EN/CN]_V11 CCBench | Multi-choice Question (MCQ) | MMStar | MMStar | MCQ |
MME | MME | Yes or No (Y/N) | SEEDBench Series | SEEDBench_IMG SEEDBench2 SEEDBench2_Plus | MCQ |
MM-Vet | MMVet | VQA | MMMU | MMMU_[DEV_VAL/TEST] | MCQ |
MathVista | MathVista_MINI | VQA | ScienceQA_IMG | ScienceQA_[VAL/TEST] | MCQ |
COCO Caption | COCO_VAL | Caption | HallusionBench | HallusionBench | Y/N |
OCRVQA* | OCRVQA_[TESTCORE/TEST] | VQA | TextVQA* | TextVQA_VAL | VQA |
ChartQA* | ChartQA_TEST | VQA | AI2D | AI2D_[TEST/TEST_NO_MASK] | MCQ |
LLaVABench | LLaVABench | VQA | DocVQA+ | DocVQA_[VAL/TEST] | VQA |
InfoVQA+ | InfoVQA_[VAL/TEST] | VQA | OCRBench | OCRBench | VQA |
RealWorldQA | RealWorldQA | MCQ | POPE | POPE | Y/N |
Core-MM- | CORE_MM | VQA | MMT-Bench | MMT-Bench_[VAL/VAL_MI/ALL/ALL_MI] | MCQ |
MLLMGuard- | MLLMGuard_DS | VQA | AesBench+ | AesBench_[VAL/TEST] | MCQ |
VCR-wiki+ | VCR_[EN/ZH]_[EASY/HARD]_[ALL/500/100] | VQA | MMLongBench-Doc+ | MMLongBench_DOC | VQA |
BLINK | BLINK | MCQ | MathVision+ | MathVision MathVision_MINI | VQA |
MT-VQA+ | MTVQA_TEST | VQA | MMDU+ | MMDU | VQA (multi-turn) |
Q-Bench1+ | Q-Bench1_[VAL/TEST] | MCQ | A-Bench+ | A-Bench_[VAL/TEST] | MCQ |
TaskMeAnything ImageQA Random+ | TaskMeAnything_v1_imageqa_random | MCQ | | | |
* We only provide a subset of the evaluation results, since some VLMs do not yield reasonable results under the zero-shot setting
+ The evaluation results are not available yet
- Only inference is supported in VLMEvalKit
If you set a judge LLM API key, VLMEvalKit will use the judge LLM to extract answers from the model outputs; otherwise it falls back to exact matching mode (searching the output string for "Yes", "No", "A", "B", "C", ...). Exact matching can only be applied to Yes-or-No tasks and multiple-choice tasks.
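As a rough illustration of what exact matching means here (this is a simplified sketch, not the actual extraction code used by VLMEvalKit), the fallback mode essentially scans the raw output for a recognizable option label or Yes/No token:

```python
import re

# Simplified illustration of exact-matching answer extraction; the real implementation
# handles many more output formats and edge cases.
def exact_match(output: str, labels=('Yes', 'No', 'A', 'B', 'C', 'D')):
    for label in labels:
        if re.search(rf'\b{re.escape(label)}\b', output):
            return label
    return None  # no recognizable answer: the sample cannot be scored without a judge LLM

print(exact_match('The answer is B.'))        # -> 'B'
print(exact_match('Yes, the apple is red.'))  # -> 'Yes'
```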
Supported Video Understanding Datasets
Dataset | Dataset Names (for run.py) | Task | Dataset | Dataset Names (for run.py) | Task |
---|---|---|---|---|---|
MMBench-Video | MMBench-Video | VQA | Video-MME | Video-MME | MCQ |
Supported API Models
GPT-4V (20231106, 20240409) 🎞️🚅 | GPT-4o 🎞️🚅 | Gemini-1.0-Pro 🎞️🚅 | Gemini-1.5-Pro 🎞️🚅 | Step-1V 🎞️🚅 |
---|---|---|---|---|
Reka-[Edge / Flash / Core]🚅 | Qwen-VL-[Plus / Max] 🎞️🚅 | Claude3-[Haiku / Sonnet / Opus] 🎞️🚅 | GLM-4v 🚅 | CongRong 🎞️🚅 |
Claude3.5-Sonnet 🎞️🚅 | GPT-4o-Mini 🎞️🚅 | Yi-Vision 🎞️🚅 | | |
Supported PyTorch / HF Models
🎞️: Supports multiple images as input.
🚅: The model can be used without any additional configuration/operation.
🎬: Supports video as input.
Transformers Version Recommendation:
Note that some VLMs may not run under certain transformers versions; we recommend the following settings to evaluate each VLM:
- Please use `transformers==4.33.0` for: Qwen series, Monkey series, InternLM-XComposer series, mPLUG-Owl2, OpenFlamingo v2, IDEFICS series, VisualGLM, MMAlaya, ShareCaptioner, MiniGPT-4 series, InstructBLIP series, PandaGPT, VXVERSE, GLM-4v-9B.
- Please use `transformers==4.37.0` for: LLaVA series, ShareGPT4V series, TransCore-M, LLaVA (XTuner), CogVLM series, EMU2 series, Yi-VL series, MiniCPM-[V1/V2], OmniLMM-12B, DeepSeek-VL series, InternVL series, Cambrian series, VILA series.
- Please use `transformers==4.40.0` for: IDEFICS2, Bunny-Llama3, MiniCPM-Llama3-V2.5, 360VL-70B, Phi-3-Vision, WeMM.
- Please use `transformers==latest` for: LLaVA-Next series, PaliGemma-3B, Chameleon series, Video-LLaVA-7B-HF, Ovis series, Mantis series.
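If you are unsure which version is installed in your current environment, a quick check (nothing vlmeval-specific) is:

```python
# Print the installed transformers version to compare against the recommendations above.
import transformers
print(transformers.__version__)  # e.g. '4.37.0' is the recommendation for the LLaVA series
```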
```python
# Demo
from vlmeval.config import supported_VLM
model = supported_VLM['idefics_9b_instruct']()
# Forward Single Image
ret = model.generate(['assets/apple.jpg', 'What is in this image?'])
print(ret)  # The image features a red apple with a leaf on it.
# Forward Multiple Images
ret = model.generate(['assets/apple.jpg', 'assets/apple.jpg', 'How many apples are there in the provided images?'])
print(ret)  # There are two apples in the provided images.
```
See [QuickStart | 快速开始] for a quick start guide.
To develop custom benchmarks or VLMs, or to contribute other code to VLMEvalKit, please refer to [Development_Guide | 开发指南].
Call for contributions
To promote contributions from the community and share the corresponding credit (in the next report update):
- All Contributions will be acknowledged in the report.
- Contributors with 3 or more major contributions (implementing an MLLM, a benchmark, or a major feature) can join the author list of the VLMEvalKit Technical Report on arXiv. Eligible contributors can create an issue or DM kennyutc in the VLMEvalKit Discord channel.
The codebase is designed to:
- Provide an easy-to-use, open-source evaluation toolkit that makes it convenient for researchers & developers to evaluate existing LVLMs and makes evaluation results easy to reproduce.
- Make it easy for VLM developers to evaluate their own models. To evaluate a VLM on multiple supported benchmarks, one only needs to implement a single `generate_inner()` function; all other workloads (data downloading, data preprocessing, prediction inference, metric calculation) are handled by the codebase. A minimal sketch follows this list.
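For reference, a custom wrapper roughly follows the pattern below. This is a hedged sketch: the `BaseModel` import path and the `message` format mirror the existing wrappers at the time of writing, while `MyVLM`, its attributes, and the stubbed inference call are purely illustrative. Consult the Development Guide for the authoritative interface.

```python
from vlmeval.vlm.base import BaseModel  # assumption: the base class used by existing wrappers

class MyVLM(BaseModel):
    # Flags commonly set on existing wrappers (illustrative values).
    INSTALL_REQ = False   # True if the model needs a dedicated repo to be installed
    INTERLEAVE = False    # True if the model accepts interleaved image-text input

    def __init__(self, model_path='my-org/my-vlm', **kwargs):
        super().__init__()
        self.model_path = model_path  # load the real model / processor here

    def generate_inner(self, message, dataset=None):
        # `message` is a list of dicts such as
        # [{'type': 'image', 'value': '/path/to/img.jpg'}, {'type': 'text', 'value': 'Question?'}]
        prompt = '\n'.join(x['value'] for x in message if x['type'] == 'text')
        images = [x['value'] for x in message if x['type'] == 'image']
        # Replace this stub with the model's actual inference call and return a plain string.
        return f'[stub] {len(images)} image(s); prompt: {prompt}'
```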
The codebase is not designed to:
- Reproduce the exact accuracy numbers reported in the original papers of all third-party benchmarks. The reason is two-fold:
  - VLMEvalKit uses generation-based evaluation for all VLMs (optionally with LLM-based answer extraction), while some benchmarks use different approaches (e.g., SEEDBench uses PPL-based evaluation). For those benchmarks, we compare both scores in the corresponding results. We encourage developers to support other evaluation paradigms in the codebase.
  - By default, we use the same prompt template for all VLMs when evaluating a benchmark, while some VLMs have their own specific prompt templates (which may not yet be covered by the codebase). We encourage VLM developers to implement their own prompt templates in VLMEvalKit if they are not currently covered; that will help improve reproducibility.
If you find this work helpful, please consider starring 🌟 this repo. Thanks for your support!
If you use VLMEvalKit in your research or wish to refer to published open-source evaluation results, please use the following BibTeX entry, as well as the BibTeX entry corresponding to the specific VLM / benchmark you used.
```bibtex
@misc{duan2024vlmevalkit,
    title={VLMEvalKit: An Open-Source Toolkit for Evaluating Large Multi-Modality Models},
    author={Haodong Duan and Junming Yang and Yuxuan Qiao and Xinyu Fang and Lin Chen and Yuan Liu and Xiaoyi Dong and Yuhang Zang and Pan Zhang and Jiaqi Wang and Dahua Lin and Kai Chen},
    year={2024},
    eprint={2407.11691},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2407.11691},
}
```