We introduce the R1 paradigm to video understanding tasks and open-source the training code and data.
🤗 Models | 🤗 Datasets | Wandb Logs
Note
Although our insights are not guaranteed to be correct, we commit to sharing them truthfully and honestly. We welcome community feedback and discussions to improve our understanding of multimodal reasoning models.
- [2025/02/22] We release a provisional model, Open-R1-Video-7B, along with inference scripts and evaluation results.
- [2025/02/18] We release training code and data of Open-R1-Video!
We train Qwen2-VL-7B-Instruct on the simple video dataset open-r1-video-4k using 4 x A100 (80G) GPUs; the training utilizes only the video, the query, and the ground-truth answer (the letter of the correct option). We used only GRPO (pure reinforcement learning, without labeled reasoning trajectories) to train the model and achieved promising rewards during training. We release our wandb logs for reference.
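For context, GRPO samples a group of completions per prompt and normalizes each completion's reward within the group, so no learned value model is needed. A minimal sketch of the advantage computation (our illustration, not the training code):

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Compute group-relative advantages for one prompt.

    rewards: shape (G,), one scalar reward per sampled completion.
    Each completion's advantage is its reward normalized by the
    group mean and std, which replaces a learned value baseline.
    """
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: 4 sampled answers, only the 2nd one matched the ground truth.
print(grpo_advantages(torch.tensor([0.0, 1.0, 0.0, 0.0])))
```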
What We Did
- Introduced R1 to Video-LMMs (e.g., Qwen2-VL), based on huggingface/open-r1 and deepseek-ai/DeepSeek-R1.
- Open-sourced the simple training data open-r1-video-4k.
- The reformatted data is available in open-r1-video-4k.
- The video data is available in LLaVA-Video-large-swift.
Note
The training commands below are configured for a node of 4 x A100 (80GB). For different hardware and topologies, you may need to tune the batch size and number of gradient accumulation steps.
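As a rule of thumb when porting to other hardware, keep the effective batch size roughly constant. A tiny illustration (the values are placeholders, not the defaults in qwen-7b.sh):

```python
# Effective batch size = per-device batch * num GPUs * gradient accumulation steps.
per_device_batch = 1      # placeholder values; check qwen-7b.sh for the real ones
num_gpus = 4
grad_accum_steps = 2

print(per_device_batch * num_gpus * grad_accum_steps)  # 8
# E.g., on a single GPU, raising grad_accum_steps to 8 keeps the same effective batch.
```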
```bash
git clone https://github.com/Wang-Xiaodong1899/Open-R1-Video.git
cd Open-R1-Video
conda create -n r1 python=3.10
conda activate r1
pip3 install -e ".[dev]"
pip3 install flash_attn --no-build-isolation
cd qwen-vl-utils
pip install -e .
cd ..
```
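After installation, a quick optional sanity check that the key packages import:

```python
# Sanity-check the environment after installation.
import torch
import flash_attn                                # built via --no-build-isolation above
from qwen_vl_utils import process_vision_info    # editable install from qwen-vl-utils/

print(torch.__version__, "| CUDA available:", torch.cuda.is_available())
```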
```bash
# download data and put in data/
wget https://huggingface.co/datasets/Xiaodong/open-r1-video-4k/resolve/main/LLaVA-Video-large-swift-origin.jsonl
# like: data/LLaVA-Video-large-swift-origin.jsonl

# download videos
git lfs install
git clone https://huggingface.co/datasets/malterei/LLaVA-Video-large-swift
```
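To verify the download, you can peek at the first record of the jsonl file (we only print the keys, since the exact schema is best checked on your copy):

```python
import json

# Inspect the first sample of the training jsonl.
with open("data/LLaVA-Video-large-swift-origin.jsonl") as f:
    sample = json.loads(f.readline())

print(sample.keys())  # check the schema before training
```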
Note
Our training also supports a single A100 (80G) GPU. Just modify the GPU configuration and you're good to go!
We removed the format reward during 7B model training and slightly modified the final answer matching used to calculate the accuracy reward. See this commit.
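Conceptually, the accuracy reward checks whether the option letter inside the model's <answer> tag matches the ground-truth letter. A simplified sketch (not the exact matching rule from the commit above):

```python
import re

def accuracy_reward(completion: str, ground_truth_letter: str) -> float:
    """Return 1.0 if the letter in <answer>...</answer> matches the label.

    This is a simplified sketch of answer-letter matching; the actual
    rule in the repo differs slightly (see the commit referenced above).
    """
    match = re.search(r"<answer>\s*([A-D])", completion, re.IGNORECASE)
    if match and match.group(1).upper() == ground_truth_letter.upper():
        return 1.0
    return 0.0

print(accuracy_reward("<think>...</think><answer>B</answer>", "B"))  # 1.0
```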
To run GRPO on Qwen2-VL-7B:
```bash
bash qwen-7b.sh
```
Please refer to qwen-7b.sh for more details.
Run inference with the video reasoning model:
```bash
python infer.py
```
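infer.py follows the standard Qwen2-VL generation flow; a minimal sketch (the hub model ID and video path below are placeholders, not guaranteed paths):

```python
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

MODEL_ID = "Xiaodong/Open-R1-Video-7B"  # placeholder; use the released checkpoint path

model = Qwen2VLForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(MODEL_ID)

messages = [{
    "role": "user",
    "content": [
        {"type": "video", "video": "path/to/video.mp4"},  # placeholder path
        {"type": "text", "text": "What is happening in the video? Answer with the option letter."},
    ],
}]

# Build the chat prompt and extract the video frames.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
_, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], videos=video_inputs, padding=True, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens.
print(processor.batch_decode(output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)[0])
```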
Inference results:
Note
We use Lmms-eval to evaluate models.
| Benchmark | Qwen2-VL-7B-Instruct (w/o reasoning) | Qwen2-VL-7B-Instruct (w/ reasoning) | Open-R1-Video-7B (w/ reasoning) |
|---|---|---|---|
| LongVideoBench (16 frames) | 53.33 | 41.89 | 43.31 |
We provide a simple reformatting method to obtain the data for GRPO training, which utilizes only the video, the query, and the final answer. Please refer to format_video_data.py for more details.
Users can view the data in open-r1-video-4k. The original question/original answer fields are taken from the original dataset.
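The reformatting itself is straightforward; a hedged sketch of the idea (the field names and prompt template below are illustrative, see format_video_data.py for the actual logic):

```python
import json

# Illustrative instruction asking for <think>/<answer> tags, in the spirit of
# DeepSeek-R1 prompting; the real template lives in format_video_data.py.
TEMPLATE = (
    "{question}\n"
    "Think about the question first, then answer. Put your reasoning in "
    "<think></think> and only the option letter in <answer></answer>."
)

def reformat(sample: dict) -> dict:
    """Map an original QA sample to (video, query, answer letter)."""
    return {
        "video": sample["video"],                      # field names are assumptions
        "problem": TEMPLATE.format(question=sample["question"]),
        "solution": sample["answer"],                  # the ground-truth letter
    }

with open("data/LLaVA-Video-large-swift-origin.jsonl") as f:
    print(reformat(json.loads(f.readline())))
```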
We sincerely thank the open-source community for its contributions, including Open-R1, R1-multimodal, and other reproductions of DeepSeek-R1.
The related projects are as follows:
If you find this project useful, please consider citing us:
@misc{wang-2025-open-r1-video,
author = {Xiaodong Wang and Peixi Peng},
title = {Open-R1-Video},
year = {2025},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/Wang-Xiaodong1899/Open-R1-Video}}
}