
Open R1 Video

We introduce R1's paradigm to video understanding tasks and open-source the training code and data.

🤗 Models | 🤗 Datasets | Wandb Logs

Note

Although our insights are not guaranteed to be correct, we commit to sharing them truthfully and honestly. We welcome community feedback and discussion to improve our understanding of multimodal reasoning models.

News

  • [2025/02/22] We release a provisional model Open-R1-Video-7B, inference scripts, and evaluation results.
  • [2025/02/18] We release training code and data of Open-R1-Video!

Our Findings

GRPO training that forces thinking can improve video understanding

We train Qwen2-VL-7B-Instruct on the simple video dataset open-r1-video-4k using 4 x A100 (80GB) GPUs; training uses only the video, the query, and the ground-truth answer (the letter of the correct option). We used only GRPO (pure reinforcement learning, without labeled reasoning trajectories) to train the model and achieved promising rewards during training. We release our wandb logs for reference.
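For intuition, here is a minimal sketch of the two ingredients this recipe relies on: a binary accuracy reward on the final answer letter, and GRPO's group-relative baseline. The <answer> tag format and the function names are illustrative assumptions, not the repository's exact code.

import re
from statistics import mean

def accuracy_reward(completion: str, answer_letter: str) -> float:
    """Return 1.0 if the completion's final answer matches the ground-truth
    letter, else 0.0. Assumes an R1-style <answer>X</answer> span (illustrative)."""
    m = re.search(r"<answer>\s*([A-E])\s*</answer>", completion)
    return 1.0 if m and m.group(1) == answer_letter.strip().upper() else 0.0

def group_relative_advantages(completions: list[str], answer_letter: str) -> list[float]:
    """GRPO's core idea: sample a group of completions per prompt, score each
    with the reward, and use the deviation from the group mean as the advantage
    (full GRPO also divides by the group's standard deviation). No value model
    or labeled reasoning trajectories are needed."""
    rewards = [accuracy_reward(c, answer_letter) for c in completions]
    baseline = mean(rewards)
    return [r - baseline for r in rewards]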

What We Did

Training Models

Note

The training commands below are configured for a node of 4 x A100 (80GB). For different hardware and topologies, you may need to tune the batch size and number of gradient accumulation steps.
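When moving to different hardware, a quick sanity check is to keep the effective batch size constant; a tiny illustrative calculation (the values are placeholders, not the repository's defaults):

# Effective batch size = GPUs x per-device batch size x gradient accumulation steps.
num_gpus = 4               # e.g. one node of 4 x A100 (80GB)
per_device_batch_size = 1  # placeholder value
grad_accum_steps = 2       # placeholder value

effective_batch_size = num_gpus * per_device_batch_size * grad_accum_steps
print(effective_batch_size)  # 8; on a single GPU, set grad_accum_steps = 8 to match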

Set up

git clone https://github.com/Wang-Xiaodong1899/Open-R1-Video.git
cd Open-R1-Video
conda create -n r1 python=3.10
conda activate r1
pip3 install -e ".[dev]"
pip3 install flash_attn --no-build-isolation
cd qwen-vl-utils
pip install -e .
cd ..

# download data and put in data/
wget https://huggingface.co/datasets/Xiaodong/open-r1-video-4k/resolve/main/LLaVA-Video-large-swift-origin.jsonl
# like: data/LLaVA-Video-large-swift-origin.jsonl

# download videos
git lfs install
git clone https://huggingface.co/datasets/malterei/LLaVA-Video-large-swift
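To sanity-check the download, you can peek at the first record of the jsonl file. This is a generic inspection snippet; the exact field names inside each record depend on the dataset:

import json

# Print the keys and a short preview of the first training record.
with open("data/LLaVA-Video-large-swift-origin.jsonl") as f:
    record = json.loads(f.readline())

print(record.keys())
print(json.dumps(record, indent=2)[:500])  # preview only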

GRPO on Qwen2-VL-7B

Note

Our training also supports a single A100 (80GB) GPU. Just adjust the GPU configuration and you're good to go!

We removed the format reward during 7B model training and slightly modified the final-answer matching used to calculate the accuracy reward. See this commit.
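The linked commit is the reference for the exact change; as a rough illustration, a relaxed final-answer matcher might tolerate several common ways a model states its letter (this sketch is an assumption, not the repository's code):

import re

def extract_answer_letter(text: str) -> str | None:
    """Pull the final option letter out of a completion, tolerating formats
    like '<answer>B</answer>', '(B)', 'B.', or 'The answer is B'."""
    m = re.search(r"<answer>\s*\(?([A-E])\)?\s*\.?\s*</answer>", text, re.IGNORECASE)
    if m:
        return m.group(1).upper()
    # Heuristic fallback: last standalone option letter mentioned in the text.
    letters = re.findall(r"\b([A-E])\b", text)
    return letters[-1] if letters else None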

To run GRPO on Qwen2-VL-7B:

bash qwen-7b.sh

Please refer to qwen-7b.sh for more details.

Evaluating models

Inference

Infer the video reasoning model!

python infer.py
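For a feel of what inference involves, here is a minimal sketch of the standard transformers + qwen_vl_utils flow for Qwen2-VL; the checkpoint id, video path, prompt, and generation settings are assumptions, and the repository's infer.py is the reference:

import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_id = "Xiaodong/Open-R1-Video-7B"  # assumed checkpoint id
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "video", "video": "path/to/video.mp4"},  # placeholder path
        {"type": "text", "text": "What is happening in this video? Think first, then answer."},
    ],
}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
trimmed = output_ids[:, inputs.input_ids.shape[1]:]  # drop the prompt tokens
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])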

Demo video: Video link

Inference results: (screenshot of the model's reasoning and final answer)

Evaluation

Note

We use lmms-eval to evaluate models.

Benchmark                  | Qwen2-VL-7B-Instruct (w/o reasoning) | Qwen2-VL-7B-Instruct (w/ reasoning) | Open-R1-Video-7B (w/ reasoning)
LongVideoBench (16 frames) | 53.33                                | 41.89                               | 43.31

RL Data Reformat

We provide a simple reformatting method to obtain the data for GRPO training, which uses only the video, the query, and the final answer. Please refer to format_video_data.py for more details.

Users can view the data in open-r1-video-4k. The original question and original answer fields come from the source dataset.
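format_video_data.py is the reference implementation; the reformat essentially reduces each sample to a (video, query, answer) triple. A minimal sketch, assuming illustrative field names in the source jsonl:

import json

# Convert original QA records into the minimal triples GRPO training uses.
with open("data/LLaVA-Video-large-swift-origin.jsonl") as src, \
     open("data/open-r1-video-grpo.jsonl", "w") as dst:
    for line in src:
        record = json.loads(line)
        sample = {
            "video": record["video"],     # path to the video file (assumed key)
            "query": record["question"],  # multiple-choice question (assumed key)
            "answer": record["answer"],   # letter of the correct option (assumed key)
        }
        dst.write(json.dumps(sample) + "\n")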

References & Acknowledgements

We sincerely thank the contributions of the open-source community, including reproductions of DeepSeek-R1 such as Open-R1 and R1-multimodal.

Citation

If you find this work useful, please consider citing:

@misc{wang-2025-open-r1-video,
  author = {Xiaodong Wang and Peixi Peng},
  title = {Open-R1-Video},
  year = {2025},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/Wang-Xiaodong1899/Open-R1-Video}}
}