Number it: Temporal Grounding Videos like Flipping Manga

Figure 1

Video Large Language Models (Vid-LLMs) excel at video comprehension but struggle with precise temporal localization. Number-Prompt (NumPro) is a novel method that adds unique numerical identifiers to video frames, turning Video Temporal Grounding (VTG) into an intuitive process akin to flipping through manga panels. NumPro significantly improves VTG performance without additional computational cost, achieving gains of up to 6.9% in mIoU for moment retrieval and 8.5% in mAP for highlight detection.

Note: If you have any questions about this repository or the related paper, feel free to open an issue. All data corresponding to the paper will be made available on Google Drive.

Get Started

git clone https://github.com/yongliang-wu/NumPro.git
cd NumPro
conda create -n numpro python=3.10
conda activate numpro
pip install -r requirements.txt

Data Preparation

Download

To get started with the data, please follow these steps:

  1. Download the required video datasets from their official sources.

  2. Extract all downloaded datasets into the data folder.

  3. Download our instruction dataset for training from Google Drive and put it into the data folder.

Note: For convenience, we have uploaded all the training videos (sampled at 1 FPS) to Hugging Face.

Preprocess

For training NumPro-FT, we need to extract frames from the videos at 0.5 FPS and overlay frame numbers on them. We provide the code for this process in the preprocess folder.

python preprocess/anet.py
python preprocess/didemo.py
python preprocess/internvid.py

Please make sure all the folder paths are set correctly.
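
For readers who want to adapt the numbering step to their own data, below is a minimal sketch in Python. It is not the repository's preprocessing code: the directory paths, font, red color, and bottom-right placement are illustrative assumptions, and frames are assumed to have already been extracted at 0.5 FPS (e.g., with ffmpeg -i video.mp4 -vf fps=0.5 frames/%05d.jpg).

# Minimal sketch of overlaying frame numbers (illustrative only; see the scripts above
# for the actual preprocessing). Paths, font, color, and placement are assumptions.
import os
from PIL import Image, ImageDraw, ImageFont

def number_frames(frames_dir, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    try:
        font = ImageFont.truetype("DejaVuSans-Bold.ttf", 40)  # any available TTF works
    except OSError:
        font = ImageFont.load_default()
    for idx, name in enumerate(sorted(os.listdir(frames_dir)), start=1):
        img = Image.open(os.path.join(frames_dir, name)).convert("RGB")
        draw = ImageDraw.Draw(img)
        w, h = img.size
        # Draw the 1-based frame index near the bottom-right corner in red.
        draw.text((w - 80, h - 60), str(idx), fill=(255, 0, 0), font=font)
        img.save(os.path.join(out_dir, name))

number_frames("data/anet/frames/v_example", "data/anet/frames_numbered/v_example")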

Training NumPro-FT

To begin, download the required model checkpoints and place them in the pretrained folder:

  1. LongVA-7B-DPO model from Hugging Face
  2. CLIP vision encoder from OpenAI

You can use the following commands to download them:

huggingface-cli download lmms-lab/LongVA-7B-DPO --local-dir ./pretrained/LongVA-7B-DPO
huggingface-cli download openai/clip-vit-large-patch14-336 --local-dir ./pretrained/clip-vit-large-patch14-336

Then, you can start training with the following command:

sh scripts/train.sh

Training requires approximately 35 GB of GPU memory per device with a batch size of 1, and takes around 24 hours to complete 3 epochs on 8 NVIDIA H800 GPUs.

Inference

Please download the annotation files for testing from Google Drive and put them into the data folder.

NumPro-FT

Download the checkpoint from Google Drive and put it into the checkpoints folder.

Moment Retrieval

LORA_PATH="checkpoints/longva_7b_dpo_NumPro_FT"

python eval/numpro_ft_mr.py --lora_path $LORA_PATH

Highlight Detection

LORA_PATH="checkpoints/longva_7b_dpo_NumPro_FT"

python eval/numpro_ft_hd.py --lora_path $LORA_PATH

NumPro (Training-Free)

Moment Retrieval

python eval/qwen2_vl_7b_mr.py

Highlight Detection

python eval/qwen2_vl_7b_hd.py
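
In this training-free setting, the core recipe is to feed the model numbered frames, ask it to answer with frame numbers, and then parse those numbers from its free-form reply. The sketch below only illustrates that parsing step; the prompt wording and the model reply are hypothetical, and the actual prompts and model calls are in the eval scripts above.

# Illustration of the training-free NumPro recipe: query with numbered frames and
# parse frame numbers from the reply. Prompt and reply here are hypothetical; see
# eval/qwen2_vl_7b_mr.py for the actual implementation.
import re

prompt = (
    "The frames of this video are numbered in the corner. "
    "During which frames does the person open the refrigerator? "
    "Answer with the start and end frame numbers."
)

reply = "The action happens from frame 12 to frame 25."  # hypothetical model output

numbers = [int(n) for n in re.findall(r"\d+", reply)]
start_frame, end_frame = numbers[0], numbers[-1]
print(start_frame, end_frame)  # 12 25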

Evaluation

We provide the evaluation results of NumPro-FT on Google Drive for reference and comparison.

For evaluation metrics and implementation details, please refer to the evaluation code from TimeChat.

Important Note: All results are produced at 0.5 frames per second (FPS). To convert frame numbers to 1 FPS timestamps (i.e., seconds), simply multiply them by 2.
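
A small worked example of this conversion, together with the temporal IoU behind the mIoU metric (the spans are hypothetical; the official metrics come from TimeChat's evaluation code):

# Worked example of the FPS note above: predictions are frame indices at 0.5 FPS,
# so multiplying by 2 gives timestamps in seconds (equivalently, 1 FPS frame indices).
# The spans are hypothetical; official metrics come from TimeChat's evaluation code.
def temporal_iou(pred, gt):
    # Intersection-over-union of two [start, end] spans in seconds.
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

start_frame, end_frame = 12, 25               # model output at 0.5 FPS
pred_span = (start_frame * 2, end_frame * 2)  # -> (24, 50) seconds
gt_span = (22.0, 48.5)                        # hypothetical ground-truth span
print(temporal_iou(pred_span, gt_span))       # 0.875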

Acknowledgement

Our implementation builds on several open-source repositories, including LongVA and TimeChat. We thank the authors for their excellent work.
