Video Large Language Models (Vid-LLMs) excel at video comprehension but struggle with precise temporal localization. Number-Prompt (NumPro) addresses this by adding unique numerical identifiers to video frames, turning Video Temporal Grounding (VTG) into an intuitive process similar to flipping through manga panels. The technique significantly enhances VTG performance without additional computational cost, achieving up to a 6.9% improvement in mIoU for moment retrieval and 8.5% in mAP for highlight detection.
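To illustrate the core idea, below is a minimal sketch of stamping a frame with its numerical identifier using PIL. The font, color, and placement are illustrative assumptions, not the exact settings used in the paper or the preprocessing scripts in this repository.

from PIL import Image, ImageDraw, ImageFont

def add_frame_number(frame: Image.Image, index: int) -> Image.Image:
    # Overlay the frame's numerical identifier so the Vid-LLM can refer to
    # moments by frame number. Position, font, and color are illustrative only.
    draw = ImageDraw.Draw(frame)
    font = ImageFont.load_default()  # in practice, a larger TTF font is preferable
    width, height = frame.size
    draw.text((width - 60, height - 40), str(index), fill=(255, 0, 0), font=font)
    return frame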
Note: If you have any questions about this repository or the related paper, feel free to open an issue. All data corresponding to the paper will be kept up to date on Google Drive.
git clone https://github.com/yongliangwu/NumPro.git
cd NumPro
conda create -n numpro python=3.10
conda activate numpro
pip install -r requirements.txt
To get started with the data, please follow these steps:
- Download the video datasets from:
- Extract all downloaded datasets into the data folder.
- Download our instruction dataset for training from Google Drive and put it into the data folder.
Note: For convenience, we have uploaded all the training videos (sampled at 1 FPS) to Hugging Face.
For training NumPro-FT, we need to extract frames from the videos at 0.5 FPS and add numbers to them. We provide the code for this process in the preprocess folder.
python preprocess/anet.py
python preprocess/didemo.py
python preprocess/internvid.py
Please make sure all the folder paths are set correctly.
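For reference, the dataset-specific scripts above follow the same general recipe; the sketch below shows one way to sample a video at 0.5 FPS and number the sampled frames with OpenCV and PIL. Paths, overlay position, and font are hypothetical and do not mirror the scripts' exact settings.

import cv2
from PIL import Image, ImageDraw, ImageFont

def extract_and_number(video_path: str, out_dir: str, target_fps: float = 0.5) -> None:
    # Keep one frame every `step` native frames to approximate the target FPS,
    # then stamp each kept frame with its index before saving it.
    cap = cv2.VideoCapture(video_path)
    step = max(int(round(cap.get(cv2.CAP_PROP_FPS) / target_fps)), 1)
    frame_idx, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            ImageDraw.Draw(image).text((10, 10), str(saved), fill=(255, 0, 0),
                                       font=ImageFont.load_default())
            image.save(f"{out_dir}/{saved:05d}.jpg")
            saved += 1
        frame_idx += 1
    cap.release()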
To begin, download the required model checkpoints and place them in the pretrained folder:
- LongVA-7B-DPO model from Hugging Face
- CLIP vision encoder from OpenAI
You can use the following commands to download them:
huggingface-cli download lmms-lab/LongVA-7B-DPO --local-dir ./pretrained/LongVA-7B-DPO
huggingface-cli download openai/clip-vit-large-patch14-336 --local-dir ./pretrained/clip-vit-large-patch14-336
Then, you can start training with the following command:
sh scripts/train.sh
Training requires approximately 35 GB of GPU memory per device with a batch size of 1, and takes around 24 hours to complete 3 epochs on 8 NVIDIA H800 GPUs.
Please download the annotation files for testing from Google Drive and put them into the data folder.
Download the checkpoint from Google Drive and put it into the checkpoints folder.
LORA_PATH="checkpoints/longva_7b_dpo_NumPro_FT"
python eval/numpro_ft_mr.py --lora_path $LORA_PATH
LORA_PATH="checkpoints/longva_7b_dpo_NumPro_FT"
python eval/numpro_ft_hd.py --lora_path $LORA_PATH
python eval/qwen2_vl_7b_mr.py
python eval/qwen2_vl_7b_hd.py
We provide the evaluation results of NumPro-FT through Google Drive for reference and comparison.
For evaluation metrics and implementation details, please refer to the evaluation code from TimeChat.
Important Note: All results are processed at 0.5 frames per second (FPS). To convert to 1 FPS timestamps, simply multiply the frame numbers by 2.
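As a quick illustration of this conversion (the predicted frame indices below are hypothetical):

# Predicted frame indices at 0.5 FPS correspond to one frame every 2 seconds,
# so multiplying by 2 yields timestamps in seconds (i.e., 1 FPS frame numbers).
pred_start_frame, pred_end_frame = 12, 47   # hypothetical model output
start_time, end_time = pred_start_frame * 2, pred_end_frame * 2
print(start_time, end_time)                 # -> 24 94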
Our implementation is based on the following repositories:
- https://github.com/huangb23/VTimeLLM
- https://github.com/RenShuhuai-Andy/TimeChat
- https://github.com/EvolvingLMMs-Lab/LongVA
- https://github.com/xiaoachen98/Open-LLaVA-NeXT
- https://github.com/LaBaZh/OpenLongVA
We thank the authors for their excellent work.