VideoAgent: Long-form Video Understanding with Large Language Model as Agent


This repo provides the PyTorch source code for our paper: VideoAgent: Long-form Video Understanding with Large Language Model as Agent (ECCV 2024). Check out the project page!

🔮 Abstract

Long-form video understanding represents a significant challenge within computer vision, demanding a model capable of reasoning over long multi-modal sequences. Motivated by the human cognitive process for long-form video understanding, we emphasize interactive reasoning and planning over the ability to process lengthy visual inputs. We introduce a novel agent-based system, VideoAgent, that employs a large language model as a central agent to iteratively identify and compile crucial information to answer a question, with vision-language foundation models serving as tools to translate and retrieve visual information. Evaluated on the challenging EgoSchema and NExT-QA benchmarks, VideoAgent achieves 54.1% and 71.3% zero-shot accuracy while using only 8.4 and 8.2 frames on average. These results demonstrate the superior effectiveness and efficiency of our method over current state-of-the-art methods, highlighting the potential of agent-based approaches in advancing long-form video understanding.
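To make the iterative process described above concrete, here is a minimal, self-contained Python sketch of a VideoAgent-style loop: the agent answers from a sparse set of captioned frames and only retrieves more frames when its confidence is low. All helper names (`caption_frames`, `llm_answer`, `llm_request_frames`) and the confidence heuristic are illustrative stand-ins, not the actual API of this repo.

```python
# Hypothetical sketch of the iterative agent loop from the abstract.
# The helpers below are stubs standing in for the LLM agent and the
# vision-language tools used in the real system.

def caption_frames(frame_ids):
    """Stand-in for a vision-language model that captions frames."""
    return {i: f"caption of frame {i}" for i in frame_ids}

def llm_answer(question, captions):
    """Stand-in for the LLM agent: returns (answer, confidence).

    A real agent would reason over the captions; here confidence
    simply grows with the amount of visual evidence gathered.
    """
    confidence = min(1.0, 0.12 * len(captions))
    return "A", confidence

def llm_request_frames(question, captions, total_frames):
    """Stand-in: ask the LLM which new frames to retrieve."""
    seen = set(captions)
    candidates = [i for i in range(0, total_frames, total_frames // 4)
                  if i not in seen]
    return candidates[:2]  # retrieve a small batch per round

def video_agent(question, total_frames=180, max_rounds=4, threshold=0.9):
    # Start from a sparse, uniformly sampled set of frames.
    frame_ids = list(range(0, total_frames, total_frames // 5))
    captions = caption_frames(frame_ids)
    answer = None
    for _ in range(max_rounds):
        answer, confidence = llm_answer(question, captions)
        if confidence >= threshold:  # confident enough to stop
            break
        # Otherwise, iteratively retrieve more informative frames.
        new_ids = llm_request_frames(question, captions, total_frames)
        if not new_ids:
            break
        captions.update(caption_frames(new_ids))
    return answer, len(captions)
```

With these toy stubs, the loop stops after accumulating a handful of frames rather than processing the whole video, which mirrors the paper's finding that only ~8 frames are needed on average.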

🚀 Getting Started

1. Download the required files from https://drive.google.com/drive/folders/1ZNty_n_8Jp8lObudbckkObHnYCvakgvY?usp=sharing
2. Run the agent:

```
python main.py
```

3. Parse the results:

```
python parse_results.py
```

🎯 Citation

If you use this repo in your research, please cite it as follows:

```
@inproceedings{VideoAgent,
  title={VideoAgent: Long-form Video Understanding with Large Language Model as Agent},
  author={Wang, Xiaohan and Zhang, Yuhui and Zohar, Orr and Yeung-Levy, Serena},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2024}
}
```