[Feature] Support VindLU multi-modality algorithm #2667
Merged
Conversation
hukkai reviewed on Sep 4, 2023.
cir7 force-pushed the lilin/support_vindlu branch from 96cd2ac to 3b7d2eb on September 6, 2023 at 07:48.
Codecov posted a patch coverage report for this pull request.
cir7 force-pushed the lilin/support_vindlu branch from 823638e to a926031 on September 6, 2023 at 13:38.
cir7 force-pushed the lilin/support_vindlu branch from a926031 to fb082c0 on September 6, 2023 at 13:44.
Dai-Wenxun approved these changes on Sep 7, 2023.
VindLU
VindLU: A Recipe for Effective Video-and-Language Pretraining
Abstract
The last several years have witnessed remarkable progress in video-and-language (VidL) understanding. However, most modern VidL approaches use complex and specialized model architectures and sophisticated pretraining protocols, making the reproducibility, analysis and comparisons of these frameworks difficult. Hence, instead of proposing yet another new VidL model, this paper conducts a thorough empirical study demystifying the most important factors in the VidL model design. Among the factors that we investigate are (i) the spatiotemporal architecture design, (ii) the multimodal fusion schemes, (iii) the pretraining objectives, (iv) the choice of pretraining data, (v) pretraining and finetuning protocols, and (vi) dataset and model scaling. Our empirical study reveals that the most important design factors include: temporal modeling, video-to-text multimodal fusion, masked modeling objectives, and joint training on images and videos. Using these empirical insights, we then develop a step-by-step recipe, dubbed VindLU, for effective VidL pretraining. Our final model trained using our recipe achieves comparable or better than state-of-the-art results on several VidL tasks without relying on external CLIP pretraining. In particular, on the text-to-video retrieval task, our approach obtains 61.2% on DiDeMo, and 55.0% on ActivityNet, outperforming current SOTA by 7.8% and 6.1% respectively. Furthermore, our model also obtains state-of-the-art video question-answering results on ActivityNet-QA, MSRVTT-QA, MSRVTT-MC and TVQA. Our code and pretrained models are publicly available at: https://github.com/klauscc/VindLU.
Results and Models
Video Retrieval on MSRVTT-9k
Video Question-Answering on MSRVTT-QA
Multiple-Choice Question-Answering on MSRVTT-MC (Inference)
For more details on data preparation, you can refer to the MSRVTT data preparation guide.
Train
You can use the following command to train a model.
python tools/train.py ${CONFIG_FILE} [optional arguments]
Example: train the VindLU model on the MSRVTT-9k dataset with a deterministic setting and periodic validation, as shown below.
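A minimal sketch of such a command, assuming the repository's standard training script supports the --seed and --deterministic options; the config path is illustrative, so check configs/multimodal/vindlu/ for the exact filename.
# Config path is illustrative; verify the exact filename in configs/multimodal/vindlu/.
python tools/train.py configs/multimodal/vindlu/vindlu_beit-base_8x16_retrieval_msrvtt-9k.py \
    --seed 0 --deterministic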
For more details, you can refer to the Training part in the Training and Test Tutorial.
Test
You can use the following command to test a model.
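Assuming the repository's standard test entry point, the general form mirrors the train command above:
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]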
Example: test the VindLU model on the MSRVTT-9k dataset and dump the result to a pkl file, as sketched below.
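A sketch of such a command, assuming the same illustrative config as in the train example, a locally downloaded checkpoint, and the test script's --dump option for saving results.
# Config path and checkpoint name are illustrative placeholders.
python tools/test.py configs/multimodal/vindlu/vindlu_beit-base_8x16_retrieval_msrvtt-9k.py \
    checkpoints/SOME_CHECKPOINT.pth --dump result.pkl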
For more details, you can refer to the Test part in the Training and Test Tutorial.
Citation
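A BibTeX entry for the paper above may look like the following; this is a sketch based on the CVPR 2023 publication, so confirm the fields against the authors' official citation.
@inproceedings{cheng2023vindlu,
  title     = {VindLU: A Recipe for Effective Video-and-Language Pretraining},
  author    = {Cheng, Feng and Wang, Xizi and Lei, Jie and Crandall, David and Bansal, Mohit and Bertasius, Gedas},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2023}
}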