Official PyTorch implementation of "Deep Video Inpainting" (CVPR 2019)
Dahun Kim*, Sanghyun Woo*, Joon-Young Lee, and In So Kweon. (*: equal contribution)
[Paper] [Project page] [Video results]
If you are also interested in video caption removal, please check [Paper] [Project page]
This code is tested with Python 3.6 and PyTorch 0.4.0; the Resample2d and Correlation dependencies compile against this version.
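A minimal environment sketch matching the tested versions above (assuming conda; the environment name and the torchvision pin are illustrative, not from the original repo):

```
# Create an isolated environment with the tested Python/PyTorch versions
conda create -n vinet python=3.6 -y
conda activate vinet
pip install torch==0.4.0          # version per the note above
pip install torchvision==0.2.1    # assumed companion release for 0.4.0
```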
- Download the trained weight 'save_agg_rec_512.pth' and place it in "./results/vinet_agg_rec/".
  Google Drive: [weight-512x512] [weight-256x256]
- Compile the Resample2d and Correlation dependencies:
  bash ./install.sh
- Run the demo (the results are saved in "./results/vinet_agg_rec/davis_512/"):
  python demo_vi.py
- Optional: run the video retargeting demo (Section 4.5):
  python demo_retarget.py
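The steps above can be chained as follows (a sketch that assumes the 512x512 weight has already been downloaded to the current directory; adjust the source path to wherever your browser saved it):

```
# Place the pretrained weight where the demo expects it
mkdir -p ./results/vinet_agg_rec
mv ./save_agg_rec_512.pth ./results/vinet_agg_rec/

# Build the Resample2d and Correlation CUDA/C extensions
bash ./install.sh

# Run inpainting; outputs land in ./results/vinet_agg_rec/davis_512/
python demo_vi.py
```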
If you find the code useful in your research, please cite:
@inproceedings{kim2019deep,
  title={Deep Video Inpainting},
  author={Kim, Dahun and Woo, Sanghyun and Lee, Joon-Young and So Kweon, In},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={5792--5801},
  year={2019}
}
@article{kim2019deeppami,
  author={Kim, Dahun and Woo, Sanghyun and Lee, Joon-Young and So Kweon, In},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={Recurrent Temporal Aggregation Framework for Deep Video Inpainting},
  year={2019},
  pages={1-1}
}