This is the implementation of Learning a Deep Dual Attention Network for Video Super-Resolution (IEEE TIP).
The architecture of our proposed deep dual attention network (DDAN).
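For intuition only, here is a minimal NumPy sketch of the two attention operations a dual attention design typically combines: channel attention (gating each feature channel from a global-average-pooled descriptor) and spatial attention (gating each position from a cross-channel descriptor). This is our own illustration, not the code or exact formulation used in DDAN; the learned convolutions of the real network are replaced by plain sigmoid gates.

```python
import numpy as np

def channel_attention(feat):
    # feat: (H, W, C) feature map
    desc = feat.mean(axis=(0, 1))            # global average pool -> (C,)
    weights = 1.0 / (1.0 + np.exp(-desc))    # sigmoid gate per channel
    return feat * weights                    # rescale each channel

def spatial_attention(feat):
    desc = feat.mean(axis=2, keepdims=True)  # average over channels -> (H, W, 1)
    weights = 1.0 / (1.0 + np.exp(-desc))    # sigmoid gate per position
    return feat * weights                    # rescale each spatial location

feat = np.random.randn(8, 8, 16).astype(np.float32)
out = spatial_attention(channel_attention(feat))
print(out.shape)  # (8, 8, 16)
```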
Dependencies:
- python==3.6
- Tensorflow==1.13.1
- numpy==1.16.4
- scipy==1.2.1
- Pillow==8.1.2

Download the trained DDAN model we provide from Baiduyun (access code: zelr). Unzip and place the files in the DDAN_x4 directory.
For testing, you can test a single video or multiple videos with the function testvideo() or testvideos(). Please change the test video directory to your own.

# testvideos()
python main.py
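Video SR results are conventionally evaluated with PSNR (the metric reported in the paper). As a convenience, here is a minimal NumPy implementation you could use to check the restored frames; the function name and interface are ours, not part of this repository.

```python
import numpy as np

def psnr(sr, hr, max_val=255.0):
    # peak signal-to-noise ratio between an SR frame and its HR ground truth
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

hr = np.full((4, 4), 100, dtype=np.uint8)
sr = hr.copy()
sr[0, 0] = 110                # one pixel off by 10
print(round(psnr(sr, hr), 2))
```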
You can also train your own DDAN with the function train(). Before training your models, download the training data into the data directory.

# model.train()
python main.py
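If your training data only contains HR frames, LR inputs for x4 SR are commonly generated by bicubic downscaling. A small sketch using Pillow (which is already a dependency); the helper name and the cropping convention are our assumptions, not the repository's actual preprocessing:

```python
import numpy as np
from PIL import Image

def make_lr_frame(hr_img, scale=4):
    # crop so both dimensions divide evenly by the scale factor,
    # then bicubic-downscale the HR frame to produce its LR counterpart
    w, h = hr_img.size
    w, h = w - w % scale, h - h % scale
    hr_img = hr_img.crop((0, 0, w, h))
    lr_img = hr_img.resize((w // scale, h // scale), Image.BICUBIC)
    return hr_img, lr_img

hr = Image.fromarray(np.random.randint(0, 256, (120, 180, 3), dtype=np.uint8))
hr_cropped, lr = make_lr_frame(hr)
print(hr_cropped.size, lr.size)  # (180, 120) (45, 30)
```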
Here are the results on different datasets.
The frame is from Myanmar.
The frame is from calendar.
The frame is from real-world LR videos we captured.
If you use our code or model in your research, please cite with:
@ARTICLE{8995790,
author={Feng {Li} and Huihui {Bai} and Yao {Zhao}},
journal={IEEE Transactions on Image Processing},
title={Learning a Deep Dual Attention Network for Video Super-Resolution},
year={2020},
volume={29},
pages={4474-4488},
doi={10.1109/TIP.2020.2972118}
}
This code is built on MMCNN (TensorFlow); we thank the authors for sharing their code.