A Dual Camera System for High Spatiotemporal Resolution Video Acquisition
Ming Cheng, Zhan Ma, M. Salman Asif, Yiling Xu, Haojie Liu, Wenbo Bao, and Jun Sun
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
The code has been tested with Python 3.7, PyTorch 1.0, CUDA 10.1, and cuDNN 7.6.4.
Once your environment is set up and activated, build the correlation package required by PWCNet:
$ cd correlation_package_pytorch1_0
$ sh build.sh
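If the build succeeds, the compiled extension should be importable from Python. The check below is only a sketch: the module name correlation_cuda is an assumption based on common PWC-Net correlation packages and may differ in this repository.

# Assumed module name; adjust to whatever the package's setup.py registers.
import correlation_cuda
print(correlation_cuda.__file__)  # path of the freshly built CUDA extension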
Downsample the images by a factor of 4 (or 8) using bicubic interpolation, then upsample the downscaled images back to the original resolution by the same factor; the result is used as the low-quality input to AWnet. The adjacent original high-quality frames are used as the reference frames, as sketched below.
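A minimal sketch of this degradation step, assuming OpenCV (cv2) is available; the file paths and the scale factor are placeholders.

import cv2

def make_low_quality(path_in, path_out, scale=4):
    # Bicubic-downsample by `scale`, then bicubic-upsample back to the
    # original resolution to produce the low-quality AWnet input.
    img = cv2.imread(path_in)
    h, w = img.shape[:2]
    small = cv2.resize(img, (w // scale, h // scale), interpolation=cv2.INTER_CUBIC)
    lowq = cv2.resize(small, (w, h), interpolation=cv2.INTER_CUBIC)
    cv2.imwrite(path_out, lowq)

make_low_quality("frames/000001.png", "frames_lq/000001.png", scale=4)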
Download the pretrained PWC-Net weights used by AWnet:
$ wget http://vllab1.ucmerced.edu/~wenbobao/DAIN/pwc_net.pth.tar
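A quick sanity check, once the download finishes, that the weights can be read with PyTorch; the nested state_dict unwrapping below is an assumption about how the .tar checkpoint may be organized.

import torch

ckpt = torch.load("pwc_net.pth.tar", map_location="cpu")
state = ckpt.get("state_dict", ckpt)  # unwrap if the weights are nested in a dict
print(len(state), "parameter tensors found")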
These images were captured with our dual iPhone 7 camera setup.
Different illumination conditions: High Light Illumination | Medium Light Illumination | Low Light Illumination
Single-Reference vs Multi-Reference: Simulated data | Real data
Unfortunately, the models of the one-reference AWnet (AWnet_1) have been lost. The models of the two-reference AWnet (AWnet_2) are still available, as listed below: Model without noise | Model with noise (0.008)