Unofficial PyTorch implementation of Boundless: Generative Adversarial Networks for Image Extension. I used the ESRGAN code as a reference.
pytorch
torchvision
torchsummary
numpy
Pillow
random
glob
- Download a dataset
wget http://data.csail.mit.edu/places/places365/train_256_places365standard.tar
- Unpack a tar file
tar -xvf train_256_places365standard.tar
- Run the dataset-preparation script
python make_datasets.py
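As a rough idea of what a dataset-preparation step like this might do, the sketch below resizes every image in the unpacked Places365 folder to the 512 x 512 input size used in this code. The paths, output folder name, and resampling filter are assumptions for illustration, not the repo's actual `make_datasets.py`:

```python
import glob
import os
from PIL import Image

# Hypothetical sketch: collect the unpacked Places365 images and resize
# them to 512 x 512. Folder names here are assumptions.
os.makedirs("datasets/places365", exist_ok=True)
for i, path in enumerate(glob.glob("data_256/**/*.jpg", recursive=True)):
    img = Image.open(path).convert("RGB").resize((512, 512), Image.BICUBIC)
    img.save(f"datasets/places365/{i:07d}.jpg")
```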
- Run the training script
python train.py
- Run the test script
python test.py
In this code, the input size is 512 x 512 (in the original paper, it is 257 x 257). To accommodate this change, I aligned the output sizes of the layers and added an extra layer (layer 9) to the discriminator.
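To see why the extra layer is needed, a minimal sketch of a stride-2 discriminator stack is shown below. With nine stride-2 convolutions, a 512 x 512 input is halved nine times down to a 1 x 1 map; the channel counts and activation choice here are assumptions, not the repo's exact architecture:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: each stride-2 conv halves the spatial size, so a
# 512 x 512 input needs one more layer (layer 9) than a 257 x 257 input
# to reach the same small feature map.
class Discriminator(nn.Module):
    def __init__(self, in_channels=3, base=64):
        super().__init__()
        layers = []
        ch, out = in_channels, base
        for _ in range(9):  # layer 9 is the additional one for 512 x 512
            layers.append(nn.Conv2d(ch, out, kernel_size=4, stride=2, padding=1))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            ch, out = out, min(out * 2, 512)
        self.features = nn.Sequential(*layers)

    def forward(self, x):
        return self.features(x)

x = torch.randn(1, 3, 512, 512)
print(Discriminator()(x).shape)  # 512 / 2**9 = 1, so a 1 x 1 output map
```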
Please let me know if you have any problems.
When I applied the input size 256 x 256, assuming that the paper's 257 x 257 was a typo, I ran into the following problems:
- Inception_v3 in pytorch doesn't support an input size of 256 x 256, so I used resnet152 instead. Details are here
- In the original paper, the kernel size in layer 7 is 5 x 5. However, that cannot work here because the input feature map of that layer is only 4 x 4, so I set the kernel size of layer 7 to 4 x 4.
Following the author's advice, I then applied the input size 257 x 257.
If you want to test with 257 x 257 inputs, prepare a dataset whose images are 257 x 257 and select it with the argparse option --dataset_name
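A minimal sketch of how such a --dataset_name option might be parsed (only the flag name comes from this README; the default value and help text are assumptions):

```python
import argparse

# Hypothetical sketch of the --dataset_name option; an explicit argv list
# is passed here so the example is self-contained.
parser = argparse.ArgumentParser()
parser.add_argument("--dataset_name", type=str, default="places365",
                    help="folder name of the dataset, e.g. one resized to 257 x 257")
opt = parser.parse_args(["--dataset_name", "my_257_dataset"])
print(opt.dataset_name)  # my_257_dataset
```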