This is a TensorFlow implementation of cascaded refinement networks to synthesize photographic images from semantic layouts.
Required Python libraries: TensorFlow (>=1.2), SciPy, NumPy, Pillow, and OpenCV.
Tested on Ubuntu with an Intel i7 CPU and an Nvidia Titan X (Pascal) GPU, with CUDA (>=8.0) and cuDNN (>=5.0). CPU mode should also work with minor changes.
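A quick way to verify the environment is the small helper below. It is not part of this repository; it just imports the libraries listed above and prints their versions:

```python
# check_env.py -- quick sanity check for the dependencies listed above
# (not part of the repository; the filename is only a suggestion).
import tensorflow as tf
import scipy
import numpy as np
import PIL
import cv2

print("TensorFlow:", tf.__version__)          # expect >= 1.2
print("SciPy:     ", scipy.__version__)
print("NumPy:     ", np.__version__)
print("Pillow:    ", getattr(PIL, "__version__", "installed"))
print("OpenCV:    ", cv2.__version__)
print("GPU available:", tf.test.is_gpu_available())  # False is fine for CPU mode
```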
- Clone this repository.
- cd into the cloned folder.
- Download the pretrained models from "https:/trainedmodels". Downloading all the models takes several minutes.
- Download the test data from "https:/testdata". It requires approximately 11 GB of disk space.
- Download the training data from "https:/traindata". It requires approximately 60 GB of disk space.
We use resolution "512 x 1024" as a running example.
- cd "matlab_code". Run "test_generate_transform.m" to generate testing data for the transformation network. The generated data is stored in "../testdata/transform/"
- cd "../python_code". Run "transformation.py" to generate the transformed results. The results are saved in folder "../result/transform"
- cd "../matlab_code" run "test_generate_order.m" to generate testing data for the ordering network. The generated data is stored in "../testdata/order/".
- cd "../python_code". Run "order.py", the order prediction is stored in folder "../result/order/data/".
- cd "../matlab_code". Run "test_generate_canvas.m" to generate the canvas for the synthesis network to work on. The generated data is stored in "../testdata/synthesis".
- cd "../python_code". Run "synthesis_512_1024.py" to generate the final results. The result is saved in folder "../result/synthesis".
Note that you can skip the first two steps (the transformation network) and modify "test_generate_canvas.m" accordingly if you do not want to use the spatial transformer. We do not observe a significant improvement from spatial transformation on the Cityscapes dataset, but it is required on the NYU dataset, which has large variation in viewpoints.
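For convenience, the quick-start steps above can also be chained from a single Python script. The sketch below is not part of the repository; it assumes "matlab" and "python" are on your PATH and that the folder layout matches the paths mentioned above:

```python
# run_pipeline_512_1024.py -- a minimal driver for the quick-start steps above.
# This is only a convenience sketch, not repository code.
import subprocess

def run_matlab(script_name):
    # Run a MATLAB script from matlab_code/ and exit MATLAB when it finishes.
    cmd = ["matlab", "-nodisplay", "-nosplash", "-r", "%s; exit" % script_name]
    subprocess.check_call(cmd, cwd="matlab_code")

def run_python(script_name):
    subprocess.check_call(["python", script_name], cwd="python_code")

# Steps for the "512 x 1024" running example. Skip the first two calls if you
# do not want to use the spatial transformer (see the note above).
run_matlab("test_generate_transform")   # -> testdata/transform/
run_python("transformation.py")         # -> result/transform/
run_matlab("test_generate_order")       # -> testdata/order/
run_python("order.py")                  # -> result/order/data/
run_matlab("test_generate_canvas")      # -> testdata/synthesis/
run_python("synthesis_512_1024.py")     # -> result/synthesis/
```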
- Transformation network: (1) cd "matlab_code". (2) Run "test_generate_transform.m" to generate training data for the transformation network; the training data is stored in folder "../traindata/transform/". (3) cd "../python_code". (4) Run "transformation.py" with "training_phase = True".
- Ordering network: (1) cd "matlab_code". (2) Run "train_generate_order.m" to generate training data for the ordering network; the training data is stored in folder "../traindata/order/". (3) cd "../python_code". (4) Run "ordering.py" with "training_phase = True". The model is saved in "../trainedmodels/order/".
- Synthesis network: The synthesis network is trained in a progressive way. We first train a model at resolution "256 x 512", use it to initialize the model for resolution "512 x 1024", and then "1024 x 2048". The scripts for the different resolutions are "synthesis_256_512.py", "synthesis_512_1024.py", and "synthesis_1024_2048.py", respectively. (1) cd "../matlab_code/". (2) Run "train_generate_synthesis.m"; the training data is saved in folder "../traindata/synthesis/". (3) Run "synthesis_512_1024.py" with "training_phase = True". The result is saved in "../trainedmodels/synthesis/". A rough sketch of the progressive initialization is shown below.
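The progressive initialization amounts to restoring only the variables that the higher-resolution model shares with the lower-resolution checkpoint. The TensorFlow 1.x sketch below is not the repository's actual code; the checkpoint path and the way the graph is built are assumptions:

```python
# Sketch only: initialize the 512 x 1024 model from a 256 x 512 checkpoint.
import tensorflow as tf

ckpt_256 = tf.train.latest_checkpoint("../trainedmodels/synthesis_256_512/")  # hypothetical path

# Build the 512 x 1024 graph first (e.g. the graph defined in synthesis_512_1024.py), then:
reader = tf.train.NewCheckpointReader(ckpt_256)
ckpt_vars = reader.get_variable_to_shape_map()

# Keep only variables that exist in the checkpoint with matching shapes.
restorable = [v for v in tf.global_variables()
              if v.op.name in ckpt_vars
              and ckpt_vars[v.op.name] == v.get_shape().as_list()]

saver = tf.train.Saver(var_list=restorable)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # new layers start from scratch
    saver.restore(sess, ckpt_256)                # shared layers start from the 256 x 512 model
```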
Results for all the datasets are stored in folder "../all_results".
If you use our code for research, please cite our paper:
Xiaojuan Qi, Qifeng Chen, Jiaya Jia, and Vladlen Koltun. Semi-parametric Image Synthesis. In CVPR 2018.
- Replace "transformation.py" with "appearance flow" [Zhou et al. 2016].
- Add "contextual loss" [Mechrez et al. 2018] in the synthesis network to further improve the results.
If you have any questions or requests about the code and data, please email me at [email protected]. If you need more information about other datasets, please send an email as well.
MIT License