This is a Torch7 implementation of the method described in the paper 'A Neural Algorithm of Artistic Style' by Leon Gatys, Alexander Ecker, and Matthias Bethge (http://arxiv.org/abs/1508.06576).
- Torch7
- imagine-nn
- CUDA 6.5+
imagine-nn (and any other Torch packages you're missing) can be installed via Luarocks:
```
luarocks install inn
```
Basic usage:
```
qlua main.lua --style <style.jpg> --content <content.jpg> --style_factor <factor>
```
where `style.jpg` is the image that provides the style of the final generated image, `content.jpg` is the image that provides the content, and `style_factor` is a constant that controls the degree to which the generated image emphasizes style over content. By default it is set to 5E9.
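Conceptually, `style_factor` weights the style term relative to the content term in the combined objective being minimized. A schematic sketch (the function name and the way the two losses are computed are illustrative, not taken from this code):

```python
def total_loss(content_loss, style_loss, style_factor=5e9):
    """Combined objective: content fidelity plus a style_factor-weighted style term.

    Larger style_factor values push the optimizer toward matching the
    style image's texture statistics at the expense of content fidelity.
    """
    return content_loss + style_factor * style_loss

# With style_factor = 0 the objective reduces to pure content reconstruction.
assert total_loss(1.0, 0.5, style_factor=0.0) == 1.0
```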
The optimization of the generated image is performed on the GPU. On a 2014 MacBook Pro with an NVIDIA GeForce GT 750M, it takes a little over 4 minutes to perform 500 iterations of gradient descent.
Other options:
- `num_iters`: Number of optimization steps. Default is 500.
- `size`: Long edge dimension of the generated image. Set to 0 to use the size of the content image. Default is 500.
- `display_interval`: Number of iterations between image displays. Set to 0 to suppress image display. Default is 20.
- `smoothness`: Constant that controls the smoothness of the generated image (total variation norm regularization strength). Default is 6E-3.
- `init`: {image, random}. Initialization mode for the optimized image. `image` initializes with the content image; `random` initializes with random Gaussian noise. Default is `image`.
- `backend`: {cunn, cudnn}. Neural network CUDA backend. `cudnn` requires the Torch bindings for CuDNN R3.
- `optimizer`: {sgd, lbfgs}. Optimization algorithm. `lbfgs` is slower per iteration and consumes more memory, but may yield better results. Default is `sgd`.
The Eiffel Tower in the style of Van Gogh's Starry Night:
And in the style of Edvard Munch's The Scream:
Picasso-fied Obama:
The primary difference between this implementation and the paper is that it uses Google's Inception architecture instead of VGG. Consequently, the hyperparameter settings differ from those given in the paper (they have been tuned to give aesthetically pleasing results).
The outputs of the following layers are used to optimize for style: `conv1/7x7_s2`, `conv2/3x3`, `inception_3a`, `inception_3b`, `inception_4a`, `inception_4b`, `inception_4c`, `inception_4d`, `inception_4e`.
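As in the paper, the style representation at each of these layers is a Gram matrix of the filter activations, which captures correlations between filter responses while discarding spatial layout. A minimal NumPy sketch (array shapes and names are illustrative, not taken from this code):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) activation map.

    Entry (i, j) is the inner product between the vectorized responses
    of filters i and j, i.e. their correlation over spatial positions.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)  # flatten spatial dimensions
    return f @ f.T                  # (c, c) correlation matrix

# Toy example: 3 filters over a 4x4 activation map.
acts = np.random.rand(3, 4, 4)
g = gram_matrix(acts)
assert g.shape == (3, 3)
assert np.allclose(g, g.T)  # Gram matrices are symmetric
```

The style loss then penalizes the squared difference between the Gram matrices of the generated and style images at each of the layers listed above.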
The outputs of the following layers are used to optimize for content: `inception_3a`, `inception_4a`.
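The content term compares the generated image's activations at these layers directly against those of the content image. A minimal sketch, assuming a squared-error distance (the exact normalization used by this implementation may differ):

```python
import numpy as np

def content_loss(gen_features, content_features):
    """Squared-error distance between generated and content activations."""
    return 0.5 * np.sum((gen_features - content_features) ** 2)

f = np.ones((3, 4, 4))
assert content_loss(f, f) == 0.0  # identical activations give zero loss
```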
By default, optimization of the generated image is performed using gradient descent with momentum of 0.9. The learning rate is decayed exponentially by 0.75 every 100 iterations. L-BFGS can also be used.
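The learning-rate schedule described above (decay by 0.75 every 100 iterations) can be sketched as follows; the base learning rate here is an illustrative placeholder, not the value used by this implementation:

```python
def learning_rate(base_lr, iteration, decay=0.75, step=100):
    """Exponentially decay the learning rate by `decay` every `step` iterations."""
    return base_lr * decay ** (iteration // step)

# With a hypothetical base rate of 0.1, the rate is constant within each
# 100-iteration window, then drops to 0.75x its previous value.
rates = [learning_rate(0.1, t) for t in (0, 99, 100, 200)]
# rates[0] == rates[1] == 0.1; rates[2] == 0.075; rates[3] == 0.05625
```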
By default, the optimized image is initialized using the content image; the implementation also works with white noise initialization, as described in the paper.
In order to reduce high-frequency "screen door" noise in the generated image, total variation regularization is applied (idea from cnn-vis by jcjohnson).
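Total variation regularization penalizes differences between neighboring pixels, which discourages exactly this kind of high-frequency noise. A minimal sketch of the penalty, assuming the squared (anisotropic) variant; the implementation's exact form and weight may differ:

```python
import numpy as np

def tv_loss(img, weight=6e-3):
    """Squared total variation penalty on a (height, width) image.

    Sums squared differences between vertically and horizontally
    adjacent pixels; smooth images incur a small penalty.
    """
    dh = img[1:, :] - img[:-1, :]   # vertical neighbor differences
    dw = img[:, 1:] - img[:, :-1]   # horizontal neighbor differences
    return weight * (np.sum(dh ** 2) + np.sum(dw ** 2))

# A constant image has zero total variation; a noisy one does not.
assert tv_loss(np.ones((8, 8))) == 0.0
assert tv_loss(np.random.rand(8, 8)) > 0.0
```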
The weights for the Inception network used in this implementation were ported to Torch from the publicly-available Caffe distribution.