Artistic Style

Yet another implementation of A Neural Algorithm of Artistic Style in TensorFlow. Why? Because this thing blows me away each time I see it, and even after coding it from scratch it still blows me away!

What is it?

This paper explains how it is possible to extract the style and content of an image separately. By extracting an artist's style from, say, a classic painting and combining it with the content of a modern-day photograph, we can envision what the artist might have painted.

For example, combining Edvard Munch's style from The Scream with this photograph of a river, we can achieve this.

[Image: the river photograph and The Scream]

[Image: the stylized result]
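Under the hood, the paper defines a content loss on raw VGG activations and a style loss on their Gram matrices, combined as alpha * L_content + beta * L_style. Here is a minimal numpy sketch of those two pieces. The shapes, function names, and defaults are illustrative, not this repository's actual code; the default alpha and beta simply mirror the --content_weight and --style_weight defaults listed further below.

import numpy as np

def gram_matrix(features):
    # features: (positions, channels) activations from one VGG layer,
    # flattened over height and width
    return features.T @ features

def combined_loss(gen_content, content, gen_style, style, alpha=50.0, beta=400.0):
    # Content loss: squared error between raw activations
    l_content = 0.5 * np.sum((gen_content - content) ** 2)
    # Style loss for a single layer: squared error between Gram matrices,
    # normalized by position count M and channel count N
    m, n = style.shape
    g_diff = gram_matrix(gen_style) - gram_matrix(style)
    l_style = np.sum(g_diff ** 2) / (4.0 * n ** 2 * m ** 2)
    # alpha and beta trade off content fidelity against stylization
    return alpha * l_content + beta * l_style

# Random activations standing in for real VGG features
feats = lambda: np.random.rand(64 * 64, 128)
print(combined_loss(feats(), feats(), feats(), feats()))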

Related projects

There are many other similar projects out there.

Dependencies

  • The pre-trained VGG-19 Very Deep model can be downloaded from here. Search for VGG-VD and download imagenet-vgg-verydeep-19.mat. Place this file inside a new directory called data in the root folder, so that the model is available at ./data/imagenet-vgg-verydeep-19.mat. If you place it elsewhere, make sure you pass the corresponding optional argument while training. (A quick way to verify the file loads is sketched after this list.)

  • The program is designed to run on Python 3.6.x

  • For the additional modules this program depends on, run the following command to install all dependencies:

$ pip install -r requirements.txt
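As referenced above, here is a quick way to check that the downloaded model file is readable. This is a hedged sketch: it assumes scipy is available (check requirements.txt), and that the file follows the usual MatConvNet layout, which stores its weights under a 'layers' key.

import scipy.io

# Path matches the instructions above; adjust if you placed the file elsewhere.
VGG_PATH = './data/imagenet-vgg-verydeep-19.mat'

data = scipy.io.loadmat(VGG_PATH)
print(sorted(data.keys()))    # MatConvNet files typically expose 'layers'
print(data['layers'].shape)   # one entry per layer of the VGG-19 network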

How to run?

After cloning the repository, installing dependencies, and downloading the model:

$ python main.py -c <content-img-path> -s <style-img-path> -o <destination-path>

There are various options that can be modified. A few of the important ones are:

  • --iterations <number> - Changes the number of iterations backprop runs for. Defaults to 1000
  • --content_weight <number> - Changes the weight of the content image (described as alpha in the paper). Defaults to 50
  • --style_weight <number> - Changes the weight of the style image (described as beta in the paper). Defaults to 400
  • --learning_rate <number> - Changes the learning rate of backprop. Defaults to 3
  • --gpu <number> - Choose whether to use your CPU (-1) or a GPU device of your choice (the corresponding device number) for computation. Defaults to CPU

For the entire list of optional arguments, use this command:

$ python main.py --help
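For instance, a longer, more heavily stylized run on GPU 0 might look like this (the image paths are placeholders):

$ python main.py -c river.jpg -s starry_night.jpg -o river_starry.jpg --iterations 1500 --style_weight 500 --gpu 0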

I've found that these default parameters work well in most cases where images are 512x512. You might need to tune these hyperparameters for some images, especially larger ones.

Examples

Different Styles

Here is the picture of the river again.

[Image: the river photograph]

This time, I'm combining it with The Starry Night (by Vincent van Gogh), Girl with a Mandolin (by Pablo Picasso), and The Great Wave off Kanagawa (by Hokusai), respectively.

[Image: river styled with The Starry Night]

[Image: river styled with Girl with a Mandolin]

[Image: river styled with The Great Wave off Kanagawa]

Varying the number of iterations

Here is a picture of me. Let's look at how the image varies every 500 iterations, from 500 to 2000, when combined with The Starry Night.

[Images: results after 500, 1000, 1500, and 2000 iterations, arranged in a two-by-two grid]

  • Top left - 500 iterations
  • Top right - 1000 iterations
  • Bottom left - 1500 iterations
  • Bottom right - 2000 iterations

I'd say 1000 - 1500 iterations should produce nice results, provided the other parameters are tuned properly. But feel free to train longer if you have access to a GPU.
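If you want to produce a progression like the grid above, the pattern is simply to snapshot the generated image at fixed intervals during a single run. A minimal sketch follows; run_step and save_image are hypothetical stand-ins for illustration, not functions from this repository.

def run_step():
    pass  # one backprop update of the generated image would happen here

def save_image(path):
    pass  # writing the current generated image to disk would happen here

SNAPSHOT_EVERY = 500      # matches the grid above
TOTAL_ITERATIONS = 2000

for i in range(1, TOTAL_ITERATIONS + 1):
    run_step()
    if i % SNAPSHOT_EVERY == 0:
        save_image('snapshot_{}.jpg'.format(i))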
