├── K.png
├── Check.toe
├── lrate.png
├── StyleTransfer.toe
├── input
│   ├── tubingen.jpg
│   ├── tubingen-mask.jpg
│   └── tubingen-mask-inv.jpg
├── styles
│   ├── kandinsky.jpg
│   ├── shrooms.jpg
│   ├── transverse.jpg
│   └── starry-night.jpg
├── output
│   ├── result5004942.png
│   ├── result1000395331.png
│   ├── result500211319.png
│   ├── result500461356.png
│   ├── result500495910.png
│   ├── result500522971.png
│   ├── result500558121.png
│   └── result500601739.png
└── README.md
/K.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/K.png
--------------------------------------------------------------------------------
/Check.toe:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/Check.toe
--------------------------------------------------------------------------------
/lrate.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/lrate.png
--------------------------------------------------------------------------------
/StyleTransfer.toe:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/StyleTransfer.toe
--------------------------------------------------------------------------------
/input/tubingen.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/input/tubingen.jpg
--------------------------------------------------------------------------------
/styles/kandinsky.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/styles/kandinsky.jpg
--------------------------------------------------------------------------------
/styles/shrooms.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/styles/shrooms.jpg
--------------------------------------------------------------------------------
/styles/transverse.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/styles/transverse.jpg
--------------------------------------------------------------------------------
/input/tubingen-mask.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/input/tubingen-mask.jpg
--------------------------------------------------------------------------------
/output/result5004942.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/output/result5004942.png
--------------------------------------------------------------------------------
/styles/starry-night.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/styles/starry-night.jpg
--------------------------------------------------------------------------------
/input/tubingen-mask-inv.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/input/tubingen-mask-inv.jpg
--------------------------------------------------------------------------------
/output/result1000395331.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/output/result1000395331.png
--------------------------------------------------------------------------------
/output/result500211319.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/output/result500211319.png
--------------------------------------------------------------------------------
/output/result500461356.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/output/result500461356.png
--------------------------------------------------------------------------------
/output/result500495910.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/output/result500495910.png
--------------------------------------------------------------------------------
/output/result500522971.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/output/result500522971.png
--------------------------------------------------------------------------------
/output/result500558121.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/output/result500558121.png
--------------------------------------------------------------------------------
/output/result500601739.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exsstas/StyleTransfer-in-TD/HEAD/output/result500601739.png
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Style Transfer in TouchDesigner
2 |
3 | This is a TouchDesigner implementation of Style Transfer using neural networks. The project is based on
4 | * [TensorFlow (Python API) implementation of Neural Style](https://github.com/cysmith/neural-style-tf) by [cysmith](https://github.com/cysmith)
5 |
6 | You can read about the underlying math of the algorithm [here](https://harishnarayanan.org/writing/artistic-style-transfer/).
7 |
8 | Here are some results next to the original photo:
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 | ## Setup
21 |
22 | 0. Install [TouchDesigner](https://www.derivative.ca/099/Downloads/)
23 | 1. Install [TensorFlow for Windows 1.4](https://www.tensorflow.org/install/install_windows). It's highly recommended to use the GPU version (so you'll also need to install [CUDA](https://developer.nvidia.com/cuda-downloads) and, optionally, [cuDNN](https://developer.nvidia.com/cudnn)). You can install TensorFlow directly into your Python directory or with [Anaconda](https://conda.io/docs/download.html). TouchDesigner currently uses [CUDA 8](https://docs.derivative.ca/CUDA), and TensorFlow versions above 1.4 require CUDA 9, so you'll need to install `tensorflow=1.4`.
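Once TensorFlow is installed, you can confirm the version and that you got the GPU build by running something like this with the Python interpreter you installed it into (a quick sketch, run outside TouchDesigner first):
```python
# Run with the Python that TensorFlow was installed into
# (e.g. the interpreter inside your conda environment).
import tensorflow as tf

print(tf.__version__)                # expect 1.4.x
print(tf.test.is_built_with_cuda())  # True if this is the GPU build
```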
24 | 2. In the TouchDesigner menu `Edit - Preferences - Python 32/64 bit Module Path`, add the path to the folder where TensorFlow is installed (e.g. `C:/Anaconda3/envs/TFinTD/Lib/site-packages`). [Details here](http://www.derivative.ca/wiki099/index.php?title=Introduction_to_Python_Tutorial#Importing_Modules). To check your installation, run this in the Textport (Alt+t):
25 | ```
26 | import tensorflow as tf
27 | hello = tf.constant('Hello, TensorFlow!')
28 | sess = tf.Session()
29 | print(sess.run(hello))
30 | ```
31 | If the system outputs `Hello, TensorFlow!`, then TensorFlow is working inside TouchDesigner.
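Optionally, to confirm that TensorFlow can actually see your GPU from inside TouchDesigner, you can also list the available devices (a sketch; with a working CUDA setup the GPU should appear alongside the CPU):
```python
# Lists the devices TensorFlow can use; a /device:GPU:0 entry
# should show up next to the CPU if CUDA is set up correctly.
from tensorflow.python.client import device_lib

print(device_lib.list_local_devices())
```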
32 |
33 | 3. Run the command line or PowerShell, activate the conda environment (if TensorFlow was installed in conda) and install:
34 | * numpy (or numpy+mkl)
35 | * scipy
36 | * opencv (the OpenCV preinstalled in TouchDesigner 099 works fine, but for 088 you should install it manually into Python (or conda))
37 | 4. [Override the built-in numpy module](http://www.derivative.ca/wiki099/index.php?title=Introduction_to_Python_Tutorial#Overriding_built_in_modules). To check, enter the following in the TouchDesigner Textport:
38 | ```
39 | import numpy
40 | numpy
41 | ```
42 | You should see the path to numpy in your Python directory or conda environment (e.g. `C:/Anaconda3/envs/TFinTD/Lib/site-packages\\numpy\\__init__.py`).
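An equivalent explicit check (a small sketch you can paste into the Textport) prints the path and version directly:
```python
# If the printed path still points into the TouchDesigner install folder,
# the override from this step is not active yet.
import numpy

print(numpy.__file__)
print(numpy.__version__)
```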
43 |
44 | 5. You can use `Check.toe` to check your setup: open the Textport (Alt+t), right-click on the GPU DAT (or the CPU one, if you are going to use it) and choose "Run script". In the Textport you should see something like:
45 | ```
46 | python >>>
47 | [[ 22. 28.]
48 | [ 49. 64.]]
49 | ```
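The GPU DAT essentially runs a small matrix multiplication pinned to the GPU; a minimal equivalent you could paste into the Textport looks roughly like this (a sketch, not necessarily the exact contents of `Check.toe`):
```python
import tensorflow as tf

# 2x3 times 3x2 matrix product, placed explicitly on the first GPU.
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3])
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2])
    c = tf.matmul(a, b)

# log_device_placement prints which device each op actually ran on.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))  # [[ 22.  28.] [ 49.  64.]]
```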
50 |
51 | Then run the modules check. You should see something like:
52 | ```
53 | python >>>
54 | numpy: 1.13.0
55 | scipy: 1.1.0
56 | cv2: 3.2.0-dev
57 | tensorflow: 1.4.0
58 | ```
59 | If your numpy version is lower, you are probably still using the numpy bundled in the TouchDesigner folder. Check step 4.
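The modules check is roughly equivalent to running something like this in the Textport (a sketch; the actual DAT may differ):
```python
# Print the versions of the modules the project depends on.
import numpy, scipy, cv2
import tensorflow as tf

print('numpy:', numpy.__version__)
print('scipy:', scipy.__version__)
print('cv2:', cv2.__version__)
print('tensorflow:', tf.__version__)
```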
60 |
61 | 6. Download the [VGG-19 model weights](http://www.vlfeat.org/matconvnet/pretrained/) (see the "VGG-VD models from the *Very Deep Convolutional Networks for Large-Scale Visual Recognition* project" section). After downloading, copy the weights file `imagenet-vgg-verydeep-19.mat` to the project directory, or set the path to it using the Style Transfer user interface in TouchDesigner (the last row, `Path to VGG`, in the `StyleTransfer.toe` UI).
62 |
63 | ## Usage
64 | ### Basic Usage
65 | 1. It's recommended to copy all images you need into the project folder directories `/input` and `/styles` (or create your own directories). Long absolute paths sometimes fail (especially inside the Windows user folder).
66 | 2. Choose content image in `input` TOP
67 | 3. Choose style image in `style1` TOP
68 | 4. Press `Run Style Transfer` in UI
69 | 5. Wait. TouchDesigner will not respond for some seconds or minutes (depending on your GPU and the resolution of the images).
70 | 6. The result will be in the `result` TOP, linked to a file in the `/output` folder. A log with some info is in the `log` DAT; save it somewhere if needed.
71 | 7. Experiment with settings
72 | 8. Experiment with the code in `/StyleTransfer/local/modules/main` DAT
73 | 9. If something isn't working, first check for errors in the Textport.
74 |
75 |
76 | #### Settings
77 | * You can always `load default parameters` when experiments go too far.
78 | * `Num of iterations` - Maximum number of iterations for the optimizer: a larger number increases the effect of stylization.
79 | * `Maximum resolution` - Max width or height of the input/style images. Higher resolutions increase processing time and GPU memory usage. Good news: you don't need the Commercial version of TouchDesigner to produce images larger than 1280×1280.
80 | * You can perform style transfer on a `GPU or CPU device`. GPU mode is many times faster and highly recommended, but requires NVIDIA CUDA (see the Setup section).
81 | * You can transfer more than one style to the input image. Set the `number of styles`, a weight for each of them, and choose files in the style TOPs. If you want to go beyond 5 styles, make changes in `/StyleTransfer/UI/n_styles`.
82 | * `Use style masks` if you want to apply style transfer to specific areas of the image. Choose masks in the stylemask TOPs. The style is applied to white regions.
83 | * `Keep original colors` if you want the style transferred but not its colors.
84 | * `Color space convertion`: Color space (YUV, YCrCb, CIE L\*u\*v\*, or CIE L\*a\*b\*) used for the luminance-matching conversion back to the original colors.
85 | * `Content_weight` - Weight for the content loss function. You can use numbers in [scientific E notation](http://www.onlineconversion.com/faq_06.htm)
86 | * `Style_weight` - Weight for the style loss function.
87 | * `Temporal_weight` - Weight for the temporal loss function.
88 | * `Total variation weight` - Weight for the total variation loss function (see the sketch after this list for how the weight parameters combine).
89 | * `Type of initialization image` - You can initialize the network with the `content` image, `random` noise, or the `style` image.
90 | * `Noise_ratio`: Interpolation value between the content image and the noise image if the network is initialized with `random`.
91 | * `Optimizer` - Loss minimization optimizer. L-BFGS gives better results; Adam uses less memory.
92 | * `Learning_rate` - Learning-rate parameter for the Adam optimizer.
93 |
94 |
95 |
96 |
97 | * `VGG19 layers for content\style image`: [VGG-19](http://www.robots.ox.ac.uk/~vgg/research/very_deep/) layers and weights used for the content/style image.
98 | * `Constant (K) for the lossfunction` - Different constants K in the content loss function.
99 |
100 |
101 |
102 |
103 | * `Type of pooling in CNN` - Maximum or average type of pooling in the convolutional neural network.
104 | * `Path to VGG file`: Path to `imagenet-vgg-verydeep-19.mat`. [Download it here](http://www.vlfeat.org/matconvnet/pretrained/).
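As a rough guide to how the weight parameters interact: the underlying neural-style-tf code minimizes a single weighted sum of losses, conceptually something like the sketch below (variable names are illustrative, not the actual identifiers in `/StyleTransfer/local/modules/main`):
```python
# Conceptual sketch only: how the UI weights scale the individual loss terms.
# The temporal term matters for frame sequences; for single images it is inactive.
total_loss = (content_weight * content_loss
              + sum(w * l for w, l in zip(style_weights, style_losses))
              + tv_weight * total_variation_loss
              + temporal_weight * temporal_loss)
```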
105 |
106 |
107 | ## Memory
108 | By default, Style transfer uses the NVIDIA cuDNN GPU backend for convolutions and L-BFGS for optimization.
109 | These produce better and faster results, but can consume a lot of memory. You can reduce memory usage with the following:
110 |
111 | * **Use Adam**: Set `Optimizer` to Adam instead of L-BFGS. This should significantly reduce memory usage, but will require tuning of other parameters for good results;
112 | in particular you should experiment with different values of `Learning_rate`, `Content_weight`, `Style_weight`.
113 | * **Reduce image size**: You can reduce the size of the generated image with the `Maximum resolution` setting.
114 |
115 |
116 | ## This code was developed and tested on the following system:
117 | * **CPU:** Intel Core i7-4790K @ 4.0GHz × 8
118 | * **GPU:** NVIDIA GeForce GTX 1070 8 Gb
119 | * **CUDA:** 8.0
120 | * **cuDNN:** v5.1 (or higher)
121 | * **OS:** Windows 10 64-bit
122 | * **TouchDesigner:** 099 64-bit build 2017.10000
123 | * **Anaconda:** 4.3.14
124 | * **tensorflow-gpu:** 1.2.0 (tested on 1.4.0 as well)
125 | * **opencv (built-in TouchDesigner):** 3.2.0-dev
126 | * **numpy (installed in conda environment):** 1.13.0 (tested on 1.15.1 as well)
127 | * **scipy (installed in conda environment):** 0.19.1 (tested on 1.1.0 as well)
128 |
129 |
130 | ## The implementation is based on the project:
131 | * [TensorFlow (Python API) implementation of Neural Style](https://github.com/cysmith/neural-style-tf) by [cysmith](https://github.com/cysmith)
132 |
133 |
134 | ## Contacts
135 | Contact me via exsstas@ya.ru or on [Twitter](https://twitter.com/exsstas).
136 |
137 |
138 |
139 |
140 |
--------------------------------------------------------------------------------