--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # neural-style-tf
2 |
3 | This is a TensorFlow implementation of several techniques described in the papers:
4 | * [Image Style Transfer Using Convolutional Neural Networks](http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf)
5 | by Leon A. Gatys, Alexander S. Ecker, Matthias Bethge
6 | * [Artistic style transfer for videos](https://arxiv.org/abs/1604.08610)
7 | by Manuel Ruder, Alexey Dosovitskiy, Thomas Brox
8 | * [Preserving Color in Neural Artistic Style Transfer](https://arxiv.org/abs/1606.05897)
9 | by Leon A. Gatys, Matthias Bethge, Aaron Hertzmann, Eli Shechtman
10 |
11 | Additionally, techniques are presented for semantic segmentation and multiple style transfer.
12 |
13 | The Neural Style algorithm synthesizes a [pastiche](https://en.wikipedia.org/wiki/Pastiche) by separating and combining the content of one image with the style of another image using convolutional neural networks (CNNs). Below is an example of transferring the artistic style of [The Starry Night](https://en.wikipedia.org/wiki/The_Starry_Night) onto a photograph of an African lion:
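
At its core, the algorithm minimizes two losses over the synthesized image: a content loss on raw CNN activations and a style loss on Gram matrices of those activations. Below is a minimal NumPy sketch of both losses (illustrative only; in this implementation the feature maps come from VGG-19 and the losses are built as TensorFlow ops):

```python
import numpy as np

def gram_matrix(features):
    # features: (height, width, channels) activations from one CNN layer
    h, w, c = features.shape
    F = features.reshape(h * w, c)
    return F.T.dot(F)  # (c, c) channel correlations; spatial layout is discarded

def content_loss(gen, content):
    # Squared error between raw activations at a content layer (e.g. conv4_2)
    return 0.5 * np.sum((gen - content) ** 2)

def style_loss(gen, style):
    # Squared error between Gram matrices, normalized as in Gatys et al.
    h, w, c = gen.shape
    G, A = gram_matrix(gen), gram_matrix(style)
    return np.sum((G - A) ** 2) / (4.0 * c ** 2 * (h * w) ** 2)
```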
14 |
20 | Transferring the style of various artworks to the same content image produces qualitatively convincing results:
21 |
38 | Here we reproduce Figure 3 from the first paper, which renders a photograph of the Neckarfront in Tübingen, Germany in the style of five iconic paintings: [The Shipwreck of the Minotaur](http://www.artble.com/artists/joseph_mallord_william_turner/paintings/the_shipwreck_of_the_minotaur), [The Starry Night](https://www.wikiart.org/en/vincent-van-gogh/the-starry-night-1889), [Composition VII](https://www.wikiart.org/en/wassily-kandinsky/composition-vii-1913), [The Scream](https://www.wikiart.org/en/edvard-munch/the-scream-1893), and [Seated Nude](http://www.pablopicasso.org/seated-nude.jsp):
39 |
49 | ### Content / Style Tradeoff
50 | The relative weight of the style and content can be controlled.
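
Schematically, the rendered image minimizes a weighted sum of the losses, `content_weight * L_content + style_weight * L_style + tv_weight * L_tv`, so the alpha/beta ratio referenced in the papers (and under Layer Representations below) corresponds to `content_weight / style_weight`.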
51 |
52 | Here we render with an increasing style weight applied to [Red Canna](http://www.georgiaokeeffe.net/red-canna.jsp):
53 |
61 | ### Multiple Style Images
62 | More than one style image can be used to blend multiple artistic styles.
63 |
74 | *Top row (left to right)*: [The Starry Night](https://www.wikiart.org/en/vincent-van-gogh/the-starry-night-1889) + [The Scream](https://www.wikiart.org/en/edvard-munch/the-scream-1893), [The Scream](https://www.wikiart.org/en/edvard-munch/the-scream-1893) + [Composition VII](https://www.wikiart.org/en/wassily-kandinsky/composition-vii-1913), [Seated Nude](http://www.pablopicasso.org/seated-nude.jsp) + [Composition VII](https://www.wikiart.org/en/wassily-kandinsky/composition-vii-1913)
75 | *Bottom row (left to right)*: [Seated Nude](http://www.pablopicasso.org/seated-nude.jsp) + [The Starry Night](https://www.wikiart.org/en/vincent-van-gogh/the-starry-night-1889), [Oversoul](http://alexgrey.com/art/paintings/soul/oversoul/) + [Freshness of Cold](https://afremov.com/FRESHNESS-OF-COLD-PALETTE-KNIFE-Oil-Painting-On-Canvas-By-Leonid-Afremov-Size-30-x40.html), [David Bowie](http://www.francoise-nielly.com/index.php/galerie/index/56) + [Skull](https://www.wikiart.org/en/jean-michel-basquiat/head)
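
Blending amounts to a weighted sum of per-style losses. A sketch, reusing `style_loss` from the earlier snippet, with weights as passed to `--style_imgs_weights`:

```python
def blended_style_loss(gen, styles, weights):
    # styles: feature maps of each style image; weights typically sum to 1.0
    return sum(w * style_loss(gen, s) for w, s in zip(weights, styles))
```

Shifting the weights toward one style interpolates between the styles, as demonstrated in the next section.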
76 |
77 | ### Style Interpolation
78 | When using multiple style images, the degree of blending between the images can be controlled.
79 |
90 | *Top row (left to right)*: content image, .2 [The Starry Night](https://www.wikiart.org/en/vincent-van-gogh/the-starry-night-1889) + .8 [The Scream](https://www.wikiart.org/en/edvard-munch/the-scream-1893), .8 [The Starry Night](https://www.wikiart.org/en/vincent-van-gogh/the-starry-night-1889) + .2 [The Scream](https://www.wikiart.org/en/edvard-munch/the-scream-1893)
91 | *Bottom row (left to right)*: .2 [Oversoul](http://alexgrey.com/art/paintings/soul/oversoul/) + .8 [Freshness of Cold](https://afremov.com/FRESHNESS-OF-COLD-PALETTE-KNIFE-Oil-Painting-On-Canvas-By-Leonid-Afremov-Size-30-x40.html), .5 [Oversoul](http://alexgrey.com/art/paintings/soul/oversoul/) + .5 [Freshness of Cold](https://afremov.com/FRESHNESS-OF-COLD-PALETTE-KNIFE-Oil-Painting-On-Canvas-By-Leonid-Afremov-Size-30-x40.html), .8 [Oversoul](http://alexgrey.com/art/paintings/soul/oversoul/) + .2 [Freshness of Cold](https://afremov.com/FRESHNESS-OF-COLD-PALETTE-KNIFE-Oil-Painting-On-Canvas-By-Leonid-Afremov-Size-30-x40.html)
92 |
93 | ### Transfer style but not color
94 | The color scheme of the original image can be preserved by including the flag `--original_colors`. Colors are transferred in one of the [YUV](https://en.wikipedia.org/wiki/YUV), [YCrCb](https://en.wikipedia.org/wiki/YCbCr), [CIE L\*a\*b\*](https://en.wikipedia.org/wiki/Lab_color_space), or [CIE L\*u\*v\*](https://en.wikipedia.org/wiki/CIELUV) color spaces.
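
The YUV variant (the default for `--color_convert_type`) amounts to keeping the stylized luminance and restoring the content image's chrominance; a minimal OpenCV sketch:

```python
import cv2

def convert_to_original_colors(content_bgr, stylized_bgr):
    # Keep luminance (Y) from the stylized image; take chrominance (U, V)
    # from the original content image, then convert back to BGR.
    content_yuv = cv2.cvtColor(content_bgr, cv2.COLOR_BGR2YUV)
    stylized_yuv = cv2.cvtColor(stylized_bgr, cv2.COLOR_BGR2YUV)
    stylized_yuv[..., 1:] = content_yuv[..., 1:]
    return cv2.cvtColor(stylized_yuv, cv2.COLOR_YUV2BGR)
```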
95 |
96 | Here we reproduce Figure 1 and Figure 2 from the third paper using luminance-only transfer:
97 |
107 | *Left to right*: content image, stylized image, stylized image with the original colors of the content image
108 |
109 | ### Textures
110 | The algorithm is not constrained to artistic painting styles. It can also be applied to photographic textures to create [pareidolic](https://en.wikipedia.org/wiki/Pareidolia) images.
111 |
123 | ### Segmentation
124 | Style can be transferred to semantically segmented regions of the content image.
125 |
141 | Multiple styles can be transferred to the foreground and background of the content image.
142 |
158 | *Left to right*: content image, foreground style, background style, foreground mask, background mask, stylized image
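
One plausible formulation of the masking (a sketch, not necessarily this repository's exact math): resize each mask to a layer's spatial resolution, multiply it into the feature maps, and compute the Gram loss per region:

```python
import numpy as np

def masked_style_loss(gen, style, mask):
    # gen, style: (h, w, c) layer activations; mask: (h, w) values in [0, 1]
    # resized to the layer's resolution; gram_matrix is from the earlier sketch
    h, w, c = gen.shape
    m = mask[..., np.newaxis]
    G, A = gram_matrix(gen * m), gram_matrix(style * m)
    area = max(float(np.sum(mask)), 1.0)  # normalize by the masked area
    return np.sum((G - A) ** 2) / (4.0 * c ** 2 * area ** 2)
```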
159 |
160 | ### Video
161 | Animations can be rendered by applying the algorithm to each source frame. For the best results, the gradient descent is initialized with the previously stylized frame warped to the current frame according to the optical flow between the pair of frames. A temporal consistency loss penalizes deviations from the warped frame, excluding pixels in disoccluded regions and at motion boundaries.
162 |
171 | *Top row (left to right)*: source frames, ground-truth optical flow visualized
172 | *Bottom row (left to right)*: disoccluded regions and motion boundaries, stylized frames
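
Warping the previous stylized frame can be sketched with OpenCV's `remap`, assuming the backward flow (current frame back to the previous frame) has already been read from its `.flo` file:

```python
import cv2
import numpy as np

def warp_image(prev_stylized, backward_flow):
    # backward_flow: (h, w, 2) per-pixel displacements from the current
    # frame back to the previous frame
    h, w = backward_flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + backward_flow[..., 0]).astype(np.float32)
    map_y = (grid_y + backward_flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_stylized, map_x, map_y, cv2.INTER_CUBIC)
```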
173 |
174 | Big thanks to Mike Burakoff for finding a bug in the video rendering.
175 |
176 | ### Gradient Descent Initialization
177 | The initialization of the gradient descent is controlled using `--init_img_type` for single images and `--init_frame_type` or `--first_frame_type` for video frames. White noise allows an arbitrary number of distinct images to be generated, whereas initializing with a fixed image always converges to the same output.
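
A sketch of the `random` initialization, where `--noise_ratio` interpolates between white noise and the content image (the noise range here is illustrative):

```python
import numpy as np

def init_image(content, noise_ratio=1.0, seed=0):
    # noise_ratio = 1.0 gives pure white noise; 0.0 gives the content image
    rng = np.random.RandomState(seed)
    noise = rng.uniform(-20.0, 20.0, content.shape).astype(np.float32)
    return noise_ratio * noise + (1.0 - noise_ratio) * content
```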
178 |
179 | Here we reproduce Figure 6 from the first paper:
180 |
190 | *Top row (left to right)*: Initialized with the content image, the style image, white noise (RNG seed 1)
191 | *Bottom row (left to right)*: Initialized with white noise (RNG seeds 2, 3, 4)
192 |
193 | ### Layer Representations
194 | The feature complexities and receptive field sizes increase down the CNN hierarchy.
195 |
196 | Here we reproduce Figure 3 from [the original paper](https://arxiv.org/abs/1508.06576):
197 |
198 | *(Grid of stylized outputs; rows: conv1_1, conv2_1, conv3_1, conv4_1, conv5_1; columns: content/style ratios 1 x 10^-5, 1 x 10^-4, 1 x 10^-3, 1 x 10^-2.)*
199 |
242 | *Rows*: increasing subsets of CNN layers; i.e. 'conv4_1' means using 'conv1_1', 'conv2_1', 'conv3_1', 'conv4_1'.
243 | *Columns*: alpha/beta ratio of the content and style reconstruction (see Content / Style Tradeoff).
244 |
245 | ## Setup
246 | #### Dependencies:
247 | * [tensorflow](https://github.com/tensorflow/tensorflow)
248 | * [opencv](http://opencv.org/downloads.html)
249 |
250 | #### Optional (but recommended) dependencies:
251 | * [CUDA](https://developer.nvidia.com/cuda-downloads) 7.5+
252 | * [cuDNN](https://developer.nvidia.com/cudnn) 5.0+
253 |
254 | #### After installing the dependencies:
255 | * Download the [VGG-19 model weights](http://www.vlfeat.org/matconvnet/pretrained/) (see the "VGG-VD models from the *Very Deep Convolutional Networks for Large-Scale Visual Recognition* project" section). More info about the VGG-19 network can be found [here](http://www.robots.ox.ac.uk/~vgg/research/very_deep/).
256 | * After downloading, copy the weights file `imagenet-vgg-verydeep-19.mat` to the project directory.
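
To sanity-check the download, the file can be opened with `scipy` (a sketch, assuming the standard MatConvNet layout):

```python
import scipy.io

vgg = scipy.io.loadmat('imagenet-vgg-verydeep-19.mat')
print(vgg['layers'].shape)  # the network's layer array; conv kernels and biases live inside
```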
257 |
258 | ## Usage
259 | ### Basic Usage
260 |
261 | #### Single Image
262 | 1. Copy 1 content image to the default image content directory `./image_input`
263 | 2. Copy 1 or more style images to the default style directory `./styles`
264 | 3. Run the command:
265 | ```
266 | bash stylize_image.sh
267 | ```
268 | *Example*:
269 | ```
270 | bash stylize_image.sh ./image_input/lion.jpg ./styles/kandinsky.jpg
271 | ```
272 | *Note*: Supported image formats include: `.png`, `.jpg`, `.ppm`, `.pgm`
273 |
274 | *Note*: Paths to images should not contain the `~` character to represent your home directory; use a relative or absolute path instead.
275 |
276 | #### Video Frames
277 | 1. Copy 1 content video to the default video content directory `./video_input`
278 | 2. Copy 1 or more style images to the default style directory `./styles`
279 | 3. Run the command:
280 | ```
281 | bash stylize_video.sh
282 | ```
283 | *Example*:
284 | ```
285 | bash stylize_video.sh ./video_input/video.mp4 ./styles/kandinsky.jpg
286 | ```
287 |
288 | *Note*: Supported video formats include: `.mp4`, `.mov`, `.mkv`
289 |
290 | ### Advanced Usage
291 | #### Single Image or Video Frames
292 | 1. Copy content images to the default image content directory `./image_input` or copy video frames to the default video content directory `./video_input`
293 | 2. Copy 1 or more style images to the default style directory `./styles`
294 | 3. Run the command with specific arguments:
295 | ```
296 | python neural_style.py
297 | ```
298 | *Example (Single Image)*:
299 | ```
300 | python neural_style.py --content_img golden_gate.jpg \
301 | --style_imgs starry-night.jpg \
302 | --max_size 1000 \
303 | --max_iterations 100 \
304 | --original_colors \
305 | --device /cpu:0 \
306 | --verbose;
307 | ```
308 |
309 | To use multiple style images, pass a *space-separated* list of the image names and image weights like this:
310 |
311 | `--style_imgs starry_night.jpg the_scream.jpg --style_imgs_weights 0.5 0.5`
312 |
313 | *Example (Video Frames)*:
314 | ```
315 | python neural_style.py --video \
316 | --video_input_dir ./video_input/my_video_frames \
317 | --style_imgs starry-night.jpg \
318 | --content_weight 5 \
319 | --style_weight 1000 \
320 | --temporal_weight 1000 \
321 | --start_frame 1 \
322 | --end_frame 50 \
323 | --max_size 1024 \
324 | --first_frame_iterations 3000 \
325 | --verbose;
326 | ```
327 | *Note*: When using `--init_frame_type prev_warped` you must first compute the backward and forward optical flow between the frames. See `./video_input/make-opt-flow.sh` and `./video_input/run-deepflow.sh`.
328 |
329 | #### Arguments
330 | * `--content_img`: Filename of the content image. *Example*: `lion.jpg`
331 | * `--content_img_dir`: Relative or absolute directory path to the content image. *Default*: `./image_input`
332 | * `--style_imgs`: Filenames of the style images. To use multiple style images, pass a *space-separated* list. *Example*: `--style_imgs starry-night.jpg`
333 | * `--style_imgs_weights`: The blending weights for each style image. *Default*: `1.0` (assumes only 1 style image)
334 | * `--style_imgs_dir`: Relative or absolute directory path to the style images. *Default*: `./styles`
335 | * `--init_img_type`: Image used to initialize the network. *Choices*: `content`, `random`, `style`. *Default*: `content`
336 | * `--max_size`: Maximum width or height of the input images. *Default*: `512`
337 | * `--content_weight`: Weight for the content loss function. *Default*: `5e0`
338 | * `--style_weight`: Weight for the style loss function. *Default*: `1e4`
339 | * `--style_scale`: Scale applied to the style images (global for all style images). *Default*: `1.0`
340 | * `--tv_weight`: Weight for the total variation loss function. *Default*: `1e-3`
341 | * `--temporal_weight`: Weight for the temporal loss function. *Default*: `2e2`
342 | * `--content_layers`: *Space-separated* VGG-19 layer names used for the content image. *Default*: `conv4_2`
343 | * `--style_layers`: *Space-separated* VGG-19 layer names used for the style image. *Default*: `relu1_1 relu2_1 relu3_1 relu4_1 relu5_1`
344 | * `--content_layer_weights`: *Space-separated* weights of each content layer's contribution to the content loss. *Default*: `1.0`
345 | * `--style_layer_weights`: *Space-separated* weights of each style layer's contribution to the style loss (combined as sketched after this list). *Default*: `0.2 0.2 0.2 0.2 0.2`
346 | * `--original_colors`: Boolean flag indicating if the style is transferred but not the colors.
347 | * `--color_convert_type`: Color spaces (YUV, YCrCb, CIE L\*u\*v\*, CIE L\*a\*b\*) for luminance-matching conversion to original colors. *Choices*: `yuv`, `ycrcb`, `luv`, `lab`. *Default*: `yuv`
348 | * `--style_mask`: Boolean flag indicating if style is transferred to masked regions.
349 | * `--style_mask_imgs`: Filenames of the style mask images (example: `face_mask.png`). To use multiple style mask images, pass a *space-separated* list. *Example*: `--style_mask_imgs face_mask.png face_mask_inv.png`
350 | * `--noise_ratio`: Interpolation value between the content image and noise image if the network is initialized with `random`. *Default*: `1.0`
351 | * `--seed`: Seed for the random number generator. *Default*: `0`
352 | * `--model_weights`: Weights and biases of the VGG-19 network. Download [here](http://www.vlfeat.org/matconvnet/pretrained/). *Default*:`imagenet-vgg-verydeep-19.mat`
353 | * `--pooling_type`: Type of pooling in convolutional neural network. *Choices*: `avg`, `max`. *Default*: `avg`
354 | * `--device`: GPU or CPU device. GPU mode highly recommended but requires NVIDIA CUDA. *Choices*: `/gpu:0` `/cpu:0`. *Default*: `/gpu:0`
355 | * `--img_output_dir`: Directory to write output to. *Default*: `./image_output`
356 | * `--img_name`: Filename of the output image. *Default*: `result`
357 | * `--verbose`: Boolean flag indicating if statements should be printed to the console.
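
The per-layer style losses are combined using `--style_layer_weights` (content layers are weighted analogously); a sketch, reusing `style_loss` from the first snippet:

```python
def total_style_loss(gen_layers, style_layers, layer_weights):
    # gen_layers / style_layers: activations at each layer in --style_layers
    return sum(w * style_loss(g, s)
               for w, g, s in zip(layer_weights, gen_layers, style_layers))
```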
358 |
359 | #### Optimization Arguments
360 | * `--optimizer`: Loss minimization optimizer. L-BFGS gives better results. Adam uses less memory. *Choices*: `lbfgs`, `adam`. *Default*: `lbfgs`
361 | * `--learning_rate`: Learning-rate parameter for the Adam optimizer. *Default*: `1e0`
367 | * `--max_iterations`: Max number of iterations for the Adam or L-BFGS optimizer. *Default*: `1000`
368 | * `--print_iterations`: Number of iterations between optimizer print statements. *Default*: `50`
369 | * `--content_loss_function`: Different constants K in the content loss function. *Choices*: `1`, `2`, `3`. *Default*: `1`
370 |
375 | #### Video Frame Arguments
376 | * `--video`: Boolean flag indicating if the user is creating a video.
377 | * `--start_frame`: First frame number. *Default*: `1`
378 | * `--end_frame`: Last frame number. *Default*: `1`
379 | * `--first_frame_type`: Image used to initialize the network during the rendering of the first frame. *Choices*: `content`, `random`, `style`. *Default*: `random`
380 | * `--init_frame_type`: Image used to initialize the network during every rendering after the first frame. *Choices*: `prev_warped`, `prev`, `content`, `random`, `style`. *Default*: `prev_warped`
381 | * `--video_input_dir`: Relative or absolute directory path to input frames. *Default*: `./video_input`
382 | * `--video_output_dir`: Relative or absolute directory path to write output frames to. *Default*: `./video_output`
383 | * `--content_frame_frmt`: Format string of input frames. *Default*: `frame_{}.png`
384 | * `--backward_optical_flow_frmt`: Format string of backward optical flow files. *Default*: `backward_{}_{}.flo`
385 | * `--forward_optical_flow_frmt`: Format string of forward optical flow files. *Default*: `forward_{}_{}.flo`
386 | * `--content_weights_frmt`: Format string of optical flow consistency files. *Default*: `reliable_{}_{}.txt`
387 | * `--prev_frame_indices`: Previous frames to consider for long-term temporal consistency. *Default*: `1`
388 | * `--first_frame_iterations`: Maximum number of optimizer iterations of the first frame. *Default*: `2000`
389 | * `--frame_iterations`: Maximum number of optimizer iterations for each frame after the first frame. *Default*: `800`
390 |
391 | ## Questions and Errata
392 |
393 | Send questions or issues:
394 |
396 | ## Memory
397 | By default, `neural-style-tf` uses the NVIDIA cuDNN GPU backend for convolutions and L-BFGS for optimization.
398 | These produce better and faster results, but can consume a lot of memory. You can reduce memory usage with the following:
399 |
400 | * **Use Adam**: Add the flag `--optimizer adam` to use Adam instead of L-BFGS. This should significantly
401 | reduce memory usage, but will require tuning other parameters for good results; in particular, experiment
402 | with different values of `--learning_rate`, `--content_weight`, and `--style_weight` (see the sketch after this list).
403 | * **Reduce image size**: You can reduce the size of the generated image with the `--max_size` argument.
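
For reference, a minimal sketch of the two optimizer setups on a toy loss (TensorFlow 1.x API; `ScipyOptimizerInterface` lived in `tf.contrib.opt`):

```python
import tensorflow as tf

x = tf.Variable(tf.random_normal([2]))            # stand-in for the image pixels
loss = tf.reduce_sum(tf.square(x - [3.0, -1.0]))  # stand-in for the total loss

# Adam: lower memory, requires a tuned --learning_rate
adam_step = tf.train.AdamOptimizer(learning_rate=1e0).minimize(loss)

# L-BFGS via SciPy: usually better results, more memory
lbfgs = tf.contrib.opt.ScipyOptimizerInterface(
    loss, method='L-BFGS-B', options={'maxiter': 1000})

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    lbfgs.minimize(sess)
```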
404 |
405 | ## Implementation Details
406 | All images were rendered on a machine with:
407 | * **CPU:** Intel Core i7-6800K @ 3.40GHz × 12
408 | * **GPU:** NVIDIA GeForce GTX 1080/PCIe/SSE2
409 | * **OS:** Linux Ubuntu 16.04.1 LTS 64-bit
410 | * **CUDA:** 8.0
411 | * **python:** 2.7.12
412 | * **tensorflow:** 0.10.0rc
413 | * **opencv:** 2.4.9.1
414 |
415 | ## Acknowledgements
416 |
417 | The implementation is based on the projects:
418 | * Torch (Lua) implementation 'neural-style' by [jcjohnson](https://github.com/jcjohnson)
419 | * Torch (Lua) implementation 'artistic-videos' by [manuelruder](https://github.com/manuelruder)
420 |
421 | Source video frames were obtained from:
422 | * [MPI Sintel Flow Dataset](http://sintel.is.tue.mpg.de/)
423 |
424 | Artistic images were created by the modern artists:
425 | * [Alex Grey](http://alexgrey.com/)
426 | * [Minjae Lee](http://www.grenomj.com/)
427 | * [Leonid Afremov](https://afremov.com/)
428 | * [Françoise Nielly](http://www.francoise-nielly.com/)
429 | * [James Jean](http://www.jamesjean.com/)
430 | * [Ben Giles](https://benlewisgiles.format.com/)
431 | * [Callie Fink](http://calliefink.deviantart.com/)
432 | * [H.R. Giger](https://en.wikipedia.org/wiki/H._R._Giger)
433 | * [Voka](http://www.voka.at/)
434 |
435 | Artistic images were created by the popular historical artists:
436 | * [Vincent Van Gogh](https://www.wikiart.org/en/vincent-van-gogh)
437 | * [Wassily Kandinsky](https://www.wikiart.org/en/wassily-kandinsky)
438 | * [Georgia O'Keeffe](http://www.georgiaokeeffe.net/)
439 | * [Jean-Michel Basquiat](http://basquiat.com/)
440 | * [Édouard Manet](http://www.manet.org/)
441 | * [Pablo Picasso](https://www.wikiart.org/en/pablo-picasso)
442 | * [Joseph Mallord William Turner](https://en.wikipedia.org/wiki/J._M._W._Turner)
443 | * [Frida Kahlo](https://en.wikipedia.org/wiki/Frida_Kahlo)
444 |
445 | Bash shell scripts for testing were created by my brother [Sheldon Smith](http://www.imdb.com/name/nm4328496/).
446 |
447 | ## Citation
448 |
449 | If you find this code useful for your research, please cite:
450 |
451 | ```
452 | @misc{Smith2016,
453 | author = {Smith, Cameron},
454 | title = {neural-style-tf},
455 | year = {2016},
456 | publisher = {GitHub},
457 | journal = {GitHub repository},
458 | howpublished = {\url{https://github.com/cysmith/neural-style-tf}},
459 | }
460 | ```
461 |
--------------------------------------------------------------------------------
/examples/equations/content.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/equations/content.png
--------------------------------------------------------------------------------
/examples/equations/email.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/equations/email.png
--------------------------------------------------------------------------------
/examples/equations/plot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/equations/plot.png
--------------------------------------------------------------------------------
/examples/gatys_figure/tubingen.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/gatys_figure/tubingen.png
--------------------------------------------------------------------------------
/examples/gatys_figure/tubingen_kandinsky.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/gatys_figure/tubingen_kandinsky.png
--------------------------------------------------------------------------------
/examples/gatys_figure/tubingen_picasso.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/gatys_figure/tubingen_picasso.png
--------------------------------------------------------------------------------
/examples/gatys_figure/tubingen_scream.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/gatys_figure/tubingen_scream.png
--------------------------------------------------------------------------------
/examples/gatys_figure/tubingen_shipwreck.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/gatys_figure/tubingen_shipwreck.png
--------------------------------------------------------------------------------
/examples/gatys_figure/tubingen_starry_night.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/gatys_figure/tubingen_starry_night.png
--------------------------------------------------------------------------------
/examples/initialization/init_content.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/initialization/init_content.png
--------------------------------------------------------------------------------
/examples/initialization/init_random_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/initialization/init_random_0.png
--------------------------------------------------------------------------------
/examples/initialization/init_random_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/initialization/init_random_1.png
--------------------------------------------------------------------------------
/examples/initialization/init_random_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/initialization/init_random_2.png
--------------------------------------------------------------------------------
/examples/initialization/init_random_3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/initialization/init_random_3.png
--------------------------------------------------------------------------------
/examples/initialization/init_random_4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/initialization/init_random_4.png
--------------------------------------------------------------------------------
/examples/initialization/init_style.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/initialization/init_style.png
--------------------------------------------------------------------------------
/examples/layers/conv1_1_1e2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv1_1_1e2.png
--------------------------------------------------------------------------------
/examples/layers/conv1_1_1e3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv1_1_1e3.png
--------------------------------------------------------------------------------
/examples/layers/conv1_1_1e4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv1_1_1e4.png
--------------------------------------------------------------------------------
/examples/layers/conv1_1_1e5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv1_1_1e5.png
--------------------------------------------------------------------------------
/examples/layers/conv2_1_1e2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv2_1_1e2.png
--------------------------------------------------------------------------------
/examples/layers/conv2_1_1e3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv2_1_1e3.png
--------------------------------------------------------------------------------
/examples/layers/conv2_1_1e4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv2_1_1e4.png
--------------------------------------------------------------------------------
/examples/layers/conv2_1_1e5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv2_1_1e5.png
--------------------------------------------------------------------------------
/examples/layers/conv3_1_1e2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv3_1_1e2.png
--------------------------------------------------------------------------------
/examples/layers/conv3_1_1e3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv3_1_1e3.png
--------------------------------------------------------------------------------
/examples/layers/conv3_1_1e4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv3_1_1e4.png
--------------------------------------------------------------------------------
/examples/layers/conv3_1_1e5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv3_1_1e5.png
--------------------------------------------------------------------------------
/examples/layers/conv4_1_1e2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv4_1_1e2.png
--------------------------------------------------------------------------------
/examples/layers/conv4_1_1e3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv4_1_1e3.png
--------------------------------------------------------------------------------
/examples/layers/conv4_1_1e4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv4_1_1e4.png
--------------------------------------------------------------------------------
/examples/layers/conv4_1_1e5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv4_1_1e5.png
--------------------------------------------------------------------------------
/examples/layers/conv5_1_1e2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv5_1_1e2.png
--------------------------------------------------------------------------------
/examples/layers/conv5_1_1e3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv5_1_1e3.png
--------------------------------------------------------------------------------
/examples/layers/conv5_1_1e4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv5_1_1e4.png
--------------------------------------------------------------------------------
/examples/layers/conv5_1_1e5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/conv5_1_1e5.png
--------------------------------------------------------------------------------
/examples/layers/relu1_1_1e2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu1_1_1e2.png
--------------------------------------------------------------------------------
/examples/layers/relu1_1_1e3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu1_1_1e3.png
--------------------------------------------------------------------------------
/examples/layers/relu1_1_1e4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu1_1_1e4.png
--------------------------------------------------------------------------------
/examples/layers/relu1_1_1e5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu1_1_1e5.png
--------------------------------------------------------------------------------
/examples/layers/relu2_1_1e2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu2_1_1e2.png
--------------------------------------------------------------------------------
/examples/layers/relu2_1_1e3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu2_1_1e3.png
--------------------------------------------------------------------------------
/examples/layers/relu2_1_1e4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu2_1_1e4.png
--------------------------------------------------------------------------------
/examples/layers/relu2_1_1e5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu2_1_1e5.png
--------------------------------------------------------------------------------
/examples/layers/relu3_1_1e2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu3_1_1e2.png
--------------------------------------------------------------------------------
/examples/layers/relu3_1_1e3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu3_1_1e3.png
--------------------------------------------------------------------------------
/examples/layers/relu3_1_1e4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu3_1_1e4.png
--------------------------------------------------------------------------------
/examples/layers/relu3_1_1e5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu3_1_1e5.png
--------------------------------------------------------------------------------
/examples/layers/relu4_1_1e2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu4_1_1e2.png
--------------------------------------------------------------------------------
/examples/layers/relu4_1_1e3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu4_1_1e3.png
--------------------------------------------------------------------------------
/examples/layers/relu4_1_1e4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu4_1_1e4.png
--------------------------------------------------------------------------------
/examples/layers/relu4_1_1e5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu4_1_1e5.png
--------------------------------------------------------------------------------
/examples/layers/relu5_1_1e2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu5_1_1e2.png
--------------------------------------------------------------------------------
/examples/layers/relu5_1_1e3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu5_1_1e3.png
--------------------------------------------------------------------------------
/examples/layers/relu5_1_1e4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu5_1_1e4.png
--------------------------------------------------------------------------------
/examples/layers/relu5_1_1e5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/layers/relu5_1_1e5.png
--------------------------------------------------------------------------------
/examples/lions/32_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/lions/32_output.png
--------------------------------------------------------------------------------
/examples/lions/33_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/lions/33_output.png
--------------------------------------------------------------------------------
/examples/lions/42_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/lions/42_output.png
--------------------------------------------------------------------------------
/examples/lions/basquiat_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/lions/basquiat_output.png
--------------------------------------------------------------------------------
/examples/lions/calliefink_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/lions/calliefink_output.png
--------------------------------------------------------------------------------
/examples/lions/content_style.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/lions/content_style.png
--------------------------------------------------------------------------------
/examples/lions/giger_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/lions/giger_output.png
--------------------------------------------------------------------------------
/examples/lions/kandinsky_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/lions/kandinsky_output.png
--------------------------------------------------------------------------------
/examples/lions/styles/basquiat_crop.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/lions/styles/basquiat_crop.jpg
--------------------------------------------------------------------------------
/examples/lions/styles/calliefink_crop.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/lions/styles/calliefink_crop.jpg
--------------------------------------------------------------------------------
/examples/lions/styles/giger_crop.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/lions/styles/giger_crop.jpg
--------------------------------------------------------------------------------
/examples/lions/styles/kandinsky_crop.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/lions/styles/kandinsky_crop.jpg
--------------------------------------------------------------------------------
/examples/lions/styles/matisse_crop.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/lions/styles/matisse_crop.jpg
--------------------------------------------------------------------------------
/examples/lions/styles/water_lilies_crop.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/lions/styles/water_lilies_crop.jpg
--------------------------------------------------------------------------------
/examples/lions/styles/wave_crop.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/lions/styles/wave_crop.jpg
--------------------------------------------------------------------------------
/examples/lions/wave_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/lions/wave_output.png
--------------------------------------------------------------------------------
/examples/multiple_styles/tubingen_afremov_grey.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/multiple_styles/tubingen_afremov_grey.png
--------------------------------------------------------------------------------
/examples/multiple_styles/tubingen_basquiat_nielly.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/multiple_styles/tubingen_basquiat_nielly.png
--------------------------------------------------------------------------------
/examples/multiple_styles/tubingen_scream_kandinsky.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/multiple_styles/tubingen_scream_kandinsky.png
--------------------------------------------------------------------------------
/examples/multiple_styles/tubingen_seated_kandinsky.png.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/multiple_styles/tubingen_seated_kandinsky.png.png
--------------------------------------------------------------------------------
/examples/multiple_styles/tubingen_starry_scream.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/multiple_styles/tubingen_starry_scream.png
--------------------------------------------------------------------------------
/examples/multiple_styles/tubingen_starry_seated.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/multiple_styles/tubingen_starry_seated.png
--------------------------------------------------------------------------------
/examples/original_colors/garden.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/original_colors/garden.png
--------------------------------------------------------------------------------
/examples/original_colors/garden_starry.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/original_colors/garden_starry.png
--------------------------------------------------------------------------------
/examples/original_colors/garden_starry_yuv.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/original_colors/garden_starry_yuv.png
--------------------------------------------------------------------------------
/examples/original_colors/new_york.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/original_colors/new_york.png
--------------------------------------------------------------------------------
/examples/original_colors/stylized.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/original_colors/stylized.png
--------------------------------------------------------------------------------
/examples/original_colors/stylized_original_colors.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/original_colors/stylized_original_colors.png
--------------------------------------------------------------------------------
/examples/pareidolic/ben_giles_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/pareidolic/ben_giles_output.png
--------------------------------------------------------------------------------
/examples/pareidolic/dark_matter_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/pareidolic/dark_matter_output.png
--------------------------------------------------------------------------------
/examples/pareidolic/flowers_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/pareidolic/flowers_output.png
--------------------------------------------------------------------------------
/examples/pareidolic/oil_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/pareidolic/oil_output.png
--------------------------------------------------------------------------------
/examples/pareidolic/styles/ben_giles.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/pareidolic/styles/ben_giles.png
--------------------------------------------------------------------------------
/examples/pareidolic/styles/dark_matter_bw.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/pareidolic/styles/dark_matter_bw.png
--------------------------------------------------------------------------------
/examples/pareidolic/styles/flowers_crop.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/pareidolic/styles/flowers_crop.jpg
--------------------------------------------------------------------------------
/examples/pareidolic/styles/oil_crop.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/pareidolic/styles/oil_crop.jpg
--------------------------------------------------------------------------------
/examples/segmentation/00017.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/00017.jpg
--------------------------------------------------------------------------------
/examples/segmentation/00017_mask.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/00017_mask.png
--------------------------------------------------------------------------------
/examples/segmentation/00017_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/00017_output.png
--------------------------------------------------------------------------------
/examples/segmentation/00110.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/00110.jpg
--------------------------------------------------------------------------------
/examples/segmentation/00110_mask.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/00110_mask.png
--------------------------------------------------------------------------------
/examples/segmentation/00110_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/00110_output.png
--------------------------------------------------------------------------------
/examples/segmentation/00768.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/00768.jpg
--------------------------------------------------------------------------------
/examples/segmentation/00768_mask.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/00768_mask.png
--------------------------------------------------------------------------------
/examples/segmentation/00768_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/00768_output.png
--------------------------------------------------------------------------------
/examples/segmentation/02270.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/02270.jpg
--------------------------------------------------------------------------------
/examples/segmentation/02270_mask_face.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/02270_mask_face.png
--------------------------------------------------------------------------------
/examples/segmentation/02270_mask_face_inv.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/02270_mask_face_inv.png
--------------------------------------------------------------------------------
/examples/segmentation/02270_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/02270_output.png
--------------------------------------------------------------------------------
/examples/segmentation/02390.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/02390.jpg
--------------------------------------------------------------------------------
/examples/segmentation/02390_mask.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/02390_mask.png
--------------------------------------------------------------------------------
/examples/segmentation/02390_mask_inv.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/02390_mask_inv.png
--------------------------------------------------------------------------------
/examples/segmentation/02390_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/02390_output.png
--------------------------------------------------------------------------------
/examples/segmentation/02630.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/02630.png
--------------------------------------------------------------------------------
/examples/segmentation/02630_mask.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/02630_mask.png
--------------------------------------------------------------------------------
/examples/segmentation/02630_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/02630_output.png
--------------------------------------------------------------------------------
/examples/segmentation/basquiat.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/basquiat.png
--------------------------------------------------------------------------------
/examples/segmentation/frida.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/frida.png
--------------------------------------------------------------------------------
/examples/segmentation/okeffe_iris.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/okeffe_iris.png
--------------------------------------------------------------------------------
/examples/segmentation/okeffe_red_canna.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/segmentation/okeffe_red_canna.png
--------------------------------------------------------------------------------
/examples/style_content_tradeoff/okeffe.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/style_content_tradeoff/okeffe.jpg
--------------------------------------------------------------------------------
/examples/style_content_tradeoff/okeffe_10.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/style_content_tradeoff/okeffe_10.png
--------------------------------------------------------------------------------
/examples/style_content_tradeoff/okeffe_100.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/style_content_tradeoff/okeffe_100.png
--------------------------------------------------------------------------------
/examples/style_content_tradeoff/okeffe_1000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/style_content_tradeoff/okeffe_1000.png
--------------------------------------------------------------------------------
/examples/style_content_tradeoff/okeffe_10000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/style_content_tradeoff/okeffe_10000.png
--------------------------------------------------------------------------------
/examples/style_content_tradeoff/output_100000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/style_content_tradeoff/output_100000.png
--------------------------------------------------------------------------------
/examples/style_content_tradeoff/output_1000000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/style_content_tradeoff/output_1000000.png
--------------------------------------------------------------------------------
/examples/style_interpolation/golden_gate_scream_3_starry_7.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/style_interpolation/golden_gate_scream_3_starry_7.png
--------------------------------------------------------------------------------
/examples/style_interpolation/golden_gate_scream_5_starry_5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/style_interpolation/golden_gate_scream_5_starry_5.png
--------------------------------------------------------------------------------
/examples/style_interpolation/golden_gate_scream_7_starry_3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/style_interpolation/golden_gate_scream_7_starry_3.png
--------------------------------------------------------------------------------
/examples/style_interpolation/taj_mahal_afremov_grey_2_8.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/style_interpolation/taj_mahal_afremov_grey_2_8.png
--------------------------------------------------------------------------------
/examples/style_interpolation/taj_mahal_afremov_grey_5_5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/style_interpolation/taj_mahal_afremov_grey_5_5.png
--------------------------------------------------------------------------------
/examples/style_interpolation/taj_mahal_afremov_grey_8_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/style_interpolation/taj_mahal_afremov_grey_8_2.png
--------------------------------------------------------------------------------
/examples/style_interpolation/taj_mahal_scream_2_starry_8.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/style_interpolation/taj_mahal_scream_2_starry_8.png
--------------------------------------------------------------------------------
/examples/style_interpolation/taj_mahal_scream_5_starry_5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/style_interpolation/taj_mahal_scream_5_starry_5.png
--------------------------------------------------------------------------------
/examples/style_interpolation/taj_mahal_scream_8_starry_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/style_interpolation/taj_mahal_scream_8_starry_2.png
--------------------------------------------------------------------------------
/examples/video/input.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/video/input.gif
--------------------------------------------------------------------------------
/examples/video/opt_flow.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/video/opt_flow.gif
--------------------------------------------------------------------------------
/examples/video/output.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/video/output.gif
--------------------------------------------------------------------------------
/examples/video/weights.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/examples/video/weights.gif
--------------------------------------------------------------------------------
/image_input/face.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/image_input/face.jpg
--------------------------------------------------------------------------------
/image_input/face_mask.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/image_input/face_mask.png
--------------------------------------------------------------------------------
/image_input/face_mask_inv.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/image_input/face_mask_inv.png
--------------------------------------------------------------------------------
/image_input/golden_gate.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/image_input/golden_gate.jpg
--------------------------------------------------------------------------------
/image_input/lion.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/image_input/lion.jpg
--------------------------------------------------------------------------------
/image_input/taj_mahal.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/image_input/taj_mahal.jpg
--------------------------------------------------------------------------------
/image_input/tubingen.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/image_input/tubingen.jpg
--------------------------------------------------------------------------------
/neural_style.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | import scipy.io
4 | import argparse
5 | import struct
6 | import errno
7 | import time
8 | import cv2
9 | import os
10 |
11 | '''
12 | parsing and configuration
13 | '''
14 | def parse_args():
15 |
16 |   desc = "TensorFlow implementation of 'A Neural Algorithm of Artistic Style'"
17 | parser = argparse.ArgumentParser(description=desc)
18 |
19 | # options for single image
20 | parser.add_argument('--verbose', action='store_true',
21 |     help='Boolean flag indicating whether status messages should be printed to the console.')
22 |
23 | parser.add_argument('--img_name', type=str,
24 | default='result',
25 | help='Filename of the output image.')
26 |
27 | parser.add_argument('--style_imgs', nargs='+', type=str,
28 | help='Filenames of the style images (example: starry-night.jpg)',
29 | required=True)
30 |
31 | parser.add_argument('--style_imgs_weights', nargs='+', type=float,
32 | default=[1.0],
33 | help='Interpolation weights of each of the style images. (example: 0.5 0.5)')
34 |
35 | parser.add_argument('--content_img', type=str,
36 | help='Filename of the content image (example: lion.jpg)')
37 |
38 | parser.add_argument('--style_imgs_dir', type=str,
39 | default='./styles',
40 | help='Directory path to the style images. (default: %(default)s)')
41 |
42 | parser.add_argument('--content_img_dir', type=str,
43 | default='./image_input',
44 | help='Directory path to the content image. (default: %(default)s)')
45 |
46 | parser.add_argument('--init_img_type', type=str,
47 | default='content',
48 | choices=['random', 'content', 'style'],
49 | help='Image used to initialize the network. (default: %(default)s)')
50 |
51 | parser.add_argument('--max_size', type=int,
52 | default=512,
53 | help='Maximum width or height of the input images. (default: %(default)s)')
54 |
55 | parser.add_argument('--content_weight', type=float,
56 | default=5e0,
57 | help='Weight for the content loss function. (default: %(default)s)')
58 |
59 | parser.add_argument('--style_weight', type=float,
60 | default=1e4,
61 | help='Weight for the style loss function. (default: %(default)s)')
62 |
63 | parser.add_argument('--tv_weight', type=float,
64 | default=1e-3,
65 |     help='Weight for the total variation loss. Set small (e.g. 1e-3). (default: %(default)s)')
66 |
67 | parser.add_argument('--temporal_weight', type=float,
68 | default=2e2,
69 | help='Weight for the temporal loss function. (default: %(default)s)')
70 |
71 | parser.add_argument('--content_loss_function', type=int,
72 | default=1,
73 | choices=[1, 2, 3],
74 | help='Different constants for the content layer loss function. (default: %(default)s)')
75 |
76 | parser.add_argument('--content_layers', nargs='+', type=str,
77 | default=['conv4_2'],
78 | help='VGG19 layers used for the content image. (default: %(default)s)')
79 |
80 | parser.add_argument('--style_layers', nargs='+', type=str,
81 | default=['relu1_1', 'relu2_1', 'relu3_1', 'relu4_1', 'relu5_1'],
82 | help='VGG19 layers used for the style image. (default: %(default)s)')
83 |
84 | parser.add_argument('--style_scale', type=float,
85 | default=1.0,
86 |     help='Scale of the style image. (default: %(default)s)')
87 |
88 | parser.add_argument('--content_layer_weights', nargs='+', type=float,
89 | default=[1.0],
90 | help='Contributions (weights) of each content layer to loss. (default: %(default)s)')
91 |
92 | parser.add_argument('--style_layer_weights', nargs='+', type=float,
93 | default=[0.2, 0.2, 0.2, 0.2, 0.2],
94 | help='Contributions (weights) of each style layer to loss. (default: %(default)s)')
95 |
96 | parser.add_argument('--original_colors', action='store_true',
97 | help='Transfer the style but not the colors.')
98 |
99 | parser.add_argument('--color_convert_type', type=str,
100 | default='yuv',
101 | choices=['yuv', 'ycrcb', 'luv', 'lab'],
102 | help='Color space for conversion to original colors (default: %(default)s)')
103 |
104 | parser.add_argument('--color_convert_time', type=str,
105 | default='after',
106 | choices=['after', 'before'],
107 | help='Time (before or after) to convert to original colors (default: %(default)s)')
108 |
109 | parser.add_argument('--style_mask', action='store_true',
110 | help='Transfer the style to masked regions.')
111 |
112 | parser.add_argument('--style_mask_imgs', nargs='+', type=str,
113 | default=None,
114 | help='Filenames of the style mask images (example: face_mask.png) (default: %(default)s)')
115 |
116 | parser.add_argument('--noise_ratio', type=float,
117 | default=1.0,
118 | help="Interpolation value between the content image and noise image if the network is initialized with 'random'.")
119 |
120 | parser.add_argument('--seed', type=int,
121 | default=0,
122 | help='Seed for the random number generator. (default: %(default)s)')
123 |
124 | parser.add_argument('--model_weights', type=str,
125 | default='imagenet-vgg-verydeep-19.mat',
126 | help='Weights and biases of the VGG-19 network.')
127 |
128 | parser.add_argument('--pooling_type', type=str,
129 | default='avg',
130 | choices=['avg', 'max'],
131 | help='Type of pooling in convolutional neural network. (default: %(default)s)')
132 |
133 | parser.add_argument('--device', type=str,
134 | default='/gpu:0',
135 | choices=['/gpu:0', '/cpu:0'],
136 | help='GPU or CPU mode. GPU mode requires NVIDIA CUDA. (default|recommended: %(default)s)')
137 |
138 | parser.add_argument('--img_output_dir', type=str,
139 | default='./image_output',
140 | help='Relative or absolute directory path to output image and data.')
141 |
142 | # optimizations
143 | parser.add_argument('--optimizer', type=str,
144 | default='lbfgs',
145 | choices=['lbfgs', 'adam'],
146 | help='Loss minimization optimizer. L-BFGS gives better results. Adam uses less memory. (default|recommended: %(default)s)')
147 |
148 | parser.add_argument('--learning_rate', type=float,
149 | default=1e0,
150 | help='Learning rate parameter for the Adam optimizer. (default: %(default)s)')
151 |
152 | parser.add_argument('--max_iterations', type=int,
153 | default=1000,
154 | help='Max number of iterations for the Adam or L-BFGS optimizer. (default: %(default)s)')
155 |
156 | parser.add_argument('--print_iterations', type=int,
157 | default=50,
158 | help='Number of iterations between optimizer print statements. (default: %(default)s)')
159 |
160 | # options for video frames
161 | parser.add_argument('--video', action='store_true',
162 | help='Boolean flag indicating if the user is generating a video.')
163 |
164 | parser.add_argument('--start_frame', type=int,
165 | default=1,
166 | help='First frame number.')
167 |
168 | parser.add_argument('--end_frame', type=int,
169 | default=1,
170 | help='Last frame number.')
171 |
172 | parser.add_argument('--first_frame_type', type=str,
173 | choices=['random', 'content', 'style'],
174 | default='content',
175 | help='Image used to initialize the network during the rendering of the first frame.')
176 |
177 | parser.add_argument('--init_frame_type', type=str,
178 | choices=['prev_warped', 'prev', 'random', 'content', 'style'],
179 | default='prev_warped',
180 |     help='Image used to initialize the network for each frame rendered after the first.')
181 |
182 | parser.add_argument('--video_input_dir', type=str,
183 | default='./video_input',
184 | help='Relative or absolute directory path to input frames.')
185 |
186 | parser.add_argument('--video_output_dir', type=str,
187 | default='./video_output',
188 | help='Relative or absolute directory path to output frames.')
189 |
190 | parser.add_argument('--content_frame_frmt', type=str,
191 | default='frame_{}.ppm',
192 | help='Filename format of the input content frames.')
193 |
194 |   parser.add_argument('--zfill', type=int,
195 |     default=4,
196 |     help='Number of digits used to zero-pad the frame numbers in filenames. (default: %(default)s)')
197 |
198 | parser.add_argument('--backward_optical_flow_frmt', type=str,
199 | default='backward_{}_{}.flo',
200 | help='Filename format of the backward optical flow files.')
201 |
202 | parser.add_argument('--forward_optical_flow_frmt', type=str,
203 | default='forward_{}_{}.flo',
204 |     help='Filename format of the forward optical flow files.')
205 |
206 | parser.add_argument('--content_weights_frmt', type=str,
207 | default='reliable_{}_{}.txt',
208 | help='Filename format of the optical flow consistency files.')
209 |
210 | parser.add_argument('--prev_frame_indices', nargs='+', type=int,
211 | default=[1],
212 | help='Previous frames to consider for longterm temporal consistency.')
213 |
214 | parser.add_argument('--first_frame_iterations', type=int,
215 | default=2000,
216 | help='Maximum number of optimizer iterations of the first frame. (default: %(default)s)')
217 |
218 | parser.add_argument('--frame_iterations', type=int,
219 | default=800,
220 | help='Maximum number of optimizer iterations for each frame after the first frame. (default: %(default)s)')
221 |
222 | args = parser.parse_args()
223 |
224 | # normalize weights
225 | args.style_layer_weights = normalize(args.style_layer_weights)
226 | args.content_layer_weights = normalize(args.content_layer_weights)
227 | args.style_imgs_weights = normalize(args.style_imgs_weights)
228 |
229 | # create directories for output
230 | if args.video:
231 | maybe_make_directory(args.video_output_dir)
232 | else:
233 | maybe_make_directory(args.img_output_dir)
234 |
235 | return args
236 |
237 | '''
238 | pre-trained vgg19 convolutional neural network
239 | remark: layers are manually initialized for clarity.
240 | '''
241 |
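    | # Note on the indexing used by get_weights()/get_bias() below: the MatConvNet
    | # .mat file stores each conv layer's kernels at vgg_layers[i][0][0][2][0][0] and
    | # its biases at vgg_layers[i][0][0][2][0][1]; the indices 0, 2, 5, 7, 10, ...
    | # skip over the interleaved relu and pool entries of the VGG-19 layer list.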
242 | def build_model(input_img):
243 | if args.verbose: print('\nBUILDING VGG-19 NETWORK')
244 | net = {}
245 | _, h, w, d = input_img.shape
246 |
247 | if args.verbose: print('loading model weights...')
248 | vgg_rawnet = scipy.io.loadmat(args.model_weights)
249 | vgg_layers = vgg_rawnet['layers'][0]
250 | if args.verbose: print('constructing layers...')
251 | net['input'] = tf.Variable(np.zeros((1, h, w, d), dtype=np.float32))
252 |
253 | if args.verbose: print('LAYER GROUP 1')
254 | net['conv1_1'] = conv_layer('conv1_1', net['input'], W=get_weights(vgg_layers, 0))
255 | net['relu1_1'] = relu_layer('relu1_1', net['conv1_1'], b=get_bias(vgg_layers, 0))
256 |
257 | net['conv1_2'] = conv_layer('conv1_2', net['relu1_1'], W=get_weights(vgg_layers, 2))
258 | net['relu1_2'] = relu_layer('relu1_2', net['conv1_2'], b=get_bias(vgg_layers, 2))
259 |
260 | net['pool1'] = pool_layer('pool1', net['relu1_2'])
261 |
262 | if args.verbose: print('LAYER GROUP 2')
263 | net['conv2_1'] = conv_layer('conv2_1', net['pool1'], W=get_weights(vgg_layers, 5))
264 | net['relu2_1'] = relu_layer('relu2_1', net['conv2_1'], b=get_bias(vgg_layers, 5))
265 |
266 | net['conv2_2'] = conv_layer('conv2_2', net['relu2_1'], W=get_weights(vgg_layers, 7))
267 | net['relu2_2'] = relu_layer('relu2_2', net['conv2_2'], b=get_bias(vgg_layers, 7))
268 |
269 | net['pool2'] = pool_layer('pool2', net['relu2_2'])
270 |
271 | if args.verbose: print('LAYER GROUP 3')
272 | net['conv3_1'] = conv_layer('conv3_1', net['pool2'], W=get_weights(vgg_layers, 10))
273 | net['relu3_1'] = relu_layer('relu3_1', net['conv3_1'], b=get_bias(vgg_layers, 10))
274 |
275 | net['conv3_2'] = conv_layer('conv3_2', net['relu3_1'], W=get_weights(vgg_layers, 12))
276 | net['relu3_2'] = relu_layer('relu3_2', net['conv3_2'], b=get_bias(vgg_layers, 12))
277 |
278 | net['conv3_3'] = conv_layer('conv3_3', net['relu3_2'], W=get_weights(vgg_layers, 14))
279 | net['relu3_3'] = relu_layer('relu3_3', net['conv3_3'], b=get_bias(vgg_layers, 14))
280 |
281 | net['conv3_4'] = conv_layer('conv3_4', net['relu3_3'], W=get_weights(vgg_layers, 16))
282 | net['relu3_4'] = relu_layer('relu3_4', net['conv3_4'], b=get_bias(vgg_layers, 16))
283 |
284 | net['pool3'] = pool_layer('pool3', net['relu3_4'])
285 |
286 | if args.verbose: print('LAYER GROUP 4')
287 | net['conv4_1'] = conv_layer('conv4_1', net['pool3'], W=get_weights(vgg_layers, 19))
288 | net['relu4_1'] = relu_layer('relu4_1', net['conv4_1'], b=get_bias(vgg_layers, 19))
289 |
290 | net['conv4_2'] = conv_layer('conv4_2', net['relu4_1'], W=get_weights(vgg_layers, 21))
291 | net['relu4_2'] = relu_layer('relu4_2', net['conv4_2'], b=get_bias(vgg_layers, 21))
292 |
293 | net['conv4_3'] = conv_layer('conv4_3', net['relu4_2'], W=get_weights(vgg_layers, 23))
294 | net['relu4_3'] = relu_layer('relu4_3', net['conv4_3'], b=get_bias(vgg_layers, 23))
295 |
296 | net['conv4_4'] = conv_layer('conv4_4', net['relu4_3'], W=get_weights(vgg_layers, 25))
297 | net['relu4_4'] = relu_layer('relu4_4', net['conv4_4'], b=get_bias(vgg_layers, 25))
298 |
299 | net['pool4'] = pool_layer('pool4', net['relu4_4'])
300 |
301 | if args.verbose: print('LAYER GROUP 5')
302 | net['conv5_1'] = conv_layer('conv5_1', net['pool4'], W=get_weights(vgg_layers, 28))
303 | net['relu5_1'] = relu_layer('relu5_1', net['conv5_1'], b=get_bias(vgg_layers, 28))
304 |
305 | net['conv5_2'] = conv_layer('conv5_2', net['relu5_1'], W=get_weights(vgg_layers, 30))
306 | net['relu5_2'] = relu_layer('relu5_2', net['conv5_2'], b=get_bias(vgg_layers, 30))
307 |
308 | net['conv5_3'] = conv_layer('conv5_3', net['relu5_2'], W=get_weights(vgg_layers, 32))
309 | net['relu5_3'] = relu_layer('relu5_3', net['conv5_3'], b=get_bias(vgg_layers, 32))
310 |
311 | net['conv5_4'] = conv_layer('conv5_4', net['relu5_3'], W=get_weights(vgg_layers, 34))
312 | net['relu5_4'] = relu_layer('relu5_4', net['conv5_4'], b=get_bias(vgg_layers, 34))
313 |
314 | net['pool5'] = pool_layer('pool5', net['relu5_4'])
315 |
316 | return net
317 |
318 | def conv_layer(layer_name, layer_input, W):
319 | conv = tf.nn.conv2d(layer_input, W, strides=[1, 1, 1, 1], padding='SAME')
320 | if args.verbose: print('--{} | shape={} | weights_shape={}'.format(layer_name,
321 | conv.get_shape(), W.get_shape()))
322 | return conv
323 |
324 | def relu_layer(layer_name, layer_input, b):
325 | relu = tf.nn.relu(layer_input + b)
326 | if args.verbose:
327 | print('--{} | shape={} | bias_shape={}'.format(layer_name, relu.get_shape(),
328 | b.get_shape()))
329 | return relu
330 |
331 | def pool_layer(layer_name, layer_input):
332 | if args.pooling_type == 'avg':
333 | pool = tf.nn.avg_pool(layer_input, ksize=[1, 2, 2, 1],
334 | strides=[1, 2, 2, 1], padding='SAME')
335 | elif args.pooling_type == 'max':
336 | pool = tf.nn.max_pool(layer_input, ksize=[1, 2, 2, 1],
337 | strides=[1, 2, 2, 1], padding='SAME')
338 | if args.verbose:
339 | print('--{} | shape={}'.format(layer_name, pool.get_shape()))
340 | return pool
341 |
342 | def get_weights(vgg_layers, i):
343 | weights = vgg_layers[i][0][0][2][0][0]
344 | W = tf.constant(weights)
345 | return W
346 |
347 | def get_bias(vgg_layers, i):
348 | bias = vgg_layers[i][0][0][2][0][1]
349 | b = tf.constant(np.reshape(bias, (bias.size)))
350 | return b
351 |
352 | '''
353 | 'a neural algorithm for artistic style' loss functions
354 | '''
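    | # content_layer_loss below implements L_content = K * sum((F - P)^2), where F and P
    | # are the generated and content feature maps of a layer with M = h*w positions and
    | # N = d filters, and K is 1/(2*sqrt(N*M)), 1/(N*M), or 1/2 per --content_loss_function.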
355 | def content_layer_loss(p, x):
356 | _, h, w, d = p.get_shape()
357 | M = h.value * w.value
358 | N = d.value
359 | if args.content_loss_function == 1:
360 | K = 1. / (2. * N**0.5 * M**0.5)
361 | elif args.content_loss_function == 2:
362 | K = 1. / (N * M)
363 | elif args.content_loss_function == 3:
364 | K = 1. / 2.
365 | loss = K * tf.reduce_sum(tf.pow((x - p), 2))
366 | return loss
367 |
368 | def style_layer_loss(a, x):
369 | _, h, w, d = a.get_shape()
370 | M = h.value * w.value
371 | N = d.value
372 | A = gram_matrix(a, M, N)
373 | G = gram_matrix(x, M, N)
374 | loss = (1./(4 * N**2 * M**2)) * tf.reduce_sum(tf.pow((G - A), 2))
375 | return loss
376 |
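    | # gram_matrix reshapes a layer's activations to F with shape (M, N) and returns
    | # G = F^T F, i.e. G_ij = sum_k F_ki * F_kj, the correlations between filter responses.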
377 | def gram_matrix(x, area, depth):
378 | F = tf.reshape(x, (area, depth))
379 | G = tf.matmul(tf.transpose(F), F)
380 | return G
381 |
382 | def mask_style_layer(a, x, mask_img):
383 | _, h, w, d = a.get_shape()
384 | mask = get_mask_image(mask_img, w.value, h.value)
385 | mask = tf.convert_to_tensor(mask)
386 | tensors = []
387 | for _ in range(d.value):
388 | tensors.append(mask)
389 | mask = tf.stack(tensors, axis=2)
390 | mask = tf.stack(mask, axis=0)
391 | mask = tf.expand_dims(mask, 0)
392 | a = tf.multiply(a, mask)
393 | x = tf.multiply(x, mask)
394 | return a, x
395 |
396 | def sum_masked_style_losses(sess, net, style_imgs):
397 | total_style_loss = 0.
398 | weights = args.style_imgs_weights
399 | masks = args.style_mask_imgs
400 | for img, img_weight, img_mask in zip(style_imgs, weights, masks):
401 | sess.run(net['input'].assign(img))
402 | style_loss = 0.
403 | for layer, weight in zip(args.style_layers, args.style_layer_weights):
404 | a = sess.run(net[layer])
405 | x = net[layer]
406 | a = tf.convert_to_tensor(a)
407 | a, x = mask_style_layer(a, x, img_mask)
408 | style_loss += style_layer_loss(a, x) * weight
409 | style_loss /= float(len(args.style_layers))
410 | total_style_loss += (style_loss * img_weight)
411 | total_style_loss /= float(len(style_imgs))
412 | return total_style_loss
413 |
414 | def sum_style_losses(sess, net, style_imgs):
415 | total_style_loss = 0.
416 | weights = args.style_imgs_weights
417 | for img, img_weight in zip(style_imgs, weights):
418 | sess.run(net['input'].assign(img))
419 | style_loss = 0.
420 | for layer, weight in zip(args.style_layers, args.style_layer_weights):
421 | a = sess.run(net[layer])
422 | x = net[layer]
423 | a = tf.convert_to_tensor(a)
424 | style_loss += style_layer_loss(a, x) * weight
425 | style_loss /= float(len(args.style_layers))
426 | total_style_loss += (style_loss * img_weight)
427 | total_style_loss /= float(len(style_imgs))
428 | return total_style_loss
429 |
430 | def sum_content_losses(sess, net, content_img):
431 | sess.run(net['input'].assign(content_img))
432 | content_loss = 0.
433 | for layer, weight in zip(args.content_layers, args.content_layer_weights):
434 | p = sess.run(net[layer])
435 | x = net[layer]
436 | p = tf.convert_to_tensor(p)
437 | content_loss += content_layer_loss(p, x) * weight
438 | content_loss /= float(len(args.content_layers))
439 | return content_loss
440 |
441 | '''
442 | 'artistic style transfer for videos' loss functions
443 | '''
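    | # temporal_loss below follows Ruder et al.: L_temporal = (1/D) * sum(c * (x - w)^2),
    | # where w is the previous stylized frame warped into the current one by optical flow
    | # and c is a per-pixel weight that is 1 where the flow is consistent and 0 otherwise.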
444 | def temporal_loss(x, w, c):
445 | c = c[np.newaxis,:,:,:]
446 |   D = float(np.prod(c.shape))
447 |   loss = (1. / D) * tf.reduce_sum(c * tf.square(x - w))  # weight each pixel by its flow consistency
448 | loss = tf.cast(loss, tf.float32)
449 | return loss
450 |
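    | # Long-term consistency weights (Ruder et al.): c_max = max(c_{i,i-j} - sum_{k<j} c_{i,i-k}, 0),
    | # so each pixel is constrained through at most one of the previous frames considered.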
451 | def get_longterm_weights(i, j):
452 | c_sum = 0.
453 |   for k in args.prev_frame_indices:
454 | if i - k > i - j:
455 | c_sum += get_content_weights(i, i - k)
456 | c = get_content_weights(i, i - j)
457 | c_max = tf.maximum(c - c_sum, 0.)
458 | return c_max
459 |
460 | def sum_longterm_temporal_losses(sess, net, frame, input_img):
461 |   sess.run(net['input'].assign(input_img)); x = net['input']  # keep x symbolic so the loss has gradients
462 | loss = 0.
463 |   for j in args.prev_frame_indices:
464 | prev_frame = frame - j
465 | w = get_prev_warped_frame(frame)
466 | c = get_longterm_weights(frame, prev_frame)
467 | loss += temporal_loss(x, w, c)
468 | return loss
469 |
470 | def sum_shortterm_temporal_losses(sess, net, frame, input_img):
471 |   sess.run(net['input'].assign(input_img)); x = net['input']  # keep x symbolic so the loss has gradients
472 | prev_frame = frame - 1
473 | w = get_prev_warped_frame(frame)
474 | c = get_content_weights(frame, prev_frame)
475 | loss = temporal_loss(x, w, c)
476 | return loss
477 |
478 | '''
479 | utilities and i/o
480 | '''
481 | def read_image(path):
482 | # bgr image
483 | img = cv2.imread(path, cv2.IMREAD_COLOR)
484 | check_image(img, path)
485 | img = img.astype(np.float32)
486 | img = preprocess(img)
487 | return img
488 |
489 | def write_image(path, img):
490 | img = postprocess(img)
491 | cv2.imwrite(path, img)
492 |
493 | def preprocess(img):
494 | imgpre = np.copy(img)
495 | # bgr to rgb
496 | imgpre = imgpre[...,::-1]
497 | # shape (h, w, d) to (1, h, w, d)
498 | imgpre = imgpre[np.newaxis,:,:,:]
499 | imgpre -= np.array([123.68, 116.779, 103.939]).reshape((1,1,1,3))
500 | return imgpre
501 |
502 | def postprocess(img):
503 | imgpost = np.copy(img)
504 | imgpost += np.array([123.68, 116.779, 103.939]).reshape((1,1,1,3))
505 | # shape (1, h, w, d) to (h, w, d)
506 | imgpost = imgpost[0]
507 | imgpost = np.clip(imgpost, 0, 255).astype('uint8')
508 | # rgb to bgr
509 | imgpost = imgpost[...,::-1]
510 | return imgpost
511 |
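    | # read_flow_file parses the Middlebury .flo format: a 4-byte magic tag ("PIEH"),
    | # int32 width and height, then width*height interleaved (u, v) float32 pairs.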
512 | def read_flow_file(path):
513 | with open(path, 'rb') as f:
514 | # 4 bytes header
515 | header = struct.unpack('4s', f.read(4))[0]
516 | # 4 bytes width, height
517 | w = struct.unpack('i', f.read(4))[0]
518 | h = struct.unpack('i', f.read(4))[0]
519 | flow = np.ndarray((2, h, w), dtype=np.float32)
520 | for y in range(h):
521 | for x in range(w):
522 | flow[0,y,x] = struct.unpack('f', f.read(4))[0]
523 | flow[1,y,x] = struct.unpack('f', f.read(4))[0]
524 | return flow
525 |
526 | def read_weights_file(path):
527 |   with open(path) as f: lines = f.readlines()
528 | header = list(map(int, lines[0].split(' ')))
529 | w = header[0]
530 | h = header[1]
531 | vals = np.zeros((h, w), dtype=np.float32)
532 | for i in range(1, len(lines)):
533 | line = lines[i].rstrip().split(' ')
534 | vals[i-1] = np.array(list(map(np.float32, line)))
535 | vals[i-1] = list(map(lambda x: 0. if x < 255. else 1., vals[i-1]))
536 | # expand to 3 channels
537 | weights = np.dstack([vals.astype(np.float32)] * 3)
538 | return weights
539 |
540 | def normalize(weights):
541 | denom = sum(weights)
542 | if denom > 0.:
543 | return [float(i) / denom for i in weights]
544 | else: return [0.] * len(weights)
545 |
546 | def maybe_make_directory(dir_path):
547 | if not os.path.exists(dir_path):
548 | os.makedirs(dir_path)
549 |
550 | def check_image(img, path):
551 | if img is None:
552 | raise OSError(errno.ENOENT, "No such file", path)
553 |
554 | '''
555 | rendering -- where the magic happens
556 | '''
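    | # stylize() minimizes L_total = alpha*L_content + beta*L_style + theta*L_tv
    | # (plus gamma*L_temporal for video frames) directly over the pixels of net['input'].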
557 | def stylize(content_img, style_imgs, init_img, frame=None):
558 | with tf.device(args.device), tf.compat.v1.Session() as sess:
559 | # setup network
560 | net = build_model(content_img)
561 |
562 | # style loss
563 | if args.style_mask:
564 | L_style = sum_masked_style_losses(sess, net, style_imgs)
565 | else:
566 | L_style = sum_style_losses(sess, net, style_imgs)
567 |
568 | # content loss
569 | L_content = sum_content_losses(sess, net, content_img)
570 |
571 | # denoising loss
572 | L_tv = tf.image.total_variation(net['input'])
573 |
574 | # loss weights
575 | alpha = args.content_weight
576 | beta = args.style_weight
577 | theta = args.tv_weight
578 |
579 | # total loss
580 | L_total = alpha * L_content
581 | L_total += beta * L_style
582 | L_total += theta * L_tv
583 |
584 | # video temporal loss
585 | if args.video and frame > 1:
586 | gamma = args.temporal_weight
587 | L_temporal = sum_shortterm_temporal_losses(sess, net, frame, init_img)
588 | L_total += gamma * L_temporal
589 |
590 | # optimization algorithm
591 | optimizer = get_optimizer(L_total)
592 |
593 | if args.optimizer == 'adam':
594 | minimize_with_adam(sess, net, optimizer, init_img, L_total)
595 | elif args.optimizer == 'lbfgs':
596 | minimize_with_lbfgs(sess, net, optimizer, init_img)
597 |
598 | output_img = sess.run(net['input'])
599 |
600 | if args.original_colors:
601 | output_img = convert_to_original_colors(np.copy(content_img), output_img)
602 |
603 | if args.video:
604 | write_video_output(frame, output_img)
605 | else:
606 | write_image_output(output_img, content_img, style_imgs, init_img)
607 |
608 | def minimize_with_lbfgs(sess, net, optimizer, init_img):
609 | if args.verbose: print('\nMINIMIZING LOSS USING: L-BFGS OPTIMIZER')
610 | init_op = tf.global_variables_initializer()
611 | sess.run(init_op)
612 | sess.run(net['input'].assign(init_img))
613 | optimizer.minimize(sess)
614 |
615 | def minimize_with_adam(sess, net, optimizer, init_img, loss):
616 | if args.verbose: print('\nMINIMIZING LOSS USING: ADAM OPTIMIZER')
617 | train_op = optimizer.minimize(loss)
618 | init_op = tf.global_variables_initializer()
619 | sess.run(init_op)
620 | sess.run(net['input'].assign(init_img))
621 | iterations = 0
622 | while (iterations < args.max_iterations):
623 | sess.run(train_op)
624 | if iterations % args.print_iterations == 0 and args.verbose:
625 | curr_loss = loss.eval()
626 | print("At iterate {}\tf= {}".format(iterations, curr_loss))
627 | iterations += 1
628 |
629 | def get_optimizer(loss):
630 | print_iterations = args.print_iterations if args.verbose else 0
631 | if args.optimizer == 'lbfgs':
632 | optimizer = tf.contrib.opt.ScipyOptimizerInterface(
633 | loss, method='L-BFGS-B',
634 | options={'maxiter': args.max_iterations,
635 | 'disp': print_iterations})
636 | elif args.optimizer == 'adam':
637 | optimizer = tf.train.AdamOptimizer(args.learning_rate)
638 | return optimizer
639 |
640 | def write_video_output(frame, output_img):
641 | fn = args.content_frame_frmt.format(str(frame).zfill(args.zfill))
642 | path = os.path.join(args.video_output_dir, fn)
643 | write_image(path, output_img)
644 |
645 | def write_image_output(output_img, content_img, style_imgs, init_img):
646 | out_dir = os.path.join(args.img_output_dir, str(args.max_iterations))
647 | maybe_make_directory(out_dir)
648 |   img_path = os.path.join(out_dir, args.img_name+'-'+str(args.max_iterations)+'.png')
649 | content_path = os.path.join(out_dir, 'content.png')
650 | init_path = os.path.join(out_dir, 'init.png')
651 |
652 | write_image(img_path, output_img)
653 | write_image(content_path, content_img)
654 | write_image(init_path, init_img)
655 | index = 0
656 | for style_img in style_imgs:
657 | path = os.path.join(out_dir, 'style_'+str(index)+'.png')
658 | write_image(path, style_img)
659 | index += 1
660 |
661 | # save the configuration settings
662 | out_file = os.path.join(out_dir, 'meta_data.txt')
663 | f = open(out_file, 'w')
664 | f.write('image_name: {}\n'.format(args.img_name))
665 | f.write('content: {}\n'.format(args.content_img))
666 | index = 0
667 | for style_img, weight in zip(args.style_imgs, args.style_imgs_weights):
668 | f.write('styles['+str(index)+']: {} * {}\n'.format(weight, style_img))
669 | index += 1
670 | index = 0
671 | if args.style_mask_imgs is not None:
672 | for mask in args.style_mask_imgs:
673 | f.write('style_masks['+str(index)+']: {}\n'.format(mask))
674 | index += 1
675 | f.write('init_type: {}\n'.format(args.init_img_type))
676 | f.write('content_weight: {}\n'.format(args.content_weight))
677 | f.write('style_weight: {}\n'.format(args.style_weight))
678 | f.write('tv_weight: {}\n'.format(args.tv_weight))
679 | f.write('content_layers: {}\n'.format(args.content_layers))
680 | f.write('style_layers: {}\n'.format(args.style_layers))
681 | f.write('optimizer_type: {}\n'.format(args.optimizer))
682 | f.write('max_iterations: {}\n'.format(args.max_iterations))
683 | f.write('max_image_size: {}\n'.format(args.max_size))
684 | f.close()
685 |
686 | '''
687 | image loading and processing
688 | '''
689 | def get_init_image(init_type, content_img, style_imgs, frame=None):
690 | if init_type == 'content':
691 | return content_img
692 | elif init_type == 'style':
693 | return style_imgs[0]
694 | elif init_type == 'random':
695 | init_img = get_noise_image(args.noise_ratio, content_img)
696 | return init_img
697 | # only for video frames
698 | elif init_type == 'prev':
699 | init_img = get_prev_frame(frame)
700 | return init_img
701 | elif init_type == 'prev_warped':
702 | init_img = get_prev_warped_frame(frame)
703 | return init_img
704 |
705 | def get_content_frame(frame):
706 | fn = args.content_frame_frmt.format(str(frame).zfill(args.zfill))
707 | path = os.path.join(args.video_input_dir, fn)
708 | img = read_image(path)
709 | return img
710 |
711 | def get_content_image(content_img):
712 | path = os.path.join(args.content_img_dir, content_img)
713 | # bgr image
714 | img = cv2.imread(path, cv2.IMREAD_COLOR)
715 | check_image(img, path)
716 | img = img.astype(np.float32)
717 | h, w, d = img.shape
718 | mx = args.max_size
719 | # resize if > max size
720 | if h > w and h > mx:
721 | w = (float(mx) / float(h)) * w
722 | img = cv2.resize(img, dsize=(int(w), mx), interpolation=cv2.INTER_AREA)
723 | if w > mx:
724 | h = (float(mx) / float(w)) * h
725 | img = cv2.resize(img, dsize=(mx, int(h)), interpolation=cv2.INTER_AREA)
726 | img = preprocess(img)
727 | return img
728 |
729 | def get_style_images(content_img):
730 | _, ch, cw, cd = content_img.shape
731 | mx = args.max_size
732 | style_imgs = []
733 | for style_fn in args.style_imgs:
734 | path = os.path.join(args.style_imgs_dir, style_fn)
735 | # bgr image
736 | img = cv2.imread(path, cv2.IMREAD_COLOR)
737 | check_image(img, path)
738 | img = img.astype(np.float32)
739 | sh, sw, sd = img.shape
740 |
741 | # use scale args to resize and tile image
742 | scaled_img = cv2.resize(img, dsize=(int(sw*args.style_scale), int(sh*args.style_scale)), interpolation=cv2.INTER_AREA)
743 | ssh, ssw, ssd = scaled_img.shape
744 |
745 | if ssh > ch and ssw > cw:
746 | starty = int((ssh-ch)/2)
747 | startx = int((ssw-cw)/2)
748 | img = scaled_img[starty:starty+ch, startx:startx+cw]
749 | elif ssh > ch:
750 | starty = int((ssh-ch)/2)
751 | img = scaled_img[starty:starty+ch, 0:ssw]
752 | if ssw != cw:
753 | img = cv2.copyMakeBorder(img,0,0,0,(cw-ssw),cv2.BORDER_REFLECT)
754 | elif ssw > cw:
755 | startx = int((ssw-cw)/2)
756 | img = scaled_img[0:ssh, startx:startx+cw]
757 | if ssh != ch:
758 | img = cv2.copyMakeBorder(img,0,(ch-ssh),0,0,cv2.BORDER_REFLECT)
759 | else:
760 | img = cv2.copyMakeBorder(scaled_img,0,(ch-ssh),0,(cw-ssw),cv2.BORDER_REFLECT)
761 |
762 | img = preprocess(img)
763 | style_imgs.append(img)
764 | return style_imgs
765 |
766 | def get_noise_image(noise_ratio, content_img):
767 | np.random.seed(args.seed)
768 | noise_img = np.random.uniform(-20., 20., content_img.shape).astype(np.float32)
769 | img = noise_ratio * noise_img + (1.-noise_ratio) * content_img
770 | return img
771 |
772 | def get_mask_image(mask_img, width, height):
773 | path = os.path.join(args.content_img_dir, mask_img)
774 | img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
775 | check_image(img, path)
776 | img = cv2.resize(img, dsize=(width, height), interpolation=cv2.INTER_AREA)
777 | img = img.astype(np.float32)
778 | mx = np.amax(img)
779 | img /= mx
780 | return img
781 |
782 | def get_prev_frame(frame):
783 | # previously stylized frame
784 | prev_frame = frame - 1
785 | fn = args.content_frame_frmt.format(str(prev_frame).zfill(args.zfill))
786 | path = os.path.join(args.video_output_dir, fn)
787 | img = cv2.imread(path, cv2.IMREAD_COLOR)
788 | check_image(img, path)
789 | return img
790 |
791 | def get_prev_warped_frame(frame):
792 | prev_img = get_prev_frame(frame)
793 | prev_frame = frame - 1
794 | # backwards flow: current frame -> previous frame
795 | fn = args.backward_optical_flow_frmt.format(str(frame), str(prev_frame))
796 | path = os.path.join(args.video_input_dir, fn)
797 | flow = read_flow_file(path)
798 | warped_img = warp_image(prev_img, flow).astype(np.float32)
799 | img = preprocess(warped_img)
800 | return img
801 |
802 | def get_content_weights(frame, prev_frame):
803 | forward_fn = args.content_weights_frmt.format(str(prev_frame), str(frame))
804 | backward_fn = args.content_weights_frmt.format(str(frame), str(prev_frame))
805 | forward_path = os.path.join(args.video_input_dir, forward_fn)
806 | backward_path = os.path.join(args.video_input_dir, backward_fn)
807 | forward_weights = read_weights_file(forward_path)
808 | backward_weights = read_weights_file(backward_path)
809 | return forward_weights #, backward_weights
810 |
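    | # warp_image performs a dense backward warp: each output pixel (x, y) samples the
    | # previous frame at (x + u, y + v) given by the flow field, bicubically via cv2.remap.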
811 | def warp_image(src, flow):
812 | _, h, w = flow.shape
813 | flow_map = np.zeros(flow.shape, dtype=np.float32)
814 | for y in range(h):
815 | flow_map[1,y,:] = float(y) + flow[1,y,:]
816 | for x in range(w):
817 | flow_map[0,:,x] = float(x) + flow[0,:,x]
818 | # remap pixels to optical flow
819 | dst = cv2.remap(
820 | src, flow_map[0], flow_map[1],
821 | interpolation=cv2.INTER_CUBIC, borderMode=cv2.BORDER_TRANSPARENT)
822 | return dst
823 |
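    | # Color preservation (Gatys et al., 'Preserving Color in Neural Artistic Style
    | # Transfer'): keep the luminance channel of the stylized result, substitute the two
    | # chrominance channels of the content image, then convert back to BGR.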
824 | def convert_to_original_colors(content_img, stylized_img):
825 | content_img = postprocess(content_img)
826 | stylized_img = postprocess(stylized_img)
827 | if args.color_convert_type == 'yuv':
828 | cvt_type = cv2.COLOR_BGR2YUV
829 | inv_cvt_type = cv2.COLOR_YUV2BGR
830 | elif args.color_convert_type == 'ycrcb':
831 | cvt_type = cv2.COLOR_BGR2YCR_CB
832 | inv_cvt_type = cv2.COLOR_YCR_CB2BGR
833 | elif args.color_convert_type == 'luv':
834 | cvt_type = cv2.COLOR_BGR2LUV
835 | inv_cvt_type = cv2.COLOR_LUV2BGR
836 | elif args.color_convert_type == 'lab':
837 | cvt_type = cv2.COLOR_BGR2LAB
838 | inv_cvt_type = cv2.COLOR_LAB2BGR
839 | content_cvt = cv2.cvtColor(content_img, cvt_type)
840 | stylized_cvt = cv2.cvtColor(stylized_img, cvt_type)
841 | c1, _, _ = cv2.split(stylized_cvt)
842 | _, c2, c3 = cv2.split(content_cvt)
843 | merged = cv2.merge((c1, c2, c3))
844 | dst = cv2.cvtColor(merged, inv_cvt_type).astype(np.float32)
845 | dst = preprocess(dst)
846 | return dst
847 |
848 | def render_single_image():
849 | content_img = get_content_image(args.content_img)
850 | style_imgs = get_style_images(content_img)
851 | with tf.Graph().as_default():
852 | print('\n---- RENDERING SINGLE IMAGE ----\n')
853 | init_img = get_init_image(args.init_img_type, content_img, style_imgs)
854 | tick = time.time()
855 | stylize(content_img, style_imgs, init_img)
856 | tock = time.time()
857 | print('Single image elapsed time: {}'.format(tock - tick))
858 |
859 | def render_video():
860 | for frame in range(args.start_frame, args.end_frame+1):
861 | with tf.Graph().as_default():
862 | print('\n---- RENDERING VIDEO FRAME: {}/{} ----\n'.format(frame, args.end_frame))
863 | if frame == 1:
864 | content_frame = get_content_frame(frame)
865 | style_imgs = get_style_images(content_frame)
866 | init_img = get_init_image(args.first_frame_type, content_frame, style_imgs, frame)
867 | args.max_iterations = args.first_frame_iterations
868 | tick = time.time()
869 | stylize(content_frame, style_imgs, init_img, frame)
870 | tock = time.time()
871 | print('Frame {} elapsed time: {}'.format(frame, tock - tick))
872 | else:
873 | content_frame = get_content_frame(frame)
874 | style_imgs = get_style_images(content_frame)
875 | init_img = get_init_image(args.init_frame_type, content_frame, style_imgs, frame)
876 | args.max_iterations = args.frame_iterations
877 | tick = time.time()
878 | stylize(content_frame, style_imgs, init_img, frame)
879 | tock = time.time()
880 | print('Frame {} elapsed time: {}'.format(frame, tock - tick))
881 |
882 | def main():
883 | global args
884 | args = parse_args()
885 | if args.video: render_video()
886 | else: render_single_image()
887 |
888 | if __name__ == '__main__':
889 | main()
890 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | opencv-python>=4.1.1.26
2 | scipy>=1.3.1
3 | tensorflow<2.0  # neural_style.py uses tf.contrib, which was removed in TensorFlow 2.x
4 | 
--------------------------------------------------------------------------------
/styles/kandinsky.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/styles/kandinsky.jpg
--------------------------------------------------------------------------------
/styles/seated-nude.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/styles/seated-nude.jpg
--------------------------------------------------------------------------------
/styles/shipwreck.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/styles/shipwreck.jpg
--------------------------------------------------------------------------------
/styles/starry-night.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/styles/starry-night.jpg
--------------------------------------------------------------------------------
/styles/the_scream.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/styles/the_scream.jpg
--------------------------------------------------------------------------------
/styles/woman-with-hat-matisse.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dvschultz/neural-style-tf/b599ccfdb580130e9ddd8ee1793c4143558ae5f6/styles/woman-with-hat-matisse.jpg
--------------------------------------------------------------------------------
/stylize_image.sh:
--------------------------------------------------------------------------------
1 | set -e
2 | # Get a carriage return into `cr`
3 | cr=`echo $'\n.'`
4 | cr=${cr%.}
5 |
6 | if [ "$#" -le 1 ]; then
7 |   echo "Usage: bash stylize_image.sh <path_to_content_image> <path_to_style_image>"
8 | exit 1
9 | fi
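    | # Example, using images shipped with this repository:
    | #   bash stylize_image.sh ./image_input/lion.jpg ./styles/starry-night.jpg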
10 |
11 | echo ""
12 | read -p "Did you install the required dependencies? [y/n] $cr > " dependencies
13 |
14 | if [ "$dependencies" != "y" ]; then
15 | echo "Error: Requires dependencies: tensorflow, opencv2 (python), scipy"
16 | exit 1;
17 | fi
18 |
19 | echo ""
20 | read -p "Do you have a CUDA enabled GPU? [y/n] $cr > " cuda
21 |
22 | if [ "$cuda" != "y" ]; then
23 | device='/cpu:0'
24 | else
25 | device='/gpu:0'
26 | fi
27 |
28 | # Parse arguments
29 | content_image="$1"
30 | content_dir=$(dirname "$content_image")
31 | content_filename=$(basename "$content_image")
32 |
33 | style_image="$2"
34 | style_dir=$(dirname "$style_image" )
35 | style_filename=$(basename "$style_image")
36 |
37 | echo "Rendering stylized image. This may take a while..."
38 | python neural_style.py \
39 | --content_img "${content_filename}" \
40 | --content_img_dir "${content_dir}" \
41 | --style_imgs "${style_filename}" \
42 | --style_imgs_dir "${style_dir}" \
43 | --device "${device}" \
44 | --verbose;
--------------------------------------------------------------------------------
/stylize_video.sh:
--------------------------------------------------------------------------------
1 | set -e
2 | # Get a carriage return into `cr`
3 | cr=`echo $'\n.'`
4 | cr=${cr%.}
5 |
6 | # Find out whether ffmpeg or avconv is installed on the system
7 | FFMPEG=ffmpeg
8 | command -v $FFMPEG >/dev/null 2>&1 || {
9 | FFMPEG=avconv
10 | command -v $FFMPEG >/dev/null 2>&1 || {
11 | echo >&2 "This script requires either ffmpeg or avconv installed. Aborting."; exit 1;
12 | }
13 | }
14 |
15 | if [ "$#" -le 1 ]; then
16 |   echo "Usage: bash stylize_video.sh <path_to_video> <path_to_style_image>"
17 | exit 1
18 | fi
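    | # Example (the style image ships with this repository; supply your own video file):
    | #   bash stylize_video.sh ./video_input/myvideo.mp4 ./styles/starry-night.jpg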
19 |
20 | echo ""
21 | read -p "Did you install the required dependencies? [y/n] $cr > " dependencies
22 |
23 | if [ "$dependencies" != "y" ]; then
24 | echo "Error: Requires dependencies: tensorflow, opencv2 (python), scipy"
25 | exit 1;
26 | fi
27 |
28 | echo ""
29 | read -p "Do you have a CUDA enabled GPU? [y/n] $cr > " cuda
30 |
31 | if [ "$cuda" != "y" ]; then
32 | echo "Error: GPU required to render videos in a feasible amount of time."
33 | exit 1;
34 | fi
35 |
36 | # Parse arguments
37 | content_video="$1"
38 | content_dir=$(dirname "$content_video")
39 | content_filename=$(basename "$content_video")
40 | extension="${content_filename##*.}"
41 | content_filename="${content_filename%.*}"
42 | content_filename=${content_filename//[%]/x}
43 |
44 | style_image="$2"
45 | style_dir=$(dirname "$style_image")
46 | style_filename=$(basename "$style_image")
47 |
48 | if [ ! -d "./video_input" ]; then
49 | mkdir -p ./video_input
50 | fi
51 | temp_dir="./video_input/${content_filename}"
52 |
53 | # Create output folder
54 | mkdir -p "$temp_dir"
55 |
56 | # Save frames of the video as individual image files
57 | $FFMPEG -v quiet -i "$1" "${temp_dir}/frame_%04d.ppm"
58 | eval $(ffprobe -v error -of flat=s=_ -select_streams v:0 -show_entries stream=width,height "$1")
59 | width="${streams_stream_0_width}"
60 | height="${streams_stream_0_height}"
61 | if [ "$width" -gt "$height" ]; then
62 | max_size="$width"
63 | else
64 | max_size="$height"
65 | fi
66 | num_frames=$(find "$temp_dir" -iname "*.ppm" | wc -l)
67 |
68 | echo "Computing optical flow [CPU]. This will take a while..."
69 | cd ./video_input
70 | bash make-opt-flow.sh ${content_filename}/frame_%04d.ppm ${content_filename}
71 | cd ..
72 |
73 | echo "Rendering stylized video frames [CPU & GPU]. This will take a while..."
74 | python neural_style.py --video \
75 | --video_input_dir "${temp_dir}" \
76 | --style_imgs_dir "${style_dir}" \
77 | --style_imgs "${style_filename}" \
78 | --end_frame "${num_frames}" \
79 | --max_size "${max_size}" \
80 | --verbose;
81 |
82 | # Create video from output images.
83 | echo "Converting image sequence to video. This should be quick..."
84 | $FFMPEG -v quiet -i ./video_output/frame_%04d.ppm ./video_output/${content_filename}-stylized.$extension
85 |
86 | # Clean up garbage
87 | if [ -d "${temp_dir}" ]; then
88 | rm -rf "${temp_dir}"
89 | fi
90 |
--------------------------------------------------------------------------------
/video_input/consistencyChecker/CTensor4D.h:
--------------------------------------------------------------------------------
1 | // CTensor4D
2 | // A four-dimensional array
3 | //
4 | // Author: Thomas Brox
5 | // Last change: 05.11.2001
6 | //-------------------------------------------------------------------------
7 | // Note:
8 | // There is a difference between the GNU Compiler's STL and the standard
9 | // concerning the definition and usage of string streams as well as substrings.
10 | // Thus if using a GNU Compiler you should write #define GNU_COMPILER at the
11 | // beginning of your program.
12 | //
13 | // Another Note:
14 | // Linker problems occurred in connection with <string> from the STL.
15 | // In this case you should include this file in a namespace.
16 | // Example:
17 | // namespace NTensor4D {
18 | //   #include <CTensor4D.h>
19 | // }
20 | // After including other packages you can then write:
21 | // using namespace NTensor4D;
22 |
23 | #ifndef CTENSOR4D_H
24 | #define CTENSOR4D_H
25 |
26 | #include <math.h>
27 | #include <iostream>
28 | #include <fstream>
29 | #ifdef GNU_COMPILER
30 |   #include <sstream>
31 | #else
32 |   #include <strstream>
33 | #endif
34 | #include "CTensor.h"
35 |
36 | template <class T>
37 | class CTensor4D {
38 | public:
39 | // constructor
40 | inline CTensor4D();
41 | inline CTensor4D(const int aXSize, const int aYSize, const int aZSize, const int aASize);
42 | // copy constructor
43 | CTensor4D(const CTensor4D& aCopyFrom);
44 | // constructor with implicit filling
45 | CTensor4D(const int aXSize, const int aYSize, const int aZSize, const int aASize, const T aFillValue);
46 | // destructor
47 | virtual ~CTensor4D();
48 |
49 | // Changes the size of the tensor, data will be lost
50 | void setSize(int aXSize, int aYSize, int aZSize, int aASize);
51 | // Downsamples the tensor
52 | void downsample(int aNewXSize, int aNewYSize);
53 | void downsample(int aNewXSize, int aNewYSize, int aNewZSize);
54 | // Upsamples the tensor
55 | void upsample(int aNewXSize, int aNewYSize);
56 | void upsampleBilinear(int aNewXSize, int aNewYSize);
57 | void upsampleTrilinear(int aNewXSize, int aNewYSize, int aNewZSize);
58 | // Fills the tensor with the value aValue (see also operator =)
59 | void fill(const T aValue);
60 | // Copies a box from the tensor into aResult, the size of aResult will be adjusted
61 | void cut(CTensor4D& aResult, int x1, int y1, int z1, int a1, int x2, int y2, int z2, int a2);
62 | // Reads data from a list of PPM or PGM files given in a text file
63 | void readFromFile(char* aFilename);
64 | // Writes a set of colour images to a large PPM image
65 | void writeToPPM(const char* aFilename, int aCols = 0, int aRows = 0);
66 |
67 | // Gives full access to tensor's values
68 | inline T& operator()(const int ax, const int ay, const int az, const int aa) const;
69 | // Read access with bilinear interpolation
70 |   CVector<T> operator()(const float ax, const float ay, const int aa) const;
71 | // Fills the tensor with the value aValue (equivalent to fill())
72 | inline CTensor4D& operator=(const T aValue);
73 | // Copies the tensor aCopyFrom to this tensor (size of tensor might change)
74 | CTensor4D& operator=(const CTensor4D& aCopyFrom);
75 | // Multiplication with a scalar
76 | CTensor4D& operator*=(const T aValue);
77 | // Component-wise addition
78 | CTensor4D& operator+=(const CTensor4D& aTensor);
79 |
80 | // Gives access to the tensor's size
81 | inline int xSize() const;
82 | inline int ySize() const;
83 | inline int zSize() const;
84 | inline int aSize() const;
85 | inline int size() const;
86 | // Returns the aath layer of the 4D-tensor as 3D-tensor
87 | CTensor<T> getTensor3D(const int aa) const;
88 | // Removes one dimension and returns the resulting 3D-tensor
89 | void getTensor3D(CTensor<T>& aTensor, int aIndex, int aDim = 3) const;
90 | // Copies the components of a 3D-tensor into layer aIndex of dimension aDim
91 | void putTensor3D(CTensor<T>& aTensor, int aIndex, int aDim = 3);
92 | // Removes two dimensions and returns the resulting matrix
93 | void getMatrix(CMatrix<T>& aMatrix, int aZIndex, int aAIndex) const;
94 | // Copies the components of a matrix into the (aZIndex, aAIndex) layer of the 4D-tensor
95 | void putMatrix(CMatrix<T>& aMatrix, int aZIndex, int aAIndex);
96 | // Gives access to the internal data representation (use sparingly)
97 | inline T* data() const;
98 | protected:
99 | int mXSize,mYSize,mZSize,mASize;
100 | T *mData;
101 | };
102 |
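// A minimal usage sketch (sizes and values are illustrative only): a tensor
// holding three 320x240 RGB frames, written to and sliced via the API above.
//
//   CTensor4D<float> aSeq(320, 240, 3, 3, 0.0f);  // x, y, channels, frames
//   aSeq(10, 20, 0, 2) = 255.0f;                  // red channel of frame 2
//   CTensor<float> aFrame = aSeq.getTensor3D(2);  // frame 2 as a 3D tensor
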
103 | // Provides basic output functionality (only appropriate for very small tensors)
104 | template <class T> std::ostream& operator<<(std::ostream& aStream, const CTensor4D<T>& aTensor);
105 |
106 | // Exceptions thrown by CTensor-------------------------------------------------
107 |
108 | // Thrown when one tries to access an element of a tensor which is out of
109 | // the tensor's bounds
110 | struct ETensor4DRangeOverflow {
111 | ETensor4DRangeOverflow(const int ax, const int ay, const int az, const int aa) {
112 | using namespace std;
113 | cerr << "Exception ETensor4DRangeOverflow: x = " << ax << ", y = " << ay << ", z = " << az << ", a = " << aa << endl;
114 | }
115 | };
116 |
117 | // Thrown from getTensor3D if the parameter's size does not match with the size
118 | // of this tensor
119 | struct ETensor4DIncompatibleSize {
120 | ETensor4DIncompatibleSize(int ax, int ay, int az, int ax2, int ay2, int az2) {
121 | using namespace std;
122 | cerr << "Exception ETensor4DIncompatibleSize: x = " << ax << ":" << ax2;
123 | cerr << ", y = " << ay << ":" << ay2;
124 | cerr << ", z = " << az << ":" << az2 << endl;
125 | }
126 | };
127 |
128 | // Thrown from readFromFile if the file format is unknown
129 | struct ETensor4DInvalidFileFormat {
130 | ETensor4DInvalidFileFormat() {
131 | using namespace std;
132 | cerr << "Exception ETensor4DInvalidFileFormat" << endl;
133 | }
134 | };
135 |
136 | // I M P L E M E N T A T I O N --------------------------------------------
137 | //
138 | // You might wonder why there is implementation code in a header file.
139 | // The reason is that not all C++ compilers yet manage separate compilation
140 | // of templates. Inline functions cannot be compiled separately anyway.
141 | // So in this case the whole implementation code is added to the header
142 | // file.
143 | // Users of CTensor4D should ignore everything that's beyond this line :)
144 | // ------------------------------------------------------------------------
145 |
146 | // P U B L I C ------------------------------------------------------------
147 |
148 | // constructor
149 | template <class T>
150 | inline CTensor4D<T>::CTensor4D() {
151 | mData = 0; mXSize = 0; mYSize = 0; mZSize = 0; mASize = 0;
152 | }
153 |
154 | // constructor
155 | template <class T>
156 | inline CTensor4D<T>::CTensor4D(const int aXSize, const int aYSize, const int aZSize, const int aASize)
157 | : mXSize(aXSize), mYSize(aYSize), mZSize(aZSize), mASize(aASize) {
158 | mData = new T[aXSize*aYSize*aZSize*aASize];
159 | }
160 |
161 | // copy constructor
162 | template <class T>
163 | CTensor4D<T>::CTensor4D(const CTensor4D<T>& aCopyFrom)
164 | : mXSize(aCopyFrom.mXSize), mYSize(aCopyFrom.mYSize), mZSize(aCopyFrom.mZSize), mASize(aCopyFrom.mASize) {
165 | int wholeSize = mXSize*mYSize*mZSize*mASize;
166 | mData = new T[wholeSize];
167 | for (register int i = 0; i < wholeSize; i++)
168 | mData[i] = aCopyFrom.mData[i];
169 | }
170 |
171 | // constructor with implicit filling
172 | template <class T>
173 | CTensor4D<T>::CTensor4D(const int aXSize, const int aYSize, const int aZSize, const int aASize, const T aFillValue)
174 | : mXSize(aXSize), mYSize(aYSize), mZSize(aZSize), mASize(aASize) {
175 | mData = new T[aXSize*aYSize*aZSize*aASize];
176 | fill(aFillValue);
177 | }
178 |
179 | // destructor
180 | template <class T>
181 | CTensor4D<T>::~CTensor4D() {
182 | delete[] mData;
183 | }
184 |
185 | // setSize
186 | template <class T>
187 | void CTensor4D<T>::setSize(int aXSize, int aYSize, int aZSize, int aASize) {
188 | if (mData != 0) delete[] mData;
189 | mData = new T[aXSize*aYSize*aZSize*aASize];
190 | mXSize = aXSize;
191 | mYSize = aYSize;
192 | mZSize = aZSize;
193 | mASize = aASize;
194 | }
195 |
196 | // downsample
197 | template <class T>
198 | void CTensor4D<T>::downsample(int aNewXSize, int aNewYSize) {
199 | T* mData2 = new T[aNewXSize*aNewYSize*mZSize*mASize];
200 | int aSize = aNewXSize*aNewYSize;
201 | for (int a = 0; a < mASize; a++)
202 | for (int z = 0; z < mZSize; z++) {
203 | CMatrix<T> aTemp(mXSize,mYSize);
204 | getMatrix(aTemp,z,a);
205 | aTemp.downsample(aNewXSize,aNewYSize);
206 | for (int i = 0; i < aSize; i++)
207 | mData2[i+(a*mZSize+z)*aSize] = aTemp.data()[i];
208 | }
209 | delete[] mData;
210 | mData = mData2;
211 | mXSize = aNewXSize;
212 | mYSize = aNewYSize;
213 | }
214 |
215 | template <class T>
216 | void CTensor4D<T>::downsample(int aNewXSize, int aNewYSize, int aNewZSize) {
217 | T* mData2 = new T[aNewXSize*aNewYSize*aNewZSize*mASize];
218 | int aSize = aNewXSize*aNewYSize*aNewZSize;
219 | for (int a = 0; a < mASize; a++) {
220 | CTensor<T> aTemp(mXSize,mYSize,mZSize);
221 | getTensor3D(aTemp,a);
222 | aTemp.downsample(aNewXSize,aNewYSize,aNewZSize);
223 | for (int i = 0; i < aSize; i++)
224 | mData2[i+a*aSize] = aTemp.data()[i];
225 | }
226 | delete[] mData;
227 | mData = mData2;
228 | mXSize = aNewXSize;
229 | mYSize = aNewYSize;
230 | mZSize = aNewZSize;
231 | }
232 |
233 | // upsample
234 | template <class T>
235 | void CTensor4D<T>::upsample(int aNewXSize, int aNewYSize) {
236 | T* mData2 = new T[aNewXSize*aNewYSize*mZSize*mASize];
237 | int aSize = aNewXSize*aNewYSize;
238 | for (int a = 0; a < mASize; a++)
239 | for (int z = 0; z < mZSize; z++) {
240 | CMatrix<T> aTemp(mXSize,mYSize);
241 | getMatrix(aTemp,z,a);
242 | aTemp.upsample(aNewXSize,aNewYSize);
243 | for (int i = 0; i < aSize; i++)
244 | mData2[i+(a*mZSize+z)*aSize] = aTemp.data()[i];
245 | }
246 | delete[] mData;
247 | mData = mData2;
248 | mXSize = aNewXSize;
249 | mYSize = aNewYSize;
250 | }
251 |
252 | // upsampleBilinear
253 | template <class T>
254 | void CTensor4D<T>::upsampleBilinear(int aNewXSize, int aNewYSize) {
255 | T* mData2 = new T[aNewXSize*aNewYSize*mZSize*mASize];
256 | int aSize = aNewXSize*aNewYSize;
257 | for (int a = 0; a < mASize; a++)
258 | for (int z = 0; z < mZSize; z++) {
259 | CMatrix<T> aTemp(mXSize,mYSize);
260 | getMatrix(aTemp,z,a);
261 | aTemp.upsampleBilinear(aNewXSize,aNewYSize);
262 | for (int i = 0; i < aSize; i++)
263 | mData2[i+(a*mZSize+z)*aSize] = aTemp.data()[i];
264 | }
265 | delete[] mData;
266 | mData = mData2;
267 | mXSize = aNewXSize;
268 | mYSize = aNewYSize;
269 | }
270 |
271 | // upsampleTrilinear
272 | template <class T>
273 | void CTensor4D<T>::upsampleTrilinear(int aNewXSize, int aNewYSize, int aNewZSize) {
274 | T* mData2 = new T[aNewXSize*aNewYSize*aNewZSize*mASize];
275 | int aSize = aNewXSize*aNewYSize*aNewZSize;
276 | for (int a = 0; a < mASize; a++) {
277 | CTensor<T> aTemp(mXSize,mYSize,mZSize);
278 | getTensor3D(aTemp,a);
279 | aTemp.upsampleTrilinear(aNewXSize,aNewYSize,aNewZSize);
280 | for (int i = 0; i < aSize; i++)
281 | mData2[i+a*aSize] = aTemp.data()[i];
282 | }
283 | delete[] mData;
284 | mData = mData2;
285 | mXSize = aNewXSize;
286 | mYSize = aNewYSize;
287 | mZSize = aNewZSize;
288 | }
289 |
290 | // fill
291 | template <class T>
292 | void CTensor4D<T>::fill(const T aValue) {
293 | int wholeSize = mXSize*mYSize*mZSize*mASize;
294 | for (register int i = 0; i < wholeSize; i++)
295 | mData[i] = aValue;
296 | }
297 |
298 | // cut
299 | template <class T>
300 | void CTensor4D<T>::cut(CTensor4D<T>& aResult, int x1, int y1, int z1, int a1, int x2, int y2, int z2, int a2) {
301 | aResult.mXSize = x2-x1+1;
302 | aResult.mYSize = y2-y1+1;
303 | aResult.mZSize = z2-z1+1;
304 | aResult.mASize = a2-a1+1;
305 | delete[] aResult.mData;
306 | aResult.mData = new T[aResult.mXSize*aResult.mYSize*aResult.mZSize*aResult.mASize];
307 | for (int a = a1; a <= a2; a++)
308 | for (int z = z1; z <= z2; z++)
309 | for (int y = y1; y <= y2; y++)
310 | for (int x = x1; x <= x2; x++)
311 | aResult(x-x1,y-y1,z-z1,a-a1) = operator()(x,y,z,a);
312 | }
313 |
314 | // readFromFile
315 | template <class T>
316 | void CTensor4D<T>::readFromFile(char* aFilename) {
317 | if (mData != 0) delete[] mData;
318 | std::string s;
319 | std::string aPath = aFilename;
320 | aPath.erase(aPath.find_last_of('\\')+1,100);
321 | mASize = 0;
322 | {
323 | std::ifstream aStream(aFilename);
324 | while (aStream >> s) {
325 | // extraction fails at end of file, so a trailing newline is not counted as an entry
326 | if (s != "") {
327 | mASize++;
328 | if (mASize == 1) {
329 | s.erase(0,s.find_last_of('.'));
330 | if (s == ".ppm" || s == ".PPM") mZSize = 3;
331 | else if (s == ".pgm" || s == ".PGM") mZSize = 1;
332 | else throw ETensor4DInvalidFileFormat();
333 | }
334 | }
335 | }
336 | }
337 | std::ifstream aStream(aFilename);
338 | aStream >> s;
339 | s = aPath+s;
340 | // PGM
341 | if (mZSize == 1) {
342 | CMatrix<T> aTemp;
343 | aTemp.readFromPGM(s.c_str());
344 | mXSize = aTemp.xSize();
345 | mYSize = aTemp.ySize();
346 | int aSize = mXSize*mYSize;
347 | mData = new T[aSize*mASize];
348 | for (int i = 0; i < aSize; i++)
349 | mData[i] = aTemp.data()[i];
350 | for (int a = 1; a < mASize; a++) {
351 | aStream >> s;
352 | s = aPath+s;
353 | aTemp.readFromPGM(s.c_str());
354 | for (int i = 0; i < aSize; i++)
355 | mData[i+a*aSize] = aTemp.data()[i];
356 | }
357 | }
358 | // PPM
359 | else {
360 | CTensor<T> aTemp;
361 | aTemp.readFromPPM(s.c_str());
362 | mXSize = aTemp.xSize();
363 | mYSize = aTemp.ySize();
364 | int aSize = 3*mXSize*mYSize;
365 | mData = new T[aSize*mASize];
366 | for (int i = 0; i < aSize; i++)
367 | mData[i] = aTemp.data()[i];
368 | for (int a = 1; a < mASize; a++) {
369 | aStream >> s;
370 | s = aPath+s;
371 | aTemp.readFromPPM(s.c_str());
372 | for (int i = 0; i < aSize; i++)
373 | mData[i+a*aSize] = aTemp.data()[i];
374 | }
375 | }
376 | }
377 |
378 | // writeToPPM
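// Tiles the mASize images into a cols x rows mosaic (near-square by default)
// and pads unused tiles with black; assumes zSize == 3 (RGB data).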
379 | template <class T>
380 | void CTensor4D<T>::writeToPPM(const char* aFilename, int aCols, int aRows) {
381 | int rows = (int)floor(sqrt(mASize));
382 | if (aRows != 0) rows = aRows;
383 | int cols = (int)ceil(mASize*1.0/rows);
384 | if (aCols != 0) cols = aCols;
385 | FILE* outimage = fopen(aFilename, "wb");
386 | fprintf(outimage, "P6 \n");
387 | fprintf(outimage, "%d %d \n255\n", cols*mXSize,rows*mYSize);
388 | for (int r = 0; r < rows; r++)
389 | for (int y = 0; y < mYSize; y++)
390 | for (int c = 0; c < cols; c++)
391 | for (int x = 0; x < mXSize; x++) {
392 | unsigned char aHelp;
393 | if (r*cols+c >= mASize) aHelp = 0;
394 | else aHelp = (unsigned char)operator()(x,y,0,r*cols+c);
395 | fwrite (&aHelp, sizeof(unsigned char), 1, outimage);
396 | if (r*cols+c >= mASize) aHelp = 0;
397 | else aHelp = (unsigned char)operator()(x,y,1,r*cols+c);
398 | fwrite (&aHelp, sizeof(unsigned char), 1, outimage);
399 | if (r*cols+c >= mASize) aHelp = 0;
400 | else aHelp = (unsigned char)operator()(x,y,2,r*cols+c);
401 | fwrite (&aHelp, sizeof(unsigned char), 1, outimage);
402 | }
403 | fclose(outimage);
404 | }
405 |
406 | // operator ()
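// Storage is x-fastest: index = ((aa*mZSize + az)*mYSize + ay)*mXSize + ax,
// so each 3D frame (fixed aa) occupies one contiguous block of mData.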
407 | template <class T>
408 | inline T& CTensor4D<T>::operator()(const int ax, const int ay, const int az, const int aa) const {
409 | #ifdef _DEBUG
410 | if (ax >= mXSize || ay >= mYSize || az >= mZSize || aa >= mASize || ax < 0 || ay < 0 || az < 0 || aa < 0)
411 | throw ETensor4DRangeOverflow(ax,ay,az,aa);
412 | #endif
413 | return mData[mXSize*(mYSize*(mZSize*aa+az)+ay)+ax];
414 | }
415 |
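// Bilinear read access: the value at a non-integer position (ax, ay) blends
// the four neighbouring grid points per channel k:
//   f = (1-dy)*((1-dx)*f(x1,y1) + dx*f(x2,y1)) + dy*((1-dx)*f(x1,y2) + dx*f(x2,y2))
// with dx = ax - x1 and dy = ay - y1 (alphaX and alphaY below).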
416 | template <class T>
417 | CVector<T> CTensor4D<T>::operator()(const float ax, const float ay, const int aa) const {
418 | CVector<T> aResult(mZSize);
419 | int x1 = (int)ax;
420 | int y1 = (int)ay;
421 | int x2 = x1+1;
422 | int y2 = y1+1;
423 | #ifdef _DEBUG
424 | if (x2 >= mXSize || y2 >= mYSize || x1 < 0 || y1 < 0) throw ETensor4DRangeOverflow(x1,y1,0,aa);
425 | #endif
426 | float alphaX = ax-x1; float alphaXTrans = 1.0-alphaX;
427 | float alphaY = ay-y1; float alphaYTrans = 1.0-alphaY;
428 | for (int k = 0; k < mZSize; k++) {
429 | float a = alphaXTrans*operator()(x1,y1,k,aa)+alphaX*operator()(x2,y1,k,aa);
430 | float b = alphaXTrans*operator()(x1,y2,k,aa)+alphaX*operator()(x2,y2,k,aa);
431 | aResult(k) = alphaYTrans*a+alphaY*b;
432 | }
433 | return aResult;
434 | }
435 |
436 | // operator =
437 | template <class T>
438 | inline CTensor4D<T>& CTensor4D<T>::operator=(const T aValue) {
439 | fill(aValue);
440 | return *this;
441 | }
442 |
443 | template <class T>
444 | CTensor4D<T>& CTensor4D<T>::operator=(const CTensor4D<T>& aCopyFrom) {
445 | if (this != &aCopyFrom) {
446 | if (mData != 0) delete[] mData;
447 | mXSize = aCopyFrom.mXSize;
448 | mYSize = aCopyFrom.mYSize;
449 | mZSize = aCopyFrom.mZSize;
450 | mASize = aCopyFrom.mASize;
451 | int wholeSize = mXSize*mYSize*mZSize*mASize;
452 | mData = new T[wholeSize];
453 | for (register int i = 0; i < wholeSize; i++)
454 | mData[i] = aCopyFrom.mData[i];
455 | }
456 | return *this;
457 | }
458 |
459 | // operator *=
460 | template <class T>
461 | CTensor4D<T>& CTensor4D<T>::operator*=(const T aValue) {
462 | int wholeSize = mXSize*mYSize*mZSize*mASize;
463 | for (int i = 0; i < wholeSize; i++)
464 | mData[i] *= aValue;
465 | return *this;
466 | }
467 |
468 | // operator +=
469 | template <class T>
470 | CTensor4D<T>& CTensor4D<T>::operator+=(const CTensor4D<T>& aTensor) {
471 | #ifdef _DEBUG
472 | if (mXSize != aTensor.mXSize || mYSize != aTensor.mYSize || mZSize != aTensor.mZSize || mASize != aTensor.mASize)
473 | throw ETensor4DIncompatibleSize(mXSize,mYSize,mZSize,aTensor.mXSize,aTensor.mYSize,aTensor.mZSize);
474 | #endif
475 | int wholeSize = size();
476 | for (int i = 0; i < wholeSize; i++)
477 | mData[i] += aTensor.mData[i];
478 | return *this;
479 | }
480 |
481 | // xSize
482 | template <class T>
483 | inline int CTensor4D<T>::xSize() const {
484 | return mXSize;
485 | }
486 |
487 |
488 | // ySize
489 | template <class T>
490 | inline int CTensor4D<T>::ySize() const {
491 | return mYSize;
492 | }
493 |
494 | // zSize
495 | template <class T>
496 | inline int CTensor4D<T>::zSize() const {
497 | return mZSize;
498 | }
499 |
500 | // aSize
501 | template <class T>
502 | inline int CTensor4D<T>::aSize() const {
503 | return mASize;
504 | }
505 |
506 | // size
507 | template <class T>
508 | inline int CTensor4D<T>::size() const {
509 | return mXSize*mYSize*mZSize*mASize;
510 | }
511 |
512 | // getTensor3D
513 | template <class T>
514 | CTensor<T> CTensor4D<T>::getTensor3D(const int aa) const {
515 | CTensor<T> aTemp(mXSize,mYSize,mZSize);
516 | int aTensorSize = mXSize*mYSize*mZSize;
517 | int aOffset = aa*aTensorSize;
518 | for (int i = 0; i < aTensorSize; i++)
519 | aTemp.data()[i] = mData[i+aOffset];
520 | return aTemp;
521 | }
522 |
523 | // getTensor3D
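// aDim chooses the dimension that is sliced away: 3 = a (default), 2 = z,
// 1 = y, 0 = x; aIndex is the position within that dimension.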
524 | template <class T>
525 | void CTensor4D<T>::getTensor3D(CTensor<T>& aTensor, int aIndex, int aDim) const {
526 | int aSize;
527 | int aOffset;
528 | switch (aDim) {
529 | case 3:
530 | if (aTensor.xSize() != mXSize || aTensor.ySize() != mYSize || aTensor.zSize() != mZSize)
531 | throw ETensor4DIncompatibleSize(aTensor.xSize(),aTensor.ySize(),aTensor.zSize(),mXSize,mYSize,mZSize);
532 | aSize = mXSize*mYSize*mZSize;
533 | aOffset = aIndex*aSize;
534 | for (int i = 0; i < aSize; i++)
535 | aTensor.data()[i] = mData[i+aOffset];
536 | break;
537 | case 2:
538 | if (aTensor.xSize() != mXSize || aTensor.ySize() != mYSize || aTensor.zSize() != mASize)
539 | throw ETensor4DIncompatibleSize(aTensor.xSize(),aTensor.ySize(),aTensor.zSize(),mXSize,mYSize,mASize);
540 | aSize = mXSize*mYSize;
541 | aOffset = aIndex*aSize;
542 | for (int a = 0; a < mASize; a++)
543 | for (int i = 0; i < aSize; i++)
544 | aTensor.data()[i+a*aSize] = mData[i+aOffset+a*aSize*mZSize];
545 | break;
546 | case 1:
547 | if (aTensor.xSize() != mXSize || aTensor.ySize() != mZSize || aTensor.zSize() != mASize)
548 | throw ETensor4DIncompatibleSize(aTensor.xSize(),aTensor.ySize(),aTensor.zSize(),mXSize,mZSize,mASize);
549 | for (int a = 0; a < mASize; a++)
550 | for (int z = 0; z < mZSize; z++)
551 | for (int x = 0; x < mXSize; x++)
552 | aTensor(x,z,a) = operator()(x,aIndex,z,a);
553 | break;
554 | case 0:
555 | if (aTensor.xSize() != mYSize || aTensor.ySize() != mZSize || aTensor.zSize() != mASize)
556 | throw ETensor4DIncompatibleSize(aTensor.xSize(),aTensor.ySize(),aTensor.zSize(),mYSize,mZSize,mASize);
557 | for (int a = 0; a < mASize; a++)
558 | for (int z = 0; z < mZSize; z++)
559 | for (int y = 0; y < mYSize; y++)
560 | aTensor(y,z,a) = operator()(aIndex,y,z,a);
561 | break;
562 | default: getTensor3D(aTensor,aIndex);
563 | }
564 | }
565 |
566 | // putTensor3D
567 | template <class T>
568 | void CTensor4D<T>::putTensor3D(CTensor<T>& aTensor, int aIndex, int aDim) {
569 | int aSize;
570 | int aOffset;
571 | switch (aDim) {
572 | case 3:
573 | if (aTensor.xSize() != mXSize || aTensor.ySize() != mYSize || aTensor.zSize() != mZSize)
574 | throw ETensor4DIncompatibleSize(aTensor.xSize(),aTensor.ySize(),aTensor.zSize(),mXSize,mYSize,mZSize);
575 | aSize = mXSize*mYSize*mZSize;
576 | aOffset = aIndex*aSize;
577 | for (int i = 0; i < aSize; i++)
578 | mData[i+aOffset] = aTensor.data()[i];
579 | break;
580 | case 2:
581 | if (aTensor.xSize() != mXSize || aTensor.ySize() != mYSize || aTensor.zSize() != mASize)
582 | throw ETensor4DIncompatibleSize(aTensor.xSize(),aTensor.ySize(),aTensor.zSize(),mXSize,mYSize,mASize);
583 | aSize = mXSize*mYSize;
584 | aOffset = aIndex*aSize;
585 | for (int a = 0; a < mASize; a++)
586 | for (int i = 0; i < aSize; i++)
587 | mData[i+aOffset+a*aSize*mZSize] = aTensor.data()[i+a*aSize];
588 | break;
589 | case 1:
590 | if (aTensor.xSize() != mXSize || aTensor.ySize() != mZSize || aTensor.zSize() != mASize)
591 | throw ETensor4DIncompatibleSize(aTensor.xSize(),aTensor.ySize(),aTensor.zSize(),mXSize,mZSize,mASize);
592 | for (int a = 0; a < mASize; a++)
593 | for (int z = 0; z < mZSize; z++)
594 | for (int x = 0; x < mXSize; x++)
595 | operator()(x,aIndex,z,a) = aTensor(x,z,a);
596 | break;
597 | case 0:
598 | if (aTensor.xSize() != mYSize || aTensor.ySize() != mZSize || aTensor.zSize() != mASize)
599 | throw ETensor4DIncompatibleSize(aTensor.xSize(),aTensor.ySize(),aTensor.zSize(),mYSize,mZSize,mASize);
600 | for (int a = 0; a < mASize; a++)
601 | for (int z = 0; z < mZSize; z++)
602 | for (int y = 0; y < mYSize; y++)
603 | operator()(aIndex,y,z,a) = aTensor(y,z,a);
604 | break;
605 | default: putTensor3D(aTensor,aIndex);
606 | }
607 | }
608 |
609 | // getMatrix
610 | template <class T>
611 | void CTensor4D<T>::getMatrix(CMatrix<T>& aMatrix, int aZIndex, int aAIndex) const {
612 | if (aMatrix.xSize() != mXSize || aMatrix.ySize() != mYSize)
613 | throw ETensor4DIncompatibleSize(aMatrix.xSize(),aMatrix.ySize(),1,mXSize,mYSize,1);
614 | int aSize = mXSize*mYSize;
615 | int aOffset = aSize*(aAIndex*mZSize+aZIndex);
616 | for (int i = 0; i < aSize; i++)
617 | aMatrix.data()[i] = mData[i+aOffset];
618 | }
619 |
620 | // putMatrix
621 | template <class T>
622 | void CTensor4D<T>::putMatrix(CMatrix<T>& aMatrix, int aZIndex, int aAIndex) {
623 | if (aMatrix.xSize() != mXSize || aMatrix.ySize() != mYSize)
624 | throw ETensor4DIncompatibleSize(aMatrix.xSize(),aMatrix.ySize(),1,mXSize,mYSize,1);
625 | int aSize = mXSize*mYSize;
626 | int aOffset = aSize*(aAIndex*mZSize+aZIndex);
627 | for (int i = 0; i < aSize; i++)
628 | mData[i+aOffset] = aMatrix.data()[i];
629 | }
630 |
631 | // data()
632 | template <class T>
633 | inline T* CTensor4D<T>::data() const {
634 | return mData;
635 | }
636 |
637 | // N O N - M E M B E R F U N C T I O N S --------------------------------------
638 |
639 | // operator <<
640 | template <class T>
641 | std::ostream& operator<<(std::ostream& aStream, const CTensor4D<T>& aTensor) {
642 | for (int a = 0; a < aTensor.aSize(); a++) {
643 | for (int z = 0; z < aTensor.zSize(); z++) {
644 | for (int y = 0; y < aTensor.ySize(); y++) {
645 | for (int x = 0; x < aTensor.xSize(); x++)
646 | aStream << aTensor(x,y,z,a) << ' ';
647 | aStream << std::endl;
648 | }
649 | aStream << std::endl;
650 | }
651 | aStream << std::endl;
652 | }
653 | return aStream;
654 | }
655 |
656 | #endif
657 |
--------------------------------------------------------------------------------
/video_input/consistencyChecker/CVector.h:
--------------------------------------------------------------------------------
1 | // CVector
2 | // A one-dimensional array including basic vector operations
3 | //
4 | // Author: Thomas Brox
5 | // Last change: 23.05.2005
6 | //-------------------------------------------------------------------------
7 | #ifndef CVECTOR_H
8 | #define CVECTOR_H
9 |
10 | #include <iostream>
11 | #include <fstream>
12 | #include <math.h>
13 | template <class T> class CMatrix;
14 | template <class T> class CTensor;
15 |
16 | template <class T>
17 | class CVector {
18 | public:
19 | // constructor
20 | inline CVector();
21 | // constructor
22 | inline CVector(const int aSize);
23 | // copy constructor
24 | CVector(const CVector& aCopyFrom);
25 | // constructor (from array)
26 | CVector(const T* aPointer, const int aSize);
27 | // constructor with implicit filling
28 | CVector(const int aSize, const T aFillValue);
29 | // destructor
30 | virtual ~CVector();
31 |
32 | // Changes the size of the vector (data is lost)
33 | void setSize(int aSize);
34 | // Fills the vector with the specified value (see also operator=)
35 | void fill(const T aValue);
36 | // Appends the values of another vector
37 | void append(CVector& aVector);
38 | // Normalizes the length of the vector to 1
39 | void normalize();
40 | // Normalizes the component sum to 1
41 | void normalizeSum();
42 | // Reads values from a text file
43 | void readFromTXT(const char* aFilename);
44 | // Writes values to a text file
45 | void writeToTXT(char* aFilename);
46 | // Returns the sum of all values
47 | T sum();
48 | // Returns the minimum value
49 | T min();
50 | // Returns the maximum value
51 | T max();
52 | // Returns the Euclidean norm
53 | T norm();
54 |
55 | // Converts vector to homogeneous coordinates, i.e., all components are divided by last component
56 | CVector& homogen();
57 | // Removes the last component
58 | inline void homogen_nD();
59 | // Computes the cross product between this vector and aVector
60 | void cross(CVector& aVector);
61 |
62 | // Gives full access to the vector's values
63 | inline T& operator()(const int aIndex) const;
64 | inline T& operator[](const int aIndex) const;
65 | // Fills the vector with the specified value (equivalent to fill)
66 | inline CVector& operator=(const T aValue);
67 | // Copies a vector into this vector (size might change)
68 | CVector& operator=(const CVector& aCopyFrom);
69 | // Copies values from a matrix to the vector (size might change)
70 | CVector& operator=(const CMatrix<T>& aCopyFrom);
71 | // Copies values from a tensor to the vector (size might change)
72 | CVector& operator=(const CTensor<T>& aCopyFrom);
73 | // Adds another vector
74 | CVector& operator+=(const CVector& aVector);
75 | // Subtracts another vector
76 | CVector& operator-=(const CVector& aVector);
77 | // Multiplies the vector with a scalar
78 | CVector& operator*=(const T aValue);
79 | // Scalar product
80 | T operator*=(const CVector& aVector);
81 | // Checks (non-)equivalence to another vector
82 | bool operator==(const CVector& aVector);
83 | inline bool operator!=(const CVector& aVector);
84 |
85 | // Gives access to the vector's size
86 | inline int size() const;
87 | // Gives access to the internal data representation
88 | inline T* data() const {return mData;}
89 | protected:
90 | int mSize;
91 | T* mData;
92 | };
93 |
94 | // Adds two vectors
95 | template <class T> CVector<T> operator+(const CVector<T>& vec1, const CVector<T>& vec2);
96 | // Subtracts two vectors
97 | template <class T> CVector<T> operator-(const CVector<T>& vec1, const CVector<T>& vec2);
98 | // Multiplies vector with a scalar
99 | template <class T> CVector<T> operator*(const CVector<T>& aVector, const T aValue);
100 | template <class T> CVector<T> operator*(const T aValue, const CVector<T>& aVector);
101 | // Computes the scalar product of two vectors
102 | template <class T> T operator*(const CVector<T>& vec1, const CVector<T>& vec2);
103 | // Computes the cross product of two vectors (declared as operator/)
104 | template <class T> CVector<T> operator/(const CVector<T>& vec1, const CVector<T>& vec2);
105 | // Sends the vector to an output stream
106 | template <class T> std::ostream& operator<<(std::ostream& aStream, const CVector<T>& aVector);
107 |
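// A minimal usage sketch (values illustrative only):
//   CVector<float> a(3, 1.0f);   // three components, all 1.0
//   CVector<float> b(3);
//   b = 2.0f;                    // fill via operator=
//   a += b;                      // component-wise addition
//   float dot = a * b;           // scalar product (non-member operator*)
//   a.normalize();               // rescale to unit Euclidean length
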
108 | // Exceptions thrown by CVector--------------------------------------------
109 |
110 | // Thrown if one tries to access an element of a vector which is out of
111 | // the vector's bounds
112 | struct EVectorRangeOverflow {
113 | EVectorRangeOverflow(const int aIndex) {
114 | using namespace std;
115 | cerr << "Exception EVectorRangeOverflow: Index = " << aIndex << endl;
116 | }
117 | };
118 |
119 | struct EVectorIncompatibleSize {
120 | EVectorIncompatibleSize(int aSize1, int aSize2) {
121 | using namespace std;
122 | cerr << "Exception EVectorIncompatibleSize: " << aSize1 << " <> " << aSize2 << endl;
123 | }
124 | };
125 |
126 |
127 | // I M P L E M E N T A T I O N --------------------------------------------
128 | //
129 | // You might wonder why there is implementation code in a header file.
130 | // The reason is that not all C++ compilers yet manage separate compilation
131 | // of templates. Inline functions cannot be compiled separately anyway.
132 | // So in this case the whole implementation code is added to the header
133 | // file.
134 | // Users of CVector should ignore everything that's beyond this line.
135 | // ------------------------------------------------------------------------
136 |
137 | // P U B L I C ------------------------------------------------------------
138 | // constructor
139 | template <class T>
140 | inline CVector<T>::CVector() : mSize(0) {
141 | mData = new T[0];
142 | }
143 |
144 | // constructor
145 | template <class T>
146 | inline CVector<T>::CVector(const int aSize)
147 | : mSize(aSize) {
148 | mData = new T[aSize];
149 | }
150 |
151 | // copy constructor
152 | template <class T>
153 | CVector<T>::CVector(const CVector<T>& aCopyFrom)
154 | : mSize(aCopyFrom.mSize) {
155 | mData = new T[mSize];
156 | for (int i = 0; i < mSize; i++)
157 | mData[i] = aCopyFrom.mData[i];
158 | }
159 |
160 | // constructor (from array)
161 | template <class T>
162 | CVector<T>::CVector(const T* aPointer, const int aSize)
163 | : mSize(aSize) {
164 | mData = new T[mSize];
165 | for (int i = 0; i < mSize; i++)
166 | mData[i] = aPointer[i];
167 | }
168 |
169 | // constructor with implicit filling
170 | template <class T>
171 | CVector<T>::CVector(const int aSize, const T aFillValue)
172 | : mSize(aSize) {
173 | mData = new T[aSize];
174 | fill(aFillValue);
175 | }
176 |
177 | // destructor
178 | template <class T>
179 | CVector<T>::~CVector() {
180 | delete[] mData;
181 | }
182 |
183 | // setSize
184 | template <class T>
185 | void CVector<T>::setSize(int aSize) {
186 | if (mData != 0) delete[] mData;
187 | mData = new T[aSize];
188 | mSize = aSize;
189 | }
190 |
191 | // fill
192 | template <class T>
193 | void CVector<T>::fill(const T aValue) {
194 | for (register int i = 0; i < mSize; i++)
195 | mData[i] = aValue;
196 | }
197 |
198 | // append
199 | template <class T>
200 | void CVector<T>::append(CVector<T>& aVector) {
201 | T* aNewData = new T[mSize+aVector.size()];
202 | for (int i = 0; i < mSize; i++)
203 | aNewData[i] = mData[i];
204 | for (int i = 0; i < aVector.size(); i++)
205 | aNewData[i+mSize] = aVector(i);
206 | mSize += aVector.size();
207 | delete[] mData;
208 | mData = aNewData;
209 | }
210 |
211 | // normalize
212 | template <class T>
213 | void CVector<T>::normalize() {
214 | T aSum = 0;
215 | for (register int i = 0; i < mSize; i++)
216 | aSum += mData[i]*mData[i];
217 | if (aSum == 0) return;
218 | aSum = 1.0/sqrt(aSum);
219 | for (register int i = 0; i < mSize; i++)
220 | mData[i] *= aSum;
221 | }
222 |
223 | // normalizeSum
224 | template <class T>
225 | void CVector<T>::normalizeSum() {
226 | T aSum = 0;
227 | for (register int i = 0; i < mSize; i++)
228 | aSum += mData[i];
229 | if (aSum == 0) return;
230 | aSum = 1.0/aSum;
231 | for (register int i = 0; i < mSize; i++)
232 | mData[i] *= aSum;
233 | }
234 |
235 | // readFromTXT
236 | template <class T>
237 | void CVector<T>::readFromTXT(const char* aFilename) {
238 | std::ifstream aStream(aFilename);
239 | mSize = 0;
240 | float aDummy;
241 | while (aStream >> aDummy) {
242 | // extraction fails at end of file, so trailing whitespace is not over-counted
243 | mSize++;
244 | }
245 | aStream.close();
246 | std::ifstream aStream2(aFilename);
247 | delete[] mData;
248 | mData = new T[mSize];
249 | for (int i = 0; i < mSize; i++)
250 | aStream2 >> mData[i];
251 | }
252 |
253 | // writeToTXT
254 | template <class T>
255 | void CVector<T>::writeToTXT(char* aFilename) {
256 | std::ofstream aStream(aFilename);
257 | for (int i = 0; i < mSize; i++)
258 | aStream << mData[i] << std::endl;
259 | }
260 |
261 | // sum
262 | template <class T>
263 | T CVector<T>::sum() {
264 | T val = mData[0];
265 | for (int i = 1; i < mSize; i++)
266 | val += mData[i];
267 | return val;
268 | }
269 |
270 | // min
271 | template <class T>
272 | T CVector<T>::min() {
273 | T bestValue = mData[0];
274 | for (int i = 1; i < mSize; i++)
275 | if (mData[i] < bestValue) bestValue = mData[i];
276 | return bestValue;
277 | }
278 |
279 | // max
280 | template <class T>
281 | T CVector<T>::max() {
282 | T bestValue = mData[0];
283 | for (int i = 1; i < mSize; i++)
284 | if (mData[i] > bestValue) bestValue = mData[i];
285 | return bestValue;
286 | }
287 |
288 | // norm
289 | template <class T>
290 | T CVector<T>::norm() {
291 | T aSum = 0.0;
292 | for (int i = 0; i < mSize; i++)
293 | aSum += mData[i]*mData[i];
294 | return sqrt(aSum);
295 | }
296 |
297 | // homogen
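// Divides every component by the last one, e.g. (4, 2, 2) -> (2, 1, 1).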
298 | template <class T>
299 | CVector<T>& CVector<T>::homogen() {
300 | if (mSize > 1 && mData[mSize-1] != 0) {
301 | T invVal = 1.0/mData[mSize-1];
302 | for (int i = 0; i < mSize; i++)
303 | mData[i] *= invVal;
304 | }
305 | return (*this);
306 | }
307 |
308 | // homogen_nD
309 | template <class T>
310 | inline void CVector<T>::homogen_nD() {
311 | mSize--;
312 | }
313 |
314 | // cross
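// Replaces this vector by the cross product (this x aVector); size must be 3.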
315 | template <class T>
316 | void CVector<T>::cross(CVector<T>& aVector) {
317 | T aHelp0 = aVector(2)*mData[1] - aVector(1)*mData[2];
318 | T aHelp1 = aVector(0)*mData[2] - aVector(2)*mData[0];
319 | T aHelp2 = aVector(1)*mData[0] - aVector(0)*mData[1];
320 | mData[0] = aHelp0;
321 | mData[1] = aHelp1;
322 | mData[2] = aHelp2;
323 | }
324 |
325 | // operator()
326 | template <class T>
327 | inline T& CVector<T>::operator()(const int aIndex) const {
328 | #ifdef _DEBUG
329 | if (aIndex >= mSize || aIndex < 0)
330 | throw EVectorRangeOverflow(aIndex);
331 | #endif
332 | return mData[aIndex];
333 | }
334 |
335 | // operator[]
336 | template <class T>
337 | inline T& CVector<T>::operator[](const int aIndex) const {
338 | return operator()(aIndex);
339 | }
340 |
341 | // operator=
342 | template <class T>
343 | inline CVector<T>& CVector<T>::operator=(const T aValue) {
344 | fill(aValue);
345 | return *this;
346 | }
347 |
348 | template <class T>
349 | CVector<T>& CVector<T>::operator=(const CVector<T>& aCopyFrom) {
350 | if (this != &aCopyFrom) {
351 | if (mSize != aCopyFrom.size()) {
352 | delete[] mData;
353 | mSize = aCopyFrom.size();
354 | mData = new T[mSize];
355 | }
356 | for (register int i = 0; i < mSize; i++)
357 | mData[i] = aCopyFrom.mData[i];
358 | }
359 | return *this;
360 | }
361 |
362 | template <class T>
363 | CVector<T>& CVector<T>::operator=(const CMatrix<T>& aCopyFrom) {
364 | if (mSize != aCopyFrom.size()) {
365 | delete[] mData;
366 | mSize = aCopyFrom.size();
367 | mData = new T[mSize];
368 | }
369 | for (register int i = 0; i < mSize; i++)
370 | mData[i] = aCopyFrom.data()[i];
371 | return *this;
372 | }
373 |
374 | template <class T>
375 | CVector<T>& CVector<T>::operator=(const CTensor<T>& aCopyFrom) {
376 | if (mSize != aCopyFrom.size()) {
377 | delete[] mData;
378 | mSize = aCopyFrom.size();
379 | mData = new T[mSize];
380 | }
381 | for (register int i = 0; i < mSize; i++)
382 | mData[i] = aCopyFrom.data()[i];
383 | return *this;
384 | }
385 |
386 | // operator +=
387 | template <class T>
388 | CVector<T>& CVector<T>::operator+=(const CVector<T>& aVector) {
389 | #ifdef _DEBUG
390 | if (mSize != aVector.size()) throw EVectorIncompatibleSize(mSize,aVector.size());
391 | #endif
392 | for (int i = 0; i < mSize; i++)
393 | mData[i] += aVector(i);
394 | return *this;
395 | }
396 |
397 | // operator -=
398 | template <class T>
399 | CVector<T>& CVector<T>::operator-=(const CVector<T>& aVector) {
400 | #ifdef _DEBUG
401 | if (mSize != aVector.size()) throw EVectorIncompatibleSize(mSize,aVector.size());
402 | #endif
403 | for (int i = 0; i < mSize; i++)
404 | mData[i] -= aVector(i);
405 | return *this;
406 | }
407 |
408 | // operator *=
409 | template <class T>
410 | CVector<T>& CVector<T>::operator*=(const T aValue) {
411 | for (int i = 0; i < mSize; i++)
412 | mData[i] *= aValue;
413 | return *this;
414 | }
415 |
416 | template <class T>
417 | T CVector<T>::operator*=(const CVector<T>& aVector) {
418 | #ifdef _DEBUG
419 | if (mSize != aVector.size()) throw EVectorIncompatibleSize(mSize,aVector.size());
420 | #endif
421 | T aSum = 0.0;
422 | for (int i = 0; i < mSize; i++)
423 | aSum += mData[i]*aVector(i);
424 | return aSum;
425 | }
426 |
427 | // operator ==
428 | template <class T>
429 | bool CVector<T>::operator==(const CVector<T>& aVector) {
430 | if (mSize != aVector.size()) return false;
431 | int i = 0;
432 | while (i < mSize && aVector(i) == mData[i])
433 | i++;
434 | return (i == mSize);
435 | }
436 |
437 | // operator !=
438 | template <class T>
439 | inline bool CVector<T>::operator!=(const CVector<T>& aVector) {
440 | return !((*this)==aVector);
441 | }
442 |
443 | // size
444 | template <class T>
445 | inline int CVector<T>::size() const {