├── .gitignore
├── LICENSE
├── README.md
├── codes
│   ├── dataset
│   │   ├── paris.jpg
│   │   └── starry_night.jpg
│   ├── main.py
│   └── result
│       ├── result.png
│       ├── result2.png
│       ├── result3.png
│       ├── result_3000_0.000800_0.800000.png
│       ├── result_3000_0.001000_0.020000.png
│       └── result_3000_0.010000_0.200000.png
└── images
    ├── content_layer.png
    └── example.png

/.gitignore:
--------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/
--------------------------------------------------------------------------------

/LICENSE:
--------------------------------------------------------------------------------
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.
      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License.
      Subject to the terms and conditions of this License, each Contributor
      hereby grants to You a perpetual, worldwide, non-exclusive, no-charge,
      royalty-free, irrevocable (except as stated in this section) patent
      license to make, have made, use, offer to sell, sell, import, and
      otherwise transfer the Work, where such license applies only to those
      patent claims licensable by such Contributor that are necessarily
      infringed by their Contribution(s) alone or by combination of their
      Contribution(s) with the Work to which such Contribution(s) was
      submitted. If You institute patent litigation against any entity
      (including a cross-claim or counterclaim in a lawsuit) alleging that
      the Work or a Contribution incorporated within the Work constitutes
      direct or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
# Image Style Transfer Using Convolutional Neural Networks
A Keras implementation of [Image Style Transfer Using Convolutional Neural Networks, Gatys et al.](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf)

The goal of the paper is to transfer the style of a source image onto a target image while preserving the semantic content of the target image.

![example](images/example.png)

# Style Transfer
To transfer the style of a style image $\vec{a}$ onto a content image $\vec{p}$, we can define a loss function over the generated image $\vec{x}$ as follows:

$$L_{total}(\vec{p}, \vec{a}, \vec{x}) = \alpha L_{content}(\vec{p}, \vec{x}) + \beta L_{style}(\vec{a}, \vec{x})$$

$\alpha$ and $\beta$ are the weighting factors for content and style reconstruction. $L_{content}$ measures how similar $\vec{p}$ and $\vec{x}$ are in their content representation, and $L_{style}$ measures how similar $\vec{a}$ and $\vec{x}$ are in their style representation.

The paper uses a CNN (VGG19) to generate the image, which can start either from a random image or from the content image. At each step, the image itself is updated by gradient descent so that the loss decreases. Because the pixels are optimized directly, the method is slow: the paper states that it takes about an hour to create a 512 x 512 image on an Nvidia K40 GPU.

# Lcontent
The activation values in a specific layer of the VGG19 network are defined as the **content representation**, and the **content loss** is the difference between the content representations of the two images (content and generated).

Content loss is defined as the squared-error loss between the two feature representations:

$$L_{content}(\vec{p}, \vec{x}, l) = \frac{1}{2} \sum_{i,j} \left( F^l_{ij} - P^l_{ij} \right)^2$$

where $P^l$ and $F^l$ are the respective feature representations of the content image and the generated image in layer $l$.

The image is updated so that both images have the same content representation. When a deeper layer is used, precise pixel information is lost; when a lower layer is used, the result stays close to the content image. Below is an example created using the activation values of conv4_2 and conv2_2.

![content_layer](images/content_layer.png)

# Lstyle
In this paper, the **style representation** is defined as the correlation between different filter responses. These feature correlations are given by the Gram matrix

$$G^l_{ij} = \sum_k F^l_{ik} F^l_{jk}$$

Style loss for a single layer is the squared-error loss between the two style representations:

$$E_l = \frac{1}{4 N_l^2 M_l^2} \sum_{i,j} \left( G^l_{ij} - A^l_{ij} \right)^2$$

where $A^l$ and $G^l$ are the Gram matrices of the style image and the generated image, $N_l$ is the number of feature maps in layer $l$, and $M_l$ is their spatial size. The total style loss is

$$L_{style}(\vec{a}, \vec{x}) = \sum_l w_l E_l$$

The image is updated so that both images have the same style representation. This is done by gradient descent from a white-noise image, minimising the mean-squared distance between the entries of the Gram matrices of the style image and those of the image being generated.
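These losses translate almost directly into TensorFlow. Below is a minimal sketch (function and variable names are illustrative; note it normalises by the per-layer feature-map size $M_l$ as in the paper, whereas `codes/main.py` below normalises by the output-image size):

```python
import tensorflow as tf

def gram_matrix(feat):
    # feat: feature map of shape (H, W, C) -> (C, C) channel correlations
    feat = tf.transpose(feat, (2, 0, 1))               # (C, H, W)
    feat = tf.reshape(feat, (tf.shape(feat)[0], -1))   # (C, H*W)
    return tf.matmul(feat, tf.transpose(feat))

def content_loss(P, F):
    # Squared-error between content (P) and generated (F) activations
    return tf.reduce_sum(tf.square(F - P)) / 2.0

def layer_style_loss(A_feat, F_feat):
    # Squared-error between Gram matrices, normalised by N_l and M_l
    N_l = A_feat.shape[2]                     # number of feature maps
    M_l = A_feat.shape[0] * A_feat.shape[1]   # feature-map size
    A, G = gram_matrix(A_feat), gram_matrix(F_feat)
    return tf.reduce_sum(tf.square(G - A)) / (4.0 * N_l**2 * M_l**2)

# L_total = alpha * content_loss(...) + beta * sum(w_l * layer_style_loss(...))
```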
# Result
- content image: `codes/dataset/paris.jpg`

- style image: `codes/dataset/starry_night.jpg`

- result (content_weight = 8e-4, style_weight = 8e-1): `codes/result/result_3000_0.000800_0.800000.png`

# Difference Between Paper and Implementation
- Uses the Adam optimizer instead of L-BFGS.
- Uses max pooling instead of average pooling (I couldn't find a way to easily replace the corresponding layer in Keras).
--------------------------------------------------------------------------------

/codes/dataset/paris.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/superb20/Image-Style-Transfer-Using-Convolutional-Neural-Networks/647dfd9256ec72c65c5b69d692129577abb1dda9/codes/dataset/paris.jpg
--------------------------------------------------------------------------------

/codes/dataset/starry_night.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/superb20/Image-Style-Transfer-Using-Convolutional-Neural-Networks/647dfd9256ec72c65c5b69d692129577abb1dda9/codes/dataset/starry_night.jpg
--------------------------------------------------------------------------------

/codes/main.py:
--------------------------------------------------------------------------------
import os
import numpy as np

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.applications import vgg19

# Generated image size
RESIZE_HEIGHT = 607

NUM_ITER = 3000

# Weights of the different loss components
CONTENT_WEIGHT = 8e-4  # 8e-4
STYLE_WEIGHT = 8e-1    # 8e-4

# The layer to use for the content loss
CONTENT_LAYER_NAME = "block5_conv2"  # "block2_conv2"

# List of layers to use for the style loss
STYLE_LAYER_NAMES = [
    "block1_conv1",
    "block2_conv1",
    "block3_conv1",
    "block4_conv1",
    "block5_conv1",
]

def get_result_image_size(image_path, result_height):
    # Scale the width to preserve the content image's aspect ratio
    image_width, image_height = keras.preprocessing.image.load_img(image_path).size
    result_width = int(image_width * result_height / image_height)
    return result_height, result_width

def preprocess_image(image_path, target_height, target_width):
    img = keras.preprocessing.image.load_img(image_path, target_size=(target_height, target_width))
    arr = keras.preprocessing.image.img_to_array(img)
    arr = np.expand_dims(arr, axis=0)
    arr = vgg19.preprocess_input(arr)
    return tf.convert_to_tensor(arr)

def get_model():
    # Build a VGG19 model loaded with pre-trained ImageNet weights
    model = vgg19.VGG19(weights='imagenet', include_top=False)

    # Get the symbolic outputs of each "key" layer (they have unique names)
    outputs_dict = dict([(layer.name, layer.output) for layer in model.layers])

    # Set up a model that returns the activation values for every layer in VGG19 (as a dict)
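    # Calling this extractor on a batched image tensor yields {layer_name: activation},
    # so the same model serves the content, style, and generated images.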
    return keras.Model(inputs=model.inputs, outputs=outputs_dict)

def get_optimizer():
    # Adam with an exponentially decaying learning rate
    return keras.optimizers.Adam(
        keras.optimizers.schedules.ExponentialDecay(
            initial_learning_rate=8.0, decay_steps=445, decay_rate=0.98
            # initial_learning_rate=2.0, decay_steps=376, decay_rate=0.98
        )
    )

def compute_loss(feature_extractor, combination_image, content_features, style_features):
    combination_features = feature_extractor(combination_image)
    loss_content = compute_content_loss(content_features, combination_features)
    loss_style = compute_style_loss(style_features, combination_features,
                                    combination_image.shape[1] * combination_image.shape[2])
    return CONTENT_WEIGHT * loss_content + STYLE_WEIGHT * loss_style

# A loss function designed to maintain the 'content' of the original image in the generated image
def compute_content_loss(content_features, combination_features):
    original_image = content_features[CONTENT_LAYER_NAME]
    generated_image = combination_features[CONTENT_LAYER_NAME]
    return tf.reduce_sum(tf.square(generated_image - original_image)) / 2

def compute_style_loss(style_features, combination_features, combination_size):
    # Average the per-layer style losses over the chosen layers
    loss_style = 0
    for layer_name in STYLE_LAYER_NAMES:
        style_feature = style_features[layer_name][0]
        combination_feature = combination_features[layer_name][0]
        loss_style += style_loss(style_feature, combination_feature, combination_size) / len(STYLE_LAYER_NAMES)
    return loss_style

# The "style loss" is designed to maintain the style of the reference image in the generated image.
# It is based on the Gram matrices (which capture style) of feature maps from the style reference
# image and from the generated image.
def style_loss(style_features, combination_features, combination_size):
    S = gram_matrix(style_features)
    C = gram_matrix(combination_features)
    channels = style_features.shape[2]
    return tf.reduce_sum(tf.square(S - C)) / (4.0 * (channels ** 2) * (combination_size ** 2))

def gram_matrix(x):
    # (H, W, C) feature map -> (C, C) matrix of channel-by-channel correlations
    x = tf.transpose(x, (2, 0, 1))
    features = tf.reshape(x, (tf.shape(x)[0], -1))
    gram = tf.matmul(features, tf.transpose(features))
    return gram

def save_result(generated_image, result_height, result_width, name):
    img = deprocess_image(generated_image, result_height, result_width)
    keras.preprocessing.image.save_img(name, img)

# Util function to convert a tensor into a valid image
def deprocess_image(tensor, result_height, result_width):
    tensor = tensor.numpy()
    tensor = tensor.reshape((result_height, result_width, 3))

    # Remove zero-center by mean pixel
    tensor[:, :, 0] += 103.939
    tensor[:, :, 1] += 116.779
    tensor[:, :, 2] += 123.680

    # 'BGR' -> 'RGB'
    tensor = tensor[:, :, ::-1]
    return np.clip(tensor, 0, 255).astype("uint8")

if __name__ == "__main__":
    # Prepare content and style images
    path = os.path.abspath(os.getcwd())
    content_image_path = keras.utils.get_file(os.path.join(path, 'dataset', 'paris.jpg'),
                                              'https://i.imgur.com/F28w3Ac.jpg')
    style_image_path = keras.utils.get_file(os.path.join(path, 'dataset', 'starry_night.jpg'),
                                            'https://i.imgur.com/9ooB60I.jpg')
    result_height, result_width = get_result_image_size(content_image_path, RESIZE_HEIGHT)
    print("result resolution: (%d, %d)" % (result_height, result_width))

    # Preprocessing
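    # vgg19.preprocess_input converts RGB to BGR and subtracts the ImageNet channel
    # means (103.939, 116.779, 123.68); deprocess_image reverses exactly these steps.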
    content_tensor = preprocess_image(content_image_path, result_height, result_width)
    style_tensor = preprocess_image(style_image_path, result_height, result_width)

    # Start from uniform noise; the commented line below starts from the content image instead
    generated_image = tf.Variable(tf.random.uniform(style_tensor.shape, dtype=tf.dtypes.float32))
    # generated_image = tf.Variable(preprocess_image(content_image_path, result_height, result_width))

    # Build model
    model = get_model()
    optimizer = get_optimizer()
    model.summary()

    content_features = model(content_tensor)
    style_features = model(style_tensor)

    # Optimize result image
    os.makedirs("result", exist_ok=True)
    for i in range(NUM_ITER):
        with tf.GradientTape() as tape:
            loss = compute_loss(model, generated_image, content_features, style_features)

        grads = tape.gradient(loss, generated_image)

        print("iter: %4d, loss: %8.0f" % (i, loss))
        optimizer.apply_gradients([(grads, generated_image)])

        if (i + 1) % 100 == 0:
            name = "result/generated_at_iteration_%d.png" % (i + 1)
            save_result(generated_image, result_height, result_width, name)

    name = "result/result_%d_%f_%f.png" % (NUM_ITER, CONTENT_WEIGHT, STYLE_WEIGHT)
    save_result(generated_image, result_height, result_width, name)
--------------------------------------------------------------------------------

/codes/result/result.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/superb20/Image-Style-Transfer-Using-Convolutional-Neural-Networks/647dfd9256ec72c65c5b69d692129577abb1dda9/codes/result/result.png
--------------------------------------------------------------------------------

/codes/result/result2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/superb20/Image-Style-Transfer-Using-Convolutional-Neural-Networks/647dfd9256ec72c65c5b69d692129577abb1dda9/codes/result/result2.png
--------------------------------------------------------------------------------

/codes/result/result3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/superb20/Image-Style-Transfer-Using-Convolutional-Neural-Networks/647dfd9256ec72c65c5b69d692129577abb1dda9/codes/result/result3.png
--------------------------------------------------------------------------------

/codes/result/result_3000_0.000800_0.800000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/superb20/Image-Style-Transfer-Using-Convolutional-Neural-Networks/647dfd9256ec72c65c5b69d692129577abb1dda9/codes/result/result_3000_0.000800_0.800000.png
--------------------------------------------------------------------------------

/codes/result/result_3000_0.001000_0.020000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/superb20/Image-Style-Transfer-Using-Convolutional-Neural-Networks/647dfd9256ec72c65c5b69d692129577abb1dda9/codes/result/result_3000_0.001000_0.020000.png
--------------------------------------------------------------------------------

/codes/result/result_3000_0.010000_0.200000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/superb20/Image-Style-Transfer-Using-Convolutional-Neural-Networks/647dfd9256ec72c65c5b69d692129577abb1dda9/codes/result/result_3000_0.010000_0.200000.png
--------------------------------------------------------------------------------

/images/content_layer.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/superb20/Image-Style-Transfer-Using-Convolutional-Neural-Networks/647dfd9256ec72c65c5b69d692129577abb1dda9/images/content_layer.png
--------------------------------------------------------------------------------

/images/example.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/superb20/Image-Style-Transfer-Using-Convolutional-Neural-Networks/647dfd9256ec72c65c5b69d692129577abb1dda9/images/example.png
--------------------------------------------------------------------------------