├── .gitignore
├── Dockerfile
├── LICENSE.md
├── PhotoWCTModels
│   └── photo_wct.pth
├── README.md
├── TUTORIAL.md
├── ade20k_semantic_rel.npy
├── converter.py
├── demo.py
├── demo_example1.sh
├── demo_example1_fast.sh
├── demo_example3.sh
├── demo_mask_poly.png
├── demo_result_content3_seg.pgm.visualization.jpg
├── demo_result_example1.png
├── demo_result_example2.png
├── demo_result_example3.png
├── demo_result_style3_seg.pgm.visualization.jpg
├── demo_with_ade20k_ssn.py
├── demo_with_segmentation.gif
├── download_models.py
├── download_models.sh
├── models.py
├── photo_gif.py
├── photo_smooth.py
├── photo_wct.py
├── process_stylization.py
├── process_stylization_ade20k_ssn.py
├── process_stylization_folder.py
├── smooth_filter.py
└── teaser.png
/.gitignore:
--------------------------------------------------------------------------------
1 | segmentation/
2 | outputs/
3 | models/
4 | results/
5 | images/
6 | data/
7 | logs/
8 | examples
9 | .idea/
10 | notebooks/.ipynb_checkpoints/*
11 | *.tar.gz
12 | *.zip
13 | *.pkl
14 | *.pyc
15 |
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM nvidia/cuda:9.1-cudnn7-devel-ubuntu16.04
2 | ENV ANACONDA /opt/anaconda3
3 | ENV CUDA_PATH /usr/local/cuda
4 | ENV PATH ${ANACONDA}/bin:${CUDA_PATH}/bin:$PATH
5 | ENV LD_LIBRARY_PATH ${ANACONDA}/lib:${CUDA_PATH}/lib64:$LD_LIBRARY_PATH
6 | ENV C_INCLUDE_PATH ${CUDA_PATH}/include
7 | RUN apt-get update && apt-get install -y --no-install-recommends \
8 | wget \
9 | axel \
10 | imagemagick \
11 | libopencv-dev \
12 | python-opencv \
13 | build-essential \
14 | cmake \
15 | git \
16 | curl \
17 | ca-certificates \
18 | libjpeg-dev \
19 | libpng-dev \
20 | axel \
21 | zip \
22 | unzip
23 | RUN wget https://repo.continuum.io/archive/Anaconda3-5.0.1-Linux-x86_64.sh -P /tmp
24 | RUN bash /tmp/Anaconda3-5.0.1-Linux-x86_64.sh -b -p $ANACONDA
25 | RUN rm /tmp/Anaconda3-5.0.1-Linux-x86_64.sh -rf
26 | RUN conda install -y pytorch=0.4.1 torchvision cuda91 -c pytorch
27 | RUN conda install -y -c anaconda pip
28 | RUN conda install -y -c menpo opencv3
29 | RUN pip install scikit-umfpack
30 | RUN pip install cupy-cuda91
31 | RUN pip install pynvrtc
32 |
--------------------------------------------------------------------------------
/LICENSE.md:
--------------------------------------------------------------------------------
1 | ## creative commons
2 |
3 | # Attribution-NonCommercial-ShareAlike 4.0 International
4 |
5 | Creative Commons Corporation (“Creative Commons”) is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an “as-is” basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible.
6 |
7 | ### Using Creative Commons Public Licenses
8 |
9 | Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses.
10 |
11 | * __Considerations for licensors:__ Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC-licensed material, or material used under an exception or limitation to copyright. [More considerations for licensors](http://wiki.creativecommons.org/Considerations_for_licensors_and_licensees#Considerations_for_licensors).
12 |
13 | * __Considerations for the public:__ By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor’s permission is not necessary for any reason–for example, because of any applicable exception or limitation to copyright–then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. [More considerations for the public](http://wiki.creativecommons.org/Considerations_for_licensors_and_licensees#Considerations_for_licensees).
14 |
15 | ## Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License
16 |
17 | By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.
18 |
19 | ### Section 1 – Definitions.
20 |
21 | a. __Adapted Material__ means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image.
22 |
23 | b. __Adapter's License__ means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License.
24 |
25 | c. __BY-NC-SA Compatible License__ means a license listed at [creativecommons.org/compatiblelicenses](http://creativecommons.org/compatiblelicenses), approved by Creative Commons as essentially the equivalent of this Public License.
26 |
27 | d. __Copyright and Similar Rights__ means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.
28 |
29 | e. __Effective Technological Measures__ means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements.
30 |
31 | f. __Exceptions and Limitations__ means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material.
32 |
33 | g. __License Elements__ means the license attributes listed in the name of a Creative Commons Public License. The License Elements of this Public License are Attribution, NonCommercial, and ShareAlike.
34 |
35 | h. __Licensed Material__ means the artistic or literary work, database, or other material to which the Licensor applied this Public License.
36 |
37 | i. __Licensed Rights__ means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license.
38 |
39 | j. __Licensor__ means the individual(s) or entity(ies) granting rights under this Public License.
40 |
41 | k. __NonCommercial__ means not primarily intended for or directed towards commercial advantage or monetary compensation. For purposes of this Public License, the exchange of the Licensed Material for other material subject to Copyright and Similar Rights by digital file-sharing or similar means is NonCommercial provided there is no payment of monetary compensation in connection with the exchange.
42 |
43 | l. __Share__ means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them.
44 |
45 | m. __Sui Generis Database Rights__ means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world.
46 |
47 | n. __You__ means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning.
48 |
49 | ### Section 2 – Scope.
50 |
51 | a. ___License grant.___
52 |
53 | 1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:
54 |
55 | A. reproduce and Share the Licensed Material, in whole or in part, for NonCommercial purposes only; and
56 |
57 | B. produce, reproduce, and Share Adapted Material for NonCommercial purposes only.
58 |
59 | 2. __Exceptions and Limitations.__ For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions.
60 |
61 | 3. __Term.__ The term of this Public License is specified in Section 6(a).
62 |
63 | 4. __Media and formats; technical modifications allowed.__ The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material.
64 |
65 | 5. __Downstream recipients.__
66 |
67 | A. __Offer from the Licensor – Licensed Material.__ Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License.
68 |
69 | B. __Additional offer from the Licensor – Adapted Material.__ Every recipient of Adapted Material from You automatically receives an offer from the Licensor to exercise the Licensed Rights in the Adapted Material under the conditions of the Adapter’s License You apply.
70 |
71 | C. __No downstream restrictions.__ You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.
72 |
73 | 6. __No endorsement.__ Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i).
74 |
75 | b. ___Other rights.___
76 |
77 | 1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise.
78 |
79 | 2. Patent and trademark rights are not licensed under this Public License.
80 |
81 | 3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties, including when the Licensed Material is used other than for NonCommercial purposes.
82 |
83 | ### Section 3 – License Conditions.
84 |
85 | Your exercise of the Licensed Rights is expressly made subject to the following conditions.
86 |
87 | a. ___Attribution.___
88 |
89 | 1. If You Share the Licensed Material (including in modified form), You must:
90 |
91 | A. retain the following if it is supplied by the Licensor with the Licensed Material:
92 |
93 | i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated);
94 |
95 | ii. a copyright notice;
96 |
97 | iii. a notice that refers to this Public License;
98 |
99 | iv. a notice that refers to the disclaimer of warranties;
100 |
101 | v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable;
102 |
103 | B. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and
104 |
105 | C. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License.
106 |
107 | 2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information.
108 |
109 | 3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable.
110 |
111 | b. ___ShareAlike.___
112 |
113 | In addition to the conditions in Section 3(a), if You Share Adapted Material You produce, the following conditions also apply.
114 |
115 | 1. The Adapter’s License You apply must be a Creative Commons license with the same License Elements, this version or later, or a BY-NC-SA Compatible License.
116 |
117 | 2. You must include the text of, or the URI or hyperlink to, the Adapter's License You apply. You may satisfy this condition in any reasonable manner based on the medium, means, and context in which You Share Adapted Material.
118 |
119 | 3. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, Adapted Material that restrict exercise of the rights granted under the Adapter's License You apply.
120 |
121 | ### Section 4 – Sui Generis Database Rights.
122 |
123 | Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material:
124 |
125 | a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database for NonCommercial purposes only;
126 |
127 | b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material, including for purposes of Section 3(b); and
128 |
129 | c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database.
130 |
131 | For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights.
132 |
133 | ### Section 5 – Disclaimer of Warranties and Limitation of Liability.
134 |
135 | a. __Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You.__
136 |
137 | b. __To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You.__
138 |
139 | c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.
140 |
141 | ### Section 6 – Term and Termination.
142 |
143 | a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.
144 |
145 | b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates:
146 |
147 | 1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or
148 |
149 | 2. upon express reinstatement by the Licensor.
150 |
151 | For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License.
152 |
153 | c. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License.
154 |
155 | d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.
156 |
157 | ### Section 7 – Other Terms and Conditions.
158 |
159 | a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed.
160 |
161 | b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License.
162 |
163 | ### Section 8 – Interpretation.
164 |
165 | a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License.
166 |
167 | b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions.
168 |
169 | c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor.
170 |
171 | d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.
172 |
173 | ```
174 | Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at [creativecommons.org/policies](http://creativecommons.org/policies), Creative Commons does not authorize the use of the trademark “Creative Commons” or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses.
175 |
176 | Creative Commons may be contacted at [creativecommons.org](http://creativecommons.org/).
177 | ```
178 |
--------------------------------------------------------------------------------
/PhotoWCTModels/photo_wct.pth:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NVIDIA/FastPhotoStyle/af0c8fecce58aa71f76488546231214f6684be02/PhotoWCTModels/photo_wct.pth
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | [](https://raw.githubusercontent.com/NVIDIA/FastPhotoStyle/master/LICENSE.md)
2 | 
3 | 
4 |
5 | ## FastPhotoStyle
6 |
7 | ### License
8 | Copyright (C) 2018 NVIDIA Corporation. All rights reserved.
9 | Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
10 |
11 |
12 |
13 |
14 | ### What's new
15 |
16 | | Date | News |
17 | |----------|--------------|
18 | |2018-07-25| Migrate to pytorch 0.4.0. For pytorch 0.3.0 users, check out [FastPhotoStyle for pytorch 0.3.0](https://github.com/NVIDIA/FastPhotoStyle/releases/tag/f33e07f). |
19 | | | Add a [tutorial](TUTORIAL.md) showing 3 ways of using the FastPhotoStyle algorithm.|
20 | |2018-07-10| Our paper is accepted by the ECCV 2018 conference!!! |
21 |
22 |
23 | ### About
24 |
25 | Given a content photo and a style photo, the code can transfer the style of the style photo to the content photo. The details of the algorithm behind the code are documented in our arXiv paper. Please cite the paper if this code repository is used in your publications.
26 |
27 | [A Closed-form Solution to Photorealistic Image Stylization](https://arxiv.org/abs/1802.06474)
28 | [Yijun Li (UC Merced)](https://sites.google.com/site/yijunlimaverick/), [Ming-Yu Liu (NVIDIA)](http://mingyuliu.net/), [Xueting Li (UC Merced)](https://sunshineatnoon.github.io/), [Ming-Hsuan Yang (NVIDIA, UC Merced)](http://faculty.ucmerced.edu/mhyang/), [Jan Kautz (NVIDIA)](http://jankautz.com/)
29 | European Conference on Computer Vision (ECCV), 2018
30 |
31 |
32 | ### Tutorial
33 |
34 | Please check out the [tutorial](TUTORIAL.md).
35 |
36 |
37 |
--------------------------------------------------------------------------------
/TUTORIAL.md:
--------------------------------------------------------------------------------
1 | [](https://raw.githubusercontent.com/NVIDIA/FastPhotoStyle/master/LICENSE.md)
2 | 
3 | 
4 | ## FastPhotoStyle Tutorial
5 |
6 | In this short tutorial, we will guide you through setting up the system environment for running the FastPhotoStyle software and then show several usage examples.
7 |
8 | ### Background
9 |
10 | Existing style transfer algorithms can be divided into two categories: artistic style transfer and photorealistic style transfer.
11 | For artistic style transfer, the goal is to transfer the style of a reference painting to a photo so that the stylized photo looks like a painting and carries the style of the reference painting.
12 | For photorealistic style transfer, the goal is to transfer the style of a reference photo to a photo so that the stylized photo preserves the content of the original photo but carries the style of the reference photo.
13 | The FastPhotoStyle algorithm is in the category of photorealistic style transfer.
14 |
15 | ### Algorithm
16 |
17 | FastPhotoStyle takes two images as input where one is the content image and the other is the style image. Its goal is to transfer the style of the style photo to the content photo for creating a stylized image as shown below.
18 |
19 |
20 |
21 |
22 |
23 | FastPhotoStyle divides the photorealistic stylization process into two steps.
24 | 1. **PhotoWCT:** Generate a stylized image with visible distortions by applying a whitening and coloring transform to the deep features extracted from the content and style images.
25 | 2. **Photorealistic Smoothing:** Suppress the distortion in the stylized image by applying an image smoothing filter.
26 |
27 | The output is a photorealistic image that looks as if it were captured by a camera. A minimal end-to-end sketch of the pipeline is shown below.
28 |
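The two steps map directly onto modules in this repository. The following minimal sketch, condensed from `demo.py`, loads the pretrained PhotoWCT weights, applies the stylization step, and then the smoothing step (paths are the demo defaults):

```python
# Minimal two-step pipeline, condensed from demo.py in this repository.
import torch

import process_stylization
from photo_wct import PhotoWCT       # step 1: PhotoWCT stylization
from photo_smooth import Propagator  # step 2: photorealistic smoothing

p_wct = PhotoWCT()
p_wct.load_state_dict(torch.load('./PhotoWCTModels/photo_wct.pth'))
p_wct.cuda(0)  # move the stylization network to the GPU

process_stylization.stylization(
    stylization_module=p_wct,
    smoothing_module=Propagator(),
    content_image_path='./images/content1.png',
    style_image_path='./images/style1.png',
    content_seg_path=[],  # no segmentation masks in this minimal example
    style_seg_path=[],
    output_image_path='./results/example1.png',
    cuda=1,
    save_intermediate=False,
    no_post=False,
)
```
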
29 | ### Requirements
30 |
31 | - Hardware: PC with NVIDIA Titan GPU.
32 | - Software: *Ubuntu 16.04*, *CUDA 9.1*, *Anaconda3*, *pytorch 0.4.0*
33 | - Environment variables.
34 | - export ANACONDA=PATH-TO-YOUR-ANACONDA-LIBRARY
35 | - export CUDA_PATH=/usr/local/cuda
36 | - export PATH=${ANACONDA}/bin:${CUDA_PATH}/bin:$PATH
37 | - export LD_LIBRARY_PATH=${ANACONDA}/lib:${CUDA_PATH}/lib64:$LD_LIBRARY_PATH
38 | - export C_INCLUDE_PATH=${CUDA_PATH}/include
39 | - System package
40 | - `sudo apt-get install -y axel imagemagick` (Only used for demo)
41 | - Python package
42 | - `conda install pytorch=0.4.0 torchvision cuda91 -y -c pytorch`
43 | - `pip install scikit-umfpack`
44 | - `pip install -U setuptools`
45 | - `pip install cupy`
46 | - `pip install pynvrtc`
47 | - `conda install -c menpo opencv3` (OpenCV is only required if you want to use the approximate version of the photo smoothing step.)
48 |
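A quick way to verify the Python side of the setup (a sketch, not part of the repository):

```python
# Environment sanity check: confirm the GPU build of pytorch 0.4.x is usable.
import torch

print(torch.__version__)          # expect 0.4.x
print(torch.cuda.is_available())  # expect True on a correctly set up machine
```
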
49 | ### Examples
50 |
51 | In the following, we will provide 3 usage examples.
52 | In the 1st example, we will run the FastPhotoStyle code without using
53 | segmentation masks.
54 | In the 2nd example, we will show how to use a labeling tool to create the segmentation masks and use them for stylization.
55 | In the 3rd example, we will show how to use a pretrained segmentation network to automatically generate the segmentation masks and use them for stylization.
56 |
57 | #### Example 1: Transfer style of a style photo to a content photo without using segmentation masks.
58 |
59 | You can simply type `./demo_example1.sh` to run the demo or follow the steps below.
60 | - Create image and output folders and make sure nothing is inside the folders: `mkdir images && mkdir results`
61 | - Go to the image folder: `cd images`
62 | - Download content image 1: `axel -n 1 http://freebigpictures.com/wp-content/uploads/shady-forest.jpg --output=content1.png`
63 | - Download style image 1: `axel -n 1 https://vignette.wikia.nocookie.net/strangerthings8338/images/e/e0/Wiki-background.jpeg/revision/latest?cb=20170522192233 --output=style1.png`
64 | - These images are huge. We need to resize them first. Run
65 | - `convert -resize 25% content1.png content1.png`
66 | - `convert -resize 50% style1.png style1.png`
67 | - Go back to the root folder: `cd ..`
68 | - Test the photorealistic image stylization code `python demo.py --output_image_path results/example1.png`
69 | - You should see output messages like
70 | - ```
71 | Resize image: (803,538)->(803,538)
72 | Resize image: (960,540)->(960,540)
73 | Elapsed time in stylization: 0.398996
74 | Elapsed time in propagation: 13.456573
75 | Elapsed time in post processing: 0.202319
76 | ```
77 | - You should see an output image like
78 |
79 | | Input Style Photo | Input Content Photo | Output Stylization Result |
80 | |-------------------|---------------------|---------------------------|
81 | |
|
|
|
82 |
83 | - As shown in the output messages, the computational bottleneck of FastPhotoStyle is the propagation step (the photorealistic smoothing step). We find that we can make this step much faster by using the guided image filtering algorithm as an approximation. To run the fast version of the demo, you can simply type `./demo_example1_fast.sh` or run:
84 | - `python demo.py --fast --output_image_path results/example1_fast.png`
85 | - You should see output messages like
86 | - ```
87 | Resize image: (803,538)->(803,538)
88 | Resize image: (960,540)->(960,540)
89 | Elapsed time in stylization: 0.342203
90 | Elapsed time in propagation: 0.039506
91 | Elapsed time in post processing: 0.203081
92 | ```
93 | - Check out the stylization result computed by the fast approximation step in `results/example1_fast.png`. It should look very similar to `results/example1.png` from the full algorithm.
94 |
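For reference, the only difference in the fast path is the smoothing module that `demo.py` constructs when `--fast` is passed:

```python
# With --fast, demo.py swaps the Propagator for a guided image filtering
# approximation (photo_gif.py); everything else in the pipeline is unchanged.
from photo_gif import GIFSmoothing

p_pro = GIFSmoothing(r=35, eps=0.001)
```
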
95 | #### Example 2: Transfer style of a style photo to a content photo with manually generated semantic label maps.
96 |
97 | When segmentation masks of content and style photos are available, FastPhotoStyle can utilize content–style
98 | correspondences obtained by matching the semantic labels in the segmentation masks for generating better stylization effects.
99 | In this example, we show how to manually create segmentation masks of content and style photos and use them for photorealistic style transfer.
100 |
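The label-aware transform itself is implemented in `photo_wct.py`. Conceptually, the whitening and coloring statistics are computed per matching label region rather than over the whole image; a rough sketch of the idea (not the repository's actual code, and `wct_fn` is a stand-in for the feature transform) looks like this:

```python
# Conceptual sketch only: apply the feature transform per matching semantic label.
import numpy as np

def label_aware_transform(content_feat, style_feat, cont_seg, style_seg, wct_fn):
    """content_feat/style_feat: (C, N) feature matrices; cont_seg/style_seg: flat label maps."""
    out = content_feat.copy()
    shared = np.intersect1d(np.unique(cont_seg), np.unique(style_seg))
    for lbl in shared:
        c_idx = np.flatnonzero(cont_seg == lbl)
        s_idx = np.flatnonzero(style_seg == lbl)
        # whiten the content features of this region, then color them with the
        # statistics of the matching style region (e.g., sky-to-sky)
        out[:, c_idx] = wct_fn(content_feat[:, c_idx], style_feat[:, s_idx])
    return out
```
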
101 | ##### Prepare label maps
102 |
103 | - Install the tool [labelme](https://github.com/wkentaro/labelme) and run the following command to start it: `labelme`
104 | - Please refer to [labelme](https://github.com/wkentaro/labelme) for details about how to use this great UI. Basically, do the following steps:
105 | - Click `Open` and load the target image (content or style)
106 | - Click `Create Polygons` and start drawing polygons in content or style image. Note that the corresponding regions (e.g., sky-to-sky) should have the same label. All unlabeled pixels will be automatically labeled as `0`.
107 | - Optional: Click `Edit Polygons` and polish the mask.
108 | - Save the labeling result.
109 |
110 |
111 |
112 | - The labeling result is saved in a ".json" file. By running the following command, you will get `label.png` under `path/example_json`, which is the label map used in our code. `label.png` is a 1-channel image (it usually looks totally black) consisting of consecutive labels starting from 0.
113 |
114 | ```
115 | labelme_json_to_dataset example.json -o path/example_json
116 | ```
117 |
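If you want to double-check the result, a quick sanity check (not part of the repository) could look like this:

```python
# Sanity check: verify label.png is a 1-channel map whose labels are
# consecutive integers starting at 0.
import numpy as np
from PIL import Image

label = np.array(Image.open('path/example_json/label.png'))
assert label.ndim == 2, 'expected a single-channel label map'
print('labels:', np.unique(label))  # e.g., [0 1 2] for a 3-label map
```
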
118 | ##### Stylize with label maps
119 |
120 | ```
121 | python demo.py \
122 | --content_image_path PATH-TO-YOUR-CONTENT-IMAGE \
123 | --content_seg_path PATH-TO-YOUR-CONTENT-LABEL \
124 | --style_image_path PATH-TO-YOUR-STYLE-IMAGE \
125 | --style_seg_path PATH-TO-YOUR-STYLE-LABEL \
126 | --output_image_path PATH-TO-YOUR-OUTPUT
127 | ```
128 |
129 | Below is a 3-label transfer example (images and labels are from the [DPST](https://github.com/luanfujun/deep-photo-styletransfer) work by Luan et al.):
130 |
131 | 
132 |
133 | #### Example 3: Transfer the style of a style photo to a content photo with automatically generated semantic label maps.
134 |
135 | In this example, we will show how to use segmentation masks of content and style photos generated by a pretrained segmentation network to achieve better stylization results.
136 | We will use the segmentation network provided from [CSAILVision/semantic-segmentation-pytorch](https://github.com/CSAILVision/semantic-segmentation-pytorch) in this example.
137 | To set up the segmentation network, follow these steps:
138 | - Clone the CSAIL segmentation network from this fork of [CSAILVision/semantic-segmentation-pytorch](https://github.com/CSAILVision/semantic-segmentation-pytorch) using the following command
139 | `git clone https://github.com/mingyuliutw/semantic-segmentation-pytorch segmentation`
140 | - Run the demo code in [CSAILVision/semantic-segmentation-pytorch](https://github.com/CSAILVision/semantic-segmentation-pytorch) to download the network and make sure the environment is set up properly.
141 | - `cd segmentation`
142 | - `./demo_test.sh`
143 | - You should see output messages like
144 | ```
145 | 2018-XX-XX XX:XX:XX-- http://sceneparsing.csail.mit.edu//data/ADEChallengeData2016/images/validation/ADE_val_00001519.jpg
146 | Resolving sceneparsing.csail.mit.edu (sceneparsing.csail.mit.edu)... 128.30.100.255
147 | Connecting to sceneparsing.csail.mit.edu (sceneparsing.csail.mit.edu)|128.30.100.255|:80... connected.
148 | HTTP request sent, awaiting response... 200 OK
149 | Length: 62271 (61K) [image/jpeg]
150 | Saving to: ‘./ADE_val_00001519.jpg’
151 |
152 | ADE_val_00001519.jpg 100%[=====================================>] 60.81K 366KB/s in 0.2s
153 |
154 | 2018-07-25 16:55:00 (366 KB/s) - ‘./ADE_val_00001519.jpg’ saved [62271/62271]
155 |
156 | Namespace(arch_decoder='ppm_bilinear_deepsup', arch_encoder='resnet50_dilated8', batch_size=1, fc_dim=2048, gpu_id=0, imgMaxSize=1000, imgSize=[300, 400, 500, 600], model_path='baseline-resnet50_dilated8-ppm_bilinear_deepsup', num_class=150, num_val=-1, padding_constant=8, result='./', segm_downsampling_rate=8, suffix='_epoch_20.pth', test_img='ADE_val_00001519.jpg')
157 | Loading weights for net_encoder
158 | Loading weights for net_decoder
159 | Inference done!
160 | ```
161 | - Go back to the root folder `cd ..`
162 |
163 | - Now, we are ready to use the segmentation network trained on the ADE20K dataset to automatically generate the segmentation masks.
164 | - To run the demo, you can simply type `./demo_example3.sh` or follow the steps below.
165 | - Create image and output folders and make sure nothing is inside the folders. `mkdir images && mkdir results`
166 | - Go to the image folder: `cd images`
167 | - Download content image 3: `axel -n 1 https://pre00.deviantart.net/f1a6/th/pre/i/2010/019/0/e/country_road_hdr_by_mirre89.jpg --output=content3.png`
168 | - Download style image 3: `axel -n 1 https://nerdist.com/wp-content/uploads/2017/11/Stranger_Things_S2_news_Images_V03-1024x481.jpg --output=style3.png;`
169 | - These images are huge. We need to resize them first. Run
170 | - `convert -resize 50% content3.png content3.png`
171 | - `convert -resize 50% style3.png style3.png`
172 | - Go back to the root folder: `cd ..`
173 | - **Update the python library path by** `export PYTHONPATH=$PYTHONPATH:segmentation`
174 | - We will now run the demo code, which first computes the segmentation masks of the content and style images and then performs photorealistic style transfer:
175 | `python demo_with_ade20k_ssn.py --output_visualization` or `python demo_with_ade20k_ssn.py --fast --output_visualization`
176 | - You should see output messages like
177 | ```
178 | Loading weights for net_encoder
179 | Loading weights for net_decoder
180 | Resize image: (546,366)->(546,366)
181 | Resize image: (485,273)->(485,273)
182 | Elapsed time in stylization: 0.890762
183 | Elapsed time in propagation: 0.014808
184 | Elapsed time in post processing: 0.197138
185 | ```
186 | - You should see an output image like
187 |
188 | | Input Style Photo | Input Content Photo | Output Stylization Result |
189 | |-------------------|---------------------|---------------------------|
190 | |
|
|
|
191 |
192 | - We can check out the segmentation results in the `results` folder.
193 |
194 | | Segmentation of the Style Photo | Segmentation of the Content Photo |
195 | |---------------------------------|-----------------------------------|
196 | |
|
|
197 |
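These `.pgm` masks are produced by `demo_with_ade20k_ssn.py`; the relevant fragment, condensed, is:

```python
# Condensed from demo_with_ade20k_ssn.py: the predicted class maps are written
# out as .pgm files, which the stylization code then reads back as label maps.
import cv2

cont_seg = segment_this_img('./images/content3.png')  # multi-scale segmentation inference
cv2.imwrite('./results/content3_seg.pgm', cont_seg)
style_seg = segment_this_img('./images/style3.png')
cv2.imwrite('./results/style3_seg.pgm', style_seg)
```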
198 |
199 | ### Use docker image
200 |
201 | We provide a docker image for testing the code.
202 |
203 | 1. Install docker-ce. Follow the instructions on the [Docker page](https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-docker-ce-1).
204 | 2. Install nvidia-docker. Follow the instructions on the [NVIDIA-DOCKER README page](https://github.com/NVIDIA/nvidia-docker).
205 | 3. Build the docker image `docker build -t your-docker-image:v1.0 .`
206 | 4. Run an interactive session `docker run -v YOUR_PATH:YOUR_PATH --runtime=nvidia -i -t your-docker-image:v1.0 /bin/bash`
207 | 5. `cd YOUR_PATH`
208 | 6. `./demo_example1.sh`
209 |
210 | ## Acknowledgement
211 |
212 | - We express our gratitude to Luan et al. for their great work [DPST](https://www.cs.cornell.edu/~fujun/files/style-cvpr17/style-cvpr17.pdf) and its [Torch](https://github.com/luanfujun/deep-photo-styletransfer) and [Tensorflow](https://github.com/LouieYang/deep-photo-styletransfer-tf) implementations.
213 |
--------------------------------------------------------------------------------
/ade20k_semantic_rel.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NVIDIA/FastPhotoStyle/af0c8fecce58aa71f76488546231214f6684be02/ade20k_semantic_rel.npy
--------------------------------------------------------------------------------
/converter.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | import torch
4 | import torch.nn as nn
5 | from torch.utils.serialization import load_lua
6 |
7 | from models import VGGEncoder, VGGDecoder
8 | from photo_wct import PhotoWCT
9 |
10 |
11 | def weight_assign(lua, pth, maps):
12 | for k, v in maps.items():
13 | getattr(pth, k).weight = nn.Parameter(lua.get(v).weight.float())
14 | getattr(pth, k).bias = nn.Parameter(lua.get(v).bias.float())
15 |
16 |
17 | def photo_wct_loader(p_wct):
18 | p_wct.e1.load_state_dict(torch.load('pth_models/vgg_normalised_conv1.pth'))
19 | p_wct.d1.load_state_dict(torch.load('pth_models/feature_invertor_conv1.pth'))
20 | p_wct.e2.load_state_dict(torch.load('pth_models/vgg_normalised_conv2.pth'))
21 | p_wct.d2.load_state_dict(torch.load('pth_models/feature_invertor_conv2.pth'))
22 | p_wct.e3.load_state_dict(torch.load('pth_models/vgg_normalised_conv3.pth'))
23 | p_wct.d3.load_state_dict(torch.load('pth_models/feature_invertor_conv3.pth'))
24 | p_wct.e4.load_state_dict(torch.load('pth_models/vgg_normalised_conv4.pth'))
25 | p_wct.d4.load_state_dict(torch.load('pth_models/feature_invertor_conv4.pth'))
26 |
27 |
28 | if __name__ == '__main__':
29 | if not os.path.exists('pth_models'):
30 | os.mkdir('pth_models')
31 |
32 | ## VGGEncoder1
33 | vgg1 = load_lua('models/vgg_normalised_conv1_1_mask.t7')
34 | e1 = VGGEncoder(1)
35 | weight_assign(vgg1, e1, {
36 | 'conv0': 0,
37 | 'conv1_1': 2,
38 | })
39 | torch.save(e1.state_dict(), 'pth_models/vgg_normalised_conv1.pth')
40 |
41 | ## VGGDecoder1
42 | inv1 = load_lua('models/feature_invertor_conv1_1_mask.t7')
43 | d1 = VGGDecoder(1)
44 | weight_assign(inv1, d1, {
45 | 'conv1_1': 1,
46 | })
47 | torch.save(d1.state_dict(), 'pth_models/feature_invertor_conv1.pth')
48 |
49 | ## VGGEncoder2
50 | vgg2 = load_lua('models/vgg_normalised_conv2_1_mask.t7')
51 | e2 = VGGEncoder(2)
52 | weight_assign(vgg2, e2, {
53 | 'conv0': 0,
54 | 'conv1_1': 2,
55 | 'conv1_2': 5,
56 | 'conv2_1': 9,
57 | })
58 | torch.save(e2.state_dict(), 'pth_models/vgg_normalised_conv2.pth')
59 |
60 | ## VGGDecoder2
61 | inv2 = load_lua('models/feature_invertor_conv2_1_mask.t7')
62 | d2 = VGGDecoder(2)
63 | weight_assign(inv2, d2, {
64 | 'conv2_1': 1,
65 | 'conv1_2': 5,
66 | 'conv1_1': 8,
67 | })
68 | torch.save(d2.state_dict(), 'pth_models/feature_invertor_conv2.pth')
69 |
70 | ## VGGEncoder3
71 | vgg3 = load_lua('models/vgg_normalised_conv3_1_mask.t7')
72 | e3 = VGGEncoder(3)
73 | weight_assign(vgg3, e3, {
74 | 'conv0': 0,
75 | 'conv1_1': 2,
76 | 'conv1_2': 5,
77 | 'conv2_1': 9,
78 | 'conv2_2': 12,
79 | 'conv3_1': 16,
80 | })
81 | torch.save(e3.state_dict(), 'pth_models/vgg_normalised_conv3.pth')
82 |
83 | ## VGGDecoder3
84 | inv3 = load_lua('models/feature_invertor_conv3_1_mask.t7')
85 | d3 = VGGDecoder(3)
86 | weight_assign(inv3, d3, {
87 | 'conv3_1': 1,
88 | 'conv2_2': 5,
89 | 'conv2_1': 8,
90 | 'conv1_2': 12,
91 | 'conv1_1': 15,
92 | })
93 | torch.save(d3.state_dict(), 'pth_models/feature_invertor_conv3.pth')
94 |
95 | ## VGGEncoder4
96 | vgg4 = load_lua('models/vgg_normalised_conv4_1_mask.t7')
97 | e4 = VGGEncoder(4)
98 | weight_assign(vgg4, e4, {
99 | 'conv0': 0,
100 | 'conv1_1': 2,
101 | 'conv1_2': 5,
102 | 'conv2_1': 9,
103 | 'conv2_2': 12,
104 | 'conv3_1': 16,
105 | 'conv3_2': 19,
106 | 'conv3_3': 22,
107 | 'conv3_4': 25,
108 | 'conv4_1': 29,
109 | })
110 | torch.save(e4.state_dict(), 'pth_models/vgg_normalised_conv4.pth')
111 |
112 | ## VGGDecoder4
113 | inv4 = load_lua('models/feature_invertor_conv4_1_mask.t7')
114 | d4 = VGGDecoder(4)
115 | weight_assign(inv4, d4, {
116 | 'conv4_1': 1,
117 | 'conv3_4': 5,
118 | 'conv3_3': 8,
119 | 'conv3_2': 11,
120 | 'conv3_1': 14,
121 | 'conv2_2': 18,
122 | 'conv2_1': 21,
123 | 'conv1_2': 25,
124 | 'conv1_1': 28,
125 | })
126 | torch.save(d4.state_dict(), 'pth_models/feature_invertor_conv4.pth')
127 |
128 | p_wct = PhotoWCT()
129 | photo_wct_loader(p_wct)
130 | torch.save(p_wct.state_dict(), 'PhotoWCTModels/photo_wct.pth')
131 |
--------------------------------------------------------------------------------
/demo.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright (C) 2018 NVIDIA Corporation. All rights reserved.
3 | Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
4 | """
5 |
6 | from __future__ import print_function
7 | import argparse
8 | import torch
9 | import process_stylization
10 | from photo_wct import PhotoWCT
11 | parser = argparse.ArgumentParser(description='Photorealistic Image Stylization')
12 | parser.add_argument('--model', default='./PhotoWCTModels/photo_wct.pth')
13 | parser.add_argument('--content_image_path', default='./images/content1.png')
14 | parser.add_argument('--content_seg_path', default=[])
15 | parser.add_argument('--style_image_path', default='./images/style1.png')
16 | parser.add_argument('--style_seg_path', default=[])
17 | parser.add_argument('--output_image_path', default='./results/example1.png')
18 | parser.add_argument('--save_intermediate', action='store_true', default=False)
19 | parser.add_argument('--fast', action='store_true', default=False)
20 | parser.add_argument('--no_post', action='store_true', default=False)
21 | parser.add_argument('--cuda', type=int, default=1, help='Enable CUDA.')
22 | args = parser.parse_args()
23 |
24 | # Load model
25 | p_wct = PhotoWCT()
26 | p_wct.load_state_dict(torch.load(args.model))
27 |
28 | if args.fast:
29 | from photo_gif import GIFSmoothing
30 | p_pro = GIFSmoothing(r=35, eps=0.001)
31 | else:
32 | from photo_smooth import Propagator
33 | p_pro = Propagator()
34 | if args.cuda:
35 | p_wct.cuda(0)
36 |
37 | process_stylization.stylization(
38 | stylization_module=p_wct,
39 | smoothing_module=p_pro,
40 | content_image_path=args.content_image_path,
41 | style_image_path=args.style_image_path,
42 | content_seg_path=args.content_seg_path,
43 | style_seg_path=args.style_seg_path,
44 | output_image_path=args.output_image_path,
45 | cuda=args.cuda,
46 | save_intermediate=args.save_intermediate,
47 | no_post=args.no_post
48 | )
49 |
--------------------------------------------------------------------------------
/demo_example1.sh:
--------------------------------------------------------------------------------
1 | mkdir images -p && mkdir results -p;
2 | rm images/content1.png -rf;
3 | rm images/style1.png -rf;
4 | rm results/demo_result_example1.png
5 | cd images;
6 | axel -n 1 http://freebigpictures.com/wp-content/uploads/shady-forest.jpg --output=content1.png;
7 | axel -n 1 https://vignette.wikia.nocookie.net/strangerthings8338/images/e/e0/Wiki-background.jpeg/revision/latest?cb=20170522192233 --output=style1.png;
8 | convert -resize 25% content1.png content1.png;
9 | convert -resize 50% style1.png style1.png;
10 | cd ..;
11 | python demo.py;
12 |
--------------------------------------------------------------------------------
/demo_example1_fast.sh:
--------------------------------------------------------------------------------
1 | mkdir images -p && mkdir results -p;
2 | rm images/content1.png -rf;
3 | rm images/style1.png -rf;
4 | rm results/demo_result_example1.png
5 | cd images;
6 | axel -n 1 http://freebigpictures.com/wp-content/uploads/shady-forest.jpg --output=content1.png;
7 | axel -n 1 https://vignette.wikia.nocookie.net/strangerthings8338/images/e/e0/Wiki-background.jpeg/revision/latest?cb=20170522192233 --output=style1.png;
8 | convert -resize 25% content1.png content1.png;
9 | convert -resize 50% style1.png style1.png;
10 | cd ..;
11 | python demo.py --fast --output_image_path results/example2.png;
12 |
--------------------------------------------------------------------------------
/demo_example3.sh:
--------------------------------------------------------------------------------
1 | mkdir images -p && mkdir results -p;
2 | rm images/content3.png -rf;
3 | rm images/style3.png -rf;
4 | rm results/content3_seg.pgm -rf;
5 | rm results/style3_seg.pgm -rf;
6 | rm results/stylization_with_auto_segmentation.png -rf;
7 | export PYTHONPATH=$PYTHONPATH:segmentation
8 | cd images;
9 | axel -n 1 https://pre00.deviantart.net/f1a6/th/pre/i/2010/019/0/e/country_road_hdr_by_mirre89.jpg --output=content3.png;
10 | axel -n 1 https://nerdist.com/wp-content/uploads/2017/11/Stranger_Things_S2_news_Images_V03-1024x481.jpg --output=style3.png;
11 | convert -resize 50% content3.png content3.png;
12 | convert -resize 50% style3.png style3.png;
13 | cd ..;
14 | python demo_with_ade20k_ssn.py;
15 |
--------------------------------------------------------------------------------
/demo_mask_poly.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NVIDIA/FastPhotoStyle/af0c8fecce58aa71f76488546231214f6684be02/demo_mask_poly.png
--------------------------------------------------------------------------------
/demo_result_content3_seg.pgm.visualization.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NVIDIA/FastPhotoStyle/af0c8fecce58aa71f76488546231214f6684be02/demo_result_content3_seg.pgm.visualization.jpg
--------------------------------------------------------------------------------
/demo_result_example1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NVIDIA/FastPhotoStyle/af0c8fecce58aa71f76488546231214f6684be02/demo_result_example1.png
--------------------------------------------------------------------------------
/demo_result_example2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NVIDIA/FastPhotoStyle/af0c8fecce58aa71f76488546231214f6684be02/demo_result_example2.png
--------------------------------------------------------------------------------
/demo_result_example3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NVIDIA/FastPhotoStyle/af0c8fecce58aa71f76488546231214f6684be02/demo_result_example3.png
--------------------------------------------------------------------------------
/demo_result_style3_seg.pgm.visualization.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NVIDIA/FastPhotoStyle/af0c8fecce58aa71f76488546231214f6684be02/demo_result_style3_seg.pgm.visualization.jpg
--------------------------------------------------------------------------------
/demo_with_ade20k_ssn.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright (C) 2018 NVIDIA Corporation. All rights reserved.
3 | Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
4 | """
5 | from __future__ import print_function
6 | import argparse
7 | import os
8 | import torch
9 | import process_stylization_ade20k_ssn
10 | from torch import nn
11 | from photo_wct import PhotoWCT
12 | from segmentation.dataset import round2nearest_multiple
13 | from segmentation.models import ModelBuilder, SegmentationModule
14 | from lib.nn import user_scattered_collate, async_copy_to
15 | from lib.utils import as_numpy, mark_volatile
16 | from scipy.misc import imread, imresize
17 | import cv2
18 | from torchvision import transforms
19 | import numpy as np
20 |
21 | parser = argparse.ArgumentParser(description='Photorealistic Image Stylization')
22 | parser.add_argument('--model_path', help='folder to model path', default='baseline-resnet50_dilated8-ppm_bilinear_deepsup')
23 | parser.add_argument('--suffix', default='_epoch_20.pth', help="which snapshot to load")
24 | parser.add_argument('--arch_encoder', default='resnet50_dilated8', help="architecture of net_encoder")
25 | parser.add_argument('--arch_decoder', default='ppm_bilinear_deepsup', help="architecture of net_decoder")
26 | parser.add_argument('--fc_dim', default=2048, type=int, help='number of features between encoder and decoder')
27 | parser.add_argument('--num_val', default=-1, type=int, help='number of images to evaluate')
28 | parser.add_argument('--num_class', default=150, type=int, help='number of classes')
29 | parser.add_argument('--batch_size', default=1, type=int, help='batchsize. current only supports 1')
30 | parser.add_argument('--imgSize', default=[300, 400, 500, 600], nargs='+', type=int, help='list of input image sizes for multiscale testing, e.g. 300 400 500')
31 | parser.add_argument('--imgMaxSize', default=1000, type=int, help='maximum input image size of long edge')
32 | parser.add_argument('--padding_constant', default=8, type=int, help='maximum downsampling rate of the network')
33 | parser.add_argument('--segm_downsampling_rate', default=8, type=int, help='downsampling rate of the segmentation label')
34 | parser.add_argument('--gpu_id', default=0, type=int, help='gpu_id for evaluation')
35 |
36 | parser.add_argument('--model', default='./PhotoWCTModels/photo_wct.pth', help='Path to the PhotoWCT model. These are provided by the PhotoWCT submodule, please use `git submodule update --init --recursive` to pull.')
37 | parser.add_argument('--content_image_path', default="./images/content3.png")
38 | parser.add_argument('--content_seg_path', default='./results/content3_seg.pgm')
39 | parser.add_argument('--style_image_path', default='./images/style3.png')
40 | parser.add_argument('--style_seg_path', default='./results/style3_seg.pgm')
41 | parser.add_argument('--output_image_path', default='./results/example3.png')
42 | parser.add_argument('--save_intermediate', action='store_true', default=False)
43 | parser.add_argument('--fast', action='store_true', default=False)
44 | parser.add_argument('--no_post', action='store_true', default=False)
45 | parser.add_argument('--output_visualization', action='store_true', default=False)
46 | parser.add_argument('--cuda', type=int, default=1, help='Enable CUDA.')
47 | parser.add_argument('--label_mapping', type=str, default='ade20k_semantic_rel.npy')
48 | args = parser.parse_args()
49 |
50 | segReMapping = process_stylization_ade20k_ssn.SegReMapping(args.label_mapping)
51 |
52 | # Absolute paths of segmentation model weights
53 | SEG_NET_PATH = 'segmentation'
54 | args.weights_encoder = os.path.join(SEG_NET_PATH,args.model_path, 'encoder' + args.suffix)
55 | args.weights_decoder = os.path.join(SEG_NET_PATH,args.model_path, 'decoder' + args.suffix)
56 | args.arch_encoder = 'resnet50_dilated8'
57 | args.arch_decoder = 'ppm_bilinear_deepsup'
58 | args.fc_dim = 2048
59 |
60 | # Load semantic segmentation network module
61 | builder = ModelBuilder()
62 | net_encoder = builder.build_encoder(arch=args.arch_encoder, fc_dim=args.fc_dim, weights=args.weights_encoder)
63 | net_decoder = builder.build_decoder(arch=args.arch_decoder, fc_dim=args.fc_dim, num_class=args.num_class, weights=args.weights_decoder, use_softmax=True)
64 | crit = nn.NLLLoss(ignore_index=-1)
65 | segmentation_module = SegmentationModule(net_encoder, net_decoder, crit)
66 | segmentation_module.cuda()
67 | segmentation_module.eval()
68 | transform = transforms.Compose([transforms.Normalize(mean=[102.9801, 115.9465, 122.7717], std=[1., 1., 1.])])
69 |
70 | # Load FastPhotoStyle model
71 | p_wct = PhotoWCT()
72 | p_wct.load_state_dict(torch.load(args.model))
73 | if args.fast:
74 | from photo_gif import GIFSmoothing
75 | p_pro = GIFSmoothing(r=35, eps=0.001)
76 | else:
77 | from photo_smooth import Propagator
78 | p_pro = Propagator()
79 | if args.cuda:
80 | p_wct.cuda(0)
81 |
82 |
83 | def segment_this_img(f):
84 | img = imread(f, mode='RGB')
85 |     img = img[:, :, ::-1]  # RGB to BGR (the segmentation network expects BGR input)
86 | ori_height, ori_width, _ = img.shape
87 | img_resized_list = []
88 | for this_short_size in args.imgSize:
89 | scale = this_short_size / float(min(ori_height, ori_width))
90 | target_height, target_width = int(ori_height * scale), int(ori_width * scale)
91 | target_height = round2nearest_multiple(target_height, args.padding_constant)
92 | target_width = round2nearest_multiple(target_width, args.padding_constant)
93 | img_resized = cv2.resize(img.copy(), (target_width, target_height))
94 | img_resized = img_resized.astype(np.float32)
95 | img_resized = img_resized.transpose((2, 0, 1))
96 | img_resized = transform(torch.from_numpy(img_resized))
97 | img_resized = torch.unsqueeze(img_resized, 0)
98 | img_resized_list.append(img_resized)
99 | input = dict()
100 | input['img_ori'] = img.copy()
101 | input['img_data'] = [x.contiguous() for x in img_resized_list]
102 | segSize = (img.shape[0],img.shape[1])
103 | with torch.no_grad():
104 | pred = torch.zeros(1, args.num_class, segSize[0], segSize[1])
105 | for timg in img_resized_list:
106 | feed_dict = dict()
107 | feed_dict['img_data'] = timg.cuda()
108 | feed_dict = async_copy_to(feed_dict, args.gpu_id)
109 | # forward pass
110 | pred_tmp = segmentation_module(feed_dict, segSize=segSize)
111 | pred = pred + pred_tmp.cpu() / len(args.imgSize)
112 | _, preds = torch.max(pred, dim=1)
113 | preds = as_numpy(preds.squeeze(0))
114 | return preds
115 |
116 |
117 | cont_seg = segment_this_img(args.content_image_path)
118 | cv2.imwrite(args.content_seg_path, cont_seg)
119 | style_seg = segment_this_img(args.style_image_path)
120 | cv2.imwrite(args.style_seg_path, style_seg)
121 | process_stylization_ade20k_ssn.stylization(
122 | stylization_module=p_wct,
123 | smoothing_module=p_pro,
124 | content_image_path=args.content_image_path,
125 | style_image_path=args.style_image_path,
126 | content_seg_path=args.content_seg_path,
127 | style_seg_path=args.style_seg_path,
128 | output_image_path=args.output_image_path,
129 | cuda=True,
130 | save_intermediate=args.save_intermediate,
131 | no_post=args.no_post,
132 | label_remapping=segReMapping,
133 | output_visualization=args.output_visualization
134 | )
135 |
--------------------------------------------------------------------------------
/demo_with_segmentation.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NVIDIA/FastPhotoStyle/af0c8fecce58aa71f76488546231214f6684be02/demo_with_segmentation.gif
--------------------------------------------------------------------------------
/download_models.py:
--------------------------------------------------------------------------------
1 | # Download code taken from https://stackoverflow.com/questions/25010369/wget-curl-large-file-from-google-drive/39225039#39225039
2 | import requests
3 |
4 | def download_file_from_google_drive(id, destination):
5 | URL = "https://docs.google.com/uc?export=download"
6 |
7 | session = requests.Session()
8 |
9 | response = session.get(URL, params = { 'id' : id }, stream = True)
10 | token = get_confirm_token(response)
11 |
12 | if token:
13 | params = { 'id' : id, 'confirm' : token }
14 | response = session.get(URL, params = params, stream = True)
15 |
16 | save_response_content(response, destination)
17 |
18 | def get_confirm_token(response):
19 | for key, value in response.cookies.items():
20 | if key.startswith('download_warning'):
21 | return value
22 |
23 | return None
24 |
25 | def save_response_content(response, destination):
26 | CHUNK_SIZE = 32768
27 |
28 | with open(destination, "wb") as f:
29 | for chunk in response.iter_content(CHUNK_SIZE):
30 | if chunk: # filter out keep-alive new chunks
31 | f.write(chunk)
32 |
33 | file_id = '1ENgQm9TgabE1R99zhNf5q6meBvX6WFuq'
34 | destination = './models.zip'
35 | download_file_from_google_drive(file_id, destination)
--------------------------------------------------------------------------------
/download_models.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | python download_models.py
3 | unzip models.zip
4 |
--------------------------------------------------------------------------------
/models.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright (C) 2018 NVIDIA Corporation. All rights reserved.
3 | Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
4 | """
5 | import torch.nn as nn
6 |
7 |
8 | class VGGEncoder(nn.Module):
9 | def __init__(self, level):
10 | super(VGGEncoder, self).__init__()
11 | self.level = level
12 |
13 | # 224 x 224
14 | self.conv0 = nn.Conv2d(3, 3, 1, 1, 0)
15 |
16 | self.pad1_1 = nn.ReflectionPad2d((1, 1, 1, 1))
17 | # 226 x 226
18 | self.conv1_1 = nn.Conv2d(3, 64, 3, 1, 0)
19 | self.relu1_1 = nn.ReLU(inplace=True)
20 | # 224 x 224
21 |
22 | if level < 2: return
23 |
24 | self.pad1_2 = nn.ReflectionPad2d((1, 1, 1, 1))
25 | self.conv1_2 = nn.Conv2d(64, 64, 3, 1, 0)
26 | self.relu1_2 = nn.ReLU(inplace=True)
27 | # 224 x 224
28 | self.maxpool1 = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
29 | # 112 x 112
30 |
31 | self.pad2_1 = nn.ReflectionPad2d((1, 1, 1, 1))
32 | self.conv2_1 = nn.Conv2d(64, 128, 3, 1, 0)
33 | self.relu2_1 = nn.ReLU(inplace=True)
34 | # 112 x 112
35 |
36 | if level < 3: return
37 |
38 | self.pad2_2 = nn.ReflectionPad2d((1, 1, 1, 1))
39 | self.conv2_2 = nn.Conv2d(128, 128, 3, 1, 0)
40 | self.relu2_2 = nn.ReLU(inplace=True)
41 | # 112 x 112
42 |
43 | self.maxpool2 = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
44 | # 56 x 56
45 |
46 | self.pad3_1 = nn.ReflectionPad2d((1, 1, 1, 1))
47 | self.conv3_1 = nn.Conv2d(128, 256, 3, 1, 0)
48 | self.relu3_1 = nn.ReLU(inplace=True)
49 | # 56 x 56
50 |
51 | if level < 4: return
52 |
53 | self.pad3_2 = nn.ReflectionPad2d((1, 1, 1, 1))
54 | self.conv3_2 = nn.Conv2d(256, 256, 3, 1, 0)
55 | self.relu3_2 = nn.ReLU(inplace=True)
56 | # 56 x 56
57 |
58 | self.pad3_3 = nn.ReflectionPad2d((1, 1, 1, 1))
59 | self.conv3_3 = nn.Conv2d(256, 256, 3, 1, 0)
60 | self.relu3_3 = nn.ReLU(inplace=True)
61 | # 56 x 56
62 |
63 | self.pad3_4 = nn.ReflectionPad2d((1, 1, 1, 1))
64 | self.conv3_4 = nn.Conv2d(256, 256, 3, 1, 0)
65 | self.relu3_4 = nn.ReLU(inplace=True)
66 | # 56 x 56
67 |
68 | self.maxpool3 = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
69 | # 28 x 28
70 |
71 | self.pad4_1 = nn.ReflectionPad2d((1, 1, 1, 1))
72 | self.conv4_1 = nn.Conv2d(256, 512, 3, 1, 0)
73 | self.relu4_1 = nn.ReLU(inplace=True)
74 | # 28 x 28
75 |
76 | def forward(self, x):
77 | out = self.conv0(x)
78 |
79 | out = self.pad1_1(out)
80 | out = self.conv1_1(out)
81 | out = self.relu1_1(out)
82 |
83 | if self.level < 2:
84 | return out
85 |
86 | out = self.pad1_2(out)
87 | out = self.conv1_2(out)
88 | pool1 = self.relu1_2(out)
89 |
90 | out, pool1_idx = self.maxpool1(pool1)
91 |
92 | out = self.pad2_1(out)
93 | out = self.conv2_1(out)
94 | out = self.relu2_1(out)
95 |
96 | if self.level < 3:
97 | return out, pool1_idx, pool1.size()
98 |
99 | out = self.pad2_2(out)
100 | out = self.conv2_2(out)
101 | pool2 = self.relu2_2(out)
102 |
103 | out, pool2_idx = self.maxpool2(pool2)
104 |
105 | out = self.pad3_1(out)
106 | out = self.conv3_1(out)
107 | out = self.relu3_1(out)
108 |
109 | if self.level < 4:
110 | return out, pool1_idx, pool1.size(), pool2_idx, pool2.size()
111 |
112 | out = self.pad3_2(out)
113 | out = self.conv3_2(out)
114 | out = self.relu3_2(out)
115 |
116 | out = self.pad3_3(out)
117 | out = self.conv3_3(out)
118 | out = self.relu3_3(out)
119 |
120 | out = self.pad3_4(out)
121 | out = self.conv3_4(out)
122 | pool3 = self.relu3_4(out)
123 | out, pool3_idx = self.maxpool3(pool3)
124 |
125 | out = self.pad4_1(out)
126 | out = self.conv4_1(out)
127 | out = self.relu4_1(out)
128 |
129 | return out, pool1_idx, pool1.size(), pool2_idx, pool2.size(), pool3_idx, pool3.size()
130 |
131 | def forward_multiple(self, x):
132 | out = self.conv0(x)
133 |
134 | out = self.pad1_1(out)
135 | out = self.conv1_1(out)
136 | out = self.relu1_1(out)
137 |
138 | if self.level < 2: return out
139 |
140 | out1 = out
141 |
142 | out = self.pad1_2(out)
143 | out = self.conv1_2(out)
144 | pool1 = self.relu1_2(out)
145 |
146 | out, pool1_idx = self.maxpool1(pool1)
147 |
148 | out = self.pad2_1(out)
149 | out = self.conv2_1(out)
150 | out = self.relu2_1(out)
151 |
152 | if self.level < 3: return out, out1
153 |
154 | out2 = out
155 |
156 | out = self.pad2_2(out)
157 | out = self.conv2_2(out)
158 | pool2 = self.relu2_2(out)
159 |
160 | out, pool2_idx = self.maxpool2(pool2)
161 |
162 | out = self.pad3_1(out)
163 | out = self.conv3_1(out)
164 | out = self.relu3_1(out)
165 |
166 | if self.level < 4: return out, out2, out1
167 |
168 | out3 = out
169 |
170 | out = self.pad3_2(out)
171 | out = self.conv3_2(out)
172 | out = self.relu3_2(out)
173 |
174 | out = self.pad3_3(out)
175 | out = self.conv3_3(out)
176 | out = self.relu3_3(out)
177 |
178 | out = self.pad3_4(out)
179 | out = self.conv3_4(out)
180 | pool3 = self.relu3_4(out)
181 | out, pool3_idx = self.maxpool3(pool3)
182 |
183 | out = self.pad4_1(out)
184 | out = self.conv4_1(out)
185 | out = self.relu4_1(out)
186 |
187 | return out, out3, out2, out1
188 |
189 |
190 | class VGGDecoder(nn.Module):
191 | def __init__(self, level):
192 | super(VGGDecoder, self).__init__()
193 | self.level = level
194 |
195 | if level > 3:
196 | self.pad4_1 = nn.ReflectionPad2d((1, 1, 1, 1))
197 | self.conv4_1 = nn.Conv2d(512, 256, 3, 1, 0)
198 | self.relu4_1 = nn.ReLU(inplace=True)
199 | # 28 x 28
200 |
201 | self.unpool3 = nn.MaxUnpool2d(kernel_size=2, stride=2)
202 | # 56 x 56
203 |
204 | self.pad3_4 = nn.ReflectionPad2d((1, 1, 1, 1))
205 | self.conv3_4 = nn.Conv2d(256, 256, 3, 1, 0)
206 | self.relu3_4 = nn.ReLU(inplace=True)
207 | # 56 x 56
208 |
209 | self.pad3_3 = nn.ReflectionPad2d((1, 1, 1, 1))
210 | self.conv3_3 = nn.Conv2d(256, 256, 3, 1, 0)
211 | self.relu3_3 = nn.ReLU(inplace=True)
212 | # 56 x 56
213 |
214 | self.pad3_2 = nn.ReflectionPad2d((1, 1, 1, 1))
215 | self.conv3_2 = nn.Conv2d(256, 256, 3, 1, 0)
216 | self.relu3_2 = nn.ReLU(inplace=True)
217 | # 56 x 56
218 |
219 | if level > 2:
220 | self.pad3_1 = nn.ReflectionPad2d((1, 1, 1, 1))
221 | self.conv3_1 = nn.Conv2d(256, 128, 3, 1, 0)
222 | self.relu3_1 = nn.ReLU(inplace=True)
223 | # 56 x 56
224 |
225 | self.unpool2 = nn.MaxUnpool2d(kernel_size=2, stride=2)
226 | # 112 x 112
227 |
228 | self.pad2_2 = nn.ReflectionPad2d((1, 1, 1, 1))
229 | self.conv2_2 = nn.Conv2d(128, 128, 3, 1, 0)
230 | self.relu2_2 = nn.ReLU(inplace=True)
231 | # 112 x 112
232 |
233 | if level > 1:
234 | self.pad2_1 = nn.ReflectionPad2d((1, 1, 1, 1))
235 | self.conv2_1 = nn.Conv2d(128, 64, 3, 1, 0)
236 | self.relu2_1 = nn.ReLU(inplace=True)
237 | # 112 x 112
238 |
239 | self.unpool1 = nn.MaxUnpool2d(kernel_size=2, stride=2)
240 | # 224 x 224
241 |
242 | self.pad1_2 = nn.ReflectionPad2d((1, 1, 1, 1))
243 | self.conv1_2 = nn.Conv2d(64, 64, 3, 1, 0)
244 | self.relu1_2 = nn.ReLU(inplace=True)
245 | # 224 x 224
246 |
247 | if level > 0:
248 | self.pad1_1 = nn.ReflectionPad2d((1, 1, 1, 1))
249 | self.conv1_1 = nn.Conv2d(64, 3, 3, 1, 0)
250 |
251 | def forward(self, x, pool1_idx=None, pool1_size=None, pool2_idx=None, pool2_size=None, pool3_idx=None,
252 | pool3_size=None):
253 | out = x
254 |
255 | if self.level > 3:
256 | out = self.pad4_1(out)
257 | out = self.conv4_1(out)
258 | out = self.relu4_1(out)
259 | out = self.unpool3(out, pool3_idx, output_size=pool3_size)
260 |
261 | out = self.pad3_4(out)
262 | out = self.conv3_4(out)
263 | out = self.relu3_4(out)
264 |
265 | out = self.pad3_3(out)
266 | out = self.conv3_3(out)
267 | out = self.relu3_3(out)
268 |
269 | out = self.pad3_2(out)
270 | out = self.conv3_2(out)
271 | out = self.relu3_2(out)
272 |
273 | if self.level > 2:
274 | out = self.pad3_1(out)
275 | out = self.conv3_1(out)
276 | out = self.relu3_1(out)
277 | out = self.unpool2(out, pool2_idx, output_size=pool2_size)
278 |
279 | out = self.pad2_2(out)
280 | out = self.conv2_2(out)
281 | out = self.relu2_2(out)
282 |
283 | if self.level > 1:
284 | out = self.pad2_1(out)
285 | out = self.conv2_1(out)
286 | out = self.relu2_1(out)
287 | out = self.unpool1(out, pool1_idx, output_size=pool1_size)
288 |
289 | out = self.pad1_2(out)
290 | out = self.conv1_2(out)
291 | out = self.relu1_2(out)
292 |
293 | if self.level > 0:
294 | out = self.pad1_1(out)
295 | out = self.conv1_1(out)
296 |
297 | return out
298 |
--------------------------------------------------------------------------------
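
A minimal shape-check sketch (not part of the repository) showing how a level-4 encoder/decoder pair from models.py fits together. The weights here are random, so the output is only shape-correct, not a meaningful reconstruction:

```python
import torch
from models import VGGEncoder, VGGDecoder

enc, dec = VGGEncoder(4), VGGDecoder(4)
x = torch.randn(1, 3, 224, 224)  # dummy RGB image batch
# The level-4 encoder returns the feature map plus the pooling
# indices/sizes needed by the decoder's unpooling layers.
out, p1_idx, p1_sz, p2_idx, p2_sz, p3_idx, p3_sz = enc(x)
y = dec(out, p1_idx, p1_sz, p2_idx, p2_sz, p3_idx, p3_sz)
assert y.shape == x.shape
```
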
/photo_gif.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright (C) 2018 NVIDIA Corporation. All rights reserved.
3 | Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
4 | """
5 | from __future__ import division
6 | from PIL import Image
7 | from torch import nn
8 | import numpy as np
9 | import cv2
10 | from cv2.ximgproc import guidedFilter
11 |
12 |
13 | class GIFSmoothing(nn.Module):
14 | def forward(self, *input):
15 | pass
16 |
17 | def __init__(self, r, eps):
18 | super(GIFSmoothing, self).__init__()
19 | self.r = r
20 | self.eps = eps
21 |
22 | def process(self, initImg, contentImg):
23 | return self.process_opencv(initImg, contentImg)
24 |
25 | def process_opencv(self, initImg, contentImg):
26 | '''
27 | :param initImg: intermediate stylized output. Either an image path or a PIL Image
28 | :param contentImg: the original content image. Either an image path or a PIL Image
29 | :return: smoothed output image. PIL Image
30 | '''
31 | if type(initImg) == str:
32 | init_img = cv2.imread(initImg)
33 | init_img = init_img[2:-2, 2:-2, :]  # crop the 2-pixel border of the intermediate result
34 | else:
35 | init_img = np.array(initImg)[:, :, ::-1].copy()
36 |
37 | if type(contentImg) == str:
38 | cont_img = cv2.imread(contentImg)
39 | else:
40 | cont_img = np.array(contentImg)[:, :, ::-1].copy()
41 |
42 | output_img = guidedFilter(guide=cont_img, src=init_img, radius=self.r, eps=self.eps)
43 | output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2RGB)
44 | output_img = Image.fromarray(output_img)
45 | return output_img
46 |
47 |
--------------------------------------------------------------------------------
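
A minimal usage sketch for GIFSmoothing; the file names are placeholders, cv2.ximgproc requires opencv-contrib, and r=35, eps=0.001 match the values used by demo_with_ade20k_ssn.py above:

```python
from photo_gif import GIFSmoothing

p_pro = GIFSmoothing(r=35, eps=0.001)
# Guided filtering of the intermediate stylization against the content image.
out = p_pro.process('stylized_intermediate.png', 'content.png')
out.save('stylized_smoothed.png')
```
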
/photo_smooth.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright (C) 2018 NVIDIA Corporation. All rights reserved.
3 | Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
4 | """
5 | from __future__ import division
6 | import torch.nn as nn
7 | import scipy.misc
8 | import numpy as np
9 | import scipy.sparse
10 | import scipy.sparse.linalg
11 | from numpy.lib.stride_tricks import as_strided
12 | from PIL import Image
13 |
14 |
15 | class Propagator(nn.Module):
16 | def __init__(self, beta=0.9999):
17 | super(Propagator, self).__init__()
18 | self.beta = beta
19 |
20 | def process(self, initImg, contentImg):
21 |
22 | if type(contentImg) == str:
23 | content = scipy.misc.imread(contentImg, mode='RGB')
24 | else:
25 | content = contentImg.copy()
26 |
27 |
28 | if type(initImg) == str:
29 | B = scipy.misc.imread(initImg, mode='RGB').astype(np.float64) / 255
30 | else:
31 | B = scipy.asarray(initImg).astype(np.float64) / 255
32 | # Crop the 2-pixel border of the stylized output, then re-pad both
33 | # images by replication so their sizes match for the matting Laplacian.
34 | h1,w1,k = B.shape
35 | h = h1 - 4
36 | w = w1 - 4
37 | B = B[int((h1-h)/2):int((h1-h)/2+h),int((w1-w)/2):int((w1-w)/2+w),:]
38 | content = scipy.misc.imresize(content,(h,w))
39 | B = self.__replication_padding(B,2)
40 | content = self.__replication_padding(content,2)
41 | content = content.astype(np.float64)/255
42 | B = np.reshape(B,(h1*w1,k))
43 | W = self.__compute_laplacian(content)
44 | W = W.tocsc()
45 | dd = W.sum(0)
46 | dd = np.sqrt(np.power(dd,-1))
47 | dd = dd.A.squeeze()
48 | D = scipy.sparse.csc_matrix((dd, (np.arange(0,w1*h1), np.arange(0,w1*h1))))
49 | S = D.dot(W).dot(D)
50 | A = scipy.sparse.identity(w1*h1) - self.beta*S
51 | A = A.tocsc()
52 | solver = scipy.sparse.linalg.factorized(A)
53 | V = np.zeros((h1*w1,k))
54 | V[:,0] = solver(B[:,0])
55 | V[:,1] = solver(B[:,1])
56 | V[:,2] = solver(B[:,2])
57 | V = V*(1-self.beta)
58 | V = V.reshape(h1,w1,k)
59 | V = V[2:2+h,2:2+w,:]
60 |
61 | img = Image.fromarray(np.uint8(np.clip(V * 255., 0, 255.)))
62 | return img
63 |
64 | # Returns sparse matting laplacian
65 | # The implementation of the function is heavily borrowed from
66 | # https://github.com/MarcoForte/closed-form-matting/blob/master/closed_form_matting.py
67 | # We thank Marco Forte for sharing his code.
68 | def __compute_laplacian(self, img, eps=10**(-7), win_rad=1):
69 | win_size = (win_rad*2+1)**2
70 | h, w, d = img.shape
71 | c_h, c_w = h - 2*win_rad, w - 2*win_rad
72 | win_diam = win_rad*2+1
73 | indsM = np.arange(h*w).reshape((h, w))
74 | ravelImg = img.reshape(h*w, d)
75 | win_inds = self.__rolling_block(indsM, block=(win_diam, win_diam))
76 | win_inds = win_inds.reshape(c_h, c_w, win_size)
77 | winI = ravelImg[win_inds]
78 | win_mu = np.mean(winI, axis=2, keepdims=True)
79 | win_var = np.einsum('...ji,...jk ->...ik', winI, winI)/win_size - np.einsum('...ji,...jk ->...ik', win_mu, win_mu)
80 | inv = np.linalg.inv(win_var + (eps/win_size)*np.eye(3))
81 | X = np.einsum('...ij,...jk->...ik', winI - win_mu, inv)
82 | vals = (1/win_size)*(1 + np.einsum('...ij,...kj->...ik', X, winI - win_mu))
83 | nz_indsCol = np.tile(win_inds, win_size).ravel()
84 | nz_indsRow = np.repeat(win_inds, win_size).ravel()
85 | nz_indsVal = vals.ravel()
86 | L = scipy.sparse.coo_matrix((nz_indsVal, (nz_indsRow, nz_indsCol)), shape=(h*w, h*w))
87 | return L
88 |
89 | def __replication_padding(self, arr,pad):
90 | h,w,c = arr.shape
91 | ans = np.zeros((h+pad*2,w+pad*2,c))
92 | for i in range(c):
93 | ans[:,:,i] = np.pad(arr[:,:,i],pad_width=(pad,pad),mode='edge')
94 | return ans
95 |
96 | def __rolling_block(self, A, block=(3, 3)):
97 | shape = (A.shape[0] - block[0] + 1, A.shape[1] - block[1] + 1) + block
98 | strides = (A.strides[0], A.strides[1]) + A.strides
99 | return as_strided(A, shape=shape, strides=strides)
--------------------------------------------------------------------------------
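
For reference, the closed-form step implemented by Propagator.process can be written compactly. With $W$ the matting affinity of the content image, $D$ its diagonal degree matrix, and $B$ one color channel of the intermediate stylized image, the code factorizes and solves, per channel,

$$ V = (1-\beta)\,(I - \beta S)^{-1} B, \qquad S = D^{-1/2} W D^{-1/2}, $$

with $\beta = 0.9999$ by default.
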
/photo_wct.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright (C) 2018 NVIDIA Corporation. All rights reserved.
3 | Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
4 | """
5 |
6 | import numpy as np
7 | from PIL import Image
8 | import torch
9 | import torch.nn as nn
10 | from models import VGGEncoder, VGGDecoder
11 |
12 |
13 | class PhotoWCT(nn.Module):
14 | def __init__(self):
15 | super(PhotoWCT, self).__init__()
16 | self.e1 = VGGEncoder(1)
17 | self.d1 = VGGDecoder(1)
18 | self.e2 = VGGEncoder(2)
19 | self.d2 = VGGDecoder(2)
20 | self.e3 = VGGEncoder(3)
21 | self.d3 = VGGDecoder(3)
22 | self.e4 = VGGEncoder(4)
23 | self.d4 = VGGDecoder(4)
24 |
25 | def transform(self, cont_img, styl_img, cont_seg, styl_seg):
26 | self.__compute_label_info(cont_seg, styl_seg)
27 |
28 | sF4, sF3, sF2, sF1 = self.e4.forward_multiple(styl_img)
29 |
30 | cF4, cpool_idx, cpool1, cpool_idx2, cpool2, cpool_idx3, cpool3 = self.e4(cont_img)
31 | sF4 = sF4.data.squeeze(0)
32 | cF4 = cF4.data.squeeze(0)
33 |
34 | csF4 = self.__feature_wct(cF4, sF4, cont_seg, styl_seg)
35 | Im4 = self.d4(csF4, cpool_idx, cpool1, cpool_idx2, cpool2, cpool_idx3, cpool3)
36 |
37 | cF3, cpool_idx, cpool1, cpool_idx2, cpool2 = self.e3(Im4)
38 | sF3 = sF3.data.squeeze(0)
39 | cF3 = cF3.data.squeeze(0)
40 | csF3 = self.__feature_wct(cF3, sF3, cont_seg, styl_seg)
41 | Im3 = self.d3(csF3, cpool_idx, cpool1, cpool_idx2, cpool2)
42 |
43 | cF2, cpool_idx, cpool = self.e2(Im3)
44 | sF2 = sF2.data.squeeze(0)
45 | cF2 = cF2.data.squeeze(0)
46 | csF2 = self.__feature_wct(cF2, sF2, cont_seg, styl_seg)
47 | Im2 = self.d2(csF2, cpool_idx, cpool)
48 |
49 | cF1 = self.e1(Im2)
50 | sF1 = sF1.data.squeeze(0)
51 | cF1 = cF1.data.squeeze(0)
52 | csF1 = self.__feature_wct(cF1, sF1, cont_seg, styl_seg)
53 | Im1 = self.d1(csF1)
54 | return Im1
55 |
56 | def __compute_label_info(self, cont_seg, styl_seg):
57 | if cont_seg.size == 0 or styl_seg.size == 0:  # no segmentation maps provided
58 | return
59 | max_label = np.max(cont_seg) + 1
60 | self.label_set = np.unique(cont_seg)
61 | self.label_indicator = np.zeros(max_label)
62 | for l in self.label_set:
63 | # if l==0:
64 | # continue
65 | is_valid = lambda a, b: a > 10 and b > 10 and a / b < 100 and b / a < 100  # both regions non-trivial (>10 px) and within 100x of each other in size
66 | o_cont_mask = np.where(cont_seg.reshape(cont_seg.shape[0] * cont_seg.shape[1]) == l)
67 | o_styl_mask = np.where(styl_seg.reshape(styl_seg.shape[0] * styl_seg.shape[1]) == l)
68 | self.label_indicator[l] = is_valid(o_cont_mask[0].size, o_styl_mask[0].size)
69 |
70 | def __feature_wct(self, cont_feat, styl_feat, cont_seg, styl_seg):
71 | cont_c, cont_h, cont_w = cont_feat.size(0), cont_feat.size(1), cont_feat.size(2)
72 | styl_c, styl_h, styl_w = styl_feat.size(0), styl_feat.size(1), styl_feat.size(2)
73 | cont_feat_view = cont_feat.view(cont_c, -1).clone()
74 | styl_feat_view = styl_feat.view(styl_c, -1).clone()
75 |
76 | if cont_seg.size == 0 or styl_seg.size == 0:
77 | target_feature = self.__wct_core(cont_feat_view, styl_feat_view)
78 | else:
79 | target_feature = cont_feat.view(cont_c, -1).clone()
80 | if len(cont_seg.shape) == 2:
81 | t_cont_seg = np.asarray(Image.fromarray(cont_seg).resize((cont_w, cont_h), Image.NEAREST))
82 | else:
83 | t_cont_seg = np.asarray(Image.fromarray(cont_seg, mode='RGB').resize((cont_w, cont_h), Image.NEAREST))
84 | if len(styl_seg.shape) == 2:
85 | t_styl_seg = np.asarray(Image.fromarray(styl_seg).resize((styl_w, styl_h), Image.NEAREST))
86 | else:
87 | t_styl_seg = np.asarray(Image.fromarray(styl_seg, mode='RGB').resize((styl_w, styl_h), Image.NEAREST))
88 |
89 | for l in self.label_set:
90 | if self.label_indicator[l] == 0:
91 | continue
92 | cont_mask = np.where(t_cont_seg.reshape(t_cont_seg.shape[0] * t_cont_seg.shape[1]) == l)
93 | styl_mask = np.where(t_styl_seg.reshape(t_styl_seg.shape[0] * t_styl_seg.shape[1]) == l)
94 | if cont_mask[0].size <= 0 or styl_mask[0].size <= 0:
95 | continue
96 |
97 | cont_indi = torch.LongTensor(cont_mask[0])
98 | styl_indi = torch.LongTensor(styl_mask[0])
99 | if self.is_cuda:
100 | cont_indi = cont_indi.cuda(0)
101 | styl_indi = styl_indi.cuda(0)
102 |
103 | cFFG = torch.index_select(cont_feat_view, 1, cont_indi)
104 | sFFG = torch.index_select(styl_feat_view, 1, styl_indi)
105 |
106 |
107 | tmp_target_feature = self.__wct_core(cFFG, sFFG)
108 |
109 | if torch.__version__ >= "0.4.0":
110 | # Work around an apparent index_copy_ bug in PyTorch 0.4.0 by copying into the transposed tensor instead.
111 | new_target_feature = torch.transpose(target_feature, 1, 0)
112 | new_target_feature.index_copy_(0, cont_indi, \
113 | torch.transpose(tmp_target_feature,1,0))
114 | target_feature = torch.transpose(new_target_feature, 1, 0)
115 | else:
116 | target_feature.index_copy_(1, cont_indi, tmp_target_feature)
117 |
118 | target_feature = target_feature.view_as(cont_feat)
119 | ccsF = target_feature.float().unsqueeze(0)
120 | return ccsF
121 |
122 | def __wct_core(self, cont_feat, styl_feat):
123 | cFSize = cont_feat.size()
124 | c_mean = torch.mean(cont_feat, 1) # c x (h x w)
125 | c_mean = c_mean.unsqueeze(1).expand_as(cont_feat)
126 | cont_feat = cont_feat - c_mean
127 |
128 | iden = torch.eye(cFSize[0]) # .double()
129 | if self.is_cuda:
130 | iden = iden.cuda()
131 |
132 | contentConv = torch.mm(cont_feat, cont_feat.t()).div(cFSize[1] - 1) + iden
133 |
134 | c_u, c_e, c_v = torch.svd(contentConv, some=False)
135 | # c_e2, c_v = torch.eig(contentConv, True)
136 | # c_e = c_e2[:,0]
137 |
138 | k_c = cFSize[0]
139 | for i in range(cFSize[0] - 1, -1, -1):
140 | if c_e[i] >= 0.00001:
141 | k_c = i + 1
142 | break
143 |
144 | sFSize = styl_feat.size()
145 | s_mean = torch.mean(styl_feat, 1)
146 | styl_feat = styl_feat - s_mean.unsqueeze(1).expand_as(styl_feat)
147 | styleConv = torch.mm(styl_feat, styl_feat.t()).div(sFSize[1] - 1)
148 | s_u, s_e, s_v = torch.svd(styleConv, some=False)
149 |
150 | k_s = sFSize[0]
151 | for i in range(sFSize[0] - 1, -1, -1):
152 | if s_e[i] >= 0.00001:
153 | k_s = i + 1
154 | break
155 |
156 | c_d = (c_e[0:k_c]).pow(-0.5)
157 | step1 = torch.mm(c_v[:, 0:k_c], torch.diag(c_d))
158 | step2 = torch.mm(step1, (c_v[:, 0:k_c].t()))
159 | whiten_cF = torch.mm(step2, cont_feat)
160 |
161 | s_d = (s_e[0:k_s]).pow(0.5)
162 | targetFeature = torch.mm(torch.mm(torch.mm(s_v[:, 0:k_s], torch.diag(s_d)), (s_v[:, 0:k_s].t())), whiten_cF)
163 | targetFeature = targetFeature + s_mean.unsqueeze(1).expand_as(targetFeature)
164 | return targetFeature
165 |
166 | @property
167 | def is_cuda(self):
168 | return next(self.parameters()).is_cuda
169 |
170 | def forward(self, *input):
171 | pass
--------------------------------------------------------------------------------
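
For reference, __wct_core implements the standard whitening and coloring transform. Writing $f_c$, $f_s$ for the mean-subtracted content and style features and $E\,\Lambda\,E^\top$ for the eigendecomposition of each feature covariance (obtained via SVD in the code), the transform is

$$ \hat f_c = E_c \Lambda_c^{-1/2} E_c^\top f_c, \qquad f_{cs} = E_s \Lambda_s^{1/2} E_s^\top \hat f_c + \mu_s, $$

where eigenvalues below $10^{-5}$ are truncated, matching the k_c/k_s loops in the code.
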
/process_stylization.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright (C) 2018 NVIDIA Corporation. All rights reserved.
3 | Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
4 | """
5 | from __future__ import print_function
6 | import time
7 | import numpy as np
8 | from PIL import Image
9 | from torch.autograd import Variable
10 | import torchvision.transforms as transforms
11 | import torchvision.utils as utils
12 | import torch.nn as nn
13 | import torch
14 | from smooth_filter import smooth_filter
15 |
16 |
17 | class ReMapping:
18 | def __init__(self):
19 | self.remapping = {}  # maps old label -> new label; process() iterates over items()
20 |
21 | def process(self, seg):
22 | new_seg = seg.copy()
23 | for k, v in self.remapping.items():
24 | new_seg[seg == k] = v
25 | return new_seg
26 |
27 |
28 | class Timer:
29 | def __init__(self, msg):
30 | self.msg = msg
31 | self.start_time = None
32 |
33 | def __enter__(self):
34 | self.start_time = time.time()
35 |
36 | def __exit__(self, exc_type, exc_value, exc_tb):
37 | print(self.msg % (time.time() - self.start_time))
38 |
39 |
40 | def memory_limit_image_resize(cont_img):
41 | # Keep images from being too small (< MINSIZE) or too large (> MAXSIZE) to bound GPU memory use
42 | MINSIZE=256
43 | MAXSIZE=960
44 | orig_width = cont_img.width
45 | orig_height = cont_img.height
46 | if max(cont_img.width,cont_img.height) < MINSIZE:
47 | if cont_img.width > cont_img.height:
48 | cont_img.thumbnail((int(cont_img.width*1.0/cont_img.height*MINSIZE), MINSIZE), Image.BICUBIC)
49 | else:
50 | cont_img.thumbnail((MINSIZE, int(cont_img.height*1.0/cont_img.width*MINSIZE)), Image.BICUBIC)
51 | if min(cont_img.width,cont_img.height) > MAXSIZE:
52 | if cont_img.width > cont_img.height:
53 | cont_img.thumbnail((MAXSIZE, int(cont_img.height*1.0/cont_img.width*MAXSIZE)), Image.BICUBIC)
54 | else:
55 | cont_img.thumbnail(((int(cont_img.width*1.0/cont_img.height*MAXSIZE), MAXSIZE)), Image.BICUBIC)
56 | print("Resize image: (%d,%d)->(%d,%d)" % (orig_width, orig_height, cont_img.width, cont_img.height))
57 | return cont_img.width, cont_img.height
58 |
59 |
60 | def stylization(stylization_module, smoothing_module, content_image_path, style_image_path, content_seg_path, style_seg_path, output_image_path,
61 | cuda, save_intermediate, no_post, cont_seg_remapping=None, styl_seg_remapping=None):
62 | # Load image
63 | with torch.no_grad():
64 | cont_img = Image.open(content_image_path).convert('RGB')
65 | styl_img = Image.open(style_image_path).convert('RGB')
66 |
67 | new_cw, new_ch = memory_limit_image_resize(cont_img)
68 | new_sw, new_sh = memory_limit_image_resize(styl_img)
69 | cont_pilimg = cont_img.copy()
70 | cw = cont_pilimg.width
71 | ch = cont_pilimg.height
72 | try:
73 | cont_seg = Image.open(content_seg_path)
74 | styl_seg = Image.open(style_seg_path)
75 | cont_seg = cont_seg.resize((new_cw, new_ch), Image.NEAREST)  # PIL resize returns a new image; keep the result
76 | styl_seg = styl_seg.resize((new_sw, new_sh), Image.NEAREST)
77 |
78 | except Exception:  # segmentation maps missing or unreadable; fall back to global stylization
79 | cont_seg = []
80 | styl_seg = []
81 |
82 | cont_img = transforms.ToTensor()(cont_img).unsqueeze(0)
83 | styl_img = transforms.ToTensor()(styl_img).unsqueeze(0)
84 |
85 | if cuda:
86 | cont_img = cont_img.cuda(0)
87 | styl_img = styl_img.cuda(0)
88 | stylization_module.cuda(0)
89 |
90 |
91 |
92 |
93 | cont_seg = np.asarray(cont_seg)
94 | styl_seg = np.asarray(styl_seg)
95 | if cont_seg_remapping is not None:
96 | cont_seg = cont_seg_remapping.process(cont_seg)
97 | if styl_seg_remapping is not None:
98 | styl_seg = styl_seg_remapping.process(styl_seg)
99 |
100 | if save_intermediate:
101 | with Timer("Elapsed time in stylization: %f"):
102 | stylized_img = stylization_module.transform(cont_img, styl_img, cont_seg, styl_seg)
103 | if ch != new_ch or cw != new_cw:
104 | print("De-resize image: (%d,%d)->(%d,%d)" %(new_cw,new_ch,cw,ch))
105 | stylized_img = nn.functional.upsample(stylized_img, size=(ch,cw), mode='bilinear')
106 | utils.save_image(stylized_img.data.cpu().float(), output_image_path, nrow=1, padding=0)
107 |
108 | with Timer("Elapsed time in propagation: %f"):
109 | out_img = smoothing_module.process(output_image_path, content_image_path)
110 | out_img.save(output_image_path)
111 |
112 | if not cuda:
113 | print("NotImplemented: The CPU version of smooth filter has not been implemented currently.")
114 | return
115 |
116 | if no_post is False:
117 | with Timer("Elapsed time in post processing: %f"):
118 | out_img = smooth_filter(output_image_path, content_image_path, f_radius=15, f_edge=1e-1)
119 | out_img.save(output_image_path)
120 | else:
121 | with Timer("Elapsed time in stylization: %f"):
122 | stylized_img = stylization_module.transform(cont_img, styl_img, cont_seg, styl_seg)
123 | if ch != new_ch or cw != new_cw:
124 | print("De-resize image: (%d,%d)->(%d,%d)" %(new_cw,new_ch,cw,ch))
125 | stylized_img = nn.functional.upsample(stylized_img, size=(ch,cw), mode='bilinear')
126 | grid = utils.make_grid(stylized_img.data, nrow=1, padding=0)
127 | ndarr = grid.mul(255).clamp(0, 255).byte().permute(1, 2, 0).cpu().numpy()
128 | out_img = Image.fromarray(ndarr)
129 |
130 | with Timer("Elapsed time in propagation: %f"):
131 | out_img = smoothing_module.process(out_img, cont_pilimg)
132 |
133 | if no_post is False:
134 | with Timer("Elapsed time in post processing: %f"):
135 | out_img = smooth_filter(out_img, cont_pilimg, f_radius=15, f_edge=1e-1)
136 | out_img.save(output_image_path)
137 |
138 |
--------------------------------------------------------------------------------
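
A minimal driver sketch (all paths are placeholders) mirroring the keyword signature of process_stylization.stylization above:

```python
import torch
from photo_wct import PhotoWCT
from photo_smooth import Propagator
import process_stylization

p_wct = PhotoWCT()
p_wct.load_state_dict(torch.load('./PhotoWCTModels/photo_wct.pth'))
process_stylization.stylization(
    stylization_module=p_wct,
    smoothing_module=Propagator(),
    content_image_path='content.png',
    style_image_path='style.png',
    content_seg_path='content.pgm',  # optional; a missing file falls back to global stylization
    style_seg_path='style.pgm',
    output_image_path='result.png',
    cuda=True,
    save_intermediate=False,
    no_post=False,
)
```
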
/process_stylization_ade20k_ssn.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright (C) 2018 NVIDIA Corporation. All rights reserved.
3 | Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
4 | """
5 |
6 | from __future__ import print_function
7 | import torch
8 | import numpy as np
9 | from PIL import Image
10 | from torch.autograd import Variable
11 | import torchvision.transforms as transforms
12 | import torchvision.utils as utils
13 | import torch.nn as nn
14 | from smooth_filter import smooth_filter
15 | from process_stylization import Timer, memory_limit_image_resize
16 | from scipy.io import loadmat
17 | colors = loadmat('segmentation/data/color150.mat')['colors']
18 |
19 |
20 | def overlay(img, pred_color, blend_factor=0.4):
21 | import cv2
22 | edges = cv2.Canny(pred_color, 20, 40)
23 | edges = cv2.dilate(edges, np.ones((5,5),np.uint8), iterations=1)
24 | out = (1-blend_factor)*img + blend_factor * pred_color
25 | edge_pixels = (edges==255)
26 | new_color = [0,0,255]
27 | for i in range(0,3):
28 | timg = out[:,:,i]
29 | timg[edge_pixels]=new_color[i]
30 | out[:,:,i] = timg
31 | return out
32 |
33 |
34 | def visualize_result(label_map):
35 | label_map = label_map.astype('int')
36 | label_map_rgb = np.zeros((label_map.shape[0], label_map.shape[1], 3), dtype=np.uint8)
37 | for label in np.unique(label_map):
38 | label_map_rgb += (label_map == label)[:, :, np.newaxis] * \
39 | np.tile(colors[label],(label_map.shape[0], label_map.shape[1], 1))
40 | return label_map_rgb
41 |
42 |
43 | class SegReMapping:
44 | def __init__(self, mapping_name, min_ratio=0.02):
45 | self.label_mapping = np.load(mapping_name)
46 | self.min_ratio = min_ratio
47 |
48 | def cross_remapping(self, cont_seg, styl_seg):
49 | cont_label_info = []
50 | new_cont_label_info = []
51 | for label in np.unique(cont_seg):
52 | cont_label_info.append(label)
53 | new_cont_label_info.append(label)
54 |
55 | style_label_info = []
56 | new_style_label_info = []
57 | for label in np.unique(styl_seg):
58 | style_label_info.append(label)
59 | new_style_label_info.append(label)
60 |
61 | cont_set_diff = set(cont_label_info) - set(style_label_info)
62 | # Find the content labels that are not covered by the style segmentation and
63 | # assign each one to the best-matching label that the style does contain.
64 | for s in cont_set_diff:
65 | cont_label_index = cont_label_info.index(s)
66 | for j in range(self.label_mapping.shape[0]):
67 | new_label = self.label_mapping[j, s]
68 | if new_label in style_label_info:
69 | new_cont_label_info[cont_label_index] = new_label
70 | break
71 | new_cont_seg = cont_seg.copy()
72 | for i,current_label in enumerate(cont_label_info):
73 | new_cont_seg[(cont_seg == current_label)] = new_cont_label_info[i]
74 |
75 | cont_label_info = []
76 | for label in np.unique(new_cont_seg):
77 | cont_label_info.append(label)
78 | styl_set_diff = set(style_label_info) - set(cont_label_info)
79 | valid_styl_set = set(style_label_info) - set(styl_set_diff)
80 | for s in styl_set_diff:
81 | style_label_index = style_label_info.index(s)
82 | for j in range(self.label_mapping.shape[0]):
83 | new_label = self.label_mapping[j, s]
84 | if new_label in valid_styl_set:
85 | new_style_label_info[style_label_index] = new_label
86 | break
87 | new_styl_seg = styl_seg.copy()
88 | for i,current_label in enumerate(style_label_info):
89 | # print("%d -> %d" %(current_label,new_style_label_info[i]))
90 | new_styl_seg[(styl_seg == current_label)] = new_style_label_info[i]
91 |
92 | return new_cont_seg, new_styl_seg
93 |
94 | def self_remapping(self, seg):
95 | init_ratio = self.min_ratio
96 | # Reassign labels covering only a small fraction of the image to a related label with large coverage
97 | new_seg = seg.copy()
98 | [h,w] = new_seg.shape
99 | n_pixels = h*w
100 | # First scan through what are the available labels and their sizes
101 | label_info = []
102 | ratio_info = []
103 | new_label_info = []
104 | for label in np.unique(seg):
105 | ratio = np.sum(np.float32((seg == label))[:])/n_pixels
106 | label_info.append(label)
107 | new_label_info.append(label)
108 | ratio_info.append(ratio)
109 | for i,current_label in enumerate(label_info):
110 | if ratio_info[i] < init_ratio:
111 | for j in range(self.label_mapping.shape[0]):
112 | new_label = self.label_mapping[j,current_label]
113 | if new_label in label_info:
114 | index = label_info.index(new_label)
115 | if index >= 0:
116 | if ratio_info[index] >= init_ratio:
117 | new_label_info[i] = new_label
118 | break
119 | for i,current_label in enumerate(label_info):
120 | new_seg[(seg == current_label)] = new_label_info[i]
121 | return new_seg
122 |
123 |
124 | def stylization(stylization_module, smoothing_module, content_image_path, style_image_path, content_seg_path,
125 | style_seg_path, output_image_path,
126 | cuda, save_intermediate, no_post, label_remapping, output_visualization=False):
127 | # Load image
128 | with torch.no_grad():
129 | cont_img = Image.open(content_image_path).convert('RGB')
130 | styl_img = Image.open(style_image_path).convert('RGB')
131 |
132 | new_cw, new_ch = memory_limit_image_resize(cont_img)
133 | new_sw, new_sh = memory_limit_image_resize(styl_img)
134 | cont_pilimg = cont_img.copy()
135 | styl_pilimg = styl_img.copy()
136 | cw = cont_pilimg.width
137 | ch = cont_pilimg.height
138 | try:
139 | cont_seg = Image.open(content_seg_path)
140 | styl_seg = Image.open(style_seg_path)
141 | cont_seg = cont_seg.resize((new_cw, new_ch), Image.NEAREST)  # PIL resize returns a new image; keep the result
142 | styl_seg = styl_seg.resize((new_sw, new_sh), Image.NEAREST)
143 |
144 | except Exception:  # segmentation maps missing or unreadable; fall back to global stylization
145 | cont_seg = []
146 | styl_seg = []
147 |
148 | cont_img = transforms.ToTensor()(cont_img).unsqueeze(0)
149 | styl_img = transforms.ToTensor()(styl_img).unsqueeze(0)
150 |
151 | if cuda:
152 | cont_img = cont_img.cuda(0)
153 | styl_img = styl_img.cuda(0)
154 | stylization_module.cuda(0)
155 |
156 |
157 |
158 |
159 | cont_seg = np.asarray(cont_seg)
160 | styl_seg = np.asarray(styl_seg)
161 |
162 | cont_seg = label_remapping.self_remapping(cont_seg)
163 | styl_seg = label_remapping.self_remapping(styl_seg)
164 | cont_seg, styl_seg = label_remapping.cross_remapping(cont_seg, styl_seg)
165 |
166 | if output_visualization:
167 | import cv2
168 | cont_seg_vis = visualize_result(cont_seg)
169 | styl_seg_vis = visualize_result(styl_seg)
170 | cont_seg_vis = overlay(cv2.imread(content_image_path), cont_seg_vis)
171 | styl_seg_vis = overlay(cv2.imread(style_image_path), styl_seg_vis)
172 | cv2.imwrite(content_seg_path + '.visualization.jpg', cont_seg_vis)
173 | cv2.imwrite(style_seg_path + '.visualization.jpg', styl_seg_vis)
174 |
175 | if save_intermediate:
176 | with Timer("Elapsed time in stylization: %f"):
177 | stylized_img = stylization_module.transform(cont_img, styl_img, cont_seg, styl_seg)
178 | if ch != new_ch or cw != new_cw:
179 | print("De-resize image: (%d,%d)->(%d,%d)" % (new_cw, new_ch, cw, ch))
180 | stylized_img = nn.functional.upsample(stylized_img, size=(ch, cw), mode='bilinear')
181 | utils.save_image(stylized_img.data.cpu().float(), output_image_path, nrow=1, padding=0)
182 |
183 | with Timer("Elapsed time in propagation: %f"):
184 | out_img = smoothing_module.process(output_image_path, content_image_path)
185 | out_img.save(output_image_path)
186 |
187 | if not cuda:
188 | print("NotImplemented: The CPU version of smooth filter has not been implemented currently.")
189 | return
190 |
191 | if no_post is False:
192 | with Timer("Elapsed time in post processing: %f"):
193 | out_img = smooth_filter(output_image_path, content_image_path, f_radius=15, f_edge=1e-1)
194 | out_img.save(output_image_path)
195 | else:
196 | with Timer("Elapsed time in stylization: %f"):
197 | stylized_img = stylization_module.transform(cont_img, styl_img, cont_seg, styl_seg)
198 | if ch != new_ch or cw != new_cw:
199 | print("De-resize image: (%d,%d)->(%d,%d)" % (new_cw, new_ch, cw, ch))
200 | stylized_img = nn.functional.upsample(stylized_img, size=(ch, cw), mode='bilinear')
201 | grid = utils.make_grid(stylized_img.data, nrow=1, padding=0)
202 | ndarr = grid.mul(255).clamp(0, 255).byte().permute(1, 2, 0).cpu().numpy()
203 | out_img = Image.fromarray(ndarr)
204 |
205 | with Timer("Elapsed time in propagation: %f"):
206 | out_img = smoothing_module.process(out_img, cont_pilimg)
207 |
208 | if no_post is False:
209 | with Timer("Elapsed time in post processing: %f"):
210 | out_img = smooth_filter(out_img, cont_pilimg, f_radius=15, f_edge=1e-1)
211 | out_img.save(output_image_path)
212 | return
213 |
214 |
--------------------------------------------------------------------------------
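
A minimal sketch of how the two remapping passes chain together (mirroring the calls inside stylization above). The dummy label maps are placeholders, and importing the module assumes segmentation/data/color150.mat is present:

```python
import numpy as np
from process_stylization_ade20k_ssn import SegReMapping

seg_remap = SegReMapping('ade20k_semantic_rel.npy')
cont_seg = np.zeros((4, 4), dtype=np.int64)  # stand-ins for real ADE20k label maps
styl_seg = np.ones((4, 4), dtype=np.int64)
cont_seg = seg_remap.self_remapping(cont_seg)  # merge tiny regions within each map
styl_seg = seg_remap.self_remapping(styl_seg)
cont_seg, styl_seg = seg_remap.cross_remapping(cont_seg, styl_seg)  # reconcile the two label sets
```
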
/process_stylization_folder.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright (C) 2018 NVIDIA Corporation. All rights reserved.
3 | Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
4 | """
5 | from __future__ import print_function
6 | import argparse
7 | import os
8 | import torch
9 | from photo_wct import PhotoWCT
10 | import process_stylization
11 |
12 | parser = argparse.ArgumentParser(description='Photorealistic Image Stylization')
13 | parser.add_argument('--model', default='./PhotoWCTModels/photo_wct.pth')
14 | parser.add_argument('--cuda', type=bool, default=True, help='Enable CUDA.')
15 | parser.add_argument('--save_intermediate', action='store_true', default=False)
16 | parser.add_argument('--fast', action='store_true', default=False)
17 | parser.add_argument('--no_post', action='store_true', default=False)
18 | parser.add_argument('--folder', type=str, default='examples')
19 | parser.add_argument('--beta', type=float, default=0.9999)
20 | parser.add_argument('--cont_img_ext', type=str, default='.png')
21 | parser.add_argument('--cont_seg_ext', type=str, default='.pgm')
22 | parser.add_argument('--styl_img_ext', type=str, default='.png')
23 | parser.add_argument('--styl_seg_ext', type=str, default='.pgm')
24 | args = parser.parse_args()
25 |
26 | folder = args.folder
27 | cont_img_folder = os.path.join(folder, 'content_img')
28 | cont_seg_folder = os.path.join(folder, 'content_seg')
29 | styl_img_folder = os.path.join(folder, 'style_img')
30 | styl_seg_folder = os.path.join(folder, 'style_seg')
31 | outp_img_folder = os.path.join(folder, 'results')
32 | cont_img_list = [f for f in os.listdir(cont_img_folder) if os.path.isfile(os.path.join(cont_img_folder, f))]
33 | cont_img_list.sort()
34 |
35 | # Load model
36 | p_wct = PhotoWCT()
37 | p_wct.load_state_dict(torch.load(args.model))
38 | # Load Propagator
39 | if args.fast:
40 | from photo_gif import GIFSmoothing
41 | p_pro = GIFSmoothing(r=35, eps=0.01)
42 | else:
43 | from photo_smooth import Propagator
44 | p_pro = Propagator(args.beta)
45 |
46 | for f in cont_img_list:
47 | content_image_path = os.path.join(cont_img_folder, f)
48 | content_seg_path = os.path.join(cont_seg_folder, f).replace(args.cont_img_ext, args.cont_seg_ext)
49 | style_image_path = os.path.join(styl_img_folder, f)
50 | style_seg_path = os.path.join(styl_seg_folder, f).replace(args.styl_img_ext, args.styl_seg_ext)
51 | output_image_path = os.path.join(outp_img_folder, f)
52 |
53 | print("Content image: " + content_image_path )
54 | if os.path.isfile(content_seg_path):
55 | print("Content mask: " + content_seg_path )
56 |
57 | print("Style image: " + style_image_path )
58 | if os.path.isfile(style_seg_path):
59 | print("Style mask: " + style_seg_path )
60 |
61 | process_stylization.stylization(
62 | stylization_module=p_wct,
63 | smoothing_module=p_pro,
64 | content_image_path=content_image_path,
65 | style_image_path=style_image_path,
66 | content_seg_path=content_seg_path,
67 | style_seg_path=style_seg_path,
68 | output_image_path=output_image_path,
69 | cuda=args.cuda,
70 | save_intermediate=args.save_intermediate,
71 | no_post=args.no_post
72 | )
73 |
--------------------------------------------------------------------------------
/smooth_filter.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright (C) 2018 NVIDIA Corporation. All rights reserved.
3 | Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
4 | """
5 | src = '''
6 | #include "/usr/local/cuda/include/math_functions.h"
7 | #define TB 256
8 | #define EPS 1e-7
9 |
10 | __device__ bool InverseMat4x4(double m_in[4][4], double inv_out[4][4]) {
11 | double m[16], inv[16];
12 | for (int i = 0; i < 4; i++) {
13 | for (int j = 0; j < 4; j++) {
14 | m[i * 4 + j] = m_in[i][j];
15 | }
16 | }
17 |
18 | inv[0] = m[5] * m[10] * m[15] -
19 | m[5] * m[11] * m[14] -
20 | m[9] * m[6] * m[15] +
21 | m[9] * m[7] * m[14] +
22 | m[13] * m[6] * m[11] -
23 | m[13] * m[7] * m[10];
24 |
25 | inv[4] = -m[4] * m[10] * m[15] +
26 | m[4] * m[11] * m[14] +
27 | m[8] * m[6] * m[15] -
28 | m[8] * m[7] * m[14] -
29 | m[12] * m[6] * m[11] +
30 | m[12] * m[7] * m[10];
31 |
32 | inv[8] = m[4] * m[9] * m[15] -
33 | m[4] * m[11] * m[13] -
34 | m[8] * m[5] * m[15] +
35 | m[8] * m[7] * m[13] +
36 | m[12] * m[5] * m[11] -
37 | m[12] * m[7] * m[9];
38 |
39 | inv[12] = -m[4] * m[9] * m[14] +
40 | m[4] * m[10] * m[13] +
41 | m[8] * m[5] * m[14] -
42 | m[8] * m[6] * m[13] -
43 | m[12] * m[5] * m[10] +
44 | m[12] * m[6] * m[9];
45 |
46 | inv[1] = -m[1] * m[10] * m[15] +
47 | m[1] * m[11] * m[14] +
48 | m[9] * m[2] * m[15] -
49 | m[9] * m[3] * m[14] -
50 | m[13] * m[2] * m[11] +
51 | m[13] * m[3] * m[10];
52 |
53 | inv[5] = m[0] * m[10] * m[15] -
54 | m[0] * m[11] * m[14] -
55 | m[8] * m[2] * m[15] +
56 | m[8] * m[3] * m[14] +
57 | m[12] * m[2] * m[11] -
58 | m[12] * m[3] * m[10];
59 |
60 | inv[9] = -m[0] * m[9] * m[15] +
61 | m[0] * m[11] * m[13] +
62 | m[8] * m[1] * m[15] -
63 | m[8] * m[3] * m[13] -
64 | m[12] * m[1] * m[11] +
65 | m[12] * m[3] * m[9];
66 |
67 | inv[13] = m[0] * m[9] * m[14] -
68 | m[0] * m[10] * m[13] -
69 | m[8] * m[1] * m[14] +
70 | m[8] * m[2] * m[13] +
71 | m[12] * m[1] * m[10] -
72 | m[12] * m[2] * m[9];
73 |
74 | inv[2] = m[1] * m[6] * m[15] -
75 | m[1] * m[7] * m[14] -
76 | m[5] * m[2] * m[15] +
77 | m[5] * m[3] * m[14] +
78 | m[13] * m[2] * m[7] -
79 | m[13] * m[3] * m[6];
80 |
81 | inv[6] = -m[0] * m[6] * m[15] +
82 | m[0] * m[7] * m[14] +
83 | m[4] * m[2] * m[15] -
84 | m[4] * m[3] * m[14] -
85 | m[12] * m[2] * m[7] +
86 | m[12] * m[3] * m[6];
87 |
88 | inv[10] = m[0] * m[5] * m[15] -
89 | m[0] * m[7] * m[13] -
90 | m[4] * m[1] * m[15] +
91 | m[4] * m[3] * m[13] +
92 | m[12] * m[1] * m[7] -
93 | m[12] * m[3] * m[5];
94 |
95 | inv[14] = -m[0] * m[5] * m[14] +
96 | m[0] * m[6] * m[13] +
97 | m[4] * m[1] * m[14] -
98 | m[4] * m[2] * m[13] -
99 | m[12] * m[1] * m[6] +
100 | m[12] * m[2] * m[5];
101 |
102 | inv[3] = -m[1] * m[6] * m[11] +
103 | m[1] * m[7] * m[10] +
104 | m[5] * m[2] * m[11] -
105 | m[5] * m[3] * m[10] -
106 | m[9] * m[2] * m[7] +
107 | m[9] * m[3] * m[6];
108 |
109 | inv[7] = m[0] * m[6] * m[11] -
110 | m[0] * m[7] * m[10] -
111 | m[4] * m[2] * m[11] +
112 | m[4] * m[3] * m[10] +
113 | m[8] * m[2] * m[7] -
114 | m[8] * m[3] * m[6];
115 |
116 | inv[11] = -m[0] * m[5] * m[11] +
117 | m[0] * m[7] * m[9] +
118 | m[4] * m[1] * m[11] -
119 | m[4] * m[3] * m[9] -
120 | m[8] * m[1] * m[7] +
121 | m[8] * m[3] * m[5];
122 |
123 | inv[15] = m[0] * m[5] * m[10] -
124 | m[0] * m[6] * m[9] -
125 | m[4] * m[1] * m[10] +
126 | m[4] * m[2] * m[9] +
127 | m[8] * m[1] * m[6] -
128 | m[8] * m[2] * m[5];
129 |
130 | double det = m[0] * inv[0] + m[1] * inv[4] + m[2] * inv[8] + m[3] * inv[12];
131 |
132 | if (abs(det) < 1e-9) {
133 | return false;
134 | }
135 |
136 |
137 | det = 1.0 / det;
138 |
139 | for (int i = 0; i < 4; i++) {
140 | for (int j = 0; j < 4; j++) {
141 | inv_out[i][j] = inv[i * 4 + j] * det;
142 | }
143 | }
144 |
145 | return true;
146 | }
147 |
148 | extern "C"
149 | __global__ void best_local_affine_kernel(
150 | float *output, float *input, float *affine_model,
151 | int h, int w, float epsilon, int kernel_radius
152 | )
153 | {
154 | int size = h * w;
155 | int id = blockIdx.x * blockDim.x + threadIdx.x;
156 |
157 | if (id < size) {
158 | int x = id % w, y = id / w;
159 |
160 | double Mt_M[4][4] = {}; // 4x4
161 | double invMt_M[4][4] = {};
162 | double Mt_S[3][4] = {}; // RGB -> 1x4
163 | double A[3][4] = {};
164 | for (int i = 0; i < 4; i++)
165 | for (int j = 0; j < 4; j++) {
166 | Mt_M[i][j] = 0, invMt_M[i][j] = 0;
167 | if (i != 3) {
168 | Mt_S[i][j] = 0, A[i][j] = 0;
169 | if (i == j)
170 | Mt_M[i][j] = 1e-3;
171 | }
172 | }
173 |
174 | for (int dy = -kernel_radius; dy <= kernel_radius; dy++) {
175 | for (int dx = -kernel_radius; dx <= kernel_radius; dx++) {
176 |
177 | int xx = x + dx, yy = y + dy;
178 | int id2 = yy * w + xx;
179 |
180 | if (0 <= xx && xx < w && 0 <= yy && yy < h) {
181 |
182 | Mt_M[0][0] += input[id2 + 2*size] * input[id2 + 2*size];
183 | Mt_M[0][1] += input[id2 + 2*size] * input[id2 + size];
184 | Mt_M[0][2] += input[id2 + 2*size] * input[id2];
185 | Mt_M[0][3] += input[id2 + 2*size];
186 |
187 | Mt_M[1][0] += input[id2 + size] * input[id2 + 2*size];
188 | Mt_M[1][1] += input[id2 + size] * input[id2 + size];
189 | Mt_M[1][2] += input[id2 + size] * input[id2];
190 | Mt_M[1][3] += input[id2 + size];
191 |
192 | Mt_M[2][0] += input[id2] * input[id2 + 2*size];
193 | Mt_M[2][1] += input[id2] * input[id2 + size];
194 | Mt_M[2][2] += input[id2] * input[id2];
195 | Mt_M[2][3] += input[id2];
196 |
197 | Mt_M[3][0] += input[id2 + 2*size];
198 | Mt_M[3][1] += input[id2 + size];
199 | Mt_M[3][2] += input[id2];
200 | Mt_M[3][3] += 1;
201 |
202 | Mt_S[0][0] += input[id2 + 2*size] * output[id2 + 2*size];
203 | Mt_S[0][1] += input[id2 + size] * output[id2 + 2*size];
204 | Mt_S[0][2] += input[id2] * output[id2 + 2*size];
205 | Mt_S[0][3] += output[id2 + 2*size];
206 |
207 | Mt_S[1][0] += input[id2 + 2*size] * output[id2 + size];
208 | Mt_S[1][1] += input[id2 + size] * output[id2 + size];
209 | Mt_S[1][2] += input[id2] * output[id2 + size];
210 | Mt_S[1][3] += output[id2 + size];
211 |
212 | Mt_S[2][0] += input[id2 + 2*size] * output[id2];
213 | Mt_S[2][1] += input[id2 + size] * output[id2];
214 | Mt_S[2][2] += input[id2] * output[id2];
215 | Mt_S[2][3] += output[id2];
216 | }
217 | }
218 | }
219 |
220 | bool success = InverseMat4x4(Mt_M, invMt_M);
221 |
222 | for (int i = 0; i < 3; i++) {
223 | for (int j = 0; j < 4; j++) {
224 | for (int k = 0; k < 4; k++) {
225 | A[i][j] += invMt_M[j][k] * Mt_S[i][k];
226 | }
227 | }
228 | }
229 |
230 | for (int i = 0; i < 3; i++) {
231 | for (int j = 0; j < 4; j++) {
232 | int affine_id = i * 4 + j;
233 | affine_model[12 * id + affine_id] = A[i][j];
234 | }
235 | }
236 | }
237 | return ;
238 | }
239 |
240 | extern "C"
241 | __global__ void bilateral_smooth_kernel(
242 | float *affine_model, float *filtered_affine_model, float *guide,
243 | int h, int w, int kernel_radius, float sigma1, float sigma2
244 | )
245 | {
246 | int id = blockIdx.x * blockDim.x + threadIdx.x;
247 | int size = h * w;
248 | if (id < size) {
249 | int x = id % w;
250 | int y = id / w;
251 |
252 | double sum_affine[12] = {};
253 | double sum_weight = 0;
254 | for (int dx = -kernel_radius; dx <= kernel_radius; dx++) {
255 | for (int dy = -kernel_radius; dy <= kernel_radius; dy++) {
256 | int yy = y + dy, xx = x + dx;
257 | int id2 = yy * w + xx;
258 | if (0 <= xx && xx < w && 0 <= yy && yy < h) {
259 | float color_diff1 = guide[yy*w + xx] - guide[y*w + x];
260 | float color_diff2 = guide[yy*w + xx + size] - guide[y*w + x + size];
261 | float color_diff3 = guide[yy*w + xx + 2*size] - guide[y*w + x + 2*size];
262 | float color_diff_sqr =
263 | (color_diff1*color_diff1 + color_diff2*color_diff2 + color_diff3*color_diff3) / 3;
264 |
265 | float v1 = exp(-(dx * dx + dy * dy) / (2 * sigma1 * sigma1));
266 | float v2 = exp(-(color_diff_sqr) / (2 * sigma2 * sigma2));
267 | float weight = v1 * v2;
268 |
269 | for (int i = 0; i < 3; i++) {
270 | for (int j = 0; j < 4; j++) {
271 | int affine_id = i * 4 + j;
272 | sum_affine[affine_id] += weight * affine_model[id2*12 + affine_id];
273 | }
274 | }
275 | sum_weight += weight;
276 | }
277 | }
278 | }
279 |
280 | for (int i = 0; i < 3; i++) {
281 | for (int j = 0; j < 4; j++) {
282 | int affine_id = i * 4 + j;
283 | filtered_affine_model[id*12 + affine_id] = sum_affine[affine_id] / sum_weight;
284 | }
285 | }
286 | }
287 | return ;
288 | }
289 |
290 |
291 | extern "C"
292 | __global__ void reconstruction_best_kernel(
293 | float *input, float *filtered_affine_model, float *filtered_best_output,
294 | int h, int w
295 | )
296 | {
297 | int id = blockIdx.x * blockDim.x + threadIdx.x;
298 | int size = h * w;
299 | if (id < size) {
300 | double out1 =
301 | input[id + 2*size] * filtered_affine_model[id*12 + 0] + // A[0][0] +
302 | input[id + size] * filtered_affine_model[id*12 + 1] + // A[0][1] +
303 | input[id] * filtered_affine_model[id*12 + 2] + // A[0][2] +
304 | filtered_affine_model[id*12 + 3]; //A[0][3];
305 | double out2 =
306 | input[id + 2*size] * filtered_affine_model[id*12 + 4] + //A[1][0] +
307 | input[id + size] * filtered_affine_model[id*12 + 5] + //A[1][1] +
308 | input[id] * filtered_affine_model[id*12 + 6] + //A[1][2] +
309 | filtered_affine_model[id*12 + 7]; //A[1][3];
310 | double out3 =
311 | input[id + 2*size] * filtered_affine_model[id*12 + 8] + //A[2][0] +
312 | input[id + size] * filtered_affine_model[id*12 + 9] + //A[2][1] +
313 | input[id] * filtered_affine_model[id*12 + 10] + //A[2][2] +
314 | filtered_affine_model[id*12 + 11]; // A[2][3];
315 |
316 | filtered_best_output[id] = out1;
317 | filtered_best_output[id + size] = out2;
318 | filtered_best_output[id + 2*size] = out3;
319 | }
320 | return ;
321 | }
322 | '''
323 |
324 | import torch
325 | import numpy as np
326 | from PIL import Image
327 | from cupy.cuda import function
328 | from pynvrtc.compiler import Program
329 | from collections import namedtuple
330 |
331 |
332 | def smooth_local_affine(output_cpu, input_cpu, epsilon, patch, h, w, f_r, f_e):
333 | # program = Program(src.encode('utf-8'), 'best_local_affine_kernel.cu'.encode('utf-8'))
334 | # ptx = program.compile(['-I/usr/local/cuda/include'.encode('utf-8')])
335 | program = Program(src, 'best_local_affine_kernel.cu')
336 | ptx = program.compile(['-I/usr/local/cuda/include'])
337 | m = function.Module()
338 | m.load(bytes(ptx.encode()))
339 |
340 | _reconstruction_best_kernel = m.get_function('reconstruction_best_kernel')
341 | _bilateral_smooth_kernel = m.get_function('bilateral_smooth_kernel')
342 | _best_local_affine_kernel = m.get_function('best_local_affine_kernel')
343 | Stream = namedtuple('Stream', ['ptr'])
344 | s = Stream(ptr=torch.cuda.current_stream().cuda_stream)
345 |
346 | filter_radius = f_r
347 | sigma1 = filter_radius / 3
348 | sigma2 = f_e
349 | radius = (patch - 1) / 2
350 |
351 | filtered_best_output = torch.zeros(np.shape(input_cpu)).cuda()
352 | affine_model = torch.zeros((h * w, 12)).cuda()
353 | filtered_affine_model = torch.zeros((h * w, 12)).cuda()
354 |
355 | input_ = torch.from_numpy(input_cpu).cuda()
356 | output_ = torch.from_numpy(output_cpu).cuda()
357 | _best_local_affine_kernel(
358 | grid=(int((h * w) / 256 + 1), 1),
359 | block=(256, 1, 1),
360 | args=[output_.data_ptr(), input_.data_ptr(), affine_model.data_ptr(),
361 | np.int32(h), np.int32(w), np.float32(epsilon), np.int32(radius)], stream=s
362 | )
363 |
364 | _bilateral_smooth_kernel(
365 | grid=(int((h * w) / 256 + 1), 1),
366 | block=(256, 1, 1),
367 | args=[affine_model.data_ptr(), filtered_affine_model.data_ptr(), input_.data_ptr(), np.int32(h), np.int32(w), np.int32(f_r), np.float32(sigma1), np.float32(sigma2)], stream=s
368 | )
369 |
370 | _reconstruction_best_kernel(
371 | grid=(int((h * w) / 256 + 1), 1),
372 | block=(256, 1, 1),
373 | args=[input_.data_ptr(), filtered_affine_model.data_ptr(), filtered_best_output.data_ptr(),
374 | np.int32(h), np.int32(w)], stream=s
375 | )
376 | numpy_filtered_best_output = filtered_best_output.cpu().numpy()
377 | return numpy_filtered_best_output
378 |
379 |
380 | def smooth_filter(initImg, contentImg, f_radius=15,f_edge=1e-1):
381 | '''
382 | :param initImg: intermediate stylized output. Either an image path or a PIL Image
383 | :param contentImg: the original content image. Either an image path or a PIL Image
384 | :return: stylized output image. PIL Image
385 | '''
386 | if type(initImg) == str:
387 | initImg = Image.open(initImg).convert("RGB")
388 | best_image_bgr = np.array(initImg, dtype=np.float32)
389 | bH, bW, bC = best_image_bgr.shape  # numpy shape order is (height, width, channels)
390 | best_image_bgr = best_image_bgr[:, :, ::-1]
391 | best_image_bgr = best_image_bgr.transpose((2, 0, 1))
392 |
393 | if type(contentImg) == str:
394 | contentImg = Image.open(contentImg).convert("RGB")
395 | content_input = contentImg.resize((bW, bH))  # PIL resize takes (width, height)
396 | content_input = np.array(content_input, dtype=np.float32)
397 | content_input = content_input[:, :, ::-1]
398 | content_input = content_input.transpose((2, 0, 1))
399 | input_ = np.ascontiguousarray(content_input, dtype=np.float32) / 255.
400 | _, H, W = np.shape(input_)
401 | output_ = np.ascontiguousarray(best_image_bgr, dtype=np.float32) / 255.
402 | best_ = smooth_local_affine(output_, input_, 1e-7, 3, H, W, f_radius, f_edge)
403 | best_ = best_.transpose(1, 2, 0)
404 | result = Image.fromarray(np.uint8(np.clip(best_ * 255., 0, 255.)))
405 | return result
406 |
--------------------------------------------------------------------------------
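
A minimal usage sketch for smooth_filter (CUDA plus the cupy/pynvrtc stack are required; paths are placeholders):

```python
from smooth_filter import smooth_filter

# Fit smoothed local affine models from the content image to the stylized
# result; f_radius/f_edge are the defaults used throughout the repo.
out = smooth_filter('result.png', 'content.png', f_radius=15, f_edge=1e-1)
out.save('result_post.png')
```
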
/teaser.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NVIDIA/FastPhotoStyle/af0c8fecce58aa71f76488546231214f6684be02/teaser.png
--------------------------------------------------------------------------------