├── .gitignore
├── README.md
├── app.py
├── app.wsgi
├── checkpoints
│   ├── la_muse.ckpt
│   ├── rain_princess.ckpt
│   ├── scream.ckpt
│   ├── udnie.ckpt
│   └── wave.ckpt
├── content
│   ├── NCKU.jpg
│   ├── chicago.jpg
│   ├── input.jpg
│   └── stata.jpg
├── evaluate.py
├── fsm.png
├── fsm.py
├── my_utils.py
├── output
│   ├── NCKU.jpg
│   ├── out.jpg
│   └── stata.jpg
├── requirements.txt
├── src
│   ├── optimize.py
│   ├── transform.py
│   ├── utils.py
│   └── vgg.py
├── style
│   ├── la_muse.jpg
│   ├── rain_princess.jpg
│   ├── the_scream.jpg
│   ├── udnie.jpg
│   └── wave.jpg
└── thumbs
    ├── Neural_Style_Transfer_System.JPG
    ├── banner.png
    ├── fast_neural_style.png
    ├── icon.png
    ├── image_generation.gif
    ├── result.jpg
    └── system.JPG
/.gitignore:
--------------------------------------------------------------------------------
1 | # Manual
2 | .vscode/
3 | usage.txt
4 | # Byte-compiled / optimized / DLL files
5 | __pycache__/
6 | *.py[cod]
7 | *$py.class
8 |
9 | # C extensions
10 | *.so
11 |
12 | # Distribution / packaging
13 | .Python
14 | build/
15 | develop-eggs/
16 | dist/
17 | downloads/
18 | eggs/
19 | .eggs/
20 | lib/
21 | lib64/
22 | parts/
23 | sdist/
24 | var/
25 | wheels/
26 | share/python-wheels/
27 | *.egg-info/
28 | .installed.cfg
29 | *.egg
30 | MANIFEST
31 |
32 | # PyInstaller
33 | # Usually these files are written by a python script from a template
34 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
35 | *.manifest
36 | *.spec
37 |
38 | # Installer logs
39 | pip-log.txt
40 | pip-delete-this-directory.txt
41 |
42 | # Unit test / coverage reports
43 | htmlcov/
44 | .tox/
45 | .nox/
46 | .coverage
47 | .coverage.*
48 | .cache
49 | nosetests.xml
50 | coverage.xml
51 | *.cover
52 | .hypothesis/
53 | .pytest_cache/
54 |
55 | # Translations
56 | *.mo
57 | *.pot
58 |
59 | # Django stuff:
60 | *.log
61 | local_settings.py
62 | db.sqlite3
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | target/
76 |
77 | # Jupyter Notebook
78 | .ipynb_checkpoints
79 |
80 | # IPython
81 | profile_default/
82 | ipython_config.py
83 |
84 | # pyenv
85 | .python-version
86 |
87 | # celery beat schedule file
88 | celerybeat-schedule
89 |
90 | # SageMath parsed files
91 | *.sage.py
92 |
93 | # Environments
94 | .env
95 | .venv
96 | env/
97 | venv/
98 | ENV/
99 | env.bak/
100 | venv.bak/
101 |
102 | # Spyder project settings
103 | .spyderproject
104 | .spyproject
105 |
106 | # Rope project settings
107 | .ropeproject
108 |
109 | # mkdocs documentation
110 | /site
111 |
112 | # mypy
113 | .mypy_cache/
114 | .dmypy.json
115 | dmypy.json
116 |
117 | # Pyre type checker
118 | .pyre/
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | # Project Odin
4 | ## What is Project Odin?
5 | **Project Odin** is a Facebook chatbot that can do **Neural Style Transfer**.
6 |
7 | ## What is Neural Style Transfer?
8 | Neural Style Transfer is an algorithm that uses a **Deep Learning Model** to **"repaint"** an image according to a given style.
9 |
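At the core of the style representation is the Gram matrix of feature-map activations, which `src/optimize.py` computes for each style layer. A minimal NumPy sketch (the 4x4x3 feature map is illustrative):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (H, W, C) feature map, flattened to
    (H*W, C) and normalized by the number of entries,
    as in src/optimize.py."""
    feats = features.reshape(-1, features.shape[-1])
    return feats.T @ feats / features.size

# Toy 4x4 feature map with 3 channels
fmap = np.ones((4, 4, 3), dtype=np.float32)
g = gram_matrix(fmap)
print(g.shape)  # (3, 3)
```

The style loss then penalizes the squared difference between the Gram matrices of the generated image and the style target.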
10 |

11 |
12 |
13 | ### [Watch demo videos on YouTube!](https://www.youtube.com/playlist?list=PLyrtJ1CjyyOPmKlV7Yck4STTHBChEGosH)
14 | ## Screenshots
15 |
16 |
17 |
18 | ## System Framework
19 |
20 |
21 | ## Neural Style Transfer System
22 |
23 | ### Fast Neural Style
24 |
25 |
26 | ### Loss Network (we use VGG-19 instead of VGG-16)
27 |
28 |
29 | ## Image Generation
30 |
31 |
32 | ## FSM Graph
33 |
34 |
35 | ### States
36 | - Init : Initial state
37 | - Welcome : Send the welcome template to the user
38 | - Transfer : Send the style images to the user
39 | - Style : Ask the user to upload an image
40 | - Image : Transfer the uploaded image
41 | - Finish : Say goodbye to the user
42 | - Examples : Send an example template to the user
43 | - About : Send a link to this repo to the user
44 |
45 | ### Conditions
46 | - is_going_to_Welcome : Check if the user sent the magic word **Hi**
47 | - is_going_to_Transfer : Check if the user hit the **Start!** button on the welcome screen
48 | - is_going_to_Style : Check if the user entered a valid style index (must be 1~5)
49 | - is_going_to_Image : Check if the user uploaded an image to transfer
50 | - is_going_to_About : Check if the user hit the **About** button on the welcome screen
51 | - is_going_to_Examples : Check if the user hit the **Examples** button on the welcome screen
52 |
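The transition table above can be sketched as a plain dictionary-driven FSM (a simplified, dependency-free stand-in for the `transitions.GraphMachine` used in `fsm.py`; the payload strings mirror the conditions listed above):

```python
# (source_state, payload) -> destination state
TRANSITIONS = {
    ('Init', 'Hi'): 'Welcome',
    ('Welcome', 'Start!'): 'Transfer',
    ('Welcome', 'About'): 'About',
    ('Welcome', 'Examples'): 'Examples',
}

def advance(state, payload):
    """Return the next state, or stay put when no condition matches."""
    return TRANSITIONS.get((state, payload), state)

state = 'Init'
state = advance(state, 'Hello')   # not the magic word -> still Init
state = advance(state, 'Hi')      # -> Welcome
state = advance(state, 'Start!')  # -> Transfer
print(state)  # Transfer
```

In the real bot, each condition also inspects the Messenger event structure (text messages, postbacks, attachments) rather than a bare string.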
53 | ## Reference
54 | - Great Explanation of [Artistic Style Transfer with Deep Neural Networks](https://shafeentejani.github.io/2016-12-27/style-transfer/)
55 | - Original Paper : [Perceptual Losses for Real-Time Style Transfer and Super-Resolution](https://arxiv.org/pdf/1603.08155v1.pdf)
56 | - Code adapted from [lengstrom](https://github.com/lengstrom/fast-style-transfer)
57 |
58 | ### Citation
59 | ```
60 | @misc{engstrom2016faststyletransfer,
61 | author = {Logan Engstrom},
62 | title = {Fast Style Transfer},
63 | year = {2016},
64 | howpublished = {\url{https://github.com/lengstrom/fast-style-transfer/}},
65 | note = {commit xxxxxxx}
66 | }
67 | ```
68 |
--------------------------------------------------------------------------------
/app.py:
--------------------------------------------------------------------------------
1 | from evaluate import transfer
2 | import my_utils
3 | from fsm import MyMachine
4 | from bottle import route, run, request, abort, static_file
5 |
6 | VERIFY_TOKEN = "1234567890"
7 |
8 |
9 |
10 | machine = MyMachine()
11 |
12 | @route("/webhook", method="GET")
13 | def setup_webhook():
14 | mode = request.GET.get("hub.mode")
15 | token = request.GET.get("hub.verify_token")
16 | challenge = request.GET.get("hub.challenge")
17 |
18 | if mode == "subscribe" and token == VERIFY_TOKEN:
19 | print("WEBHOOK_VERIFIED")
20 | return challenge
21 |
22 | else:
23 | abort(403)
24 |
25 | @route("/webhook", method="POST")
26 | def webhook_handler():
27 | body = request.json
28 | if body['object'] == 'page':
29 | event = body['entry'][0]['messaging'][0]
30 | machine.advance(event)
31 | return 'OK'
32 |
33 |
34 | if __name__ == "__main__":
35 | run(host="localhost", port=5000, debug=True, reloader=True)
36 |
--------------------------------------------------------------------------------
/app.wsgi:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 |
4 | sys.path.append('/var/www/NeuralTransferBot')
5 |
6 | import bottle
7 | import app
8 |
9 | application = bottle.default_app()
10 |
--------------------------------------------------------------------------------
/checkpoints/la_muse.ckpt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/checkpoints/la_muse.ckpt
--------------------------------------------------------------------------------
/checkpoints/rain_princess.ckpt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/checkpoints/rain_princess.ckpt
--------------------------------------------------------------------------------
/checkpoints/scream.ckpt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/checkpoints/scream.ckpt
--------------------------------------------------------------------------------
/checkpoints/udnie.ckpt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/checkpoints/udnie.ckpt
--------------------------------------------------------------------------------
/checkpoints/wave.ckpt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/checkpoints/wave.ckpt
--------------------------------------------------------------------------------
/content/NCKU.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/content/NCKU.jpg
--------------------------------------------------------------------------------
/content/chicago.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/content/chicago.jpg
--------------------------------------------------------------------------------
/content/input.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/content/input.jpg
--------------------------------------------------------------------------------
/content/stata.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/content/stata.jpg
--------------------------------------------------------------------------------
/evaluate.py:
--------------------------------------------------------------------------------
1 | from __future__ import print_function
2 | import sys
3 | sys.path.insert(0, '/var/www/NeuralTransferBot/src')
4 | import transform, numpy as np, vgg, pdb, os
5 | import scipy.misc
6 | import tensorflow as tf
7 | from utils import save_img, get_img, exists, list_files
8 | from argparse import ArgumentParser
9 | from collections import defaultdict
10 | import time
11 | import json
12 | import subprocess
13 | import numpy
14 |
15 | BATCH_SIZE = 4
16 |
17 | # get img_shape
18 | def ffwd(data_in, paths_out, checkpoint_dir, device_t='/gpu:0', batch_size=4):
19 | assert len(paths_out) > 0
20 | is_paths = type(data_in[0]) == str
21 | if is_paths:
22 | assert len(data_in) == len(paths_out)
23 | img_shape = get_img(data_in[0]).shape
24 | else:
25 |         assert data_in.shape[0] == len(paths_out)
26 |         img_shape = data_in[0].shape
27 |
28 | g = tf.Graph()
29 | batch_size = min(len(paths_out), batch_size)
30 | curr_num = 0
31 | soft_config = tf.ConfigProto(allow_soft_placement=True)
32 | soft_config.gpu_options.allow_growth = True
33 | with g.as_default(), g.device(device_t), \
34 | tf.Session(config=soft_config) as sess:
35 | batch_shape = (batch_size,) + img_shape
36 | img_placeholder = tf.placeholder(tf.float32, shape=batch_shape,
37 | name='img_placeholder')
38 |
39 | preds = transform.net(img_placeholder)
40 | saver = tf.train.Saver()
41 | if os.path.isdir(checkpoint_dir):
42 | ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
43 | if ckpt and ckpt.model_checkpoint_path:
44 | saver.restore(sess, ckpt.model_checkpoint_path)
45 | else:
46 | raise Exception("No checkpoint found...")
47 | else:
48 | saver.restore(sess, checkpoint_dir)
49 |
50 | num_iters = int(len(paths_out)/batch_size)
51 | for i in range(num_iters):
52 | pos = i * batch_size
53 | curr_batch_out = paths_out[pos:pos+batch_size]
54 | if is_paths:
55 | curr_batch_in = data_in[pos:pos+batch_size]
56 | X = np.zeros(batch_shape, dtype=np.float32)
57 | for j, path_in in enumerate(curr_batch_in):
58 | img = get_img(path_in)
59 | assert img.shape == img_shape, \
60 | 'Images have different dimensions. ' + \
61 | 'Resize images or use --allow-different-dimensions.'
62 | X[j] = img
63 | else:
64 | X = data_in[pos:pos+batch_size]
65 |
66 | _preds = sess.run(preds, feed_dict={img_placeholder:X})
67 | for j, path_out in enumerate(curr_batch_out):
68 | save_img(path_out, _preds[j])
69 |
70 | remaining_in = data_in[num_iters*batch_size:]
71 | remaining_out = paths_out[num_iters*batch_size:]
72 | if len(remaining_in) > 0:
73 | ffwd(remaining_in, remaining_out, checkpoint_dir,
74 | device_t=device_t, batch_size=1)
75 | def transfer(in_path, out_path, checkpoint_dir, device='/cpu:0'):
76 | paths_in, paths_out = [in_path], [out_path]
77 | ffwd(paths_in, paths_out, checkpoint_dir, batch_size=1, device_t=device)
78 |
--------------------------------------------------------------------------------
/fsm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/fsm.png
--------------------------------------------------------------------------------
/fsm.py:
--------------------------------------------------------------------------------
1 | from transitions.extensions import GraphMachine
2 | from my_utils import *
3 | from evaluate import transfer
4 | import requests
5 | class MyMachine(GraphMachine):
6 | def __init__(self):
7 | self.machine = GraphMachine(
8 | model = self,
9 | states = [
10 | 'Init',
11 | 'Welcome',
12 | 'Transfer',
13 | 'Style',
14 | 'Image',
15 | 'Finish',
16 | 'About',
17 | 'Example'
18 | ],
19 | transitions = [
20 | {
21 | 'trigger' : 'advance',
22 | 'source' : 'Init',
23 | 'dest' : 'Welcome',
24 | 'conditions' : 'is_going_to_Welcome'
25 | },
26 | {
27 | 'trigger' : 'advance',
28 | 'source' : 'Welcome',
29 | 'dest' : 'Transfer',
30 | 'conditions' : 'is_going_to_Transfer'
31 | },
32 | {
33 | 'trigger' : 'advance',
34 | 'source' : 'Transfer',
35 | 'dest' : 'Style',
36 | 'conditions' : 'is_going_to_Style'
37 | },
38 | {
39 | 'trigger' : 'advance',
40 | 'source' : 'Style',
41 | 'dest' : 'Image',
42 | 'conditions' : 'is_going_to_Image'
43 | },
44 | {
45 | 'trigger' : 'advance',
46 | 'source' : 'Image',
47 | 'dest' : 'Finish'
48 | },
49 | {
50 | 'trigger' : 'advance',
51 | 'source' : 'Welcome',
52 | 'dest' : 'About',
53 | 'conditions' : 'is_going_to_About'
54 | },
55 | {
56 | 'trigger' : 'advance',
57 | 'source' : 'Welcome',
58 | 'dest' : 'Example',
59 | 'conditions' : 'is_going_to_Example'
60 | },
61 | {
62 | 'trigger' : 'go_back',
63 | 'source' : [
64 | 'Finish',
65 | 'About',
66 | 'Example'
67 | ],
68 | 'dest' : 'Init'
69 | }
70 | ],
71 | initial = 'Init',
72 | auto_transitions = False,
73 | show_conditions = True,
74 | )
75 | self.style_path = ''
76 | self.selected = -1
77 | self.styles = ['la_muse', 'rain_princess', 'scream', 'udnie', 'wave']
78 | self.input_path = '/var/www/NeuralTransferBot/content/input.jpg'
79 | self.output_path = '/var/www/NeuralTransferBot/output/out.jpg'
80 |
81 | def is_going_to_Welcome(self, event):
82 | if 'message' in event:
83 | if 'text' in event['message']:
84 | if event['message']['text'] == 'Hi':
85 | return True
86 | return False
87 |
88 | def is_going_to_Transfer(self, event):
89 | if event.get('postback'):
90 | if event['postback']['payload'] == 'Start!':
91 | return True
92 |         return False
93 |
94 | def is_going_to_Style(self, event):
95 | sender_id = event['sender']['id']
96 | if 'message' in event:
97 | if 'text' in event['message']:
98 | choice = event['message']['text']
99 | if choice == '1':
100 | self.selected = 1
101 | self.style_path = '/var/www/NeuralTransferBot/checkpoints/la_muse.ckpt'
102 | elif choice == '2':
103 | self.selected = 2
104 | self.style_path = '/var/www/NeuralTransferBot/checkpoints/rain_princess.ckpt'
105 | elif choice == '3':
106 | self.selected = 3
107 | self.style_path = '/var/www/NeuralTransferBot/checkpoints/scream.ckpt'
108 | elif choice == '4':
109 | self.selected = 4
110 | self.style_path = '/var/www/NeuralTransferBot/checkpoints/udnie.ckpt'
111 | elif choice == '5':
112 | self.selected = 5
113 | self.style_path = '/var/www/NeuralTransferBot/checkpoints/wave.ckpt'
114 | else:
115 | send_text_message(sender_id, "Please choose a style from 1~5!")
116 | return False
117 | text = "The style you chose is " + self.styles[self.selected - 1]
118 | send_text_message(sender_id, text)
119 | return True
120 |
121 | def is_going_to_Image(self, event):
122 | sender_id = event['sender']['id']
123 | if 'message' in event:
124 | if 'attachments' in event['message']:
125 | if event['message']['attachments'][0]['type'] == 'image':
126 | img_url = event['message']['attachments'][0]['payload']['url']
127 | save_img(img_url)
128 | #print(img_url)
129 | return True
130 | send_text_message(sender_id, "Please upload an image!")
131 | return False
132 |
133 | def is_going_to_About(self, event):
134 | if event.get('postback'):
135 | if event['postback']['payload'] == 'About':
136 | return True
137 | return False
138 |
139 | def is_going_to_Example(self, event):
140 | if event.get('postback'):
141 | if event['postback']['payload'] == 'Examples':
142 | return True
143 | return False
144 |
145 | def on_enter_Welcome(self, event):
146 | print(self.state)
147 | sender_id = event['sender']['id']
148 | send_welcome_template(sender_id)
149 |
150 | def on_enter_Transfer(self, event):
151 | print(self.state)
152 | sender_id = event['sender']['id']
153 | text = "Step1:Choose a style!\n(Enter the number of the style)"
154 | send_text_message(sender_id, text)
155 | text = "Style 1 (la_muse)"
156 | send_text_message(sender_id, text)
157 | send_attachment(sender_id, get_image_id(1))
158 | text = "Style 2 (rain_princess)"
159 | send_text_message(sender_id, text)
160 | send_attachment(sender_id, get_image_id(2))
161 | text = "Style 3 (the_scream)"
162 | send_text_message(sender_id, text)
163 | send_attachment(sender_id, get_image_id(3))
164 | text = "Style 4 (udnie)"
165 | send_text_message(sender_id, text)
166 | send_attachment(sender_id, get_image_id(4))
167 | text = "Style 5 (wave)"
168 | send_text_message(sender_id, text)
169 | send_attachment(sender_id, get_image_id(5))
170 |
171 |
172 | def on_enter_About(self, event):
173 | print(self.state)
174 | sender_id = event['sender']['id']
175 | send_website_template(sender_id)
176 | self.go_back(event)
177 |
178 | def on_enter_Example(self, event):
179 | print(self.state)
180 | sender_id = event['sender']['id']
181 | send_text_message(sender_id, "Given a style")
182 | send_attachment(sender_id, get_image_id(4))
183 | send_text_message(sender_id, "Given an image")
184 | send_attachment(sender_id, get_image_id(6))
185 | send_text_message(sender_id,"I can repaint the image!")
186 | send_attachment(sender_id, get_image_id(7))
187 | self.go_back(event)
188 |
189 | def on_enter_Style(self, event):
190 | print(self.state)
191 | sender_id = event['sender']['id']
192 | send_text_message(sender_id, "Please upload an image!")
193 |
194 | def on_enter_Image(self, event):
195 | print(self.state)
196 | sender_id = event['sender']['id']
197 | send_text_message(sender_id, "Transferring! Please wait ...")
198 | transfer(self.input_path, self.output_path, self.style_path)
199 | send_img(sender_id, "/var/www/NeuralTransferBot/output/out.jpg")
200 | self.advance(event)
201 |
202 | def on_enter_Finish(self, event):
203 | print(self.state)
204 | sender_id = event['sender']['id']
205 | send_text_message(sender_id, "Thank you for using!")
206 | self.go_back(event)
207 |
208 | def on_enter_Init(self, event):
209 | print(self.state)
210 |
--------------------------------------------------------------------------------
/my_utils.py:
--------------------------------------------------------------------------------
1 | import urllib.request
2 | import subprocess
3 | import requests
4 | from requests_toolbelt.multipart.encoder import MultipartEncoder
5 | import os
6 |
7 | import json
8 |
9 | GRAPH_URL = "https://graph.facebook.com/v2.6"
10 | ACCESS_TOKEN = os.environ.get("ACCESS_TOKEN")
11 |
12 | def save_img(url):
13 | # Saves image from user url to ./content/input.jpg for further usage
14 | urllib.request.urlretrieve(url, "/var/www/NeuralTransferBot/content/input.jpg")
15 | def send_website_template(id):
16 | url = "{0}/me/messages?access_token={1}".format(GRAPH_URL, ACCESS_TOKEN)
17 | payload = {
18 | "recipient": {
19 | "id": id
20 | },
21 | "message": {
22 | "attachment": {
23 | "type": "template",
24 | "payload":{
25 | "template_type":"generic",
26 | "elements":[
27 | {
28 | "title":"Project Github Page",
29 | "subtitle":"Check out this repo for more details",
30 | "default_action": {
31 | "type": "web_url",
32 | "url": "https://github.com/Jadezzz/NeuralTransferBot",
33 | "messenger_extensions": "TRUE",
34 | "webview_height_ratio": "FULL"
35 | }
36 | }
37 | ]
38 | }
39 | }
40 | }
41 | }
42 |
43 | response = requests.post(url, json=payload)
44 |
45 | if response.status_code != 200:
46 | print("Unable to send message: " + response.text)
47 | return response
48 |
49 |
50 | def send_welcome_template(id):
51 | url = "{0}/me/messages?access_token={1}".format(GRAPH_URL, ACCESS_TOKEN)
52 | payload = {
53 | "recipient": {
54 | "id": id
55 | },
56 | "message": {
57 | "attachment": {
58 | "type": "template",
59 | "payload": {
60 | "template_type": "generic",
61 | "elements": [
62 | {
63 | "title": "Welcome!",
64 | "subtitle": "I can do Neural Style Transfer!",
65 | "image_url": "https://i.imgur.com/8vAv1wT.jpg",
66 | "buttons": [
67 | {
68 | "type": "postback",
69 | "title": "Start!",
70 | "payload": "Start!"
71 | },
72 | {
73 | "type": "postback",
74 | "title": "Examples",
75 | "payload": "Examples"
76 | },
77 | {
78 | "type": "postback",
79 | "title": "About",
80 | "payload": "About"
81 | }
82 | ]
83 | }
84 | ]
85 | }
86 | }
87 | }
88 | }
89 | response = requests.post(url, json=payload)
90 |
91 | if response.status_code != 200:
92 | print("Unable to send message: " + response.text)
93 | return response
94 |
95 | def send_text_message(id, text):
96 | url = "{0}/me/messages?access_token={1}".format(GRAPH_URL, ACCESS_TOKEN)
97 | payload = {
98 | "recipient": {"id": id},
99 | "message": {"text": text}
100 | }
101 | response = requests.post(url, json=payload)
102 |
103 | if response.status_code != 200:
104 | print("Unable to send message: " + response.text)
105 | return response
106 |
107 | def send_img(id, path):
108 | url = "{0}/me/messages?access_token={1}".format(GRAPH_URL, ACCESS_TOKEN)
109 | r = open(path, 'rb')
110 | data = {
111 | 'recipient':json.dumps({
112 | 'id':id
113 | }),
114 | 'message':json.dumps({
115 | 'attachment':{
116 | 'type':'image',
117 | 'payload':{}
118 | }
119 | }),
120 | 'filedata': (os.path.basename(path), r , 'image/jpeg')
121 | }
122 |
123 | multipart_data = MultipartEncoder(data)
124 | multipart_header = {
125 | 'Content-Type': multipart_data.content_type
126 | }
127 | response = requests.post(url, headers=multipart_header, data=multipart_data)
128 |
129 | r.close()
130 |
131 | if response.status_code != 200:
132 | print("Unable to send message: " + response.text)
133 | return response
134 |
135 | def send_attachment(id, attachment_id):
136 | url = "{0}/me/messages?access_token={1}".format(GRAPH_URL, ACCESS_TOKEN)
137 | payload = {
138 | "recipient": {
139 | "id": id
140 | },
141 | "message": {
142 | "attachment": {
143 | "type": "image",
144 | "payload": {
145 | "attachment_id": attachment_id
146 | }
147 | }
148 | }
149 | }
150 | response = requests.post(url, json=payload)
151 |
152 | if response.status_code != 200:
153 | print("Unable to send message: " + response.text)
154 | return response
155 |
156 | def get_image_id(num):
157 | if num == 1:
158 | return "2149646058697440"
159 | elif num == 2:
160 | return "2305373213042257"
161 | elif num == 3:
162 | return "516745915473247"
163 | elif num == 4:
164 | return "2026161817475017"
165 | elif num == 5:
166 | return "429747744224866"
167 | elif num == 6: # ncku original image
168 | return "2270010073242931"
169 |     elif num == 7: # ncku transferred image
170 | return "434107690459930"
171 |
--------------------------------------------------------------------------------
/output/NCKU.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/output/NCKU.jpg
--------------------------------------------------------------------------------
/output/out.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/output/out.jpg
--------------------------------------------------------------------------------
/output/stata.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/output/stata.jpg
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | numpy==1.11.0
2 | requests_toolbelt==0.8.0
3 | transitions==0.6.9
4 | requests==2.9.1
5 | tensorflow_gpu==1.12.0
6 | scipy==1.1.0
7 | bottle==0.12.16
8 | tensorflow==1.12.0
9 |
--------------------------------------------------------------------------------
/src/optimize.py:
--------------------------------------------------------------------------------
1 | from __future__ import print_function
2 | import functools
3 | import vgg, pdb, time
4 | import tensorflow as tf, numpy as np, os
5 | import transform
6 | from utils import get_img
7 |
8 | STYLE_LAYERS = ('relu1_1', 'relu2_1', 'relu3_1', 'relu4_1', 'relu5_1')
9 | CONTENT_LAYER = 'relu4_2'
10 | DEVICES = 'CUDA_VISIBLE_DEVICES'
11 |
12 | # np arr, np arr
13 | def optimize(content_targets, style_target, content_weight, style_weight,
14 | tv_weight, vgg_path, epochs=2, print_iterations=1000,
15 | batch_size=4, save_path='saver/fns.ckpt', slow=False,
16 | learning_rate=1e-3, debug=False):
17 | if slow:
18 | batch_size = 1
19 | mod = len(content_targets) % batch_size
20 | if mod > 0:
21 | print("Train set has been trimmed slightly..")
22 | content_targets = content_targets[:-mod]
23 |
24 | style_features = {}
25 |
26 | batch_shape = (batch_size,256,256,3)
27 | style_shape = (1,) + style_target.shape
28 | print(style_shape)
29 |
30 | # precompute style features
31 | with tf.Graph().as_default(), tf.device('/cpu:0'), tf.Session() as sess:
32 | style_image = tf.placeholder(tf.float32, shape=style_shape, name='style_image')
33 | style_image_pre = vgg.preprocess(style_image)
34 | net = vgg.net(vgg_path, style_image_pre)
35 | style_pre = np.array([style_target])
36 | for layer in STYLE_LAYERS:
37 | features = net[layer].eval(feed_dict={style_image:style_pre})
38 | features = np.reshape(features, (-1, features.shape[3]))
39 | gram = np.matmul(features.T, features) / features.size
40 | style_features[layer] = gram
41 |
42 | with tf.Graph().as_default(), tf.Session() as sess:
43 | X_content = tf.placeholder(tf.float32, shape=batch_shape, name="X_content")
44 | X_pre = vgg.preprocess(X_content)
45 |
46 | # precompute content features
47 | content_features = {}
48 | content_net = vgg.net(vgg_path, X_pre)
49 | content_features[CONTENT_LAYER] = content_net[CONTENT_LAYER]
50 |
51 | if slow:
52 | preds = tf.Variable(
53 | tf.random_normal(X_content.get_shape()) * 0.256
54 | )
55 | preds_pre = preds
56 | else:
57 | preds = transform.net(X_content/255.0)
58 | preds_pre = vgg.preprocess(preds)
59 |
60 | net = vgg.net(vgg_path, preds_pre)
61 |
62 | content_size = _tensor_size(content_features[CONTENT_LAYER])*batch_size
63 | assert _tensor_size(content_features[CONTENT_LAYER]) == _tensor_size(net[CONTENT_LAYER])
64 | content_loss = content_weight * (2 * tf.nn.l2_loss(
65 | net[CONTENT_LAYER] - content_features[CONTENT_LAYER]) / content_size
66 | )
67 |
68 | style_losses = []
69 | for style_layer in STYLE_LAYERS:
70 | layer = net[style_layer]
71 | bs, height, width, filters = map(lambda i:i.value,layer.get_shape())
72 | size = height * width * filters
73 | feats = tf.reshape(layer, (bs, height * width, filters))
74 | feats_T = tf.transpose(feats, perm=[0,2,1])
75 | grams = tf.matmul(feats_T, feats) / size
76 | style_gram = style_features[style_layer]
77 | style_losses.append(2 * tf.nn.l2_loss(grams - style_gram)/style_gram.size)
78 |
79 | style_loss = style_weight * functools.reduce(tf.add, style_losses) / batch_size
80 |
81 | # total variation denoising
82 | tv_y_size = _tensor_size(preds[:,1:,:,:])
83 | tv_x_size = _tensor_size(preds[:,:,1:,:])
84 | y_tv = tf.nn.l2_loss(preds[:,1:,:,:] - preds[:,:batch_shape[1]-1,:,:])
85 | x_tv = tf.nn.l2_loss(preds[:,:,1:,:] - preds[:,:,:batch_shape[2]-1,:])
86 | tv_loss = tv_weight*2*(x_tv/tv_x_size + y_tv/tv_y_size)/batch_size
87 |
88 | loss = content_loss + style_loss + tv_loss
89 |
90 | # overall loss
91 | train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss)
92 | sess.run(tf.global_variables_initializer())
93 | import random
94 | uid = random.randint(1, 100)
95 | print("UID: %s" % uid)
96 | for epoch in range(epochs):
97 | num_examples = len(content_targets)
98 | iterations = 0
99 | while iterations * batch_size < num_examples:
100 | start_time = time.time()
101 | curr = iterations * batch_size
102 | step = curr + batch_size
103 | X_batch = np.zeros(batch_shape, dtype=np.float32)
104 | for j, img_p in enumerate(content_targets[curr:step]):
105 | X_batch[j] = get_img(img_p, (256,256,3)).astype(np.float32)
106 |
107 | iterations += 1
108 | assert X_batch.shape[0] == batch_size
109 |
110 | feed_dict = {
111 | X_content:X_batch
112 | }
113 |
114 | train_step.run(feed_dict=feed_dict)
115 | end_time = time.time()
116 | delta_time = end_time - start_time
117 | if debug:
118 | print("UID: %s, batch time: %s" % (uid, delta_time))
119 | is_print_iter = int(iterations) % print_iterations == 0
120 | if slow:
121 | is_print_iter = epoch % print_iterations == 0
122 | is_last = epoch == epochs - 1 and iterations * batch_size >= num_examples
123 | should_print = is_print_iter or is_last
124 | if should_print:
125 | to_get = [style_loss, content_loss, tv_loss, loss, preds]
126 | test_feed_dict = {
127 | X_content:X_batch
128 | }
129 |
130 | tup = sess.run(to_get, feed_dict = test_feed_dict)
131 | _style_loss,_content_loss,_tv_loss,_loss,_preds = tup
132 | losses = (_style_loss, _content_loss, _tv_loss, _loss)
133 | if slow:
134 | _preds = vgg.unprocess(_preds)
135 | else:
136 | saver = tf.train.Saver()
137 | res = saver.save(sess, save_path)
138 | yield(_preds, losses, iterations, epoch)
139 |
140 | def _tensor_size(tensor):
141 | from operator import mul
142 | return functools.reduce(mul, (d.value for d in tensor.get_shape()[1:]), 1)
143 |
--------------------------------------------------------------------------------
/src/transform.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf, pdb
2 |
3 | WEIGHTS_INIT_STDEV = .1
4 |
5 | def net(image):
6 | conv1 = _conv_layer(image, 32, 9, 1)
7 | conv2 = _conv_layer(conv1, 64, 3, 2)
8 | conv3 = _conv_layer(conv2, 128, 3, 2)
9 | resid1 = _residual_block(conv3, 3)
10 | resid2 = _residual_block(resid1, 3)
11 | resid3 = _residual_block(resid2, 3)
12 | resid4 = _residual_block(resid3, 3)
13 | resid5 = _residual_block(resid4, 3)
14 | conv_t1 = _conv_tranpose_layer(resid5, 64, 3, 2)
15 | conv_t2 = _conv_tranpose_layer(conv_t1, 32, 3, 2)
16 | conv_t3 = _conv_layer(conv_t2, 3, 9, 1, relu=False)
17 | preds = tf.nn.tanh(conv_t3) * 150 + 255./2
18 | return preds
19 |
20 | def _conv_layer(net, num_filters, filter_size, strides, relu=True):
21 | weights_init = _conv_init_vars(net, num_filters, filter_size)
22 | strides_shape = [1, strides, strides, 1]
23 | net = tf.nn.conv2d(net, weights_init, strides_shape, padding='SAME')
24 | net = _instance_norm(net)
25 | if relu:
26 | net = tf.nn.relu(net)
27 |
28 | return net
29 |
30 | def _conv_tranpose_layer(net, num_filters, filter_size, strides):
31 | weights_init = _conv_init_vars(net, num_filters, filter_size, transpose=True)
32 |
33 | batch_size, rows, cols, in_channels = [i.value for i in net.get_shape()]
34 | new_rows, new_cols = int(rows * strides), int(cols * strides)
35 | # new_shape = #tf.pack([tf.shape(net)[0], new_rows, new_cols, num_filters])
36 |
37 | new_shape = [batch_size, new_rows, new_cols, num_filters]
38 | tf_shape = tf.stack(new_shape)
39 | strides_shape = [1,strides,strides,1]
40 |
41 | net = tf.nn.conv2d_transpose(net, weights_init, tf_shape, strides_shape, padding='SAME')
42 | net = _instance_norm(net)
43 | return tf.nn.relu(net)
44 |
45 | def _residual_block(net, filter_size=3):
46 | tmp = _conv_layer(net, 128, filter_size, 1)
47 | return net + _conv_layer(tmp, 128, filter_size, 1, relu=False)
48 |
49 | def _instance_norm(net, train=True):
50 | batch, rows, cols, channels = [i.value for i in net.get_shape()]
51 | var_shape = [channels]
52 | mu, sigma_sq = tf.nn.moments(net, [1,2], keep_dims=True)
53 | shift = tf.Variable(tf.zeros(var_shape))
54 | scale = tf.Variable(tf.ones(var_shape))
55 | epsilon = 1e-3
56 | normalized = (net-mu)/(sigma_sq + epsilon)**(.5)
57 | return scale * normalized + shift
58 |
59 | def _conv_init_vars(net, out_channels, filter_size, transpose=False):
60 | _, rows, cols, in_channels = [i.value for i in net.get_shape()]
61 | if not transpose:
62 | weights_shape = [filter_size, filter_size, in_channels, out_channels]
63 | else:
64 | weights_shape = [filter_size, filter_size, out_channels, in_channels]
65 |
66 | weights_init = tf.Variable(tf.truncated_normal(weights_shape, stddev=WEIGHTS_INIT_STDEV, seed=1), dtype=tf.float32)
67 | return weights_init
68 |
--------------------------------------------------------------------------------
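`_instance_norm` above normalizes each image and channel independently over the spatial axes, then applies a learned scale and shift. A minimal NumPy sketch of the same computation (the names here are illustrative, not part of the repo):

```python
import numpy as np

def instance_norm(x, scale, shift, eps=1e-3):
    # Per-image, per-channel normalization over the spatial axes (H, W)
    # of an NHWC batch, followed by a scale and shift -- the same
    # computation as _instance_norm, minus the trainable variables.
    mu = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    normalized = (x - mu) / np.sqrt(var + eps)
    return scale * normalized + shift

np.random.seed(0)
x = np.random.rand(2, 4, 4, 3).astype(np.float32)
y = instance_norm(x, scale=np.ones(3), shift=np.zeros(3))
# every (image, channel) slice of y now has ~zero mean and ~unit variance
```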
/src/utils.py:
--------------------------------------------------------------------------------
1 | import scipy.misc, numpy as np, os, sys
2 |
3 | def save_img(out_path, img):
4 | img = np.clip(img, 0, 255).astype(np.uint8)
5 | scipy.misc.imsave(out_path, img)
6 |
7 | def scale_img(style_path, style_scale):
8 | scale = float(style_scale)
9 | o0, o1, o2 = scipy.misc.imread(style_path, mode='RGB').shape
10 | # compute the target shape at the requested scale factor
11 | new_shape = (int(o0 * scale), int(o1 * scale), o2)
12 | style_target = get_img(style_path, img_size=new_shape)
13 | return style_target
14 |
15 | def get_img(src, img_size=False):
16 | img = scipy.misc.imread(src, mode='RGB') # misc.imresize(, (256, 256, 3))
17 | if not (len(img.shape) == 3 and img.shape[2] == 3):
18 | img = np.dstack((img,img,img))
19 | if img_size is not False:
20 | img = scipy.misc.imresize(img, img_size)
21 | return img
22 |
23 | def exists(p, msg):
24 | assert os.path.exists(p), msg
25 |
26 | def list_files(in_path):
27 | files = []
28 | for (dirpath, dirnames, filenames) in os.walk(in_path):
29 | files.extend(filenames)
30 | break
31 |
32 | return files
33 |
34 |
--------------------------------------------------------------------------------
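Note that `scipy.misc.imread`/`imresize`/`imsave` were removed in SciPy 1.2, so this module needs an older SciPy pin (or a port to `imageio`/Pillow). Backend aside, the array handling in `save_img` and `get_img` can be sketched in plain NumPy; the clipping in `save_img` matters because the transform net's `tanh(...) * 150 + 255./2` head can emit values outside [0, 255]:

```python
import numpy as np

def to_uint8(img):
    # save_img's clipping step: clamp to [0, 255] before casting so
    # out-of-range float pixels don't wrap around under uint8.
    return np.clip(img, 0, 255).astype(np.uint8)

def ensure_rgb(img):
    # get_img's grayscale fallback: replicate one channel to three.
    if not (img.ndim == 3 and img.shape[2] == 3):
        img = np.dstack((img, img, img))
    return img

gray = np.full((4, 4), 300.0)       # grayscale image with overflowing values
rgb = to_uint8(ensure_rgb(gray))    # shape (4, 4, 3), every pixel clamped to 255
```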
/src/vgg.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) 2015-2016 Anish Athalye. Released under GPLv3.
2 |
3 | import tensorflow as tf
4 | import numpy as np
5 | import scipy.io
6 | import pdb
7 |
8 | MEAN_PIXEL = np.array([123.68, 116.779, 103.939])
9 |
10 | def net(data_path, input_image):
11 | layers = (
12 | 'conv1_1', 'relu1_1', 'conv1_2', 'relu1_2', 'pool1',
13 |
14 | 'conv2_1', 'relu2_1', 'conv2_2', 'relu2_2', 'pool2',
15 |
16 | 'conv3_1', 'relu3_1', 'conv3_2', 'relu3_2', 'conv3_3',
17 | 'relu3_3', 'conv3_4', 'relu3_4', 'pool3',
18 |
19 | 'conv4_1', 'relu4_1', 'conv4_2', 'relu4_2', 'conv4_3',
20 | 'relu4_3', 'conv4_4', 'relu4_4', 'pool4',
21 |
22 | 'conv5_1', 'relu5_1', 'conv5_2', 'relu5_2', 'conv5_3',
23 | 'relu5_3', 'conv5_4', 'relu5_4'
24 | )
25 |
26 | data = scipy.io.loadmat(data_path)
27 | mean = data['normalization'][0][0][0]
28 | mean_pixel = np.mean(mean, axis=(0, 1))
29 | weights = data['layers'][0]
30 |
31 | net = {}
32 | current = input_image
33 | for i, name in enumerate(layers):
34 | kind = name[:4]
35 | if kind == 'conv':
36 | kernels, bias = weights[i][0][0][0][0]
37 | # matconvnet: weights are [width, height, in_channels, out_channels]
38 | # tensorflow: weights are [height, width, in_channels, out_channels]
39 | kernels = np.transpose(kernels, (1, 0, 2, 3))
40 | bias = bias.reshape(-1)
41 | current = _conv_layer(current, kernels, bias)
42 | elif kind == 'relu':
43 | current = tf.nn.relu(current)
44 | elif kind == 'pool':
45 | current = _pool_layer(current)
46 | net[name] = current
47 |
48 | assert len(net) == len(layers)
49 | return net
50 |
51 |
52 | def _conv_layer(input, weights, bias):
53 | conv = tf.nn.conv2d(input, tf.constant(weights), strides=(1, 1, 1, 1),
54 | padding='SAME')
55 | return tf.nn.bias_add(conv, bias)
56 |
57 |
58 | def _pool_layer(input):
59 | return tf.nn.max_pool(input, ksize=(1, 2, 2, 1), strides=(1, 2, 2, 1),
60 | padding='SAME')
61 |
62 |
63 | def preprocess(image):
64 | return image - MEAN_PIXEL
65 |
66 |
67 | def unprocess(image):
68 | return image + MEAN_PIXEL
69 |
--------------------------------------------------------------------------------
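`preprocess` and `unprocess` are exact inverses: they subtract and re-add the module's `MEAN_PIXEL`, broadcasting over the trailing RGB axis. A quick NumPy round-trip check (a sketch using the same constant):

```python
import numpy as np

MEAN_PIXEL = np.array([123.68, 116.779, 103.939])  # same constant as vgg.py

def preprocess(image):
    return image - MEAN_PIXEL   # broadcasts over the last (channel) axis

def unprocess(image):
    return image + MEAN_PIXEL

img = np.random.rand(8, 8, 3) * 255.0
roundtrip = unprocess(preprocess(img))   # recovers img up to float error
```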
/style/la_muse.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/style/la_muse.jpg
--------------------------------------------------------------------------------
/style/rain_princess.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/style/rain_princess.jpg
--------------------------------------------------------------------------------
/style/the_scream.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/style/the_scream.jpg
--------------------------------------------------------------------------------
/style/udnie.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/style/udnie.jpg
--------------------------------------------------------------------------------
/style/wave.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/style/wave.jpg
--------------------------------------------------------------------------------
/thumbs/Neural_Style_Transfer_System.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/thumbs/Neural_Style_Transfer_System.JPG
--------------------------------------------------------------------------------
/thumbs/banner.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/thumbs/banner.png
--------------------------------------------------------------------------------
/thumbs/fast_neural_style.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/thumbs/fast_neural_style.png
--------------------------------------------------------------------------------
/thumbs/icon.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/thumbs/icon.png
--------------------------------------------------------------------------------
/thumbs/image_generation.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/thumbs/image_generation.gif
--------------------------------------------------------------------------------
/thumbs/result.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/thumbs/result.jpg
--------------------------------------------------------------------------------
/thumbs/system.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Jadezzz/NeuralTransferBot/52952b2e7915cff6f07234b87b692b13b45b4dfd/thumbs/system.JPG
--------------------------------------------------------------------------------