├── example.jpg
├── transformation.gif
├── README.md
├── Face_Depixelizer_Eng.ipynb
└── Face_Depixelizer_Rus.ipynb
/example.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tg-bomze/Face-Depixelizer/HEAD/example.jpg
--------------------------------------------------------------------------------
/transformation.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tg-bomze/Face-Depixelizer/HEAD/transformation.gif
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ## NOTE
2 | We have noticed a lot of concern that PULSE will be used to identify individuals whose faces have been blurred out. We want to emphasize that this is impossible - **PULSE generates imaginary faces of people who do not exist, which should not be confused with real people.** It will **not** help identify or reconstruct the original image.
3 |
4 | We also want to address concerns of bias in PULSE. **We have now included a new section in the [paper](https://drive.google.com/file/d/1fV7FsmunjDuRrsn4KYf2Efwp0FNBtcR4/view) and an accompanying model card directly addressing this bias.**
5 |
6 | If you want to learn more about the topic, you can read this [IEEE Tech Talk about PULSE](https://spectrum.ieee.org/tech-talk/computing/software/making-blurry-faces-photorealistic-goes-only-so-far).
7 |
8 | # Face-Depixelizer
9 | Face Depixelizer is based on the "PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models" repository.
10 |
11 | ![transformation](transformation.gif)
12 |
13 | Given a low-resolution input image, Face Depixelizer searches the outputs of a generative model (here, [StyleGAN](https://github.com/NVlabs/stylegan)) for high-resolution images that are perceptually realistic and downscale correctly.
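The search described above can be sketched in a few lines of Python. This is a toy illustration, not PULSE itself: `generate` and `downscale` below are hypothetical stand-ins (a random linear map and 2x2 block averaging) so the sketch runs without StyleGAN, and simple hill climbing replaces PULSE's gradient descent on a spherical latent manifold; only the structure of the search - find a latent whose generated image downscales to the input - is faithful.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: in PULSE the generator is StyleGAN and the downscaler
# is bicubic; here they are a random linear map and block averaging so
# the sketch is self-contained.
W = rng.normal(size=(64, 16))

def generate(z):
    """Map a 16-dim latent vector to an 8x8 'image'."""
    return (W @ z).reshape(8, 8)

def downscale(img):
    """Downscale 8x8 -> 4x4 by averaging 2x2 blocks."""
    return img.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# Low-resolution target: the downscaled output of a hidden latent.
lr_target = downscale(generate(rng.normal(size=16)))

def loss(z):
    """Downscaling-consistency loss: how far the generated image,
    once downscaled, is from the low-resolution input."""
    return np.sum((downscale(generate(z)) - lr_target) ** 2)

# Hill-climb the latent: keep a random perturbation only if it brings
# the downscaled output closer to the target.
z0 = rng.normal(size=16)
z = z0
for _ in range(2000):
    candidate = z + 0.5 * rng.normal(size=16)
    if loss(candidate) < loss(z):
        z = candidate

print(f"loss: {loss(z0):.1f} -> {loss(z):.1f}")
```

Many different latents satisfy the downscaling constraint, which is exactly why the output is a plausible invention rather than a reconstruction of the original face.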
14 |
15 | **Check how it works on Google Colab:**
16 | - Russian Language [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tg-bomze/Face-Depixelizer/blob/master/Face_Depixelizer_Rus.ipynb)
17 | - English Language [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tg-bomze/Face-Depixelizer/blob/master/Face_Depixelizer_Eng.ipynb)
18 |
19 | **Based on:** [PULSE](https://github.com/adamian98/pulse)
20 |
21 | **Article**: [PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models](https://arxiv.org/abs/2003.03808)
22 |
23 | **The model weights are currently stored on Google Drive, which enforces a daily download cap, so you may receive an error message saying "*Google Drive Quota Exceeded*" or "*No such file or directory: '/content/pulse/runs/face.png'*". If you run into this error, please try again later in the day or come back tomorrow.**
24 |
25 | Thanks to [AlrasheedA](https://github.com/AlrasheedA), [kuanhulio](https://github.com/kuanhulio), and [DevMentor](https://t.me/DevMentor) for helping fix errors.
26 |
27 | ## Star History
28 |
29 | [![Star History Chart](https://api.star-history.com/svg?repos=tg-bomze/Face-Depixelizer&type=Date)](https://star-history.com/#tg-bomze/Face-Depixelizer&Date)
30 |
--------------------------------------------------------------------------------
/Face_Depixelizer_Eng.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "Face Depixelizer Eng",
7 | "provenance": [],
8 | "private_outputs": true,
9 | "collapsed_sections": [],
10 | "include_colab_link": true
11 | },
12 | "kernelspec": {
13 | "name": "python3",
14 | "display_name": "Python 3"
15 | },
16 | "accelerator": "GPU"
17 | },
18 | "cells": [
19 | {
20 | "cell_type": "markdown",
21 | "metadata": {
22 | "id": "view-in-github",
23 | "colab_type": "text"
24 | },
25 | "source": [
26 | "<a href=\"https://colab.research.google.com/github/tg-bomze/Face-Depixelizer/blob/master/Face_Depixelizer_Eng.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
27 | ]
28 | },
29 | {
30 | "cell_type": "markdown",
31 | "metadata": {
32 | "id": "siqzcgRRyr_n",
33 | "colab_type": "text"
34 | },
35 | "source": [
36 | "Face Depixelizer\n",
37 | "\n",
38 | "Given a low-resolution input image, Face Depixelizer searches the outputs of a generative model (here, StyleGAN) for high-resolution images that are perceptually realistic and downscale correctly.\n",
39 | "\n",
40 | "Based on:\n",
41 | "\n",
42 | "**GitHub repository**: [PULSE](https://github.com/adamian98/pulse)\n",
43 | "\n",
44 | "Article: [PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models](https://arxiv.org/abs/2003.03808)\n",
45 | "\n",
46 | "Creators: **[Alex Damian](https://github.com/adamian98), [Sachit Menon](mailto:sachit.menon@duke.edu).**\n",
47 | "\n",
48 | "Colab created by:\n",
49 | "\n",
50 | "GitHub: [@tg-bomze](https://github.com/tg-bomze),\n",
51 | "Telegram: [@bomze](https://t.me/bomze),\n",
52 | "Twitter: [@tg_bomze](https://twitter.com/tg_bomze).\n",
53 | "\n",
54 | "---\n",
55 | "##### The model weights are currently stored on Google Drive, which enforces a daily download cap, so you may receive an error message saying \"Google Drive Quota Exceeded\" or \"No such file or directory: '/content/pulse/runs/face.png'\". If you run into this error, please try again later in the day or come back tomorrow.\n",
56 | "\n",
57 | "```\n",
58 | "To get started, click on the button (where the red arrow indicates). After clicking, wait until the execution is complete.\n",
59 | "```\n",
60 | "\n"
61 | ]
62 | },
63 | {
64 | "cell_type": "code",
65 | "metadata": {
66 | "id": "fU0aGtD4Nl4W",
67 | "colab_type": "code",
68 | "cellView": "form",
69 | "colab": {}
70 | },
71 | "source": [
72 | "#@title ← Let's ROCK!\n",
73 | "#@markdown **After running this block, scroll down the page and upload a square pixelated photo that contains the entire human head. The neural network works best on images where the person faces the camera directly. Example:**\n",
74 | "\n",
75 | "#@markdown ![Example](https://raw.githubusercontent.com/tg-bomze/Face-Depixelizer/HEAD/example.jpg)\n",
76 | "\n",
77 | "#@markdown *You can crop the photo [HERE](https://www.iloveimg.com/crop-image)*\n",
78 | "\n",
79 | "#@markdown ---\n",
80 | "import torch\n",
81 | "import torchvision\n",
82 | "from pathlib import Path\n",
83 | "if not Path(\"PULSE.py\").exists():\n",
84 | " if Path(\"pulse\").exists():\n",
85 | " %cd /content/pulse\n",
86 | " else:\n",
87 | " !git clone https://github.com/adamian98/pulse\n",
88 | " %cd /content/pulse\n",
89 | " !mkdir input/\n",
90 | " toPIL = torchvision.transforms.ToPILImage()\n",
91 | " toTensor = torchvision.transforms.ToTensor()\n",
92 | " from bicubic import BicubicDownSample\n",
93 | " D = BicubicDownSample(factor=1)\n",
94 | "\n",
95 | "import os\n",
96 | "from io import BytesIO\n",
97 | "import matplotlib.pyplot as plt\n",
98 | "import matplotlib.image as mpimg\n",
99 | "from PIL import Image\n",
100 | "from PULSE import PULSE\n",
101 | "from google.colab import files\n",
102 | "from bicubic import BicubicDownSample\n",
103 | "from IPython import display\n",
104 | "from IPython.display import display\n",
105 | "from IPython.display import clear_output\n",
106 | "import numpy as np\n",
107 | "from drive import open_url\n",
108 | "from mpl_toolkits.axes_grid1 import ImageGrid\n",
109 | "%matplotlib inline\n",
110 | "\n",
111 | "#@markdown ## Basic settings:\n",
112 | "#@markdown ##### *If you have already uploaded a photo and just want to experiment with the settings, then uncheck the following checkbox*:\n",
113 | "upload_new_photo = True #@param {type:\"boolean\"}\n",
114 | "\n",
115 | "\n",
116 | "if upload_new_photo == True:\n",
117 | " !rm -rf /content/pulse/input/face.png\n",
118 | " clear_output()\n",
119 | " uploaded = files.upload()\n",
120 | " for fn in uploaded.keys():\n",
121 | " print('User uploaded file \"{name}\" with length {length} bytes'.format(\n",
122 | " name=fn, length=len(uploaded[fn])))\n",
123 | " os.rename(fn, fn.replace(\" \", \"\"))\n",
124 | " fn = fn.replace(\" \", \"\")\n",
125 | "\n",
126 | " if(len(uploaded.keys())!=1): raise Exception(\"You need to upload only one image.\")\n",
127 | "\n",
128 | " face = Image.open(fn)\n",
129 | " face = face.resize((1024, 1024), Image.LANCZOS)  # Image.ANTIALIAS was removed in Pillow 10\n",
130 | " face = face.convert('RGB')\n",
131 | " face_name = 'face.png'\n",
132 | " face.save(face_name)\n",
133 | " %cp $face_name /content/pulse/input/\n",
134 | "\n",
135 | " images = []\n",
136 | " imagesHR = []\n",
137 | " imagesHR.append(face)\n",
138 | " face = toPIL(D(toTensor(face).unsqueeze(0).cuda()).cpu().detach().clamp(0,1)[0])\n",
139 | " images.append(face)\n",
140 | "\n",
141 | "#@markdown ---\n",
142 | "#@markdown ## Advanced settings:\n",
143 | "#@markdown ##### *If you want to make a more accurate result, then modify the following* **DEFAULT** *variables*:\n",
144 | "\n",
145 | "input_dir = '/content/pulse/input/'\n",
146 | "output_dir = '/content/pulse/runs/'\n",
147 | "seed = 100 #@param {type:\"integer\"}\n",
148 | "epsilon = 0.02 #@param {type:\"slider\", min:0.01, max:0.03, step:0.01}\n",
149 | "noise_type = 'trainable' #@param ['zero', 'fixed', 'trainable']\n",
150 | "optimizer = 'adam' #@param ['sgd', 'adam','sgdm', 'adamax']\n",
151 | "learning_rate = 0.4 #@param {type:\"slider\", min:0, max:1, step:0.05}\n",
152 | "learning_rate_schedule = 'linear1cycledrop' #@param ['fixed', 'linear1cycle', 'linear1cycledrop']\n",
153 | "steps = 100 #@param {type:\"slider\", min:100, max:2000, step:50}\n",
154 | "clear_output()\n",
155 | "\n",
156 | "seed = abs(seed)\n",
157 | "print('Estimated Runtime: {}s.\\n'.format(round(0.23*steps)+6))\n",
158 | "!python run.py \\\n",
159 | " -input_dir $input_dir \\\n",
160 | " -output_dir $output_dir \\\n",
161 | " -seed $seed \\\n",
162 | " -noise_type $noise_type \\\n",
163 | " -opt_name $optimizer \\\n",
164 | " -learning_rate $learning_rate \\\n",
165 | " -steps $steps \\\n",
166 | " -eps $epsilon \\\n",
167 | " -lr_schedule $learning_rate_schedule\n",
168 | "\n",
169 | "#@markdown ---\n",
170 | "#@markdown *If there is an error during execution or the \"**Browse**\" button is not active, try running this block again*\n",
171 | "\n",
172 | "fig, (ax1, ax2) = plt.subplots(1, 2)\n",
173 | "ax1.imshow(mpimg.imread('/content/pulse/input/face.png'))\n",
174 | "ax1.set_title('Original')\n",
175 | "ax2.imshow(mpimg.imread('/content/pulse/runs/face.png'))\n",
176 | "ax2.set_title('Result')\n",
177 | "plt.show()"
178 | ],
179 | "execution_count": null,
180 | "outputs": []
181 | },
182 | {
183 | "cell_type": "code",
184 | "metadata": {
185 | "id": "DUfP6_7vTK3b",
186 | "colab_type": "code",
187 | "cellView": "form",
188 | "colab": {}
189 | },
190 | "source": [
191 | "#@title ← Download result\n",
192 | "try: files.download('/content/pulse/runs/face.png')\n",
193 | "except Exception: raise Exception(\"No result image\")"
194 | ],
195 | "execution_count": null,
196 | "outputs": []
197 | }
198 | ]
199 | }
--------------------------------------------------------------------------------
/Face_Depixelizer_Rus.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "Face Depixelizer Rus",
7 | "provenance": [],
8 | "private_outputs": true,
9 | "collapsed_sections": [],
10 | "include_colab_link": true
11 | },
12 | "kernelspec": {
13 | "name": "python3",
14 | "display_name": "Python 3"
15 | },
16 | "accelerator": "GPU"
17 | },
18 | "cells": [
19 | {
20 | "cell_type": "markdown",
21 | "metadata": {
22 | "id": "view-in-github",
23 | "colab_type": "text"
24 | },
25 | "source": [
26 | "<a href=\"https://colab.research.google.com/github/tg-bomze/Face-Depixelizer/blob/master/Face_Depixelizer_Rus.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
27 | ]
28 | },
29 | {
30 | "cell_type": "markdown",
31 | "metadata": {
32 | "id": "siqzcgRRyr_n",
33 | "colab_type": "text"
34 | },
35 | "source": [
36 | "Лицевой Депикселизатор\n",
37 | "\n",
38 | "Принимая входное изображение с низким разрешением, Лицевой Депикселизатор подбирает среди выходов генеративной модели (StyleGAN) такое изображение с высоким разрешением, которое при пикселизации даёт результат, максимально похожий на входное изображение.\n",
39 | "\n",
40 | "Базируется на:\n",
41 | "\n",
42 | "**GitHub репозиторий**: [PULSE](https://github.com/adamian98/pulse)\n",
43 | "\n",
44 | "Статья: [PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models](https://arxiv.org/abs/2003.03808)\n",
45 | "\n",
46 | "Авторы: **[Alex Damian](https://github.com/adamian98), [Sachit Menon](mailto:sachit.menon@duke.edu).**\n",
47 | "\n",
48 | "Колаб собрал:\n",
49 | "\n",
50 | "GitHub: [@tg-bomze](https://github.com/tg-bomze),\n",
51 | "Telegram: [@bomze](https://t.me/bomze),\n",
52 | "Twitter: [@tg_bomze](https://twitter.com/tg_bomze).\n",
53 | "\n",
54 | "---\n",
55 | "##### В настоящее время Google Drive используется в качестве хранилища весов модели, поэтому из-за постоянных загрузок этих файлов при выполнении следующего блока может быть получено сообщение об ошибке «Google Drive Quota Exceeded» (Квота на Google Drive превышена) или «No such file or directory: '/content/pulse/runs/face.png'». Если вы столкнулись с этой ошибкой, повторите попытку позже в тот же день или вернитесь завтра.\n",
56 | "\n",
57 | "```\n",
58 | "Чтобы начать, нажмите на кнопку (куда указывает красная стрелка), после чего дождитесь завершения выполнения блока.\n",
59 | "```\n",
60 | "\n"
61 | ]
62 | },
63 | {
64 | "cell_type": "code",
65 | "metadata": {
66 | "id": "fU0aGtD4Nl4W",
67 | "colab_type": "code",
68 | "cellView": "form",
69 | "colab": {}
70 | },
71 | "source": [
72 | "#@title ← Начинаем!\n",
73 | "#@markdown **После запуска этого блока вам будет необходимо прокрутить страницу вниз, а затем загрузить пиксельную квадратную фотографию, в которую целиком помещается вся человеческая голова. Нейронная сеть лучше всего работает на изображениях, в которых люди смотрят прямо в камеру. Пример:**\n",
74 | "\n",
75 | "#@markdown ![Пример](https://raw.githubusercontent.com/tg-bomze/Face-Depixelizer/HEAD/example.jpg)\n",
76 | "\n",
77 | "#@markdown *Обрезать фото вы можете [ТУТ](https://www.iloveimg.com/crop-image)*\n",
78 | "\n",
79 | "#@markdown ---\n",
80 | "import torch\n",
81 | "import torchvision\n",
82 | "from pathlib import Path\n",
83 | "if not Path(\"PULSE.py\").exists():\n",
84 | " if Path(\"pulse\").exists():\n",
85 | " %cd /content/pulse\n",
86 | " else:\n",
87 | " !git clone https://github.com/adamian98/pulse\n",
88 | " %cd /content/pulse\n",
89 | " !mkdir input/\n",
90 | " toPIL = torchvision.transforms.ToPILImage()\n",
91 | " toTensor = torchvision.transforms.ToTensor()\n",
92 | " from bicubic import BicubicDownSample\n",
93 | " D = BicubicDownSample(factor=1)\n",
94 | "\n",
95 | "import os\n",
96 | "from io import BytesIO\n",
97 | "import matplotlib.pyplot as plt\n",
98 | "import matplotlib.image as mpimg\n",
99 | "from PIL import Image\n",
100 | "from PULSE import PULSE\n",
101 | "from google.colab import files\n",
102 | "from bicubic import BicubicDownSample\n",
103 | "from IPython import display\n",
104 | "from IPython.display import display\n",
105 | "from IPython.display import clear_output\n",
106 | "import numpy as np\n",
107 | "from drive import open_url\n",
108 | "from mpl_toolkits.axes_grid1 import ImageGrid\n",
109 | "%matplotlib inline\n",
110 | "\n",
111 | "#@markdown ## Базовые настройки:\n",
112 | "#@markdown ##### *Если вы уже загрузили фотографию и просто хотите поэкспериментировать с настройками, то уберите галочку ниже*:\n",
113 | "upload_new_photo = True #@param {type:\"boolean\"}\n",
114 | "\n",
115 | "\n",
116 | "if upload_new_photo == True:\n",
117 | " !rm -rf /content/pulse/input/face.png\n",
118 | " clear_output()\n",
119 | " uploaded = files.upload()\n",
120 | " for fn in uploaded.keys():\n",
121 | " print('Вы загрузили файл \"{name}\" размером {length} байт'.format(\n",
122 | " name=fn, length=len(uploaded[fn])))\n",
123 | " os.rename(fn, fn.replace(\" \", \"\"))\n",
124 | " fn = fn.replace(\" \", \"\")\n",
125 | "\n",
126 | " if(len(uploaded.keys())!=1): raise Exception(\"Вам необходимо загрузить только одно изображение.\")\n",
127 | "\n",
128 | " face = Image.open(fn)\n",
129 | " face = face.resize((1024, 1024), Image.LANCZOS)  # Image.ANTIALIAS удалён в Pillow 10\n",
130 | " face = face.convert('RGB')\n",
131 | " face_name = 'face.png'\n",
132 | " face.save(face_name)\n",
133 | " %cp $face_name /content/pulse/input/\n",
134 | "\n",
135 | " images = []\n",
136 | " imagesHR = []\n",
137 | " imagesHR.append(face)\n",
138 | " face = toPIL(D(toTensor(face).unsqueeze(0).cuda()).cpu().detach().clamp(0,1)[0])\n",
139 | " images.append(face)\n",
140 | "\n",
141 | "#@markdown ---\n",
142 | "#@markdown ## Расширенные настройки:\n",
143 | "#@markdown ##### *Если вы хотите получить более точный результат, измените следующие переменные, оптимально настроенные по умолчанию*:\n",
144 | "\n",
145 | "input_dir = '/content/pulse/input/'\n",
146 | "output_dir = '/content/pulse/runs/'\n",
147 | "seed = 100 #@param {type:\"integer\"}\n",
148 | "epsilon = 0.02 #@param {type:\"slider\", min:0.01, max:0.03, step:0.01}\n",
149 | "noise_type = 'trainable' #@param ['zero', 'fixed', 'trainable']\n",
150 | "optimizer = 'adam' #@param ['sgd', 'adam','sgdm', 'adamax']\n",
151 | "learning_rate = 0.4 #@param {type:\"slider\", min:0, max:1, step:0.05}\n",
152 | "learning_rate_schedule = 'linear1cycledrop' #@param ['fixed', 'linear1cycle', 'linear1cycledrop']\n",
153 | "steps = 100 #@param {type:\"slider\", min:100, max:2000, step:50}\n",
154 | "clear_output()\n",
155 | "\n",
156 | "seed = abs(seed)\n",
157 | "print('Примерное время генерации: {} сек.\\n'.format(round(0.23*steps)+6))\n",
158 | "!python run.py \\\n",
159 | " -input_dir $input_dir \\\n",
160 | " -output_dir $output_dir \\\n",
161 | " -seed $seed \\\n",
162 | " -noise_type $noise_type \\\n",
163 | " -opt_name $optimizer \\\n",
164 | " -learning_rate $learning_rate \\\n",
165 | " -steps $steps \\\n",
166 | " -eps $epsilon \\\n",
167 | " -lr_schedule $learning_rate_schedule\n",
168 | "\n",
169 | "#@markdown ---\n",
170 | "#@markdown *Если во время выполнения произошла ошибка или кнопка \"**Обзор**\" не активна, то попробуйте повторно запустить этот блок*\n",
171 | "\n",
172 | "fig, (ax1, ax2) = plt.subplots(1, 2)\n",
173 | "ax1.imshow(mpimg.imread('/content/pulse/input/face.png'))\n",
174 | "ax1.set_title('Оригинал')\n",
175 | "ax2.imshow(mpimg.imread('/content/pulse/runs/face.png'))\n",
176 | "ax2.set_title('Результат')\n",
177 | "plt.show()"
178 | ],
179 | "execution_count": null,
180 | "outputs": []
181 | },
182 | {
183 | "cell_type": "code",
184 | "metadata": {
185 | "id": "DUfP6_7vTK3b",
186 | "colab_type": "code",
187 | "cellView": "form",
188 | "colab": {}
189 | },
190 | "source": [
191 | "#@title ← Скачать итоговое изображение\n",
192 | "try: files.download('/content/pulse/runs/face.png')\n",
193 | "except Exception: raise Exception(\"No result image\")"
194 | ],
195 | "execution_count": null,
196 | "outputs": []
197 | }
198 | ]
199 | }
--------------------------------------------------------------------------------