├── .github └── FUNDING.yml ├── .gitignore ├── LICENCE ├── README.md ├── Vanilla GAN (PyTorch).ipynb ├── data └── examples │ ├── generated_samples │ ├── generated_dcgan.jpg │ └── generated_vgan.jpg │ ├── intermediate_imagery.PNG │ ├── interpolation │ ├── dcgan_interpolated.jpg │ ├── slerp.png │ └── vgan_interpolated.jpg │ ├── jupyter │ ├── cross_entropy_loss.png │ └── data_distribution.PNG │ ├── losses.PNG │ ├── real_samples │ ├── celeba.jpg │ └── mnist.jpg │ ├── training_progress │ ├── training_progress_cgan.gif │ ├── training_progress_dcgan.gif │ └── training_progress_vgan.gif │ └── vector_arithmetic │ └── vector_arithmetic.jpg ├── environment.yml ├── generate_imagery.py ├── models ├── binaries │ ├── CGAN_000000.pth │ ├── DCGAN_000000.pth │ └── VANILLA_000000.pth └── definitions │ ├── conditional_gan.py │ ├── dcgan.py │ └── vanilla_gan.py ├── playground.py ├── train_cgan.py ├── train_dcgan.py ├── train_vanilla_gan.py └── utils ├── constants.py ├── utils.py └── video_utils.py /.github/FUNDING.yml: -------------------------------------------------------------------------------- 1 | patreon: theaiepiphany 2 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .idea 2 | __pycache__ 3 | 4 | .ipynb_checkpoints 5 | 6 | runs 7 | 8 | models/binaries 9 | models/checkpoints 10 | 11 | data/interpolated_imagery 12 | data/generated_imagery 13 | data/debug_imagery 14 | data/MNIST 15 | data/CelebA -------------------------------------------------------------------------------- /LICENCE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2020 Aleksa Gordić 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## PyTorch GANs :computer: vs :computer: = :heart: 2 | This repo contains PyTorch implementation of various GAN architectures.
3 | It's aimed at making it **easy for beginners** to start playing and learning about GANs.
4 | 5 | All of the repos I found do obscure things like setting bias in some network layer to `False` without explaining
6 | why certain design decisions were made. This repo makes **every design decision transparent.** 7 | 8 | ## Table of Contents 9 | * [What are GANs?](#what-are-gans) 10 | * [Setup](#setup) 11 | * [Implementations](#implementations) 12 | + [Vanilla GAN](#vanilla-gan) 13 | + [Conditional GAN](#conditional-gan) 14 | + [DCGAN](#dcgan) 15 | 16 | ## What are GANs? 17 | 18 | GANs were originally proposed by Ian Goodfellow et al. in a seminal paper called [Generative Adversarial Nets](https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf). 19 | 20 | GANs are a framework where 2 models (usually neural networks), called generator (G) and discriminator (D), play a **minimax game** against each other. 21 | The generator is trying to **learn the distribution of real data** and is the network which we're usually interested in. 22 | During the game the goal of the generator is to trick the discriminator into "thinking" that the data it generates is real. 23 | The goal of the discriminator, on the other hand, is to correctly discriminate between the generated (fake) images and real images coming from some dataset (e.g. MNIST). 24 | 25 | ## Setup 26 | 27 | 1. `git clone https://github.com/gordicaleksa/pytorch-gans` 28 | 2. Open Anaconda console and navigate into the project directory: `cd path_to_repo` 29 | 3. Run `conda env create` from the project directory (this will create a brand new conda environment). 30 | 4. Run `activate pytorch-gans` (for running scripts from your console or set the interpreter in your IDE) 31 | 32 | That's it! It should work out of the box; the `environment.yml` file deals with all the dependencies. 33 | 34 | ----- 35 | 36 | The PyTorch package will pull some version of CUDA with it, but it is highly recommended that you install system-wide CUDA beforehand, mostly because of GPU drivers. I also recommend using the Miniconda installer as a way to get conda on your system. 37 | 38 | Follow through points 1 and 2 of [this setup](https://github.com/Petlja/PSIML/blob/master/docs/MachineSetup.md) and use the most up-to-date versions of Miniconda and CUDA/cuDNN. 39 | 40 | ## Implementations 41 | 42 | Important note: you don't need to train the GANs to use this project; I've checked in pre-trained models.<br/>
43 | You can just use the `generate_imagery.py` script to play with the models. 44 | 45 | ## Vanilla GAN 46 | 47 | Vanilla GAN is my implementation of the [original GAN paper (Goodfellow et al.)](https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf) with certain modifications, mostly in the model architecture, 48 | like the usage of LeakyReLU and 1D batch normalization (which didn't even exist back then) instead of the maxout activation and dropout. 49 | 50 | ### Examples 51 | 52 | The GAN was trained on the MNIST dataset. Here is what the digits from the dataset look like: 53 | 54 |

55 | 56 |

57 | 58 | You can see how the network is slowly learning to capture the data distribution during training: 59 | 60 |

61 | 62 |

63 | 64 | After the generator is trained we can use it to generate all 10 digits! Looks like it's coming directly from MNIST, right!? 65 | 66 |

67 | 68 |

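By the way, "using the generator" really just means pushing random latent vectors through the trained network. Here is a minimal sketch of that (the real loading logic lives in `generate_imagery.py`; the class and checkpoint-key names below follow the notebook's definitions, so treat them as illustrative):

```python
import os

import torch

from models.definitions.vanilla_gan import GeneratorNet  # assumed class name (same as in the notebook)

LATENT_SPACE_DIM = 100  # latent dimensionality used throughout this repo

generator = GeneratorNet()
# VANILLA_000000.pth is the checked-in binary; "state_dict" is the key used when models are saved
checkpoint = torch.load(os.path.join('models', 'binaries', 'VANILLA_000000.pth'))
generator.load_state_dict(checkpoint["state_dict"])
generator.eval()

z = torch.randn(1, LATENT_SPACE_DIM)  # a single latent vector from a standard Gaussian
with torch.no_grad():
    fake_img = generator(z)  # shape (1, 1, 28, 28), values in [-1, 1] because of tanh
fake_img = (fake_img + 1) / 2  # map back into [0, 1] for display
```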
69 | 70 | We can also pick 2 generated numbers that we like, save their latent vectors, and subsequently [linearly](https://en.wikipedia.org/wiki/Linear_interpolation) or [spherically](https://en.wikipedia.org/wiki/Slerp)
71 | interpolate between them to generate new images and understand how the latent space (z-space) is structured: 72 | 73 |

74 | 75 |

76 | 77 | We can see how the number 4 is slowly morphing into 9 and then into the number 3.
78 | 79 | The idea behind spherical interpolation is super easy - instead of moving over the shortest possible path
80 | (a straight line, i.e. linear interpolation) from the first vector (p0) to the second (p1), you take the sphere's arc path: 81 | 82 |

83 | 84 |

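In code, both interpolation schemes fit in a few lines (a self-contained sketch, independent of this repo's exact implementation):

```python
import numpy as np

def lerp(p0, p1, t):
    """Linear interpolation - walk along the straight line from p0 to p1."""
    return (1.0 - t) * p0 + t * p1

def slerp(p0, p1, t):
    """Spherical interpolation - walk along the arc from p0 to p1 instead."""
    # angle between the two vectors
    omega = np.arccos(np.dot(p0 / np.linalg.norm(p0), p1 / np.linalg.norm(p1)))
    if np.isclose(omega, 0.0):  # (nearly) parallel vectors -> lerp is fine
        return lerp(p0, p1, t)
    return (np.sin((1.0 - t) * omega) * p0 + np.sin(t * omega) * p1) / np.sin(omega)

# Sweeping t from 0 to 1 and feeding every slerp(z1, z2, t) through the generator
# produces one frame of the morphing sequence per t.
```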
85 | 86 | ### Usage 87 | 88 | #### Option 1: Jupyter Notebook 89 | 90 | Just run `jupyter notebook` from your Anaconda console and it will open the session in your default browser.<br/>
91 | Open `Vanilla GAN (PyTorch).ipynb` and you're ready to play!<br/>
92 | 93 | If you created the env before I added jupyter, just do `pip install jupyter==1.0.0` and you're ready. 94 | 95 | --- 96 | 97 | **Note:** if you get `DLL load failed while importing win32api: The specified module could not be found`<br/>
98 | Just do `pip uninstall pywin32` and then either `pip install pywin32` or `conda install pywin32`, which [should fix it](https://github.com/jupyter/notebook/issues/4980)! 99 | 100 | #### Option 2: Use your IDE of choice 101 | 102 | #### Training 103 | 104 | It's really easy to kick off new training, just run this:<br/>
105 | `python train_vanilla_gan.py --batch_size <batch_size>` 106 | 107 | The code is well commented so you can understand exactly how the training itself works.<br/>
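If you want the bird's-eye view before reading the script, a single training step boils down to this (a simplified, self-contained sketch with stand-in networks, not a copy of `train_vanilla_gan.py`):

```python
import torch
from torch import nn

# Stand-ins just so the sketch runs - the real models are the MLPs defined in models/definitions
generator = nn.Sequential(nn.Linear(100, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())
# lr/betas come from the DCGAN paper - the same values this repo uses
g_opt = torch.optim.Adam(generator.parameters(), lr=0.0002, betas=(0.5, 0.999))
d_opt = torch.optim.Adam(discriminator.parameters(), lr=0.0002, betas=(0.5, 0.999))

bce = nn.BCELoss()
real_imgs = torch.rand(128, 784)  # stand-in for a flattened batch of real MNIST images
z = torch.randn(128, 100)  # batch of latent vectors
real_label, fake_label = torch.ones(128, 1), torch.zeros(128, 1)

# 1) Discriminator step: real images should score 1, generated images 0
d_opt.zero_grad()
d_loss = bce(discriminator(real_imgs), real_label) + \
         bce(discriminator(generator(z).detach()), fake_label)  # detach -> don't update G here
d_loss.backward()
d_opt.step()

# 2) Generator step: G wants its fakes to be scored as real (1)
g_opt.zero_grad()
g_loss = bce(discriminator(generator(z)), real_label)
g_loss.backward()
g_opt.step()
```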
108 | 109 | The script will: 110 | * Dump checkpoint *.pth models into `models/checkpoints/` 111 | * Dump the final *.pth model into `models/binaries/` 112 | * Dump intermediate generated imagery into `data/debug_imagery/` 113 | * Download MNIST (~100 MB) the first time you run it and place it into `data/MNIST/` 114 | * Dump tensorboard data into `runs/`, just run `tensorboard --logdir=runs` from your Anaconda console 115 | 116 | And that's it! You can track the training both visually (dumped imagery) and through G's and D's loss progress. 117 | 118 |

119 | 120 | 121 |

122 | 123 | Tracking loss can be helpful, but I mostly relied on visually analyzing intermediate imagery.<br/>
124 | 125 | Note1: also make sure to check out the **playground.py** file if you're having problems understanding the adversarial loss.<br/>
126 | Note2: Images are dumped both to the file system `data/debug_imagery/` and to tensorboard. 127 | 128 | #### Generating imagery and interpolating 129 | 130 | To generate a single image, just run the script with defaults:<br/>
131 | `python generate_imagery.py` 132 | 133 | It will display and dump the generated image into `data/generated_imagery/` using the checked-in generator model.<br/>
134 | 135 | Make sure to change the `--model_name` param to your model's name (once you train your own model).
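For example, generating an image with the checked-in vanilla GAN binary should look something like this (the exact flag syntax may differ slightly, check the script's argument parser):<br/>
`python generate_imagery.py --model_name VANILLA_000000.pth`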
136 | 137 | ----- 138 | 139 | If you want to play with interpolation, just set the `--generation_mode` to `GenerationMode.INTERPOLATION`.
140 | And optionally set `--slerp` to true if you want to use spherical interpolation. 141 | 142 | The first time you run it in this mode the script will start generating images,
143 | and ask you to pick 2 images you like by entering `'y'` into the console. 144 | 145 | Finally, it will start displaying interpolated imagery and dump the results to `data/interpolated_imagery`. 146 | 147 | ## Conditional GAN 148 | 149 | Conditional GAN (cGAN) is my implementation of the [cGAN paper (Mirza et al.)](https://arxiv.org/pdf/1411.1784.pdf).<br/>
150 | It basically just adds conditioning vectors (a one-hot encoding of the digit labels) to the vanilla GAN above (a small sketch of this trick follows the examples below). 151 | 152 | ### Examples 153 | 154 | In addition to everything that we could do with the original GAN, here we can control exactly which digit we want to generate! 155 | We make it dump a 10x10 grid where each column is a single digit, and this is how the learning proceeds: 156 | 157 |

158 | 159 |

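The conditioning trick mentioned above is really just tensor concatenation; a small illustrative sketch (the names here are made up, see `models/definitions/conditional_gan.py` for the real implementation):

```python
import torch

num_classes, latent_dim, batch_size = 10, 100, 16

z = torch.randn(batch_size, latent_dim)  # ordinary latent vectors
labels = torch.randint(0, num_classes, (batch_size,))  # the digits we want to generate
one_hot = torch.nn.functional.one_hot(labels, num_classes).float()

# The cGAN generator consumes the latent vector and the condition glued together;
# the discriminator receives the label information in an analogous way.
conditioned_z = torch.cat((z, one_hot), dim=1)  # shape: (16, 110)
# fake_digits = cgan_generator(conditioned_z)
```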
160 | 161 | ### Usage 162 | 163 | For training just check out [vanilla GAN](#training) (just make sure to use `train_cgan.py` instead). 164 | 165 | #### Generating imagery 166 | 167 | Same as for [vanilla GAN](#generating-imagery-and-interpolating) but you can additionally set `cgan_digit` to a number between 0 and 9 to generate that exact digit! 168 | There is no interpolation support for cGAN; it's the same as for the vanilla GAN, so feel free to use that. 169 | 170 | Note: make sure to set `--model_name` to either `CGAN_000000.pth` (pre-trained and checked-in) or your own model. 171 | 172 | ## DCGAN 173 | 174 | DCGAN is my implementation of the [DCGAN paper (Radford et al.)](https://arxiv.org/pdf/1511.06434.pdf).<br/>
175 | The main contribution of the paper was that its authors were the first to make CNNs work successfully in the GAN setup.<br/>
176 | Batch normalization was invented in the meantime, and that's basically what got CNNs to work here. 177 | 178 | ### Examples 179 | 180 | I trained DCGAN on the preprocessed [CelebA dataset](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html). Here are some samples from the dataset: 181 | 182 |

183 | 184 |

185 | 186 | Again, you can see how the network is slowly learning to capture the data distribution during training: 187 | 188 |

189 | 190 |

191 | 192 | After the generator is trained we can use it to generate new faces! This problem is much harder than generating MNIST digits, 193 | so generated faces are not indistinguishable from the real ones. 194 | 195 |

196 | 197 |

198 | 199 | Some SOTA GAN papers did a much better job at generating faces; currently the best model is [StyleGAN2](https://github.com/NVlabs/stylegan2). 200 | 201 | Similarly, we can explore the structure of the latent space via interpolations: 202 | 203 |

204 | 205 |

206 | 207 | We can see how the man's face is slowly morphing into a woman's face, and also how the skin tone is gradually changing. 208 | 209 | Finally, because the latent space has some nice properties (linear structure), we can do some interesting things.<br/>
210 | Subtracting a neutral woman's latent vector from a smiling woman's latent vector gives us the "smile vector".<br/>
211 | Adding that vector to a neutral man's latent vector, we hopefully get a smiling man's latent vector. And so it is! 212 | 213 |

214 | 215 |

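In code, the whole trick is plain vector arithmetic on latent vectors (a sketch with stand-in vectors; in practice you save the vectors of the images you picked through the interactive script, ideally averaging a few per category to get a cleaner direction):

```python
import torch

# Stand-ins - in practice these come from the images you picked via GenerationMode.VECTOR_ARITHMETIC
z_smiling_woman = torch.randn(100)
z_neutral_woman = torch.randn(100)
z_neutral_man = torch.randn(100)

smile_vector = z_smiling_woman - z_neutral_woman  # isolate the "smile" direction
z_smiling_man = z_neutral_man + smile_vector  # apply it to the neutral man's vector
# smiling_man_img = generator(z_smiling_man.unsqueeze(0))  # feed through the DCGAN generator
```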
216 | 217 | You can also create the "sunglasses vector" and use it to add sunglasses to other faces, etc. 218 | 219 | *Note: I've created an interactive script so you can play with this; check out `GenerationMode.VECTOR_ARITHMETIC`.* 220 | 221 | ### Usage 222 | 223 | For training just check out [vanilla GAN](#training) (just make sure to use `train_dcgan.py` instead).<br/>
224 | The only difference is that this script will download [pre-processed CelebA dataset](https://s3.amazonaws.com/video.udacity-data.com/topher/2018/November/5be7eb6f_processed-celeba-small/processed-celeba-small.zip) instead of MNIST. 225 | 226 | #### Generating imagery 227 | 228 | Again just use the `generate_imagery.py` script. 229 | 230 | You have 3 options you can set the `generation_mode` to: 231 | * `GenerationMode.SINGLE_IMAGE` <- generate a single face image 232 | * `GenerationMode.INTERPOLATION` <- pick 2 face images you like and script will interpolate between them 233 | * `GenerationMode.VECTOR_ARITHMETIC` <- pick 9 images and script will do vector arithmetic 234 | 235 | GenerationMode.VECTOR_ARITHMETIC will give you an **interactive matplotlib plot** to pick 9 images. 236 | 237 | Note: make sure to set `--model_name` to either `DCGAN_000000.pth` (pre-trained and checked-in) or your own model. 238 | 239 | ## Acknowledgements 240 | 241 | I found these repos useful (while developing this one): 242 | * [gans](https://github.com/diegoalejogm/gans) (PyTorch & TensorFlow) 243 | * [PyTorch-GAN](https://github.com/eriklindernoren/PyTorch-GAN) (PyTorch) 244 | 245 | ## Citation 246 | 247 | If you find this code useful for your research, please cite the following: 248 | 249 | ``` 250 | @misc{Gordić2020PyTorchGANs, 251 | author = {Gordić, Aleksa}, 252 | title = {pytorch-gans}, 253 | year = {2020}, 254 | publisher = {GitHub}, 255 | journal = {GitHub repository}, 256 | howpublished = {\url{https://github.com/gordicaleksa/pytorch-gans}}, 257 | } 258 | ``` 259 | 260 | ## Connect with me 261 | 262 | If you'd love to have some more AI-related content in your life :nerd_face:, consider: 263 | * Subscribing to my YouTube channel [The AI Epiphany](https://www.youtube.com/c/TheAiEpiphany) :bell: 264 | * Follow me on [LinkedIn](https://www.linkedin.com/in/aleksagordic/) and [Twitter](https://twitter.com/gordic_aleksa) :bulb: 265 | * Follow me on [Medium](https://gordicaleksa.medium.com/) :books: :heart: 266 | 267 | ## Licence 268 | 269 | [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://github.com/gordicaleksa/pytorch-gans/blob/master/LICENCE) -------------------------------------------------------------------------------- /Vanilla GAN (PyTorch).ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Ultimate beginner's guide to GANs 👨🏽‍💻\n", 8 | "\n", 9 | "In this notebook you'll learn:\n", 10 | "\n", 11 | "✅ What are GANs exactly?
\n", 12 | "✅ How to train them?
\n", 13 | "✅ How to use them?
\n", 14 | "\n", 15 | "After you complete this one you'll have a much better understanding of GANs!\n", 16 | "\n", 17 | "So, let's start!\n", 18 | "\n", 19 | "---\n", 20 | "\n", 21 | "## What the heck are GANs and how they came to be?\n", 22 | "\n", 23 | "GANs were originally proposed by Ian Goodfellow et al. in a seminal paper called [Generative Adversarial Nets](https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf).\n", 24 | "\n", 25 | "(`et al.` - fancy Latin phrase you'll be seing around it means `and others`)\n", 26 | "\n", 27 | "You DON'T need to understand the paper in order to understand this notebook. That being said:\n", 28 | "\n", 29 | "---\n", 30 | "\n", 31 | "GANs are a `framework` where 2 models (usually `neural networks`), called `generator (G)` and `discriminator (D)`, play a `minimax game` against each other. The generator is trying to learn the `distribution of real data` and is the network which we're usually interested in. During the game the goal of the generator is to trick the discriminator into \"thinking\" that the images it generates is real. The goal of the discriminator, on the other hand, is to correctly discriminate between the generated (fake) images and real images coming from some dataset (e.g. MNIST). \n", 32 | "\n", 33 | "At the equilibrium of the game the generator learns to generate images indistinguishable from the real images and the best that discriminator can do is output 0.5 - meaning it's 50% sure that what you gave him is a real image (and 50% sure that it's fake) - i.e. it doesn't have a clue of what's happening!\n", 34 | "\n", 35 | "Potentially confusing parts:

\n", 36 | "`minimax game` - basically they have some goal (objective function) and one is trying to minimize it, the other to maximize it, that's it.

\n", 37 | "`distribution of real data` - basically you can think of any data you use as a point in the `n-dimensional` space. For example, MNIST 28x28 image when flattened has 784 numbers. So 1 image is simply a point in the 784-dimensional space. That's it. when I say `n` in order to visualize it just think of `3` or `2` dimensions - that's how everybody does it. So you can think of your data as a 3D/2D cloud of points. Each point has some probability associated with it - how likely is it to appear - that's the `distribution` part. So if your model has internal representation of this 3D/2D point cloud there is nothing stopping it from generating more points from that cloud! And those are new images (be it human faces, digits or whatever) that never existed!\n", 38 | "\n", 39 | "\"example
\n", 40 | "\n", 41 | "Here is an example of a simple data distribution. The data here is 2-dimensional and the height of the plot is the probability of certain datapoint appearing. You can see that points around (0, 0) have the highest probability of happening. Those datapoints could be your 784-dimensional images projected into 2-dimensional space via PCA, t-SNE, UMAP, etc. (you don't need to know what these are, they are just some dimensionality reduction methods out there).\n", 42 | "\n", 43 | "In reality this plot would have multiple peaks (`multi-modal`) and wouldn't be this nice.\n", 44 | "\n", 45 | "\n", 46 | "---\n", 47 | "\n", 48 | "That was everything you need to know for now as a beginner! Let's code!\n", 49 | "\n" 50 | ] 51 | }, 52 | { 53 | "cell_type": "code", 54 | "execution_count": 1, 55 | "metadata": {}, 56 | "outputs": [], 57 | "source": [ 58 | "# I always like to structure my imports into Python's native libs,\n", 59 | "# stuff I installed via conda/pip and local file imports (we don't have those here)\n", 60 | "import os\n", 61 | "import re\n", 62 | "import time\n", 63 | "import enum\n", 64 | "\n", 65 | "\n", 66 | "import cv2 as cv\n", 67 | "import numpy as np\n", 68 | "import matplotlib.pyplot as plt\n", 69 | "import git\n", 70 | "\n", 71 | "\n", 72 | "import torch\n", 73 | "from torch import nn\n", 74 | "from torch.optim import Adam\n", 75 | "from torchvision import transforms, datasets\n", 76 | "from torchvision.utils import make_grid, save_image\n", 77 | "from torch.utils.data import DataLoader\n", 78 | "from torch.utils.tensorboard import SummaryWriter" 79 | ] 80 | }, 81 | { 82 | "cell_type": "code", 83 | "execution_count": 2, 84 | "metadata": {}, 85 | "outputs": [], 86 | "source": [ 87 | "# Let's create some constant to make stuff a bit easier\n", 88 | "BINARIES_PATH = os.path.join(os.getcwd(), 'models', 'binaries') # location where trained models are located\n", 89 | "CHECKPOINTS_PATH = os.path.join(os.getcwd(), 'models', 'checkpoints') # semi-trained models during training will be dumped here\n", 90 | "DATA_DIR_PATH = os.path.join(os.getcwd(), 'data') # all data both input (MNIST) and generated will be stored here\n", 91 | "DEBUG_IMAGERY_PATH = os.path.join(DATA_DIR_PATH, 'debug_imagery') # we'll be dumping images here during GAN training\n", 92 | "\n", 93 | "MNIST_IMG_SIZE = 28 # MNIST images have 28x28 resolution, it's just convinient to put this into a constant you'll see later why" 94 | ] 95 | }, 96 | { 97 | "cell_type": "markdown", 98 | "metadata": {}, 99 | "source": [ 100 | "## Understand your data - Become One With Your Data!\n", 101 | "\n", 102 | "You should always invest time to understand your data. You should be able to answer questions like:\n", 103 | "1. How many images do I have?\n", 104 | "2. What's the shape of my image?\n", 105 | "3. How do my images look like?\n", 106 | "\n", 107 | "So let's first answer those questions!" 
108 | ] 109 | }, 110 | { 111 | "cell_type": "code", 112 | "execution_count": 3, 113 | "metadata": {}, 114 | "outputs": [ 115 | { 116 | "name": "stdout", 117 | "output_type": "stream", 118 | "text": [ 119 | "Dataset size: 60000 images.\n", 120 | "Image shape torch.Size([1, 28, 28])\n" 121 | ] 122 | }, 123 | { 124 | "data": { 125 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAW4AAAF1CAYAAADIswDXAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjMsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+AADFEAAAgAElEQVR4nOydebxUcx/H3197hDalVUm2ynqFxxZJ0uKxZcmWHiHLYwllC61Esif7kmRfIiIP2XOzVCqKUmlDRSWRfs8fM985Z+bO3Dt39nPn+3697mvO/M72O2fmnvn8vr/vIs45DMMwjOCwUb47YBiGYVQOe3AbhmEEDHtwG4ZhBAx7cBuGYQQMe3AbhmEEDHtwG4ZhBAx7cBtRiMiNIvJUho8pIvKoiKwQkcmZPHaqiMhjIjIw3/3IFyLSVESciGyS774Ylcce3AWCiBwsIh+LyG8islxEPhKR/fLdrwxxMNAeaOSca5Prk4vI2SLyYRaP/174IbhnTPvL4fa24fc3ht+f5Ntmk3Bb0/D7qB8UEekpIrNEZJWILBWR10VkaxEZLyKrw39/i8hfvvcjM3x9bUVkYSaPmc/zVAXswV0AiMg2wDjgbqAW0BC4CViXz35lkB2Aec65NfFWVhHV9x1wpr4RkdrAAcDPMdstB24WkY0rOqCIHAYMBk51zm0N7AY8C+Cc6+icq+6cqw6MBm7V98658zNyRUbBYg/uwmBnAOfcGOfcP865tc65Cc65qQAi0lxE3hWRX0XkFxEZLSI1dGcRmSciV4rIVBFZIyIPi0i9sCpbJSLviEjN8LY6RO4lIotEZLGIXJGoYyJyQHgksFJEvlb1GF53toj8ED7HXBHpHmf/nsBDwIFhNXiTKisRuVpElgCPhrc9V0TmhEccr4pIA99xnIj0FpHZ4fMNCN+XT0TkdxF5VkQ2i3P+3YCRvvOv9K2uGVawq0TkMxFp7ttvVxF5O9yXb0WkWwWf4WjgZN8D+VTgJeCvmO3eDLedXsHxAPYDPnHOfQngnFvunHvcObcqiX2jEJGNReS28PfnB6BTzPoeIjIzfC9+EJHzwu1bAeOBBj5F30BE2oTv/crwd+gevf8S4g4RWSahEeRUEWkVXrd5uB/zwyOIkSJSLdF5KnudRYNzzv7y/AdsA/wKPA50BGrGrN+JkKlhc2A7YBIwwrd+HvApUI+QWl8GfAHsHd7nXaB/eNumgAPGAFsBrQmpwiPD628EngovNwz36xhCP/Ltw++3C+/7O7BLeNv6QMsE13c28KHvfVtgPXBLuH/VgCOAX4B9wm13A5N8+zjg1fC9akloNDIR2BHYFpgBnJXM+cNtjxFSv22ATQg9eJ8Jr9sKWAD0CK/bJ9y3RNf3HvAfYALQMdw2GTgQWAi09d9boCvwA7Bp+PgOaOrr18Dw8iHAWkKjr4OAzROcP7JPOd+x84FZQGNCo7r/hc+7SXh9J6A5IMBhwB/APr7Pa2HM8fYlNKLYhNB3aiZwaXhdB2AKUCN8vN2A+uF1I8KfYy1ga+A1YEii89hf/D9T3AWAc+53QnZgBzwI/BxWnPXC6+c45952zq1zzv0MDCf0z+XnbufcUufcT8AHwGfOuS+dc+sIKb+9Y7a/yTm3xjk3jZDiPTVO104H3nDOveGc2+CcexsoJfQgB9gAtBKRas65xc65bypx2RsI/Zisc86tBboDjzjnvgj3uR8hldzUt88tzrnfw+eZDkxwzv3gnPuNkFqLvcaKeNE5N9k5t57Qg3uvcHtnQqadR51z651zXwAvACdWcLwngDNFZBeghnPuk3gbOedeJfRj+Z/yDuac+wA4ntAPx+vAryIyPBkzSxy6EfqxX+CcWw4MiTnX6865712I9wn9CB1STt+mOOc+Dd+fecADeN/Jvwk9lHcFxDk30zm3WEQEOBe4zIVGD6sImYJOSeF6ihp7cBcI4S/32c65RkAroAEhdYKI1BWRZ0TkJxH5nZBqqxNziKW+5bVx3leP2X6Bb/nH8Pli2QE4KTwcXhk2MxxMSD2tAU4mpOQWh00Ou1bikn92zv3pe98g3A8AnHOrCan7hr5tKnuNFbHEt/yHb/8dgP1jrrs7sH0Fx3uR0MjhYuDJCra9DrgW2KK8jZxz451zXQgp1GMJjR7KfeAnoAFlP/MIItJRRD4Nm4ZWEvpxjv2O+bffWUTGiciS8HdysG7vnHsXuAe4F1gqIqMkNI+zHbAlMMV3X98MtxuVwB7cBYhzbhah4W+rcNMQQmp8D+fcNoSUsKR5msa+5SbAojjbLACedM7V8P1t5ZwbGu7nW8659oTMJLMIjRaSJTYt5SJCD0wgYlutDfxUiWMme66KWAC8H3Pd1Z1zF5R7Euf+IKT8L6CCB3d49DIH6J1Mh8IjnomEzF6tKto+Dosp+5kDIbszoRHFbUA951wN4A2871i8+3c/oc+8Rfg7eY1ve5xzdznn9iVk1toZuJKQuWktIZOT3tdtXWiCNdF5jDjYg7sACE+EXSEijcLvGxMyXXwa3mRrYDWwUkQaEvonSJfrRWRLEWlJyJY7Ns42TwFdRKRDeHJri/DEYiMJTX52DT9g14X7908a/Xka6CEie4UfJIMJmXvmpXFMZSnQKN7kZQLGATuLyBkismn4b7/wRGdFXAMclmS/rwWuSrRSRI4VkVNEpGZ4wq8NIXPEp4n2KYdngUvCn11NoK9v3WaE5hV+BtaLSEfgKN/6pUBtEdnW17Y1oTmO1eGRVuRHLXyv9heRTYE1wJ/AP865DYR+3O8QkbrhbRuKSIdyzmPEwR7chcEqYH/gMxFZQ+gfczqg3h43EbJz/kbI1vliBs75PiHFNxG4zTk3IXYD59wCQsPzawj9Uy8g9KOxUfjvCkJKeTmhB0pS6jEeYTV5PSHlt5jQRFmmbJ/vAt8AS0TklyT6sorQg+sUQte3BG8itaJ9FznnkvIZd859RGgSMxErCNmEZxN6SD4FDHPOjU7m+DE8CLwFfE1o4jryHQpf7yWEHu4rgNMITSDq+lmEJrN/CJs4GgB9wtutCh/b/8O/TbhtBSGTzK+E1DzA1YS+d5+GTSzvALuUcx4jDuKcjU6KifBk31xg0/CknGEYAcMUt2EYRsCwB7dhGEbAyNqDW0SOllDE2RwR6VvxHkYucM7Nc86JmUkMI7hkxcYdDhD4jlCk3ULgc0L5FmZk/
GSGYRhFRrYUdxtgTjiq7S/gGULeCYZhGEaaZCsrW0Oio7QWEnJ3i0udOnVc06ZNs9QVwzCM4DFv3jx++eWXuIF22XpwxztZlE1GRHoBvQCaNGlCaWlplrpiGIYRPEpKShKuy5apZCHR4bWNiAmpds6Ncs6VOOdKttvOUhUYhmEkS7YU9+dACxFpRijXxCmEoqySIpRErLiInSQutntQ7NcPdg+K/fqh7D1IRFYe3M659SJyEaEQ240JpeusTMpPwzAMIwFZKxnlnHuDUIYxwzAMI4NY5KRhGEbAsAe3YRhGwLAHt2EYRsCwB7dhGEbAsAe3YRhGwLAHt2EYRsDImjtgIfDvf/8bgOeffx6AhQsXAtC+fXsAZs+enZ+OGYZhpIEpbsMwjIBR5RT3LrvsElkePnw44IWRNmzYEIAxY8YA5SdxMSpG7/V7770HwPTp0wFvRFNVqF27NgBTpkwBoG3btkAoe1uQ2WyzUNH7mjVrAnD++ecD0L9/fwA2bNgAwG23her83nHHHWWOsXTp0qz3szL4w+TPOOMMAK677joAdtppJyD5sPJ4NG/eHMj/Z2+K2zAMI2BUOcX9wgsvRJYbN24cte7tt9+Ou0+bNm2AaLU+ePBgAD7++GMAxo0bB8D8+fMBeP/99zPU4+By7733AlC3bl0AFi9enM/uZJzNN98cgNdffx2ALbbYAoC//vorb32qLJp5U5WiqmqA+vXrA3DEEUdE7aNKW5XpFVdcEfXqp2fPngA88cQTmex2ytSpUyeyfPvttwOwbt06AM4991zAU8sHHnggAPXq1StznJUrVwLeaFJH6UcddRQAo0aNynDPK4cpbsMwjIBRZRR3s2bNAE/9+dFfzSFDhgBw8MEHA55NVn9xa9SoEdlHbWXHH3981Ovq1asBmDRpEuDZ0YOmwLfddlsABg4cCMCgQYMAWLJkSYX77rHHHoBn6/3zzz8B6NevX6a7mVdUce+3336A91kvWrQo4T75Zp999gHgsssuA2DfffcFYOeddwbSs+/GY+TIkYDnsfXuu+9m9PiV5eeff44s60jp2GNDVRM/+OADAL777jsAJk6cmPRxDzjgAAB++OGHjPQzXUxxG4ZhBAx7cBuGYQSMKmMq6dixI+C5NoFnIunRowcAd999NwCdO3dO+TzVq1cH4JhjjgHgsMMOAzyXI/ACfpIxO+SL5557DoB27doB8MUXXwDw6KOPVrjviBEjANhoo9Dv/pVXXgnATz/9lPF+5pM+ffpEvfdPfBcaLVq0ALzhv35Py0Mnk2NNBmomfOONUDr9GTNmlNlX91FXyauuugrIv6nEj07E6iTs5MmTAe9e+c0qFVEoJhLFFLdhGEbAqDKK+5xzzinT1rJlSwBeeuklAPbaa6+4+/74449AdDCBqg6d7Nxhhx3i7rvVVlsB0cEJqnaGDh2a/AXkgBtuuCGyrC5gOqn04osvlruvTmaC5za5Zs0aAF555ZWM9jOf7LbbbpFlHUkoOqlViFSrVg1IrLT/+OMPAO67775Im7q4ff3115U+3/r166PeF2LBb500v/DCCwFvVNmtWzfAc2cNIqa4DcMwAkbgFbe66am69qMqQF/Xrl0LQGlpKeDZaqdOnQrED2NVN8MzzzwTgLPPPhuARo0aJezTNddcA3g23yeffDLJq8kO+++/PxCtIFUt/+c//wHgt99+K/cYw4YNiyxvv/32AFxyySVA1bJtqwsgeCHhzzzzDACrVq3KS5+SQe3Vffv2jWr//PPPAc99NR0OOeSQyPLWW28NePMcqugLkTlz5gBeMJ26BZviNgzDMHJG4BW3BhZsskniS9HZYw0yqcwv7dy5cwG46aabAHjkkUcAz26+5557ltlH7Y133XVX1LYavJMrttlmG8CzX2+55ZaRdbfccguQOA2Aol46nTp1irTpdVQl27Zy6aWXRpb/+ecfAB588EEA/v7777z0KRn0O65h3plAU0bo3JB+98H7jmv4v6rYQka9vTQJmnqijR8/HohOUNWhQwcATjzxRMDzotHvgI7OdfSea0xxG4ZhBIzAK26/TTIW9dvUggqV8dtMxIIFCwAvjPa1114DoHXr1mW21Rn+3r17A3Drrbemff7KoDZ89WP1++smq5B0Rl6PAZ5KV7uqpsfVpD3qLx8k1LdXPQ4Ali9fDnjxAMWC2vYfeughoGwSKj86n6M+34WMjhDvuecewEsapikc/B43+l1W270+Q1SV6+hV5350RA7eSC2bmOI2DMMIGIFX3Ndffz0QP3mORkxmQmnHop4U5513HuDNWBcCqoBPOeUUwLs3atcGz8c1Eaq6Yo8Bnp1PS7+pj/uECROAYCpuHTH5R3BPP/10vrqTF3bffXcAxo4dC8Cuu+6acFv1XtHEW0FAnwP6vdXvqdrr1UMGvOvSUav+v+s2vXr1Arz5Mr/Hkd8DK1uY4jYMwwgYgVfc+guoyd9zjfrJvvrqq5E2tX8r/tnqXPDyyy8DZe3/b731VmRZvWViPSVUOWy88cZAdCShorPxK1asADz/cE3xGUQ0dav6+gM8/PDD+epOxlGb7AUXXBBpO+GEEwBvxKTbqI92eSlgNY+LpjPOl3dFKnz00UeAF22tqllLtEHi2AR9zqj9X4szdO/ePbKNKW7DMAyjDPbgNgzDCBiBN5XE1sfLF/7zx/YlV307+eSTgbI19L7//nsgOjGQukJpAFNFvPPOO5FlDWTSSZ4g15rUdAiaAvTbb7+NrPvmm2/y0qdsoOYP/wR1IvT7umzZMsALwtIJa/Dum7rc6v3Ldy3GZFDTngYr6fe4Muj/kpol1eyUK0xxG4ZhBIzAK+7y0Pp72UzHqS6HGiKbT7QIgk4s6qSLhnH7AwPUBUoDT5SmTZsCnpLQyUtNlA+ppQEtVDR5mKYH0ACrqoYmEfMXhzjjjDMAr1DCm2++CXiuolo4QgNw/N8BdR1s0KAB4I3C9H+tkIOWdAI+E0nDcp3GQjHFbRiGETBSVtwi0hh4Atge2ACMcs7dKSK1gLFAU2Ae0M05tyL9rsZH1UI8tzW1O2lazkyirkRaQEFtxn7U/qeuQ9lGQ+pr1aoFeJW+44XgqtubprRV1L1J7Zzjxo0DqpbKLo9CLk+WDqqiNaAkdrk8tByZvyyZuhCqzVz/1/S9lsTLlyLNFbHzSbkiHcW9HrjCObcbcABwoYjsDvQFJjrnWgATw+8NwzCMDJGy4nbOLQYWh5dXichMoCFwLNA2vNnjwHvA1Wn1shw0LHnAgAHZOgXgFQ/QWfNDDz0UiE6VqmhorSam0WRF2ebGG29MeV9Ni3vkkUcCXtDQ4MGD0+5XIaNh3f/73/8AGD16dD67Exi03J++KjqvpHMl06dPz2m/coV66egc1/3335/T82fExi0iTYG9gc+AeuGHuj7c62biHIZhGEaItL1KRKQ68AJwqXPu92TDu0WkF9ALoEmTJul2Iy4HHXQQ4KVrvPrqkPBPZjZZQ6C1wK7a9OLZ0iHaF1S9ObKR3CpbdO7cGfD8ujXk1+/XXJXQ4sc6wtCQ7XylTggq+n+i
aAj8zJkz89GdSqFlCB977LFK79ulSxfASyuh6Z1zRVqKW0Q2JfTQHu2c0zLhS0Wkfnh9fWBZvH2dc6OccyXOuZJCrBBtGIZRqKTjVSLAw8BM55w/t+OrwFnA0PBrVutb/frrr4CnlPypGfUHQT0l1Pf0l19+qfC4ap+uKOpR07lq1CLAkiVLkup7IaElmhRNOl9VvQLUj129gbSgrJEcOhJt27Yt4P3/rVy5EshNMYFU0ZSsOn9TGcXdqlUrwCsArvdhypQpGexhxaRjKjkIOAOYJiJfhduuIfTAflZEegLzgZPS66JhGIbhJx2vkg+BRAbtdqket7JoIVdV0/60lRpBqKjNOx3U/1nTuZ522mkALF26NO1j55N999036n0Qck6kg46KdPS100475bM7GaNZs2aA5/Wg8yzp5JPREnw6DwJevg9V2vPnzwfgoosuSvk8uUI9wiozf6NzWxpRPG3aNCB5X/hMY5GThmEYAcMe3IZhGAGjyiSZ0vBuPz179gS8iahU+OuvvwD45JNPAK+6hb+aTFXi999/B7zqNkYwUBPJp59+CnhpDzRA5pFHHgE8E5+fRYsWAVCnTh3AS9969NFHA3DIIYcAsPfee5fZV/8/tMp7kCbm9To1wZh+9/1o8riXXnoJ8ILptDK8VoHPNaa4DcMwAkaVUdyKX3lr4I2mstTEUInwq+gPPvgA8Nx8/IUEqjKaGKuqpjeNRZWopkzQiTgIliukTj7q5Jl+1zVw7Oabbwbiu7eqC59e+6abbppw21jOO+88wHOPCwL6v3zqqacCXqGRCRMmlNn2pJNCTnGqtFu3bg3kT2krprgNwzAChuS75BdASUmJ81eJznVV9EIg9nPI9T3QEGXth7pX5op8Xb+qTFWqxx13XGRdJhLtV4ZM3AO122pIv1aqP+aYY+KeIx56Xt1W7eTPP/98ZBtNqhSbZCodcv0d6NixI+DNhWkwEXgpEDTAbvjwUIxhtkdh/ntQUlJCaWlp3JtgitswDCNgmOIuEPKtuPNNsV8/2D0o9usHU9yGYRhVFntwG4ZhBAx7cBuGYQQMe3AbhmEEDHtwG4ZhBAx7cBuGYQSMggx5LwQXxXxT7Peg2K8f7B4U+/WXhyluwzCMgGEPbsMwjIBhD27DMIyAUZA27mIPdYXiuwfFfv1g96DYrx+St+ub4jYMwwgY9uA2DMMIGPbgNgzDCBj24C5yWrVqRatWrVi/fj3r169njz32iBRCNQyjMLEHt2EYRsAoSK8SI3d06dIF8GaztTzVEUccAcDChQvz0zHDMBJiitswDCNgmOIucpo2bRr1vlmzZgDssssugCluwyhETHEbhmEEDHtwG4ZhBAwzlSSgY8eOANx0001AqOIyeJN4zz77LADLli0D4J577onsO3v27Jz1M1U23XRTwLuuWCZOnJjL7hQMrVu3jiwfc8wxAOy0004ANGnSBIClS5cC8OKLLwLw8ssv57KLhmGK2zAMI2iY4sZTVPfff3+k7bDDDgNg4403Bsomf+nWrVvU+5NPPjmy3Lt3b8BTZIVI165dAYo22Gb77bcH4IwzzgDg1FNPBaBFixaRbbbccksgceKf0047DYBHHnkk0nb11VcDsGLFigz32DA8THEbhmEEjLQVt4hsDJQCPznnOotILWAs0BSYB3RzzhWk/Nhhhx0AeOKJJwDYf//9E25bWloKeLZtpXv37gDsueeekbYhQ4YAha24TzrppLjtsdcXdKpXrw54AUX16tUDYNiwYVHr06Fnz56R5V9//RWAfv36pX3cbNC+ffvI8ptvvgnAiBEjALjiiivy0iej8mRCcf8XmOl73xeY6JxrAUwMvzcMwzAyRFqKW0QaAZ2AQcDl4eZjgbbh5ceB94Cr0zlPpmnTpg0A48ePB6BGjRplttHAk9GjRwNw1113AbBkyZKo7d5++20AJk+eHGmrW7cu4HkhzJ8/P2N9Twf/qEBt+LEMHDgwV93JCQMGDADg4osvBrzk/Ins1j/++GNk+Z133gHghRdeADwPohtuuAHw5gn8qGr98ssvgcIbwfTv3z+yrPfg8MMPB7wRqP8eJEK9b/T/YtWqVQAMHToUgI8//jhDPY5m0KBBQNnPTz8b/3xDItasWRP3GEEiXcU9ArgK2OBrq+ecWwwQfq2b5jkMwzAMH5Lqr46IdAaOcc71FpG2QJ+wjXulc66Gb7sVzrmacfbvBfQCaNKkyb7+X/lslSzaZpttAPjhhx8AqFkzulsfffRRZPmCCy4AYPHixQAsX7683GNPnTo1styyZUsABg8eDMD1119fYd9yUbbJf3066lBUMbVq1QqARYsWZfz85ZHJ69eRDnifi9qyEynuTp06AdFKUe9JLFtssQUA1113HRBtz9bjfvXVVwCcffbZAEyfPr3CfmfzO3DssccC0fMusedbt25d1KvO60ybNg2AvfbaK7LtwQcfDHheV9rXDz74AIC2bdtWuo/JXP8///wTd9vKoKr8jz/+iDrP6tWrI9vcfffdUfv89ddfQMXPgXTxX1dJSQmlpaVxvwTpKO6DgK4iMg94BjhCRJ4ClopIfYDw67IEHRzlnCtxzpVst912aXTDMAyjuEjZxu2c6wf0A/Ap7tNFZBhwFjA0/PpKBvqZEdRPO1Zp66/p5ZdfHmn75ptvKnVstaECvPvuuwDst99+KfUzH/z3v/8FyirtOnXqAHDggQdG2mbODM1Fz5kzJ0e9S45NNgl9nf0jnK233jpqG1VXqqzVF3vBggVJn+fPP/8EPMWtnioA55xzDgB77703AGeeeSYAV111VdLHzyQ6r/H000+XWae2XlWRjRs3BmDzzTcHoF27doDnkZMMiUYpmUK/c6pM9d7Hfs7loZ+REm8Upv74ikbL6rzYo48+CnhzW3///XfS588E2fDjHgq0F5HZQPvwe8MwDCNDZCRy0jn3HiHvEZxzvwLtMnFcwzAMoyxFEfKuiZROOeWUqHZ17dOh4LfffpvyOXQo5ae8gJ5copNKfndA5bvvvgPgueeei2rXSjgHHHAAEG0OWLlyJQA9evQAYNy4cRnucWroEFj7BWUnsfr06QN4ScEyMcT1TzzGnk8TVeXLVHL00UcDnvlj7dq1kXXqEqrfgdq1awOeO6ua+vSa/AnJ/PcYPHPLeeedl9kLiEHzxCvaJ3Vl9HP66acDnglI0Tm1hg0bJn1e/f7rZLO+ahCT38yq9zObWMi7YRhGwKjSirt58+YAPP7443HX669mOko7CKjCUDc2P6o4dcJNlfbxxx8PwIYNG8rso5O7I0eOBOBf//oXkP9AI+1zPDSY5o477sj4eT///POE6/Kd4ldHk/r5XnPNNZF16rKoqHucTtROmTIlav3OO+8cWY511dMka7l2I1WXRX31o4FTsai76G677ZbwuLVq1QI8Ja2TzbHoiMZ/rA4dOgDZnbw3xW0YhhEwqrTiVvW46667RrU/9NBDgBeuXlXZdtttAe964wUtqOuiJko66qijAE9plxfooHY/TZGaL8VdrVo1wBth+VFXT01dkA38dmM932a
bbQZEp4nNBxriroUzNECmMmjgmj91sX4vvv/+e8D7HgUB/Z4m83199dVXAW+eSO3mOl+mboh+G7uO7mLruWYSU9yGYRgBo8opbi1iAIltWNlIgBPPY0O9L/KFehI0aNCgzDpViToDrvbqWHRU4lcnZ511FuAFvOQbtT9qhXo/mt7gySefzNr5169fH1nWkOxC4dNPP037GBqMoiMr8K5TFWhVRYOUNF2Evqo3ic6f+dMDN2rUKOv9MsVtGIYRMApDMmUAVZcahgyeXW/u3LmAN2t+2WWXAV4BhUykdzzxxBPLtN1+++1pHzdbqFo+99xz465/+OGHAS/Z1j777BNZp4pb0XB4f2rbfBAvKVG2Epb50dJ34JU7Ux577LGsnz9bqLeQP52DoiH05XnUVGVeeSWUyUPjNzJRkKMymOI2DMMIGFVGcWvRV3+En9rhtEyXRkpOmjQJgI022ihqu1RQO7pGyIGXHlIT0hQiWmIrUbHg2Ag4f+RfrG07Fza9ZIg3cspFsnz/Zx97PvWfDiKaeExHEf7Uy4Vami1X6ChLPbdyjSluwzCMgFFlFHe8QqfqW6plpBRV4OkobbWdqv3PH5WokWfq41qIaEmtSy65pNztdAQTL0WtqvZEHinFwpFHHlmmTQtwBNHGrdGC+n+iowj//1hsCb9iYccddwTgxhtvBLy0x36yVbbNjyluwzCMgGEPbsMwjIBRZUwl9evXL9N22223xd02NrlOKmhlE53E81f+0HWFTEXVeXSory6V/kkYHTprUEu+TULz5s0D4Oeffwa8pGjNMNQAACAASURBVFrZZvjw4UB0jUu9Nxq45K9jWOh07twZKFtJXUO4Nfy7mOnVqxcAp556alS7Pz2wJp7KJqa4DcMwAkbgFbe6s8VLWVqZOoIVoZORDzzwAOClhP3tt9+A6HSh6m6YbzQUW/voV83+GpLxUNUaz51Ole2VV16ZkX6mi6YS1RGAP6m9Jv/RlK/+KuepovfR7waoaEBKvMnyQkWTdGlNVh1laUpaHVXGS/FbLJx//vkAdO/eHSj7fzFs2LDIsgb6ZRNT3IZhGAEj8Ipb7YuqEjKNBpfcfffdAHTt2hXwVOyIESMAGDBgQFbOnw5aTqpbt24AjB07NrKuRo0alTrWvffeG1nOd3GAROhIwB/mrsEjardNRXFrwFGbNm0Ar1SbKm//+dTNcsWKFZU+T75Q27bOE+l8jaZs0DmEYkSTx2nAUexcmrbr8yFXmOI2DMMIGIFX3Dprr/Y3DWMHLwF8svgT8fft2xeALl26AJ7NV9XIpZdeCniJqgoZDUTyh7dr//32YPA8bhYuXAh4Cbl++umnyDaZKLCbDbTg8U033RRpiy1oMGPGDMBLkqR23Xho8WFVXbHFptXO6Q+y0SLEQUJHjcp9990HwIMPPpiP7hQEmsJWPWliCwu//PLLgHevcp3awBS3YRhGwJBcJOGpiJKSEucv9plKKk5VlYcddlikTcNyVRHpOTQFrNqv1ePAXxBUlZoqelWi119/PZB5z5HYzyEX6UgLiUxe/wknnBBZVk8T/Tz1uMl87yvaVlPfDhw4MNKWjidTLr4Dfp9zHXWoh5Haslu2bAnkXkXm+3/AP0+mMSAXXnhh1DYvvfQS4I3G/PEbmcB/D0pKSigtLY17E0xxG4ZhBIzA27iV4447DoCvv/460ta4cWPAs1dXxLp16yLLWmR1zJgxQP6jA43keeGFFyLLOuehn2eicnblocnI1JtGk2rpq790WaGjcxYABxxwAFA2EjbIqWjTQed9IPdKu7KY4jYMwwgYVUZxq1+133NiyJAhANStWxfwbNlqn1bPCfXtnTp1amRfU9hVA/U00Ve153bq1AnwRlnt2rUDvOLJftT/feLEidntbA7we04pv//+O1CYsQi5QP30/fNjihb81nmMfCttxRS3YRhGwLAHt2EYRsCoMqYSRYd9UHaCwTA++eSTqFelWM0EAHPmzAHSqwgVZDRBXbx0rFrN3e/0UAiY4jYMwwgYVU5xG4aRHLNmzQK8tK3Fypo1a4DoEXqfPn0A6N27d176VBGmuA3DMAJGWiHvIlIDeAhoBTjgHOBbYCzQFJgHdHPOlZvjMhMh70En3+G++abYrx/sHuT7+v3nU7v32rVrc9qHXIW83wm86ZzbFdgTmAn0BSY651oAE8PvDcMwjAyRso1bRLYBDgXOBnDO/QX8JSLHAm3Dmz0OvAdcXZljF0Liq3xT7Peg2K8f7B4U+/WXRzqKe0fgZ+BREflSRB4Ska2Aes65xQDh17rxdhaRXiJSKiKlWrnEMAzDqJh0HtybAPsA9zvn9gbWUAmziHNulHOuxDlXokUKDMMwjIpJ58G9EFjonPss/P55Qg/ypSJSHyD8uiy9LhqGYRh+UrZxO+eWiMgCEdnFOfct0A6YEf47Cxgafn2lsscuttl0yP+Mer4p9usHuwfFfv2QvF0/3QCci4HRIrIZ8APQg5CKf1ZEegLzgZPSPIdhGIbhI60Ht3PuK6Akzqp26RzXMAzDSIxFThqGYQQMe3AbhmEEDEsyZUShrplao1ErXxdjIqLNN98c8O7BySefDMANN9wAeKlhTznllDz0zihmTHEbhmEEDFPcRU6zZs0AL33l+eefD0C1atWAwksgn23q168fWZ4wYQKQuDK8hWQb+cIUt2EYRsAwxZ0hvv3228iyKrFdd901X92pkMMPPxyAZ555BoBatWpFrddq5/fcc09uO5Yn1J6tKhsSK+1ioUWLFgA88MADALRu3Tqy7s033wTgjDPOyH3HDFPchmEYQaPKKe7rrrsusnz66acD0L59ewAWLFiQ8fOp4mjcuHGkbd68eRk/TzrsvvvuAIwePTrS1qhRIwBq1KgRte3zzz8PwMUXXwzAL7/8kosu5h0tlDt58uRIW7Epbh116WffsmVLwPMo2mOPPSLbXnXVVTnuXW7ZcsstgbKj5pkzZwK5L7AQiyluwzCMgFFlFHeDBg0AOPvssyNtO+ywAwBXXx2q46Aq4Y8//sjYefWY6utbSKjSfvvttwGoV69eZJ3a4X/88UcAunbtCniKotg8JtavXw/ABRdcEGl79tlnAWjevDkAd911V+47lgPUs+iRRx4B4MUXXwSgW7duUdsV8pxNOlxzzTUA/Pvf/4606ei0b99QpmqNb4hV3Pr+rLPOyk1nw5jiNgzDCBhVRnGrTUpVth+10b366qtAtOdAqpxwwgmAZwctRIV66aWXAp7SXrNmTWTdvffeC3hqwwjx119/RZbfeustALbaaisAunfvDsD++++f+45liD333DOyfP311wPQqlUrAC666CIA3nnnnbj7NmnSJLL8xRdfZKuLWeO4444DYODAgQDssssugJc+1v8/vO+++0a16Tb6/67v99lnn6h2gDZt2mTnAnyY4jYMwwgY9uA2DMMIGFXGVHLooYfm9HwHH3xwTs9XGXTCVIeGy5aFqse1a+elSddJFaNi1HVSJ3uDzJw5cyLLN998MwBz584FYNWqVeXue+6550aWX3rppSz0Ljs88c
QTgDf5qGZVNYPMmjUL8EwnfmJNoIne+ydu+/XrB8CQIUPS7nsiTHEbhmEEjCqjuI8//vh8dyFCvtRs9erVATjxxBMBz2WpS5cuee1X0NGEW1tvvXWee5I+/gnqqVOnJrWPBmltv/32kTZ1HSxkdOJdJ5VVHev/hQbPvfzyy0B08JW6/5WUhAp8/frrrwC88cYbAHTo0CHqXPq/BzBgwADAC1578MEHM3I9fkxxG4ZhBIzAK261V+lrPFRlZCLwRgN9VMXGQ5Py5Jodd9wRgL333huAp556Cii+1KyZRlVXLFOmTMlxT/JDp06dAPjtt98ibdlIH5EJ/O6tGjyjSltf9fP0J4aD6BGpLqvSVlSlq+LWeSR99Z9H5wQ0oCn2WOlgitswDCNgBF5xqzfJIYccknCbSZMmAfDhhx+mfb7yAn3yTawyHDlyZJ56UjWoWbMm4CVdikVTCVR1NNjs/vvvz3NPEqOeI2rPhsQ27Vilrfi9QNTTJBZVzU8//TTgBWfFm2PTIB4NXDLFbRiGUcQEXnErGoIaDw39zsb5Ntoo9Ns3ffr0yLpEv9bZQme0L7vssqi+pWPT1+vS2fXy2LBhA5DZuYR84f8e3XnnnYAXEq6o7bSqzx2oF42mRdZkbYVEnTp1AC+uwu9nrcux3iOJSOX/Nl5StlykvzDFbRiGETACr7g1mi3bv3L6y642dT2fqk1/khn15shVNKfa2TR6S/vWsWNHAKZNm1bpY5555plAcj6o69atA2DGjBmA58eqNuA///yz0ufPFvo5brJJ9Fd/s802A7zESwCnnXYa4N1PTUD13nvvZbubBcEll1wCeEp00aJF+exOXLRYitqR/SMm9aOuSGmng86bxRvxl2cFSBdT3IZhGAEj8Iq7V69eFW6jdlr1PU3GI0Rn0lU1165dG/BmimPxR6Q9+eSTFR4/FxxxxBGAl1di9uzZSe9bmbJdW2yxBeD5j6vf6qBBgwC48cYbkz5WpmnYsCEAF154IeAVSoiNgoyX2jMWjTRMZQQTJDbeeGPA800eMWIEEP0dLxQ0v0i8z02/h7nA7wMem/NEfctPOumkjJ3PFLdhGEbAsAe3YRhGwAi8qSQZtH7cBx98AHgTGskMj5PZBrwgH8hOUplU0DSu2jd/WledSEzEHXfcAXhBA37TSefOnQFvskqrgcdy+OGHAzB06NBIWzYnKjUZ1JFHHhlpe+ihhwDP1JUOGuB0zDHHALkdimeK/fbbL7J8+eWXA/DTTz8Bnpuj1k+sW7cukJmKUdlCTZn6f5rNCcHy8Af3qZOATU4ahmEYEQKvuJP5pW3atGnUq6JBJurSFw/dRpPNq3pVp35dn8/E8j///DMA//nPfwDPpU0nYdUF7uOPP47s8/zzzwOe655We1eWLFkCwK233lrmfLVq1QI8N0Cd/NXzqiuhpgPVya5so0o73mehoy1V/zpZqeq5Mtx3332ANwmVzYT5maJ+/fqAV7kevKIKWmBA07a2bdsW8D5PLcRRiJQ3OZlLygvAGTx4cMbPZ4rbMAwjYKSluEXkMuA/gAOmAT2ALYGxQFNgHtDNObcirV6WQ2zKxsqgSru8fdXup0lsVNlrUitV8f60jo8++mil+5IOeh2PP/444LltqTte8+bNAdhmm20i+5x99tlRr0rPnj0BePPNNxOeTxW1qnYNcddADb1HX375ZVR/Mo0Gzdx+++0AdOvWrcw2n332GQD9+/ePav/Xv/4V95iff/55ZLlPnz6Ad716b3QE89///heAo48+OrKPqtcVK7L2lU8J7Ze/PJkmRtL5jHPOOQfwqtvn+nucCmpb1pB3/8g7l+UM/efNhb09ZcUtIg2BS4AS51wrYGPgFKAvMNE51wKYGH5vGIZhZIh0bdybANVE5G9CSnsR0A9oG17/OPAekLXsNGrfVVWZCvPmzQNg8eLFkbaBAwcCiWfUX3vtNSBxys98okpY01nq+0033TSyTaJRhnphJONNo7a72G31Ndu2UVXNGlSj+D8THSmpmrzlllsAT60rsWXewAuZVq8LLSigtmBV3voa77iFQo8ePQD43//+F2lT9a0jI8XvIVXoqGfM66+/DkQnRYtX/DfT6EhbRzTgff/1+6OvmSRlxe2c+wm4DZgPLAZ+c85NAOo55xaHt1kM1I23v4j0EpFSESnVh69hGIZRMSkrbhGpCRwLNANWAs+JyOnJ7u+cGwWMAigpKUl5SlgTp48bNw6IH6qt6jnRL5+GqP/++++pdqMgURVy7bXXAvE9RLKB+jfffPPNWT3P0qVL47b7y1dpSt/GjRsDZRWxhrFrioN43xH9Xug6f9HcoKCjBk3KBp5tW72RdK5E/eGDwBdffAF46SzU99yPjjjVE0xHV+mgzx2dV/Lbs1Vx60gwG2Xe0vEqORKY65z72Tn3N/Ai8C9gqYjUBwi/Fq4vkWEYRgBJx8Y9HzhARLYE1gLtgFJgDXAWMDT8+kq6nSy3E/PnA7DHHnsAnnIC75cvGxFuOputngX5ithKhuHDhwPRpbbUf1vttumgquebb74B4LbbbgNg9erVaR+7PLQE1dixYwE4+eSTAc9nuTwefvhhwPMaKs8LRBX3+PHjgbKFFfxRqIVaREJt+y+88EKkTecgDjroIACGDRsGJI6ELWR0vkVH1+DZuNX+PHnyZMBTwLH+/v7nhP4/axSk/r+rTVuVdrw5IE04lc3YjpQf3M65z0TkeeALYD3wJSHTR3XgWRHpSejhnrmUWIZhGEZ6XiXOuf5A/5jmdYTUt2EYhpEFJN+hohCanCwtLY28L2SzQyzr168H4I033oi0de3atdLHif0csn0PNBhHh5N+ExN4Yc86AejPN6xo1W/1CkrHTJDO9etkoU6GquufHw0OuummmwB47rnnKt1HTSHw/vvvA54pyJ+8K9GEaTLk+jsQi05ia477ZPLWZ5JMXL/fNbNfv36AV4tVJ19j3Vfjub5WtI2+1+++mgvBmwRNpaq7vw8lJSWUlpbGvQkW8m4YhhEwTHGnSVAVd6FR7NcP+b8HOhGnLpKa3nXMmDE5OX+2rv+oo44CvElKHVGko7hVTWtd19ggplQxxW0YhlFFCXxaV8MwMsMPP/wAwDPPPAOkZqMtRDRthb727t0b8Fz7NGjPH7Yei6YF1rmSfBdLMcVtGIYRMMzGXSDk276Zb4r9+sHuQbFfP5iN2zAMo8piD27DMIyAYQ9uwzCMgGEPbsMwjIBhD27DMIyAYQ9uwzCMgGEPbsMwjIBhD27DMIyAUZAh74UQFJRviv0eFPv1g92DYr/+8jDFbRiGETDswW0YhhEw7MFtGIYRMArSxl3syWWg+O5BsV8/2D0o9uuH5O36prgNwzAChj24DcMwAoY9uA3DMAKGPbgNwzAChj24DcMwAkZBepUYhpE/Fi1aBMD2228PeIVxzzvvvLz1yYjGFLdhGEbAsAe3YRhGwDBTiWEYUWgQyIYNG4BQtfGqxA477AB41zl//nwA9thjj8g2hx56aNx9TzjhBACGDx8OwGuvvZa1fpaHKW7DMIyAYYo7S
WrVqgXAnDlzABg6dCgAt956a976lAk23XRTALp37w7AkCFDAKhbty4Af//9NwB9+vSJ7HP//fcD8M8//+Ssn0b2adeuHQA1atTIc0+yy+jRowFo0qQJAH/88QcQfd21a9eOu6+G4e+zzz4A3HjjjQC88MILkW1UwWcTU9yGYRgBo6gU9yabhC63f//+ANx8882ApyrLQ9XINttsA8BOO+2UjS7mjM033xyAsWPHAtC5c2cAlixZAsAHH3wAePfs4osvjuy73XbbAd59DCI6grjlllsAmDZtGuCpL4AFCxYA8PzzzwMwfvx4AFavXp2zfuaC+vXrA3D88ccDsNlmm0Wtf+KJJ3Lep2zQtWtXAFq3bg3AlltuGbX++++/jywvX7487jE22iikdVWt64i7R48ekW38tvJsYYrbMAwjYFSouEXkEaAzsMw51yrcVgsYCzQF5gHdnHMrwuv6AT2Bf4BLnHNvZaXnKXDNNdcA0K9fPwC23nprAC699NIK9z311FOj3k+cODHDvcstL774IgAdOnQAYP369QD06tULgDfeeCNqe7/Nb9iwYbnoYlbp2LEj4HkWtGrVqsw2+++/P+B5EqjSHjRoEFA17gN4166fvaIjjJEjR+a8T9lA/89jlbay6667Jn2sWbNmAdC8eXMAdtttt8g6/b747d6ZJhnF/RhwdExbX2Cic64FMDH8HhHZHTgFaBne5z4R2ThjvTUMwzAqVtzOuUki0jSm+VigbXj5ceA94Opw+zPOuXXAXBGZA7QBPslMd9OjcePG5b6Px1FHHQXA0UeHfrtmz54NZPfXNFuceeaZkeUjjzwSgD///BOA008/HSirtJVff/01sly9evVsdTHrqG2/adOmUe0jRowAYMyYMZG2Tp06AZ4S69atGwDXXnst4HnXBN3mrT7LsYUL9Luwbt26nPcpk+j1HXLIIXHXv//++5U+pu6jitvPwQcfDORfccejnnNuMUD4tW64vSGwwLfdwnBbGUSkl4iUikjpzz//nGI3DMMwio9Me5XEqzUUtxaPc24UMAqgpKQkuXo9abLzzjtHvU+kLv2oPVxn2tW2pTbhIFCzZk0AbrvttkjbxhuHLFjPPvssAC+//HLSxwtysqFGjRoBXvScqmW1Vy9dujSy7ZQpU6K2VcWtI4569epFHSNobLHFFgAcfvjhgGfvVyGlI4qgorbsRHNYa9asAbzRVmV46aWXADjnnHNS7F16pKq4l4pIfYDw67Jw+0LAb39oBCxKvXuGYRhGLKkq7leBs4Ch4ddXfO1Pi8hwoAHQApicbiczhdq4fv/9dwAmTJhQ4T6qqtT+99FHH2Wpd9njyiuvBKI9Q1RtXHLJJZU+3ooVKzLTsTzg90cHLxLWr7STZccddwSi/X+DhI5A1XtGGTBgQD66k3EaNGgAQJcuXeKunzp1KgDjxo2r9LFLS0sBb1S27777ptLFlEnGHXAMoYnIOiKyEOhP6IH9rIj0BOYDJwE4574RkWeBGcB64ELnnMVFG4ZhZJBkvEpOTbCqXYLtBwGD0umUYRiGkZiiCHnXUF6dfNEkMxrSHI8DDzwQ8NzGdN8gugGeeOKJgHcN4IVxB9nskQoaqqymr2QmmdWUoPtoWoAPP/wwG13MGbvvvnvcdv1uxG43Y8aMrPcplwwcODDlfTXtQ506dcqsO+ywwwDPNOl3pc0UFvJuGIYRMIpKcSu//fZbhftoukZ1A/zss88AWLhwYWY7l0VUXWqQgF9x+9O0xkMTD6nL5MMPPxxZd88992S0n7lEJ2X1Xtx9990V7nPQQQdF7TNz5kwA1q5dm40u5gwNKould+/egOf2Wa1aNSA6AVffvn0BePLJJ7PZxYwQG1hUUXsyqItobCAXwF577QXAtttuC5jiNgzDMKjiiltTr2rSFyVR4I2/RJOGhKvKuuOOO4DkUsAWCldddVXUe1WbUHEY80UXXQR4KTAT2UODhro/PvXUU0D54c4aaHPWWWdFtX/77bdZ6l1uUcUZqzyvv/56wEthqiXMNCkbwKOPPgp46X/nzZuX1b6mg3+kCd5nPmnSpJSPed1118U9NsBjjz0GZLeggiluwzCMgFGlFbeGdWt5LkW9Sn788ceodi3XFQ9NHKOq47vvvgO8ckevv/56BnqcXb788svIsl99+9EwaPWqUaqKylQvmrfeqjjbsI66VHn/9NNPgFeAI4hoIRDwPuN4qhHgq6++ArzEXLEpIwDOP/98wLN5BwG11acyRxFbjCEeOoeWzbQYprgNwzACRpVW3PqLumrVKsCb5dVEQ/oaj1i7X2yotPLxxx8Dham41U9dr8Wf1vKyyy4DvHukiYY0hWlsYQFNcQpw5513ZqnHhYXOayiafCmV8PhC4YwzzogsN2vWLO42WjRE/bm1FJeGdxcz6qmVqBhDrjDFbRiGETCqtOLWWV1VDOqfqvYptfdpMhr1VwXYfvvtAc/+pwUHYv2A1d+7ELnvvvsArxCw326dqOyWRgW+/fbbgGfn/de//hXZRu+NbluV8BfXUF9d/ayD7L+uqGIsj1QKCxQysaPnVPy3tRjDXXfdBXieNvHIRUStKW7DMIyAUaUVt6K2Xi2KEEvDhqEiPX6vC2X69OmA58ur+RqC4M+ttv0jjjgC8Mqwgefbrukply9fDngFFTRNpSpu9TaBsl46VQl/Un1V2pq29bnnnstLnzKJft6ZIgjFFmK9ZhJ50cQjthiDKu3YY/jzuOQin5EpbsMwjIBhD27DMIyAURSmkorQSUl/SK+iwRZff/11TvuUSdSs43dZrMh9UZNpaWi8BmFA/PtUVWjbtm2ZNq3VmShoKUi89957kWX9bP1mMPAm7zTkvUOHDlHtfoKeaCsRmoBL00YkqhCvz4XYRHbZxhS3YRhGwDDFTeLQePAmJ4sNTQcQT3FrYYYgh37HoiHb/tGE1iZ9/PHH89KnbKBBROAF2HTv3j1qG00ott9++wEwePBgIHpCTlMHBGGSPha9Pg02mzVrFhA9MX366acDXtBeLDoZqUo7mwml4mGK2zAMI2CY4gZ69OgBRNvw0kmyXpVQJeFPquNPf1tVOPfccwHPrgvw2muvARWnwA0qiVKxavCV/g/Ec58755xzgMIufbdy5UoApk2bBniBeFr8IHY07f/sEwXY6LHUTTYbRRKSwRS3YRhGwDDFjRfOHk9ZqNLUNK7FhioMf9KpRDPsQURHD/FKUMUWoqhqaDk6Vc9ari4WVdX33ntvpO2dd97Jcu/S55dffgGgffv2ALz77rtA4qIgfpWdKMBm5MiRQP6UtmKK2zAMI2CY4gbGjBkDeOWI/JRXXKEYiJcMXj0vNtlkk4TbBIVBgwYBnsKaMGFCZF1VTKLlR1NBdOnSBYABAwYA0LFjR8BLNnXDDTcA8NFHH+W6ixlB1bH64z/wwANA+akb1Gd/6tSpgOdlkmvvkUSY4jYMwwgYprgrYOLEifnuQl755JNPgOgE/IomrUpUfLmQUR9ef7paKOw0vdlCo/+0LFdV5cknnwS8z768OQxNKqWFfwsNU9yGYRgB
wx7chmEYAcNMJXiO+v5agrNnzwa8is3Fioa+T5o0KdKm1UCWLVuWlz5lgiuuuAKIrnoEVaeavZGYa6+9Nuo1iJjiNgzDCBimuPGUtlbCMTzeeuutqNeqwo477hj1XoM1gpg0ySg+THEbhmEEDFPcRlHSrl27fHfBMFLGFLdhGEbAqPDBLSKPiMgyEZnuaxsmIrNEZKqIvCQiNXzr+onIHBH5VkQ6ZKvjhmEYxUoyivsx4OiYtreBVs65PYDvgH4AIrI7cArQMrzPfSKyccZ6axiGYVRs43bOTRKRpjFtE3xvPwVODC8fCzzjnFsHzBWROUAb4JPKdCpeetVio9jvQbFfP9g9KPbrL49M2LjPAcaHlxsCC3zrFobbyiAivUSkVERK/XXwDMMwjPJJ68EtItcC64HR2hRns7g/m865Uc65EudcyXbbbZdONwzDMIqKlN0BReQsoDPQznljmoVAY99mjYBFqXfPMAzDiCWlB7eIHA1cDRzmnPvDt+pV4GkRGQ40AFoAk1M4firdCjSx9rxiuwfFfv1g96DYrx+St+tX+OAWkTFAW6COiCwE+hPyItkceDt8cz91zp3vnPtGRJ4FZhAyoVzonPsnpSswDMMw4pKMV8mpcZofLmf7QcCgdDplGIZhJMYiJw3DMAKGPbgNwzAChj24DcMwAoY9uA3DMAKGPbgNwzAChj24DcMwAoYVUqgALWf29NNPA3DQQQcB8PDDIY/I8847Lz8dMwyjaDHFbRiGETBMcSdAlfb9998PwIEHHgjAhg0bAEs5aVRd2rdvD8CNN94IwAEHHJBw2wULQslAb775ZgAee+wxwPs/CSo1a9YEoH79+gA0b94cgKOOOipqu1NOOQWAWrVqRdpinw2LFoXSNbVp0waAJUuWpN0/U9yGYRgBwxR3AnbYYQcAjj46tviPUZXQkZSqoUsuuSSyrmnTpgCsWrUKgIsvvhiAJ598Moc9TI8ePXoAsOeeewIwc+bMyLoHHngAgDp16gBw2WWXAXDFFVcAsG7dOgAWL16c8PhbbrklAKNGjQK8kergwYMB+Oefwk9V1KVLCAN59AAAEH5JREFUl6hXgEMOOQSAFi1aABWPsP3rY7dV1a4q3hS3YRhGEWIPbsMwjIBhphIfTZo0iSyPGzeu3G07d+4MQMeOHQEYP358eZsXLNWqVQPg5JNPBuD4448HoFOnTgC88847AFx++eWRfebPnw94JoQgoSawnXbaCYAnnngCgC+//BLwzCMAc+fOBWD77bcHvIlqHfreeuut2e9wiuy6664AXHfddYB33ZMmTYpso20XXHABAFtvvXXUMfbbbz8Apk2blvA8++67LwAvvfQSAP379wc899nvv/8+javILGqqUHNY165dAWjdujVQfv7vlStXAvD3339Hteuk5Cab5PZRaorbMAwjYJji9uGfdNpmm22AxG5NqsiDqrRVcapqVPWxevVqAEaOHAl4AUd+pfbII48AcOWVV+amsxng0EMPBeCZZ54BoG7duoA3eaajiA4dOkT20SLWu+22GwAPPfQQ4LmAqQIvpJFHjRo1AHj99dcBT1Ureh9ilwGmT58OwC233AKUr7SVKVOmAN49eeuttwCoV68ekF/FvemmmwJw0kknAXDPPfcA3v+2TiL++OOPAMybNy+yr35PVqxYAcD7778PwC+//BJ1jqlTpwKw++67J+yHqvU1a9akeCVlMcVtGIYRMExxFxHHHXdcZFnVsqpJDd2fMGEC4AVW3HnnnQBceOGFkX233Xbb7Hc2Dfy2SnVXu+uuuwAYMWIE4LkB3nvvvYBny4/H119/DcD5558PwMEHHwzAa6+9BsC3334b2fbSSy8FYO3atWleRWpsscUWQFml/fHHHwPR8ziNGjUC4O233wbg2GOPBeCvv/6q9Hn1+N999x3guSFqez7Q0cfjjz8e1a4qWkcJ+vnGqul46P3V61O7eTx0lKrb6qguE5jiNgzDCBimuIE+ffoAsNdee1W4rXof3HDDDVntUyZR9aWqE6B69eoAnHjiiQBMnDgxah+dLVdF7kc9TgoN9Yro27dvpO2MM84APKWZjkdIaWkp4CluDdLQ9+AFbBxxxBGVPn4m0OuNRW20/fr1K7Nu6NChQGpKOwjEeouo50vsd7482rVrB8CgQaFyuiUlJVHr/fdO7fynnhoq1/vnn39WsscVY4rbMAwjYJjiBmrXrg14Ps3loSHwaicLAsOGDQOi7XFq706kOoYMGQLEV45qJy401JPBr7iVww8/HID//e9/lT7umWeeCXgh4Uo8v9/DDjsM8HygP//880qfLx005DyWX3/9FYi242qo+8KFCzPej2bNmgGw+eabR9o0hD5XqDeHpmA+55xzAM+PW0PPX3zxxYTH0Dme7t27A978TmxYe+/evSPLmmgrm5jiNgzDCBhFrbjV9zLWn7WqoLZotWOrXzJ4ngQ6yth7770BT62WZ8f+8MMPM9/ZDKDX6VdDqjBTUdqKqsdEiYb87XqP1Tc43yxfvhzwvGZ23HHHyDpViepRdNVVV2XsvG3btgU8zw6ApUuXZuz4yaBRjhdddBHg2aU14VZssjC/z/mYMWMA2HnnnQHvM9ZX9d9WO/bs2bMzfwHlYIrbMAwjYBS14tbk6GqPrGpo9NzkyZMBT1UDfPPNNwBsttlmgKcq169fD3gKVRWatoPnWVNoqOL2o0nsU0FtwP77VhFqV122bFnK500HjWRU1HdYoyLVgwpgq622AuCHH35I+7w6F9KyZcu0j5VpVHlrBOV7770HeB5GY8eOrfAY6uutUaXPPfdcprtZKUxxG4ZhBAx7cBuGYQSMojSVaApGDS7ZaKOyv1+xbVo5xG8yCApnn3024AUPxEMnW4YPHw7ArFmzAG9CSd9DdgIK0mGXXXYBvImkeOtS4YMPPgC8oJpE+L8TX331VcrnywQ66azcfvvtUe/9gSK9evUCMlMfUk1u+r+lE3+5npAsDzUJvfLKK4CXwqA8dFsNbPrjjz+y1LvKYYrbMAwjYBSl4tbE8QMHDgTKVxy6Tt3kCimFZ7KoWj7hhBOS3sdfe7HQ0SRPo0ePBuDcc8+NrHv++ecrdSz/BKe/qEJ53HbbbZFlLVyQLzSkXYODYhP/+8lEPUhNp3DWWWdFnffdd99N+9iZRp0R1P23vMIJ6uaX70nIRJjiNgzDCBhFqbiTsW0pGg6b69DlfOMPVYbcBxikwgsvvAB4tluA008/HfCUqCaz17SuWjhBlbY/ZLyiyt5vvvkmkH+V7Sc2UOTII48EKj/ySBadJ1JXOz1vPtO5Kvod1v/3m2++GfDcIMv7fI866ijAFLdhGIaRISpU3CLyCNAZWOacaxWzrg8wDNjOOfdLuK0f0BP4B7jEOfdWxnudIlr0VUsalcdnn30GeEnQ85UYP9+oHfCjjz7Kc08qRm35/nkITV+rXkFq19XyVbH4VVhFilvLuxUSscnPtPScFpTIlFeEho0/8MADUe1aiCNfAUj+kWKXLl2A6DkI8L4DGpimKRz8wUnqiTVq1Cig8EbcySj
ux4CjYxtFpDHQHpjva9sdOAVoGd7nPhHZOCM9NQzDMIAkFLdzbpKINI2z6g7gKuAVX9uxwDPOuXXAXBGZA7QBPkm/q+mjNq5kvAXUN7dYlbYSm1SnkNH0pOoRAJ5femyyoER88cUXkWUtw3XyySdHbaPeK1q6rJDQgrhaak5t0GrDf+KJJ1I+to5YAV599VWgbBpZnUvIl/eVqmzwEkXFon3U74b6oPsTcB1//PGAV0Q7iIq7DCLSFfjJOfd1zKqGwALf+4XhtnjH6CUipSJS6s9aZxiGYZRPpb1KRGRL4FrgqHir47TFlTjOuVHAKICSkpLyZVCaaGkptfclg5ZzKla0VJOiZbuCwPjx48ssayIxTe0ZG2WpiYf8KWufeuqpqG1+++03oLB93LVggipitdVqEYFUFLfOCWmqViirtNW2rl47uUaLhFx++eUJt7n++usBT2krGk167bXXRtq08IbOcd1///1AeknLMkkq7oDNgWbA1+GJq0bAFyLShpDCbuzbthFQGFdqGIZRRaj0g9s5Nw2oq+9FZB5Q4pz7RUReBZ4WkeFAA6AFMDlDfU0ZzTWRKOeEqgT9dYVo1VZMaCTcPvvsA3j3IUil2uKhNsqKbJVXXnllZNmvMMHLA6LKO0jsscceAPz73/+OtL388stJ7XvppZcCXjk7P/q90NJec+bMSaufqaLzV23atIm0rV69GvCiOjXvSCL8fdccK5qmdqeddgIKR3FXaOMWkTGEJhd3EZGFItIz0bbOuW+AZ4EZwJvAhc659ONqDcMwjAjJeJWcWsH6pjHvBwGJ09AZhmEYaVGlQ941+EKDBRIxYMAAIPmhY1VGJ51q1aoFeBM3FbnRVRVat24dWVY3MTWl3XDDDXnpUyrcddddgBfy3qhRIyA6PF+rHCUy/ahrYbzKQp98EvLw1f8dDbzJN/7vqQYbVWQiKe84mUh5mw0s5N0wDCNgVGnFrW5MdevWrWBLQ9EJWg11z1focq6pXbs24LkLgqe67rzzTsALyAkCGjClASQaLLTXXntFtvnvf/8LwIIFodCLGTNmAN5knlZ/jzfaGjFiBFA4SjteoJwmk4odKel3O/a6NLEUeO6iGmPy448/Zq6zGcAUt2EYRsCo0opbXZVUWWiinWOOOSZvfQoKmoinWOz+mmDIX+pMFWhsObAgocpbU92+/vrrkXWxSlRtwtWqVYvbrmHg4JV1KxTU1u6fo1D7vgbeKIkUdzw0CMsUt2EYhpEWVVpxK2qn8gcfGPFp0qQJAMuXLwfgrbcKJitvVvErNUXTtk6aNCnX3ck4OmrYe++9I239+/cHvP8LHZEqRx8dSgqqaQAKOeGaJrW65pprIm1a/ENt11q6LBlUpcemrS0UTHEbhmEEjKJQ3EbFqDeF2gWLLWPj3Llzy7QlWyw4CKg/8rRp0yJt8fyzg86XX34Zd7mqYYrbMAwjYJjiNgCYP39+1Hu/90ExcPHFF0e9GkYhY4rbMAwjYNiD2/h/e3cXIlUZx3H8+2MtSyPUzDJX0kIqk0qRsBciskhN9NZIEOoyyKIoRQi6Lnq56IWwFynRC7MSoVAs6CrTLM1Sc03RNUsjeqEglX5dnGd1GmZsJWaf57D/Dyw755wRvjs75+/MM7O7IYSaiaWSAJz+0faurvjbziGULh5xhxBCzRT5iHuw/ArRMxnst8Fg//ohboPB/vWfSTziDiGEmonBHUIINRODO4QQakYlrCNJOgb8AfyUu6WfRhOtnRCtnRGtndHp1sttX9zqQBGDG0DSVtvT//ua+UVrZ0RrZ0RrZ+RsjaWSEEKomRjcIYRQMyUN7ldzB5yFaO2MaO2MaO2MbK3FrHGHEELon5IecYcQQuiHIga3pFmS9kjqkbQkd08jSeMlfSxpl6SvJS1O+0dJ2ihpb/o8MncrgKQuSV9IWp+2i+wEkDRC0hpJu9Pte1OJvZIeSd/7nZJWSTqvpE5Jr0s6Kmlnw762fZKWpnNtj6S7M3c+nb7/OyS9K2lE7s52rQ3HHpNkSaNztWYf3JK6gBeB2cBk4F5Jk/NW/ctJ4FHb1wAzgAdT3xJgk+1JwKa0XYLFwK6G7VI7AV4APrR9NXA9VXdRvZLGAQ8B021PAbqABZTV+SYwq2lfy750310AXJv+zUvpHMzVuRGYYvs64FtgaQGd0LoVSeOBu4CDDfsGvDX74AZuBHpsf2f7OLAamJ+56RTbR2xvS5d/pxou46gaV6SrrQCy/wl5Sd3APcDyht3FdQJIuhC4DXgNwPZx279QZu8Q4HxJQ4BhwPcU1Gn7E+Dnpt3t+uYDq23/ZXs/0EN1DmbptL3B9sm0+SnQnbuzXWvyHPA40Pji4IC3ljC4xwGHGrZ7077iSJoATAU2A5fYPgLVcAfG5Cs75XmqO9XfDftK7AS4AjgGvJGWdpZLGk5hvbYPA89QPcI6AvxqewOFdbbQrq/k8+1+4IN0ubhOSfOAw7a3Nx0a8NYSBrda7CvurS6SLgDeAR62/VvunmaS5gJHbX+eu6WfhgDTgJdtT6X6lQclLeMAkNaG5wMTgcuA4ZIW5q36X4o83yQto1qWXNm3q8XVsnVKGgYsA55sdbjFvo62ljC4e4HxDdvdVE9FiyHpHKqhvdL22rT7R0lj0/GxwNFcfcktwDxJB6iWm+6Q9DbldfbpBXptb07ba6gGeWm9dwL7bR+zfQJYC9xMeZ3N2vUVd75JWgTMBe7z6fcnl9Z5JdV/3tvTOdYNbJN0KRlaSxjcW4BJkiZKOpdqkX9d5qZTJIlqHXaX7WcbDq0DFqXLi4D3B7qtke2ltrttT6C6DT+yvZDCOvvY/gE4JOmqtGsm8A3l9R4EZkgalu4LM6le5yits1m7vnXAAklDJU0EJgGfZegDqneUAU8A82z/2XCoqE7bX9keY3tCOsd6gWnpfjzwrbazfwBzqF5R3gcsy93T1HYr1dOeHcCX6WMOcBHVq/V70+dRuVsbmm8H1qfLJXfeAGxNt+17wMgSe4GngN3ATuAtYGhJncAqqvX3E1QD5YEz9VE95d8H7AFmZ+7soVof7ju3Xsnd2a616fgBYHSu1vjJyRBCqJkSlkpCCCGchRjcIYRQMzG4QwihZmJwhxBCzcTgDiGEmonBHUIINRODO4QQaiYGdwgh1Mw/qh1ppwUJm5kAAAAASUVORK5CYII=\n", 126 | "text/plain": [ 127 | "
" 128 | ] 129 | }, 130 | "metadata": { 131 | "needs_background": "light" 132 | }, 133 | "output_type": "display_data" 134 | } 135 | ], 136 | "source": [ 137 | "batch_size = 128\n", 138 | "\n", 139 | "# Images are usually in the [0., 1.] or [0, 255] range, Normalize transform will bring them into [-1, 1] range\n", 140 | "# It's one of those things somebody figured out experimentally that it works (without special theoretical arguments)\n", 141 | "# https://github.com/soumith/ganhacks <- you can find more of those hacks here\n", 142 | "transform = transforms.Compose([\n", 143 | " transforms.ToTensor(),\n", 144 | " transforms.Normalize((.5,), (.5,)) \n", 145 | " ])\n", 146 | "\n", 147 | "# MNIST is a super simple, \"hello world\" dataset so it's included in PyTorch.\n", 148 | "# First time you run this it will download the MNIST dataset and store it in DATA_DIR_PATH\n", 149 | "# the 'transform' (defined above) will be applied to every single image\n", 150 | "mnist_dataset = datasets.MNIST(root=DATA_DIR_PATH, train=True, download=True, transform=transform)\n", 151 | "\n", 152 | "# Nice wrapper class helps us load images in batches (suitable for GPUs)\n", 153 | "mnist_data_loader = DataLoader(mnist_dataset, batch_size=batch_size, shuffle=True, drop_last=True)\n", 154 | "\n", 155 | "# Let's answer our questions\n", 156 | "\n", 157 | "# Q1: How many images do I have?\n", 158 | "print(f'Dataset size: {len(mnist_dataset)} images.')\n", 159 | "\n", 160 | "num_imgs_to_visualize = 25 # number of images we'll display\n", 161 | "batch = next(iter(mnist_data_loader)) # take a single batch from the dataset\n", 162 | "img_batch = batch[0] # extract only images and ignore the labels (batch[1])\n", 163 | "img_batch_subset = img_batch[:num_imgs_to_visualize] # extract only a subset of images\n", 164 | "\n", 165 | "# Q2: What's the shape of my image?\n", 166 | "# format is (B,C,H,W), B - number of images in batch, C - number of channels, H - height, W - width\n", 167 | "print(f'Image shape {img_batch_subset.shape[1:]}') # we ignore shape[0] - number of imgs in batch.\n", 168 | "\n", 169 | "# Q3: How do my images look like?\n", 170 | "# Creates a 5x5 grid of images, normalize will bring images from [-1, 1] range back into [0, 1] for display\n", 171 | "# pad_value is 1. (white) because it's 0. (black) by default but since our background is also black,\n", 172 | "# we wouldn't see the grid pattern so I set it to 1.\n", 173 | "grid = make_grid(img_batch_subset, nrow=int(np.sqrt(num_imgs_to_visualize)), normalize=True, pad_value=1.)\n", 174 | "grid = np.moveaxis(grid.numpy(), 0, 2) # from CHW -> HWC format that's what matplotlib expects! Get used to this.\n", 175 | "plt.figure(figsize=(6, 6))\n", 176 | "plt.title(\"Samples from the MNIST dataset\")\n", 177 | "plt.imshow(grid)\n", 178 | "plt.show()" 179 | ] 180 | }, 181 | { 182 | "cell_type": "markdown", 183 | "metadata": {}, 184 | "source": [ 185 | "## Understand your model (neural networks)!\n", 186 | "\n", 187 | "Let's define the generator and discriminator networks!\n", 188 | "\n", 189 | "The original paper used the maxout activation and dropout for regularization (you don't need to understand this).
\n", 190 | "I'm using `LeakyReLU` instead and `batch normalization` which came after the original paper was published.\n", 191 | "\n", 192 | "Those design decisions are inspired by the DCGAN model which came later than the original GAN." 193 | ] 194 | }, 195 | { 196 | "cell_type": "code", 197 | "execution_count": 4, 198 | "metadata": {}, 199 | "outputs": [], 200 | "source": [ 201 | "# Size of the generator's input vector. Generator will eventually learn how to map these into meaningful images!\n", 202 | "LATENT_SPACE_DIM = 100\n", 203 | "\n", 204 | "\n", 205 | "# This one will produce a batch of those vectors\n", 206 | "def get_gaussian_latent_batch(batch_size, device):\n", 207 | " return torch.randn((batch_size, LATENT_SPACE_DIM), device=device)\n", 208 | "\n", 209 | "\n", 210 | "# It's cleaner if you define the block like this - bear with me\n", 211 | "def vanilla_block(in_feat, out_feat, normalize=True, activation=None):\n", 212 | " layers = [nn.Linear(in_feat, out_feat)]\n", 213 | " if normalize:\n", 214 | " layers.append(nn.BatchNorm1d(out_feat))\n", 215 | " # 0.2 was used in DCGAN, I experimented with other values like 0.5 didn't notice significant change\n", 216 | " layers.append(nn.LeakyReLU(0.2) if activation is None else activation)\n", 217 | " return layers\n", 218 | "\n", 219 | "\n", 220 | "class GeneratorNet(torch.nn.Module):\n", 221 | " \"\"\"Simple 4-layer MLP generative neural network.\n", 222 | "\n", 223 | " By default it works for MNIST size images (28x28).\n", 224 | "\n", 225 | " There are many ways you can construct generator to work on MNIST.\n", 226 | " Even without normalization layers it will work ok. Even with 5 layers it will work ok, etc.\n", 227 | "\n", 228 | " It's generally an open-research question on how to evaluate GANs i.e. 
quantify that \"ok\" statement.\n", 229 | "\n", 230 | " People tried to automate the task using IS (inception score, often used incorrectly), etc.\n", 231 | " but so far it always ends up with some form of visual inspection (human in the loop).\n", 232 | " \n", 233 | " Fancy way of saying you'll have to take a look at the images from your generator and say hey this looks good!\n", 234 | "\n", 235 | " \"\"\"\n", 236 | "\n", 237 | " def __init__(self, img_shape=(MNIST_IMG_SIZE, MNIST_IMG_SIZE)):\n", 238 | " super().__init__()\n", 239 | " self.generated_img_shape = img_shape\n", 240 | " num_neurons_per_layer = [LATENT_SPACE_DIM, 256, 512, 1024, img_shape[0] * img_shape[1]]\n", 241 | "\n", 242 | " # Now you see why it's nice to define blocks - it's super concise!\n", 243 | " # These are pretty much just linear layers followed by LeakyReLU and batch normalization\n", 244 | " # Except for the last layer where we exclude batch normalization and we add Tanh (maps images into our [-1, 1] range!)\n", 245 | " self.net = nn.Sequential(\n", 246 | " *vanilla_block(num_neurons_per_layer[0], num_neurons_per_layer[1]),\n", 247 | " *vanilla_block(num_neurons_per_layer[1], num_neurons_per_layer[2]),\n", 248 | " *vanilla_block(num_neurons_per_layer[2], num_neurons_per_layer[3]),\n", 249 | " *vanilla_block(num_neurons_per_layer[3], num_neurons_per_layer[4], normalize=False, activation=nn.Tanh())\n", 250 | " )\n", 251 | "\n", 252 | " def forward(self, latent_vector_batch):\n", 253 | " img_batch_flattened = self.net(latent_vector_batch)\n", 254 | " # just un-flatten using view into (N, 1, 28, 28) shape for MNIST\n", 255 | " return img_batch_flattened.view(img_batch_flattened.shape[0], 1, *self.generated_img_shape)\n", 256 | "\n", 257 | "\n", 258 | "# You can interpret the output from the discriminator as a probability and the question it should\n", 259 | "# give an answer to is \"hey is this image real?\". If it outputs 1. it's 100% sure it's real. 0.5 - 50% sure, etc.\n", 260 | "class DiscriminatorNet(torch.nn.Module):\n", 261 | " \"\"\"Simple 3-layer MLP discriminative neural network. It should output probability 1. for real images and 0. for fakes.\n", 262 | "\n", 263 | " By default it works for MNIST size images (28x28).\n", 264 | "\n", 265 | " Again there are many ways you can construct discriminator network that would work on MNIST.\n", 266 | " You could use more or less layers, etc. Using normalization as in the DCGAN paper doesn't work well though.\n", 267 | "\n", 268 | " \"\"\"\n", 269 | "\n", 270 | " def __init__(self, img_shape=(MNIST_IMG_SIZE, MNIST_IMG_SIZE)):\n", 271 | " super().__init__()\n", 272 | " num_neurons_per_layer = [img_shape[0] * img_shape[1], 512, 256, 1]\n", 273 | "\n", 274 | " # Last layer is Sigmoid function - basically the goal of the discriminator is to output 1.\n", 275 | " # for real images and 0. 
for fake images; sigmoid outputs values in the (0, 1) range, so it's a perfect fit.\n", 276 | "        self.net = nn.Sequential(\n", 277 | "            *vanilla_block(num_neurons_per_layer[0], num_neurons_per_layer[1], normalize=False),\n", 278 | "            *vanilla_block(num_neurons_per_layer[1], num_neurons_per_layer[2], normalize=False),\n", 279 | "            *vanilla_block(num_neurons_per_layer[2], num_neurons_per_layer[3], normalize=False, activation=nn.Sigmoid())\n", 280 | "        )\n", 281 | "\n", 282 | "    def forward(self, img_batch):\n", 283 | "        img_batch_flattened = img_batch.view(img_batch.shape[0], -1)  # flatten from (N,1,H,W) into (N, HxW)\n", 284 | "        return self.net(img_batch_flattened)" 285 | ] 286 | }, 287 | { 288 | "cell_type": "markdown", 289 | "metadata": {}, 290 | "source": [ 291 | "## GAN Training\n", 292 | "\n", 293 | "**Feel free to skip this entire section** if you just want to use the pre-trained model to generate some new images - images which don't exist in the original MNIST dataset, and that's the whole magic of GANs!\n", 294 | "\n", 295 | "Phew! So far we've gotten familiar with the data and our models - awesome work!
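Before diving into training, here's a quick shape sanity check (an illustrative snippet, not one of the notebook's cells) that ties the two networks together - the generator maps latent vectors to image batches, and the discriminator maps images back to per-image probabilities:

```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
generator, discriminator = GeneratorNet().to(device), DiscriminatorNet().to(device)

z = get_gaussian_latent_batch(4, device)   # (4, 100) latent vectors
fake_imgs = generator(z)                   # (4, 1, 28, 28) MNIST-shaped images in [-1, 1]
probs = discriminator(fake_imgs)           # (4, 1) "realness" probabilities in (0, 1)
print(z.shape, fake_imgs.shape, probs.shape)
```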
\n", 296 | "But brace yourselves as this is arguable the hardest part. How to actually train your GAN?\n", 297 | "\n", 298 | "Let's start with understanding the loss function! We'll be using `BCE (binary cross-entropy loss`), let's see why?
\n", 299 | "\n", 300 | "If we input real images into the discriminator we expect it to output 1 (I'm 100% sure that this is a real image).
\n", 301 | "The further away it is from 1 and the closer it is to 0 the more we should penalize it, as it is making wrong prediction.
\n", 302 | "So this is how the loss should look like in that case (it's basically `-log(x)`):\n", 303 | "\n", 304 | "\"BCE
" 305 | ] 306 | }, 307 | { 308 | "cell_type": "markdown", 309 | "metadata": {}, 310 | "source": [ 311 | "BCE loss basically becomes `-log(x)` when it's target (true) label is 1.
\n", 312 | "\n", 313 | "Similarly for fake images, the target (true) label is 0 (as we want the discriminator to output 0 for fake images) and we want to penalize the generator if it starts outputing values close to 1. So we basically want to mirror the above loss function and that's just: `-log(1-x)`.
\n", 314 | "\n", 315 | "BCE loss basically becomes `-log(1-x)` when it's target (true) label is 0. That's why it perfectly fits the task!
\n", 316 | "\n", 317 | "\n", 318 | "### Training utility functions\n", 319 | "Let's define some useful utility functions:" 320 | ] 321 | }, 322 | { 323 | "cell_type": "code", 324 | "execution_count": 5, 325 | "metadata": {}, 326 | "outputs": [], 327 | "source": [ 328 | "# Tried SGD for the discriminator, had problems tweaking it - Adam simply works nicely but default lr 1e-3 won't work!\n", 329 | "# I had to train discriminator more (4 to 1 schedule worked) to get it working with default lr, still got worse results.\n", 330 | "# 0.0002 and 0.5, 0.999 are from the DCGAN paper it works here nicely!\n", 331 | "def get_optimizers(d_net, g_net):\n", 332 | " d_opt = Adam(d_net.parameters(), lr=0.0002, betas=(0.5, 0.999))\n", 333 | " g_opt = Adam(g_net.parameters(), lr=0.0002, betas=(0.5, 0.999))\n", 334 | " return d_opt, g_opt\n", 335 | "\n", 336 | "\n", 337 | "# It's useful to add some metadata when saving your model, it should probably make sense to also add the number of epochs\n", 338 | "def get_training_state(generator_net, gan_type_name):\n", 339 | " training_state = {\n", 340 | " \"commit_hash\": git.Repo(search_parent_directories=True).head.object.hexsha,\n", 341 | " \"state_dict\": generator_net.state_dict(),\n", 342 | " \"gan_type\": gan_type_name\n", 343 | " }\n", 344 | " return training_state\n", 345 | "\n", 346 | "\n", 347 | "# Makes things useful when you have multiple models\n", 348 | "class GANType(enum.Enum):\n", 349 | " VANILLA = 0\n", 350 | "\n", 351 | "\n", 352 | "# Feel free to ignore this one not important for GAN training. \n", 353 | "# It just figures out a good binary name so as not to overwrite your older models.\n", 354 | "def get_available_binary_name(gan_type_enum=GANType.VANILLA):\n", 355 | " def valid_binary_name(binary_name):\n", 356 | " # First time you see raw f-string? Don't worry the only trick is to double the brackets.\n", 357 | " pattern = re.compile(rf'{gan_type_enum.name}_[0-9]{{6}}\\.pth')\n", 358 | " return re.fullmatch(pattern, binary_name) is not None\n", 359 | "\n", 360 | " prefix = gan_type_enum.name\n", 361 | " # Just list the existing binaries so that we don't overwrite them but write to a new one\n", 362 | " valid_binary_names = list(filter(valid_binary_name, os.listdir(BINARIES_PATH)))\n", 363 | " if len(valid_binary_names) > 0:\n", 364 | " last_binary_name = sorted(valid_binary_names)[-1]\n", 365 | " new_suffix = int(last_binary_name.split('.')[0][-6:]) + 1 # increment by 1\n", 366 | " return f'{prefix}_{str(new_suffix).zfill(6)}.pth'\n", 367 | " else:\n", 368 | " return f'{prefix}_000000.pth'" 369 | ] 370 | }, 371 | { 372 | "cell_type": "markdown", 373 | "metadata": {}, 374 | "source": [ 375 | "### Tracking your model's progress during training\n", 376 | "You can track how your GAN training is progressing through:\n", 377 | "1. Console output\n", 378 | "2. Images dumped to: `data/debug_imagery`\n", 379 | "3. 
Tensorboard, just type in `tensorboard --logdir=runs` to your Anaconda console \n", 380 | "\n", 381 | "Note: to use tensorboard just navigate to project root first via `cd path_to_root` and open `http://localhost:6006/` (browser)" 382 | ] 383 | }, 384 | { 385 | "cell_type": "code", 386 | "execution_count": null, 387 | "metadata": {}, 388 | "outputs": [], 389 | "source": [ 390 | "####################### constants #####################\n", 391 | "# For logging purpose\n", 392 | "ref_batch_size = 16\n", 393 | "ref_noise_batch = get_gaussian_latent_batch(ref_batch_size, device) # Track G's quality during training on fixed noise vectors\n", 394 | "\n", 395 | "discriminator_loss_values = []\n", 396 | "generator_loss_values = []\n", 397 | "\n", 398 | "img_cnt = 0\n", 399 | "\n", 400 | "enable_tensorboard = True\n", 401 | "console_log_freq = 50\n", 402 | "debug_imagery_log_freq = 50\n", 403 | "checkpoint_freq = 2\n", 404 | "\n", 405 | "# For training purpose\n", 406 | "num_epochs = 10 # feel free to increase this\n", 407 | "\n", 408 | "########################################################\n", 409 | "\n", 410 | "writer = SummaryWriter() # (tensorboard) writer will output to ./runs/ directory by default\n", 411 | "\n", 412 | "# Hopefully you have some GPU ^^\n", 413 | "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", 414 | "\n", 415 | "# Prepare feed-forward nets (place them on GPU if present) and optimizers which will tweak their weights\n", 416 | "discriminator_net = DiscriminatorNet().train().to(device)\n", 417 | "generator_net = GeneratorNet().train().to(device)\n", 418 | "\n", 419 | "discriminator_opt, generator_opt = get_optimizers(discriminator_net, generator_net)\n", 420 | "\n", 421 | "# 1s (real_images_gt) will configure BCELoss into -log(x) (check out the loss image above that's -log(x)) \n", 422 | "# whereas 0s (fake_images_gt) will configure it to -log(1-x)\n", 423 | "# So that means we can effectively use binary cross-entropy loss to achieve adversarial loss!\n", 424 | "adversarial_loss = nn.BCELoss()\n", 425 | "real_images_gt = torch.ones((batch_size, 1), device=device)\n", 426 | "fake_images_gt = torch.zeros((batch_size, 1), device=device)\n", 427 | "\n", 428 | "ts = time.time() # start measuring time\n", 429 | "\n", 430 | "# GAN training loop, it's always smart to first train the discriminator so as to avoid mode collapse!\n", 431 | "# A mode collapse, for example, is when your generator learns to only generate a single digit instead of all 10 digits!\n", 432 | "for epoch in range(num_epochs):\n", 433 | " for batch_idx, (real_images, _) in enumerate(mnist_data_loader):\n", 434 | "\n", 435 | " real_images = real_images.to(device) # Place imagery on GPU (if present)\n", 436 | "\n", 437 | " #\n", 438 | " # Train discriminator: maximize V = log(D(x)) + log(1-D(G(z))) or equivalently minimize -V\n", 439 | " # Note: D = discriminator, x = real images, G = generator, z = latent Gaussian vectors, G(z) = fake images\n", 440 | " #\n", 441 | "\n", 442 | " # Zero out .grad variables in discriminator network,\n", 443 | " # otherwise we would have corrupt results - leftover gradients from the previous training iteration\n", 444 | " discriminator_opt.zero_grad()\n", 445 | "\n", 446 | " # -log(D(x)) <- we minimize this by making D(x)/discriminator_net(real_images) as close to 1 as possible\n", 447 | " real_discriminator_loss = adversarial_loss(discriminator_net(real_images), real_images_gt)\n", 448 | "\n", 449 | " # G(z) | G == generator_net and z == 
get_gaussian_latent_batch(batch_size, device)\n", 450 | " fake_images = generator_net(get_gaussian_latent_batch(batch_size, device))\n", 451 | " # D(G(z)), we call detach() so that we don't calculate gradients for the generator during backward()\n", 452 | " fake_images_predictions = discriminator_net(fake_images.detach())\n", 453 | " # -log(1 - D(G(z))) <- we minimize this by making D(G(z)) as close to 0 as possible\n", 454 | " fake_discriminator_loss = adversarial_loss(fake_images_predictions, fake_images_gt)\n", 455 | "\n", 456 | " discriminator_loss = real_discriminator_loss + fake_discriminator_loss\n", 457 | " discriminator_loss.backward() # this will populate .grad vars in the discriminator net\n", 458 | " discriminator_opt.step() # perform D weights update according to optimizer's strategy\n", 459 | "\n", 460 | " #\n", 461 | " # Train generator: minimize V1 = log(1-D(G(z))) or equivalently maximize V2 = log(D(G(z))) (or min of -V2)\n", 462 | " # The original expression (V1) had problems with diminishing gradients for G when D is too good.\n", 463 | " #\n", 464 | "\n", 465 | " # if you want to cause mode collapse probably the easiest way to do that would be to add \"for i in range(n)\"\n", 466 | " # here (simply train G more frequent than D), n = 10 worked for me other values will also work - experiment.\n", 467 | "\n", 468 | " # Zero out .grad variables in discriminator network (otherwise we would have corrupt results)\n", 469 | " generator_opt.zero_grad()\n", 470 | "\n", 471 | " # D(G(z)) (see above for explanations)\n", 472 | " generated_images_predictions = discriminator_net(generator_net(get_gaussian_latent_batch(batch_size, device)))\n", 473 | " # By placing real_images_gt here we minimize -log(D(G(z))) which happens when D approaches 1\n", 474 | " # i.e. 
we're tricking D into thinking that these generated images are real!\n", 475 | " generator_loss = adversarial_loss(generated_images_predictions, real_images_gt)\n", 476 | "\n", 477 | " generator_loss.backward() # this will populate .grad vars in the G net (also in D but we won't use those)\n", 478 | " generator_opt.step() # perform G weights update according to optimizer's strategy\n", 479 | "\n", 480 | " #\n", 481 | " # Logging and checkpoint creation\n", 482 | " #\n", 483 | "\n", 484 | " generator_loss_values.append(generator_loss.item())\n", 485 | " discriminator_loss_values.append(discriminator_loss.item())\n", 486 | " \n", 487 | " if enable_tensorboard:\n", 488 | " global_batch_idx = len(mnist_data_loader) * epoch + batch_idx + 1\n", 489 | " writer.add_scalars('losses/g-and-d', {'g': generator_loss.item(), 'd': discriminator_loss.item()}, global_batch_idx)\n", 490 | " # Save debug imagery to tensorboard also (some redundancy but it may be more beginner-friendly)\n", 491 | " if batch_idx % debug_imagery_log_freq == 0:\n", 492 | " with torch.no_grad():\n", 493 | " log_generated_images = generator_net(ref_noise_batch)\n", 494 | " log_generated_images = nn.Upsample(scale_factor=2, mode='nearest')(log_generated_images)\n", 495 | " intermediate_imagery_grid = make_grid(log_generated_images, nrow=int(np.sqrt(ref_batch_size)), normalize=True)\n", 496 | " writer.add_image('intermediate generated imagery', intermediate_imagery_grid, global_batch_idx)\n", 497 | "\n", 498 | " if batch_idx % console_log_freq == 0:\n", 499 | " prefix = 'GAN training: time elapsed'\n", 500 | " print(f'{prefix} = {(time.time() - ts):.2f} [s] | epoch={epoch + 1} | batch= [{batch_idx + 1}/{len(mnist_data_loader)}]')\n", 501 | "\n", 502 | " # Save intermediate generator images (more convenient like this than through tensorboard)\n", 503 | " if batch_idx % debug_imagery_log_freq == 0:\n", 504 | " with torch.no_grad():\n", 505 | " log_generated_images = generator_net(ref_noise_batch)\n", 506 | " log_generated_images_resized = nn.Upsample(scale_factor=2.5, mode='nearest')(log_generated_images)\n", 507 | " out_path = os.path.join(DEBUG_IMAGERY_PATH, f'{str(img_cnt).zfill(6)}.jpg')\n", 508 | " save_image(log_generated_images_resized, out_path, nrow=int(np.sqrt(ref_batch_size)), normalize=True)\n", 509 | " img_cnt += 1\n", 510 | "\n", 511 | " # Save generator checkpoint\n", 512 | " if (epoch + 1) % checkpoint_freq == 0 and batch_idx == 0:\n", 513 | " ckpt_model_name = f\"vanilla_ckpt_epoch_{epoch + 1}_batch_{batch_idx + 1}.pth\"\n", 514 | " torch.save(get_training_state(generator_net, GANType.VANILLA.name), os.path.join(CHECKPOINTS_PATH, ckpt_model_name))\n", 515 | "\n", 516 | "# Save the latest generator in the binaries directory\n", 517 | "torch.save(get_training_state(generator_net, GANType.VANILLA.name), os.path.join(BINARIES_PATH, get_available_binary_name()))\n" 518 | ] 519 | }, 520 | { 521 | "cell_type": "markdown", 522 | "metadata": {}, 523 | "source": [ 524 | "## Generate images with your vanilla GAN\n", 525 | "\n", 526 | "Nice, finally we can use the generator we trained to generate some MNIST-like imagery!\n", 527 | "\n", 528 | "Let's define a couple of utility functions which will make things cleaner!" 
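One detail worth understanding before the utilities (a standalone aside, not one of the notebook's cells): the generator's `tanh` output lives in `[-1, 1]`, and there are two common ways to map it back into `[0, 1]` for display. The post-processing below uses per-image min-max normalization:

```python
import numpy as np

img = np.tanh(np.random.randn(28, 28))  # stand-in for a single generator output in [-1, 1]

# Per-image min-max (what postprocess_generated_img below does): always spans the full
# [0, 1] range, at the cost of a slightly different mapping for every image.
minmax = (img - img.min()) / (img.max() - img.min())

# Fixed affine map: the same transform for every image, but may not use the full range.
affine = (img + 1.0) / 2.0
```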
529 | ] 530 | }, 531 | { 532 | "cell_type": "code", 533 | "execution_count": 7, 534 | "metadata": {}, 535 | "outputs": [], 536 | "source": [ 537 | "def postprocess_generated_img(generated_img_tensor):\n", 538 | " assert isinstance(generated_img_tensor, torch.Tensor), f'Expected PyTorch tensor but got {type(generated_img_tensor)}.'\n", 539 | "\n", 540 | " # Move the tensor from GPU to CPU, convert to numpy array, extract 0th batch, move the image channel\n", 541 | " # from 0th to 2nd position (CHW -> HWC)\n", 542 | " generated_img = np.moveaxis(generated_img_tensor.to('cpu').numpy()[0], 0, 2)\n", 543 | "\n", 544 | " # Since MNIST images are grayscale (1-channel only) repeat 3 times to get RGB image\n", 545 | " generated_img = np.repeat(generated_img, 3, axis=2)\n", 546 | "\n", 547 | " # Imagery is in the range [-1, 1] (generator has tanh as the output activation) move it into [0, 1] range\n", 548 | " generated_img -= np.min(generated_img)\n", 549 | " generated_img /= np.max(generated_img)\n", 550 | "\n", 551 | " return generated_img\n", 552 | "\n", 553 | "\n", 554 | "# This function will generate a random vector pass it to the generator which will generate a new image\n", 555 | "# which we will just post-process and return it\n", 556 | "def generate_from_random_latent_vector(generator):\n", 557 | " with torch.no_grad(): # Tells PyTorch not to compute gradients which would have huge memory footprint\n", 558 | " \n", 559 | " # Generate a single random (latent) vector\n", 560 | " latent_vector = get_gaussian_latent_batch(1, next(generator.parameters()).device)\n", 561 | " \n", 562 | " # Post process generator output (as it's in the [-1, 1] range, remember?)\n", 563 | " generated_img = postprocess_generated_img(generator(latent_vector))\n", 564 | "\n", 565 | " return generated_img\n", 566 | "\n", 567 | "\n", 568 | "# You don't need to get deep into this one - irrelevant for GANs - it will just figure out a good name for your generated\n", 569 | "# images so that you don't overwrite the old ones. They'll be stored with xxxxxx.jpg naming scheme.\n", 570 | "def get_available_file_name(input_dir): \n", 571 | " def valid_frame_name(str):\n", 572 | " pattern = re.compile(r'[0-9]{6}\\.jpg') # regex, examples it covers: 000000.jpg or 923492.jpg, etc.\n", 573 | " return re.fullmatch(pattern, str) is not None\n", 574 | "\n", 575 | " # Filter out only images with xxxxxx.jpg format from the input_dir\n", 576 | " valid_frames = list(filter(valid_frame_name, os.listdir(input_dir)))\n", 577 | " if len(valid_frames) > 0:\n", 578 | " # Images are saved in the .jpg format we find the biggest such number and increment by 1\n", 579 | " last_img_name = sorted(valid_frames)[-1]\n", 580 | " new_prefix = int(last_img_name.split('.')[0]) + 1 # increment by 1\n", 581 | " return f'{str(new_prefix).zfill(6)}.jpg'\n", 582 | " else:\n", 583 | " return '000000.jpg'\n", 584 | "\n", 585 | "\n", 586 | "def save_and_maybe_display_image(dump_dir, dump_img, out_res=(256, 256), should_display=False):\n", 587 | " assert isinstance(dump_img, np.ndarray), f'Expected numpy array got {type(dump_img)}.'\n", 588 | "\n", 589 | " # step1: get next valid image name\n", 590 | " dump_img_name = get_available_file_name(dump_dir)\n", 591 | "\n", 592 | " # step2: convert to uint8 format <- OpenCV expects it otherwise your image will be completely black. 
Don't ask...\n", 593 | " if dump_img.dtype != np.uint8:\n", 594 | " dump_img = (dump_img*255).astype(np.uint8)\n", 595 | "\n", 596 | " # step3: write image to the file system (::-1 because opencv expects BGR (and not RGB) format...)\n", 597 | " cv.imwrite(os.path.join(dump_dir, dump_img_name), cv.resize(dump_img[:, :, ::-1], out_res, interpolation=cv.INTER_NEAREST)) \n", 598 | "\n", 599 | " # step4: maybe display part of the function\n", 600 | " if should_display:\n", 601 | " plt.imshow(dump_img)\n", 602 | " plt.show()" 603 | ] 604 | }, 605 | { 606 | "cell_type": "markdown", 607 | "metadata": {}, 608 | "source": [ 609 | "### We're now ready to generate some new digit images!" 610 | ] 611 | }, 612 | { 613 | "cell_type": "code", 614 | "execution_count": 10, 615 | "metadata": {}, 616 | "outputs": [ 617 | { 618 | "name": "stdout", 619 | "output_type": "stream", 620 | "text": [ 621 | "Model states contains this data: dict_keys(['commit_hash', 'state_dict', 'gan_type'])\n", 622 | "Using VANILLA GAN!\n", 623 | "Generating new MNIST-like images!\n" 624 | ] 625 | }, 626 | { 627 | "data": { 628 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAPsAAAD4CAYAAAAq5pAIAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjMsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+AADFEAAAQ30lEQVR4nO3db4xV9Z3H8c+XYUZkQIVFcALjgpUHiwTtCoSEZlNsllh9ANXUaMwGskT6oCSt4cEalqTGjQnZbIt9YBqniyndFJsmSsTErEXjnyUxzQCyCIsurGD5Mw4oQgGBgeG7D+awO8U5v99wz733HPi9X8nkztzvPff8OMNnzr33e875mbsLwPVvRNkDANAchB1IBGEHEkHYgUQQdiARI5u5MjMr7aN/MwvW6UrgeuHuQ/5nLxR2M7tf0s8ltUj6V3dfU+T5YlpaWnJr/f39wWVHjRoVrJ89ezZYD/2x4A9FvjK324gR+S9cL126FFx25MhwNC5evFjTmIbz/EWfO0/NL+PNrEXS85K+K2mGpMfMbEa9Bgagvoq8Z58raZ+7f+LufZJ+K2lRfYYFoN6KhH2ypIODfj6U3fdnzGy5mW01s60F1gWgoCLv2Yd6M/a1N2Hu3iWpSyr3AzogdUX27IckdQ76eYqkI8WGA6BRioS9W9J0M5tmZm2SHpW0qT7DAlBvNb+Md/eLZrZC0hsaaL296O676zayIcTaayGx1lqsNXfu3Lma1x0zbty4YP3LL79s2Lpjih6fUGbrLdZeC4m1v4q25hrVXgsp1Gd399clvV6nsQBoIA6XBRJB2IFEEHYgEYQdSARhBxJB2IFEWDNPzyzzcNnQ6Y5SvJ9cpMff6HPpy+xlt7W1Bet9fX0NXX9VLViwIFh/++23G7buvPPZ2bMDiSDsQCIIO5AIwg4kgrADiSDsQCKa3nprVJuoypeKrvLY0BiNvjptCK03IHGEHUgEYQcSQdiBRBB2IBGEHUgEYQcS0dQpmxupzF51a2trsH7hwoUmjQRVETu2ogzs2YFEEHYgEYQdSARhBxJB2IFEEHYgEYQdSEQyl5KOGTNmTLB++vTp3FrsMtWxPnzR38HSpUtrXnds7GfOnAnW9+7dG6yvXLkytzZ//vzgsrFzvvfv3x+sP/vss7m13t7e4LLbtm0L1ov+zm644Ybc2vnz5ws9d9757IUOqjGzA5JOSeqXdNHdZxd5PgCNU48j6Ba4++d1eB4ADcR7diARRcPukn5vZtvMbPlQDzCz5Wa21cy2FlwXgAKKvoyf7+5HzGyipM1m9pG7vzf4Ae7eJalLqvYHdMD1rtCe3d2PZLdHJW2UNLcegwJQfzWH3czazWzs5e8lLZS0q14DA1BfRV7GT5K0MTtvd6SkDe7+73UZVQlCffSYS5cuBeuxvmms1z1x4sRgvbOzM7e2ePHi4LIdHR3Beuz4g9iUzaFtE+tVx64DMHPmzGD90Ucfza1t3LgxuOz27duD9aJ99tD/iTlz5gSX7e7urmmdNYfd3T+RdHetywNoLlpvQCIIO5AIwg4kgrADiSDsQCI4xbUCYu2tJUuWBOurV6/Ord1yyy3BZfv6+oL1mLFjx9a8bH9/f7AeG1tsWuTQ6bmPPPJIcNk333wzWK8ypmwGEkfYgUQQdiARhB1IBGEHEkHYgUQQdiAR182UzVUW6wc//vjjwfqaNWuC9dGjR+fWYqffxi4V3d7eHqzHjtMITV187ty54LInTpwI1m+88cZgfefOnbm1kydPBpeNTblc5hThtWLPDiSCsAOJIOxAIgg7kAjCDiSCsAOJIOxAIuiz10HsUtCxSwOvXbs2WA9N7yuFpzY+ePBgcNnnnnsuWG9paQnWFyxYEKzv2bOnppokzZ0bnnNk4cKFwfqGDRtya7Gppsvso0+YMCFY//zz2uZRZc8OJIKwA4kg7EAiCDuQCMIOJIKwA4kg7EAirpvrxpd5/nHs2unr1q0L1h966KFC6+/t7c2tLVq0KLhsrNfd2toarH/11VfBeugYgNjxAzNmzAjWe3p6gvXPPvsstxY7z79MseM2YmOv+brxZvaimR01s12D7htvZpvNbG92Oy72PADKNZyX8b+SdP8V9z0l6S13ny7prexnABUWDbu7vyfp+BV3L5K0Pvt+vaTFdR4XgDqr9dj4Se7eI0nu3mNmE/MeaGbLJS2vcT0A6qThJ8K4e5ekLomJHYEy1dp66zWzDknKbo/Wb0gAGqHWsG+SdHke4SWSXq3PcAA0SvRlvJm9JOnbkiaY2SFJP5G0RtLvzGyZpD9K+v5wVxjqIcb6h6FeepnnH997773B+oMPPhisx44RiG2X/fv359ZCfW5JmjlzZrA+e/bsYH3Tpk3B+uHDh3Nr58+fDy770UcfBeuxHn+Vr+1+22235dZCxwcUEQ27u
z+WU/pOnccCoIE4XBZIBGEHEkHYgUQQdiARhB1IRFNPcW1vb/e77rort97d3d20sdTTqFGjgvXjx688teDqlo8Jteb6+/uDy8amk4617t5///1gfdmyZbm1UFtOirfmqtxaiwlt99g2j6n5FFcA1wfCDiSCsAOJIOxAIgg7kAjCDiSCsAOJuG4uJT2MdQfrRbZD7Lm3bNkSrM+bN6/Q8xdRdLvE6qdOncqtLV26NLjsa6+9FqzHjiGosiKnesfQZwcSR9iBRBB2IBGEHUgEYQcSQdiBRBB2IBGV6rOXOe1yI82aNStYf+edd4L1m2++OVhvZB++kS5cuBCsx44/+OCDD+o5nOsGfXYgcYQdSARhBxJB2IFEEHYgEYQdSARhBxJRqT57qtra2oL1IteVnzNnTrA+efLkYD3W43/yySeD9dtvvz23FjqnW5LOnDkTrN9xxx3B+rFjx4L1MrW0tOTWip6nX3Of3cxeNLOjZrZr0H1Pm9lhM9uRfT1QaHQAGm44L+N/Jen+Ie5f6+73ZF+v13dYAOotGnZ3f09SeP4iAJVX5AO6FWa2M3uZPy7vQWa23My2mtnWAusCUFCtYf+FpG9IukdSj6Sf5j3Q3bvcfba7z65xXQDqoKawu3uvu/e7+yVJv5Q0t77DAlBvNYXdzDoG/fg9SbvyHgugGqJ9djN7SdK3JU2Q1CvpJ9nP90hySQck/cDde6Iro89+zYmdKz9+/Phgvbu7O7c2derU4LKx66fffffdwfru3buD9TKFtmvRY1/y+uz5M8L//4KPDXH3ukKjAdB0HC4LJIKwA4kg7EAiCDuQCMIOJCL6aXyVjB07NrcWmhoYtYu1gU6cOBGsb9iwIbe2atWq4LKxtt+UKVOC9Sq33kaOzI9e7BTXWqd0Zs8OJIKwA4kg7EAiCDuQCMIOJIKwA4kg7EAirqk+O7306on1fI8cOdKwdZ89e7Zhz91osemqG4E9O5AIwg4kgrADiSDsQCIIO5AIwg4kgrADibim+uzXqth52bF67JzyZk67faXYdNIrVqyo+blj/67YZayrrMilpGtdlj07kAjCDiSCsAOJIOxAIgg7kAjCDiSCsAOJoM9eB62trcH6ww8/HKx3dnYWWv/zzz+fW+vr6wsuG+vxx/5t9913X7Aem5Y55OTJk8H6u+++W/Nzly3UDy963EWe6J7dzDrN7G0z22Nmu83sR9n9481ss5ntzW7H1TQCAE0xnJfxFyWtdPe/kjRP0g/NbIakpyS95e7TJb2V/QygoqJhd/ced9+efX9K0h5JkyUtkrQ+e9h6SYsbNUgAxV3Ve3Yzmyrpm5L+IGmSu/dIA38QzGxizjLLJS0vNkwARQ077GY2RtLLkn7s7n+KfYhwmbt3SerKnqO8MzaAxA2r9WZmrRoI+m/c/ZXs7l4z68jqHZKONmaIAOohume3gV34Okl73P1ng0qbJC2RtCa7fbXoYO68885gfd++fUVX0RDt7e3B+gsvvFBo+VirJdT++vjjj4PLxi71HGsbTps2LVgPTU188eLF4LLPPPNMsH4tX1o89Ds/c+ZMQ9Y5nJfx8yX9naQPzWxHdt8qDYT8d2a2TNIfJX2/ISMEUBfRsLv7Fkl5b9C/U9/hAGgUDpcFEkHYgUQQdiARhB1IBGEHEmHNvAzx9XoEXewU1S1bthRaPiY0bXJsSuURI8J/74uebhla/xdffBFcdt68ecH6p59+GqxXWVtbW24tNp3zMC4tPuQvjT07kAjCDiSCsAOJIOxAIgg7kAjCDiSCsAOJ4FLSdXDw4MFgfc6cOcH6gQMHgvXYtMihXnmsj17UcK9YNJT+/v5gPXYZ7GtZ6N9WZJuGsGcHEkHYgUQQdiARhB1IBGEHEkHYgUQQdiAR18357I2a5rYZbrrppmB9+vTpwfoTTzyRW5s1a1Zw2di1+s+fPx+snzt3Llh/4403cmurV68OLhu7LnysT99ILS0twXqZY+N8diBxhB1IBGEHEkHYgUQQdiARhB1IBGEHEhHts5tZp6RfS7pN0iVJXe7+czN7WtITko5lD13l7q9Hnqu6zW6gjkLzr0vF5mAPHVPi7rl99uFcvOKipJXuvt3MxkraZmabs9pad/+Xqx4tgKYbzvzsPZJ6su9PmdkeSZMbPTAA9XVV79nNbKqkb0r6Q3bXCjPbaWYvmtm4nGWWm9lWM9taaKQAChn2sfFmNkbSu5KedfdXzGySpM8luaR/ktTh7n8feQ7esyMJVXzPPqw9u5m1SnpZ0m/c/ZXsSXvdvd/dL0n6paS5Vz1qAE0TDbsN/BlZJ2mPu/9s0P0dgx72PUm76j88APUynNbbtyT9h6QPNdB6k6RVkh6TdI8GXsYfkPSD7MO80HMFV3brrbcGx3Ls2LFgPbLuYL3Kp8AiLbHLf8em4a659ebuWyQNtXCwpw6gWjiCDkgEYQcSQdiBRBB2IBGEHUgEYQcS0fQpm0P97lh/MbRs0Uv7FunDjxwZ3oyxdRft8ccOnwwp85LIsamoY5epjmlra8utFZ0OevTo0cH62bNng/XQ7yXWR68Ve3YgEYQdSARhBxJB2IFEEHYgEYQdSARhBxLR7Cmbj0n6dNBdEzRwaasqqurYqjouibHVqp5j+0t3H/LCEE0N+9dWbrbV3WeXNoCAqo6tquOSGFutmjU2XsYDiSDsQCLKDntXyesPqerYqjouibHVqiljK/U9O4DmKXvPDqBJCDuQiFLCbmb3m9nHZrbPzJ4qYwx5zOyAmX1oZjvKnp8um0PvqJntGnTfeDPbbGZ7s9sh59graWxPm9nhbNvtMLMHShpbp5m9bWZ7zGy3mf0ou7/UbRcYV1O2W9Pfs5tZi6T/lvS3kg5J6pb0mLv/V1MHksPMDkia7e6lH4BhZn8j6bSkX7v7zOy+f5Z03N3XZH8ox7n7P1RkbE9LOl32NN7ZbEUdg6cZl7RY0lKVuO0C43pETdhuZezZ50ra5+6fuHufpN9KWlTCOCrP3d+TdPyKuxdJWp99v14D/1maLmdsleDuPe6+Pfv+lKTL04yXuu0C42qKMsI+WdLBQT8fUrXme3dJvzezbWa2vOzBDGHS5Wm2stuJJY/nStFpvJvpimnGK7Ptapn+vKgywj7UBdOq1P+b7+5/Lem7kn6YvVzF8PxC0jc0MAdgj6SfljmYbJrxlyX92N3/VOZYBhtiXE3ZbmWE/ZCkzkE/T5F0pIRxDMndj2S3RyVtVPWmou69PINudnu05PH8nypN4z3UNOOqwLYrc/rzMsLeLWm6mU0zszZJj0raVMI4vsbM2rMPTmRm7ZIWqnpTUW+StCT7fomkV0scy5+pyjTeedOMq+RtV/r05+7e9C9JD2jgE/n/kfSPZYwhZ1x3SPrP7Gt32WOT9JIGXtZd0MAromWS/kLSW5L2ZrfjKzS2f9PA1N47NRCsjpLG9i0NvDXcKWlH9vVA2dsuMK6mbDcOlwUSwRF0QCIIO5AIwg4kgrADiSDsQCIIO5AIwg4k4n8BIFvhXuFW7T4AAAAASUVORK5CYII=\n", 629 | 
"text/plain": [ 630 | "
" 631 | ] 632 | }, 633 | "metadata": { 634 | "needs_background": "light" 635 | }, 636 | "output_type": "display_data" 637 | } 638 | ], 639 | "source": [ 640 | "# VANILLA_000000.pth is the model I pretrained for you, feel free to change it if you trained your own model (last section)!\n", 641 | "model_path = os.path.join(BINARIES_PATH, 'VANILLA_000000.pth') \n", 642 | "assert os.path.exists(model_path), f'Could not find the model {model_path}. You first need to train your generator.'\n", 643 | "\n", 644 | "# Hopefully you have some GPU ^^\n", 645 | "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", 646 | "\n", 647 | "# let's load the model, this is a dictionary containing model weights but also some metadata\n", 648 | "# commit_hash - simply tells me which version of my code generated this model (hey you have to learn git!)\n", 649 | "# gan_type - this one is \"VANILLA\" but I also have \"DCGAN\" and \"cGAN\" models\n", 650 | "# state_dict - contains the actuall neural network weights\n", 651 | "model_state = torch.load(model_path) \n", 652 | "print(f'Model states contains this data: {model_state.keys()}')\n", 653 | "\n", 654 | "gan_type = model_state[\"gan_type\"] # \n", 655 | "print(f'Using {gan_type} GAN!')\n", 656 | "\n", 657 | "# Let's instantiate a generator net and place it on GPU (if you have one)\n", 658 | "generator = GeneratorNet().to(device)\n", 659 | "# Load the weights, strict=True just makes sure that the architecture corresponds to the weights 100%\n", 660 | "generator.load_state_dict(model_state[\"state_dict\"], strict=True)\n", 661 | "generator.eval() # puts some layers like batch norm in a good state so it's ready for inference <- fancy name right?\n", 662 | " \n", 663 | "generated_imgs_path = os.path.join(DATA_DIR_PATH, 'generated_imagery') # this is where we'll dump images\n", 664 | "os.makedirs(generated_imgs_path, exist_ok=True)\n", 665 | "\n", 666 | "#\n", 667 | "# This is where the magic happens!\n", 668 | "#\n", 669 | "\n", 670 | "print('Generating new MNIST-like images!')\n", 671 | "generated_img = generate_from_random_latent_vector(generator)\n", 672 | "save_and_maybe_display_image(generated_imgs_path, generated_img, should_display=True)" 673 | ] 674 | }, 675 | { 676 | "cell_type": "markdown", 677 | "metadata": {}, 678 | "source": [ 679 | "# I'd love to hear your feedback\n", 680 | "\n", 681 | "If you found this notebook useful and would like me to add the same for cGAN and DCGAN please [open an issue](https://github.com/gordicaleksa/pytorch-gans/issues/new).
\n", 682 | "\n", 683 | "I'm super not aware of how useful people find this, I usually do stuff through my IDE.\n", 684 | "\n", 685 | "# Connect with me\n", 686 | "\n", 687 | "I share lots of useful (I hope so at least!) content on LinkedIn, Twitter, YouTube and Medium.
\n", 688 | "So feel free to connect with me there:\n", 689 | "1. My [LinkedIn](https://www.linkedin.com/in/aleksagordic) and [Twitter](https://twitter.com/gordic_aleksa) profiles\n", 690 | "2. My YouTube channel - [The AI Epiphany](https://www.youtube.com/c/TheAiEpiphany)\n", 691 | "3. My [Medium](https://gordicaleksa.medium.com/) profile\n" 692 | ] 693 | } 694 | ], 695 | "metadata": { 696 | "kernelspec": { 697 | "display_name": "Python 3", 698 | "language": "python", 699 | "name": "python3" 700 | }, 701 | "language_info": { 702 | "codemirror_mode": { 703 | "name": "ipython", 704 | "version": 3 705 | }, 706 | "file_extension": ".py", 707 | "mimetype": "text/x-python", 708 | "name": "python", 709 | "nbconvert_exporter": "python", 710 | "pygments_lexer": "ipython3", 711 | "version": "3.8.3" 712 | } 713 | }, 714 | "nbformat": 4, 715 | "nbformat_minor": 4 716 | } 717 | -------------------------------------------------------------------------------- /data/examples/generated_samples/generated_dcgan.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gordicaleksa/pytorch-GANs/2119f6d9522c710e658f897cc7c97b8ac771b8ea/data/examples/generated_samples/generated_dcgan.jpg -------------------------------------------------------------------------------- /data/examples/generated_samples/generated_vgan.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gordicaleksa/pytorch-GANs/2119f6d9522c710e658f897cc7c97b8ac771b8ea/data/examples/generated_samples/generated_vgan.jpg -------------------------------------------------------------------------------- /data/examples/intermediate_imagery.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gordicaleksa/pytorch-GANs/2119f6d9522c710e658f897cc7c97b8ac771b8ea/data/examples/intermediate_imagery.PNG -------------------------------------------------------------------------------- /data/examples/interpolation/dcgan_interpolated.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gordicaleksa/pytorch-GANs/2119f6d9522c710e658f897cc7c97b8ac771b8ea/data/examples/interpolation/dcgan_interpolated.jpg -------------------------------------------------------------------------------- /data/examples/interpolation/slerp.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gordicaleksa/pytorch-GANs/2119f6d9522c710e658f897cc7c97b8ac771b8ea/data/examples/interpolation/slerp.png -------------------------------------------------------------------------------- /data/examples/interpolation/vgan_interpolated.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gordicaleksa/pytorch-GANs/2119f6d9522c710e658f897cc7c97b8ac771b8ea/data/examples/interpolation/vgan_interpolated.jpg -------------------------------------------------------------------------------- /data/examples/jupyter/cross_entropy_loss.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gordicaleksa/pytorch-GANs/2119f6d9522c710e658f897cc7c97b8ac771b8ea/data/examples/jupyter/cross_entropy_loss.png -------------------------------------------------------------------------------- /data/examples/jupyter/data_distribution.PNG: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/gordicaleksa/pytorch-GANs/2119f6d9522c710e658f897cc7c97b8ac771b8ea/data/examples/jupyter/data_distribution.PNG -------------------------------------------------------------------------------- /data/examples/losses.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gordicaleksa/pytorch-GANs/2119f6d9522c710e658f897cc7c97b8ac771b8ea/data/examples/losses.PNG -------------------------------------------------------------------------------- /data/examples/real_samples/celeba.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gordicaleksa/pytorch-GANs/2119f6d9522c710e658f897cc7c97b8ac771b8ea/data/examples/real_samples/celeba.jpg -------------------------------------------------------------------------------- /data/examples/real_samples/mnist.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gordicaleksa/pytorch-GANs/2119f6d9522c710e658f897cc7c97b8ac771b8ea/data/examples/real_samples/mnist.jpg -------------------------------------------------------------------------------- /data/examples/training_progress/training_progress_cgan.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gordicaleksa/pytorch-GANs/2119f6d9522c710e658f897cc7c97b8ac771b8ea/data/examples/training_progress/training_progress_cgan.gif -------------------------------------------------------------------------------- /data/examples/training_progress/training_progress_dcgan.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gordicaleksa/pytorch-GANs/2119f6d9522c710e658f897cc7c97b8ac771b8ea/data/examples/training_progress/training_progress_dcgan.gif -------------------------------------------------------------------------------- /data/examples/training_progress/training_progress_vgan.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gordicaleksa/pytorch-GANs/2119f6d9522c710e658f897cc7c97b8ac771b8ea/data/examples/training_progress/training_progress_vgan.gif -------------------------------------------------------------------------------- /data/examples/vector_arithmetic/vector_arithmetic.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gordicaleksa/pytorch-GANs/2119f6d9522c710e658f897cc7c97b8ac771b8ea/data/examples/vector_arithmetic/vector_arithmetic.jpg -------------------------------------------------------------------------------- /environment.yml: -------------------------------------------------------------------------------- 1 | name: pytorch-gans 2 | channels: 3 | - defaults 4 | - pytorch 5 | dependencies: 6 | - python==3.8.3 7 | - pip==20.0.2 8 | - matplotlib==3.1.3 9 | - pytorch==1.5.0 10 | - torchvision==0.6.0 11 | - pip: 12 | - numpy==1.18.4 13 | - opencv-python==4.2.0.32 14 | - GitPython==3.1.2 15 | - tensorboard==2.2.2 16 | - imageio==2.9.0 17 | - jupyter==1.0.0 18 | 19 | -------------------------------------------------------------------------------- /generate_imagery.py: -------------------------------------------------------------------------------- 1 | import os 2 | import shutil 3 | import argparse 4 | 5 | 6 | import torch 7 | 
from torch import nn 8 | from torchvision.utils import save_image, make_grid 9 | import matplotlib.pyplot as plt 10 | import numpy as np 11 | import cv2 as cv 12 | 13 | 14 | import utils.utils as utils 15 | from utils.constants import * 16 | 17 | 18 | class GenerationMode(enum.Enum): 19 | SINGLE_IMAGE = 0, 20 | INTERPOLATION = 1, 21 | VECTOR_ARITHMETIC = 2 22 | 23 | 24 | def postprocess_generated_img(generated_img_tensor): 25 | assert isinstance(generated_img_tensor, torch.Tensor), f'Expected PyTorch tensor but got {type(generated_img_tensor)}.' 26 | 27 | # Move the tensor from GPU to CPU, convert to numpy array, extract 0th batch, move the image channel 28 | # from 0th to 2nd position (CHW -> HWC) 29 | generated_img = np.moveaxis(generated_img_tensor.to('cpu').numpy()[0], 0, 2) 30 | 31 | # If grayscale image repeat 3 times to get RGB image (for generators trained on MNIST) 32 | if generated_img.shape[2] == 1: 33 | generated_img = np.repeat(generated_img, 3, axis=2) 34 | 35 | # Imagery is in the range [-1, 1] (generator has tanh as the output activation) move it into [0, 1] range 36 | generated_img -= np.min(generated_img) 37 | generated_img /= np.max(generated_img) 38 | 39 | return generated_img 40 | 41 | 42 | def generate_from_random_latent_vector(generator, cgan_digit=None): 43 | with torch.no_grad(): 44 | latent_vector = utils.get_gaussian_latent_batch(1, next(generator.parameters()).device) 45 | 46 | if cgan_digit is None: 47 | generated_img = postprocess_generated_img(generator(latent_vector)) 48 | else: # condition and generate the digit specified by cgan_digit 49 | ref_label = torch.tensor([cgan_digit], dtype=torch.int64) 50 | ref_label_one_hot_encoding = torch.nn.functional.one_hot(ref_label, MNIST_NUM_CLASSES).type(torch.FloatTensor).to(next(generator.parameters()).device) 51 | generated_img = postprocess_generated_img(generator(latent_vector, ref_label_one_hot_encoding)) 52 | 53 | return generated_img, latent_vector.to('cpu').numpy()[0] 54 | 55 | 56 | def generate_from_specified_numpy_latent_vector(generator, latent_vector): 57 | assert isinstance(latent_vector, np.ndarray), f'Expected latent vector to be numpy array but got {type(latent_vector)}.' 58 | 59 | with torch.no_grad(): 60 | latent_vector_tensor = torch.unsqueeze(torch.tensor(latent_vector, device=next(generator.parameters()).device), dim=0) 61 | return postprocess_generated_img(generator(latent_vector_tensor)) 62 | 63 | 64 | def linear_interpolation(t, p0, p1): 65 | return p0 + t * (p1 - p0) 66 | 67 | 68 | def spherical_interpolation(t, p0, p1): 69 | """ Spherical interpolation (slerp) formula: https://en.wikipedia.org/wiki/Slerp 70 | 71 | Found inspiration here: https://github.com/soumith/ganhacks 72 | but I didn't get any improvement using it compared to linear interpolation. 73 | 74 | Args: 75 | t (float): has [0, 1] range 76 | p0 (numpy array): First n-dimensional vector 77 | p1 (numpy array): Second n-dimensional vector 78 | 79 | Result: 80 | Returns spherically interpolated vector. 
81 | 82 | """ 83 | if t <= 0: 84 | return p0 85 | elif t >= 1: 86 | return p1 87 | elif np.allclose(p0, p1): 88 | return p0 89 | 90 | # Convert p0 and p1 to unit vectors and find the angle between them (omega) 91 | omega = np.arccos(np.dot(p0 / np.linalg.norm(p0), p1 / np.linalg.norm(p1))) 92 | sin_omega = np.sin(omega) # syntactic sugar 93 | return np.sin((1.0 - t) * omega) / sin_omega * p0 + np.sin(t * omega) / sin_omega * p1 94 | 95 | 96 | def display_vector_arithmetic_results(imgs_to_display): 97 | fig = plt.figure(figsize=(6, 6)) 98 | title_fontsize = 'x-small' 99 | num_display_imgs = 7 100 | titles = ['happy women', 'happy woman (avg)', 'neutral women', 'neutral woman (avg)', 'neutral men', 'neutral man (avg)', 'result - happy man'] 101 | ax = np.zeros(num_display_imgs, dtype=object) 102 | assert len(imgs_to_display) == num_display_imgs, f'Expected {num_display_imgs} got {len(imgs_to_display)} images.' 103 | 104 | gs = fig.add_gridspec(5, 4, left=0.02, right=0.98, wspace=0.05, hspace=0.3) 105 | ax[0] = fig.add_subplot(gs[0, :3]) 106 | ax[1] = fig.add_subplot(gs[0, 3]) 107 | ax[2] = fig.add_subplot(gs[1, :3]) 108 | ax[3] = fig.add_subplot(gs[1, 3]) 109 | ax[4] = fig.add_subplot(gs[2, :3]) 110 | ax[5] = fig.add_subplot(gs[2, 3]) 111 | ax[6] = fig.add_subplot(gs[3:, 1:3]) 112 | 113 | for i in range(num_display_imgs): 114 | ax[i].imshow(cv.resize(imgs_to_display[i], (0, 0), fx=3, fy=3, interpolation=cv.INTER_NEAREST)) 115 | ax[i].set_title(titles[i], fontsize=title_fontsize) 116 | ax[i].tick_params(which='both', bottom=False, left=False, labelleft=False, labelbottom=False) 117 | 118 | plt.show() 119 | 120 | 121 | def generate_new_images(model_name, cgan_digit=None, generation_mode=True, slerp=True, a=None, b=None, should_display=True): 122 | """ Generate imagery using pre-trained generator (using vanilla_generator_000000.pth by default) 123 | 124 | Args: 125 | model_name (str): model name you want to use (default lookup location is BINARIES_PATH). 126 | cgan_digit (int): if specified generate that exact digit. 127 | generation_mode (enum): generate a single image from a random vector, interpolate between the 2 chosen latent 128 | vectors, or perform arithmetic over latent vectors (note: not every mode is supported for every model type) 129 | slerp (bool): if True use spherical interpolation otherwise use linear interpolation. 130 | a, b (numpy arrays): latent vectors, if set to None you'll be prompted to choose images you like, 131 | and use corresponding latent vectors instead. 132 | should_display (bool): Display the generated images before saving them. 133 | 134 | """ 135 | 136 | model_path = os.path.join(BINARIES_PATH, model_name) 137 | assert os.path.exists(model_path), f'Could not find the model {model_path}. You first need to train your generator.' 138 | 139 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu") 140 | 141 | # Prepare the correct (vanilla, cGAN, DCGAN, ...) 
model, load the weights and put the model into evaluation mode 142 | model_state = torch.load(model_path) 143 | gan_type = model_state["gan_type"] 144 | print(f'Found {gan_type} GAN!') 145 | _, generator = utils.get_gan(device, gan_type) 146 | generator.load_state_dict(model_state["state_dict"], strict=True) 147 | generator.eval() 148 | 149 | # Generate a single image, save it and potentially display it 150 | if generation_mode == GenerationMode.SINGLE_IMAGE: 151 | generated_imgs_path = os.path.join(DATA_DIR_PATH, 'generated_imagery') 152 | os.makedirs(generated_imgs_path, exist_ok=True) 153 | 154 | generated_img, _ = generate_from_random_latent_vector(generator, cgan_digit if gan_type == GANType.CGAN.name else None) 155 | utils.save_and_maybe_display_image(generated_imgs_path, generated_img, should_display=should_display) 156 | 157 | # Pick 2 images you like between which you'd like to interpolate (by typing 'y' into console) 158 | elif generation_mode == GenerationMode.INTERPOLATION: 159 | assert gan_type == GANType.VANILLA.name or gan_type ==GANType.DCGAN.name, f'Got {gan_type} but only VANILLA/DCGAN are supported for the interpolation mode.' 160 | 161 | interpolation_name = "spherical" if slerp else "linear" 162 | interpolation_fn = spherical_interpolation if slerp else linear_interpolation 163 | 164 | grid_interpolated_imgs_path = os.path.join(DATA_DIR_PATH, 'interpolated_imagery') # combined results dir 165 | decomposed_interpolated_imgs_path = os.path.join(grid_interpolated_imgs_path, f'tmp_{gan_type}_{interpolation_name}_dump') # dump separate results 166 | if os.path.exists(decomposed_interpolated_imgs_path): 167 | shutil.rmtree(decomposed_interpolated_imgs_path) 168 | os.makedirs(grid_interpolated_imgs_path, exist_ok=True) 169 | os.makedirs(decomposed_interpolated_imgs_path, exist_ok=True) 170 | 171 | latent_vector_a, latent_vector_b = [None, None] 172 | 173 | # If a and b were not specified loop until the user picked the 2 images he/she likes. 174 | found_good_vectors_flag = False 175 | if a is None or b is None: 176 | while not found_good_vectors_flag: 177 | generated_img, latent_vector = generate_from_random_latent_vector(generator) 178 | plt.imshow(generated_img); plt.title('Do you like this image?'); plt.show() 179 | user_input = input("Do you like this generated image? [y for yes]:") 180 | if user_input == 'y': 181 | if latent_vector_a is None: 182 | latent_vector_a = latent_vector 183 | print('Saved the first latent vector.') 184 | elif latent_vector_b is None: 185 | latent_vector_b = latent_vector 186 | print('Saved the second latent vector.') 187 | found_good_vectors_flag = True 188 | else: 189 | print('Well lets generate a new one!') 190 | continue 191 | else: 192 | print('Skipping latent vectors selection section and using cached ones.') 193 | latent_vector_a, latent_vector_b = [a, b] 194 | 195 | # Cache latent vectors 196 | if a is None or b is None: 197 | np.save(os.path.join(grid_interpolated_imgs_path, 'a.npy'), latent_vector_a) 198 | np.save(os.path.join(grid_interpolated_imgs_path, 'b.npy'), latent_vector_b) 199 | 200 | print(f'Lets do some {interpolation_name} interpolation!') 201 | interpolation_resolution = 47 # number of images between the vectors a and b 202 | num_interpolated_imgs = interpolation_resolution + 2 # + 2 so that we include a and b 203 | 204 | generated_imgs = [] 205 | for i in range(num_interpolated_imgs): 206 | t = i / (num_interpolated_imgs - 1) # goes from 0. to 1. 
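            # e.g. with num_interpolated_imgs = 49 the parameter t takes the values 0/48, 1/48, ..., 48/48,
            # so the first and last generated images correspond exactly to the chosen latent vectors a and b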
207 | current_latent_vector = interpolation_fn(t, latent_vector_a, latent_vector_b) 208 | generated_img = generate_from_specified_numpy_latent_vector(generator, current_latent_vector) 209 | 210 | print(f'Generated image [{i+1}/{num_interpolated_imgs}].') 211 | utils.save_and_maybe_display_image(decomposed_interpolated_imgs_path, generated_img, should_display=should_display) 212 | 213 | # Move from channel last to channel first (CHW->HWC), PyTorch's save_image function expects BCHW format 214 | generated_imgs.append(torch.tensor(np.moveaxis(generated_img, 2, 0))) 215 | 216 | interpolated_block_img = torch.stack(generated_imgs) 217 | interpolated_block_img = nn.Upsample(scale_factor=2.5, mode='nearest')(interpolated_block_img) 218 | save_image(interpolated_block_img, os.path.join(grid_interpolated_imgs_path, utils.get_available_file_name(grid_interpolated_imgs_path)), nrow=int(np.sqrt(num_interpolated_imgs))) 219 | 220 | elif generation_mode == GenerationMode.VECTOR_ARITHMETIC: 221 | assert gan_type == GANType.DCGAN.name, f'Got {gan_type} but only DCGAN is supported for arithmetic mode.' 222 | 223 | # Generate num_options face images and create a grid image from them 224 | num_options = 100 225 | generated_imgs = [] 226 | latent_vectors = [] 227 | padding = 2 228 | for i in range(num_options): 229 | generated_img, latent_vector = generate_from_random_latent_vector(generator) 230 | generated_imgs.append(torch.tensor(np.moveaxis(generated_img, 2, 0))) # make_grid expects CHW format 231 | latent_vectors.append(latent_vector) 232 | stacked_tensor_imgs = torch.stack(generated_imgs) 233 | final_tensor_img = make_grid(stacked_tensor_imgs, nrow=int(np.sqrt(num_options)), padding=padding) 234 | display_img = np.moveaxis(final_tensor_img.numpy(), 0, 2) 235 | 236 | # For storing latent vectors 237 | num_of_vectors_per_category = 3 238 | happy_woman_latent_vectors = [] 239 | neutral_woman_latent_vectors = [] 240 | neutral_man_latent_vectors = [] 241 | 242 | # Make it easy - by clicking on the plot you pick the image. 243 | def onclick(event): 244 | if event.dblclick: 245 | pass 246 | else: # single click 247 | if event.button == 1: # left click 248 | x_coord = event.xdata 249 | y_coord = event.ydata 250 | column = int(x_coord / (64 + padding)) 251 | row = int(y_coord / (64 + padding)) 252 | 253 | # Store latent vector corresponding to the image that the user clicked on. 254 | if len(happy_woman_latent_vectors) < num_of_vectors_per_category: 255 | happy_woman_latent_vectors.append(latent_vectors[10*row + column]) 256 | print(f'Picked image row={row}, column={column} as {len(happy_woman_latent_vectors)}. happy woman.') 257 | elif len(neutral_woman_latent_vectors) < num_of_vectors_per_category: 258 | neutral_woman_latent_vectors.append(latent_vectors[10*row + column]) 259 | print(f'Picked image row={row}, column={column} as {len(neutral_woman_latent_vectors)}. neutral woman.') 260 | elif len(neutral_man_latent_vectors) < num_of_vectors_per_category: 261 | neutral_man_latent_vectors.append(latent_vectors[10*row + column]) 262 | print(f'Picked image row={row}, column={column} as {len(neutral_man_latent_vectors)}. neutral man.') 263 | else: 264 | plt.close() 265 | 266 | plt.figure(figsize=(10, 10)) 267 | plt.imshow(display_img) 268 | # This is just an example you could also pick 3 neutral woman images with sunglasses, etc. 
269 | plt.title('Click on 3 happy women, 3 neutral women and \n 3 neutral men images (order matters!)') 270 | cid = plt.gcf().canvas.mpl_connect('button_press_event', onclick) 271 | plt.show() 272 | plt.gcf().canvas.mpl_disconnect(cid) 273 | print('Done choosing images.') 274 | 275 | # Calculate the average latent vector for every category (happy woman, neutral woman, neutral man) 276 | happy_woman_avg_latent_vector = np.mean(np.array(happy_woman_latent_vectors), axis=0) 277 | neutral_woman_avg_latent_vector = np.mean(np.array(neutral_woman_latent_vectors), axis=0) 278 | neutral_man_avg_latent_vector = np.mean(np.array(neutral_man_latent_vectors), axis=0) 279 | 280 | # By subtracting neutral woman from the happy woman we capture the "vector of smiling". Adding that vector 281 | # to a neutral man we get a happy man's latent vector! Our latent space has amazingly beautiful structure! 282 | happy_man_latent_vector = neutral_man_avg_latent_vector + (happy_woman_avg_latent_vector - neutral_woman_avg_latent_vector) 283 | 284 | # Generate images from these latent vectors 285 | happy_women_imgs = np.hstack([generate_from_specified_numpy_latent_vector(generator, v) for v in happy_woman_latent_vectors]) 286 | neutral_women_imgs = np.hstack([generate_from_specified_numpy_latent_vector(generator, v) for v in neutral_woman_latent_vectors]) 287 | neutral_men_imgs = np.hstack([generate_from_specified_numpy_latent_vector(generator, v) for v in neutral_man_latent_vectors]) 288 | 289 | happy_woman_avg_img = generate_from_specified_numpy_latent_vector(generator, happy_woman_avg_latent_vector) 290 | neutral_woman_avg_img = generate_from_specified_numpy_latent_vector(generator, neutral_woman_avg_latent_vector) 291 | neutral_man_avg_img = generate_from_specified_numpy_latent_vector(generator, neutral_man_avg_latent_vector) 292 | 293 | happy_man_img = generate_from_specified_numpy_latent_vector(generator, happy_man_latent_vector) 294 | 295 | display_vector_arithmetic_results([happy_women_imgs, happy_woman_avg_img, neutral_women_imgs, neutral_woman_avg_img, neutral_men_imgs, neutral_man_avg_img, happy_man_img]) 296 | else: 297 | raise Exception(f'Generation mode not yet supported.') 298 | 299 | 300 | if __name__ == "__main__": 301 | parser = argparse.ArgumentParser() 302 | parser.add_argument("--model_name", type=str, help="Pre-trained generator model name", default=r'VANILLA_000000.pth') 303 | parser.add_argument("--cgan_digit", type=int, help="Used only for cGAN - generate specified digit", default=3) 304 | parser.add_argument("--generation_mode", type=bool, help="Pick between 3 generation modes", default=GenerationMode.SINGLE_IMAGE) 305 | parser.add_argument("--slerp", type=bool, help="Should use spherical interpolation (default No)", default=False) 306 | parser.add_argument("--should_display", type=bool, help="Display intermediate results", default=True) 307 | args = parser.parse_args() 308 | 309 | # The first time you start generation in the interpolation mode it will cache a and b 310 | # which you'll choose the first time you run the it. 
311 | a_path = os.path.join(DATA_DIR_PATH, 'interpolated_imagery', 'a.npy') 312 | b_path = os.path.join(DATA_DIR_PATH, 'interpolated_imagery', 'b.npy') 313 | latent_vector_a = np.load(a_path) if os.path.exists(a_path) else None 314 | latent_vector_b = np.load(b_path) if os.path.exists(b_path) else None 315 | 316 | generate_new_images( 317 | args.model_name, 318 | args.cgan_digit, 319 | generation_mode=args.generation_mode, 320 | slerp=args.slerp, 321 | a=latent_vector_a, 322 | b=latent_vector_b, 323 | should_display=args.should_display) 324 | -------------------------------------------------------------------------------- /models/binaries/CGAN_000000.pth: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gordicaleksa/pytorch-GANs/2119f6d9522c710e658f897cc7c97b8ac771b8ea/models/binaries/CGAN_000000.pth -------------------------------------------------------------------------------- /models/binaries/DCGAN_000000.pth: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gordicaleksa/pytorch-GANs/2119f6d9522c710e658f897cc7c97b8ac771b8ea/models/binaries/DCGAN_000000.pth -------------------------------------------------------------------------------- /models/binaries/VANILLA_000000.pth: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gordicaleksa/pytorch-GANs/2119f6d9522c710e658f897cc7c97b8ac771b8ea/models/binaries/VANILLA_000000.pth -------------------------------------------------------------------------------- /models/definitions/conditional_gan.py: -------------------------------------------------------------------------------- 1 | """Conditional GAN (cGAN) implementation. 2 | 3 | It's completely the same architecture as vanilla GAN just with additional conditioning vector on the input. 4 | 5 | Note: I could have merged this file with vanilla_gan.py and made the conditioning vector be an optional input, 6 | but I decided not to for ease of understanding for the beginners. Otherwise it could get a bit confusing. 7 | """ 8 | 9 | 10 | import torch 11 | from torch import nn 12 | 13 | 14 | from utils.constants import LATENT_SPACE_DIM, MNIST_IMG_SIZE, MNIST_NUM_CLASSES 15 | from .vanilla_gan import vanilla_block 16 | 17 | 18 | class ConditionalGeneratorNet(torch.nn.Module): 19 | """Simple 4-layer MLP generative neural network. 20 | 21 | By default it works for MNIST size images (28x28). 22 | 23 | There are many ways you can construct generator to work on MNIST. 24 | Even without normalization layers it will work ok. Even with 5 layers it will work ok, etc. 25 | 26 | It's generally an open-research question on how to evaluate GANs i.e. quantify that "ok" statement. 27 | 28 | People tried to automate the task using IS (inception score, often used incorrectly), etc. 29 | but so far it always ends up with some form of visual inspection (human in the loop). 30 | 31 | """ 32 | 33 | def __init__(self, img_shape=(MNIST_IMG_SIZE, MNIST_IMG_SIZE)): 34 | super().__init__() 35 | self.generated_img_shape = img_shape 36 | # We're adding the conditioning vector (hence +MNIST_NUM_CLASSES) which will directly control 37 | # which MNIST class we should generate. We did not have this control in the original (vanilla) GAN. 38 | # If that vector = [1., 0., ..., 0.] we generate 0, if [0., 1., 0., ..., 0.] we generate 1, etc. 
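        # Illustrative example (not in the original file): torch.nn.functional.one_hot(torch.tensor([3]), MNIST_NUM_CLASSES)
        # produces [[0, 0, 0, 1, 0, 0, 0, 0, 0, 0]]; concatenating that with the latent vector is what steers the generator.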
39 | num_neurons_per_layer = [LATENT_SPACE_DIM + MNIST_NUM_CLASSES, 256, 512, 1024, img_shape[0] * img_shape[1]] 40 | 41 | self.net = nn.Sequential( 42 | *vanilla_block(num_neurons_per_layer[0], num_neurons_per_layer[1]), 43 | *vanilla_block(num_neurons_per_layer[1], num_neurons_per_layer[2]), 44 | *vanilla_block(num_neurons_per_layer[2], num_neurons_per_layer[3]), 45 | *vanilla_block(num_neurons_per_layer[3], num_neurons_per_layer[4], normalize=False, activation=nn.Tanh()) 46 | ) 47 | 48 | def forward(self, latent_vector_batch, one_hot_conditioning_vector_batch): 49 | img_batch_flattened = self.net(torch.cat((latent_vector_batch, one_hot_conditioning_vector_batch), 1)) 50 | # just un-flatten using view into (N, 1, 28, 28) shape for MNIST 51 | return img_batch_flattened.view(img_batch_flattened.shape[0], 1, *self.generated_img_shape) 52 | 53 | 54 | class ConditionalDiscriminatorNet(torch.nn.Module): 55 | """Simple 3-layer MLP discriminative neural network. It should output probability 1. for real images and 0. for fakes. 56 | 57 | By default it works for MNIST size images (28x28). 58 | 59 | Again there are many ways you can construct discriminator network that would work on MNIST. 60 | You could use more or less layers, etc. Using normalization as in the DCGAN paper doesn't work well though. 61 | 62 | """ 63 | 64 | def __init__(self, img_shape=(MNIST_IMG_SIZE, MNIST_IMG_SIZE)): 65 | super().__init__() 66 | # Same as above using + MNIST_NUM_CLASSES we add support for the conditioning vector 67 | num_neurons_per_layer = [img_shape[0] * img_shape[1] + MNIST_NUM_CLASSES, 512, 256, 1] 68 | 69 | self.net = nn.Sequential( 70 | *vanilla_block(num_neurons_per_layer[0], num_neurons_per_layer[1], normalize=False), 71 | *vanilla_block(num_neurons_per_layer[1], num_neurons_per_layer[2], normalize=False), 72 | *vanilla_block(num_neurons_per_layer[2], num_neurons_per_layer[3], normalize=False, activation=nn.Sigmoid()) 73 | ) 74 | 75 | def forward(self, img_batch, one_hot_conditioning_vector_batch): 76 | img_batch_flattened = img_batch.view(img_batch.shape[0], -1) # flatten from (N,1,H,W) into (N, HxW) 77 | # One hot conditioning vector batch is of shape (N, 10) for MNIST 78 | conditioned_input = torch.cat((img_batch_flattened, one_hot_conditioning_vector_batch), 1) 79 | return self.net(conditioned_input) 80 | 81 | 82 | 83 | -------------------------------------------------------------------------------- /models/definitions/dcgan.py: -------------------------------------------------------------------------------- 1 | """DCGAN implementation 2 | 3 | Note1: 4 | Many implementations out there, including PyTorch's official, did certain deviations from the original arch, 5 | without clearly explaining why they did it. PyTorch for example uses 512 channels initially instead of 1024. 6 | 7 | Note2: 8 | Small modification I did compared to the original paper is used kernel size = 4 as I can't get 64x64 9 | output spatial dimension with 5 no matter the padding setting. I noticed others did the same thing. 10 | 11 | Also I'm not doing 0-centered normal weight initialization - it actually gives far worse results. 12 | Batch normalization, in general, reduced the need for smart initialization but it obviously still matters. 
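    To spell out the arithmetic behind Note2: for ConvTranspose2d the output spatial size is
    out = (in - 1) * stride - 2 * padding + kernel_size. Doubling 32 -> 64 with stride = 2,
    kernel_size = 4 and padding = 1 gives (32 - 1) * 2 - 2 + 4 = 64, whereas kernel_size = 5
    gives 67 - 2 * padding, i.e. 65 or 63 for integer padding - never exactly 64.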
13 | 14 | """ 15 | 16 | import torch 17 | from torch import nn 18 | import numpy as np 19 | 20 | 21 | from utils.constants import LATENT_SPACE_DIM 22 | 23 | 24 | def dcgan_upsample_block(in_channels, out_channels, normalize=True, activation=None): 25 | # Bias set to True gives unnatural color casts 26 | layers = [nn.ConvTranspose2d(in_channels=in_channels, out_channels=out_channels, kernel_size=4, stride=2, padding=1, bias=False)] 27 | # There were debates to whether BatchNorm should go before or after the activation function, in my experiments it 28 | # did not matter. Goodfellow also had a talk where he mentioned that it should not matter. 29 | if normalize: 30 | layers.append(nn.BatchNorm2d(out_channels)) 31 | layers.append(nn.ReLU() if activation is None else activation) 32 | return layers 33 | 34 | 35 | class ConvolutionalGenerativeNet(nn.Module): 36 | 37 | def __init__(self): 38 | super().__init__() 39 | 40 | # Constants as defined in the DCGAN paper 41 | num_channels_per_layer = [1024, 512, 256, 128, 3] 42 | self.init_volume_shape = (num_channels_per_layer[0], 4, 4) 43 | 44 | # Both with and without bias gave similar results 45 | self.linear = nn.Linear(LATENT_SPACE_DIM, num_channels_per_layer[0] * np.prod(self.init_volume_shape[1:])) 46 | 47 | self.net = nn.Sequential( 48 | *dcgan_upsample_block(num_channels_per_layer[0], num_channels_per_layer[1]), 49 | *dcgan_upsample_block(num_channels_per_layer[1], num_channels_per_layer[2]), 50 | *dcgan_upsample_block(num_channels_per_layer[2], num_channels_per_layer[3]), 51 | *dcgan_upsample_block(num_channels_per_layer[3], num_channels_per_layer[4], normalize=False, activation=nn.Tanh()) 52 | ) 53 | 54 | def forward(self, latent_vector_batch): 55 | # Project from the space with dimensionality 100 into the space with dimensionality 1024 * 4 * 4 56 | # -> basic linear algebra (huh you thought you'll never need math?) and reshape into a 3D volume 57 | latent_vector_batch_projected = self.linear(latent_vector_batch) 58 | latent_vector_batch_projected_reshaped = latent_vector_batch_projected.view(latent_vector_batch_projected.shape[0], *self.init_volume_shape) 59 | 60 | return self.net(latent_vector_batch_projected_reshaped) 61 | 62 | 63 | def dcgan_downsample_block(in_channels, out_channels, normalize=True, activation=None, padding=1): 64 | layers = [nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=4, stride=2, padding=padding, bias=False)] 65 | if normalize: 66 | layers.append(nn.BatchNorm2d(out_channels)) 67 | layers.append(nn.LeakyReLU(0.2) if activation is None else activation) 68 | return layers 69 | 70 | 71 | class ConvolutionalDiscriminativeNet(nn.Module): 72 | 73 | def __init__(self): 74 | super().__init__() 75 | 76 | num_channels_per_layer = [3, 128, 256, 512, 1024, 1] 77 | 78 | # Since the last volume has a shape = 1024x4x4, we can do 1 more block and since it has a 4x4 kernels it will 79 | # collapse the spatial dimension into 1x1 and putting channel number to 1 and padding to 0 we get a scalar value 80 | # that we can pass into Sigmoid - effectively simulating a fully connected layer. 
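        # Spatial sizes for a 64x64 input through the stride-2, kernel-4, padding-1 convs: 64 -> 32 -> 16 -> 8 -> 4;
        # the final conv (kernel=4, padding=0) maps 4 -> (4 - 4) / 2 + 1 = 1, i.e. one scalar prediction per image.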
81 |         self.net = nn.Sequential(
82 |             *dcgan_downsample_block(num_channels_per_layer[0], num_channels_per_layer[1], normalize=False),
83 |             *dcgan_downsample_block(num_channels_per_layer[1], num_channels_per_layer[2]),
84 |             *dcgan_downsample_block(num_channels_per_layer[2], num_channels_per_layer[3]),
85 |             *dcgan_downsample_block(num_channels_per_layer[3], num_channels_per_layer[4]),
86 |             *dcgan_downsample_block(num_channels_per_layer[4], num_channels_per_layer[5], normalize=False, activation=nn.Sigmoid(), padding=0),
87 |         )
88 | 
89 |     def forward(self, img_batch):
90 |         return self.net(img_batch)
91 | 
92 | 
93 | # Hurts the performance in all my experiments; leaving it here as proof that I tried it and it didn't give good results
94 | # Batch normalization in general reduces the need for smart initialization - that's one of its main advantages.
95 | def weights_init_normal(m):
96 |     classname = m.__class__.__name__
97 |     print(classname)
98 |     if classname.find("Conv2d") != -1:
99 |         torch.nn.init.normal_(m.weight.data, 0.0, 0.02)
100 |     elif classname.find("BatchNorm2d") != -1:
101 |         # It wouldn't make sense to use a 0-centered normal distribution here as it would clamp the outputs to 0 -
102 |         # that's why it's a 1-centered normal distribution with std dev of 0.02, as specified in the paper
103 |         torch.nn.init.normal_(m.weight.data, 1.0, 0.02)
104 |         torch.nn.init.constant_(m.bias.data, 0.0)
105 | 
106 | 
--------------------------------------------------------------------------------
/models/definitions/vanilla_gan.py:
--------------------------------------------------------------------------------
1 | """The original (vanilla) GAN implementation with some modifications.
2 | 
3 | Modifications:
4 |     The original paper used the maxout activation and dropout for regularization.
5 |     I'm using LeakyReLU instead, and batch normalization, which came after the original paper was published.
6 | 
7 |     Also note that certain architectural design decisions were inspired by the DCGAN paper.
8 | """
9 | 
10 | import torch
11 | from torch import nn
12 | 
13 | 
14 | from utils.constants import LATENT_SPACE_DIM, MNIST_IMG_SIZE
15 | 
16 | 
17 | def vanilla_block(in_feat, out_feat, normalize=True, activation=None):
18 |     layers = [nn.Linear(in_feat, out_feat)]
19 |     if normalize:
20 |         layers.append(nn.BatchNorm1d(out_feat))
21 |     # 0.2 was used in DCGAN; I experimented with other values like 0.5 and didn't notice a significant change
22 |     layers.append(nn.LeakyReLU(0.2) if activation is None else activation)
23 |     return layers
24 | 
25 | 
26 | class GeneratorNet(torch.nn.Module):
27 |     """Simple 4-layer MLP generative neural network.
28 | 
29 |     By default it works for MNIST size images (28x28).
30 | 
31 |     There are many ways you could construct a generator that works on MNIST.
32 |     Even without normalization layers it will work ok. Even with 5 layers it will work ok, etc.
33 | 
34 |     It's generally an open research question how to evaluate GANs, i.e. how to quantify that "ok" statement.
35 | 
36 |     People tried to automate the task using IS (inception score, often used incorrectly), etc.
37 |     but so far it always ends up with some form of visual inspection (human in the loop). 
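    (For reference, a note added here: IS = exp( E_x[ KL(p(y|x) || p(y)) ] ), computed with a pretrained Inception
    classifier - it rewards samples that are individually confidently classified yet diverse across classes.)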
38 | 
39 |     """
40 | 
41 |     def __init__(self, img_shape=(MNIST_IMG_SIZE, MNIST_IMG_SIZE)):
42 |         super().__init__()
43 |         self.generated_img_shape = img_shape
44 |         num_neurons_per_layer = [LATENT_SPACE_DIM, 256, 512, 1024, img_shape[0] * img_shape[1]]
45 | 
46 |         self.net = nn.Sequential(
47 |             *vanilla_block(num_neurons_per_layer[0], num_neurons_per_layer[1]),
48 |             *vanilla_block(num_neurons_per_layer[1], num_neurons_per_layer[2]),
49 |             *vanilla_block(num_neurons_per_layer[2], num_neurons_per_layer[3]),
50 |             *vanilla_block(num_neurons_per_layer[3], num_neurons_per_layer[4], normalize=False, activation=nn.Tanh())
51 |         )
52 | 
53 |     def forward(self, latent_vector_batch):
54 |         img_batch_flattened = self.net(latent_vector_batch)
55 |         # just un-flatten using view into (N, 1, 28, 28) shape for MNIST
56 |         return img_batch_flattened.view(img_batch_flattened.shape[0], 1, *self.generated_img_shape)
57 | 
58 | 
59 | class DiscriminatorNet(torch.nn.Module):
60 |     """Simple 3-layer MLP discriminative neural network. It should output probability 1. for real images and 0. for fakes.
61 | 
62 |     By default it works for MNIST size images (28x28).
63 | 
64 |     Again, there are many ways you could construct a discriminator network that would work on MNIST.
65 |     You could use more or fewer layers, etc. Using normalization as in the DCGAN paper doesn't work well though.
66 | 
67 |     """
68 | 
69 |     def __init__(self, img_shape=(MNIST_IMG_SIZE, MNIST_IMG_SIZE)):
70 |         super().__init__()
71 |         num_neurons_per_layer = [img_shape[0] * img_shape[1], 512, 256, 1]
72 | 
73 |         self.net = nn.Sequential(
74 |             *vanilla_block(num_neurons_per_layer[0], num_neurons_per_layer[1], normalize=False),
75 |             *vanilla_block(num_neurons_per_layer[1], num_neurons_per_layer[2], normalize=False),
76 |             *vanilla_block(num_neurons_per_layer[2], num_neurons_per_layer[3], normalize=False, activation=nn.Sigmoid())
77 |         )
78 | 
79 |     def forward(self, img_batch):
80 |         img_batch_flattened = img_batch.view(img_batch.shape[0], -1)  # flatten from (N,1,H,W) into (N, HxW)
81 |         return self.net(img_batch_flattened)
82 | 
83 | 
84 | 
85 | 
--------------------------------------------------------------------------------
/playground.py:
--------------------------------------------------------------------------------
1 | import os
2 | 
3 | 
4 | import torch
5 | from torch import nn
6 | 
7 | 
8 | from utils.video_utils import create_gif
9 | from utils.constants import *
10 | 
11 | 
12 | def understand_adversarial_loss():
13 |     """Understand why we can use binary cross entropy as the adversarial loss.
14 | 
15 |     It's currently set up to push the discriminator's output close to 1 (we assume real images),
16 |     but you can create fake_images_gt = torch.tensor(0.) and do a similar thing for fake images.
17 | 
18 |     How to use it:
19 |         Read through the comments and analyze the console output.
20 | 
21 |     """
22 |     adversarial_loss = nn.BCELoss()
23 | 
24 |     logits = [-10, -3, 0, 3, 10]  # Simulation of discriminator net's outputs before the sigmoid activation
25 | 
26 |     # This will set up the BCE loss as -log(x) (0. would set it to -log(1-x))
27 |     real_images_gt = torch.tensor(1.)
28 | 
29 |     lr = 0.1  # learning rate
30 | 
31 |     for logit in logits:
32 |         print('*' * 5)
33 | 
34 |         # Consider this as discriminator net's last layer's (single neuron) output
35 |         # just before the sigmoid which converts it to probability. 
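        # Concrete numbers for orientation (a quick sanity check added here): sigmoid(-10) ~= 4.54e-5, so
        # BCE = -log(4.54e-5) ~= 10.0, while sigmoid(10) ~= 0.99995 gives BCE ~= 4.5e-5 -
        # the worse the logit, the bigger the loss (and its gradient).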
36 |         logit_tensor = torch.tensor(float(logit), requires_grad=True)
37 |         print(f'logit value before optimization: {logit}')
38 | 
39 |         # Note: with requires_grad=True we force PyTorch to build the computational graph so that we can push the logit
40 |         # towards values which will give us probability 1
41 | 
42 |         # Discriminator's output (probability that the image is real)
43 |         prediction = nn.Sigmoid()(logit_tensor)
44 |         print(f'discriminator net\'s output: {prediction}')
45 | 
46 |         # The closer the prediction is to 1 the lower the loss will be!
47 |         # -log(prediction) <- for predictions close to 1 loss will be close to 0,
48 |         # predictions close to 0 will cause the loss to go to "+ infinity".
49 |         loss = adversarial_loss(prediction, real_images_gt)
50 |         print(f'adversarial loss output: {loss}')
51 | 
52 |         loss.backward()  # calculate the gradient (sets the .grad field of the logit_tensor)
53 |         # The closer the discriminator's prediction is to 1, the closer the loss will be to 0,
54 |         # and the smaller this gradient will be, as there is no need to change the logit -
55 |         # we accomplished what we wanted: to make the prediction as close to 1 as possible.
56 |         print(f'logit gradient {logit_tensor.grad.data}')
57 | 
58 |         # Effectively the biggest update will be made for logit -10.
59 |         # Logit value -10 will cause the discriminator to output a probability close to 0, which gives us a huge loss
60 |         # -log(0), which causes a big (negative) grad value which then pushes the logit towards "+ infinity",
61 |         # as that forces the discriminator to output a probability of 1. So -10 goes to ~ -9.9 in the first iteration.
62 |         logit_tensor.data -= lr * logit_tensor.grad.data
63 |         print(f'logit value after optimization {logit_tensor}')
64 | 
65 |         print('')
66 | 
67 | 
68 | if __name__ == "__main__":
69 |     # understand_adversarial_loss()
70 | 
71 |     create_gif(os.path.join(DATA_DIR_PATH, 'debug_imagery'), os.path.join(DATA_DIR_PATH, 'default.gif'), downsample=10)
72 | 
73 | 
74 | 
--------------------------------------------------------------------------------
/train_cgan.py:
--------------------------------------------------------------------------------
1 | """
2 |     The main difference between training a vanilla GAN and training a cGAN is that we additionally
3 |     add this conditioning vector y to the discriminator's and generator's inputs (by just concatenating it to the old input).
4 | 
5 |     y is a one-hot vector, meaning if we want to condition the generator to:
6 |         generate 0 -> we add [1., 0., ..., 0.] (10 elements)
7 |         generate 1 -> we add [0., 1., 0., ..., 0.] (10 elements)
8 |         ... 
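    In code this is just torch.nn.functional.one_hot, e.g. (a usage sketch added here):
        torch.nn.functional.one_hot(torch.tensor([3]), num_classes=10).float()  # -> [[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]]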
9 | """
10 | 
11 | import os
12 | import argparse
13 | import time
14 | 
15 | 
16 | import numpy as np
17 | import torch
18 | from torch import nn
19 | from torchvision.utils import save_image, make_grid
20 | from torch.utils.tensorboard import SummaryWriter
21 | 
22 | 
23 | import utils.utils as utils
24 | from utils.constants import *
25 | 
26 | 
27 | def train_cgan(training_config):
28 |     writer = SummaryWriter()  # (tensorboard) writer will output to ./runs/ directory by default
29 |     device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # checking whether you have a GPU
30 | 
31 |     # Prepare MNIST data loader (it will download MNIST the first time you run it)
32 |     mnist_data_loader = utils.get_mnist_data_loader(training_config['batch_size'])
33 | 
34 |     # Fetch feed-forward nets (place them on GPU if present) and optimizers which will tweak their weights
35 |     discriminator_net, generator_net = utils.get_gan(device, GANType.CGAN.name)
36 |     discriminator_opt, generator_opt = utils.get_optimizers(discriminator_net, generator_net)
37 | 
38 |     # 1s will configure BCELoss into -log(x) whereas 0s will configure it to -log(1-x)
39 |     # So that means we can effectively use binary cross-entropy loss to achieve adversarial loss!
40 |     adversarial_loss = nn.BCELoss()
41 |     real_images_gt = torch.ones((training_config['batch_size'], 1), device=device)
42 |     fake_images_gt = torch.zeros((training_config['batch_size'], 1), device=device)
43 | 
44 |     # For logging purposes
45 |     ref_batch_size = MNIST_NUM_CLASSES**2  # We'll create a grid 10x10 where each column is a single digit
46 |     ref_noise_batch = utils.get_gaussian_latent_batch(ref_batch_size, device)  # Track G's quality during training
47 | 
48 |     # We'll generate exactly this grid of 10x10 (each digit in a separate column, 10 instances each) for easier debugging
49 |     ref_labels = torch.tensor(np.array([digit for _ in range(MNIST_NUM_CLASSES) for digit in range(MNIST_NUM_CLASSES)]), dtype=torch.int64)
50 |     ref_labels_one_hot = torch.nn.functional.one_hot(ref_labels, MNIST_NUM_CLASSES).type(torch.FloatTensor).to(device)
51 | 
52 |     discriminator_loss_values = []
53 |     generator_loss_values = []
54 |     img_cnt = 0
55 | 
56 |     ts = time.time()  # start measuring time
57 | 
58 |     # cGAN training loop
59 |     utils.print_training_info_to_console(training_config)
60 |     for epoch in range(training_config['num_epochs']):
61 |         for batch_idx, (real_images, labels) in enumerate(mnist_data_loader):
62 | 
63 |             # Labels [0-9], converted to one-hot encoding, are used for conditioning - basically a fancy word for:
64 |             # if we feed in e.g. [1., 0., ..., 0.] we expect a digit from class 0.
65 |             # I found that using real labels for training both G and D works nicely. No need for random labels. 
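            # Shapes at this point (a note added here): labels is (N,) int64, one_hot gives (N, 10) int64,
            # and .type(torch.FloatTensor) casts it to float32 before it's moved to the device.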
66 |             labels_one_hot = torch.nn.functional.one_hot(labels, MNIST_NUM_CLASSES).type(torch.FloatTensor).to(device)
67 |             real_images = real_images.to(device)  # Place imagery on GPU (if present)
68 | 
69 |             #
70 |             # Train discriminator: maximize V = log(D(x|y)) + log(1-D(G(z|y))) or equivalently minimize -V
71 |             # Note: D-discriminator, x-real images, G-generator, z-latent vectors, G(z)-fake images, y-conditioning
72 |             #
73 | 
74 |             # Zero out .grad variables in the discriminator network (otherwise we would have corrupt results)
75 |             discriminator_opt.zero_grad()
76 | 
77 |             # -log(D(x|y)) <- we minimize this by making D(x|y) as close to 1 as possible
78 |             real_discriminator_loss = adversarial_loss(discriminator_net(real_images, labels_one_hot), real_images_gt)
79 | 
80 |             # G(z|y) | G ~ generator_net and z ~ utils.get_gaussian_latent_batch(batch_size, device), y ~ conditioning
81 |             fake_images = generator_net(utils.get_gaussian_latent_batch(training_config['batch_size'], device), labels_one_hot)
82 |             # D(G(z|y)), we call detach() so that we don't calculate gradients for the generator during backward()
83 |             fake_images_predictions = discriminator_net(fake_images.detach(), labels_one_hot)
84 |             # -log(1 - D(G(z|y))) <- we minimize this by making D(G(z|y)) as close to 0 as possible
85 |             fake_discriminator_loss = adversarial_loss(fake_images_predictions, fake_images_gt)
86 | 
87 |             discriminator_loss = real_discriminator_loss + fake_discriminator_loss
88 |             discriminator_loss.backward()  # this will populate .grad vars in the discriminator net
89 |             discriminator_opt.step()  # perform D weights update according to optimizer's strategy
90 | 
91 |             #
92 |             # Train G: minimize V1 = log(1-D(G(z|y))) or equivalently maximize V2 = log(D(G(z|y))) (or min of -V2)
93 |             # The original expression (V1) had problems with diminishing gradients for G when D is too good.
94 |             #
95 | 
96 |             # If you want to cause mode collapse, probably the easiest way would be to add "for i in range(n)"
97 |             # here (simply train G more frequently than D). n = 10 worked for me; other values will also work - experiment.
98 | 
99 |             # Zero out .grad variables in the generator network (otherwise we would have corrupt results)
100 |             generator_opt.zero_grad()
101 | 
102 |             # D(G(z|y)) (see above for explanations)
103 |             generated_images_predictions = discriminator_net(generator_net(utils.get_gaussian_latent_batch(training_config['batch_size'], device), labels_one_hot), labels_one_hot)
104 |             # By placing real_images_gt here we minimize -log(D(G(z|y))) which happens when D approaches 1
105 |             # i.e. we're tricking D into thinking that these generated images are real! 
106 |             generator_loss = adversarial_loss(generated_images_predictions, real_images_gt)
107 | 
108 |             generator_loss.backward()  # this will populate .grad vars in the G net (also in D but we won't use those)
109 |             generator_opt.step()  # perform G weights update according to optimizer's strategy
110 | 
111 |             #
112 |             # Logging and checkpoint creation
113 |             #
114 | 
115 |             generator_loss_values.append(generator_loss.item())
116 |             discriminator_loss_values.append(discriminator_loss.item())
117 | 
118 |             if training_config['enable_tensorboard']:
119 |                 writer.add_scalars('losses/g-and-d', {'g': generator_loss.item(), 'd': discriminator_loss.item()}, len(mnist_data_loader) * epoch + batch_idx + 1)
120 |                 # Save debug imagery to tensorboard also (some redundancy but it may be more beginner-friendly)
121 |                 if training_config['debug_imagery_log_freq'] is not None and batch_idx % training_config['debug_imagery_log_freq'] == 0:
122 |                     with torch.no_grad():
123 |                         log_generated_images = generator_net(ref_noise_batch, ref_labels_one_hot)
124 |                         log_generated_images_resized = nn.Upsample(scale_factor=1.5, mode='nearest')(log_generated_images)
125 |                         intermediate_imagery_grid = make_grid(log_generated_images_resized, nrow=int(np.sqrt(ref_batch_size)), normalize=True)
126 |                         writer.add_image('intermediate generated imagery', intermediate_imagery_grid, len(mnist_data_loader) * epoch + batch_idx + 1)
127 | 
128 |             if training_config['console_log_freq'] is not None and batch_idx % training_config['console_log_freq'] == 0:
129 |                 print(f'GAN training: time elapsed = {(time.time() - ts):.2f} [s] | epoch={epoch + 1} | batch= [{batch_idx + 1}/{len(mnist_data_loader)}]')
130 | 
131 |             # Save intermediate generator images (more convenient like this than through tensorboard)
132 |             if training_config['debug_imagery_log_freq'] is not None and batch_idx % training_config['debug_imagery_log_freq'] == 0:
133 |                 with torch.no_grad():
134 |                     log_generated_images = generator_net(ref_noise_batch, ref_labels_one_hot)
135 |                     log_generated_images_resized = nn.Upsample(scale_factor=1.5, mode='nearest')(log_generated_images)
136 |                     save_image(log_generated_images_resized, os.path.join(training_config['debug_path'], f'{str(img_cnt).zfill(6)}.jpg'), nrow=int(np.sqrt(ref_batch_size)), normalize=True)
137 |                     img_cnt += 1
138 | 
139 |             # Save generator checkpoint
140 |             if training_config['checkpoint_freq'] is not None and (epoch + 1) % training_config['checkpoint_freq'] == 0 and batch_idx == 0:
141 |                 ckpt_model_name = f"cgan_ckpt_epoch_{epoch + 1}_batch_{batch_idx + 1}.pth"
142 |                 torch.save(utils.get_training_state(generator_net, GANType.CGAN.name), os.path.join(CHECKPOINTS_PATH, ckpt_model_name))
143 | 
144 |     # Save the latest generator in the binaries directory
145 |     torch.save(utils.get_training_state(generator_net, GANType.CGAN.name), os.path.join(BINARIES_PATH, utils.get_available_binary_name(GANType.CGAN)))
146 | 
147 | 
148 | if __name__ == "__main__":
149 |     #
150 |     # fixed args - don't change these unless you have a good reason
151 |     #
152 |     debug_path = os.path.join(DATA_DIR_PATH, 'debug_imagery')
153 |     os.makedirs(debug_path, exist_ok=True)
154 | 
155 |     #
156 |     # modifiable args - feel free to play with these (only a small subset is exposed by design to avoid cluttering)
157 |     #
158 |     parser = argparse.ArgumentParser()
159 |     parser.add_argument("--num_epochs", type=int, help="number of training epochs", default=100)
160 |     parser.add_argument("--batch_size", type=int, help="training batch size", default=128)
161 | 
162 |     # 
logging/debugging/checkpoint related (helps a lot with experimentation)
163 |     parser.add_argument("--enable_tensorboard", type=bool, help="enable tensorboard logging (D and G loss)", default=True)
164 |     parser.add_argument("--debug_imagery_log_freq", type=int, help="log generator images during training (batch) freq", default=100)
165 |     parser.add_argument("--console_log_freq", type=int, help="log to output console (batch) freq", default=100)
166 |     parser.add_argument("--checkpoint_freq", type=int, help="checkpoint model saving (epoch) freq", default=5)
167 |     args = parser.parse_args()
168 | 
169 |     # Wrapping training configuration into a dictionary
170 |     training_config = dict()
171 |     for arg in vars(args):
172 |         training_config[arg] = getattr(args, arg)
173 |     training_config['debug_path'] = debug_path
174 | 
175 |     # train GAN model
176 |     train_cgan(training_config)
177 | 
--------------------------------------------------------------------------------
/train_dcgan.py:
--------------------------------------------------------------------------------
1 | """
2 |     The training loop of DCGAN is literally unchanged compared to vanilla GAN.
3 | 
4 |     Things that did change:
5 |         * Model architecture - using CNNs compared to fully connected networks
6 |         * We're now using the CelebA dataset loaded via utils.get_celeba_data_loader (MNIST would work, it's just too easy)
7 |         * Logging parameters and number of epochs (as we have bigger images)
8 | 
9 | """
10 | 
11 | import os
12 | import argparse
13 | import time
14 | 
15 | 
16 | import numpy as np
17 | import torch
18 | from torch import nn
19 | from torchvision.utils import save_image, make_grid
20 | from torch.utils.tensorboard import SummaryWriter
21 | 
22 | 
23 | import utils.utils as utils
24 | from utils.constants import *
25 | 
26 | 
27 | def train_dcgan(training_config):
28 |     writer = SummaryWriter()  # (tensorboard) writer will output to ./runs/ directory by default
29 |     device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # checking whether you have a GPU
30 | 
31 |     # Prepare CelebA data loader (it will download preprocessed CelebA the first time you run it ~240 MB)
32 |     celeba_data_loader = utils.get_celeba_data_loader(training_config['batch_size'])
33 | 
34 |     # Fetch convolutional nets (place them on GPU if present) and optimizers which will tweak their weights
35 |     discriminator_net, generator_net = utils.get_gan(device, GANType.DCGAN.name)
36 |     discriminator_opt, generator_opt = utils.get_optimizers(discriminator_net, generator_net)
37 | 
38 |     # 1s will configure BCELoss into -log(x) whereas 0s will configure it to -log(1-x)
39 |     # So that means we can effectively use binary cross-entropy loss to achieve adversarial loss!
40 |     adversarial_loss = nn.BCELoss()
41 |     real_images_gt = torch.ones((training_config['batch_size'], 1, 1, 1), device=device)
42 |     fake_images_gt = torch.zeros((training_config['batch_size'], 1, 1, 1), device=device)
43 | 
44 |     # For logging purposes
45 |     ref_batch_size = 25
46 |     ref_noise_batch = utils.get_gaussian_latent_batch(ref_batch_size, device)  # Track G's quality during training
47 |     discriminator_loss_values = []
48 |     generator_loss_values = []
49 |     img_cnt = 0
50 | 
51 |     ts = time.time()  # start measuring time
52 | 
53 |     # GAN training loop, it's always smart to first train the discriminator so as to avoid mode collapse! 
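    # Per batch, the loop below does two updates (a summary note added here):
    # (1) a D step on real images and on detach()-ed fakes, (2) a G step that pushes fresh fakes through D
    # with "real" targets - the non-saturating GAN loss from the original paper.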
54 |     utils.print_training_info_to_console(training_config)
55 |     for epoch in range(training_config['num_epochs']):
56 |         for batch_idx, (real_images, _) in enumerate(celeba_data_loader):
57 | 
58 |             real_images = real_images.to(device)  # Place imagery on GPU (if present)
59 | 
60 |             #
61 |             # Train discriminator: maximize V = log(D(x)) + log(1-D(G(z))) or equivalently minimize -V
62 |             # Note: D = discriminator, x = real images, G = generator, z = latent Gaussian vectors, G(z) = fake images
63 |             #
64 | 
65 |             # Zero out .grad variables in the discriminator network (otherwise we would have corrupt results)
66 |             discriminator_opt.zero_grad()
67 | 
68 |             # -log(D(x)) <- we minimize this by making D(x)/discriminator_net(real_images) as close to 1 as possible
69 |             real_discriminator_loss = adversarial_loss(discriminator_net(real_images), real_images_gt)
70 | 
71 |             # G(z) | G == generator_net and z == utils.get_gaussian_latent_batch(batch_size, device)
72 |             fake_images = generator_net(utils.get_gaussian_latent_batch(training_config['batch_size'], device))
73 |             # D(G(z)), we call detach() so that we don't calculate gradients for the generator during backward()
74 |             fake_images_predictions = discriminator_net(fake_images.detach())
75 |             # -log(1 - D(G(z))) <- we minimize this by making D(G(z)) as close to 0 as possible
76 |             fake_discriminator_loss = adversarial_loss(fake_images_predictions, fake_images_gt)
77 | 
78 |             discriminator_loss = real_discriminator_loss + fake_discriminator_loss
79 |             discriminator_loss.backward()  # this will populate .grad vars in the discriminator net
80 |             discriminator_opt.step()  # perform D weights update according to optimizer's strategy
81 | 
82 |             #
83 |             # Train generator: minimize V1 = log(1-D(G(z))) or equivalently maximize V2 = log(D(G(z))) (or min of -V2)
84 |             # The original expression (V1) had problems with diminishing gradients for G when D is too good.
85 |             #
86 | 
87 |             # If you want to cause mode collapse, probably the easiest way would be to add "for i in range(n)"
88 |             # here (simply train G more frequently than D). n = 10 worked for me; other values will also work - experiment.
89 | 
90 |             # Zero out .grad variables in the generator network (otherwise we would have corrupt results)
91 |             generator_opt.zero_grad()
92 | 
93 |             # D(G(z)) (see above for explanations)
94 |             generated_images_predictions = discriminator_net(generator_net(utils.get_gaussian_latent_batch(training_config['batch_size'], device)))
95 |             # By placing real_images_gt here we minimize -log(D(G(z))) which happens when D approaches 1
96 |             # i.e. we're tricking D into thinking that these generated images are real! 
97 |             generator_loss = adversarial_loss(generated_images_predictions, real_images_gt)
98 | 
99 |             generator_loss.backward()  # this will populate .grad vars in the G net (also in D but we won't use those)
100 |             generator_opt.step()  # perform G weights update according to optimizer's strategy
101 | 
102 |             #
103 |             # Logging and checkpoint creation
104 |             #
105 | 
106 |             generator_loss_values.append(generator_loss.item())
107 |             discriminator_loss_values.append(discriminator_loss.item())
108 | 
109 |             if training_config['enable_tensorboard']:
110 |                 writer.add_scalars('losses/g-and-d', {'g': generator_loss.item(), 'd': discriminator_loss.item()}, len(celeba_data_loader) * epoch + batch_idx + 1)
111 |                 # Save debug imagery to tensorboard also (some redundancy but it may be more beginner-friendly)
112 |                 if training_config['debug_imagery_log_freq'] is not None and batch_idx % training_config['debug_imagery_log_freq'] == 0:
113 |                     with torch.no_grad():
114 |                         log_generated_images = generator_net(ref_noise_batch)
115 |                         log_generated_images_resized = nn.Upsample(scale_factor=2, mode='nearest')(log_generated_images)
116 |                         intermediate_imagery_grid = make_grid(log_generated_images_resized, nrow=int(np.sqrt(ref_batch_size)), normalize=True)
117 |                         writer.add_image('intermediate generated imagery', intermediate_imagery_grid, len(celeba_data_loader) * epoch + batch_idx + 1)
118 | 
119 |             if training_config['console_log_freq'] is not None and batch_idx % training_config['console_log_freq'] == 0:
120 |                 print(f'GAN training: time elapsed = {(time.time() - ts):.2f} [s] | epoch={epoch + 1} | batch= [{batch_idx + 1}/{len(celeba_data_loader)}]')
121 | 
122 |             # Save intermediate generator images (more convenient like this than through tensorboard)
123 |             if training_config['debug_imagery_log_freq'] is not None and batch_idx % training_config['debug_imagery_log_freq'] == 0:
124 |                 with torch.no_grad():
125 |                     log_generated_images = generator_net(ref_noise_batch)
126 |                     log_generated_images_resized = nn.Upsample(scale_factor=2, mode='nearest')(log_generated_images)
127 |                     save_image(log_generated_images_resized, os.path.join(training_config['debug_path'], f'{str(img_cnt).zfill(6)}.jpg'), nrow=int(np.sqrt(ref_batch_size)), normalize=True)
128 |                     img_cnt += 1
129 | 
130 |             # Save generator checkpoint
131 |             if training_config['checkpoint_freq'] is not None and (epoch + 1) % training_config['checkpoint_freq'] == 0 and batch_idx == 0:
132 |                 ckpt_model_name = f"dcgan_ckpt_epoch_{epoch + 1}_batch_{batch_idx + 1}.pth"
133 |                 torch.save(utils.get_training_state(generator_net, GANType.DCGAN.name), os.path.join(CHECKPOINTS_PATH, ckpt_model_name))
134 | 
135 |     # Save the latest generator in the binaries directory
136 |     torch.save(utils.get_training_state(generator_net, GANType.DCGAN.name), os.path.join(BINARIES_PATH, utils.get_available_binary_name(GANType.DCGAN)))
137 | 
138 | 
139 | if __name__ == "__main__":
140 |     #
141 |     # fixed args - don't change these unless you have a good reason
142 |     #
143 |     debug_path = os.path.join(DATA_DIR_PATH, 'debug_imagery')
144 |     os.makedirs(debug_path, exist_ok=True)
145 | 
146 |     #
147 |     # modifiable args - feel free to play with these (only a small subset is exposed by design to avoid cluttering)
148 |     #
149 |     parser = argparse.ArgumentParser()
150 |     parser.add_argument("--num_epochs", type=int, help="number of training epochs", default=8)
151 |     parser.add_argument("--batch_size", type=int, help="training batch size", default=128)
152 | 
153 |     # logging/debugging/checkpoint related (helps a lot with 
experimentation) 154 | parser.add_argument("--enable_tensorboard", type=bool, help="enable tensorboard logging (D and G loss)", default=True) 155 | parser.add_argument("--debug_imagery_log_freq", type=int, help="log generator images during training (batch) freq", default=20) 156 | parser.add_argument("--console_log_freq", type=int, help="log to output console (batch) freq", default=20) 157 | parser.add_argument("--checkpoint_freq", type=int, help="checkpoint model saving (epoch) freq", default=2) 158 | args = parser.parse_args() 159 | 160 | # Wrapping training configuration into a dictionary 161 | training_config = dict() 162 | for arg in vars(args): 163 | training_config[arg] = getattr(args, arg) 164 | training_config['debug_path'] = debug_path 165 | 166 | # train GAN model 167 | train_dcgan(training_config) 168 | 169 | -------------------------------------------------------------------------------- /train_vanilla_gan.py: -------------------------------------------------------------------------------- 1 | import os 2 | import argparse 3 | import time 4 | 5 | 6 | import numpy as np 7 | import torch 8 | from torch import nn 9 | from torchvision.utils import save_image, make_grid 10 | from torch.utils.tensorboard import SummaryWriter 11 | 12 | 13 | import utils.utils as utils 14 | from utils.constants import * 15 | 16 | 17 | def train_vanilla_gan(training_config): 18 | writer = SummaryWriter() # (tensorboard) writer will output to ./runs/ directory by default 19 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # checking whether you have a GPU 20 | 21 | # Prepare MNIST data loader (it will download MNIST the first time you run it) 22 | mnist_data_loader = utils.get_mnist_data_loader(training_config['batch_size']) 23 | 24 | # Fetch feed-forward nets (place them on GPU if present) and optimizers which will tweak their weights 25 | discriminator_net, generator_net = utils.get_gan(device, GANType.VANILLA.name) 26 | discriminator_opt, generator_opt = utils.get_optimizers(discriminator_net, generator_net) 27 | 28 | # 1s will configure BCELoss into -log(x) whereas 0s will configure it to -log(1-x) 29 | # So that means we can effectively use binary cross-entropy loss to achieve adversarial loss! 30 | adversarial_loss = nn.BCELoss() 31 | real_images_gt = torch.ones((training_config['batch_size'], 1), device=device) 32 | fake_images_gt = torch.zeros((training_config['batch_size'], 1), device=device) 33 | 34 | # For logging purposes 35 | ref_batch_size = 16 36 | ref_noise_batch = utils.get_gaussian_latent_batch(ref_batch_size, device) # Track G's quality during training 37 | discriminator_loss_values = [] 38 | generator_loss_values = [] 39 | img_cnt = 0 40 | 41 | ts = time.time() # start measuring time 42 | 43 | # GAN training loop, it's always smart to first train the discriminator so as to avoid mode collapse! 
44 |     utils.print_training_info_to_console(training_config)
45 |     for epoch in range(training_config['num_epochs']):
46 |         for batch_idx, (real_images, _) in enumerate(mnist_data_loader):
47 | 
48 |             real_images = real_images.to(device)  # Place imagery on GPU (if present)
49 | 
50 |             #
51 |             # Train discriminator: maximize V = log(D(x)) + log(1-D(G(z))) or equivalently minimize -V
52 |             # Note: D = discriminator, x = real images, G = generator, z = latent Gaussian vectors, G(z) = fake images
53 |             #
54 | 
55 |             # Zero out .grad variables in the discriminator network (otherwise we would have corrupt results)
56 |             discriminator_opt.zero_grad()
57 | 
58 |             # -log(D(x)) <- we minimize this by making D(x)/discriminator_net(real_images) as close to 1 as possible
59 |             real_discriminator_loss = adversarial_loss(discriminator_net(real_images), real_images_gt)
60 | 
61 |             # G(z) | G == generator_net and z == utils.get_gaussian_latent_batch(batch_size, device)
62 |             fake_images = generator_net(utils.get_gaussian_latent_batch(training_config['batch_size'], device))
63 |             # D(G(z)), we call detach() so that we don't calculate gradients for the generator during backward()
64 |             fake_images_predictions = discriminator_net(fake_images.detach())
65 |             # -log(1 - D(G(z))) <- we minimize this by making D(G(z)) as close to 0 as possible
66 |             fake_discriminator_loss = adversarial_loss(fake_images_predictions, fake_images_gt)
67 | 
68 |             discriminator_loss = real_discriminator_loss + fake_discriminator_loss
69 |             discriminator_loss.backward()  # this will populate .grad vars in the discriminator net
70 |             discriminator_opt.step()  # perform D weights update according to optimizer's strategy
71 | 
72 |             #
73 |             # Train generator: minimize V1 = log(1-D(G(z))) or equivalently maximize V2 = log(D(G(z))) (or min of -V2)
74 |             # The original expression (V1) had problems with diminishing gradients for G when D is too good.
75 |             #
76 | 
77 |             # If you want to cause mode collapse, probably the easiest way would be to add "for i in range(n)"
78 |             # here (simply train G more frequently than D). n = 10 worked for me; other values will also work - experiment.
79 | 
80 |             # Zero out .grad variables in the generator network (otherwise we would have corrupt results)
81 |             generator_opt.zero_grad()
82 | 
83 |             # D(G(z)) (see above for explanations)
84 |             generated_images_predictions = discriminator_net(generator_net(utils.get_gaussian_latent_batch(training_config['batch_size'], device)))
85 |             # By placing real_images_gt here we minimize -log(D(G(z))) which happens when D approaches 1
86 |             # i.e. we're tricking D into thinking that these generated images are real! 
87 |             generator_loss = adversarial_loss(generated_images_predictions, real_images_gt)
88 | 
89 |             generator_loss.backward()  # this will populate .grad vars in the G net (also in D but we won't use those)
90 |             generator_opt.step()  # perform G weights update according to optimizer's strategy
91 | 
92 |             #
93 |             # Logging and checkpoint creation
94 |             #
95 | 
96 |             generator_loss_values.append(generator_loss.item())
97 |             discriminator_loss_values.append(discriminator_loss.item())
98 | 
99 |             if training_config['enable_tensorboard']:
100 |                 writer.add_scalars('losses/g-and-d', {'g': generator_loss.item(), 'd': discriminator_loss.item()}, len(mnist_data_loader) * epoch + batch_idx + 1)
101 |                 # Save debug imagery to tensorboard also (some redundancy but it may be more beginner-friendly)
102 |                 if training_config['debug_imagery_log_freq'] is not None and batch_idx % training_config['debug_imagery_log_freq'] == 0:
103 |                     with torch.no_grad():
104 |                         log_generated_images = generator_net(ref_noise_batch)
105 |                         log_generated_images_resized = nn.Upsample(scale_factor=2, mode='nearest')(log_generated_images)
106 |                         intermediate_imagery_grid = make_grid(log_generated_images_resized, nrow=int(np.sqrt(ref_batch_size)), normalize=True)
107 |                         writer.add_image('intermediate generated imagery', intermediate_imagery_grid, len(mnist_data_loader) * epoch + batch_idx + 1)
108 | 
109 |             if training_config['console_log_freq'] is not None and batch_idx % training_config['console_log_freq'] == 0:
110 |                 print(f'GAN training: time elapsed = {(time.time() - ts):.2f} [s] | epoch={epoch + 1} | batch= [{batch_idx + 1}/{len(mnist_data_loader)}]')
111 | 
112 |             # Save intermediate generator images (more convenient like this than through tensorboard)
113 |             if training_config['debug_imagery_log_freq'] is not None and batch_idx % training_config['debug_imagery_log_freq'] == 0:
114 |                 with torch.no_grad():
115 |                     log_generated_images = generator_net(ref_noise_batch)
116 |                     log_generated_images_resized = nn.Upsample(scale_factor=2.5, mode='nearest')(log_generated_images)
117 |                     save_image(log_generated_images_resized, os.path.join(training_config['debug_path'], f'{str(img_cnt).zfill(6)}.jpg'), nrow=int(np.sqrt(ref_batch_size)), normalize=True)
118 |                     img_cnt += 1
119 | 
120 |             # Save generator checkpoint
121 |             if training_config['checkpoint_freq'] is not None and (epoch + 1) % training_config['checkpoint_freq'] == 0 and batch_idx == 0:
122 |                 ckpt_model_name = f"vanilla_ckpt_epoch_{epoch + 1}_batch_{batch_idx + 1}.pth"
123 |                 torch.save(utils.get_training_state(generator_net, GANType.VANILLA.name), os.path.join(CHECKPOINTS_PATH, ckpt_model_name))
124 | 
125 |     # Save the latest generator in the binaries directory
126 |     torch.save(utils.get_training_state(generator_net, GANType.VANILLA.name), os.path.join(BINARIES_PATH, utils.get_available_binary_name()))
127 | 
128 | 
129 | if __name__ == "__main__":
130 |     #
131 |     # fixed args - don't change these unless you have a good reason
132 |     #
133 |     debug_path = os.path.join(DATA_DIR_PATH, 'debug_imagery')
134 |     os.makedirs(debug_path, exist_ok=True)
135 | 
136 |     #
137 |     # modifiable args - feel free to play with these (only a small subset is exposed by design to avoid cluttering)
138 |     #
139 |     parser = argparse.ArgumentParser()
140 |     parser.add_argument("--num_epochs", type=int, help="number of training epochs", default=100)
141 |     parser.add_argument("--batch_size", type=int, help="training batch size", default=128)
142 | 
143 |     # logging/debugging/checkpoint related (helps a lot with 
experimentation)
144 |     parser.add_argument("--enable_tensorboard", type=bool, help="enable tensorboard logging (D and G loss)", default=True)
145 |     parser.add_argument("--debug_imagery_log_freq", type=int, help="log generator images during training (batch) freq", default=100)
146 |     parser.add_argument("--console_log_freq", type=int, help="log to output console (batch) freq", default=100)
147 |     parser.add_argument("--checkpoint_freq", type=int, help="checkpoint model saving (epoch) freq", default=5)
148 |     args = parser.parse_args()
149 | 
150 |     # Wrapping training configuration into a dictionary
151 |     training_config = dict()
152 |     for arg in vars(args):
153 |         training_config[arg] = getattr(args, arg)
154 |     training_config['debug_path'] = debug_path
155 | 
156 |     # train GAN model
157 |     train_vanilla_gan(training_config)
158 | 
159 | 
--------------------------------------------------------------------------------
/utils/constants.py:
--------------------------------------------------------------------------------
1 | """
2 |     Contains constants shared across the project.
3 | """
4 | 
5 | import os
6 | import enum
7 | 
8 | 
9 | BINARIES_PATH = os.path.join(os.path.dirname(__file__), os.pardir, 'models', 'binaries')
10 | CHECKPOINTS_PATH = os.path.join(os.path.dirname(__file__), os.pardir, 'models', 'checkpoints')
11 | DATA_DIR_PATH = os.path.join(os.path.dirname(__file__), os.pardir, 'data')
12 | 
13 | # Make sure these exist, as the rest of the code assumes they do
14 | os.makedirs(BINARIES_PATH, exist_ok=True)
15 | os.makedirs(CHECKPOINTS_PATH, exist_ok=True)
16 | os.makedirs(DATA_DIR_PATH, exist_ok=True)
17 | 
18 | LATENT_SPACE_DIM = 100  # input random vector size to the generator network
19 | MNIST_IMG_SIZE = 28
20 | MNIST_NUM_CLASSES = 10
21 | 
22 | 
23 | class GANType(enum.Enum):
24 |     VANILLA = 0  # no trailing commas here - 'VANILLA = 0,' would silently make the enum value the tuple (0,)
25 |     CGAN = 1
26 |     DCGAN = 2
--------------------------------------------------------------------------------
/utils/utils.py:
--------------------------------------------------------------------------------
1 | import os
2 | import re
3 | import zipfile
4 | import shutil
5 | 
6 | 
7 | import git
8 | import cv2 as cv
9 | import numpy as np
10 | import matplotlib.pyplot as plt
11 | import torch
12 | from torchvision import transforms, datasets
13 | from torchvision.datasets import ImageFolder
14 | from torch.utils.data import DataLoader
15 | from torch.optim import Adam
16 | from torch.hub import download_url_to_file
17 | 
18 | 
19 | from .constants import *
20 | from models.definitions.vanilla_gan import DiscriminatorNet, GeneratorNet
21 | from models.definitions.conditional_gan import ConditionalDiscriminatorNet, ConditionalGeneratorNet
22 | from models.definitions.dcgan import ConvolutionalDiscriminativeNet, ConvolutionalGenerativeNet
23 | 
24 | 
25 | def load_image(img_path, target_shape=None):
26 |     if not os.path.exists(img_path):
27 |         raise Exception(f'Path does not exist: {img_path}')
28 |     img = cv.imread(img_path)[:, :, ::-1]  # [:, :, ::-1] converts BGR (opencv format...) 
into RGB
29 | 
30 |     if target_shape is not None:  # resize section
31 |         if isinstance(target_shape, int) and target_shape != -1:  # scalar -> implicitly setting the width
32 |             current_height, current_width = img.shape[:2]
33 |             new_width = target_shape
34 |             new_height = int(current_height * (new_width / current_width))
35 |             img = cv.resize(img, (new_width, new_height), interpolation=cv.INTER_CUBIC)
36 |         else:  # set both dimensions to target shape
37 |             img = cv.resize(img, (target_shape[1], target_shape[0]), interpolation=cv.INTER_CUBIC)
38 | 
39 |     # this needs to go after resizing - otherwise cv.resize (cubic interpolation can overshoot) would push values outside of the [0, 1] range
40 |     img = img.astype(np.float32)  # convert from uint8 to float32
41 |     img /= 255.0  # get to [0, 1] range
42 |     return img
43 | 
44 | 
45 | def save_and_maybe_display_image(dump_dir, dump_img, out_res=(256, 256), should_display=False):
46 |     assert isinstance(dump_img, np.ndarray), f'Expected numpy array got {type(dump_img)}.'
47 | 
48 |     # step1: get the next valid image name
49 |     dump_img_name = get_available_file_name(dump_dir)
50 | 
51 |     # step2: convert to uint8 format
52 |     if dump_img.dtype != np.uint8:
53 |         dump_img = (dump_img*255).astype(np.uint8)
54 | 
55 |     # step3: write image to the file system
56 |     cv.imwrite(os.path.join(dump_dir, dump_img_name), cv.resize(dump_img[:, :, ::-1], out_res, interpolation=cv.INTER_NEAREST))  # ::-1 because opencv expects BGR (and not RGB) format...
57 | 
58 |     # step4: maybe display the image
59 |     if should_display:
60 |         plt.imshow(dump_img)
61 |         plt.show()
62 | 
63 | 
64 | def get_available_file_name(input_dir):
65 |     def valid_frame_name(frame_name):
66 |         pattern = re.compile(r'[0-9]{6}\.jpg')  # regex, examples it covers: 000000.jpg or 923492.jpg, etc.
67 |         return re.fullmatch(pattern, frame_name) is not None
68 | 
69 |     valid_frames = list(filter(valid_frame_name, os.listdir(input_dir)))
70 |     if len(valid_frames) > 0:
71 |         # Images are saved in the .jpg format; we find the biggest such number and increment by 1
72 |         last_img_name = sorted(valid_frames)[-1]
73 |         new_prefix = int(last_img_name.split('.')[0]) + 1  # increment by 1
74 |         return f'{str(new_prefix).zfill(6)}.jpg'
75 |     else:
76 |         return '000000.jpg'
77 | 
78 | 
79 | def get_available_binary_name(gan_type_enum=GANType.VANILLA):
80 |     def valid_binary_name(binary_name):
81 |         # First time you see a raw f-string? Don't worry - the only trick is to double the brackets. 
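        # e.g. for GANType.VANILLA this compiles to r'VANILLA_[0-9]{6}\.pth' and matches names like 'VANILLA_000013.pth'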
82 |         pattern = re.compile(rf'{gan_type_enum.name}_[0-9]{{6}}\.pth')
83 |         return re.fullmatch(pattern, binary_name) is not None
84 | 
85 |     prefix = gan_type_enum.name
86 |     # Just list the existing binaries so that we don't overwrite them but write to a new one
87 |     valid_binary_names = list(filter(valid_binary_name, os.listdir(BINARIES_PATH)))
88 |     if len(valid_binary_names) > 0:
89 |         last_binary_name = sorted(valid_binary_names)[-1]
90 |         new_suffix = int(last_binary_name.split('.')[0][-6:]) + 1  # increment by 1
91 |         return f'{prefix}_{str(new_suffix).zfill(6)}.pth'
92 |     else:
93 |         return f'{prefix}_000000.pth'
94 | 
95 | 
96 | def get_gan_data_transform():
97 |     # It's good to normalize the images to the [-1, 1] range https://github.com/soumith/ganhacks
98 |     transform = transforms.Compose([
99 |         transforms.ToTensor(),
100 |         transforms.Normalize((.5,), (.5,))
101 |     ])
102 |     return transform
103 | 
104 | 
105 | def get_mnist_dataset():
106 |     # This will download MNIST the first time it is called
107 |     return datasets.MNIST(root=DATA_DIR_PATH, train=True, download=True, transform=get_gan_data_transform())
108 | 
109 | 
110 | def get_mnist_data_loader(batch_size):
111 |     mnist_dataset = get_mnist_dataset()
112 |     mnist_data_loader = DataLoader(mnist_dataset, batch_size=batch_size, shuffle=True, drop_last=True)
113 |     return mnist_data_loader
114 | 
115 | 
116 | def download_and_prepare_celeba(celeba_path):
117 |     celeba_url = r'https://s3.amazonaws.com/video.udacity-data.com/topher/2018/November/5be7eb6f_processed-celeba-small/processed-celeba-small.zip'
118 | 
119 |     # Step1: Download the resource to the local filesystem
120 |     print('*' * 50)
121 |     print(f'Downloading {celeba_url}.')
122 |     print('This may take a while the first time, the zip file is ~240 MB.')
123 |     print('*' * 50)
124 | 
125 |     resource_tmp_path = celeba_path + '.zip'
126 |     download_url_to_file(celeba_url, resource_tmp_path)
127 | 
128 |     # Step2: Unzip the resource
129 |     print('Started unzipping. Go and grab a cup of coffee.')
130 |     with zipfile.ZipFile(resource_tmp_path) as zf:
131 |         os.makedirs(celeba_path, exist_ok=True)
132 |         zf.extractall(path=celeba_path)
133 |     print(f'Unzipping to: {celeba_path} finished.')
134 | 
135 |     # Step3: Remove the temporary resource file
136 |     os.remove(resource_tmp_path)
137 |     print(f'Removed tmp file {resource_tmp_path}.')
138 | 
139 |     # Step4: Prepare the dataset into a suitable format for PyTorch's ImageFolder
140 |     # I don't have any control over this zip so it's got a bunch of junk that needs to be cleaned up.
141 |     # I am also assuming that the directory structure will remain like this. 
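    # Assumed zip layout (inferred from the cleanup below, not guaranteed by the host): a '__MACOSX' junk dir,
    # 'processed_celeba_small/.DS_Store', and .jpg files split between 'processed_celeba_small/celeba/' and
    # 'processed_celeba_small/celeba/New Folder With Items/' - everything gets flattened into 'processed_celeba_small'.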
142 |     print('Preparing the CelebA dataset - this may take a while the first time.')
143 |     shutil.rmtree(os.path.join(celeba_path, '__MACOSX'))
144 |     dst_data_directory = os.path.join(celeba_path, 'processed_celeba_small')
145 |     os.remove(os.path.join(dst_data_directory, '.DS_Store'))
146 |     data_directory1 = os.path.join(dst_data_directory, 'celeba')
147 |     data_directory2 = os.path.join(data_directory1, 'New Folder With Items')
148 |     for element in os.listdir(data_directory1):
149 |         if os.path.isfile(os.path.join(data_directory1, element)) and element.endswith('.jpg'):
150 |             shutil.move(os.path.join(data_directory1, element), os.path.join(dst_data_directory, element))
151 | 
152 |     for element in os.listdir(data_directory2):
153 |         if os.path.isfile(os.path.join(data_directory2, element)) and element.endswith('.jpg'):
154 |             shutil.move(os.path.join(data_directory2, element), os.path.join(dst_data_directory, element))
155 | 
156 |     shutil.rmtree(data_directory1)
157 | 
158 | 
159 | def get_celeba_data_loader(batch_size):
160 |     celeba_path = os.path.join(DATA_DIR_PATH, 'CelebA')
161 |     if not os.path.exists(celeba_path):  # We'll have to do this only 1 time, I promise.
162 |         download_and_prepare_celeba(celeba_path)
163 | 
164 |     celeba_dataset = ImageFolder(celeba_path, transform=get_gan_data_transform())
165 |     celeba_data_loader = DataLoader(celeba_dataset, batch_size=batch_size, shuffle=True, drop_last=True)
166 |     return celeba_data_loader
167 | 
168 | 
169 | def get_gaussian_latent_batch(batch_size, device):
170 |     return torch.randn((batch_size, LATENT_SPACE_DIM), device=device)
171 | 
172 | 
173 | def get_gan(device, gan_type_name):
174 |     assert gan_type_name in [gan_type.name for gan_type in GANType], f'Unknown GAN type = {gan_type_name}.'
175 | 
176 |     if gan_type_name == GANType.VANILLA.name:
177 |         d_net = DiscriminatorNet().train().to(device)
178 |         g_net = GeneratorNet().train().to(device)
179 |     elif gan_type_name == GANType.CGAN.name:
180 |         d_net = ConditionalDiscriminatorNet().train().to(device)
181 |         g_net = ConditionalGeneratorNet().train().to(device)
182 |     elif gan_type_name == GANType.DCGAN.name:
183 |         d_net = ConvolutionalDiscriminativeNet().train().to(device)
184 |         g_net = ConvolutionalGenerativeNet().train().to(device)
185 |     else:
186 |         raise Exception(f'GAN type {gan_type_name} not yet supported.')
187 | 
188 |     return d_net, g_net
189 | 
190 | 
191 | # Tried SGD for the discriminator, had problems tweaking it - Adam simply works nicely, but the default lr 1e-3 won't work!
192 | # I had to train the discriminator more (a 4-to-1 schedule worked) to get it working with the default lr, and still got worse results.
193 | # lr=0.0002 and betas=(0.5, 0.999) are from the DCGAN paper, and they work nicely here! 
194 | def get_optimizers(d_net, g_net):
195 |     d_opt = Adam(d_net.parameters(), lr=0.0002, betas=(0.5, 0.999))
196 |     g_opt = Adam(g_net.parameters(), lr=0.0002, betas=(0.5, 0.999))
197 |     return d_opt, g_opt
198 | 
199 | 
200 | def get_training_state(generator_net, gan_type_name):
201 |     training_state = {
202 |         "commit_hash": git.Repo(search_parent_directories=True).head.object.hexsha,
203 |         "state_dict": generator_net.state_dict(),
204 |         "gan_type": gan_type_name
205 |     }
206 |     return training_state
207 | 
208 | 
209 | def print_training_info_to_console(training_config):
210 |     print('Starting the GAN training.')
211 |     print('*' * 80)
212 |     print(f'Settings: num_epochs={training_config["num_epochs"]}, batch_size={training_config["batch_size"]}')
213 |     print('*' * 80)
214 | 
215 |     if training_config["console_log_freq"]:
216 |         print(f'Logging to console every {training_config["console_log_freq"]} batches.')
217 |     else:
218 |         print('Console logging disabled. Set console_log_freq if you want to use it.')
219 | 
220 |     print('')
221 | 
222 |     if training_config["debug_imagery_log_freq"]:
223 |         print(f'Saving intermediate generator images to {training_config["debug_path"]} every {training_config["debug_imagery_log_freq"]} batches.')
224 |     else:
225 |         print('Generator intermediate image saving disabled. Set debug_imagery_log_freq if you want to use it.')
226 | 
227 |     print('')
228 | 
229 |     if training_config["checkpoint_freq"]:
230 |         print(f'Saving checkpoint models to {CHECKPOINTS_PATH} every {training_config["checkpoint_freq"]} epochs.')
231 |     else:
232 |         print('Checkpoint model saving disabled. Set checkpoint_freq if you want to use it.')
233 | 
234 |     print('')
235 | 
236 |     if training_config['enable_tensorboard']:
237 |         print('Tensorboard enabled. Logging generator and discriminator losses.')
238 |         print('Run "tensorboard --logdir=runs" from your Anaconda console (with the conda env activated)')
239 |         print('Open http://localhost:6006/ in your browser and you\'re ready to use tensorboard!')
240 |     else:
241 |         print('Tensorboard logging disabled.')
242 |     print('*' * 80)
243 | 
--------------------------------------------------------------------------------
/utils/video_utils.py:
--------------------------------------------------------------------------------
1 | import os
2 | 
3 | 
4 | import cv2 as cv
5 | import numpy as np
6 | import imageio
7 | 
8 | 
9 | from .utils import load_image
10 | 
11 | 
12 | def create_gif(frames_dir, out_path, downsample=1, img_width=None):
13 |     assert os.path.splitext(out_path)[1].lower() == '.gif', f'Expected gif got {os.path.splitext(out_path)[1]}.'
14 | 
15 |     frame_paths = [os.path.join(frames_dir, frame_name) for cnt, frame_name in enumerate(sorted(os.listdir(frames_dir))) if frame_name.endswith('.jpg') and cnt % downsample == 0]  # sorted() because os.listdir gives no ordering guarantees and frames must stay chronological
16 | 
17 |     if img_width is not None:  # overwrites the old frames
18 |         for frame_path in frame_paths:
19 |             img = load_image(frame_path, target_shape=img_width)
20 |             cv.imwrite(frame_path, np.uint8(img[:, :, ::-1] * 255))
21 | 
22 |     images = [imageio.imread(frame_path) for frame_path in frame_paths]
23 |     imageio.mimwrite(out_path, images, fps=5)
24 |     print(f'Saved gif to {out_path}.')
--------------------------------------------------------------------------------