├── .gitignore
├── LICENSE
├── README.md
├── examples
│   ├── mnist_example.py
│   └── mnist_noteboook_example.ipynb
├── fast_adv
│   ├── __init__.py
│   ├── attacks
│   │   ├── __init__.py
│   │   ├── carlini.py
│   │   ├── ddn.py
│   │   ├── ddn_tf.py
│   │   ├── deepfool.py
│   │   └── deepfool_tf.py
│   ├── defenses
│   │   ├── __init__.py
│   │   ├── cifar10.py
│   │   ├── cifar10_small.py
│   │   └── mnist.py
│   ├── models
│   │   ├── __init__.py
│   │   ├── cifar10
│   │   │   ├── __init__.py
│   │   │   ├── madry_tf.py
│   │   │   ├── small_cnn.py
│   │   │   └── wide_resnet.py
│   │   └── mnist
│   │       ├── __init__.py
│   │       ├── madry_tf.py
│   │       └── small_cnn.py
│   ├── scripts
│   │   └── imagenet_selected_images.csv
│   └── utils
│       ├── __init__.py
│       ├── utils.py
│       └── visualization.py
├── requirements-dev.txt
├── requirements.txt
└── setup.py

/.gitignore:
--------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/

.idea
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
BSD 3-Clause License

Copyright (c) 2018, Jerome Rony
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.

* Neither the name of the copyright holder nor the names of its
  contributors may be used to endorse or promote products derived from
  this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
### Update 24-11-2020: the official implementation of DDN, compatible with more recent versions of PyTorch, is now available in [adversarial-library](https://github.com/jeromerony/adversarial-library)

## About

Code for the article "Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses" (https://arxiv.org/abs/1811.09600), presented at CVPR 2019 (Oral presentation).

The implementation is done in PyTorch 0.4.1 and runs with Python 3.6+. The code of the attack is also provided in TensorFlow. This repository also contains a PyTorch implementation of the C&W L2 attack (ported from Carlini's [TF version](https://github.com/carlini/nn_robust_attacks/blob/master/l2_attack.py)).

For PyTorch 1.1+, check the pytorch1.1+ branch (the `scheduler.step()` call has moved).

## Installation

This package can be installed via pip as follows:

```pip install git+https://github.com/jeromerony/fast_adversarial```

## Using DDN to attack a model

```python
from fast_adv.attacks import DDN

attacker = DDN(steps=100, device=device)
adv = attacker.attack(model, x, labels=y, targeted=False)
```

Here, `model` is a PyTorch `nn.Module` that takes inputs `x` and outputs the pre-softmax activations (logits), `x` is a batch of images (N x C x H x W), and `labels` are either the true labels (for `targeted=False`) or the target labels (for `targeted=True`). Note: `x` is expected to be in the [0, 1] range; you can use `fast_adv.utils.NormalizedModel` to fold any input normalization, such as mean subtraction, into the model itself.
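
For reference, here is a minimal sketch of that wrapping, assuming `NormalizedModel` takes the wrapped module plus broadcastable per-channel statistics; the ResNet-18 and the CIFAR-10-style mean/std below are placeholders for illustration, not values shipped with this repository:

```python
import torch
from torchvision.models import resnet18

from fast_adv.utils import NormalizedModel

# Illustrative statistics: substitute whatever your model was trained with
mean = torch.tensor([0.4914, 0.4822, 0.4465]).view(1, 3, 1, 1)
std = torch.tensor([0.2471, 0.2435, 0.2616]).view(1, 3, 1, 1)

base_model = resnet18(num_classes=10)  # any nn.Module that returns logits
model = NormalizedModel(model=base_model, mean=mean, std=std).eval()

# `model` now accepts images in [0, 1] directly; normalization happens inside
# its forward pass, so the attack can project and clip in image space safely
```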
29 | 30 | See the "examples" folder for a [python](https://github.com/jeromerony/fast_adversarial/blob/master/examples/mnist_example.py) and a [jupyter notebook](http://nbviewer.jupyter.org/github/jeromerony/fast_adversarial/blob/master/examples/mnist_noteboook_example.ipynb) example 31 | 32 | ## Adversarial training with DDN 33 | 34 | The following commands were used to adversarially train the models: 35 | 36 | MNIST: 37 | ``` 38 | python -m fast_adv.defenses.mnist --lr=0.01 --lrs=30 --adv=0 --max-norm=2.4 --sn=mnist_adv_2.4 39 | ``` 40 | 41 | CIFAR-10 (adversarial training starts at epoch 200): 42 | ``` 43 | python -m fast_adv.defenses.cifar10 -e=230 --adv=200 --max-norm=1 --sn=cifar10_wrn28-10_adv_1 44 | ``` 45 | 46 | ### Adversarially trained models 47 | 48 | * MNIST: https://www.dropbox.com/s/9onr3jfsuc3b4dh/mnist.pth 49 | * CIFAR10: https://www.dropbox.com/s/ppydug8zefsrdqn/cifar10_wrn28-10.pth 50 | 51 | 52 | -------------------------------------------------------------------------------- /examples/mnist_example.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import torch 3 | import time 4 | from torch.utils import data 5 | from torchvision import datasets, transforms 6 | from torchvision.utils import save_image 7 | 8 | from fast_adv.models.mnist import SmallCNN 9 | from fast_adv.attacks import DDN, CarliniWagnerL2 10 | from fast_adv.utils import requires_grad_, l2_norm 11 | 12 | 13 | if __name__ == '__main__': 14 | parser = argparse.ArgumentParser('Generate adversarial examples on MNIST') 15 | parser.add_argument('--data-path', default='data/mnist') 16 | parser.add_argument('--model-path', required=True) 17 | 18 | args = parser.parse_args() 19 | 20 | torch.manual_seed(42) 21 | device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') 22 | 23 | print('Loading data') 24 | dataset = datasets.MNIST(args.data_path, train=False, 25 | transform=transforms.ToTensor(), 26 | download=True) 27 | loader = data.DataLoader(dataset, shuffle=False, batch_size=16) 28 | 29 | x, y = next(iter(loader)) 30 | x = x.to(device) 31 | y = y.to(device) 32 | 33 | print('Loading model') 34 | model = SmallCNN() 35 | model.load_state_dict(torch.load(args.model_path)) 36 | model.eval().to(device) 37 | requires_grad_(model, False) 38 | 39 | print('Running DDN attack') 40 | attacker = DDN(steps=100, device=device) 41 | start = time.time() 42 | ddn_atk = attacker.attack(model, x, labels=y, targeted=False) 43 | ddn_time = time.time() - start 44 | 45 | print('Running C&W attack') 46 | cwattacker = CarliniWagnerL2(device=device, 47 | image_constraints=(0, 1), 48 | num_classes=10) 49 | 50 | start = time.time() 51 | cw_atk = cwattacker.attack(model, x, labels=y, targeted=False) 52 | cw_time = time.time() - start 53 | 54 | # Save images 55 | all_imgs = torch.cat((x, cw_atk, ddn_atk)) 56 | save_image(all_imgs, 'images_and_attacks.png', nrow=16, pad_value=0) 57 | 58 | # Print metrics 59 | pred_orig = model(x).argmax(dim=1).cpu() 60 | pred_cw = model(cw_atk).argmax(dim=1).cpu() 61 | pred_ddn = model(ddn_atk).argmax(dim=1).cpu() 62 | print('Predictions on original images: {}'.format(pred_orig)) 63 | print('Predictions on C&W attack: {}'.format(pred_cw)) 64 | print('Predictions on DDN attack: {}'.format(pred_ddn)) 65 | print('C&W done in {:.1f}s: Success: {:.2f}%, Mean L2: {:.4f}.'.format( 66 | cw_time, 67 | (pred_cw != y.cpu()).float().mean().item() * 100, 68 | l2_norm(cw_atk - x).mean().item() 69 | )) 70 | print('DDN done in {:.1f}s: Success: {:.2f}%, 
--------------------------------------------------------------------------------
/examples/mnist_noteboook_example.ipynb:
--------------------------------------------------------------------------------
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Generating adversarial examples on MNIST using C&W and DDN\n",
    "\n",
    "In this notebook we will generate adversarial examples on MNIST using two methods:\n",
    "\n",
    "* Carlini and Wagner (C&W) L2 attack (https://arxiv.org/abs/1608.04644)\n",
    "* Decoupled Direction and Norm (DDN) (https://arxiv.org/abs/1811.09600)\n",
    "\n",
    "We will attack a robust model trained with DDN - the noise required to attack an image is quite noticeable for this model.\n",
    "\n",
    "Note: this example requires matplotlib (pip install matplotlib)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import argparse\n",
    "import torch\n",
    "import time\n",
    "from torch.utils import data\n",
    "from torchvision import datasets, transforms\n",
    "from torchvision.utils import save_image, make_grid\n",
    "\n",
    "from fast_adv.models.mnist import SmallCNN\n",
    "from fast_adv.attacks import DDN, CarliniWagnerL2\n",
    "from fast_adv.utils import requires_grad_, l2_norm\n",
    "import matplotlib.pyplot as plt\n",
    "import os\n",
    "%matplotlib inline\n",
    "\n",
    "torch.manual_seed(42)\n",
    "device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')\n",
    "data_path = 'data/mnist'  # Change this if you already downloaded MNIST elsewhere\n",
    "model_url = 'https://www.dropbox.com/s/9onr3jfsuc3b4dh/mnist.pth?dl=1'\n",
    "model_path = 'mnist.pth'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png":
"iVBORw0KGgoAAAANSUhEUgAAAXAAAAA5CAYAAAAvOXAvAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAADwVJREFUeJztnX1QFdUbx9fQADEVBSYcBSaBUcj3QilTxGRwQEwMajKnxnHsxd7Al1Ewo+GlAKEpG17H1xBRYyBwnKgpYEh8uSYKYoogFhAXfEmRUrt7zvf3h7+7A3qBu7sXcJvnM/PMcO/e/d6ze85+9+xzzrkMASAQBEEQ2uOxwS4AQRAEoQwycIIgCI1CBk4QBKFRyMAJgiA0Chk4QRCERiEDJwiC0Chk4ARBEBqFDJwgCEKjkIETBEFoFDJwgiAIjTJ0IL9syJAhtG6fIAhCJgCGmHqfeuAEQRAahQycIAhCo/ynDfzAgQMCY6xb1NXVCS4uLhb9Hk9PT4FzrlrHzs5OSEtLE06ePCm4urpaoGREf2Jvby9MmzZNmDZtmvDpp58KYWFhwrRp0wa7WIKVlZXg5eUllJeXCxs3bhReeOEFxVrW1taCKIqCKIoWLKG2iImJESz5q62zZs0S4uLihPPnzwucc4ExJuzdu1eYPHmyfDEAAxaCIGAggzEmRWpqKgoLC8EYQ1RUlEW/55VXXoEoiqp1PDw8YDAYwBjD2rVrVetduXLF5PsBAQEWO3bOOdauXQsrKytF+5eVlSEhIQFubm4mt48aNQrDhg0b0HZjTmRmZuLixYtS+/rnn3+kvwerTCNHjkRxcTGuXbuGjo4OcM7BOcfff/+Nq1evKtLcsGEDfvjhB8yZM2fQz7k54e/vj6SkJADAgQMHkJSUpFqztLQUAODn56dKJzU1FTqdDowxiKIIxhjS09PNuh579NRHwcBLSkqwa9cuzJ07F+7u7hapyCVLlqC6uhrOzs7Se1FRUWCMYcGCBRZtNJ2dnUhLS1Ol8cEHH8BgMEgGfuDAAcybN0+x3tChQ9Ha2mpy25tvvgk7OzvVx71s2TJwzrFv3z78f4Da7HBwcMBPP/2E7du39/iZ5ORkRYa4YsUKXLlyBYwxjB071qJ13bVTYIza2locPHhQCrma7u7uSExMxIkTJ3Ds2DF4eHjI1vD29sbVq1fBOYefnx/Gjx8vbYuJiYEoisjPz4e9vb0s3aysLMycOdOi59AYM2bMAGNMMlt3d3cEBwfD1tZWttZ7770HURSl+Pnnn6W/1ZbTiFodURSh0+mUluHRNXBjT4Fzjlu3bqGysvKhOHjwIJ555hlZBz1mzJhur8+ePdsvBs45x/z581VpMMa6GbjBYEB9fT1mzZqlSG/RokVISEgwuS0yMhKOjo6qymttbQ2dTgfOORYvXix7/4CAADDGei0H5xz5+fmytY1GxhhDbm6u6vqNi4tDYGAgBEHAjRs30NTUhLy8PGzZsgUODg6qboYZGRlob2/vdg38+++/qKmpweOPP262zpw5c6RjNrU9ISEBnHMEBQWZrTlu3Lh+e6J48cUX0dzcLPVERVHE7t27wRjD66+/Lluvra1NMuz169dj2LBhj5yBNzY2or29XWkZHl0Dj4yMRGBgIL755hs0NTXh999/79aYW1pawDnHtm3bFJ+8DRs24M6dO6isrMTw4cMt2hgbGxtVXcRHjhwBAKlH197ejsuXLyt+JJ8yZQquXbuGESNGmNxeVlam2sCfffZZqX6U7J+ZmdnrsXl7e4NzruhiZoxJZsYYw7p162SZYdews7MDYwwhISEQBEFK9Tz22GOqzt/UqVORmZkptfPc3FzEx8fDYDCAc46Wlha8/fbbZuvNnz8fnHPs3Lmzx89wzrFjxw6zNdPS0nqsI19fX4SFhcHT01PR8RvNdenSpVi0aBFWr16N1157DXfu3MHChQtlaXl7e0s3AW9vb6lufHx8oNfrUV1draquYmJiAAAxMTGqdIwZACX7PtIG3lcsX74cZ8+efahHLSf6Kz/5xhtvoKKiQvH+bm5uUq+7vr4eiYmJ0rY///wTBoMBt2/flpUH7ujoQE1NjcltKSkpYIypziv/8ssv4Jxj+vTpiuujqKjI5LbY2Fhcv35dke7SpUtRXl4OQRCwatUqXLp0CQBw4sQJ2e0nPj4ejDGMGjXKYu3FxsYGW7duBeccbW1tD934q6urERAQIBm7uTfa+vp6/PXXX3juued6/AznHLW1tWaXtaWlBX/88Yf02sfHBy0tLVIno6CgAADQ3Nws6xzk5eWhqKioW6ooICAAoigqepLNzc0FYwwlJSU9trXTp08rrjMjag28sbFR8iHjk0doaKi5ZdCmgTs5OaGtrQ3Lly9XfOKMg5e7du1SVQGmYtu2bViyZIni/bsOXDo4OHTb9v7770vbJk6caLamKIp49913TW7T6/WKe81dw2gwSvc3NuSCggIEBARIYTRNpTfb8PBwvPTSS9LrI0eOgHOO0tLSHp9IeitjY2OjRdtLYGAgbt++jebmZvj4+EjvW1lZwc3NDevXr5cGIPfu3Wv2jZZzjkOHDvX5GTkGLoqiNLYzbtw4qdfc1NQk5diLiopkpSmys7PBGIOXl1e390tLSxXV+dixY1FfX99r6uVRMHBHR0cwxqDT6bBmzRqsWbNGSkEyxvrMCmjWwO/du6fKKCorK8EYQ0pKimKNnmLLli3A/QNTHB4eHj02XDc3N5w8eRIAkJOTY5aek5NTr58VRRH79+9Xfeycc9y4cUPx/vb29liwYAE459Dr9UhMTERiYiKefvppMMawZ88eRbrh4eE4c+YMJk2ahNzcXCklce3atYdMo68wzi6ZMWOGxdqMra0tCgoKpME7zrlURmOIotinGT8Yx44dM6vO5Bp4REQEPD09cf36daSnpz+UikpJScHJkyfN1qyqqupm+G5ubkhKSpJuDHLPZ1RUVJ89d2OPV2mdGbFUGzAVOp2u1/y4Zg2cc44ff/xR0UkJCQnB3bt3wRiT3fsyJw4dOqTq5iII93vwBoPB5DYlPXBbW9se001OTk4WmaI4d+5ccM4t3jsVBAFPPfUUTp8+rThHP2bMmG458JKSEly4cAGMMWRkZMjSMj4JGAwGbN++HStWrEBYWBi8vLwQFhbWbaaH3Pj8889RUVGBr776qlsunHOOtLQ0jB492mwtOzu7fjFwxhgiIyOxffv2HjsZWVlZOHr0qNma+/btgyiKqKqqQlVVFZqbm7sNYso9j3FxcX323AGoSp8OhIE7ODigtra2x5SKJg3c1tYWd+/e7TWv11scP36833LfTz75JPR6PX777TdVOhcvXjRp4I6OjmhtbZUM3MXFxWxNURRx7NgxhIWFSZGTk4OKiope0yvmxtKlS8E5R1ZWlsXP6+7du7Fo0SJVGjdv3gQAfPnll7CxsZFmYMi94ZiaMtg19Hq9xY676ywsuXPqV61aZbaBnz17VlY7ioiIwLfffttjD5YxJutpwdbWFsXFxd2m/AUFBUEURUVzzb
///vs+e9da6IELgoB58+ahvb0dH330kakyaMvAg4KCYDAY4Ovrq/iEGC+0rnPBLRXp6engnCM1NVWVjikDP3LkCBoaGqRphYcPH5atO2PGDLz88stSCIKAPXv2qJ5WNX78eIiiqHpk31Ts3LlT9RNNT/Hqq6+CMaZoIHzhwoUIDAzs1iHomsNXU67Vq1dLOe+amhpYW1vL1ujLwGfNmoV9+/ahoqICQ4cONUvTmPOOiIiAIAhwdnaGKIooLy/HE088geDgYBw/fhwff/yx6rphjGHv3r2K9jXXwE2ZorlhXMijdhDTnOhpdpamDHzs2LFoaGhQfTEbLzJvb284ODjAwcFBGhQyvvbw8EBaWhrS0tJkTS9sbm4G5xz+/v6qylhXVwfGGBYvXiw9Thof+Sz99PDJJ59AFEVMmTJFsUZ4eDg454iNjbV4421tbUVnZ6fFdQXh/rS/3NxcbN26VbHGpk2bcO/ePWRkZCAnJ8ciBn7r1i1wztHR0YHnn39ekYa/v3+vBr5//35wzjF79mxZuowxFBcXS6+Dg4MxceJETJgwAYwxi9SVm5sbGGOKFwtNnz4der2+1+mTer0eNjY2iss4kAbe1taGc+fOPfS+pgzcODp76dIlVSfD1GNvXl4evvjiC5PboqOjzdY2DjypNfCIiIhui3ce/Lu3lYpyIyYmRvUN4Z133gHn/KEZM5YI44CmpXWNMX36dDDGFM9dnjlz5kPTwL7++mvVx2xcPalGp7a29qE6mTp1KjIyMhTPGDIufAsNDYWvry9iY2Nx9OhRiKKI8+fPY9myZarrZMeOHarb5MqVK9HZ2WnyJpCdnY28vDxV+kbUGnhfaxpcXV3R2dmJSZMmmSqDNgzc1dUVnHOsW7dO9vLsB2Pjxo2Ijo5GdHS0NFeUMYbs7GxER0dj8uTJinSHDx8Ozjk2b96sugELwv3fSHjQwC3xWygPxmeffaa611RTU9NvaQ7GGLKysnDq1CnFs1D6irq6OlXlP3z4MFpbW9HQ0KDqgh4xYoT0FHfmzBnVx/Xrr792GwjtOqNl06ZNinVHjx6N5ORk3Lx5E8nJyUhOTjZpMGrq3BKrJS9fvoybN29Krz09PdHe3q5a28/PTzJwNTqhoaHS/PnMzEyUl5cjPz9fGmyvra3tNR2rGQOPj48H51z2svmBjGHDhqGwsNCiKzoDAwNhMBiQn5+PwMBAWYOW5oZer8eHH36oSqM/89TG2SNZWVmYMGFCv3yHi4uL6vKvXLkSTk5OqjRCQkIAwCJPcYJwP2ddXV3dzbwzMzNlreYcjBBFUdbAak/h7OyM7777DlVVVXjrrbekpfWFhYWqdP38/CyWOklPT0d6ejoYYzh37hzKysoQGxuLmTNn/jfmgW/evFlKnVjyLk9B0TVGjhwpLY0frGhqagKAfhlg11IYf7tksMvxqEdPnvpI/h747NmzhQsXLgx2MYj/KB0dHUJRUdGgliE1NVUQBEFobW0d1HIMNqdOnRJ0Ot1gF0OzDPl/z3hgvoz+JyZBEIRsQP8TkyAI4r/FgPbACYIgCMtBPXCCIAiNQgZOEAShUcjACYIgNAoZOEEQhEYhAycIgtAoZOAEQRAahQycIAhCo5CBEwRBaBQycIIgCI1CBk4QBKFRyMAJgiA0Chk4QRCERiEDJwiC0Chk4ARBEBqFDJwgCEKjkIETBEFoFDJwgiAIjUIGThAEoVHIwAmCIDQKGThBEIRGIQMnCILQKGTgBEEQGoUMnCAIQqP8D8hJC7Cr1N3mAAAAAElFTkSuQmCC\n", 54 | "text/plain": [ 55 | "
" 56 | ] 57 | }, 58 | "metadata": { 59 | "needs_background": "light" 60 | }, 61 | "output_type": "display_data" 62 | } 63 | ], 64 | "source": [ 65 | "# Loading the data\n", 66 | "dataset = datasets.MNIST(data_path, train=False,\n", 67 | " transform=transforms.ToTensor(),\n", 68 | " download=True)\n", 69 | "loader = data.DataLoader(dataset, shuffle=False, batch_size=16)\n", 70 | "\n", 71 | "x, y = next(iter(loader))\n", 72 | "x = x.to(device)\n", 73 | "y = y.to(device)\n", 74 | "\n", 75 | "plt.imshow(make_grid(x, nrow=16).permute(1,2,0))\n", 76 | "plt.axis('off');" 77 | ] 78 | }, 79 | { 80 | "cell_type": "code", 81 | "execution_count": 3, 82 | "metadata": {}, 83 | "outputs": [ 84 | { 85 | "name": "stdout", 86 | "output_type": "stream", 87 | "text": [ 88 | "Loading model\n" 89 | ] 90 | } 91 | ], 92 | "source": [ 93 | "print('Loading model')\n", 94 | "\n", 95 | "if not os.path.exists(model_path):\n", 96 | " import urllib \n", 97 | " print('Downloading model')\n", 98 | " urllib.request.urlretrieve(model_url, model_path)\n", 99 | "\n", 100 | "\n", 101 | "model = SmallCNN()\n", 102 | "model.load_state_dict(torch.load(model_path))\n", 103 | "model.eval().to(device)\n", 104 | "requires_grad_(model, False)" 105 | ] 106 | }, 107 | { 108 | "cell_type": "code", 109 | "execution_count": 4, 110 | "metadata": {}, 111 | "outputs": [ 112 | { 113 | "name": "stdout", 114 | "output_type": "stream", 115 | "text": [ 116 | "Running DDN 100 attack\n", 117 | "Completed in 0.40s\n" 118 | ] 119 | }, 120 | { 121 | "data": { 122 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXAAAAA5CAYAAAAvOXAvAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAHPNJREFUeJztnXlYVNf5x9+7DasIgigocSGNiIpKSV1iBCSoMRpJ+6C4YG2sgrGirdYNTYpLQMXgghqNSzTRqNXHGLUxaCOJWjdwiVtVUiOuIMimzDB3Zr6/P+y9nUGWmYFE+D3n8zznAWbuvPc973nP95577jkDB4AYDAaD0fjgX7QDDAaDwbAPJuAMBoPRSGECzmAwGI0UJuAMBoPRSGECzmAwGI0UJuAMBoPRSGECzmAwGI0UJuAMBoPRSGECzmAwGI0UJuAMBoPRSBF/yZNxHPfC9+3zPE8mk+lFu8H4fwLHccS+joLxcwOAq+r1BjcC5/mqXeK4Kv23GuXzHMeRg4NDnWwRWfpZnc/2UNd6/lJwHFdvvv5cda5PH6sDQINpM47jSBSfH5PVxT+O49T8bij1tIX67Jv1CcdxJAhCne00uNoBsAi6kjT2jnIUWzzPkyRJFB0dTZ6enhaCbq+fit36GtErozme54nn+QbfYepLvJRYVhbcutr+OcXVPEcB1EtnrCsAnusnoigSz/PUsmVL9W/zn7bY/CUuiPWBeVuYTCar66pg3veUdq7v9q2vO7dfdAqlMuaVUEYPoiiSTqcjnudVMTMajerftlR6165dNHjwYPrLX/5ChYWFFBAQQDNnzqTevXvT/fv37UpGxQ8PDw/y9fWl8vJycnR0pOPHj1OzZs1stmeOq6srLV68mEJCQmjWrFmUmZlZp0Y2v7goSWkymdS41uXCo4iW0WgkIrL43RoqJ3BISAiFhISQTqejI0eO0N27d9X3bbUtCAK5u7tTQEAAnThxotpz2oMgCKqdqKgoGjp0KDVr1ox4nqfdu3fTpUuX6Pz58zadxzwP6+KfRqMhvV5PoiiSh4cHhYaGUk5ODul0Ovr3v/9NRESenp6Ul5dHBoOhVp94nidBEKisrIw0Gg1NmDCBHj9+TJmZmVRYWGi3n+bnUHLSHFEUa/WvMkqOcBxHRqORBEFQ9aKyrdrsK20bGRlJN2/epOvXr5O3tzd9+umnNvlkjnnu/fa3v6U333yTBg8eTDt37qQHDx5QTk4Offfdd1RYWGhTDrxQASf63xXOZDJRhw4dqF27drR//35VIEwmk9rIlUdnVVXU/PXo6GjS6/VUVFREubm5FBwcTC4uLtS1a1e6dOmSTX6aj9hNJhNJkkTu7u50+/ZtCgoKqpdpmbZt29LEiROJiCg8PJy+//57u0WW53maOXMmJScnkyRJJMsyET1L9F//+teUnZ1tl6CZf8ZoNNKgQYPozJkzVFBQYJMdxcaGDRvIz8+PcnNz6auvviKtVkuPHj1S6+Dk5ERPnz61+oKjdOCAgAAKCgqiM2fOkCzLdaqr+V0gx3FkMBho165dFBoaSt7e3pSfn0/e3t7Uv39/EgTB5hGfYrfy3WdtAxbzOjk4OBDP8xQbG0shISHk4uJCbdu2JS8vL3Jzc6NTp07RuHHjSKPRWO0TEVFCQgLt37+f0tLSqGXLltS9e3fSaDS0c+dOq3yszXfzO1ml7fr06UMGg4GOHz9eq53KeeHv708dOnSgsWPH0vXr10kQBEpJSaGSkhL1WGsuXv379yd3d3fq06cPvfHGG7R9+3by9fVVB3621FkQBGrSpAlNnjyZXn/9deratStdu3aNPD096datW3TkyBG6ceMGGQwG22OpBPGXKEQEpQiCAEEQwHEcFixYgDVr1mDixImIiYlB06ZNQUTgeR7BwcFwd3cHEeG/D0EhCALMbVVVmjVrhlu3bsHLywtEBH9/fxw9ehTffvst3NzcoNFoarVRVVHOHRoaioSEBAiCgH379mH16
       "[base64-encoded PNG output omitted: grid of the 16 DDN (100 steps) adversarial examples]\n",
      "text/plain": [
       "<Figure size 432x288 with 1 Axes>"
      ]
" 125 | ] 126 | }, 127 | "metadata": { 128 | "needs_background": "light" 129 | }, 130 | "output_type": "display_data" 131 | } 132 | ], 133 | "source": [ 134 | "print('Running DDN 100 attack')\n", 135 | "attacker = DDN(steps=100, device=device)\n", 136 | "start = time.time()\n", 137 | "ddn_atk = attacker.attack(model, x, labels=y, targeted=False)\n", 138 | "ddn_time = time.time() - start\n", 139 | "print('Completed in {:.2f}s'.format(ddn_time))\n", 140 | "\n", 141 | "plt.imshow(make_grid(ddn_atk, nrow=16).permute(1,2,0))\n", 142 | "plt.axis('off');" 143 | ] 144 | }, 145 | { 146 | "cell_type": "code", 147 | "execution_count": 5, 148 | "metadata": {}, 149 | "outputs": [ 150 | { 151 | "name": "stdout", 152 | "output_type": "stream", 153 | "text": [ 154 | "Running C&W 4 x 25 attack (limited to 100 iterations)\n", 155 | "Completed in 0.59s\n" 156 | ] 157 | }, 158 | { 159 | "data": { 160 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXAAAAA5CAYAAAAvOXAvAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAD+xJREFUeJztnXlsFHUbx38zu91uS2nLoRwahVaECgihMcihgCBWgpWKihAop8SDO6AIlCNGjcYgHqiljaBgBOQfwFASUEyqGAwaoIJXEaJAkaMUWkt323m+7x++M+/s7mw7V9ndN88neZLd2Z1nnt/1nd81uxIAwTAMwyQecqwDYBiGYezBAs4wDJOgsIAzDMMkKCzgDMMwCQoLOMMwTILCAs4wDJOgsIAzDMMkKCzgDMMwCQoLOMMwTILCAs4wDJOgeG/kxSRJsv3cflpamqirq3MzHEekpqaK+vp6IYQQc+bMEe+9916MI2IYxk08Ho9QFEVIkiRi/ZMjACSj4zHrgSclJYW8T05ODnnv8XhEXl6e9j4exFuSJM2mTJkiHn/8cSGEEDt27IhxZLHjzjvvjHUIMcfj8cQ6BA1Jimzn+nprh3hK341CkqQQ8ZZlWciyHKFTMQfADTMhBKLZlStXQt4nJSVBCAGPx4N58+ahY8eOUc+1YqpfJ9alSxdXYkk0k2U54lhqamrM44qXvIhHc6O+q5aRkRHz9Nwo83q9EOJf/VHfy7IMv9+PzMzMGx5PVE2NFwHXmyzLkCQJ7dq1c5RoWZYxadIkrbFdv34dRIT/TuXYtqFDh6K0tBRdu3bFr7/+CiICEblSUMuXLwcRxZUwqvm1Z88e5Ofn44cffmj2+2qlt2Lt27ePeTqt5Ed2djaWLl2Kt99+G+vWrQMRYefOncjJyYl5fKp9/fXXmDJliu3zk5KS4PV6tfrds2fPmKcpFqam/8UXX3TF3+23345du3ahsbERn376KaZOnYqUlJRmdSmhBFw1p70cSZJARJg8eTImTJigFcRDDz1k2ofRTWTFihWtUlH69Onj6s3gvvvuizi2aNEi+Hw+pKen285T9fXhw4ejfmbWTp8+jWvXrrWYZjt1wefztUo5CSFCbtx6a2hoaLVrmrHu3btrsWzatCkkNqu+ZFlGQUEBpk2bBiEE0tLStM+SkpIcd4RayyRJwqxZsyCEwN133+3YHxEBAIgIeXl5tv1MnDgR3377LYgIGzduxOnTpzFq1ChTHZ64F3B95dBbMBhssccXzQKBQERBFBUVWa4MRsdTUlKQnJwMSZIgy3JIA7FTsR944AEQEerr6+H3+12pyESEN998M2qa7PSUja4RnvZOnTqZOjcYDGLHjh3Nfmf69OmWY8rOznbtRqjeOCRJQnp6Ol5++WUoioK9e/dq13jsscewdu1aHD16FJs3b7Z8jdzcXDz11FNYtGgRpk2bZmuqoqioKCTN4XWwpKTEtoi7UReNbPny5aiurkZtbS1SUlIghEBdXR0KCwtt+du8ebOWB4qigIjw22+/mTq3uZu96ksIgREjRtgeiSxcuND2SDPuBTzcwns2dhIdCAS0xpCTkwMiwvr16y35iCZy+jl5fYyyLGvCbjW9iqKgvLwcHTp0gBD2bgSqpaamoqmpCW3btjWdJiu+jdJuxcaNG9fiTa9jx47o3r27Zd/jx4+HoihoaGiwdQMIt9zcXE3IDh48iEAggPXr10cMqe2U+5YtW1BbWxvSgycinD9/3vR0jNqB0OenKojhdczqqKRz587aNYw+tzvHnpWVhcbGRrzwwgsgIrz//vtaOo4cOWLZ31dffaXlwbZt2yCEwJw5c3D//ffbik/fvjMyMoB/BcyRde3aFXfccYetcxNWwO0kVl2A6NKliyZgqr/XX3/dcUEYxeqkMutvVOo8vSroiqJY9peeng5FUaLGYyTqZi28IRORtgBtRbyKi4u1tEmSZHiu3fWKYcOGaXnY1NSEdevWOSrfJUuWaHm2cePGqKNFK3bPPffggw8+0OL8/vvvsW/fPu39pUuXsHHjRq0uRzP1czOjS0VR8Msvv5iOsXPnzlHbYP/+/UFEWLlyZYsxGln4aEFd9yIi0yM41UaMGIHGxkYQEUaPHo3U1FTtRjVw4MCIkbgZU9Ok1r9BgwY5HtF5vV7bo4u4EnCjHmB4j6GmpiZqgVs1v99v+Xy14JKTk01VxubS1pwNGTIkYrShXru+vl47ZmUoqwqXEJGC+9dff9meb1aFVv1c76dr166W0v3PP/9gzZo1UeN30liICH369MHq1atx4MABAMAbb7xh2c/MmTNtNf7mLD09Ha+88gqICNeuXUO3bt1CPq+srERhYaGWB2an09q0aWMqX9566y1L+Xjt2rWQOlFTUwMiwtChQ5GXlwcAlstq586dUBQlZCT36KOP2l68P3DgAIgIubm5tutTc+1LxamAT548OaKtm52OiSsBF8L69IDdzGvTpo2WWWVlZa2628FOb/Hmm2/GuXPncOnSpYiejH4HQN++fU37rK6uxoABA5qtzE57kW6ILBHh1VdfRb9+/eD1euHz+UJGH3bz/cEHH9Re5+fnIxgMagtxVmM8d+6cq+X92muvob6+HtXV1Vi1apV23
       "[base64-encoded PNG output omitted: grid of the 16 C&W 4 x 25 adversarial examples]\n",
      "text/plain": [
       "<Figure size 432x288 with 1 Axes>"
      ]
" 163 | ] 164 | }, 165 | "metadata": { 166 | "needs_background": "light" 167 | }, 168 | "output_type": "display_data" 169 | } 170 | ], 171 | "source": [ 172 | "print('Running C&W 4 x 25 attack (limited to 100 iterations)')\n", 173 | "cwattacker100 = CarliniWagnerL2(device=device,\n", 174 | " image_constraints=(0, 1),\n", 175 | " num_classes=10,\n", 176 | " search_steps=4,\n", 177 | " max_iterations=25,\n", 178 | " learning_rate=0.5, \n", 179 | " initial_const=1.0)\n", 180 | "\n", 181 | "start = time.time()\n", 182 | "cw100_atk = cwattacker100.attack(model, x, labels=y, targeted=False)\n", 183 | "cw100_time = time.time() - start\n", 184 | "print('Completed in {:.2f}s'.format(cw100_time))\n", 185 | "\n", 186 | "plt.imshow(make_grid(cw100_atk, nrow=16).permute(1,2,0))\n", 187 | "plt.axis('off');" 188 | ] 189 | }, 190 | { 191 | "cell_type": "code", 192 | "execution_count": 6, 193 | "metadata": {}, 194 | "outputs": [ 195 | { 196 | "name": "stdout", 197 | "output_type": "stream", 198 | "text": [ 199 | "Running C&W 9 x 10000 attack\n", 200 | "Completed in 177.87s\n" 201 | ] 202 | }, 203 | { 204 | "data": { 205 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXAAAAA5CAYAAAAvOXAvAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAGPpJREFUeJztnXlYVNf5x9879w4zILK6L7hrKAha64aPEcUF18fqIz6mEUPcat0C1i0kJkpQxIrGBluXKGoS69ZYjYliTWyoWjHYiK27gE9EhaBGBmaGWc7394e/ewsCce6FJkx6Ps/zPsMs971nec97znnPORcBAHE4HA7H/dD92AngcDgcjja4A+dwOBw3hTtwDofDcVO4A+dwOBw3hTtwDofDcVO4A+dwOBw3hTtwDofDcVO4A+dwOBw3hTtwDofDcVO4A+dwOBw3RfohbyYIAj+3z+FwOCoBINT0+f/cCFynq/8sC0KNZcvhNAgEQeA22sAQBKFefNFP3oGLokh6vZ6IiHx8fKhPnz71asyiKBJ/IFjd+W84mPpqJLXpduWzhgC3T+38N+uUMVZnHT9oCEUL8uhBbWZ1Oh3NmzeP+vXrR0+ePKGpU6eSTqcjs9lM7du3J4vFQk6nU1Oa/Pz8aOHChWSz2Sg7O5sCAwPpo48+IkmqW3F6e3vT6tWrqX///rR06VLKysoiu92uWZ8oirXmUafT1dmAKuv/vns9D51ORwEBAdS9e3dq06YN5ebm0uXLl5X0qU2r/HtPT08ym82a0lQTgiAQAJIkiURRpJdeeonCw8PJx8eH/P396ciRI5STk0O5ubmadBPVj7MVRZE6dOhASUlJlJmZScXFxZSZmanalgRBIIPBQI8fPyaDwUCDBw+mwMBAKioqojNnzii/0Zrm+syzjCRJxBirF+dI9DRt6enpNGPGDDIYDPXSbiZOnEjR0dE0fPhw+vjjj+nGjRv08OFD+uKLL6i4uFh9An8oISLUJP8fG683EQQBnp6eKC8vB2MMN27cwLp163D+/HkwxjBmzBiIoqhar06ngyiKmDdvHl5++WX0798fnTt3xpw5c8AYq3NeunfvDsYYGGNIT0+Hh4eHZl06nQ5Lly4FEUGSJMTExGDZsmUYOHAghg4dCp1Op1lv5TzGxsaiXbt2mtO5ceNG7N27F5s2bcLs2bOxYMEC9OzZU/lekiRNdRUaGoq4uLh6tanK70+fPo3CwkKlvioqKsAYg8PhUK1X1i3/rdWG2rdvj0uXLuHbb79FcXExGGOwWCwoLy+H2WxWXeeCIGDlypXYvXs3+vXrh65duyIyMhJz586t9zar0+mg0+ng5+eHyZMno1WrVpr0BAUFYeTIkdi1axcmTpyICRMmaLZ1OY/3798HAERFRWnSJQgCvLy88Prrr+PkyZMoLy/HlStXYLfbkZSUhM6dOz+3rdfqUxuCA58+fTrGjx+Pjh07olWrVpAkCV27dkVMTAzWrFmDF154QXWhtWrVCrt27YKfnx+ICL169cLVq1cRFxcHX19f1c5RdiKrVq2CwWCoYsBnz57Fe++9VycDnjlzJhhjuHPnDhISEtC7d2+0bt0agiBAkiRNOktKSrBkyRIkJyfDaDQqHVBMTAwkSVLVCCs7UZ1Oh0GDBmHRokVgjOGbb75BmzZtVKUtMDAQn332GY4fPw5vb2/lc71ej9atW4OI0LdvX8TExKhucKNHj8ZXX30Fxhh8fHyqfKdW5HzLZXfx4kXYbDYcOXJEcd5r165FWloaDh8+jA8//FCVfkEQ0LVrV8TFxeGPf/wj0tLSlLJU4yx69uyJJ0+egDGGYcOGKU5Br9dj2rRpOHPmDPbs2YOAgABV6Xv33Xdr7KCTk5ORnJys2d49PDywcOFC/OlPf0J6ejqmTJkCLy8vpKWlYfr06XWqc8YY/v3vf8Nut8NkMmlOoyyyzsppV2NTcueUnZ2NFStWVCvHmTNnPldHbT61QYRQunTpQkFBQfTzn/+cSktLaezYsdSxY0eyWCzk5eVFUVFRtGfPHjp06BDdu3fvufpEUaR79+7Rr3/9a3I4HERE5O/vT4GBgeTj40MAyGazaUrrJ598QhUVFVU+69evH61YsUKTPpm0tDS6ffs2HTt2jAwGA/Xo0YP69u1Lhw8fprt376qaqoqiSBEREXT58mX68ssvqbCwkGw2mzL1czgc5OHh4XJ4QRAEcjqd1LhxY0pMTKROnTqR2Wym6OhoIiL69NNPKSAggO7evetyfocNG0YjRoygxo0bU3l5eZWpaWFhIYmiSC1btqSSkhKX8y7/Jjo6msLCwshut9PChQspKSlJ0zRdznd0dDTdv3+fLl26RD4+PpSXl0cFBQX02muv0Y4dOwgAWSwWYoypjrnv3LmTRo8eTb
       "[base64-encoded PNG output omitted: grid of the 16 C&W 9 x 10000 adversarial examples]\n",
      "text/plain": [
       "<Figure size 432x288 with 1 Axes>"
      ]
" 208 | ] 209 | }, 210 | "metadata": { 211 | "needs_background": "light" 212 | }, 213 | "output_type": "display_data" 214 | } 215 | ], 216 | "source": [ 217 | "print('Running C&W 9 x 10000 attack')\n", 218 | "cwattacker = CarliniWagnerL2(device=device,\n", 219 | " image_constraints=(0, 1),\n", 220 | " num_classes=10)\n", 221 | "\n", 222 | "start = time.time()\n", 223 | "cw_atk = cwattacker.attack(model, x, labels=y, targeted=False)\n", 224 | "cw_time = time.time() - start\n", 225 | "print('Completed in {:.2f}s'.format(cw_time))\n", 226 | "\n", 227 | "plt.imshow(make_grid(cw_atk, nrow=16).permute(1,2,0))\n", 228 | "plt.axis('off');" 229 | ] 230 | }, 231 | { 232 | "cell_type": "code", 233 | "execution_count": 7, 234 | "metadata": {}, 235 | "outputs": [ 236 | { 237 | "name": "stdout", 238 | "output_type": "stream", 239 | "text": [ 240 | "C&W 4 x 25 done in 0.6s: Success: 100.00%, Mean L2: 4.4013.\n", 241 | "C&W 9 x 10000 done in 177.9s: Success: 100.00%, Mean L2: 2.7124.\n", 242 | "DDN 100 done in 0.4s: Success: 100.00%, Mean L2: 2.9836.\n", 243 | "\n", 244 | "Figure: top row: original images; 2nd: C&W 4x25 atk; 3rd: C&W 9x10000 atk; 4th: DDN 100 atk\n" 245 | ] 246 | }, 247 | { 248 | "data": { 249 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXcAAAB3CAYAAAD4twBKAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJztXXdYFFfXvzvbAOliAWuwREUFW+wNlGBiN2hixNiwxYo1tojdRDHG3mLs/VXUV9HEiLH3FkhUjImKBZEiZZfdmfl9f/DOfLuwCzOzaxAyv+c5j7K7c+aWc88t59xzFACIDBkyZMgoWaCKugAyZMiQIcP+kJW7DBkyZJRAyMpdhgwZMkogZOUuQ4YMGSUQsnKXIUOGjBIIWbnLkCFDRgmErNxlyJAhowRCVu4yZMiQUQIhK3cZMmTIKIFQFXUBCCFEoVDI12RlyJAhQyQAKKx9J6/cZciQIaMEQlbuMmTIkFEC8a9V7hMnTiQzZ84kAAgAEhYWVtRFkiGjWOHatWuEYZiiLoYMa+CUW1ESIQT/JO3ZswcMw5jR/fv3UblyZbu+p2bNmmBZ1mY+pUqVwurVq3HlyhVUqVLlH20rmcSTh4cH/P394e/vj8jISISGhsLf37/Iy6VUKlGnTh2cOXMGkydPRuvWrSXz0mq1oGkaNE0Xeb2KimbPng3kKjC7UKNGjTBv3jzEx8eDZVkwDIOtW7eidu3aVp8pUK8WtWIvCuVuqtSjoqJw6NAhMAyDadOm2fU9ffr0sYvw16hRA0ajEQzD4Msvv7SZ319//WXx8+DgYLvVnWVZfPnll1AqlZKej42NxYIFC1C1alWL37u5uUGtVv+jciOE1q1bh3v37vHylZ2dzf+/qMrk6uqKI0eOIDk5GW/evAHLsmBZFllZWXj16pUknpMmTcLJkyfRrFmzIm9zIRQYGIhvvvkGALBnzx588803NvM8ffo0AKBdu3Y28YmKisLVq1fBMAxomgbDMFizZo2g8VhslfuJEyewefNmtGrVCtWrV7dLJ3fp0gV37tyBt7c3/9m0adPAMAzat29vV4HKzMzE6tWrbeIxZswYGI1GXrnv2bMHbdq0kcxPpVLh+fPnFr8bMGAASpUqZXO9e/ToAZZlsWPHDvzPE0oweXl54dSpU1ixYoXV33z77beSlOXnn3+Ov/76CwzDoHTp0nbt67w7QYZhEBcXh7179/Iklmf16tWxePFiXL58GRcvXkSNGjVE8/Dz88OrV6/AsizatWuHihUr8t/Nnj0bNE3jwIED8PDwEMV3/fr1aNiwoV3bkKMGDRqAYRheEVevXh2dO3eGo6OjaF6jRo3idxg0TeOXX36x246Dg618aJrG1atXpZaheCp3boXBsizS09Nx4cKFfLR37140btxYVIN4enqa/X379u23otxZlkXbtm1t4sEwjJlyNxqNSEhIQKNGjSTx69ixIxYsWGDxu4iICJQpU8am8mq1Wly9ehUsy6JTp06inw8ODgbDMAWWg2VZHDhwQDRvTskxDIOdO3fa3L/z5s1DSEgICCFISUnBkydPsHv3bsyYMQNeXl42TZRr165FUlKS2RgwGAy4e/cuNBqNYD7NmjXj62zp+wULFoBlWXz88ceCefr4+Ly1nUiHDh3w9OlTfgVL0zR+/PFHMAyDfv36ieb38uVLXplPnDgRarX6nVPujx49QlJSktQyFE/lbkoeHh5wdXVFUFAQgoKC0LJlS5QpUwYsy2LkyJGSG1an04FhGJuVcF6qWrUq/vjjD5t4PHz40Ey5d+nSBW3atOH/HjFihCh+u3fvhk6ns/idp6cnANis3DllcfToUanCimvXrln9fvjw4fjhhx9E861SpQoYhsGtW7f4lfXz589Rvnx50bw0Gg2io6Mxb948ODk52VVuOIX+008/ISoqiv/89OnTuHDhAn/MI7Sffv31V7Asi2HDhln9DcuyiIuLE1zGv//+26JyX7NmDTIyMkDTNFJSUjBz5kxRdffz8wNN0zhx4gT/mY+PD65evYqFCxeKbsuwsDAAQE5OTr7vFi1aBABYv3695L7iMHv2bJv6fPr06WBZFo0aNeJJRBmKv3K3RL169cLt27fzrcTF0Ns6D/3iiy9w9uxZyc9XrVqVX60nJCRg8eLF/HfPnj2D0WhERkaGqHPnN2/e4O7duxa/W7p0KRiGsfkc+9y5c2BZFgEBAZL74/Dhwxa/mzt3Ll6/fi2Jb7du3XDmzBkQQjBo0CA8ePAAAHD58mXR8jN//nwwDAM3Nze7yYuDgwNmzZoFlmXx8uXLfKv+O3fuI
       "[base64-encoded PNG output omitted: comparison grid; top row: original images, 2nd row: C&W 4x25, 3rd row: C&W 9x10000, 4th row: DDN 100]"
T9Rf1PKzs62y/GORqNBqVKl0KVLFyxevBitWrXCe++9h4CAAHz99deSxlWtWrX4I5q3cVQohl+xV+4RERHQ6/Vwc3N760L6byOtVmtmLLKVKlasKOqSzbtCXOTFoi7HjBkzgNxB8a8kbhcUFRVl8y5d6Luk0Nuw/UihgvRqsY3nLuPdhJTLHDJkcDC9IGRLLoB/CyAnyJYhQ4aMkoeClPu7mxfrH8a7nCJMRvGD4h1O5Sbj34Fio9EUeRLH2jp4OF5KpZKo1WrSu3dv4uPjw/OVyp97zp6ThSlPiqLeacVha/tZ42nKzx6839ZkzvE1PVp4F5C3HFxKQC6mDJcKUkxKSFO5fJdlkoOlNhAD03qa6g97ghvjduFlFy7/AEyMr4QQwgfTEgOuY7RaLRk6dChp1aoVmTFjBpkxYwZZuHAhn5hWiqByHQ+AaLVa4u7ubreEvFy9WTskzs0rONykaS+BAsC3Q16lLBSK/yXuNg2mZtr3UnmaPmfPXJocf1O+5cqVI+PGjSNlypQx+76ooFarzfqaZVni7u5OypcvTxQKBalatSpRKBSi7CVHjx4lmZmZxM3NzW5KjuunvO0lVZZMwckkBzGTEieTnFxy/cwwDFGpVFbLLIRv3jLmlU2pY/OdjAppalRRKBREpVIRlUpF9Ho9oSiK/45hGP5vIbYD7jfbtm0jnTt3JhERESQ+Pp7UqlWLlClThpQtW5aPQCcVnp6exMfHh2RnZ5Py5cuTc+fOEU9PT8n8CMmNArl48WLSuHFjMnXqVBIbGyuovpbAsqxZxEZOKFmJSactgaIowjAMASDaKMb1PfdM48aNSePGjYleryc///wzefr0KV8+Mby5sri7u5NatWrxiZxN32kLTCf3Ll26kG7duhFPT08SFBRE9u/fT+7evUtu3rwp6j2mcmhL+TQaDdHr9USlUhEPDw/Stm1bkpCQQPR6Pbl16xYhhJCMjAxB7+AmCKVSSTp06EA0Gg355JNPSEpKComNjSWvX7+WXE5CiNmiwLQ8ACQZ6zkZ4WRcqVTy+sJgMJj9tiD+QG400O7du5OOHTuSBw8ekHv37pGyZcuSH3/80eozYtCzZ0/SqVMn0rlzZ7Jnzx7y/PlzkpCQQM6cOUNev34tXgaK2g3Skiskd5FIqVTC09MTwcHB+PTTT82i23FukaZ+pkJcm5ydnQEA27Ztg1qthlarRadOnfDixQuEhIRI9lvlylupUiUMGTIELi4uCA4OxsOHD212sbtz5w5omsZff/2Fjh072uSKRVEUli5dirJly2Lo0KFYtmwZvv/+e7Rt25ZPklGQX3Te93L+/VzbK5VKuLi48PFwxCaxJiQ3axIAJCYmonHjxvy1ce7dHh4eePz4MZ8uUChptVo0atQIz549g5eXV77sPEKJe44rj0ajQbt27fD999+DpmkAQHZ2Nvbu3QsOUm4Ac+XiAp8JjQrJpT/k+iM8PBzLly/HTz/9hGfPnuHx48eIi4vD3r17sWnTJnTu3Nksm1hh9SYkN+TCgAED+N9rtVo+wJelKJRSiQvSVbZsWYwYMaLQWE2mukChUECj0aBTp0549OgR3zcpKSlISkqCl5cXHwnWNKWjtTbVaDRo0aIFunXrhnHjxvG8Pvroo3z1LazuXFu+//77yMnJQVpaGjp37oxGjRqhY8eOaNKkCZ9tqyBeBenVd27lzm3vWJYlc+bMIaVLlyZ3794lqampxNnZmaSnpxOFQkGqVatG/vzzT5KWliZqpa3RaMhff/1Fxo8fT4xGI6lWrRqZPHkyiY+PJxcuXCBqtTrfjC4UDMMQX19f4uTkRLKzs8mXX35JYmJizFbCYleJgwYNIvXq1SPp6elky5YtJDMzk1SoUIE8f/6cX2WL4ceyLOnXrx9JTU0lAMikSZNIpUqVSFJSEnFzcyMKhYJf6Vjim/czTpAoiiIeHh5k2LBhpHTp0sTV1ZU8f/6ctG3blhw9elRw+d5//32yY8cOsnXrVjJgwACet+mWesSIEaRSpUrk5cuXgtqT29Y2a9aMREVFEW9vb7PttdgVkekRjFqtJlqtlvTt25e0aNGC5OTkEK1WS9RqNTl//jy5e/cu8fPzI9nZ2aLeQVEUqVmzJmndujVp3Lgxyc7OJkuWLCGJiYmF7q5Mj9kaN25MQkNDSUpKCrl69So5ceIEWb58OaFpmvj7+5NevXqR8ePHk3PnzhGGYQrse0L+v/9dXFxIbGwsXw6VSkUqV65Mnj17xu+mpOw2tFotGTZsGGnWrBlJSUkh58+fJ9HR0WT69OkkPj6ePHjwoMDnTXf8FEWRjh07ksjISFK1alVCCCG3bt0iLi4u5NWrVyQ5OZloNBpB451lWWIwGMjFixf5d8ydO5e4u7uTY8eOEbVabbbqF1r3P//8k1y7do1ER0fz40ShUJAZM2aQU6dOkWfPnknftRX1qt105a5QKODo6Ii2bdtiypQpWL58ORo3boyOHTtizJgx2LNnD78Sunr1KuLi4nDr1i0EBQUJjmni7u4OBwcH/u9t27bBYDDg5MmT/ApRykqbe//evXv5FGhXr161uMMojLj3X7p0ic/UnpGRgdevX+PJkyfYs2cPn71FTFn9/f2RmpqK8uXLm62wONq9ezdcXFwKXcWY9penpyd27tyJCxcuYOXKlRgyZAhev34NAGjWrBkfi0MoHTlyBDdu3ODbNG85qlWrBgAWgy0VREqlEiNHjgQAGI1GpKWlYenSpfmCagklBwcHbN68GXXq1AEhBEOHDjULDSF21cr9vlmzZti4cSNfTi5BNofExESsWrWqQHl3cnLiV30BAQFIS0vD1atX+bDRnNyoVCr07t0bZ8+e5aNcFiRP3HdLlizhg3rllaGwsDBERkYiLCxMcEgC090TTdPIyclBtWrV4Ovri6CgIAQHB0Ov16Np06aCx6FGo4G/vz8yMzOh1+vh4+MDiqKg1Wrh7e2NYcOG4auvvjLL4SCmzyiKQu3atbF+/XpER0fz7xXDg6t3vXr1sH//fl73DBs2DIsXLxYUm6hAvVrUit1UuSuVSlSsWBFDhw7lY19wnc5t7wcNGoT27dvD398fjo6OAIABAwYIbtBy5cqhbNmy/PuA3C10aGioWU5MoR2V9zec0tVqtWZp64QqYW4ScHBwwLNnz6DX6/mBvX//fixduhQMw+D69esIDAwUpUCOHTvGx4TOOyjd3d0BQJAy5uqsUChQpkwZbNy4EUOHDoW3tzeCg4Oxbds2PH/+XPSE5uTkhOzsbCxZsoSPjZL3+WHDhvHHR2IGkpOTE+bOnYvz58/zfW40GvkUZkLLqlQq4eXlhX379vEyQ8j/x+G3JhdCj1NSUlLAMAxu3LiBrVu3wtfXFz4+PoiPj8eff/6JnJwcACgwQXbFihXh4+MDpVKJa9euAQAff52Q3GMetVqNmjVrYuLEiYiJicHIkSN5uS2sjAkJCQBgptCUSiV69uyJ2bNn8208a9YswUdJhOQmyKZpGsePH+f7v0yZMoiLi+MTZBfGi5vA1Go1unTpAgB49eoVnzeWk/ve
vXvDYDDgwoULZpOeEFIqlVCr1RgzZgyWL1+OX3/9tdCE5dZIpVKhTp06OHLkCIKDgzFs2DDs3LnTLCR3QVRslLvpIOMEhutMiqLg4OBgpvwuXbqEyMhI0dfdV65ciREjRuDhw4cwGo18Xk6NRpPvPFVMJ7m5uWHZsmVwcnJCx44dMX/+fEkdrlAoUKtWLV6p16xZkx+UhOROcAzDIDs7G/Xq1RM0cSgUCuj1eot2BaVSieTkZOj1ekm7FtN+uXLlCpDbqZKIpmmwLIsjR44gKCgIzZs3R8OGDfls9VxCYyGCbypHlStXxtChQzF16lR4e3tj4sSJoGkaa9asER290WAw8AmiufdIrS/3/ODBg5GRkYG0tDSz4HMqlQru7u74/PPPkZqaCgCIjIy02k/cajk0NBSEELx8+ZK303DErWC5cp8/fx5nz55FmzZtCmwLrs31ej22b98OiqJQunRpDBs2DJ9++ilCQ0Ph4uICtVqNhw8fIj09XXD9ly5diqysLPTr18/MvnD69GlenoS0M6ekXV1dcf/+fSQmJqJ+/fpmScE//PBDfPjhhwCA+Ph4aLXaAuXJ2ntjY2ORnZ2N6OhoycHyvLy8AADJycn47rvvsHz5cmRkZCA7OxvJycmFPl+slLsQUiqVmDp1KhiG4YVYzCCbMWMGpk+fjjdv3gAAwsPD+S2SVOXO0c2bN7Fv3z48e/aMP/6QQs7Ozti+fTv69OmTz+jn6emJa9euwWg0Fpogm6uHp6enWYJs0/qVKlUKAHDjxg1RKy3TBNHcZ9xOQ2q9L1y4gJcvX/Kx0bkM9X/88Qd0Oh327t1r9m4h5OjoiMjISDx+/Bhjx44FIbkTvNFoxIQJE0BRFMqWLVto8DSununp6Th06JDFMkiNVzJ+/Hikp6fj1KlTaNeuHaZMmYIvv/wSCxYswNWrV/nIjgaDAV5eXlb5cMq9W7duaNmyJfbu3QsvLy/+c3d3d9SuXZvfcWi1WtA0jVevXqFRo0aoUKFCoUczRqMRM2fOBCEE3bt3x8CBA/Hxxx/D2dmZV6AHDhwQHI9dqVTijz/+gMFg4ANyabVaUBQFg8EAmqYFL2C4d33yyScwGo2IjIw0O3ZRKpWoXr06f9Tz5MkTaDQaiyk2LfU91wZqtZpffC1cuLDA3xfUlm3atMG+fft4JxGu/TZt2oSHDx/izJkzBWaiKkivvnMGVVPkNexQFEVcXFxI48aNycKFCwkhhOzbt0+0i9SiRYtIZGQkWbduHVGpVOTixYv5jFSc0cz0/ULKGhMTQ7p27Uq8vb3Jy5cvLdZDCLKyssjcuXPJvXv3eH95rnyurq6EZVnCMAwZMmQImTVrllU+3HsVCoVZgmzg//3ROSPyjRs3RLlBWvqtUqnk3e6kxJjp0KEDqVKlComPjydv3rwhmzdvJunp6WTz5s3kyZMn5P79+2aum0Kg1WqJp6cnUSgURKPRkCVLlpBmzZoRQgiJiIggb968IRqNhrx+/ZocPHiw0MTmjo6OpG3btqRt27bk0qVLRKfTEUL+3wWUECLYUMdhx44dpEGDBiQtLY2EhoYSjUZDWJYl77//PgkICOCTb69evZokJydb5cM5JERHRxM/Pz/y4MEDkpycTLRaLVEoFMTZ2ZkkJycTvV5PwsPDSWBgIO8u+OLFC5KcnFygg4KpEdrZ2ZmUKVOGPHnyhJw6dYoA4PtcqVSSe/fuCXKtBUAcHBwIIbmGWc5IHRgYSCiKIk+fPhXkNAHkurtSFEVatmxJVCoVWbFiRT4jcZkyZYivry+hKIp4eXkRg8Eg6kITAD5ZPSGEfPXVV4QQYuaWLWS8cwmyf/31V0IIIenp6fx3gwcPJmq1mly+fJmcOXOGVK9eXXD5zApa1EQKmYk5mjNnDpYuXYratWsjODgYnp6e/G9NU2hZI272b9GiBYKCgvi8lC1atICHh4fZWTK3WhwguCQAAAq7SURBVLaWe9JaOTUaDaKiovhY1JaMgkLKGBQUhLCwsHyr41KlSuGPP/4ATdMwGAyiImXSNI0DBw6gVq1aqFmzJvz9/bF161b89ttv0Ov16N69u6DVkbW/fX19AQCbN28WtQOw1t9ce2i1Wmzfvl1yuN8GDRpg6NChuHTpEqKjozFgwAC4uLggJiYGAJCQkGBxF2Kpbzg7T2ZmJo4cOYINGzbg999/58/CGYZBcnIyxo8fj0GDBomqu6X3MwwDmqaRmpoqyNBtmm1o6NChiIyMBCG5YbM5Q39ERAS2bduGX375BRs3bsT169cRFRUluKwGgwGzZs1CeHg4HxPddNVLURQAICYmRvAKtly5cjh27BifIBsA5s2bB51Oh65du4pyAVWpVLh48SJu3brFj0nTM/W5c+diyJAhAIDffvuNf1aojY3jBYBPKWgqr7bs2C3JQMuWLZGUlISIiIh83xWoV4tasRek3LnGIiTXS6Jbt27w8vJCQEAAWrduzf/GyckJzZo1Q7Vq1QQ1Wo8ePRAUFMTnjuzRowd/ZiZkkFviyz3XtGlTnDp1ij8+EDLpmBK3Xd6yZQvu3LnDP6tWq6FSqbBkyRIAgF6vR3R0tCihWb9+PfR6Pe7du4enT59Cr9fj2rVryMjIgF6vt6pgCusfjiIiIpCcnIyWLVvaRag5/tOnTwcATJ06lTeGiyHOVlGhQgU0a9aMH5yenp64ePEiAGHHSFwfjx49GpmZmfj555+RlpbGJ8U2GAz8gD9x4gSf3k1qW8yZMwcc+vfvL7iunKKdOnUqevbsCTc3N7MUgu3bt8eKFSuwbNkyDB48GBMnThRlEASAyZMnQ6PRYNy4cWjZsqXZkZaDgwOys7PRvn170X3/6aefol+/fihTpgyfILtSpUqC+XCK/MyZM7wtjRtTSqUSU6ZM4RNkMwyD//znP2bn8QWVjyPut5zdy83NDX369MHw4cOxefNmbNiwwSaZN/2c68tHjx5ZvCtRrJU7IbnuisOHD4dKpcKaNWtgNBrRtm1b9O3bF1u2bMGnn34KDw+PQs/lOCXcvHlzfP755/Dw8ECpUqUQHh6Otm3bQq1Wo1SpUnBwcICzszNGjx6NqVOn5vOEyNsB3L8qlQobN26ETqdD7969zRSCkA7lqFy5ckhPTwfDMOjXrx/S0tJgCoZhRF+KMX1n3tVx3gTZhZXPkk2iffv2yMzMxLRp06zWW2p5k5OT8ffff6NSpUrYuHGjZJ6mq0tukKrVapw+fRp9+/YVbKQdOnQoxo0bh8aNG4OiKFy+fBkA8Ouvv+LQoUMAgDt37qBHjx753iu0TXU6HQAgNTUVrVq1EtVuFEWhRYsW2LFjB548eYJjx44hMTERT58+xZ49e/DVV1+BEIL4+HgAQOfOnfN5ihXUJ0ajEbGxsVCr1dBoNHB2doZGo0H58uWRnp7OG1Jtmdhq164NAKhXr56o5xo2bIhatWph9OjRSEtLQ1xcHH744Qe8ePGCNwITQhASEsLvVgrbXecdO9z/IyMjodPpcPr0acyYMQNff/01XF1deZ5C5VFIguzU1FScOXMm33fFWrn7+flh5cq
VaNq0KZKTkwHk3vZbu3atmSGjMJ9aTuidnJz4m2oAoNPpkJaWhvPnz2PRokX85zk5OTAajQCAL7/8UrCAPXjwAADQpUuXQoXDGlEUhf379/PGSW5rDoD/LDo6WlRqM2vvpf6XINs0UbIQXnn5hYSEQKfTmd38lDKwTfuK+7/RaMS5c+dACBF1fCCUmjZtipycHH5yE0u9evXi5Uav18NgMCA+Pl5wAua8bUqZJMiWchSlVqtRrVo1DBgwADRNY+XKlRgzZgwqVqyIChUq8MqTywFKSO7qtjCvIa6M9+7dg8FgQOvWrVG9enV069YNS5YsQWZmJpKSkkSNl7zyyf27Y8cO0DQtWpYCAwPRp08flC1bFoGBgQCARYsWoV69evD29uZ3ftevX5ecMUypVCI4OBg6nQ5GoxHbt29Hs2bNeA8ksQsbS+UwPSaqU6cOnjx5ki9BNyHFXLkTkuvt4OPjgzZt2uSroJRz3SlTpmDhwoVYvXo1zp07xyvQcePGoXv37vzZpBAy7QSFQgGdTodu3bqZvS/vb4QIuUqlwsKFC/Hq1Svo9Xr8/fffMBqNmDJlimhhLKz8ixYtQkZGhqT2JCR3YuUumHEeCVJ5WerX5ORkDB8+HPfu3TNzQbRX/RUKBR4/fswrOjHEnXGvWLEC6enpSEhIwOzZs6HRaHgXUTH8SpcuzXtxcZe5pLYfRVGoVq0a6tati379+vHhEEaMGIHOnTsjMjISY8eOxfDhw9G4cWNRvOvVq4eYmBjo9XqcPHkSP/30E390ZI8k6VzyeqkyxJXho48+QosWLXjffldXV3zzzTcAIDr9oOlRSWxsLFJSUnD06FGb6jl48GDo9Xo8fvwYK1euxMmTJ7F161YYDAYYjUZcuXKF9+iy9HyxV+6EEDRp0gT9+vUTpXgtUd7VLreNytuBUgaSm5sb1q1bxwuAVJ6mv69Tpw6Sk5OxY8cOdO/eXfSNTyGUnp6OCRMmSH6eoiisW7cOer3eTLHbotxN24EzsD179swueTUtkbOzM787kGpvqV27NlxdXSUZkzmyV4JsQnIXRa6urti4cSO/Sr99+zbGjh2LPn36YPTo0fD09MTy5csRFhYGd3d3wbwdHBzQtWtXDBgwAH379kWzZs1E25estadSqYRer8f169dt4qXVauHk5IQPP/wQXbp0gY+PD0aMGIGDBw/i/v37/AU2scTddm7SpInNcqdSqbB69WqsXbsWRqMR8fHxiImJwbhx41CnTh04OjoWOJaKrXLnrN/169fHrl27cPLkSVHGFWvCY9pQtgzEvBQcHMxPPlKMk4WV2x58LPFMSEiwKY8qRVEYPnw4bty4gYYNG4o6IxZSvsWLF2PkyJHw9fU1u3xjL/6ccp40aRLKli0r6jaxNbmSWp6EhAQYjUabE2RzXiJ9+vRBnz59MHLkSFSsWBEajQb169dH3bp14ePjg4CAAKxatQqfffaZVdtSUVBmZqYob6O8decuPTo4OMDFxQUhISFYsmQJgoKC0LdvX6xduxb+/v422Ya43YC95NBaPQp6ttgqd0JyZ8mZM2fi8ePHvEX+bSg6W0mr1aJ+/frw8vIyc0d710mj0RTqZSSE2rRpg7CwMAQEBEi+il0Yvc1+Nx2kRSlf9k6Qze2kuAiohOSuFrVaLRwcHMxizRRVnU2JK8/333+P+vXrS+ZjzY2Z+l8EU872YEtZ34U2K0ivyjlUZdgVQi6tvItQiLy09jbLYe8ycBd0ikPiclP5sVWWCurTt9HORQHICbJlyJAho+ThnVfuMmTIkCHDvig2OVRlyJAhQ4ZwyMpdhgwZMkogZOUuQ4YMGSUQsnKXIUOGjBIIWbnLkCFDRgmErNxlyJAhowRCVu4yZMiQUQIhK3cZMmTIKIGQlbsMGTJklEDIyl2GDBkySiBk5S5DhgwZJRCycpchQ4aMEghZucuQIUNGCYSs3GXIkCGjBEJW7jJkyJBRAiErdxkyZMgogZCVuwwZMmSUQMjKXYYMGTJKIGTlLkOGDBklELJylyFDhowSCFm5y5AhQ0YJhKzcZciQIaMEQlbuMmTIkFEC8X8Q5AU+Vt0HWgAAAABJRU5ErkJggg==\n", 250 | "text/plain": [ 251 | "
" 252 | ] 253 | }, 254 | "metadata": { 255 | "needs_background": "light" 256 | }, 257 | "output_type": "display_data" 258 | } 259 | ], 260 | "source": [ 261 | "all_imgs = torch.cat((x, cw100_atk, cw_atk, ddn_atk))\n", 262 | "img_grid = make_grid(all_imgs, nrow=16, pad_value=0)\n", 263 | "plt.imshow(img_grid.permute(1,2,0))\n", 264 | "plt.axis('off')\n", 265 | "\n", 266 | "# Print metrics\n", 267 | "pred_orig = model(x).argmax(dim=1).cpu()\n", 268 | "pred_cw = model(cw_atk).argmax(dim=1).cpu()\n", 269 | "pred_cw100 = model(cw100_atk).argmax(dim=1).cpu()\n", 270 | "pred_ddn = model(ddn_atk).argmax(dim=1).cpu()\n", 271 | "print('C&W 4 x 25 done in {:.1f}s: Success: {:.2f}%, Mean L2: {:.4f}.'.format(\n", 272 | " cw100_time,\n", 273 | " (pred_cw100 != y.cpu()).float().mean().item() * 100,\n", 274 | " l2_norm(cw100_atk - x).mean().item()\n", 275 | "))\n", 276 | "print('C&W 9 x 10000 done in {:.1f}s: Success: {:.2f}%, Mean L2: {:.4f}.'.format(\n", 277 | " cw_time,\n", 278 | " (pred_cw != y.cpu()).float().mean().item() * 100,\n", 279 | " l2_norm(cw_atk - x).mean().item()\n", 280 | "))\n", 281 | "print('DDN 100 done in {:.1f}s: Success: {:.2f}%, Mean L2: {:.4f}.'.format(\n", 282 | " ddn_time,\n", 283 | " (pred_ddn != y.cpu()).float().mean().item() * 100,\n", 284 | " l2_norm(ddn_atk - x).mean().item()\n", 285 | "))\n", 286 | "print()\n", 287 | "print('Figure: top row: original images; 2nd: C&W 4x25 atk; 3rd: C&W 9x10000 atk; 4th: DDN 100 atk')" 288 | ] 289 | } 290 | ], 291 | "metadata": { 292 | "kernelspec": { 293 | "display_name": "Python 3", 294 | "language": "python", 295 | "name": "python3" 296 | }, 297 | "language_info": { 298 | "codemirror_mode": { 299 | "name": "ipython", 300 | "version": 3 301 | }, 302 | "file_extension": ".py", 303 | "mimetype": "text/x-python", 304 | "name": "python", 305 | "nbconvert_exporter": "python", 306 | "pygments_lexer": "ipython3", 307 | "version": "3.6.7" 308 | }, 309 | "pycharm": { 310 | "stem_cell": { 311 | "cell_type": "raw", 312 | "source": [], 313 | "metadata": { 314 | "collapsed": false 315 | } 316 | } 317 | } 318 | }, 319 | "nbformat": 4, 320 | "nbformat_minor": 2 321 | } -------------------------------------------------------------------------------- /fast_adv/__init__.py: -------------------------------------------------------------------------------- 1 | from . import attacks, defenses, models, utils 2 | 3 | __all__ = [ 4 | 'attacks', 5 | 'defenses', 6 | 'models', 7 | 'utils', 8 | ] 9 | -------------------------------------------------------------------------------- /fast_adv/attacks/__init__.py: -------------------------------------------------------------------------------- 1 | from .carlini import CarliniWagnerL2 2 | from .deepfool import DeepFool 3 | from .ddn import DDN 4 | 5 | __all__ = [ 6 | 'DDN', 7 | 'CarliniWagnerL2', 8 | 'DeepFool', 9 | ] 10 | -------------------------------------------------------------------------------- /fast_adv/attacks/carlini.py: -------------------------------------------------------------------------------- 1 | from typing import Tuple, Optional 2 | import torch 3 | import torch.nn as nn 4 | import torch.nn.functional as F 5 | import torch.autograd as autograd 6 | import torch.optim as optim 7 | 8 | 9 | class CarliniWagnerL2: 10 | """ 11 | Carlini's attack (C&W): https://arxiv.org/abs/1608.04644 12 | Based on https://github.com/tensorflow/cleverhans/blob/master/cleverhans/attacks_tf.py 13 | 14 | Parameters 15 | ---------- 16 | image_constraints : tuple 17 | Bounds of the images. 
18 |     num_classes : int
19 |         Number of classes of the model to attack.
20 |     confidence : float, optional
21 |         Confidence of the attack for Carlini's loss, in terms of the distance between logits.
22 |     learning_rate : float
23 |         Learning rate for the optimization.
24 |     search_steps : int
25 |         Number of binary search steps to find the best scale constant for Carlini's loss.
26 |     max_iterations : int
27 |         Maximum number of iterations during a single search step.
28 |     initial_const : float
29 |         Initial trade-off constant of the attack.
30 |     quantize : bool, optional
31 |         If True, the returned adversarials are quantized to the 1/255 grid (values 0, 1/255, 2/255, etc.).
32 |     device : torch.device, optional
33 |         Device to use for the attack.
34 |     callback : object, optional
35 |         Callback to display losses.
36 |     """
37 | 
38 |     def __init__(self,
39 |                  image_constraints: Tuple[float, float],
40 |                  num_classes: int,
41 |                  confidence: float = 0,
42 |                  learning_rate: float = 0.01,
43 |                  search_steps: int = 9,
44 |                  max_iterations: int = 10000,
45 |                  abort_early: bool = True,
46 |                  initial_const: float = 0.001,
47 |                  quantize: bool = False,
48 |                  device: torch.device = torch.device('cpu'),
49 |                  callback: Optional = None) -> None:
50 | 
51 |         self.confidence = confidence
52 |         self.learning_rate = learning_rate
53 | 
54 |         self.binary_search_steps = search_steps
55 |         self.max_iterations = max_iterations
56 |         self.abort_early = abort_early
57 |         self.initial_const = initial_const
58 |         self.num_classes = num_classes
59 | 
60 |         self.repeat = self.binary_search_steps >= 10
61 | 
62 |         self.boxmin = image_constraints[0]
63 |         self.boxmax = image_constraints[1]
64 |         self.boxmul = (self.boxmax - self.boxmin) / 2
65 |         self.boxplus = (self.boxmin + self.boxmax) / 2
66 |         self.quantize = quantize
67 | 
68 |         self.device = device
69 |         self.callback = callback
70 |         self.log_interval = 10
71 | 
72 |     @staticmethod
73 |     def _arctanh(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
74 |         x *= (1. - eps)
75 |         return (torch.log((1 + x) / (1 - x))) * 0.5
76 | 
77 |     def _step(self, model: nn.Module, optimizer: optim.Optimizer, inputs: torch.Tensor, tinputs: torch.Tensor,
78 |               modifier: torch.Tensor, labels: torch.Tensor, labels_infhot: torch.Tensor, targeted: bool,
79 |               const: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
80 | 
81 |         batch_size = inputs.shape[0]
82 |         adv_input = torch.tanh(tinputs + modifier) * self.boxmul + self.boxplus
83 |         l2 = (adv_input - inputs).view(batch_size, -1).pow(2).sum(1)
84 | 
85 |         logits = model(adv_input)
86 | 
87 |         real = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
88 |         other = (logits - labels_infhot).max(1)[0]
89 |         if targeted:
90 |             # if targeted, optimize for making the target class most likely
91 |             logit_dists = torch.clamp(other - real + self.confidence, min=0)
92 |         else:
93 |             # if non-targeted, optimize for making this class least likely.
94 |             logit_dists = torch.clamp(real - other + self.confidence, min=0)
95 | 
96 |         logit_loss = (const * logit_dists).sum()
97 |         l2_loss = l2.sum()
98 |         loss = logit_loss + l2_loss
99 | 
100 |         optimizer.zero_grad()
101 |         loss.backward()
102 |         optimizer.step()
103 | 
104 |         return adv_input.detach(), logits.detach(), l2.detach(), logit_dists.detach(), loss.detach()
105 | 
106 |     def attack(self, model: nn.Module, inputs: torch.Tensor, labels: torch.Tensor,
107 |                targeted: bool = False) -> torch.Tensor:
108 |         """
109 |         Performs the attack of the model for the inputs and labels.
110 | 
111 |         Parameters
112 |         ----------
113 |         model : nn.Module
114 |             Model to attack.
115 | inputs : torch.Tensor 116 | Batch of samples to attack. 117 | labels : torch.Tensor 118 | Labels of the samples to attack if untargeted, else labels of targets. 119 | targeted : bool, optional 120 | Whether to perform a targeted attack or not. 121 | 122 | Returns 123 | ------- 124 | torch.Tensor 125 | Batch of samples modified to be adversarial to the model 126 | 127 | """ 128 | batch_size = inputs.shape[0] 129 | tinputs = self._arctanh((inputs - self.boxplus) / self.boxmul) 130 | 131 | # set the lower and upper bounds accordingly 132 | lower_bound = torch.zeros(batch_size, device=self.device) 133 | CONST = torch.full((batch_size,), self.initial_const, device=self.device) 134 | upper_bound = torch.full((batch_size,), 1e10, device=self.device) 135 | 136 | o_best_l2 = torch.full((batch_size,), 1e10, device=self.device) 137 | o_best_score = torch.full((batch_size,), -1, dtype=torch.long, device=self.device) 138 | o_best_attack = inputs.clone() 139 | 140 | # setup the target variable, we need it to be in one-hot form for the loss function 141 | labels_onehot = torch.zeros(labels.size(0), self.num_classes, device=self.device) 142 | labels_onehot.scatter_(1, labels.unsqueeze(1), 1) 143 | labels_infhot = torch.zeros_like(labels_onehot).scatter_(1, labels.unsqueeze(1), float('inf')) 144 | 145 | for outer_step in range(self.binary_search_steps): 146 | 147 | # setup the modifier variable, this is the variable we are optimizing over 148 | modifier = torch.zeros_like(inputs, requires_grad=True) 149 | 150 | # setup the optimizer 151 | optimizer = optim.Adam([modifier], lr=self.learning_rate, betas=(0.9, 0.999), eps=1e-8) 152 | best_l2 = torch.full((batch_size,), 1e10, device=self.device) 153 | best_score = torch.full((batch_size,), -1, dtype=torch.long, device=self.device) 154 | 155 | # The last iteration (if we run many steps) repeat the search once. 156 | if self.repeat and outer_step == (self.binary_search_steps - 1): 157 | CONST = upper_bound 158 | 159 | prev = float('inf') 160 | for iteration in range(self.max_iterations): 161 | # perform the attack 162 | adv, logits, l2, logit_dists, loss = self._step(model, optimizer, inputs, tinputs, modifier, 163 | labels, labels_infhot, targeted, CONST) 164 | 165 | if self.callback and (iteration + 1) % self.log_interval == 0: 166 | self.callback.scalar('logit_dist_{}'.format(outer_step), iteration + 1, logit_dists.mean().item()) 167 | self.callback.scalar('l2_norm_{}'.format(outer_step), iteration + 1, l2.sqrt().mean().item()) 168 | 169 | # check if we should abort search if we're getting nowhere. 
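# (Checked every max_iterations // 10 iterations: if the total loss has not
# improved by at least 0.01% since the previous check, the inner optimization
# for this binary search step stops early.)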
170 | if self.abort_early and iteration % (self.max_iterations // 10) == 0: 171 | if loss > prev * 0.9999: 172 | break 173 | prev = loss 174 | 175 | # adjust the best result found so far 176 | predicted_classes = (logits - labels_onehot * self.confidence).argmax(1) if targeted else \ 177 | (logits + labels_onehot * self.confidence).argmax(1) 178 | 179 | is_adv = (predicted_classes == labels) if targeted else (predicted_classes != labels) 180 | is_smaller = l2 < best_l2 181 | o_is_smaller = l2 < o_best_l2 182 | is_both = is_adv * is_smaller 183 | o_is_both = is_adv * o_is_smaller 184 | 185 | best_l2[is_both] = l2[is_both] 186 | best_score[is_both] = predicted_classes[is_both] 187 | o_best_l2[o_is_both] = l2[o_is_both] 188 | o_best_score[o_is_both] = predicted_classes[o_is_both] 189 | o_best_attack[o_is_both] = adv[o_is_both] 190 | 191 | # adjust the constant as needed 192 | adv_found = (best_score == labels) if targeted else ((best_score != labels) * (best_score != -1)) 193 | upper_bound[adv_found] = torch.min(upper_bound[adv_found], CONST[adv_found]) 194 | adv_not_found = ~adv_found 195 | lower_bound[adv_not_found] = torch.max(lower_bound[adv_not_found], CONST[adv_not_found]) 196 | is_smaller = upper_bound < 1e9 197 | CONST[is_smaller] = (lower_bound[is_smaller] + upper_bound[is_smaller]) / 2 198 | CONST[(~is_smaller) * adv_not_found] *= 10 199 | 200 | if self.quantize: 201 | adv_found = o_best_score != -1 202 | o_best_attack[adv_found] = self._quantize(model, inputs[adv_found], o_best_attack[adv_found], 203 | labels[adv_found], targeted=targeted) 204 | 205 | # return the best solution found 206 | return o_best_attack 207 | 208 | def _quantize(self, model: nn.Module, inputs: torch.Tensor, adv: torch.Tensor, labels: torch.Tensor, 209 | targeted: bool = False) -> torch.Tensor: 210 | """ 211 | Quantize the continuous adversarial inputs. 212 | 213 | model : nn.Module 214 | Model to attack. 215 | inputs : torch.Tensor 216 | Batch of samples to attack. 217 | adv : torch.Tensor 218 | Batch of continuous adversarial perturbations produced by the attack. 219 | labels : torch.Tensor 220 | Labels of the samples if untargeted, else labels of targets. 221 | targeted : bool, optional 222 | Whether to perform a targeted attack or not. 223 | 224 | Returns 225 | ------- 226 | torch.Tensor 227 | Batch of samples modified to be quantized and adversarial to the model. 
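If the greedy correction loop below cannot keep the example adversarial
within 100 steps, the perturbation is simply rounded to the 1/255 grid,
so the returned sample may no longer be adversarial.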
228 | 229 | """ 230 | batch_size = inputs.shape[0] 231 | multiplier = 1 if targeted else -1 232 | delta = torch.round((adv - inputs) * 255) / 255 233 | delta.requires_grad_(True) 234 | logits = model(inputs + delta) 235 | is_adv = (logits.argmax(1) == labels) if targeted else (logits.argmax(1) != labels) 236 | i = 0 237 | while not is_adv.all() and i < 100: 238 | loss = F.cross_entropy(logits, labels, reduction='sum') 239 | grad = autograd.grad(loss, delta)[0].view(batch_size, -1) 240 | order = grad.abs().max(1, keepdim=True)[0] 241 | direction = (grad / order).int().float() 242 | direction.mul_(1 - is_adv.float().unsqueeze(1)) 243 | delta.data.view(batch_size, -1).sub_(multiplier * direction / 255) 244 | 245 | logits = model(inputs + delta) 246 | is_adv = (logits.argmax(1) == labels) if targeted else (logits.argmax(1) != labels) 247 | i += 1 248 | 249 | delta.detach_() 250 | if not is_adv.all(): 251 | delta.data[~is_adv].copy_(torch.round((adv[~is_adv] - inputs[~is_adv]) * 255) / 255) 252 | 253 | return inputs + delta 254 | -------------------------------------------------------------------------------- /fast_adv/attacks/ddn.py: -------------------------------------------------------------------------------- 1 | from typing import Optional 2 | import torch 3 | import torch.nn as nn 4 | import torch.nn.functional as F 5 | import torch.optim as optim 6 | 7 | 8 | class DDN: 9 | """ 10 | DDN attack: decoupling the direction and norm of the perturbation to achieve a small L2 norm in few steps. 11 | 12 | Parameters 13 | ---------- 14 | steps : int 15 | Number of steps for the optimization. 16 | gamma : float, optional 17 | Factor by which the norm will be modified. new_norm = norm * (1 + or - gamma). 18 | init_norm : float, optional 19 | Initial value for the norm. 20 | quantize : bool, optional 21 | If True, the returned adversarials will have quantized values to the specified number of levels. 22 | levels : int, optional 23 | Number of levels to use for quantization (e.g. 256 for 8 bit images). 24 | max_norm : float or None, optional 25 | If specified, the norms of the perturbations will not be greater than this value which might lower success rate. 26 | device : torch.device, optional 27 | Device on which to perform the attack. 28 | callback : object, optional 29 | Visdom callback to display various metrics. 30 | 31 | """ 32 | 33 | def __init__(self, 34 | steps: int, 35 | gamma: float = 0.05, 36 | init_norm: float = 1., 37 | quantize: bool = True, 38 | levels: int = 256, 39 | max_norm: Optional[float] = None, 40 | device: torch.device = torch.device('cpu'), 41 | callback: Optional = None) -> None: 42 | self.steps = steps 43 | self.gamma = gamma 44 | self.init_norm = init_norm 45 | 46 | self.quantize = quantize 47 | self.levels = levels 48 | self.max_norm = max_norm 49 | 50 | self.device = device 51 | self.callback = callback 52 | 53 | def attack(self, model: nn.Module, inputs: torch.Tensor, labels: torch.Tensor, 54 | targeted: bool = False) -> torch.Tensor: 55 | """ 56 | Performs the attack of the model for the inputs and labels. 57 | 58 | Parameters 59 | ---------- 60 | model : nn.Module 61 | Model to attack. 62 | inputs : torch.Tensor 63 | Batch of samples to attack. Values should be in the [0, 1] range. 64 | labels : torch.Tensor 65 | Labels of the samples to attack if untargeted, else labels of targets. 66 | targeted : bool, optional 67 | Whether to perform a targeted attack or not. 
68 | 69 | Returns 70 | ------- 71 | torch.Tensor 72 | Batch of samples modified to be adversarial to the model. 73 | 74 | """ 75 | if inputs.min() < 0 or inputs.max() > 1: raise ValueError('Input values should be in the [0, 1] range.') 76 | 77 | batch_size = inputs.shape[0] 78 | multiplier = 1 if targeted else -1 79 | delta = torch.zeros_like(inputs, requires_grad=True) 80 | norm = torch.full((batch_size,), self.init_norm, device=self.device, dtype=torch.float) 81 | worst_norm = torch.max(inputs, 1 - inputs).view(batch_size, -1).norm(p=2, dim=1) 82 | 83 | # Setup optimizers 84 | optimizer = optim.SGD([delta], lr=1) 85 | scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=self.steps, eta_min=0.01) 86 | 87 | best_l2 = worst_norm.clone() 88 | best_delta = torch.zeros_like(inputs) 89 | adv_found = torch.zeros(inputs.size(0), dtype=torch.uint8, device=self.device) 90 | 91 | for i in range(self.steps): 92 | scheduler.step() 93 | 94 | l2 = delta.data.view(batch_size, -1).norm(p=2, dim=1) 95 | adv = inputs + delta 96 | logits = model(adv) 97 | pred_labels = logits.argmax(1) 98 | ce_loss = F.cross_entropy(logits, labels, reduction='sum') 99 | loss = multiplier * ce_loss 100 | 101 | is_adv = (pred_labels == labels) if targeted else (pred_labels != labels) 102 | is_smaller = l2 < best_l2 103 | is_both = is_adv * is_smaller 104 | adv_found[is_both] = 1 105 | best_l2[is_both] = l2[is_both] 106 | best_delta[is_both] = delta.data[is_both] 107 | 108 | optimizer.zero_grad() 109 | loss.backward() 110 | # renorming gradient 111 | grad_norms = delta.grad.view(batch_size, -1).norm(p=2, dim=1) 112 | delta.grad.div_(grad_norms.view(-1, 1, 1, 1)) 113 | # avoid nan or inf if gradient is 0 114 | if (grad_norms == 0).any(): 115 | delta.grad[grad_norms == 0] = torch.randn_like(delta.grad[grad_norms == 0]) 116 | 117 | if self.callback: 118 | cosine = F.cosine_similarity(-delta.grad.view(batch_size, -1), 119 | delta.data.view(batch_size, -1), dim=1).mean().item() 120 | self.callback.scalar('ce', i, ce_loss.item() / batch_size) 121 | self.callback.scalars( 122 | ['max_norm', 'l2', 'best_l2'], i, 123 | [norm.mean().item(), l2.mean().item(), 124 | best_l2[adv_found].mean().item() if adv_found.any() else norm.mean().item()] 125 | ) 126 | self.callback.scalars(['cosine', 'lr', 'success'], i, 127 | [cosine, optimizer.param_groups[0]['lr'], adv_found.float().mean().item()]) 128 | 129 | optimizer.step() 130 | 131 | norm.mul_(1 - (2 * is_adv.float() - 1) * self.gamma) 132 | norm = torch.min(norm, worst_norm) 133 | 134 | delta.data.mul_((norm / delta.data.view(batch_size, -1).norm(2, 1)).view(-1, 1, 1, 1)) 135 | delta.data.add_(inputs) 136 | if self.quantize: 137 | delta.data.mul_(self.levels - 1).round_().div_(self.levels - 1) 138 | delta.data.clamp_(0, 1).sub_(inputs) 139 | 140 | if self.max_norm: 141 | best_delta.renorm_(p=2, dim=0, maxnorm=self.max_norm) 142 | if self.quantize: 143 | best_delta.mul_(self.levels - 1).round_().div_(self.levels - 1) 144 | 145 | return inputs + best_delta 146 | 147 | 148 | -------------------------------------------------------------------------------- /fast_adv/attacks/ddn_tf.py: -------------------------------------------------------------------------------- 1 | from typing import Tuple, Optional, Callable 2 | import tensorflow as tf 3 | import numpy as np 4 | 5 | 6 | def cosine_distance(x1, x2, eps=1e-8): 7 | numerator = tf.reduce_sum(x1 * x2, axis=1) 8 | denominator = tf.norm(x1, axis=1) * tf.norm(x2, axis=1) + eps 9 | return tf.reduce_mean(numerator / denominator) 10 | 11 | 12 
| def quantization(x, levels): 13 | return tf.round(x * (levels - 1)) / (levels - 1) 14 | 15 | 16 | class DDN_tf: 17 | """ 18 | DDN attack: decoupling the direction and norm of the perturbation to 19 | achieve a small L2 norm in few steps. 20 | 21 | Parameters 22 | ---------- 23 | model : Callable 24 | A function that accepts a tf.placeholder as argument, and returns 25 | logits (pre-softmax activations) 26 | batch_shape : tuple (B x H x W x C) 27 | The input shape 28 | steps : int 29 | Number of steps for the optimization. 30 | targeted : bool 31 | Whether to perform a targeted attack or not. 32 | gamma : float, optional 33 | Factor by which the norm will be modified: 34 | new_norm = norm * (1 + or - gamma). 35 | init_norm : float, optional 36 | Initial value for the norm. 37 | quantize : bool, optional 38 | If True, the returned adversarials will have quantized values to the 39 | specified number of levels. 40 | levels : int, optional 41 | Number of levels to use for quantization (e.g. 256 for 8 bit images). 42 | max_norm : float or None, optional 43 | If specified, the norms of the perturbations will not be greater than 44 | this value which might lower success rate. 45 | callback : object, optional 46 | Visdom callback to display various metrics. 47 | 48 | """ 49 | 50 | def __init__(self, model: Callable, batch_shape: Tuple[int, int, int, int], 51 | steps: int, targeted: bool, gamma: float = 0.05, 52 | init_norm: float = 1., quantize: bool = True, 53 | levels: int = 256, max_norm: float or None = None, 54 | callback: Optional = None) -> None: 55 | self.steps = steps 56 | self.max_norm = max_norm 57 | self.quantize = quantize 58 | self.levels = levels 59 | self.callback = callback 60 | 61 | multiplier = 1 if targeted else -1 62 | 63 | # We keep the images under attack in memory using tf.Variable 64 | self.inputs = tf.Variable(np.zeros(batch_shape), dtype=tf.float32, name='inputs') 65 | self.labels = tf.Variable(np.zeros(batch_shape[0]), dtype=tf.int64, name='labels') 66 | self.assign_inputs = tf.placeholder(tf.float32, batch_shape) 67 | self.assign_labels = tf.placeholder(tf.int64, batch_shape[0]) 68 | self.setup = [self.inputs.assign(self.assign_inputs), 69 | self.labels.assign(self.assign_labels)] 70 | 71 | # Constraints on delta, such that the image remains in [0, 1] 72 | boxmin = 0 - self.inputs 73 | boxmax = 1 - self.inputs 74 | self.worst_norm = tf.norm(tf.layers.flatten(tf.maximum(self.inputs, 1 - self.inputs)), axis=1) 75 | 76 | # delta: the distortion (adversarial noise) 77 | delta = tf.Variable(np.zeros(batch_shape, dtype=np.float32), name='delta') 78 | 79 | # norm: the current \epsilon-ball around the inputs, on which the attacks are projected 80 | norm = tf.Variable(np.full(batch_shape[0], init_norm, dtype=np.float32), name='norm') 81 | self.mean_norm = tf.reduce_mean(norm) 82 | 83 | self.best_delta = tf.Variable(delta) 84 | 85 | adv_found = tf.Variable(np.full(batch_shape[0], 0, dtype=np.bool)) 86 | self.mean_adv_found = tf.reduce_mean(tf.to_float(adv_found)) 87 | 88 | self.best_l2 = tf.Variable(self.worst_norm) 89 | self.mean_best_l2 = tf.reduce_sum(self.best_l2 * tf.to_float(adv_found)) / tf.reduce_sum(tf.to_float(adv_found)) 90 | 91 | self.init = tf.variables_initializer(var_list=[delta, norm, self.best_l2, self.best_delta, adv_found]) 92 | 93 | # Forward propagation 94 | adv = self.inputs + delta 95 | logits = model(adv) 96 | pred_labels = tf.argmax(logits, 1) 97 | self.ce_loss = tf.losses.sparse_softmax_cross_entropy(labels=self.labels, logits=logits, 98 | 
reduction=tf.losses.Reduction.SUM) 99 | 100 | self.loss = multiplier * self.ce_loss 101 | if targeted: 102 | self.is_adv = tf.equal(pred_labels, self.labels) 103 | else: 104 | self.is_adv = tf.not_equal(pred_labels, self.labels) 105 | 106 | delta_flat = tf.layers.flatten(delta) 107 | l2 = tf.norm(delta_flat, axis=1) 108 | self.mean_l2 = tf.reduce_mean(l2) 109 | 110 | new_adv_found = tf.logical_or(self.is_adv, adv_found) 111 | self.update_adv_found = tf.assign(adv_found, new_adv_found) 112 | is_smaller = tf.less(l2, self.best_l2) 113 | is_both = tf.logical_and(self.is_adv, is_smaller) 114 | new_best_l2 = tf.where(is_both, l2, self.best_l2) 115 | self.update_best_l2 = tf.assign(self.best_l2, new_best_l2) 116 | new_best_delta = tf.where(is_both, delta, self.best_delta) 117 | self.update_best_delta = tf.assign(self.best_delta, new_best_delta) 118 | 119 | self.update_saved = tf.group(self.update_adv_found, self.update_best_l2, self.update_best_delta) 120 | 121 | # Expand or contract the norm depending on whether the current examples are adversarial 122 | new_norm = norm * (1 - (2 * tf.to_float(self.is_adv) - 1) * gamma) 123 | new_norm = tf.minimum(new_norm, self.worst_norm) 124 | 125 | self.step = tf.placeholder(tf.int32, name='step') 126 | 127 | lr = tf.train.cosine_decay(learning_rate=1., global_step=self.step, decay_steps=steps, alpha=0.01) 128 | self.lr = tf.reshape(lr, ()) # Tensorflow doesnt know its shape. 129 | 130 | # Compute the gradient and renorm it 131 | grad = tf.gradients(self.loss, delta)[0] 132 | grad_flat = tf.layers.flatten(grad) 133 | 134 | grad_norm_flat = tf.norm(grad_flat, axis=1) 135 | grad_norms = tf.reshape(grad_norm_flat, (-1, 1, 1, 1)) 136 | new_grad = grad / grad_norms 137 | 138 | # Corner case: if gradient is zero, take a random direction 139 | is_grad_zero = tf.equal(grad_norm_flat, 0) 140 | random_values = tf.random_normal(batch_shape) 141 | 142 | grad_without_zeros = tf.where(is_grad_zero, random_values, new_grad) 143 | grad_without_zeros_flat = tf.layers.flatten(grad_without_zeros) 144 | 145 | # Take a step in the gradient direction 146 | new_delta = delta - self.lr * grad_without_zeros 147 | 148 | new_l2 = tf.norm(tf.layers.flatten(new_delta), axis=1) 149 | normer = tf.reshape(new_norm / new_l2, (-1, 1, 1, 1)) 150 | new_delta = new_delta * normer 151 | 152 | if quantize: 153 | # Quantize delta (e.g. such that the resulting image has 256 values) 154 | new_delta = quantization(new_delta, levels) 155 | 156 | # Ensure delta is on the valid range 157 | new_delta = tf.clip_by_value(new_delta, boxmin, boxmax) 158 | self.update_delta = tf.assign(delta, new_delta) 159 | self.update_norm = tf.assign(norm, new_norm) 160 | 161 | # Update operation (updates both delta and the norm) 162 | self.update_op = tf.group(self.update_delta, self.update_norm) 163 | 164 | # Cosine between self.delta and new grad 165 | self.cosine = cosine_distance(-delta_flat, grad_without_zeros_flat) 166 | 167 | # Renorm if max-norm is provided 168 | if max_norm: 169 | best_delta_flat = tf.layers.flatten(self.best_delta) 170 | best_delta_renormed = tf.clip_by_norm(best_delta_flat, max_norm, axes=1) 171 | if quantize: 172 | best_delta_renormed = quantization(best_delta_renormed, levels) 173 | self.best_delta_renormed = tf.reshape(best_delta_renormed, batch_shape) 174 | 175 | def attack(self, sess: tf.Session, inputs: np.ndarray, 176 | labels: np.ndarray) -> np.ndarray: 177 | """ 178 | Performs the attack of the model for the inputs and labels. 
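The attack graph is built once in __init__; this method only assigns the
batch to the input variables and repeatedly runs the update ops.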
179 | 
180 |         Parameters
181 |         ----------
182 |         sess : tf.Session
183 |             TensorFlow session.
184 |         inputs : np.ndarray
185 |             Batch of samples to attack. Values should be in the [0, 1] range.
186 |         labels : np.ndarray
187 |             Labels of the samples to attack if untargeted,
188 |             else labels of targets.
189 | 
190 |         Returns
191 |         -------
192 |         np.ndarray
193 |             Batch of samples modified to be adversarial to the model.
194 | 
195 |         """
196 |         if inputs.min() < 0 or inputs.max() > 1:
197 |             raise ValueError('Input values should be in the [0, 1] range.')
198 | 
199 |         sess.run(self.setup, feed_dict={self.assign_inputs: inputs, self.assign_labels: labels})
200 |         sess.run(self.init)
201 |         for i in range(self.steps):
202 |             # Runs one step and collects statistics
203 |             if self.callback:
204 |                 results = sess.run([self.ce_loss, self.mean_l2, self.mean_norm, self.cosine, self.update_saved])
205 |                 loss, l2, norm, cosine, _ = results
206 |                 best_l2, adv_found = sess.run([self.mean_best_l2, self.mean_adv_found])
207 |             else:
208 |                 sess.run(self.update_saved)
209 | 
210 |             lr, _ = sess.run([self.lr, self.update_op], feed_dict={self.step: i})
211 | 
212 |             if self.callback:
213 |                 self.callback.scalar('ce', i, loss / len(inputs))
214 |                 self.callback.scalars(['max_norm', 'l2', 'best_l2'], i,
215 |                                       [norm, l2, best_l2 if adv_found else norm])
216 |                 self.callback.scalars(['cosine', 'lr', 'success'], i, [cosine, lr, adv_found])
217 | 
218 |         if self.max_norm:
219 |             best_delta = sess.run(self.best_delta_renormed)
220 |         else:
221 |             best_delta = sess.run(self.best_delta)
222 | 
223 |         return inputs + best_delta
224 | 
--------------------------------------------------------------------------------
/fast_adv/attacks/deepfool.py:
--------------------------------------------------------------------------------
1 | import warnings
2 | import tqdm
3 | import foolbox
4 | import torch
5 | import torch.nn as nn
6 | 
7 | 
8 | class DeepFool:
9 |     """
10 |     Wrapper for the DeepFool attack, using the implementation from foolbox.
11 | 
12 |     Parameters
13 |     ----------
14 |     num_classes : int
15 |         Number of classes of the model.
16 |     max_iter : int, optional
17 |         Number of steps for the attack.
18 |     subsample : int, optional
19 |         Limit on the number of the most likely classes that should be considered.
20 |     device : torch.device, optional
21 |         Device on which to perform the attack.
22 | 
23 |     """
24 | 
25 |     def __init__(self,
26 |                  num_classes: int = 10,
27 |                  max_iter: int = 100,
28 |                  subsample: int = 10,
29 |                  device: torch.device = torch.device('cpu')) -> None:
30 |         self.num_classes = num_classes
31 |         self.max_iter = max_iter
32 |         self.subsample = subsample
33 |         self.device = device
34 | 
35 |     def attack(self, model: nn.Module, inputs: torch.Tensor, labels: torch.Tensor,
36 |                targeted: bool = False) -> torch.Tensor:
37 |         """
38 |         Performs the attack of the model for the inputs and labels.
39 | 
40 |         Parameters
41 |         ----------
42 |         model : nn.Module
43 |             Model to attack.
44 |         inputs : torch.Tensor
45 |             Batch of samples to attack. Values should be in the [0, 1] range.
46 |         labels : torch.Tensor
47 |             Labels of the samples to attack if untargeted, else labels of targets.
48 |         targeted : bool, optional
49 |             Whether to perform a targeted attack or not.
50 | 
51 |         Returns
52 |         -------
53 |         torch.Tensor
54 |             Batch of samples modified to be adversarial to the model.
55 | 
56 |         """
57 |         if inputs.min() < 0 or inputs.max() > 1:
58 |             raise ValueError('Input values should be in the [0, 1] range.')
59 |         if targeted:
60 |             print('DeepFool is an untargeted adversarial attack. Returning clean inputs.')
61 |             return inputs
62 | 
63 |         fmodel = foolbox.models.PyTorchModel(model, bounds=(0, 1), num_classes=self.num_classes, device=self.device)
64 |         attack = foolbox.attacks.DeepFoolL2Attack(model=fmodel)
65 | 
66 |         numpy_inputs = inputs.cpu().numpy()
67 |         numpy_labels = labels.cpu().numpy()
68 |         batch_size = len(inputs)
69 |         adversarials = numpy_inputs.copy()
70 | 
71 |         warnings.filterwarnings('ignore', category=UserWarning)
72 |         for i in tqdm.tqdm(range(batch_size), ncols=80):
73 |             adv = attack(numpy_inputs[i], numpy_labels[i], unpack=True, steps=self.max_iter, subsample=self.subsample)
74 |             if adv is not None:
75 |                 adversarials[i] = adv
76 |         warnings.resetwarnings()
77 | 
78 |         adversarials = torch.from_numpy(adversarials).to(self.device)
79 | 
80 |         return adversarials
81 | 
--------------------------------------------------------------------------------
/fast_adv/attacks/deepfool_tf.py:
--------------------------------------------------------------------------------
1 | import warnings
2 | import foolbox
3 | import numpy as np
4 | import tqdm
5 | 
6 | 
7 | class DeepFoolTF:
8 |     """
9 |     Wrapper for the DeepFool attack in TensorFlow, using the implementation from foolbox.
10 | 
11 |     Parameters
12 |     ----------
13 |     input, logits : tf.Tensor
14 |         Input placeholder of the model and its pre-softmax output (logits).
15 |     num_classes : int
16 |         Number of classes of the model.
17 |     max_iter : int, optional
18 |         Number of steps for the attack.
19 |     subsample : int, optional
20 |         Limit on the number of the most likely classes that should be considered.
21 | 
22 |     """
23 | 
24 |     def __init__(self,
25 |                  input,
26 |                  logits,
27 |                  num_classes: int = 10,
28 |                  max_iter: int = 100,
29 |                  subsample: int = 10) -> None:
30 |         self.input = input
31 |         self.logits = logits
32 |         self.num_classes = num_classes
33 |         self.max_iter = max_iter
34 |         self.subsample = subsample
35 | 
36 |     def attack(self, inputs: np.ndarray, labels: np.ndarray) -> np.ndarray:
37 |         """
38 |         Performs the attack of the model for the inputs and labels.
39 | 
40 |         Parameters
41 |         ----------
42 |         inputs : np.ndarray
43 |             Batch of samples to attack. Values should be in the [0, 1] range.
44 |         labels : np.ndarray
45 |             Labels of the samples to attack. Like the PyTorch wrapper above,
46 |             this implementation of DeepFool is untargeted only, so these are
47 |             always the true labels.
48 | 
49 |         Returns
50 |         -------
51 |         np.ndarray
52 |             Batch of samples modified to be adversarial to the model.
53 | 54 | """ 55 | if inputs.min() < 0 or inputs.max() > 1: 56 | raise ValueError('Input values should be in the [0, 1] range.') 57 | 58 | fmodel = foolbox.models.TensorFlowModel(self.input, self.logits, bounds=(0, 1)) 59 | attack = foolbox.attacks.DeepFoolL2Attack(model=fmodel) 60 | 61 | batch_size = len(inputs) 62 | adversarials = inputs.copy() 63 | 64 | warnings.filterwarnings('ignore', category=UserWarning) 65 | for i in tqdm.tqdm(range(batch_size), ncols=80): 66 | adv = attack(inputs[i], labels[i], unpack=True, steps=self.max_iter, subsample=self.subsample) 67 | if adv is not None: 68 | adversarials[i] = adv 69 | warnings.resetwarnings() 70 | 71 | return adversarials 72 | -------------------------------------------------------------------------------- /fast_adv/defenses/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jeromerony/fast_adversarial/45210b7c79e2deaeac9845d6c901dc2580d6e316/fast_adv/defenses/__init__.py -------------------------------------------------------------------------------- /fast_adv/defenses/cifar10.py: -------------------------------------------------------------------------------- 1 | import os 2 | import argparse 3 | import tqdm 4 | from copy import deepcopy 5 | 6 | import torch 7 | import torch.nn.functional as F 8 | from torch.utils import data 9 | from torch.optim import SGD, lr_scheduler 10 | from torch.backends import cudnn 11 | 12 | from torchvision import transforms 13 | from torchvision.datasets import CIFAR10 14 | 15 | from fast_adv.models.cifar10 import wide_resnet 16 | from fast_adv.utils import AverageMeter, save_checkpoint, requires_grad_, NormalizedModel, VisdomLogger 17 | from fast_adv.attacks import DDN 18 | 19 | parser = argparse.ArgumentParser(description='CIFAR10 Training against DDN Attack') 20 | 21 | parser.add_argument('--data', default='data/cifar10', help='path to dataset') 22 | parser.add_argument('--workers', default=2, type=int, help='number of data loading workers') 23 | parser.add_argument('--cpu', action='store_true', help='force training on cpu') 24 | parser.add_argument('--save-folder', '--sf', default='', help='folder to save state dicts') 25 | parser.add_argument('--save-freq', '--sfr', default=10, type=int, help='save frequency') 26 | parser.add_argument('--save-name', '--sn', default='cifar10', help='name for saving the final state dict') 27 | 28 | parser.add_argument('--batch-size', '-b', default=128, type=int, help='mini-batch size') 29 | parser.add_argument('--epochs', '-e', default=200, type=int, help='number of total epochs to run') 30 | parser.add_argument('--lr', '--learning-rate', default=0.1, type=float, help='initial learning rate') 31 | parser.add_argument('--lr-decay', '--lrd', default=0.2, type=float, help='decay for learning rate') 32 | parser.add_argument('--lr-step', '--lrs', default=10, type=int, help='step size for learning rate decay') 33 | parser.add_argument('--momentum', default=0.9, type=float, help='momentum') 34 | parser.add_argument('--weight-decay', '--wd', default=5e-4, type=float, help='weight decay') 35 | parser.add_argument('--drop', default=0.3, type=float, help='dropout rate of the classifier') 36 | 37 | parser.add_argument('--adv', type=int, help='epoch to start training with adversarial images') 38 | parser.add_argument('--max-norm', type=float, help='max norm for the adversarial perturbations') 39 | parser.add_argument('--steps', default=100, type=int, help='number of steps for the attack') 40 | 41 | 
parser.add_argument('--visdom-port', '--vp', type=int, help='For visualization, which port visdom is running.') 42 | parser.add_argument('--print-freq', '--pf', default=10, type=int, help='print frequency') 43 | 44 | args = parser.parse_args() 45 | print(args) 46 | if args.lr_step is None: args.lr_step = args.epochs 47 | 48 | DEVICE = torch.device('cuda:0' if (torch.cuda.is_available() and not args.cpu) else 'cpu') 49 | CALLBACK = VisdomLogger(port=args.visdom_port) if args.visdom_port else None 50 | 51 | if not os.path.exists(args.save_folder) and args.save_folder: 52 | os.makedirs(args.save_folder) 53 | 54 | image_mean = torch.tensor([0.491, 0.482, 0.447]).view(1, 3, 1, 1) 55 | image_std = torch.tensor([0.247, 0.243, 0.262]).view(1, 3, 1, 1) 56 | 57 | train_transform = transforms.Compose([ 58 | transforms.RandomCrop(32, padding=4), 59 | transforms.RandomHorizontalFlip(), 60 | transforms.ToTensor(), 61 | ]) 62 | 63 | test_transform = transforms.Compose([ 64 | transforms.ToTensor(), 65 | ]) 66 | 67 | train_set = data.Subset(CIFAR10(args.data, train=True, transform=train_transform, download=True), list(range(45000))) 68 | val_set = data.Subset(CIFAR10(args.data, train=True, transform=test_transform, download=True), 69 | list(range(45000, 50000))) 70 | test_set = CIFAR10(args.data, train=False, transform=test_transform, download=True) 71 | 72 | train_loader = data.DataLoader(train_set, batch_size=args.batch_size, shuffle=True, num_workers=args.workers, 73 | drop_last=True, pin_memory=True) 74 | val_loader = data.DataLoader(val_set, batch_size=100, shuffle=True, num_workers=args.workers, pin_memory=True) 75 | test_loader = data.DataLoader(test_set, batch_size=100, shuffle=True, num_workers=args.workers, pin_memory=True) 76 | 77 | m = wide_resnet(num_classes=10, depth=28, widen_factor=10, dropRate=args.drop) 78 | model = NormalizedModel(model=m, mean=image_mean, std=image_std).to(DEVICE) # keep images in the [0, 1] range 79 | if torch.cuda.device_count() > 1: 80 | model = torch.nn.DataParallel(model) 81 | 82 | optimizer = SGD(model.parameters(), lr=args.lr, momentum=args.momentum, weight_decay=args.weight_decay) 83 | if args.adv == 0: 84 | scheduler = lr_scheduler.StepLR(optimizer, step_size=args.lr_step, gamma=args.lr_decay) 85 | else: 86 | scheduler = lr_scheduler.MultiStepLR(optimizer, milestones=[60, 120, 160], gamma=0.2) 87 | 88 | attacker = DDN(steps=args.steps, device=DEVICE) 89 | 90 | max_loss = torch.log(torch.tensor(10.)).item() # for callback 91 | best_acc = 0 92 | best_epoch = 0 93 | 94 | for epoch in range(args.epochs): 95 | scheduler.step() 96 | cudnn.benchmark = True 97 | model.train() 98 | requires_grad_(m, True) 99 | accs = AverageMeter() 100 | losses = AverageMeter() 101 | attack_norms = AverageMeter() 102 | 103 | length = len(train_loader) 104 | for i, (images, labels) in enumerate(tqdm.tqdm(train_loader, ncols=80)): 105 | images, labels = images.to(DEVICE), labels.to(DEVICE) 106 | 107 | if args.adv is not None and epoch >= args.adv: 108 | model.eval() 109 | requires_grad_(m, False) 110 | adv = attacker.attack(model, images, labels) 111 | l2_norms = (adv - images).view(args.batch_size, -1).norm(2, 1) 112 | mean_norm = l2_norms.mean() 113 | if args.max_norm: 114 | adv = torch.renorm(adv - images, p=2, dim=0, maxnorm=args.max_norm) + images 115 | attack_norms.append(mean_norm.item()) 116 | requires_grad_(m, True) 117 | model.train() 118 | logits = model(adv.detach()) 119 | else: 120 | logits = model(images) 121 | 122 | loss = F.cross_entropy(logits, labels) 123 | 
optimizer.zero_grad() 124 | loss.backward() 125 | optimizer.step() 126 | 127 | accs.append((logits.argmax(1) == labels).float().mean().item()) 128 | losses.append(loss.item()) 129 | 130 | if CALLBACK and not ((i + 1) % args.print_freq): 131 | CALLBACK.scalar('Tr_Loss', epoch + i / length, min(losses.last_avg, max_loss)) 132 | CALLBACK.scalar('Tr_Acc', epoch + i / length, accs.last_avg) 133 | if args.adv is not None and epoch >= args.adv: 134 | CALLBACK.scalar('L2', epoch + i / length, attack_norms.last_avg) 135 | 136 | print('Epoch {} | Training | Loss: {:.4f}, Accs: {:.4f}'.format(epoch, losses.avg, accs.avg)) 137 | 138 | cudnn.benchmark = False 139 | model.eval() 140 | requires_grad_(m, False) 141 | val_accs = AverageMeter() 142 | val_losses = AverageMeter() 143 | 144 | with torch.no_grad(): 145 | for i, (images, labels) in enumerate(tqdm.tqdm(val_loader, ncols=80)): 146 | images, labels = images.to(DEVICE), labels.to(DEVICE) 147 | 148 | logits = model(images) 149 | loss = F.cross_entropy(logits, labels) 150 | 151 | val_accs.append((logits.argmax(1) == labels).float().mean().item()) 152 | val_losses.append(loss.item()) 153 | 154 | if CALLBACK: 155 | CALLBACK.scalar('Val_Loss', epoch + 1, val_losses.last_avg) 156 | CALLBACK.scalar('Val_Acc', epoch + 1, val_accs.last_avg) 157 | 158 | print('Epoch {} | Validation | Loss: {:.4f}, Accs: {:.4f}'.format(epoch, val_losses.avg, val_accs.avg)) 159 | 160 | if args.adv is None and val_accs.avg >= best_acc: 161 | best_acc = val_accs.avg 162 | best_epoch = epoch 163 | best_dict = deepcopy(model.state_dict()) 164 | 165 | if not (epoch + 1) % args.save_freq: 166 | save_checkpoint( 167 | model.state_dict(), os.path.join(args.save_folder, args.save_name + '_{}.pth'.format(epoch + 1)), cpu=True) 168 | 169 | if args.adv is None: 170 | model.load_state_dict(best_dict) 171 | 172 | test_accs = AverageMeter() 173 | test_losses = AverageMeter() 174 | 175 | with torch.no_grad(): 176 | for i, (images, labels) in enumerate(tqdm.tqdm(test_loader, ncols=80)): 177 | images, labels = images.to(DEVICE), labels.to(DEVICE) 178 | 179 | logits = model(images) 180 | loss = F.cross_entropy(logits, labels) 181 | 182 | test_accs.append((logits.argmax(1) == labels).float().mean().item()) 183 | test_losses.append(loss.item()) 184 | 185 | if args.adv is not None: 186 | print('\nTest accuracy with final model: {:.4f} with loss: {:.4f}'.format(test_accs.avg, test_losses.avg)) 187 | else: 188 | print('\nTest accuracy with model from epoch {}: {:.4f} with loss: {:.4f}'.format(best_epoch, 189 | test_accs.avg, test_losses.avg)) 190 | 191 | print('\nSaving model...') 192 | save_checkpoint(model.state_dict(), os.path.join(args.save_folder, args.save_name + '.pth'), cpu=True) 193 | -------------------------------------------------------------------------------- /fast_adv/defenses/cifar10_small.py: -------------------------------------------------------------------------------- 1 | import os 2 | import argparse 3 | import tqdm 4 | from copy import deepcopy 5 | 6 | import torch 7 | import torch.nn.functional as F 8 | from torch.utils import data 9 | from torch.optim import SGD, lr_scheduler 10 | from torch.backends import cudnn 11 | 12 | from torchvision import transforms 13 | from torchvision.datasets import CIFAR10 14 | 15 | from fast_adv.models.cifar10 import SmallCNN 16 | from fast_adv.utils import AverageMeter, save_checkpoint, VisdomLogger 17 | 18 | parser = argparse.ArgumentParser(description='CIFAR10 Training against DDN Attack') 19 | 20 | parser.add_argument('--data', 
default='data/cifar10', help='path to dataset') 21 | parser.add_argument('--workers', default=2, type=int, help='number of data loading workers') 22 | parser.add_argument('--cpu', dest='cpu', action='store_true', help='force training on cpu') 23 | parser.add_argument('--save-folder', '--sf', default='', help='folder to save state dicts') 24 | parser.add_argument('--save-freq', '--sfr', default=10, type=int, help='save frequency') 25 | parser.add_argument('--save-name', '--sn', default='cifar10', help='name for saving the best state dict') 26 | 27 | parser.add_argument('--batch-size', '-b', default=128, type=int, help='mini-batch size') 28 | parser.add_argument('--epochs', '-e', default=50, type=int, help='number of total epochs to run') 29 | parser.add_argument('--lr', '--learning-rate', default=0.01, type=float, help='initial learning rate') 30 | parser.add_argument('--lr-decay', '--lrd', default=0.1, type=float, help='decay for learning rate') 31 | parser.add_argument('--lr-step', '--lrs', default=30, type=int, help='step for learning rate decay') 32 | parser.add_argument('--momentum', default=0.9, type=float, help='momentum') 33 | parser.add_argument('--weight-decay', '--wd', default=1e-4, type=float, help='weight decay') 34 | parser.add_argument('--drop', default=0.5, type=float, help='dropout rate of the classifier') 35 | 36 | parser.add_argument('--visdom-port', '--vp', type=int, help='For visualization, which port visdom is running.') 37 | parser.add_argument('--print-freq', '--pf', default=10, type=int, help='print frequency') 38 | 39 | args = parser.parse_args() 40 | print(args) 41 | if args.lr_step is None: args.lr_step = args.epochs 42 | 43 | DEVICE = torch.device('cuda:0' if (torch.cuda.is_available() and not args.cpu) else 'cpu') 44 | CALLBACK = VisdomLogger(port=args.visdom_port) if args.visdom_port else None 45 | 46 | if not os.path.exists(args.save_folder) and args.save_folder: 47 | os.makedirs(args.save_folder) 48 | 49 | train_transform = transforms.Compose([ 50 | transforms.RandomHorizontalFlip(), 51 | transforms.RandomGrayscale(p=0.05), 52 | transforms.RandomAffine(0, translate=(0.1, 0.1)), 53 | transforms.ToTensor(), 54 | ]) 55 | 56 | test_transform = transforms.Compose([ 57 | transforms.ToTensor(), 58 | ]) 59 | 60 | train_set = data.Subset(CIFAR10(args.data, train=True, transform=train_transform, download=True), list(range(45000))) 61 | val_set = data.Subset(CIFAR10(args.data, train=True, transform=test_transform, download=True), 62 | list(range(45000, 50000))) 63 | test_set = CIFAR10(args.data, train=False, transform=test_transform, download=True) 64 | 65 | train_loader = data.DataLoader(train_set, batch_size=args.batch_size, shuffle=True, num_workers=args.workers, 66 | drop_last=True, pin_memory=True) 67 | val_loader = data.DataLoader(val_set, batch_size=100, shuffle=True, num_workers=args.workers, pin_memory=True) 68 | test_loader = data.DataLoader(test_set, batch_size=100, shuffle=True, num_workers=args.workers, pin_memory=True) 69 | 70 | model = SmallCNN(drop=args.drop).to(DEVICE) 71 | if torch.cuda.device_count() > 1: 72 | model = torch.nn.DataParallel(model) 73 | 74 | optimizer = SGD(model.parameters(), lr=args.lr, momentum=args.momentum, weight_decay=args.weight_decay, 75 | nesterov=True if args.momentum else False) 76 | scheduler = lr_scheduler.StepLR(optimizer, step_size=args.lr_step, gamma=args.lr_decay) 77 | 78 | max_loss = torch.log(torch.tensor(10.)).item() 79 | best_acc = 0 80 | best_epoch = 0 81 | 82 | for epoch in range(args.epochs): 83 | 
scheduler.step() 84 | cudnn.benchmark = True 85 | model.train() 86 | accs = AverageMeter() 87 | losses = AverageMeter() 88 | attack_norms = AverageMeter() 89 | 90 | length = len(train_loader) 91 | for i, (images, labels) in enumerate(tqdm.tqdm(train_loader, ncols=80)): 92 | images, labels = images.to(DEVICE), labels.to(DEVICE) 93 | 94 | logits = model(images) 95 | loss = F.cross_entropy(logits, labels) 96 | optimizer.zero_grad() 97 | loss.backward() 98 | optimizer.step() 99 | 100 | accs.append((logits.argmax(1) == labels).float().mean().item()) 101 | losses.append(loss.item()) 102 | 103 | if CALLBACK and not ((i + 1) % args.print_freq): 104 | CALLBACK.scalar('Tr_Loss', epoch + i / length, min(losses.last_avg, max_loss)) 105 | CALLBACK.scalar('Tr_Acc', epoch + i / length, accs.last_avg) 106 | 107 | print('Epoch {} | Training | Loss: {:.4f}, Accs: {:.4f}'.format(epoch, losses.avg, accs.avg)) 108 | 109 | cudnn.benchmark = False 110 | model.eval() 111 | val_accs = AverageMeter() 112 | val_losses = AverageMeter() 113 | 114 | with torch.no_grad(): 115 | for i, (images, labels) in enumerate(tqdm.tqdm(val_loader, ncols=80)): 116 | images, labels = images.to(DEVICE), labels.to(DEVICE) 117 | 118 | logits = model(images) 119 | loss = F.cross_entropy(logits, labels) 120 | 121 | val_accs.append((logits.argmax(1) == labels).float().mean().item()) 122 | val_losses.append(loss.item()) 123 | 124 | if CALLBACK: 125 | CALLBACK.scalar('Val_Loss', epoch + 1, val_losses.last_avg) 126 | CALLBACK.scalar('Val_Acc', epoch + 1, val_accs.last_avg) 127 | 128 | print('Epoch {} | Validation | Loss: {:.4f}, Accs: {:.4f}'.format(epoch, val_losses.avg, val_accs.avg)) 129 | 130 | if val_accs.avg >= best_acc: 131 | best_acc = val_accs.avg 132 | best_dict = deepcopy(model.state_dict()) 133 | best_epoch = epoch 134 | 135 | model.load_state_dict(best_dict) 136 | 137 | test_accs = AverageMeter() 138 | test_losses = AverageMeter() 139 | 140 | with torch.no_grad(): 141 | for i, (images, labels) in enumerate(tqdm.tqdm(test_loader, ncols=80)): 142 | images, labels = images.to(DEVICE), labels.to(DEVICE) 143 | 144 | logits = model(images) 145 | loss = F.cross_entropy(logits, labels) 146 | 147 | test_accs.append((logits.argmax(1) == labels).float().mean().item()) 148 | test_losses.append(loss.item()) 149 | 150 | print('\nTest accuracy with model from epoch {}: {:.4f} with loss: {:.4f}'.format(best_epoch, test_accs.avg, 151 | test_losses.avg)) 152 | print('\nSaving best model...') 153 | save_checkpoint(best_dict, os.path.join(args.save_folder, args.save_name + '.pth'), cpu=True) 154 | -------------------------------------------------------------------------------- /fast_adv/defenses/mnist.py: -------------------------------------------------------------------------------- 1 | import os 2 | import argparse 3 | import tqdm 4 | from copy import deepcopy 5 | 6 | import torch 7 | import torch.nn.functional as F 8 | from torch.utils import data 9 | from torch.optim import SGD, lr_scheduler 10 | from torch.backends import cudnn 11 | 12 | from torchvision import transforms 13 | from torchvision.datasets import MNIST 14 | 15 | from fast_adv.models.mnist import SmallCNN 16 | from fast_adv.utils import AverageMeter, save_checkpoint, requires_grad_, VisdomLogger 17 | from fast_adv.attacks import DDN 18 | 19 | parser = argparse.ArgumentParser(description='MNIST Training against DDN Attack') 20 | 21 | parser.add_argument('--data', default='data/mnist', help='path to dataset') 22 | parser.add_argument('--workers', default=2, type=int, help='number of 
data loading workers') 23 | parser.add_argument('--cpu', dest='cpu', action='store_true', help='force training on cpu') 24 | parser.add_argument('--save-folder', '--sf', default='', help='folder to save state dicts') 25 | parser.add_argument('--save-name', '--sn', default='mnist', help='name for saving the final state dict') 26 | parser.add_argument('--save-freq', '--sfr', type=int, help='save frequency') 27 | 28 | parser.add_argument('--batch-size', '-b', default=128, type=int, help='mini-batch size') 29 | parser.add_argument('--epochs', '-e', default=50, type=int, help='number of total epochs to run') 30 | parser.add_argument('--lr', '--learning-rate', default=0.1, type=float, help='initial learning rate') 31 | parser.add_argument('--lr-decay', '--lrd', default=0.1, type=float, help='decay for learning rate') 32 | parser.add_argument('--lr-step', '--lrs', type=int, help='step size for learning rate decay') 33 | parser.add_argument('--momentum', default=0.9, type=float, help='momentum') 34 | parser.add_argument('--weight-decay', '--wd', default=1e-6, type=float, help='weight decay') 35 | parser.add_argument('--drop', default=0.5, type=float, help='dropout rate of the classifier') 36 | 37 | parser.add_argument('--adv', type=int, help='epoch to start training with adversarial images') 38 | parser.add_argument('--max-norm', type=float, help='max norm for the adversarial perturbations') 39 | parser.add_argument('--steps', default=100, type=int, help='number of steps for the attack') 40 | 41 | parser.add_argument('--visdom-port', '--vp', type=int, help='For visualization, which port visdom is running.') 42 | parser.add_argument('--print-freq', '--pf', default=10, type=int, help='print frequency') 43 | 44 | args = parser.parse_args() 45 | print(args) 46 | if args.lr_step is None: args.lr_step = args.epochs 47 | 48 | DEVICE = torch.device('cuda:0' if (torch.cuda.is_available() and not args.cpu) else 'cpu') 49 | CALLBACK = VisdomLogger(port=args.visdom_port) if args.visdom_port else None 50 | 51 | if not os.path.exists(args.save_folder) and args.save_folder: 52 | os.makedirs(args.save_folder) 53 | 54 | transform = transforms.Compose([transforms.ToTensor()]) 55 | 56 | train_data = MNIST(args.data, train=True, transform=transform, download=True) 57 | train_set = data.Subset(train_data, list(range(55000))) 58 | val_set = data.Subset(train_data, list(range(55000, 60000))) 59 | test_set = MNIST(args.data, train=False, transform=transform, download=True) 60 | 61 | train_loader = data.DataLoader(train_set, batch_size=args.batch_size, shuffle=True, num_workers=args.workers, 62 | drop_last=True, pin_memory=True) 63 | val_loader = data.DataLoader(val_set, batch_size=100, shuffle=True, num_workers=args.workers, pin_memory=True) 64 | test_loader = data.DataLoader(test_set, batch_size=100, shuffle=True, num_workers=args.workers, pin_memory=True) 65 | 66 | model = SmallCNN(drop=args.drop).to(DEVICE) 67 | if torch.cuda.device_count() > 1: 68 | model = torch.nn.DataParallel(model) 69 | 70 | optimizer = SGD(model.parameters(), lr=args.lr, momentum=args.momentum, weight_decay=args.weight_decay) 71 | scheduler = lr_scheduler.StepLR(optimizer, step_size=args.lr_step, gamma=args.lr_decay) 72 | 73 | attacker = DDN(steps=args.steps, device=DEVICE) 74 | 75 | best_acc = 0 76 | best_epoch = 0 77 | 78 | for epoch in range(args.epochs): 79 | cudnn.benchmark = True 80 | model.train() 81 | requires_grad_(model, True) 82 | accs = AverageMeter() 83 | losses = AverageMeter() 84 | attack_norms = AverageMeter() 85 | 86 | 
scheduler.step() 87 | length = len(train_loader) 88 | for i, (images, labels) in enumerate(tqdm.tqdm(train_loader, ncols=80)): 89 | images, labels = images.to(DEVICE), labels.to(DEVICE) 90 | 91 | if args.adv is not None and epoch >= args.adv: 92 | model.eval() 93 | requires_grad_(model, False) 94 | with torch.no_grad(): 95 | accs.append((model(images).argmax(1) == labels).float().mean().item()) 96 | adv = attacker.attack(model, images, labels) 97 | l2_norms = (adv - images).view(args.batch_size, -1).norm(2, 1) 98 | mean_norm = l2_norms.mean() 99 | if args.max_norm: 100 | adv = torch.renorm(adv - images, p=2, dim=0, maxnorm=args.max_norm) + images 101 | attack_norms.append(mean_norm.item()) 102 | requires_grad_(model, True) 103 | model.train() 104 | logits = model(adv.detach()) 105 | else: 106 | logits = model(images) 107 | accs.append((logits.argmax(1) == labels).float().mean().item()) 108 | 109 | loss = F.cross_entropy(logits, labels) 110 | optimizer.zero_grad() 111 | loss.backward() 112 | optimizer.step() 113 | 114 | losses.append(loss.item()) 115 | 116 | if CALLBACK and not ((i + 1) % args.print_freq): 117 | CALLBACK.scalar('Tr_Loss', epoch + i / length, losses.last_avg) 118 | CALLBACK.scalar('Tr_Acc', epoch + i / length, accs.last_avg) 119 | if args.adv is not None and epoch >= args.adv: 120 | CALLBACK.scalar('L2', epoch + i / length, attack_norms.last_avg) 121 | 122 | print('Epoch {} | Training | Loss: {:.4f}, Accs: {:.4f}'.format(epoch, losses.avg, accs.avg)) 123 | 124 | cudnn.benchmark = False 125 | model.eval() 126 | requires_grad_(model, False) 127 | val_accs = AverageMeter() 128 | val_losses = AverageMeter() 129 | 130 | with torch.no_grad(): 131 | for i, (images, labels) in enumerate(tqdm.tqdm(val_loader, ncols=80)): 132 | images, labels = images.to(DEVICE), labels.to(DEVICE) 133 | 134 | logits = model(images) 135 | loss = F.cross_entropy(logits, labels) 136 | 137 | val_accs.append((logits.argmax(1) == labels).float().mean().item()) 138 | val_losses.append(loss.item()) 139 | 140 | if CALLBACK: 141 | CALLBACK.scalar('Val_Loss', epoch + 1, val_losses.last_avg) 142 | CALLBACK.scalar('Val_Acc', epoch + 1, val_accs.last_avg) 143 | 144 | print('Epoch {} | Validation | Loss: {:.4f}, Accs: {:.4f}'.format(epoch, val_losses.avg, val_accs.avg)) 145 | 146 | if args.adv is None and val_accs.avg >= best_acc: 147 | best_acc = val_accs.avg 148 | best_epoch = epoch 149 | best_dict = deepcopy(model.state_dict()) 150 | 151 | if args.save_freq and not (epoch + 1) % args.save_freq: 152 | save_checkpoint( 153 | model.state_dict(), os.path.join(args.save_folder, args.save_name + '_{}.pth'.format(epoch + 1)), cpu=True) 154 | 155 | if args.adv is None: 156 | model.load_state_dict(best_dict) 157 | 158 | test_accs = AverageMeter() 159 | test_losses = AverageMeter() 160 | 161 | with torch.no_grad(): 162 | for i, (images, labels) in enumerate(tqdm.tqdm(test_loader, ncols=80)): 163 | images, labels = images.to(DEVICE), labels.to(DEVICE) 164 | 165 | logits = model(images) 166 | loss = F.cross_entropy(logits, labels) 167 | 168 | test_accs.append((logits.argmax(1) == labels).float().mean().item()) 169 | test_losses.append(loss.item()) 170 | 171 | if args.adv is not None: 172 | print('\nTest accuracy with final model: {:.4f} with loss: {:.4f}'.format(test_accs.avg, test_losses.avg)) 173 | else: 174 | print('\nTest accuracy with model from epoch {}: {:.4f} with loss: {:.4f}'.format(best_epoch, test_accs.avg, 175 | test_losses.avg)) 176 | 177 | print('\nSaving model...') 178 | save_checkpoint(model.state_dict(), 
os.path.join(args.save_folder, args.save_name + '.pth'), cpu=True) 179 | -------------------------------------------------------------------------------- /fast_adv/models/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jeromerony/fast_adversarial/45210b7c79e2deaeac9845d6c901dc2580d6e316/fast_adv/models/__init__.py -------------------------------------------------------------------------------- /fast_adv/models/cifar10/__init__.py: -------------------------------------------------------------------------------- 1 | from .small_cnn import SmallCNN 2 | from .wide_resnet import wide_resnet 3 | 4 | __all__ = [ 5 | 'SmallCNN', 6 | 'wide_resnet', 7 | ] -------------------------------------------------------------------------------- /fast_adv/models/cifar10/madry_tf.py: -------------------------------------------------------------------------------- 1 | # Adapted from: 2 | # https://github.com/MadryLab/cifar10_challenge/blob/master/model.py 3 | # 4 | # based on https://github.com/tensorflow/models/tree/master/resnet 5 | from __future__ import absolute_import 6 | from __future__ import division 7 | from __future__ import print_function 8 | 9 | import numpy as np 10 | import tensorflow as tf 11 | 12 | 13 | class Model(object): 14 | """ResNet model.""" 15 | 16 | def __init__(self, mode): 17 | """ResNet constructor. 18 | 19 | Args: 20 | mode: One of 'train' and 'eval'. 21 | """ 22 | self.mode = mode 23 | self.image_size = 32 24 | self.num_channels = 3 25 | self.num_labels = 10 26 | 27 | self.built = False 28 | 29 | def predict(self, input: tf.placeholder): 30 | out = self._build_model(input) 31 | self.built = True 32 | return out 33 | 34 | def add_internal_summaries(self): 35 | pass 36 | 37 | def _stride_arr(self, stride): 38 | """Map a stride scalar to the stride array for tf.nn.conv2d.""" 39 | return [1, stride, stride, 1] 40 | 41 | def _build_model(self, input): 42 | assert self.mode == 'train' or self.mode == 'eval' 43 | """Build the core model within the graph.""" 44 | with tf.variable_scope('input', reuse=self.built): 45 | 46 | input_standardized = tf.map_fn(lambda img: tf.image.per_image_standardization(img), 47 | input) 48 | x = self._conv('init_conv', input_standardized, 3, 3, 16, self._stride_arr(1)) 49 | 50 | strides = [1, 2, 2] 51 | activate_before_residual = [True, False, False] 52 | res_func = self._residual 53 | 54 | # Uncomment the following codes to use w28-10 wide residual network. 55 | # It is more memory efficient than very deep residual network and has 56 | # comparably good performance. 
57 | # https://arxiv.org/pdf/1605.07146v1.pdf 58 | filters = [16, 160, 320, 640] 59 | 60 | 61 | # Update hps.num_residual_units to 9 62 | 63 | with tf.variable_scope('unit_1_0', reuse=self.built): 64 | x = res_func(x, filters[0], filters[1], self._stride_arr(strides[0]), 65 | activate_before_residual[0]) 66 | for i in range(1, 5): 67 | with tf.variable_scope('unit_1_%d' % i, reuse=self.built): 68 | x = res_func(x, filters[1], filters[1], self._stride_arr(1), False) 69 | 70 | with tf.variable_scope('unit_2_0', reuse=self.built): 71 | x = res_func(x, filters[1], filters[2], self._stride_arr(strides[1]), 72 | activate_before_residual[1]) 73 | for i in range(1, 5): 74 | with tf.variable_scope('unit_2_%d' % i, reuse=self.built): 75 | x = res_func(x, filters[2], filters[2], self._stride_arr(1), False) 76 | 77 | with tf.variable_scope('unit_3_0', reuse=self.built): 78 | x = res_func(x, filters[2], filters[3], self._stride_arr(strides[2]), 79 | activate_before_residual[2]) 80 | for i in range(1, 5): 81 | with tf.variable_scope('unit_3_%d' % i, reuse=self.built): 82 | x = res_func(x, filters[3], filters[3], self._stride_arr(1), False) 83 | 84 | with tf.variable_scope('unit_last', reuse=self.built): 85 | x = self._batch_norm('final_bn', x) 86 | x = self._relu(x, 0.1) 87 | x = self._global_avg_pool(x) 88 | 89 | with tf.variable_scope('logit', reuse=self.built): 90 | self.pre_softmax = self._fully_connected(x, 10) 91 | 92 | return self.pre_softmax 93 | 94 | def _batch_norm(self, name, x): 95 | """Batch normalization.""" 96 | with tf.name_scope(name): 97 | return tf.contrib.layers.batch_norm( 98 | inputs=x, 99 | decay=.9, 100 | center=True, 101 | scale=True, 102 | activation_fn=None, 103 | updates_collections=None, 104 | is_training=(self.mode == 'train')) 105 | 106 | def _residual(self, x, in_filter, out_filter, stride, 107 | activate_before_residual=False): 108 | """Residual unit with 2 sub layers.""" 109 | if activate_before_residual: 110 | with tf.variable_scope('shared_activation', reuse=self.built): 111 | x = self._batch_norm('init_bn', x) 112 | x = self._relu(x, 0.1) 113 | orig_x = x 114 | else: 115 | with tf.variable_scope('residual_only_activation', reuse=self.built): 116 | orig_x = x 117 | x = self._batch_norm('init_bn', x) 118 | x = self._relu(x, 0.1) 119 | 120 | with tf.variable_scope('sub1', reuse=self.built): 121 | x = self._conv('conv1', x, 3, in_filter, out_filter, stride) 122 | 123 | with tf.variable_scope('sub2', reuse=self.built): 124 | x = self._batch_norm('bn2', x) 125 | x = self._relu(x, 0.1) 126 | x = self._conv('conv2', x, 3, out_filter, out_filter, [1, 1, 1, 1]) 127 | 128 | with tf.variable_scope('sub_add', reuse=self.built): 129 | if in_filter != out_filter: 130 | orig_x = tf.nn.avg_pool(orig_x, stride, stride, 'VALID') 131 | orig_x = tf.pad( 132 | orig_x, [[0, 0], [0, 0], [0, 0], 133 | [(out_filter-in_filter)//2, (out_filter-in_filter)//2]]) 134 | x += orig_x 135 | 136 | tf.logging.debug('image after unit %s', x.get_shape()) 137 | return x 138 | 139 | def _decay(self): 140 | """L2 weight decay loss.""" 141 | costs = [] 142 | for var in tf.trainable_variables(): 143 | if var.op.name.find('DW') > 0: 144 | costs.append(tf.nn.l2_loss(var)) 145 | return tf.add_n(costs) 146 | 147 | def _conv(self, name, x, filter_size, in_filters, out_filters, strides): 148 | """Convolution.""" 149 | with tf.variable_scope(name, reuse=self.built): 150 | n = filter_size * filter_size * out_filters 151 | kernel = tf.get_variable( 152 | 'DW', [filter_size, filter_size, in_filters, out_filters], 153 | 
tf.float32, initializer=tf.random_normal_initializer( 154 | stddev=np.sqrt(2.0/n))) 155 | return tf.nn.conv2d(x, kernel, strides, padding='SAME') 156 | 157 | def _relu(self, x, leakiness=0.0): 158 | """Relu, with optional leaky support.""" 159 | return tf.where(tf.less(x, 0.0), leakiness * x, x, name='leaky_relu') 160 | 161 | def _fully_connected(self, x, out_dim): 162 | """FullyConnected layer for final output.""" 163 | num_non_batch_dimensions = len(x.shape) 164 | prod_non_batch_dimensions = 1 165 | for ii in range(num_non_batch_dimensions - 1): 166 | prod_non_batch_dimensions *= int(x.shape[ii + 1]) 167 | x = tf.reshape(x, [tf.shape(x)[0], -1]) 168 | w = tf.get_variable( 169 | 'DW', [prod_non_batch_dimensions, out_dim], 170 | initializer=tf.uniform_unit_scaling_initializer(factor=1.0)) 171 | b = tf.get_variable('biases', [out_dim], 172 | initializer=tf.constant_initializer()) 173 | return tf.nn.xw_plus_b(x, w, b) 174 | 175 | def _global_avg_pool(self, x): 176 | assert x.get_shape().ndims == 4 177 | return tf.reduce_mean(x, [1, 2]) 178 | -------------------------------------------------------------------------------- /fast_adv/models/cifar10/small_cnn.py: -------------------------------------------------------------------------------- 1 | from collections import OrderedDict 2 | import torch.nn as nn 3 | 4 | 5 | class SmallCNN(nn.Module): 6 | def __init__(self, drop=0.5): 7 | super(SmallCNN, self).__init__() 8 | 9 | self.num_channels = 3 10 | self.num_labels = 10 11 | 12 | activ = nn.ReLU(True) 13 | 14 | self.feature_extractor = nn.Sequential(OrderedDict([ 15 | ('conv1', nn.Conv2d(self.num_channels, 64, 3)), 16 | ('relu1', activ), 17 | ('conv2', nn.Conv2d(64, 64, 3)), 18 | ('relu2', activ), 19 | ('maxpool1', nn.MaxPool2d(2, 2)), 20 | ('conv3', nn.Conv2d(64, 128, 3)), 21 | ('relu3', activ), 22 | ('conv4', nn.Conv2d(128, 128, 3)), 23 | ('relu4', activ), 24 | ('maxpool2', nn.MaxPool2d(2, 2)), 25 | ])) 26 | 27 | self.classifier = nn.Sequential(OrderedDict([ 28 | ('fc1', nn.Linear(128 * 5 * 5, 256)), 29 | ('relu1', activ), 30 | ('drop', nn.Dropout(drop)), 31 | ('fc2', nn.Linear(256, 256)), 32 | ('relu2', activ), 33 | ('fc3', nn.Linear(256, self.num_labels)), 34 | ])) 35 | 36 | for m in self.modules(): 37 | if isinstance(m, (nn.Conv2d)): 38 | nn.init.kaiming_normal_(m.weight) 39 | if m.bias is not None: 40 | nn.init.constant_(m.bias, 0) 41 | elif isinstance(m, nn.BatchNorm2d): 42 | nn.init.constant_(m.weight, 1) 43 | nn.init.constant_(m.bias, 0) 44 | nn.init.constant_(self.classifier.fc3.weight, 0) 45 | nn.init.constant_(self.classifier.fc3.bias, 0) 46 | 47 | def forward(self, input): 48 | features = self.feature_extractor(input) 49 | logits = self.classifier(features.view(-1, 128 * 5 * 5)) 50 | return logits 51 | -------------------------------------------------------------------------------- /fast_adv/models/cifar10/wide_resnet.py: -------------------------------------------------------------------------------- 1 | """PyTorch implementation of Wide-ResNet taken from https://github.com/xternalz/WideResNet-pytorch""" 2 | 3 | import math 4 | import torch 5 | import torch.nn as nn 6 | import torch.nn.functional as F 7 | 8 | 9 | class BasicBlock(nn.Module): 10 | def __init__(self, in_planes, out_planes, stride, dropRate=0.0): 11 | super(BasicBlock, self).__init__() 12 | self.bn1 = nn.BatchNorm2d(in_planes) 13 | self.relu1 = nn.ReLU(inplace=True) 14 | self.conv1 = nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, 15 | padding=1, bias=False) 16 | self.bn2 = nn.BatchNorm2d(out_planes) 17 
| self.relu2 = nn.ReLU(inplace=True) 18 | self.conv2 = nn.Conv2d(out_planes, out_planes, kernel_size=3, stride=1, 19 | padding=1, bias=False) 20 | self.droprate = dropRate 21 | self.equalInOut = (in_planes == out_planes) 22 | self.convShortcut = (not self.equalInOut) and nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, 23 | padding=0, bias=False) or None 24 | 25 | def forward(self, x): 26 | if not self.equalInOut: 27 | x = self.relu1(self.bn1(x)) 28 | else: 29 | out = self.relu1(self.bn1(x)) 30 | out = self.relu2(self.bn2(self.conv1(out if self.equalInOut else x))) 31 | if self.droprate > 0: 32 | out = F.dropout(out, p=self.droprate, training=self.training) 33 | out = self.conv2(out) 34 | return torch.add(x if self.equalInOut else self.convShortcut(x), out) 35 | 36 | 37 | class NetworkBlock(nn.Module): 38 | def __init__(self, nb_layers, in_planes, out_planes, block, stride, dropRate=0.0): 39 | super(NetworkBlock, self).__init__() 40 | self.layer = self._make_layer(block, in_planes, out_planes, nb_layers, stride, dropRate) 41 | 42 | def _make_layer(self, block, in_planes, out_planes, nb_layers, stride, dropRate): 43 | layers = [] 44 | for i in range(nb_layers): 45 | layers.append(block(i == 0 and in_planes or out_planes, out_planes, i == 0 and stride or 1, dropRate)) 46 | return nn.Sequential(*layers) 47 | 48 | def forward(self, x): 49 | return self.layer(x) 50 | 51 | 52 | class WideResNet(nn.Module): 53 | def __init__(self, depth, num_classes, widen_factor=1, dropRate=0.0): 54 | super(WideResNet, self).__init__() 55 | nChannels = [16, 16 * widen_factor, 32 * widen_factor, 64 * widen_factor] 56 | assert (depth - 4) % 6 == 0, 'depth should be 6n+4' 57 | n = (depth - 4) // 6 58 | block = BasicBlock 59 | # 1st conv before any network block 60 | self.conv1 = nn.Conv2d(3, nChannels[0], kernel_size=3, stride=1, 61 | padding=1, bias=False) 62 | # 1st block 63 | self.block1 = NetworkBlock(n, nChannels[0], nChannels[1], block, 1, dropRate) 64 | # 2nd block 65 | self.block2 = NetworkBlock(n, nChannels[1], nChannels[2], block, 2, dropRate) 66 | # 3rd block 67 | self.block3 = NetworkBlock(n, nChannels[2], nChannels[3], block, 2, dropRate) 68 | # global average pooling and classifier 69 | self.bn1 = nn.BatchNorm2d(nChannels[3]) 70 | self.relu = nn.ReLU(inplace=True) 71 | self.fc = nn.Linear(nChannels[3], num_classes) 72 | self.nChannels = nChannels[3] 73 | 74 | for m in self.modules(): 75 | if isinstance(m, nn.Conv2d): 76 | n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels 77 | m.weight.data.normal_(0, math.sqrt(2. / n)) 78 | elif isinstance(m, nn.BatchNorm2d): 79 | m.weight.data.fill_(1) 80 | m.bias.data.zero_() 81 | elif isinstance(m, nn.Linear): 82 | m.bias.data.zero_() 83 | 84 | def forward(self, x): 85 | out = self.conv1(x) 86 | out = self.block1(out) 87 | out = self.block2(out) 88 | out = self.block3(out) 89 | out = self.relu(self.bn1(out)) 90 | out = F.avg_pool2d(out, 8) 91 | out = out.view(-1, self.nChannels) 92 | return self.fc(out) 93 | 94 | 95 | def wide_resnet(**kwargs): 96 | """ 97 | Constructs a Wide Residual Network.
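Keyword arguments are forwarded unchanged to WideResNet: depth (which must satisfy depth = 6n + 4, e.g. 28), num_classes, widen_factor (e.g. 10 for a WRN-28-10) and dropRate.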
98 | """ 99 | model = WideResNet(**kwargs) 100 | return model 101 | -------------------------------------------------------------------------------- /fast_adv/models/mnist/__init__.py: -------------------------------------------------------------------------------- 1 | from .small_cnn import SmallCNN 2 | 3 | __all__ = [ 4 | 'SmallCNN', 5 | ] -------------------------------------------------------------------------------- /fast_adv/models/mnist/madry_tf.py: -------------------------------------------------------------------------------- 1 | """ 2 | Madry model from: 3 | https://github.com/MadryLab/mnist_challenge/blob/master/model.py 4 | 5 | Modified to have the signature expected by the tensorflow implementation 6 | of the Carlini's L2 attack 7 | 8 | """ 9 | from __future__ import absolute_import 10 | from __future__ import division 11 | from __future__ import print_function 12 | 13 | import tensorflow as tf 14 | 15 | 16 | class Model(object): 17 | def __init__(self): 18 | self.image_size = 28 19 | self.num_channels = 1 20 | self.num_labels = 10 21 | 22 | # first convolutional layer 23 | self.W_conv1 = self._weight_variable([5,5,1,32]) 24 | self.b_conv1 = self._bias_variable([32]) 25 | 26 | # second convolutional layer 27 | self.W_conv2 = self._weight_variable([5,5,32,64]) 28 | self.b_conv2 = self._bias_variable([64]) 29 | 30 | # first fully connected layer 31 | self.W_fc1 = self._weight_variable([7 * 7 * 64, 1024]) 32 | self.b_fc1 = self._bias_variable([1024]) 33 | 34 | # output layer 35 | self.W_fc2 = self._weight_variable([1024,10]) 36 | self.b_fc2 = self._bias_variable([10]) 37 | 38 | def predict(self, input: tf.placeholder): 39 | h_conv1 = tf.nn.relu(self._conv2d(input, self.W_conv1) + self.b_conv1) 40 | h_pool1 = self._max_pool_2x2(h_conv1) 41 | 42 | h_conv2 = tf.nn.relu(self._conv2d(h_pool1, self.W_conv2) + self.b_conv2) 43 | h_pool2 = self._max_pool_2x2(h_conv2) 44 | 45 | h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64]) 46 | h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, self.W_fc1) + self.b_fc1) 47 | 48 | pre_softmax = tf.matmul(h_fc1, self.W_fc2) + self.b_fc2 49 | return pre_softmax 50 | 51 | 52 | @staticmethod 53 | def _weight_variable(shape): 54 | initial = tf.truncated_normal(shape, stddev=0.1) 55 | return tf.Variable(initial) 56 | 57 | @staticmethod 58 | def _bias_variable(shape): 59 | initial = tf.constant(0.1, shape = shape) 60 | return tf.Variable(initial) 61 | 62 | @staticmethod 63 | def _conv2d(x, W): 64 | return tf.nn.conv2d(x, W, strides=[1,1,1,1], padding='SAME') 65 | 66 | @staticmethod 67 | def _max_pool_2x2( x): 68 | return tf.nn.max_pool(x, 69 | ksize = [1,2,2,1], 70 | strides=[1,2,2,1], 71 | padding='SAME') 72 | -------------------------------------------------------------------------------- /fast_adv/models/mnist/small_cnn.py: -------------------------------------------------------------------------------- 1 | from collections import OrderedDict 2 | import torch.nn as nn 3 | 4 | 5 | class SmallCNN(nn.Module): 6 | def __init__(self, drop=0.5): 7 | super(SmallCNN, self).__init__() 8 | 9 | self.num_channels = 1 10 | self.num_labels = 10 11 | 12 | activ = nn.ReLU(True) 13 | 14 | self.feature_extractor = nn.Sequential(OrderedDict([ 15 | ('conv1', nn.Conv2d(self.num_channels, 32, 3)), 16 | ('relu1', activ), 17 | ('conv2', nn.Conv2d(32, 32, 3)), 18 | ('relu2', activ), 19 | ('maxpool1', nn.MaxPool2d(2, 2)), 20 | ('conv3', nn.Conv2d(32, 64, 3)), 21 | ('relu3', activ), 22 | ('conv4', nn.Conv2d(64, 64, 3)), 23 | ('relu4', activ), 24 | ('maxpool2', nn.MaxPool2d(2, 2)), 25 | 
])) 26 | 27 | self.classifier = nn.Sequential(OrderedDict([ 28 | ('fc1', nn.Linear(64 * 4 * 4, 200)), 29 | ('relu1', activ), 30 | ('drop', nn.Dropout(drop)), 31 | ('fc2', nn.Linear(200, 200)), 32 | ('relu2', activ), 33 | ('fc3', nn.Linear(200, self.num_labels)), 34 | ])) 35 | 36 | for m in self.modules(): 37 | if isinstance(m, (nn.Conv2d)): 38 | nn.init.kaiming_normal_(m.weight) 39 | if m.bias is not None: 40 | nn.init.constant_(m.bias, 0) 41 | elif isinstance(m, nn.BatchNorm2d): 42 | nn.init.constant_(m.weight, 1) 43 | nn.init.constant_(m.bias, 0) 44 | nn.init.constant_(self.classifier.fc3.weight, 0) 45 | nn.init.constant_(self.classifier.fc3.bias, 0) 46 | 47 | def forward(self, input): 48 | features = self.feature_extractor(input) 49 | logits = self.classifier(features.view(-1, 64 * 4 * 4)) 50 | return logits 51 | -------------------------------------------------------------------------------- /fast_adv/utils/__init__.py: -------------------------------------------------------------------------------- 1 | from .utils import (save_checkpoint, AverageMeter, NormalizedModel, 2 | requires_grad_, l2_norm, squared_l2_norm) 3 | from .visualization import VisdomLogger 4 | 5 | __all__ = [ 6 | 'save_checkpoint', 7 | 'AverageMeter', 8 | 'NormalizedModel', 9 | 'requires_grad_', 10 | 'VisdomLogger', 11 | 'l2_norm', 12 | 'squared_l2_norm' 13 | ] 14 | -------------------------------------------------------------------------------- /fast_adv/utils/utils.py: -------------------------------------------------------------------------------- 1 | from collections import OrderedDict 2 | import torch 3 | import torch.nn as nn 4 | 5 | 6 | def save_checkpoint(state: OrderedDict, filename: str = 'checkpoint.pth', cpu: bool = False) -> None: 7 | if cpu: 8 | new_state = OrderedDict() 9 | for k in state.keys(): 10 | newk = k.replace('module.', '') # remove module. if model was trained using DataParallel 11 | new_state[newk] = state[k].cpu() 12 | state = new_state 13 | torch.save(state, filename) 14 | 15 | 16 | class AverageMeter: 17 | """Computes and stores the average and current value""" 18 | 19 | def __init__(self): 20 | self.reset() 21 | 22 | def reset(self): 23 | self.values = [] 24 | self.counter = 0 25 | self.latest_avg = 0  # initialize so last_avg is defined before the first window closes 26 | def append(self, val): 27 | self.values.append(val) 28 | self.counter += 1 29 | 30 | @property 31 | def val(self): 32 | return self.values[-1] 33 | 34 | @property 35 | def avg(self): 36 | return sum(self.values) / len(self.values) 37 | 38 | @property 39 | def last_avg(self): 40 | if self.counter == 0: 41 | return self.latest_avg 42 | else: 43 | self.latest_avg = sum(self.values[-self.counter:]) / self.counter 44 | self.counter = 0 45 | return self.latest_avg 46 | 47 | 48 | class NormalizedModel(nn.Module): 49 | """ 50 | Wrapper for a model to account for the mean and std of a dataset. 51 | mean and std do not require grad as they should not be learned, but determined beforehand. 52 | mean and std should be broadcastable (see pytorch doc on broadcasting) with the data.
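For an (N, C, H, W) batch, mean and std tensors of shape (1, C, 1, 1) broadcast as required.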
53 | Args: 54 | model (nn.Module): model to use to predict 55 | mean (torch.Tensor): sequence of means for each channel 56 | std (torch.Tensor): sequence of standard deviations for each channel 57 | """ 58 | 59 | def __init__(self, model: nn.Module, mean: torch.Tensor, std: torch.Tensor) -> None: 60 | super(NormalizedModel, self).__init__() 61 | 62 | self.model = model 63 | self.mean = nn.Parameter(mean, requires_grad=False) 64 | self.std = nn.Parameter(std, requires_grad=False) 65 | 66 | def forward(self, input: torch.Tensor) -> torch.Tensor: 67 | normalized_input = (input - self.mean) / self.std 68 | return self.model(normalized_input) 69 | 70 | 71 | def requires_grad_(model:nn.Module, requires_grad:bool) -> None: 72 | for param in model.parameters(): 73 | param.requires_grad_(requires_grad) 74 | 75 | 76 | def squared_l2_norm(x: torch.Tensor) -> torch.Tensor: 77 | flattened = x.view(x.shape[0], -1) 78 | return (flattened ** 2).sum(1) 79 | 80 | 81 | def l2_norm(x: torch.Tensor) -> torch.Tensor: 82 | return squared_l2_norm(x).sqrt() -------------------------------------------------------------------------------- /fast_adv/utils/visualization.py: -------------------------------------------------------------------------------- 1 | # From https://github.com/luizgh/visdom_logger 2 | 3 | import visdom 4 | import torch 5 | from collections import defaultdict 6 | 7 | 8 | class VisdomLogger: 9 | def __init__(self, port): 10 | self.vis = visdom.Visdom(port=port) 11 | self.windows = defaultdict(lambda: None) 12 | 13 | def scalar(self, name, x, y): 14 | win = self.windows[name] 15 | 16 | update = None if win is None else 'append' 17 | win = self.vis.line(torch.Tensor([y]), torch.Tensor([x]), 18 | win=win, update=update, opts={'legend': [name]}) 19 | 20 | self.windows[name] = win 21 | 22 | def scalars(self, list_of_names, x, list_of_ys): 23 | name = '$'.join(list_of_names) 24 | 25 | win = self.windows[name] 26 | 27 | update = None if win is None else 'append' 28 | list_of_xs = [x] * len(list_of_ys) 29 | win = self.vis.line(torch.Tensor([list_of_ys]), torch.Tensor([list_of_xs]), 30 | win=win, update=update, opts={'legend': list_of_names}) 31 | 32 | self.windows[name] = win 33 | 34 | def images(self, name, images, mean_std=None): 35 | win = self.windows[name] 36 | 37 | win = self.vis.images(images if mean_std is None else 38 | images * torch.Tensor(mean_std[0]) + torch.Tensor(mean_std[1]), 39 | win=win, opts={'legend': [name]}) 40 | 41 | self.windows[name] = win 42 | 43 | def reset_windows(self): 44 | self.windows.clear() 45 | -------------------------------------------------------------------------------- /requirements-dev.txt: -------------------------------------------------------------------------------- 1 | torch>=0.4.1 2 | torchvision>=0.2.1 3 | tqdm>=4.23.4 4 | visdom>=0.1.8 5 | tensorflow-gpu 6 | pytest -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | torch>=0.4.1 2 | torchvision>=0.2.1 3 | tqdm>=4.23.4 4 | visdom>=0.1.8 5 | tensorflow-gpu 6 | foolbox>=1.7.0 -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | from distutils.core import setup 4 | from setuptools import find_packages 5 | import os 6 | 7 | setup_path = os.path.abspath(os.path.dirname(__file__)) 8 | 9 | setup(name='fast_adversarial', 10 | 
version='0.1', 11 | url='https://github.com/jeromerony/fast_adversarial', 12 | maintainer='Jerome Rony, Luiz G. Hafemann', 13 | maintainer_email='jerome.rony@gmail.com', 14 | description='Implementation of gradient-based attacks and defenses for adversarial examples', 15 | author='Jerome Rony, Luiz G. Hafemann', 16 | author_email='jerome.rony@gmail.com', 17 | classifiers=[ 18 | 'Development Status :: 3 - Alpha', 19 | 'Intended Audience :: Developers', 20 | 'Intended Audience :: Science/Research', 21 | 'Programming Language :: Python :: 3.6', 22 | 'Programming Language :: Python :: 3.7', 23 | 'Topic :: Scientific/Engineering :: Artificial Intelligence', 24 | ], 25 | python_requires='>=3.6', 26 | install_requires=[ 27 | 'torch>=0.4.1', 28 | 'torchvision>=0.2.1', 29 | 'tqdm>=4.23.4', 30 | 'visdom>=0.1.8', 31 | 'foolbox>=1.7.0', 32 | ], 33 | packages=find_packages()) 34 | --------------------------------------------------------------------------------
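A note on the max-norm projection used in the adversarial training loop of fast_adv/defenses/mnist.py above: torch.renorm treats each sub-tensor along dim=0 (each sample's perturbation) as a vector and rescales it only when its L2 norm exceeds maxnorm. A minimal sketch of that behaviour, using random stand-in tensors instead of a real attack output:

# Sketch of the per-sample L2 projection from the training loop above.
import torch

batch, max_norm = 4, 1.0
images = torch.rand(batch, 1, 28, 28)
adv = images + 0.5 * torch.randn_like(images)  # stand-in for attacker.attack(...)

# Same expression as in the loop: cap each perturbation at max_norm.
delta = torch.renorm(adv - images, p=2, dim=0, maxnorm=max_norm)
adv_capped = delta + images

norms = delta.view(batch, -1).norm(p=2, dim=1)
assert (norms <= max_norm + 1e-6).all()  # every perturbation now fits in the L2 ball
print(norms)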