├── .DS_Store ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE ├── Quant_101.ipynb ├── Quant_API.ipynb ├── Quant_Workflow.ipynb ├── Quantization_Slides.pdf ├── README.md ├── code └── 101 │ ├── latency.py │ ├── output_range.py │ ├── prof.py │ ├── qparams.py │ ├── roundfail.txt │ └── sizeof.py ├── img ├── affine-symmetric.png ├── flowchart-check1.png ├── flowchart-check2.png ├── flowchart-check3.png ├── flowchart-check4.png ├── flowchart-check5.png ├── flowchart-check6.png ├── flowchart-check7_1.png ├── flowchart-check7_2.png ├── flowchart-check8.png ├── flowchart-check9.png ├── ns.png ├── observer.png ├── per_t_c.png ├── ptq-flowchart.png ├── ptq-flowchart.svg ├── ptq-fx-flowchart.png ├── q_scheme.png ├── quant_dequant.png ├── quantization-flowchart.png ├── scaling.png └── swan-3299528_1280.jpeg └── resnet_cifar.py /.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/.DS_Store -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | In the interest of fostering an open and welcoming environment, we as 6 | contributors and maintainers pledge to make participation in our project and 7 | our community a harassment-free experience for everyone, regardless of age, body 8 | size, disability, ethnicity, sex characteristics, gender identity and expression, 9 | level of experience, education, socio-economic status, nationality, personal 10 | appearance, race, religion, or sexual identity and orientation. 11 | 12 | ## Our Standards 13 | 14 | Examples of behavior that contributes to creating a positive environment 15 | include: 16 | 17 | * Using welcoming and inclusive language 18 | * Being respectful of differing viewpoints and experiences 19 | * Gracefully accepting constructive criticism 20 | * Focusing on what is best for the community 21 | * Showing empathy towards other community members 22 | 23 | Examples of unacceptable behavior by participants include: 24 | 25 | * The use of sexualized language or imagery and unwelcome sexual attention or 26 | advances 27 | * Trolling, insulting/derogatory comments, and personal or political attacks 28 | * Public or private harassment 29 | * Publishing others' private information, such as a physical or electronic 30 | address, without explicit permission 31 | * Other conduct which could reasonably be considered inappropriate in a 32 | professional setting 33 | 34 | ## Our Responsibilities 35 | 36 | Project maintainers are responsible for clarifying the standards of acceptable 37 | behavior and are expected to take appropriate and fair corrective action in 38 | response to any instances of unacceptable behavior. 39 | 40 | Project maintainers have the right and responsibility to remove, edit, or 41 | reject comments, commits, code, wiki edits, issues, and other contributions 42 | that are not aligned to this Code of Conduct, or to ban temporarily or 43 | permanently any contributor for other behaviors that they deem inappropriate, 44 | threatening, offensive, or harmful. 45 | 46 | ## Scope 47 | 48 | This Code of Conduct applies within all project spaces, and it also applies when 49 | an individual is representing the project or its community in public spaces. 
50 | Examples of representing a project or community include using an official 51 | project e-mail address, posting via an official social media account, or acting 52 | as an appointed representative at an online or offline event. Representation of 53 | a project may be further defined and clarified by project maintainers. 54 | 55 | This Code of Conduct also applies outside the project spaces when there is a 56 | reasonable belief that an individual's behavior may have a negative impact on 57 | the project or its community. 58 | 59 | ## Enforcement 60 | 61 | Instances of abusive, harassing, or otherwise unacceptable behavior may be 62 | reported by contacting the project team at . All 63 | complaints will be reviewed and investigated and will result in a response that 64 | is deemed necessary and appropriate to the circumstances. The project team is 65 | obligated to maintain confidentiality with regard to the reporter of an incident. 66 | Further details of specific enforcement policies may be posted separately. 67 | 68 | Project maintainers who do not follow or enforce the Code of Conduct in good 69 | faith may face temporary or permanent repercussions as determined by other 70 | members of the project's leadership. 71 | 72 | ## Attribution 73 | 74 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, 75 | available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html 76 | 77 | [homepage]: https://www.contributor-covenant.org 78 | 79 | For answers to common questions about this code of conduct, see 80 | https://www.contributor-covenant.org/faq 81 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | ## Pull Requests 2 | We actively welcome your pull requests for any issues in the code. 3 | 4 | 1. Fork the repo and create your branch from `main`. 5 | 2. If you haven't already, complete the Contributor License Agreement ("CLA"). 6 | 7 | ## Contributor License Agreement ("CLA") 8 | In order to accept your pull request, we need you to submit a CLA. You only need 9 | to do this once to work on any of Meta's open source projects. 10 | 11 | Complete your CLA here: 12 | 13 | ## Issues 14 | We use GitHub issues to track public bugs. Please ensure your description is 15 | clear and has sufficient instructions to be able to reproduce the issue. 16 | 17 | Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe 18 | disclosure of security bugs. In those cases, please go through the process 19 | outlined on that page and do not file a public issue. 20 | 21 | ## License 22 | By contributing to this project, you agree that your contributions will be licensed 23 | under the LICENSE file in the root directory of this source tree. 24 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 
14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. 
Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 
134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 
193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /Quant_101.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "### Prerequisites" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": 1, 13 | "metadata": {}, 14 | "outputs": [ 15 | { 16 | "name": "stderr", 17 | "output_type": "stream", 18 | "text": [ 19 | "/opt/miniconda3/lib/python3.9/site-packages/torchvision/io/image.py:11: UserWarning: Failed to load image Python extension: \n", 20 | " warn(f\"Failed to load image Python extension: {e}\")\n" 21 | ] 22 | } 23 | ], 24 | "source": [ 25 | "import torch\n", 26 | "import torch.nn.functional as F\n", 27 | "from torchvision import models, transforms\n", 28 | "from copy import deepcopy\n", 29 | "import requests\n", 30 | "from PIL import Image\n", 31 | "import ast\n", 32 | "\n", 33 | "cls_idx = requests.get(\"https://gist.githubusercontent.com/yrevar/942d3a0ac09ec9e5eb3a/raw/238f720ff059c1f82f368259d1ca4ffa5dd8f9f5/imagenet1000_clsidx_to_labels.txt\")\n", 34 | "cls_idx = ast.literal_eval(cls_idx.text)\n", 35 | "\n", 36 | "\n", 37 | "def load_img(url):\n", 38 | " IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD = ((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))\n", 39 | " transform = transforms.Compose([\n", 40 | " transforms.Resize(256, interpolation=transforms.InterpolationMode.BICUBIC),\n", 41 | " transforms.CenterCrop(224),\n", 42 | " transforms.ToTensor(),\n", 43 | " transforms.Normalize(IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD)\n", 44 | " ])\n", 45 | " if url.startswith(\"https\"):\n", 46 | " img = Image.open(requests.get(url, stream=True).raw)\n", 47 | " else:\n", 48 | " img = Image.open(url)\n", 49 | " img = transform(img).unsqueeze(0)\n", 50 | " return img\n", 51 | "\n", 52 | "\n", 53 | "def get_predictions(outp):\n", 54 | " outp = F.softmax(outp, dim=1)\n", 55 | " score, idx = torch.topk(outp, 1)\n", 56 | " idx.squeeze_()\n", 57 | " predicted_label = cls_idx[idx.item()]\n", 58 | " print(predicted_label, '(', score.squeeze().item(), ')')\n", 59 | "\n", 60 | "\n", 61 | "def print_sizeof(model):\n", 62 | " total = 0\n", 63 | " for p in model.parameters():\n", 64 | " total += p.numel() * p.element_size()\n", 65 | " total /= 1e6\n", 66 | " print(\"Model size: \", total, \" MB\")\n" 67 | ] 68 | }, 69 | { 70 | "cell_type": "markdown", 71 | "metadata": {}, 72 | "source": [ 73 | "## Fundamentals of Quantization\n", 74 | "* Quantization is the process of reducing the size of data. 
\n", 75 | "* It uses a `mapping function` to convert values in floating-point space to integer space" 76 | ] 77 | }, 78 | { 79 | "cell_type": "code", 80 | "execution_count": 2, 81 | "metadata": {}, 82 | "outputs": [ 83 | { 84 | "name": "stdout", 85 | "output_type": "stream", 86 | "text": [ 87 | "3\n", 88 | "4\n", 89 | "3\n" 90 | ] 91 | } 92 | ], 93 | "source": [ 94 | "# floor, ceil and round are also quantization mapping functions\n", 95 | "\n", 96 | "import math\n", 97 | "\n", 98 | "print(math.floor(3.14159265359))\n", 99 | "print(math.ceil(3.14159265359))\n", 100 | "print(round(3.14159265359))" 101 | ] 102 | }, 103 | { 104 | "cell_type": "markdown", 105 | "metadata": {}, 106 | "source": [ 107 | "While its roots are in digital signal processing (for digital encoding and lossy compression), quantization techniques are also used to reduce the size of deep neural networks (DNNs).\n", 108 | "\n", 109 | "DNN parameters are typically 32-bit floating point numbers; using quantization, we can represent them as 8-bit (or lower) integers." 110 | ] 111 | }, 112 | { 113 | "cell_type": "markdown", 114 | "metadata": {}, 115 | "source": [ 116 | "## Quantization of neural networks from scratch" 117 | ] 118 | }, 119 | { 120 | "cell_type": "markdown", 121 | "metadata": {}, 122 | "source": [ 123 | "In this workshop, we'll \n", 124 | "\n", 125 | "a) Load a pretrained Resnet model\n", 126 | "\n", 127 | "b) Quantize the last layer (classifier) from scratch\n", 128 | "\n", 129 | "c) Compare accuracy performance with non-quantized classifier\n" 130 | ] 131 | }, 132 | { 133 | "cell_type": "markdown", 134 | "metadata": {}, 135 | "source": [ 136 | "### Loading the Resnet model" 137 | ] 138 | }, 139 | { 140 | "cell_type": "code", 141 | "execution_count": 3, 142 | "metadata": {}, 143 | "outputs": [], 144 | "source": [ 145 | "# Load the model\n", 146 | "resnet = models.resnet18(pretrained=True).eval()\n", 147 | "resnet.requires_grad_(False)\n", 148 | "\n", 149 | "# Extract the classifier before removing from resnet\n", 150 | "fp32_fc = deepcopy(resnet.fc)\n", 151 | "\n", 152 | "# Remove classifier from resnet model. This is now a Resnet feature extractor.\n", 153 | "resnet.fc = torch.nn.Identity()" 154 | ] 155 | }, 156 | { 157 | "cell_type": "markdown", 158 | "metadata": {}, 159 | "source": [ 160 | "Testing that we didn't screw anything up..." 161 | ] 162 | }, 163 | { 164 | "cell_type": "code", 165 | "execution_count": 4, 166 | "metadata": {}, 167 | "outputs": [ 168 | { 169 | "name": "stdout", 170 | "output_type": "stream", 171 | "text": [ 172 | "timber wolf, grey wolf, gray wolf, Canis lupus ( 0.44803616404533386 )\n" 173 | ] 174 | } 175 | ], 176 | "source": [ 177 | "wolf_img = \"https://raw.githubusercontent.com/pytorch/ios-demo-app/master/HelloWorld/HelloWorld/HelloWorld/image.png\"\n", 178 | "img = load_img(wolf_img)\n", 179 | "\n", 180 | "model = torch.nn.Sequential(resnet, fp32_fc)\n", 181 | "logits = model(img)\n", 182 | "get_predictions(logits)" 183 | ] 184 | }, 185 | { 186 | "cell_type": "markdown", 187 | "metadata": {}, 188 | "source": [ 189 | "### Attempt 1: Round\n", 190 | "Quantization mapping functions also include naive functions like `round`. 
" 191 | ] 192 | }, 193 | { 194 | "cell_type": "code", 195 | "execution_count": 5, 196 | "metadata": {}, 197 | "outputs": [], 198 | "source": [ 199 | "rounded_fc = deepcopy(fp32_fc)\n", 200 | "rounded_fc.weight = torch.nn.Parameter(torch.round(rounded_fc.weight), requires_grad=False)\n", 201 | "rounded_fc.bias = torch.nn.Parameter(torch.round(rounded_fc.bias), requires_grad=False)" 202 | ] 203 | }, 204 | { 205 | "cell_type": "markdown", 206 | "metadata": {}, 207 | "source": [ 208 | "Sounds too good to be true?" 209 | ] 210 | }, 211 | { 212 | "cell_type": "code", 213 | "execution_count": 6, 214 | "metadata": {}, 215 | "outputs": [ 216 | { 217 | "name": "stdout", 218 | "output_type": "stream", 219 | "text": [ 220 | "rhinoceros beetle ( 0.01966511830687523 )\n" 221 | ] 222 | } 223 | ], 224 | "source": [ 225 | "model = torch.nn.Sequential(resnet, rounded_fc)\n", 226 | "logits = model(img)\n", 227 | "get_predictions(logits)" 228 | ] 229 | }, 230 | { 231 | "cell_type": "markdown", 232 | "metadata": {}, 233 | "source": [ 234 | "You already knew [this wouldn't work](https://en.wikipedia.org/wiki/There_ain%27t_no_such_thing_as_a_free_lunch), but it's good to get it out of the way.\n", 235 | "\n", 236 | "The reason this failed is because our classifier's parameters are between [-0.2, 0.4]. By directly rounding these, we just zeroed out our layer!" 237 | ] 238 | }, 239 | { 240 | "cell_type": "code", 241 | "execution_count": 7, 242 | "metadata": {}, 243 | "outputs": [ 244 | { 245 | "data": { 246 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAWoAAAD4CAYAAADFAawfAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8qNh9FAAAACXBIWXMAAAsTAAALEwEAmpwYAAANyElEQVR4nO3dfYylZX3G8e/lLlRBKpSdIgLp0gRIWpoCOaEvNlh5abQYbFJjMaGBxnSTvmpt2tD4h2n7D7SNqUlN2w3aYquopdASjS+0QoiNbJkFNO4i8iLqAroHW1FsKqK//nHO4HQ4u/MMe54z95zz/SSTPWfOncn17Jlcc8899/M8qSokSe16wWYHkCQdnkUtSY2zqCWpcRa1JDXOopakxm3v44vu2LGjdu7c2ceXlqS5tHfv3ieqamnSa70U9c6dO1leXu7jS0vSXEryxUO95tKHJDXOopakxnUq6iS/l2Rfks8muSHJC/sOJkkaWbeok5wC/C4wqKqzgW3A5X0HkySNdF362A68KMl24Bjgsf4iSZJWW7eoq+pR4C+ALwGPA09W1cfXjkuyK8lykuXhcDj9pJK0oLosfZwAvBY4HXgZcGySK9aOq6rdVTWoqsHS0sStgJKk56HL0sfFwBeqalhV3wFuAn6231iSpBVdivpLwE8nOSZJgIuA+/qNJUlase6ZiVW1J8mNwN3AM8A9wO6+g+nQdl794Ymff+SaS2ecRNIsdDqFvKreBryt5yySpAk8M1GSGmdRS1LjLGpJapxFLUmNs6glqXEWtSQ1rpc7vGhzrN5f7Z5qaX44o5akxlnUktQ4i1qSGmdRS1LjLGpJapxFLUmNs6glqXHuo94iDnUNaknzzxm1JDXOopakxnW5C/lZSe5d9fGNJG+eQTZJEt3umXg/cA5Akm3Ao8DN/caSJK3Y6NLHRcBDVfXFPsJIkp5ro0V9OXDDpBeS7EqynGR5OBweeTJJEgCpqm4Dk6OBx4Afr6qvHm7sYDCo5eXlKcTTiiPZnuclT6X2JdlbVYNJr21kRv1q4O71SlqSNF0bKeo3cIhlD0lSfzoVdZJjgUuAm/qNI0laq9Mp5FX1LeDEnrNIkibwzERJapxFLUmNs6glqXEWtSQ1zqKWpMZZ1JLUOItakhpnUUtS4yxqSWqcRS1JjbOoJalxFrUkNc6ilqTGdbp6njbHkdzVRdL8cEYtSY2zqCWpcRa1JDXONeoFsHqt2zuSS1tP13smHp/kxiSfS3Jfkp/pO5gkaaTrjPodwEer6nVJjgaO6TGTJGmVdYs6yUuAC4CrAKrqaeDpfmNJklZ0Wfo4HRgCf5fkniTXJTl27aAku5IsJ1keDodTDypJi6pLUW8HzgP+uqrOBb4FXL12UFXtrqpBVQ2WlpamHFOSFleXoj4AHKiqPePnNzIqbknSDKxb1FX1FeDLSc4af+oiYH+vqSRJz+q66+N3gPeOd3w8DPxaf5EkSat1KuqquhcY9BtFkjSJp5BLUuMsaklqnEUtSY2zqCWpcRa1JDXOopakxlnUktQ4i1qSGmdRS1LjLGpJapz3TFww3j9R2nqcUUtS4yxqSWqcRS1JjbOoJalxFrUkNc6ilqTGddqel+QR4JvAd4Fnqsq7vfRk9fY5SYKN7aN+ZVU90VsSSdJELn1IUuO6FnUBH0+yN8muSQOS7EqynGR5OBxOL6EkLbiuRf1zVXUe8Grgt5JcsHZAVe2uqkFVDZaWlqYaUpIWWaeirqpHx/8eBG4Gzu8zlCTp+9Yt6iTHJjlu5THwC8Bn+w4mSRrpsuvjJODmJCvj31dVH+01lSTpWesWdVU9DPzkDLJIkiZwe54kNc6ilqTGWdSS1DiLWpIaZ1FLUuMsaklqnEUtSY2zqCWpcRu5HrXmzOqbFDxyzaWbmETS4TijlqTGWdSS1DiLWpIaZ1FLUuMsaklqnEUtSY2zqCWpcRa1JDWuc1En2ZbkniQf6jOQJOn/28iM+k3AfX0FkSRN1qmok5wKXApc128cSdJaXWfUfwn8IfC9Qw1
IsivJcpLl4XA4jWySJDoUdZLXAAerau/hxlXV7qoaVNVgaWlpagEladF1mVG/HLgsySPA+4ELk/xjr6kkSc9at6ir6o+q6tSq2glcDnyiqq7oPZkkCXAftSQ1b0M3Dqiq24Hbe0mywFZfwL+FDN5EQGqLM2pJapxFLUmNs6glqXEWtSQ1zqKWpMZZ1JLUOItakhpnUUtS4yxqSWqcRS1JjbOoJalxFrUkNc6ilqTGWdSS1LgNXeZUi8FLnkptcUYtSY2zqCWpcRa1JDVu3aJO8sIk/5nk00n2JfnjWQSTJI10+WPit4ELq+qpJEcBn0zykaq6s+dskiQ6FHVVFfDU+OlR44/qM5Qk6fs6rVEn2ZbkXuAgcGtV7ZkwZleS5STLw+FwyjElaXF1Kuqq+m5VnQOcCpyf5OwJY3ZX1aCqBktLS1OOKUmLa0O7Pqrq68BtwKt6SSNJeo4uuz6Wkhw/fvwi4BLgcz3nkiSNddn1cTJwfZJtjIr9g1X1oX5jSZJWdNn18Rng3BlkkSRN4JmJktQ4i1qSGudlTnVYXvJU2nzOqCWpcc6oN8nqmaokHY4zaklqnEUtSY2zqCWpcRa1JDXOopakxlnUktQ4i1qSGmdRS1LjLGpJapxnJqozr/shbQ5n1JLUOItakhrX5Z6JpyW5Lcn+JPuSvGkWwSRJI13WqJ8Bfr+q7k5yHLA3ya1Vtb/nbJIkOsyoq+rxqrp7/PibwH3AKX0HkySNbGiNOslORje63TPhtV1JlpMsD4fDKcWTJHUu6iQvBv4ZeHNVfWPt61W1u6oGVTVYWlqaZkZJWmidijrJUYxK+r1VdVO/kSRJq637x8QkAd4F3FdVb+8/krYCT36RZqfLjPrlwK8CFya5d/zxiz3nkiSNrTujrqpPAplBFknSBJ6ZKEmNs6glqXFePW+GVv8BTpK6ckYtSY1zRq0j5lY9qV/OqCWpcRa1JDXOopakxlnUktQ4i1qSGmdRS1Lj3J6nqXKrnjR9zqglqXEWtSQ1zqKWpMZZ1JLUOItakhrX5Z6J7wZeAxysqrP7j6R54Q4QaTq6bM/7e+CvgPf0G2U+eQ1qSUdq3aWPqroD+K8ZZJEkTTC1Neoku5IsJ1keDofT+rKStPCmdmZiVe0GdgMMBoOa1tfVfFi7BOSatdSduz4kqXEWtSQ1bt2iTnID8CngrCQHkryx/1iSpBXrrlFX1RtmEUSLxT3WUncufUhS4yxqSWqcNw7QpnMZRDo8Z9SS1Dhn1FPmtT0kTZtFraa4DCI9l0WtZlna0ohr1JLUOItakhrn0oe2BJdBtMgsam05lrYWjUU9BW7J2zyWthaBa9SS1Dhn1Jobzq41ryxqzSVLW/PEotbcs7S11VnUz5N/QNyaurxvlrlaY1FLazgDV2s6FXWSVwHvALYB11XVNb2mapSz6MVzqPfcAtcsrVvUSbYB7wQuAQ4AdyW5par29x2uBZazJtno94XFriPRZUZ9PvBgVT0MkOT9wGuBuSpqC1l9msX3lz8M5leXoj4F+PKq5weAn1o7KMkuYNf46VNJ7j/yeL3YATyx2SFmzGNeALkWWMDjZn6O+UcO9cLU/phYVbuB3dP6en1JslxVg83OMUse8+JYxONehGPucgr5o8Bpq56fOv6cJGkGuhT1XcAZSU5PcjRwOXBLv7EkSSvWXfqoqmeS/DbwMUbb895dVft6T9af5pdneuAxL45FPO65P+ZU1WZnkCQdhpc5laTGWdSS1Li5L+okP5Tk1iQPjP89YcKYc5J8Ksm+JJ9J8iubkfVIJXlVkvuTPJjk6gmv/0CSD4xf35Nk5ybEnKoOx/yWJPvH7+u/JznkXtWtYr1jXjXul5NUkrnYutbluJO8fvx+70vyvlln7E1VzfUH8GfA1ePHVwPXThhzJnDG+PHLgMeB4zc7+waPcxvwEPCjwNHAp4EfWzPmN4G/GT++HPjAZueewTG/Ejhm/Pg3FuGYx+OOA+4A7gQGm517Ru/1GcA9wAnj5z+82bmn9TH3M2pGp7tfP358PfBLawdU1eer6oHx48eAg8DSrAJOybOn+lfV08DKqf6rrf6/uBG4KElmmHHa1j3mqrqtqv5n/PRORucBbGVd3meAPwWuBf53luF61OW4fx14Z1X9N0BVHZxxxt4sQlGfVFWPjx9/BTjpcIOTnM/oJ/ZDfQebskmn+p9yqDFV9QzwJHDiTNL1o8sxr/ZG4CO9Jurfusec5DzgtKqapwvYdHmvzwTOTPIfSe4cX/VzLszF9aiT/Bvw0gkvvXX1k6qqJIfcj5jkZOAfgCur6nvTTanNlOQKYAC8YrOz9CnJC4C3A1dtcpTNsJ3R8sfPM/rN6Y4kP1FVX9/MUNMwF0VdVRcf6rUkX01yclU9Pi7iib8OJflB4MPAW6vqzp6i9qnLqf4rYw4k2Q68BPjabOL1otPlDZJczOiH9iuq6tszytaX9Y75OOBs4PbxqtZLgVuSXFZVyzNLOX1d3usDwJ6q+g7whSSfZ1Tcd80mYn8WYenjFuDK8eMrgX9dO2B8avzNwHuq6sYZZpumLqf6r/6/eB3wiRr/1WWLWveYk5wL/C1w2ZysWR72mKvqyaraUVU7q2ono3X5rV7S0O37+18YzaZJsoPRUsjDM8zYm0Uo6muAS5I8AFw8fk6SQZLrxmNeD1wAXJXk3vHHOZuS9nkarzmvnOp/H/DBqtqX5E+SXDYe9i7gxCQPAm9htAtmy+p4zH8OvBj4p/H7uqWvU9PxmOdOx+P+GPC1JPuB24A/qKqt/BvjszyFXJIatwgzakna0ixqSWqcRS1JjbOoJalxFrUkNc6ilqTGWdSS1Lj/A87pUTiphhFaAAAAAElFTkSuQmCC", 247 | "text/plain": [ 248 | "
" 249 | ] 250 | }, 251 | "metadata": { 252 | "needs_background": "light" 253 | }, 254 | "output_type": "display_data" 255 | } 256 | ], 257 | "source": [ 258 | "from matplotlib import pyplot as plt\n", 259 | "_, _, _ = plt.hist(fp32_fc.weight.detach().flatten(), density=True, bins=100)\n", 260 | "plt.show()" 261 | ] 262 | }, 263 | { 264 | "cell_type": "markdown", 265 | "metadata": {}, 266 | "source": [ 267 | "### Attempt 2: Scale before Round\n", 268 | "\n", 269 | "This time, we rescale the parameters into an appropriate output range before rounding. \n", 270 | "\n", 271 | "What's a good output range? It depends on the quantization precision you want" 272 | ] 273 | }, 274 | { 275 | "cell_type": "markdown", 276 | "metadata": {}, 277 | "source": [ 278 | "#### Choosing the quantized ouput range" 279 | ] 280 | }, 281 | { 282 | "cell_type": "code", 283 | "execution_count": 8, 284 | "metadata": {}, 285 | "outputs": [ 286 | { 287 | "name": "stdout", 288 | "output_type": "stream", 289 | "text": [ 290 | "For 16-bit quantization, the quantized range is (-32768, 32767)\n", 291 | "For 8-bit quantization, the quantized range is (-128, 127)\n", 292 | "For 4-bit quantization, the quantized range is (-8, 7)\n" 293 | ] 294 | } 295 | ], 296 | "source": [ 297 | "def get_output_range(bits):\n", 298 | " alpha_q = -2 ** (bits - 1)\n", 299 | " beta_q = 2 ** (bits - 1) - 1\n", 300 | " return alpha_q, beta_q\n", 301 | "\n", 302 | "\n", 303 | "print(\"For 16-bit quantization, the quantized range is \", get_output_range(16))\n", 304 | "print(\"For 8-bit quantization, the quantized range is \", get_output_range(8))\n", 305 | "print(\"For 4-bit quantization, the quantized range is \", get_output_range(4))" 306 | ] 307 | }, 308 | { 309 | "cell_type": "markdown", 310 | "metadata": {}, 311 | "source": [ 312 | "In this example, we're going to use 8-bit quantization. So the output range to scale our parameters is [-128, 127]." 
313 | ] 314 | }, 315 | { 316 | "cell_type": "markdown", 317 | "metadata": {}, 318 | "source": [ 319 | "#### Moving from FP32 to INT8\n", 320 | "\n", 321 | "\n", 322 | "\n", 323 | "Generally speaking, what we're doing here is an affine transformation from 32-bit space to 8-bit space.\n", 324 | "\n", 325 | "These are of the form `y = Ax + B`\n", 326 | "\n", 327 | "The two parameters for this transformation are: \n", 328 | "* The scaling factor `S` \n", 329 | "* The zero-point `Z` \n", 330 | "\n", 331 | "So our transformation looks like `Q(x) = round(x/S + Z)`" 332 | ] 333 | }, 334 | { 335 | "cell_type": "code", 336 | "execution_count": 9, 337 | "metadata": {}, 338 | "outputs": [], 339 | "source": [ 340 | "def get_quantization_params(input_range, output_range):\n", 341 | " min_val, max_val = input_range\n", 342 | " alpha_q, beta_q = output_range\n", 343 | " S = (max_val - min_val) / (beta_q - alpha_q)\n", 344 | " Z = alpha_q - (min_val / S)\n", 345 | " return S, Z\n", 346 | "\n", 347 | "\n", 348 | "def scale_transform(x, S, Z):\n", 349 | " x_q = 1/S * x + Z \n", 350 | " x_q = torch.round(x_q).to(torch.int8)\n", 351 | " return x_q\n", 352 | "\n", 353 | "\n", 354 | "def quantize_int8(x):\n", 355 | " S, Z = get_quantization_params(input_range=(x.min(), x.max(),), output_range=(-128, 127))\n", 356 | " x_q = scale_transform(x, S, Z)\n", 357 | " return x_q, S, Z\n", 358 | "\n", 359 | "\n", 360 | "def dequantize(x_q, S, Z):\n", 361 | " x = S * (x_q - Z)\n", 362 | " return x\n" 363 | ] 364 | }, 365 | { 366 | "cell_type": "markdown", 367 | "metadata": {}, 368 | "source": [ 369 | "Now we have all the functions we need to quantize our classifier.\n", 370 | "\n", 371 | "Like before, we quantize each parameter in the layer (`weights` and `bias` in this case). \n", 372 | "\n", 373 | "We will also quantize the inputs to the layer." 374 | ] 375 | }, 376 | { 377 | "cell_type": "markdown", 378 | "metadata": {}, 379 | "source": [ 380 | "#### Quantize classifier" 381 | ] 382 | }, 383 | { 384 | "cell_type": "code", 385 | "execution_count": 10, 386 | "metadata": {}, 387 | "outputs": [], 388 | "source": [ 389 | "def quantize_classifier(clf):\n", 390 | " W_q, S_w, Z_w = quantize_int8(clf.weight)\n", 391 | " b_q, S_b, Z_b = quantize_int8(clf.bias)\n", 392 | " return (W_q, S_w, Z_w, b_q, S_b, Z_b)" 393 | ] 394 | }, 395 | { 396 | "cell_type": "markdown", 397 | "metadata": {}, 398 | "source": [ 399 | "#### Quantize inputs" 400 | ] 401 | }, 402 | { 403 | "cell_type": "code", 404 | "execution_count": 11, 405 | "metadata": {}, 406 | "outputs": [], 407 | "source": [ 408 | "def quantize_inputs(img):\n", 409 | " features = resnet(img)\n", 410 | " X_q, S_x, Z_x = quantize_int8(features)\n", 411 | " return (X_q, S_x, Z_x)" 412 | ] 413 | }, 414 | { 415 | "cell_type": "markdown", 416 | "metadata": {}, 417 | "source": [ 418 | "#### Quantized Matrix Multiplication\n", 419 | "\n", 420 | "In PyTorch, the quantized operators run in specialized backends like FBGEMM and QNNPACK.\n", 421 | "\n", 422 | "We can simulate the INT8 matmul by first dequantizing everything to FP32 and then running the multiply." 
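For contrast with the FP32 simulation defined in the next cell, the sketch below shows roughly what an integer kernel computes instead: subtract the zero-points, accumulate the products in a wider integer type, and rescale once at the end with the combined scale. This is an illustrative assumption about backend behaviour, not the workshop's code; it reuses the tuples returned by `quantize_inputs` and `quantize_classifier`, and rounds the (float) zero-points, so its output will differ slightly from the simulation.

```python
def int8_matmul_integer(quantized_input, quantized_layer):
    X_q, S_x, Z_x = quantized_input
    W_q, S_w, Z_w, b_q, S_b, Z_b = quantized_layer
    # Real kernels accumulate int8 products in int32; int64 is used here for simplicity.
    acc = (X_q.long() - Z_x.round().long()) @ (W_q.long() - Z_w.round().long()).T
    # A single rescale back to FP32 with the combined scale, plus the dequantized bias.
    return S_x * S_w * acc + S_b * (b_q.long() - Z_b.round().long())
```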
423 | ] 424 | }, 425 | { 426 | "cell_type": "code", 427 | "execution_count": 12, 428 | "metadata": {}, 429 | "outputs": [], 430 | "source": [ 431 | "def int8_matmul_sim(quantized_input, quantized_layer):\n", 432 | " X = dequantize(*quantized_input)\n", 433 | " W = dequantize(*quantized_layer[:3])\n", 434 | " b = dequantize(*quantized_layer[3:])\n", 435 | " return b + X @ W.T" 436 | ] 437 | }, 438 | { 439 | "cell_type": "markdown", 440 | "metadata": {}, 441 | "source": [ 442 | "#### Run Quantized and Non-Quantized forward pass" 443 | ] 444 | }, 445 | { 446 | "cell_type": "code", 447 | "execution_count": 13, 448 | "metadata": {}, 449 | "outputs": [], 450 | "source": [ 451 | "# Non-Quantized\n", 452 | "model = torch.nn.Sequential(resnet, fp32_fc)\n", 453 | "logits = model(img)\n", 454 | "\n", 455 | "# Quantized\n", 456 | "inputs_q = quantize_inputs(img)\n", 457 | "classifier_q = quantize_classifier(fp32_fc)\n", 458 | "logits_q = int8_matmul_sim(inputs_q, classifier_q)" 459 | ] 460 | }, 461 | { 462 | "cell_type": "markdown", 463 | "metadata": {}, 464 | "source": [ 465 | "#### Compare Q and N-Q logits" 466 | ] 467 | }, 468 | { 469 | "cell_type": "code", 470 | "execution_count": 14, 471 | "metadata": {}, 472 | "outputs": [ 473 | { 474 | "name": "stdout", 475 | "output_type": "stream", 476 | "text": [ 477 | "Non-Quantized output:\n", 478 | " tensor([[ 0.2827, -1.5461, 1.2094, -0.2907, -3.6378, 0.8214, -1.3164, -3.7967,\n", 479 | " -1.8691, -1.7165]]) \n", 480 | "\n", 481 | "Quantized output:\n", 482 | " tensor([[ 0.2997, -1.5684, 1.2295, -0.2559, -3.6580, 0.8425, -1.3213, -3.8168,\n", 483 | " -1.8718, -1.7032]]) \n", 484 | "\n", 485 | "Quantization error = tensor(-0.0010)\n" 486 | ] 487 | } 488 | ], 489 | "source": [ 490 | "# Compare quantized and non-quantized logits\n", 491 | "print(\"Non-Quantized output:\\n\", logits[:, :10], \"\\n\")\n", 492 | "print(\"Quantized output:\\n\", logits_q[:, :10], \"\\n\")\n", 493 | "\n", 494 | "quantization_error = (logits_q - logits).mean()\n", 495 | "print(\"Quantization error = \", quantization_error)" 496 | ] 497 | }, 498 | { 499 | "cell_type": "markdown", 500 | "metadata": {}, 501 | "source": [ 502 | "The quantization error is pretty sizable at 1e-3. \n", 503 | "\n", 504 | "Eyeballing the outputs, the logits from the quantized and non-quantized layers seem fairly different too.\n", 505 | "\n", 506 | "Let's see by how much are the quantized predictions off..." 507 | ] 508 | }, 509 | { 510 | "cell_type": "code", 511 | "execution_count": 15, 512 | "metadata": {}, 513 | "outputs": [ 514 | { 515 | "name": "stdout", 516 | "output_type": "stream", 517 | "text": [ 518 | "Non-Quantized prediction:\n", 519 | "timber wolf, grey wolf, gray wolf, Canis lupus ( 0.44803616404533386 )\n", 520 | "\n", 521 | "Quantized prediction:\n", 522 | "timber wolf, grey wolf, gray wolf, Canis lupus ( 0.445095956325531 )\n" 523 | ] 524 | } 525 | ], 526 | "source": [ 527 | "# check their outputs for same input\n", 528 | "print(\"Non-Quantized prediction:\")\n", 529 | "get_predictions(logits)\n", 530 | "print()\n", 531 | "print(\"Quantized prediction:\")\n", 532 | "get_predictions(logits_q)" 533 | ] 534 | }, 535 | { 536 | "cell_type": "markdown", 537 | "metadata": {}, 538 | "source": [ 539 | "Not by much! 
The quantized logits predict the same class, albeit with slightly lower confidence.\n", 540 | "\n", 541 | "Let's try more images" 542 | ] 543 | }, 544 | { 545 | "cell_type": "code", 546 | "execution_count": 16, 547 | "metadata": {}, 548 | "outputs": [ 549 | { 550 | "name": "stdout", 551 | "output_type": "stream", 552 | "text": [ 553 | "Non-Quantized prediction:\n", 554 | "goose ( 0.5297383666038513 )\n", 555 | "\n", 556 | "Quantized prediction:\n", 557 | "goose ( 0.5486957430839539 )\n" 558 | ] 559 | } 560 | ], 561 | "source": [ 562 | "# Similarly for an image of a swan\n", 563 | "img_url = \"img/swan-3299528_1280.jpeg\"\n", 564 | "# img_url = \"https://static.scientificamerican.com/sciam/cache/file/32665E6F-8D90-4567-9769D59E11DB7F26_source.jpg\"\n", 565 | "# img_url = \"https://media.newyorker.com/photos/5dfab39dde5fcf00086aec77/1:1/w_1706,h_1706,c_limit/Lane-Cats.jpg\"\n", 566 | "\n", 567 | "img = load_img(img_url)\n", 568 | "\n", 569 | "# Non-Quantized\n", 570 | "model = torch.nn.Sequential(resnet, fp32_fc)\n", 571 | "logits = model(img)\n", 572 | "\n", 573 | "# Quantized\n", 574 | "inputs_q = quantize_inputs(img)\n", 575 | "classifier_q = quantize_classifier(fp32_fc)\n", 576 | "logits_q = int8_matmul_sim(inputs_q, classifier_q)\n", 577 | "\n", 578 | "\n", 579 | "# Compare predictions\n", 580 | "print(\"Non-Quantized prediction:\")\n", 581 | "get_predictions(logits)\n", 582 | "print()\n", 583 | "print(\"Quantized prediction:\")\n", 584 | "get_predictions(logits_q)\n", 585 | "\n" 586 | ] 587 | } 588 | ], 589 | "metadata": { 590 | "interpreter": { 591 | "hash": "3d597f4c481aa0f25dceb95d2a0067e73c0966dcbd003d741d821a7208527ecf" 592 | }, 593 | "kernelspec": { 594 | "display_name": "Python 3.9.11 ('base')", 595 | "language": "python", 596 | "name": "python3" 597 | }, 598 | "language_info": { 599 | "codemirror_mode": { 600 | "name": "ipython", 601 | "version": 3 602 | }, 603 | "file_extension": ".py", 604 | "mimetype": "text/x-python", 605 | "name": "python", 606 | "nbconvert_exporter": "python", 607 | "pygments_lexer": "ipython3", 608 | "version": "3.9.11" 609 | }, 610 | "orig_nbformat": 4 611 | }, 612 | "nbformat": 4, 613 | "nbformat_minor": 2 614 | } 615 | -------------------------------------------------------------------------------- /Quant_API.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "import torch" 10 | ] 11 | }, 12 | { 13 | "cell_type": "markdown", 14 | "metadata": {}, 15 | "source": [ 16 | "## Quantization schemes\n", 17 | "\n", 18 | "\n", 19 | "Two sets of schemes:\n", 20 | "* Symmetric\n", 21 | "* Affine\n", 22 | "\n", 23 | "And\n", 24 | "\n", 25 | "* Per-channel\n", 26 | "* Per-Tensor" 27 | ] 28 | }, 29 | { 30 | "cell_type": "markdown", 31 | "metadata": {}, 32 | "source": [ 33 | "### Per-Channel and Per-Tensor\n", 34 | "\n", 35 | "" 36 | ] 37 | }, 38 | { 39 | "cell_type": "code", 40 | "execution_count": null, 41 | "metadata": {}, 42 | "outputs": [], 43 | "source": [ 44 | "x = torch.tensor([\n", 45 | " [0.5827, 0.8619], \n", 46 | " [0.3827, -0.1982], \n", 47 | " [-0.8213, 0.6351]])\n", 48 | "\n", 49 | "print(x.size())" 50 | ] 51 | }, 52 | { 53 | "cell_type": "code", 54 | "execution_count": null, 55 | "metadata": {}, 56 | "outputs": [], 57 | "source": [ 58 | "# per-tensor\n", 59 | "\n", 60 | "scale = torch.tensor(1e-2)\n", 61 | "zero_pt = torch.tensor(0)\n", 62 | "\n", 63 | "xq = 
torch.quantize_per_tensor(x, scale, zero_pt, dtype=torch.qint8)\n", 64 | "print(xq)" 65 | ] 66 | }, 67 | { 68 | "cell_type": "code", 69 | "execution_count": null, 70 | "metadata": {}, 71 | "outputs": [], 72 | "source": [ 73 | "# per-channel\n", 74 | "\n", 75 | "channel_axis = 0\n", 76 | "scale = torch.tensor([1e-2, 1e-3, 5e-2])\n", 77 | "zero_pt = torch.zeros(3)\n", 78 | "\n", 79 | "xq = torch.quantize_per_channel(x, scale, zero_pt, dtype=torch.qint8, axis=0)\n", 80 | "print(xq)" 81 | ] 82 | }, 83 | { 84 | "cell_type": "markdown", 85 | "metadata": {}, 86 | "source": [ 87 | "### Symmetric and Affine\n", 88 | "\n", 89 | "Symmetric\n", 90 | "* Input range is calculated symmetrically around 0\n", 91 | "* Good for quantizing weights\n", 92 | "* Wasteful for quantizing activations - why?\n", 93 | "\n", 94 | "Affine \n", 95 | "* Clips the input tightly \n", 96 | "\n", 97 | "\n", 98 | "" 99 | ] 100 | }, 101 | { 102 | "cell_type": "markdown", 103 | "metadata": {}, 104 | "source": [ 105 | "### Observers\n", 106 | "\n", 107 | "" 108 | ] 109 | }, 110 | { 111 | "cell_type": "code", 112 | "execution_count": null, 113 | "metadata": {}, 114 | "outputs": [], 115 | "source": [ 116 | "from torch.ao.quantization.observer import MovingAverageMinMaxObserver, HistogramObserver, MovingAveragePerChannelMinMaxObserver\n", 117 | "\n", 118 | "size = (3,4)\n", 119 | "normal = torch.distributions.normal.Normal(0,1)\n", 120 | "input = [normal.sample(size) for _ in range(3)]\n", 121 | "\n", 122 | "observers = [\n", 123 | " MovingAverageMinMaxObserver(qscheme=torch.per_tensor_affine), \n", 124 | " HistogramObserver(), \n", 125 | " MovingAveragePerChannelMinMaxObserver(qscheme=torch.per_channel_symmetric)\n", 126 | " ]\n", 127 | "\n" 128 | ] 129 | }, 130 | { 131 | "cell_type": "code", 132 | "execution_count": null, 133 | "metadata": {}, 134 | "outputs": [], 135 | "source": [ 136 | "for obs in observers:\n", 137 | " for x in input: \n", 138 | " obs(x) \n", 139 | " print(obs.__class__.__name__, obs.calculate_qparams())\n" 140 | ] 141 | }, 142 | { 143 | "cell_type": "markdown", 144 | "metadata": {}, 145 | "source": [ 146 | "### QConfig\n", 147 | "\n", 148 | "* High-level abstraction wrapping these knobs in one object\n", 149 | "* Allows separate configuration for activation and weights of a layer" 150 | ] 151 | }, 152 | { 153 | "cell_type": "code", 154 | "execution_count": null, 155 | "metadata": {}, 156 | "outputs": [], 157 | "source": [ 158 | "from torch.ao.quantization.observer import MovingAverageMinMaxObserver, MovingAveragePerChannelMinMaxObserver\n", 159 | "from torch.ao.quantization.qconfig import QConfig\n", 160 | "\n", 161 | "my_qconfig = QConfig(\n", 162 | " activation=MovingAverageMinMaxObserver.with_args(\n", 163 | " qscheme=torch.per_tensor_affine,\n", 164 | " dtype=torch.quint8),\n", 165 | " weight=MovingAveragePerChannelMinMaxObserver.with_args(\n", 166 | " qscheme=torch.per_channel_symmetric)\n", 167 | ")\n" 168 | ] 169 | }, 170 | { 171 | "cell_type": "markdown", 172 | "metadata": {}, 173 | "source": [ 174 | "#### Default QConfigs out of the box" 175 | ] 176 | }, 177 | { 178 | "cell_type": "code", 179 | "execution_count": null, 180 | "metadata": {}, 181 | "outputs": [], 182 | "source": [ 183 | "torch.quantization.qconfig.default_per_channel_qconfig" 184 | ] 185 | }, 186 | { 187 | "cell_type": "code", 188 | "execution_count": null, 189 | "metadata": {}, 190 | "outputs": [], 191 | "source": [ 192 | "print(torch.quantization.qconfig.default_dynamic_qconfig)" 193 | ] 194 | }, 195 | { 196 | "cell_type": "code", 
197 | "execution_count": null, 198 | "metadata": {}, 199 | "outputs": [], 200 | "source": [ 201 | "print(torch.quantization.qconfig.per_channel_dynamic_qconfig)" 202 | ] 203 | } 204 | ], 205 | "metadata": { 206 | "interpreter": { 207 | "hash": "5b2c14c5f2a3b21e6c2412c8196f5145870350e81c0b737cae3e5c60eb1e1eac" 208 | }, 209 | "kernelspec": { 210 | "display_name": "Python 3.8.12 ('pytorch_p38': conda)", 211 | "language": "python", 212 | "name": "python3" 213 | }, 214 | "language_info": { 215 | "codemirror_mode": { 216 | "name": "ipython", 217 | "version": 3 218 | }, 219 | "file_extension": ".py", 220 | "mimetype": "text/x-python", 221 | "name": "python", 222 | "nbconvert_exporter": "python", 223 | "pygments_lexer": "ipython3", 224 | "version": "3.8.12" 225 | }, 226 | "orig_nbformat": 4 227 | }, 228 | "nbformat": 4, 229 | "nbformat_minor": 2 230 | } 231 | -------------------------------------------------------------------------------- /Quant_Workflow.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "### Prerequisites" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": null, 13 | "metadata": {}, 14 | "outputs": [], 15 | "source": [ 16 | "# !pip install torchmetrics" 17 | ] 18 | }, 19 | { 20 | "cell_type": "code", 21 | "execution_count": 3, 22 | "metadata": {}, 23 | "outputs": [], 24 | "source": [ 25 | "import torch\n", 26 | "import torch.nn.functional as F\n", 27 | "from torchvision import models, transforms, datasets\n", 28 | "from copy import deepcopy\n", 29 | "import requests\n", 30 | "from PIL import Image\n", 31 | "from resnet_cifar import Trainer, cifar_dataloader\n", 32 | "\n", 33 | "\n", 34 | "def load_img(url):\n", 35 | " IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD = ((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))\n", 36 | " transform = transforms.Compose([\n", 37 | " transforms.Resize(256, interpolation=transforms.InterpolationMode.BICUBIC),\n", 38 | " transforms.CenterCrop(224),\n", 39 | " transforms.ToTensor(),\n", 40 | " transforms.Normalize(IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD)\n", 41 | " ])\n", 42 | " if url.startswith(\"https\"):\n", 43 | " img = Image.open(requests.get(url, stream=True).raw)\n", 44 | " else:\n", 45 | " img = Image.open(url)\n", 46 | " img = transform(img).unsqueeze(0)\n", 47 | " return img\n", 48 | "\n", 49 | "\n", 50 | "def get_predictions(outp):\n", 51 | " cls_idx = {\n", 52 | " 0: 'airplane',\n", 53 | " 1: 'automobile',\n", 54 | " 2: 'bird',\n", 55 | " 3: 'cat',\n", 56 | " 4: 'deer',\n", 57 | " 5: 'dog',\n", 58 | " 6: 'frog',\n", 59 | " 7: 'horse',\n", 60 | " 8: 'ship',\n", 61 | " 9: 'truck'}\n", 62 | " outp = F.softmax(outp, dim=1)\n", 63 | " score, idx = torch.topk(outp, 1)\n", 64 | " idx.squeeze_()\n", 65 | " predicted_label = cls_idx[idx.item()]\n", 66 | " print(predicted_label, '(', score.squeeze().item(), ')')\n", 67 | "\n", 68 | "\n", 69 | "def print_sizeof(model):\n", 70 | " total = 0\n", 71 | " for p in model.parameters():\n", 72 | " total += p.numel() * p.element_size()\n", 73 | " total /= 1e6\n", 74 | " print(\"Model size: \", total, \" MB\")\n" 75 | ] 76 | }, 77 | { 78 | "cell_type": "code", 79 | "execution_count": 4, 80 | "metadata": {}, 81 | "outputs": [ 82 | { 83 | "name": "stdout", 84 | "output_type": "stream", 85 | "text": [ 86 | "1.10.2\n" 87 | ] 88 | } 89 | ], 90 | "source": [ 91 | "print(torch.__version__)" 92 | ] 93 | }, 94 | { 95 | "cell_type": "markdown", 96 | "metadata": {}, 97 
| "source": [ 98 | "## Flowchart for using Quantization in PyTorch\n", 99 | "\n", 100 | "" 101 | ] 102 | }, 103 | { 104 | "cell_type": "markdown", 105 | "metadata": {}, 106 | "source": [ 107 | "## 10M+ Parameters?\n", 108 | "\n", 109 | "\n", 110 | "\n", 111 | "Quantization works best on models with 10M+ parameters. [[1](https://arxiv.org/pdf/1806.08342.pdf)]\n", 112 | "\n", 113 | "Large models are more robust to quantization error. Overparameterized models generally have more degrees of freedom and can afford the precision drops with quantization.\n", 114 | "\n", 115 | "As with most thumb rules, YMMV. Quantization is an active area of research, and this might become more permissive." 116 | ] 117 | }, 118 | { 119 | "cell_type": "code", 120 | "execution_count": 5, 121 | "metadata": {}, 122 | "outputs": [ 123 | { 124 | "name": "stdout", 125 | "output_type": "stream", 126 | "text": [ 127 | "resnet18: (True, 11.0)\n", 128 | "resnet50: (True, 25.0)\n", 129 | "mobilenet_large: (False, 5.0)\n", 130 | "\n" 131 | ] 132 | } 133 | ], 134 | "source": [ 135 | "def is_large_enough(model):\n", 136 | " n_params = sum([p.numel() for p in model.parameters()])\n", 137 | " return n_params > 1e7, n_params // 1e6\n", 138 | "\n", 139 | "print(\"resnet18: \", is_large_enough(models.resnet18()))\n", 140 | "print(\"resnet50: \", is_large_enough(models.resnet50()))\n", 141 | "print(\"mobilenet_large: \", is_large_enough(models.mobilenet_v3_large()))\n", 142 | "print()" 143 | ] 144 | }, 145 | { 146 | "cell_type": "markdown", 147 | "metadata": {}, 148 | "source": [ 149 | "## FP32-pretrained checkpoint?\n", 150 | "\n", 151 | "\n", 152 | "\n", 153 | "Quantized inference works best on models that were originally trained in FP32 (like all non-quantized pretrained models in PyTorch (vision, audio and text)). \n", 154 | "Even Quantization-Aware Training (more on this below) uses FP32 arithmetic to train the parameters.\n", 155 | "\n", 156 | "....\n", 157 | "\n", 158 | "In this exercise, we'll use an FP32 Imagenet-pretrained Resnet that is finetuned to CIFAR10." 159 | ] 160 | }, 161 | { 162 | "cell_type": "code", 163 | "execution_count": 6, 164 | "metadata": {}, 165 | "outputs": [ 166 | { 167 | "name": "stderr", 168 | "output_type": "stream", 169 | "text": [ 170 | "Downloading: \"https://quantization-workshop.s3.amazonaws.com/resnet50_cifar_weights.pth\" to /Users/subramen/.cache/torch/hub/checkpoints/resnet50_cifar_weights.pth\n" 171 | ] 172 | }, 173 | { 174 | "data": { 175 | "application/vnd.jupyter.widget-view+json": { 176 | "model_id": "0663437d03014d39969105b63927093e", 177 | "version_major": 2, 178 | "version_minor": 0 179 | }, 180 | "text/plain": [ 181 | " 0%| | 0.00/90.1M [00:00\n", 418 | "\n", 419 | "Backend refers to the hardware-specific kernels that support quantization. This controls the numerics engine that does the integer arithmetic.\n", 420 | "\n", 421 | "`torch.backends.quantized.engine` specifies the backend to be used.\n", 422 | "\n", 423 | "Using an incorrect backend engine for your hardware will result in (much) slower inference." 
424 | ] 425 | }, 426 | { 427 | "cell_type": "code", 428 | "execution_count": 8, 429 | "metadata": {}, 430 | "outputs": [ 431 | { 432 | "name": "stdout", 433 | "output_type": "stream", 434 | "text": [ 435 | "Using qnnpack backend engine for arm CPU\n" 436 | ] 437 | } 438 | ], 439 | "source": [ 440 | "import platform\n", 441 | "chip = platform.processor()\n", 442 | "\n", 443 | "if chip == 'arm':\n", 444 | " backend = 'qnnpack'\n", 445 | "elif chip in ['x86_64', 'i386']:\n", 446 | " backend = 'fbgemm'\n", 447 | "else:\n", 448 | " raise SystemError(\"Backend is not supported\")\n", 449 | "\n", 450 | "print(f\"Using {backend} backend engine for {chip} CPU\")\n", 451 | "\n", 452 | "torch.backends.quantized.engine = backend" 453 | ] 454 | }, 455 | { 456 | "cell_type": "markdown", 457 | "metadata": {}, 458 | "source": [ 459 | "## Profile FP32 model inference\n", 460 | "\n", 461 | "\n", 462 | "\n", 463 | "Let's establish a baseline for model size, inference latency and accuracy" 464 | ] 465 | }, 466 | { 467 | "cell_type": "code", 468 | "execution_count": 9, 469 | "metadata": {}, 470 | "outputs": [], 471 | "source": [ 472 | "import os\n", 473 | " \n", 474 | "def print_size_of_model(model):\n", 475 | " torch.jit.script(model).save(\"temp.p\")\n", 476 | " print('Size (MB):', os.path.getsize(\"temp.p\")/1e6)\n", 477 | " os.remove('temp.p')\n", 478 | "\n", 479 | "def profile(model):\n", 480 | " print_size_of_model(model)\n", 481 | " print(\"=\"*20)\n", 482 | " Trainer(model, -1).evaluate(max_batch=30) # latency + accuracy on CIFAR test set " 483 | ] 484 | }, 485 | { 486 | "cell_type": "code", 487 | "execution_count": 10, 488 | "metadata": {}, 489 | "outputs": [ 490 | { 491 | "name": "stdout", 492 | "output_type": "stream", 493 | "text": [ 494 | "Resnet FP32 profile:\n", 495 | "Size (MB): 94.469753\n", 496 | "====================\n", 497 | "Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./cifar_data/cifar-10-python.tar.gz\n" 498 | ] 499 | }, 500 | { 501 | "data": { 502 | "application/vnd.jupyter.widget-view+json": { 503 | "model_id": "d0c1ccc63a89439796d464f229d974e4", 504 | "version_major": 2, 505 | "version_minor": 0 506 | }, 507 | "text/plain": [ 508 | " 0%| | 0/170498071 [00:00\n", 539 | "\n", 540 | "While dynamic quantization has more overhead than static quantization, some operators (like recurrent layers) aren't supported by static quantization. (See [Operator coverage](https://pytorch.org/docs/stable/quantization.html#:~:text=these%20quantization%20types.-,Operator%20coverage,-varies%20between%20dynamic)).\n", 541 | "\n", 542 | "Knowing which layers are in our model can inform our quantization strategy.\n", 543 | "\n", 544 | "\n", 545 | "#### Thumb rule\n", 546 | "\n", 547 | "* For recurrent and transformer layers, use Dynamic quantization.\n", 548 | "* For linear layers, you can use either Dynamic or Static quantization.\n", 549 | "* For everything else, use Static quantization." 
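For the dynamic-quantization branch of that rule of thumb, the eager-mode API is a single call. A minimal sketch on a stand-alone LSTM (illustrative only; this module is not part of the workshop model):

```python
import torch

# Dynamic quantization stores weights as INT8 and quantizes activations on the
# fly at inference time, so no calibration data is needed.
lstm = torch.nn.LSTM(input_size=128, hidden_size=256, num_layers=2)
lstm_int8 = torch.quantization.quantize_dynamic(
    lstm,                # module (or whole model) to quantize
    {torch.nn.LSTM},     # module types to target
    dtype=torch.qint8,
)
print(lstm_int8)
```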
550 | ] 551 | }, 552 | { 553 | "cell_type": "code", 554 | "execution_count": 11, 555 | "metadata": {}, 556 | "outputs": [ 557 | { 558 | "name": "stdout", 559 | "output_type": "stream", 560 | "text": [ 561 | "Model consists of: Counter({'Conv2d': 53, 'BatchNorm2d': 53, 'ReLU': 17, 'Bottleneck': 16, 'Sequential': 8, 'ResNet': 1, 'MaxPool2d': 1, 'AdaptiveAvgPool2d': 1, 'Linear': 1})\n", 562 | "\n", 563 | "Dynamic quantization\n", 564 | "====================\n", 565 | "Layers: 1 || Parameters: 20480\n", 566 | "\n", 567 | "Static quantization\n", 568 | "====================\n", 569 | "Layers: 54 || Parameters: 2.34754e+07\n" 570 | ] 571 | } 572 | ], 573 | "source": [ 574 | "def optimal_quant_strategy(model):\n", 575 | " from collections import Counter\n", 576 | " layer_counts = Counter([type(x).__name__ for x in model.modules()])\n", 577 | " print(\"Model consists of: \", layer_counts)\n", 578 | " \n", 579 | " dyn = [0, 0]\n", 580 | " stat = [0, 0]\n", 581 | "\n", 582 | " for m in model.modules():\n", 583 | " if hasattr(m, 'weight'): \n", 584 | " name = type(m).__name__\n", 585 | " params = m.weight.numel()\n", 586 | " if name in ['RNN', 'LSTM', 'GRU', 'LSTMCell', 'RNNCell', 'GRUCell', 'Linear']:\n", 587 | " dyn[0] += 1\n", 588 | " dyn[1] += params\n", 589 | " if 'Conv' in name or name == 'Linear':\n", 590 | " stat[0] += 1\n", 591 | " stat[1] += params\n", 592 | " print()\n", 593 | " print(\"Dynamic quantization\")\n", 594 | " print(\"====================\")\n", 595 | " print(f\"Layers: {dyn[0]} || Parameters: {format(dyn[1], 'g')}\")\n", 596 | " print()\n", 597 | " print(\"Static quantization\")\n", 598 | " print(\"====================\")\n", 599 | " print(f\"Layers: {stat[0]} || Parameters: {format(stat[1], 'g')}\")\n", 600 | " \n", 601 | "\n", 602 | "optimal_quant_strategy(resnet)" 603 | ] 604 | }, 605 | { 606 | "cell_type": "markdown", 607 | "metadata": {}, 608 | "source": [ 609 | "## Try Dynamic Quantization\n", 610 | "\n", 611 | "\n", 612 | "\n", 613 | "[Dynamic Quantization API](https://pytorch.org/docs/stable/generated/torch.quantization.quantize_dynamic.html?highlight=quantize_dynamic#torch.quantization.quantize_dynamic)\n", 614 | "\n", 615 | "[Dynamic Quantization Tutorial](https://pytorch.org/tutorials/recipes/recipes/dynamic_quantization.html)" 616 | ] 617 | }, 618 | { 619 | "cell_type": "code", 620 | "execution_count": 12, 621 | "metadata": {}, 622 | "outputs": [ 623 | { 624 | "name": "stderr", 625 | "output_type": "stream", 626 | "text": [ 627 | "/opt/miniconda3/lib/python3.9/site-packages/torch/quantization/fx/quantization_patterns.py:616: UserWarning: dtype combination: (torch.float32, torch.qint8, torch.quint8) is not supported by Conv supported dtype combinations are: [(torch.quint8, torch.qint8, None)]\n", 628 | " warnings.warn(\n", 629 | "/opt/miniconda3/lib/python3.9/site-packages/torch/quantization/fx/quantization_patterns.py:484: UserWarning: dtype combination: (torch.float32, torch.qint8, torch.quint8) is not supported by for is_reference=False. 
Supported non-reference dtype combinations are: [(torch.qint8, torch.qint8, None), (torch.quint8, torch.qint8, None), (torch.float16, torch.float16, None)] \n", 630 | " warnings.warn(\n" 631 | ] 632 | } 633 | ], 634 | "source": [ 635 | "from torch.quantization.quantize_fx import prepare_fx, convert_fx\n", 636 | "\n", 637 | "dynamic_qconfig = torch.quantization.default_dynamic_qconfig\n", 638 | "qconfig_dict = {\n", 639 | " # Global Config\n", 640 | " \"\": dynamic_qconfig\n", 641 | "}\n", 642 | "\n", 643 | "model_prepared = prepare_fx(resnet, qconfig_dict)\n", 644 | "dynamic_resnet = convert_fx(model_prepared)" 645 | ] 646 | }, 647 | { 648 | "cell_type": "markdown", 649 | "metadata": {}, 650 | "source": [ 651 | "### Evaluate performance of dynamic-quantized Resnet model" 652 | ] 653 | }, 654 | { 655 | "cell_type": "code", 656 | "execution_count": 13, 657 | "metadata": {}, 658 | "outputs": [ 659 | { 660 | "name": "stdout", 661 | "output_type": "stream", 662 | "text": [ 663 | "Resnet Dynamic-Quant Profile:\n", 664 | "Size (MB): 94.057089\n", 665 | "====================\n", 666 | "Files already downloaded and verified\n", 667 | "Files already downloaded and verified\n", 668 | "Loss: 0.5846356943249702 \n", 669 | "Accuracy: 0.8723958333333334\n", 670 | "====================\n", 671 | "Time taken (1920 CIFAR test samples): 78.73776507377625\n" 672 | ] 673 | } 674 | ], 675 | "source": [ 676 | "print(\"Resnet Dynamic-Quant Profile:\")\n", 677 | "profile(dynamic_resnet)" 678 | ] 679 | }, 680 | { 681 | "cell_type": "markdown", 682 | "metadata": {}, 683 | "source": [ 684 | "## Try Static Quantization\n", 685 | "\n", 686 | "\n", 687 | "
\n", 688 | "\n", 689 | "\n", 690 | "Static quantization has faster inference than dynamic quantization because it eliminates the float<->int conversion costs between layers\n", 691 | "\n", 692 | "### Manual approach - using Eager Mode\n", 693 | "\n", 694 | "Explicitly perform the following steps:\n", 695 | "\n", 696 | "\n", 697 | "\n", 698 | "* Manually identify sequence of fusable modules\n", 699 | "* Manually insert stubs to quantize and dequantize activations\n", 700 | "* Functional ops (eg: `torch.nn.functional.linear`) aren't supported\n", 701 | "\n", 702 | "[Module Fusion Tutorial](https://pytorch.org/tutorials/recipes/fuse.html)\n", 703 | "\n", 704 | "[Static Quantization (Eager Mode) Tutorial](https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html)" 705 | ] 706 | }, 707 | { 708 | "cell_type": "markdown", 709 | "metadata": {}, 710 | "source": [ 711 | "### Easier approach - using FX Graph Mode\n", 712 | "\n", 713 | "\n", 714 | "\n", 715 | "* Just 2 function calls: `prepare_fx` and `convert_fx`\n", 716 | "* Automates all the above steps under the hood using `torch.fx`\n", 717 | "\n", 718 | "[`prepare_fx` API](https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html#torch.quantization.quantize_fx.prepare_fx)\n", 719 | "\n", 720 | "[`convert_fx` API](https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.convert_fx.html#torch.quantization.quantize_fx.convert_fx)\n", 721 | "\n", 722 | "[Static Quantization (FX Graph Mode) Tutorial](https://pytorch.org/tutorials/prototype/fx_graph_mode_ptq_static.html)" 723 | ] 724 | }, 725 | { 726 | "cell_type": "markdown", 727 | "metadata": {}, 728 | "source": [ 729 | "#### QConfig\n", 730 | "\n", 731 | "In FX Quantization, the `qconfig_dict` offers fine-grained control of the model's quantization process.\n", 732 | "Setting a `qconfig=None` skips quantization for that module.\n", 733 | "\n", 734 | "```python\n", 735 | "qconfig_dict = {\n", 736 | " # Global Config\n", 737 | " \"\": qconfig,\n", 738 | "\n", 739 | " # Module-specific config (by class)\n", 740 | " \"object_type\": [\n", 741 | " (torch.nn.Conv2d, qconfig),\n", 742 | " (torch.nn.functional.add, None), # skips quantization for this module\n", 743 | " ...,\n", 744 | " ],\n", 745 | " \n", 746 | " # Module-specific config (by name)\n", 747 | " \"module_name\": [\n", 748 | " (\"foo.bar\", qconfig)\n", 749 | " ...,\n", 750 | " ],\n", 751 | "}\n", 752 | "```" 753 | ] 754 | }, 755 | { 756 | "cell_type": "code", 757 | "execution_count": 14, 758 | "metadata": {}, 759 | "outputs": [], 760 | "source": [ 761 | "static_qconfig = torch.quantization.get_default_qconfig(backend)\n", 762 | "qconfig_dict = {\n", 763 | " # Global Config\n", 764 | " \"\": static_qconfig,\n", 765 | "}" 766 | ] 767 | }, 768 | { 769 | "cell_type": "code", 770 | "execution_count": 15, 771 | "metadata": {}, 772 | "outputs": [ 773 | { 774 | "name": "stdout", 775 | "output_type": "stream", 776 | "text": [ 777 | "Files already downloaded and verified\n", 778 | "Files already downloaded and verified\n" 779 | ] 780 | }, 781 | { 782 | "name": "stderr", 783 | "output_type": "stream", 784 | "text": [ 785 | "/opt/miniconda3/lib/python3.9/site-packages/torch/ao/quantization/observer.py:886: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. 
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n", 786 | " src_bin_begin // dst_bin_width, 0, self.dst_nbins - 1\n", 787 | "/opt/miniconda3/lib/python3.9/site-packages/torch/ao/quantization/observer.py:891: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n", 788 | " src_bin_end // dst_bin_width, 0, self.dst_nbins - 1\n" 789 | ] 790 | } 791 | ], 792 | "source": [ 793 | "from torchvision import datasets, transforms\n", 794 | "from torch.quantization.quantize_fx import prepare_fx, convert_fx\n", 795 | "\n", 796 | "\n", 797 | "def static_quantize_vision_model(model, qconfig_dict):\n", 798 | " _, data = cifar_dataloader()\n", 799 | " mp = prepare_fx(model, qconfig_dict)\n", 800 | "\n", 801 | " for c, (x, y) in enumerate(data):\n", 802 | " if c == 30:\n", 803 | " break\n", 804 | " mp(x)\n", 805 | " \n", 806 | " mc = convert_fx(mp)\n", 807 | " return mc\n", 808 | "\n", 809 | "static_resnet = static_quantize_vision_model(resnet, qconfig_dict)" 810 | ] 811 | }, 812 | { 813 | "cell_type": "markdown", 814 | "metadata": {}, 815 | "source": [ 816 | "### Evaluate performance of static-quantized model" 817 | ] 818 | }, 819 | { 820 | "cell_type": "code", 821 | "execution_count": 16, 822 | "metadata": {}, 823 | "outputs": [ 824 | { 825 | "name": "stdout", 826 | "output_type": "stream", 827 | "text": [ 828 | "Resnet Static-Quant Profile:\n", 829 | "Size (MB): 23.661729\n", 830 | "====================\n", 831 | "Files already downloaded and verified\n", 832 | "Files already downloaded and verified\n", 833 | "Loss: 0.5700115218758584 \n", 834 | "Accuracy: 0.8708333333333333\n", 835 | "====================\n", 836 | "Time taken (1920 CIFAR test samples): 34.77193212509155\n" 837 | ] 838 | } 839 | ], 840 | "source": [ 841 | "print(\"Resnet Static-Quant Profile:\")\n", 842 | "profile(static_resnet)" 843 | ] 844 | }, 845 | { 846 | "cell_type": "markdown", 847 | "metadata": {}, 848 | "source": [ 849 | "### Sensitivity Analysis - Which quantized layers affect accuracy the most?\n", 850 | "\n", 851 | "\n", 852 | "
\n", 853 | "\n", 854 | "Some layers are more sensitive to precision drops than others. PyTorch provides tools to help with this analysis under the Numeric Suite.\n", 855 | "\n", 856 | "[Numeric Suite Tutorial](https://pytorch.org/tutorials/prototype/numeric_suite_tutorial.html)\n" 857 | ] 858 | }, 859 | { 860 | "cell_type": "code", 861 | "execution_count": 17, 862 | "metadata": {}, 863 | "outputs": [], 864 | "source": [ 865 | "import torch.quantization._numeric_suite as ns\n", 866 | "\n", 867 | "def SNR(x, y):\n", 868 | " # Higher is better\n", 869 | " Ps = torch.norm(x)\n", 870 | " Pn = torch.norm(x-y)\n", 871 | " return 20 * torch.log10(Ps/Pn)\n", 872 | "\n", 873 | "def compare_model_weights(float_model, quant_model):\n", 874 | " snr_dict = {}\n", 875 | " wt_compare_dict = ns.compare_weights(float_model.state_dict(), quant_model.state_dict())\n", 876 | " for param_name, weight in wt_compare_dict.items():\n", 877 | " snr = SNR(weight['float'], weight['quantized'].dequantize())\n", 878 | " snr_dict[param_name] = snr\n", 879 | "\n", 880 | " return snr_dict" 881 | ] 882 | }, 883 | { 884 | "cell_type": "markdown", 885 | "metadata": {}, 886 | "source": [ 887 | "Layer-by-layer comparison of model weights \n", 888 | "\n", 889 | "" 890 | ] 891 | }, 892 | { 893 | "cell_type": "code", 894 | "execution_count": 18, 895 | "metadata": {}, 896 | "outputs": [ 897 | { 898 | "name": "stdout", 899 | "output_type": "stream", 900 | "text": [ 901 | "{'conv1.weight': tensor(0.1324), 'layer1.0.conv1.weight': tensor(1.6452), 'layer1.0.conv2.weight': tensor(-2.4947), 'layer1.0.conv3.weight': tensor(-10.0644), 'layer1.0.downsample.0.weight': tensor(1.3075), 'layer1.1.conv1.weight': tensor(3.2131), 'layer1.1.conv2.weight': tensor(1.2505), 'layer1.1.conv3.weight': tensor(-9.6478), 'layer1.2.conv1.weight': tensor(-0.2502), 'layer1.2.conv2.weight': tensor(-6.6366), 'layer1.2.conv3.weight': tensor(-12.0677), 'layer2.0.conv1.weight': tensor(-0.1468), 'layer2.0.conv2.weight': tensor(1.0737), 'layer2.0.conv3.weight': tensor(-9.7258), 'layer2.0.downsample.0.weight': tensor(-0.3786), 'layer2.1.conv1.weight': tensor(6.1034), 'layer2.1.conv2.weight': tensor(-3.3722), 'layer2.1.conv3.weight': tensor(-13.6104), 'layer2.2.conv1.weight': tensor(5.4238), 'layer2.2.conv2.weight': tensor(-5.2156), 'layer2.2.conv3.weight': tensor(-13.1915), 'layer2.3.conv1.weight': tensor(3.8682), 'layer2.3.conv2.weight': tensor(-1.7579), 'layer2.3.conv3.weight': tensor(-10.4095), 'layer3.0.conv1.weight': tensor(0.8389), 'layer3.0.conv2.weight': tensor(2.5050), 'layer3.0.conv3.weight': tensor(-8.3334), 'layer3.0.downsample.0.weight': tensor(1.4675), 'layer3.1.conv1.weight': tensor(-3.4588), 'layer3.1.conv2.weight': tensor(-10.1038), 'layer3.1.conv3.weight': tensor(-14.9413), 'layer3.2.conv1.weight': tensor(-5.7936), 'layer3.2.conv2.weight': tensor(-13.7164), 'layer3.2.conv3.weight': tensor(-15.2606), 'layer3.3.conv1.weight': tensor(-6.9320), 'layer3.3.conv2.weight': tensor(-14.8504), 'layer3.3.conv3.weight': tensor(-15.1983), 'layer3.4.conv1.weight': tensor(-7.3590), 'layer3.4.conv2.weight': tensor(-11.8682), 'layer3.4.conv3.weight': tensor(-13.7047), 'layer3.5.conv1.weight': tensor(-6.7509), 'layer3.5.conv2.weight': tensor(-13.1252), 'layer3.5.conv3.weight': tensor(-15.3381), 'layer4.0.conv1.weight': tensor(-5.8595), 'layer4.0.conv2.weight': tensor(-10.9096), 'layer4.0.conv3.weight': tensor(-19.0842), 'layer4.0.downsample.0.weight': tensor(-15.0892), 'layer4.1.conv1.weight': tensor(5.9120), 'layer4.1.conv2.weight': tensor(-17.6098), 
'layer4.1.conv3.weight': tensor(-22.6924), 'layer4.2.conv1.weight': tensor(7.8386), 'layer4.2.conv2.weight': tensor(-17.1798), 'layer4.2.conv3.weight': tensor(-28.5748), 'fc._packed_params._packed_params': tensor(29.6848)}\n" 902 | ] 903 | } 904 | ], 905 | "source": [ 906 | "snrd = compare_model_weights(resnet, static_resnet)\n", 907 | "print(snrd)" 908 | ] 909 | }, 910 | { 911 | "cell_type": "code", 912 | "execution_count": 19, 913 | "metadata": {}, 914 | "outputs": [ 915 | { 916 | "name": "stdout", 917 | "output_type": "stream", 918 | "text": [ 919 | "dict_keys(['layer4.2.conv3', 'layer4.1.conv3', 'layer4.0.conv3', 'layer4.1.conv2', 'layer4.2.conv2'])\n" 920 | ] 921 | } 922 | ], 923 | "source": [ 924 | "def topk_sensitive_layers(snr_dict, k):\n", 925 | " snr_dict = dict(sorted(snr_dict.items(), key=lambda x:x[1]))\n", 926 | " snr_dict = {k.replace('.weight', ''):v for k,v in list(snr_dict.items())[:k]}\n", 927 | " return snr_dict\n", 928 | " \n", 929 | "sensitive_layers = topk_sensitive_layers(snrd, 5).keys()\n", 930 | "print(sensitive_layers)" 931 | ] 932 | }, 933 | { 934 | "cell_type": "markdown", 935 | "metadata": {}, 936 | "source": [ 937 | "## Selective Static Quantization" 938 | ] 939 | }, 940 | { 941 | "cell_type": "code", 942 | "execution_count": 20, 943 | "metadata": {}, 944 | "outputs": [ 945 | { 946 | "name": "stdout", 947 | "output_type": "stream", 948 | "text": [ 949 | "Files already downloaded and verified\n", 950 | "Files already downloaded and verified\n" 951 | ] 952 | } 953 | ], 954 | "source": [ 955 | "sensitive_layers = topk_sensitive_layers(snrd, 5).keys()\n", 956 | "\n", 957 | "qconfig_dict = {\n", 958 | " # Global Config\n", 959 | " \"\": static_qconfig,\n", 960 | "\n", 961 | " # Disable for sensitive modules\n", 962 | " \"module_name\": [(m, None) for m in sensitive_layers],\n", 963 | "}\n", 964 | "\n", 965 | "sel_static_resnet = static_quantize_vision_model(resnet, qconfig_dict)" 966 | ] 967 | }, 968 | { 969 | "cell_type": "markdown", 970 | "metadata": {}, 971 | "source": [ 972 | "### Evaluate performance of selective static-quantized model" 973 | ] 974 | }, 975 | { 976 | "cell_type": "code", 977 | "execution_count": 21, 978 | "metadata": {}, 979 | "outputs": [ 980 | { 981 | "name": "stdout", 982 | "output_type": "stream", 983 | "text": [ 984 | "Resnet Selective-Static-Quant Profile:\n", 985 | "Size (MB): 47.265735\n", 986 | "====================\n", 987 | "Files already downloaded and verified\n", 988 | "Files already downloaded and verified\n", 989 | "Loss: 0.5818707610170046 \n", 990 | "Accuracy: 0.8703125\n", 991 | "====================\n", 992 | "Time taken (1920 CIFAR test samples): 42.71668791770935\n" 993 | ] 994 | } 995 | ], 996 | "source": [ 997 | "print(\"Resnet Selective-Static-Quant Profile:\")\n", 998 | "profile(sel_static_resnet)" 999 | ] 1000 | }, 1001 | { 1002 | "cell_type": "markdown", 1003 | "metadata": {}, 1004 | "source": [ 1005 | "## Quantization-Aware Training\n", 1006 | "\n", 1007 | "" 1008 | ] 1009 | }, 1010 | { 1011 | "cell_type": "code", 1012 | "execution_count": 22, 1013 | "metadata": {}, 1014 | "outputs": [], 1015 | "source": [ 1016 | "from torch.quantization.quantize_fx import prepare_qat_fx\n", 1017 | "from resnet_cifar import Trainer, cifar_dataloader\n", 1018 | "\n", 1019 | "sensitive_layers = topk_sensitive_layers(snrd, 5).keys()\n", 1020 | "\n", 1021 | "qat_qconfig = torch.quantization.get_default_qat_qconfig(backend)\n", 1022 | "qconfig_dict = {\n", 1023 | " # Global Config\n", 1024 | " \"\": qat_qconfig,\n", 1025 | "}\n", 1026 
| "\n", 1027 | "def qat_vision_model(model, qconfig):\n", 1028 | " model.train()\n", 1029 | " mp = prepare_qat_fx(model, qconfig)\n", 1030 | "\n", 1031 | " # training loop\n", 1032 | " trainer = Trainer(mp, epochs=120, device='cuda') \n", 1033 | " trainer.run_epoch()\n", 1034 | " mp = mp.cpu()\n", 1035 | "\n", 1036 | " mc = convert_fx(mp)\n", 1037 | " return mc" 1038 | ] 1039 | }, 1040 | { 1041 | "cell_type": "markdown", 1042 | "metadata": {}, 1043 | "source": [ 1044 | "Training this for 120 epochs takes about 2 hours on a single Tesla V100 GPU.\n", 1045 | "If you don't want to wait, download the QAT weights instead" 1046 | ] 1047 | }, 1048 | { 1049 | "cell_type": "code", 1050 | "execution_count": 24, 1051 | "metadata": {}, 1052 | "outputs": [ 1053 | { 1054 | "name": "stderr", 1055 | "output_type": "stream", 1056 | "text": [ 1057 | "/opt/miniconda3/lib/python3.9/site-packages/torch/serialization.py:602: UserWarning: 'torch.load' received a zip file that looks like a TorchScript archive dispatching to 'torch.jit.load' (call 'torch.jit.load' directly to silence this warning)\n", 1058 | " warnings.warn(\"'torch.load' received a zip file that looks like a TorchScript archive\"\n" 1059 | ] 1060 | } 1061 | ], 1062 | "source": [ 1063 | "\n", 1064 | "# qat_resnet = qat_vision_model(resnet, qconfig_dict) \n", 1065 | "# torch.jit.script(qat_resnet).save('qat_resnet50_cifar.pt')\n", 1066 | "\n", 1067 | "import requests\n", 1068 | "r = requests.get(\"https://quantization-workshop.s3.amazonaws.com/qat_resnet50_cifar.pt\")\n", 1069 | "open('qat_resnet50_cifar.pt', 'wb').write(r.content)\n", 1070 | "\n", 1071 | "qat_resnet = torch.load(\"qat_resnet50_cifar.pt\")" 1072 | ] 1073 | }, 1074 | { 1075 | "cell_type": "markdown", 1076 | "metadata": {}, 1077 | "source": [ 1078 | "### Evaluate performance of QAT model" 1079 | ] 1080 | }, 1081 | { 1082 | "cell_type": "code", 1083 | "execution_count": 25, 1084 | "metadata": {}, 1085 | "outputs": [ 1086 | { 1087 | "name": "stdout", 1088 | "output_type": "stream", 1089 | "text": [ 1090 | "Resnet QAT Profile:\n", 1091 | "Size (MB): 24.105327\n", 1092 | "====================\n", 1093 | "Files already downloaded and verified\n", 1094 | "Files already downloaded and verified\n", 1095 | "Loss: 0.3617867136994998 \n", 1096 | "Accuracy: 0.8880208333333334\n", 1097 | "====================\n", 1098 | "Time taken (1920 CIFAR test samples): 37.88538384437561\n" 1099 | ] 1100 | } 1101 | ], 1102 | "source": [ 1103 | "print(\"Resnet QAT Profile:\")\n", 1104 | "profile(qat_resnet)" 1105 | ] 1106 | }, 1107 | { 1108 | "cell_type": "markdown", 1109 | "metadata": {}, 1110 | "source": [ 1111 | "The accuracy of the QAT model is the highest, even slightly higher than the FP32 one! This is atypical, and is most likely because the CIFAR10 dataset is very simple for the Resnet architecture. Typically, we'd expect accuracies to drop from FP32-levels for more complex jobs." 
1112 | ] 1113 | } 1114 | ], 1115 | "metadata": { 1116 | "interpreter": { 1117 | "hash": "3d597f4c481aa0f25dceb95d2a0067e73c0966dcbd003d741d821a7208527ecf" 1118 | }, 1119 | "kernelspec": { 1120 | "display_name": "Python 3.9.11 ('base')", 1121 | "language": "python", 1122 | "name": "python3" 1123 | }, 1124 | "language_info": { 1125 | "codemirror_mode": { 1126 | "name": "ipython", 1127 | "version": 3 1128 | }, 1129 | "file_extension": ".py", 1130 | "mimetype": "text/x-python", 1131 | "name": "python", 1132 | "nbconvert_exporter": "python", 1133 | "pygments_lexer": "ipython3", 1134 | "version": "3.9.11" 1135 | }, 1136 | "orig_nbformat": 4 1137 | }, 1138 | "nbformat": 4, 1139 | "nbformat_minor": 2 1140 | } 1141 | -------------------------------------------------------------------------------- /Quantization_Slides.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/Quantization_Slides.pdf -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # pytorch-quantization-workshop 2 | 3 | This repo holds the files for the PyTorch Quantization Workshop conducted by [Suraj Subramanian](https://twitter.com/subramen) at the MLOpsWorld Conference on June 8 2022. 4 | 5 | ## Notebooks 6 | #### [Quant_101.ipynb](Quant_101.ipynb) 7 | Learn the fundamentals of quantization in pure Python code. 8 | 9 | #### [Quant_API.ipynb](Quant_API.ipynb) 10 | Learn about quantization schemes, when some are better than others, and using QConfigs in PyTorch 11 | 12 | #### [Quant_Workflow.ipynb](Quant_Workflow.ipynb) 13 | The number of available options can be overwhelming. Choosing the correct quantization technique and scheme is an empirical process; this notebook contains a workflow that aids choosing the most suitable option to quantize your FP32 model. 14 | 15 | ## Requirements 16 | * An x86 or ARM CPU 17 | * PyTorch 1.10.0+ 18 | 19 | ## Further Reading 20 | * [Quantization — PyTorch 1.11.0 documentation](https://pytorch.org/docs/stable/quantization.html) 21 | * [Practical Quantization in PyTorch](https://pytorch.org/blog/quantization-in-practice/) 22 | * [FX Graph Mode Quantization User Guide](https://pytorch.org/tutorials/prototype/fx_graph_mode_quant_guide.html) 23 | * [PyTorch Forum - Quantization](https://discuss.pytorch.org/c/quantization/17) 24 | * [PyTorch Github Issues](https://github.com/pytorch/pytorch/issues) 25 | 26 | ## Issues/Requests 27 | If you encounter a bug, please open an issue or a PR. 
See [CONTRIBUTING.MD](CONTRIBUTING.MD) 28 | 29 | -------------------------------------------------------------------------------- /code/101/latency.py: -------------------------------------------------------------------------------- 1 | def module_latency(mod, input, num_tests=10): 2 | t0 = time.time() 3 | with torch.inference_mode(): 4 | for _ in range(num_tests): 5 | mod(input) 6 | elapsed = time.time() - t0 7 | latency = elapsed/num_tests 8 | print("Average Latency: ", format(latency, 'g')) -------------------------------------------------------------------------------- /code/101/output_range.py: -------------------------------------------------------------------------------- 1 | def get_output_range(bits): 2 | alpha_q = -2 ** (bits - 1) 3 | beta_q = 2 ** (bits - 1) - 1 4 | return alpha_q, beta_q 5 | 6 | print("For 16-bit quantization, the quantized range is ", get_output_range(16)) 7 | print("For 8-bit quantization, the quantized range is ", get_output_range(8)) 8 | print("For 3-bit quantization, the quantized range is ", get_output_range(3)) 9 | print("For 2-bit quantization, the quantized range is ", get_output_range(2)) -------------------------------------------------------------------------------- /code/101/prof.py: -------------------------------------------------------------------------------- 1 | input = preprocess_image(load_image(wolf_img_url)) 2 | profile(resnet, input) -------------------------------------------------------------------------------- /code/101/qparams.py: -------------------------------------------------------------------------------- 1 | def get_quantization_params(input_range, output_range): 2 | min_val, max_val = input_range 3 | alpha_q, beta_q = output_range 4 | S = (max_val - min_val) / (beta_q - alpha_q) 5 | Z = alpha_q - (min_val / S) 6 | return S, Z 7 | 8 | 9 | def quantize(x, S, Z): 10 | x_q = 1/S * x + Z 11 | x_q = torch.round(x_q).to(torch.int8) 12 | return x_q 13 | 14 | 15 | def dequantize(x_q, S, Z): 16 | x = S * (x_q - Z) 17 | return x 18 | 19 | 20 | def quantize_int8(x): 21 | S, Z = get_quantization_params(input_range=(x.min(), x.max(),), output_range=(-128, 127)) 22 | x_q = quantize(x, S, Z) 23 | return x_q, S, Z 24 | 25 | 26 | -------------------------------------------------------------------------------- /code/101/roundfail.txt: -------------------------------------------------------------------------------- 1 | The reason this failed is because our classifier's parameters are between [-0.2, 0.4]. By directly rounding these, we just zeroed out our layer! 
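A minimal illustration of that failure, reusing the helpers defined in qparams.py above (assumes `torch` is imported and `quantize_int8` / `dequantize` are in scope):

```python
import torch
# quantize_int8 and dequantize come from code/101/qparams.py above

w = torch.empty(4, 4).uniform_(-0.2, 0.4)  # classifier weights in the range quoted above

# Naive rounding: every value in [-0.2, 0.4] is closer to 0 than to any other
# integer, so the whole layer collapses to zeros.
print(torch.round(w))

# Affine quantization rescales first, so the 256 int8 levels cover the weight
# range and the information survives the round trip.
w_q, S, Z = quantize_int8(w)
print(dequantize(w_q, S, Z))  # approximately equal to w
```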
-------------------------------------------------------------------------------- /code/101/sizeof.py: -------------------------------------------------------------------------------- 1 | def print_size_of_model(model): 2 | torch.save(model.state_dict(), "temp.p") 3 | print('Size (MB):', os.path.getsize("temp.p")/1e6) 4 | os.remove('temp.p') -------------------------------------------------------------------------------- /img/affine-symmetric.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/affine-symmetric.png -------------------------------------------------------------------------------- /img/flowchart-check1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/flowchart-check1.png -------------------------------------------------------------------------------- /img/flowchart-check2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/flowchart-check2.png -------------------------------------------------------------------------------- /img/flowchart-check3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/flowchart-check3.png -------------------------------------------------------------------------------- /img/flowchart-check4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/flowchart-check4.png -------------------------------------------------------------------------------- /img/flowchart-check5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/flowchart-check5.png -------------------------------------------------------------------------------- /img/flowchart-check6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/flowchart-check6.png -------------------------------------------------------------------------------- /img/flowchart-check7_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/flowchart-check7_1.png -------------------------------------------------------------------------------- /img/flowchart-check7_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/flowchart-check7_2.png -------------------------------------------------------------------------------- /img/flowchart-check8.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/flowchart-check8.png -------------------------------------------------------------------------------- /img/flowchart-check9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/flowchart-check9.png -------------------------------------------------------------------------------- /img/ns.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/ns.png -------------------------------------------------------------------------------- /img/observer.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/observer.png -------------------------------------------------------------------------------- /img/per_t_c.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/per_t_c.png -------------------------------------------------------------------------------- /img/ptq-flowchart.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/ptq-flowchart.png -------------------------------------------------------------------------------- /img/ptq-flowchart.svg: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 15 | 16 | Pre-trained modelFuse modulesInsert stubs & observersCalibration dataCalibrationQuantizationPTQ Model -------------------------------------------------------------------------------- /img/ptq-fx-flowchart.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/ptq-fx-flowchart.png -------------------------------------------------------------------------------- /img/q_scheme.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/q_scheme.png -------------------------------------------------------------------------------- /img/quant_dequant.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/quant_dequant.png -------------------------------------------------------------------------------- /img/quantization-flowchart.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/quantization-flowchart.png -------------------------------------------------------------------------------- /img/scaling.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/scaling.png -------------------------------------------------------------------------------- /img/swan-3299528_1280.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fbsamples/pytorch-quantization-workshop/dab20a29b5d20408df0321195e20764def0027b2/img/swan-3299528_1280.jpeg -------------------------------------------------------------------------------- /resnet_cifar.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn.functional as F 3 | from torch.optim.lr_scheduler import OneCycleLR 4 | from torchvision import datasets, models, transforms as T 5 | from torch.utils.data import DataLoader 6 | from torchmetrics.functional import accuracy 7 | import os 8 | import time 9 | 10 | NUM_WORKERS = int(os.cpu_count() / 2) 11 | DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu' 12 | BATCH_SIZE = 64 13 | 14 | def cifar_dataloader(): 15 | CIFAR_MEAN, CIFAR_STD = (0.491, 0.482, 0.446), (0.247, 0.243, 0.262) 16 | train_transforms = T.Compose([ 17 | T.RandomCrop(32, padding=4), 18 | T.RandomHorizontalFlip(), 19 | T.ToTensor(), 20 | T.Normalize(CIFAR_MEAN, CIFAR_STD) 21 | ]) 22 | 23 | test_transforms = T.Compose([ 24 | T.ToTensor(), 25 | T.Normalize(CIFAR_MEAN, CIFAR_STD) 26 | ]) 27 | 28 | train = DataLoader(datasets.CIFAR10("./cifar_data", transform=train_transforms, download=True), batch_size=BATCH_SIZE, num_workers=NUM_WORKERS, shuffle=True) 29 | test = DataLoader(datasets.CIFAR10("./cifar_data", transform=test_transforms, train=False, download=True), batch_size=BATCH_SIZE, num_workers=NUM_WORKERS) 30 | 31 | return train, test 32 | 33 | 34 | class Trainer: 35 | def __init__(self, model, epochs, device=None): 36 | self.device = device or DEVICE 37 | self.model = model.to(self.device) 38 | self.train_data, self.test_data = cifar_dataloader() 39 | self.criterion = torch.nn.CrossEntropyLoss() 40 | 41 | # if not test-only 42 | if epochs > 0: 43 | self.optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4) 44 | self.scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(self.optimizer, T_max=epochs) 45 | self.epochs = epochs 46 | self.out_dir = "./checkpoints/" 47 | 48 | def run_batch(self, inputs, targets): 49 | self.optimizer.zero_grad() 50 | inputs = inputs.to(self.device) 51 | targets = targets.to(self.device) 52 | output = self.model(inputs) 53 | loss = self.criterion(output, targets) 54 | loss.backward() 55 | self.optimizer.step() 56 | 57 | def run_epoch(self): 58 | for epoch in range(self.epochs): 59 | print(f"Epoch: {epoch}") 60 | # if epoch%5==0: 61 | # self.evaluate(self.model, 4) 62 | 63 | for inputs, targets in self.train_data: 64 | self.run_batch(inputs, targets) 65 | 66 | if self.scheduler is not None: 67 | self.scheduler.step() 68 | 69 | def evaluate(self, max_batch=None): 70 | L = 0 71 | A = 0 72 | t0 = time.time() 73 | with torch.inference_mode(): 74 | for b, (x, y) in enumerate(self.test_data): 75 | if max_batch and b == max_batch: 76 | break 77 | x = x.to(self.device) 78 | y = y.to(self.device) 79 | logits = self.model(x) 80 | loss = self.criterion(logits, y) 81 | preds = torch.argmax(logits, dim=1) 82 | acc = accuracy(preds, y) 83 | L += loss.item() 84 | A += acc.item() 85 | elapsed = time.time() - t0 86 | L /= b 87 | A /= b 88 | print(f"Loss: {L} \nAccuracy: {A}") 89 | print("="*20) 90 | 
print(f"Time taken ({b * BATCH_SIZE} CIFAR test samples): {elapsed}") 91 | 92 | def save_checkpoint(self): 93 | torch.save(self.model.state_dict(), self.out_dir) 94 | print("Model state dict saved at model.pth") 95 | 96 | 97 | --------------------------------------------------------------------------------