├── .gitignore ├── Alejandro Ahumada ├── Federated Recurrent Neural Network.ipynb ├── README.md └── logo │ ├── PyTorch-Robotics_Logo_v001.ai │ └── export │ └── PyTorch-Robotics_Logo_v001@2x.png ├── Char_rnn_classification.ipynb ├── Command Line ├── Environment variables ├── Increase Swap Space ├── Install PySyft on Raspberry Pi ├── Install PyTorch on Raspberry Pi ├── Install PyTorch through PyTorch Wheel ├── Install PyTorch wheel file └── Install Python 3.6.7 ├── Ebinbin Ajagun ├── Federated_Learning_CIFAR10.ipynb └── images │ └── cifar_data.png ├── Elena Kutanov ├── Federated Recurrent Neural Network.ipynb ├── README.md └── Setup guide.pdf ├── Federated Recurrent Neural Network ├── 1. Step - Dependencies.ipynb ├── Dataset └── FederatedLearningRaspberryPIs.ipynb ├── Federated learning on Raspberry Pi.md ├── Ivoline Ngong ├── Federated_Learning_With_LSTM.ipynb └── data │ ├── labels.txt │ └── reviews.txt ├── Nirupama ├── Char_rnn_classification.ipynb └── README.MD ├── PyTorch Wheels ├── How to build PyTorch for Raspberry Pi.md ├── README.md ├── torch-1.0.0a0+8322165-cp37-cp37m-linux_armv7l.whl └── torch-1.2.0a0+8554416-cp37-cp37m-linux_armv7l.whl ├── README.md ├── Raspberry-Pi-on-Virtual-Worker.md ├── Sergio Valderrama ├── FederatedLearningOnRaspberryPis_Windows.ipynb └── README.md ├── Shivam Raisharma └── My Journey.md ├── Sushil Ghimire ├── .ipynb_checkpoints │ └── FMNIST_FL-checkpoint.ipynb └── FMNIST_FL.ipynb ├── Temitope Oladokun └── images ├── federated-learning.png ├── fork.png ├── notebook.png ├── pullrequest.png ├── pullrequest2.png ├── savetogithub.png └── savetogithub2.png /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | -------------------------------------------------------------------------------- /Alejandro Ahumada/README.md: -------------------------------------------------------------------------------- 1 | Project based on [this tutorial by Daniele Gadler](https://blog.openmined.org/federated-learning-of-a-rnn-on-raspberry-pis/). 2 | 3 | # Tools needed/used 4 | - Laptop (macOS) 5 | - 2x Raspberry Pi Model 3B 6 | - Local Network (Ethernet or WiFi) 7 | - Wireless Keyboard (initial setup) 8 | - TV and HDMI cable (initial setup) 9 | 10 | # Project phases 11 | - ## Initial setup 12 | - Installed a fresh copy of Raspbian on an empty card. 13 | - I connected the first RPi to a screen and a wireless keyboard to do the initial setup: 14 | - connect it to WiFi 15 | - enable SSH 16 | - boot to CLI 17 | - Install Python, PyTorch and PySyft. 18 | - Clone the card to the second RPi3B and give it a unique name. 19 | - ## Python 20 | - **Laptop:** Python 3.7 already installed in an Anaconda env. 21 | - **Raspberry Pi:** I compiled and installed Python 3.6.7 with no problems following the tutorial steps. 22 | - ## PyTorch (v1.1.0) 23 | - **Laptop:** Installed via conda. 24 | - **Raspberry Pi:** I compiled and installed version 1.1.0. 25 | At the time of writing, v1.2.0 had just been released but had not been tested with the latest PySyft release, so I chose to stick with v1.1.0. 26 | - I had some issues with the Raspbian Buster GCC version; they are summarized in the [troubleshooting guide](https://github.com/shashigharti/federated-learning-on-raspberry-pi/wiki/Troubleshooting). 27 | - ## PySyft (v0.1.23a1) 28 | - Had some issues that were solved by running the exact same version of PySyft on all devices, as stated in the [troubleshooting guide](https://github.com/shashigharti/federated-learning-on-raspberry-pi/wiki/Troubleshooting).
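A quick way to verify the version pinning mentioned above is to run the same check on the laptop and on each RPi and compare the output. This is a minimal sketch using only standard Python/setuptools APIs, nothing project-specific:

```python
import sys
import pkg_resources  # ships with setuptools
import torch

# Run this on every device; the three values should be identical across them.
print("python:", sys.version.split()[0])
print("torch:", torch.__version__)
print("syft:", pkg_resources.get_distribution("syft").version)
```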
29 | # General workflow 30 | - Mostly used Raspberry Pis on a headless setup (except initial setup). 31 | - Access RPi3B via macOS SSH in terminal to launch worker servers and build tools. 32 | - Used VS Code remote tools to edit code both on my laptop and the two RPi3B. -------------------------------------------------------------------------------- /Alejandro Ahumada/logo/PyTorch-Robotics_Logo_v001.ai: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shashigharti/federated-learning-on-raspberry-pi/4e6b109b61f788ef8d21d84782dbc1920473caa0/Alejandro Ahumada/logo/PyTorch-Robotics_Logo_v001.ai -------------------------------------------------------------------------------- /Alejandro Ahumada/logo/export/PyTorch-Robotics_Logo_v001@2x.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shashigharti/federated-learning-on-raspberry-pi/4e6b109b61f788ef8d21d84782dbc1920473caa0/Alejandro Ahumada/logo/export/PyTorch-Robotics_Logo_v001@2x.png -------------------------------------------------------------------------------- /Char_rnn_classification.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "Char-rnn-classification", 7 | "version": "0.3.2", 8 | "provenance": [], 9 | "include_colab_link": true 10 | }, 11 | "kernelspec": { 12 | "name": "python3", 13 | "display_name": "Python 3" 14 | } 15 | }, 16 | "cells": [ 17 | { 18 | "cell_type": "markdown", 19 | "metadata": { 20 | "id": "view-in-github", 21 | "colab_type": "text" 22 | }, 23 | "source": [ 24 | "\"Open" 25 | ] 26 | }, 27 | { 28 | "cell_type": "code", 29 | "metadata": { 30 | "id": "JiVRHoWisYNS", 31 | "colab_type": "code", 32 | "colab": {} 33 | }, 34 | "source": [ 35 | "from __future__ import unicode_literals, print_function, division\n", 36 | "from torch.utils.data import Dataset\n", 37 | "\n", 38 | "import torch\n", 39 | "from io import open\n", 40 | "import glob\n", 41 | "import os\n", 42 | "import numpy as np\n", 43 | "import unicodedata\n", 44 | "import string\n", 45 | "import random\n", 46 | "import torch.nn as nn\n", 47 | "import time\n", 48 | "import math\n", 49 | "import syft as sy\n", 50 | "import pandas as pd\n", 51 | "import random\n", 52 | "from syft.frameworks.torch.federated import utils\n", 53 | "\n", 54 | "from syft.workers import WebsocketClientWorker\n", 55 | "import matplotlib.pyplot as plt\n", 56 | "import matplotlib.ticker as ticker" 57 | ], 58 | "execution_count": 0, 59 | "outputs": [] 60 | }, 61 | { 62 | "cell_type": "code", 63 | "metadata": { 64 | "id": "9v5vV97xw2v2", 65 | "colab_type": "code", 66 | "outputId": "502f72da-adf1-40f3-f260-9ed9dadf9669", 67 | "colab": { 68 | "base_uri": "https://localhost:8080/", 69 | "height": 204 70 | } 71 | }, 72 | "source": [ 73 | "\n", 74 | "!wget https://download.pytorch.org/tutorial/data.zip" 75 | ], 76 | "execution_count": 2, 77 | "outputs": [ 78 | { 79 | "output_type": "stream", 80 | "text": [ 81 | "--2019-08-18 18:24:58-- https://download.pytorch.org/tutorial/data.zip\n", 82 | "Resolving download.pytorch.org (download.pytorch.org)... 13.224.29.60, 13.224.29.19, 13.224.29.73, ...\n", 83 | "Connecting to download.pytorch.org (download.pytorch.org)|13.224.29.60|:443... connected.\n", 84 | "HTTP request sent, awaiting response... 
200 OK\n", 85 | "Length: 2882130 (2.7M) [application/zip]\n", 86 | "Saving to: ‘data.zip’\n", 87 | "\n", 88 | "\rdata.zip 0%[ ] 0 --.-KB/s \rdata.zip 100%[===================>] 2.75M --.-KB/s in 0.05s \n", 89 | "\n", 90 | "2019-08-18 18:24:58 (53.2 MB/s) - ‘data.zip’ saved [2882130/2882130]\n", 91 | "\n" 92 | ], 93 | "name": "stdout" 94 | } 95 | ] 96 | }, 97 | { 98 | "cell_type": "code", 99 | "metadata": { 100 | "id": "8Nw3Q1eBxT-w", 101 | "colab_type": "code", 102 | "outputId": "0ee9f0c7-48f1-4cd9-de49-16139ab17a0b", 103 | "colab": { 104 | "base_uri": "https://localhost:8080/", 105 | "height": 391 106 | } 107 | }, 108 | "source": [ 109 | "\n", 110 | "!unzip data.zip" 111 | ], 112 | "execution_count": 3, 113 | "outputs": [ 114 | { 115 | "output_type": "stream", 116 | "text": [ 117 | "Archive: data.zip\n", 118 | " creating: data/\n", 119 | " inflating: data/eng-fra.txt \n", 120 | " creating: data/names/\n", 121 | " inflating: data/names/Arabic.txt \n", 122 | " inflating: data/names/Chinese.txt \n", 123 | " inflating: data/names/Czech.txt \n", 124 | " inflating: data/names/Dutch.txt \n", 125 | " inflating: data/names/English.txt \n", 126 | " inflating: data/names/French.txt \n", 127 | " inflating: data/names/German.txt \n", 128 | " inflating: data/names/Greek.txt \n", 129 | " inflating: data/names/Irish.txt \n", 130 | " inflating: data/names/Italian.txt \n", 131 | " inflating: data/names/Japanese.txt \n", 132 | " inflating: data/names/Korean.txt \n", 133 | " inflating: data/names/Polish.txt \n", 134 | " inflating: data/names/Portuguese.txt \n", 135 | " inflating: data/names/Russian.txt \n", 136 | " inflating: data/names/Scottish.txt \n", 137 | " inflating: data/names/Spanish.txt \n", 138 | " inflating: data/names/Vietnamese.txt \n" 139 | ], 140 | "name": "stdout" 141 | } 142 | ] 143 | }, 144 | { 145 | "cell_type": "code", 146 | "metadata": { 147 | "id": "C2BPWFLQxYn8", 148 | "colab_type": "code", 149 | "outputId": "f4ade389-4d1b-4722-e3dc-0fcad540daa7", 150 | "colab": { 151 | "base_uri": "https://localhost:8080/", 152 | "height": 54 153 | } 154 | }, 155 | "source": [ 156 | "all_filenames = glob.glob('data/names/*.txt')\n", 157 | "print(all_filenames)" 158 | ], 159 | "execution_count": 4, 160 | "outputs": [ 161 | { 162 | "output_type": "stream", 163 | "text": [ 164 | "['data/names/Italian.txt', 'data/names/Irish.txt', 'data/names/Dutch.txt', 'data/names/Scottish.txt', 'data/names/Vietnamese.txt', 'data/names/Portuguese.txt', 'data/names/Japanese.txt', 'data/names/French.txt', 'data/names/Chinese.txt', 'data/names/Russian.txt', 'data/names/Polish.txt', 'data/names/German.txt', 'data/names/Korean.txt', 'data/names/Greek.txt', 'data/names/English.txt', 'data/names/Arabic.txt', 'data/names/Czech.txt', 'data/names/Spanish.txt']\n" 165 | ], 166 | "name": "stdout" 167 | } 168 | ] 169 | }, 170 | { 171 | "cell_type": "markdown", 172 | "metadata": { 173 | "id": "XWEWJuAP-tRg", 174 | "colab_type": "text" 175 | }, 176 | "source": [ 177 | "**Data Preparation Steps::**\n" 178 | ] 179 | }, 180 | { 181 | "cell_type": "markdown", 182 | "metadata": { 183 | "id": "7z5Gu9pb-275", 184 | "colab_type": "text" 185 | }, 186 | "source": [ 187 | "1.)**Conversion from unicode to ascii**\n", 188 | " Why to convert from unicode to ascii:: Char-rnn, supports ascii and will error out in nonmasking characters(unicode with type as diacritic.)\n", 189 | " \n", 190 | " **Code to convert from unicode to ascii is** def unicodeToAscii(s): ....\n", 191 | " We have to remove only diacritical marks .We need to normalize the unicode 
string using NFD normalization (the form the code actually uses) and then remove only those characters whose Unicode category is Mn.\n", 192 | "Mn:: Nonspacing character that indicates modifications of a base character. Signified by the Unicode designation \"Mn\" (mark, nonspacing). The value is 5.\n", 193 | "Unicode normalization:: unicodedata.normalize(form, unistr):: Returns the normal form form for the Unicode string unistr. Valid values for form are ‘NFC’, ‘NFKC’, ‘NFD’, and ‘NFKD’. What normalization does is make sure that characters which look identical actually are identical.\n", 194 | "\n", 195 | "\n" 196 | ] 197 | }, 198 | { 199 | "cell_type": "markdown", 200 | "metadata": { 201 | "id": "9kx6QyG1_5Fm", 202 | "colab_type": "text" 203 | }, 204 | "source": [ 205 | "**2) Build the category_lines dictionary, a list of names per language**::\n", 206 | "e.g.:: category_lines will look like {'Japanese': ['Abe', 'Abukara', ...], ..., 'Irish': ['Adam', 'Ahearn', 'Aodh']}\n", 207 | "\n", 208 | "**Build the all_categories list, a list of all 18 countries, e.g.::**\n", 209 | "all_categories will look like ['Japanese', 'Irish', 'Italian', 'Dutch', 'Russian', 'German', ...]" 210 | ] 211 | }, 212 | { 213 | "cell_type": "markdown", 214 | "metadata": { 215 | "id": "C77E2nk_A6Su", 216 | "colab_type": "text" 217 | }, 218 | "source": [ 219 | "**3) Convert the word in each category into a one-hot vector in order to feed it into the RNN model.**" 220 | ] 221 | }, 222 | { 223 | "cell_type": "code", 224 | "metadata": { 225 | "id": "B5m3zB8wnrN9", 226 | "colab_type": "code", 227 | "outputId": "bbaac151-5638-4c13-8005-bbe82f6f68f0", 228 | "colab": { 229 | "base_uri": "https://localhost:8080/", 230 | "height": 71 231 | } 232 | }, 233 | "source": [ 234 | "from __future__ import unicode_literals, print_function, division\n", 235 | "from io import open\n", 236 | "import glob\n", 237 | "import os\n", 238 | "\n", 239 | "def findFiles(path): return glob.glob(path)\n", 240 | "\n", 241 | "print(findFiles('data/names/*.txt'))\n", 242 | "\n", 243 | "import unicodedata\n", 244 | "import string\n", 245 | "\n", 246 | "all_letters = string.ascii_letters + \" .,;'\"\n", 247 | "n_letters = len(all_letters)\n", 248 | "\n", 249 | "# Turn a Unicode string to plain ASCII, thanks to https://stackoverflow.com/a/518232/2809427\n", 250 | "def unicodeToAscii(s):\n", 251 | " return ''.join(\n", 252 | " c for c in unicodedata.normalize('NFD', s)\n", 253 | " if unicodedata.category(c) != 'Mn'\n", 254 | " and c in all_letters\n", 255 | " )\n", 256 | "\n", 257 | "print(unicodeToAscii('Ślusàrski'))\n", 258 | "\n", 259 | "# Build the category_lines dictionary, a list of names per language\n", 260 | "category_lines = {}\n", 261 | "all_categories = []\n", 262 | "\n", 263 | "# Read a file and split into lines\n", 264 | "def readLines(filename):\n", 265 | " lines = open(filename, encoding='utf-8').read().strip().split('\\n')\n", 266 | " return [unicodeToAscii(line) for line in lines]\n", 267 | "\n", 268 | "for filename in findFiles('data/names/*.txt'):\n", 269 | " category = os.path.splitext(os.path.basename(filename))[0]\n", 270 | " all_categories.append(category)\n", 271 | " lines = readLines(filename)\n", 272 | " category_lines[category] = lines\n", 273 | "\n", 274 | "n_categories = len(all_categories)" 275 | ], 276 | "execution_count": 6, 277 | "outputs": [ 278 | { 279 | "output_type": "stream", 280 | "text": [ 281 | "['data/names/Italian.txt', 'data/names/Irish.txt', 'data/names/Dutch.txt', 'data/names/Scottish.txt', 'data/names/Vietnamese.txt', 'data/names/Portuguese.txt', 'data/names/Japanese.txt',
'data/names/French.txt', 'data/names/Chinese.txt', 'data/names/Russian.txt', 'data/names/Polish.txt', 'data/names/German.txt', 'data/names/Korean.txt', 'data/names/Greek.txt', 'data/names/English.txt', 'data/names/Arabic.txt', 'data/names/Czech.txt', 'data/names/Spanish.txt']\n", 282 | "Slusarski\n" 283 | ], 284 | "name": "stdout" 285 | } 286 | ] 287 | }, 288 | { 289 | "cell_type": "code", 290 | "metadata": { 291 | "id": "f0yqB6P4H_dN", 292 | "colab_type": "code", 293 | "outputId": "9322be5f-45ed-4fa0-c502-545154bc8de9", 294 | "colab": { 295 | "base_uri": "https://localhost:8080/", 296 | "height": 85 297 | } 298 | }, 299 | "source": [ 300 | "import torch\n", 301 | "\n", 302 | "# Find letter index from all_letters, e.g. \"a\" = 0\n", 303 | "def letterToIndex(letter):\n", 304 | " return all_letters.find(letter)\n", 305 | "\n", 306 | "# Just for demonstration, turn a letter into a <1 x n_letters> Tensor\n", 307 | "def letterToTensor(letter):\n", 308 | " tensor = torch.zeros(1, n_letters)\n", 309 | " tensor[0][letterToIndex(letter)] = 1\n", 310 | " return tensor\n", 311 | "\n", 312 | "# Turn a line into a <line_length x 1 x n_letters> tensor,\n", 313 | "# or an array of one-hot letter vectors\n", 314 | "def lineToTensor(line):\n", 315 | " tensor = torch.zeros(len(line), 1, n_letters)\n", 316 | " for li, letter in enumerate(line):\n", 317 | " tensor[li][0][letterToIndex(letter)] = 1\n", 318 | " return tensor\n", 319 | "\n", 320 | "print(letterToTensor('J'))" 321 | ], 322 | "execution_count": 7, 323 | "outputs": [ 324 | { 325 | "output_type": "stream", 326 | "text": [ 327 | "tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 328 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,\n", 329 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 330 | " 0., 0., 0.]])\n" 331 | ], 332 | "name": "stdout" 333 | } 334 | ] 335 | } 336 | ] 337 | } -------------------------------------------------------------------------------- /Command Line/Environment variables: -------------------------------------------------------------------------------- 1 | $ export NO_CUDA=1 2 | $ export NO_DISTRIBUTED=1 3 | $ export NO_MKLDNN=1 4 | $ export NO_NNPACK=1 5 | $ export NO_QNNPACK=1 6 | -------------------------------------------------------------------------------- /Command Line/Increase Swap Space: -------------------------------------------------------------------------------- 1 | $ sudo nano /etc/dphys-swapfile 2 | This opens up dphys-swapfile in the nano editor. Locate the line below: 3 | CONF_SWAPSIZE=100 4 | And change this to: 5 | CONF_SWAPSIZE=2048 6 | to increase the swap size to 2 GB (2 x 1024 MB). 7 | Press Ctrl+O (Overwrite the changes). Press Enter (Yes) 8 | Press Ctrl+X (Exit from nano)
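After saving and exiting, restart the swap service (or simply reboot) so the new size takes effect, then verify it. These two commands are standard Raspbian/dphys-swapfile usage rather than part of the original note:
$ sudo /etc/init.d/dphys-swapfile restart
$ free -h
# the Swap row should now report roughly 2.0G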
9 | -------------------------------------------------------------------------------- /Command Line/Install PySyft on Raspberry Pi: -------------------------------------------------------------------------------- 1 | For torch v1.0 2 | $ pip3 install syft==0.1.13a1 --no-dependencies 3 | $ pip3 install Flask flask-socketio lz4 msgpack 4 | 5 | For torch v1.1 6 | $ pip3 install syft 7 | -------------------------------------------------------------------------------- /Command Line/Install PyTorch on Raspberry Pi: -------------------------------------------------------------------------------- 1 | $ sudo apt install libopenblas-dev libblas-dev m4 cmake cython python3-dev python3-yaml python3-setuptools 2 | $ git clone --recursive https://github.com/pytorch/pytorch --branch=v1.0.0 3 | $ cd pytorch 4 | $ export NO_CUDA=1 5 | $ export NO_DISTRIBUTED=1 6 | $ export NO_MKLDNN=1 7 | $ export NO_NNPACK=1 8 | $ export NO_QNNPACK=1 9 | 10 | $ python3 setup.py build -------------------------------------------------------------------------------- /Command Line/Install PyTorch through PyTorch Wheel: -------------------------------------------------------------------------------- 1 | INSTALL PYTORCH 2 | $ sudo apt install libopenblas-dev libblas-dev m4 cmake cython python3-dev python3-yaml python3-setuptools 3 | $ uname -a 4 | $ python3 --version 5 | $ wget https://github.com/shashigharti/federated-learning-on-raspberry-pi/archive/master.zip 6 | $ unzip master.zip 7 | $ cd federated-learning-on-raspberry-pi-master/ 8 | $ cd pytorch_wheels/ 9 | $ pip3 install --user ./torch-1.0.0a0+8322165-cp37-cp37m-linux_armv7l.whl 10 | $ python3 11 | 12 | >>> import torch 13 | # if no errors torch is successfully installed! -------------------------------------------------------------------------------- /Command Line/Install PyTorch wheel file: -------------------------------------------------------------------------------- 1 | INSTALL PYTORCH THROUGH THE WHEEL FILE 2 | $ sudo apt install libopenblas-dev libblas-dev m4 cmake cython python3-dev python3-yaml python3-setuptools 3 | $ uname -a 4 | $ python3 --version 5 | $ wget https://github.com/shashigharti/federated-learning-on-raspberry-pi/archive/master.zip 6 | $ unzip master.zip 7 | $ cd federated-learning-on-raspberry-pi-master/ 8 | $ cd pytorch_wheels/ 9 | $ pip3 install --user ./torch-1.0.0a0+8322165-cp37-cp37m-linux_armv7l.whl 10 | $ python3 11 | 12 | >>> import torch 13 | # if no errors torch is successfully installed!
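# Optional smoke test once the wheel is installed (plain PyTorch calls, nothing specific to this repo):
$ python3 -c "import torch; print(torch.__version__); print(torch.rand(2, 2))"
# a version string followed by a random 2x2 tensor means the install works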
14 | -------------------------------------------------------------------------------- /Command Line/Install Python 3.6.7: -------------------------------------------------------------------------------- 1 | $ wget https://www.python.org/ftp/python/3.6.7/Python-3.6.7.tgz 2 | $ tar xf Python-3.6.7.tgz 3 | $ cd Python-3.6.7 4 | $ ./configure 5 | $ make 6 | $ sudo make altinstall 7 | $ python3.6 --version 8 | -------------------------------------------------------------------------------- /Ebinbin Ajagun/images/cifar_data.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shashigharti/federated-learning-on-raspberry-pi/4e6b109b61f788ef8d21d84782dbc1920473caa0/Ebinbin Ajagun/images/cifar_data.png -------------------------------------------------------------------------------- /Elena Kutanov/README.md: -------------------------------------------------------------------------------- 1 | Project based on https://blog.openmined.org/federated-learning-of-a-rnn-on-raspberry-pis/ 2 | 3 | Work done in this project: 4 | - Installed the environment on the Windows computer and a Raspberry Pi 4 B+ (PySyft, PyTorch, Python, etc.) 5 | - Established a remote connection. 6 | - Started the virtual worker 'Bob' and the Raspberry Pi worker 'Alice', and trained the model. 7 | - Wrote a part of the setup guide. 8 | -------------------------------------------------------------------------------- /Elena Kutanov/Setup guide.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shashigharti/federated-learning-on-raspberry-pi/4e6b109b61f788ef8d21d84782dbc1920473caa0/Elena Kutanov/Setup guide.pdf -------------------------------------------------------------------------------- /Federated Recurrent Neural Network/1.
Step - Dependencies.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "metadata": {}, 7 | "outputs": [ 8 | { 9 | "name": "stdout", 10 | "output_type": "stream", 11 | "text": [ 12 | "Collecting tf-encrypted==0.5.6\n", 13 | " Downloading https://files.pythonhosted.org/packages/49/ab/8b772e6d81f1a8af0141b1b7648c6826ed9f4306021568ec8165a3f7f71a/tf-encrypted-0.5.6.tar.gz (105kB)\n", 14 | "Requirement already satisfied: tensorflow<2,>=1.12.0 in c:\\users\\sankalp\\anaconda3\\envs\\udacity\\lib\\site-packages (from tf-encrypted==0.5.6) (1.13.1)\n", 15 | "Requirement already satisfied: numpy>=1.14.0 in c:\\users\\sankalp\\anaconda3\\envs\\udacity\\lib\\site-packages (from tf-encrypted==0.5.6) (1.16.4)\n", 16 | "Collecting pyyaml>=5.1 (from tf-encrypted==0.5.6)\n", 17 | " Downloading https://files.pythonhosted.org/packages/bc/3f/4f733cd0b1b675f34beb290d465a65e0f06b492c00b111d1b75125062de1/PyYAML-5.1.2-cp37-cp37m-win_amd64.whl (215kB)\n", 18 | "Requirement already satisfied: wheel>=0.26 in c:\\users\\sankalp\\anaconda3\\envs\\udacity\\lib\\site-packages (from tensorflow<2,>=1.12.0->tf-encrypted==0.5.6) (0.33.4)\n", 19 | "Requirement already satisfied: protobuf>=3.6.1 in c:\\users\\sankalp\\anaconda3\\envs\\udacity\\lib\\site-packages (from tensorflow<2,>=1.12.0->tf-encrypted==0.5.6) (3.8.0)\n", 20 | "Requirement already satisfied: grpcio>=1.8.6 in c:\\users\\sankalp\\anaconda3\\envs\\udacity\\lib\\site-packages (from tensorflow<2,>=1.12.0->tf-encrypted==0.5.6) (1.16.1)\n", 21 | "Requirement already satisfied: astor>=0.6.0 in c:\\users\\sankalp\\anaconda3\\envs\\udacity\\lib\\site-packages (from tensorflow<2,>=1.12.0->tf-encrypted==0.5.6) (0.7.1)\n", 22 | "Requirement already satisfied: gast>=0.2.0 in c:\\users\\sankalp\\anaconda3\\envs\\udacity\\lib\\site-packages (from tensorflow<2,>=1.12.0->tf-encrypted==0.5.6) (0.2.2)\n", 23 | "Requirement already satisfied: keras-preprocessing>=1.0.5 in c:\\users\\sankalp\\anaconda3\\envs\\udacity\\lib\\site-packages (from tensorflow<2,>=1.12.0->tf-encrypted==0.5.6) (1.1.0)\n", 24 | "Requirement already satisfied: keras-applications>=1.0.6 in c:\\users\\sankalp\\anaconda3\\envs\\udacity\\lib\\site-packages (from tensorflow<2,>=1.12.0->tf-encrypted==0.5.6) (1.0.8)\n", 25 | "Requirement already satisfied: tensorflow-estimator<1.14.0rc0,>=1.13.0 in c:\\users\\sankalp\\anaconda3\\envs\\udacity\\lib\\site-packages (from tensorflow<2,>=1.12.0->tf-encrypted==0.5.6) (1.13.0)\n", 26 | "Requirement already satisfied: six>=1.10.0 in c:\\users\\sankalp\\anaconda3\\envs\\udacity\\lib\\site-packages (from tensorflow<2,>=1.12.0->tf-encrypted==0.5.6) (1.12.0)\n", 27 | "Requirement already satisfied: termcolor>=1.1.0 in c:\\users\\sankalp\\anaconda3\\envs\\udacity\\lib\\site-packages (from tensorflow<2,>=1.12.0->tf-encrypted==0.5.6) (1.1.0)\n", 28 | "Requirement already satisfied: tensorboard<1.14.0,>=1.13.0 in c:\\users\\sankalp\\anaconda3\\envs\\udacity\\lib\\site-packages (from tensorflow<2,>=1.12.0->tf-encrypted==0.5.6) (1.13.1)\n", 29 | "Requirement already satisfied: absl-py>=0.1.6 in c:\\users\\sankalp\\anaconda3\\envs\\udacity\\lib\\site-packages (from tensorflow<2,>=1.12.0->tf-encrypted==0.5.6) (0.7.1)\n", 30 | "Requirement already satisfied: setuptools in c:\\users\\sankalp\\anaconda3\\envs\\udacity\\lib\\site-packages (from protobuf>=3.6.1->tensorflow<2,>=1.12.0->tf-encrypted==0.5.6) (41.0.1)\n", 31 | "Requirement already satisfied: h5py in 
c:\\users\\sankalp\\anaconda3\\envs\\udacity\\lib\\site-packages (from keras-applications>=1.0.6->tensorflow<2,>=1.12.0->tf-encrypted==0.5.6) (2.9.0)\n", 32 | "Requirement already satisfied: werkzeug>=0.11.15 in c:\\users\\sankalp\\anaconda3\\envs\\udacity\\lib\\site-packages (from tensorboard<1.14.0,>=1.13.0->tensorflow<2,>=1.12.0->tf-encrypted==0.5.6) (0.15.4)\n", 33 | "Requirement already satisfied: markdown>=2.6.8 in c:\\users\\sankalp\\anaconda3\\envs\\udacity\\lib\\site-packages (from tensorboard<1.14.0,>=1.13.0->tensorflow<2,>=1.12.0->tf-encrypted==0.5.6) (3.1.1)\n", 34 | "Building wheels for collected packages: tf-encrypted\n", 35 | " Building wheel for tf-encrypted (setup.py): started\n", 36 | " Building wheel for tf-encrypted (setup.py): finished with status 'done'\n", 37 | " Stored in directory: C:\\Users\\Sankalp\\AppData\\Local\\pip\\Cache\\wheels\\b1\\42\\32\\dfbd686975e0c953a8a8781446c55286abff2cd29b8b445506\n", 38 | "Successfully built tf-encrypted\n", 39 | "Installing collected packages: pyyaml, tf-encrypted\n", 40 | "Successfully installed pyyaml-5.1.2 tf-encrypted-0.5.6\n", 41 | "Collecting msgpack==0.6.1\n", 42 | " Using cached https://files.pythonhosted.org/packages/d1/67/476640810609471e0f3a32c9f4388bf1318b773d0a64b116305d3b604dca/msgpack-0.6.1-cp37-cp37m-win_amd64.whl\n", 43 | "Installing collected packages: msgpack\n", 44 | " Found existing installation: msgpack 0.5.6\n", 45 | " Uninstalling msgpack-0.5.6:\n", 46 | " Successfully uninstalled msgpack-0.5.6\n", 47 | "Successfully installed msgpack-0.6.1\n" 48 | ] 49 | }, 50 | { 51 | "name": "stderr", 52 | "output_type": "stream", 53 | "text": [ 54 | "ERROR: spacy 2.0.12 has requirement regex==2017.4.5, but you'll have regex 2018.7.11 which is incompatible.\n", 55 | "! was unexpected at this time.\n", 56 | "The system cannot find the path specified.\n" 57 | ] 58 | }, 59 | { 60 | "name": "stdout", 61 | "output_type": "stream", 62 | "text": [ 63 | "Collecting lz4\n", 64 | " Using cached https://files.pythonhosted.org/packages/de/30/c241f360f769fd5a8623bf512de1b184a0473eaeaa6d32c7cda6cfeafef5/lz4-2.1.10-cp37-cp37m-win_amd64.whl\n", 65 | "Installing collected packages: lz4\n", 66 | "Successfully installed lz4-2.1.10\n", 67 | "Collecting websocket\n", 68 | " Downloading https://files.pythonhosted.org/packages/f2/6d/a60d620ea575c885510c574909d2e3ed62129b121fa2df00ca1c81024c87/websocket-0.2.1.tar.gz (195kB)\n", 69 | "Collecting gevent (from websocket)\n", 70 | " Downloading https://files.pythonhosted.org/packages/8a/dd/417aad4e69fa7f8882534b778c46cb28eb0421ffa1e924ec3b4efcfcc81f/gevent-1.4.0-cp37-cp37m-win_amd64.whl (3.0MB)\n", 71 | "Collecting greenlet (from websocket)\n", 72 | " Downloading https://files.pythonhosted.org/packages/90/a3/da8593df08ee2efeb86ccf3201508a1fd2a3749e2735b7cadb7dd00416c6/greenlet-0.4.15-cp37-cp37m-win_amd64.whl\n", 73 | "Collecting cffi>=1.11.5; sys_platform == \"win32\" and platform_python_implementation == \"CPython\" (from gevent->websocket)\n", 74 | " Downloading https://files.pythonhosted.org/packages/2f/ad/9722b7752fdd88c858be57b47f41d1049b5fb0ab79caf0ab11407945c1a7/cffi-1.12.3-cp37-cp37m-win_amd64.whl (171kB)\n", 75 | "Collecting pycparser (from cffi>=1.11.5; sys_platform == \"win32\" and platform_python_implementation == \"CPython\"->gevent->websocket)\n", 76 | " Downloading https://files.pythonhosted.org/packages/68/9e/49196946aee219aead1290e00d1e7fdeab8567783e83e1b9ab5585e6206a/pycparser-2.19.tar.gz (158kB)\n", 77 | "Building wheels for collected packages: websocket, pycparser\n", 78 | " 
Building wheel for websocket (setup.py): started\n", 79 | " Building wheel for websocket (setup.py): finished with status 'done'\n", 80 | " Stored in directory: C:\\Users\\Sankalp\\AppData\\Local\\pip\\Cache\\wheels\\35\\f7\\5c\\9e8243838269ea93f05295708519a6e183fa6b515d9ce3b636\n", 81 | " Building wheel for pycparser (setup.py): started\n", 82 | " Building wheel for pycparser (setup.py): finished with status 'done'\n", 83 | " Stored in directory: C:\\Users\\Sankalp\\AppData\\Local\\pip\\Cache\\wheels\\f2\\9a\\90\\de94f8556265ddc9d9c8b271b0f63e57b26fb1d67a45564511\n", 84 | "Successfully built websocket pycparser\n", 85 | "Installing collected packages: greenlet, pycparser, cffi, gevent, websocket\n", 86 | " Found existing installation: pycparser 2.19\n", 87 | " Uninstalling pycparser-2.19:\n", 88 | " Successfully uninstalled pycparser-2.19\n", 89 | " Found existing installation: cffi 1.12.3\n", 90 | " Uninstalling cffi-1.12.3:\n", 91 | " Successfully uninstalled cffi-1.12.3\n", 92 | "Successfully installed cffi-1.12.3 gevent-1.4.0 greenlet-0.4.15 pycparser-2.19 websocket-0.2.1\n", 93 | "Collecting websockets\n", 94 | " Downloading https://files.pythonhosted.org/packages/9c/60/f96f535f3354cb6ba5e5c7ab128b1c4802a2d040ee7225e3fe51242816c1/websockets-8.0.2-cp37-cp37m-win_amd64.whl (65kB)\n", 95 | "Installing collected packages: websockets\n", 96 | "Successfully installed websockets-8.0.2\n" 97 | ] 98 | } 99 | ], 100 | "source": [ 101 | "#Install PySyft in Google Colab\n", 102 | "\n", 103 | "!pip install tf-encrypted==0.5.6\n", 104 | "!pip install msgpack==0.6.1\n", 105 | "!pip install regex==2017.4.5\n", 106 | "\n", 107 | "! URL=\"https://github.com/openmined/PySyft.git\" && FOLDER=\"PySyft\" && if [ ! -d $FOLDER ]; then git clone -b dev --single-branch $URL; else (cd $FOLDER && git pull $URL && cd ..); fi;\n", 108 | "\n", 109 | "!cd PySyft; python setup.py install > /dev/null\n", 110 | "\n", 111 | "import os\n", 112 | "import sys\n", 113 | "module_path = os.path.abspath(os.path.join('./PySyft'))\n", 114 | "if module_path not in sys.path:\n", 115 | " sys.path.append(module_path)\n", 116 | " \n", 117 | "!pip install --upgrade --force-reinstall lz4\n", 118 | "!pip install --upgrade --force-reinstall websocket\n", 119 | "!pip install --upgrade --force-reinstall websockets\n", 120 | "!pip install --upgrade --force-reinstall zstd" 121 | ] 122 | }, 123 | { 124 | "cell_type": "code", 125 | "execution_count": null, 126 | "metadata": {}, 127 | "outputs": [], 128 | "source": [ 129 | "from __future__ import unicode_literals, print_function, division\n", 130 | "from torch.utils.data import Dataset\n", 131 | "\n", 132 | "import torch\n", 133 | "from io import open\n", 134 | "import glob\n", 135 | "import os\n", 136 | "import numpy as np\n", 137 | "import unicodedata\n", 138 | "import string\n", 139 | "import random\n", 140 | "import torch.nn as nn\n", 141 | "import time\n", 142 | "import math\n", 143 | "import syft as sy\n", 144 | "import pandas as pd\n", 145 | "import random\n", 146 | "from syft.frameworks.torch.federated import utils\n", 147 | "\n", 148 | "from syft.workers import WebsocketClientWorker\n", 149 | "import matplotlib.pyplot as plt\n", 150 | "import matplotlib.ticker as ticker" 151 | ] 152 | }, 153 | { 154 | "cell_type": "code", 155 | "execution_count": null, 156 | "metadata": {}, 157 | "outputs": [], 158 | "source": [ 159 | "!wget https://download.pytorch.org/tutorial/data.zip " 160 | ] 161 | }, 162 | { 163 | "cell_type": "code", 164 | "execution_count": null, 165 | "metadata": {}, 166 | 
"outputs": [], 167 | "source": [ 168 | "!unzip data.zip" 169 | ] 170 | }, 171 | { 172 | "cell_type": "code", 173 | "execution_count": null, 174 | "metadata": {}, 175 | "outputs": [], 176 | "source": [] 177 | }, 178 | { 179 | "cell_type": "code", 180 | "execution_count": null, 181 | "metadata": {}, 182 | "outputs": [], 183 | "source": [] 184 | }, 185 | { 186 | "cell_type": "code", 187 | "execution_count": null, 188 | "metadata": {}, 189 | "outputs": [], 190 | "source": [] 191 | }, 192 | { 193 | "cell_type": "code", 194 | "execution_count": null, 195 | "metadata": {}, 196 | "outputs": [], 197 | "source": [] 198 | } 199 | ], 200 | "metadata": { 201 | "kernelspec": { 202 | "display_name": "Python 3", 203 | "language": "python", 204 | "name": "python3" 205 | }, 206 | "language_info": { 207 | "codemirror_mode": { 208 | "name": "ipython", 209 | "version": 3 210 | }, 211 | "file_extension": ".py", 212 | "mimetype": "text/x-python", 213 | "name": "python", 214 | "nbconvert_exporter": "python", 215 | "pygments_lexer": "ipython3", 216 | "version": "3.7.3" 217 | } 218 | }, 219 | "nbformat": 4, 220 | "nbformat_minor": 2 221 | } 222 | -------------------------------------------------------------------------------- /Federated Recurrent Neural Network/Dataset: -------------------------------------------------------------------------------- 1 | https://download.pytorch.org/tutorial/data.zip 2 | -------------------------------------------------------------------------------- /Federated learning on Raspberry Pi.md: -------------------------------------------------------------------------------- 1 | Title: Federated Learning on Raspberry Pi 2 | 3 | Author: Jess 4 | 5 | **Overview** 6 | 7 | The purpose of using federated learning on a Raspberry Pi (RPI) is to build the model on the device so that data does not have to be moved to a centralized server. In addition to increased privacy, FL works well for Internet-of-Things applications because training can be done on the device instead of having to pass data between devices and a centralized server. 8 | 9 | This project, which implements the OpenMined tutorial (linked below) simulates the process using 2 RPIs to classify a person's surname with its most likely language of origin. 10 | 11 | **Federated Learning** 12 | 13 | Using federated learning, it's possible to train multiple RPIs without passing any data between them or a centralized server. For example, this could be used to develop a secure, RPi-based "smart home" system. The model is trained on the RPIs and the data is encrypted using secure aggregation, which adds zero-sum masks to obscure the training results. Then, the encrypted training results (not the actual data) are sent to the server. The encrypted results from the RPIs are combined and the server can only decrypt the aggregated result. 14 | 15 | Testing is also done on the RPIs. The server uses the aggregated result to build a better model, then sends that improved model back. Each RPI tests the updated model on its own data (without it ever leaving the device). Once the test results are deemed good enough, the new model can be pushed out from the server to all RPIs. 16 | 17 | The main advantage of federated learning is that data never leaves each RPI while they each receive an improved model based on their aggregated data. One thing to remember is that the improved model sent back to the devices is static- it won't be updated until the entire process occurs again to build a sufficiently accurate, updated model. 
18 | 19 | **Resources** 20 | 21 | * [Federated Learning of a RNN on Raspberry Pi](https://blog.openmined.org/federated-learning-of-a-rnn-on-raspberry-pis/) 22 | 23 | 24 | Link to article in Wiki: https://github.com/shashigharti/federated-learning-on-raspberry-pi/wiki/Article:-Federated-Learning-on-Raspberry-Pi,-written-by-Jess 25 | -------------------------------------------------------------------------------- /Nirupama/Char_rnn_classification.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "Char-rnn-classification", 7 | "version": "0.3.2", 8 | "provenance": [], 9 | "include_colab_link": true 10 | }, 11 | "kernelspec": { 12 | "name": "python3", 13 | "display_name": "Python 3" 14 | } 15 | }, 16 | "cells": [ 17 | { 18 | "cell_type": "markdown", 19 | "metadata": { 20 | "id": "view-in-github", 21 | "colab_type": "text" 22 | }, 23 | "source": [ 24 | "\"Open" 25 | ] 26 | }, 27 | { 28 | "cell_type": "code", 29 | "metadata": { 30 | "id": "JiVRHoWisYNS", 31 | "colab_type": "code", 32 | "colab": {} 33 | }, 34 | "source": [ 35 | "" 36 | ], 37 | "execution_count": 0, 38 | "outputs": [] 39 | } 40 | ] 41 | } -------------------------------------------------------------------------------- /Nirupama/README.MD: -------------------------------------------------------------------------------- 1 | 2 | CLASSIFYING NAMES WITH A CHARACTER-LEVEL RNN :: We will be building and training a basic character-level RNN to classify words. A character-level RNN reads words as a series of characters - outputting a prediction and “hidden state” at each step, feeding its previous hidden state into each next step. We take the final prediction to be the output, i.e. which class the word belongs to. 3 | 4 | Recommended resources:: 5 | 6 | https://pytorch.org/ For installation instructions 7 | Deep Learning with PyTorch: A 60 Minute Blitz to get started with PyTorch in general 8 | Learning PyTorch with Examples for a wide and deep overview 9 | PyTorch for Former Torch Users if you are a former Lua Torch user 10 | The Unreasonable Effectiveness of Recurrent Neural Networks shows a bunch of real-life examples 11 | Understanding LSTM Networks is about LSTMs specifically but also informative about RNNs in general 12 | 13 | Dataset:: You can download the dataset from "https://download.pytorch.org/tutorial/data.zip" 14 | -------------------------------------------------------------------------------- /PyTorch Wheels/How to build PyTorch for Raspberry Pi.md: -------------------------------------------------------------------------------- 1 | # How to build PyTorch for Raspberry Pi 2 | 3 | The building process is based on [this](https://nmilosev.svbtle.com/compling-arm-stuff-without-an-arm-board-build-pytorch-for-the-raspberry-pi) article. Unfortunately, the author uses Fedora 30, which by default has GCC 9, while the RPi has GCC 8, so it will not work. 4 | So I used Fedora 29 installed in VirtualBox. 5 | 6 | 1. Get VirtualBox from [here](https://www.virtualbox.org/wiki/Downloads) 7 | 2. Install Fedora 29 on it. You can download the image [here](https://download.fedoraproject.org/pub/fedora/linux/releases/29/Workstation/x86_64/iso/Fedora-Workstation-Live-x86_64-29-1.2.iso) 8 | 3. Make sure you enabled as many CPU cores as possible in your VM. 9 | 4. Also you'll need at least 3Gb of /tmp directory size in Fedora in order to compile PyTorch. By default Fedora sets the /tmp size to half of the RAM size.
So make sure to allocate at least 6Gb to your VM. If you can't do it you'll have to manually increase the size of tmp directory (more details [here](https://unix.stackexchange.com/questions/402637/tmp-directory-size-in-fedora)) 10 | 5. Now in your VM 11 | - Install qemu and qemu-user packages 12 | 13 | `sudo dnf install qemu-system-arm qemu-user-static` 14 | - Now you need the rootfs. The following command will install a ARM rootfs to your /tmp directory along with everything you need to build PyTorch. 15 | 16 | ``` 17 | sudo dnf install --releasever=29 --installroot=/tmp/F29ARM --forcearch=armv7hl --repo=fedora --repo=updates systemd passwd dnf fedora-release nano openblas-devel blas-devel m4 cmake python3-Cython python3-devel python3-yaml python3-setuptools python3-numpy python3-cffi python3-wheel gcc-c++ tar gcc git make tmux -y 18 | ``` 19 | And then 20 | ``` 21 | sudo chroot /tmp/F29ARM 22 | ``` 23 | 24 | Check: 25 | ``` 26 | bash-4.4# uname -a 27 | Linux localhost.localdomain 4.18.16-300.fc29.x86_64 #1 SMP Sat Oct 20 23:24:08 UTC 2018 armv7l armv7l armv7l GNU/Linux 28 | ``` 29 | - Some things are broken, but easy to fix. Mainly network and DNF [wrongly detects your arch](https://bugzilla.redhat.com/show_bug.cgi?id=1691430). 30 | ``` 31 | # Fix for 1691430 32 | sed -i "s/'armv7hnl', 'armv8hl'/'armv7hnl', 'armv7hcnl', 'armv8hl'/" /usr/lib/python3.7/site-packages/dnf/rpm/__init__.py 33 | alias dnf='dnf --releasever=29 --forcearch=armv7hl --repo=fedora --repo=updates' 34 | 35 | # Fixes for default python and network 36 | alias python=python3 37 | echo 'nameserver 8.8.8.8' > /etc/resolv.conf 38 | ``` 39 | 40 | - Get PyTorch source: 41 | ``` 42 | git clone https://github.com/pytorch/pytorch --recursive && cd pytorch 43 | git checkout v1.2.0 44 | git submodule update --init --recursive 45 | ``` 46 | - Since we are building for a Raspberry Pi we want to disable CUDA, MKL etc. 47 | ``` 48 | export NO_CUDA=1 49 | export NO_DISTRIBUTED=1 50 | export NO_MKLDNN=1 51 | export BUILD_TEST=0 # for faster builds 52 | export MAX_JOBS=4 # I have 4 cores on my VM 53 | export NO_NNPACK=1 54 | export NO_QNNPACK=1 55 | ``` 56 | - Build: 57 | ``` 58 | python setup.py bdist_wheel 59 | ``` 60 | 61 | It took about 10 hours on my VM 62 | 63 | 6. After the building process is finished you'll have `torch-1.2.0a0+8554416-cp37-cp37m-linux_armv7l.whl` in the `dist` directory. Copy it to your computer either via shared folders in VirtualBox or some online file sharing service (I used https://dropmefiles.com) 64 | 7. And now some tricky part. Any `.whl` is just a `zip` file. You need to modify it using some Zip File Manager (e.g. [7-Zip](https://www.7-zip.org/)). You need to rename file `/torch/_C.cpython-37m-arm-linux-gnueabi.so` to `/torch/_C.cpython-37m-arm-linux-gnueabihf.so` and `/torch/_dl.cpython-37m-arm-linux-gnueabi.so` to `/torch/_dl.cpython-37m-arm-linux-gnueabihf.so`. Also edit `torch-1.2.0a0+8554416.dist-info/RECORD` and change corresponding lines there, but DON'T CHANGE sha256. 65 | 66 | 67 | 68 | -------------------------------------------------------------------------------- /PyTorch Wheels/README.md: -------------------------------------------------------------------------------- 1 | # PyTorch V1.0.0 Wheel file for Python 3.7 on Raspberry Pi with armv7l architecture 2 | 3 | ## What is a Wheel File 4 | A Python Wheel (WHL) file is the standard Python package format for distributing Python libraries. 
According to the Python Packaging Index’s 5 | description, a wheel is *designed to contain all the files for a PEP 376 compatible install in a way that is very close to the on-disk format*. 6 | 7 | ## Prerequisites for installing the given wheel file: 8 | * Python version should be 3.7. Use `python3 --version` to check this. 9 | * Raspberry Pi architecture should be armv7l. Use `uname -a` to check this. 10 | 11 | ## Installation of the PyTorch Wheel 12 | To install a wheel file, open the terminal on the Raspberry Pi and type the following: 13 | 14 | `pip3 install name_of_the_package.whl` (for Python 3) - in this project, we use Python 3 15 | 16 | `pip install name_of_the_package.whl` (for Python 2) 17 | 18 | To make sure that the package has been successfully installed, launch Python and try to import that package. In this project, type: `import torch`. 19 | 20 | If the import is successful, you are good to go! 21 | 22 | -------------------------------------------------------------------------------- /PyTorch Wheels/torch-1.0.0a0+8322165-cp37-cp37m-linux_armv7l.whl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shashigharti/federated-learning-on-raspberry-pi/4e6b109b61f788ef8d21d84782dbc1920473caa0/PyTorch Wheels/torch-1.0.0a0+8322165-cp37-cp37m-linux_armv7l.whl -------------------------------------------------------------------------------- /PyTorch Wheels/torch-1.2.0a0+8554416-cp37-cp37m-linux_armv7l.whl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shashigharti/federated-learning-on-raspberry-pi/4e6b109b61f788ef8d21d84782dbc1920473caa0/PyTorch Wheels/torch-1.2.0a0+8554416-cp37-cp37m-linux_armv7l.whl -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ![PyTorch Robotics logo](Alejandro%20Ahumada/logo/export/PyTorch-Robotics_Logo_v001@2x.png "PyTorch Robotics logo") 2 | 3 | # Federated Learning with Raspberry Pi (PySyft) 4 | We are a group of scholars in the study group PyTorch Robotics from the Secure and Private AI Scholarship Challenge by [Facebook AI](https://ai.facebook.com/) and [Udacity](https://www.udacity.com/), working together to implement [this](https://blog.openmined.org/federated-learning-of-a-rnn-on-raspberry-pis/) tutorial by [Daniele Gadler](https://github.com/DanyEle) from [OpenMined](https://www.openmined.org/). 5 | 6 | We will set up PySyft on two Raspberry Pis and learn how to train a Recurrent Neural Network on a Raspberry Pi via PySyft. 7 | 8 | *** 9 | 10 | ## Purpose of the project 11 | The purpose of using federated learning on a Raspberry Pi (RPi) is to build the model on the device so that data does not have to be moved to a centralized server. In addition to increased privacy, FL works well for Internet-of-Things applications because training can be done on the device instead of having to pass data between devices and a centralized server. 12 | 13 | This project, which implements the OpenMined tutorial, simulates the process using 2 RPis to classify a person's surname with its most likely language of origin. 14 | 15 | Read more about this [here](https://github.com/shashigharti/federated-learning-on-raspberry-pi/blob/master/Federated%20learning%20on%20Raspberry%20Pi.md) in the article written by [Jess](https://github.com/jess-s) 16 | 17 | *** 18 | ### Federated learning?
19 | Would you like to know more about federated learning? Look no further! Our team has prepared a few articles to get you up to speed: 20 | - [Federated Learning On Raspberry Pi](https://medium.com/@ayeshamanzur123/federated-learning-on-raspberry-pi-8c470cfe7cd3) 21 | - [Federated Learning: An Overview](https://medium.com/secure-and-private-ai-writing-challenge/federated-learning-an-overview-64708606297f) 22 | - [Federated Learning on Raspberry Pi](https://github.com/shashigharti/federated-learning-on-raspberry-pi/blob/master/Federated%20learning%20on%20Raspberry%20Pi.md) 23 | - [The Complete Beginners Guide to Federated Learning With LSTM With Pytorch](https://medium.com/@ivolinengong/c1c26ca22d96) 24 | 25 | 26 | *** 27 | ### Raspberry Pi? 28 | Would you like to know what parts are needed and how to get started? Have a look at these articles written by the team: 29 | - [A Step by Step guide to installing PyTorch in Raspberry Pi](https://medium.com/@suparnasnair/a-step-by-step-guide-to-installing-pytorch-in-raspberry-pi-a1491bb80531) 30 | - [Connecting Raspberry Pi to the Internet](https://medium.com/@suparnasnair/connecting-raspberry-pi-to-the-internet-7a6e98da21ac) 31 | - [First Steps with your Raspberry Pi](https://medium.com/@suparnasnair/first-steps-with-your-raspberry-pi-5917f980a48) 32 | - [Federated Learning of a Recurrent Neural Network for text classification, with Raspberry Pis working as remote workers](https://medium.com/@m.naufil1/federated-learning-of-a-recurrent-neural-network-for-text-classification-with-raspberry-pis-6ce184f85a2a) 33 | - [Getting started with Raspberry Pi — Install Raspian on your Raspberry Pi using Windows](https://medium.com/@sarahhelena.barmer/getting-started-with-raspberry-pi-install-raspian-on-your-raspberry-pi-using-windows-e6df42decf56) 34 | - [Setup guide - What parts are needed for federated learning on Raspberry Pi?](https://medium.com/@elena.kutanov/setup-guide-what-parts-are-needed-for-federated-learning-on-raspberry-pi-7c0c7b06ab3b) 35 | - [Project Equipment Setup](https://medium.com/@jcchidiadi/federated-learning-with-raspberry-pi-project-equipment-setup-38c2f88cb677) 36 | - [A Step by Step guide to installing PySyft in Raspberry Pi](https://medium.com/@suparnasnair/a-step-by-step-guide-to-installing-pysyft-in-raspberry-pi-d8d10c440c37) 37 | - [Getting started with Raspberry Pi — Set up Raspberry Pi in headless mode using Windows](https://medium.com/@sarahhelena.barmer/getting-started-with-raspberry-pi-set-up-raspberry-pi-in-headless-mode-using-windows-639365d7da2d) 38 | - [Installing Raspberry Pi on Virtual Machine](https://github.com/shashigharti/federated-learning-on-raspberry-pi/blob/master/Raspberry-Pi-on-Virtual-Worker.md) 39 | - [Setting Static IP address for Raspberry Pi](https://github.com/shashigharti/federated-learning-on-raspberry-pi/wiki/Setting-Static-IP-address-for-Raspberry-Pi,--by-Sayed-Maheen-Basheer) 40 | 41 | *** 42 | ### Stuck on a problem? 43 | Do not worry, we have it all covered for you.
Head over to the troubleshooting section [here](https://github.com/shashigharti/federated-learning-on-raspberry-pi/wiki/Troubleshooting) 44 | *** 45 | ### Implementations of the project 46 | Have a look here to see the implementations of the project in different ways: 47 | - Process to run the project - [Elena Kutanov](https://github.com/shashigharti/federated-learning-on-raspberry-pi/tree/master/Elena%20Kutanov) 48 | - Process to run the project - [Sergio Valderrama](https://github.com/shashigharti/federated-learning-on-raspberry-pi/tree/master/Sergio%20Valderrama) 49 | - PyTorch V1.0.0 Wheel file - [Suparna S Nair](https://github.com/shashigharti/federated-learning-on-raspberry-pi/tree/master/PyTorch%20Wheels) 50 | - Implementation of the code and guide - [Alejandro Ahumada](https://github.com/shashigharti/federated-learning-on-raspberry-pi/tree/master/Alejandro%20Ahumada) 51 | - Implementation of the code using Nigerian surnames - [Temitope Oladokun](https://github.com/shashigharti/federated-learning-on-raspberry-pi/blob/master/Temitope%20Oladokun) 52 | - Shivam's journey - [Shivam Raisharma](https://github.com/shashigharti/federated-learning-on-raspberry-pi/blob/master/Shivam%20Raisharma/My%20Journey.md) 53 | - Implementing federated learning on the CIFAR10 dataset - [Ebinbin Ajagun](https://github.com/shashigharti/federated-learning-on-raspberry-pi/blob/master/Ebinbin%20Ajagun/Federated_Learning_CIFAR10.ipynb) 54 | - Classifying names with a character-level RNN - [Nirupama Singh](https://github.com/shashigharti/federated-learning-on-raspberry-pi/tree/master/Nirupama) 55 | - Federated learning on FashionMNIST - [Sushil Ghimire](https://github.com/shashigharti/federated-learning-on-raspberry-pi/tree/master/Sushil%20Ghimire) 56 | - Federated learning with LSTM - [Ivoline Ngong](https://github.com/shashigharti/federated-learning-on-raspberry-pi/tree/master/Ivoline%20Ngong) 57 | *** 58 | ### Code 59 | - Find the code and dataset for the project [here](https://github.com/shashigharti/federated-learning-on-raspberry-pi/tree/master/Federated%20Recurrent%20Neural%20Network) 60 | - Slides: Char RNN Names Classification - Check the slides [here](https://www.slideshare.net/NirupamaSingh8/char-rnn-names-classification) 61 | - What does it mean to train a Neural Network? Read the article [here](https://medium.com/@mikaelaysanchez/what-does-it-mean-to-train-a-neural-network-64065fbc7bb0) 62 | *** 63 | ### Shortcut to command line 64 | Want a quick guide for the command line commands used?
Have a look [here](https://github.com/shashigharti/federated-learning-on-raspberry-pi/tree/master/Command%20Line) 65 | *** 66 | ### Topics 67 | - [PySyft](https://github.com/OpenMined/PySyft) 68 | - [Raspberry Pi](https://www.raspberrypi.org/) 69 | 70 | *** 71 | ### Project leaders 72 | - [x] [Helena Barmer](https://github.com/helenabarmer) - @Helena Barmer 73 | - [x] [Shashi Gharti](https://github.com/shashigharti) - @Shashi Gharti 74 | 75 | 76 | ### Contributors/Team 77 | 78 | - [x] [Jess](https://github.com/jess-s) - @Jess 79 | - [x] [Nirupama Singh](https://github.com/nirupamait) - @Nirupama Singh 80 | - [x] [Pooja Vinod](https://github.com/poojavinod100) - @Pooja Vinod 81 | - [x] [Alex Ahumada](https://github.com/projectsperminute) - @Alex Ahumada 82 | - [x] [Elena Kutanov](https://github.com/EVikVik) - @Elena Kutanov 83 | - [x] [Ayesha Manzur](https://github.com/GlowWorm95) - @Ayesha Manzur 84 | - [x] [Ivoline Ngong](https://github.com/ivyclare) - @Ivy 85 | - [x] [Joyce Chidiadi ](https://github.com/Joycechidi) - @Joyce 86 | - [x] [Temitope Oladokun](https://github.com/TemitopeOladokun) - @Temitope Oladokun 87 | - [x] [Shivam Raisharma](https://github.com/ShivamSRS) - @Shivam Raisharma 88 | - [x] [Sankalp Dayal](https://github.com/sankalpdayal5) - @Sankalp Dayal 89 | - [x] [Sushil Ghimire](https://github.com/sushil79g) - @Sushil Ghimire 90 | - [x] [Juan Carlos Kuri Pinto](https://github.com/jckuri) - @Juan Carlos Kuri Pinto 91 | - [x] [Ebinbin Ajagun](https://github.com/meajagun) - @Ebinbin Ajagun 92 | - [x] [Suparna S Nair](https://github.com/suparnasnair) - @Suparna S Nair 93 | - [x] [Sayed Maheen Basheer](https://github.com/SayedMaheen) - @Sayed Maheen Basheer 94 | - [x] [Sergio Valderrama](https://github.com/vucket) - @Sergio Valderrama 95 | - [x] [Stanislav Ladyzhenskiy](https://github.com/LStan) - @Stanislav Ladyzhenskiy 96 | - [x] [Mikaela Sanchez](https://github.com/mikaelasanchez) - @Mika 97 | - [x] [Muhammad Naufil](https://github.com/mnauf) - @Muhammad Naufil 98 | - [x] [cibaca](https://github.com/cibaca) - @cibaca 99 | 100 | *** 101 | 102 | ### More information 103 | - For more articles, resources and links visit our wiki section [here](https://github.com/shashigharti/federated-learning-on-raspberry-pi/wiki) 104 | - Read [this](https://medium.com/@sarahhelena.barmer/virtual-team-experiences-project-showcase-challenge-4b95fe479330) article about the shared experiences from three participants of the group: Virtual Team Experiences — Project Showcase Challenge 105 | - Future implementations of the project - Read our suggestions [here](https://github.com/shashigharti/federated-learning-on-raspberry-pi/wiki/Future-implementations) 106 | - Jetson Nano: Setup for Federated Learning - Read more about this setup [here](https://github.com/shashigharti/federated-learning-on-raspberry-pi/wiki/Jetson-Nano:-Setup-for-Federated-Learning-written-by-Jess) 107 | - Would you like to learn more about Secure and Private AI? 
Join this free course from Udacity [here](https://www.udacity.com/course/secure-and-private-ai--ud185) 108 | - Join OpenMined Slack [here](http://slack.openmined.org/) 109 | - PyTorch Robotics [logo](https://github.com/shashigharti/federated-learning-on-raspberry-pi/tree/master/Alejandro%20Ahumada/logo) designed by [Alex Ahumada](https://github.com/projectsperminute) 110 | -------------------------------------------------------------------------------- /Raspberry-Pi-on-Virtual-Worker.md: -------------------------------------------------------------------------------- 1 | The purpose of this article is to share resources around setting up Raspberry Pi as a virtual worker on a virtual machine. 2 | # Installing Raspberry Pi on a Virtual Machine 3 | ## Why opt for Raspberry Pi on a Virtual Machine? 4 | You’ve probably heard of emulation. It essentially enables us to run software on systems where it would otherwise be incompatible. Windows itself has emulation built in, in the form of compatibility mode. Emulating the Pi is useful for testing out projects when your Pi isn’t handy. 5 | 6 | Virtual machines are the default option these days for anyone wanting to try out a new operating system without upsetting their delicate digital life. VMware and VirtualBox are often recommended to anyone wanting to try Linux for the first time, for instance, or with a desire to access an older version of Windows. It’s even possible to run some older versions of Mac OS X in a virtual machine. 7 | 8 | While this makes them ideal for most forms of OS emulation/virtualization, they virtualize x86 hardware, which means that any operating system that runs on ARM chipsets cannot be installed and tested. 9 | 10 | So why emulate a Raspberry Pi at all? Several reasons spring to mind. First, using QEMU to run a virtualized Raspberry Pi environment lets you try out Raspbian without all of the messing around that is involved with writing a disk image to SD. While NOOBS is a better approach, neither is a fast setup, so virtualization gives anyone wanting to dip a toe in the pie, as it were, a quick chance to do so. 11 | 12 | Second, a virtual Raspberry Pi offers the chance to gauge how the various apps will run, as well as enable debugging and troubleshooting on your standard PC. This might be useful to children using Scratch or other development tools. Making screenshots on the Raspberry Pi is simple enough, but exporting them can be tricky – virtualization circumvents that. It’s also good practice to test a new operating system in a virtualized environment. 13 | It may not feature a physical computer, but it can be a time saver, and a bit of a game changer in some scenarios. 14 | 15 | ## Steps to install Raspberry Pi as an emulator 16 | * Installing VirtualBox on a Windows machine. 17 | VirtualBox is a powerful x86 and AMD64/Intel64 virtualization product for enterprise as well as home use. 18 | * You can download VirtualBox from https://www.virtualbox.org/wiki/Downloads 19 | * Choose the right version for your operating system. 20 | * After you’ve downloaded the executable, install VirtualBox by following the installation wizard’s instructions. 21 | * Select New to create a new virtual machine, then change the Type to Linux and the Version to Debian 64-bit. 22 | * Select Next. 23 | * Set 1024MB RAM in the next window. 24 | * Set 8-10GB of disk space in the next window and then select Create. 25 | * VirtualBox may take a few seconds to create the virtual machine. Once complete, it should appear in the left pane of the main VirtualBox window. 26 | * Select Start in the main VirtualBox window to start the VM. (If you prefer a scriptable setup, see the command-line sketch below.)
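For anyone who prefers scripting the VM creation over clicking through the wizard, the same setup can be done with VirtualBox's standard VBoxManage CLI. This is an optional sketch; the VM name mirrors the steps above and the sizes match the wizard settings, so adjust to taste:

VBoxManage createvm --name "RPiDesktop" --ostype Debian_64 --register
VBoxManage modifyvm "RPiDesktop" --memory 1024 --cpus 2
VBoxManage createmedium disk --filename RPiDesktop.vdi --size 10240
VBoxManage storagectl "RPiDesktop" --name "SATA" --add sata
VBoxManage storageattach "RPiDesktop" --storagectl "SATA" --port 0 --device 0 --type hdd --medium RPiDesktop.vdi
VBoxManage startvm "RPiDesktop"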
27 | * Download Debian Raspberry Pi Desktop 28 | * You can download Raspberry Pi Desktop from https://www.raspberrypi.org/downloads/raspberry-pi-desktop/ 29 | * Select Install when prompted. 30 | * Set up language and keyboard and use Guided Installation. 31 | * Select the drive you want to install to and the partitioning scheme. Defaults should do. 32 | * Select to install the GRUB bootloader when prompted. Select /dev/sda from the options. 33 | * Allow the VM to boot into Raspberry Pi Desktop. 34 | * You should now see the Raspberry Pi Desktop. We have almost completed the installation and have just a couple of configuration changes to make. 35 | 36 | * Configuration for Raspberry Pi Desktop. 37 | * Open Terminal from the Raspberry Pi Desktop. 38 | * Type ‘sudo apt update’ and hit Enter to update Raspberry Pi. 39 | * Type ‘sudo apt install virtualbox-guest-dkms virtualbox-guest-x11 linux-headers-$(uname -r)’ and hit Enter to install the VirtualBox Guest Additions. 40 | * Navigate to Devices, Shared Clipboard and set it to Bidirectional. 41 | * Type ‘sudo reboot’ and hit Enter to reboot your virtual machine and apply the updates. 42 | * Open Terminal once more. 43 | * Type ‘sudo adduser pi vboxsf’ and hit Enter to enable file sharing. 44 | * Type ‘shutdown -h now’ and hit Enter and wait for Raspberry Pi to shut down. 45 | * In the main VirtualBox window, select the Raspberry Pi VM. 46 | * Select Settings and Shared Folders. 47 | * Select the add icon on the right of the window and add the folders you want to share between Windows and Raspberry Pi. 48 | * Select Auto-mount in the selection window. 49 | You now have a fully functional Raspberry Pi Desktop running on Windows. 50 | 51 | You can emulate Raspberry Pi rather easily in Windows 10 if you have VirtualBox: you download the OS, install it in VirtualBox and run Raspberry Pi within the virtual machine. It works on most hardware and most versions of Windows 10, so you should be fine. VirtualBox is free too. 52 | 53 | Also, Microsoft Azure has a downloadable Raspberry Pi emulator as well as a neat online client simulator. These are easy ways to experiment with Raspberry Pi without buying the hardware, and a useful way to simulate your code purely in software before installing it onto hardware. 54 | 55 | # Future Scope 56 | * To train a Recurrent Neural Network on a virtual worker for Raspberry Pi via PySyft 57 | -------------------------------------------------------------------------------- /Sergio Valderrama/README.md: -------------------------------------------------------------------------------- 1 | Project based on https://blog.openmined.org/federated-learning-of-a-rnn-on-raspberry-pis/ 2 | 3 | This README contains information about my process to run the project. 4 | 5 | ## My Setup 6 | - Windows 10 PC 7 | - Raspberry Pi Model 3B 8 | - Mouse 9 | - TV and HDMI cable 10 | - Local WiFi Network 11 | 12 | ## PySyft on the Raspberry 13 | - ### Initial setup 14 | - Download and boot the Raspberry with the latest Raspbian image. I installed Raspbian Buster with desktop, which comes with Python 3.7.3. 15 | - I did not have a keyboard to set up the WiFi on the Raspberry, just a mouse, so I changed my WiFi password to "webstore", which is a word that you can copy/paste if you open Chromium and click on the store :v. 
With that you will be able to connect to your WiFi network and enable SSH and VNC.
17 | - Connect through SSH and install a virtual keyboard on the Raspberry just in case: 18 | https://raspberrypi.stackexchange.com/questions/41150/virtual-keyboard-activation 19 | - ### Python 20 | - If your Python 3 version is 3.6.7 or higher, there is no need to download and build Python as the guide mentions. 21 | - ### PyTorch 1.0.0 22 | - You can download and build PyTorch 1.0.0 if you have a good SD card and luck. 23 | I had a lot of trouble trying to build PyTorch, but I was able to find a wheel for the Raspberry: 24 | 25 | https://github.com/pytorch/pytorch/issues/22898 26 | 27 | Also @Suparna S Nair uploaded the file to her drive: 28 | 29 | https://drive.google.com/drive/folders/1anJ-P-IAbMFdB9D1LEH6Tx7fBW7MP8DU 30 | - You can open the links in Chromium on the Raspberry or you can SSH and download the files with curl -O . 31 | Then just pip3 install 32 | - Once you have PyTorch installed you have to install torchvision 0.2.2.post3 as well: 33 | 34 | pip3 install torchvision==0.2.2.post3 35 | - ### PySyft 0.1.13a1 36 | - Now for PySyft the trick is to install it without dependencies: 37 | 38 | pip3 install syft==0.1.13a1 --no-dependencies 39 | - Then install the missing dependencies mentioned in the PySyft GitHub project, in requirements.txt: 40 | 41 | https://github.com/OpenMined/PySyft/blob/dev/requirements.txt 42 | 43 | pip3 install Flask flask-socketio lz4 websocket-client websockets zstd msgpack 44 | 45 | You should be good to go on the Raspberry by now, just be sure to validate the correct installation of each component as the guide mentions. 46 | - ### Running the worker servers on the Raspberry 47 | - One important thing is that the project guide makes a mistake in calling run_websocket_client.py to start a websocket server. 48 | The correct step is to run run_websocket_server.py instead. 49 | 50 | ## PySyft on Windows PC 51 | The Python, PyTorch, torchvision and PySyft versions must match the ones that you installed on the Raspberry. 52 | - ### Python 53 | - Just download Python 3.7.3 from the official page. This version comes with pip. 54 | - ### PyTorch 55 | - Download the corresponding wheel from the official page and install it with pip: 56 | 57 | https://pytorch.org/get-started/previous-versions 58 | 59 | https://download.pytorch.org/whl/cpu/torch_stable.html 60 | 61 | For my machine I used torch-1.0.0-cp37-cp37m-win_amd64.whl 62 | - As for the Raspberry, install torchvision: 63 | pip3 install torchvision==0.2.2.post3 64 | - ### PySyft 0.1.13a1 65 | - Same as before: 66 | 67 | pip3 install syft==0.1.13a1 --no-dependencies 68 | 69 | pip3 install Flask flask-socketio lz4 websocket-client websockets zstd msgpack 70 | * The installation of zstd asked me to download Build Tools for Visual Studio from the official page: 71 | https://visualstudio.microsoft.com/downloads/ 72 | - ### Central coordinator 73 | - Follow the guide, download the PySyft repo and open the Federated Recurrent Neural Network.ipynb notebook (see the connection sketch after this list). 74 | - In this GitHub folder I uploaded my notebook, which is almost a copy of the original one. 75 | 76 | * I had some problems connecting to the Raspberry websockets because of an error that said that TIMEOUT_INTERVAL was too large. 77 | 78 | So I modified C:/Python37/Lib/site-packages/syft/workers/websocket_client.py 79 | 80 | I set TIMEOUT_INTERVAL from 9_999_999 to 999_999 and added "import os" 81 | * While running the Notebook you may have to install some dependencies, but that is okay. 
If the notebook throws the same error after installing a dependency, try restarting the kernel.
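For reference, here is a minimal sketch of the coordinator-side connection code, assuming the PySyft 0.1.x API this project uses (syft.workers.WebsocketClientWorker); the worker ids, IP addresses and port are placeholders that you should replace with the values of your own Raspberry Pis:

    import torch
    import syft as sy
    from syft.workers import WebsocketClientWorker

    # Hook PyTorch so tensors and models can be sent to remote workers
    hook = sy.TorchHook(torch)

    # One client per Raspberry Pi; each Pi must already be running
    # run_websocket_server.py with a matching id, host and port
    kwargs_websocket = {"hook": hook, "verbose": False}
    alice = WebsocketClientWorker(id="alice", host="192.168.0.52", port=8777, **kwargs_websocket)
    bob = WebsocketClientWorker(id="bob", host="192.168.0.53", port=8777, **kwargs_websocket)

Once both connections succeed, the rest of the notebook trains on alice and bob as the guide describes.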
82 | 83 | Connecting to the websockets took around 20 min and the training with 2 workers took 4 hr 15 min. 84 | 85 | 86 | 87 | 88 | 89 | 90 | -------------------------------------------------------------------------------- /Shivam Raisharma/My Journey.md: -------------------------------------------------------------------------------- 1 | # My journey 2 | This project is based on the blog https://blog.openmined.org/federated-learning-of-a-rnn-on-raspberry-pis/ 3 | 4 | It had been a few days since the study groups were first announced in the SPAIC slack group. I was yet to finish my morning coffee when I came across this very interesting study group #sg_pytorch-robotics created by @Helena Barmer. 5 | The primary project of the group involved training RNNs on Raspberry Pis by federated learning. And boy did it pique my interest! 6 | 7 | The major motive behind me choosing this was to learn. The whole scholarship journey to me has been about learning and this project offered to get me started afresh in a lot of fields simultaneously! 8 | 9 | 1. Hardware - I had never used any hardware device before. This looked like a great opportunity to finally get my hands dirty! Moreover, having a huge array of experts in the study group would definitely make my start smoother! 10 | 2. Deep Learning on peripherals - I had read a lot of articles (especially by pyimagesearch) on performing deep learning on Raspberry Pi and other devices but had never tried it before! 11 | 3. Federated Learning - While the course by Udacity has offered me great exposure to Federated Learning techniques, I would never be able to see the magic in action unless I actually deployed it on multiple peripheral devices (like Raspberry Pis or PCs). 12 | 13 | 14 | So I buckled my seat belt, changed the gears and accelerated towards the R&D cell of my institute! 15 | After a few applications I was able to procure 3 Raspberry Pi 3Bs (they did not have 3B+) and the Ethernet switch mentioned in the blog. 16 | 17 | Installing the OS on the Pis took a lot of time, and I was able to get 1 of them started with Raspbian Buster OS while the other 2 had some problems reading the memory card. 18 | It took me another day to figure out why I couldn't connect my lone working Raspberry Pi to the WiFi, when I found that I needed to download WiFi drivers. 19 | However, the WiFi driver couldn't detect anything! Great. So I had spent two days working on 3 Raspberry Pis and none of them worked. 20 | I wished to replace those Pis or buy new ones but, due to a flood-like situation in Mumbai, couldn't do so for a couple of days. 21 | 22 | ## New Start 23 | I was finally able to collect 3 new Raspberry Pis from the college R&D cell (but not before making my R&D co-ordinators promise that they're not faulty :P) 24 | I decided to first test one Raspberry Pi and, if it worked, reproduce the steps on the other 2 peripherals! 25 | 26 | I had my first RPi up and running with Raspbian Stretch OS. So far so good :D. 27 | I installed Python and the dependencies of PyTorch. It's going great! 28 | Wait... perhaps I spoke a tad bit quickly! I can't ping github.com :( It works fine with other websites though. Strange. 29 | I was able to work my way around it by calling in help from a cloud enthusiast friend Kaustubh (THANKS A LOT FRIEND); he somehow managed to host the repo on an instance on AWS and I cloned the PyTorch repo from there!
30 | 31 | Next step: Building and installing PyTorch 32 | 33 | Someone has rightly said, "Building libraries gives you a lot of time to think about life". 34 | Building PyTorch takes a lot of time and requires loads of patience. But the progress in % sure keeps you satisfied, if on your toes. 35 | It gave me some errors for half an hour, which I later figured out to be a result of my own negligence. 36 | 37 | 20% 38 | 20% 39 | 20% 40 | 24% 41 | Oh it's increasing! 42 | 26% 43 | Don't do that, don't give me hope :P 44 | 28% 45 | 71% 46 | YES A BIG JUMP THANK GOD IT'S SPEEDING UP 47 | 71% 48 | 71% 49 | And it stayed at 71% for an hour before giving me an error with caffe2. 50 | 51 | I copied the error, pasted it in the Slack study group and voila! 5 responses in 15 minutes :D I am telling you, my fellow scholarship participants here are beyond awesome! 52 | 53 | On their advice I changed some flag statuses, upgraded my numpy and increased the size of my swap space. 54 | And it worked :) (Although after almost 2 days of follow-up doubts and error solving on threads and DMs) 55 | Thank you, good helpful people of Slack! 56 | 57 | Next milestone: Installing PySyft 58 | 59 | So I started installing PySyft and it gave some very odd errors at various points in time. 60 | Some errors appeared only on some runs and not on others, which had me puzzled. 61 | On the runs where I didn't encounter them, I proceeded forward and found that the setup.py was unable to detect that I had PyTorch, so I edited it out of the requirements. :P 62 | I hadn't installed torchvision, which was one of the requirements, but pip wasn't able to install it because it couldn't find a suitable version. 63 | 64 | I realised that the Python version my RPi came installed with was Python 3.5, which was apparently the official version, while PySyft requires Python 3.6.7 (which may be why I couldn't find suitable versions of torchvision through pip install). But I couldn't find any official way to upgrade without overturning my whole progress. 65 | I have found some tutorials on installing Python 3.6.7 and I am following them. 66 | Last I checked, my progress bar had been at 89% for 8 hours while I was installing PySyft. 67 | 68 | 69 | ## Future scope 70 | Finish the installations, code an RNN and train it in a federated way. 71 | 72 | 73 | I learnt a lot about Raspberry Pis from this project. The webinars hosted in the #sg_pytorch-robotics study group helped me get familiar with other IoT devices. 74 | All in all, my journey so far with this project and study group has made me learn a lot about deep learning on IoT devices. I wish to continue this and aim to see this project through to the end. 75 | And I want to wholeheartedly thank the entire group and community for empowering me with this opportunity that made me learn more and exposed me to a whole new lot of domains. 76 | It got me out of my comfort zone (i.e. into hardware), and patiently challenged me to explore a whole new field! 77 | 78 | Thank you for reading. Have a great day!
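P.S. For anyone curious what that future federated training loop can look like, here is a minimal sketch with PySyft virtual workers standing in for the Pis; the worker ids and the tiny random dataset are placeholders, and the API assumed is the PySyft 0.1.x one used elsewhere in this repo:

    import torch
    import torch.nn.functional as F
    import torch.optim as optim
    import syft as sy

    hook = sy.TorchHook(torch)
    # Virtual workers standing in for the Raspberry Pis
    alice = sy.VirtualWorker(hook, id="alice")
    bob = sy.VirtualWorker(hook, id="bob")

    # A tiny placeholder dataset, split (federated) across the two workers
    data = torch.randn(100, 8)
    target = (data.sum(dim=1) > 0).long()
    federated_loader = sy.FederatedDataLoader(
        sy.BaseDataset(data, target).federate((alice, bob)), batch_size=10, shuffle=True
    )

    model = torch.nn.Linear(8, 2)
    optimizer = optim.SGD(model.parameters(), lr=0.1)

    for x, y in federated_loader:
        model.send(x.location)   # train on the worker holding this batch
        optimizer.zero_grad()
        loss = F.nll_loss(F.log_softmax(model(x), dim=1), y)
        loss.backward()
        optimizer.step()
        model.get()              # bring the updated model back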
79 | 80 | -------------------------------------------------------------------------------- /Sushil Ghimire/FMNIST_FL.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "metadata": { 7 | "colab": { 8 | "base_uri": "https://localhost:8080/", 9 | "height": 836 10 | }, 11 | "colab_type": "code", 12 | "id": "WeJ30ioYrnea", 13 | "outputId": "249fe108-9262-4c8c-eb5c-44935171dbd0" 14 | }, 15 | "outputs": [], 16 | "source": [ 17 | "!pip install syft\n", 18 | "!pip install fashion" 19 | ] 20 | }, 21 | { 22 | "cell_type": "code", 23 | "execution_count": 0, 24 | "metadata": { 25 | "colab": {}, 26 | "colab_type": "code", 27 | "id": "x3tLzXeNrybp" 28 | }, 29 | "outputs": [], 30 | "source": [ 31 | "import torch\n", 32 | "import torch.nn as nn\n", 33 | "import torchvision.transforms as transforms\n", 34 | "import torchvision.datasets as dataset\n", 35 | "from torch.autograd import Variable\n", 36 | "import torch.nn.functional as F\n", 37 | "import torch.optim as optim\n", 38 | "# from fashion import fashion" 39 | ] 40 | }, 41 | { 42 | "cell_type": "code", 43 | "execution_count": 0, 44 | "metadata": { 45 | "colab": {}, 46 | "colab_type": "code", 47 | "id": "1QVwGIursJXf" 48 | }, 49 | "outputs": [], 50 | "source": [ 51 | "transform = transforms.Compose([\n", 52 | " transforms.ToTensor(),\n", 53 | "
transforms.Normalize((0.5,), (0.5,))\n", 54 | "])" 55 | ] 56 | }, 57 | { 58 | "cell_type": "code", 59 | "execution_count": 0, 60 | "metadata": { 61 | "colab": {}, 62 | "colab_type": "code", 63 | "id": "KQGyqk8utAU0" 64 | }, 65 | "outputs": [], 66 | "source": [ 67 | "trainset = dataset.FashionMNIST('./fmnist.',download=True, train=True, transform=transform)\n", 68 | "trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)" 69 | ] 70 | }, 71 | { 72 | "cell_type": "code", 73 | "execution_count": 0, 74 | "metadata": { 75 | "colab": {}, 76 | "colab_type": "code", 77 | "id": "qA8YVo-AtYO6" 78 | }, 79 | "outputs": [], 80 | "source": [ 81 | "testset = dataset.FashionMNIST('./fmnist.',download=True, train=False, transform=transform)\n", 82 | "testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)" 83 | ] 84 | }, 85 | { 86 | "cell_type": "code", 87 | "execution_count": 8, 88 | "metadata": { 89 | "colab": { 90 | "base_uri": "https://localhost:8080/", 91 | "height": 105 92 | }, 93 | "colab_type": "code", 94 | "id": "oePy1FivtksC", 95 | "outputId": "6510671f-8735-49cf-f99d-c41492422fed" 96 | }, 97 | "outputs": [ 98 | { 99 | "name": "stderr", 100 | "output_type": "stream", 101 | "text": [ 102 | "WARNING: Logging before flag parsing goes to stderr.\n", 103 | "W0626 07:37:42.884648 140436642215808 secure_random.py:26] Falling back to insecure randomness since the required custom op could not be found for the installed version of TensorFlow. Fix this by compiling custom ops. Missing file was '/usr/local/lib/python3.6/dist-packages/tf_encrypted/operations/secure_random/secure_random_module_tf_1.14.0-rc1.so'\n", 104 | "W0626 07:37:42.900305 140436642215808 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tf_encrypted/session.py:26: The name tf.Session is deprecated. 
Please use tf.compat.v1.Session instead.\n", 105 | "\n" 106 | ] 107 | } 108 | ], 109 | "source": [ 110 | "import syft as sy\n", 111 | "hook = sy.TorchHook(torch)\n", 112 | "aviskar = sy.VirtualWorker(hook, id='aviskar')\n", 113 | "bipin = sy.VirtualWorker(hook, id='bipin')" 114 | ] 115 | }, 116 | { 117 | "cell_type": "code", 118 | "execution_count": 0, 119 | "metadata": { 120 | "colab": {}, 121 | "colab_type": "code", 122 | "id": "JbWFx7q3uAEk" 123 | }, 124 | "outputs": [], 125 | "source": [ 126 | "class Hyperparam():\n", 127 | " def __init__(self):\n", 128 | " self.epochs = 10\n", 129 | " self.batch_size = 64\n", 130 | " self.lr = 0.01\n", 131 | " self.seed = 20\n", 132 | " self.save_model = False\n", 133 | " self.test_batch_size = 100\n", 134 | " self.momentum = 0.5\n", 135 | " self.log_interval = 30\n", 136 | "\n", 137 | "args = Hyperparam()\n", 138 | "device = \"cpu\"" 139 | ] 140 | }, 141 | { 142 | "cell_type": "code", 143 | "execution_count": 0, 144 | "metadata": { 145 | "colab": {}, 146 | "colab_type": "code", 147 | "id": "zKFIC3H-utet" 148 | }, 149 | "outputs": [], 150 | "source": [ 151 | "use_cuda = False\n", 152 | "kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}\n", 153 | "federated_train_loader = sy.FederatedDataLoader(\n", 154 | " trainset.federate((aviskar,bipin)), batch_size=args.batch_size, shuffle = True, **kwargs \n", 155 | ")" 156 | ] 157 | }, 158 | { 159 | "cell_type": "code", 160 | "execution_count": 0, 161 | "metadata": { 162 | "colab": {}, 163 | "colab_type": "code", 164 | "id": "xU5tnW9AvZL-" 165 | }, 166 | "outputs": [], 167 | "source": [ 168 | "\n", 169 | "class FashionNet(nn.Module):\n", 170 | " def __init__(self):\n", 171 | " super(FashionNet, self).__init__()\n", 172 | " self.conv1 = nn.Conv2d(1, 20, 5, 1)\n", 173 | " self.conv2 = nn.Conv2d(20, 50, 5, 1)\n", 174 | " self.fc1 = nn.Linear(4*4*50, 500)\n", 175 | " self.fc2 = nn.Linear(500, 10)\n", 176 | "\n", 177 | " def forward(self, x):\n", 178 | " x = F.relu(self.conv1(x))\n", 179 | " x = F.max_pool2d(x, 2, 2)\n", 180 | " x = F.relu(self.conv2(x))\n", 181 | " x = F.max_pool2d(x, 2, 2)\n", 182 | " x = x.view(-1, 4*4*50)\n", 183 | " x = F.relu(self.fc1(x))\n", 184 | " x = self.fc2(x)\n", 185 | " return F.log_softmax(x, dim=1)" 186 | ] 187 | }, 188 | { 189 | "cell_type": "code", 190 | "execution_count": 0, 191 | "metadata": { 192 | "colab": {}, 193 | "colab_type": "code", 194 | "id": "fyOL82rmyIHQ" 195 | }, 196 | "outputs": [], 197 | "source": [ 198 | "def train(args, model, device, federated_train_loader, optimizer, epoch):\n", 199 | " model.train()\n", 200 | " for batch_idx, (data, target) in enumerate(federated_train_loader):\n", 201 | " model.send(data.location)\n", 202 | " data, target = data.to(device), target.to(device)\n", 203 | " optimizer.zero_grad()\n", 204 | " output = model(data)\n", 205 | " loss = F.nll_loss(output, target)\n", 206 | " loss.backward()\n", 207 | " optimizer.step()\n", 208 | " model.get()\n", 209 | " if batch_idx % args.log_interval == 0:\n", 210 | " loss = loss.get() # <-- NEW: get the loss back\n", 211 | " print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n", 212 | " epoch, batch_idx * args.batch_size, len(federated_train_loader) * args.batch_size,\n", 213 | " 100. 
* batch_idx / len(federated_train_loader), loss.item()))" 214 | ] 215 | }, 216 | { 217 | "cell_type": "code", 218 | "execution_count": 0, 219 | "metadata": { 220 | "colab": {}, 221 | "colab_type": "code", 222 | "id": "m_DhkxUNyZZp" 223 | }, 224 | "outputs": [], 225 | "source": [ 226 | "def test(args, model, device, test_loader):\n", 227 | " model.eval()\n", 228 | " test_loss = 0\n", 229 | " correct = 0\n", 230 | " with torch.no_grad():\n", 231 | " for data, target in test_loader:\n", 232 | " data, target = data.to(device), target.to(device)\n", 233 | " output = model(data)\n", 234 | " test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss\n", 235 | " pred = output.argmax(1, keepdim=True) # get the index of the max log-probability \n", 236 | " correct += pred.eq(target.view_as(pred)).sum().item()\n", 237 | "\n", 238 | " test_loss /= len(test_loader.dataset)\n", 239 | "\n", 240 | " print('\\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\\n'.format(\n", 241 | " test_loss, correct, len(test_loader.dataset),\n", 242 | " 100. * correct / len(test_loader.dataset)))" 243 | ] 244 | }, 245 | { 246 | "cell_type": "code", 247 | "execution_count": 27, 248 | "metadata": { 249 | "colab": { 250 | "base_uri": "https://localhost:8080/", 251 | "height": 1000 252 | }, 253 | "colab_type": "code", 254 | "id": "loSIXUG6ycXB", 255 | "outputId": "dcefa8ce-6199-458e-b725-68b8b4fd9ff4" 256 | }, 257 | "outputs": [ 258 | { 259 | "name": "stdout", 260 | "output_type": "stream", 261 | "text": [ 262 | "Train Epoch: 1 [0/60032 (0%)]\tLoss: 2.297894\n", 263 | "Train Epoch: 1 [1920/60032 (3%)]\tLoss: 2.270388\n", 264 | "Train Epoch: 1 [3840/60032 (6%)]\tLoss: 2.256516\n", 265 | "Train Epoch: 1 [5760/60032 (10%)]\tLoss: 2.218875\n", 266 | "Train Epoch: 1 [7680/60032 (13%)]\tLoss: 2.107992\n", 267 | "Train Epoch: 1 [9600/60032 (16%)]\tLoss: 2.018991\n", 268 | "Train Epoch: 1 [11520/60032 (19%)]\tLoss: 1.680741\n", 269 | "Train Epoch: 1 [13440/60032 (22%)]\tLoss: 1.539548\n", 270 | "Train Epoch: 1 [15360/60032 (26%)]\tLoss: 1.253159\n", 271 | "Train Epoch: 1 [17280/60032 (29%)]\tLoss: 0.972984\n", 272 | "Train Epoch: 1 [19200/60032 (32%)]\tLoss: 1.001128\n", 273 | "Train Epoch: 1 [21120/60032 (35%)]\tLoss: 0.925889\n", 274 | "Train Epoch: 1 [23040/60032 (38%)]\tLoss: 0.879947\n", 275 | "Train Epoch: 1 [24960/60032 (42%)]\tLoss: 0.702568\n", 276 | "Train Epoch: 1 [26880/60032 (45%)]\tLoss: 0.791418\n", 277 | "Train Epoch: 1 [28800/60032 (48%)]\tLoss: 0.777112\n", 278 | "Train Epoch: 1 [30720/60032 (51%)]\tLoss: 0.773149\n", 279 | "Train Epoch: 1 [32640/60032 (54%)]\tLoss: 0.922166\n", 280 | "Train Epoch: 1 [34560/60032 (58%)]\tLoss: 0.706456\n", 281 | "Train Epoch: 1 [36480/60032 (61%)]\tLoss: 0.693013\n", 282 | "Train Epoch: 1 [38400/60032 (64%)]\tLoss: 0.656149\n", 283 | "Train Epoch: 1 [40320/60032 (67%)]\tLoss: 0.892522\n", 284 | "Train Epoch: 1 [42240/60032 (70%)]\tLoss: 0.746137\n", 285 | "Train Epoch: 1 [44160/60032 (74%)]\tLoss: 0.659126\n", 286 | "Train Epoch: 1 [46080/60032 (77%)]\tLoss: 0.422212\n", 287 | "Train Epoch: 1 [48000/60032 (80%)]\tLoss: 0.707158\n", 288 | "Train Epoch: 1 [49920/60032 (83%)]\tLoss: 0.665552\n", 289 | "Train Epoch: 1 [51840/60032 (86%)]\tLoss: 0.553482\n", 290 | "Train Epoch: 1 [53760/60032 (90%)]\tLoss: 0.781476\n", 291 | "Train Epoch: 1 [55680/60032 (93%)]\tLoss: 0.586698\n", 292 | "Train Epoch: 1 [57600/60032 (96%)]\tLoss: 0.680393\n", 293 | "Train Epoch: 1 [59520/60032 (99%)]\tLoss: 0.581316\n", 294 | "\n", 295 | "Test set: Average loss: 0.6202, 
Accuracy: 7667/10000 (77%)\n", 296 | "\n", 297 | "Train Epoch: 2 [0/60032 (0%)]\tLoss: 0.624619\n", 298 | "Train Epoch: 2 [1920/60032 (3%)]\tLoss: 0.521833\n", 299 | "Train Epoch: 2 [3840/60032 (6%)]\tLoss: 0.621955\n", 300 | "Train Epoch: 2 [5760/60032 (10%)]\tLoss: 0.760472\n", 301 | "Train Epoch: 2 [7680/60032 (13%)]\tLoss: 0.748148\n", 302 | "Train Epoch: 2 [9600/60032 (16%)]\tLoss: 0.439471\n", 303 | "Train Epoch: 2 [11520/60032 (19%)]\tLoss: 0.544026\n", 304 | "Train Epoch: 2 [13440/60032 (22%)]\tLoss: 0.418315\n", 305 | "Train Epoch: 2 [15360/60032 (26%)]\tLoss: 0.540082\n", 306 | "Train Epoch: 2 [17280/60032 (29%)]\tLoss: 0.684459\n", 307 | "Train Epoch: 2 [19200/60032 (32%)]\tLoss: 0.523130\n", 308 | "Train Epoch: 2 [21120/60032 (35%)]\tLoss: 0.481329\n", 309 | "Train Epoch: 2 [23040/60032 (38%)]\tLoss: 0.585857\n", 310 | "Train Epoch: 2 [24960/60032 (42%)]\tLoss: 0.790414\n", 311 | "Train Epoch: 2 [26880/60032 (45%)]\tLoss: 0.446411\n", 312 | "Train Epoch: 2 [28800/60032 (48%)]\tLoss: 0.581044\n", 313 | "Train Epoch: 2 [30720/60032 (51%)]\tLoss: 0.609288\n", 314 | "Train Epoch: 2 [32640/60032 (54%)]\tLoss: 0.468391\n", 315 | "Train Epoch: 2 [34560/60032 (58%)]\tLoss: 0.343276\n", 316 | "Train Epoch: 2 [36480/60032 (61%)]\tLoss: 0.598389\n", 317 | "Train Epoch: 2 [38400/60032 (64%)]\tLoss: 0.540413\n", 318 | "Train Epoch: 2 [40320/60032 (67%)]\tLoss: 0.568029\n", 319 | "Train Epoch: 2 [42240/60032 (70%)]\tLoss: 0.539726\n", 320 | "Train Epoch: 2 [44160/60032 (74%)]\tLoss: 0.490907\n", 321 | "Train Epoch: 2 [46080/60032 (77%)]\tLoss: 0.622787\n", 322 | "Train Epoch: 2 [48000/60032 (80%)]\tLoss: 0.739177\n", 323 | "Train Epoch: 2 [49920/60032 (83%)]\tLoss: 0.669979\n", 324 | "Train Epoch: 2 [51840/60032 (86%)]\tLoss: 0.474868\n", 325 | "Train Epoch: 2 [53760/60032 (90%)]\tLoss: 0.529145\n", 326 | "Train Epoch: 2 [55680/60032 (93%)]\tLoss: 0.521167\n", 327 | "Train Epoch: 2 [57600/60032 (96%)]\tLoss: 0.528423\n", 328 | "Train Epoch: 2 [59520/60032 (99%)]\tLoss: 0.430959\n", 329 | "\n", 330 | "Test set: Average loss: 0.5481, Accuracy: 7939/10000 (79%)\n", 331 | "\n", 332 | "Train Epoch: 3 [0/60032 (0%)]\tLoss: 0.456396\n", 333 | "Train Epoch: 3 [1920/60032 (3%)]\tLoss: 0.412943\n", 334 | "Train Epoch: 3 [3840/60032 (6%)]\tLoss: 0.479357\n", 335 | "Train Epoch: 3 [5760/60032 (10%)]\tLoss: 0.531959\n", 336 | "Train Epoch: 3 [7680/60032 (13%)]\tLoss: 0.434902\n", 337 | "Train Epoch: 3 [9600/60032 (16%)]\tLoss: 0.517279\n", 338 | "Train Epoch: 3 [11520/60032 (19%)]\tLoss: 0.459047\n", 339 | "Train Epoch: 3 [13440/60032 (22%)]\tLoss: 0.500252\n", 340 | "Train Epoch: 3 [15360/60032 (26%)]\tLoss: 0.511273\n", 341 | "Train Epoch: 3 [17280/60032 (29%)]\tLoss: 0.408997\n", 342 | "Train Epoch: 3 [19200/60032 (32%)]\tLoss: 0.321631\n", 343 | "Train Epoch: 3 [21120/60032 (35%)]\tLoss: 0.442650\n", 344 | "Train Epoch: 3 [23040/60032 (38%)]\tLoss: 0.498727\n", 345 | "Train Epoch: 3 [24960/60032 (42%)]\tLoss: 0.475500\n", 346 | "Train Epoch: 3 [26880/60032 (45%)]\tLoss: 0.428555\n", 347 | "Train Epoch: 3 [28800/60032 (48%)]\tLoss: 0.342744\n", 348 | "Train Epoch: 3 [30720/60032 (51%)]\tLoss: 0.507664\n", 349 | "Train Epoch: 3 [32640/60032 (54%)]\tLoss: 0.518508\n", 350 | "Train Epoch: 3 [34560/60032 (58%)]\tLoss: 0.405584\n", 351 | "Train Epoch: 3 [36480/60032 (61%)]\tLoss: 0.436437\n", 352 | "Train Epoch: 3 [38400/60032 (64%)]\tLoss: 0.400065\n", 353 | "Train Epoch: 3 [40320/60032 (67%)]\tLoss: 0.520095\n", 354 | "Train Epoch: 3 [42240/60032 (70%)]\tLoss: 0.300094\n", 355 | "Train Epoch: 3 
[44160/60032 (74%)]\tLoss: 0.509318\n", 356 | "Train Epoch: 3 [46080/60032 (77%)]\tLoss: 0.484434\n", 357 | "Train Epoch: 3 [48000/60032 (80%)]\tLoss: 0.442702\n", 358 | "Train Epoch: 3 [49920/60032 (83%)]\tLoss: 0.490993\n", 359 | "Train Epoch: 3 [51840/60032 (86%)]\tLoss: 0.537253\n", 360 | "Train Epoch: 3 [53760/60032 (90%)]\tLoss: 0.588608\n", 361 | "Train Epoch: 3 [55680/60032 (93%)]\tLoss: 0.393754\n", 362 | "Train Epoch: 3 [57600/60032 (96%)]\tLoss: 0.497899\n", 363 | "Train Epoch: 3 [59520/60032 (99%)]\tLoss: 0.797962\n", 364 | "\n", 365 | "Test set: Average loss: 0.4598, Accuracy: 8364/10000 (84%)\n", 366 | "\n", 367 | "Train Epoch: 4 [0/60032 (0%)]\tLoss: 0.393662\n", 368 | "Train Epoch: 4 [1920/60032 (3%)]\tLoss: 0.452811\n", 369 | "Train Epoch: 4 [3840/60032 (6%)]\tLoss: 0.428987\n", 370 | "Train Epoch: 4 [5760/60032 (10%)]\tLoss: 0.681027\n", 371 | "Train Epoch: 4 [7680/60032 (13%)]\tLoss: 0.837731\n", 372 | "Train Epoch: 4 [9600/60032 (16%)]\tLoss: 0.512435\n", 373 | "Train Epoch: 4 [11520/60032 (19%)]\tLoss: 0.429265\n", 374 | "Train Epoch: 4 [13440/60032 (22%)]\tLoss: 0.347277\n", 375 | "Train Epoch: 4 [15360/60032 (26%)]\tLoss: 0.472584\n", 376 | "Train Epoch: 4 [17280/60032 (29%)]\tLoss: 0.427452\n", 377 | "Train Epoch: 4 [19200/60032 (32%)]\tLoss: 0.462409\n", 378 | "Train Epoch: 4 [21120/60032 (35%)]\tLoss: 0.348962\n", 379 | "Train Epoch: 4 [23040/60032 (38%)]\tLoss: 0.405967\n", 380 | "Train Epoch: 4 [24960/60032 (42%)]\tLoss: 0.499514\n", 381 | "Train Epoch: 4 [26880/60032 (45%)]\tLoss: 0.477124\n", 382 | "Train Epoch: 4 [28800/60032 (48%)]\tLoss: 0.713701\n", 383 | "Train Epoch: 4 [30720/60032 (51%)]\tLoss: 0.334142\n", 384 | "Train Epoch: 4 [32640/60032 (54%)]\tLoss: 0.391503\n", 385 | "Train Epoch: 4 [34560/60032 (58%)]\tLoss: 0.256457\n", 386 | "Train Epoch: 4 [36480/60032 (61%)]\tLoss: 0.402779\n", 387 | "Train Epoch: 4 [38400/60032 (64%)]\tLoss: 0.429268\n", 388 | "Train Epoch: 4 [40320/60032 (67%)]\tLoss: 0.387079\n", 389 | "Train Epoch: 4 [42240/60032 (70%)]\tLoss: 0.490530\n", 390 | "Train Epoch: 4 [44160/60032 (74%)]\tLoss: 0.392317\n", 391 | "Train Epoch: 4 [46080/60032 (77%)]\tLoss: 0.466212\n", 392 | "Train Epoch: 4 [48000/60032 (80%)]\tLoss: 0.420838\n", 393 | "Train Epoch: 4 [49920/60032 (83%)]\tLoss: 0.239213\n", 394 | "Train Epoch: 4 [51840/60032 (86%)]\tLoss: 0.558575\n", 395 | "Train Epoch: 4 [53760/60032 (90%)]\tLoss: 0.563194\n", 396 | "Train Epoch: 4 [55680/60032 (93%)]\tLoss: 0.377696\n", 397 | "Train Epoch: 4 [57600/60032 (96%)]\tLoss: 0.508661\n", 398 | "Train Epoch: 4 [59520/60032 (99%)]\tLoss: 0.649463\n", 399 | "\n", 400 | "Test set: Average loss: 0.4603, Accuracy: 8358/10000 (84%)\n", 401 | "\n", 402 | "Train Epoch: 5 [0/60032 (0%)]\tLoss: 0.361523\n", 403 | "Train Epoch: 5 [1920/60032 (3%)]\tLoss: 0.401346\n", 404 | "Train Epoch: 5 [3840/60032 (6%)]\tLoss: 0.300103\n", 405 | "Train Epoch: 5 [5760/60032 (10%)]\tLoss: 0.696953\n", 406 | "Train Epoch: 5 [7680/60032 (13%)]\tLoss: 0.387073\n", 407 | "Train Epoch: 5 [9600/60032 (16%)]\tLoss: 0.346780\n", 408 | "Train Epoch: 5 [11520/60032 (19%)]\tLoss: 0.433337\n", 409 | "Train Epoch: 5 [13440/60032 (22%)]\tLoss: 0.535868\n", 410 | "Train Epoch: 5 [15360/60032 (26%)]\tLoss: 0.441173\n", 411 | "Train Epoch: 5 [17280/60032 (29%)]\tLoss: 0.394439\n", 412 | "Train Epoch: 5 [19200/60032 (32%)]\tLoss: 0.247470\n", 413 | "Train Epoch: 5 [21120/60032 (35%)]\tLoss: 0.368897\n", 414 | "Train Epoch: 5 [23040/60032 (38%)]\tLoss: 0.520492\n", 415 | "Train Epoch: 5 [24960/60032 (42%)]\tLoss: 0.284961\n", 
416 | "Train Epoch: 5 [26880/60032 (45%)]\tLoss: 0.308489\n", 417 | "Train Epoch: 5 [28800/60032 (48%)]\tLoss: 0.434475\n", 418 | "Train Epoch: 5 [30720/60032 (51%)]\tLoss: 0.221708\n", 419 | "Train Epoch: 5 [32640/60032 (54%)]\tLoss: 0.383588\n", 420 | "Train Epoch: 5 [34560/60032 (58%)]\tLoss: 0.397143\n", 421 | "Train Epoch: 5 [36480/60032 (61%)]\tLoss: 0.516105\n", 422 | "Train Epoch: 5 [38400/60032 (64%)]\tLoss: 0.317147\n", 423 | "Train Epoch: 5 [40320/60032 (67%)]\tLoss: 0.350431\n", 424 | "Train Epoch: 5 [42240/60032 (70%)]\tLoss: 0.312608\n", 425 | "Train Epoch: 5 [44160/60032 (74%)]\tLoss: 0.543478\n", 426 | "Train Epoch: 5 [46080/60032 (77%)]\tLoss: 0.239131\n", 427 | "Train Epoch: 5 [48000/60032 (80%)]\tLoss: 0.297404\n", 428 | "Train Epoch: 5 [49920/60032 (83%)]\tLoss: 0.295462\n", 429 | "Train Epoch: 5 [51840/60032 (86%)]\tLoss: 0.363845\n", 430 | "Train Epoch: 5 [53760/60032 (90%)]\tLoss: 0.377399\n", 431 | "Train Epoch: 5 [55680/60032 (93%)]\tLoss: 0.569621\n", 432 | "Train Epoch: 5 [57600/60032 (96%)]\tLoss: 0.469008\n", 433 | "Train Epoch: 5 [59520/60032 (99%)]\tLoss: 0.386325\n", 434 | "\n", 435 | "Test set: Average loss: 0.4003, Accuracy: 8558/10000 (86%)\n", 436 | "\n", 437 | "Train Epoch: 6 [0/60032 (0%)]\tLoss: 0.433503\n", 438 | "Train Epoch: 6 [1920/60032 (3%)]\tLoss: 0.395139\n", 439 | "Train Epoch: 6 [3840/60032 (6%)]\tLoss: 0.457346\n", 440 | "Train Epoch: 6 [5760/60032 (10%)]\tLoss: 0.759689\n", 441 | "Train Epoch: 6 [7680/60032 (13%)]\tLoss: 0.286895\n", 442 | "Train Epoch: 6 [9600/60032 (16%)]\tLoss: 0.286468\n", 443 | "Train Epoch: 6 [11520/60032 (19%)]\tLoss: 0.430500\n", 444 | "Train Epoch: 6 [13440/60032 (22%)]\tLoss: 0.632611\n", 445 | "Train Epoch: 6 [15360/60032 (26%)]\tLoss: 0.431436\n", 446 | "Train Epoch: 6 [17280/60032 (29%)]\tLoss: 0.422151\n", 447 | "Train Epoch: 6 [19200/60032 (32%)]\tLoss: 0.320861\n", 448 | "Train Epoch: 6 [21120/60032 (35%)]\tLoss: 0.341787\n", 449 | "Train Epoch: 6 [23040/60032 (38%)]\tLoss: 0.263591\n", 450 | "Train Epoch: 6 [24960/60032 (42%)]\tLoss: 0.404998\n", 451 | "Train Epoch: 6 [26880/60032 (45%)]\tLoss: 0.297293\n", 452 | "Train Epoch: 6 [28800/60032 (48%)]\tLoss: 0.293235\n", 453 | "Train Epoch: 6 [30720/60032 (51%)]\tLoss: 0.320074\n", 454 | "Train Epoch: 6 [32640/60032 (54%)]\tLoss: 0.563778\n", 455 | "Train Epoch: 6 [34560/60032 (58%)]\tLoss: 0.355298\n", 456 | "Train Epoch: 6 [36480/60032 (61%)]\tLoss: 0.477644\n", 457 | "Train Epoch: 6 [38400/60032 (64%)]\tLoss: 0.305222\n", 458 | "Train Epoch: 6 [40320/60032 (67%)]\tLoss: 0.259541\n", 459 | "Train Epoch: 6 [42240/60032 (70%)]\tLoss: 0.364645\n", 460 | "Train Epoch: 6 [44160/60032 (74%)]\tLoss: 0.469457\n", 461 | "Train Epoch: 6 [46080/60032 (77%)]\tLoss: 0.473092\n", 462 | "Train Epoch: 6 [48000/60032 (80%)]\tLoss: 0.354201\n", 463 | "Train Epoch: 6 [49920/60032 (83%)]\tLoss: 0.247948\n", 464 | "Train Epoch: 6 [51840/60032 (86%)]\tLoss: 0.490131\n", 465 | "Train Epoch: 6 [53760/60032 (90%)]\tLoss: 0.490336\n", 466 | "Train Epoch: 6 [55680/60032 (93%)]\tLoss: 0.372183\n", 467 | "Train Epoch: 6 [57600/60032 (96%)]\tLoss: 0.406239\n", 468 | "Train Epoch: 6 [59520/60032 (99%)]\tLoss: 0.318030\n", 469 | "\n", 470 | "Test set: Average loss: 0.3896, Accuracy: 8588/10000 (86%)\n", 471 | "\n", 472 | "Train Epoch: 7 [0/60032 (0%)]\tLoss: 0.444990\n", 473 | "Train Epoch: 7 [1920/60032 (3%)]\tLoss: 0.374634\n", 474 | "Train Epoch: 7 [3840/60032 (6%)]\tLoss: 0.314474\n", 475 | "Train Epoch: 7 [5760/60032 (10%)]\tLoss: 0.364646\n", 476 | "Train Epoch: 7 [7680/60032 
(13%)]\tLoss: 0.240099\n", 477 | "Train Epoch: 7 [9600/60032 (16%)]\tLoss: 0.430089\n", 478 | "Train Epoch: 7 [11520/60032 (19%)]\tLoss: 0.296451\n", 479 | "Train Epoch: 7 [13440/60032 (22%)]\tLoss: 0.403176\n", 480 | "Train Epoch: 7 [15360/60032 (26%)]\tLoss: 0.405922\n", 481 | "Train Epoch: 7 [17280/60032 (29%)]\tLoss: 0.327272\n", 482 | "Train Epoch: 7 [19200/60032 (32%)]\tLoss: 0.180533\n", 483 | "Train Epoch: 7 [21120/60032 (35%)]\tLoss: 0.336758\n", 484 | "Train Epoch: 7 [23040/60032 (38%)]\tLoss: 0.367229\n", 485 | "Train Epoch: 7 [24960/60032 (42%)]\tLoss: 0.450152\n", 486 | "Train Epoch: 7 [26880/60032 (45%)]\tLoss: 0.217510\n", 487 | "Train Epoch: 7 [28800/60032 (48%)]\tLoss: 0.223738\n", 488 | "Train Epoch: 7 [30720/60032 (51%)]\tLoss: 0.455882\n", 489 | "Train Epoch: 7 [32640/60032 (54%)]\tLoss: 0.342804\n", 490 | "Train Epoch: 7 [34560/60032 (58%)]\tLoss: 0.322493\n", 491 | "Train Epoch: 7 [36480/60032 (61%)]\tLoss: 0.469595\n", 492 | "Train Epoch: 7 [38400/60032 (64%)]\tLoss: 0.429854\n", 493 | "Train Epoch: 7 [40320/60032 (67%)]\tLoss: 0.477897\n", 494 | "Train Epoch: 7 [42240/60032 (70%)]\tLoss: 0.261932\n", 495 | "Train Epoch: 7 [44160/60032 (74%)]\tLoss: 0.257700\n", 496 | "Train Epoch: 7 [46080/60032 (77%)]\tLoss: 0.344267\n", 497 | "Train Epoch: 7 [48000/60032 (80%)]\tLoss: 0.380841\n", 498 | "Train Epoch: 7 [49920/60032 (83%)]\tLoss: 0.290892\n", 499 | "Train Epoch: 7 [51840/60032 (86%)]\tLoss: 0.387709\n", 500 | "Train Epoch: 7 [53760/60032 (90%)]\tLoss: 0.354812\n", 501 | "Train Epoch: 7 [55680/60032 (93%)]\tLoss: 0.134910\n", 502 | "Train Epoch: 7 [57600/60032 (96%)]\tLoss: 0.238924\n", 503 | "Train Epoch: 7 [59520/60032 (99%)]\tLoss: 0.295154\n", 504 | "\n", 505 | "Test set: Average loss: 0.3751, Accuracy: 8654/10000 (87%)\n", 506 | "\n", 507 | "Train Epoch: 8 [0/60032 (0%)]\tLoss: 0.420723\n", 508 | "Train Epoch: 8 [1920/60032 (3%)]\tLoss: 0.291933\n", 509 | "Train Epoch: 8 [3840/60032 (6%)]\tLoss: 0.475130\n", 510 | "Train Epoch: 8 [5760/60032 (10%)]\tLoss: 0.320126\n", 511 | "Train Epoch: 8 [7680/60032 (13%)]\tLoss: 0.303498\n", 512 | "Train Epoch: 8 [9600/60032 (16%)]\tLoss: 0.245330\n", 513 | "Train Epoch: 8 [11520/60032 (19%)]\tLoss: 0.221644\n", 514 | "Train Epoch: 8 [13440/60032 (22%)]\tLoss: 0.250708\n", 515 | "Train Epoch: 8 [15360/60032 (26%)]\tLoss: 0.237361\n", 516 | "Train Epoch: 8 [17280/60032 (29%)]\tLoss: 0.284803\n", 517 | "Train Epoch: 8 [19200/60032 (32%)]\tLoss: 0.288773\n", 518 | "Train Epoch: 8 [21120/60032 (35%)]\tLoss: 0.253599\n", 519 | "Train Epoch: 8 [23040/60032 (38%)]\tLoss: 0.486923\n", 520 | "Train Epoch: 8 [24960/60032 (42%)]\tLoss: 0.347767\n", 521 | "Train Epoch: 8 [26880/60032 (45%)]\tLoss: 0.489972\n", 522 | "Train Epoch: 8 [28800/60032 (48%)]\tLoss: 0.348951\n", 523 | "Train Epoch: 8 [30720/60032 (51%)]\tLoss: 0.352687\n", 524 | "Train Epoch: 8 [32640/60032 (54%)]\tLoss: 0.262888\n", 525 | "Train Epoch: 8 [34560/60032 (58%)]\tLoss: 0.418008\n", 526 | "Train Epoch: 8 [36480/60032 (61%)]\tLoss: 0.195721\n", 527 | "Train Epoch: 8 [38400/60032 (64%)]\tLoss: 0.464302\n", 528 | "Train Epoch: 8 [40320/60032 (67%)]\tLoss: 0.250094\n", 529 | "Train Epoch: 8 [42240/60032 (70%)]\tLoss: 0.284123\n", 530 | "Train Epoch: 8 [44160/60032 (74%)]\tLoss: 0.260842\n", 531 | "Train Epoch: 8 [46080/60032 (77%)]\tLoss: 0.475878\n", 532 | "Train Epoch: 8 [48000/60032 (80%)]\tLoss: 0.385621\n", 533 | "Train Epoch: 8 [49920/60032 (83%)]\tLoss: 0.466025\n", 534 | "Train Epoch: 8 [51840/60032 (86%)]\tLoss: 0.365008\n", 535 | "Train Epoch: 8 [53760/60032 
(90%)]\tLoss: 0.391639\n", 536 | "Train Epoch: 8 [55680/60032 (93%)]\tLoss: 0.410657\n", 537 | "Train Epoch: 8 [57600/60032 (96%)]\tLoss: 0.185470\n", 538 | "Train Epoch: 8 [59520/60032 (99%)]\tLoss: 0.234561\n", 539 | "\n", 540 | "Test set: Average loss: 0.3718, Accuracy: 8635/10000 (86%)\n", 541 | "\n", 542 | "Train Epoch: 9 [0/60032 (0%)]\tLoss: 0.198807\n", 543 | "Train Epoch: 9 [1920/60032 (3%)]\tLoss: 0.311220\n", 544 | "Train Epoch: 9 [3840/60032 (6%)]\tLoss: 0.221064\n", 545 | "Train Epoch: 9 [5760/60032 (10%)]\tLoss: 0.438531\n", 546 | "Train Epoch: 9 [7680/60032 (13%)]\tLoss: 0.296658\n", 547 | "Train Epoch: 9 [9600/60032 (16%)]\tLoss: 0.329275\n", 548 | "Train Epoch: 9 [11520/60032 (19%)]\tLoss: 0.329174\n", 549 | "Train Epoch: 9 [13440/60032 (22%)]\tLoss: 0.413715\n", 550 | "Train Epoch: 9 [15360/60032 (26%)]\tLoss: 0.375727\n", 551 | "Train Epoch: 9 [17280/60032 (29%)]\tLoss: 0.404160\n", 552 | "Train Epoch: 9 [19200/60032 (32%)]\tLoss: 0.254195\n", 553 | "Train Epoch: 9 [21120/60032 (35%)]\tLoss: 0.262947\n", 554 | "Train Epoch: 9 [23040/60032 (38%)]\tLoss: 0.298239\n", 555 | "Train Epoch: 9 [24960/60032 (42%)]\tLoss: 0.286815\n", 556 | "Train Epoch: 9 [26880/60032 (45%)]\tLoss: 0.422843\n", 557 | "Train Epoch: 9 [28800/60032 (48%)]\tLoss: 0.255840\n", 558 | "Train Epoch: 9 [30720/60032 (51%)]\tLoss: 0.336783\n", 559 | "Train Epoch: 9 [32640/60032 (54%)]\tLoss: 0.134909\n", 560 | "Train Epoch: 9 [34560/60032 (58%)]\tLoss: 0.199918\n", 561 | "Train Epoch: 9 [36480/60032 (61%)]\tLoss: 0.397980\n", 562 | "Train Epoch: 9 [38400/60032 (64%)]\tLoss: 0.220716\n", 563 | "Train Epoch: 9 [40320/60032 (67%)]\tLoss: 0.305564\n", 564 | "Train Epoch: 9 [42240/60032 (70%)]\tLoss: 0.437461\n", 565 | "Train Epoch: 9 [44160/60032 (74%)]\tLoss: 0.232263\n", 566 | "Train Epoch: 9 [46080/60032 (77%)]\tLoss: 0.491814\n", 567 | "Train Epoch: 9 [48000/60032 (80%)]\tLoss: 0.470752\n", 568 | "Train Epoch: 9 [49920/60032 (83%)]\tLoss: 0.305019\n", 569 | "Train Epoch: 9 [51840/60032 (86%)]\tLoss: 0.280134\n", 570 | "Train Epoch: 9 [53760/60032 (90%)]\tLoss: 0.366624\n", 571 | "Train Epoch: 9 [55680/60032 (93%)]\tLoss: 0.322959\n", 572 | "Train Epoch: 9 [57600/60032 (96%)]\tLoss: 0.285094\n", 573 | "Train Epoch: 9 [59520/60032 (99%)]\tLoss: 0.311868\n", 574 | "\n", 575 | "Test set: Average loss: 0.3698, Accuracy: 8652/10000 (87%)\n", 576 | "\n", 577 | "Train Epoch: 10 [0/60032 (0%)]\tLoss: 0.281692\n", 578 | "Train Epoch: 10 [1920/60032 (3%)]\tLoss: 0.275689\n", 579 | "Train Epoch: 10 [3840/60032 (6%)]\tLoss: 0.371555\n", 580 | "Train Epoch: 10 [5760/60032 (10%)]\tLoss: 0.358771\n", 581 | "Train Epoch: 10 [7680/60032 (13%)]\tLoss: 0.338153\n", 582 | "Train Epoch: 10 [9600/60032 (16%)]\tLoss: 0.201937\n", 583 | "Train Epoch: 10 [11520/60032 (19%)]\tLoss: 0.321843\n", 584 | "Train Epoch: 10 [13440/60032 (22%)]\tLoss: 0.343546\n", 585 | "Train Epoch: 10 [15360/60032 (26%)]\tLoss: 0.246389\n", 586 | "Train Epoch: 10 [17280/60032 (29%)]\tLoss: 0.277746\n", 587 | "Train Epoch: 10 [19200/60032 (32%)]\tLoss: 0.336140\n", 588 | "Train Epoch: 10 [21120/60032 (35%)]\tLoss: 0.256819\n", 589 | "Train Epoch: 10 [23040/60032 (38%)]\tLoss: 0.409090\n", 590 | "Train Epoch: 10 [24960/60032 (42%)]\tLoss: 0.308156\n", 591 | "Train Epoch: 10 [26880/60032 (45%)]\tLoss: 0.245950\n", 592 | "Train Epoch: 10 [28800/60032 (48%)]\tLoss: 0.136618\n", 593 | "Train Epoch: 10 [30720/60032 (51%)]\tLoss: 0.244874\n", 594 | "Train Epoch: 10 [32640/60032 (54%)]\tLoss: 0.290621\n", 595 | "Train Epoch: 10 [34560/60032 (58%)]\tLoss: 
0.258422\n", 596 | "Train Epoch: 10 [36480/60032 (61%)]\tLoss: 0.330319\n", 597 | "Train Epoch: 10 [38400/60032 (64%)]\tLoss: 0.240209\n", 598 | "Train Epoch: 10 [40320/60032 (67%)]\tLoss: 0.444517\n", 599 | "Train Epoch: 10 [42240/60032 (70%)]\tLoss: 0.285876\n", 600 | "Train Epoch: 10 [44160/60032 (74%)]\tLoss: 0.318286\n", 601 | "Train Epoch: 10 [46080/60032 (77%)]\tLoss: 0.318790\n", 602 | "Train Epoch: 10 [48000/60032 (80%)]\tLoss: 0.203132\n", 603 | "Train Epoch: 10 [49920/60032 (83%)]\tLoss: 0.429110\n", 604 | "Train Epoch: 10 [51840/60032 (86%)]\tLoss: 0.274273\n", 605 | "Train Epoch: 10 [53760/60032 (90%)]\tLoss: 0.310811\n", 606 | "Train Epoch: 10 [55680/60032 (93%)]\tLoss: 0.297233\n", 607 | "Train Epoch: 10 [57600/60032 (96%)]\tLoss: 0.390102\n", 608 | "Train Epoch: 10 [59520/60032 (99%)]\tLoss: 0.396362\n", 609 | "\n", 610 | "Test set: Average loss: 0.3375, Accuracy: 8793/10000 (88%)\n", 611 | "\n", 612 | "CPU times: user 20min 36s, sys: 59.7 s, total: 21min 36s\n", 613 | "Wall time: 21min 37s\n" 614 | ] 615 | } 616 | ], 617 | "source": [ 618 | "%%time\n", 619 | "model = FashionNet().to(device)\n", 620 | "optimizer = optim.SGD(model.parameters(), lr=args.lr)\n", 621 | "\n", 622 | "for epoch in range(1, args.epochs + 1):\n", 623 | " train(args, model, device, federated_train_loader, optimizer, epoch)\n", 624 | " test(args, model, device, testloader)\n", 625 | "\n", 626 | "if (args.save_model):\n", 627 | " torch.save(model.state_dict(), \"Fmnist_cnn.pt\")" 628 | ] 629 | } 630 | ], 631 | "metadata": { 632 | "colab": { 633 | "name": "FMNIST_FL.ipynb", 634 | "provenance": [], 635 | "version": "0.3.2" 636 | }, 637 | "kernelspec": { 638 | "display_name": "Python 3", 639 | "language": "python", 640 | "name": "python3" 641 | }, 642 | "language_info": { 643 | "codemirror_mode": { 644 | "name": "ipython", 645 | "version": 3 646 | }, 647 | "file_extension": ".py", 648 | "mimetype": "text/x-python", 649 | "name": "python", 650 | "nbconvert_exporter": "python", 651 | "pygments_lexer": "ipython3", 652 | "version": "3.7.3" 653 | } 654 | }, 655 | "nbformat": 4, 656 | "nbformat_minor": 1 657 | } 658 | -------------------------------------------------------------------------------- /Temitope Oladokun: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "Federated Learning & RaspberryPi ", 7 | "version": "0.3.2", 8 | "provenance": [], 9 | "collapsed_sections": [], 10 | "toc_visible": true, 11 | "include_colab_link": true 12 | }, 13 | "kernelspec": { 14 | "name": "python3", 15 | "display_name": "Python 3" 16 | } 17 | }, 18 | "cells": [ 19 | { 20 | "cell_type": "markdown", 21 | "metadata": { 22 | "id": "view-in-github", 23 | "colab_type": "text" 24 | }, 25 | "source": [ 26 | "\"Open" 27 | ] 28 | }, 29 | { 30 | "cell_type": "code", 31 | "metadata": { 32 | "id": "NFlWUbBa59YR", 33 | "colab_type": "code", 34 | "outputId": "5d7c65bd-2143-48ed-9b0c-5a59db9290a8", 35 | "colab": { 36 | "base_uri": "https://localhost:8080/", 37 | "height": 1000 38 | } 39 | }, 40 | "source": [ 41 | "!pip install tf-encrypted\n", 42 | "\n", 43 | "! URL=\"https://github.com/openmined/PySyft.git\" && FOLDER=\"PySyft\" && if [ ! 
-d $FOLDER ]; then git clone -b dev --single-branch $URL; else (cd $FOLDER && git pull $URL && cd ..); fi;\n", 44 | "\n", 45 | "!cd PySyft; python setup.py install > /dev/null\n", 46 | "\n", 47 | "import os\n", 48 | "import sys\n", 49 | "module_path = os.path.abspath(os.path.join('./PySyft'))\n", 50 | "if module_path not in sys.path:\n", 51 | " sys.path.append(module_path)\n", 52 | " \n", 53 | "!pip install --upgrade --force-reinstall lz4\n", 54 | "!pip install --upgrade --force-reinstall websocket\n", 55 | "!pip install --upgrade --force-reinstall websockets\n", 56 | "!pip install --upgrade --force-reinstall zstd" 57 | ], 58 | "execution_count": 3, 59 | "outputs": [ 60 | { 61 | "output_type": "stream", 62 | "text": [ 63 | "Collecting tf-encrypted\n", 64 | "\u001b[?25l Downloading https://files.pythonhosted.org/packages/1f/82/cf15aeac92525da2f794956712e7ebf418819390dec783430ee242b52d0b/tf_encrypted-0.5.8-py3-none-manylinux1_x86_64.whl (2.1MB)\n", 65 | "\u001b[K |████████████████████████████████| 2.1MB 2.8MB/s \n", 66 | "\u001b[?25hRequirement already satisfied: tensorflow<2,>=1.12.0 in /usr/local/lib/python3.6/dist-packages (from tf-encrypted) (1.14.0)\n", 67 | "Collecting pyyaml>=5.1 (from tf-encrypted)\n", 68 | "\u001b[?25l Downloading https://files.pythonhosted.org/packages/e3/e8/b3212641ee2718d556df0f23f78de8303f068fe29cdaa7a91018849582fe/PyYAML-5.1.2.tar.gz (265kB)\n", 69 | "\u001b[K |████████████████████████████████| 266kB 46.3MB/s \n", 70 | "\u001b[?25hRequirement already satisfied: numpy>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from tf-encrypted) (1.16.4)\n", 71 | "Requirement already satisfied: tensorboard<1.15.0,>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow<2,>=1.12.0->tf-encrypted) (1.14.0)\n", 72 | "Requirement already satisfied: tensorflow-estimator<1.15.0rc0,>=1.14.0rc0 in /usr/local/lib/python3.6/dist-packages (from tensorflow<2,>=1.12.0->tf-encrypted) (1.14.0)\n", 73 | "Requirement already satisfied: keras-applications>=1.0.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow<2,>=1.12.0->tf-encrypted) (1.0.8)\n", 74 | "Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow<2,>=1.12.0->tf-encrypted) (1.15.0)\n", 75 | "Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from tensorflow<2,>=1.12.0->tf-encrypted) (1.1.0)\n", 76 | "Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow<2,>=1.12.0->tf-encrypted) (1.1.0)\n", 77 | "Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow<2,>=1.12.0->tf-encrypted) (0.33.4)\n", 78 | "Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow<2,>=1.12.0->tf-encrypted) (1.12.0)\n", 79 | "Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow<2,>=1.12.0->tf-encrypted) (1.11.2)\n", 80 | "Requirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow<2,>=1.12.0->tf-encrypted) (0.7.1)\n", 81 | "Requirement already satisfied: gast>=0.2.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow<2,>=1.12.0->tf-encrypted) (0.2.2)\n", 82 | "Requirement already satisfied: google-pasta>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow<2,>=1.12.0->tf-encrypted) (0.1.7)\n", 83 | "Requirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages 
(from tensorflow<2,>=1.12.0->tf-encrypted) (3.7.1)\n", 84 | "Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow<2,>=1.12.0->tf-encrypted) (0.8.0)\n", 85 | "Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.15.0,>=1.14.0->tensorflow<2,>=1.12.0->tf-encrypted) (3.1.1)\n", 86 | "Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.15.0,>=1.14.0->tensorflow<2,>=1.12.0->tf-encrypted) (41.0.1)\n", 87 | "Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.15.0,>=1.14.0->tensorflow<2,>=1.12.0->tf-encrypted) (0.15.5)\n", 88 | "Requirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras-applications>=1.0.6->tensorflow<2,>=1.12.0->tf-encrypted) (2.8.0)\n", 89 | "Building wheels for collected packages: pyyaml\n", 90 | " Building wheel for pyyaml (setup.py) ... \u001b[?25l\u001b[?25hdone\n", 91 | " Created wheel for pyyaml: filename=PyYAML-5.1.2-cp36-cp36m-linux_x86_64.whl size=44105 sha256=d43e3214bccf1bb6bb5b0918463fbb4e1d2d953614c7e947965eaf34246f4233\n", 92 | " Stored in directory: /root/.cache/pip/wheels/d9/45/dd/65f0b38450c47cf7e5312883deb97d065e030c5cca0a365030\n", 93 | "Successfully built pyyaml\n", 94 | "Installing collected packages: pyyaml, tf-encrypted\n", 95 | " Found existing installation: PyYAML 3.13\n", 96 | " Uninstalling PyYAML-3.13:\n", 97 | " Successfully uninstalled PyYAML-3.13\n", 98 | "Successfully installed pyyaml-5.1.2 tf-encrypted-0.5.8\n", 99 | "Cloning into 'PySyft'...\n", 100 | "remote: Enumerating objects: 30364, done.\u001b[K\n", 101 | "remote: Total 30364 (delta 0), reused 0 (delta 0), pack-reused 30364\u001b[K\n", 102 | "Receiving objects: 100% (30364/30364), 33.00 MiB | 23.42 MiB/s, done.\n", 103 | "Resolving deltas: 100% (20363/20363), done.\n", 104 | "zip_safe flag not set; analyzing archive contents...\n", 105 | "zip_safe flag not set; analyzing archive contents...\n", 106 | "__pycache__.zstd.cpython-36: module references __file__\n", 107 | "Collecting lz4\n", 108 | "\u001b[?25l Downloading https://files.pythonhosted.org/packages/0a/c6/96bbb3525a63ebc53ea700cc7d37ab9045542d33b4d262d0f0408ad9bbf2/lz4-2.1.10-cp36-cp36m-manylinux1_x86_64.whl (385kB)\n", 109 | "\u001b[K |████████████████████████████████| 389kB 2.8MB/s \n", 110 | "\u001b[31mERROR: syft 0.1.23a1 has requirement msgpack>=0.6.1, but you'll have msgpack 0.5.6 which is incompatible.\u001b[0m\n", 111 | "\u001b[?25hInstalling collected packages: lz4\n", 112 | " Found existing installation: lz4 2.1.10\n", 113 | " Uninstalling lz4-2.1.10:\n", 114 | " Successfully uninstalled lz4-2.1.10\n", 115 | "Successfully installed lz4-2.1.10\n", 116 | "Collecting websocket\n", 117 | "\u001b[?25l Downloading https://files.pythonhosted.org/packages/f2/6d/a60d620ea575c885510c574909d2e3ed62129b121fa2df00ca1c81024c87/websocket-0.2.1.tar.gz (195kB)\n", 118 | "\u001b[K |████████████████████████████████| 204kB 2.8MB/s \n", 119 | "\u001b[?25hCollecting gevent (from websocket)\n", 120 | "\u001b[?25l Downloading https://files.pythonhosted.org/packages/f2/ca/5b5962361ed832847b6b2f9a2d0452c8c2f29a93baef850bb8ad067c7bf9/gevent-1.4.0-cp36-cp36m-manylinux1_x86_64.whl (5.5MB)\n", 121 | "\u001b[K |████████████████████████████████| 5.5MB 46.1MB/s \n", 122 | "\u001b[?25hCollecting greenlet (from websocket)\n", 123 | "\u001b[?25l Downloading 
https://files.pythonhosted.org/packages/bf/45/142141aa47e01a5779f0fa5a53b81f8379ce8f2b1cd13df7d2f1d751ae42/greenlet-0.4.15-cp36-cp36m-manylinux1_x86_64.whl (41kB)\n", 124 | "\u001b[K |████████████████████████████████| 51kB 22.0MB/s \n", 125 | "\u001b[?25hBuilding wheels for collected packages: websocket\n", 126 | " Building wheel for websocket (setup.py) ... \u001b[?25l\u001b[?25hdone\n", 127 | " Created wheel for websocket: filename=websocket-0.2.1-cp36-none-any.whl size=192134 sha256=46d452d7b290beb083eccb732aa65d8fef99d0db9514cdb5d68eeb0e6afb25ef\n", 128 | " Stored in directory: /root/.cache/pip/wheels/35/f7/5c/9e8243838269ea93f05295708519a6e183fa6b515d9ce3b636\n", 129 | "Successfully built websocket\n", 130 | "Installing collected packages: greenlet, gevent, websocket\n", 131 | " Found existing installation: greenlet 0.4.15\n", 132 | " Uninstalling greenlet-0.4.15:\n", 133 | " Successfully uninstalled greenlet-0.4.15\n", 134 | " Found existing installation: gevent 1.4.0\n", 135 | " Uninstalling gevent-1.4.0:\n", 136 | " Successfully uninstalled gevent-1.4.0\n", 137 | "Successfully installed gevent-1.4.0 greenlet-0.4.15 websocket-0.2.1\n", 138 | "Collecting websockets\n", 139 | "\u001b[?25l Downloading https://files.pythonhosted.org/packages/f0/4b/ad228451b1c071c5c52616b7d4298ebcfcac5ae8515ede959db19e4cd56d/websockets-8.0.2-cp36-cp36m-manylinux1_x86_64.whl (72kB)\n", 140 | "\u001b[K |████████████████████████████████| 81kB 3.2MB/s \n", 141 | "\u001b[31mERROR: syft 0.1.23a1 has requirement msgpack>=0.6.1, but you'll have msgpack 0.5.6 which is incompatible.\u001b[0m\n", 142 | "\u001b[?25hInstalling collected packages: websockets\n", 143 | " Found existing installation: websockets 8.0.2\n", 144 | " Uninstalling websockets-8.0.2:\n", 145 | " Successfully uninstalled websockets-8.0.2\n", 146 | "Successfully installed websockets-8.0.2\n", 147 | "Collecting zstd\n", 148 | "\u001b[?25l Downloading https://files.pythonhosted.org/packages/22/37/6a7ba746ebddbd6cd06de84367515d6bc239acd94fb3e0b1c85788176ca2/zstd-1.4.1.0.tar.gz (454kB)\n", 149 | "\u001b[K |████████████████████████████████| 460kB 2.7MB/s \n", 150 | "\u001b[?25hBuilding wheels for collected packages: zstd\n", 151 | " Building wheel for zstd (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n", 152 | "  Created wheel for zstd: filename=zstd-1.4.1.0-cp36-cp36m-linux_x86_64.whl size=1067098 sha256=4378524782195c5812206e3f2691909d5690c78bd1383434b34010f1c02974b4\n", 153 | "  Stored in directory: /root/.cache/pip/wheels/66/3f/ee/ac08c81af7c1b24a80c746df669ea3cb37542d27877d66ccf4\n", 154 | "Successfully built zstd\n", 155 | "\u001b[31mERROR: syft 0.1.23a1 has requirement msgpack>=0.6.1, but you'll have msgpack 0.5.6 which is incompatible.\u001b[0m\n", 156 | "Installing collected packages: zstd\n", 157 | "  Found existing installation: zstd 1.4.1.0\n", 158 | "    Uninstalling zstd-1.4.1.0:\n", 159 | "      Successfully uninstalled zstd-1.4.1.0\n", 160 | "Successfully installed zstd-1.4.1.0\n" 161 | ], 162 | "name": "stdout" 163 | } 164 | ] 165 | }, 166 | { 167 | "cell_type": "markdown", 168 | "metadata": { 169 | "id": "NbksiWKqHSKe", 170 | "colab_type": "text" 171 | }, 172 | "source": [ 173 | "# Importing the necessary libraries" 174 | ] 175 | }, 176 | { 177 | "cell_type": "code", 178 | "metadata": { 179 | "id": "kq0bR3hV6qW0", 180 | "colab_type": "code", 181 | "outputId": "bdb4df99-93ec-4c90-d265-f17c75adc832", 182 | "colab": { 183 | "base_uri": "https://localhost:8080/", 184 | "height": 34 185 | } 186 | }, 187 | "source": [ 188 | "from __future__ import print_function\n", 189 | "import argparse\n", 190 | "import numpy as np\n", 191 | "import pandas as pd\n", 192 | "import torch\n", 193 | "import torch.nn as nn\n", 194 | "import torch.nn.functional as F\n", 195 | "import torch.optim as optim\n", 196 | "torch.__version__" 197 | ], 198 | "execution_count": 5, 199 | "outputs": [ 200 | { 201 | "output_type": "execute_result", 202 | "data": { 203 | "text/plain": [ 204 | "'1.1.0'" 205 | ] 206 | }, 207 | "metadata": { 208 | "tags": [] 209 | }, 210 | "execution_count": 5 211 | } 212 | ] 213 | }, 214 | { 215 | "cell_type": "markdown", 216 | "metadata": { 217 | "id": "l-ecdXAvagy3", 218 | "colab_type": "text" 219 | }, 220 | "source": [ 221 | "## Start your syft workers" 222 | ] 223 | }, 224 | { 225 | "cell_type": "markdown", 226 | "metadata": { 227 | "id": "vJelG_lqaurB", 228 | "colab_type": "text" 229 | }, 230 | "source": [ 231 | "" 232 | ] 233 | }, 234 | { 235 | "cell_type": "code", 236 | "metadata": { 237 | "id": "oaRPUY0aHIPo", 238 | "colab_type": "code", 239 | "colab": {} 240 | }, 241 | "source": [ 242 | "import syft as sy\n", 243 | "hook = sy.TorchHook(torch) #hook PyTorch, i.e. add extra functionalities to support Federated Learning\n", 244 | "Temitope = sy.VirtualWorker(hook, id=\"Temitope\") #define remote worker Temitope\n", 245 | "#Sarah = sy.VirtualWorker(hook, id= \"Sarah\") #and Sarah\n", 246 | "Ayanfunke = sy.VirtualWorker(hook, id=\"Ayanfunke\") #and Ayanfunke" 247 | ], 248 | "execution_count": 0, 249 | "outputs": [] 250 | }, 251 | { 252 | "cell_type": "markdown", 253 | "metadata": { 254 | "id": "3sKG0llXa0Xr", 255 | "colab_type": "text" 256 | }, 257 | "source": [ 258 | "# Start your code" 259 | ] 260 | }, 261 | { 262 | "cell_type": "code", 263 | "metadata": { 264 | "id": "9vnhBTDGZaqx", 265 | "colab_type": "code", 266 | "outputId": "56ba5ca6-6aae-413a-c255-1b6b702420bf", 267 | "colab": { 268 | "base_uri": "https://localhost:8080/", 269 | "height": 34 270 | } 271 | }, 272 | "source": [ 273 | "x = torch.tensor([1.]).send(Ayanfunke)\n", 274 | "y = (x * 2).get()\n", 275 | "y" 276 | ], 277 | "execution_count": 8, 278 | "outputs": [ 279 | { 280 | "output_type": "execute_result", 281 | "data": { 282 | "text/plain": [ 283 | "tensor([2.])" 284 | ] 285 | }, 286 | 
"metadata": { 287 | "tags": [] 288 | }, 289 | "execution_count": 8 290 | } 291 | ] 292 | }, 293 | { 294 | "cell_type": "code", 295 | "metadata": { 296 | "id": "5cSxr6US2UnE", 297 | "colab_type": "code", 298 | "colab": {} 299 | }, 300 | "source": [ 301 | "pip install -r \"../../../requirements.txt\"" 302 | ], 303 | "execution_count": 0, 304 | "outputs": [] 305 | }, 306 | { 307 | "cell_type": "code", 308 | "metadata": { 309 | "id": "CRHRnKwTCylj", 310 | "colab_type": "code", 311 | "outputId": "71b883e1-66f6-4025-e132-dd84b009a309", 312 | "colab": { 313 | "base_uri": "https://localhost:8080/", 314 | "height": 34 315 | } 316 | }, 317 | "source": [ 318 | "from zipfile import ZipFile\n", 319 | "filename = \"Nigerian Names.zip\"\n", 320 | "\n", 321 | "with ZipFile(filename, 'r') as zip:\n", 322 | " zip.extractall()\n", 323 | " print('Done')" 324 | ], 325 | "execution_count": 11, 326 | "outputs": [ 327 | { 328 | "output_type": "stream", 329 | "text": [ 330 | "Done\n" 331 | ], 332 | "name": "stdout" 333 | } 334 | ] 335 | }, 336 | { 337 | "cell_type": "code", 338 | "metadata": { 339 | "id": "HP3yfMPfddwm", 340 | "colab_type": "code", 341 | "colab": {} 342 | }, 343 | "source": [ 344 | "#The below code will remove data form the document in the cloud. I am leaving it commeneted till i need it\n", 345 | "#!rm -rf data" 346 | ], 347 | "execution_count": 0, 348 | "outputs": [] 349 | }, 350 | { 351 | "cell_type": "code", 352 | "metadata": { 353 | "id": "h_ZvXrHeBAE0", 354 | "colab_type": "code", 355 | "colab": {} 356 | }, 357 | "source": [ 358 | "#!wget https://github.com/TemitopeOladokun/FederatedLearningandRaspberryPi/tree/master/data/NigerianNames\n" 359 | ], 360 | "execution_count": 0, 361 | "outputs": [] 362 | }, 363 | { 364 | "cell_type": "code", 365 | "metadata": { 366 | "id": "zgUP_5Rz2XXs", 367 | "colab_type": "code", 368 | "colab": {} 369 | }, 370 | "source": [ 371 | "from __future__ import unicode_literals, print_function, division\n", 372 | "#from torch.utils.data import Dataset\n", 373 | "\n", 374 | "\n", 375 | "import torch\n", 376 | "from io import open\n", 377 | "import glob\n", 378 | "import os\n", 379 | "import numpy as np\n", 380 | "import unicodedata\n", 381 | "import string\n", 382 | "import random\n", 383 | "import torch.nn as nn\n", 384 | "import time\n", 385 | "import math\n", 386 | "import syft as sy\n", 387 | "import pandas as pd\n", 388 | "import random\n", 389 | "from syft.frameworks.torch.federated import utils\n", 390 | "\n", 391 | "from syft.workers import WebsocketClientWorker\n", 392 | "import matplotlib.pyplot as plt\n", 393 | "import matplotlib.ticker as ticker" 394 | ], 395 | "execution_count": 0, 396 | "outputs": [] 397 | }, 398 | { 399 | "cell_type": "code", 400 | "metadata": { 401 | "id": "JF2nN0li79Pt", 402 | "colab_type": "code", 403 | "colab": {} 404 | }, 405 | "source": [ 406 | "#Load all the files in a certain path\n", 407 | "def findFiles(path):\n", 408 | " return glob.glob(path)\n", 409 | "\n", 410 | "# Read a file and split into lines\n", 411 | "def readLines(filename):\n", 412 | " lines = open(filename, encoding='utf-8').read().strip().split('\\n')\n", 413 | " return [unicodeToAscii(line) for line in lines]\n", 414 | "\n", 415 | "#convert a string 's' in unicode format to ASCII format\n", 416 | "def unicodeToAscii(s):\n", 417 | " return ''.join(\n", 418 | " c for c in unicodedata.normalize('NFD', s)\n", 419 | " if unicodedata.category(c) != 'Mn'\n", 420 | " and c in all_letters\n", 421 | " )\n", 422 | " " 423 | ], 424 | "execution_count": 0, 425 | "outputs": 
[] 426 | }, 427 | { 428 | "cell_type": "code", 429 | "metadata": { 430 | "id": "xIo8AabEDJIN", 431 | "colab_type": "code", 432 | "outputId": "53221bcd-4d22-4bce-c734-e5b2a6211544", 433 | "colab": { 434 | "base_uri": "https://localhost:8080/", 435 | "height": 153 436 | } 437 | }, 438 | "source": [ 439 | "all_letters = string.ascii_letters + \" .,;'\"\n", 440 | "n_letters = len(all_letters)\n", 441 | "\n", 442 | "#dictionary containing the nation as key and the names as values\n", 443 | "#Example: category_lines[\"italian\"] = [\"Abandonato\",\"Abatangelo\",\"Abatantuono\",...]\n", 444 | "category_lines = {}\n", 445 | "#List containing the different categories in the data\n", 446 | "all_categories = []\n", 447 | "\n", 448 | "print (\"This dataset has Nigerian Names\" + \"\\n\")\n", 449 | "#print(\"Amount of categories:\" + str(n_categories) + \"\\n\")\n", 450 | "\n", 451 | "for filename in findFiles('Nigerian Names/*.txt'):\n", 452 | "    print(filename)  \n", 453 | "    category = os.path.splitext(os.path.basename(filename))[0]\n", 454 | "    all_categories.append(category)\n", 455 | "    lines = readLines(filename)\n", 456 | "    category_lines[category] = lines   \n", 457 | "    \n", 458 | "n_categories = len(all_categories)\n", 459 | "\n", 460 | "#print (\"This dataset has Nigerian names included\" + \"\\n\")\n", 461 | "print(\"Amount of categories:\" + str(n_categories))" 462 | ], 463 | "execution_count": 14, 464 | "outputs": [ 465 | { 466 | "output_type": "stream", 467 | "text": [ 468 | "This dataset has Nigerian Names\n", 469 | "\n", 470 | "Nigerian Names/Edo.txt\n", 471 | "Nigerian Names/Urhobo.txt\n", 472 | "Nigerian Names/Hausa.txt\n", 473 | "Nigerian Names/Yoruba.txt\n", 474 | "Nigerian Names/Igbo.txt\n", 475 | "Amount of categories:5\n" 476 | ], 477 | "name": "stdout" 478 | } 479 | ] 480 | }, 481 | { 482 | "cell_type": "code", 483 | "metadata": { 484 | "id": "vQffmR5yLJ0L", 485 | "colab_type": "code", 486 | "outputId": "e6c69630-c3b6-42aa-ef0f-c36c0bce908f", 487 | "colab": { 488 | "base_uri": "https://localhost:8080/", 489 | "height": 323 490 | } 491 | }, 492 | "source": [ 493 | "print(\"These are Yoruba names\")\n", 494 | "print(*category_lines['Yoruba'][:10] , sep = \"\\n\")\n", 495 | "print(\"\\n\")\n", 496 | "print(\"These are Urhobo names\")\n", 497 | "print(*category_lines['Urhobo'][:4], sep = \"\\n\")" 498 | ], 499 | "execution_count": 21, 500 | "outputs": [ 501 | { 502 | "output_type": "stream", 503 | "text": [ 504 | "These are Yoruba names\n", 505 | "Temitope\n", 506 | "Adeola\n", 507 | "Ayanfunke\n", 508 | "Eyitayo\n", 509 | "Ayodele\n", 510 | "Oyeleke\n", 511 | "Funke\n", 512 | "Olayemi\n", 513 | "Damilola\n", 514 | "Oluwaseun\n", 515 | "\n", 516 | "\n", 517 | "These are Urhobo names\n", 518 | "Ejiro\n", 519 | "Efetobo\n", 520 | "Anaborhi\n", 521 | "Edewor\n" 522 | ], 523 | "name": "stdout" 524 | } 525 | ] 526 | }, 527 | { 528 | "cell_type": "code", 529 | "metadata": { 530 | "id": "j-8sqnGyJEpH", 531 | "colab_type": "code", 532 | "colab": {} 533 | }, 534 | "source": [ 535 | "class LanguageDataset(Dataset):\n", 536 | "    #Constructor is mandatory\n", 537 | "    def __init__(self, text, labels, transform=None):\n", 538 | "        self.data = text\n", 539 | "        self.targets = labels #categories\n", 540 | "        #self.to_torchtensor()\n", 541 | "        self.transform = transform\n", 542 | "    \n", 543 | "    def to_torchtensor(self):      \n", 544 | "        self.data = torch.from_numpy(self.data) #was self.text, which doesn't exist; torch.from_numpy takes no requires_grad flag\n", 545 | "        self.targets = torch.from_numpy(self.targets) #was assigned to self.labels, which the class never uses\n", 546 | "    \n", 547 | "    def 
__len__(self):\n", 548 | " #Mandatory\n", 549 | " '''Returns:\n", 550 | " Length [int]: Length of Dataset/batches\n", 551 | " '''\n", 552 | " return len(self.data)\n", 553 | " \n", 554 | " def __getitem__(self, idx): \n", 555 | " #Mandatory \n", 556 | " \n", 557 | " '''Returns:\n", 558 | " Data [Torch Tensor]: \n", 559 | " Target [ Torch Tensor]:\n", 560 | " '''\n", 561 | " sample = self.data[idx]\n", 562 | " target = self.targets[idx]\n", 563 | " \n", 564 | " if self.transform:\n", 565 | " sample = self.transform(sample)\n", 566 | " \n", 567 | " return sample,target" 568 | ], 569 | "execution_count": 0, 570 | "outputs": [] 571 | }, 572 | { 573 | "cell_type": "code", 574 | "metadata": { 575 | "id": "OI7az0p0K2_0", 576 | "colab_type": "code", 577 | "colab": {} 578 | }, 579 | "source": [ 580 | "#The list of arguments for our program. We will be needing most of them soon.\n", 581 | "class Arguments():\n", 582 | " def __init__(self):\n", 583 | " self.batch_size = 1\n", 584 | " self.learning_rate = 0.005\n", 585 | " self.epochs = 10000\n", 586 | " self.federate_after_n_batches = 15000\n", 587 | " self.seed = 1\n", 588 | " self.print_every = 200\n", 589 | " self.plot_every = 100\n", 590 | " self.use_cuda = False\n", 591 | " \n", 592 | "args = Arguments()\n" 593 | ], 594 | "execution_count": 0, 595 | "outputs": [] 596 | }, 597 | { 598 | "cell_type": "code", 599 | "metadata": { 600 | "id": "4XZSzKp4o4Up", 601 | "colab_type": "code", 602 | "outputId": "da68bcf0-3a9e-4fea-c1b4-15be62997900", 603 | "colab": { 604 | "base_uri": "https://localhost:8080/", 605 | "height": 34 606 | } 607 | }, 608 | "source": [ 609 | "%%latex\n", 610 | "\n", 611 | "\\begin{split}\n", 612 | "names\\_list = [d_1,...,d_n] \\\\\n", 613 | "\n", 614 | "category\\_list = [c_1,...,c_n] \n", 615 | "\\end{split}\n", 616 | "\n", 617 | "\n", 618 | "Where $n$ is the total amount of data points" 619 | ], 620 | "execution_count": 24, 621 | "outputs": [ 622 | { 623 | "output_type": "display_data", 624 | "data": { 625 | "text/latex": "\n\\begin{split}\nnames\\_list = [d_1,...,d_n] \\\\\n\ncategory\\_list = [c_1,...,c_n] \n\\end{split}\n\n\nWhere $n$ is the total amount of data points", 626 | "text/plain": [ 627 | "" 628 | ] 629 | }, 630 | "metadata": { 631 | "tags": [] 632 | } 633 | } 634 | ] 635 | }, 636 | { 637 | "cell_type": "code", 638 | "metadata": { 639 | "id": "I-_MgMJv0j4a", 640 | "colab_type": "code", 641 | "outputId": "df77c131-2c50-48d3-be0f-8733a1a11731", 642 | "colab": { 643 | "base_uri": "https://localhost:8080/", 644 | "height": 122 645 | } 646 | }, 647 | "source": [ 648 | "#Set of names(X)\n", 649 | "names_list = []\n", 650 | "#Set of labels (Y)\n", 651 | "category_list = []\n", 652 | "\n", 653 | "#Convert into a list with corresponding label.\n", 654 | "\n", 655 | "for nation, names in category_lines.items():\n", 656 | " #iterate over every single name\n", 657 | " for name in names:\n", 658 | " names_list.append(name) #input data point\n", 659 | " category_list.append(nation) #label\n", 660 | " \n", 661 | "#let's see if it was successfully loaded. 
Each data sample(X) should have its own corresponding category(Y)\n", 662 | "print(names_list[0:20])\n", 663 | "print(category_list[0:20])\n", 664 | "\n", 665 | "print(\"\\n \\n Amount of data points loaded: \" + str(len(names_list)))\n", 666 | "\n" 667 | ], 668 | "execution_count": 30, 669 | "outputs": [ 670 | { 671 | "output_type": "stream", 672 | "text": [ 673 | "['Adesua', 'Obi', 'Ogheneme', 'Eromosele', 'Ejiro', 'Efetobo', 'Anaborhi', 'Edewor', 'Ejiroghene', 'Edafetanure', 'Efemena', 'Efemuaye', 'Efetobore', 'Etanomare', 'Etaredafe', 'Omonigho', 'Ighomuedafe', 'Ighovavwerhe', 'Omonigho', 'Omonoro']\n", 674 | "['Edo', 'Edo', 'Edo', 'Edo', 'Urhobo', 'Urhobo', 'Urhobo', 'Urhobo', 'Urhobo', 'Urhobo', 'Urhobo', 'Urhobo', 'Urhobo', 'Urhobo', 'Urhobo', 'Urhobo', 'Urhobo', 'Urhobo', 'Urhobo', 'Urhobo']\n", 675 | "\n", 676 | " \n", 677 | " Amount of data points loaded: 55\n" 678 | ], 679 | "name": "stdout" 680 | } 681 | ] 682 | }, 683 | { 684 | "cell_type": "code", 685 | "metadata": { 686 | "id": "BwOuKoHR01Ji", 687 | "colab_type": "code", 688 | "outputId": "45bd508d-0a33-4540-d5d2-bb7abcd4003b", 689 | "colab": { 690 | "base_uri": "https://localhost:8080/", 691 | "height": 71 692 | } 693 | }, 694 | "source": [ 695 | "#Assign an integer to every category\n", 696 | "categories_numerical = pd.factorize(category_list)[0]\n", 697 | "#Let's wrap our categories with a tensor, so that it can be loaded by LanguageDataset\n", 698 | "category_tensor = torch.tensor(np.array(categories_numerical), dtype=torch.long)\n", 699 | "#Ready to be processed by torch.from_numpy in LanguageDataset\n", 700 | "categories_numpy = np.array(category_tensor)\n", 701 | "\n", 702 | "#Let's see a few resulting categories\n", 703 | "print(names_list[0:20])\n", 704 | "print(categories_numpy[0:20])\n", 705 | "\n" 706 | ], 707 | "execution_count": 33, 708 | "outputs": [ 709 | { 710 | "output_type": "stream", 711 | "text": [ 712 | "['Adesua', 'Obi', 'Ogheneme', 'Eromosele', 'Ejiro', 'Efetobo', 'Anaborhi', 'Edewor', 'Ejiroghene', 'Edafetanure', 'Efemena', 'Efemuaye', 'Efetobore', 'Etanomare', 'Etaredafe', 'Omonigho', 'Ighomuedafe', 'Ighovavwerhe', 'Omonigho', 'Omonoro']\n", 713 | "[0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]\n" 714 | ], 715 | "name": "stdout" 716 | } 717 | ] 718 | }, 719 | { 720 | "cell_type": "code", 721 | "metadata": { 722 | "id": "teVvzmQbZ5rO", 723 | "colab_type": "code", 724 | "outputId": "9831be9d-f03c-41d1-cc4e-c8b51e653808", 725 | "colab": { 726 | "base_uri": "https://localhost:8080/", 727 | "height": 510 728 | } 729 | }, 730 | "source": [ 731 | "def letterToIndex(letter):\n", 732 | " return all_letters.find(letter)\n", 733 | " \n", 734 | "# Just for demonstration, turn a letter into a <1 x n_letters> Tensor\n", 735 | "def letterToTensor(letter):\n", 736 | " tensor = torch.zeros(1, n_letters)\n", 737 | " tensor[0][letterToIndex(letter)] = 1\n", 738 | " return tensor\n", 739 | "\n", 740 | "# Turn a line into a ,\n", 741 | "# or an array of one-hot letter vectors\n", 742 | "def lineToTensor(line):\n", 743 | " tensor = torch.zeros(len(line), 1, n_letters) #Daniele: len(max_line_size) was len(line)\n", 744 | " for li, letter in enumerate(line):\n", 745 | " tensor[li][0][letterToIndex(letter)] = 1\n", 746 | " #Daniele: add blank elements over here\n", 747 | " return tensor \n", 748 | " \n", 749 | " \n", 750 | " \n", 751 | "def list_strings_to_list_tensors(names_list):\n", 752 | " lines_tensors = []\n", 753 | " for index, line in enumerate(names_list):\n", 754 | " lineTensor = lineToTensor(line)\n", 755 | " 
lineNumpy = lineTensor.numpy()\n", 756 | " lines_tensors.append(lineNumpy)\n", 757 | " \n", 758 | " return(lines_tensors)\n", 759 | "\n", 760 | "lines_tensors = list_strings_to_list_tensors(names_list)\n", 761 | "\n", 762 | "print(names_list[40])\n", 763 | "print(lines_tensors[40])\n", 764 | "print(lines_tensors[40].shape)\n", 765 | "\n" 766 | ], 767 | "execution_count": 35, 768 | "outputs": [ 769 | { 770 | "output_type": "stream", 771 | "text": [ 772 | "Ololade\n", 773 | "[[[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n", 774 | " 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.\n", 775 | " 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]\n", 776 | "\n", 777 | " [[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n", 778 | " 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n", 779 | " 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]\n", 780 | "\n", 781 | " [[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.\n", 782 | " 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n", 783 | " 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]\n", 784 | "\n", 785 | " [[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n", 786 | " 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n", 787 | " 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]\n", 788 | "\n", 789 | " [[1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n", 790 | " 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n", 791 | " 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]\n", 792 | "\n", 793 | " [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n", 794 | " 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n", 795 | " 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]\n", 796 | "\n", 797 | " [[0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n", 798 | " 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n", 799 | " 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]]\n", 800 | "(7, 1, 57)\n" 801 | ], 802 | "name": "stdout" 803 | } 804 | ] 805 | }, 806 | { 807 | "cell_type": "code", 808 | "metadata": { 809 | "id": "LdCglO02aGA6", 810 | "colab_type": "code", 811 | "outputId": "0728efb4-50bb-4c08-cfbe-9c898f5aada1", 812 | "colab": { 813 | "base_uri": "https://localhost:8080/", 814 | "height": 1000 815 | } 816 | }, 817 | "source": [ 818 | "max_line_size = max(len(x) for x in lines_tensors)\n", 819 | "\n", 820 | "def lineToTensorFillEmpty(line, max_line_size):\n", 821 | " tensor = torch.zeros(max_line_size, 1, n_letters) #notice the difference between this method and the previous one\n", 822 | " for li, letter in enumerate(line):\n", 823 | " tensor[li][0][letterToIndex(letter)] = 1\n", 824 | " \n", 825 | " #Vectors with (0,0,.... 
,0) are placed where there are no characters\n", 826 | " return tensor\n", 827 | "\n", 828 | "def list_strings_to_list_tensors_fill_empty(names_list):\n", 829 | " lines_tensors = []\n", 830 | " for index, line in enumerate(names_list):\n", 831 | " lineTensor = lineToTensorFillEmpty(line, max_line_size)\n", 832 | " lines_tensors.append(lineTensor)\n", 833 | " return(lines_tensors)\n", 834 | "\n", 835 | "lines_tensors = list_strings_to_list_tensors_fill_empty(names_list)\n", 836 | "\n", 837 | "#Let's take a look at what a word now looks like\n", 838 | "print(names_list[40])\n", 839 | "print(lines_tensors[40])\n", 840 | "print(lines_tensors[40].shape)" 841 | ], 842 | "execution_count": 38, 843 | "outputs": [ 844 | { 845 | "output_type": "stream", 846 | "text": [ 847 | "Ololade\n", 848 | "tensor([[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 849 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 850 | " 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 851 | " 0., 0., 0., 0., 0., 0.]],\n", 852 | "\n", 853 | " [[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.,\n", 854 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 855 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 856 | " 0., 0., 0., 0., 0., 0.]],\n", 857 | "\n", 858 | " [[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,\n", 859 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 860 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 861 | " 0., 0., 0., 0., 0., 0.]],\n", 862 | "\n", 863 | " [[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.,\n", 864 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 865 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 866 | " 0., 0., 0., 0., 0., 0.]],\n", 867 | "\n", 868 | " [[1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 869 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 870 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 871 | " 0., 0., 0., 0., 0., 0.]],\n", 872 | "\n", 873 | " [[0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 874 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 875 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 876 | " 0., 0., 0., 0., 0., 0.]],\n", 877 | "\n", 878 | " [[0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 879 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 880 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 881 | " 0., 0., 0., 0., 0., 0.]],\n", 882 | "\n", 883 | " [[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 884 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 885 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 886 | " 0., 0., 0., 0., 0., 0.]],\n", 887 | "\n", 888 | " [[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 889 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 890 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 891 | " 0., 0., 0., 0., 0., 0.]],\n", 892 | "\n", 893 | " [[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 894 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 
0., 0., 0., 0., 0., 0., 0.,\n", 895 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 896 | " 0., 0., 0., 0., 0., 0.]],\n", 897 | "\n", 898 | " [[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 899 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 900 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 901 | " 0., 0., 0., 0., 0., 0.]],\n", 902 | "\n", 903 | " [[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 904 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 905 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 906 | " 0., 0., 0., 0., 0., 0.]],\n", 907 | "\n", 908 | " [[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 909 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 910 | " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", 911 | " 0., 0., 0., 0., 0., 0.]]])\n", 912 | "torch.Size([13, 1, 57])\n" 913 | ], 914 | "name": "stdout" 915 | } 916 | ] 917 | }, 918 | { 919 | "cell_type": "code", 920 | "metadata": { 921 | "id": "EOcm7kP0agMq", 922 | "colab_type": "code", 923 | "outputId": "e5dfbc40-f518-4ca4-86f5-8bd6cf0bbd0f", 924 | "colab": { 925 | "base_uri": "https://localhost:8080/", 926 | "height": 51 927 | } 928 | }, 929 | "source": [ 930 | "#And finally, from a list, we can create a numpy array with all our word embeddings having the same shape:\n", 931 | "array_lines_tensors = np.stack(lines_tensors)\n", 932 | "#However, such operation introduces one extra dimension (look at the dimension with index=2 having size '1')\n", 933 | "print(array_lines_tensors.shape)\n", 934 | "#Because that dimension just has size 1, we can get rid of it with the following function call\n", 935 | "array_lines_proper_dimension = np.squeeze(array_lines_tensors, axis=2)\n", 936 | "print(array_lines_proper_dimension.shape)\n", 937 | "\n" 938 | ], 939 | "execution_count": 39, 940 | "outputs": [ 941 | { 942 | "output_type": "stream", 943 | "text": [ 944 | "(55, 13, 1, 57)\n", 945 | "(55, 13, 57)\n" 946 | ], 947 | "name": "stdout" 948 | } 949 | ] 950 | }, 951 | { 952 | "cell_type": "code", 953 | "metadata": { 954 | "id": "0JVn6xHCaxCd", 955 | "colab_type": "code", 956 | "colab": { 957 | "base_uri": "https://localhost:8080/", 958 | "height": 34 959 | }, 960 | "outputId": "a80d8fd9-19f5-4d6b-f7f6-ea11e3453850" 961 | }, 962 | "source": [ 963 | "def find_start_index_per_category(category_list):\n", 964 | " categories_start_index = {}\n", 965 | " \n", 966 | " #Initialize every category with an empty list\n", 967 | " for category in all_categories:\n", 968 | " categories_start_index[category] = []\n", 969 | " \n", 970 | " #Insert the start index of each category into the dictionary categories_start_index\n", 971 | " #Example: \"Italian\" --> 203\n", 972 | " # \"Spanish\" --> 19776\n", 973 | " last_category = None\n", 974 | " i = 0\n", 975 | " for name in names_list:\n", 976 | " cur_category = category_list[i]\n", 977 | " if(cur_category != last_category):\n", 978 | " categories_start_index[cur_category] = i\n", 979 | " last_category = cur_category\n", 980 | " \n", 981 | " i = i + 1\n", 982 | " \n", 983 | " return(categories_start_index)\n", 984 | "\n", 985 | "categories_start_index = find_start_index_per_category(category_list)\n", 986 | "\n", 987 | "print(categories_start_index)\n" 988 | ], 989 | "execution_count": 40, 990 | "outputs": [ 991 | { 992 | "output_type": "stream", 
993 | "text": [ 994 | "{'Edo': 0, 'Urhobo': 4, 'Hausa': 23, 'Yoruba': 29, 'Igbo': 42}\n" 995 | ], 996 | "name": "stdout" 997 | } 998 | ] 999 | } 1000 | ] 1001 | } 1002 | -------------------------------------------------------------------------------- /images/federated-learning.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shashigharti/federated-learning-on-raspberry-pi/4e6b109b61f788ef8d21d84782dbc1920473caa0/images/federated-learning.png -------------------------------------------------------------------------------- /images/fork.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shashigharti/federated-learning-on-raspberry-pi/4e6b109b61f788ef8d21d84782dbc1920473caa0/images/fork.png -------------------------------------------------------------------------------- /images/notebook.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shashigharti/federated-learning-on-raspberry-pi/4e6b109b61f788ef8d21d84782dbc1920473caa0/images/notebook.png -------------------------------------------------------------------------------- /images/pullrequest.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shashigharti/federated-learning-on-raspberry-pi/4e6b109b61f788ef8d21d84782dbc1920473caa0/images/pullrequest.png -------------------------------------------------------------------------------- /images/pullrequest2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shashigharti/federated-learning-on-raspberry-pi/4e6b109b61f788ef8d21d84782dbc1920473caa0/images/pullrequest2.png -------------------------------------------------------------------------------- /images/savetogithub.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shashigharti/federated-learning-on-raspberry-pi/4e6b109b61f788ef8d21d84782dbc1920473caa0/images/savetogithub.png -------------------------------------------------------------------------------- /images/savetogithub2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shashigharti/federated-learning-on-raspberry-pi/4e6b109b61f788ef8d21d84782dbc1920473caa0/images/savetogithub2.png --------------------------------------------------------------------------------
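The notebooks collected above all reduce to the same PySyft loop: hook PyTorch, create workers, move tensors with .send()/.get(), and ship the model to wherever each batch of data lives. For reference, here is a minimal self-contained sketch of that loop. It assumes syft 0.1.x with PyTorch 1.1 (the versions pinned throughout this repo); the worker names alice/bob, the toy dataset, and the linear model are illustrative placeholders, not code from any contributor's notebook.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import syft as sy

hook = sy.TorchHook(torch)                  # patch torch so tensors gain .send()/.get()
alice = sy.VirtualWorker(hook, id="alice")  # stand-ins for the two Raspberry Pis
bob = sy.VirtualWorker(hook, id="bob")

# Toy data: 8 points, 2 features, binary labels; half sent to each worker.
data = torch.tensor([[1., 1.], [0., 1.], [1., 0.], [0., 0.]] * 2)
target = torch.tensor([1, 1, 0, 0] * 2)
remote_batches = [
    (data[:4].send(alice), target[:4].send(alice)),
    (data[4:].send(bob), target[4:].send(bob)),
]

model = nn.Linear(2, 2)
opt = optim.SGD(model.parameters(), lr=0.1)

for _ in range(10):                          # a few federated rounds
    for batch_data, batch_target in remote_batches:
        model.send(batch_data.location)      # ship the model to the worker holding this batch
        opt.zero_grad()
        loss = F.nll_loss(F.log_softmax(model(batch_data), dim=1), batch_target)
        loss.backward()
        opt.step()                           # the gradient step runs on the remote worker
        model.get()                          # pull the updated weights back

For real datasets, PySyft wraps this same pattern in sy.FederatedDataLoader (built from a dataset's .federate((alice, bob))), which is the route the FMNIST notebook's federated_train_loader takes.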