├── 1.png ├── Freebirds_Crew.png ├── LICENSE ├── README.md ├── mini_classes.txt └── Quick_Draw_(FreeBirdsCrew).ipynb /1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FreeBirdsCrew/Google_QuickDraw_Implementation/HEAD/1.png -------------------------------------------------------------------------------- /Freebirds_Crew.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FreeBirdsCrew/Google_QuickDraw_Implementation/HEAD/Freebirds_Crew.png -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2020 FreeBirds Crew 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # QUICK DRAW - GOOGLE DOODLE RECOGNITION CLONE (SKETCHER) 2 | 3 | An implementation of Google's Quick, Draw! doodle game using Python and Machine Learning, built around a Convolutional Neural Network. 4 | 5 | ![Screenshot](Freebirds_Crew.png) 6 | 7 | Link - https://www.youtube.com/watch?v=2H0tSHGK_CU 8 | 9 | The video explains all about Quick Draw - 10 | 11 | • Architecture 12 | 13 | • Data Pre-Processing and Data Manipulation 14 | 15 | • Activation Functions (ReLU and Sigmoid) 16 | 17 | • Convolutional Neural Network 18 | 19 | • Max Pooling 20 | 21 | • Fully Connected Layers 22 | 23 | • Flattening 24 | 25 | Note - If your computational power is not enough to train the model on the 10-12 doodle classes (.npy files), you can download the prepared files instead. 26 | Download .npy Files - https://bit.ly/2P7TDut 27 | 28 | The quickdraw API provides access to the Quick Draw data: it downloads the data files as and when needed, caches them locally, and interprets them so they can be used. 29 | API - https://bit.ly/2DkRCbz 30 | 31 | Watch the full video, and like, share, and subscribe to our YouTube channel. 
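A couple of the building blocks listed above are easy to demonstrate in isolation. The sketch below is a plain-NumPy illustration for intuition only (it is not the notebook's Keras code): the ReLU and Sigmoid activations, and one 2x2 max-pooling step.

```python
import numpy as np

def relu(x):
    # ReLU: zero out negatives, pass positives through unchanged
    return np.maximum(0, x)

def sigmoid(x):
    # Sigmoid: squash any real value into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def max_pool_2x2(img):
    # 2x2 max pooling with stride 2: keep only the largest value in each
    # non-overlapping 2x2 patch, halving the height and width
    h, w = img.shape
    img = img[:h - h % 2, :w - w % 2]  # trim odd edges so patches tile evenly
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[-1.0, 2.0],
              [3.0, -4.0]])
print(relu(x))                                      # [[0. 2.] [3. 0.]]
print(float(sigmoid(np.array(0.0))))                # 0.5
print(max_pool_2x2(np.arange(16.0).reshape(4, 4)))  # [[ 5.  7.] [13. 15.]]
```

Flattening then simply reshapes the pooled 2-D feature map into a 1-D vector (e.g. `arr.reshape(-1)`) before it is fed into the fully connected layers.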
32 | 33 | YouTube Channel - https://www.youtube.com/channel/UC4RZP6hNT5gMlWCm0NDzUWg?view_as=subscriber?sub_confirmation=1 34 | -------------------------------------------------------------------------------- /mini_classes.txt: -------------------------------------------------------------------------------- 1 | drums 2 | sun 3 | laptop 4 | anvil 5 | baseball_bat 6 | ladder 7 | eyeglasses 8 | grapes 9 | book 10 | dumbbell 11 | traffic_light 12 | wristwatch 13 | wheel 14 | shovel 15 | bread 16 | table 17 | tennis_racquet 18 | cloud 19 | chair 20 | headphones 21 | face 22 | eye 23 | airplane 24 | snake 25 | lollipop 26 | power_outlet 27 | pants 28 | mushroom 29 | star 30 | sword 31 | clock 32 | hot_dog 33 | syringe 34 | stop_sign 35 | mountain 36 | smiley_face 37 | apple 38 | bed 39 | shorts 40 | broom 41 | diving_board 42 | flower 43 | spider 44 | cell_phone 45 | car 46 | camera 47 | tree 48 | square 49 | moon 50 | radio 51 | hat 52 | pizza 53 | axe 54 | door 55 | tent 56 | umbrella 57 | line 58 | cup 59 | fan 60 | triangle 61 | basketball 62 | pillow 63 | scissors 64 | t-shirt 65 | tooth 66 | alarm_clock 67 | paper_clip 68 | spoon 69 | microphone 70 | candle 71 | pencil 72 | envelope 73 | saw 74 | frying_pan 75 | screwdriver 76 | helmet 77 | bridge 78 | light_bulb 79 | ceiling_fan 80 | key 81 | donut 82 | bird 83 | circle 84 | beard 85 | coffee_cup 86 | butterfly 87 | bench 88 | rifle 89 | cat 90 | sock 91 | ice_cream 92 | moustache 93 | suitcase 94 | hammer 95 | rainbow 96 | knife 97 | cookie 98 | baseball 99 | lightning 100 | bicycle -------------------------------------------------------------------------------- /Quick_Draw_(FreeBirdsCrew).ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "accelerator": "GPU", 6 | "colab": { 7 | "name": "Quick_Draw (FreeBirdsCrew).ipynb", 8 | "provenance": [], 9 | "collapsed_sections": [] 10 | }, 11 | "kernelspec": { 12 | 
"display_name": "Python 3", 13 | "language": "python", 14 | "name": "python3" 15 | }, 16 | "language_info": { 17 | "codemirror_mode": { 18 | "name": "ipython", 19 | "version": 3 20 | }, 21 | "file_extension": ".py", 22 | "mimetype": "text/x-python", 23 | "name": "python", 24 | "nbconvert_exporter": "python", 25 | "pygments_lexer": "ipython3", 26 | "version": "3.6.0" 27 | } 28 | }, 29 | "cells": [ 30 | { 31 | "cell_type": "markdown", 32 | "metadata": { 33 | "colab_type": "text", 34 | "id": "6H3ATAdp_URp" 35 | }, 36 | "source": [ 37 | "# FreeBirds Crew\n", 38 | "Quick Draw - A Google Doodle Recognition Clone" 39 | ] 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "metadata": { 44 | "colab_type": "text", 45 | "id": "zlx6-LFL_jbi" 46 | }, 47 | "source": [ 48 | "This file contains a subset of the Quick Draw classes. I chose around 100 classes from the dataset. " 49 | ] 50 | }, 51 | { 52 | "cell_type": "code", 53 | "metadata": { 54 | "colab_type": "code", 55 | "id": "XXv-xzU1sd88", 56 | "colab": {} 57 | }, 58 | "source": [ 59 | "!wget 'https://raw.githubusercontent.com/zaidalyafeai/zaidalyafeai.github.io/master/sketcher/mini_classes.txt'" 60 | ], 61 | "execution_count": null, 62 | "outputs": [] 63 | }, 64 | { 65 | "cell_type": "markdown", 66 | "metadata": { 67 | "colab_type": "text", 68 | "id": "4GL_TdMffD6-" 69 | }, 70 | "source": [ 71 | "Read the class names " 72 | ] 73 | }, 74 | { 75 | "cell_type": "code", 76 | "metadata": { 77 | "colab_type": "code", 78 | "id": "eP-OxOx5sy0b", 79 | "colab": {} 80 | }, 81 | "source": [ 82 | "f = open(\"mini_classes.txt\",\"r\")\n", 83 | "# Read one class name per line\n", 84 | "classes = f.readlines()\n", 85 | "f.close()" 86 | ], 87 | "execution_count": null, 88 | "outputs": [] 89 | }, 90 | { 91 | "cell_type": "code", 92 | "metadata": { 93 | "colab_type": "code", 94 | "id": "lTE6D3uxtMc5", 95 | "colab": {} 96 | }, 97 | "source": [ 98 | "classes = [c.replace('\\n','').replace(' ','_') for c in classes]" 99 | ], 100 | "execution_count": 
null, 101 | "outputs": [] 102 | }, 103 | { 104 | "cell_type": "markdown", 105 | "metadata": { 106 | "colab_type": "text", 107 | "id": "5NDfBHVjACAt" 108 | }, 109 | "source": [ 110 | "# Download the Dataset " 111 | ] 112 | }, 113 | { 114 | "cell_type": "markdown", 115 | "metadata": { 116 | "colab_type": "text", 117 | "id": "7MC_PUS-fKjH" 118 | }, 119 | "source": [ 120 | "Loop over the classes and download the corresponding data" 121 | ] 122 | }, 123 | { 124 | "cell_type": "code", 125 | "metadata": { 126 | "colab_type": "code", 127 | "id": "rdSUnpL0u22Q", 128 | "colab": {} 129 | }, 130 | "source": [ 131 | "!mkdir data" 132 | ], 133 | "execution_count": null, 134 | "outputs": [] 135 | }, 136 | { 137 | "cell_type": "code", 138 | "metadata": { 139 | "colab_type": "code", 140 | "id": "22DPhL5FtWcQ", 141 | "colab": {} 142 | }, 143 | "source": [ 144 | "import urllib.request\n", 145 | "def download():\n", 146 | " base = 'https://storage.googleapis.com/quickdraw_dataset/full/numpy_bitmap/'\n", 147 | " for c in classes: \n", 148 | " cls_url = c.replace('_', '%20')\n", 149 | " path = base+cls_url+'.npy'\n", 150 | " print(path)\n", 151 | " urllib.request.urlretrieve(path, 'data/'+c+'.npy')" 152 | ], 153 | "execution_count": null, 154 | "outputs": [] 155 | }, 156 | { 157 | "cell_type": "code", 158 | "metadata": { 159 | "colab_type": "code", 160 | "id": "O5jF6TXXu-Bu", 161 | "colab": {} 162 | }, 163 | "source": [ 164 | "download()" 165 | ], 166 | "execution_count": null, 167 | "outputs": [] 168 | }, 169 | { 170 | "cell_type": "markdown", 171 | "metadata": { 172 | "colab_type": "text", 173 | "id": "uEdnbBVXAI-X" 174 | }, 175 | "source": [ 176 | "# Imports " 177 | ] 178 | }, 179 | { 180 | "cell_type": "code", 181 | "metadata": { 182 | "colab_type": "code", 183 | "id": "J2FYrPgOKh6t", 184 | "colab": {} 185 | }, 186 | "source": [ 187 | "import os\n", 188 | "import glob\n", 189 | "import numpy as np\n", 190 | "from tensorflow.keras import layers\n", 191 | "from tensorflow 
import keras \n", 192 | "import tensorflow as tf\n", 193 | "\n", 194 | "print(len(os.listdir('data')))" 195 | ], 196 | "execution_count": null, 197 | "outputs": [] 198 | }, 199 | { 200 | "cell_type": "markdown", 201 | "metadata": { 202 | "colab_type": "text", 203 | "id": "6o30ipBPAQ5Y" 204 | }, 205 | "source": [ 206 | "# Load the Data " 207 | ] 208 | }, 209 | { 210 | "cell_type": "markdown", 211 | "metadata": { 212 | "colab_type": "text", 213 | "id": "UBq3GXEKAYuO" 214 | }, 215 | "source": [ 216 | "Each class contains a different number of samples, stored as arrays in .npy format. Since we have some memory limitations, we only load 4000 images per class. " 217 | ] 218 | }, 219 | { 220 | "cell_type": "code", 221 | "metadata": { 222 | "colab_type": "code", 223 | "id": "6HEIgQNHYQnl", 224 | "colab": {} 225 | }, 226 | "source": [ 227 | "def load_data(root, vfold_ratio=0.2, max_items_per_class=4000):\n", 228 | " all_files = glob.glob(os.path.join(root, '*.npy'))\n", 229 | "\n", 230 | " #initialize variables \n", 231 | " x = np.empty([0, 784])\n", 232 | " y = np.empty([0])\n", 233 | " class_names = []\n", 234 | "\n", 235 | " #load each data file \n", 236 | " for idx, file in enumerate(all_files):\n", 237 | " data = np.load(file)\n", 238 | " data = data[0: max_items_per_class, :]\n", 239 | " labels = np.full(data.shape[0], idx)\n", 240 | "\n", 241 | " x = np.concatenate((x, data), axis=0)\n", 242 | " y = np.append(y, labels)\n", 243 | "\n", 244 | " class_name, ext = os.path.splitext(os.path.basename(file))\n", 245 | " class_names.append(class_name)\n", 246 | "\n", 247 | " data = None\n", 248 | " labels = None\n", 249 | " \n", 250 | " #randomize the dataset \n", 251 | " permutation = np.random.permutation(y.shape[0])\n", 252 | " x = x[permutation, :]\n", 253 | " y = y[permutation]\n", 254 | "\n", 255 | " #separate into training and testing \n", 256 | " vfold_size = int(x.shape[0] * vfold_ratio)\n", 257 | "\n", 258 | " x_test = x[0:vfold_size, :]\n", 259 | " y_test = 
y[0:vfold_size]\n", 260 | "\n", 261 | " x_train = x[vfold_size:x.shape[0], :]\n", 262 | " y_train = y[vfold_size:y.shape[0]]\n", 263 | " return x_train, y_train, x_test, y_test, class_names" 264 | ], 265 | "execution_count": null, 266 | "outputs": [] 267 | }, 268 | { 269 | "cell_type": "code", 270 | "metadata": { 271 | "colab_type": "code", 272 | "id": "K6uUjN-WL2Y9", 273 | "colab": {} 274 | }, 275 | "source": [ 276 | "x_train, y_train, x_test, y_test, class_names = load_data('data')\n", 277 | "num_classes = len(class_names)\n", 278 | "image_size = 28" 279 | ], 280 | "execution_count": null, 281 | "outputs": [] 282 | }, 283 | { 284 | "cell_type": "code", 285 | "metadata": { 286 | "colab_type": "code", 287 | "id": "VhGEDS0SMgLK", 288 | "colab": {} 289 | }, 290 | "source": [ 291 | "print(len(x_train))" 292 | ], 293 | "execution_count": null, 294 | "outputs": [] 295 | }, 296 | { 297 | "cell_type": "markdown", 298 | "metadata": { 299 | "colab_type": "text", 300 | "id": "rNZmQvBWBBHE" 301 | }, 302 | "source": [ 303 | "Show some random data " 304 | ] 305 | }, 306 | { 307 | "cell_type": "code", 308 | "metadata": { 309 | "colab_type": "code", 310 | "id": "KfpDaHRkyMQC", 311 | "colab": {} 312 | }, 313 | "source": [ 314 | "import matplotlib.pyplot as plt\n", 315 | "from random import randint\n", 316 | "%matplotlib inline \n", 317 | "idx = randint(0, len(x_train) - 1)\n", 318 | "plt.imshow(x_train[idx].reshape(28,28)) \n", 319 | "print(class_names[int(y_train[idx].item())])" 320 | ], 321 | "execution_count": null, 322 | "outputs": [] 323 | }, 324 | { 325 | "cell_type": "markdown", 326 | "metadata": { 327 | "colab_type": "text", 328 | "id": "n8InHz5NBFrV" 329 | }, 330 | "source": [ 331 | "# Preprocess the Data " 332 | ] 333 | }, 334 | { 335 | "cell_type": "code", 336 | "metadata": { 337 | "colab_type": "code", 338 | "id": "p2GHUq7D2r9e", 339 | "colab": {} 340 | }, 341 | "source": [ 342 | "# Reshape and normalize\n", 343 | "x_train = x_train.reshape(x_train.shape[0], image_size, 
image_size, 1).astype('float32')\n", 344 | "x_test = x_test.reshape(x_test.shape[0], image_size, image_size, 1).astype('float32')\n", 345 | "\n", 346 | "x_train /= 255.0\n", 347 | "x_test /= 255.0\n", 348 | "\n", 349 | "# Convert class vectors to class matrices\n", 350 | "y_train = keras.utils.to_categorical(y_train, num_classes)\n", 351 | "y_test = keras.utils.to_categorical(y_test, num_classes)" 352 | ], 353 | "execution_count": null, 354 | "outputs": [] 355 | }, 356 | { 357 | "cell_type": "markdown", 358 | "metadata": { 359 | "colab_type": "text", 360 | "id": "rL6XAb4hBMSc" 361 | }, 362 | "source": [ 363 | "# The Model " 364 | ] 365 | }, 366 | { 367 | "cell_type": "code", 368 | "metadata": { 369 | "colab_type": "code", 370 | "id": "uYUVV2wf2z8H", 371 | "colab": {} 372 | }, 373 | "source": [ 374 | "# Define model\n", 375 | "model = keras.Sequential()\n", 376 | "model.add(layers.Convolution2D(16, (3, 3),\n", 377 | " padding='same',\n", 378 | " input_shape=x_train.shape[1:], activation='relu'))\n", 379 | "model.add(layers.MaxPooling2D(pool_size=(2, 2)))\n", 380 | "model.add(layers.Convolution2D(32, (3, 3), padding='same', activation='relu'))\n", 381 | "model.add(layers.MaxPooling2D(pool_size=(2, 2)))\n", 382 | "model.add(layers.Convolution2D(64, (3, 3), padding='same', activation='relu'))\n", 383 | "model.add(layers.MaxPooling2D(pool_size=(2, 2)))\n", 384 | "model.add(layers.Flatten())\n", 385 | "model.add(layers.Dense(128, activation='relu'))\n", 386 | "model.add(layers.Dense(num_classes, activation='softmax')) \n", 387 | "\n", 388 | "# Train model\n", 389 | "adam = tf.optimizers.Adam()\n", 390 | "model.compile(loss='categorical_crossentropy',\n", 391 | " optimizer=adam,\n", 392 | " metrics=['top_k_categorical_accuracy'])\n", 393 | "print(model.summary())" 394 | ], 395 | "execution_count": null, 396 | "outputs": [] 397 | }, 398 | { 399 | "cell_type": "markdown", 400 | "metadata": { 401 | "colab_type": "text", 402 | "id": "_YRSRkOyBP1P" 403 | }, 404 | "source": [ 405 | 
"# Training " 406 | ] 407 | }, 408 | { 409 | "cell_type": "code", 410 | "metadata": { 411 | "colab_type": "code", 412 | "id": "7OMEJ7kF3lsP", 413 | "colab": {} 414 | }, 415 | "source": [ 416 | "model.fit(x=x_train, y=y_train, validation_split=0.1, batch_size=256, verbose=2, epochs=5)" 417 | ], 418 | "execution_count": null, 419 | "outputs": [] 420 | }, 421 | { 422 | "cell_type": "markdown", 423 | "metadata": { 424 | "colab_type": "text", 425 | "id": "d2KztY7qEn9_" 426 | }, 427 | "source": [ 428 | "# Testing " 429 | ] 430 | }, 431 | { 432 | "cell_type": "code", 433 | "metadata": { 434 | "colab_type": "code", 435 | "id": "ssaZczS7DxeA", 436 | "colab": {} 437 | }, 438 | "source": [ 439 | "score = model.evaluate(x_test, y_test, verbose=0)\n", 440 | "print('Test top-k accuracy: {:0.2f}%'.format(score[1] * 100))" 441 | ], 442 | "execution_count": null, 443 | "outputs": [] 444 | }, 445 | { 446 | "cell_type": "markdown", 447 | "metadata": { 448 | "colab_type": "text", 449 | "id": "9xBM_w0VBbNr" 450 | }, 451 | "source": [ 452 | "# Inference " 453 | ] 454 | }, 455 | { 456 | "cell_type": "code", 457 | "metadata": { 458 | "colab_type": "code", 459 | "id": "nH3JfoiYHdpk", 460 | "colab": {} 461 | }, 462 | "source": [ 463 | "import matplotlib.pyplot as plt\n", 464 | "from random import randint\n", 465 | "%matplotlib inline \n", 466 | "idx = randint(0, len(x_test) - 1)\n", 467 | "img = x_test[idx]\n", 468 | "plt.imshow(img.squeeze()) " 469 | ], 470 | "execution_count": null, 471 | "outputs": [] 472 | }, 473 | { 474 | "cell_type": "code", 475 | "metadata": { 476 | "id": "OrYTmFByqcN_", 477 | "colab_type": "code", 478 | "colab": { 479 | "base_uri": "https://localhost:8080/", 480 | "height": 35 481 | }, 482 | "outputId": "f38b3e71-5891-493d-d76f-867a7c77afa7" 483 | }, 484 | "source": [ 485 | "pred = model.predict(np.expand_dims(img, axis=0))[0]\n", 486 | "ind = (-pred).argsort()[:5]\n", 487 | "top_5 = [class_names[x] for x in ind]\n", 488 | "print(top_5)" 489 | ], 490 | 
"execution_count": 29, 491 | "outputs": [ 492 | { 493 | "output_type": "stream", 494 | "text": [ 495 | "['tennis_racquet', 'fan', 'stop_sign', 'lollipop', 'tree']\n" 496 | ], 497 | "name": "stdout" 498 | } 499 | ] 500 | }, 501 | { 502 | "cell_type": "markdown", 503 | "metadata": { 504 | "colab_type": "text", 505 | "id": "YPp5D82YBhM-" 506 | }, 507 | "source": [ 508 | "# Store the classes " 509 | ] 510 | }, 511 | { 512 | "cell_type": "code", 513 | "metadata": { 514 | "colab_type": "code", 515 | "id": "NoFI1msFYpCN", 516 | "colab": {} 517 | }, 518 | "source": [ 519 | "with open('class_names.txt', 'w') as file_handler:\n", 520 | " for item in class_names:\n", 521 | " file_handler.write(\"{}\\n\".format(item))" 522 | ], 523 | "execution_count": null, 524 | "outputs": [] 525 | }, 526 | { 527 | "cell_type": "markdown", 528 | "metadata": { 529 | "colab_type": "text", 530 | "id": "mfJ6dpaDBpRx" 531 | }, 532 | "source": [ 533 | "# Install TensorFlowJS" 534 | ] 535 | }, 536 | { 537 | "cell_type": "code", 538 | "metadata": { 539 | "colab_type": "code", 540 | "id": "hJJDfp9mY9Xh", 541 | "colab": {} 542 | }, 543 | "source": [ 544 | "!pip install tensorflowjs " 545 | ], 546 | "execution_count": null, 547 | "outputs": [] 548 | }, 549 | { 550 | "cell_type": "markdown", 551 | "metadata": { 552 | "colab_type": "text", 553 | "id": "-oBl0ZKVB00d" 554 | }, 555 | "source": [ 556 | "# Save and Convert " 557 | ] 558 | }, 559 | { 560 | "cell_type": "code", 561 | "metadata": { 562 | "colab_type": "code", 563 | "id": "XVICB3TbZGb2", 564 | "colab": {} 565 | }, 566 | "source": [ 567 | "model.save('keras.h5')" 568 | ], 569 | "execution_count": null, 570 | "outputs": [] 571 | }, 572 | { 573 | "cell_type": "code", 574 | "metadata": { 575 | "colab_type": "code", 576 | "id": "bTWWlGdWZOvs", 577 | "colab": {} 578 | }, 579 | "source": [ 580 | "!mkdir model\n", 581 | "!tensorflowjs_converter --input_format keras keras.h5 model/" 582 | ], 583 | "execution_count": null, 584 | "outputs": [] 585 | }, 586 
| { 587 | "cell_type": "markdown", 588 | "metadata": { 589 | "colab_type": "text", 590 | "id": "JKYxE2MEB6LV" 591 | }, 592 | "source": [ 593 | "# Zip and Download " 594 | ] 595 | }, 596 | { 597 | "cell_type": "code", 598 | "metadata": { 599 | "colab_type": "code", 600 | "id": "865-t79uaB63", 601 | "colab": {} 602 | }, 603 | "source": [ 604 | "!cp class_names.txt model/class_names.txt" 605 | ], 606 | "execution_count": null, 607 | "outputs": [] 608 | }, 609 | { 610 | "cell_type": "code", 611 | "metadata": { 612 | "colab_type": "code", 613 | "id": "GLC-MzW8ZXTa", 614 | "colab": {} 615 | }, 616 | "source": [ 617 | "!zip -r model.zip model " 618 | ], 619 | "execution_count": null, 620 | "outputs": [] 621 | }, 622 | { 623 | "cell_type": "code", 624 | "metadata": { 625 | "colab_type": "code", 626 | "id": "4vfPR03xZZeD", 627 | "colab": {} 628 | }, 629 | "source": [ 630 | "from google.colab import files\n", 631 | "files.download('model.zip')" 632 | ], 633 | "execution_count": null, 634 | "outputs": [] 635 | } 636 | ] 637 | } --------------------------------------------------------------------------------