├── .gitignore ├── 01_Training ├── 01 MNIST LR.ipynb └── 02 CIFAR 10 CNN.ipynb ├── 03_Deployment ├── 00 Deploy Numpy.ipynb ├── 01 Deploy Keras.ipynb ├── 02 Deploy Tensorflow.ipynb ├── 03 Deploy Pytorch.ipynb ├── artifacts.png ├── experiments_list.png ├── model_deployment.png ├── model_deployment_keras_4GB.png ├── model_list.png ├── run_details.png └── runs_list.png ├── 04_PINNs ├── 01 1D Heat Equation.ipynb ├── 02 2D Heat Equation.ipynb └── 03 1D Heat Equation with MLFlow.ipynb ├── 05_DiffNets ├── 01 poisson-manufactured-fem.ipynb ├── 02 poisson-manufactured-fdm.ipynb ├── 03 poisson-manufactured-fem-network.ipynb ├── 04 DiffNet-training.ipynb ├── 05 DiffNet-training-with-deployment.ipynb ├── UNetArch.png ├── sobol_6d.npy ├── xdiffnet-scheme.png └── xfdm-grid.png ├── 06_CNN_based_forward_PDE_solvers_for_3D_Poisson ├── poisson3d_rmlserver.html └── poisson3d_rmlserver.ipynb └── README.md /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | pip-wheel-metadata/ 24 | share/python-wheels/ 25 | *.egg-info/ 26 | .installed.cfg 27 | *.egg 28 | MANIFEST 29 | 30 | # PyInstaller 31 | # Usually these files are written by a python script from a template 32 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 33 | *.manifest 34 | *.spec 35 | 36 | # Installer logs 37 | pip-log.txt 38 | pip-delete-this-directory.txt 39 | 40 | # Unit test / coverage reports 41 | htmlcov/ 42 | .tox/ 43 | .nox/ 44 | .coverage 45 | .coverage.* 46 | .cache 47 | nosetests.xml 48 | coverage.xml 49 | *.cover 50 | *.py,cover 51 | .hypothesis/ 52 | .pytest_cache/ 53 | 54 | # Translations 55 | *.mo 56 | *.pot 57 | 58 | # Django stuff: 59 | *.log 60 | local_settings.py 61 | db.sqlite3 62 | db.sqlite3-journal 63 | 64 | # Flask stuff: 65 | instance/ 66 | .webassets-cache 67 | 68 | # Scrapy stuff: 69 | .scrapy 70 | 71 | # Sphinx documentation 72 | docs/_build/ 73 | 74 | # PyBuilder 75 | target/ 76 | 77 | # Jupyter Notebook 78 | .ipynb_checkpoints 79 | 80 | # IPython 81 | profile_default/ 82 | ipython_config.py 83 | 84 | # pyenv 85 | .python-version 86 | 87 | # pipenv 88 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 89 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 90 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 91 | # install all needed dependencies. 92 | #Pipfile.lock 93 | 94 | # PEP 582; used by e.g. 
github.com/David-OConnor/pyflow 95 | __pypackages__/ 96 | 97 | # Celery stuff 98 | celerybeat-schedule 99 | celerybeat.pid 100 | 101 | # SageMath parsed files 102 | *.sage.py 103 | 104 | # Environments 105 | .env 106 | .venv 107 | env/ 108 | venv/ 109 | ENV/ 110 | env.bak/ 111 | venv.bak/ 112 | 113 | # Spyder project settings 114 | .spyderproject 115 | .spyproject 116 | 117 | # Rope project settings 118 | .ropeproject 119 | 120 | # mkdocs documentation 121 | /site 122 | 123 | # mypy 124 | .mypy_cache/ 125 | .dmypy.json 126 | dmypy.json 127 | 128 | # Pyre type checker 129 | .pyre/ 130 | -------------------------------------------------------------------------------- /01_Training/01 MNIST LR.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "colab": { 7 | "base_uri": "https://localhost:8080/" 8 | }, 9 | "id": "3JjjLNk32wUZ", 10 | "outputId": "066cf2a8-2e2a-4081-e99a-21893864dabe" 11 | }, 12 | "source": [ 13 | "# Logistic Regression for MNIST Hand Written Digits" 14 | ] 15 | }, 16 | { 17 | "cell_type": "code", 18 | "execution_count": 1, 19 | "metadata": { 20 | "id": "EgnryA2z3H9O" 21 | }, 22 | "outputs": [], 23 | "source": [ 24 | "import numpy as np\n", 25 | "import matplotlib.pyplot as plt" 26 | ] 27 | }, 28 | { 29 | "cell_type": "code", 30 | "execution_count": 2, 31 | "metadata": { 32 | "id": "OO6AOjVd3_Fj" 33 | }, 34 | "outputs": [], 35 | "source": [ 36 | "import torch\n", 37 | "import torchvision" 38 | ] 39 | }, 40 | { 41 | "cell_type": "code", 42 | "execution_count": 3, 43 | "metadata": { 44 | "colab": { 45 | "base_uri": "https://localhost:8080/", 46 | "height": 369, 47 | "referenced_widgets": [ 48 | "a64f27dc03d7409e8d18712ce7dac77a", 49 | "19b8d17e00d245d185781a9d103927b1", 50 | "e3cac4d67a5f4399b61a23df9af0a733", 51 | "f36a53629e784e0784e33fd7f2f59cdf", 52 | "805faab1a1004bef864059238b42b1c0", 53 | "b3e9f195e72b4e36a49d2a0849ad53a9", 54 | "60ccf59c740b4d50aafae10b48168245", 55 | "e47b12a588df4610b7700a8e04907233", 56 | "83287b021d1c4ea5b98d7fca180d184f", 57 | "388850f6e78d4e88812f80c689a98170", 58 | "d09b58b17dca42c2bcf3ac2a397505f8", 59 | "0c09a861959f4cd8b86db1e6738c9a4f", 60 | "a583bb9720b34d7790320eddf59f634f", 61 | "4be7ced1fbfa4883a0ccf832342832bf", 62 | "383cadb12dc244c398158607387a50f2", 63 | "8b4e5a3bfbfa4ca6aae44cc663636203", 64 | "39cbc20a87ee4affa6255ae8543d4077", 65 | "508741f9c5584948a4038de3a54d50a7", 66 | "be3bc7c66af74fa68f1957fd2b35ebbb", 67 | "f242ac743e6d424f927583404494dbbf", 68 | "a724d954d23f49fb8727e0bfce85bfb9", 69 | "58e8e64e8f364e768fbb48cd24b380da", 70 | "d716982ea62f4a86af003f6a463693b4", 71 | "e1718bb29f9442c6badb17bd68991ebb", 72 | "7280fde643ac48848aa736ec41a98bcb", 73 | "b12014e93d1343f096e55d764c0c3a58", 74 | "68bf0dd468d847808bbc2a6eaf605472", 75 | "bf4052f5f12f46258d59c52a6445a7f5", 76 | "ceb460b0aaca4862b0a4f9e67cfed125", 77 | "e9306e79837f40fb9ccf002a926c34d0", 78 | "f955c472f82845dc930e5da6a2a5be0f", 79 | "58b6dd2a21df4675b4b71c7a793d9619" 80 | ] 81 | }, 82 | "id": "v9rQKeET5AwA", 83 | "outputId": "d57a7815-23c6-4e59-893f-c4cf9bd5ff87" 84 | }, 85 | "outputs": [ 86 | { 87 | "name": "stdout", 88 | "output_type": "stream", 89 | "text": [ 90 | "Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz\n", 91 | "Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ./MNIST/MNIST/raw/train-images-idx3-ubyte.gz\n" 92 | ] 93 | }, 94 | { 95 | "data": { 96 | "application/vnd.jupyter.widget-view+json": { 97 | 
"model_id": "43aa3cf9da044d9dac4f17a1ae4769ee", 98 | "version_major": 2, 99 | "version_minor": 0 100 | }, 101 | "text/plain": [ 102 | " 0%| | 0/9912422 [00:00" 241 | ] 242 | }, 243 | "metadata": { 244 | "needs_background": "light" 245 | }, 246 | "output_type": "display_data" 247 | } 248 | ], 249 | "source": [ 250 | "image, label = trainingdata[59999]\n", 251 | "print(image.shape, label)\n", 252 | "\n", 253 | "plt.imshow(image.squeeze().numpy())\n", 254 | "plt.show()" 255 | ] 256 | }, 257 | { 258 | "cell_type": "code", 259 | "execution_count": 6, 260 | "metadata": { 261 | "id": "uq4GtUW-7UO7" 262 | }, 263 | "outputs": [], 264 | "source": [ 265 | "traindataloader = torch.utils.data.DataLoader(trainingdata, batch_size=64, shuffle=True)\n", 266 | "testdataloader = torch.utils.data.DataLoader(testdata, batch_size=64, shuffle=False)" 267 | ] 268 | }, 269 | { 270 | "cell_type": "code", 271 | "execution_count": 7, 272 | "metadata": { 273 | "colab": { 274 | "base_uri": "https://localhost:8080/" 275 | }, 276 | "id": "295zIfTr80pP", 277 | "outputId": "f980bc32-7a32-4beb-ff50-f92c370f01e8" 278 | }, 279 | "outputs": [ 280 | { 281 | "name": "stdout", 282 | "output_type": "stream", 283 | "text": [ 284 | "torch.Size([64, 1, 28, 28]) tensor([4, 2, 1, 5, 2, 8, 5, 7, 8, 2, 0, 9, 7, 2, 6, 3, 2, 2, 6, 7, 3, 9, 6, 0,\n", 285 | " 5, 2, 7, 5, 0, 6, 5, 6, 4, 8, 3, 1, 2, 8, 3, 2, 0, 2, 2, 1, 1, 3, 8, 1,\n", 286 | " 6, 9, 4, 1, 6, 3, 4, 3, 3, 7, 7, 4, 2, 3, 7, 3])\n" 287 | ] 288 | } 289 | ], 290 | "source": [ 291 | "images, labels = iter(traindataloader).next()\n", 292 | "print(images.size(), labels)" 293 | ] 294 | }, 295 | { 296 | "cell_type": "code", 297 | "execution_count": 10, 298 | "metadata": { 299 | "id": "U4uiqeXs875-" 300 | }, 301 | "outputs": [], 302 | "source": [ 303 | "class LR(torch.nn.Module):\n", 304 | " def __init__(self):\n", 305 | " super(LR, self).__init__()\n", 306 | " self.linear1 = torch.nn.Linear(28*28, 128) # W:784x128 , b:128x1 , parameters = [W,b]\n", 307 | " self.linear2 = torch.nn.Linear(128, 10)\n", 308 | " \n", 309 | " def forward(self, x):\n", 310 | " x = x.view(-1,28*28)\n", 311 | " transformed_x1 = self.linear1(x)\n", 312 | " transformed_x2 = self.linear2(transformed_x1)\n", 313 | " return transformed_x2\n", 314 | "net = LR()\n", 315 | "optimizer = torch.optim.SGD(net.parameters(), lr=0.01)\n", 316 | "criterion = torch.nn.CrossEntropyLoss()" 317 | ] 318 | }, 319 | { 320 | "cell_type": "code", 321 | "execution_count": null, 322 | "metadata": { 323 | "colab": { 324 | "base_uri": "https://localhost:8080/" 325 | }, 326 | "id": "JYFVqnLt_PMt", 327 | "outputId": "0b676a12-4332-43ba-887b-bcbf878addbb" 328 | }, 329 | "outputs": [], 330 | "source": [ 331 | "train_loss_history = []\n", 332 | "test_loss_history = []\n", 333 | "\n", 334 | "for epoch in range(20):\n", 335 | " train_loss = 0.0\n", 336 | " test_loss = 0.0\n", 337 | " for i, data in enumerate(traindataloader):\n", 338 | " images, labels = data\n", 339 | " images = images.cuda()\n", 340 | " labels = labels.cuda()\n", 341 | " optimizer.zero_grad()\n", 342 | " predicted_output = net(images)\n", 343 | " loss = criterion(predicted_output, labels)\n", 344 | " loss.backward()\n", 345 | " optimizer.step()\n", 346 | " train_loss += loss.item()\n", 347 | " for i, data in enumerate(testdataloader):\n", 348 | " with torch.no_grad():\n", 349 | " images, labels = data\n", 350 | " images = images.cuda()\n", 351 | " labels = labels.cuda()\n", 352 | " predicted_output = net(images)\n", 353 | " loss = criterion(predicted_output, labels)\n", 354 | " 
test_loss += loss.item()\n", 355 | " train_loss = train_loss/len(traindataloader)\n", 356 | " test_loss = test_loss/len(testdataloader)\n", 357 | " train_loss_history.append(train_loss)\n", 358 | " test_loss_history.append(test_loss)\n", 359 | " print('Epoch %s finished with train loss %s and test loss %s'%(epoch, train_loss, test_loss))" 360 | ] 361 | }, 362 | { 363 | "cell_type": "code", 364 | "execution_count": null, 365 | "metadata": { 366 | "colab": { 367 | "base_uri": "https://localhost:8080/" 368 | }, 369 | "id": "q9gNUyFKD2b7", 370 | "outputId": "31612e75-2d9a-46a8-99d0-ee1fcf66021f" 371 | }, 372 | "outputs": [], 373 | "source": [ 374 | "predicted_output = net(images)\n", 375 | "print(predicted_output[1])\n", 376 | "print(torch.max(predicted_output, 1)[1])\n", 377 | "loss = criterion(predicted_output, labels)\n", 378 | "print(labels)\n", 379 | "print(loss)" 380 | ] 381 | }, 382 | { 383 | "cell_type": "code", 384 | "execution_count": null, 385 | "metadata": { 386 | "id": "nlaTsjBISoPn" 387 | }, 388 | "outputs": [], 389 | "source": [] 390 | }, 391 | { 392 | "cell_type": "code", 393 | "execution_count": null, 394 | "metadata": {}, 395 | "outputs": [], 396 | "source": [] 397 | } 398 | ], 399 | "metadata": { 400 | "accelerator": "GPU", 401 | "colab": { 402 | "collapsed_sections": [], 403 | "name": "mnist_lr.ipynb", 404 | "provenance": [] 405 | }, 406 | "kernelspec": { 407 | "display_name": "Python 3 (ipykernel)", 408 | "language": "python", 409 | "name": "python3" 410 | }, 411 | "language_info": { 412 | "codemirror_mode": { 413 | "name": "ipython", 414 | "version": 3 415 | }, 416 | "file_extension": ".py", 417 | "mimetype": "text/x-python", 418 | "name": "python", 419 | "nbconvert_exporter": "python", 420 | "pygments_lexer": "ipython3", 421 | "version": "3.7.11" 422 | }, 423 | "widgets": { 424 | "application/vnd.jupyter.widget-state+json": { 425 | "0c09a861959f4cd8b86db1e6738c9a4f": { 426 | "model_module": "@jupyter-widgets/controls", 427 | "model_module_version": "1.5.0", 428 | "model_name": "HTMLModel", 429 | "state": { 430 | "_dom_classes": [], 431 | "_model_module": "@jupyter-widgets/controls", 432 | "_model_module_version": "1.5.0", 433 | "_model_name": "HTMLModel", 434 | "_view_count": null, 435 | "_view_module": "@jupyter-widgets/controls", 436 | "_view_module_version": "1.5.0", 437 | "_view_name": "HTMLView", 438 | "description": "", 439 | "description_tooltip": null, 440 | "layout": "IPY_MODEL_8b4e5a3bfbfa4ca6aae44cc663636203", 441 | "placeholder": "​", 442 | "style": "IPY_MODEL_383cadb12dc244c398158607387a50f2", 443 | "value": " 0/28881 [00:00<?, ?it/s]" 444 | } 445 | }, 446 | "19b8d17e00d245d185781a9d103927b1": { 447 | "model_module": "@jupyter-widgets/base", 448 | "model_module_version": "1.2.0", 449 | "model_name": "LayoutModel", 450 | "state": { 451 | "_model_module": "@jupyter-widgets/base", 452 | "_model_module_version": "1.2.0", 453 | "_model_name": "LayoutModel", 454 | "_view_count": null, 455 | "_view_module": "@jupyter-widgets/base", 456 | "_view_module_version": "1.2.0", 457 | "_view_name": "LayoutView", 458 | "align_content": null, 459 | "align_items": null, 460 | "align_self": null, 461 | "border": null, 462 | "bottom": null, 463 | "display": null, 464 | "flex": null, 465 | "flex_flow": null, 466 | "grid_area": null, 467 | "grid_auto_columns": null, 468 | "grid_auto_flow": null, 469 | "grid_auto_rows": null, 470 | "grid_column": null, 471 | "grid_gap": null, 472 | "grid_row": null, 473 | "grid_template_areas": null, 474 | "grid_template_columns": null, 475 | 
"grid_template_rows": null, 476 | "height": null, 477 | "justify_content": null, 478 | "justify_items": null, 479 | "left": null, 480 | "margin": null, 481 | "max_height": null, 482 | "max_width": null, 483 | "min_height": null, 484 | "min_width": null, 485 | "object_fit": null, 486 | "object_position": null, 487 | "order": null, 488 | "overflow": null, 489 | "overflow_x": null, 490 | "overflow_y": null, 491 | "padding": null, 492 | "right": null, 493 | "top": null, 494 | "visibility": null, 495 | "width": null 496 | } 497 | }, 498 | "383cadb12dc244c398158607387a50f2": { 499 | "model_module": "@jupyter-widgets/controls", 500 | "model_module_version": "1.5.0", 501 | "model_name": "DescriptionStyleModel", 502 | "state": { 503 | "_model_module": "@jupyter-widgets/controls", 504 | "_model_module_version": "1.5.0", 505 | "_model_name": "DescriptionStyleModel", 506 | "_view_count": null, 507 | "_view_module": "@jupyter-widgets/base", 508 | "_view_module_version": "1.2.0", 509 | "_view_name": "StyleView", 510 | "description_width": "" 511 | } 512 | }, 513 | "388850f6e78d4e88812f80c689a98170": { 514 | "model_module": "@jupyter-widgets/base", 515 | "model_module_version": "1.2.0", 516 | "model_name": "LayoutModel", 517 | "state": { 518 | "_model_module": "@jupyter-widgets/base", 519 | "_model_module_version": "1.2.0", 520 | "_model_name": "LayoutModel", 521 | "_view_count": null, 522 | "_view_module": "@jupyter-widgets/base", 523 | "_view_module_version": "1.2.0", 524 | "_view_name": "LayoutView", 525 | "align_content": null, 526 | "align_items": null, 527 | "align_self": null, 528 | "border": null, 529 | "bottom": null, 530 | "display": null, 531 | "flex": null, 532 | "flex_flow": null, 533 | "grid_area": null, 534 | "grid_auto_columns": null, 535 | "grid_auto_flow": null, 536 | "grid_auto_rows": null, 537 | "grid_column": null, 538 | "grid_gap": null, 539 | "grid_row": null, 540 | "grid_template_areas": null, 541 | "grid_template_columns": null, 542 | "grid_template_rows": null, 543 | "height": null, 544 | "justify_content": null, 545 | "justify_items": null, 546 | "left": null, 547 | "margin": null, 548 | "max_height": null, 549 | "max_width": null, 550 | "min_height": null, 551 | "min_width": null, 552 | "object_fit": null, 553 | "object_position": null, 554 | "order": null, 555 | "overflow": null, 556 | "overflow_x": null, 557 | "overflow_y": null, 558 | "padding": null, 559 | "right": null, 560 | "top": null, 561 | "visibility": null, 562 | "width": null 563 | } 564 | }, 565 | "39cbc20a87ee4affa6255ae8543d4077": { 566 | "model_module": "@jupyter-widgets/controls", 567 | "model_module_version": "1.5.0", 568 | "model_name": "HBoxModel", 569 | "state": { 570 | "_dom_classes": [], 571 | "_model_module": "@jupyter-widgets/controls", 572 | "_model_module_version": "1.5.0", 573 | "_model_name": "HBoxModel", 574 | "_view_count": null, 575 | "_view_module": "@jupyter-widgets/controls", 576 | "_view_module_version": "1.5.0", 577 | "_view_name": "HBoxView", 578 | "box_style": "", 579 | "children": [ 580 | "IPY_MODEL_be3bc7c66af74fa68f1957fd2b35ebbb", 581 | "IPY_MODEL_f242ac743e6d424f927583404494dbbf" 582 | ], 583 | "layout": "IPY_MODEL_508741f9c5584948a4038de3a54d50a7" 584 | } 585 | }, 586 | "4be7ced1fbfa4883a0ccf832342832bf": { 587 | "model_module": "@jupyter-widgets/base", 588 | "model_module_version": "1.2.0", 589 | "model_name": "LayoutModel", 590 | "state": { 591 | "_model_module": "@jupyter-widgets/base", 592 | "_model_module_version": "1.2.0", 593 | "_model_name": "LayoutModel", 594 | 
"_view_count": null, 595 | "_view_module": "@jupyter-widgets/base", 596 | "_view_module_version": "1.2.0", 597 | "_view_name": "LayoutView", 598 | "align_content": null, 599 | "align_items": null, 600 | "align_self": null, 601 | "border": null, 602 | "bottom": null, 603 | "display": null, 604 | "flex": null, 605 | "flex_flow": null, 606 | "grid_area": null, 607 | "grid_auto_columns": null, 608 | "grid_auto_flow": null, 609 | "grid_auto_rows": null, 610 | "grid_column": null, 611 | "grid_gap": null, 612 | "grid_row": null, 613 | "grid_template_areas": null, 614 | "grid_template_columns": null, 615 | "grid_template_rows": null, 616 | "height": null, 617 | "justify_content": null, 618 | "justify_items": null, 619 | "left": null, 620 | "margin": null, 621 | "max_height": null, 622 | "max_width": null, 623 | "min_height": null, 624 | "min_width": null, 625 | "object_fit": null, 626 | "object_position": null, 627 | "order": null, 628 | "overflow": null, 629 | "overflow_x": null, 630 | "overflow_y": null, 631 | "padding": null, 632 | "right": null, 633 | "top": null, 634 | "visibility": null, 635 | "width": null 636 | } 637 | }, 638 | "508741f9c5584948a4038de3a54d50a7": { 639 | "model_module": "@jupyter-widgets/base", 640 | "model_module_version": "1.2.0", 641 | "model_name": "LayoutModel", 642 | "state": { 643 | "_model_module": "@jupyter-widgets/base", 644 | "_model_module_version": "1.2.0", 645 | "_model_name": "LayoutModel", 646 | "_view_count": null, 647 | "_view_module": "@jupyter-widgets/base", 648 | "_view_module_version": "1.2.0", 649 | "_view_name": "LayoutView", 650 | "align_content": null, 651 | "align_items": null, 652 | "align_self": null, 653 | "border": null, 654 | "bottom": null, 655 | "display": null, 656 | "flex": null, 657 | "flex_flow": null, 658 | "grid_area": null, 659 | "grid_auto_columns": null, 660 | "grid_auto_flow": null, 661 | "grid_auto_rows": null, 662 | "grid_column": null, 663 | "grid_gap": null, 664 | "grid_row": null, 665 | "grid_template_areas": null, 666 | "grid_template_columns": null, 667 | "grid_template_rows": null, 668 | "height": null, 669 | "justify_content": null, 670 | "justify_items": null, 671 | "left": null, 672 | "margin": null, 673 | "max_height": null, 674 | "max_width": null, 675 | "min_height": null, 676 | "min_width": null, 677 | "object_fit": null, 678 | "object_position": null, 679 | "order": null, 680 | "overflow": null, 681 | "overflow_x": null, 682 | "overflow_y": null, 683 | "padding": null, 684 | "right": null, 685 | "top": null, 686 | "visibility": null, 687 | "width": null 688 | } 689 | }, 690 | "58b6dd2a21df4675b4b71c7a793d9619": { 691 | "model_module": "@jupyter-widgets/base", 692 | "model_module_version": "1.2.0", 693 | "model_name": "LayoutModel", 694 | "state": { 695 | "_model_module": "@jupyter-widgets/base", 696 | "_model_module_version": "1.2.0", 697 | "_model_name": "LayoutModel", 698 | "_view_count": null, 699 | "_view_module": "@jupyter-widgets/base", 700 | "_view_module_version": "1.2.0", 701 | "_view_name": "LayoutView", 702 | "align_content": null, 703 | "align_items": null, 704 | "align_self": null, 705 | "border": null, 706 | "bottom": null, 707 | "display": null, 708 | "flex": null, 709 | "flex_flow": null, 710 | "grid_area": null, 711 | "grid_auto_columns": null, 712 | "grid_auto_flow": null, 713 | "grid_auto_rows": null, 714 | "grid_column": null, 715 | "grid_gap": null, 716 | "grid_row": null, 717 | "grid_template_areas": null, 718 | "grid_template_columns": null, 719 | "grid_template_rows": null, 720 | "height": 
null, 721 | "justify_content": null, 722 | "justify_items": null, 723 | "left": null, 724 | "margin": null, 725 | "max_height": null, 726 | "max_width": null, 727 | "min_height": null, 728 | "min_width": null, 729 | "object_fit": null, 730 | "object_position": null, 731 | "order": null, 732 | "overflow": null, 733 | "overflow_x": null, 734 | "overflow_y": null, 735 | "padding": null, 736 | "right": null, 737 | "top": null, 738 | "visibility": null, 739 | "width": null 740 | } 741 | }, 742 | "58e8e64e8f364e768fbb48cd24b380da": { 743 | "model_module": "@jupyter-widgets/base", 744 | "model_module_version": "1.2.0", 745 | "model_name": "LayoutModel", 746 | "state": { 747 | "_model_module": "@jupyter-widgets/base", 748 | "_model_module_version": "1.2.0", 749 | "_model_name": "LayoutModel", 750 | "_view_count": null, 751 | "_view_module": "@jupyter-widgets/base", 752 | "_view_module_version": "1.2.0", 753 | "_view_name": "LayoutView", 754 | "align_content": null, 755 | "align_items": null, 756 | "align_self": null, 757 | "border": null, 758 | "bottom": null, 759 | "display": null, 760 | "flex": null, 761 | "flex_flow": null, 762 | "grid_area": null, 763 | "grid_auto_columns": null, 764 | "grid_auto_flow": null, 765 | "grid_auto_rows": null, 766 | "grid_column": null, 767 | "grid_gap": null, 768 | "grid_row": null, 769 | "grid_template_areas": null, 770 | "grid_template_columns": null, 771 | "grid_template_rows": null, 772 | "height": null, 773 | "justify_content": null, 774 | "justify_items": null, 775 | "left": null, 776 | "margin": null, 777 | "max_height": null, 778 | "max_width": null, 779 | "min_height": null, 780 | "min_width": null, 781 | "object_fit": null, 782 | "object_position": null, 783 | "order": null, 784 | "overflow": null, 785 | "overflow_x": null, 786 | "overflow_y": null, 787 | "padding": null, 788 | "right": null, 789 | "top": null, 790 | "visibility": null, 791 | "width": null 792 | } 793 | }, 794 | "60ccf59c740b4d50aafae10b48168245": { 795 | "model_module": "@jupyter-widgets/controls", 796 | "model_module_version": "1.5.0", 797 | "model_name": "DescriptionStyleModel", 798 | "state": { 799 | "_model_module": "@jupyter-widgets/controls", 800 | "_model_module_version": "1.5.0", 801 | "_model_name": "DescriptionStyleModel", 802 | "_view_count": null, 803 | "_view_module": "@jupyter-widgets/base", 804 | "_view_module_version": "1.2.0", 805 | "_view_name": "StyleView", 806 | "description_width": "" 807 | } 808 | }, 809 | "68bf0dd468d847808bbc2a6eaf605472": { 810 | "model_module": "@jupyter-widgets/controls", 811 | "model_module_version": "1.5.0", 812 | "model_name": "FloatProgressModel", 813 | "state": { 814 | "_dom_classes": [], 815 | "_model_module": "@jupyter-widgets/controls", 816 | "_model_module_version": "1.5.0", 817 | "_model_name": "FloatProgressModel", 818 | "_view_count": null, 819 | "_view_module": "@jupyter-widgets/controls", 820 | "_view_module_version": "1.5.0", 821 | "_view_name": "ProgressView", 822 | "bar_style": "info", 823 | "description": " 0%", 824 | "description_tooltip": null, 825 | "layout": "IPY_MODEL_e9306e79837f40fb9ccf002a926c34d0", 826 | "max": 1, 827 | "min": 0, 828 | "orientation": "horizontal", 829 | "style": "IPY_MODEL_ceb460b0aaca4862b0a4f9e67cfed125", 830 | "value": 0 831 | } 832 | }, 833 | "7280fde643ac48848aa736ec41a98bcb": { 834 | "model_module": "@jupyter-widgets/controls", 835 | "model_module_version": "1.5.0", 836 | "model_name": "HBoxModel", 837 | "state": { 838 | "_dom_classes": [], 839 | "_model_module": "@jupyter-widgets/controls", 
840 | "_model_module_version": "1.5.0", 841 | "_model_name": "HBoxModel", 842 | "_view_count": null, 843 | "_view_module": "@jupyter-widgets/controls", 844 | "_view_module_version": "1.5.0", 845 | "_view_name": "HBoxView", 846 | "box_style": "", 847 | "children": [ 848 | "IPY_MODEL_68bf0dd468d847808bbc2a6eaf605472", 849 | "IPY_MODEL_bf4052f5f12f46258d59c52a6445a7f5" 850 | ], 851 | "layout": "IPY_MODEL_b12014e93d1343f096e55d764c0c3a58" 852 | } 853 | }, 854 | "805faab1a1004bef864059238b42b1c0": { 855 | "model_module": "@jupyter-widgets/controls", 856 | "model_module_version": "1.5.0", 857 | "model_name": "ProgressStyleModel", 858 | "state": { 859 | "_model_module": "@jupyter-widgets/controls", 860 | "_model_module_version": "1.5.0", 861 | "_model_name": "ProgressStyleModel", 862 | "_view_count": null, 863 | "_view_module": "@jupyter-widgets/base", 864 | "_view_module_version": "1.2.0", 865 | "_view_name": "StyleView", 866 | "bar_color": null, 867 | "description_width": "initial" 868 | } 869 | }, 870 | "83287b021d1c4ea5b98d7fca180d184f": { 871 | "model_module": "@jupyter-widgets/controls", 872 | "model_module_version": "1.5.0", 873 | "model_name": "HBoxModel", 874 | "state": { 875 | "_dom_classes": [], 876 | "_model_module": "@jupyter-widgets/controls", 877 | "_model_module_version": "1.5.0", 878 | "_model_name": "HBoxModel", 879 | "_view_count": null, 880 | "_view_module": "@jupyter-widgets/controls", 881 | "_view_module_version": "1.5.0", 882 | "_view_name": "HBoxView", 883 | "box_style": "", 884 | "children": [ 885 | "IPY_MODEL_d09b58b17dca42c2bcf3ac2a397505f8", 886 | "IPY_MODEL_0c09a861959f4cd8b86db1e6738c9a4f" 887 | ], 888 | "layout": "IPY_MODEL_388850f6e78d4e88812f80c689a98170" 889 | } 890 | }, 891 | "8b4e5a3bfbfa4ca6aae44cc663636203": { 892 | "model_module": "@jupyter-widgets/base", 893 | "model_module_version": "1.2.0", 894 | "model_name": "LayoutModel", 895 | "state": { 896 | "_model_module": "@jupyter-widgets/base", 897 | "_model_module_version": "1.2.0", 898 | "_model_name": "LayoutModel", 899 | "_view_count": null, 900 | "_view_module": "@jupyter-widgets/base", 901 | "_view_module_version": "1.2.0", 902 | "_view_name": "LayoutView", 903 | "align_content": null, 904 | "align_items": null, 905 | "align_self": null, 906 | "border": null, 907 | "bottom": null, 908 | "display": null, 909 | "flex": null, 910 | "flex_flow": null, 911 | "grid_area": null, 912 | "grid_auto_columns": null, 913 | "grid_auto_flow": null, 914 | "grid_auto_rows": null, 915 | "grid_column": null, 916 | "grid_gap": null, 917 | "grid_row": null, 918 | "grid_template_areas": null, 919 | "grid_template_columns": null, 920 | "grid_template_rows": null, 921 | "height": null, 922 | "justify_content": null, 923 | "justify_items": null, 924 | "left": null, 925 | "margin": null, 926 | "max_height": null, 927 | "max_width": null, 928 | "min_height": null, 929 | "min_width": null, 930 | "object_fit": null, 931 | "object_position": null, 932 | "order": null, 933 | "overflow": null, 934 | "overflow_x": null, 935 | "overflow_y": null, 936 | "padding": null, 937 | "right": null, 938 | "top": null, 939 | "visibility": null, 940 | "width": null 941 | } 942 | }, 943 | "a583bb9720b34d7790320eddf59f634f": { 944 | "model_module": "@jupyter-widgets/controls", 945 | "model_module_version": "1.5.0", 946 | "model_name": "ProgressStyleModel", 947 | "state": { 948 | "_model_module": "@jupyter-widgets/controls", 949 | "_model_module_version": "1.5.0", 950 | "_model_name": "ProgressStyleModel", 951 | "_view_count": null, 952 | 
"_view_module": "@jupyter-widgets/base", 953 | "_view_module_version": "1.2.0", 954 | "_view_name": "StyleView", 955 | "bar_color": null, 956 | "description_width": "initial" 957 | } 958 | }, 959 | "a64f27dc03d7409e8d18712ce7dac77a": { 960 | "model_module": "@jupyter-widgets/controls", 961 | "model_module_version": "1.5.0", 962 | "model_name": "HBoxModel", 963 | "state": { 964 | "_dom_classes": [], 965 | "_model_module": "@jupyter-widgets/controls", 966 | "_model_module_version": "1.5.0", 967 | "_model_name": "HBoxModel", 968 | "_view_count": null, 969 | "_view_module": "@jupyter-widgets/controls", 970 | "_view_module_version": "1.5.0", 971 | "_view_name": "HBoxView", 972 | "box_style": "", 973 | "children": [ 974 | "IPY_MODEL_e3cac4d67a5f4399b61a23df9af0a733", 975 | "IPY_MODEL_f36a53629e784e0784e33fd7f2f59cdf" 976 | ], 977 | "layout": "IPY_MODEL_19b8d17e00d245d185781a9d103927b1" 978 | } 979 | }, 980 | "a724d954d23f49fb8727e0bfce85bfb9": { 981 | "model_module": "@jupyter-widgets/controls", 982 | "model_module_version": "1.5.0", 983 | "model_name": "ProgressStyleModel", 984 | "state": { 985 | "_model_module": "@jupyter-widgets/controls", 986 | "_model_module_version": "1.5.0", 987 | "_model_name": "ProgressStyleModel", 988 | "_view_count": null, 989 | "_view_module": "@jupyter-widgets/base", 990 | "_view_module_version": "1.2.0", 991 | "_view_name": "StyleView", 992 | "bar_color": null, 993 | "description_width": "initial" 994 | } 995 | }, 996 | "b12014e93d1343f096e55d764c0c3a58": { 997 | "model_module": "@jupyter-widgets/base", 998 | "model_module_version": "1.2.0", 999 | "model_name": "LayoutModel", 1000 | "state": { 1001 | "_model_module": "@jupyter-widgets/base", 1002 | "_model_module_version": "1.2.0", 1003 | "_model_name": "LayoutModel", 1004 | "_view_count": null, 1005 | "_view_module": "@jupyter-widgets/base", 1006 | "_view_module_version": "1.2.0", 1007 | "_view_name": "LayoutView", 1008 | "align_content": null, 1009 | "align_items": null, 1010 | "align_self": null, 1011 | "border": null, 1012 | "bottom": null, 1013 | "display": null, 1014 | "flex": null, 1015 | "flex_flow": null, 1016 | "grid_area": null, 1017 | "grid_auto_columns": null, 1018 | "grid_auto_flow": null, 1019 | "grid_auto_rows": null, 1020 | "grid_column": null, 1021 | "grid_gap": null, 1022 | "grid_row": null, 1023 | "grid_template_areas": null, 1024 | "grid_template_columns": null, 1025 | "grid_template_rows": null, 1026 | "height": null, 1027 | "justify_content": null, 1028 | "justify_items": null, 1029 | "left": null, 1030 | "margin": null, 1031 | "max_height": null, 1032 | "max_width": null, 1033 | "min_height": null, 1034 | "min_width": null, 1035 | "object_fit": null, 1036 | "object_position": null, 1037 | "order": null, 1038 | "overflow": null, 1039 | "overflow_x": null, 1040 | "overflow_y": null, 1041 | "padding": null, 1042 | "right": null, 1043 | "top": null, 1044 | "visibility": null, 1045 | "width": null 1046 | } 1047 | }, 1048 | "b3e9f195e72b4e36a49d2a0849ad53a9": { 1049 | "model_module": "@jupyter-widgets/base", 1050 | "model_module_version": "1.2.0", 1051 | "model_name": "LayoutModel", 1052 | "state": { 1053 | "_model_module": "@jupyter-widgets/base", 1054 | "_model_module_version": "1.2.0", 1055 | "_model_name": "LayoutModel", 1056 | "_view_count": null, 1057 | "_view_module": "@jupyter-widgets/base", 1058 | "_view_module_version": "1.2.0", 1059 | "_view_name": "LayoutView", 1060 | "align_content": null, 1061 | "align_items": null, 1062 | "align_self": null, 1063 | "border": null, 1064 | "bottom": 
null, 1065 | "display": null, 1066 | "flex": null, 1067 | "flex_flow": null, 1068 | "grid_area": null, 1069 | "grid_auto_columns": null, 1070 | "grid_auto_flow": null, 1071 | "grid_auto_rows": null, 1072 | "grid_column": null, 1073 | "grid_gap": null, 1074 | "grid_row": null, 1075 | "grid_template_areas": null, 1076 | "grid_template_columns": null, 1077 | "grid_template_rows": null, 1078 | "height": null, 1079 | "justify_content": null, 1080 | "justify_items": null, 1081 | "left": null, 1082 | "margin": null, 1083 | "max_height": null, 1084 | "max_width": null, 1085 | "min_height": null, 1086 | "min_width": null, 1087 | "object_fit": null, 1088 | "object_position": null, 1089 | "order": null, 1090 | "overflow": null, 1091 | "overflow_x": null, 1092 | "overflow_y": null, 1093 | "padding": null, 1094 | "right": null, 1095 | "top": null, 1096 | "visibility": null, 1097 | "width": null 1098 | } 1099 | }, 1100 | "be3bc7c66af74fa68f1957fd2b35ebbb": { 1101 | "model_module": "@jupyter-widgets/controls", 1102 | "model_module_version": "1.5.0", 1103 | "model_name": "FloatProgressModel", 1104 | "state": { 1105 | "_dom_classes": [], 1106 | "_model_module": "@jupyter-widgets/controls", 1107 | "_model_module_version": "1.5.0", 1108 | "_model_name": "FloatProgressModel", 1109 | "_view_count": null, 1110 | "_view_module": "@jupyter-widgets/controls", 1111 | "_view_module_version": "1.5.0", 1112 | "_view_name": "ProgressView", 1113 | "bar_style": "info", 1114 | "description": "", 1115 | "description_tooltip": null, 1116 | "layout": "IPY_MODEL_58e8e64e8f364e768fbb48cd24b380da", 1117 | "max": 1, 1118 | "min": 0, 1119 | "orientation": "horizontal", 1120 | "style": "IPY_MODEL_a724d954d23f49fb8727e0bfce85bfb9", 1121 | "value": 1 1122 | } 1123 | }, 1124 | "bf4052f5f12f46258d59c52a6445a7f5": { 1125 | "model_module": "@jupyter-widgets/controls", 1126 | "model_module_version": "1.5.0", 1127 | "model_name": "HTMLModel", 1128 | "state": { 1129 | "_dom_classes": [], 1130 | "_model_module": "@jupyter-widgets/controls", 1131 | "_model_module_version": "1.5.0", 1132 | "_model_name": "HTMLModel", 1133 | "_view_count": null, 1134 | "_view_module": "@jupyter-widgets/controls", 1135 | "_view_module_version": "1.5.0", 1136 | "_view_name": "HTMLView", 1137 | "description": "", 1138 | "description_tooltip": null, 1139 | "layout": "IPY_MODEL_58b6dd2a21df4675b4b71c7a793d9619", 1140 | "placeholder": "​", 1141 | "style": "IPY_MODEL_f955c472f82845dc930e5da6a2a5be0f", 1142 | "value": " 0/4542 [00:00<?, ?it/s]" 1143 | } 1144 | }, 1145 | "ceb460b0aaca4862b0a4f9e67cfed125": { 1146 | "model_module": "@jupyter-widgets/controls", 1147 | "model_module_version": "1.5.0", 1148 | "model_name": "ProgressStyleModel", 1149 | "state": { 1150 | "_model_module": "@jupyter-widgets/controls", 1151 | "_model_module_version": "1.5.0", 1152 | "_model_name": "ProgressStyleModel", 1153 | "_view_count": null, 1154 | "_view_module": "@jupyter-widgets/base", 1155 | "_view_module_version": "1.2.0", 1156 | "_view_name": "StyleView", 1157 | "bar_color": null, 1158 | "description_width": "initial" 1159 | } 1160 | }, 1161 | "d09b58b17dca42c2bcf3ac2a397505f8": { 1162 | "model_module": "@jupyter-widgets/controls", 1163 | "model_module_version": "1.5.0", 1164 | "model_name": "FloatProgressModel", 1165 | "state": { 1166 | "_dom_classes": [], 1167 | "_model_module": "@jupyter-widgets/controls", 1168 | "_model_module_version": "1.5.0", 1169 | "_model_name": "FloatProgressModel", 1170 | "_view_count": null, 1171 | "_view_module": "@jupyter-widgets/controls", 1172 | 
"_view_module_version": "1.5.0", 1173 | "_view_name": "ProgressView", 1174 | "bar_style": "info", 1175 | "description": " 0%", 1176 | "description_tooltip": null, 1177 | "layout": "IPY_MODEL_4be7ced1fbfa4883a0ccf832342832bf", 1178 | "max": 1, 1179 | "min": 0, 1180 | "orientation": "horizontal", 1181 | "style": "IPY_MODEL_a583bb9720b34d7790320eddf59f634f", 1182 | "value": 0 1183 | } 1184 | }, 1185 | "d716982ea62f4a86af003f6a463693b4": { 1186 | "model_module": "@jupyter-widgets/controls", 1187 | "model_module_version": "1.5.0", 1188 | "model_name": "DescriptionStyleModel", 1189 | "state": { 1190 | "_model_module": "@jupyter-widgets/controls", 1191 | "_model_module_version": "1.5.0", 1192 | "_model_name": "DescriptionStyleModel", 1193 | "_view_count": null, 1194 | "_view_module": "@jupyter-widgets/base", 1195 | "_view_module_version": "1.2.0", 1196 | "_view_name": "StyleView", 1197 | "description_width": "" 1198 | } 1199 | }, 1200 | "e1718bb29f9442c6badb17bd68991ebb": { 1201 | "model_module": "@jupyter-widgets/base", 1202 | "model_module_version": "1.2.0", 1203 | "model_name": "LayoutModel", 1204 | "state": { 1205 | "_model_module": "@jupyter-widgets/base", 1206 | "_model_module_version": "1.2.0", 1207 | "_model_name": "LayoutModel", 1208 | "_view_count": null, 1209 | "_view_module": "@jupyter-widgets/base", 1210 | "_view_module_version": "1.2.0", 1211 | "_view_name": "LayoutView", 1212 | "align_content": null, 1213 | "align_items": null, 1214 | "align_self": null, 1215 | "border": null, 1216 | "bottom": null, 1217 | "display": null, 1218 | "flex": null, 1219 | "flex_flow": null, 1220 | "grid_area": null, 1221 | "grid_auto_columns": null, 1222 | "grid_auto_flow": null, 1223 | "grid_auto_rows": null, 1224 | "grid_column": null, 1225 | "grid_gap": null, 1226 | "grid_row": null, 1227 | "grid_template_areas": null, 1228 | "grid_template_columns": null, 1229 | "grid_template_rows": null, 1230 | "height": null, 1231 | "justify_content": null, 1232 | "justify_items": null, 1233 | "left": null, 1234 | "margin": null, 1235 | "max_height": null, 1236 | "max_width": null, 1237 | "min_height": null, 1238 | "min_width": null, 1239 | "object_fit": null, 1240 | "object_position": null, 1241 | "order": null, 1242 | "overflow": null, 1243 | "overflow_x": null, 1244 | "overflow_y": null, 1245 | "padding": null, 1246 | "right": null, 1247 | "top": null, 1248 | "visibility": null, 1249 | "width": null 1250 | } 1251 | }, 1252 | "e3cac4d67a5f4399b61a23df9af0a733": { 1253 | "model_module": "@jupyter-widgets/controls", 1254 | "model_module_version": "1.5.0", 1255 | "model_name": "FloatProgressModel", 1256 | "state": { 1257 | "_dom_classes": [], 1258 | "_model_module": "@jupyter-widgets/controls", 1259 | "_model_module_version": "1.5.0", 1260 | "_model_name": "FloatProgressModel", 1261 | "_view_count": null, 1262 | "_view_module": "@jupyter-widgets/controls", 1263 | "_view_module_version": "1.5.0", 1264 | "_view_name": "ProgressView", 1265 | "bar_style": "info", 1266 | "description": "", 1267 | "description_tooltip": null, 1268 | "layout": "IPY_MODEL_b3e9f195e72b4e36a49d2a0849ad53a9", 1269 | "max": 1, 1270 | "min": 0, 1271 | "orientation": "horizontal", 1272 | "style": "IPY_MODEL_805faab1a1004bef864059238b42b1c0", 1273 | "value": 1 1274 | } 1275 | }, 1276 | "e47b12a588df4610b7700a8e04907233": { 1277 | "model_module": "@jupyter-widgets/base", 1278 | "model_module_version": "1.2.0", 1279 | "model_name": "LayoutModel", 1280 | "state": { 1281 | "_model_module": "@jupyter-widgets/base", 1282 | "_model_module_version": 
"1.2.0", 1283 | "_model_name": "LayoutModel", 1284 | "_view_count": null, 1285 | "_view_module": "@jupyter-widgets/base", 1286 | "_view_module_version": "1.2.0", 1287 | "_view_name": "LayoutView", 1288 | "align_content": null, 1289 | "align_items": null, 1290 | "align_self": null, 1291 | "border": null, 1292 | "bottom": null, 1293 | "display": null, 1294 | "flex": null, 1295 | "flex_flow": null, 1296 | "grid_area": null, 1297 | "grid_auto_columns": null, 1298 | "grid_auto_flow": null, 1299 | "grid_auto_rows": null, 1300 | "grid_column": null, 1301 | "grid_gap": null, 1302 | "grid_row": null, 1303 | "grid_template_areas": null, 1304 | "grid_template_columns": null, 1305 | "grid_template_rows": null, 1306 | "height": null, 1307 | "justify_content": null, 1308 | "justify_items": null, 1309 | "left": null, 1310 | "margin": null, 1311 | "max_height": null, 1312 | "max_width": null, 1313 | "min_height": null, 1314 | "min_width": null, 1315 | "object_fit": null, 1316 | "object_position": null, 1317 | "order": null, 1318 | "overflow": null, 1319 | "overflow_x": null, 1320 | "overflow_y": null, 1321 | "padding": null, 1322 | "right": null, 1323 | "top": null, 1324 | "visibility": null, 1325 | "width": null 1326 | } 1327 | }, 1328 | "e9306e79837f40fb9ccf002a926c34d0": { 1329 | "model_module": "@jupyter-widgets/base", 1330 | "model_module_version": "1.2.0", 1331 | "model_name": "LayoutModel", 1332 | "state": { 1333 | "_model_module": "@jupyter-widgets/base", 1334 | "_model_module_version": "1.2.0", 1335 | "_model_name": "LayoutModel", 1336 | "_view_count": null, 1337 | "_view_module": "@jupyter-widgets/base", 1338 | "_view_module_version": "1.2.0", 1339 | "_view_name": "LayoutView", 1340 | "align_content": null, 1341 | "align_items": null, 1342 | "align_self": null, 1343 | "border": null, 1344 | "bottom": null, 1345 | "display": null, 1346 | "flex": null, 1347 | "flex_flow": null, 1348 | "grid_area": null, 1349 | "grid_auto_columns": null, 1350 | "grid_auto_flow": null, 1351 | "grid_auto_rows": null, 1352 | "grid_column": null, 1353 | "grid_gap": null, 1354 | "grid_row": null, 1355 | "grid_template_areas": null, 1356 | "grid_template_columns": null, 1357 | "grid_template_rows": null, 1358 | "height": null, 1359 | "justify_content": null, 1360 | "justify_items": null, 1361 | "left": null, 1362 | "margin": null, 1363 | "max_height": null, 1364 | "max_width": null, 1365 | "min_height": null, 1366 | "min_width": null, 1367 | "object_fit": null, 1368 | "object_position": null, 1369 | "order": null, 1370 | "overflow": null, 1371 | "overflow_x": null, 1372 | "overflow_y": null, 1373 | "padding": null, 1374 | "right": null, 1375 | "top": null, 1376 | "visibility": null, 1377 | "width": null 1378 | } 1379 | }, 1380 | "f242ac743e6d424f927583404494dbbf": { 1381 | "model_module": "@jupyter-widgets/controls", 1382 | "model_module_version": "1.5.0", 1383 | "model_name": "HTMLModel", 1384 | "state": { 1385 | "_dom_classes": [], 1386 | "_model_module": "@jupyter-widgets/controls", 1387 | "_model_module_version": "1.5.0", 1388 | "_model_name": "HTMLModel", 1389 | "_view_count": null, 1390 | "_view_module": "@jupyter-widgets/controls", 1391 | "_view_module_version": "1.5.0", 1392 | "_view_name": "HTMLView", 1393 | "description": "", 1394 | "description_tooltip": null, 1395 | "layout": "IPY_MODEL_e1718bb29f9442c6badb17bd68991ebb", 1396 | "placeholder": "​", 1397 | "style": "IPY_MODEL_d716982ea62f4a86af003f6a463693b4", 1398 | "value": " 1654784/? 
[00:18<00:00, 528643.49it/s]" 1399 | } 1400 | }, 1401 | "f36a53629e784e0784e33fd7f2f59cdf": { 1402 | "model_module": "@jupyter-widgets/controls", 1403 | "model_module_version": "1.5.0", 1404 | "model_name": "HTMLModel", 1405 | "state": { 1406 | "_dom_classes": [], 1407 | "_model_module": "@jupyter-widgets/controls", 1408 | "_model_module_version": "1.5.0", 1409 | "_model_name": "HTMLModel", 1410 | "_view_count": null, 1411 | "_view_module": "@jupyter-widgets/controls", 1412 | "_view_module_version": "1.5.0", 1413 | "_view_name": "HTMLView", 1414 | "description": "", 1415 | "description_tooltip": null, 1416 | "layout": "IPY_MODEL_e47b12a588df4610b7700a8e04907233", 1417 | "placeholder": "​", 1418 | "style": "IPY_MODEL_60ccf59c740b4d50aafae10b48168245", 1419 | "value": " 9920512/? [00:20<00:00, 1049660.66it/s]" 1420 | } 1421 | }, 1422 | "f955c472f82845dc930e5da6a2a5be0f": { 1423 | "model_module": "@jupyter-widgets/controls", 1424 | "model_module_version": "1.5.0", 1425 | "model_name": "DescriptionStyleModel", 1426 | "state": { 1427 | "_model_module": "@jupyter-widgets/controls", 1428 | "_model_module_version": "1.5.0", 1429 | "_model_name": "DescriptionStyleModel", 1430 | "_view_count": null, 1431 | "_view_module": "@jupyter-widgets/base", 1432 | "_view_module_version": "1.2.0", 1433 | "_view_name": "StyleView", 1434 | "description_width": "" 1435 | } 1436 | } 1437 | } 1438 | } 1439 | }, 1440 | "nbformat": 4, 1441 | "nbformat_minor": 4 1442 | } 1443 | -------------------------------------------------------------------------------- /03_Deployment/00 Deploy Numpy.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Model Deployment using Numpy Linear Classifier" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "## 1. Introduction\n", 15 | "In this workbook, we will look into the basics of deploying a model. For simplicity, we will consider a simple numpy linear classifier $$ \\mathbf{Y} = \\mathbf{W} \\mathbf{X} + \\mathbf{b}$$\n", 16 | "\n", 17 | "For simplicity, we will consider $\\mathbf{X}$ to be 6 dimensional ($\\mathbb{R}^6$). i.e. 1 data point $x \\in \\mathbf{X}$ will be a numpy array of shape $(1,6)$. The output $\\mathbf{Y}$ is 3 dimensional ($\\mathbb{R}^3$). Then, the weights $\\mathbf{W}$ will be a numpy array of shape $(3,6)$ and bias $\\mathbf{b}$ will be a numpy array of shape $(,3)$. \n", 18 | "\n", 19 | "In this workbook, we will demonstrate how to deploy this numpy linear classifier as a server and how to perform query on this numpy linear classifier.\n", 20 | "\n", 21 | "## 2. Imports and Dependencies.\n", 22 | "The few packages needed are loaded next. Particularly, `numpy`, `mlflow` will be majorly used in this tutorial. `requests` package will be used for performing query. `json` is used to post and get response from the server." 
23 | ] 24 | }, 25 | { 26 | "cell_type": "code", 27 | "execution_count": null, 28 | "metadata": {}, 29 | "outputs": [], 30 | "source": [ 31 | "import os\n", 32 | "import sys\n", 33 | "import mlflow\n", 34 | "import numpy as np\n", 35 | "\n", 36 | "# Suppress warnings\n", 37 | "import warnings\n", 38 | "warnings.filterwarnings(\"ignore\")" 39 | ] 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "metadata": { 44 | "tags": [] 45 | }, 46 | "source": [ 47 | "## MLflow for experiment tracking and model deployment\n", 48 | "\n", 49 | "MLflow is an open source platform for managing the end-to-end machine learning lifecycle. It tackles four primary functions:\n", 50 | "\n", 51 | "- Tracking experiments to record and compare parameters and results (MLflow Tracking).\n", 52 | "- Managing and deploying models from a variety of ML libraries to a variety of model serving and inference platforms (MLflow Models).\n", 53 | "- Providing a central model store to collaboratively manage the full lifecycle of an MLflow Model, including model versioning, stage transitions, and annotations (MLflow Model Registry).\n", 54 | "\n", 55 | "More information [here](https://www.mlflow.org/docs/latest/index.html#)\n", 56 | "\n", 57 | "\n", 58 | "\n", 59 | "![image.png](https://www.mlflow.org/docs/latest/_images/scenario_4.png)\n", 60 | "\n", 61 | "- localhost maps to the server on which the current notebook is running\n", 62 | "\n", 63 | "- Tracking server maps to the server at environment variable `TRACKING_URL` that can be printed using `os.environ.get(\"TRACKING_URL\")`\n", 64 | "\n", 65 | "- Create an mlflow client that communicates with the tracking server" 66 | ] 67 | }, 68 | { 69 | "cell_type": "code", 70 | "execution_count": null, 71 | "metadata": {}, 72 | "outputs": [], 73 | "source": [ 74 | "from mlflow import pyfunc\n", 75 | "\n", 76 | "# Setting a tracking uri to log the mlflow logs in a particular location tracked by \n", 77 | "from mlflow.tracking import MlflowClient\n", 78 | "tracking_uri = os.environ.get(\"TRACKING_URL\")\n", 79 | "client = MlflowClient(tracking_uri=tracking_uri)\n", 80 | "mlflow.set_tracking_uri(tracking_uri)" 81 | ] 82 | }, 83 | { 84 | "cell_type": "markdown", 85 | "metadata": {}, 86 | "source": [ 87 | "## Create an experiment in mlflow database using mlflow client\n", 88 | "\n", 89 | "- Get the list of all the experiments (Click on **Experiments** tab on the sidebar to see the list)\n", 90 | "- Create a new experiment named *numpy_deployment* if it doesn't exist\n", 91 | "- Set *numpy_deployment* as the new experiment under which different **runs** are tracked" 92 | ] 93 | }, 94 | { 95 | "cell_type": "markdown", 96 | "metadata": { 97 | "tags": [] 98 | }, 99 | "source": [ 100 | "## MLflow Entity Hierarchy\n", 101 | "\n", 102 | "- Experiment 1\n", 103 | " - Run 1\n", 104 | " - Parameters\n", 105 | " - Metrics\n", 106 | " - Artifacts\n", 107 | " - Folder 1\n", 108 | " - File 1\n", 109 | " - File 2\n", 110 | " - Folder 2 \n", 111 | " - Run 2\n", 112 | " - Run 3\n", 113 | "\n", 114 | "- Experiment 2\n", 115 | "- Experiment 3 " 116 | ] 117 | }, 118 | { 119 | "cell_type": "code", 120 | "execution_count": null, 121 | "metadata": {}, 122 | "outputs": [], 123 | "source": [ 124 | "# Setting a tracking project experiment name to keep the experiments organized\n", 125 | "experiments = client.list_experiments()\n", 126 | "experiment_names = []\n", 127 | "for exp in experiments:\n", 128 | " experiment_names.append(exp.name)\n", 129 | "experiment_name = \"numpy_deployment\"\n", 130 | "if 
experiment_name not in experiment_names:\n", 131 | " mlflow.create_experiment(experiment_name)\n", 132 | "mlflow.set_experiment(experiment_name)\n" 133 | ] 134 | }, 135 | { 136 | "cell_type": "markdown", 137 | "metadata": { 138 | "tags": [] 139 | }, 140 | "source": [ 141 | "## Python Class for inference\n", 142 | "\n", 143 | "- ModelWrapper is derived from mlflow.pyfunc.PythonModel [more info](https://www.mlflow.org/docs/latest/python_api/mlflow.pyfunc.html)\n", 144 | "- load_context() member function is used to load the model. In this case, it loads a numpy file with two arrays **weights** and **bias**\n", 145 | "- predict member function takes a numpy array as input and outputs another numpy array\n", 146 | "- An object of this class will be saved as a pickle file in blob storage" 147 | ] 148 | }, 149 | { 150 | "cell_type": "code", 151 | "execution_count": null, 152 | "metadata": {}, 153 | "outputs": [], 154 | "source": [ 155 | "## Model Wrapper that takes \n", 156 | "class ModelWrapper(mlflow.pyfunc.PythonModel):\n", 157 | " def load_context(self,context):\n", 158 | " import numpy as np\n", 159 | " self.model = np.load(context.artifacts['model_path'], allow_pickle=True).tolist()\n", 160 | " print(\"Model initialized\")\n", 161 | " \n", 162 | " def predict(self, context, model_input):\n", 163 | " import numpy as np\n", 164 | " import json\n", 165 | " json_txt = \", \".join(model_input.columns)\n", 166 | " data_list = json.loads(json_txt)\n", 167 | " inputs = np.array(data_list)\n", 168 | " if len(inputs.shape) == 2:\n", 169 | " print('batch inference')\n", 170 | " predictions = []\n", 171 | " for idx in range(inputs.shape[0]):\n", 172 | " prediction = np.matmul(inputs[idx,:],self.model['weights'].T) + self.model['bias']\n", 173 | " predictions.append(prediction.tolist())\n", 174 | " elif len(inputs.shape) == 1:\n", 175 | " print('single inference')\n", 176 | " predictions = self.model['weights'].T * inputs + self.model['bias']\n", 177 | " predictions = predictions.tolist()\n", 178 | " else:\n", 179 | " raise ValueError('invalid input shape')\n", 180 | " return json.dumps(predictions)" 181 | ] 182 | }, 183 | { 184 | "cell_type": "markdown", 185 | "metadata": {}, 186 | "source": [ 187 | "## Register a model using mlflow\n", 188 | "\n", 189 | "- Log user-defined parameters in a remote database through a remote server\n", 190 | "- Create a model_wrapper object using ModelWrapper() class in the above cell\n", 191 | "- Create a default conda environment that need to be installed on the Docker conatiner that serves a REST API\n", 192 | "- Save the model object as a pickle file and conda environment as artifacts (files) in S3 or Blob Storage" 193 | ] 194 | }, 195 | { 196 | "cell_type": "code", 197 | "execution_count": null, 198 | "metadata": {}, 199 | "outputs": [], 200 | "source": [ 201 | "# instantiate the python inference model wrapper for the server\n", 202 | "model_wrapper = ModelWrapper()\n", 203 | "\n", 204 | "\n", 205 | "# define the model weights randomly\n", 206 | "np_weights = np.random.rand(3,6)\n", 207 | "np_bias = np.random.rand(3)\n", 208 | "\n", 209 | "# checkpointing and logging the model in mlflow\n", 210 | "artifact_path = './np_model'\n", 211 | "np.save(artifact_path, {'weights':np_weights, 'bias':np_bias})\n", 212 | "model_artifacts = {\"model_path\" : artifact_path+'.npy'}\n", 213 | "\n", 214 | "#Conda environment\n", 215 | "env = mlflow.sklearn.get_default_conda_env()\n", 216 | "with mlflow.start_run():\n", 217 | " mlflow.log_param(\"features\",6)\n", 218 | " 
mlflow.log_param(\"labels\",3)\n", 219 | " mlflow.pyfunc.log_model(\"np_model\", python_model=model_wrapper, artifacts=model_artifacts, conda_env=env)" 220 | ] 221 | }, 222 | { 223 | "cell_type": "markdown", 224 | "metadata": {}, 225 | "source": [ 226 | "## 4. Deploying the model\n", 227 | "The above code logs a model in the experiments tab. For more info please refer [here](https://rocketml.gitbook.io/rocketml-user-guide/experiments). \n", 228 | "\n", 229 | "### 4.1 Find experiment in experiment list and click on it\n", 230 | "![experiments_list](https://github.com/rocketmlhq/sciml/raw/e8abbef269c5bee9d2b69398495fc5ced7457708/03_Deployment/experiments_list.png)\n", 231 | "\n", 232 | "### 4.2 Find run in runs list and click on it\n", 233 | "![runs_list](https://github.com/rocketmlhq/sciml/raw/e8abbef269c5bee9d2b69398495fc5ced7457708/03_Deployment/runs_list.png)\n", 234 | "\n", 235 | "### 4.3 Get run details and click on artifacts\n", 236 | "![run_details](https://github.com/rocketmlhq/sciml/raw/e8abbef269c5bee9d2b69398495fc5ced7457708/03_Deployment/run_details.png)\n", 237 | "\n", 238 | "### 4.4 Check different files logged as artifacts\n", 239 | "![artifacts](https://github.com/rocketmlhq/sciml/raw/e8abbef269c5bee9d2b69398495fc5ced7457708/03_Deployment/artifacts.png)\n", 240 | "\n", 241 | "- An MLflow Model is a standard format for packaging machine learning models that can be used in a variety of downstream tools [More Details](https://www.mlflow.org/docs/latest/models.html#storage-format)\n", 242 | "- ModelWrapper() object is saved as pkl file\n", 243 | "- conda.yaml and requirements.txt file are used to manage Python environment\n", 244 | "- Numpy file is saved in artifacts folder within the main folder (np_model)\n", 245 | "\n", 246 | "### 4.5 Deploy ML model as a REST API service\n", 247 | "\n", 248 | "Click on **Convert To Model** and fill the form\n", 249 | "\n", 250 | "![model_deployment](https://github.com/rocketmlhq/sciml/raw/e8abbef269c5bee9d2b69398495fc5ced7457708/03_Deployment/model_deployment.png)\n", 251 | "\n", 252 | "### 4.6 Go to models tab and wait until the model turns to **ON** state\n", 253 | "![model_list](https://github.com/rocketmlhq/sciml/raw/e8abbef269c5bee9d2b69398495fc5ced7457708/03_Deployment/model_list.png)" 254 | ] 255 | }, 256 | { 257 | "cell_type": "markdown", 258 | "metadata": {}, 259 | "source": [ 260 | "## 5. Use the Endpoint and Query from the server\n", 261 | "\n", 262 | "There are two methods to perform query... The first is using `requests` library and the other using `curl` shell command." 263 | ] 264 | }, 265 | { 266 | "cell_type": "code", 267 | "execution_count": null, 268 | "metadata": {}, 269 | "outputs": [], 270 | "source": [ 271 | "import requests\n", 272 | "import json\n", 273 | "\n", 274 | "################################################################################\n", 275 | "# *** SET MODEL URL HERE BEFORE RUNNING THIS CELL (instructions above) ***\n", 276 | "# Example: https://.sciml.rocketml.net/invocations\n", 277 | "url = \"\"\n", 278 | "################################################################################\n", 279 | "\n", 280 | "if not url:\n", 281 | " raise ValueError('Model URL not set! 
Please read instructions on how to deploy model, set the correct URL, and try again.')\n", 282 | "\n", 283 | "headers = {\"Content-Type\":\"text/csv\"}\n", 284 | "\n", 285 | "# First case, run inference on single data point\n", 286 | "np_array = np.random.rand(1,6).tolist()\n", 287 | "json_data = json.dumps(np_array)\n", 288 | "\n", 289 | "if url:\n", 290 | " response = requests.post(url,data=json_data,headers=headers)\n", 291 | " if response.status_code == 200:\n", 292 | " output = np.array(json.loads(response.json())).astype(np.float32)\n", 293 | " print(output)\n", 294 | " else:\n", 295 | " print(response.status_code)\n", 296 | " print(\"REST API deployment is in progress -- please try again in a few minutes!\")\n", 297 | "else:\n", 298 | " print(\"Make sure that the model is in ON state. Copy the Endpoint\")\n", 299 | "\n", 300 | "# Second case, run inference on multiple data points\n", 301 | "np_array = np.random.rand(20,6).tolist()\n", 302 | "json_data = json.dumps(np_array)\n", 303 | "\n", 304 | "if url:\n", 305 | " response = requests.post(url,data=json_data,headers=headers)\n", 306 | " if response.status_code == 200:\n", 307 | " output = np.array(json.loads(response.json())).astype(np.float32)\n", 308 | " print(output)\n", 309 | " else:\n", 310 | " print(response.status_code)\n", 311 | " print(\"REST API deployment is in progress -- please try again in a few minutes!\")\n", 312 | "else:\n", 313 | " print(\"Make sure that the model is in ON state. Copy the Endpoint\")\n" 314 | ] 315 | }, 316 | { 317 | "cell_type": "code", 318 | "execution_count": null, 319 | "metadata": {}, 320 | "outputs": [], 321 | "source": [] 322 | } 323 | ], 324 | "metadata": { 325 | "kernelspec": { 326 | "display_name": "Python 3 (ipykernel)", 327 | "language": "python", 328 | "name": "python3" 329 | }, 330 | "language_info": { 331 | "codemirror_mode": { 332 | "name": "ipython", 333 | "version": 3 334 | }, 335 | "file_extension": ".py", 336 | "mimetype": "text/x-python", 337 | "name": "python", 338 | "nbconvert_exporter": "python", 339 | "pygments_lexer": "ipython3", 340 | "version": "3.7.11" 341 | } 342 | }, 343 | "nbformat": 4, 344 | "nbformat_minor": 4 345 | } 346 | -------------------------------------------------------------------------------- /03_Deployment/01 Deploy Keras.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Model Deployment using Keras" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "## 1. Introduction\n", 15 | "In this workbook, we will train a simple Keras MNIST CNN model and deploy that for inference\n", 16 | "\n", 17 | "Parts of this workbook are borrowed from [here](https://keras.io/examples/vision/mnist_convnet/)\n", 18 | "\n", 19 | "## 2. Imports and Dependencies.\n", 20 | "The few packages needed are loaded next. Particularly, `numpy`, `tensorflow`, `keras`, `mlflow` will be majorly used in this tutorial. `requests` package will be used for performing query. `json` is used to post and get response from the server." 
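,
    "\n",
    "For orientation, the CNN in the keras.io example referenced above looks roughly like the following (a sketch only; the model actually trained and logged is defined in this notebook's later cells):\n",
    "\n",
    "```python\n",
    "from tensorflow import keras\n",
    "from tensorflow.keras import layers\n",
    "\n",
    "model = keras.Sequential([\n",
    "    keras.Input(shape=(28, 28, 1)),                              # MNIST images as 28x28 grayscale\n",
    "    layers.Conv2D(32, kernel_size=(3, 3), activation=\"relu\"),\n",
    "    layers.MaxPooling2D(pool_size=(2, 2)),\n",
    "    layers.Conv2D(64, kernel_size=(3, 3), activation=\"relu\"),\n",
    "    layers.MaxPooling2D(pool_size=(2, 2)),\n",
    "    layers.Flatten(),\n",
    "    layers.Dropout(0.5),\n",
    "    layers.Dense(10, activation=\"softmax\"),                      # 10 digit classes\n",
    "])\n",
    "model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"])\n",
    "```"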
21 | ] 22 | }, 23 | { 24 | "cell_type": "code", 25 | "execution_count": 1, 26 | "metadata": {}, 27 | "outputs": [ 28 | { 29 | "name": "stderr", 30 | "output_type": "stream", 31 | "text": [ 32 | "2021-11-14 19:26:45.910967: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\n", 33 | "2021-11-14 19:26:45.911013: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\n" 34 | ] 35 | } 36 | ], 37 | "source": [ 38 | "import os\n", 39 | "import sys\n", 40 | "import mlflow\n", 41 | "import mlflow.keras\n", 42 | "import numpy as np\n", 43 | "from mlflow import pyfunc\n", 44 | "import cloudpickle\n", 45 | "import tensorflow as tf\n", 46 | "from tensorflow import keras\n", 47 | "from tensorflow.keras import layers\n", 48 | "from mlflow.utils.environment import _mlflow_conda_env\n", 49 | "\n", 50 | "# Suppress warnings\n", 51 | "import warnings\n", 52 | "warnings.filterwarnings(\"ignore\")" 53 | ] 54 | }, 55 | { 56 | "cell_type": "markdown", 57 | "metadata": { 58 | "tags": [] 59 | }, 60 | "source": [ 61 | "## MLflow for experiment tracking and model deployment\n", 62 | "\n", 63 | "MLflow is an open source platform for managing the end-to-end machine learning lifecycle. It tackles four primary functions:\n", 64 | "\n", 65 | "- Tracking experiments to record and compare parameters and results (MLflow Tracking).\n", 66 | "- Managing and deploying models from a variety of ML libraries to a variety of model serving and inference platforms (MLflow Models).\n", 67 | "- Providing a central model store to collaboratively manage the full lifecycle of an MLflow Model, including model versioning, stage transitions, and annotations (MLflow Model Registry).\n", 68 | "\n", 69 | "More information [here](https://www.mlflow.org/docs/latest/index.html#)\n", 70 | "\n", 71 | "\n", 72 | "\n", 73 | "![image.png](https://www.mlflow.org/docs/latest/_images/scenario_4.png)\n", 74 | "\n", 75 | "- localhost maps to the server on which the current notebook is running\n", 76 | "\n", 77 | "- Tracking server maps to the server at environment variable `TRACKING_URL` that can be printed using `os.environ.get(\"TRACKING_URL\")`\n", 78 | "\n", 79 | "- Create an mlflow client that communicates with the tracking server" 80 | ] 81 | }, 82 | { 83 | "cell_type": "code", 84 | "execution_count": 2, 85 | "metadata": {}, 86 | "outputs": [], 87 | "source": [ 88 | "from mlflow import pyfunc\n", 89 | "\n", 90 | "# Setting a tracking uri to log the mlflow logs in a particular location tracked by \n", 91 | "from mlflow.tracking import MlflowClient\n", 92 | "tracking_uri = os.environ.get(\"TRACKING_URL\")\n", 93 | "client = MlflowClient(tracking_uri=tracking_uri)\n", 94 | "mlflow.set_tracking_uri(tracking_uri)" 95 | ] 96 | }, 97 | { 98 | "cell_type": "markdown", 99 | "metadata": { 100 | "jp-MarkdownHeadingCollapsed": true, 101 | "tags": [] 102 | }, 103 | "source": [ 104 | "## Create an experiment in mlflow database using mlflow client\n", 105 | "\n", 106 | "- Get the list of all the experiments (Click on **Experiments** tab on the sidebar to see the list)\n", 107 | "- Create a new experiment named *numpy_deployment* if it doesn't exist\n", 108 | "- Set *numpy_deployment* as the new experiment under which different **runs** are tracked" 109 | ] 110 | }, 111 | { 112 | "cell_type": "markdown", 113 | "metadata": { 114 | 
"jp-MarkdownHeadingCollapsed": true, 115 | "tags": [] 116 | }, 117 | "source": [ 118 | "## MLflow Entity Hierarchy\n", 119 | "\n", 120 | "- Experiment 1\n", 121 | " - Run 1\n", 122 | " - Parameters\n", 123 | " - Metrics\n", 124 | " - Artifacts\n", 125 | " - Folder 1\n", 126 | " - File 1\n", 127 | " - File 2\n", 128 | " - Folder 2 \n", 129 | " - Run 2\n", 130 | " - Run 3\n", 131 | "\n", 132 | "- Experiment 2\n", 133 | "- Experiment 3 " 134 | ] 135 | }, 136 | { 137 | "cell_type": "code", 138 | "execution_count": 3, 139 | "metadata": {}, 140 | "outputs": [], 141 | "source": [ 142 | "# Setting a tracking project experiment name to keep the experiments organized\n", 143 | "experiments = client.list_experiments()\n", 144 | "experiment_names = []\n", 145 | "for exp in experiments:\n", 146 | " experiment_names.append(exp.name)\n", 147 | "experiment_name = \"keras_deployment\"\n", 148 | "if experiment_name not in experiment_names:\n", 149 | " mlflow.create_experiment(experiment_name)\n", 150 | "mlflow.set_experiment(experiment_name)\n" 151 | ] 152 | }, 153 | { 154 | "cell_type": "markdown", 155 | "metadata": { 156 | "tags": [] 157 | }, 158 | "source": [ 159 | "## Python Class for inference\n", 160 | "\n", 161 | "- ModelWrapper is derived from mlflow.pyfunc.PythonModel [more info](https://www.mlflow.org/docs/latest/python_api/mlflow.pyfunc.html)\n", 162 | "- load_context() member function is used to load the model. In this case, it loads a keras trained model which can be loaded.\n", 163 | "- predict member function takes a numpy array as input and outputs another numpy array\n", 164 | "- An object of this class will be saved as a pickle file in blob storage" 165 | ] 166 | }, 167 | { 168 | "cell_type": "code", 169 | "execution_count": 4, 170 | "metadata": {}, 171 | "outputs": [], 172 | "source": [ 173 | "## Model Wrapper that takes \n", 174 | "class ModelWrapper(mlflow.pyfunc.PythonModel):\n", 175 | " def load_context(self,context):\n", 176 | " import numpy as np\n", 177 | " import tensorflow as tf\n", 178 | " self.model = tf.keras.models.load_model(context.artifacts['model_path'])\n", 179 | " print(\"Model initialized\")\n", 180 | " \n", 181 | " def predict(self, context, model_input):\n", 182 | " import numpy as np\n", 183 | " import json\n", 184 | " import tensorflow as tf\n", 185 | " json_txt = \", \".join(model_input.columns)\n", 186 | " data_list = json.loads(json_txt)\n", 187 | " inputs = np.array(data_list)\n", 188 | " print(inputs.shape)\n", 189 | " if len(inputs.shape) == 4:\n", 190 | " print('batch inference')\n", 191 | " predictions = self.model.predict(inputs)\n", 192 | " predictions = predictions.tolist()\n", 193 | " elif len(inputs.shape) == 3:\n", 194 | " print('single inference')\n", 195 | " predictions = self.model.predict(np.expand_dims(inputs,0))\n", 196 | " predictions = predictions.tolist()\n", 197 | " else:\n", 198 | " raise ValueError('invalid input shape')\n", 199 | " return json.dumps(predictions)" 200 | ] 201 | }, 202 | { 203 | "cell_type": "markdown", 204 | "metadata": { 205 | "jp-MarkdownHeadingCollapsed": true, 206 | "tags": [] 207 | }, 208 | "source": [ 209 | "## Register a model using mlflow\n", 210 | "\n", 211 | "- Log user-defined parameters in a remote database through a remote server\n", 212 | "- Create a model_wrapper object using ModelWrapper() class in the above cell\n", 213 | "- Create a default conda environment that need to be installed on the Docker conatiner that serves a REST API\n", 214 | "- Save the model object as a pickle file and conda environment as 
artifacts (files) in S3 or Blob Storage" 215 | ] 216 | }, 217 | { 218 | "cell_type": "markdown", 219 | "metadata": { 220 | "tags": [] 221 | }, 222 | "source": [ 223 | "## 3.Training\n", 224 | "\n", 225 | "We download the MNIST dataset using utilities. MNIST dataset contains hand written digits. mlflow can automatically log all the metrics along with model. Once the training is complete, mlflow can log the model that needs to be used for inference.\n", 226 | "\n", 227 | "First, we download the dataset and perform preprocessing" 228 | ] 229 | }, 230 | { 231 | "cell_type": "code", 232 | "execution_count": 5, 233 | "metadata": {}, 234 | "outputs": [ 235 | { 236 | "name": "stdout", 237 | "output_type": "stream", 238 | "text": [ 239 | "x_train shape: (60000, 28, 28, 1)\n", 240 | "60000 train samples\n", 241 | "10000 test samples\n" 242 | ] 243 | } 244 | ], 245 | "source": [ 246 | "# Model / data parameters\n", 247 | "num_classes = 10\n", 248 | "input_shape = (28, 28, 1)\n", 249 | "\n", 250 | "# the data, split between train and test sets\n", 251 | "(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()\n", 252 | "\n", 253 | "# Scale images to the [0, 1] range\n", 254 | "x_train = x_train.astype(\"float32\") / 255\n", 255 | "x_test = x_test.astype(\"float32\") / 255\n", 256 | "# Make sure images have shape (28, 28, 1)\n", 257 | "x_train = np.expand_dims(x_train, -1)\n", 258 | "x_test = np.expand_dims(x_test, -1)\n", 259 | "print(\"x_train shape:\", x_train.shape)\n", 260 | "print(x_train.shape[0], \"train samples\")\n", 261 | "print(x_test.shape[0], \"test samples\")\n", 262 | "\n", 263 | "\n", 264 | "# convert class vectors to binary class matrices\n", 265 | "y_train = keras.utils.to_categorical(y_train, num_classes)\n", 266 | "y_test = keras.utils.to_categorical(y_test, num_classes)" 267 | ] 268 | }, 269 | { 270 | "cell_type": "markdown", 271 | "metadata": {}, 272 | "source": [ 273 | "Lets build a CNN model for training" 274 | ] 275 | }, 276 | { 277 | "cell_type": "code", 278 | "execution_count": 6, 279 | "metadata": {}, 280 | "outputs": [ 281 | { 282 | "name": "stdout", 283 | "output_type": "stream", 284 | "text": [ 285 | "Model: \"sequential\"\n", 286 | "_________________________________________________________________\n", 287 | "Layer (type) Output Shape Param # \n", 288 | "=================================================================\n", 289 | "conv2d (Conv2D) (None, 26, 26, 32) 320 \n", 290 | "_________________________________________________________________\n", 291 | "max_pooling2d (MaxPooling2D) (None, 13, 13, 32) 0 \n", 292 | "_________________________________________________________________\n", 293 | "conv2d_1 (Conv2D) (None, 11, 11, 64) 18496 \n", 294 | "_________________________________________________________________\n", 295 | "max_pooling2d_1 (MaxPooling2 (None, 5, 5, 64) 0 \n", 296 | "_________________________________________________________________\n", 297 | "flatten (Flatten) (None, 1600) 0 \n", 298 | "_________________________________________________________________\n", 299 | "dropout (Dropout) (None, 1600) 0 \n", 300 | "_________________________________________________________________\n", 301 | "dense (Dense) (None, 10) 16010 \n", 302 | "=================================================================\n", 303 | "Total params: 34,826\n", 304 | "Trainable params: 34,826\n", 305 | "Non-trainable params: 0\n", 306 | "_________________________________________________________________\n" 307 | ] 308 | }, 309 | { 310 | "name": "stderr", 311 | "output_type": 
"stream", 312 | "text": [ 313 | "2021-11-14 19:26:48.571004: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory\n", 314 | "2021-11-14 19:26:48.571058: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)\n", 315 | "2021-11-14 19:26:48.571080: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (rlxlgt66-b87c796c-vfrc5): /proc/driver/nvidia/version does not exist\n", 316 | "2021-11-14 19:26:48.571417: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\n", 317 | "To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n" 318 | ] 319 | } 320 | ], 321 | "source": [ 322 | "model = keras.Sequential(\n", 323 | " [\n", 324 | " keras.Input(shape=input_shape),\n", 325 | " layers.Conv2D(32, kernel_size=(3, 3), activation=\"relu\"),\n", 326 | " layers.MaxPooling2D(pool_size=(2, 2)),\n", 327 | " layers.Conv2D(64, kernel_size=(3, 3), activation=\"relu\"),\n", 328 | " layers.MaxPooling2D(pool_size=(2, 2)),\n", 329 | " layers.Flatten(),\n", 330 | " layers.Dropout(0.5),\n", 331 | " layers.Dense(num_classes, activation=\"softmax\"),\n", 332 | " ]\n", 333 | ")\n", 334 | "\n", 335 | "model.summary()" 336 | ] 337 | }, 338 | { 339 | "cell_type": "markdown", 340 | "metadata": {}, 341 | "source": [ 342 | "compiling the model and performing the fit" 343 | ] 344 | }, 345 | { 346 | "cell_type": "code", 347 | "execution_count": 7, 348 | "metadata": {}, 349 | "outputs": [ 350 | { 351 | "name": "stderr", 352 | "output_type": "stream", 353 | "text": [ 354 | "2021-11-14 19:26:48.771078: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)\n" 355 | ] 356 | }, 357 | { 358 | "name": "stdout", 359 | "output_type": "stream", 360 | "text": [ 361 | "Epoch 1/5\n", 362 | "422/422 [==============================] - 20s 47ms/step - loss: 0.3714 - accuracy: 0.8880 - val_loss: 0.0868 - val_accuracy: 0.9757\n", 363 | "Epoch 2/5\n", 364 | "422/422 [==============================] - 19s 46ms/step - loss: 0.1128 - accuracy: 0.9652 - val_loss: 0.0642 - val_accuracy: 0.9828\n", 365 | "Epoch 3/5\n", 366 | "422/422 [==============================] - 19s 46ms/step - loss: 0.0857 - accuracy: 0.9734 - val_loss: 0.0482 - val_accuracy: 0.9873\n", 367 | "Epoch 4/5\n", 368 | "422/422 [==============================] - 19s 45ms/step - loss: 0.0733 - accuracy: 0.9776 - val_loss: 0.0437 - val_accuracy: 0.9888\n", 369 | "Epoch 5/5\n", 370 | "422/422 [==============================] - 19s 46ms/step - loss: 0.0646 - accuracy: 0.9798 - val_loss: 0.0402 - val_accuracy: 0.9885\n" 371 | ] 372 | }, 373 | { 374 | "name": "stderr", 375 | "output_type": "stream", 376 | "text": [ 377 | "2021-11-14 19:28:26.467713: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.\n" 378 | ] 379 | }, 380 | { 381 | "name": "stdout", 382 | "output_type": "stream", 383 | "text": [ 384 | "INFO:tensorflow:Assets written to: ./keras-model/assets\n", 385 | "Model: \"sequential\"\n", 386 | "_________________________________________________________________\n", 387 | 
"Layer (type) Output Shape Param # \n", 388 | "=================================================================\n", 389 | "conv2d (Conv2D) (None, 26, 26, 32) 320 \n", 390 | "_________________________________________________________________\n", 391 | "max_pooling2d (MaxPooling2D) (None, 13, 13, 32) 0 \n", 392 | "_________________________________________________________________\n", 393 | "conv2d_1 (Conv2D) (None, 11, 11, 64) 18496 \n", 394 | "_________________________________________________________________\n", 395 | "max_pooling2d_1 (MaxPooling2 (None, 5, 5, 64) 0 \n", 396 | "_________________________________________________________________\n", 397 | "flatten (Flatten) (None, 1600) 0 \n", 398 | "_________________________________________________________________\n", 399 | "dropout (Dropout) (None, 1600) 0 \n", 400 | "_________________________________________________________________\n", 401 | "dense (Dense) (None, 10) 16010 \n", 402 | "=================================================================\n", 403 | "Total params: 34,826\n", 404 | "Trainable params: 34,826\n", 405 | "Non-trainable params: 0\n", 406 | "_________________________________________________________________\n" 407 | ] 408 | } 409 | ], 410 | "source": [ 411 | "# instantiate the python inference model wrapper for the server\n", 412 | "model_wrapper = ModelWrapper()\n", 413 | "\n", 414 | "batch_size = 128\n", 415 | "epochs = 5\n", 416 | "model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"])\n", 417 | "history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)\n", 418 | "\n", 419 | "# checkpointing and logging the model in mlflow\n", 420 | "artifact_path = './keras-model'\n", 421 | "model.save(artifact_path)\n", 422 | "model_artifacts = {\"model_path\" : artifact_path}\n", 423 | "env = mlflow.tensorflow.get_default_conda_env()\n", 424 | "with mlflow.start_run():\n", 425 | " mlflow.log_param(\"model_summary\",model.summary())\n", 426 | " mlflow.log_param(\"epochs\",epochs)\n", 427 | " mlflow.log_param(\"batch_size\",batch_size)\n", 428 | " mlflow.log_param(\"training_history\",history)\n", 429 | " mlflow.pyfunc.log_model(\"keras_model\", python_model=model_wrapper, artifacts=model_artifacts, conda_env=env)" 430 | ] 431 | }, 432 | { 433 | "cell_type": "markdown", 434 | "metadata": {}, 435 | "source": [ 436 | "## 4. Deploying the model\n", 437 | "The above code logs a model in the experiments tab. For more info please refer [here](https://rocketml.gitbook.io/rocketml-user-guide/experiments). 
\n", 438 | "\n", 439 | "### 4.1 Find experiment in experiment list and click on it\n", 440 | "![experiments_list](https://github.com/rocketmlhq/sciml/raw/e8abbef269c5bee9d2b69398495fc5ced7457708/03_Deployment/experiments_list.png)\n", 441 | "\n", 442 | "### 4.2 Find run in runs list and click on it\n", 443 | "![runs_list](https://github.com/rocketmlhq/sciml/raw/e8abbef269c5bee9d2b69398495fc5ced7457708/03_Deployment/runs_list.png)\n", 444 | "\n", 445 | "### 4.3 Get run details and click on artifacts\n", 446 | "![run_details](https://github.com/rocketmlhq/sciml/raw/e8abbef269c5bee9d2b69398495fc5ced7457708/03_Deployment/run_details.png)\n", 447 | "\n", 448 | "### 4.4 Check different files logged as artifacts\n", 449 | "![artifacts](https://github.com/rocketmlhq/sciml/raw/e8abbef269c5bee9d2b69398495fc5ced7457708/03_Deployment/artifacts.png)\n", 450 | "\n", 451 | "- An MLflow Model is a standard format for packaging machine learning models that can be used in a variety of downstream tools [More Details](https://www.mlflow.org/docs/latest/models.html#storage-format)\n", 452 | "- ModelWrapper() object is saved as pkl file\n", 453 | "- conda.yaml and requirements.txt file are used to manage Python environment\n", 454 | "- Numpy file is saved in artifacts folder within the main folder (np_model)\n", 455 | "\n", 456 | "### 4.5 Deploy ML model as a REST API service\n", 457 | "\n", 458 | "Click on **Convert To Model** and fill the form. **Note: For deploying the Keras model, please select at least 4096MB for max memory!**\n", 459 | "\n", 460 | "![model_deployment](https://github.com/rocketmlhq/sciml/raw/cd1455b1e3bc09e5d0e847af87bd5ac1775add88/03_Deployment/model_deployment_keras_4GB.png)\n", 461 | "\n", 462 | "### 4.6 Go to models tab and wait until the model turns to **ON** state\n", 463 | "![model_list](https://github.com/rocketmlhq/sciml/raw/e8abbef269c5bee9d2b69398495fc5ced7457708/03_Deployment/model_list.png)\n", 464 | "\n", 465 | "## 5. Use the Endpoint and Query from the server\n", 466 | "\n", 467 | "There are two methods to perform query... The first is using `requests` library and the other using `curl` shell command." 468 | ] 469 | }, 470 | { 471 | "cell_type": "code", 472 | "execution_count": 10, 473 | "metadata": {}, 474 | "outputs": [ 475 | { 476 | "name": "stdout", 477 | "output_type": "stream", 478 | "text": [ 479 | "[[5.4550702e-03 4.0400081e-04 3.8316324e-02 2.7348047e-02 4.0564043e-03\n", 480 | " 7.5183250e-02 7.6675541e-03 5.5649033e-04 8.3789396e-01 3.1188931e-03]]\n", 481 | "[[2.9919872e-03 1.2010236e-03 7.7142179e-02 3.4051750e-02 1.2698015e-02\n", 482 | " 9.8876357e-02 2.0492699e-03 1.7935239e-02 7.4503058e-01 8.0236373e-03]\n", 483 | " [9.3202889e-03 2.1819610e-04 4.3830331e-02 3.8132112e-02 5.9867408e-03\n", 484 | " 1.1520740e-01 3.1106498e-03 5.8234786e-04 7.7803266e-01 5.5792583e-03]]\n" 485 | ] 486 | } 487 | ], 488 | "source": [ 489 | "import requests\n", 490 | "import json\n", 491 | "\n", 492 | "################################################################################\n", 493 | "# *** SET MODEL URL HERE BEFORE RUNNING THIS CELL (instructions above) ***\n", 494 | "# Example: https://.sciml.rocketml.net/invocations\n", 495 | "url = \"\"\n", 496 | "################################################################################\n", 497 | "\n", 498 | "if not url:\n", 499 | " raise ValueError('Model URL not set! 
Please read instructions on how to deploy model, set the correct URL, and try again.')\n", 500 | "\n", 501 | "headers = {\"Content-Type\":\"text/csv\"}\n", 502 | "\n", 503 | "# First case, run inference on single data point\n", 504 | "np_array = np.random.rand(28,28,1).tolist()\n", 505 | "json_data = json.dumps(np_array)\n", 506 | "\n", 507 | "if url:\n", 508 | " response = requests.post(url,data=json_data,headers=headers)\n", 509 | " if response.status_code == 200:\n", 510 | " output = np.array(json.loads(response.json())).astype(np.float32)\n", 511 | " print(output)\n", 512 | " else:\n", 513 | " print(response.status_code)\n", 514 | " print(\"REST API deployment is in progress -- please try again in a few minutes!\")\n", 515 | "else:\n", 516 | " print(\"Make sure that the model is in ON state. Copy the Endpoint\")\n", 517 | "\n", 518 | "# Second case, run inference on multiple data points\n", 519 | "np_array = np.random.rand(2,28,28,1).tolist()\n", 520 | "json_data = json.dumps(np_array)\n", 521 | "\n", 522 | "if url:\n", 523 | " response = requests.post(url,data=json_data,headers=headers)\n", 524 | " if response.status_code == 200:\n", 525 | " output = np.array(json.loads(response.json())).astype(np.float32)\n", 526 | " print(output)\n", 527 | " else:\n", 528 | " print(response.status_code)\n", 529 | " print(\"REST API deployment is in progress -- please try again in a few minutes!\")\n", 530 | "else:\n", 531 | " print(\"Make sure that the model is in ON state. Copy the Endpoint\")\n" 532 | ] 533 | } 534 | ], 535 | "metadata": { 536 | "kernelspec": { 537 | "display_name": "Python 3 (ipykernel)", 538 | "language": "python", 539 | "name": "python3" 540 | }, 541 | "language_info": { 542 | "codemirror_mode": { 543 | "name": "ipython", 544 | "version": 3 545 | }, 546 | "file_extension": ".py", 547 | "mimetype": "text/x-python", 548 | "name": "python", 549 | "nbconvert_exporter": "python", 550 | "pygments_lexer": "ipython3", 551 | "version": "3.7.11" 552 | } 553 | }, 554 | "nbformat": 4, 555 | "nbformat_minor": 4 556 | } 557 | -------------------------------------------------------------------------------- /03_Deployment/02 Deploy Tensorflow.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Model Deployment using Tensorflow" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": { 13 | "tags": [] 14 | }, 15 | "source": [ 16 | "## 1. Introduction\n", 17 | "In this workbook, we will train a simple Tensorflow model and deploy that for inference. \n", 18 | "In this example, we use TensorFlow's [premade estimator iris data example](https://www.tensorflow.org/tutorials/estimator/premade) and add MLflow tracking.\n", 19 | "This example trains a `tf.estimator.DNNClassifier` on the [iris dataset](https://archive.ics.uci.edu/ml/datasets/iris) and predicts on a validation set.\n", 20 | "We then demonstrate how to load the saved model back as a generic `mlflow.pyfunc`, allowing us to make predictions.\n", 21 | "\n", 22 | "\n", 23 | "## 2. Imports and Dependencies.\n", 24 | "The few packages needed are loaded next. Particularly, `tensorflow`, `mlflow` will be majorly used in this tutorial. `requests` package will be used for performing query. `json` is used to post and get response from the server." 
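Looking back at the Keras endpoint query above before the TensorFlow example continues: instead of random noise, a real MNIST test digit can be posted and the returned softmax probabilities decoded with `argmax`. This is a hedged sketch that assumes the same endpoint URL, `text/csv` header, and response format as that query cell, and that the model is in the **ON** state.

```python
import json
import numpy as np
import requests
from tensorflow import keras

url = ""   # paste the Keras model Endpoint here, as in the query cell above
if not url:
    raise ValueError("Set the model URL first (see the deployment steps above).")

# One real test digit, preprocessed the same way as in the training cell.
(_, _), (x_test, y_test) = keras.datasets.mnist.load_data()
image = np.expand_dims(x_test[0].astype("float32") / 255, -1)   # shape (28, 28, 1)

response = requests.post(url, data=json.dumps(image.tolist()),
                         headers={"Content-Type": "text/csv"})
if response.status_code == 200:
    probabilities = np.array(json.loads(response.json()), dtype=np.float32)
    print("predicted digit:", int(probabilities.argmax(axis=-1)[0]),
          "| true label:", int(y_test[0]))
else:
    print(response.status_code, "-- deployment may still be in progress, try again shortly")
```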
25 | ] 26 | }, 27 | { 28 | "cell_type": "code", 29 | "execution_count": 1, 30 | "metadata": {}, 31 | "outputs": [ 32 | { 33 | "name": "stderr", 34 | "output_type": "stream", 35 | "text": [ 36 | "2021-11-14 20:53:26.210552: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\n", 37 | "2021-11-14 20:53:26.210591: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\n" 38 | ] 39 | } 40 | ], 41 | "source": [ 42 | "import os\n", 43 | "import sys\n", 44 | "import mlflow\n", 45 | "import numpy as np\n", 46 | "import mlflow.tensorflow\n", 47 | "from mlflow import pyfunc\n", 48 | "import tensorflow as tf\n", 49 | "import pandas as pd\n", 50 | "import tempfile\n", 51 | "import shutil\n", 52 | "\n", 53 | "# Suppress warnings\n", 54 | "import warnings\n", 55 | "warnings.filterwarnings(\"ignore\")" 56 | ] 57 | }, 58 | { 59 | "cell_type": "markdown", 60 | "metadata": {}, 61 | "source": [ 62 | "## MLflow for experiment tracking and model deployment\n", 63 | "\n", 64 | "MLflow is an open source platform for managing the end-to-end machine learning lifecycle. It tackles four primary functions:\n", 65 | "\n", 66 | "- Tracking experiments to record and compare parameters and results (MLflow Tracking).\n", 67 | "- Managing and deploying models from a variety of ML libraries to a variety of model serving and inference platforms (MLflow Models).\n", 68 | "- Providing a central model store to collaboratively manage the full lifecycle of an MLflow Model, including model versioning, stage transitions, and annotations (MLflow Model Registry).\n", 69 | "\n", 70 | "More information [here](https://www.mlflow.org/docs/latest/index.html#)\n", 71 | "\n", 72 | "\n", 73 | "\n", 74 | "![image.png](https://www.mlflow.org/docs/latest/_images/scenario_4.png)\n", 75 | "\n", 76 | "- localhost maps to the server on which the current notebook is running\n", 77 | "\n", 78 | "- Tracking server maps to the server at environment variable `TRACKING_URL` that can be printed using `os.environ.get(\"TRACKING_URL\")`\n", 79 | "\n", 80 | "- Create an mlflow client that communicates with the tracking server" 81 | ] 82 | }, 83 | { 84 | "cell_type": "code", 85 | "execution_count": 2, 86 | "metadata": {}, 87 | "outputs": [], 88 | "source": [ 89 | "from mlflow import pyfunc\n", 90 | "\n", 91 | "# Setting a tracking uri to log the mlflow logs in a particular location tracked by \n", 92 | "from mlflow.tracking import MlflowClient\n", 93 | "tracking_uri = os.environ.get(\"TRACKING_URL\")\n", 94 | "client = MlflowClient(tracking_uri=tracking_uri)\n", 95 | "mlflow.set_tracking_uri(tracking_uri)" 96 | ] 97 | }, 98 | { 99 | "cell_type": "markdown", 100 | "metadata": {}, 101 | "source": [ 102 | "## Create an experiment in mlflow database using mlflow client\n", 103 | "\n", 104 | "- Get the list of all the experiments (Click on **Experiments** tab on the sidebar to see the list)\n", 105 | "- Create a new experiment named *numpy_deployment* if it doesn't exist\n", 106 | "- Set *numpy_deployment* as the new experiment under which different **runs** are tracked" 107 | ] 108 | }, 109 | { 110 | "cell_type": "markdown", 111 | "metadata": {}, 112 | "source": [ 113 | "## MLflow Entity Hierarchy\n", 114 | "\n", 115 | "- Experiment 1\n", 116 | " - Run 1\n", 117 | " - Parameters\n", 118 | " - Metrics\n", 119 | " - Artifacts\n", 120 | " - 
Folder 1\n", 121 | " - File 1\n", 122 | " - File 2\n", 123 | " - Folder 2 \n", 124 | " - Run 2\n", 125 | " - Run 3\n", 126 | "\n", 127 | "- Experiment 2\n", 128 | "- Experiment 3 " 129 | ] 130 | }, 131 | { 132 | "cell_type": "code", 133 | "execution_count": 3, 134 | "metadata": {}, 135 | "outputs": [], 136 | "source": [ 137 | "# Setting a tracking project experiment name to keep the experiments organized\n", 138 | "experiments = client.list_experiments()\n", 139 | "experiment_names = []\n", 140 | "for exp in experiments:\n", 141 | " experiment_names.append(exp.name)\n", 142 | "experiment_name = \"tf_deployment\"\n", 143 | "if experiment_name not in experiment_names:\n", 144 | " mlflow.create_experiment(experiment_name)\n", 145 | "mlflow.set_experiment(experiment_name)\n" 146 | ] 147 | }, 148 | { 149 | "cell_type": "markdown", 150 | "metadata": { 151 | "tags": [] 152 | }, 153 | "source": [ 154 | "## Python Class for inference\n", 155 | "\n", 156 | "- ModelWrapper is derived from mlflow.pyfunc.PythonModel [more info](https://www.mlflow.org/docs/latest/python_api/mlflow.pyfunc.html)\n", 157 | "- load_context() member function is used to load the model. In this case, it loads a tensorflow model weights and estimator\n", 158 | "- predict member function takes a. input and outputs classification\n", 159 | "- An object of this class will be saved as a pickle file in blob storage" 160 | ] 161 | }, 162 | { 163 | "cell_type": "code", 164 | "execution_count": 4, 165 | "metadata": {}, 166 | "outputs": [], 167 | "source": [ 168 | "## Model Wrapper that takes \n", 169 | "class ModelWrapper(mlflow.pyfunc.PythonModel):\n", 170 | " def load_context(self,context):\n", 171 | " import numpy as np\n", 172 | " import tensorflow as tf\n", 173 | " self.model = tf.saved_model.load(context.artifacts['model_path'])\n", 174 | " print(\"Model initialized\")\n", 175 | " \n", 176 | " def predict(self, context, dfeval):\n", 177 | " predictions = self.model.signatures[\"predict\"](dfeval)\n", 178 | " return predictions\n" 179 | ] 180 | }, 181 | { 182 | "cell_type": "markdown", 183 | "metadata": { 184 | "tags": [] 185 | }, 186 | "source": [ 187 | "## Register a model using mlflow\n", 188 | "\n", 189 | "- Log user-defined parameters in a remote database through a remote server\n", 190 | "- Create a model_wrapper object using ModelWrapper() class in the above cell\n", 191 | "- Create a default conda environment that need to be installed on the Docker conatiner that serves a REST API\n", 192 | "- Save the model object as a pickle file and conda environment as artifacts (files) in S3 or Blob Storage" 193 | ] 194 | }, 195 | { 196 | "cell_type": "markdown", 197 | "metadata": {}, 198 | "source": [ 199 | "# 3. 
Some utility functions" 200 | ] 201 | }, 202 | { 203 | "cell_type": "code", 204 | "execution_count": 5, 205 | "metadata": { 206 | "tags": [] 207 | }, 208 | "outputs": [], 209 | "source": [ 210 | "# Some utility functions to load the data and create training functions\n", 211 | "\n", 212 | "def load_data(y_name=\"Species\"):\n", 213 | " \"\"\"Returns the iris dataset as (train_x, train_y), (test_x, test_y).\"\"\"\n", 214 | " train_path = tf.keras.utils.get_file(TRAIN_URL.split(\"/\")[-1], TRAIN_URL)\n", 215 | " test_path = tf.keras.utils.get_file(TEST_URL.split(\"/\")[-1], TEST_URL)\n", 216 | "\n", 217 | " train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0)\n", 218 | " train_x, train_y = train, train.pop(y_name)\n", 219 | "\n", 220 | " test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)\n", 221 | " test_x, test_y = test, test.pop(y_name)\n", 222 | "\n", 223 | " return (train_x, train_y), (test_x, test_y)\n", 224 | "\n", 225 | "\n", 226 | "def train_input_fn(features, labels, batch_size):\n", 227 | " \"\"\"An input function for training\"\"\"\n", 228 | " # Convert the inputs to a Dataset.\n", 229 | " dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))\n", 230 | "\n", 231 | " # Shuffle, repeat, and batch the examples.\n", 232 | " dataset = dataset.shuffle(1000).repeat().batch(batch_size)\n", 233 | "\n", 234 | " # Return the dataset.\n", 235 | " return dataset\n", 236 | "\n", 237 | "\n", 238 | "def eval_input_fn(features, labels, batch_size):\n", 239 | " \"\"\"An input function for evaluation or prediction\"\"\"\n", 240 | " features = dict(features)\n", 241 | " if labels is None:\n", 242 | " # No labels, use only features.\n", 243 | " inputs = features\n", 244 | " else:\n", 245 | " inputs = (features, labels)\n", 246 | "\n", 247 | " # Convert the inputs to a Dataset.\n", 248 | " dataset = tf.data.Dataset.from_tensor_slices(inputs)\n", 249 | "\n", 250 | " # Batch the examples\n", 251 | " assert batch_size is not None, \"batch_size must not be None\"\n", 252 | " dataset = dataset.batch(batch_size)\n", 253 | "\n", 254 | " # Return the dataset.\n", 255 | " return dataset\n" 256 | ] 257 | }, 258 | { 259 | "cell_type": "code", 260 | "execution_count": 6, 261 | "metadata": {}, 262 | "outputs": [], 263 | "source": [ 264 | "batch_size = 100\n", 265 | "train_steps = 1000\n", 266 | "TRAIN_URL = \"http://download.tensorflow.org/data/iris_training.csv\"\n", 267 | "TEST_URL = \"http://download.tensorflow.org/data/iris_test.csv\"\n", 268 | "\n", 269 | "CSV_COLUMN_NAMES = [\"SepalLength\", \"SepalWidth\", \"PetalLength\", \"PetalWidth\", \"Species\"]\n", 270 | "SPECIES = [\"Setosa\", \"Versicolor\", \"Virginica\"]\n", 271 | "\n", 272 | "# Fetch the data\n", 273 | "(train_x, train_y), (test_x, test_y) = load_data()\n", 274 | "\n", 275 | "# Feature columns describe how to use the input.\n", 276 | "my_feature_columns = []\n", 277 | "for key in train_x.keys():\n", 278 | " my_feature_columns.append(tf.feature_column.numeric_column(key=key))" 279 | ] 280 | }, 281 | { 282 | "cell_type": "code", 283 | "execution_count": 7, 284 | "metadata": {}, 285 | "outputs": [ 286 | { 287 | "name": "stdout", 288 | "output_type": "stream", 289 | "text": [ 290 | "INFO:tensorflow:Using default config.\n", 291 | "WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpjrockdiq\n", 292 | "INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmpjrockdiq', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, 
'_session_config': allow_soft_placement: true\n", 293 | "graph_options {\n", 294 | " rewrite_options {\n", 295 | " meta_optimizer_iterations: ONE\n", 296 | " }\n", 297 | "}\n", 298 | ", '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}\n" 299 | ] 300 | }, 301 | { 302 | "name": "stderr", 303 | "output_type": "stream", 304 | "text": [ 305 | "2021-11-14 20:53:28.242913: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory\n", 306 | "2021-11-14 20:53:28.242948: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)\n", 307 | "2021-11-14 20:53:28.242968: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (knrphwgg-6889997f5d-w24t4): /proc/driver/nvidia/version does not exist\n", 308 | "2021-11-14 20:53:28.243217: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA\n", 309 | "To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n" 310 | ] 311 | } 312 | ], 313 | "source": [ 314 | "# Two hidden layers of 10 nodes each.\n", 315 | "hidden_units = [10, 10]\n", 316 | "\n", 317 | "# Build 2 hidden layer DNN with 10, 10 units respectively.\n", 318 | "classifier = tf.estimator.DNNClassifier(\n", 319 | " feature_columns=my_feature_columns,\n", 320 | " hidden_units=hidden_units,\n", 321 | " # The model must choose between 3 classes.\n", 322 | " n_classes=3,\n", 323 | ")" 324 | ] 325 | }, 326 | { 327 | "cell_type": "code", 328 | "execution_count": 8, 329 | "metadata": {}, 330 | "outputs": [ 331 | { 332 | "name": "stdout", 333 | "output_type": "stream", 334 | "text": [ 335 | "WARNING:tensorflow:From /miniconda/lib/python3.7/site-packages/tensorflow/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.\n", 336 | "Instructions for updating:\n", 337 | "Use Variable.read_value. 
Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.\n", 338 | "INFO:tensorflow:Calling model_fn.\n", 339 | "WARNING:tensorflow:From /miniconda/lib/python3.7/site-packages/keras/optimizer_v2/adagrad.py:84: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\n", 340 | "Instructions for updating:\n", 341 | "Call initializer instance with the dtype argument instead of passing it to the constructor\n", 342 | "INFO:tensorflow:Done calling model_fn.\n", 343 | "INFO:tensorflow:Create CheckpointSaverHook.\n", 344 | "INFO:tensorflow:Graph was finalized.\n", 345 | "INFO:tensorflow:Running local_init_op.\n", 346 | "INFO:tensorflow:Done running local_init_op.\n", 347 | "INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...\n", 348 | "INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpjrockdiq/model.ckpt.\n", 349 | "INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...\n", 350 | "INFO:tensorflow:loss = 2.0530477, step = 0\n", 351 | "INFO:tensorflow:global_step/sec: 737.546\n", 352 | "INFO:tensorflow:loss = 1.6492424, step = 100 (0.136 sec)\n", 353 | "INFO:tensorflow:global_step/sec: 1106.08\n", 354 | "INFO:tensorflow:loss = 1.3105888, step = 200 (0.090 sec)\n", 355 | "INFO:tensorflow:global_step/sec: 1116.78\n", 356 | "INFO:tensorflow:loss = 1.3175418, step = 300 (0.090 sec)\n", 357 | "INFO:tensorflow:global_step/sec: 1099.96\n", 358 | "INFO:tensorflow:loss = 1.2445583, step = 400 (0.091 sec)\n", 359 | "INFO:tensorflow:global_step/sec: 1067.61\n", 360 | "INFO:tensorflow:loss = 1.0884886, step = 500 (0.094 sec)\n", 361 | "INFO:tensorflow:global_step/sec: 1101.81\n", 362 | "INFO:tensorflow:loss = 1.1102866, step = 600 (0.091 sec)\n", 363 | "INFO:tensorflow:global_step/sec: 1103.82\n", 364 | "INFO:tensorflow:loss = 1.1515474, step = 700 (0.091 sec)\n", 365 | "INFO:tensorflow:global_step/sec: 1110.28\n", 366 | "INFO:tensorflow:loss = 1.2511487, step = 800 (0.090 sec)\n", 367 | "INFO:tensorflow:global_step/sec: 1113.89\n", 368 | "INFO:tensorflow:loss = 1.0435706, step = 900 (0.090 sec)\n", 369 | "INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 1000...\n", 370 | "INFO:tensorflow:Saving checkpoints for 1000 into /tmp/tmpjrockdiq/model.ckpt.\n", 371 | "INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 1000...\n", 372 | "INFO:tensorflow:Loss for final step: 0.9864547.\n" 373 | ] 374 | } 375 | ], 376 | "source": [ 377 | "# Train the Model.\n", 378 | "estimator = classifier.train(\n", 379 | " input_fn=lambda: train_input_fn(train_x, train_y, batch_size),\n", 380 | " steps=train_steps,\n", 381 | ")" 382 | ] 383 | }, 384 | { 385 | "cell_type": "code", 386 | "execution_count": 9, 387 | "metadata": {}, 388 | "outputs": [ 389 | { 390 | "name": "stdout", 391 | "output_type": "stream", 392 | "text": [ 393 | "INFO:tensorflow:Calling model_fn.\n", 394 | "INFO:tensorflow:Done calling model_fn.\n", 395 | "INFO:tensorflow:Starting evaluation at 2021-11-14T20:53:30\n", 396 | "INFO:tensorflow:Graph was finalized.\n", 397 | "INFO:tensorflow:Restoring parameters from /tmp/tmpjrockdiq/model.ckpt-1000\n", 398 | "INFO:tensorflow:Running local_init_op.\n", 399 | "INFO:tensorflow:Done running local_init_op.\n", 400 | "INFO:tensorflow:Inference Time : 0.17968s\n", 401 | "INFO:tensorflow:Finished evaluation at 2021-11-14-20:53:30\n", 402 | "INFO:tensorflow:Saving dict for global step 1000: accuracy = 0.53333336, average_loss = 
1.2849653, global_step = 1000, loss = 1.2849653\n", 403 | "INFO:tensorflow:Saving 'checkpoint_path' summary for global step 1000: /tmp/tmpjrockdiq/model.ckpt-1000\n", 404 | "\n", 405 | "Test set accuracy: 0.533\n", 406 | "\n" 407 | ] 408 | } 409 | ], 410 | "source": [ 411 | "# Evaluate the model.\n", 412 | "eval_result = classifier.evaluate(\n", 413 | " input_fn=lambda: eval_input_fn(test_x, test_y, batch_size)\n", 414 | ")\n", 415 | "\n", 416 | "print(\"\\nTest set accuracy: {accuracy:0.3f}\\n\".format(**eval_result))\n", 417 | "\n", 418 | "# Generate predictions from the model\n", 419 | "expected = [\"Setosa\", \"Versicolor\", \"Virginica\"]\n", 420 | "predict_x = {\n", 421 | " \"SepalLength\": [5.1, 5.9, 6.9],\n", 422 | " \"SepalWidth\": [3.3, 3.0, 3.1],\n", 423 | " \"PetalLength\": [1.7, 4.2, 5.4],\n", 424 | " \"PetalWidth\": [0.5, 1.5, 2.1],\n", 425 | "}\n", 426 | "\n", 427 | "predictions = classifier.predict(\n", 428 | " input_fn=lambda: eval_input_fn(predict_x, labels=None, batch_size=batch_size)\n", 429 | ")\n", 430 | "\n", 431 | "old_predictions = []\n", 432 | "template = '\\nPrediction is \"{}\" ({:.1f}%), expected \"{}\"'" 433 | ] 434 | }, 435 | { 436 | "cell_type": "code", 437 | "execution_count": 10, 438 | "metadata": {}, 439 | "outputs": [ 440 | { 441 | "name": "stdout", 442 | "output_type": "stream", 443 | "text": [ 444 | "INFO:tensorflow:Calling model_fn.\n", 445 | "INFO:tensorflow:Done calling model_fn.\n", 446 | "INFO:tensorflow:Graph was finalized.\n", 447 | "INFO:tensorflow:Restoring parameters from /tmp/tmpjrockdiq/model.ckpt-1000\n", 448 | "INFO:tensorflow:Running local_init_op.\n", 449 | "INFO:tensorflow:Done running local_init_op.\n", 450 | "\n", 451 | "Prediction is \"Setosa\" (45.9%), expected \"Setosa\"\n", 452 | "\n", 453 | "Prediction is \"Virginica\" (55.6%), expected \"Versicolor\"\n", 454 | "\n", 455 | "Prediction is \"Virginica\" (65.1%), expected \"Virginica\"\n" 456 | ] 457 | } 458 | ], 459 | "source": [ 460 | "for pred_dict, expec in zip(predictions, expected):\n", 461 | " class_id = pred_dict[\"class_ids\"][0]\n", 462 | " probability = pred_dict[\"probabilities\"][class_id]\n", 463 | "\n", 464 | " print(template.format(SPECIES[class_id], 100 * probability, expec))\n", 465 | "\n", 466 | " old_predictions.append(SPECIES[class_id])\n", 467 | "\n", 468 | "# Creating output tf.Variables to specify the output of the saved model.\n", 469 | "feat_specifications = {\n", 470 | " \"SepalLength\": tf.Variable([], dtype=tf.float64, name=\"SepalLength\"),\n", 471 | " \"SepalWidth\": tf.Variable([], dtype=tf.float64, name=\"SepalWidth\"),\n", 472 | " \"PetalLength\": tf.Variable([], dtype=tf.float64, name=\"PetalLength\"),\n", 473 | " \"PetalWidth\": tf.Variable([], dtype=tf.float64, name=\"PetalWidth\"),\n", 474 | "}" 475 | ] 476 | }, 477 | { 478 | "cell_type": "code", 479 | "execution_count": null, 480 | "metadata": {}, 481 | "outputs": [], 482 | "source": [] 483 | }, 484 | { 485 | "cell_type": "code", 486 | "execution_count": 11, 487 | "metadata": {}, 488 | "outputs": [ 489 | { 490 | "name": "stdout", 491 | "output_type": "stream", 492 | "text": [ 493 | "INFO:tensorflow:Calling model_fn.\n", 494 | "INFO:tensorflow:Done calling model_fn.\n", 495 | "WARNING:tensorflow:From /miniconda/lib/python3.7/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.\n", 496 | "Instructions for updating:\n", 497 | "This function will only be 
available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.\n", 498 | "INFO:tensorflow:Signatures INCLUDED in export for Classify: None\n", 499 | "INFO:tensorflow:Signatures INCLUDED in export for Regress: None\n", 500 | "INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']\n", 501 | "INFO:tensorflow:Signatures INCLUDED in export for Train: None\n", 502 | "INFO:tensorflow:Signatures INCLUDED in export for Eval: None\n", 503 | "INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:\n", 504 | "INFO:tensorflow:'serving_default' : Classification input must be a single string Tensor; got {'SepalLength': , 'SepalWidth': , 'PetalLength': , 'PetalWidth': }\n", 505 | "INFO:tensorflow:'classification' : Classification input must be a single string Tensor; got {'SepalLength': , 'SepalWidth': , 'PetalLength': , 'PetalWidth': }\n", 506 | "WARNING:tensorflow:Export includes no default signature!\n", 507 | "INFO:tensorflow:Restoring parameters from /tmp/tmpjrockdiq/model.ckpt-1000\n", 508 | "INFO:tensorflow:Assets added to graph.\n", 509 | "INFO:tensorflow:No assets to write.\n", 510 | "INFO:tensorflow:SavedModel written to: /tmp/temp-1636923213/saved_model.pb\n" 511 | ] 512 | } 513 | ], 514 | "source": [ 515 | "# checkpointing and logging the model in mlflow\n", 516 | "receiver_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(feat_specifications)\n", 517 | "artifact_path = './tf-model/'\n", 518 | "shutil.rmtree(artifact_path)\n", 519 | "saved_estimator_path = classifier.export_saved_model('/tmp', receiver_fn).decode(\"utf-8\")\n", 520 | "shutil.move(saved_estimator_path, artifact_path)\n", 521 | "model_artifacts = {\"model_path\" : artifact_path}\n", 522 | "env = mlflow.tensorflow.get_default_conda_env()\n", 523 | "model_wrapper = ModelWrapper()\n", 524 | "with mlflow.start_run():\n", 525 | " mlflow.log_param('batch_size', batch_size)\n", 526 | " mlflow.log_param('train_steps', train_steps)\n", 527 | " mlflow.log_param('csv_column_names', CSV_COLUMN_NAMES)\n", 528 | " mlflow.log_param('species', SPECIES)\n", 529 | " mlflow.pyfunc.log_model(\"tf_model\", python_model=model_wrapper, artifacts=model_artifacts, conda_env=env)\n" 530 | ] 531 | }, 532 | { 533 | "cell_type": "code", 534 | "execution_count": 20, 535 | "metadata": {}, 536 | "outputs": [ 537 | { 538 | "data": { 539 | "text/plain": [ 540 | "'./tf-model/'" 541 | ] 542 | }, 543 | "execution_count": 20, 544 | "metadata": {}, 545 | "output_type": "execute_result" 546 | } 547 | ], 548 | "source": [ 549 | "artifact_path" 550 | ] 551 | }, 552 | { 553 | "cell_type": "markdown", 554 | "metadata": { 555 | "tags": [] 556 | }, 557 | "source": [ 558 | "## 4. Deploying the model\n", 559 | "The above code logs a model in the experiments tab. For more info please refer [here](https://rocketml.gitbook.io/rocketml-user-guide/experiments). After deploying the model, we can obtain the model url for performing query as shown below.\n", 560 | "\n", 561 | "## 5. Query from the server\n", 562 | "\n", 563 | "There are two methods to perform query... The first is using `requests` library and the other using `curl` shell command." 
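The cell below exercises the `requests` route against a local test server (`http://127.0.0.1:5000/invocations`); the `curl` route mentioned above is sketched here for completeness. This is a minimal, hedged sketch: the endpoint URL is a placeholder to be replaced with the deployed model's Endpoint (as in the earlier notebooks), and the payload simply mirrors the pandas-split JSON produced by the DataFrame used below.

```python
import pandas as pd
import requests

url = ""   # paste the deployed model Endpoint here (Models tab), as in the earlier notebooks
headers = {"Content-Type": "application/json; format=pandas-split"}

df = pd.DataFrame(
    data=[[5.1, 3.3, 1.7, 0.5]],
    columns=["SepalLength", "SepalWidth", "PetalLength", "PetalWidth"],
)
payload = df.to_json(orient="split", index=False)

# Equivalent curl invocation (run in a shell, with the same URL and payload):
#   curl -X POST "<endpoint-url>" \
#        -H 'Content-Type: application/json; format=pandas-split' \
#        -d '{"columns":["SepalLength","SepalWidth","PetalLength","PetalWidth"],"data":[[5.1,3.3,1.7,0.5]]}'

if url:
    response = requests.post(url, data=payload, headers=headers)
    print(response.status_code, response.text)
```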
564 | ] 565 | }, 566 | { 567 | "cell_type": "code", 568 | "execution_count": 22, 569 | "metadata": {}, 570 | "outputs": [ 571 | { 572 | "name": "stdout", 573 | "output_type": "stream", 574 | "text": [ 575 | " SepalLength SepalWidth PetalLength PetalWidth\n", 576 | "0 5.1 3.3 1.7 0.5\n", 577 | "1 5.9 3.0 4.2 1.5\n", 578 | "2 6.9 3.1 5.4 2.1\n", 579 | "400\n", 580 | "REST API deployment is in progress -- please try again in a few minutes!\n" 581 | ] 582 | } 583 | ], 584 | "source": [ 585 | "import requests\n", 586 | "import json\n", 587 | "\n", 588 | "url = \"http://127.0.0.1:5000/invocations\"\n", 589 | "headers = {\"Content-Type\":\"application/json; format=pandas-split\"}\n", 590 | "\n", 591 | "# First case, run inference on single data point\n", 592 | "predict_data = [[5.1, 3.3, 1.7, 0.5], [5.9, 3.0, 4.2, 1.5], [6.9, 3.1, 5.4, 2.1]]\n", 593 | "df = pd.DataFrame(\n", 594 | " data=predict_data,\n", 595 | " columns=[\"SepalLength\", \"SepalWidth\", \"PetalLength\", \"PetalWidth\"],\n", 596 | ")\n", 597 | "\n", 598 | "print(df)\n", 599 | "response = requests.post(url,data=df.to_json(orient=\"split\",index=False),headers=headers)\n", 600 | "if response.status_code == 200:\n", 601 | " output = response.json()\n", 602 | " print(response)\n", 603 | "else:\n", 604 | " print(response.status_code)\n", 605 | " print(\"REST API deployment is in progress -- please try again in a few minutes!\")" 606 | ] 607 | }, 608 | { 609 | "cell_type": "code", 610 | "execution_count": 23, 611 | "metadata": {}, 612 | "outputs": [ 613 | { 614 | "name": "stdout", 615 | "output_type": "stream", 616 | "text": [ 617 | "Traceback (most recent call last):\n", 618 | " File \"/home/ubuntu/.conda/envs/mlflow-fd702d538505a3f80cc3fa48a53c9859f694d90f/lib/python3.7/site-packages/mlflow/pyfunc/scoring_server/__init__.py\", line 303, in transformation\n", 619 | " raw_predictions = model.predict(data)\n", 620 | " File \"/home/ubuntu/.conda/envs/mlflow-fd702d538505a3f80cc3fa48a53c9859f694d90f/lib/python3.7/site-packages/mlflow/pyfunc/__init__.py\", line 608, in predict\n", 621 | " return self._model_impl.predict(data)\n", 622 | " File \"/home/ubuntu/.conda/envs/mlflow-fd702d538505a3f80cc3fa48a53c9859f694d90f/lib/python3.7/site-packages/mlflow/pyfunc/model.py\", line 296, in predict\n", 623 | " return self.python_model.predict(self.context, model_input)\n", 624 | " File \"/tmp/ipykernel_2159/3779823312.py\", line 10, in predict\n", 625 | " File \"/home/ubuntu/.conda/envs/mlflow-fd702d538505a3f80cc3fa48a53c9859f694d90f/lib/python3.7/site-packages/tensorflow/python/eager/function.py\", line 1707, in __call__\n", 626 | " return self._call_impl(args, kwargs)\n", 627 | " File \"/home/ubuntu/.conda/envs/mlflow-fd702d538505a3f80cc3fa48a53c9859f694d90f/lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py\", line 247, in _call_impl\n", 628 | " args, kwargs, cancellation_manager)\n", 629 | " File \"/home/ubuntu/.conda/envs/mlflow-fd702d538505a3f80cc3fa48a53c9859f694d90f/lib/python3.7/site-packages/tensorflow/python/eager/function.py\", line 1725, in _call_impl\n", 630 | " return self._call_with_flat_signature(args, kwargs, cancellation_manager)\n", 631 | " File \"/home/ubuntu/.conda/envs/mlflow-fd702d538505a3f80cc3fa48a53c9859f694d90f/lib/python3.7/site-packages/tensorflow/python/eager/function.py\", line 1747, in _call_with_flat_signature\n", 632 | " len(args)))\n", 633 | "TypeError: pruned(SepalLength, SepalWidth, PetalLength, PetalWidth) takes 0 positional arguments but 1 were given\n", 634 | "\n" 635 | ] 636 | } 637 | ], 
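A note on the stack trace above: `TypeError: pruned(SepalLength, SepalWidth, PetalLength, PetalWidth) takes 0 positional arguments but 1 were given` comes from the wrapper's `predict`, which passes the incoming DataFrame positionally, while a signature restored with `tf.saved_model.load` must be called with named tensors. Below is a hedged sketch of a wrapper whose `predict` matches the exported signature; the feature names and the `class_ids` output key are taken from this notebook, and it should be read as a starting point rather than a required fix.

```python
import mlflow
import tensorflow as tf

class ModelWrapper(mlflow.pyfunc.PythonModel):
    def load_context(self, context):
        import tensorflow as tf
        self.model = tf.saved_model.load(context.artifacts["model_path"])

    def predict(self, context, model_input):
        import tensorflow as tf
        # model_input arrives as a pandas DataFrame; the exported "predict" signature
        # expects one named float64 tensor per feature column, passed as keyword arguments.
        feed = {
            name: tf.constant(model_input[name].values, dtype=tf.float64)
            for name in ["SepalLength", "SepalWidth", "PetalLength", "PetalWidth"]
        }
        outputs = self.model.signatures["predict"](**feed)
        # `outputs` is a dict of tensors; class_ids holds the predicted class index per row.
        return outputs["class_ids"].numpy().tolist()
```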
638 | "source": [ 639 | "print(response.json()['stack_trace'])" 640 | ] 641 | }, 642 | { 643 | "cell_type": "code", 644 | "execution_count": 30, 645 | "metadata": {}, 646 | "outputs": [ 647 | { 648 | "ename": "ResourceNotFoundError", 649 | "evalue": "The specified blob does not exist.\nRequestId:5d7e63d1-901e-0041-1d95-d97cd0000000\nTime:2021-11-14T20:23:35.9277894Z\nErrorCode:BlobNotFound", 650 | "output_type": "error", 651 | "traceback": [ 652 | "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", 653 | "\u001b[0;31mResourceNotFoundError\u001b[0m Traceback (most recent call last)", 654 | "\u001b[0;32m/tmp/ipykernel_2159/652785607.py\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mpyfunc_model\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mpyfunc\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mload_model\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmlflow\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mget_artifact_uri\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"tf_deployment\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", 655 | "\u001b[0;32m/miniconda/lib/python3.7/site-packages/mlflow/pyfunc/__init__.py\u001b[0m in \u001b[0;36mload_model\u001b[0;34m(model_uri, suppress_warnings)\u001b[0m\n\u001b[1;32m 649\u001b[0m \u001b[0mmessages\u001b[0m \u001b[0mwill\u001b[0m \u001b[0mbe\u001b[0m \u001b[0memitted\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 650\u001b[0m \"\"\"\n\u001b[0;32m--> 651\u001b[0;31m \u001b[0mlocal_path\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0m_download_artifact_from_uri\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0martifact_uri\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mmodel_uri\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 652\u001b[0m \u001b[0mmodel_meta\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mModel\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mload\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mos\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mpath\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mjoin\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mlocal_path\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mMLMODEL_FILE_NAME\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 653\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n", 656 | "\u001b[0;32m/miniconda/lib/python3.7/site-packages/mlflow/tracking/artifact_utils.py\u001b[0m in \u001b[0;36m_download_artifact_from_uri\u001b[0;34m(artifact_uri, output_path)\u001b[0m\n\u001b[1;32m 94\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 95\u001b[0m return get_artifact_repository(artifact_uri=root_uri).download_artifacts(\n\u001b[0;32m---> 96\u001b[0;31m \u001b[0martifact_path\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0martifact_path\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdst_path\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0moutput_path\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 97\u001b[0m )\n\u001b[1;32m 98\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n", 657 | "\u001b[0;32m/miniconda/lib/python3.7/site-packages/mlflow/store/artifact/artifact_repo.py\u001b[0m in \u001b[0;36mdownload_artifacts\u001b[0;34m(self, artifact_path, dst_path)\u001b[0m\n\u001b[1;32m 182\u001b[0m )\n\u001b[1;32m 183\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 184\u001b[0;31m \u001b[0;32mreturn\u001b[0m 
\u001b[0mdownload_artifact\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0msrc_artifact_path\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0martifact_path\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdst_local_dir_path\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mdst_path\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 185\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 186\u001b[0m \u001b[0;34m@\u001b[0m\u001b[0mabstractmethod\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 658 | "\u001b[0;32m/miniconda/lib/python3.7/site-packages/mlflow/store/artifact/artifact_repo.py\u001b[0m in \u001b[0;36mdownload_artifact\u001b[0;34m(src_artifact_path, dst_local_dir_path)\u001b[0m\n\u001b[1;32m 128\u001b[0m )\n\u001b[1;32m 129\u001b[0m self._download_file(\n\u001b[0;32m--> 130\u001b[0;31m \u001b[0mremote_file_path\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0msrc_artifact_path\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mlocal_path\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mlocal_destination_file_path\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 131\u001b[0m )\n\u001b[1;32m 132\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mlocal_destination_file_path\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 659 | "\u001b[0;32m/miniconda/lib/python3.7/site-packages/mlflow/store/artifact/azure_blob_artifact_repo.py\u001b[0m in \u001b[0;36m_download_file\u001b[0;34m(self, remote_file_path, local_path)\u001b[0m\n\u001b[1;32m 144\u001b[0m \u001b[0mremote_full_path\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mposixpath\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mjoin\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mremote_root_path\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mremote_file_path\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 145\u001b[0m \u001b[0;32mwith\u001b[0m \u001b[0mopen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mlocal_path\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m\"wb\"\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mfile\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 146\u001b[0;31m \u001b[0mcontainer_client\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdownload_blob\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mremote_full_path\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mreadinto\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mfile\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 147\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 148\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mdelete_artifacts\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0martifact_path\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 660 | "\u001b[0;32m/miniconda/lib/python3.7/site-packages/azure/core/tracing/decorator.py\u001b[0m in \u001b[0;36mwrapper_use_tracer\u001b[0;34m(*args, **kwargs)\u001b[0m\n\u001b[1;32m 81\u001b[0m \u001b[0mspan_impl_type\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0msettings\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtracing_implementation\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 82\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mspan_impl_type\u001b[0m \u001b[0;32mis\u001b[0m 
\u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 83\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mfunc\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 84\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 85\u001b[0m \u001b[0;31m# Merge span is parameter is set, but only if no explicit parent are passed\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 661 | "\u001b[0;32m/miniconda/lib/python3.7/site-packages/azure/storage/blob/_container_client.py\u001b[0m in \u001b[0;36mdownload_blob\u001b[0;34m(self, blob, offset, length, **kwargs)\u001b[0m\n\u001b[1;32m 1094\u001b[0m \u001b[0mblob_client\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mget_blob_client\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mblob\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# type: ignore\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1095\u001b[0m \u001b[0mkwargs\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msetdefault\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m'merge_span'\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;32mTrue\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1096\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mblob_client\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdownload_blob\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0moffset\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0moffset\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mlength\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mlength\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1097\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1098\u001b[0m def _generate_delete_blobs_subrequest_options(\n", 662 | "\u001b[0;32m/miniconda/lib/python3.7/site-packages/azure/core/tracing/decorator.py\u001b[0m in \u001b[0;36mwrapper_use_tracer\u001b[0;34m(*args, **kwargs)\u001b[0m\n\u001b[1;32m 81\u001b[0m \u001b[0mspan_impl_type\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0msettings\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtracing_implementation\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 82\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mspan_impl_type\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 83\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mfunc\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 84\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 85\u001b[0m \u001b[0;31m# Merge span is parameter is set, but only if no explicit parent are passed\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 663 | "\u001b[0;32m/miniconda/lib/python3.7/site-packages/azure/storage/blob/_blob_client.py\u001b[0m in \u001b[0;36mdownload_blob\u001b[0;34m(self, offset, length, **kwargs)\u001b[0m\n\u001b[1;32m 846\u001b[0m 
\u001b[0mlength\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mlength\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 847\u001b[0m **kwargs)\n\u001b[0;32m--> 848\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mStorageStreamDownloader\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m**\u001b[0m\u001b[0moptions\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 849\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 850\u001b[0m def _quick_query_options(self, query_expression,\n", 664 | "\u001b[0;32m/miniconda/lib/python3.7/site-packages/azure/storage/blob/_download.py\u001b[0m in \u001b[0;36m__init__\u001b[0;34m(self, clients, config, start_range, end_range, validate_content, encryption_options, max_concurrency, name, container, encoding, **kwargs)\u001b[0m\n\u001b[1;32m 347\u001b[0m )\n\u001b[1;32m 348\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 349\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_response\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_initial_request\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 350\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mproperties\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_response\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mproperties\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 351\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mproperties\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mname\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mname\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 665 | "\u001b[0;32m/miniconda/lib/python3.7/site-packages/azure/storage/blob/_download.py\u001b[0m in \u001b[0;36m_initial_request\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 427\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_file_size\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 428\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 429\u001b[0;31m \u001b[0mprocess_storage_error\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0merror\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 430\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 431\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 666 | "\u001b[0;32m/miniconda/lib/python3.7/site-packages/azure/storage/blob/_shared/response_handlers.py\u001b[0m in \u001b[0;36mprocess_storage_error\u001b[0;34m(storage_error)\u001b[0m\n\u001b[1;32m 175\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 176\u001b[0m \u001b[0;31m# `from None` prevents us from double printing the exception (suppresses generated layer error context)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 177\u001b[0;31m \u001b[0mexec\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"raise error from None\"\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# pylint: disable=exec-used # nosec\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 178\u001b[0m 
\u001b[0;32mexcept\u001b[0m \u001b[0mSyntaxError\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 179\u001b[0m \u001b[0;32mraise\u001b[0m \u001b[0merror\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 667 | "\u001b[0;32m/miniconda/lib/python3.7/site-packages/azure/storage/blob/_shared/response_handlers.py\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n", 668 | "\u001b[0;32m/miniconda/lib/python3.7/site-packages/azure/storage/blob/_download.py\u001b[0m in \u001b[0;36m_initial_request\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 390\u001b[0m \u001b[0mdata_stream_total\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 391\u001b[0m \u001b[0mdownload_stream_current\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 392\u001b[0;31m \u001b[0;34m**\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_request_options\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 393\u001b[0m )\n\u001b[1;32m 394\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n", 669 | "\u001b[0;32m/miniconda/lib/python3.7/site-packages/azure/storage/blob/_generated/operations/_blob_operations.py\u001b[0m in \u001b[0;36mdownload\u001b[0;34m(self, snapshot, version_id, timeout, range, range_get_content_md5, range_get_content_crc64, request_id_parameter, lease_access_conditions, cpk_info, modified_access_conditions, **kwargs)\u001b[0m\n\u001b[1;32m 182\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 183\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mresponse\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mstatus_code\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0;32min\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;36m200\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m206\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 184\u001b[0;31m \u001b[0mmap_error\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mstatus_code\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mresponse\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mstatus_code\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mresponse\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mresponse\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0merror_map\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0merror_map\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 185\u001b[0m \u001b[0merror\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_deserialize\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mfailsafe_deserialize\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0m_models\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mStorageError\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mresponse\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 186\u001b[0m \u001b[0;32mraise\u001b[0m \u001b[0mHttpResponseError\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mresponse\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mresponse\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mmodel\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0merror\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 670 | "\u001b[0;32m/miniconda/lib/python3.7/site-packages/azure/core/exceptions.py\u001b[0m in \u001b[0;36mmap_error\u001b[0;34m(status_code, response, error_map)\u001b[0m\n\u001b[1;32m 103\u001b[0m 
\u001b[0;32mreturn\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 104\u001b[0m \u001b[0merror\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0merror_type\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mresponse\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mresponse\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 105\u001b[0;31m \u001b[0;32mraise\u001b[0m \u001b[0merror\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 106\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 107\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n", 671 | "\u001b[0;31mResourceNotFoundError\u001b[0m: The specified blob does not exist.\nRequestId:5d7e63d1-901e-0041-1d95-d97cd0000000\nTime:2021-11-14T20:23:35.9277894Z\nErrorCode:BlobNotFound" 672 | ] 673 | } 674 | ], 675 | "source": [ 676 | "pyfunc_model = pyfunc.load_model('')\n" 677 | ] 678 | }, 679 | { 680 | "cell_type": "code", 681 | "execution_count": null, 682 | "metadata": {}, 683 | "outputs": [], 684 | "source": [] 685 | } 686 | ], 687 | "metadata": { 688 | "kernelspec": { 689 | "display_name": "Python 3 (ipykernel)", 690 | "language": "python", 691 | "name": "python3" 692 | }, 693 | "language_info": { 694 | "codemirror_mode": { 695 | "name": "ipython", 696 | "version": 3 697 | }, 698 | "file_extension": ".py", 699 | "mimetype": "text/x-python", 700 | "name": "python", 701 | "nbconvert_exporter": "python", 702 | "pygments_lexer": "ipython3", 703 | "version": "3.7.11" 704 | } 705 | }, 706 | "nbformat": 4, 707 | "nbformat_minor": 4 708 | } 709 | -------------------------------------------------------------------------------- /03_Deployment/03 Deploy Pytorch.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Model Deployment using Numpy Linear Classifier" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "## 1. Introduction\n", 15 | "In this workbook, we will look into the basics of deploying a model. For simplicity, we will consider a simple numpy linear classifier $$ \\mathbf{Y} = \\mathbf{W} \\mathbf{X} + \\mathbf{b}$$\n", 16 | "\n", 17 | "For simplicity, we will consider $\\mathbf{X}$ to be 6 dimensional ($\\mathbb{R}^6$). i.e. 1 data point $x \\in \\mathbf{X}$ will be a numpy array of shape $(1,6)$. The output $\\mathbf{Y}$ is 3 dimensional ($\\mathbb{R}^3$). Then, the weights $\\mathbf{W}$ will be a numpy array of shape $(3,6)$ and bias $\\mathbf{b}$ will be a numpy array of shape $(,3)$. \n", 18 | "\n", 19 | "In this workbook, we will demonstrate how to deploy this numpy linear classifier as a server and how to perform query on this numpy linear classifier.\n", 20 | "\n", 21 | "## 2. Imports and Dependencies.\n", 22 | "The few packages needed are loaded next. Particularly, `numpy`, `mlflow` will be majorly used in this tutorial. `requests` package will be used for performing query. `json` is used to post and get response from the server." 
23 | ] 24 | }, 25 | { 26 | "cell_type": "code", 27 | "execution_count": 1, 28 | "metadata": {}, 29 | "outputs": [], 30 | "source": [ 31 | "import os\n", 32 | "import sys\n", 33 | "import mlflow\n", 34 | "import numpy as np\n", 35 | "from mlflow import pyfunc\n", 36 | "\n", 37 | "# Setting a tracking uri to log the mlflow logs in a particular location tracked by \n", 38 | "from mlflow.tracking import MlflowClient\n", 39 | "tracking_uri = os.environ.get(\"TRACKING_URL\")\n", 40 | "client = MlflowClient(tracking_uri=tracking_uri)\n", 41 | "mlflow.set_tracking_uri(tracking_uri)" 42 | ] 43 | }, 44 | { 45 | "cell_type": "markdown", 46 | "metadata": {}, 47 | "source": [ 48 | "# 3. Some utility functions" 49 | ] 50 | }, 51 | { 52 | "cell_type": "code", 53 | "execution_count": 2, 54 | "metadata": {}, 55 | "outputs": [], 56 | "source": [ 57 | "## Utility function to add libraries to conda environment\n", 58 | "def add_libraries_to_conda_env(_conda_env,libraries=[],conda_dependencies=[]):\n", 59 | " dependencies = _conda_env[\"dependencies\"]\n", 60 | " dependencies = dependencies + conda_dependencies\n", 61 | " pip_index = None\n", 62 | " for _index,_element in enumerate(dependencies):\n", 63 | " if type(_element) == dict:\n", 64 | " if \"pip\" in _element.keys():\n", 65 | " pip_index = _index\n", 66 | " break\n", 67 | " dependencies[pip_index][\"pip\"] = dependencies[pip_index][\"pip\"] + libraries\n", 68 | " _conda_env[\"dependencies\"] = dependencies\n", 69 | " return _conda_env" 70 | ] 71 | }, 72 | { 73 | "cell_type": "code", 74 | "execution_count": 3, 75 | "metadata": {}, 76 | "outputs": [], 77 | "source": [ 78 | "## Model Wrapper that takes X as input using json and predicts an output Y\n", 79 | "class ModelWrapper(mlflow.pyfunc.PythonModel):\n", 80 | " def load_context(self,context):\n", 81 | " import numpy as np\n", 82 | " self.model = np.load(context.artifacts['model_path'], allow_pickle=True).tolist()\n", 83 | " print(\"Model initialized\")\n", 84 | " \n", 85 | " def predict(self, context, model_input):\n", 86 | " import numpy as np\n", 87 | " import json\n", 88 | " json_txt = \", \".join(model_input.columns)\n", 89 | " data_list = json.loads(json_txt)\n", 90 | " inputs = np.array(data_list)\n", 91 | " if len(inputs.shape) == 2:\n", 92 | " print('batch inference')\n", 93 | " predictions = []\n", 94 | " for idx in range(inputs.shape[0]):\n", 95 | " prediction = np.matmul(inputs[idx,:],self.model['weights'].T) + self.model['bias']\n", 96 | " predictions.append(prediction.tolist())\n", 97 | " elif len(inputs.shape) == 1:\n", 98 | " print('single inference')\n", 99 | " predictions = self.model['weights'].T * inputs + self.model['bias']\n", 100 | " predictions = predictions.tolist()\n", 101 | " else:\n", 102 | " raise ValueError('invalid input shape')\n", 103 | " return json.dumps(predictions)" 104 | ] 105 | }, 106 | { 107 | "cell_type": "code", 108 | "execution_count": 4, 109 | "metadata": {}, 110 | "outputs": [], 111 | "source": [ 112 | "# instantiate the python inference model wrapper for the server\n", 113 | "model_wrapper = ModelWrapper()\n", 114 | "env = mlflow.pytorch.get_default_conda_env()\n", 115 | "env = add_libraries_to_conda_env(env,libraries=['numpy'])\n", 116 | "\n", 117 | "# define the model weights randomly\n", 118 | "np_weights = np.random.rand(3,6)\n", 119 | "np_bias = np.random.rand(3)\n", 120 | "\n", 121 | "# checkpointing and logging the model in mlflow\n", 122 | "artifact_path = './np_model'\n", 123 | "np.save(artifact_path, {'weights':np_weights, 'bias':np_bias})\n", 124 
| "model_artifacts = {\"model_path\" : artifact_path+'.npy'}\n", 125 | "mlflow.pyfunc.log_model(\"np_model\", python_model=model_wrapper, artifacts=model_artifacts, conda_env=env)" 126 | ] 127 | }, 128 | { 129 | "cell_type": "code", 130 | "execution_count": null, 131 | "metadata": {}, 132 | "outputs": [], 133 | "source": [] 134 | }, 135 | { 136 | "cell_type": "markdown", 137 | "metadata": {}, 138 | "source": [ 139 | "## 4. Deploying the model\n", 140 | "The above code logs a model in the experiments tab. For more info please refer [here](https://rocketml.gitbook.io/rocketml-user-guide/experiments). After deploying the model, we can obtain the model url for performing query as shown below.\n", 141 | "\n", 142 | "## 5. Query from the server\n", 143 | "\n", 144 | "There are two methods to perform query... The first is using `requests` library and the other using `curl` shell command." 145 | ] 146 | }, 147 | { 148 | "cell_type": "code", 149 | "execution_count": 8, 150 | "metadata": {}, 151 | "outputs": [ 152 | { 153 | "name": "stdout", 154 | "output_type": "stream", 155 | "text": [ 156 | "[[1.81811 2.361802 2.3531516]]\n", 157 | "[[1.6456957 1.9675521 1.7067064]\n", 158 | " [2.5838118 2.8157287 2.2976384]\n", 159 | " [2.1801186 2.6144202 2.1953769]\n", 160 | " [1.488653 2.0579617 1.9867096]\n", 161 | " [1.809318 2.2533681 1.9978099]\n", 162 | " [1.6314347 2.6019633 2.2711916]\n", 163 | " [1.8024088 2.3680732 2.1608143]\n", 164 | " [1.5847387 2.1460922 1.6070927]\n", 165 | " [1.6557037 1.9244839 1.9006283]\n", 166 | " [1.6835335 2.0878038 1.7025021]\n", 167 | " [1.1496542 1.4508257 1.5056852]\n", 168 | " [1.8416998 2.6405902 2.111057 ]\n", 169 | " [2.2202673 2.6577232 2.1318862]\n", 170 | " [2.4101307 2.4593313 2.14497 ]\n", 171 | " [1.4908085 2.0428987 2.0579786]\n", 172 | " [1.6514273 2.2453768 2.2934952]\n", 173 | " [1.8059101 1.9135844 1.9444287]\n", 174 | " [1.8330411 2.6794806 2.3693244]\n", 175 | " [1.997308 2.4394622 2.3306658]\n", 176 | " [1.9428451 2.5416408 2.459957 ]]\n" 177 | ] 178 | } 179 | ], 180 | "source": [ 181 | "import requests\n", 182 | "import json\n", 183 | "\n", 184 | "url = \"http://127.0.0.1:5011/invocations\"\n", 185 | "headers = {\"Content-Type\":\"text/csv\"}\n", 186 | "\n", 187 | "# First case, run inference on single data point\n", 188 | "np_array = np.random.rand(1,6).tolist()\n", 189 | "json_data = json.dumps(np_array)\n", 190 | "response = requests.post(url,data=json_data,headers=headers)\n", 191 | "if response.status_code == 200:\n", 192 | " output = np.array(json.loads(response.json())).astype(np.float32)\n", 193 | " print(output)\n", 194 | "else:\n", 195 | " print(response.status_code)\n", 196 | " print(\"REST API deployment is in progress -- please try again in a few minutes!\")\n", 197 | "\n", 198 | "# Second case, run inference on multiple data points\n", 199 | "np_array = np.random.rand(20,6).tolist()\n", 200 | "json_data = json.dumps(np_array)\n", 201 | "response = requests.post(url,data=json_data,headers=headers)\n", 202 | "if response.status_code == 200:\n", 203 | " output = np.array(json.loads(response.json())).astype(np.float32)\n", 204 | " print(output)\n", 205 | "else:\n", 206 | " print(response.status_code)\n", 207 | " print(\"REST API deployment is in progress -- please try again in a few minutes!\")" 208 | ] 209 | }, 210 | { 211 | "cell_type": "code", 212 | "execution_count": 7, 213 | "metadata": {}, 214 | "outputs": [ 215 | { 216 | "name": "stdout", 217 | "output_type": "stream", 218 | "text": [ 219 | "\"[[1.9308286100142427, 2.58603809605743, 
2.3403451985121264]]\"" 220 | ] 221 | } 222 | ], 223 | "source": [ 224 | "!curl http://127.0.0.1:5011/invocations -H 'Content-Type:text/csv' -d '[[0.6499166977064089, 0.17579454262114602, 0.2688911143313131, 0.7146591854799202, 0.6497433572112488, 0.7723469203958951]]'" 225 | ] 226 | }, 227 | { 228 | "cell_type": "code", 229 | "execution_count": null, 230 | "metadata": {}, 231 | "outputs": [], 232 | "source": [] 233 | }, 234 | { 235 | "cell_type": "code", 236 | "execution_count": null, 237 | "metadata": {}, 238 | "outputs": [], 239 | "source": [] 240 | } 241 | ], 242 | "metadata": { 243 | "kernelspec": { 244 | "display_name": "Python 3 (ipykernel)", 245 | "language": "python", 246 | "name": "python3" 247 | }, 248 | "language_info": { 249 | "codemirror_mode": { 250 | "name": "ipython", 251 | "version": 3 252 | }, 253 | "file_extension": ".py", 254 | "mimetype": "text/x-python", 255 | "name": "python", 256 | "nbconvert_exporter": "python", 257 | "pygments_lexer": "ipython3", 258 | "version": "3.7.11" 259 | } 260 | }, 261 | "nbformat": 4, 262 | "nbformat_minor": 4 263 | } 264 | -------------------------------------------------------------------------------- /03_Deployment/artifacts.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rocketmlhq/sciml/4cc6d08c4bf7f0417bc079d4dd4ae17cda453a5f/03_Deployment/artifacts.png -------------------------------------------------------------------------------- /03_Deployment/experiments_list.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rocketmlhq/sciml/4cc6d08c4bf7f0417bc079d4dd4ae17cda453a5f/03_Deployment/experiments_list.png -------------------------------------------------------------------------------- /03_Deployment/model_deployment.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rocketmlhq/sciml/4cc6d08c4bf7f0417bc079d4dd4ae17cda453a5f/03_Deployment/model_deployment.png -------------------------------------------------------------------------------- /03_Deployment/model_deployment_keras_4GB.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rocketmlhq/sciml/4cc6d08c4bf7f0417bc079d4dd4ae17cda453a5f/03_Deployment/model_deployment_keras_4GB.png -------------------------------------------------------------------------------- /03_Deployment/model_list.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rocketmlhq/sciml/4cc6d08c4bf7f0417bc079d4dd4ae17cda453a5f/03_Deployment/model_list.png -------------------------------------------------------------------------------- /03_Deployment/run_details.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rocketmlhq/sciml/4cc6d08c4bf7f0417bc079d4dd4ae17cda453a5f/03_Deployment/run_details.png -------------------------------------------------------------------------------- /03_Deployment/runs_list.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rocketmlhq/sciml/4cc6d08c4bf7f0417bc079d4dd4ae17cda453a5f/03_Deployment/runs_list.png -------------------------------------------------------------------------------- /04_PINNs/02 2D Heat Equation.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | 
"cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "7cc00c83-000a-465a-8e09-72ac8e8233eb", 6 | "metadata": {}, 7 | "source": [ 8 | "# Physics Informed Neural Networks (PINNs) for 2D Heat Equation\n", 9 | "\n", 10 | "## 1. Introduction\n", 11 | "In this workbook, we would be training a physics informed neural network model for 1D Heat equation.\n", 12 | "\n", 13 | "$$\\frac{\\partial u}{\\partial t} - \\nu \\Big( \\frac{\\partial^2 u}{\\partial x^2} + \\frac{\\partial^2 u}{\\partial y^2} \\Big)= 0$$\n", 14 | "\n", 15 | "Physics informed neural networks is made of a dense neural network that takes in the $(x,t)$ points in the domain and learns the physics from it using PDEs such as the one just above. \n", 16 | "\n", 17 | "The architecture of the network looks something like this:\n", 18 | "\n", 19 | "![](https://www.researchgate.net/profile/Zhen-Li-105/publication/335990167/figure/fig1/AS:806502679982080@1569296631121/Schematic-of-a-physics-informed-neural-network-PINN-where-the-loss-function-of-PINN.ppm)\n", 20 | "\n", 21 | "We will begin the workbook with few imports and creating some helper functions\n", 22 | "\n", 23 | "## 2. Imports and helper functions" 24 | ] 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": 1, 29 | "id": "0f3e002e-a44d-4436-b9a2-b2485e86b2d1", 30 | "metadata": {}, 31 | "outputs": [ 32 | { 33 | "name": "stderr", 34 | "output_type": "stream", 35 | "text": [ 36 | "Using backend: tensorflow.compat.v1\n", 37 | "\n", 38 | "2021-11-13 22:29:25.398080: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\n", 39 | "2021-11-13 22:29:25.398123: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\n" 40 | ] 41 | }, 42 | { 43 | "name": "stdout", 44 | "output_type": "stream", 45 | "text": [ 46 | "WARNING:tensorflow:From /miniconda/lib/python3.7/site-packages/tensorflow/python/compat/v2_compat.py:101: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.\n", 47 | "Instructions for updating:\n", 48 | "non-resource variables are not supported in the long term\n", 49 | "WARNING:tensorflow:From /miniconda/lib/python3.7/site-packages/deepxde/nn/initializers.py:120: The name tf.keras.initializers.he_normal is deprecated. 
Please use tf.compat.v1.keras.initializers.he_normal instead.\n", 50 | "\n" 51 | ] 52 | } 53 | ], 54 | "source": [ 55 | "import deepxde as dde\n", 56 | "import numpy as np\n", 57 | "# Backend tensorflow.compat.v1 or tensorflow\n", 58 | "from deepxde.backend import tf\n", 59 | "import time \n", 60 | "import matplotlib.pyplot as plt\n", 61 | "import os\n", 62 | "from PIL import Image\n", 63 | "t0 = time.time()\n", 64 | "save_directory = './results'\n", 65 | "\n", 66 | "# Suppress warnings\n", 67 | "import warnings\n", 68 | "warnings.filterwarnings(\"ignore\")\n", 69 | "\n", 70 | "def plot(geom_time,resolution,data,save_directory,name): #output_data = pred[:,j]\n", 71 | " img_save_directory = save_directory + 'visualize_result'\n", 72 | " if not os.path.exists(img_save_directory):\n", 73 | " os.makedirs(img_save_directory)\n", 74 | " img_save_directory = img_save_directory + '/'\n", 75 | " fig = plt.figure()\n", 76 | " ims_test = []\n", 77 | " if name[-10:] =='prediction':\n", 78 | " t_max = 1\n", 79 | " t_min = -1\n", 80 | " \n", 81 | " else:\n", 82 | " t_max = np.max(data)\n", 83 | " t_min = np.min(data)\n", 84 | " nx, ny,nt = resolution \n", 85 | " data = data.reshape((len(data),)) \n", 86 | " for t in range(nt):\n", 87 | " plt.scatter(geom_time[:,0][nx*ny*t:nx*ny*(t+1)],geom_time[:,1][nx*ny*t:nx*ny*(t+1)], \n", 88 | " c=data[nx*ny*t:nx*ny*(t+1)].reshape((len(data[nx*ny*t:nx*ny*(t+1)]),)), cmap='jet',vmin=t_min, vmax=t_max, s= 200, marker = 's')\n", 89 | " plt.colorbar()\n", 90 | " plt.xlabel('x domain')\n", 91 | " plt.ylabel('y domain')\n", 92 | " plt.title( 't = ' + \"{:.3f}\".format(geom_time[:,2][nx*ny*t +1 ]))\n", 93 | " plt.show()\n", 94 | " filename = name + '_' +str(t)\n", 95 | " plt.savefig(os.path.join(img_save_directory, filename + '.png'))\n", 96 | " plt.close()\n", 97 | " im = Image.open(os.path.join(img_save_directory, filename + '.png'))\n", 98 | " ims_test.append(im) \n", 99 | " ims_test[0].save(os.path.join(img_save_directory + name + '.gif'),save_all = True, \n", 100 | " append_images = ims_test[1:], optimize = False, duration = 60, loop = 1000)\n", 101 | " im.show()\n", 102 | "\n", 103 | "\n", 104 | "def plot_mean_data_history(duration, resolution, data,title,save_directory):\n", 105 | " nx,ny,nt = resolution\n", 106 | " m = []\n", 107 | " for t in range(nt):\n", 108 | " mean_t = np.mean(abs(data[nx*ny*t:nx*ny*(t+1)]))\n", 109 | " m.append(mean_t)\n", 110 | "\n", 111 | " time = np.array(range(nt))*(duration/nt)\n", 112 | " time = time.reshape((nt,1))\n", 113 | " plt.plot(time, np.asarray(m))\n", 114 | " plt.title(title)\n", 115 | " if not os.path.exists(save_directory):\n", 116 | " os.makedirs(save_directory)\n", 117 | " plt.savefig(os.path.join(save_directory, 'mean_' + title + '_history.png'))\n", 118 | "\n", 119 | "\n", 120 | "def pde(X, u):\n", 121 | " du_X = tf.gradients(u, X)[0]\n", 122 | " du_x, du_y, du_t = du_X[:, 0:1], du_X[:, 1:2],du_X[:, 2:3]\n", 123 | " du_xx = tf.gradients(du_x, X)[0][:, 0:1]\n", 124 | " du_yy = tf.gradients(du_y, X)[0][:, 1:2]\n", 125 | " return du_t-0.5*(du_xx + du_yy)\n", 126 | " \n", 127 | "\n", 128 | "def func(x):\n", 129 | " return np.sin(np.pi * x[:, 0:1]) * np.exp(-x[:, 1:2])* np.exp(-x[:, 2:3])" 130 | ] 131 | }, 132 | { 133 | "cell_type": "markdown", 134 | "id": "6451b179-e08b-4f12-a513-9a0d1d77d59e", 135 | "metadata": {}, 136 | "source": [ 137 | "## 2. 
initialization\n", 138 | "Initialize the geometry, time domain and the Dirichlet BC and IC" 139 | ] 140 | }, 141 | { 142 | "cell_type": "code", 143 | "execution_count": 2, 144 | "id": "f4e6d8d3-b784-41ae-a468-074a690715ff", 145 | "metadata": {}, 146 | "outputs": [ 147 | { 148 | "name": "stdout", 149 | "output_type": "stream", 150 | "text": [ 151 | "Warning: 200 points required, but 225 points sampled.\n", 152 | "Warning: 10000 points required, but 11250 points sampled.\n" 153 | ] 154 | } 155 | ], 156 | "source": [ 157 | "geom = dde.geometry.geometry_2d.Rectangle([-1,-1], [1,1])\n", 158 | "timedomain = dde.geometry.TimeDomain(0, 1)\n", 159 | "geomtime = dde.geometry.GeometryXTime(geom, timedomain)\n", 160 | "\n", 161 | "bc = dde.DirichletBC(geomtime, func, lambda _, on_boundary: on_boundary)\n", 162 | "ic = dde.IC(geomtime, func, lambda _, on_initial: on_initial)\n", 163 | "data = dde.data.TimePDE(\n", 164 | " geomtime,\n", 165 | " pde,\n", 166 | " [],\n", 167 | " num_domain=40000,\n", 168 | " num_boundary=20000,\n", 169 | " num_initial=10000,\n", 170 | " solution=func,\n", 171 | " num_test=10000,\n", 172 | ")" 173 | ] 174 | }, 175 | { 176 | "cell_type": "markdown", 177 | "id": "31e26c8a-f118-45bd-8f91-b725f71fa6ee", 178 | "metadata": {}, 179 | "source": [ 180 | "Initialize the network and compile the model" 181 | ] 182 | }, 183 | { 184 | "cell_type": "code", 185 | "execution_count": 3, 186 | "id": "cd305e42-701b-4f9c-987b-3b7826d23467", 187 | "metadata": {}, 188 | "outputs": [ 189 | { 190 | "name": "stdout", 191 | "output_type": "stream", 192 | "text": [ 193 | "Compiling model...\n", 194 | "Building feed-forward neural network...\n", 195 | "'build' took 0.046935 s\n", 196 | "\n" 197 | ] 198 | }, 199 | { 200 | "name": "stderr", 201 | "output_type": "stream", 202 | "text": [ 203 | "2021-11-13 22:29:29.861934: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory\n", 204 | "2021-11-13 22:29:29.861968: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)\n", 205 | "2021-11-13 22:29:29.861987: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (bneievrk-76946f66f-dtbkv): /proc/driver/nvidia/version does not exist\n", 206 | "2021-11-13 22:29:29.862219: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA\n", 207 | "To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n" 208 | ] 209 | }, 210 | { 211 | "name": "stdout", 212 | "output_type": "stream", 213 | "text": [ 214 | "'compile' took 0.417882 s\n", 215 | "\n" 216 | ] 217 | } 218 | ], 219 | "source": [ 220 | "initializer = \"Glorot uniform\"\n", 221 | "optimizer = \"adam\"\n", 222 | "\n", 223 | "\n", 224 | "layer_size = [3] + [32]*3 + [1]\n", 225 | "activation = \"tanh\"\n", 226 | "net = dde.maps.FNN(layer_size, activation, initializer)\n", 227 | "\n", 228 | "model = dde.Model(data, net)\n", 229 | "\n", 230 | "model.compile(optimizer, lr=0.001)" 231 | ] 232 | }, 233 | { 234 | "cell_type": "markdown", 235 | "id": "f9d979fa-b250-4d24-b797-8a08493181ad", 236 | "metadata": {}, 237 | "source": [ 238 | "## 3. 
Training\n" 239 | ] 240 | }, 241 | { 242 | "cell_type": "code", 243 | "execution_count": null, 244 | "id": "3561a8c6-f58c-491b-9490-2a482b5d30a4", 245 | "metadata": {}, 246 | "outputs": [ 247 | { 248 | "name": "stdout", 249 | "output_type": "stream", 250 | "text": [ 251 | "Initializing variables...\n", 252 | "Training model...\n", 253 | "\n", 254 | "Step Train loss Test loss Test metric\n", 255 | "0 [5.78e-02] [5.28e-02] [] \n", 256 | "1000 [1.82e-06] [1.50e-06] [] \n", 257 | "2000 [5.52e-07] [5.12e-07] [] \n", 258 | "3000 [2.19e-07] [2.01e-07] [] \n", 259 | "4000 [1.05e-07] [9.43e-08] [] \n" 260 | ] 261 | } 262 | ], 263 | "source": [ 264 | "t1 = time.time()\n", 265 | "\n", 266 | "losshistory, train_state = model.train(epochs=10000)\n", 267 | "t2 = time.time()\n", 268 | "print(\"training time:\", (t2-t1))" 269 | ] 270 | }, 271 | { 272 | "cell_type": "markdown", 273 | "id": "b183bf25-e1fe-45fb-ae19-dde3024b8c8f", 274 | "metadata": {}, 275 | "source": [ 276 | "## 4. Post-training Visualization" 277 | ] 278 | }, 279 | { 280 | "cell_type": "code", 281 | "execution_count": null, 282 | "id": "d998accd-f5e4-4421-9066-74879a5a1164", 283 | "metadata": {}, 284 | "outputs": [], 285 | "source": [ 286 | "dde.postprocessing.plot_loss_history(losshistory)\n", 287 | "plt.show()\n", 288 | "\n", 289 | "\n", 290 | "x = np.linspace(-1, 1, 100)\n", 291 | "y = np.linspace(-1, 1, 100)\n", 292 | "t = np.linspace(0, 1, 21)\n", 293 | "test_x , test_t, test_y = np.meshgrid(x, t,y)\n", 294 | "test_domain = np.vstack((np.ravel(test_x), np.ravel(test_y),np.ravel(test_t))).T\n", 295 | "\n", 296 | "\n", 297 | "prediction = model.predict(test_domain)\n", 298 | "residual = model.predict(test_domain, operator=pde)\n", 299 | "plot_mean_data_history(1, (100,100,21), np.abs(residual),'residual',save_directory)\n", 300 | "plot(test_domain,(100,100,21), prediction, save_directory,'prediction')\n", 301 | "#plot(test_domain,(100,100,21), residual, save_directory,'prediction')\n", 302 | "print(\"total time\")\n", 303 | "print(t2-t0)" 304 | ] 305 | }, 306 | { 307 | "cell_type": "code", 308 | "execution_count": null, 309 | "id": "befe25bd-4e0a-4400-90f2-88b869e2aa2f", 310 | "metadata": {}, 311 | "outputs": [], 312 | "source": [] 313 | } 314 | ], 315 | "metadata": { 316 | "kernelspec": { 317 | "display_name": "Python 3 (ipykernel)", 318 | "language": "python", 319 | "name": "python3" 320 | }, 321 | "language_info": { 322 | "codemirror_mode": { 323 | "name": "ipython", 324 | "version": 3 325 | }, 326 | "file_extension": ".py", 327 | "mimetype": "text/x-python", 328 | "name": "python", 329 | "nbconvert_exporter": "python", 330 | "pygments_lexer": "ipython3", 331 | "version": "3.7.11" 332 | } 333 | }, 334 | "nbformat": 4, 335 | "nbformat_minor": 5 336 | } 337 | -------------------------------------------------------------------------------- /05_DiffNets/UNetArch.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rocketmlhq/sciml/4cc6d08c4bf7f0417bc079d4dd4ae17cda453a5f/05_DiffNets/UNetArch.png -------------------------------------------------------------------------------- /05_DiffNets/sobol_6d.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rocketmlhq/sciml/4cc6d08c4bf7f0417bc079d4dd4ae17cda453a5f/05_DiffNets/sobol_6d.npy -------------------------------------------------------------------------------- /05_DiffNets/xdiffnet-scheme.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/rocketmlhq/sciml/4cc6d08c4bf7f0417bc079d4dd4ae17cda453a5f/05_DiffNets/xdiffnet-scheme.png -------------------------------------------------------------------------------- /05_DiffNets/xfdm-grid.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rocketmlhq/sciml/4cc6d08c4bf7f0417bc079d4dd4ae17cda453a5f/05_DiffNets/xfdm-grid.png -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # SC21 Tutorial on Scientific Machine Learning 2 | 3 | In this repository, you can find all the example notebooks used for the SC21 full-day tutorial: _Scientific Machine Learning using HPC Servers on Cloud_ 4 | 5 | 6 | ## Contents 7 | 8 | - [Resources](#resources) 9 | - [Target Audience](#target-audience) 10 | - [Content Level](#content-level) 11 | - [Prerequisites](#prerequisites) 12 | - [Getting Started](#getting-started) 13 | - [Frequently Asked Questions](#faq) 14 | 15 | ## Resources 16 | 17 | - [Slack Invite](https://join.slack.com/t/sciml-workspace/shared_invite/zt-xfzyqf2u-zh4GRt7sRoh4RLSY9~yyJw) 18 | - [Youtube Playlist](https://www.youtube.com/watch?v=ssZO8Y_TqxI&list=PLcK0exoS00ZTPdvhmh0IdyCIlVQ2lzjJ5) 19 | - [RocketML support](mailto:rocketml@20230188.hubspot-inbox.com) 20 | - [DeepXDE](https://github.com/lululxvi/deepxde) 21 | - [DiffNet](https://github.com/adityabalu/DiffNet) 22 | 23 | 24 | ## Target Audience 25 | Practitioners who use numerical simulations of Partial Differential Equations (PDEs) in the analysis, optimization, design, and control of complex engineered systems. 26 | 27 | ## Content Level 28 | 20% Beginner, 40% Intermediate, 40% Advanced 29 | 30 | ## Prerequisites 31 | Partial Differential Equations, Numerical methods, Machine Learning, Deep Learning, High Performance Computing, Python programming, Jupyter 32 | 33 | ## Getting Started 34 | 35 | - Please email [RocketML](mailto:rocketml@20230188.hubspot-inbox.com) for a user account on [sciml.rocketml.net](https://sciml.rocketml.net). We will send you email instructions on how to log in. 36 | 37 | - Log in to [sciml.rocketml.net](https://sciml.rocketml.net) using the instructions received from RocketML 38 | Screen Shot 2021-11-14 at 2 05 29 PM 39 | 40 | - Go through the onboarding screens 41 | 42 | - You will see a list of tutorials that are ready to use 43 | 44 | - Group by Topic to see _Beginner_, _Intermediate_, and _Advanced_ tutorials 45 | Screen Shot 2021-11-14 at 2 07 45 PM 46 | 47 | - Select a tutorial and wait for Jupyter Compute to start 48 | Screen Shot 2021-11-14 at 2 10 24 PM 49 | 50 | - Run the tutorial notebook one cell at a time. If you are not familiar with Jupyter notebooks, please google for a relevant [tutorial](https://www.youtube.com/watch?v=CwFq3YDU6_Y) 51 | Screen Shot 2021-11-14 at 2 12 04 PM 52 | 53 | ## FAQ 54 | 55 | 1. My tutorial screen is stuck at _Please Wait Jupyter Compute is not started yet_ for more than 5 minutes. What do I do? 56 | 57 | This can happen for the following reasons: 58 | - _When you log in for the first time, resources such as persistent disk space and Azure Blob Storage are created for you to store the tutorials and any new notebooks you create. 
For the first login, expect a delay of up to 10 minutes._ 59 | 60 | - _There is a delay in creating containers when new nodes are being added to the Azure Kubernetes cluster. Creating a new node and downloading the Docker images can take up to 10 minutes. If it takes more than 15 minutes, please ping us on Slack._ 61 | 62 | - _Much like a Windows reboot that fixes most issues, we have two fixes: 1) refresh the browser, or 2) log out and log back in to get new containers._ 63 | 64 | 65 | 66 | --------------------------------------------------------------------------------