├── .gitignore
├── LICENSE
├── README.md
├── build_and_push.sh
├── build_local_env.sh
├── conf
│   ├── config.yaml
│   ├── dataset
│   │   ├── iris.yaml
│   │   └── mnist.yaml
│   ├── model
│   │   └── simple_mlp.yaml
│   └── trainer
│       └── default.yaml
├── container
│   ├── ReadMe.md
│   ├── docker_template.jinja2
│   └── render_docker.py
├── deployement
│   ├── nginx.conf
│   ├── predictor.py
│   ├── publish
│   │   ├── algorithm_validation_specification.py
│   │   ├── inference_specification.py
│   │   ├── metric_definitions.py
│   │   ├── modelpackage_validation_specification.py
│   │   ├── training_channels.py
│   │   ├── training_specification.py
│   │   ├── tuning_objectives.py
│   │   └── validation_specification.py
│   ├── serve
│   └── wsgi.py
├── local_test
│   ├── explore_local.sh
│   ├── predict.sh
│   ├── serve_local.sh
│   ├── test_dir
│   │   ├── input
│   │   │   └── config
│   │   │       ├── hyperparameters.json
│   │   │       └── resourceConfig.json
│   │   └── model
│   │       └── trainer.ckpt
│   └── train_local.sh
├── poetry.lock
├── pyproject.toml
├── src
│   ├── __init__.py
│   ├── app.py
│   ├── configs
│   │   ├── __init__.py
│   │   └── train_config.py
│   ├── datasets
│   │   ├── __init__.py
│   │   ├── base_dataset.py
│   │   ├── iris.py
│   │   └── mnist.py
│   ├── models
│   │   ├── __init__.py
│   │   ├── model_handler.py
│   │   └── simple_mlp.py
│   ├── paths.py
│   └── training.py
├── train
└── workflow.ipynb

/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 | 
6 | # C extensions
7 | *.so
8 | 
9 | # Distribution / packaging
10 | .Python
11 | env/
12 | build/
13 | develop-eggs/
14 | dist/
15 | downloads/
16 | eggs/
17 | .eggs/
18 | lib/
19 | lib64/
20 | parts/
21 | sdist/
22 | var/
23 | wheels/
24 | *.egg-info/
25 | .installed.cfg
26 | *.egg
27 | 
28 | # PyInstaller
29 | # Usually these files are written by a python script from a template
30 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
31 | *.manifest
32 | *.spec
33 | 
34 | # Installer logs
35 | pip-log.txt
36 | pip-delete-this-directory.txt
37 | 
38 | # Unit test / coverage reports
39 | htmlcov/
40 | .tox/
41 | .coverage
42 | .coverage.*
43 | .cache
44 | nosetests.xml
45 | coverage.xml
46 | *.cover
47 | .hypothesis/
48 | 
49 | # Translations
50 | *.mo
51 | *.pot
52 | 
53 | # Django stuff:
54 | *.log
55 | local_settings.py
56 | 
57 | # Flask stuff:
58 | instance/
59 | .webassets-cache
60 | 
61 | # Scrapy stuff:
62 | .scrapy
63 | 
64 | # Sphinx documentation
65 | docs/_build/
66 | 
67 | # PyBuilder
68 | target/
69 | 
70 | # Jupyter Notebook
71 | .ipynb_checkpoints
72 | 
73 | # IntelliJ IDEA
74 | .idea/
75 | 
76 | # pyenv
77 | .python-version
78 | 
79 | # celery beat schedule file
80 | celerybeat-schedule
81 | 
82 | # SageMath parsed files
83 | *.sage.py
84 | 
85 | # dotenv
86 | .env
87 | 
88 | # virtualenv
89 | .venv
90 | venv/
91 | ENV/
92 | 
93 | # Spyder project settings
94 | .spyderproject
95 | .spyproject
96 | 
97 | # Rope project settings
98 | .ropeproject
99 | 
100 | # mkdocs documentation
101 | /site
102 | 
103 | *.csv
104 | *.ckpt
105 | data/*_dataset/
106 | .vscode/
107 | outputs/*
108 | container/Dockerfile
109 | requirements.txt
110 | multirun/*
111 | container/src
112 | container/src/*
113 | *.ckpt
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | 
2 | Copyright (c) 2020 Thomas Chaton
3 | 
4 | Permission is hereby granted, free of charge, to any person obtaining a copy
5 | of this software and associated documentation files (the "Software"), to deal
6 | in the Software without restriction, including without limitation the rights
7 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
8 | copies of the Software, and to permit persons to whom the Software is
9 | furnished to do so, subject to the following conditions:
10 | 
11 | The above copyright notice and this permission notice shall be included in
12 | all copies or substantial portions of the Software.
13 | 
14 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
15 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
16 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
17 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
18 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
19 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
20 | THE SOFTWARE.
21 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # sagemaker-pytorch-boilerplate
2 | 
3 | Production ML, as a field, has matured. It’s increasingly common for companies to have at least one model in production. As more teams deploy models, the conversation around tooling has shifted from “What gets the job done?” to “What does it take to deploy a model at production scale?”
4 | 
5 | This project is a boilerplate codebase to `train / serve / publish` PyTorch models using AWS SageMaker.
6 | 
7 | We aim to simplify the MLOps workflow by providing a template for production-ready development, allowing ML engineers to focus solely on their models and datasets.
8 | 
9 | We rely on [Hydra](https://hydra.cc) for elegantly configuring our application and [PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/latest/), a lightweight PyTorch wrapper for ML researchers to scale their experiments with less boilerplate.
10 | 
11 | # How to use this project
12 | 
13 | This project implements a 1-layer MLP on the Iris dataset as a minimal demo.
14 | 
15 | ```bash
16 | sh build_local_env.sh 3.7.8 # Creates a local virtual environment to ease local development
17 | ```
18 | 
19 | ```bash
20 | sh build_and_push.sh {IMAGE_NAME} {MODEL} {DATASET}
21 | # Builds the `container` folder and pushes the image to AWS Elastic Container Registry (ECR)
22 | ```
23 | 
24 | # Local development
25 | 
26 | ## Training
27 | 
28 | Use this for quick local iteration.
29 | 
30 | ```bash
31 | source .venv/bin/activate
32 | python src/train model={MODEL} dataset={DATASET}
33 | ```
34 | 
35 | or within the docker image.
36 | 
37 | Use this to make sure the docker image works correctly.
38 | 
39 | ```bash
40 | sh local_test/train_local.sh ${IMAGE_NAME} ${ARGS_1} ${ARGS_2} ${ARGS_3} ...
41 | ```
42 | 
43 | ## Local Serving
44 | 
45 | Terminal 1
46 | ```bash
47 | In:
48 | sh build_and_push.sh {IMAGE_NAME} {MODEL} {DATASET}
49 | cd local_test
50 | sh serve_local.sh {IMAGE_NAME}
51 | ```
52 | 
53 | ``` bash
54 | Out:
55 | Starting the inference server with 4 workers.
56 | [2020-08-19 11:41:31 +0000] [9] [INFO] Starting gunicorn 20.0.4
57 | [2020-08-19 11:41:31 +0000] [9] [INFO] Listening at: unix:/tmp/gunicorn.sock (9)
58 | [2020-08-19 11:41:31 +0000] [9] [INFO] Using worker: gevent
59 | [2020-08-19 11:41:31 +0000] [13] [INFO] Booting worker with pid: 13
60 | [2020-08-19 11:41:31 +0000] [14] [INFO] Booting worker with pid: 14
61 | [2020-08-19 11:41:31 +0000] [15] [INFO] Booting worker with pid: 15
62 | [2020-08-19 11:41:31 +0000] [16] [INFO] Booting worker with pid: 16
63 | ```
64 | 
65 | Terminal 2
66 | ```bash
67 | In:
68 | cd local_test
69 | sh predict.sh {SAMPLE_DATA} # Currently supports only 'text/csv'
70 | ```
71 | 
72 | 
73 | Train on AWS:
74 | Run the `workflow.ipynb` notebook
75 | 
76 | ```
77 | jupyter lab
78 | ```
79 | 
80 | # CAREFUL: Work in progress
--------------------------------------------------------------------------------
/build_and_push.sh:
--------------------------------------------------------------------------------
1 | # The name of our algorithm
2 | image=$1
3 | MODEL=$2
4 | DATASET=$3
5 | 
6 | if [ "$image" == "" ]
7 | then
8 | echo "Usage: $0 "
9 | exit 1
10 | fi
11 | 
12 | cd container
13 | 
14 | cp -r ../deployement .
15 | cp -r ../pyproject.toml .
16 | cp -r ../poetry.lock .
17 | 
18 | cp -r ../train deployement
19 | cp -r ../src deployement/
20 | 
21 | pip install poetry
22 | poetry export -f requirements.txt -o requirements.txt --without-hashes
23 | python render_docker.py
24 | chmod +x deployement/train
25 | chmod +x deployement/serve
26 | 
27 | account=$(aws sts get-caller-identity --query Account --output text)
28 | 
29 | # Get the region defined in the current configuration (default to eu-west-1 if none defined)
30 | region=$(aws configure get region)
31 | # Fall back to eu-west-1 when no region is configured.
32 | region=${region:-eu-west-1}
33 | 
34 | fullname="${account}.dkr.ecr.${region}.amazonaws.com/${image}:latest"
35 | 
36 | # If the repository doesn't exist in ECR, create it.
37 | 
38 | aws ecr describe-repositories --repository-names "${image}" > /dev/null 2>&1
39 | 
40 | if [ $?
-ne 0 ] 41 | then 42 | aws ecr create-repository --repository-name "${image}" > /dev/null 43 | fi 44 | 45 | # Build the docker image locally with the image name and then push it to ECR 46 | # with the full name. 47 | docker build --build-arg MODEL=$MODEL --build-arg DATASET=$DATASET -t ${image} . 48 | docker tag ${image}:latest ${fullname} 49 | 50 | aws ecr get-login-password \ 51 | --region ${region} \ 52 | | docker login \ 53 | --username AWS \ 54 | --password-stdin ${account}.dkr.ecr.${region}.amazonaws.com 55 | 56 | docker push ${fullname} 57 | 58 | # Cleaning 59 | rm -r deployement 60 | rm Dockerfile 61 | rm requirements.txt 62 | rm pyproject.toml 63 | rm poetry.lock 64 | echo ${fullname} 65 | -------------------------------------------------------------------------------- /build_local_env.sh: -------------------------------------------------------------------------------- 1 | PYTHON_VERSION=$1 2 | #pyenv install ${PYTHON_VERSION} 3 | pyenv local ${PYTHON_VERSION} 4 | python -m venv .venv 5 | cd container 6 | poetry install 7 | cd .. && source .venv/bin/activate -------------------------------------------------------------------------------- /conf/config.yaml: -------------------------------------------------------------------------------- 1 | defaults: 2 | - model: ??? 3 | - dataset: ??? 4 | 5 | mode: local 6 | -------------------------------------------------------------------------------- /conf/dataset/iris.yaml: -------------------------------------------------------------------------------- 1 | # @package _group_ 2 | target: src.datasets.iris.IrisDataset 3 | params: 4 | val_split: 0.3 5 | num_workers: 16 6 | -------------------------------------------------------------------------------- /conf/dataset/mnist.yaml: -------------------------------------------------------------------------------- 1 | # @package _group_ 2 | target: src.datasets.mnist.MNISTDataset 3 | params: 4 | data_dir: '.' 5 | val_split: 5000 6 | num_workers: 16 7 | normalize: False 8 | seed: 42 9 | -------------------------------------------------------------------------------- /conf/model/simple_mlp.yaml: -------------------------------------------------------------------------------- 1 | # @package _group_ 2 | target: src.models.simple_mlp.Model 3 | -------------------------------------------------------------------------------- /conf/trainer/default.yaml: -------------------------------------------------------------------------------- 1 | # @package _group_ 2 | max_epochs: 100 3 | gpus: 0 4 | -------------------------------------------------------------------------------- /container/ReadMe.md: -------------------------------------------------------------------------------- 1 | # Bring-your-own Algorithm Sample 2 | 3 | This example shows how to package an algorithm for use with SageMaker. We have chosen a simple [scikit-learn][skl] implementation of decision trees to illustrate the procedure. 4 | 5 | SageMaker supports two execution modes: _training_ where the algorithm uses input data to train a new model and _serving_ where the algorithm accepts HTTP requests and uses the previously trained model to do an inference (also called "scoring", "prediction", or "transformation"). 6 | 7 | The algorithm that we have built here supports both training and scoring in SageMaker with the same container image. 
It is perfectly reasonable to build an algorithm that supports only training _or_ scoring, as well as to build an algorithm that has separate container images for training and scoring.
8 | 
9 | In order to build a production grade inference server into the container, we use the following stack to make the implementer's job simple:
10 | 
11 | 1. __[nginx][nginx]__ is a light-weight layer that handles the incoming HTTP requests and manages the I/O in and out of the container efficiently.
12 | 2. __[gunicorn][gunicorn]__ is a WSGI pre-forking worker server that runs multiple copies of your application and load balances between them.
13 | 3. __[flask][flask]__ is a simple web framework used in the inference app that you write. It lets you respond to calls on the `/ping` and `/invocations` endpoints without having to write much code.
14 | 
15 | ## The Structure of the Sample Code
16 | 
17 | The components are as follows:
18 | 
19 | * __Dockerfile__: The _Dockerfile_ describes how the image is built and what it contains. It is a recipe for your container and gives you tremendous flexibility to construct almost any execution environment you can imagine. Here, we use the Dockerfile to describe a pretty standard Python science stack and the simple scripts that we're going to add to it. See the [Dockerfile reference][dockerfile] for what's possible here.
20 | 
21 | * __build\_and\_push.sh__: The script to build the Docker image (using the Dockerfile above) and push it to the [Amazon EC2 Container Registry (ECR)][ecr] so that it can be deployed to SageMaker. Specify the name of the image as the argument to this script. The script will generate a full name for the repository in your account and your configured AWS region. If this ECR repository doesn't exist, the script will create it.
22 | 
23 | * __decision-trees__: The directory that contains the application to run in the container. See the next section for details about each of the files.
24 | 
25 | * __local-test__: A directory containing scripts and a setup for running simple training and inference jobs locally so that you can test that everything is set up correctly. See below for details.
26 | 
27 | ### The application run inside the container
28 | 
29 | When SageMaker starts a container, it will invoke the container with an argument of either __train__ or __serve__. We have set this container up so that the argument is treated as the command that the container executes. When training, it will run the __train__ program included and, when serving, it will run the __serve__ program.
30 | 
31 | * __train__: The main program for training the model. When you build your own algorithm, you'll edit this to include your training code.
32 | * __serve__: The wrapper that starts the inference server. In most cases, you can use this file as-is.
33 | * __wsgi.py__: The start-up shell for the individual server workers. This only needs to be changed if you change where predictor.py is located or what it is named.
34 | * __predictor.py__: The algorithm-specific inference server. This is the file that you modify with your own algorithm's code.
35 | * __nginx.conf__: The configuration for the nginx master server that manages the multiple workers.
36 | 
37 | ### Setup for local testing
38 | 
39 | The subdirectory local-test contains scripts and sample data for testing the built container image on the local machine. When building your own algorithm, you'll want to modify it appropriately.
40 | 
41 | * __train-local.sh__: Instantiate the container configured for training.
42 | * __serve-local.sh__: Instantiate the container configured for serving.
43 | * __predict.sh__: Run predictions against a locally instantiated server.
44 | * __test-dir__: The directory that gets mounted into the container with test data mounted in all the places that match the container schema.
45 | * __payload.csv__: Sample data used by predict.sh for testing the server.
46 | 
47 | #### The directory tree mounted into the container
48 | 
49 | The tree under test-dir is mounted into the container and mimics the directory structure that SageMaker would create for the running container during training or hosting.
50 | 
51 | * __input/config/hyperparameters.json__: The hyperparameters for the training job.
52 | * __input/data/training/leaf_train.csv__: The training data.
53 | * __model__: The directory where the algorithm writes the model file.
54 | * __output__: The directory where the algorithm can write its success or failure file.
55 | 
56 | ## Environment variables
57 | 
58 | When you create an inference server, you can control some of Gunicorn's options via environment variables. These
59 | can be supplied as part of the CreateModel API call.
60 | 
61 |     Parameter             Environment Variable      Default Value
62 |     ---------             --------------------      -------------
63 |     number of workers     MODEL_SERVER_WORKERS      the number of CPU cores
64 |     timeout               MODEL_SERVER_TIMEOUT      60 seconds
65 | 
66 | 
67 | [skl]: http://scikit-learn.org "scikit-learn Home Page"
68 | [dockerfile]: https://docs.docker.com/engine/reference/builder/ "The official Dockerfile reference guide"
69 | [ecr]: https://aws.amazon.com/ecr/ "ECR Home Page"
70 | [nginx]: http://nginx.org/
71 | [gunicorn]: http://gunicorn.org/
72 | [flask]: http://flask.pocoo.org/
73 | 
--------------------------------------------------------------------------------
/container/docker_template.jinja2:
--------------------------------------------------------------------------------
1 | FROM python:3.7-slim
2 | 
3 | LABEL maintainer="thomas.chaton.ai@gmail.com"
4 | 
5 | # Avoid warnings by switching to noninteractive
6 | ENV DEBIAN_FRONTEND=noninteractive
7 | 
8 | RUN apt-get update \
9 |     && apt-get install -y --fix-missing --no-install-recommends \
10 |     ca-certificates nginx wget libffi-dev libssl-dev build-essential libopenblas-dev \
11 |     python3-pip python3-dev python3-venv python3-setuptools \
12 |     git iproute2 procps lsb-release \
13 |     libsm6 libxext6 libxrender-dev \
14 |     && apt-get clean \
15 |     && rm -rf /var/lib/apt/lists/*
16 | 
17 | RUN python3 -m pip install -U pip \
18 |     && python3 -m pip install poetry \
19 |     && poetry config virtualenvs.create false --local \
20 |     && pip3 install torch==1.6.0+cpu -f https://download.pytorch.org/whl/torch_stable.html \
21 |     && pip3 install "setuptools>=41.0.0" \
22 |     && rm -rf /root/.cache
23 | 
24 | {{ requirements }}
25 | RUN rm -rf /root/.cache
26 | 
27 | # Set some environment variables. PYTHONUNBUFFERED keeps Python from buffering our standard
28 | # output stream, which means that logs can be delivered to the user quickly. PYTHONDONTWRITEBYTECODE
29 | # keeps Python from writing the .pyc files which are unnecessary in this case. We also update
30 | # PATH so that the train and serve programs are found when the container is invoked.
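# For reference: SageMaker launches this image as `docker run <image> train` (training job) or
# `docker run <image> serve` (hosting), so the executable train/serve scripts placed under
# /opt/program must be reachable through the PATH set below (see container/ReadMe.md).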
31 | 32 | ENV PYTHONUNBUFFERED=TRUE 33 | ENV PYTHONDONTWRITEBYTECODE=TRUE 34 | ENV PYTHON=$ENV/bin/python 35 | ENV PIP=$ENV/bin/pip 36 | ENV PATH="/opt/program:${PATH}" 37 | 38 | RUN rm -f /usr/bin/python && ln -s /usr/bin/python3 /usr/bin/python 39 | 40 | # Set up the program in the image 41 | COPY src /opt/program 42 | 43 | ARG MODEL="" 44 | ARG DATASET="" 45 | 46 | # Setup location of model for forward inference 47 | RUN sed -i "/model:/c\- model: $MODEL" /opt/program/conf/config.yaml 48 | RUN sed -i "/dataset:/c\- dataset: $DATASET" /opt/program/conf/config.yaml 49 | RUN sed -i "/mode:/c\mode: aws" /opt/program/conf/config.yaml 50 | 51 | WORKDIR /opt/program 52 | 53 | -------------------------------------------------------------------------------- /container/render_docker.py: -------------------------------------------------------------------------------- 1 | from jinja2 import Template 2 | 3 | CMD = "RUN pip install " 4 | 5 | requirements = [] 6 | with open("requirements.txt") as file_: 7 | for l in file_: 8 | if ';' in l: 9 | requirements.append(CMD + l.split(';')[0] + '\n') 10 | else: 11 | requirements.append(CMD + l) 12 | with open("docker_template.jinja2") as file_: 13 | template = Template(file_.read()) 14 | rendered = template.render(requirements=''.join(requirements)) 15 | with open("Dockerfile", "w") as file_: 16 | file_.write(rendered) -------------------------------------------------------------------------------- /deployement/nginx.conf: -------------------------------------------------------------------------------- 1 | worker_processes 1; 2 | daemon off; # Prevent forking 3 | 4 | 5 | pid /tmp/nginx.pid; 6 | error_log /var/log/nginx/error.log; 7 | 8 | events { 9 | # defaults 10 | } 11 | 12 | http { 13 | include /etc/nginx/mime.types; 14 | default_type application/octet-stream; 15 | access_log /var/log/nginx/access.log combined; 16 | 17 | upstream gunicorn { 18 | server unix:/tmp/gunicorn.sock; 19 | } 20 | 21 | server { 22 | listen 8080 deferred; 23 | client_max_body_size 5m; 24 | 25 | keepalive_timeout 5; 26 | 27 | location ~ ^/(ping|invocations) { 28 | proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; 29 | proxy_set_header Host $http_host; 30 | proxy_redirect off; 31 | proxy_pass http://gunicorn; 32 | } 33 | 34 | location / { 35 | return 404 "{}"; 36 | } 37 | } 38 | } 39 | -------------------------------------------------------------------------------- /deployement/predictor.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | # A sample training component that trains a simple scikit-learn decision tree model. 4 | # This implementation works in File mode and makes no assumptions about the input file names. 5 | # Input is specified as CSV with a data point in each row and the labels in the first column. 
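#
# Example of exercising this server once the image is running locally; this mirrors
# local_test/serve_local.sh and local_test/predict.sh (payload.csv is a sample CSV file):
#
#   sh local_test/serve_local.sh <image-name>
#   curl --data-binary @payload.csv -H "Content-Type: text/csv" http://localhost:8080/invocations
#
# The response is returned as text/csv, with one prediction per line.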
6 | 7 | from __future__ import print_function 8 | 9 | import os 10 | import hydra 11 | 12 | os.environ["HYDRA_FULL_ERROR"] = "1" 13 | import os.path as osp 14 | import json 15 | import pickle 16 | import sys 17 | from io import StringIO 18 | import traceback 19 | import signal 20 | import flask 21 | import pandas as pd 22 | from omegaconf import DictConfig 23 | import random 24 | import torch 25 | 26 | from src.datasets import * 27 | from src.paths import Paths 28 | from src.models.model_handler import ModelHandler 29 | from src.configs.train_config import * 30 | 31 | app = flask.Flask(__name__) 32 | 33 | 34 | @app.route("/ping", methods=["GET"]) 35 | def ping(): 36 | """Determine if the container is working and healthy. In this sample container, we declare 37 | it healthy if we can load the model successfully.""" 38 | health = ModelHandler.get_model() 39 | 40 | status = 200 if health else 404 41 | return flask.Response(response="\n", status=status, mimetype="application/json") 42 | 43 | 44 | @app.route("/invocations", methods=["POST"]) 45 | def transformation(): 46 | """Do an inference on a single batch of data. In this sample server, we take data as CSV, convert 47 | it to a pandas data frame for internal use and then convert the predictions back to CSV (which really 48 | just means one prediction per line, since there's a single column. 49 | """ 50 | data = None 51 | 52 | # Convert from CSV to pandas 53 | if flask.request.content_type == "text/csv": 54 | data = flask.request.data.decode("utf-8") 55 | s = StringIO(data) 56 | data = pd.read_csv(s, header=None) 57 | else: 58 | return flask.Response( 59 | response="This predictor only supports CSV data", 60 | status=415, 61 | mimetype="text/plain", 62 | ) 63 | 64 | print("Invoked with {} records".format(data.shape[0])) 65 | 66 | # Do the prediction 67 | predictions = ModelHandler.predict(data) 68 | 69 | # Convert from numpy back to CSV 70 | out = StringIO() 71 | pd.Series(predictions).to_csv(out, header=False, index=False) 72 | result = out.getvalue() 73 | 74 | return flask.Response(response=result, status=200, mimetype="text/csv") 75 | 76 | -------------------------------------------------------------------------------- /deployement/publish/algorithm_validation_specification.py: -------------------------------------------------------------------------------- 1 | import json 2 | 3 | class AlgorithmValidationSpecification: 4 | template = """ 5 | { 6 | "ValidationSpecification": { 7 | "ValidationRole": "ROLE_REPLACE_ME", 8 | "ValidationProfiles": [ 9 | { 10 | "ProfileName": "ValidationProfile1", 11 | "TrainingJobDefinition": { 12 | "TrainingInputMode": "File", 13 | "HyperParameters": {}, 14 | "InputDataConfig": [ 15 | { 16 | "ChannelName": "CHANNEL_NAME_REPLACE_ME", 17 | "DataSource": { 18 | "S3DataSource": { 19 | "S3DataType": "S3Prefix", 20 | "S3Uri": "TRAIN_S3_INPUT_REPLACE_ME", 21 | "S3DataDistributionType": "FullyReplicated" 22 | } 23 | }, 24 | "ContentType": "CONTENT_TYPE_REPLACE_ME", 25 | "CompressionType": "None", 26 | "RecordWrapperType": "None" 27 | } 28 | ], 29 | "OutputDataConfig": { 30 | "KmsKeyId": "", 31 | "S3OutputPath": "VALIDATION_S3_OUTPUT_REPLACE_ME/training-output" 32 | }, 33 | "ResourceConfig": { 34 | "InstanceType": "INSTANCE_TYPE_REPLACE_ME", 35 | "InstanceCount": 1, 36 | "VolumeSizeInGB": 10, 37 | "VolumeKmsKeyId": "" 38 | }, 39 | "StoppingCondition": { 40 | "MaxRuntimeInSeconds": 1800 41 | } 42 | }, 43 | "TransformJobDefinition": { 44 | "MaxConcurrentTransforms": 1, 45 | "MaxPayloadInMB": 6, 46 | "TransformInput": { 
47 | "DataSource": { 48 | "S3DataSource": { 49 | "S3DataType": "S3Prefix", 50 | "S3Uri": "BATCH_S3_INPUT_REPLACE_ME" 51 | } 52 | }, 53 | "ContentType": "CONTENT_TYPE_REPLACE_ME", 54 | "CompressionType": "None", 55 | "SplitType": "Line" 56 | }, 57 | "TransformOutput": { 58 | "S3OutputPath": "VALIDATION_S3_OUTPUT_REPLACE_ME/batch-transform-output", 59 | "Accept": "CONTENT_TYPE_REPLACE_ME", 60 | "AssembleWith": "Line", 61 | "KmsKeyId": "" 62 | }, 63 | "TransformResources": { 64 | "InstanceType": "INSTANCE_TYPE_REPLACE_ME", 65 | "InstanceCount": 1 66 | } 67 | } 68 | } 69 | ] 70 | } 71 | } 72 | """ 73 | 74 | def get_algo_validation_specification_dict(self, validation_role, training_channel_name, training_input, batch_transform_input, content_type, instance_type, output_s3_location): 75 | return json.loads(self.get_algo_validation_specification_json(validation_role, training_channel_name, training_input, batch_transform_input, content_type, instance_type, output_s3_location)) 76 | 77 | def get_algo_validation_specification_json(self, validation_role, training_channel_name, training_input, batch_transform_input, content_type, instance_type, output_s3_location): 78 | 79 | return self.template.replace("ROLE_REPLACE_ME", validation_role)\ 80 | .replace("CHANNEL_NAME_REPLACE_ME", training_channel_name)\ 81 | .replace("TRAIN_S3_INPUT_REPLACE_ME", training_input)\ 82 | .replace("BATCH_S3_INPUT_REPLACE_ME", batch_transform_input)\ 83 | .replace("CONTENT_TYPE_REPLACE_ME", content_type)\ 84 | .replace("INSTANCE_TYPE_REPLACE_ME", instance_type)\ 85 | .replace("VALIDATION_S3_OUTPUT_REPLACE_ME", output_s3_location) 86 | -------------------------------------------------------------------------------- /deployement/publish/inference_specification.py: -------------------------------------------------------------------------------- 1 | import json 2 | 3 | class InferenceSpecification: 4 | 5 | template = """ 6 | { 7 | "InferenceSpecification": { 8 | "Containers" : [{"Image": "IMAGE_REPLACE_ME"}], 9 | "SupportedTransformInstanceTypes": INSTANCES_REPLACE_ME, 10 | "SupportedRealtimeInferenceInstanceTypes": INSTANCES_REPLACE_ME, 11 | "SupportedContentTypes": CONTENT_TYPES_REPLACE_ME, 12 | "SupportedResponseMIMETypes": RESPONSE_MIME_TYPES_REPLACE_ME 13 | } 14 | } 15 | """ 16 | 17 | def get_inference_specification_dict(self, ecr_image, supports_gpu, supported_content_types=None, supported_mime_types=None): 18 | return json.loads(self.get_inference_specification_json(ecr_image, supports_gpu, supported_content_types, supported_mime_types)) 19 | 20 | def get_inference_specification_json(self, ecr_image, supports_gpu, supported_content_types=None, supported_mime_types=None): 21 | if supported_mime_types is None: 22 | supported_mime_types = [] 23 | if supported_content_types is None: 24 | supported_content_types = [] 25 | return self.template.replace("IMAGE_REPLACE_ME", ecr_image) \ 26 | .replace("INSTANCES_REPLACE_ME", self.get_supported_instances(supports_gpu)) \ 27 | .replace("CONTENT_TYPES_REPLACE_ME", json.dumps(supported_content_types)) \ 28 | .replace("RESPONSE_MIME_TYPES_REPLACE_ME", json.dumps(supported_mime_types)) \ 29 | 30 | def get_supported_instances(self, supports_gpu): 31 | cpu_list = ["ml.m4.xlarge","ml.m4.2xlarge","ml.m4.4xlarge","ml.m4.10xlarge","ml.m4.16xlarge","ml.m5.large","ml.m5.xlarge","ml.m5.2xlarge","ml.m5.4xlarge","ml.m5.12xlarge","ml.m5.24xlarge","ml.c4.xlarge","ml.c4.2xlarge","ml.c4.4xlarge","ml.c4.8xlarge","ml.c5.xlarge","ml.c5.2xlarge","ml.c5.4xlarge","ml.c5.9xlarge","ml.c5.18xlarge"] 32 | 
gpu_list = ["ml.p2.xlarge", "ml.p2.8xlarge", "ml.p2.16xlarge", "ml.p3.2xlarge", "ml.p3.8xlarge", "ml.p3.16xlarge"] 33 | 34 | list_to_return = cpu_list 35 | 36 | if supports_gpu: 37 | list_to_return = cpu_list + gpu_list 38 | 39 | return json.dumps(list_to_return) -------------------------------------------------------------------------------- /deployement/publish/metric_definitions.py: -------------------------------------------------------------------------------- 1 | class MetricDefinitions: 2 | def __init__(self, name, regex): 3 | self.Name = name 4 | self.Regex = regex 5 | 6 | 7 | -------------------------------------------------------------------------------- /deployement/publish/modelpackage_validation_specification.py: -------------------------------------------------------------------------------- 1 | import json 2 | 3 | class ModelPackageValidationSpecification: 4 | template = """ 5 | { 6 | "ValidationSpecification": { 7 | "ValidationRole": "ROLE_REPLACE_ME", 8 | "ValidationProfiles": [ 9 | { 10 | "ProfileName": "ValidationProfile1", 11 | "TransformJobDefinition": { 12 | "MaxConcurrentTransforms": 1, 13 | "MaxPayloadInMB": 6, 14 | "TransformInput": { 15 | "DataSource": { 16 | "S3DataSource": { 17 | "S3DataType": "S3Prefix", 18 | "S3Uri": "BATCH_S3_INPUT_REPLACE_ME" 19 | } 20 | }, 21 | "ContentType": "CONTENT_TYPE_REPLACE_ME", 22 | "CompressionType": "None", 23 | "SplitType": "Line" 24 | }, 25 | "TransformOutput": { 26 | "S3OutputPath": "VALIDATION_S3_OUTPUT_REPLACE_ME/batch-transform-output", 27 | "Accept": "CONTENT_TYPE_REPLACE_ME", 28 | "AssembleWith": "Line", 29 | "KmsKeyId": "" 30 | }, 31 | "TransformResources": { 32 | "InstanceType": "INSTANCE_TYPE_REPLACE_ME", 33 | "InstanceCount": 1 34 | } 35 | } 36 | } 37 | ] 38 | } 39 | } 40 | """ 41 | 42 | def get_validation_specification_dict(self, validation_role, batch_transform_input, content_type, instance_type, output_s3_location): 43 | return json.loads(self.get_validation_specification_json(validation_role, batch_transform_input, content_type, instance_type, output_s3_location)) 44 | 45 | def get_validation_specification_json(self, validation_role, batch_transform_input, content_type, instance_type, output_s3_location): 46 | 47 | return self.template.replace("ROLE_REPLACE_ME", validation_role)\ 48 | .replace("BATCH_S3_INPUT_REPLACE_ME", batch_transform_input)\ 49 | .replace("CONTENT_TYPE_REPLACE_ME", content_type)\ 50 | .replace("INSTANCE_TYPE_REPLACE_ME", instance_type)\ 51 | .replace("VALIDATION_S3_OUTPUT_REPLACE_ME", output_s3_location) 52 | -------------------------------------------------------------------------------- /deployement/publish/training_channels.py: -------------------------------------------------------------------------------- 1 | import json 2 | 3 | class TrainingChannels: 4 | 5 | def __init__(self, name, description, supported_content_types = [], is_required=True, supported_compression_types = ["None"], supported_input_modes = ["File"]): 6 | self.Name = name 7 | self.Description = description 8 | self.IsRequired = is_required 9 | self.SupportedContentTypes = supported_content_types 10 | self.SupportedCompressionTypes = supported_compression_types 11 | self.SupportedInputModes = supported_input_modes 12 | 13 | def to_json(self): 14 | return json.dumps(self.__dict__, indent=2, sort_keys=True) -------------------------------------------------------------------------------- /deployement/publish/training_specification.py: -------------------------------------------------------------------------------- 1 | import 
json 2 | 3 | class TrainingSpecification: 4 | 5 | template = """ 6 | { 7 | "TrainingSpecification": { 8 | "TrainingImage": "IMAGE_REPLACE_ME", 9 | "SupportedHyperParameters": [ 10 | { 11 | "Description": "Grow a tree with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If None then unlimited number of leaf nodes", 12 | "Name": "max_leaf_nodes", 13 | "Type": "Integer", 14 | "Range": { 15 | "IntegerParameterRangeSpecification": { 16 | "MinValue": "1", 17 | "MaxValue": "100000" 18 | } 19 | }, 20 | "IsTunable": true, 21 | "IsRequired": false, 22 | "DefaultValue": "100" 23 | } 24 | ], 25 | "SupportedTrainingInstanceTypes": INSTANCES_REPLACE_ME, 26 | "SupportsDistributedTraining": false, 27 | "MetricDefinitions": METRICS_REPLACE_ME, 28 | "TrainingChannels": CHANNELS_REPLACE_ME, 29 | "SupportedTuningJobObjectiveMetrics": TUNING_OBJECTIVES_REPLACE_ME 30 | } 31 | } 32 | """ 33 | 34 | def get_training_specification_dict(self, ecr_image, supports_gpu, supported_channels=None, supported_metrics=None, supported_tuning_job_objective_metrics=None): 35 | return json.loads(self.get_training_specification_json(ecr_image, supports_gpu, supported_channels, supported_metrics, supported_tuning_job_objective_metrics)) 36 | 37 | def get_training_specification_json(self, ecr_image, supports_gpu, supported_channels=None, supported_metrics=None, supported_tuning_job_objective_metrics=None): 38 | if supported_channels is None: 39 | print("Please provide at least one supported channel") 40 | raise ValueError("Please provide at least one supported channel") 41 | 42 | if supported_metrics is None: 43 | supported_metrics = [] 44 | if supported_tuning_job_objective_metrics is None: 45 | supported_tuning_job_objective_metrics = [] 46 | 47 | return self.template.replace("IMAGE_REPLACE_ME", ecr_image) \ 48 | .replace("INSTANCES_REPLACE_ME", self.get_supported_instances(supports_gpu)) \ 49 | .replace("CHANNELS_REPLACE_ME", json.dumps([ob.__dict__ for ob in supported_channels], indent=4, sort_keys=True)) \ 50 | .replace("METRICS_REPLACE_ME", json.dumps([ob.__dict__ for ob in supported_metrics], indent=4, sort_keys=True)) \ 51 | .replace("TUNING_OBJECTIVES_REPLACE_ME", json.dumps([ob.__dict__ for ob in supported_tuning_job_objective_metrics], indent=4, sort_keys=True)) 52 | 53 | @staticmethod 54 | def get_supported_instances(supports_gpu): 55 | cpu_list = ["ml.m4.xlarge","ml.m4.2xlarge","ml.m4.4xlarge","ml.m4.10xlarge","ml.m4.16xlarge","ml.m5.large","ml.m5.xlarge","ml.m5.2xlarge","ml.m5.4xlarge","ml.m5.12xlarge","ml.m5.24xlarge","ml.c4.xlarge","ml.c4.2xlarge","ml.c4.4xlarge","ml.c4.8xlarge","ml.c5.xlarge","ml.c5.2xlarge","ml.c5.4xlarge","ml.c5.9xlarge","ml.c5.18xlarge"] 56 | gpu_list = ["ml.p2.xlarge", "ml.p2.8xlarge", "ml.p2.16xlarge", "ml.p3.2xlarge", "ml.p3.8xlarge", "ml.p3.16xlarge"] 57 | 58 | list_to_return = cpu_list 59 | 60 | if supports_gpu: 61 | list_to_return = cpu_list + gpu_list 62 | 63 | return json.dumps(list_to_return) 64 | -------------------------------------------------------------------------------- /deployement/publish/tuning_objectives.py: -------------------------------------------------------------------------------- 1 | class TuningObjectives: 2 | def __init__(self, objectiveType, metricName): 3 | self.Type = objectiveType 4 | self.MetricName = metricName -------------------------------------------------------------------------------- /deployement/publish/validation_specification.py: 
-------------------------------------------------------------------------------- 1 | import json 2 | 3 | class ValidationSpecification: 4 | template = """ 5 | { 6 | "ValidationSpecification": { 7 | "ValidationRole": "ROLE_REPLACE_ME", 8 | "ValidationProfiles": [ 9 | { 10 | "ProfileName": "ValidationProfile1", 11 | "TrainingJobDefinition": { 12 | "TrainingInputMode": "File", 13 | "HyperParameters": {}, 14 | "InputDataConfig": [ 15 | { 16 | "ChannelName": "CHANNEL_NAME_REPLACE_ME", 17 | "DataSource": { 18 | "S3DataSource": { 19 | "S3DataType": "S3Prefix", 20 | "S3Uri": "TRAIN_S3_INPUT_REPLACE_ME", 21 | "S3DataDistributionType": "FullyReplicated" 22 | } 23 | }, 24 | "ContentType": "CONTENT_TYPE_REPLACE_ME", 25 | "CompressionType": "None", 26 | "RecordWrapperType": "None" 27 | } 28 | ], 29 | "OutputDataConfig": { 30 | "KmsKeyId": "", 31 | "S3OutputPath": "VALIDATION_S3_OUTPUT_REPLACE_ME/training-output" 32 | }, 33 | "ResourceConfig": { 34 | "InstanceType": "INSTANCE_TYPE_REPLACE_ME", 35 | "InstanceCount": 1, 36 | "VolumeSizeInGB": 10, 37 | "VolumeKmsKeyId": "" 38 | }, 39 | "StoppingCondition": { 40 | "MaxRuntimeInSeconds": 3600 41 | } 42 | }, 43 | "TransformJobDefinition": { 44 | "MaxConcurrentTransforms": 1, 45 | "MaxPayloadInMB": 6, 46 | "TransformInput": { 47 | "DataSource": { 48 | "S3DataSource": { 49 | "S3DataType": "S3Prefix", 50 | "S3Uri": "BATCH_S3_INPUT_REPLACE_ME" 51 | } 52 | }, 53 | "ContentType": "CONTENT_TYPE_REPLACE_ME", 54 | "CompressionType": "None", 55 | "SplitType": "Line" 56 | }, 57 | "TransformOutput": { 58 | "S3OutputPath": "VALIDATION_S3_OUTPUT_REPLACE_ME/batch-transform-output", 59 | "Accept": "CONTENT_TYPE_REPLACE_ME", 60 | "AssembleWith": "Line", 61 | "KmsKeyId": "" 62 | }, 63 | "TransformResources": { 64 | "InstanceType": "INSTANCE_TYPE_REPLACE_ME", 65 | "InstanceCount": 1 66 | } 67 | } 68 | } 69 | ], 70 | "ValidationOutputS3Prefix": "VALIDATION_S3_OUTPUT_REPLACE_ME/validation-output", 71 | "ValidateForMarketplace": true 72 | } 73 | } 74 | """ 75 | 76 | def get_validation_specification_dict(self, validation_role, training_channel_name, training_input, batch_transform_input, content_type, instance_type, output_s3_location): 77 | return json.loads(self.get_validation_specification_json(validation_role, training_channel_name, training_input, batch_transform_input, content_type, instance_type, output_s3_location)) 78 | 79 | def get_validation_specification_json(self, validation_role, training_channel_name, training_input, batch_transform_input, content_type, instance_type, output_s3_location): 80 | 81 | return self.template.replace("ROLE_REPLACE_ME", validation_role)\ 82 | .replace("CHANNEL_NAME_REPLACE_ME", training_channel_name)\ 83 | .replace("TRAIN_S3_INPUT_REPLACE_ME", training_input)\ 84 | .replace("BATCH_S3_INPUT_REPLACE_ME", batch_transform_input)\ 85 | .replace("CONTENT_TYPE_REPLACE_ME", content_type)\ 86 | .replace("INSTANCE_TYPE_REPLACE_ME", instance_type)\ 87 | .replace("VALIDATION_S3_OUTPUT_REPLACE_ME", output_s3_location) 88 | -------------------------------------------------------------------------------- /deployement/serve: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | # This file implements the scoring service shell. You don't necessarily need to modify it for various 4 | # algorithms. It starts nginx and gunicorn with the correct configurations and then simply waits until 5 | # gunicorn exits. 
6 | # 7 | # The flask server is specified to be the app object in wsgi.py 8 | # 9 | # We set the following parameters: 10 | # 11 | # Parameter Environment Variable Default Value 12 | # --------- -------------------- ------------- 13 | # number of workers MODEL_SERVER_WORKERS the number of CPU cores 14 | # timeout MODEL_SERVER_TIMEOUT 60 seconds 15 | 16 | from __future__ import print_function 17 | import multiprocessing 18 | import os 19 | import signal 20 | import subprocess 21 | import sys 22 | 23 | cpu_count = multiprocessing.cpu_count() 24 | 25 | model_server_timeout = os.environ.get('MODEL_SERVER_TIMEOUT', 60) 26 | model_server_workers = int(os.environ.get('MODEL_SERVER_WORKERS', cpu_count)) 27 | 28 | def sigterm_handler(nginx_pid, gunicorn_pid): 29 | try: 30 | os.kill(nginx_pid, signal.SIGQUIT) 31 | except OSError: 32 | pass 33 | try: 34 | os.kill(gunicorn_pid, signal.SIGTERM) 35 | except OSError: 36 | pass 37 | 38 | sys.exit(0) 39 | 40 | def start_server(): 41 | print('Starting the inference server with {} workers.'.format(model_server_workers)) 42 | 43 | 44 | # link the log streams to stdout/err so they will be logged to the container logs 45 | subprocess.check_call(['ln', '-sf', '/dev/stdout', '/var/log/nginx/access.log']) 46 | subprocess.check_call(['ln', '-sf', '/dev/stderr', '/var/log/nginx/error.log']) 47 | 48 | nginx = subprocess.Popen(['nginx', '-c', '/opt/program/nginx.conf']) 49 | gunicorn = subprocess.Popen(['gunicorn', 50 | '--timeout', str(model_server_timeout), 51 | '-k', 'gevent', 52 | '-b', 'unix:/tmp/gunicorn.sock', 53 | '-w', str(model_server_workers), 54 | 'wsgi:app']) 55 | 56 | signal.signal(signal.SIGTERM, lambda a, b: sigterm_handler(nginx.pid, gunicorn.pid)) 57 | 58 | # If either subprocess exits, so do we. 59 | pids = set([nginx.pid, gunicorn.pid]) 60 | while True: 61 | pid, _ = os.wait() 62 | if pid in pids: 63 | break 64 | 65 | sigterm_handler(nginx.pid, gunicorn.pid) 66 | print('Inference server exiting') 67 | 68 | # The main routine just invokes the start function. 69 | 70 | if __name__ == '__main__': 71 | start_server() 72 | -------------------------------------------------------------------------------- /deployement/wsgi.py: -------------------------------------------------------------------------------- 1 | import predictor as myapp 2 | 3 | # This is just a simple wrapper for gunicorn to find your app. 4 | # If you want to change the algorithm file, simply change "predictor" above to the 5 | # new file. 
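# For reference, the `serve` script launches gunicorn against this module roughly as:
#   gunicorn --timeout <MODEL_SERVER_TIMEOUT> -k gevent -b unix:/tmp/gunicorn.sock -w <MODEL_SERVER_WORKERS> wsgi:app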
6 | 7 | app = myapp.app 8 | -------------------------------------------------------------------------------- /local_test/explore_local.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | image=$1 4 | 5 | docker run -it -v $(pwd)/test_dir:/opt/ml -p 8080:8080 --rm ${image} bash 6 | -------------------------------------------------------------------------------- /local_test/predict.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | payload=$1 4 | content=${2:-text/csv} 5 | 6 | curl --data-binary @${payload} -H "Content-Type: ${content}" -v http://localhost:8080/invocations 7 | -------------------------------------------------------------------------------- /local_test/serve_local.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | image=$1 4 | 5 | docker run -v $(pwd)/test_dir:/opt/ml -p 8080:8080 --rm ${image} serve 6 | -------------------------------------------------------------------------------- /local_test/test_dir/input/config/hyperparameters.json: -------------------------------------------------------------------------------- 1 | {} 2 | -------------------------------------------------------------------------------- /local_test/test_dir/input/config/resourceConfig.json: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tchaton/sagemaker-pytorch-boilerplate/0f64d71a8080d1a457ec1b4586951796b2da8521/local_test/test_dir/input/config/resourceConfig.json -------------------------------------------------------------------------------- /local_test/test_dir/model/trainer.ckpt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tchaton/sagemaker-pytorch-boilerplate/0f64d71a8080d1a457ec1b4586951796b2da8521/local_test/test_dir/model/trainer.ckpt -------------------------------------------------------------------------------- /local_test/train_local.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | IMAGE=$1 4 | ARGS=(${@: 2}) 5 | echo $ARGS 6 | 7 | rm -r test_dir/model 8 | rm -r test_dir/output 9 | 10 | mkdir -p test_dir/model 11 | mkdir -p test_dir/output 12 | 13 | docker run -v $(pwd)/test_dir:/opt/ml --rm ${IMAGE} train ${ARGS} 14 | -------------------------------------------------------------------------------- /poetry.lock: -------------------------------------------------------------------------------- 1 | [[package]] 2 | category = "main" 3 | description = "Abseil Python Common Libraries, see https://github.com/abseil/abseil-py." 4 | name = "absl-py" 5 | optional = false 6 | python-versions = "*" 7 | version = "0.9.0" 8 | 9 | [package.dependencies] 10 | six = "*" 11 | 12 | [[package]] 13 | category = "main" 14 | description = "ANTLR 4.8 runtime for Python 3.7" 15 | name = "antlr4-python3-runtime" 16 | optional = false 17 | python-versions = "*" 18 | version = "4.8" 19 | 20 | [[package]] 21 | category = "main" 22 | description = "Extensible memoizing collections and decorators" 23 | name = "cachetools" 24 | optional = false 25 | python-versions = "~=3.5" 26 | version = "4.1.1" 27 | 28 | [[package]] 29 | category = "main" 30 | description = "Python package for providing Mozilla's CA Bundle." 
31 | name = "certifi" 32 | optional = false 33 | python-versions = "*" 34 | version = "2020.6.20" 35 | 36 | [[package]] 37 | category = "main" 38 | description = "Foreign Function Interface for Python calling C code." 39 | marker = "platform_python_implementation == \"CPython\" and sys_platform == \"win32\"" 40 | name = "cffi" 41 | optional = false 42 | python-versions = "*" 43 | version = "1.14.2" 44 | 45 | [package.dependencies] 46 | pycparser = "*" 47 | 48 | [[package]] 49 | category = "main" 50 | description = "Universal encoding detector for Python 2 and 3" 51 | name = "chardet" 52 | optional = false 53 | python-versions = "*" 54 | version = "3.0.4" 55 | 56 | [[package]] 57 | category = "main" 58 | description = "Composable command line interface toolkit" 59 | name = "click" 60 | optional = false 61 | python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" 62 | version = "7.1.2" 63 | 64 | [[package]] 65 | category = "main" 66 | description = "enum/enum34 compatibility package" 67 | name = "enum-compat" 68 | optional = false 69 | python-versions = "*" 70 | version = "0.0.3" 71 | 72 | [[package]] 73 | category = "main" 74 | description = "A simple framework for building complex web applications." 75 | name = "flask" 76 | optional = false 77 | python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" 78 | version = "1.1.2" 79 | 80 | [package.dependencies] 81 | Jinja2 = ">=2.10.1" 82 | Werkzeug = ">=0.15" 83 | click = ">=5.1" 84 | itsdangerous = ">=0.24" 85 | 86 | [package.extras] 87 | dev = ["pytest", "coverage", "tox", "sphinx", "pallets-sphinx-themes", "sphinxcontrib-log-cabinet", "sphinx-issues"] 88 | docs = ["sphinx", "pallets-sphinx-themes", "sphinxcontrib-log-cabinet", "sphinx-issues"] 89 | dotenv = ["python-dotenv"] 90 | 91 | [[package]] 92 | category = "main" 93 | description = "Clean single-source support for Python 3 and 2" 94 | name = "future" 95 | optional = false 96 | python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" 97 | version = "0.18.2" 98 | 99 | [[package]] 100 | category = "main" 101 | description = "Coroutine-based network library" 102 | name = "gevent" 103 | optional = false 104 | python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*" 105 | version = "20.6.2" 106 | 107 | [package.dependencies] 108 | cffi = ">=1.12.2" 109 | greenlet = ">=0.4.16" 110 | setuptools = "*" 111 | "zope.event" = "*" 112 | "zope.interface" = "*" 113 | 114 | [package.extras] 115 | dnspython = ["dnspython (>=1.16.0)", "idna"] 116 | docs = ["repoze.sphinx.autointerface", "sphinxcontrib-programoutput"] 117 | monitor = ["psutil (>=5.7.0)"] 118 | recommended = ["dnspython (>=1.16.0)", "idna", "cffi (>=1.12.2)", "selectors2", "backports.socketpair", "psutil (>=5.7.0)"] 119 | test = ["dnspython (>=1.16.0)", "idna", "requests", "objgraph", "cffi (>=1.12.2)", "selectors2", "futures", "mock", "backports.socketpair", "contextvars (2.4)", "coverage (<5.0)", "coveralls (>=1.7.0)", "psutil (>=5.7.0)"] 120 | 121 | [[package]] 122 | category = "main" 123 | description = "Google Authentication Library" 124 | name = "google-auth" 125 | optional = false 126 | python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*" 127 | version = "1.20.1" 128 | 129 | [package.dependencies] 130 | cachetools = ">=2.0.0,<5.0" 131 | pyasn1-modules = ">=0.2.1" 132 | setuptools = ">=40.3.0" 133 | six = ">=1.9.0" 134 | 135 | [package.dependencies.rsa] 136 | python = ">=3.5" 137 | version = ">=3.1.4,<5" 138 | 139 | [[package]] 140 | category = "main" 141 | description = "Google Authentication 
Library" 142 | name = "google-auth-oauthlib" 143 | optional = false 144 | python-versions = "*" 145 | version = "0.4.1" 146 | 147 | [package.dependencies] 148 | google-auth = "*" 149 | requests-oauthlib = ">=0.7.0" 150 | 151 | [package.extras] 152 | tool = ["click"] 153 | 154 | [[package]] 155 | category = "main" 156 | description = "Lightweight in-process concurrent programming" 157 | marker = "platform_python_implementation == \"CPython\"" 158 | name = "greenlet" 159 | optional = false 160 | python-versions = "*" 161 | version = "0.4.16" 162 | 163 | [[package]] 164 | category = "main" 165 | description = "HTTP/2-based RPC framework" 166 | name = "grpcio" 167 | optional = false 168 | python-versions = "*" 169 | version = "1.31.0" 170 | 171 | [package.dependencies] 172 | six = ">=1.5.2" 173 | 174 | [package.extras] 175 | protobuf = ["grpcio-tools (>=1.31.0)"] 176 | 177 | [[package]] 178 | category = "main" 179 | description = "WSGI HTTP Server for UNIX" 180 | name = "gunicorn" 181 | optional = false 182 | python-versions = ">=3.4" 183 | version = "20.0.4" 184 | 185 | [package.dependencies] 186 | setuptools = ">=3.0" 187 | 188 | [package.extras] 189 | eventlet = ["eventlet (>=0.9.7)"] 190 | gevent = ["gevent (>=0.13)"] 191 | setproctitle = ["setproctitle"] 192 | tornado = ["tornado (>=0.2)"] 193 | 194 | [[package]] 195 | category = "main" 196 | description = "A framework for elegantly configuring complex applications" 197 | name = "hydra-core" 198 | optional = false 199 | python-versions = "*" 200 | version = "1.0.0rc2" 201 | 202 | [package.dependencies] 203 | antlr4-python3-runtime = "4.8" 204 | omegaconf = ">=2.0.1rc11" 205 | 206 | [package.dependencies.importlib-resources] 207 | python = "<3.9" 208 | version = "*" 209 | 210 | [[package]] 211 | category = "main" 212 | description = "Internationalized Domain Names in Applications (IDNA)" 213 | name = "idna" 214 | optional = false 215 | python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" 216 | version = "2.10" 217 | 218 | [[package]] 219 | category = "main" 220 | description = "Read metadata from Python packages" 221 | marker = "python_version < \"3.8\"" 222 | name = "importlib-metadata" 223 | optional = false 224 | python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7" 225 | version = "1.7.0" 226 | 227 | [package.dependencies] 228 | zipp = ">=0.5" 229 | 230 | [package.extras] 231 | docs = ["sphinx", "rst.linker"] 232 | testing = ["packaging", "pep517", "importlib-resources (>=1.3)"] 233 | 234 | [[package]] 235 | category = "main" 236 | description = "Read resources from Python packages" 237 | marker = "python_version < \"3.9\"" 238 | name = "importlib-resources" 239 | optional = false 240 | python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7" 241 | version = "3.0.0" 242 | 243 | [package.dependencies] 244 | [package.dependencies.zipp] 245 | python = "<3.8" 246 | version = ">=0.4" 247 | 248 | [package.extras] 249 | docs = ["sphinx", "rst.linker", "jaraco.packaging"] 250 | 251 | [[package]] 252 | category = "main" 253 | description = "Various helpers to pass data to untrusted environments and back." 254 | name = "itsdangerous" 255 | optional = false 256 | python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" 257 | version = "1.1.0" 258 | 259 | [[package]] 260 | category = "main" 261 | description = "A very fast and expressive template engine." 
262 | name = "jinja2" 263 | optional = false 264 | python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" 265 | version = "2.11.2" 266 | 267 | [package.dependencies] 268 | MarkupSafe = ">=0.23" 269 | 270 | [package.extras] 271 | i18n = ["Babel (>=0.8)"] 272 | 273 | [[package]] 274 | category = "main" 275 | description = "Lightweight pipelining: using Python functions as pipeline jobs." 276 | name = "joblib" 277 | optional = false 278 | python-versions = ">=3.6" 279 | version = "0.16.0" 280 | 281 | [[package]] 282 | category = "main" 283 | description = "Python implementation of Markdown." 284 | name = "markdown" 285 | optional = false 286 | python-versions = ">=3.5" 287 | version = "3.2.2" 288 | 289 | [package.dependencies] 290 | [package.dependencies.importlib-metadata] 291 | python = "<3.8" 292 | version = "*" 293 | 294 | [package.extras] 295 | testing = ["coverage", "pyyaml"] 296 | 297 | [[package]] 298 | category = "main" 299 | description = "Safely add untrusted strings to HTML/XML markup." 300 | name = "markupsafe" 301 | optional = false 302 | python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*" 303 | version = "1.1.1" 304 | 305 | [[package]] 306 | category = "main" 307 | description = "NumPy is the fundamental package for array computing with Python." 308 | name = "numpy" 309 | optional = false 310 | python-versions = ">=3.6" 311 | version = "1.19.1" 312 | 313 | [[package]] 314 | category = "main" 315 | description = "A generic, spec-compliant, thorough implementation of the OAuth request-signing logic" 316 | name = "oauthlib" 317 | optional = false 318 | python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" 319 | version = "3.1.0" 320 | 321 | [package.extras] 322 | rsa = ["cryptography"] 323 | signals = ["blinker"] 324 | signedtoken = ["cryptography", "pyjwt (>=1.0.0)"] 325 | 326 | [[package]] 327 | category = "main" 328 | description = "A flexible configuration library" 329 | name = "omegaconf" 330 | optional = false 331 | python-versions = ">=3.6" 332 | version = "2.0.1rc11" 333 | 334 | [package.dependencies] 335 | PyYAML = ">=5.1" 336 | typing-extensions = "*" 337 | 338 | [[package]] 339 | category = "main" 340 | description = "Core utilities for Python packages" 341 | name = "packaging" 342 | optional = false 343 | python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" 344 | version = "20.4" 345 | 346 | [package.dependencies] 347 | pyparsing = ">=2.0.2" 348 | six = "*" 349 | 350 | [[package]] 351 | category = "main" 352 | description = "Powerful data structures for data analysis, time series, and statistics" 353 | name = "pandas" 354 | optional = false 355 | python-versions = ">=3.6.1" 356 | version = "1.1.0" 357 | 358 | [package.dependencies] 359 | numpy = ">=1.15.4" 360 | python-dateutil = ">=2.7.3" 361 | pytz = ">=2017.2" 362 | 363 | [package.extras] 364 | test = ["pytest (>=4.0.2)", "pytest-xdist", "hypothesis (>=3.58)"] 365 | 366 | [[package]] 367 | category = "main" 368 | description = "Python Imaging Library (Fork)" 369 | name = "pillow" 370 | optional = false 371 | python-versions = ">=3.5" 372 | version = "7.2.0" 373 | 374 | [[package]] 375 | category = "main" 376 | description = "Protocol Buffers" 377 | name = "protobuf" 378 | optional = false 379 | python-versions = "*" 380 | version = "3.13.0" 381 | 382 | [package.dependencies] 383 | setuptools = "*" 384 | six = ">=1.9" 385 | 386 | [[package]] 387 | category = "main" 388 | description = "Cross-platform lib for process and system monitoring in Python." 
389 | name = "psutil" 390 | optional = false 391 | python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" 392 | version = "5.7.2" 393 | 394 | [package.extras] 395 | test = ["ipaddress", "mock", "unittest2", "enum34", "pywin32", "wmi"] 396 | 397 | [[package]] 398 | category = "main" 399 | description = "ASN.1 types and codecs" 400 | name = "pyasn1" 401 | optional = false 402 | python-versions = "*" 403 | version = "0.4.8" 404 | 405 | [[package]] 406 | category = "main" 407 | description = "A collection of ASN.1-based protocols modules." 408 | name = "pyasn1-modules" 409 | optional = false 410 | python-versions = "*" 411 | version = "0.2.8" 412 | 413 | [package.dependencies] 414 | pyasn1 = ">=0.4.6,<0.5.0" 415 | 416 | [[package]] 417 | category = "main" 418 | description = "C parser in Python" 419 | marker = "platform_python_implementation == \"CPython\" and sys_platform == \"win32\"" 420 | name = "pycparser" 421 | optional = false 422 | python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" 423 | version = "2.20" 424 | 425 | [[package]] 426 | category = "main" 427 | description = "Python parsing module" 428 | name = "pyparsing" 429 | optional = false 430 | python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" 431 | version = "2.4.7" 432 | 433 | [[package]] 434 | category = "main" 435 | description = "Extensions to the standard Python datetime module" 436 | name = "python-dateutil" 437 | optional = false 438 | python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7" 439 | version = "2.8.1" 440 | 441 | [package.dependencies] 442 | six = ">=1.5" 443 | 444 | [[package]] 445 | category = "main" 446 | description = "PyTorch Lightning is the lightweight PyTorch wrapper for ML researchers. Scale your models. Write less boilerplate." 447 | name = "pytorch-lightning" 448 | optional = false 449 | python-versions = ">=3.6" 450 | version = "0.9.0rc15" 451 | 452 | [package.dependencies] 453 | PyYAML = ">=5.1" 454 | future = ">=0.17.1" 455 | numpy = ">=1.16.4" 456 | packaging = "*" 457 | tensorboard = "2.2.0" 458 | torch = ">=1.3" 459 | tqdm = ">=4.41.0" 460 | 461 | [package.extras] 462 | all = ["neptune-client (>=0.4.109)", "comet-ml (>=3.1.12)", "mlflow (>=1.0.0)", "test-tube (>=0.7.5)", "wandb (>=0.8.21)", "matplotlib (>=3.1.1)", "horovod (>=0.19.2)", "omegaconf (>=2.0.0)", "scikit-learn (>=0.22.2)", "torchtext (>=0.3.1,<0.7)", "onnx (>=1.7.0)", "onnxruntime (>=1.3.0)", "coverage", "codecov (>=2.1)", "pytest (>=3.0.5)", "pytest-cov", "pytest-flake8", "flake8", "flake8-black", "check-manifest", "twine (1.13.0)", "scikit-image", "black (19.10b0)", "pre-commit (>=1.0)", "cloudpickle (>=1.2)", "nltk (>=3.3)", "torchvision (>=0.4.0,<0.7)", "gym (>=0.17.0)", "sphinx (>=2.0,<3.0)", "recommonmark", "m2r", "nbsphinx", "pandoc", "docutils", "sphinxcontrib-fulltoc", "sphinxcontrib-mockautodoc", "sphinx-autodoc-typehints", "sphinx-paramlinks (<0.4.0)"] 463 | dev = ["neptune-client (>=0.4.109)", "comet-ml (>=3.1.12)", "mlflow (>=1.0.0)", "test-tube (>=0.7.5)", "wandb (>=0.8.21)", "matplotlib (>=3.1.1)", "horovod (>=0.19.2)", "omegaconf (>=2.0.0)", "scikit-learn (>=0.22.2)", "torchtext (>=0.3.1,<0.7)", "onnx (>=1.7.0)", "onnxruntime (>=1.3.0)", "coverage", "codecov (>=2.1)", "pytest (>=3.0.5)", "pytest-cov", "pytest-flake8", "flake8", "flake8-black", "check-manifest", "twine (1.13.0)", "scikit-image", "black (19.10b0)", "pre-commit (>=1.0)", "cloudpickle (>=1.2)", "nltk (>=3.3)"] 464 | docs = ["sphinx (>=2.0,<3.0)", "recommonmark", "m2r", "nbsphinx", "pandoc", "docutils", "sphinxcontrib-fulltoc", 
"sphinxcontrib-mockautodoc", "sphinx-autodoc-typehints", "sphinx-paramlinks (<0.4.0)"] 465 | examples = ["torchvision (>=0.4.0,<0.7)", "gym (>=0.17.0)"] 466 | extra = ["neptune-client (>=0.4.109)", "comet-ml (>=3.1.12)", "mlflow (>=1.0.0)", "test-tube (>=0.7.5)", "wandb (>=0.8.21)", "matplotlib (>=3.1.1)", "horovod (>=0.19.2)", "omegaconf (>=2.0.0)", "scikit-learn (>=0.22.2)", "torchtext (>=0.3.1,<0.7)", "onnx (>=1.7.0)", "onnxruntime (>=1.3.0)"] 467 | test = ["coverage", "codecov (>=2.1)", "pytest (>=3.0.5)", "pytest-cov", "pytest-flake8", "flake8", "flake8-black", "check-manifest", "twine (1.13.0)", "scikit-image", "black (19.10b0)", "pre-commit (>=1.0)", "cloudpickle (>=1.2)", "nltk (>=3.3)"] 468 | 469 | [[package]] 470 | category = "main" 471 | description = "World timezone definitions, modern and historical" 472 | name = "pytz" 473 | optional = false 474 | python-versions = "*" 475 | version = "2020.1" 476 | 477 | [[package]] 478 | category = "main" 479 | description = "YAML parser and emitter for Python" 480 | name = "pyyaml" 481 | optional = false 482 | python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" 483 | version = "5.3.1" 484 | 485 | [[package]] 486 | category = "main" 487 | description = "Python HTTP for Humans." 488 | name = "requests" 489 | optional = false 490 | python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" 491 | version = "2.24.0" 492 | 493 | [package.dependencies] 494 | certifi = ">=2017.4.17" 495 | chardet = ">=3.0.2,<4" 496 | idna = ">=2.5,<3" 497 | urllib3 = ">=1.21.1,<1.25.0 || >1.25.0,<1.25.1 || >1.25.1,<1.26" 498 | 499 | [package.extras] 500 | security = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)"] 501 | socks = ["PySocks (>=1.5.6,<1.5.7 || >1.5.7)", "win-inet-pton"] 502 | 503 | [[package]] 504 | category = "main" 505 | description = "OAuthlib authentication support for Requests." 
506 | name = "requests-oauthlib" 507 | optional = false 508 | python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" 509 | version = "1.3.0" 510 | 511 | [package.dependencies] 512 | oauthlib = ">=3.0.0" 513 | requests = ">=2.0.0" 514 | 515 | [package.extras] 516 | rsa = ["oauthlib (>=3.0.0)"] 517 | 518 | [[package]] 519 | category = "main" 520 | description = "Pure-Python RSA implementation" 521 | marker = "python_version >= \"3.5\"" 522 | name = "rsa" 523 | optional = false 524 | python-versions = ">=3.5, <4" 525 | version = "4.6" 526 | 527 | [package.dependencies] 528 | pyasn1 = ">=0.1.3" 529 | 530 | [[package]] 531 | category = "main" 532 | description = "A set of python modules for machine learning and data mining" 533 | name = "scikit-learn" 534 | optional = false 535 | python-versions = ">=3.6" 536 | version = "0.23.2" 537 | 538 | [package.dependencies] 539 | joblib = ">=0.11" 540 | numpy = ">=1.13.3" 541 | scipy = ">=0.19.1" 542 | threadpoolctl = ">=2.0.0" 543 | 544 | [package.extras] 545 | alldeps = ["numpy (>=1.13.3)", "scipy (>=0.19.1)"] 546 | 547 | [[package]] 548 | category = "main" 549 | description = "SciPy: Scientific Library for Python" 550 | name = "scipy" 551 | optional = false 552 | python-versions = ">=3.6" 553 | version = "1.5.2" 554 | 555 | [package.dependencies] 556 | numpy = ">=1.14.5" 557 | 558 | [[package]] 559 | category = "main" 560 | description = "Python 2 and 3 compatibility utilities" 561 | name = "six" 562 | optional = false 563 | python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*" 564 | version = "1.15.0" 565 | 566 | [[package]] 567 | category = "main" 568 | description = "TensorBoard lets you watch Tensors Flow" 569 | name = "tensorboard" 570 | optional = false 571 | python-versions = ">= 2.7, != 3.0.*, != 3.1.*" 572 | version = "2.2.0" 573 | 574 | [package.dependencies] 575 | absl-py = ">=0.4" 576 | google-auth = ">=1.6.3,<2" 577 | google-auth-oauthlib = ">=0.4.1,<0.5" 578 | grpcio = ">=1.24.3" 579 | markdown = ">=2.6.8" 580 | numpy = ">=1.12.0" 581 | protobuf = ">=3.6.0" 582 | requests = ">=2.21.0,<3" 583 | setuptools = ">=41.0.0" 584 | six = ">=1.10.0" 585 | tensorboard-plugin-wit = ">=1.6.0" 586 | werkzeug = ">=0.11.15" 587 | 588 | [package.dependencies.wheel] 589 | python = ">=3" 590 | version = ">=0.26" 591 | 592 | [[package]] 593 | category = "main" 594 | description = "What-If Tool TensorBoard plugin." 
595 | name = "tensorboard-plugin-wit" 596 | optional = false 597 | python-versions = "*" 598 | version = "1.7.0" 599 | 600 | [[package]] 601 | category = "main" 602 | description = "threadpoolctl" 603 | name = "threadpoolctl" 604 | optional = false 605 | python-versions = ">=3.5" 606 | version = "2.1.0" 607 | 608 | [[package]] 609 | category = "main" 610 | description = "Tensors and Dynamic neural networks in Python with strong GPU acceleration" 611 | name = "torch" 612 | optional = false 613 | python-versions = ">=3.6.1" 614 | version = "1.6.0" 615 | 616 | [package.dependencies] 617 | future = "*" 618 | numpy = "*" 619 | 620 | [[package]] 621 | category = "main" 622 | description = "Torch Model Archiver is used for creating archives of trained neural net models that can be consumed by TorchServe inference" 623 | name = "torch-model-archiver" 624 | optional = false 625 | python-versions = "*" 626 | version = "0.2.0" 627 | 628 | [package.dependencies] 629 | enum-compat = "*" 630 | future = "*" 631 | 632 | [[package]] 633 | category = "main" 634 | description = "TorchServe is a tool for serving neural net models for inference" 635 | name = "torchserve" 636 | optional = false 637 | python-versions = "*" 638 | version = "0.2.0" 639 | 640 | [package.dependencies] 641 | Pillow = "*" 642 | future = "*" 643 | packaging = "*" 644 | psutil = "*" 645 | 646 | [[package]] 647 | category = "main" 648 | description = "image and video datasets and models for torch deep learning" 649 | name = "torchvision" 650 | optional = false 651 | python-versions = "*" 652 | version = "0.7.0" 653 | 654 | [package.dependencies] 655 | numpy = "*" 656 | pillow = ">=4.1.1" 657 | torch = "1.6.0" 658 | 659 | [package.extras] 660 | scipy = ["scipy"] 661 | 662 | [[package]] 663 | category = "main" 664 | description = "Fast, Extensible Progress Meter" 665 | name = "tqdm" 666 | optional = false 667 | python-versions = ">=2.6, !=3.0.*, !=3.1.*" 668 | version = "4.48.2" 669 | 670 | [package.extras] 671 | dev = ["py-make (>=0.1.0)", "twine", "argopt", "pydoc-markdown"] 672 | 673 | [[package]] 674 | category = "main" 675 | description = "Backported and Experimental Type Hints for Python 3.5+" 676 | name = "typing-extensions" 677 | optional = false 678 | python-versions = "*" 679 | version = "3.7.4.2" 680 | 681 | [[package]] 682 | category = "main" 683 | description = "HTTP library with thread-safe connection pooling, file post, and more." 684 | name = "urllib3" 685 | optional = false 686 | python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4" 687 | version = "1.25.10" 688 | 689 | [package.extras] 690 | brotli = ["brotlipy (>=0.6.0)"] 691 | secure = ["certifi", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "pyOpenSSL (>=0.14)", "ipaddress"] 692 | socks = ["PySocks (>=1.5.6,<1.5.7 || >1.5.7,<2.0)"] 693 | 694 | [[package]] 695 | category = "main" 696 | description = "The comprehensive WSGI web application library." 
697 | name = "werkzeug" 698 | optional = false 699 | python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" 700 | version = "1.0.1" 701 | 702 | [package.extras] 703 | dev = ["pytest", "pytest-timeout", "coverage", "tox", "sphinx", "pallets-sphinx-themes", "sphinx-issues"] 704 | watchdog = ["watchdog"] 705 | 706 | [[package]] 707 | category = "main" 708 | description = "A built-package format for Python" 709 | marker = "python_version >= \"3\"" 710 | name = "wheel" 711 | optional = false 712 | python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7" 713 | version = "0.35.1" 714 | 715 | [package.extras] 716 | test = ["pytest (>=3.0.0)", "pytest-cov"] 717 | 718 | [[package]] 719 | category = "main" 720 | description = "Backport of pathlib-compatible object wrapper for zip files" 721 | marker = "python_version < \"3.8\"" 722 | name = "zipp" 723 | optional = false 724 | python-versions = ">=3.6" 725 | version = "3.1.0" 726 | 727 | [package.extras] 728 | docs = ["sphinx", "jaraco.packaging (>=3.2)", "rst.linker (>=1.9)"] 729 | testing = ["jaraco.itertools", "func-timeout"] 730 | 731 | [[package]] 732 | category = "main" 733 | description = "Very basic event publishing system" 734 | name = "zope.event" 735 | optional = false 736 | python-versions = "*" 737 | version = "4.4" 738 | 739 | [package.dependencies] 740 | setuptools = "*" 741 | 742 | [package.extras] 743 | docs = ["sphinx"] 744 | test = ["zope.testrunner"] 745 | 746 | [[package]] 747 | category = "main" 748 | description = "Interfaces for Python" 749 | name = "zope.interface" 750 | optional = false 751 | python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" 752 | version = "5.1.0" 753 | 754 | [package.dependencies] 755 | setuptools = "*" 756 | 757 | [package.extras] 758 | docs = ["sphinx", "repoze.sphinx.autointerface"] 759 | test = ["coverage (>=5.0.3)", "zope.event", "zope.testing"] 760 | testing = ["coverage (>=5.0.3)", "zope.event", "zope.testing"] 761 | 762 | [metadata] 763 | content-hash = "ed60a27783a7b3fa7855f91e36eeff46c8fef474eb7ee5f3da8641c8bd1c1a54" 764 | lock-version = "1.0" 765 | python-versions = "3.7.8" 766 | 767 | [metadata.files] 768 | absl-py = [ 769 | {file = "absl-py-0.9.0.tar.gz", hash = "sha256:75e737d6ce7723d9ff9b7aa1ba3233c34be62ef18d5859e706b8fdc828989830"}, 770 | ] 771 | antlr4-python3-runtime = [ 772 | {file = "antlr4-python3-runtime-4.8.tar.gz", hash = "sha256:15793f5d0512a372b4e7d2284058ad32ce7dd27126b105fb0b2245130445db33"}, 773 | ] 774 | cachetools = [ 775 | {file = "cachetools-4.1.1-py3-none-any.whl", hash = "sha256:513d4ff98dd27f85743a8dc0e92f55ddb1b49e060c2d5961512855cda2c01a98"}, 776 | {file = "cachetools-4.1.1.tar.gz", hash = "sha256:bbaa39c3dede00175df2dc2b03d0cf18dd2d32a7de7beb68072d13043c9edb20"}, 777 | ] 778 | certifi = [ 779 | {file = "certifi-2020.6.20-py2.py3-none-any.whl", hash = "sha256:8fc0819f1f30ba15bdb34cceffb9ef04d99f420f68eb75d901e9560b8749fc41"}, 780 | {file = "certifi-2020.6.20.tar.gz", hash = "sha256:5930595817496dd21bb8dc35dad090f1c2cd0adfaf21204bf6732ca5d8ee34d3"}, 781 | ] 782 | cffi = [ 783 | {file = "cffi-1.14.2-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:da9d3c506f43e220336433dffe643fbfa40096d408cb9b7f2477892f369d5f82"}, 784 | {file = "cffi-1.14.2-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:23e44937d7695c27c66a54d793dd4b45889a81b35c0751ba91040fe825ec59c4"}, 785 | {file = "cffi-1.14.2-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:0da50dcbccd7cb7e6c741ab7912b2eff48e85af217d72b57f80ebc616257125e"}, 786 | {file = 
"cffi-1.14.2-cp27-cp27m-win32.whl", hash = "sha256:76ada88d62eb24de7051c5157a1a78fd853cca9b91c0713c2e973e4196271d0c"}, 787 | {file = "cffi-1.14.2-cp27-cp27m-win_amd64.whl", hash = "sha256:15a5f59a4808f82d8ec7364cbace851df591c2d43bc76bcbe5c4543a7ddd1bf1"}, 788 | {file = "cffi-1.14.2-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:e4082d832e36e7f9b2278bc774886ca8207346b99f278e54c9de4834f17232f7"}, 789 | {file = "cffi-1.14.2-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:57214fa5430399dffd54f4be37b56fe22cedb2b98862550d43cc085fb698dc2c"}, 790 | {file = "cffi-1.14.2-cp35-cp35m-macosx_10_9_x86_64.whl", hash = "sha256:6843db0343e12e3f52cc58430ad559d850a53684f5b352540ca3f1bc56df0731"}, 791 | {file = "cffi-1.14.2-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:577791f948d34d569acb2d1add5831731c59d5a0c50a6d9f629ae1cefd9ca4a0"}, 792 | {file = "cffi-1.14.2-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:8662aabfeab00cea149a3d1c2999b0731e70c6b5bac596d95d13f643e76d3d4e"}, 793 | {file = "cffi-1.14.2-cp35-cp35m-win32.whl", hash = "sha256:837398c2ec00228679513802e3744d1e8e3cb1204aa6ad408b6aff081e99a487"}, 794 | {file = "cffi-1.14.2-cp35-cp35m-win_amd64.whl", hash = "sha256:bf44a9a0141a082e89c90e8d785b212a872db793a0080c20f6ae6e2a0ebf82ad"}, 795 | {file = "cffi-1.14.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:29c4688ace466a365b85a51dcc5e3c853c1d283f293dfcc12f7a77e498f160d2"}, 796 | {file = "cffi-1.14.2-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:99cc66b33c418cd579c0f03b77b94263c305c389cb0c6972dac420f24b3bf123"}, 797 | {file = "cffi-1.14.2-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:65867d63f0fd1b500fa343d7798fa64e9e681b594e0a07dc934c13e76ee28fb1"}, 798 | {file = "cffi-1.14.2-cp36-cp36m-win32.whl", hash = "sha256:f5033952def24172e60493b68717792e3aebb387a8d186c43c020d9363ee7281"}, 799 | {file = "cffi-1.14.2-cp36-cp36m-win_amd64.whl", hash = "sha256:7057613efefd36cacabbdbcef010e0a9c20a88fc07eb3e616019ea1692fa5df4"}, 800 | {file = "cffi-1.14.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6539314d84c4d36f28d73adc1b45e9f4ee2a89cdc7e5d2b0a6dbacba31906798"}, 801 | {file = "cffi-1.14.2-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:672b539db20fef6b03d6f7a14b5825d57c98e4026401fce838849f8de73fe4d4"}, 802 | {file = "cffi-1.14.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:95e9094162fa712f18b4f60896e34b621df99147c2cee216cfa8f022294e8e9f"}, 803 | {file = "cffi-1.14.2-cp37-cp37m-win32.whl", hash = "sha256:b9aa9d8818c2e917fa2c105ad538e222a5bce59777133840b93134022a7ce650"}, 804 | {file = "cffi-1.14.2-cp37-cp37m-win_amd64.whl", hash = "sha256:e4b9b7af398c32e408c00eb4e0d33ced2f9121fd9fb978e6c1b57edd014a7d15"}, 805 | {file = "cffi-1.14.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:e613514a82539fc48291d01933951a13ae93b6b444a88782480be32245ed4afa"}, 806 | {file = "cffi-1.14.2-cp38-cp38-manylinux1_i686.whl", hash = "sha256:9b219511d8b64d3fa14261963933be34028ea0e57455baf6781fe399c2c3206c"}, 807 | {file = "cffi-1.14.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:c0b48b98d79cf795b0916c57bebbc6d16bb43b9fc9b8c9f57f4cf05881904c75"}, 808 | {file = "cffi-1.14.2-cp38-cp38-win32.whl", hash = "sha256:15419020b0e812b40d96ec9d369b2bc8109cc3295eac6e013d3261343580cc7e"}, 809 | {file = "cffi-1.14.2-cp38-cp38-win_amd64.whl", hash = "sha256:12a453e03124069b6896107ee133ae3ab04c624bb10683e1ed1c1663df17c13c"}, 810 | {file = "cffi-1.14.2.tar.gz", hash = "sha256:ae8f34d50af2c2154035984b8b5fc5d9ed63f32fe615646ab435b05b132ca91b"}, 811 | ] 812 | chardet = [ 813 | {file = "chardet-3.0.4-py2.py3-none-any.whl", 
hash = "sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691"}, 814 | {file = "chardet-3.0.4.tar.gz", hash = "sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae"}, 815 | ] 816 | click = [ 817 | {file = "click-7.1.2-py2.py3-none-any.whl", hash = "sha256:dacca89f4bfadd5de3d7489b7c8a566eee0d3676333fbb50030263894c38c0dc"}, 818 | {file = "click-7.1.2.tar.gz", hash = "sha256:d2b5255c7c6349bc1bd1e59e08cd12acbbd63ce649f2588755783aa94dfb6b1a"}, 819 | ] 820 | enum-compat = [ 821 | {file = "enum-compat-0.0.3.tar.gz", hash = "sha256:3677daabed56a6f724451d585662253d8fb4e5569845aafa8bb0da36b1a8751e"}, 822 | {file = "enum_compat-0.0.3-py3-none-any.whl", hash = "sha256:88091b617c7fc3bbbceae50db5958023c48dc40b50520005aa3bf27f8f7ea157"}, 823 | ] 824 | flask = [ 825 | {file = "Flask-1.1.2-py2.py3-none-any.whl", hash = "sha256:8a4fdd8936eba2512e9c85df320a37e694c93945b33ef33c89946a340a238557"}, 826 | {file = "Flask-1.1.2.tar.gz", hash = "sha256:4efa1ae2d7c9865af48986de8aeb8504bf32c7f3d6fdc9353d34b21f4b127060"}, 827 | ] 828 | future = [ 829 | {file = "future-0.18.2.tar.gz", hash = "sha256:b1bead90b70cf6ec3f0710ae53a525360fa360d306a86583adc6bf83a4db537d"}, 830 | ] 831 | gevent = [ 832 | {file = "gevent-20.6.2-cp27-cp27m-macosx_10_15_x86_64.whl", hash = "sha256:b03890bbddbae5667f5baad517417056496ff5e92c3c7945b27cc08f55a9fcb2"}, 833 | {file = "gevent-20.6.2-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:1ea0d34cb78cdf37870be3bfb9330ebda89197bed9e048c14f4a90dec19a33e0"}, 834 | {file = "gevent-20.6.2-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:73eb4cf3114fbb5dd801bd0b93941adfa2fa6d99e91976c20a121ea14b8b39b9"}, 835 | {file = "gevent-20.6.2-cp27-cp27m-win32.whl", hash = "sha256:f41cc8e853ac2252bc58f6feabd74b8aae613e2d19097c5373463122f4dc08f0"}, 836 | {file = "gevent-20.6.2-cp27-cp27m-win_amd64.whl", hash = "sha256:d3baff87d935a5eeffb0e4f7cd5ffe258d2430cd62aeee2e5396f85da07df435"}, 837 | {file = "gevent-20.6.2-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:7d8408854ce892f987305a0e9bf5c051f4ea29453665454396d6afb620c719b6"}, 838 | {file = "gevent-20.6.2-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:ea2e4584950186b71d648bde6af40dae4d4c6f43db25a732ec056b27a7a83afe"}, 839 | {file = "gevent-20.6.2-cp35-cp35m-win32.whl", hash = "sha256:c0f4340e40e0f9dfe93a52a12ddf5b1eeda9bbc89b99bf3b9b23acab0dfae0a4"}, 840 | {file = "gevent-20.6.2-cp35-cp35m-win_amd64.whl", hash = "sha256:13c74d6784ef5ada2666abf2bb310d27a1d14291f7cac46148f336b19f714d40"}, 841 | {file = "gevent-20.6.2-cp36-cp36m-macosx_10_15_x86_64.whl", hash = "sha256:78bd94f6f2ac366155169df3507068f6381f2ad77625633189ce183f86a57597"}, 842 | {file = "gevent-20.6.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:0b16dd85eddaf6acdad373ce90ed4da09ef466cbc5e0ee5932d13f099929e844"}, 843 | {file = "gevent-20.6.2-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:a47556cac07e31b3cef8fd701599b3b1365961fe3736471f41807ffa27c5c848"}, 844 | {file = "gevent-20.6.2-cp36-cp36m-win32.whl", hash = "sha256:bef18b8bd3b728240b9bbd699737216b793d6c97b482431f69dcbe328ad73692"}, 845 | {file = "gevent-20.6.2-cp36-cp36m-win_amd64.whl", hash = "sha256:d0a67a20ce325f6a2068e0bd9fbf83db8a5f5ced972ed8ac5c20079a7d98c7d1"}, 846 | {file = "gevent-20.6.2-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:b17915b65b49a425115ddc3087484c81b1e47ce38c931d18bb14e453753e4d06"}, 847 | {file = "gevent-20.6.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:ebb8a545112110e3a6edf905ae1556b0538fc148c743aa7d8cfaebbbc23de31d"}, 848 | {file = 
"gevent-20.6.2-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:6c864b5604166ac8351e3128a1135b883b9e978fd24afbd75a249dcb42bc8ab5"}, 849 | {file = "gevent-20.6.2-cp37-cp37m-win32.whl", hash = "sha256:e5ca5ee80a9d9e697c9fc22b4bbce9ad06870f83fc8e7774e5504892ef702476"}, 850 | {file = "gevent-20.6.2-cp37-cp37m-win_amd64.whl", hash = "sha256:f2a02d9004ccb18edd9eaf6f25da9a7763de41a69754d5e4d872a8cbf8bd0b72"}, 851 | {file = "gevent-20.6.2-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:354f932c284fa45826b32f42927d892096cce05671b50b3ff59528230217ad47"}, 852 | {file = "gevent-20.6.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:67776cb33b638a3c61a0351d9d1e8f33a46b47de619e249de1159892f9ff035c"}, 853 | {file = "gevent-20.6.2-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:68764aca061bbbbade43727e797f9c28042f6d90cca5fb6514ef726d43ab00ca"}, 854 | {file = "gevent-20.6.2-cp38-cp38-win32.whl", hash = "sha256:0f3fbb1703b10609856e5dffb0e358bf5edf57e52dc7cd7226e3f8674fdc0a0f"}, 855 | {file = "gevent-20.6.2-cp38-cp38-win_amd64.whl", hash = "sha256:a18d8dd9bfa994a22f30adfa0563d80f0809140045c34f85535f422813d25855"}, 856 | {file = "gevent-20.6.2-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:9527087984f1659be899b3300d5d61c7c5b01d8beae106aff5160316da8bc56f"}, 857 | {file = "gevent-20.6.2-pp27-pypy_73-macosx_10_7_x86_64.whl", hash = "sha256:76ef4c6e3332e6f7278142d791b28695adfce39735900fccef2a0f1d894f6b36"}, 858 | {file = "gevent-20.6.2-pp27-pypy_73-win32.whl", hash = "sha256:3cb2f6978615d52e4e4e667b035c11a7272bb68b14d119faf1b138164b2f354f"}, 859 | {file = "gevent-20.6.2.tar.gz", hash = "sha256:a23c2abf08e851c988723f6a2996d495f513a2c0dc70f9956af03af8debdb5d1"}, 860 | ] 861 | google-auth = [ 862 | {file = "google-auth-1.20.1.tar.gz", hash = "sha256:2f34dd810090d0d4c9d5787c4ad7b4413d1fbfb941e13682c7a2298d3b6cdcc8"}, 863 | {file = "google_auth-1.20.1-py2.py3-none-any.whl", hash = "sha256:ce1fb80b5c6d3dd038babcc43e221edeafefc72d983b3dc28b67b996f76f00b9"}, 864 | ] 865 | google-auth-oauthlib = [ 866 | {file = "google-auth-oauthlib-0.4.1.tar.gz", hash = "sha256:88d2cd115e3391eb85e1243ac6902e76e77c5fe438b7276b297fbe68015458dd"}, 867 | {file = "google_auth_oauthlib-0.4.1-py2.py3-none-any.whl", hash = "sha256:a92a0f6f41a0fb6138454fbc02674e64f89d82a244ea32f98471733c8ef0e0e1"}, 868 | ] 869 | greenlet = [ 870 | {file = "greenlet-0.4.16-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:80cb0380838bf4e48da6adedb0c7cd060c187bb4a75f67a5aa9ec33689b84872"}, 871 | {file = "greenlet-0.4.16-cp27-cp27m-win32.whl", hash = "sha256:df7de669cbf21de4b04a3ffc9920bc8426cab4c61365fa84d79bf97401a8bef7"}, 872 | {file = "greenlet-0.4.16-cp27-cp27m-win_amd64.whl", hash = "sha256:1429dc183b36ec972055e13250d96e174491559433eb3061691b446899b87384"}, 873 | {file = "greenlet-0.4.16-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:5ea034d040e6ab1d2ae04ab05a3f37dbd719c4dee3804b13903d4cc794b1336e"}, 874 | {file = "greenlet-0.4.16-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:c196a5394c56352e21cb7224739c6dd0075b69dd56f758505951d1d8d68cf8a9"}, 875 | {file = "greenlet-0.4.16-cp35-cp35m-win32.whl", hash = "sha256:1000038ba0ea9032948e2156a9c15f5686f36945e8f9906e6b8db49f358e7b52"}, 876 | {file = "greenlet-0.4.16-cp35-cp35m-win_amd64.whl", hash = "sha256:1b805231bfb7b2900a16638c3c8b45c694334c811f84463e52451e00c9412691"}, 877 | {file = "greenlet-0.4.16-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:e5db19d4a7d41bbeb3dd89b49fc1bc7e6e515b51bbf32589c618655a0ebe0bf0"}, 878 | {file = "greenlet-0.4.16-cp36-cp36m-win32.whl", hash = 
"sha256:eac2a3f659d5f41d6bbfb6a97733bc7800ea5e906dc873732e00cebb98cec9e4"}, 879 | {file = "greenlet-0.4.16-cp36-cp36m-win_amd64.whl", hash = "sha256:7eed31f4efc8356e200568ba05ad645525f1fbd8674f1e5be61a493e715e3873"}, 880 | {file = "greenlet-0.4.16-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:682328aa576ec393c1872615bcb877cf32d800d4a2f150e1a5dc7e56644010b1"}, 881 | {file = "greenlet-0.4.16-cp37-cp37m-win32.whl", hash = "sha256:3a35e33902b2e6079949feed7a2dafa5ac6f019da97bd255842bb22de3c11bf5"}, 882 | {file = "greenlet-0.4.16-cp37-cp37m-win_amd64.whl", hash = "sha256:b0b2a984bbfc543d144d88caad6cc7ff4a71be77102014bd617bd88cfb038727"}, 883 | {file = "greenlet-0.4.16-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:d83c1d38658b0f81c282b41238092ed89d8f93c6e342224ab73fb39e16848721"}, 884 | {file = "greenlet-0.4.16-cp38-cp38-win32.whl", hash = "sha256:e695ac8c3efe124d998230b219eb51afb6ef10524a50b3c45109c4b77a8a3a92"}, 885 | {file = "greenlet-0.4.16-cp38-cp38-win_amd64.whl", hash = "sha256:133ba06bad4e5f2f8bf6a0ac434e0fd686df749a86b3478903b92ec3a9c0c90b"}, 886 | {file = "greenlet-0.4.16.tar.gz", hash = "sha256:6e06eac722676797e8fce4adb8ad3dc57a1bb3adfb0dd3fdf8306c055a38456c"}, 887 | ] 888 | grpcio = [ 889 | {file = "grpcio-1.31.0-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:e8c3264b0fd728aadf3f0324471843f65bd3b38872bdab2a477e31ffb685dd5b"}, 890 | {file = "grpcio-1.31.0-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:5fb0923b16590bac338e92d98c7d8effb3cfad1d2e18c71bf86bde32c49cd6dd"}, 891 | {file = "grpcio-1.31.0-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:58d7121f48cb94535a4cedcce32921d0d0a78563c7372a143dedeec196d1c637"}, 892 | {file = "grpcio-1.31.0-cp27-cp27m-win32.whl", hash = "sha256:ea849210e7362559f326cbe603d5b8d8bb1e556e86a7393b5a8847057de5b084"}, 893 | {file = "grpcio-1.31.0-cp27-cp27m-win_amd64.whl", hash = "sha256:ba3e43cb984399064ffaa3c0997576e46a1e268f9da05f97cd9b272f0b59ee71"}, 894 | {file = "grpcio-1.31.0-cp27-cp27mu-linux_armv7l.whl", hash = "sha256:ebb2ca09fa17537e35508a29dcb05575d4d9401138a68e83d1c605d65e8a1770"}, 895 | {file = "grpcio-1.31.0-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:292635f05b6ce33f87116951d0b3d8d330bdfc5cac74f739370d60981e8c256c"}, 896 | {file = "grpcio-1.31.0-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:92e54ab65e782f227e751c7555918afaba8d1229601687e89b80c2b65d2f6642"}, 897 | {file = "grpcio-1.31.0-cp35-cp35m-linux_armv7l.whl", hash = "sha256:013287f99c99b201aa8a5f6bc7918f616739b9be031db132d9e3b8453e95e151"}, 898 | {file = "grpcio-1.31.0-cp35-cp35m-macosx_10_7_intel.whl", hash = "sha256:d2c5e05c257859febd03f5d81b5015e1946d6bcf475c7bf63ee99cea8ab0d590"}, 899 | {file = "grpcio-1.31.0-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:c9016ab1eaf4e054099303287195f3746bd4e69f2631d040f9dca43e910a5408"}, 900 | {file = "grpcio-1.31.0-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:baaa036540d7ace433bdf38a3fe5e41cf9f84cdf10a88bac805f678a7ca8ddcc"}, 901 | {file = "grpcio-1.31.0-cp35-cp35m-manylinux2014_i686.whl", hash = "sha256:75e383053dccb610590aa53eed5278db5c09bf498d3b5105ce6c776478f59352"}, 902 | {file = "grpcio-1.31.0-cp35-cp35m-manylinux2014_x86_64.whl", hash = "sha256:739a72abffbd36083ff7adbb862cf1afc1e311c35834bed9c0361d8e68b063e1"}, 903 | {file = "grpcio-1.31.0-cp35-cp35m-win32.whl", hash = "sha256:f04c59d186af3157dc8811114130aaeae92e90a65283733f41de94eed484e1f7"}, 904 | {file = "grpcio-1.31.0-cp35-cp35m-win_amd64.whl", hash = "sha256:ef9fce98b6fe03874c2a6576b02aec1a0df25742cd67d1d7b75a49e30aa74225"}, 905 | {file = 
"grpcio-1.31.0-cp36-cp36m-linux_armv7l.whl", hash = "sha256:08a9b648dbe8852ff94b73a1c96da126834c3057ba2301d13e8c4adff334c482"}, 906 | {file = "grpcio-1.31.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:c22b19abba63562a5a200e586b5bde39d26c8ec30c92e26d209d81182371693b"}, 907 | {file = "grpcio-1.31.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:0397616355760cd8282ed5ea34d51830ae4cb6613b7e5f66bed3be5d041b8b9a"}, 908 | {file = "grpcio-1.31.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:259240aab2603891553e17ad5b2655693df79e02a9b887ff605bdeb2fcd3dcc9"}, 909 | {file = "grpcio-1.31.0-cp36-cp36m-manylinux2014_i686.whl", hash = "sha256:8ca26b489b5dc1e3d31807d329c23d6cb06fe40fbae25b0649b718947936e26a"}, 910 | {file = "grpcio-1.31.0-cp36-cp36m-manylinux2014_x86_64.whl", hash = "sha256:bf39977282a79dc1b2765cc3402c0ada571c29a491caec6ed12c0993c1ec115e"}, 911 | {file = "grpcio-1.31.0-cp36-cp36m-win32.whl", hash = "sha256:f5b0870b733bcb7b6bf05a02035e7aaf20f599d3802b390282d4c2309f825f1d"}, 912 | {file = "grpcio-1.31.0-cp36-cp36m-win_amd64.whl", hash = "sha256:074871a184483d5cd0746fd01e7d214d3ee9d36e67e32a5786b0a21f29fb8304"}, 913 | {file = "grpcio-1.31.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:220c46b1fc9c9a6fcca4caac398f08f0ed43cdd63c45b7458983c4a1575ef6df"}, 914 | {file = "grpcio-1.31.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:7a11b1ebb3210f34913b8be6995936bf9ebc541a65ab69e75db5ce1fe5047e8f"}, 915 | {file = "grpcio-1.31.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:3c2aa6d7a5e5bf73fdb1715eee777efe06dd39df03383f1cc095b2fdb34883e6"}, 916 | {file = "grpcio-1.31.0-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:e64bddd09842ef508d72ca354319b0eb126205d951e8ac3128fe9869bd563552"}, 917 | {file = "grpcio-1.31.0-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:5d7faa89992e015d245750ca9ac916c161bbf72777b2c60abc61da3fae41339e"}, 918 | {file = "grpcio-1.31.0-cp37-cp37m-win32.whl", hash = "sha256:43d44548ad6ee738b941abd9f09e3b83a5c13f3e1410321023c3c148ba50e796"}, 919 | {file = "grpcio-1.31.0-cp37-cp37m-win_amd64.whl", hash = "sha256:bf00ab06ea4f89976288f4d6224d4aa120780e30c955d4f85c3214ada29b3ddf"}, 920 | {file = "grpcio-1.31.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:344b50865914cc8e6d023457bffee9a640abb18f75d0f2bb519041961c748da9"}, 921 | {file = "grpcio-1.31.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:63ee8e02d04272c3d103f44b4bce5d43ea757dd288673cea212d2f7da27967d2"}, 922 | {file = "grpcio-1.31.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:a9a7ae74cb3108e6457cf15532d4c300324b48fbcf3ef290bcd2835745f20510"}, 923 | {file = "grpcio-1.31.0-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:64077e3a9a7cf2f59e6c76d503c8de1f18a76428f41a5b000dc53c48a0b772ff"}, 924 | {file = "grpcio-1.31.0-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:8b42f0ac76be07a5fa31117a3388d754ad35ef05e2e34be185ca9ccbcfac2069"}, 925 | {file = "grpcio-1.31.0-cp38-cp38-win32.whl", hash = "sha256:8002a89ea91c0078c15d3c0daf423fd4968946be78f08545e807ea9a5ff8054a"}, 926 | {file = "grpcio-1.31.0-cp38-cp38-win_amd64.whl", hash = "sha256:0fa86ac4452602c79774783aa68979a1a7625ebb7eaabee2b6550b975b9d61e6"}, 927 | {file = "grpcio-1.31.0.tar.gz", hash = "sha256:5043440c45c0a031f387e7f48527541c65d672005fb24cf18ef6857483557d39"}, 928 | ] 929 | gunicorn = [ 930 | {file = "gunicorn-20.0.4-py2.py3-none-any.whl", hash = "sha256:cd4a810dd51bf497552cf3f863b575dabd73d6ad6a91075b65936b151cbf4f9c"}, 931 | {file = "gunicorn-20.0.4.tar.gz", hash = 
"sha256:1904bb2b8a43658807108d59c3f3d56c2b6121a701161de0ddf9ad140073c626"}, 932 | ] 933 | hydra-core = [ 934 | {file = "hydra-core-1.0.0rc2.tar.gz", hash = "sha256:86f4882a466289351ba7efd3d6a11be531200d00b08c6a83f08302481592c005"}, 935 | {file = "hydra_core-1.0.0rc2-py3-none-any.whl", hash = "sha256:d2a9e406a24a99c42e1b2746b82b280d9ab062ebc406ce6bfbc7b09e57c16b70"}, 936 | ] 937 | idna = [ 938 | {file = "idna-2.10-py2.py3-none-any.whl", hash = "sha256:b97d804b1e9b523befed77c48dacec60e6dcb0b5391d57af6a65a312a90648c0"}, 939 | {file = "idna-2.10.tar.gz", hash = "sha256:b307872f855b18632ce0c21c5e45be78c0ea7ae4c15c828c20788b26921eb3f6"}, 940 | ] 941 | importlib-metadata = [ 942 | {file = "importlib_metadata-1.7.0-py2.py3-none-any.whl", hash = "sha256:dc15b2969b4ce36305c51eebe62d418ac7791e9a157911d58bfb1f9ccd8e2070"}, 943 | {file = "importlib_metadata-1.7.0.tar.gz", hash = "sha256:90bb658cdbbf6d1735b6341ce708fc7024a3e14e99ffdc5783edea9f9b077f83"}, 944 | ] 945 | importlib-resources = [ 946 | {file = "importlib_resources-3.0.0-py2.py3-none-any.whl", hash = "sha256:d028f66b66c0d5732dae86ba4276999855e162a749c92620a38c1d779ed138a7"}, 947 | {file = "importlib_resources-3.0.0.tar.gz", hash = "sha256:19f745a6eca188b490b1428c8d1d4a0d2368759f32370ea8fb89cad2ab1106c3"}, 948 | ] 949 | itsdangerous = [ 950 | {file = "itsdangerous-1.1.0-py2.py3-none-any.whl", hash = "sha256:b12271b2047cb23eeb98c8b5622e2e5c5e9abd9784a153e9d8ef9cb4dd09d749"}, 951 | {file = "itsdangerous-1.1.0.tar.gz", hash = "sha256:321b033d07f2a4136d3ec762eac9f16a10ccd60f53c0c91af90217ace7ba1f19"}, 952 | ] 953 | jinja2 = [ 954 | {file = "Jinja2-2.11.2-py2.py3-none-any.whl", hash = "sha256:f0a4641d3cf955324a89c04f3d94663aa4d638abe8f733ecd3582848e1c37035"}, 955 | {file = "Jinja2-2.11.2.tar.gz", hash = "sha256:89aab215427ef59c34ad58735269eb58b1a5808103067f7bb9d5836c651b3bb0"}, 956 | ] 957 | joblib = [ 958 | {file = "joblib-0.16.0-py3-none-any.whl", hash = "sha256:d348c5d4ae31496b2aa060d6d9b787864dd204f9480baaa52d18850cb43e9f49"}, 959 | {file = "joblib-0.16.0.tar.gz", hash = "sha256:8f52bf24c64b608bf0b2563e0e47d6fcf516abc8cfafe10cfd98ad66d94f92d6"}, 960 | ] 961 | markdown = [ 962 | {file = "Markdown-3.2.2-py3-none-any.whl", hash = "sha256:c467cd6233885534bf0fe96e62e3cf46cfc1605112356c4f9981512b8174de59"}, 963 | {file = "Markdown-3.2.2.tar.gz", hash = "sha256:1fafe3f1ecabfb514a5285fca634a53c1b32a81cb0feb154264d55bf2ff22c17"}, 964 | ] 965 | markupsafe = [ 966 | {file = "MarkupSafe-1.1.1-cp27-cp27m-macosx_10_6_intel.whl", hash = "sha256:09027a7803a62ca78792ad89403b1b7a73a01c8cb65909cd876f7fcebd79b161"}, 967 | {file = "MarkupSafe-1.1.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:e249096428b3ae81b08327a63a485ad0878de3fb939049038579ac0ef61e17e7"}, 968 | {file = "MarkupSafe-1.1.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:500d4957e52ddc3351cabf489e79c91c17f6e0899158447047588650b5e69183"}, 969 | {file = "MarkupSafe-1.1.1-cp27-cp27m-win32.whl", hash = "sha256:b2051432115498d3562c084a49bba65d97cf251f5a331c64a12ee7e04dacc51b"}, 970 | {file = "MarkupSafe-1.1.1-cp27-cp27m-win_amd64.whl", hash = "sha256:98c7086708b163d425c67c7a91bad6e466bb99d797aa64f965e9d25c12111a5e"}, 971 | {file = "MarkupSafe-1.1.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:cd5df75523866410809ca100dc9681e301e3c27567cf498077e8551b6d20e42f"}, 972 | {file = "MarkupSafe-1.1.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:43a55c2930bbc139570ac2452adf3d70cdbb3cfe5912c71cdce1c2c6bbd9c5d1"}, 973 | {file = "MarkupSafe-1.1.1-cp34-cp34m-macosx_10_6_intel.whl", hash = 
"sha256:1027c282dad077d0bae18be6794e6b6b8c91d58ed8a8d89a89d59693b9131db5"}, 974 | {file = "MarkupSafe-1.1.1-cp34-cp34m-manylinux1_i686.whl", hash = "sha256:62fe6c95e3ec8a7fad637b7f3d372c15ec1caa01ab47926cfdf7a75b40e0eac1"}, 975 | {file = "MarkupSafe-1.1.1-cp34-cp34m-manylinux1_x86_64.whl", hash = "sha256:88e5fcfb52ee7b911e8bb6d6aa2fd21fbecc674eadd44118a9cc3863f938e735"}, 976 | {file = "MarkupSafe-1.1.1-cp34-cp34m-win32.whl", hash = "sha256:ade5e387d2ad0d7ebf59146cc00c8044acbd863725f887353a10df825fc8ae21"}, 977 | {file = "MarkupSafe-1.1.1-cp34-cp34m-win_amd64.whl", hash = "sha256:09c4b7f37d6c648cb13f9230d847adf22f8171b1ccc4d5682398e77f40309235"}, 978 | {file = "MarkupSafe-1.1.1-cp35-cp35m-macosx_10_6_intel.whl", hash = "sha256:79855e1c5b8da654cf486b830bd42c06e8780cea587384cf6545b7d9ac013a0b"}, 979 | {file = "MarkupSafe-1.1.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:c8716a48d94b06bb3b2524c2b77e055fb313aeb4ea620c8dd03a105574ba704f"}, 980 | {file = "MarkupSafe-1.1.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:7c1699dfe0cf8ff607dbdcc1e9b9af1755371f92a68f706051cc8c37d447c905"}, 981 | {file = "MarkupSafe-1.1.1-cp35-cp35m-win32.whl", hash = "sha256:6dd73240d2af64df90aa7c4e7481e23825ea70af4b4922f8ede5b9e35f78a3b1"}, 982 | {file = "MarkupSafe-1.1.1-cp35-cp35m-win_amd64.whl", hash = "sha256:9add70b36c5666a2ed02b43b335fe19002ee5235efd4b8a89bfcf9005bebac0d"}, 983 | {file = "MarkupSafe-1.1.1-cp36-cp36m-macosx_10_6_intel.whl", hash = "sha256:24982cc2533820871eba85ba648cd53d8623687ff11cbb805be4ff7b4c971aff"}, 984 | {file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:00bc623926325b26bb9605ae9eae8a215691f33cae5df11ca5424f06f2d1f473"}, 985 | {file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:717ba8fe3ae9cc0006d7c451f0bb265ee07739daf76355d06366154ee68d221e"}, 986 | {file = "MarkupSafe-1.1.1-cp36-cp36m-win32.whl", hash = "sha256:535f6fc4d397c1563d08b88e485c3496cf5784e927af890fb3c3aac7f933ec66"}, 987 | {file = "MarkupSafe-1.1.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b1282f8c00509d99fef04d8ba936b156d419be841854fe901d8ae224c59f0be5"}, 988 | {file = "MarkupSafe-1.1.1-cp37-cp37m-macosx_10_6_intel.whl", hash = "sha256:8defac2f2ccd6805ebf65f5eeb132adcf2ab57aa11fdf4c0dd5169a004710e7d"}, 989 | {file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:46c99d2de99945ec5cb54f23c8cd5689f6d7177305ebff350a58ce5f8de1669e"}, 990 | {file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:ba59edeaa2fc6114428f1637ffff42da1e311e29382d81b339c1817d37ec93c6"}, 991 | {file = "MarkupSafe-1.1.1-cp37-cp37m-win32.whl", hash = "sha256:b00c1de48212e4cc9603895652c5c410df699856a2853135b3967591e4beebc2"}, 992 | {file = "MarkupSafe-1.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:9bf40443012702a1d2070043cb6291650a0841ece432556f784f004937f0f32c"}, 993 | {file = "MarkupSafe-1.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6788b695d50a51edb699cb55e35487e430fa21f1ed838122d722e0ff0ac5ba15"}, 994 | {file = "MarkupSafe-1.1.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:cdb132fc825c38e1aeec2c8aa9338310d29d337bebbd7baa06889d09a60a1fa2"}, 995 | {file = "MarkupSafe-1.1.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:13d3144e1e340870b25e7b10b98d779608c02016d5184cfb9927a9f10c689f42"}, 996 | {file = "MarkupSafe-1.1.1-cp38-cp38-win32.whl", hash = "sha256:596510de112c685489095da617b5bcbbac7dd6384aeebeda4df6025d0256a81b"}, 997 | {file = "MarkupSafe-1.1.1-cp38-cp38-win_amd64.whl", hash = 
"sha256:e8313f01ba26fbbe36c7be1966a7b7424942f670f38e666995b88d012765b9be"}, 998 | {file = "MarkupSafe-1.1.1.tar.gz", hash = "sha256:29872e92839765e546828bb7754a68c418d927cd064fd4708fab9fe9c8bb116b"}, 999 | ] 1000 | numpy = [ 1001 | {file = "numpy-1.19.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:b1cca51512299841bf69add3b75361779962f9cee7d9ee3bb446d5982e925b69"}, 1002 | {file = "numpy-1.19.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:c9591886fc9cbe5532d5df85cb8e0cc3b44ba8ce4367bd4cf1b93dc19713da72"}, 1003 | {file = "numpy-1.19.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:cf1347450c0b7644ea142712619533553f02ef23f92f781312f6a3553d031fc7"}, 1004 | {file = "numpy-1.19.1-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:ed8a311493cf5480a2ebc597d1e177231984c818a86875126cfd004241a73c3e"}, 1005 | {file = "numpy-1.19.1-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:3673c8b2b29077f1b7b3a848794f8e11f401ba0b71c49fbd26fb40b71788b132"}, 1006 | {file = "numpy-1.19.1-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:56ef7f56470c24bb67fb43dae442e946a6ce172f97c69f8d067ff8550cf782ff"}, 1007 | {file = "numpy-1.19.1-cp36-cp36m-win32.whl", hash = "sha256:aaf42a04b472d12515debc621c31cf16c215e332242e7a9f56403d814c744624"}, 1008 | {file = "numpy-1.19.1-cp36-cp36m-win_amd64.whl", hash = "sha256:082f8d4dd69b6b688f64f509b91d482362124986d98dc7dc5f5e9f9b9c3bb983"}, 1009 | {file = "numpy-1.19.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e4f6d3c53911a9d103d8ec9518190e52a8b945bab021745af4939cfc7c0d4a9e"}, 1010 | {file = "numpy-1.19.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:5b6885c12784a27e957294b60f97e8b5b4174c7504665333c5e94fbf41ae5d6a"}, 1011 | {file = "numpy-1.19.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:1bc0145999e8cb8aed9d4e65dd8b139adf1919e521177f198529687dbf613065"}, 1012 | {file = "numpy-1.19.1-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:5a936fd51049541d86ccdeef2833cc89a18e4d3808fe58a8abeb802665c5af93"}, 1013 | {file = "numpy-1.19.1-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:ef71a1d4fd4858596ae80ad1ec76404ad29701f8ca7cdcebc50300178db14dfc"}, 1014 | {file = "numpy-1.19.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:b9792b0ac0130b277536ab8944e7b754c69560dac0415dd4b2dbd16b902c8954"}, 1015 | {file = "numpy-1.19.1-cp37-cp37m-win32.whl", hash = "sha256:b12e639378c741add21fbffd16ba5ad25c0a1a17cf2b6fe4288feeb65144f35b"}, 1016 | {file = "numpy-1.19.1-cp37-cp37m-win_amd64.whl", hash = "sha256:8343bf67c72e09cfabfab55ad4a43ce3f6bf6e6ced7acf70f45ded9ebb425055"}, 1017 | {file = "numpy-1.19.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:e45f8e981a0ab47103181773cc0a54e650b2aef8c7b6cd07405d0fa8d869444a"}, 1018 | {file = "numpy-1.19.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:667c07063940e934287993366ad5f56766bc009017b4a0fe91dbd07960d0aba7"}, 1019 | {file = "numpy-1.19.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:480fdd4dbda4dd6b638d3863da3be82873bba6d32d1fc12ea1b8486ac7b8d129"}, 1020 | {file = "numpy-1.19.1-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:935c27ae2760c21cd7354402546f6be21d3d0c806fffe967f745d5f2de5005a7"}, 1021 | {file = "numpy-1.19.1-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:309cbcfaa103fc9a33ec16d2d62569d541b79f828c382556ff072442226d1968"}, 1022 | {file = "numpy-1.19.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:7ed448ff4eaffeb01094959b19cbaf998ecdee9ef9932381420d514e446601cd"}, 1023 | {file = "numpy-1.19.1-cp38-cp38-win32.whl", hash = 
"sha256:de8b4a9b56255797cbddb93281ed92acbc510fb7b15df3f01bd28f46ebc4edae"}, 1024 | {file = "numpy-1.19.1-cp38-cp38-win_amd64.whl", hash = "sha256:92feb989b47f83ebef246adabc7ff3b9a59ac30601c3f6819f8913458610bdcc"}, 1025 | {file = "numpy-1.19.1-pp36-pypy36_pp73-manylinux2010_x86_64.whl", hash = "sha256:e1b1dc0372f530f26a03578ac75d5e51b3868b9b76cd2facba4c9ee0eb252ab1"}, 1026 | {file = "numpy-1.19.1.zip", hash = "sha256:b8456987b637232602ceb4d663cb34106f7eb780e247d51a260b84760fd8f491"}, 1027 | ] 1028 | oauthlib = [ 1029 | {file = "oauthlib-3.1.0-py2.py3-none-any.whl", hash = "sha256:df884cd6cbe20e32633f1db1072e9356f53638e4361bef4e8b03c9127c9328ea"}, 1030 | {file = "oauthlib-3.1.0.tar.gz", hash = "sha256:bee41cc35fcca6e988463cacc3bcb8a96224f470ca547e697b604cc697b2f889"}, 1031 | ] 1032 | omegaconf = [ 1033 | {file = "omegaconf-2.0.1rc11-py3-none-any.whl", hash = "sha256:12fd270dae1a95078fe79d3fe197b060748ed0bcd2645b1728b81b129be5d587"}, 1034 | {file = "omegaconf-2.0.1rc11.tar.gz", hash = "sha256:ef2bea858166b1bc57f30a0162466facda67a691e8d9b37f81034d8909d9d94e"}, 1035 | ] 1036 | packaging = [ 1037 | {file = "packaging-20.4-py2.py3-none-any.whl", hash = "sha256:998416ba6962ae7fbd6596850b80e17859a5753ba17c32284f67bfff33784181"}, 1038 | {file = "packaging-20.4.tar.gz", hash = "sha256:4357f74f47b9c12db93624a82154e9b120fa8293699949152b22065d556079f8"}, 1039 | ] 1040 | pandas = [ 1041 | {file = "pandas-1.1.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:47a03bfef80d6812c91ed6fae43f04f2fa80a4e1b82b35aa4d9002e39529e0b8"}, 1042 | {file = "pandas-1.1.0-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:0210f8fe19c2667a3817adb6de2c4fd92b1b78e1975ca60c0efa908e0985cbdb"}, 1043 | {file = "pandas-1.1.0-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:35db623487f00d9392d8af44a24516d6cb9f274afaf73cfcfe180b9c54e007d2"}, 1044 | {file = "pandas-1.1.0-cp36-cp36m-win32.whl", hash = "sha256:4d1a806252001c5db7caecbe1a26e49a6c23421d85a700960f6ba093112f54a1"}, 1045 | {file = "pandas-1.1.0-cp36-cp36m-win_amd64.whl", hash = "sha256:9f61cca5262840ff46ef857d4f5f65679b82188709d0e5e086a9123791f721c8"}, 1046 | {file = "pandas-1.1.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:182a5aeae319df391c3df4740bb17d5300dcd78034b17732c12e62e6dd79e4a4"}, 1047 | {file = "pandas-1.1.0-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:40ec0a7f611a3d00d3c666c4cceb9aa3f5bf9fbd81392948a93663064f527203"}, 1048 | {file = "pandas-1.1.0-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:16504f915f1ae424052f1e9b7cd2d01786f098fbb00fa4e0f69d42b22952d798"}, 1049 | {file = "pandas-1.1.0-cp37-cp37m-win32.whl", hash = "sha256:fc714895b6de6803ac9f661abb316853d0cd657f5d23985222255ad76ccedc25"}, 1050 | {file = "pandas-1.1.0-cp37-cp37m-win_amd64.whl", hash = "sha256:a15835c8409d5edc50b4af93be3377b5dd3eb53517e7f785060df1f06f6da0e2"}, 1051 | {file = "pandas-1.1.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0bc440493cf9dc5b36d5d46bbd5508f6547ba68b02a28234cd8e81fdce42744d"}, 1052 | {file = "pandas-1.1.0-cp38-cp38-manylinux1_i686.whl", hash = "sha256:4b21d46728f8a6be537716035b445e7ef3a75dbd30bd31aa1b251323219d853e"}, 1053 | {file = "pandas-1.1.0-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:0227e3a6e3a22c0e283a5041f1e3064d78fbde811217668bb966ed05386d8a7e"}, 1054 | {file = "pandas-1.1.0-cp38-cp38-win32.whl", hash = "sha256:ed60848caadeacecefd0b1de81b91beff23960032cded0ac1449242b506a3b3f"}, 1055 | {file = "pandas-1.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:60e20a4ab4d4fec253557d0fc9a4e4095c37b664f78c72af24860c8adcd07088"}, 1056 | {file = 
"pandas-1.1.0.tar.gz", hash = "sha256:b39508562ad0bb3f384b0db24da7d68a2608b9ddc85b1d931ccaaa92d5e45273"}, 1057 | ] 1058 | pillow = [ 1059 | {file = "Pillow-7.2.0-cp35-cp35m-macosx_10_10_intel.whl", hash = "sha256:1ca594126d3c4def54babee699c055a913efb01e106c309fa6b04405d474d5ae"}, 1060 | {file = "Pillow-7.2.0-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:c92302a33138409e8f1ad16731568c55c9053eee71bb05b6b744067e1b62380f"}, 1061 | {file = "Pillow-7.2.0-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:8dad18b69f710bf3a001d2bf3afab7c432785d94fcf819c16b5207b1cfd17d38"}, 1062 | {file = "Pillow-7.2.0-cp35-cp35m-manylinux2014_aarch64.whl", hash = "sha256:431b15cffbf949e89df2f7b48528be18b78bfa5177cb3036284a5508159492b5"}, 1063 | {file = "Pillow-7.2.0-cp35-cp35m-win32.whl", hash = "sha256:09d7f9e64289cb40c2c8d7ad674b2ed6105f55dc3b09aa8e4918e20a0311e7ad"}, 1064 | {file = "Pillow-7.2.0-cp35-cp35m-win_amd64.whl", hash = "sha256:0295442429645fa16d05bd567ef5cff178482439c9aad0411d3f0ce9b88b3a6f"}, 1065 | {file = "Pillow-7.2.0-cp36-cp36m-macosx_10_10_x86_64.whl", hash = "sha256:ec29604081f10f16a7aea809ad42e27764188fc258b02259a03a8ff7ded3808d"}, 1066 | {file = "Pillow-7.2.0-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:612cfda94e9c8346f239bf1a4b082fdd5c8143cf82d685ba2dba76e7adeeb233"}, 1067 | {file = "Pillow-7.2.0-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:0a80dd307a5d8440b0a08bd7b81617e04d870e40a3e46a32d9c246e54705e86f"}, 1068 | {file = "Pillow-7.2.0-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:06aba4169e78c439d528fdeb34762c3b61a70813527a2c57f0540541e9f433a8"}, 1069 | {file = "Pillow-7.2.0-cp36-cp36m-win32.whl", hash = "sha256:f7e30c27477dffc3e85c2463b3e649f751789e0f6c8456099eea7ddd53be4a8a"}, 1070 | {file = "Pillow-7.2.0-cp36-cp36m-win_amd64.whl", hash = "sha256:ffe538682dc19cc542ae7c3e504fdf54ca7f86fb8a135e59dd6bc8627eae6cce"}, 1071 | {file = "Pillow-7.2.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:94cf49723928eb6070a892cb39d6c156f7b5a2db4e8971cb958f7b6b104fb4c4"}, 1072 | {file = "Pillow-7.2.0-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:6edb5446f44d901e8683ffb25ebdfc26988ee813da3bf91e12252b57ac163727"}, 1073 | {file = "Pillow-7.2.0-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:52125833b070791fcb5710fabc640fc1df07d087fc0c0f02d3661f76c23c5b8b"}, 1074 | {file = "Pillow-7.2.0-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:9ad7f865eebde135d526bb3163d0b23ffff365cf87e767c649550964ad72785d"}, 1075 | {file = "Pillow-7.2.0-cp37-cp37m-win32.whl", hash = "sha256:c79f9c5fb846285f943aafeafda3358992d64f0ef58566e23484132ecd8d7d63"}, 1076 | {file = "Pillow-7.2.0-cp37-cp37m-win_amd64.whl", hash = "sha256:d350f0f2c2421e65fbc62690f26b59b0bcda1b614beb318c81e38647e0f673a1"}, 1077 | {file = "Pillow-7.2.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:6d7741e65835716ceea0fd13a7d0192961212fd59e741a46bbed7a473c634ed6"}, 1078 | {file = "Pillow-7.2.0-cp38-cp38-manylinux1_i686.whl", hash = "sha256:edf31f1150778abd4322444c393ab9c7bd2af271dd4dafb4208fb613b1f3cdc9"}, 1079 | {file = "Pillow-7.2.0-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:d08b23fdb388c0715990cbc06866db554e1822c4bdcf6d4166cf30ac82df8c41"}, 1080 | {file = "Pillow-7.2.0-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:5e51ee2b8114def244384eda1c82b10e307ad9778dac5c83fb0943775a653cd8"}, 1081 | {file = "Pillow-7.2.0-cp38-cp38-win32.whl", hash = "sha256:725aa6cfc66ce2857d585f06e9519a1cc0ef6d13f186ff3447ab6dff0a09bc7f"}, 1082 | {file = "Pillow-7.2.0-cp38-cp38-win_amd64.whl", hash = 
"sha256:a060cf8aa332052df2158e5a119303965be92c3da6f2d93b6878f0ebca80b2f6"}, 1083 | {file = "Pillow-7.2.0-pp36-pypy36_pp73-win32.whl", hash = "sha256:25930fadde8019f374400f7986e8404c8b781ce519da27792cbe46eabec00c4d"}, 1084 | {file = "Pillow-7.2.0.tar.gz", hash = "sha256:97f9e7953a77d5a70f49b9a48da7776dc51e9b738151b22dacf101641594a626"}, 1085 | ] 1086 | protobuf = [ 1087 | {file = "protobuf-3.13.0-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:9c2e63c1743cba12737169c447374fab3dfeb18111a460a8c1a000e35836b18c"}, 1088 | {file = "protobuf-3.13.0-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:1e834076dfef9e585815757a2c7e4560c7ccc5962b9d09f831214c693a91b463"}, 1089 | {file = "protobuf-3.13.0-cp35-cp35m-macosx_10_9_intel.whl", hash = "sha256:df3932e1834a64b46ebc262e951cd82c3cf0fa936a154f0a42231140d8237060"}, 1090 | {file = "protobuf-3.13.0-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:8c35bcbed1c0d29b127c886790e9d37e845ffc2725cc1db4bd06d70f4e8359f4"}, 1091 | {file = "protobuf-3.13.0-cp35-cp35m-win32.whl", hash = "sha256:339c3a003e3c797bc84499fa32e0aac83c768e67b3de4a5d7a5a9aa3b0da634c"}, 1092 | {file = "protobuf-3.13.0-cp35-cp35m-win_amd64.whl", hash = "sha256:361acd76f0ad38c6e38f14d08775514fbd241316cce08deb2ce914c7dfa1184a"}, 1093 | {file = "protobuf-3.13.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9edfdc679a3669988ec55a989ff62449f670dfa7018df6ad7f04e8dbacb10630"}, 1094 | {file = "protobuf-3.13.0-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:5db9d3e12b6ede5e601b8d8684a7f9d90581882925c96acf8495957b4f1b204b"}, 1095 | {file = "protobuf-3.13.0-cp36-cp36m-win32.whl", hash = "sha256:c8abd7605185836f6f11f97b21200f8a864f9cb078a193fe3c9e235711d3ff1e"}, 1096 | {file = "protobuf-3.13.0-cp36-cp36m-win_amd64.whl", hash = "sha256:4d1174c9ed303070ad59553f435846a2f877598f59f9afc1b89757bdf846f2a7"}, 1097 | {file = "protobuf-3.13.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0bba42f439bf45c0f600c3c5993666fcb88e8441d011fad80a11df6f324eef33"}, 1098 | {file = "protobuf-3.13.0-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:c0c5ab9c4b1eac0a9b838f1e46038c3175a95b0f2d944385884af72876bd6bc7"}, 1099 | {file = "protobuf-3.13.0-cp37-cp37m-win32.whl", hash = "sha256:f68eb9d03c7d84bd01c790948320b768de8559761897763731294e3bc316decb"}, 1100 | {file = "protobuf-3.13.0-cp37-cp37m-win_amd64.whl", hash = "sha256:91c2d897da84c62816e2f473ece60ebfeab024a16c1751aaf31100127ccd93ec"}, 1101 | {file = "protobuf-3.13.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:3dee442884a18c16d023e52e32dd34a8930a889e511af493f6dc7d4d9bf12e4f"}, 1102 | {file = "protobuf-3.13.0-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:e7662437ca1e0c51b93cadb988f9b353fa6b8013c0385d63a70c8a77d84da5f9"}, 1103 | {file = "protobuf-3.13.0-py2.py3-none-any.whl", hash = "sha256:d69697acac76d9f250ab745b46c725edf3e98ac24763990b24d58c16c642947a"}, 1104 | {file = "protobuf-3.13.0.tar.gz", hash = "sha256:6a82e0c8bb2bf58f606040cc5814e07715b2094caeba281e2e7d0b0e2e397db5"}, 1105 | ] 1106 | psutil = [ 1107 | {file = "psutil-5.7.2-cp27-none-win32.whl", hash = "sha256:f2018461733b23f308c298653c8903d32aaad7873d25e1d228765e91ae42c3f2"}, 1108 | {file = "psutil-5.7.2-cp27-none-win_amd64.whl", hash = "sha256:66c18ca7680a31bf16ee22b1d21b6397869dda8059dbdb57d9f27efa6615f195"}, 1109 | {file = "psutil-5.7.2-cp35-cp35m-win32.whl", hash = "sha256:5e9d0f26d4194479a13d5f4b3798260c20cecf9ac9a461e718eb59ea520a360c"}, 1110 | {file = "psutil-5.7.2-cp35-cp35m-win_amd64.whl", hash = "sha256:4080869ed93cce662905b029a1770fe89c98787e543fa7347f075ade761b19d6"}, 1111 | 
{file = "psutil-5.7.2-cp36-cp36m-win32.whl", hash = "sha256:d8a82162f23c53b8525cf5f14a355f5d1eea86fa8edde27287dd3a98399e4fdf"}, 1112 | {file = "psutil-5.7.2-cp36-cp36m-win_amd64.whl", hash = "sha256:0ee3c36428f160d2d8fce3c583a0353e848abb7de9732c50cf3356dd49ad63f8"}, 1113 | {file = "psutil-5.7.2-cp37-cp37m-win32.whl", hash = "sha256:ff1977ba1a5f71f89166d5145c3da1cea89a0fdb044075a12c720ee9123ec818"}, 1114 | {file = "psutil-5.7.2-cp37-cp37m-win_amd64.whl", hash = "sha256:a5b120bb3c0c71dfe27551f9da2f3209a8257a178ed6c628a819037a8df487f1"}, 1115 | {file = "psutil-5.7.2-cp38-cp38-win32.whl", hash = "sha256:10512b46c95b02842c225f58fa00385c08fa00c68bac7da2d9a58ebe2c517498"}, 1116 | {file = "psutil-5.7.2-cp38-cp38-win_amd64.whl", hash = "sha256:68d36986ded5dac7c2dcd42f2682af1db80d4bce3faa126a6145c1637e1b559f"}, 1117 | {file = "psutil-5.7.2.tar.gz", hash = "sha256:90990af1c3c67195c44c9a889184f84f5b2320dce3ee3acbd054e3ba0b4a7beb"}, 1118 | ] 1119 | pyasn1 = [ 1120 | {file = "pyasn1-0.4.8-py2.4.egg", hash = "sha256:fec3e9d8e36808a28efb59b489e4528c10ad0f480e57dcc32b4de5c9d8c9fdf3"}, 1121 | {file = "pyasn1-0.4.8-py2.5.egg", hash = "sha256:0458773cfe65b153891ac249bcf1b5f8f320b7c2ce462151f8fa74de8934becf"}, 1122 | {file = "pyasn1-0.4.8-py2.6.egg", hash = "sha256:5c9414dcfede6e441f7e8f81b43b34e834731003427e5b09e4e00e3172a10f00"}, 1123 | {file = "pyasn1-0.4.8-py2.7.egg", hash = "sha256:6e7545f1a61025a4e58bb336952c5061697da694db1cae97b116e9c46abcf7c8"}, 1124 | {file = "pyasn1-0.4.8-py2.py3-none-any.whl", hash = "sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d"}, 1125 | {file = "pyasn1-0.4.8-py3.1.egg", hash = "sha256:78fa6da68ed2727915c4767bb386ab32cdba863caa7dbe473eaae45f9959da86"}, 1126 | {file = "pyasn1-0.4.8-py3.2.egg", hash = "sha256:08c3c53b75eaa48d71cf8c710312316392ed40899cb34710d092e96745a358b7"}, 1127 | {file = "pyasn1-0.4.8-py3.3.egg", hash = "sha256:03840c999ba71680a131cfaee6fab142e1ed9bbd9c693e285cc6aca0d555e576"}, 1128 | {file = "pyasn1-0.4.8-py3.4.egg", hash = "sha256:7ab8a544af125fb704feadb008c99a88805126fb525280b2270bb25cc1d78a12"}, 1129 | {file = "pyasn1-0.4.8-py3.5.egg", hash = "sha256:e89bf84b5437b532b0803ba5c9a5e054d21fec423a89952a74f87fa2c9b7bce2"}, 1130 | {file = "pyasn1-0.4.8-py3.6.egg", hash = "sha256:014c0e9976956a08139dc0712ae195324a75e142284d5f87f1a87ee1b068a359"}, 1131 | {file = "pyasn1-0.4.8-py3.7.egg", hash = "sha256:99fcc3c8d804d1bc6d9a099921e39d827026409a58f2a720dcdb89374ea0c776"}, 1132 | {file = "pyasn1-0.4.8.tar.gz", hash = "sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba"}, 1133 | ] 1134 | pyasn1-modules = [ 1135 | {file = "pyasn1-modules-0.2.8.tar.gz", hash = "sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e"}, 1136 | {file = "pyasn1_modules-0.2.8-py2.4.egg", hash = "sha256:0fe1b68d1e486a1ed5473f1302bd991c1611d319bba158e98b106ff86e1d7199"}, 1137 | {file = "pyasn1_modules-0.2.8-py2.5.egg", hash = "sha256:fe0644d9ab041506b62782e92b06b8c68cca799e1a9636ec398675459e031405"}, 1138 | {file = "pyasn1_modules-0.2.8-py2.6.egg", hash = "sha256:a99324196732f53093a84c4369c996713eb8c89d360a496b599fb1a9c47fc3eb"}, 1139 | {file = "pyasn1_modules-0.2.8-py2.7.egg", hash = "sha256:0845a5582f6a02bb3e1bde9ecfc4bfcae6ec3210dd270522fee602365430c3f8"}, 1140 | {file = "pyasn1_modules-0.2.8-py2.py3-none-any.whl", hash = "sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74"}, 1141 | {file = "pyasn1_modules-0.2.8-py3.1.egg", hash = "sha256:f39edd8c4ecaa4556e989147ebf219227e2cd2e8a43c7e7fcb1f1c18c5fd6a3d"}, 1142 | 
{file = "pyasn1_modules-0.2.8-py3.2.egg", hash = "sha256:b80486a6c77252ea3a3e9b1e360bc9cf28eaac41263d173c032581ad2f20fe45"}, 1143 | {file = "pyasn1_modules-0.2.8-py3.3.egg", hash = "sha256:65cebbaffc913f4fe9e4808735c95ea22d7a7775646ab690518c056784bc21b4"}, 1144 | {file = "pyasn1_modules-0.2.8-py3.4.egg", hash = "sha256:15b7c67fabc7fc240d87fb9aabf999cf82311a6d6fb2c70d00d3d0604878c811"}, 1145 | {file = "pyasn1_modules-0.2.8-py3.5.egg", hash = "sha256:426edb7a5e8879f1ec54a1864f16b882c2837bfd06eee62f2c982315ee2473ed"}, 1146 | {file = "pyasn1_modules-0.2.8-py3.6.egg", hash = "sha256:cbac4bc38d117f2a49aeedec4407d23e8866ea4ac27ff2cf7fb3e5b570df19e0"}, 1147 | {file = "pyasn1_modules-0.2.8-py3.7.egg", hash = "sha256:c29a5e5cc7a3f05926aff34e097e84f8589cd790ce0ed41b67aed6857b26aafd"}, 1148 | ] 1149 | pycparser = [ 1150 | {file = "pycparser-2.20-py2.py3-none-any.whl", hash = "sha256:7582ad22678f0fcd81102833f60ef8d0e57288b6b5fb00323d101be910e35705"}, 1151 | {file = "pycparser-2.20.tar.gz", hash = "sha256:2d475327684562c3a96cc71adf7dc8c4f0565175cf86b6d7a404ff4c771f15f0"}, 1152 | ] 1153 | pyparsing = [ 1154 | {file = "pyparsing-2.4.7-py2.py3-none-any.whl", hash = "sha256:ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b"}, 1155 | {file = "pyparsing-2.4.7.tar.gz", hash = "sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1"}, 1156 | ] 1157 | python-dateutil = [ 1158 | {file = "python-dateutil-2.8.1.tar.gz", hash = "sha256:73ebfe9dbf22e832286dafa60473e4cd239f8592f699aa5adaf10050e6e1823c"}, 1159 | {file = "python_dateutil-2.8.1-py2.py3-none-any.whl", hash = "sha256:75bb3f31ea686f1197762692a9ee6a7550b59fc6ca3a1f4b5d7e32fb98e2da2a"}, 1160 | ] 1161 | pytorch-lightning = [ 1162 | {file = "pytorch-lightning-0.9.0rc15.tar.gz", hash = "sha256:ac6fb8ec6129d8f165bb02ccc2e62c0cb6e7910ce567647de12fd1d2ea3e68d4"}, 1163 | {file = "pytorch_lightning-0.9.0rc15-py3-none-any.whl", hash = "sha256:b2511308fb55fae70827d545199783217f6a794981b1dc530f1b4895cbf0d2be"}, 1164 | ] 1165 | pytz = [ 1166 | {file = "pytz-2020.1-py2.py3-none-any.whl", hash = "sha256:a494d53b6d39c3c6e44c3bec237336e14305e4f29bbf800b599253057fbb79ed"}, 1167 | {file = "pytz-2020.1.tar.gz", hash = "sha256:c35965d010ce31b23eeb663ed3cc8c906275d6be1a34393a1d73a41febf4a048"}, 1168 | ] 1169 | pyyaml = [ 1170 | {file = "PyYAML-5.3.1-cp27-cp27m-win32.whl", hash = "sha256:74809a57b329d6cc0fdccee6318f44b9b8649961fa73144a98735b0aaf029f1f"}, 1171 | {file = "PyYAML-5.3.1-cp27-cp27m-win_amd64.whl", hash = "sha256:240097ff019d7c70a4922b6869d8a86407758333f02203e0fc6ff79c5dcede76"}, 1172 | {file = "PyYAML-5.3.1-cp35-cp35m-win32.whl", hash = "sha256:4f4b913ca1a7319b33cfb1369e91e50354d6f07a135f3b901aca02aa95940bd2"}, 1173 | {file = "PyYAML-5.3.1-cp35-cp35m-win_amd64.whl", hash = "sha256:cc8955cfbfc7a115fa81d85284ee61147059a753344bc51098f3ccd69b0d7e0c"}, 1174 | {file = "PyYAML-5.3.1-cp36-cp36m-win32.whl", hash = "sha256:7739fc0fa8205b3ee8808aea45e968bc90082c10aef6ea95e855e10abf4a37b2"}, 1175 | {file = "PyYAML-5.3.1-cp36-cp36m-win_amd64.whl", hash = "sha256:69f00dca373f240f842b2931fb2c7e14ddbacd1397d57157a9b005a6a9942648"}, 1176 | {file = "PyYAML-5.3.1-cp37-cp37m-win32.whl", hash = "sha256:d13155f591e6fcc1ec3b30685d50bf0711574e2c0dfffd7644babf8b5102ca1a"}, 1177 | {file = "PyYAML-5.3.1-cp37-cp37m-win_amd64.whl", hash = "sha256:73f099454b799e05e5ab51423c7bcf361c58d3206fa7b0d555426b1f4d9a3eaf"}, 1178 | {file = "PyYAML-5.3.1-cp38-cp38-win32.whl", hash = "sha256:06a0d7ba600ce0b2d2fe2e78453a470b5a6e000a985dd4a4e54e436cc36b0e97"}, 1179 | {file = 
"PyYAML-5.3.1-cp38-cp38-win_amd64.whl", hash = "sha256:95f71d2af0ff4227885f7a6605c37fd53d3a106fcab511b8860ecca9fcf400ee"}, 1180 | {file = "PyYAML-5.3.1.tar.gz", hash = "sha256:b8eac752c5e14d3eca0e6dd9199cd627518cb5ec06add0de9d32baeee6fe645d"}, 1181 | ] 1182 | requests = [ 1183 | {file = "requests-2.24.0-py2.py3-none-any.whl", hash = "sha256:fe75cc94a9443b9246fc7049224f75604b113c36acb93f87b80ed42c44cbb898"}, 1184 | {file = "requests-2.24.0.tar.gz", hash = "sha256:b3559a131db72c33ee969480840fff4bb6dd111de7dd27c8ee1f820f4f00231b"}, 1185 | ] 1186 | requests-oauthlib = [ 1187 | {file = "requests-oauthlib-1.3.0.tar.gz", hash = "sha256:b4261601a71fd721a8bd6d7aa1cc1d6a8a93b4a9f5e96626f8e4d91e8beeaa6a"}, 1188 | {file = "requests_oauthlib-1.3.0-py2.py3-none-any.whl", hash = "sha256:7f71572defaecd16372f9006f33c2ec8c077c3cfa6f5911a9a90202beb513f3d"}, 1189 | {file = "requests_oauthlib-1.3.0-py3.7.egg", hash = "sha256:fa6c47b933f01060936d87ae9327fead68768b69c6c9ea2109c48be30f2d4dbc"}, 1190 | ] 1191 | rsa = [ 1192 | {file = "rsa-4.6-py3-none-any.whl", hash = "sha256:6166864e23d6b5195a5cfed6cd9fed0fe774e226d8f854fcb23b7bbef0350233"}, 1193 | {file = "rsa-4.6.tar.gz", hash = "sha256:109ea5a66744dd859bf16fe904b8d8b627adafb9408753161e766a92e7d681fa"}, 1194 | ] 1195 | scikit-learn = [ 1196 | {file = "scikit-learn-0.23.2.tar.gz", hash = "sha256:20766f515e6cd6f954554387dfae705d93c7b544ec0e6c6a5d8e006f6f7ef480"}, 1197 | {file = "scikit_learn-0.23.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:98508723f44c61896a4e15894b2016762a55555fbf09365a0bb1870ecbd442de"}, 1198 | {file = "scikit_learn-0.23.2-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:a64817b050efd50f9abcfd311870073e500ae11b299683a519fbb52d85e08d25"}, 1199 | {file = "scikit_learn-0.23.2-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:daf276c465c38ef736a79bd79fc80a249f746bcbcae50c40945428f7ece074f8"}, 1200 | {file = "scikit_learn-0.23.2-cp36-cp36m-win32.whl", hash = "sha256:cb3e76380312e1f86abd20340ab1d5b3cc46a26f6593d3c33c9ea3e4c7134028"}, 1201 | {file = "scikit_learn-0.23.2-cp36-cp36m-win_amd64.whl", hash = "sha256:0a127cc70990d4c15b1019680bfedc7fec6c23d14d3719fdf9b64b22d37cdeca"}, 1202 | {file = "scikit_learn-0.23.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:2aa95c2f17d2f80534156215c87bee72b6aa314a7f8b8fe92a2d71f47280570d"}, 1203 | {file = "scikit_learn-0.23.2-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:6c28a1d00aae7c3c9568f61aafeaad813f0f01c729bee4fd9479e2132b215c1d"}, 1204 | {file = "scikit_learn-0.23.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:da8e7c302003dd765d92a5616678e591f347460ac7b53e53d667be7dfe6d1b10"}, 1205 | {file = "scikit_learn-0.23.2-cp37-cp37m-win32.whl", hash = "sha256:d9a1ce5f099f29c7c33181cc4386660e0ba891b21a60dc036bf369e3a3ee3aec"}, 1206 | {file = "scikit_learn-0.23.2-cp37-cp37m-win_amd64.whl", hash = "sha256:914ac2b45a058d3f1338d7736200f7f3b094857758895f8667be8a81ff443b5b"}, 1207 | {file = "scikit_learn-0.23.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7671bbeddd7f4f9a6968f3b5442dac5f22bf1ba06709ef888cc9132ad354a9ab"}, 1208 | {file = "scikit_learn-0.23.2-cp38-cp38-manylinux1_i686.whl", hash = "sha256:d0dcaa54263307075cb93d0bee3ceb02821093b1b3d25f66021987d305d01dce"}, 1209 | {file = "scikit_learn-0.23.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:5ce7a8021c9defc2b75620571b350acc4a7d9763c25b7593621ef50f3bd019a2"}, 1210 | {file = "scikit_learn-0.23.2-cp38-cp38-win32.whl", hash = "sha256:0d39748e7c9669ba648acf40fb3ce96b8a07b240db6888563a7cb76e05e0d9cc"}, 1211 | {file = 
"scikit_learn-0.23.2-cp38-cp38-win_amd64.whl", hash = "sha256:1b8a391de95f6285a2f9adffb7db0892718950954b7149a70c783dc848f104ea"}, 1212 | ] 1213 | scipy = [ 1214 | {file = "scipy-1.5.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cca9fce15109a36a0a9f9cfc64f870f1c140cb235ddf27fe0328e6afb44dfed0"}, 1215 | {file = "scipy-1.5.2-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:1c7564a4810c1cd77fcdee7fa726d7d39d4e2695ad252d7c86c3ea9d85b7fb8f"}, 1216 | {file = "scipy-1.5.2-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:07e52b316b40a4f001667d1ad4eb5f2318738de34597bd91537851365b6c61f1"}, 1217 | {file = "scipy-1.5.2-cp36-cp36m-win32.whl", hash = "sha256:d56b10d8ed72ec1be76bf10508446df60954f08a41c2d40778bc29a3a9ad9bce"}, 1218 | {file = "scipy-1.5.2-cp36-cp36m-win_amd64.whl", hash = "sha256:8e28e74b97fc8d6aa0454989db3b5d36fc27e69cef39a7ee5eaf8174ca1123cb"}, 1219 | {file = "scipy-1.5.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6e86c873fe1335d88b7a4bfa09d021f27a9e753758fd75f3f92d714aa4093768"}, 1220 | {file = "scipy-1.5.2-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:a0afbb967fd2c98efad5f4c24439a640d39463282040a88e8e928db647d8ac3d"}, 1221 | {file = "scipy-1.5.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:eecf40fa87eeda53e8e11d265ff2254729d04000cd40bae648e76ff268885d66"}, 1222 | {file = "scipy-1.5.2-cp37-cp37m-win32.whl", hash = "sha256:315aa2165aca31375f4e26c230188db192ed901761390be908c9b21d8b07df62"}, 1223 | {file = "scipy-1.5.2-cp37-cp37m-win_amd64.whl", hash = "sha256:ec5fe57e46828d034775b00cd625c4a7b5c7d2e354c3b258d820c6c72212a6ec"}, 1224 | {file = "scipy-1.5.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:fc98f3eac993b9bfdd392e675dfe19850cc8c7246a8fd2b42443e506344be7d9"}, 1225 | {file = "scipy-1.5.2-cp38-cp38-manylinux1_i686.whl", hash = "sha256:a785409c0fa51764766840185a34f96a0a93527a0ff0230484d33a8ed085c8f8"}, 1226 | {file = "scipy-1.5.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:0a0e9a4e58a4734c2eba917f834b25b7e3b6dc333901ce7784fd31aefbd37b2f"}, 1227 | {file = "scipy-1.5.2-cp38-cp38-win32.whl", hash = "sha256:dac09281a0eacd59974e24525a3bc90fa39b4e95177e638a31b14db60d3fa806"}, 1228 | {file = "scipy-1.5.2-cp38-cp38-win_amd64.whl", hash = "sha256:92eb04041d371fea828858e4fff182453c25ae3eaa8782d9b6c32b25857d23bc"}, 1229 | {file = "scipy-1.5.2.tar.gz", hash = "sha256:066c513d90eb3fd7567a9e150828d39111ebd88d3e924cdfc9f8ce19ab6f90c9"}, 1230 | ] 1231 | six = [ 1232 | {file = "six-1.15.0-py2.py3-none-any.whl", hash = "sha256:8b74bedcbbbaca38ff6d7491d76f2b06b3592611af620f8426e82dddb04a5ced"}, 1233 | {file = "six-1.15.0.tar.gz", hash = "sha256:30639c035cdb23534cd4aa2dd52c3bf48f06e5f4a941509c8bafd8ce11080259"}, 1234 | ] 1235 | tensorboard = [ 1236 | {file = "tensorboard-2.2.0-py3-none-any.whl", hash = "sha256:bb6bbc75ad2d8511ba6cbd49e4417276979f49866e11841e83da8298727dbaed"}, 1237 | ] 1238 | tensorboard-plugin-wit = [ 1239 | {file = "tensorboard_plugin_wit-1.7.0-py3-none-any.whl", hash = "sha256:ee775f04821185c90d9a0e9c56970ee43d7c41403beb6629385b39517129685b"}, 1240 | ] 1241 | threadpoolctl = [ 1242 | {file = "threadpoolctl-2.1.0-py3-none-any.whl", hash = "sha256:38b74ca20ff3bb42caca8b00055111d74159ee95c4370882bbff2b93d24da725"}, 1243 | {file = "threadpoolctl-2.1.0.tar.gz", hash = "sha256:ddc57c96a38beb63db45d6c159b5ab07b6bced12c45a1f07b2b92f272aebfa6b"}, 1244 | ] 1245 | torch = [ 1246 | {file = "torch-1.6.0-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:7669f4d923b5758e28b521ea749c795ed67ff24b45ba20296bc8cff706d08df8"}, 1247 | {file = 
"torch-1.6.0-cp36-none-macosx_10_9_x86_64.whl", hash = "sha256:728facb972a5952323c6d790c2c5922b2b35c44b0bc7bdfa02f8639727671a0c"}, 1248 | {file = "torch-1.6.0-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:87d65c01d1b70bb46070824f28bfd93c86d3c5c56b90cbbe836a3f2491d91c76"}, 1249 | {file = "torch-1.6.0-cp37-none-macosx_10_9_x86_64.whl", hash = "sha256:3838bd01af7dfb1f78573973f6842ce75b17e8e4f22be99c891dcb7c94bc13f5"}, 1250 | {file = "torch-1.6.0-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:5357873e243bcfa804c32dc341f564e9a4c12addfc9baae4ee857fcc09a0a216"}, 1251 | {file = "torch-1.6.0-cp38-none-macosx_10_9_x86_64.whl", hash = "sha256:4f9a4ad7947cef566afb0a323d99009fe8524f0b0f2ca1fb7ad5de0400381a5b"}, 1252 | ] 1253 | torch-model-archiver = [ 1254 | {file = "torch_model_archiver-0.2.0-py2.py3-none-any.whl", hash = "sha256:82edadaef0d1ccbc65efe13805f5eebb9fbb97c220623801448f059933624dac"}, 1255 | ] 1256 | torchserve = [ 1257 | {file = "torchserve-0.2.0-py2.py3-none-any.whl", hash = "sha256:d4a76fe449eae57f078dc049d7cdca6f87d50880b3dc43b4e70f357d73dd2b6b"}, 1258 | ] 1259 | torchvision = [ 1260 | {file = "torchvision-0.7.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:a70d80bb8749c1e4a46fa56dc2fc857e98d14600841e02cc2fed766daf96c245"}, 1261 | {file = "torchvision-0.7.0-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:14c0bf60fa26aabaea64ef30b8e5d441ee78d1a5eed568c30806af19bbe6b638"}, 1262 | {file = "torchvision-0.7.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:8c8df7e1d1f3d4e088256be1e8c8d3eb90b016302baa4649742d47ae1531da37"}, 1263 | {file = "torchvision-0.7.0-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:0d1a5adfef4387659c7a0af3b72e16caa0c67224a422050ab65184d13ac9fb13"}, 1264 | {file = "torchvision-0.7.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f5686e0a0dd511ac33eb9d6279bd34edd9f282dcb7c8ad21e290882c6206504f"}, 1265 | {file = "torchvision-0.7.0-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:cfa2b367bc9acf20f18b151d0525970279719e81969c17214effe77245875354"}, 1266 | ] 1267 | tqdm = [ 1268 | {file = "tqdm-4.48.2-py2.py3-none-any.whl", hash = "sha256:1a336d2b829be50e46b84668691e0a2719f26c97c62846298dd5ae2937e4d5cf"}, 1269 | {file = "tqdm-4.48.2.tar.gz", hash = "sha256:564d632ea2b9cb52979f7956e093e831c28d441c11751682f84c86fc46e4fd21"}, 1270 | ] 1271 | typing-extensions = [ 1272 | {file = "typing_extensions-3.7.4.2-py2-none-any.whl", hash = "sha256:f8d2bd89d25bc39dabe7d23df520442fa1d8969b82544370e03d88b5a591c392"}, 1273 | {file = "typing_extensions-3.7.4.2-py3-none-any.whl", hash = "sha256:6e95524d8a547a91e08f404ae485bbb71962de46967e1b71a0cb89af24e761c5"}, 1274 | {file = "typing_extensions-3.7.4.2.tar.gz", hash = "sha256:79ee589a3caca649a9bfd2a8de4709837400dfa00b6cc81962a1e6a1815969ae"}, 1275 | ] 1276 | urllib3 = [ 1277 | {file = "urllib3-1.25.10-py2.py3-none-any.whl", hash = "sha256:e7983572181f5e1522d9c98453462384ee92a0be7fac5f1413a1e35c56cc0461"}, 1278 | {file = "urllib3-1.25.10.tar.gz", hash = "sha256:91056c15fa70756691db97756772bb1eb9678fa585d9184f24534b100dc60f4a"}, 1279 | ] 1280 | werkzeug = [ 1281 | {file = "Werkzeug-1.0.1-py2.py3-none-any.whl", hash = "sha256:2de2a5db0baeae7b2d2664949077c2ac63fbd16d98da0ff71837f7d1dea3fd43"}, 1282 | {file = "Werkzeug-1.0.1.tar.gz", hash = "sha256:6c80b1e5ad3665290ea39320b91e1be1e0d5f60652b964a3070216de83d2e47c"}, 1283 | ] 1284 | wheel = [ 1285 | {file = "wheel-0.35.1-py2.py3-none-any.whl", hash = "sha256:497add53525d16c173c2c1c733b8f655510e909ea78cc0e29d374243544b77a2"}, 1286 | {file = "wheel-0.35.1.tar.gz", hash = 
"sha256:99a22d87add3f634ff917310a3d87e499f19e663413a52eb9232c447aa646c9f"}, 1287 | ] 1288 | zipp = [ 1289 | {file = "zipp-3.1.0-py3-none-any.whl", hash = "sha256:aa36550ff0c0b7ef7fa639055d797116ee891440eac1a56f378e2d3179e0320b"}, 1290 | {file = "zipp-3.1.0.tar.gz", hash = "sha256:c599e4d75c98f6798c509911d08a22e6c021d074469042177c8c86fb92eefd96"}, 1291 | ] 1292 | "zope.event" = [ 1293 | {file = "zope.event-4.4-py2.py3-none-any.whl", hash = "sha256:d8e97d165fd5a0997b45f5303ae11ea3338becfe68c401dd88ffd2113fe5cae7"}, 1294 | {file = "zope.event-4.4.tar.gz", hash = "sha256:69c27debad9bdacd9ce9b735dad382142281ac770c4a432b533d6d65c4614bcf"}, 1295 | ] 1296 | "zope.interface" = [ 1297 | {file = "zope.interface-5.1.0-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:645a7092b77fdbc3f68d3cc98f9d3e71510e419f54019d6e282328c0dd140dcd"}, 1298 | {file = "zope.interface-5.1.0-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:d1fe9d7d09bb07228650903d6a9dc48ea649e3b8c69b1d263419cc722b3938e8"}, 1299 | {file = "zope.interface-5.1.0-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:a744132d0abaa854d1aad50ba9bc64e79c6f835b3e92521db4235a1991176813"}, 1300 | {file = "zope.interface-5.1.0-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:461d4339b3b8f3335d7e2c90ce335eb275488c587b61aca4b305196dde2ff086"}, 1301 | {file = "zope.interface-5.1.0-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:269b27f60bcf45438e8683269f8ecd1235fa13e5411de93dae3b9ee4fe7f7bc7"}, 1302 | {file = "zope.interface-5.1.0-cp27-cp27m-win32.whl", hash = "sha256:6874367586c020705a44eecdad5d6b587c64b892e34305bb6ed87c9bbe22a5e9"}, 1303 | {file = "zope.interface-5.1.0-cp27-cp27m-win_amd64.whl", hash = "sha256:8149ded7f90154fdc1a40e0c8975df58041a6f693b8f7edcd9348484e9dc17fe"}, 1304 | {file = "zope.interface-5.1.0-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:0103cba5ed09f27d2e3de7e48bb320338592e2fabc5ce1432cf33808eb2dfd8b"}, 1305 | {file = "zope.interface-5.1.0-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:b0becb75418f8a130e9d465e718316cd17c7a8acce6fe8fe07adc72762bee425"}, 1306 | {file = "zope.interface-5.1.0-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:fb55c182a3f7b84c1a2d6de5fa7b1a05d4660d866b91dbf8d74549c57a1499e8"}, 1307 | {file = "zope.interface-5.1.0-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:4f98f70328bc788c86a6a1a8a14b0ea979f81ae6015dd6c72978f1feff70ecda"}, 1308 | {file = "zope.interface-5.1.0-cp35-cp35m-macosx_10_6_intel.whl", hash = "sha256:af2c14efc0bb0e91af63d00080ccc067866fb8cbbaca2b0438ab4105f5e0f08d"}, 1309 | {file = "zope.interface-5.1.0-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:f68bf937f113b88c866d090fea0bc52a098695173fc613b055a17ff0cf9683b6"}, 1310 | {file = "zope.interface-5.1.0-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:d7804f6a71fc2dda888ef2de266727ec2f3915373d5a785ed4ddc603bbc91e08"}, 1311 | {file = "zope.interface-5.1.0-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:74bf0a4f9091131de09286f9a605db449840e313753949fe07c8d0fe7659ad1e"}, 1312 | {file = "zope.interface-5.1.0-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:ba4261c8ad00b49d48bbb3b5af388bb7576edfc0ca50a49c11dcb77caa1d897e"}, 1313 | {file = "zope.interface-5.1.0-cp35-cp35m-win32.whl", hash = "sha256:ebb4e637a1fb861c34e48a00d03cffa9234f42bef923aec44e5625ffb9a8e8f9"}, 1314 | {file = "zope.interface-5.1.0-cp35-cp35m-win_amd64.whl", hash = "sha256:911714b08b63d155f9c948da2b5534b223a1a4fc50bb67139ab68b277c938578"}, 1315 | {file = "zope.interface-5.1.0-cp36-cp36m-macosx_10_6_intel.whl", hash = 
"sha256:e74671e43ed4569fbd7989e5eecc7d06dc134b571872ab1d5a88f4a123814e9f"}, 1316 | {file = "zope.interface-5.1.0-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:b1d2ed1cbda2ae107283befd9284e650d840f8f7568cb9060b5466d25dc48975"}, 1317 | {file = "zope.interface-5.1.0-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:ef739fe89e7f43fb6494a43b1878a36273e5924869ba1d866f752c5812ae8d58"}, 1318 | {file = "zope.interface-5.1.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:eb9b92f456ff3ec746cd4935b73c1117538d6124b8617bc0fe6fda0b3816e345"}, 1319 | {file = "zope.interface-5.1.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:dcefc97d1daf8d55199420e9162ab584ed0893a109f45e438b9794ced44c9fd0"}, 1320 | {file = "zope.interface-5.1.0-cp36-cp36m-win32.whl", hash = "sha256:f40db0e02a8157d2b90857c24d89b6310f9b6c3642369852cdc3b5ac49b92afc"}, 1321 | {file = "zope.interface-5.1.0-cp36-cp36m-win_amd64.whl", hash = "sha256:14415d6979356629f1c386c8c4249b4d0082f2ea7f75871ebad2e29584bd16c5"}, 1322 | {file = "zope.interface-5.1.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5e86c66a6dea8ab6152e83b0facc856dc4d435fe0f872f01d66ce0a2131b7f1d"}, 1323 | {file = "zope.interface-5.1.0-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:39106649c3082972106f930766ae23d1464a73b7d30b3698c986f74bf1256a34"}, 1324 | {file = "zope.interface-5.1.0-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:8cccf7057c7d19064a9e27660f5aec4e5c4001ffcf653a47531bde19b5aa2a8a"}, 1325 | {file = "zope.interface-5.1.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:562dccd37acec149458c1791da459f130c6cf8902c94c93b8d47c6337b9fb826"}, 1326 | {file = "zope.interface-5.1.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:da2844fba024dd58eaa712561da47dcd1e7ad544a257482392472eae1c86d5e5"}, 1327 | {file = "zope.interface-5.1.0-cp37-cp37m-win32.whl", hash = "sha256:1ae4693ccee94c6e0c88a4568fb3b34af8871c60f5ba30cf9f94977ed0e53ddd"}, 1328 | {file = "zope.interface-5.1.0-cp37-cp37m-win_amd64.whl", hash = "sha256:dd98c436a1fc56f48c70882cc243df89ad036210d871c7427dc164b31500dc11"}, 1329 | {file = "zope.interface-5.1.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1b87ed2dc05cb835138f6a6e3595593fea3564d712cb2eb2de963a41fd35758c"}, 1330 | {file = "zope.interface-5.1.0-cp38-cp38-manylinux1_i686.whl", hash = "sha256:558a20a0845d1a5dc6ff87cd0f63d7dac982d7c3be05d2ffb6322a87c17fa286"}, 1331 | {file = "zope.interface-5.1.0-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7b726194f938791a6691c7592c8b9e805fc6d1b9632a833b9c0640828cd49cbc"}, 1332 | {file = "zope.interface-5.1.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:60a207efcd8c11d6bbeb7862e33418fba4e4ad79846d88d160d7231fcb42a5ee"}, 1333 | {file = "zope.interface-5.1.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:b054eb0a8aa712c8e9030065a59b5e6a5cf0746ecdb5f087cca5ec7685690c19"}, 1334 | {file = "zope.interface-5.1.0-cp38-cp38-win32.whl", hash = "sha256:27d287e61639d692563d9dab76bafe071fbeb26818dd6a32a0022f3f7ca884b5"}, 1335 | {file = "zope.interface-5.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:a5f8f85986197d1dd6444763c4a15c991bfed86d835a1f6f7d476f7198d5f56a"}, 1336 | {file = "zope.interface-5.1.0.tar.gz", hash = "sha256:40e4c42bd27ed3c11b2c983fecfb03356fae1209de10686d03c02c8696a1d90e"}, 1337 | ] 1338 | -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [tool.poetry] 2 | name = "creating_marketplace_products" 3 | version = "0.1.0" 4 | description = "" 5 | 
authors = ["Thomas Chaton "] 6 | 7 | [tool.poetry.dependencies] 8 | python = "3.7.8" 9 | torch = "1.6.0" 10 | hydra-core = "1.0.0rc2" 11 | numpy = "^1.19.1" 12 | scipy = "^1.5.2" 13 | scikit-learn = "^0.23.2" 14 | pandas = "^1.1.0" 15 | flask = "^1.1.2" 16 | gevent = "^20.6.2" 17 | gunicorn = "^20.0.4" 18 | pytorch-lightning = "0.9.0rc15" 19 | torchserve = "^0.2.0" 20 | torch-model-archiver = "^0.2.0" 21 | torchvision = "^0.7.0" 22 | 23 | [tool.poetry.dev-dependencies] 24 | 25 | [build-system] 26 | requires = ["poetry>=0.12"] 27 | build-backend = "poetry.masonry.api" 28 | -------------------------------------------------------------------------------- /src/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tchaton/sagemaker-pytorch-boilerplate/0f64d71a8080d1a457ec1b4586951796b2da8521/src/__init__.py -------------------------------------------------------------------------------- /src/app.py: -------------------------------------------------------------------------------- 1 | # This is the file that implements a flask server to do inferences. It's the file that you will modify to 2 | # implement the scoring for your own algorithm. 3 | 4 | from __future__ import print_function 5 | 6 | import os 7 | import json 8 | import pickle 9 | from io import StringIO 10 | import sys 11 | import signal 12 | import traceback 13 | import flask 14 | import pandas as pd 15 | from src.models.model_handler import ModelHandler  # assumed import path, matching the repo's absolute imports 16 | # The flask app for serving predictions 17 | app = flask.Flask(__name__) 18 | model_handler = ModelHandler() 19 | @app.route("/ping", methods=["GET"]) 20 | def ping(): 21 | """Determine if the container is working and healthy. In this sample container, we declare 22 | it healthy if we can load the model successfully.""" 23 | health = model_handler.get_model() 24 | 25 | status = 200 if health else 404 26 | return flask.Response(response="\n", status=status, mimetype="application/json") 27 | 28 | 29 | @app.route("/invocations", methods=["POST"]) 30 | def transformation(): 31 | """Do an inference on a single batch of data. In this sample server, we take data as CSV, convert 32 | it to a pandas data frame for internal use and then convert the predictions back to CSV (which really 33 | just means one prediction per line, since there's a single column). 
34 | """ 35 | data = None 36 | 37 | # Convert from CSV to pandas 38 | if flask.request.content_type == "text/csv": 39 | data = flask.request.data.decode("utf-8") 40 | s = StringIO(data) 41 | data = pd.read_csv(s, header=None) 42 | else: 43 | return flask.Response( 44 | response="This predictor only supports CSV data", 45 | status=415, 46 | mimetype="text/plain", 47 | ) 48 | 49 | print("Invoked with {} records".format(data.shape[0])) 50 | 51 | # Do the prediction 52 | predictions = model_handler.predict(data) 53 | 54 | # Convert from numpy back to CSV 55 | out = StringIO() 56 | pd.DataFrame({"results": predictions}).to_csv(out, header=False, index=False) 57 | result = out.getvalue() 58 | 59 | return flask.Response(response=result, status=200, mimetype="text/csv") 60 | -------------------------------------------------------------------------------- /src/configs/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tchaton/sagemaker-pytorch-boilerplate/0f64d71a8080d1a457ec1b4586951796b2da8521/src/configs/__init__.py -------------------------------------------------------------------------------- /src/configs/train_config.py: -------------------------------------------------------------------------------- 1 | from dataclasses import dataclass, field 2 | from typing import * 3 | 4 | from hydra.core.config_store import ConfigStore 5 | from omegaconf import MISSING 6 | 7 | defaults = [ 8 | # An error will be raised if the user forgets to specify `db=...` 9 | {"model": MISSING}, 10 | {"dataset": MISSING}, 11 | ] 12 | 13 | 14 | @dataclass 15 | class Trainer: 16 | max_epochs: int = 100 17 | gpus: int = 0 18 | 19 | 20 | @dataclass 21 | class ObjectConf(Dict[str, Any]): 22 | # class, class method or function name 23 | target: str = MISSING 24 | # parameters to pass to target when calling it 25 | params: Any = field(default_factory=dict) 26 | 27 | 28 | @dataclass 29 | class Config: 30 | defaults: List[Any] = field(default_factory=lambda: defaults) 31 | # Hydra will populate this field based on the defaults list 32 | model: Any = MISSING 33 | dataset: Any = MISSING 34 | trainer: Trainer = Trainer() 35 | mode: str = "local" 36 | 37 | 38 | cs = ConfigStore.instance() 39 | cs.store(name="config", node=Config) 40 | cs.store(group="model", name="simple_mlp", node=ObjectConf) 41 | cs.store(group="dataset", name="mnist", node=ObjectConf) 42 | cs.store(group="dataset", name="iris", node=ObjectConf) 43 | cs.store(group="trainer", name="default", node=Trainer) 44 | 45 | -------------------------------------------------------------------------------- /src/datasets/__init__.py: -------------------------------------------------------------------------------- 1 | from .mnist import * 2 | from .iris import * 3 | -------------------------------------------------------------------------------- /src/datasets/base_dataset.py: -------------------------------------------------------------------------------- 1 | import os 2 | import numpy as np 3 | import pandas as pd 4 | import torch 5 | from pytorch_lightning import LightningDataModule 6 | from torch.utils.data import DataLoader, random_split 7 | from src.paths import Paths 8 | 9 | 10 | class BaseSagemakerDataset(LightningDataModule): 11 | 12 | name = "BaseSagemaker" 13 | 14 | def __init__( 15 | self, val_split: float = 0.3, P: Paths = None, *args, **kwargs, 16 | ): 17 | super().__init__(*args, **kwargs) 18 | self._P = P 19 | self._val_split = val_split 20 | assert self._P is not None 21 | 
22 | @property 23 | def num_features(self): 24 | raise NotImplementedError 25 | 26 | @property 27 | def num_classes(self): 28 | raise NotImplementedError 29 | 30 | def _labelize(self, raw_data): 31 | raise NotImplementedError 32 | 33 | def _prepare_splitted_data(self, path_train, path_val): 34 | train_raw_data = pd.read_csv(path_train, header=None).values 35 | 36 | val_raw_data = pd.read_csv(path_val, header=None).values 37 | 38 | self._labelize(train_raw_data) 39 | self._labelize(val_raw_data) 40 | self.dataset_train = torch.from_numpy(train_raw_data.astype(np.float)).float() 41 | self.dataset_val = torch.from_numpy(val_raw_data.astype(np.float)).float() 42 | 43 | def _prepare_no_splitted_data(self): 44 | input_files = [ 45 | os.path.join(self._P.TRAINING_PATH, filename) 46 | for filename in os.listdir(self._P.TRAINING_PATH) 47 | ] 48 | if len(input_files) == 0: 49 | raise ValueError( 50 | ( 51 | "There are no files in {}.\n" 52 | + "This usually indicates that the channel ({}) was incorrectly specified,\n" 53 | + "the data specification in S3 was incorrectly specified or the role specified\n" 54 | + "does not have permission to access the data." 55 | ).format(self._P.TRAINING_PATH, self._P.CHANNEL_NAME) 56 | ) 57 | 58 | raw_data = [pd.read_csv(file, header=None) for file in input_files] 59 | raw_data = pd.concat(raw_data).values 60 | self._labelize(raw_data) 61 | raw_data = torch.from_numpy(raw_data.astype(np.float)).float() 62 | raw_data_length = len(raw_data) 63 | 64 | train_size = int(raw_data_length * (1 - self._val_split)) 65 | val_size = raw_data_length - train_size 66 | 67 | self.dataset_train, self.dataset_val = random_split( 68 | raw_data, 69 | [train_size, val_size], 70 | generator=torch.Generator().manual_seed(self._seed), 71 | ) 72 | 73 | def prepare_data(self): 74 | 75 | path_train = os.path.join(self._P.TRAINING_PATH, "train.csv") 76 | path_val = os.path.join(self._P.TRAINING_PATH, "val.csv") 77 | 78 | if os.path.exists(path_train) and os.path.exists(path_val): 79 | self._prepare_splitted_data(path_train, path_val) 80 | 81 | else: 82 | self._prepare_no_splitted_data() 83 | 84 | def train_dataloader(self, batch_size=32, transforms=None): 85 | loader = DataLoader( 86 | self.dataset_train, 87 | batch_size=batch_size, 88 | shuffle=True, 89 | num_workers=self.num_workers, 90 | drop_last=True, 91 | pin_memory=True, 92 | ) 93 | return loader 94 | 95 | def val_dataloader(self, batch_size=32, transforms=None): 96 | 97 | loader = DataLoader( 98 | self.dataset_val, 99 | batch_size=batch_size, 100 | shuffle=False, 101 | num_workers=self.num_workers, 102 | drop_last=True, 103 | pin_memory=True, 104 | ) 105 | return loader 106 | -------------------------------------------------------------------------------- /src/datasets/iris.py: -------------------------------------------------------------------------------- 1 | import os 2 | import numpy as np 3 | import pandas as pd 4 | import torch 5 | from pytorch_lightning import LightningDataModule 6 | from torch.utils.data import DataLoader, random_split 7 | from src.datasets.base_dataset import BaseSagemakerDataset 8 | 9 | 10 | class IrisDataset(BaseSagemakerDataset): 11 | 12 | name = "IRIS" 13 | 14 | def __init__( 15 | self, 16 | val_split: float = 0.3, 17 | num_workers: int = 16, 18 | seed: int = 42, 19 | P=None, 20 | *args, 21 | **kwargs, 22 | ): 23 | super().__init__(val_split=val_split, P=P, *args, **kwargs) 24 | self.num_workers = num_workers 25 | self._seed = seed 26 | 27 | @property 28 | def num_features(self): 29 | return 4 30 | 31 | @property 32 | 
def num_classes(self): 33 | return 3 34 | 35 | @property 36 | def hyper_parameters(self): 37 | return {"num_features": self.num_features, "num_classes": self.num_classes} 38 | 39 | def _labelize(self, raw_data): 40 | labels = raw_data[:, 0] 41 | self.unique_labels = np.unique(labels) 42 | self._num_classes = len(self.unique_labels) 43 | for idx, u in enumerate(self.unique_labels): 44 | labels[labels == u] = idx 45 | 46 | @staticmethod 47 | def process(raw_data): 48 | return torch.from_numpy(raw_data.values.astype(np.float)).float() 49 | 50 | -------------------------------------------------------------------------------- /src/datasets/mnist.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from pytorch_lightning import LightningDataModule 3 | from torch.utils.data import DataLoader, random_split 4 | from torchvision import transforms as transform_lib 5 | from torchvision.datasets import MNIST 6 | 7 | 8 | class MNISTDataset(LightningDataModule): 9 | 10 | name = "mnist" 11 | 12 | def __init__( 13 | self, 14 | data_dir: str, 15 | val_split: int = 5000, 16 | num_workers: int = 16, 17 | normalize: bool = False, 18 | seed: int = 42, 19 | P=None, 20 | *args, 21 | **kwargs, 22 | ): 23 | super().__init__(*args, **kwargs) 24 | self.dims = (1, 28, 28) 25 | self.data_dir = data_dir 26 | self.val_split = val_split 27 | self.num_workers = num_workers 28 | self.normalize = normalize 29 | self.seed = seed 30 | 31 | @property 32 | def num_features(self): 33 | """ 34 | Return: 35 | 10 36 | """ 37 | return 28 * 28 38 | 39 | @property 40 | def num_classes(self): 41 | """ 42 | Return: 43 | 10 44 | """ 45 | return 10 46 | 47 | @property 48 | def hyper_parameters(self): 49 | return {"num_features": self.num_features, "num_classes": self.num_classes} 50 | 51 | def prepare_data(self): 52 | """ 53 | Saves MNIST files to data_dir 54 | """ 55 | MNIST( 56 | self.data_dir, train=True, download=True, transform=transform_lib.ToTensor() 57 | ) 58 | MNIST( 59 | self.data_dir, 60 | train=False, 61 | download=True, 62 | transform=transform_lib.ToTensor(), 63 | ) 64 | 65 | def train_dataloader(self, batch_size=32, transforms=None): 66 | """ 67 | MNIST train set removes a subset to use for validation 68 | Args: 69 | batch_size: size of batch 70 | transforms: custom transforms 71 | """ 72 | transforms = transforms or self.train_transforms or self._default_transforms() 73 | 74 | dataset = MNIST(self.data_dir, train=True, download=False, transform=transforms) 75 | train_length = len(dataset) 76 | dataset_train, _ = random_split( 77 | dataset, 78 | [train_length - self.val_split, self.val_split], 79 | generator=torch.Generator().manual_seed(self.seed), 80 | ) 81 | loader = DataLoader( 82 | dataset_train, 83 | batch_size=batch_size, 84 | shuffle=True, 85 | num_workers=self.num_workers, 86 | drop_last=True, 87 | pin_memory=True, 88 | ) 89 | return loader 90 | 91 | def val_dataloader(self, batch_size=32, transforms=None): 92 | """ 93 | MNIST val set uses a subset of the training set for validation 94 | Args: 95 | batch_size: size of batch 96 | transforms: custom transforms 97 | """ 98 | transforms = transforms or self.val_transforms or self._default_transforms() 99 | dataset = MNIST(self.data_dir, train=True, download=True, transform=transforms) 100 | train_length = len(dataset) 101 | _, dataset_val = random_split( 102 | dataset, 103 | [train_length - self.val_split, self.val_split], 104 | generator=torch.Generator().manual_seed(self.seed), 105 | ) 106 | loader = DataLoader( 107 
| dataset_val, 108 | batch_size=batch_size, 109 | shuffle=False, 110 | num_workers=self.num_workers, 111 | drop_last=True, 112 | pin_memory=True, 113 | ) 114 | return loader 115 | 116 | def test_dataloader(self, batch_size=32, transforms=None): 117 | """ 118 | MNIST test set uses the test split 119 | Args: 120 | batch_size: size of batch 121 | transforms: custom transforms 122 | """ 123 | transforms = transforms or self.val_transforms or self._default_transforms() 124 | 125 | dataset = MNIST( 126 | self.data_dir, train=False, download=False, transform=transforms 127 | ) 128 | loader = DataLoader( 129 | dataset, 130 | batch_size=batch_size, 131 | shuffle=False, 132 | num_workers=self.num_workers, 133 | drop_last=True, 134 | pin_memory=True, 135 | ) 136 | return loader 137 | 138 | def _default_transforms(self): 139 | if self.normalize: 140 | mnist_transforms = transform_lib.Compose( 141 | [ 142 | transform_lib.ToTensor(), 143 | transform_lib.Normalize(mean=(0.5,), std=(0.5,)), 144 | ] 145 | ) 146 | else: 147 | mnist_transforms = transform_lib.ToTensor() 148 | 149 | return mnist_transforms 150 | -------------------------------------------------------------------------------- /src/models/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tchaton/sagemaker-pytorch-boilerplate/0f64d71a8080d1a457ec1b4586951796b2da8521/src/models/__init__.py -------------------------------------------------------------------------------- /src/models/model_handler.py: -------------------------------------------------------------------------------- 1 | import os 2 | import os.path as osp 3 | import random 4 | import torch 5 | import hydra 6 | import pytorch_lightning as pl 7 | from omegaconf import OmegaConf, DictConfig 8 | from src.paths import Paths 9 | from src.configs.train_config import * 10 | from hydra.experimental import compose, initialize 11 | 12 | 13 | class ModelHandler(object): 14 | model = None # Where we keep the model when it's loaded 15 | data_module_cls = None 16 | 17 | @classmethod 18 | def get_model(cls): 19 | """Get the model object for this instance, loading it if it's not already loaded.""" 20 | if cls.model == None: 21 | initialize( 22 | config_path="../../conf", strict=True, 23 | ) 24 | cfg = compose("config.yaml") 25 | P = Paths("aws") 26 | model_cls = hydra.utils.get_class(cfg.model.target) 27 | cls.model = model_cls.load_from_checkpoint(P.TRAINER_CHECKPOINT_PATH) 28 | cls.model.freeze() 29 | cls.data_module_cls = hydra.utils.get_class(cfg.dataset.target) 30 | return cls.model 31 | 32 | @classmethod 33 | def predict(cls, input): 34 | """For the input, do the predictions and return them. 35 | Args: 36 | input (a pandas dataframe): The data on which to do the predictions. 
There will be 37 | one prediction per row in the dataframe""" 38 | model = cls.get_model() 39 | input = cls.data_module_cls.process(input) 40 | return model.model_fn(input).numpy() 41 | -------------------------------------------------------------------------------- /src/models/simple_mlp.py: -------------------------------------------------------------------------------- 1 | import os 2 | import torch 3 | from torch.nn import functional as F 4 | import pytorch_lightning as pl 5 | from argparse import Namespace 6 | 7 | 8 | class Model(pl.LightningModule): 9 | def __init__(self, *args, **kwargs): 10 | super().__init__() 11 | 12 | self.save_hyperparameters() 13 | self.l1 = torch.nn.Linear(kwargs["num_features"], kwargs["num_classes"]) 14 | 15 | def forward(self, x): 16 | return F.log_softmax(self.l1(x.view(x.size(0), -1)), -1) 17 | 18 | def model_fn(self, x): 19 | return self.forward(x).argmax(-1) 20 | 21 | def training_step(self, batch, batch_nb): 22 | if isinstance(batch, (list, tuple)): 23 | x, y = batch 24 | else: 25 | x = batch[:, 1:] 26 | y = batch[:, 0].long() 27 | loss = F.nll_loss(self(x), y) 28 | tensorboard_logs = {"train_loss": loss} 29 | return {"loss": loss, "log": tensorboard_logs} 30 | 31 | def configure_optimizers(self): 32 | return torch.optim.Adam(self.parameters(), lr=0.02) 33 | -------------------------------------------------------------------------------- /src/paths.py: -------------------------------------------------------------------------------- 1 | import os 2 | import os.path as osp 3 | 4 | # These are the paths to where SageMaker mounts interesting things in your container. 5 | LOCAL_PREFIX = os.path.join( 6 | os.path.dirname(os.path.realpath(__file__)), "..", "local_test", "test_dir", 7 | ) 8 | 9 | CONFIG_PATH = os.path.join( 10 | os.path.dirname(os.path.realpath(__file__)), "..", "conf", "config.yaml" 11 | ) 12 | 13 | 14 | class Paths: 15 | 16 | AWS_PREFIX = "/opt/ml/" 17 | CONFIG_PATH = "/opt/program/conf/config.yaml" 18 | MODEL_NAME = "model.ckpt" 19 | TRAINER_NAME = "trainer.ckpt" 20 | 21 | def __init__(self, mode): 22 | self.MODE = mode 23 | if mode == "local": 24 | self._build(LOCAL_PREFIX, CONFIG_PATH) 25 | else: 26 | self._build(self.AWS_PREFIX, self.CONFIG_PATH) 27 | 28 | def _build(self, prefix, config_path): 29 | self.INPUT_PATH = osp.join(prefix, "input/data") 30 | self.OUTPUT_PATH = osp.join(prefix, "output") 31 | self.MODEL_PATH = osp.join(prefix, "model") 32 | self.PARAM_PATH = osp.join(prefix, "input/config/hyperparameters.json") 33 | self.CHANNEL_NAME = "training" 34 | self.TRAINING_PATH = os.path.join(self.INPUT_PATH, self.CHANNEL_NAME) 35 | self.MODEL_CHECKPOINT_PATH = os.path.join(self.MODEL_PATH, self.MODEL_NAME) 36 | self.TRAINER_CHECKPOINT_PATH = os.path.join(self.MODEL_PATH, self.TRAINER_NAME) 37 | self.CONFIG_PATH = config_path 38 | 39 | def __repr__(self): 40 | msg = "" 41 | for key, item in self.__dict__.items(): 42 | if "__" not in key: 43 | msg += "{}: {} \n".format(key, item) 44 | return msg 45 | 46 | -------------------------------------------------------------------------------- /src/training.py: -------------------------------------------------------------------------------- 1 | import os 2 | import os.path as osp 3 | import random 4 | import torch 5 | import hydra 6 | from pytorch_lightning import Trainer 7 | from pytorch_lightning.callbacks import ModelCheckpoint 8 | from src.datasets import * 9 | from src.paths import Paths 10 | 11 | 12 | def train(cfg): 13 | 14 | P = Paths(cfg.mode) 15 | print(P) 16 | 17 | data_module = 
hydra.utils.instantiate(cfg.dataset, P=P) 18 | model = hydra.utils.instantiate(cfg.model, **data_module.hyper_parameters) 19 | 20 | checkpoint_callback = ModelCheckpoint(filepath=P.MODEL_PATH,) 21 | 22 | trainer = Trainer(checkpoint_callback=checkpoint_callback, max_epochs=2) 23 | trainer.fit(model, data_module) 24 | trainer.save_checkpoint(P.TRAINER_CHECKPOINT_PATH) 25 | new_model = model.__class__.load_from_checkpoint(P.TRAINER_CHECKPOINT_PATH) 26 | print(new_model) 27 | print("Training complete.") 28 | -------------------------------------------------------------------------------- /train: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | # A sample training component that trains a simple scikit-learn decision tree model. 4 | # This implementation works in File mode and makes no assumptions about the input file names. 5 | # Input is specified as CSV with a data point in each row and the labels in the first column. 6 | 7 | from __future__ import print_function 8 | 9 | import os 10 | import hydra 11 | 12 | os.environ["HYDRA_FULL_ERROR"] = "1" 13 | import os.path as osp 14 | import json 15 | import pickle 16 | import sys 17 | import traceback 18 | from omegaconf import DictConfig 19 | 20 | from src.configs.train_config import * 21 | from src.training import train 22 | 23 | 24 | # https://hydra.cc/docs/next/tutorials/structured_config/hierarchical_static_config 25 | @hydra.main(config_path="conf", config_name="config") 26 | def my_app(cfg: DictConfig) -> None: 27 | print(cfg.pretty()) 28 | train(cfg) 29 | 30 | 31 | if __name__ == "__main__": 32 | my_app() 33 | -------------------------------------------------------------------------------- /workflow.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Building your own container as Algorithm / Model Package\n", 8 | "\n", 9 | "With Amazon SageMaker, you can package your own algorithms that can than be trained and deployed in the SageMaker environment. This notebook will guide you through an example that shows you how to build a Docker container for SageMaker and use it for training and inference.\n", 10 | "\n", 11 | "This is an extension of the [scikit-bring-your-own notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb). We append specific steps that help you create a new Algorithm / Model Package SageMaker entities, which can be sold on AWS Marketplace\n", 12 | "\n", 13 | "By packaging an algorithm in a container, you can bring almost any code to the Amazon SageMaker environment, regardless of programming language, environment, framework, or dependencies. \n", 14 | "\n", 15 | "1. [Building your own algorithm container](#Building-your-own-algorithm-container)\n", 16 | " 1. [When should I build my own algorithm container?](#When-should-I-build-my-own-algorithm-container?)\n", 17 | " 1. [Permissions](#Permissions)\n", 18 | " 1. [The example](#The-example)\n", 19 | " 1. [The presentation](#The-presentation)\n", 20 | "1. [Part 1: Packaging and Uploading your Algorithm for use with Amazon SageMaker](#Part-1:-Packaging-and-Uploading-your-Algorithm-for-use-with-Amazon-SageMaker)\n", 21 | " 1. [An overview of Docker](#An-overview-of-Docker)\n", 22 | " 1. 
[How Amazon SageMaker runs your Docker container](#How-Amazon-SageMaker-runs-your-Docker-container)\n", 23 | " 1. [Running your container during training](#Running-your-container-during-training)\n", 24 | " 1. [The input](#The-input)\n", 25 | " 1. [The output](#The-output)\n", 26 | " 1. [Running your container during hosting](#Running-your-container-during-hosting)\n", 27 | " 1. [The parts of the sample container](#The-parts-of-the-sample-container)\n", 28 | " 1. [The Dockerfile](#The-Dockerfile)\n", 29 | " 1. [Building and registering the container](#Building-and-registering-the-container)\n", 30 | " 1. [Testing your algorithm on your local machine or on an Amazon SageMaker notebook instance](#Testing-your-algorithm-on-your-local-machine-or-on-an-Amazon-SageMaker-notebook-instance)\n", 31 | "1. [Part 2: Training and Hosting your Algorithm in Amazon SageMaker](#Part-2:-Training-and-Hosting-your-Algorithm-in-Amazon-SageMaker)\n", 32 | " 1. [Set up the environment](#Set-up-the-environment)\n", 33 | " 1. [Create the session](#Create-the-session)\n", 34 | " 1. [Upload the data for training](#Upload-the-data-for-training)\n", 35 | " 1. [Create an estimator and fit the model](#Create-an-estimator-and-fit-the-model)\n", 36 | " 1. [Run a Batch Transform Job](#Batch-Transform-Job)\n", 37 | " 1. [Deploy the model](#Deploy-the-model)\n", 38 | " 1. [Optional cleanup](#Cleanup-Endpoint)\n", 39 | "1. [Part 3: Package your resources as an Amazon SageMaker Algorithm](#Part-3---Package-your-resources-as-an-Amazon-SageMaker-Algorithm)\n", 40 | " 1. [Algorithm Definition](#Algorithm-Definition)\n", 41 | "1. [Part 4: Package your resources as an Amazon SageMaker ModelPackage](#Part-4---Package-your-resources-as-an-Amazon-SageMaker-ModelPackage)\n", 42 | " 1. [Model Package Definition](#Model-Package-Definition)\n", 43 | "1. [Debugging Creation Issues](#Debugging-Creation-Issues)\n", 44 | "1. [List on AWS Marketplace](#List-on-AWS-Marketplace)\n", 45 | "\n", 46 | "## When should I build my own algorithm container?\n", 47 | "\n", 48 | "You may not need to create a container to bring your own code to Amazon SageMaker. When you are using a framework (such as Apache MXNet or TensorFlow) that has direct support in SageMaker, you can simply supply the Python code that implements your algorithm using the SDK entry points for that framework. This set of frameworks is continually expanding, so we recommend that you check the current list if your algorithm is written in a common machine learning environment.\n", 49 | "\n", 50 | "Even if there is direct SDK support for your environment or framework, you may find it more effective to build your own container. If the code that implements your algorithm is quite complex on its own or you need special additions to the framework, building your own container may be the right choice.\n", 51 | "\n", 52 | "If there isn't direct SDK support for your environment, don't worry. You'll see in this walk-through that building your own container is quite straightforward.\n", 53 | "\n", 54 | "## Permissions\n", 55 | "\n", 56 | "Running this notebook requires permissions in addition to the normal `SageMakerFullAccess` permissions. This is because we'll creating new repositories in Amazon ECR. The easiest way to add these permissions is simply to add the managed policy `AmazonEC2ContainerRegistryFullAccess` to the role that you used to start your notebook instance. 
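If you prefer to grant that permission from code rather than the console, a minimal boto3 sketch is shown below; the role name is a placeholder you would replace with the role attached to your own notebook instance.

```python
# Hypothetical sketch: attach the managed ECR policy to your notebook's execution role.
# "MySageMakerNotebookRole" is a placeholder, not a role defined by this project.
import boto3

iam = boto3.client("iam")
iam.attach_role_policy(
    RoleName="MySageMakerNotebookRole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess",
)
```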
There's no need to restart your notebook instance when you do this, the new permissions will be available immediately.\n", 57 | "\n", 58 | "## The example\n", 59 | "\n", 60 | "Here, we'll show how to package a simple Python example which showcases the [decision tree][] algorithm from the widely used [scikit-learn][] machine learning package. The example is purposefully fairly trivial since the point is to show the surrounding structure that you'll want to add to your own code so you can train and host it in Amazon SageMaker.\n", 61 | "\n", 62 | "The ideas shown here will work in any language or environment. You'll need to choose the right tools for your environment to serve HTTP requests for inference, but good HTTP environments are available in every language these days.\n", 63 | "\n", 64 | "In this example, we use a single image to support training and hosting. This is easy because it means that we only need to manage one image and we can set it up to do everything. Sometimes you'll want separate images for training and hosting because they have different requirements. Just separate the parts discussed below into separate Dockerfiles and build two images. Choosing whether to have a single image or two images is really a matter of which is more convenient for you to develop and manage.\n", 65 | "\n", 66 | "If you're only using Amazon SageMaker for training or hosting, but not both, there is no need to build the unused functionality into your container.\n", 67 | "\n", 68 | "[scikit-learn]: http://scikit-learn.org/stable/\n", 69 | "[decision tree]: http://scikit-learn.org/stable/modules/tree.html\n", 70 | "\n", 71 | "## The presentation\n", 72 | "\n", 73 | "This presentation is divided into two parts: _building_ the container and _using_ the container." 74 | ] 75 | }, 76 | { 77 | "cell_type": "markdown", 78 | "metadata": {}, 79 | "source": [ 80 | "# Part 1: Packaging and Uploading your Algorithm for use with Amazon SageMaker\n", 81 | "\n", 82 | "### An overview of Docker\n", 83 | "\n", 84 | "If you're familiar with Docker already, you can skip ahead to the next section.\n", 85 | "\n", 86 | "For many data scientists, Docker containers are a new concept, but they are not difficult, as you'll see here. \n", 87 | "\n", 88 | "Docker provides a simple way to package arbitrary code into an _image_ that is totally self-contained. Once you have an image, you can use Docker to run a _container_ based on that image. Running a container is just like running a program on the machine except that the container creates a fully self-contained environment for the program to run. Containers are isolated from each other and from the host environment, so the way you set up your program is the way it runs, no matter where you run it.\n", 89 | "\n", 90 | "Docker is more powerful than environment managers like conda or virtualenv because (a) it is completely language independent and (b) it comprises your whole operating environment, including startup commands, environment variable, etc.\n", 91 | "\n", 92 | "In some ways, a Docker container is like a virtual machine, but it is much lighter weight. For example, a program running in a container can start in less than a second and many containers can run on the same physical machine or virtual machine instance.\n", 93 | "\n", 94 | "Docker uses a simple file called a `Dockerfile` to specify how the image is assembled. We'll see an example of that below. 
You can build your Docker images based on Docker images built by yourself or others, which can simplify things quite a bit.\n", 95 | "\n", 96 | "Docker has become very popular in the programming and devops communities for its flexibility and well-defined specification of the code to be run. It is the underpinning of many services built in the past few years, such as [Amazon ECS].\n", 97 | "\n", 98 | "Amazon SageMaker uses Docker to allow users to train and deploy arbitrary algorithms.\n", 99 | "\n", 100 | "In Amazon SageMaker, Docker containers are invoked in a certain way for training and a slightly different way for hosting. The following sections outline how to build containers for the SageMaker environment.\n", 101 | "\n", 102 | "Some helpful links:\n", 103 | "\n", 104 | "* [Docker home page](http://www.docker.com)\n", 105 | "* [Getting started with Docker](https://docs.docker.com/get-started/)\n", 106 | "* [Dockerfile reference](https://docs.docker.com/engine/reference/builder/)\n", 107 | "* [`docker run` reference](https://docs.docker.com/engine/reference/run/)\n", 108 | "\n", 109 | "[Amazon ECS]: https://aws.amazon.com/ecs/\n", 110 | "\n", 111 | "### How Amazon SageMaker runs your Docker container\n", 112 | "\n", 113 | "Because you can run the same image in training or hosting, Amazon SageMaker runs your container with the argument `train` or `serve`. How your container processes this argument depends on the container:\n", 114 | "\n", 115 | "* In the example here, we don't define an `ENTRYPOINT` in the Dockerfile so Docker will run the command `train` at training time and `serve` at serving time. In this example, we define these as executable Python scripts, but they could be any program that we want to start in that environment.\n", 116 | "* If you specify a program as an `ENTRYPOINT` in the Dockerfile, that program will be run at startup and its first argument will be `train` or `serve`. The program can then look at that argument and decide what to do.\n", 117 | "* If you are building separate containers for training and hosting (or building only for one or the other), you can define a program as an `ENTRYPOINT` in the Dockerfile and ignore (or verify) the first argument passed in. \n", 118 | "\n", 119 | "#### Running your container during training\n", 120 | "\n", 121 | "When Amazon SageMaker runs training, your `train` script is run just like a regular Python program. A number of files are laid out for your use, under the `/opt/ml` directory:\n", 122 | "\n", 123 | " /opt/ml\n", 124 | " |-- input\n", 125 | " | |-- config\n", 126 | " | | |-- hyperparameters.json\n", 127 | " | | `-- resourceConfig.json\n", 128 | " | `-- data\n", 129 | " | `-- \n", 130 | " | `-- \n", 131 | " |-- model\n", 132 | " | `-- \n", 133 | " `-- output\n", 134 | " `-- failure\n", 135 | "\n", 136 | "##### The input\n", 137 | "\n", 138 | "* `/opt/ml/input/config` contains information to control how your program runs. `hyperparameters.json` is a JSON-formatted dictionary of hyperparameter names to values. These values will always be strings, so you may need to convert them. `resourceConfig.json` is a JSON-formatted file that describes the network layout used for distributed training. Since scikit-learn doesn't support distributed training, we'll ignore it here.\n", 139 | "* `/opt/ml/input/data//` (for File mode) contains the input data for that channel. The channels are created based on the call to CreateTrainingJob but it's generally important that channels match what the algorithm expects. 
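To make this layout concrete, here is a minimal, hypothetical sketch of how a training script can read the hyperparameters file and a `training` channel laid out as headerless CSV (this boilerplate's own logic lives in `src/paths.py` and `src/datasets/base_dataset.py`; the `max_epochs` key is only an illustration, not a hyperparameter the project defines in `hyperparameters.json`):

```python
# Hypothetical sketch of reading SageMaker's mounted inputs inside the container.
import json
import os

import pandas as pd

prefix = "/opt/ml/"

# Hyperparameter values always arrive as strings, so cast them explicitly.
with open(os.path.join(prefix, "input/config/hyperparameters.json")) as f:
    hyperparameters = json.load(f)
max_epochs = int(hyperparameters.get("max_epochs", "100"))

# Concatenate every CSV file dropped into the "training" channel.
training_path = os.path.join(prefix, "input/data", "training")
frames = [
    pd.read_csv(os.path.join(training_path, name), header=None)
    for name in os.listdir(training_path)
]
train_df = pd.concat(frames)
print("Loaded {} rows, max_epochs={}".format(len(train_df), max_epochs))
```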
The files for each channel will be copied from S3 to this directory, preserving the tree structure indicated by the S3 key structure. \n", 140 | "* `/opt/ml/input/data/_` (for Pipe mode) is the pipe for a given epoch. Epochs start at zero and go up by one each time you read them. There is no limit to the number of epochs that you can run, but you must close each pipe before reading the next epoch.\n", 141 | "\n", 142 | "##### The output\n", 143 | "\n", 144 | "* `/opt/ml/model/` is the directory where you write the model that your algorithm generates. Your model can be in any format that you want. It can be a single file or a whole directory tree. SageMaker will package any files in this directory into a compressed tar archive file. This file will be available at the S3 location returned in the `DescribeTrainingJob` result.\n", 145 | "* `/opt/ml/output` is a directory where the algorithm can write a file `failure` that describes why the job failed. The contents of this file will be returned in the `FailureReason` field of the `DescribeTrainingJob` result. For jobs that succeed, there is no reason to write this file as it will be ignored.\n", 146 | "\n", 147 | "#### Running your container during hosting\n", 148 | "\n", 149 | "Hosting has a very different model than training because hosting is reponding to inference requests that come in via HTTP. In this example, we use our recommended Python serving stack to provide robust and scalable serving of inference requests:\n", 150 | "\n", 151 | "![Request serving stack](images/stack.png)\n", 152 | "\n", 153 | "This stack is implemented in the sample code here and you can mostly just leave it alone. \n", 154 | "\n", 155 | "Amazon SageMaker uses two URLs in the container:\n", 156 | "\n", 157 | "* `/ping` will receive `GET` requests from the infrastructure. Your program returns 200 if the container is up and accepting requests.\n", 158 | "* `/invocations` is the endpoint that receives client inference `POST` requests. The format of the request and the response is up to the algorithm. If the client supplied `ContentType` and `Accept` headers, these will be passed in as well. \n", 159 | "\n", 160 | "The container will have the model files in the same place they were written during training:\n", 161 | "\n", 162 | " /opt/ml\n", 163 | " `-- model\n", 164 | " `-- \n", 165 | "\n" 166 | ] 167 | }, 168 | { 169 | "cell_type": "markdown", 170 | "metadata": {}, 171 | "source": [ 172 | "### The parts of the sample container\n", 173 | "\n", 174 | "In the `container` directory are all the components you need to package the sample algorithm for Amazon SageMager:\n", 175 | "\n", 176 | " .\n", 177 | " |-- Dockerfile\n", 178 | " |-- build_and_push.sh\n", 179 | " `-- decision_trees\n", 180 | " |-- nginx.conf\n", 181 | " |-- predictor.py\n", 182 | " |-- serve\n", 183 | " |-- train\n", 184 | " `-- wsgi.py\n", 185 | "\n", 186 | "Let's discuss each of these in turn:\n", 187 | "\n", 188 | "* __`Dockerfile`__ describes how to build your Docker container image. More details below.\n", 189 | "* __`build_and_push.sh`__ is a script that uses the Dockerfile to build your container images and then pushes it to ECR. 
We'll invoke the commands directly later in this notebook, but you can just copy and run the script for your own algorithms.\n", 190 | "* __`decision_trees`__ is the directory which contains the files that will be installed in the container.\n", 191 | "* __`local_test`__ is a directory that shows how to test your new container on any computer that can run Docker, including an Amazon SageMaker notebook instance. Using this method, you can quickly iterate using small datasets to eliminate any structural bugs before you use the container with Amazon SageMaker. We'll walk through local testing later in this notebook.\n", 192 | "\n", 193 | "In this simple application, we only install five files in the container. You may only need that many or, if you have many supporting routines, you may wish to install more. These five show the standard structure of our Python containers, although you are free to choose a different toolset and therefore could have a different layout. If you're writing in a different programming language, you'll certainly have a different layout depending on the frameworks and tools you choose.\n", 194 | "\n", 195 | "The files that we'll put in the container are:\n", 196 | "\n", 197 | "* __`nginx.conf`__ is the configuration file for the nginx front-end. Generally, you should be able to take this file as-is.\n", 198 | "* __`predictor.py`__ is the program that actually implements the Flask web server and the decision tree predictions for this app. You'll want to customize the actual prediction parts to your application. Since this algorithm is simple, we do all the processing here in this file, but you may choose to have separate files for implementing your custom logic.\n", 199 | "* __`serve`__ is the program started when the container is started for hosting. It simply launches the gunicorn server which runs multiple instances of the Flask app defined in `predictor.py`. You should be able to take this file as-is.\n", 200 | "* __`train`__ is the program that is invoked when the container is run for training. You will modify this program to implement your training algorithm.\n", 201 | "* __`wsgi.py`__ is a small wrapper used to invoke the Flask app. You should be able to take this file as-is.\n", 202 | "\n", 203 | "In summary, the two files you will probably want to change for your application are `train` and `predictor.py`." 204 | ] 205 | }, 206 | { 207 | "cell_type": "markdown", 208 | "metadata": {}, 209 | "source": [ 210 | "### The Dockerfile\n", 211 | "\n", 212 | "The Dockerfile describes the image that we want to build. You can think of it as describing the complete operating system installation of the system that you want to run. A Docker container running is quite a bit lighter than a full operating system, however, because it takes advantage of Linux on the host machine for the basic operations. \n", 213 | "\n", 214 | "For the Python science stack, we will start from a standard Ubuntu installation and run the normal tools to install the things needed by scikit-learn. Finally, we add the code that implements our specific algorithm to the container and set up the right environment to run under.\n", 215 | "\n", 216 | "Along the way, we clean up extra space. 
This makes the container smaller and faster to start.\n", 217 | "\n", 218 | "Let's look at the Dockerfile for the example:" 219 | ] 220 | }, 221 | { 222 | "cell_type": "code", 223 | "execution_count": 1, 224 | "metadata": {}, 225 | "outputs": [ 226 | { 227 | "name": "stdout", 228 | "output_type": "stream", 229 | "text": [ 230 | "# Build an image that can do training and inference in SageMaker\n", 231 | "# This is a Python 2 image that uses the nginx, gunicorn, flask stack\n", 232 | "# for serving inferences in a stable way.\n", 233 | "\n", 234 | "FROM ubuntu:18.04\n", 235 | "\n", 236 | "MAINTAINER Amazon AI \n", 237 | "\n", 238 | "\n", 239 | "RUN apt-get -y update && apt-get install -y --no-install-recommends \\\n", 240 | " wget \\\n", 241 | " python \\\n", 242 | " nginx \\\n", 243 | " ca-certificates \\\n", 244 | " && rm -rf /var/lib/apt/lists/*\n", 245 | "\n", 246 | "# Here we get all python packages.\n", 247 | "# There's substantial overlap between scipy and numpy that we eliminate by\n", 248 | "# linking them together. Likewise, pip leaves the install caches populated which uses\n", 249 | "# a significant amount of space. These optimizations save a fair amount of space in the\n", 250 | "# image, which reduces start up time.\n", 251 | "RUN wget https://bootstrap.pypa.io/get-pip.py && python get-pip.py && \\\n", 252 | " pip install numpy scipy scikit-learn pandas flask gevent gunicorn && \\\n", 253 | " rm -rf /root/.cache\n", 254 | "\n", 255 | "# Set some environment variables. PYTHONUNBUFFERED keeps Python from buffering our standard\n", 256 | "# output stream, which means that logs can be delivered to the user quickly. PYTHONDONTWRITEBYTECODE\n", 257 | "# keeps Python from writing the .pyc files which are unnecessary in this case. We also update\n", 258 | "# PATH so that the train and serve programs are found when the container is invoked.\n", 259 | "\n", 260 | "ENV PYTHONUNBUFFERED=TRUE\n", 261 | "ENV PYTHONDONTWRITEBYTECODE=TRUE\n", 262 | "ENV PATH=\"/opt/program:${PATH}\"\n", 263 | "\n", 264 | "# Set up the program in the image\n", 265 | "COPY decision_trees /opt/program\n", 266 | "WORKDIR /opt/program\n", 267 | "\n" 268 | ] 269 | } 270 | ], 271 | "source": [ 272 | "!cat container/Dockerfile" 273 | ] 274 | }, 275 | { 276 | "cell_type": "markdown", 277 | "metadata": {}, 278 | "source": [ 279 | "### Building and registering the container\n", 280 | "\n", 281 | "The following shell code shows how to build the container image using `docker build` and push the container image to ECR using `docker push`. This code is also available as the shell script `container/build-and-push.sh`, which you can run as `build-and-push.sh decision_trees_sample` to build the image `decision_trees_sample`. \n", 282 | "\n", 283 | "This code looks for an ECR repository in the account you're using and the current default region (if you're using an Amazon SageMaker notebook instance, this will be the region where the notebook instance was created). If the repository doesn't exist, the script will create it." 
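If you would rather perform that repository check from Python, a hedged boto3 equivalent is sketched below; the shell cell that follows is what the notebook actually runs.

```python
# Hypothetical boto3 version of the "create the ECR repository if it is missing" step.
import boto3

algorithm_name = "decisiontrees"  # same name used in the shell cell below
ecr = boto3.client("ecr")
try:
    ecr.describe_repositories(repositoryNames=[algorithm_name])
except ecr.exceptions.RepositoryNotFoundException:
    ecr.create_repository(repositoryName=algorithm_name)
```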
284 | ] 285 | }, 286 | { 287 | "cell_type": "code", 288 | "execution_count": 1, 289 | "metadata": {}, 290 | "outputs": [ 291 | { 292 | "name": "stdout", 293 | "output_type": "stream", 294 | "text": [ 295 | "Sending build context to Docker daemon 51.71kB\n", 296 | "Step 1/9 : FROM ubuntu:18.04\n", 297 | " ---> 72300a873c2c\n", 298 | "Step 2/9 : MAINTAINER Amazon AI \n", 299 | " ---> Using cache\n", 300 | " ---> 964992c9d672\n", 301 | "Step 3/9 : RUN apt-get -y update && apt-get install -y --no-install-recommends wget python nginx ca-certificates && rm -rf /var/lib/apt/lists/*\n", 302 | " ---> Using cache\n", 303 | " ---> c7d874ac8fd5\n", 304 | "Step 4/9 : RUN wget https://bootstrap.pypa.io/get-pip.py && python get-pip.py && pip install numpy scipy scikit-learn pandas flask gevent gunicorn && rm -rf /root/.cache\n", 305 | " ---> Using cache\n", 306 | " ---> 7ffad4e512e7\n", 307 | "Step 5/9 : ENV PYTHONUNBUFFERED=TRUE\n", 308 | " ---> Using cache\n", 309 | " ---> b57c951f0cd9\n", 310 | "Step 6/9 : ENV PYTHONDONTWRITEBYTECODE=TRUE\n", 311 | " ---> Using cache\n", 312 | " ---> f356f77cfaa6\n", 313 | "Step 7/9 : ENV PATH=\"/opt/program:${PATH}\"\n", 314 | " ---> Using cache\n", 315 | " ---> b85c935db183\n", 316 | "Step 8/9 : COPY decision_trees /opt/program\n", 317 | " ---> Using cache\n", 318 | " ---> 7f8724b1dcfc\n", 319 | "Step 9/9 : WORKDIR /opt/program\n", 320 | " ---> Using cache\n", 321 | " ---> 1d319e477f05\n", 322 | "Successfully built 1d319e477f05\n", 323 | "Successfully tagged decisiontrees:latest\n", 324 | "Login Succeeded\n" 325 | ] 326 | } 327 | ], 328 | "source": [ 329 | "%%sh\n", 330 | "\n", 331 | "# The name of our algorithm\n", 332 | "algorithm_name=\"decisiontrees\"\n", 333 | "\n", 334 | "cd container\n", 335 | "\n", 336 | "chmod +x decision_trees/train\n", 337 | "chmod +x decision_trees/serve\n", 338 | "\n", 339 | "account=$(aws sts get-caller-identity --query Account --output text)\n", 340 | "\n", 341 | "# Get the region defined in the current configuration (default to us-west-2 if none defined)\n", 342 | "region=$(aws configure get region)\n", 343 | "# specifically setting to us-east-2 since during the pre-release period, we support only that region.\n", 344 | "region=${region:-eu-west-1}\n", 345 | "\n", 346 | "fullname=\"${account}.dkr.ecr.${region}.amazonaws.com/${algorithm_name}:latest\"\n", 347 | "\n", 348 | "# If the repository doesn't exist in ECR, create it.\n", 349 | "\n", 350 | "aws ecr describe-repositories --repository-names \"${algorithm_name}\" > /dev/null 2>&1\n", 351 | "\n", 352 | "if [ $? -ne 0 ]\n", 353 | "then\n", 354 | " aws ecr create-repository --repository-name \"${algorithm_name}\" > /dev/null\n", 355 | "fi\n", 356 | "\n", 357 | "# Build the docker image locally with the image name and then push it to ECR\n", 358 | "# with the full name.\n", 359 | "\n", 360 | "docker build -t ${algorithm_name} .\n", 361 | "docker tag ${algorithm_name} ${fullname}\n", 362 | "\n", 363 | "aws ecr get-login-password \\\n", 364 | " --region ${region} \\\n", 365 | "| docker login \\\n", 366 | " --username AWS \\\n", 367 | " --password-stdin ${account}.dkr.ecr.${region}.amazonaws.com" 368 | ] 369 | }, 370 | { 371 | "cell_type": "markdown", 372 | "metadata": {}, 373 | "source": [ 374 | "## Testing your algorithm on your local machine or on an Amazon SageMaker notebook instance\n", 375 | "\n", 376 | "While you're first packaging an algorithm use with Amazon SageMaker, you probably want to test it yourself to make sure it's working right. 
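One quick sanity check, once the serving container is running locally (the `serve_local.sh` script described next starts it), is to hit the two HTTP endpoints from Python. This is only a hypothetical smoke test: it assumes the server is listening on `localhost:8080` (the port SageMaker expects containers to use) and that `payload.csv` is a small, headerless CSV of feature rows you supply yourself.

```python
# Hypothetical local smoke test against a container started with serve_local.sh.
import requests

assert requests.get("http://localhost:8080/ping").status_code == 200

with open("payload.csv") as f:  # payload.csv is a placeholder file you provide
    response = requests.post(
        "http://localhost:8080/invocations",
        data=f.read(),
        headers={"Content-Type": "text/csv"},
    )
print(response.text)  # one prediction per line, returned as CSV
```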
In the directory `container/local_test`, there is a framework for doing this. It includes three shell scripts for running and using the container and a directory structure that mimics the one outlined above.\n", 377 | "\n", 378 | "The scripts are:\n", 379 | "\n", 380 | "* `train_local.sh`: Run this with the name of the image and it will run training on the local tree. You'll want to modify the directory `test_dir/input/data/...` to be set up with the correct channels and data for your algorithm. Also, you'll want to modify the file `input/config/hyperparameters.json` to have the hyperparameter settings that you want to test (as strings).\n", 381 | "* `serve_local.sh`: Run this with the name of the image once you've trained the model and it should serve the model. It will run and wait for requests. Simply use the keyboard interrupt to stop it.\n", 382 | "* `predict.sh`: Run this with the name of a payload file and (optionally) the HTTP content type you want. The content type will default to `text/csv`. For example, you can run `$ ./predict.sh payload.csv text/csv`.\n", 383 | "\n", 384 | "The directories as shipped are set up to test the decision trees sample algorithm presented here." 385 | ] 386 | }, 387 | { 388 | "cell_type": "markdown", 389 | "metadata": {}, 390 | "source": [ 391 | "# Part 2: Training, Batch Inference and Hosting your Algorithm in Amazon SageMaker\n", 392 | "\n", 393 | "Once you have your container packaged, you can use it to train and serve models. Let's do that with the algorithm we made above.\n", 394 | "\n", 395 | "## Set up the environment\n", 396 | "\n", 397 | "Here we specify a bucket to use and the role that will be used for working with Amazon SageMaker." 398 | ] 399 | }, 400 | { 401 | "cell_type": "code", 402 | "execution_count": 3, 403 | "metadata": {}, 404 | "outputs": [], 405 | "source": [ 406 | "# S3 prefix\n", 407 | "common_prefix = \"DEMO-scikit-byo-iris\"\n", 408 | "training_input_prefix = common_prefix + \"/training-input-data\"\n", 409 | "batch_inference_input_prefix = common_prefix + \"/batch-inference-input-data\"\n", 410 | "\n", 411 | "import os\n", 412 | "import sagemaker" 413 | ] 414 | }, 415 | { 416 | "cell_type": "markdown", 417 | "metadata": {}, 418 | "source": [ 419 | "## Create the session\n", 420 | "\n", 421 | "The session remembers our connection parameters to Amazon SageMaker. We'll use it to perform all of our SageMaker operations." 422 | ] 423 | }, 424 | { 425 | "cell_type": "code", 426 | "execution_count": 4, 427 | "metadata": {}, 428 | "outputs": [], 429 | "source": [ 430 | "import sagemaker as sage\n", 431 | "\n", 432 | "sess = sage.Session()" 433 | ] 434 | }, 435 | { 436 | "cell_type": "markdown", 437 | "metadata": {}, 438 | "source": [ 439 | "## Upload the data for training\n", 440 | "\n", 441 | "When training large models with huge amounts of data, you'll typically use big data tools, like Amazon Athena, AWS Glue, or Amazon EMR, to create your data in S3. For the purposes of this example, we're using the classic [Iris dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set), which we have included. \n", 442 | "\n", 443 | "We can use the tools provided by the Amazon SageMaker Python SDK to upload the data to a default bucket. 
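As a quick sanity check once the upload cell below has run, you could list the keys under the prefix. This sketch is not part of the original notebook; it assumes the `sess` and `training_input_prefix` variables defined above.

```python
# Minimal sketch (not in the original notebook): list what `sess.upload_data(...)` put in S3.
# Assumes the SageMaker session `sess` and the prefix variables from the cells above.
import boto3


def list_uploaded_objects(sess, prefix):
    s3 = sess.boto_session.client("s3")
    bucket = sess.default_bucket()
    response = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    return [obj["Key"] for obj in response.get("Contents", [])]


# Example: list_uploaded_objects(sess, training_input_prefix) might return
# ["DEMO-scikit-byo-iris/training-input-data/iris.csv"]
```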
" 444 | ] 445 | }, 446 | { 447 | "cell_type": "code", 448 | "execution_count": 5, 449 | "metadata": {}, 450 | "outputs": [ 451 | { 452 | "name": "stdout", 453 | "output_type": "stream", 454 | "text": [ 455 | "Training Data Location s3://sagemaker-eu-west-1-252328296877/DEMO-scikit-byo-iris/training-input-data\n" 456 | ] 457 | } 458 | ], 459 | "source": [ 460 | "TRAINING_WORKDIR = \"data/training\"\n", 461 | "\n", 462 | "training_input = sess.upload_data(TRAINING_WORKDIR, key_prefix=training_input_prefix)\n", 463 | "print (\"Training Data Location \" + training_input)" 464 | ] 465 | }, 466 | { 467 | "cell_type": "markdown", 468 | "metadata": {}, 469 | "source": [ 470 | "## Create an estimator and fit the model\n", 471 | "\n", 472 | "In order to use Amazon SageMaker to fit our algorithm, we'll create an `Estimator` that defines how to use the container to train. This includes the configuration we need to invoke SageMaker training:\n", 473 | "\n", 474 | "* The __container name__. This is constructed as in the shell commands above.\n", 475 | "* The __role__. As defined above.\n", 476 | "* The __instance count__ which is the number of machines to use for training.\n", 477 | "* The __instance type__ which is the type of machine to use for training.\n", 478 | "* The __output path__ determines where the model artifact will be written.\n", 479 | "* The __session__ is the SageMaker session object that we defined above.\n", 480 | "\n", 481 | "Then we use fit() on the estimator to train against the data that we uploaded above." 482 | ] 483 | }, 484 | { 485 | "cell_type": "code", 486 | "execution_count": 36, 487 | "metadata": {}, 488 | "outputs": [ 489 | { 490 | "data": { 491 | "text/plain": [ 492 | "('252328296877',\n", 493 | " 'eu-west-1',\n", 494 | " '252328296877.dkr.ecr.eu-west-1.amazonaws.com/decision-trees:latest')" 495 | ] 496 | }, 497 | "execution_count": 36, 498 | "metadata": {}, 499 | "output_type": "execute_result" 500 | } 501 | ], 502 | "source": [ 503 | "account = sess.boto_session.client('sts').get_caller_identity()['Account']\n", 504 | "region = sess.boto_session.region_name\n", 505 | "image = '{}.dkr.ecr.{}.amazonaws.com/decision-trees:latest'.format(account, region)\n", 506 | "role = \"arn:aws:iam::252328296877:role/Sagemaker-notebook\"\n", 507 | "account, region, image" 508 | ] 509 | }, 510 | { 511 | "cell_type": "code", 512 | "execution_count": 19, 513 | "metadata": {}, 514 | "outputs": [ 515 | { 516 | "name": "stdout", 517 | "output_type": "stream", 518 | "text": [ 519 | "2020-08-15 17:30:16 Starting - Starting the training job...\n", 520 | "2020-08-15 17:30:18 Starting - Launching requested ML instances......\n", 521 | "2020-08-15 17:31:21 Starting - Preparing the instances for training...\n", 522 | "2020-08-15 17:32:11 Downloading - Downloading input data...\n", 523 | "2020-08-15 17:32:46 Training - Training image download completed. 
Training in progress..\u001b[34mStarting the training.\u001b[0m\n", 524 | "\u001b[34mvalidation-accuracy: 0.96\u001b[0m\n", 525 | "\u001b[34mTraining complete.\u001b[0m\n", 526 | "\n", 527 | "2020-08-15 17:32:57 Uploading - Uploading generated training model\n", 528 | "2020-08-15 17:32:57 Completed - Training job completed\n", 529 | "Training seconds: 46\n", 530 | "Billable seconds: 46\n" 531 | ] 532 | } 533 | ], 534 | "source": [ 535 | "tree = sage.estimator.Estimator(image,\n", 536 | " role, \n", 537 | " 1,\n", 538 | " 'ml.c4.2xlarge',\n", 539 | " output_path=\"s3://{}/output\".format(sess.default_bucket()),\n", 540 | " sagemaker_session=sess)\n", 541 | "tree.fit(training_input)" 542 | ] 543 | }, 544 | { 545 | "cell_type": "markdown", 546 | "metadata": {}, 547 | "source": [ 548 | "## Batch Transform Job\n", 549 | "\n", 550 | "Now let's use the model built to run a batch inference job and verify it works.\n" 551 | ] 552 | }, 553 | { 554 | "cell_type": "markdown", 555 | "metadata": {}, 556 | "source": [ 557 | "### Batch Transform Input Preparation\n", 558 | "\n", 559 | "The snippet below is removing the \"label\" column (column indexed at 0) and retaining the rest to be batch transform's input. \n", 560 | "\n", 561 | "NOTE: This is the same training data, which is a no-no from a statistical/ML science perspective. But the aim of this notebook is to demonstrate how things work end-to-end." 562 | ] 563 | }, 564 | { 565 | "cell_type": "code", 566 | "execution_count": 20, 567 | "metadata": {}, 568 | "outputs": [ 569 | { 570 | "name": "stdout", 571 | "output_type": "stream", 572 | "text": [ 573 | "Transform input uploaded to s3://sagemaker-eu-west-1-252328296877/DEMO-scikit-byo-iris/batch-inference-input-data/batchtransform_test.csv\n" 574 | ] 575 | } 576 | ], 577 | "source": [ 578 | "import pandas as pd\n", 579 | "\n", 580 | "## Remove first column that contains the label\n", 581 | "shape=pd.read_csv(TRAINING_WORKDIR + \"/iris.csv\", header=None).drop([0], axis=1)\n", 582 | "\n", 583 | "TRANSFORM_WORKDIR = \"data/transform\"\n", 584 | "shape.to_csv(TRANSFORM_WORKDIR + \"/batchtransform_test.csv\", index=False, header=False)\n", 585 | "\n", 586 | "transform_input = sess.upload_data(TRANSFORM_WORKDIR, key_prefix=batch_inference_input_prefix) + \"/batchtransform_test.csv\"\n", 587 | "print(\"Transform input uploaded to \" + transform_input)" 588 | ] 589 | }, 590 | { 591 | "cell_type": "markdown", 592 | "metadata": {}, 593 | "source": [ 594 | "### Run Batch Transform\n", 595 | "\n", 596 | "Now that our batch transform input is setup, we run the transformation job next" 597 | ] 598 | }, 599 | { 600 | "cell_type": "code", 601 | "execution_count": 21, 602 | "metadata": {}, 603 | "outputs": [ 604 | { 605 | "name": "stdout", 606 | "output_type": "stream", 607 | "text": [ 608 | ".....................\u001b[32m2020-08-15T17:39:38.338:[sagemaker logs]: MaxConcurrentTransforms=1, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD\u001b[0m\n", 609 | "\u001b[34mStarting the inference server with 4 workers.\u001b[0m\n", 610 | "\u001b[34m2020/08/15 17:39:37 [crit] 10#10: *1 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: \"GET /ping HTTP/1.1\", upstream: \"http://unix:/tmp/gunicorn.sock:/ping\", host: \"169.254.255.131:8080\"\u001b[0m\n", 611 | "\u001b[35mStarting the inference server with 4 workers.\u001b[0m\n", 612 | "\u001b[35m2020/08/15 17:39:37 [crit] 10#10: *1 connect() to unix:/tmp/gunicorn.sock failed (2: No 
such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: \"GET /ping HTTP/1.1\", upstream: \"http://unix:/tmp/gunicorn.sock:/ping\", host: \"169.254.255.131:8080\"\u001b[0m\n", 613 | "\u001b[34m169.254.255.130 - - [15/Aug/2020:17:39:37 +0000] \"GET /ping HTTP/1.1\" 502 182 \"-\" \"Go-http-client/1.1\"\u001b[0m\n", 614 | "\u001b[34m2020/08/15 17:39:37 [crit] 10#10: *3 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: \"GET /ping HTTP/1.1\", upstream: \"http://unix:/tmp/gunicorn.sock:/ping\", host: \"169.254.255.131:8080\"\u001b[0m\n", 615 | "\u001b[34m169.254.255.130 - - [15/Aug/2020:17:39:37 +0000] \"GET /ping HTTP/1.1\" 502 182 \"-\" \"Go-http-client/1.1\"\u001b[0m\n", 616 | "\u001b[34m[2020-08-15 17:39:37 +0000] [9] [INFO] Starting gunicorn 19.10.0\u001b[0m\n", 617 | "\u001b[34m[2020-08-15 17:39:37 +0000] [9] [INFO] Listening at: unix:/tmp/gunicorn.sock (9)\u001b[0m\n", 618 | "\u001b[34m[2020-08-15 17:39:37 +0000] [9] [INFO] Using worker: gevent\u001b[0m\n", 619 | "\u001b[34m[2020-08-15 17:39:37 +0000] [14] [INFO] Booting worker with pid: 14\u001b[0m\n", 620 | "\u001b[34m[2020-08-15 17:39:37 +0000] [15] [INFO] Booting worker with pid: 15\u001b[0m\n", 621 | "\u001b[34m[2020-08-15 17:39:37 +0000] [17] [INFO] Booting worker with pid: 17\u001b[0m\n", 622 | "\u001b[34m[2020-08-15 17:39:37 +0000] [18] [INFO] Booting worker with pid: 18\u001b[0m\n", 623 | "\u001b[34m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] \"GET /ping HTTP/1.1\" 200 1 \"-\" \"Go-http-client/1.1\"\u001b[0m\n", 624 | "\u001b[34m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] \"GET /execution-parameters HTTP/1.1\" 404 2 \"-\" \"Go-http-client/1.1\"\u001b[0m\n", 625 | "\u001b[34mInvoked with 150 records\u001b[0m\n", 626 | "\u001b[34m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] \"POST /invocations HTTP/1.1\" 200 1400 \"-\" \"Go-http-client/1.1\"\u001b[0m\n", 627 | "\u001b[35m169.254.255.130 - - [15/Aug/2020:17:39:37 +0000] \"GET /ping HTTP/1.1\" 502 182 \"-\" \"Go-http-client/1.1\"\u001b[0m\n", 628 | "\u001b[35m2020/08/15 17:39:37 [crit] 10#10: *3 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: \"GET /ping HTTP/1.1\", upstream: \"http://unix:/tmp/gunicorn.sock:/ping\", host: \"169.254.255.131:8080\"\u001b[0m\n", 629 | "\u001b[35m169.254.255.130 - - [15/Aug/2020:17:39:37 +0000] \"GET /ping HTTP/1.1\" 502 182 \"-\" \"Go-http-client/1.1\"\u001b[0m\n", 630 | "\u001b[35m[2020-08-15 17:39:37 +0000] [9] [INFO] Starting gunicorn 19.10.0\u001b[0m\n", 631 | "\u001b[35m[2020-08-15 17:39:37 +0000] [9] [INFO] Listening at: unix:/tmp/gunicorn.sock (9)\u001b[0m\n", 632 | "\u001b[35m[2020-08-15 17:39:37 +0000] [9] [INFO] Using worker: gevent\u001b[0m\n", 633 | "\u001b[35m[2020-08-15 17:39:37 +0000] [14] [INFO] Booting worker with pid: 14\u001b[0m\n", 634 | "\u001b[35m[2020-08-15 17:39:37 +0000] [15] [INFO] Booting worker with pid: 15\u001b[0m\n", 635 | "\u001b[35m[2020-08-15 17:39:37 +0000] [17] [INFO] Booting worker with pid: 17\u001b[0m\n", 636 | "\u001b[35m[2020-08-15 17:39:37 +0000] [18] [INFO] Booting worker with pid: 18\u001b[0m\n", 637 | "\u001b[35m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] \"GET /ping HTTP/1.1\" 200 1 \"-\" \"Go-http-client/1.1\"\u001b[0m\n", 638 | "\u001b[35m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] \"GET /execution-parameters HTTP/1.1\" 404 2 \"-\" 
\"Go-http-client/1.1\"\u001b[0m\n", 639 | "\u001b[35mInvoked with 150 records\u001b[0m\n", 640 | "\u001b[35m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] \"POST /invocations HTTP/1.1\" 200 1400 \"-\" \"Go-http-client/1.1\"\u001b[0m\n", 641 | "\n", 642 | "\u001b[32m2020-08-15T17:39:38.338:[sagemaker logs]: MaxConcurrentTransforms=1, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD\u001b[0m\n", 643 | "\u001b[34mStarting the inference server with 4 workers.\u001b[0m\n", 644 | "\u001b[34m2020/08/15 17:39:37 [crit] 10#10: *1 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: \"GET /ping HTTP/1.1\", upstream: \"http://unix:/tmp/gunicorn.sock:/ping\", host: \"169.254.255.131:8080\"\u001b[0m\n", 645 | "\u001b[35mStarting the inference server with 4 workers.\u001b[0m\n", 646 | "\u001b[35m2020/08/15 17:39:37 [crit] 10#10: *1 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: \"GET /ping HTTP/1.1\", upstream: \"http://unix:/tmp/gunicorn.sock:/ping\", host: \"169.254.255.131:8080\"\u001b[0m\n", 647 | "\u001b[34m169.254.255.130 - - [15/Aug/2020:17:39:37 +0000] \"GET /ping HTTP/1.1\" 502 182 \"-\" \"Go-http-client/1.1\"\u001b[0m\n", 648 | "\u001b[34m2020/08/15 17:39:37 [crit] 10#10: *3 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: \"GET /ping HTTP/1.1\", upstream: \"http://unix:/tmp/gunicorn.sock:/ping\", host: \"169.254.255.131:8080\"\u001b[0m\n", 649 | "\u001b[34m169.254.255.130 - - [15/Aug/2020:17:39:37 +0000] \"GET /ping HTTP/1.1\" 502 182 \"-\" \"Go-http-client/1.1\"\u001b[0m\n", 650 | "\u001b[34m[2020-08-15 17:39:37 +0000] [9] [INFO] Starting gunicorn 19.10.0\u001b[0m\n", 651 | "\u001b[34m[2020-08-15 17:39:37 +0000] [9] [INFO] Listening at: unix:/tmp/gunicorn.sock (9)\u001b[0m\n", 652 | "\u001b[34m[2020-08-15 17:39:37 +0000] [9] [INFO] Using worker: gevent\u001b[0m\n", 653 | "\u001b[34m[2020-08-15 17:39:37 +0000] [14] [INFO] Booting worker with pid: 14\u001b[0m\n", 654 | "\u001b[34m[2020-08-15 17:39:37 +0000] [15] [INFO] Booting worker with pid: 15\u001b[0m\n", 655 | "\u001b[34m[2020-08-15 17:39:37 +0000] [17] [INFO] Booting worker with pid: 17\u001b[0m\n", 656 | "\u001b[34m[2020-08-15 17:39:37 +0000] [18] [INFO] Booting worker with pid: 18\u001b[0m\n", 657 | "\u001b[34m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] \"GET /ping HTTP/1.1\" 200 1 \"-\" \"Go-http-client/1.1\"\u001b[0m\n", 658 | "\u001b[34m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] \"GET /execution-parameters HTTP/1.1\" 404 2 \"-\" \"Go-http-client/1.1\"\u001b[0m\n", 659 | "\u001b[34mInvoked with 150 records\u001b[0m\n", 660 | "\u001b[34m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] \"POST /invocations HTTP/1.1\" 200 1400 \"-\" \"Go-http-client/1.1\"\u001b[0m\n", 661 | "\u001b[35m169.254.255.130 - - [15/Aug/2020:17:39:37 +0000] \"GET /ping HTTP/1.1\" 502 182 \"-\" \"Go-http-client/1.1\"\u001b[0m\n", 662 | "\u001b[35m2020/08/15 17:39:37 [crit] 10#10: *3 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: \"GET /ping HTTP/1.1\", upstream: \"http://unix:/tmp/gunicorn.sock:/ping\", host: \"169.254.255.131:8080\"\u001b[0m\n", 663 | "\u001b[35m169.254.255.130 - - [15/Aug/2020:17:39:37 +0000] \"GET /ping HTTP/1.1\" 502 182 \"-\" 
\"Go-http-client/1.1\"\u001b[0m\n", 664 | "\u001b[35m[2020-08-15 17:39:37 +0000] [9] [INFO] Starting gunicorn 19.10.0\u001b[0m\n", 665 | "\u001b[35m[2020-08-15 17:39:37 +0000] [9] [INFO] Listening at: unix:/tmp/gunicorn.sock (9)\u001b[0m\n", 666 | "\u001b[35m[2020-08-15 17:39:37 +0000] [9] [INFO] Using worker: gevent\u001b[0m\n", 667 | "\u001b[35m[2020-08-15 17:39:37 +0000] [14] [INFO] Booting worker with pid: 14\u001b[0m\n", 668 | "\u001b[35m[2020-08-15 17:39:37 +0000] [15] [INFO] Booting worker with pid: 15\u001b[0m\n", 669 | "\u001b[35m[2020-08-15 17:39:37 +0000] [17] [INFO] Booting worker with pid: 17\u001b[0m\n", 670 | "\u001b[35m[2020-08-15 17:39:37 +0000] [18] [INFO] Booting worker with pid: 18\u001b[0m\n", 671 | "\u001b[35m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] \"GET /ping HTTP/1.1\" 200 1 \"-\" \"Go-http-client/1.1\"\u001b[0m\n", 672 | "\u001b[35m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] \"GET /execution-parameters HTTP/1.1\" 404 2 \"-\" \"Go-http-client/1.1\"\u001b[0m\n", 673 | "\u001b[35mInvoked with 150 records\u001b[0m\n", 674 | "\u001b[35m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] \"POST /invocations HTTP/1.1\" 200 1400 \"-\" \"Go-http-client/1.1\"\u001b[0m\n", 675 | "Batch Transform output saved to s3://sagemaker-eu-west-1-252328296877/decision-trees-2020-08-15-17-36-12-879\n" 676 | ] 677 | } 678 | ], 679 | "source": [ 680 | "transformer = tree.transformer(instance_count=1, instance_type='ml.m4.xlarge')\n", 681 | "transformer.transform(transform_input, content_type='text/csv')\n", 682 | "transformer.wait()\n", 683 | "\n", 684 | "print(\"Batch Transform output saved to \" + transformer.output_path)" 685 | ] 686 | }, 687 | { 688 | "cell_type": "markdown", 689 | "metadata": {}, 690 | "source": [ 691 | "#### Inspect the Batch Transform Output in S3" 692 | ] 693 | }, 694 | { 695 | "cell_type": "code", 696 | "execution_count": 22, 697 | "metadata": {}, 698 | "outputs": [ 699 | { 700 | "name": "stdout", 701 | "output_type": "stream", 702 | "text": [ 703 | "setosa\n", 704 | "setosa\n", 705 | "setosa\n", 706 | "setosa\n", 707 | "setosa\n", 708 | "setosa\n", 709 | "setosa\n", 710 | "setosa\n", 711 | "setosa\n", 712 | "setosa\n", 713 | "setosa\n", 714 | "setosa\n", 715 | "setosa\n", 716 | "setosa\n", 717 | "setosa\n", 718 | "setosa\n", 719 | "setosa\n", 720 | "setosa\n", 721 | "setosa\n", 722 | "setosa\n", 723 | "setosa\n", 724 | "setosa\n", 725 | "setosa\n", 726 | "setosa\n", 727 | "setosa\n", 728 | "setosa\n", 729 | "setosa\n", 730 | "setosa\n", 731 | "setosa\n", 732 | "setosa\n", 733 | "setosa\n", 734 | "setosa\n", 735 | "setosa\n", 736 | "setosa\n", 737 | "setosa\n", 738 | "setosa\n", 739 | "setosa\n", 740 | "setosa\n", 741 | "setosa\n", 742 | "setosa\n", 743 | "setosa\n", 744 | "setosa\n", 745 | "setosa\n", 746 | "setosa\n", 747 | "setosa\n", 748 | "setosa\n", 749 | "setosa\n", 750 | "setosa\n", 751 | "setosa\n", 752 | "setosa\n", 753 | "versicolor\n", 754 | "versicolor\n", 755 | "versicolor\n", 756 | "versicolor\n", 757 | "versicolor\n", 758 | "versicolor\n", 759 | "versicolor\n", 760 | "versicolor\n", 761 | "versicolor\n", 762 | "versicolor\n", 763 | "versicolor\n", 764 | "versicolor\n", 765 | "versicolor\n", 766 | "versicolor\n", 767 | "versicolor\n", 768 | "versicolor\n", 769 | "versicolor\n", 770 | "versicolor\n", 771 | "versicolor\n", 772 | "versicolor\n", 773 | "versicolor\n", 774 | "versicolor\n", 775 | "versicolor\n", 776 | "versicolor\n", 777 | "versicolor\n", 778 | "versicolor\n", 779 | "versicolor\n", 780 | "versicolor\n", 781 | "versicolor\n", 
782 | "versicolor\n", 783 | "versicolor\n", 784 | "versicolor\n", 785 | "versicolor\n", 786 | "versicolor\n", 787 | "versicolor\n", 788 | "versicolor\n", 789 | "versicolor\n", 790 | "versicolor\n", 791 | "versicolor\n", 792 | "versicolor\n", 793 | "versicolor\n", 794 | "versicolor\n", 795 | "versicolor\n", 796 | "versicolor\n", 797 | "versicolor\n", 798 | "versicolor\n", 799 | "versicolor\n", 800 | "versicolor\n", 801 | "versicolor\n", 802 | "versicolor\n", 803 | "virginica\n", 804 | "virginica\n", 805 | "virginica\n", 806 | "virginica\n", 807 | "virginica\n", 808 | "virginica\n", 809 | "virginica\n", 810 | "virginica\n", 811 | "virginica\n", 812 | "virginica\n", 813 | "virginica\n", 814 | "virginica\n", 815 | "virginica\n", 816 | "virginica\n", 817 | "virginica\n", 818 | "virginica\n", 819 | "virginica\n", 820 | "virginica\n", 821 | "virginica\n", 822 | "virginica\n", 823 | "virginica\n", 824 | "virginica\n", 825 | "virginica\n", 826 | "virginica\n", 827 | "virginica\n", 828 | "virginica\n", 829 | "virginica\n", 830 | "virginica\n", 831 | "virginica\n", 832 | "virginica\n", 833 | "virginica\n", 834 | "virginica\n", 835 | "virginica\n", 836 | "virginica\n", 837 | "virginica\n", 838 | "virginica\n", 839 | "virginica\n", 840 | "virginica\n", 841 | "virginica\n", 842 | "virginica\n", 843 | "virginica\n", 844 | "virginica\n", 845 | "virginica\n", 846 | "virginica\n", 847 | "virginica\n", 848 | "virginica\n", 849 | "virginica\n", 850 | "virginica\n", 851 | "virginica\n", 852 | "virginica\n", 853 | "\n" 854 | ] 855 | } 856 | ], 857 | "source": [ 858 | "from urllib.parse import urlparse\n", 859 | "\n", 860 | "parsed_url = urlparse(transformer.output_path)\n", 861 | "bucket_name = parsed_url.netloc\n", 862 | "file_key = '{}/{}.out'.format(parsed_url.path[1:], \"batchtransform_test.csv\")\n", 863 | "\n", 864 | "s3_client = sess.boto_session.client('s3')\n", 865 | "\n", 866 | "response = s3_client.get_object(Bucket = sess.default_bucket(), Key = file_key)\n", 867 | "response_bytes = response['Body'].read().decode('utf-8')\n", 868 | "print(response_bytes)" 869 | ] 870 | }, 871 | { 872 | "cell_type": "markdown", 873 | "metadata": {}, 874 | "source": [ 875 | "## Deploy the model\n", 876 | "\n", 877 | "Deploying the model to Amazon SageMaker hosting just requires a `deploy` call on the fitted model. This call takes an instance count, instance type, and optionally serializer and deserializer functions. These are used when the resulting predictor is created on the endpoint." 878 | ] 879 | }, 880 | { 881 | "cell_type": "code", 882 | "execution_count": 30, 883 | "metadata": {}, 884 | "outputs": [ 885 | { 886 | "name": "stdout", 887 | "output_type": "stream", 888 | "text": [ 889 | "-----------!" 890 | ] 891 | } 892 | ], 893 | "source": [ 894 | "from sagemaker.serializers import CSVSerializer\n", 895 | "\n", 896 | "model = tree.create_model()\n", 897 | "predictor = tree.deploy(1, 'ml.m4.xlarge', serializer=CSVSerializer())" 898 | ] 899 | }, 900 | { 901 | "cell_type": "markdown", 902 | "metadata": {}, 903 | "source": [ 904 | "### Choose some data and use it for a prediction\n", 905 | "\n", 906 | "In order to do some predictions, we'll extract some of the data we used for training and do predictions against it. This is, of course, bad statistical practice, but a good way to see how the mechanism works." 
907 | ] 908 | }, 909 | { 910 | "cell_type": "code", 911 | "execution_count": 31, 912 | "metadata": {}, 913 | "outputs": [], 914 | "source": [ 915 | "shape=pd.read_csv(TRAINING_WORKDIR + \"/iris.csv\", header=None)\n", 916 | "\n", 917 | "import itertools\n", 918 | "\n", 919 | "a = [50*i for i in range(3)]\n", 920 | "b = [40+i for i in range(10)]\n", 921 | "indices = [i+j for i,j in itertools.product(a,b)]\n", 922 | "\n", 923 | "test_data=shape.iloc[indices[:-1]]\n", 924 | "test_X=test_data.iloc[:,1:]\n", 925 | "test_y=test_data.iloc[:,0]" 926 | ] 927 | }, 928 | { 929 | "cell_type": "markdown", 930 | "metadata": {}, 931 | "source": [ 932 | "Prediction is as easy as calling predict with the predictor we got back from deploy and the data we want to do predictions with. The serializers take care of doing the data conversions for us." 933 | ] 934 | }, 935 | { 936 | "cell_type": "code", 937 | "execution_count": 32, 938 | "metadata": {}, 939 | "outputs": [ 940 | { 941 | "name": "stdout", 942 | "output_type": "stream", 943 | "text": [ 944 | "setosa\n", 945 | "setosa\n", 946 | "setosa\n", 947 | "setosa\n", 948 | "setosa\n", 949 | "setosa\n", 950 | "setosa\n", 951 | "setosa\n", 952 | "setosa\n", 953 | "setosa\n", 954 | "versicolor\n", 955 | "versicolor\n", 956 | "versicolor\n", 957 | "versicolor\n", 958 | "versicolor\n", 959 | "versicolor\n", 960 | "versicolor\n", 961 | "versicolor\n", 962 | "versicolor\n", 963 | "versicolor\n", 964 | "virginica\n", 965 | "virginica\n", 966 | "virginica\n", 967 | "virginica\n", 968 | "virginica\n", 969 | "virginica\n", 970 | "virginica\n", 971 | "virginica\n", 972 | "virginica\n", 973 | "\n" 974 | ] 975 | } 976 | ], 977 | "source": [ 978 | "print(predictor.predict(test_X.values).decode('utf-8'))" 979 | ] 980 | }, 981 | { 982 | "cell_type": "markdown", 983 | "metadata": {}, 984 | "source": [ 985 | "### Cleanup Endpoint\n", 986 | "\n", 987 | "When you're done with the endpoint, you'll want to clean it up." 988 | ] 989 | }, 990 | { 991 | "cell_type": "code", 992 | "execution_count": 35, 993 | "metadata": {}, 994 | "outputs": [], 995 | "source": [ 996 | "predictor.delete_endpoint()" 997 | ] 998 | }, 999 | { 1000 | "cell_type": "markdown", 1001 | "metadata": {}, 1002 | "source": [ 1003 | "# Part 3 - Package your resources as an Amazon SageMaker Algorithm\n", 1004 | "(If you're looking to sell a pretrained model (ModelPackage), please skip to Part 4.)\n", 1005 | "\n", 1006 | "Now that you have verified that the algorithm code works for training, live inference and batch inference in the above sections, you can start packaging it up as an Amazon SageMaker Algorithm." 1007 | ] 1008 | }, 1009 | { 1010 | "cell_type": "markdown", 1011 | "metadata": {}, 1012 | "source": [ 1013 | "#### Region Limitation\n", 1014 | "Seller onboarding is limited to us-east-2 region (CMH) only. The client we are creating below will be hard-coded to talk to our us-east-2 endpoint only." 1015 | ] 1016 | }, 1017 | { 1018 | "cell_type": "code", 1019 | "execution_count": null, 1020 | "metadata": {}, 1021 | "outputs": [], 1022 | "source": [ 1023 | "import boto3\n", 1024 | "\n", 1025 | "smmp = boto3.client('sagemaker', region_name='us-east-2', endpoint_url=\"https://sagemaker.us-east-2.amazonaws.com\")" 1026 | ] 1027 | }, 1028 | { 1029 | "cell_type": "markdown", 1030 | "metadata": {}, 1031 | "source": [ 1032 | "## Algorithm Definition\n", 1033 | "\n", 1034 | "A SageMaker Algorithm consists of 2 parts:\n", 1035 | "\n", 1036 | "1. A training image\n", 1037 | "2. 
An inference image (optional)\n", 1038 | "\n", 1039 | "The key requirement is that the training and inference images (if provided) remain compatible with each other. Specifically, the model artifacts generated by the code in the training image should be readable and compatible with the code in the inference image. \n", 1040 | "\n", 1041 | "You can reuse the same image to perform both training and inference or you can choose to separate them. \n", 1042 | "\n", 1043 | "\n", 1044 | "This sample notebook has already created a single algorithm image that performs both training and inference. This image has also been pushed to your ECR registry at {{image}}. You need to provide the following details as part of this algorithm specification:\n", 1045 | "\n", 1046 | "#### Training Specification\n", 1047 | "\n", 1048 | "You specify details pertinent to your training algorithm in this section.\n", 1049 | "\n", 1050 | "#### Supported Hyper-parameters\n", 1051 | "\n", 1052 | "This section captures the hyper-parameters your algorithm supports: their names, types, whether they are required, default values, valid ranges, etc. This serves both as documentation for buyers and is used by Amazon SageMaker to perform validations of buyer requests in the synchronous request path.\n", 1053 | "\n", 1054 | "Please Note: While this section is optional, we strongly recommend you provide comprehensive information here to leverage our validations and serve as documentation. Additionally, without this being specified, customers cannot leverage your algorithm for Hyper-parameter tuning.\n", 1055 | "\n", 1056 | "*** NOTE: The code below has hyper-parameters hard-coded in the json present in src/training_specification.py. Until we have better functionality to customize it, please update the json in that file appropriately***\n", 1057 | "\n" 1058 | ] 1059 | }, 1060 | { 1061 | "cell_type": "code", 1062 | "execution_count": null, 1063 | "metadata": {}, 1064 | "outputs": [], 1065 | "source": [ 1066 | "from src.training_specification import TrainingSpecification\n", 1067 | "from src.training_channels import TrainingChannels\n", 1068 | "from src.metric_definitions import MetricDefinitions\n", 1069 | "from src.tuning_objectives import TuningObjectives\n", 1070 | "import json\n", 1071 | "\n", 1072 | "training_specification = TrainingSpecification().get_training_specification_dict(\n", 1073 | " ecr_image=image, \n", 1074 | " supports_gpu=True, \n", 1075 | " supported_channels=[\n", 1076 | " TrainingChannels(\"training\", description=\"Input channel that provides training data\", supported_content_types=[\"text/csv\"])], \n", 1077 | " supported_metrics=[MetricDefinitions(\"validation:accuracy\", \"validation-accuracy: (\\\\S+)\")],\n", 1078 | " supported_tuning_job_objective_metrics=[TuningObjectives(\"Maximize\", \"validation:accuracy\")]\n", 1079 | " )\n", 1080 | "\n", 1081 | "print(json.dumps(training_specification, indent=2, sort_keys=True))" 1082 | ] 1083 | }, 1084 | { 1085 | "cell_type": "markdown", 1086 | "metadata": {}, 1087 | "source": [ 1088 | "#### Inference Specification\n", 1089 | "\n", 1090 | "You specify details pertinent to your inference code in this section. 
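For orientation, the dictionary printed by the next cell roughly follows the shape sketched below. This is illustrative only and not part of the original notebook: the exact output of `InferenceSpecification().get_inference_specification_dict(...)` may differ, and the instance types shown are assumptions; field names follow the SageMaker CreateAlgorithm / CreateModelPackage APIs.

```python
# Illustrative only: an approximate shape for the inference specification built below.
# The concrete values (image URI, instance types) are placeholders, not guaranteed output.
example_inference_specification = {
    "InferenceSpecification": {
        "Containers": [
            {"Image": "<account>.dkr.ecr.<region>.amazonaws.com/decision-trees:latest"}
        ],
        "SupportedTransformInstanceTypes": ["ml.m4.xlarge"],
        "SupportedRealtimeInferenceInstanceTypes": ["ml.m4.xlarge"],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    }
}
```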
\n" 1091 | ] 1092 | }, 1093 | { 1094 | "cell_type": "code", 1095 | "execution_count": null, 1096 | "metadata": {}, 1097 | "outputs": [], 1098 | "source": [ 1099 | "from src.inference_specification import InferenceSpecification\n", 1100 | "import json\n", 1101 | "\n", 1102 | "inference_specification = InferenceSpecification().get_inference_specification_dict(\n", 1103 | " ecr_image=image,\n", 1104 | " supports_gpu=True,\n", 1105 | " supported_content_types=[\"text/csv\"],\n", 1106 | " supported_mime_types=[\"text/csv\"])\n", 1107 | "\n", 1108 | "print(json.dumps(inference_specification, indent=4, sort_keys=True))\n" 1109 | ] 1110 | }, 1111 | { 1112 | "cell_type": "markdown", 1113 | "metadata": {}, 1114 | "source": [ 1115 | "#### Validation Specification\n", 1116 | "\n", 1117 | "In order to provide confidence to the sellers (and buyers) that the products work in Amazon SageMaker before listing them on AWS Marketplace, SageMaker needs to perform basic validations. The product can be listed in AWS Marketplace only if this validation process succeeds. This validation process uses the validation profile and sample data provided by you to run the following validations:\n", 1118 | "\n", 1119 | "1. Create a training job in your account to verify your training image works with SageMaker.\n", 1120 | "2. Once the training job completes successfully, create a Model in your account using the algorithm's inference image and the model artifacts produced as part of the training job we ran. \n", 1121 | "3. Create a transform job in your account using the above Model to verify your inference image works with SageMaker" 1122 | ] 1123 | }, 1124 | { 1125 | "cell_type": "code", 1126 | "execution_count": null, 1127 | "metadata": {}, 1128 | "outputs": [], 1129 | "source": [ 1130 | "from src.algorithm_validation_specification import AlgorithmValidationSpecification\n", 1131 | "import json\n", 1132 | "\n", 1133 | "validation_specification = AlgorithmValidationSpecification().get_algo_validation_specification_dict(\n", 1134 | " validation_role = role,\n", 1135 | " training_channel_name = \"training\",\n", 1136 | " training_input = training_input,\n", 1137 | " batch_transform_input = transform_input,\n", 1138 | " content_type = \"text/csv\",\n", 1139 | " instance_type = \"ml.c4.xlarge\",\n", 1140 | " output_s3_location = 's3://{}/{}'.format(sess.default_bucket(), common_prefix))\n", 1141 | "\n", 1142 | "print(json.dumps(validation_specification, indent=4, sort_keys=True))" 1143 | ] 1144 | }, 1145 | { 1146 | "cell_type": "markdown", 1147 | "metadata": {}, 1148 | "source": [ 1149 | "## Putting it all together\n", 1150 | "\n", 1151 | "Now we put all the pieces together in the next cell and create an Amazon SageMaker Algorithm" 1152 | ] 1153 | }, 1154 | { 1155 | "cell_type": "code", 1156 | "execution_count": null, 1157 | "metadata": {}, 1158 | "outputs": [], 1159 | "source": [ 1160 | "import json\n", 1161 | "import time\n", 1162 | "\n", 1163 | "algorithm_name = \"scikit-decision-trees-\" + str(round(time.time()))\n", 1164 | "\n", 1165 | "create_algorithm_input_dict = {\n", 1166 | " \"AlgorithmName\" : algorithm_name,\n", 1167 | " \"AlgorithmDescription\" : \"Decision trees using Scikit\",\n", 1168 | " \"CertifyForMarketplace\" : True\n", 1169 | "}\n", 1170 | "create_algorithm_input_dict.update(training_specification)\n", 1171 | "create_algorithm_input_dict.update(inference_specification)\n", 1172 | "create_algorithm_input_dict.update(validation_specification)\n", 1173 | "\n", 1174 | 
"print(json.dumps(create_algorithm_input_dict, indent=4, sort_keys=True))\n", 1175 | "\n", 1176 | "print (\"Now creating an algorithm in SageMaker\")\n", 1177 | "\n", 1178 | "smmp.create_algorithm(**create_algorithm_input_dict)" 1179 | ] 1180 | }, 1181 | { 1182 | "cell_type": "markdown", 1183 | "metadata": {}, 1184 | "source": [ 1185 | "### Describe the algorithm\n", 1186 | "\n", 1187 | "The next cell describes the Algorithm and waits until it reaches a terminal state (Completed or Failed)" 1188 | ] 1189 | }, 1190 | { 1191 | "cell_type": "code", 1192 | "execution_count": null, 1193 | "metadata": {}, 1194 | "outputs": [], 1195 | "source": [ 1196 | "import time\n", 1197 | "import json\n", 1198 | "\n", 1199 | "while True:\n", 1200 | " response = smmp.describe_algorithm(AlgorithmName=algorithm_name)\n", 1201 | " status = response[\"AlgorithmStatus\"]\n", 1202 | " print (status)\n", 1203 | " if (status == \"Completed\" or status == \"Failed\"):\n", 1204 | " print (response[\"AlgorithmStatusDetails\"])\n", 1205 | " break\n", 1206 | " time.sleep(5)\n" 1207 | ] 1208 | }, 1209 | { 1210 | "cell_type": "markdown", 1211 | "metadata": {}, 1212 | "source": [ 1213 | "# Part 4 - Package your resources as an Amazon SageMaker ModelPackage\n", 1214 | "\n", 1215 | "In this section, we will see how you can package your artifacts (ECR image and the trained artifact from your previous training job) into a ModelPackage. Once you complete this, you can list your product as a pretrained model in the AWS Marketplace.\n", 1216 | "\n", 1217 | "## Model Package Definition\n", 1218 | "A Model Package is a reusable model artifacts abstraction that packages all ingredients necessary for inference. It consists of an inference specification that defines the inference image to use along with an optional model weights location.\n" 1219 | ] 1220 | }, 1221 | { 1222 | "cell_type": "markdown", 1223 | "metadata": {}, 1224 | "source": [ 1225 | "#### Region Limitation\n", 1226 | "Seller onboarding is limited to us-east-2 region (CMH) only. The client we are creating below will be hard-coded to talk to our us-east-2 endpoint only. (Note: You may have previous done this step in Part 3. 
Repeating here to keep Part 4 self contained.)" 1227 | ] 1228 | }, 1229 | { 1230 | "cell_type": "code", 1231 | "execution_count": null, 1232 | "metadata": {}, 1233 | "outputs": [], 1234 | "source": [ 1235 | "smmp = boto3.client('sagemaker', region_name='us-east-2', endpoint_url=\"https://sagemaker.us-east-2.amazonaws.com\")" 1236 | ] 1237 | }, 1238 | { 1239 | "cell_type": "markdown", 1240 | "metadata": {}, 1241 | "source": [ 1242 | "#### Inference Specification\n", 1243 | "\n", 1244 | "You specify details pertinent to your inference code in this section.\n" 1245 | ] 1246 | }, 1247 | { 1248 | "cell_type": "code", 1249 | "execution_count": null, 1250 | "metadata": {}, 1251 | "outputs": [], 1252 | "source": [ 1253 | "from src.inference_specification import InferenceSpecification\n", 1254 | "\n", 1255 | "import json\n", 1256 | "\n", 1257 | "modelpackage_inference_specification = InferenceSpecification().get_inference_specification_dict(\n", 1258 | " ecr_image=image,\n", 1259 | " supports_gpu=True,\n", 1260 | " supported_content_types=[\"text/csv\"],\n", 1261 | " supported_mime_types=[\"text/csv\"])\n", 1262 | "\n", 1263 | "# Specify the model data resulting from the previously completed training job\n", 1264 | "modelpackage_inference_specification[\"InferenceSpecification\"][\"Containers\"][0][\"ModelDataUrl\"]=tree.model_data\n", 1265 | "print(json.dumps(modelpackage_inference_specification, indent=4, sort_keys=True))" 1266 | ] 1267 | }, 1268 | { 1269 | "cell_type": "markdown", 1270 | "metadata": {}, 1271 | "source": [ 1272 | "#### Validation Specification\n", 1273 | "\n", 1274 | "In order to provide confidence to the sellers (and buyers) that the products work in Amazon SageMaker before listing them on AWS Marketplace, SageMaker needs to perform basic validations. The product can be listed in the AWS Marketplace only if this validation process succeeds. This validation process uses the validation profile and sample data provided by you to run the following validations:\n", 1275 | "\n", 1276 | "* Create a transform job in your account using the above Model to verify your inference image works with SageMaker.\n" 1277 | ] 1278 | }, 1279 | { 1280 | "cell_type": "code", 1281 | "execution_count": null, 1282 | "metadata": {}, 1283 | "outputs": [], 1284 | "source": [ 1285 | "from src.modelpackage_validation_specification import ModelPackageValidationSpecification\n", 1286 | "import json\n", 1287 | "\n", 1288 | "modelpackage_validation_specification = ModelPackageValidationSpecification().get_validation_specification_dict(\n", 1289 | " validation_role = role,\n", 1290 | " batch_transform_input = transform_input,\n", 1291 | " content_type = \"text/csv\",\n", 1292 | " instance_type = \"ml.c4.xlarge\",\n", 1293 | " output_s3_location = 's3://{}/{}'.format(sess.default_bucket(), common_prefix))\n", 1294 | "\n", 1295 | "print(json.dumps(modelpackage_validation_specification, indent=4, sort_keys=True))" 1296 | ] 1297 | }, 1298 | { 1299 | "cell_type": "markdown", 1300 | "metadata": {}, 1301 | "source": [ 1302 | "## Putting it all together\n", 1303 | "\n", 1304 | "Now we put all the pieces together in the next cell and create an Amazon SageMaker Model Package." 
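Both Part 3 and Part 4 poll a `describe_*` call until the entity reaches a terminal state. As an aside, that loop could be factored into a small helper; the sketch below is illustrative and not part of the original notebook.

```python
# Illustrative helper (not in the original notebook): a generic version of the polling loops
# used in Part 3 and Part 4. `describe_fn` is any callable returning the describe response.
import time


def wait_for_terminal_status(describe_fn, status_key, poll_seconds=5):
    while True:
        response = describe_fn()
        status = response[status_key]
        print(status)
        if status in ("Completed", "Failed"):
            return response
        time.sleep(poll_seconds)


# Example usage (after create_model_package in the next cell has run):
# wait_for_terminal_status(
#     lambda: smmp.describe_model_package(ModelPackageName=model_package_name),
#     "ModelPackageStatus",
# )
```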
1305 | ] 1306 | }, 1307 | { 1308 | "cell_type": "code", 1309 | "execution_count": null, 1310 | "metadata": {}, 1311 | "outputs": [], 1312 | "source": [ 1313 | "import json\n", 1314 | "import time\n", 1315 | "\n", 1316 | "model_package_name = \"scikit-iris-detector-\" + str(round(time.time()))\n", 1317 | "create_model_package_input_dict = {\n", 1318 | " \"ModelPackageName\" : model_package_name,\n", 1319 | " \"ModelPackageDescription\" : \"Model to detect 3 different types of irises (Setosa, Versicolour, and Virginica)\",\n", 1320 | " \"CertifyForMarketplace\" : True\n", 1321 | "}\n", 1322 | "create_model_package_input_dict.update(modelpackage_inference_specification)\n", 1323 | "create_model_package_input_dict.update(modelpackage_validation_specification)\n", 1324 | "print(json.dumps(create_model_package_input_dict, indent=4, sort_keys=True))\n", 1325 | "\n", 1326 | "smmp.create_model_package(**create_model_package_input_dict)" 1327 | ] 1328 | }, 1329 | { 1330 | "cell_type": "markdown", 1331 | "metadata": {}, 1332 | "source": [ 1333 | "#### Describe the ModelPackage \n", 1334 | "\n", 1335 | "The next cell describes the ModelPackage and waits until it reaches a terminal state (Completed or Failed)" 1336 | ] 1337 | }, 1338 | { 1339 | "cell_type": "code", 1340 | "execution_count": null, 1341 | "metadata": {}, 1342 | "outputs": [], 1343 | "source": [ 1344 | "import time\n", 1345 | "import json\n", 1346 | "\n", 1347 | "while True:\n", 1348 | " response = smmp.describe_model_package(ModelPackageName=model_package_name)\n", 1349 | " status = response[\"ModelPackageStatus\"]\n", 1350 | " print (status)\n", 1351 | " if (status == \"Completed\" or status == \"Failed\"):\n", 1352 | " print (response[\"ModelPackageStatusDetails\"])\n", 1353 | " break\n", 1354 | " time.sleep(5)\n" 1355 | ] 1356 | }, 1357 | { 1358 | "cell_type": "markdown", 1359 | "metadata": {}, 1360 | "source": [ 1361 | "## Debugging Creation Issues\n", 1362 | "\n", 1363 | "Entity creation typically never fails in the synchronous path. However, the validation process can fail for many reasons. If the above Algorithm creation fails, you can investigate the cause for the failure by looking at the \"AlgorithmStatusDetails\" field in the Algorithm object or \"ModelPackageStatusDetails\" field in the ModelPackage object. You can also look for the Training Jobs / Transform Jobs created in your account as part of our validation and inspect their logs for more hints on what went wrong. \n", 1364 | "\n", 1365 | "If all else fails, please contact AWS Customer Support for assistance!" 1366 | ] 1367 | }, 1368 | { 1369 | "cell_type": "markdown", 1370 | "metadata": {}, 1371 | "source": [ 1372 | "\n", 1373 | "## List on AWS Marketplace\n", 1374 | "\n", 1375 | "Next, please go back to the Amazon SageMaker console, click on \"Algorithms\" (or \"Model Packages\") and you'll find the entity you created above. 
If it was successfully created and validated, you should be able to select the entity and \"Publish new ML Marketplace listing\" from SageMaker console.\n", 1376 | "" 1377 | ] 1378 | } 1379 | ], 1380 | "metadata": { 1381 | "kernelspec": { 1382 | "display_name": "Python 3", 1383 | "language": "python", 1384 | "name": "python3" 1385 | }, 1386 | "language_info": { 1387 | "codemirror_mode": { 1388 | "name": "ipython", 1389 | "version": 3 1390 | }, 1391 | "file_extension": ".py", 1392 | "mimetype": "text/x-python", 1393 | "name": "python", 1394 | "nbconvert_exporter": "python", 1395 | "pygments_lexer": "ipython3", 1396 | "version": "3.6.10" 1397 | } 1398 | }, 1399 | "nbformat": 4, 1400 | "nbformat_minor": 4 1401 | } 1402 | --------------------------------------------------------------------------------