.
├── .gitignore
├── LICENSE
├── README.md
├── VERSION
├── helm-ml-score-app
│   ├── .helmignore
│   ├── Chart.yaml
│   ├── templates
│   │   ├── NOTES.txt
│   │   ├── deployment.yaml
│   │   ├── namespace.yaml
│   │   └── service.yaml
│   └── values.yaml
├── py-flask-ml-score-api
│   ├── .dockerignore
│   ├── Dockerfile
│   ├── Pipfile
│   ├── Pipfile.lock
│   ├── api.py
│   └── py-flask-ml-score.yaml
└── seldon-ml-score-component
    ├── Dockerfile
    ├── MLScore.py
    ├── Pipfile
    └── Pipfile.lock

/.gitignore:
--------------------------------------------------------------------------------
.venv
.vscode
.idea
.mypy_cache
__pycache__
.DS_Store
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
Copyright 2019 Alex Ioannides

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Deploying Machine Learning Models on Kubernetes

A common pattern for deploying Machine Learning (ML) models into production environments - e.g. ML models trained using the SciKit Learn or Keras packages (for Python), that are ready to provide predictions on new data - is to expose these ML models as RESTful API microservices, hosted from within [Docker](https://www.docker.com) containers. These can then be deployed to a cloud environment for handling everything required for maintaining continuous availability - e.g. fault-tolerance, auto-scaling, load balancing and rolling service updates.

The configuration details for a continuously available cloud deployment are specific to the targeted cloud provider(s) - e.g. the deployment process and topology for Amazon Web Services is not the same as that for Microsoft Azure, which in turn is not the same as that for Google Cloud Platform. This constitutes knowledge that needs to be acquired for every cloud provider. Furthermore, it is difficult (some would say near impossible) to test entire deployment strategies locally, which makes issues such as networking hard to debug.

[Kubernetes](https://kubernetes.io) is a container orchestration platform that seeks to address these issues. Briefly, it provides a mechanism for defining **entire** microservice-based application deployment topologies and their service-level requirements for maintaining continuous availability.
It is agnostic to the targeted cloud provider, can be run on-premises and even locally on your laptop - all that's required is a cluster of virtual machines running Kubernetes - i.e. a Kubernetes cluster.

This README is designed to be read in conjunction with the code in this repository, which contains the Python modules, Docker configuration files and Kubernetes instructions for demonstrating how a simple Python ML model can be turned into a production-grade RESTful model-scoring (or prediction) API service, using Docker and Kubernetes - both locally and with Google Cloud Platform (GCP). It is not a comprehensive guide to Kubernetes, Docker or ML - think of it more as an 'ML on Kubernetes 101' for demonstrating capability and allowing newcomers to Kubernetes (e.g. data scientists who are more focused on building models as opposed to deploying them) to get up-and-running quickly and become familiar with the basic concepts and patterns.

We will demonstrate ML model deployment using two different approaches: a first-principles approach using Docker and Kubernetes; and then a deployment using the [Seldon-Core](https://www.seldon.io) Kubernetes-native framework for streamlining the deployment of ML services. The former will help us to appreciate the latter, which constitutes a powerful framework for deploying and performance-monitoring many complex ML model pipelines.

This work was initially committed in 2018 and has since formed the basis of [Bodywork](https://github.com/bodywork-ml/bodywork-core) - an open-source MLOps tool for deploying machine learning projects developed in Python, to Kubernetes. Bodywork automates a lot of the steps that this project has demonstrated to the many machine learning engineers that have used it over the years - take a look at the [documentation](https://bodywork.readthedocs.io/en/latest/).

## Containerising a Simple ML Model Scoring Service using Flask and Docker

We start by demonstrating how to achieve this basic competence using the simple Python ML model scoring REST API contained in the `api.py` module, together with the `Dockerfile`, both within the `py-flask-ml-score-api` directory, whose core contents are as follows,

```bash
py-flask-ml-score-api/
 | Dockerfile
 | Pipfile
 | Pipfile.lock
 | api.py
```

If you're already feeling lost then these files are discussed in the points below, otherwise feel free to skip to the next section.

### Defining the Flask Service in the `api.py` Module

This is a Python module that uses the [Flask](http://flask.pocoo.org) framework for defining a web service (`app`), with a function (`score`), that executes in response to an HTTP request to a specific URL (or 'route'), thanks to being wrapped by the `app.route` function. For reference, the relevant code is reproduced below,

```python
from flask import Flask, jsonify, make_response, request

app = Flask(__name__)


@app.route('/score', methods=['POST'])
def score():
    features = request.json['X']
    return make_response(jsonify({'score': features}))


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```

If running locally - e.g. by starting the web service using `python api.py` - we would be able to reach our function (or 'endpoint') at `http://localhost:5000/score`. This function takes data sent to it as JSON (which Flask automatically de-serialises into a Python dict, made available via the `request` object) and returns a response (automatically serialised as JSON).
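For example, assuming the project's dependencies have been installed with Pipenv (covered in the appendix at the bottom), a quick local smoke test might look as follows,

```bash
# install the dependencies and start the service from within py-flask-ml-score-api
cd py-flask-ml-score-api
pipenv install
pipenv run python api.py

# then, from a second terminal, send a test request to the endpoint
curl http://localhost:5000/score \
    --request POST \
    --header "Content-Type: application/json" \
    --data '{"X": [1, 2]}'
```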
In our example function, we expect an array of features, `X`, that we pass to an ML model, which in our example returns those same features back to the caller - i.e. our chosen ML model is the identity function, which we have chosen for purely demonstrative purposes. We could just as easily have loaded a pickled SciKit-Learn or Keras model and passed the data to the appropriate `predict` method, returning a score for the feature-data as JSON - see [here](https://github.com/AlexIoannides/ml-workflow-automation/blob/master/deploy/py-sklearn-flask-ml-service/api.py) for an example of this in action.

### Defining the Docker Image with the `Dockerfile`

A `Dockerfile` is essentially the configuration file used by Docker, which allows you to define the contents of a container and configure its operation when running. This static data, when not executed as a container, is referred to as the 'image'. For reference, the `Dockerfile` is reproduced below,

```docker
FROM python:3.6-slim
WORKDIR /usr/src/app
COPY . .
RUN pip install pipenv
RUN pipenv install
EXPOSE 5000
CMD ["pipenv", "run", "python", "api.py"]
```

In our example `Dockerfile` we:

- start by using a pre-configured Docker image (`python:3.6-slim`) that has a slimmed-down version of the [Debian](https://www.debian.org) Linux distribution with Python already installed;
- then copy the contents of the `py-flask-ml-score-api` local directory to a directory on the image called `/usr/src/app`;
- then use `pip` to install the [Pipenv](https://pipenv.readthedocs.io/en/latest/) package for Python dependency management (see the appendix at the bottom for more information on how we use Pipenv);
- then use Pipenv to install the dependencies described in `Pipfile.lock` into a virtual environment on the image;
- configure port 5000 to be exposed to the 'outside world' on the running container; and finally,
- start our Flask RESTful web service - `api.py`. Note, that here we are relying on Flask's internal [WSGI](https://en.wikipedia.org/wiki/Web_Server_Gateway_Interface) server, whereas in a production setting we would recommend configuring a more robust option (e.g. Gunicorn), [as discussed here](https://pythonspeed.com/articles/gunicorn-in-docker/) - see the sketch at the end of this section.

Building this custom image and asking the Docker daemon to run it (remember that a running image is a 'container'), will expose our RESTful ML model scoring service on port 5000 as if it were running on a dedicated virtual machine. Refer to the official [Docker documentation](https://docs.docker.com/get-started/) for a more comprehensive discussion of these core concepts.
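As a minimal sketch of the more robust option mentioned above - and noting that Gunicorn is not in the project's `Pipfile` as it stands, so it would need to be added first - the service could be served with Gunicorn workers as follows,

```bash
# add Gunicorn to the project's virtual environment
pipenv install gunicorn

# serve the Flask app object defined in api.py with 2 worker processes
pipenv run gunicorn --bind 0.0.0.0:5000 --workers 2 api:app
```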
### Building a Docker Image for the ML Scoring Service

We assume that [Docker is running locally](https://www.docker.com) (both Docker client and daemon), that the client is logged into an account on [DockerHub](https://hub.docker.com) and that there is a terminal open in this project's root directory. To build the image described in the `Dockerfile` run,

```bash
docker build --tag alexioannides/test-ml-score-api py-flask-ml-score-api
```

Where 'alexioannides' refers to the name of the DockerHub account that we will push the image to, once we have tested it.

#### Testing

To test that the image can be used to create a Docker container that functions as we expect it to, use,

```bash
docker run --rm --name test-api -p 5000:5000 -d alexioannides/test-ml-score-api
```

Where we have mapped port 5000 from the Docker container - i.e. the port our ML model scoring service is listening to - to port 5000 on our host machine (localhost). Then check that the container is listed as running using,

```bash
docker ps
```

And then test the exposed API endpoint using,

```bash
curl http://localhost:5000/score \
    --request POST \
    --header "Content-Type: application/json" \
    --data '{"X": [1, 2]}'
```

Where you should expect a response along the lines of,

```json
{"score":[1,2]}
```

All our test model does is return the input data - i.e. it is the identity function. Only a few lines of additional code are required to modify this service to load a SciKit Learn model from disk and pass new data to its `predict` method for generating predictions - see [here](https://github.com/AlexIoannides/ml-workflow-automation/blob/master/deploy/py-sklearn-flask-ml-service/api.py) for an example. Now that the container has been confirmed as operational, we can stop it,

```bash
docker stop test-api
```

#### Pushing the Image to the DockerHub Registry

In order for a remote Docker host or Kubernetes cluster to have access to the image we've created, we need to publish it to an image registry. All cloud computing providers that offer managed Docker-based services will provide private image registries, but we will use the public image registry at DockerHub, for convenience. To push our new image to DockerHub (where my account ID is 'alexioannides') use,

```bash
docker push alexioannides/test-ml-score-api
```

Where we can now see that our chosen naming convention for the image is intrinsically linked to our target image registry (you will need to insert your own account ID where required). Once the upload is finished, log onto DockerHub to confirm that the upload has been successful via the [DockerHub UI](https://hub.docker.com/u/alexioannides).
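Note, that because we didn't specify a tag the image is pushed as `latest`. For anything beyond experimentation it is worth tagging images explicitly - for example, using a (hypothetical) semantic version,

```bash
# tag the existing image with an explicit version, in addition to 'latest'
docker tag alexioannides/test-ml-score-api alexioannides/test-ml-score-api:0.1.0

# push the versioned tag to DockerHub
docker push alexioannides/test-ml-score-api:0.1.0
```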
## Installing Kubernetes for Local Development and Testing

There are two options for installing a single-node Kubernetes cluster that is suitable for local development and testing: via the [Docker Desktop](https://www.docker.com/products/docker-desktop) client, or via [Minikube](https://github.com/kubernetes/minikube).

### Installing Kubernetes via Docker Desktop

If you have been using Docker on a Mac, then the chances are that you will have been doing this via the Docker Desktop application. If not (e.g. if you installed Docker Engine via Homebrew), then Docker Desktop can be downloaded [here](https://www.docker.com/products/docker-desktop). Docker Desktop now comes bundled with Kubernetes, which can be activated by going to `Preferences -> Kubernetes` and selecting `Enable Kubernetes`. It will take a while for Docker Desktop to download the Docker images required to run Kubernetes, so be patient. After it has finished, go to `Preferences -> Advanced` and ensure that at least 2 CPUs and 4 GiB have been allocated to the Docker Engine, which are the minimum resources required to deploy a single Seldon ML component.

To interact with the Kubernetes cluster you will need the `kubectl` Command Line Interface (CLI) tool, which will need to be downloaded separately. The easiest way to do this on a Mac is via Homebrew - i.e. with `brew install kubernetes-cli`. Once you have `kubectl` installed and a Kubernetes cluster up-and-running, test that everything is working as expected by running,

```bash
kubectl cluster-info
```

Which ought to return something along the lines of,

```bash
Kubernetes master is running at https://kubernetes.docker.internal:6443
KubeDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```

### Installing Kubernetes via Minikube

On Mac OS X, the steps required to get up-and-running with Minikube are as follows:

- make sure the [Homebrew](https://brew.sh) package manager for OS X is installed; then,
- install VirtualBox using, `brew cask install virtualbox` (you may need to approve installation via OS X System Preferences); and then,
- install Minikube using, `brew cask install minikube`.

To start the test cluster run,

```bash
minikube start --memory 4096
```

Where we have specified the minimum amount of memory required to deploy a single Seldon ML component. Be patient - Minikube may take a while to start. To test that the cluster is operational run,

```bash
kubectl cluster-info
```

Where `kubectl` is the standard Command Line Interface (CLI) client for interacting with the Kubernetes API (which was installed as part of Minikube, but is also available separately).

### Deploying the Containerised ML Model Scoring Service to Kubernetes

To launch our test model scoring service on Kubernetes, we will start by deploying the containerised service within a Kubernetes [Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/), whose rollout is managed by a [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), which in turn creates a [ReplicaSet](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/) - a Kubernetes resource that ensures a minimum number of pods (or replicas) running our service are operational at any given time. This is achieved with,

```bash
kubectl create deployment test-ml-score-api --image=alexioannides/test-ml-score-api:latest
```

To check on the status of the deployment run,

```bash
kubectl rollout status deployment test-ml-score-api
```

And to see the pods that it has created run,

```bash
kubectl get pods
```
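If a pod is not reporting a `Running` status, then `kubectl` can usually tell you why - for example, using the pod name returned by the previous command,

```bash
# describe the pod's state and recent events (e.g. image pull errors)
kubectl describe pod test-ml-score-api-szd4j

# stream the logs from the container running within the pod
kubectl logs test-ml-score-api-szd4j
```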
It is possible to use [port forwarding](https://en.wikipedia.org/wiki/Port_forwarding) to test an individual container without exposing it to the public internet. To use this, open a separate terminal and run (for example),

```bash
kubectl port-forward test-ml-score-api-szd4j 5000:5000
```

Where `test-ml-score-api-szd4j` is the precise name of the pod currently active on the cluster, as determined from the `kubectl get pods` command. Then from your original terminal, to repeat our test request against the same container running on Kubernetes run,

```bash
curl http://localhost:5000/score \
    --request POST \
    --header "Content-Type: application/json" \
    --data '{"X": [1, 2]}'
```

To expose the container as a (load balanced) [service](https://kubernetes.io/docs/concepts/services-networking/service/) to the outside world, we have to create a Kubernetes service that references it. This is achieved with the following command,

```bash
kubectl expose deployment test-ml-score-api --port 5000 --type=LoadBalancer --name test-ml-score-api-lb
```

If you are using Docker Desktop, then this will automatically emulate a load balancer at `http://localhost:5000`. To find where Minikube has exposed its emulated load balancer run,

```bash
minikube service list
```

Now we test our new service - for example (with Docker Desktop),

```bash
curl http://localhost:5000/score \
    --request POST \
    --header "Content-Type: application/json" \
    --data '{"X": [1, 2]}'
```

Note, that neither Docker Desktop nor Minikube sets up a real-life load balancer (which is what would happen if we made this request on a cloud platform). To tear-down the load balancer, deployment and pod, run the following commands in sequence,

```bash
kubectl delete deployment test-ml-score-api
kubectl delete service test-ml-score-api-lb
```

## Configuring a Multi-Node Cluster on Google Cloud Platform

In order to perform testing on a real-world Kubernetes cluster with far greater resources than those available on a laptop, the easiest way is to use a managed Kubernetes platform from a cloud provider. We will use Kubernetes Engine on [Google Cloud Platform (GCP)](https://cloud.google.com).

### Getting Up-and-Running with Google Cloud Platform

Before we can use Google Cloud Platform, sign up for an account and create a project specifically for this work. Next, make sure that the GCP SDK is installed on your local machine - e.g.,

```bash
brew cask install google-cloud-sdk
```

Or by downloading an installation image [directly from GCP](https://cloud.google.com/sdk/docs/quickstart-macos). Note, that if you haven't already installed `kubectl`, then you will need to do so now, which can be done using the GCP SDK,

```bash
gcloud components install kubectl
```

We then need to initialise the SDK,

```bash
gcloud init
```

Which will open a browser and guide you through the necessary authentication steps. Make sure you pick the project you created, together with a default zone and region (if this has not been set via Compute Engine -> Settings).
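Before creating any resources it is worth confirming that the SDK is authenticated and pointing at the intended project - for example,

```bash
# check the active account, project and default compute zone/region
gcloud config list

# list the projects that the active account can access
gcloud projects list
```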
### Initialising a Kubernetes Cluster

Firstly, within the GCP UI visit the Kubernetes Engine page to trigger the Kubernetes API to start-up. From the command line we then start a cluster using,

```bash
gcloud container clusters create k8s-test-cluster --num-nodes 3 --machine-type g1-small
```

And then go make a cup of coffee while you wait for the cluster to be created. Note, that this will automatically switch your `kubectl` context to point to the cluster on GCP, as you will see if you run, `kubectl config get-contexts`. To switch back to the Docker Desktop client use `kubectl config use-context docker-desktop`.

### Launching the Containerised ML Model Scoring Service on GCP

This is largely the same as we did for running the test service locally - run the following commands in sequence,

```bash
kubectl create deployment test-ml-score-api --image=alexioannides/test-ml-score-api:latest
kubectl expose deployment test-ml-score-api --port 5000 --type=LoadBalancer --name test-ml-score-api-lb
```

But, to find the external IP address for the GCP cluster we will need to use,

```bash
kubectl get services
```

And then we can test our service on GCP - for example,

```bash
curl http://35.246.92.213:5000/score \
    --request POST \
    --header "Content-Type: application/json" \
    --data '{"X": [1, 2]}'
```

Or, we could again use port forwarding to attach to a single pod - for example,

```bash
kubectl port-forward test-ml-score-api-nl4sc 5000:5000
```

And then in a separate terminal,

```bash
curl http://localhost:5000/score \
    --request POST \
    --header "Content-Type: application/json" \
    --data '{"X": [1, 2]}'
```

Finally, we tear-down the deployment and load balancer,

```bash
kubectl delete deployment test-ml-score-api
kubectl delete service test-ml-score-api-lb
```

## Switching Between Kubectl Contexts

If you are running Kubernetes both locally and with a cluster on GCP, then you can switch Kubectl [context](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) from one cluster to the other, as follows,

```bash
kubectl config use-context docker-desktop
```

Where the list of available contexts can be found using,

```bash
kubectl config get-contexts
```
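When working with multiple clusters it is easy to point a command at the wrong one, so it can pay to check the active context before deploying anything,

```bash
# print the context that kubectl commands will currently be sent to
kubectl config current-context
```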
## Using YAML Files to Define and Deploy the ML Model Scoring Service

Up to this point we have been using Kubectl commands to define and deploy a basic version of our ML model scoring service. This is fine for demonstrative purposes, but quickly becomes limiting, as well as unmanageable. In practice, the standard way of defining entire Kubernetes deployments is with YAML files, posted to the Kubernetes API. The `py-flask-ml-score.yaml` file in the `py-flask-ml-score-api` directory is an example of how our ML model scoring service can be defined in a single YAML file. This can now be deployed using a single command,

```bash
kubectl apply -f py-flask-ml-score-api/py-flask-ml-score.yaml
```

Note, that we have defined three separate Kubernetes components in this single file - a [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/), a deployment and a load-balanced service - using `---` to delimit the definition of each separate component. To see all components deployed into this namespace use,

```bash
kubectl get all --namespace test-ml-app
```

And likewise set the `--namespace` flag when using any `kubectl get` command to inspect the different components of our test app. Alternatively, we can set our new namespace as the default context,

```bash
kubectl config set-context $(kubectl config current-context) --namespace=test-ml-app
```

And then run,

```bash
kubectl get all
```

Where we can switch back to the default namespace using,

```bash
kubectl config set-context $(kubectl config current-context) --namespace=default
```

To tear-down this application we can then use,

```bash
kubectl delete -f py-flask-ml-score-api/py-flask-ml-score.yaml
```

Which saves us from having to use multiple commands to delete each component individually. Refer to the [official documentation for the Kubernetes API](https://kubernetes.io/docs/home/) to understand the contents of this YAML file in greater depth.

## Using Helm Charts to Define and Deploy the ML Model Scoring Service

Writing YAML files for Kubernetes can get repetitive and hard to manage, especially when a lot of 'copy-paste' is involved - e.g. when only a handful of parameters need to change from one deployment to the next, but a 'wall of YAML' needs to be modified. Enter [Helm](https://helm.sh) - a framework for creating, executing and managing Kubernetes deployment templates. What follows is a very high-level demonstration of how Helm can be used to deploy our ML model scoring service - for a comprehensive discussion of Helm's full capabilities (and there are a lot of them), please refer to the [official documentation](https://docs.helm.sh). Seldon-Core can also be deployed using Helm and we will cover this in more detail later on.

### Installing Helm

As before, the easiest way to install Helm onto Mac OS X is to use the Homebrew package manager,

```bash
brew install kubernetes-helm
```
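To confirm that the Helm client has been installed correctly (this project assumes Helm 2, which splits functionality between a local client and an in-cluster server), run,

```bash
# check the version of the local Helm client, without contacting the cluster
helm version --client
```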
Helm relies on a dedicated deployment server, referred to as the 'Tiller', to be running within the same Kubernetes cluster we wish to deploy our applications to. Before we deploy Tiller we need to create a cluster-wide super-user role to assign to it, so that it can create and modify Kubernetes resources in any namespace. To achieve this, we start by creating a [Service Account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) that is destined for our Tiller. A Service Account is a means by which a pod (and any service running within it) can authenticate itself to the Kubernetes API, in order to view, create and modify resources. We create this in the `kube-system` namespace (a common convention) as follows,

```bash
kubectl --namespace kube-system create serviceaccount tiller
```

We then create a binding between this Service Account and the `cluster-admin` [Cluster Role](https://kubernetes.io/docs/reference/access-authn-authz/rbac/), which as the name suggests grants cluster-wide admin rights,

```bash
kubectl create clusterrolebinding tiller \
    --clusterrole cluster-admin \
    --serviceaccount=kube-system:tiller
```

We can now deploy the Helm Tiller to a Kubernetes cluster, with the desired access rights using,

```bash
helm init --service-account tiller
```

### Deploying with Helm

To create a fresh Helm deployment definition - referred to as a 'chart' in Helm terminology - run,

```bash
helm create NAME-OF-YOUR-HELM-CHART
```

This creates a new directory - e.g. `helm-ml-score-app` as included with this repository - with the following high-level directory structure,

```bash
helm-ml-score-app/
 | -- charts/
 | -- templates/
 | Chart.yaml
 | values.yaml
```

Briefly, the `charts` directory contains other charts that our new chart will depend on (we will not make use of this), the `templates` directory contains our Helm templates, `Chart.yaml` contains core information for our chart (e.g. name and version information) and `values.yaml` contains default values to render our templates with (in the case that no values are set from the command line).

The next step is to delete all of the files in the `templates` directory (apart from `NOTES.txt`), and to replace them with our own. We start with `namespace.yaml` for declaring a namespace for our app,

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.app.namespace }}
```

Anyone familiar with text-templating frameworks (e.g. Jinja) will recognise the use of `{{}}` for defining values that will be injected into the rendered template. In this specific instance `.Values.app.namespace` injects the `app.namespace` variable, whose default value is defined in `values.yaml`.
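Any of these default values can be overridden from the command line when a chart is installed - for example, to render the templates with a different (hypothetical) namespace, without deploying anything, one might use,

```bash
# override the default app.namespace value defined in values.yaml
helm install helm-ml-score-app --name test-ml-app --set app.namespace=my-test-ns --dry-run --debug
```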
Next we define a deployment of pods in `deployment.yaml`,

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Values.app.name }}
    env: {{ .Values.app.env }}
  name: {{ .Values.app.name }}
  namespace: {{ .Values.app.namespace }}
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.app.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.app.name }}
        env: {{ .Values.app.env }}
    spec:
      containers:
      - image: {{ .Values.app.image }}
        name: {{ .Values.app.name }}
        ports:
        - containerPort: {{ .Values.containerPort }}
          protocol: TCP
```

And the details of the load balancer service in `service.yaml`,

```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.app.name }}-lb
  labels:
    app: {{ .Values.app.name }}
  namespace: {{ .Values.app.namespace }}
spec:
  type: LoadBalancer
  ports:
  - port: {{ .Values.containerPort }}
    targetPort: {{ .Values.targetPort }}
  selector:
    app: {{ .Values.app.name }}
```

What we have done, in essence, is to split-out each component of the deployment details from `py-flask-ml-score.yaml` into its own file and then define template variables for each parameter of the configuration that is most likely to change from one deployment to the next. To test and examine the rendered template, without having to attempt a deployment, run,

```bash
helm install helm-ml-score-app --debug --dry-run
```

If you are happy with the results of the 'dry run', then execute the deployment and generate a release from the chart using,

```bash
helm install helm-ml-score-app --name test-ml-app
```

This will automatically print the status of the release, together with the name that Helm has ascribed to it (e.g. 'willing-yak') and the contents of `NOTES.txt` rendered to the terminal. To list all available Helm releases and their names use,

```bash
helm list
```

And to see the status of all their constituent components (e.g. pods, replication controllers, services, etc.) use, for example,

```bash
helm status test-ml-app
```

The ML scoring service can now be tested in exactly the same way as we have done previously (above). Once you have convinced yourself that it's working as expected, the release can be deleted using,

```bash
helm delete test-ml-app
```
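Note, that this is also where the templating starts to pay off - rolling out a changed parameter does not require editing any YAML. For example, to point an existing release at a new (hypothetical) image tag, one could run,

```bash
# upgrade the release in-place, re-rendering the templates with a new image
helm upgrade test-ml-app helm-ml-score-app --set app.image=alexioannides/test-ml-score-api:0.1.0
```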
## Using Seldon to Deploy the ML Model Scoring Service to Kubernetes

Seldon's core mission is to simplify the repeated deployment and management of complex ML prediction pipelines on top of Kubernetes. In this demonstration we are going to focus on the simplest possible example - i.e. the simple ML model scoring API we have already been using.

### Building an ML Component for Seldon

To deploy an ML component using Seldon, we need to create Seldon-compatible Docker images. We start by following [these guidelines](https://docs.seldon.io/projects/seldon-core/en/latest/python/python_wrapping_docker.html) for defining a Python class that wraps an ML model targeted for deployment with Seldon. This is contained within the `seldon-ml-score-component` directory, whose contents are similar to those in `py-flask-ml-score-api`,

```bash
seldon-ml-score-component/
 | Dockerfile
 | MLScore.py
 | Pipfile
 | Pipfile.lock
```

#### Building the Docker Image for use with Seldon

Seldon requires that the Docker image for the ML scoring service be structured in a particular way:

- the ML model has to be wrapped in a Python class with a `predict` method with a particular signature (or interface) - for example, in `MLScore.py` (deliberately named after the Python class contained within it) we have,

```python
class MLScore:
    """
    Model template. You can load your model parameters in __init__ from
    a location accessible at runtime
    """

    def __init__(self):
        """
        Load models and add any initialization parameters (these will
        be passed at runtime from the graph definition parameters
        defined in your seldondeployment kubernetes resource manifest).
        """
        print("Initializing")

    def predict(self, X, features_names):
        """
        Return a prediction.

        Parameters
        ----------
        X : array-like
        feature_names : array of feature names (optional)
        """
        print("Predict called - will run identity function")
        return X
```

- the `seldon-core` Python package must be installed (we use `pipenv` to manage dependencies as discussed above and in the Appendix below); and,
- the container starts by running the Seldon service using the `seldon-core-microservice` entry-point provided by the `seldon-core` package - both this and the point above can be seen in the `Dockerfile`,

```docker
FROM python:3.6-slim
COPY . /app
WORKDIR /app
RUN pip install pipenv
RUN pipenv install
EXPOSE 5000

# Define environment variable
ENV MODEL_NAME MLScore
ENV API_TYPE REST
ENV SERVICE_TYPE MODEL
ENV PERSISTENCE 0

CMD pipenv run seldon-core-microservice $MODEL_NAME $API_TYPE --service-type $SERVICE_TYPE --persistence $PERSISTENCE
```

For the precise details refer to the [official Seldon documentation](https://docs.seldon.io/projects/seldon-core/en/latest/python/index.html). Next, build this image,

```bash
docker build seldon-ml-score-component -t alexioannides/test-ml-score-seldon-api:latest
```

Before we push this image to our registry, we need to make sure that it's working as expected. Start the image on the local Docker daemon,

```bash
docker run --rm -p 5000:5000 -d alexioannides/test-ml-score-seldon-api:latest
```

And then send it a request (using a different request format to the ones we've used thus far),

```bash
curl -g http://localhost:5000/predict \
    --data-urlencode 'json={"data":{"names":["a","b"],"tensor":{"shape":[2,2],"values":[0,0,1,1]}}}'
```

If the response is as expected (i.e. it contains the same payload as the request), then push the image,

```bash
docker push alexioannides/test-ml-score-seldon-api:latest
```
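Note, that because we didn't name this container we will need its auto-generated ID to stop it once testing is complete - for example,

```bash
# find the ID of the running container
docker ps

# stop it, substituting the ID returned above (a hypothetical value is shown)
docker stop f1a3b2c4d5e6
```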
### Deploying an ML Component with Seldon Core

We now move on to deploying our Seldon-compatible ML component to a Kubernetes cluster and creating a fault-tolerant and scalable service from it. To achieve this, we will [deploy Seldon-Core using Helm charts](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html). We start by creating a namespace that will contain the `seldon-core-operator`, which provides the custom Kubernetes resources and controller required to deploy any ML model using Seldon,

```bash
kubectl create namespace seldon-core
```

Then we deploy Seldon-Core using Helm and the official Seldon Helm chart repository hosted at `https://storage.googleapis.com/seldon-charts`,

```bash
helm install seldon-core-operator \
    --name seldon-core \
    --repo https://storage.googleapis.com/seldon-charts \
    --set usageMetrics.enabled=false \
    --namespace seldon-core
```

Next, we deploy the Ambassador API gateway for Kubernetes, that will act as a single point of entry into our Kubernetes cluster and will be able to route requests to any ML model we have deployed using Seldon. We will create a dedicated namespace for the Ambassador deployment,

```bash
kubectl create namespace ambassador
```

And then deploy Ambassador using the most recent charts in the official Helm repository,

```bash
helm install stable/ambassador \
    --name ambassador \
    --set crds.keep=false \
    --namespace ambassador
```

If we now run `helm list --namespace seldon-core` we should see that Seldon-Core has been deployed and is waiting for Seldon ML components to be deployed. To deploy our Seldon ML model scoring service we create a separate namespace for it,

```bash
kubectl create namespace test-ml-seldon-app
```

And then configure and deploy another official Seldon Helm chart as follows,

```bash
helm install seldon-single-model \
    --name test-ml-seldon-app \
    --repo https://storage.googleapis.com/seldon-charts \
    --set model.image.name=alexioannides/test-ml-score-seldon-api:latest \
    --namespace test-ml-seldon-app
```

Note, that multiple ML models can now be deployed using Seldon by repeating the last two steps and they will all be automatically reachable via the same Ambassador API gateway, which we will now use to test our Seldon ML model scoring service.

### Testing the API via the Ambassador API Gateway

To test the Seldon-based ML model scoring service, we follow the same general approach as we did for our first-principles Kubernetes deployments above, but we will route our requests via the Ambassador API gateway. To find the IP address for the Ambassador service run,

```bash
kubectl -n ambassador get service ambassador
```

Which will be `localhost:80` if using Docker Desktop, or an IP address if running on GCP or Minikube (where you will need to remember to use `minikube service list` in the latter case).
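On a cloud cluster, the external IP can also be pulled out programmatically - a sketch, assuming the load balancer exposes an IP (on some platforms it is a hostname instead),

```bash
# extract the external IP of the Ambassador load balancer into a shell variable
AMBASSADOR_IP=$(kubectl -n ambassador get service ambassador \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $AMBASSADOR_IP
```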
Now test the prediction end-point - for example,

```bash
curl http://35.246.28.247:80/seldon/test-ml-seldon-app/test-ml-seldon-app/api/v0.1/predictions \
    --request POST \
    --header "Content-Type: application/json" \
    --data '{"data":{"names":["a","b"],"tensor":{"shape":[2,2],"values":[0,0,1,1]}}}'
```

If you want to understand the full logic behind the routing see the [Seldon documentation](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/serving.html), but the URL is essentially assembled using,

```html
http://<ambassador-url>/seldon/<namespace>/<deployment-name>/api/v0.1/predictions
```

If your request has been successful, then you should see a response along the lines of,

```json
{
  "meta": {
    "puid": "hsu0j9c39a4avmeonhj2ugllh9",
    "tags": {
    },
    "routing": {
    },
    "requestPath": {
      "classifier": "alexioannides/test-ml-score-seldon-api:latest"
    },
    "metrics": []
  },
  "data": {
    "names": ["t:0", "t:1"],
    "tensor": {
      "shape": [2, 2],
      "values": [0.0, 0.0, 1.0, 1.0]
    }
  }
}
```

## Tear Down

To delete a single Seldon ML model and its namespace, deployed using the steps above, run,

```bash
helm delete test-ml-seldon-app --purge &&
    kubectl delete namespace test-ml-seldon-app
```

Follow the same pattern to remove the Seldon Core Operator and Ambassador,

```bash
helm delete seldon-core --purge && kubectl delete namespace seldon-core
helm delete ambassador --purge && kubectl delete namespace ambassador
```

If there is a GCP cluster that needs to be killed run,

```bash
gcloud container clusters delete k8s-test-cluster
```

And likewise if working with Minikube,

```bash
minikube stop
minikube delete
```

If running on Docker Desktop, navigate to `Preferences -> Reset` to reset the cluster.

## Where to go from Here

The following list of resources will help you dive deeply into the subjects we skimmed-over above:

- the full set of functionality provided by [Seldon](https://www.seldon.io/open-source/);
- running multi-stage containerised workflows (e.g. for data engineering and model training) using [Argo Workflows](https://argoproj.github.io/argo);
- the excellent '_Kubernetes in Action_' by Marko Lukša, [available from Manning Publications](https://www.manning.com/books/kubernetes-in-action);
- '_Docker in Action_' by Jeff Nickoloff and Stephen Kuenzli, [also available from Manning Publications](https://www.manning.com/books/docker-in-action-second-edition); and,
- '_Flask Web Development_' by Miguel Grinberg, [available from O'Reilly](http://shop.oreilly.com/product/0636920089056.do).

This work was initially committed in 2018 and has since formed the basis of [Bodywork](https://www.bodyworkml.com) - an open-source MLOps tool for deploying machine learning projects developed in Python, to Kubernetes.

## Appendix - Using Pipenv for Managing Python Package Dependencies

We use [pipenv](https://docs.pipenv.org) for managing project dependencies and Python environments (i.e. virtual environments). All of the direct package dependencies required to run the code (e.g. Flask or Seldon-Core), as well as any packages that could have been used during development (e.g. flake8 for code linting and IPython for interactive console sessions), are described in the `Pipfile`. Their **precise** downstream dependencies are described in `Pipfile.lock`.
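For example, the relationship between the two files can be seen when adding a new development-only dependency (assuming Pipenv is already installed - see below),

```bash
# add flake8 as a development-only dependency, recorded in the Pipfile
pipenv install --dev flake8

# regenerate Pipfile.lock so that the full dependency graph is pinned
pipenv lock
```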
### Installing Pipenv

To get started with Pipenv, first of all download it - assuming that there is a global version of Python available on your system and on the PATH, then this can be achieved by running the following command,

```bash
pip3 install pipenv
```

Pipenv is also available to install from many non-Python package managers. For example, on OS X it can be installed using the [Homebrew](https://brew.sh) package manager, with the following terminal command,

```bash
brew install pipenv
```

For more information, including advanced configuration options, see the [official pipenv documentation](https://docs.pipenv.org).

### Installing Project Dependencies

If you want to experiment with the Python code in the `py-flask-ml-score-api` or `seldon-ml-score-component` directories, then make sure that you're in the appropriate directory and then run,

```bash
pipenv install
```

This will install all of the direct project dependencies.

### Running Python, IPython and JupyterLab from the Project's Virtual Environment

In order to continue development in a Python environment that precisely mimics the one the project was initially developed with, use Pipenv from the command line as follows,

```bash
pipenv run python3
```

The `python3` command could just as well be `seldon-core-microservice` or any other entry-point provided by the `seldon-core` package - for example, in the `Dockerfile` for the `seldon-ml-score-component` we start the Seldon-based ML model scoring service using,

```bash
pipenv run seldon-core-microservice ...
```

### Pipenv Shells

Prepending `pipenv run` to every command you want to execute within the context of your Pipenv-managed virtual environment can get very tedious. This can be avoided by entering into a Pipenv-managed shell,

```bash
pipenv shell
```

which is equivalent to 'activating' the virtual environment. Any command will now be executed within the virtual environment. Use `exit` to leave the shell session.
--------------------------------------------------------------------------------
/VERSION:
--------------------------------------------------------------------------------
1 | v2.1 - 18th August 2019
--------------------------------------------------------------------------------
/helm-ml-score-app/.helmignore:
--------------------------------------------------------------------------------
1 | # Patterns to ignore when building packages.
2 | # This supports shell glob matching, relative path matching, and
3 | # negation (prefixed with !). Only one pattern per line.
4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | .vscode/ 23 | -------------------------------------------------------------------------------- /helm-ml-score-app/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | name: helm-ml-score-app 3 | version: 0.0.1 4 | description: ML model scoring service. 5 | appVersion: "0.0.1" 6 | -------------------------------------------------------------------------------- /helm-ml-score-app/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | Thank you for installing {{ .Chart.Name }}. 2 | 3 | Your release is named {{ .Release.Name }}. 4 | 5 | To learn more about the release, try: 6 | 7 | $ helm status {{ .Release.Name }} 8 | $ helm get {{ .Release.Name }} 9 | -------------------------------------------------------------------------------- /helm-ml-score-app/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | labels: 5 | app: {{ .Values.app.name }} 6 | env: {{ .Values.app.env }} 7 | name: {{ .Values.app.name }} 8 | namespace: {{ .Values.app.namespace }} 9 | spec: 10 | replicas: {{ .Values.replicas }} 11 | selector: 12 | matchLabels: 13 | app: {{ .Values.app.name }} 14 | template: 15 | metadata: 16 | labels: 17 | app: {{ .Values.app.name }} 18 | env: {{ .Values.app.env }} 19 | spec: 20 | containers: 21 | - image: {{ .Values.app.image }} 22 | name: {{ .Values.app.name }} 23 | ports: 24 | - containerPort: {{ .Values.containerPort }} 25 | protocol: TCP 26 | -------------------------------------------------------------------------------- /helm-ml-score-app/templates/namespace.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | metadata: 4 | name: {{ .Values.app.namespace }} 5 | -------------------------------------------------------------------------------- /helm-ml-score-app/templates/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: {{ .Values.app.name }}-lb 5 | labels: 6 | app: {{ .Values.app.name }} 7 | namespace: {{ .Values.app.namespace }} 8 | spec: 9 | type: LoadBalancer 10 | ports: 11 | - port: {{ .Values.containerPort }} 12 | targetPort: {{ .Values.targetPort }} 13 | selector: 14 | app: {{ .Values.app.name }} 15 | -------------------------------------------------------------------------------- /helm-ml-score-app/values.yaml: -------------------------------------------------------------------------------- 1 | app: 2 | name: test-ml-score 3 | env: prod 4 | namespace: test-ml-app 5 | image: alexioannides/test-ml-score-api 6 | 7 | replicas: 1 8 | containerPort: 5000 9 | targetPort: 5000 10 | -------------------------------------------------------------------------------- /py-flask-ml-score-api/.dockerignore: -------------------------------------------------------------------------------- 1 | __pycache__ 2 | .DS_Store -------------------------------------------------------------------------------- /py-flask-ml-score-api/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM python:3.6-slim 2 | 
WORKDIR /usr/src/app 3 | COPY . . 4 | RUN pip install pipenv 5 | RUN pipenv install 6 | EXPOSE 5000 7 | CMD ["pipenv", "run", "python", "api.py"] -------------------------------------------------------------------------------- /py-flask-ml-score-api/Pipfile: -------------------------------------------------------------------------------- 1 | [[source]] 2 | url = "https://pypi.org/simple" 3 | verify_ssl = true 4 | name = "pypi" 5 | 6 | [packages] 7 | flask = "*" 8 | 9 | [requires] 10 | python_version = "3.6" 11 | -------------------------------------------------------------------------------- /py-flask-ml-score-api/Pipfile.lock: -------------------------------------------------------------------------------- 1 | { 2 | "_meta": { 3 | "hash": { 4 | "sha256": "8ec50e78e90ad609e540d41d1ed90f3fb880ffbdf6049b0a6b2f1a00158a3288" 5 | }, 6 | "pipfile-spec": 6, 7 | "requires": { 8 | "python_version": "3.6" 9 | }, 10 | "sources": [ 11 | { 12 | "name": "pypi", 13 | "url": "https://pypi.org/simple", 14 | "verify_ssl": true 15 | } 16 | ] 17 | }, 18 | "default": { 19 | "click": { 20 | "hashes": [ 21 | "sha256:2335065e6395b9e67ca716de5f7526736bfa6ceead690adf616d925bdc622b13", 22 | "sha256:5b94b49521f6456670fdb30cd82a4eca9412788a93fa6dd6df72c94d5a8ff2d7" 23 | ], 24 | "version": "==7.0" 25 | }, 26 | "flask": { 27 | "hashes": [ 28 | "sha256:2271c0070dbcb5275fad4a82e29f23ab92682dc45f9dfbc22c02ba9b9322ce48", 29 | "sha256:a080b744b7e345ccfcbc77954861cb05b3c63786e93f2b3875e0913d44b43f05" 30 | ], 31 | "index": "pypi", 32 | "version": "==1.0.2" 33 | }, 34 | "itsdangerous": { 35 | "hashes": [ 36 | "sha256:321b033d07f2a4136d3ec762eac9f16a10ccd60f53c0c91af90217ace7ba1f19", 37 | "sha256:b12271b2047cb23eeb98c8b5622e2e5c5e9abd9784a153e9d8ef9cb4dd09d749" 38 | ], 39 | "version": "==1.1.0" 40 | }, 41 | "jinja2": { 42 | "hashes": [ 43 | "sha256:74c935a1b8bb9a3947c50a54766a969d4846290e1e788ea44c1392163723c3bd", 44 | "sha256:f84be1bb0040caca4cea721fcbbbbd61f9be9464ca236387158b0feea01914a4" 45 | ], 46 | "version": "==2.10" 47 | }, 48 | "markupsafe": { 49 | "hashes": [ 50 | "sha256:048ef924c1623740e70204aa7143ec592504045ae4429b59c30054cb31e3c432", 51 | "sha256:130f844e7f5bdd8e9f3f42e7102ef1d49b2e6fdf0d7526df3f87281a532d8c8b", 52 | "sha256:19f637c2ac5ae9da8bfd98cef74d64b7e1bb8a63038a3505cd182c3fac5eb4d9", 53 | "sha256:1b8a7a87ad1b92bd887568ce54b23565f3fd7018c4180136e1cf412b405a47af", 54 | "sha256:1c25694ca680b6919de53a4bb3bdd0602beafc63ff001fea2f2fc16ec3a11834", 55 | "sha256:1f19ef5d3908110e1e891deefb5586aae1b49a7440db952454b4e281b41620cd", 56 | "sha256:1fa6058938190ebe8290e5cae6c351e14e7bb44505c4a7624555ce57fbbeba0d", 57 | "sha256:31cbb1359e8c25f9f48e156e59e2eaad51cd5242c05ed18a8de6dbe85184e4b7", 58 | "sha256:3e835d8841ae7863f64e40e19477f7eb398674da6a47f09871673742531e6f4b", 59 | "sha256:4e97332c9ce444b0c2c38dd22ddc61c743eb208d916e4265a2a3b575bdccb1d3", 60 | "sha256:525396ee324ee2da82919f2ee9c9e73b012f23e7640131dd1b53a90206a0f09c", 61 | "sha256:52b07fbc32032c21ad4ab060fec137b76eb804c4b9a1c7c7dc562549306afad2", 62 | "sha256:52ccb45e77a1085ec5461cde794e1aa037df79f473cbc69b974e73940655c8d7", 63 | "sha256:5c3fbebd7de20ce93103cb3183b47671f2885307df4a17a0ad56a1dd51273d36", 64 | "sha256:5e5851969aea17660e55f6a3be00037a25b96a9b44d2083651812c99d53b14d1", 65 | "sha256:5edfa27b2d3eefa2210fb2f5d539fbed81722b49f083b2c6566455eb7422fd7e", 66 | "sha256:7d263e5770efddf465a9e31b78362d84d015cc894ca2c131901a4445eaa61ee1", 67 | "sha256:83381342bfc22b3c8c06f2dd93a505413888694302de25add756254beee8449c", 68 | 
"sha256:857eebb2c1dc60e4219ec8e98dfa19553dae33608237e107db9c6078b1167856", 69 | "sha256:98e439297f78fca3a6169fd330fbe88d78b3bb72f967ad9961bcac0d7fdd1550", 70 | "sha256:bf54103892a83c64db58125b3f2a43df6d2cb2d28889f14c78519394feb41492", 71 | "sha256:d9ac82be533394d341b41d78aca7ed0e0f4ba5a2231602e2f05aa87f25c51672", 72 | "sha256:e982fe07ede9fada6ff6705af70514a52beb1b2c3d25d4e873e82114cf3c5401", 73 | "sha256:edce2ea7f3dfc981c4ddc97add8a61381d9642dc3273737e756517cc03e84dd6", 74 | "sha256:efdc45ef1afc238db84cb4963aa689c0408912a0239b0721cb172b4016eb31d6", 75 | "sha256:f137c02498f8b935892d5c0172560d7ab54bc45039de8805075e19079c639a9c", 76 | "sha256:f82e347a72f955b7017a39708a3667f106e6ad4d10b25f237396a7115d8ed5fd", 77 | "sha256:fb7c206e01ad85ce57feeaaa0bf784b97fa3cad0d4a5737bc5295785f5c613a1" 78 | ], 79 | "version": "==1.1.0" 80 | }, 81 | "werkzeug": { 82 | "hashes": [ 83 | "sha256:c3fd7a7d41976d9f44db327260e263132466836cef6f91512889ed60ad26557c", 84 | "sha256:d5da73735293558eb1651ee2fddc4d0dedcfa06538b8813a2e20011583c9e49b" 85 | ], 86 | "version": "==0.14.1" 87 | } 88 | }, 89 | "develop": {} 90 | } 91 | -------------------------------------------------------------------------------- /py-flask-ml-score-api/api.py: -------------------------------------------------------------------------------- 1 | """ 2 | api.py 3 | ~~~~~~ 4 | 5 | This module defines a simple REST API for an imaginary Machine Learning 6 | (ML) model. It will be used for testing Docker and Kubernetes. 7 | 8 | This can be tested locally on the command line, using `python api.py` 9 | to start the service and then in another terminal window using, 10 | 11 | ``` 12 | curl http://localhost:5000/score \ 13 | --request POST \ 14 | --header "Content-Type: application/json" \ 15 | --data '{"X": [1, 2]}' 16 | ``` 17 | 18 | To test the API. 19 | """ 20 | 21 | from typing import Iterable 22 | 23 | from flask import Flask, jsonify, make_response, request, Response 24 | 25 | app = Flask(__name__) 26 | 27 | 28 | @app.route('/score', methods=['POST']) 29 | def score() -> Response: 30 | """Score data using an imaginary machine learning model. 31 | 32 | This API endpoint expects a JSON payload with a field called `X` 33 | containing an iterable sequence of features to send to the model. 34 | This data is parsed into Python dict and made available via 35 | `request.json` 36 | 37 | If `X` cannot be found in the parsed JSON data, then an exception 38 | will be raised. Otherwise, it will return a JSON payload with the 39 | `score` field containing the model's prediction. 
40 |     """
41 | 
42 |     try:
43 |         features = request.json['X']
44 |         prediction = model_predict(features)
45 |         return make_response(jsonify({'score': prediction}))
46 |     except KeyError:
47 |         raise RuntimeError('"X" cannot be found in JSON payload.')
48 | 
49 | 
50 | def model_predict(x: Iterable[float]) -> Iterable[float]:
51 |     """Dummy prediction function."""
52 |     return x
53 | 
54 | 
55 | if __name__ == '__main__':
56 |     app.run(host='0.0.0.0', port=5000)
57 | 
--------------------------------------------------------------------------------
/py-flask-ml-score-api/py-flask-ml-score.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Namespace
3 | metadata:
4 |   name: test-ml-app
5 | ---
6 | apiVersion: apps/v1
7 | kind: Deployment
8 | metadata:
9 |   labels:
10 |     app: test-ml-score-api
11 |   name: test-ml-score-api
12 |   namespace: test-ml-app
13 | spec:
14 |   replicas: 1
15 |   selector:
16 |     matchLabels:
17 |       app: test-ml-score-api
18 |   template:
19 |     metadata:
20 |       labels:
21 |         app: test-ml-score-api
22 |     spec:
23 |       containers:
24 |       - image: alexioannides/test-ml-score-api:latest
25 |         name: test-ml-score-api
26 |         ports:
27 |         - containerPort: 5000
28 | ---
29 | apiVersion: v1
30 | kind: Service
31 | metadata:
32 |   name: test-ml-score-lb
33 |   labels:
34 |     app: test-ml-score-api
35 |   namespace: test-ml-app
36 | spec:
37 |   type: LoadBalancer
38 |   ports:
39 |   - port: 5000
40 |     targetPort: 5000
41 |   selector:
42 |     app: test-ml-score-api
--------------------------------------------------------------------------------
/seldon-ml-score-component/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM python:3.6-slim
2 | COPY . /app
3 | WORKDIR /app
4 | RUN pip install pipenv
5 | RUN pipenv install
6 | EXPOSE 5000
7 | 
8 | # Define environment variable
9 | ENV MODEL_NAME MLScore
10 | ENV API_TYPE REST
11 | ENV SERVICE_TYPE MODEL
12 | ENV PERSISTENCE 0
13 | 
14 | CMD pipenv run seldon-core-microservice $MODEL_NAME $API_TYPE --service-type $SERVICE_TYPE --persistence $PERSISTENCE
15 | 
--------------------------------------------------------------------------------
/seldon-ml-score-component/MLScore.py:
--------------------------------------------------------------------------------
1 | """
2 | This module contains a class that conforms to the Seldon core interface
3 | for ML models.
4 | """
5 | 
6 | 
7 | class MLScore:
8 |     """
9 |     Model template. You can load your model parameters in __init__ from
10 |     a location accessible at runtime
11 |     """
12 | 
13 |     def __init__(self):
14 |         """
15 |         Load models and add any initialization parameters (these will
16 |         be passed at runtime from the graph definition parameters
17 |         defined in your seldondeployment kubernetes resource manifest).
18 |         """
19 |         print("Initializing")
20 | 
21 |     def predict(self, X, features_names):
22 |         """
23 |         Return a prediction.
24 | 25 | Parameters 26 | ---------- 27 | X : array-like 28 | features_names : array of feature names (optional) 29 | """ 30 | print("Predict called - will run identity function") 31 | return X 32 | -------------------------------------------------------------------------------- /seldon-ml-score-component/Pipfile: -------------------------------------------------------------------------------- 1 | [[source]] 2 | name = "pypi" 3 | url = "https://pypi.org/simple" 4 | verify_ssl = true 5 | 6 | [dev-packages] 7 | 8 | [packages] 9 | seldon-core = "*" 10 | 11 | [requires] 12 | python_version = "3.6" 13 | -------------------------------------------------------------------------------- /seldon-ml-score-component/Pipfile.lock: -------------------------------------------------------------------------------- 1 | { 2 | "_meta": { 3 | "hash": { 4 | "sha256": "f000aaac8480433d8a28b83d5bc76f832b2a2921b99a106e80823e7d635aedd8" 5 | }, 6 | "pipfile-spec": 6, 7 | "requires": { 8 | "python_version": "3.6" 9 | }, 10 | "sources": [ 11 | { 12 | "name": "pypi", 13 | "url": "https://pypi.org/simple", 14 | "verify_ssl": true 15 | } 16 | ] 17 | }, 18 | "default": { 19 | "absl-py": { 20 | "hashes": [ 21 | "sha256:87519e3b91a3d573664c6e2ee33df582bb68dca6642ae3cf3a4361b1c0a4e9d6" 22 | ], 23 | "version": "==0.6.1" 24 | }, 25 | "astor": { 26 | "hashes": [ 27 | "sha256:95c30d87a6c2cf89aa628b87398466840f0ad8652f88eb173125a6df8533fb8d", 28 | "sha256:fb503b9e2fdd05609fbf557b916b4a7824171203701660f0c55bbf5a7a68713e" 29 | ], 30 | "version": "==0.7.1" 31 | }, 32 | "certifi": { 33 | "hashes": [ 34 | "sha256:47f9c83ef4c0c621eaef743f133f09fa8a74a9b75f037e8624f83bd1b6626cb7", 35 | "sha256:993f830721089fef441cdfeb4b2c8c9df86f0c63239f06bd025a76a7daddb033" 36 | ], 37 | "version": "==2018.11.29" 38 | }, 39 | "chardet": { 40 | "hashes": [ 41 | "sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae", 42 | "sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691" 43 | ], 44 | "version": "==3.0.4" 45 | }, 46 | "click": { 47 | "hashes": [ 48 | "sha256:2335065e6395b9e67ca716de5f7526736bfa6ceead690adf616d925bdc622b13", 49 | "sha256:5b94b49521f6456670fdb30cd82a4eca9412788a93fa6dd6df72c94d5a8ff2d7" 50 | ], 51 | "version": "==7.0" 52 | }, 53 | "flask": { 54 | "hashes": [ 55 | "sha256:2271c0070dbcb5275fad4a82e29f23ab92682dc45f9dfbc22c02ba9b9322ce48", 56 | "sha256:a080b744b7e345ccfcbc77954861cb05b3c63786e93f2b3875e0913d44b43f05" 57 | ], 58 | "version": "==1.0.2" 59 | }, 60 | "flask-cors": { 61 | "hashes": [ 62 | "sha256:7ad56ee3b90d4955148fc25a2ecaa1124fc84298471e266a7fea59aeac4405a5", 63 | "sha256:7e90bf225fdf163d11b84b59fb17594d0580a16b97ab4e1146b1fb2737c1cfec" 64 | ], 65 | "version": "==3.0.7" 66 | }, 67 | "flatbuffers": { 68 | "hashes": [ 69 | "sha256:1d367990cbf0ab3b7e9458b81793aa4875082bae2ad497911410f5d1f9234caa", 70 | "sha256:91800165bedd4efbb4d3a60a7007d2124514faf7f6d85919f7023b74b503445d" 71 | ], 72 | "version": "==1.10" 73 | }, 74 | "gast": { 75 | "hashes": [ 76 | "sha256:fe939df4583692f0512161ec1c880e0a10e71e6a232da045ab8edd3756fbadf0" 77 | ], 78 | "version": "==0.2.2" 79 | }, 80 | "grpcio": { 81 | "hashes": [ 82 | "sha256:082bc981d6aabfdb26bfdeab63f5626df3d2c5ac3a9ae8533dfa5ce73432f4fe", 83 | "sha256:0e8ff79b12b8b07198dd847974fc32a4ed8c0d52d5224fabb9d28bf4c2e3f4a9", 84 | "sha256:11c8026a3d35e8b9ad6cda7bf4f5e51b9b82e7f29a590ad194f63957657fa808", 85 | "sha256:145e82aec0a643d7569499b1aa0d5167c99d9d26a2b8c4e4b3f5cd51b99a8cdc", 86 | "sha256:1a820ebf0c924cbfa299cb59e4bc9582a24abfec89d9a36c281d78fa941115ae", 
87 | "sha256:284bee4657c4dd7d48835128b31975e8b0ea3a2eeb084c5d46de215b31d1f8f5", 88 | "sha256:2a8b6b569fd23f4d9f2c8201fd8995519dfbddc60ceeffa8bf5bea2a8e9cb72c", 89 | "sha256:38b93080df498656aea1dbab632e32013c580c2d00bd8c30d0f1d2c9513b0469", 90 | "sha256:4837ad8fdcf99df0e89214ba42001469cab807851f30481db41fd84fc9358ce7", 91 | "sha256:5447336edd6fea8ab35eca34ff5289e369e22c375bc2ac8156a419fa467949ac", 92 | "sha256:57705e31f76db45b51f3a98bcfd362c89d58e99f846337a25fed957b4d43ae4f", 93 | "sha256:612e742c748df51c921a7eefd76195d76467e3cc00e084e089af5b111d8210b7", 94 | "sha256:62c777f801aee22100d8ea5fa057020e37b65541a8000091879a8560b089da9d", 95 | "sha256:8317d351ab1e80cf20676ef3d4929d3e760df10e6e5c289283c36c4c92ca61f7", 96 | "sha256:8703efaf03396123426fdea08b369712df1248fa5fdfdbee3f87a410f52e9bac", 97 | "sha256:8b72721e64becd4a3e9580f12dbdf618d41e80d3ae7585dc8a921dbf76c979bb", 98 | "sha256:8bb7dbe20fe883ee22a6cb2c1317ea228b75a3ef60f3749584ee2634192e3452", 99 | "sha256:9a7ed6160e6c14058b4676aac68a8bf268f171f4c371ff0a0c0ab81b90803f70", 100 | "sha256:a46c34768f292fa0d97e929591e51ec20dc857321d83b198de1dad9c8183e8cb", 101 | "sha256:a7f21a7b48fcd9f51029419b22a9bfea097973cca5d1529b8578f1d2919e6b23", 102 | "sha256:adfee9c9099cae92c2a4948bc95cc2cc3185cdf59b371e056b8dd19ed434247e", 103 | "sha256:b3bbeadc6b99e4a42bf23803f5e9b292f23f3e37cc7f75a9f5efbfa9b812abc1", 104 | "sha256:b51d49d89758ea45841130c5c7be79c68612d8834bd600994b8a2672c59dc9b9", 105 | "sha256:cbb95a586fdf3e795eba28b4acc75fdfdb59a14df62e747fe8bc4572ef37b647", 106 | "sha256:cdea5595b30f027e6603887b71f343ca5b209da74b910fe04fc25e1dfe6df263", 107 | "sha256:d64350156dc4b21914409e0c93ffeeb4ceba193716fb1ae570df699383c4cd63", 108 | "sha256:e10bbef59706a90672b295c0f82dcb6329d829643b8dd7c3bd120f89a093d740", 109 | "sha256:e68e6afbbae2cbfadaabd33ee40314963cd83500feff733c07edb172674a7f8b", 110 | "sha256:f0c0e48c255a63fec78be2f240ff5a3bd4291b1f83976895f6ee0085362568d0", 111 | "sha256:f7bb6617bae5e7333e66ec1e7aac1fe419b59e0e34a8717f97e1ce2791ab9d3a", 112 | "sha256:fa6e14bce7ad5de2363abb644191489ddfffcdb2751337251f7ef962ab7e3293", 113 | "sha256:fd6774bbb6c717f725b39394757445ead4f69c471118364933aadb81a4f16961" 114 | ], 115 | "version": "==1.17.1" 116 | }, 117 | "h5py": { 118 | "hashes": [ 119 | "sha256:05750b91640273c69989c657eaac34b091abdd75efc8c4824c82aaf898a2da0a", 120 | "sha256:082a27208aa3a2286e7272e998e7e225b2a7d4b7821bd840aebf96d50977abbb", 121 | "sha256:08e2e8297195f9e813e894b6c63f79372582787795bba2014a2db6a2de95f713", 122 | "sha256:0dd2adeb2e9de5081eb8dcec88874e7fd35dae9a21557be3a55a3c7d491842a4", 123 | "sha256:0f94de7a10562b991967a66bbe6dda9808e18088676834c0a4dcec3fdd3bcc6f", 124 | "sha256:106e42e2e01e486a3d32eeb9ba0e3a7f65c12fa8998d63625fa41fb8bdc44cdb", 125 | "sha256:1606c66015f04719c41a9863c156fc0e6b992150de21c067444bcb82e7d75579", 126 | "sha256:1854c4beff9961e477e133143c5e5e355dac0b3ebf19c52cf7cc1b1ef757703c", 127 | "sha256:1e9fb6f1746500ea91a00193ce2361803c70c6b13f10aae9a33ad7b5bd28e800", 128 | "sha256:2cca17e80ddb151894333377675db90cd0279fa454776e0a4f74308376afd050", 129 | "sha256:30e365e8408759db3778c361f1e4e0fe8e98a875185ae46c795a85e9bafb9cdf", 130 | "sha256:3206bac900e16eda81687d787086f4ffd4f3854980d798e191a9868a6510c3ae", 131 | "sha256:3c23d72058647cee19b30452acc7895621e2de0a0bd5b8a1e34204b9ea9ed43c", 132 | "sha256:407b5f911a83daa285bbf1ef78a9909ee5957f257d3524b8606be37e8643c5f0", 133 | "sha256:4162953714a9212d373ac953c10e3329f1e830d3c7473f2a2e4f25dd6241eef0", 134 | "sha256:5fc7aba72a51b2c80605eba1c50dbf84224dcd206279d30a75c154e5652e1fe4", 135 | 
"sha256:713ac19307e11de4d9833af0c4bd6778bde0a3d967cafd2f0f347223711c1e31", 136 | "sha256:71b946d80ef3c3f12db157d7778b1fe74a517ca85e94809358b15580983c2ce2", 137 | "sha256:8cc4aed71e20d87e0a6f02094d718a95252f11f8ed143bc112d22167f08d4040", 138 | "sha256:9d41ca62daf36d6b6515ab8765e4c8c4388ee18e2a665701fef2b41563821002", 139 | "sha256:a744e13b000f234cd5a5b2a1f95816b819027c57f385da54ad2b7da1adace2f3", 140 | "sha256:b087ee01396c4b34e9dc41e3a6a0442158206d383c19c7d0396d52067b17c1cb", 141 | "sha256:b0f03af381d33306ce67d18275b61acb4ca111ced645381387a02c8a5ee1b796", 142 | "sha256:b9e4b8dfd587365bdd719ae178fa1b6c1231f81280b1375eef8626dfd8761bf3", 143 | "sha256:c5dd4ec75985b99166c045909e10f0534704d102848b1d9f0992720e908928e7", 144 | "sha256:d2b82f23cd862a9d05108fe99967e9edfa95c136f532a71cb3d28dc252771f50", 145 | "sha256:e58a25764472af07b7e1c4b10b0179c8ea726446c7141076286e41891bf3a563", 146 | "sha256:f3b49107fbfc77333fc2b1ef4d5de2abcd57e7ea3a1482455229494cf2da56ce" 147 | ], 148 | "version": "==2.9.0" 149 | }, 150 | "idna": { 151 | "hashes": [ 152 | "sha256:c357b3f628cf53ae2c4c05627ecc484553142ca23264e593d327bcde5e9c3407", 153 | "sha256:ea8b7f6188e6fa117537c3df7da9fc686d485087abf6ac197f9c46432f7e4a3c" 154 | ], 155 | "version": "==2.8" 156 | }, 157 | "itsdangerous": { 158 | "hashes": [ 159 | "sha256:321b033d07f2a4136d3ec762eac9f16a10ccd60f53c0c91af90217ace7ba1f19", 160 | "sha256:b12271b2047cb23eeb98c8b5622e2e5c5e9abd9784a153e9d8ef9cb4dd09d749" 161 | ], 162 | "version": "==1.1.0" 163 | }, 164 | "jinja2": { 165 | "hashes": [ 166 | "sha256:74c935a1b8bb9a3947c50a54766a969d4846290e1e788ea44c1392163723c3bd", 167 | "sha256:f84be1bb0040caca4cea721fcbbbbd61f9be9464ca236387158b0feea01914a4" 168 | ], 169 | "version": "==2.10" 170 | }, 171 | "keras-applications": { 172 | "hashes": [ 173 | "sha256:721dda4fa4e043e5bbd6f52a2996885c4639a7130ae478059b3798d0706f5ae7", 174 | "sha256:a03af60ddc9c5afdae4d5c9a8dd4ca857550e0b793733a5072e0725829b87017" 175 | ], 176 | "version": "==1.0.6" 177 | }, 178 | "keras-preprocessing": { 179 | "hashes": [ 180 | "sha256:90d04c1750bccceef88ac09475c291b4b5f6aa1eaf0603167061b1aa8b043c61", 181 | "sha256:ef2e482c4336fcf7180244d06f4374939099daa3183816e82aee7755af35b754" 182 | ], 183 | "version": "==1.0.5" 184 | }, 185 | "markdown": { 186 | "hashes": [ 187 | "sha256:c00429bd503a47ec88d5e30a751e147dcb4c6889663cd3e2ba0afe858e009baa", 188 | "sha256:d02e0f9b04c500cde6637c11ad7c72671f359b87b9fe924b2383649d8841db7c" 189 | ], 190 | "version": "==3.0.1" 191 | }, 192 | "markupsafe": { 193 | "hashes": [ 194 | "sha256:048ef924c1623740e70204aa7143ec592504045ae4429b59c30054cb31e3c432", 195 | "sha256:130f844e7f5bdd8e9f3f42e7102ef1d49b2e6fdf0d7526df3f87281a532d8c8b", 196 | "sha256:19f637c2ac5ae9da8bfd98cef74d64b7e1bb8a63038a3505cd182c3fac5eb4d9", 197 | "sha256:1b8a7a87ad1b92bd887568ce54b23565f3fd7018c4180136e1cf412b405a47af", 198 | "sha256:1c25694ca680b6919de53a4bb3bdd0602beafc63ff001fea2f2fc16ec3a11834", 199 | "sha256:1f19ef5d3908110e1e891deefb5586aae1b49a7440db952454b4e281b41620cd", 200 | "sha256:1fa6058938190ebe8290e5cae6c351e14e7bb44505c4a7624555ce57fbbeba0d", 201 | "sha256:31cbb1359e8c25f9f48e156e59e2eaad51cd5242c05ed18a8de6dbe85184e4b7", 202 | "sha256:3e835d8841ae7863f64e40e19477f7eb398674da6a47f09871673742531e6f4b", 203 | "sha256:4e97332c9ce444b0c2c38dd22ddc61c743eb208d916e4265a2a3b575bdccb1d3", 204 | "sha256:525396ee324ee2da82919f2ee9c9e73b012f23e7640131dd1b53a90206a0f09c", 205 | "sha256:52b07fbc32032c21ad4ab060fec137b76eb804c4b9a1c7c7dc562549306afad2", 206 | 
"sha256:52ccb45e77a1085ec5461cde794e1aa037df79f473cbc69b974e73940655c8d7", 207 | "sha256:5c3fbebd7de20ce93103cb3183b47671f2885307df4a17a0ad56a1dd51273d36", 208 | "sha256:5e5851969aea17660e55f6a3be00037a25b96a9b44d2083651812c99d53b14d1", 209 | "sha256:5edfa27b2d3eefa2210fb2f5d539fbed81722b49f083b2c6566455eb7422fd7e", 210 | "sha256:7d263e5770efddf465a9e31b78362d84d015cc894ca2c131901a4445eaa61ee1", 211 | "sha256:83381342bfc22b3c8c06f2dd93a505413888694302de25add756254beee8449c", 212 | "sha256:857eebb2c1dc60e4219ec8e98dfa19553dae33608237e107db9c6078b1167856", 213 | "sha256:98e439297f78fca3a6169fd330fbe88d78b3bb72f967ad9961bcac0d7fdd1550", 214 | "sha256:bf54103892a83c64db58125b3f2a43df6d2cb2d28889f14c78519394feb41492", 215 | "sha256:d9ac82be533394d341b41d78aca7ed0e0f4ba5a2231602e2f05aa87f25c51672", 216 | "sha256:e982fe07ede9fada6ff6705af70514a52beb1b2c3d25d4e873e82114cf3c5401", 217 | "sha256:edce2ea7f3dfc981c4ddc97add8a61381d9642dc3273737e756517cc03e84dd6", 218 | "sha256:efdc45ef1afc238db84cb4963aa689c0408912a0239b0721cb172b4016eb31d6", 219 | "sha256:f137c02498f8b935892d5c0172560d7ab54bc45039de8805075e19079c639a9c", 220 | "sha256:f82e347a72f955b7017a39708a3667f106e6ad4d10b25f237396a7115d8ed5fd", 221 | "sha256:fb7c206e01ad85ce57feeaaa0bf784b97fa3cad0d4a5737bc5295785f5c613a1" 222 | ], 223 | "version": "==1.1.0" 224 | }, 225 | "numpy": { 226 | "hashes": [ 227 | "sha256:00a458d6821b1e87be873f2126d5646b901047a7480e8ae9773ecf214f0e19f3", 228 | "sha256:0470c5dc32212a08ebc2405f32e8ceb9a5b1c8ac61a2daf9835ec0856a220495", 229 | "sha256:24a9c287a4a1c427c2d45bf7c4fc6180c52a08fa0990d4c94e4c86a9b1e23ba5", 230 | "sha256:25600e8901012180a1b7cd1ac3e27e7793586ecd432383191929ac2edf37ff5d", 231 | "sha256:2d279bd99329e72c30937bdef82b6dc7779c7607c5a379bab1bf76be1f4c1422", 232 | "sha256:32af2bcf4bb7631dac19736a6e092ec9715e770dcaa1f85fcd99dec5040b2a4d", 233 | "sha256:3e90a9fce378114b6c2fc01fff7423300515c7b54b7cc71b02a22bc0bd7dfdd8", 234 | "sha256:5774d49516c37fd3fc1f232e033d2b152f3323ca4c7bfefd7277e4c67f3c08b4", 235 | "sha256:64ff21aac30d40c20ba994c94a08d439b8ced3b9c704af897e9e4ba09d10e62c", 236 | "sha256:803b2af862dcad6c11231ea3cd1015d1293efd6c87088be33d713a9b23e9e419", 237 | "sha256:95c830b09626508f7808ce7f1344fb98068e63143e6050e5dc3063142fc60007", 238 | "sha256:96e49a0c82b4e3130093002f625545104037c2d25866fa2e0c90d6e54f5a1fbc", 239 | "sha256:a1dd8221f0e69038748f47b8bb3248d0b9ecdf13fe837440951c3d5ff72639bb", 240 | "sha256:a80ecac5664f420556a725a5646f2d1c60a7c0489d68a38b5056393e949e27ac", 241 | "sha256:b19a47ff1bd2fca0cacdfa830c967746764c32dca6a0c0328d9c893f4bfe2f6b", 242 | "sha256:be43df2c563e264b38e3318574d80fc8f365df3fb745270934d2dbe54e006f41", 243 | "sha256:c40cb17188f6ae3c5b6efc6f0fd43a7ddd219b7807fe179e71027849a9b91afc", 244 | "sha256:c6251e0f0ecac53ba2b99d9f0cc16fa9021914a78869c38213c436ba343641f0", 245 | "sha256:cb189bd98b2e7ac02df389b6212846ab20661f4bafe16b5a70a6f1728c1cc7cb", 246 | "sha256:ef4ae41add536cb825d8aa029c15ef510aead06ea5b68daea64f0b9ecbff17db", 247 | "sha256:f00a2c21f60284e024bba351875f3501c6d5817d64997a0afe4f4355161a8889", 248 | "sha256:f1232f98a6bbd6d1678249f94028bccc541bbc306aa5c4e1471a881b0e5a3409", 249 | "sha256:fea682f6ddc09517df0e6d5caad9613c6d91a42232aeb082df67e4d205de19cc" 250 | ], 251 | "version": "==1.16.0" 252 | }, 253 | "protobuf": { 254 | "hashes": [ 255 | "sha256:10394a4d03af7060fa8a6e1cbf38cea44be1467053b0aea5bbfcb4b13c4b88c4", 256 | "sha256:1489b376b0f364bcc6f89519718c057eb191d7ad6f1b395ffd93d1aa45587811", 257 | 
"sha256:1931d8efce896981fe410c802fd66df14f9f429c32a72dd9cfeeac9815ec6444", 258 | "sha256:196d3a80f93c537f27d2a19a4fafb826fb4c331b0b99110f985119391d170f96", 259 | "sha256:46e34fdcc2b1f2620172d3a4885128705a4e658b9b62355ae5e98f9ea19f42c2", 260 | "sha256:4b92e235a3afd42e7493b281c8b80c0c65cbef45de30f43d571d1ee40a1f77ef", 261 | "sha256:574085a33ca0d2c67433e5f3e9a0965c487410d6cb3406c83bdaf549bfc2992e", 262 | "sha256:59cd75ded98094d3cf2d79e84cdb38a46e33e7441b2826f3838dcc7c07f82995", 263 | "sha256:5ee0522eed6680bb5bac5b6d738f7b0923b3cafce8c4b1a039a6107f0841d7ed", 264 | "sha256:65917cfd5da9dfc993d5684643063318a2e875f798047911a9dd71ca066641c9", 265 | "sha256:685bc4ec61a50f7360c9fd18e277b65db90105adbf9c79938bd315435e526b90", 266 | "sha256:92e8418976e52201364a3174e40dc31f5fd8c147186d72380cbda54e0464ee19", 267 | "sha256:9335f79d1940dfb9bcaf8ec881fb8ab47d7a2c721fb8b02949aab8bbf8b68625", 268 | "sha256:a7ee3bb6de78185e5411487bef8bc1c59ebd97e47713cba3c460ef44e99b3db9", 269 | "sha256:ceec283da2323e2431c49de58f80e1718986b79be59c266bb0509cbf90ca5b9e", 270 | "sha256:fcfc907746ec22716f05ea96b7f41597dfe1a1c088f861efb8a0d4f4196a6f10" 271 | ], 272 | "version": "==3.6.1" 273 | }, 274 | "redis": { 275 | "hashes": [ 276 | "sha256:2100750629beff143b6a200a2ea8e719fcf26420adabb81402895e144c5083cf", 277 | "sha256:8e0bdd2de02e829b6225b25646f9fb9daffea99a252610d040409a6738541f0a" 278 | ], 279 | "version": "==3.0.1" 280 | }, 281 | "requests": { 282 | "hashes": [ 283 | "sha256:502a824f31acdacb3a35b6690b5fbf0bc41d63a24a45c4004352b0242707598e", 284 | "sha256:7bf2a778576d825600030a110f3c0e3e8edc51dfaafe1c146e39a2027784957b" 285 | ], 286 | "version": "==2.21.0" 287 | }, 288 | "seldon-core": { 289 | "hashes": [ 290 | "sha256:cb0308249f0f8d7fcad8e0cc68984d0a167430f9e2793d6d0505738aaf71b120", 291 | "sha256:fcb549e9ba464e06addc1b84f8e2cb51b7c729ad6a0d65f8f5760fe4c2fd3cd7" 292 | ], 293 | "index": "pypi", 294 | "version": "==0.2.5" 295 | }, 296 | "six": { 297 | "hashes": [ 298 | "sha256:3350809f0555b11f552448330d0b52d5f24c91a322ea4a15ef22629740f3761c", 299 | "sha256:d16a0141ec1a18405cd4ce8b4613101da75da0e9a7aec5bdd4fa804d0e0eba73" 300 | ], 301 | "version": "==1.12.0" 302 | }, 303 | "tensorboard": { 304 | "hashes": [ 305 | "sha256:6f194519f41762bfdf5eb410ccf33226d1c252caf5ad8893288648bfbcf4d135", 306 | "sha256:81170f66bf8f95c2e9f6b3fefe0ddc5472655a9e3793e73b5b5d4ec0ba395e76" 307 | ], 308 | "version": "==1.12.2" 309 | }, 310 | "tensorflow": { 311 | "hashes": [ 312 | "sha256:16fb8a59e724afd37a276d33b7e2ed070e5c84899a8d4cfc3fe1bb446a859da7", 313 | "sha256:1ae50e44c0b29df5fb5b460118be5a257b4eb3e561008f64d2c4c715651259b7", 314 | "sha256:1b7d09cc26ef727d628dcb74841b89374a38ed81af25bd589a21659ef67443da", 315 | "sha256:2681b55d3e434e20fe98e3a3b1bde3588af62d7864b62feee4141a71e29ef594", 316 | "sha256:42fc8398ce9f9895b488f516ea0143cf6cf2a3a5fc804da4a190b063304bc173", 317 | "sha256:531619ad1c17b4084d09f442a9171318af813e81aae748e5de8274d561461749", 318 | "sha256:5cee35f8a6a12e83560f30246811643efdc551c364bc981d27f21fbd0926403d", 319 | "sha256:6ad6ed495f1a3d445c43d90cb2ce251ff5532fd6436e25f52977ee59ffa583df", 320 | "sha256:cd8c1a899e3befe1ccb774ea1aae077a4b1286f855c956210b23766f4ac85c30", 321 | "sha256:d3f3d7cd9bd4cdc7ebf25fd6c2dfc103dcf4b2834ae9276cc4cf897eb1515f6d", 322 | "sha256:e4f479e6aca595acc98347364288cbdfd3c025ca85389380174ea75a43c327b7", 323 | "sha256:f587dc03b5f0d1e50cca39b7159c9f21ffdec96273dbf5f7619d48c622cb21f2" 324 | ], 325 | "version": "==1.12.0" 326 | }, 327 | "termcolor": { 328 | "hashes": [ 329 | 
"sha256:1d6d69ce66211143803fbc56652b41d73b4a400a2891d7bf7a1cdf4c02de613b" 330 | ], 331 | "version": "==1.1.0" 332 | }, 333 | "tornado": { 334 | "hashes": [ 335 | "sha256:0662d28b1ca9f67108c7e3b77afabfb9c7e87bde174fbda78186ecedc2499a9d", 336 | "sha256:4e5158d97583502a7e2739951553cbd88a72076f152b4b11b64b9a10c4c49409", 337 | "sha256:732e836008c708de2e89a31cb2fa6c0e5a70cb60492bee6f1ea1047500feaf7f", 338 | "sha256:8154ec22c450df4e06b35f131adc4f2f3a12ec85981a203301d310abf580500f", 339 | "sha256:8e9d728c4579682e837c92fdd98036bd5cdefa1da2aaf6acf26947e6dd0c01c5", 340 | "sha256:d4b3e5329f572f055b587efc57d29bd051589fb5a43ec8898c77a47ec2fa2bbb", 341 | "sha256:e5f2585afccbff22390cddac29849df463b252b711aa2ce7c5f3f342a5b3b444" 342 | ], 343 | "version": "==5.1.1" 344 | }, 345 | "urllib3": { 346 | "hashes": [ 347 | "sha256:61bf29cada3fc2fbefad4fdf059ea4bd1b4a86d2b6d15e1c7c0b582b9752fe39", 348 | "sha256:de9529817c93f27c8ccbfead6985011db27bd0ddfcdb2d86f3f663385c6a9c22" 349 | ], 350 | "version": "==1.24.1" 351 | }, 352 | "werkzeug": { 353 | "hashes": [ 354 | "sha256:c3fd7a7d41976d9f44db327260e263132466836cef6f91512889ed60ad26557c", 355 | "sha256:d5da73735293558eb1651ee2fddc4d0dedcfa06538b8813a2e20011583c9e49b" 356 | ], 357 | "version": "==0.14.1" 358 | }, 359 | "wheel": { 360 | "hashes": [ 361 | "sha256:029703bf514e16c8271c3821806a1c171220cc5bdd325cbf4e7da1e056a01db6", 362 | "sha256:1e53cdb3f808d5ccd0df57f964263752aa74ea7359526d3da6c02114ec1e1d44" 363 | ], 364 | "markers": "python_version >= '3'", 365 | "version": "==0.32.3" 366 | } 367 | }, 368 | "develop": {} 369 | } 370 | --------------------------------------------------------------------------------