├── .gitignore ├── 1-Clipper-Basics.ipynb ├── 2-Pong-Game.ipynb ├── Makefile ├── PyDarknetDockerfile ├── README.md ├── images ├── blue.jpg ├── cat.jpg ├── detection │ ├── baseball-boy.jpg │ ├── cat-in-pan.jpg │ ├── cityscapes-1.jpg │ ├── cityscapes-2.jpg │ ├── cityscapes-3.jpg │ ├── frisbee.jpg │ ├── horses.jpg │ └── skate.jpg ├── dog.jpg └── duck.jpg ├── notebook-images ├── add_replicas.png ├── deploy_model.png ├── grafana.png ├── grafana_add_dashboard.png ├── grafana_dashboard_id.png ├── grafana_import_dashboard.png ├── grafana_new_dashboard.png ├── grafana_set_data_source.png ├── grafana_test_succeeded.png ├── grafana_w_info.png ├── link_model.png ├── pong_update_model.png ├── register_app.png ├── rollback_version.png ├── set_replicas.png ├── start_clipper.png └── update_model.png ├── other ├── clipper-tutorial-server │ ├── README.md │ ├── app.py │ ├── requirements.txt │ └── templates │ │ ├── index.html │ │ └── vm.html ├── nodejs │ ├── .dockerignore │ ├── .gitignore │ ├── Dockerfile │ ├── Makefile │ ├── node_server.js │ └── package.json └── start_up.sh └── pong-server ├── pong-server.py └── static ├── game.js ├── images ├── press1.png ├── press2.png └── winner.png ├── index.html ├── pong.css ├── pong.js └── sounds ├── goal.wav ├── ping.wav ├── pong.wav └── wall.wav /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | .ipynb_checkpoints/ 3 | __pycache__/ 4 | cifar/ 5 | clipper_logs/ 6 | *.log 7 | -------------------------------------------------------------------------------- /1-Clipper-Basics.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Table of Contents\n", 8 | "
\n", 9 | " \n", 38 | "
" 39 | ] 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "metadata": {}, 44 | "source": [ 45 | "## Jupyter Notebooks\n", 46 | "\n", 47 | "Welcome to the Clipper tutorial at Strata NYC 2018. You will be using [Jupyter Notebooks](http://jupyter.org/) for this tutorial. If you have never used a Jupyter notebook before, you may find it useful to spend a few minutes looking over the documentation. To get you started, two useful commands are:\n", 48 | "\n", 49 | "+ Shift-Enter will execute a cell. If it is a code cell, it will run the code. If it is a Markdown cell, it will render the Markdown.\n", 50 | "+ You can run shell commands directly in the notebook by prefixing them with an exclamation point. You will see several examples of this throughput the tutorial." 51 | ] 52 | }, 53 | { 54 | "cell_type": "markdown", 55 | "metadata": {}, 56 | "source": [ 57 | "\n", 58 | "## API Overview\n", 59 | "\n", 60 | "In the first part of this exercise, you will explore how to create and interact with a Clipper cluster. The primary way of managing Clipper is with the Clipper Admin Python tool. This tutorial will walk you through all the things you can do with the Clipper Admin tool as well as explain what happens within Clipper when you issue each command. You can find the complete API documentation for the Clipper Admin tool on our website: .\n", 61 | "\n", 62 | "**Goal:** Be familiar with how to create and manage a Clipper cluster, and understand what happens when you issue Clipper admin commands.\n", 63 | "\n", 64 | "> *NOTE: Throughout this exercise, you will building Docker images and deploying and managing a distributed system. Some commands may take a few minutes to run. If a cell does not complete immediately, that is perfectly normal. Thanks for your patience!*" 65 | ] 66 | }, 67 | { 68 | "cell_type": "markdown", 69 | "metadata": {}, 70 | "source": [ 71 | "### Context\n", 72 | "The Clipper Admin tool is distributed through Pip. You can install it with `pip install clipper_admin`, but it has already been installed in this notebook for you.\n", 73 | "\n", 74 | "Clipper uses Docker containers as its deployment mechanism. A running Clipper cluster consists of a collection of Docker containers communicating with each other over the network. As you issue commands against Clipper, you are communicating with these containers as well as creating new ones or destroying existing ones. As you explore the Clipper API throughout this exercise, we will illustrate how each command effects the cluster state.\n", 75 | "\n", 76 | "The main API for interacting with Clipper is exposed via a [`ClipperConnection`](http://docs.clipper.ai/en/develop/#clipper-connection) object. This is your handle to a Clipper cluster (this collection of Docker containers). It can be used to start, stop, inspect, and modify the cluster.\n", 77 | "\n", 78 | "In order to create a `ClipperConnection` object, you must provide it with a [`ContainerManager`](http://docs.clipper.ai/en/develop/#container-managers) object. While Docker is becoming an increasingly standard mechanism for deploying applications, there are many different tools for managing a Docker cluster. These tools broadly fall into the category of *Container Orchestration frameworks*. Some popular examples are [Kubernetes](https://kubernetes.io/), [Docker Swarm](https://docs.docker.com/engine/swarm/), and [DC/OS](https://dcos.io/). One of the reasons we run Clipper in Docker containers is to make the system as general as possible and support many different deployment scenarios. 
Within the Clipper Admin, we abstract away all of the Docker container-specific commands behind the `ContainerManager` interface. The `ClipperConnection` object makes Clipper-specific decisions about how to issue commands, and then makes any changes to the Docker configuration (for example, to launch a container for a newly deployed model) through the `ContainerManager`. To support different container orchestration frameworks that manage Docker containers in different ways, we create different implementations of the `ContainerManager` interface.\n", 79 | "\n", 80 | "Clipper currently provides two `ContainerManager` implementations: the `DockerContainerManager` and the `KubernetesContainerManager`. In this exercise, you will be using the `DockerContainerManager`, which runs Clipper directly on your local Docker instance. This `ContainerManager` is particularly useful for trying out Clipper without needing to set up an enterprise-grade container orchestration framework. The `DockerContainerManager` is not recommended for production use cases." 81 | ] 82 | }, 83 | { 84 | "cell_type": "markdown", 85 | "metadata": {}, 86 | "source": [ 87 | "### Creating a ClipperConnection\n", 88 | "To start Clipper, you must first create a [`ClipperConnection`](http://docs.clipper.ai/en/develop/#clipper-connection) object with the type of `ContainerManager` you want to use. In this case, you will be using the `DockerContainerManager`." 89 | ] 90 | }, 91 | { 92 | "cell_type": "code", 93 | "execution_count": null, 94 | "metadata": { 95 | "collapsed": true 96 | }, 97 | "outputs": [], 98 | "source": [ 99 | "from clipper_admin import ClipperConnection, DockerContainerManager\n", 100 | "clipper_conn = ClipperConnection(DockerContainerManager())" 101 | ] 102 | }, 103 | { 104 | "cell_type": "markdown", 105 | "metadata": {}, 106 | "source": [ 107 | "### Starting Clipper\n", 108 | "Now that you have a `ClipperConnection` object, you can start a Clipper cluster.\n", 109 | "\n", 110 | "The following command will start 3 Docker containers:\n", 111 | "1. The Query Frontend: The Query Frontend container listens for incoming prediction requests and schedules and routes them to the deployed models.\n", 112 | "2. The Management Frontend: The Management Frontend container manages and updates the cluster's internal configuration state, such as tracking which models are deployed and which application endpoints have been registered.\n", 113 | "3. A Redis instance: Redis is used to persistently store Clipper's internal configuration state. By default, Redis is started on port 6380 instead of the standard Redis default port 6379 to avoid collisions with any Redis instances that are already running.\n", 114 | "\n", 115 | "\n", 116 | "\n", 117 | "> ***NOTE:*** *Because Docker must download the Docker images from the internet (if they are not already cached) before it can start the containers, the first time you run this command it may take a few minutes to complete. Once the images have been downloaded once, they will be cached and future invocations of this command will complete much more quickly. Thanks for your patience.*\n", 118 | "\n", 119 | "If you try to start more than one Clipper cluster at once on the same host, the second execution of the command will fail because, by default, the second cluster will try to bind to the same ports as the first one. 
If you run into problems with the exercise and want to start over, you can completely stop the cluster with [`clipper_conn.stop_all()`](http://docs.clipper.ai/en/v0.3.0/clipper_connection.html#clipper_admin.ClipperConnection.stop_all_model_containers)." 120 | ] 121 | }, 122 | { 123 | "cell_type": "code", 124 | "execution_count": null, 125 | "metadata": { 126 | "collapsed": true 127 | }, 128 | "outputs": [], 129 | "source": [ 130 | "clipper_conn.start_clipper()\n", 131 | "clipper_addr = clipper_conn.get_query_addr()" 132 | ] 133 | }, 134 | { 135 | "cell_type": "markdown", 136 | "metadata": {}, 137 | "source": [ 138 | "At this point, let us take a look at the containers Clipper has started. It is interesting to note that one simple call to the Python API was able to spin up so many containers!" 139 | ] 140 | }, 141 | { 142 | "cell_type": "code", 143 | "execution_count": null, 144 | "metadata": {}, 145 | "outputs": [], 146 | "source": [ 147 | "!docker ps --filter label=ai.clipper.container.label" 148 | ] 149 | }, 150 | { 151 | "cell_type": "markdown", 152 | "metadata": {}, 153 | "source": [ 154 | "### Deploying a model\n", 155 | "At its most basic, a trained model is just a function that takes some input and produces some output. As a result, one way to think about Clipper is as a function server. While these functions are often complex models, Clipper is not restricted to serving machine learning models.\n", 156 | "\n", 157 | "Many machine learning models are trained in commonly used machine learning frameworks such as [Scikit-Learn](http://scikit-learn.org/), [TensorFlow](https://www.tensorflow.org/), [PyTorch](https://pytorch.org/), etc. To support these common use cases, Clipper offers a wide variety of deployers for common machine learning frameworks to simplify deployment. Furthermore, any frameworks that produce models that can be pickled -- this includes Scikit-Learn and XGBoost -- do not even require a special deployer and can be deployed directly with the standard Clipper Python closure deployer.\n", 158 | "\n", 159 | "In this exercise, you'll start by deploying a Scikit-Learn model using the Python closure deployer." 160 | ] 161 | }, 162 | { 163 | "cell_type": "markdown", 164 | "metadata": {}, 165 | "source": [ 166 | "#### Creating a model\n", 167 | "You will start by creating and training a three-class logistic regression classifier.\n", 168 | "\n", 169 | "Complete the TODO's in the next cells to build your model!" 170 | ] 171 | }, 172 | { 173 | "cell_type": "code", 174 | "execution_count": null, 175 | "metadata": { 176 | "collapsed": true 177 | }, 178 | "outputs": [], 179 | "source": [ 180 | "# The code to train the model and produce the graph comes from this sklearn example:\n", 181 | "# http://scikit-learn.org/stable/auto_examples/linear_model/plot_iris_logistic.html\n", 182 | "import numpy as np\n", 183 | "import matplotlib.pyplot as plt\n", 184 | "from sklearn import linear_model, datasets\n", 185 | "%matplotlib inline\n", 186 | "\n", 187 | "np.random.seed(5)\n", 188 | "\n", 189 | "iris = datasets.load_iris()\n", 190 | "X = iris.data[:, :2] # We only take the first two features.\n", 191 | "Y = iris.target\n", 192 | "\n", 193 | "model = linear_model.LogisticRegression(C=1e5)" 194 | ] 195 | }, 196 | { 197 | "cell_type": "markdown", 198 | "metadata": {}, 199 | "source": [ 200 | "Now that you've initialized your model, it's time to train it." 
201 | ] 202 | }, 203 | { 204 | "cell_type": "code", 205 | "execution_count": null, 206 | "metadata": {}, 207 | "outputs": [], 208 | "source": [ 209 | "model.fit(X, Y)" 210 | ] 211 | }, 212 | { 213 | "cell_type": "code", 214 | "execution_count": null, 215 | "metadata": { 216 | "collapsed": true 217 | }, 218 | "outputs": [], 219 | "source": [ 220 | "# Plot the decision boundary. For that, we will assign a color to each\n", 221 | "# point in the mesh [x_min, x_max]x[y_min, y_max].\n", 222 | "h = .02 # Step size in the mesh\n", 223 | "x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5\n", 224 | "y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5\n", 225 | "xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))\n", 226 | "Z = model.predict(np.c_[xx.ravel(), yy.ravel()])\n", 227 | "\n", 228 | "# Put the result into a color plot\n", 229 | "Z = Z.reshape(xx.shape)\n", 230 | "plt.figure(1, figsize=(4, 3))\n", 231 | "plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)\n", 232 | "\n", 233 | "# Plot also the training points\n", 234 | "plt.scatter(X[:, 0], X[:, 1], c=Y, edgecolors='k', cmap=plt.cm.Paired)\n", 235 | "plt.xlabel('Sepal length')\n", 236 | "plt.ylabel('Sepal width')\n", 237 | "\n", 238 | "plt.xlim(xx.min(), xx.max())\n", 239 | "plt.ylim(yy.min(), yy.max())\n", 240 | "plt.xticks(())\n", 241 | "plt.yticks(())\n", 242 | "plt.title('3 Way Classifier')\n", 243 | "plt.show()" 244 | ] 245 | }, 246 | { 247 | "cell_type": "markdown", 248 | "metadata": {}, 249 | "source": [ 250 | "#### Clipper's Model Deployers" 251 | ] 252 | }, 253 | { 254 | "cell_type": "markdown", 255 | "metadata": {}, 256 | "source": [ 257 | "One of the goals of Clipper is to make it simple to deploy and maintain machine-learning models in production. The prediction interface that models must implement is very simple, consisting of a single function. And the use of Docker makes it easy to include all of a model's dependencies in a self-contained environment. However, deploying a new type of model still entails writing and debugging a new model container and creating a Docker image.\n", 258 | "To make the model deployment process even simpler, Clipper provides a library of model deployers for common types of models. If your model can be deployed with one of these deployers, you no longer need to write a model container, create a Docker image, or even figure out how to save a model. Instead, you provide your trained model directly to the model deployer function within your Python process. The model deployer takes care of saving the model and building a Docker image that is compatible with your model type.\n", 259 | "\n", 260 | "Clipper provides model deployers for many common ML packages including PySpark, PyTorch, TensorFlow, etc. In addition, Clipper provides a Python model deployer that can deploy arbitrary Python functions as long as they can be pickled. Finally, if none of these fit your needs, you can always write a custom model container that can execute arbitrary code.\n", 261 | "\n", 262 | "To keep the base images small, Clipper model containers install only the required dependencies to ensure that a basic model will run. However, if your model requires custom Python packages installed, you can specify these additional packages to the model deployer and it will automatically use Pip to install them inside your model's Docker container. 
You will use this to install Scikit-Learn when you deploy the flower model.\n", 263 | "\n", 264 | "For more information about model deployers, check out the [documentation](http://docs.clipper.ai/en/v0.3.0/model_deployers.html)." 265 | ] 266 | }, 267 | { 268 | "cell_type": "markdown", 269 | "metadata": {}, 270 | "source": [ 271 | "#### Define a prediction function" 272 | ] 273 | }, 274 | { 275 | "cell_type": "markdown", 276 | "metadata": {}, 277 | "source": [ 278 | "Now you have a model that takes an array of length 2 as input -- a petal width and a sepal length -- and returns a flower label. Before you can deploy the model using the Python model deployer, you need to wrap that model in a function that conforms to Clipper's prediction interface.\n", 279 | "\n", 280 | "To improve performance during inference, many machine learning models exploit opportunities for data parallelism in the inference process. Because of this, Clipper tries to provide multiple inputs at once to a deployed model. Therefore, models deployed to Clipper must have a function interface that takes a list of inputs as an argument and returns a list of predictions as strings. Returning predictions as strings provides a lot of flexibility over what your models can return. Commonly, models in Clipper will return either a single number (such as a label or score) or JSON containing a richer representation of the model output (for example, by including confidence estimates of predicted labels).\n", 281 | "\n", 282 | "Define a `predict_flower` function below that takes a list of inputs and returns their predicted labels as strings." 283 | ] 284 | }, 285 | { 286 | "cell_type": "code", 287 | "execution_count": null, 288 | "metadata": { 289 | "collapsed": true 290 | }, 291 | "outputs": [], 292 | "source": [ 293 | "def get_label_from_class(l):\n", 294 | " if l == 0:\n", 295 | " return 'Setosa'\n", 296 | " elif l == 1:\n", 297 | " return 'Virginica'\n", 298 | " else:\n", 299 | " return 'Versicolour'\n", 300 | "\n", 301 | "\n", 302 | "def predict_flowers(flowers):\n", 303 | " labels = \"FIXME\" # TODO: Use the model to make predict labels\n", 304 | " return [get_label_from_class(l) for l in labels]" 305 | ] 306 | }, 307 | { 308 | "cell_type": "markdown", 309 | "metadata": {}, 310 | "source": [ 311 | "**Solution:**" 312 | ] 313 | }, 314 | { 315 | "cell_type": "markdown", 316 | "metadata": {}, 317 | "source": [ 318 | "```python\n", 319 | "def predict_flowers(flowers):\n", 320 | " labels = model.predict(flowers)\n", 321 | " return [get_label_from_class(l) for l in labels]\n", 322 | "```" 323 | ] 324 | }, 325 | { 326 | "cell_type": "markdown", 327 | "metadata": {}, 328 | "source": [ 329 | "Execute the following cell to test your function:" 330 | ] 331 | }, 332 | { 333 | "cell_type": "code", 334 | "execution_count": null, 335 | "metadata": { 336 | "collapsed": true 337 | }, 338 | "outputs": [], 339 | "source": [ 340 | "assert predict_flowers([X[0], X[101]]) == ['Setosa', 'Virginica']" 341 | ] 342 | }, 343 | { 344 | "cell_type": "markdown", 345 | "metadata": {}, 346 | "source": [ 347 | "Now you can use the Python model deployer to deploy your model to Clipper. The following cell may take a few seconds to run." 
348 |    ]
349 |   },
350 |   {
351 |    "cell_type": "code",
352 |    "execution_count": null,
353 |    "metadata": {
354 |     "collapsed": true
355 |    },
356 |    "outputs": [],
357 |    "source": [
358 |     "from clipper_admin.deployers import python as python_deployer\n",
359 |     "python_deployer.deploy_python_closure(\n",
360 |     "    clipper_conn,\n",
361 |     "    name=\"flowercat\", # The name of the model in Clipper\n",
362 |     "    version=1, # A unique identifier to assign to this model.\n",
363 |     "    input_type=\"floats\", # The type of data the model function expects as input\n",
364 |     "    func=predict_flowers, # The model function to deploy\n",
365 |     "    pkgs_to_install=['scipy', 'scikit-learn'] # Packages to install in the new container. Must be a list\n",
366 |     ")"
367 |    ]
368 |   },
369 |   {
370 |    "cell_type": "markdown",
371 |    "metadata": {},
372 |    "source": [
373 |     "Clipper deploys each model in its own Docker container. After deploying the model, Clipper uses the DockerContainerManager to start a container for this model and create an RPC connection with the Clipper query frontend, as illustrated below (the changes to the cluster are highlighted in red).\n",
374 |     "\n",
375 |     "> *Once again, Clipper must download a Docker container from the internet the first time this command is run.*\n",
376 |     "\n",
377 |     "![Deploying a model](notebook-images/deploy_model.png)\n",
378 |     "\n",
379 |     "If you list the Clipper containers again, you can see the container running your flower classification model."
380 |    ]
381 |   },
382 |   {
383 |    "cell_type": "code",
384 |    "execution_count": null,
385 |    "metadata": {
386 |     "collapsed": true
387 |    },
388 |    "outputs": [],
389 |    "source": [
390 |     "!docker ps --filter label=ai.clipper.container.label"
391 |    ]
392 |   },
393 |   {
394 |    "cell_type": "markdown",
395 |    "metadata": {},
396 |    "source": [
397 |     "#### A Note About Types [Optional]"
398 |    ]
399 |   },
400 |   {
401 |    "cell_type": "markdown",
402 |    "metadata": {},
403 |    "source": [
404 |     "When you deploy models and register applications, you must specify the input type that the model or application expects. The type that you specify has implications for how Clipper manages input serialization and deserialization. From the user's perspective, the input type affects the behavior of Clipper in two places. In the \"input\" field of the request JSON body, applications will reject requests where the value of that field is the wrong type. And the deployed model function will be called with a list of inputs of the specified type.\n",
405 |     "\n",
406 |     "The input type can be one of the following types:\n",
407 |     "\n",
408 |     "* \"ints\": The value of the \"input\" field in a request must be a JSON list of ints. The model function will be called with a list of numpy arrays of type numpy.int.\n",
409 |     "* \"floats\": The value of the \"input\" field in a request must be a JSON list of doubles. The model function will be called with a list of numpy arrays of type numpy.float32.\n",
410 |     "* \"doubles\": The value of the \"input\" field in a request must be a JSON list of doubles. The model function will be called with a list of numpy arrays of type numpy.float64.\n",
411 |     "* \"bytes\": The value of the \"input\" field in a request must be a Base64 encoded string. The model function will be called with a list of numpy arrays of type numpy.int8.\n",
412 |     "* \"strings\": The value of the \"input\" field in a request must be a string. The model function will be called with a list of strings."
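,
    "\n",
    "For a concrete picture, here is a minimal sketch of what the \"floats\" type means on each side of the REST boundary for a model deployed with `input_type=\"floats\"` such as \"flowercat\" (the variable names are just for illustration):\n",
    "\n",
    "```python\n",
    "import json\n",
    "import numpy as np\n",
    "\n",
    "# A client puts a JSON list of numbers in the \"input\" field of the request body...\n",
    "request_body = json.dumps({\"input\": [5.1, 3.5]})\n",
    "\n",
    "# ...and inside the model container, Clipper calls your predict function with a\n",
    "# list of numpy arrays (one element per query in the batch), float32 in this case.\n",
    "batch_received_by_model = [np.array([5.1, 3.5], dtype=np.float32)]\n",
    "```"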
413 |    ]
414 |   },
415 |   {
416 |    "cell_type": "markdown",
417 |    "metadata": {},
418 |    "source": [
419 |     "### Registering an Application\n",
420 |     "You've now deployed a model to Clipper, but you don't have any way to query it yet. Instead of automatically creating a REST endpoint when you deploy a model, Clipper introduces a layer of indirection: the application. Clients query a specific application in Clipper, and the application routes the query to the correct model. One advantage of this choice is that it decouples endpoint versioning from model versioning, enabling data scientists to rapidly iterate and update models without stopping or modifying a live application.\n",
421 |     "\n",
422 |     "A single Clipper cluster can have many applications registered and many models deployed at once.\n",
423 |     "\n",
424 |     "When you register an application, you configure certain elements of the application's behavior. These include:\n",
425 |     "\n",
426 |     "* The name to give the REST endpoint.\n",
427 |     "* The input type that the application expects (Clipper will ensure applications only route requests to models with matching input types).\n",
428 |     "* The latency service level objective (SLO) specified in microseconds. Clipper will manage how it schedules and routes queries for an application based on the specified latency SLO. For example, Clipper uses the specified SLO to configure its batching behavior to balance maximizing resource utilization by using larger batches with ensuring that SLOs can be met. In addition, Clipper will respond to requests by the end of the specified SLO with a default response if it has not received a prediction back from the model yet.\n",
429 |     "* The default output: Clipper will respond with the default output to requests if a real prediction isn't available by the end of the service level objective.\n",
430 |     "\n",
431 |     "![Registering an application](notebook-images/register_app.png)\n",
432 |     "\n",
433 |     "Register an application to query your classifier:"
434 |    ]
435 |   },
436 |   {
437 |    "cell_type": "code",
438 |    "execution_count": null,
439 |    "metadata": {
440 |     "collapsed": true
441 |    },
442 |    "outputs": [],
443 |    "source": [
444 |     "clipper_conn.register_application(\n",
445 |     "    name=\"flowercat-app\",\n",
446 |     "    input_type=\"floats\",\n",
447 |     "    default_output=\"Default\",\n",
448 |     "    slo_micros=100000)"
449 |    ]
450 |   },
451 |   {
452 |    "cell_type": "markdown",
453 |    "metadata": {},
454 |    "source": [
455 |     "When you register an application with Clipper, it creates a REST endpoint for that application:\n",
456 |     "\n",
457 |     "```\n",
458 |     "URL: <clipper-address>/<app-name>/predict\n",
459 |     "Method: POST\n",
460 |     "Data Params: {\"input\": <input>}\n",
461 |     "```\n",
462 |     "\n",
463 |     "If you want to send several inputs at once, you can batch them into a single request by providing a JSON object with the key `input_batch`. Note that this is separate from Clipper's internal batching mechanism, which will be used regardless of whether the inputs are provided in a single REST request or separate requests.\n",
464 |     "\n",
465 |     "```\n",
466 |     "URL: <clipper-address>/<app-name>/predict\n",
467 |     "Method: POST\n",
468 |     "Data Params: {\"input_batch\": <list of inputs>}\n",
469 |     "```\n",
470 |     "\n",
471 |     "Try querying the newly created application."
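,
    "\n",
    "If you later want to send several inputs in one request, the `input_batch` form described above can be used roughly like the following sketch (the inputs are simply the same Iris rows used elsewhere in this notebook):\n",
    "\n",
    "```python\n",
    "import requests, json\n",
    "\n",
    "# Each element of \"input_batch\" has the same format as a single \"input\".\n",
    "batch_response = requests.post(\n",
    "    \"http://%s/%s/predict\" % (clipper_addr, 'flowercat-app'),\n",
    "    headers={\"Content-type\": \"application/json\"},\n",
    "    data=json.dumps({'input_batch': [list(X[0]), list(X[101])]}))\n",
    "batch_response.json()\n",
    "```"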
472 | ] 473 | }, 474 | { 475 | "cell_type": "code", 476 | "execution_count": null, 477 | "metadata": { 478 | "collapsed": true 479 | }, 480 | "outputs": [], 481 | "source": [ 482 | "import requests, json\n", 483 | "response = requests.post(\n", 484 | " \"http://%s/%s/predict\" % (clipper_addr, 'flowercat-app'),\n", 485 | " headers={\"Content-type\": \"application/json\"},\n", 486 | " data=json.dumps({\n", 487 | " 'input': list(X[0]),\n", 488 | " }))\n", 489 | "result = response.json()\n", 490 | "result" 491 | ] 492 | }, 493 | { 494 | "cell_type": "markdown", 495 | "metadata": {}, 496 | "source": [ 497 | "You should see that your application returned the default output of \"Default\". This is because even though you have deployed a model and registered an application, you have not told Clipper to route requests from the \"flowercat-app\" application to the \"flowercat\" model.\n", 498 | "\n", 499 | "You can do this by linking the model to the application.\n", 500 | "\n", 501 | "" 502 | ] 503 | }, 504 | { 505 | "cell_type": "code", 506 | "execution_count": null, 507 | "metadata": { 508 | "collapsed": true 509 | }, 510 | "outputs": [], 511 | "source": [ 512 | "clipper_conn.link_model_to_app(app_name=\"flowercat-app\", model_name=\"flowercat\")" 513 | ] 514 | }, 515 | { 516 | "cell_type": "markdown", 517 | "metadata": {}, 518 | "source": [ 519 | "When you query the \"flowercat-app\" endpoint again, Clipper should return a predicted label. Try it with your own input." 520 | ] 521 | }, 522 | { 523 | "cell_type": "code", 524 | "execution_count": null, 525 | "metadata": { 526 | "collapsed": true, 527 | "scrolled": true 528 | }, 529 | "outputs": [], 530 | "source": [ 531 | "response = requests.post(\n", 532 | " \"http://%s/%s/predict\" % (clipper_addr, 'flowercat-app'),\n", 533 | " headers={\"Content-type\": \"application/json\"},\n", 534 | " data=json.dumps({\n", 535 | " 'input': \"FIXME\", # TODO: Specify an input\n", 536 | " }))\n", 537 | "result = response.json()\n", 538 | "result" 539 | ] 540 | }, 541 | { 542 | "cell_type": "markdown", 543 | "metadata": {}, 544 | "source": [ 545 | "**Solution:**" 546 | ] 547 | }, 548 | { 549 | "cell_type": "markdown", 550 | "metadata": {}, 551 | "source": [ 552 | "```python\n", 553 | "response = requests.post(\n", 554 | " \"http://%s/%s/predict\" % (clipper_addr, 'flowercat-app'),\n", 555 | " headers={\"Content-type\": \"application/json\"},\n", 556 | " data=json.dumps({\n", 557 | " 'input': list(X[0]), # TODO: Specify an input\n", 558 | " }))\n", 559 | "result = response.json()\n", 560 | "result\n", 561 | "```" 562 | ] 563 | }, 564 | { 565 | "cell_type": "markdown", 566 | "metadata": {}, 567 | "source": [ 568 | "### Inspecting Clipper\n", 569 | "The ClipperConnection object has several methods to inspect various aspects of the Clipper cluster.\n", 570 | "\n", 571 | "You can list all of the applications." 572 | ] 573 | }, 574 | { 575 | "cell_type": "code", 576 | "execution_count": null, 577 | "metadata": { 578 | "collapsed": true 579 | }, 580 | "outputs": [], 581 | "source": [ 582 | "clipper_conn.get_all_apps(verbose=True)" 583 | ] 584 | }, 585 | { 586 | "cell_type": "markdown", 587 | "metadata": {}, 588 | "source": [ 589 | "Or all of the models." 
590 | ] 591 | }, 592 | { 593 | "cell_type": "code", 594 | "execution_count": null, 595 | "metadata": { 596 | "collapsed": true 597 | }, 598 | "outputs": [], 599 | "source": [ 600 | "clipper_conn.get_all_models(verbose=True)" 601 | ] 602 | }, 603 | { 604 | "cell_type": "markdown", 605 | "metadata": {}, 606 | "source": [ 607 | "You can also fetch the raw container logs from all of the Clipper docker containers. The command will print the paths to the log files for further examination. You can figure out which logs belong to which container based on the unique Docker container ID in the log filename." 608 | ] 609 | }, 610 | { 611 | "cell_type": "code", 612 | "execution_count": null, 613 | "metadata": { 614 | "collapsed": true 615 | }, 616 | "outputs": [], 617 | "source": [ 618 | "clipper_conn.get_clipper_logs()" 619 | ] 620 | }, 621 | { 622 | "cell_type": "markdown", 623 | "metadata": {}, 624 | "source": [ 625 | "### Updating the Model\n", 626 | "Machine learning models are rarely static. Instead, data science tends to be an iterative process, with new and improved models being developed over time. Clipper supports this workflow by letting you deploy new versions of models. If you look back to where you linked your flowercat model to the application, you'll see that there is no mention of versioning in that method call. Instead, when a new version of a model is deployed, Clipper will automatically start routing requests to the new version.\n", 627 | "\n", 628 | "Create a new version of the \"flowercat\" model that returns the probabilities that an input is in each class instead.\n", 629 | "\n", 630 | "> *Hint: Check out the [Scikit-Learn Logistic Regression documentation](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression.predict_proba).*" 631 | ] 632 | }, 633 | { 634 | "cell_type": "code", 635 | "execution_count": null, 636 | "metadata": { 637 | "collapsed": true 638 | }, 639 | "outputs": [], 640 | "source": [ 641 | "def predict_flower_probabilities(flowers):\n", 642 | " # TODO: Return the predicted probabilities.\n", 643 | " return \"FIXME\"" 644 | ] 645 | }, 646 | { 647 | "cell_type": "markdown", 648 | "metadata": {}, 649 | "source": [ 650 | "**Solution:**" 651 | ] 652 | }, 653 | { 654 | "cell_type": "markdown", 655 | "metadata": {}, 656 | "source": [ 657 | "```python\n", 658 | "def predict_flower_probabilities(flowers):\n", 659 | " return model.predict_proba(flowers)\n", 660 | "```" 661 | ] 662 | }, 663 | { 664 | "cell_type": "code", 665 | "execution_count": null, 666 | "metadata": { 667 | "collapsed": true 668 | }, 669 | "outputs": [], 670 | "source": [ 671 | "predict_flower_probabilities([X[0]])" 672 | ] 673 | }, 674 | { 675 | "cell_type": "markdown", 676 | "metadata": {}, 677 | "source": [ 678 | "Deploy this new version of the function as version \"2\". For this application, you are using a numeric versioning scheme. But Clipper simply treats versions as unique string identifiers, so you could use other versioning schemes (such as Git hashes or semantic versioning). Versions don't even have to be ordered, Clipper just tracks the currently active version.\n", 679 | "\n", 680 | "\n", 681 | "\n", 682 | "The following cell may take a few seconds to run." 
683 | ] 684 | }, 685 | { 686 | "cell_type": "code", 687 | "execution_count": null, 688 | "metadata": { 689 | "collapsed": true 690 | }, 691 | "outputs": [], 692 | "source": [ 693 | "python_deployer.deploy_python_closure(\n", 694 | " clipper_conn,\n", 695 | " name=\"flowercat\",\n", 696 | " version=2, # Note that you are specifying a new version\n", 697 | " input_type=\"floats\",\n", 698 | " func=predict_flower_probabilities, # The new predict function\n", 699 | " pkgs_to_install=['scipy', 'scikit-learn']\n", 700 | ")" 701 | ] 702 | }, 703 | { 704 | "cell_type": "code", 705 | "execution_count": null, 706 | "metadata": { 707 | "collapsed": true 708 | }, 709 | "outputs": [], 710 | "source": [ 711 | "response = requests.post(\n", 712 | " \"http://%s/%s/predict\" % (clipper_addr, 'flowercat-app'),\n", 713 | " headers={\"Content-type\": \"application/json\"},\n", 714 | " data=json.dumps({\n", 715 | " 'input': list(X[0]),\n", 716 | " }))\n", 717 | "result = response.json()\n", 718 | "result" 719 | ] 720 | }, 721 | { 722 | "cell_type": "markdown", 723 | "metadata": {}, 724 | "source": [ 725 | "Sometimes the \"new and improved\" model is not actually improved. If you deploy a model that isn't working well, you can roll back to any previous version. This just changes which version of the model the application routes requests to.\n", 726 | "\n", 727 | "\n" 728 | ] 729 | }, 730 | { 731 | "cell_type": "code", 732 | "execution_count": null, 733 | "metadata": { 734 | "collapsed": true 735 | }, 736 | "outputs": [], 737 | "source": [ 738 | "clipper_conn.set_model_version(name=\"flowercat\", version=1)" 739 | ] 740 | }, 741 | { 742 | "cell_type": "code", 743 | "execution_count": null, 744 | "metadata": { 745 | "collapsed": true 746 | }, 747 | "outputs": [], 748 | "source": [ 749 | "response = requests.post(\n", 750 | " \"http://%s/%s/predict\" % (clipper_addr, 'flowercat-app'),\n", 751 | " headers={\"Content-type\": \"application/json\"},\n", 752 | " data=json.dumps({\n", 753 | " 'input': list(X[0]),\n", 754 | " }))\n", 755 | "result = response.json()\n", 756 | "result" 757 | ] 758 | }, 759 | { 760 | "cell_type": "markdown", 761 | "metadata": {}, 762 | "source": [ 763 | "### Adding Model Replicas\n", 764 | "Many machine learning models are computationally expensive and a single instance of the model may not meet the throughput demands of a serving workload. To increase prediction throughput, you can horizontally scale the application by creating additional model replicas. This creates additional Docker containers running the same model. Clipper will automatically load-balance requests across the set of available model replicas.\n", 765 | "\n", 766 | "Set the number of replicas for the currently active version (1) of the \"flowercat\" model to 4.\n", 767 | "\n", 768 | "" 769 | ] 770 | }, 771 | { 772 | "cell_type": "code", 773 | "execution_count": null, 774 | "metadata": { 775 | "collapsed": true 776 | }, 777 | "outputs": [], 778 | "source": [ 779 | "clipper_conn.set_num_replicas(\"flowercat\", num_replicas=4)" 780 | ] 781 | }, 782 | { 783 | "cell_type": "markdown", 784 | "metadata": {}, 785 | "source": [ 786 | "If you list the Clipper Docker containers, you should now see four containers based on the image \"flowercat:1\"." 
787 | ] 788 | }, 789 | { 790 | "cell_type": "code", 791 | "execution_count": null, 792 | "metadata": { 793 | "collapsed": true 794 | }, 795 | "outputs": [], 796 | "source": [ 797 | "!docker ps --filter label=ai.clipper.container.label" 798 | ] 799 | }, 800 | { 801 | "cell_type": "markdown", 802 | "metadata": {}, 803 | "source": [ 804 | "If you want to reduce the number of replicas of a model to free up hardware resource, you can use the same command.\n", 805 | "\n", 806 | "Set the number of replicas for \"flowercat\" back to 1.\n", 807 | "\n", 808 | "" 809 | ] 810 | }, 811 | { 812 | "cell_type": "code", 813 | "execution_count": null, 814 | "metadata": { 815 | "collapsed": true 816 | }, 817 | "outputs": [], 818 | "source": [ 819 | "clipper_conn.set_num_replicas(\"flowercat\", num_replicas=1)" 820 | ] 821 | }, 822 | { 823 | "cell_type": "code", 824 | "execution_count": null, 825 | "metadata": { 826 | "collapsed": true 827 | }, 828 | "outputs": [], 829 | "source": [ 830 | "!docker ps --filter label=ai.clipper.container.label" 831 | ] 832 | }, 833 | { 834 | "cell_type": "markdown", 835 | "metadata": {}, 836 | "source": [ 837 | "## Example Application - Image Classification" 838 | ] 839 | }, 840 | { 841 | "cell_type": "markdown", 842 | "metadata": {}, 843 | "source": [ 844 | "In the second part of this exercise, you will deploy a pre-trained [SqueezeNet](https://arxiv.org/abs/1602.07360) model that you will download from the PyTorch model zoo to classify images.\n", 845 | "\n", 846 | "You will create an application that labels images from the ImageNet dataset.\n", 847 | "\n", 848 | "Some sample images have already been downloaded for you." 849 | ] 850 | }, 851 | { 852 | "cell_type": "markdown", 853 | "metadata": {}, 854 | "source": [ 855 | "### Creating an application\n", 856 | "\n", 857 | "For this tutorial, create an application named \"squeezenet-classifier\". Note that Clipper allows you to create the application before deploying any models." 858 | ] 859 | }, 860 | { 861 | "cell_type": "code", 862 | "execution_count": null, 863 | "metadata": { 864 | "collapsed": true 865 | }, 866 | "outputs": [], 867 | "source": [ 868 | "app_name = \"squeezenet-classsifier\"\n", 869 | "default_output = \"default\"\n", 870 | "\n", 871 | "clipper_conn.register_application(\n", 872 | " name=app_name,\n", 873 | " input_type=\"bytes\",\n", 874 | " default_output=default_output,\n", 875 | " slo_micros=10000000)" 876 | ] 877 | }, 878 | { 879 | "cell_type": "markdown", 880 | "metadata": {}, 881 | "source": [ 882 | "When you list the applications registered with Clipper, you should see the newly registered \"squeezenet-classifier\" application show up." 883 | ] 884 | }, 885 | { 886 | "cell_type": "code", 887 | "execution_count": null, 888 | "metadata": { 889 | "collapsed": true 890 | }, 891 | "outputs": [], 892 | "source": [ 893 | "clipper_conn.get_all_apps()" 894 | ] 895 | }, 896 | { 897 | "cell_type": "markdown", 898 | "metadata": {}, 899 | "source": [ 900 | "### Start serving\n", 901 | "\n", 902 | "As soon as you register the application, the REST endpoint is live, even though no models have been linked yet. However, in this exercsie you will wait until you've deployed and linked a model before querying the application." 
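,
    "\n",
    "That said, the endpoint is already answering requests: with no model linked, Clipper simply returns the application's default output (\"default\"). Purely as an illustrative sketch, such a query would follow the same request pattern used elsewhere in this notebook:\n",
    "\n",
    "```python\n",
    "import base64, json, requests\n",
    "\n",
    "# Sketch only: with no model linked yet, this request would just come back with\n",
    "# the application's default output (\"default\") rather than a real prediction.\n",
    "req_json = json.dumps({\n",
    "    \"input\": base64.b64encode(open('images/cat.jpg', \"rb\").read()).decode()\n",
    "})\n",
    "requests.post(\n",
    "    \"http://%s/%s/predict\" % (clipper_addr, app_name),\n",
    "    headers={\"Content-type\": \"application/json\"},\n",
    "    data=req_json).json()\n",
    "```"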
903 | ] 904 | }, 905 | { 906 | "cell_type": "markdown", 907 | "metadata": {}, 908 | "source": [ 909 | "### Import the pretrained model from the PyTorch model zoo\n", 910 | "\n", 911 | "Several common machine learning libraries, including PyTorch and TensorFlow, maintain a \"model zoo\" of pre-trained models that can be downloaded and used for inference without the need to train them. Most of the models in these model zoos are state-of-the-art deep-learning models which require an enormous amount of compute power (often several thousand GPU-hours) and expertise to train well. PyTorch provides a simple API to download and import models available in their computer vision model zoo.\n", 912 | "\n", 913 | "Download the SqueezeNet model from the PyTorch model zoo and import it into your notebook." 914 | ] 915 | }, 916 | { 917 | "cell_type": "code", 918 | "execution_count": null, 919 | "metadata": { 920 | "collapsed": true 921 | }, 922 | "outputs": [], 923 | "source": [ 924 | "from torchvision import models, transforms\n", 925 | "model = models.squeezenet1_1(pretrained=True)" 926 | ] 927 | }, 928 | { 929 | "cell_type": "markdown", 930 | "metadata": {}, 931 | "source": [ 932 | "### Deploying the PyTorch Model\n", 933 | "\n", 934 | "Unlike the Scikit-Learn model in [Section 1](#API-Overview), PyTorch models cannot just be pickled and loaded. Instead, they must be saved using PyTorch's native serialization API. Because of this, you cannot use the generic Python model deployer to deploy the model to Clipper. Instead, you will use the Clipper PyTorch deployer to deploy it. The Docker container will load and reconstruct the model from the serialized model checkpoint when the container is started." 935 | ] 936 | }, 937 | { 938 | "cell_type": "markdown", 939 | "metadata": {}, 940 | "source": [ 941 | "#### Preprocessing\n", 942 | "Before making the predict function, you will download some labels for the dataset to test the accuracy of the pre-trained model, and specify some preprocessing logic to transform the images into the format the model expects. The code for this cell comes from the this [tutorial](http://blog.outcome.io/pytorch-quick-start-classifying-an-image/)." 943 | ] 944 | }, 945 | { 946 | "cell_type": "code", 947 | "execution_count": null, 948 | "metadata": { 949 | "collapsed": true 950 | }, 951 | "outputs": [], 952 | "source": [ 953 | "# First we define the preproccessing on the images:\n", 954 | "normalize = transforms.Normalize(\n", 955 | " mean=[0.485, 0.456, 0.406],\n", 956 | " std=[0.229, 0.224, 0.225]\n", 957 | ")\n", 958 | "preprocess = transforms.Compose([\n", 959 | " transforms.Scale(256),\n", 960 | " transforms.CenterCrop(224),\n", 961 | " transforms.ToTensor(),\n", 962 | " normalize\n", 963 | "])\n", 964 | "\n", 965 | "# Then we download the labels:\n", 966 | "labels = {int(key):value for (key, value)\n", 967 | " in requests.get('https://s3.amazonaws.com/outcome-blog/imagenet/labels.json').json().items()}" 968 | ] 969 | }, 970 | { 971 | "cell_type": "markdown", 972 | "metadata": {}, 973 | "source": [ 974 | "### Define a predict function and add metrics\n", 975 | "Now you can define a predict function to deploy. This predict function will wrap the model you downloaded from the model zoo as well as the preprocessing logic you just specified.\n", 976 | "\n", 977 | "You will also use Clipper's metrics reporting functionality to export some custom metrics for this model. In particular, you will report the batch size for each batch sent to the model. 
These metrics will be aggregated along with several system metrics and reported to [Prometheus](https://prometheus.io/), a time-series database commonly used for metrics reporting (for example, Kubernetes uses Prometheus for its own system metrics). You can view a simple metrics dashboard automatically exported by Prometheus at port 9090 (`http:://[IP_ADDRESS]:9090`). In practice, a monitoring and alerting system such as [Grafana](https://grafana.com/) is typically layered on top of Promtheus. You will work through an example of connecting Grafana to visualize Clipper's metrics later in the tutorial." 978 | ] 979 | }, 980 | { 981 | "cell_type": "code", 982 | "execution_count": null, 983 | "metadata": { 984 | "collapsed": true 985 | }, 986 | "outputs": [], 987 | "source": [ 988 | "import clipper_admin.metrics as metrics\n", 989 | "\n", 990 | "def predict_torch_model(model, imgs):\n", 991 | " import io\n", 992 | " import PIL.Image\n", 993 | " import torch\n", 994 | " import clipper_admin.metrics as metrics\n", 995 | " \n", 996 | " metrics.add_metric(\"batch_size\", 'Gauge', 'Batch size passed to PyTorch predict function.')\n", 997 | " metrics.report_metric('batch_size', \"FIXME\") # TODO: Fill in the batch size\n", 998 | " \n", 999 | " # We first prepare a batch from `imgs`\n", 1000 | " img_tensors = []\n", 1001 | " for img in imgs:\n", 1002 | " img_tensor = preprocess(PIL.Image.open(io.BytesIO(img)))\n", 1003 | " img_tensor.unsqueeze_(0)\n", 1004 | " img_tensors.append(img_tensor)\n", 1005 | " img_batch = torch.cat(img_tensors)\n", 1006 | " \n", 1007 | " # We perform a forward pass\n", 1008 | " with torch.no_grad():\n", 1009 | " model_output = model(img_batch)\n", 1010 | " \n", 1011 | " # Parse Result\n", 1012 | " img_labels = [labels[out.data.numpy().argmax()] for out in model_output]\n", 1013 | " \n", 1014 | " return img_labels" 1015 | ] 1016 | }, 1017 | { 1018 | "cell_type": "markdown", 1019 | "metadata": {}, 1020 | "source": [ 1021 | "**Solution:**" 1022 | ] 1023 | }, 1024 | { 1025 | "cell_type": "markdown", 1026 | "metadata": {}, 1027 | "source": [ 1028 | "```py\n", 1029 | "metrics.report_metric('batch_size', len(imgs))\n", 1030 | "```" 1031 | ] 1032 | }, 1033 | { 1034 | "cell_type": "markdown", 1035 | "metadata": {}, 1036 | "source": [ 1037 | "> *Once again, Clipper must download this Docker image from the internet, so this may take a minute. 
Thanks for your patience.*" 1038 | ] 1039 | }, 1040 | { 1041 | "cell_type": "code", 1042 | "execution_count": null, 1043 | "metadata": { 1044 | "collapsed": true 1045 | }, 1046 | "outputs": [], 1047 | "source": [ 1048 | "from clipper_admin.deployers import pytorch as pytorch_deployer\n", 1049 | "pytorch_deployer.deploy_pytorch_model(\n", 1050 | " clipper_conn,\n", 1051 | " name=\"pytorch-model\",\n", 1052 | " version=1, \n", 1053 | " input_type=\"bytes\", \n", 1054 | " func=\"FIXME\", # TODO: Provide the predict function wrapper\n", 1055 | " pytorch_model=\"FIXME\", # TODO: Pass model to function\n", 1056 | ")" 1057 | ] 1058 | }, 1059 | { 1060 | "cell_type": "markdown", 1061 | "metadata": {}, 1062 | "source": [ 1063 | "**Solution:**" 1064 | ] 1065 | }, 1066 | { 1067 | "cell_type": "markdown", 1068 | "metadata": {}, 1069 | "source": [ 1070 | "```python\n", 1071 | "from clipper_admin.deployers import pytorch as pytorch_deployer\n", 1072 | "pytorch_deployer.deploy_pytorch_model(\n", 1073 | " clipper_conn,\n", 1074 | " name= \"pytorch-model\",\n", 1075 | " version=1, \n", 1076 | " input_type=\"bytes\", \n", 1077 | " func=predict_torch_model, # TODO: Provide the predict function wrapper\n", 1078 | " pytorch_model= model, # TODO: Pass model to function\n", 1079 | ")\n", 1080 | "```" 1081 | ] 1082 | }, 1083 | { 1084 | "cell_type": "code", 1085 | "execution_count": null, 1086 | "metadata": { 1087 | "collapsed": true 1088 | }, 1089 | "outputs": [], 1090 | "source": [ 1091 | "clipper_conn.link_model_to_app(app_name=\"squeezenet-classsifier\", model_name=\"pytorch-model\")" 1092 | ] 1093 | }, 1094 | { 1095 | "cell_type": "markdown", 1096 | "metadata": {}, 1097 | "source": [ 1098 | "To visualize the metrics, you will connect Grafana to our Prometheus server. First, start Grafana in a Docker container. You will start it on port 3000." 1099 | ] 1100 | }, 1101 | { 1102 | "cell_type": "code", 1103 | "execution_count": null, 1104 | "metadata": { 1105 | "collapsed": true 1106 | }, 1107 | "outputs": [], 1108 | "source": [ 1109 | "!docker run --label ai.clipper.container.label -d -p 3000:3000 grafana/grafana" 1110 | ] 1111 | }, 1112 | { 1113 | "cell_type": "markdown", 1114 | "metadata": {}, 1115 | "source": [ 1116 | "Before you open up the Grafana Dashboard, we need to explain how to add Prometheus as a data source. Please read to the end of this section to get instructions on how to set up Grafana and import a pre-made dashboard. There will be a code block to run at the end that will give you the URL to use to access Grafana.\n", 1117 | "\n", 1118 | "When you go to Grafana, you will be asked to log in. You can use the username `admin` and password `admin` to log in.\n", 1119 | "\n", 1120 | "Next, click the green `Add Datasource` button, which will take you to the page shown in the image below.\n", 1121 | "\n", 1122 | "\n", 1123 | "\n", 1124 | "Give the data source a name, such as `Clipper Metrics`, then change the settings so that `Type` is `Prometheus`, which you can select from the drop down menu. For the `URL` field under the `HTTP` tab, please run the following code block and copy paste the generated URL." 
1125 | ] 1126 | }, 1127 | { 1128 | "cell_type": "code", 1129 | "execution_count": null, 1130 | "metadata": {}, 1131 | "outputs": [], 1132 | "source": [ 1133 | "import requests\n", 1134 | "from IPython.display import display, Markdown\n", 1135 | "\n", 1136 | "this_ip = requests.get('http://ip.42.pl/raw').text\n", 1137 | "\n", 1138 | "display(Markdown(f\"\"\"\n", 1139 | "Prometheus address: http://{this_ip}:9090\n", 1140 | "\"\"\"))" 1141 | ] 1142 | }, 1143 | { 1144 | "cell_type": "markdown", 1145 | "metadata": {}, 1146 | "source": [ 1147 | "You can also adjust the scrape interval so that Grafana queries Prometheus more often. When you are done, your setup page should look like the image below.\n", 1148 | "\n", 1149 | "\n", 1150 | "\n", 1151 | "When you click `Save and Test`, the page should show that the data source was successfully added, like so:\n", 1152 | "\n", 1153 | "\n", 1154 | "\n", 1155 | "Next, click the plus button in the sidebar and select `Dashboard`:\n", 1156 | "\n", 1157 | "\n", 1158 | "\n", 1159 | "This will take you to the following screen, where you should click `New Dashboard` in the upper left of the screen:\n", 1160 | "\n", 1161 | "\n", 1162 | "\n", 1163 | "Which will take you to this screen, where you should click `Import Dashboard` in the lower right box\n", 1164 | "\n", 1165 | "\n", 1166 | "\n", 1167 | "That will take you to this next screen, where you should copy and paste the ID `7904` into the `Grafana.com Dashboard` field like so:\n", 1168 | "\n", 1169 | "\n", 1170 | "\n", 1171 | "This will lead you to the final page, with settings for the Dashboard. The last step is to select `Clipper Metrics` (or whatever else you named the data source you added at the start) from the drop down menu in the last field labelled `Clipper Metrics`. Once you do so, you can hit `Import` to view your dashboard. Be sure to send more queries to generate interesting metrics!\n", 1172 | "\n", 1173 | "\n", 1174 | "\n", 1175 | "To open Grafana, run the following code block, and navigate to the URL it outputs." 1176 | ] 1177 | }, 1178 | { 1179 | "cell_type": "code", 1180 | "execution_count": null, 1181 | "metadata": {}, 1182 | "outputs": [], 1183 | "source": [ 1184 | "display(Markdown(f\"\"\"This is the location of your Grafana instance:\n", 1185 | " http://{this_ip}:3000\n", 1186 | "\"\"\"))" 1187 | ] 1188 | }, 1189 | { 1190 | "cell_type": "code", 1191 | "execution_count": null, 1192 | "metadata": { 1193 | "collapsed": true 1194 | }, 1195 | "outputs": [], 1196 | "source": [ 1197 | "import base64\n", 1198 | "import json\n", 1199 | "req_json = json.dumps({\n", 1200 | " \"input\":\n", 1201 | " base64.b64encode(open('images/cat.jpg', \"rb\").read()).decode() # bytes to unicode\n", 1202 | " })\n", 1203 | "\n", 1204 | "response = requests.post(\n", 1205 | " \"http://%s/%s/predict\" % (clipper_addr, 'squeezenet-classsifier'),\n", 1206 | " headers={\"Content-type\": \"application/json\"},\n", 1207 | " data=req_json)\n", 1208 | "response.json()" 1209 | ] 1210 | }, 1211 | { 1212 | "cell_type": "markdown", 1213 | "metadata": {}, 1214 | "source": [ 1215 | "> *Note that squeezenet is a deep-learning model (specifically, a convolutional neural network). Even though it was developed to be relatively fast, it may take a few seconds to return a prediction since it is running on a CPU. 
Deploying the model to a GPU will accelerate it substantially.*" 1216 | ] 1217 | }, 1218 | { 1219 | "cell_type": "markdown", 1220 | "metadata": {}, 1221 | "source": [ 1222 | "It's important to note that since we are sending requests in series, there aren't multiple requests in the system at any one time, and therefore the system will not have multiple queries to batch together. You will still be able to observe some batching, however, thanks to Clipper's batching exploration algorithm. Clipper will occasionally inject additional queries into the system (when it is underloaded) to explore the throughput-latency tradeoff for each model. This exploration algorithm estimates a function that predicts query latency based on the batch size, thereby enabling Clipper to choose the largest batch size (and therefore the highest throughput) that is still capable of returning within the deadline imposed by the latency SLO. Clipper estimates a different function for each model container, allowing the system to adapt to different model characteristics and varying resource loads." 1223 | ] 1224 | }, 1225 | { 1226 | "cell_type": "code", 1227 | "execution_count": null, 1228 | "metadata": { 1229 | "collapsed": true 1230 | }, 1231 | "outputs": [], 1232 | "source": [ 1233 | "from IPython.display import display, Image\n", 1234 | "# Blue is Rehan's cat's name. The last picture is an image of his cat.\n", 1235 | "images = ['images/cat.jpg', 'images/dog.jpg', 'images/duck.jpg', 'images/blue.jpg']\n", 1236 | "for img in images:\n", 1237 | " print('Output for', img)\n", 1238 | " display(Image(img, width=200, height=200))\n", 1239 | " req_json = json.dumps({\n", 1240 | " \"input\":\n", 1241 | " base64.b64encode(open(img, \"rb\").read()).decode() # bytes to unicode\n", 1242 | " })\n", 1243 | "\n", 1244 | " response = requests.post(\n", 1245 | " \"http://%s/%s/predict\" % (clipper_addr, 'squeezenet-classsifier'),\n", 1246 | " headers={\"Content-type\": \"application/json\"},\n", 1247 | " data=req_json)\n", 1248 | " print(response.json())" 1249 | ] 1250 | }, 1251 | { 1252 | "cell_type": "markdown", 1253 | "metadata": {}, 1254 | "source": [ 1255 | "## Example Application - Custom Docker Containers" 1256 | ] 1257 | }, 1258 | { 1259 | "cell_type": "markdown", 1260 | "metadata": {}, 1261 | "source": [ 1262 | "In this final example, you will learn how to create a custom Docker model container. Custom Docker containers can be utilized when model containers rely on dependencies that cannot be installed with Pip. In this example, you will deploying a model developed in a pure C framework that must be compiled from source within the Docker container.\n", 1263 | "\n", 1264 | "You will deploy the [YoloV3 Real Time Object Detection System](https://pjreddie.com/darknet/yolo/) to predict both *labels and bounding boxes* for objects within an image. This is a harder machine learning task than the image classification task that you performed with SqueezeNet.\n", 1265 | "\n", 1266 | "YOLO (You Only Look Once) is a state-of-the-art object detection model, but unfortunately it does not expose a native Python API. Instead, it must be compiled from source and executed as a binary executable, not as a library. 
You will clone the repo (in this case, a fork we created with some minor modifications to extract the bounding box coordinates instead of directly drawing them on the image) and compile it, all within the Docker image.\n",
1267 |     "\n",
1268 |     "The first step in creating a Docker image is to write a [Dockerfile](https://docs.docker.com/engine/reference/builder/). This is like the source code for the image. After you have written the Dockerfile, you will build it. Unlike when you compile source code, building a Docker image actually runs the program specified in the Dockerfile. The end result is a Docker image -- a binary object that contains the results of executing the instructions in the Dockerfile sequentially from top to bottom. \n",
1269 |     "\n",
1270 |     "For your convenience, we have already written a Dockerfile for you called `PyDarknetDockerfile` and included it in the tutorial. However, before you build it you will read about each step in the Dockerfile, providing a blueprint for you to define your own custom Clipper model containers in the future.\n",
1271 |     "\n",
1272 |     "Here are the full contents of the Dockerfile:\n",
1273 |     "\n",
1274 |     "```dockerfile\n",
1275 |     "FROM clipper/python36-closure-container:0.3\n",
1276 |     "\n",
1277 |     "# Install Git\n",
1278 |     "RUN apt-get update \\\n",
1279 |     "    && apt-get install -y git\n",
1280 |     "\n",
1281 |     "# Install cURL\n",
1282 |     "RUN apt-get install -y curl\n",
1283 |     "\n",
1284 |     "# Clone Darknet Repo\n",
1285 |     "RUN git clone https://github.com/RehanSD/darknet.git /tmp/darknet\n",
1286 |     "RUN mv /tmp/darknet/* .\n",
1287 |     "\n",
1288 |     "# Make Darknet Project\n",
1289 |     "RUN make -j4\n",
1290 |     "\n",
1291 |     "#Download Weights\n",
1292 |     "RUN curl -o yolov3-tiny.weights https://pjreddie.com/media/files/yolov3-tiny.weights\n",
1293 |     "```\n",
1294 |     "\n",
1295 |     "In the following sections, we will break down each part of the file."
1296 |    ]
1297 |   },
1298 |   {
1299 |    "cell_type": "markdown",
1300 |    "metadata": {},
1301 |    "source": [
1302 |     "The first line of the file ensures that you are starting off with a Docker image with all of the necessary Clipper dependencies installed. In the rest of the Dockerfile, you are extending the Clipper-provided container to simply install some additional dependencies.\n",
1303 |     "```dockerfile\n",
1304 |     "FROM clipper/python36-closure-container:0.3\n",
1305 |     "```\n",
1306 |     "\n",
1307 |     "The next few lines install the basic dependencies we need in the container - git to clone the repo, and curl to download the pretrained YOLO model weights.\n",
1308 |     "\n",
1309 |     "```dockerfile\n",
1310 |     "# Install Git\n",
1311 |     "RUN apt-get update \\\n",
1312 |     "    && apt-get install -y git\n",
1313 |     "\n",
1314 |     "# Install cURL\n",
1315 |     "RUN apt-get install -y curl\n",
1316 |     "```\n",
1317 |     "\n",
1318 |     "The last few lines clone the darknet repo, compile it, and then download the pretrained model weights. We are using `yolov3-tiny` because it offers a good trade-off between accuracy and inference speed on a CPU. 
(For a latency-sensitive production use case, you might run it on GPU.)\n", 1319 | "\n", 1320 | "```dockerfile\n", 1321 | "# Clone Darknet Repo\n", 1322 | "RUN git clone https://github.com/RehanSD/darknet.git /tmp/darknet\n", 1323 | "RUN mv /tmp/darknet/* .\n", 1324 | "\n", 1325 | "# Make Darknet Project\n", 1326 | "RUN make -j4\n", 1327 | "\n", 1328 | "#Download Weights\n", 1329 | "RUN curl -o yolov3-tiny.weights https://pjreddie.com/media/files/yolov3-tiny.weights\n", 1330 | "```\n", 1331 | "\n", 1332 | "\n", 1333 | "Now that you have defined the image, you can build it with the following shell command." 1334 | ] 1335 | }, 1336 | { 1337 | "cell_type": "code", 1338 | "execution_count": null, 1339 | "metadata": { 1340 | "collapsed": true 1341 | }, 1342 | "outputs": [], 1343 | "source": [ 1344 | "!docker build -t clipper/darknet-yolov3-container -f PyDarknetDockerfile ." 1345 | ] 1346 | }, 1347 | { 1348 | "cell_type": "markdown", 1349 | "metadata": {}, 1350 | "source": [ 1351 | "### Predict Function\n", 1352 | "Now we build the predict function. We must first deserialize the image that we serialized in our request - similar to how we did in the PyTorch example, and then write it to a file to run darknet on it. Since darknet is a C executable, we must call it using the subprocess API. It is not strictly neccessary to print out the output of calling darknet, but it is useful to have the output for the sake of debugging." 1353 | ] 1354 | }, 1355 | { 1356 | "cell_type": "code", 1357 | "execution_count": null, 1358 | "metadata": { 1359 | "collapsed": true 1360 | }, 1361 | "outputs": [], 1362 | "source": [ 1363 | "import os\n", 1364 | "import subprocess\n", 1365 | "import base64\n", 1366 | "import io\n", 1367 | "import os\n", 1368 | "def yolo_pred(imgs):\n", 1369 | " import base64\n", 1370 | " import io\n", 1371 | " import os\n", 1372 | " import tempfile\n", 1373 | " import subprocess\n", 1374 | "\n", 1375 | " num_imgs = len(imgs)\n", 1376 | " ret_coords = []\n", 1377 | " predict_procs = []\n", 1378 | " file_names = []\n", 1379 | " \n", 1380 | " # First, we save the images to file\n", 1381 | " for i in range(num_imgs):\n", 1382 | " # Create a temp file to write to\n", 1383 | " tmp = tempfile.NamedTemporaryFile('wb', delete=False, suffix='.jpg')\n", 1384 | " tmp.write(io.BytesIO(imgs[i]).getvalue())\n", 1385 | " tmp.close()\n", 1386 | " file_names.append(tmp.name)\n", 1387 | " \n", 1388 | " # Second, we call ./darknet executable to detect objects in images.\n", 1389 | " # This is done in parallel.\n", 1390 | " for file_name in file_names:\n", 1391 | " process = subprocess.Popen(\n", 1392 | " ['./darknet',\n", 1393 | " 'detector',\n", 1394 | " 'test',\n", 1395 | " './cfg/coco.data',\n", 1396 | " './cfg/yolov3-tiny.cfg',\n", 1397 | " './yolov3-tiny.weights',\n", 1398 | " file_name,\n", 1399 | " '-json',\n", 1400 | " '-dont_show',\n", 1401 | " '-ext_output', '>',\n", 1402 | " '{}.txt'.format(file_name+'_result')], stdout=subprocess.PIPE)\n", 1403 | " predict_procs.append(process)\n", 1404 | " \n", 1405 | " # Lastly, we wait for all process to finished and return stdout of each process\n", 1406 | " for process in predict_procs:\n", 1407 | " process.wait()\n", 1408 | " ret_coords += [' '.join(map(lambda byte_str: byte_str.decode(), process.stdout))]\n", 1409 | "\n", 1410 | " return ret_coords" 1411 | ] 1412 | }, 1413 | { 1414 | "cell_type": "code", 1415 | "execution_count": null, 1416 | "metadata": { 1417 | "collapsed": true 1418 | }, 1419 | "outputs": [], 1420 | "source": [ 1421 | "# Do not be 
concerned if this cell takes a couple of seconds to run.\n", 1422 | "from clipper_admin.deployers import python as python_deployer\n", 1423 | "python_deployer.deploy_python_closure(\n", 1424 | " clipper_conn,\n", 1425 | " name=\"yolov3\", # The name of the model in Clipper\n", 1426 | " version=1, # A unique identifier to assign to this model.\n", 1427 | " input_type=\"bytes\", # The type of data the model function expects as input\n", 1428 | " func=yolo_pred, # The model function to deploy\n", 1429 | " base_image='clipper/darknet-yolov3-container'\n", 1430 | ")" 1431 | ] 1432 | }, 1433 | { 1434 | "cell_type": "code", 1435 | "execution_count": null, 1436 | "metadata": { 1437 | "collapsed": true 1438 | }, 1439 | "outputs": [], 1440 | "source": [ 1441 | "clipper_conn.register_application(\n", 1442 | " name=\"darknet-app\",\n", 1443 | " input_type=\"bytes\",\n", 1444 | " default_output=\"Default\",\n", 1445 | " slo_micros=10000000 # 10 seconds\n", 1446 | ")" 1447 | ] 1448 | }, 1449 | { 1450 | "cell_type": "code", 1451 | "execution_count": null, 1452 | "metadata": { 1453 | "collapsed": true 1454 | }, 1455 | "outputs": [], 1456 | "source": [ 1457 | "clipper_conn.link_model_to_app(app_name=\"darknet-app\", model_name=\"yolov3\")" 1458 | ] 1459 | }, 1460 | { 1461 | "cell_type": "code", 1462 | "execution_count": null, 1463 | "metadata": { 1464 | "collapsed": true 1465 | }, 1466 | "outputs": [], 1467 | "source": [ 1468 | "# Please note the request may take a couple of seconds to return, as we are running a largeish model\n", 1469 | "# on a CPU, which is slow.\n", 1470 | "import requests, json\n", 1471 | "url = \"http://%s/darknet-app/predict\" % clipper_addr\n", 1472 | "req_json = json.dumps({\n", 1473 | " \"input\":\n", 1474 | " base64.b64encode(open('images/dog.jpg', \"rb\").read()).decode() # bytes to unicode\n", 1475 | "})\n", 1476 | "headers = {'Content-type': 'application/json'}\n", 1477 | "r = requests.post(url, headers=headers, data=req_json)\n", 1478 | "\n", 1479 | "# Let's see what does YoloV3-tiny return\n", 1480 | "print(r.json())\n", 1481 | "print()\n", 1482 | "print(r.json()['output'])" 1483 | ] 1484 | }, 1485 | { 1486 | "cell_type": "code", 1487 | "execution_count": null, 1488 | "metadata": {}, 1489 | "outputs": [], 1490 | "source": [ 1491 | "# Let's plot the result\n", 1492 | "def _get_bounding_boxes(output_string):\n", 1493 | " \"\"\"Yield Rectangle object from output string\"\"\"\n", 1494 | " import re\n", 1495 | " import matplotlib.patches as patches\n", 1496 | " bbox_regex = re.compile(r\" ([a-z]+): [\\d]{2}%\\n Left: ([\\d]{1,3}), Bottom: ([\\d]{1,3}), Right: ([\\d]{1,3}), Top: ([\\d]{1,3})\")\n", 1497 | " matched = bbox_regex.findall(output_string)\n", 1498 | " \n", 1499 | " colors = ['b', 'g', 'r', 'c', 'm', 'y', 'k']\n", 1500 | " for c, m in zip(colors, matched):\n", 1501 | " cat, left, bottom, right, top = m\n", 1502 | " left, bottom, right, top = int(left), int(bottom), int(right), int(top)\n", 1503 | " yield patches.Rectangle((left,bottom), right-left, top-bottom, linewidth=1, facecolor='none', edgecolor=c,label=cat)\n", 1504 | "\n", 1505 | "def plot_bbox(im_name, output_string):\n", 1506 | " \"\"\"Plot the image with bbox overlay\"\"\"\n", 1507 | " import matplotlib.pyplot as plt\n", 1508 | " import matplotlib.patches as patches\n", 1509 | " from PIL import Image\n", 1510 | " import numpy as np\n", 1511 | " \n", 1512 | " fig,ax = plt.subplots(1, figsize=(10,8))\n", 1513 | " ax.imshow(np.array(Image.open(im_name)))\n", 1514 | " for p in 
_get_bounding_boxes(output_string):\n", 1515 | " ax.add_patch(p) \n", 1516 | " plt.title(\"Prediction result for image: {}\".format(im_name))\n", 1517 | " fig.legend()\n", 1518 | "\n", 1519 | "def predict_and_plot(im_name):\n", 1520 | " \"\"\"Make a prediction and plot image with bbox\"\"\"\n", 1521 | " url = \"http://%s/darknet-app/predict\" % clipper_addr\n", 1522 | " req_json = json.dumps({\n", 1523 | " \"input\":\n", 1524 | " base64.b64encode(open(im_name, \"rb\").read()).decode() # bytes to unicode\n", 1525 | " })\n", 1526 | " headers = {'Content-type': 'application/json'}\n", 1527 | " r = requests.post(url, headers=headers, data=req_json)\n", 1528 | " print(r.json())\n", 1529 | " plot_bbox(im_name, r.json()['output'])\n", 1530 | "\n", 1531 | "plot_bbox('images/dog.jpg', r.json()['output'])" 1532 | ] 1533 | }, 1534 | { 1535 | "cell_type": "markdown", 1536 | "metadata": {}, 1537 | "source": [ 1538 | "Feel free to experiment with other images under `images/detection/*.jpg`." 1539 | ] 1540 | }, 1541 | { 1542 | "cell_type": "code", 1543 | "execution_count": null, 1544 | "metadata": {}, 1545 | "outputs": [], 1546 | "source": [ 1547 | "!ls images/detection/" 1548 | ] 1549 | }, 1550 | { 1551 | "cell_type": "code", 1552 | "execution_count": null, 1553 | "metadata": {}, 1554 | "outputs": [], 1555 | "source": [ 1556 | "# Please note the request may take a couple of seconds to return, as we are running a largeish model\n", 1557 | "# on a CPU, which is slow.\n", 1558 | "# im_name = \"images/detection/*.jpg\"\n", 1559 | "im_name = \"images/detection/cityscapes-3.jpg\"\n", 1560 | "predict_and_plot(im_name)" 1561 | ] 1562 | }, 1563 | { 1564 | "cell_type": "markdown", 1565 | "metadata": {}, 1566 | "source": [ 1567 | "## Stopping Clipper\n", 1568 | "If you run into issues and want to completely stop Clipper, you can do this by calling [`ClipperConnection.stop_all()`](http://docs.clipper.ai/en/latest/#clipper_admin.ClipperConnection.stop_all)." 1569 | ] 1570 | }, 1571 | { 1572 | "cell_type": "code", 1573 | "execution_count": null, 1574 | "metadata": { 1575 | "collapsed": true 1576 | }, 1577 | "outputs": [], 1578 | "source": [ 1579 | "clipper_conn.stop_all()" 1580 | ] 1581 | }, 1582 | { 1583 | "cell_type": "markdown", 1584 | "metadata": {}, 1585 | "source": [ 1586 | "When you list all the Docker containers a final time, you should see that all of the Clipper containers have been stopped." 1587 | ] 1588 | }, 1589 | { 1590 | "cell_type": "code", 1591 | "execution_count": null, 1592 | "metadata": { 1593 | "collapsed": true 1594 | }, 1595 | "outputs": [], 1596 | "source": [ 1597 | "!docker ps --filter label=ai.clipper.container.label" 1598 | ] 1599 | }, 1600 | { 1601 | "cell_type": "markdown", 1602 | "metadata": {}, 1603 | "source": [ 1604 | "You can now call `clipper_conn.start_clipper()` again without running into errors." 1605 | ] 1606 | }, 1607 | { 1608 | "cell_type": "markdown", 1609 | "metadata": {}, 1610 | "source": [ 1611 | "Please continue with the rest of the tutorial in [this notebook](2-Pong-Game.ipynb)." 
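Before moving on, here is a minimal recap sketch of the stop/restart cycle described above; it uses only the `clipper_admin` calls already shown in this notebook and assumes Docker is still running locally:

```python
from clipper_admin import ClipperConnection, DockerContainerManager

# Connect to the local Docker daemon and tear down any leftover Clipper containers.
clipper_conn = ClipperConnection(DockerContainerManager())
clipper_conn.stop_all()

# Starting again now succeeds because stop_all() removed the old containers.
clipper_conn.start_clipper()
```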
1612 | ] 1613 | } 1614 | ], 1615 | "metadata": { 1616 | "kernelspec": { 1617 | "display_name": "Python 3", 1618 | "language": "python", 1619 | "name": "python3" 1620 | }, 1621 | "language_info": { 1622 | "codemirror_mode": { 1623 | "name": "ipython", 1624 | "version": 3 1625 | }, 1626 | "file_extension": ".py", 1627 | "mimetype": "text/x-python", 1628 | "name": "python", 1629 | "nbconvert_exporter": "python", 1630 | "pygments_lexer": "ipython3", 1631 | "version": "3.6.6" 1632 | } 1633 | }, 1634 | "nbformat": 4, 1635 | "nbformat_minor": 2 1636 | } 1637 | -------------------------------------------------------------------------------- /2-Pong-Game.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Tutorial 2: Clipper in Action with Pong" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "## Table of Contents\n", 15 | "1. Starting Clipper and deploying your first Pong AI model\n", 16 | "2. Training a better model\n", 17 | " 2.1 After you've finished playing Pong\n", 18 | "3. Deploying the new model to a live application\n", 19 | " 3.1 Next steps\n", 20 | "4. Conclusion" 21 | ] 22 | }, 23 | { 24 | "cell_type": "markdown", 25 | "metadata": {}, 26 | "source": [ 27 | "We have already explored some of the features of Clipper in Part 1. Now let's take a look at Clipper in action with Pong! Released by Atari in 1972, Pong was the first commercially successful video game. You can read more about it here.\n", 28 | "\n", 29 | "The goal of this tutorial is to use Clipper to deploy several ML models to play against, and in doing so explore:\n", 30 | "1. Deploying models trained in your choice of framework to Clipper with a few lines of code using Clipper's model deployers.\n", 31 | "2. Easily update or roll back models that have already been deployed in live applications.\n", 32 | "3. Use machine learning and Clipper to do something fun!\n", 33 | "\n", 34 | "This tutorial will be broken up into 3 main parts:\n", 35 | "##### 1. Starting Clipper and deploying an initial (poor) model\n", 36 | "##### 2. Training a better model\n", 37 | "##### 3. Deploying the new and improved model to the live Pong application" 38 | ] 39 | }, 40 | { 41 | "cell_type": "markdown", 42 | "metadata": {}, 43 | "source": [ 44 | "### Part 1: Starting Clipper and deploying your first Pong AI model" 45 | ] 46 | }, 47 | { 48 | "cell_type": "code", 49 | "execution_count": null, 50 | "metadata": {}, 51 | "outputs": [], 52 | "source": [ 53 | "#Import dependencies\n", 54 | "from clipper_admin import ClipperConnection, DockerContainerManager\n", 55 | "from clipper_admin.deployers import python as py_deployer\n", 56 | "import random\n", 57 | "import numpy as np\n", 58 | "import pandas as pd\n", 59 | "from sklearn import linear_model\n", 60 | "import requests\n", 61 | "from IPython.display import Markdown, display\n", 62 | "\n", 63 | "this_ip = requests.get('http://ip.42.pl/raw').text\n", 64 | "this_ip" 65 | ] 66 | }, 67 | { 68 | "cell_type": "markdown", 69 | "metadata": {}, 70 | "source": [ 71 | "The first step is to start Clipper and deploy your first model: one that randomly guesses which direction to move the paddle.\n", 72 | "\n", 73 | "The cell below will start Clipper running in Docker containers. You can run the `docker ps` shell comamnd to see the Clipper Docker containers. By this point, the Clipper Docker images should already be downloaded on the server. 
But if you decided to do the exercises out of order and are starting Clipper for the first time, this command may take a few minutes while it downloads the Docker images." 74 | ] 75 | }, 76 | { 77 | "cell_type": "code", 78 | "execution_count": null, 79 | "metadata": {}, 80 | "outputs": [], 81 | "source": [ 82 | "# Start Clipper. This command assumes that Docker is already running.\n", 83 | "clipper_conn = ClipperConnection(DockerContainerManager())\n", 84 | "clipper_conn.stop_all()\n", 85 | "clipper_conn.start_clipper()" 86 | ] 87 | }, 88 | { 89 | "cell_type": "markdown", 90 | "metadata": {}, 91 | "source": [ 92 | "Running the cell below will register an application in Clipper called \"pong\" and create a Clipper endpoint for the random policy at http://localhost:1337/pong/predict" 93 | ] 94 | }, 95 | { 96 | "cell_type": "code", 97 | "execution_count": null, 98 | "metadata": {}, 99 | "outputs": [], 100 | "source": [ 101 | "def random_predict(xs):\n", 102 | " '''\n", 103 | " Deploy a policy that returns a random choice from 0, 1, or 2.\n", 104 | " Remember that Clipper requires the output of the predict function to\n", 105 | " be a list of string objects.\n", 106 | " '''\n", 107 | " action = None # TODO randomly choose an action from the choices 0, 1, 2\n", 108 | " return [str(action) for _ in xs]\n", 109 | "\n", 110 | "py_deployer.create_endpoint(clipper_conn, name=\"pong\", input_type=\"doubles\", func=random_predict,\n", 111 | " default_output=\"0\", slo_micros=100000, )" 112 | ] 113 | }, 114 | { 115 | "cell_type": "markdown", 116 | "metadata": {}, 117 | "source": [ 118 | "**Solution:**" 119 | ] 120 | }, 121 | { 122 | "cell_type": "markdown", 123 | "metadata": {}, 124 | "source": [ 125 | "```python\n", 126 | "def random_predict(xs):\n", 127 | " action = random.randint(0, 2)\n", 128 | " return [str(action) for _ in xs]\n", 129 | "\n", 130 | "py_deployer.create_endpoint(clipper_conn, name=\"pong\", input_type=\"doubles\", func=random_predict,\n", 131 | " default_output=\"0\", slo_micros=100000)\n", 132 | "```" 133 | ] 134 | }, 135 | { 136 | "cell_type": "markdown", 137 | "metadata": {}, 138 | "source": [ 139 | "Now that you have a model deployed, let's see how it works! Run the cell below to start the little web app that will serve the Pong game." 140 | ] 141 | }, 142 | { 143 | "cell_type": "code", 144 | "execution_count": null, 145 | "metadata": {}, 146 | "outputs": [], 147 | "source": [ 148 | "from subprocess import Popen, PIPE\n", 149 | "\n", 150 | "server_proc = Popen([\n", 151 | " \"python\", \n", 152 | " \"pong-server/pong-server.py\", \n", 153 | " \"localhost:1337\", \n", 154 | " this_ip,\n", 155 | " \"pong_server.log\"], stdout=PIPE)\n", 156 | "\n", 157 | "display(Markdown(f\"\"\"\n", 158 | "This is your link to pong game:\n", 159 | "http://{this_ip}:4000/pong/\n", 160 | "\"\"\"))" 161 | ] 162 | }, 163 | { 164 | "cell_type": "markdown", 165 | "metadata": {}, 166 | "source": [ 167 | "Congratulations! We have deployed our first Pong AI to Clipper! 
Let's see how well it works by **clicking the link above and pressing 1 to start the game.**" 168 | ] 169 | }, 170 | { 171 | "cell_type": "markdown", 172 | "metadata": {}, 173 | "source": [ 174 | "> *If you need to stop the server for any reason, you can run the following command to stop it:*\n", 175 | ">```py\n", 176 | "server_proc.terminate()\n", 177 | "```" 178 | ] 179 | }, 180 | { 181 | "cell_type": "markdown", 182 | "metadata": {}, 183 | "source": [ 184 | "### Part 2: Training a better model" 185 | ] 186 | }, 187 | { 188 | "cell_type": "markdown", 189 | "metadata": {}, 190 | "source": [ 191 | "As you probably noticed, the random-guessing policy did not perform well at all. In order to train a better model, you will use [imitation learning](https://katefvision.github.io/katefSlides/immitation_learning_I_katef.pdf). Imitation learning is often used for reinforcement learning, but in this case we are going to use it to train a simple classifier." 192 | ] 193 | }, 194 | { 195 | "cell_type": "markdown", 196 | "metadata": {}, 197 | "source": [ 198 | "Play the game again, this time with 2 human players, and we will collect data on how you play the game to train the model.\n", 199 | "\n", 200 | "First, pair up with one of the people sitting next to you to play against. Then go back to your pong game and **press 2 to start a 2-player game.** After you have played a few games on your computer, switch to your partner's computer and play on their instance so you both have training data." 201 | ] 202 | }, 203 | { 204 | "cell_type": "markdown", 205 | "metadata": {}, 206 | "source": [ 207 | "#### After you've finished playing Pong\n", 208 | "\n", 209 | "Now that you have some training data, it's time to train a model. First, run the cell below to clean the data and format it for Scikit-Learn's LogisticRegression model." 210 | ] 211 | }, 212 | { 213 | "cell_type": "code", 214 | "execution_count": null, 215 | "metadata": {}, 216 | "outputs": [], 217 | "source": [ 218 | "df_data = pd.read_csv('out.csv')\n", 219 | "df_data.columns = [\"label\",\"paddle_y\",\"ball_x\",\"ball_y\",\"ball_dx\",\"ball_dy\",\"x_prev\",\"y_prev\"]\n", 220 | "\n", 221 | "def convert_label(label):\n", 222 | " \"\"\"Convert labels into numeric values\"\"\"\n", 223 | " if(label==\"down\"):\n", 224 | " return 1\n", 225 | " elif(label==\"up\"):\n", 226 | " return 2\n", 227 | " else:\n", 228 | " return 0\n", 229 | "\n", 230 | "df_data['label'] = df_data['label'].apply(convert_label)\n", 231 | "df_data.loc[:, \"paddle_y\":\"y_prev\"] = df_data.loc[:, \"paddle_y\":\"y_prev\"]/500.0\n", 232 | "\n", 233 | "df_data.head()" 234 | ] 235 | }, 236 | { 237 | "cell_type": "code", 238 | "execution_count": null, 239 | "metadata": {}, 240 | "outputs": [], 241 | "source": [ 242 | "df_data.size" 243 | ] 244 | }, 245 | { 246 | "cell_type": "markdown", 247 | "metadata": {}, 248 | "source": [ 249 | "You will use the data to train a scikit-learn Logistic Regression model, just as you did in the previous exercise. You can read more about the particular model [in the documentation](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)." 
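As an optional sanity check that is not part of the original tutorial flow, you could hold out part of the recorded data and measure how well the classifier imitates you before deploying it. A minimal sketch, assuming `df_data` has already been cleaned and normalized by the cells above:

```python
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn import linear_model

X = df_data.drop(['label'], axis=1)
y = df_data['label']

# Hold out 20% of the recorded frames (split ratio and seed are arbitrary choices).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = linear_model.LogisticRegression()
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

A very low score usually just means you need to record more 2-player games before training the model you deploy in the next cell.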
250 | ] 251 | }, 252 | { 253 | "cell_type": "code", 254 | "execution_count": null, 255 | "metadata": {}, 256 | "outputs": [], 257 | "source": [ 258 | "labels = df_data['label']\n", 259 | "training_data= df_data.drop(['label'], axis=1)\n", 260 | "\n", 261 | "model = linear_model.LogisticRegression()\n", 262 | "model.fit(training_data, labels)" 263 | ] 264 | }, 265 | { 266 | "cell_type": "markdown", 267 | "metadata": {}, 268 | "source": [ 269 | "### Part 3: Deploying the new model to a live application" 270 | ] 271 | }, 272 | { 273 | "cell_type": "markdown", 274 | "metadata": {}, 275 | "source": [ 276 | "Now that you have a better model, you can deploy that model to Clipper. Once the system detects there is a new version of the model, it will automatically start routing requests to the new version.\n", 277 | "\n", 278 | "![update_model](notebook-images/pong_update_model.png)" 279 | ] 280 | }, 281 | { 282 | "cell_type": "code", 283 | "execution_count": null, 284 | "metadata": {}, 285 | "outputs": [], 286 | "source": [ 287 | "def predict(inputs):\n", 288 | " # model.predict returns a list of predictions\n", 289 | " preds = model.predict(inputs)\n", 290 | " return [str(p) for p in preds]\n", 291 | "\n", 292 | "# TODO: Fill in the missing values below to deploy a version 2 of the pong model container. \n", 293 | "# It takes in inputs of type double and uses the predict function defined above.\n", 294 | "\n", 295 | "py_deployer.deploy_python_closure(clipper_conn, \n", 296 | " name=\"\", \n", 297 | " version=\"\", \n", 298 | " input_type=\"\", \n", 299 | " func=\"\",\n", 300 | " pkgs_to_install=[\"numpy\",\"scipy\", \"pandas\", \"sklearn\"])\n" 301 | ] 302 | }, 303 | { 304 | "cell_type": "markdown", 305 | "metadata": {}, 306 | "source": [ 307 | "**Solution:**" 308 | ] 309 | }, 310 | { 311 | "cell_type": "markdown", 312 | "metadata": {}, 313 | "source": [ 314 | "```python\n", 315 | "def predict(inputs):\n", 316 | " # model.predict returns a list of predictions\n", 317 | " preds = model.predict(inputs)\n", 318 | " return [str(p) for p in preds]\n", 319 | "\n", 320 | "# TODO: Fill in the missing values below to deploy a version 2 of the pong model container. \n", 321 | "# It takes in inputs of type double and uses the predict function defined above.\n", 322 | "\n", 323 | "py_deployer.deploy_python_closure(clipper_conn, \n", 324 | " name=\"pong\", \n", 325 | " version=2, \n", 326 | " input_type=\"doubles\", \n", 327 | " func=predict, \n", 328 | " pkgs_to_install=[\"numpy\",\"scipy\", \"pandas\", \"sklearn\"])\n", 329 | "```" 330 | ] 331 | }, 332 | { 333 | "cell_type": "markdown", 334 | "metadata": {}, 335 | "source": [ 336 | "Go to pong link we showed above , press 1 to start a new game against the AI, and notice how the game AI has improved with your new model!" 337 | ] 338 | }, 339 | { 340 | "cell_type": "markdown", 341 | "metadata": {}, 342 | "source": [ 343 | "#### Next steps\n", 344 | "\n", 345 | "This is the end of the Clipper tutorial. If you have finished early, you can continue trying to improve the Pong model. Scikit-Learn has several different classifiers you can experiment with. For example, you might see how a [Random Forest](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) or an [SVM](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC) perform. " 346 | ] 347 | }, 348 | { 349 | "cell_type": "markdown", 350 | "metadata": {}, 351 | "source": [ 352 | "### 4. 
Conclusion" 353 | ] 354 | }, 355 | { 356 | "cell_type": "markdown", 357 | "metadata": {}, 358 | "source": [ 359 | "Just as a recap, here's what we did today:\n", 360 | "\n", 361 | "1. Deployed an initial random policy to Clipper and served predictions\n", 362 | "2. Trained a new model to imitate your own Pong playing behavior.\n", 363 | "3. Deployed this new version of model to a live application without any downtime.\n", 364 | "\n", 365 | "By doing so, we've explored the following Clipper features:\n", 366 | "\n", 367 | "1. Deploy models trained in your choice of framework to Clipper with a few lines of code by using an existing model container or writing your own\n", 368 | "2. Easily update or roll back models in running applications\n", 369 | "3. Run each model in a separate Docker container for simple cluster management and resource allocation\n" 370 | ] 371 | }, 372 | { 373 | "cell_type": "markdown", 374 | "metadata": {}, 375 | "source": [ 376 | "\n", 377 | "Run the cell below to stop clipper:" 378 | ] 379 | }, 380 | { 381 | "cell_type": "code", 382 | "execution_count": null, 383 | "metadata": {}, 384 | "outputs": [], 385 | "source": [ 386 | "clipper_conn.stop_all() " 387 | ] 388 | } 389 | ], 390 | "metadata": { 391 | "anaconda-cloud": {}, 392 | "kernelspec": { 393 | "display_name": "Python 3", 394 | "language": "python", 395 | "name": "python3" 396 | }, 397 | "language_info": { 398 | "codemirror_mode": { 399 | "name": "ipython", 400 | "version": 3 401 | }, 402 | "file_extension": ".py", 403 | "mimetype": "text/x-python", 404 | "name": "python", 405 | "nbconvert_exporter": "python", 406 | "pygments_lexer": "ipython3", 407 | "version": "3.6.6" 408 | } 409 | }, 410 | "nbformat": 4, 411 | "nbformat_minor": 2 412 | } 413 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | run: 2 | echo "" > out.csv 3 | docker run --rm -p 8080:8080 -v $(shell pwd)/out.csv:/out.csv -d clipper/strata-nodejs 4 | -------------------------------------------------------------------------------- /PyDarknetDockerfile: -------------------------------------------------------------------------------- 1 | FROM clipper/python36-closure-container:0.3 2 | 3 | # Install Git 4 | RUN apt-get update \ 5 | && apt-get install -y git 6 | 7 | # Install cURL 8 | RUN apt-get install -y curl 9 | 10 | # Clone Darknet Repo 11 | RUN git clone https://github.com/RehanSD/darknet.git /tmp/darknet 12 | RUN mv /tmp/darknet/* . 13 | 14 | # Make Darknet Project 15 | RUN make -j4 16 | 17 | #Download Weights 18 | RUN curl -o yolov3-tiny.weights https://pjreddie.com/media/files/yolov3-tiny.weights 19 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Clipper Tutorial 2 | 3 | Welcome to Clipper Tutorial! To reproduce the environment, there are two approaches: 4 | 1. Launch an EC2 instance from our AMI (ami-0213a0d82c7061779). Once launched, the instance will have a jupyter lab environment running at port 8888. The password for jupyterlab is "clipper" 5 | 6 | 2. 
Clone the repo and install the following dependencies: 7 | - Docker 8 | - Python packages: 9 | ``` 10 | matplotlib 11 | numpy 12 | pandas 13 | scipy 14 | scikit-learn 15 | pillow 16 | clipper_admin 17 | cloudpickle==0.5.3 18 | jupyterlab 19 | torch==0.4.0 20 | torchvision 21 | jinja2 22 | ``` 23 | - Start Nodejs server that writes pong games status to a file 24 | 1. `cd $REPO/other/nodejs; make` will build the image 25 | 2. `cd $REPO; make` will run the image 26 | 27 | 28 | Please post any issue relating to the tutorial on the issue page. -------------------------------------------------------------------------------- /images/blue.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/images/blue.jpg -------------------------------------------------------------------------------- /images/cat.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/images/cat.jpg -------------------------------------------------------------------------------- /images/detection/baseball-boy.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/images/detection/baseball-boy.jpg -------------------------------------------------------------------------------- /images/detection/cat-in-pan.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/images/detection/cat-in-pan.jpg -------------------------------------------------------------------------------- /images/detection/cityscapes-1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/images/detection/cityscapes-1.jpg -------------------------------------------------------------------------------- /images/detection/cityscapes-2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/images/detection/cityscapes-2.jpg -------------------------------------------------------------------------------- /images/detection/cityscapes-3.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/images/detection/cityscapes-3.jpg -------------------------------------------------------------------------------- /images/detection/frisbee.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/images/detection/frisbee.jpg -------------------------------------------------------------------------------- /images/detection/horses.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/images/detection/horses.jpg -------------------------------------------------------------------------------- /images/detection/skate.jpg: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/images/detection/skate.jpg -------------------------------------------------------------------------------- /images/dog.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/images/dog.jpg -------------------------------------------------------------------------------- /images/duck.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/images/duck.jpg -------------------------------------------------------------------------------- /notebook-images/add_replicas.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/notebook-images/add_replicas.png -------------------------------------------------------------------------------- /notebook-images/deploy_model.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/notebook-images/deploy_model.png -------------------------------------------------------------------------------- /notebook-images/grafana.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/notebook-images/grafana.png -------------------------------------------------------------------------------- /notebook-images/grafana_add_dashboard.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/notebook-images/grafana_add_dashboard.png -------------------------------------------------------------------------------- /notebook-images/grafana_dashboard_id.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/notebook-images/grafana_dashboard_id.png -------------------------------------------------------------------------------- /notebook-images/grafana_import_dashboard.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/notebook-images/grafana_import_dashboard.png -------------------------------------------------------------------------------- /notebook-images/grafana_new_dashboard.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/notebook-images/grafana_new_dashboard.png -------------------------------------------------------------------------------- /notebook-images/grafana_set_data_source.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/notebook-images/grafana_set_data_source.png 
-------------------------------------------------------------------------------- /notebook-images/grafana_test_succeeded.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/notebook-images/grafana_test_succeeded.png -------------------------------------------------------------------------------- /notebook-images/grafana_w_info.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/notebook-images/grafana_w_info.png -------------------------------------------------------------------------------- /notebook-images/link_model.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/notebook-images/link_model.png -------------------------------------------------------------------------------- /notebook-images/pong_update_model.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/notebook-images/pong_update_model.png -------------------------------------------------------------------------------- /notebook-images/register_app.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/notebook-images/register_app.png -------------------------------------------------------------------------------- /notebook-images/rollback_version.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/notebook-images/rollback_version.png -------------------------------------------------------------------------------- /notebook-images/set_replicas.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/notebook-images/set_replicas.png -------------------------------------------------------------------------------- /notebook-images/start_clipper.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/notebook-images/start_clipper.png -------------------------------------------------------------------------------- /notebook-images/update_model.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/notebook-images/update_model.png -------------------------------------------------------------------------------- /other/clipper-tutorial-server/README.md: -------------------------------------------------------------------------------- 1 | This app will pop from a redis list named "provisioned_vms" and set user email to that link. 
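For context, the list name comes from `app.py` below; the app only pops from it, so an organizer has to seed it beforehand. A minimal, hypothetical seeding sketch (this helper script and the VM URLs are placeholders, not part of this repo):

```python
import redis

r = redis.Redis()

# Placeholder JupyterLab URLs; app.py will blpop one per attendee email and
# remember the assignment with r.set(email, vm_addr).
for vm_addr in ["http://vm-01.example.com:8888", "http://vm-02.example.com:8888"]:
    r.rpush("provisioned_vms", vm_addr)

print(r.llen("provisioned_vms"), "VMs available")
```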
-------------------------------------------------------------------------------- /other/clipper-tutorial-server/app.py: -------------------------------------------------------------------------------- 1 | from flask import Flask, request, render_template 2 | import redis 3 | 4 | app = Flask(__name__) 5 | r = redis.Redis() 6 | 7 | @app.route("/") 8 | def index(): 9 | return render_template('index.html') 10 | 11 | @app.route("/assign_vm", methods=["POST"]) 12 | def user_email(): 13 | email = request.form["email"] 14 | vm_addr = r.get(email) 15 | if vm_addr != None: 16 | vm_addr = vm_addr.decode() 17 | else: 18 | vm_addr = r.blpop("provisioned_vms")[1].decode() 19 | r.set(email, vm_addr) 20 | return render_template("vm.html", vm=vm_addr) 21 | 22 | if __name__ == '__main__': 23 | app.run(debug=True) -------------------------------------------------------------------------------- /other/clipper-tutorial-server/requirements.txt: -------------------------------------------------------------------------------- 1 | flask 2 | redis 3 | gunicorn -------------------------------------------------------------------------------- /other/clipper-tutorial-server/templates/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | Clipper Strata Tutorial 4 | 5 | 6 | 7 |
8 |

Welcome to the Clipper Tutorial @ Strata!

9 |

On this page, we will assign you a VM (virtual machine) for working through the tutorial.

10 |
11 |

Please enter your email below; we will only use this email once (to send you the tutorial material after the conference).

12 | 13 | 14 |
15 |
16 | 17 | 18 | We'll never share your email with anyone else. 19 |
20 | 21 |
22 | 23 |
24 | 25 | -------------------------------------------------------------------------------- /other/clipper-tutorial-server/templates/vm.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | Clipper Strata Tutorial 4 | 5 | 6 | 7 |
8 |

Welcome to the Clipper Tutorial @ Strata!

9 |

Please go to the following link to access your VM:

10 |
11 | {{ vm }} 12 |

This link will direct you to a JupyterLab instance.

13 |
14 | -------------------------------------------------------------------------------- /other/nodejs/.dockerignore: -------------------------------------------------------------------------------- 1 | node_modules/ 2 | -------------------------------------------------------------------------------- /other/nodejs/.gitignore: -------------------------------------------------------------------------------- 1 | node_modules/ 2 | -------------------------------------------------------------------------------- /other/nodejs/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM node:alpine 2 | 3 | COPY *.json ./ 4 | RUN npm install . 5 | 6 | COPY node_server.js . 7 | RUN echo "" > /out.csv 8 | ENTRYPOINT ["node", "node_server.js"] 9 | -------------------------------------------------------------------------------- /other/nodejs/Makefile: -------------------------------------------------------------------------------- 1 | .PHONY: build 2 | 3 | build: 4 | docker build -t clipper/strata-nodejs . 5 | build-no-cache: 6 | docker build -t clipper/strata-nodejs --no-cache . 7 | -------------------------------------------------------------------------------- /other/nodejs/node_server.js: -------------------------------------------------------------------------------- 1 | const http = require('http'); 2 | const fs = require('fs'); 3 | var csvWriter = require('csv-write-stream') 4 | const port = 8080 5 | 6 | const requestHandler = (request, response) => { 7 | response.setHeader('Access-Control-Allow-Origin', '*'); 8 | response.setHeader("Access-Control-Allow-Headers", "Access-Control-Allow-Origin, Content-Type, Origin, Content-Type, X-Requested-With, Accept, Authorization"); 9 | console.log(request.method) 10 | if(request.method =="POST"){ 11 | var body = ""; 12 | request.on('data', function (chunk) { 13 | body += chunk; 14 | }); 15 | request.on('end', function () { 16 | console.log('body: ' + body); 17 | var jsonObj = JSON.parse(body); 18 | console.log(jsonObj); 19 | var toWrite = [jsonObj['label'], jsonObj['leftPaddle_y'], jsonObj['ball_x'], jsonObj['ball_y'], jsonObj['ball_dx'], jsonObj['ball_dy'], jsonObj['ball_x_prev'], jsonObj['ball_y_prev']] 20 | 21 | var writer = csvWriter({sendHeaders: false}) 22 | writer.pipe(fs.createWriteStream('out.csv', {flags: 'a'})) 23 | writer.write(jsonObj) 24 | writer.end() 25 | }) 26 | response.end("Data recorded") 27 | } 28 | else{ 29 | // console.log(request.url) 30 | console.log("expecting OPTIONS") 31 | console.log(request.method) 32 | console.log("end response") 33 | response.end('Hello World\n'); 34 | } 35 | 36 | 37 | 38 | } 39 | 40 | const server = http.createServer(requestHandler) 41 | 42 | server.listen(port, (err) => { 43 | if (err) { 44 | return console.log('something bad happened', err) 45 | } 46 | 47 | console.log(`server is listening on ${port}`) 48 | }) 49 | -------------------------------------------------------------------------------- /other/nodejs/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "nodejs", 3 | "version": "1.0.0", 4 | "description": "", 5 | "main": "node_server.js", 6 | "dependencies": { 7 | "body-parser": "^1.18.3", 8 | "csv-write-stream": "^2.0.0", 9 | "express": "^4.16.3" 10 | }, 11 | "devDependencies": {}, 12 | "scripts": { 13 | "test": "echo \"Error: no test specified\" && exit 1" 14 | }, 15 | "author": "", 16 | "license": "ISC" 17 | } 18 | -------------------------------------------------------------------------------- 
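To make the recording flow concrete: the pong page POSTs one JSON object per labeled frame to this Node server (port 8080, per the Makefile), and `node_server.js` appends it to `out.csv`. The field names below are taken from `node_server.js`; the values are made-up sample numbers, not real game state:

```python
import requests

# One hypothetical frame of a 2-player game; the real values come from pong.js.
sample_frame = {
    "label": "up",          # direction the human moved the paddle ("up"/"down")
    "leftPaddle_y": 210,    # sample values only
    "ball_x": 320,
    "ball_y": 180,
    "ball_dx": 4,
    "ball_dy": -2,
    "ball_x_prev": 316,
    "ball_y_prev": 182,
}

resp = requests.post("http://localhost:8080/", json=sample_frame)
print(resp.status_code, resp.text)  # the server replies "Data recorded"
```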
/other/start_up.sh: -------------------------------------------------------------------------------- 1 | # This shell script will be ran at start up of the VM 2 | # It's intended for last minute changes when AMI update 3 | # is too slow. 4 | 5 | # For example 6 | # docker pull clipper/some_image:newest_version 7 | echo "start script ran" 8 | -------------------------------------------------------------------------------- /pong-server/pong-server.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function, absolute_import 2 | from socketserver import ThreadingMixIn 3 | from http.server import BaseHTTPRequestHandler, HTTPServer 4 | import mimetypes 5 | mimetypes.init() 6 | import os 7 | import requests 8 | from datetime import datetime 9 | import logging 10 | import json 11 | import sys 12 | from jinja2 import Template 13 | cur_dir = os.path.dirname(os.path.abspath(__file__)) 14 | static_dir = os.path.join(cur_dir, "static") 15 | 16 | 17 | PORT = 4000 18 | JS_FILE_PATH = os.path.join(static_dir, "pong.js") 19 | 20 | 21 | # NOTE: This is definitely not secure 22 | def in_static_dir(file): 23 | # make both absolute 24 | directory = os.path.join(os.path.realpath(static_dir), '') 25 | file = os.path.realpath(file) 26 | 27 | # return true, if the common prefix of both is equal to directory 28 | # e.g. /a/b/c/d.rst and directory is /a/b, the common prefix is /a/b 29 | return os.path.commonprefix([file, directory]) == directory 30 | 31 | 32 | class PongServer(BaseHTTPRequestHandler): 33 | 34 | def _respond_not_found(self): 35 | pass 36 | 37 | # GET requests serve the corresponding file from the "static/" subdirectory 38 | def do_GET(self): 39 | if self.path == "/pong" or self.path == "/pong/": 40 | self.path = "/pong/index.html" 41 | 42 | if self.path.startswith("/pong/"): 43 | self.path = self.path.replace("/pong/", "", 1) 44 | 45 | local_path = os.path.abspath(os.path.join(static_dir, self.path)) 46 | logger.info("Local path: {}".format(local_path)) 47 | if not in_static_dir(local_path): 48 | self.send_error(403, "Forbidden") 49 | elif not os.path.exists(local_path) or not os.path.isfile(local_path): 50 | self.send_error(404, "Not Found") 51 | else: 52 | with open(local_path, "rb") as f: 53 | self.send_response(200) 54 | mtype, encoding = mimetypes.guess_type(local_path) 55 | self.send_header('Content-Type', mtype) 56 | self.end_headers() 57 | self.wfile.write(f.read()) 58 | return 59 | 60 | def do_POST(self): 61 | if not self.path == "/pong/predict": 62 | self.send_error(404, "Not Found") 63 | return 64 | print(self.rfile) 65 | 66 | clipper_url = "http://{}/pong/predict".format(self.server.clipper_addr) 67 | content_length = int(self.headers['Content-Length']) 68 | logger.info(content_length) 69 | logger.info(clipper_url) 70 | # # workaround because Javascript's JSON.stringify will turn 1.0 into 1, which 71 | # # Clipper's JSON parsing will parse as an integer not a double 72 | req_json = json.loads(self.rfile.read(content_length).decode("utf-8")) 73 | req_json["input"] = [float(i) for i in req_json["input"]] 74 | logger.info("Request JSON: {}".format(req_json)) 75 | headers = {'Content-Type': 'application/json'} 76 | start = datetime.now() 77 | clipper_response = requests.post(clipper_url, headers=headers, data=json.dumps(req_json)) 78 | end = datetime.now() 79 | latency = (end - start).total_seconds() * 1000.0 80 | logger.debug("Clipper responded with '{txt}' in {time} ms".format( 81 | txt=clipper_response.text, time=latency)) 
82 | self.send_response(clipper_response.status_code) 83 | # Forward headers 84 | logger.info("Clipper responded with '{txt}' in {time} ms".format( 85 | txt=clipper_response.text, time=latency)) 86 | 87 | for k, v in clipper_response.headers.items(): 88 | self.send_header(k, v) 89 | self.end_headers() 90 | self.wfile.write(clipper_response.text.encode()) 91 | 92 | 93 | class ThreadingServer(ThreadingMixIn, HTTPServer): 94 | pass 95 | 96 | 97 | def run(clipper_addr): 98 | server_addr = ('0.0.0.0', PORT) 99 | logger.info("Starting Pong Server on localhost:{port}".format(port=PORT)) 100 | server = ThreadingServer(server_addr, PongServer) 101 | server.clipper_addr = clipper_addr 102 | server.serve_forever() 103 | 104 | 105 | def inject_localhost_addr(addr): 106 | template = Template(open(JS_FILE_PATH,'r').read()) 107 | rendered = template.render(ip_addr=addr) 108 | with open(JS_FILE_PATH, 'w') as f: 109 | f.write(rendered) 110 | 111 | 112 | if __name__ == '__main__': 113 | clipper_addr = sys.argv[1] 114 | 115 | localhost_addr = sys.argv[2] 116 | inject_localhost_addr(localhost_addr) 117 | 118 | log_filename = sys.argv[3] 119 | logging.basicConfig( 120 | filename=log_filename, 121 | format='%(asctime)s %(levelname)-8s [%(filename)s:%(lineno)d] %(message)s', 122 | datefmt='%y-%m-%d:%H:%M:%S', 123 | level=logging.INFO) 124 | logger = logging.getLogger(__name__) 125 | 126 | run(clipper_addr) 127 | -------------------------------------------------------------------------------- /pong-server/static/game.js: -------------------------------------------------------------------------------- 1 | //============================================================================= 2 | // 3 | // We need some ECMAScript 5 methods but we need to implement them ourselves 4 | // for older browsers (compatibility: 5 | // http://kangax.github.com/es5-compat-table/) 6 | // 7 | // Function.bind: 8 | // https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Function/bind 9 | // Object.create: http://javascript.crockford.com/prototypal.html 10 | // Object.extend: (defacto standard like jquery $.extend or prototype's 11 | // Object.extend) 12 | // 13 | // Object.construct: our own wrapper around Object.create that ALSO calls 14 | // an initialize constructor method if one exists 15 | // 16 | //============================================================================= 17 | 18 | if (!Function.prototype.bind) { 19 | Function.prototype.bind = function(obj) { 20 | var slice = [].slice, args = slice.call(arguments, 1), self = this, 21 | nop = function() {}, bound = function() { 22 | return self.apply( 23 | this instanceof nop ? 
this : (obj || {}), 24 | args.concat(slice.call(arguments))); 25 | }; 26 | nop.prototype = self.prototype; 27 | bound.prototype = new nop(); 28 | return bound; 29 | }; 30 | } 31 | 32 | if (!Object.create) { 33 | Object.create = function(base) { 34 | function F(){}; 35 | F.prototype = base; 36 | return new F(); 37 | } 38 | } 39 | 40 | if (!Object.construct) { 41 | Object.construct = function(base) { 42 | var instance = Object.create(base); 43 | if (instance.initialize) 44 | instance.initialize.apply(instance, [].slice.call(arguments, 1)); 45 | return instance; 46 | } 47 | } 48 | 49 | if (!Object.extend) { 50 | Object.extend = function(destination, source) { 51 | for (var property in source) { 52 | if (source.hasOwnProperty(property)) 53 | destination[property] = source[property]; 54 | } 55 | return destination; 56 | }; 57 | } 58 | 59 | /* NOT READY FOR PRIME TIME 60 | if (!window.requestAnimationFrame) {// 61 | http://paulirish.com/2011/requestanimationframe-for-smart-animating/ 62 | window.requestAnimationFrame = window.webkitRequestAnimationFrame || 63 | window.mozRequestAnimationFrame || 64 | window.oRequestAnimationFrame || 65 | window.msRequestAnimationFrame || 66 | function(callback, element) { 67 | window.setTimeout(callback, 1000 / 60); 68 | } 69 | } 70 | */ 71 | 72 | //============================================================================= 73 | // GAME 74 | //============================================================================= 75 | 76 | Game = { 77 | 78 | compatible: function() { 79 | return Object.create && Object.extend && Function.bind && 80 | document.addEventListener && // HTML5 standard, all modern browsers 81 | // that support canvas should also support 82 | // add/removeEventListener 83 | Game.ua.hasCanvas 84 | }, 85 | 86 | start: function(id, game, cfg) { 87 | if (Game.compatible()) 88 | return Object.construct(Game.Runner, id, game, cfg).game; // return the 89 | // game 90 | // instance, 91 | // not the 92 | // runner 93 | // (caller can 94 | // always get 95 | // at the 96 | // runner via 97 | // game.runner) 98 | }, 99 | 100 | ua: function() { // should avoid user agent sniffing... but sometimes you 101 | // just gotta do what you gotta do 102 | var ua = navigator.userAgent.toLowerCase(); 103 | var key = ((ua.indexOf('opera') > -1) ? 'opera' : null); 104 | key = key || ((ua.indexOf('firefox') > -1) ? 'firefox' : null); 105 | key = key || ((ua.indexOf('chrome') > -1) ? 'chrome' : null); 106 | key = key || ((ua.indexOf('safari') > -1) ? 'safari' : null); 107 | key = key || ((ua.indexOf('msie') > -1) ? 'ie' : null); 108 | 109 | try { 110 | var re = (key == 'ie') ? 'msie (\\d)' : key + '\\/(\\d\\.\\d)' 111 | var matches = ua.match(new RegExp(re, 'i')); 112 | var version = matches ? parseFloat(matches[1]) : null; 113 | } catch (e) { 114 | } 115 | 116 | return { 117 | full: ua, name: key + (version ? 
' ' + version.toString() : ''), 118 | version: version, isFirefox: (key == 'firefox'), 119 | isChrome: (key == 'chrome'), isSafari: (key == 'safari'), 120 | isOpera: (key == 'opera'), isIE: (key == 'ie'), 121 | hasCanvas: (document.createElement('canvas').getContext), 122 | hasAudio: (typeof(Audio) != 'undefined') 123 | } 124 | }(), 125 | 126 | addEvent: function(obj, type, fn) { 127 | obj.addEventListener(type, fn, false); 128 | }, 129 | removeEvent: function(obj, type, fn) { 130 | obj.removeEventListener(type, fn, false); 131 | }, 132 | 133 | ready: function(fn) { 134 | if (Game.compatible()) Game.addEvent(document, 'DOMContentLoaded', fn); 135 | }, 136 | 137 | createCanvas: function() { 138 | return document.createElement('canvas'); 139 | }, 140 | 141 | createAudio: function(src) { 142 | try { 143 | var a = new Audio(src); 144 | a.volume = 0.1; // lets be real quiet please 145 | return a; 146 | } catch (e) { 147 | return null; 148 | } 149 | }, 150 | 151 | loadImages: function( 152 | sources, callback) { /* load multiple images and callback when ALL have 153 | finished loading */ 154 | var images = {}; 155 | var count = sources ? sources.length : 0; 156 | if (count == 0) { 157 | callback(images); 158 | } else { 159 | for (var n = 0; n < sources.length; n++) { 160 | var source = sources[n]; 161 | var image = document.createElement('img'); 162 | images[source] = image; 163 | Game.addEvent(image, 'load', function() { 164 | if (--count == 0) callback(images); 165 | }); 166 | image.src = source; 167 | } 168 | } 169 | }, 170 | 171 | random: function(min, max) { 172 | return (min + (Math.random() * (max - min))); 173 | }, 174 | 175 | timestamp: function() { 176 | return new Date().getTime(); 177 | }, 178 | 179 | KEY: { 180 | BACKSPACE: 8, 181 | TAB: 9, 182 | RETURN: 13, 183 | ESC: 27, 184 | SPACE: 32, 185 | LEFT: 37, 186 | UP: 38, 187 | RIGHT: 39, 188 | DOWN: 40, 189 | DELETE: 46, 190 | HOME: 36, 191 | END: 35, 192 | PAGEUP: 33, 193 | PAGEDOWN: 34, 194 | INSERT: 45, 195 | ZERO: 48, 196 | ONE: 49, 197 | TWO: 50, 198 | A: 65, 199 | L: 76, 200 | P: 80, 201 | Q: 81, 202 | TILDA: 192 203 | }, 204 | 205 | //----------------------------------------------------------------------------- 206 | 207 | Runner: { 208 | 209 | initialize: function(id, game, cfg) { 210 | this.cfg = Object.extend( 211 | game.Defaults || {}, cfg || {}); // use game defaults (if any) and 212 | // extend with custom cfg (if any) 213 | this.fps = this.cfg.fps || 20; 214 | this.interval = 1000.0 / this.fps; 215 | this.canvas = document.getElementById(id); 216 | this.width = this.cfg.width || this.canvas.offsetWidth; 217 | this.height = this.cfg.height || this.canvas.offsetHeight; 218 | this.front = this.canvas; 219 | this.front.width = this.width; 220 | this.front.height = this.height; 221 | this.back = Game.createCanvas(); 222 | this.back.width = this.width; 223 | this.back.height = this.height; 224 | this.front2d = this.front.getContext('2d'); 225 | this.back2d = this.back.getContext('2d'); 226 | this.addEvents(); 227 | this.resetStats(); 228 | 229 | this.game = Object.construct( 230 | game, this, this.cfg); // finally construct the game object itself 231 | }, 232 | 233 | start: function() { // game instance should call runner.start() when its 234 | // finished initializing and is ready to start the game 235 | // loop 236 | this.lastFrame = Game.timestamp(); 237 | this.timer = setInterval(this.loop.bind(this), this.interval); 238 | }, 239 | 240 | stop: function() { 241 | clearInterval(this.timer); 242 | }, 243 | 244 | loop: 
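// Main fixed-interval game loop: compute dt since the last frame, update the game, redraw, and record timing stats.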
function() { 245 | var start = Game.timestamp(); 246 | this.update((start - this.lastFrame) / 1000.0); // send dt as seconds 247 | var middle = Game.timestamp(); 248 | this.draw(); 249 | var end = Game.timestamp(); 250 | this.updateStats(middle - start, end - middle); 251 | this.lastFrame = start; 252 | }, 253 | 254 | update: function(dt) { 255 | this.game.update(dt); 256 | }, 257 | 258 | draw: function() { 259 | this.back2d.clearRect(0, 0, this.width, this.height); 260 | this.game.draw(this.back2d); 261 | this.drawStats(this.back2d); 262 | this.front2d.clearRect(0, 0, this.width, this.height); 263 | this.front2d.drawImage(this.back, 0, 0); 264 | }, 265 | 266 | resetStats: function() { 267 | this.stats = { 268 | count: 0, 269 | fps: 0, 270 | update: 0, 271 | draw: 0, 272 | frame: 0 // update + draw 273 | }; 274 | }, 275 | 276 | updateStats: function(update, draw) { 277 | if (this.cfg.stats) { 278 | this.stats.update = Math.max(1, update); 279 | this.stats.draw = Math.max(1, draw); 280 | this.stats.frame = this.stats.update + this.stats.draw; 281 | this.stats.count = 282 | this.stats.count == this.fps ? 0 : this.stats.count + 1; 283 | this.stats.fps = Math.min(this.fps, 1000 / this.stats.frame); 284 | } 285 | }, 286 | 287 | drawStats: function(ctx) { 288 | if (this.cfg.stats) { 289 | ctx.fillText( 290 | 'frame: ' + this.stats.count, this.width - 100, this.height - 60); 291 | ctx.fillText( 292 | 'fps: ' + this.stats.fps, this.width - 100, this.height - 50); 293 | ctx.fillText( 294 | 'update: ' + this.stats.update + 'ms', this.width - 100, 295 | this.height - 40); 296 | ctx.fillText( 297 | 'draw: ' + this.stats.draw + 'ms', this.width - 100, 298 | this.height - 30); 299 | } 300 | }, 301 | 302 | addEvents: function() { 303 | Game.addEvent(document, 'keydown', this.onkeydown.bind(this)); 304 | Game.addEvent(document, 'keyup', this.onkeyup.bind(this)); 305 | }, 306 | 307 | onkeydown: function(ev) { 308 | if (this.game.onkeydown) this.game.onkeydown(ev.keyCode); 309 | }, 310 | onkeyup: function(ev) { 311 | if (this.game.onkeyup) this.game.onkeyup(ev.keyCode); 312 | }, 313 | 314 | hideCursor: function() { 315 | this.canvas.style.cursor = 'none'; 316 | }, 317 | showCursor: function() { 318 | this.canvas.style.cursor = 'auto'; 319 | }, 320 | 321 | alert: function(msg) { 322 | this.stop(); // alert blocks thread, so need to stop game loop in order 323 | // to avoid sending huge dt values to next update 324 | result = window.alert(msg); 325 | this.start(); 326 | return result; 327 | }, 328 | 329 | confirm: function(msg) { 330 | this.stop(); // alert blocks thread, so need to stop game loop in order 331 | // to avoid sending huge dt values to next update 332 | result = window.confirm(msg); 333 | this.start(); 334 | return result; 335 | } 336 | 337 | //------------------------------------------------------------------------- 338 | 339 | } // Game.Runner 340 | } // Game 341 | -------------------------------------------------------------------------------- /pong-server/static/images/press1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/pong-server/static/images/press1.png -------------------------------------------------------------------------------- /pong-server/static/images/press2.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/pong-server/static/images/press2.png -------------------------------------------------------------------------------- /pong-server/static/images/winner.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/pong-server/static/images/winner.png -------------------------------------------------------------------------------- /pong-server/static/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Pong! 5 | 6 | 7 | 8 | 9 | 10 | 11 | 49 | 50 | 51 |
52 | Sorry, this example cannot be run because your browser does not support the <canvas> element 53 |
54 |
55 | 56 | 57 | 58 | 81 | 82 | 83 | 84 | -------------------------------------------------------------------------------- /pong-server/static/pong.css: -------------------------------------------------------------------------------- 1 | body { background-color: black; color: #AAA; font-size: 12pt; padding: 1em; } 2 | 3 | #unsupported { border: 1px solid yellow; color: black; background-color: #FFFFAD; padding: 2em; margin: 1em; display: inline-block; } 4 | 5 | #sidebar { width: 18em; height: 40em; float: left; font-size: 0.825em; background-color: #333; border: 1px solid white; padding: 1em; } 6 | #sidebar h2 { color: white; text-align: center; margin: 0; } 7 | #sidebar .parts { padding-left: 1em; list-style-type: none; margin-bottom: 2em; text-align: right; } 8 | #sidebar .parts li a { color: white; text-decoration: none; } 9 | #sidebar .parts li a:visited { color: white; } 10 | #sidebar .parts li a:hover { color: white; text-decoration: underline; } 11 | #sidebar .parts li a.selected { color: #F08010; } 12 | #sidebar .parts li a i { color: #AAA; } 13 | #sidebar .parts li a.selected i { color: #F08010; } 14 | #sidebar .settings { line-height: 1.2em; height: 1.2em; text-align: right; } 15 | #sidebar .settings.size { } 16 | #sidebar .settings.speed { margin-bottom: 1em; } 17 | #sidebar .settings label { vertical-align: middle; } 18 | #sidebar .settings input { vertical-align: middle; } 19 | #sidebar .settings select { vertical-align: middle; } 20 | #sidebar .description { margin-bottom: 2em; } 21 | #sidebar .description b { font-weight: normal; color: #FFF; } 22 | 23 | 24 | @media screen and (min-width: 0px) { 25 | #sidebar { display: none; } 26 | #game { display: block; width: 480px; height: 360px; margin: 0 auto; } 27 | } 28 | 29 | @media screen and (min-width: 800px) { 30 | #game { width: 640px; height: 480px; } 31 | } 32 | 33 | @media screen and (min-width: 1000px) { 34 | #sidebar { display: block; } 35 | #game { margin-left: 18em; } 36 | } 37 | 38 | @media screen and (min-width: 1200px) { 39 | #game { width: 800px; height: 600px; } 40 | } 41 | 42 | @media screen and (min-width: 1600px) { 43 | #game { width: 1024px; height: 768px; } 44 | } 45 | -------------------------------------------------------------------------------- /pong-server/static/pong.js: -------------------------------------------------------------------------------- 1 | //============================================================================= 2 | // PONG 3 | //============================================================================= 4 | 5 | Pong = { 6 | 7 | Defaults: { 8 | width: 640, // logical canvas width (browser will scale to physical canvas 9 | // size - which is controlled by @media css queries) 10 | height: 480, // logical canvas height (ditto) 11 | wallWidth: 12, 12 | paddleWidth: 12, 13 | paddleHeight: 60, 14 | paddleSpeed: 2, // should be able to cross court vertically in 2 seconds 15 | ballSpeed: 4, // should be able to cross court horizontally in 4 seconds, 16 | // at starting speed ... 17 | ballAccel: 8, // ... 
but accelerate as time passes 18 | ballRadius: 5, 19 | sound: true 20 | }, 21 | 22 | Colors: { 23 | walls: 'white', 24 | ball: 'white', 25 | score: 'white', 26 | footprint: '#333', 27 | predictionGuess: 'yellow', 28 | predictionExact: 'red' 29 | }, 30 | 31 | Images: ['images/press1.png', 'images/press2.png', 'images/winner.png'], 32 | 33 | Levels: [ 34 | {aiReaction: 0.2, aiError: 40}, // 0: ai is losing by 8 35 | {aiReaction: 0.3, aiError: 50}, // 1: ai is losing by 7 36 | {aiReaction: 0.4, aiError: 60}, // 2: ai is losing by 6 37 | {aiReaction: 0.5, aiError: 70}, // 3: ai is losing by 5 38 | {aiReaction: 0.6, aiError: 80}, // 4: ai is losing by 4 39 | {aiReaction: 0.7, aiError: 90}, // 5: ai is losing by 3 40 | {aiReaction: 0.8, aiError: 100}, // 6: ai is losing by 2 41 | {aiReaction: 0.9, aiError: 110}, // 7: ai is losing by 1 42 | {aiReaction: 1.0, aiError: 120}, // 8: tie 43 | {aiReaction: 1.1, aiError: 130}, // 9: ai is winning by 1 44 | {aiReaction: 1.2, aiError: 140}, // 10: ai is winning by 2 45 | {aiReaction: 1.3, aiError: 150}, // 11: ai is winning by 3 46 | {aiReaction: 1.4, aiError: 160}, // 12: ai is winning by 4 47 | {aiReaction: 1.5, aiError: 170}, // 13: ai is winning by 5 48 | {aiReaction: 1.6, aiError: 180}, // 14: ai is winning by 6 49 | {aiReaction: 1.7, aiError: 190}, // 15: ai is winning by 7 50 | {aiReaction: 1.8, aiError: 200} // 16: ai is winning by 8 51 | ], 52 | 53 | //----------------------------------------------------------------------------- 54 | 55 | initialize: function(runner, cfg) { 56 | Game.loadImages(Pong.Images, function(images) { 57 | this.cfg = cfg; 58 | this.runner = runner; 59 | this.width = runner.width; 60 | this.height = runner.height; 61 | this.images = images; 62 | this.playing = false; 63 | this.scores = [0, 0]; 64 | this.menu = Object.construct(Pong.Menu, this); 65 | this.court = Object.construct(Pong.Court, this); 66 | this.leftPaddle = Object.construct(Pong.Paddle, this); 67 | this.rightPaddle = Object.construct(Pong.Paddle, this, true); 68 | this.ball = Object.construct(Pong.Ball, this); 69 | this.sounds = Object.construct(Pong.Sounds, this); 70 | this.runner.start(); 71 | }.bind(this)); 72 | }, 73 | 74 | startDemo: function() { 75 | this.start(0); 76 | }, 77 | startSinglePlayer: function() { 78 | this.start(1); 79 | }, 80 | startDoublePlayer: function() { 81 | this.start(2); 82 | }, 83 | 84 | start: function(numPlayers) { 85 | if (!this.playing) { 86 | this.scores = [0, 0]; 87 | this.playing = true; 88 | this.leftPaddle.setAuto(numPlayers < 2, this.level(0)); 89 | this.rightPaddle.setAuto(false, this.level(1)); 90 | this.ball.reset(); 91 | this.runner.hideCursor(); 92 | } 93 | }, 94 | 95 | stop: function(ask) { 96 | if (this.playing) { 97 | if (!ask || this.runner.confirm('Abandon game in progress ?')) { 98 | this.playing = false; 99 | this.leftPaddle.setAuto(false); 100 | this.rightPaddle.setAuto(false); 101 | this.runner.showCursor(); 102 | } 103 | } 104 | }, 105 | 106 | level: function(playerNo) { 107 | return 8 + (this.scores[playerNo] - this.scores[playerNo ? 
0 : 1]); 108 | }, 109 | 110 | goal: function(playerNo) { 111 | this.sounds.goal(); 112 | this.scores[playerNo] += 1; 113 | if (this.scores[playerNo] == 9) { 114 | this.menu.declareWinner(playerNo); 115 | this.stop(); 116 | } else { 117 | this.ball.reset(playerNo); 118 | this.leftPaddle.setLevel(this.level(0)); 119 | this.rightPaddle.setLevel(this.level(1)); 120 | } 121 | }, 122 | 123 | update: function(dt) { 124 | this.leftPaddle.update(dt, this.ball); 125 | this.rightPaddle.update(dt, this.ball); 126 | if (this.playing) { 127 | var dx = this.ball.dx; 128 | var dy = this.ball.dy; 129 | this.ball.update(dt, this.leftPaddle, this.rightPaddle); 130 | if (this.ball.dx < 0 && dx > 0) 131 | this.sounds.ping(); 132 | else if (this.ball.dx > 0 && dx < 0) 133 | this.sounds.pong(); 134 | else if (this.ball.dy * dy < 0) 135 | this.sounds.wall(); 136 | 137 | if (this.ball.left > this.width) 138 | this.goal(0); 139 | else if (this.ball.right < 0) 140 | this.goal(1); 141 | } 142 | }, 143 | 144 | draw: function(ctx) { 145 | this.court.draw(ctx, this.scores[0], this.scores[1]); 146 | this.leftPaddle.draw(ctx); 147 | this.rightPaddle.draw(ctx); 148 | if (this.playing) 149 | this.ball.draw(ctx); 150 | else 151 | this.menu.draw(ctx); 152 | }, 153 | 154 | onkeydown: function(keyCode) { 155 | switch (keyCode) { 156 | case Game.KEY.ONE: 157 | this.startSinglePlayer(); 158 | break; 159 | case Game.KEY.TWO: 160 | this.startDoublePlayer(); 161 | break; 162 | case Game.KEY.ESC: 163 | this.stop(true); 164 | break; 165 | case Game.KEY.Q: 166 | if (!this.leftPaddle.auto){ 167 | this.leftPaddle.moveUp(); 168 | // console.log("up", this.leftPaddle.y, this.ball.x, this.ball.y, 169 | // this.ball.dx, this.ball.dy,this.ball.x_prev, this.ball.y_prev) 170 | var data1 = JSON.stringify({"label": "up", 171 | "leftPaddle_y": this.leftPaddle.y, 172 | "ball_x": this.ball.x, 173 | "ball_y": this.ball.y, 174 | "ball_dx": this.ball.dx, 175 | "ball_dy": this.ball.dy, 176 | "ball_x_prev": this.ball.x_prev, 177 | "ball_y_prev": this.ball.y_prev}); 178 | console.log('DATA 1') 179 | console.log(data1) 180 | fetch('http://{{ ip_addr }}:8080/', { 181 | method: 'POST', 182 | mode: "cors", 183 | redirect: 'follow', 184 | headers: new Headers({'Content-Type': 'application/json', 185 | 'Access-Control-Allow-Origin': '*'}), 186 | body: data1 187 | }); 188 | } 189 | break; 190 | case Game.KEY.A: 191 | if (!this.leftPaddle.auto) { 192 | this.leftPaddle.moveDown(); 193 | console.log("down", this.leftPaddle.y, this.ball.x, this.ball.y, 194 | this.ball.dx, this.ball.dy,this.ball.x_prev, this.ball.y_prev) 195 | var data1 = JSON.stringify({"label": "down", 196 | "leftPaddle_y": this.leftPaddle.y, 197 | "ball_x": this.ball.x, 198 | "ball_y": this.ball.y, 199 | "ball_dx": this.ball.dx, 200 | "ball_dy": this.ball.dy, 201 | "ball_x_prev": this.ball.x_prev, 202 | "ball_y_prev": this.ball.y_prev}); 203 | fetch('http://{{ ip_addr }}:8080/', { 204 | method: 'POST', 205 | mode: "cors", 206 | redirect: 'follow', 207 | headers: new Headers({'Content-Type': 'application/json', 208 | 'Access-Control-Allow-Origin': '*'}), 209 | body: data1 210 | }); 211 | } 212 | break; 213 | case Game.KEY.P: 214 | if (!this.rightPaddle.auto) { 215 | this.rightPaddle.moveUp(); 216 | } 217 | break; 218 | case Game.KEY.UP: 219 | if (!this.rightPaddle.auto) { 220 | this.rightPaddle.moveUp(); 221 | 222 | } 223 | break; 224 | case Game.KEY.L: 225 | if (!this.rightPaddle.auto) { 226 | this.rightPaddle.moveDown(); 227 | 228 | } 229 | break; 230 | case Game.KEY.DOWN: 231 | if 
(!this.rightPaddle.auto) { 232 | this.rightPaddle.moveDown(); 233 | } 234 | break; 235 | } 236 | }, 237 | 238 | onkeyup: function(keyCode) { 239 | switch (keyCode) { 240 | case Game.KEY.Q: 241 | if (!this.leftPaddle.auto){ 242 | this.leftPaddle.stopMovingUp(); 243 | console.log("stop", this.leftPaddle.y, this.ball.x, this.ball.y, 244 | this.ball.dx, this.ball.dy,this.ball.x_prev, this.ball.y_prev) 245 | var data1 = JSON.stringify({"label": "stop", 246 | "leftPaddle_y": this.leftPaddle.y, 247 | "ball_x": this.ball.x, 248 | "ball_y": this.ball.y, 249 | "ball_dx": this.ball.dx, 250 | "ball_dy": this.ball.dy, 251 | "ball_x_prev": this.ball.x_prev, 252 | "ball_y_prev": this.ball.y_prev}); 253 | fetch('http://{{ ip_addr }}:8080/', { 254 | method: 'POST', 255 | mode: "cors", 256 | redirect: 'follow', 257 | headers: new Headers({'Content-Type': 'application/json', 258 | 'Access-Control-Allow-Origin': '*'}), 259 | body: data1 260 | }); 261 | } 262 | break; 263 | case Game.KEY.A: 264 | if (!this.leftPaddle.auto){ 265 | this.leftPaddle.stopMovingDown(); 266 | console.log("stop", this.leftPaddle.y, this.ball.x, this.ball.y, 267 | this.ball.dx, this.ball.dy,this.ball.x_prev, this.ball.y_prev) 268 | var data1 = JSON.stringify({"label": "stop", 269 | "leftPaddle_y": this.leftPaddle.y, 270 | "ball_x": this.ball.x, 271 | "ball_y": this.ball.y, 272 | "ball_dx": this.ball.dx, 273 | "ball_dy": this.ball.dy, 274 | "ball_x_prev": this.ball.x_prev, 275 | "ball_y_prev": this.ball.y_prev}); 276 | fetch('http://{{ ip_addr }}:8080/', { 277 | method: 'POST', 278 | mode: "cors", 279 | redirect: 'follow', 280 | headers: new Headers({'Content-Type': 'application/json', 281 | 'Access-Control-Allow-Origin': '*'}), 282 | body: data1 283 | }); 284 | } 285 | break; 286 | case Game.KEY.P: 287 | if (!this.rightPaddle.auto){ 288 | this.rightPaddle.stopMovingUp(); 289 | } 290 | break; 291 | case Game.KEY.UP: 292 | if (!this.rightPaddle.auto){ 293 | this.rightPaddle.stopMovingUp(); 294 | // console.log("stop", this.ball.x, this.ball.y, this.ball.accel, this.ball.speed, this.rightPaddle.x, this.rightPaddle.y); 295 | } 296 | break; 297 | case Game.KEY.L: 298 | if (!this.rightPaddle.auto){ 299 | this.rightPaddle.stopMovingDown(); 300 | } 301 | break; 302 | case Game.KEY.DOWN: 303 | if (!this.rightPaddle.auto){ 304 | this.rightPaddle.stopMovingDown(); 305 | // console.log("stop", this.ball.x, this.ball.y, this.ball.accel, this.ball.speed, this.rightPaddle.x, this.rightPaddle.y); 306 | } 307 | break; 308 | } 309 | }, 310 | 311 | showStats: function(on) { 312 | this.cfg.stats = on; 313 | }, 314 | showFootprints: function(on) { 315 | this.cfg.footprints = on; 316 | this.ball.footprints = []; 317 | }, 318 | showPredictions: function(on) { 319 | this.cfg.predictions = on; 320 | }, 321 | enableSound: function(on) { 322 | this.cfg.sound = on; 323 | }, 324 | 325 | //============================================================================= 326 | // MENU 327 | //============================================================================= 328 | 329 | Menu: { 330 | 331 | initialize: function(pong) { 332 | var press1 = pong.images['images/press1.png']; 333 | var press2 = pong.images['images/press2.png']; 334 | var winner = pong.images['images/winner.png']; 335 | this.press1 = {image: press1, x: 10, y: pong.cfg.wallWidth}; 336 | this.press2 = { 337 | image: press2, 338 | x: (pong.width - press2.width - 10), 339 | y: pong.cfg.wallWidth 340 | }; 341 | this.winner1 = { 342 | image: winner, 343 | x: (pong.width / 2) - winner.width - 
pong.cfg.wallWidth, 344 | y: 6 * pong.cfg.wallWidth 345 | }; 346 | this.winner2 = { 347 | image: winner, 348 | x: (pong.width / 2) + pong.cfg.wallWidth, 349 | y: 6 * pong.cfg.wallWidth 350 | }; 351 | }, 352 | 353 | declareWinner: function(playerNo) { 354 | this.winner = playerNo; 355 | }, 356 | 357 | draw: function(ctx) { 358 | // ctx.drawImage(this.press1.image, this.press1.x, this.press1.y); 359 | // ctx.drawImage(this.press2.image, this.press2.x, this.press2.y); 360 | if (this.winner == 0) 361 | ctx.drawImage(this.winner1.image, this.winner1.x, this.winner1.y); 362 | else if (this.winner == 1) 363 | ctx.drawImage(this.winner2.image, this.winner2.x, this.winner2.y); 364 | } 365 | 366 | }, 367 | 368 | //============================================================================= 369 | // SOUNDS 370 | //============================================================================= 371 | 372 | Sounds: { 373 | 374 | initialize: function(pong) { 375 | this.game = pong; 376 | this.supported = Game.ua.hasAudio; 377 | if (this.supported) { 378 | this.files = { 379 | ping: Game.createAudio('sounds/ping.wav'), 380 | pong: Game.createAudio('sounds/pong.wav'), 381 | wall: Game.createAudio('sounds/wall.wav'), 382 | goal: Game.createAudio('sounds/goal.wav') 383 | }; 384 | } 385 | }, 386 | 387 | play: function(name) { 388 | if (this.supported && this.game.cfg.sound && this.files[name]) 389 | this.files[name].play(); 390 | }, 391 | 392 | ping: function() { 393 | this.play('ping'); 394 | }, 395 | pong: function() { 396 | this.play('pong'); 397 | }, 398 | wall: function() { /*this.play('wall');*/ }, 399 | goal: function() { /*this.play('goal');*/ } 400 | 401 | }, 402 | 403 | //============================================================================= 404 | // COURT 405 | //============================================================================= 406 | 407 | Court: { 408 | 409 | initialize: function(pong) { 410 | var w = pong.width; 411 | var h = pong.height; 412 | var ww = pong.cfg.wallWidth; 413 | 414 | this.ww = ww; 415 | this.walls = []; 416 | this.walls.push({x: 0, y: 0, width: w, height: ww}); 417 | this.walls.push({x: 0, y: h - ww, width: w, height: ww}); 418 | var nMax = (h / (ww * 2)); 419 | for (var n = 0; n < nMax; n++) { // draw dashed halfway line 420 | this.walls.push({ 421 | x: (w / 2) - (ww / 2), 422 | y: (ww / 2) + (ww * 2 * n), 423 | width: ww, 424 | height: ww 425 | }); 426 | } 427 | 428 | var sw = 3 * ww; 429 | var sh = 4 * ww; 430 | this.score1 = {x: 0.5 + (w / 2) - 1.5 * ww - sw, y: 2 * ww, w: sw, h: sh}; 431 | this.score2 = {x: 0.5 + (w / 2) + 1.5 * ww, y: 2 * ww, w: sw, h: sh}; 432 | }, 433 | 434 | draw: function(ctx, scorePlayer1, scorePlayer2) { 435 | ctx.fillStyle = Pong.Colors.walls; 436 | for (var n = 0; n < this.walls.length; n++) 437 | ctx.fillRect( 438 | this.walls[n].x, this.walls[n].y, this.walls[n].width, 439 | this.walls[n].height); 440 | this.drawDigit( 441 | ctx, scorePlayer1, this.score1.x, this.score1.y, this.score1.w, 442 | this.score1.h); 443 | this.drawDigit( 444 | ctx, scorePlayer2, this.score2.x, this.score2.y, this.score2.w, 445 | this.score2.h); 446 | }, 447 | 448 | drawDigit: function(ctx, n, x, y, w, h) { 449 | ctx.fillStyle = Pong.Colors.score; 450 | var dw = dh = this.ww * 4 / 5; 451 | var blocks = Pong.Court.DIGITS[n]; 452 | if (blocks[0]) ctx.fillRect(x, y, w, dh); 453 | if (blocks[1]) ctx.fillRect(x, y, dw, h / 2); 454 | if (blocks[2]) ctx.fillRect(x + w - dw, y, dw, h / 2); 455 | if (blocks[3]) ctx.fillRect(x, y + h / 2 - dh / 2, w, dh); 456 | 
if (blocks[4]) ctx.fillRect(x, y + h / 2, dw, h / 2); 457 | if (blocks[5]) ctx.fillRect(x + w - dw, y + h / 2, dw, h / 2); 458 | if (blocks[6]) ctx.fillRect(x, y + h - dh, w, dh); 459 | }, 460 | 461 | DIGITS: [ 462 | [1, 1, 1, 0, 1, 1, 1], // 0 463 | [0, 0, 1, 0, 0, 1, 0], // 1 464 | [1, 0, 1, 1, 1, 0, 1], // 2 465 | [1, 0, 1, 1, 0, 1, 1], // 3 466 | [0, 1, 1, 1, 0, 1, 0], // 4 467 | [1, 1, 0, 1, 0, 1, 1], // 5 468 | [1, 1, 0, 1, 1, 1, 1], // 6 469 | [1, 0, 1, 0, 0, 1, 0], // 7 470 | [1, 1, 1, 1, 1, 1, 1], // 8 471 | [1, 1, 1, 1, 0, 1, 0] // 9 472 | ] 473 | 474 | }, 475 | 476 | //============================================================================= 477 | // PADDLE 478 | //============================================================================= 479 | 480 | Paddle: { 481 | 482 | initialize: function(pong, rhs) { 483 | this.pong = pong; 484 | this.width = pong.cfg.paddleWidth; 485 | this.height = pong.cfg.paddleHeight; 486 | this.minY = pong.cfg.wallWidth; 487 | this.maxY = pong.height - pong.cfg.wallWidth - this.height; 488 | this.speed = (this.maxY - this.minY) / pong.cfg.paddleSpeed; 489 | this.setpos( 490 | rhs ? pong.width - this.width : 0, 491 | this.minY + (this.maxY - this.minY) / 2); 492 | this.setdir(0); 493 | }, 494 | 495 | setpos: function(x, y) { 496 | this.x = x; 497 | this.y = y; 498 | this.left = this.x; 499 | this.right = this.left + this.width; 500 | this.top = this.y; 501 | this.bottom = this.y + this.height; 502 | }, 503 | 504 | setdir: function(dy) { 505 | this.up = (dy < 0 ? -dy : 0); 506 | this.down = (dy > 0 ? dy : 0); 507 | }, 508 | 509 | setAuto: function(on, level) { 510 | if (on && !this.auto) { 511 | this.auto = true; 512 | this.setLevel(level); 513 | } else if (!on && this.auto) { 514 | this.auto = false; 515 | this.setdir(0); 516 | } 517 | }, 518 | 519 | setLevel: function(level) { 520 | if (this.auto) this.level = Pong.Levels[level]; 521 | }, 522 | 523 | update: function(dt, ball) { 524 | if (this.auto) this.ai(dt, ball); 525 | 526 | var amount = this.down - this.up; 527 | if (amount != 0) { 528 | var y = this.y + (amount * dt * this.speed); 529 | if (y < this.minY) 530 | y = this.minY; 531 | else if (y > this.maxY) 532 | y = this.maxY; 533 | this.setpos(this.x, y); 534 | } 535 | }, 536 | 537 | ai: function(dt, ball) { 538 | 539 | // var features = [0.0, 1.0, 2.0, 2.0, 1.0, 1.0, 0.0, 0.0]; 540 | // var features = Array.apply(null, Array(8)).map(function(item, index) { 541 | // return Math.random() * 2 542 | // }); 543 | 544 | var features = [this.pong.leftPaddle.y, 545 | ball.x, ball.y, 546 | ball.dx, ball.dy, 547 | ball.x_prev, ball.y_prev].map(function(x) { 548 | return x / 500.0; 549 | }); 550 | 551 | // features = [for (i of features) i / 500]; 552 | 553 | 554 | var data = JSON.stringify({'input': features}); 555 | // console.log(data); 556 | var self = this; 557 | 558 | // var predict_url = `${window.location.href}/predict`; 559 | var predict_url = "predict"; 560 | // Query Clipper via the Pong server proxy 561 | fetch(predict_url, { 562 | method: 'POST', 563 | redirect: 'follow', 564 | headers: new Headers({'Content-Type': 'application/json'}), 565 | body: data 566 | }).then(function(response) { 567 | if (response.ok) { 568 | response.json().then(function(data) { 569 | if (data.output === 0) { 570 | // console.log('Staying still'); 571 | self.stopMovingUp(); 572 | self.stopMovingDown(); 573 | } else if (data.output === 1) { 574 | // console.log('Moving down'); 575 | self.stopMovingUp(); 576 | self.moveDown(); 577 | } else if (data.output === 
2) { 578 | // console.log('Moving up'); 579 | self.stopMovingDown(); 580 | self.moveUp(); 581 | } else { 582 | // console.log(data.output, 'Unrecognized action. Not moving.'); 583 | self.stopMovingUp(); 584 | self.stopMovingDown(); 585 | } 586 | }); 587 | } else { 588 | console.log(response.status, response.statusText); 589 | } 590 | }); 591 | }, 592 | 593 | predict: function(ball, dt) { 594 | // only re-predict if the ball changed direction, or its been some amount 595 | // of time since last prediction 596 | if (this.prediction && ((this.prediction.dx * ball.dx) > 0) && 597 | ((this.prediction.dy * ball.dy) > 0) && 598 | (this.prediction.since < this.level.aiReaction)) { 599 | this.prediction.since += dt; 600 | return; 601 | } 602 | 603 | var pt = Pong.Helper.ballIntercept( 604 | ball, 605 | {left: this.left, right: this.right, top: -10000, bottom: 10000}, 606 | ball.dx * 10, ball.dy * 10); 607 | if (pt) { 608 | var t = this.minY + ball.radius; 609 | var b = this.maxY + this.height - ball.radius; 610 | 611 | while ((pt.y < t) || (pt.y > b)) { 612 | if (pt.y < t) { 613 | pt.y = t + (t - pt.y); 614 | } else if (pt.y > b) { 615 | pt.y = t + (b - t) - (pt.y - b); 616 | } 617 | } 618 | this.prediction = pt; 619 | } else { 620 | this.prediction = null; 621 | } 622 | 623 | if (this.prediction) { 624 | this.prediction.since = 0; 625 | this.prediction.dx = ball.dx; 626 | this.prediction.dy = ball.dy; 627 | this.prediction.radius = ball.radius; 628 | this.prediction.exactX = this.prediction.x; 629 | this.prediction.exactY = this.prediction.y; 630 | var closeness = 631 | (ball.dx < 0 ? ball.x - this.right : this.left - ball.x) / 632 | this.pong.width; 633 | var error = this.level.aiError * closeness; 634 | this.prediction.y = this.prediction.y + Game.random(-error, error); 635 | } 636 | }, 637 | 638 | draw: function(ctx) { 639 | ctx.fillStyle = Pong.Colors.walls; 640 | ctx.fillRect(this.x, this.y, this.width, this.height); 641 | if (this.prediction && this.pong.cfg.predictions) { 642 | ctx.strokeStyle = Pong.Colors.predictionExact; 643 | ctx.strokeRect( 644 | this.prediction.x - this.prediction.radius, 645 | this.prediction.exactY - this.prediction.radius, 646 | this.prediction.radius * 2, this.prediction.radius * 2); 647 | ctx.strokeStyle = Pong.Colors.predictionGuess; 648 | ctx.strokeRect( 649 | this.prediction.x - this.prediction.radius, 650 | this.prediction.y - this.prediction.radius, 651 | this.prediction.radius * 2, this.prediction.radius * 2); 652 | } 653 | }, 654 | 655 | moveUp: function() { 656 | this.up = 1; 657 | }, 658 | moveDown: function() { 659 | this.down = 1; 660 | }, 661 | stopMovingUp: function() { 662 | this.up = 0; 663 | }, 664 | stopMovingDown: function() { 665 | this.down = 0; 666 | } 667 | 668 | }, 669 | 670 | //============================================================================= 671 | // BALL 672 | //============================================================================= 673 | 674 | Ball: { 675 | 676 | initialize: function(pong) { 677 | this.pong = pong; 678 | this.radius = pong.cfg.ballRadius; 679 | this.minX = this.radius; 680 | this.maxX = pong.width - this.radius; 681 | this.minY = pong.cfg.wallWidth + this.radius; 682 | this.maxY = pong.height - pong.cfg.wallWidth - this.radius; 683 | this.speed = (this.maxX - this.minX) / pong.cfg.ballSpeed; 684 | this.accel = pong.cfg.ballAccel; 685 | }, 686 | 687 | reset: function(playerNo) { 688 | this.footprints = []; 689 | this.setpos( 690 | playerNo == 1 ? 
this.maxX : this.minX, 691 | Game.random(this.minY, this.maxY)); 692 | this.setdir(playerNo == 1 ? -this.speed : this.speed, this.speed); 693 | }, 694 | 695 | setpos: function(x, y) { 696 | this.x_prev = this.x == null ? x : this.x; 697 | this.y_prev = this.y == null ? y : this.y; 698 | this.x = x; 699 | this.y = y; 700 | this.left = this.x - this.radius; 701 | this.top = this.y - this.radius; 702 | this.right = this.x + this.radius; 703 | this.bottom = this.y + this.radius; 704 | }, 705 | 706 | setdir: function(dx, dy) { 707 | this.dxChanged = 708 | ((this.dx < 0) != (dx < 0)); // did horizontal direction change 709 | this.dyChanged = 710 | ((this.dy < 0) != (dy < 0)); // did vertical direction change 711 | this.dx = dx; 712 | this.dy = dy; 713 | }, 714 | 715 | footprint: function() { 716 | if (this.pong.cfg.footprints) { 717 | if (!this.footprintCount || this.dxChanged || this.dyChanged) { 718 | this.footprints.push({x: this.x, y: this.y}); 719 | if (this.footprints.length > 50) this.footprints.shift(); 720 | this.footprintCount = 5; 721 | } else { 722 | this.footprintCount--; 723 | } 724 | } 725 | }, 726 | 727 | update: function(dt, leftPaddle, rightPaddle) { 728 | 729 | pos = Pong.Helper.accelerate( 730 | this.x, this.y, this.dx, this.dy, this.accel, dt); 731 | 732 | if ((pos.dy > 0) && (pos.y > this.maxY)) { 733 | pos.y = this.maxY; 734 | pos.dy = -pos.dy; 735 | } else if ((pos.dy < 0) && (pos.y < this.minY)) { 736 | pos.y = this.minY; 737 | pos.dy = -pos.dy; 738 | } 739 | 740 | var paddle = (pos.dx < 0) ? leftPaddle : rightPaddle; 741 | var pt = Pong.Helper.ballIntercept(this, paddle, pos.nx, pos.ny); 742 | 743 | if (pt) { 744 | switch (pt.d) { 745 | case 'left': 746 | case 'right': 747 | pos.x = pt.x; 748 | pos.dx = -pos.dx; 749 | break; 750 | case 'top': 751 | case 'bottom': 752 | pos.y = pt.y; 753 | pos.dy = -pos.dy; 754 | break; 755 | } 756 | 757 | // add/remove spin based on paddle direction 758 | if (paddle.up) 759 | pos.dy = pos.dy * (pos.dy < 0 ? 0.5 : 1.5); 760 | else if (paddle.down) 761 | pos.dy = pos.dy * (pos.dy > 0 ? 0.5 : 1.5); 762 | } 763 | 764 | this.setpos(pos.x, pos.y); 765 | this.setdir(pos.dx, pos.dy); 766 | this.footprint(); 767 | }, 768 | 769 | draw: function(ctx) { 770 | var w = h = this.radius * 2; 771 | ctx.fillStyle = Pong.Colors.ball; 772 | ctx.fillRect(this.x - this.radius, this.y - this.radius, w, h); 773 | if (this.pong.cfg.footprints) { 774 | var max = this.footprints.length; 775 | ctx.strokeStyle = Pong.Colors.footprint; 776 | for (var n = 0; n < max; n++) 777 | ctx.strokeRect( 778 | this.footprints[n].x - this.radius, 779 | this.footprints[n].y - this.radius, w, h); 780 | } 781 | } 782 | 783 | }, 784 | 785 | //============================================================================= 786 | // HELPER 787 | //============================================================================= 788 | 789 | Helper: { 790 | 791 | accelerate: function(x, y, dx, dy, accel, dt) { 792 | var x2 = x + (dt * dx) + (accel * dt * dt * 0.5); 793 | var y2 = y + (dt * dy) + (accel * dt * dt * 0.5); 794 | var dx2 = dx + (accel * dt) * (dx > 0 ? 1 : -1); 795 | var dy2 = dy + (accel * dt) * (dy > 0 ? 
1 : -1); 796 | return {nx: (x2 - x), ny: (y2 - y), x: x2, y: y2, dx: dx2, dy: dy2}; 797 | }, 798 | 799 | intercept: function(x1, y1, x2, y2, x3, y3, x4, y4, d) { 800 | var denom = ((y4 - y3) * (x2 - x1)) - ((x4 - x3) * (y2 - y1)); 801 | if (denom != 0) { 802 | var ua = (((x4 - x3) * (y1 - y3)) - ((y4 - y3) * (x1 - x3))) / denom; 803 | if ((ua >= 0) && (ua <= 1)) { 804 | var ub = (((x2 - x1) * (y1 - y3)) - ((y2 - y1) * (x1 - x3))) / denom; 805 | if ((ub >= 0) && (ub <= 1)) { 806 | var x = x1 + (ua * (x2 - x1)); 807 | var y = y1 + (ua * (y2 - y1)); 808 | return {x: x, y: y, d: d}; 809 | } 810 | } 811 | } 812 | return null; 813 | }, 814 | 815 | ballIntercept: function(ball, rect, nx, ny) { 816 | var pt; 817 | if (nx < 0) { 818 | pt = Pong.Helper.intercept( 819 | ball.x, ball.y, ball.x + nx, ball.y + ny, rect.right + ball.radius, 820 | rect.top - ball.radius, rect.right + ball.radius, 821 | rect.bottom + ball.radius, 'right'); 822 | } else if (nx > 0) { 823 | pt = Pong.Helper.intercept( 824 | ball.x, ball.y, ball.x + nx, ball.y + ny, rect.left - ball.radius, 825 | rect.top - ball.radius, rect.left - ball.radius, 826 | rect.bottom + ball.radius, 'left'); 827 | } 828 | if (!pt) { 829 | if (ny < 0) { 830 | pt = Pong.Helper.intercept( 831 | ball.x, ball.y, ball.x + nx, ball.y + ny, rect.left - ball.radius, 832 | rect.bottom + ball.radius, rect.right + ball.radius, 833 | rect.bottom + ball.radius, 'bottom'); 834 | } else if (ny > 0) { 835 | pt = Pong.Helper.intercept( 836 | ball.x, ball.y, ball.x + nx, ball.y + ny, rect.left - ball.radius, 837 | rect.top - ball.radius, rect.right + ball.radius, 838 | rect.top - ball.radius, 'top'); 839 | } 840 | } 841 | return pt; 842 | } 843 | 844 | } 845 | 846 | //============================================================================= 847 | 848 | }; // Pong 849 | -------------------------------------------------------------------------------- /pong-server/static/sounds/goal.wav: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/pong-server/static/sounds/goal.wav -------------------------------------------------------------------------------- /pong-server/static/sounds/ping.wav: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/pong-server/static/sounds/ping.wav -------------------------------------------------------------------------------- /pong-server/static/sounds/pong.wav: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/pong-server/static/sounds/pong.wav -------------------------------------------------------------------------------- /pong-server/static/sounds/wall.wav: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ucbrise/clipper-tutorials/fd6fe688968afc8907d007525b15ff7226e37259/pong-server/static/sounds/wall.wav --------------------------------------------------------------------------------
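
The Game.Runner object in game.js above drives everything with a fixed-interval, double-buffered loop: initialize() merges the game's Defaults with any custom cfg, start() schedules loop() with setInterval at 1000/fps milliseconds, and every tick calls the game's update(dt) with the elapsed time in seconds, renders into the hidden back canvas, and copies the result onto the visible front canvas in a single drawImage call so the player never sees a half-drawn frame. A game object only needs to supply Defaults, update(dt), draw(ctx), optional onkeydown/onkeyup handlers, and call runner.start() once it is ready, which is exactly what Pong.initialize does after its images finish loading. The sketch below is roughly the smallest game object that would run under this Runner; the BouncingDot name, the 'game' canvas id, and the use of the Object.construct helper (defined earlier in game.js) to build the runner are illustrative assumptions based on how the Pong object itself is constructed, not code from the repository.

// Minimal Game.Runner client (illustrative sketch, not part of the repo).
var BouncingDot = {
  Defaults: {fps: 30},                       // merged with custom cfg by Runner.initialize

  initialize: function(runner, cfg) {        // called via Object.construct(game, runner, cfg)
    this.runner = runner;
    this.x = 0;
    this.dx = 100;                           // horizontal speed in pixels per second
    this.runner.start();                     // a game must start the loop itself
  },

  update: function(dt) {                     // dt arrives in seconds
    this.x += this.dx * dt;
    if (this.x < 0 || this.x > this.runner.width) this.dx = -this.dx;
  },

  draw: function(ctx) {                      // ctx is the off-screen back-buffer context
    ctx.fillStyle = 'white';
    ctx.fillRect(this.x - 2, this.runner.height / 2 - 2, 4, 4);
  },

  onkeydown: function(keyCode) {             // reverse direction on SPACE
    if (keyCode == Game.KEY.SPACE) this.dx = -this.dx;
  }
};

// Assumed wiring, mirroring how Runner itself constructs the game object:
Game.ready(function() {
  Object.construct(Game.Runner, 'game', BouncingDot, {stats: false});
});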
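
pong.js above speaks two small JSON protocols. The Q and A key handlers POST labelled training events (a label of 'up', 'down', or 'stop' plus the paddle and ball state) to the data-collection server at http://{{ ip_addr }}:8080/, while Paddle.ai POSTs a seven-element feature vector, scaled by 1/500, to the relative predict route that the Pong server proxies through to Clipper, and decodes the integer output field of the reply as 0 = stay still, 1 = move down, 2 = move up. The helpers below restate that contract in one place; their names and standalone use are illustrative assumptions, while the field names, the scaling, and the action encoding are taken directly from the code above.

// Sketch of the request/response contract used by pong.js (helper names are assumptions).

// Training event, as sent by the Q/A key handlers to http://<ip_addr>:8080/.
function buildTrainingEvent(label, leftPaddle, ball) {
  return {
    label: label,                            // 'up', 'down', or 'stop'
    leftPaddle_y: leftPaddle.y,
    ball_x: ball.x,
    ball_y: ball.y,
    ball_dx: ball.dx,
    ball_dy: ball.dy,
    ball_x_prev: ball.x_prev,
    ball_y_prev: ball.y_prev
  };
}

// Prediction input, as built in Paddle.ai: the same seven fields, scaled by 1/500.
function buildPredictionInput(leftPaddle, ball) {
  return [leftPaddle.y, ball.x, ball.y, ball.dx, ball.dy, ball.x_prev, ball.y_prev]
      .map(function(v) { return v / 500.0; });
}

// Decode the model's reply and drive the paddle (0 = stay, 1 = down, 2 = up).
function applyPredictedAction(paddle, output) {
  paddle.stopMovingUp();
  paddle.stopMovingDown();
  if (output === 1) paddle.moveDown();
  else if (output === 2) paddle.moveUp();
}

// One round trip through the Pong server's predict proxy.
function requestAction(paddle, ball) {
  return fetch('predict', {
    method: 'POST',
    headers: new Headers({'Content-Type': 'application/json'}),
    body: JSON.stringify({input: buildPredictionInput(paddle, ball)})
  }).then(function(response) {
    if (!response.ok) throw new Error(response.status + ' ' + response.statusText);
    return response.json();
  }).then(function(data) {
    applyPredictedAction(paddle, data.output);
  });
}

The two payloads intentionally share the same seven fields: what the human player's keystrokes record as training data is exactly what the paddle sends, scaled, at prediction time, which is what lets a model deployed behind the predict route learn to imitate the player.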