├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── Dockerfile ├── LICENSE ├── README.md ├── build_and_push.sh ├── data ├── glass_bottle.jpg ├── paper.jpg └── plastic_bottle.jpg ├── download_and_push.sh ├── environment-cpu.yml ├── image_classification ├── nginx.conf ├── predictor.py ├── serve └── wsgi.py ├── images ├── IamAttachPolicy.png ├── IamAttachPolicy2.png ├── arcDiagram.png ├── archDiagram.jpg ├── c9Dashboard.png ├── c9OpenIDE.png ├── cfCreateStack.png ├── ec2AddStorage.png ├── ec2ConnectToInstance.png ├── ec2Console.png ├── ec2InstanceType.png ├── ec2KeyPair.png ├── ec2Launch.png ├── ec2LaunchedInstance.png ├── ec2List.png ├── ecr.png ├── lambdaCode.png ├── lambdaCreateFunction.png ├── lambdaCreateFunction2.png ├── lambdaCreateTestEvent.png ├── lambdaEnvVariable.png ├── lambdaIAM.png ├── lambdaTestEvent.png ├── lambdaTestSuceed.png ├── lambdaTimeout.png ├── sagemakerCreateNotebook.png ├── sagemakerCreateNotebook2.png ├── sagemakerCreateNotebook3.png ├── sagemakerCreateNotebook4.png ├── sagemakerCreateNotebook5.png ├── sagemakerEndpoint.png ├── sagemakerEndpointConf.png ├── sagemakerEndpointConf2.png ├── sagemakerEndpointConf3.png ├── sagemakerEndpointConf4.png ├── sagemakerEndpointSuccess.png ├── sagemakerModel.png ├── sagemakerModel2.png ├── sagemakerModel2_5.png └── sagemakerModel3.png ├── lambda_function.py ├── model └── model.tar.gz ├── resize.sh └── template └── builder_session_setup.json /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. 5 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *master* branch. 27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. 
You open an issue to discuss any significant work - we would hate for your time to be wasted. 29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 38 | 39 | GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute on. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start. 45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments. 51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public github issue. 55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 60 | 61 | We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes. 
62 | -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | # Build an image that can do inference in SageMaker 2 | # This is a Python 2 image that uses the nginx, gunicorn, flask stack 3 | 4 | FROM ubuntu:18.04 5 | 6 | MAINTAINER Amazon AI 7 | 8 | RUN apt-get -y update && apt-get install -y --no-install-recommends \ 9 | wget \ 10 | python \ 11 | nginx \ 12 | ca-certificates \ 13 | build-essential \ 14 | git \ 15 | curl \ 16 | python-qt4 &&\ 17 | rm -rf /var/lib/apt/lists/* 18 | 19 | RUN apt-get clean 20 | 21 | ENV PYTHON_VERSION=3.6 22 | 23 | # using minoconda3 24 | RUN curl -o ~/miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh && \ 25 | chmod +x ~/miniconda.sh && \ 26 | ~/miniconda.sh -b -p /opt/conda && \ 27 | rm ~/miniconda.sh && \ 28 | /opt/conda/bin/conda install conda-build 29 | 30 | # This is just to get the environment-cpu.yml I updated 31 | RUN git clone https://github.com/rumiio/fastai-rumi.git 32 | RUN cd fastai-rumi/ && /opt/conda/bin/conda env create -f environment-cpu.yml 33 | RUN /opt/conda/bin/conda clean -ya 34 | 35 | 36 | ENV PATH /opt/conda/envs/fastai-cpu/bin:$PATH 37 | ENV USER fastai 38 | 39 | # set working directory to /fastai 40 | WORKDIR /fastai 41 | 42 | CMD source activate fastai-cpu ~/.bashrc 43 | 44 | # Here we install the extra python packages to run the inference code 45 | RUN pip install flask gevent gunicorn && \ 46 | rm -rf /root/.cache 47 | 48 | RUN pip install http://download.pytorch.org/whl/cpu/torch-1.0.0-cp36-cp36m-linux_x86_64.whl 49 | RUN pip install fastai 50 | 51 | # Set some environment variables. PYTHONUNBUFFERED keeps Python from buffering our standard 52 | # output stream, which means that logs can be delivered to the user quickly. PYTHONDONTWRITEBYTECODE 53 | # keeps Python from writing the .pyc files which are unnecessary in this case. We also update 54 | # PATH so that the train and serve programs are found when the container is invoked. 55 | 56 | ENV PYTHONUNBUFFERED=TRUE 57 | ENV PYTHONDONTWRITEBYTECODE=TRUE 58 | ENV PATH="/opt/program:${PATH}" 59 | 60 | # Set up the program in the image 61 | COPY image_classification /opt/program 62 | 63 | RUN chmod 755 /opt/program 64 | WORKDIR /opt/program 65 | RUN chmod 755 serve 66 | 67 | RUN ln -s /fastai-rumi/fastai fastai -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of 4 | this software and associated documentation files (the "Software"), to deal in 5 | the Software without restriction, including without limitation the rights to 6 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of 7 | the Software, and to permit persons to whom the Software is furnished to do so. 8 | 9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 10 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS 11 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR 12 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER 13 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 14 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 15 | 16 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Bring your own training-completed model with SageMaker by building a custom container 2 | 3 | 4 | [Amazon SageMaker](https://aws.amazon.com/sagemaker/) provides every developer and data scientist with the ability to build, train, and deploy machine learning models quickly. Amazon SageMaker is a fully-managed service that covers the entire machine learning workflow to label and prepare your data, choose an algorithm, train the model, tune and optimize it for deployment, make predictions, and take action. Your models get to production faster with much less effort and lower cost. 5 | 6 | In this session, you will build a custom container which contains a training-completed PyTorch model, and deploy it as a SageMaker endpoint. Your model was trained elsewhere, perhaps on premises, and you only want to use SageMaker to host it. The session teaches how to do that. A PyTorch/fastai model is provided for learning purposes. Once you know how to deploy a custom container with SageMaker, you can use the same approach to deploy a model trained with any other machine learning framework. 7 | 8 | 9 | ## When should I build my own algorithm container? 10 | 11 | You may not need to create a container to bring your own code to Amazon SageMaker. When you are using a framework (such as Apache MXNet or TensorFlow) that has [direct support in SageMaker](https://sagemaker.readthedocs.io/en/stable/), you can simply supply the Python code that implements your algorithm using the SDK entry points for that framework. This set of frameworks is continually expanding, so we recommend that you check the current list if your algorithm is written in a common machine learning environment. 12 | 13 | Even if there is direct SDK support for your environment or framework, you may find it more effective to build your own container. If the code that implements your algorithm is quite complex on its own or you need special additions to the framework, building your own container may be the right choice. 14 | 15 | If there isn't direct SDK support for your environment, don't worry. You'll see in this walk-through that building your own container is quite straightforward. 16 | 17 | 18 | ## Example 19 | 20 | We will show how to package a simple PyTorch image classification model that classifies types of recyclable item. For simplicity, there are 3 categories of recyclable item: paper, glass bottle, and plastic bottle. The model predicts which of the 3 categories the submitted image belongs to. 21 | 22 | The example is purposefully fairly trivial since the point is to show the surrounding structure that you'll want to add to your own code to host it in SageMaker. 23 | 24 | The ideas shown here will work in any language or environment. You'll need to choose the right tools for your environment to serve HTTP requests for inference, but good HTTP environments are available in every language these days.
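To make that surrounding structure concrete, the sketch below shows the only two HTTP routes a SageMaker serving container has to answer. It is a simplified illustration, not the code used in this workshop (the working implementation lives in image_classification/predictor.py, described below), and the placeholder prediction stands in for your own framework-specific inference call.

```
import flask

app = flask.Flask(__name__)

@app.route('/ping', methods=['GET'])
def ping():
    # Health check: SageMaker calls this route and expects HTTP 200 once the
    # container is ready to serve traffic.
    return flask.Response(response='\n', status=200, mimetype='application/json')

@app.route('/invocations', methods=['POST'])
def invocations():
    # Inference: SageMaker forwards the client's payload (raw JPEG bytes in this
    # workshop) as the request body.
    image_bytes = flask.request.data
    predicted_class = 'paper'  # placeholder: run your model on image_bytes here
    return flask.jsonify({'predictions': {'class': predicted_class}})
```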
25 | 26 | ## Contents of the solution 27 | 28 | 29 | - **build_and_push.sh** is a script that uses the Dockerfile to build your container image and then pushes it to [Amazon Elastic Container Registry (ECR)](https://aws.amazon.com/ecr/). The argument you pass here will be used as your ECR repository name. 30 | - **Dockerfile** describes how to build your Docker container image, and specifies which libraries and frameworks are installed to host your model. If your model is trained with a framework other than PyTorch and [fastai](https://www.fast.ai/), you will update this file. 31 | - **lambda_function.py** contains the code that downloads a test image from an [Amazon S3](https://aws.amazon.com/s3/) bucket, and then invokes the SageMaker endpoint, sending the image for inference. You will paste this code into your [Lambda](https://aws.amazon.com/lambda/) function after the endpoint creation is done. 32 | - **data folder** contains test images. You will upload those images to your S3 bucket. 33 | - **model folder** contains the compressed PyTorch/fastai image classification model. You will upload the tar.gz file to your S3 bucket. 34 | - **image_classification** folder contains the following files that are going to be copied into the Docker image that hosts your model. 35 | - **nginx.conf** is the configuration file for the nginx front-end. No need to modify this file; use it as-is. 36 | - **serve** is the program that starts when the container is started for hosting. It simply launches the gunicorn server, which runs multiple instances of the Flask app defined in predictor.py. No need to modify this file; use it as-is. 37 | - **wsgi.py** is a small wrapper used to invoke the Flask app. No need to modify this file; use it as-is. 38 | - **predictor.py** is the program that actually implements the Flask web server and the image classification predictions. SageMaker uses two URLs in the container: 39 | - **/ping** will receive GET requests from the infrastructure. The program returns 200 if the container is up and accepting requests. 40 | - **/invocations** is the endpoint that receives the client's inference POST requests. The format of the request and the response depends on the algorithm. For this workshop, we will be receiving a JPEG image, and the model will classify which type of recycling item it is. The results are returned in JSON format. 41 | 42 | ![archDiagram](./images/arcDiagram.png) 43 | 44 | 45 | 46 | ## Prerequisites for the Workshop 47 | 48 | - Sign up for an AWS account 49 | 50 | 51 | ## Workshop Roadmap 52 | 53 | - [Run AWS CloudFormation template](#run-cloudformation-template) to create an [AWS Cloud9](https://aws.amazon.com/cloud9/) environment, an S3 bucket, an IAM role, and a test Lambda function. You will use the Cloud9 EC2 instance to build the Docker image and push it to ECR. You will configure the test Lambda function to call the SageMaker endpoint. 54 | - [Build a Docker container](#build-a-docker-container-on-cloud9-environment) on the Cloud9 environment. 55 | - [Create a model object](#create-a-model-object-on-sagemaker) on SageMaker. 56 | - [Create an endpoint configuration](#create-an-endpoint-configuration-on-sagemaker) on SageMaker. 57 | - [Create an endpoint](#create-an-endpoint) on SageMaker. 58 | - [Configure the test Lambda function](#configure-the-test-lambda-function) to call your endpoint.
59 | - [Conclusion](#conclusion) 60 | - [Final Step](#final-step) 61 | 62 | 63 | ## Run CloudFormation Template 64 | 65 | If you are comfortable provisioning an EC2 instance and creating a Lambda function, you can skip below and go ahead follow [this section](#optional-launch-ec2-instance) instead. 66 | 67 | If you want to focus your learning on SageMaker features and building a custom container and let [CloudFormation](https://aws.amazon.com/cloudformation/) handle setting up the environment and test function, select a region from the following. Click on the **Launch stack** link and it will bring up a CloudFormation console with the template loaded. The template will create a Cloud9 IDE, a S3 bucket, and a Lambda function. 68 | 69 | Region| Launch 70 | ------|----- 71 | US West (Oregon) | [![Launch Module 1 in us-west-2](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/images/cloudformation-launch-stack-button.png)](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=GPSTEC417-Builder-Session&templateURL=https://aws-workshop-content-rumi.s3-us-west-2.amazonaws.com/SageMaker-Custom-Container/builder_session_setup.json) 72 | US East (N. Virginia) | [![Launch Module 1 in us-east-1](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/images/cloudformation-launch-stack-button.png)](https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/new?stackName=GPSTEC417-Builder-Session&templateURL=https://aws-workshop-content-rumi.s3-us-west-2.amazonaws.com/SageMaker-Custom-Container/builder_session_setup.json) 73 | 74 | 75 |
76 | CloudFormation Launch Instructions (expand for details)

77 | On the CloudFormation console, leave the default settings and values as is, and keep clicking on the Next button until you get to the Review page. Check the acknowledge checkbox, and click on the Create stack button. 78 | 79 | It will take a few minutes for CloudFormation to complete provisioning of EC2 instance and other resources. 80 | 81 | ![cfCreateStack](./images/cfCreateStack.png) 82 | 83 |
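If you would rather watch the stack from code than from the console, a small boto3 sketch like the one below (assuming the default stack name used by the Launch stack links above, and an AWS CLI region matching the one you chose) waits for provisioning to finish and lists what was created:

```
import boto3

STACK_NAME = 'GPSTEC417-Builder-Session'  # the default name from the Launch stack links

cfn = boto3.client('cloudformation')

# Block until the stack reaches CREATE_COMPLETE (this raises an error if creation fails).
cfn.get_waiter('stack_create_complete').wait(StackName=STACK_NAME)

# List the provisioned resources: the Cloud9 environment, S3 bucket, IAM role, and Lambda function.
for resource in cfn.describe_stack_resources(StackName=STACK_NAME)['StackResources']:
    print(resource['LogicalResourceId'], resource['ResourceStatus'])
```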

84 | 85 | 86 | ## Build a Docker container on Cloud9 environment 87 | 88 | 1. Go to the Cloud9 console. Click on **Open IDE**. 89 | 90 | ![c9Dashboard](./images/c9Dashboard.png) 91 | 92 | 1. Clone the github repo by running the following command: 93 | 94 | ``` 95 | > git clone https://github.com/aws-samples/amazon-sagemaker-custom-container.git 96 | ``` 97 | 98 | ![c9OpenIDE](./images/c9OpenIDE.png) 99 | 100 | Before moving on, you want to increase the ESB volume size as building the Docker container for SageMaker deployment takes much space. You can accomplish that by [running resize.sh](https://docs.aws.amazon.com/cloud9/latest/user-guide/move-environment.html) script provided. 101 | 102 | ``` 103 | >cd amazon-sagemaker-custom-container 104 | >sh resize.sh 40 105 | ``` 106 | 107 | Run build_and_push.sh by running the following commands. This script will create an ECR repository, build the custom container image and push it to the repository. The argument of the script is the repository name. 108 | 109 | ``` 110 | >chmod +x build_and_push.sh 111 | >./build_and_push.sh image_classification_recycle 112 | ``` 113 | This step will take 15-20 minutes. 114 | 115 | 1. Run the following commands to copy the contents of data and model folders to your S3 bucket (the bucket has to be in the same region as the region you will be using SageMaker). The model folder contains the sample train-competed model to deploy. The data folder contains test images for making an inference on the model. 116 | 117 | ``` 118 | aws s3 cp ./data/glass_bottle.jpg s3://gpstec417-builder-session--/SageMaker_Custom_Container/data/glass_bottle.jpg 119 | aws s3 cp ./data/paper.jpg s3://gpstec417-builder-session--/SageMaker_Custom_Container/data/paper.jpg 120 | aws s3 cp ./data/plastic_bottle.jpg s3://gpstec417-builder-session--/SageMaker_Custom_Container/data/plastic_bottle.jpg 121 | aws s3 cp ./model/model.tar.gz s3://gpstec417-builder-session--/SageMaker_Custom_Container/model/model.tar.gz 122 | ``` 123 | 124 | 1. Go to ECR console and see the repo and image that were created by executing **build_and_push.sh** in the previous step. Copy the image URI. We will use this when we create the model object on SageMaker in the next step. 125 | 126 | .dkr.ecr.us-west-2.amazonaws.com/image_classification_recycle:latest 127 | 128 | ![ecr](./images/ecr.png) 129 | 130 | 131 | ## Create a Model Object on SageMaker 132 | 133 | 1. On the SageMaker console, go to **Models**, and click on **Create model** button. 134 | 135 | ![sagemakerModel](./images/sagemakerModel.png) 136 | 137 | ![sagemakerModel2](./images/sagemakerModel2.png) 138 | 139 | 1. Enter model name *image-classification-recycle*. Choose a SageMaker IAM role if you have it. If not, choose **Create a new role**. 140 | 141 | ![sagemakerModel2_5](./images/sagemakerModel2_5.png) 142 | 143 | Scroll down and enter the location of the model arcifacts (S3 location), and container host name which is the image URI you copied in the previous section. Leave everything else blank or as default value. After done entering those values, click on **Create model** button 144 | 145 | ![sagemakerModel3](./images/sagemakerModel3.png) 146 | 147 | 148 | 149 | ## Create an Endpoint Configuration on SageMaker 150 | 151 | 1. On the SageMaker console, go to **Endpoint Configurations**, and click on **Create endpoint configuration** button. 152 | 153 | ![sagemakerEndpointConf](./images/sagemakerEndpointConf.png) 154 | 155 | ![sagemakerEndpointConf2](./images/sagemakerEndpointConf2.png) 156 | 157 | 1. 
Enter the endpoint configuration name, *image-classification-recycle-conf*. Click on the **Add model** link. It will bring up a pop-up that lists available model objects. Select the one you created in the previous step. Then click on the **Create endpoint configuration** button at the bottom of the page. 158 | 159 | ![sagemakerEndpointConf3](./images/sagemakerEndpointConf3.png) 160 | 161 | 162 | 163 | ## Create an Endpoint 164 | 165 | 1. On the SageMaker console, go to **Endpoints**, and click on the **Create endpoint** button. 166 | 167 | 1. Enter the endpoint name, *image-classification-recycle*. Choose the **Use an existing endpoint configuration** option, and specify the endpoint configuration you created in the previous step. Click on the **Create endpoint** button at the bottom of the page. 168 | 169 | ![sagemakerEndpoint](./images/sagemakerEndpoint.png) 170 | 171 | This will take 5-10 minutes to complete. When you see the status *InService*, you are ready to test. 172 | 173 | ![sagemakerEndpointSuccess](./images/sagemakerEndpointSuccess.png) 174 | 175 | 176 | 177 | ## Configure the Test Lambda Function 178 | 179 | 1. Go to the Lambda console. Find the function called **Call_SageMaker_Endpoint_Image_Classification**. The CloudFormation template you ran earlier created this function. 180 | 181 | 1. The function was created with 2 environment variables, called **BUCKET_NAME** and **SAGEMAKER_ENDPOINT_NAME**. The value of each variable will be empty. Enter the S3 bucket name and SageMaker endpoint name here. 182 | 183 | **BUCKET_NAME** should be *gpstec417-builder-session-[region]-[your-account-id]*. Replace *region* with us-west-2 or us-east-1, and *your-account-id* with your own account ID. 184 | 185 | **SAGEMAKER_ENDPOINT_NAME** should be *image-classification-recycle*, or if you chose your own name, enter that name here. 186 | 187 | ![lambdaEnvVariable](./images/lambdaEnvVariable.png) 188 | 189 | Save the change. 190 | 191 | 1. Scroll up and click on the dropdown next to the **Test** button. Select **Configure test events**. 192 | 193 | ![lambdaCreateTestEvent](./images/lambdaCreateTestEvent.png) 194 | 195 | 1. It will bring up a **Configure test event** window. Enter *testEvent* as the event name. Replace the default event with the following JSON. Click on the **Create** button. 196 | 197 | ``` 198 | { 199 | "glass bottle image": "SageMaker_Custom_Container/data/glass_bottle.jpg", 200 | "plastic bottle image": "SageMaker_Custom_Container/data/plastic_bottle.jpg", 201 | "paper image": "SageMaker_Custom_Container/data/paper.jpg" 202 | } 203 | ``` 204 | 205 | These are the S3 keys of the test JPEG images to send to the SageMaker endpoint. We will be sending one image at a time to see if the deployed model will respond with a prediction. 206 | 207 | ![lambdaTestEvent](./images/lambdaTestEvent.png) 208 | 209 | Click on the **Save** button to save the configuration update. 210 | 211 | 1. Finally, click on the **Test** button and see what the execution returns. If you see the green box with the *Execution result: succeeded* message, it means the endpoint backed by your custom container is successfully hosting the model. 212 | 213 | ![lambdaTestSuceed](./images/lambdaTestSuceed.png) 214 | 215 | **Congratulations! You have completed the session.** If your Lambda function returned the **green** success message, move on to the [Conclusion](#conclusion). 216 | 217 | 218 | ## (Optional) Launch EC2 Instance 219 | 220 | **If you used CloudFormation to launch Cloud9 to build the Docker container, skip this section.** 221 | 222 |
223 | EC2 Instance Launch Instructions (expand for details)

224 | 225 | 1. Click on **EC2** from the list of all services by entering EC2 into the **Find services** box. This will bring you to the EC2 console homepage. 226 | 227 | 1. To launch a new EC2 instance, click on the **Launch instance** button. 228 | 229 | ![ec2Console](./images/ec2Console.png) 230 | 231 | 1. Look for the Deep Learning AMI (Amazon Linux) by typing *Deep Learning* in the search box. To choose the Deep Learning AMI, click on the blue **Select** button. 232 | 233 | ![ec2Launch](./images/ec2Launch.png) 234 | 235 | 1. If your account allows C5 instances, choose c5.4xlarge. If not, choose one of the m5 instances. Click on the **Next: Configure Instance Details** button. 236 | 237 | ![ec2InstanceType](./images/ec2InstanceType.png) 238 | 239 | 1. No change is required on **Step 3: Configure Instance Details**. Click the **Next: Add Storage** button. 240 | 241 | On the **Add Storage** page, make sure to change the storage size to 150 GiB. This is an important step, as building the Docker container will run out of space if you leave the default value of 75 GiB. 242 | 243 | Click on the **Review and Launch** button. 244 | 245 | ![ec2AddStorage](./images/ec2AddStorage.png) 246 | 247 | 1. On **Step 7: Review Instance Launch**, click on the **Launch** button. It will bring up a **Select key pair** window. Select **Choose existing key pair** if you already have one. Select **Create a new key pair** if you don't have one. 248 | 249 | ![ec2KeyPair](./images/ec2KeyPair.png) 250 | 251 | 1. It will take a few minutes before the instance is ready for use. Once the status shows *running*, look up the IP address under **IPv4 Public IP**. Use that IP address, along with the key pair, to SSH into the instance. 252 | 253 | ![ec2List](./images/ec2List.png) 254 | 255 |
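If you prefer to look this up from code, a small boto3 sketch like the one below (with a placeholder instance ID; substitute the one shown in the EC2 console after launch) waits until the instance is running and prints the public IP to use for SSH:

```
import boto3

INSTANCE_ID = 'i-0123456789abcdef0'  # placeholder; use your own instance ID

ec2 = boto3.client('ec2')

# Wait until the instance reaches the running state, then read its public IPv4 address.
ec2.get_waiter('instance_running').wait(InstanceIds=[INSTANCE_ID])
reservations = ec2.describe_instances(InstanceIds=[INSTANCE_ID])['Reservations']
print(reservations[0]['Instances'][0]['PublicIpAddress'])
```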

256 | 257 | 258 | ## (Optional) Create a Lambda Function to Test your Endpoint. 259 | 260 | **If you used CloudFormation to create the test Lambda function, skip this section.** 261 | 262 |
263 | Instructions To Create Lambda Function (expand for details)

264 | 265 | 1. Go to the Lambda console, and click on the **Create function** button. 266 | 267 | 1. Select the **Author from scratch** option, and enter the function name, *Call-SageMaker-Endpoint-Image-Class*. Choose **Python 3.6** for the Runtime. 268 | 269 | ![lambdaCreateFunction](./images/lambdaCreateFunction.png) 270 | 271 | For the execution role, choose **Create a new role with basic Lambda permissions**. Then click on the **Create function** button. 272 | 273 | ![lambdaCreateFunction2](./images/lambdaCreateFunction2.png) 274 | 275 | 1. After the function is created, scroll down to the **Execution role** section and click on the **View the Call-SageMaker-Endpoint-Image-Class-role-...** link under the **Existing role** dropdown. It will bring up the IAM console, where you will add one policy to the role. 276 | 277 | ![lambdaIAM](./images/lambdaIAM.png) 278 | 279 | On the Summary page, click on the **Attach policy** button. 280 | 281 | ![IamAttachPolicy](./images/IamAttachPolicy.png) 282 | 283 | Type **SageMakerFullAccess** in the search box. Select the checkbox once the policy name appears. Click on the **Attach policy** button at the bottom of the page. 284 | 285 | ![IamAttachPolicy2](./images/IamAttachPolicy2.png) 286 | 287 | 1. Copy the code from lambda_function.py and paste it into the code window. 288 | 289 | ![lambdaCode](./images/lambdaCode.png) 290 | 291 | 1. Create a **SAGEMAKER_ENDPOINT_NAME** environment variable and enter your SageMaker endpoint name. Create a **BUCKET_NAME** environment variable and enter your bucket name. These are the names lambda_function.py reads. 292 | 293 | ![lambdaEnvVariable](./images/lambdaEnvVariable.png) 294 | 295 | 1. Increase the timeout under Basic settings to 30 seconds. Click on **Save** in the upper right corner. 296 | 297 | ![lambdaTimeout](./images/lambdaTimeout.png) 298 | 299 |
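Whether you create the function by hand or let CloudFormation create it, you can also sanity-check the endpoint without Lambda. The sketch below is the same invoke_endpoint call that lambda_function.py makes, run directly from a terminal with AWS credentials; it assumes the endpoint name used in this walkthrough and a local copy of the repo's test images.

```
import boto3

ENDPOINT_NAME = 'image-classification-recycle'  # change if you named your endpoint differently

runtime = boto3.client('runtime.sagemaker')

# Send one of the repo's test JPEGs as raw bytes, just as the Lambda function does.
with open('data/paper.jpg', 'rb') as f:
    payload = f.read()

response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME,
                                   ContentType='application/x-image',
                                   Body=payload)
print(response['Body'].read().decode())  # e.g. {"predictions": {"class": "paper"}}
```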

300 | 301 | 302 | ## Conclusion 303 | 304 | What you learned in this session: 305 | - You learned how to build a Docker container to deploy your train-completed Pytorch model. You can use the same method to deploy a model trained in different machine learning framework. Just update [Dockerfile](Dockerfile) to install the framework of your choice. 306 | - You learned how to create a SageMaker model object that utilizes the custom container. 307 | - You learned how to deploy the model as a SageMaker endpoint. 308 | - You learned how to test the SageMaker endpoint from a Lambda function. 309 | 310 | 311 | ## Final Step 312 | 313 | Please do not forget: 314 | - To delete the SageMaker endpoint. 315 | - To delete the ECR repo and image. 316 | - To delete the CloudFormation template which deletes the objects provisioned by the template. 317 | 318 | -------------------------------------------------------------------------------- /build_and_push.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # This script shows how to build the Docker image and push it to ECR to be ready for use 4 | # by SageMaker. 5 | 6 | # The argument to this script is the image name. This will be used as the image on the local 7 | # machine and combined with the account and region to form the repository name for ECR. 8 | image=$1 9 | 10 | if [ "$image" == "" ] 11 | then 12 | echo "Usage: $0 " 13 | exit 1 14 | fi 15 | 16 | chmod +x image_classification/serve 17 | 18 | # Get the account number associated with the current IAM credentials 19 | account=$(aws sts get-caller-identity --query Account --output text) 20 | 21 | if [ $? -ne 0 ] 22 | then 23 | exit 255 24 | fi 25 | 26 | 27 | # Get the region defined in the current configuration (default to us-west-2 if none defined) 28 | region=$(aws configure get region) 29 | region=${region:-us-west-2} 30 | 31 | 32 | fullname="${account}.dkr.ecr.${region}.amazonaws.com/${image}:latest" 33 | 34 | # If the repository doesn't exist in ECR, create it. 35 | 36 | aws ecr describe-repositories --repository-names "${image}" > /dev/null 2>&1 37 | 38 | if [ $? -ne 0 ] 39 | then 40 | aws ecr create-repository --repository-name "${image}" > /dev/null 41 | fi 42 | 43 | # Get the login command from ECR and execute it directly 44 | $(aws ecr get-login --region ${region} --no-include-email) 45 | 46 | # Build the docker image locally with the image name and then push it to ECR 47 | # with the full name. 48 | 49 | docker build -t ${image} . 
50 | docker tag ${image} ${fullname} 51 | 52 | docker push ${fullname} 53 | 54 | docker save --output image_class.tar ${image} -------------------------------------------------------------------------------- /data/glass_bottle.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/data/glass_bottle.jpg -------------------------------------------------------------------------------- /data/paper.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/data/paper.jpg -------------------------------------------------------------------------------- /data/plastic_bottle.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/data/plastic_bottle.jpg -------------------------------------------------------------------------------- /download_and_push.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # This script shows how to download the Docker image and push it to ECR to be ready for use 4 | # by SageMaker. 5 | 6 | # The argument to this script is the image name. This will be used as the image on the local 7 | # machine and combined with the account and region to form the repository name for ECR. 8 | image=$1 9 | 10 | if [ "$image" == "" ] 11 | then 12 | echo "Usage: $0 " 13 | exit 1 14 | fi 15 | 16 | 17 | # Get the account number associated with the current IAM credentials 18 | account=$(aws sts get-caller-identity --query Account --output text) 19 | 20 | if [ $? -ne 0 ] 21 | then 22 | exit 255 23 | fi 24 | 25 | 26 | # Get the region defined in the current configuration (default to us-west-2 if none defined) 27 | region=$(aws configure get region) 28 | region=${region:-us-west-2} 29 | 30 | 31 | fullname="${account}.dkr.ecr.${region}.amazonaws.com/${image}:latest" 32 | 33 | # If the repository doesn't exist in ECR, create it. 34 | 35 | aws ecr describe-repositories --repository-names "${image}" > /dev/null 2>&1 36 | 37 | if [ $? -ne 0 ] 38 | then 39 | aws ecr create-repository --repository-name "${image}" > /dev/null 40 | fi 41 | 42 | # Get the login command from ECR and execute it directly 43 | $(aws ecr get-login --region ${region} --no-include-email) 44 | 45 | # Download the docker image to local, 46 | # tag the downloaded image to the repository created above, 47 | # and then push it to ECR. 48 | 49 | #Download the image that is available 50 | docker pull mr891gloyf.execute-api.us-west-2.amazonaws.com/image_classification_recycle:latest 51 | 52 | docker tag mr891gloyf.execute-api.us-west-2.amazonaws.com/image_classification_recycle:latest ${fullname} 53 | 54 | docker push ${fullname} 55 | 56 | -------------------------------------------------------------------------------- /environment-cpu.yml: -------------------------------------------------------------------------------- 1 | # use/edit this file only for working with fastai-0.7.x version 2 | # for fastai-1.0+ edit "setup.py" and use "pip install -e ." 
3 | # or "pip install -e .[dev]" for extra developer requirements 4 | name: fastai-cpu 5 | channels: 6 | - fastai 7 | - pytorch 8 | - defaults 9 | - peterjc123 10 | dependencies: 11 | - scipy 12 | - numpy 13 | - pillow 14 | - jpeg 15 | - spacy 16 | - zlib 17 | - freetype 18 | - libtiff 19 | - bleach 20 | - certifi 21 | - cffi 22 | - cycler 23 | - decorator 24 | - entrypoints 25 | - expat 26 | - html5lib 27 | - icu 28 | - defaults::intel-openmp 29 | - ipykernel 30 | - ipython 31 | - ipython_genutils 32 | - ipywidgets 33 | - jedi 34 | - jinja2 35 | - jsonschema 36 | - jupyter 37 | - jupyter_client 38 | - jupyter_console 39 | - jupyter_core 40 | - conda-forge::jupyter_contrib_nbextensions 41 | - libiconv 42 | - libpng 43 | - libsodium 44 | - libxml2 45 | - markupsafe 46 | - matplotlib 47 | - mistune 48 | - mkl 49 | - nbconvert 50 | - nbformat 51 | - notebook 52 | - numpy 53 | - olefile 54 | - openssl 55 | - pandas 56 | - pandocfilters 57 | - path.py 58 | - patsy 59 | - pcre 60 | - pexpect 61 | - pickleshare 62 | - pillow 63 | - pip 64 | - prompt_toolkit 65 | - pycparser 66 | - pygments 67 | - pyparsing 68 | - pyqt 69 | - python>=3.6.0 70 | - python-dateutil 71 | - pytz 72 | - pyzmq 73 | - qt 74 | - qtconsole 75 | - scipy 76 | - seaborn 77 | - setuptools 78 | - simplegeneric 79 | - sip 80 | - six 81 | - sqlite 82 | - statsmodels 83 | - testfixtures 84 | - testpath 85 | - tk 86 | - tornado<5 87 | - tqdm 88 | - traitlets 89 | - wcwidth 90 | - wheel 91 | - widgetsnbextension 92 | - xz 93 | - zeromq 94 | - pytorch<0.4 95 | - bcolz 96 | - prompt_toolkit 97 | #- pytest 98 | - pip: 99 | - torchvision==0.1.9 100 | - opencv-python 101 | - isoweek 102 | - pandas_summary 103 | - torchtext==0.2.3 104 | - graphviz 105 | - sklearn_pandas 106 | - feather-format 107 | - plotnine 108 | - kaggle-cli 109 | - ipywidgets 110 | - pdpbox<=0.1.0 111 | - treeinterpreter 112 | -------------------------------------------------------------------------------- /image_classification/nginx.conf: -------------------------------------------------------------------------------- 1 | worker_processes 1; 2 | daemon off; # Prevent forking 3 | 4 | 5 | pid /tmp/nginx.pid; 6 | error_log /var/log/nginx/error.log; 7 | 8 | events { 9 | # defaults 10 | } 11 | 12 | http { 13 | include /etc/nginx/mime.types; 14 | default_type application/octet-stream; 15 | access_log /var/log/nginx/access.log combined; 16 | 17 | upstream gunicorn { 18 | server unix:/tmp/gunicorn.sock; 19 | } 20 | 21 | server { 22 | listen 8080 deferred; 23 | client_max_body_size 5m; 24 | 25 | keepalive_timeout 5; 26 | 27 | location ~ ^/(ping|invocations) { 28 | proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; 29 | proxy_set_header Host $http_host; 30 | proxy_redirect off; 31 | proxy_pass http://gunicorn; 32 | } 33 | 34 | location / { 35 | return 404 "{}"; 36 | } 37 | } 38 | } 39 | -------------------------------------------------------------------------------- /image_classification/predictor.py: -------------------------------------------------------------------------------- 1 | # This is the file that implements a flask server to do inferences. It's the file that you will modify 2 | # to implement the prediction for your own algorithm. 
3 | 4 | from __future__ import print_function 5 | 6 | import os, sys, stat 7 | import json 8 | import shutil 9 | import flask 10 | from flask import Flask, jsonify 11 | import glob 12 | 13 | from fastai.imports import * 14 | from fastai.vision import * 15 | 16 | 17 | MODEL_PATH = '/opt/ml/' 18 | TMP_MODEL_PATH = '/tmp/ml/model' 19 | DATA_PATH = '/tmp/data' 20 | MODEL_NAME = '' 21 | 22 | IMG_FOR_INFERENCE = os.path.join(DATA_PATH, 'image_for_inference.jpg') 23 | 24 | # in this tmp folder, image for inference will be saved 25 | if not os.path.exists(DATA_PATH): 26 | os.makedirs(DATA_PATH, mode=0o755,exist_ok=True) 27 | 28 | # creating a model folder in tmp directry as opt/ml/model is read-only and 29 | # fastai's load_learner requires to be able to write. 30 | if not os.path.exists(TMP_MODEL_PATH): 31 | os.makedirs(TMP_MODEL_PATH, mode=0o755,exist_ok=True) 32 | #print(str(TMP_MODEL_PATH) + ' has been created') 33 | os.chmod(TMP_MODEL_PATH, stat.S_IRWXG) 34 | 35 | if os.path.exists(MODEL_PATH): 36 | model_file = glob.glob('/opt/ml/model/*.pkl')[0] 37 | path, MODEL_NAME = os.path.split(model_file) 38 | #print('MODEL_NAME holds: ' + str(MODEL_NAME)) 39 | shutil.copy(model_file, TMP_MODEL_PATH) 40 | 41 | def write_test_image(stream): 42 | with open(IMG_FOR_INFERENCE, "bw") as f: 43 | chunk_size = 4096 44 | while True: 45 | chunk = stream.read(chunk_size) 46 | if len(chunk) == 0: 47 | return 48 | f.write(chunk) 49 | 50 | 51 | # A singleton for holding the model. This simply loads the model and holds it. 52 | # It has a predict function that does a prediction based on the model and the input data. 53 | class ClassificationService(object): 54 | 55 | @classmethod 56 | def get_model(cls): 57 | """Get the model object for this instance.""" 58 | return load_learner(path=TMP_MODEL_PATH) #default model name of export.pkl 59 | 60 | @classmethod 61 | def predict(cls, input): 62 | """For the input, do the predictions and return them.""" 63 | 64 | learn = cls.get_model() 65 | return learn.predict(input) 66 | 67 | # The flask app for serving predictions 68 | app = flask.Flask(__name__) 69 | 70 | @app.route('/ping', methods=['GET']) 71 | def ping(): 72 | """Determine if the container is working and healthy. In this sample container, we declare 73 | it healthy if we can load the model successfully.""" 74 | health = ClassificationService.get_model() is not None 75 | 76 | status = 200 if health else 404 77 | return flask.Response(response='\n', status=status, mimetype='application/json') 78 | 79 | @app.route('/invocations', methods=['POST']) 80 | def transformation(): 81 | 82 | write_test_image(flask.request.stream) #receive the image and write it out as a JPEG file. 83 | 84 | # Do the prediction 85 | img = open_image(IMG_FOR_INFERENCE) 86 | predictions = ClassificationService.predict(img) #predict() also loads the model 87 | 88 | #print('predictions: ' + str(predictions[0]) + ', ' + str(predictions[1])) 89 | 90 | # Convert result to JSON 91 | return_value = { "predictions": {} } 92 | return_value["predictions"]["class"] = str(predictions[0]) 93 | print(return_value) 94 | 95 | return jsonify(return_value) 96 | -------------------------------------------------------------------------------- /image_classification/serve: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | # This file implements the service shell. You don't necessarily need to modify it for various 4 | # algorithms. 
It starts nginx and gunicorn with the correct configurations and then simply waits 5 | # until gunicorn exits. 6 | # 7 | # The flask server is specified to be the app object in wsgi.py 8 | # 9 | # We set the following parameters: 10 | # 11 | # Parameter Environment Variable Default Value 12 | # --------- -------------------- ------------- 13 | # number of workers MODEL_SERVER_WORKERS the number of CPU cores 14 | # timeout MODEL_SERVER_TIMEOUT 60 seconds 15 | 16 | from __future__ import print_function 17 | import multiprocessing 18 | import os 19 | import signal 20 | import subprocess 21 | import sys 22 | 23 | cpu_count = multiprocessing.cpu_count() 24 | 25 | model_server_timeout = os.environ.get('MODEL_SERVER_TIMEOUT', 60) 26 | model_server_workers = int(os.environ.get('MODEL_SERVER_WORKERS', cpu_count)) 27 | 28 | def sigterm_handler(nginx_pid, gunicorn_pid): 29 | try: 30 | os.kill(nginx_pid, signal.SIGQUIT) 31 | except OSError: 32 | pass 33 | try: 34 | os.kill(gunicorn_pid, signal.SIGTERM) 35 | except OSError: 36 | pass 37 | 38 | sys.exit(0) 39 | 40 | def start_server(): 41 | print('Starting the inference server with {} workers.'.format(model_server_workers)) 42 | 43 | 44 | # link the log streams to stdout/err so they will be logged to the container logs 45 | subprocess.check_call(['ln', '-sf', '/dev/stdout', '/var/log/nginx/access.log']) 46 | subprocess.check_call(['ln', '-sf', '/dev/stderr', '/var/log/nginx/error.log']) 47 | 48 | nginx = subprocess.Popen(['nginx', '-c', '/opt/program/nginx.conf']) 49 | gunicorn = subprocess.Popen(['gunicorn', 50 | '--timeout', str(model_server_timeout), 51 | '-k', 'gevent', 52 | '-b', 'unix:/tmp/gunicorn.sock', 53 | '-w', str(model_server_workers), 54 | 'wsgi:app']) 55 | 56 | signal.signal(signal.SIGTERM, lambda a, b: sigterm_handler(nginx.pid, gunicorn.pid)) 57 | 58 | # If either subprocess exits, so do we. 59 | pids = set([nginx.pid, gunicorn.pid]) 60 | while True: 61 | pid, _ = os.wait() 62 | if pid in pids: 63 | break 64 | 65 | sigterm_handler(nginx.pid, gunicorn.pid) 66 | print('Inference server exiting') 67 | 68 | # The main routine just invokes the start function. 69 | 70 | if __name__ == '__main__': 71 | start_server() 72 | -------------------------------------------------------------------------------- /image_classification/wsgi.py: -------------------------------------------------------------------------------- 1 | import predictor as myapp 2 | 3 | # This is just a simple wrapper for gunicorn to find your app. 4 | # If you want to change the algorithm file, simply change "predictor" above to the 5 | # new file. 
6 | 7 | app = myapp.app 8 | -------------------------------------------------------------------------------- /images/IamAttachPolicy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/IamAttachPolicy.png -------------------------------------------------------------------------------- /images/IamAttachPolicy2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/IamAttachPolicy2.png -------------------------------------------------------------------------------- /images/arcDiagram.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/arcDiagram.png -------------------------------------------------------------------------------- /images/archDiagram.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/archDiagram.jpg -------------------------------------------------------------------------------- /images/c9Dashboard.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/c9Dashboard.png -------------------------------------------------------------------------------- /images/c9OpenIDE.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/c9OpenIDE.png -------------------------------------------------------------------------------- /images/cfCreateStack.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/cfCreateStack.png -------------------------------------------------------------------------------- /images/ec2AddStorage.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/ec2AddStorage.png -------------------------------------------------------------------------------- /images/ec2ConnectToInstance.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/ec2ConnectToInstance.png -------------------------------------------------------------------------------- /images/ec2Console.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/ec2Console.png -------------------------------------------------------------------------------- /images/ec2InstanceType.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/ec2InstanceType.png -------------------------------------------------------------------------------- /images/ec2KeyPair.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/ec2KeyPair.png -------------------------------------------------------------------------------- /images/ec2Launch.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/ec2Launch.png -------------------------------------------------------------------------------- /images/ec2LaunchedInstance.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/ec2LaunchedInstance.png -------------------------------------------------------------------------------- /images/ec2List.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/ec2List.png -------------------------------------------------------------------------------- /images/ecr.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/ecr.png -------------------------------------------------------------------------------- /images/lambdaCode.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/lambdaCode.png -------------------------------------------------------------------------------- /images/lambdaCreateFunction.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/lambdaCreateFunction.png -------------------------------------------------------------------------------- /images/lambdaCreateFunction2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/lambdaCreateFunction2.png -------------------------------------------------------------------------------- /images/lambdaCreateTestEvent.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/lambdaCreateTestEvent.png -------------------------------------------------------------------------------- /images/lambdaEnvVariable.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/lambdaEnvVariable.png -------------------------------------------------------------------------------- /images/lambdaIAM.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/lambdaIAM.png -------------------------------------------------------------------------------- /images/lambdaTestEvent.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/lambdaTestEvent.png -------------------------------------------------------------------------------- /images/lambdaTestSuceed.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/lambdaTestSuceed.png -------------------------------------------------------------------------------- /images/lambdaTimeout.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/lambdaTimeout.png -------------------------------------------------------------------------------- /images/sagemakerCreateNotebook.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/sagemakerCreateNotebook.png -------------------------------------------------------------------------------- /images/sagemakerCreateNotebook2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/sagemakerCreateNotebook2.png -------------------------------------------------------------------------------- /images/sagemakerCreateNotebook3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/sagemakerCreateNotebook3.png -------------------------------------------------------------------------------- /images/sagemakerCreateNotebook4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/sagemakerCreateNotebook4.png -------------------------------------------------------------------------------- /images/sagemakerCreateNotebook5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/sagemakerCreateNotebook5.png -------------------------------------------------------------------------------- /images/sagemakerEndpoint.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/sagemakerEndpoint.png -------------------------------------------------------------------------------- /images/sagemakerEndpointConf.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/sagemakerEndpointConf.png -------------------------------------------------------------------------------- /images/sagemakerEndpointConf2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/sagemakerEndpointConf2.png -------------------------------------------------------------------------------- /images/sagemakerEndpointConf3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/sagemakerEndpointConf3.png -------------------------------------------------------------------------------- /images/sagemakerEndpointConf4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/sagemakerEndpointConf4.png -------------------------------------------------------------------------------- /images/sagemakerEndpointSuccess.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/sagemakerEndpointSuccess.png -------------------------------------------------------------------------------- /images/sagemakerModel.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/sagemakerModel.png -------------------------------------------------------------------------------- /images/sagemakerModel2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/sagemakerModel2.png -------------------------------------------------------------------------------- /images/sagemakerModel2_5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/sagemakerModel2_5.png -------------------------------------------------------------------------------- /images/sagemakerModel3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/images/sagemakerModel3.png -------------------------------------------------------------------------------- /lambda_function.py: -------------------------------------------------------------------------------- 1 | import os 2 | import boto3 3 | import json 4 | 5 | # grab environment variables 6 | 
6 | SAGEMAKER_ENDPOINT_NAME = os.environ['SAGEMAKER_ENDPOINT_NAME']
7 | BUCKET_NAME = os.environ['BUCKET_NAME']
8 | runtime = boto3.client('runtime.sagemaker')
9 | s3 = boto3.resource('s3')
10 | 
11 | def lambda_handler(event, context):
12 |     OBJECT_KEY = event['paper image']
13 |     file_name = '/tmp/test_image.jpg'
14 |     s3.Bucket(BUCKET_NAME).download_file(OBJECT_KEY, file_name)
15 | 
16 |     payload = ''
17 | 
18 |     with open(file_name, 'rb') as f:
19 |         payload = f.read()
20 |         payload = bytearray(payload)
21 | 
22 |     response = runtime.invoke_endpoint(EndpointName=SAGEMAKER_ENDPOINT_NAME,
23 |                                        ContentType='application/x-image',
24 |                                        Body=payload)
25 | 
26 |     result = json.loads(response['Body'].read().decode())
27 |     print(result)
28 |     pred = result['predictions']['class']
29 | 
30 |     return pred
--------------------------------------------------------------------------------
/model/model.tar.gz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-custom-container/068ffa45e58d84651cabebea4878d09b7794e1a5/model/model.tar.gz
--------------------------------------------------------------------------------
/resize.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | # Specify the desired volume size in GiB as a command-line argument. If not specified, default to 20 GiB.
4 | SIZE=${1:-20}
5 | 
6 | # Install the jq command-line JSON processor.
7 | sudo yum -y install jq
8 | 
9 | # Get the ID of the environment host Amazon EC2 instance.
10 | INSTANCEID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
11 | 
12 | # Get the ID of the Amazon EBS volume associated with the instance.
13 | VOLUMEID=$(aws ec2 describe-instances --instance-id $INSTANCEID | jq -r .Reservations[0].Instances[0].BlockDeviceMappings[0].Ebs.VolumeId)
14 | 
15 | # Resize the EBS volume.
16 | aws ec2 modify-volume --volume-id $VOLUMEID --size $SIZE
17 | 
18 | # Wait for the resize to finish.
19 | while [ "$(aws ec2 describe-volumes-modifications --volume-id $VOLUMEID --filters Name=modification-state,Values="optimizing","completed" | jq '.VolumesModifications | length')" != "1" ]; do
20 |     sleep 1
21 | done
22 | 
23 | # Rewrite the partition table so that the partition takes up all the space that it can.
24 | sudo growpart /dev/xvda 1
25 | 
26 | # Expand the size of the file system.
27 | sudo resize2fs /dev/xvda1
--------------------------------------------------------------------------------
/template/builder_session_setup.json:
--------------------------------------------------------------------------------
1 | {
2 |   "AWSTemplateFormatVersion" : "2010-09-09",
3 | 
4 |   "Description" : "AWS CloudFormation template for the Build Custom Container with SageMaker workshop. Creates an EC2 instance and a test Lambda function.",
5 | 
6 |   "Parameters" : {
7 |     "01C9InstanceType" : {
8 |       "Description" : "Cloud9 instance type",
9 |       "Type" : "String",
10 |       "Default" : "m4.xlarge",
11 |       "AllowedValues" : [ "m4.large", "m4.xlarge", "t2.large" ],
12 |       "ConstraintDescription" : "Must be a valid Cloud9 instance type"
13 |     },
14 |     "02BucketName" : {
15 |       "Type": "String",
16 |       "Description": "The name for the bucket where you'll upload the model and test images.",
17 |       "AllowedPattern": "^([a-z]|(\\d(?!\\d{0,2}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})))([a-z\\d]|(\\.(?!(\\.|-)))|(-(?!\\.))){1,61}[a-z\\d\\.]$",
18 |       "Default": "gpstec417-builder-session",
19 |       "ConstraintDescription" : "Should be a valid S3 bucket name"
20 |     }
21 |   },
22 | 
23 |   "Mappings" : {
24 |     "AWSRegion2AMI" : {
25 |       "us-east-1" : { "AMI" : "ami-0ee43063d08ad19d2" },
26 |       "us-west-2" : { "AMI" : "ami-019f1da417e0ce397" }
27 |     }
28 |   },
29 | 
30 |   "Metadata" : {
31 |     "AWS::CloudFormation::Interface" : {
32 |       "ParameterLabels" : {
33 |         "01C9InstanceType" : { "default" : "Provision an EC2 instance to create the Docker image, with the following instance type" }
34 |       }
35 |     }
36 |   },
37 | 
38 |   "Resources" : {
39 | 
40 |     "C9Instance": {
41 |       "Description": "-",
42 |       "Type": "AWS::Cloud9::EnvironmentEC2",
43 |       "Properties": {
44 |         "Description": "AWS Cloud9 instance for builder session GPSTEC417",
45 |         "AutomaticStopTimeMinutes": 120,
46 |         "InstanceType": { "Ref" : "01C9InstanceType" },
47 |         "Name": { "Ref": "AWS::StackName" }
48 |       }
49 |     },
50 | 
51 |     "BuilderSessionS3" : {
52 |       "Type" : "AWS::S3::Bucket",
53 |       "Properties" : {
54 |         "BucketName" : {"Fn::Join": ["-", [{"Ref": "02BucketName"}, { "Ref" : "AWS::Region" }, {"Ref": "AWS::AccountId"}]]}
55 |       },
56 |       "DeletionPolicy" : "Retain"
57 |     },
58 | 
59 |     "LambdaExecutionRole": {
60 |       "Type": "AWS::IAM::Role",
61 |       "Properties": {
62 |         "RoleName": "GPSTEC417-LambdaExecutionRole",
63 |         "AssumeRolePolicyDocument": {
64 |           "Version": "2012-10-17",
65 |           "Statement": [
66 |             {
67 |               "Effect": "Allow",
68 |               "Principal": {
69 |                 "Service": [
70 |                   "lambda.amazonaws.com"
71 |                 ]
72 |               },
73 |               "Action": [
74 |                 "sts:AssumeRole"
75 |               ]
76 |             }
77 |           ]
78 |         },
79 |         "ManagedPolicyArns": [
80 |           "arn:aws:iam::aws:policy/AmazonSageMakerFullAccess"
81 |         ],
82 |         "Path": "/"
83 |       }
84 |     },
85 | 
86 |     "CallSageMakerEndpointLambdaFunction": {
87 |       "Type" : "AWS::Lambda::Function",
88 |       "DependsOn": [
89 |         "LambdaExecutionRole"
90 |       ],
91 |       "Properties" : {
92 |         "FunctionName" : "Call_SageMaker_Endpoint_Image_Classification",
93 |         "Handler" : "index.lambda_handler",
94 |         "Role": {
95 |           "Fn::GetAtt": [
96 |             "LambdaExecutionRole",
97 |             "Arn"
98 |           ]
99 |         },
100 |         "Environment" : {
101 |           "Variables" : { "BUCKET_NAME" : "",
102 |                           "SAGEMAKER_ENDPOINT_NAME" : "" }
103 |         },
104 |         "Runtime": "python3.6",
105 |         "MemorySize" : 128,
106 |         "Timeout": "30",
107 |         "Code" : {
108 |           "ZipFile": {
109 |             "Fn::Join": [
110 |               "\n", [
111 |                 "import os",
112 |                 "import boto3",
113 |                 "import json",
114 |                 "",
115 |                 "# grab environment variables",
116 |                 "SAGEMAKER_ENDPOINT_NAME = os.environ['SAGEMAKER_ENDPOINT_NAME']",
117 |                 "BUCKET_NAME = os.environ['BUCKET_NAME']",
118 |                 "runtime = boto3.client('runtime.sagemaker')",
119 |                 "s3 = boto3.resource('s3')",
120 |                 "",
121 |                 "def lambda_handler(event, context):",
122 |                 "    OBJECT_KEY = event['glass bottle image']",
123 |                 "    file_name = '/tmp/test_image.jpg'",
124 |                 "    s3.Bucket(BUCKET_NAME).download_file(OBJECT_KEY, file_name)",
125 |                 "",
126 |                 "    payload = ''",
127 |                 "",
128 |                 "    with open(file_name, 'rb') as f:",
129 |                 "        payload = f.read()",
130 |                 "        payload = bytearray(payload)",
131 |                 "",
132 |                 "    response = runtime.invoke_endpoint(EndpointName=SAGEMAKER_ENDPOINT_NAME,",
133 |                 "                                       ContentType='application/x-image',",
134 |                 "                                       Body=payload)",
135 |                 "",
136 |                 "    result = json.loads(response['Body'].read().decode())",
137 |                 "    print(result)",
138 |                 "    pred = result['predictions']['class']",
139 |                 "",
140 |                 "    return pred"
141 |               ]
142 |             ]
143 |           }
144 |         }
145 |       }
146 |     }
147 |   }
148 | }
149 | 
--------------------------------------------------------------------------------
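
Note: the snippet below is not a file in the repository; it is a minimal sketch of how the test Lambda function created by the template above could be invoked once the stack is deployed, the function's BUCKET_NAME and SAGEMAKER_ENDPOINT_NAME environment variables are filled in, and a test image has been uploaded to the bucket. The object key 'glass_bottle.jpg' is only an illustrative assumption; the event key 'glass bottle image' matches the inline handler defined in the template.

import json
import boto3

# Assumes credentials and region are already configured for boto3.
lambda_client = boto3.client('lambda')

# 'glass_bottle.jpg' is a placeholder S3 object key, not a value taken from the template.
test_event = {'glass bottle image': 'glass_bottle.jpg'}

# Synchronously invoke the deployed function by the name given in the template.
response = lambda_client.invoke(
    FunctionName='Call_SageMaker_Endpoint_Image_Classification',
    Payload=json.dumps(test_event),
)

# The handler returns the predicted class label produced by the SageMaker endpoint.
print(json.loads(response['Payload'].read()))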