├── gai-inpainting-cdk ├── cdk │ ├── __init__.py │ └── gai_inpainting_sagemaker.py ├── requirements-dev.txt ├── requirements.txt ├── notebooks │ ├── images │ │ ├── speaker.png │ │ └── speaker_mask.png │ ├── code │ │ ├── requirements.txt │ │ ├── inference_sam.py │ │ └── inference_inpainting.py │ ├── 3_CleanUp.ipynb │ ├── 2_GenerateInPaintingImages.ipynb │ └── 1_HostGenAIModels.ipynb ├── app.py ├── source.bat ├── cdk.json ├── README.md └── gai-inpainting-cfn-template.yaml ├── assets ├── Flow.png ├── output_SAM.png ├── LaunchStack.png ├── AWSArchitecture.png ├── Step1_StackName.png ├── Step2_StackIAM.png ├── output_InPainting.png └── Step3_StackSuccess.png ├── CODE_OF_CONDUCT.md ├── LICENSE ├── .gitignore ├── CONTRIBUTING.md └── README.md /gai-inpainting-cdk/cdk/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /gai-inpainting-cdk/requirements-dev.txt: -------------------------------------------------------------------------------- 1 | pytest==6.2.5 -------------------------------------------------------------------------------- /gai-inpainting-cdk/requirements.txt: -------------------------------------------------------------------------------- 1 | aws-cdk-lib==2.46.0 2 | constructs<=11.0.0 3 | jsonpickle~=2.2.0 4 | boto3~=1.24.1 -------------------------------------------------------------------------------- /assets/Flow.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/sagemaker-generative-ai-for-product-placement-using-images/HEAD/assets/Flow.png -------------------------------------------------------------------------------- /assets/output_SAM.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/sagemaker-generative-ai-for-product-placement-using-images/HEAD/assets/output_SAM.png -------------------------------------------------------------------------------- /assets/LaunchStack.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/sagemaker-generative-ai-for-product-placement-using-images/HEAD/assets/LaunchStack.png -------------------------------------------------------------------------------- /assets/AWSArchitecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/sagemaker-generative-ai-for-product-placement-using-images/HEAD/assets/AWSArchitecture.png -------------------------------------------------------------------------------- /assets/Step1_StackName.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/sagemaker-generative-ai-for-product-placement-using-images/HEAD/assets/Step1_StackName.png -------------------------------------------------------------------------------- /assets/Step2_StackIAM.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/sagemaker-generative-ai-for-product-placement-using-images/HEAD/assets/Step2_StackIAM.png -------------------------------------------------------------------------------- /assets/output_InPainting.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/aws-samples/sagemaker-generative-ai-for-product-placement-using-images/HEAD/assets/output_InPainting.png -------------------------------------------------------------------------------- /assets/Step3_StackSuccess.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/sagemaker-generative-ai-for-product-placement-using-images/HEAD/assets/Step3_StackSuccess.png -------------------------------------------------------------------------------- /gai-inpainting-cdk/notebooks/images/speaker.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/sagemaker-generative-ai-for-product-placement-using-images/HEAD/gai-inpainting-cdk/notebooks/images/speaker.png -------------------------------------------------------------------------------- /gai-inpainting-cdk/notebooks/images/speaker_mask.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/sagemaker-generative-ai-for-product-placement-using-images/HEAD/gai-inpainting-cdk/notebooks/images/speaker_mask.png -------------------------------------------------------------------------------- /gai-inpainting-cdk/notebooks/code/requirements.txt: -------------------------------------------------------------------------------- 1 | nvgpu==0.9.0 2 | pynvml==11.4.1 3 | opencv-python 4 | torch 5 | torchvision 6 | Pillow 7 | transformers 8 | numpy 9 | matplotlib 10 | diffusers 11 | albumentations 12 | accelerate -------------------------------------------------------------------------------- /gai-inpainting-cdk/app.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | import os 3 | import aws_cdk as cdk 4 | from cdk.gai_inpainting_sagemaker import GenAiInpaintStack 5 | 6 | app = cdk.App() 7 | GenAiInpaintStack(app, "GenAiInpaintStack",) 8 | 9 | app.synth() -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. 5 | -------------------------------------------------------------------------------- /gai-inpainting-cdk/source.bat: -------------------------------------------------------------------------------- 1 | @echo off 2 | 3 | rem The sole purpose of this script is to make the command 4 | rem 5 | rem source .venv/bin/activate 6 | rem 7 | rem (which activates a Python virtualenv on Linux or Mac OS X) work on Windows. 8 | rem On Windows, this command just runs this batch file (the argument is ignored). 9 | rem 10 | rem Now we don't need to document a Windows command for activating a virtualenv. 11 | 12 | echo Executing .venv\Scripts\activate.bat for you 13 | .venv\Scripts\activate.bat 14 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT No Attribution 2 | 3 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy of 6 | this software and associated documentation files (the "Software"), to deal in 7 | the Software without restriction, including without limitation the rights to 8 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of 9 | the Software, and to permit persons to whom the Software is furnished to do so. 10 | 11 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 12 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS 13 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR 14 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER 15 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 16 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 17 | 18 | -------------------------------------------------------------------------------- /gai-inpainting-cdk/cdk.json: -------------------------------------------------------------------------------- 1 | { 2 | "app": "python app.py", 3 | "watch": { 4 | "include": [ 5 | "**" 6 | ], 7 | "exclude": [ 8 | "README.md", 9 | "cdk*.json", 10 | "requirements*.txt", 11 | "source.bat", 12 | "**/__init__.py", 13 | "python/__pycache__", 14 | "tests" 15 | ] 16 | }, 17 | "context": { 18 | "@aws-cdk/aws-apigateway:usagePlanKeyOrderInsensitiveId": true, 19 | "@aws-cdk/core:stackRelativeExports": true, 20 | "@aws-cdk/aws-rds:lowercaseDbIdentifier": true, 21 | "@aws-cdk/aws-lambda:recognizeVersionProps": true, 22 | "@aws-cdk/aws-cloudfront:defaultSecurityPolicyTLSv1.2_2021": true, 23 | "@aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver": true, 24 | "@aws-cdk/aws-ec2:uniqueImdsv2TemplateName": true, 25 | "@aws-cdk/core:checkSecretUsage": true, 26 | "@aws-cdk/aws-iam:minimizePolicies": true, 27 | "@aws-cdk/core:target-partitions": [ 28 | "aws", 29 | "aws-cn" 30 | ] 31 | } 32 | } -------------------------------------------------------------------------------- /gai-inpainting-cdk/README.md: -------------------------------------------------------------------------------- 1 | # GitHub to SageMaker Notebook 2 | This Python implementation creates 1 S3 Bucket and 1 SageMaker Notebook 3 | 4 | ![AWSArchitecture](../assets/AWSArchitecture.png) 5 | 6 | ## Creation and Deployment 7 | The project uses virtual environment. 
Use the following steps to setup the CDK Stack: 8 | ``` 9 | [To manually create a virtualenv on MacOS and Linux] 10 | $ python3 -m venv .venv 11 | 12 | [Activate the virtual environment] 13 | $ source .venv/bin/activate 14 | 15 | [Install the pre-requisites] 16 | $ pip3 install -r requirements.txt 17 | 18 | [CDK Creation and Deployment] 19 | $ cdk synth 20 | $ cdk bootstrap 21 | $ cdk deploy 22 | ``` 23 | 24 | ## Cleanup 25 | To clear the stack and resources: 26 | ``` 27 | $ cdk destroy 28 | ``` 29 | 30 | ## Useful commands 31 | * `cdk ls` list all stacks in the app 32 | * `cdk synth` emits the synthesized CloudFormation template 33 | * `cdk deploy` deploy this stack to your default AWS account/region 34 | * `cdk diff` compare deployed stack with current state 35 | * `cdk docs` open CDK documentation -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__/ 2 | .vscode/ 3 | .ipynb_checkpoints/ 4 | .venv/ 5 | cdk.out/ 6 | cdk.out/** 7 | 8 | .idea 9 | *.venv 10 | *cdk.out 11 | *.drawio 12 | 13 | # Prerequisites 14 | *.d 15 | 16 | # Compiled Object files 17 | *.slo 18 | *.lo 19 | *.o 20 | *.obj 21 | 22 | # Precompiled Headers 23 | *.gch 24 | *.pch 25 | 26 | # Compiled Dynamic libraries 27 | *.so 28 | *.dylib 29 | *.dll 30 | 31 | # Fortran module files 32 | *.mod 33 | *.smod 34 | 35 | # Compiled Static libraries 36 | *.lai 37 | *.la 38 | *.a 39 | *.lib 40 | 41 | # Executables 42 | *.exe 43 | *.out 44 | *.app 45 | 46 | # Prerequisites 47 | *.d 48 | 49 | # Object files 50 | *.o 51 | *.ko 52 | *.obj 53 | *.elf 54 | 55 | # Linker output 56 | *.ilk 57 | *.map 58 | *.exp 59 | 60 | # Precompiled Headers 61 | *.gch 62 | *.pch 63 | 64 | # Libraries 65 | *.lib 66 | *.a 67 | *.la 68 | *.lo 69 | 70 | # Shared objects (inc. Windows DLLs) 71 | *.dll 72 | *.so 73 | *.so.* 74 | *.dylib 75 | 76 | # Executables 77 | *.exe 78 | *.out 79 | *.app 80 | *.i*86 81 | *.x86_64 82 | *.hex 83 | 84 | # Debug files 85 | *.dSYM/ 86 | *.su 87 | *.idb 88 | *.pdb 89 | 90 | # Kernel Module Compile Results 91 | *.mod* 92 | *.cmd 93 | .tmp_versions/ 94 | modules.order 95 | Module.symvers 96 | Mkfile.old 97 | dkms.conf 98 | 99 | # MacOS .DS_Store files 100 | .DS_Store 101 | 102 | # Avoid all videos and csvs, zips 103 | *.zip 104 | *.mp4 105 | *.csv 106 | *.tar.gz 107 | 108 | # Avoid all greengrass build files 109 | **zip-build 110 | **greengrass-build -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. 
Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *main* branch. 27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 38 | 39 | GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute on. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start. 45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments. 51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public github issue. 55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 60 | -------------------------------------------------------------------------------- /gai-inpainting-cdk/notebooks/code/inference_sam.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import torch 3 | import os 4 | import json 5 | import io 6 | import cv2 7 | import time 8 | from transformers import SamModel, SamProcessor 9 | from PIL import Image 10 | 11 | device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') 12 | 13 | # Initialize the processor with the pre-trained model 14 | processor = SamProcessor.from_pretrained("facebook/sam-vit-large") 15 | 16 | def model_fn(model_dir): 17 | """ 18 | Load the pre-trained model from the specified directory. 19 | 20 | Args: 21 | model_dir (str): Directory containing the pre-trained model files. 22 | 23 | Returns: 24 | model: Loaded pre-trained model. 
25 | """ 26 | print("Executing model_fn from inference.py ...") 27 | env = os.environ 28 | model = SamModel.from_pretrained("facebook/sam-vit-large") 29 | model.to(device) 30 | return model 31 | 32 | def input_fn(request_body, request_content_type): 33 | """ 34 | Preprocess the input data. 35 | 36 | Args: 37 | request_body: Input data from the request. 38 | request_content_type (str): Content type of the request. 39 | 40 | Returns: 41 | inputs: Preprocessed input data. 42 | """ 43 | print("Executing input_fn from inference.py ...") 44 | inputs = [] 45 | if request_content_type: 46 | # Load image array from request body 47 | img_array = np.load(io.BytesIO(request_body), allow_pickle=True) 48 | img = Image.fromarray(img_array) 49 | 50 | # Define input points for the processor 51 | input_points = [[[np.array(img.size)/2]]] 52 | 53 | # Preprocess the image using the processor 54 | inputs = processor(img, input_points=input_points, return_tensors="pt") 55 | inputs = inputs.to(device) 56 | else: 57 | raise Exception("Unsupported content type: " + request_content_type) 58 | return inputs 59 | 60 | def predict_fn(input_data, model): 61 | """ 62 | Perform the prediction using the input data and the loaded model. 63 | 64 | Args: 65 | input_data: Preprocessed input data. 66 | model: Loaded pre-trained model. 67 | 68 | Returns: 69 | result: Prediction output. 70 | """ 71 | print("Executing predict_fn from inference.py ...") 72 | result = [] 73 | with torch.no_grad(): 74 | # Perform the prediction using the model 75 | result = model(**input_data) 76 | 77 | # Post-process the predicted masks 78 | result = processor.image_processor.post_process_masks(result.pred_masks.cpu(), input_data["original_sizes"].cpu(), input_data["reshaped_input_sizes"].cpu()) 79 | 80 | if torch.cuda.is_available(): 81 | # Empty the GPU cache and collect garbage 82 | torch.cuda.empty_cache() 83 | torch.cuda.ipc_collect() 84 | return result 85 | 86 | def output_fn(prediction_output, content_type): 87 | """ 88 | Process the prediction output and prepare the response. 89 | 90 | Args: 91 | prediction_output: Prediction output. 92 | content_type (str): Desired content type for the response. 93 | 94 | Returns: 95 | str: Response in the specified content type. 96 | """ 97 | print("Executing output_fn from inference.py ...") 98 | masks = np.transpose(prediction_output[0][0, :, :, :].numpy(), [1, 2, 0]).astype(np.uint8) * 255 99 | mask_list = masks.tolist() 100 | return json.dumps(mask_list) 101 | -------------------------------------------------------------------------------- /gai-inpainting-cdk/notebooks/3_CleanUp.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "6213da9a", 6 | "metadata": {}, 7 | "source": [ 8 | "# 3. Cleanup\n", 9 | "This Notebook is used to delete the endpoints and run a cleanup to stop the endpoints which would avoid any additional charges." 
10 | ] 11 | }, 12 | { 13 | "cell_type": "code", 14 | "execution_count": null, 15 | "id": "a8e05087", 16 | "metadata": { 17 | "tags": [] 18 | }, 19 | "outputs": [], 20 | "source": [ 21 | "# Import libraries\n", 22 | "import boto3\n", 23 | "\n", 24 | "# Get the right credentials, role and client for SageMaker\n", 25 | "sm_client = boto3.client(service_name=\"sagemaker\")" 26 | ] 27 | }, 28 | { 29 | "cell_type": "markdown", 30 | "id": "cc5a144d", 31 | "metadata": {}, 32 | "source": [ 33 | "## 3.1 Delete endpoint, endpoint config and model for Model 1: SAM" 34 | ] 35 | }, 36 | { 37 | "cell_type": "code", 38 | "execution_count": null, 39 | "id": "a2d48b85", 40 | "metadata": { 41 | "tags": [] 42 | }, 43 | "outputs": [], 44 | "source": [ 45 | "%store -r SAM_ENDPOINT_NAME\n", 46 | "response = sm_client.describe_endpoint_config(EndpointConfigName=SAM_ENDPOINT_NAME)\n", 47 | "print(response)\n", 48 | "endpoint_config_name = response['EndpointConfigName']\n", 49 | "\n", 50 | "# Delete Endpoint\n", 51 | "sm_client.delete_endpoint(EndpointName=SAM_ENDPOINT_NAME)\n", 52 | "\n", 53 | "# Delete Endpoint Configuration\n", 54 | "sm_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)\n", 55 | "\n", 56 | "# Delete Model\n", 57 | "for prod_var in response['ProductionVariants']:\n", 58 | " model_name = prod_var['ModelName']\n", 59 | " sm_client.delete_model(ModelName=model_name) " 60 | ] 61 | }, 62 | { 63 | "cell_type": "markdown", 64 | "id": "33987ac0", 65 | "metadata": {}, 66 | "source": [ 67 | "## 3.2 Delete endpoint, endpoint config and model for Model 2: InPainting" 68 | ] 69 | }, 70 | { 71 | "cell_type": "code", 72 | "execution_count": null, 73 | "id": "b53d902f", 74 | "metadata": { 75 | "tags": [] 76 | }, 77 | "outputs": [], 78 | "source": [ 79 | "%store -r INPAINTING_ENDPOINT_NAME\n", 80 | "response = sm_client.describe_endpoint_config(EndpointConfigName=INPAINTING_ENDPOINT_NAME)\n", 81 | "print(response)\n", 82 | "endpoint_config_name = response['EndpointConfigName']\n", 83 | "\n", 84 | "# Delete Endpoint\n", 85 | "sm_client.delete_endpoint(EndpointName=INPAINTING_ENDPOINT_NAME)\n", 86 | "\n", 87 | "# Delete Endpoint Configuration\n", 88 | "sm_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)\n", 89 | "\n", 90 | "# Delete Model\n", 91 | "for prod_var in response['ProductionVariants']:\n", 92 | " model_name = prod_var['ModelName']\n", 93 | " sm_client.delete_model(ModelName=model_name) " 94 | ] 95 | }, 96 | { 97 | "cell_type": "code", 98 | "execution_count": null, 99 | "id": "261e5e37", 100 | "metadata": {}, 101 | "outputs": [], 102 | "source": [] 103 | } 104 | ], 105 | "metadata": { 106 | "kernelspec": { 107 | "display_name": "conda_pytorch_p39", 108 | "language": "python", 109 | "name": "conda_pytorch_p39" 110 | }, 111 | "language_info": { 112 | "codemirror_mode": { 113 | "name": "ipython", 114 | "version": 3 115 | }, 116 | "file_extension": ".py", 117 | "mimetype": "text/x-python", 118 | "name": "python", 119 | "nbconvert_exporter": "python", 120 | "pygments_lexer": "ipython3", 121 | "version": "3.9.15" 122 | } 123 | }, 124 | "nbformat": 4, 125 | "nbformat_minor": 5 126 | } 127 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Using Generative AI InPainting for Product Placement 2 | This project is a practical demonstration of utilizing Amazon SageMaker to run an inpainting algorithm. 
This repository serves as a starting point for developers interested in leveraging SageMaker's capabilities for image completion tasks. Inpainting, a powerful technique in computer vision, enables the automatic filling in of missing or corrupted parts of an image using machine learning algorithms. By providing a simple and streamlined implementation, this repository offers a hands-on experience for understanding how to set up and run an inpainting model on the SageMaker platform. By leveraging the potential of inpainting, we aim to revolutionize the way products are integrated into visual content, enabling advertisers and marketers to seamlessly incorporate their offerings in a visually appealing and contextually relevant manner. 3 | In this demo, we use SageMaker endpoints to host the Generative AI models and pass images to them to create outputs. 4 | 5 | The project uses [AWS CloudFormation/CDK](https://aws.amazon.com/cloudformation/) to build the stack; once the stack is created, the SageMaker notebooks it provisions are used to run the Generative AI tasks. 6 | 7 | Note: Although this repository is released under the `MIT-0` license, its `StableDiffusionInpaintPipeline` 8 | uses the third-party `diffusers` project. The `diffusers` project's licensing includes the `OpenRAIL-M` license. 9 | 10 | ## AWS Architecture: 11 | ![AWSArchitecture](assets/AWSArchitecture.png) 12 | 13 | ## Data Flow and Image Generation 14 | ![Flow](assets/Flow.png) 15 | 16 | ## AWS CloudFormation Stack Creation 17 | The AWS CloudFormation stack can be created using two methods: (1) using the CloudFormation template or (2) using the AWS CDK. Both methods are described below: 18 | 19 | 1. Create the stack using AWS CloudFormation: 20 | - Choose **Launch Stack** and (if prompted) log into your AWS account: 21 | [![Launch Stack](assets/LaunchStack.png)](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-blogs-artifacts-public.s3.amazonaws.com/artifacts/ML-14623/CfnTemplate/gai-inpainting-cfn-template.yaml) 22 | - Select a unique stack name, acknowledge the creation of IAM resources, create the stack, and wait a few minutes for it to deploy successfully 23 | 1. ![Step1_StackName](assets/Step1_StackName.png) 24 | 2. ![Step2_StackIAM](assets/Step2_StackIAM.png) 25 | 3. ![Step3_StackSuccess](assets/Step3_StackSuccess.png) 26 | 27 | 2. Generative AI InPainting using AWS CDK 28 | To create the stack with the AWS CDK, follow the steps highlighted in [gai-inpainting-cdk](gai-inpainting-cdk/README.md): 29 | ``` 30 | $ cd gai-inpainting-cdk 31 | $ python3 -m venv .venv 32 | $ source .venv/bin/activate 33 | $ pip3 install -r requirements.txt 34 | $ cdk synth 35 | $ cdk bootstrap 36 | $ cdk deploy 37 | ``` 38 | 39 | ## Running InPainting using Amazon SageMaker notebooks: 40 | - From the AWS Console, go to [Amazon SageMaker Notebook Instances](https://us-east-1.console.aws.amazon.com/sagemaker/home?region=us-east-1#/notebook-instances) 41 | - Select the notebook instance created by the stack and open it 42 | - Inside the SageMaker notebook, navigate to `notebooks` and open the notebooks `1_HostGenAIModels.ipynb`, `2_GenerateInPaintingImages.ipynb` & `3_CleanUp.ipynb` 43 | 1. `1_HostGenAIModels.ipynb`: Hosts the two models (SAM and InPainting) on SageMaker endpoints. 44 | 2. `2_GenerateInPaintingImages.ipynb`: Using the prompts, the base image, and the generated mask, runs the InPainting model to generate an output image (see the sketch after this list). 45 | 3. `3_CleanUp.ipynb`: Runs a cleanup job to delete the endpoints.
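The sketch below loosely mirrors the flow implemented in `2_GenerateInPaintingImages.ipynb`: the SAM endpoint returns a product mask, which is then sent together with the product image and the prompts to the InPainting endpoint. The sample image path and prompts come from the notebooks; wrapping the flow in a standalone function (and passing the endpoint names explicitly instead of via `%store`) is an illustrative assumption, not code from the repository.

```python
# Illustrative sketch, loosely following 2_GenerateInPaintingImages.ipynb.
import numpy as np
from PIL import Image
from sagemaker.pytorch import PyTorchPredictor
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer


def run_product_placement(sam_endpoint_name: str, inpainting_endpoint_name: str) -> Image.Image:
    """Generate a product-placement image; endpoint names are assumed to be passed in."""
    # 1) Ask the SAM endpoint for a mask of the product (sent as a NumPy array).
    raw_image = Image.open("images/speaker.png").convert("RGB")
    sam = PyTorchPredictor(endpoint_name=sam_endpoint_name, deserializer=JSONDeserializer())
    mask_array = sam.predict(np.array(raw_image), initial_args={"Accept": "application/json"})
    mask_image = Image.fromarray(np.array(mask_array).astype(np.uint8))

    # 2) Send image, mask and prompts to the InPainting endpoint.
    #    (The notebook also resizes both images so that the longer side is 512 px.)
    inputs = {
        "image": np.array(raw_image),
        "mask": np.array(mask_image),
        "prompt_fr": "apple, books",  # foreground prompt
        "prompt_bg": "table",         # background prompt
        "negative_prompt": "lowres, bad anatomy, worst quality, low quality",  # abbreviated
    }
    inpainting = PyTorchPredictor(
        endpoint_name=inpainting_endpoint_name,
        serializer=JSONSerializer(),
        deserializer=JSONDeserializer(),
    )
    outputs = inpainting.predict(inputs, initial_args={"Accept": "application/json"})

    # The endpoint returns [generated image, generated background, refined mask, post-processed image].
    post_image = Image.fromarray(np.array(outputs[3]).astype(np.uint8))
    post_image.save("images/speaker_generated.png")
    return post_image
```

On the endpoint side, `inference_sam.py` and `inference_inpainting.py` implement the corresponding `model_fn`/`input_fn`/`predict_fn`/`output_fn` handlers, so the predictors above only need to send NumPy/JSON payloads and read back JSON lists.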
46 | 47 | ### Running SAM for mask creation: 48 | - Using Segment Anything Models (SAM), an advanced generative AI technique, we can effortlessly generate high-quality masks for various objects within images. 49 | - Example: 50 | ![output_SAM.png](assets/output_SAM.png) 51 | 52 | ### Running InPainting on image, mask and prompts: 53 | - By combining the power of InPainting with the mask generated by Segment Anything Models (SAM) and the user's prompt, we can create remarkable generated images. 54 | - Example: 55 | ![output_InPainting.png](assets/output_InPainting.png) 56 | 57 | ## Contributors: 58 | - [Fabian Benitez-Quiroz (@fabianbq)](fabianbq@amazon.com) 59 | - [Romil Shah (@rpshah)](rpshah@amazon.com) 60 | -------------------------------------------------------------------------------- /gai-inpainting-cdk/cdk/gai_inpainting_sagemaker.py: -------------------------------------------------------------------------------- 1 | import os, base64 2 | from constructs import Construct 3 | from aws_cdk import ( 4 | Aws, 5 | Stack, 6 | aws_s3 as s3, 7 | aws_iam as iam, 8 | aws_sagemaker as sagemaker, 9 | aws_s3_deployment as s3deploy, 10 | ) 11 | import aws_cdk as cdk 12 | 13 | region = Aws.REGION 14 | account = Aws.ACCOUNT_ID 15 | 16 | # CDK Stack for 17 | # 1. Create S3 18 | # 2. Create SageMaker Notebook 19 | 20 | class GenAiInpaintStack(Stack): 21 | """ 22 | The SageMaker Notebook is used to deploy the custom model on a SageMaker endpoint and test it. 23 | """ 24 | 25 | def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None: 26 | super().__init__(scope, construct_id, **kwargs) 27 | 28 | ## Create S3 bucket 29 | self.bucket = s3.Bucket( 30 | self, 31 | "genAiInPaintingS3", 32 | bucket_name=f"gai-inpainting-s3-{account}", 33 | auto_delete_objects=True, 34 | removal_policy=cdk.RemovalPolicy.DESTROY) 35 | 36 | # Upload the notebook directory to S3 37 | notebook_directory = 'notebooks' 38 | zip_file_path = '/tmp/notebooks.zip' 39 | 40 | # Zip the notebook directory 41 | os.system('zip -r ' + zip_file_path + ' ' + notebook_directory) 42 | 43 | # Upload the zip file to S3 44 | s3deploy.BucketDeployment( 45 | self, 46 | 'genAiS3Deploy', 47 | sources=[s3deploy.Source.asset(zip_file_path)], 48 | destination_bucket=self.bucket, 49 | destination_key_prefix='notebooks' 50 | ) 51 | 52 | ## IAM Roles 53 | # Create role for Notebook instance 54 | nRole = iam.Role(self, "genAiInPaintingNotebookAccessRole", assumed_by=iam.ServicePrincipal('sagemaker.amazonaws.com')) 55 | nRole.add_managed_policy(iam.ManagedPolicy.from_aws_managed_policy_name("AmazonSageMakerFullAccess")) 56 | 57 | # Add necessary permissions to the role 58 | nRole.add_to_policy(iam.PolicyStatement(effect=iam.Effect.ALLOW, actions=["logs:*"], resources=["*"])) 59 | nRole.add_to_policy(iam.PolicyStatement(effect=iam.Effect.ALLOW, actions=["sagemaker:Describe*", "sagemaker:*Model*", "sagemaker:*Endpoint*", "sagemaker:*ProcessingJob*"], resources=["*"])) 60 | nRole.add_to_policy(iam.PolicyStatement(effect=iam.Effect.ALLOW, actions=["s3:*"], resources=[self.bucket.bucket_arn + "/*"])) 61 | nRole.add_to_policy(iam.PolicyStatement(effect=iam.Effect.ALLOW, actions=["ecr:BatchGetImage"], resources=["arn:aws:ecr:::*"])) 62 | nRole.add_to_policy(iam.PolicyStatement(effect=iam.Effect.ALLOW, actions=["s3:ListAllMyBuckets", "s3:ListBucket"], resources=["arn:aws:s3:::*"])) 63 | nRole.add_to_policy(iam.PolicyStatement(effect=iam.Effect.ALLOW, actions=["iam:PassRole"], resources=["arn:aws:iam::*:role/" + nRole.role_name])) 64 | 65 | # Create 
the lifecycle configuration content 66 | lifecycle_config_content = """ 67 | #!/bin/bash 68 | search_prefix="gai-inpainting-s3" 69 | bucket_name=$(aws s3 ls | awk '{print $3}' | grep "^$search_prefix") 70 | echo "Buckets found:" 71 | echo "$bucket_name" 72 | aws s3 cp "s3://$bucket_name/notebooks/notebooks/" /home/ec2-user/SageMaker/ --recursive 73 | sudo chmod -R 777 /home/ec2-user/SageMaker 74 | """.strip() 75 | 76 | # Encode the lifecycle configuration content 77 | encoded_config_content = base64.b64encode(lifecycle_config_content.encode()).decode() 78 | 79 | # Create the lifecycle configuration 80 | lifecycle_config = sagemaker.CfnNotebookInstanceLifecycleConfig( 81 | self, 82 | "gaiLcConfig", 83 | notebook_instance_lifecycle_config_name="gaiLcConfig", 84 | on_start=[sagemaker.CfnNotebookInstanceLifecycleConfig.NotebookInstanceLifecycleHookProperty(content=encoded_config_content)] 85 | ) 86 | 87 | ## Create SageMaker Notebook instances cluster 88 | nid = 'genAiInPaintingNotebookInstance' 89 | notebook = sagemaker.CfnNotebookInstance( 90 | self, 91 | nid, 92 | instance_type='ml.c5.2xlarge', 93 | volume_size_in_gb=16, 94 | notebook_instance_name=nid, 95 | role_arn=nRole.role_arn, 96 | lifecycle_config_name=lifecycle_config.notebook_instance_lifecycle_config_name 97 | ) -------------------------------------------------------------------------------- /gai-inpainting-cdk/notebooks/2_GenerateInPaintingImages.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "75fccc56", 6 | "metadata": {}, 7 | "source": [ 8 | "# 2. Running Inference on the GenAI Models\n", 9 | "This Notebook is used to run inference on the 2 models hosted on SageMaker Endpoints. The first model is Segment Anything Model (SAM) and the second model is the InPainting model." 10 | ] 11 | }, 12 | { 13 | "cell_type": "markdown", 14 | "id": "f7fafeef", 15 | "metadata": {}, 16 | "source": [ 17 | "## 2.1 Install Dependencies and Import Libraries\n", 18 | "Here we are using `SageMaker, Numpy, Pillow` libraries for Python" 19 | ] 20 | }, 21 | { 22 | "cell_type": "code", 23 | "execution_count": null, 24 | "id": "45c19e9b", 25 | "metadata": { 26 | "tags": [] 27 | }, 28 | "outputs": [], 29 | "source": [ 30 | "# Import libraries\n", 31 | "import numpy as np, matplotlib.pyplot as plt\n", 32 | "from PIL import Image\n", 33 | "from sagemaker.pytorch import PyTorchPredictor\n", 34 | "from sagemaker.serializers import JSONSerializer\n", 35 | "from sagemaker.deserializers import JSONDeserializer" 36 | ] 37 | }, 38 | { 39 | "cell_type": "markdown", 40 | "id": "13551774", 41 | "metadata": {}, 42 | "source": [ 43 | "## 2.2 Model 1: SAM\n", 44 | "\n", 45 | "### Run inference on the SAM Model\n", 46 | "In this section, we will demonstrate the inference on the SAM model. \n", 47 | "The product image is provided as an input to the model and the product mask is generated as an output. 
" 48 | ] 49 | }, 50 | { 51 | "cell_type": "code", 52 | "execution_count": null, 53 | "id": "6196248a", 54 | "metadata": { 55 | "tags": [] 56 | }, 57 | "outputs": [], 58 | "source": [ 59 | "%store -r SAM_ENDPOINT_NAME\n", 60 | "print(f'SAM Endpoint Name: {SAM_ENDPOINT_NAME}')\n", 61 | "\n", 62 | "raw_image = Image.open(\"images/speaker.png\").convert(\"RGB\")\n", 63 | "\n", 64 | "predictor_sam = PyTorchPredictor(endpoint_name=SAM_ENDPOINT_NAME,\n", 65 | " deserializer=JSONDeserializer())\n", 66 | "\n", 67 | "output_array = predictor_sam.predict(raw_image, initial_args={'Accept': 'application/json'})\n", 68 | "\n", 69 | "mask_image = Image.fromarray(np.array(output_array).astype(np.uint8))\n", 70 | "\n", 71 | "# save the image using PIL Image\n", 72 | "mask_image.save('images/speaker_mask.png')\n", 73 | "\n", 74 | "# We are going to plot the outputs\n", 75 | "plot_images = [raw_image, mask_image]\n", 76 | "titles = ['Original Product Image', 'Mask']\n", 77 | "fig, ax = plt.subplots(1,len(plot_images), dpi = 200)\n", 78 | "for k1, img in enumerate(plot_images):\n", 79 | " ax[k1].imshow(img); ax[k1].axis('off')\n", 80 | " ax[k1].set_title(titles[k1], fontsize=6)" 81 | ] 82 | }, 83 | { 84 | "cell_type": "markdown", 85 | "id": "c66cdc72", 86 | "metadata": {}, 87 | "source": [ 88 | "## 2.3 Model 2: InPainting\n", 89 | "\n", 90 | "### Run inference on the InPainting Model\n", 91 | "In this section, we will demonstrate the inference on the InPainting model.\n", 92 | "The product image, the previously generated mask, foreground prompts, background prompts and negative prompts are used as inputs to generate the desired image. " 93 | ] 94 | }, 95 | { 96 | "cell_type": "code", 97 | "execution_count": null, 98 | "id": "14bcd460", 99 | "metadata": { 100 | "tags": [] 101 | }, 102 | "outputs": [], 103 | "source": [ 104 | "%store -r INPAINTING_ENDPOINT_NAME\n", 105 | "print(f'InPainting Endpoint Name: {INPAINTING_ENDPOINT_NAME}')\n", 106 | "\n", 107 | "raw_image = Image.open(\"images/speaker.png\").convert(\"RGB\")\n", 108 | "mask_image = Image.open('images/speaker_mask.png').convert('RGB')\n", 109 | "prompt_fr = \"apple, books\"\n", 110 | "prompt_bg = \"table\"\n", 111 | "negative_prompt = \"longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, letters\" \n", 112 | "\n", 113 | "if raw_image.size[0]>raw_image.size[1]:\n", 114 | " w = 512\n", 115 | " h = 512*raw_image.size[1]//raw_image.size[0]\n", 116 | "else:\n", 117 | " h = 512\n", 118 | " w = 512*raw_image.size[0]//raw_image.size[1]\n", 119 | "raw_image = raw_image.resize((w,h))\n", 120 | "mask_image = mask_image.resize((w,h))\n", 121 | "\n", 122 | "inputs = {}\n", 123 | "inputs[\"image\"] = np.array(raw_image)\n", 124 | "inputs[\"mask\"] = np.array(mask_image)\n", 125 | "inputs[\"prompt_fr\"] = prompt_fr\n", 126 | "inputs[\"prompt_bg\"] = prompt_bg\n", 127 | "inputs[\"negative_prompt\"] = negative_prompt\n", 128 | "\n", 129 | "predictor_inpainting = PyTorchPredictor(endpoint_name=INPAINTING_ENDPOINT_NAME,\n", 130 | " serializer=JSONSerializer(),\n", 131 | " deserializer=JSONDeserializer())\n", 132 | "\n", 133 | "output_array = predictor_inpainting.predict(inputs, initial_args={'Accept': 'application/json'})\n", 134 | "\n", 135 | "gai_mask = Image.fromarray(np.array(output_array[2]).astype(np.uint8))\n", 136 | "gai_background = Image.fromarray(np.array(output_array[1]).astype(np.uint8))\n", 137 | "gai_image = 
Image.fromarray(np.array(output_array[0]).astype(np.uint8))\n", 138 | "post_image = Image.fromarray(np.array(output_array[3]).astype(np.uint8))\n", 139 | "\n", 140 | "# We are going to plot the outputs\n", 141 | "plot_images = [gai_mask, gai_background, gai_image, post_image]\n", 142 | "titles = ['Refined Mask', 'Generated Background', 'Generated Product Image', 'Post Process Image']\n", 143 | "fig, ax = plt.subplots(1,len(plot_images), dpi = 200)\n", 144 | "for k1, img in enumerate(plot_images):\n", 145 | " ax[k1].imshow(img); ax[k1].axis('off')\n", 146 | " ax[k1].set_title(titles[k1], fontsize=5)\n", 147 | "\n", 148 | "\n", 149 | "# save the generated image using PIL Image\n", 150 | "post_image.save('images/speaker_generated.png')" 151 | ] 152 | }, 153 | { 154 | "cell_type": "code", 155 | "execution_count": null, 156 | "id": "b0ac3bc1", 157 | "metadata": {}, 158 | "outputs": [], 159 | "source": [] 160 | }, 161 | { 162 | "cell_type": "code", 163 | "execution_count": null, 164 | "id": "516ce65a-319a-4cac-a4bb-de499541bca6", 165 | "metadata": {}, 166 | "outputs": [], 167 | "source": [] 168 | } 169 | ], 170 | "metadata": { 171 | "kernelspec": { 172 | "display_name": "conda_python3", 173 | "language": "python", 174 | "name": "conda_python3" 175 | }, 176 | "language_info": { 177 | "codemirror_mode": { 178 | "name": "ipython", 179 | "version": 3 180 | }, 181 | "file_extension": ".py", 182 | "mimetype": "text/x-python", 183 | "name": "python", 184 | "nbconvert_exporter": "python", 185 | "pygments_lexer": "ipython3", 186 | "version": "3.10.8" 187 | } 188 | }, 189 | "nbformat": 4, 190 | "nbformat_minor": 5 191 | } 192 | -------------------------------------------------------------------------------- /gai-inpainting-cdk/notebooks/1_HostGenAIModels.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "b2402bed", 6 | "metadata": {}, 7 | "source": [ 8 | "# 1. Hosting GenAI Models\n", 9 | "This Notebook is used to host 2 models to SageMaker Endpoints. The first model is Segment Anything Model (SAM) and the second model is the InPainting model." 
10 | ] 11 | }, 12 | { 13 | "cell_type": "markdown", 14 | "id": "4b8170ee", 15 | "metadata": {}, 16 | "source": [ 17 | "## 1.1 Install Dependencies and Import Libraries\n", 18 | "Here we are using `SageMaker, Boto3` libraries for Python" 19 | ] 20 | }, 21 | { 22 | "cell_type": "code", 23 | "execution_count": null, 24 | "id": "a641a723", 25 | "metadata": { 26 | "tags": [] 27 | }, 28 | "outputs": [], 29 | "source": [ 30 | "# Import libraries\n", 31 | "import sagemaker, subprocess, boto3\n", 32 | "from datetime import datetime\n", 33 | "from sagemaker import s3, get_execution_role\n", 34 | "from sagemaker.pytorch import PyTorchModel\n", 35 | "from sagemaker.serializers import JSONSerializer\n", 36 | "from sagemaker.deserializers import JSONDeserializer" 37 | ] 38 | }, 39 | { 40 | "cell_type": "code", 41 | "execution_count": null, 42 | "id": "98e6da1e", 43 | "metadata": { 44 | "tags": [] 45 | }, 46 | "outputs": [], 47 | "source": [ 48 | "# Get the right credentials, role and client for SageMaker\n", 49 | "sm_client = boto3.client(service_name=\"sagemaker\")\n", 50 | "role = get_execution_role()\n", 51 | "print(f'Role: {role}')\n", 52 | "\n", 53 | "INSTANCE_TYPE = 'ml.g5.2xlarge'\n", 54 | "\n", 55 | "bashCommand = \"tar -cpzf code.tar.gz code/\"\n", 56 | "process = subprocess.Popen(bashCommand.split(), stdout=subprocess.PIPE)\n", 57 | "output, error = process.communicate()\n", 58 | "\n", 59 | "s3_client = boto3.client('s3')\n", 60 | "s3_resource = boto3.resource('s3')\n", 61 | "sts = boto3.client('sts')\n", 62 | "AWS_ACCOUNT_ID = sts.get_caller_identity()[\"Account\"]\n", 63 | "REGION = s3_client.meta.region_name\n", 64 | "\n", 65 | "bucket = f'gai-inpainting-s3'\n", 66 | "response = s3_client.list_buckets()\n", 67 | "for bucket in response['Buckets']:\n", 68 | " if 'gai-inpainting-s3' in bucket[\"Name\"]:\n", 69 | " bucket = bucket[\"Name\"]\n", 70 | " break\n", 71 | "\n", 72 | "print(f'Bucket: {bucket}')\n", 73 | "sess = sagemaker.Session(default_bucket=bucket.split('s3://')[-1])" 74 | ] 75 | }, 76 | { 77 | "cell_type": "markdown", 78 | "id": "a6065a96", 79 | "metadata": {}, 80 | "source": [ 81 | "## 1.2 Model 1: SAM\n", 82 | "\n", 83 | "### Create and Host SAM Model\n", 84 | "In this section, we will demonstrate how to create and host the endpoint for SAM model." 
85 | ] 86 | }, 87 | { 88 | "cell_type": "code", 89 | "execution_count": null, 90 | "id": "9b205466", 91 | "metadata": { 92 | "tags": [] 93 | }, 94 | "outputs": [], 95 | "source": [ 96 | "SAM_ENDPOINT_NAME = 'sam-pytorch-' + str(datetime.utcnow().strftime('%Y-%m-%d-%H-%M-%S-%f'))\n", 97 | "%store SAM_ENDPOINT_NAME\n", 98 | "\n", 99 | "prefix_sam = \"SAM/demo-custom-endpoint\"\n", 100 | "\n", 101 | "model_data_sam = s3.S3Uploader.upload(\"code.tar.gz\", f's3://{bucket}/{prefix_sam}')\n", 102 | "print(f'Model Data: {model_data_sam}')\n", 103 | "\n", 104 | "model_sam = PyTorchModel(entry_point='inference_sam.py',\n", 105 | " model_data=model_data_sam, \n", 106 | " framework_version='1.12', \n", 107 | " py_version='py38',\n", 108 | " role=role,\n", 109 | " env={'TS_MAX_RESPONSE_SIZE':'2000000000', 'SAGEMAKER_MODEL_SERVER_TIMEOUT' : '300'},\n", 110 | " sagemaker_session=sess,\n", 111 | " name='model-'+SAM_ENDPOINT_NAME)\n", 112 | "\n", 113 | "print(f'SAM Endpoint Name: {SAM_ENDPOINT_NAME}')\n", 114 | "\n", 115 | "predictor_sam = model_sam.deploy(initial_instance_count=1, \n", 116 | " instance_type=INSTANCE_TYPE,\n", 117 | " deserializers=JSONDeserializer(),\n", 118 | " endpoint_name=SAM_ENDPOINT_NAME)" 119 | ] 120 | }, 121 | { 122 | "cell_type": "markdown", 123 | "id": "15c3beda", 124 | "metadata": {}, 125 | "source": [ 126 | "## 1.3 Model 2: InPainting\n", 127 | "\n", 128 | "### Create and Host InPainting Model\n", 129 | "In this section, we will demonstrate how to create and host the endpoint for InPainting model." 130 | ] 131 | }, 132 | { 133 | "cell_type": "code", 134 | "execution_count": null, 135 | "id": "202068c9", 136 | "metadata": { 137 | "tags": [] 138 | }, 139 | "outputs": [], 140 | "source": [ 141 | "INPAINTING_ENDPOINT_NAME = 'inpainting-pytorch-' + str(datetime.utcnow().strftime('%Y-%m-%d-%H-%M-%S-%f'))\n", 142 | "%store INPAINTING_ENDPOINT_NAME\n", 143 | "\n", 144 | "prefix_inpainting = \"InPainting/demo-custom-endpoint\"\n", 145 | "\n", 146 | "model_data_inpainting = s3.S3Uploader.upload(\"code.tar.gz\", f\"s3://{bucket}/{prefix_inpainting}\")\n", 147 | "print(f'Model Data: {model_data_inpainting}')\n", 148 | "\n", 149 | "model_inpainting = PyTorchModel(entry_point='inference_inpainting.py',\n", 150 | " model_data=model_data_inpainting, \n", 151 | " framework_version='1.12', \n", 152 | " py_version='py38',\n", 153 | " role=role,\n", 154 | " env={'TS_MAX_RESPONSE_SIZE':'2000000000', 'SAGEMAKER_MODEL_SERVER_TIMEOUT' : '300'},\n", 155 | " sagemaker_session=sess,\n", 156 | " name='model-'+INPAINTING_ENDPOINT_NAME)\n", 157 | "\n", 158 | "print(f'InPainting Endpoint Name: {INPAINTING_ENDPOINT_NAME}')\n", 159 | "\n", 160 | "predictor_inpainting = model_inpainting.deploy(initial_instance_count=1, \n", 161 | " instance_type=INSTANCE_TYPE,\n", 162 | " serializer=JSONSerializer(),\n", 163 | " deserializers=JSONDeserializer(),\n", 164 | " endpoint_name=INPAINTING_ENDPOINT_NAME)" 165 | ] 166 | }, 167 | { 168 | "cell_type": "code", 169 | "execution_count": null, 170 | "id": "9726d58f-159e-45dc-9fdc-d878ab059b06", 171 | "metadata": {}, 172 | "outputs": [], 173 | "source": [] 174 | } 175 | ], 176 | "metadata": { 177 | "kernelspec": { 178 | "display_name": "conda_python3", 179 | "language": "python", 180 | "name": "conda_python3" 181 | }, 182 | "language_info": { 183 | "codemirror_mode": { 184 | "name": "ipython", 185 | "version": 3 186 | }, 187 | "file_extension": ".py", 188 | "mimetype": "text/x-python", 189 | "name": "python", 190 | "nbconvert_exporter": "python", 191 | "pygments_lexer": 
"ipython3", 192 | "version": "3.10.8" 193 | } 194 | }, 195 | "nbformat": 4, 196 | "nbformat_minor": 5 197 | } 198 | -------------------------------------------------------------------------------- /gai-inpainting-cdk/notebooks/code/inference_inpainting.py: -------------------------------------------------------------------------------- 1 | from PIL import Image 2 | from PIL import ImageFilter 3 | import torch 4 | import os 5 | import json 6 | import io 7 | import cv2 8 | import time 9 | import numpy as np 10 | import matplotlib.pyplot as plt 11 | from diffusers import StableDiffusionInpaintPipeline, StableDiffusionPipeline 12 | import transformers 13 | import albumentations as A 14 | 15 | device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') 16 | 17 | # Define image transformation 18 | transform = A.Compose([ 19 | A.ShiftScaleRotate(shift_limit=0.4, scale_limit=(-0.4, -0.4), rotate_limit=0, p=1, border_mode=1, interpolation=4, 20 | shift_limit_y=[0.1, 0.25]) 21 | ]) 22 | 23 | torch_dtype = torch.float32 24 | if torch.cuda.is_available(): 25 | torch_dtype = torch.float16 26 | 27 | model_id_inpainting = "runwayml/stable-diffusion-inpainting" 28 | model_id_t2i = "runwayml/stable-diffusion-v1-5" 29 | 30 | def find_contour(image): 31 | """ 32 | Finds contours in the given image. 33 | 34 | Args: 35 | image (numpy.ndarray): The input image. 36 | 37 | Returns: 38 | tuple: A tuple containing the contours and the thresholded image. 39 | """ 40 | # Convert image to grayscale 41 | gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) 42 | # Apply binary thresholding 43 | ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU) 44 | # Find contours 45 | contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2] 46 | # Return the contours 47 | return contours, thresh 48 | 49 | def get_centroid(contour): 50 | """ 51 | Calculates the centroid of a contour. 52 | 53 | Args: 54 | contour (numpy.ndarray): The input contour. 55 | 56 | Returns: 57 | tuple: A tuple containing the centroid coordinates (x, y). 58 | """ 59 | M = cv2.moments(contour) 60 | if M['m00'] != 0: 61 | cx = int(M['m10'] / M['m00']) 62 | cy = int(M['m01'] / M['m00']) 63 | else: 64 | cx = 0 65 | cy = 0 66 | return cx, cy 67 | 68 | def model_fn(model_dir): 69 | """ 70 | Loads the model and returns the model object. 71 | 72 | Args: 73 | model_dir (str): The directory where the model is stored. 74 | 75 | Returns: 76 | list: A list containing the loaded models. 77 | """ 78 | print("Executing model_fn from inference.py ...") 79 | model = [] 80 | env = os.environ 81 | 82 | # Load the StableDiffusionInpaintPipeline and StableDiffusionPipeline models 83 | pipeline_inpainting = StableDiffusionInpaintPipeline.from_pretrained(model_id_inpainting, torch_dtype=torch_dtype) 84 | pipeline_t2i = StableDiffusionPipeline.from_pretrained(model_id_t2i, torch_dtype=torch_dtype) 85 | 86 | pipeline_inpainting = pipeline_inpainting.to(device) 87 | pipeline_t2i = pipeline_t2i.to(device) 88 | 89 | model = [pipeline_inpainting, pipeline_t2i] 90 | return model 91 | 92 | def input_fn(request_body, request_content_type): 93 | """ 94 | Parses the input request body and returns the input data for prediction. 95 | 96 | Args: 97 | request_body (str): The request body containing the input data. 98 | request_content_type (str): The content type of the request. 99 | 100 | Returns: 101 | dict: A dictionary containing the parsed input data. 
102 | """ 103 | print("Executing input_fn from inference.py ...") 104 | request_body = json.loads(request_body) 105 | inputs = {} 106 | if request_content_type: 107 | # Load the image and mask from the request body 108 | img_array = np.uint8(np.array(request_body["image"])) 109 | img = Image.fromarray(img_array).convert('RGB').resize((512, 512)) 110 | inputs["image"] = img 111 | msk_array = np.uint8(np.array(request_body["mask"])) 112 | msk = Image.fromarray(np.array(Image.fromarray(msk_array).convert('L')) == 0).convert('RGB').resize((512, 512)) 113 | inputs["mask"] = msk 114 | inputs["prompt_fr"] = request_body["prompt_fr"] 115 | inputs["prompt_bg"] = request_body["prompt_bg"] 116 | inputs["negative_prompt"] = request_body["negative_prompt"] 117 | else: 118 | raise Exception("Unsupported content type: " + request_content_type) 119 | return inputs 120 | 121 | def predict_fn(input_data, model): 122 | """ 123 | Runs the prediction on the input data using the loaded model. 124 | 125 | Args: 126 | input_data (dict): The input data for prediction. 127 | model (list): The loaded models. 128 | 129 | Returns: 130 | dict: The prediction results. 131 | """ 132 | print("Executing predict_fn from inference.py ...") 133 | result = {} 134 | pipeline_inpainting, pipeline_t2i = model[0], model[1] 135 | with torch.no_grad(): 136 | image_bg = pipeline_t2i(input_data["prompt_bg"], guidance_scale=10).images[0] 137 | 138 | # Apply image transformation and blend with the background 139 | transformed_image_mask = transform(image=np.array(input_data["image"]), mask=np.array(input_data["mask"])) 140 | image_np = np.array(image_bg) 141 | idx = np.where(transformed_image_mask["mask"] == 0) 142 | image_np[idx[0], idx[1], :] = transformed_image_mask["image"][idx[0], idx[1], :] 143 | transformed_image_mask["image"] = image_np 144 | 145 | # Find the contours and centroid of the mask 146 | contours, ret = find_contour(transformed_image_mask["mask"]) 147 | cx, cy = get_centroid(contours[0]) 148 | max_min_cy = (contours[0][:, 0, 1].max() - contours[0][:, 0, 1].min()) // 3 149 | 150 | # Create masks for the lower and upper portions of the object 151 | anchor_point = int(np.random.uniform(0, max_min_cy)) 152 | anti_mask = np.copy(transformed_image_mask["mask"]) 153 | anti_mask[(cy - anchor_point):, :] = 0 154 | anti_mask_lower = 255 - (transformed_image_mask["mask"] - anti_mask) 155 | 156 | anti_mask = np.copy(transformed_image_mask["mask"]) 157 | anti_mask[:(cy - anchor_point), :] = 0 158 | anti_mask_upper = 255 - (transformed_image_mask["mask"] - anti_mask) 159 | anti_mask_upper = Image.fromarray(255 - anti_mask_upper).filter(ImageFilter.GaussianBlur(4)) 160 | new_mask_ori = Image.fromarray(transformed_image_mask["mask"]) 161 | new_mask = new_mask_ori.filter(ImageFilter.GaussianBlur(8)) 162 | 163 | transformed_image_mask["mask"] = 255 - anti_mask_lower 164 | transformed_image_mask["mask"] = np.array( 165 | Image.fromarray(transformed_image_mask["mask"]).filter(ImageFilter.GaussianBlur(4))) 166 | 167 | image_fr = Image.fromarray(transformed_image_mask["image"]) 168 | 169 | # Run the inpainting pipeline on the foreground image 170 | image_fr = pipeline_inpainting( 171 | prompt=input_data["prompt_fr"], 172 | image=image_fr, 173 | mask_image=Image.fromarray(transformed_image_mask["mask"]), 174 | num_inference_steps=100, 175 | guidance_scale=8.5, 176 | negative_prompt=input_data["negative_prompt"] 177 | ).images[0] 178 | 179 | # Run the inpainting pipeline on the background image 180 | image_fr = pipeline_inpainting( 181 | 
prompt=input_data["prompt_bg"], 182 | image=image_fr, 183 | mask_image=anti_mask_upper, 184 | num_inference_steps=100, 185 | guidance_scale=8.5, 186 | negative_prompt=input_data["negative_prompt"] 187 | ).images[0] 188 | 189 | result["image"] = image_fr 190 | result["background"] = image_bg 191 | result["mask"] = Image.fromarray(((1 - np.array(new_mask) / 255) * (1) * 255).astype(np.uint8)) 192 | 193 | scale = (1 - np.array(new_mask_ori)/255)*(1 - np.array(new_mask_ori)/255) 194 | image_np = scale*transformed_image_mask["image"] + (1 - scale)*image_fr 195 | result["postprocess_image"] = Image.fromarray(image_np.astype(np.uint8)) 196 | 197 | 198 | if torch.cuda.is_available(): 199 | torch.cuda.empty_cache() 200 | torch.cuda.ipc_collect() 201 | 202 | return result 203 | 204 | def output_fn(prediction_output, content_type): 205 | """ 206 | Converts the prediction output to the specified content type and returns it. 207 | 208 | Args: 209 | prediction_output (dict): The prediction output. 210 | content_type (str): The desired content type. 211 | 212 | Returns: 213 | str: The prediction output converted to the specified content type. 214 | """ 215 | print("Executing output_fn from inference.py ...") 216 | all_list = [np.array(prediction_output["image"]).tolist(), np.array(prediction_output["background"]).tolist(), 217 | np.array(prediction_output["mask"]).tolist(), np.array(prediction_output["postprocess_image"]).tolist()] 218 | 219 | return json.dumps(all_list) 220 | -------------------------------------------------------------------------------- /gai-inpainting-cdk/gai-inpainting-cfn-template.yaml: -------------------------------------------------------------------------------- 1 | Resources: 2 | genAiInPainting: 3 | Type: AWS::S3::Bucket 4 | Properties: 5 | BucketName: 6 | Fn::Join: 7 | - "" 8 | - - gai-inpainting-s3- 9 | - Ref: AWS::AccountId 10 | Tags: 11 | - Key: aws-cdk:auto-delete-objects 12 | Value: "true" 13 | UpdateReplacePolicy: Delete 14 | DeletionPolicy: Delete 15 | Metadata: 16 | aws:cdk:path: GenAiInpaintStack/genAiInPaintingS3/Resource 17 | genAiInPaintingS3Policy: 18 | Type: AWS::S3::BucketPolicy 19 | Properties: 20 | Bucket: 21 | Ref: genAiInPainting 22 | PolicyDocument: 23 | Statement: 24 | - Action: 25 | - s3:DeleteObject* 26 | - s3:GetBucket* 27 | - s3:List* 28 | Effect: Allow 29 | Principal: 30 | AWS: 31 | Fn::GetAtt: 32 | - CustomS3AutoDeleteObjectsCustomResourceProviderRole 33 | - Arn 34 | Resource: 35 | - Fn::GetAtt: 36 | - genAiInPainting 37 | - Arn 38 | - Fn::Join: 39 | - "" 40 | - - Fn::GetAtt: 41 | - genAiInPainting 42 | - Arn 43 | - /* 44 | Version: "2012-10-17" 45 | Metadata: 46 | aws:cdk:path: GenAiInpaintStack/genAiInPaintingS3/Policy/Resource 47 | genAiInPaintingS3AutoDeleteObjectsCustomResource: 48 | Type: Custom::S3AutoDeleteObjects 49 | Properties: 50 | ServiceToken: 51 | Fn::GetAtt: 52 | - CustomS3AutoDeleteObjectsCustomResourceProviderHandler 53 | - Arn 54 | BucketName: 55 | Ref: genAiInPainting 56 | DependsOn: 57 | - genAiInPaintingS3Policy 58 | UpdateReplacePolicy: Delete 59 | DeletionPolicy: Delete 60 | Metadata: 61 | aws:cdk:path: GenAiInpaintStack/genAiInPaintingS3/AutoDeleteObjectsCustomResource/Default 62 | CustomS3AutoDeleteObjectsCustomResourceProviderRole: 63 | Type: AWS::IAM::Role 64 | Properties: 65 | AssumeRolePolicyDocument: 66 | Version: "2012-10-17" 67 | Statement: 68 | - Action: sts:AssumeRole 69 | Effect: Allow 70 | Principal: 71 | Service: lambda.amazonaws.com 72 | ManagedPolicyArns: 73 | - Fn::Sub: 
arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole 74 | Metadata: 75 | aws:cdk:path: GenAiInpaintStack/Custom::S3AutoDeleteObjectsCustomResourceProvider/Role 76 | CustomS3AutoDeleteObjectsCustomResourceProviderHandler: 77 | Type: AWS::Lambda::Function 78 | Properties: 79 | Code: 80 | S3Bucket: 81 | Fn::Sub: aws-blogs-artifacts-public 82 | S3Key: artifacts/ML-14623/CfnTemplate/6babbac1f25446ab4660ead0ad5972e3a7742f50c6d8326af98a8bcd5d485335.zip 83 | Timeout: 900 84 | MemorySize: 128 85 | Handler: __entrypoint__.handler 86 | Role: 87 | Fn::GetAtt: 88 | - CustomS3AutoDeleteObjectsCustomResourceProviderRole 89 | - Arn 90 | Runtime: nodejs14.x 91 | Description: 92 | Fn::Join: 93 | - "" 94 | - - "Lambda function for auto-deleting objects in " 95 | - Ref: genAiInPainting 96 | - " S3 bucket." 97 | DependsOn: 98 | - CustomS3AutoDeleteObjectsCustomResourceProviderRole 99 | Metadata: 100 | aws:cdk:path: GenAiInpaintStack/Custom::S3AutoDeleteObjectsCustomResourceProvider/Handler 101 | aws:asset:path: asset.6babbac1f25446ab4660ead0ad5972e3a7742f50c6d8326af98a8bcd5d485335 102 | aws:asset:property: Code 103 | genAiS3DeployAwsCliLayer: 104 | Type: AWS::Lambda::LayerVersion 105 | Properties: 106 | Content: 107 | S3Bucket: 108 | Fn::Sub: aws-blogs-artifacts-public 109 | S3Key: artifacts/ML-14623/CfnTemplate/c409e6c5845f1f349df8cd84e160bf6f1c35d2b060b63e1f032f9bd39d4542cc.zip 110 | Description: /opt/awscli/aws 111 | Metadata: 112 | aws:cdk:path: GenAiInpaintStack/genAiS3Deploy/AwsCliLayer/Resource 113 | aws:asset:path: asset.c409e6c5845f1f349df8cd84e160bf6f1c35d2b060b63e1f032f9bd39d4542cc.zip 114 | aws:asset:is-bundled: false 115 | aws:asset:property: Content 116 | CustomCDKBucketDeploymentServiceRole: 117 | Type: AWS::IAM::Role 118 | Properties: 119 | AssumeRolePolicyDocument: 120 | Statement: 121 | - Action: sts:AssumeRole 122 | Effect: Allow 123 | Principal: 124 | Service: lambda.amazonaws.com 125 | Version: "2012-10-17" 126 | ManagedPolicyArns: 127 | - Fn::Join: 128 | - "" 129 | - - "arn:" 130 | - Ref: AWS::Partition 131 | - :iam::aws:policy/service-role/AWSLambdaBasicExecutionRole 132 | Metadata: 133 | aws:cdk:path: GenAiInpaintStack/Custom::CDKBucketDeployment/ServiceRole/Resource 134 | CustomCDKBucketDeploymentServiceRoleDefaultPolicy: 135 | Type: AWS::IAM::Policy 136 | Properties: 137 | PolicyDocument: 138 | Statement: 139 | - Action: 140 | - s3:GetBucket* 141 | - s3:GetObject* 142 | - s3:List* 143 | Effect: Allow 144 | Resource: 145 | - Fn::Join: 146 | - "" 147 | - - "arn:" 148 | - Ref: AWS::Partition 149 | - ":s3:::" 150 | - Fn::Sub: cdk-hnb659fds-assets-${AWS::AccountId}-${AWS::Region} 151 | - /* 152 | - Fn::Join: 153 | - "" 154 | - - "arn:" 155 | - Ref: AWS::Partition 156 | - ":s3:::" 157 | - Fn::Sub: cdk-hnb659fds-assets-${AWS::AccountId}-${AWS::Region} 158 | - Action: 159 | - s3:Abort* 160 | - s3:DeleteObject* 161 | - s3:GetBucket* 162 | - s3:GetObject* 163 | - s3:List* 164 | - s3:PutObject 165 | - s3:PutObjectLegalHold 166 | - s3:PutObjectRetention 167 | - s3:PutObjectTagging 168 | - s3:PutObjectVersionTagging 169 | Effect: Allow 170 | Resource: 171 | - Fn::GetAtt: 172 | - genAiInPainting 173 | - Arn 174 | - Fn::Join: 175 | - "" 176 | - - Fn::GetAtt: 177 | - genAiInPainting 178 | - Arn 179 | - /* 180 | Version: "2012-10-17" 181 | PolicyName: CustomCDKBucketDeploymentServiceRoleDefaultPolicy 182 | Roles: 183 | - Ref: CustomCDKBucketDeploymentServiceRole 184 | Metadata: 185 | aws:cdk:path: 
GenAiInpaintStack/Custom::CDKBucketDeployment/ServiceRole/DefaultPolicy/Resource 186 | CustomCDKBucketDeployment: 187 | Type: AWS::Lambda::Function 188 | Properties: 189 | Code: 190 | S3Bucket: 191 | Fn::Sub: aws-blogs-artifacts-public 192 | S3Key: artifacts/ML-14623/CfnTemplate/6ddcf10002539818a9256eff3fb2b22aa09298d8f946e26ba121c175a600c44e.zip 193 | Role: 194 | Fn::GetAtt: 195 | - CustomCDKBucketDeploymentServiceRole 196 | - Arn 197 | Handler: index.handler 198 | Layers: 199 | - Ref: genAiS3DeployAwsCliLayer 200 | Runtime: python3.9 201 | Timeout: 900 202 | DependsOn: 203 | - CustomCDKBucketDeploymentServiceRoleDefaultPolicy 204 | - CustomCDKBucketDeploymentServiceRole 205 | Metadata: 206 | aws:cdk:path: GenAiInpaintStack/Custom::CDKBucketDeployment/Resource 207 | aws:asset:path: asset.6ddcf10002539818a9256eff3fb2b22aa09298d8f946e26ba121c175a600c44e 208 | aws:asset:is-bundled: false 209 | aws:asset:property: Code 210 | genAiInPaintingNotebookAccessRole: 211 | Type: AWS::IAM::Role 212 | Properties: 213 | AssumeRolePolicyDocument: 214 | Statement: 215 | - Action: sts:AssumeRole 216 | Effect: Allow 217 | Principal: 218 | Service: sagemaker.amazonaws.com 219 | Version: "2012-10-17" 220 | ManagedPolicyArns: 221 | - Fn::Join: 222 | - "" 223 | - - "arn:" 224 | - Ref: AWS::Partition 225 | - :iam::aws:policy/AmazonSageMakerFullAccess 226 | Metadata: 227 | aws:cdk:path: GenAiInpaintStack/genAiInPaintingNotebookAccessRole/Resource 228 | genAiInPaintingNotebookAccessRoleDefaultPolicy: 229 | Type: AWS::IAM::Policy 230 | Properties: 231 | PolicyDocument: 232 | Statement: 233 | - Action: 234 | - logs:* 235 | - sagemaker:Describe* 236 | - sagemaker:*Model* 237 | - sagemaker:*Endpoint* 238 | - sagemaker:*ProcessingJob* 239 | Effect: Allow 240 | Resource: "*" 241 | - Action: s3:* 242 | Effect: Allow 243 | Resource: 244 | Fn::Join: 245 | - "" 246 | - - Fn::GetAtt: 247 | - genAiInPainting 248 | - Arn 249 | - /* 250 | - Action: ecr:BatchGetImage 251 | Effect: Allow 252 | Resource: arn:aws:ecr:::* 253 | - Action: 254 | - s3:ListAllMyBuckets 255 | - s3:ListBucket 256 | Effect: Allow 257 | Resource: arn:aws:s3:::* 258 | - Action: iam:PassRole 259 | Effect: Allow 260 | Resource: 261 | Fn::Join: 262 | - "" 263 | - - arn:aws:iam::*:role/ 264 | - Ref: genAiInPaintingNotebookAccessRole 265 | Version: "2012-10-17" 266 | PolicyName: genAiInPaintingNotebookAccessRoleDefaultPolicy 267 | Roles: 268 | - Ref: genAiInPaintingNotebookAccessRole 269 | Metadata: 270 | aws:cdk:path: GenAiInpaintStack/genAiInPaintingNotebookAccessRole/DefaultPolicy/Resource 271 | gaiLcConfig: 272 | Type: AWS::SageMaker::NotebookInstanceLifecycleConfig 273 | Properties: 274 | NotebookInstanceLifecycleConfigName: gaiLcConfig 275 | OnStart: 276 | - Content: 277 | Fn::Base64: !Sub | 278 | #!/bin/bash 279 | set -e 280 | # Clone CodeCommit/GitHub repository 281 | cd /home/ec2-user/SageMaker/ 282 | git clone https://github.com/aws-samples/sagemaker-generative-ai-for-product-placement-using-images.git 283 | mv sagemaker-generative-ai-for-product-placement-using-images/gai-inpainting-cdk/notebooks/* . 
284 |               rm -rf sagemaker-generative-ai-for-product-placement-using-images
285 |               sudo chmod -R 777 /home/ec2-user/SageMaker
286 |   genAiInPaintingNotebookInstance:
287 |     Type: AWS::SageMaker::NotebookInstance
288 |     Properties:
289 |       InstanceType: ml.c5.2xlarge
290 |       RoleArn:
291 |         Fn::GetAtt:
292 |           - genAiInPaintingNotebookAccessRole
293 |           - Arn
294 |       LifecycleConfigName: gaiLcConfig
295 |       NotebookInstanceName: genAiInPaintingNotebookInstance
296 |       VolumeSizeInGB: 16
297 |     Metadata:
298 |       aws:cdk:path: GenAiInpaintStack/genAiInPaintingNotebookInstance
299 |   CDKMetadata:
300 |     Type: AWS::CDK::Metadata
301 |     Properties:
302 |       Analytics: v2:deflate64:H4sIAAAAAAAA/21Q0WrDMAz8lr673tqOvbcZg0EZJYW9BsdRUy2ONSxnJZj8+5w4Sxns6U4nodNpK5+e5eNK3Xitq2ZtsJTh7JVuRJSKwDsZDp1uwIvsYmeW4EQGdX+X5zoVB8UwCN4VoYIvQ30L1svUelkEoZjBs9yPMAij2rJSMsSNR9WD+wDHSFac0dYGPNnXzmo/KguJo798EKhaGXIyMMoT3k9MLF6kamhVA26yeScPJVHzZtkrq+GIF9C9NpCRvWAt/hkZBpF17KnNgalzOpkt/E/r5OgbK3BiChi/Wsckc8zCjBGL+GRtUO5vnBmcUo/7on2FKdSp91eyDzu52cjt6pMR166zHluQecIfwiKbrMIBAAA=
303 |     Metadata:
304 |       aws:cdk:path: GenAiInpaintStack/CDKMetadata/Default
305 |     Condition: CDKMetadataAvailable
306 | Conditions:
307 |   CDKMetadataAvailable:
308 |     Fn::Or:
309 |       - Fn::Or:
310 |           - Fn::Equals:
311 |               - Ref: AWS::Region
312 |               - af-south-1
313 |           - Fn::Equals:
314 |               - Ref: AWS::Region
315 |               - ap-east-1
316 |           - Fn::Equals:
317 |               - Ref: AWS::Region
318 |               - ap-northeast-1
319 |           - Fn::Equals:
320 |               - Ref: AWS::Region
321 |               - ap-northeast-2
322 |           - Fn::Equals:
323 |               - Ref: AWS::Region
324 |               - ap-south-1
325 |           - Fn::Equals:
326 |               - Ref: AWS::Region
327 |               - ap-southeast-1
328 |           - Fn::Equals:
329 |               - Ref: AWS::Region
330 |               - ap-southeast-2
331 |           - Fn::Equals:
332 |               - Ref: AWS::Region
333 |               - ca-central-1
334 |           - Fn::Equals:
335 |               - Ref: AWS::Region
336 |               - cn-north-1
337 |           - Fn::Equals:
338 |               - Ref: AWS::Region
339 |               - cn-northwest-1
340 |       - Fn::Or:
341 |           - Fn::Equals:
342 |               - Ref: AWS::Region
343 |               - eu-central-1
344 |           - Fn::Equals:
345 |               - Ref: AWS::Region
346 |               - eu-north-1
347 |           - Fn::Equals:
348 |               - Ref: AWS::Region
349 |               - eu-south-1
350 |           - Fn::Equals:
351 |               - Ref: AWS::Region
352 |               - eu-west-1
353 |           - Fn::Equals:
354 |               - Ref: AWS::Region
355 |               - eu-west-2
356 |           - Fn::Equals:
357 |               - Ref: AWS::Region
358 |               - eu-west-3
359 |           - Fn::Equals:
360 |               - Ref: AWS::Region
361 |               - me-south-1
362 |           - Fn::Equals:
363 |               - Ref: AWS::Region
364 |               - sa-east-1
365 |           - Fn::Equals:
366 |               - Ref: AWS::Region
367 |               - us-east-1
368 |           - Fn::Equals:
369 |               - Ref: AWS::Region
370 |               - us-east-2
371 |       - Fn::Or:
372 |           - Fn::Equals:
373 |               - Ref: AWS::Region
374 |               - us-west-1
375 |           - Fn::Equals:
376 |               - Ref: AWS::Region
377 |               - us-west-2
378 | Parameters:
379 |   BootstrapVersion:
380 |     Type: AWS::SSM::Parameter::Value<String>
381 |     Default: /cdk-bootstrap/hnb659fds/version
382 |     Description: Version of the CDK Bootstrap resources in this environment, automatically retrieved from SSM Parameter Store. [cdk:skip]
383 | Rules:
384 |   CheckBootstrapVersion:
385 |     Assertions:
386 |       - Assert:
387 |           Fn::Not:
388 |             - Fn::Contains:
389 |                 - - "1"
390 |                   - "2"
391 |                   - "3"
392 |                   - "4"
393 |                   - "5"
394 |                 - Ref: BootstrapVersion
395 |
--------------------------------------------------------------------------------
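
Note on deploying the template above: it creates IAM roles and resolves its BootstrapVersion parameter from the SSM parameter /cdk-bootstrap/hnb659fds/version, so the target account and region must already be CDK-bootstrapped (or that parameter must exist). Besides the console Launch Stack flow and cdk deploy, the same file can be deployed programmatically. This is only a minimal sketch under those assumptions; the stack name is arbitrary.

import boto3

cfn = boto3.client("cloudformation")

# Read the synthesized template shipped in the repository.
with open("gai-inpainting-cdk/gai-inpainting-cfn-template.yaml") as f:
    template_body = f.read()

# CAPABILITY_NAMED_IAM acknowledges the IAM roles and policies the template creates.
cfn.create_stack(
    StackName="GenAiInpaintStack",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Block until the notebook instance and supporting resources are created.
cfn.get_waiter("stack_create_complete").wait(StackName="GenAiInpaintStack")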
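
Note on consuming the inference output: output_fn in inference_inpainting.py above serializes the prediction as a JSON list of four nested arrays, in order the generated image, the background, the mask, and the post-processed composite. The sketch below shows one way a client could invoke the deployed inpainting endpoint and turn that response back into images. The endpoint name and the request-body keys "image" and "mask_image" are assumptions for illustration; only the prompt_bg/negative_prompt fields and the four-array response layout come from the code above, and the notebooks define the exact request format actually used in this sample.

import json

import boto3
import numpy as np
from PIL import Image

ENDPOINT_NAME = "gai-inpainting-endpoint"  # hypothetical endpoint name

runtime = boto3.client("sagemaker-runtime")

# Hypothetical request payload: the sample speaker image and its mask as nested
# lists, plus the two prompt fields read by the inpainting predict_fn.
payload = {
    "image": np.array(Image.open("images/speaker.png")).tolist(),
    "mask_image": np.array(Image.open("images/speaker_mask.png")).tolist(),
    "prompt_bg": "a speaker on a wooden table in a cozy living room",
    "negative_prompt": "low quality, blurry, distorted",
}

response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT_NAME,
    ContentType="application/json",
    Body=json.dumps(payload),
)

# output_fn returns json.dumps([image, background, mask, postprocess_image]),
# so the body decodes into four nested-list arrays.
image_arr, background_arr, mask_arr, postprocess_arr = json.loads(
    response["Body"].read()
)

Image.fromarray(np.array(background_arr, dtype=np.uint8)).save("background.png")
Image.fromarray(np.array(postprocess_arr, dtype=np.uint8)).save("postprocess.png")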