├── .gitignore ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── DEVELOPMENT.md ├── LICENSE ├── README.md ├── assets ├── cloudformation_output.png ├── postman.jpg ├── postman_auth.png └── solution.png ├── cloudformation └── template.yml ├── custom-nodes ├── BedrockNode │ ├── __init__.py │ └── bedrock_node.py ├── README.md ├── SagemakerNode │ ├── __init__.py │ └── sagemaker_node.py ├── deploy_flux_dev_pipe.ipynb ├── imgs │ ├── bedrock-1.png │ ├── bedrock-2.png │ ├── bedrock-3.png │ ├── sm-flux-1.png │ ├── sm-flux-2.png │ └── sm-flux-3.png └── workflows │ ├── bedrock_workflow.json │ └── sagemaker_workflow.json ├── deploy.sh ├── image ├── Dockerfile.gui ├── Dockerfile.inference ├── code │ ├── api_server.py │ ├── comfyui_prompt.py │ ├── nginx.conf │ ├── requirements.txt │ ├── serve │ └── wsgi.py └── extra_model_paths.yaml ├── lambda ├── lambda_function.py └── workflow │ ├── flux1-dev-fp8-ckpt.json │ ├── flux1-schnell-fp8-ckpt.json │ └── workflow_api.json └── model └── build.sh /.gitignore: -------------------------------------------------------------------------------- 1 | model-artifact/ 2 | model-artifact*.tar.gz 3 | model-artifact*.tgz 4 | lambda/lambda.zip 5 | *.zip 6 | .DS_Store -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. 5 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *main* branch. 27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. 
Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 38 | 39 | GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute on. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start. 45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments. 51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public github issue. 55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 60 | -------------------------------------------------------------------------------- /DEVELOPMENT.md: -------------------------------------------------------------------------------- 1 | # Development Guide 2 | 3 | ## Model Data 4 | ComfyUI loads models from folder `ComfyUI/models`, and supports loading extra search paths by configuring `extra_model_paths.yaml`. We pack all the necessary model files into a tar gzipped file and upload to S3. Then these files will be copied to `/opt/ml/model` by SageMaker, and can be used by ComfyUI inside inference container. 5 | 6 | ### How to prepare model data 7 | The model prepare script can be found at [model/build.sh](model/build.sh), which downloads models from HuggingFace, creates tar gzipped file and uploads to S3. The model files are installed to `model/model-artifact`. Customize it to prepare your models. 8 | ```sh 9 | cd ./model 10 | ./build.sh 11 | ``` 12 | *Note [model/build.sh](model/build.sh) is executed by [deploy.sh](deploy.sh) too.* 13 | 14 | 15 | ## Inference Image 16 | This section describes the container that runs your inference code for hosting services. Read [AWS documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-inference-code.html) for how SageMaker works. 17 | 18 | ### Highlights 19 | - ComfyUI is running in container and listening on `127.0.0.1:8188`. The inference code will access to the local ComfyUI server by REST api and WebSocket. 20 | - The container has read-only access to `/opt/ml/model`, which SageMaker copies the model artifacts from S3 location to this directory. `extra_model_paths.yaml` of ComfyUI is configured to load models (such as CheckPoint, VAE, LoRA) from this path. 
21 | - The container has a Flask server listening on port 8080 and accept `POST` requests to `/invocations` and `GET` requests to `/ping` endpoints. 22 | - Health Check (`GET` requests to `/ping`) is to check whether the local ComfyUI is still running and responding. 23 | - Inference Requests (`POST` requests to `/invocations`) is implemented by passing the payload to ComfyUI server. The payload is the same as used by ComfyUI GUI, in which the network traffics inspected in DevTools of browser. 24 | - Inference result is the image itself. If `Accept` header of the inference requests indicate jpeg is supported (e.g., `*/*`, `image/jpeg`), the output image will be converted to jpeg, else leave default as png. 25 | - Environment variables supported: 26 | - `JPEG_QUALITY` - Set between 0 to 95 for jpeg quality (default 90) 27 | - `DEBUG_HEADER` - Set to `true` to print HTTP header of requests in CloudWatch log 28 | 29 | ## Local run of ComfyUI GUI 30 | Follow the following to build and run ComfyUI locally with GUI. The image install ComfyUI same way as inference image does, so you can use it for model testing and tuning image workflow. The initial part of the Dockerfile is the same as the inference image, so most layers are shared. 31 | 32 | 33 | 1. Build the image. 34 | ```sh 35 | # run inside comfyui-on-amazon-sagemaker folder 36 | export IMAGE_GUI="comfyui-gui:latest" 37 | export LOCAL_MODEL_PATH="${PWD}/model/model-artifact" 38 | docker build -t ${IMAGE_GUI} ./image -f ./image/Dockerfile.gui 39 | ``` 40 | 41 | 2. Run the container. Note it requires model data to [prepared](./DEVELOPMENT.md#how-to-prepare-model-data) in advance. 42 | ```sh 43 | docker run --rm --gpus all --volume ${LOCAL_MODEL_PATH}:/opt/ml/model --publish 8188:8188 ${IMAGE_GUI} 44 | ``` 45 | 46 | 3. Open a browser and browse to `http://:8188`. Make sure the inbound rules of EC2 security group allows port 8188 from your browser. 47 | 48 | ## Workflow File 49 | 50 | ### How to download it from ComfyUI 51 | Turn on **Enable Dev mode Options** from the ComfyUI settings (settings icon in the bottom right menu), then you will see **Save (API format)** button appear. Click **Save (API format)** button to download the json file. Replace the positive prompt text to `POSITIVE_PROMT_PLACEHOLDER`, and negative prompt to `NEGATIVE_PROMPT_PLACEHOLDER` which allows the lambda function to replace with during invocation. Then put the json file inside [lambda/workflow/](lambda/workflow/) folder. 52 | 53 | ### Lambda Function 54 | Source code of Lambda function is in [lambda/lambda_function.py](lambda/lambda_function.py). In this project the lambda function is invoked by lambda function URL, but you can integrate with API Gateway or ALB for your application. You can find the full request and response payloads from [AWS documentation](https://docs.aws.amazon.com/lambda/latest/dg/urls-invocation.html#urls-payloads). 55 | 56 | This is the sample event for lambda function URL. Only `body` field is used. 
57 | ```json 58 | { 59 | "body": "{\"positive_prompt\": \"hill happy dog\",\"negative_prompt\": \"hill\",\"prompt_file\": \"workflow_api.json\",\"seed\": 123}" 60 | } 61 | ``` 62 | 63 | Here are the fields in the body: 64 | | Key | Value | Optional | 65 | |-----------------|---------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------| 66 | | positive_prompt | Text to replace `POSITIVE_PROMT_PLACEHOLDER` in prompt | | 67 | | negative_prompt | Text to replace `NEGATIVE_PROMPT_PLACEHOLDER` in prompt | | 68 | | prompt_file | Workflow file in lambda/workflow/ | Yes. Default value: `workflow_api.json` | 69 | | seed | Seed integer | Yes. If not specified, a random seed will be used. | 70 | 71 | 72 | ## CloudFormation 73 | CloudFormation template can be found at [cloudformation/template.yml](cloudformation/template.yml). [deploy.sh](deploy.sh) passes parameters to CloudFormation template. 74 | 75 | ### Specify ComfyUI Version 76 | Check the commit ID or tag version on [ComfyUI repository](https://github.com/comfyanonymous/ComfyUI), then update `COMFYUI_GIT_REF` in [deploy.sh](deploy.sh#L23). 77 | 78 | ### SageMaker Model 79 | To create or update the SageMaker model you will need the ECR url and the model S3 path, refer to [AWS documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deploy-models.html) for more details about model deployment. You can also read [AWS documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-test-endpoints.html) on how to invoke endpoint for real-time inference of generating image. You can pass the new values for `ModelVersion`, `ModelDataS3Key`, and `ModelEcrImage` and update the CloudFormation stack. 80 | 81 | ### SageMaker Instance Type 82 | The default instance type is `ml.g5.xlarge`. Change to `ml.g4dn.xlarge` for lower cost or `ml.g5.xlarge` is not available in your AWS region. 83 | 84 | ### SageMaker Auto Scaling 85 | By default it is disabled, you can set `SageMakerAutoScaling` to enable it. You can also uncomment the followings to define the schedule actions to set dynamic max and min by schedule. 86 | ```yaml 87 | ComfyUIScalableTarget: 88 | Type: "AWS::ApplicationAutoScaling::ScalableTarget" 89 | DependsOn: ComfyUIEndpoint 90 | Condition: EnableAutoScaling 91 | Properties: 92 | MaxCapacity: 3 93 | MinCapacity: 1 94 | ResourceId: !Sub "endpoint/${ComfyUIEndpoint.EndpointName}/variant/${AppName}-${ModelVersion}" 95 | ScalableDimension: sagemaker:variant:DesiredInstanceCount 96 | ServiceNamespace: sagemaker 97 | # Optional: Uncomment below to define autoscaling capacity according to schedule 98 | # ScheduledActions: 99 | # - ScheduledActionName: scheduled-action-at-1800-utc 100 | # ScalableTargetAction: 101 | # MaxCapacity: 1 102 | # MinCapacity: 1 103 | # Schedule: cron(0 18 * * ? *) 104 | # - ScheduledActionName: scheduled-action-at-0400-utc 105 | # ScalableTargetAction: 106 | # MaxCapacity: 3 107 | # MinCapacity: 1 108 | # Schedule: cron(0 4 * * ? *) 109 | ``` 110 | ### Lambda Function URL Auth 111 | By default AWS_IAM auth is enabled, and you need to [sign your request](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html) with IAM to protect your endpoint. You may set `LambdaUrlAuthType` to NONE for testing but it's not recommended. 
Alternatively, you can disable the lambda function URL and put the lambda function under api gateway or ALB, or even invoke the sagemaker endpoint directly from your current backend. 112 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT No Attribution 2 | 3 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy of 6 | this software and associated documentation files (the "Software"), to deal in 7 | the Software without restriction, including without limitation the rights to 8 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of 9 | the Software, and to permit persons to whom the Software is furnished to do so. 10 | 11 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 12 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS 13 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR 14 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER 15 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 16 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 17 | 18 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # ComfyUI on Amazon SageMaker 2 | This project demonstrates how to generate images using Stable Diffusion by hosting ComfyUI on Amazon SageMaker Inference. It is a design pattern that allows adding GenAI image generation capability to your application. 3 | 4 | [ComfyUI](https://github.com/comfyanonymous/ComfyUI) is one of the most popular GUI and backend that allows you to generate images using Stable Diffusion. Some of the key features: 5 | - ⭐ supports complex image generation workflows using GUI without needing to code anything 6 | - ⭐ supports Stable Diffusion - SD1.x, SD2.x, SDXL, SDXL Turbo and SD3 7 | - ⭐ supports [Flux.1 models](https://blackforestlabs.ai/announcing-black-forest-labs/) from Black Forest Labs 8 | - ⭐ can load ckpt, safetensors and diffusers models 9 | - ⭐ supports Embeddings, Hypernetworks, LCM Models, Loras, Hires fix 10 | 11 | By hosting ComfyUI using Amazon SageMaker Inference, it can be particularly suitable when you want to: 12 | * self-host models and keep all your data within your AWS account and region 13 | * intergrate text-to-image or image-to-image in your application using Stable Diffusion but no coding on PyTorch (just intergrate this pattern into your application) 14 | * use state-of-the-art models (checkpoints, embeddings, hypernetwork, LoRA, Controlnet) from community, such as [Hugging Face](https://huggingface.co/) or [Civitai](https://civitai.com/), yet without any coding 15 | * make use of fully managed service without taking care of setting up and managing instances, software version compatibilities, and patching versions 16 | * use autoscaling to shut down instances when there is no usage to prevent idle capacity and reduce inference cost 17 | 18 | ## Architecture 19 | There is a Lambda function to invoke SageMaker inference endpoint (which is running ComfyUI) for generating images. 
For illustration, [Lambda function URL](https://docs.aws.amazon.com/lambda/latest/dg/urls-configuration.html) is configured so you can test the image generation by calling to this dedicated HTTPS endpoint. 20 | ![Solution](./assets/solution.png) 21 | 22 | ## Deployment Guide 23 | ### Environments 24 | 25 | The easiest way is to launch an EC2 instance of `g5.xlarge` running AMI `Deep Learning OSS Nvidia Driver AMI GPU PyTorch 2.3.0 (Amazon Linux 2)`. Connect to the instane using **Session Manager**, then add user to group `docker` by running the following command. 26 | 27 | ```bash 28 | sudo usermod -a -G docker $(whoami) 29 | ``` 30 | 31 | Disconnect and connect again for updated group membership to take effects. 32 | 33 | > - If you do not run ComfyUI locally, non-gpu instance such as `t3.small` also works. 34 | > - If you want to run FLUX.1 model, use at least `g5.2xlarge` or above for fp8 version. use at least `g5.4xlarge` for fp16 version. 35 | 36 | [AWS Cloud9](https://console.aws.amazon.com/cloud9control) or local machine also work but make sure the followings are installed properly. 37 | * awscli 38 | * Docker 39 | * pigz 40 | 41 | > 🚫 **Notes:** Apple M1/M2/M3 is not working as cross-architecture build of container is not supported yet. You must build on x86_64 that matches SageMaker endpoint. 42 | 43 | 44 | ### Quick Start 45 | **Step 1** - Clone the project: 46 | ```bash 47 | git clone https://github.com/aws-samples/comfyui-on-amazon-sagemaker.git 48 | ``` 49 | 50 | **Step 2** - Customize the following files (optional): 51 | * **[deploy.sh](deploy.sh)** - deployment configurations 52 | * **[model/build.sh](model/build.sh)** - models to include (SDXL 1.0 is downloaded by default) 53 | * **[lambda/workflow/](lambda/workflow/)** - workflow api folder for ComfyUI 54 | 55 | > **FLUX.1** - If you want to try FLUX.1 models: 56 | > - uncomment [here](model/build.sh#L52) or [here](model/build.sh#L56) to download the models. 57 | > - update [here](deploy.sh#L51) to change `SAGEMAKER_INSTANCE_TYPE` to at least `ml.g5.2xlarge`. 58 | > - use corresponding prompt file `flux1-dev-fp8-ckpt.json` or `flux1-schnell-fp8-ckpt.json` in the API request. 59 | 60 | **Step 3** - Run [deploy.sh](deploy.sh). It usually takes less than one hour to complete. 61 | ```bash 62 | ./deploy.sh 63 | ``` 64 | 65 | 66 | ### HTTPS Endpoint for testing 67 | For illustration, [Lambda function URL](https://docs.aws.amazon.com/lambda/latest/dg/urls-configuration.html) is enabled so you can test the image generation by calling to this dedicated HTTPS endpoint. The endpoint URL can be found at the `ComfyUIFunctionUrl` from the stack output. 68 | ![CloudFormation Output](./assets/cloudformation_output.png) 69 | 70 | `AWS_IAM` auth is configured by default for invoking the lambda function URL, so you must sign each HTTP request using AWS Signature Version 4 (SigV4). Tools such as [awscurl](https://github.com/okigan/awscurl), [Postman](http://www.postman.com/), and [AWS SigV4 Proxy](https://github.com/awslabs/aws-sigv4-proxy) offer built-in ways to sign your requests with SigV4. You may disable the authentication by setting `LAMBDA_URL_AUTH_TYPE` to `NONE` in [deploy.sh](deploy.sh) but it is dangerious as the function URL is open to everyone. 
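

If you prefer to test from a script instead of Postman, the sketch below signs the request with SigV4 using `botocore`. Treat it as a minimal example: the function URL and region are placeholders to replace with your own stack output, and it assumes the function returns the generated image bytes in the response body.

```python
# Minimal SigV4-signed invocation of the Lambda function URL (sketch).
# FUNCTION_URL and REGION are placeholders - use the ComfyUIFunctionUrl stack output.
import json
import urllib.request

import boto3
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

FUNCTION_URL = "https://your-url-id.lambda-url.us-east-1.on.aws/"  # placeholder
REGION = "us-east-1"  # placeholder

payload = json.dumps({
    "positive_prompt": "hill happy dog",
    "negative_prompt": "hill",
    "prompt_file": "workflow_api.json",
    "seed": 11245,
})

# Build the request and sign it with credentials from the default chain
request = AWSRequest(method="POST", url=FUNCTION_URL, data=payload,
                     headers={"Content-Type": "application/json"})
SigV4Auth(boto3.Session().get_credentials(), "lambda", REGION).add_auth(request)

# Send the signed request and save the returned image (assumed to be the raw bytes)
req = urllib.request.Request(FUNCTION_URL, data=payload.encode("utf-8"),
                             headers=dict(request.headers), method="POST")
with urllib.request.urlopen(req) as resp, open("output.jpg", "wb") as f:
    f.write(resp.read())
```
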
71 | 72 | Example of AWS Signature authorization using Postman: 73 | ![Postman Auth Config](./assets/postman_auth.png) 74 | 75 | And here is an example of request body: 76 | ```json 77 | { 78 | "positive_prompt": "hill happy dog", 79 | "negative_prompt": "hill", 80 | "prompt_file": "workflow_api.json", 81 | "seed": 11245 82 | } 83 | ``` 84 | 85 | A successful invocation seeing the image generated: 86 | ![Postman](./assets/postman.jpg) 87 | 88 | ### Clean up 89 | Delete the following resources deployed by [deploy.sh](deploy.sh). 90 | 1. CloudFormation stack (default: `comfyui`) 91 | 2. S3 bucket (default:`comfyui-sagemaker--`) 92 | 3. ECR repository (default:`comfyui-sagemaker`) 93 | 94 | ## Development Guide 95 | See [DEVELOPMENT](DEVELOPMENT.md) 96 | 97 | ## Security 98 | 99 | See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information. 100 | 101 | ## License 102 | 103 | This library is licensed under the MIT-0 License. See the LICENSE file. 104 | 105 | 106 | -------------------------------------------------------------------------------- /assets/cloudformation_output.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/comfyui-on-amazon-sagemaker/8b43ae7a6a92eb712598e58333135c8209b7bffb/assets/cloudformation_output.png -------------------------------------------------------------------------------- /assets/postman.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/comfyui-on-amazon-sagemaker/8b43ae7a6a92eb712598e58333135c8209b7bffb/assets/postman.jpg -------------------------------------------------------------------------------- /assets/postman_auth.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/comfyui-on-amazon-sagemaker/8b43ae7a6a92eb712598e58333135c8209b7bffb/assets/postman_auth.png -------------------------------------------------------------------------------- /assets/solution.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/comfyui-on-amazon-sagemaker/8b43ae7a6a92eb712598e58333135c8209b7bffb/assets/solution.png -------------------------------------------------------------------------------- /cloudformation/template.yml: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: "2010-09-09" 2 | Description: >- 3 | CloudFormation template for ComfyUI on Amazon SageMaker 4 | 5 | Parameters: 6 | AppName: 7 | Type: String 8 | Description: Application name prefix for resources provisioned 9 | Default: comfyui 10 | DeploymentBucket: 11 | Type: String 12 | Description: Bucket name of deployment files 13 | Default: comfyui-on-sagemaker 14 | LambdaPackageS3Key: 15 | Type: String 16 | Description: S3 object key of Lambda function package 17 | Default: lambda.zip 18 | ModelVersion: 19 | Type: String 20 | Description: Suffix for SageMaker model and endpoint config 21 | Default: default 22 | ModelDataS3Key: 23 | Type: String 24 | Description: S3 object key of model data 25 | Default: model-data-comfyui-default.tgz 26 | ModelEcrImage: 27 | Type: String 28 | Description: Image location where the inference code image is stored in Amazon ECR 29 | Default: comfyui-inference:default 30 | SageMakerInstanceType: 31 | Type: String 32 | Description: Instance type of SageMaker endpoint 33 | AllowedValues: 
34 | - ml.g4dn.xlarge 35 | - ml.g4dn.2xlarge 36 | - ml.g5.xlarge 37 | - ml.g5.2xlarge 38 | - ml.g5.4xlarge 39 | - ml.g5.8xlarge 40 | - ml.g5.16xlarge 41 | - ml.g5.12xlarge 42 | - ml.g5.24xlarge 43 | - ml.g5.48xlarge 44 | - ml.g6.xlarge 45 | - ml.g6.2xlarge 46 | - ml.g6.4xlarge 47 | - ml.g6.8xlarge 48 | - ml.g6.16xlarge 49 | - ml.g6.12xlarge 50 | - ml.g6.24xlarge 51 | - ml.g6.48xlarge 52 | Default: ml.g5.xlarge 53 | SageMakerAutoScaling: 54 | Type: String 55 | Description: Whether to enable auto scaling for the SageMaker endpoint 56 | AllowedValues: 57 | - true 58 | - false 59 | Default: false 60 | LambdaUrlAuthType: 61 | Type: String 62 | Description: Authentication type for the Lambda URL 63 | AllowedValues: 64 | - NONE 65 | - AWS_IAM 66 | Default: AWS_IAM 67 | 68 | Conditions: 69 | LambdaUrlAuthTypeIsNone: !Equals 70 | - !Ref LambdaUrlAuthType 71 | - NONE 72 | EnableAutoScaling: !Equals 73 | - !Ref SageMakerAutoScaling 74 | - true 75 | 76 | Resources: 77 | ComfyUIModelExecutionRole: 78 | Type: "AWS::IAM::Role" 79 | Properties: 80 | RoleName: !Sub "${AppName}-sagemaker-model-execution-role" 81 | AssumeRolePolicyDocument: 82 | Version: "2012-10-17" 83 | Statement: 84 | - Effect: Allow 85 | Principal: 86 | Service: sagemaker.amazonaws.com 87 | Action: sts:AssumeRole 88 | ManagedPolicyArns: 89 | - arn:aws:iam::aws:policy/AmazonSageMakerFullAccess 90 | ComfyUIModel: 91 | Type: "AWS::SageMaker::Model" 92 | Properties: 93 | EnableNetworkIsolation: true 94 | ExecutionRoleArn: !GetAtt ComfyUIModelExecutionRole.Arn 95 | ModelName: !Sub "${AppName}-${ModelVersion}" 96 | PrimaryContainer: 97 | Image: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${ModelEcrImage}" 98 | ModelDataUrl: !Sub s3://${DeploymentBucket}/${ModelDataS3Key} 99 | ComfyUIEndpointConfig: 100 | Type: "AWS::SageMaker::EndpointConfig" 101 | Properties: 102 | EndpointConfigName: !Sub "${AppName}-${ModelVersion}" 103 | ProductionVariants: 104 | - VariantName: !Sub "${AppName}-${ModelVersion}" 105 | ModelName: !GetAtt ComfyUIModel.ModelName 106 | ContainerStartupHealthCheckTimeoutInSeconds: 600 107 | InitialInstanceCount: 1 108 | InstanceType: !Sub "${SageMakerInstanceType}" 109 | RoutingConfig: 110 | RoutingStrategy: LEAST_OUTSTANDING_REQUESTS 111 | ComfyUIEndpoint: 112 | Type: "AWS::SageMaker::Endpoint" 113 | Properties: 114 | EndpointName: !Sub "${AppName}" 115 | EndpointConfigName: !GetAtt ComfyUIEndpointConfig.EndpointConfigName 116 | ComfyUIScalingPolicy: 117 | Type: "AWS::ApplicationAutoScaling::ScalingPolicy" 118 | DependsOn: ComfyUIEndpoint 119 | Condition: EnableAutoScaling 120 | Properties: 121 | PolicyName: !Sub scale-in-scale-out-policy 122 | PolicyType: TargetTrackingScaling 123 | ScalingTargetId: !Ref ComfyUIScalableTarget 124 | TargetTrackingScalingPolicyConfiguration: 125 | TargetValue: 10000 126 | CustomizedMetricSpecification: 127 | MetricName: Duration 128 | Namespace: AWS/Lambda 129 | Dimensions: 130 | - Name: FunctionName 131 | Value: !Ref ComfyUIFunction 132 | Statistic: Average 133 | ScaleInCooldown: 1800 134 | ScaleOutCooldown: 1800 135 | DisableScaleIn: false 136 | ComfyUIScalableTarget: 137 | Type: "AWS::ApplicationAutoScaling::ScalableTarget" 138 | DependsOn: ComfyUIEndpoint 139 | Condition: EnableAutoScaling 140 | Properties: 141 | MaxCapacity: 3 142 | MinCapacity: 1 143 | ResourceId: !Sub "endpoint/${ComfyUIEndpoint.EndpointName}/variant/${AppName}-${ModelVersion}" 144 | ScalableDimension: sagemaker:variant:DesiredInstanceCount 145 | ServiceNamespace: sagemaker 146 | # Optional: Uncomment 
below to define autoscaling capacity according to schedule 147 | # ScheduledActions: 148 | # - ScheduledActionName: scheduled-action-at-1800-utc 149 | # ScalableTargetAction: 150 | # MaxCapacity: 1 151 | # MinCapacity: 1 152 | # Schedule: cron(0 18 * * ? *) 153 | # - ScheduledActionName: scheduled-action-at-0400-utc 154 | # ScalableTargetAction: 155 | # MaxCapacity: 3 156 | # MinCapacity: 1 157 | # Schedule: cron(0 4 * * ? *) 158 | ComfyUIFunctionRole: 159 | Type: "AWS::IAM::Role" 160 | DependsOn: ComfyUIEndpoint 161 | Properties: 162 | RoleName: !Sub "${AppName}-lambda-role" 163 | AssumeRolePolicyDocument: 164 | Version: "2012-10-17" 165 | Statement: 166 | - Effect: Allow 167 | Principal: 168 | Service: lambda.amazonaws.com 169 | Action: sts:AssumeRole 170 | ManagedPolicyArns: 171 | - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole 172 | Policies: 173 | - PolicyName: !Sub "${AppName}-lambda-policy" 174 | PolicyDocument: 175 | Version: "2012-10-17" 176 | Statement: 177 | - Effect: Allow 178 | Action: 179 | - sagemaker:InvokeEndpoint 180 | Resource: !Ref ComfyUIEndpoint 181 | #Resource: !Sub "arn:aws:sagemaker:${AWS::Region}:${AWS::AccountId}:endpoint/${AppName}" 182 | ComfyUIFunction: 183 | Type: "AWS::Lambda::Function" 184 | Properties: 185 | FunctionName: !Sub "${AppName}-lambda" 186 | Handler: lambda_function.lambda_handler 187 | Role: !GetAtt ComfyUIFunctionRole.Arn 188 | Runtime: python3.12 189 | MemorySize: 256 190 | Timeout : 120 191 | Architectures: 192 | - arm64 193 | Code: 194 | S3Bucket: !Sub "${DeploymentBucket}" 195 | S3Key: !Sub "${LambdaPackageS3Key}" 196 | Environment: 197 | Variables: 198 | ENDPOINT_NAME: !GetAtt ComfyUIEndpoint.EndpointName 199 | ComfyUIFunctionUrl: 200 | Type: "AWS::Lambda::Url" 201 | Properties: 202 | AuthType: !If [LambdaUrlAuthTypeIsNone, "NONE", "AWS_IAM"] 203 | InvokeMode: BUFFERED 204 | TargetFunctionArn: !GetAtt ComfyUIFunction.Arn 205 | ComfyUIFunctionUrlPermission: 206 | Type: "AWS::Lambda::Permission" 207 | Condition: LambdaUrlAuthTypeIsNone 208 | Properties: 209 | Action: lambda:InvokeFunctionUrl 210 | FunctionName: !Ref ComfyUIFunction 211 | FunctionUrlAuthType: "NONE" 212 | Principal: "*" 213 | 214 | Outputs: 215 | ComfyUIFunctionUrl: 216 | Description: Function URL of Lambda function 217 | Value: !GetAtt ComfyUIFunctionUrl.FunctionUrl 218 | ComfyUIEndpoint: 219 | Description: Arn of SageMaker inference endpoint 220 | Value: !Ref ComfyUIEndpoint 221 | -------------------------------------------------------------------------------- /custom-nodes/BedrockNode/__init__.py: -------------------------------------------------------------------------------- 1 | from .bedrock_node import Image2ImageNode 2 | 3 | 4 | NODE_CLASS_MAPPINGS = { 5 | "Image2ImageNode": Image2ImageNode 6 | } 7 | 8 | __all__ = ['NODE_CLASS_MAPPINGS'] -------------------------------------------------------------------------------- /custom-nodes/BedrockNode/bedrock_node.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | import json 3 | import base64 4 | import io 5 | from PIL import Image 6 | import numpy as np 7 | import torch 8 | 9 | bedrock = boto3.client("bedrock-runtime", region_name="us-west-2") 10 | 11 | class Image2ImageNode: 12 | @classmethod 13 | def INPUT_TYPES(s): 14 | return { 15 | "required": {"image": ("IMAGE",), 16 | "prompt": ("STRING", {"default": ""}), 17 | "strength": ("FLOAT", {"default": 0.75})} 18 | } 19 | 20 | RETURN_TYPES = ("IMAGE",) 21 | FUNCTION = "image_to_image" 22 | CATEGORY = 
"image" 23 | OUTPUT_NODE = True 24 | 25 | def image_to_image(self, image, prompt, strength): 26 | # Convert input image tensor to PIL Image 27 | input_array = image.cpu().numpy().squeeze() 28 | input_image = Image.fromarray(np.clip(255.0 * input_array, 0, 255).astype(np.uint8)) 29 | 30 | # Get original dimensions and resize maintaining aspect ratio 31 | width, height = input_image.size 32 | target_width = min(max(width, 640), 1536) 33 | scale = target_width / width 34 | target_height = int(height * scale) 35 | 36 | # Ensure minimum dimensions 37 | if target_height < 640: 38 | target_height = 640 39 | target_width = int(width * (640 / height)) 40 | 41 | input_image = input_image.resize((target_width, target_height), Image.LANCZOS) 42 | 43 | # Process with Bedrock API 44 | buffered = io.BytesIO() 45 | input_image.save(buffered, format="PNG") 46 | img_str = base64.b64encode(buffered.getvalue()).decode() 47 | 48 | response = bedrock.invoke_model( 49 | modelId='stability.sd3-large-v1:0', 50 | body=json.dumps({ 51 | 'prompt': prompt, 52 | 'image': img_str, 53 | 'strength': strength, 54 | 'mode': 'image-to-image' 55 | }) 56 | ) 57 | 58 | try: 59 | # Process response and convert to tensor 60 | output_body = json.loads(response["body"].read().decode("utf-8")) 61 | image_data = base64.b64decode(output_body["images"][0]) 62 | output_image = Image.open(io.BytesIO(image_data)) 63 | output_image.save("output_image.png") 64 | # Convert to numpy array with correct shape for save_images 65 | output_array = np.array(output_image) 66 | if len(output_array.shape) == 2: 67 | output_array = np.stack([output_array] * 3, axis=-1) 68 | 69 | # Convert to float32 and normalize to 0-1 range 70 | output_array = output_array.astype(np.float32) / 255.0 71 | 72 | # Create tensor in format expected by save_images: [B, H, W, C] 73 | output_tensor = torch.from_numpy(output_array) 74 | if len(output_tensor.shape) == 3: 75 | output_tensor = output_tensor.unsqueeze(0) # Add batch dimension 76 | 77 | return (output_tensor,) 78 | 79 | except Exception as e: 80 | print(f"Error: {e}") 81 | return (torch.zeros_like(image),) 82 | 83 | 84 | NODE_CLASS_MAPPINGS = {"Image2ImageNode": Image2ImageNode} 85 | -------------------------------------------------------------------------------- /custom-nodes/README.md: -------------------------------------------------------------------------------- 1 | # ComfyUI AWS Integration Nodes 2 | 3 | This repository contains custom nodes for ComfyUI that integrate with AWS services for AI image generation: 4 | 5 | 1. **Bedrock Node**: Image-to-image generation using Amazon Bedrock and Stable Diffusion 6 | 7 | ![ComfyUI Bedrock Node Example](imgs/comfy-custom.png) 8 | 2. **SageMaker Node**: Text-to-image generation using FP8 Flux Dev 1 diffusion pipeline 9 | 10 | ![ComfyUI SageMaker Node Example](imgs/sagemaker_node.png) 11 | 12 | Both nodes seamlessly integrate with your ComfyUI workflows, allowing you to leverage AWS's powerful AI infrastructure directly within the ComfyUI interface. 
13 | 14 | ## Features 15 | 16 | ### Bedrock Node 17 | - Image-to-image generation using Stability AI's SD3 model through Amazon Bedrock 18 | - Automatic image resizing to meet model requirements 19 | - Support for customizable prompt and strength parameters 20 | 21 | ### SageMaker Node 22 | - Text-to-image generation using FP8 Flux Dev 1 diffusion pipeline 23 | - Custom FP8-optimized model deployment 24 | - Flexible parameter customization 25 | 26 | ## Prerequisites 27 | 28 | - An AWS account with access to Amazon Bedrock and SageMaker 29 | - Proper AWS credentials configured 30 | - Python 3.x 31 | - ComfyUI installed 32 | 33 | ## Installation 34 | 35 | 1. Configure your AWS credentials through one of these methods: 36 | - AWS CLI (`aws configure`) 37 | - Environment variables 38 | - AWS credentials file 39 | 40 | 2. Add both node folders to the ComfyUI `custom_nodes` directory 41 | 42 | ## Usage 43 | 44 | ### Bedrock Node 45 | 46 | 1. Launch ComfyUI 47 | 2. Find the "Image2ImageNode" in the node browser under the "image" category 48 | 3. Connect the node to your workflow with: 49 | - Input image 50 | - Text prompt 51 | - Strength value (0-1) 52 | 53 | #### Parameters 54 | - **image**: Input image tensor 55 | - **prompt**: Text description for the desired image modification 56 | - **strength**: Float value between 0 and 1 determining how much to modify the original image (default: 0.75) 57 | 58 | ### SageMaker Node 59 | 60 | 1. Deploy the FP8 Flux Dev 1 model using the provided notebook 61 | 2. Launch ComfyUI 62 | 3. Find the SageMaker "Text@Image" node in the node browser 63 | 4. Connect the node to your workflow with: 64 | - Text prompt 65 | - Optional parameters 66 | 67 | #### Model Deployment 68 | There was no full pipeline available that leveraged the flux1dev transformer in FP8, so I created one and made it available here: [Jlonge4/flux-dev-fp8](https://huggingface.co/Jlonge4/flux-dev-fp8). This is important as using hugging face diffusers `from_single_file` download option will create symlinks, rendering your `model.tar.gz` unusable for deployment. 69 | 70 | To deploy the model: 71 | 1. Run the `deploy_flux_dev-pipe.ipynb` notebook 72 | 2. A g5.8xlarge instance is recommended for deployment 73 | 3. Once deployed, the endpoint can be used directly by the SageMaker node 74 | 75 | ## Notes 76 | 77 | ### Bedrock Node 78 | - Automatically handles image resizing to meet the model's requirements (minimum 640px, maximum 1536px) 79 | - Images are processed maintaining aspect ratio 80 | - Uses the `stability.sd3-large-v1:0` model from Amazon Bedrock 81 | 82 | ### SageMaker Node 83 | - Be mindful of the 60 second timeout for SageMaker inference endpoints 84 | - The single FP8 safetensors file I used to create my pipeline / HF repo is found here: [Comfy-Org/flux1-dev](https://huggingface.co/Comfy-Org/flux1-dev). 
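

Before wiring the SageMaker node into a workflow, it can help to confirm the endpoint responds as the node expects. Below is a minimal sketch, assuming the endpoint name `flux-image-generator-endpoint` and the `{"image": "<base64>"}` response shape used in `sagemaker_node.py`; adjust both to match your deployment.

```python
# Sanity-check the deployed Flux endpoint directly (sketch).
# Endpoint name and response shape are assumptions taken from sagemaker_node.py.
import base64
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="flux-image-generator-endpoint",
    ContentType="application/json",
    Body=json.dumps({"prompt": "a beautiful landscape", "num_inference_steps": 3}),
)
result = json.loads(response["Body"].read().decode("utf-8"))
with open("flux_test.png", "wb") as f:
    f.write(base64.b64decode(result["image"]))
```
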
-------------------------------------------------------------------------------- /custom-nodes/SagemakerNode/__init__.py: -------------------------------------------------------------------------------- 1 | from .sagemaker_node import Text2ImageNode 2 | 3 | 4 | NODE_CLASS_MAPPINGS = { 5 | "Text2ImageNode": Text2ImageNode 6 | } 7 | 8 | __all__ = ['NODE_CLASS_MAPPINGS'] -------------------------------------------------------------------------------- /custom-nodes/SagemakerNode/sagemaker_node.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | import json 3 | import base64 4 | import io 5 | from PIL import Image 6 | import numpy as np 7 | import torch 8 | 9 | runtime = boto3.client('runtime.sagemaker') 10 | ENDPOINT = "flux-image-generator-endpoint" 11 | 12 | class Text2ImageNode: 13 | @classmethod 14 | def INPUT_TYPES(s): 15 | return { 16 | "required": { 17 | "prompt": ("STRING", {"multiline": True, "default": "a beautiful landscape"}) 18 | }, 19 | "optional": { 20 | "negative_prompt": ("STRING", {"multiline": True, "default": ""}), 21 | "guidance_scale": ("FLOAT", {"default": 3.5, "min": 0.0, "max": 20.0, "step": 0.1}), 22 | "height": ("INT", {"default": 768, "min": 384, "max": 1536, "step": 64}), 23 | "width": ("INT", {"default": 1360, "min": 384, "max": 2048, "step": 64}), 24 | "num_inference_steps": ("INT", {"default": 3, "min": 1, "max": 50, "step": 1}), 25 | "seed": ("INT", {"default": -1}) 26 | } 27 | } 28 | 29 | RETURN_TYPES = ("IMAGE",) 30 | FUNCTION = "text_to_image" 31 | CATEGORY = "image" 32 | OUTPUT_NODE = True 33 | 34 | def text_to_image(self, prompt, negative_prompt, guidance_scale, height, width, num_inference_steps, seed): 35 | # Create the payload with all parameters 36 | payload = { 37 | "prompt": prompt 38 | } 39 | 40 | # Only add optional parameters that are provided 41 | if negative_prompt: 42 | payload["negative_prompt"] = negative_prompt 43 | 44 | if guidance_scale != 3.5: 45 | payload["guidance_scale"] = guidance_scale 46 | 47 | if height != 768: 48 | payload["height"] = height 49 | 50 | if width != 1360: 51 | payload["width"] = width 52 | 53 | if num_inference_steps != 3: 54 | payload["num_inference_steps"] = num_inference_steps 55 | 56 | if seed != -1: 57 | payload["seed"] = seed 58 | 59 | 60 | # Make the direct call 61 | response = runtime.invoke_endpoint( 62 | EndpointName=ENDPOINT, 63 | ContentType="application/json", 64 | Body=json.dumps(payload) 65 | ) 66 | print(response) 67 | try: 68 | response_body = response["Body"].read().decode("utf-8") 69 | print("Response Body:", response_body) 70 | 71 | # Parse JSON from the response body 72 | response_data = json.loads(response_body) 73 | 74 | # Check if 'image' is in the parsed response 75 | if "image" not in response_data: 76 | raise KeyError("'image' key not found in the response") 77 | 78 | # Extract the base64-encoded image 79 | base64_img = response_data["image"] 80 | 81 | # Decode the base64 string 82 | img_bytes = base64.b64decode(base64_img) 83 | 84 | # Create a BytesIO object from the decoded bytes 85 | img_buffer = io.BytesIO(img_bytes) 86 | 87 | # Open the image using PIL 88 | img = Image.open(img_buffer) 89 | img.save("output_image.png") 90 | # Convert to numpy array with correct shape for save_images 91 | output_array = np.array(img) 92 | if len(output_array.shape) == 2: 93 | output_array = np.stack([output_array] * 3, axis=-1) 94 | 95 | # Convert to float32 and normalize to 0-1 range 96 | output_array = output_array.astype(np.float32) / 255.0 97 | 
98 | # Create tensor in format expected by save_images: [B, H, W, C] 99 | output_tensor = torch.from_numpy(output_array) 100 | if len(output_tensor.shape) == 3: 101 | output_tensor = output_tensor.unsqueeze(0) # Add batch dimension 102 | 103 | return (output_tensor,) 104 | 105 | except Exception as e: 106 | print(f"Error: {e}") 107 | 108 | 109 | NODE_CLASS_MAPPINGS = {"Text2ImageNode": Text2ImageNode} 110 | -------------------------------------------------------------------------------- /custom-nodes/imgs/bedrock-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/comfyui-on-amazon-sagemaker/8b43ae7a6a92eb712598e58333135c8209b7bffb/custom-nodes/imgs/bedrock-1.png -------------------------------------------------------------------------------- /custom-nodes/imgs/bedrock-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/comfyui-on-amazon-sagemaker/8b43ae7a6a92eb712598e58333135c8209b7bffb/custom-nodes/imgs/bedrock-2.png -------------------------------------------------------------------------------- /custom-nodes/imgs/bedrock-3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/comfyui-on-amazon-sagemaker/8b43ae7a6a92eb712598e58333135c8209b7bffb/custom-nodes/imgs/bedrock-3.png -------------------------------------------------------------------------------- /custom-nodes/imgs/sm-flux-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/comfyui-on-amazon-sagemaker/8b43ae7a6a92eb712598e58333135c8209b7bffb/custom-nodes/imgs/sm-flux-1.png -------------------------------------------------------------------------------- /custom-nodes/imgs/sm-flux-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/comfyui-on-amazon-sagemaker/8b43ae7a6a92eb712598e58333135c8209b7bffb/custom-nodes/imgs/sm-flux-2.png -------------------------------------------------------------------------------- /custom-nodes/imgs/sm-flux-3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/comfyui-on-amazon-sagemaker/8b43ae7a6a92eb712598e58333135c8209b7bffb/custom-nodes/imgs/sm-flux-3.png -------------------------------------------------------------------------------- /custom-nodes/workflows/bedrock_workflow.json: -------------------------------------------------------------------------------- 1 | { 2 | "last_node_id": 11, 3 | "last_link_id": 12, 4 | "nodes": [ 5 | { 6 | "id": 5, 7 | "type": "EmptyLatentImage", 8 | "pos": [473, 609], 9 | "size": [315, 106], 10 | "flags": {}, 11 | "order": 0, 12 | "mode": 0, 13 | "inputs": [], 14 | "outputs": [ 15 | {"name": "LATENT", "type": "LATENT", "links": [2], "slot_index": 0} 16 | ], 17 | "properties": {"Node name for S&R": "EmptyLatentImage"}, 18 | "widgets_values": [512, 512, 1], 19 | }, 20 | { 21 | "id": 8, 22 | "type": "VAEDecode", 23 | "pos": [1209, 188], 24 | "size": [210, 46], 25 | "flags": {}, 26 | "order": 5, 27 | "mode": 0, 28 | "inputs": [ 29 | {"name": "samples", "type": "LATENT", "link": 7}, 30 | {"name": "vae", "type": "VAE", "link": 8}, 31 | ], 32 | "outputs": [ 33 | {"name": "IMAGE", "type": "IMAGE", "links": [10, 11], "slot_index": 0} 34 | ], 35 | "properties": {"Node name for S&R": "VAEDecode"}, 
36 | "widgets_values": [], 37 | }, 38 | { 39 | "id": 4, 40 | "type": "CheckpointLoaderSimple", 41 | "pos": [26, 474], 42 | "size": [315, 98], 43 | "flags": {}, 44 | "order": 1, 45 | "mode": 0, 46 | "inputs": [], 47 | "outputs": [ 48 | {"name": "MODEL", "type": "MODEL", "links": [1], "slot_index": 0}, 49 | {"name": "CLIP", "type": "CLIP", "links": [3, 5], "slot_index": 1}, 50 | {"name": "VAE", "type": "VAE", "links": [8], "slot_index": 2}, 51 | ], 52 | "properties": {"Node name for S&R": "CheckpointLoaderSimple"}, 53 | "widgets_values": ["v1-5-pruned-emaonly.safetensors"], 54 | }, 55 | { 56 | "id": 3, 57 | "type": "KSampler", 58 | "pos": [863, 186], 59 | "size": [315, 262], 60 | "flags": {}, 61 | "order": 4, 62 | "mode": 0, 63 | "inputs": [ 64 | {"name": "model", "type": "MODEL", "link": 1}, 65 | {"name": "positive", "type": "CONDITIONING", "link": 4}, 66 | {"name": "negative", "type": "CONDITIONING", "link": 6}, 67 | {"name": "latent_image", "type": "LATENT", "link": 2}, 68 | ], 69 | "outputs": [ 70 | {"name": "LATENT", "type": "LATENT", "links": [7], "slot_index": 0} 71 | ], 72 | "properties": {"Node name for S&R": "KSampler"}, 73 | "widgets_values": [ 74 | 1100481969503640, 75 | "randomize", 76 | 20, 77 | 20, 78 | "euler", 79 | "normal", 80 | 1, 81 | ], 82 | }, 83 | { 84 | "id": 9, 85 | "type": "SaveImage", 86 | "pos": [1225.1632080078125, 283.2218322753906], 87 | "size": [331.1170654296875, 270], 88 | "flags": {}, 89 | "order": 6, 90 | "mode": 0, 91 | "inputs": [{"name": "images", "type": "IMAGE", "link": 10}], 92 | "outputs": [], 93 | "properties": {}, 94 | "widgets_values": ["ComfyUI"], 95 | }, 96 | { 97 | "id": 11, 98 | "type": "SaveImage", 99 | "pos": [1612.6502685546875, 261.57568359375], 100 | "size": [405.30255126953125, 311.7368469238281], 101 | "flags": {}, 102 | "order": 8, 103 | "mode": 0, 104 | "inputs": [{"name": "images", "type": "IMAGE", "link": 12}], 105 | "outputs": [], 106 | "properties": {}, 107 | "widgets_values": ["ComfyUI"], 108 | }, 109 | { 110 | "id": 6, 111 | "type": "CLIPTextEncode", 112 | "pos": [413.788818359375, 165.4100799560547], 113 | "size": [422.84503173828125, 164.31304931640625], 114 | "flags": {}, 115 | "order": 2, 116 | "mode": 0, 117 | "inputs": [{"name": "clip", "type": "CLIP", "link": 3}], 118 | "outputs": [ 119 | { 120 | "name": "CONDITIONING", 121 | "type": "CONDITIONING", 122 | "links": [4], 123 | "slot_index": 0, 124 | } 125 | ], 126 | "properties": {"Node name for S&R": "CLIPTextEncode"}, 127 | "widgets_values": [ 128 | "Realistic photo of Goku from Dragonball Z, standing in an empty Japanese alley\n" 129 | ], 130 | }, 131 | { 132 | "id": 7, 133 | "type": "CLIPTextEncode", 134 | "pos": [413, 389], 135 | "size": [425.27801513671875, 180.6060791015625], 136 | "flags": {}, 137 | "order": 3, 138 | "mode": 0, 139 | "inputs": [{"name": "clip", "type": "CLIP", "link": 5}], 140 | "outputs": [ 141 | { 142 | "name": "CONDITIONING", 143 | "type": "CONDITIONING", 144 | "links": [6], 145 | "slot_index": 0, 146 | } 147 | ], 148 | "properties": {"Node name for S&R": "CLIPTextEncode"}, 149 | "widgets_values": ["text, watermark, anime, cartoon"], 150 | }, 151 | { 152 | "id": 10, 153 | "type": "Image2ImageNode", 154 | "pos": [857.8277587890625, 631.30224609375], 155 | "size": [402.0130310058594, 82], 156 | "flags": {}, 157 | "order": 7, 158 | "mode": 0, 159 | "inputs": [{"name": "image", "type": "IMAGE", "link": 11}], 160 | "outputs": [ 161 | {"name": "IMAGE", "type": "IMAGE", "links": [12], "slot_index": 0} 162 | ], 163 | "properties": {"Node name for S&R": 
"Image2ImageNode"}, 164 | "widgets_values": ["Anime styling of Goku in a Japanese alley", 0.8], 165 | }, 166 | ], 167 | "links": [ 168 | [1, 4, 0, 3, 0, "MODEL"], 169 | [2, 5, 0, 3, 3, "LATENT"], 170 | [3, 4, 1, 6, 0, "CLIP"], 171 | [4, 6, 0, 3, 1, "CONDITIONING"], 172 | [5, 4, 1, 7, 0, "CLIP"], 173 | [6, 7, 0, 3, 2, "CONDITIONING"], 174 | [7, 3, 0, 8, 0, "LATENT"], 175 | [8, 4, 2, 8, 1, "VAE"], 176 | [10, 8, 0, 9, 0, "IMAGE"], 177 | [11, 8, 0, 10, 0, "IMAGE"], 178 | [12, 10, 0, 11, 0, "IMAGE"], 179 | ], 180 | "groups": [], 181 | "config": {}, 182 | "extra": { 183 | "ds": { 184 | "scale": 0.6727499949325658, 185 | "offset": [-6.624555279414988, -77.41483254839348], 186 | } 187 | }, 188 | "version": 0.4, 189 | } 190 | -------------------------------------------------------------------------------- /custom-nodes/workflows/sagemaker_workflow.json: -------------------------------------------------------------------------------- 1 | {"last_node_id":10,"last_link_id":10,"nodes":[{"id":10,"type":"Text2ImageNode","pos":[1906.9521484375,18.622203826904297],"size":[400,270],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[10],"slot_index":0}],"properties":{"Node name for S&R":"Text2ImageNode"},"widgets_values":["a beautiful landscape","",3.5,768,1360,3,1815,"randomize"]},{"id":9,"type":"SaveImage","pos":[2420.903076171875,-76.47523498535156],"size":[210,58],"flags":{},"order":1,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":10}],"outputs":[],"properties":{},"widgets_values":["ComfyUI"]}],"links":[[10,10,0,9,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.5644739300537773,"offset":[-1094.3168435998598,443.0318621709079]}},"version":0.4} -------------------------------------------------------------------------------- /deploy.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e # Exit on error 4 | set -u # Exit on undefined variable 5 | # set -x # Print commands 6 | 7 | # Generate a hash for lambda folder content 8 | lambda_hash() { 9 | # --sort=name # requires GNU Tar 1.28+ 10 | tar \ 11 | --mtime="@0" \ 12 | --owner=0 --group=0 --numeric-owner \ 13 | --pax-option=exthdr.name=%d/PaxHeaders/%f,delete=atime,delete=ctime \ 14 | -cf - lambda | md5sum | head -c 6 15 | } 16 | 17 | # Set global variables for deployment, to be run after prepare_env functions 18 | configure() { 19 | # Application name prefix for resourceis provisioned (and also CloudFormation stack name) 20 | APP_NAME="comfyui" 21 | 22 | # Git reference of ComfyUI (should be a commit id instead of a branch name for production) 23 | COMFYUI_GIT_REF="v0.0.6" 24 | 25 | # S3 bucket for deployment files (model artifact and Lambda package) 26 | # Note: Adjust ComfyUIModelExecutionRole in template.yaml to grant S3 related permissions if the bucket name does not contain "SageMaker", "Sagemaker" or "sagemaker". 
27 | S3_BUCKET="comfyui-sagemaker-${AWS_ACCOUNT_ID}-${AWS_DEFAULT_REGION}" 28 | 29 | # Filename of lambda package on S3 bucket used during CloudFormation deployment 30 | LAMBDA_FILE="lambda-$(lambda_hash).zip" 31 | 32 | # Identifier of SageMaker model and endpoint config 33 | MODEL_VERSION="sample" 34 | 35 | # Filename of model artifact on S3 bucket 36 | MODEL_FILE="model-artificact-${MODEL_VERSION}.tgz" 37 | 38 | # ECR repository of SageMaker inference image 39 | IMAGE_REPO="comfyui-sagemaker" 40 | 41 | # Image tag of SageMaker inference image 42 | IMAGE_TAG="latest" 43 | 44 | # ECR registry for SageMaker inference image 45 | IMAGE_REGISTRY="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com" 46 | 47 | # Full image URI for SageMaker inference image 48 | IMAGE_URI="${IMAGE_REGISTRY}/${IMAGE_REPO}:${IMAGE_TAG}" 49 | 50 | # Instance type of SageMaker endpoint 51 | SAGEMAKER_INSTANCE_TYPE="ml.g5.xlarge" 52 | 53 | # Whether to enable auto scaling for the SageMaker endpoint 54 | SAGEMAKER_AUTO_SCALING="false" 55 | 56 | # Authentication type for the Lambda URL (NONE or AWS_IAM) 57 | LAMBDA_URL_AUTH_TYPE="AWS_IAM" 58 | } 59 | 60 | # Collect variables from AWS environment 61 | prepare_env() { 62 | ARCH=$(uname -m) 63 | if [ "${ARCH}" != "x86_64" ]; then 64 | echo "Error: You must build on x86_64 architecture that matches SageMaker endpoint running" 65 | exit 1 66 | fi 67 | SUPPORT_AMD64=$(docker buildx inspect --bootstrap | grep "^Platforms:" | grep -o -m1 "linux/amd64" | head -n1) 68 | if [ -z "$SUPPORT_AMD64" ]; then 69 | echo "Error: docker does not support platform linux/amd64" 70 | echo "You may try running: docker run --privileged --rm tonistiigi/binfmt --install all" 71 | exit 1 72 | fi 73 | # get AWS region from AWS profile if not previously defined 74 | AWS_DEFAULT_REGION="${AWS_DEFAULT_REGION:-$(aws configure get region || true)}" 75 | if [ -z "${AWS_DEFAULT_REGION}" ]; then 76 | # get AWS region from EC2 metadata 77 | TOKEN=$(curl --max-time 10 -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600" || true) 78 | if [ -z "${TOKEN}" ]; then 79 | echo "Error: AWS_DEFAULT_REGION is empty" 80 | exit 1 81 | fi 82 | AWS_DEFAULT_REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/region -H "X-aws-ec2-metadata-token: $TOKEN" || true) 83 | fi 84 | export AWS_DEFAULT_REGION 85 | echo "AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION}" 86 | 87 | AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) 88 | echo "AWS_ACCOUNT_ID: ${AWS_ACCOUNT_ID}" 89 | } 90 | 91 | # Create ECR repository if not exist 92 | prepare_ecr() { 93 | ECR_REPO_URI=$(aws ecr describe-repositories --repository-names "${IMAGE_REPO}" --query 'repositories[?repositoryName==`'${IMAGE_REPO}'`].repositoryUri' --output text 2>/dev/null || true) 94 | 95 | if [ -z "$ECR_REPO_URI" ]; then 96 | echo "Repository $IMAGE_REPO does not exist. Creating it..." 
97 | ECR_REPO_URI=$(aws ecr create-repository \ 98 | --repository-name "$IMAGE_REPO" \ 99 | --encryption-configuration encryptionType=KMS \ 100 | --image-scanning-configuration scanOnPush=true \ 101 | --query 'repository.repositoryUri' \ 102 | --output text) 103 | echo "Repository created with URI: $ECR_REPO_URI" 104 | else 105 | echo "Repository URI: $ECR_REPO_URI" 106 | fi 107 | } 108 | 109 | # Create S3 bucket if not exist 110 | prepare_s3() { 111 | if aws s3 ls "s3://$S3_BUCKET" >/dev/null 2>&1; then 112 | echo "Bucket $S3_BUCKET exists" 113 | return 114 | fi 115 | 116 | echo "Bucket $S3_BUCKET does not exist. Creating it..." 117 | # Create the bucket 118 | aws s3 mb "s3://$S3_BUCKET" --region "$AWS_DEFAULT_REGION" 119 | 120 | # Enable bucket encryption with AWS-managed KMS key 121 | aws s3api put-bucket-encryption --bucket "$S3_BUCKET" --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms", "KMSMasterKeyID": "alias/aws/s3"}}]}' 122 | 123 | echo "Bucket created: $S3_BUCKET" 124 | } 125 | 126 | # Login to ECR 127 | login_ecr() { 128 | aws ecr get-login-password --region "${AWS_DEFAULT_REGION}" | docker login --username AWS --password-stdin "${IMAGE_REGISTRY}" 129 | } 130 | 131 | # Build and push image for inference 132 | build_and_push_image() { 133 | cd image 134 | docker build \ 135 | --platform linux/amd64 \ 136 | -t ${IMAGE_URI} \ 137 | -f Dockerfile.inference \ 138 | --build-arg="COMFYUI_GIT_REF=${COMFYUI_GIT_REF}" \ 139 | . 140 | docker push ${IMAGE_URI} 141 | cd - 142 | } 143 | 144 | # Pack and upload model artifact to S3 145 | build_and_upload_model_artifact() { 146 | cd model 147 | ./build.sh "s3://$S3_BUCKET/$MODEL_FILE" 148 | cd - 149 | } 150 | 151 | # Deploy CloudFormation 152 | deploy_cloudformation() { 153 | # first pack lambda package and upload to S3 bucket 154 | cd lambda 155 | zip -r $LAMBDA_FILE * 156 | aws s3 cp "$LAMBDA_FILE" "s3://$S3_BUCKET/lambda/$LAMBDA_FILE" 157 | cd - 158 | 159 | # deploy cloudformation stack 160 | echo "Deploying CloudFormation stack..." 
161 | aws cloudformation deploy --template-file cloudformation/template.yml \ 162 | --stack-name "$APP_NAME" \ 163 | --capabilities CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND \ 164 | --parameter-overrides \ 165 | AppName="$APP_NAME" \ 166 | DeploymentBucket="$S3_BUCKET" \ 167 | LambdaPackageS3Key="lambda/$LAMBDA_FILE" \ 168 | ModelVersion="$MODEL_VERSION" \ 169 | ModelDataS3Key="$MODEL_FILE" \ 170 | ModelEcrImage="$IMAGE_REPO:$IMAGE_TAG" \ 171 | SageMakerInstanceType="$SAGEMAKER_INSTANCE_TYPE" \ 172 | SageMakerAutoScaling="$SAGEMAKER_AUTO_SCALING" \ 173 | LambdaUrlAuthType="$LAMBDA_URL_AUTH_TYPE" 174 | 175 | aws cloudformation describe-stacks \ 176 | --stack-name "$APP_NAME" \ 177 | --query 'Stacks[0].Outputs[*].{OutputKey:OutputKey,OutputValue:OutputValue}' \ 178 | --output table 179 | } 180 | 181 | prepare_env 182 | configure 183 | prepare_ecr 184 | prepare_s3 185 | login_ecr 186 | build_and_push_image 187 | build_and_upload_model_artifact 188 | deploy_cloudformation 189 | echo "Done" 190 | -------------------------------------------------------------------------------- /image/Dockerfile.gui: -------------------------------------------------------------------------------- 1 | FROM nvidia/cuda:12.1.1-cudnn8-runtime-ubuntu22.04 2 | 3 | # Install necessary dependencies along with nginx 4 | RUN apt-get update && \ 5 | apt-get upgrade -y && \ 6 | apt-get -y install --no-install-recommends \ 7 | git \ 8 | python3.10 \ 9 | python3-pip \ 10 | nginx \ 11 | curl \ 12 | && rm -rf /var/lib/apt/lists/* 13 | 14 | RUN pip3 install --no-cache-dir torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121 15 | 16 | # Set some environment variables. PYTHONUNBUFFERED keeps Python from buffering our standard 17 | # output stream, which means that logs can be delivered to the user quickly. PYTHONDONTWRITEBYTECODE 18 | # keeps Python from writing the .pyc files which are unnecessary in this case. 
19 | ENV PYTHONUNBUFFERED=TRUE 20 | ENV PYTHONDONTWRITEBYTECODE=TRUE 21 | 22 | # Git reference of ComfyUI (can be a branch name or commit id) 23 | ARG COMFYUI_GIT_REF=master 24 | 25 | WORKDIR /opt/program 26 | 27 | # Install ComfyUI 28 | RUN git clone https://github.com/comfyanonymous/ComfyUI.git && \ 29 | cd /opt/program/ComfyUI && \ 30 | git checkout $COMFYUI_GIT_REF 31 | RUN pip3 install --no-cache-dir -r /opt/program/ComfyUI/requirements.txt 32 | 33 | # Copy extra_model_paths so that ComfyUI loads the model artifacts 34 | COPY extra_model_paths.yaml /opt/program/ComfyUI/ 35 | 36 | # Run as non-root user 37 | RUN useradd --system --create-home --home-dir /home/ubuntu --shell /bin/bash -g root -u 10001 ubuntu && \ 38 | chown -R ubuntu:root /opt/program 39 | 40 | USER ubuntu 41 | 42 | # Expose port 8188 for ComfyUI GUI 43 | EXPOSE 8188 44 | CMD ["python3", "-u", "/opt/program/ComfyUI/main.py", "--listen", "0.0.0.0", "--port", "8188"] 45 | HEALTHCHECK CMD curl -fs http://localhost:8188 || exit 1 46 | -------------------------------------------------------------------------------- /image/Dockerfile.inference: -------------------------------------------------------------------------------- 1 | FROM nvidia/cuda:12.1.1-cudnn8-runtime-ubuntu22.04 2 | 3 | # Install necessary dependencies along with nginx 4 | RUN apt-get update && \ 5 | apt-get upgrade -y && \ 6 | apt-get -y install --no-install-recommends \ 7 | git \ 8 | python3.10 \ 9 | python3-pip \ 10 | nginx \ 11 | curl \ 12 | && rm -rf /var/lib/apt/lists/* 13 | 14 | RUN pip3 install --no-cache-dir torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121 15 | 16 | 17 | # Set some environment variables. PYTHONUNBUFFERED prevents Python from buffering our standard 18 | # output stream, which means that logs can be delivered to the user quickly. PYTHONDONTWRITEBYTECODE 19 | # stops Python from creating the .pyc files, which are unnecessary in this case.
20 | ENV PYTHONUNBUFFERED=TRUE 21 | ENV PYTHONDONTWRITEBYTECODE=TRUE 22 | 23 | # Git reference of ComfyUI (can be a branch name or commit id) 24 | ARG COMFYUI_GIT_REF=master 25 | 26 | WORKDIR /opt/program 27 | 28 | # Install ComfyUI 29 | RUN git clone https://github.com/comfyanonymous/ComfyUI.git && \ 30 | cd /opt/program/ComfyUI && \ 31 | git checkout $COMFYUI_GIT_REF 32 | RUN pip3 install --no-cache-dir -r /opt/program/ComfyUI/requirements.txt 33 | 34 | # Copy extra_model_paths so that ComfyUI loads the model artifacts 35 | COPY extra_model_paths.yaml /opt/program/ComfyUI/ 36 | 37 | # Copy contents of code/ dir to /opt/program 38 | COPY code/ /opt/program/ 39 | RUN pip3 install --no-cache-dir -r /opt/program/requirements.txt 40 | 41 | #checkov:skip=CKV_DOCKER_3:SageMaker expects all containers to run with root users 42 | 43 | # Expose port 8080 for SageMaker inference 44 | EXPOSE 8080 45 | ENTRYPOINT ["python3"] 46 | CMD [ "serve" ] 47 | HEALTHCHECK CMD curl -fs http://localhost:8080/ping || exit 1 48 | -------------------------------------------------------------------------------- /image/code/api_server.py: -------------------------------------------------------------------------------- 1 | import io 2 | import os 3 | import requests 4 | import flask 5 | import websocket # Note: websocket-client (https://github.com/websocket-client/websocket-client) 6 | import uuid 7 | from comfyui_prompt import prompt_for_image_data 8 | from PIL import Image 9 | 10 | app = flask.Flask(__name__) 11 | ws = None 12 | client_id = None 13 | 14 | # environment variable to set JPEG quality 15 | JPEG_QUALITY = int(os.getenv("JPEG_QUALITY", 90)) 16 | 17 | # environment variable to print HTTP headers of requests 18 | DEBUG_HEADER = os.getenv("DEBUG_HEADER", "False").lower() in ("true", "1", "t") 19 | 20 | # constants for the ComfyUI server 21 | SERVER_ADDRESS = "127.0.0.1:8188" 22 | URL_PING = f"http://{SERVER_ADDRESS}" 23 | 24 | 25 | @app.route("/ping", methods=["GET"]) 26 | def ping(): 27 | """ 28 | Check that the local ComfyUI server is responding 29 | 30 | Returns a 200 status code on success, or a 500 status code if there is an error. 31 | 32 | Returns: 33 | flask.Response: A response object containing the status code and mimetype. 34 | """ 35 | # Check if the local server is responding, set the status accordingly 36 | r = requests.head(URL_PING, timeout=5) 37 | status = 200 if r.ok else 500 38 | 39 | # Return the response with the determined status code 40 | return flask.Response(response="\n", status=status, mimetype="application/json") 41 | 42 | 43 | @app.route("/invocations", methods=["POST"]) 44 | def invocations(): 45 | """ 46 | Handle inference requests by running a ComfyUI workflow and returning the generated image. 47 | 48 | The request body is parsed as JSON regardless of content type and queued as a prompt on 49 | the local ComfyUI server; the function then waits for execution to finish and fetches the 50 | resulting image. If the Accept header allows image/jpeg, PNG output is converted to JPEG 51 | (quality controlled by the JPEG_QUALITY environment variable). If no image data can be 52 | retrieved, an empty text/plain response is returned. 53 | 54 | Returns: 55 | flask.Response: A response object containing the image data, 56 | status code, and mimetype.
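    Example request body (illustrative; any ComfyUI workflow in API format works, e.g. the
    content of lambda/workflow/workflow_api.json with its prompt placeholders replaced):

        {
            "3": {"class_type": "KSampler", "inputs": {...}},
            "4": {"class_type": "CheckpointLoaderSimple",
                  "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
            ...
        }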
57 | """ 58 | global ws, client_id 59 | if ws is None or client_id is None: 60 | client_id = str(uuid.uuid4()) 61 | ws = websocket.WebSocket() 62 | ws.connect("ws://{}/ws?clientId={}".format(SERVER_ADDRESS, client_id)) # nosemgrep: detect-insecure-websocket 63 | 64 | if DEBUG_HEADER: 65 | print(flask.request.headers) 66 | 67 | # get prompt from request body regardless of content type 68 | prompt = flask.request.get_json(silent=True, force=True) 69 | image_data = prompt_for_image_data(ws, client_id, prompt) 70 | 71 | # convert png to jpeg if it is allowed from accept header 72 | accept_jpeg = "image/jpeg" in flask.request.accept_mimetypes 73 | if accept_jpeg and image_data.get("content_type") == "image/png": 74 | png_image = Image.open(io.BytesIO(image_data.get("data"))) 75 | rgb_image = png_image.convert("RGB") 76 | jpeg_bytes = io.BytesIO() 77 | rgb_image.save(jpeg_bytes, format="jpeg", optimize=True, quality=JPEG_QUALITY) 78 | image_data["data"] = jpeg_bytes.getvalue() 79 | image_data["content_type"] = "image/jpeg" 80 | 81 | return flask.Response( 82 | response=image_data.get("data", ""), 83 | status=200, 84 | mimetype=image_data.get("content_type", "text/plain"), 85 | ) 86 | 87 | 88 | if __name__ == "__main__": 89 | app.run(host="0.0.0.0", port=8080) 90 | -------------------------------------------------------------------------------- /image/code/comfyui_prompt.py: -------------------------------------------------------------------------------- 1 | # Modify from ComfyUI example 2 | # Reference: https://github.com/comfyanonymous/ComfyUI/blob/master/script_examples/websockets_api_example.py 3 | 4 | #This is an example that uses the websockets api to know when a prompt execution is done 5 | #Once the prompt execution is done it downloads the images using the /history endpoint 6 | 7 | import websocket # Note: websocket-client (https://github.com/websocket-client/websocket-client) 8 | import uuid 9 | import json 10 | import urllib.request 11 | import urllib.parse 12 | 13 | server_address = "127.0.0.1:8188" 14 | 15 | def queue_prompt(prompt, client_id): 16 | p = {"prompt": prompt, "client_id": client_id} 17 | data = json.dumps(p).encode('utf-8') 18 | req = urllib.request.Request("http://{}/prompt".format(server_address), data=data) 19 | return json.loads(urllib.request.urlopen(req).read()) 20 | 21 | def get_image(filename, subfolder, folder_type): 22 | data = {"filename": filename, "subfolder": subfolder, "type": folder_type} 23 | url_values = urllib.parse.urlencode(data) 24 | with urllib.request.urlopen("http://{}/view?{}".format(server_address, url_values)) as response: 25 | return response.read() 26 | 27 | def get_image_data(filename, subfolder, folder_type): 28 | data = {"filename": filename, "subfolder": subfolder, "type": folder_type} 29 | url_values = urllib.parse.urlencode(data) 30 | with urllib.request.urlopen("http://{}/view?{}".format(server_address, url_values)) as response: 31 | image_data = { 32 | "content_type": response.info().get_content_type(), 33 | "data": response.read(), 34 | } 35 | return image_data 36 | 37 | def get_history(prompt_id): 38 | with urllib.request.urlopen("http://{}/history/{}".format(server_address, prompt_id)) as response: 39 | return json.loads(response.read()) 40 | 41 | def get_images(ws, client_id, prompt): 42 | prompt_id = queue_prompt(prompt, client_id)['prompt_id'] 43 | output_images = {} 44 | while True: 45 | out = ws.recv() 46 | if isinstance(out, str): 47 | message = json.loads(out) 48 | if message['type'] == 'executing': 49 | data = 
message['data'] 50 | if data['node'] is None and data['prompt_id'] == prompt_id: 51 | break #Execution is done 52 | else: 53 | continue #previews are binary data 54 | 55 | history = get_history(prompt_id)[prompt_id] 56 | for o in history['outputs']: 57 | for node_id in history['outputs']: 58 | node_output = history['outputs'][node_id] 59 | if 'images' in node_output: 60 | images_output = [] 61 | for image in node_output['images']: 62 | image_data = get_image(image['filename'], image['subfolder'], image['type']) 63 | images_output.append(image_data) 64 | output_images[node_id] = images_output 65 | 66 | return output_images 67 | 68 | def prompt_for_image_data(ws, client_id, prompt): 69 | """ 70 | Execute prompt to get image data 71 | (only one image is returned) 72 | 73 | Return dictionary: 74 | data: binary image in byte 75 | content_type: string 76 | """ 77 | prompt_id = queue_prompt(prompt, client_id)['prompt_id'] 78 | output_images = {} 79 | while True: 80 | out = ws.recv() 81 | if isinstance(out, str): 82 | message = json.loads(out) 83 | if message['type'] == 'executing': 84 | data = message['data'] 85 | if data['node'] is None and data['prompt_id'] == prompt_id: 86 | break #Execution is done 87 | else: 88 | continue #previews are binary data 89 | 90 | history = get_history(prompt_id)[prompt_id] 91 | for o in history['outputs']: 92 | for node_id in history['outputs']: 93 | node_output = history['outputs'][node_id] 94 | if 'images' in node_output: 95 | for image in node_output['images']: 96 | # only one image is returned, other images are ignored 97 | image_data = get_image_data(image['filename'], image['subfolder'], image['type']) 98 | return image_data 99 | 100 | return output_images 101 | 102 | prompt_text = """ 103 | { 104 | "3": { 105 | "class_type": "KSampler", 106 | "inputs": { 107 | "cfg": 8, 108 | "denoise": 1, 109 | "latent_image": [ 110 | "5", 111 | 0 112 | ], 113 | "model": [ 114 | "4", 115 | 0 116 | ], 117 | "negative": [ 118 | "7", 119 | 0 120 | ], 121 | "positive": [ 122 | "6", 123 | 0 124 | ], 125 | "sampler_name": "euler", 126 | "scheduler": "normal", 127 | "seed": 8566257, 128 | "steps": 20 129 | } 130 | }, 131 | "4": { 132 | "class_type": "CheckpointLoaderSimple", 133 | "inputs": { 134 | "ckpt_name": "v1-5-pruned-emaonly.ckpt" 135 | } 136 | }, 137 | "5": { 138 | "class_type": "EmptyLatentImage", 139 | "inputs": { 140 | "batch_size": 1, 141 | "height": 512, 142 | "width": 512 143 | } 144 | }, 145 | "6": { 146 | "class_type": "CLIPTextEncode", 147 | "inputs": { 148 | "clip": [ 149 | "4", 150 | 1 151 | ], 152 | "text": "masterpiece best quality girl" 153 | } 154 | }, 155 | "7": { 156 | "class_type": "CLIPTextEncode", 157 | "inputs": { 158 | "clip": [ 159 | "4", 160 | 1 161 | ], 162 | "text": "bad hands" 163 | } 164 | }, 165 | "8": { 166 | "class_type": "VAEDecode", 167 | "inputs": { 168 | "samples": [ 169 | "3", 170 | 0 171 | ], 172 | "vae": [ 173 | "4", 174 | 2 175 | ] 176 | } 177 | }, 178 | "9": { 179 | "class_type": "SaveImage", 180 | "inputs": { 181 | "filename_prefix": "ComfyUI", 182 | "images": [ 183 | "8", 184 | 0 185 | ] 186 | } 187 | } 188 | } 189 | """ 190 | 191 | if __name__ == "__main__": 192 | import random 193 | import base64 194 | 195 | client_id = str(uuid.uuid4()) 196 | 197 | prompt = json.loads(prompt_text) 198 | # set the text prompt for our positive CLIPTextEncode 199 | prompt["6"]["inputs"]["text"] = "masterpiece best quality man" 200 | 201 | # set the seed for our KSampler node 202 | prompt["3"]["inputs"]["seed"] = random.randint(0, 1e10) 203 | 204 | 
ws = websocket.WebSocket() 205 | ws.connect("ws://{}/ws?clientId={}".format(server_address, client_id)) 206 | print("Prompt:") 207 | print(json.dumps(prompt, indent = 2)) 208 | print("\n\n") 209 | 210 | images = get_images(ws, client_id, prompt) 211 | for node_id in images: 212 | for image_data in images[node_id]: 213 | print("Base64 Image:") 214 | print(base64.b64encode(image_data).decode("utf-8")) 215 | print("\n\n") 216 | 217 | # Commented out code to display the output images: 218 | 219 | # for node_id in images: 220 | # for image_data in images[node_id]: 221 | # from PIL import Image 222 | # import io 223 | # image = Image.open(io.BytesIO(image_data)) 224 | # image.show() 225 | -------------------------------------------------------------------------------- /image/code/nginx.conf: -------------------------------------------------------------------------------- 1 | worker_processes 1; 2 | daemon off; # Prevent forking 3 | 4 | 5 | pid /tmp/nginx.pid; 6 | error_log /var/log/nginx/error.log; 7 | 8 | events { 9 | # defaults 10 | } 11 | 12 | http { 13 | include /etc/nginx/mime.types; 14 | default_type application/octet-stream; 15 | access_log /var/log/nginx/access.log combined; 16 | server_tokens off; 17 | 18 | upstream gunicorn { 19 | server unix:/tmp/gunicorn.sock; 20 | } 21 | 22 | server { 23 | listen 8080 deferred; 24 | client_max_body_size 5m; 25 | 26 | keepalive_timeout 5; 27 | proxy_read_timeout 1200s; 28 | 29 | location ~ ^/(ping|invocations) { 30 | proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; 31 | proxy_set_header Host $http_host; 32 | proxy_redirect off; 33 | proxy_pass http://gunicorn; 34 | } 35 | 36 | location / { 37 | return 404 "{}"; 38 | } 39 | } 40 | } -------------------------------------------------------------------------------- /image/code/requirements.txt: -------------------------------------------------------------------------------- 1 | gunicorn 2 | Flask 3 | requests 4 | websocket-client 5 | pillow -------------------------------------------------------------------------------- /image/code/serve: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | # This file implements the scoring service shell. You don't necessarily need to modify it for various 4 | # algorithms. It starts nginx and gunicorn with the correct configurations and then simply waits until 5 | # gunicorn exits. 
6 | # 7 | # The flask server is specified to be the app object in wsgi.py 8 | # 9 | # We set the following parameters: 10 | # 11 | # Parameter Environment Variable Default Value 12 | # --------- -------------------- ------------- 13 | # number of workers INFERENCE_SERVER_WORKERS number of CPU cores 14 | # timeout INFERENCE_SERVER_TIMEOUT 70 seconds 15 | 16 | import multiprocessing 17 | import os 18 | import signal 19 | import subprocess 20 | import sys 21 | 22 | cpu_count = multiprocessing.cpu_count() 23 | 24 | inference_server_timeout = os.environ.get("INFERENCE_SERVER_TIMEOUT", 70) 25 | inference_server_workers = int(os.environ.get("INFERENCE_SERVER_WORKERS", cpu_count)) 26 | 27 | 28 | def sigterm_handler(nginx_pid, gunicorn_pid, app_pid): 29 | try: 30 | os.kill(nginx_pid, signal.SIGQUIT) 31 | except OSError: 32 | pass 33 | try: 34 | os.kill(gunicorn_pid, signal.SIGTERM) 35 | except OSError: 36 | pass 37 | try: 38 | os.kill(app_pid, signal.SIGTERM) 39 | except OSError: 40 | pass 41 | 42 | sys.exit(0) 43 | 44 | 45 | def start_server(): 46 | print("Starting the inference server with {} workers.".format(inference_server_workers)) 47 | print("Listen to port 8080") 48 | 49 | # link the log streams to stdout/err so they will be logged to the container logs 50 | subprocess.check_call(["ln", "-sf", "/dev/stdout", "/var/log/nginx/access.log"]) 51 | subprocess.check_call(["ln", "-sf", "/dev/stderr", "/var/log/nginx/error.log"]) 52 | 53 | nginx = subprocess.Popen(["nginx", "-c", "/opt/program/nginx.conf"]) 54 | gunicorn = subprocess.Popen( 55 | [ 56 | "gunicorn", 57 | "--timeout", 58 | str(inference_server_timeout), 59 | "-k", 60 | "sync", 61 | "-b", 62 | "unix:/tmp/gunicorn.sock", 63 | "-w", 64 | str(inference_server_workers), 65 | "wsgi:app", 66 | ] 67 | ) 68 | app = subprocess.Popen(["python3", "-u", "/opt/program/ComfyUI/main.py", "--listen", "127.0.0.1", "--port", "8188"]) 69 | 70 | signal.signal(signal.SIGTERM, lambda a, b, c: sigterm_handler(nginx.pid, gunicorn.pid, app.pid)) 71 | 72 | # If either subprocess exits, so do we. 73 | pids = set([nginx.pid, gunicorn.pid, app.pid]) 74 | while True: 75 | pid, _ = os.wait() 76 | if pid in pids: 77 | break 78 | 79 | sigterm_handler(nginx.pid, gunicorn.pid, app.pid) 80 | print("Inference server exiting") 81 | 82 | 83 | # The main routine just invokes the start function. 84 | 85 | if __name__ == "__main__": 86 | start_server() -------------------------------------------------------------------------------- /image/code/wsgi.py: -------------------------------------------------------------------------------- 1 | import api_server as myapp 2 | 3 | # This is just a simple wrapper for gunicorn to find your app. 
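# The serve script starts gunicorn with the app target "wsgi:app", so gunicorn imports this module and serves the Flask app defined in api_server.py.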
4 | 5 | app = myapp.app -------------------------------------------------------------------------------- /image/extra_model_paths.yaml: -------------------------------------------------------------------------------- 1 | #Rename this to extra_model_paths.yaml and ComfyUI will load it 2 | 3 | #config for sagemaker inference 4 | # SageMaker copies your model artifacts from the S3 location to the /opt/ml/model directory 5 | sagemaker_inference: 6 | base_path: /opt/ml/model 7 | checkpoints: checkpoints/ 8 | clip: clip/ 9 | clip_vision: clip_vision/ 10 | configs: configs/ 11 | controlnet: controlnet/ 12 | embeddings: embeddings/ 13 | loras: loras/ 14 | upscale_models: upscale_models/ 15 | vae: vae/ 16 | gligen: gligen/ 17 | custom_nodes: custom_nodes/ 18 | unet: unet/ 19 | 20 | #config for a1111 ui 21 | #all you have to do is change the base_path to where yours is installed 22 | # a111: 23 | # base_path: path/to/stable-diffusion-webui/ 24 | 25 | # checkpoints: models/Stable-diffusion 26 | # configs: models/Stable-diffusion 27 | # vae: models/VAE 28 | # loras: | 29 | # models/Lora 30 | # models/LyCORIS 31 | # upscale_models: | 32 | # models/ESRGAN 33 | # models/RealESRGAN 34 | # models/SwinIR 35 | # embeddings: embeddings 36 | # hypernetworks: models/hypernetworks 37 | # controlnet: models/ControlNet 38 | 39 | #config for comfyui 40 | #your base path should be either an existing comfy install or a central folder where you store all of your models, loras, etc. 41 | 42 | #comfyui: 43 | # base_path: path/to/comfyui/ 44 | # checkpoints: models/checkpoints/ 45 | # clip: models/clip/ 46 | # clip_vision: models/clip_vision/ 47 | # configs: models/configs/ 48 | # controlnet: models/controlnet/ 49 | # embeddings: models/embeddings/ 50 | # loras: models/loras/ 51 | # upscale_models: models/upscale_models/ 52 | # vae: models/vae/ 53 | 54 | #other_ui: 55 | # base_path: path/to/ui 56 | # checkpoints: models/checkpoints 57 | # gligen: models/gligen 58 | # custom_nodes: path/custom_nodes 59 | -------------------------------------------------------------------------------- /lambda/lambda_function.py: -------------------------------------------------------------------------------- 1 | import json 2 | import boto3 3 | import logging 4 | import random 5 | import base64 6 | import io 7 | import os 8 | 9 | # Define Logger 10 | logger = logging.getLogger() 11 | logging.basicConfig() 12 | logger.setLevel(logging.INFO) 13 | 14 | sagemaker_client = boto3.client("sagemaker-runtime") 15 | 16 | 17 | def update_seed(prompt_dict, seed=None): 18 | """ 19 | Update the seed value for the KSampler node in the prompt dictionary. 20 | 21 | Args: 22 | prompt_dict (dict): The prompt dictionary containing the node information. 23 | seed (int, optional): The seed value to set for the KSampler node. If not provided, a random seed will be generated. 24 | 25 | Returns: 26 | dict: The updated prompt dictionary with the seed value set for the KSampler node. 27 | """ 28 | # set seed for KSampler node 29 | for i in prompt_dict: 30 | if "inputs" in prompt_dict[i]: 31 | if ( 32 | prompt_dict[i]["class_type"] == "KSampler" 33 | and "seed" in prompt_dict[i]["inputs"] 34 | ): 35 | if seed is None: 36 | prompt_dict[i]["inputs"]["seed"] = random.randint(0, int(1e10)) 37 | else: 38 | prompt_dict[i]["inputs"]["seed"] = int(seed) 39 | return prompt_dict 40 | 41 | 42 | def update_prompt_text(prompt_dict, positive_prompt, negative_prompt): 43 | """ 44 | Update the prompt text in the given prompt dictionary. 
45 | 46 | Args: 47 | prompt_dict (dict): The dictionary containing the prompt information. 48 | positive_prompt (str): The new text to replace the positive prompt placeholder. 49 | negative_prompt (str): The new text to replace the negative prompt placeholder. 50 | 51 | Returns: 52 | dict: The updated prompt dictionary. 53 | """ 54 | # replace prompt text for CLIPTextEncode node 55 | for i in prompt_dict: 56 | if "inputs" in prompt_dict[i]: 57 | if ( 58 | prompt_dict[i]["class_type"] == "CLIPTextEncode" 59 | and "text" in prompt_dict[i]["inputs"] 60 | ): 61 | if prompt_dict[i]["inputs"]["text"] == "POSITIVE_PROMT_PLACEHOLDER": 62 | prompt_dict[i]["inputs"]["text"] = positive_prompt 63 | elif prompt_dict[i]["inputs"]["text"] == "NEGATIVE_PROMPT_PLACEHOLDER": 64 | prompt_dict[i]["inputs"]["text"] = negative_prompt 65 | return prompt_dict 66 | 67 | 68 | def invoke_from_prompt(prompt_file, positive_prompt, negative_prompt, seed=None): 69 | """ 70 | Invokes the SageMaker endpoint with the provided prompt data. 71 | 72 | Args: 73 | prompt_file (str): The path to the JSON file in ./workflow/ containing the prompt data. 74 | positive_prompt (str): The positive prompt to be used in the prompt data. 75 | negative_prompt (str): The negative prompt to be used in the prompt data. 76 | seed (int, optional): The seed value for randomization. Defaults to None. 77 | 78 | Returns: 79 | dict: The response from the SageMaker endpoint. 80 | 81 | Raises: 82 | FileNotFoundError: If the prompt file does not exist. 83 | """ 84 | logger.info("prompt: %s", prompt_file) 85 | 86 | # read the prompt data from json file 87 | with open("./workflow/" + prompt_file) as prompt_file: 88 | prompt_text = prompt_file.read() 89 | 90 | prompt_dict = json.loads(prompt_text) 91 | prompt_dict = update_seed(prompt_dict, seed) 92 | prompt_dict = update_prompt_text(prompt_dict, positive_prompt, negative_prompt) 93 | prompt_text = json.dumps(prompt_dict) 94 | 95 | endpoint_name = os.environ["ENDPOINT_NAME"] 96 | content_type = "application/json" 97 | accept = "*/*" 98 | payload = prompt_text 99 | logger.info("Final payload to invoke sagemaker:") 100 | logger.info(json.dumps(payload, indent=4)) 101 | response = sagemaker_client.invoke_endpoint( 102 | EndpointName=endpoint_name, 103 | ContentType=content_type, 104 | Accept=accept, 105 | Body=payload, 106 | ) 107 | return response 108 | 109 | 110 | def lambda_handler(event: dict, context: dict): 111 | """ 112 | Lambda function handler for processing events. 113 | 114 | Args: 115 | event (dict): The event from lambda function URL. 116 | context (dict): The runtime information of the Lambda function. 117 | 118 | Returns: 119 | dict: The response data for lambda function URL. 
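    Example request body accepted by the function URL (fields match the handler below;
    prompt_file, negative_prompt and seed are optional):

        {
            "positive_prompt": "hill happy dog",
            "negative_prompt": "hill",
            "prompt_file": "workflow_api.json",
            "seed": 123
        }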
120 | """ 121 | logger.info("Event:") 122 | logger.info(json.dumps(event, indent=2)) 123 | request = json.loads(event["body"]) 124 | 125 | try: 126 | prompt_file = request.get("prompt_file", "workflow_api.json") 127 | positive_prompt = request["positive_prompt"] 128 | negative_prompt = request.get("negative_prompt", "") 129 | seed = request.get("seed") 130 | response = invoke_from_prompt( 131 | prompt_file=prompt_file, 132 | positive_prompt=positive_prompt, 133 | negative_prompt=negative_prompt, 134 | seed=seed, 135 | ) 136 | except KeyError as e: 137 | logger.error(f"Error: {e}") 138 | return { 139 | "statusCode": 400, 140 | "body": json.dumps( 141 | { 142 | "error": "Missing required parameter", 143 | } 144 | ), 145 | } 146 | 147 | image_data = response["Body"].read() 148 | 149 | result = { 150 | "headers": {"Content-Type": response["ContentType"]}, 151 | "statusCode": response["ResponseMetadata"]["HTTPStatusCode"], 152 | "body": base64.b64encode(io.BytesIO(image_data).getvalue()).decode("utf-8"), 153 | "isBase64Encoded": True, 154 | } 155 | return result 156 | 157 | 158 | if __name__ == "__main__": 159 | import sys 160 | 161 | logging.basicConfig(level=logging.INFO, stream=sys.stdout) 162 | event = { 163 | "body": "{\"positive_prompt\": \"hill happy dog\",\"negative_prompt\": \"hill\",\"prompt_file\": \"workflow_api.json\",\"seed\": 123}" 164 | } 165 | lambda_handler(event, None) 166 | -------------------------------------------------------------------------------- /lambda/workflow/flux1-dev-fp8-ckpt.json: -------------------------------------------------------------------------------- 1 | { 2 | "6": { 3 | "inputs": { 4 | "text": "POSITIVE_PROMT_PLACEHOLDER", 5 | "clip": [ 6 | "30", 7 | 1 8 | ] 9 | }, 10 | "class_type": "CLIPTextEncode", 11 | "_meta": { 12 | "title": "CLIP Text Encode (Positive Prompt)" 13 | } 14 | }, 15 | "8": { 16 | "inputs": { 17 | "samples": [ 18 | "31", 19 | 0 20 | ], 21 | "vae": [ 22 | "30", 23 | 2 24 | ] 25 | }, 26 | "class_type": "VAEDecode", 27 | "_meta": { 28 | "title": "VAE Decode" 29 | } 30 | }, 31 | "9": { 32 | "inputs": { 33 | "filename_prefix": "ComfyUI", 34 | "images": [ 35 | "8", 36 | 0 37 | ] 38 | }, 39 | "class_type": "SaveImage", 40 | "_meta": { 41 | "title": "Save Image" 42 | } 43 | }, 44 | "27": { 45 | "inputs": { 46 | "width": 1024, 47 | "height": 1024, 48 | "batch_size": 1 49 | }, 50 | "class_type": "EmptySD3LatentImage", 51 | "_meta": { 52 | "title": "EmptySD3LatentImage" 53 | } 54 | }, 55 | "30": { 56 | "inputs": { 57 | "ckpt_name": "flux1-dev-fp8.safetensors" 58 | }, 59 | "class_type": "CheckpointLoaderSimple", 60 | "_meta": { 61 | "title": "Load Checkpoint" 62 | } 63 | }, 64 | "31": { 65 | "inputs": { 66 | "seed": 130778844713795, 67 | "steps": 20, 68 | "cfg": 1, 69 | "sampler_name": "euler", 70 | "scheduler": "simple", 71 | "denoise": 1, 72 | "model": [ 73 | "30", 74 | 0 75 | ], 76 | "positive": [ 77 | "35", 78 | 0 79 | ], 80 | "negative": [ 81 | "33", 82 | 0 83 | ], 84 | "latent_image": [ 85 | "27", 86 | 0 87 | ] 88 | }, 89 | "class_type": "KSampler", 90 | "_meta": { 91 | "title": "KSampler" 92 | } 93 | }, 94 | "33": { 95 | "inputs": { 96 | "text": "", 97 | "clip": [ 98 | "30", 99 | 1 100 | ] 101 | }, 102 | "class_type": "CLIPTextEncode", 103 | "_meta": { 104 | "title": "CLIP Text Encode (Negative Prompt)" 105 | } 106 | }, 107 | "35": { 108 | "inputs": { 109 | "guidance": 3.5, 110 | "conditioning": [ 111 | "6", 112 | 0 113 | ] 114 | }, 115 | "class_type": "FluxGuidance", 116 | "_meta": { 117 | "title": "FluxGuidance" 118 | } 119 | } 120 | } 
-------------------------------------------------------------------------------- /lambda/workflow/flux1-schnell-fp8-ckpt.json: -------------------------------------------------------------------------------- 1 | { 2 | "6": { 3 | "inputs": { 4 | "text": "POSITIVE_PROMT_PLACEHOLDER", 5 | "clip": [ 6 | "30", 7 | 1 8 | ] 9 | }, 10 | "class_type": "CLIPTextEncode", 11 | "_meta": { 12 | "title": "CLIP Text Encode (Positive Prompt)" 13 | } 14 | }, 15 | "8": { 16 | "inputs": { 17 | "samples": [ 18 | "31", 19 | 0 20 | ], 21 | "vae": [ 22 | "30", 23 | 2 24 | ] 25 | }, 26 | "class_type": "VAEDecode", 27 | "_meta": { 28 | "title": "VAE Decode" 29 | } 30 | }, 31 | "9": { 32 | "inputs": { 33 | "filename_prefix": "ComfyUI", 34 | "images": [ 35 | "8", 36 | 0 37 | ] 38 | }, 39 | "class_type": "SaveImage", 40 | "_meta": { 41 | "title": "Save Image" 42 | } 43 | }, 44 | "27": { 45 | "inputs": { 46 | "width": 1024, 47 | "height": 1024, 48 | "batch_size": 1 49 | }, 50 | "class_type": "EmptySD3LatentImage", 51 | "_meta": { 52 | "title": "EmptySD3LatentImage" 53 | } 54 | }, 55 | "30": { 56 | "inputs": { 57 | "ckpt_name": "flux1-schnell-fp8.safetensors" 58 | }, 59 | "class_type": "CheckpointLoaderSimple", 60 | "_meta": { 61 | "title": "Load Checkpoint" 62 | } 63 | }, 64 | "31": { 65 | "inputs": { 66 | "seed": 272534778885432, 67 | "steps": 4, 68 | "cfg": 1, 69 | "sampler_name": "euler", 70 | "scheduler": "simple", 71 | "denoise": 1, 72 | "model": [ 73 | "30", 74 | 0 75 | ], 76 | "positive": [ 77 | "6", 78 | 0 79 | ], 80 | "negative": [ 81 | "33", 82 | 0 83 | ], 84 | "latent_image": [ 85 | "27", 86 | 0 87 | ] 88 | }, 89 | "class_type": "KSampler", 90 | "_meta": { 91 | "title": "KSampler" 92 | } 93 | }, 94 | "33": { 95 | "inputs": { 96 | "text": "", 97 | "clip": [ 98 | "30", 99 | 1 100 | ] 101 | }, 102 | "class_type": "CLIPTextEncode", 103 | "_meta": { 104 | "title": "CLIP Text Encode (Negative Prompt)" 105 | } 106 | } 107 | } -------------------------------------------------------------------------------- /lambda/workflow/workflow_api.json: -------------------------------------------------------------------------------- 1 | { 2 | "3": { 3 | "inputs": { 4 | "seed": 187450802590534, 5 | "steps": 20, 6 | "cfg": 8, 7 | "sampler_name": "euler", 8 | "scheduler": "normal", 9 | "denoise": 1, 10 | "model": [ 11 | "4", 12 | 0 13 | ], 14 | "positive": [ 15 | "6", 16 | 0 17 | ], 18 | "negative": [ 19 | "7", 20 | 0 21 | ], 22 | "latent_image": [ 23 | "5", 24 | 0 25 | ] 26 | }, 27 | "class_type": "KSampler", 28 | "_meta": { 29 | "title": "KSampler" 30 | } 31 | }, 32 | "4": { 33 | "inputs": { 34 | "ckpt_name": "sd_xl_base_1.0.safetensors" 35 | }, 36 | "class_type": "CheckpointLoaderSimple", 37 | "_meta": { 38 | "title": "Load Checkpoint" 39 | } 40 | }, 41 | "5": { 42 | "inputs": { 43 | "width": 1024, 44 | "height": 1024, 45 | "batch_size": 1 46 | }, 47 | "class_type": "EmptyLatentImage", 48 | "_meta": { 49 | "title": "Empty Latent Image" 50 | } 51 | }, 52 | "6": { 53 | "inputs": { 54 | "text": "POSITIVE_PROMT_PLACEHOLDER", 55 | "clip": [ 56 | "4", 57 | 1 58 | ] 59 | }, 60 | "class_type": "CLIPTextEncode", 61 | "_meta": { 62 | "title": "CLIP Text Encode" 63 | } 64 | }, 65 | "7": { 66 | "inputs": { 67 | "text": "NEGATIVE_PROMPT_PLACEHOLDER", 68 | "clip": [ 69 | "4", 70 | 1 71 | ] 72 | }, 73 | "class_type": "CLIPTextEncode", 74 | "_meta": { 75 | "title": "CLIP Text Encode" 76 | } 77 | }, 78 | "8": { 79 | "inputs": { 80 | "samples": [ 81 | "3", 82 | 0 83 | ], 84 | "vae": [ 85 | "4", 86 | 2 87 | ] 88 | }, 89 | "class_type": 
"VAEDecode", 90 | "_meta": { 91 | "title": "VAE Decode" 92 | } 93 | }, 94 | "9": { 95 | "inputs": { 96 | "filename_prefix": "ComfyUI", 97 | "images": [ 98 | "8", 99 | 0 100 | ] 101 | }, 102 | "class_type": "SaveImage", 103 | "_meta": { 104 | "title": "Save Image" 105 | } 106 | } 107 | } -------------------------------------------------------------------------------- /model/build.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # script to create a model artifact for SageMaker inference 3 | 4 | set -e # Exit on error 5 | set -u # Exit on undefined variable 6 | # set -x # Print commands 7 | 8 | # handy function to download files from hugging face 9 | # usage: download_huggingface 10 | download_huggingface() { 11 | # first wget with --no-clobber, then wget with --timestamping 12 | wget -nc "$1" -P "$2" || wget -N "$1" -P "$2" 13 | # wget --header="Authorization: Bearer ${HF_TOKEN}" -nc "$1" -P "$2" || wget --header="Authorization: Bearer ${HF_TOKEN}" -N "$1" -P "$2" 14 | } 15 | 16 | # target folder for downloading model artifact 17 | TARGET_DIR="model-artifact" 18 | 19 | # target file for tar-gzip archive of model artifact 20 | TARGET_FILE="model-artifact.tgz" 21 | 22 | show_usage() { 23 | echo "Usage: $0 [s3://path/to/s3/object]" 24 | exit 1 25 | } 26 | # s3 upload path (optional) 27 | S3_PATH="" 28 | if [ "$#" -gt 1 ]; then 29 | show_usage 30 | elif [ "$#" -eq 1 ]; then 31 | if [[ "$1" == s3://* ]]; then 32 | S3_PATH="$1" 33 | else 34 | show_usage 35 | fi 36 | fi 37 | 38 | # initialize empty folder structure 39 | mkdir -p "${TARGET_DIR}" 40 | DIRS=(checkpoints clip clip_vision configs controlnet embeddings loras upscale_models vae gligen custom_nodes unet) 41 | for dir in "${DIRS[@]}" 42 | do 43 | mkdir -p "${TARGET_DIR}/${dir}" 44 | done 45 | 46 | # download models that you want to include 47 | # stable-diffusion-xl-base-1.0 48 | download_huggingface 'https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors' "${TARGET_DIR}/checkpoints" 49 | 50 | # Flux Dev (fp8 checkpoint version) 51 | # Ref: https://comfyanonymous.github.io/ComfyUI_examples/flux/#flux-dev-1 52 | # download_huggingface 'https://huggingface.co/Comfy-Org/flux1-dev/resolve/main/flux1-dev-fp8.safetensors' "${TARGET_DIR}/checkpoints" 53 | 54 | # Flux Schnell (fp8 checkpoint version) 55 | # Ref: https://comfyanonymous.github.io/ComfyUI_examples/flux/#flux-schnell-1 56 | # download_huggingface 'https://huggingface.co/Comfy-Org/flux1-schnell/resolve/main/flux1-schnell-fp8.safetensors' "${TARGET_DIR}/checkpoints" 57 | 58 | # black-forest-labs/FLUX.1-dev (requires authentication) 59 | # download_huggingface 'https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/flux1-dev.safetensors' "${TARGET_DIR}/unet" 60 | # download_huggingface 'https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/ae.safetensors' "${TARGET_DIR}/vae" 61 | # download_huggingface 'https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors' "${TARGET_DIR}/clip" 62 | # download_huggingface 'https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp8_e4m3fn.safetensors' "${TARGET_DIR}/clip" 63 | 64 | # black-forest-labs/FLUX.1-schnell (requires authentication) 65 | # download_huggingface 'https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/flux1-schnell.safetensors' "${TARGET_DIR}/unet" 66 | # download_huggingface 
'https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/ae.safetensors' "${TARGET_DIR}/vae" 67 | # download_huggingface 'https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors' "${TARGET_DIR}/clip" 68 | # download_huggingface 'https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp8_e4m3fn.safetensors' "${TARGET_DIR}/clip" 69 | 70 | # ComfyUI-Manager - extension to manage custom nodes 71 | # cd ${TARGET_DIR}/custom_nodes 72 | # [[ -e ComfyUI-Manager ]] || git clone https://github.com/ltdrdata/ComfyUI-Manager.git && (cd ComfyUI-Manager && git fetch && git checkout 2.48.6) 73 | # cd - 74 | 75 | if [ -z "${S3_PATH}" ]; then 76 | exit 0 77 | fi 78 | echo "Creating ${TARGET_FILE}..." 79 | # tar gzip the folder and upload to S3 80 | if [ -n "$(which pigz)" ]; then 81 | # use pigz to speed up compression on multiple cores 82 | tar -cv -C "${TARGET_DIR}" . | pigz -1 > "${TARGET_FILE}" 83 | else 84 | # tar is slower 85 | tar -czvf ${TARGET_FILE} -C ${TARGET_DIR} . 86 | fi 87 | echo "Uploading ${S3_PATH}..." 88 | aws s3 cp "${TARGET_FILE}" "${S3_PATH}" 89 | --------------------------------------------------------------------------------
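For reference, the following is a minimal client sketch (not part of the repository) showing one way to invoke the deployed SageMaker endpoint directly, mirroring what lambda/lambda_function.py does. It assumes AWS credentials are configured for boto3; the endpoint name is a placeholder to be taken from the CloudFormation stack outputs, and the Accept header relies on the PNG-to-JPEG conversion implemented in image/code/api_server.py.

import json
import boto3

# Assumption: the endpoint name comes from the CloudFormation stack outputs of this project.
ENDPOINT_NAME = "<your-sagemaker-endpoint-name>"


def generate_image(positive_prompt, negative_prompt="", workflow_path="lambda/workflow/workflow_api.json"):
    # Load a workflow in ComfyUI API format and fill in the prompt placeholders,
    # the same way lambda/lambda_function.py does.
    with open(workflow_path) as f:
        workflow = json.load(f)
    for node in workflow.values():
        if node.get("class_type") == "CLIPTextEncode":
            text = node.get("inputs", {}).get("text")
            if text == "POSITIVE_PROMT_PLACEHOLDER":
                node["inputs"]["text"] = positive_prompt
            elif text == "NEGATIVE_PROMPT_PLACEHOLDER":
                node["inputs"]["text"] = negative_prompt

    client = boto3.client("sagemaker-runtime")
    response = client.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Accept="image/jpeg",  # api_server.py converts PNG output to JPEG when the client accepts it
        Body=json.dumps(workflow),
    )
    return response["Body"].read(), response["ContentType"]


if __name__ == "__main__":
    image_bytes, content_type = generate_image("masterpiece best quality girl")
    extension = "jpg" if content_type == "image/jpeg" else "png"
    with open("output." + extension, "wb") as f:
        f.write(image_bytes)
    print("Saved output." + extension, "(" + content_type + ")")

A request to the Lambda function URL behaves the same way, except that the handler reads the workflow file server-side, so the caller only supplies the prompt fields shown in lambda/lambda_function.py.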