├── .github
│   ├── solutionid_validator.sh
│   └── workflows
│       └── maintainer_workflows.yml
├── .gitignore
├── CODEOWNERS
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── README.md
├── app.py
├── architecture.png
├── assets
│   ├── setup-perforce-helix-core.sh
│   ├── setup-unreal-egine-swarm-agent.ps1
│   ├── setup-unreal-engine-swarm-coordinator.ps1
│   ├── setup-virtual-workstation.ps1
│   ├── unreal-engine-swarm-cluster-component.yml
│   └── unreal-engine-swarm-create-dependencies-archive.ps1
├── cdk.json
├── cloudformation
│   ├── gpic-pipeline-foundation.yaml
│   ├── gpic-pipeline-perforce-helix-core.yaml
│   ├── gpic-pipeline-swarm-cluster.yaml
│   └── gpic-pipeline-virtual-workstation.yaml
├── gpic_pipeline
│   ├── __init__.py
│   ├── foundation.py
│   ├── perforcehelixcore.py
│   ├── unrealengineswarmcluster.py
│   └── virtualworkstation.py
├── requirements.txt
└── source.bat

/.github/solutionid_validator.sh:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 | #set -e
3 |
4 | echo "checking solution id $1"
5 | echo "grep -nr --exclude-dir='.github' "$1" ./.."
6 | result=$(grep -nr --exclude-dir='.github' "$1" ./..)
7 | if [ $? -eq 0 ]
8 | then
9 | echo "Solution ID $1 found\n"
10 | echo "$result"
11 | exit 0
12 | else
13 | echo "Solution ID $1 not found"
14 | exit 1
15 | fi
16 |
17 | export result
18 |
--------------------------------------------------------------------------------
/.github/workflows/maintainer_workflows.yml:
--------------------------------------------------------------------------------
1 | # Workflows managed by aws-solutions-library-samples maintainers
2 | name: Maintainer Workflows
3 | on:
4 |   # Triggers the workflow on push or pull request events but only for the "main" branch
5 |   push:
6 |     branches: [ "main" ]
7 |   pull_request:
8 |     branches: [ "main" ]
9 |     types: [opened, reopened, edited]
10 |
11 | jobs:
12 |   CheckSolutionId:
13 |     runs-on: ubuntu-latest
14 |     steps:
15 |       - uses: actions/checkout@v4
16 |       - name: Run solutionid validator
17 |         run: |
18 |           chmod u+x ./.github/solutionid_validator.sh
19 |           ./.github/solutionid_validator.sh ${{ vars.SOLUTIONID }}
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | *.swp
2 | package-lock.json
3 | __pycache__
4 | .pytest_cache
5 | .env
6 | .venv
7 | *.egg-info
8 |
9 | # CDK asset staging directory
10 | .cdk.staging
11 | cdk.out
12 | cdk.context.json
13 |
--------------------------------------------------------------------------------
/CODEOWNERS:
--------------------------------------------------------------------------------
1 | CODEOWNERS @aws-solutions-library-samples/maintainers
2 | /.github/workflows/maintainer_workflows.yml @aws-solutions-library-samples/maintainers
3 | /.github/solutionid_validator.sh @aws-solutions-library-samples/maintainers
4 |
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | ## Code of Conduct
2 |
3 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
4 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
5 | opensource-codeofconduct@amazon.com with any additional questions or comments.
6 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing Guidelines
2 |
3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional
4 | documentation, we greatly value feedback and contributions from our community.
5 |
6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
7 | information to effectively respond to your bug report or contribution.
8 |
9 | ## Reporting Bugs/Feature Requests
10 |
11 | We welcome you to use the GitHub issue tracker to report bugs or suggest features.
12 |
13 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already
14 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:
15 |
16 | - A reproducible test case or series of steps
17 | - The version of our code being used
18 | - Any modifications you've made relevant to the bug
19 | - Anything unusual about your environment or deployment
20 |
21 | ## Contributing via Pull Requests
22 |
23 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:
24 |
25 | 1. You are working against the latest source on the _main_ branch.
26 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
27 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted.
28 |
29 | To send us a pull request, please:
30 |
31 | 1. Fork the repository.
32 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
33 | 3. Ensure local tests pass.
34 | 4. Commit to your fork using clear commit messages.
35 | 5. Send us a pull request, answering any default questions in the pull request interface.
36 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.
37 |
38 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
39 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/).
40 |
41 | ## Finding contributions to work on
42 |
43 | Looking at the existing issues is a great way to find something to contribute to. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start.
44 |
45 | ## Code of Conduct
46 |
47 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
48 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
49 | opensource-codeofconduct@amazon.com with any additional questions or comments.
50 |
51 | ## Security issue notifications
52 |
53 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue.
54 |
55 | ## Licensing
56 |
57 | See the [LICENSE](LICENSE) file for our project's licensing.
We will ask you to confirm the licensing of your contribution.
58 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 |
3 | Permission is hereby granted, free of charge, to any person obtaining a copy of
4 | this software and associated documentation files (the "Software"), to deal in
5 | the Software without restriction, including without limitation the rights to
6 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
7 | the Software, and to permit persons to whom the Software is furnished to do so.
8 |
9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
10 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
11 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
12 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
13 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
14 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
15 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Guidance for a Game Production Environment on AWS
2 |
3 | - [Overview](#overview)
4 |   - [Contents](#contents)
5 |   - [Architecture Diagram](#architecture-diagram)
6 |   - [Cost](#cost)
7 | - [Prerequisites](#prerequisites)
8 |   - [Operating System](#operating-system)
9 | - [Deployment steps](#deployment-steps)
10 |   - [Initiate CDK (Optional)](#initiate-cdk-optional)
11 |   - [Deployment of the Foundations Stack](#deployment-of-the-foundations-stack)
12 |   - [Deployment of the Perforce Helix Core Stack](#deployment-of-the-perforce-helix-core-stack)
13 |   - [Deployment and Setup of the Virtual Workstation Stack](#deployment-and-setup-of-the-virtual-workstation-stack)
14 |     - [Deployment of the Virtual Workstation](#deployment-of-the-virtual-workstation)
15 |     - [Setup of the Virtual Workstation](#setup-of-the-virtual-workstation)
16 |     - [Deployment of the Unreal Engine 5 Swarm Cluster](#deployment-of-the-unreal-engine-5-swarm-cluster)
17 |       - [Collecting dependencies](#collecting-dependencies)
18 |       - [Deployment](#deployment)
19 |       - [A look behind the curtain](#a-look-behind-the-curtain)
20 |         - [Baking custom Windows AMI for UE5 Swarm](#baking-custom-windows-ami-for-ue5-swarm)
21 |         - [Deploying UE5 Swarm Coordinator](#deploying-ue5-swarm-coordinator)
22 |         - [Deploying UE5 Swarm Agent Auto Scaling Group](#deploying-ue5-swarm-agent-auto-scaling-group)
23 | - [Deployment Validation and Running the Guidance](#deployment-validation)
24 | - [Cleanup](#cleanup)
25 | - [Extra Tips](#extra-tips)
26 |   - [How to access the EC2 instances in the private subnets](#how-to-access-the-ec2-instances-in-the-private-subnets)
27 |   - [How to access the Unreal Engine 5 Swarm Agent logs?](#how-to-access-the-unreal-engine-5-swarm-agent-logs)
28 |   - [Updating CloudFormation templates after code changes](#updating-cloudformation-templates-after-code-changes)
29 | - [Security](#security)
30 | - [License](#license)
31 |
32 | # Overview
33 |
34 | Quickly deploy a Cloud Game Development environment for you and your team with this AWS Sample.
Once deployed, you will have a high-performance virtual workstation, a central version control system, and acceleration of compute-heavy tasks by distributing the work to other machines on demand.
35 |
36 | The AWS Cloud Development Kit (CDK), or optionally the provided AWS CloudFormation templates, creates a new AWS Virtual Private Cloud (VPC), a customizable virtual workstation, a [Perforce Helix Core](https://www.perforce.com/solutions/game-development) server, and an [Unreal Swarm](https://docs.unrealengine.com/5.3/en-US/unreal-swarm-in-unreal-engine/) cluster. The latter can be used to accelerate Unreal Engine 5 lighting builds using the vast compute resources available in AWS.
37 |
38 | Please note that this guidance was written using CDK v1, and the customer is responsible for updating to CDK v2 if they wish.
39 |
40 | Optionally, you can add other solutions from partners such as [Epic Games](https://partners.amazonaws.com/partners/0010h00001ffwqNAAQ/), [HP Anyware](https://www.teradici.com/partners/cloud-partners/aws), [Parsec](https://aws.amazon.com/marketplace/seller-profile?id=2da5e545-e16e-4240-895a-4bbc4a056fbf), and [Incredibuild](https://www.incredibuild.com/partners/aws). These partner solutions further accelerate your development with:
41 |
42 | - [HP Anyware and Unreal Engine integrated virtual workstation on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-fryvjy6m3qn2q)
43 | - [Parsec for Teams virtual workstation on AWS Marketplace](https://aws.amazon.com/marketplace/seller-profile?id=2da5e545-e16e-4240-895a-4bbc4a056fbf)
44 | - [Incredibuild Cloud managed build acceleration solution](https://aws.amazon.com/marketplace/pp/prodview-gaxjwt6msh55q)
45 |
46 | ## Contents
47 |
48 | This project repository contains:
49 |
50 | - **Infrastructure deployment automation** that deploys the required infrastructure resources using AWS CDK or AWS CloudFormation.
51 | - **PowerShell Scripts** that are used to:
52 |   - Collect dependencies from your Unreal Engine 5 installation
53 |   - Set up the Unreal Engine 5 Swarm coordinator and agents
54 |   - Set up the Virtual Workstation
55 | - **Bash Scripts** to set up a secure Perforce Helix Core Server
56 |
57 | You can deploy this multiple ways, the easiest being the [AWS CloudFormation](https://aws.amazon.com/cloudformation/) web interface, which you can use to deploy the following CloudFormation stacks from the `cloudformation` folder in this repository:
58 |
59 | - `cloudformation/gpic-pipeline-foundation.yaml`
60 | - `cloudformation/gpic-pipeline-perforce-helix-core.yaml`
61 | - `cloudformation/gpic-pipeline-virtual-workstation.yaml`
62 | - `cloudformation/gpic-pipeline-swarm-cluster.yaml`
63 |
64 | Download these four files locally and follow the instructions below; only use the AWS CDK or AWS CLI as described below if you are familiar with that process.
65 |
66 | ## Architecture Diagram
67 |
68 | ![Architecture](architecture.png "Architecture")
69 |
70 | ## Cost
71 |
72 | You are responsible for the cost of the AWS services used while running this Guidance. As of 7-November-2023, the cost for running this Guidance with the default settings in the US East (N. Virginia) Region is approximately $544 per month for 40 hours per week of use (instances stopped when not in use).
73 |
74 | Note that this estimate does not include the costs of third-party software installed with this guidance, including Unreal Engine and associated tools, Perforce Helix Core, Incredibuild, Parsec, HP Anyware, and any others.
75 |
76 | # Prerequisites
77 |
78 | ## Operating System
79 |
80 | This guidance can be deployed from any Windows, macOS, or Linux environment with an internet connection and the ability to run Python 3 and the AWS Cloud Development Kit (AWS CDK).
81 |
82 | # Deployment steps
83 |
84 | At a high level, the deployment is split into four (4) parts.
85 |
86 | The first step is to deploy the _gpic-pipeline-foundation_ stack. This will set up a VPC, private and public subnets, the network routing, and an S3 bucket which is needed in subsequent steps.
87 |
88 | The second step is to deploy the _gpic-pipeline-perforce-helix-core_ stack. This will deploy a secure Perforce Helix Core server into one of the private subnets and configure it according to best practices. Once this machine is up and running you can connect to it as user `perforce` using P4V/P4/P4Admin. The password for the user `perforce` can be retrieved via [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/).
89 |
90 | The third step is to deploy the _gpic-pipeline-virtual-workstation_ stack. Once this machine is up and running you can connect to it as user `Administrator` via RDP using the public IP address (or the public DNS name). The password for the user `Administrator` can be retrieved via [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/).
91 |
92 | The last step is the deployment of the _gpic-pipeline-swarm-cluster_ stack, which will deploy an Unreal Engine 5 Swarm Coordinator into one of the private subnets as well as an Amazon EC2 Auto Scaling group consisting of Unreal Engine 5 Swarm agents.
93 |
94 | For the best possible experience, we recommend deploying this example in the AWS Region closest to you.
95 |
96 | ## Initiate CDK (Optional)
97 |
98 | > This step is optional if you decide to use the CloudFormation templates in the `cloudformation` directory, which is recommended for most users.
99 |
100 | This project uses a standard Python CDK project to deploy this example. If you don't have CDK installed, please follow the instructions in the [AWS Cloud Development Kit](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html#getting_started_install) documentation.
101 |
102 | When you have CDK and Python installed you can use the following steps to initiate this project:
103 |
104 | Create a Python virtualenv
105 |
106 | MacOS and Linux:
107 |
108 | ```
109 | python3 -m venv .venv
110 | ```
111 |
112 | Windows:
113 |
114 | ```
115 | python -m venv .venv
116 | ```
117 |
118 | After the virtualenv is created, you can use the following step to activate your virtualenv.
119 |
120 | MacOS and Linux:
121 |
122 | ```
123 | source .venv/bin/activate
124 | ```
125 |
126 | Windows:
127 |
128 | ```
129 | source.bat
130 | ```
131 |
132 | Once the virtualenv is activated, you can install the required dependencies.
133 |
134 | ```
135 | pip install -r requirements.txt
136 | ```
137 |
138 | > If you run on macOS or Linux, you potentially need to modify the file `cdk.json` to use the correct Python version. This CDK application expects that `python` is a symlink to `python3` or the binary for Python version 3.
139 |
140 | ## Deployment of the Foundations Stack
141 |
142 | The Foundation stack deploys an AWS Virtual Private Cloud (VPC) with two public subnets, two private subnets and a NAT Gateway. It also deploys an S3 bucket, which is utilized by the other stacks of this example.
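
If you deploy with the CDK, note that the CDK CLI can override any context value from `cdk.json` at deploy or synth time via its `-c`/`--context` flag, so you don't have to edit the file. A minimal sketch, assuming the stack and context names of this project (the CIDR value shown is just an arbitrary example):

```
# Override the default VPC CIDR for a single deployment
cdk deploy gpic-pipeline-foundation -c foundation_vpc_cidr=10.1.0.0/16

# Or synthesize a customized CloudFormation template with the same override
cdk synth gpic-pipeline-foundation -e -c foundation_vpc_cidr=10.1.0.0/16 > my-foundation.yaml
```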
143 |
144 | You can use the `cloudformation/gpic-pipeline-foundation.yaml` template to create a CloudFormation stack called `gpic-pipeline-foundation` using the AWS Console or the AWS CLI. If you want to customize the VPC CIDR you will need to edit the CloudFormation template or use the CDK to synthesize a new one with the CDK context variable (see the example above and the [Tips](#extra-tips) section).
145 |
146 | Alternatively, you can deploy this stack with the CDK or the AWS CLI. By default we will create a VPC with a CIDR prefix of `10.0.0.0/16` and allow traffic to the Virtual Workstation, the Perforce Helix Core server and the Unreal Engine 5 Swarm coordinator and agents from the CIDR prefix `10.0.0.0/8`. In addition we will allow traffic from the CIDR prefix `0.0.0.0/0` to the Virtual Workstation for ease of use. These values can be changed by editing the `cdk.json` file and giving new values for `foundation_vpc_cidr`, `virtual_workstation_trusted_internal_cidr`, `virtual_workstation_trusted_remote_cidr`, `unreal_engine_swarm_cluster_trusted_internal_cidr` and `perforce_trusted_internal_cidr`.
147 |
148 | To deploy the foundation stack please run the following command:
149 |
150 | ```
151 | cdk deploy gpic-pipeline-foundation
152 | ```
153 |
154 | CLI deployment command:
155 |
156 | ```
157 | aws cloudformation create-stack --stack-name gpic-pipeline-foundation --template-body file://cloudformation/gpic-pipeline-foundation.yaml
158 | ```
159 |
160 | Once the resources have been created you can move to the [next step](#deployment-of-the-perforce-helix-core-stack).
161 |
162 | ## Deployment of the Perforce Helix Core Stack
163 |
164 | The Perforce Helix Core stack deploys an EC2 instance based on the latest Amazon Linux 2 AMI into one of the private subnets and sets up the Perforce Helix Core daemon using the [Perforce Helix Server Deployment Package (SDP)](https://swarm.workshop.perforce.com/projects/perforce-software-sdp/view/main/doc/SDP_Guide.Unix.pdf). In the default configuration, access to this machine is only allowed from the CIDR prefix `10.0.0.0/8` on port 1666. This CIDR prefix can be changed by editing `cdk.json` and modifying the value of `perforce_trusted_internal_cidr`.
165 |
166 | To deploy the Perforce Helix Core stack you can use the `cloudformation/gpic-pipeline-perforce-helix-core.yaml` template to create a CloudFormation stack called `gpic-pipeline-perforce-helix-core` using the AWS Console.
167 |
168 | Alternatively you can deploy the Perforce Helix Core stack using AWS CDK with the following command:
169 |
170 | ```
171 | cdk deploy gpic-pipeline-perforce-helix-core
172 | ```
173 |
174 | You can also use the AWS CLI. If you want to customize the trusted CIDR you will need to edit the CloudFormation template or use the CDK to synthesize a new one with the CDK context variable (see the [Tips](#extra-tips) section).
175 |
176 | AWS CLI command:
177 |
178 | ```
179 | aws cloudformation create-stack --stack-name gpic-pipeline-perforce-helix-core --template-body file://cloudformation/gpic-pipeline-perforce-helix-core.yaml --capabilities CAPABILITY_NAMED_IAM
180 | ```
181 |
182 | To access the Perforce Helix Core server with P4/P4V/P4Admin please use the user `perforce`. The password for this user can be obtained via AWS Secrets Manager. To access the server via Secure Shell (SSH), please use AWS Systems Manager Session Manager.
183 |
184 | Once the resources have been created you can move to the [next step](#deployment-and-setup-of-the-virtual-workstation-stack).
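
The password lookup can also be done from the command line instead of the console; the AWS CLI call below is the same one the server setup script uses internally. This is a sketch only: `<perforce-password-secret-id>` is a placeholder for the name or ARN of the secret created by the stack, which you can find with the first command or in the Secrets Manager console.

```
# List secret names to locate the one created by the Perforce Helix Core stack
aws secretsmanager list-secrets --query "SecretList[].Name"

# Print the plaintext password for the 'perforce' user
aws secretsmanager get-secret-value --secret-id <perforce-password-secret-id> --query SecretString --output text
```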
185 |
186 | ## Deployment and Setup of the Virtual Workstation Stack
187 |
188 | This deployment will not only launch a virtual workstation, it will also deploy the necessary security groups and additional AWS infrastructure to use when deploying other workstations from the AWS Marketplace as mentioned in the [Overview](#overview).
189 |
190 | ### Deployment of the Virtual Workstation
191 |
192 | The Virtual Workstation stack deploys an EC2 instance based on the latest Microsoft Windows Server 2019 Base AMI in one of the public subnets. In this process we also:
193 |
194 | - Set the password for the user "Administrator"
195 | - Install the latest [NVIDIA GRID driver](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/install-nvidia-driver.html#nvidia-GRID-driver)
196 | - Set up the Windows Server 2019 firewall to allow ingress traffic for the following applications/protocols:
197 |   - ICMP
198 |   - PCoIP
199 |   - Parsec
200 |   - NICE DCV
201 |   - HP Anyware
202 |   - Unreal Engine 5 Swarm
203 |
204 | You can use the `cloudformation/gpic-pipeline-virtual-workstation.yaml` template to create a CloudFormation stack called `gpic-pipeline-virtual-workstation` using the AWS Console. To customize this workstation further, you will need to use the AWS CDK or AWS CLI. It is recommended to launch this stack with defaults and then deploy Marketplace AMIs into the same VPC created in the first stack.
205 |
206 | Access to this virtual workstation is controlled by the variables `virtual_workstation_trusted_internal_cidr` and
207 | `virtual_workstation_trusted_remote_cidr` in `cdk.json`.
208 |
209 | To deploy the Virtual Workstation stack via AWS CDK, run the following command:
210 |
211 | ```
212 | cdk deploy gpic-pipeline-virtual-workstation
213 | ```
214 |
215 | If you want to customize the trusted CIDRs you will need to edit the CloudFormation template or use the CDK to synthesize a new one with the CDK context variable (see the [Tips](#extra-tips) section).
216 |
217 | AWS CLI command:
218 |
219 | ```
220 | aws cloudformation create-stack --stack-name gpic-pipeline-virtual-workstation --template-body file://cloudformation/gpic-pipeline-virtual-workstation.yaml --capabilities CAPABILITY_IAM
221 | ```
222 |
223 | Once the creation of resources is finished you can move to the [next step](#connecting-to-initial-virtual-workstation).
224 |
225 | ### Connecting to initial Virtual Workstation
226 |
227 | If you are using this initial virtual workstation, you can configure the instance by first connecting via Remote Desktop Protocol (RDP) as follows:
228 |
229 | Open a client capable of connecting to a Windows computer using RDP.
230 |
231 | - Windows has the Remote Desktop Connection software built in. Search for Remote Desktop Connection in the Windows taskbar
232 | - For macOS, you'll need to download the [Remote Desktop client](https://learn.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/remote-desktop-mac)
233 | - For Linux, you'll need to obtain an RDP-capable client like Remmina or TeamViewer
234 |
235 | To access the Virtual Workstation use the public IP/DNS, which can be obtained via the Output values of the CDK/CloudFormation stack or alternatively via the AWS Management Console.
236 |
237 | Once you're connected, you'll need to log in with credentials that you retrieve from
238 | AWS Secrets Manager.
To find the credentials:
239 |
240 | - Open the AWS console and navigate to the AWS CloudFormation console
241 | - In the CloudFormation console, select the **gpic-pipeline-virtual-workstation** stack and expand the **Virtual Workstation Password** resource
242 | - Select the **Physical ID** and you will be taken to the AWS Secrets Manager console with the correct secret open
243 | - Select **Retrieve secret value** to see the password
244 | - Use **administrator** for the user name and paste in the password you copied
245 |
246 | Once you are logged in, you have several options for installing high performance connection protocols:
247 |
248 | - [NICE DCV](https://aws.amazon.com/hpc/dcv/)
249 |   Follow the [installation steps in the NICE DCV Administrator Guide](https://docs.aws.amazon.com/dcv/latest/adminguide/setting-up-installing-wininstall.html).
250 |
251 | - [Parsec](https://parsec.app/)
252 |   - Open a PowerShell session and run the [script](https://github.com/parsec-cloud/Parsec-Cloud-Preparation-Tool#copy-this-code-into-powershell-you-may-need-to-press-enter-at-the-end) provided by Parsec in their GitHub repository
253 |   - Question: Do you want this computer to log on to Windows automatically? Yes
254 |   - Once the installation is finished, the script will open an additional PowerShell session to update the GPU driver. Please close this PowerShell session and don't provide an ACCESS_KEY & SECRET_ACCESS_KEY. We have already installed the latest NVIDIA GRID driver on the virtual workstation.
255 |   - Close all remaining PowerShell sessions
256 |   - Sign up for a Parsec account or use an existing Parsec account to log in
257 | - HP Anyware
258 |   Visit the [HP Anyware free trial page](https://connect.teradici.com/hp-anyware-30-day-trial) for more information on installation.
259 |
260 | ### Setup of the Virtual Workstation
261 |
262 | There are some Windows Firewall rules that need to be added when using some software on an AWS virtual workstation.
You can run the PowerShell commands below to add these rules:
263 |
264 | Parsec
265 |
266 | `New-NetFirewallRule -DisplayName 'Allow Parsec' -Direction Inbound -Action Allow -Protocol TCP -LocalPort 1666`
267 |
268 | HP Anyware
269 |
270 | `New-NetFirewallRule -DisplayName 'Allow PCoIP' -Direction Inbound -Action Allow -Protocol TCP -LocalPort 4172`
271 |
272 | `New-NetFirewallRule -DisplayName 'Allow PCoIP' -Direction Inbound -Action Allow -Protocol UDP -LocalPort 4172`
273 |
274 | `New-NetFirewallRule -DisplayName 'Allow PCoIP' -Direction Inbound -Action Allow -Protocol TCP -LocalPort 443`
275 |
276 | NICE DCV
277 |
278 | `New-NetFirewallRule -DisplayName 'Allow NICE DCV' -Direction Inbound -Action Allow -Protocol TCP -LocalPort 8443`
279 |
280 | Unreal Engine Swarm Communication
281 |
282 | `New-NetFirewallRule -DisplayName 'Allow UE5 Swarm TCP' -Direction Inbound -Action Allow -Protocol TCP -LocalPort 8008-8009`
283 |
284 | `New-NetFirewallRule -DisplayName 'Allow UE5 Swarm UDP' -Direction Inbound -Action Allow -Protocol UDP -LocalPort 8008-8009`
285 |
286 | `New-NetFirewallRule -DisplayName 'Allow ICMP' -Direction Inbound -Action Allow -Protocol ICMPv4`
287 |
288 | Below you will find instructions on adding additional software to your virtual workstation:
289 |
290 | - [Perforce Helix](https://www.perforce.com/)
291 |   - Steps:
292 |     - Download and install the Helix Visual Client (P4V) from [here](https://www.perforce.com/downloads/helix-visual-client-p4v)
293 |     - Steps in the Installation Wizard:
294 |       - Keep all applications selected and hit _Next_
295 |       - For the Server, please enter the private IP/DNS name of the Perforce Helix Core server we deployed earlier. You can find the private IP/DNS name in the CDK/CloudFormation Outputs or via the AWS Management Console. Example: `ip-10-0-XXX-XXX.eu-central-1.compute.internal:1666`
296 |       - The User Name is `perforce`
297 |       - Hit _Next_
298 |       - Hit _Next_
299 |       - Hit _Install_
300 |       - Uncheck "Start P4V" and hit _Exit_
301 | - [Unreal Engine 5](https://www.unrealengine.com/en-US/download)
302 |   You need to sign up for an Epic Games account or use your existing one. Alternatively you can clone the Unreal Engine 5 source from GitHub if you have connected your Epic Games account with your GitHub account.
303 | - [Incredibuild Cloud (optional)](https://www.incredibuild.com/partners/aws)
304 |   To install Incredibuild Cloud, you can either install from the [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-gaxjwt6msh55q) or follow the [Incredibuild installation instructions](https://docs.incredibuild.com/cloud/cloud_initiating.html).
305 |
306 | Now, start the P4V application and connect to the Perforce Helix Core server. The password of the user `perforce` can be found in AWS Secrets Manager.
307 |
308 | Once you are done please continue to the [next step](#deployment-of-the-unreal-engine-5-swarm-cluster)
309 |
310 | ### Deployment of the Unreal Engine 5 Swarm Cluster
311 |
312 | #### Collecting dependencies
313 |
314 | Each Windows instance that will act as a Swarm Coordinator or as a Swarm Agent needs a set of prerequisites installed.
315 | We can collect these prerequisites from the Unreal Engine 5 version you installed on the Virtual Workstation with the provided `assets/unreal-engine-swarm-create-dependencies-archive.ps1` script in this repository.
316 |
317 | - If you are no longer logged into the Virtual Workstation, please log in again.
- Please download the script `unreal-engine-swarm-create-dependencies-archive.ps1` from the `assets` folder to the virtual workstation
319 | - This PowerShell script will copy all the components that are needed to customize a fresh Windows installation
320 | - The script assumes that your Unreal Engine is installed to the `C:\Program Files\Epic Games\UE_5.2` directory, but you can customize the script to match your location
321 | - The script will create a compressed archive called `ue5-swarm-archive.zip` under your `My Documents` directory
322 | - You can find more details about these prerequisites on the following pages:
323 |   - [Unreal Engine 5's Hardware and Software requirements](https://docs.unrealengine.com/en-US/GettingStarted/RecommendedSpecifications/index.html)
324 |   - [Setting up Swarm Coordinator and Swarm Agents instructions](https://docs.unrealengine.com/en-US/Engine/Rendering/LightingAndShadows/Lightmass/UnrealSwarmOverview/index.html)
325 |
326 | After you have created the `ue5-swarm-archive.zip` archive you need to upload it into the root directory of the newly created S3 bucket. It will be downloaded from that location and used during the EC2 Image Builder process. The name of the bucket is available as an output called `BucketName` from the `gpic-pipeline-foundation` stack.
327 |
328 | With [AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/reference/index.html) you can use the following command to first list buckets and fetch the name of the bucket created by the foundation stack:
329 |
330 | ```
331 | Get-S3Bucket
332 | ```
333 |
334 | Then write the local file to the specified bucket and key:
335 |
336 | ```
337 | Write-S3Object -BucketName <your-bucket-name> -Key ue5-swarm-archive.zip -File ue5-swarm-archive.zip
338 | ```
339 |
340 | #### Deployment
341 |
342 | To deploy the Swarm Cluster you can use the `cloudformation/gpic-pipeline-swarm-cluster.yaml` template to create a CloudFormation stack called `gpic-pipeline-swarm-cluster` using the AWS Web Console.
343 |
344 | You can deploy with AWS CDK using the following command:
345 |
346 | ```
347 | cdk deploy gpic-pipeline-swarm-cluster
348 | ```
349 |
350 | Alternatively you can use the AWS CLI with this command:
351 |
352 | CLI example:
353 |
354 | ```
355 | aws cloudformation create-stack --stack-name gpic-pipeline-swarm-cluster --template-body file://cloudformation/gpic-pipeline-swarm-cluster.yaml --capabilities CAPABILITY_NAMED_IAM
356 | ```
357 |
358 | This step will take **30 minutes** on average as it's baking the Windows AMI for Swarm. The steps to install all dependencies do take some time to complete. While the deployment is running you can read [below](#a-look-behind-the-curtain) for details on what's happening during stack creation.
359 |
360 | Once this stack is deployed, please proceed to [Deployment Validation](#deployment-validation)
361 |
362 | #### A look behind the curtain
363 |
364 | ##### Baking custom Windows AMI for UE5 Swarm
365 |
366 | The `gpic-pipeline-swarm-cluster` stack will first configure EC2 Image Builder to use the latest "Microsoft Windows Server 2019 Base" image as the base image. It also creates an EC2 Image Builder component defining the build steps. These steps will download the Zip archive from S3, install the .NET runtime, run the `UEPrereqSetup_x64.exe` installer and then open the Windows Firewall for the Swarm ports. You can view the `assets/unreal-engine-swarm-cluster-component.yml` file for details.
367 |
368 | Once the EC2 Image Builder process completes it will create a private AMI under your account.
This AMI contains all the required Unreal Engine 5 Swarm build dependencies and can be used to quickly launch the Swarm Coordinator and Swarm Agents.
369 |
370 | ##### Deploying UE5 Swarm Coordinator
371 |
372 | The Swarm Coordinator will be launched as a single EC2 instance. The launch will use `User Data` to configure Windows to start `SwarmCoordinator.exe` on boot. You can view the contents of the `User Data` in the `assets/setup-unreal-engine-swarm-coordinator.ps1` PowerShell script.
373 |
374 | ##### Deploying UE5 Swarm Agent Auto Scaling Group
375 |
376 | The Swarm Agents are launched as an Auto Scaling group, enabling us to quickly scale the number of nodes up and down. As the Swarm Agents need to be already online and registered when you submit a UE5 build, we can't use any metrics to scale the cluster on demand.
377 | Instead, you can use, for example, a schedule or a script to scale the cluster before submitting a job. With a schedule you could, for example, configure the cluster to scale up to a certain number of nodes in the morning and then scale back to zero after office hours.
378 |
379 | The Swarm Agent instances will also use `User Data` to configure Windows to start `SwarmAgent.exe` on boot and to inject a Swarm configuration file into the instance. This configuration file sets the number of threads equal to the number of CPU cores and also sets the Coordinator IP address. You can view the contents of the `User Data` in the `assets/setup-unreal-engine-swarm-agent.ps1` PowerShell script.
380 |
381 | # Deployment Validation
382 |
383 | Now that the `gpic-pipeline-swarm-cluster.yaml` stack has completed deployment you should see two additional EC2 instances running in your new VPC. Also, the CDK/CloudFormation stack should have output the private IP address of the Unreal Engine 5 Swarm Coordinator.
384 |
385 | On your Virtual Workstation you have to configure the local Swarm Agent. You can launch it from the `C:\Program Files\Epic Games\UE_5.2\Engine\Binaries\DotNET` directory. The application can be accessed by double clicking the Swarm Agent icon in the Taskbar (System Tray). After this you will need to configure the following values accessible in the **Settings** tab:
386 |
387 | - `AgentGroupName`: `ue5-swarm-aws`
388 | - `AllowedRemoteAgentGroup`: `ue5-swarm-aws`
389 | - `AllowedRemoteAgentNames`: `*`
390 | - `CoordinatorRemotingHost`: `<private IP address of the Swarm Coordinator>`
391 |
392 | Once this is done start experimenting with your environment. Here are some example tasks that you might want to test:
393 |
394 | - Create a streaming depot
395 | - Create a mainline stream
396 | - Create an additional Perforce user and grant them access to the newly created streaming depot
397 | - Create a new workspace
398 | - Create a `.p4ignore` file for your Unreal project, mark it for add and submit it to the mainline stream
399 | - Set the correct typemap for an Unreal project
400 | - Start Unreal Engine 5
401 | - Create a new Unreal project
402 | - Close Unreal Engine 5
403 | - Move your Unreal project into your Perforce workspace
404 | - Mark your Unreal project folder for add and submit it to the mainline stream
405 | - Open the Unreal project and set up the Perforce integration
406 | - Reconfigure the EC2 Auto Scaling group to use more instances (see the example below)
407 | - Submit a lighting build and see your Unreal Engine 5 Swarm agents in action
408 |
409 | If you have any questions regarding the steps outlined above feel free to reach out to us.
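
For the Auto Scaling task in the list above, the change can also be scripted instead of done in the console. A minimal sketch using the standard AWS CLI Auto Scaling commands; `<swarm-agent-asg-name>` is a placeholder for the group name created by the `gpic-pipeline-swarm-cluster` stack, which the first command helps you find:

```
# Find the name of the Swarm agent Auto Scaling group
aws autoscaling describe-auto-scaling-groups --query "AutoScalingGroups[].AutoScalingGroupName"

# Scale the Swarm agent fleet to four instances (within the group's min/max limits)
aws autoscaling set-desired-capacity --auto-scaling-group-name <swarm-agent-asg-name> --desired-capacity 4

# Alternatively, scale up on a recurring schedule (times are in UTC)
aws autoscaling put-scheduled-update-group-action --auto-scaling-group-name <swarm-agent-asg-name> --scheduled-action-name scale-up-mornings --recurrence "0 8 * * MON-FRI" --desired-capacity 4
```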
410 |
411 | _Please don't forget to stop the resources if you are not working with them, otherwise you will incur unnecessary cost. You can simply stop the Virtual Workstation, the Perforce Helix Core server and the Unreal Engine 5 Swarm Coordinator if you are taking a break and restart these instances when you want to continue working with them. The same applies to the Unreal Engine 5 Swarm agents, with the only caveat that the start and termination of these instances is managed by an EC2 Auto Scaling group. To change the number of running EC2 instances you need to modify the EC2 Auto Scaling group, see [here](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-capacity-limits.html)._
412 |
413 | # Cleanup
414 |
415 | To clean up this example you need to delete the CloudFormation stacks. Start by deleting the `gpic-pipeline-virtual-workstation` stack, followed by the `gpic-pipeline-perforce-helix-core` and `gpic-pipeline-swarm-cluster` stacks. Once all of these stacks are completely removed you can delete the `gpic-pipeline-foundation` stack. Additionally, terminate any workstations launched from the AWS Marketplace.
416 |
417 | With CDK you can delete the stacks with:
418 | Example commands:
419 |
420 | ```
421 | $ cdk destroy gpic-pipeline-virtual-workstation
422 | ```
423 |
424 | After you have removed all stacks, two resources need to be deleted manually: first the S3 bucket, and second the AMI that was created by the Unreal Engine 5 Swarm stack.
425 |
426 | Example commands:
427 |
428 | ```
429 | $ aws s3 rb s3://<your-bucket-name> --force
430 | $
431 | $ aws ec2 deregister-image --image-id <ami-id>
432 | ```
433 |
434 | # Extra Tips
435 |
436 | ## How to access the EC2 instances in the private subnets
437 |
438 | The Perforce Helix Core server can be accessed via AWS Systems Manager Session Manager.
439 |
440 | The Unreal Engine 5 Coordinator and agents can be accessed via RDP utilizing the Virtual Workstation. Alternatively you can use AWS Systems Manager Session Manager to open a PowerShell session on these instances.
441 |
442 | ## How to access the Unreal Engine 5 Swarm Agent logs?
443 |
444 | The Swarm Agent writes logs to `C:\ue5-swarm\SwarmCache\Logs`. See the section [above](#how-to-access-the-ec2-instances-in-the-private-subnets) on how to connect to these EC2 instances.
445 |
446 | ## Updating CloudFormation templates after code changes
447 |
448 | If you make changes to the CDK code and want to generate new CloudFormation templates, you will need to use the following commands to keep the stack references in sync:
449 |
450 | ```
451 | cdk synth gpic-pipeline-foundation -e > cloudformation/gpic-pipeline-foundation.yaml
452 | cdk synth gpic-pipeline-perforce-helix-core -e > cloudformation/gpic-pipeline-perforce-helix-core.yaml
453 | cdk synth gpic-pipeline-swarm-cluster -e > cloudformation/gpic-pipeline-swarm-cluster.yaml
454 | cdk synth gpic-pipeline-virtual-workstation -e > cloudformation/gpic-pipeline-virtual-workstation.yaml
455 | ```
456 |
457 | # Security
458 |
459 | See [CONTRIBUTING](CONTRIBUTING.md) for more information.
460 |
461 | # License
462 |
463 | This library is licensed under the MIT-0 License. See the [LICENSE](LICENSE) file.
464 |
465 | The AWS CloudFormation template downloads and installs Perforce Helix Core on EC2 instances during the process. Helix Core is proprietary software and is subject to the terms and conditions of Perforce. Please refer to the EULA on the following page for details.
Perforce Terms of Use : https://www.perforce.com/terms-use
466 |
--------------------------------------------------------------------------------
/app.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | ## Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 | ## SPDX-License-Identifier: MIT-0
4 |
5 |
6 | from aws_cdk import core
7 | import os
8 |
9 | from gpic_pipeline.foundation import FoundationStack
10 | from gpic_pipeline.virtualworkstation import VirtualWorkstationStack
11 | from gpic_pipeline.unrealengineswarmcluster import UnrealEngineSwarmClusterStack
12 | from gpic_pipeline.perforcehelixcore import PerforceHelixCoreStack
13 |
14 | app = core.App()
15 |
16 | foundation_stack = FoundationStack(app, "gpic-pipeline-foundation", env={'region': os.environ['CDK_DEFAULT_REGION']}, description="Guidance for a Game Production Environment on AWS - Foundation Stack (SO9329)")
17 |
18 | perforce_stack = PerforceHelixCoreStack(app, "gpic-pipeline-perforce-helix-core", foundation_stack.vpc, env={'region': os.environ['CDK_DEFAULT_REGION']}, description="Guidance for a Game Production Environment on AWS - Perforce Helix Core Stack (SO9329)")
19 |
20 | virtual_desktop_stack = VirtualWorkstationStack(app, "gpic-pipeline-virtual-workstation", foundation_stack.bucket, foundation_stack.vpc, env={'region': os.environ['CDK_DEFAULT_REGION']}, description="Guidance for a Game Production Environment on AWS - Virtual Workstation Stack (SO9329)")
21 |
22 | unreal_engine_swarm_cluster_stack = UnrealEngineSwarmClusterStack(app, "gpic-pipeline-swarm-cluster", foundation_stack.bucket, foundation_stack.vpc, env={'region': os.environ['CDK_DEFAULT_REGION']}, description="Guidance for a Game Production Environment on AWS - Unreal Engine Swarm Stack (SO9329)")
23 |
24 |
25 | app.synth()
26 |
--------------------------------------------------------------------------------
/architecture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-solutions-library-samples/guidance-for-game-production-environment-on-aws/cec0b36404c637f82dda475d32b0d5f0276fb128/architecture.png
--------------------------------------------------------------------------------
/assets/setup-perforce-helix-core.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | ## Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 | ## SPDX-License-Identifier: MIT-0
4 |
5 | # Create a filesystem on each of the block devices and mount them
6 |
7 | mkfs -t xfs /dev/sdb && mkdir /hxdepots && mount /dev/sdb /hxdepots
8 | mkfs -t xfs /dev/sdc && mkdir /hxlogs && mount /dev/sdc /hxlogs
9 | mkfs -t xfs /dev/sdd && mkdir /hxmetadata && mount /dev/sdd /hxmetadata
10 |
11 | # Modify /etc/fstab to mount the devices when booting up
12 |
13 | blkid /dev/sdb | awk -v OFS=" " '{print $2,"/hxdepots","xfs","defaults,nofail","0","2"}' >> /etc/fstab
14 | blkid /dev/sdc | awk -v OFS=" " '{print $2,"/hxlogs","xfs","defaults,nofail","0","2"}' >> /etc/fstab
15 |
16 | blkid /dev/sdd | awk -v OFS=" " '{print $2,"/hxmetadata","xfs","defaults,nofail","0","2"}' >> /etc/fstab
17 |
18 | # Add Perforce YUM repository and install Perforce
19 | cat <<'EOF' >> /etc/yum.repos.d/perforce.repo
20 | [perforce]
21 | name=Perforce
22 | baseurl=http://package.perforce.com/yum/rhel/7/x86_64/
23 | enabled=1
24 | gpgcheck=1
25 | EOF
26 |
27 | chown root:root /etc/yum.repos.d/perforce.repo
28 | chmod 0644 /etc/yum.repos.d/perforce.repo
29 |
30 | rpm --import https://package.perforce.com/perforce.pubkey
31 |
32 | yum -y update
33 | yum -y install helix-p4d
34 |
35 | # Remove AWS CLI version 1 and install version 2
36 | yum -y remove awscli
37 |
38 | curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "/tmp/awscliv2.zip"
39 | unzip /tmp/awscliv2.zip -d /tmp
40 | /tmp/aws/install
41 |
42 |
43 | # Install mailx - Needed by /p4/common/bin/recreate_offline_db.sh
44 | yum -y install mailx
45 |
46 | # Create p4admin user
47 | adduser -g perforce -G adm,wheel p4admin
48 |
49 | # Download and untar SDP
50 | wget -O /tmp/sdp.tgz https://swarm.workshop.perforce.com/downloads/guest/perforce_software/sdp/downloads/sdp.Unix.tgz?v=%2314
51 |
52 | tar xvfz /tmp/sdp.tgz --directory /hxdepots
53 |
54 | # Modify mkdirs.cfg
55 | cp /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg.bak
56 |
57 | INSTANCE_PRIVATE_DNS_NAME=$(hostname)
58 |
59 | sed -i -e 's/DB1=.*/DB1=hxmetadata/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg
60 | sed -i -e 's/DB2=.*/DB2=hxmetadata/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg
61 | sed -i -e 's/DD=.*/DD=hxdepots/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg
62 | sed -i -e 's/LG=.*/LG=hxlogs/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg
63 | sed -i -e 's/OSUSER=.*/OSUSER=p4admin/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg
64 | sed -i -e 's/OSGROUP=.*/OSGROUP=perforce/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg
65 | sed -i -e 's/CASE_SENSITIVE=.*/CASE_SENSITIVE=0/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg
66 | sed -i -e 's/MAILHOST=.*/MAILHOST=localhost/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg
67 | sed -i -e 's/SSL_PREFIX=.*/SSL_PREFIX=/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg
68 | sed -i -e "s/P4DNSNAME=.*/P4DNSNAME=$INSTANCE_PRIVATE_DNS_NAME/g" /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg
69 | sed -i -e 's/COMPLAINFROM_DOMAIN=.*/COMPLAINFROM_DOMAIN=amazonaws.com/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg
70 |
71 | # Create symlinks
72 | ln -s /opt/perforce/bin/p4 /hxdepots/sdp/Server/Unix/p4/common/bin/p4
73 | ln -s /opt/perforce/sbin/p4d /hxdepots/sdp/Server/Unix/p4/common/bin/p4d
74 |
75 | # Run SDP
76 | /hxdepots/sdp/Server/Unix/setup/mkdirs.sh 1
77 |
78 |
79 | # Add systemd configuration file for Perforce Helix Core
80 | cat <<'EOF' >> /etc/systemd/system/p4d_1.service
81 | [Unit]
82 | Description=Helix Server Instance 1
83 | Documentation=http://www.perforce.com/perforce/doc.current/manuals/p4sag/index.html
84 | Requires=network.target network-online.target
85 | After=network.target network-online.target
86 |
87 | [Service]
88 | Type=forking
89 | TimeoutStartSec=60s
90 | TimeoutStopSec=60s
91 | ExecStart=/p4/1/bin/p4d_1_init start
92 | ExecStop=/p4/1/bin/p4d_1_init stop
93 | User=p4admin
94 |
95 | [Install]
96 | WantedBy=multi-user.target
97 | EOF
98 |
99 | chown p4admin:perforce /etc/systemd/system/p4d_1.service
100 | chmod 0400 /etc/systemd/system/p4d_1.service
101 |
102 | # Enable and start the Perforce Helix Core daemon
103 | systemctl enable p4d_1
104 | systemctl start p4d_1
105 |
106 | # Persist ServerID
107 | echo ${SERVER_ID} > /p4/1/root/server.id
108 |
109 | /hxdepots/sdp/Server/setup/configure_new_server.sh 1
110 |
111 |
112 | # Load Perforce environment variables, set the password persisted in AWS Secrets Manager and put security measures in place
113 | source /p4/common/bin/p4_vars 1
114 |
115 | p4 configure set dm.password.minlength=32
116 | p4 configure set dm.user.noautocreate=2
117 | p4 configure set run.users.authorize=1
118 | p4 configure set dm.keys.hide=2
119 | p4 configure set security=3
120 |
121 |
122 | perforce_default_password=$(/usr/local/bin/aws secretsmanager get-secret-value --secret-id PERFORCE_PASSWORD_ARN --query SecretString --output text)
123 |
124 | #
125 | # p4 passwd -P is not supported w/ security level set to 3 (See above)
126 | echo -en "$perforce_default_password\n$perforce_default_password\n" | p4 passwd
127 |
128 |
129 |
130 |
131 |
132 |
133 |
134 |
135 |
--------------------------------------------------------------------------------
/assets/setup-unreal-egine-swarm-agent.ps1:
--------------------------------------------------------------------------------
1 |
2 | ## Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 | ## SPDX-License-Identifier: MIT-0
4 |
5 | # Get Administrator password from AWS Secrets Manager
6 |
7 | $admin_password_plaintext = Get-SECSecretValue ADMIN_PASSWORD_SECRET_ARN | % { Echo $_.SecretString}
8 | $admin_password_secure_string = $admin_password_plaintext | ConvertTo-SecureString -AsPlainText -Force
9 |
10 | # Set Administrator password
11 | Get-LocalUser -Name "Administrator" | Set-LocalUser -Password $admin_password_secure_string
12 |
13 | # Define the Coordinator IP; CloudFormation replaces this placeholder when the instance user data is created
14 | $coordinator_ip = "COORDINATOR_IP"
15 |
16 | # Template of the Swarm Agent Developer Options file
17 | $developeroptions = '
18 |
19 | true
20 | LOCALCORES
21 | BelowNormal
22 | REMOTECORES
23 | Idle
24 | false
25 | 15
26 | '
27 |
28 | # Calculate number of cores
29 | $cores = (Get-WmiObject -Class Win32_Processor | Select-Object -Property NumberOfLogicalProcessors).NumberOfLogicalProcessors
30 |
31 | # Set the core values for the Swarm Agent
32 | $developeroptions = $developeroptions.replace("REMOTECORES", $cores)
33 | $developeroptions = $developeroptions.replace("LOCALCORES", $cores-1)
34 |
35 | # Save the configuration file
36 | $developeroptions | Out-File -FilePath "C:\ue5-swarm\SwarmAgent.DeveloperOptions.xml"
37 |
38 | # Template of the Swarm Options file
39 | $agentoptions = '
40 |
41 |
42 |
43 |
44 |
45 |
46 |
47 |
48 |
49 |
50 |
51 |
52 |
53 |
54 |
55 |
56 |
57 |
58 |
59 |
60 |
61 |
62 |
63 |
64 |
65 |
66 |
67 |
68 |
69 |
70 | *
71 | ue5-swarm-aws
72 | ue5-swarm-aws
73 | COORDINATORHOST
74 | C:\ue5-swarm/SwarmCache
75 | 5
76 | false
77 | true
78 | 15
79 |
80 | 0
81 | 0
82 |
83 |
84 | 768
85 | 768
86 |
87 | 2
88 | 4
89 | '
90 |
91 | # Replace the Coordinator IP in the template
92 | $agentoptions = $agentoptions.replace("COORDINATORHOST", $coordinator_ip)
93 |
94 | # Save the configuration file
95 | $agentoptions | Out-File -FilePath "C:\ue5-swarm\SwarmAgent.Options.xml"
96 |
97 | # Define the Swarm Agent as a Scheduled Task that starts at instance boot
98 | $action = New-ScheduledTaskAction -Execute "C:\ue5-swarm\SwarmAgent.exe"
99 | $trigger = New-ScheduledTaskTrigger -AtStartup
100 | Register-ScheduledTask -Action $action -Trigger $trigger -User "Administrator" -Password $admin_password_plaintext -TaskName "SwarmAgent" -Description "UE5 Swarm Agent" -RunLevel Highest
101 |
102 | # Restart the instance to trigger the Swarm Agent Scheduled Task
103 | Restart-Computer
104 |
--------------------------------------------------------------------------------
/assets/setup-unreal-engine-swarm-coordinator.ps1:
--------------------------------------------------------------------------------
1 |
2 | ## Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 | ## SPDX-License-Identifier: MIT-0
4 |
5 | # Get Administrator password from AWS Secrets Manager
6 | $admin_password_plaintext = Get-SECSecretValue ADMIN_PASSWORD_SECRET_ARN | % { Echo $_.SecretString}
7 | $admin_password_secure_string = $admin_password_plaintext | ConvertTo-SecureString -AsPlainText -Force
8 |
9 | # Set Administrator password
10 | Get-LocalUser -Name "Administrator" | Set-LocalUser -Password $admin_password_secure_string
11 |
12 | # Define the Swarm Coordinator to start as a Scheduled Task at startup
13 | $action = New-ScheduledTaskAction -Execute "C:\ue5-swarm\SwarmCoordinator.exe"
14 | $trigger = New-ScheduledTaskTrigger -AtStartup
15 | Register-ScheduledTask -Action $action -Trigger $trigger -User "Administrator" -Password $admin_password_plaintext -TaskName "SwarmCoordinator" -Description "UE5 Swarm Coordinator" -RunLevel Highest -AsJob
16 |
17 | # Restart the instance to trigger the Scheduled Task.
18 | Restart-Computer
19 |
--------------------------------------------------------------------------------
/assets/setup-virtual-workstation.ps1:
--------------------------------------------------------------------------------
1 |
2 | ## Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 | ## SPDX-License-Identifier: MIT-0
4 |
5 | # Set Windows Administrator password
6 |
7 | $admin_password = Get-SECSecretValue ADMIN_PASSWORD_SECRET_ARN | % { Echo $_.SecretString} | ConvertTo-SecureString -AsPlainText -Force
8 |
9 | Get-LocalUser -Name "Administrator" | Set-LocalUser -Password $admin_password
10 |
11 | # Set up Windows Firewall rules
12 |
13 | # Parsec
14 | New-NetFirewallRule -DisplayName 'Allow Parsec' -Direction Inbound -Action Allow -Protocol TCP -LocalPort 1666
15 |
16 | # PCoIP
17 | New-NetFirewallRule -DisplayName 'Allow PCoIP' -Direction Inbound -Action Allow -Protocol TCP -LocalPort 4172
18 |
19 | New-NetFirewallRule -DisplayName 'Allow PCoIP' -Direction Inbound -Action Allow -Protocol UDP -LocalPort 4172
20 |
21 | New-NetFirewallRule -DisplayName 'Allow PCoIP' -Direction Inbound -Action Allow -Protocol TCP -LocalPort 443
22 |
23 | # NICE DCV
24 | New-NetFirewallRule -DisplayName 'Allow NICE DCV' -Direction Inbound -Action Allow -Protocol TCP -LocalPort 8443
25 |
26 | # Allow Unreal Engine Swarm communication
27 | New-NetFirewallRule -DisplayName 'Allow UE5 Swarm TCP' -Direction Inbound -Action Allow -Protocol TCP -LocalPort 8008-8009
28 | New-NetFirewallRule -DisplayName 'Allow UE5 Swarm UDP' -Direction Inbound -Action Allow -Protocol UDP -LocalPort 8008-8009
29 | New-NetFirewallRule -DisplayName 'Allow ICMP' -Direction Inbound -Action Allow -Protocol ICMPv4
30 |
31 | # Install NVIDIA GRID driver
32 |
33 | $Bucket = "ec2-windows-nvidia-drivers"
34 | $KeyPrefix = "latest"
35 | $LocalPath = "c:\nvidia-temp"
36 | $Objects = Get-S3Object -BucketName $Bucket -KeyPrefix $KeyPrefix -Region us-east-1
37 | foreach ($Object in $Objects) {
38 |     $LocalFileName = $Object.Key
39 |     if ($LocalFileName -ne '' -and $Object.Size -ne 0) {
40 |         $LocalFilePath = Join-Path $LocalPath $LocalFileName
41 |         Copy-S3Object -BucketName $Bucket -Key $Object.Key -LocalFile $LocalFilePath -Region us-east-1
42 |     }
43 | }
44 |
45 |
46 | $nvidia_setup = Get-ChildItem -Path $LocalPath -Filter *server2019*.exe -Recurse -ErrorAction SilentlyContinue -Force | %{$_.FullName}
47 |
48 |
49 | & $nvidia_setup -s | Out-Null
50 |
51 | New-ItemProperty -Path "HKLM:\SOFTWARE\NVIDIA Corporation\Global\GridLicensing" -Name "NvCplDisableManageLicensePage" -PropertyType "DWord" -Value "1"
52 |
53 | Remove-Item $LocalPath -Recurse
54 |
55 |
56 | # Install the .NET Framework 3.5 (Net-Framework-Core) feature
57 | Install-WindowsFeature Net-Framework-Core
58 |
59 |
--------------------------------------------------------------------------------
/assets/unreal-engine-swarm-cluster-component.yml:
--------------------------------------------------------------------------------
1 | name: InstallUE5Swarm
2 | description: This component installs UE5 Swarm from an S3 archive and also installs all prerequisites for a build.
3 | schemaVersion: 1.0
4 |
5 | phases:
6 |   - name: build
7 |     steps:
8 |       - name: CreateTempFolder
9 |         action: CreateFolder
10 |         inputs:
11 |           - path: C:\ue5-swarm-temp
12 |       - name: DownloadDependencies
13 |         action: S3Download
14 |         maxAttempts: 3
15 |         inputs:
16 |           - source: s3://S3-BUCKET-NAME/ue5-swarm-archive.zip
17 |             destination: C:\ue5-swarm-temp\ue5-swarm-archive.zip
18 |       - name: CreateSwarmFolder
19 |         action: CreateFolder
20 |         inputs:
21 |           - path: C:\ue5-swarm
22 |       - name: UncompressSwarmFiles
23 |         action: ExecutePowerShell
24 |         inputs:
25 |           commands:
26 |             - Expand-Archive -LiteralPath C:\ue5-swarm-temp\ue5-swarm-archive.zip -DestinationPath C:\ue5-swarm
27 |       - name: DeleteTempFolder
28 |         action: DeleteFolder
29 |         inputs:
30 |           - path: C:\ue5-swarm-temp
31 |             force: true
32 |       - name: InstallDotNet
33 |         action: ExecutePowerShell
34 |         inputs:
35 |           commands:
36 |             - Install-WindowsFeature Net-Framework-Core
37 |       - name: InstallPreReqs
38 |         action: ExecutePowerShell
39 |         inputs:
40 |           commands:
41 |             - Start-Process -Wait -FilePath "C:\ue5-swarm\UEPrereqSetup_x64.exe" -ArgumentList "/install /quiet"
42 |       - name: OpenFirewall
43 |         action: ExecutePowerShell
44 |         inputs:
45 |           commands:
46 |             - New-NetFirewallRule -DisplayName 'Allow UE5 Swarm TCP' -Direction Inbound -Action Allow -Protocol TCP -LocalPort 8008-8009
47 |             - New-NetFirewallRule -DisplayName 'Allow UE5 Swarm UDP' -Direction Inbound -Action Allow -Protocol UDP -LocalPort 8008-8009
48 |             - New-NetFirewallRule -DisplayName 'Allow ICMP' -Direction Inbound -Action Allow -Protocol ICMPv4
--------------------------------------------------------------------------------
/assets/unreal-engine-swarm-create-dependencies-archive.ps1:
--------------------------------------------------------------------------------
1 | ## Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 | ## SPDX-License-Identifier: MIT-0
3 |
4 | # This PowerShell script creates a Zip archive with all required
5 | # files for setting up a UE5 Swarm Agent node and the Coordinator
6 |
7 | # Default Unreal Engine install location; the user is prompted if it is not found
8 | $ue5root = 'C:\Program Files\Epic Games\UE_5.2'
9 | while (!
(Test-Path -Path "$ue5root")) { 10 | $ue5root = Read-Host -Prompt 'Input Unreal Engine root directory, for example C:\Program Files\Epic Games\UE_5.2' 11 | } 12 | 13 | # The archive is stored in your Documents folder 14 | $ue5SwarmArchive = "ue5-swarm-archive" 15 | $archivePath = [Environment]::GetFolderPath('MyDocuments')+"\"+$ue5SwarmArchive 16 | 17 | # Path to Swarm related files 18 | $ue5swarmpath = $ue5root + "\Engine\Binaries\DotNET" 19 | $swarmPaths = @() 20 | $swarmfiles = "AgentInterface.dll", "SwarmAgent.exe", "SwarmAgent.exe.config" 21 | $swarmfiles += "SwarmCommonUtils.dll", "SwarmCoordinator.exe" 22 | $swarmfiles += "SwarmCoordinator.exe.config","SwarmCoordinatorInterface.dll" 23 | $swarmfiles += "SwarmInterface.dll","UnrealControls.dll" 24 | $swarmfiles | Foreach-Object { $swarmPaths += $ue5swarmpath + '\' + $_ } 25 | 26 | # Path to UE5PrereqSetup 27 | $ue5prereq = $ue5root + "\Engine\Extras\Redist\en-us\UEPrereqSetup_x64.exe" 28 | $compressPaths = @() 29 | $compressPaths = $swarmPaths + $ue5prereq 30 | Compress-Archive -LiteralPath $compressPaths -DestinationPath $archivePath -Force 31 | 32 | "Dependencies compressed and saved to $archivePath" -------------------------------------------------------------------------------- /cdk.json: -------------------------------------------------------------------------------- 1 | { 2 | "app": "python3 app.py", 3 | "context": { 4 | "@aws-cdk/core:enableStackNameDuplicates": "true", 5 | "aws-cdk:enableDiffNoFail": "true", 6 | "@aws-cdk/core:stackRelativeExports": "true", 7 | "@aws-cdk/aws-ecr-assets:dockerIgnoreSupport": true, 8 | "@aws-cdk/aws-secretsmanager:parseOwnedSecretName": true, 9 | "@aws-cdk/aws-kms:defaultKeyPolicies": true, 10 | "@aws-cdk/aws-s3:grantWriteWithoutAcl": true, 11 | 12 | "_comment_foundation_stack": "Foundation Stack", 13 | "foundation_vpc_cidr": "10.0.0.0/16", 14 | 15 | "_comment_virtual_workstation_stack": "Virtual Workstation Stack", 16 | 17 | "virtual_workstation_trusted_internal_cidr": "10.0.0.0/8", 18 | "virtual_workstation_trusted_remote_cidr": "0.0.0.0/0", 19 | "virtual_workstation_instance_type": "g4dn.4xlarge", 20 | "virtual_workstation_root_volume_size": 200, 21 | 22 | "_comment_unreal_engine_swarm_cluster_stack": "Unreal Engine 5 Swarm Cluster Stack", 23 | 24 | "unreal_engine_swarm_cluster_trusted_internal_cidr": "10.0.0.0/8", 25 | "unreal_engine_swarm_cluster_image_builder_instance_type": "m5.large", 26 | "unreal_engine_swarm_cluster_coordinator_instance_type": "t3.large", 27 | "unreal_engine_swarm_cluster_agent_instance_type": "c5.4xlarge", 28 | "unreal_engine_swarm_cluster_agent_root_volume_size": 100, 29 | 30 | "_comment_perforce_helix_core_stack": "Perforce Helix Core", 31 | 32 | "perforce_trusted_internal_cidr": "10.0.0.0/8", 33 | "perforce_server_name": "pvs-aws-01", 34 | "perforce_server_id": "master.1", 35 | "perforce_server_description": "Master/commit server.
The master.1 is the SDP instance name, a data set identifier", 36 | "perforce_instance_type": "c5.4xlarge", 37 | "perforce_depot_volume_type": "st1", 38 | "perforce_depot_volume_size": 1024, 39 | "perforce_log_volume_type": "gp2", 40 | "perforce_log_volume_size": 128, 41 | "perforce_metadata_volume_type": "gp2", 42 | "perforce_metadata_volume_size": 64 43 | } 44 | } 45 | -------------------------------------------------------------------------------- /cloudformation/gpic-pipeline-foundation.yaml: -------------------------------------------------------------------------------- 1 | Description: Guidance for a Game Production Environment on AWS - Foundation Stack (SO9329) 2 | Resources: 3 | gpicpipelinebucket6D2579DD: 4 | Type: AWS::S3::Bucket 5 | UpdateReplacePolicy: Retain 6 | DeletionPolicy: Retain 7 | Metadata: 8 | aws:cdk:path: gpic-pipeline-foundation/gpic-pipeline-bucket/Resource 9 | VPCB9E5F0B4: 10 | Type: AWS::EC2::VPC 11 | Properties: 12 | CidrBlock: 10.0.0.0/16 13 | EnableDnsHostnames: true 14 | EnableDnsSupport: true 15 | InstanceTenancy: default 16 | Tags: 17 | - Key: Name 18 | Value: gpic-pipeline-foundation/VPC 19 | Metadata: 20 | aws:cdk:path: gpic-pipeline-foundation/VPC/Resource 21 | VPCPublicSubnet1SubnetB4246D30: 22 | Type: AWS::EC2::Subnet 23 | Properties: 24 | VpcId: 25 | Ref: VPCB9E5F0B4 26 | AvailabilityZone: 27 | Fn::Select: 28 | - 0 29 | - Fn::GetAZs: "" 30 | CidrBlock: 10.0.0.0/18 31 | MapPublicIpOnLaunch: true 32 | Tags: 33 | - Key: aws-cdk:subnet-name 34 | Value: Public 35 | - Key: aws-cdk:subnet-type 36 | Value: Public 37 | - Key: Name 38 | Value: gpic-pipeline-foundation/VPC/PublicSubnet1 39 | Metadata: 40 | aws:cdk:path: gpic-pipeline-foundation/VPC/PublicSubnet1/Subnet 41 | VPCPublicSubnet1RouteTableFEE4B781: 42 | Type: AWS::EC2::RouteTable 43 | Properties: 44 | VpcId: 45 | Ref: VPCB9E5F0B4 46 | Tags: 47 | - Key: Name 48 | Value: gpic-pipeline-foundation/VPC/PublicSubnet1 49 | Metadata: 50 | aws:cdk:path: gpic-pipeline-foundation/VPC/PublicSubnet1/RouteTable 51 | VPCPublicSubnet1RouteTableAssociation0B0896DC: 52 | Type: AWS::EC2::SubnetRouteTableAssociation 53 | Properties: 54 | RouteTableId: 55 | Ref: VPCPublicSubnet1RouteTableFEE4B781 56 | SubnetId: 57 | Ref: VPCPublicSubnet1SubnetB4246D30 58 | Metadata: 59 | aws:cdk:path: gpic-pipeline-foundation/VPC/PublicSubnet1/RouteTableAssociation 60 | VPCPublicSubnet1DefaultRoute91CEF279: 61 | Type: AWS::EC2::Route 62 | Properties: 63 | RouteTableId: 64 | Ref: VPCPublicSubnet1RouteTableFEE4B781 65 | DestinationCidrBlock: 0.0.0.0/0 66 | GatewayId: 67 | Ref: VPCIGWB7E252D3 68 | DependsOn: 69 | - VPCVPCGW99B986DC 70 | Metadata: 71 | aws:cdk:path: gpic-pipeline-foundation/VPC/PublicSubnet1/DefaultRoute 72 | VPCPublicSubnet1EIP6AD938E8: 73 | Type: AWS::EC2::EIP 74 | Properties: 75 | Domain: vpc 76 | Tags: 77 | - Key: Name 78 | Value: gpic-pipeline-foundation/VPC/PublicSubnet1 79 | Metadata: 80 | aws:cdk:path: gpic-pipeline-foundation/VPC/PublicSubnet1/EIP 81 | VPCPublicSubnet1NATGatewayE0556630: 82 | Type: AWS::EC2::NatGateway 83 | Properties: 84 | SubnetId: 85 | Ref: VPCPublicSubnet1SubnetB4246D30 86 | AllocationId: 87 | Fn::GetAtt: 88 | - VPCPublicSubnet1EIP6AD938E8 89 | - AllocationId 90 | Tags: 91 | - Key: Name 92 | Value: gpic-pipeline-foundation/VPC/PublicSubnet1 93 | Metadata: 94 | aws:cdk:path: gpic-pipeline-foundation/VPC/PublicSubnet1/NATGateway 95 | VPCPublicSubnet2Subnet74179F39: 96 | Type: AWS::EC2::Subnet 97 | Properties: 98 | VpcId: 99 | Ref: VPCB9E5F0B4 100 | AvailabilityZone: 101 | Fn::Select: 102 | - 1 
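# The region's second AZ; the four /18 CidrBlocks in this template split the VPC's 10.0.0.0/16 evenly between two public and two private subnets.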
103 | - Fn::GetAZs: "" 104 | CidrBlock: 10.0.64.0/18 105 | MapPublicIpOnLaunch: true 106 | Tags: 107 | - Key: aws-cdk:subnet-name 108 | Value: Public 109 | - Key: aws-cdk:subnet-type 110 | Value: Public 111 | - Key: Name 112 | Value: gpic-pipeline-foundation/VPC/PublicSubnet2 113 | Metadata: 114 | aws:cdk:path: gpic-pipeline-foundation/VPC/PublicSubnet2/Subnet 115 | VPCPublicSubnet2RouteTable6F1A15F1: 116 | Type: AWS::EC2::RouteTable 117 | Properties: 118 | VpcId: 119 | Ref: VPCB9E5F0B4 120 | Tags: 121 | - Key: Name 122 | Value: gpic-pipeline-foundation/VPC/PublicSubnet2 123 | Metadata: 124 | aws:cdk:path: gpic-pipeline-foundation/VPC/PublicSubnet2/RouteTable 125 | VPCPublicSubnet2RouteTableAssociation5A808732: 126 | Type: AWS::EC2::SubnetRouteTableAssociation 127 | Properties: 128 | RouteTableId: 129 | Ref: VPCPublicSubnet2RouteTable6F1A15F1 130 | SubnetId: 131 | Ref: VPCPublicSubnet2Subnet74179F39 132 | Metadata: 133 | aws:cdk:path: gpic-pipeline-foundation/VPC/PublicSubnet2/RouteTableAssociation 134 | VPCPublicSubnet2DefaultRouteB7481BBA: 135 | Type: AWS::EC2::Route 136 | Properties: 137 | RouteTableId: 138 | Ref: VPCPublicSubnet2RouteTable6F1A15F1 139 | DestinationCidrBlock: 0.0.0.0/0 140 | GatewayId: 141 | Ref: VPCIGWB7E252D3 142 | DependsOn: 143 | - VPCVPCGW99B986DC 144 | Metadata: 145 | aws:cdk:path: gpic-pipeline-foundation/VPC/PublicSubnet2/DefaultRoute 146 | VPCPrivateSubnet1Subnet8BCA10E0: 147 | Type: AWS::EC2::Subnet 148 | Properties: 149 | VpcId: 150 | Ref: VPCB9E5F0B4 151 | AvailabilityZone: 152 | Fn::Select: 153 | - 0 154 | - Fn::GetAZs: "" 155 | CidrBlock: 10.0.128.0/18 156 | MapPublicIpOnLaunch: false 157 | Tags: 158 | - Key: aws-cdk:subnet-name 159 | Value: Private 160 | - Key: aws-cdk:subnet-type 161 | Value: Private 162 | - Key: Name 163 | Value: gpic-pipeline-foundation/VPC/PrivateSubnet1 164 | Metadata: 165 | aws:cdk:path: gpic-pipeline-foundation/VPC/PrivateSubnet1/Subnet 166 | VPCPrivateSubnet1RouteTableBE8A6027: 167 | Type: AWS::EC2::RouteTable 168 | Properties: 169 | VpcId: 170 | Ref: VPCB9E5F0B4 171 | Tags: 172 | - Key: Name 173 | Value: gpic-pipeline-foundation/VPC/PrivateSubnet1 174 | Metadata: 175 | aws:cdk:path: gpic-pipeline-foundation/VPC/PrivateSubnet1/RouteTable 176 | VPCPrivateSubnet1RouteTableAssociation347902D1: 177 | Type: AWS::EC2::SubnetRouteTableAssociation 178 | Properties: 179 | RouteTableId: 180 | Ref: VPCPrivateSubnet1RouteTableBE8A6027 181 | SubnetId: 182 | Ref: VPCPrivateSubnet1Subnet8BCA10E0 183 | Metadata: 184 | aws:cdk:path: gpic-pipeline-foundation/VPC/PrivateSubnet1/RouteTableAssociation 185 | VPCPrivateSubnet1DefaultRouteAE1D6490: 186 | Type: AWS::EC2::Route 187 | Properties: 188 | RouteTableId: 189 | Ref: VPCPrivateSubnet1RouteTableBE8A6027 190 | DestinationCidrBlock: 0.0.0.0/0 191 | NatGatewayId: 192 | Ref: VPCPublicSubnet1NATGatewayE0556630 193 | Metadata: 194 | aws:cdk:path: gpic-pipeline-foundation/VPC/PrivateSubnet1/DefaultRoute 195 | VPCPrivateSubnet2SubnetCFCDAA7A: 196 | Type: AWS::EC2::Subnet 197 | Properties: 198 | VpcId: 199 | Ref: VPCB9E5F0B4 200 | AvailabilityZone: 201 | Fn::Select: 202 | - 1 203 | - Fn::GetAZs: "" 204 | CidrBlock: 10.0.192.0/18 205 | MapPublicIpOnLaunch: false 206 | Tags: 207 | - Key: aws-cdk:subnet-name 208 | Value: Private 209 | - Key: aws-cdk:subnet-type 210 | Value: Private 211 | - Key: Name 212 | Value: gpic-pipeline-foundation/VPC/PrivateSubnet2 213 | Metadata: 214 | aws:cdk:path: gpic-pipeline-foundation/VPC/PrivateSubnet2/Subnet 215 | VPCPrivateSubnet2RouteTable0A19E10E: 216 | Type: 
AWS::EC2::RouteTable 217 | Properties: 218 | VpcId: 219 | Ref: VPCB9E5F0B4 220 | Tags: 221 | - Key: Name 222 | Value: gpic-pipeline-foundation/VPC/PrivateSubnet2 223 | Metadata: 224 | aws:cdk:path: gpic-pipeline-foundation/VPC/PrivateSubnet2/RouteTable 225 | VPCPrivateSubnet2RouteTableAssociation0C73D413: 226 | Type: AWS::EC2::SubnetRouteTableAssociation 227 | Properties: 228 | RouteTableId: 229 | Ref: VPCPrivateSubnet2RouteTable0A19E10E 230 | SubnetId: 231 | Ref: VPCPrivateSubnet2SubnetCFCDAA7A 232 | Metadata: 233 | aws:cdk:path: gpic-pipeline-foundation/VPC/PrivateSubnet2/RouteTableAssociation 234 | VPCPrivateSubnet2DefaultRouteF4F5CFD2: 235 | Type: AWS::EC2::Route 236 | Properties: 237 | RouteTableId: 238 | Ref: VPCPrivateSubnet2RouteTable0A19E10E 239 | DestinationCidrBlock: 0.0.0.0/0 240 | NatGatewayId: 241 | Ref: VPCPublicSubnet1NATGatewayE0556630 242 | Metadata: 243 | aws:cdk:path: gpic-pipeline-foundation/VPC/PrivateSubnet2/DefaultRoute 244 | VPCIGWB7E252D3: 245 | Type: AWS::EC2::InternetGateway 246 | Properties: 247 | Tags: 248 | - Key: Name 249 | Value: gpic-pipeline-foundation/VPC 250 | Metadata: 251 | aws:cdk:path: gpic-pipeline-foundation/VPC/IGW 252 | VPCVPCGW99B986DC: 253 | Type: AWS::EC2::VPCGatewayAttachment 254 | Properties: 255 | VpcId: 256 | Ref: VPCB9E5F0B4 257 | InternetGatewayId: 258 | Ref: VPCIGWB7E252D3 259 | Metadata: 260 | aws:cdk:path: gpic-pipeline-foundation/VPC/VPCGW 261 | CDKMetadata: 262 | Type: AWS::CDK::Metadata 263 | Properties: 264 | Analytics: v2:deflate64:H4sIAAAAAAAA/0WPzY7CMAyEn4V7CL8XbgsVQlwgKoh76jVqKCQosUEoyrtvQ2F78ueZsWVP5HQ8l+PBj36GIfw2owjOo4wH0tCIEoNjDyiKs90z3ZlE4Wwgz0BZ+/pJ5PEYZjKuGBp8mx0lgTCV8XSHrJ1UIRRXVwMHrmyX66l0THjU1RV7vdeWITgwmoyz/+EM663KZadpowmf+iWUN48W+8VbS+hb/ga6Sz7dktpf6xtaSikJ9aLa2dFMLuRkcAnGDD1bMjeUZVf/AELh+K0yAQAA 265 | Metadata: 266 | aws:cdk:path: gpic-pipeline-foundation/CDKMetadata/Default 267 | Outputs: 268 | BucketName: 269 | Description: Game Production in the Cloud pipeline bucket. This bucket will be used by other stacks in this application. 
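# The ExportsOutput* outputs below are generated automatically by the CDK so that the other stacks can import the VPC, subnets, and bucket via Fn::ImportValue.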
270 | Value: 271 | Ref: gpicpipelinebucket6D2579DD 272 | ExportsOutputRefVPCB9E5F0B4BD23A326: 273 | Value: 274 | Ref: VPCB9E5F0B4 275 | Export: 276 | Name: gpic-pipeline-foundation:ExportsOutputRefVPCB9E5F0B4BD23A326 277 | ExportsOutputRefVPCPrivateSubnet1Subnet8BCA10E01F79A1B7: 278 | Value: 279 | Ref: VPCPrivateSubnet1Subnet8BCA10E0 280 | Export: 281 | Name: gpic-pipeline-foundation:ExportsOutputRefVPCPrivateSubnet1Subnet8BCA10E01F79A1B7 282 | ExportsOutputFnGetAttgpicpipelinebucket6D2579DDArnC5C75313: 283 | Value: 284 | Fn::GetAtt: 285 | - gpicpipelinebucket6D2579DD 286 | - Arn 287 | Export: 288 | Name: gpic-pipeline-foundation:ExportsOutputFnGetAttgpicpipelinebucket6D2579DDArnC5C75313 289 | ExportsOutputRefVPCPublicSubnet1SubnetB4246D30D84F935B: 290 | Value: 291 | Ref: VPCPublicSubnet1SubnetB4246D30 292 | Export: 293 | Name: gpic-pipeline-foundation:ExportsOutputRefVPCPublicSubnet1SubnetB4246D30D84F935B 294 | ExportsOutputRefgpicpipelinebucket6D2579DDA0504931: 295 | Value: 296 | Ref: gpicpipelinebucket6D2579DD 297 | Export: 298 | Name: gpic-pipeline-foundation:ExportsOutputRefgpicpipelinebucket6D2579DDA0504931 299 | ExportsOutputRefVPCPrivateSubnet2SubnetCFCDAA7AB22CF85D: 300 | Value: 301 | Ref: VPCPrivateSubnet2SubnetCFCDAA7A 302 | Export: 303 | Name: gpic-pipeline-foundation:ExportsOutputRefVPCPrivateSubnet2SubnetCFCDAA7AB22CF85D 304 | 305 | -------------------------------------------------------------------------------- /cloudformation/gpic-pipeline-perforce-helix-core.yaml: -------------------------------------------------------------------------------- 1 | Description: Guidance for a Game Production Environment on AWS - Perforce Helix Core Stack (SO9329) 2 | Resources: 3 | PerforceHelixCoreInstanceRoleB352A600: 4 | Type: AWS::IAM::Role 5 | Properties: 6 | AssumeRolePolicyDocument: 7 | Statement: 8 | - Action: sts:AssumeRole 9 | Effect: Allow 10 | Principal: 11 | Service: ec2.amazonaws.com 12 | Version: "2012-10-17" 13 | ManagedPolicyArns: 14 | - Fn::Join: 15 | - "" 16 | - - "arn:" 17 | - Ref: AWS::Partition 18 | - :iam::aws:policy/service-role/AmazonEC2RoleforSSM 19 | RoleName: PerforceHelixCoreInstanceRole 20 | Metadata: 21 | aws:cdk:path: gpic-pipeline-perforce-helix-core/PerforceHelixCoreInstanceRole/Resource 22 | PerforceHelixCoreInstanceRoleDefaultPolicy3F692F92: 23 | Type: AWS::IAM::Policy 24 | Properties: 25 | PolicyDocument: 26 | Statement: 27 | - Action: 28 | - secretsmanager:GetSecretValue 29 | - secretsmanager:DescribeSecret 30 | Effect: Allow 31 | Resource: 32 | Ref: PerforceHelixCorePasswordB6B489C5 33 | Version: "2012-10-17" 34 | PolicyName: PerforceHelixCoreInstanceRoleDefaultPolicy3F692F92 35 | Roles: 36 | - Ref: PerforceHelixCoreInstanceRoleB352A600 37 | Metadata: 38 | aws:cdk:path: gpic-pipeline-perforce-helix-core/PerforceHelixCoreInstanceRole/DefaultPolicy/Resource 39 | PerforceHelixCorePasswordB6B489C5: 40 | Type: AWS::SecretsManager::Secret 41 | Properties: 42 | GenerateSecretString: 43 | ExcludeCharacters: "'\"" 44 | ExcludePunctuation: true 45 | IncludeSpace: false 46 | PasswordLength: 32 47 | UpdateReplacePolicy: Delete 48 | DeletionPolicy: Delete 49 | Metadata: 50 | aws:cdk:path: gpic-pipeline-perforce-helix-core/Perforce Helix Core Password/Resource 51 | PerforceHelixCoreSecurityGroup922154AA: 52 | Type: AWS::EC2::SecurityGroup 53 | Properties: 54 | GroupDescription: Allows access to the Perforce server 55 | GroupName: Perforce-SecurityGroup 56 | SecurityGroupEgress: 57 | - CidrIp: 0.0.0.0/0 58 | Description: Allow all outbound traffic by default 59 | 
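# IpProtocol "-1" on the next line means all protocols and ports.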
IpProtocol: "-1" 60 | SecurityGroupIngress: 61 | - CidrIp: 10.0.0.0/8 62 | Description: Allow access to Perforce Helix Core 63 | FromPort: 1666 64 | IpProtocol: tcp 65 | ToPort: 1666 66 | VpcId: 67 | Fn::ImportValue: gpic-pipeline-foundation:ExportsOutputRefVPCB9E5F0B4BD23A326 68 | Metadata: 69 | aws:cdk:path: gpic-pipeline-perforce-helix-core/PerforceHelixCoreSecurityGroup/Resource 70 | pvsaws01InstanceProfile00F25A80: 71 | Type: AWS::IAM::InstanceProfile 72 | Properties: 73 | Roles: 74 | - Ref: PerforceHelixCoreInstanceRoleB352A600 75 | Metadata: 76 | aws:cdk:path: gpic-pipeline-perforce-helix-core/pvs-aws-01/InstanceProfile 77 | pvsaws01C56355C4: 78 | Type: AWS::EC2::Instance 79 | Properties: 80 | AvailabilityZone: 81 | Fn::Select: 82 | - 0 83 | - Fn::GetAZs: "" 84 | BlockDeviceMappings: 85 | - DeviceName: /dev/sdb 86 | Ebs: 87 | DeleteOnTermination: true 88 | VolumeSize: 1024 89 | VolumeType: st1 90 | - DeviceName: /dev/sdc 91 | Ebs: 92 | DeleteOnTermination: true 93 | VolumeSize: 128 94 | VolumeType: gp2 95 | - DeviceName: /dev/sdd 96 | Ebs: 97 | DeleteOnTermination: true 98 | VolumeSize: 64 99 | VolumeType: gp2 100 | IamInstanceProfile: 101 | Ref: pvsaws01InstanceProfile00F25A80 102 | ImageId: 103 | Ref: SsmParameterValueawsserviceamiamazonlinuxlatestamzn2amihvmx8664gp2C96584B6F00A464EAD1953AFF4B05118Parameter 104 | InstanceType: c5.4xlarge 105 | SecurityGroupIds: 106 | - Fn::GetAtt: 107 | - PerforceHelixCoreSecurityGroup922154AA 108 | - GroupId 109 | SubnetId: 110 | Fn::ImportValue: gpic-pipeline-foundation:ExportsOutputRefVPCPrivateSubnet1Subnet8BCA10E01F79A1B7 111 | Tags: 112 | - Key: Name 113 | Value: gpic-pipeline-perforce-helix-core/pvs-aws-01 114 | UserData: 115 | Fn::Base64: 116 | Fn::Join: 117 | - "" 118 | - - |- 119 | #!/bin/bash 120 | ## Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
121 | ## SPDX-License-Identifier: MIT-0 122 | 123 | # Create filesystem on each of the block devices and mount them 124 | 125 | mkfs -t xfs /dev/sdb && mkdir /hxdepots && mount /dev/sdb /hxdepots 126 | mkfs -t xfs /dev/sdc && mkdir /hxlogs && mount /dev/sdc /hxlogs 127 | mkfs -t xfs /dev/sdd && mkdir /hxmetadata && mount /dev/sdd /hxmetadata 128 | 129 | # Modify /etc/fstab to mount the devices at boot 130 | 131 | blkid /dev/sdb | awk -v OFS=" " '{print $2,"/hxdepots","xfs","defaults,nofail","0","2"}' >> /etc/fstab 132 | blkid /dev/sdc | awk -v OFS=" " '{print $2,"/hxlogs","xfs","defaults,nofail","0","2"}' >> /etc/fstab 133 | 134 | blkid /dev/sdd | awk -v OFS=" " '{print $2,"/hxmetadata","xfs","defaults,nofail","0","2"}' >> /etc/fstab 135 | 136 | # Add Perforce YUM repository and install Perforce 137 | cat <<'EOF' >> /etc/yum.repos.d/perforce.repo 138 | [perforce] 139 | name=Perforce 140 | baseurl=http://package.perforce.com/yum/rhel/7/x86_64/ 141 | enabled=1 142 | gpgcheck=1 143 | EOF 144 | 145 | chown root:root /etc/yum.repos.d/perforce.repo 146 | chmod 0644 /etc/yum.repos.d/perforce.repo 147 | 148 | rpm --import https://package.perforce.com/perforce.pubkey 149 | 150 | yum -y update 151 | yum -y install helix-p4d 152 | 153 | # Remove AWS cli version 1 and install version 2 154 | yum -y remove awscli 155 | 156 | curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "/tmp/awscliv2.zip" 157 | unzip /tmp/awscliv2.zip -d /tmp 158 | /tmp/aws/install 159 | 160 | 161 | # Install mailx - Needed by /p4/common/bin/recreate_offline_db.sh 162 | yum -y install mailx 163 | 164 | # Create P4admin user 165 | adduser -g perforce -G adm,wheel p4admin 166 | 167 | # Download and untar SDP 168 | wget -O /tmp/sdp.tgz https://swarm.workshop.perforce.com/downloads/guest/perforce_software/sdp/downloads/sdp.Unix.tgz?v=%2314 169 | 170 | tar xvfz /tmp/sdp.tgz --directory /hxdepots 171 | 172 | # Modify mkdirs.cfg 173 | cp /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg.bak 174 | 175 | INSTANCE_PRIVATE_DNS_NAME=$(hostname) 176 | 177 | sed -i -e 's/DB1=.*/DB1=hxmetadata/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg 178 | sed -i -e 's/DB2=.*/DB2=hxmetadata/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg 179 | sed -i -e 's/DD=.*/DD=hxdepots/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg 180 | sed -i -e 's/LG=.*/LG=hxlogs/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg 181 | sed -i -e 's/OSUSER=.*/OSUSER=p4admin/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg 182 | sed -i -e 's/OSGROUP=.*/OSGROUP=perforce/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg 183 | sed -i -e 's/CASE_SENSITIVE=.*/CASE_SENSITIVE=0/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg 184 | sed -i -e 's/MAILHOST=.*/MAILHOST=localhost/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg 185 | sed -i -e 's/SSL_PREFIX=.*/SSL_PREFIX=/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg 186 | sed -i -e "s/P4DNSNAME=.*/P4DNSNAME=$INSTANCE_PRIVATE_DNS_NAME/g" /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg 187 | sed -i -e 's/COMPLAINFROM_DOMAIN=.*/COMPLAINFROM_DOMAIN=amazonaws.com/g' /hxdepots/sdp/Server/Unix/setup/mkdirs.cfg 188 | 189 | # Create symlinks 190 | ln -s /opt/perforce/bin/p4 /hxdepots/sdp/Server/Unix/p4/common/bin/p4 191 | ln -s /opt/perforce/sbin/p4d /hxdepots/sdp/Server/Unix/p4/common/bin/p4d 192 | 193 | # Run SDP 194 | /hxdepots/sdp/Server/Unix/setup/mkdirs.sh 1 195 | 196 | 197 | # Add systemd configuration file for Perforce Helix Core 198 | cat <<'EOF' >> /etc/systemd/system/p4d_1.service 199 | [Unit] 200 |
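# p4d_1_init (referenced in ExecStart/ExecStop below) is the SDP's init wrapper, which forks the p4d daemon itself; hence Type=forking.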
Description=Helix Server Instance 1 201 | Documentation=http://www.perforce.com/perforce/doc.current/manuals/p4sag/index.html 202 | Requires=network.target network-online.target 203 | After=network.target network-online.target 204 | 205 | [Service] 206 | Type=forking 207 | TimeoutStartSec=60s 208 | TimeoutStopSec=60s 209 | ExecStart=/p4/1/bin/p4d_1_init start 210 | ExecStop=/p4/1/bin/p4d_1_init stop 211 | User=p4admin 212 | 213 | [Install] 214 | WantedBy=multi-user.target 215 | EOF 216 | 217 | chown p4admin:perforce /etc/systemd/system/p4d_1.service 218 | chmod 0400 /etc/systemd/system/p4d_1.service 219 | 220 | # Enable and start the Perforce Helix Core daemon 221 | systemctl enable p4d_1 222 | systemctl start p4d_1 223 | 224 | # Persist ServerID 225 | echo master.1 > /p4/1/root/server.id 226 | 227 | /hxdepots/sdp/Server/setup/configure_new_server.sh 1 228 | 229 | 230 | # Load Perforce environment variables, set the password persisted in AWS Secrets Manager, and put security measures in place 231 | source /p4/common/bin/p4_vars 1 232 | 233 | p4 configure set dm.password.minlength=32 234 | p4 configure set dm.user.noautocreate=2 235 | p4 configure set run.users.authorize=1 236 | p4 configure set dm.keys.hide=2 237 | p4 configure set security=3 238 | 239 | 240 | perforce_default_password=$(/usr/local/bin/aws secretsmanager get-secret-value --secret-id 241 | - Ref: PerforceHelixCorePasswordB6B489C5 242 | - |+2 243 | --query SecretString --output text) 244 | 245 | # 246 | # p4 passwd -P is not supported w/ security level set to 3 (See above) 247 | echo -en "$perforce_default_password\n$perforce_default_password\n" | p4 passwd 248 | 249 | 250 | 251 | 252 | 253 | 254 | 255 | 256 | DependsOn: 257 | - PerforceHelixCoreInstanceRoleDefaultPolicy3F692F92 258 | - PerforceHelixCoreInstanceRoleB352A600 259 | Metadata: 260 | aws:cdk:path: gpic-pipeline-perforce-helix-core/pvs-aws-01/Resource 261 | CDKMetadata: 262 | Type: AWS::CDK::Metadata 263 | Properties: 264 | Analytics: v2:deflate64:H4sIAAAAAAAA/01OQQ7CMAx7y+6lsMGFGxIHxIlpvKAKGVR0bZWkQlPVv7N2gHayYztOSlltdnJTHNTbreD2WAewhDJcWUEnjq2pFakBGUk06KwnwKRePL88i6M1jskDJ+3nR5G6glaDDI3tcz5jbXsNYy79s/NUoAxgTbbV/bTrEAjZDcqoO9L0SJ5TdGZRIFRZ9qR5PJH1r6+7EH61yxMxRlGP/LBmvZV7WRZPp/WKvGE9oGxm/OTmd+ZABAAA 265 | Metadata: 266 | aws:cdk:path: gpic-pipeline-perforce-helix-core/CDKMetadata/Default 267 | Parameters: 268 | SsmParameterValueawsserviceamiamazonlinuxlatestamzn2amihvmx8664gp2C96584B6F00A464EAD1953AFF4B05118Parameter: 269 | Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id> 270 | Default: /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 271 | Outputs: 272 | PerforceHelixCorePrivateDNS: 273 | Description: The private DNS name of the Perforce Helix Core Server. 274 | Value: 275 | Fn::GetAtt: 276 | - pvsaws01C56355C4 277 | - PrivateDnsName 278 | PerforceHelixCorePrivateIP: 279 | Description: The private IP address of the Perforce Helix Core Server. 280 | Value: 281 | Fn::GetAtt: 282 | - pvsaws01C56355C4 283 | - PrivateIp 284 | PerforceHelixCoreSecretName: 285 | Description: The name of the secret in AWS Secrets Manager. Please open the AWS Secrets Manager to retrieve the password for the user 'perforce'.
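# The Value below derives the secret's friendly name from its ARN: field 6 of the ":"-split
# ARN has the form "<secret-name>-<random-suffix>", and the two "-"-splits rejoin the first
# two tokens, stripping the random suffix that Secrets Manager appends.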
286 | Value: 287 | Fn::Join: 288 | - "-" 289 | - - Fn::Select: 290 | - 0 291 | - Fn::Split: 292 | - "-" 293 | - Fn::Select: 294 | - 6 295 | - Fn::Split: 296 | - ":" 297 | - Ref: PerforceHelixCorePasswordB6B489C5 298 | - Fn::Select: 299 | - 1 300 | - Fn::Split: 301 | - "-" 302 | - Fn::Select: 303 | - 6 304 | - Fn::Split: 305 | - ":" 306 | - Ref: PerforceHelixCorePasswordB6B489C5 307 | 308 | -------------------------------------------------------------------------------- /cloudformation/gpic-pipeline-swarm-cluster.yaml: -------------------------------------------------------------------------------- 1 | Description: Guidance for a Game Production Environment on AWS - Unreal Engine Swarm Stack (SO9329) 2 | Resources: 3 | SwarmInstanceRole6BCAB0C0: 4 | Type: AWS::IAM::Role 5 | Properties: 6 | AssumeRolePolicyDocument: 7 | Statement: 8 | - Action: sts:AssumeRole 9 | Effect: Allow 10 | Principal: 11 | Service: ec2.amazonaws.com 12 | Version: "2012-10-17" 13 | ManagedPolicyArns: 14 | - Fn::Join: 15 | - "" 16 | - - "arn:" 17 | - Ref: AWS::Partition 18 | - :iam::aws:policy/service-role/AmazonEC2RoleforSSM 19 | - Fn::Join: 20 | - "" 21 | - - "arn:" 22 | - Ref: AWS::Partition 23 | - :iam::aws:policy/EC2InstanceProfileForImageBuilder 24 | Policies: 25 | - PolicyDocument: 26 | Statement: 27 | - Action: s3:GetObject 28 | Effect: Allow 29 | Resource: 30 | Fn::Join: 31 | - "" 32 | - - Fn::ImportValue: gpic-pipeline-foundation:ExportsOutputFnGetAttgpicpipelinebucket6D2579DDArnC5C75313 33 | - /* 34 | Version: "2012-10-17" 35 | PolicyName: GPICDemo_UESwarmAccess 36 | Metadata: 37 | aws:cdk:path: gpic-pipeline-swarm-cluster/SwarmInstanceRole/Resource 38 | SwarmInstanceRoleDefaultPolicy939A4DEF: 39 | Type: AWS::IAM::Policy 40 | Properties: 41 | PolicyDocument: 42 | Statement: 43 | - Action: 44 | - secretsmanager:GetSecretValue 45 | - secretsmanager:DescribeSecret 46 | Effect: Allow 47 | Resource: 48 | Ref: UnrealEngineSwarmInstancespassword695BA7B3 49 | Version: "2012-10-17" 50 | PolicyName: SwarmInstanceRoleDefaultPolicy939A4DEF 51 | Roles: 52 | - Ref: SwarmInstanceRole6BCAB0C0 53 | Metadata: 54 | aws:cdk:path: gpic-pipeline-swarm-cluster/SwarmInstanceRole/DefaultPolicy/Resource 55 | GpicSwarmInstanceProfile: 56 | Type: AWS::IAM::InstanceProfile 57 | Properties: 58 | Roles: 59 | - Ref: SwarmInstanceRole6BCAB0C0 60 | InstanceProfileName: ue5-swarm-instance-profile 61 | Path: /executionServiceEC2Role/ 62 | Metadata: 63 | aws:cdk:path: gpic-pipeline-swarm-cluster/GpicSwarmInstanceProfile 64 | UE5SwarmSecurityGroup58BC7F66: 65 | Type: AWS::EC2::SecurityGroup 66 | Properties: 67 | GroupDescription: Security Group for UE5 Swarm Agent and Coordinator 68 | GroupName: Allow UE5 Swarm communication 69 | SecurityGroupEgress: 70 | - CidrIp: 0.0.0.0/0 71 | Description: Allow all outbound traffic by default 72 | IpProtocol: "-1" 73 | SecurityGroupIngress: 74 | - CidrIp: 10.0.0.0/8 75 | Description: Allow Trusted IP Swarm TCP 76 | FromPort: 8008 77 | IpProtocol: tcp 78 | ToPort: 8009 79 | - CidrIp: 10.0.0.0/8 80 | Description: Allow Trusted IP Swarm ICMP Ping 81 | FromPort: 8 82 | IpProtocol: icmp 83 | ToPort: -1 84 | - CidrIp: 10.0.0.0/8 85 | Description: Allow Trusted IP RDP TCP 86 | FromPort: 3389 87 | IpProtocol: tcp 88 | ToPort: 3389 89 | VpcId: 90 | Fn::ImportValue: gpic-pipeline-foundation:ExportsOutputRefVPCB9E5F0B4BD23A326 91 | Metadata: 92 | aws:cdk:path: gpic-pipeline-swarm-cluster/UE5-Swarm-SecurityGroup/Resource 93 | UE5SwarmSecurityGroupfromgpicpipelineswarmclusterUE5SwarmSecurityGroup78930DA180088009188E7E3A: 
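# CDK emits the self-referencing rules of this group as standalone AWS::EC2::SecurityGroupIngress resources (below) rather than inline rules, so that the security group does not reference itself inside its own definition.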
94 | Type: AWS::EC2::SecurityGroupIngress 95 | Properties: 96 | IpProtocol: tcp 97 | Description: Allow SG Swarm TCP 98 | FromPort: 8008 99 | GroupId: 100 | Fn::GetAtt: 101 | - UE5SwarmSecurityGroup58BC7F66 102 | - GroupId 103 | SourceSecurityGroupId: 104 | Fn::GetAtt: 105 | - UE5SwarmSecurityGroup58BC7F66 106 | - GroupId 107 | ToPort: 8009 108 | Metadata: 109 | aws:cdk:path: gpic-pipeline-swarm-cluster/UE5-Swarm-SecurityGroup/from gpicpipelineswarmclusterUE5SwarmSecurityGroup78930DA1:8008-8009 110 | UE5SwarmSecurityGroupfromgpicpipelineswarmclusterUE5SwarmSecurityGroup78930DA1ICMPType88DDD4ADF: 111 | Type: AWS::EC2::SecurityGroupIngress 112 | Properties: 113 | IpProtocol: icmp 114 | Description: Allow SG Swarm ICMP Ping 115 | FromPort: 8 116 | GroupId: 117 | Fn::GetAtt: 118 | - UE5SwarmSecurityGroup58BC7F66 119 | - GroupId 120 | SourceSecurityGroupId: 121 | Fn::GetAtt: 122 | - UE5SwarmSecurityGroup58BC7F66 123 | - GroupId 124 | ToPort: -1 125 | Metadata: 126 | aws:cdk:path: gpic-pipeline-swarm-cluster/UE5-Swarm-SecurityGroup/from gpicpipelineswarmclusterUE5SwarmSecurityGroup78930DA1:ICMP Type 8 127 | SwarmComponent: 128 | Type: AWS::ImageBuilder::Component 129 | Properties: 130 | Name: Install-Swarm-Dependencies 131 | Platform: Windows 132 | Version: 1.0.0 133 | Data: 134 | Fn::Join: 135 | - "" 136 | - - |- 137 | name: InstallUE5Swarm 138 | description: This component installs UE5 Swarm from S3 archive and also installs all prerequisites for a build. 139 | schemaVersion: 1.0 140 | 141 | phases: 142 | - name: build 143 | steps: 144 | - name: CreateTempFolder 145 | action: CreateFolder 146 | inputs: 147 | - path: C:\ue5-swarm-temp 148 | - name: DownloadDependencies 149 | action: S3Download 150 | maxAttempts: 3 151 | inputs: 152 | - source: s3:// 153 | - Fn::ImportValue: gpic-pipeline-foundation:ExportsOutputRefgpicpipelinebucket6D2579DDA0504931 154 | - | 155 | /ue5-swarm-archive.zip 156 | destination: C:\ue5-swarm-temp\ue5-swarm-archive.zip 157 | - name: CreateSwarmFolder 158 | action: CreateFolder 159 | inputs: 160 | - path: C:\ue5-swarm 161 | - name: UncompressSwarmFiles 162 | action: ExecutePowerShell 163 | inputs: 164 | commands: 165 | - Expand-Archive -LiteralPath C:\ue5-swarm-temp\ue5-swarm-archive.zip -DestinationPath C:\ue5-swarm 166 | - name: DeleteTempFolder 167 | action: DeleteFolder 168 | inputs: 169 | - path: C:\ue5-swarm-temp 170 | force: true 171 | - name: InstallDotNet 172 | action: ExecutePowerShell 173 | inputs: 174 | commands: 175 | - Install-WindowsFeature Net-Framework-Core 176 | - name: InstallPreReqs 177 | action: ExecutePowerShell 178 | inputs: 179 | commands: 180 | - Start-Process -Wait -FilePath "C:\ue5-swarm\UEPrereqSetup_x64.exe" -ArgumentList "/install /quiet" 181 | - name: OpenFirewall 182 | action: ExecutePowerShell 183 | inputs: 184 | commands: 185 | - New-NetFirewallRule -DisplayName 'Allow UE5 Swarm TCP' -Direction Inbound -Action Allow -Protocol TCP -LocalPort 8008-8009 186 | - New-NetFirewallRule -DisplayName 'Allow UE5 Swarm UDP' -Direction Inbound -Action Allow -Protocol UDP -LocalPort 8008-8009 187 | - New-NetFirewallRule -DisplayName 'Allow ICMP' -Direction Inbound -Action Allow -Protocol ICMPv4 188 | Metadata: 189 | aws:cdk:path: gpic-pipeline-swarm-cluster/SwarmComponent 190 | SwarmInfraConfig: 191 | Type: AWS::ImageBuilder::InfrastructureConfiguration 192 | Properties: 193 | InstanceProfileName: ue5-swarm-instance-profile 194 | Name: GPIC-UE5-Swarm-WindowsServer-2019-Infra-Config 195 | InstanceTypes: 196 | - m5.large 197 | SecurityGroupIds: 198 |
- Fn::GetAtt: 199 | - UE5SwarmSecurityGroup58BC7F66 200 | - GroupId 201 | SubnetId: 202 | Fn::ImportValue: gpic-pipeline-foundation:ExportsOutputRefVPCPrivateSubnet1Subnet8BCA10E01F79A1B7 203 | DependsOn: 204 | - GpicSwarmInstanceProfile 205 | Metadata: 206 | aws:cdk:path: gpic-pipeline-swarm-cluster/SwarmInfraConfig 207 | ImageRecipe: 208 | Type: AWS::ImageBuilder::ImageRecipe 209 | Properties: 210 | Components: 211 | - ComponentArn: 212 | Fn::GetAtt: 213 | - SwarmComponent 214 | - Arn 215 | Name: GPIC-UE5-Swarm-Image 216 | ParentImage: 217 | Ref: SsmParameterValueawsserviceamiwindowslatestWindowsServer2019EnglishFullBaseC96584B6F00A464EAD1953AFF4B05118Parameter 218 | Version: 1.0.0 219 | Metadata: 220 | aws:cdk:path: gpic-pipeline-swarm-cluster/ImageRecipe 221 | UnrealEngineSwarmImage: 222 | Type: AWS::ImageBuilder::Image 223 | Properties: 224 | InfrastructureConfigurationArn: 225 | Fn::GetAtt: 226 | - SwarmInfraConfig 227 | - Arn 228 | ImageRecipeArn: 229 | Fn::GetAtt: 230 | - ImageRecipe 231 | - Arn 232 | Metadata: 233 | aws:cdk:path: gpic-pipeline-swarm-cluster/UnrealEngineSwarmImage 234 | UnrealEngineSwarmInstancespassword695BA7B3: 235 | Type: AWS::SecretsManager::Secret 236 | Properties: 237 | GenerateSecretString: {} 238 | UpdateReplacePolicy: Delete 239 | DeletionPolicy: Delete 240 | Metadata: 241 | aws:cdk:path: gpic-pipeline-swarm-cluster/Unreal Engine Swarm Instances password/Resource 242 | ue5swarmcoordinatorInstanceProfile6BA76DE6: 243 | Type: AWS::IAM::InstanceProfile 244 | Properties: 245 | Roles: 246 | - Ref: SwarmInstanceRole6BCAB0C0 247 | Metadata: 248 | aws:cdk:path: gpic-pipeline-swarm-cluster/ue5-swarm-coordinator/InstanceProfile 249 | ue5swarmcoordinatorF31E1DFE: 250 | Type: AWS::EC2::Instance 251 | Properties: 252 | AvailabilityZone: 253 | Fn::Select: 254 | - 0 255 | - Fn::GetAZs: "" 256 | IamInstanceProfile: 257 | Ref: ue5swarmcoordinatorInstanceProfile6BA76DE6 258 | ImageId: 259 | Fn::GetAtt: 260 | - UnrealEngineSwarmImage 261 | - ImageId 262 | InstanceType: t3.large 263 | SecurityGroupIds: 264 | - Fn::GetAtt: 265 | - UE5SwarmSecurityGroup58BC7F66 266 | - GroupId 267 | SubnetId: 268 | Fn::ImportValue: gpic-pipeline-foundation:ExportsOutputRefVPCPrivateSubnet1Subnet8BCA10E01F79A1B7 269 | Tags: 270 | - Key: Name 271 | Value: gpic-pipeline-swarm-cluster/ue5-swarm-coordinator 272 | UserData: 273 | Fn::Base64: 274 | Fn::Join: 275 | - "" 276 | - - |- 277 | 278 | ## Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 279 | ## SPDX-License-Identifier: MIT-0 280 | 281 | # Get Administrator password from AWS Secrets Manager 282 | $admin_password_plaintext = Get-SECSecretValue 283 | - Ref: UnrealEngineSwarmInstancespassword695BA7B3 284 | - |-2 285 | | % { Echo $_.SecretString} 286 | $admin_password_secure_string = $admin_password_plaintext | ConvertTo-SecureString -AsPlainText -Force 287 | 288 | # Set Administrator password 289 | Get-LocalUser -Name "Administrator" | Set-LocalUser -Password $admin_password_secure_string 290 | 291 | # Define the Swarm Coordinator to start as a Scheduled task at startup 292 | $action = New-ScheduledTaskAction -Execute "C:\ue5-swarm\SwarmCoordinator.exe" 293 | $trigger = New-ScheduledTaskTrigger -AtStartup 294 | Register-ScheduledTask -Action $action -Trigger $trigger -User "Administrator" -Password $admin_password_plaintext -TaskName "SwarmCoordinator" -Description "UE5 Swarm Coordinator" -RunLevel Highest -AsJob 295 | 296 | # Restart the instance to trigger the Scheduled task.
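# A reboot both applies the new Administrator password and fires the -AtStartup trigger.
# (Start-ScheduledTask -TaskName "SwarmCoordinator" could start the coordinator without a
# reboot; an untested alternative, not the original flow.)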
297 | Restart-Computer 298 | 299 | DependsOn: 300 | - SwarmInstanceRoleDefaultPolicy939A4DEF 301 | - SwarmInstanceRole6BCAB0C0 302 | Metadata: 303 | aws:cdk:path: gpic-pipeline-swarm-cluster/ue5-swarm-coordinator/Resource 304 | ue5swarmagentInstanceProfileCE81A3E4: 305 | Type: AWS::IAM::InstanceProfile 306 | Properties: 307 | Roles: 308 | - Ref: SwarmInstanceRole6BCAB0C0 309 | Metadata: 310 | aws:cdk:path: gpic-pipeline-swarm-cluster/ue5-swarm-agent/InstanceProfile 311 | ue5swarmagentLaunchConfig92C91706: 312 | Type: AWS::AutoScaling::LaunchConfiguration 313 | Properties: 314 | ImageId: 315 | Fn::GetAtt: 316 | - UnrealEngineSwarmImage 317 | - ImageId 318 | InstanceType: c5.4xlarge 319 | BlockDeviceMappings: 320 | - DeviceName: /dev/sda1 321 | Ebs: 322 | DeleteOnTermination: true 323 | VolumeSize: 100 324 | IamInstanceProfile: 325 | Ref: ue5swarmagentInstanceProfileCE81A3E4 326 | SecurityGroups: 327 | - Fn::GetAtt: 328 | - UE5SwarmSecurityGroup58BC7F66 329 | - GroupId 330 | UserData: 331 | Fn::Base64: 332 | Fn::Join: 333 | - "" 334 | - - |- 335 | 336 | ## Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 337 | ## SPDX-License-Identifier: MIT-0 338 | 339 | # Get Administrator password from AWS Secrets Manager 340 | 341 | $admin_password_plaintext = Get-SECSecretValue 342 | - Ref: UnrealEngineSwarmInstancespassword695BA7B3 343 | - |-2 344 | | % { Echo $_.SecretString} 345 | $admin_password_secure_string = $admin_password_plaintext | ConvertTo-SecureString -AsPlainText -Force 346 | 347 | # Set Administrator password 348 | Get-LocalUser -Name "Administrator" | Set-LocalUser -Password $admin_password_secure_string 349 | 350 | # Define Coordinator IP; CloudFormation fills this in when the stack is created 351 | $coordinator_ip = " 352 | - Fn::GetAtt: 353 | - ue5swarmcoordinatorF31E1DFE 354 | - PrivateIp 355 | - |- 356 | " 357 | 358 | # Template of the Swarm Agent Developer Options file 359 | $developeroptions = ' 360 | 361 | true 362 | LOCALCORES 363 | BelowNormal 364 | REMOTECORES 365 | Idle 366 | false 367 | 15 368 | ' 369 | 370 | # Calculate number of cores 371 | $cores = (Get-WmiObject -Class Win32_Processor | Select-Object -Property NumberOfLogicalProcessors).NumberOfLogicalProcessors 372 | 373 | # Set the core values for the Swarm Agent 374 | $developeroptions = $developeroptions.replace("REMOTECORES", $cores) 375 | $developeroptions = $developeroptions.replace("LOCALCORES", $cores-1) 376 | 377 | # Save the configuration file 378 | $developeroptions | Out-File -FilePath "C:\ue5-swarm\SwarmAgent.DeveloperOptions.xml" 379 | 380 | # Template of the Swarm Options file 381 | $agentoptions = ' 382 | 383 | 384 | 385 | 386 | 387 | 388 | 389 | 390 | 391 | 392 | 393 | 394 | 395 | 396 | 397 | 398 | 399 | 400 | 401 | 402 | 403 | 404 | 405 | 406 | 407 | 408 | 409 | 410 | 411 | 412 | * 413 | ue5-swarm-aws 414 | ue5-swarm-aws 415 | COORDINATORHOST 416 | C:\ue5-swarm/SwarmCache 417 | 5 418 | false 419 | true 420 | 15 421 | 422 | 0 423 | 0 424 | 425 | 426 | 768 427 | 768 428 | 429 | 2 430 | 4 431 | ' 432 | 433 | # Replace the Coordinator IP in the template 434 | $agentoptions = $agentoptions.replace("COORDINATORHOST", $coordinator_ip) 435 | 436 | # Save the configuration file 437 | $agentoptions | Out-File -FilePath "C:\ue5-swarm\SwarmAgent.Options.xml" 438 | 439 | # Define the Swarm agent as a Scheduled Task that starts at instance boot 440 | $action = New-ScheduledTaskAction -Execute "C:\ue5-swarm\SwarmAgent.exe" 441 | $trigger = New-ScheduledTaskTrigger -AtStartup 442 |
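# Registering the task with explicit -User/-Password credentials lets it run at boot
# without an interactive logon; -RunLevel Highest runs it elevated.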
Register-ScheduledTask -Action $action -Trigger $trigger -User "Administrator" -Password $admin_password_plaintext -TaskName "SwarmAgent" -Description "UE5 Swarm Agent" -RunLevel Highest 443 | 444 | # Restart the instance to trigger the Swarm Agent Scheduled Task 445 | Restart-Computer 446 | 447 | DependsOn: 448 | - SwarmInstanceRoleDefaultPolicy939A4DEF 449 | - SwarmInstanceRole6BCAB0C0 450 | Metadata: 451 | aws:cdk:path: gpic-pipeline-swarm-cluster/ue5-swarm-agent/LaunchConfig 452 | ue5swarmagentASGD632CDF3: 453 | Type: AWS::AutoScaling::AutoScalingGroup 454 | Properties: 455 | MaxSize: "1" 456 | MinSize: "1" 457 | DesiredCapacity: "1" 458 | LaunchConfigurationName: 459 | Ref: ue5swarmagentLaunchConfig92C91706 460 | Tags: 461 | - Key: Name 462 | PropagateAtLaunch: true 463 | Value: gpic-pipeline-swarm-cluster/ue5-swarm-agent 464 | VPCZoneIdentifier: 465 | - Fn::ImportValue: gpic-pipeline-foundation:ExportsOutputRefVPCPrivateSubnet1Subnet8BCA10E01F79A1B7 466 | - Fn::ImportValue: gpic-pipeline-foundation:ExportsOutputRefVPCPrivateSubnet2SubnetCFCDAA7AB22CF85D 467 | UpdatePolicy: 468 | AutoScalingScheduledAction: 469 | IgnoreUnmodifiedGroupSizeProperties: true 470 | Metadata: 471 | aws:cdk:path: gpic-pipeline-swarm-cluster/ue5-swarm-agent/ASG 472 | CDKMetadata: 473 | Type: AWS::CDK::Metadata 474 | Properties: 475 | Analytics: v2:deflate64:H4sIAAAAAAAA/31Q207DMAz9lr1n2QVeeAPtAU2aRNV9QfDczqxJKscRqqL8O2nLKgQST/Y5Pj6+7PR++6i3q2fzGdZwuW0SeEadzmLgpg6Nqwwbi4Ksagw+MuDIvkXpo6iDd0E4gozcvZ7V6JXIWJ1q3036KVa+Ixgm0yU7FgPjACv2DXWlF2FfpiNEJhle2cd+lP1PHF3LGIK6m/00zoqsafE9UndB1qmUDt723qGTWdewmY+IjOWghtrIRsi7qTw21wjU4wKzCgiMEqxxBfK0b8Hfe5WsvCCKD2A6cq1OLwWcZ7AccDLRwfXPvN/SnLOqBrl6t3nQT3q3+ghEa45OyKKu5/gFFxhE/cUBAAA= 476 | Metadata: 477 | aws:cdk:path: gpic-pipeline-swarm-cluster/CDKMetadata/Default 478 | Parameters: 479 | SsmParameterValueawsserviceamiwindowslatestWindowsServer2019EnglishFullBaseC96584B6F00A464EAD1953AFF4B05118Parameter: 480 | Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id> 481 | Default: /aws/service/ami-windows-latest/Windows_Server-2019-English-Full-Base 482 | Outputs: 483 | UnrealEngine5SwarmAMI: 484 | Description: The AMI that is used to deploy the Unreal Engine 5 Swarm Coordinator and the agents. 485 | Value: 486 | Fn::GetAtt: 487 | - UnrealEngineSwarmImage 488 | - ImageId 489 | UnrealEngine5SwarmCoordinatorPrivateIP: 490 | Description: The private IP of the Unreal Engine 5 Swarm coordinator.
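# This is the same address that is substituted for COORDINATORHOST in the agents' SwarmAgent.Options.xml template above.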
491 | Value: 492 | Fn::GetAtt: 493 | - ue5swarmcoordinatorF31E1DFE 494 | - PrivateIp 495 | 496 | -------------------------------------------------------------------------------- /cloudformation/gpic-pipeline-virtual-workstation.yaml: -------------------------------------------------------------------------------- 1 | Description: Guidance for a Game Production Environment on AWS - Virtual Workstation Stack (SO9329) 2 | Resources: 3 | VirutalWorkstationInstanceRole2E55B929: 4 | Type: AWS::IAM::Role 5 | Properties: 6 | AssumeRolePolicyDocument: 7 | Statement: 8 | - Action: sts:AssumeRole 9 | Effect: Allow 10 | Principal: 11 | Service: ec2.amazonaws.com 12 | Version: "2012-10-17" 13 | ManagedPolicyArns: 14 | - Fn::Join: 15 | - "" 16 | - - "arn:" 17 | - Ref: AWS::Partition 18 | - :iam::aws:policy/service-role/AmazonEC2RoleforSSM 19 | Policies: 20 | - PolicyDocument: 21 | Statement: 22 | - Action: s3:GetObject 23 | Effect: Allow 24 | Resource: arn:aws:s3:::ec2-windows-nvidia-drivers/* 25 | - Action: 26 | - s3:ListAllMyBuckets 27 | - s3:ListBucket 28 | Effect: Allow 29 | Resource: arn:aws:s3:::* 30 | - Action: s3:*Object* 31 | Effect: Allow 32 | Resource: 33 | Fn::Join: 34 | - "" 35 | - - Fn::ImportValue: gpic-pipeline-foundation:ExportsOutputFnGetAttgpicpipelinebucket6D2579DDArnC5C75313 36 | - /* 37 | Version: "2012-10-17" 38 | PolicyName: GPICDemo_VirtualWorkStationAccess 39 | Metadata: 40 | aws:cdk:path: gpic-pipeline-virtual-workstation/VirutalWorkstationInstanceRole/Resource 41 | VirutalWorkstationInstanceRoleDefaultPolicy687E981D: 42 | Type: AWS::IAM::Policy 43 | Properties: 44 | PolicyDocument: 45 | Statement: 46 | - Action: 47 | - secretsmanager:GetSecretValue 48 | - secretsmanager:DescribeSecret 49 | Effect: Allow 50 | Resource: 51 | Ref: VirtualWorkstationPassword67EF7BF5 52 | Version: "2012-10-17" 53 | PolicyName: VirutalWorkstationInstanceRoleDefaultPolicy687E981D 54 | Roles: 55 | - Ref: VirutalWorkstationInstanceRole2E55B929 56 | Metadata: 57 | aws:cdk:path: gpic-pipeline-virtual-workstation/VirutalWorkstationInstanceRole/DefaultPolicy/Resource 58 | VirtualWorkstationPassword67EF7BF5: 59 | Type: AWS::SecretsManager::Secret 60 | Properties: 61 | GenerateSecretString: {} 62 | UpdateReplacePolicy: Delete 63 | DeletionPolicy: Delete 64 | Metadata: 65 | aws:cdk:path: gpic-pipeline-virtual-workstation/Virtual Workstation Password/Resource 66 | VirtualWorkstationSecurityGroup659745E6: 67 | Type: AWS::EC2::SecurityGroup 68 | Properties: 69 | GroupDescription: Allows remote access to the Virtual Workstation via RDP & Parsec, HP Anyware, and NICE DCV. 
In addition allows UE5 Swarm communication 70 | GroupName: Virtual-Workstation-SecurityGroup 71 | SecurityGroupEgress: 72 | - CidrIp: 0.0.0.0/0 73 | Description: Allow all outbound traffic by default 74 | IpProtocol: "-1" 75 | SecurityGroupIngress: 76 | - CidrIp: 10.0.0.0/8 77 | Description: Allow Trusted IP Swarm TCP 78 | FromPort: 8008 79 | IpProtocol: tcp 80 | ToPort: 8009 81 | - CidrIp: 10.0.0.0/8 82 | Description: Allow Trusted IP Swarm ICMP Ping 83 | FromPort: 8 84 | IpProtocol: icmp 85 | ToPort: -1 86 | - CidrIp: 0.0.0.0/0 87 | Description: Allow Trusted Remote CIDR to access Virtual Workstation via RDP 88 | FromPort: 3389 89 | IpProtocol: tcp 90 | ToPort: 3389 91 | - CidrIp: 0.0.0.0/0 92 | Description: Allow Trusted Remote CIDR to access Virtual Workstation via Parsec 93 | FromPort: 1666 94 | IpProtocol: tcp 95 | ToPort: 1666 96 | - CidrIp: 0.0.0.0/0 97 | Description: Allow Trusted Remote CIDR to access Virtual Workstation via PCoIP (Session Establishment) 98 | FromPort: 4172 99 | IpProtocol: tcp 100 | ToPort: 4172 101 | - CidrIp: 0.0.0.0/0 102 | Description: Allow Trusted Remote CIDR to access Virtual Workstation via PCoIP (Client Authentication) 103 | FromPort: 443 104 | IpProtocol: tcp 105 | ToPort: 443 106 | - CidrIp: 0.0.0.0/0 107 | Description: Allow Trusted Remote CIDR to access Virtual Workstation via PCoIP (PCoIP Session Data) 108 | FromPort: 4172 109 | IpProtocol: udp 110 | ToPort: 4172 111 | - CidrIp: 0.0.0.0/0 112 | Description: Allow Trusted Remote CIDR to access Virtual Workstation via NICE DCV 113 | FromPort: 8443 114 | IpProtocol: tcp 115 | ToPort: 8443 116 | VpcId: 117 | Fn::ImportValue: gpic-pipeline-foundation:ExportsOutputRefVPCB9E5F0B4BD23A326 118 | Metadata: 119 | aws:cdk:path: gpic-pipeline-virtual-workstation/Virtual-Workstation-SecurityGroup/Resource 120 | VirtualWorkstationInstanceProfile509990D1: 121 | Type: AWS::IAM::InstanceProfile 122 | Properties: 123 | Roles: 124 | - Ref: VirutalWorkstationInstanceRole2E55B929 125 | Metadata: 126 | aws:cdk:path: gpic-pipeline-virtual-workstation/Virtual Workstation/InstanceProfile 127 | VirtualWorkstation431834E1: 128 | Type: AWS::EC2::Instance 129 | Properties: 130 | AvailabilityZone: 131 | Fn::Select: 132 | - 0 133 | - Fn::GetAZs: "" 134 | BlockDeviceMappings: 135 | - DeviceName: /dev/sda1 136 | Ebs: 137 | DeleteOnTermination: true 138 | VolumeSize: 200 139 | IamInstanceProfile: 140 | Ref: VirtualWorkstationInstanceProfile509990D1 141 | ImageId: 142 | Ref: SsmParameterValueawsserviceamiwindowslatestWindowsServer2019EnglishFullBaseC96584B6F00A464EAD1953AFF4B05118Parameter 143 | InstanceType: g4dn.4xlarge 144 | SecurityGroupIds: 145 | - Fn::GetAtt: 146 | - VirtualWorkstationSecurityGroup659745E6 147 | - GroupId 148 | SubnetId: 149 | Fn::ImportValue: gpic-pipeline-foundation:ExportsOutputRefVPCPublicSubnet1SubnetB4246D30D84F935B 150 | Tags: 151 | - Key: Name 152 | Value: gpic-pipeline-virtual-workstation/Virtual Workstation 153 | UserData: 154 | Fn::Base64: 155 | Fn::Join: 156 | - "" 157 | - - |- 158 | 159 | ## Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
160 | ## SPDX-License-Identifier: MIT-0 161 | 162 | # Set Windows Administrator password 163 | 164 | $admin_password = Get-SECSecretValue 165 | - Ref: VirtualWorkstationPassword67EF7BF5 166 | - |-2 167 | | % { Echo $_.SecretString} | ConvertTo-SecureString -AsPlainText -Force 168 | 169 | Get-LocalUser -Name "Administrator" | Set-LocalUser -Password $admin_password 170 | 171 | # Setup Windows Firewall rules 172 | 173 | #Parsec 174 | New-NetFirewallRule -DisplayName 'Allow Parsec' -Direction Inbound -Action Allow -Protocol TCP -LocalPort 1666 175 | 176 | #PCoIP 177 | New-NetFirewallRule -DisplayName 'Allow PCoIP' -Direction Inbound -Action Allow -Protocol TCP -LocalPort 4172 178 | 179 | New-NetFirewallRule -DisplayName 'Allow PCoIP' -Direction Inbound -Action Allow -Protocol UDP -LocalPort 4172 180 | 181 | New-NetFirewallRule -DisplayName 'Allow PCoIP' -Direction Inbound -Action Allow -Protocol TCP -LocalPort 443 182 | 183 | # NICE DCV 184 | New-NetFirewallRule -DisplayName 'Allow NICE DCV' -Direction Inbound -Action Allow -Protocol TCP -LocalPort 8443 185 | 186 | # Allow Unreal Engine Swarm communication 187 | New-NetFirewallRule -DisplayName 'Allow UE5 Swarm TCP' -Direction Inbound -Action Allow -Protocol TCP -LocalPort 8008-8009 188 | New-NetFirewallRule -DisplayName 'Allow UE5 Swarm UDP' -Direction Inbound -Action Allow -Protocol UDP -LocalPort 8008-8009 189 | New-NetFirewallRule -DisplayName 'Allow ICMP' -Direction Inbound -Action Allow -Protocol ICMPv4 190 | 191 | # Install NVIDIA Grid driver 192 | 193 | $Bucket = "ec2-windows-nvidia-drivers" 194 | $KeyPrefix = "latest" 195 | $LocalPath = "c:\nvidia-temp" 196 | $Objects = Get-S3Object -BucketName $Bucket -KeyPrefix $KeyPrefix -Region us-east-1 197 | foreach ($Object in $Objects) { 198 | $LocalFileName = $Object.Key 199 | if ($LocalFileName -ne '' -and $Object.Size -ne 0) { 200 | $LocalFilePath = Join-Path $LocalPath $LocalFileName 201 | Copy-S3Object -BucketName $Bucket -Key $Object.Key -LocalFile $LocalFilePath -Region us-east-1 202 | } 203 | } 204 | 205 | 206 | $nvidia_setup = Get-ChildItem -Path $LocalPath -Filter *server2019*.exe -Recurse -ErrorAction SilentlyContinue -Force | %{$_.FullName} 207 | 208 | 209 | & $nvidia_setup -s | Out-Null 210 | 211 | New-ItemProperty -Path "HKLM:\SOFTWARE\NVIDIA Corporation\Global\GridLicensing" -Name "NvCplDisableManageLicensePage" -PropertyType "DWord" -Value "1" 212 | 213 | Remove-Item $LocalPath -Recurse 214 | 215 | 216 | # Install the .NET Framework (NET-Framework-Core feature) 217 | Install-WindowsFeature Net-Framework-Core 218 | 219 | 220 | DependsOn: 221 | - VirutalWorkstationInstanceRoleDefaultPolicy687E981D 222 | - VirutalWorkstationInstanceRole2E55B929 223 | Metadata: 224 | aws:cdk:path: gpic-pipeline-virtual-workstation/Virtual Workstation/Resource 225 | VirtualWorkstationEIP: 226 | Type: AWS::EC2::EIP 227 | Properties: 228 | InstanceId: 229 | Ref: VirtualWorkstation431834E1 230 | Metadata: 231 | aws:cdk:path: gpic-pipeline-virtual-workstation/VirtualWorkstationEIP 232 | CDKMetadata: 233 | Type: AWS::CDK::Metadata 234 | Properties: 235 | Analytics: v2:deflate64:H4sIAAAAAAAA/01OQQ7CMAz7C/dSGHDhhoQQ2olqvKAKGRS2dkpToWnq31k7QDvZsR0npaw2O7kpDurtVnDr1gEsoQxXVtCJY2tqRWpARhINOusJMKkXzy/P4miNY/LASfv5UaSuoNUgQ2P7nM9Y217DmEv/7DwVKANYk211P+06BEJ2gzLqjjQ9kucUnVkUCFWWPWkeT2T96+suhF/t8kSMUdQjP6xZb+VelsXTab0ib1gPKJsZP7n5nTkQAQAA 236 | Metadata: 237 | aws:cdk:path: gpic-pipeline-virtual-workstation/CDKMetadata/Default 238 | Parameters: 239 |
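# Resolved at deploy time from the SSM public parameter store, so the stack always launches from the latest Windows Server 2019 AMI.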
SsmParameterValueawsserviceamiwindowslatestWindowsServer2019EnglishFullBaseC96584B6F00A464EAD1953AFF4B05118Parameter: 240 | Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id> 241 | Default: /aws/service/ami-windows-latest/Windows_Server-2019-English-Full-Base 242 | Outputs: 243 | VirtualWorkstationPublicIp: 244 | Description: The public IP of the Virtual Workstation 245 | Value: 246 | Fn::GetAtt: 247 | - VirtualWorkstation431834E1 248 | - PublicIp 249 | VirtualWorkstationSecretName: 250 | Description: The name of the secret in AWS Secrets Manager. Please open the AWS Secrets Manager to retrieve the password for the user 'Administrator'. 251 | Value: 252 | Fn::Join: 253 | - "-" 254 | - - Fn::Select: 255 | - 0 256 | - Fn::Split: 257 | - "-" 258 | - Fn::Select: 259 | - 6 260 | - Fn::Split: 261 | - ":" 262 | - Ref: VirtualWorkstationPassword67EF7BF5 263 | - Fn::Select: 264 | - 1 265 | - Fn::Split: 266 | - "-" 267 | - Fn::Select: 268 | - 6 269 | - Fn::Split: 270 | - ":" 271 | - Ref: VirtualWorkstationPassword67EF7BF5 272 | 273 | -------------------------------------------------------------------------------- /gpic_pipeline/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-solutions-library-samples/guidance-for-game-production-environment-on-aws/cec0b36404c637f82dda475d32b0d5f0276fb128/gpic_pipeline/__init__.py -------------------------------------------------------------------------------- /gpic_pipeline/foundation.py: -------------------------------------------------------------------------------- 1 | ## Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | ## SPDX-License-Identifier: MIT-0 3 | 4 | from aws_cdk import ( 5 | aws_ec2 as ec2, 6 | aws_s3 as s3, 7 | core 8 | ) 9 | 10 | class FoundationStack(core.Stack): 11 | 12 | bucket = s3.IBucket 13 | vpc = ec2.IVpc 14 | 15 | def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None: 16 | super().__init__(scope, construct_id, **kwargs) 17 | 18 | # Create S3 Bucket for other Game Production in the Cloud modules 19 | self.bucket = s3.Bucket(self, "gpic-pipeline-bucket") 20 | 21 | # Retrieve CIDR from CDK Context 22 | foundation_vpc_cidr = self.node.try_get_context("foundation_vpc_cidr") 23 | 24 | # Set default CIDR 25 | if foundation_vpc_cidr is None: 26 | foundation_vpc_cidr = "10.0.0.0/16" 27 | 28 | # Create a new VPC with two private and two public subnets 29 | self.vpc = ec2.Vpc(self, "VPC", 30 | cidr=foundation_vpc_cidr, 31 | nat_gateways=1, 32 | max_azs=2, 33 | subnet_configuration=[ec2.SubnetConfiguration(name="Public",subnet_type=ec2.SubnetType.PUBLIC), 34 | ec2.SubnetConfiguration(name="Private",subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT)] 35 | ) 36 | 37 | 38 | 39 | 40 | # Output the S3 Bucket name that can be used by other stacks in this application 41 | output = core.CfnOutput(self, "BucketName", 42 | value=self.bucket.bucket_name, 43 | description="Game Production in the Cloud pipeline bucket. This bucket will be used by other stacks in this application.") 44 | 45 | 46 | 47 | 48 | -------------------------------------------------------------------------------- /gpic_pipeline/perforcehelixcore.py: -------------------------------------------------------------------------------- 1 | ## Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 | ## SPDX-License-Identifier: MIT-0 3 | 4 | from aws_cdk import ( 5 | aws_ec2 as ec2, 6 | aws_iam as iam, 7 | aws_autoscaling as autoscaling, 8 | aws_secretsmanager as secretsmanager, 9 | core 10 | ) 11 | 12 | 13 | class PerforceHelixCoreStack(core.Stack): 14 | def __init__(self, scope: core.Construct, id: str, vpc, **kwargs) -> None: 15 | 16 | super().__init__(scope, id, **kwargs) 17 | 18 | 19 | # Instance Role and SSM Managed Policy 20 | role = iam.Role(self, "PerforceHelixCoreInstanceRole", assumed_by=iam.ServicePrincipal("ec2.amazonaws.com"),role_name="PerforceHelixCoreInstanceRole") 21 | 22 | 23 | role.add_managed_policy(iam.ManagedPolicy.from_aws_managed_policy_name("service-role/AmazonEC2RoleforSSM")) 24 | 25 | # Create a password for the Perforce administrative user in AWS Secrets Manager and grant the Perforce Helix Core instance the right to read this password. 26 | 27 | secret_generator = secretsmanager.SecretStringGenerator(exclude_characters="'\"", 28 | exclude_punctuation=True, 29 | include_space=False, 30 | password_length=32) 31 | 32 | secret = secretsmanager.Secret(self, "Perforce Helix Core Password", 33 | generate_secret_string=secret_generator) 34 | secret.grant_read(role) 35 | 36 | 37 | 38 | 39 | # Security Group for the Perforce instance that allows communication 40 | securitygroup = ec2.SecurityGroup(self, "PerforceHelixCoreSecurityGroup", 41 | vpc=vpc, 42 | security_group_name="Perforce-SecurityGroup", 43 | description="Allows access to the Perforce server", 44 | allow_all_outbound=True 45 | ) 46 | 47 | 48 | 49 | # Parameter for the trusted internal network 50 | perforce_trusted_internal_cidr = self.node.try_get_context("perforce_trusted_internal_cidr") 51 | 52 | # Define default trusted CIDR 53 | if perforce_trusted_internal_cidr is None: 54 | perforce_trusted_internal_cidr = "10.0.0.0/16" 55 | 56 | securitygroup.add_ingress_rule(ec2.Peer.ipv4(perforce_trusted_internal_cidr), ec2.Port.tcp(1666), 'Allow access to Perforce Helix Core') 57 | 58 | 59 | # Lookup latest Amazon Linux2 AMI 60 | linux_image = ec2.MachineImage.latest_amazon_linux(generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX_2) 61 | 62 | # Parameter for the instance type of the Perforce Helix Core Instance 63 | instance_type = self.node.try_get_context("perforce_instance_type") 64 | 65 | # Define default instance type of the Perforce Helix Core Instance 66 | if instance_type is None: 67 | instance_type = "c5.4xlarge" 68 | 69 | server_hostname = self.node.try_get_context("perforce_server_name") 70 | if server_hostname is None: 71 | server_hostname = "pvs-aws-01" 72 | 73 | 74 | # Setup depot volume 75 | depot_volume_size = self.node.try_get_context("perforce_depot_volume_size") 76 | 77 | if depot_volume_size is None: 78 | depot_volume_size = 1024 79 | 80 | depot_volume_type = self.node.try_get_context("perforce_depot_volume_type") 81 | 82 | depot_volume_type = self.get_volume_type_from_string(depot_volume_type) 83 | if depot_volume_type is None: 84 | depot_volume_type = ec2.EbsDeviceVolumeType.ST1 85 | 86 | 87 | depot_block_device = ec2.BlockDevice( 88 | device_name='/dev/sdb', 89 | volume=ec2.BlockDeviceVolume.ebs( 90 | volume_type= depot_volume_type, 91 | volume_size=depot_volume_size, 92 | delete_on_termination=True 93 | 94 | ), 95 | ) 96 | 97 | 98 | # Setup log volume 99 | log_volume_size = self.node.try_get_context("perforce_log_volume_size") 100 | 101 | if log_volume_size is None: 102 | log_volume_size = 128 103 | 104 | log_volume_type = self.node.try_get_context("perforce_log_volume_type") 105 | 106 |
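# get_volume_type_from_string() (the helper defined at the bottom of this class) maps the
# context string ("gp2", "st1", ...) to an ec2.EbsDeviceVolumeType; unknown or missing
# values fall back to the defaults chosen below.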
log_volume_type = self.get_volume_type_from_string(log_volume_type) 107 | if log_volume_type is None: 108 | log_volume_type = ec2.EbsDeviceVolumeType.GP2 109 | 110 | 111 | log_block_device = ec2.BlockDevice( 112 | device_name='/dev/sdc', 113 | volume=ec2.BlockDeviceVolume.ebs( 114 | volume_type= log_volume_type, 115 | volume_size=log_volume_size, 116 | delete_on_termination=True 117 | ), 118 | ) 119 | 120 | 121 | 122 | # Setup metadata volume 123 | metadata_volume_size = self.node.try_get_context("perforce_metadata_volume_size") 124 | 125 | if metadata_volume_size is None: 126 | metadata_volume_size = 64 127 | 128 | metadata_volume_type = self.node.try_get_context("perforce_metadata_volume_type") 129 | 130 | metadata_volume_type = self.get_volume_type_from_string(metadata_volume_type) 131 | if metadata_volume_type is None: 132 | metadata_volume_type = ec2.EbsDeviceVolumeType.GP2 133 | 134 | 135 | metadata_block_device = ec2.BlockDevice( 136 | device_name='/dev/sdd', 137 | volume=ec2.BlockDeviceVolume.ebs( 138 | volume_type= metadata_volume_type, 139 | volume_size=metadata_volume_size, 140 | delete_on_termination=True 141 | ), 142 | ) 143 | 144 | 145 | # Read the UserData script and replace placeholders 146 | with open('assets/setup-perforce-helix-core.sh', 'r') as user_data_file: 147 | user_data = user_data_file.read() 148 | 149 | server_id = self.node.try_get_context("perforce_server_id") 150 | 151 | if server_id is None: 152 | server_id = "master.1" 153 | 154 | 155 | user_data = user_data.replace("SERVER_ID", server_id) 156 | 157 | user_data = user_data.replace("PERFORCE_PASSWORD_ARN", secret.secret_full_arn) 158 | 159 | 160 | 161 | # Launch the Perforce Helix Core Server 162 | instance = ec2.Instance(self, server_hostname, 163 | instance_type=ec2.InstanceType(instance_type), 164 | machine_image=linux_image, 165 | vpc = vpc, 166 | vpc_subnets = ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT), 167 | role = role, 168 | security_group=securitygroup, 169 | block_devices=[depot_block_device, log_block_device,metadata_block_device], 170 | user_data=ec2.UserData.custom(user_data) 171 | ) 172 | 173 | # Output Information of the Perforce Helix Core Server 174 | core.CfnOutput(self, "PerforceHelixCorePrivateDNS", value=instance.instance_private_dns_name, description="The private DNS name of the Perforce Helix Core Server.") 175 | 176 | core.CfnOutput(self, "PerforceHelixCorePrivateIP",value=instance.instance_private_ip, description="The private IP address of the Perforce Helix Core Server.") 177 | 178 | core.CfnOutput(self, "PerforceHelixCoreSecretName",value=secret.secret_name, description="The name of the secret in AWS Secrets Manager.
179 | 
180 |     # Helper method
181 |     def get_volume_type_from_string(self, volume_string):
182 | 
183 |         if not isinstance(volume_string, str):
184 |             return None
185 | 
186 |         if volume_string.lower() == "gp2":
187 |             return ec2.EbsDeviceVolumeType.GP2
188 |         elif volume_string.lower() == "gp3":
189 |             return ec2.EbsDeviceVolumeType.GP3
190 |         elif volume_string.lower() == "io1":
191 |             return ec2.EbsDeviceVolumeType.IO1
192 |         elif volume_string.lower() == "st1":
193 |             return ec2.EbsDeviceVolumeType.ST1
194 |         elif volume_string.lower() == "sc1":
195 |             return ec2.EbsDeviceVolumeType.SC1
196 |         elif volume_string.lower() == "standard":
197 |             return ec2.EbsDeviceVolumeType.STANDARD
198 |         elif volume_string.lower() == "io2":
199 |             return ec2.EbsDeviceVolumeType.IO2
200 |         else:
201 |             return None
202 | 
203 | 
204 | 
205 | 
206 | 
207 | 
208 | 
209 | 
210 | 
211 | 
212 | 
213 | 
214 | 
--------------------------------------------------------------------------------
/gpic_pipeline/unrealengineswarmcluster.py:
--------------------------------------------------------------------------------
1 | ## Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 | ## SPDX-License-Identifier: MIT-0
3 | 
4 | from aws_cdk import (
5 |     aws_ec2 as ec2,
6 |     aws_iam as iam,
7 |     aws_s3 as s3,
8 |     aws_imagebuilder as imagebuilder,
9 |     aws_autoscaling as autoscaling,
10 |     aws_secretsmanager as secretsmanager,
11 |     core
12 | )
13 | 
14 | 
15 | class UnrealEngineSwarmClusterStack(core.Stack):
16 |     def __init__(self, scope: core.Construct, id: str, bucket, vpc, **kwargs) -> None:
17 |         super().__init__(scope, id, **kwargs)
18 | 
19 | 
20 | 
21 |         # Custom policy to allow GETs from the GPIC S3 bucket
22 | 
23 |         policy_statement = iam.PolicyStatement(
24 |             effect=iam.Effect.ALLOW,
25 |             actions=["s3:GetObject"],
26 |             resources=[bucket.bucket_arn + "/*"]
27 |         )
28 | 
29 |         policy_document = iam.PolicyDocument()
30 |         policy_document.add_statements(policy_statement)
31 | 
32 |         # Instance Role and SSM Managed Policy
33 |         role = iam.Role(self, "SwarmInstanceRole", assumed_by=iam.ServicePrincipal("ec2.amazonaws.com"), inline_policies={"GPICDemo_UESwarmAccess": policy_document})
34 | 
35 |         role.add_managed_policy(iam.ManagedPolicy.from_aws_managed_policy_name("service-role/AmazonEC2RoleforSSM"))
36 |         role.add_managed_policy(iam.ManagedPolicy.from_aws_managed_policy_name("EC2InstanceProfileForImageBuilder"))
37 | 
38 |         # Create an instance profile that EC2 Image Builder can use.
39 |         # This can also be used later by the running Swarm instances.
40 |         instanceprofile = iam.CfnInstanceProfile(self, "GpicSwarmInstanceProfile",
41 |             instance_profile_name="ue5-swarm-instance-profile",
42 |             path="/executionServiceEC2Role/",
43 |             roles=[role.role_name]
44 |         )
45 | 
46 |         # Security Group for the Swarm instances that allows communication
47 |         securitygroup = ec2.SecurityGroup(self, "UE5-Swarm-SecurityGroup",
48 |             vpc=vpc,
49 |             description="Security Group for UE5 Swarm Agent and Coordinator",
50 |             security_group_name="Allow UE5 Swarm communication",
51 |             allow_all_outbound=True
52 |         )
53 | 
54 |         # Allow Swarm Agents and Coordinator to talk to each other in the same Security Group
55 |         securitygroup.add_ingress_rule(securitygroup, ec2.Port.tcp_range(8008, 8009), 'Allow SG Swarm TCP')
56 |         securitygroup.add_ingress_rule(securitygroup, ec2.Port.icmp_ping(), 'Allow SG Swarm ICMP Ping')
57 | 
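# A note on the self-referencing rules above: using the security group itself as
# the peer means that any instance carrying this group -- Coordinator or Agent --
# can reach any other member on TCP 8008-8009 (the ports this cluster uses for
# Swarm traffic) without knowing instance IPs in advance.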
58 |         # Parameter for the trusted network
59 |         unreal_engine_swarm_cluster_trusted_internal_cidr = self.node.try_get_context("unreal_engine_swarm_cluster_trusted_internal_cidr")
60 | 
61 |         # Define default trusted CIDR
62 |         if unreal_engine_swarm_cluster_trusted_internal_cidr is None:
63 |             unreal_engine_swarm_cluster_trusted_internal_cidr = "10.0.0.0/16"
64 | 
65 |         # Allow RDP, Swarm and ICMP ping from the trusted CIDR prefix
66 |         securitygroup.add_ingress_rule(ec2.Peer.ipv4(unreal_engine_swarm_cluster_trusted_internal_cidr), ec2.Port.tcp_range(8008, 8009), 'Allow Trusted IP Swarm TCP')
67 |         securitygroup.add_ingress_rule(ec2.Peer.ipv4(unreal_engine_swarm_cluster_trusted_internal_cidr), ec2.Port.icmp_ping(), 'Allow Trusted IP Swarm ICMP Ping')
68 |         securitygroup.add_ingress_rule(ec2.Peer.ipv4(unreal_engine_swarm_cluster_trusted_internal_cidr), ec2.Port.tcp(3389), 'Allow Trusted IP RDP TCP')
69 | 
70 | 
71 |         # Read the EC2 Image Builder component instructions from a file.
72 |         # These instructions are PowerShell commands to download the dependencies ZIP,
73 |         # uncompress it, and run the necessary installations for the components.
74 | 
75 |         component_file_path = "assets/unreal-engine-swarm-cluster-component.yml"
76 | 
77 | 
78 | 
79 |         with open(component_file_path, 'r') as componentfile:
80 |             componentdata = componentfile.read()
81 | 
82 |         componentdata = componentdata.replace("S3-BUCKET-NAME", bucket.bucket_name)
83 | 
84 | 
85 |         # Get the Image Builder instance type from the CDK context
86 | 
87 |         image_builder_instance_type = self.node.try_get_context("unreal_engine_swarm_cluster_image_builder_instance_type")
88 | 
89 |         # Set the default Image Builder instance type if not defined in the CDK context
90 | 
91 |         if image_builder_instance_type is None:
92 |             image_builder_instance_type = "m5.large"
93 | 
94 | 
95 |         image_builder_instance_type = [image_builder_instance_type]
96 | 
97 |         # Define the component for EC2 Image Builder
98 |         swarmcomponent = imagebuilder.CfnComponent(self,
99 |             "SwarmComponent",
100 |             name="Install-Swarm-Dependencies",
101 |             platform="Windows",
102 |             version="1.0.0",
103 |             data=componentdata
104 |         )
105 | 
106 |         privatesubnets = vpc.select_subnets(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT)
107 | 
108 |         # Define the VPC and instance profile to be used for image building
109 |         infraconfig = imagebuilder.CfnInfrastructureConfiguration(self,
110 |             "SwarmInfraConfig",
111 |             name="GPIC-UE5-Swarm-WindowsServer-2019-Infra-Config",
112 |             instance_profile_name=instanceprofile.instance_profile_name,
113 |             # logging=imagebuilder.CfnInfrastructureConfiguration.S3LogsProperty(s3_bucket_name=bucket.bucket_name),
114 |             subnet_id=privatesubnets.subnets[0].subnet_id,
115 |             security_group_ids=[securitygroup.security_group_id],
116 |             instance_types=image_builder_instance_type
117 |         )
118 | 
119 |         # Ensure that the instance profile has completed creation before applying the above config
120 |         infraconfig.add_depends_on(instanceprofile)
121 | 
122 |         # Look up the latest Windows Server 2019 image
123 |         basewindows = ec2.MachineImage.latest_windows(ec2.WindowsVersion.WINDOWS_SERVER_2019_ENGLISH_FULL_BASE)
124 | 
125 |         # Define the image build recipe, combining the Windows image and our component
126 |         recipe = imagebuilder.CfnImageRecipe(self,
127 |             "ImageRecipe",
128 |             name="GPIC-UE5-Swarm-Image",
129 |             parent_image=basewindows.get_image(self).image_id,
130 |             version="1.0.0",
131 |             components=[{"componentArn": swarmcomponent.attr_arn}]
132 |         )
133 | 
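# CloudFormation only marks the CfnImage resource below as created once EC2 Image
# Builder has finished baking the AMI, so the first deployment of this stack can
# take a while; attr_image_id then resolves to the ID of the freshly built AMI.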
134 |         # Start the build of the new AMI based on the recipe and infra config
135 |         swarmimage = imagebuilder.CfnImage(self,
136 |             "UnrealEngineSwarmImage",
137 |             image_recipe_arn=recipe.attr_arn,
138 |             infrastructure_configuration_arn=infraconfig.attr_arn
139 |         )
140 | 
141 |         # Look up the AMI ID of the resulting image
142 |         swarmami = ec2.GenericWindowsImage({self.region: swarmimage.attr_image_id})
143 | 
144 | 
145 | 
146 |         # Create a password for the user Administrator in AWS Secrets Manager and grant the Swarm instances the right to read this password.
147 |         secret = secretsmanager.Secret(self, "Unreal Engine Swarm Instances password")
148 |         secret.grant_read(role)
149 | 
150 |         # Read the script that starts the Swarm Coordinator.
151 |         # It will be used in the instance user data.
152 | 
153 |         with open('assets/setup-unreal-engine-swarm-coordinator.ps1', 'r') as coordinator_file:
154 |             coordinator_user_data = coordinator_file.read()
155 | 
156 |         coordinator_user_data = coordinator_user_data.replace("ADMIN_PASSWORD_SECRET_ARN", secret.secret_full_arn)
157 | 
158 |         # Parameter for the instance type
159 |         coordinator_instance_type = self.node.try_get_context("unreal_engine_swarm_cluster_coordinator_instance_type")
160 | 
161 |         # Define default instance type
162 |         if coordinator_instance_type is None:
163 |             coordinator_instance_type = "t3.large"
164 | 
165 | 
166 |         # Launch the Swarm Coordinator instance
167 |         coordinator = ec2.Instance(self, "ue5-swarm-coordinator",
168 |             instance_type=ec2.InstanceType(coordinator_instance_type),
169 |             machine_image=swarmami,
170 |             vpc=vpc,
171 |             vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT),
172 |             role=role,
173 |             security_group=securitygroup,
174 |             user_data=ec2.UserData.custom(coordinator_user_data)
175 | 
176 |         )
177 | 
178 | 
179 | 
180 |         with open('assets/setup-unreal-egine-swarm-agent.ps1', 'r') as agent_file:
181 |             agent_user_data = agent_file.read()
182 | 
183 |         agent_user_data = agent_user_data.replace("COORDINATOR_IP", coordinator.instance_private_ip)
184 | 
185 |         agent_user_data = agent_user_data.replace("ADMIN_PASSWORD_SECRET_ARN", secret.secret_full_arn)
186 | 
187 |         agent_root_volume_size = self.node.try_get_context("unreal_engine_swarm_cluster_agent_root_volume_size")
188 | 
189 |         # Define default root volume size
190 |         if agent_root_volume_size is None:
191 |             agent_root_volume_size = 100
192 | 
193 |         # Define the C: drive size for the Swarm Agents
194 |         root_device = autoscaling.BlockDevice(
195 |             device_name='/dev/sda1',
196 |             volume=autoscaling.BlockDeviceVolume.ebs(
197 |                 volume_size=agent_root_volume_size,
198 |                 delete_on_termination=True
199 |             ),
200 |         )
201 | 
202 |         # Parameter for the instance type
203 |         agent_instance_type = self.node.try_get_context("unreal_engine_swarm_cluster_agent_instance_type")
204 | 
205 |         # Define default instance type
206 |         if agent_instance_type is None:
207 |             agent_instance_type = "c5.4xlarge"
208 | 
209 |         # Create an Auto Scaling group for the Swarm Agents.
210 |         # It won't automatically scale on load; instead it can automatically
211 |         # scale down in the evening and bring the cluster up again in the morning.
212 |         swarmasg = autoscaling.AutoScalingGroup(self, "ue5-swarm-agent",
213 |             instance_type=ec2.InstanceType(agent_instance_type),
214 |             machine_image=swarmami,
215 |             role=role,
216 |             vpc=vpc,
217 |             vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT),
218 |             security_group=securitygroup,
219 |             block_devices=[root_device],
220 |             user_data=ec2.UserData.custom(agent_user_data),
221 |             desired_capacity=1,
222 |             max_capacity=1,
223 |             min_capacity=1)
224 | 
225 | 
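# A minimal sketch of the evening/morning schedule the comment above alludes to,
# using the CDK's scheduled-scaling API. The cron hours (UTC) and capacities are
# illustrative assumptions and are not wired into this stack:
#
#   swarmasg.scale_on_schedule("ScaleDownEvening",
#       schedule=autoscaling.Schedule.cron(hour="19", minute="0"),
#       desired_capacity=0, min_capacity=0, max_capacity=0)
#
#   swarmasg.scale_on_schedule("ScaleUpMorning",
#       schedule=autoscaling.Schedule.cron(hour="7", minute="0"),
#       desired_capacity=1, min_capacity=1, max_capacity=1)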
"UnrealEngine5SwarmAMI", value=swarmimage.attr_image_id,description="The AMI that is be used to deploy the Unreal Engine 5 Swarm Coordinator an the agents.") 228 | 229 | core.CfnOutput(self, "UnrealEngine5SwarmCoordinatorPrivateIP",value=coordinator.instance_private_ip,description="The private IP of the Unreal Engine 5 Swam coordinator.") 230 | -------------------------------------------------------------------------------- /gpic_pipeline/virtualworkstation.py: -------------------------------------------------------------------------------- 1 | ## Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | ## SPDX-License-Identifier: MIT-0 3 | 4 | from aws_cdk import ( 5 | aws_ec2 as ec2, 6 | aws_iam as iam, 7 | aws_secretsmanager as secretsmanager, 8 | core 9 | ) 10 | 11 | 12 | class VirtualWorkstationStack(core.Stack): 13 | 14 | def __init__(self, scope: core.Construct, id: str, bucket, vpc, **kwargs) -> None: 15 | super().__init__(scope,id,**kwargs) 16 | 17 | 18 | # Create Virtual Workstation Instance Role 19 | 20 | 21 | policy_statement_drivers = iam.PolicyStatement( 22 | effect=iam.Effect.ALLOW, 23 | actions=["s3:GetObject",], 24 | resources=["arn:aws:s3:::ec2-windows-nvidia-drivers/*"] 25 | ) 26 | 27 | policy_statement_bucket_list = iam.PolicyStatement( 28 | effect=iam.Effect.ALLOW, 29 | actions=["s3:ListAllMyBuckets","s3:ListBucket"], 30 | resources=["arn:aws:s3:::*"] 31 | ) 32 | 33 | policy_statement_bucket_object_actions = iam.PolicyStatement( 34 | effect=iam.Effect.ALLOW, 35 | actions=["s3:*Object*",], 36 | resources=[bucket.bucket_arn+ "/*"] 37 | ) 38 | 39 | policy_document = iam.PolicyDocument() 40 | policy_document.add_statements(policy_statement_drivers) 41 | policy_document.add_statements(policy_statement_bucket_list) 42 | policy_document.add_statements(policy_statement_bucket_object_actions) 43 | 44 | # Instance Role and SSM Managed Policy 45 | role = iam.Role(self, "VirutalWorkstationInstanceRole", assumed_by=iam.ServicePrincipal("ec2.amazonaws.com"), inline_policies={"GPICDemo_VirtualWorkStationAccess" : policy_document}) 46 | 47 | role.add_managed_policy(iam.ManagedPolicy.from_aws_managed_policy_name("service-role/AmazonEC2RoleforSSM")) 48 | 49 | 50 | 51 | # Create password for user Administrator in EC2 Secrets Manager and grant Virtual workstation instance the right to read this password. 52 | secret = secretsmanager.Secret(self, "Virtual Workstation Password") 53 | secret.grant_read(role) 54 | 55 | 56 | 57 | # Security Group for the Virtual Workstation 58 | securitygroup = ec2.SecurityGroup(self, "Virtual-Workstation-SecurityGroup", 59 | vpc=vpc, 60 | security_group_name="Virtual-Workstation-SecurityGroup", 61 | description="Allows remote access to the Virtual Workstation via RDP & Parsec, HP Anyware, and NICE DCV. 
57 |         # Security Group for the Virtual Workstation
58 |         securitygroup = ec2.SecurityGroup(self, "Virtual-Workstation-SecurityGroup",
59 |             vpc=vpc,
60 |             security_group_name="Virtual-Workstation-SecurityGroup",
61 |             description="Allows remote access to the Virtual Workstation via RDP, Parsec, HP Anyware, and NICE DCV. In addition allows UE5 Swarm communication",
62 |             allow_all_outbound=True
63 |         )
64 | 
65 |         # Parameter for the trusted internal network
66 | 
67 |         virtual_workstation_trusted_internal_cidr = self.node.try_get_context("virtual_workstation_trusted_internal_cidr")
68 | 
69 |         # Define default trusted CIDR
70 |         if virtual_workstation_trusted_internal_cidr is None:
71 |             virtual_workstation_trusted_internal_cidr = "10.0.0.0/16"
72 | 
73 |         # Parameter for the trusted remote network
74 |         virtual_workstation_trusted_remote_cidr = self.node.try_get_context("virtual_workstation_trusted_remote_cidr")
75 | 
76 |         if virtual_workstation_trusted_remote_cidr is None:
77 |             virtual_workstation_trusted_remote_cidr = "0.0.0.0/0"
78 | 
79 | 
80 | 
81 |         # Allow Swarm and ICMP ping from the trusted internal CIDR prefix
82 |         securitygroup.add_ingress_rule(ec2.Peer.ipv4(virtual_workstation_trusted_internal_cidr), ec2.Port.tcp_range(8008, 8009), 'Allow Trusted IP Swarm TCP')
83 |         securitygroup.add_ingress_rule(ec2.Peer.ipv4(virtual_workstation_trusted_internal_cidr), ec2.Port.icmp_ping(), 'Allow Trusted IP Swarm ICMP Ping')
84 | 
85 |         # Allow RDP from the trusted remote CIDR prefix
86 |         securitygroup.add_ingress_rule(ec2.Peer.ipv4(virtual_workstation_trusted_remote_cidr), ec2.Port.tcp(3389), "Allow Trusted Remote CIDR to access Virtual Workstation via RDP")
87 | 
88 |         # Allow Perforce Helix Core (TCP 1666) from the trusted remote CIDR prefix
89 |         securitygroup.add_ingress_rule(ec2.Peer.ipv4(virtual_workstation_trusted_remote_cidr), ec2.Port.tcp(1666), "Allow Trusted Remote CIDR to access Virtual Workstation via Perforce (TCP 1666)")
90 | 
91 |         # Allow PCoIP from the trusted remote CIDR prefix
92 |         securitygroup.add_ingress_rule(ec2.Peer.ipv4(virtual_workstation_trusted_remote_cidr), ec2.Port.tcp(4172), "Allow Trusted Remote CIDR to access Virtual Workstation via PCoIP (Session Establishment)")
93 |         securitygroup.add_ingress_rule(ec2.Peer.ipv4(virtual_workstation_trusted_remote_cidr), ec2.Port.tcp(443), "Allow Trusted Remote CIDR to access Virtual Workstation via PCoIP (Client Authentication)")
94 |         securitygroup.add_ingress_rule(ec2.Peer.ipv4(virtual_workstation_trusted_remote_cidr), ec2.Port.udp(4172), "Allow Trusted Remote CIDR to access Virtual Workstation via PCoIP (PCoIP Session Data)")
95 | 
96 |         # Allow NICE DCV from the trusted remote CIDR prefix
97 |         securitygroup.add_ingress_rule(ec2.Peer.ipv4(virtual_workstation_trusted_remote_cidr), ec2.Port.tcp(8443), "Allow Trusted Remote CIDR to access Virtual Workstation via NICE DCV")
98 | 
99 | 
100 | 
101 |         # Look up the latest Windows Server 2019 image
102 |         basewindows = ec2.MachineImage.latest_windows(ec2.WindowsVersion.WINDOWS_SERVER_2019_ENGLISH_FULL_BASE)
103 | 
104 |         # Parameter for the instance type of the Virtual Workstation
105 |         instance_type = self.node.try_get_context("virtual_workstation_instance_type")
106 | 
107 |         # Define default instance type of the Virtual Workstation
108 |         if instance_type is None:
109 |             instance_type = "g4dn.4xlarge"
110 | 
111 |         root_volume_size = self.node.try_get_context("virtual_workstation_root_volume_size")
112 | 
113 |         if root_volume_size is None:
114 |             root_volume_size = 200
115 | 
116 |         # Define the C: drive size for the Virtual Workstation
117 |         root_device = ec2.BlockDevice(
118 |             device_name='/dev/sda1',
119 |             volume=ec2.BlockDeviceVolume.ebs(
120 |                 volume_size=root_volume_size,
121 |                 delete_on_termination=True
122 |             ),
123 |         )
124 | 
125 | 
126 |         with open('assets/setup-virtual-workstation.ps1', 'r') as user_file:
127 |             user_data = user_file.read()
128 | 
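# As in the Perforce and Swarm stacks, the PowerShell script contains literal
# placeholder tokens (here ADMIN_PASSWORD_SECRET_ARN) that are substituted with
# real values below before the text is handed to EC2 as user data.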
129 |         user_data = user_data.replace("ADMIN_PASSWORD_SECRET_ARN", secret.secret_full_arn)
130 | 
131 | 
132 |         # Launch the Virtual Workstation instance
133 |         virtual_workstation = ec2.Instance(self, "Virtual Workstation",
134 |             instance_type=ec2.InstanceType(instance_type),
135 |             machine_image=basewindows,
136 |             vpc=vpc,
137 |             vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC),
138 |             role=role,
139 |             security_group=securitygroup,
140 |             block_devices=[root_device],
141 |             user_data=ec2.UserData.custom(user_data)
142 |         )
143 | 
144 | 
145 |         # Create and associate a static public IP address
146 |         ec2.CfnEIP(self, "VirtualWorkstationEIP",
147 |             instance_id=virtual_workstation.instance_id
148 | 
149 |         )
150 | 
151 |         # Output the public IP address of the Virtual Workstation
152 |         core.CfnOutput(self, "VirtualWorkstationPublicIp",
153 |             value=virtual_workstation.instance_public_ip,
154 |             description="The public IP of the Virtual Workstation")
155 |         core.CfnOutput(self, "VirtualWorkstationSecretName",
156 |             value=secret.secret_name,
157 |             description="The name of the secret in AWS Secrets Manager. Please open the AWS Secrets Manager to retrieve the password for the user 'Administrator'.")
158 | 
159 | 
160 | 
161 | 
162 | 
163 | 
164 | 
165 | 
166 | 
167 | 
168 | 
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | aws-cdk.core
2 | aws-cdk.aws_s3
3 | aws-cdk.aws_ec2
4 | aws-cdk.aws_iam
5 | aws-cdk.aws_imagebuilder
6 | aws-cdk.aws_autoscaling
7 | aws-cdk.aws_secretsmanager
--------------------------------------------------------------------------------
/source.bat:
--------------------------------------------------------------------------------
1 | @echo off
2 | 
3 | rem The sole purpose of this script is to make the command
4 | rem
5 | rem     source .venv/bin/activate
6 | rem
7 | rem (which activates a Python virtualenv on Linux or Mac OS X) work on Windows.
8 | rem On Windows, this command just runs this batch file (the argument is ignored).
9 | rem
10 | rem Now we don't need to document a Windows command for activating a virtualenv.
11 | 
12 | echo Executing .venv\Scripts\activate.bat for you
13 | .venv\Scripts\activate.bat
--------------------------------------------------------------------------------