├── LICENSE
├── README.md
├── cloudformation
│   ├── eks-workers.yaml
│   ├── environment.yaml
│   ├── fargate.yaml
│   ├── network.yaml
│   ├── permissions.yaml
│   └── proxy.yaml
├── launch_all.sh
├── launch_workers.sh
└── variables.sh

/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 | 
3 | Copyright (c) 2019 Jason Barto
4 | 
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # private-eks-cluster
2 | 
3 | This repository is a collection of CloudFormation templates and shell scripts to create an Amazon EKS Kubernetes cluster in an AWS Virtual Private Cloud (VPC) without any Internet connectivity.
4 | 
5 | ## Overview
6 | 
7 | This collection of CloudFormation templates and Bash shell scripts deploys an EKS cluster into a VPC with no Internet Gateway (IGW) or NAT Gateway attached.
8 | 
9 | To do this it will create:
10 | - VPC
11 | - VPC endpoints - for EC2, ECR, S3, CloudWatch Logs, STS, AutoScaling, and SSM
12 | - VPC endpoint for a proxy (optional) - to an existing web proxy that you have already set up (not required by EKS, but assumed necessary if you want to pull containers from DockerHub, gcr.io, etc.)
13 | - IAM permissions
14 | - EKS cluster - logging enabled, encrypted secrets, no public endpoint
15 | - OIDC IDP - to allow pods to assume AWS roles
16 | - Auto Scaling group for the node group (optional) - including optional bootstrap configuration for the proxy
17 | - Fargate profile (optional) - for running containers on Fargate
18 | 
19 | Once completed you can (from within the VPC) communicate with your EKS cluster and see a list of running worker nodes.
20 | 
21 | ## Justification
22 | 
23 | Creating an EKS cluster that is fully private and runs within a VPC with no Internet connection can be a challenge. A few obstacles prevent this from happening easily.
24 | 
25 | First, the EKS Cluster resource in CloudFormation does not allow you to specify that you want a private-only endpoint; [Terraform](https://www.terraform.io/docs/providers/aws/r/eks_cluster.html), by contrast, currently supports this configuration.
26 | 
27 | Second, when the EKS worker nodes start they need to communicate with the EKS control plane, and to do that they require details such as the CA certificate for the EKS master nodes. Normally, at bootstrap, the EC2 instance could query the EKS control plane and retrieve these details; however, the EKS service currently does not support [VPC endpoints for the EKS control plane](https://github.com/aws/containers-roadmap/issues/298). Managed node groups can work around this, but you may want to customize the underlying host or use a custom AMI.
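As an illustration of this second point, the details the worker nodes need can be read from the EKS API, which is essentially what `launch_workers.sh` does later on. A minimal sketch, assuming `CLUSTER_NAME` and `REGION` are set as in `variables.sh`:

```bash
# Fetch the private API endpoint and base64-encoded CA certificate
# that worker nodes need in order to join the cluster.
aws eks describe-cluster --name "$CLUSTER_NAME" --region "$REGION" \
  --query 'cluster.{endpoint: endpoint, ca: certificateAuthority.data}' \
  --output json
```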
28 | 
29 | Third, once launched, the instance role of the EC2 worker nodes must be registered with the EKS cluster to allow the nodes to communicate with the cluster.
30 | 
31 | To solve these issues this project takes advantage of all of the flexibility the EKS service makes available to script the creation of a completely private EKS cluster.
32 | 
33 | > Note that this repository is here for illustration and demonstration purposes only. The hope is that this repository aids your understanding of how EKS works to manage Kubernetes clusters on your behalf. It is not intended as production code and should not be adopted as such.
34 | 
35 | ## Quickstart
36 | 
37 | 1. Clone this repository to a machine that has CLI access to your AWS account.
38 | 1. Edit the values in `variables.sh`:
39 | 
40 |    1. Set `CLUSTER_NAME` to be a name you choose
41 |    1. Set `REGION` to be an AWS region you prefer, such as us-east-2, eu-west-2, or eu-central-1
42 |    1. Edit `AMI_ID` to be correct for your region
43 | 
44 | 1. Execute `launch_all.sh`
45 | 
46 | ## Getting started
47 | 
48 | Edit the variable definitions found in `variables.sh`.
49 | 
50 | These variables are (a sample configuration follows the list):
51 | - CLUSTER_NAME - your desired EKS cluster name
52 | - REGION - the AWS region in which you want the resources created
53 | - HTTP_PROXY_ENDPOINT_SERVICE_NAME - the name of a VPC endpoint service you must have previously created which represents an HTTP proxy (e.g. Squid)
54 | - KEY_PAIR - the name of an existing EC2 key pair to be used as an SSH key on the worker nodes
55 | - VERSION - the EKS version you wish to create ('1.16', '1.15', '1.14', etc.)
56 | - AMI_ID - the region-specific AWS EKS worker AMI to use. [See here for the list of managed AMIs](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html)
57 | - INSTANCE_TYPE - the instance type to be used for the worker nodes
58 | - S3_STAGING_LOCATION - an existing S3 bucket name and optional prefix to which CloudFormation templates and a kubectl binary will be uploaded
59 | - ENABLE_FARGATE - set to 'true' to enable Fargate support; disabled by default as this requires the proxy to be a transparent proxy
60 | - FARGATE_PROFILE_NAME - the name for the Fargate profile for running EKS pods on Fargate
61 | - FARGATE_NAMESPACE - the namespace to match pods against for running EKS pods on Fargate. You must also create this namespace inside the cluster with 'kubectl create namespace fargate' and then launch the pod into that namespace for Fargate to be the target
62 | 
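A sample `variables.sh` with illustrative values only (the AMI ID, key pair, bucket, and endpoint service name below are placeholders for resources in your own account):

```bash
CLUSTER_NAME=private-eks
REGION=eu-west-1
HTTP_PROXY_ENDPOINT_SERVICE_NAME=com.amazonaws.vpce.eu-west-1.vpce-svc-0123456789abcdef0
KEY_PAIR=my-key-pair
VERSION='1.16'
AMI_ID=ami-0123456789abcdef0   # region-specific EKS-optimized worker AMI
INSTANCE_TYPE=t3.large
S3_STAGING_LOCATION=my-staging-bucket/private-eks
ENABLE_FARGATE=false
FARGATE_PROFILE_NAME=fargate-profile
FARGATE_NAMESPACE=fargate
```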
63 | If you do not have a proxy already configured you can use the cloudformation/proxy.yaml template provided, which is a modified version of the template from this guide:
64 | https://aws.amazon.com/blogs/security/how-to-add-dns-filtering-to-your-nat-instance-with-squid/
65 | This will set up a Squid proxy in its own VPC that you can use, along with a VPC endpoint service and a test instance. The template takes a parameter, "WhitelistDomains" - a comma-separated list of domains for the proxy whitelist. The whitelist is refreshed from S3 on a regular schedule, so modifying it directly on the EC2 instance is not advised.
66 | ```
67 | aws cloudformation create-stack --stack-name filtering-proxy --template-body file://cloudformation/proxy.yaml --capabilities CAPABILITY_IAM
68 | export ACCOUNT_ID=$(aws sts get-caller-identity --output json | jq -r '.Account')
69 | export HTTP_PROXY_ENDPOINT_SERVICE_NAME=$(aws ec2 describe-vpc-endpoint-services --output json | jq -r '.ServiceDetails[] | select(.Owner==env.ACCOUNT_ID) | .ServiceName')
70 | echo $HTTP_PROXY_ENDPOINT_SERVICE_NAME
71 | ```
72 | Afterwards, enter the proxy endpoint service name from the output into the `variables.sh` file.
73 | 
74 | Once these values are set you can execute `launch_all.sh` and get a coffee. It will take approximately 10-15 minutes to create the VPC, endpoints, cluster, and worker nodes.
75 | 
76 | After this is completed you will have an EKS cluster that you can review using the AWS console or CLI. You can also remotely access your VPC using Amazon WorkSpaces, a VPN, or similar means. Using the `kubectl` client you should then see something similar to:
77 | 
78 | ```bash
79 | [ec2-user@ip-10-10-40-207 ~]$ kubectl get nodes
80 | NAME                                          STATUS   ROLES    AGE   VERSION
81 | ip-10-0-2-186.eu-central-1.compute.internal   Ready    <none>   45m   v1.13.8-eks-cd3eb0
82 | ip-10-0-4-219.eu-central-1.compute.internal   Ready    <none>   45m   v1.13.8-eks-cd3eb0
83 | ip-10-0-8-46.eu-central-1.compute.internal    Ready    <none>   45m   v1.13.8-eks-cd3eb0
84 | [ec2-user@ip-10-10-40-207 ~]$ kubectl get ds -n kube-system
85 | NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
86 | aws-node     3         3         3       3            3           <none>          52m
87 | kube-proxy   3         3         3       3            3           <none>          52m
88 | ```
89 | 
90 | There you go - you now have an EKS cluster in a private VPC!
91 | 
92 | ## Code Explained
93 | 
94 | `variables.sh` defines key user-configurable values that control how the scripts execute to create an EKS cluster. These values control whether Fargate is used to host worker nodes, whether a proxy server is configured on the worker nodes, and whether you would like the EKS master node to be accessible from outside of the VPC.
95 | 
96 | `launch_all.sh` sources the values from `variables.sh` and then begins by creating an S3 bucket (if it does not already exist) to host the CloudFormation templates and kubectl binary. With an S3 bucket in place, the script then moves on to deploying the CloudFormation stack defined by `environment.yaml`. This stack deploys two nested stacks, `permissions.yaml` and `network.yaml`.
97 | 
98 | `permissions.yaml` creates an IAM role for the EKS cluster, a KMS key to encrypt K8s secrets held on the EKS master node, and an IAM role for the EC2 worker nodes to be created later.
99 | 
100 | `network.yaml` creates a VPC with no IGW and 3 subnets. It also creates VPC endpoints for Amazon S3, Amazon ECR, Amazon EC2, EC2 Auto Scaling, CloudWatch Logs, STS, and SSM. If a VPC endpoint service is specified for a proxy server, a VPC endpoint will also be created to point at it.
101 | 
102 | With permissions and a network in place, the `launch_all.sh` script next launches an EKS cluster using the AWS CLI. This cluster will be configured to operate privately, with full logging to CloudWatch Logs, Kubernetes secrets encrypted using a KMS key, and with the role created in the `permissions.yaml` CloudFormation template. The script will then pause while it waits for the cluster to finish creating.
103 | 
104 | Next the script will configure an OpenID Connect provider which will be used to allow Kubernetes pods to authenticate against AWS IAM and obtain temporary credentials. This works in a manner similar to EC2 instance profiles: containers in the pod can then reference AWS credentials as secrets using standard K8s parlance.
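The OIDC setup amounts to registering the cluster's OIDC issuer with IAM. A hedged sketch of the equivalent AWS CLI calls (the thumbprint value is a placeholder; it must be the root CA thumbprint of the issuer for your region):

```bash
# Look up the cluster's OIDC issuer URL, then register it as an IAM identity provider.
ISSUER_URL=$(aws eks describe-cluster --name "$CLUSTER_NAME" --region "$REGION" \
  --query 'cluster.identity.oidc.issuer' --output text)

aws iam create-open-id-connect-provider \
  --url "$ISSUER_URL" \
  --client-id-list sts.amazonaws.com \
  --thumbprint-list "$ROOT_CA_THUMBPRINT"
```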
105 | 
106 | After the EKS cluster has been created and the OIDC provider configured, the script will then configure your local `kubectl` tool to communicate with the EKS cluster. Please note this will only work if you have a network path to your EKS master node. To have this network path you will need to be connected to your VPC over Direct Connect or VPN, or you will have to enable communication with your EKS master node from outside of the VPC.
107 | 
108 | Next the script will hand control over to `launch_workers.sh`, which will again read values from `variables.sh` before proceeding.
109 | 
110 | `launch_workers.sh` will read values from the previous CloudFormation stack to know what VPC subnets and security groups to use. The script will retrieve the HTTPS endpoint for the EKS master node and the CA certificate to be used during communication with the master. It will also request a token for communicating with the EKS cluster created by `launch_all.sh`.
111 | 
112 | With these values in hand the script will then launch worker nodes to run your K8s pods. Depending on your configuration of `variables.sh`, the script will either apply the `fargate.yaml` CloudFormation template and create a Fargate profile with EKS, allowing you to run a fully serverless K8s cluster, or it will create an EC2 Auto Scaling group to create EC2 instances in your VPC that will connect with the EKS master node.
113 | 
114 | To create the EC2 instances the script will first download the `kubectl` binary and store it in S3 for later retrieval by the worker nodes. It will then apply the `eks-workers.yaml` CloudFormation template. The template will create a launch configuration and Auto Scaling group that will create EC2 instances to host your pods.
115 | 
116 | When they first launch, the EC2 worker nodes will use the CA certificate and EKS token provided to them to configure themselves and communicate with the EKS master node. The worker nodes, using cloud-init user data, will apply an auth ConfigMap to the EKS master node, giving the worker nodes permission to register with the EKS master. If a proxy has been configured, the EC2 instance will configure Docker and the kubelet to use your HTTP proxy. The EC2 instance will also execute the EKS bootstrap.sh script, which is provided by the EKS service AMI, to configure the EKS components on the system. Lastly the EC2 instance will insert an iptables rule that prevents pods from querying the EC2 metadata service.
117 | 
118 | When the CloudFormation template has been applied and the user data has executed on the EC2 worker nodes, the shell script will return and you should now have a fully formed EKS cluster running privately in a VPC.
119 | 
120 | ## EKS Under the covers
121 | 
122 | Amazon EKS is managed upstream K8s, so all the requirements and capabilities of Kubernetes apply. This is to say that when you create an EKS cluster you are given either a private or public (or both) K8s master node, managed for you as a service. When you create EC2 instances, hopefully as part of an Auto Scaling group, those nodes will need to be able to authenticate with the K8s master node and be managed by the master. Each node runs the standard kubelet and Docker daemon and will need the master's name and CA certificate. To do this the kubelet will query the EKS service, or you can provide these as arguments to bootstrap.sh.
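For reference, this is the same mechanism the `eks-workers.yaml` user data uses; a sketch with placeholder values:

```bash
# Configure the kubelet and Docker against a known API endpoint and CA,
# avoiding the control-plane lookup that a no-Internet VPC cannot perform.
/etc/eks/bootstrap.sh my-private-cluster \
  --apiserver-endpoint "$CLUSTER_API_ENDPOINT" \
  --b64-cluster-ca "$CLUSTER_CA_B64" \
  --kubelet-extra-args '--node-labels=workergroup=private-workers'
```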
After connecting to the master, it will receive instructions to launch daemon sets. To do this the kubelet and Docker will need to authenticate with ECR, where the daemon set images are typically kept. Please note that the 1.13 version of the kubelet is compatible with VPC endpoints for ECR, but 1.11 and 1.12 will require a proxy server to reach ecr.REGION.amazonaws.com. After pulling down the daemon sets your cluster should be stable and ready for use. For details about configuring proxy servers for the kubelet and Docker, please check out the source code.
123 | 
124 | ## Development notes
125 | ---
126 | ### Configure proxy for the Docker daemon
127 | https://stackoverflow.com/questions/23111631/cannot-download-docker-images-behind-a-proxy
128 | 
129 | ### Authenticate with ECR
130 | https://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html#registry_auth
131 | 
132 | ```bash
133 | aws ecr get-login --region eu-west-1 --no-include-email
134 | ```
135 | 
136 | ### Containers key to worker node operation
137 | 602401143452.dkr.ecr.eu-west-1.amazonaws.com/amazon-k8s-cni:v1.5.0
138 | 602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/kube-proxy:v1.11.5
139 | 602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/coredns:v1.1.3
140 | 602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/pause-amd64:3.1
141 | 
142 | ### Docker unable to authenticate with ECR; couldn't get the Docker credential helper to work
143 | https://github.com/awslabs/amazon-ecr-credential-helper/issues/117
144 | 
145 | Setting the aws-node daemon set to pull its image only if the image is not already present worked around this.
146 | 
147 | ### Procedure to create a private EKS cluster (by hand)
148 | 
149 | 1. Create a VPC with only private subnets
150 | 1. Create VPC endpoints for dkr.ecr, ecr, ec2, s3
151 | 1. Provide a web proxy for the EKS service API
152 | 1. Create an EKS cluster in the private VPC
153 | 1. Edit the aws-node daemonset to only pull images if not present
154 | ```bash
155 | kubectl edit ds/aws-node -n kube-system
156 | ```
157 | 1. Deploy the CFN template, specifying the proxy URL and a security group granting access to the VPC endpoints
158 | 1. Add the worker instance role to the authentication config map for the cluster (a sample map is shown below)
159 | ```bash
160 | kubectl apply -f aws-auth-cm.yaml
161 | ```
162 | 1. Profit
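For reference, the `aws-auth-cm.yaml` applied above mirrors the ConfigMap rendered by the `eks-workers.yaml` user data; the role ARN here is a placeholder for your worker instance role:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/my-eks-worker-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```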
163 | 
164 | **Note** The EKS AMI list is at https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html
165 | 
166 | **Note** Instructions to grant worker nodes access to the cluster: https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
167 | 
168 | ### Sample http-proxy.conf for a worker node to work with an HTTP proxy
169 | /etc/systemd/system/kubelet.service.d/http-proxy.conf
170 | [Service]
171 | Environment="https_proxy=http://vpce-001234f5aa16f2228-aspopn6a.vpce-svc-062e1dc8165cd99df.eu-west-1.vpce.amazonaws.com:3128"
172 | Environment="HTTPS_PROXY=http://vpce-001234f5aa16f2228-aspopn6a.vpce-svc-062e1dc8165cd99df.eu-west-1.vpce.amazonaws.com:3128"
173 | Environment="http_proxy=http://vpce-001234f5aa16f2228-aspopn6a.vpce-svc-062e1dc8165cd99df.eu-west-1.vpce.amazonaws.com:3128"
174 | Environment="HTTP_PROXY=http://vpce-001234f5aa16f2228-aspopn6a.vpce-svc-062e1dc8165cd99df.eu-west-1.vpce.amazonaws.com:3128"
175 | Environment="NO_PROXY=169.254.169.254,2FDA1234AA4491779F1DF905AEFCB647.yl4.eu-west-1.eks.amazonaws.com,ec2.eu-west-1.amazonaws.com"
176 | 
177 | /usr/lib/systemd/system/docker.service.d/http-proxy.conf
178 | [Service]
179 | Environment="https_proxy=http://vpce-001234f5aa16f2228-aspopn6a.vpce-svc-062e1dc8165cd99df.eu-west-1.vpce.amazonaws.com:3128"
180 | Environment="HTTPS_PROXY=http://vpce-001234f5aa16f2228-aspopn6a.vpce-svc-062e1dc8165cd99df.eu-west-1.vpce.amazonaws.com:3128"
181 | Environment="http_proxy=http://vpce-001234f5aa16f2228-aspopn6a.vpce-svc-062e1dc8165cd99df.eu-west-1.vpce.amazonaws.com:3128"
182 | Environment="HTTP_PROXY=http://vpce-001234f5aa16f2228-aspopn6a.vpce-svc-062e1dc8165cd99df.eu-west-1.vpce.amazonaws.com:3128"
183 | Environment="NO_PROXY=169.254.169.254,2FDA1234AA4491779F1DF905AEFCB647.yl4.eu-west-1.eks.amazonaws.com,ec2.eu-west-1.amazonaws.com"
184 | 
--------------------------------------------------------------------------------
/cloudformation/eks-workers.yaml:
--------------------------------------------------------------------------------
1 | AWSTemplateFormatVersion: 2010-09-09
2 | Description: Amazon EKS - Node Group
3 | Parameters:
4 |   KeyName:
5 |     Description: The EC2 Key Pair to allow SSH access to the instances
6 |     Type: String
7 |   NodeImageId:
8 |     Description: AMI id for the node instances.
9 | Type: 'AWS::EC2::Image::Id' 10 | NodeInstanceType: 11 | Description: EC2 instance type for the node instances 12 | Type: String 13 | Default: t3.medium 14 | ConstraintDescription: Must be a valid EC2 instance type 15 | AllowedValues: 16 | - t2.small 17 | - t2.medium 18 | - t2.large 19 | - t2.xlarge 20 | - t2.2xlarge 21 | - t3.nano 22 | - t3.micro 23 | - t3.small 24 | - t3.medium 25 | - t3.large 26 | - t3.xlarge 27 | - t3.2xlarge 28 | - m3.medium 29 | - m3.large 30 | - m3.xlarge 31 | - m3.2xlarge 32 | - m4.large 33 | - m4.xlarge 34 | - m4.2xlarge 35 | - m4.4xlarge 36 | - m4.10xlarge 37 | - m5.large 38 | - m5.xlarge 39 | - m5.2xlarge 40 | - m5.4xlarge 41 | - m5.12xlarge 42 | - m5.24xlarge 43 | - c4.large 44 | - c4.xlarge 45 | - c4.2xlarge 46 | - c4.4xlarge 47 | - c4.8xlarge 48 | - c5.large 49 | - c5.xlarge 50 | - c5.2xlarge 51 | - c5.4xlarge 52 | - c5.9xlarge 53 | - c5.18xlarge 54 | - i3.large 55 | - i3.xlarge 56 | - i3.2xlarge 57 | - i3.4xlarge 58 | - i3.8xlarge 59 | - i3.16xlarge 60 | - r3.xlarge 61 | - r3.2xlarge 62 | - r3.4xlarge 63 | - r3.8xlarge 64 | - r4.large 65 | - r4.xlarge 66 | - r4.2xlarge 67 | - r4.4xlarge 68 | - r4.8xlarge 69 | - r4.16xlarge 70 | - x1.16xlarge 71 | - x1.32xlarge 72 | - p2.xlarge 73 | - p2.8xlarge 74 | - p2.16xlarge 75 | - p3.2xlarge 76 | - p3.8xlarge 77 | - p3.16xlarge 78 | - p3dn.24xlarge 79 | - r5.large 80 | - r5.xlarge 81 | - r5.2xlarge 82 | - r5.4xlarge 83 | - r5.12xlarge 84 | - r5.24xlarge 85 | - r5d.large 86 | - r5d.xlarge 87 | - r5d.2xlarge 88 | - r5d.4xlarge 89 | - r5d.12xlarge 90 | - r5d.24xlarge 91 | - z1d.large 92 | - z1d.xlarge 93 | - z1d.2xlarge 94 | - z1d.3xlarge 95 | - z1d.6xlarge 96 | - z1d.12xlarge 97 | NodeAutoScalingGroupMinSize: 98 | Description: Minimum size of Node Group ASG. 99 | Type: Number 100 | Default: 1 101 | NodeAutoScalingGroupMaxSize: 102 | Description: >- 103 | Maximum size of Node Group ASG. Set to at least 1 greater than 104 | NodeAutoScalingGroupDesiredCapacity. 105 | Type: Number 106 | Default: 4 107 | NodeAutoScalingGroupDesiredCapacity: 108 | Description: Desired capacity of Node Group ASG. 109 | Type: Number 110 | Default: 3 111 | NodeVolumeSize: 112 | Description: Node volume size 113 | Type: Number 114 | Default: 20 115 | ClusterName: 116 | Description: >- 117 | The cluster name provided when the cluster was created. If it is 118 | incorrect, nodes will not be able to join the cluster. 119 | Type: String 120 | BootstrapArguments: 121 | Description: >- 122 | Arguments to pass to the bootstrap script. See files/bootstrap.sh in 123 | https://github.com/awslabs/amazon-eks-ami 124 | Type: String 125 | Default: '' 126 | NodeGroupName: 127 | Description: Unique identifier for the Node Group. 128 | Type: String 129 | ClusterControlPlaneSecurityGroup: 130 | Description: The security group of the cluster control plane. 131 | Type: 'AWS::EC2::SecurityGroup::Id' 132 | WorkerSecurityGroup: 133 | Description: Additional security group to grant to worker nodes. 134 | Type: 'AWS::EC2::SecurityGroup::Id' 135 | VpcId: 136 | Description: The VPC of the worker instances 137 | Type: 'AWS::EC2::VPC::Id' 138 | VpcCidr: 139 | Description: The CIDR of the VPC for the worker instances 140 | Type: String 141 | Subnets: 142 | Description: The subnets where workers can be created. 
143 |     Type: 'List<AWS::EC2::Subnet::Id>'
144 |   ClusterAPIEndpoint:
145 |     Description: Private API endpoint for EKS cluster
146 |     Type: String
147 |   HttpsProxy:
148 |     Description: HTTPS proxy for access to external resources such as ECR
149 |     Type: String
150 |   ClusterCA:
151 |     Description: Certificate for EKS cluster
152 |     Type: String
153 |   UserToken:
154 |     Description: Temporary Kubernetes user credentials token
155 |     Type: String
156 |   KubectlS3Location:
157 |     Description: Where in S3 the kubectl binary can be found
158 |     Type: String
159 | Metadata:
160 |   'AWS::CloudFormation::Interface':
161 |     ParameterGroups:
162 |       - Label:
163 |           default: EKS Cluster
164 |         Parameters:
165 |           - ClusterName
166 |           - ClusterControlPlaneSecurityGroup
167 |       - Label:
168 |           default: Worker Node Configuration
169 |         Parameters:
170 |           - NodeGroupName
171 |           - NodeAutoScalingGroupMinSize
172 |           - NodeAutoScalingGroupDesiredCapacity
173 |           - NodeAutoScalingGroupMaxSize
174 |           - NodeInstanceType
175 |           - NodeImageId
176 |           - NodeVolumeSize
177 |           - KeyName
178 |           - BootstrapArguments
179 |       - Label:
180 |           default: Worker Network Configuration
181 |         Parameters:
182 |           - VpcId
183 |           - Subnets
184 | 
185 | Conditions:
186 |   UseEC2KeyPair: !Not [!Equals [!Ref KeyName, ""]]
187 | 
188 | ## RESOURCES
189 | Resources:
190 |   NodeInstanceProfile:
191 |     Type: 'AWS::IAM::InstanceProfile'
192 |     Properties:
193 |       Path: /
194 |       Roles:
195 |         - !Ref NodeInstanceRole
196 |   NodeInstanceRole:
197 |     Type: 'AWS::IAM::Role'
198 |     Properties:
199 |       AssumeRolePolicyDocument:
200 |         Version: 2012-10-17
201 |         Statement:
202 |           - Effect: Allow
203 |             Principal:
204 |               Service: ec2.amazonaws.com
205 |             Action: 'sts:AssumeRole'
206 |       Path: /
207 |       ManagedPolicyArns:
208 |         - 'arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy'
209 |         - 'arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy'
210 |         - 'arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly'
211 |         - 'arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess'
212 |         - 'arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM'
213 |   NodeSecurityGroup:
214 |     Type: 'AWS::EC2::SecurityGroup'
215 |     Properties:
216 |       GroupDescription: Security group for all nodes in the cluster
217 |       VpcId: !Ref VpcId
218 |       Tags:
219 |         - Key: !Sub 'kubernetes.io/cluster/${ClusterName}'
220 |           Value: owned
221 |   NodeSecurityGroupIngress:
222 |     Type: 'AWS::EC2::SecurityGroupIngress'
223 |     Properties:
224 |       Description: Allow nodes to communicate with each other
225 |       GroupId: !Ref NodeSecurityGroup
226 |       SourceSecurityGroupId: !Ref NodeSecurityGroup
227 |       IpProtocol: "-1"
228 |       FromPort: 0
229 |       ToPort: 65535
230 |   NodeSecurityGroupFromControlPlaneIngress:
231 |     Type: 'AWS::EC2::SecurityGroupIngress'
232 |     Properties:
233 |       Description: >-
234 |         Allow worker Kubelets and pods to receive communication from the cluster
235 |         control plane
236 |       GroupId: !Ref NodeSecurityGroup
237 |       SourceSecurityGroupId: !Ref ClusterControlPlaneSecurityGroup
238 |       IpProtocol: tcp
239 |       FromPort: 1025
240 |       ToPort: 65535
241 |   ControlPlaneEgressToNodeSecurityGroup:
242 |     Type: 'AWS::EC2::SecurityGroupEgress'
243 |     Properties:
244 |       Description: >-
245 |         Allow the cluster control plane to communicate with worker Kubelet and
246 |         pods
247 |       GroupId: !Ref ClusterControlPlaneSecurityGroup
248 |       DestinationSecurityGroupId: !Ref NodeSecurityGroup
249 |       IpProtocol: tcp
250 |       FromPort: 1025
251 |       ToPort: 65535
252 |   NodeSecurityGroupFromControlPlaneOn443Ingress:
253 |     Type: 'AWS::EC2::SecurityGroupIngress'
254 |     Properties:
255 |       Description: >-
256 |         Allow pods running extension API servers on port 443 to receive
257 |         communication from cluster control plane
258 |       GroupId: !Ref NodeSecurityGroup
259 |       SourceSecurityGroupId: !Ref ClusterControlPlaneSecurityGroup
260 |       IpProtocol: tcp
261 |       FromPort: 443
262 |       ToPort: 443
263 |   ControlPlaneEgressToNodeSecurityGroupOn443:
264 |     Type: 'AWS::EC2::SecurityGroupEgress'
265 |     Properties:
266 |       Description: >-
267 |         Allow the cluster control plane to communicate with pods running
268 |         extension API servers on port 443
269 |       GroupId: !Ref ClusterControlPlaneSecurityGroup
270 |       DestinationSecurityGroupId: !Ref NodeSecurityGroup
271 |       IpProtocol: tcp
272 |       FromPort: 443
273 |       ToPort: 443
274 |   ClusterControlPlaneSecurityGroupIngress:
275 |     Type: 'AWS::EC2::SecurityGroupIngress'
276 |     Properties:
277 |       Description: Allow pods to communicate with the cluster API Server
278 |       GroupId: !Ref ClusterControlPlaneSecurityGroup
279 |       SourceSecurityGroupId: !Ref NodeSecurityGroup
280 |       IpProtocol: tcp
281 |       ToPort: 443
282 |       FromPort: 443
283 |   NodeGroup:
284 |     Type: 'AWS::AutoScaling::AutoScalingGroup'
285 |     Properties:
286 |       DesiredCapacity: !Ref NodeAutoScalingGroupDesiredCapacity
287 |       LaunchConfigurationName: !Ref NodeLaunchConfig
288 |       MinSize: !Ref NodeAutoScalingGroupMinSize
289 |       MaxSize: !Ref NodeAutoScalingGroupMaxSize
290 |       VPCZoneIdentifier: !Ref Subnets
291 |       Tags:
292 |         - Key: Name
293 |           Value: !Sub '${ClusterName}-${NodeGroupName}-Node'
294 |           PropagateAtLaunch: true
295 |         - Key: !Sub 'kubernetes.io/cluster/${ClusterName}'
296 |           Value: owned
297 |           PropagateAtLaunch: true
298 |     UpdatePolicy:
299 |       AutoScalingRollingUpdate:
300 |         MaxBatchSize: 1
301 |         MinInstancesInService: !Ref NodeAutoScalingGroupDesiredCapacity
302 |         PauseTime: PT5M
303 |   NodeLaunchConfig:
304 |     Type: 'AWS::AutoScaling::LaunchConfiguration'
305 |     Properties:
306 |       AssociatePublicIpAddress: false
307 |       IamInstanceProfile: !Ref NodeInstanceProfile
308 |       ImageId: !Ref NodeImageId
309 |       InstanceType: !Ref NodeInstanceType
310 |       KeyName: !If [UseEC2KeyPair, !Ref KeyName, !Ref "AWS::NoValue"]
311 |       SecurityGroups:
312 |         - !GetAtt NodeSecurityGroup.GroupId
313 |         - !Ref WorkerSecurityGroup
314 |       BlockDeviceMappings:
315 |         - DeviceName: /dev/xvda
316 |           Ebs:
317 |             VolumeSize: !Ref NodeVolumeSize
318 |             VolumeType: gp2
319 |             DeleteOnTermination: true
320 |       UserData:
321 |         'Fn::Base64': !Sub |
322 |           #!/bin/bash
323 | 
324 |           set -o xtrace
325 | 
326 |           # Install the SSM Agent so we can remotely access the worker node if necessary
327 |           yum install -y amazon-ssm-agent
328 |           systemctl enable amazon-ssm-agent
329 |           systemctl start amazon-ssm-agent
330 |           systemctl status amazon-ssm-agent
331 | 
332 |           CLUSTER_API_HOSTNAME=`basename ${ClusterAPIEndpoint}`
333 | 
334 |           aws s3 cp ${KubectlS3Location} /tmp/kubectl --region ${AWS::Region}
335 |           chmod 755 /tmp/kubectl
336 | 
337 |           /tmp/kubectl config set-cluster cfc --server=${ClusterAPIEndpoint}
338 |           /tmp/kubectl config set clusters.cfc.certificate-authority-data ${ClusterCA}
339 |           /tmp/kubectl config set-credentials user --token=${UserToken}
340 |           /tmp/kubectl config set-context cfc --cluster=cfc --user=user
341 |           /tmp/kubectl config use-context cfc
342 | 
343 |           cat <<EOF >/tmp/aws-auth-cm.yaml
344 |           apiVersion: v1
345 |           kind: ConfigMap
346 |           metadata:
347 |             name: aws-auth
348 |             namespace: kube-system
349 |           data:
350 |             mapRoles: |
351 |               - rolearn: '${NodeInstanceRole.Arn}'
352 |                 username: system:node:{{EC2PrivateDNSName}}
353 |                 groups:
354 |                   - system:bootstrappers
355 |                   - system:nodes
356 |           EOF
357 | 
358 |           /tmp/kubectl get cm -n kube-system aws-auth
359 |           if [ $? -ne 0 ];
360 |           then
361 |             /tmp/kubectl apply -f /tmp/aws-auth-cm.yaml
362 |           fi
363 | 
364 |           if [ "${HttpsProxy}" != "" ];
365 |           then
366 |           cat <<EOF >/tmp/http-proxy.conf
367 |           [Service]
368 |           Environment="https_proxy=${HttpsProxy}"
369 |           Environment="HTTPS_PROXY=${HttpsProxy}"
370 |           Environment="http_proxy=${HttpsProxy}"
371 |           Environment="HTTP_PROXY=${HttpsProxy}"
372 |           Environment="NO_PROXY=169.254.169.254,${VpcCidr},$CLUSTER_API_HOSTNAME,s3.amazonaws.com,s3.${AWS::Region}.amazonaws.com,ec2.${AWS::Region}.amazonaws.com,ecr.${AWS::Region}.amazonaws.com,dkr.ecr.${AWS::Region}.amazonaws.com"
373 |           EOF
374 | 
375 |           mkdir -p /usr/lib/systemd/system/docker.service.d
376 |           cp /tmp/http-proxy.conf /etc/systemd/system/kubelet.service.d/
377 |           cp /tmp/http-proxy.conf /usr/lib/systemd/system/docker.service.d/
378 |           fi
379 | 
380 |           /etc/eks/bootstrap.sh ${ClusterName} --b64-cluster-ca ${ClusterCA} --apiserver-endpoint ${ClusterAPIEndpoint} --kubelet-extra-args "--node-labels=workergroup=${NodeGroupName}" ${BootstrapArguments}
381 | 
382 |           systemctl daemon-reload
383 |           systemctl restart docker
384 | 
385 |           yum install -y iptables-services
386 |           iptables --insert FORWARD 1 --in-interface eni+ --destination 169.254.169.254/32 --jump DROP
387 |           iptables-save | tee /etc/sysconfig/iptables
388 |           systemctl enable --now iptables
389 | 
390 |           /opt/aws/bin/cfn-signal --exit-code $? \
391 |                    --stack ${AWS::StackName} \
392 |                    --resource NodeGroup \
393 |                    --region ${AWS::Region}
394 | Outputs:
395 |   NodeInstanceRole:
396 |     Description: The node instance role
397 |     Value: !GetAtt NodeInstanceRole.Arn
398 |   NodeSecurityGroup:
399 |     Description: The security group for the node group
400 |     Value: !Ref NodeSecurityGroup
401 | 
--------------------------------------------------------------------------------
/cloudformation/environment.yaml:
--------------------------------------------------------------------------------
1 | AWSTemplateFormatVersion: 2010-09-09
2 | Description: EKS environment template to create a completely private environment.
3 | 4 | Parameters: 5 | StackPrefix: 6 | Description: The prefix to be used for named resources 7 | Type: String 8 | 9 | HttpProxyServiceName: 10 | Description: The name of an endpoint service serving an HTTP proxy 11 | Type: String 12 | 13 | Conditions: 14 | CreateProxyEndpoint: !Not [ !Equals [ !Ref HttpProxyServiceName, "" ]] 15 | 16 | Resources: 17 | Network: 18 | Type: AWS::CloudFormation::Stack 19 | Properties: 20 | Parameters: 21 | StackPrefix: !Ref StackPrefix 22 | HttpProxyServiceName: !Ref HttpProxyServiceName 23 | TemplateURL: network.yaml 24 | 25 | Principals: 26 | Type: AWS::CloudFormation::Stack 27 | Properties: 28 | Parameters: 29 | StackPrefix: !Ref StackPrefix 30 | TemplateURL: permissions.yaml 31 | 32 | Outputs: 33 | VPCId: 34 | Description: Private EKS VPC ID 35 | Value: !GetAtt Network.Outputs.VPCId 36 | 37 | VPCCIDR: 38 | Description: Private EKS VPC CIDR 39 | Value: !GetAtt Network.Outputs.VPCCIDR 40 | 41 | Subnets: 42 | Description: Private EKS Subnets 43 | Value: !Join [ ",", [ !GetAtt Network.Outputs.PrivateSubnet1, !GetAtt Network.Outputs.PrivateSubnet2, !GetAtt Network.Outputs.PrivateSubnet3 ] ] 44 | 45 | MasterRoleArn: 46 | Description: ARN of the IAM role for the EKS Master 47 | Value: !GetAtt Principals.Outputs.EKSMasterRoleArn 48 | 49 | MasterKeyArn: 50 | Description: ARN of the KMS key for encrypting EKS secrets 51 | Value: !GetAtt Principals.Outputs.EKSMasterKeyArn 52 | 53 | MasterSecurityGroup: 54 | Description: Security group ID for the master EKS node 55 | Value: !GetAtt Network.Outputs.MasterSecurityGroup 56 | 57 | EndpointClientSecurityGroup: 58 | Description: Security group ID for the client of the VPC endpoints 59 | Value: !GetAtt Network.Outputs.EndpointClientSecurityGroup 60 | 61 | HttpProxyUrl: 62 | Condition: CreateProxyEndpoint 63 | Description: HTTP/S proxy url for HTTP access beyond the local VPC 64 | Value: !GetAtt Network.Outputs.HttpProxyUrl 65 | -------------------------------------------------------------------------------- /cloudformation/fargate.yaml: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: 2010-09-09 2 | Description: EKS on Fargate environment template to create Fargate Permissions 3 | 4 | Parameters: 5 | StackPrefix: 6 | Description: The prefix to be used for named resources 7 | Type: String 8 | 9 | Resources: 10 | EKSFargatePodExecutionRole: 11 | Type: AWS::IAM::Role 12 | Properties: 13 | RoleName: !Sub ${StackPrefix}-eks-fargate-pod-execution-role 14 | Path: "/" 15 | AssumeRolePolicyDocument: 16 | Version: "2012-10-17" 17 | Statement: 18 | - 19 | Effect: "Allow" 20 | Principal: 21 | Service: 22 | - "eks-fargate-pods.amazonaws.com" 23 | Action: 24 | - "sts:AssumeRole" 25 | ManagedPolicyArns: 26 | - arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy 27 | 28 | Outputs: 29 | EKSFargatePodExecutionRoleArn: 30 | Description: ARN of the Fargate Pod Execution role for the Fargate Profile 31 | Value: !GetAtt EKSFargatePodExecutionRole.Arn 32 | Export: 33 | Name: !Sub "${StackPrefix}-EKSFargatePodExecutionRoleArn" 34 | -------------------------------------------------------------------------------- /cloudformation/network.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | AWSTemplateFormatVersion: "2010-09-09" 3 | 4 | Description: "Creates a private VPC with no IGW" 5 | Parameters: 6 | StackPrefix: 7 | Description: Prefix to prepend to named resources 8 | Type: String 9 | 10 | HttpProxyServiceName: 11 | 
Description: The name for the HTTP/S endpoint service 12 | Type: String 13 | 14 | Mappings: 15 | SubnetConfig: 16 | VPC: 17 | CIDR: "10.0.0.0/16" 18 | Private1: 19 | CIDR: "10.0.2.0/24" 20 | Private2: 21 | CIDR: "10.0.4.0/24" 22 | Private3: 23 | CIDR: "10.0.8.0/24" 24 | 25 | AZRegions: 26 | ap-northeast-1: 27 | AZs: ["a", "b", "c"] 28 | ap-northeast-2: 29 | AZs: ["a", "b", "c"] 30 | ap-south-1: 31 | AZs: ["a", "b", "c"] 32 | ap-southeast-1: 33 | AZs: ["a", "b", "c"] 34 | ap-southeast-2: 35 | AZs: ["a", "b", "c"] 36 | ca-central-1: 37 | AZs: ["a", "b", "c"] 38 | eu-central-1: 39 | AZs: ["a", "b", "c"] 40 | eu-west-1: 41 | AZs: ["a", "b", "c"] 42 | eu-west-2: 43 | AZs: ["a", "b", "c"] 44 | sa-east-1: 45 | AZs: ["a", "b", "c"] 46 | us-east-1: 47 | AZs: ["a", "b", "c"] 48 | us-east-2: 49 | AZs: ["a", "b", "c"] 50 | us-west-1: 51 | AZs: ["a", "b", "c"] 52 | us-west-2: 53 | AZs: ["a", "b", "c"] 54 | 55 | Conditions: 56 | CreateProxyEndpoint: !Not [!Equals [!Ref HttpProxyServiceName, ""]] 57 | 58 | Resources: 59 | PrivateEKSVPC: 60 | Type: "AWS::EC2::VPC" 61 | Properties: 62 | EnableDnsSupport: True 63 | EnableDnsHostnames: True 64 | CidrBlock: 65 | Fn::FindInMap: 66 | - "SubnetConfig" 67 | - "VPC" 68 | - "CIDR" 69 | Tags: 70 | - Key: "Name" 71 | Value: !Sub ${StackPrefix}-private-eks-vpc 72 | 73 | PrivateSubnet1: 74 | Type: "AWS::EC2::Subnet" 75 | Properties: 76 | VpcId: !Ref PrivateEKSVPC 77 | AvailabilityZone: 78 | Fn::Sub: 79 | - "${AWS::Region}${AZ}" 80 | - AZ: !Select [0, !FindInMap ["AZRegions", !Ref "AWS::Region", "AZs"]] 81 | CidrBlock: 82 | Fn::FindInMap: 83 | - "SubnetConfig" 84 | - "Private1" 85 | - "CIDR" 86 | Tags: 87 | - Key: "Name" 88 | Value: !Join 89 | - "" 90 | - - !Sub "${StackPrefix}" 91 | - "-private-" 92 | - !Select [0, !FindInMap ["AZRegions", !Ref "AWS::Region", "AZs"]] 93 | 94 | PrivateSubnet2: 95 | Type: "AWS::EC2::Subnet" 96 | Properties: 97 | VpcId: !Ref PrivateEKSVPC 98 | AvailabilityZone: 99 | Fn::Sub: 100 | - "${AWS::Region}${AZ}" 101 | - AZ: !Select [1, !FindInMap ["AZRegions", !Ref "AWS::Region", "AZs"]] 102 | CidrBlock: 103 | Fn::FindInMap: 104 | - "SubnetConfig" 105 | - "Private2" 106 | - "CIDR" 107 | Tags: 108 | - Key: "Name" 109 | Value: !Join 110 | - "" 111 | - - !Sub "${StackPrefix}" 112 | - "-private-" 113 | - !Select [1, !FindInMap ["AZRegions", !Ref "AWS::Region", "AZs"]] 114 | 115 | PrivateSubnet3: 116 | Type: "AWS::EC2::Subnet" 117 | Properties: 118 | VpcId: !Ref PrivateEKSVPC 119 | AvailabilityZone: 120 | Fn::Sub: 121 | - "${AWS::Region}${AZ}" 122 | - AZ: !Select [2, !FindInMap ["AZRegions", !Ref "AWS::Region", "AZs"]] 123 | CidrBlock: 124 | Fn::FindInMap: 125 | - "SubnetConfig" 126 | - "Private3" 127 | - "CIDR" 128 | Tags: 129 | - Key: "Name" 130 | Value: !Join 131 | - "" 132 | - - !Sub "${StackPrefix}" 133 | - "-private-" 134 | - !Select [2, !FindInMap ["AZRegions", !Ref "AWS::Region", "AZs"]] 135 | 136 | PrivateRouteTable1: 137 | Type: "AWS::EC2::RouteTable" 138 | Properties: 139 | VpcId: !Ref PrivateEKSVPC 140 | Tags: 141 | - Key: "Name" 142 | Value: !Sub ${StackPrefix}-private-route-table-1 143 | 144 | PrivateRouteTable2: 145 | Type: "AWS::EC2::RouteTable" 146 | Properties: 147 | VpcId: !Ref PrivateEKSVPC 148 | Tags: 149 | - Key: "Name" 150 | Value: !Sub ${StackPrefix}-private-route-table-2 151 | 152 | PrivateRouteTable3: 153 | Type: "AWS::EC2::RouteTable" 154 | Properties: 155 | VpcId: !Ref PrivateEKSVPC 156 | Tags: 157 | - Key: "Name" 158 | Value: !Sub ${StackPrefix}-private-route-table-3 159 | 160 | PrivateSubnetRouteTableAssociation1: 161 | 
Type: "AWS::EC2::SubnetRouteTableAssociation" 162 | Properties: 163 | SubnetId: 164 | Ref: "PrivateSubnet1" 165 | RouteTableId: 166 | Ref: "PrivateRouteTable1" 167 | 168 | PrivateSubnetRouteTableAssociation2: 169 | Type: "AWS::EC2::SubnetRouteTableAssociation" 170 | Properties: 171 | SubnetId: 172 | Ref: "PrivateSubnet2" 173 | RouteTableId: 174 | Ref: "PrivateRouteTable2" 175 | 176 | PrivateSubnetRouteTableAssociation3: 177 | Type: "AWS::EC2::SubnetRouteTableAssociation" 178 | Properties: 179 | SubnetId: 180 | Ref: "PrivateSubnet3" 181 | RouteTableId: 182 | Ref: "PrivateRouteTable3" 183 | 184 | EndpointClientSecurityGroup: 185 | Type: AWS::EC2::SecurityGroup 186 | Properties: 187 | GroupDescription: Security group to designate resources access to the VPC endpoints 188 | VpcId: !Ref PrivateEKSVPC 189 | 190 | EndpointSecurityGroup: 191 | Type: AWS::EC2::SecurityGroup 192 | Properties: 193 | GroupDescription: Security group to govern who can access the endpoints 194 | VpcId: !Ref PrivateEKSVPC 195 | SecurityGroupIngress: 196 | - IpProtocol: tcp 197 | FromPort: 443 198 | ToPort: 443 199 | SourceSecurityGroupId: !GetAtt EndpointClientSecurityGroup.GroupId 200 | - IpProtocol: tcp 201 | FromPort: 3128 202 | ToPort: 3128 203 | SourceSecurityGroupId: !GetAtt EndpointClientSecurityGroup.GroupId 204 | 205 | S3APIEndpoint: 206 | Type: "AWS::EC2::VPCEndpoint" 207 | Properties: 208 | ServiceName: !Sub "com.amazonaws.${AWS::Region}.s3" 209 | VpcEndpointType: Gateway 210 | RouteTableIds: 211 | - !Ref PrivateRouteTable1 212 | - !Ref PrivateRouteTable2 213 | - !Ref PrivateRouteTable3 214 | VpcId: !Ref PrivateEKSVPC 215 | 216 | HttpProxyEndpoint: 217 | Type: "AWS::EC2::VPCEndpoint" 218 | Condition: CreateProxyEndpoint 219 | Properties: 220 | ServiceName: !Ref HttpProxyServiceName 221 | VpcEndpointType: Interface 222 | PrivateDnsEnabled: false 223 | SecurityGroupIds: 224 | - !GetAtt EndpointSecurityGroup.GroupId 225 | SubnetIds: 226 | - !Ref PrivateSubnet1 227 | - !Ref PrivateSubnet2 228 | - !Ref PrivateSubnet3 229 | VpcId: !Ref PrivateEKSVPC 230 | 231 | ECRAPIEndpoint: 232 | Type: "AWS::EC2::VPCEndpoint" 233 | Properties: 234 | ServiceName: !Sub "com.amazonaws.${AWS::Region}.ecr.api" 235 | VpcEndpointType: Interface 236 | PrivateDnsEnabled: true 237 | SecurityGroupIds: 238 | - !GetAtt EndpointSecurityGroup.GroupId 239 | SubnetIds: 240 | - !Ref PrivateSubnet1 241 | - !Ref PrivateSubnet2 242 | - !Ref PrivateSubnet3 243 | VpcId: !Ref PrivateEKSVPC 244 | 245 | ECRDockerEndpoint: 246 | Type: "AWS::EC2::VPCEndpoint" 247 | Properties: 248 | ServiceName: !Sub "com.amazonaws.${AWS::Region}.ecr.dkr" 249 | VpcEndpointType: Interface 250 | PrivateDnsEnabled: true 251 | SecurityGroupIds: 252 | - !GetAtt EndpointSecurityGroup.GroupId 253 | SubnetIds: 254 | - !Ref PrivateSubnet1 255 | - !Ref PrivateSubnet2 256 | - !Ref PrivateSubnet3 257 | VpcId: !Ref PrivateEKSVPC 258 | 259 | EC2Endpoint: 260 | Type: "AWS::EC2::VPCEndpoint" 261 | Properties: 262 | ServiceName: !Sub "com.amazonaws.${AWS::Region}.ec2" 263 | VpcEndpointType: Interface 264 | PrivateDnsEnabled: true 265 | SecurityGroupIds: 266 | - !GetAtt EndpointSecurityGroup.GroupId 267 | SubnetIds: 268 | - !Ref PrivateSubnet1 269 | - !Ref PrivateSubnet2 270 | - !Ref PrivateSubnet3 271 | VpcId: !Ref PrivateEKSVPC 272 | 273 | CWLogsEndpoint: 274 | Type: "AWS::EC2::VPCEndpoint" 275 | Properties: 276 | ServiceName: !Sub "com.amazonaws.${AWS::Region}.logs" 277 | VpcEndpointType: Interface 278 | PrivateDnsEnabled: true 279 | SecurityGroupIds: 280 | - !GetAtt 
EndpointSecurityGroup.GroupId 281 | SubnetIds: 282 | - !Ref PrivateSubnet1 283 | - !Ref PrivateSubnet2 284 | - !Ref PrivateSubnet3 285 | VpcId: !Ref PrivateEKSVPC 286 | 287 | EC2AutoScalingEndpoint: 288 | Type: "AWS::EC2::VPCEndpoint" 289 | Properties: 290 | ServiceName: !Sub "com.amazonaws.${AWS::Region}.autoscaling" 291 | VpcEndpointType: Interface 292 | PrivateDnsEnabled: true 293 | SecurityGroupIds: 294 | - !GetAtt EndpointSecurityGroup.GroupId 295 | SubnetIds: 296 | - !Ref PrivateSubnet1 297 | - !Ref PrivateSubnet2 298 | - !Ref PrivateSubnet3 299 | VpcId: !Ref PrivateEKSVPC 300 | 301 | STSEndpoint: 302 | Type: "AWS::EC2::VPCEndpoint" 303 | Properties: 304 | ServiceName: !Sub "com.amazonaws.${AWS::Region}.sts" 305 | VpcEndpointType: Interface 306 | PrivateDnsEnabled: true 307 | SecurityGroupIds: 308 | - !GetAtt EndpointSecurityGroup.GroupId 309 | SubnetIds: 310 | - !Ref PrivateSubnet1 311 | - !Ref PrivateSubnet2 312 | - !Ref PrivateSubnet3 313 | VpcId: !Ref PrivateEKSVPC 314 | 315 | SSMEndpoint: 316 | Type: "AWS::EC2::VPCEndpoint" 317 | Properties: 318 | ServiceName: !Sub "com.amazonaws.${AWS::Region}.ssm" 319 | VpcEndpointType: Interface 320 | PrivateDnsEnabled: true 321 | SecurityGroupIds: 322 | - !GetAtt EndpointSecurityGroup.GroupId 323 | SubnetIds: 324 | - !Ref PrivateSubnet1 325 | - !Ref PrivateSubnet2 326 | - !Ref PrivateSubnet3 327 | VpcId: !Ref PrivateEKSVPC 328 | 329 | SSMMessagesEndpoint: 330 | Type: "AWS::EC2::VPCEndpoint" 331 | Properties: 332 | ServiceName: !Sub "com.amazonaws.${AWS::Region}.ssmmessages" 333 | VpcEndpointType: Interface 334 | PrivateDnsEnabled: true 335 | SecurityGroupIds: 336 | - !GetAtt EndpointSecurityGroup.GroupId 337 | SubnetIds: 338 | - !Ref PrivateSubnet1 339 | - !Ref PrivateSubnet2 340 | - !Ref PrivateSubnet3 341 | VpcId: !Ref PrivateEKSVPC 342 | 343 | Outputs: 344 | VPCId: 345 | Description: "VPCId of VPC" 346 | Value: !Ref PrivateEKSVPC 347 | Export: 348 | Name: !Sub "${StackPrefix}-PrivateEKSVPC" 349 | 350 | VPCCIDR: 351 | Description: "VPC CIDR block" 352 | Value: !GetAtt PrivateEKSVPC.CidrBlock 353 | Export: 354 | Name: !Sub "${StackPrefix}-PrivateEKSVPCCIDR" 355 | 356 | PrivateSubnet1: 357 | Description: "SubnetId of private subnet 1" 358 | Value: !Ref PrivateSubnet1 359 | Export: 360 | Name: !Sub "${StackPrefix}-PrivateSubnet1" 361 | 362 | PrivateSubnet2: 363 | Description: "SubnetId of private subnet 2" 364 | Value: !Ref PrivateSubnet2 365 | Export: 366 | Name: !Sub "${StackPrefix}-PrivateSubnet2" 367 | 368 | PrivateSubnet3: 369 | Description: "SubnetId of private subnet 3" 370 | Value: !Ref PrivateSubnet3 371 | Export: 372 | Name: !Sub "${StackPrefix}-PrivateSubnet3" 373 | 374 | DefaultSecurityGroup: 375 | Description: "DefaultSecurityGroup Id" 376 | Value: !GetAtt PrivateEKSVPC.DefaultSecurityGroup 377 | Export: 378 | Name: !Sub "${StackPrefix}-DefaultSecurityGroup" 379 | 380 | MasterSecurityGroup: 381 | Description: "Security Group ID for the EKS Master node" 382 | Value: !GetAtt PrivateEKSVPC.DefaultSecurityGroup 383 | Export: 384 | Name: !Sub "${StackPrefix}-MasterSecurityGroup" 385 | 386 | EndpointClientSecurityGroup: 387 | Description: Security group to grant access to VPC endpoints 388 | Value: !GetAtt EndpointClientSecurityGroup.GroupId 389 | Export: 390 | Name: !Sub "${StackPrefix}-EndpointClientSecurityGroup" 391 | 392 | HttpProxyUrl: 393 | Description: The URL for the HTTP/S proxy 394 | Condition: CreateProxyEndpoint 395 | Value: !Sub 396 | - "http://${ProxyEndpoint}:3128" 397 | - ProxyEndpoint: 398 | !Select [ 399 | 1, 400 | 
!Split [":", !Select [0, !GetAtt HttpProxyEndpoint.DnsEntries]], 401 | ] 402 | -------------------------------------------------------------------------------- /cloudformation/permissions.yaml: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: "2010-09-09" 2 | Metadata: 3 | License: Apache-2.0 4 | Description: 5 | "AWS CloudFormation Sample Template IAM_Users_Groups_and_Policies: Sample 6 | template showing how to create IAM users, groups and policies. It creates a single 7 | user that is a member of a users group and an admin group. The groups each have 8 | different IAM policies associated with them. Note: This example also creates an 9 | AWSAccessKeyId/AWSSecretKey pair associated with the new user. The example is somewhat 10 | contrived since it creates all of the users and groups, typically you would be creating 11 | policies, users and/or groups that contain references to existing users or groups 12 | in your environment. Note that you will need to specify the CAPABILITY_IAM flag 13 | when you create the stack to allow this template to execute. You can do this through 14 | the AWS management console by clicking on the check box acknowledging that you understand 15 | this template creates IAM resources or by specifying the CAPABILITY_IAM flag to 16 | the cfn-create-stack command line tool or CreateStack API call." 17 | 18 | Parameters: 19 | StackPrefix: 20 | Description: Prefix for prepending to named resources 21 | Type: String 22 | 23 | Resources: 24 | EKSMasterRole: 25 | Type: AWS::IAM::Role 26 | Properties: 27 | RoleName: !Sub ${StackPrefix}-eks-master-role 28 | Path: "/" 29 | AssumeRolePolicyDocument: 30 | Version: "2012-10-17" 31 | Statement: 32 | - Effect: "Allow" 33 | Principal: 34 | Service: 35 | - "eks.amazonaws.com" 36 | Action: 37 | - "sts:AssumeRole" 38 | ManagedPolicyArns: 39 | - arn:aws:iam::aws:policy/AmazonEKSServicePolicy 40 | - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy 41 | Policies: 42 | - PolicyName: "eks-master-node-permissions" 43 | PolicyDocument: 44 | Version: "2012-10-17" 45 | Statement: 46 | - Effect: "Allow" 47 | Action: "cloudwatch:PutMetricData" 48 | Resource: "*" 49 | - Effect: "Allow" 50 | Action: 51 | - kms:DescribeKey 52 | - kms:Encrypt 53 | - kms:Decrypt 54 | - kms:ReEncrypt* 55 | - kms:GenerateDataKey 56 | - kms:GenerateDataKeyWithoutPlaintext 57 | Resource: "*" 58 | 59 | EKSMasterKey: 60 | Type: AWS::KMS::Key 61 | Properties: 62 | Description: Key for encrypting EKS secrets 63 | KeyPolicy: 64 | Version: "2012-10-17" 65 | Statement: 66 | - Sid: Enable IAM User Permissions 67 | Effect: Allow 68 | Principal: 69 | AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root" 70 | Action: kms:* 71 | Resource: "*" 72 | - Sid: Allow use of the key 73 | Effect: Allow 74 | Principal: 75 | AWS: !GetAtt EKSMasterRole.Arn 76 | Action: 77 | - kms:DescribeKey 78 | - kms:Encrypt 79 | - kms:Decrypt 80 | - kms:ReEncrypt* 81 | - kms:GenerateDataKey 82 | - kms:GenerateDataKeyWithoutPlaintext 83 | Resource: "*" 84 | 85 | EKSWorkerRole: 86 | Type: AWS::IAM::Role 87 | Properties: 88 | RoleName: !Sub ${StackPrefix}-eks-worker-role 89 | Path: "/" 90 | AssumeRolePolicyDocument: 91 | Version: "2012-10-17" 92 | Statement: 93 | - Effect: "Allow" 94 | Principal: 95 | Service: 96 | - "ec2.amazonaws.com" 97 | Action: 98 | - "sts:AssumeRole" 99 | ManagedPolicyArns: 100 | - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy 101 | - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy 102 | - 
96 |       Policies:
97 |         - PolicyName: "eks-worker-node-permissions"
98 |           PolicyDocument:
99 |             Version: "2012-10-17"
100 |             Statement:
101 |               - Effect: "Allow"
102 |                 Action:
103 |                   - "ec2:DescribeTags"
104 |                   - "cloudformation:SignalResource"
105 |                 Resource: "*"
106 | 
107 |   EKSWorkerNodePolicy:
108 |     Type: AWS::IAM::ManagedPolicy
109 |     Properties:
110 |       ManagedPolicyName: !Sub ${StackPrefix}-EKSWorkerNode
111 |       PolicyDocument:
112 |         Version: "2012-10-17"
113 |         Statement:
114 |           - Effect: Allow
115 |             Action:
116 |               - "sagemaker:List*"
117 |               - "sagemaker:Describe*"
118 |               - "sagemaker:Search"
119 |               - "sagemaker:GetSearchSuggestions"
120 |               - "sagemaker:RenderUiTemplate"
121 |             Resource: "*"
122 | 
123 | Outputs:
124 |   EKSMasterRoleArn:
125 |     Description: ARN of the IAM role for the EKS cluster (master)
126 |     Value: !GetAtt EKSMasterRole.Arn
127 |     Export:
128 |       Name: !Sub "${StackPrefix}-EnvironmentAdministratorRoleArn"
129 |   EKSMasterKeyArn:
130 |     Description: ARN of KMS key for encrypting EKS secrets
131 |     Value: !GetAtt EKSMasterKey.Arn
132 |     Export:
133 |       Name: !Sub "${StackPrefix}-EKSMasterKeyArn"
134 |   EKSWorkerRoleArn:
135 |     Description: ARN of the IAM role for the EKS worker nodes
136 |     Value: !GetAtt EKSWorkerRole.Arn
137 |     Export:
138 |       Name: !Sub "${StackPrefix}-EKSWorkerRoleArn"
--------------------------------------------------------------------------------
/cloudformation/proxy.yaml:
--------------------------------------------------------------------------------
1 | AWSTemplateFormatVersion: '2010-09-09'
2 | 
3 | Description: Provision the required resources for the blog post example 'Add domain filtering to
4 |   your NAT instance with Squid'. Wait for the creation to complete before testing.
5 | 
6 | Parameters:
7 | 
8 |   AmiId:
9 |     Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
10 |     Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'
11 |     Description: AMI ID pointer in AWS Systems Manager Parameter Store. Default value points to the
12 |       latest Amazon Linux 2 AMI ID.
13 | 
14 |   InstanceType:
15 |     Type: String
16 |     Default: t3.small
17 |     Description: Instance type to use to launch the NAT instances.
18 |     AllowedValues:
19 |       - t3.nano
20 |       - t3.micro
21 |       - t3.small
22 |       - t3.medium
23 |       - t3.large
24 |       - m4.large
25 |       - m4.xlarge
26 |       - m4.2xlarge
27 |       - m5.large
28 |       - m5.xlarge
29 |       - m5.2xlarge
30 |       - c4.large
31 |       - c4.xlarge
32 |       - c5.large
33 |       - c5.xlarge
34 | 
35 |   WhitelistDomains:
36 |     Type: CommaDelimitedList
37 |     Default: .amazonaws.com
38 |     Description: List of whitelisted domains separated by a comma. Enter ".example.com" to
39 |       whitelist all the sub-domains of example.com.
40 | 
41 | 
42 | 43 | Metadata: 44 | 'AWS::CloudFormation::Interface': 45 | 46 | ParameterGroups: 47 | - Label: 48 | default: Instance Configuration 49 | Parameters: 50 | - AmiId 51 | - InstanceType 52 | - Label: 53 | default: Proxy Configuration 54 | Parameters: 55 | - WhitelistDomains 56 | 57 | ParameterLabels: 58 | AmiId: 59 | default: AMI ID 60 | InstanceType: 61 | default: NAT Instance Type 62 | WhitelistDomains: 63 | default: Allowed Domains 64 | 65 | Resources: 66 | 67 | VPC: 68 | Type: AWS::EC2::VPC 69 | Properties: 70 | CidrBlock: 10.0.0.0/16 71 | Tags: 72 | - Key: Name 73 | Value: !Sub 'VPC - ${AWS::StackName}' 74 | 75 | InternetGateway: 76 | Type: AWS::EC2::InternetGateway 77 | Properties: 78 | Tags: 79 | - Key: Name 80 | Value: !Sub 'IGW - ${AWS::StackName}' 81 | 82 | AttachGateway: 83 | Type: AWS::EC2::VPCGatewayAttachment 84 | Properties: 85 | VpcId: !Ref VPC 86 | InternetGatewayId: !Ref InternetGateway 87 | 88 | PrivateSubnet1: 89 | Type: AWS::EC2::Subnet 90 | Properties: 91 | VpcId: !Ref VPC 92 | CidrBlock: 10.0.0.0/24 93 | MapPublicIpOnLaunch: False 94 | AvailabilityZone: !Select 95 | - 0 96 | - !GetAZs 97 | Ref: 'AWS::Region' 98 | Tags: 99 | - Key: Name 100 | Value: !Sub 'Private Subnet 1 - ${AWS::StackName}' 101 | 102 | PrivateSubnet2: 103 | Type: AWS::EC2::Subnet 104 | Properties: 105 | VpcId: !Ref VPC 106 | CidrBlock: 10.0.1.0/24 107 | MapPublicIpOnLaunch: False 108 | AvailabilityZone: !Select 109 | - 1 110 | - !GetAZs 111 | Ref: 'AWS::Region' 112 | Tags: 113 | - Key: Name 114 | Value: !Sub 'Private Subnet 2 - ${AWS::StackName}' 115 | 116 | PrivateSubnet3: 117 | Type: AWS::EC2::Subnet 118 | Properties: 119 | VpcId: !Ref VPC 120 | CidrBlock: 10.0.2.0/24 121 | MapPublicIpOnLaunch: False 122 | AvailabilityZone: !Select 123 | - 2 124 | - !GetAZs 125 | Ref: 'AWS::Region' 126 | Tags: 127 | - Key: Name 128 | Value: !Sub 'Private Subnet 3 - ${AWS::StackName}' 129 | 130 | PublicSubnet1: 131 | Type: AWS::EC2::Subnet 132 | Properties: 133 | VpcId: !Ref VPC 134 | CidrBlock: 10.0.3.0/24 135 | MapPublicIpOnLaunch: True 136 | AvailabilityZone: !Select 137 | - 0 138 | - !GetAZs 139 | Ref: 'AWS::Region' 140 | Tags: 141 | - Key: Name 142 | Value: !Sub 'Public Subnet 1 - ${AWS::StackName}' 143 | 144 | PublicSubnet2: 145 | Type: AWS::EC2::Subnet 146 | Properties: 147 | VpcId: !Ref VPC 148 | CidrBlock: 10.0.4.0/24 149 | MapPublicIpOnLaunch: True 150 | AvailabilityZone: !Select 151 | - 1 152 | - !GetAZs 153 | Ref: 'AWS::Region' 154 | Tags: 155 | - Key: Name 156 | Value: !Sub 'Public Subnet 2 - ${AWS::StackName}' 157 | 158 | PublicSubnet3: 159 | Type: AWS::EC2::Subnet 160 | Properties: 161 | VpcId: !Ref VPC 162 | CidrBlock: 10.0.5.0/24 163 | MapPublicIpOnLaunch: True 164 | AvailabilityZone: !Select 165 | - 2 166 | - !GetAZs 167 | Ref: 'AWS::Region' 168 | Tags: 169 | - Key: Name 170 | Value: !Sub 'Public Subnet 3 - ${AWS::StackName}' 171 | 172 | PublicRouteTable: 173 | Type: AWS::EC2::RouteTable 174 | Properties: 175 | VpcId: !Ref VPC 176 | Tags: 177 | - Key: Name 178 | Value: !Sub 'Public Route Table - ${AWS::StackName}' 179 | 180 | PublicRouteTableEntry: 181 | Type: AWS::EC2::Route 182 | DependsOn: AttachGateway 183 | Properties: 184 | RouteTableId: !Ref PublicRouteTable 185 | DestinationCidrBlock: 0.0.0.0/0 186 | GatewayId: !Ref InternetGateway 187 | 188 | PublicRouteTableSubnetAssociation1: 189 | Type: AWS::EC2::SubnetRouteTableAssociation 190 | Properties: 191 | SubnetId: !Ref PublicSubnet1 192 | RouteTableId: !Ref PublicRouteTable 193 | 194 | PublicRouteTableSubnetAssociation2: 195 | Type: 
AWS::EC2::SubnetRouteTableAssociation 196 | Properties: 197 | SubnetId: !Ref PublicSubnet2 198 | RouteTableId: !Ref PublicRouteTable 199 | 200 | PublicRouteTableSubnetAssociation3: 201 | Type: AWS::EC2::SubnetRouteTableAssociation 202 | Properties: 203 | SubnetId: !Ref PublicSubnet3 204 | RouteTableId: !Ref PublicRouteTable 205 | 206 | PrivateRouteTable1: 207 | Type: AWS::EC2::RouteTable 208 | Properties: 209 | VpcId: !Ref VPC 210 | Tags: 211 | - Key: Name 212 | Value: !Sub 'Private Route Table 1 - ${AWS::StackName}' 213 | 214 | PrivateRouteTableSubnetAssociation1: 215 | Type: AWS::EC2::SubnetRouteTableAssociation 216 | Properties: 217 | SubnetId: !Ref PrivateSubnet1 218 | RouteTableId: !Ref PrivateRouteTable1 219 | 220 | PrivateRouteTable2: 221 | Type: AWS::EC2::RouteTable 222 | Properties: 223 | VpcId: !Ref VPC 224 | Tags: 225 | - Key: Name 226 | Value: !Sub 'Private Route Table 2 - ${AWS::StackName}' 227 | 228 | PrivateRouteTableSubnetAssociation2: 229 | Type: AWS::EC2::SubnetRouteTableAssociation 230 | Properties: 231 | SubnetId: !Ref PrivateSubnet2 232 | RouteTableId: !Ref PrivateRouteTable2 233 | 234 | PrivateRouteTable3: 235 | Type: AWS::EC2::RouteTable 236 | Properties: 237 | VpcId: !Ref VPC 238 | Tags: 239 | - Key: Name 240 | Value: !Sub 'Private Route Table 3 - ${AWS::StackName}' 241 | 242 | PrivateRouteTableSubnetAssociation3: 243 | Type: AWS::EC2::SubnetRouteTableAssociation 244 | Properties: 245 | SubnetId: !Ref PrivateSubnet3 246 | RouteTableId: !Ref PrivateRouteTable3 247 | 248 | S3Bucket: 249 | Type: AWS::S3::Bucket 250 | 251 | S3PutLambdaRole: 252 | Type: AWS::IAM::Role 253 | Properties: 254 | AssumeRolePolicyDocument: 255 | Version: '2012-10-17' 256 | Statement: 257 | - Effect: Allow 258 | Principal: 259 | Service: 260 | - lambda.amazonaws.com 261 | Action: 262 | - sts:AssumeRole 263 | Path: / 264 | ManagedPolicyArns: 265 | - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole 266 | Policies: 267 | - PolicyName: root 268 | PolicyDocument: 269 | Version: '2012-10-17' 270 | Statement: 271 | - Effect: Allow 272 | Action: 273 | - s3:PutObject 274 | - s3:DeleteObject 275 | Resource: !Sub '${S3Bucket.Arn}*' 276 | 277 | S3PutLambdaFunction: 278 | Type: AWS::Lambda::Function 279 | Properties: 280 | Handler: index.handler 281 | Runtime: python3.7 282 | Timeout: 30 283 | Role: !GetAtt S3PutLambdaRole.Arn 284 | Code: 285 | ZipFile: | 286 | import json 287 | import cfnresponse 288 | import boto3 289 | 290 | def handler(event, context): 291 | try: 292 | print(json.dumps(event)) 293 | client = boto3.client('s3') 294 | content = event['ResourceProperties']['Content'] 295 | bucket = event['ResourceProperties']['Bucket'] 296 | key = event['ResourceProperties']['Key'] 297 | physicalid = 's3://%s/%s' % (bucket, key) 298 | if event['RequestType'] == 'Delete': 299 | client.delete_object(Bucket=bucket, Key=key) 300 | else: 301 | client.put_object(Bucket=bucket, Key=key, Body=content.encode()) 302 | cfnresponse.send(event, context, cfnresponse.SUCCESS, {}, physicalid) 303 | 304 | except Exception as e: 305 | cfnresponse.send(event, context, cfnresponse.FAILED, {}) 306 | raise(e) 307 | 308 | WhitelistS3Object: 309 | Type: Custom::S3Object 310 | Properties: 311 | ServiceToken: !GetAtt S3PutLambdaFunction.Arn 312 | Bucket: !Ref S3Bucket 313 | Key: whitelist.txt 314 | Content: !Join [ "\n", !Ref WhitelistDomains ] 315 | 316 | SquidConfS3Object: 317 | Type: Custom::S3Object 318 | Properties: 319 | ServiceToken: !GetAtt S3PutLambdaFunction.Arn 320 | Bucket: !Ref S3Bucket 321 | Key: 
squid.conf
322 |       Content: |
323 |         visible_hostname squid
324 |         cache deny all
325 | 
326 |         # Log format and rotation
327 |         logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ssl::>sni %Sh/%<a %mt
[... lines 328-423 are missing from this extract; based on the AWS blog template this file derives from and the references that follow, they held the remainder of squid.conf (the intercept ports and whitelist ACLs used by the bootstrap script below), the NAT instance security group, IAM role and instance profile, and the header of the NATInstanceLC launch configuration, whose UserData bash script resumes here ...]
424 |             exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
425 | 
426 |             # Apply the latest security patches
427 |             yum update -y --security
428 | 
429 |             # Disable source / destination check. It cannot be disabled from the launch configuration
430 |             region=${AWS::Region}
431 |             instanceid=`curl -s http://169.254.169.254/latest/meta-data/instance-id`
432 |             aws ec2 modify-instance-attribute --no-source-dest-check --instance-id $instanceid --region $region
433 | 
434 |             # Install and start Squid
435 |             yum install -y squid
436 |             systemctl start squid || service squid start
437 |             iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3129
438 |             iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 3130
439 | 
440 |             # Create an SSL certificate for the SslBump Squid module
441 |             mkdir /etc/squid/ssl
442 |             cd /etc/squid/ssl
443 |             openssl genrsa -out squid.key 4096
444 |             openssl req -new -key squid.key -out squid.csr -subj "/C=XX/ST=XX/L=squid/O=squid/CN=squid"
445 |             openssl x509 -req -days 3650 -in squid.csr -signkey squid.key -out squid.crt
446 |             cat squid.key squid.crt >> squid.pem
447 | 
448 |             # Refresh the Squid configuration files from S3
449 |             mkdir /etc/squid/old
450 |             cat > /etc/squid/squid-conf-refresh.sh << 'EOF'
451 |             cp /etc/squid/* /etc/squid/old/
452 |             aws s3 sync s3://${S3Bucket} /etc/squid
453 |             /usr/sbin/squid -k parse && /usr/sbin/squid -k reconfigure || (cp /etc/squid/old/* /etc/squid/; exit 1)
454 |             EOF
455 |             chmod +x /etc/squid/squid-conf-refresh.sh
456 |             /etc/squid/squid-conf-refresh.sh
457 | 
458 |             # Schedule tasks
459 |             cat > ~/mycron << 'EOF'
460 |             * * * * * /etc/squid/squid-conf-refresh.sh
461 |             0 0 * * * sleep $(($RANDOM % 3600)); yum -y update --security
462 |             0 0 * * * /usr/sbin/squid -k rotate
463 |             EOF
464 |             crontab ~/mycron
465 |             rm ~/mycron
466 | 
467 |             # Install and configure the CloudWatch Agent
468 |             rpm -Uvh https://amazoncloudwatch-agent-${AWS::Region}.s3.${AWS::Region}.amazonaws.com/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm
469 |             cat > /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json << 'EOF'
470 |             {
471 |               "agent": {
472 |                 "metrics_collection_interval": 10,
473 |                 "omit_hostname": true
474 |               },
475 |               "metrics": {
476 |                 "metrics_collected": {
477 |                   "procstat": [
478 |                     {
479 |                       "pid_file": "/var/run/squid.pid",
480 |                       "measurement": [
481 |                         "cpu_usage"
482 |                       ]
483 |                     }
484 |                   ]
485 |                 },
486 |                 "append_dimensions": {
487 |                   "AutoScalingGroupName": "${!aws:AutoScalingGroupName}"
488 |                 },
489 |                 "force_flush_interval": 5
490 |               },
491 |               "logs": {
492 |                 "logs_collected": {
493 |                   "files": {
494 |                     "collect_list": [
495 |                       {
496 |                         "file_path": "/var/log/squid/access.log*",
497 |                         "log_group_name": "/filtering-nat-instance/access.log",
498 |                         "log_stream_name": "{instance_id}",
499 |                         "timezone": "Local"
500 |                       },
501 |                       {
502 |                         "file_path": "/var/log/squid/cache.log*",
503 |                         "log_group_name": "/filtering-nat-instance/cache.log",
504 |                         "log_stream_name": "{instance_id}",
505 |                         "timezone": "Local"
506 |                       }
507 |                     ]
508 |                   }
509 | 
510 |                 }
511 |               }
512 |             }
513 |             EOF
514 |             /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s
515 | 
516 |             # CloudFormation signal
517 |             yum update -y aws-cfn-bootstrap
518 |             /opt/aws/bin/cfn-signal -e $? 
--stack ${AWS::StackName} --resource NATInstanceASG1 --region ${AWS::Region} 519 | 520 | NATInstanceASG1: 521 | Type: AWS::AutoScaling::AutoScalingGroup 522 | Properties: 523 | DesiredCapacity: 1 524 | HealthCheckGracePeriod: 300 525 | HealthCheckType: EC2 526 | LaunchConfigurationName: !Ref NATInstanceLC 527 | MaxSize: 1 528 | MinSize: 1 529 | Tags: 530 | - Key: Name 531 | Value: !Sub 'NAT Instance 1 - ${AWS::StackName}' 532 | PropagateAtLaunch: True 533 | - Key: RouteTableIds 534 | Value: !Ref PrivateRouteTable1 535 | PropagateAtLaunch: False 536 | VPCZoneIdentifier: 537 | - !Ref PublicSubnet1 538 | TargetGroupARNs: 539 | - !Ref ProxyTargetGroup 540 | CreationPolicy: 541 | ResourceSignal: 542 | Count: 1 543 | Timeout: PT10M 544 | 545 | NATInstanceASGHook1: 546 | Type: AWS::AutoScaling::LifecycleHook 547 | Properties: 548 | AutoScalingGroupName: !Ref NATInstanceASG1 549 | DefaultResult: ABANDON 550 | LifecycleTransition: autoscaling:EC2_INSTANCE_LAUNCHING 551 | HeartbeatTimeout: 300 552 | 553 | NATInstanceASG2: 554 | Type: AWS::AutoScaling::AutoScalingGroup 555 | Properties: 556 | DesiredCapacity: 1 557 | HealthCheckGracePeriod: 300 558 | HealthCheckType: EC2 559 | LaunchConfigurationName: !Ref NATInstanceLC 560 | MaxSize: 1 561 | MinSize: 1 562 | Tags: 563 | - Key: Name 564 | Value: !Sub 'NAT Instance 2 - ${AWS::StackName}' 565 | PropagateAtLaunch: True 566 | - Key: RouteTableIds 567 | Value: !Ref PrivateRouteTable2 568 | PropagateAtLaunch: False 569 | VPCZoneIdentifier: 570 | - !Ref PublicSubnet2 571 | TargetGroupARNs: 572 | - !Ref ProxyTargetGroup 573 | 574 | NATInstanceASGHook2: 575 | Type: AWS::AutoScaling::LifecycleHook 576 | Properties: 577 | AutoScalingGroupName: !Ref NATInstanceASG2 578 | DefaultResult: ABANDON 579 | LifecycleTransition: autoscaling:EC2_INSTANCE_LAUNCHING 580 | HeartbeatTimeout: 300 581 | 582 | AlarmTopic: 583 | Type: AWS::SNS::Topic 584 | 585 | AlarmTopicSubscription: 586 | Type: AWS::SNS::Subscription 587 | Properties: 588 | Endpoint: !GetAtt AlarmLambdaFunction.Arn 589 | Protocol: lambda 590 | TopicArn: !Ref AlarmTopic 591 | 592 | Alarm1: 593 | Type: AWS::CloudWatch::Alarm 594 | Properties: 595 | AlarmActions: 596 | - !Ref AlarmTopic 597 | AlarmDescription: !Sub 'Heart beat for NAT Instance 1' 598 | AlarmName: !Sub '${AWS::StackName}/${NATInstanceASG1}' 599 | ComparisonOperator: LessThanThreshold 600 | Dimensions: 601 | - Name: AutoScalingGroupName 602 | Value: !Ref NATInstanceASG1 603 | - Name: pidfile 604 | Value: /var/run/squid.pid 605 | - Name: process_name 606 | Value: squid 607 | EvaluationPeriods: 1 608 | MetricName: procstat_cpu_usage 609 | Namespace: CWAgent 610 | OKActions: 611 | - !Ref AlarmTopic 612 | Period: 10 613 | Statistic: Average 614 | Threshold: 0.0 615 | TreatMissingData: breaching 616 | 617 | Alarm2: 618 | Type: AWS::CloudWatch::Alarm 619 | Properties: 620 | AlarmActions: 621 | - !Ref AlarmTopic 622 | AlarmDescription: !Sub 'Heart beat for NAT Instance 2' 623 | AlarmName: !Sub '${AWS::StackName}/${NATInstanceASG2}' 624 | ComparisonOperator: LessThanThreshold 625 | Dimensions: 626 | - Name: AutoScalingGroupName 627 | Value: !Ref NATInstanceASG2 628 | - Name: pidfile 629 | Value: /var/run/squid.pid 630 | - Name: process_name 631 | Value: squid 632 | EvaluationPeriods: 1 633 | MetricName: procstat_cpu_usage 634 | Namespace: CWAgent 635 | OKActions: 636 | - !Ref AlarmTopic 637 | Period: 10 638 | Statistic: Average 639 | Threshold: 0.0 640 | TreatMissingData: breaching 641 | 642 | AlarmLambdaPermission: 643 | Type: AWS::Lambda::Permission 644 
|     Properties:
645 |       FunctionName: !Ref AlarmLambdaFunction
646 |       Action: lambda:InvokeFunction
647 |       Principal: sns.amazonaws.com
648 |       SourceArn: !Ref AlarmTopic
649 | 
650 |   AlarmLambdaRole:
651 |     Type: AWS::IAM::Role
652 |     Properties:
653 |       AssumeRolePolicyDocument:
654 |         Version: '2012-10-17'
655 |         Statement:
656 |           - Effect: Allow
657 |             Principal:
658 |               Service:
659 |                 - lambda.amazonaws.com
660 |             Action:
661 |               - sts:AssumeRole
662 |       Path: /
663 |       ManagedPolicyArns:
664 |         - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
665 |       Policies:
666 |         - PolicyName: root
667 |           PolicyDocument:
668 |             Version: '2012-10-17'
669 |             Statement:
670 |               - Effect: Allow
671 |                 Action:
672 |                   - autoscaling:Describe*
673 |                   - autoscaling:CompleteLifecycleAction
674 |                   - autoscaling:SetInstanceHealth
675 |                   - cloudwatch:Describe*
676 |                   - ec2:CreateRoute
677 |                   - ec2:CreateTags
678 |                   - ec2:ReplaceRoute
679 |                   - ec2:Describe*
680 |                 Resource: '*'
681 | 
682 |   AlarmLambdaFunction:
683 |     Type: AWS::Lambda::Function
684 |     Properties:
685 |       Handler: index.handler
686 |       Runtime: python3.7
687 |       Timeout: 30
688 |       Role: !GetAtt AlarmLambdaRole.Arn
689 |       Code:
690 |         ZipFile: !Sub |
691 |           import json
692 |           import boto3
693 | 
694 |           as_client = boto3.client('autoscaling')
695 |           cw_client = boto3.client('cloudwatch')
696 |           ec2_client = boto3.client('ec2')
697 | 
698 |           # Function to create or update the default route
699 |           def update_route(route_table_id, instance_id, asg_name):
700 |               parameters = {
701 |                   'DestinationCidrBlock': '0.0.0.0/0',
702 |                   'RouteTableId': route_table_id,
703 |                   'InstanceId': instance_id
704 |               }
705 |               try:
706 |                   ec2_client.replace_route(**parameters)
707 |               except Exception:
708 |                   ec2_client.create_route(**parameters)
709 |               ec2_client.create_tags(
710 |                   Resources=[route_table_id],
711 |                   Tags=[{'Key': 'AutoScalingGroupName', 'Value': asg_name}]
712 |               )
713 |               print('Updated default route of %s to %s' % (route_table_id, instance_id))
714 | 
715 |           def handler(event, context):
716 |               print(json.dumps(event))
717 |               for record in event['Records']:
718 |                   message = json.loads(record['Sns']['Message'])
719 |                   print('Alarm state: %s' % message['NewStateValue'])
720 | 
721 |                   # Auto Scaling group associated with the alarm
722 |                   asg_name = message['AlarmName'].split('/')[1]
723 |                   print('ASG Name: %s' % asg_name)
724 |                   asg = as_client.describe_auto_scaling_groups(
725 |                       AutoScalingGroupNames=[asg_name]
726 |                   )['AutoScalingGroups'][0]
727 | 
728 |                   # If the NAT instance has failed
729 |                   if message['NewStateValue'] == 'ALARM':
730 | 
731 |                       # Set the NAT instance to Unhealthy
732 |                       try:
733 |                           for instance in asg['Instances']:
734 |                               as_client.set_instance_health(
735 |                                   InstanceId=instance['InstanceId'],
736 |                                   HealthStatus='Unhealthy'
737 |                               )
738 |                               print('Set instance %s to Unhealthy' % instance['InstanceId'])
739 |                       except Exception:
740 |                           pass
741 | 
742 |                       # Route traffic to the first healthy NAT instance
743 |                       for healthy_alarm in cw_client.describe_alarms(
744 |                           AlarmNamePrefix='${AWS::StackName}/',
745 |                           ActionPrefix='${AlarmTopic}',
746 |                           StateValue='OK'
747 |                       )['MetricAlarms']:
748 | 
749 |                           healthy_asg_name = healthy_alarm['AlarmName'].split('/')[1]
750 |                           healthy_asg = as_client.describe_auto_scaling_groups(
751 |                               AutoScalingGroupNames=[healthy_asg_name]
752 |                           )['AutoScalingGroups'][0]
753 |                           healthy_instance_id = healthy_asg['Instances'][0]['InstanceId']
754 |                           print('Healthy NAT instance: %s' % healthy_instance_id)
755 | 
756 |                           # For each route table that currently routes traffic to the unhealthy NAT
757 |                           # instance, update the default route 
758 |                           for route_table in ec2_client.describe_route_tables(
759 |                               Filters=[{'Name': 'tag:AutoScalingGroupName', 'Values': [asg_name]}]
760 |                           )['RouteTables']:
761 |                               update_route(route_table['RouteTableId'], healthy_instance_id, healthy_asg_name)
762 | 
763 |                           break
764 | 
765 |                   # If the NAT instance has recovered
766 |                   else:
767 | 
768 |                       # ID of the NAT instance launched by the Auto Scaling group
769 |                       for instance in asg['Instances']:
770 |                           if instance['HealthStatus'] == 'Healthy':
771 |                               asg_instance_id = instance['InstanceId']
772 |                               break
773 |                       print('Instance launched by the ASG: %s' % asg_instance_id)
774 | 
775 |                       # Complete the lifecycle action if the NAT instance was just launched
776 |                       lc_name = as_client.describe_lifecycle_hooks(
777 |                           AutoScalingGroupName=asg_name
778 |                       )['LifecycleHooks'][0]['LifecycleHookName']
779 |                       try:
780 |                           as_client.complete_lifecycle_action(
781 |                               LifecycleHookName=lc_name,
782 |                               AutoScalingGroupName=asg_name,
783 |                               LifecycleActionResult='CONTINUE',
784 |                               InstanceId=asg_instance_id
785 |                           )
786 |                           print('Lifecycle action completed')
787 |                       except Exception:
788 |                           pass
789 | 
790 |                       # Create or update the default route for each route table that should route
791 |                       # traffic to this NAT instance in a nominal situation
792 |                       for route_table_id in as_client.describe_tags(
793 |                           Filters=[
794 |                               {'Name': 'auto-scaling-group', 'Values': [asg_name]},
795 |                               {'Name': 'key', 'Values': ['RouteTableIds']}
796 |                           ]
797 |                       )['Tags'][0]['Value'].split(','):
798 |                           update_route(route_table_id, asg_instance_id, asg_name)
799 | 
800 |   TestingInstanceRole:
801 |     Type: AWS::IAM::Role
802 |     Properties:
803 |       AssumeRolePolicyDocument:
804 |         Version: '2012-10-17'
805 |         Statement:
806 |           - Effect: Allow
807 |             Principal:
808 |               Service:
809 |                 - ec2.amazonaws.com
810 |             Action:
811 |               - sts:AssumeRole
812 |       Path: /
813 |       ManagedPolicyArns:
814 |         - arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM
815 |       Policies:
816 |         - PolicyName: root
817 |           PolicyDocument:
818 |             Version: '2012-10-17'
819 |             Statement:
820 |               - Effect: Allow
821 |                 Action:
822 |                   - ec2:DescribeRegions
823 |                 Resource: '*'
824 | 
825 |   TestingInstanceProfile:
826 |     Type: AWS::IAM::InstanceProfile
827 |     Properties:
828 |       Roles:
829 |         - !Ref TestingInstanceRole
830 |       Path: /
831 | 
832 |   TestingInstance:
833 |     Type: AWS::EC2::Instance
834 |     DependsOn:
835 |       - NATInstanceASG1
836 |     Properties:
837 |       KeyName: ubuntu-win10 # NOTE: replace with the name of an EC2 key pair that exists in your account
838 |       IamInstanceProfile: !Ref TestingInstanceProfile
839 |       ImageId: !Ref AmiId
840 |       InstanceType: t3.nano
841 |       SecurityGroupIds:
842 |         - !GetAtt VPC.DefaultSecurityGroup
843 |       SubnetId: !Ref PrivateSubnet1
844 |       Tags:
845 |         - Key: Name
846 |           Value: !Sub 'Testing Instance - ${AWS::StackName}'
847 | 
848 |   # Network Load Balancer for the endpoint service
849 |   ProxyLoadBalancer:
850 |     Type: AWS::ElasticLoadBalancingV2::LoadBalancer
851 |     Properties:
852 |       Name: !Sub 'NLB-${AWS::StackName}'
853 |       Scheme: internal
854 |       Type: network
855 |       LoadBalancerAttributes:
856 |         - { Key: load_balancing.cross_zone.enabled, Value: true }
857 |       Subnets:
858 |         - !Ref PrivateSubnet1
859 |         - !Ref PrivateSubnet2
860 |         - !Ref PrivateSubnet3
861 |   ProxyTargetGroup:
862 |     Type: AWS::ElasticLoadBalancingV2::TargetGroup
863 |     Properties:
864 |       Name: ProxyTargetGroup
865 |       VpcId: !Ref VPC
866 |       Protocol: 'TCP'
867 |       Port: '80'
868 |   ProxyLoadBalancerListener:
869 |     Type: AWS::ElasticLoadBalancingV2::Listener
870 |     Properties:
871 |       DefaultActions:
872 |         - Type: forward
873 |           TargetGroupArn: !Ref ProxyTargetGroup
874 |       LoadBalancerArn: !Ref ProxyLoadBalancer 
875 |       Port: '80'
876 |       Protocol: 'TCP'
877 |   # VPC Endpoint Service (PrivateLink)
878 |   ProxyEndpointService:
879 |     Type: AWS::EC2::VPCEndpointService
880 |     Properties:
881 |       AcceptanceRequired: False
882 |       NetworkLoadBalancerArns:
883 |         - !Ref ProxyLoadBalancer
884 | 
885 | Outputs:
886 |   ProxyEndpointServiceId:
887 |     Description: ID of the Proxy Endpoint Service
888 |     Value: !Ref ProxyEndpointService
889 |     Export:
890 |       Name: !Sub "ProxyEndpointServiceId"
891 | 
--------------------------------------------------------------------------------
/launch_all.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | set -e
3 | 
4 | source variables.sh
5 | 
6 | # Check that the S3 bucket exists; if not, create it (note: us-east-1 does not accept a LocationConstraint)
7 | if aws s3api head-bucket --bucket ${S3_STAGING_LOCATION} 2>/dev/null; then
8 |   echo "Using S3 bucket ${S3_STAGING_LOCATION} for cloudformation and kubectl binary"
9 | else
10 |   aws s3api create-bucket \
11 |     --bucket ${S3_STAGING_LOCATION} \
12 |     --create-bucket-configuration LocationConstraint=${REGION} \
13 |     --region ${REGION}
14 |   echo "Created S3 bucket ${S3_STAGING_LOCATION} for cloudformation and kubectl binary"
15 | fi
16 | 
17 | # aws cloudformation deploy <-- create a network in which to put the EKS cluster
18 | # set SUBNETS, SECURITY_GROUPS, WORKER_SECURITY_GROUPS, VPC_ID appropriately
19 | STACK_NAME=${CLUSTER_NAME}-vpc
20 | aws cloudformation package \
21 |   --s3-bucket ${S3_STAGING_LOCATION} \
22 |   --output-template-file /tmp/packaged.yaml \
23 |   --region ${REGION} \
24 |   --template-file cloudformation/environment.yaml
25 | 
26 | aws cloudformation deploy \
27 |   --template-file /tmp/packaged.yaml \
28 |   --region ${REGION} \
29 |   --stack-name ${STACK_NAME} \
30 |   --capabilities CAPABILITY_NAMED_IAM \
31 |   --parameter-overrides HttpProxyServiceName=${HTTP_PROXY_ENDPOINT_SERVICE_NAME} StackPrefix=${CLUSTER_NAME}
32 | 
33 | VPC_ID=`aws cloudformation describe-stacks --stack-name ${STACK_NAME} --region ${REGION} --query "Stacks[0].Outputs[?OutputKey=='VPCId'].OutputValue" --output text`
34 | SUBNETS=`aws cloudformation describe-stacks --stack-name ${STACK_NAME} --region ${REGION} --query "Stacks[0].Outputs[?OutputKey=='Subnets'].OutputValue" --output text`
35 | ROLE_ARN=`aws cloudformation describe-stacks --stack-name ${STACK_NAME} --region ${REGION} --query "Stacks[0].Outputs[?OutputKey=='MasterRoleArn'].OutputValue" --output text`
36 | MASTER_SECURITY_GROUPS=`aws cloudformation describe-stacks --stack-name ${STACK_NAME} --region ${REGION} --query "Stacks[0].Outputs[?OutputKey=='MasterSecurityGroup'].OutputValue" --output text`
37 | WORKER_SECURITY_GROUPS=`aws cloudformation describe-stacks --stack-name ${STACK_NAME} --region ${REGION} --query "Stacks[0].Outputs[?OutputKey=='EndpointClientSecurityGroup'].OutputValue" --output text`
38 | EKS_CLUSTER_KMS_ARN=`aws cloudformation describe-stacks --stack-name ${STACK_NAME} --region ${REGION} --query "Stacks[0].Outputs[?OutputKey=='MasterKeyArn'].OutputValue" --output text`
39 | PROXY_URL=${HTTP_PROXY_ENDPOINT_SERVICE_NAME}
40 | if [ "${HTTP_PROXY_ENDPOINT_SERVICE_NAME}" != "" ]
41 | then
42 |   PROXY_URL=`aws cloudformation describe-stacks --stack-name ${STACK_NAME} --region ${REGION} --query "Stacks[0].Outputs[?OutputKey=='HttpProxyUrl'].OutputValue" --output text`
43 | fi
44 | 
45 | aws eks create-cluster \
46 |   --name ${CLUSTER_NAME} \
47 |   --role-arn ${ROLE_ARN} \
48 |   --encryption-config resources=secrets,provider={keyArn=${EKS_CLUSTER_KMS_ARN}} \
49 |   --resources-vpc-config 
subnetIds=${SUBNETS},securityGroupIds=${MASTER_SECURITY_GROUPS},endpointPublicAccess=${ENABLE_PUBLIC_ACCESS},endpointPrivateAccess=true \
50 |   --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}' \
51 |   --kubernetes-version ${VERSION} \
52 |   --region ${REGION}
53 | 
54 | # wait for the cluster to create
55 | while [ $(aws eks describe-cluster --name ${CLUSTER_NAME} --query 'cluster.status' --output text --region ${REGION}) == "CREATING" ]
56 | do
57 |   echo Cluster ${CLUSTER_NAME} status: CREATING...
58 |   sleep 60
59 | done
60 | echo Cluster ${CLUSTER_NAME} is ACTIVE
61 | 
62 | ISSUER_URL=$(aws eks describe-cluster --name $CLUSTER_NAME --query cluster.identity.oidc.issuer --output text --region $REGION)
63 | AWS_FINGERPRINT=9E99A48A9960B14926BB7F3B02E22DA2B0AB7280 # root CA thumbprint for the EKS OIDC issuer
64 | 
65 | aws iam create-open-id-connect-provider \
66 |   --url $ISSUER_URL \
67 |   --thumbprint-list $AWS_FINGERPRINT \
68 |   --client-id-list sts.amazonaws.com
69 | echo Registered OpenID Connect provider with IAM
70 | 
71 | # Update Kubeconfig with new cluster details
72 | aws eks --region ${REGION} \
73 |   update-kubeconfig \
74 |   --name ${CLUSTER_NAME}
75 | 
76 | source launch_workers.sh
--------------------------------------------------------------------------------
/launch_workers.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | set -e
3 | 
4 | source variables.sh
5 | 
6 | # Look up the network created by launch_all.sh and
7 | # set SUBNETS, SECURITY_GROUPS, WORKER_SECURITY_GROUPS, VPC_ID appropriately
8 | STACK_NAME=${CLUSTER_NAME}-vpc
9 | 
10 | VPC_ID=`aws cloudformation describe-stacks --stack-name ${STACK_NAME} --region ${REGION} --query "Stacks[0].Outputs[?OutputKey=='VPCId'].OutputValue" --output text`
11 | VPC_CIDR=`aws cloudformation describe-stacks --stack-name ${STACK_NAME} --region ${REGION} --query "Stacks[0].Outputs[?OutputKey=='VPCCIDR'].OutputValue" --output text`
12 | SUBNETS=`aws cloudformation describe-stacks --stack-name ${STACK_NAME} --region ${REGION} --query "Stacks[0].Outputs[?OutputKey=='Subnets'].OutputValue" --output text`
13 | ROLE_ARN=`aws cloudformation describe-stacks --stack-name ${STACK_NAME} --region ${REGION} --query "Stacks[0].Outputs[?OutputKey=='MasterRoleArn'].OutputValue" --output text`
14 | MASTER_SECURITY_GROUPS=`aws cloudformation describe-stacks --stack-name ${STACK_NAME} --region ${REGION} --query "Stacks[0].Outputs[?OutputKey=='MasterSecurityGroup'].OutputValue" --output text`
15 | WORKER_SECURITY_GROUPS=`aws cloudformation describe-stacks --stack-name ${STACK_NAME} --region ${REGION} --query "Stacks[0].Outputs[?OutputKey=='EndpointClientSecurityGroup'].OutputValue" --output text`
16 | PROXY_URL=`aws cloudformation describe-stacks --stack-name ${STACK_NAME} --region ${REGION} --query "Stacks[0].Outputs[?OutputKey=='HttpProxyUrl'].OutputValue" --output text`
17 | 
18 | ENDPOINT=`aws eks describe-cluster --name ${CLUSTER_NAME} --query 'cluster.endpoint' --output text --region ${REGION}`
19 | CERT_DATA=`aws eks describe-cluster --name ${CLUSTER_NAME} --query 'cluster.certificateAuthority.data' --output text --region ${REGION}`
20 | TOKEN=`aws eks get-token --cluster-name ${CLUSTER_NAME} --region ${REGION} | jq -r '.status.token'` # requires jq
21 | 
22 | echo Endpoint ${ENDPOINT}
23 | echo Certificate ${CERT_DATA}
24 | echo Token ${TOKEN}
25 | 
26 | # Add optional support for EKS on Fargate
27 | # This requires you to change the private subnets' route 
tables to point directly at the proxy IPs, since you cannot modify the kubelet configuration on Fargate
28 | if [[ $ENABLE_FARGATE == "true" ]]; then
29 |   echo "Configuring EKS on Fargate"
30 |   # Deploy Fargate IAM permissions
31 |   aws cloudformation deploy \
32 |     --template-file cloudformation/fargate.yaml \
33 |     --stack-name ${CLUSTER_NAME}-fargate \
34 |     --capabilities CAPABILITY_NAMED_IAM \
35 |     --region ${REGION} \
36 |     --parameter-overrides StackPrefix=${CLUSTER_NAME}
37 | 
38 |   FARGATE_EXEC_ROLE_ARN=`aws cloudformation describe-stacks --stack-name ${CLUSTER_NAME}-fargate --region ${REGION} --query "Stacks[0].Outputs[?OutputKey=='EKSFargatePodExecutionRoleArn'].OutputValue" --output text`
39 |   # Create an EKS Fargate profile - waiting on CloudFormation support: https://github.com/aws-cloudformation/aws-cloudformation-coverage-roadmap/issues/288
40 |   SUBNETS_LIST=`echo ${SUBNETS} | sed 's/,/ /g'`
41 | 
42 |   aws eks create-fargate-profile \
43 |     --fargate-profile-name ${FARGATE_PROFILE_NAME} \
44 |     --cluster-name ${CLUSTER_NAME} \
45 |     --pod-execution-role-arn ${FARGATE_EXEC_ROLE_ARN} \
46 |     --subnets ${SUBNETS_LIST} \
47 |     --selectors namespace=${FARGATE_NAMESPACE}
48 | else
49 |   echo Staging kubectl to S3
50 |   curl -sLO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
51 |   aws s3 cp kubectl s3://${S3_STAGING_LOCATION}/kubectl
52 |   rm kubectl
53 | 
54 |   aws cloudformation deploy \
55 |     --template-file cloudformation/eks-workers.yaml \
56 |     --stack-name ${CLUSTER_NAME}-worker \
57 |     --capabilities CAPABILITY_IAM \
58 |     --region ${REGION} \
59 |     --parameter-overrides ClusterControlPlaneSecurityGroup=${MASTER_SECURITY_GROUPS} \
60 |       ClusterName=${CLUSTER_NAME} \
61 |       KeyName=${KEY_PAIR} \
62 |       NodeGroupName=${CLUSTER_NAME}-workers \
63 |       NodeImageId=${AMI_ID} \
64 |       NodeInstanceType=${INSTANCE_TYPE} \
65 |       Subnets=${SUBNETS} \
66 |       VpcId=${VPC_ID} \
67 |       VpcCidr=${VPC_CIDR} \
68 |       ClusterAPIEndpoint=${ENDPOINT} \
69 |       ClusterCA=${CERT_DATA} \
70 |       HttpsProxy=${PROXY_URL} \
71 |       WorkerSecurityGroup=${WORKER_SECURITY_GROUPS} \
72 |       UserToken=${TOKEN} \
73 |       KubectlS3Location="s3://${S3_STAGING_LOCATION}/kubectl"
74 | fi
75 | 
--------------------------------------------------------------------------------
/variables.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | 
3 | CLUSTER_NAME='priv-fra-ssm-eks-cluster-01'
4 | REGION=eu-central-1
5 | HTTP_PROXY_ENDPOINT_SERVICE_NAME="" # leave blank for no proxy, or populate with a VPC endpoint service name to create a PrivateLink powered connection to a proxy server
6 | KEY_PAIR="" # name of an existing EC2 key pair to be used as an SSH key on the worker nodes
7 | VERSION='1.16' # K8s version to deploy
8 | AMI_ID=ami-0bf7306240d09dcdd # AWS-managed AMI for EKS worker nodes; must match your chosen region
9 | INSTANCE_TYPE=t3.large # instance type for EKS worker nodes
10 | S3_STAGING_LOCATION=jasbarto-eks-frankfurt-cfn # S3 bucket to be created to store CloudFormation templates and a copy of the kubectl binary
11 | ENABLE_PUBLIC_ACCESS=false # set to true to keep the public EKS API endpoint enabled alongside the private one
12 | ENABLE_FARGATE=false
13 | FARGATE_PROFILE_NAME=PrivateFargateProfile
14 | FARGATE_NAMESPACE=fargate
--------------------------------------------------------------------------------
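
A quick way to sanity-check the result: once launch_all.sh (and the worker script it sources) completes, run kubectl from a host inside the cluster VPC (the VPC has SSM endpoints, so an instance there can be reached with Session Manager). A minimal sketch, assuming this repository is checked out on that host with the AWS CLI and kubectl installed, and credentials that can reach the private EKS endpoint:

    #!/usr/bin/env bash
    source variables.sh

    # Point kubectl at the new cluster (writes an entry into ~/.kube/config)
    aws eks update-kubeconfig --name ${CLUSTER_NAME} --region ${REGION}

    # The worker nodes launched by launch_workers.sh should register and reach Ready
    kubectl get nodes -o wide

    # Core add-ons (aws-node, coredns, kube-proxy) should be Running; image pulls
    # for anything hosted outside ECR will go through the proxy whitelist
    kubectl get pods -n kube-system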