├── .gitignore ├── LICENSE ├── RBAC.md ├── README.md ├── cloudformation ├── controller.yaml ├── k8s-template.yaml ├── network-template.json └── node.yaml ├── images └── network.png ├── scripts └── certs │ ├── .gitignore │ ├── convertcerts2base64.sh │ ├── generate-api-server-certs.sh │ ├── generate-root-ca-certs.sh │ ├── generate-worker-certs.sh │ ├── openssl.cnf │ └── worker-openssl.cnf └── yaml ├── apiserver.yaml ├── cluster ├── controller │ ├── grafana-service.yaml │ ├── heapster-controller.yaml │ ├── heapster-service.yaml │ ├── influxdb-grafana-controller.yaml │ ├── influxdb-service.yaml │ ├── kube-dashboard-deploy.yaml │ ├── kube-dashboard-svc.yaml │ └── kube-podmaster.yaml └── kube-dns.yaml ├── controller-manager.yaml ├── kube-proxy-node.yaml ├── kube-proxy.yaml ├── rbac └── sloka-admin.yaml └── scheduler.yaml /.gitignore: -------------------------------------------------------------------------------- 1 | ignition.json 2 | .DS_Store -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | 2 | Copyright (c) 2016, UPMC Enterprises 3 | All rights reserved. 4 | 5 | Redistribution and use in source and binary forms, with or without 6 | modification, are permitted provided that the following conditions are met: 7 | * Redistributions of source code must retain the above copyright 8 | notice, this list of conditions and the following disclaimer. 9 | * Redistributions in binary form must reproduce the above copyright 10 | notice, this list of conditions and the following disclaimer in the 11 | documentation and/or other materials provided with the distribution. 12 | * Neither the name UPMC Enterprises nor the 13 | names of its contributors may be used to endorse or promote products 14 | derived from this software without specific prior written permission. 
15 | 16 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 17 | ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 18 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 19 | DISCLAIMED. IN NO EVENT SHALL UPMC ENTERPRISES BE LIABLE FOR ANY 20 | DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 21 | (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 22 | LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 23 | ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 24 | (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 25 | SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 26 | -------------------------------------------------------------------------------- /RBAC.md: -------------------------------------------------------------------------------- 1 | # RBAC 2 | 3 | ## Cert-based Auth 4 | 5 | This method uses certificates to identify users.
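Kubernetes reads the RBAC username from a client certificate's CN and its group memberships from the O fields. As a sanity check, you can inspect the subject a cert will assert with openssl. The sketch below uses a throwaway self-signed cert (the `/tmp` paths are illustrative) rather than one signed by the cluster CA:

```
# Throwaway self-signed cert with the same subject shape used for the
# superadmin cert in this guide (stand-in for a CA-signed user cert).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.pem -days 1 -subj "/CN=sloka/O=cluster-admin"

# Print the subject: CN is the username, O the group RBAC will see
openssl x509 -in /tmp/demo.pem -noout -subject
```

The same check against a real user cert (e.g. `sloka.pem`) confirms the identity the API server will derive from it.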
6 | 7 | ### Create User 8 | 9 | Generate certs for the superadmin: 10 | 11 | ``` 12 | $ sudo su - 13 | $ cd /etc/kubernetes/ssl 14 | $ openssl genrsa -out sloka.key 2048 15 | $ openssl req -new -key sloka.key -out sloka.csr -subj "/CN=sloka/O=cluster-admin" 16 | $ openssl x509 -req -in sloka.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out sloka.pem -days 10000 17 | ``` 18 | 19 | Setup kubeconfig: 20 | 21 | Base64 encode the certs and copy them into `client-certificate-data` & `client-key-data` in the `users` section of the `~/.kube/config` configuration file: 22 | 23 | ``` 24 | $ base64 -w0 sloka.pem && echo 25 | $ base64 -w0 sloka.key && echo 26 | ``` 27 | 28 | ### Bind the user to a Role 29 | 30 | This binds our new user to the `cluster-admin` role, which allows them to do anything: 31 | 32 | ``` 33 | kind: ClusterRoleBinding 34 | apiVersion: rbac.authorization.k8s.io/v1beta1 35 | metadata: 36 | name: cluster-admin-binding 37 | subjects: 38 | - kind: User 39 | name: sloka 40 | apiGroup: rbac.authorization.k8s.io 41 | roleRef: 42 | kind: ClusterRole 43 | name: cluster-admin 44 | apiGroup: rbac.authorization.k8s.io 45 | ``` -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # kubernetes-on-aws 2 | CloudFormation scripts to deploy a Kubernetes cluster on AWS with everything TLS-secured. The only public components are the Kubernetes services that get exposed via Elastic Load Balancers through service type `LoadBalancer`. 3 | 4 | ## Overview 5 | 6 | There are two CloudFormation templates: [network-template](cloudformation/network-template.json) and [k8s-template](cloudformation/k8s-template.yaml).
7 | 8 | ### Technologies 9 | 10 | - CoreOS (Stable) for k8s master and nodes 11 | - OpsWorks to manage the bastion instance running Amazon Linux 12 | 13 | ### cloudformation/network-template.json 14 | 15 | The network template provides a set of VPCs, subnets, and internet gateways as a basic "shell" to deploy AWS infrastructure into. This base architecture is modeled after the [NIST Template](https://aws.amazon.com/about-aws/whats-new/2016/01/nist-800-53-standardized-architecture-on-the-aws-cloud-quick-start-reference-deployment/) in that it only allows access via a bastion instance. Access to the bastion instance is provided via OpsWorks, which manages individual user keys on the box. 16 | 17 | ![base-network](images/network.png) 18 | 19 | ### cloudformation/k8s-template.yaml 20 | 21 | The k8s template deploys a Kubernetes cluster into the network-template or your own existing infrastructure. This is important, as the k8s template was designed not to force users into any base infrastructure. All that is required to deploy the k8s template is a VPC and a route table. The template creates its own subnets and permissions, but everything else is left up to the user. 22 | 23 | _NOTE: The network template provided is NOT required to use the k8s template._ 24 | 25 | ## Deployment with Base Template 26 | 27 | 1. Modify the SAN for the certs by changing the DNS entry in [openssl.cnf](scripts/certs/openssl.cnf) to match requirements (if required) 28 | 29 | 2. Generate the RootCA: `./scripts/certs/generate-root-ca-certs.sh` 30 | 31 | 3. Generate the API server certs: `./scripts/certs/generate-api-server-certs.sh` 32 | 33 | 4. Generate the worker certs: `./scripts/certs/generate-worker-certs.sh` 34 | 35 | 5. Create an SSH keypair for cluster management (all servers behind the bastion will use this key) 36 | 37 | 6. Log in to the AWS console and run the Getting Started wizard for OpsWorks. This creates the roles needed to create the OpsWorks stack for our deployment 38 | 39 | 7.
Create the base stack (`cloudformation/network-template.json`) by navigating to the AWS CloudFormation screen, uploading the file, and executing it 40 | 41 | 8. Update the ignition configs (`cloudformation/controller.yaml` & `cloudformation/node.yaml`) with the certs generated in the previous steps 42 | 43 | 9. Transpile the configs and copy them into the template (see [Generate ignition config](#generate-ignition-config)) 44 | 45 | 10. Create the Kubernetes stack (`cloudformation/k8s-template.yaml`) by navigating to the AWS CloudFormation screen, uploading the file, and executing it. _NOTE: The route table to use is named `Route Table for Private Networks`._ 46 | 47 | _NOTE: Only generate the RootCA once! If the RootCA is regenerated, all certs will need to be re-created._ 48 | 49 | ### Generate ignition config 50 | 51 | First download the Container Linux Config Transpiler to your local machine (see releases here: [ct transpiler](https://github.com/coreos/container-linux-config-transpiler/releases)) 52 | 53 | ``` 54 | # Transpile the ignition config, then copy it into the UserData section of InstanceController in `k8s-template.yaml` 55 | $ rm ignition.json && ct --in-file controller.yaml --platform ec2 > ignition.json 56 | 57 | # Transpile the ignition config, then copy it into the UserData section of LaunchConfigurationWorker in `k8s-template.yaml` 58 | $ rm ignition.json && ct --in-file node.yaml --platform ec2 > ignition.json 59 | ``` 60 | 61 | ### (Optional) Create CloudFormation stack via awscli 62 | 63 | Use the following command to create the stack without needing the AWS web UI: 64 | 65 | ``` 66 | $ aws cloudformation create-stack --stack-name sloka-k8s --template-body file://k8s-template.yaml --parameters ParameterKey=ApplicationVPC,ParameterValue=vpc-45097a3c ParameterKey=ClusterName,ParameterValue=sloka-k8s ParameterKey=KeyName,ParameterValue=sloka-virginia ParameterKey=RouteTableNAT,ParameterValue=rtb-e7cbc79f --capabilities CAPABILITY_IAM 67 | ``` 68 | 69 | ### Create Cluster Addons 70 | 71 | ``` 72 | $
export kubever=$(kubectl version | base64 | tr -d '\n') 73 | $ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever" 74 | $ kubectl create -f https://raw.githubusercontent.com/upmc-enterprises/kubernetes-on-aws/master/yaml/cluster/kube-dns.yaml 75 | ``` 76 | 77 | ### Edit RBAC role 78 | 79 | There is a bug in the 1.7 release where the `system:node` RBAC role is not properly defined. 80 | 81 | Edit the role and add `list` and `watch` to the `endpoints` section: 82 | ``` 83 | $ kubectl edit clusterrole system:node 84 | ``` 85 | 86 | ### Configure Kubectl 87 | 88 | Use the certs created in the generate certs section to configure local kubectl. [Create a user / role here.](RBAC.md) 89 | 90 | ``` 91 | $ kubectl config set-cluster aws --server=https://[name matching cert SAN from above]:9443 --certificate-authority=${CA_CERT} 92 | $ kubectl config set-credentials aws-admin --certificate-authority=${CA_CERT} --client-key=${ADMIN_KEY} --client-certificate=${ADMIN_CERT} 93 | $ kubectl config set-context aws-admin --cluster=aws --user=aws-admin 94 | $ kubectl config use-context aws-admin 95 | ``` 96 | 97 | _NOTE: Since access to the k8s API will be via the bastion box, you can set up a hosts entry in your system hosts file (e.g. `/etc/hosts`) that points to localhost but matches a SAN created above. This is also why port 9443 is used._ 98 | 99 | ##### Example hosts file: 100 | 101 | Here I added `127.0.0.1 kubernetes` to my local hosts file. 102 | 103 | ``` 104 | ## 105 | # Host Database 106 | # 107 | # localhost is used to configure the loopback interface 108 | # when the system is booting. Do not change this entry.
109 | ## 110 | 127.0.0.1 localhost 111 | 255.255.255.255 broadcasthost 112 | ::1 localhost 113 | 127.0.0.1 kubernetes 114 | ``` 115 | 116 | ## SSH Access Tunneling via Bastion Box 117 | 118 | To access the k8s api via `kubectl` use the following command to create a persistent tunnel: 119 | ``` 120 | ssh -L 9443:10.0.70.50:443 ec2-user@[BastionIPAddress] -N 121 | ``` 122 | 123 | ### Setup access to bastion via OpsWorks 124 | 125 | 1. Login to AWS console and choose OpsWorks 126 | 2. Select Stack 127 | 3. On left menu choose `Permissions` 128 | 4. Top right button choose `Edit` 129 | 5. Select IAM users who need access and choose `Save` 130 | 6. Verify status by refreshing `Deployments` 131 | 132 | _NOTE: Users must have uploaded keys to their profile via OpsWorks. The nice part is no shared keys and access can be controlled centrally._ 133 | -------------------------------------------------------------------------------- /cloudformation/controller.yaml: -------------------------------------------------------------------------------- 1 | etcd: 2 | version: "3.2.4" 3 | name: "10.0.70.50" 4 | advertise_client_urls: "http://10.0.70.50:2379" 5 | initial_advertise_peer_urls: "http://10.0.70.50:2380" 6 | listen_client_urls: "http://0.0.0.0:2379" 7 | listen_peer_urls: "http://10.0.70.50:2380" 8 | initial_cluster: "10.0.70.50=http://10.0.70.50:2380" 9 | 10 | systemd: 11 | units: 12 | - name: "rpcbind.service" 13 | enable: true 14 | - name: "kubelet.service" 15 | enable: true 16 | contents: | 17 | [Service] 18 | Environment=KUBELET_IMAGE_TAG=v1.7.8_coreos.2 19 | Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \ 20 | --volume var-log,kind=host,source=/var/log \ 21 | --mount volume=var-log,target=/var/log \ 22 | --volume dns,kind=host,source=/etc/resolv.conf \ 23 | --mount volume=dns,target=/etc/resolv.conf \ 24 | --volume cni-bin,kind=host,source=/opt/cni/bin \ 25 | --mount volume=cni-bin,target=/opt/cni/bin \ 26 | --volume 
cni-conf-dir,kind=host,source=/etc/cni/net.d \ 27 | --mount volume=cni-conf-dir,target=/etc/cni/net.d" 28 | ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests 29 | ExecStartPre=/usr/bin/mkdir -p /etc/cni/net.d 30 | ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid 31 | ExecStartPre=/usr/bin/mkdir -p /opt/cni/bin 32 | ExecStartPre=/usr/bin/tar xzvf /opt/cni/bin/cni-plugins.tgz -C /opt/cni/bin 33 | ExecStart=/usr/lib/coreos/kubelet-wrapper \ 34 | --api-servers=http://127.0.0.1:8080 \ 35 | --register-schedulable=false \ 36 | --cni-conf-dir=/etc/cni/net.d \ 37 | --network-plugin=cni \ 38 | --cni-bin-dir=/opt/cni/bin \ 39 | --container-runtime=docker \ 40 | --allow-privileged=true \ 41 | --pod-manifest-path=/etc/kubernetes/manifests \ 42 | --hostname-override=ip-10-0-70-50.ec2.internal \ 43 | --cluster_dns=10.3.0.10 \ 44 | --cluster_domain=cluster.local 45 | ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid 46 | Restart=always 47 | RestartSec=10 48 | 49 | [Install] 50 | WantedBy=multi-user.target 51 | 52 | storage: 53 | files: 54 | - path: "/etc/sysctl.d/sysctl.conf" 55 | filesystem: "root" 56 | mode: 644 57 | contents: 58 | inline: | 59 | vm.max_map_count = 262144 60 | - path: "/etc/kubernetes/manifests/kube-apiserver.yaml" 61 | filesystem: "root" 62 | mode: 644 63 | contents: 64 | remote: 65 | url: https://raw.githubusercontent.com/upmc-enterprises/kubernetes-on-aws/master/yaml/apiserver.yaml 66 | - path: "/etc/kubernetes/manifests/kube-proxy.yaml" 67 | filesystem: "root" 68 | mode: 644 69 | contents: 70 | remote: 71 | url: https://raw.githubusercontent.com/upmc-enterprises/kubernetes-on-aws/master/yaml/kube-proxy.yaml 72 | - path: "/etc/kubernetes/manifests/kube-controller-manager.yaml" 73 | filesystem: "root" 74 | mode: 644 75 | contents: 76 | remote: 77 | url: https://raw.githubusercontent.com/upmc-enterprises/kubernetes-on-aws/master/yaml/controller-manager.yaml 78 | - path: "/etc/kubernetes/manifests/kube-scheduler.yaml" 79 
| filesystem: "root" 80 | mode: 644 81 | contents: 82 | remote: 83 | url: https://raw.githubusercontent.com/upmc-enterprises/kubernetes-on-aws/master/yaml/scheduler.yaml 84 | - path: "/etc/kubernetes/bin/kubectl" 85 | filesystem: "root" 86 | mode: 755 87 | contents: 88 | remote: 89 | url: https://storage.googleapis.com/kubernetes-release/release/v1.6.7/bin/linux/amd64/kubectl 90 | - path: "/opt/cni/bin/cni-plugins.tgz" 91 | filesystem: "root" 92 | mode: 755 93 | contents: 94 | remote: 95 | url: https://github.com/containernetworking/plugins/releases/download/v0.6.0-rc2/cni-plugins-amd64-v0.6.0-rc2.tgz 96 | - path: "/etc/kubernetes/ssl/ca.pem" 97 | filesystem: "root" 98 | mode: 644 99 | contents: 100 | inline: | 101 | -----BEGIN CERTIFICATE----- 102 | MIIDGjCCAgKgAwIBAgIJAJ9qEsLLV83PMA0GCSqGSIb3DQEBBQUAMBIxEDAOBgNV 103 | BAMTB2t1YmUtY2EwHhcNMTcwNzMxMTgzOTU0WhcNNDQxMjE2MTgzOTU0WjASMRAw 104 | DgYDVQQDEwdrdWJlLWNhMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA 105 | qjq0ZoCHwERNz04P5yohyYVSNL9F/oOB+dzn7rU4VGGP8zIcESHDRKUi61D4Sayw 106 | KpobZKiu7qEqV/oRwY/Cwp47z+zvArw9oaMEr/sl/S0bkuRUVIwj76WyokQOYWN3 107 | 5znxWun+kOVw1fnCqbzq4oFLdhdwqL8YP96T3MwMKe+XccFFHZQrzSqWLYrahkGe 108 | k+CMbm5zWespHDnPRWOXTYqQR6l7mWLTreAWrE3yanZ80yTkdBHwJ7Fi2ibG88fG 109 | 2KwvWZnwFMvTgJM9SJ4mWEztIkygDyxRPjvBFal65MnxRQJnjkzi2GcefcXTX1TM 110 | RefJ+KmtEMmx1mKXdszNLwIDAQABo3MwcTAdBgNVHQ4EFgQUu6RK2Do+7spfaAm7 111 | eT4ZQsD1EDgwQgYDVR0jBDswOYAUu6RK2Do+7spfaAm7eT4ZQsD1EDihFqQUMBIx 112 | EDAOBgNVBAMTB2t1YmUtY2GCCQCfahLCy1fNzzAMBgNVHRMEBTADAQH/MA0GCSqG 113 | SIb3DQEBBQUAA4IBAQAxEyEsrwT5IDTBBgxaMPOwEPWJqB0KE10m9L6Z6IP7Q/Ee 114 | KaeaaZX8rHOIUGlF1fUdHfYxFw1NV4J5fORum7yXRB3CBftsplzyOW6paeNt5Gal 115 | VHz9cxgNygWHOfbTKFJVa9HEh+pYbp0Ko07Cbj8Ev7bH6aQjU04IfaZEMhI1Y/WQ 116 | AT7m7R27ttIWX2RueVRdBaGNMUweBWg5Smnof+xiuQIoJNzzqFVRUOurvTAJw3rd 117 | FNiDDb8ozm04sYmNN4bgbQyyYNrO30BsNJpA7p9qr92bV3zU4fGC9mndQI1n2u7O 118 | lLGCuXbyMuhTp/upUcJTjxA9vXsfzlZF5OW+WcsR 119 | -----END CERTIFICATE----- 120 | - path: 
"/etc/kubernetes/ssl/apiserver.pem" 121 | filesystem: "root" 122 | mode: 644 123 | contents: 124 | inline: | 125 | -----BEGIN CERTIFICATE----- 126 | MIIDPzCCAiegAwIBAgIJAJxrbQ79ntkKMA0GCSqGSIb3DQEBBQUAMBIxEDAOBgNV 127 | BAMTB2t1YmUtY2EwHhcNMTcwNzMxMTgzOTU3WhcNMTgwNzMxMTgzOTU3WjAZMRcw 128 | FQYDVQQDDA5rdWJlLWFwaXNlcnZlcjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCC 129 | AQoCggEBANDgJn7vsIEuYOmmkZCOQ9VExgjJVOIHIikelkhxPk9ixz6tLkiKAU02 130 | lfO9Bp9TrtvLaK3Cryg2AkJANEG41S8p64K22hIbnfPEkcqTHYyJYZ+zfs/R/Axm 131 | eOnZd2qTZ7TZbVpCjaUnEbY7bJxWqAfS23jFfAlGmMRwzcyDxg6xVmDWNFd/Sxjb 132 | I94VQBeaHoKjHVAZfjSxvY9j/kpysTIbEs3jV2YzF13YrJhluUxX88kb3o5O609X 133 | AfY4RN3kXubX3fASMjzFqeE6Q6jyqo+wlJzrTCzAfxHQi5eNom35ANY6WjNTTf0W 134 | x4cJZD5NujuYZjS6dKGMmoh6bnuBdnMCAwEAAaOBkDCBjTAJBgNVHRMEAjAAMAsG 135 | A1UdDwQEAwIF4DBzBgNVHREEbDBqggprdWJlcm5ldGVzghJrdWJlcm5ldGVzLmRl 136 | ZmF1bHSCFmt1YmVybmV0ZXMuZGVmYXVsdC5zdmOCJGt1YmVybmV0ZXMuZGVmYXVs 137 | dC5zdmMuY2x1c3Rlci5sb2NhbIcECgMAAYcECgBGMjANBgkqhkiG9w0BAQUFAAOC 138 | AQEAdxqlgPoymDQTM3vX12u3kuTGAQvPMNzWkBBNAjRa2D+HJSMk2tYWkXuDm/OU 139 | ghZPlfvdDe4UkdtrZ82zcU8CZIfUEjxm1aZUSCBHXQh+TkdhhEg4VXyRN/8meR5z 140 | c/o1LfPEPA8z9QcJYVDyZUBObJUeITtjzQEAOikcak2QXaphOFM7EEDkwZ3pDeQ6 141 | BB7iAtq1GPGAobEXWwWTB1x58vPYmIbXjcSupHyzl7dhuYdo7GR9xRicfnilwuYk 142 | GVOvOw8oquJa/Up7ycZR9K9B4pTIVpzl3WBDRJ1nhMVn3R6IN3F3GNtw6q+NvQe4 143 | YcvDOQbJzryoVhhrGPEykyW7ZA== 144 | -----END CERTIFICATE----- 145 | - path: "/etc/kubernetes/ssl/apiserver-key.pem" 146 | filesystem: "root" 147 | mode: 644 148 | contents: 149 | inline: | 150 | -----BEGIN RSA PRIVATE KEY----- 151 | MIIEpQIBAAKCAQEA0OAmfu+wgS5g6aaRkI5D1UTGCMlU4gciKR6WSHE+T2LHPq0u 152 | SIoBTTaV870Gn1Ou28torcKvKDYCQkA0QbjVLynrgrbaEhud88SRypMdjIlhn7N+ 153 | z9H8DGZ46dl3apNntNltWkKNpScRtjtsnFaoB9LbeMV8CUaYxHDNzIPGDrFWYNY0 154 | V39LGNsj3hVAF5oegqMdUBl+NLG9j2P+SnKxMhsSzeNXZjMXXdismGW5TFfzyRve 155 | jk7rT1cB9jhE3eRe5tfd8BIyPMWp4TpDqPKqj7CUnOtMLMB/EdCLl42ibfkA1jpa 156 | 
M1NN/RbHhwlkPk26O5hmNLp0oYyaiHpue4F2cwIDAQABAoIBAQDOzC2w3TQeIcHX 157 | co+J1CA6pTV/+3zrr25V0a+ul1e+lyh22FULgn7ZaGK8B3joA50KhW/lIOvz3s0L 158 | tK9IJmwCnvlJ2Ck9ZlRSxVometL1kgqyZ670qIxn5oht1l2RidFSTzYh9+RvD6hM 159 | iLb2biE8Zbne737nXBrh2mEWy5wqbAOwpXUokJqfVfmVQB1M0b+m2lbQMpow9J7/ 160 | bWn9ZZMWnklGlrYojQBdCOynGy1MCPGUZ9EjviWl3/gvIBoyFCr2KLKY5a46XdJf 161 | 0j3JDeNWt6rAmpQHQjyxRKzueZWY1X8d/40FCHb8Ami++6CAgJETfZMm+m82EfHj 162 | Kja1aUJpAoGBAPonphhtpOouvcer9imZ1OpVUmNnqQpkTGiWr/BlYeLcVKSPozHm 163 | 8VUNHkSf8JMarOVnxnDWVyUxKUP9eRS5pFFl5jOH6qIyQsJD60Q5/s/dhtbQ4idS 164 | t7adoXMKbZszflcvSrhlZ1TccxclmHtxyeicL4NBZ8I4ujYXCj3PlCFvAoGBANXB 165 | lM4tJb5g2bQ6w12x57uC3Uis9MXpnKSsVW/eZko7pp92ik1TRsIdLghRLvJEaHhw 166 | sRZB/35brIFER/uvrT8j1z94oCueFyTXXQN36vq8B9QWcwuFyUcCgaZIZLrc3YMS 167 | VRG8+2lel9d/hgkJXX2ToEUleqfVbGgjunm0BPE9AoGBAI8TlmBqdeSrj0hhBo6M 168 | uca9vj200G5tJ3a6mS66Dd6ffpoQvZqRKH8o3aMKh6LbowAi9tEbBwTytVN56oL8 169 | GwujaKMYng7fCGfsSOfg8+kYH0NGfdNX8FO2nN0bnc0jCqP7HJWTCiLzY7BdhHU2 170 | g/FTQ6mjAyGHKJo/W1A3JdZpAoGAGQAQGGEdZfvL2pF44g95q+utV9+qrS8afAQP 171 | 5gqb6hi57zKdEFgqEW/6P0zHcdxgX53GiHTlnfC451GGHcC5QYY+mZTRHujZihyK 172 | K2quF+8/9yU9BV77YIvBgCI9bcGBQuA1BOMWgIdouPKYSZxHy/UlLJEqnFCQ4kkz 173 | eSJ95X0CgYEAn9A/tbXPD0Ympb0KODoVp+3YMlXg2z8uc7WMgnhhdW8QgxaTbxo2 174 | QCI9u8MZOCRjEZ1iQ9O6a1/vYcshTg1/QAt5xei6feOAU8dLKIkZCumwF60EkjeJ 175 | GiYFobOB58i7RTw5RxHfGWRDrmptguuDu8DMXeh3kiXyotDJ72aMF5Q= 176 | -----END RSA PRIVATE KEY----- 177 | 178 | locksmith: 179 | reboot_strategy: "off" -------------------------------------------------------------------------------- /cloudformation/k8s-template.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | AWSTemplateFormatVersion: '2010-09-09' 3 | Description: Kubernetes AWS Cluster 4 | Mappings: 5 | RegionMap: 6 | eu-central-1: 7 | stable: ami-eb3b6198 8 | eu-west-1: 9 | stable: ami-e3d6ab90 10 | us-east-1: 11 | stable: ami-6bb93c7d 12 | us-east-2: 13 | stable: ami-9995b4fc 14 | us-west-1: 15 | 
stable: ami-f7df8897 16 | us-west-2: 17 | stable: ami-d0e54eb0 18 | Parameters: 19 | ClusterName: 20 | Description: Name of Kubernetes cluster (Make this unique!) 21 | Type: String 22 | RouteTableNAT: 23 | Description: Id of the route table to route to the NAT instance 24 | Type: String 25 | k8sSubnetCidrBlockPrivateAZ1: 26 | Default: 10.0.70.0/24 27 | Description: CIDR block for kubernetes subnet (AZ1) 28 | Type: String 29 | k8sSubnetCidrBlockPrivateAZ2: 30 | Default: 10.0.71.0/24 31 | Description: CIDR block for kubernetes subnet (AZ2) 32 | Type: String 33 | k8sSubnetCidrBlockPrivateAZ3: 34 | Default: 10.0.72.0/24 35 | Description: CIDR block for kubernetes subnet (AZ3) 36 | Type: String 37 | EC2BootVolumeSizeGB: 38 | Default: '30' 39 | Description: Size in GB for boot volumes 40 | Type: String 41 | ApplicationVPC: 42 | Default: '' 43 | Description: VPC for kubernetes application to be deployed into 44 | Type: AWS::EC2::VPC::Id 45 | AvailabilityZone1: 46 | Default: us-east-1c 47 | Description: First availability zone 48 | Type: AWS::EC2::AvailabilityZone::Name 49 | AvailabilityZone2: 50 | Default: us-east-1d 51 | Description: Second availability zone 52 | Type: AWS::EC2::AvailabilityZone::Name 53 | AvailabilityZone3: 54 | Default: us-east-1e 55 | Description: Third availability zone 56 | Type: AWS::EC2::AvailabilityZone::Name 57 | ControllerInstanceType: 58 | Default: m3.medium 59 | Description: EC2 instance type used for each controller instance 60 | Type: String 61 | AllowedValues: 62 | - t2.micro 63 | - t2.large 64 | - m3.medium 65 | - m3.large 66 | - m3.xlarge 67 | - m3.2xlarge 68 | - m4.large 69 | - m4.xlarge 70 | KeyName: 71 | Type: AWS::EC2::KeyPair::KeyName 72 | Description: Name of SSH keypair to authorize on each instance 73 | ReleaseChannel: 74 | AllowedValues: 75 | - stable 76 | Default: stable 77 | Description: CoreOS Linux release channel to use as instance operating system 78 | Type: String 79 | WorkerCount: 80 | Default: '1' 81 | Description: 
Number of worker instances to create, may be modified later 82 | Type: String 83 | WorkerInstanceType: 84 | Default: m3.medium 85 | Description: EC2 instance type used for each worker instance 86 | Type: String 87 | AllowedValues: 88 | - t2.micro 89 | - t2.large 90 | - m3.medium 91 | - m3.large 92 | - m3.xlarge 93 | - m3.2xlarge 94 | - m4.large 95 | - m4.xlarge 96 | Resources: 97 | AlarmControllerRecover: 98 | Properties: 99 | AlarmActions: 100 | - Fn::Join: 101 | - '' 102 | - - 'arn:aws:automate:' 103 | - Ref: AWS::Region 104 | - ":ec2:recover" 105 | AlarmDescription: Trigger a recovery when system check fails for 5 consecutive 106 | minutes. 107 | ComparisonOperator: GreaterThanThreshold 108 | Dimensions: 109 | - Name: InstanceId 110 | Value: 111 | Ref: InstanceController 112 | EvaluationPeriods: '5' 113 | MetricName: StatusCheckFailed_System 114 | Namespace: AWS/EC2 115 | Period: '60' 116 | Statistic: Minimum 117 | Threshold: '0' 118 | Type: AWS::CloudWatch::Alarm 119 | AutoScaleWorker: 120 | Properties: 121 | AvailabilityZones: 122 | - Ref: AvailabilityZone1 123 | - Ref: AvailabilityZone2 124 | - Ref: AvailabilityZone3 125 | DesiredCapacity: 126 | Ref: WorkerCount 127 | HealthCheckGracePeriod: 600 128 | HealthCheckType: EC2 129 | LaunchConfigurationName: 130 | Ref: LaunchConfigurationWorker 131 | MaxSize: 132 | Ref: WorkerCount 133 | MinSize: 134 | Ref: WorkerCount 135 | Tags: 136 | - Key: KubernetesCluster 137 | PropagateAtLaunch: 'true' 138 | Value: 139 | Ref: ClusterName 140 | - Key: Name 141 | PropagateAtLaunch: 'true' 142 | Value: kube-aws-worker 143 | VPCZoneIdentifier: 144 | - Ref: k8sSubnetA 145 | - Ref: k8sSubnetB 146 | - Ref: k8sSubnetC 147 | Type: AWS::AutoScaling::AutoScalingGroup 148 | IAMInstanceProfileController: 149 | Properties: 150 | Path: "/" 151 | Roles: 152 | - Ref: IAMRoleController 153 | Type: AWS::IAM::InstanceProfile 154 | IAMInstanceProfileWorker: 155 | Properties: 156 | Path: "/" 157 | Roles: 158 | - Ref: IAMRoleWorker 159 | Type: 
AWS::IAM::InstanceProfile 160 | IAMRoleController: 161 | Properties: 162 | AssumeRolePolicyDocument: 163 | Statement: 164 | - Action: 165 | - sts:AssumeRole 166 | Effect: Allow 167 | Principal: 168 | Service: 169 | - ec2.amazonaws.com 170 | Version: '2012-10-17' 171 | Path: "/" 172 | Policies: 173 | - PolicyDocument: 174 | Statement: 175 | - Action: ec2:* 176 | Effect: Allow 177 | Resource: "*" 178 | - Action: elasticloadbalancing:* 179 | Effect: Allow 180 | Resource: "*" 181 | Version: '2012-10-17' 182 | PolicyName: root 183 | Type: AWS::IAM::Role 184 | IAMRoleWorker: 185 | Properties: 186 | AssumeRolePolicyDocument: 187 | Statement: 188 | - Action: 189 | - sts:AssumeRole 190 | Effect: Allow 191 | Principal: 192 | Service: 193 | - ec2.amazonaws.com 194 | Version: '2012-10-17' 195 | Path: "/" 196 | Policies: 197 | - PolicyDocument: 198 | Statement: 199 | - Action: ec2:Describe* 200 | Effect: Allow 201 | Resource: "*" 202 | - Action: ec2:AttachVolume 203 | Effect: Allow 204 | Resource: "*" 205 | - Action: ec2:DetachVolume 206 | Effect: Allow 207 | Resource: "*" 208 | - Effect: Allow 209 | Action: 210 | - ecr:GetAuthorizationToken 211 | - ecr:BatchCheckLayerAvailability 212 | - ecr:GetDownloadUrlForLayer 213 | - ecr:GetRepositoryPolicy 214 | - ecr:DescribeRepositories 215 | - ecr:ListImages 216 | - ecr:BatchGetImage 217 | Resource: "*" 218 | Version: '2012-10-17' 219 | PolicyName: root 220 | Type: AWS::IAM::Role 221 | InstanceController: 222 | Properties: 223 | AvailabilityZone: 224 | Ref: AvailabilityZone1 225 | IamInstanceProfile: 226 | Ref: IAMInstanceProfileController 227 | BlockDeviceMappings: 228 | - DeviceName: "/dev/xvda" 229 | Ebs: 230 | VolumeSize: 231 | Ref: EC2BootVolumeSizeGB 232 | ImageId: 233 | Fn::FindInMap: 234 | - RegionMap 235 | - Ref: AWS::Region 236 | - Ref: ReleaseChannel 237 | InstanceType: 238 | Ref: ControllerInstanceType 239 | KeyName: 240 | Ref: KeyName 241 | NetworkInterfaces: 242 | - AssociatePublicIpAddress: false 243 | 
DeleteOnTermination: true 244 | DeviceIndex: '0' 245 | GroupSet: 246 | - Ref: SecurityGroupController 247 | PrivateIpAddress: 10.0.70.50 248 | SubnetId: 249 | Ref: k8sSubnetA 250 | Tags: 251 | - Key: KubernetesCluster 252 | Value: 253 | Ref: ClusterName 254 | - Key: Name 255 | Value: kube-aws-controller 256 | UserData: 257 | Fn::Base64: | 258 | {"ignition":{"version":"2.0.0","config":{}},"storage":{"files":[{"filesystem":"root","path":"/etc/sysctl.d/sysctl.conf","contents":{"source":"data:,vm.max_map_count%20%3D%20262144%0A","verification":{}},"mode":644,"user":{},"group":{}},{"filesystem":"root","path":"/etc/kubernetes/manifests/kube-apiserver.yaml","contents":{"source":"https://raw.githubusercontent.com/upmc-enterprises/kubernetes-on-aws/master/yaml/apiserver.yaml","verification":{}},"mode":644,"user":{},"group":{}},{"filesystem":"root","path":"/etc/kubernetes/manifests/kube-proxy.yaml","contents":{"source":"https://raw.githubusercontent.com/upmc-enterprises/kubernetes-on-aws/master/yaml/kube-proxy.yaml","verification":{}},"mode":644,"user":{},"group":{}},{"filesystem":"root","path":"/etc/kubernetes/manifests/kube-controller-manager.yaml","contents":{"source":"https://raw.githubusercontent.com/upmc-enterprises/kubernetes-on-aws/master/yaml/controller-manager.yaml","verification":{}},"mode":644,"user":{},"group":{}},{"filesystem":"root","path":"/etc/kubernetes/manifests/kube-scheduler.yaml","contents":{"source":"https://raw.githubusercontent.com/upmc-enterprises/kubernetes-on-aws/master/yaml/scheduler.yaml","verification":{}},"mode":644,"user":{},"group":{}},{"filesystem":"root","path":"/etc/kubernetes/bin/kubectl","contents":{"source":"https://storage.googleapis.com/kubernetes-release/release/v1.6.7/bin/linux/amd64/kubectl","verification":{}},"mode":755,"user":{},"group":{}},{"filesystem":"root","path":"/opt/cni/bin/cni-plugins.tgz","contents":{"source":"https://github.com/containernetworking/plugins/releases/download/v0.6.0-rc2/cni-plugins-amd64-v0.6.0-rc2.tgz","
verification":{}},"mode":755,"user":{},"group":{}},{"filesystem":"root","path":"/etc/kubernetes/ssl/ca.pem","contents":{"source":"data:,-----BEGIN%20CERTIFICATE-----%0AMIIDGjCCAgKgAwIBAgIJAJ9qEsLLV83PMA0GCSqGSIb3DQEBBQUAMBIxEDAOBgNV%0ABAMTB2t1YmUtY2EwHhcNMTcwNzMxMTgzOTU0WhcNNDQxMjE2MTgzOTU0WjASMRAw%0ADgYDVQQDEwdrdWJlLWNhMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA%0Aqjq0ZoCHwERNz04P5yohyYVSNL9F%2FoOB%2Bdzn7rU4VGGP8zIcESHDRKUi61D4Sayw%0AKpobZKiu7qEqV%2FoRwY%2FCwp47z%2BzvArw9oaMEr%2Fsl%2FS0bkuRUVIwj76WyokQOYWN3%0A5znxWun%2BkOVw1fnCqbzq4oFLdhdwqL8YP96T3MwMKe%2BXccFFHZQrzSqWLYrahkGe%0Ak%2BCMbm5zWespHDnPRWOXTYqQR6l7mWLTreAWrE3yanZ80yTkdBHwJ7Fi2ibG88fG%0A2KwvWZnwFMvTgJM9SJ4mWEztIkygDyxRPjvBFal65MnxRQJnjkzi2GcefcXTX1TM%0ARefJ%2BKmtEMmx1mKXdszNLwIDAQABo3MwcTAdBgNVHQ4EFgQUu6RK2Do%2B7spfaAm7%0AeT4ZQsD1EDgwQgYDVR0jBDswOYAUu6RK2Do%2B7spfaAm7eT4ZQsD1EDihFqQUMBIx%0AEDAOBgNVBAMTB2t1YmUtY2GCCQCfahLCy1fNzzAMBgNVHRMEBTADAQH%2FMA0GCSqG%0ASIb3DQEBBQUAA4IBAQAxEyEsrwT5IDTBBgxaMPOwEPWJqB0KE10m9L6Z6IP7Q%2FEe%0AKaeaaZX8rHOIUGlF1fUdHfYxFw1NV4J5fORum7yXRB3CBftsplzyOW6paeNt5Gal%0AVHz9cxgNygWHOfbTKFJVa9HEh%2BpYbp0Ko07Cbj8Ev7bH6aQjU04IfaZEMhI1Y%2FWQ%0AAT7m7R27ttIWX2RueVRdBaGNMUweBWg5Smnof%2BxiuQIoJNzzqFVRUOurvTAJw3rd%0AFNiDDb8ozm04sYmNN4bgbQyyYNrO30BsNJpA7p9qr92bV3zU4fGC9mndQI1n2u7O%0AlLGCuXbyMuhTp%2FupUcJTjxA9vXsfzlZF5OW%2BWcsR%0A-----END%20CERTIFICATE-----%20%0A","verification":{}},"mode":644,"user":{},"group":{}},{"filesystem":"root","path":"/etc/kubernetes/ssl/apiserver.pem","contents":{"source":"data:,-----BEGIN%20CERTIFICATE-----%0AMIIDPzCCAiegAwIBAgIJAJxrbQ79ntkKMA0GCSqGSIb3DQEBBQUAMBIxEDAOBgNV%0ABAMTB2t1YmUtY2EwHhcNMTcwNzMxMTgzOTU3WhcNMTgwNzMxMTgzOTU3WjAZMRcw%0AFQYDVQQDDA5rdWJlLWFwaXNlcnZlcjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCC%0AAQoCggEBANDgJn7vsIEuYOmmkZCOQ9VExgjJVOIHIikelkhxPk9ixz6tLkiKAU02%0AlfO9Bp9TrtvLaK3Cryg2AkJANEG41S8p64K22hIbnfPEkcqTHYyJYZ%2Bzfs%2FR%2FAxm%0AeOnZd2qTZ7TZbVpCjaUnEbY7bJxWqAfS23jFfAlGmMRwzcyDxg6xVmDWNFd%2FSxjb%0AI94VQBeaHoKjHVAZfjSxvY9j%2FkpysTIbEs3jV2YzF13YrJhluUxX88
kb3o5O609X%0AAfY4RN3kXubX3fASMjzFqeE6Q6jyqo%2BwlJzrTCzAfxHQi5eNom35ANY6WjNTTf0W%0Ax4cJZD5NujuYZjS6dKGMmoh6bnuBdnMCAwEAAaOBkDCBjTAJBgNVHRMEAjAAMAsG%0AA1UdDwQEAwIF4DBzBgNVHREEbDBqggprdWJlcm5ldGVzghJrdWJlcm5ldGVzLmRl%0AZmF1bHSCFmt1YmVybmV0ZXMuZGVmYXVsdC5zdmOCJGt1YmVybmV0ZXMuZGVmYXVs%0AdC5zdmMuY2x1c3Rlci5sb2NhbIcECgMAAYcECgBGMjANBgkqhkiG9w0BAQUFAAOC%0AAQEAdxqlgPoymDQTM3vX12u3kuTGAQvPMNzWkBBNAjRa2D%2BHJSMk2tYWkXuDm%2FOU%0AghZPlfvdDe4UkdtrZ82zcU8CZIfUEjxm1aZUSCBHXQh%2BTkdhhEg4VXyRN%2F8meR5z%0Ac%2Fo1LfPEPA8z9QcJYVDyZUBObJUeITtjzQEAOikcak2QXaphOFM7EEDkwZ3pDeQ6%0ABB7iAtq1GPGAobEXWwWTB1x58vPYmIbXjcSupHyzl7dhuYdo7GR9xRicfnilwuYk%0AGVOvOw8oquJa%2FUp7ycZR9K9B4pTIVpzl3WBDRJ1nhMVn3R6IN3F3GNtw6q%2BNvQe4%0AYcvDOQbJzryoVhhrGPEykyW7ZA%3D%3D%0A-----END%20CERTIFICATE-----%0A","verification":{}},"mode":644,"user":{},"group":{}},{"filesystem":"root","path":"/etc/kubernetes/ssl/apiserver-key.pem","contents":{"source":"data:,-----BEGIN%20RSA%20PRIVATE%20KEY-----%0AMIIEpQIBAAKCAQEA0OAmfu%2BwgS5g6aaRkI5D1UTGCMlU4gciKR6WSHE%2BT2LHPq0u%0ASIoBTTaV870Gn1Ou28torcKvKDYCQkA0QbjVLynrgrbaEhud88SRypMdjIlhn7N%2B%0Az9H8DGZ46dl3apNntNltWkKNpScRtjtsnFaoB9LbeMV8CUaYxHDNzIPGDrFWYNY0%0AV39LGNsj3hVAF5oegqMdUBl%2BNLG9j2P%2BSnKxMhsSzeNXZjMXXdismGW5TFfzyRve%0Ajk7rT1cB9jhE3eRe5tfd8BIyPMWp4TpDqPKqj7CUnOtMLMB%2FEdCLl42ibfkA1jpa%0AM1NN%2FRbHhwlkPk26O5hmNLp0oYyaiHpue4F2cwIDAQABAoIBAQDOzC2w3TQeIcHX%0Aco%2BJ1CA6pTV%2F%2B3zrr25V0a%2Bul1e%2Blyh22FULgn7ZaGK8B3joA50KhW%2FlIOvz3s0L%0AtK9IJmwCnvlJ2Ck9ZlRSxVometL1kgqyZ670qIxn5oht1l2RidFSTzYh9%2BRvD6hM%0AiLb2biE8Zbne737nXBrh2mEWy5wqbAOwpXUokJqfVfmVQB1M0b%2Bm2lbQMpow9J7%2F%0AbWn9ZZMWnklGlrYojQBdCOynGy1MCPGUZ9EjviWl3%2FgvIBoyFCr2KLKY5a46XdJf%0A0j3JDeNWt6rAmpQHQjyxRKzueZWY1X8d%2F40FCHb8Ami%2B%2B6CAgJETfZMm%2Bm82EfHj%0AKja1aUJpAoGBAPonphhtpOouvcer9imZ1OpVUmNnqQpkTGiWr%2FBlYeLcVKSPozHm%0A8VUNHkSf8JMarOVnxnDWVyUxKUP9eRS5pFFl5jOH6qIyQsJD60Q5%2Fs%2FdhtbQ4idS%0At7adoXMKbZszflcvSrhlZ1TccxclmHtxyeicL4NBZ8I4ujYXCj3PlCFvAoGBANXB%0AlM4tJb5g2bQ6w12x57uC3Uis9MXpnKSsVW%2FeZko7pp92ik1TRsIdLghRLv
JEaHhw%0AsRZB%2F35brIFER%2FuvrT8j1z94oCueFyTXXQN36vq8B9QWcwuFyUcCgaZIZLrc3YMS%0AVRG8%2B2lel9d%2FhgkJXX2ToEUleqfVbGgjunm0BPE9AoGBAI8TlmBqdeSrj0hhBo6M%0Auca9vj200G5tJ3a6mS66Dd6ffpoQvZqRKH8o3aMKh6LbowAi9tEbBwTytVN56oL8%0AGwujaKMYng7fCGfsSOfg8%2BkYH0NGfdNX8FO2nN0bnc0jCqP7HJWTCiLzY7BdhHU2%0Ag%2FFTQ6mjAyGHKJo%2FW1A3JdZpAoGAGQAQGGEdZfvL2pF44g95q%2ButV9%2BqrS8afAQP%0A5gqb6hi57zKdEFgqEW%2F6P0zHcdxgX53GiHTlnfC451GGHcC5QYY%2BmZTRHujZihyK%0AK2quF%2B8%2F9yU9BV77YIvBgCI9bcGBQuA1BOMWgIdouPKYSZxHy%2FUlLJEqnFCQ4kkz%0AeSJ95X0CgYEAn9A%2FtbXPD0Ympb0KODoVp%2B3YMlXg2z8uc7WMgnhhdW8QgxaTbxo2%0AQCI9u8MZOCRjEZ1iQ9O6a1%2FvYcshTg1%2FQAt5xei6feOAU8dLKIkZCumwF60EkjeJ%0AGiYFobOB58i7RTw5RxHfGWRDrmptguuDu8DMXeh3kiXyotDJ72aMF5Q%3D%0A-----END%20RSA%20PRIVATE%20KEY-----%20%0A","verification":{}},"mode":644,"user":{},"group":{}},{"filesystem":"root","path":"/etc/coreos/update.conf","contents":{"source":"data:,%0AREBOOT_STRATEGY%3D%22off%22","verification":{}},"mode":420,"user":{},"group":{}}]},"systemd":{"units":[{"name":"etcd-member.service","enable":true,"dropins":[{"name":"20-clct-etcd-member.conf","contents":"[Service]\nEnvironment=\"ETCD_IMAGE_TAG=v3.2.4\"\nExecStart=\nExecStart=/usr/lib/coreos/etcd-wrapper $ETCD_OPTS \\\n --name=\"10.0.70.50\" \\\n --listen-peer-urls=\"http://10.0.70.50:2380\" \\\n --listen-client-urls=\"http://0.0.0.0:2379\" \\\n --initial-advertise-peer-urls=\"http://10.0.70.50:2380\" \\\n --initial-cluster=\"10.0.70.50=http://10.0.70.50:2380\" \\\n --advertise-client-urls=\"http://10.0.70.50:2379\""}]},{"name":"rpcbind.service","enable":true},{"name":"kubelet.service","enable":true,"contents":"[Service]\nEnvironment=KUBELET_IMAGE_TAG=v1.7.8_coreos.2\nEnvironment=\"RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \\\n --volume var-log,kind=host,source=/var/log \\\n --mount volume=var-log,target=/var/log \\\n --volume dns,kind=host,source=/etc/resolv.conf \\\n --mount volume=dns,target=/etc/resolv.conf \\\n --volume cni-bin,kind=host,source=/opt/cni/bin \\\n --mount 
volume=cni-bin,target=/opt/cni/bin \\\n --volume cni-conf-dir,kind=host,source=/etc/cni/net.d \\\n --mount volume=cni-conf-dir,target=/etc/cni/net.d\"\nExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests\nExecStartPre=/usr/bin/mkdir -p /etc/cni/net.d\nExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid\nExecStartPre=/usr/bin/mkdir -p /opt/cni/bin\nExecStartPre=/usr/bin/tar xzvf /opt/cni/bin/cni-plugins.tgz -C /opt/cni/bin\nExecStart=/usr/lib/coreos/kubelet-wrapper \\\n --api-servers=http://127.0.0.1:8080 \\\n --register-schedulable=false \\\n --cni-conf-dir=/etc/cni/net.d \\\n --network-plugin=cni \\\n --cni-bin-dir=/opt/cni/bin \\\n --container-runtime=docker \\\n --allow-privileged=true \\\n --pod-manifest-path=/etc/kubernetes/manifests \\\n --hostname-override=ip-10-0-70-50.ec2.internal \\\n --cluster_dns=10.3.0.10 \\\n --cluster_domain=cluster.local\nExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid\nRestart=always\nRestartSec=10\n\n[Install]\nWantedBy=multi-user.target\n"}]},"networkd":{},"passwd":{}}
259 |     Type: AWS::EC2::Instance
260 |   LaunchConfigurationWorker:
261 |     Properties:
262 |       BlockDeviceMappings:
263 |         - DeviceName: "/dev/xvda"
264 |           Ebs:
265 |             VolumeSize:
266 |               Ref: EC2BootVolumeSizeGB
267 |       IamInstanceProfile:
268 |         Ref: IAMInstanceProfileWorker
269 |       ImageId:
270 |         Fn::FindInMap:
271 |           - RegionMap
272 |           - Ref: AWS::Region
273 |           - Ref: ReleaseChannel
274 |       InstanceType:
275 |         Ref: WorkerInstanceType
276 |       KeyName:
277 |         Ref: KeyName
278 |       SecurityGroups:
279 |         - Ref: SecurityGroupWorker
280 |       UserData:
281 |         Fn::Base64: |
282 | 
{"ignition":{"version":"2.0.0","config":{}},"storage":{"files":[{"filesystem":"root","path":"/etc/sysctl.d/sysctl.conf","contents":{"source":"data:,vm.max_map_count%20%3D%20262144%0A","verification":{}},"mode":644,"user":{},"group":{}},{"filesystem":"root","path":"/etc/kubernetes/manifests/kube-proxy.yaml","contents":{"source":"https://raw.githubusercontent.com/upmc-enterprises/kubernetes-on-aws/master/yaml/kube-proxy-node.yaml","verification":{}},"mode":644,"user":{},"group":{}},{"filesystem":"root","path":"/etc/kubernetes/worker-kubeconfig.yaml","contents":{"source":"data:,apiVersion%3A%20v1%0Akind%3A%20Config%0Aclusters%3A%0A-%20name%3A%20local%0A%20%20cluster%3A%0A%20%20%20%20certificate-authority%3A%20%2Fetc%2Fkubernetes%2Fssl%2Fca.pem%0Ausers%3A%0A-%20name%3A%20kubelet%0A%20%20user%3A%0A%20%20%20%20client-certificate%3A%20%2Fetc%2Fkubernetes%2Fssl%2Fworker.pem%0A%20%20%20%20client-key%3A%20%2Fetc%2Fkubernetes%2Fssl%2Fworker-key.pem%0Acontexts%3A%0A-%20context%3A%0A%20%20%20%20cluster%3A%20local%0A%20%20%20%20user%3A%20kubelet%0A%20%20name%3A%20kubelet-context%0Acurrent-context%3A%20kubelet-context%0A","verification":{}},"mode":644,"user":{},"group":{}},{"filesystem":"root","path":"/opt/cni/bin/cni-plugins.tgz","contents":{"source":"https://github.com/containernetworking/plugins/releases/download/v0.6.0-rc2/cni-plugins-amd64-v0.6.0-rc2.tgz","verification":{}},"mode":755,"user":{},"group":{}},{"filesystem":"root","path":"/etc/kubernetes/ssl/ca.pem","contents":{"source":"data:,-----BEGIN%20CERTIFICATE-----%0AMIIDGjCCAgKgAwIBAgIJAJ9qEsLLV83PMA0GCSqGSIb3DQEBBQUAMBIxEDAOBgNV%0ABAMTB2t1YmUtY2EwHhcNMTcwNzMxMTgzOTU0WhcNNDQxMjE2MTgzOTU0WjASMRAw%0ADgYDVQQDEwdrdWJlLWNhMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA%0Aqjq0ZoCHwERNz04P5yohyYVSNL9F%2FoOB%2Bdzn7rU4VGGP8zIcESHDRKUi61D4Sayw%0AKpobZKiu7qEqV%2FoRwY%2FCwp47z%2BzvArw9oaMEr%2Fsl%2FS0bkuRUVIwj76WyokQOYWN3%0A5znxWun%2BkOVw1fnCqbzq4oFLdhdwqL8YP96T3MwMKe%2BXccFFHZQrzSqWLYrahkGe%0Ak%2BCMbm5zWespHDnPRWOXTYqQR6l7mWLTreAWrE3ya
nZ80yTkdBHwJ7Fi2ibG88fG%0A2KwvWZnwFMvTgJM9SJ4mWEztIkygDyxRPjvBFal65MnxRQJnjkzi2GcefcXTX1TM%0ARefJ%2BKmtEMmx1mKXdszNLwIDAQABo3MwcTAdBgNVHQ4EFgQUu6RK2Do%2B7spfaAm7%0AeT4ZQsD1EDgwQgYDVR0jBDswOYAUu6RK2Do%2B7spfaAm7eT4ZQsD1EDihFqQUMBIx%0AEDAOBgNVBAMTB2t1YmUtY2GCCQCfahLCy1fNzzAMBgNVHRMEBTADAQH%2FMA0GCSqG%0ASIb3DQEBBQUAA4IBAQAxEyEsrwT5IDTBBgxaMPOwEPWJqB0KE10m9L6Z6IP7Q%2FEe%0AKaeaaZX8rHOIUGlF1fUdHfYxFw1NV4J5fORum7yXRB3CBftsplzyOW6paeNt5Gal%0AVHz9cxgNygWHOfbTKFJVa9HEh%2BpYbp0Ko07Cbj8Ev7bH6aQjU04IfaZEMhI1Y%2FWQ%0AAT7m7R27ttIWX2RueVRdBaGNMUweBWg5Smnof%2BxiuQIoJNzzqFVRUOurvTAJw3rd%0AFNiDDb8ozm04sYmNN4bgbQyyYNrO30BsNJpA7p9qr92bV3zU4fGC9mndQI1n2u7O%0AlLGCuXbyMuhTp%2FupUcJTjxA9vXsfzlZF5OW%2BWcsR%0A-----END%20CERTIFICATE-----%20%0A","verification":{}},"mode":644,"user":{},"group":{}},{"filesystem":"root","path":"/etc/kubernetes/ssl/worker.pem","contents":{"source":"data:,-----BEGIN%20CERTIFICATE-----%0AMIIDDTCCAfWgAwIBAgIJAJxrbQ79ntkQMA0GCSqGSIb3DQEBBQUAMBIxEDAOBgNV%0ABAMTB2t1YmUtY2EwHhcNMTcwODExMTkyNTI3WhcNMTgwODExMTkyNTI3WjAtMRQw%0AEgYDVQQDDAtzeXN0ZW06bm9kZTEVMBMGA1UECgwMc3lzdGVtOm5vZGVzMIIBIjAN%0ABgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArt7YCqAdRUrlDJIspcwg%2BGqPNwq0%0ASq3FT8ug42QXu4C%2BFxeaa36BLycrSm2V6dmbCkImftUC%2F6u1IO0aJowrasO24Zi5%0A0Y84WVGIvGXLmU0D8Q6kIvTXqNDXnXAh%2FQHdF%2BOa4gMl2EPSuiWmoqhbjuith%2BKv%0AUM2QLFyWI7pihIPEu%2B4TWLtslp6q4VbwoLz7X2ggqCv0Ir%2FlYzhFzecVBgpXldq9%0AR7wVsXetiFwNB3Wck4j9McVp7Z51D%2FZ9%2BHF%2B1qKU4%2Bft0fk9yvdkL%2F85tbnBUwmR%0Az6uKeZnLF1NSVFjiB9Ub4OjPzMj72WfSPfiR9lb%2BR%2FNIGRBjszVNaOP0wwIDAQAB%0Ao0swSTAJBgNVHRMEAjAAMAsGA1UdDwQEAwIF4DAvBgNVHREEKDAmghQqLiouY2x1%0Ac3Rlci5pbnRlcm5hbIIOKi5lYzIuaW50ZXJuYWwwDQYJKoZIhvcNAQEFBQADggEB%0AAB01lllmgtQy%2BEhyKgcfUPYyQwXo1EAho9ND4b9q%2F7Z7RPdzGx7dFI6SQq0cQZxk%0A%2BB2ToPkEp4eS7RMndSMO6%2BwxtAq4gS9HE3d6zf5chEBN9u%2BSqtTJ9M4DyQ%2BZf%2FK1%0AqK842Bsxn3BnDKXylvF3aNUPWdWZf72Uqg8mYr1orWjWRLPv44b5LoUjv862kuMI%0AjVTjxEkHvqDEI7qluQhlxfu7AhSX8yn796jzPyNcW6XD7sbur6mVEkO%2BlUMLlh2M%0A2yEby5ZQe0Gfj00m%2BFUvK5VuT4HHuuhiMahn00gT0Jm5cQ4
DpSGdJ7JwI2FcKooT%0Ac0P5hLXSVSocengkBTvynqs%3D%0A-----END%20CERTIFICATE-----%0A","verification":{}},"mode":644,"user":{},"group":{}},{"filesystem":"root","path":"/etc/kubernetes/ssl/worker-key.pem","contents":{"source":"data:,-----BEGIN%20RSA%20PRIVATE%20KEY-----%0AMIIEpQIBAAKCAQEArt7YCqAdRUrlDJIspcwg%2BGqPNwq0Sq3FT8ug42QXu4C%2BFxea%0Aa36BLycrSm2V6dmbCkImftUC%2F6u1IO0aJowrasO24Zi50Y84WVGIvGXLmU0D8Q6k%0AIvTXqNDXnXAh%2FQHdF%2BOa4gMl2EPSuiWmoqhbjuith%2BKvUM2QLFyWI7pihIPEu%2B4T%0AWLtslp6q4VbwoLz7X2ggqCv0Ir%2FlYzhFzecVBgpXldq9R7wVsXetiFwNB3Wck4j9%0AMcVp7Z51D%2FZ9%2BHF%2B1qKU4%2Bft0fk9yvdkL%2F85tbnBUwmRz6uKeZnLF1NSVFjiB9Ub%0A4OjPzMj72WfSPfiR9lb%2BR%2FNIGRBjszVNaOP0wwIDAQABAoIBAQCqbYUg1euxHM0e%0A81eQPuHjOfdaLZSJM9KZclvbQjHfDBo3Z0mYejJtQj9uyl7RCsOPu%2BjIs9G4XCCr%0AdmmGKBYod5ZFSBPRqUPByTT6aDuFrQmqZhqR9w43%2BVIqnp6Bds%2BD%2BM96dpbrry4x%0APYCqBms1XI%2FDX6p9ldptYc7yAzUA784ePVm6BFDIqy%2BZ3opxoMFay25EQhnhkX2G%0AUxFWpSw9YolmH0FdA%2Fcj8QGuh0xslcrr92H9ZnLbZrkd%2FN8mIwsS6YOmGlAwM%2Fwi%0Ao7YR7Qpc%2FbyTipLynGfVAVqiRVHk%2FzWFbkDcgtoLRlDVqN%2B2LI8LJoi81ZbOX1jS%0A4YwfwOgBAoGBAODdxnbZe9zOuMvLiL6scPaofh5jJEe%2BtjJl%2FlTBUPUwO6A24s17%0A5muus4KoElGq%2Bo%2Fhca4TvmfkDIjWkf504EZ4hfViMnd673mxFr2iLgpv%2F62Y4y2y%0Amhwk3ueybd6xLKum%2BjyXw11IhfRDR3JF0Va4Y1i%2FQaLSYj9XCc1a6kv7AoGBAMcV%0AAKdOIkafYpFXaoXE7Od9Z2hMiBDNCM4Vh1BOCQfXSY8QUATYYeVcpDWF9Z2ngHhs%0AkRjID8rhMeeWjAM4qBvvWAst9ysjuARlKiu5y%2FDmRYSGDJXja1xu%2FF1Z%2FqMt4LD5%0AFI%2F65Rm03p7RJlE5QiT1ZnwX9pGjEHxmMk1CghfZAoGBAOC8ZazEoalGJaTwb2N5%0AfpDWRu3h0hGuRfPKwcw9RMc4BG%2BUS0po6Rp4CMqtZVmf0znXbEE5VFQKtIhSQqkY%0AcEmeDOv4z01gXVS3K24tV2xxEQyTv4Edfi5gnzLbvjkRw%2F5uLKxAVS223MIKN666%0AnoTYVdoNk%2FDB6RU6zP4jPgTfAoGAbeLL340jIjQrpenIZFnUIdp4T3uexxdFOutr%0AKwpHtcpBUfRBFsuRDZbbFKgCcKjaIp5aYIFdJjCy6Q%2BR7N1C%2FVhZEqKmgWtP0S09%0A37DIPwn7aTDMlZdX1Ud1iNl50fwqv8RcczSbbFsHXkY3jjG6rse9b9WSRcTp%2FqAy%0AN670O9ECgYEAkSCIeyvhtzKXN%2BgcTgK8ghzerXHYYFE28I4bt5%2FNmW%2ByHu7Og%2FfZ%0AhuZLU4hkOtpecTpoRN8qzUryzzozZydOqOXIVJRdtsjVa9M%2BDC95lnief8YsMRU0%0A5YcArPUzMGI%2BDhP5q%2BY4XVIj
Y7FPPMwNYz32xJomOcbz5toun7GfAXw%3D%0A-----END%20RSA%20PRIVATE%20KEY-----%0A","verification":{}},"mode":644,"user":{},"group":{}},{"filesystem":"root","path":"/etc/coreos/update.conf","contents":{"source":"data:,%0AREBOOT_STRATEGY%3D%22off%22","verification":{}},"mode":420,"user":{},"group":{}}]},"systemd":{"units":[{"name":"rpcbind.service","enable":true},{"name":"kubelet.service","enable":true,"contents":"[Service]\nEnvironment=KUBELET_IMAGE_TAG=v1.7.8_coreos.2\nEnvironment=\"RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \\\n --volume dns,kind=host,source=/etc/resolv.conf \\\n --mount volume=dns,target=/etc/resolv.conf \\\n --volume var-log,kind=host,source=/var/log \\\n --mount volume=var-log,target=/var/log \\\n --volume cni-bin,kind=host,source=/opt/cni/bin \\\n --mount volume=cni-bin,target=/opt/cni/bin \\\n --volume cni-conf-dir,kind=host,source=/etc/cni/net.d \\\n --mount volume=cni-conf-dir,target=/etc/cni/net.d\"\nExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests\nExecStartPre=/usr/bin/mkdir -p /var/log/containers\nExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid\nExecStartPre=/usr/bin/mkdir -p /opt/cni/bin\nExecStartPre=/usr/bin/mkdir -p /etc/cni/net.d\nExecStartPre=/usr/bin/tar xzvf /opt/cni/bin/cni-plugins.tgz -C /opt/cni/bin\nExecStart=/usr/lib/coreos/kubelet-wrapper \\\n --api-servers=https://10.0.70.50 \\\n --container-runtime=docker \\\n --allow-privileged=true \\\n --pod-manifest-path=/etc/kubernetes/manifests \\\n --cluster_dns=10.3.0.10 \\\n --cluster_domain=cluster.local \\\n --network-plugin=cni \\\n --cni-bin-dir=/opt/cni/bin \\\n --cni-conf-dir=/etc/cni/net.d \\\n --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \\\n --tls-cert-file=/etc/kubernetes/ssl/worker.pem \\\n --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem \\\n --client-ca-file=/etc/kubernetes/ssl/ca.pem \\\n --cloud-provider=aws \nExecStop=-/usr/bin/rkt stop 
--uuid-file=/var/run/kubelet-pod.uuid\nRestart=always\nRestartSec=10\n\n[Install]\nWantedBy=multi-user.target\n"}]},"networkd":{},"passwd":{}}
283 |     Type: AWS::AutoScaling::LaunchConfiguration
284 |   SecurityGroupELB:
285 |     Properties:
286 |       GroupDescription:
287 |         Ref: AWS::StackName
288 |       SecurityGroupIngress:
289 |         - CidrIp: 0.0.0.0/0
290 |           FromPort: 443
291 |           IpProtocol: tcp
292 |           ToPort: 443
293 |       Tags:
294 |         - Key: KubernetesCluster
295 |           Value:
296 |             Ref: ClusterName
297 |       VpcId:
298 |         Ref: ApplicationVPC
299 |     Type: AWS::EC2::SecurityGroup
300 |   SecurityGroupController:
301 |     Properties:
302 |       GroupDescription:
303 |         Ref: AWS::StackName
304 |       SecurityGroupEgress:
305 |         - CidrIp: 0.0.0.0/0
306 |           FromPort: 0
307 |           IpProtocol: tcp
308 |           ToPort: 65535
309 |         - CidrIp: 0.0.0.0/0
310 |           FromPort: 0
311 |           IpProtocol: udp
312 |           ToPort: 65535
313 |       SecurityGroupIngress:
314 |         - CidrIp: 0.0.0.0/0
315 |           FromPort: 22
316 |           IpProtocol: tcp
317 |           ToPort: 22
318 |         - CidrIp: 0.0.0.0/0
319 |           FromPort: 443
320 |           IpProtocol: tcp
321 |           ToPort: 443
322 |         - CidrIp: 0.0.0.0/0
323 |           FromPort: 8080
324 |           IpProtocol: tcp
325 |           ToPort: 8080
326 |         - CidrIp: 0.0.0.0/0
327 |           FromPort: 6783
328 |           IpProtocol: tcp
329 |           ToPort: 6783
330 |         - CidrIp: 0.0.0.0/0
331 |           FromPort: 6783
332 |           IpProtocol: udp
333 |           ToPort: 6784
334 |       Tags:
335 |         - Key: KubernetesCluster
336 |           Value:
337 |             Ref: ClusterName
338 |       VpcId:
339 |         Ref: ApplicationVPC
340 |     Type: AWS::EC2::SecurityGroup
341 |   SecurityGroupControllerIngressFromWorkerToEtcd:
342 |     Properties:
343 |       FromPort: 2379
344 |       GroupId:
345 |         Ref: SecurityGroupController
346 |       IpProtocol: tcp
347 |       SourceSecurityGroupId:
348 |         Ref: SecurityGroupWorker
349 |       ToPort: 2379
350 |     Type: AWS::EC2::SecurityGroupIngress
351 |   SecurityGroupWorker:
352 |     Properties:
353 |       GroupDescription:
354 |         Ref: AWS::StackName
355 |       SecurityGroupEgress:
356 |         - CidrIp: 0.0.0.0/0
357 |           FromPort: 0
358 |           IpProtocol: tcp
359 |           ToPort: 65535
360 |         - CidrIp: 0.0.0.0/0
361 | 
          FromPort: 0
362 |           IpProtocol: udp
363 |           ToPort: 65535
364 |       SecurityGroupIngress:
365 |         - CidrIp: 0.0.0.0/0
366 |           FromPort: 22
367 |           IpProtocol: tcp
368 |           ToPort: 22
369 |         - CidrIp: 0.0.0.0/0
370 |           FromPort: 6783
371 |           IpProtocol: tcp
372 |           ToPort: 6783
373 |         - CidrIp: 0.0.0.0/0
374 |           FromPort: 6783
375 |           IpProtocol: udp
376 |           ToPort: 6784
377 |       Tags:
378 |         - Key: KubernetesCluster
379 |           Value:
380 |             Ref: ClusterName
381 |       VpcId:
382 |         Ref: ApplicationVPC
383 |     Type: AWS::EC2::SecurityGroup
384 |   SecurityGroupWorkerIngressFromControllerToFlannel:
385 |     Properties:
386 |       FromPort: 8285
387 |       GroupId:
388 |         Ref: SecurityGroupWorker
389 |       IpProtocol: udp
390 |       SourceSecurityGroupId:
391 |         Ref: SecurityGroupController
392 |       ToPort: 8285
393 |     Type: AWS::EC2::SecurityGroupIngress
394 |   SecurityGroupWorkerIngressFromControllerToKubelet:
395 |     Properties:
396 |       FromPort: 10250
397 |       GroupId:
398 |         Ref: SecurityGroupWorker
399 |       IpProtocol: tcp
400 |       SourceSecurityGroupId:
401 |         Ref: SecurityGroupController
402 |       ToPort: 10250
403 |     Type: AWS::EC2::SecurityGroupIngress
404 |   SecurityGroupWorkerIngressFromWorkerToFlannel:
405 |     Properties:
406 |       FromPort: 8285
407 |       GroupId:
408 |         Ref: SecurityGroupWorker
409 |       IpProtocol: udp
410 |       SourceSecurityGroupId:
411 |         Ref: SecurityGroupWorker
412 |       ToPort: 8285
413 |     Type: AWS::EC2::SecurityGroupIngress
414 |   k8sSubnetA:
415 |     Properties:
416 |       AvailabilityZone:
417 |         Ref: AvailabilityZone1
418 |       CidrBlock:
419 |         Ref: k8sSubnetCidrBlockPrivateAZ1
420 |       MapPublicIpOnLaunch: false
421 |       Tags:
422 |         - Key: Name
423 |           Value: k8sSubnetA
424 |         - Key: KubernetesCluster
425 |           Value:
426 |             Ref: ClusterName
427 |       VpcId:
428 |         Ref: ApplicationVPC
429 |     Type: AWS::EC2::Subnet
430 |   k8sSubnetB:
431 |     Properties:
432 |       AvailabilityZone:
433 |         Ref: AvailabilityZone2
434 |       CidrBlock:
435 |         Ref: k8sSubnetCidrBlockPrivateAZ2
436 |       MapPublicIpOnLaunch: false
437 |       Tags:
438 |         - Key: Name
439 |           Value: k8sSubnetB
440 |         - Key: KubernetesCluster
441 | 
          Value:
442 |             Ref: ClusterName
443 |       VpcId:
444 |         Ref: ApplicationVPC
445 |     Type: AWS::EC2::Subnet
446 |   k8sSubnetC:
447 |     Properties:
448 |       AvailabilityZone:
449 |         Ref: AvailabilityZone3
450 |       CidrBlock:
451 |         Ref: k8sSubnetCidrBlockPrivateAZ3
452 |       MapPublicIpOnLaunch: false
453 |       Tags:
454 |         - Key: Name
455 |           Value: k8sSubnetC
456 |         - Key: KubernetesCluster
457 |           Value:
458 |             Ref: ClusterName
459 |       VpcId:
460 |         Ref: ApplicationVPC
461 |     Type: AWS::EC2::Subnet
462 |   k8sSubnetRouteTableAssociationA:
463 |     Properties:
464 |       RouteTableId:
465 |         Ref: RouteTableNAT
466 |       SubnetId:
467 |         Ref: k8sSubnetA
468 |     Type: AWS::EC2::SubnetRouteTableAssociation
469 |   k8sSubnetRouteTableAssociationB:
470 |     Properties:
471 |       RouteTableId:
472 |         Ref: RouteTableNAT
473 |       SubnetId:
474 |         Ref: k8sSubnetB
475 |     Type: AWS::EC2::SubnetRouteTableAssociation
476 |   k8sSubnetRouteTableAssociationC:
477 |     Properties:
478 |       RouteTableId:
479 |         Ref: RouteTableNAT
480 |       SubnetId:
481 |         Ref: k8sSubnetC
482 |     Type: AWS::EC2::SubnetRouteTableAssociation
483 | 
--------------------------------------------------------------------------------
/cloudformation/network-template.json:
--------------------------------------------------------------------------------
1 | { 2 | "AWSTemplateFormatVersion": "2010-09-09", 3 | "Description": "Baseline Network Configuration", 4 | "Metadata": { 5 | "AWS::CloudFormation::Interface": { 6 | "ParameterGroups": [ 7 | { 8 | "Label": { 9 | "default": "General Network Configuration" 10 | }, 11 | "Parameters": [ 12 | "VPCInstanceTenancy", 13 | "AvailabilityZone1", 14 | "AvailabilityZone2", 15 | "AvailabilityZone3" 16 | ] 17 | }, 18 | { 19 | "Label": { 20 | "default": "Application Network Configuration" 21 | }, 22 | "Parameters": [ 23 | "ApplicationVPCName", 24 | "ApplicationCIDR", 25 | "PublicSubnetACIDR", 26 | "PublicSubnetBCIDR", 27 | "PublicSubnetCCIDR", 28 | "AppPrivateSubnetACIDR", 29 | "AppPrivateSubnetBCIDR", 30 | "AppPrivateSubnetCCIDR", 31 | "NatInstanceType", 32 | 
"KeyName" 33 | ] 34 | }, 35 | { 36 | "Label": { 37 | "default": "Bastion Network Configuration" 38 | }, 39 | "Parameters": [ 40 | "BastionVPCName", 41 | "BastionCIDR", 42 | "BastionSubnetACIDR", 43 | "BastionSubnetBCIDR", 44 | "BastionSubnetCCIDR", 45 | "BastionOpsWorksInstanceType" 46 | ] 47 | }, 48 | { 49 | "Label": { 50 | "default": "Tags" 51 | }, 52 | "Parameters": [ 53 | "Application", 54 | "Environment" 55 | ] 56 | } 57 | ] 58 | } 59 | }, 60 | "Parameters": { 61 | "AvailabilityZone1": { 62 | "Description": "Availability Zone 1 Name in Region", 63 | "Type": "AWS::EC2::AvailabilityZone::Name" 64 | }, 65 | "AvailabilityZone2": { 66 | "Description": "Availability Zone 2 Name in Region", 67 | "Type": "AWS::EC2::AvailabilityZone::Name" 68 | }, 69 | "AvailabilityZone3": { 70 | "Description": "Availability Zone 3 Name in Region", 71 | "Type": "AWS::EC2::AvailabilityZone::Name" 72 | }, 73 | "ApplicationVPCName": { 74 | "Description": "App VPC Name", 75 | "Type": "String", 76 | "Default": "" 77 | }, 78 | "ApplicationCIDR": { 79 | "Description": "CIDR block for App VPC", 80 | "Type": "String", 81 | "Default": "10.0.0.0/16" 82 | }, 83 | "BastionVPCName": { 84 | "Description": "Bastion VPC Name", 85 | "Type": "String", 86 | "Default": "" 87 | }, 88 | "BastionCIDR": { 89 | "Default": "172.33.0.0/16", 90 | "Description": "CIDR block for Bastion VPC", 91 | "Type": "String" 92 | }, 93 | "BastionSubnetACIDR": { 94 | "Description": "CIDR block for Bastion Subnet A", 95 | "Type": "String", 96 | "Default": "172.33.0.0/24" 97 | }, 98 | "BastionSubnetBCIDR": { 99 | "Description": "CIDR block for Bastion Subnet B", 100 | "Type": "String", 101 | "Default": "172.33.1.0/24" 102 | }, 103 | "BastionSubnetCCIDR": { 104 | "Description": "CIDR block for Bastion Subnet C", 105 | "Type": "String", 106 | "Default": "172.33.2.0/24" 107 | }, 108 | "PublicSubnetACIDR": { 109 | "Description": "CIDR block for Public Subnet A", 110 | "Type": "String", 111 | "Default": "10.0.20.0/24" 112 | }, 113 | 
"PublicSubnetBCIDR": { 114 | "Description": "CIDR block for Public Subnet B", 115 | "Type": "String", 116 | "Default": "10.0.21.0/24" 117 | }, 118 | "PublicSubnetCCIDR": { 119 | "Description": "CIDR block for Public Subnet C", 120 | "Type": "String", 121 | "Default": "10.0.22.0/24" 122 | }, 123 | "AppPrivateSubnetACIDR": { 124 | "Description": "CIDR block for Private Subnet A", 125 | "Type": "String", 126 | "Default": "10.0.0.0/24" 127 | }, 128 | "AppPrivateSubnetBCIDR": { 129 | "Description": "CIDR block for Private Subnet B", 130 | "Type": "String", 131 | "Default": "10.0.1.0/24" 132 | }, 133 | "AppPrivateSubnetCCIDR": { 134 | "Description": "CIDR block for Private Subnet C", 135 | "Type": "String", 136 | "Default": "10.0.2.0/24" 137 | }, 138 | "KeyName": { 139 | "Description": "Key Name for Instance", 140 | "Type": "AWS::EC2::KeyPair::KeyName", 141 | "Default": "" 142 | }, 143 | "NatInstanceType": { 144 | "Description": "Nat EC2 instance type", 145 | "Type": "String", 146 | "Default": "t2.micro", 147 | "AllowedValues": [ 148 | "t2.micro", 149 | "m3.medium" 150 | ] 151 | }, 152 | "BastionOpsWorksInstanceType": { 153 | "Default": "t2.micro", 154 | "Description": "EC2 instance type used for each bastion box", 155 | "Type": "String", 156 | "AllowedValues": [ 157 | "t2.micro", 158 | "m3.medium" 159 | ] 160 | }, 161 | "VPCInstanceTenancy": { 162 | "Description": "The allowed tenancy of instances launched into the VPC (m3.medium is smallest for dedicated tenancy)", 163 | "Default": "default", 164 | "Type": "String", 165 | "AllowedValues": [ 166 | "default", 167 | "dedicated" 168 | ] 169 | }, 170 | "Application": { 171 | "Description": "Lowercase application name.", 172 | "Type": "String" 173 | }, 174 | "Environment": { 175 | "Description": "dev, tst, stg, prd, etc", 176 | "Type": "String" 177 | } 178 | }, 179 | "Mappings": { 180 | "AWSRegionArch2AMI": { 181 | "us-east-1": { 182 | "HVM64": "ami-08842d60" 183 | }, 184 | "us-east-2": { 185 | "HVM64": "ami-58277d3d" 186 | 
}, 187 | "us-west-2": { 188 | "HVM64": "ami-8786c6b7" 189 | }, 190 | "us-west-1": { 191 | "HVM64": "ami-cfa8a18a" 192 | } 193 | } 194 | }, 195 | "Resources": { 196 | "ApplicationVPC": { 197 | "Type": "AWS::EC2::VPC", 198 | "Properties": { 199 | "CidrBlock": { 200 | "Ref": "ApplicationCIDR" 201 | }, 202 | "InstanceTenancy": { 203 | "Ref": "VPCInstanceTenancy" 204 | }, 205 | "EnableDnsSupport": "true", 206 | "EnableDnsHostnames": "true", 207 | "Tags": [ 208 | { 209 | "Key": "Name", 210 | "Value": { 211 | "Ref": "ApplicationVPCName" 212 | } 213 | }, 214 | { 215 | "Key": "Application", 216 | "Value": { 217 | "Ref": "Application" 218 | } 219 | }, 220 | { 221 | "Key": "Environment", 222 | "Value": { 223 | "Ref": "Environment" 224 | } 225 | } 226 | ] 227 | } 228 | }, 229 | "BastionVPC": { 230 | "Type": "AWS::EC2::VPC", 231 | "Properties": { 232 | "CidrBlock": { 233 | "Ref": "BastionCIDR" 234 | }, 235 | "InstanceTenancy": { 236 | "Ref": "VPCInstanceTenancy" 237 | }, 238 | "EnableDnsSupport": "true", 239 | "EnableDnsHostnames": "true", 240 | "Tags": [ 241 | { 242 | "Key": "Name", 243 | "Value": { 244 | "Ref": "BastionVPCName" 245 | } 246 | }, 247 | { 248 | "Key": "Application", 249 | "Value": { 250 | "Ref": "Application" 251 | } 252 | }, 253 | { 254 | "Key": "Environment", 255 | "Value": { 256 | "Ref": "Environment" 257 | } 258 | } 259 | ] 260 | } 261 | }, 262 | "SecurityGroupNAT": { 263 | "Type": "AWS::EC2::SecurityGroup", 264 | "Properties": { 265 | "GroupDescription": "Allow Nat", 266 | "VpcId": { 267 | "Ref": "ApplicationVPC" 268 | }, 269 | "SecurityGroupIngress": [ 270 | { 271 | "IpProtocol": "tcp", 272 | "FromPort": "80", 273 | "ToPort": "80", 274 | "CidrIp": { 275 | "Ref": "ApplicationCIDR" 276 | } 277 | }, 278 | { 279 | "IpProtocol": "tcp", 280 | "FromPort": "443", 281 | "ToPort": "443", 282 | "CidrIp": { 283 | "Ref": "ApplicationCIDR" 284 | } 285 | } 286 | ] 287 | } 288 | }, 289 | "SecurityGroupSSH": { 290 | "Type": "AWS::EC2::SecurityGroup", 291 | "Properties": { 
292 | "GroupDescription": "Enable SSH access via port 22", 293 | "VpcId": { 294 | "Ref": "ApplicationVPC" 295 | }, 296 | "SecurityGroupIngress": [ 297 | { 298 | "IpProtocol": "tcp", 299 | "FromPort": "22", 300 | "ToPort": "22", 301 | "CidrIp": { 302 | "Ref": "ApplicationCIDR" 303 | } 304 | } 305 | ] 306 | } 307 | }, 308 | "PeeringConnBastion": { 309 | "Type": "AWS::EC2::VPCPeeringConnection", 310 | "DependsOn": [ 311 | "ApplicationVPC" 312 | ], 313 | "Properties": { 314 | "VpcId": { 315 | "Ref": "BastionVPC" 316 | }, 317 | "PeerVpcId": { 318 | "Ref": "ApplicationVPC" 319 | } 320 | } 321 | }, 322 | "BastionSubnetA": { 323 | "Type": "AWS::EC2::Subnet", 324 | "DependsOn": "BastionIGWAttachment", 325 | "Properties": { 326 | "CidrBlock": { 327 | "Ref": "BastionSubnetACIDR" 328 | }, 329 | "AvailabilityZone": { 330 | "Ref": "AvailabilityZone1" 331 | }, 332 | "VpcId": { 333 | "Ref": "BastionVPC" 334 | }, 335 | "Tags": [ 336 | { 337 | "Key": "Name", 338 | "Value": "Bastion Subnet A" 339 | }, 340 | { 341 | "Key": "Application", 342 | "Value": { 343 | "Ref": "Application" 344 | } 345 | }, 346 | { 347 | "Key": "Environment", 348 | "Value": { 349 | "Ref": "Environment" 350 | } 351 | } 352 | ] 353 | } 354 | }, 355 | "BastionSubnetB": { 356 | "Type": "AWS::EC2::Subnet", 357 | "Properties": { 358 | "CidrBlock": { 359 | "Ref": "BastionSubnetBCIDR" 360 | }, 361 | "AvailabilityZone": { 362 | "Ref": "AvailabilityZone2" 363 | }, 364 | "VpcId": { 365 | "Ref": "BastionVPC" 366 | }, 367 | "Tags": [ 368 | { 369 | "Key": "Name", 370 | "Value": "Bastion Subnet B" 371 | }, 372 | { 373 | "Key": "Application", 374 | "Value": { 375 | "Ref": "Application" 376 | } 377 | }, 378 | { 379 | "Key": "Environment", 380 | "Value": { 381 | "Ref": "Environment" 382 | } 383 | } 384 | ] 385 | } 386 | }, 387 | "BastionSubnetC": { 388 | "Type": "AWS::EC2::Subnet", 389 | "Properties": { 390 | "CidrBlock": { 391 | "Ref": "BastionSubnetCCIDR" 392 | }, 393 | "AvailabilityZone": { 394 | "Ref": "AvailabilityZone3" 
395 | }, 396 | "VpcId": { 397 | "Ref": "BastionVPC" 398 | }, 399 | "Tags": [ 400 | { 401 | "Key": "Name", 402 | "Value": "Bastion Subnet C" 403 | }, 404 | { 405 | "Key": "Application", 406 | "Value": { 407 | "Ref": "Application" 408 | } 409 | }, 410 | { 411 | "Key": "Environment", 412 | "Value": { 413 | "Ref": "Environment" 414 | } 415 | } 416 | ] 417 | } 418 | }, 419 | "PublicSubnetA": { 420 | "Type": "AWS::EC2::Subnet", 421 | "Properties": { 422 | "CidrBlock": { 423 | "Ref": "PublicSubnetACIDR" 424 | }, 425 | "AvailabilityZone": { 426 | "Ref": "AvailabilityZone1" 427 | }, 428 | "VpcId": { 429 | "Ref": "ApplicationVPC" 430 | }, 431 | "Tags": [ 432 | { 433 | "Key": "Name", 434 | "Value": "Public Subnet A" 435 | }, 436 | { 437 | "Key": "Application", 438 | "Value": { 439 | "Ref": "Application" 440 | } 441 | }, 442 | { 443 | "Key": "Environment", 444 | "Value": { 445 | "Ref": "Environment" 446 | } 447 | } 448 | ] 449 | } 450 | }, 451 | "PublicSubnetB": { 452 | "Type": "AWS::EC2::Subnet", 453 | "Properties": { 454 | "CidrBlock": { 455 | "Ref": "PublicSubnetBCIDR" 456 | }, 457 | "AvailabilityZone": { 458 | "Ref": "AvailabilityZone2" 459 | }, 460 | "VpcId": { 461 | "Ref": "ApplicationVPC" 462 | }, 463 | "Tags": [ 464 | { 465 | "Key": "Name", 466 | "Value": "Public Subnet B" 467 | }, 468 | { 469 | "Key": "Application", 470 | "Value": { 471 | "Ref": "Application" 472 | } 473 | }, 474 | { 475 | "Key": "Environment", 476 | "Value": { 477 | "Ref": "Environment" 478 | } 479 | } 480 | ] 481 | } 482 | }, 483 | "PublicSubnetC": { 484 | "Type": "AWS::EC2::Subnet", 485 | "Properties": { 486 | "CidrBlock": { 487 | "Ref": "PublicSubnetCCIDR" 488 | }, 489 | "AvailabilityZone": { 490 | "Ref": "AvailabilityZone3" 491 | }, 492 | "VpcId": { 493 | "Ref": "ApplicationVPC" 494 | }, 495 | "Tags": [ 496 | { 497 | "Key": "Name", 498 | "Value": "Public Subnet C" 499 | }, 500 | { 501 | "Key": "Application", 502 | "Value": { 503 | "Ref": "Application" 504 | } 505 | }, 506 | { 507 | "Key": 
"Environment", 508 | "Value": { 509 | "Ref": "Environment" 510 | } 511 | } 512 | ] 513 | } 514 | }, 515 | "AppPrivateSubnetA": { 516 | "Type": "AWS::EC2::Subnet", 517 | "Properties": { 518 | "CidrBlock": { 519 | "Ref": "AppPrivateSubnetACIDR" 520 | }, 521 | "AvailabilityZone": { 522 | "Ref": "AvailabilityZone1" 523 | }, 524 | "VpcId": { 525 | "Ref": "ApplicationVPC" 526 | }, 527 | "Tags": [ 528 | { 529 | "Key": "Name", 530 | "Value": "App Private Subnet A" 531 | }, 532 | { 533 | "Key": "Application", 534 | "Value": { 535 | "Ref": "Application" 536 | } 537 | }, 538 | { 539 | "Key": "Environment", 540 | "Value": { 541 | "Ref": "Environment" 542 | } 543 | } 544 | ] 545 | } 546 | }, 547 | "AppPrivateSubnetB": { 548 | "Type": "AWS::EC2::Subnet", 549 | "Properties": { 550 | "CidrBlock": { 551 | "Ref": "AppPrivateSubnetBCIDR" 552 | }, 553 | "AvailabilityZone": { 554 | "Ref": "AvailabilityZone2" 555 | }, 556 | "VpcId": { 557 | "Ref": "ApplicationVPC" 558 | }, 559 | "Tags": [ 560 | { 561 | "Key": "Name", 562 | "Value": "App Private Subnet B" 563 | }, 564 | { 565 | "Key": "Application", 566 | "Value": { 567 | "Ref": "Application" 568 | } 569 | }, 570 | { 571 | "Key": "Environment", 572 | "Value": { 573 | "Ref": "Environment" 574 | } 575 | } 576 | ] 577 | } 578 | }, 579 | "AppPrivateSubnetC": { 580 | "Type": "AWS::EC2::Subnet", 581 | "Properties": { 582 | "CidrBlock": { 583 | "Ref": "AppPrivateSubnetCCIDR" 584 | }, 585 | "AvailabilityZone": { 586 | "Ref": "AvailabilityZone3" 587 | }, 588 | "VpcId": { 589 | "Ref": "ApplicationVPC" 590 | }, 591 | "Tags": [ 592 | { 593 | "Key": "Name", 594 | "Value": "App Private Subnet C" 595 | }, 596 | { 597 | "Key": "Application", 598 | "Value": { 599 | "Ref": "Application" 600 | } 601 | }, 602 | { 603 | "Key": "Environment", 604 | "Value": { 605 | "Ref": "Environment" 606 | } 607 | } 608 | ] 609 | } 610 | }, 611 | "ApplicationIGW": { 612 | "Type": "AWS::EC2::InternetGateway", 613 | "Properties": { 614 | "Tags": [ 615 | { 616 | "Key": "Name", 
617 | "Value": "Internet Gateway" 618 | }, 619 | { 620 | "Key": "Application", 621 | "Value": { 622 | "Ref": "Application" 623 | } 624 | }, 625 | { 626 | "Key": "Environment", 627 | "Value": { 628 | "Ref": "Environment" 629 | } 630 | } 631 | ] 632 | } 633 | }, 634 | "BastionIGW": { 635 | "Type": "AWS::EC2::InternetGateway", 636 | "DeletionPolicy": "Delete", 637 | "Properties": { 638 | "Tags": [ 639 | { 640 | "Key": "Name", 641 | "Value": "Bastion Internet Gateway" 642 | }, 643 | { 644 | "Key": "Application", 645 | "Value": { 646 | "Ref": "Application" 647 | } 648 | }, 649 | { 650 | "Key": "Environment", 651 | "Value": { 652 | "Ref": "Environment" 653 | } 654 | } 655 | ] 656 | } 657 | }, 658 | "RouteTableMain": { 659 | "Type": "AWS::EC2::RouteTable", 660 | "Properties": { 661 | "VpcId": { 662 | "Ref": "ApplicationVPC" 663 | }, 664 | "Tags": [ 665 | { 666 | "Key": "Name", 667 | "Value": "Route table for Public Subnets" 668 | }, 669 | { 670 | "Key": "Application", 671 | "Value": { 672 | "Ref": "Application" 673 | } 674 | }, 675 | { 676 | "Key": "Environment", 677 | "Value": { 678 | "Ref": "Environment" 679 | } 680 | } 681 | ] 682 | } 683 | }, 684 | "RouteTableBastion": { 685 | "Type": "AWS::EC2::RouteTable", 686 | "Properties": { 687 | "VpcId": { 688 | "Ref": "BastionVPC" 689 | }, 690 | "Tags": [ 691 | { 692 | "Key": "Name", 693 | "Value": "Route table for Bastion VPC" 694 | }, 695 | { 696 | "Key": "Application", 697 | "Value": { 698 | "Ref": "Application" 699 | } 700 | }, 701 | { 702 | "Key": "Environment", 703 | "Value": { 704 | "Ref": "Environment" 705 | } 706 | } 707 | ] 708 | } 709 | }, 710 | "RouteBastionIGW": { 711 | "Type": "AWS::EC2::Route", 712 | "DependsOn": "BastionIGWAttachment", 713 | "Properties": { 714 | "RouteTableId": { 715 | "Ref": "RouteTableBastion" 716 | }, 717 | "GatewayId": { 718 | "Ref": "BastionIGW" 719 | }, 720 | "DestinationCidrBlock": "0.0.0.0/0" 721 | } 722 | }, 723 | "RouteBastionPeering": { 724 | "Type": "AWS::EC2::Route", 725 | 
"Properties": { 726 | "RouteTableId": { 727 | "Ref": "RouteTableBastion" 728 | }, 729 | "VpcPeeringConnectionId": { 730 | "Ref": "PeeringConnBastion" 731 | }, 732 | "DestinationCidrBlock": { 733 | "Ref": "ApplicationCIDR" 734 | } 735 | } 736 | }, 737 | "RouteAssocBastionA": { 738 | "DependsOn": "BastionIGWAttachment", 739 | "Type": "AWS::EC2::SubnetRouteTableAssociation", 740 | "Properties": { 741 | "RouteTableId": { 742 | "Ref": "RouteTableBastion" 743 | }, 744 | "SubnetId": { 745 | "Ref": "BastionSubnetA" 746 | } 747 | } 748 | }, 749 | "RouteAssocBastionB": { 750 | "Type": "AWS::EC2::SubnetRouteTableAssociation", 751 | "Properties": { 752 | "RouteTableId": { 753 | "Ref": "RouteTableBastion" 754 | }, 755 | "SubnetId": { 756 | "Ref": "BastionSubnetB" 757 | } 758 | } 759 | }, 760 | "RouteAssocBastionC": { 761 | "Type": "AWS::EC2::SubnetRouteTableAssociation", 762 | "Properties": { 763 | "RouteTableId": { 764 | "Ref": "RouteTableBastion" 765 | }, 766 | "SubnetId": { 767 | "Ref": "BastionSubnetC" 768 | } 769 | } 770 | }, 771 | "EIPNAT": { 772 | "Type": "AWS::EC2::EIP", 773 | "Properties": { 774 | "Domain": "vpc" 775 | } 776 | }, 777 | "AssociateEIPNAT": { 778 | "Type": "AWS::EC2::EIPAssociation", 779 | "Properties": { 780 | "AllocationId": { 781 | "Fn::GetAtt": [ 782 | "EIPNAT", 783 | "AllocationId" 784 | ] 785 | }, 786 | "NetworkInterfaceId": { 787 | "Ref": "ENINAT" 788 | } 789 | } 790 | }, 791 | "ENINAT": { 792 | "Type": "AWS::EC2::NetworkInterface", 793 | "Properties": { 794 | "SubnetId": { 795 | "Ref": "PublicSubnetA" 796 | }, 797 | "GroupSet": [ 798 | { 799 | "Ref": "SecurityGroupSSH" 800 | }, 801 | { 802 | "Ref": "SecurityGroupNAT" 803 | } 804 | ], 805 | "Description": "Interface for Nat device", 806 | "Tags": [ 807 | { 808 | "Key": "Network", 809 | "Value": "NAT Device" 810 | }, 811 | { 812 | "Key": "Name", 813 | "Value": "NAT Device" 814 | }, 815 | { 816 | "Key": "Application", 817 | "Value": { 818 | "Ref": "Application" 819 | } 820 | }, 821 | { 822 | "Key": 
"Environment", 823 | "Value": { 824 | "Ref": "Environment" 825 | } 826 | } 827 | ] 828 | } 829 | }, 830 | "NATInstance": { 831 | "Type": "AWS::EC2::Instance", 832 | "Properties": { 833 | "InstanceType": { 834 | "Ref": "NatInstanceType" 835 | }, 836 | "SourceDestCheck": false, 837 | "KeyName": { 838 | "Ref": "KeyName" 839 | }, 840 | "Tags": [ 841 | { 842 | "Key": "Name", 843 | "Value": "NAT Instance" 844 | }, 845 | { 846 | "Key": "Application", 847 | "Value": { 848 | "Ref": "Application" 849 | } 850 | }, 851 | { 852 | "Key": "Environment", 853 | "Value": { 854 | "Ref": "Environment" 855 | } 856 | } 857 | ], 858 | "UserData": { 859 | "Fn::Base64": { 860 | "Fn::Join": [ 861 | "", 862 | [ 863 | "#!/bin/sh\n", 864 | "echo 1 > /proc/sys/net/ipv4/ip_forward\n", 865 | "echo 0 > /proc/sys/net/ipv4/conf/eth0/send_redirects\n", 866 | "/sbin/iptables -t nat -A POSTROUTING -o eth0 -s 0.0.0.0/0 -j MASQUERADE\n", 867 | "/sbin/iptables-save > /etc/sysconfig/iptables\n", 868 | "mkdir -p /etc/sysctl.d/\n", 869 | "cat < /etc/sysctl.d/nat.conf\n", 870 | "net.ipv4.ip_forward = 1 \n", 871 | "net.ipv4.conf.eth0.send_redirects = 0\n", 872 | "EOF\n" 873 | ] 874 | ] 875 | } 876 | }, 877 | "ImageId": { 878 | "Fn::FindInMap": [ 879 | "AWSRegionArch2AMI", 880 | { 881 | "Ref": "AWS::Region" 882 | }, 883 | "HVM64" 884 | ] 885 | }, 886 | "NetworkInterfaces": [ 887 | { 888 | "NetworkInterfaceId": { 889 | "Ref": "ENINAT" 890 | }, 891 | "DeviceIndex": "0" 892 | } 893 | ] 894 | } 895 | }, 896 | "RouteIGW": { 897 | "Type": "AWS::EC2::Route", 898 | "DependsOn": "IGWAttachment", 899 | "Properties": { 900 | "RouteTableId": { 901 | "Ref": "RouteTableMain" 902 | }, 903 | "GatewayId": { 904 | "Ref": "ApplicationIGW" 905 | }, 906 | "DestinationCidrBlock": "0.0.0.0/0" 907 | } 908 | }, 909 | "RoutetoBastionPublic": { 910 | "Type": "AWS::EC2::Route", 911 | "Properties": { 912 | "RouteTableId": { 913 | "Ref": "RouteTableMain" 914 | }, 915 | "VpcPeeringConnectionId": { 916 | "Ref": "PeeringConnBastion" 917 | }, 
918 | "DestinationCidrBlock": { 919 | "Ref": "BastionCIDR" 920 | } 921 | } 922 | }, 923 | "RoutePrivateNAT": { 924 | "Type": "AWS::EC2::Route", 925 | "Properties": { 926 | "DestinationCidrBlock": "0.0.0.0/0", 927 | "RouteTableId": { 928 | "Ref": "RouteTablePrivate" 929 | }, 930 | "InstanceId": { 931 | "Ref": "NATInstance" 932 | } 933 | } 934 | }, 935 | "RoutePrivateBastionPeering": { 936 | "Type": "AWS::EC2::Route", 937 | "Properties": { 938 | "DestinationCidrBlock": { 939 | "Ref": "BastionCIDR" 940 | }, 941 | "RouteTableId": { 942 | "Ref": "RouteTablePrivate" 943 | }, 944 | "VpcPeeringConnectionId": { 945 | "Ref": "PeeringConnBastion" 946 | } 947 | } 948 | }, 949 | "RouteAssocPublicA": { 950 | "Type": "AWS::EC2::SubnetRouteTableAssociation", 951 | "Properties": { 952 | "RouteTableId": { 953 | "Ref": "RouteTableMain" 954 | }, 955 | "SubnetId": { 956 | "Ref": "PublicSubnetA" 957 | } 958 | } 959 | }, 960 | "RouteAssocPublicB": { 961 | "Type": "AWS::EC2::SubnetRouteTableAssociation", 962 | "Properties": { 963 | "RouteTableId": { 964 | "Ref": "RouteTableMain" 965 | }, 966 | "SubnetId": { 967 | "Ref": "PublicSubnetB" 968 | } 969 | } 970 | }, 971 | "RouteAssocPublicC": { 972 | "Type": "AWS::EC2::SubnetRouteTableAssociation", 973 | "Properties": { 974 | "RouteTableId": { 975 | "Ref": "RouteTableMain" 976 | }, 977 | "SubnetId": { 978 | "Ref": "PublicSubnetC" 979 | } 980 | } 981 | }, 982 | "AppPrivateSubnetAssociationA": { 983 | "Type": "AWS::EC2::SubnetRouteTableAssociation", 984 | "Properties": { 985 | "RouteTableId": { 986 | "Ref": "RouteTablePrivate" 987 | }, 988 | "SubnetId": { 989 | "Ref": "AppPrivateSubnetA" 990 | } 991 | } 992 | }, 993 | "AppPrivateSubnetAssociationB": { 994 | "Type": "AWS::EC2::SubnetRouteTableAssociation", 995 | "Properties": { 996 | "RouteTableId": { 997 | "Ref": "RouteTablePrivate" 998 | }, 999 | "SubnetId": { 1000 | "Ref": "AppPrivateSubnetB" 1001 | } 1002 | } 1003 | }, 1004 | "AppPrivateSubnetAssociationC": { 1005 | "Type": 
"AWS::EC2::SubnetRouteTableAssociation", 1006 | "Properties": { 1007 | "RouteTableId": { 1008 | "Ref": "RouteTablePrivate" 1009 | }, 1010 | "SubnetId": { 1011 | "Ref": "AppPrivateSubnetC" 1012 | } 1013 | } 1014 | }, 1015 | "RouteTablePrivate": { 1016 | "Type": "AWS::EC2::RouteTable", 1017 | "Properties": { 1018 | "VpcId": { 1019 | "Ref": "ApplicationVPC" 1020 | }, 1021 | "Tags": [ 1022 | { 1023 | "Key": "Name", 1024 | "Value": "Route Table for Private Networks" 1025 | }, 1026 | { 1027 | "Key": "Application", 1028 | "Value": { 1029 | "Ref": "Application" 1030 | } 1031 | }, 1032 | { 1033 | "Key": "Environment", 1034 | "Value": { 1035 | "Ref": "Environment" 1036 | } 1037 | } 1038 | ] 1039 | } 1040 | }, 1041 | "IGWAttachment": { 1042 | "Type": "AWS::EC2::VPCGatewayAttachment", 1043 | "DependsOn": "ApplicationIGW", 1044 | "Properties": { 1045 | "VpcId": { 1046 | "Ref": "ApplicationVPC" 1047 | }, 1048 | "InternetGatewayId": { 1049 | "Ref": "ApplicationIGW" 1050 | } 1051 | } 1052 | }, 1053 | "BastionIGWAttachment": { 1054 | "Type": "AWS::EC2::VPCGatewayAttachment", 1055 | "DeletionPolicy": "Delete", 1056 | "DependsOn": "BastionIGW", 1057 | "Properties": { 1058 | "VpcId": { 1059 | "Ref": "BastionVPC" 1060 | }, 1061 | "InternetGatewayId": { 1062 | "Ref": "BastionIGW" 1063 | } 1064 | } 1065 | }, 1066 | "BastionInstanceA": { 1067 | "DependsOn": "BastionOpsApp", 1068 | "Type": "AWS::OpsWorks::Instance", 1069 | "Properties": { 1070 | "AmiId": { 1071 | "Fn::FindInMap": [ 1072 | "AWSRegionArch2AMI", 1073 | { 1074 | "Ref": "AWS::Region" 1075 | }, 1076 | "HVM64" 1077 | ] 1078 | }, 1079 | "Os": "Custom", 1080 | "InstallUpdatesOnBoot": true, 1081 | "StackId": { 1082 | "Ref": "BastionOpsStack" 1083 | }, 1084 | "LayerIds": [ 1085 | { 1086 | "Ref": "BastionOpsLayer" 1087 | } 1088 | ], 1089 | "InstanceType": { 1090 | "Ref": "BastionOpsWorksInstanceType" 1091 | }, 1092 | "RootDeviceType": "EBS backed", 1093 | "SubnetId": { 1094 | "Ref": "BastionSubnetA" 1095 | } 1096 | } 1097 | }, 1098 | 
"BastionOpsLayer": { 1099 | "Type": "AWS::OpsWorks::Layer", 1100 | "DependsOn": "BastionOpsApp", 1101 | "Properties": { 1102 | "StackId": { 1103 | "Ref": "BastionOpsStack" 1104 | }, 1105 | "Name": "bastion", 1106 | "Type": "custom", 1107 | "Shortname": "instance", 1108 | "EnableAutoHealing": "true", 1109 | "AutoAssignElasticIps": "false", 1110 | "AutoAssignPublicIps": "true" 1111 | } 1112 | }, 1113 | "BastionOpsApp": { 1114 | "DependsOn": "BastionOpsStack", 1115 | "Type": "AWS::OpsWorks::App", 1116 | "Properties": { 1117 | "StackId": { 1118 | "Ref": "BastionOpsStack" 1119 | }, 1120 | "Type": "other", 1121 | "Name": "bastion" 1122 | } 1123 | }, 1124 | "BastionOpsStack": { 1125 | "Type": "AWS::OpsWorks::Stack", 1126 | "Properties": { 1127 | "ServiceRoleArn": { 1128 | "Fn::Join": [ 1129 | "", 1130 | [ 1131 | "arn:aws:iam::", 1132 | { 1133 | "Ref": "AWS::AccountId" 1134 | }, 1135 | ":role/aws-opsworks-service-role" 1136 | ] 1137 | ] 1138 | }, 1139 | "DefaultInstanceProfileArn": { 1140 | "Fn::Join": [ 1141 | "", 1142 | [ 1143 | "arn:aws:iam::", 1144 | { 1145 | "Ref": "AWS::AccountId" 1146 | }, 1147 | ":instance-profile/aws-opsworks-ec2-role" 1148 | ] 1149 | ] 1150 | }, 1151 | "Name": { 1152 | "Ref": "BastionVPCName" 1153 | }, 1154 | "VpcId": { 1155 | "Ref": "BastionVPC" 1156 | }, 1157 | "DefaultSubnetId": { 1158 | "Ref": "BastionSubnetA" 1159 | } 1160 | } 1161 | } 1162 | }, 1163 | "Outputs": { 1164 | "ApplicationVPC": { 1165 | "Value": { 1166 | "Ref": "ApplicationVPC" 1167 | } 1168 | }, 1169 | "BastionVPC": { 1170 | "Value": { 1171 | "Ref": "BastionVPC" 1172 | } 1173 | }, 1174 | "BastionSubnetA": { 1175 | "Value": { 1176 | "Ref": "BastionSubnetA" 1177 | } 1178 | }, 1179 | "BastionSubnetB": { 1180 | "Value": { 1181 | "Ref": "BastionSubnetB" 1182 | } 1183 | }, 1184 | "BastionSubnetC": { 1185 | "Value": { 1186 | "Ref": "BastionSubnetC" 1187 | } 1188 | }, 1189 | "PublicSubnetA": { 1190 | "Value": { 1191 | "Ref": "PublicSubnetA" 1192 | } 1193 | }, 1194 | "PublicSubnetB": { 
1195 | "Value": { 1196 | "Ref": "PublicSubnetB" 1197 | } 1198 | }, 1199 | "PublicSubnetC": { 1200 | "Value": { 1201 | "Ref": "PublicSubnetC" 1202 | } 1203 | }, 1204 | "AppPrivateSubnetA": { 1205 | "Value": { 1206 | "Ref": "AppPrivateSubnetA" 1207 | } 1208 | }, 1209 | "AppPrivateSubnetB": { 1210 | "Value": { 1211 | "Ref": "AppPrivateSubnetB" 1212 | } 1213 | }, 1214 | "AppPrivateSubnetC": { 1215 | "Value": { 1216 | "Ref": "AppPrivateSubnetC" 1217 | } 1218 | }, 1219 | "SecurityGroupSSH": { 1220 | "Value": { 1221 | "Ref": "SecurityGroupSSH" 1222 | } 1223 | } 1224 | } 1225 | } -------------------------------------------------------------------------------- /cloudformation/node.yaml: -------------------------------------------------------------------------------- 1 | systemd: 2 | units: 3 | - name: "rpcbind.service" 4 | enable: true 5 | - name: "kubelet.service" 6 | enable: true 7 | contents: | 8 | [Service] 9 | Environment=KUBELET_IMAGE_TAG=v1.7.8_coreos.2 10 | Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \ 11 | --volume dns,kind=host,source=/etc/resolv.conf \ 12 | --mount volume=dns,target=/etc/resolv.conf \ 13 | --volume var-log,kind=host,source=/var/log \ 14 | --mount volume=var-log,target=/var/log \ 15 | --volume cni-bin,kind=host,source=/opt/cni/bin \ 16 | --mount volume=cni-bin,target=/opt/cni/bin \ 17 | --volume cni-conf-dir,kind=host,source=/etc/cni/net.d \ 18 | --mount volume=cni-conf-dir,target=/etc/cni/net.d" 19 | ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests 20 | ExecStartPre=/usr/bin/mkdir -p /var/log/containers 21 | ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid 22 | ExecStartPre=/usr/bin/mkdir -p /opt/cni/bin 23 | ExecStartPre=/usr/bin/mkdir -p /etc/cni/net.d 24 | ExecStartPre=/usr/bin/tar xzvf /opt/cni/bin/cni-plugins.tgz -C /opt/cni/bin 25 | ExecStart=/usr/lib/coreos/kubelet-wrapper \ 26 | --api-servers=https://10.0.70.50 \ 27 | --container-runtime=docker \ 28 | --allow-privileged=true \ 29 | 
--pod-manifest-path=/etc/kubernetes/manifests \ 30 | --cluster_dns=10.3.0.10 \ 31 | --cluster_domain=cluster.local \ 32 | --network-plugin=cni \ 33 | --cni-bin-dir=/opt/cni/bin \ 34 | --cni-conf-dir=/etc/cni/net.d \ 35 | --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \ 36 | --tls-cert-file=/etc/kubernetes/ssl/worker.pem \ 37 | --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem \ 38 | --client-ca-file=/etc/kubernetes/ssl/ca.pem \ 39 | --cloud-provider=aws 40 | ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid 41 | Restart=always 42 | RestartSec=10 43 | 44 | [Install] 45 | WantedBy=multi-user.target 46 | 47 | storage: 48 | files: 49 | - path: "/etc/sysctl.d/sysctl.conf" 50 | filesystem: "root" 51 | mode: 644 52 | contents: 53 | inline: | 54 | vm.max_map_count = 262144 55 | - path: "/etc/kubernetes/manifests/kube-proxy.yaml" 56 | filesystem: "root" 57 | mode: 644 58 | contents: 59 | remote: 60 | url: https://raw.githubusercontent.com/upmc-enterprises/kubernetes-on-aws/master/yaml/kube-proxy-node.yaml 61 | - path: "/etc/kubernetes/worker-kubeconfig.yaml" 62 | filesystem: "root" 63 | mode: 644 64 | contents: 65 | inline: | 66 | apiVersion: v1 67 | kind: Config 68 | clusters: 69 | - name: local 70 | cluster: 71 | certificate-authority: /etc/kubernetes/ssl/ca.pem 72 | users: 73 | - name: kubelet 74 | user: 75 | client-certificate: /etc/kubernetes/ssl/worker.pem 76 | client-key: /etc/kubernetes/ssl/worker-key.pem 77 | contexts: 78 | - context: 79 | cluster: local 80 | user: kubelet 81 | name: kubelet-context 82 | current-context: kubelet-context 83 | - path: "/opt/cni/bin/cni-plugins.tgz" 84 | filesystem: "root" 85 | mode: 755 86 | contents: 87 | remote: 88 | url: https://github.com/containernetworking/plugins/releases/download/v0.6.0-rc2/cni-plugins-amd64-v0.6.0-rc2.tgz 89 | - path: "/etc/kubernetes/ssl/ca.pem" 90 | filesystem: "root" 91 | mode: 644 92 | contents: 93 | inline: | 94 | -----BEGIN CERTIFICATE----- 95 | 
MIIDGjCCAgKgAwIBAgIJAJ9qEsLLV83PMA0GCSqGSIb3DQEBBQUAMBIxEDAOBgNV 96 | BAMTB2t1YmUtY2EwHhcNMTcwNzMxMTgzOTU0WhcNNDQxMjE2MTgzOTU0WjASMRAw 97 | DgYDVQQDEwdrdWJlLWNhMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA 98 | qjq0ZoCHwERNz04P5yohyYVSNL9F/oOB+dzn7rU4VGGP8zIcESHDRKUi61D4Sayw 99 | KpobZKiu7qEqV/oRwY/Cwp47z+zvArw9oaMEr/sl/S0bkuRUVIwj76WyokQOYWN3 100 | 5znxWun+kOVw1fnCqbzq4oFLdhdwqL8YP96T3MwMKe+XccFFHZQrzSqWLYrahkGe 101 | k+CMbm5zWespHDnPRWOXTYqQR6l7mWLTreAWrE3yanZ80yTkdBHwJ7Fi2ibG88fG 102 | 2KwvWZnwFMvTgJM9SJ4mWEztIkygDyxRPjvBFal65MnxRQJnjkzi2GcefcXTX1TM 103 | RefJ+KmtEMmx1mKXdszNLwIDAQABo3MwcTAdBgNVHQ4EFgQUu6RK2Do+7spfaAm7 104 | eT4ZQsD1EDgwQgYDVR0jBDswOYAUu6RK2Do+7spfaAm7eT4ZQsD1EDihFqQUMBIx 105 | EDAOBgNVBAMTB2t1YmUtY2GCCQCfahLCy1fNzzAMBgNVHRMEBTADAQH/MA0GCSqG 106 | SIb3DQEBBQUAA4IBAQAxEyEsrwT5IDTBBgxaMPOwEPWJqB0KE10m9L6Z6IP7Q/Ee 107 | KaeaaZX8rHOIUGlF1fUdHfYxFw1NV4J5fORum7yXRB3CBftsplzyOW6paeNt5Gal 108 | VHz9cxgNygWHOfbTKFJVa9HEh+pYbp0Ko07Cbj8Ev7bH6aQjU04IfaZEMhI1Y/WQ 109 | AT7m7R27ttIWX2RueVRdBaGNMUweBWg5Smnof+xiuQIoJNzzqFVRUOurvTAJw3rd 110 | FNiDDb8ozm04sYmNN4bgbQyyYNrO30BsNJpA7p9qr92bV3zU4fGC9mndQI1n2u7O 111 | lLGCuXbyMuhTp/upUcJTjxA9vXsfzlZF5OW+WcsR 112 | -----END CERTIFICATE----- 113 | - path: "/etc/kubernetes/ssl/worker.pem" 114 | filesystem: "root" 115 | mode: 644 116 | contents: 117 | inline: | 118 | -----BEGIN CERTIFICATE----- 119 | MIIDDTCCAfWgAwIBAgIJAJxrbQ79ntkQMA0GCSqGSIb3DQEBBQUAMBIxEDAOBgNV 120 | BAMTB2t1YmUtY2EwHhcNMTcwODExMTkyNTI3WhcNMTgwODExMTkyNTI3WjAtMRQw 121 | EgYDVQQDDAtzeXN0ZW06bm9kZTEVMBMGA1UECgwMc3lzdGVtOm5vZGVzMIIBIjAN 122 | BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArt7YCqAdRUrlDJIspcwg+GqPNwq0 123 | Sq3FT8ug42QXu4C+Fxeaa36BLycrSm2V6dmbCkImftUC/6u1IO0aJowrasO24Zi5 124 | 0Y84WVGIvGXLmU0D8Q6kIvTXqNDXnXAh/QHdF+Oa4gMl2EPSuiWmoqhbjuith+Kv 125 | UM2QLFyWI7pihIPEu+4TWLtslp6q4VbwoLz7X2ggqCv0Ir/lYzhFzecVBgpXldq9 126 | R7wVsXetiFwNB3Wck4j9McVp7Z51D/Z9+HF+1qKU4+ft0fk9yvdkL/85tbnBUwmR 127 | z6uKeZnLF1NSVFjiB9Ub4OjPzMj72WfSPfiR9lb+R/NIGRBjszVNaOP0wwIDAQAB 
128 | o0swSTAJBgNVHRMEAjAAMAsGA1UdDwQEAwIF4DAvBgNVHREEKDAmghQqLiouY2x1 129 | c3Rlci5pbnRlcm5hbIIOKi5lYzIuaW50ZXJuYWwwDQYJKoZIhvcNAQEFBQADggEB 130 | AB01lllmgtQy+EhyKgcfUPYyQwXo1EAho9ND4b9q/7Z7RPdzGx7dFI6SQq0cQZxk 131 | +B2ToPkEp4eS7RMndSMO6+wxtAq4gS9HE3d6zf5chEBN9u+SqtTJ9M4DyQ+Zf/K1 132 | qK842Bsxn3BnDKXylvF3aNUPWdWZf72Uqg8mYr1orWjWRLPv44b5LoUjv862kuMI 133 | jVTjxEkHvqDEI7qluQhlxfu7AhSX8yn796jzPyNcW6XD7sbur6mVEkO+lUMLlh2M 134 | 2yEby5ZQe0Gfj00m+FUvK5VuT4HHuuhiMahn00gT0Jm5cQ4DpSGdJ7JwI2FcKooT 135 | c0P5hLXSVSocengkBTvynqs= 136 | -----END CERTIFICATE----- 137 | - path: "/etc/kubernetes/ssl/worker-key.pem" 138 | filesystem: "root" 139 | mode: 644 140 | contents: 141 | inline: | 142 | -----BEGIN RSA PRIVATE KEY----- 143 | MIIEpQIBAAKCAQEArt7YCqAdRUrlDJIspcwg+GqPNwq0Sq3FT8ug42QXu4C+Fxea 144 | a36BLycrSm2V6dmbCkImftUC/6u1IO0aJowrasO24Zi50Y84WVGIvGXLmU0D8Q6k 145 | IvTXqNDXnXAh/QHdF+Oa4gMl2EPSuiWmoqhbjuith+KvUM2QLFyWI7pihIPEu+4T 146 | WLtslp6q4VbwoLz7X2ggqCv0Ir/lYzhFzecVBgpXldq9R7wVsXetiFwNB3Wck4j9 147 | McVp7Z51D/Z9+HF+1qKU4+ft0fk9yvdkL/85tbnBUwmRz6uKeZnLF1NSVFjiB9Ub 148 | 4OjPzMj72WfSPfiR9lb+R/NIGRBjszVNaOP0wwIDAQABAoIBAQCqbYUg1euxHM0e 149 | 81eQPuHjOfdaLZSJM9KZclvbQjHfDBo3Z0mYejJtQj9uyl7RCsOPu+jIs9G4XCCr 150 | dmmGKBYod5ZFSBPRqUPByTT6aDuFrQmqZhqR9w43+VIqnp6Bds+D+M96dpbrry4x 151 | PYCqBms1XI/DX6p9ldptYc7yAzUA784ePVm6BFDIqy+Z3opxoMFay25EQhnhkX2G 152 | UxFWpSw9YolmH0FdA/cj8QGuh0xslcrr92H9ZnLbZrkd/N8mIwsS6YOmGlAwM/wi 153 | o7YR7Qpc/byTipLynGfVAVqiRVHk/zWFbkDcgtoLRlDVqN+2LI8LJoi81ZbOX1jS 154 | 4YwfwOgBAoGBAODdxnbZe9zOuMvLiL6scPaofh5jJEe+tjJl/lTBUPUwO6A24s17 155 | 5muus4KoElGq+o/hca4TvmfkDIjWkf504EZ4hfViMnd673mxFr2iLgpv/62Y4y2y 156 | mhwk3ueybd6xLKum+jyXw11IhfRDR3JF0Va4Y1i/QaLSYj9XCc1a6kv7AoGBAMcV 157 | AKdOIkafYpFXaoXE7Od9Z2hMiBDNCM4Vh1BOCQfXSY8QUATYYeVcpDWF9Z2ngHhs 158 | kRjID8rhMeeWjAM4qBvvWAst9ysjuARlKiu5y/DmRYSGDJXja1xu/F1Z/qMt4LD5 159 | FI/65Rm03p7RJlE5QiT1ZnwX9pGjEHxmMk1CghfZAoGBAOC8ZazEoalGJaTwb2N5 160 | fpDWRu3h0hGuRfPKwcw9RMc4BG+US0po6Rp4CMqtZVmf0znXbEE5VFQKtIhSQqkY 
161 | cEmeDOv4z01gXVS3K24tV2xxEQyTv4Edfi5gnzLbvjkRw/5uLKxAVS223MIKN666 162 | noTYVdoNk/DB6RU6zP4jPgTfAoGAbeLL340jIjQrpenIZFnUIdp4T3uexxdFOutr 163 | KwpHtcpBUfRBFsuRDZbbFKgCcKjaIp5aYIFdJjCy6Q+R7N1C/VhZEqKmgWtP0S09 164 | 37DIPwn7aTDMlZdX1Ud1iNl50fwqv8RcczSbbFsHXkY3jjG6rse9b9WSRcTp/qAy 165 | N670O9ECgYEAkSCIeyvhtzKXN+gcTgK8ghzerXHYYFE28I4bt5/NmW+yHu7Og/fZ 166 | huZLU4hkOtpecTpoRN8qzUryzzozZydOqOXIVJRdtsjVa9M+DC95lnief8YsMRU0 167 | 5YcArPUzMGI+DhP5q+Y4XVIjY7FPPMwNYz32xJomOcbz5toun7GfAXw= 168 | -----END RSA PRIVATE KEY----- 169 | 170 | locksmith: 171 | reboot_strategy: "off" -------------------------------------------------------------------------------- /images/network.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/upmc-enterprises/kubernetes-on-aws/76ad437c380476dea6a4f2ed37b738e914a2b823/images/network.png -------------------------------------------------------------------------------- /scripts/certs/.gitignore: -------------------------------------------------------------------------------- 1 | output/ 2 | *.pem 3 | *.csr 4 | *.srl 5 | -------------------------------------------------------------------------------- /scripts/certs/convertcerts2base64.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | OperatingSystemType=$(uname -s) 4 | 5 | case ${OperatingSystemType} in 6 | 'Darwin') 7 | EncodeCommand='base64' 8 | ;; 9 | 'Linux') 10 | EncodeCommand='base64 -w 0' 11 | ;; 12 | esac 13 | 14 | rm -rf output && mkdir output 15 | 16 | cat apiserver-key.pem | (eval ${EncodeCommand}) > output/apiserver-key.pem.txt 17 | cat apiserver.pem | (eval ${EncodeCommand}) > output/apiserver.pem.txt 18 | cat ca-key.pem | (eval ${EncodeCommand}) > output/ca-key.pem.txt 19 | cat ca.pem | (eval ${EncodeCommand}) > output/ca.pem.txt 20 | cat worker-key.pem | (eval ${EncodeCommand}) > output/worker-key.pem.txt 21 | cat worker.pem | (eval 
${EncodeCommand}) > output/worker.pem.txt 22 | -------------------------------------------------------------------------------- /scripts/certs/generate-api-server-certs.sh: -------------------------------------------------------------------------------- 1 | # Generate the API Server keypair 2 | openssl genrsa -out apiserver-key.pem 2048 3 | openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf 4 | openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf 5 | -------------------------------------------------------------------------------- /scripts/certs/generate-root-ca-certs.sh: -------------------------------------------------------------------------------- 1 | # Generate the Root CA 2 | openssl genrsa -out ca-key.pem 2048 3 | openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca" 4 | -------------------------------------------------------------------------------- /scripts/certs/generate-worker-certs.sh: -------------------------------------------------------------------------------- 1 | # Generate Worker Certs 2 | openssl genrsa -out worker-key.pem 2048 3 | openssl req -new -key worker-key.pem -out worker.csr -subj "/CN=system:node/O=system:nodes" -config worker-openssl.cnf 4 | openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf 5 | -------------------------------------------------------------------------------- /scripts/certs/openssl.cnf: -------------------------------------------------------------------------------- 1 | [req] 2 | req_extensions = v3_req 3 | distinguished_name = req_distinguished_name 4 | [req_distinguished_name] 5 | [ v3_req ] 6 | basicConstraints = CA:FALSE 7 | keyUsage = nonRepudiation, digitalSignature, keyEncipherment 8 | subjectAltName = @alt_names 9 | [alt_names] 
10 | DNS.1 = kubernetes 11 | DNS.2 = kubernetes.default 12 | DNS.3 = kubernetes.default.svc 13 | DNS.4 = kubernetes.default.svc.cluster.local 14 | IP.1 = 10.3.0.1 15 | IP.2 = 10.0.70.50 16 | -------------------------------------------------------------------------------- /scripts/certs/worker-openssl.cnf: -------------------------------------------------------------------------------- 1 | [req] 2 | req_extensions = v3_req 3 | distinguished_name = req_distinguished_name 4 | [req_distinguished_name] 5 | [ v3_req ] 6 | basicConstraints = CA:FALSE 7 | keyUsage = nonRepudiation, digitalSignature, keyEncipherment 8 | subjectAltName = @alt_names 9 | [alt_names] 10 | DNS.1 = *.*.cluster.internal 11 | DNS.2 = *.ec2.internal -------------------------------------------------------------------------------- /yaml/apiserver.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: kube-apiserver 5 | namespace: kube-system 6 | spec: 7 | hostNetwork: true 8 | containers: 9 | - name: kube-apiserver 10 | image: gcr.io/google_containers/hyperkube:v1.7.8 11 | command: 12 | - /hyperkube 13 | - apiserver 14 | - --bind-address=0.0.0.0 15 | - --etcd-servers=http://10.0.70.50:2379 16 | - --allow-privileged=true 17 | - --service-cluster-ip-range=10.3.0.0/24 18 | - --secure-port=443 19 | - --advertise-address=10.0.70.50 20 | - --admission_control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,PersistentVolumeLabel 21 | - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem 22 | - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem 23 | - --client-ca-file=/etc/kubernetes/ssl/ca.pem 24 | - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem 25 | - --runtime-config=extensions/v1beta1/networkpolicies=true 26 | - --anonymous-auth=false 27 | - --runtime-config=batch/v2alpha1=true 28 | - --cloud-provider=aws 29 | - --authorization-mode=RBAC 30 | livenessProbe: 31 | httpGet: 32 | 
host: 127.0.0.1 33 | port: 8080 34 | path: /healthz 35 | initialDelaySeconds: 15 36 | timeoutSeconds: 15 37 | ports: 38 | - containerPort: 443 39 | hostPort: 443 40 | name: https 41 | - containerPort: 8080 42 | hostPort: 8080 43 | name: local 44 | volumeMounts: 45 | - mountPath: /etc/kubernetes/ssl 46 | name: ssl-certs-kubernetes 47 | readOnly: true 48 | - mountPath: /etc/ssl/certs 49 | name: ssl-certs-host 50 | readOnly: true 51 | volumes: 52 | - hostPath: 53 | path: /etc/kubernetes/ssl 54 | name: ssl-certs-kubernetes 55 | - hostPath: 56 | path: /usr/share/ca-certificates 57 | name: ssl-certs-host -------------------------------------------------------------------------------- /yaml/cluster/controller/grafana-service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | labels: 5 | kubernetes.io/cluster-service: 'true' 6 | kubernetes.io/name: monitoring-grafana 7 | name: monitoring-grafana 8 | namespace: kube-system 9 | spec: 10 | # In a production setup, we recommend accessing Grafana through an external Loadbalancer 11 | # or through a public IP. 
12 | # type: LoadBalancer 13 | ports: 14 | - port: 80 15 | targetPort: 3000 16 | selector: 17 | name: influxGrafana 18 | -------------------------------------------------------------------------------- /yaml/cluster/controller/heapster-controller.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ReplicationController 3 | metadata: 4 | labels: 5 | k8s-app: heapster 6 | name: heapster 7 | version: v6 8 | name: heapster 9 | namespace: kube-system 10 | spec: 11 | replicas: 1 12 | selector: 13 | k8s-app: heapster 14 | version: v6 15 | template: 16 | metadata: 17 | labels: 18 | k8s-app: heapster 19 | version: v6 20 | spec: 21 | containers: 22 | - name: heapster 23 | image: kubernetes/heapster:canary 24 | imagePullPolicy: Always 25 | command: 26 | - /heapster 27 | - --source=kubernetes:https://kubernetes.default 28 | - --sink=influxdb:http://monitoring-influxdb:8086 29 | -------------------------------------------------------------------------------- /yaml/cluster/controller/heapster-service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | labels: 5 | kubernetes.io/cluster-service: 'true' 6 | kubernetes.io/name: Heapster 7 | name: heapster 8 | namespace: kube-system 9 | spec: 10 | ports: 11 | - port: 80 12 | targetPort: 8082 13 | selector: 14 | k8s-app: heapster 15 | -------------------------------------------------------------------------------- /yaml/cluster/controller/influxdb-grafana-controller.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ReplicationController 3 | metadata: 4 | labels: 5 | name: influxGrafana 6 | name: influxdb-grafana 7 | namespace: kube-system 8 | spec: 9 | replicas: 1 10 | selector: 11 | name: influxGrafana 12 | template: 13 | metadata: 14 | labels: 15 | name: influxGrafana 16 | spec: 17 | containers: 18 | - name: influxdb 
19 | image: kubernetes/heapster_influxdb:v0.5 20 | volumeMounts: 21 | - mountPath: /data 22 | name: influxdb-storage 23 | - name: grafana 24 | image: gcr.io/google_containers/heapster_grafana:v2.6.0-2 25 | env: 26 | - name: INFLUXDB_SERVICE_URL 27 | value: http://monitoring-influxdb:8086 28 | # The following env variables are required to make Grafana accessible via 29 | # the kubernetes api-server proxy. On production clusters, we recommend 30 | # removing these env variables, setup auth for grafana, and expose the grafana 31 | # service using a LoadBalancer or a public IP. 32 | - name: GF_AUTH_BASIC_ENABLED 33 | value: "false" 34 | - name: GF_AUTH_ANONYMOUS_ENABLED 35 | value: "true" 36 | - name: GF_AUTH_ANONYMOUS_ORG_ROLE 37 | value: Admin 38 | - name: GF_SERVER_ROOT_URL 39 | value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/ 40 | volumeMounts: 41 | - mountPath: /var 42 | name: grafana-storage 43 | volumes: 44 | - name: influxdb-storage 45 | emptyDir: {} 46 | - name: grafana-storage 47 | emptyDir: {} 48 | -------------------------------------------------------------------------------- /yaml/cluster/controller/influxdb-service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | labels: null 5 | name: monitoring-influxdb 6 | namespace: kube-system 7 | spec: 8 | ports: 9 | - name: http 10 | port: 8083 11 | targetPort: 8083 12 | - name: api 13 | port: 8086 14 | targetPort: 8086 15 | selector: 16 | name: influxGrafana 17 | -------------------------------------------------------------------------------- /yaml/cluster/controller/kube-dashboard-deploy.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2015 Google Inc. All Rights Reserved. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 
5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | # Configuration to deploy release version of the Dashboard UI. 16 | # 17 | # Example usage: kubectl create -f 18 | 19 | kind: Deployment 20 | apiVersion: extensions/v1beta1 21 | metadata: 22 | labels: 23 | app: kubernetes-dashboard 24 | name: kubernetes-dashboard 25 | namespace: kube-system 26 | spec: 27 | replicas: 1 28 | selector: 29 | matchLabels: 30 | app: kubernetes-dashboard 31 | template: 32 | metadata: 33 | labels: 34 | app: kubernetes-dashboard 35 | spec: 36 | containers: 37 | - name: kubernetes-dashboard 38 | image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0 39 | imagePullPolicy: Always 40 | ports: 41 | - containerPort: 9090 42 | protocol: TCP 43 | args: 44 | # Uncomment the following line to manually specify Kubernetes API server Host 45 | # If not specified, Dashboard will attempt to auto discover the API server and connect 46 | # to it. Uncomment only if the default does not work. 
47 | # - --apiserver-host=http://my-address:port 48 | livenessProbe: 49 | httpGet: 50 | path: / 51 | port: 9090 52 | initialDelaySeconds: 30 53 | timeoutSeconds: 30 54 | -------------------------------------------------------------------------------- /yaml/cluster/controller/kube-dashboard-svc.yaml: -------------------------------------------------------------------------------- 1 | kind: Service 2 | apiVersion: v1 3 | metadata: 4 | labels: 5 | app: kubernetes-dashboard 6 | name: kubernetes-dashboard 7 | namespace: kube-system 8 | spec: 9 | type: NodePort 10 | ports: 11 | - port: 80 12 | targetPort: 9090 13 | selector: 14 | app: kubernetes-dashboard 15 | -------------------------------------------------------------------------------- /yaml/cluster/controller/kube-podmaster.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: kube-podmaster 5 | namespace: kube-system 6 | spec: 7 | hostNetwork: true 8 | containers: 9 | - name: scheduler-elector 10 | image: gcr.io/google_containers/podmaster:1.1 11 | command: 12 | - /podmaster 13 | - --etcd-servers=${ETCD_ENDPOINTS} 14 | - --key=scheduler 15 | - --whoami=${ADVERTISE_IP} 16 | - --source-file=/src/manifests/kube-scheduler.yaml 17 | - --dest-file=/dst/manifests/kube-scheduler.yaml 18 | volumeMounts: 19 | - mountPath: /src/manifests 20 | name: manifest-src 21 | readOnly: true 22 | - mountPath: /dst/manifests 23 | name: manifest-dst 24 | - name: controller-manager-elector 25 | image: gcr.io/google_containers/podmaster:1.1 26 | command: 27 | - /podmaster 28 | - --etcd-servers=${ETCD_ENDPOINTS} 29 | - --key=controller 30 | - --whoami=${ADVERTISE_IP} 31 | - --source-file=/src/manifests/kube-controller-manager.yaml 32 | - --dest-file=/dst/manifests/kube-controller-manager.yaml 33 | terminationMessagePath: /dev/termination-log 34 | volumeMounts: 35 | - mountPath: /src/manifests 36 | name: manifest-src 37 | readOnly: true 38 | - mountPath: 
/dst/manifests 39 | name: manifest-dst 40 | volumes: 41 | - hostPath: 42 | path: /srv/kubernetes/manifests 43 | name: manifest-src 44 | - hostPath: 45 | path: /etc/kubernetes/manifests 46 | name: manifest-dst 47 | -------------------------------------------------------------------------------- /yaml/cluster/kube-dns.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2016 The Kubernetes Authors. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | # Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml 16 | # in sync with this file. 
17 | 18 | apiVersion: v1 19 | kind: ServiceAccount 20 | metadata: 21 | name: kube-dns 22 | namespace: kube-system 23 | labels: 24 | kubernetes.io/cluster-service: "true" 25 | addonmanager.kubernetes.io/mode: Reconcile 26 | 27 | --- 28 | 29 | kind: ClusterRoleBinding 30 | apiVersion: rbac.authorization.k8s.io/v1beta1 31 | metadata: 32 | name: dns-controller 33 | subjects: 34 | - kind: ServiceAccount 35 | name: kube-dns 36 | namespace: kube-system 37 | roleRef: 38 | kind: ClusterRole 39 | name: system:kube-dns 40 | apiGroup: rbac.authorization.k8s.io 41 | 42 | --- 43 | 44 | apiVersion: v1 45 | kind: Service 46 | metadata: 47 | name: kube-dns 48 | namespace: kube-system 49 | labels: 50 | k8s-app: kube-dns 51 | kubernetes.io/cluster-service: "true" 52 | addonmanager.kubernetes.io/mode: Reconcile 53 | kubernetes.io/name: "KubeDNS" 54 | spec: 55 | selector: 56 | k8s-app: kube-dns 57 | clusterIP: 10.3.0.10 58 | ports: 59 | - name: dns 60 | port: 53 61 | protocol: UDP 62 | - name: dns-tcp 63 | port: 53 64 | protocol: TCP 65 | 66 | --- 67 | 68 | apiVersion: extensions/v1beta1 69 | kind: Deployment 70 | metadata: 71 | name: kube-dns 72 | namespace: kube-system 73 | labels: 74 | k8s-app: kube-dns 75 | kubernetes.io/cluster-service: "true" 76 | addonmanager.kubernetes.io/mode: Reconcile 77 | spec: 78 | # replicas: not specified here: 79 | # 1. In order to make Addon Manager do not reconcile this replicas parameter. 80 | # 2. Default is 1. 81 | # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on. 
82 | strategy: 83 | rollingUpdate: 84 | maxSurge: 10% 85 | maxUnavailable: 0 86 | selector: 87 | matchLabels: 88 | k8s-app: kube-dns 89 | template: 90 | metadata: 91 | labels: 92 | k8s-app: kube-dns 93 | annotations: 94 | scheduler.alpha.kubernetes.io/critical-pod: '' 95 | spec: 96 | tolerations: 97 | - key: "CriticalAddonsOnly" 98 | operator: "Exists" 99 | volumes: 100 | - name: kube-dns-config 101 | configMap: 102 | name: kube-dns 103 | optional: true 104 | containers: 105 | - name: kubedns 106 | image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4 107 | resources: 108 | # TODO: Set memory limits when we've profiled the container for large 109 | # clusters, then set request = limit to keep this container in the 110 | # guaranteed class. Currently, this container falls into the 111 | # "burstable" category so the kubelet doesn't back off from restarting it. 112 | limits: 113 | memory: 170Mi 114 | requests: 115 | cpu: 100m 116 | memory: 70Mi 117 | livenessProbe: 118 | httpGet: 119 | path: /healthcheck/kubedns 120 | port: 10054 121 | scheme: HTTP 122 | initialDelaySeconds: 60 123 | timeoutSeconds: 5 124 | successThreshold: 1 125 | failureThreshold: 5 126 | readinessProbe: 127 | httpGet: 128 | path: /readiness 129 | port: 8081 130 | scheme: HTTP 131 | # We poll on pod startup for the Kubernetes master service and 132 | # only set up the /readiness HTTP server once that's available. 133 | initialDelaySeconds: 3 134 | timeoutSeconds: 5 135 | args: 136 | - --domain=cluster.local. 
137 | - --dns-port=10053 138 | - --config-dir=/kube-dns-config 139 | - --v=2 140 | env: 141 | - name: PROMETHEUS_PORT 142 | value: "10055" 143 | ports: 144 | - containerPort: 10053 145 | name: dns-local 146 | protocol: UDP 147 | - containerPort: 10053 148 | name: dns-tcp-local 149 | protocol: TCP 150 | - containerPort: 10055 151 | name: metrics 152 | protocol: TCP 153 | volumeMounts: 154 | - name: kube-dns-config 155 | mountPath: /kube-dns-config 156 | - name: dnsmasq 157 | image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4 158 | livenessProbe: 159 | httpGet: 160 | path: /healthcheck/dnsmasq 161 | port: 10054 162 | scheme: HTTP 163 | initialDelaySeconds: 60 164 | timeoutSeconds: 5 165 | successThreshold: 1 166 | failureThreshold: 5 167 | args: 168 | - -v=2 169 | - -logtostderr 170 | - -configDir=/etc/k8s/dns/dnsmasq-nanny 171 | - -restartDnsmasq=true 172 | - -- 173 | - -k 174 | - --cache-size=1000 175 | - --log-facility=- 176 | - --server=/cluster.local/127.0.0.1#10053 177 | - --server=/in-addr.arpa/127.0.0.1#10053 178 | - --server=/ip6.arpa/127.0.0.1#10053 179 | ports: 180 | - containerPort: 53 181 | name: dns 182 | protocol: UDP 183 | - containerPort: 53 184 | name: dns-tcp 185 | protocol: TCP 186 | # see: https://github.com/kubernetes/kubernetes/issues/29055 for details 187 | resources: 188 | requests: 189 | cpu: 150m 190 | memory: 20Mi 191 | volumeMounts: 192 | - name: kube-dns-config 193 | mountPath: /etc/k8s/dns/dnsmasq-nanny 194 | - name: sidecar 195 | image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4 196 | livenessProbe: 197 | httpGet: 198 | path: /metrics 199 | port: 10054 200 | scheme: HTTP 201 | initialDelaySeconds: 60 202 | timeoutSeconds: 5 203 | successThreshold: 1 204 | failureThreshold: 5 205 | args: 206 | - --v=2 207 | - --logtostderr 208 | - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A 209 | - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A 210 | ports: 211 | - 
containerPort: 10054 212 | name: metrics 213 | protocol: TCP 214 | resources: 215 | requests: 216 | memory: 20Mi 217 | cpu: 10m 218 | dnsPolicy: Default # Don't use cluster DNS. 219 | serviceAccountName: kube-dns -------------------------------------------------------------------------------- /yaml/controller-manager.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: kube-controller-manager 5 | namespace: kube-system 6 | spec: 7 | hostNetwork: true 8 | containers: 9 | - name: kube-controller-manager 10 | image: gcr.io/google_containers/hyperkube:v1.7.8 11 | command: 12 | - /hyperkube 13 | - controller-manager 14 | - --master=http://127.0.0.1:8080 15 | - --leader-elect=true 16 | - --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem 17 | - --root-ca-file=/etc/kubernetes/ssl/ca.pem 18 | - --cloud-provider=aws 19 | # - --cluster-cidr=10.244.0.0/16 20 | # - --allocate-node-cidrs=true 21 | resources: 22 | requests: 23 | cpu: 200m 24 | livenessProbe: 25 | httpGet: 26 | host: 127.0.0.1 27 | path: /healthz 28 | port: 10252 29 | initialDelaySeconds: 15 30 | timeoutSeconds: 15 31 | volumeMounts: 32 | - mountPath: /etc/kubernetes/ssl 33 | name: ssl-certs-kubernetes 34 | readOnly: true 35 | - mountPath: /etc/ssl/certs 36 | name: ssl-certs-host 37 | readOnly: true 38 | volumes: 39 | - hostPath: 40 | path: /etc/kubernetes/ssl 41 | name: ssl-certs-kubernetes 42 | - hostPath: 43 | path: /usr/share/ca-certificates 44 | name: ssl-certs-host -------------------------------------------------------------------------------- /yaml/kube-proxy-node.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: kube-proxy 5 | namespace: kube-system 6 | spec: 7 | hostNetwork: true 8 | containers: 9 | - name: kube-proxy 10 | image: gcr.io/google_containers/hyperkube:v1.7.8 11 | command: 12 | - 
/hyperkube 13 | - proxy 14 | - --master=https://10.0.70.50 15 | - --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml 16 | - --proxy-mode=iptables 17 | securityContext: 18 | privileged: true 19 | volumeMounts: 20 | - mountPath: /etc/ssl/certs 21 | name: "ssl-certs" 22 | - mountPath: /etc/kubernetes/worker-kubeconfig.yaml 23 | name: "kubeconfig" 24 | readOnly: true 25 | - mountPath: /etc/kubernetes/ssl 26 | name: "etc-kube-ssl" 27 | readOnly: true 28 | volumes: 29 | - name: "ssl-certs" 30 | hostPath: 31 | path: "/usr/share/ca-certificates" 32 | - name: "kubeconfig" 33 | hostPath: 34 | path: "/etc/kubernetes/worker-kubeconfig.yaml" 35 | - name: "etc-kube-ssl" 36 | hostPath: 37 | path: "/etc/kubernetes/ssl" -------------------------------------------------------------------------------- /yaml/kube-proxy.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: kube-proxy 5 | namespace: kube-system 6 | spec: 7 | hostNetwork: true 8 | containers: 9 | - name: kube-proxy 10 | image: gcr.io/google_containers/hyperkube:v1.7.8 11 | command: 12 | - /hyperkube 13 | - proxy 14 | - --master=http://127.0.0.1:8080 15 | - --proxy-mode=iptables 16 | securityContext: 17 | privileged: true 18 | volumeMounts: 19 | - mountPath: /etc/ssl/certs 20 | name: ssl-certs-host 21 | readOnly: true 22 | volumes: 23 | - hostPath: 24 | path: /usr/share/ca-certificates 25 | name: ssl-certs-host -------------------------------------------------------------------------------- /yaml/rbac/sloka-admin.yaml: -------------------------------------------------------------------------------- 1 | kind: ClusterRoleBinding 2 | apiVersion: rbac.authorization.k8s.io/v1beta1 3 | metadata: 4 | name: steve-admin 5 | subjects: 6 | - kind: User 7 | name: sloka 8 | roleRef: 9 | kind: ClusterRole 10 | name: cluster-admin 11 | apiGroup: rbac.authorization.k8s.io 12 | 13 | 
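The `sloka-admin.yaml` binding above grants `cluster-admin` to the user `sloka`. With the cert-based auth described in RBAC.md, that user name comes from the CN of a client certificate signed by the cluster CA. A minimal sketch of producing such a certificate with `openssl` follows; the file names, the throwaway CA, and the `O=devs` group are illustrative assumptions, not taken from the repo's `scripts/certs` scripts:

```shell
set -e
# Illustrative: a throwaway CA stands in for the cluster CA that
# scripts/certs/generate-root-ca-certs.sh would produce.
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 365 -out ca.pem -subj "/CN=kube-ca"

# Client key + CSR: the CN becomes the RBAC user name, the O becomes a group.
openssl genrsa -out sloka-key.pem 2048
openssl req -new -key sloka-key.pem -out sloka.csr -subj "/CN=sloka/O=devs"

# Sign the client cert with the CA.
openssl x509 -req -in sloka.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out sloka.pem -days 365

# The subject carries the identity the API server will see.
openssl x509 -in sloka.pem -noout -subject
```

The resulting `sloka.pem`/`sloka-key.pem` pair would then be referenced from a kubeconfig user entry, and the ClusterRoleBinding above maps that identity to `cluster-admin`.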
-------------------------------------------------------------------------------- /yaml/scheduler.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: kube-scheduler 5 | namespace: kube-system 6 | spec: 7 | hostNetwork: true 8 | containers: 9 | - name: kube-scheduler 10 | image: gcr.io/google_containers/hyperkube:v1.7.8 11 | command: 12 | - /hyperkube 13 | - scheduler 14 | - --master=http://127.0.0.1:8080 15 | - --leader-elect=true 16 | resources: 17 | requests: 18 | cpu: 100m 19 | livenessProbe: 20 | httpGet: 21 | host: 127.0.0.1 22 | path: /healthz 23 | port: 10251 24 | initialDelaySeconds: 15 25 | timeoutSeconds: 15 --------------------------------------------------------------------------------
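`kube-proxy-node.yaml` above mounts `/etc/kubernetes/worker-kubeconfig.yaml`, but that file is not part of this repo. A minimal sketch of what it might contain is below; the API server address matches the `--master` flag above, while the certificate file names under `/etc/kubernetes/ssl` are assumptions based on the worker cert scripts, not confirmed paths:

```yaml
# Hypothetical /etc/kubernetes/worker-kubeconfig.yaml -- cert file names
# under /etc/kubernetes/ssl are assumptions, not confirmed by this repo.
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://10.0.70.50
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kube-proxy
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- name: kube-proxy-context
  context:
    cluster: local
    user: kube-proxy
current-context: kube-proxy-context
```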