├── README.md
├── authentication.md
├── deploy
├── examples
│   └── rbac
│       └── pod-reader-role.yaml
└── src
    ├── addons
    │   ├── deploy
    │   ├── external-dns
    │   │   └── external-dns.yaml
    │   ├── fluentd
    │   │   ├── docker-image
    │   │   │   └── v0.12
    │   │   │       └── alpine-cloudwatch
    │   │   │           ├── Dockerfile
    │   │   │           ├── conf
    │   │   │           │   ├── fluent.conf
    │   │   │           │   ├── kubernetes.conf
    │   │   │           │   └── systemd.conf
    │   │   │           └── plugins
    │   │   │               └── parser_kubernetes.rb
    │   │   └── fluentd-kubernetes-cloudwatch
    │   │       ├── fluentd.cm.yaml
    │   │       ├── fluentd.ds.yaml
    │   │       ├── fluentd.rbac.yaml
    │   │       └── log.ns.yaml
    │   ├── ingress
    │   │   └── nginx
    │   │       ├── nginx.lb.yaml
    │   │       └── rbac.yaml
    │   └── kube-lego
    │       ├── kube-lego.yaml
    │       └── rbac.yaml
    ├── iam
    │   ├── deploy
    │   └── iam.yaml
    ├── kube-aws
    │   └── cluster.yaml
    ├── vpc
    │   ├── deploy
    │   └── vpc.yaml
    └── vpn
        ├── Docker
        │   ├── Dockerfile
        │   └── start-pritunl
        └── userdata.yaml
/README.md:
--------------------------------------------------------------------------------
1 | # Secure Kubernetes HA cluster in AWS using kube-aws
2 |
3 | This repository contains an example of how to deploy a secure Kubernetes HA cluster in AWS using [kube-aws](https://github.com/kubernetes-incubator/kube-aws) automatically.
4 |
5 | The following setup uses a base CloudFormation stack to configure public and private subnets, an IGW, a NAT gateway, route tables, and KMS, and automatically deploys a VPN server in a public subnet. After the stack is created, the Kubernetes cluster is automatically deployed on top of it using `kube-aws`.
6 |
7 | [![asciicast](https://asciinema.org/a/145270.png)](https://asciinema.org/a/145270)
8 |
9 | **Features:**
10 |
11 | * simple and interactive deployment
12 | * all the nodes are deployed in private subnets
13 | * 3 distinct availability zones
14 | * multi AZ masters
15 | * workers configured using node pools, similar to [GKE node pools](https://cloud.google.com/container-engine/docs/node-pools)
16 | * HA etcd with encrypted data partitions, automatic backups to S3, and automatic/manual recovery from failures
17 | * role-based access control using the RBAC plugin
18 | * user authentication using [OpenID Connect](https://kubernetes.io/docs/admin/authentication/#openid-connect-tokens) (OIDC)
19 | * AWS IAM roles directly assigned to pods using [kube2iam](https://github.com/jtblin/kube2iam)
20 | * cluster autoscaling
21 | * VPN server automatically deployed to a public subnet
22 |
23 | 
24 |
25 |
26 | ### Deploy the Kubernetes cluster
27 |
28 | 1. Clone this repository locally
29 |
30 | 2. Run `./deploy` and follow the instructions
31 |
32 | 3. Access your Kubernetes cluster. Since all the resources are in private subnets, you'll need a VPN placed in one of the public subnets in order to reach them. [Pritunl](https://docs.pritunl.com/docs/installation) is now automatically deployed to a public subnet with an Elastic IP and a DNS record.
33 |
34 |
35 | *Optionally, you can configure your `~/.kube/config` according to the generated `kubeconfig` file to avoid passing the `--kubeconfig` flag on every command.*
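
For example, assuming the `kubeconfig` generated by `kube-aws` ends up in the `kube-aws/` directory (adjust the path to your layout), a minimal sketch:

    # point kubectl at the generated kubeconfig for the current shell
    export KUBECONFIG=$PWD/kube-aws/kubeconfig
    kubectl get nodes

    # or pass it explicitly on a single command
    kubectl --kubeconfig=kube-aws/kubeconfig get nodes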
36 |
37 | **Important**
38 |
39 | *In order to expose public services using an ELB or Ingress, the public subnets have to be tagged with the cluster name.*
40 |
41 | *Ex. `KubernetesCluster=cluster_name`*
42 |
43 | *This is now set automatically.*
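
If you ever need to apply the tag by hand (for example to a public subnet created outside this stack), a minimal sketch with the AWS CLI; the subnet ID and cluster name below are placeholders:

    aws ec2 create-tags \
      --resources subnet-0123456789abcdef0 \
      --tags Key=KubernetesCluster,Value=cluster_name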
44 |
45 |
46 | ### Add-ons
47 |
48 |
49 | *Note: all the add-ons can now be deployed automatically using the `addons/deploy` script.*
50 | #### Route53
51 |
52 | This add-on is based on the [ExternalDNS](https://github.com/kubernetes-incubator/external-dns) project, which allows you to control Route53 DNS records dynamically via Kubernetes resources.
53 |
54 | *Note: before deploying this add-on, you have to create an IAM role and set up a trust relationship.*
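
Once it is running, ExternalDNS watches Services and Ingresses and upserts the matching Route53 records. A minimal sketch of how a record gets created; the Service name and hostname below are placeholders:

    # annotate an existing LoadBalancer Service so ExternalDNS creates a record for it
    kubectl annotate service my-service \
      "external-dns.alpha.kubernetes.io/hostname=my-service.example.com"

    # follow the controller logs to confirm the record was upserted
    kubectl -n kube-system logs deployment/external-dns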
55 |
56 | #### Nginx Ingress Controller
57 |
58 | The [Nginx ingress controller](https://github.com/kubernetes/ingress-nginx) is deployed behind an ELB configured with Proxy Protocol. This way the ingress external address is always associated with your ELB, you don't have to expose your workers publicly, and you get the additional protection the ELB provides.
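
Once the controller and its Service are up, the ELB hostname that fronts it can be read from the Service status and used as the target of your DNS records; the namespace and Service name below are assumptions, adjust them to match `nginx.lb.yaml`:

    kubectl -n kube-system get svc nginx-ingress-lb \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'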
59 |
60 | #### kube-lego
61 | [Kube-Lego](https://github.com/jetstack/kube-lego) automatically requests certificates for Kubernetes Ingress resources from Let's Encrypt.
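
To request a certificate, an Ingress only needs the `kubernetes.io/tls-acme: "true"` annotation and a `tls` section. A minimal sketch with placeholder host, secret, and service names:

    cat <<EOF | kubectl apply -f -
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: my-app
      annotations:
        kubernetes.io/ingress.class: nginx
        kubernetes.io/tls-acme: "true"
    spec:
      tls:
      - hosts:
        - my-app.example.com
        secretName: my-app-tls
      rules:
      - host: my-app.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: my-app
              servicePort: 80
    EOF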
62 |
63 | #### fluentd-cloudwatch
64 | This add-on is based on [fluentd-kubernetes-daemonset](https://github.com/fluent/fluentd-kubernetes-daemonset) and can forward the container logs to CloudWatch Logs.
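
Once the DaemonSet is running, the log group set through the `LOG_GROUP_NAME` environment variable should start filling with one stream per tag; a quick check with the AWS CLI (the group name is a placeholder, use the one configured in `fluentd.ds.yaml`):

    aws logs describe-log-streams \
      --log-group-name kubernetes \
      --order-by LastEventTime \
      --descending \
      --max-items 5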
65 |
66 | #### Monitoring
67 | An easy-to-set-up, in-cluster monitoring solution using Prometheus is available [here](https://github.com/camilb/prometheus-kubernetes).
68 |
--------------------------------------------------------------------------------
/authentication.md:
--------------------------------------------------------------------------------
1 | ## [Work in progress]
2 |
3 | #### OIDC
4 |
5 | #### Kubernetes Dashboard
6 |
7 | #### Dex
8 | Dex runs natively on top of any Kubernetes cluster using Third Party Resources and can drive API server authentication through the OpenID Connect plugin.
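
Kubernetes enables this through the API server's OIDC flags (in this repository they are expected to be set via `kube-aws`'s `cluster.yaml`). For reference, the raw flags look roughly like this; the issuer URL and client ID are placeholders:

    kube-apiserver \
      --oidc-issuer-url=https://dex.example.com \
      --oidc-client-id=example-app \
      --oidc-username-claim=email \
      --oidc-groups-claim=groups
      # ...plus the rest of the usual apiserver flags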
9 |
10 | By default you have administrator rights when using the TLS client certificates. If you plan to grant restricted permissions to other users, Dex can handle user access using OpenID Connect tokens.
11 |
12 | In this example we use the [Github provider](https://github.com/coreos/dex/blob/master/Documentation/github-connector.md) to identify the users.
13 |
14 | Please configure the `./addons/dex/elb/internal-elb.yaml` file, then expose the service.
15 |
16 | kubectl create -f ./addons/dex/elb/internal-elb.yaml
17 |
18 | The DNS record is created automatically by the `ExternalDNS` add-on and should be available in approximately 1 minute.
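
You can verify the record from your workstation once it is created; the hostname below is a placeholder for the one configured in `internal-elb.yaml`:

    dig +short dex.example.com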
19 |
20 | You can now use a client like dex's [example-app](https://github.com/coreos/dex/tree/master/cmd/example-app) to obtain an authentication token.
21 |
22 | If you prefer, you can run this app as an always-running service by configuring and deploying `./addons/kid/kid.yaml`:
23 |
24 | kubectl create secret \
25 | generic kid \
26 | --from-literal=CLIENT_ID=your-client-id \
27 | --from-literal=CLIENT_SECRET=your-client-secret \
28 | -n kube-system
29 |
30 | kubectl create -f ./addons/kid/kid.yaml
31 |
32 | Please check the dex [documentation](https://github.com/coreos/dex/tree/master/Documentation) if you need more information.
33 |
34 | Make a quick test by granting a user permission to list the pods in the `kube-system` namespace.
35 |
36 | kubectl create -f ./examples/rbac/pod-reader-role.yaml
37 | kubectl create rolebinding pod-reader --role=pod-reader --user=user@example.com --namespace=kube-system
38 |
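You can check the result without switching credentials by using impersonation from your admin context:

    kubectl auth can-i list pods --namespace=kube-system --as=user@example.com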
39 |
40 | Example of `~/.kube/config` for a user
41 |
42 | apiVersion: v1
43 | clusters:
44 | - cluster:
45 | certificate-authority-data: ca.pem_base64_encoded
46 | server: https://kubeapi.example.com
47 | name: your_cluster_name
48 | contexts:
49 | - context:
50 | cluster: your_cluster_name
51 | user: user@example.com
52 | name: your_cluster_name
53 | current-context: your_cluster_name
54 | kind: Config
55 | preferences: {}
56 | users:
57 | - name: user@example.com
58 | user:
59 | auth-provider:
60 | config:
61 | access-token: id_token
62 | client-id: client_id
63 | client-secret: client_secret
64 | extra-scopes: groups
65 | id-token: id_token
66 | idp-issuer-url: https://dex.example.com
67 | refresh-token: refresh_token
68 | name: oidc
69 |
70 | If you already have `~/.kube/config` set up, you can use this example to configure the user authentication:
71 |
72 | kubectl config set-credentials user@example.com \
73 | --auth-provider=oidc \
74 | --auth-provider-arg=idp-issuer-url=https://dex.example.com \
75 | --auth-provider-arg=client-id=your_client_id \
76 | --auth-provider-arg=client-secret=your_client_secret \
77 | --auth-provider-arg=refresh-token=your_refresh_token \
78 | --auth-provider-arg=id-token=your_id_token \
79 | --auth-provider-arg=extra-scopes=groups
80 |
81 | Once your `id_token` expires, `kubectl` will attempt to refresh it using your `refresh_token` and `client_secret`, storing the new values for the `refresh_token` and `id_token` in your `~/.kube/config`.
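
To see the currently stored token after a refresh, you can read it back from the kubeconfig; the user name is the one configured above:

    kubectl config view \
      -o jsonpath='{.users[?(@.name=="user@example.com")].user.auth-provider.config.id-token}'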
82 |
--------------------------------------------------------------------------------
/deploy:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | RED='\033[0;31m'
4 | GREEN='\033[0;32m'
5 | ORANGE='\033[0;33m'
6 | BLUE='\033[0;34m'
7 | WHITE='\033[0;37m'
8 | GREEN_PS3=$'\e[0;32m'
9 |
10 |
11 | #########################################################################################
12 | # Default CIDR
13 | #########################################################################################
14 | DEFAULT_VPC_CIDR=10.0.0.0/16
15 | DEFAULT_PRIVATE_SUBNET_A_CIDR=10.0.1.0/24
16 | DEFAULT_PRIVATE_SUBNET_B_CIDR=10.0.2.0/24
17 | DEFAULT_PRIVATE_SUBNET_C_CIDR=10.0.3.0/24
18 | DEFAULT_PUBLIC_SUBNET_A_CIDR=10.0.101.0/24
19 | DEFAULT_PUBLIC_SUBNET_B_CIDR=10.0.102.0/24
20 | DEFAULT_PUBLIC_SUBNET_C_CIDR=10.0.103.0/24
21 |
22 | #########################################################################################
23 | # check requirements
24 | #########################################################################################
25 | echo
26 | echo -e "${ORANGE}Checking requirements"
27 | tput sgr0
28 | echo
29 | #kube-aws
30 | if [[ "$(which kube-aws)" != "" ]] > /dev/null 2>&1; then
31 | echo -e "kube-aws: ......... ${GREEN}OK"
32 | tput sgr0
33 | echo
34 | else
35 | echo -e "${RED}Please install kube-aws and run this script again"
36 | tput sgr0
37 | echo
38 | exit
39 | fi
40 |
41 | #kubectl
42 | if [[ "$(which kubectl)" != "" ]] > /dev/null 2>&1; then
43 | echo -e "kubectl: .......... ${GREEN}OK"
44 | tput sgr0
45 | echo
46 | else
47 | echo -e "${RED}Please install kubectl and run this script again"
48 | tput sgr0
49 | echo
50 | exit
51 | fi
52 |
53 | #awscli
54 | if [[ "$(which aws)" != "" ]] > /dev/null 2>&1; then
55 | echo -e "awscli: ........... ${GREEN}OK"
56 | tput sgr0
57 | echo
58 | else
59 | echo -e "${RED}Please install awscli and run this script again"
60 | tput sgr0
61 | echo
62 | exit
63 | fi
64 |
65 | #aws config
66 | if [[ -f ~/.aws/config ]] > /dev/null 2>&1; then
67 | echo -e "aws config: ....... ${GREEN}OK"
68 | tput sgr0
69 | echo
70 | else
71 | echo -e "${RED}Please configure awscli using ${ORANGE}aws configure ${RED}and run this script again"
72 | tput sgr0
73 | echo
74 | exit
75 | fi
76 |
77 | #jq
78 | if [[ "$(which jq)" != "" ]] > /dev/null 2>&1; then
79 | echo -e "jq: ............... ${GREEN}OK"
80 | tput sgr0
81 | echo
82 | else
83 | echo -e "${RED}Please install jq and run this script again"
84 | tput sgr0
85 | echo
86 | exit
87 | fi
88 |
89 | #copy source
90 | if [ -d "kube-aws" ] && [ -d "vpc" ] && [ -d "iam" ] && [ -d "addons" ]; then
91 | true
92 | else
93 | mkdir vpc kube-aws iam addons
94 | fi
95 |
96 | cp src/kube-aws/cluster.yaml kube-aws/
97 | cp src/vpc/* vpc/
98 | cp src/iam/* iam/
99 | cp -r src/addons/* addons
100 |
101 | #########################################################################################
102 | # setup
103 | #########################################################################################
104 |
105 | #aws profile
106 | PS3="${GREEN_PS3}Please select the AWS profile:"
107 | echo
108 | set -- $(cat ~/.aws/config | grep "\[" | tr -d '[]' | cut -d " " -f 2)
109 | select opt in "$@"
110 | do
111 | case $opt in
112 | $opt)
113 | aws_profile="${opt}"
114 | tput sgr0
115 | echo "$aws_profile"
116 | break
117 | ;;
118 | *) echo invalid option;;
119 | esac
120 | done
121 | echo
122 |
123 | #aws region
124 | PS3="${GREEN_PS3}Please select the AWS Region:"
125 | echo
126 | set -- $(aws --profile $aws_profile ec2 describe-regions | jq -r '.Regions[] | .RegionName')
127 | select opt in "$@"
128 | do
129 | case $opt in
130 | $opt)
131 | aws_region="$opt"
132 | tput sgr0
133 | echo "$aws_region"
134 | break
135 | ;;
136 | *) echo invalid option;;
137 | esac
138 | done
139 | echo
140 |
141 | #aws key pair
142 | PS3="${GREEN_PS3}Please select which key pair to use:"
143 | echo
144 | set -- $(aws --profile $aws_profile ec2 describe-key-pairs | jq -r '.KeyPairs[] | .KeyName')
145 | select opt in "$@"
146 | do
147 | case $opt in
148 | $opt)
149 | key_name="$opt"
150 | tput sgr0
151 | echo "$key_name"
152 | break
153 | ;;
154 | *) echo invalid option;;
155 | esac
156 | done
157 | echo
158 |
159 | #aws domain name
160 | PS3="${GREEN_PS3}Please select which domain name to use:"
161 | echo
162 | set -- $(aws --profile $aws_profile route53 list-hosted-zones | jq -r '.HostedZones[].Name')
163 | select opt in "$@"
164 | do
165 | case $opt in
166 | $opt)
167 | domain_name="${opt%.}"
168 | tput sgr0
169 | echo "$domain_name"
170 | break
171 | ;;
172 | *) echo invalid option;;
173 | esac
174 | done
175 | echo
176 |
177 | #####################################################################################################
178 | # pritunl vpn
179 | #####################################################################################################
180 | ami_id=$(curl -s https://coreos.com/dist/aws/aws-stable.json | jq '.["'$aws_region'"].hvm')
181 |
182 | #detect external IP
183 | my_ip=$(dig +short myip.opendns.com @resolver1.opendns.com)
184 |
185 | #curl -s https://coreos.com/dist/aws/aws-stable.json | jq '.release_info.version'
186 |
187 |
188 | #aws hosted zone id
189 | hosted_zone_id=$(aws --profile $aws_profile route53 list-hosted-zones | jq -r '.HostedZones[] | select(.Name=="'$domain_name'.") | .Id' | cut -d "/" -f3)
190 |
191 | ##set aws region
192 | sed -i -e 's,aws_region,'"$aws_region"',g' kube-aws/cluster.yaml
193 | sed -i -e 's,aws_region,'"$aws_region"',g' vpc/vpc.yaml
194 | sed -i -e 's,aws_region,'"$aws_region"',g' vpc/deploy
195 | sed -i -e 's,aws_region,'"$aws_region"',g' iam/iam.yaml
196 | sed -i -e 's,aws_region,'"$aws_region"',g' iam/deploy
197 | sed -i -e 's,aws_region,'"$aws_region"',g' addons/fluentd/fluentd-kubernetes-cloudwatch/fluentd.ds.yaml
198 |
199 | ##set key pair
200 | sed -i -e 's,key_name,'"$key_name"',g' kube-aws/cluster.yaml
201 | sed -i -e 's,key_name,'"$key_name"',g' vpc/vpc.yaml
202 |
203 | ##set hosted_zone_id
204 | sed -i -e 's,hosted_zone_id,'"$hosted_zone_id"',g' kube-aws/cluster.yaml
205 | sed -i -e 's,hosted_zone_id,'"$hosted_zone_id"',g' iam/iam.yaml
206 |
207 | ##set domain_name
208 | sed -i -e 's,domain_name,'"$domain_name"',g' kube-aws/cluster.yaml
209 | sed -i -e 's,domain_name,'"$domain_name"',g' addons/external-dns/external-dns.yaml
210 | sed -i -e 's,domain_name,'"$domain_name"',g' vpc/vpc.yaml
211 |
212 | ##set ami_id
213 | sed -i -e 's,ami_id,'"$ami_id"',g' vpc/vpc.yaml
214 |
215 | ##set allowed ip for vpn access SSH and Web ports
216 | sed -i -e 's,my_ip,'"$my_ip"',g' vpc/vpc.yaml
217 |
218 | #clean sed generated files
219 | find . -name "*-e" -exec rm -rf {} \;
220 |
221 | #####################################################################################################
222 | # VPC stack
223 | #####################################################################################################
224 |
225 | #vpc stack name
226 | read -p "Set a name for the CloudFormation VPC stack: " stack_name
227 | echo
228 |
229 | if [[ "$(aws --profile $aws_profile cloudformation describe-stacks --stack-name=$stack_name | jq -r '.Stacks[].StackStatus')" != "CREATE_COMPLETE" ]] > /dev/null 2>&1; then
230 | cd vpc
231 | sed -i -e 's,stack_name,'"$stack_name"',g' deploy
232 | echo -e "${BLUE}Use default CIDR settings?"
233 | tput sgr0
234 | echo
235 | echo -e "${BLUE}VPC CIDR: ${WHITE}10.0.0.0/16"
236 | echo
237 | echo -e "${BLUE}Private Subnet A CIDR: ${WHITE}10.0.1.0/24"
238 | echo -e "${BLUE}Private Subnet B CIDR: ${WHITE}10.0.2.0/24"
239 | echo -e "${BLUE}Private Subnet C CIDR: ${WHITE}10.0.3.0/24"
240 | echo
241 | echo -e "${BLUE}Public Subnet A CIDR: ${WHITE}10.0.101.0/24"
242 | echo -e "${BLUE}Public Subnet B CIDR: ${WHITE}10.0.102.0/24"
243 | echo -e "${BLUE}Public Subnet C CIDR: ${WHITE}10.0.103.0/24"
244 | tput sgr0
245 | echo
246 | read -p "Y/N [N]: " default_cidr
247 | if [[ $default_cidr =~ ^([yY][eE][sS]|[yY])$ ]]; then
248 | echo "Using default CIDR settings"
249 | echo
250 |
251 | #Set default CIDR values
252 | #VPC CIDR
253 | sed -i -e 's,VPC_CIDR,'"$DEFAULT_VPC_CIDR"',g' ./vpc.yaml
254 |
255 | #Private Subnet A CIDR
256 | sed -i -e 's,PRIVATE_SUBNET_A_CIDR,'"$DEFAULT_PRIVATE_SUBNET_A_CIDR"',g' ./vpc.yaml
257 |
258 | #Private Subnet B CIDR
259 | sed -i -e 's,PRIVATE_SUBNET_B_CIDR,'"$DEFAULT_PRIVATE_SUBNET_B_CIDR"',g' ./vpc.yaml
260 |
261 | #Private Subnet C CIDR
262 | sed -i -e 's,PRIVATE_SUBNET_C_CIDR,'"$DEFAULT_PRIVATE_SUBNET_C_CIDR"',g' ./vpc.yaml
263 |
264 | #Public Subnet A CIDR
265 | sed -i -e 's,PUBLIC_SUBNET_A_CIDR,'"$DEFAULT_PUBLIC_SUBNET_A_CIDR"',g' ./vpc.yaml
266 |
267 | #Public Subnet B CIDR
268 | sed -i -e 's,PUBLIC_SUBNET_B_CIDR,'"$DEFAULT_PUBLIC_SUBNET_B_CIDR"',g' ./vpc.yaml
269 |
270 | #Public Subnet C CIDR
271 | sed -i -e 's,PUBLIC_SUBNET_C_CIDR,'"$DEFAULT_PUBLIC_SUBNET_C_CIDR"',g' ./vpc.yaml
272 |
273 | #clean sed generated files
274 | find . -name "*-e" -exec rm -rf {} \;
275 |
276 | else
277 | #VPC CIDR
278 | echo
279 | read -p "VPC CIDR [$DEFAULT_VPC_CIDR]:" VPC_CIDR
280 | VPC_CIDR=${VPC_CIDR:-$DEFAULT_VPC_CIDR}
281 |
282 | #PRIVATE SUBNETS
283 | #private Subnet A CIDR
284 | echo
285 | read -p "Private Subnet A CIDR [$DEFAULT_PRIVATE_SUBNET_A_CIDR]:" PRIVATE_SUBNET_A_CIDR
286 | PRIVATE_SUBNET_A_CIDR=${PRIVATE_SUBNET_A_CIDR:-$DEFAULT_PRIVATE_SUBNET_A_CIDR}
287 |
288 | #private Subnet B CIDR
289 | echo
290 | read -p "Private Subnet B CIDR [$DEFAULT_PRIVATE_SUBNET_B_CIDR]:" PRIVATE_SUBNET_B_CIDR
291 | PRIVATE_SUBNET_B_CIDR=${PRIVATE_SUBNET_B_CIDR:-$DEFAULT_PRIVATE_SUBNET_B_CIDR}
292 |
293 | #private Subnet C CIDR
294 | echo
295 | read -p "Private Subnet C CIDR [$DEFAULT_PRIVATE_SUBNET_C_CIDR]:" PRIVATE_SUBNET_C_CIDR
296 | PRIVATE_SUBNET_C_CIDR=${PRIVATE_SUBNET_C_CIDR:-$DEFAULT_PRIVATE_SUBNET_C_CIDR}
297 |
298 | #PUBLIC SUBNETS
299 | #public Subnet A CIDR
300 | echo
301 | read -p "Public Subnet A CIDR [$DEFAULT_PUBLIC_SUBNET_A_CIDR]:" PUBLIC_SUBNET_A_CIDR
302 | PUBLIC_SUBNET_A_CIDR=${PUBLIC_SUBNET_A_CIDR:-$DEFAULT_PUBLIC_SUBNET_A_CIDR}
303 |
304 | #public Subnet B CIDR
305 | echo
306 | read -p "Public Subnet B CIDR [$DEFAULT_PUBLIC_SUBNET_B_CIDR]:" PUBLIC_SUBNET_B_CIDR
307 | PUBLIC_SUBNET_B_CIDR=${PUBLIC_SUBNET_B_CIDR:-$DEFAULT_PUBLIC_SUBNET_B_CIDR}
308 |
309 | #public Subnet C CIDR
310 | echo
311 | read -p "Public Subnet C CIDR [$DEFAULT_PUBLIC_SUBNET_C_CIDR]:" PUBLIC_SUBNET_C_CIDR
312 | PUBLIC_SUBNET_C_CIDR=${PUBLIC_SUBNET_C_CIDR:-$DEFAULT_PUBLIC_SUBNET_C_CIDR}
313 |
314 | #Replace default CIDR values
315 | #VPC CIDR
316 | sed -i -e 's,VPC_CIDR,'"$VPC_CIDR"',g' ./vpc.yaml
317 |
318 | #Private Subnet A CIDR
319 | sed -i -e 's,PRIVATE_SUBNET_A_CIDR,'"$PRIVATE_SUBNET_A_CIDR"',g' ./vpc.yaml
320 |
321 | #Private Subnet B CIDR
322 | sed -i -e 's,PRIVATE_SUBNET_B_CIDR,'"$PRIVATE_SUBNET_B_CIDR"',g' ./vpc.yaml
323 |
324 | #Private Subnet C CIDR
325 | sed -i -e 's,PRIVATE_SUBNET_C_CIDR,'"$PRIVATE_SUBNET_C_CIDR"',g' ./vpc.yaml
326 |
327 | #Public Subnet A CIDR
328 | sed -i -e 's,PUBLIC_SUBNET_A_CIDR,'"$PUBLIC_SUBNET_A_CIDR"',g' ./vpc.yaml
329 |
330 | #Public Subnet B CIDR
331 | sed -i -e 's,PUBLIC_SUBNET_B_CIDR,'"$PUBLIC_SUBNET_B_CIDR"',g' ./vpc.yaml
332 |
333 | #Public Subnet C CIDR
334 | sed -i -e 's,PUBLIC_SUBNET_C_CIDR,'"$PUBLIC_SUBNET_C_CIDR"',g' ./vpc.yaml
335 |
336 | #clean sed generated files
337 | find . -name "*-e" -exec rm -rf {} \;
338 |
339 | fi
340 |
341 |
342 | #Custom VPC settings
343 | echo
344 | echo -e "${BLUE}You can make additional changes in VPC config. Edit ${ORANGE}vpc/vpc.yaml ${BLUE}and press ENTER"
345 | tput sgr0
346 | read -p " [ENTER]: " vpc_yaml
347 | echo
348 |
349 | echo -e "${BLUE}Creating CloudFormation VPC stack"
350 | tput sgr0
351 | echo
352 |
353 | ./deploy
354 |
355 | echo -e "${BLUE}Wait until the CloudFormation stack is created"
356 | tput sgr0
357 | echo
358 |
359 | while [[ "$(aws --profile $aws_profile cloudformation describe-stacks --stack-name=$stack_name | jq -r '.Stacks[].StackStatus')" != "CREATE_COMPLETE" ]]
360 | do sleep 2; printf ".";
361 | done
362 | echo
363 | echo
364 |
365 | echo -e "${BLUE}CloudFormation VPC stack successfully created"
366 | tput sgr0
367 | echo
368 | cd ../
369 |
370 | else
371 | echo -e "${BLUE}Stack already exists, getting the outputs"
372 | echo
373 | fi
374 |
375 | echo -e "${BLUE}Getting the CloudFormation stack outputs"
376 | tput sgr0
377 | echo
378 |
379 | #vpn DNS record
380 | VPN_DNS_RECCORD=$(aws --profile $aws_profile cloudformation describe-stacks --stack-name=$stack_name | jq -r '.Stacks[].Outputs[] | select(.OutputKey=="VpnDNSReccord") | .OutputValue')
381 | echo -e "${RED}VPN_DNS_RECCORD: ${GREEN}${VPN_DNS_RECCORD}"
382 |
383 | #vpn IP address
384 | VPN_IP_ADDRESS=$(aws --profile $aws_profile cloudformation describe-stacks --stack-name=$stack_name | jq -r '.Stacks[].Outputs[] | select(.OutputKey=="VpnIpAddress") | .OutputValue')
385 | echo -e "${RED}VPN_IP_ADDRESS: ${GREEN}${VPN_IP_ADDRESS}"
386 |
387 | #kms key
388 | KMS_KEY_ARN=$(aws --profile $aws_profile cloudformation describe-stacks --stack-name=$stack_name | jq -r '.Stacks[].Outputs[] | select(.OutputKey=="KMSKeyArn") | .OutputValue')
389 | echo -e "${RED}KMS_KEY_ARN: ${GREEN}${KMS_KEY_ARN}"
390 |
391 | #vpc id
392 | VPC_ID=$(aws --profile $aws_profile cloudformation describe-stacks --stack-name=$stack_name | jq -r '.Stacks[].Outputs[] | select(.OutputKey=="VpcId") | .OutputValue')
393 | echo -e "${RED}VPC_ID: ${GREEN}${VPC_ID}"
394 |
395 | #vpc cidr
396 | VPC_CIDR=$(aws --profile $aws_profile cloudformation describe-stacks --stack-name=$stack_name | jq -r '.Stacks[].Outputs[] | select(.OutputKey=="CidrBlock") | .OutputValue')
397 | echo -e "${RED}VPC_CIDR: ${GREEN}${VPC_CIDR}"
398 |
399 | #route table id
400 | ROUTE_TABLE_ID=$(aws --profile $aws_profile cloudformation describe-stacks --stack-name=$stack_name | jq -r '.Stacks[].Outputs[] | select(.OutputKey=="PrivateRouteTableId") | .OutputValue')
401 | echo -e "${RED}ROUTE_TABLE_ID: ${GREEN}${ROUTE_TABLE_ID}"
402 |
403 | #private subnet a
404 | PRIVATE_SUBNET_A=$(aws --profile $aws_profile cloudformation describe-stacks --stack-name=$stack_name | jq -r '.Stacks[].Outputs[] | select(.OutputKey=="PrivateSubnetAId") | .OutputValue')
405 | echo -e "${RED}PRIVATE_SUBNET_A: ${GREEN}${PRIVATE_SUBNET_A}"
406 |
407 | #private subnet b
408 | PRIVATE_SUBNET_B=$(aws --profile $aws_profile cloudformation describe-stacks --stack-name=$stack_name | jq -r '.Stacks[].Outputs[] | select(.OutputKey=="PrivateSubnetBId") | .OutputValue')
409 | echo -e "${RED}PRIVATE_SUBNET_B: ${GREEN}${PRIVATE_SUBNET_B}"
410 |
411 | #private subnet c
412 | PRIVATE_SUBNET_C=$(aws --profile $aws_profile cloudformation describe-stacks --stack-name=$stack_name | jq -r '.Stacks[].Outputs[] | select(.OutputKey=="PrivateSubnetCId") | .OutputValue')
413 | echo -e "${RED}PRIVATE_SUBNET_C: ${GREEN}${PRIVATE_SUBNET_C}"
414 | tput sgr0
415 |
416 | #aws account id
417 | AWS_ACCOUNT_ID=$(aws --profile $aws_profile cloudformation describe-stacks --stack-name=$stack_name | jq -r '.Stacks[].Outputs[] | select(.OutputKey=="KMSKeyArn") | .OutputValue' | cut -d ':' -f5 )
418 | echo -e "${RED}AWS_ACCOUNT_ID: ${GREEN}${AWS_ACCOUNT_ID}"
419 | tput sgr0
420 |
421 | echo
422 | echo -e "${GREEN}Please go to ${RED}https://k8svpn.$domain_name ${GREEN}and configure the VPN server using user: ${RED}pritunl ${GREEN}and password: ${RED}pritunl."
423 | echo
424 | echo -e "${GREEN}If you don't want to configure the Security Group manually, please configure the server to use the port ${RED}12777. ${GREEN}An option to set the port will be added soon."
425 | tput sgr0
426 | echo
427 |
428 | #replace the values from the CloudFormation outputs
429 | ##kms key arn
430 | sed -i -e 's,kms_key_arn,'"$KMS_KEY_ARN"',g' kube-aws/cluster.yaml
431 |
432 | ## vpc id
433 | sed -i -e 's,vpc_id,'"$VPC_ID"',g' kube-aws/cluster.yaml
434 |
435 | ## vpc CIDR
436 | sed -i -e 's,vpc_cidr,'"$VPC_CIDR"',g' kube-aws/cluster.yaml
437 |
438 | ## route table id
439 | sed -i -e 's,route_table_id,'"$ROUTE_TABLE_ID"',g' kube-aws/cluster.yaml
440 |
441 | ## private subnet a
442 | sed -i -e 's,private_subnet_a,'"$PRIVATE_SUBNET_A"',g' kube-aws/cluster.yaml
443 |
444 | ## private subnet b
445 | sed -i -e 's,private_subnet_b,'"$PRIVATE_SUBNET_B"',g' kube-aws/cluster.yaml
446 |
447 | ## private subnet c
448 | sed -i -e 's,private_subnet_c,'"$PRIVATE_SUBNET_C"',g' kube-aws/cluster.yaml
449 |
450 | ## aws account id
451 | sed -i -e 's,aws_account_id,'"$AWS_ACCOUNT_ID"',g' iam/iam.yaml
452 |
453 | #clean sed generated files
454 | find . -name "*-e" -exec rm -rf {} \;
455 |
456 | #####################################################################################################
457 | # kube-aws setup
458 | #####################################################################################################
459 |
460 | cd kube-aws
461 |
462 | #select the S3 bucket to use for deployment
463 | PS3="${GREEN_PS3}Please select which S3 bucket to use:"
464 | echo
465 | set -- $(aws --profile $aws_profile s3 ls | cut -d " " -f3)
466 | select opt in "$@"
467 | do
468 | case $opt in
469 | $opt)
470 | bucket="${opt%.}"
471 | tput sgr0
472 | echo "$bucket"
473 | break
474 | ;;
475 | *) echo invalid option;;
476 | esac
477 | done
478 | echo
479 |
480 | #cluster.yaml config
481 | echo
482 | echo -e "${BLUE}Please make the desired changes in ${ORANGE}kube-aws/cluster.yaml ${BLUE}and press ENTER"
483 | tput sgr0
484 | read -p " [ENTER]: " cluster_yaml
485 | echo
486 |
487 | #generate credentials
488 | echo -e "${ORANGE}Generate credentials"
489 | tput sgr0
490 | kube-aws render credentials --generate-ca
491 | echo
492 |
493 | #render stack
494 | echo -e "${ORANGE}Render stack"
495 | tput sgr0
496 | kube-aws render stack
497 | echo
498 |
499 | #validate stack
500 | echo -e "${ORANGE}Validate stack"
501 | tput sgr0
502 | AWS_PROFILE=$aws_profile kube-aws validate --s3-uri s3://$bucket
503 | echo
504 |
505 | #export stack
506 | echo -e "${ORANGE}Export stack"
507 | tput sgr0
508 | AWS_PROFILE=$aws_profile kube-aws up --s3-uri s3://$bucket --export
509 | echo
510 |
511 | #deploy
512 | echo -e "${ORANGE}Deploy stack"
513 | tput sgr0
514 | AWS_PROFILE=$aws_profile kube-aws up --s3-uri s3://$bucket
515 | echo
516 |
517 | echo -e "${GREEN}If you didn't configure the VPN server yet, please go to ${RED}https://k8svpn.$domain_name ${GREEN}and set it up using the user: ${RED}pritunl ${GREEN}and the password: ${RED}pritunl"
518 | echo
519 | echo -e "${GREEN}If you don't want to configure the Security Group manually, please configure the server to use the port ${RED}12777. ${GREEN}An option to set the port will be added soon."
520 | tput sgr0
521 | echo
522 |
523 | echo -e "${GREEN}Done"
524 | tput sgr0
525 |
--------------------------------------------------------------------------------
/examples/rbac/pod-reader-role.yaml:
--------------------------------------------------------------------------------
1 | kind: Role
2 | apiVersion: rbac.authorization.k8s.io/v1beta1
3 | metadata:
4 | namespace: kube-system
5 | name: pod-reader
6 | rules:
7 | - apiGroups: [""] # "" indicates the core API group
8 | resources: ["pods", "secrets"]
9 | verbs: ["get", "watch", "list"]
10 |
--------------------------------------------------------------------------------
/src/addons/deploy:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | RED='\033[0;31m'
4 | GREEN='\033[0;32m'
5 | ORANGE='\033[0;33m'
6 | BLUE='\033[0;34m'
7 | GREEN_PS3=$'\e[0;32m'
8 |
9 | #########################################################################################
10 | # external-dns
11 | #########################################################################################
12 | echo
13 | echo -e "${BLUE}Deploy external-dns?"
14 | tput sgr0
15 | read -p "Y/N [N]: " external_dns
16 |
17 | if [[ $external_dns =~ ^([yY][eE][sS]|[yY])$ ]]; then
18 |
19 | kubectl apply -f external-dns/external-dns.yaml
20 | else
21 | echo -e "Skipping"
22 | fi
23 |
24 |
25 | #########################################################################################
26 | # ingress
27 | #########################################################################################
28 | echo
29 | echo -e "${BLUE}Deploy nginx ingress controller?"
30 | tput sgr0
31 | read -p "Y/N [N]: " nginx_ingress
32 |
33 | if [[ $nginx_ingress =~ ^([yY][eE][sS]|[yY])$ ]]; then
34 |
35 | kubectl apply -f ingress/nginx/rbac.yaml
36 | kubectl apply -f ingress/nginx/nginx.lb.yaml
37 |
38 | else
39 | echo -e "Skipping"
40 | fi
41 |
42 | #########################################################################################
43 | # kube-lego
44 | #########################################################################################
45 | echo
46 | echo -e "${BLUE}Deploy kube-lego?"
47 | tput sgr0
48 | read -p "Y/N [N]: " kube_lego
49 |
50 | if [[ $kube_lego =~ ^([yY][eE][sS]|[yY])$ ]]; then
51 |
52 | kubectl apply -f kube-lego/rbac.yaml
53 | kubectl apply -f kube-lego/kube-lego.yaml
54 |
55 | else
56 | echo -e "Skipping"
57 | fi
58 |
59 | echo -e "${GREEN}Done"
60 | tput sgr0
61 |
62 | #########################################################################################
63 | # fluentd-kubernetes-cloudwatch
64 | #########################################################################################
65 | echo
66 | echo -e "${BLUE}Deploy fluentd-kubernetes-cloudwatch?"
67 | tput sgr0
68 | read -p "Y/N [N]: " fluentd_cloudwatch
69 |
70 | if [[ $fluentd_cloudwatch =~ ^([yY][eE][sS]|[yY])$ ]]; then
71 |
72 | kubectl apply -f fluentd/fluentd-kubernetes-cloudwatch/log.ns.yaml
73 | kubectl apply -f fluentd/fluentd-kubernetes-cloudwatch/fluentd.rbac.yaml
74 | #Custom Fluentd settings
75 | echo
76 | echo -e "${BLUE}Make sure you updated the configmap for fluentd with your custom configuration ${ORANGE}addons/fluentd/fluentd-kubernetes-cloudwatch/fluentd.cm.yaml ${BLUE}Press ENTER to continue"
77 | tput sgr0
78 | read -p " [ENTER]: " fluent_config
79 | echo
80 | kubectl apply -f fluentd/fluentd-kubernetes-cloudwatch/fluentd.cm.yaml
81 | kubectl apply -f fluentd/fluentd-kubernetes-cloudwatch/fluentd.ds.yaml
82 |
83 | else
84 | echo -e "Skipping"
85 | fi
86 |
87 | echo -e "${GREEN}Done"
88 | tput sgr0
89 |
--------------------------------------------------------------------------------
/src/addons/external-dns/external-dns.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: extensions/v1beta1
2 | kind: Deployment
3 | metadata:
4 | name: external-dns
5 | namespace: kube-system
6 | spec:
7 | strategy:
8 | type: Recreate
9 | template:
10 | metadata:
11 | labels:
12 | app: external-dns
13 | annotations:
14 | iam.amazonaws.com/role: k8s-route53-external-dns
15 | spec:
16 | containers:
17 | - name: external-dns
18 | image: registry.opensource.zalan.do/teapot/external-dns:v0.4.7
19 | args:
20 | - --source=service
21 | - --source=ingress
22 | - --domain-filter=domain_name. # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
23 | - --provider=aws
24 | - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
25 | - --registry=txt
26 | - --txt-owner-id=kube-aws # set an owner id
27 |
--------------------------------------------------------------------------------
/src/addons/fluentd/docker-image/v0.12/alpine-cloudwatch/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM fluent/fluentd:v0.12.33
2 | USER root
3 | WORKDIR /home/fluent
4 | ENV PATH /home/fluent/.gem/ruby/2.3.0/bin:$PATH
5 |
6 | RUN set -ex \
7 | && apk add --no-cache --virtual .build-deps \
8 | build-base \
9 | ruby-dev \
10 | libffi-dev \
11 | && echo 'gem: --no-document' >> /etc/gemrc \
12 | && gem install fluent-plugin-secure-forward \
13 | && gem install fluent-plugin-record-reformer \
14 | && gem install aws-sdk-core -v 2.10.50 \
15 | && gem install fluent-plugin-cloudwatch-logs -v 0.4.2 \
16 | && gem install fluent-plugin-kubernetes_metadata_filter \
17 | && apk del .build-deps \
18 | && gem sources --clear-all \
19 | && rm -rf /tmp/* /var/tmp/* /usr/lib/ruby/gems/*/cache/*.gem
20 |
21 | # Copy configuration files
22 | COPY ./conf/fluent.conf /fluentd/etc/
23 | COPY ./conf/kubernetes.conf /fluentd/etc/
24 |
25 | # Copy plugins
26 | COPY plugins /fluentd/plugins/
27 |
28 | # Environment variables
29 | ENV FLUENTD_OPT=""
30 | ENV FLUENTD_CONF="fluent.conf"
31 |
32 | # jemalloc is a memory optimization only available for td-agent
33 | # td-agent is provided and QA'ed by Treasure Data as an rpm/deb/... package
34 | # -> td-agent (stable) vs fluentd (edge)
35 | #ENV LD_PRELOAD="/usr/lib/libjemalloc.so.2"
36 |
37 | # Run Fluentd
38 | CMD exec fluentd -c /fluentd/etc/$FLUENTD_CONF -p /fluentd/plugins $FLUENTD_OPT
39 |
--------------------------------------------------------------------------------
/src/addons/fluentd/docker-image/v0.12/alpine-cloudwatch/conf/fluent.conf:
--------------------------------------------------------------------------------
1 | @include kubernetes.conf
2 |
3 | <match **>
4 |   type cloudwatch_logs
5 |   log_group_name "#{ENV['LOG_GROUP_NAME']}"
6 |   auto_create_stream true
7 |   use_tag_as_stream true
8 | </match>
9 |
--------------------------------------------------------------------------------
/src/addons/fluentd/docker-image/v0.12/alpine-cloudwatch/conf/kubernetes.conf:
--------------------------------------------------------------------------------
1 | <match fluent.**>
2 |   type null
3 | </match>
4 |
5 | <source>
6 |   type tail
7 |   path /var/log/containers/*.log
8 |   pos_file /var/log/fluentd-containers.log.pos
9 |   time_format %Y-%m-%dT%H:%M:%S.%NZ
10 |   tag kubernetes.*
11 |   format json
12 |   read_from_head true
13 | </source>
14 |
15 | <source>
16 |   type tail
17 |   format /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
18 |   time_format %Y-%m-%d %H:%M:%S
19 |   path /var/log/salt/minion
20 |   pos_file /var/log/fluentd-salt.pos
21 |   tag salt
22 | </source>
23 |
24 | <source>
25 |   type tail
26 |   format syslog
27 |   path /var/log/startupscript.log
28 |   pos_file /var/log/fluentd-startupscript.log.pos
29 |   tag startupscript
30 | </source>
31 |
32 | <source>
33 |   type tail
34 |   format /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=(?<status_code>\d+))?/
35 |   path /var/log/docker.log
36 |   pos_file /var/log/fluentd-docker.log.pos
37 |   tag docker
38 | </source>
39 |
40 | <source>
41 |   type tail
42 |   format none
43 |   path /var/log/etcd.log
44 |   pos_file /var/log/fluentd-etcd.log.pos
45 |   tag etcd
46 | </source>
47 |
48 | <source>
49 |   type tail
50 |   format kubernetes
51 |   multiline_flush_interval 5s
52 |   path /var/log/kubelet.log
53 |   pos_file /var/log/fluentd-kubelet.log.pos
54 |   tag kubelet
55 | </source>
56 |
57 | <source>
58 |   type tail
59 |   format kubernetes
60 |   multiline_flush_interval 5s
61 |   path /var/log/kube-proxy.log
62 |   pos_file /var/log/fluentd-kube-proxy.log.pos
63 |   tag kube-proxy
64 | </source>
65 |
66 | <source>
67 |   type tail
68 |   format kubernetes
69 |   multiline_flush_interval 5s
70 |   path /var/log/kube-apiserver.log
71 |   pos_file /var/log/fluentd-kube-apiserver.log.pos
72 |   tag kube-apiserver
73 | </source>
74 |
75 | <source>
76 |   type tail
77 |   format kubernetes
78 |   multiline_flush_interval 5s
79 |   path /var/log/kube-controller-manager.log
80 |   pos_file /var/log/fluentd-kube-controller-manager.log.pos
81 |   tag kube-controller-manager
82 | </source>
83 |
84 | <source>
85 |   type tail
86 |   format kubernetes
87 |   multiline_flush_interval 5s
88 |   path /var/log/kube-scheduler.log
89 |   pos_file /var/log/fluentd-kube-scheduler.log.pos
90 |   tag kube-scheduler
91 | </source>
92 |
93 | <source>
94 |   type tail
95 |   format kubernetes
96 |   multiline_flush_interval 5s
97 |   path /var/log/rescheduler.log
98 |   pos_file /var/log/fluentd-rescheduler.log.pos
99 |   tag rescheduler
100 | </source>
101 |
102 | <source>
103 |   type tail
104 |   format kubernetes
105 |   multiline_flush_interval 5s
106 |   path /var/log/glbc.log
107 |   pos_file /var/log/fluentd-glbc.log.pos
108 |   tag glbc
109 | </source>
110 |
111 | <source>
112 |   type tail
113 |   format kubernetes
114 |   multiline_flush_interval 5s
115 |   path /var/log/cluster-autoscaler.log
116 |   pos_file /var/log/fluentd-cluster-autoscaler.log.pos
117 |   tag cluster-autoscaler
118 | </source>
119 |
120 | <filter kubernetes.**>
121 |   type kubernetes_metadata
122 | </filter>
123 |
--------------------------------------------------------------------------------
/src/addons/fluentd/docker-image/v0.12/alpine-cloudwatch/conf/systemd.conf:
--------------------------------------------------------------------------------
1 |
2 | # Logs from systemd-journal for interesting services.
3 | <source>
4 |   @type systemd
5 |   filters [{ "_SYSTEMD_UNIT": "kubelet.service" }]
6 |   pos_file /var/log/fluentd-journald-kubelet.pos
7 |   read_from_head true
8 |   tag kubelet
9 | </source>
10 |
--------------------------------------------------------------------------------
/src/addons/fluentd/docker-image/v0.12/alpine-cloudwatch/plugins/parser_kubernetes.rb:
--------------------------------------------------------------------------------
1 | #
2 | # Fluentd
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | #
16 |
17 | # The following Fluentd parser plugin aims to simplify the parsing of multiline
18 | # logs found on Kubernetes nodes. Since many log files share the same format, and
19 | # in order to simplify the configuration, this plugin provides a 'kubernetes' format
20 | # parser (built on top of MultilineParser).
21 | #
22 | # When tailing files, this 'kubernetes' format should be applied to the following
23 | # log file sources:
24 | #
25 | # - /var/log/kubelet.log
26 | # - /var/log/kube-proxy.log
27 | # - /var/log/kube-apiserver.log
28 | # - /var/log/kube-controller-manager.log
29 | # - /var/log/kube-scheduler.log
30 | # - /var/log/rescheduler.log
31 | # - /var/log/glbc.log
32 | # - /var/log/cluster-autoscaler.log
33 | #
34 | # Usage:
35 | #
36 | # ---- fluentd.conf ----
37 | #
38 | #  <source>
39 | #    type tail
40 | #    format kubernetes
41 | #    path ./kubelet.log
42 | #    read_from_head yes
43 | #    tag kubelet
44 | #  </source>
45 | #
46 | # ---- EOF ---
47 |
48 | require 'fluent/parser'
49 |
50 | module Fluent
51 | class KubernetesParser < Fluent::TextParser::MultilineParser
52 | Fluent::Plugin.register_parser("kubernetes", self)
53 |
54 | CONF_FORMAT_FIRSTLINE = %q{/^\w\d{4}/}
55 | CONF_FORMAT1 = %q{/^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/}