├── .gitignore
├── README.md
├── blueprint.yaml
├── cloud-init
│   └── cloud-init.yml
├── dev-requirements.txt
├── inputs
│   ├── heal_inputs.yaml
│   ├── mist_ec2.yaml
│   ├── mist_opestack.yaml
│   ├── new_worker.yaml
│   └── remove_worker.yaml
├── scripts
│   ├── deploy-node.sh
│   ├── drain-node.sh
│   └── reset-node.sh
├── tasks
│   ├── clone.py
│   ├── configure.py
│   ├── create.py
│   └── stop.py
└── workflows
    ├── scale_down.py
    └── scale_up.py
/.gitignore:
--------------------------------------------------------------------------------
1 | *.pyc
2 | *.sw[op]
3 | .cloudify/
4 | lib/
5 | bin/
6 | include/
7 | local/
8 | local-storage/
9 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Mist-Cloudify Kubernetes Cluster Example
2 |
3 | This repository contains a blueprint for installing a kubernetes cluster through mist.io.
4 | The aforementioned kubernetes cluster consists of:
5 |
6 | - A kubernetes master
7 | - A kubernetes worker
8 |
9 | Before you begin, it's recommended that you familiarize yourself with the
10 | [Cloudify Terminology](http://getcloudify.org/guide/3.1/reference-terminology.html).
11 | You will also need a [mist.io](https://mist.io/) account.
12 |
13 | This has been successfully tested on CoreOS and Ubuntu 14.04 images under Python 2.7.
14 |
15 | **Note: Documentation about the blueprint's content is located inside the blueprint file itself.
16 | Presented here are only instructions on how to run the blueprint using the Cloudify CLI & Mist.io plugin.**
17 |
18 | ## Step 1: Install the software
19 |
20 | ```
21 | git clone https://github.com/mistio/kubernetes-blueprint
22 | cd kubernetes-blueprint
23 | virtualenv . # create virtualenv
24 | source bin/activate
25 | ./bin/pip install -r dev-requirements.txt # install dependencies
26 | ./bin/pip install cloudify https://github.com/mistio/mist.client/archive/master.zip
27 | git clone https://github.com/mistio/cloudify-mist-plugin
28 | cd cloudify-mist-plugin
29 | python setup.py develop
30 | cd ..
31 | ```
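
To verify that the CLI landed inside the virtualenv (an optional sanity check; `cfy` is provided by the cloudify package installed above):

```
./bin/cfy --version
```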
32 |
33 | ## Step 2: Initialize the environment
34 |
35 | First of all, you need to add a cloud to your mist.io account. Login to the [mist.io dashboard](https://mist.io) and click on
36 | the "ADD CLOUD" button. Once you have completed the aforementioned step, retrieve your cloud's ID by clicking on the tile
37 | containing your cloud's name on the mist.io home screen. The ID will be used later on as part of the required blueprint inputs.
38 |
39 | You will also need an SSH key. Visit the Keys tab and generate/upload a key. You can use separate keys for each machine.
40 | Once again, note the name/ID of the newly created SSH key, as it will be used by our inputs file.
41 |
42 | Then, visit your [account page](https://mist.io/account) and create a token under the API TOKENS tab.
43 |
44 | Now, check the inputs files (in .yaml format) under the `inputs` directory and use them as a guide to fill in
45 | your resources' IDs accordingly.
46 |
47 | Here's a sample:
48 | ```
49 | mist_token: 544be89e3016f2fb0ba433802ee432da2e672cd7fe8e4ccc6926e1a54e835eec
50 | mist_key_master: c4b6efa8d0a74f989d2d8a0fae7a04d1
51 | mist_key_worker: c4b6efa8d0a74f989d2d8a0fae7a04d1
52 | mist_cloud_1: 1b2edcb11e524e2aa5fdd89cf1e24278
53 | mist_image_1: ami-d0e21bb1
54 | mist_size_1: m1.medium
55 | mist_location_1: '0'
56 | coreos: true
57 | worker_name: KubernetesWorker
58 | master_name: KubernetesMaster
59 | ```
60 |
61 | Afterwards, run:
62 |
63 | `./bin/cfy local init -p blueprint.yaml -i inputs/<your-inputs-file>.yaml`
64 |
65 | This command will initialize your working directory with the given blueprint.
66 |
67 | The output would be something like this:
68 | ```
69 | (kubernetes-blueprint)user@user:~/kubernetes-blueprint$ ./bin/cfy local init -p blueprint.yaml -i inputs/mist_ec2.yaml
70 | Processing Inputs Source: inputs/mist_ec2.yaml
71 | Initiated blueprint.yaml
72 | If you make changes to the blueprint, run 'cfy local init -p mist-blueprint.yaml' again to apply them
73 | ```
74 |
75 | Now, you can run any of the workflows defined by your blueprint.
76 |
77 | ## Step 3: Install your Kubernetes cluster
78 |
79 | You are now ready to run the `install` workflow:
80 |
81 | `./bin/cfy local execute -w install`
82 |
83 | This command will deploy a kubernetes master and a kubernetes worker on the specified cloud via mist.io.
84 |
85 | The output should be something like:
86 | ```
87 | (kubernetes-blueprint)user@user:~/kubernetes-blueprint$ ./bin/cfy local execute -w install
88 | 2016-05-08 16:43:48 CFY Starting 'install' workflow execution
89 | 2016-05-08 16:43:48 CFY [key_13e52] Creating node
90 | 2016-05-08 16:43:48 CFY [master_677f6] Creating node
91 | 2016-05-08 16:43:48 CFY [master_677f6.create] Sending task 'plugin.kubernetes.create'
92 | ...
93 | 2016-05-08 16:52:43 CFY [worker_7a12b.start] Task succeeded 'plugin.kubernetes.start'
94 | 2016-05-08 16:52:44 CFY 'install' workflow execution succeeded
95 |
96 | ```
97 |
98 | The installation takes a while (approximately 10 minutes) to complete. At the end, you will have a kubernetes cluster with two nodes.
99 |
100 | As soon as the installation has successfully completed, you should see your newly created VMs on the
101 | [mist.io machines page](https://mist.io/#/machines).
102 |
103 | At this point, you may run the command `./bin/cfy local outputs` in order to retrieve the blueprint's outputs. These consist of a
104 | dashboard URL (alongside the required credentials), which you may visit in order to verify the deployment of your kubernetes cluster
105 | and further explore it, as well as `kubectl` commands you may run directly in your shell.
106 |
107 | In case you do not have `kubectl` installed, simply run:
108 | `curl -O https://storage.googleapis.com/kubernetes-release/release/v1.1.8/bin/linux/amd64/kubectl && chmod +x kubectl`.
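
Once `kubectl` is in place, a quick sanity check might look like the following (a minimal sketch; the exact context-setup commands for your cluster are the ones returned by `./bin/cfy local outputs`):

```
kubectl cluster-info
kubectl get nodes
```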
109 |
110 | ## Step 4: Scale your Kubernetes cluster
111 |
112 | To scale the cluster up, first edit the `inputs/new_worker.yaml` file with the proper inputs.
113 | Edit the `delta` parameter to specify the number of machines to be added to the cluster.
114 | A positive number denotes an increase in kubernetes workers, while a negative number denotes a decrease.
115 | As soon as you are done editing the inputs file, run:
116 |
117 | `./bin/cfy local execute -w scale_cluster -p inputs/new_worker.yaml`
118 |
119 | A sample output would be:
120 |
121 | ```
122 | (kubernetes-blueprint)user@user:~/kubernetes-blueprint$ ./bin/cfy local execute -w scale_cluster -p inputs/new_worker.yaml
123 | Processing Inputs Source: inputs/new_worker.yaml
124 | 2016-05-08 17:15:25 CFY Starting 'scale_cluster' workflow execution
125 | ...
126 | 2016-05-08 17:18:33 LOG INFO:
127 | 2016-05-08 17:18:33 LOG INFO:
128 | 2016-05-08 17:18:33 LOG INFO: Kubernetes worker 'NewKubernetesWorker' installation script succeeded
129 | 2016-05-08 17:18:33 LOG INFO: Upscaling kubernetes cluster succeeded
130 | 2016-05-08 17:18:33 CFY 'scale_cluster' workflow execution succeeded
131 | ```
132 |
133 | You may verify that the nodes were created and successfully added to the cluster by either running the `kubectl`
134 | command or visiting the kubernetes dashboard (both included in the blueprint's outputs section).
135 |
136 | To scale the cluster down, edit the `inputs/remove_worker.yaml` file and set the `delta` parameter to the number of
137 | machines to be removed (destroyed) from the cluster. Then, run:
138 |
139 | `./bin/cfy local execute -w scale_cluster -p inputs/remove_worker.yaml`
140 |
141 | ## Step 5: Uninstall the Kubernetes cluster
142 |
143 | To uninstall the kubernetes cluster and destroy all the machines, run the `uninstall` workflow:
144 |
145 | `./bin/cfy local execute -w uninstall`
146 |
--------------------------------------------------------------------------------
/blueprint.yaml:
--------------------------------------------------------------------------------
1 | # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
2 | # #
3 | # Blueprint for deploying a Kubernetes cluster through Mist.io #
4 | # #
5 | # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
6 |
7 |
8 | tosca_definitions_version: cloudify_dsl_1_2
9 |
10 |
11 | # Imports section. Imports the cloudify-mist-plugin.
12 |
13 | imports:
14 | - http://www.getcloudify.org/spec/cloudify/3.3m5/types.yaml
15 | - http://raw.githubusercontent.com/mistio/cloudify-mist-plugin/master/plugin.yaml
16 |
17 |
18 | # Inputs section.
19 |
20 | inputs:
21 | mist_uri:
22 | description: >
23 | The Mist.io URL. Points to the Mist.io service that will handle the
24 | workflows' execution. This input is provided by the application itself,
25 | so no user interaction is necessary. Defaults to the Mist.io SaaS.
26 | type: string
27 | default: 'https://mist.io'
28 | mist_token:
29 | description: >
30 | An API Token generated by Mist.io in order to be used with every request
31 | to authenticate to the Mist.io API. This input is also auto-generated by
32 | Mist.io.
33 | type: string
34 | mist_machine_master:
35 | description: >
36 | The spec of the machine to be used as the kubernetes master node. This
37 | input has to comply with: `cloudify.datatypes.mist.MachineParams`. To
38 |       use an existing machine, only the `cloud_id` and `machine_id` params
39 |       should be provided. The `machine_id` key indicates the use of an
40 | existing resource. If the `machine_id` is left blank, then the rest of
41 | the inputs have to be provided according to each cloud provider's spec
42 | and required parameters. Note that `cloud_id` is always required.
43 | mist_machine_worker:
44 | description: >
45 | The spec of the machine to be used as a kubernetes node. The input has
46 | to also comply with `cloudify.datatypes.mist.MachineParams`. The same
47 |       rules as in the case of `mist_machine_master` apply here, too.
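  # For reference, a hypothetical value for either machine input above. The
  # keys mirror those consumed by workflows/scale_up.py and the sample files
  # under inputs/; all IDs below are placeholders:
  #
  #   cloud_id: 1b2edcb11e524e2aa5fdd89cf1e24278
  #   image_id: ami-d0e21bb1
  #   size_id: m1.medium
  #   location_id: '0'
  #   key_id: MyKey
  #   machine_id: ''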
48 | auth_user:
49 | description: >
50 | The username used for connecting to the kubernetes cluster and accessing
51 | its dashboard. Defaults to 'admin'.
52 | type: string
53 | default: 'admin'
54 | auth_pass:
55 | description: >
56 | The password used for connecting to the kubernetes cluster and accessing
57 | its dashboard. If left blank, it will be auto-generated.
58 | type: string
59 | default: ''
60 |
61 |
62 | # DSL definitions section.
63 |
64 | dsl_definitions:
65 | mist_config: &mist_config
66 | mist_uri: { get_input: mist_uri }
67 | mist_token: { get_input: mist_token }
68 |
69 |
70 | # Kubernetes node types' section.
71 |
72 | node_types:
73 | cloudify.mist.nodes.KubernetesMaster:
74 | derived_from: cloudify.mist.nodes.Server
75 | properties:
76 | master:
77 | description: Indicates the kubernetes master
78 | type: boolean
79 | default: true
80 | configured:
81 | description: Indicates whether kubernetes is already configured
82 | type: boolean
83 | default: false
84 | auth_user:
85 | description: The username used for accessing the kubernetes cluster
86 | type: string
87 | default: 'admin'
88 | auth_pass:
89 | description: The password used for accessing the kubernetes cluster
90 | type: string
91 | default: ''
92 | interfaces:
93 | cloudify.interfaces.lifecycle:
94 | stop: tasks/stop.py
95 | create: tasks/create.py
96 | configure: tasks/configure.py
97 |
98 | cloudify.mist.nodes.KubernetesWorker:
99 | derived_from: cloudify.mist.nodes.Server
100 | properties:
101 | master:
102 | type: boolean
103 | default: false
104 | configured:
105 | type: boolean
106 | default: false
107 | interfaces:
108 | cloudify.interfaces.lifecycle:
109 | stop: tasks/stop.py
110 | clone: tasks/clone.py
111 | create: tasks/create.py
112 | configure: tasks/configure.py
113 |
114 |
115 | # Kubernetes node templates' section.
116 |
117 | node_templates:
118 | kube_master:
119 | type: cloudify.mist.nodes.KubernetesMaster
120 | properties:
121 | mist_config: *mist_config
122 | parameters: { get_input: mist_machine_master }
123 | auth_user: { get_input: auth_user }
124 | auth_pass: { get_input: auth_pass }
125 |
126 | kube_worker:
127 | type: cloudify.mist.nodes.KubernetesWorker
128 | properties:
129 | mist_config: *mist_config
130 | parameters: { get_input: mist_machine_worker }
131 | relationships:
132 | - target: kube_master
133 | type: cloudify.relationships.connected_to
134 |
135 |
136 | # Custom workflows sections. Use these to scale the cluster up/down.
137 |
138 | workflows:
139 | scale_cluster_up:
140 | mapping: workflows/scale_up.py
141 | parameters:
142 | mist_machine_worker_list:
143 | default: []
144 | description: >
145 | A list of `mist_machine_worker` inputs, as also defined in the
146 | inputs section, used to increase the cluster's size by more than
147 | a single node at once. The size of the list equals the scaling
148 | factor.
149 |
150 | scale_cluster_down:
151 | mapping: workflows/scale_down.py
152 | parameters:
153 | delta:
154 | type: integer
155 | default: 0
156 | description: The number of worker nodes to be removed from the cluster
157 |
158 |
159 | # Outputs section. Run "cfy local outputs" to get useful commands for
160 | # connecting to the cluster and accessing its dashboard.
161 |
162 | outputs:
163 | configure_context:
164 | description: Configure kubectl context for cluster
165 | value:
166 | command: { concat: [ 'kubectl config set-cluster ', { get_attribute: [ kube_master, machine_name ] }, '-cluster',
167 | ' --insecure-skip-tls-verify=true',
168 | ' --server="https://', { get_attribute: [ kube_master, server_ip ] }, '"',
169 | ' && kubectl config set-credentials ', { get_attribute: [ kube_master, machine_name ] }, '-admin',
170 | ' --username="', { get_attribute: [ kube_master, auth_user ] }, '"',
171 | ' --password="', { get_attribute: [ kube_master, auth_pass ] }, '"',
172 | ' && kubectl config set-context ', { get_attribute: [ kube_master, machine_name ] }, '-context',
173 | ' --cluster=', { get_attribute: [ kube_master, machine_name ] }, '-cluster',
174 | ' --user=', { get_attribute: [ kube_master, machine_name ] }, '-admin']}
175 | kubectl_use_context:
176 | description: Switch to the configured context for this cluster
177 | value:
178 | command: { concat: ['kubectl config use-context ', { get_attribute: [ kube_master, machine_name ] }, '-context'] }
179 | kubectl_cluster_info:
180 | description: Kubernetes cluster-info command
181 | value:
182 | command: { concat: [ 'kubectl cluster-info' ] }
183 | start_dashboard_proxy:
184 | description: Kubernetes dashboard URL
185 | value:
186 | command: { concat: [ 'kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml',
187 | ' && kubectl proxy' ] }
188 | url: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
189 |
--------------------------------------------------------------------------------
/cloud-init/cloud-init.yml:
--------------------------------------------------------------------------------
1 | #cloud-config
2 | packages:
3 | - curl
4 | runcmd:
5 | - >
6 | curl https://raw.githubusercontent.com/mistio/kubernetes-blueprint/master/scripts/mega-deploy.sh | sudo bash -s -- {{ ctx.instance.runtime_properties.cloud_init_arguments }} ||
7 | touch /tmp/cloud-init-error
8 | - |
9 | if [ -e /tmp/cloud-init-error ]; then
10 | curl -X DELETE -H 'Authorization: {{ ctx.node.properties.mist_config.mist_token }}' '{{ ctx.node.properties.mist_config.mist_uri }}/api/v1/jobs/{{ ctx.instance.runtime_properties.job_id }}?error=1&action=cloud_init_finished&machine_name={{ ctx.instance.runtime_properties.machine_name }}'
11 | else
12 | curl -X DELETE -H 'Authorization: {{ ctx.node.properties.mist_config.mist_token }}' '{{ ctx.node.properties.mist_config.mist_uri }}/api/v1/jobs/{{ ctx.instance.runtime_properties.job_id }}?error=0&action=cloud_init_finished&machine_name={{ ctx.instance.runtime_properties.machine_name }}'
13 | fi
14 |
--------------------------------------------------------------------------------
/dev-requirements.txt:
--------------------------------------------------------------------------------
1 | https://github.com/cloudify-cosmo/cloudify-dsl-parser/archive/master.zip
2 | https://github.com/cloudify-cosmo/cloudify-rest-client/archive/master.zip
3 | https://github.com/cloudify-cosmo/cloudify-plugins-common/archive/master.zip
4 | https://github.com/cloudify-cosmo/cloudify-script-plugin/archive/master.zip
5 | https://github.com/cloudify-cosmo/cloudify-cli/archive/master.zip
6 | https://github.com/mistio/mist.client/archive/master.zip
7 | https://github.com/mistio/cloudify-mist-plugin/archive/master.zip
8 |
9 |
--------------------------------------------------------------------------------
/inputs/heal_inputs.yaml:
--------------------------------------------------------------------------------
1 | node_instance_id: worker_62204
--------------------------------------------------------------------------------
/inputs/mist_ec2.yaml:
--------------------------------------------------------------------------------
1 | mist_uri: http://172.17.0.1/
2 | mist_token: 544be89e3016f2fb0ba433802ee432da2e672cd7fe8e4ccc6926e1a54e835eec
3 | mist_key_master: c4b6efa8d0a74f989d2d8a0fae7a04d1
4 | mist_key_worker: c4b6efa8d0a74f989d2d8a0fae7a04d1
5 | mist_cloud_1: 1b2edcb11e524e2aa5fdd89cf1e24278
6 | mist_image_1: ami-d0e21bb1
7 | mist_size_1: m1.medium
8 | mist_location_1: '0'
9 | coreos: true
10 | worker_name: GigaDemoFirstWorker
11 | master_name: GigaDemoMaster
12 |
--------------------------------------------------------------------------------
/inputs/mist_opestack.yaml:
--------------------------------------------------------------------------------
1 | mist_uri: http://172.17.0.1/
2 | mist_token: 544be89e3016f2fb0ba433802ee432da2e672cd7fe8e4ccc6926e1a54e835eec
3 | mist_key_master: c4b6efa8d0a74f989d2d8a0fae7a04d1
4 | mist_key_worker: c4b6efa8d0a74f989d2d8a0fae7a04d1
5 | mist_cloud_1: f9622b1679134189926aa2c01e465fc6
6 | mist_image_1: 0adee2e0-c414-42f3-b9ca-b61671924055
7 | mist_size_1: 41394119-5a38-4f67-b811-c730003ee75b
8 | mist_location_1: '0'
9 | coreos: true
10 | networks: ["d9e7836f-1c40-4bed-9921-772e68d62646"]
11 | worker_name: GigaDemoFirstWorker
12 | master_name: GigaDemoMaster
13 |
--------------------------------------------------------------------------------
/inputs/new_worker.yaml:
--------------------------------------------------------------------------------
1 | key: MyKey
2 | cloud_id: 1b2edcb11e524e2aa5fdd89cf1e24278
3 | image_id: ami-d0e21bb1
4 | coreos: true
5 | size_id: m1.medium
6 | location_id: '0'
7 | name: GigaDemoNewWorker
8 | delta: 1
9 |
--------------------------------------------------------------------------------
/inputs/remove_worker.yaml:
--------------------------------------------------------------------------------
1 | name: GigaDemoNewWorker
2 | delta: -1
3 | cloud_id: 1b2edcb11e524e2aa5fdd89cf1e24278
4 |
--------------------------------------------------------------------------------
/scripts/deploy-node.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
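# Usage (flags parsed by getopts below; tasks/configure.py and tasks/create.py
# invoke this script with -r 'master' or -r 'node'):
#   deploy-node.sh -r <role> [-m <master-ip>] [-t <token>] [-n <node-name>]
# If -t is omitted, main() generates a random kubeadm bootstrap token.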
3 | set -e
4 | while getopts "m:t:r:n:" OPTION
5 | do
6 | case $OPTION in
7 | m)
8 | MASTER=$OPTARG
9 | ;;
10 | t)
11 | TOKEN=$OPTARG
12 | ;;
13 | r)
14 | ROLE=$OPTARG
15 | ;;
16 | n)
17 | NODE_NAME=$OPTARG
18 | ;;
19 | ?)
20 | exit
21 | ;;
22 | esac
23 | done
24 |
25 | ubuntu_main() {
26 | ################################################################################
27 | #
28 | # UBUNTU
29 | #
30 | ################################################################################
31 | # Disable swap
32 | swapoff -a
33 | # Load br_netfilter
34 | modprobe br_netfilter
35 | # Set iptables to correctly see bridged traffic
36 | cat <<EOF | tee /etc/sysctl.d/k8s.conf
37 | net.bridge.bridge-nf-call-ip6tables = 1
38 | net.bridge.bridge-nf-call-iptables = 1
39 | EOF
40 | sysctl --system
87 | # Install the latest version of Docker Engine and containerd
88 | apt-get update
89 | apt-get install -y docker-ce docker-ce-cli containerd.io
90 | # Verify that Docker Engine is installed
91 | docker run hello-world
92 | # Configure containerd
93 | mkdir -p /etc/containerd
94 | containerd config default > /etc/containerd/config.toml
95 | sed -i -e 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
96 | cat /etc/containerd/config.toml
97 | # Restart containerd
98 | systemctl restart containerd
99 | # Configure docker systemd cgroup driver
100 | mkdir -p /etc/docker
101 | mkdir -p /etc/systemd/system/docker.service.d
132 | cat <<EOF > /etc/kubernetes/kubeadm-config.yaml
133 | apiVersion: kubeadm.k8s.io/v1beta2
134 | kind: InitConfiguration
135 | nodeRegistration:
136 | name: "$NODE_NAME"
137 | localAPIEndpoint:
138 | bindPort: 443
139 | bootstrapTokens:
140 | - token: "$TOKEN"
141 | ---
142 | apiVersion: kubeadm.k8s.io/v1beta2
143 | kind: ClusterConfiguration
144 | etcd:
145 | local:
146 | extraArgs:
147 | 'listen-peer-urls': 'http://127.0.0.1:2380'
148 | ---
149 | kind: KubeletConfiguration
150 | apiVersion: kubelet.config.k8s.io/v1beta1
151 | cgroupDriver: systemd
152 | EOF
153 | # Verify connectivity to the gcr.io container image registry
154 | kubeadm config images pull
155 | # Initialize kubeadm
156 | kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
157 | mkdir -p $HOME/.kube
158 | sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
159 | sudo chown $(id -u):$(id -g) $HOME/.kube/config
160 | # Wait for kube-apiserver to be up and running
161 | until curl --output /dev/null --silent --head --insecure https://localhost:443; do
162 | printf '.'
163 | sleep 5
164 | done
165 | # Initialize pod network (weave)
166 | kubever=$(kubectl --kubeconfig /etc/kubernetes/admin.conf version | base64 | tr -d '\n')
167 | kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
168 | }
169 |
170 | install_node_ubuntu() {
171 | # Join cluster
172 | kubeadm join $MASTER:443 \
173 | --discovery-token-unsafe-skip-ca-verification \
174 | --token $TOKEN \
175 | --node-name $NODE_NAME
176 | }
177 |
178 |
179 |
180 | find_distro () {
181 | ################################################################################
182 | #
183 | # FIND WHICH OS/DISTRO WE HAVE
184 | #
185 | ################################################################################
186 |
187 |
188 | VERSION=`lsb_release -ds 2>/dev/null || cat /etc/*release 2>/dev/null | head -n1 || uname -om`
189 |
190 | if [[ $VERSION =~ .*Ubuntu* ]]
191 | then
192 | echo "Found Ubuntu distro"
193 | DISTRO="Ubuntu"
194 | elif [[ $VERSION =~ .*Debian* ]]
195 | then
196 | echo "Found Debian distro"
197 | DISTRO="Debian"
198 | else
199 | echo "Distro not supported"
200 | exit 1
201 | fi
202 |
203 | }
204 |
205 | main () {
206 | ################################################################################
207 | #
208 | # MAIN FUNCTION
209 | #
210 | ################################################################################
211 |
212 |
213 | # kubeadm init expects a token of <6chars>.<16chars>
214 | pass1=`date +%s | sha256sum | head -c 6 ; echo`
215 | pass2=`date +%s | sha256sum | head -c 16 ; echo`
216 | pass="${pass1}.${pass2}"
217 | TOKEN=${TOKEN-$pass}
218 |
219 | # Role must be provided
220 | if [ -z "$ROLE" ]
221 | then
222 |     echo "Role is not set. You must specify role [-r <role>]"
223 | exit 1
224 | fi
225 |
226 | find_distro
227 |
228 | if [ "$DISTRO" = "Ubuntu" ] || [ "$DISTRO" = "Debian" ]; then
229 | ubuntu_main
230 | fi
231 |
232 | }
233 |
234 | main
235 |
--------------------------------------------------------------------------------
/scripts/drain-node.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
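# {{hostname}} is a template variable, rendered by
# ctx.download_resource_and_render() in tasks/stop.py.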
2 | set -ex
3 | kubectl drain {{hostname}} --delete-emptydir-data --force --ignore-daemonsets
4 | kubectl delete node {{hostname}}
5 |
--------------------------------------------------------------------------------
/scripts/reset-node.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
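# Run by tasks/stop.py against reused machines (use_external_resource) to undo
# `kubeadm init`/`kubeadm join` instead of destroying the VM.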
2 | set -e
3 | rm -rf /etc/kubernetes/manifests/*.yaml
4 | rm -rf /var/lib/etcd
5 | rm -rf /etc/cni/net.d
6 | kubeadm reset -f
7 | iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
8 |
--------------------------------------------------------------------------------
/tasks/clone.py:
--------------------------------------------------------------------------------
1 | from cloudify import ctx
2 |
3 | from plugin.utils import LocalStorage
4 |
5 |
6 | if __name__ == '__main__':
7 | # FIXME HACK This operation is required by the scale_up workflow. It tries
8 | # to mimic - in a very simple, dummy way - the functionality of Deployment
9 | # Modification. Deployment Modification changes the data model by adding or
10 | # removing node instances, and returns the modified node instances for the
11 | # workflow to operate on them. However, the Deployment Modification does
12 | # not work for local deployments. Instead, it requires an active Cloudify
13 | # Manager. The built-in scale workflow makes use of this API in order to
14 | # scale a node instance up or down. More on Deployment Modification here:
15 | # https://docs.cloudify.co/4.2.0/workflows/creating-your-own-workflow/. In
16 | # our case, we are creating an exact copy of the specified node instance so
17 | # that we may re-execute the node's operations.
18 | storage = LocalStorage()
19 | storage.clone_node_instance(ctx.instance.id)
20 |
--------------------------------------------------------------------------------
/tasks/configure.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | from cloudify import ctx
4 |
5 | from plugin import constants
6 |
7 | from plugin.utils import random_string
8 | from plugin.utils import wait_for_event
9 |
10 | from plugin.connection import MistConnectionClient
11 |
12 |
13 | def remove_kubernetes_script():
14 | """Attempt to remove the kubernetes installation script.
15 |
16 | This method tries to remove the already uploaded installation script after
17 | each kubernetes node has been provisioned to prevent multiple scripts from
18 | accumulating in the user's account.
19 |
20 | If an error is raised, it's logged and the workflow execution is carried
21 | on.
22 |
23 | """
24 |     # FIXME Perhaps, script should not be handled here or this way. The
25 | # cloudify-mist plugin should define a `Script` node type to execute
26 | # operations on scripts, such as uploading, deleting, etc.
27 | script_id = ctx.instance.runtime_properties.pop('script_id', '')
28 | if script_id:
29 | try:
30 | MistConnectionClient().client.remove_script(script_id)
31 | except Exception as exc:
32 | ctx.logger.warn('Failed to remove installation script: %r', exc)
33 |
34 |
35 | def prepare_kubernetes_script():
36 | """Upload kubernetes installation script, if missing.
37 |
38 | This method is executed at the very beginning, in a pre-configuration
39 | phase, to make sure that the kubernetes installation script has been
40 | uploaded to mist.io.
41 |
42 | This method is meant to be invoked early on by:
43 |
44 | configure_kubernetes_master()
45 | configure_kubernetes_worker()
46 |
47 | The script_id inside each instance's runtime properties is used later
48 | on in order to configure kubernetes on the provisioned machines.
49 |
50 | """
51 | if ctx.instance.runtime_properties.get('script_id'):
52 | ctx.logger.info('Kubernetes installation script already exists')
53 | else:
54 | ctx.logger.info('Uploading fresh kubernetes installation script')
55 | # If a script_id does not exist in the node instance's runtime
56 | # properties, perhaps because this is the first node that is being
57 |         # configured, load the script from file and upload it to mist.io;
58 |         # it will be executed on the machine over ssh later, via run_script.
59 | client = MistConnectionClient().client
60 | script = os.path.join(os.path.dirname(__file__), 'deploy-node.sh')
61 | ctx.download_resource(
62 | os.path.join('scripts', 'deploy-node.sh'), script
63 | )
64 | with open(os.path.abspath(script)) as fobj:
65 | script = fobj.read()
66 | script = client.add_script(
67 | name='install_kubernetes_%s' % random_string(length=4),
68 | script=script, location_type='inline', exec_type='executable'
69 | )
70 | ctx.instance.runtime_properties['script_id'] = script['id']
71 |
72 |
73 | def configure_kubernetes_master():
74 | """Configure the kubernetes master.
75 |
76 | Sets up the master node and stores the necessary settings inside the node
77 | instance's runtime properties, which are required by worker nodes in order
78 | to join the kubernetes cluster.
79 |
80 | """
81 | ctx.logger.info('Setting up kubernetes master node')
82 | prepare_kubernetes_script()
83 |
84 | conn = MistConnectionClient()
85 | machine = conn.get_machine(
86 | cloud_id=ctx.instance.runtime_properties['cloud_id'],
87 | machine_id=ctx.instance.runtime_properties['machine_id'],
88 | )
89 |
90 | # Token for secure master-worker communication.
91 | token = '%s.%s' % (random_string(length=6), random_string(length=16))
92 | ctx.instance.runtime_properties['master_token'] = token.lower()
93 |
94 | # Store kubernetes dashboard credentials in runtime properties.
95 | ctx.instance.runtime_properties.update({
96 | 'auth_user': ctx.node.properties['auth_user'],
97 | 'auth_pass': ctx.node.properties['auth_pass'] or random_string(10),
98 | })
99 |
100 | ctx.logger.info('Installing kubernetes on master node')
101 |
102 | # Prepare script parameters.
103 | params = "-n '%s' " % ctx.instance.runtime_properties['machine_name']
104 | params += "-t '%s' " % ctx.instance.runtime_properties['master_token']
105 | params += "-r 'master'"
106 |
107 | # Run the script.
108 | script = conn.client.run_script(
109 | script_id=ctx.instance.runtime_properties['script_id'], su=True,
110 | machine_id=machine.id,
111 | cloud_id=machine.cloud.id,
112 | script_params=params,
113 | )
114 | ctx.instance.runtime_properties['job_id'] = script['job_id']
115 |
116 |
117 | def configure_kubernetes_worker():
118 | """Configure a new kubernetes node.
119 |
120 | Configures a new worker node and connects it to the kubernetes master.
121 |
122 | """
123 | # Get master node from relationships schema.
124 | master = ctx.instance.relationships[0]._target.instance
125 | ctx.instance.runtime_properties.update({
126 | 'script_id': master.runtime_properties.get('script_id', ''),
127 | 'master_ip': master.runtime_properties.get('master_ip', ''),
128 | 'master_token': master.runtime_properties.get('master_token', ''),
129 | })
130 |
131 | ctx.logger.info('Setting up kubernetes worker')
132 | prepare_kubernetes_script()
133 |
134 | conn = MistConnectionClient()
135 | machine = conn.get_machine(
136 | cloud_id=ctx.instance.runtime_properties['cloud_id'],
137 | machine_id=ctx.instance.runtime_properties['machine_id'],
138 | )
139 |
140 | ctx.logger.info('Configuring kubernetes node')
141 |
142 | # Prepare script parameters.
143 | params = "-m '%s' " % ctx.instance.runtime_properties['master_ip']
144 | params += "-n '%s' " % ctx.instance.runtime_properties['machine_name']
145 | params += "-t '%s' " % ctx.instance.runtime_properties['master_token']
146 | params += "-r 'node'"
147 |
148 | # Run the script.
149 | script = conn.client.run_script(
150 | script_id=ctx.instance.runtime_properties['script_id'], su=True,
151 | machine_id=machine.id,
152 | cloud_id=machine.cloud.id,
153 | script_params=params,
154 | )
155 | ctx.instance.runtime_properties['job_id'] = script['job_id']
156 |
157 |
158 | if __name__ == '__main__':
159 | """Setup kubernetes on the machines defined by the blueprint."""
160 | conn = MistConnectionClient()
161 | cloud = conn.get_cloud(ctx.instance.runtime_properties['cloud_id'])
162 | if cloud.provider in constants.CLOUD_INIT_PROVIDERS:
163 | wait_for_event(
164 | job_id=ctx.instance.runtime_properties['job_id'],
165 | job_kwargs={
166 | 'action': 'cloud_init_finished',
167 | 'machine_name': ctx.instance.runtime_properties['machine_name']
168 | }
169 | )
170 | elif not ctx.node.properties['configured']:
171 | if not ctx.node.properties['master']:
172 | configure_kubernetes_worker()
173 | else:
174 | configure_kubernetes_master()
175 | try:
176 | wait_for_event(
177 | job_id=ctx.instance.runtime_properties['job_id'],
178 | job_kwargs={
179 | 'action': 'script_finished',
180 | 'external_id': ctx.instance.runtime_properties[
181 | 'machine_id'],
182 | }
183 | )
184 | except Exception:
185 | remove_kubernetes_script()
186 | raise
187 | else:
188 | remove_kubernetes_script()
189 | ctx.logger.info('Kubernetes installation succeeded!')
190 | else:
191 | ctx.logger.info('Kubernetes already configured')
192 |
--------------------------------------------------------------------------------
/tasks/create.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | from cloudify import ctx
4 | from cloudify.state import ctx_parameters as params
5 | from cloudify.exceptions import NonRecoverableError
6 |
7 | from plugin import constants
8 | from plugin.utils import random_string
9 | from plugin.utils import generate_name
10 | from plugin.utils import get_stack_name
11 | from plugin.utils import is_resource_external
12 |
13 | from plugin.server import get_cloud_id
14 | from plugin.server import create_machine
15 | from plugin.connection import MistConnectionClient
16 |
17 |
18 | def prepare_cloud_init():
19 | """Render the cloud-init script.
20 |
21 | This method is executed if the cloud provider is included in the
22 | CLOUD_INIT_PROVIDERS in order to prepare the cloud-init that is
23 | used to install kubernetes on each of the provisioned VMs at boot
24 | time.
25 |
26 | This method, based on each node's type, is meant to invoke:
27 |
28 | get_master_init_args()
29 | get_worker_init_args()
30 |
31 | in order to get the arguments required by the kubernetes installation
32 | script.
33 |
34 | The cloud-init.yml is just a wrapper around the deploy-node.sh, which
35 | is provided as a parameter at VM provision time. In return, we avoid
36 | the extra step of uploading an extra script and executing it over SSH.
37 |
38 | """
39 | if ctx.node.properties['master']:
40 | arguments = get_master_init_args()
41 | else:
42 | arguments = get_worker_init_args()
43 |
44 | ctx.logger.debug('Will run deploy-node.sh with: %s', arguments)
45 | ctx.instance.runtime_properties['cloud_init_arguments'] = arguments
46 |
47 | ctx.logger.debug('Current runtime: %s', ctx.instance.runtime_properties)
48 |
49 | ctx.logger.info('Rendering cloud-init.yml')
50 |
51 | cloud_init = os.path.join(os.path.dirname(__file__), 'cloud_init.yml')
52 | ctx.download_resource_and_render(
53 | os.path.join('cloud-init', 'cloud-init.yml'), cloud_init
54 | )
55 | with open(os.path.abspath(cloud_init)) as fobj:
56 | ctx.instance.runtime_properties['cloud_init'] = fobj.read()
57 |
58 |
59 | def get_master_init_args():
60 | """Return the arguments required to install the kubernetes master."""
61 |
62 | ctx.logger.info('Preparing cloud-init for kubernetes master')
63 |
64 | # Token for secure master-worker communication.
65 | token = '%s.%s' % (random_string(length=6), random_string(length=16))
66 | ctx.instance.runtime_properties['master_token'] = token.lower()
67 |
68 | # Store kubernetes dashboard credentials in runtime properties.
69 | ctx.instance.runtime_properties.update({
70 | 'auth_user': ctx.node.properties['auth_user'],
71 | 'auth_pass': ctx.node.properties['auth_pass'] or random_string(10),
72 | })
73 |
74 | arguments = "-n '%s' " % ctx.instance.runtime_properties['machine_name']
75 | arguments += "-t '%s' " % ctx.instance.runtime_properties['master_token']
76 | arguments += "-r 'master'"
77 |
78 | return arguments
79 |
80 |
81 | def get_worker_init_args():
82 | """Return the arguments required to install a kubernetes worker."""
83 |
84 | ctx.logger.info('Preparing cloud-init for kubernetes worker')
85 |
86 | # Get master node from relationships schema.
87 | master = ctx.instance.relationships[0]._target.instance
88 | ctx.instance.runtime_properties.update({
89 | 'master_ip': master.runtime_properties.get('master_ip', ''),
90 | 'master_token': master.runtime_properties.get('master_token', ''),
91 | })
92 |
93 | arguments = "-n '%s' " % ctx.instance.runtime_properties['machine_name']
94 | arguments += "-m '%s' " % master.runtime_properties['master_ip']
95 | arguments += "-t '%s' " % master.runtime_properties['master_token']
96 | arguments += "-r 'node'"
97 |
98 | return arguments
99 |
100 |
101 | if __name__ == '__main__':
102 | """Create the nodes on which to install kubernetes.
103 |
104 | Besides creating the nodes, this method also decides the way kubernetes
105 | will be configured on each of the nodes.
106 |
107 | The legacy way is to upload the script and execute it over SSH. However,
108 | if the cloud provider supports cloud-init, a cloud-config can be used as
109 | a wrapper around the actual script. In this case, the `configure` lifecycle
110 | operation of the blueprint is mostly skipped. More precisely, it just waits
111 | to be signalled regarding cloud-init's result and exits immediately without
112 | performing any additional actions.
113 |
114 | """
115 | conn = MistConnectionClient()
116 | ctx.instance.runtime_properties['job_id'] = conn.job_id
117 |
118 | # Create a copy of the node's immutable properties in order to update them.
119 | node_properties = ctx.node.properties.copy()
120 |
121 | # Override the node's properties with parameters passed from workflows.
122 | for key in params:
123 | if key in constants.INSTANCE_REQUIRED_PROPERTIES + ('machine_id', ):
124 | node_properties['parameters'][key] = params[key]
125 | ctx.logger.info('Added %s=%s to node parameters', key, params[key])
126 |
127 | # Generate a somewhat random machine name. NOTE that we need the name at
128 | # this early point in order to be passed into cloud-init, if used, so that
129 | # we may use it later on to match log entries.
130 | name = generate_name(
131 | get_stack_name(),
132 | 'master' if ctx.node.properties['master'] else 'worker'
133 | )
134 | node_properties['parameters']['name'] = name
135 | ctx.instance.runtime_properties['machine_name'] = name
136 |
137 | # Get the cloud based on the node's properties.
138 | cloud = conn.get_cloud(get_cloud_id(node_properties))
139 |
140 | # Generate cloud-init, if supported.
141 | # TODO This is NOT going to work when use_external_resource is True. We
142 | # are using cloud-init to configure the newly provisioned nodes in case
143 | # the VMs are unreachable over SSH. If the VMs already exist, cloud-init
144 | # is not an option. Perhaps, we should allow to toggle cloud-init on/off
145 | # in some way after deciding if the VMs are accessible over the public
146 | # internet.
147 | if cloud.provider in constants.CLOUD_INIT_PROVIDERS:
148 | if is_resource_external(node_properties):
149 | raise NonRecoverableError('use_external_resource may not be set')
150 | prepare_cloud_init()
151 | cloud_init = ctx.instance.runtime_properties.get('cloud_init', '')
152 | node_properties['parameters']['cloud_init'] = cloud_init
153 |
154 | # Do not wait for post-deploy-steps to finish in case the configuration
155 | # is done using a cloud-init script.
156 | skip_post_deploy = cloud.provider in constants.CLOUD_INIT_PROVIDERS
157 |
158 | # Create the nodes. Get the master node's IP address. NOTE that we prefer
159 | # to use private IP addresses for master-worker communication. Public IPs
160 | # are used mostly when connecting to the kubernetes API from the outside.
161 | if ctx.node.properties['master']:
162 | create_machine(node_properties, skip_post_deploy, node_type='master')
163 |
164 | ips = (ctx.instance.runtime_properties['info']['private_ips'] +
165 | ctx.instance.runtime_properties['info']['public_ips'])
166 |         ips = [ip for ip in ips if ':' not in ip]  # keep IPv4 addresses only
167 | if not ips:
168 | raise NonRecoverableError('No IPs associated with the machine')
169 |
170 | ctx.instance.runtime_properties['master_ip'] = ips[0]
171 | ctx.instance.runtime_properties['server_ip'] = ips[-1]
172 | else:
173 | create_machine(node_properties, skip_post_deploy, node_type='worker')
174 |
--------------------------------------------------------------------------------
/tasks/stop.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | from cloudify import ctx
4 |
5 | from plugin.utils import random_string
6 | from plugin.utils import wait_for_event
7 |
8 | from plugin.connection import MistConnectionClient
9 |
10 |
11 | def reset_kubeadm():
12 | """Uninstall kubernetes on a node.
13 |
14 | Runs `kubeadm reset` on the specified machine in order to remove the
15 | kubernetes services and undo all configuration set by `kubeadm init`.
16 |
17 | """
18 | # Get script from path.
19 | script = os.path.join(os.path.dirname(__file__), 'reset-node.sh')
20 | ctx.download_resource(
21 | os.path.join('scripts', 'reset-node.sh'), script
22 | )
23 |
24 | # Get worker.
25 | conn = MistConnectionClient()
26 | machine = conn.get_machine(
27 | cloud_id=ctx.instance.runtime_properties['cloud_id'],
28 | machine_id=ctx.instance.runtime_properties['machine_id'],
29 | )
30 |
31 | ctx.logger.info('Running "kubeadm reset" on %s', machine)
32 |
33 | _add_run_remove_script(
34 | cloud_id=machine.cloud.id,
35 | machine_id=machine.id,
36 | script_path=os.path.abspath(script),
37 | script_name='kubeadm_reset_%s' % random_string(length=4)
38 | )
39 |
40 |
41 | def drain_and_remove():
42 | """Mark the node as unschedulable, evict all pods, and remove it.
43 |
44 | Runs `kubectl drain` and `kubectl delete nodes` on the kubernetes
45 | master in order to drain and afterwards remove the specified node
46 | from the cluster.
47 |
48 | """
49 | if ctx.node.properties['master']: # FIXME Is this necessary?
50 | return
51 |
52 | # Get master instance.
53 | master = ctx.instance.relationships[0]._target.instance
54 |
55 | # Render script.
56 | script = os.path.join(os.path.dirname(__file__), 'drain-node.sh')
57 | ctx.download_resource_and_render(
58 | os.path.join('scripts', 'drain-node.sh'), script,
59 | template_variables={
60 | 'hostname': ctx.instance.runtime_properties.get(
61 | 'machine_name', '').lower()
62 | },
63 | )
64 |
65 | conn = MistConnectionClient()
66 | machine = conn.get_machine(
67 | cloud_id=master.runtime_properties['cloud_id'],
68 | machine_id=master.runtime_properties['machine_id'],
69 | )
70 |
71 | ctx.logger.info('Running "kubectl drain && kubectl delete" on %s', machine)
72 |
73 | _add_run_remove_script(
74 | cloud_id=machine.cloud.id,
75 | machine_id=machine.id,
76 | script_path=os.path.abspath(script),
77 | script_name='kubectl_drain_%s' % random_string(length=4)
78 | )
79 |
80 |
81 | # TODO This should be moved to the cloudify-mist-plugin as a generic method.
82 | # along with all related script stuff in tasks/create.py
83 | def _add_run_remove_script(cloud_id, machine_id, script_path, script_name):
84 | """Helper method to add a script, run it, and, finally, remove it."""
85 | conn = MistConnectionClient()
86 |
87 | # Upload script.
88 | with open(script_path) as fobj:
89 | script = conn.client.add_script(
90 | name=script_name, script=fobj.read(),
91 | location_type='inline', exec_type='executable'
92 | )
93 |
94 | # Run the script.
95 | job = conn.client.run_script(script_id=script['id'], machine_id=machine_id,
96 | cloud_id=cloud_id, su=True)
97 |
98 | # Wait for the script to exit. The script should exit fairly quickly,
99 | # thus we only wait for a couple of minutes for the corresponding log
100 | # entry.
101 | try:
102 | wait_for_event(
103 | job_id=job['job_id'],
104 | job_kwargs={
105 | 'action': 'script_finished',
106 | 'external_id': machine_id,
107 | },
108 | timeout=180
109 | )
110 | except Exception:
111 | ctx.logger.warn('Script %s finished with errors!', script_name)
112 | else:
113 | ctx.logger.info('Script %s finished successfully', script_name)
114 |
115 | # Remove the script.
116 | try:
117 | conn.client.remove_script(script['id'])
118 | except Exception as exc:
119 | ctx.logger.warn('Failed to remove script %s: %r', script_name, exc)
120 |
121 |
122 | if __name__ == '__main__':
123 |     """Remove the node from the cluster and uninstall the kubernetes services
124 |
125 | Initially, all resources will be drained from the kubernetes node and
126 | afterwards the node will be removed from the cluster.
127 |
128 | The `reset_kubeadm` method will only run in case an already existing
129 | resource has been used in order to setup the kubernetes cluster. As
130 | we do not destroy already existing resources, which have been used to
131 |     setup kubernetes, we opt for uninstalling the corresponding kubernetes
132 | services and undoing all configuration in order to bring the machines
133 | to their prior state.
134 |
135 | If `use_external_resource` is False, then this method is skipped and
136 | the resources will be destroyed later on.
137 |
138 | """
139 | drain_and_remove()
140 | if ctx.instance.runtime_properties.get('use_external_resource'):
141 | reset_kubeadm()
142 |
--------------------------------------------------------------------------------
/workflows/scale_down.py:
--------------------------------------------------------------------------------
1 | from cloudify.workflows import ctx as workctx
2 | from cloudify.workflows import parameters as inputs
3 |
4 |
5 | def graph_scale_down_workflow(delta):
6 | """Scale down the kubernetes cluster.
7 |
8 | A maximum number of `delta` nodes will be removed from the cluster.
9 |
10 | """
11 | # Set the workflow to be in graph mode.
12 | graph = workctx.graph_mode()
13 |
14 | # Get a maximum of `delta` number of workers.
15 | node = workctx.get_node('kube_worker')
16 | instances = [instance for instance in node.instances][:delta]
17 |
18 | # Setup events to denote the beginning and end of tasks.
19 | start_events, done_events = {}, {}
20 |
21 | for i, instance in enumerate(instances):
22 |         start_events[i] = instance.send_event('Removing node from cluster')
23 | done_events[i] = instance.send_event('Node removed from cluster')
24 |
25 | # Create `delta` number of TaskSequence objects. That way we are able to
26 | # control the sequence of events and the dependencies amongst tasks. One
27 |     # graph sequence corresponds to a node being removed from the cluster.
28 | for i, instance in enumerate(instances):
29 | sequence = graph.sequence()
30 | sequence.add(
31 | start_events[i],
32 | instance.execute_operation(
33 | operation='cloudify.interfaces.lifecycle.stop',
34 | ),
35 | instance.execute_operation(
36 | operation='cloudify.interfaces.lifecycle.delete',
37 | ),
38 | instance.set_state('deleted'),
39 | done_events[i],
40 | )
41 |
42 | # Start execution.
43 | return graph.execute()
44 |
45 |
46 | if __name__ == '__main__':
47 |     delta = abs(int(inputs.get('delta') or 0))  # tolerate negative deltas, e.g. -1
48 | workctx.logger.info('Scaling kubernetes cluster down by %d node(s)', delta)
49 | if delta:
50 | graph_scale_down_workflow(delta)
51 |
--------------------------------------------------------------------------------
/workflows/scale_up.py:
--------------------------------------------------------------------------------
1 | from cloudify.workflows import ctx as workctx
2 | from cloudify.workflows import parameters as inputs
3 |
4 |
5 | def graph_scale_up_workflow(delta, worker_data_list):
6 | """Scale up the kubernetes cluster.
7 |
8 | This method implements the scale up workflow using the Graph Framework.
9 |
10 | Scaling is based on the `delta` input, which must be greater than 0 for
11 | the workflow to run.
12 |
13 | """
14 | # Set the workflow to be in graph mode.
15 | graph = workctx.graph_mode()
16 |
17 | # Get the instance for which to add an execute operation task to the graph.
18 | node = workctx.get_node('kube_worker')
19 | instance = [instance for instance in node.instances][0]
20 |
21 | # Setup events to denote the beginning and end of tasks. The events will be
22 | # also used to control dependencies amongst tasks.
23 | start_events, done_events = {}, {}
24 |
25 | for i in range(delta):
26 | start_events[i] = instance.send_event('Adding node to cluster')
27 | done_events[i] = instance.send_event('Node added to cluster')
28 |
29 | # Prepare the operations' kwargs.
30 | operation_kwargs_list = []
31 |
32 | for worker_data in worker_data_list:
33 | if worker_data.get('machine_id'):
34 | operation_kwargs_list.append(
35 | {
36 | 'cloud_id': worker_data.get('cloud_id'),
37 | 'machine_id': worker_data['machine_id'],
38 | }
39 | )
40 | else:
41 | operation_kwargs_list.append(
42 | {
43 | 'key_id': worker_data.get('key_id', ''),
44 | 'size_id': worker_data.get('size_id', ''),
45 | 'image_id': worker_data.get('image_id', ''),
46 | 'cloud_id': worker_data.get('cloud_id', ''),
47 | 'machine_id': '',
48 | 'networks': worker_data.get('networks', []),
49 | 'location_id': worker_data.get('location_id', ''),
50 | }
51 | )
52 |
53 | # Create `delta` number of TaskSequence objects. That way we are able to
54 | # control the sequence of events and the dependencies amongst tasks. One
55 | # graph sequence corresponds to a new node added to the cluster.
56 | for i in range(delta):
57 | sequence = graph.sequence()
58 | sequence.add(
59 | start_events[i],
60 | instance.execute_operation(
61 | operation='cloudify.interfaces.lifecycle.clone',
62 | ),
63 | instance.execute_operation(
64 | operation='cloudify.interfaces.lifecycle.create',
65 | kwargs=operation_kwargs_list[i],
66 | ),
67 | instance.execute_operation(
68 | operation='cloudify.interfaces.lifecycle.configure',
69 | ),
70 | instance.set_state('started'),
71 | done_events[i],
72 | )
73 |
74 | # Now, we use the events to control the tasks' dependencies, ensuring that
75 | # tasks are executed in the correct order. We aim to create dependencies
76 | # between a sequence's last event and the next sequence's initial event.
77 | # That way, we ensure that sequences are executed sequentially, and not in
78 | # parallel. This is required, since the cloudify.interfaces.lifecycle.clone
79 | # operation modifies the node instances in local-storage and we want to
80 | # avoid having multiple tasks messing with the same files at the same time.
81 | for i in range(delta - 1):
82 | graph.add_dependency(start_events[i + 1], done_events[i])
83 |
84 | # Start execution.
85 | return graph.execute()
86 |
87 |
88 | if __name__ == '__main__':
89 | mist_machines = inputs.get('mist_machine_worker_list', [])
90 | assert isinstance(mist_machines, list), mist_machines
91 |     if len(mist_machines) == 0:
92 |         delta = 0
93 |     elif len(mist_machines) == 1:
94 |         delta = mist_machines[0].get('quantity', 1)
95 |         mist_machines *= delta
96 |     else:
97 | delta = len(mist_machines)
98 | workctx.logger.info('Scaling kubernetes cluster up by %d node(s)', delta)
99 | if delta:
100 | graph_scale_up_workflow(delta, mist_machines)
101 |
--------------------------------------------------------------------------------