├── LICENSE
├── README.md
├── docs
│   ├── figures
│   │   └── kindbox.png
│   └── issue-guidelines.md
└── scr
    └── kindbox
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ## Introduction
2 |
3 | Kindbox is a simple open-source tool created by Nestybox to easily create K8s clusters with Docker + Sysbox.
4 |
5 | Check out this [video](https://asciinema.org/a/VCgF094wb4CuVeI8h3iDKhh5m?speed=1.75).
6 |
7 |
8 |
9 | Kindbox does some of the same things that the K8s.io KinD tool does (e.g., cluster
10 | creation, destruction, etc.), but it's much simpler and more flexible, does not
11 | require complex container images, and is even more efficient.
12 |
13 | Kindbox is not meant to compete with the K8s.io KinD tool. Rather, it's meant to
14 | provide a reference example of how easy it is to deploy a K8s cluster inside
15 | containers when using the Sysbox container runtime.
16 |
17 | Kindbox is a simple bash script wrapper around Docker commands. Feel free to
18 | modify it to fit your needs.
19 |
20 | ## Kindbox Simplicity & Flexibility
21 |
22 | Kindbox is a very simple and flexible tool: it's a bash script wrapper around
23 | Docker commands that create, destroy, and resize a Kubernetes-in-Docker cluster.
24 |
25 | That is, Kindbox talks to Docker, Docker talks to Sysbox, and Sysbox creates or
26 | destroys the containers.
27 |
28 | The reason the tool is so simple is that the Sysbox container runtime creates
29 | the containers such that they can run K8s seamlessly inside. Thus, Kindbox need
30 | only deploy the containers with Docker and run `kubeadm` within them to set them
31 | up. **It's that easy**.
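   |
   | For illustration, the commands Kindbox runs under the hood are roughly the
   | following (a simplified sketch; see `kindbox showcmds` and the script itself for
   | the real thing):
   |
   | ```console
   | # Create a node container with the Sysbox runtime:
   | $ docker run --runtime=sysbox-runc -d --rm --network=mycluster-net \
   |     --name=mycluster-master --hostname=mycluster-master nestybox/k8s-node:v1.18.2
   |
   | # Initialize K8s inside it (kubeadm is preinstalled in the node image):
   | $ docker exec mycluster-master sh -c "kubeadm init --kubernetes-version=v1.18.2 --pod-network-cidr=10.244.0.0/16"
   | ```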
32 |
33 | For this same reason, no specialized Docker images are needed for the containers
34 | that act as K8s nodes. In other words, the K8s node image does not require
35 | complex entrypoints or complex Docker commands for its deployment.
36 |
37 | This in turn enables you to fully control the contents of the images that make
38 | up the K8s nodes, as well as the process for launching the K8s cluster.
39 |
40 | ## Using Kindbox
41 |
42 | By default, Kindbox uses a Docker image called `nestybox/k8s-node` for the containers
43 | that make up the cluster.
44 |
45 | It's a simple image that includes systemd, Docker, the K8s `kubeadm` tool, and
46 | preloaded inner pod images for the K8s control plane.
47 |
48 | The Dockerfile is [here](https://github.com/nestybox/dockerfiles/tree/master/k8s-node).
49 |
50 | Feel free to copy it and customize it to your needs.
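   |
   | For example, if you build and tag your own node image (the image name below is
   | just a placeholder), you can point Kindbox at it with the `--image` option:
   |
   | ```console
   | $ kindbox create --num-workers=3 --image=my-registry/my-k8s-node:v1.18.2 mycluster
   | ```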
51 |
52 | ### Cluster Creation
53 |
54 | 1) Create a cluster called `mycluster` with 10 nodes (1 master + 9 workers):
55 |
56 | ```console
57 | $ kindbox create --num-workers=9 mycluster
58 |
59 | Creating a K8s cluster with Docker + Sysbox ...
60 |
61 | Cluster name : mycluster
62 | Worker nodes : 9
63 | Docker network : mycluster-net
64 | Node image : nestybox/k8s-node:v1.18.2
65 | K8s version : v1.18.2
66 | Publish apiserver port : false
67 |
68 | Creating the K8s cluster nodes ...
69 | - Creating node mycluster-master
70 | - Creating node mycluster-worker-0
71 | - Creating node mycluster-worker-1
72 | - Creating node mycluster-worker-2
73 | - Creating node mycluster-worker-3
74 | - Creating node mycluster-worker-4
75 | - Creating node mycluster-worker-5
76 | - Creating node mycluster-worker-6
77 | - Creating node mycluster-worker-7
78 | - Creating node mycluster-worker-8
79 |
80 | Initializing the K8s master node ...
81 | - Running kubeadm init on mycluster-master ... (may take up to a minute)
82 | - Setting up kubectl on mycluster-master ...
83 | - Initializing networking (flannel) on mycluster-master ...
84 | - Waiting for mycluster-master to be ready ...
85 |
86 | Initializing the K8s worker nodes ...
87 | - Joining the worker nodes to the cluster ...
88 |
89 | Cluster created successfully!
90 |
91 | Use kubectl to control the cluster.
92 |
93 | 1) Install kubectl on your host
94 | 2) export KUBECONFIG=${KUBECONFIG}:${HOME}/.kube/mycluster-config
95 | 3) kubectl config use-context kubernetes-admin@mycluster
96 | 4) kubectl get nodes
97 |
98 | Alternatively, use "docker exec" to control the cluster:
99 |
100 | $ docker exec mycluster-master kubectl get nodes
101 | ```
102 |
103 | This takes Kindbox less than 2 minutes and consumes less than 1GB of overhead on
104 | my laptop!
105 |
106 | In contrast, this same cluster requires 2.5GB when using K8s.io KinD +
107 | Sysbox, and 10GB when using K8s.io KinD without Sysbox.
108 |
109 | This means that with Sysbox, you can quickly deploy larger and/or more K8s clusters
110 | on your machine without eating up your disk space.
111 |
112 | 2) Set up kubectl on the host so we can control the cluster:
113 |
114 | (This assumes you've [installed kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your host).
115 |
116 | ```console
117 | $ export KUBECONFIG=${KUBECONFIG}:${HOME}/.kube/mycluster-config
118 |
119 | $ kubectl config use-context kubernetes-admin@mycluster
120 | Switched to context "kubernetes-admin@mycluster".
121 | ```
122 |
123 | 3) Use kubectl to verify all is good:
124 |
125 | ```console
126 | $ kubectl get nodes
127 | NAME                 STATUS   ROLES    AGE     VERSION
128 | mycluster-master     Ready    master   4m43s   v1.18.3
129 | mycluster-worker-0   Ready    <none>   3m51s   v1.18.3
130 | mycluster-worker-1   Ready    <none>   3m53s   v1.18.3
131 | mycluster-worker-2   Ready    <none>   3m52s   v1.18.3
132 | mycluster-worker-3   Ready    <none>   3m53s   v1.18.3
133 | mycluster-worker-4   Ready    <none>   3m51s   v1.18.3
134 | mycluster-worker-5   Ready    <none>   3m52s   v1.18.3
135 | mycluster-worker-6   Ready    <none>   3m50s   v1.18.3
136 | mycluster-worker-7   Ready    <none>   3m50s   v1.18.3
137 | mycluster-worker-8   Ready    <none>   3m50s   v1.18.3
139 |
140 | From here on, we use kubectl as usual to deploy pods, services, etc.
141 |
142 | For example, to create an nginx deployment with 10 pods:
143 |
144 | ```console
145 | $ kubectl create deployment nginx --image=nginx
146 | $ kubectl scale --replicas=10 deployment nginx
147 | $ kubectl get pods -o wide
148 | NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE                 NOMINATED NODE   READINESS GATES
149 | nginx-f89759699-6ch9m   1/1     Running   0          21s   10.244.11.4   mycluster-worker-6   <none>           <none>
150 | nginx-f89759699-8jrc8   1/1     Running   0          21s   10.244.10.4   mycluster-worker-5   <none>           <none>
151 | nginx-f89759699-dgxq8   1/1     Running   0          28s   10.244.2.15   mycluster-worker-1   <none>           <none>
152 | nginx-f89759699-hx5tt   1/1     Running   0          21s   10.244.5.15   mycluster-worker-3   <none>           <none>
153 | nginx-f89759699-l9v5p   1/1     Running   0          21s   10.244.1.10   mycluster-worker-0   <none>           <none>
154 | nginx-f89759699-pdnhb   1/1     Running   0          21s   10.244.12.4   mycluster-worker-4   <none>           <none>
155 | nginx-f89759699-qf46b   1/1     Running   0          21s   10.244.2.16   mycluster-worker-1   <none>           <none>
156 | nginx-f89759699-vbnx5   1/1     Running   0          21s   10.244.3.14   mycluster-worker-2   <none>           <none>
157 | nginx-f89759699-whgt7   1/1     Running   0          21s   10.244.13.4   mycluster-worker-8   <none>           <none>
158 | nginx-f89759699-zblsb   1/1     Running   0          21s   10.244.14.4   mycluster-worker-7   <none>           <none>
159 | ```
160 |
161 | ### Cluster Network
162 |
163 | With Kindbox, you have full control over the container network used by the
164 | cluster.
165 |
166 | For example, you can deploy the cluster on a Docker network that you create:
167 |
   | ```console
168 | $ docker network create mynet
169 | $ kindbox create --num-workers=9 --net mynet mycluster
   | ```
170 |
171 | Normally each cluster would be on a dedicated network for extra isolation, but
172 | it's up to you to decide. If you don't choose a network, Kindbox automatically
173 | creates one for the cluster (named `<cluster-name>-net`).
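   |
   | To check which network a given cluster ended up on, plain Docker commands work
   | fine (using the cluster from the example above):
   |
   | ```console
   | $ docker network ls | grep mycluster
   | $ docker network inspect mycluster-net
   | ```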
174 |
175 | ### Cluster Resizing
176 |
177 | Kindbox also allows you to easily resize the cluster (i.e., add or remove worker
178 | nodes).
179 |
180 | Here we resize the cluster we previously created from 9 to 4 worker nodes.
181 |
182 | ```console
183 | $ kindbox resize --num-workers=4 mycluster
184 |
185 | Resizing the K8s cluster (current = 9, desired = 4) ...
186 | - Destroying node mycluster-worker-4
187 | - Destroying node mycluster-worker-5
188 | - Destroying node mycluster-worker-6
189 | - Destroying node mycluster-worker-7
190 | - Destroying node mycluster-worker-8
191 | Done (5 nodes removed)
192 | ```
193 |
194 | Then verify K8s no longer sees the removed nodes:
195 |
196 | ```console
197 | $ kubectl get nodes
198 | NAME                 STATUS   ROLES    AGE   VERSION
199 | mycluster-master     Ready    master   32m   v1.18.3
200 | mycluster-worker-0   Ready    <none>   31m   v1.18.3
201 | mycluster-worker-1   Ready    <none>   31m   v1.18.3
202 | mycluster-worker-2   Ready    <none>   31m   v1.18.3
203 | mycluster-worker-3   Ready    <none>   31m   v1.18.3
205 |
206 | You can also verify that K8s has rescheduled the pods onto the remaining nodes:
207 |
208 | ```console
209 | $ kubectl get pods -o wide
210 | NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE                 NOMINATED NODE   READINESS GATES
211 | nginx-f89759699-dgxq8   1/1     Running   0          10m   10.244.2.15   mycluster-worker-1   <none>           <none>
212 | nginx-f89759699-hx5tt   1/1     Running   0          10m   10.244.5.15   mycluster-worker-3   <none>           <none>
213 | nginx-f89759699-l6l7b   1/1     Running   0          28s   10.244.5.16   mycluster-worker-3   <none>           <none>
214 | nginx-f89759699-l9v5p   1/1     Running   0          10m   10.244.1.10   mycluster-worker-0   <none>           <none>
215 | nginx-f89759699-nbd2l   1/1     Running   0          28s   10.244.2.17   mycluster-worker-1   <none>           <none>
216 | nginx-f89759699-qf46b   1/1     Running   0          10m   10.244.2.16   mycluster-worker-1   <none>           <none>
217 | nginx-f89759699-rfklb   1/1     Running   0          28s   10.244.1.11   mycluster-worker-0   <none>           <none>
218 | nginx-f89759699-tr9tr   1/1     Running   0          28s   10.244.1.12   mycluster-worker-0   <none>           <none>
219 | nginx-f89759699-vbnx5   1/1     Running   0          10m   10.244.3.14   mycluster-worker-2   <none>           <none>
220 | nginx-f89759699-xvx52   1/1     Running   0          28s   10.244.3.15   mycluster-worker-2   <none>           <none>
222 |
223 | When resizing the cluster upwards, Kindbox allows you to choose the container
224 | image for newly added K8s nodes:
225 |
226 | ```console
227 | $ kindbox resize --num-workers=5 --image=<node-image> mycluster
228 | ```
229 |
230 | This means you can have a K8s cluster with a mix of different node images. This
231 | is useful if you need some specialized K8s nodes.
232 |
233 | ### Multiple Clusters
234 |
235 | You can easily create multiple K8s clusters on the host by repeating the
236 | `kindbox create` command (step (1) above).
237 |
238 | And you can use `kubectl config use-context` to point to the cluster you wish to
239 | manage (see step (2) above).
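   |
   | For example, to switch to a cluster named `cluster2`:
   |
   | ```console
   | $ kubectl config use-context kubernetes-admin@cluster2
   | ```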
240 |
241 | On my laptop (4 CPUs & 8GB RAM), I am able to create three small clusters without
242 | a problem:
243 |
244 | ```console
245 | $ kindbox list -l
246 | NAME        WORKERS   NET             IMAGE                       K8S VERSION
247 | cluster3    5         cluster3-net    nestybox/k8s-node:v1.18.2   v1.18.2
248 | cluster2    5         cluster2-net    nestybox/k8s-node:v1.18.2   v1.18.2
249 | mycluster   4         mycluster-net   nestybox/k8s-node:v1.18.2   v1.18.2
250 | ```
251 |
252 | With Sysbox, the clusters are well isolated from each other: the K8s nodes are in
253 | containers strongly secured via the Linux user namespace, and each cluster is in
254 | a dedicated Docker network (for traffic isolation).
255 |
256 | ### Cluster Destruction
257 |
258 | To destroy a cluster, simply type:
259 |
260 | ```console
261 | $ kindbox destroy mycluster
262 | Destroying K8s cluster "mycluster" ...
263 | - Destroying node mycluster-worker-0
264 | - Destroying node mycluster-worker-1
265 | - Destroying node mycluster-worker-2
266 | - Destroying node mycluster-worker-3
267 | - Destroying node mycluster-master
268 |
269 | Cluster destroyed. Remove stale entry from $KUBECONFIG env-var by doing ...
270 |
271 | export KUBECONFIG=`echo ${KUBECONFIG} | sed "s|:${HOME}/.kube/mycluster-config||"`
272 | ```
273 |
274 | To see what else you can do with Kindbox, type `kindbox help`.
275 |
276 | And remember, it should be fairly easy to add functionality to Kindbox, as it's
277 | just a bash wrapper around Docker commands that manage the cluster.
278 |
279 | If you would like Nestybox to add more functionality, please file an
280 | [issue](docs/issue-guidelines.md) in the Kindbox GitHub repo, or [contact us](#support).
281 |
282 | ## Support
283 |
284 | Reach us at our [slack channel][slack] or at `contact@nestybox.com` for any questions.
285 | See our [contact info](#contact) below for more options.
286 |
287 | ## About Nestybox
288 |
289 | [Nestybox](https://www.nestybox.com) enhances the power of Linux containers.
290 |
291 | We are developing software that enables containers to run **any type of
292 | workload** (not just micro-services), and do so easily and securely.
293 |
294 | Our mission is to provide users with a fast, efficient, easy-to-use, and secure
295 | alternative to virtual machines for deploying virtual hosts on Linux.
296 |
297 | ## Contact
298 |
299 | We are happy to help. You can reach us at:
300 |
301 | Email: `contact@nestybox.com`
302 |
303 | Slack: [Nestybox Slack Workspace][slack]
304 |
305 | Phone: 1-800-600-6788
306 |
307 | We are available Monday-Friday, 9am-5pm Pacific Time.
308 |
309 | ## Thank You
310 |
311 | We thank you **very much** for using Kindbox. We hope you find it useful.
312 |
313 | Your trust in us is very much appreciated.
314 |
315 | \-- _The Nestybox Team_
316 |
317 | [slack]: https://join.slack.com/t/nestybox-support/shared_invite/enQtOTA0NDQwMTkzMjg2LTAxNGJjYTU2ZmJkYTZjNDMwNmM4Y2YxNzZiZGJlZDM4OTc1NGUzZDFiNTM4NzM1ZTA2NDE3NzQ1ODg1YzhmNDQ
318 |
--------------------------------------------------------------------------------
/docs/figures/kindbox.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nestybox/kindbox/b9de9184c9d2b03671cfe0e0d70af9178e7dbcfa/docs/figures/kindbox.png
--------------------------------------------------------------------------------
/docs/issue-guidelines.md:
--------------------------------------------------------------------------------
1 | # Guidelines for Filing Issues
2 |
3 | Issues can be filed [here](https://github.com/nestybox/kindbox/issues)
4 |
5 | Please follow these guidelines when filing issues for Kindbox.
6 |
7 | 1) Create an issue with one of the following labels:
8 |
9 | - `Bug`: for functional defects, performance issues, etc.
10 |
11 | - `Documentation`: documentation errors or improvements
12 |
13 | - `Enhancement`: Feature requests
14 |
15 | - `Question`: for questions related to usage, design, etc.
16 |
17 | 2) Add a label corresponding to the Kindbox release (e.g. `v0.1.0`)
18 |
19 | 3) Include information about the host's Linux (e.g., `lsb_release`, `uname -a`).
20 |
21 | 4) Describe the issue as clearly and completely as possible.
22 |
23 | 5) Tell us how to best reproduce it.
24 |
25 | We appreciate it when our users report issues. Nestybox will try to
26 | address them ASAP. We will mark them as `fixed`, `invalid`,
27 | `duplicate`, or `wont-fix`.
28 |
29 | Thanks for helping us improve Kindbox!
30 |
--------------------------------------------------------------------------------
/scr/kindbox:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Kindbox: A simple program for deploying 'Kubernetes-in-Docker' using Docker +
4 | # Nestybox's "Sysbox" container runtime.
5 | #
6 | # This program is meant as a reference example of how to deploy a K8s cluster
7 | # inside Docker containers, using Docker + Sysbox. Feel free to use it and
8 | # modify it to your needs.
9 | #
10 | # Kindbox has some of the same functionality as the K8s.io KinD tool, except
11 | # that by virtue of using Docker + Sysbox, the Docker images and commands
12 | # used by this script are **much simpler**, enabling you to easily and fully
13 | # control the cluster configuration and deployment (i.e., the Sysbox runtime
14 | # absorbs the complexity).
15 | #
16 | # Moreover, the resulting K8s cluster boots up pretty quickly (< 2 minutes for a
17 | # 10-node cluster), uses minimal resources (only 1 GB overhead for a 10-node
18 | # cluster!), and does not require privileged containers (i.e., it's much more
19 | # secure).
20 | #
21 | # NOTE: you must install the Sysbox container runtime in your host before using
22 | # this script. Download instructions are at www.nestybox.com.
23 | #
24 | # Enjoy,
25 | # - The Nestybox Team
26 |
27 | set +e
28 |
29 | VERSION=v0.1
30 |
31 | CLUSTER_NAME=k8s-cluster
32 | CLUSTER_CNI=flannel
33 | NUM_WORKERS=1
34 | IMAGE=ghcr.io/nestybox/k8s-node:v1.20.2
35 | K8S_VERSION=v1.20.2
36 |
37 | VERBOSE=1
38 | PUBLISH=0
39 | APISERVER_PORT=6443
40 | SUBCMD=""
41 | DESTROY_NET=0
42 | LONG_LIST=0
43 | CLUSTER_INFO=()
44 | RESIZE_IMAGE=0
45 |
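   | # Runs the given command up to 'attempts' times, sleeping 'delay' seconds between
   | # tries; returns success as soon as the command succeeds.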
46 | function retry() {
47 | local attempts=$1
48 | shift
49 | local delay=$1
50 | shift
51 | local i
52 |
53 | for ((i = 0; i < attempts; i++)); do
54 | "$@"
55 | if [[ $? -eq 0 ]]; then
56 | return 0
57 | fi
58 | sleep $delay
59 | done
60 |
61 | echo "Command \"$@\" failed $attempts times. Output: $status"
62 | false
63 | }
64 |
65 | function wait_for_inner_systemd {
66 | local node=$1
67 | retry 10 1 sh -c "docker exec ${node} sh -c 'systemctl is-system-running --wait 2>&1 | grep -q running'"
68 | }
69 |
70 | function k8s_node_ready() {
71 | local k8s_master=$1
72 | local node=$2
73 | local i
74 |
75 | docker exec "$k8s_master" sh -c "kubectl get node ${node} | grep ${node} | awk '{print \$2}' | grep -qw Ready" 2>&1
76 | }
77 |
78 | function wait_for_node_ready {
79 | local node=$1
80 | local k8s_master=${CLUSTER_NAME}-master
81 |
82 | retry 40 2 k8s_node_ready ${k8s_master} ${node}
83 | }
84 |
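   | # Waits up to 'delay' seconds for all the cluster's worker nodes to reach the
   | # "Ready" state, as reported by the K8s master.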
85 | function wait_all_nodes_ready() {
86 | local delay=$1
87 |
88 | local timestamp=$(date +%s)
89 | local timeout=$(( $timestamp + $delay ))
90 | local all_ok
91 |
92 | while [ $timestamp -lt $timeout ]; do
93 | all_ok="true"
94 |
95 | for i in $(seq 0 $(( $NUM_WORKERS - 1 ))); do
96 | local master=${CLUSTER_NAME}-master
97 | local worker=${CLUSTER_NAME}-worker-${i}
98 |
99 | k8s_node_ready $master $worker
100 |
101 | if [[ $? -ne 0 ]]; then
102 | all_ok="false"
103 | break
104 | fi
105 | done
106 |
107 | if [[ "$all_ok" == "true" ]]; then
108 | break
109 | fi
110 |
111 | sleep 2
112 | timestamp=$(date +%s)
113 | done
114 |
115 | if [[ "$all_ok" != "true" ]]; then
116 | return 1
117 | else
118 | return 0
119 | fi
120 | }
121 |
122 | function kubectl_config() {
123 | local node=$1
124 | docker exec ${node} sh -c "mkdir -p /root/.kube && \
125 | cp -i /etc/kubernetes/admin.conf /root/.kube/config && \
126 | chown $(id -u):$(id -g) /root/.kube/config"
127 |
128 | # Copy k8s config to the host to allow kubectl interaction.
129 | if [ ! -d ${HOME}/.kube ]; then
130 | docker cp ${node}:/root/.kube/. ${HOME}/.kube
131 | mv ${HOME}/.kube/config ${HOME}/.kube/${CLUSTER_NAME}-config
132 | else
133 | docker cp ${node}:/root/.kube/config ${HOME}/.kube/${CLUSTER_NAME}-config
134 | fi
135 |
136 | # As of today, kubeadm does not support 'multicluster' scenarios, so it generates
137 | # identical/overlapping k8s configurations for every new cluster. Here we are
138 | # simply adjusting the generated kubeconfig file to uniquely identify each cluster,
139 | # thereby allowing us to support multi-cluster setups.
140 | sed -i -e "s/^ name: kubernetes$/ name: ${CLUSTER_NAME}/" \
141 | -e "s/^ cluster: kubernetes$/ cluster: ${CLUSTER_NAME}/" \
142 | -e "s/^ user: kubernetes-admin$/ user: kubernetes-admin-${CLUSTER_NAME}/" \
143 | -e "s/^ name: kubernetes-admin@kubernetes/ name: kubernetes-admin@${CLUSTER_NAME}/" \
144 | -e "s/^current-context: kubernetes-admin@kubernetes/current-context: kubernetes-admin@${CLUSTER_NAME}/" \
145 | -e "s/^- name: kubernetes-admin/- name: kubernetes-admin-${CLUSTER_NAME}/" \
146 | -e "/^- name: kubernetes-admin/a\ username: kubernetes-admin" ${HOME}/.kube/${CLUSTER_NAME}-config
147 | if [[ $? -ne 0 ]]; then
148 | ERR="failed to edit kubeconfig file for cluster ${CLUSTER_NAME}"
149 | return 1
150 | fi
151 | }
152 |
153 | function flannel_config() {
154 | local node=$1
155 | local output
156 |
157 | output=$(sh -c "docker exec ${node} sh -c \"kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml\"" 2>&1)
158 | if [[ $? -ne 0 ]]; then
159 | echo "$output"
160 | return 1
161 | fi
162 | }
163 |
164 | function flannel_unconfig() {
165 | local node=$1
166 | local output
167 |
168 | output=$(sh -c "docker exec ${node} sh -c \"kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml\"" 2>&1)
169 | if [[ $? -ne 0 ]]; then
170 | echo "$output"
171 | return 1
172 | fi
173 | }
174 |
175 | function weave_config() {
176 | local node=$1
177 | local output
178 |
179 | # Fetch and apply Weave's manifest. Make sure the CIDR block matches the one
180 | # utilized by the cluster for the pod-network range.
181 | output=$(docker exec ${node} sh -c "kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=\$(kubectl version | base64 | tr -d '\n')\&env.IPALLOC_RANGE=10.244.0.0/16" 2>&1)
182 | if [[ $? -ne 0 ]]; then
183 | echo "$output"
184 | return 1
185 | fi
186 | }
187 |
188 | function weave_unconfig() {
189 | local node=$1
190 | local output
191 |
192 | output=$(sh -c "docker exec ${node} sh -c \"kubectl delete -f https://cloud.weave.works/k8s/net?k8s-version=${K8S_VERSION}\"" 2>&1)
193 | if [[ $? -ne 0 ]]; then
194 | echo "$output"
195 | return 1
196 | fi
197 | }
198 |
199 | function calico_config() {
200 | local node=$1
201 | local output
202 |
203 | # Install the Tigera Calico operator.
204 | output=$(sh -c "docker exec ${node} sh -c \"kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml\"" 2>&1)
205 | if [[ $? -ne 0 ]]; then
206 | echo "$output"
207 | return 1
208 | fi
209 |
210 | # Fetch Calico's CRD manifest and adjust its CIDR block to match the one
211 | # utilized by the cluster for the pod-network range.
212 | output=$(sh -c "docker exec ${node} sh -c \"curl https://docs.projectcalico.org/manifests/custom-resources.yaml --output calico-crd.yaml; sed -i 's/cidr: 192.168.0.0\/16/cidr: 10.244.0.0\/16/' calico-crd.yaml\"" 2>&1)
213 | if [[ $? -ne 0 ]]; then
214 | echo "$output"
215 | return 1
216 | fi
217 |
218 | # Deploy Calico's CRD.
219 | output=$(sh -c "docker exec ${node} sh -c \"kubectl create -f calico-crd.yaml\"" 2>&1)
220 | if [[ $? -ne 0 ]]; then
221 | echo "$output"
222 | return 1
223 | fi
224 | }
225 |
226 | function calico_unconfig() {
227 | local node=$1
228 | local output
229 |
230 | # Delete Calico's CRD.
231 | output=$(sh -c "docker exec ${node} sh -c \"kubectl delete -f calico-crd.yaml\"" 2>&1)
232 | if [[ $? -ne 0 ]]; then
233 | echo "$output"
234 | return 1
235 | fi
236 |
237 | # Delete the Tigera Calico operator.
238 | output=$(sh -c "docker exec ${node} sh -c \"kubectl delete -f https://docs.projectcalico.org/manifests/tigera-operator.yaml\"" 2>&1)
239 | if [[ $? -ne 0 ]]; then
240 | echo "$output"
241 | return 1
242 | fi
243 | }
244 |
245 | function docker_pull_image() {
246 |
247 | # If the image is present, no action
248 | output=$(sh -c "docker image inspect --format '{{.Id}}' ${IMAGE}" 2>&1)
249 | if [[ $? -eq 0 ]]; then
250 | return 0
251 | fi
252 |
253 | printf " - Pulling node image ${IMAGE} ... (may take a few seconds)\n"
254 |
255 | output=$(sh -c "docker pull ${IMAGE}" 2>&1)
256 | if [[ $? -ne 0 ]]; then
257 | ERR="docker pull ${IMAGE}: ${output}"
258 | return 1
259 | fi
260 | }
261 |
262 | function k8s_master_create() {
263 | local k8s_master=${CLUSTER_NAME}-master
264 | local output
265 |
266 | [[ $VERBOSE ]] && printf " - Creating node ${k8s_master}\n"
267 |
268 | if [[ $PUBLISH -eq 1 ]]; then
269 | output=$(sh -c "docker run --runtime=sysbox-runc -d --rm --network=$NET --name=${k8s_master} --hostname=${k8s_master} -p $HOST_PORT:$APISERVER_PORT $IMAGE" 2>&1)
270 | else
271 | output=$(sh -c "docker run --runtime=sysbox-runc -d --rm --network=$NET --name=${k8s_master} --hostname=${k8s_master} $IMAGE" 2>&1)
272 | fi
273 |
274 | if [[ $? -ne 0 ]]; then
275 | ERR="failed to deploy node $k8s_master: $output"
276 | return 1
277 | fi
278 | }
279 |
280 | function k8s_master_destroy() {
281 | local k8s_master=${CLUSTER_NAME}-master
282 | local output
283 |
284 | [[ $VERBOSE ]] && printf " - Destroying node ${k8s_master}\n"
285 |
286 | output=$(sh -c "docker stop -t0 ${k8s_master}" 2>&1)
287 | if [[ $? -ne 0 ]]; then
288 | ERR="failed to stop ${k8s_master}"
289 | return 1
290 | fi
291 | }
292 |
293 | # Initializes the K8s master node
294 | function k8s_master_init() {
295 | local node=${CLUSTER_NAME}-master
296 | local output
297 |
298 | output=$(wait_for_inner_systemd ${node})
299 | if [[ $? -ne 0 ]]; then
300 | ERR="systemd init failed for ${node}: ${output}"
301 | return 1
302 | fi
303 |
304 | [[ $VERBOSE ]] && printf " - Running kubeadm init on $node ... (may take up to a minute)\n"
305 |
306 | output=$(sh -c "docker exec ${node} sh -c \"kubeadm init --kubernetes-version=${K8S_VERSION} --pod-network-cidr=10.244.0.0/16 2>&1\"" 2>&1)
307 | if [[ $? -ne 0 ]]; then
308 | ERR="kubadm init failed on ${node}: ${output}"
309 | return 1
310 | fi
311 |
312 | echo "$output" | grep -q "Your Kubernetes control-plane has initialized successfully"
313 | if [[ $? -ne 0 ]]; then
314 | ERR="kubadm init failed on ${node}: ${output}"
315 | return 1
316 | fi
317 |
318 | [[ $VERBOSE ]] && printf " - Setting up kubectl on $node ... \n"
319 |
320 | output=$(kubectl_config ${node})
321 | if [[ $? -ne 0 ]]; then
322 | ERR="kubectl config failed on ${node}: ${output}"
323 | return 1
324 | fi
325 |
326 | [[ $VERBOSE ]] && printf " - Initializing networking (${CLUSTER_CNI} cni) on $node ...\n"
327 |
328 | if [[ ${CLUSTER_CNI} == "flannel" ]]; then
329 | output=$(flannel_config ${node})
330 | elif [[ ${CLUSTER_CNI} == "weave" ]]; then
331 | output=$(weave_config ${node})
332 | elif [[ ${CLUSTER_CNI} == "calico" ]]; then
333 | output=$(calico_config ${node})
334 | fi
335 |
336 | if [[ $? -ne 0 ]]; then
337 | ERR="cni init failed on ${node}: ${output}"
338 | return 1
339 | fi
340 |
341 | [[ $VERBOSE ]] && printf " - Waiting for $node to be ready ...\n"
342 |
343 | output=$(wait_for_node_ready ${node})
344 | if [[ $? -ne 0 ]]; then
345 | ERR="${node} did not reach ready state: ${output}"
346 | return 1
347 | fi
348 | }
349 |
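   | # Prints the Docker network(s) that the cluster's master node is attached to.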
350 | function k8s_master_get_network() {
351 | local k8s_master=${CLUSTER_NAME}-master
352 |
353 | output=$(sh -c "docker inspect --format='{{range \$k,\$v := .NetworkSettings.Networks}} {{\$k}} {{end}}' $k8s_master" 2>&1)
354 | if [[ $? -ne 0 ]]; then
355 | ERR="failed to get network for cluster ${CLUSTER_NAME}: ${output}"
356 | return 1
357 | fi
358 |
359 | echo $output
360 | }
361 |
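   | # Prints the repo tag(s) of the Docker image used by the cluster's master node.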
362 | function k8s_master_get_image() {
363 | local k8s_master=${CLUSTER_NAME}-master
364 |
365 | output=$(sh -c "docker inspect --format='{{json .Image}}' $k8s_master | tr -d '\"'" 2>&1)
366 | if [[ $? -ne 0 ]]; then
367 | ERR="failed to get image for ${k8s_master}: ${output}"
368 | return 1
369 | fi
370 |
371 | local image_sha=$output
372 |
373 | output=$(sh -c "docker image inspect --format='{{range \$k := .RepoTags}} {{\$k}} {{end}}' $image_sha" 2>&1)
374 | if [[ $? -ne 0 ]]; then
375 | ERR="failed to inspect image for ${k8s_master} (${image_sha}): ${output}"
376 | return 1
377 | fi
378 |
379 | echo $output
380 | }
381 |
382 | function k8s_workers_create() {
383 | local start=$1
384 | local num=$2
385 | local node
386 |
387 | local end=$(( $start + $num ))
388 |
389 | for i in $(seq $start $(( $end - 1 ))); do
390 | node=${CLUSTER_NAME}-worker-${i}
391 |
392 | [[ $VERBOSE ]] && printf " - Creating node $node\n"
393 |
394 | output=$(sh -c "docker run --runtime=sysbox-runc -d --rm --network=$NET --name=${node} --hostname=${node} $IMAGE" 2>&1)
395 | if [[ $? -ne 0 ]]; then
396 | k8s_workers_destroy $start $(($i - $start))
397 | ERR="failed to deploy node $node: $output"
398 | return 1
399 | fi
400 | done
401 | }
402 |
403 | function k8s_workers_destroy() {
404 | local start=$1
405 | local num=$2
406 | local node
407 | local failed=0
408 |
409 | local k8s_master=${CLUSTER_NAME}-master
410 | local end=$(( $start + $num ))
411 |
412 | for i in $(seq $start $(( $end - 1 ))); do
413 | node=${CLUSTER_NAME}-worker-${i}
414 |
415 | [[ $VERBOSE ]] && printf " - Destroying node $node\n"
416 |
417 | output=$(sh -c "docker stop -t0 ${node}" 2>&1)
418 | if [[ $? -ne 0 ]]; then
419 | ERR="failed to stop ${node}"
420 | failed=1
421 | fi
422 | done
423 |
424 | if [[ $failed == 1 ]]; then
425 | return 1
426 | fi
427 | }
428 |
429 | # Initializes the K8s worker nodes and joins them to the cluster
430 | function k8s_workers_init() {
431 | local start=$1
432 | local num=$2
433 |
434 | local i
435 | local node
436 | local join_cmd
437 | local output
438 |
439 | local k8s_master=${CLUSTER_NAME}-master
440 | local end=$(( $start + $num ))
441 |
442 | # Ensure systemd is ready in all workers
443 | for i in $(seq $start $(( $end - 1))); do
444 | node=${CLUSTER_NAME}-worker-${i}
445 | output=$(wait_for_inner_systemd ${node})
446 | if [[ $? -ne 0 ]]; then
447 | ERR="systemd init failed for ${node}: ${output}"
448 | return 1
449 | fi
450 | done
451 |
452 | # Get the cluster "join token" from the K8s master
453 | output=$(sh -c "docker exec ${k8s_master} sh -c \"kubeadm token create --print-join-command 2> /dev/null\"" 2>&1)
454 | if [[ $? -ne 0 || $output == "" ]]; then
455 | ERR="failed to get cluster token from ${k8s_master}: ${output}"
456 | return 1
457 | fi
458 |
459 | join_cmd=$output
460 |
461 | [[ $VERBOSE ]] && printf " - Joining the worker nodes to the cluster ...\n"
462 |
463 | for i in $(seq $start $(( $end - 1))); do
464 | node=${CLUSTER_NAME}-worker-${i}
465 | output=$(sh -c "docker exec -d ${node} sh -c \"${join_cmd}\"" 2>&1)
466 | if [[ $? -ne 0 ]]; then
467 | ERR="node ${node} failed to join the cluster: ${output}"
468 | return 1
469 | fi
470 | done
471 |
472 | # Wait for workers to join the cluster
473 |
474 | if [[ $WAIT_READY ]]; then
475 | [[ $VERBOSE ]] && printf " - Waiting for the worker nodes to be ready ... (may take up to a minute)\n"
476 |
477 | local join_timeout=$(( $num * 60 ))
478 | output=$(wait_all_nodes_ready $join_timeout)
479 | if [[ $? -ne 0 ]]; then
480 | ERR="cluster nodes did not reach ready state: ${output}"
481 | return 1
482 | fi
483 | fi
484 | }
485 |
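   | # Removes the given worker nodes from the K8s cluster (via "kubectl delete node"
   | # on the master); it does not stop the node containers (k8s_workers_destroy does that).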
486 | function k8s_workers_delete() {
487 | local start=$1
488 | local num=$2
489 | local node
490 | local failed=0
491 |
492 | local k8s_master=${CLUSTER_NAME}-master
493 | local end=$(( $start + $num ))
494 |
495 | for i in $(seq $start $(( $end - 1 ))); do
496 | node=${CLUSTER_NAME}-worker-${i}
497 | output=$(sh -c "docker exec $k8s_master kubectl delete node $node" 2>&1)
498 | if [[ $? -ne 0 ]]; then
499 | ERR="failed to delete ${node}"
500 | failed=1
501 | fi
502 | done
503 |
504 | if [[ $failed == 1 ]]; then
505 | return 1
506 | fi
507 | }
508 |
509 | # Creates the containers that act as the K8s cluster nodes
510 | function cluster_create_nodes() {
511 |
512 | printf "\e[92mCreating the K8s cluster nodes ... \e[0m\n"
513 |
514 | docker_pull_image
515 | if [[ $? -ne 0 ]]; then
516 | return 1
517 | fi
518 |
519 | k8s_master_create
520 | if [[ $? -ne 0 ]]; then
521 | return 1
522 | fi
523 |
524 | k8s_workers_create 0 $NUM_WORKERS
525 | if [[ $? -ne 0 ]]; then
526 | return 1
527 | fi
528 |
529 | printf "\n"
530 |
531 | return 0
532 | }
533 |
534 | # Destroys the containers that act as the K8s cluster nodes
535 | function cluster_destroy_nodes() {
536 |
537 | k8s_workers_destroy 0 $NUM_WORKERS
538 | if [[ $? -ne 0 ]]; then
539 | return 1
540 | fi
541 |
542 | k8s_master_destroy
543 | if [[ $? -ne 0 ]]; then
544 | return 1
545 | fi
546 |
547 | return 0
548 | }
549 |
550 | # Initializes the containers that act as the K8s cluster nodes
551 | function cluster_init_nodes() {
552 |
553 | printf "\e[92mInitializing the K8s master node ... \e[0m\n"
554 | k8s_master_init
555 | if [[ $? -ne 0 ]]; then
556 | return 1
557 | fi
558 |
559 | printf "\n"
560 |
561 | if [[ $NUM_WORKERS -gt 0 ]]; then
562 | printf "\e[92mInitializing the K8s worker nodes ... \e[0m\n"
563 | k8s_workers_init 0 $NUM_WORKERS
564 | if [[ $? -ne 0 ]]; then
565 | return 1
566 | fi
567 | fi
568 |
569 | printf "\n"
570 |
571 | return 0
572 | }
573 |
574 | function cluster_get_nodes() {
575 | local nodes=$(docker container ls --filter "name=${CLUSTER_NAME}-" --format='{{json .Names}}')
576 | echo $nodes
577 | }
578 |
579 | function cluster_get_version() {
580 | local k8s_master=${CLUSTER_NAME}-master
581 |
582 | output=$(sh -c "docker exec $k8s_master kubectl version --short | grep Server | awk '{print \$3}'" 2>&1)
583 | if [[ $? -ne 0 ]]; then
584 | ERR="failed to execute 'kubectl version' in $k8s_master: ${output}"
585 | return 1
586 | fi
587 |
588 | echo $output
589 | }
590 |
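   | # Collects the given cluster's name, worker count, network, node image, and K8s
   | # version into the CLUSTER_INFO array (used by "kindbox list -l").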
591 | function cluster_get_info() {
592 | CLUSTER_NAME=$1
593 |
594 | output=$(k8s_master_get_network)
595 | if [[ $? -ne 0 ]]; then
596 | return 1
597 | fi
598 | local net=$output
599 |
600 | output=$(k8s_master_get_image)
601 | if [[ $? -ne 0 ]]; then
602 | return 1
603 | fi
604 | local image=$output
605 |
606 | local nodes=$(cluster_get_nodes)
607 | local nodes_array=($nodes)
608 | local num_nodes=${#nodes_array[@]}
609 | local num_workers
610 | if [[ $num_nodes -gt 1 ]]; then
611 | num_workers=$(( $num_nodes - 1 ))
612 | else
613 | num_workers=0
614 | fi
615 |
616 | local k8s_version=$(cluster_get_version)
617 | if [[ $? -ne 0 ]]; then
618 | return 1
619 | fi
620 |
621 | CLUSTER_INFO=(${CLUSTER_NAME} $num_workers $net $image $k8s_version)
622 | }
623 |
624 | function show_kubectl_usage() {
625 |
626 | local k8s_master=${CLUSTER_NAME}-master
627 |
628 | printf "\n"
629 | printf "Use kubectl to control the cluster.\n"
630 | printf "\n"
631 | printf " 1) Install kubectl on your host\n"
632 | printf " 2) export KUBECONFIG=\${KUBECONFIG}:\${HOME}/.kube/${CLUSTER_NAME}-config\n"
633 | printf " 3) kubectl config use-context kubernetes-admin@${CLUSTER_NAME}\n"
634 | printf " 4) kubectl get nodes\n"
635 | printf "\n"
636 | printf "Alternatively, use \"docker exec\" to control the cluster:\n"
637 | printf "\n"
638 | printf " $ docker exec ${k8s_master} kubectl get nodes\n"
639 | printf "\n"
640 | }
641 |
642 | # Parses docker config file to obtain explicitly configured 'mtu' value. If not
643 | # found, returns docker's default value (1500 bytes).
644 | function docker_iface_mtu() {
645 |
646 | local dockerCfgDir="/etc/docker"
647 | local dockerCfgFile="${dockerCfgDir}/daemon.json"
648 | local default_mtu=1500
649 |
650 | # Parsing the Docker config file requires root privilege; if we don't have it,
651 | # skip.
652 | user=$(whoami)
653 | if [[ "$user" != "root" ]]; then
654 | echo $default_mtu
655 | return
656 | fi
657 |
658 | if jq --exit-status 'has("mtu")' ${dockerCfgFile} >/dev/null; then
659 | local mtu=$(jq --exit-status '."mtu"' ${dockerCfgFile} 2>&1)
660 |
661 | if [ ! -z "$mtu" ] && [ "$mtu" -lt 1500 ]; then
662 | echo $mtu
663 | else
664 | echo $default_mtu
665 | fi
666 | else
667 | echo $default_mtu
668 | fi
669 | }
670 |
671 | function create_cluster() {
672 | local result
673 |
674 | printf "\e[1mCreating a K8s cluster with Docker + Sysbox ...\e[0m\n\n"
675 |
676 | if [[ $VERBOSE ]]; then
677 | local FORMAT="%-25s: %s\n"
678 |
679 | printf "$FORMAT" "Cluster name" "${CLUSTER_NAME}"
680 | printf "$FORMAT" "Worker nodes" "${NUM_WORKERS}"
681 | printf "$FORMAT" "CNI" "${CLUSTER_CNI}"
682 | printf "$FORMAT" "Docker network" "${NET}"
683 | printf "$FORMAT" "Node image" "${IMAGE}"
684 | printf "$FORMAT" "K8s version" "${K8S_VERSION}"
685 |
686 | local publish
687 | [[ $PUBLISH -eq 1 ]] && publish="true (port $HOST_PORT)" || publish="false"
688 | printf "$FORMAT" "Publish apiserver port" "${publish}"
689 | printf "\n"
690 | fi
691 |
692 | local iface_mtu=$(docker_iface_mtu)
693 | output=$(sh -c "docker network create -o \"com.docker.network.driver.mtu\"=\"${iface_mtu}\" ${NET}" 2>&1)
694 |
695 | cluster_create_nodes
696 | if [[ $? -ne 0 ]]; then
697 | printf "ERROR: failed to create nodes: $ERR\n"
698 | exit 1
699 | fi
700 |
701 | cluster_init_nodes
702 | if [[ $? -ne 0 ]]; then
703 | printf "ERROR: failed to initialize nodes: $ERR\n"
704 | [[ $CLUSTER_RETAIN ]] || cluster_destroy_nodes
705 | exit 1
706 | fi
707 |
708 | printf "\e[1mCluster created successfully!\e[0m\n"
709 |
710 | show_kubectl_usage
711 | }
712 |
713 | function resize_cluster() {
714 | local start
715 | local num
716 |
717 | # Get current number of nodes in cluster
718 | local nodes=$(docker container ls --filter "name=${CLUSTER_NAME}-" --format='{{json .Names}}')
719 | local nodes_array=($nodes)
720 | local num_nodes=${#nodes_array[@]}
721 | local curr_workers
722 |
723 | if [ $num_nodes -eq 0 ]; then
724 | printf "ERROR: no such cluster found.\n"
725 | exit 1
726 | fi
727 |
728 | if [ $num_nodes -gt 1 ]; then
729 | curr_workers=$(( $num_nodes - 1 ))
730 | else
731 | curr_workers=0
732 | fi
733 |
734 | printf "\e[1mResizing the K8s cluster (current = ${curr_workers}, desired = ${NUM_WORKERS}) ... \e[0m\n"
735 |
736 | if [ $curr_workers -eq $NUM_WORKERS ]; then
737 | printf "Done (no action required).\n"
738 | exit 0
739 | fi
740 |
741 | #
742 | # Downsize
743 | #
744 | if [ $curr_workers -gt $NUM_WORKERS ]; then
745 |
746 | num=$(( $curr_workers - $NUM_WORKERS ))
747 | start=$(( $curr_workers - $num))
748 |
749 | k8s_workers_delete $start $num
750 | if [[ $? -ne 0 ]]; then
751 | printf "ERROR: failed to resize cluster: $ERR\n"
752 | exit 1
753 | fi
754 |
755 | k8s_workers_destroy $start $num
756 | if [[ $? -ne 0 ]]; then
757 | printf "ERROR: failed to resize cluster: $ERR\n"
758 | exit 1
759 | fi
760 |
761 | printf "Done ($num nodes removed).\n"
762 | exit 0
763 | fi
764 |
765 | #
766 | # Upsize
767 | #
768 |
769 | output=$(k8s_master_get_network)
770 | if [[ $? -ne 0 ]]; then
771 | printf "ERROR: failed to resize cluster: $ERR\n"
772 | exit 1
773 | fi
774 | NET=$output
775 |
776 | if [[ $RESIZE_IMAGE == 0 ]]; then
777 | output=$(k8s_master_get_image)
778 | if [[ $? -ne 0 ]]; then
779 | printf "ERROR: failed to resize cluster: $ERR\n"
780 | exit 1
781 | fi
782 | IMAGE=$output
783 | fi
784 |
785 | docker_pull_image
786 | if [[ $? -ne 0 ]]; then
787 | printf "ERROR: failed to pull node image: $ERR\n"
788 | exit 1
789 | fi
790 |
791 | start=$curr_workers
792 | num=$(( $NUM_WORKERS - $curr_workers ))
793 |
794 | k8s_workers_create $start $num
795 | if [[ $? -ne 0 ]]; then
796 | printf "ERROR: failed to resize cluster: $ERR\n"
797 | exit 1
798 | fi
799 |
800 | k8s_workers_init $start $num
801 | if [[ $? -ne 0 ]]; then
802 | printf "ERROR: failed to resize cluster: $ERR\n"
803 | k8s_workers_destroy $start $num
804 | exit 1
805 | fi
806 |
807 | printf "Done ($num nodes added).\n"
808 | exit 0
809 | }
810 |
811 | function destroy_cluster() {
812 | local nodes=$(docker container ls --filter "name=${CLUSTER_NAME}-" --format='{{json .Names}}')
813 | local nodes_array=($nodes)
814 | local num_nodes=${#nodes_array[@]}
815 |
816 | if [ $num_nodes -ge 1 ]; then
817 | NUM_WORKERS=$(( $num_nodes - 1 ))
818 | else
819 | NUM_WORKERS=0
820 | fi
821 |
822 | if [[ $num_nodes == 0 ]]; then
823 | printf "ERROR: no such cluster found.\n"
824 | exit 1
825 | fi
826 |
827 | printf "\e[1mDestroying K8s cluster \"${CLUSTER_NAME}\" ...\e[0m\n"
828 |
829 | cluster_destroy_nodes
830 | if [ $? -ne 0 ]; then
831 | printf "ERROR: failed to destroy cluster: $ERR\n"
832 | exit 1
833 | fi
834 |
835 | if [[ $DESTROY_NET == 1 ]]; then
836 |
837 | [[ $VERBOSE ]] && printf " - Destroying network ${NET}\n"
838 |
839 | output=$(sh -c "docker network rm ${NET}" 2>&1)
840 | if [[ $? -ne 0 ]]; then
841 | printf "ERROR: failed to remove network ${NET}: $output\n"
842 | exit 1
843 | fi
844 | fi
845 |
846 | if [ -f ${HOME}/.kube/${CLUSTER_NAME}-config ]; then
847 | rm -rf ${HOME}/.kube/${CLUSTER_NAME}-config
848 | fi
849 |
850 | printf "\n\e[1mCluster destroyed. Remove stale entry from \$KUBECONFIG env-var by doing ...\e[0m\n\n"
851 | printf " export KUBECONFIG=\`echo \${KUBECONFIG} | sed \"s|:\${HOME}/.kube/${CLUSTER_NAME}-config||\"\`\n\n"
852 |
853 | exit 0
854 | }
855 |
856 | function list_clusters() {
857 | local masters=$(sh -c "docker container ls --filter \"name=-master\" --format='{{json .Names}}'" | tr -d '\"')
858 | local clusters=()
859 | local name
860 |
861 | # Derive cluster name from master node name (e.g., "kind-master" -> "kind")
862 | for m in $masters; do
863 | [[ $m =~ ^([a-zA-Z].+)-.* ]]
864 | name=${BASH_REMATCH[1]}
865 | clusters+=($name)
866 | done
867 |
868 | if [[ $LONG_LIST == 1 ]]; then
869 | local FORMAT="%-22s %-15s %-20s %-40s %-8s\n"
870 | printf "$FORMAT" "NAME" "WORKERS" "NET" "IMAGE" "K8S VERSION"
871 |
872 | for cl in ${clusters[@]}; do
873 | cluster_get_info $cl
874 | if [[ $? -ne 0 ]]; then
875 | printf "ERROR: failed to get info for cluster $cl: $ERR\n"
876 | exit 1
877 | fi
878 |
879 | printf "$FORMAT" "${CLUSTER_INFO[0]}" "${CLUSTER_INFO[1]}" "${CLUSTER_INFO[2]}" "${CLUSTER_INFO[3]}" "${CLUSTER_INFO[4]}"
880 | done
881 |
882 | else
883 | for cl in ${clusters[@]}; do
884 | printf "$cl\n"
885 | done
886 | fi
887 | }
888 |
889 | function show_version() {
890 | echo "$0 ${VERSION}"
891 | }
892 |
893 | function show_cmds() {
894 | local FORMAT="\e[92m%-30s\e[0m: %s\n"
895 | printf "For reference, these are some Docker commands used by this program to manage the cluster:\n"
896 | printf "\n"
897 | printf "$FORMAT" "Create a cluster node" "docker run --runtime=sysbox-runc -d --rm --network= --name= --hostname= node-image"
898 | printf "$FORMAT" "Initialize master node" "docker exec sh -c \"kubeadm init --kubernetes-version= --pod-network-cidr=10.244.0.0/16\""
899 | printf "$FORMAT" "Get join token from master" 'join_cmd=$(sh -c "docker exec sh -c \"kubeadm token create --print-join-command 2> /dev/null\"" 2>&1)'
900 | printf "$FORMAT" "Initialize & join worker node" 'docker exec -d sh -c "$join_cmd"'
901 | printf "$FORMAT" "Get kubectl the config" 'docker cp :/etc/kubernetes/admin.conf $HOME/.kube/config'
902 | printf "$FORMAT" "Remove node from cluster" "docker stop -t0 "
903 | printf "\n"
904 |
905 | }
906 |
907 | function show_create() {
908 | printf "\n"
909 | printf "Usage: $0 create [OPTIONS] CLUSTER_NAME\n"
910 | printf "\n"
911 | printf "Creates a K8s cluster using Docker containers as K8s nodes; requires Docker + the Sysbox container runtime.\n"
912 | printf "The cluster is composed of one master node and a configurable number of worker nodes.\n"
913 | printf "\n"
914 | printf "Options:\n"
915 | printf " -h, --help Display usage.\n"
916 | printf " --num-workers= Number of worker nodes (default = 1).\n"
917 | printf " --net= Docker bridge network to connect the cluster to; if it does not exist, it will be created (default = 'CLUSTER_NAME-net').\n"
918 | printf " --cni= Container Network Interface (CNI) to deploy; supported CNIs: flannel (default), weave and calico. The last two require Sysbox-Enterprise.\n"
919 | printf " --image= Docker image for the cluster nodes (default = ${IMAGE}).\n"
920 | printf " --k8s-version= Kubernetes version; must correspond to the version of K8s embeddeded in the image.\n"
921 | printf " -p, --publish= Publish the cluster's apiserver port via a host port; allows for remote control of the cluster.\n"
922 | printf " -w, --wait-all Wait for all nodes in the cluster to be ready; if not set, this command completes once the master node is ready (worker nodes may not be ready).\n"
923 | printf " -r, --retain Avoid destroying all the nodes if cluster-creation process fails at a late stage -- useful for debugging purposes (unset by default).\n"
924 | exit 1
925 | }
926 |
927 | function parse_create_args() {
928 | local new_net="true"
929 |
930 | options=$(getopt -o p:whr -l wait-all,retain,help,num-workers::,net::,cni::,image::,k8s-version::,publish:: -- "$@")
931 |
932 | [ $? -eq 0 ] || {
933 | show_create
934 | exit 1
935 | }
936 |
937 | eval set -- "$options"
938 |
939 | while true; do
940 | case "$1" in
941 | -h | --help)
942 | show_create
943 | ;;
944 | --num-workers)
945 | shift;
946 | NUM_WORKERS=$1
947 | if [[ ${NUM_WORKERS} -lt 0 ]]; then
948 | show_create
949 | fi
950 | ;;
951 | --net)
952 | shift;
953 | NET=$1
954 | new_net="false"
955 | ;;
956 | --cni)
957 | shift;
958 | CLUSTER_CNI=$1
959 | ;;
960 | --image)
961 | shift;
962 | IMAGE=$1
963 | ;;
964 | --k8s-version)
965 | shift;
966 | K8S_VERSION=$1
967 | ;;
968 | -w | --wait-all)
969 | WAIT_READY=1
970 | ;;
971 | -r | --retain)
972 | CLUSTER_RETAIN=1
973 | ;;
974 | -p | --publish)
975 | PUBLISH=1
976 | shift;
977 | HOST_PORT=$1
978 | ;;
979 | --)
980 | shift
981 | break
982 | ;;
983 | -*)
984 | show_create
985 | ;;
986 | *)
987 | show_create
988 | ;;
989 | esac
990 | shift
991 | done
992 |
993 | CLUSTER_NAME=$1
994 | if [[ $CLUSTER_NAME == "" ]]; then
995 | echo "ERROR: missing cluster name."
996 | show_create
997 | fi
998 |
999 | if [[ $new_net == "true" ]]; then
1000 | NET="${CLUSTER_NAME}-net"
1001 | fi
1002 |
1003 | if [[ $CLUSTER_CNI == "" ]]; then
1004 | CLUSTER_CNI="flannel"
1005 | elif
1006 | [[ ${CLUSTER_CNI} != "flannel" ]] &&
1007 | [[ ${CLUSTER_CNI} != "weave" ]] &&
1008 | [[ ${CLUSTER_CNI} != "calico" ]]; then
1009 | printf "Unsupported CNI: \"${CLUSTER_CNI}\". Enter one of the supported CNIs: flannel, weave, calico\n"
1010 | exit 1
1011 | fi
1012 | }
1013 |
1014 | function show_destroy() {
1015 | printf "\n"
1016 | printf "Usage: $0 destroy CLUSTER_NAME\n"
1017 | printf "\n"
1018 | printf "Destroys a K8s cluster.\n"
1019 | printf "\n"
1020 | printf "Options:\n"
1021 | printf " --net Destroy the docker network for the cluster (i.e., 'CLUSTER_NAME-net').\n"
1022 | printf " -h, --help Display usage.\n"
1023 | exit 1
1024 | }
1025 |
1026 | function parse_destroy_args() {
1027 | options=$(getopt -o h -l help,net -- "$@")
1028 |
1029 | [ $? -eq 0 ] || {
1030 | show_destroy
1031 | exit 1
1032 | }
1033 |
1034 | eval set -- "$options"
1035 |
1036 | while true; do
1037 | case "$1" in
1038 | --net)
1039 | DESTROY_NET=1
1040 | ;;
1041 | -h | --help)
1042 | show_destroy
1043 | ;;
1044 | --)
1045 | shift
1046 | break
1047 | ;;
1048 | -*)
1049 | show_destroy
1050 | ;;
1051 | *)
1052 | show_destroy
1053 | ;;
1054 | esac
1055 | shift
1056 | done
1057 |
1058 | CLUSTER_NAME=$1
1059 | if [[ $CLUSTER_NAME == "" ]]; then
1060 | echo "ERROR: missing cluster name."
1061 | show_destroy
1062 | fi
1063 |
1064 | if [[ $DESTROY_NET == 1 ]]; then
1065 | NET="${CLUSTER_NAME}-net"
1066 | fi
1067 | }
1068 |
1069 | function show_resize() {
1070 | printf "\n"
1071 | printf "Usage: $0 resize [OPTIONS] CLUSTER_NAME\n"
1072 | printf "\n"
1073 | printf "Resizes a K8s cluster (i.e., adds or removes nodes).\n"
1074 | printf "\n"
1075 | printf "When increasing the size of the cluster, you can optionally provide a Docker image. This \n"
1076 | printf "allows you to add nodes to the cluster with a different Docker image than when the cluster\n"
1077 | printf "was created."
1078 | printf "\n"
1079 | printf "Options:\n"
1080 | printf " --num-workers= Desired number of total worker nodes in the cluster.\n"
1081 | printf " --image= When increasing the size of the cluster, the Docker image for the new worker nodes (default = the image used when cluster was created).\n"
1082 | printf " -w, --wait-all When increasing the size of the cluster, wait for newly added nodes in the cluster to be ready; if not set, this command completes before the nodes may be ready).\n"
1083 | printf " -h, --help Display usage.\n"
1084 | exit 1
1085 | }
1086 |
1087 | function parse_resize_args() {
1088 | local take_action=0
1089 |
1090 | options=$(getopt -o wh -l wait-all,help,num-workers:,image: -- "$@")
1091 |
1092 | [ $? -eq 0 ] || {
1093 | show_resize
1094 | exit 1
1095 | }
1096 |
1097 | eval set -- "$options"
1098 |
1099 | while true; do
1100 | case "$1" in
1101 | -h | --help)
1102 | show_resize
1103 | ;;
1104 | --num-workers)
1105 | shift;
1106 | NUM_WORKERS=$1
1107 | if [[ ${NUM_WORKERS} -lt 0 ]]; then
1108 | show_resize
1109 | fi
1110 | take_action=1
1111 | ;;
1112 | --image)
1113 | shift;
1114 | IMAGE=$1
1115 | RESIZE_IMAGE=1
1116 | ;;
1117 | -w | --wait-all)
1118 | WAIT_READY=1
1119 | ;;
1120 | --)
1121 | shift
1122 | break
1123 | ;;
1124 | -*)
1125 | show_resize
1126 | ;;
1127 | *)
1128 | show_resize
1129 | ;;
1130 | esac
1131 | shift
1132 | done
1133 |
1134 | CLUSTER_NAME=$1
1135 | if [[ $CLUSTER_NAME == "" ]]; then
1136 | echo "ERROR: missing cluster name."
1137 | show_resize
1138 | fi
1139 |
1140 | if [[ $take_action == 0 ]]; then
1141 | echo "ERROR: missing --num-workers="
1142 | show_resize
1143 | exit 0
1144 | fi
1145 | }
1146 |
1147 | function show_list() {
1148 | printf "\n"
1149 | printf "Usage: $0 list [OPTIONS]\n"
1150 | printf "\n"
1151 | printf "Lists the K8s clusters.\n"
1152 | printf "\n"
1153 | printf "Options:\n"
1154 | printf " -l, --long Use a long listing format.\n"
1155 | printf " -h, --help Display usage.\n"
1156 | exit 1
1157 | }
1158 |
1159 | function parse_list_args() {
1160 | options=$(getopt -o lh -l long,help -- "$@")
1161 |
1162 | [ $? -eq 0 ] || {
1163 | show_list
1164 | exit 1
1165 | }
1166 |
1167 | eval set -- "$options"
1168 |
1169 | while true; do
1170 | case "$1" in
1171 | -h | --help)
1172 | show_list
1173 | ;;
1174 | -l | --long)
1175 | LONG_LIST=1
1176 | ;;
1177 | --)
1178 | shift
1179 | break
1180 | ;;
1181 | -*)
1182 | show_list
1183 | ;;
1184 | *)
1185 | show_list
1186 | ;;
1187 | esac
1188 | shift
1189 | done
1190 | }
1191 |
1192 | function show_usage() {
1193 | printf "\n"
1194 | printf "Usage: $0 COMMAND\n"
1195 | printf "\n"
1196 | printf "Simple program for deploying a K8s cluster inside Docker containers (aka Kubernetes-in-Docker),\n"
1197 | printf "using Docker + the Sysbox container runtime.\n"
1198 | printf "\n"
1199 | printf "NOTE: you must install the Sysbox container runtime in your host before using\n"
1200 | printf "this program. Download instructions are at www.nestybox.com.\n"
1201 | printf "\n"
1202 | printf "The cluster is composed of one master node and a configurable number of worker nodes.\n"
1203 | printf "Each node is a Docker container; the nodes are connected via a Docker bridge network.\n"
1204 | printf "\n"
1205 | printf "This program is meant as a reference example of how to deploy a K8s cluster\n"
1206 | printf "inside Docker containers, using simple Docker commands + the Sysbox container runtime.\n"
1207 | printf "Feel free to use it and adapt it to your needs.\n"
1208 | printf "\n"
1209 | printf "This program has some of the same functionality as the K8s.io KinD tool, except\n"
1210 | printf "that by virtue of using the Docker + Sysbox, the Docker images and commands\n"
1211 | printf "used by this program are **much simpler**, enabling you to easily and fully\n"
1212 | printf "control the cluster configuration and deployment (i.e., the Sysbox runtime\n"
1213 | printf "absorbs the complexity).\n"
1214 | printf "\n"
1215 | printf "Moreover, the resulting K8s cluster boots up pretty quickly (< 2 minutes for a\n"
1216 | printf "10-node cluster), uses minimal resources (only 1 GB overhead for a 10-node\n"
1217 | printf "cluster with Sysbox Enterprise!), and does **not** use privileged containers\n"
1218 | printf "(i.e., it's much more secure).\n"
1219 | printf "\n"
1220 | printf "Commands:\n"
1221 | printf " create Creates a cluster.\n"
1222 | printf " destroy Destroys a cluster.\n"
1223 | printf " resize Resizes a cluster.\n"
1224 | printf " list Lists the clusters.\n"
1225 | printf " showcmds Displays useful Docker commands used by this program to manage the cluster.\n"
1226 | printf " version Show version info.\n"
1227 | printf " help Show usage info.\n"
1228 | printf "\n"
1229 | printf "Run '$0 COMMAND --help' for for more info on that command.\n"
1230 | exit 1
1231 | }
1232 |
1233 | function args() {
1234 | SUBCMD=$2
1235 |
1236 | case "$SUBCMD" in
1237 | "create")
1238 | shift 2
1239 | parse_create_args "$@"
1240 | create_cluster
1241 | ;;
1242 | "destroy")
1243 | shift 2
1244 | parse_destroy_args "$@"
1245 | destroy_cluster
1246 | ;;
1247 | "resize")
1248 | shift 2
1249 | parse_resize_args "$@"
1250 | resize_cluster
1251 | ;;
1252 | "list")
1253 | shift 2
1254 | parse_list_args "$@"
1255 | list_clusters
1256 | ;;
1257 | "showcmds")
1258 | shift 2
1259 | show_cmds
1260 | ;;
1261 | "version")
1262 | shift 2
1263 | show_version
1264 | ;;
1265 | "help")
1266 | shift 2
1267 | show_usage
1268 | ;;
1269 | *)
1270 | echo 'Invalid command. Type "kindbox help" for usage.'
1271 | ;;
1272 | esac
1273 | }
1274 |
1275 | function main() {
1276 | args $0 "$@"
1277 | }
1278 |
1279 | main "$@"
1280 |
--------------------------------------------------------------------------------