├── .gitignore
├── LICENSE
├── README-vi.md
├── README.md
├── Vagrantfile
└── docs
    ├── en
    │   ├── Boostrapping-control-plane-and-nodes.md
    │   ├── Clean-up-environment.md
    │   ├── Installing-a-container-runtime.md
    │   ├── Installing-kubeadm-kubelet-kubectl.md
    │   └── Provision-VirtualBoxVM-with-Vagrant.md
    ├── images
    │   ├── cleaning.png
    │   ├── cluster-k8s.png
    │   ├── components-of-kubernetes.svg
    │   ├── cri.png
    │   ├── us.png
    │   ├── vagrant-logo.png
    │   ├── vagrant-ssh-vscode.png
    │   └── vi.png
    └── vi
        ├── Boostrapping-control-plane-and-nodes.md
        ├── Clean-up-environment.md
        ├── Installing-a-container-runtime.md
        ├── Installing-kubeadm-kubelet-kubectl.md
        └── Provision-VirtualBoxVM-with-Vagrant.md
/.gitignore:
--------------------------------------------------------------------------------
1 | # exclude vagrant artifact
2 | /.vagrant
3 | *.log
4 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Attribution 4.0 International
2 |
3 | =======================================================================
4 |
5 | Creative Commons Corporation ("Creative Commons") is not a law firm and
6 | does not provide legal services or legal advice. Distribution of
7 | Creative Commons public licenses does not create a lawyer-client or
8 | other relationship. Creative Commons makes its licenses and related
9 | information available on an "as-is" basis. Creative Commons gives no
10 | warranties regarding its licenses, any material licensed under their
11 | terms and conditions, or any related information. Creative Commons
12 | disclaims all liability for damages resulting from their use to the
13 | fullest extent possible.
14 |
15 | Using Creative Commons Public Licenses
16 |
17 | Creative Commons public licenses provide a standard set of terms and
18 | conditions that creators and other rights holders may use to share
19 | original works of authorship and other material subject to copyright
20 | and certain other rights specified in the public license below. The
21 | following considerations are for informational purposes only, are not
22 | exhaustive, and do not form part of our licenses.
23 |
24 | Considerations for licensors: Our public licenses are
25 | intended for use by those authorized to give the public
26 | permission to use material in ways otherwise restricted by
27 | copyright and certain other rights. Our licenses are
28 | irrevocable. Licensors should read and understand the terms
29 | and conditions of the license they choose before applying it.
30 | Licensors should also secure all rights necessary before
31 | applying our licenses so that the public can reuse the
32 | material as expected. Licensors should clearly mark any
33 | material not subject to the license. This includes other CC-
34 | licensed material, or material used under an exception or
35 | limitation to copyright. More considerations for licensors:
36 | wiki.creativecommons.org/Considerations_for_licensors
37 |
38 | Considerations for the public: By using one of our public
39 | licenses, a licensor grants the public permission to use the
40 | licensed material under specified terms and conditions. If
41 | the licensor's permission is not necessary for any reason--for
42 | example, because of any applicable exception or limitation to
43 | copyright--then that use is not regulated by the license. Our
44 | licenses grant only permissions under copyright and certain
45 | other rights that a licensor has authority to grant. Use of
46 | the licensed material may still be restricted for other
47 | reasons, including because others have copyright or other
48 | rights in the material. A licensor may make special requests,
49 | such as asking that all changes be marked or described.
50 | Although not required by our licenses, you are encouraged to
51 | respect those requests where reasonable. More considerations
52 | for the public:
53 | wiki.creativecommons.org/Considerations_for_licensees
54 |
55 | =======================================================================
56 |
57 | Creative Commons Attribution 4.0 International Public License
58 |
59 | By exercising the Licensed Rights (defined below), You accept and agree
60 | to be bound by the terms and conditions of this Creative Commons
61 | Attribution 4.0 International Public License ("Public License"). To the
62 | extent this Public License may be interpreted as a contract, You are
63 | granted the Licensed Rights in consideration of Your acceptance of
64 | these terms and conditions, and the Licensor grants You such rights in
65 | consideration of benefits the Licensor receives from making the
66 | Licensed Material available under these terms and conditions.
67 |
68 |
69 | Section 1 -- Definitions.
70 |
71 | a. Adapted Material means material subject to Copyright and Similar
72 | Rights that is derived from or based upon the Licensed Material
73 | and in which the Licensed Material is translated, altered,
74 | arranged, transformed, or otherwise modified in a manner requiring
75 | permission under the Copyright and Similar Rights held by the
76 | Licensor. For purposes of this Public License, where the Licensed
77 | Material is a musical work, performance, or sound recording,
78 | Adapted Material is always produced where the Licensed Material is
79 | synched in timed relation with a moving image.
80 |
81 | b. Adapter's License means the license You apply to Your Copyright
82 | and Similar Rights in Your contributions to Adapted Material in
83 | accordance with the terms and conditions of this Public License.
84 |
85 | c. Copyright and Similar Rights means copyright and/or similar rights
86 | closely related to copyright including, without limitation,
87 | performance, broadcast, sound recording, and Sui Generis Database
88 | Rights, without regard to how the rights are labeled or
89 | categorized. For purposes of this Public License, the rights
90 | specified in Section 2(b)(1)-(2) are not Copyright and Similar
91 | Rights.
92 |
93 | d. Effective Technological Measures means those measures that, in the
94 | absence of proper authority, may not be circumvented under laws
95 | fulfilling obligations under Article 11 of the WIPO Copyright
96 | Treaty adopted on December 20, 1996, and/or similar international
97 | agreements.
98 |
99 | e. Exceptions and Limitations means fair use, fair dealing, and/or
100 | any other exception or limitation to Copyright and Similar Rights
101 | that applies to Your use of the Licensed Material.
102 |
103 | f. Licensed Material means the artistic or literary work, database,
104 | or other material to which the Licensor applied this Public
105 | License.
106 |
107 | g. Licensed Rights means the rights granted to You subject to the
108 | terms and conditions of this Public License, which are limited to
109 | all Copyright and Similar Rights that apply to Your use of the
110 | Licensed Material and that the Licensor has authority to license.
111 |
112 | h. Licensor means the individual(s) or entity(ies) granting rights
113 | under this Public License.
114 |
115 | i. Share means to provide material to the public by any means or
116 | process that requires permission under the Licensed Rights, such
117 | as reproduction, public display, public performance, distribution,
118 | dissemination, communication, or importation, and to make material
119 | available to the public including in ways that members of the
120 | public may access the material from a place and at a time
121 | individually chosen by them.
122 |
123 | j. Sui Generis Database Rights means rights other than copyright
124 | resulting from Directive 96/9/EC of the European Parliament and of
125 | the Council of 11 March 1996 on the legal protection of databases,
126 | as amended and/or succeeded, as well as other essentially
127 | equivalent rights anywhere in the world.
128 |
129 | k. You means the individual or entity exercising the Licensed Rights
130 | under this Public License. Your has a corresponding meaning.
131 |
132 |
133 | Section 2 -- Scope.
134 |
135 | a. License grant.
136 |
137 | 1. Subject to the terms and conditions of this Public License,
138 | the Licensor hereby grants You a worldwide, royalty-free,
139 | non-sublicensable, non-exclusive, irrevocable license to
140 | exercise the Licensed Rights in the Licensed Material to:
141 |
142 | a. reproduce and Share the Licensed Material, in whole or
143 | in part; and
144 |
145 | b. produce, reproduce, and Share Adapted Material.
146 |
147 | 2. Exceptions and Limitations. For the avoidance of doubt, where
148 | Exceptions and Limitations apply to Your use, this Public
149 | License does not apply, and You do not need to comply with
150 | its terms and conditions.
151 |
152 | 3. Term. The term of this Public License is specified in Section
153 | 6(a).
154 |
155 | 4. Media and formats; technical modifications allowed. The
156 | Licensor authorizes You to exercise the Licensed Rights in
157 | all media and formats whether now known or hereafter created,
158 | and to make technical modifications necessary to do so. The
159 | Licensor waives and/or agrees not to assert any right or
160 | authority to forbid You from making technical modifications
161 | necessary to exercise the Licensed Rights, including
162 | technical modifications necessary to circumvent Effective
163 | Technological Measures. For purposes of this Public License,
164 | simply making modifications authorized by this Section 2(a)
165 | (4) never produces Adapted Material.
166 |
167 | 5. Downstream recipients.
168 |
169 | a. Offer from the Licensor -- Licensed Material. Every
170 | recipient of the Licensed Material automatically
171 | receives an offer from the Licensor to exercise the
172 | Licensed Rights under the terms and conditions of this
173 | Public License.
174 |
175 | b. No downstream restrictions. You may not offer or impose
176 | any additional or different terms or conditions on, or
177 | apply any Effective Technological Measures to, the
178 | Licensed Material if doing so restricts exercise of the
179 | Licensed Rights by any recipient of the Licensed
180 | Material.
181 |
182 | 6. No endorsement. Nothing in this Public License constitutes or
183 | may be construed as permission to assert or imply that You
184 | are, or that Your use of the Licensed Material is, connected
185 | with, or sponsored, endorsed, or granted official status by,
186 | the Licensor or others designated to receive attribution as
187 | provided in Section 3(a)(1)(A)(i).
188 |
189 | b. Other rights.
190 |
191 | 1. Moral rights, such as the right of integrity, are not
192 | licensed under this Public License, nor are publicity,
193 | privacy, and/or other similar personality rights; however, to
194 | the extent possible, the Licensor waives and/or agrees not to
195 | assert any such rights held by the Licensor to the limited
196 | extent necessary to allow You to exercise the Licensed
197 | Rights, but not otherwise.
198 |
199 | 2. Patent and trademark rights are not licensed under this
200 | Public License.
201 |
202 | 3. To the extent possible, the Licensor waives any right to
203 | collect royalties from You for the exercise of the Licensed
204 | Rights, whether directly or through a collecting society
205 | under any voluntary or waivable statutory or compulsory
206 | licensing scheme. In all other cases the Licensor expressly
207 | reserves any right to collect such royalties.
208 |
209 |
210 | Section 3 -- License Conditions.
211 |
212 | Your exercise of the Licensed Rights is expressly made subject to the
213 | following conditions.
214 |
215 | a. Attribution.
216 |
217 | 1. If You Share the Licensed Material (including in modified
218 | form), You must:
219 |
220 | a. retain the following if it is supplied by the Licensor
221 | with the Licensed Material:
222 |
223 | i. identification of the creator(s) of the Licensed
224 | Material and any others designated to receive
225 | attribution, in any reasonable manner requested by
226 | the Licensor (including by pseudonym if
227 | designated);
228 |
229 | ii. a copyright notice;
230 |
231 | iii. a notice that refers to this Public License;
232 |
233 | iv. a notice that refers to the disclaimer of
234 | warranties;
235 |
236 | v. a URI or hyperlink to the Licensed Material to the
237 | extent reasonably practicable;
238 |
239 | b. indicate if You modified the Licensed Material and
240 | retain an indication of any previous modifications; and
241 |
242 | c. indicate the Licensed Material is licensed under this
243 | Public License, and include the text of, or the URI or
244 | hyperlink to, this Public License.
245 |
246 | 2. You may satisfy the conditions in Section 3(a)(1) in any
247 | reasonable manner based on the medium, means, and context in
248 | which You Share the Licensed Material. For example, it may be
249 | reasonable to satisfy the conditions by providing a URI or
250 | hyperlink to a resource that includes the required
251 | information.
252 |
253 | 3. If requested by the Licensor, You must remove any of the
254 | information required by Section 3(a)(1)(A) to the extent
255 | reasonably practicable.
256 |
257 | 4. If You Share Adapted Material You produce, the Adapter's
258 | License You apply must not prevent recipients of the Adapted
259 | Material from complying with this Public License.
260 |
261 |
262 | Section 4 -- Sui Generis Database Rights.
263 |
264 | Where the Licensed Rights include Sui Generis Database Rights that
265 | apply to Your use of the Licensed Material:
266 |
267 | a. for the avoidance of doubt, Section 2(a)(1) grants You the right
268 | to extract, reuse, reproduce, and Share all or a substantial
269 | portion of the contents of the database;
270 |
271 | b. if You include all or a substantial portion of the database
272 | contents in a database in which You have Sui Generis Database
273 | Rights, then the database in which You have Sui Generis Database
274 | Rights (but not its individual contents) is Adapted Material; and
275 |
276 | c. You must comply with the conditions in Section 3(a) if You Share
277 | all or a substantial portion of the contents of the database.
278 |
279 | For the avoidance of doubt, this Section 4 supplements and does not
280 | replace Your obligations under this Public License where the Licensed
281 | Rights include other Copyright and Similar Rights.
282 |
283 |
284 | Section 5 -- Disclaimer of Warranties and Limitation of Liability.
285 |
286 | a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
287 | EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
288 | AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
289 | ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
290 | IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
291 | WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
292 | PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
293 | ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
294 | KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
295 | ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
296 |
297 | b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
298 | TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
299 | NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
300 | INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
301 | COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
302 | USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
303 | ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
304 | DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
305 | IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
306 |
307 | c. The disclaimer of warranties and limitation of liability provided
308 | above shall be interpreted in a manner that, to the extent
309 | possible, most closely approximates an absolute disclaimer and
310 | waiver of all liability.
311 |
312 |
313 | Section 6 -- Term and Termination.
314 |
315 | a. This Public License applies for the term of the Copyright and
316 | Similar Rights licensed here. However, if You fail to comply with
317 | this Public License, then Your rights under this Public License
318 | terminate automatically.
319 |
320 | b. Where Your right to use the Licensed Material has terminated under
321 | Section 6(a), it reinstates:
322 |
323 | 1. automatically as of the date the violation is cured, provided
324 | it is cured within 30 days of Your discovery of the
325 | violation; or
326 |
327 | 2. upon express reinstatement by the Licensor.
328 |
329 | For the avoidance of doubt, this Section 6(b) does not affect any
330 | right the Licensor may have to seek remedies for Your violations
331 | of this Public License.
332 |
333 | c. For the avoidance of doubt, the Licensor may also offer the
334 | Licensed Material under separate terms or conditions or stop
335 | distributing the Licensed Material at any time; however, doing so
336 | will not terminate this Public License.
337 |
338 | d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
339 | License.
340 |
341 |
342 | Section 7 -- Other Terms and Conditions.
343 |
344 | a. The Licensor shall not be bound by any additional or different
345 | terms or conditions communicated by You unless expressly agreed.
346 |
347 | b. Any arrangements, understandings, or agreements regarding the
348 | Licensed Material not stated herein are separate from and
349 | independent of the terms and conditions of this Public License.
350 |
351 |
352 | Section 8 -- Interpretation.
353 |
354 | a. For the avoidance of doubt, this Public License does not, and
355 | shall not be interpreted to, reduce, limit, restrict, or impose
356 | conditions on any use of the Licensed Material that could lawfully
357 | be made without permission under this Public License.
358 |
359 | b. To the extent possible, if any provision of this Public License is
360 | deemed unenforceable, it shall be automatically reformed to the
361 | minimum extent necessary to make it enforceable. If the provision
362 | cannot be reformed, it shall be severed from this Public License
363 | without affecting the enforceability of the remaining terms and
364 | conditions.
365 |
366 | c. No term or condition of this Public License will be waived and no
367 | failure to comply consented to unless expressly agreed to by the
368 | Licensor.
369 |
370 | d. Nothing in this Public License constitutes or may be interpreted
371 | as a limitation upon, or waiver of, any privileges and immunities
372 | that apply to the Licensor or You, including from the legal
373 | processes of any jurisdiction or authority.
374 |
375 |
376 | =======================================================================
377 |
378 | Creative Commons is not a party to its public
379 | licenses. Notwithstanding, Creative Commons may elect to apply one of
380 | its public licenses to material it publishes and in those instances
381 | will be considered the “Licensor.” The text of the Creative Commons
382 | public licenses is dedicated to the public domain under the CC0 Public
383 | Domain Dedication. Except for the limited purpose of indicating that
384 | material is shared under a Creative Commons public license or as
385 | otherwise permitted by the Creative Commons policies published at
386 | creativecommons.org/policies, Creative Commons does not authorize the
387 | use of the trademark "Creative Commons" or any other trademark or logo
388 | of Creative Commons without its prior written consent including,
389 | without limitation, in connection with any unauthorized modifications
390 | to any of its public licenses or any other arrangements,
391 | understandings, or agreements concerning use of licensed material. For
392 | the avoidance of doubt, this paragraph does not form part of the
393 | public licenses.
394 |
395 | Creative Commons may be contacted at creativecommons.org.
396 |
--------------------------------------------------------------------------------
/README-vi.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 | Build a Kubernetes cluster with the kubeadm tool on VirtualBox
7 |
8 |
9 |
10 |
11 | 🦖 Detailed step-by-step explanations for beginners.
12 |
13 | Based on the K8s documentation, section Bootstrapping clusters with kubeadm. This guide sets up a Kubernetes cluster on a personal computer using VirtualBox virtual machines. We will use Vagrant to automate the creation of the VirtualBox VMs.
37 |
38 | ### Before you begin
39 | * 🚧 Cluster layout: 1 Control Plane machine and 2 Node machines.
40 | * 🖥️ Operating system: Ubuntu 18.04 LTS (Bionic Beaver)
41 | * ⚙️ System resources: 2 GB of RAM and 2 CPUs per machine.
42 | * 📮 Each machine has a unique hostname, MAC address, and product_uuid.
43 | * 🧱 No firewall configuration. (Traffic on all ports is allowed by default)
44 | * 🌐 Full network connectivity between all machines in the cluster (private network, using the VMs' enp0s8 network interface).
45 |
46 | ### Step-by-step tutorial
47 |
48 | * ▶️ [Provision VirtualBox VMs with Vagrant](docs/vi/Provision-VirtualBoxVM-with-Vagrant.md)
49 | * ▶️ [Installing a container runtime (containerd) on all virtual machines](docs/vi/Installing-a-container-runtime.md)
50 | * ▶️ [Installing kubeadm, kubelet, and kubectl on all virtual machines](docs/vi/Installing-kubeadm-kubelet-kubectl.md)
51 | * ▶️ [Bootstrapping the control plane and nodes](docs/vi/Boostrapping-control-plane-and-nodes.md)
52 | * ▶️ [Clean up the environment](docs/vi/Clean-up-environment.md)
53 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
Reference: the K8s documentation, section Bootstrapping clusters with kubeadm. This tutorial walks you through setting up a Kubernetes cluster on a local machine using VirtualBox. We will use Vagrant to automate the provisioning of the VirtualBox VMs.
37 |
38 | ### Before you begin
39 | * 🚧 Cluster diagram: 1 Control Plane & 2 Nodes.
40 | * 🖥️ Machine operating system: Ubuntu 18.04 LTS (Bionic Beaver)
41 | * ⚙️ System resource: 2 GB of RAM and 2 CPUs per machine.
42 | * 📮 Unique hostname, MAC address, and product_uuid for every machine.
43 | * 🧱 No firewall configuration. (Allow all traffic by default)
44 | * 🌐 Full network connectivity between all machines in the cluster (private network, network interface enp0s8 of virtual machines).
45 |
46 | ### Step-by-step tutorial
47 |
48 | * ▶️ [Provision VirtualBox VMs with Vagrant](docs/en/Provision-VirtualBoxVM-with-Vagrant.md)
49 | * ▶️ [Installing a container runtime (containerd) on all virtual machines](docs/en/Installing-a-container-runtime.md)
50 | * ▶️ [Installing kubeadm, kubelet, and kubectl on all virtual machines](docs/en/Installing-kubeadm-kubelet-kubectl.md)
51 | * ▶️ [Bootstrapping control plane and nodes](docs/en/Boostrapping-control-plane-and-nodes.md)
52 | * ▶️ [Clean up the environment](docs/en/Clean-up-environment.md)
53 |
--------------------------------------------------------------------------------
/Vagrantfile:
--------------------------------------------------------------------------------
1 | # -*- mode: ruby -*-
2 | # vi:set ft=ruby sw=2 ts=2 sts=2:
3 |
4 | # Define the number of control plane (MASTER_NODE) and node (WORKER_NODE)
5 | NUM_MASTER_NODE = 1
6 | NUM_WORKER_NODE = 2
7 |
8 | IP_NW = "192.168.56."
9 | MASTER_IP_START = 1
10 | NODE_IP_START = 2
11 |
12 | # All Vagrant configuration is done below. The "2" in Vagrant.configure
13 | # configures the configuration version (we support older styles for
14 | # backwards compatibility). Please don't change it unless you know what
15 | # you're doing.
16 | Vagrant.configure("2") do |config|
17 | # The most common configuration options are documented and commented below.
18 | # For a complete reference, please see the online documentation at
19 | # https://docs.vagrantup.com.
20 |
21 | # Every Vagrant development environment requires a box. You can search for
22 | # boxes at https://vagrantcloud.com/search.
23 | # Here are some key details about the "ubuntu/bionic64" Vagrant box:
24 | # Operating System: Ubuntu 18.04 LTS (Bionic Beaver)
25 | # Ubuntu 18.04 LTS will receive security updates and bug fixes
26 | # from Canonical, the company behind Ubuntu, until April 2023
27 | # for desktop and server versions, and until April 2028 for
28 | # server versions with Extended Security Maintenance (ESM) enabled.
29 | # Architecture: x86_64 (64-bit)
30 | # Disk Size: 10 GB
31 | # RAM: 2 GB
32 | # CPUs: 2
33 | # Desktop Environment: None (headless)
34 | # Provider: VirtualBox
35 | config.vm.box = "ubuntu/bionic64"
36 |
37 | # Disable automatic box update checking. If you disable this, then
38 | # boxes will only be checked for updates when the user runs
39 | # `vagrant box outdated`. This is not recommended.
40 | config.vm.box_check_update = false
41 |
42 | # View the documentation for the VirtualBox for more
43 | # information on available options.
44 | # https://developer.hashicorp.com/vagrant/docs/providers/virtualbox/configuration
45 |
46 | # Provision Control Plane
47 | (1..NUM_MASTER_NODE).each do |i|
48 | config.vm.define "kubemaster" do |node|
49 | node.vm.provider "virtualbox" do |vb|
50 | vb.name = "kubemaster"
51 | vb.memory = 2048
52 | vb.cpus = 2
53 | end
54 | node.vm.hostname = "kubemaster"
55 | node.vm.network :private_network, ip: IP_NW + "#{MASTER_IP_START + i}"
56 | end
57 | end
58 |
59 |
60 | # Provision Nodes
61 | (1..NUM_WORKER_NODE).each do |i|
62 | config.vm.define "kubenode0#{i}" do |node|
63 | node.vm.provider "virtualbox" do |vb|
64 | vb.name = "kubenode0#{i}"
65 | vb.memory = 2048
66 | vb.cpus = 2
67 | end
68 | node.vm.hostname = "kubenode0#{i}"
69 | node.vm.network :private_network, ip: IP_NW + "#{NODE_IP_START + i}"
70 | end
71 | end
72 | end
--------------------------------------------------------------------------------
/docs/en/Boostrapping-control-plane-and-nodes.md:
--------------------------------------------------------------------------------
1 | # Bootstrapping control plane and nodes
2 |
3 |
4 |
5 |
6 | ## Initializing your control plane
7 |
8 | The `control plane` is the machine that runs cluster components such as `etcd` (the cluster database) and the `API Server` (which the `kubectl` command line tool communicates with).
9 |
10 | To initialize the control plane, run this command on the virtual machine with hostname `kubemaster`:
11 |
12 | sudo kubeadm init --apiserver-advertise-address=192.168.56.2 --pod-network-cidr=10.244.0.0/16
13 |
14 | * `--apiserver-advertise-address=192.168.56.2`: The IP address on which the API Server will advertise it is listening. In this tutorial, we use the IP address of the `kubemaster` VM.
15 | * `--pod-network-cidr=10.244.0.0/16`: The range of IP addresses for the pod network; the control plane automatically allocates a CIDR from this range to every node for pod IPs. **You may need to choose a CIDR range that does not overlap with any existing network ranges to avoid IP address conflicts**.
16 |
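Before running `kubeadm init`, you can double-check the address to advertise. A quick sanity check, assuming the Vagrant private network is attached to `enp0s8` as described in the README:

    # On kubemaster: show the IPv4 address of the private network interface
    ip -4 addr show enp0s8 | grep inet
    # Expect something like: inet 192.168.56.2/24 ...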
17 | `kubeadm init` first runs a series of prechecks to ensure that the machine is ready to run `Kubernetes`. These prechecks expose warnings and exit on errors. `kubeadm init` then downloads and installs the cluster control plane components. This may take several minutes. After it finishes you should see:
18 |
19 | Your Kubernetes control-plane has initialized successfully!
20 |
21 | To start using your cluster, you need to run the following as a regular user:
22 |
23 | mkdir -p $HOME/.kube
24 | sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
25 | sudo chown $(id -u):$(id -g) $HOME/.kube/config
26 |
27 | You should now deploy a Pod network to the cluster.
28 | Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
29 | /docs/concepts/cluster-administration/addons/
30 |
31 | You can now join any number of machines by running the following on each node
32 | as root:
33 |
34 | kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
35 |
36 | **Save the command `kubeadm join...` to join the nodes into the cluster later**.
37 |
38 | To make `kubectl` work for your `non-root user`, run these commands, which are also part of the `kubeadm init` output:
39 |
40 | mkdir -p $HOME/.kube
41 | sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
42 | sudo chown $(id -u):$(id -g) $HOME/.kube/config
43 |
44 | Alternatively, if you are the `root user`, you can run:
45 |
46 | export KUBECONFIG=/etc/kubernetes/admin.conf
47 |
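Either way, you can verify that `kubectl` can reach the new control plane:

    kubectl cluster-info
    # Kubernetes control plane is running at https://192.168.56.2:6443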
48 | >Warning: `Kubeadm` signs the certificate in the `admin.conf` to have `Subject: O = system:masters`, `CN = kubernetes-admin`.
49 | `system:masters` is a break-glass, super user group that bypasses the authorization layer (e.g. [RBAC](https://docs.oracle.com/cd/E19253-01/816-4557/rbac-1/)). Do not share the `admin.conf` file with anyone and instead grant users custom permissions by generating them a `kubeconfig file` using the `kubeadm kubeconfig` user command. For more details see [Generating kubeconfig files for additional users](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#kubeconfig-additional-users).
50 |
51 | ## Joining your nodes
52 |
53 | Run the join command that was output by `kubeadm init` on all worker nodes (virtual machines `kubenode01` and `kubenode02`) with sudo permission:
54 |
55 | sudo kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
56 |
57 | **If you lost the join command above, go to the control plane `kubemaster` and recover its parts as follows.**
58 |
59 | **Get the `<token>` by running**
60 |
61 | kubeadm token list
62 |
63 | The output is similar to this:
64 |
65 | TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
66 | 8ewj1p.9r9hcjoqgajrj4gi   23h   2018-06-12T02:51:28Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.  system:bootstrappers:kubeadm:default-node-token
70 |
71 | By default, a `<token>` expires after 24 hours. If you are joining a node to the cluster after the current `<token>` has expired, you can create a new `<token>` by running the following command on the `control-plane node`:
72 |
73 | kubeadm token create
74 |
75 | The output is similar to this:
76 |
77 | 5didvk.d09sbcov8ph2amjw
78 |
79 | **Get the `<hash>` by running**
80 |
81 | openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
82 | openssl dgst -sha256 -hex | sed 's/^.* //'
83 |
84 | The output is similar to:
85 |
86 | 8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
87 |
88 | **Get the `<control-plane-host>:<control-plane-port>` by running**
89 |
90 | cat $HOME/.kube/config | grep server
91 |
92 | The output is similar to:
93 |
94 | server: https://192.168.56.2:6443
95 |
96 | **`<control-plane-host>:<control-plane-port>`** will be **`192.168.56.2:6443`**
97 |
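Putting the pieces together, a complete join command for this tutorial's addresses would look like the following sketch, using the example token and hash from above (substitute your own values):

    sudo kubeadm join 192.168.56.2:6443 --token 5didvk.d09sbcov8ph2amjw \
        --discovery-token-ca-cert-hash sha256:8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78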
98 | **The node joins the Kubernetes cluster successfully**
99 |
100 | The output should look something like this:
101 |
102 | [preflight] Running pre-flight checks
103 |
104 | ... (log output of join workflow) ...
105 |
106 | Node join complete:
107 | * Certificate signing request sent to control-plane and response
108 | received.
109 | * Kubelet informed of new secure connection details.
110 |
111 | Run 'kubectl get nodes' on control-plane to see this machine join.
112 |
113 | A few seconds later, you should see this node in the output of `kubectl get nodes` when run on the control-plane node.
114 |
115 | ## Verify Kubernetes cluster components
116 |
117 | 
118 |
119 | On control-plane `kubemaster` and worker nodes `kubenode01`, `kubenode02` run
120 |
121 | sudo netstat -lntp
122 |
123 | All components listening on TCP ports are shown below:
124 |
125 | **kubemaster**
126 |
127 | Active Internet connections (only servers)
128 | Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
129 | tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 8013/kubelet
130 | tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 8182/kube-proxy
131 | tcp 0 0 192.168.56.2:2379 0.0.0.0:* LISTEN 7811/etcd
132 | tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 7811/etcd
133 | tcp 0 0 192.168.56.2:2380 0.0.0.0:* LISTEN 7811/etcd
134 | tcp 0 0 127.0.0.1:2381 0.0.0.0:* LISTEN 7811/etcd
135 | tcp 0 0 127.0.0.1:10257 0.0.0.0:* LISTEN 7791/kube-controlle
136 | tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 7907/kube-scheduler
137 | tcp 0 0 127.0.0.1:34677 0.0.0.0:* LISTEN 2826/containerd
138 | tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 817/systemd-resolve
139 | tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1380/sshd
140 | tcp6 0 0 :::10250 :::* LISTEN 8013/kubelet
141 | tcp6 0 0 :::6443 :::* LISTEN 7884/kube-apiserver
142 | tcp6 0 0 :::10256 :::* LISTEN 8182/kube-proxy
143 | tcp6 0 0 :::22 :::* LISTEN 1380/sshd
144 |
145 | >`kube-apiserver` appears to LISTEN only on the IPv6 wildcard `:::6443`, but a socket bound to the IPv6 wildcard also accepts IPv4 connections through `IPv4-mapped IPv6 addresses`. That is why `kubeadm join` on the worker nodes succeeds.
146 | For example, the IPv4 address `192.168.56.2` can be represented as the IPv6 address `::ffff:192.168.56.2`.
147 |
148 | **kubenode***
149 |
150 | Active Internet connections (only servers)
151 | Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
152 | tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 8987/kubelet
153 | tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 9208/kube-proxy
154 | tcp 0 0 127.0.0.1:39989 0.0.0.0:* LISTEN 2785/containerd
155 | tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 782/systemd-resolve
156 | tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1431/sshd
157 | tcp6 0 0 :::10250 :::* LISTEN 8987/kubelet
158 | tcp6 0 0 :::10256 :::* LISTEN 9208/kube-proxy
159 | tcp6 0 0 :::22 :::* LISTEN 1431/sshd
160 |
161 | ## Installing a Pod network add-on
162 |
163 | Run `kubectl get nodes` on control-plane to see these joined nodes
164 |
165 | NAME         STATUS     ROLES           AGE    VERSION
166 | kubemaster   NotReady   control-plane   3h1m   v1.26.2
167 | kubenode01   NotReady   <none>          3h     v1.26.2
168 | kubenode02   NotReady   <none>          179m   v1.26.2
169 |
170 | As you can see, our virtual machines `kubemaster`, `kubenode01`, and `kubenode02` have joined the Kubernetes cluster, but they are `NotReady`.
171 |
172 | Run `kubectl get pods -A` on the control plane to see all pods of the `kube-system` namespace
173 |
174 | NAMESPACE NAME READY STATUS RESTARTS AGE
175 | kube-system coredns-787d4945fb-5cwlq 0/1 Pending 0 3h8m
176 | kube-system coredns-787d4945fb-q2s4p 0/1 Pending 0 3h8m
177 | kube-system etcd-controlplane 1/1 Running 0 3h8m
178 | kube-system kube-apiserver-controlplane 1/1 Running 0 3h8m
179 | kube-system kube-controller-manager-controlplane 1/1 Running 0 3h8m
180 | kube-system kube-proxy-7twwr 1/1 Running 0 3h7m
181 | kube-system kube-proxy-8mxt7 1/1 Running 0 3h8m
182 | kube-system kube-proxy-v9rc6 1/1 Running 0 3h8m
183 | kube-system kube-scheduler-controlplane 1/1 Running 0 3h9m
184 |
185 | You must deploy a `Container Network Interface (CNI)` based `Pod network add-on` so that your Pods can communicate with each other. `Cluster DNS (CoreDNS)` will not start up until the pod networking is configured.
186 |
187 | `Pod network add-ons` are `Kubernetes-specific CNI plugins` that provide **network connectivity between pods** in a `Kubernetes cluster`. They create a `virtual network overlay` that spans the entire cluster and provides each `pod` with its own `unique IP address`.
188 |
189 | While `CNI plugins` can be used with any container runtime, pod network add-ons are specific to Kubernetes and provide the networking functionality required for the Kubernetes networking model. Some examples of pod network add-ons include `Calico`, `Flannel`, and `Weave Net`.
190 |
191 | In this tutorial, we will use the [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) add-on. It is easy to set up and use, and it is a good fit for smaller-scale deployments.
192 |
193 | To install it onto the Kubernetes cluster, run the following command on the control plane `kubemaster`:
194 |
195 | kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
196 |
197 | The output will be
198 |
199 | serviceaccount/weave-net created
200 | clusterrole.rbac.authorization.k8s.io/weave-net created
201 | clusterrolebinding.rbac.authorization.k8s.io/weave-net created
202 | role.rbac.authorization.k8s.io/weave-net created
203 | rolebinding.rbac.authorization.k8s.io/weave-net created
204 | daemonset.apps/weave-net created
205 |
206 | Take care that your Pod network does not overlap with any of the machine networks. Since we defined a `CIDR block` during `kubeadm init` with `--pod-network-cidr`, we must set the `IPALLOC_RANGE` parameter in the `Weave network plugin's` YAML.
207 | Run this command on the control plane `kubemaster`:
208 |
209 | kubectl edit ds weave-net -n kube-system
210 |
211 | It will open the YAML of the `weave-net` DaemonSet for editing. Find the `spec` of the container named `weave` and add an environment variable `IPALLOC_RANGE` whose value is the CIDR you passed to `--pod-network-cidr`:
212 |
213 | spec:
214 | ...
215 | template:
216 | ...
217 | spec:
218 | ...
219 | containers:
220 | ...
221 | env:
222 | - name: IPALLOC_RANGE
223 | value: 10.244.0.0/16
224 | ...
225 | name: weave
226 |
227 | Save the file and wait a few minutes for the `weave-net` pods to restart.
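Alternatively, you can set the same environment variable without opening an editor. This sketch uses the standard `kubectl set env` subcommand and is equivalent to the manual edit above:

    kubectl set env daemonset/weave-net -n kube-system -c weave IPALLOC_RANGE=10.244.0.0/16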
228 |
229 | ## Verify the cluster status
230 |
231 | Run `kubectl get pods -A` on the control plane again to verify; you will see the 3 pods of the `weave-net` DaemonSet, and the `coredns` pods are running now:
232 |
233 | NAMESPACE NAME READY STATUS RESTARTS AGE
234 | kube-system coredns-787d4945fb-48tbh 1/1 Running 0 6m57s
235 | kube-system coredns-787d4945fb-nrsp7 1/1 Running 0 6m57s
236 | kube-system etcd-kubemaster 1/1 Running 0 7m10s
237 | kube-system kube-apiserver-kubemaster 1/1 Running 0 7m12s
238 | kube-system kube-controller-manager-kubemaster 1/1 Running 0 7m10s
239 | kube-system kube-proxy-8sxss 1/1 Running 0 4m19s
240 | kube-system kube-proxy-j7z6x 1/1 Running 0 6m58s
241 | kube-system kube-proxy-nj8j2 1/1 Running 0 4m14s
242 | kube-system kube-scheduler-kubemaster 1/1 Running 0 7m10s
243 | kube-system weave-net-7mldz 2/2 Running 0 2m
244 | kube-system weave-net-dk5dl 2/2 Running 0 70s
245 | kube-system weave-net-znhnm 2/2 Running 0 2m
246 |
247 | Run `kubectl get nodes` to verify the status of the cluster; all nodes are ready now:
248 |
249 | NAME         STATUS   ROLES           AGE     VERSION
250 | kubemaster   Ready    control-plane   9m54s   v1.26.2
251 | kubenode01   Ready    <none>          6m59s   v1.26.2
252 | kubenode02   Ready    <none>          6m54s   v1.26.2
253 |
254 | >If you see that a worker node has `ROLES` of `<none>`, it means that the node is not running any `control plane components` or Kubernetes services that would give it a specific role. Worker nodes typically do not run any control plane components, so this is perfectly normal and expected behavior for worker nodes in a Kubernetes cluster.
255 |
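As an optional smoke test (the `nginx` image is just an example), you can run a pod and confirm it receives an IP address from the pod CIDR:

    kubectl run nginx --image=nginx
    kubectl get pods -o wide    # the pod IP should fall within 10.244.0.0/16
    kubectl delete pod nginx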
256 | Networking is a central part of Kubernetes, see [Kubernetes networking model](https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model) for more information.
257 |
258 | For more options to customize the cluster with kubeadm, read [Create cluster kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).
259 |
260 | ▶️ [Clean up your environment](Clean-up-environment.md#clean-up-the-environment)
261 |
--------------------------------------------------------------------------------
/docs/en/Clean-up-environment.md:
--------------------------------------------------------------------------------
1 | # Clean up the environment
2 |
3 |
4 |
5 |
6 | If you have an unsolved issue or just want to start over, this guide is for you!
7 |
8 | ## Keep the virtual machine, just clean up the Kubernetes cluster
9 |
10 | #### Remove the node
11 |
12 | Run this to gracefully evict all the running pods on the node:
13 |
14 | kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets
15 |
16 | Then, on the node being removed, reset the state installed by kubeadm:
17 |
18 | kubeadm reset
19 |
20 | The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually:
21 |
22 | iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
23 |
24 | If you want to reset the IPVS tables, you must run the following command:
25 |
26 | ipvsadm -C
27 |
28 | Now remove the node:
29 |
30 | kubectl delete node <node name>
31 |
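For example, to remove `kubenode01` from this tutorial's cluster, the whole sequence would be (run `kubeadm reset` on the node itself, and the other commands on the control plane):

    kubectl drain kubenode01 --delete-emptydir-data --force --ignore-daemonsets
    # on kubenode01:
    sudo kubeadm reset
    # back on the control plane:
    kubectl delete node kubenode01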
32 | If you wish to start over, run `kubeadm init` or `kubeadm join` with the appropriate arguments.
33 |
34 | #### Clean up the control plane
35 |
36 | Perform a best-effort revert of the changes made to this host by `kubeadm init`:
37 |
38 | sudo kubeadm reset --kubeconfig="/etc/kubernetes/admin.conf"
39 |
40 | `--kubeconfig=string`: The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations is searched for an existing kubeconfig file. (If you configured `kubectl` for a `non-root user` after `kubeadm init`, don't forget to also delete `$HOME/.kube/config`.)
41 |
42 | The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually as we do in [Remove the node](#remove-the-node).
43 |
44 | ## Dispose of all virtual machines
45 |
46 | Since we use `Vagrant` to automate provisioning the virtual machines, you can dispose of them all with a single command. Run this in the project directory on your host, where `Vagrant` & `VirtualBox` are installed:
47 |
48 | vagrant destroy
49 |
50 | >If you just want to turn off the virtual machines, run `vagrant halt` instead. When you `vagrant up` again, your work will not be removed. Read more about this command [here](https://developer.hashicorp.com/vagrant/docs/cli/halt).
--------------------------------------------------------------------------------
/docs/en/Installing-a-container-runtime.md:
--------------------------------------------------------------------------------
1 | # Installing a container runtime (containerd) on all virtual machines
2 |
3 |
4 |
5 |
6 | *Perform this task on all virtual machines*
7 |
8 | > Note: Dockershim has been removed from the Kubernetes project as of release 1.24. Read the [Dockershim Removal FAQ](https://kubernetes.io/blog/2022/02/17/dockershim-faq/) for further details.
9 |
10 | > Dockershim is a component of Kubernetes that was used to communicate with the Docker runtime. It was introduced as a temporary solution to allow Kubernetes to use Docker as a container runtime, before Kubernetes had its own `container runtime interface (CRI)`.
11 |
12 | You need to install a container runtime on each node in the cluster so that Pods can run there. Kubernetes 1.26 requires a runtime that conforms with the `Container Runtime Interface (CRI)`. Common container runtimes used with Kubernetes include:
13 | * [containerd](https://containerd.io/)
14 | * [CRI-O](https://cri-o.io/)
15 | * [Docker Engine](https://docs.docker.com/engine/)
16 | * [Mirantis Container Runtime](https://docs.mirantis.com/mcr/20.10/overview.html)
17 |
18 | You can find the instructions for each type [here](https://kubernetes.io/docs/setup/production-environment/container-runtimes/). In this tutorial, we will use [**Containerd**](https://github.com/containerd/containerd/blob/main/docs/getting-started.md).
19 |
20 | ## Install and configure prerequisites
21 |
22 | ### Load kernel modules in Linux
23 |
24 |     cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
25 |     overlay
26 |     br_netfilter
27 |     EOF
28 |
29 |     sudo modprobe overlay
30 |     sudo modprobe br_netfilter
31 |
32 | Verify that the `overlay` and `br_netfilter` modules are loaded:
33 |
34 |     lsmod | grep -e overlay -e br_netfilter
35 |
36 | ### Set required sysctl parameters
37 |
38 | Forward IPv4 and let iptables see bridged traffic:
39 |
40 |     cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
41 |     net.bridge.bridge-nf-call-iptables  = 1
42 |     net.bridge.bridge-nf-call-ip6tables = 1
43 |     net.ipv4.ip_forward                 = 1
44 |     EOF
45 |
46 |     sudo sysctl --system
47 |
48 | Verify that the sysctl parameters are applied:
49 |
50 |     sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
51 |
52 | ## Install containerd
53 |
54 | >Container Network Interface (CNI) is a standard interface for configuring networking for Linux containers. It enables a wide range of networking options, including overlay networks, load balancing, and security policies, to be used with containerized applications. In this tutorial we use a CNI plugin provided by the Pod network add-on, which will be installed later in the task [Boostrapping control plane and nodes](/docs/en/Boostrapping-control-plane-and-nodes.md#installing-a-pod-network-add-on).
68 |
69 | Update the apt package index and install packages to allow apt to use a repository over HTTPS:
70 |
71 | sudo apt-get update
72 |
73 | sudo apt-get install \
74 | ca-certificates \
75 | curl \
76 | gnupg \
77 | lsb-release
78 |
79 | Add Docker’s official GPG key:
80 |
81 | sudo mkdir -m 0755 -p /etc/apt/keyrings
82 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
83 |
84 | Use the following command to set up the repository:
85 |
86 | echo \
87 | "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
88 | $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
89 |
90 | Update the apt package index after setting up the repo:
91 |
92 | sudo apt-get update
93 |
94 | Install the latest version package `containerd.io`
95 |
96 | sudo apt-get install containerd.io
97 |
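You can confirm that the runtime is installed and running:

    sudo systemctl status containerd --no-pager
    containerd --version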
98 | ## Cgroup drivers
99 |
100 | On Linux, [control groups](https://docs.kernel.org/admin-guide/cgroup-v1/cgroups.html) are used to constrain resources that are allocated to processes.
101 |
102 | Both kubelet and the underlying container runtime need to interface with control groups to enforce resource management for pods and containers and to set resources such as CPU/memory requests and limits. To interface with control groups, the kubelet and the container runtime need to use a cgroup driver. **It's critical that the `kubelet` and the `container runtime` use the same `cgroup driver` and are configured consistently**.
103 |
104 | There are two cgroup drivers available:
105 |
106 | * **cgroupfs**
107 | * **systemd**
108 |
109 | Since our virtual machines use **systemd** as the init system, we will configure `kubelet` and `containerd` to use **systemd** as their `cgroup driver`.
110 |
111 |
112 | >Depending on the distribution and version, you may see a different `cgroup driver`. To show the current `cgroup` type in Linux, you can check the cgroup mount points with the following command: `cat /proc/mounts | grep cgroup`
113 |
114 | #### Configuring the `containerd` cgroup driver
115 |
116 | To configure `containerd` to use the `systemd` cgroup driver, run:
117 |
118 | sudo vi /etc/containerd/config.toml
119 |
120 | Replace the entire contents of the `config.toml` file with the settings below:
121 |
122 | [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
123 | [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
124 | SystemdCgroup = true
125 |
126 | Make sure to restart `containerd` to apply this change:
127 |
128 | sudo systemctl restart containerd
129 |
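To double-check that the daemon picked up the setting, you can dump the active configuration:

    sudo containerd config dump | grep SystemdCgroup
    # Expect: SystemdCgroup = true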
130 | #### Configuring the `kubelet` cgroup driver
131 |
132 | Since v1.22, if the user does not set the `cgroupDriver` field under `KubeletConfiguration`, `kubeadm` defaults it to **systemd**.
133 | So in this tutorial, we don't need to configure the `cgroup driver` for `kubelet`, because we will use `kubeadm` to bootstrap the cluster in the next few steps.
134 | You can see [here](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/#configuring-the-kubelet-cgroup-driver) for more configuring information.
135 |
136 | ## Next
137 |
138 | ▶️ [Installing kubeadm, kubelet and kubectl on all virtual machines](Installing-kubeadm-kubelet-kubectl.md#installing-kubeadm-kubelet-and-kubectl)
--------------------------------------------------------------------------------
/docs/en/Installing-kubeadm-kubelet-kubectl.md:
--------------------------------------------------------------------------------
1 | # Installing kubeadm, kubelet, and kubectl
2 |
3 |
4 |
5 |
6 | *Perform this task on all virtual machines*
7 |
8 | `Kubeadm` is a command-line tool for bootstrapping a `Kubernetes cluster`. It is part of the official `Kubernetes distribution` and is designed to simplify the process of setting up a new `Kubernetes cluster`. `Kubeadm` automates many of the tasks involved in setting up a cluster, such as configuring the `control plane components`, generating `TLS certificates`, and setting up the `Kubernetes networking`.
9 |
10 | >One of the key areas covered in the `Certified Kubernetes Administrator (CKA)` exam is cluster setup, which includes using tools like Kubeadm to bootstrap a new Kubernetes cluster.
11 |
12 | ## Disable swap:
13 |
14 | You must disable `swap` for the `kubelet` to work properly. See the discussion in this issue: https://github.com/kubernetes/kubernetes/issues/53533
15 |
16 | `Kubelet`, which is the primary node agent that runs on each worker node, assumes that each node has a fixed amount of available memory. If the node starts swapping, the kubelet may experience delays or other issues that can impact the `stability` and `reliability` of the Kubernetes cluster. As a result, Kubernetes recommends that `swap` be **disabled** on each node in the cluster.
17 |
18 | To disable `swap` on Linux Machines:
19 |
20 | # First disable swap
21 | sudo swapoff -a
22 |
23 | # And then disable swap on startup in /etc/fstab
24 | sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
25 |
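To verify that swap is off, the `Swap:` row of `free -h` should show zeros:

    free -h
    # Swap:           0B          0B          0B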
26 | ## Installing kubeadm, kubelet, and kubectl packages:
27 |
28 | * `kubelet`: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
29 |
30 | * `kubectl`: the command line util to talk to your cluster.
31 |
32 | * `kubeadm`: the command to bootstrap the cluster and install the remaining components of the `kubernetes cluster`.
33 |
34 | `kubeadm` will not install or manage `kubelet` or `kubectl` for you, so you will need to ensure they match the version of the `Kubernetes control plane` you want `kubeadm` to install for you.
35 |
36 | >Warning: These instructions exclude all Kubernetes packages from any system upgrades. This is because kubeadm and Kubernetes require special attention to upgrade.
37 |
38 | For more information on version skews, see:
39 |
40 | * Kubernetes [version and version-skew policy](https://kubernetes.io/releases/version-skew-policy/)
41 | * Kubeadm-specific [version skew policy](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#version-skew-policy)
42 |
43 | ##### Update the `apt package index` and install `packages` needed to use the `Kubernetes apt repository`:
44 |
45 | sudo apt-get update
46 | sudo apt-get install -y apt-transport-https ca-certificates curl
47 |
48 | ##### Download the `Google Cloud public signing key`:
49 |
50 | sudo mkdir -m 0755 -p /etc/apt/keyrings
51 | sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
52 |
53 | ##### Add the `Kubernetes apt repository`:
54 |
55 | echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
56 |
57 | ##### Update `apt package index` again, install `kubelet`, `kubeadm` and `kubectl`, pin their version to avoid auto-upgrade:
58 |
59 | sudo apt-get update
60 | sudo apt-get install -y kubelet kubeadm kubectl
61 | sudo apt-mark hold kubelet kubeadm kubectl
62 |
63 | The `kubelet` is now restarting every few seconds, as it waits in a crash loop for `kubeadm` to tell it what to do.
64 |
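You can confirm the installed versions and that the packages are pinned:

    kubeadm version -o short
    kubelet --version
    kubectl version --client
    apt-mark showhold    # should list kubeadm, kubectl, kubelet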
65 | >Note: 🔐 The client certificates generated by `kubeadm` expire after `1 year`. See [here](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/) for more details about certificate customization and renewal.
66 |
67 | ## Next
68 |
69 | ▶️ [Bootstrapping control plane and nodes](Boostrapping-control-plane-and-nodes.md#bootstrapping-control-plane-and-nodes)
--------------------------------------------------------------------------------
/docs/en/Provision-VirtualBoxVM-with-Vagrant.md:
--------------------------------------------------------------------------------
1 | # Provision VirtualBox VMs with Vagrant
2 |
3 |
4 |
5 |
6 | `Vagrant` is a tool for creating and managing virtual machine environments. It is often considered a form of `Infrastructure as Code (IaC)`, as it allows us to define and manage our infrastructure using code. However, it is essential to note that Vagrant is primarily a tool for managing virtual machine environments, and is not typically used for managing production infrastructure.
7 |
8 | ## Quick Start
9 |
10 | For a quick start, we'll bring up virtual machines with `VirtualBox` because it is free and works on all major platforms.
11 |
12 | First, download and install [VirtualBox](https://www.virtualbox.org/wiki/Download_Old_Builds) and [Vagrant](https://www.vagrantup.com/downloads.html) on your host.
13 |
14 | We will build our virtual environment from `Vagrantfile`. Clone this repo or just copy this [Vagrantfile](../../Vagrantfile) to your current directory (Reference: [kodekloudhub](https://github.com/kodekloudhub/certified-kubernetes-administrator-course)).
15 |
16 | # -*- mode: ruby -*-
17 | # vi:set ft=ruby sw=2 ts=2 sts=2:
18 |
19 | # Define the number of control plane (MASTER_NODE) and node (WORKER_NODE)
20 | NUM_MASTER_NODE = 1
21 | NUM_WORKER_NODE = 2
22 |
23 | IP_NW = "192.168.56."
24 | MASTER_IP_START = 1
25 | NODE_IP_START = 2
26 |
27 | # All Vagrant configuration is done below. The "2" in Vagrant.configure
28 | # configures the configuration version (we support older styles for
29 | # backwards compatibility). Please don't change it unless you know what
30 | # you're doing.
31 |
32 | Vagrant.configure("2") do |config|
33 |
34 | # The most common configuration options are documented and commented below.
35 | # For a complete reference, please see the online documentation at
36 | # https://docs.vagrantup.com.
37 |
38 | # Every Vagrant development environment requires a box. You can search for
39 | # boxes at https://vagrantcloud.com/search.
40 | # Here are some key details about the "ubuntu/bionic64" Vagrant box:
41 | # Operating System: Ubuntu 18.04 LTS (Bionic Beaver)
42 | # Ubuntu 18.04 LTS will receive security updates and bug fixes
43 | # from Canonical, the company behind Ubuntu, until April 2023
44 | # for desktop and server versions, and until April 2028 for
45 | # server versions with Extended Security Maintenance (ESM) enabled.
46 | # Architecture: x86_64 (64-bit)
47 | # Disk Size: 10 GB
48 | # RAM: 2 GB
49 | # CPUs: 2
50 | # Desktop Environment: None (headless)
51 | # Provider: VirtualBox
52 |
53 | config.vm.box = "ubuntu/bionic64"
54 |
55 | # Disable automatic box update checking. If you disable this, then
56 | # boxes will only be checked for updates when the user runs
57 | # `vagrant box outdated`. This is not recommended.
58 | config.vm.box_check_update = false
59 |
60 | # View the documentation for the VirtualBox for more
61 | # information on available options.
62 | # https://developer.hashicorp.com/vagrant/docs/providers/virtualbox/configuration
63 |
64 | # Provision Control Plane
65 | (1..NUM_MASTER_NODE).each do |i|
66 | config.vm.define "kubemaster" do |node|
67 | node.vm.provider "virtualbox" do |vb|
68 | vb.name = "kubemaster"
69 | vb.memory = 2048
70 | vb.cpus = 2
71 | end
72 | node.vm.hostname = "kubemaster"
73 | node.vm.network :private_network, ip: IP_NW + "#{MASTER_IP_START + i}"
74 | end
75 | end
76 |
77 |
78 | # Provision Nodes
79 | (1..NUM_WORKER_NODE).each do |i|
80 | config.vm.define "kubenode0#{i}" do |node|
81 | node.vm.provider "virtualbox" do |vb|
82 | vb.name = "kubenode0#{i}"
83 | vb.memory = 2048
84 | vb.cpus = 2
85 | end
86 | node.vm.hostname = "kubenode0#{i}"
87 | node.vm.network :private_network, ip: IP_NW + "#{NODE_IP_START + i}"
88 | end
89 | end
90 | end
91 |
92 | In this Vagrantfile, we specify:
93 | - The number of virtual machines: `NUM_MASTER_NODE`, `NUM_WORKER_NODE`
94 | - IP addresses: `IP_NW`, `MASTER_IP_START`, `NODE_IP_START`
95 | - Private networking connectivity: `node.vm.network`
96 | - Unique hostname: `node.vm.hostname`
97 | - Operating system: `config.vm.box`
98 | - System resources: `vb.memory`, `vb.cpus`
99 |
100 | The syntax of Vagrantfile is [Ruby](https://www.ruby-lang.org/en/), but knowledge of it is not necessary to make modifications to the `Vagrantfile`. See [here](https://developer.hashicorp.com/vagrant/docs/vagrantfile) for more information about `Vagrantfile syntax`.
101 |
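Before provisioning, you can sanity-check the file with Vagrant's built-in subcommands:

    vagrant validate    # checks the Vagrantfile for syntax/configuration errors
    vagrant status      # lists the machines defined: kubemaster, kubenode01, kubenode02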
102 | ## Start provisioning
103 |
104 | Run the command:
105 |
106 | vagrant up
107 |
108 | The output should be similar:
109 |
110 | Bringing machine 'kubemaster' up with 'virtualbox' provider...
111 | Bringing machine 'kubenode01' up with 'virtualbox' provider...
112 | Bringing machine 'kubenode02' up with 'virtualbox' provider...
113 | ==> kubemaster: Importing base box 'ubuntu/bionic64'...
114 | ==> kubemaster: Matching MAC address for NAT networking...
115 | ==> kubemaster: Setting the name of the VM: kubemaster
116 | ==> kubemaster: Clearing any previously set network interfaces...
117 | ==> kubemaster: Preparing network interfaces based on configuration...
118 | kubemaster: Adapter 1: nat
119 | kubemaster: Adapter 2: hostonly
120 | ==> kubemaster: Forwarding ports...
121 | kubemaster: 22 (guest) => 2222 (host) (adapter 1)
122 | ==> kubemaster: Running 'pre-boot' VM customizations...
123 | ==> kubemaster: Booting VM...
124 | ==> kubemaster: Waiting for machine to boot. This may take a few minutes...
125 | kubemaster: SSH address: 127.0.0.1:2222
126 | kubemaster: SSH username: vagrant
127 | kubemaster: SSH auth method: private key
128 | kubemaster: Warning: Connection reset. Retrying...
129 | kubemaster: Warning: Connection aborted. Retrying...
130 | kubemaster:
131 | kubemaster: Vagrant insecure key detected. Vagrant will automatically replace
132 | kubemaster: this with a newly generated keypair for better security.
133 | kubemaster:
134 | kubemaster: Inserting generated public key within guest...
135 | kubemaster: Removing insecure key from the guest if it's present...
136 | kubemaster: Key inserted! Disconnecting and reconnecting using new SSH key...
137 | ==> kubemaster: Machine booted and ready!
138 | ==> kubemaster: Checking for guest additions in VM...
139 | kubemaster: The guest additions on this VM do not match the installed version of
140 | kubemaster: VirtualBox! In most cases this is fine, but in rare cases it can
141 | kubemaster: prevent things such as shared folders from working properly. If you see
142 | kubemaster: shared folder errors, please make sure the guest additions within the
143 | kubemaster: virtual machine match the version of VirtualBox you have installed on
144 | kubemaster: your host and reload your VM.
145 | kubemaster:
146 | kubemaster: Guest Additions Version: 5.2.42
147 | kubemaster: VirtualBox Version: 7.0
148 | ==> kubemaster: Setting hostname...
149 | ==> kubemaster: Configuring and enabling network interfaces...
150 | ==> kubemaster: Mounting shared folders...
151 | kubemaster: /vagrant => C:/Users/MSI BRAVO/kubernetes-install-cluster-with-kubeadm
152 | ==> kubenode01: Importing base box 'ubuntu/bionic64'...
153 | ==> kubenode01: Matching MAC address for NAT networking...
154 | ==> kubenode01: Setting the name of the VM: kubenode01
155 | ==> kubenode01: Fixed port collision for 22 => 2222. Now on port 2200.
156 | ==> kubenode01: Clearing any previously set network interfaces...
157 | ==> kubenode01: Preparing network interfaces based on configuration...
158 | kubenode01: Adapter 1: nat
159 | kubenode01: Adapter 2: hostonly
160 | ==> kubenode01: Forwarding ports...
161 | kubenode01: 22 (guest) => 2200 (host) (adapter 1)
162 | ==> kubenode01: Running 'pre-boot' VM customizations...
163 | ==> kubenode01: Booting VM...
164 | ==> kubenode01: Waiting for machine to boot. This may take a few minutes...
165 | kubenode01: SSH address: 127.0.0.1:2200
166 | kubenode01: SSH username: vagrant
167 | kubenode01: SSH auth method: private key
168 | kubenode01: Warning: Connection reset. Retrying...
169 | kubenode01: Warning: Connection aborted. Retrying...
170 | kubenode01:
171 | kubenode01: Vagrant insecure key detected. Vagrant will automatically replace
172 | kubenode01: this with a newly generated keypair for better security.
173 | kubenode01:
174 | kubenode01: Inserting generated public key within guest...
175 | kubenode01: Removing insecure key from the guest if it's present...
176 | kubenode01: Key inserted! Disconnecting and reconnecting using new SSH key...
177 | ==> kubenode01: Machine booted and ready!
178 | ==> kubenode01: Checking for guest additions in VM...
179 | kubenode01: The guest additions on this VM do not match the installed version of
180 | kubenode01: VirtualBox! In most cases this is fine, but in rare cases it can
181 | kubenode01: prevent things such as shared folders from working properly. If you see
182 | kubenode01: shared folder errors, please make sure the guest additions within the
183 | kubenode01: virtual machine match the version of VirtualBox you have installed on
184 | kubenode01: your host and reload your VM.
185 | kubenode01:
186 | kubenode01: Guest Additions Version: 5.2.42
187 | kubenode01: VirtualBox Version: 7.0
188 | ==> kubenode01: Setting hostname...
189 | ==> kubenode01: Configuring and enabling network interfaces...
190 | ==> kubenode01: Mounting shared folders...
191 | kubenode01: /vagrant => C:/Users/MSI BRAVO/kubernetes-install-cluster-with-kubeadm
192 | ==> kubenode02: Importing base box 'ubuntu/bionic64'...
193 | ==> kubenode02: Matching MAC address for NAT networking...
194 | ==> kubenode02: Setting the name of the VM: kubenode02
195 | ==> kubenode02: Fixed port collision for 22 => 2222. Now on port 2201.
196 | ==> kubenode02: Clearing any previously set network interfaces...
197 | ==> kubenode02: Preparing network interfaces based on configuration...
198 | kubenode02: Adapter 1: nat
199 | kubenode02: Adapter 2: hostonly
200 | ==> kubenode02: Forwarding ports...
201 | kubenode02: 22 (guest) => 2201 (host) (adapter 1)
202 | ==> kubenode02: Running 'pre-boot' VM customizations...
203 | ==> kubenode02: Booting VM...
204 | ==> kubenode02: Waiting for machine to boot. This may take a few minutes...
205 | kubenode02: SSH address: 127.0.0.1:2201
206 | kubenode02: SSH username: vagrant
207 | kubenode02: SSH auth method: private key
208 | kubenode02: Warning: Connection reset. Retrying...
209 | kubenode02: Warning: Connection aborted. Retrying...
210 | kubenode02:
211 | kubenode02: Vagrant insecure key detected. Vagrant will automatically replace
212 | kubenode02: this with a newly generated keypair for better security.
213 | kubenode02:
214 | kubenode02: Inserting generated public key within guest...
215 | kubenode02: Removing insecure key from the guest if it's present...
216 | kubenode02: Key inserted! Disconnecting and reconnecting using new SSH key...
217 | ==> kubenode02: Machine booted and ready!
218 | ==> kubenode02: Checking for guest additions in VM...
219 | kubenode02: The guest additions on this VM do not match the installed version of
220 | kubenode02: VirtualBox! In most cases this is fine, but in rare cases it can
221 | kubenode02: prevent things such as shared folders from working properly. If you see
222 | kubenode02: shared folder errors, please make sure the guest additions within the
223 | kubenode02: virtual machine match the version of VirtualBox you have installed on
224 | kubenode02: your host and reload your VM.
225 | kubenode02:
226 | kubenode02: Guest Additions Version: 5.2.42
227 | kubenode02: VirtualBox Version: 7.0
228 | ==> kubenode02: Setting hostname...
229 | ==> kubenode02: Configuring and enabling network interfaces...
230 | ==> kubenode02: Mounting shared folders...
231 | kubenode02: /vagrant => C:/Users/MSI BRAVO/kubernetes-install-cluster-with-kubeadm
232 |
233 | You can verify the provisioned VMs with:
234 |
235 | vagrant status
236 |
237 | It lists the virtual machines managed by Vagrant:
238 |
239 | Current machine states:
240 |
241 | kubemaster running (virtualbox)
242 | kubenode01 running (virtualbox)
243 | kubenode02 running (virtualbox)
244 |
245 | This environment represents multiple VMs. The VMs are all listed
246 | above with their current state. For more information about a specific
247 | VM, run `vagrant status NAME`.
248 |
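To double-check that the private network came up with the expected addresses, you can run a command on a guest over SSH. A quick probe, assuming the host-only adapter appears as `enp0s8` inside the `ubuntu/bionic64` guests:

    vagrant ssh kubemaster -c "ip -4 addr show enp0s8"

The `inet` line should show `192.168.56.2` for `kubemaster` (and `.3`/`.4` for the worker nodes).
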
249 | #### Issue: vagrant up times out on 'default: SSH auth method: private key'
250 |
251 | This issue happens when a virtual machine fails to boot. By default, VirtualBox uses a TSC mode called "RealTscOffset", which adjusts the TSC (Time Stamp Counter) value on the guest machine to compensate for any time differences between the host and guest.
252 | If you're using `Windows` with `Hyper-V` enabled, you must disable `Hyper-V` to avoid a conflict with `VirtualBox` that can cause `vagrant up` to time out.
253 |
254 | To disable `Hyper-V` completely, enter the following command in cmd:
255 |
256 | bcdedit /set hypervisorlaunchtype off
257 |
258 | followed by a full shutdown and power-on of the machine.
259 |
260 | >Note that `bcdedit` is short for `boot configuration data edit`, i.e. it affects what software will be loaded on the next OS boot, so it is essential that you perform a full boot from a complete power down (not a suspend and restart) in order for the changes to take effect. Leave the PC powered down for `10 seconds` before starting it again. If your PC does not offer a full shutdown from the start menu you could try running `shutdown /p` from an admin command prompt. On a laptop you may have to remove the battery.
261 |
262 | Reinstall `Vagrant` and `VirtualBox`. If the issue persists, you may have to reinstall `Windows`!
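
If you need `Hyper-V` back later (for example for WSL 2 or Docker Desktop), the change is reversible with the same tool; run this from an admin command prompt and do another full power cycle:

    bcdedit /set hypervisorlaunchtype auto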
263 |
264 | ## Remote into a virtual machine with Vagrant
265 |
266 | Just run the command, naming the machine you want to enter (`kubemaster`, `kubenode01`, or `kubenode02`):
267 | 
268 | vagrant ssh kubemaster
269 |
270 | 
271 |
272 | As you can see in the output of `vagrant up`, Vagrant forwarded port 22 and generated a keypair for each machine even though the `Vagrantfile` contains no SSH configuration.
273 | For more information, you can see [Vagrant Share: SSH Sharing](https://developer.hashicorp.com/vagrant/docs/share/ssh) and [Vagrantfile: config.ssh](https://developer.hashicorp.com/vagrant/docs/vagrantfile/ssh_settings).
274 |
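If you prefer your own SSH client or an editor's remote-SSH extension, Vagrant can print the connection details it generated; `vagrant ssh-config` shows the forwarded port and the path to the private key for a machine:

    vagrant ssh-config kubemaster
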
275 | Then you're good to go!
276 |
277 | ## Next
278 |
279 | ▶️ [Installing a container runtime (containerd) on all virtual machines](Installing-a-container-runtime.md/#installing-a-container-runtime-containerd-on-all-virtual-machines)
--------------------------------------------------------------------------------
/docs/images/cleaning.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holdennguyen/kubernetes-install-cluster-with-kubeadm/0df67e42f263c24c1dce4f13ae720042c51c9271/docs/images/cleaning.png
--------------------------------------------------------------------------------
/docs/images/cluster-k8s.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holdennguyen/kubernetes-install-cluster-with-kubeadm/0df67e42f263c24c1dce4f13ae720042c51c9271/docs/images/cluster-k8s.png
--------------------------------------------------------------------------------
/docs/images/components-of-kubernetes.svg:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/docs/images/cri.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holdennguyen/kubernetes-install-cluster-with-kubeadm/0df67e42f263c24c1dce4f13ae720042c51c9271/docs/images/cri.png
--------------------------------------------------------------------------------
/docs/images/us.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holdennguyen/kubernetes-install-cluster-with-kubeadm/0df67e42f263c24c1dce4f13ae720042c51c9271/docs/images/us.png
--------------------------------------------------------------------------------
/docs/images/vagrant-logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holdennguyen/kubernetes-install-cluster-with-kubeadm/0df67e42f263c24c1dce4f13ae720042c51c9271/docs/images/vagrant-logo.png
--------------------------------------------------------------------------------
/docs/images/vagrant-ssh-vscode.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holdennguyen/kubernetes-install-cluster-with-kubeadm/0df67e42f263c24c1dce4f13ae720042c51c9271/docs/images/vagrant-ssh-vscode.png
--------------------------------------------------------------------------------
/docs/images/vi.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holdennguyen/kubernetes-install-cluster-with-kubeadm/0df67e42f263c24c1dce4f13ae720042c51c9271/docs/images/vi.png
--------------------------------------------------------------------------------
/docs/vi/Boostrapping-control-plane-and-nodes.md:
--------------------------------------------------------------------------------
1 | # Bootstrapping the control plane and nodes
2 |
3 |
4 |
5 |
6 | ## Initialize the control plane
7 |
8 | The `control plane` is where the cluster `components` run, including `etcd` (the `cluster`'s database) and the `API Server` (which `kubectl` commands talk to).
9 | 
10 | To initialize it, run the following command on the VM we named `kubemaster`:
11 |
12 | sudo kubeadm init --apiserver-advertise-address=192.168.56.2 --pod-network-cidr=10.244.0.0/16
13 |
14 | * `--apiserver-advertise-address=192.168.56.2`: the IP address the API server advertises and listens on. In this guide it is the IP address of the `kubemaster` VM.
15 | * `--pod-network-cidr=10.244.0.0/16`: the `control plane` automatically allocates IP addresses from the specified `CIDR` to the `pods` on every `node` in the `cluster`. **Choose a `CIDR` that does not overlap with any existing network range, to avoid IP address conflicts.**
16 | 
17 | `kubeadm init` first runs a series of preflight checks to make sure the machine is ready to run `Kubernetes`. These checks print warnings and exit on errors. `kubeadm init` then downloads and installs the `control plane` components. This can take several minutes; when it finishes you will see:
18 |
19 | Your Kubernetes control-plane has initialized successfully!
20 |
21 | To start using your cluster, you need to run the following as a regular user:
22 |
23 | mkdir -p $HOME/.kube
24 | sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
25 | sudo chown $(id -u):$(id -g) $HOME/.kube/config
26 |
27 | You should now deploy a Pod network to the cluster.
28 | Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
29 | /docs/concepts/cluster-administration/addons/
30 |
31 | You can now join any number of machines by running the following on each node
32 | as root:
33 |
34 | kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
35 |
36 | **Save the `kubeadm join...` command from the successful init output; you will use it to add nodes to the `cluster`.**
37 | 
38 | To use `kubectl` as a `non-root user`, run the following commands (they are also shown in the output of a successful `kubeadm init`):
39 |
40 | mkdir -p $HOME/.kube
41 | sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
42 | sudo chown $(id -u):$(id -g) $HOME/.kube/config
43 |
44 | Alternatively, if you are the `root user`, you can run:
45 |
46 | export KUBECONFIG=/etc/kubernetes/admin.conf
47 |
48 | >Warning: `kubeadm` issues the certificate in `admin.conf` with `Subject: O = system:masters, CN = kubernetes-admin`.
49 | `system:masters` is a super-user group that bypasses the authorization layer (such as [RBAC](https://docs.oracle.com/cd/E19253-01/816-4557/rbac-1/)). Never share the `admin.conf` file with anyone; instead, grant users custom permissions by generating a `kubeconfig` file for them with the `kubeadm kubeconfig` command. For more details, read [Generating kubeconfig files for additional users](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#kubeconfig-additional-users).
50 |
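As a minimal sketch of that safer route (the flags are illustrative; check the linked page for the exact options in your kubeadm version), you could generate a time-limited kubeconfig for a hypothetical user `johndoe` on the control plane:

    # Illustrative example: print a kubeconfig for "johndoe", valid for 24 hours
    sudo kubeadm kubeconfig user --client-name johndoe --validity-period 24h > johndoe.kubeconfig
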
51 | ## Join the nodes to the cluster
52 | 
53 | Run the command from the `kubeadm init` output on all the `worker nodes` (the VMs `kubenode01` and `kubenode02`) with sudo permission:
54 |
55 | sudo kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
56 |
57 | **If you did not save the `kubeadm join` command, go back to the control-plane machine `kubemaster`.**
58 | 
59 | **Get the `<token>` with:**
60 |
61 | kubeadm token list
62 |
63 | The output will be similar to:
64 |
65 | TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
66 | 8ewj1p.9r9hcjoqgajrj4gi 23h 2018-06-12T02:51:28Z authentication, The default bootstrap system:
67 | signing token generated by bootstrappers:
68 | 'kubeadm init'. kubeadm:
69 | default-node-token
70 |
71 | By default, a `<token>` expires after `24 hours`. If you are adding a `worker node` after its `<token>` has expired, create a new `<token>` by running the following command on the `control-plane node`:
72 |
73 | kubeadm token create
74 |
75 | The output will contain a new `<token>`, similar to:
76 |
77 | 5didvk.d09sbcov8ph2amjw
78 |
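Rather than assembling the join command piece by piece, you can also ask `kubeadm` to print a complete, ready-to-run one (a standard flag of `kubeadm token create`):

    kubeadm token create --print-join-command
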
79 | **Get the `<hash>` with:**
80 |
81 | openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
82 | openssl dgst -sha256 -hex | sed 's/^.* //'
83 |
84 | The output will be similar to:
85 |
86 | 8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
87 |
88 | **Get the `<control-plane-host>:<control-plane-port>` with:**
89 | 
90 | cat $HOME/.kube/config | grep server
91 |
92 | The output will be similar to:
93 |
94 | server: https://192.168.56.2:6443
95 |
96 | **The `<control-plane-host>:<control-plane-port>`** is **`192.168.56.2:6443`**
97 | 
98 | **Successfully joining a `worker node` to the `Kubernetes cluster`**
99 | 
100 | You will see a success message like the following on each `worker node`:
101 |
102 | [preflight] Running pre-flight checks
103 |
104 | ... (log output of join workflow) ...
105 |
106 | Node join complete:
107 | * Certificate signing request sent to control-plane and response
108 | received.
109 | * Kubelet informed of new secure connection details.
110 |
111 | Run 'kubectl get nodes' on control-plane to see this machine join.
112 |
113 | After a few seconds, you will see this `node` in the output of `kubectl get nodes` when run on the `control plane node`.
114 |
115 | ## Check the components of the Kubernetes cluster
116 |
117 | 
118 |
119 | On the control plane `kubemaster` and the worker nodes `kubenode01`, `kubenode02`, run:
120 |
121 | sudo netstat -lntp
122 |
123 | All the components with their corresponding LISTEN ports will be displayed, as shown below:
124 |
125 | **kubemaster**
126 |
127 | Active Internet connections (only servers)
128 | Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
129 | tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 8013/kubelet
130 | tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 8182/kube-proxy
131 | tcp 0 0 192.168.56.2:2379 0.0.0.0:* LISTEN 7811/etcd
132 | tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 7811/etcd
133 | tcp 0 0 192.168.56.2:2380 0.0.0.0:* LISTEN 7811/etcd
134 | tcp 0 0 127.0.0.1:2381 0.0.0.0:* LISTEN 7811/etcd
135 | tcp 0 0 127.0.0.1:10257 0.0.0.0:* LISTEN 7791/kube-controlle
136 | tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 7907/kube-scheduler
137 | tcp 0 0 127.0.0.1:34677 0.0.0.0:* LISTEN 2826/containerd
138 | tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 817/systemd-resolve
139 | tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1380/sshd
140 | tcp6 0 0 :::10250 :::* LISTEN 8013/kubelet
141 | tcp6 0 0 :::6443 :::* LISTEN 7884/kube-apiserver
142 | tcp6 0 0 :::10256 :::* LISTEN 8182/kube-proxy
143 | tcp6 0 0 :::22 :::* LISTEN 1380/sshd
144 |
145 | >`kube-apiserver` appears to LISTEN only on `IPv6 :::6443`, but by listening on an `IPv6` address the `API server` also accepts connections made to `IPv4` addresses, via so-called `IPv4-mapped IPv6 addresses`. This is why `kubeadm join` succeeds on the `worker nodes` even though `--apiserver-advertise-address` is an `IPv4` address.
146 | For example, the IPv4 address `192.168.56.2` can be represented as the IPv6 address `::ffff:192.168.56.2`.
147 |
148 | **kubenode***
149 |
150 | Active Internet connections (only servers)
151 | Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
152 | tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 8987/kubelet
153 | tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 9208/kube-proxy
154 | tcp 0 0 127.0.0.1:39989 0.0.0.0:* LISTEN 2785/containerd
155 | tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 782/systemd-resolve
156 | tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1431/sshd
157 | tcp6 0 0 :::10250 :::* LISTEN 8987/kubelet
158 | tcp6 0 0 :::10256 :::* LISTEN 9208/kube-proxy
159 | tcp6 0 0 :::22 :::* LISTEN 1431/sshd
160 |
161 | ## Install a Pod network add-on
162 | 
163 | Run `kubectl get nodes` on the `control plane` to check the `nodes` that have joined the `cluster`:
164 |
165 | NAME         STATUS     ROLES           AGE    VERSION
166 | kubemaster   NotReady   control-plane   3h1m   v1.26.2
167 | kubenode01   NotReady   <none>          3h     v1.26.2
168 | kubenode02   NotReady   <none>          179m   v1.26.2
169 |
170 | As you can see, the VMs `kubemaster`, `kubenode01`, `kubenode02` have been added to the `Kubernetes cluster`, but their `STATUS` is `NotReady`.
171 | 
172 | Run `kubectl get pods -A` on the `control plane` to see all the `pods` in the `kube-system namespace`:
173 |
174 | NAMESPACE NAME READY STATUS RESTARTS AGE
175 | kube-system coredns-787d4945fb-5cwlq 0/1 Pending 0 3h8m
176 | kube-system coredns-787d4945fb-q2s4p 0/1 Pending 0 3h8m
177 | kube-system etcd-controlplane 1/1 Running 0 3h8m
178 | kube-system kube-apiserver-controlplane 1/1 Running 0 3h8m
179 | kube-system kube-controller-manager-controlplane 1/1 Running 0 3h8m
180 | kube-system kube-proxy-7twwr 1/1 Running 0 3h7m
181 | kube-system kube-proxy-8mxt7 1/1 Running 0 3h8m
182 | kube-system kube-proxy-v9rc6 1/1 Running 0 3h8m
183 | kube-system kube-scheduler-controlplane 1/1 Running 0 3h9m
184 |
185 | You must deploy a `Container Network Interface (CNI)`-based `Pod network add-on` so that your `pods` can communicate with each other. `Cluster DNS (CoreDNS)` will not start up until a `pod network` is installed.
186 | 
187 | `Pod network add-ons` are `Kubernetes-specific CNI plugins` that provide **pod-to-pod networking** within a `Kubernetes cluster`. They create a `virtual overlay network` that spans the whole `cluster` and give each `pod` its own IP address.
188 | 
189 | While `CNI plugins` can be used with any `container runtime`, `pod network add-ons` are specific to `Kubernetes` and provide the networking functionality required by the `Kubernetes networking model`. Some examples of `pod network add-ons` are `Calico`, `Flannel`, and `Weave Net`. (See more `pod network add-ons` [here](https://kubernetes.io/docs/concepts/cluster-administration/addons/#networking-and-network-policy))
190 | 
191 | In this guide we will use the [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) add-on. It is easy to install and use, and a good fit for small-scale deployments.
192 | 
193 | To install it on the `Kubernetes cluster`, run the following command on the control plane `kubemaster`:
194 |
195 | kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
196 |
197 | The output will be:
198 |
199 | serviceaccount/weave-net created
200 | clusterrole.rbac.authorization.k8s.io/weave-net created
201 | clusterrolebinding.rbac.authorization.k8s.io/weave-net created
202 | role.rbac.authorization.k8s.io/weave-net created
203 | rolebinding.rbac.authorization.k8s.io/weave-net created
204 | daemonset.apps/weave-net created
205 |
206 | Make sure the `Pod network range` does not overlap with the networks of the machines in the `cluster`. If you passed `--pod-network-cidr` to `kubeadm init`, you must add the `IPALLOC_RANGE` parameter to the YAML of the `Weave network plugin`. Run the following command on the control plane `kubemaster`:
207 |
208 | kubectl edit ds weave-net -n kube-system
209 |
210 | This command lets you edit the YAML of the `weave-net daemon set`. Find the `spec` of the `container` with `name: weave` and add the `IPALLOC_RANGE` environment variable, set to the value you passed as `--pod-network-cidr` to `kubeadm init`. (The file opens in the `vi` editor.)
211 |
212 | spec:
213 | ...
214 | template:
215 | ...
216 | spec:
217 | ...
218 | containers:
219 | ...
220 | env:
221 | - name: IPALLOC_RANGE
222 | value: 10.244.0.0/16
223 | ...
224 | name: weave
225 |
226 | Save the file and wait a few minutes for the `weave-net daemon set` to restart its `pods`.
227 |
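Instead of guessing, you can watch the rollout finish; both commands below are standard `kubectl` (the `name=weave-net` label is the one used by the Weave manifest):

    kubectl -n kube-system rollout status daemonset/weave-net
    kubectl -n kube-system get pods -l name=weave-net
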
228 | ## Successful setup
229 | 
230 | Run `kubectl get pods -A` on the control plane again to check; you should see the 3 pods of the `weave-net daemon set` and the `coredns` pods running. (STATUS: Running)
231 |
232 | NAMESPACE NAME READY STATUS RESTARTS AGE
233 | kube-system coredns-787d4945fb-48tbh 1/1 Running 0 6m57s
234 | kube-system coredns-787d4945fb-nrsp7 1/1 Running 0 6m57s
235 | kube-system etcd-kubemaster 1/1 Running 0 7m10s
236 | kube-system kube-apiserver-kubemaster 1/1 Running 0 7m12s
237 | kube-system kube-controller-manager-kubemaster 1/1 Running 0 7m10s
238 | kube-system kube-proxy-8sxss 1/1 Running 0 4m19s
239 | kube-system kube-proxy-j7z6x 1/1 Running 0 6m58s
240 | kube-system kube-proxy-nj8j2 1/1 Running 0 4m14s
241 | kube-system kube-scheduler-kubemaster 1/1 Running 0 7m10s
242 | kube-system weave-net-7mldz 2/2 Running 0 2m
243 | kube-system weave-net-dk5dl 2/2 Running 0 70s
244 | kube-system weave-net-znhnm 2/2 Running 0 2m
245 |
246 | Run `kubectl get nodes` to check the state of the `nodes` in the `cluster`; they should all be ready. (STATUS: Ready)
247 |
248 | NAME         STATUS   ROLES           AGE     VERSION
249 | kubemaster   Ready    control-plane   9m54s   v1.26.2
250 | kubenode01   Ready    <none>          6m59s   v1.26.2
251 | kubenode02   Ready    <none>          6m54s   v1.26.2
252 |
253 | >If you are wondering why the `ROLES` column of the `worker nodes` shows `<none>`: it means those `nodes` are not running any `control plane components` or `Kubernetes services` that would assign them a `role`. `Worker nodes` normally do not run `control plane components`, so this is completely normal in a `Kubernetes cluster`.
254 |
255 | `Networking` is a central part of `Kubernetes`; see the [Kubernetes networking model](https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model) for more information.
256 | 
257 | If you want to customize a `cluster` built with kubeadm, read [Create cluster kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).
258 | 
259 | ▶️ [Cleaning up the environment](Clean-up-environment.md/#clean-up-environment)
260 |
--------------------------------------------------------------------------------
/docs/vi/Clean-up-environment.md:
--------------------------------------------------------------------------------
1 | # Cleaning up the environment
2 |
3 |
4 |
5 |
6 | Sooner or later you will run into errors you do not know how to fix, or simply want to start over from scratch; this section is for you!
7 | 
8 | ## Keep the VMs, clean up only the Kubernetes cluster
9 | 
10 | #### Remove a node
11 | 
12 | Run this command to gracefully evict all the `pods` running on the `node`:
13 |
14 | kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets
15 |
16 | Reset the state installed by kubeadm:
17 |
18 | kubeadm reset
19 |
20 | This reset process does not reset or clean up `iptables rules` or `IPVS tables`. If you want to reset `iptables`, you must do so manually:
21 |
22 | iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
23 |
24 | If you want to reset the `IPVS tables`, run:
25 |
26 | ipvsadm -C
27 |
28 | Now remove the `node` from the `cluster`:
29 | 
30 | kubectl delete node <node name>
31 |
32 | If you want to set things up again, run `kubeadm init` (to make the machine a `control plane`) or `kubeadm join` (to make it a `worker node`) with the appropriate arguments.
33 | 
34 | #### Clean up the control plane
35 | 
36 | Revert all the changes that `kubeadm init` made on the machine with:
37 |
38 | sudo kubeadm reset --kubeconfig="$HOME/.kube/config"
39 |
40 | `--kubeconfig=string`: the `kubeconfig file` to use when talking to the `cluster`; if not set, a number of standard locations are searched for an existing `kubeconfig file`. (If you configured `kubectl` for a `non-root user` after `kubeadm init`, do not forget to delete the `$HOME/.kube/config` file as well.)
41 |
42 | As with removing a `node` above, this reset does not reset or clean up `iptables rules` or `IPVS tables`. If you want to reset them, do it manually as described there.
43 |
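The reset also leaves any CNI configuration behind; if you want a truly clean slate, remove it by hand (a step called out in the `kubeadm reset` documentation):

    sudo rm -rf /etc/cni/net.d
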
44 | ## Remove all the virtual machines
45 | 
46 | Because `Vagrant` automates the creation of the VMs, you can remove them all with a single command. Run it in your working directory on the host machine (where `Vagrant` and `VirtualBox` are installed):
47 |
48 | vagrant destroy
49 |
50 | >If you only want to power the VMs off, run `vagrant halt` instead. Then, the next time you run `vagrant up`, all the work done on the VMs will still be there instead of being lost. Learn more about this command [here](https://developer.hashicorp.com/vagrant/docs/cli/halt).
--------------------------------------------------------------------------------
/docs/vi/Installing-a-container-runtime.md:
--------------------------------------------------------------------------------
1 | # Container Runtimes
2 |
3 |
4 |
5 |
6 | *Perform this step on all the virtual machines*
7 | 
8 | > Note: `Dockershim` was removed from the `Kubernetes` project in version `1.24`. Read the [Dockershim Removal FAQ](https://kubernetes.io/blog/2022/02/17/dockershim-faq/) for more information.
9 | 
10 | > `Dockershim` was a `Kubernetes` component used to communicate with the `Docker runtime`. It was introduced as a temporary solution to let `Kubernetes` use `Docker` as a `container runtime` before `Kubernetes` had its own `container runtime interface (CRI)`.
11 | 
12 | You need to install a `container runtime` on every node in the K8s (Kubernetes) cluster so that `Pods` can run there. K8s 1.26 requires a `container runtime` that conforms to the K8s `Container Runtime Interface (CRI)`. Here are some common `container runtimes` used with Kubernetes:
13 | * [containerd](https://containerd.io/)
14 | * [CRI-O](https://cri-o.io/)
15 | * [Docker Engine](https://docs.docker.com/engine/)
16 | * [Mirantis Container Runtime](https://docs.mirantis.com/mcr/20.10/overview.html)
17 |
18 | You can find installation instructions for each of them [here](https://kubernetes.io/docs/setup/production-environment/container-runtimes/). In this guide, we will use [**containerd**](https://github.com/containerd/containerd/blob/main/docs/getting-started.md).
19 | 
20 | ## Install and configure prerequisites
21 | 
22 | ### Load the required Linux kernel modules
23 |
24 | cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
25 | overlay
26 | br_netfilter
27 | EOF
28 | 
29 | sudo modprobe overlay
30 | sudo modprobe br_netfilter
31 | 
32 | Verify that the `overlay` and `br_netfilter` modules are loaded:
33 | 
34 | lsmod | grep overlay
35 | lsmod | grep br_netfilter
36 | 
37 | ### Forward IPv4 and let iptables see bridged traffic
38 | 
39 | cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
40 | net.bridge.bridge-nf-call-iptables  = 1
41 | net.bridge.bridge-nf-call-ip6tables = 1
42 | net.ipv4.ip_forward                 = 1
43 | EOF
44 | 
45 | Apply the sysctl params without reboot:
46 | 
47 | sudo sysctl --system
48 | 
49 | Verify that the settings are applied:
50 | 
51 | sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
52 | 
53 | The output should show all three values set to 1:
54 | 
55 | net.bridge.bridge-nf-call-iptables = 1
56 | net.bridge.bridge-nf-call-ip6tables = 1
57 | net.ipv4.ip_forward = 1
58 | 
59 | ## Install containerd
60 | 
61 | We will install `containerd` from the official Docker apt repository, which provides the `containerd.io` package.
62 | 
63 | >`Container Network Interface (CNI)` is the standard interface for configuring networking for `containers` on `Linux`. It enables
64 | a wide range of networking options, including `overlay networks`, `load balancing`, and security policies for containerized
65 | applications. In this guide, we will use a `CNI plugin` through the `Pod network add-on` that will be installed in a later step;
66 | see [Bootstrapping the control plane and nodes](/docs/vi/Boostrapping-control-plane-and-nodes.md/#installing-a-pod-network-add-on).
68 |
69 | Update the apt package index and install the packages needed to allow apt to use a repository over HTTPS:
70 |
71 | sudo apt-get update
72 |
73 | sudo apt-get install \
74 | ca-certificates \
75 | curl \
76 | gnupg \
77 | lsb-release
78 |
79 | Add Docker's official GPG key:
80 |
81 | sudo mkdir -m 0755 -p /etc/apt/keyrings
82 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
83 |
84 | Use the following command to set up the repository:
85 |
86 | echo \
87 | "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
88 | $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
89 |
90 | Update the apt package index again after adding the repo:
91 |
92 | sudo apt-get update
93 |
94 | Install the latest version of the `containerd.io` package:
95 |
96 | sudo apt-get install containerd.io
97 |
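As a quick sanity check (plain systemd commands, nothing specific to this guide), confirm that the `containerd` service is up and running:

    sudo systemctl status containerd --no-pager
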
98 | ## Cgroup drivers
99 |
100 | On Linux, [control groups](https://docs.kernel.org/admin-guide/cgroup-v1/cgroups.html) are used to constrain the resources allocated to processes.
101 | 
102 | Both the `kubelet` and the underlying `container runtime` need `control groups` to enforce resource management for `pods` and `containers`, such as cpu/memory requests and limits. To interface with the `control groups`, the `kubelet` and the `container runtime` each use a `cgroup driver`. **It is critical that the `kubelet` and the `container runtime` use the same `cgroup driver`, with the same configuration.**
103 |
104 | The two supported `cgroup drivers` are:
105 |
106 | * **cgroupfs**
107 | * **systemd**
108 |
109 | Because our VMs use **systemd**, we will configure both the `kubelet` and `containerd` to use **systemd** as their `cgroup driver`.
110 |
111 |
112 | >Depending on your Linux distribution and version, you may see a different `cgroup driver`. To see which one is currently in use, check the `cgroup mount point` by running: `cat /proc/mounts | grep cgroup`
113 |
114 | #### Configure the `cgroup driver` for `containerd`
115 | 
116 | To make `containerd` use `systemd` as its `cgroup driver`, run:
117 |
118 | sudo vi /etc/containerd/config.toml
119 |
120 | and replace the entire contents of the `config.toml` file with the following configuration:
121 |
122 | [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
123 | [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
124 | SystemdCgroup = true
125 |
126 | Remember to restart `containerd` to apply the change:
127 |
128 | sudo systemctl restart containerd
129 |
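To confirm the setting took effect, you can dump containerd's effective configuration (your file merged with the built-in defaults) and look for the flag:

    sudo containerd config dump | grep SystemdCgroup
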
130 | #### Configure the `cgroup driver` for `kubelet`
131 | 
132 | Since version 1.22, if the user does not set the `cgroupDriver` field in the `KubeletConfiguration`, `kubeadm` defaults it to **systemd**.
133 | We do not need to do anything to configure the `cgroup driver` for the `kubelet` in this guide, because we will use `kubeadm` to bootstrap the K8s cluster in the following steps.
134 | See [here](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/#configuring-the-kubelet-cgroup-driver) for more information on how to configure it.
135 |
136 | ## Next
137 | 
138 | ▶️ [Installing kubeadm, kubelet and kubectl on all virtual machines](Installing-kubeadm-kubelet-kubectl.md/#kubernetes-cluster-with-containerd)
--------------------------------------------------------------------------------
/docs/vi/Installing-kubeadm-kubelet-kubectl.md:
--------------------------------------------------------------------------------
1 | # Kubeadm
2 |
3 |
4 |
5 |
6 | *Perform this step on all the virtual machines*
7 | 
8 | `Kubeadm` is a command-line tool used to bootstrap a `Kubernetes cluster`. It is an official `Kubernetes` tool designed to simplify the process of setting up a `Kubernetes cluster`. `Kubeadm` automates many of the tasks involved in cluster setup, such as configuring the `control plane components`, generating `TLS certificates`, and setting up `Kubernetes networking`.
9 | 
10 | >One of the main topics covered in the `Certified Kubernetes Administrator (CKA)` exam is cluster setup, including the use of tools such as kubeadm to bootstrap a new `Kubernetes cluster`.
11 |
12 | ## Disable swap space:
13 | 
14 | You must disable `swap` for the `kubelet` to work properly. See the discussion in this issue: https://github.com/kubernetes/kubernetes/issues/53533
15 | 
16 | The `kubelet`, the primary `node agent` that runs on each `worker node`, assumes that every `node` has a fixed amount of available memory. If a `node` starts to [swap](https://web.mit.edu/rhel-doc/5/RHEL-5-manual/Deployment_Guide-en-US/ch-swapspace.html), the `kubelet` can stall or run into other problems that affect the `stability` and `reliability` of the `Kubernetes cluster`. That is why `Kubernetes` recommends that `swap` be **disabled** on every `node` in the `cluster`.
17 | 
18 | To disable `swap` on a Linux machine, run:
19 |
20 | # First, turn off swap
21 | sudo swapoff -a
22 | 
23 | # Then keep swap disabled across reboots via /etc/fstab
24 | sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
25 |
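To confirm swap is really off (plain procps, nothing guide-specific), the Swap line should now show all zeros:

    free -h
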
26 | ## Install kubeadm, kubelet and kubectl:
27 | 
28 | * `kubelet`: the component that runs on every machine in the `cluster` and does things like starting `pods` and `containers`.
29 | 
30 | * `kubectl`: the command line tool used to talk to the `cluster`.
31 | 
32 | * `kubeadm`: the tool that installs the remaining components of the `kubernetes cluster`.
33 |
34 | `kubeadm` will not install or manage `kubelet` or `kubectl` for you, so make sure their versions match the other `Kubernetes control plane` components that `kubeadm` installs for you.
35 | 
36 | >Warning: this guide pins the `Kubernetes packages` so that they are excluded from system upgrades, because `kubeadm` and `Kubernetes` need special attention whenever they are upgraded.
37 | 
38 | For more information on which version skews are supported, see:
39 |
40 | * Kubernetes [version and version-skew policy](https://kubernetes.io/releases/version-skew-policy/)
41 | * Kubeadm-specific [version skew policy](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#version-skew-policy)
42 |
43 | ##### Update the `apt package index` and install the `packages` needed to use the `Kubernetes apt repository`:
44 |
45 | sudo apt-get update
46 | sudo apt-get install -y apt-transport-https ca-certificates curl
47 |
48 | ##### Download the `Google Cloud public signing key`:
49 |
50 | sudo mkdir -m 0755 -p /etc/apt/keyrings
51 | sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
52 |
53 | ##### Add the `Kubernetes apt repository`:
54 |
55 | echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
56 |
57 | ##### Update the `apt package index` again, install the latest versions of `kubelet`, `kubeadm` and `kubectl`, and pin their current versions to prevent automatic upgrades:
58 |
59 | sudo apt-get update
60 | sudo apt-get install -y kubelet kubeadm kubectl
61 | sudo apt-mark hold kubelet kubeadm kubectl
62 |
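To verify the installation, each binary has a standard version flag:

    kubeadm version
    kubelet --version
    kubectl version --client
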
63 | The `kubelet` will now restart every few seconds, crashlooping as it waits for `kubeadm` to tell it what to do.
64 |
65 | >Note: 🔐 The `client certificates` generated by `kubeadm` expire after `1 year`. Read more [here](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/) about customizing and renewing `certificates`.
66 | 
67 | ## Next
68 | 
69 | ▶️ [Bootstrapping the control plane and nodes](Boostrapping-control-plane-and-nodes.md/#boostraping-control-plane-and-nodes)
--------------------------------------------------------------------------------
/docs/vi/Provision-VirtualBoxVM-with-Vagrant.md:
--------------------------------------------------------------------------------
1 | # Vagrant
2 |
3 |
4 |
5 |
6 |
7 | `Vagrant` is a tool for building and managing virtual machine environments. It is often considered a form of `Infrastructure as Code (IaC)`, letting us provision and manage our infrastructure with code instead of clicking through a console by hand. Vagrant is primarily a tool for virtual machine development environments and is generally not used to manage Production infrastructure.
8 | 
9 | ## Getting started
10 | 
11 | We will create the virtual machines with `VirtualBox` because it is free and works well on every operating system.
12 | 
13 | First, download and install [VirtualBox](https://www.virtualbox.org/wiki/Download_Old_Builds) and [Vagrant](https://www.vagrantup.com/downloads.html) on your computer.
14 | 
15 | We will build the VMs as declared in the `Vagrantfile`. Clone this repo with Git, or simply copy this [Vagrantfile](../../Vagrantfile) into your working directory. (Reference: [kodekloudhub](https://github.com/kodekloudhub/certified-kubernetes-administrator-course))
16 |
17 | # -*- mode: ruby -*-
18 | # vi:set ft=ruby sw=2 ts=2 sts=2:
19 |
20 | # Define the number of control plane machines (MASTER_NODE) and worker machines (WORKER_NODE)
21 | NUM_MASTER_NODE = 1
22 | NUM_WORKER_NODE = 2
23 |
24 | IP_NW = "192.168.56."
25 | MASTER_IP_START = 1
26 | NODE_IP_START = 2
27 |
28 | # All Vagrant configuration is declared below. The "2" in Vagrant.configure
29 | # is the configuration version being used
30 | # Don't change it unless you know what you're doing
31 |
32 | Vagrant.configure("2") do |config|
33 |
34 | # For a complete reference, see the online documentation at
35 | # https://docs.vagrantup.com.
36 |
37 | # Every Vagrant development environment requires a box. You can search for
38 | # boxes at https://vagrantcloud.com/search.
39 | # Here are some key details about the "ubuntu/bionic64" Vagrant box:
40 | # Operating system: Ubuntu 18.04 LTS (Bionic Beaver)
41 | # Ubuntu 18.04 LTS will receive security updates and bug fixes
42 | # from Canonical, the company behind Ubuntu, until April 2023
43 | # for desktop and server versions, and until April 2028 for
44 | # server versions with Extended Security Maintenance (ESM) enabled.
45 | # Architecture: x86_64 (64-bit)
46 | # Disk size: 10 GB
47 | # RAM: 2 GB
48 | # CPUs: 2
49 | # Desktop environment: None (headless)
50 | # Provider: VirtualBox
51 |
52 | config.vm.box = "ubuntu/bionic64"
53 |
54 | # Disable automatic box update checking. If you disable this, then
55 | # boxes will only be checked for updates when the user runs
56 | # `vagrant box outdated`. This is not recommended.
57 |
58 | config.vm.box_check_update = false
59 |
60 | # See the documentation for the VirtualBox provider for more information at
61 | # https://developer.hashicorp.com/vagrant/docs/providers/virtualbox/configuration
62 |
63 | # Provision the Control Plane
64 | (1..NUM_MASTER_NODE).each do |i|
65 | config.vm.define "kubemaster" do |node|
66 | node.vm.provider "virtualbox" do |vb|
67 | vb.name = "kubemaster"
68 | vb.memory = 2048
69 | vb.cpus = 2
70 | end
71 | node.vm.hostname = "kubemaster"
72 | node.vm.network :private_network, ip: IP_NW + "#{MASTER_IP_START + i}"
73 | end
74 | end
75 |
76 |
77 | # Provision the Nodes
78 | (1..NUM_WORKER_NODE).each do |i|
79 | config.vm.define "kubenode0#{i}" do |node|
80 | node.vm.provider "virtualbox" do |vb|
81 | vb.name = "kubenode0#{i}"
82 | vb.memory = 2048
83 | vb.cpus = 2
84 | end
85 | node.vm.hostname = "kubenode0#{i}"
86 | node.vm.network :private_network, ip: IP_NW + "#{NODE_IP_START + i}"
87 | end
88 | end
89 | end
90 |
91 | In this `Vagrantfile`, we simply declare:
92 | - The number of virtual machines: `NUM_MASTER_NODE`, `NUM_WORKER_NODE`
93 | - IP addressing: `IP_NW`, `MASTER_IP_START`, `NODE_IP_START`
94 | - Private networking connectivity: `node.vm.network`
95 | - A unique hostname for each VM: `node.vm.hostname`
96 | - Operating system: `config.vm.box`
97 | - System resources: `vb.memory`, `vb.cpus`
98 |
99 | The `Vagrantfile` syntax is [Ruby](https://www.ruby-lang.org/en/), but you do not need to know the `Ruby programming language` to write or modify one. See [here](https://developer.hashicorp.com/vagrant/docs/vagrantfile) for more information on `Vagrantfile syntax`.
100 |
101 | ## Start provisioning
102 | 
103 | Run the command:
104 | 
105 | vagrant up
106 | 
107 | The output should be similar to:
108 |
109 | Bringing machine 'kubemaster' up with 'virtualbox' provider...
110 | Bringing machine 'kubenode01' up with 'virtualbox' provider...
111 | Bringing machine 'kubenode02' up with 'virtualbox' provider...
112 | ==> kubemaster: Importing base box 'ubuntu/bionic64'...
113 | ==> kubemaster: Matching MAC address for NAT networking...
114 | ==> kubemaster: Setting the name of the VM: kubemaster
115 | ==> kubemaster: Clearing any previously set network interfaces...
116 | ==> kubemaster: Preparing network interfaces based on configuration...
117 | kubemaster: Adapter 1: nat
118 | kubemaster: Adapter 2: hostonly
119 | ==> kubemaster: Forwarding ports...
120 | kubemaster: 22 (guest) => 2222 (host) (adapter 1)
121 | ==> kubemaster: Running 'pre-boot' VM customizations...
122 | ==> kubemaster: Booting VM...
123 | ==> kubemaster: Waiting for machine to boot. This may take a few minutes...
124 | kubemaster: SSH address: 127.0.0.1:2222
125 | kubemaster: SSH username: vagrant
126 | kubemaster: SSH auth method: private key
127 | kubemaster: Warning: Connection reset. Retrying...
128 | kubemaster: Warning: Connection aborted. Retrying...
129 | kubemaster:
130 | kubemaster: Vagrant insecure key detected. Vagrant will automatically replace
131 | kubemaster: this with a newly generated keypair for better security.
132 | kubemaster:
133 | kubemaster: Inserting generated public key within guest...
134 | kubemaster: Removing insecure key from the guest if it's present...
135 | kubemaster: Key inserted! Disconnecting and reconnecting using new SSH key...
136 | ==> kubemaster: Machine booted and ready!
137 | ==> kubemaster: Checking for guest additions in VM...
138 | kubemaster: The guest additions on this VM do not match the installed version of
139 | kubemaster: VirtualBox! In most cases this is fine, but in rare cases it can
140 | kubemaster: prevent things such as shared folders from working properly. If you see
141 | kubemaster: shared folder errors, please make sure the guest additions within the
142 | kubemaster: virtual machine match the version of VirtualBox you have installed on
143 | kubemaster: your host and reload your VM.
144 | kubemaster:
145 | kubemaster: Guest Additions Version: 5.2.42
146 | kubemaster: VirtualBox Version: 7.0
147 | ==> kubemaster: Setting hostname...
148 | ==> kubemaster: Configuring and enabling network interfaces...
149 | ==> kubemaster: Mounting shared folders...
150 | kubemaster: /vagrant => C:/Users/MSI BRAVO/kubernetes-install-cluster-with-kubeadm
151 | ==> kubenode01: Importing base box 'ubuntu/bionic64'...
152 | ==> kubenode01: Matching MAC address for NAT networking...
153 | ==> kubenode01: Setting the name of the VM: kubenode01
154 | ==> kubenode01: Fixed port collision for 22 => 2222. Now on port 2200.
155 | ==> kubenode01: Clearing any previously set network interfaces...
156 | ==> kubenode01: Preparing network interfaces based on configuration...
157 | kubenode01: Adapter 1: nat
158 | kubenode01: Adapter 2: hostonly
159 | ==> kubenode01: Forwarding ports...
160 | kubenode01: 22 (guest) => 2200 (host) (adapter 1)
161 | ==> kubenode01: Running 'pre-boot' VM customizations...
162 | ==> kubenode01: Booting VM...
163 | ==> kubenode01: Waiting for machine to boot. This may take a few minutes...
164 | kubenode01: SSH address: 127.0.0.1:2200
165 | kubenode01: SSH username: vagrant
166 | kubenode01: SSH auth method: private key
167 | kubenode01: Warning: Connection reset. Retrying...
168 | kubenode01: Warning: Connection aborted. Retrying...
169 | kubenode01:
170 | kubenode01: Vagrant insecure key detected. Vagrant will automatically replace
171 | kubenode01: this with a newly generated keypair for better security.
172 | kubenode01:
173 | kubenode01: Inserting generated public key within guest...
174 | kubenode01: Removing insecure key from the guest if it's present...
175 | kubenode01: Key inserted! Disconnecting and reconnecting using new SSH key...
176 | ==> kubenode01: Machine booted and ready!
177 | ==> kubenode01: Checking for guest additions in VM...
178 | kubenode01: The guest additions on this VM do not match the installed version of
179 | kubenode01: VirtualBox! In most cases this is fine, but in rare cases it can
180 | kubenode01: prevent things such as shared folders from working properly. If you see
181 | kubenode01: shared folder errors, please make sure the guest additions within the
182 | kubenode01: virtual machine match the version of VirtualBox you have installed on
183 | kubenode01: your host and reload your VM.
184 | kubenode01:
185 | kubenode01: Guest Additions Version: 5.2.42
186 | kubenode01: VirtualBox Version: 7.0
187 | ==> kubenode01: Setting hostname...
188 | ==> kubenode01: Configuring and enabling network interfaces...
189 | ==> kubenode01: Mounting shared folders...
190 | kubenode01: /vagrant => C:/Users/MSI BRAVO/kubernetes-install-cluster-with-kubeadm
191 | ==> kubenode02: Importing base box 'ubuntu/bionic64'...
192 | ==> kubenode02: Matching MAC address for NAT networking...
193 | ==> kubenode02: Setting the name of the VM: kubenode02
194 | ==> kubenode02: Fixed port collision for 22 => 2222. Now on port 2201.
195 | ==> kubenode02: Clearing any previously set network interfaces...
196 | ==> kubenode02: Preparing network interfaces based on configuration...
197 | kubenode02: Adapter 1: nat
198 | kubenode02: Adapter 2: hostonly
199 | ==> kubenode02: Forwarding ports...
200 | kubenode02: 22 (guest) => 2201 (host) (adapter 1)
201 | ==> kubenode02: Running 'pre-boot' VM customizations...
202 | ==> kubenode02: Booting VM...
203 | ==> kubenode02: Waiting for machine to boot. This may take a few minutes...
204 | kubenode02: SSH address: 127.0.0.1:2201
205 | kubenode02: SSH username: vagrant
206 | kubenode02: SSH auth method: private key
207 | kubenode02: Warning: Connection reset. Retrying...
208 | kubenode02: Warning: Connection aborted. Retrying...
209 | kubenode02:
210 | kubenode02: Vagrant insecure key detected. Vagrant will automatically replace
211 | kubenode02: this with a newly generated keypair for better security.
212 | kubenode02:
213 | kubenode02: Inserting generated public key within guest...
214 | kubenode02: Removing insecure key from the guest if it's present...
215 | kubenode02: Key inserted! Disconnecting and reconnecting using new SSH key...
216 | ==> kubenode02: Machine booted and ready!
217 | ==> kubenode02: Checking for guest additions in VM...
218 | kubenode02: The guest additions on this VM do not match the installed version of
219 | kubenode02: VirtualBox! In most cases this is fine, but in rare cases it can
220 | kubenode02: prevent things such as shared folders from working properly. If you see
221 | kubenode02: shared folder errors, please make sure the guest additions within the
222 | kubenode02: virtual machine match the version of VirtualBox you have installed on
223 | kubenode02: your host and reload your VM.
224 | kubenode02:
225 | kubenode02: Guest Additions Version: 5.2.42
226 | kubenode02: VirtualBox Version: 7.0
227 | ==> kubenode02: Setting hostname...
228 | ==> kubenode02: Configuring and enabling network interfaces...
229 | ==> kubenode02: Mounting shared folders...
230 | kubenode02: /vagrant => C:/Users/MSI BRAVO/kubernetes-install-cluster-with-kubeadm
231 |
232 | You can check the state of the provisioned VMs with:
233 | 
234 | vagrant status
235 | 
236 | It lists the virtual machines created and managed by `vagrant`:
237 |
238 | Current machine states:
239 |
240 | kubemaster running (virtualbox)
241 | kubenode01 running (virtualbox)
242 | kubenode02 running (virtualbox)
243 |
244 | This environment represents multiple VMs. The VMs are all listed
245 | above with their current state. For more information about a specific
246 | VM, run `vagrant status NAME`.
247 |
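You can also cross-check from the VirtualBox side; `VBoxManage` is the standard VirtualBox CLI installed alongside the GUI:

    VBoxManage list runningvms
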
248 | #### Annoying issue: vagrant up times out at 'default: SSH auth method: private key'
249 | 
250 | This error occurs when a virtual machine fails to boot. By default, VirtualBox uses a TSC mode called "RealTscOffset", which adjusts the TSC ([Time Stamp Counter](https://learning.oreilly.com/library/view/mastering-linux-kernel/9781785883057/20712c1b-f659-40da-a09d-55efc93b0597.xhtml)) value on the guest to keep the CPU clock frequency in sync between the host and the guest.
251 | 
252 | If you are using Windows with the `Hyper-V` hypervisor enabled, you must disable `Hyper-V` to avoid a conflict with `VirtualBox` that leads to the `vagrant up` timeout above.
253 |
254 | To disable `Hyper-V` completely, run the following command in cmd:
255 |
256 | bcdedit /set hypervisorlaunchtype off
257 |
258 | then shut the machine down completely and power it back on.
259 |
260 | >Note that `bcdedit` is short for `boot configuration data edit`; in other words, it affects what is loaded on the next OS boot, so you must perform a full boot from a complete power-off (not a `suspend` or `restart`) for the change to take effect. Leave the PC powered down for about `10 seconds` before starting it again. If your PC does not offer a full shutdown from the Start menu, you can run `shutdown /p` from an admin command prompt. On a laptop you may have to remove the battery.
261 |
262 | Reinstall `Vagrant` and `VirtualBox`. If the error persists, you may have to reinstall the `Windows` operating system; remember not to enable `Hyper-V` again!
263 |
264 | ## Access a virtual machine with Vagrant
265 | 
266 | To ssh into a VM, just run the command, naming the machine you want (`kubemaster`, `kubenode01`, or `kubenode02`):
267 | 
268 | vagrant ssh kubemaster
269 |
270 | 
271 |
272 | As you can see in the `vagrant up` output, `Vagrant` forwarded port 22 and generated an ssh keypair for each VM even though we put no ssh configuration in the `Vagrantfile`. For more information, see [Vagrant Share: SSH Sharing](https://developer.hashicorp.com/vagrant/docs/share/ssh) and [Vagrantfile: config.ssh](https://developer.hashicorp.com/vagrant/docs/vagrantfile/ssh_settings).
273 | 
274 | OK, on to the next step!
275 |
276 | ## Next
277 | 
278 | ▶️ [Installing a container runtime (containerd) on all virtual machines](Installing-a-container-runtime.md)
--------------------------------------------------------------------------------