├── .gitignore ├── LICENSE ├── Makefile ├── NOTES.md ├── OWNERS ├── README.md ├── administrator ├── README.md ├── ansible │ ├── README.md │ ├── gcp_compute.yml │ ├── playbooks │ │ └── kubernetes.yml │ └── roles │ │ ├── common │ │ ├── files │ │ │ ├── docker-ce.repo │ │ │ ├── docker-daemon.json │ │ │ ├── ifcfg-br1 │ │ │ └── ifcfg-eth1 │ │ ├── handlers │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── centos.yml │ │ │ ├── common.yml │ │ │ ├── gce.yml │ │ │ ├── libvirt.yml │ │ │ └── main.yml │ │ └── vars │ │ │ └── main.yml │ │ ├── helm │ │ ├── defaults │ │ │ └── main.yml │ │ ├── files │ │ │ └── tiller-rbac.yml │ │ └── tasks │ │ │ ├── helm.yml │ │ │ └── main.yml │ │ ├── kubernetes │ │ ├── defaults │ │ │ └── main.yml │ │ ├── files │ │ │ └── kube-flannel.yml │ │ └── tasks │ │ │ ├── common.yml │ │ │ ├── main.yml │ │ │ └── masters.yml │ │ ├── kubevirt │ │ ├── defaults │ │ │ └── main.yml │ │ ├── files │ │ │ ├── cdi-operator-manifests │ │ │ │ ├── cdi-operator-cr.yaml │ │ │ │ └── cdi-operator.yaml │ │ │ ├── hco-patches │ │ │ │ ├── cluster_role.yaml │ │ │ │ ├── cluster_role_binding.yaml │ │ │ │ ├── nodemaintenance_crd.yaml │ │ │ │ └── service_account.yaml │ │ │ ├── kubevirt-operator-manifests │ │ │ │ ├── kubevirt-cr.yaml │ │ │ │ └── kubevirt-operator.yaml │ │ │ ├── kubevirt-ui-custom-manifests │ │ │ │ └── kubevirt_ui.yml │ │ │ ├── multus-custom-manifests │ │ │ │ ├── cni-plugins.yml │ │ │ │ ├── l2-bridge.yml │ │ │ │ ├── multus-clusterrole.yml │ │ │ │ ├── multus-clusterrolebinding.yml │ │ │ │ ├── multus-cni-configmap.yml │ │ │ │ ├── multus-ds.yml │ │ │ │ ├── multus-ns.yml │ │ │ │ ├── multus-sa.yml │ │ │ │ └── nad-crd.yml │ │ │ ├── ovs-cni-manifests │ │ │ │ ├── kubernetes-ovs-cni.yml │ │ │ │ └── nad-br1.yml │ │ │ └── student-materials │ │ │ │ ├── KubeVirt-grafana-dashboard.json │ │ │ │ ├── kubevirt-servicemonitor.yml │ │ │ │ ├── multus_nad_br1.yml │ │ │ │ ├── vm_containerdisk.yml │ │ │ │ ├── vm_datavolume.yml │ │ │ │ ├── vm_multus1.yml │ │ │ │ └── vm_multus2.yml │ │ └── tasks │ │ │ ├── 
kubevirt.yml │ │ │ ├── main.yml │ │ │ ├── multus.yml │ │ │ └── ovs_cni.yml │ │ ├── local-storage │ │ ├── files │ │ │ └── local-storage-manifests │ │ │ │ ├── provisioner.yml │ │ │ │ └── sc.yml │ │ └── tasks │ │ │ └── main.yml │ │ └── prometheus │ │ └── tasks │ │ ├── main.yml │ │ └── prometheus.yml ├── instructor-kickoff.md └── terraform │ ├── gcp │ ├── README.md │ ├── gcp.tf │ ├── varfiles │ │ ├── containerdays-2019.tfvars │ │ ├── kubevirt-prow.tfvars │ │ ├── kubevirtest.tfvars │ │ └── opensouthcode19.tfvars │ └── variables.tf │ └── libvirt │ ├── cloud-init.cfg │ ├── libvirt.tf │ ├── network_config.cfg │ ├── varfiles │ └── jparrill.tf │ └── variables.tf ├── hack ├── README.md ├── build ├── test_lab └── tests └── labs ├── lab000 └── lab000.md ├── lab001 ├── RSA │ ├── kubevirt-tutorial │ └── kubevirt-tutorial.pub └── lab001.md ├── lab002 └── lab002.md ├── lab003 └── lab003.md ├── lab004 └── lab004.md ├── lab005 └── lab005.md ├── lab006 ├── images │ ├── kwebui-01.png │ ├── kwebui-02.png │ ├── kwebui-03.png │ ├── kwebui-04.png │ ├── kwebui-05.png │ └── kwebui-06.png └── lab006.md ├── lab007 ├── images │ ├── basic_settings.png │ ├── new_project.png │ ├── new_vm_wizard.png │ ├── overview.png │ ├── pod_overview.png │ ├── pods.png │ ├── start_vm.png │ ├── ui.png │ └── vm_console.png └── lab007.md ├── lab008 └── lab008.md ├── lab009 ├── images │ ├── grafana-01.png │ ├── grafana-02.png │ ├── grafana-03.png │ ├── promui-01.png │ ├── promui-02.png │ └── promui-03.png └── lab009.md ├── lab010 └── lab010.md └── slides └── OpenSouthCode-2019.pdf /.gitignore: -------------------------------------------------------------------------------- 1 | cnv_rsa 2 | cnv_rsa.ppk 3 | **.terraform 4 | **tfstate* 5 | sjr* 6 | build/* 7 | logs/* 8 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | 2 | Apache License 3 | Version 2.0, January 2004 4 | 
http://www.apache.org/licenses/ 5 | 6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 7 | 8 | 1. Definitions. 9 | 10 | "License" shall mean the terms and conditions for use, reproduction, 11 | and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by 14 | the copyright owner that is granting the License. 15 | 16 | "Legal Entity" shall mean the union of the acting entity and all 17 | other entities that control, are controlled by, or are under common 18 | control with that entity. For the purposes of this definition, 19 | "control" means (i) the power, direct or indirect, to cause the 20 | direction or management of such entity, whether by contract or 21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 22 | outstanding shares, or (iii) beneficial ownership of such entity. 23 | 24 | "You" (or "Your") shall mean an individual or Legal Entity 25 | exercising permissions granted by this License. 26 | 27 | "Source" form shall mean the preferred form for making modifications, 28 | including but not limited to software source code, documentation 29 | source, and configuration files. 30 | 31 | "Object" form shall mean any form resulting from mechanical 32 | transformation or translation of a Source form, including but 33 | not limited to compiled object code, generated documentation, 34 | and conversions to other media types. 35 | 36 | "Work" shall mean the work of authorship, whether in Source or 37 | Object form, made available under the License, as indicated by a 38 | copyright notice that is included in or attached to the work 39 | (an example is provided in the Appendix below). 
40 | 41 | "Derivative Works" shall mean any work, whether in Source or Object 42 | form, that is based on (or derived from) the Work and for which the 43 | editorial revisions, annotations, elaborations, or other modifications 44 | represent, as a whole, an original work of authorship. For the purposes 45 | of this License, Derivative Works shall not include works that remain 46 | separable from, or merely link (or bind by name) to the interfaces of, 47 | the Work and Derivative Works thereof. 48 | 49 | "Contribution" shall mean any work of authorship, including 50 | the original version of the Work and any modifications or additions 51 | to that Work or Derivative Works thereof, that is intentionally 52 | submitted to Licensor for inclusion in the Work by the copyright owner 53 | or by an individual or Legal Entity authorized to submit on behalf of 54 | the copyright owner. For the purposes of this definition, "submitted" 55 | means any form of electronic, verbal, or written communication sent 56 | to the Licensor or its representatives, including but not limited to 57 | communication on electronic mailing lists, source code control systems, 58 | and issue tracking systems that are managed by, or on behalf of, the 59 | Licensor for the purpose of discussing and improving the Work, but 60 | excluding communication that is conspicuously marked or otherwise 61 | designated in writing by the copyright owner as "Not a Contribution." 62 | 63 | "Contributor" shall mean Licensor and any individual or Legal Entity 64 | on behalf of whom a Contribution has been received by Licensor and 65 | subsequently incorporated within the Work. 66 | 67 | 2. Grant of Copyright License. 
Subject to the terms and conditions of 68 | this License, each Contributor hereby grants to You a perpetual, 69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 70 | copyright license to reproduce, prepare Derivative Works of, 71 | publicly display, publicly perform, sublicense, and distribute the 72 | Work and such Derivative Works in Source or Object form. 73 | 74 | 3. Grant of Patent License. Subject to the terms and conditions of 75 | this License, each Contributor hereby grants to You a perpetual, 76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 77 | (except as stated in this section) patent license to make, have made, 78 | use, offer to sell, sell, import, and otherwise transfer the Work, 79 | where such license applies only to those patent claims licensable 80 | by such Contributor that are necessarily infringed by their 81 | Contribution(s) alone or by combination of their Contribution(s) 82 | with the Work to which such Contribution(s) was submitted. If You 83 | institute patent litigation against any entity (including a 84 | cross-claim or counterclaim in a lawsuit) alleging that the Work 85 | or a Contribution incorporated within the Work constitutes direct 86 | or contributory patent infringement, then any patent licenses 87 | granted to You under this License for that Work shall terminate 88 | as of the date such litigation is filed. 89 | 90 | 4. Redistribution. 
You may reproduce and distribute copies of the 91 | Work or Derivative Works thereof in any medium, with or without 92 | modifications, and in Source or Object form, provided that You 93 | meet the following conditions: 94 | 95 | (a) You must give any other recipients of the Work or 96 | Derivative Works a copy of this License; and 97 | 98 | (b) You must cause any modified files to carry prominent notices 99 | stating that You changed the files; and 100 | 101 | (c) You must retain, in the Source form of any Derivative Works 102 | that You distribute, all copyright, patent, trademark, and 103 | attribution notices from the Source form of the Work, 104 | excluding those notices that do not pertain to any part of 105 | the Derivative Works; and 106 | 107 | (d) If the Work includes a "NOTICE" text file as part of its 108 | distribution, then any Derivative Works that You distribute must 109 | include a readable copy of the attribution notices contained 110 | within such NOTICE file, excluding those notices that do not 111 | pertain to any part of the Derivative Works, in at least one 112 | of the following places: within a NOTICE text file distributed 113 | as part of the Derivative Works; within the Source form or 114 | documentation, if provided along with the Derivative Works; or, 115 | within a display generated by the Derivative Works, if and 116 | wherever such third-party notices normally appear. The contents 117 | of the NOTICE file are for informational purposes only and 118 | do not modify the License. You may add Your own attribution 119 | notices within Derivative Works that You distribute, alongside 120 | or as an addendum to the NOTICE text from the Work, provided 121 | that such additional attribution notices cannot be construed 122 | as modifying the License. 
123 | 124 | You may add Your own copyright statement to Your modifications and 125 | may provide additional or different license terms and conditions 126 | for use, reproduction, or distribution of Your modifications, or 127 | for any such Derivative Works as a whole, provided Your use, 128 | reproduction, and distribution of the Work otherwise complies with 129 | the conditions stated in this License. 130 | 131 | 5. Submission of Contributions. Unless You explicitly state otherwise, 132 | any Contribution intentionally submitted for inclusion in the Work 133 | by You to the Licensor shall be under the terms and conditions of 134 | this License, without any additional terms or conditions. 135 | Notwithstanding the above, nothing herein shall supersede or modify 136 | the terms of any separate license agreement you may have executed 137 | with Licensor regarding such Contributions. 138 | 139 | 6. Trademarks. This License does not grant permission to use the trade 140 | names, trademarks, service marks, or product names of the Licensor, 141 | except as required for reasonable and customary use in describing the 142 | origin of the Work and reproducing the content of the NOTICE file. 143 | 144 | 7. Disclaimer of Warranty. Unless required by applicable law or 145 | agreed to in writing, Licensor provides the Work (and each 146 | Contributor provides its Contributions) on an "AS IS" BASIS, 147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 148 | implied, including, without limitation, any warranties or conditions 149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 150 | PARTICULAR PURPOSE. You are solely responsible for determining the 151 | appropriateness of using or redistributing the Work and assume any 152 | risks associated with Your exercise of permissions under this License. 153 | 154 | 8. Limitation of Liability. 
In no event and under no legal theory, 155 | whether in tort (including negligence), contract, or otherwise, 156 | unless required by applicable law (such as deliberate and grossly 157 | negligent acts) or agreed to in writing, shall any Contributor be 158 | liable to You for damages, including any direct, indirect, special, 159 | incidental, or consequential damages of any character arising as a 160 | result of this License or out of the use or inability to use the 161 | Work (including but not limited to damages for loss of goodwill, 162 | work stoppage, computer failure or malfunction, or any and all 163 | other commercial damages or losses), even if such Contributor 164 | has been advised of the possibility of such damages. 165 | 166 | 9. Accepting Warranty or Additional Liability. While redistributing 167 | the Work or Derivative Works thereof, You may choose to offer, 168 | and charge a fee for, acceptance of support, warranty, indemnity, 169 | or other liability obligations and/or rights consistent with this 170 | License. However, in accepting such obligations, You may act only 171 | on Your own behalf and on Your sole responsibility, not on behalf 172 | of any other Contributor, and only if You agree to indemnify, 173 | defend, and hold each Contributor harmless for any liability 174 | incurred by, or claims asserted against, such Contributor by reason 175 | of your accepting any such warranty or additional liability. 176 | 177 | END OF TERMS AND CONDITIONS 178 | 179 | APPENDIX: How to apply the Apache License to your work. 180 | 181 | To apply the Apache License to your work, attach the following 182 | boilerplate notice, with the fields enclosed by brackets "[]" 183 | replaced with your own identifying information. (Don't include 184 | the brackets!) The text should be enclosed in the appropriate 185 | comment syntax for the file format. 
We also recommend that a 186 | file or class name and description of purpose be included on the 187 | same "printed page" as the copyright notice for easier 188 | identification within third-party archives. 189 | 190 | Copyright 2017 The KubeVirt Authors 191 | 192 | Licensed under the Apache License, Version 2.0 (the "License"); 193 | you may not use this file except in compliance with the License. 194 | You may obtain a copy of the License at 195 | 196 | http://www.apache.org/licenses/LICENSE-2.0 197 | 198 | Unless required by applicable law or agreed to in writing, software 199 | distributed under the License is distributed on an "AS IS" BASIS, 200 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 201 | See the License for the specific language governing permissions and 202 | limitations under the License. 203 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | BASE_FOLDER ?= $(shell bash -c 'pwd') # Base repo Folder 2 | LABS_FOLDER ?= "labs" # Relative to the BASE_FOLDER, where the md files are located 3 | TARGET ?= "k8s-multus-1.13.3" # K8s target based on Kubevirtci k8s supported versions 4 | 5 | default: tests 6 | 7 | build: clean 8 | hack/build ${LABS_FOLDER} 9 | 10 | tests: 11 | hack/tests ${TARGET} 12 | 13 | clean: 14 | @echo "cleaning build folder" 15 | @rm -rf build 16 | 17 | .PHONY: clean tests 18 | -------------------------------------------------------------------------------- /NOTES.md: -------------------------------------------------------------------------------- 1 | # NOTES 2 | 3 | * Switch to OKD 4 | * Include a worker node to demo live migrations 5 | * Storage (TBD) 6 | * Shared NFS server ? 7 | * Easy to set up and manage 8 | * Low infra requirements 9 | * Shared Ceph Cluster (standalone)? 10 | * Maintenance 11 | * External dependency 12 | * Dedicated Rook cluster by student ?
13 | * Ansible role 14 | * No maintenance 15 | * HCO 16 | * Once it stabilizes 17 | * Deploy from UI (requires OKD) 18 | * Ansible 19 | * One of the VMs could be deployed using Ansible 20 | 21 | ## Issues 22 | 23 | * KubeVirt Pods go into ContainerConfigError 24 | * UI wizard seems to be looking for projects instead of namespaces 25 | * UI operator seems to only work in OCP 26 | -------------------------------------------------------------------------------- /OWNERS: -------------------------------------------------------------------------------- 1 | filters: 2 | "^labs/.*": 3 | labels: 4 | - kind/lab-updates 5 | "^administrator/.*": 6 | labels: 7 | - kind/lab-deployment 8 | approvers: 9 | - mazzystr 10 | reviewers: 11 | - mazzystr 12 | emeritus_approvers: 13 | - codificat # 25 Feb 2021 14 | - iranzo # 1 Apr 2020 15 | - ptrnull # 1 Apr 2020 16 | - alosadagrande # 1 Apr 2020 17 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | **⚠️ This repository is deprecated and no longer maintained.** 2 | 3 | # Deploy and Explore KubeVirt 4 | 5 | [![Licensed under Apache License version 2.0](https://img.shields.io/github/license/kubevirt/kubevirt.svg)](https://www.apache.org/licenses/LICENSE-2.0) 6 | 7 | This is a hands-on workshop on [KubeVirt](https://kubevirt.io/).
8 | 9 | The workshop consists of the following steps: 10 | 11 | - Lab 0: [Lab Overview](labs/lab000/lab000.md) 12 | - Lab 1: [Introduction / Connectivity](labs/lab001/lab001.md) 13 | - Lab 2: [Review the Kubernetes environment](labs/lab002/lab002.md) 14 | - Lab 3: [Deploy KubeVirt/CDI/UI](labs/lab003/lab003.md) 15 | - Lab 4: [Deploy our first Virtual Machine](labs/lab004/lab004.md) 16 | - Lab 5: [Deploy a VM using a DataVolume](labs/lab005/lab005.md) 17 | - Lab 6: [Exploring the KubeVirt UI](labs/lab006/lab006.md) 18 | - Lab 7: [Using the KubeVirt UI to interact with VMs](labs/lab007/lab007.md) 19 | - Lab 8: [Deploy a multi-homed VM using Multus](labs/lab008/lab008.md) 20 | - Lab 9: [Exploring KubeVirt metrics](labs/lab009/lab009.md) 21 | - Lab 10: [Conclusion](labs/lab010/lab010.md) 22 | -------------------------------------------------------------------------------- /administrator/README.md: -------------------------------------------------------------------------------- 1 | # KubeVirt Workshop 2 | 3 | This directory provides the foundations for deploying the lab.
4 | 5 | ## Requirements 6 | 7 | * Terraform (libvirt provider) 8 | * Ansible 9 | * A virtual machine (with nested virtualization) 10 | 11 | ## Installation on Libvirt 12 | 13 | ```shell 14 | git clone https://github.com/kubevirt/kubevirt-tutorial 15 | cd kubevirt-tutorial 16 | cd administrator/terraform/libvirt 17 | terraform init 18 | terraform plan -var-file varfiles/.tf 19 | cd ../ansible 20 | ANSIBLE_ROLES_PATH=roles ansible-playbook -i inventories/ -u cloud --private-key playbooks/kubernetes.yml 21 | ``` 22 | 23 | ## Installation on GCE 24 | 25 | ``` 26 | gcloud auth login 27 | gcloud auth application-default login 28 | ``` 29 | 30 | Create your varfile: `varfiles/containerdays-2019.tfvars` 31 | 32 | ```shell 33 | git clone https://github.com/kubevirt/kubevirt-tutorial 34 | cd kubevirt-tutorial 35 | cd administrator/terraform/gcp 36 | terraform init -get -upgrade=true 37 | terraform plan -var-file varfiles/containerdays-2019.tfvars -refresh=true 38 | terraform apply -var-file varfiles/containerdays-2019.tfvars 39 | ### If you want to debug it, just add this env var: 40 | TF_LOG=DEBUG terraform apply -var-file varfiles/containerdays-2019.tfvars -refresh=true 41 | ### 42 | ``` 43 | 44 | 45 | - Create a service account and download its JSON file 46 | 47 | - See the [README for ansible](./ansible/README.md) for details on Ansible configuration 48 | 49 | 50 | ```shell 51 | wget -O ~/.ssh/kubevirt-tutorial 52 | ssh-add ~/.ssh/kubevirt-tutorial 53 | 54 | cd ../ansible 55 | 56 | export GCE_CREDENTIALS_FILE_PATH=/path/to/your/gcp/service/account/keyfile.json 57 | 58 | ANSIBLE_ROLES_PATH=roles \ 59 | ansible-playbook \ 60 | -i gcp_compute.yml \ 61 | --private-key ~/.ssh/kubevirt-tutorial \ 62 | -l lab \ 63 | playbooks/kubernetes.yml 64 | ``` 65 | 66 | ## Versions used 67 | 68 | | Component | Version | 69 | | ----------- | -------- | 70 | | Kubernetes | stable-1 | 71 | | kubevirt | v0.22.0 | 72 | | cdi | 1.9.0 | 73 | 74 | ## Terraform variable file example 75 | 76 | 
Variable files are used to override (and/or define) variables listed in the [terraform variables definition file](terraform/variables.tf). 77 | 78 | ```hcl 79 | libvirt_url="qemu+ssh://sjr@libvirt.domain.tld/system" 80 | host_bridge_iface="br0" 81 | dns_domain_name="domain.tld" 82 | ssh_pub_key="ssh-rsa AAAABNzaC1yc2EAAAADAQABAAABAQC7ANGakwxmSNsDkvJ3ot0cBVeEIgRNAuCessDFd+6Uk2/zt+aewZn3DGPiWKy8VmprBncXhKIIO0mc1Sh4vnxL8jyho+YowVnD6SyByqkXOvmonY4gfUKEIb5aYMbXIc/wKfKLhWzrqki8HWGOESVxqx6WMN+mkBkarWeEjA7+ZpvpJXtgSZoh378WxnRb8v2Pm6qFgEFJK3kaKwdK/dNCsnnhuLxS0HHT/aTfVFA2rzPBYxbfJr2youztQLrVERxpBqYvov0ydoemdeMRQycNR7EY+fqkD1ABkpFKufZCTYcNuGiuhaOjkmU0uHtztwnV64I5mdeqrITRhHCF7y7" 83 | hostname_prefix="kubevirtlab" 84 | ``` 85 | -------------------------------------------------------------------------------- /administrator/ansible/README.md: -------------------------------------------------------------------------------- 1 | # GCE and Ansible 2.8 2 | 3 | Before Ansible 2.8, GCE dynamic inventory was driven by the `gce.py` script and its `gce.ini` configuration file. Ansible 2.8 changed this: the inventory now works differently, and this document walks you through the new approach.
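As an illustrative aside (not part of this repository), the minimum library versions listed in the Pre-reqs below (`requests >= 2.18.4`, `google-auth >= 1.3.0`) can be checked programmatically instead of eyeballing `pip3 freeze` output. The `meets_minimum` helper below is a hypothetical name used only for this sketch:

```python
# Hypothetical helper (not part of this repo): check that the inventory
# plugin's Python dependencies meet the minimum versions from the Pre-reqs.
import importlib.metadata  # stdlib since Python 3.8

def meets_minimum(installed: str, minimum: str) -> bool:
    """Compare plain dotted version strings numerically, e.g. '2.21.0' >= '2.18.4'."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(minimum)

REQUIRED = {"requests": "2.18.4", "google-auth": "1.3.0"}

for package, minimum in REQUIRED.items():
    try:
        installed = importlib.metadata.version(package)
    except importlib.metadata.PackageNotFoundError:
        print(f"{package}: missing")
        continue
    status = "OK" if meets_minimum(installed, minimum) else "too old"
    print(f"{package}: {installed} ({status})")
```

Note that this naive tuple comparison only handles plain `X.Y.Z` versions; for pre-release suffixes and the like, `packaging.version.parse` is the usual tool.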
4 | 5 | ## Pre-reqs 6 | 7 | - At a minimum, you need the service account JSON file to access your GCE instances (as usual) 8 | 9 | Pip3 libraries: 10 | 11 | - requests >= 2.18.4 12 | - google-auth >= 1.3.0 13 | 14 | ``` 15 | pip3 freeze | egrep "(^requests=|^google-auth)" 16 | ``` 17 | 18 | This should give you something like: 19 | 20 | ``` 21 | google-auth==1.1.1 22 | requests==2.21.0 23 | ``` 24 | 25 | ## Hands on 26 | 27 | Starting with version 2.8, you need to use a built-in inventory plugin called `gcp_compute`. To enable it, add the following to your `ansible.cfg` file (usually `/etc/ansible/ansible.cfg`): 28 | 29 | ``` 30 | [inventory] 31 | enable_plugins = gcp_compute 32 | ``` 33 | 34 | Now you need to fill in the `gcp_compute.yml` file. It's easy: just make sure the required fields are present. Use this as an example for the labs: 35 | 36 | ``` 37 | plugin: gcp_compute 38 | zones: # populate inventory with instances in these zones 39 | - europe-west4-a 40 | projects: 41 | - cnvlab-209908 42 | service_account_file: /home/PUT YOUR USER HERE/.ansible/inventory/gce.json 43 | auth_kind: serviceaccount 44 | scopes: 45 | - 'https://www.googleapis.com/auth/cloud-platform' 46 | - 'https://www.googleapis.com/auth/compute.readonly' 47 | hostnames: 48 | # List hosts by name instead of the default public IP 49 | - name 50 | compose: 51 | # Set an inventory parameter to use the Public IP address to connect to the host 52 | # For Private ip use "networkInterfaces[0].networkIP" 53 | ansible_host: networkInterfaces[0].accessConfigs[0].natIP 54 | 55 | ``` 56 | 57 | **NOTES:** 58 | - Filters do not work reliably: selecting a single MachineType, even one you know exists, can return an empty inventory 59 | - Use _zones_ to restrict the query to a single zone and avoid long waits (you can also add more than one zone) 60 | 61 | 62 | ## Execution 63 | 64 | Ensure that you're pointing to the correct JSON auth file and that the libraries above are installed: 65 | 66 | ``` 67 | λ ansible git:(master) ✗ ansible -i
gcp_compute.yml all -u jparrill -m command -a "ping 127.0.0.1 -c4" 68 | kubevirt-prow-0 | CHANGED | rc=0 >> 69 | PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data. 70 | 64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 71 | 64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.113 ms 72 | 64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.069 ms 73 | 64 bytes from 127.0.0.1: icmp_seq=4 ttl=64 time=0.069 ms 74 | 75 | --- 127.0.0.1 ping statistics --- 76 | 4 packets transmitted, 4 received, 0% packet loss, time 2999ms 77 | rtt min/avg/max/mdev = 0.069/0.083/0.113/0.021 ms 78 | 79 | λ ansible git:(master) ✗ ansible -i gcp_compute.yml all -u kubevirt -m command -a "kubectl get pods" 80 | kubevirt-prow-0 | CHANGED | rc=0 >> 81 | NAME READY STATUS RESTARTS AGE 82 | deck-f958755bc-skq64 1/1 Running 1 5d 83 | deck-f958755bc-v5568 1/1 Running 1 5d 84 | hook-57d5467cbc-6ndvg 1/1 Running 1 13d 85 | hook-57d5467cbc-k22fm 1/1 Running 1 13d 86 | horologium-79965ffb5-c29nt 1/1 Running 1 13d 87 | plank-79646b78fb-mn796 1/1 Running 1 13d 88 | sinker-796f7df8cb-tdj5b 1/1 Running 1 13d 89 | statusreconciler-56b668448f-bm7f9 1/1 Running 1 13d 90 | tide-6d66ddcb4c-gms2t 1/1 Running 1 13d 91 | ``` 92 | 93 | To filter according to your needs, modify the config file: 94 | 95 | ``` 96 | keyed_groups: 97 | # Create groups from GCE tags 98 | - prefix: gcp 99 | key: tags 100 | groups: 101 | push-button: "'kubevirt-button' in (name)" 102 | lab: "'kubevirtlab' in (name)" 103 | ``` 104 | 105 | These are the important parts, and they are not well documented. With `keyed_groups` you get auto-generated groups based on (in this case) tags; labels are also supported. With `groups` you can generate named groups from the variables returned by the `--list` command, in this case based on the instance name.
106 | 107 | Running `ansible-inventory -i gcp_compute.yml --graph` gives us this output: 108 | 109 | ``` 110 | @all: 111 | |--@gcp_fingerprint_42WmSpB8rSM_: 112 | | |--kubevirt-button-master-build-1 113 | | |--kubevirt-button-master-build-86 114 | | |--kubevirt-button-owners-build-1 115 | | |--kubevirt-button-pr-117-build-1 116 | |--@gcp_fingerprint_6v1v_HXmsw4_: 117 | | |--kubevirt-prow-0 118 | |--@gcp_fingerprint_L0U99fDszC0_: 119 | | |--kubevirtlab-0 120 | |--@gcp_items___kubevirtcomm__: 121 | | |--kubevirt-prow-0 122 | |--@gcp_items___kubevirtlab__: 123 | | |--kubevirtlab-0 124 | |--@lab: 125 | | |--kubevirtlab-0 126 | |--@push_button: 127 | | |--kubevirt-button-master-build-1 128 | | |--kubevirt-button-master-build-86 129 | | |--kubevirt-button-owners-build-1 130 | | |--kubevirt-button-pr-117-build-1 131 | |--@ungrouped: 132 | ``` 133 | 134 | Then, to target just your instances, create them with a common prefix and add the proper group filter to the config file, or use the auto-generated tag filter `gcp_items___kubevirtlab__`: 135 | 136 | ``` 137 | λ ansible git:(feature/ansible28_gce) ✗ ansible-inventory -i gcp_compute.yml gcp_items___kubevirtlab__ --graph 138 | @gcp_items___kubevirtlab__: 139 | |--kubevirtlab-0 140 | 141 | Σ ansible git:(feature/ansible28_gce) ✗ ansible-inventory -i gcp_compute.yml lab --graph 142 | @lab: 143 | |--kubevirtlab-0 144 | ``` 145 | 146 | ## References 147 | 148 | - [Ansible GCE Guide](https://docs.ansible.com/ansible/latest/plugins/inventory/gcp_compute.html) 149 | 150 | Enjoy ;) 151 | -------------------------------------------------------------------------------- /administrator/ansible/gcp_compute.yml: -------------------------------------------------------------------------------- 1 | plugin: gcp_compute 2 | zones: # populate inventory with instances in these zones 3 | - europe-west4-a 4 | - us-central1-b 5 | projects: 6 | - cnvlab-209908 7 | # You can set this via the GCE_CREDENTIALS_FILE_PATH env var instead:
8 | #service_account_file: /home/elvis/.ansible/inventory/gce.json 9 | auth_kind: serviceaccount 10 | keyed_groups: 11 | # Create groups from GCE tags 12 | - prefix: gcp 13 | key: tags 14 | groups: 15 | push-button: "'kubevirt-button' in (name)" 16 | lab: "'kubevirtlab' in (name)" 17 | scopes: 18 | - 'https://www.googleapis.com/auth/cloud-platform' 19 | - 'https://www.googleapis.com/auth/compute.readonly' 20 | hostnames: 21 | # List hosts by name instead of the default public IP 22 | - name 23 | compose: 24 | # Set an inventory parameter to use the Public IP address to connect to the host 25 | # For Private ip use "networkInterfaces[0].networkIP" 26 | ansible_host: networkInterfaces[0].accessConfigs[0].natIP 27 | -------------------------------------------------------------------------------- /administrator/ansible/playbooks/kubernetes.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: all 3 | become: true 4 | vars: 5 | kubelet_extra_args: "{{ KUBELET_EXTRA_ARGS | default('--cgroup-driver=systemd') }}" 6 | ansible_ssh_common_args: "-o StrictHostKeyChecking=no" 7 | # The ansible user can be found in administrator/terraform/gcp/varfiles/opensouthcode19.tfvars 8 | # lab_username 9 | ansible_user: "kubevirt" 10 | # tag_lab can be found in administrator/terraform/gcp/variables.tf 11 | # instance_tag="kubevirtlab"; note that by default GCE adds tag_ as a prefix on the tag 12 | tag_lab: "tag_kubevirtlab" 13 | # Deploy Kubevirt 14 | deploy_kubevirt: true 15 | # Flag to tell the playbooks they are running on a testing platform, 16 | # so that some tasks are skipped 17 | testing: "{{ TESTING | default(false) }}" 18 | pre_tasks: 19 | - name: 'Set Testing facts' 20 | set_fact: 21 | k8s_vers: "{{ TARGET | default('k8s-1.13.3') }}" 22 | # Those match the gcr.io/k8s-testimages/bootstrap:v20190516-c6832d9 container image spec 23 | ansible_user: 'root' 24 | ansible_user_dir: '/workspace' 25 | when: testing | bool 26 |
27 | - name: 'Set location of key projects' 28 | set_fact: 29 | kutu_path: "{{ ansible_user_dir }}/kubevirt-tutorial" 30 | kuci_path: "{{ ansible_user_dir }}/kubevirtci" 31 | when: testing | bool 32 | 33 | - name: 'Set Kubeconfig location' 34 | set_fact: 35 | kubeconfig: "{{ kuci_path }}/_ci-configs/{{ k8s_vers }}/.kubeconfig" 36 | when: testing | bool 37 | roles: 38 | - { role: common, when: not testing | bool } 39 | - { role: kubernetes, when: not testing | bool } 40 | - { role: kubevirt, ansible_become: false, when: deploy_kubevirt | bool } 41 | - { role: local-storage, ansible_become: false } 42 | - { role: helm, ansible_become: false, when: deploy_kubevirt | bool } 43 | - { role: prometheus, ansible_become: false, when: deploy_kubevirt | bool } 44 | -------------------------------------------------------------------------------- /administrator/ansible/roles/common/files/docker-ce.repo: -------------------------------------------------------------------------------- 1 | [docker-ce-stable] 2 | name=Docker CE Stable - $basearch 3 | baseurl=https://download.docker.com/linux/centos/7/$basearch/stable 4 | enabled=1 5 | gpgcheck=1 6 | gpgkey=https://download.docker.com/linux/centos/gpg 7 | 8 | [docker-ce-stable-debuginfo] 9 | name=Docker CE Stable - Debuginfo $basearch 10 | baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/stable 11 | enabled=0 12 | gpgcheck=1 13 | gpgkey=https://download.docker.com/linux/centos/gpg 14 | 15 | [docker-ce-stable-source] 16 | name=Docker CE Stable - Sources 17 | baseurl=https://download.docker.com/linux/centos/7/source/stable 18 | enabled=0 19 | gpgcheck=1 20 | gpgkey=https://download.docker.com/linux/centos/gpg 21 | 22 | [docker-ce-edge] 23 | name=Docker CE Edge - $basearch 24 | baseurl=https://download.docker.com/linux/centos/7/$basearch/edge 25 | enabled=0 26 | gpgcheck=1 27 | gpgkey=https://download.docker.com/linux/centos/gpg 28 | 29 | [docker-ce-edge-debuginfo] 30 | name=Docker CE Edge - Debuginfo $basearch 31 |
baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/edge 32 | enabled=0 33 | gpgcheck=1 34 | gpgkey=https://download.docker.com/linux/centos/gpg 35 | 36 | [docker-ce-edge-source] 37 | name=Docker CE Edge - Sources 38 | baseurl=https://download.docker.com/linux/centos/7/source/edge 39 | enabled=0 40 | gpgcheck=1 41 | gpgkey=https://download.docker.com/linux/centos/gpg 42 | 43 | [docker-ce-test] 44 | name=Docker CE Test - $basearch 45 | baseurl=https://download.docker.com/linux/centos/7/$basearch/test 46 | enabled=0 47 | gpgcheck=1 48 | gpgkey=https://download.docker.com/linux/centos/gpg 49 | 50 | [docker-ce-test-debuginfo] 51 | name=Docker CE Test - Debuginfo $basearch 52 | baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/test 53 | enabled=0 54 | gpgcheck=1 55 | gpgkey=https://download.docker.com/linux/centos/gpg 56 | 57 | [docker-ce-test-source] 58 | name=Docker CE Test - Sources 59 | baseurl=https://download.docker.com/linux/centos/7/source/test 60 | enabled=0 61 | gpgcheck=1 62 | gpgkey=https://download.docker.com/linux/centos/gpg 63 | 64 | [docker-ce-nightly] 65 | name=Docker CE Nightly - $basearch 66 | baseurl=https://download.docker.com/linux/centos/7/$basearch/nightly 67 | enabled=0 68 | gpgcheck=1 69 | gpgkey=https://download.docker.com/linux/centos/gpg 70 | 71 | [docker-ce-nightly-debuginfo] 72 | name=Docker CE Nightly - Debuginfo $basearch 73 | baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/nightly 74 | enabled=0 75 | gpgcheck=1 76 | gpgkey=https://download.docker.com/linux/centos/gpg 77 | 78 | [docker-ce-nightly-source] 79 | name=Docker CE Nightly - Sources 80 | baseurl=https://download.docker.com/linux/centos/7/source/nightly 81 | enabled=0 82 | gpgcheck=1 83 | gpgkey=https://download.docker.com/linux/centos/gpg 84 | -------------------------------------------------------------------------------- /administrator/ansible/roles/common/files/docker-daemon.json: 
-------------------------------------------------------------------------------- 1 | { 2 | "exec-opts": ["native.cgroupdriver=systemd"], 3 | "log-driver": "json-file", 4 | "log-opts": { 5 | "max-size": "100m" 6 | }, 7 | "storage-driver": "overlay2" 8 | } 9 | -------------------------------------------------------------------------------- /administrator/ansible/roles/common/files/ifcfg-br1: -------------------------------------------------------------------------------- 1 | DEVICE="br1" 2 | ONBOOT="yes" 3 | DEVICETYPE="ovs" 4 | TYPE="OVSBridge" 5 | OVSBOOTPROTO="none" 6 | OVSDHCPINTERFACES="eth1" 7 | HOTPLUG="no" 8 | -------------------------------------------------------------------------------- /administrator/ansible/roles/common/files/ifcfg-eth1: -------------------------------------------------------------------------------- 1 | DEVICE="eth1" 2 | ONBOOT="yes" 3 | DEVICETYPE="ovs" 4 | TYPE="OVSIntPort" 5 | OVS_BRIDGE="br1" 6 | HOTPLUG="no" 7 | -------------------------------------------------------------------------------- /administrator/ansible/roles/common/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: reload sshd 3 | systemd: 4 | name: sshd 5 | state: reloaded 6 | -------------------------------------------------------------------------------- /administrator/ansible/roles/common/tasks/centos.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Add kubernetes EL7 repository 3 | yum_repository: 4 | name: kubernetes 5 | baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 6 | enabled: yes 7 | gpgcheck: yes 8 | gpgkey: https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg 9 | description: EL7 Kubernetes repository 10 | 11 | - name: Add Docker CE repo 12 | copy: 13 | src: files/docker-ce.repo 14 | dest: /etc/yum.repos.d/docker-ce.repo 15 | mode: 0644 16 | owner: root 17 | 18 | - name: Enable EPEL 19 | yum: 20 | 
name: epel-release 21 | state: latest 22 | 23 | - name: Upgrade them all 24 | yum: 25 | name: '*' 26 | state: latest 27 | update_cache: yes 28 | update_only: yes 29 | 30 | - name: Disable SELinux 31 | lineinfile: 32 | path: /etc/selinux/config 33 | regexp: '^SELINUX=.*' 34 | line: "SELINUX=permissive" 35 | 36 | - name: Disable sshd UseDNS 37 | lineinfile: 38 | path: /etc/ssh/sshd_config 39 | regexp: '^#UseDNS.*' 40 | line: "UseDNS no" 41 | notify: 42 | - reload sshd 43 | 44 | - name: Enable IPv4 forwarding 45 | sysctl: 46 | name: net.ipv4.ip_forward 47 | value: 1 48 | sysctl_set: yes 49 | 50 | - name: Create modules-load file for br_netfilter 51 | file: 52 | state: touch 53 | path: /etc/modules-load.d/br_netfilter.conf 54 | mode: 0644 55 | owner: root 56 | group: root 57 | 58 | - name: Add br_netfilter to auto-load 59 | lineinfile: 60 | path: /etc/modules-load.d/br_netfilter.conf 61 | regexp: '^br_netfilter' 62 | line: "br_netfilter" 63 | 64 | - name: Load br_netfilter kernel module 65 | modprobe: 66 | name: br_netfilter 67 | state: present 68 | 69 | - name: Install K8s and dependencies 70 | yum: 71 | name: "{{ kubernetes_packages }}" 72 | state: latest 73 | vars: 74 | kubernetes_packages: 75 | - docker-ce 76 | - docker-ce-cli 77 | - containerd.io 78 | - kubeadm 79 | - kubelet 80 | - kubectl 81 | - xfsprogs 82 | - bridge-utils 83 | - git 84 | - ceph-common 85 | - openvswitch 86 | - openvswitch-controller 87 | - python2-openshift 88 | - vim 89 | - tmux 90 | 91 | - name: Add kubelet extra args 92 | lineinfile: 93 | path: /etc/sysconfig/kubelet 94 | regexp: '^KUBELET_EXTRA_ARGS=.*' 95 | line: "KUBELET_EXTRA_ARGS={{ kubelet_extra_args }}" 96 | when: kubelet_extra_args | length > 0 97 | 98 | - name: Make sure /etc/docker directory exists 99 | file: 100 | state: directory 101 | path: /etc/docker 102 | mode: 0755 103 | owner: root 104 | 105 | - name: Configure docker daemon to use systemd cgroups driver 106 | copy: 107 | src: files/docker-daemon.json 108 | dest: 
/etc/docker/daemon.json 109 | mode: 0644 110 | owner: root 111 | 112 | - name: Enabling services 113 | systemd: 114 | state: started 115 | enabled: yes 116 | name: "{{ item }}" 117 | loop: 118 | - docker 119 | - openvswitch 120 | 121 | - name: Add networking files for OVS bridge 122 | copy: 123 | src: files/{{ item }} 124 | dest: /etc/sysconfig/network-scripts/{{ item }} 125 | mode: 0644 126 | owner: root 127 | loop: 128 | - ifcfg-eth1 129 | - ifcfg-br1 130 | 131 | - name: Create ~/.local/bin 132 | file: 133 | path: "{{ ansible_user_dir }}/.local/bin" 134 | state: directory 135 | mode: 0755 136 | owner: "{{ ansible_user }}" 137 | vars: 138 | ansible_become: False 139 | 140 | - name: reboot 141 | reboot: 142 | -------------------------------------------------------------------------------- /administrator/ansible/roles/common/tasks/common.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Add nf-call-iptables sysctl knob 3 | sysctl: 4 | name: net.bridge.bridge-nf-call-iptables 5 | value: 1 6 | state: present 7 | reload: yes 8 | 9 | - name: Enable and start kubelet 10 | systemd: 11 | state: started 12 | enabled: yes 13 | name: kubelet 14 | 15 | - name: Create base directory for local storage 16 | file: 17 | state: directory 18 | path: /mnt/storage 19 | 20 | - name: Summon Libvirt provider specific activities 21 | import_tasks: libvirt.yml 22 | when: "'QEMU' in ansible_system_vendor" 23 | 24 | - name: Summon GCE provider specific activities 25 | import_tasks: gce.yml 26 | when: "'Google' in ansible_system_vendor" 27 | 28 | - name: Create facts directory 29 | file: 30 | state: directory 31 | path: /etc/ansible/facts.d 32 | 33 | - name: Check Nested Virtualization 34 | command: 'cat /sys/module/kvm_intel/parameters/nested' 35 | register: nested 36 | 37 | - block: 38 | - name: Disable kvm_intel 39 | command: 'modprobe -r kvm_intel' 40 | 41 | - name: Make nested virtualization persistent 42 | lineinfile: 43 | path: 
'/etc/modprobe.d/kvm.conf' 44 | line: 'options kvm_intel nested=1' 45 | create: yes 46 | 47 | - name: Activate Nested Virtualization 48 | modprobe: 49 | name: kvm_intel 50 | params: 'nested=1' 51 | state: present 52 | when: "'N' in nested.stdout" 53 | 54 | - block: 55 | - name: Add Laboratory nodes to masters group 56 | add_host: 57 | name: "{{ item }}" 58 | hostname: "{{ item }}" 59 | groups: 'masters' 60 | with_items: 61 | - "{{ groups['lab'] }}" 62 | 63 | ### Workaround for GCE, which always assigns an IP to network interfaces 64 | - name: Disable the eth1 interface 65 | command: ifdown eth1 66 | 67 | - name: Enable the eth1 interface 68 | command: ifup eth1 69 | when: "'Google' in ansible_system_vendor" 70 | -------------------------------------------------------------------------------- /administrator/ansible/roles/common/tasks/gce.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: '[GCE] Partition disks for local storage' 3 | parted: 4 | device: "/dev/{{ item }}" 5 | number: 1 6 | state: present 7 | loop: "{{ gce_devices }}" 8 | 9 | - name: '[GCE] Format disks for local storage' 10 | filesystem: 11 | dev: /dev/{{ item }}1 12 | fstype: xfs 13 | opts: -L {{ item }} 14 | loop: "{{ gce_devices }}" 15 | 16 | - name: '[GCE] Mount disks for local storage' 17 | mount: 18 | path: /mnt/storage/{{ item }} 19 | src: LABEL={{ item }} 20 | fstype: xfs 21 | state: mounted 22 | loop: "{{ gce_devices }}" 23 | 24 | - name: '[GCE] Install bridges' 25 | openvswitch_bridge: 26 | bridge: "br1" 27 | state: present 28 | 29 | - name: '[GCE] Add port to bridge' 30 | openvswitch_port: 31 | bridge: "br1" 32 | port: "eth1" 33 | state: present 34 | -------------------------------------------------------------------------------- /administrator/ansible/roles/common/tasks/libvirt.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: '[Libvirt] Partition disks for local storage' 3 | parted: 4 | 
device: "/dev/{{ item }}" 5 | number: 1 6 | state: present 7 | loop: "{{ libvirt_devices }}" 8 | 9 | - name: '[Libvirt] Format disks for local storage' 10 | filesystem: 11 | dev: /dev/{{ item }}1 12 | fstype: xfs 13 | opts: -L {{ item }} 14 | loop: "{{ libvirt_devices }}" 15 | 16 | - name: '[Libvirt] Mount disks for local storage' 17 | mount: 18 | path: /mnt/storage/{{ item }} 19 | src: LABEL={{ item }} 20 | fstype: xfs 21 | state: mounted 22 | loop: "{{ libvirt_devices }}" 23 | -------------------------------------------------------------------------------- /administrator/ansible/roles/common/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - import_tasks: centos.yml 3 | when: ansible_distribution == "CentOS" 4 | - import_tasks: common.yml 5 | -------------------------------------------------------------------------------- /administrator/ansible/roles/common/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | libvirt_devices: ['vdb', 'vdc', 'vdd', 'vde', 'vdf'] 3 | gce_devices: ['sdb', 'sdc', 'sdd', 'sde'] 4 | -------------------------------------------------------------------------------- /administrator/ansible/roles/helm/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | helm_version: 2.12.1 3 | -------------------------------------------------------------------------------- /administrator/ansible/roles/helm/files/tiller-rbac.yml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: ServiceAccount 4 | metadata: 5 | name: tiller 6 | namespace: kube-system 7 | --- 8 | apiVersion: rbac.authorization.k8s.io/v1 9 | kind: ClusterRoleBinding 10 | metadata: 11 | name: tiller 12 | roleRef: 13 | apiGroup: rbac.authorization.k8s.io 14 | kind: ClusterRole 15 | name: cluster-admin 16 | subjects: 17 | - kind: ServiceAccount 18 | name: 
tiller 19 | namespace: kube-system 20 | -------------------------------------------------------------------------------- /administrator/ansible/roles/helm/tasks/helm.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Get JSON information for Helm latest version 3 | uri: 4 | url: https://api.github.com/repos/helm/helm/releases/latest 5 | return_content: yes 6 | register: json_response 7 | 8 | - name: Fetch Helm 9 | get_url: 10 | url: https://storage.googleapis.com/kubernetes-helm/helm-{{ helm_version }}-linux-amd64.tar.gz 11 | dest: /tmp/helm.tar.gz 12 | retries: 5 13 | delay: 5 14 | vars: 15 | helm_version: "{{ json_response.json.tag_name }}" 16 | 17 | - name: Unarchive Helm 18 | unarchive: 19 | src: /tmp/helm.tar.gz 20 | dest: /tmp 21 | remote_src: yes 22 | creates: /tmp/linux-amd64/helm 23 | 24 | - name: Copy Helm into PATH 25 | copy: 26 | src: /tmp/linux-amd64/helm 27 | dest: /home/{{ ansible_user }}/.local/bin/helm 28 | owner: "{{ ansible_user }}" 29 | mode: 0755 30 | remote_src: yes 31 | 32 | - name: Create SA and RBAC for tiller 33 | k8s: 34 | state: present 35 | resource_definition: "{{ lookup('file', 'files/tiller-rbac.yml') }}" 36 | 37 | - name: Initialize Helm 38 | command: /home/{{ ansible_user }}/.local/bin/helm init --service-account tiller --history-max 50 --net-host --replicas 1 --wait 39 | args: 40 | creates: /home/{{ ansible_user }}/.helm 41 | 42 | - name: Pausing for 1 minute 43 | wait_for: timeout=60 44 | delegate_to: localhost 45 | vars: 46 | ansible_become: False 47 | -------------------------------------------------------------------------------- /administrator/ansible/roles/helm/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - import_tasks: helm.yml 3 | when: "'masters' in group_names" 4 | -------------------------------------------------------------------------------- 
/administrator/ansible/roles/kubernetes/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | kubernetes_version: stable-1 3 | kubelet_extra_args: 4 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubernetes/files/kube-flannel.yml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: extensions/v1beta1 3 | kind: PodSecurityPolicy 4 | metadata: 5 | name: psp.flannel.unprivileged 6 | annotations: 7 | seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default 8 | seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default 9 | apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default 10 | apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default 11 | spec: 12 | privileged: false 13 | volumes: 14 | - configMap 15 | - secret 16 | - emptyDir 17 | - hostPath 18 | allowedHostPaths: 19 | - pathPrefix: "/etc/cni/net.d" 20 | - pathPrefix: "/etc/kube-flannel" 21 | - pathPrefix: "/run/flannel" 22 | readOnlyRootFilesystem: false 23 | # Users and groups 24 | runAsUser: 25 | rule: RunAsAny 26 | supplementalGroups: 27 | rule: RunAsAny 28 | fsGroup: 29 | rule: RunAsAny 30 | # Privilege Escalation 31 | allowPrivilegeEscalation: false 32 | defaultAllowPrivilegeEscalation: false 33 | # Capabilities 34 | allowedCapabilities: ['NET_ADMIN'] 35 | defaultAddCapabilities: [] 36 | requiredDropCapabilities: [] 37 | # Host namespaces 38 | hostPID: false 39 | hostIPC: false 40 | hostNetwork: true 41 | hostPorts: 42 | - min: 0 43 | max: 65535 44 | # SELinux 45 | seLinux: 46 | # SELinux is unused in CaaSP 47 | rule: 'RunAsAny' 48 | --- 49 | kind: ClusterRole 50 | apiVersion: rbac.authorization.k8s.io/v1beta1 51 | metadata: 52 | name: flannel 53 | rules: 54 | - apiGroups: ['extensions'] 55 | resources: ['podsecuritypolicies'] 56 | verbs: ['use'] 57 | resourceNames: 
['psp.flannel.unprivileged'] 58 | - apiGroups: 59 | - "" 60 | resources: 61 | - pods 62 | verbs: 63 | - get 64 | - apiGroups: 65 | - "" 66 | resources: 67 | - nodes 68 | verbs: 69 | - list 70 | - watch 71 | - apiGroups: 72 | - "" 73 | resources: 74 | - nodes/status 75 | verbs: 76 | - patch 77 | --- 78 | kind: ClusterRoleBinding 79 | apiVersion: rbac.authorization.k8s.io/v1beta1 80 | metadata: 81 | name: flannel 82 | roleRef: 83 | apiGroup: rbac.authorization.k8s.io 84 | kind: ClusterRole 85 | name: flannel 86 | subjects: 87 | - kind: ServiceAccount 88 | name: flannel 89 | namespace: kube-system 90 | --- 91 | apiVersion: v1 92 | kind: ServiceAccount 93 | metadata: 94 | name: flannel 95 | namespace: kube-system 96 | --- 97 | kind: ConfigMap 98 | apiVersion: v1 99 | metadata: 100 | name: kube-flannel-cfg 101 | namespace: kube-system 102 | labels: 103 | tier: node 104 | app: flannel 105 | data: 106 | cni-conf.json: | 107 | { 108 | "name": "cbr0", 109 | "plugins": [ 110 | { 111 | "type": "flannel", 112 | "delegate": { 113 | "hairpinMode": true, 114 | "isDefaultGateway": true 115 | } 116 | }, 117 | { 118 | "type": "portmap", 119 | "capabilities": { 120 | "portMappings": true 121 | } 122 | } 123 | ] 124 | } 125 | net-conf.json: | 126 | { 127 | "Network": "10.244.0.0/16", 128 | "Backend": { 129 | "Type": "vxlan" 130 | } 131 | } 132 | --- 133 | apiVersion: extensions/v1beta1 134 | kind: DaemonSet 135 | metadata: 136 | name: kube-flannel-ds-amd64 137 | namespace: kube-system 138 | labels: 139 | tier: node 140 | app: flannel 141 | spec: 142 | template: 143 | metadata: 144 | labels: 145 | tier: node 146 | app: flannel 147 | spec: 148 | hostNetwork: true 149 | nodeSelector: 150 | beta.kubernetes.io/arch: amd64 151 | tolerations: 152 | - operator: Exists 153 | effect: NoSchedule 154 | serviceAccountName: flannel 155 | initContainers: 156 | - name: install-cni 157 | image: quay.io/coreos/flannel:v0.11.0-amd64 158 | command: 159 | - cp 160 | args: 161 | - -f 162 | - 
/etc/kube-flannel/cni-conf.json 163 | - /etc/cni/net.d/10-flannel.conflist 164 | volumeMounts: 165 | - name: cni 166 | mountPath: /etc/cni/net.d 167 | - name: flannel-cfg 168 | mountPath: /etc/kube-flannel/ 169 | containers: 170 | - name: kube-flannel 171 | image: quay.io/coreos/flannel:v0.11.0-amd64 172 | command: 173 | - /opt/bin/flanneld 174 | args: 175 | - --ip-masq 176 | - --kube-subnet-mgr 177 | resources: 178 | requests: 179 | cpu: "100m" 180 | memory: "50Mi" 181 | limits: 182 | cpu: "100m" 183 | memory: "50Mi" 184 | securityContext: 185 | privileged: false 186 | capabilities: 187 | add: ["NET_ADMIN"] 188 | env: 189 | - name: POD_NAME 190 | valueFrom: 191 | fieldRef: 192 | fieldPath: metadata.name 193 | - name: POD_NAMESPACE 194 | valueFrom: 195 | fieldRef: 196 | fieldPath: metadata.namespace 197 | volumeMounts: 198 | - name: run 199 | mountPath: /run/flannel 200 | - name: flannel-cfg 201 | mountPath: /etc/kube-flannel/ 202 | volumes: 203 | - name: run 204 | hostPath: 205 | path: /run/flannel 206 | - name: cni 207 | hostPath: 208 | path: /etc/cni/net.d 209 | - name: flannel-cfg 210 | configMap: 211 | name: kube-flannel-cfg 212 | --- 213 | apiVersion: extensions/v1beta1 214 | kind: DaemonSet 215 | metadata: 216 | name: kube-flannel-ds-arm64 217 | namespace: kube-system 218 | labels: 219 | tier: node 220 | app: flannel 221 | spec: 222 | template: 223 | metadata: 224 | labels: 225 | tier: node 226 | app: flannel 227 | spec: 228 | hostNetwork: true 229 | nodeSelector: 230 | beta.kubernetes.io/arch: arm64 231 | tolerations: 232 | - operator: Exists 233 | effect: NoSchedule 234 | serviceAccountName: flannel 235 | initContainers: 236 | - name: install-cni 237 | image: quay.io/coreos/flannel:v0.11.0-arm64 238 | command: 239 | - cp 240 | args: 241 | - -f 242 | - /etc/kube-flannel/cni-conf.json 243 | - /etc/cni/net.d/10-flannel.conflist 244 | volumeMounts: 245 | - name: cni 246 | mountPath: /etc/cni/net.d 247 | - name: flannel-cfg 248 | mountPath: /etc/kube-flannel/ 
249 | containers: 250 | - name: kube-flannel 251 | image: quay.io/coreos/flannel:v0.11.0-arm64 252 | command: 253 | - /opt/bin/flanneld 254 | args: 255 | - --ip-masq 256 | - --kube-subnet-mgr 257 | resources: 258 | requests: 259 | cpu: "100m" 260 | memory: "50Mi" 261 | limits: 262 | cpu: "100m" 263 | memory: "50Mi" 264 | securityContext: 265 | privileged: false 266 | capabilities: 267 | add: ["NET_ADMIN"] 268 | env: 269 | - name: POD_NAME 270 | valueFrom: 271 | fieldRef: 272 | fieldPath: metadata.name 273 | - name: POD_NAMESPACE 274 | valueFrom: 275 | fieldRef: 276 | fieldPath: metadata.namespace 277 | volumeMounts: 278 | - name: run 279 | mountPath: /run/flannel 280 | - name: flannel-cfg 281 | mountPath: /etc/kube-flannel/ 282 | volumes: 283 | - name: run 284 | hostPath: 285 | path: /run/flannel 286 | - name: cni 287 | hostPath: 288 | path: /etc/cni/net.d 289 | - name: flannel-cfg 290 | configMap: 291 | name: kube-flannel-cfg 292 | --- 293 | apiVersion: extensions/v1beta1 294 | kind: DaemonSet 295 | metadata: 296 | name: kube-flannel-ds-arm 297 | namespace: kube-system 298 | labels: 299 | tier: node 300 | app: flannel 301 | spec: 302 | template: 303 | metadata: 304 | labels: 305 | tier: node 306 | app: flannel 307 | spec: 308 | hostNetwork: true 309 | nodeSelector: 310 | beta.kubernetes.io/arch: arm 311 | tolerations: 312 | - operator: Exists 313 | effect: NoSchedule 314 | serviceAccountName: flannel 315 | initContainers: 316 | - name: install-cni 317 | image: quay.io/coreos/flannel:v0.11.0-arm 318 | command: 319 | - cp 320 | args: 321 | - -f 322 | - /etc/kube-flannel/cni-conf.json 323 | - /etc/cni/net.d/10-flannel.conflist 324 | volumeMounts: 325 | - name: cni 326 | mountPath: /etc/cni/net.d 327 | - name: flannel-cfg 328 | mountPath: /etc/kube-flannel/ 329 | containers: 330 | - name: kube-flannel 331 | image: quay.io/coreos/flannel:v0.11.0-arm 332 | command: 333 | - /opt/bin/flanneld 334 | args: 335 | - --ip-masq 336 | - --kube-subnet-mgr 337 | resources: 338 | 
requests: 339 | cpu: "100m" 340 | memory: "50Mi" 341 | limits: 342 | cpu: "100m" 343 | memory: "50Mi" 344 | securityContext: 345 | privileged: false 346 | capabilities: 347 | add: ["NET_ADMIN"] 348 | env: 349 | - name: POD_NAME 350 | valueFrom: 351 | fieldRef: 352 | fieldPath: metadata.name 353 | - name: POD_NAMESPACE 354 | valueFrom: 355 | fieldRef: 356 | fieldPath: metadata.namespace 357 | volumeMounts: 358 | - name: run 359 | mountPath: /run/flannel 360 | - name: flannel-cfg 361 | mountPath: /etc/kube-flannel/ 362 | volumes: 363 | - name: run 364 | hostPath: 365 | path: /run/flannel 366 | - name: cni 367 | hostPath: 368 | path: /etc/cni/net.d 369 | - name: flannel-cfg 370 | configMap: 371 | name: kube-flannel-cfg 372 | --- 373 | apiVersion: extensions/v1beta1 374 | kind: DaemonSet 375 | metadata: 376 | name: kube-flannel-ds-ppc64le 377 | namespace: kube-system 378 | labels: 379 | tier: node 380 | app: flannel 381 | spec: 382 | template: 383 | metadata: 384 | labels: 385 | tier: node 386 | app: flannel 387 | spec: 388 | hostNetwork: true 389 | nodeSelector: 390 | beta.kubernetes.io/arch: ppc64le 391 | tolerations: 392 | - operator: Exists 393 | effect: NoSchedule 394 | serviceAccountName: flannel 395 | initContainers: 396 | - name: install-cni 397 | image: quay.io/coreos/flannel:v0.11.0-ppc64le 398 | command: 399 | - cp 400 | args: 401 | - -f 402 | - /etc/kube-flannel/cni-conf.json 403 | - /etc/cni/net.d/10-flannel.conflist 404 | volumeMounts: 405 | - name: cni 406 | mountPath: /etc/cni/net.d 407 | - name: flannel-cfg 408 | mountPath: /etc/kube-flannel/ 409 | containers: 410 | - name: kube-flannel 411 | image: quay.io/coreos/flannel:v0.11.0-ppc64le 412 | command: 413 | - /opt/bin/flanneld 414 | args: 415 | - --ip-masq 416 | - --kube-subnet-mgr 417 | resources: 418 | requests: 419 | cpu: "100m" 420 | memory: "50Mi" 421 | limits: 422 | cpu: "100m" 423 | memory: "50Mi" 424 | securityContext: 425 | privileged: false 426 | capabilities: 427 | add: ["NET_ADMIN"] 428 | 
env: 429 | - name: POD_NAME 430 | valueFrom: 431 | fieldRef: 432 | fieldPath: metadata.name 433 | - name: POD_NAMESPACE 434 | valueFrom: 435 | fieldRef: 436 | fieldPath: metadata.namespace 437 | volumeMounts: 438 | - name: run 439 | mountPath: /run/flannel 440 | - name: flannel-cfg 441 | mountPath: /etc/kube-flannel/ 442 | volumes: 443 | - name: run 444 | hostPath: 445 | path: /run/flannel 446 | - name: cni 447 | hostPath: 448 | path: /etc/cni/net.d 449 | - name: flannel-cfg 450 | configMap: 451 | name: kube-flannel-cfg 452 | --- 453 | apiVersion: extensions/v1beta1 454 | kind: DaemonSet 455 | metadata: 456 | name: kube-flannel-ds-s390x 457 | namespace: kube-system 458 | labels: 459 | tier: node 460 | app: flannel 461 | spec: 462 | template: 463 | metadata: 464 | labels: 465 | tier: node 466 | app: flannel 467 | spec: 468 | hostNetwork: true 469 | nodeSelector: 470 | beta.kubernetes.io/arch: s390x 471 | tolerations: 472 | - operator: Exists 473 | effect: NoSchedule 474 | serviceAccountName: flannel 475 | initContainers: 476 | - name: install-cni 477 | image: quay.io/coreos/flannel:v0.11.0-s390x 478 | command: 479 | - cp 480 | args: 481 | - -f 482 | - /etc/kube-flannel/cni-conf.json 483 | - /etc/cni/net.d/10-flannel.conflist 484 | volumeMounts: 485 | - name: cni 486 | mountPath: /etc/cni/net.d 487 | - name: flannel-cfg 488 | mountPath: /etc/kube-flannel/ 489 | containers: 490 | - name: kube-flannel 491 | image: quay.io/coreos/flannel:v0.11.0-s390x 492 | command: 493 | - /opt/bin/flanneld 494 | args: 495 | - --ip-masq 496 | - --kube-subnet-mgr 497 | resources: 498 | requests: 499 | cpu: "100m" 500 | memory: "50Mi" 501 | limits: 502 | cpu: "100m" 503 | memory: "50Mi" 504 | securityContext: 505 | privileged: false 506 | capabilities: 507 | add: ["NET_ADMIN"] 508 | env: 509 | - name: POD_NAME 510 | valueFrom: 511 | fieldRef: 512 | fieldPath: metadata.name 513 | - name: POD_NAMESPACE 514 | valueFrom: 515 | fieldRef: 516 | fieldPath: metadata.namespace 517 | volumeMounts: 
518 | - name: run 519 | mountPath: /run/flannel 520 | - name: flannel-cfg 521 | mountPath: /etc/kube-flannel/ 522 | volumes: 523 | - name: run 524 | hostPath: 525 | path: /run/flannel 526 | - name: cni 527 | hostPath: 528 | path: /etc/cni/net.d 529 | - name: flannel-cfg 530 | configMap: 531 | name: kube-flannel-cfg 532 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubernetes/tasks/common.yml: -------------------------------------------------------------------------------- 1 | - name: Pre-pull k8s images 2 | command: kubeadm config images pull --kubernetes-version {{ kubernetes_version }} 3 | 4 | #- name: Set pull images completed 5 | # file: 6 | # state: file 7 | # path: /etc/ansible/facts.d/kubernetes_images_pull.fact 8 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubernetes/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - import_tasks: common.yml 3 | - import_tasks: masters.yml -------------------------------------------------------------------------------- /administrator/ansible/roles/kubernetes/tasks/masters.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Initialize Kubernetes 3 | command: kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version {{ kubernetes_version }} --node-name {{ inventory_hostname }} 4 | args: 5 | creates: /etc/kubernetes/admin.conf 6 | 7 | - name: Setup kube configuration for user 8 | file: 9 | path: /home/{{ ansible_user }}/.kube 10 | state: directory 11 | owner: "{{ ansible_user }}" 12 | mode: 0750 13 | 14 | - name: Copy kubeconfig to user home 15 | copy: 16 | src: /etc/kubernetes/admin.conf 17 | dest: /home/{{ ansible_user }}/.kube/config 18 | owner: "{{ ansible_user }}" 19 | remote_src: yes 20 | 21 | - name: Make the master schedulable 22 | shell: kubectl taint nodes --all 
node-role.kubernetes.io/master- 23 | register: cmd_result 24 | failed_when: "('not found' not in cmd_result.stderr) and (cmd_result.stderr != '')" 25 | vars: 26 | ansible_become: False 27 | 28 | - name: Apply Flannel manifests 29 | k8s: 30 | state: present 31 | resource_definition: "{{ lookup('file', 'files/kube-flannel.yml') }}" 32 | vars: 33 | ansible_become: False 34 | 35 | # TODO: Check for something reasonable, e.g. Flannel DS 36 | - name: Pausing for 3 minutes 37 | wait_for: timeout=180 38 | delegate_to: localhost 39 | vars: 40 | ansible_become: False 41 | 42 | - name: Scale CoreDNS deployment down to 1 replica 43 | k8s: 44 | kind: Deployment 45 | name: coredns 46 | namespace: kube-system 47 | resource_definition: 48 | spec: 49 | replicas: 1 50 | vars: 51 | ansible_become: False 52 | 53 | - name: Allow privilege escalation in the CoreDNS deployment 54 | shell: "kubectl -n kube-system get deployment coredns -o yaml | sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | kubectl apply -f -" 55 | vars: 56 | ansible_become: False 57 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | kubevirt_version: v0.22.0 3 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/cdi-operator-manifests/cdi-operator-cr.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: cdi.kubevirt.io/v1alpha1 2 | kind: CDI 3 | metadata: 4 | name: cdi 5 | namespace: cdi 6 | spec: 7 | imagePullPolicy: IfNotPresent 8 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/cdi-operator-manifests/cdi-operator.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: 
v1 2 | kind: Namespace 3 | metadata: 4 | labels: 5 | cdi.kubevirt.io: "" 6 | name: cdi 7 | --- 8 | apiVersion: apiextensions.k8s.io/v1beta1 9 | kind: CustomResourceDefinition 10 | metadata: 11 | labels: 12 | operator.cdi.kubevirt.io: "" 13 | name: cdis.cdi.kubevirt.io 14 | spec: 15 | group: cdi.kubevirt.io 16 | names: 17 | kind: CDI 18 | listKind: CDIList 19 | plural: cdis 20 | singular: cdi 21 | scope: Cluster 22 | version: v1alpha1 23 | 24 | --- 25 | apiVersion: v1 26 | kind: ConfigMap 27 | metadata: 28 | labels: 29 | operator.cdi.kubevirt.io: "" 30 | name: cdi-operator-leader-election-helper 31 | namespace: cdi 32 | 33 | --- 34 | apiVersion: rbac.authorization.k8s.io/v1 35 | kind: ClusterRole 36 | metadata: 37 | name: cdi.kubevirt.io:operator 38 | labels: 39 | operator.cdi.kubevirt.io: "" 40 | rbac.authorization.k8s.io/aggregate-to-admin: "true" 41 | rules: 42 | - apiGroups: 43 | - cdi.kubevirt.io 44 | resources: 45 | - cdis 46 | verbs: 47 | - create 48 | - patch 49 | - get 50 | - list 51 | - delete 52 | - watch 53 | - deletecollection 54 | --- 55 | apiVersion: v1 56 | kind: ServiceAccount 57 | metadata: 58 | labels: 59 | operator.cdi.kubevirt.io: "" 60 | name: cdi-operator 61 | namespace: cdi 62 | --- 63 | apiVersion: rbac.authorization.k8s.io/v1 64 | kind: ClusterRole 65 | metadata: 66 | labels: 67 | cdi.kubevirt.io: "" 68 | name: cdi-operator-cluster-role 69 | rules: 70 | - apiGroups: 71 | - rbac.authorization.k8s.io 72 | resources: 73 | - roles 74 | - rolebindings 75 | - clusterrolebindings 76 | - clusterroles 77 | verbs: 78 | - '*' 79 | - apiGroups: 80 | - security.openshift.io 81 | resources: 82 | - securitycontextconstraints 83 | verbs: 84 | - '*' 85 | - apiGroups: 86 | - "" 87 | resources: 88 | - serviceaccounts 89 | - services 90 | verbs: 91 | - '*' 92 | - apiGroups: 93 | - "" 94 | resources: 95 | - nodes 96 | verbs: 97 | - get 98 | - list 99 | - watch 100 | - update 101 | - patch 102 | - apiGroups: 103 | - extensions 104 | resources: 105 | - 
deployments 106 | verbs: 107 | - '*' 108 | - apiGroups: 109 | - extensions 110 | resources: 111 | - ingresses 112 | verbs: 113 | - get 114 | - list 115 | - watch 116 | - apiGroups: 117 | - "" 118 | resources: 119 | - configmaps 120 | verbs: 121 | - watch 122 | - create 123 | - delete 124 | - get 125 | - update 126 | - patch 127 | - list 128 | - apiGroups: 129 | - batch 130 | resources: 131 | - jobs 132 | verbs: 133 | - create 134 | - delete 135 | - get 136 | - update 137 | - patch 138 | - list 139 | - apiGroups: 140 | - apiextensions.k8s.io 141 | resources: 142 | - customresourcedefinitions 143 | verbs: 144 | - create 145 | - delete 146 | - get 147 | - update 148 | - patch 149 | - list 150 | - apiGroups: 151 | - apps 152 | resources: 153 | - deployments 154 | - daemonsets 155 | verbs: 156 | - create 157 | - get 158 | - list 159 | - delete 160 | - watch 161 | - update 162 | - apiGroups: 163 | - admissionregistration.k8s.io 164 | resources: 165 | - validatingwebhookconfigurations 166 | verbs: 167 | - get 168 | - create 169 | - update 170 | - apiGroups: 171 | - apiregistration.k8s.io 172 | resources: 173 | - apiservices 174 | verbs: 175 | - get 176 | - create 177 | - update 178 | - apiGroups: 179 | - cdi.kubevirt.io 180 | resources: 181 | - '*' 182 | verbs: 183 | - '*' 184 | - apiGroups: 185 | - "" 186 | resources: 187 | - events 188 | verbs: 189 | - create 190 | - update 191 | - patch 192 | - apiGroups: 193 | - "" 194 | resources: 195 | - pods 196 | - persistentvolumeclaims 197 | verbs: 198 | - get 199 | - list 200 | - watch 201 | - create 202 | - update 203 | - patch 204 | - delete 205 | - apiGroups: 206 | - "" 207 | resources: 208 | - persistentvolumeclaims/finalizers 209 | - pods/finalizers 210 | verbs: 211 | - update 212 | - apiGroups: 213 | - "" 214 | resources: 215 | - services 216 | verbs: 217 | - get 218 | - list 219 | - watch 220 | - create 221 | - delete 222 | - apiGroups: 223 | - "" 224 | resources: 225 | - secrets 226 | verbs: 227 | - get 228 | - list 229 
| - watch 230 | - create 231 | - apiGroups: 232 | - "" 233 | resources: 234 | - namespaces 235 | verbs: 236 | - get 237 | - list 238 | - apiGroups: 239 | - route.openshift.io 240 | resources: 241 | - routes 242 | verbs: 243 | - get 244 | - list 245 | - watch 246 | --- 247 | apiVersion: rbac.authorization.k8s.io/v1 248 | kind: ClusterRoleBinding 249 | metadata: 250 | labels: 251 | cdi.kubevirt.io: "" 252 | name: cdi-operator 253 | roleRef: 254 | apiGroup: rbac.authorization.k8s.io 255 | kind: ClusterRole 256 | name: cdi-operator-cluster-role 257 | subjects: 258 | - kind: ServiceAccount 259 | name: cdi-operator 260 | namespace: cdi 261 | 262 | --- 263 | apiVersion: apps/v1 264 | kind: Deployment 265 | metadata: 266 | labels: 267 | operator.cdi.kubevirt.io: "" 268 | name: cdi-operator 269 | namespace: cdi 270 | spec: 271 | replicas: 1 272 | selector: 273 | matchLabels: 274 | name: cdi-operator 275 | strategy: {} 276 | template: 277 | metadata: 278 | labels: 279 | name: cdi-operator 280 | spec: 281 | containers: 282 | - env: 283 | - name: DEPLOY_CLUSTER_RESOURCES 284 | value: "true" 285 | - name: DOCKER_REPO 286 | value: kubevirt 287 | - name: DOCKER_TAG 288 | value: v1.9.0 289 | - name: CONTROLLER_IMAGE 290 | value: cdi-controller 291 | - name: IMPORTER_IMAGE 292 | value: cdi-importer 293 | - name: CLONER_IMAGE 294 | value: cdi-cloner 295 | - name: APISERVER_IMAGE 296 | value: cdi-apiserver 297 | - name: UPLOAD_SERVER_IMAGE 298 | value: cdi-uploadserver 299 | - name: UPLOAD_PROXY_IMAGE 300 | value: cdi-uploadproxy 301 | - name: VERBOSITY 302 | value: "1" 303 | - name: PULL_POLICY 304 | value: IfNotPresent 305 | image: kubevirt/cdi-operator:v1.9.3 306 | imagePullPolicy: IfNotPresent 307 | name: cdi-operator 308 | ports: 309 | - containerPort: 60000 310 | name: metrics 311 | resources: {} 312 | serviceAccountName: cdi-operator 313 | -------------------------------------------------------------------------------- 
/administrator/ansible/roles/kubevirt/files/hco-patches/cluster_role_binding.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | kind: ClusterRoleBinding 4 | metadata: 5 | labels: 6 | kubevirt.io: "" 7 | name: hyperconverged-cluster-operator 8 | roleRef: 9 | apiGroup: rbac.authorization.k8s.io 10 | kind: ClusterRole 11 | name: hyperconverged-cluster-operator 12 | subjects: 13 | - kind: ServiceAccount 14 | name: hyperconverged-cluster-operator 15 | namespace: kubevirt-hyperconverged 16 | 17 | --- 18 | apiVersion: rbac.authorization.k8s.io/v1 19 | kind: ClusterRoleBinding 20 | metadata: 21 | labels: 22 | kubevirt.io: "" 23 | name: kubevirt-operator 24 | roleRef: 25 | apiGroup: rbac.authorization.k8s.io 26 | kind: ClusterRole 27 | name: kubevirt-operator 28 | subjects: 29 | - kind: ServiceAccount 30 | name: kubevirt-operator 31 | namespace: kubevirt-hyperconverged 32 | --- 33 | apiVersion: rbac.authorization.k8s.io/v1 34 | kind: ClusterRoleBinding 35 | metadata: 36 | labels: 37 | kubevirt.io: "" 38 | name: cdi-operator 39 | roleRef: 40 | apiGroup: rbac.authorization.k8s.io 41 | kind: ClusterRole 42 | name: cdi-operator 43 | subjects: 44 | - kind: ServiceAccount 45 | name: cdi-operator 46 | namespace: kubevirt-hyperconverged 47 | --- 48 | apiVersion: rbac.authorization.k8s.io/v1 49 | kind: ClusterRoleBinding 50 | metadata: 51 | name: cdi-operator-admin 52 | labels: 53 | operator.kubevirt.io: "" 54 | roleRef: 55 | kind: ClusterRole 56 | name: cluster-admin 57 | apiGroup: rbac.authorization.k8s.io 58 | subjects: 59 | - kind: ServiceAccount 60 | name: cdi-operator 61 | namespace: kubevirt-hyperconverged 62 | --- 63 | apiVersion: rbac.authorization.k8s.io/v1 64 | kind: ClusterRoleBinding 65 | metadata: 66 | labels: 67 | kubevirt.io: "" 68 | name: cluster-network-addons-operator 69 | roleRef: 70 | apiGroup: rbac.authorization.k8s.io 71 | kind: ClusterRole 72 | name: 
cluster-network-addons-operator 73 | subjects: 74 | - kind: ServiceAccount 75 | name: cluster-network-addons-operator 76 | namespace: kubevirt-hyperconverged 77 | --- 78 | apiVersion: rbac.authorization.k8s.io/v1 79 | kind: ClusterRoleBinding 80 | metadata: 81 | name: cluster-network-addons-operator-admin 82 | labels: 83 | operator.kubevirt.io: "" 84 | roleRef: 85 | kind: ClusterRole 86 | name: cluster-admin 87 | apiGroup: rbac.authorization.k8s.io 88 | subjects: 89 | - kind: ServiceAccount 90 | name: cluster-network-addons-operator 91 | namespace: kubevirt-hyperconverged 92 | --- 93 | apiVersion: rbac.authorization.k8s.io/v1 94 | kind: ClusterRoleBinding 95 | metadata: 96 | name: kubevirt-ssp-operator 97 | labels: 98 | operator.kubevirt.io: "" 99 | roleRef: 100 | kind: ClusterRole 101 | name: kubevirt-ssp-operator 102 | apiGroup: rbac.authorization.k8s.io 103 | subjects: 104 | - kind: ServiceAccount 105 | name: kubevirt-ssp-operator 106 | namespace: kubevirt-hyperconverged 107 | --- 108 | apiVersion: rbac.authorization.k8s.io/v1 109 | kind: ClusterRoleBinding 110 | metadata: 111 | name: kubevirt-web-ui-operator 112 | labels: 113 | operator.kubevirt.io: "" 114 | roleRef: 115 | kind: ClusterRole 116 | name: kubevirt-web-ui-operator 117 | apiGroup: rbac.authorization.k8s.io 118 | subjects: 119 | - kind: ServiceAccount 120 | name: kubevirt-web-ui-operator 121 | namespace: kubevirt-hyperconverged 122 | --- 123 | kind: ClusterRoleBinding 124 | apiVersion: rbac.authorization.k8s.io/v1 125 | metadata: 126 | name: node-maintenance-operator 127 | roleRef: 128 | kind: ClusterRole 129 | name: node-maintenance-operator 130 | apiGroup: rbac.authorization.k8s.io 131 | subjects: 132 | - kind: ServiceAccount 133 | name: node-maintenance-operator 134 | namespace: node-maintenance-operator 135 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/hco-patches/nodemaintenance_crd.yaml: 
-------------------------------------------------------------------------------- 1 | apiVersion: apiextensions.k8s.io/v1beta1 2 | kind: CustomResourceDefinition 3 | metadata: 4 | creationTimestamp: null 5 | name: nodemaintenances.kubevirt.io 6 | spec: 7 | group: kubevirt.io 8 | names: 9 | kind: NodeMaintenance 10 | listKind: NodeMaintenanceList 11 | plural: nodemaintenances 12 | singular: nodemaintenance 13 | scope: Cluster 14 | subresources: 15 | status: {} 16 | validation: 17 | openAPIV3Schema: 18 | properties: 19 | apiVersion: 20 | type: string 21 | kind: 22 | type: string 23 | metadata: 24 | type: object 25 | spec: 26 | properties: 27 | nodeName: 28 | type: string 29 | reason: 30 | type: string 31 | required: 32 | - nodeName 33 | type: object 34 | status: 35 | properties: 36 | phase: 37 | type: string 38 | type: object 39 | version: v1alpha1 40 | versions: 41 | - name: v1alpha1 42 | served: true 43 | storage: true 44 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/hco-patches/service_account.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: ServiceAccount 4 | metadata: 5 | labels: 6 | kubevirt.io: "" 7 | name: hyperconverged-cluster-operator 8 | namespace: kubevirt-hyperconverged 9 | 10 | --- 11 | apiVersion: v1 12 | kind: ServiceAccount 13 | metadata: 14 | labels: 15 | kubevirt.io: "" 16 | name: cdi-operator 17 | namespace: kubevirt-hyperconverged 18 | --- 19 | apiVersion: v1 20 | kind: ServiceAccount 21 | metadata: 22 | labels: 23 | kubevirt.io: "" 24 | name: kubevirt-operator 25 | namespace: kubevirt-hyperconverged 26 | --- 27 | apiVersion: v1 28 | kind: ServiceAccount 29 | metadata: 30 | labels: 31 | kubevirt.io: "" 32 | name: cluster-network-addons-operator 33 | namespace: kubevirt-hyperconverged 34 | --- 35 | apiVersion: v1 36 | kind: ServiceAccount 37 | metadata: 38 | labels: 39 | kubevirt.io: "" 40 | 
name: kubevirt-ssp-operator 41 | namespace: kubevirt-hyperconverged 42 | --- 43 | apiVersion: v1 44 | kind: ServiceAccount 45 | metadata: 46 | labels: 47 | kubevirt.io: "" 48 | name: kubevirt-web-ui-operator 49 | namespace: kubevirt-hyperconverged 50 | --- 51 | apiVersion: v1 52 | kind: ServiceAccount 53 | metadata: 54 | labels: 55 | kubevirt.io: "" 56 | name: node-maintenance-operator 57 | namespace: node-maintenance-operator 58 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/kubevirt-operator-manifests/kubevirt-cr.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: kubevirt.io/v1alpha3 3 | kind: KubeVirt 4 | metadata: 5 | name: kubevirt 6 | namespace: kubevirt 7 | spec: 8 | imagePullPolicy: IfNotPresent 9 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/kubevirt-operator-manifests/kubevirt-operator.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: Namespace 4 | metadata: 5 | labels: 6 | kubevirt.io: "" 7 | name: kubevirt 8 | --- 9 | apiVersion: v1 10 | kind: ConfigMap 11 | metadata: 12 | name: kubevirt-config 13 | namespace: kubevirt 14 | labels: 15 | kubevirt.io: "" 16 | data: 17 | feature-gates: "DataVolumes" 18 | --- 19 | apiVersion: apiextensions.k8s.io/v1beta1 20 | kind: CustomResourceDefinition 21 | metadata: 22 | labels: 23 | operator.kubevirt.io: "" 24 | name: kubevirts.kubevirt.io 25 | spec: 26 | additionalPrinterColumns: 27 | - JSONPath: .metadata.creationTimestamp 28 | name: Age 29 | type: date 30 | - JSONPath: .status.phase 31 | name: Phase 32 | type: string 33 | group: kubevirt.io 34 | names: 35 | kind: KubeVirt 36 | plural: kubevirts 37 | shortNames: 38 | - kv 39 | - kvs 40 | singular: kubevirt 41 | scope: Namespaced 42 | version: v1alpha3 43 | 44 | --- 45 | 
apiVersion: rbac.authorization.k8s.io/v1 46 | kind: ClusterRole 47 | metadata: 48 | name: kubevirt.io:operator 49 | labels: 50 | operator.kubevirt.io: "" 51 | rbac.authorization.k8s.io/aggregate-to-admin: "true" 52 | rules: 53 | - apiGroups: 54 | - kubevirt.io 55 | resources: 56 | - kubevirts 57 | verbs: 58 | - get 59 | - delete 60 | - create 61 | - update 62 | - patch 63 | - list 64 | - watch 65 | - deletecollection 66 | --- 67 | apiVersion: v1 68 | kind: ServiceAccount 69 | metadata: 70 | labels: 71 | kubevirt.io: "" 72 | name: kubevirt-operator 73 | namespace: kubevirt 74 | --- 75 | apiVersion: rbac.authorization.k8s.io/v1 76 | kind: ClusterRole 77 | metadata: 78 | labels: 79 | kubevirt.io: "" 80 | name: kubevirt-operator 81 | rules: 82 | - apiGroups: 83 | - kubevirt.io 84 | resources: 85 | - kubevirts 86 | verbs: 87 | - get 88 | - list 89 | - watch 90 | - patch 91 | - update 92 | - patch 93 | - apiGroups: 94 | - "" 95 | resources: 96 | - serviceaccounts 97 | - services 98 | - endpoints 99 | verbs: 100 | - get 101 | - list 102 | - watch 103 | - create 104 | - update 105 | - delete 106 | - patch 107 | - apiGroups: 108 | - batch 109 | resources: 110 | - jobs 111 | verbs: 112 | - get 113 | - list 114 | - watch 115 | - create 116 | - delete 117 | - patch 118 | - apiGroups: 119 | - apps 120 | resources: 121 | - deployments 122 | - daemonsets 123 | verbs: 124 | - get 125 | - list 126 | - watch 127 | - create 128 | - delete 129 | - patch 130 | - apiGroups: 131 | - rbac.authorization.k8s.io 132 | resources: 133 | - clusterroles 134 | - clusterrolebindings 135 | - roles 136 | - rolebindings 137 | verbs: 138 | - get 139 | - list 140 | - watch 141 | - create 142 | - delete 143 | - patch 144 | - update 145 | - apiGroups: 146 | - apiextensions.k8s.io 147 | resources: 148 | - customresourcedefinitions 149 | verbs: 150 | - get 151 | - list 152 | - watch 153 | - create 154 | - delete 155 | - patch 156 | - apiGroups: 157 | - security.openshift.io 158 | resources: 159 | - 
securitycontextconstraints 160 | verbs: 161 | - get 162 | - list 163 | - watch 164 | - apiGroups: 165 | - security.openshift.io 166 | resourceNames: 167 | - privileged 168 | resources: 169 | - securitycontextconstraints 170 | verbs: 171 | - get 172 | - patch 173 | - update 174 | - apiGroups: 175 | - admissionregistration.k8s.io 176 | resources: 177 | - validatingwebhookconfigurations 178 | verbs: 179 | - get 180 | - list 181 | - watch 182 | - create 183 | - delete 184 | - apiGroups: 185 | - admissionregistration.k8s.io 186 | resources: 187 | - validatingwebhookconfigurations 188 | - mutatingwebhookconfigurations 189 | verbs: 190 | - get 191 | - create 192 | - update 193 | - apiGroups: 194 | - apiregistration.k8s.io 195 | resources: 196 | - apiservices 197 | verbs: 198 | - get 199 | - create 200 | - update 201 | - apiGroups: 202 | - "" 203 | resources: 204 | - pods 205 | verbs: 206 | - get 207 | - list 208 | - apiGroups: 209 | - "" 210 | resources: 211 | - pods/exec 212 | verbs: 213 | - create 214 | - apiGroups: 215 | - kubevirt.io 216 | resources: 217 | - virtualmachines 218 | - virtualmachineinstances 219 | - virtualmachineinstancemigrations 220 | verbs: 221 | - get 222 | - list 223 | - watch 224 | - patch 225 | - apiGroups: 226 | - kubevirt.io 227 | resources: 228 | - virtualmachineinstancepresets 229 | verbs: 230 | - watch 231 | - list 232 | - apiGroups: 233 | - "" 234 | resources: 235 | - configmaps 236 | verbs: 237 | - get 238 | - list 239 | - watch 240 | - apiGroups: 241 | - "" 242 | resources: 243 | - limitranges 244 | verbs: 245 | - watch 246 | - list 247 | - apiGroups: 248 | - "" 249 | resources: 250 | - secrets 251 | verbs: 252 | - get 253 | - list 254 | - delete 255 | - update 256 | - create 257 | - apiGroups: 258 | - "" 259 | resources: 260 | - configmaps 261 | verbs: 262 | - get 263 | - list 264 | - watch 265 | - apiGroups: 266 | - policy 267 | resources: 268 | - poddisruptionbudgets 269 | verbs: 270 | - get 271 | - list 272 | - watch 273 | - delete 
274 | - create 275 | - apiGroups: 276 | - "" 277 | resources: 278 | - pods 279 | - configmaps 280 | - endpoints 281 | verbs: 282 | - get 283 | - list 284 | - watch 285 | - delete 286 | - update 287 | - create 288 | - apiGroups: 289 | - "" 290 | resources: 291 | - events 292 | verbs: 293 | - update 294 | - create 295 | - patch 296 | - apiGroups: 297 | - "" 298 | resources: 299 | - pods/finalizers 300 | verbs: 301 | - update 302 | - apiGroups: 303 | - "" 304 | resources: 305 | - nodes 306 | verbs: 307 | - get 308 | - list 309 | - watch 310 | - update 311 | - patch 312 | - apiGroups: 313 | - "" 314 | resources: 315 | - persistentvolumeclaims 316 | verbs: 317 | - get 318 | - list 319 | - watch 320 | - apiGroups: 321 | - kubevirt.io 322 | resources: 323 | - '*' 324 | verbs: 325 | - '*' 326 | - apiGroups: 327 | - cdi.kubevirt.io 328 | resources: 329 | - '*' 330 | verbs: 331 | - '*' 332 | - apiGroups: 333 | - k8s.cni.cncf.io 334 | resources: 335 | - network-attachment-definitions 336 | verbs: 337 | - get 338 | - list 339 | - watch 340 | - apiGroups: 341 | - kubevirt.io 342 | resources: 343 | - virtualmachineinstances 344 | verbs: 345 | - update 346 | - list 347 | - watch 348 | - apiGroups: 349 | - "" 350 | resources: 351 | - secrets 352 | - persistentvolumeclaims 353 | verbs: 354 | - get 355 | - apiGroups: 356 | - "" 357 | resources: 358 | - nodes 359 | verbs: 360 | - patch 361 | - apiGroups: 362 | - "" 363 | resources: 364 | - events 365 | verbs: 366 | - create 367 | - patch 368 | - apiGroups: 369 | - "" 370 | resources: 371 | - configmaps 372 | verbs: 373 | - get 374 | - list 375 | - watch 376 | - apiGroups: 377 | - "" 378 | resources: 379 | - secrets 380 | verbs: 381 | - create 382 | - apiGroups: 383 | - subresources.kubevirt.io 384 | resources: 385 | - version 386 | verbs: 387 | - get 388 | - list 389 | - apiGroups: 390 | - subresources.kubevirt.io 391 | resources: 392 | - virtualmachineinstances/console 393 | - virtualmachineinstances/vnc 394 | verbs: 395 | - get 396 
| - apiGroups: 397 | - subresources.kubevirt.io 398 | resources: 399 | - virtualmachines/restart 400 | verbs: 401 | - put 402 | - update 403 | - apiGroups: 404 | - kubevirt.io 405 | resources: 406 | - virtualmachines 407 | - virtualmachineinstances 408 | - virtualmachineinstancepresets 409 | - virtualmachineinstancereplicasets 410 | verbs: 411 | - get 412 | - delete 413 | - create 414 | - update 415 | - patch 416 | - list 417 | - watch 418 | - deletecollection 419 | - apiGroups: 420 | - subresources.kubevirt.io 421 | resources: 422 | - virtualmachineinstances/console 423 | - virtualmachineinstances/vnc 424 | verbs: 425 | - get 426 | - apiGroups: 427 | - subresources.kubevirt.io 428 | resources: 429 | - virtualmachines/restart 430 | verbs: 431 | - put 432 | - update 433 | - apiGroups: 434 | - kubevirt.io 435 | resources: 436 | - virtualmachines 437 | - virtualmachineinstances 438 | - virtualmachineinstancepresets 439 | - virtualmachineinstancereplicasets 440 | verbs: 441 | - get 442 | - delete 443 | - create 444 | - update 445 | - patch 446 | - list 447 | - watch 448 | - apiGroups: 449 | - kubevirt.io 450 | resources: 451 | - virtualmachines 452 | - virtualmachineinstances 453 | - virtualmachineinstancepresets 454 | - virtualmachineinstancereplicasets 455 | verbs: 456 | - get 457 | - list 458 | - watch 459 | - apiGroups: 460 | - authentication.k8s.io 461 | resources: 462 | - tokenreviews 463 | verbs: 464 | - create 465 | - apiGroups: 466 | - authorization.k8s.io 467 | resources: 468 | - subjectaccessreviews 469 | verbs: 470 | - create 471 | --- 472 | apiVersion: rbac.authorization.k8s.io/v1 473 | kind: ClusterRoleBinding 474 | metadata: 475 | labels: 476 | kubevirt.io: "" 477 | name: kubevirt-operator 478 | roleRef: 479 | apiGroup: rbac.authorization.k8s.io 480 | kind: ClusterRole 481 | name: kubevirt-operator 482 | subjects: 483 | - kind: ServiceAccount 484 | name: kubevirt-operator 485 | namespace: kubevirt 486 | 487 | --- 488 | apiVersion: apps/v1 489 | kind: 
Deployment 490 | metadata: 491 | labels: 492 | kubevirt.io: virt-operator 493 | name: virt-operator 494 | namespace: kubevirt 495 | spec: 496 | replicas: 2 497 | selector: 498 | matchLabels: 499 | kubevirt.io: virt-operator 500 | strategy: 501 | type: RollingUpdate 502 | template: 503 | metadata: 504 | annotations: 505 | scheduler.alpha.kubernetes.io/critical-pod: "" 506 | scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly","operator":"Exists"}]' 507 | labels: 508 | kubevirt.io: virt-operator 509 | prometheus.kubevirt.io: "" 510 | name: virt-operator 511 | spec: 512 | containers: 513 | - command: 514 | - virt-operator 515 | - --port 516 | - "8443" 517 | - -v 518 | - "2" 519 | env: 520 | - name: OPERATOR_IMAGE 521 | value: index.docker.io/kubevirt/virt-operator:v0.18.1 522 | - name: WATCH_NAMESPACE 523 | valueFrom: 524 | fieldRef: 525 | fieldPath: metadata.annotations['olm.targetNamespaces'] 526 | image: index.docker.io/kubevirt/virt-operator:v0.18.1 527 | imagePullPolicy: IfNotPresent 528 | name: virt-operator 529 | ports: 530 | - containerPort: 8443 531 | name: metrics 532 | protocol: TCP 533 | readinessProbe: 534 | httpGet: 535 | path: /metrics 536 | port: 8443 537 | scheme: HTTPS 538 | initialDelaySeconds: 5 539 | timeoutSeconds: 10 540 | resources: {} 541 | securityContext: 542 | runAsNonRoot: true 543 | serviceAccountName: kubevirt-operator 544 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/kubevirt-ui-custom-manifests/kubevirt_ui.yml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | kind: ClusterRoleBinding 4 | metadata: 5 | name: kweb-ui 6 | roleRef: 7 | apiGroup: rbac.authorization.k8s.io 8 | kind: ClusterRole 9 | name: cluster-admin 10 | subjects: 11 | - kind: ServiceAccount 12 | name: default 13 | namespace: kubevirt 14 | --- 15 | apiVersion: v1 16 | kind: 
ReplicationController 17 | metadata: 18 | labels: 19 | app: kubevirt-web-ui 20 | openshift.io/deployment-config.name: kubevirt-web-ui 21 | name: kubevirt-web-ui 22 | namespace: kubevirt 23 | spec: 24 | replicas: 1 25 | selector: 26 | app: kubevirt-web-ui 27 | deployment: kubevirt-web-ui 28 | deploymentconfig: kubevirt-web-ui 29 | template: 30 | metadata: 31 | labels: 32 | app: kubevirt-web-ui 33 | deployment: kubevirt-web-ui 34 | deploymentconfig: kubevirt-web-ui 35 | spec: 36 | containers: 37 | - image: quay.io/kubevirt/kubevirt-web-ui:latest 38 | imagePullPolicy: IfNotPresent 39 | name: kubevirt-web-ui 40 | resources: {} 41 | dnsPolicy: ClusterFirst 42 | restartPolicy: Always 43 | schedulerName: default-scheduler 44 | securityContext: {} 45 | terminationGracePeriodSeconds: 30 46 | --- 47 | apiVersion: v1 48 | kind: Service 49 | metadata: 50 | labels: 51 | app: kubevirt-web-ui 52 | name: kubevirt-web-ui 53 | namespace: kubevirt 54 | spec: 55 | ports: 56 | - name: https 57 | nodePort: 30000 58 | port: 9000 59 | protocol: TCP 60 | targetPort: 9000 61 | selector: 62 | deploymentconfig: kubevirt-web-ui 63 | sessionAffinity: None 64 | type: NodePort 65 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/multus-custom-manifests/cni-plugins.yml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: extensions/v1beta1 3 | kind: DaemonSet 4 | metadata: 5 | name: kube-cni-plugins-ds-amd64 6 | namespace: multus 7 | labels: 8 | tier: node 9 | app: cni-plugins 10 | spec: 11 | template: 12 | metadata: 13 | labels: 14 | tier: node 15 | app: cni-plugins 16 | spec: 17 | hostNetwork: true 18 | nodeSelector: 19 | beta.kubernetes.io/arch: amd64 20 | tolerations: 21 | - key: node-role.kubernetes.io/master 22 | operator: Exists 23 | effect: NoSchedule 24 | containers: 25 | - name: cni-plugins 26 | image: quay.io/schseba/cni-plugins:latest 27 | resources: 28 | 
requests: 29 | cpu: "100m" 30 | memory: "50Mi" 31 | limits: 32 | cpu: "100m" 33 | memory: "50Mi" 34 | securityContext: 35 | privileged: true 36 | volumeMounts: 37 | - name: cnibin 38 | mountPath: /opt/cni/bin 39 | volumes: 40 | - name: cnibin 41 | hostPath: 42 | path: /opt/cni/bin 43 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/multus-custom-manifests/l2-bridge.yml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: extensions/v1beta1 3 | kind: DaemonSet 4 | metadata: 5 | name: kube-l2-bridge-ds-amd64 6 | namespace: multus 7 | labels: 8 | tier: node 9 | app: l2-bridge-cni-plugin 10 | spec: 11 | template: 12 | metadata: 13 | labels: 14 | tier: node 15 | app: l2-bridge-cni-plugin 16 | spec: 17 | hostNetwork: true 18 | nodeSelector: 19 | beta.kubernetes.io/arch: amd64 20 | tolerations: 21 | - key: node-role.kubernetes.io/master 22 | operator: Exists 23 | effect: NoSchedule 24 | containers: 25 | - name: cni-l2-bridge-plugin 26 | image: quay.io/schseba/l2-bridge-cni-plugin:latest 27 | resources: 28 | requests: 29 | cpu: "100m" 30 | memory: "50Mi" 31 | limits: 32 | cpu: "100m" 33 | memory: "50Mi" 34 | securityContext: 35 | privileged: true 36 | volumeMounts: 37 | - name: cnibin 38 | mountPath: /opt/cni/bin 39 | volumes: 40 | - name: cnibin 41 | hostPath: 42 | path: /opt/cni/bin 43 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/multus-custom-manifests/multus-clusterrole.yml: -------------------------------------------------------------------------------- 1 | --- 2 | kind: ClusterRole 3 | apiVersion: rbac.authorization.k8s.io/v1beta1 4 | metadata: 5 | name: multus 6 | rules: 7 | - apiGroups: 8 | - '*' 9 | resources: 10 | - '*' 11 | verbs: 12 | - '*' 13 | - nonResourceURLs: 14 | - '*' 15 | verbs: 16 | - '*'
-------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/multus-custom-manifests/multus-clusterrolebinding.yml: -------------------------------------------------------------------------------- 1 | --- 2 | kind: ClusterRoleBinding 3 | apiVersion: rbac.authorization.k8s.io/v1beta1 4 | metadata: 5 | name: multus 6 | roleRef: 7 | apiGroup: rbac.authorization.k8s.io 8 | kind: ClusterRole 9 | name: multus 10 | subjects: 11 | - kind: ServiceAccount 12 | name: multus 13 | namespace: multus -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/multus-custom-manifests/multus-cni-configmap.yml: -------------------------------------------------------------------------------- 1 | --- 2 | kind: ConfigMap 3 | apiVersion: v1 4 | metadata: 5 | name: multus-cni-config 6 | namespace: multus 7 | labels: 8 | tier: node 9 | app: multus 10 | data: 11 | 00-multus.conf: | 12 | { 13 | "name": "multus-cni-network", 14 | "type": "multus", 15 | "delegates": [ 16 | { 17 | "type": "flannel", 18 | "name": "flannel.1", 19 | "delegate": { 20 | "isDefaultGateway": true 21 | } 22 | } 23 | ], 24 | "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig" 25 | } -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/multus-custom-manifests/multus-ds.yml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: extensions/v1beta1 3 | kind: DaemonSet 4 | metadata: 5 | name: kube-multus-ds-amd64 6 | namespace: multus 7 | labels: 8 | tier: node 9 | app: multus 10 | spec: 11 | template: 12 | metadata: 13 | labels: 14 | tier: node 15 | app: multus 16 | spec: 17 | hostNetwork: true 18 | nodeSelector: 19 | beta.kubernetes.io/arch: amd64 20 | tolerations: 21 | - key: node-role.kubernetes.io/master 22 | operator: Exists 23 | effect: NoSchedule 24 | 
serviceAccountName: multus 25 | containers: 26 | - name: kube-multus 27 | command: ["/entrypoint.sh"] 28 | args: ["--multus-conf-file=/usr/src/multus-cni/images/00-multus.conf"] 29 | image: docker.io/nfvpe/multus:latest 30 | resources: 31 | requests: 32 | cpu: "100m" 33 | memory: "50Mi" 34 | limits: 35 | cpu: "100m" 36 | memory: "50Mi" 37 | securityContext: 38 | privileged: true 39 | volumeMounts: 40 | - name: cni 41 | mountPath: /host/etc/cni/net.d 42 | - name: cnibin 43 | mountPath: /host/opt/cni/bin 44 | - name: multus-cfg 45 | mountPath: /usr/src/multus-cni/images/ 46 | volumes: 47 | - name: cni 48 | hostPath: 49 | path: /etc/cni/net.d 50 | - name: cnibin 51 | hostPath: 52 | path: /opt/cni/bin 53 | - name: multus-cfg 54 | configMap: 55 | name: multus-cni-config -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/multus-custom-manifests/multus-ns.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | metadata: 4 | name: multus -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/multus-custom-manifests/multus-sa.yml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: ServiceAccount 4 | metadata: 5 | name: multus 6 | namespace: multus -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/multus-custom-manifests/nad-crd.yml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: apiextensions.k8s.io/v1beta1 3 | kind: CustomResourceDefinition 4 | metadata: 5 | name: network-attachment-definitions.k8s.cni.cncf.io 6 | spec: 7 | group: k8s.cni.cncf.io 8 | version: v1 9 | scope: Namespaced 10 | names: 11 | plural: network-attachment-definitions 12 | singular: 
network-attachment-definition 13 | kind: NetworkAttachmentDefinition 14 | shortNames: 15 | - net-attach-def 16 | validation: 17 | openAPIV3Schema: 18 | properties: 19 | spec: 20 | properties: 21 | config: 22 | type: string 23 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/ovs-cni-manifests/kubernetes-ovs-cni.yml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: ServiceAccount 4 | metadata: 5 | name: ovs-cni-marker 6 | namespace: kube-system 7 | --- 8 | kind: ClusterRole 9 | apiVersion: rbac.authorization.k8s.io/v1beta1 10 | metadata: 11 | name: ovs-cni-marker-cr 12 | rules: 13 | - apiGroups: 14 | - "" 15 | resources: 16 | - nodes 17 | - nodes/status 18 | verbs: 19 | - get 20 | - update 21 | - patch 22 | --- 23 | kind: ClusterRoleBinding 24 | apiVersion: rbac.authorization.k8s.io/v1beta1 25 | metadata: 26 | name: ovs-cni-marker-crb 27 | roleRef: 28 | apiGroup: rbac.authorization.k8s.io 29 | kind: ClusterRole 30 | name: ovs-cni-marker-cr 31 | subjects: 32 | - kind: ServiceAccount 33 | name: ovs-cni-marker 34 | namespace: kube-system 35 | --- 36 | apiVersion: extensions/v1beta1 37 | kind: DaemonSet 38 | metadata: 39 | name: ovs-cni-amd64 40 | namespace: kube-system 41 | labels: 42 | tier: node 43 | app: ovs-cni 44 | spec: 45 | template: 46 | metadata: 47 | labels: 48 | tier: node 49 | app: ovs-cni 50 | spec: 51 | serviceAccountName: ovs-cni-marker 52 | hostNetwork: true 53 | nodeSelector: 54 | beta.kubernetes.io/arch: amd64 55 | tolerations: 56 | - key: node-role.kubernetes.io/master 57 | operator: Exists 58 | effect: NoSchedule 59 | containers: 60 | - name: ovs-cni-plugin 61 | image: quay.io/kubevirt/ovs-cni-plugin:latest 62 | imagePullPolicy: IfNotPresent 63 | resources: 64 | requests: 65 | cpu: "100m" 66 | memory: "50Mi" 67 | limits: 68 | cpu: "100m" 69 | memory: "50Mi" 70 | securityContext: 71 | privileged: true 72 | 
volumeMounts: 73 | - name: cnibin 74 | mountPath: /host/opt/cni/bin 75 | - name: ovs-cni-marker 76 | image: quay.io/kubevirt/ovs-cni-marker:latest 77 | imagePullPolicy: IfNotPresent 78 | securityContext: 79 | privileged: true 80 | args: 81 | - -node-name 82 | - $(NODE_NAME) 83 | - -ovs-socket 84 | - unix:///host/var/run/openvswitch/db.sock 85 | volumeMounts: 86 | - name: ovs-var-run 87 | mountPath: /host/var/run/openvswitch 88 | env: 89 | - name: NODE_NAME 90 | valueFrom: 91 | fieldRef: 92 | fieldPath: spec.nodeName 93 | volumes: 94 | - name: cnibin 95 | hostPath: 96 | path: /opt/cni/bin 97 | - name: ovs-var-run 98 | hostPath: 99 | path: /var/run/openvswitch 100 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/ovs-cni-manifests/nad-br1.yml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: "k8s.cni.cncf.io/v1" 3 | kind: NetworkAttachmentDefinition 4 | metadata: 5 | name: ovs-br1 6 | namespace: default 7 | spec: 8 | config: '{ "cniVersion": "0.3.1", "type": "ovs", "bridge": "br1" }' 9 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/student-materials/KubeVirt-grafana-dashboard.json: -------------------------------------------------------------------------------- 1 | { 2 | "annotations": { 3 | "list": [ 4 | { 5 | "builtIn": 1, 6 | "datasource": "-- Grafana --", 7 | "enable": true, 8 | "hide": true, 9 | "iconColor": "rgba(0, 211, 255, 1)", 10 | "name": "Annotations & Alerts", 11 | "type": "dashboard" 12 | } 13 | ] 14 | }, 15 | "editable": true, 16 | "gnetId": null, 17 | "graphTooltip": 0, 18 | "id": 14, 19 | "links": [], 20 | "panels": [ 21 | { 22 | "aliasColors": {}, 23 | "bars": false, 24 | "dashLength": 10, 25 | "dashes": false, 26 | "fill": 1, 27 | "gridPos": { 28 | "h": 8, 29 | "w": 12, 30 | "x": 0, 31 | "y": 0 32 | }, 33 | "id": 7, 34 | "legend": { 35 | 
"avg": false, 36 | "current": true, 37 | "hideEmpty": true, 38 | "hideZero": false, 39 | "max": false, 40 | "min": false, 41 | "show": true, 42 | "total": false, 43 | "values": true 44 | }, 45 | "lines": true, 46 | "linewidth": 1, 47 | "links": [], 48 | "nullPointMode": "null", 49 | "percentage": false, 50 | "pointradius": 2, 51 | "points": false, 52 | "renderer": "flot", 53 | "seriesOverrides": [], 54 | "spaceLength": 10, 55 | "stack": true, 56 | "steppedLine": false, 57 | "targets": [ 58 | { 59 | "expr": "sum(increase(kubevirt_vm_vcpu_seconds[5m])) by (domain)", 60 | "format": "heatmap", 61 | "intervalFactor": 1, 62 | "legendFormat": "{{domain}}", 63 | "refId": "A" 64 | } 65 | ], 66 | "thresholds": [], 67 | "timeFrom": null, 68 | "timeRegions": [], 69 | "timeShift": null, 70 | "title": "VMs CPU time in sec", 71 | "tooltip": { 72 | "shared": true, 73 | "sort": 0, 74 | "value_type": "individual" 75 | }, 76 | "type": "graph", 77 | "xaxis": { 78 | "buckets": null, 79 | "mode": "time", 80 | "name": null, 81 | "show": true, 82 | "values": [] 83 | }, 84 | "yaxes": [ 85 | { 86 | "format": "s", 87 | "label": null, 88 | "logBase": 1, 89 | "max": null, 90 | "min": null, 91 | "show": true 92 | }, 93 | { 94 | "format": "short", 95 | "label": null, 96 | "logBase": 1, 97 | "max": null, 98 | "min": null, 99 | "show": false 100 | } 101 | ], 102 | "yaxis": { 103 | "align": false, 104 | "alignLevel": null 105 | } 106 | }, 107 | { 108 | "gridPos": { 109 | "h": 9, 110 | "w": 12, 111 | "x": 12, 112 | "y": 0 113 | }, 114 | "id": 2, 115 | "links": [], 116 | "options": { 117 | "maxValue": 100, 118 | "minValue": 0, 119 | "orientation": "auto", 120 | "showThresholdLabels": false, 121 | "showThresholdMarkers": true, 122 | "thresholds": [ 123 | { 124 | "color": "green", 125 | "index": 0, 126 | "value": null 127 | }, 128 | { 129 | "color": "red", 130 | "index": 1, 131 | "value": 100 132 | } 133 | ], 134 | "valueMappings": [], 135 | "valueOptions": { 136 | "decimals": null, 137 | "prefix": "", 
138 | "stat": "sum", 139 | "suffix": " - 2xx", 140 | "unit": "reqps" 141 | } 142 | }, 143 | "pluginVersion": "6.1.6", 144 | "targets": [ 145 | { 146 | "expr": "sum(rate(rest_client_requests_total{code=~\"2..\",job=\"kubevirt-prometheus-metrics\"}[5m]))", 147 | "format": "time_series", 148 | "interval": "", 149 | "intervalFactor": 1, 150 | "legendFormat": "Code 2xx", 151 | "refId": "A" 152 | } 153 | ], 154 | "timeFrom": null, 155 | "timeShift": null, 156 | "title": "API client request - 2xx", 157 | "transparent": true, 158 | "type": "gauge" 159 | }, 160 | { 161 | "aliasColors": {}, 162 | "bars": false, 163 | "dashLength": 10, 164 | "dashes": false, 165 | "fill": 1, 166 | "gridPos": { 167 | "h": 8, 168 | "w": 12, 169 | "x": 0, 170 | "y": 8 171 | }, 172 | "id": 9, 173 | "legend": { 174 | "avg": false, 175 | "current": true, 176 | "max": false, 177 | "min": false, 178 | "show": true, 179 | "total": false, 180 | "values": true 181 | }, 182 | "lines": true, 183 | "linewidth": 1, 184 | "links": [], 185 | "nullPointMode": "null", 186 | "percentage": false, 187 | "pointradius": 2, 188 | "points": false, 189 | "renderer": "flot", 190 | "seriesOverrides": [], 191 | "spaceLength": 10, 192 | "stack": false, 193 | "steppedLine": false, 194 | "targets": [ 195 | { 196 | "expr": "sum(rate(kubevirt_vm_storage_iops_total[5m])) by (domain,type)", 197 | "format": "time_series", 198 | "intervalFactor": 1, 199 | "legendFormat": "{{domain}} - {{type}}", 200 | "refId": "A" 201 | } 202 | ], 203 | "thresholds": [], 204 | "timeFrom": null, 205 | "timeRegions": [], 206 | "timeShift": null, 207 | "title": "VMs IOPs", 208 | "tooltip": { 209 | "shared": true, 210 | "sort": 0, 211 | "value_type": "individual" 212 | }, 213 | "type": "graph", 214 | "xaxis": { 215 | "buckets": null, 216 | "mode": "time", 217 | "name": null, 218 | "show": true, 219 | "values": [] 220 | }, 221 | "yaxes": [ 222 | { 223 | "format": "ops", 224 | "label": null, 225 | "logBase": 1, 226 | "max": null, 227 | "min": null, 228 
| "show": true 229 | }, 230 | { 231 | "format": "short", 232 | "label": null, 233 | "logBase": 1, 234 | "max": null, 235 | "min": null, 236 | "show": true 237 | } 238 | ], 239 | "yaxis": { 240 | "align": false, 241 | "alignLevel": null 242 | } 243 | }, 244 | { 245 | "gridPos": { 246 | "h": 9, 247 | "w": 12, 248 | "x": 12, 249 | "y": 9 250 | }, 251 | "id": 3, 252 | "links": [], 253 | "options": { 254 | "maxValue": "50", 255 | "minValue": 0, 256 | "orientation": "auto", 257 | "showThresholdLabels": true, 258 | "showThresholdMarkers": true, 259 | "thresholds": [ 260 | { 261 | "color": "green", 262 | "index": 0, 263 | "value": null 264 | }, 265 | { 266 | "color": "red", 267 | "index": 1, 268 | "value": 20 269 | } 270 | ], 271 | "valueMappings": [], 272 | "valueOptions": { 273 | "decimals": null, 274 | "prefix": "", 275 | "stat": "sum", 276 | "suffix": " 4xx", 277 | "unit": "reqps" 278 | } 279 | }, 280 | "pluginVersion": "6.1.6", 281 | "targets": [ 282 | { 283 | "expr": "sum(rate(rest_client_requests_total{code=~\"4..\",job=\"kubevirt-prometheus-metrics\"}[5m]))", 284 | "format": "time_series", 285 | "interval": "", 286 | "intervalFactor": 1, 287 | "legendFormat": "Code 5XX", 288 | "refId": "A" 289 | } 290 | ], 291 | "timeFrom": null, 292 | "timeShift": null, 293 | "title": "API client request - 4xx", 294 | "transparent": true, 295 | "type": "gauge" 296 | }, 297 | { 298 | "aliasColors": {}, 299 | "bars": false, 300 | "dashLength": 10, 301 | "dashes": false, 302 | "description": "", 303 | "fill": 1, 304 | "gridPos": { 305 | "h": 8, 306 | "w": 12, 307 | "x": 0, 308 | "y": 16 309 | }, 310 | "id": 11, 311 | "legend": { 312 | "avg": false, 313 | "current": true, 314 | "max": false, 315 | "min": false, 316 | "show": true, 317 | "total": false, 318 | "values": true 319 | }, 320 | "lines": true, 321 | "linewidth": 1, 322 | "links": [], 323 | "nullPointMode": "null as zero", 324 | "percentage": false, 325 | "pointradius": 2, 326 | "points": false, 327 | "renderer": "flot", 328 | 
"seriesOverrides": [], 329 | "spaceLength": 10, 330 | "stack": true, 331 | "steppedLine": false, 332 | "targets": [ 333 | { 334 | "expr": "sum(kubevirt_vm_memory_resident_bytes) by (domain)", 335 | "format": "time_series", 336 | "intervalFactor": 1, 337 | "legendFormat": "{{domain}}", 338 | "refId": "A" 339 | } 340 | ], 341 | "thresholds": [], 342 | "timeFrom": null, 343 | "timeRegions": [], 344 | "timeShift": null, 345 | "title": "VMs resident memory", 346 | "tooltip": { 347 | "shared": true, 348 | "sort": 0, 349 | "value_type": "individual" 350 | }, 351 | "type": "graph", 352 | "xaxis": { 353 | "buckets": null, 354 | "mode": "time", 355 | "name": null, 356 | "show": true, 357 | "values": [] 358 | }, 359 | "yaxes": [ 360 | { 361 | "format": "bytes", 362 | "label": null, 363 | "logBase": 1, 364 | "max": null, 365 | "min": null, 366 | "show": true 367 | }, 368 | { 369 | "format": "short", 370 | "label": null, 371 | "logBase": 1, 372 | "max": null, 373 | "min": null, 374 | "show": false 375 | } 376 | ], 377 | "yaxis": { 378 | "align": false, 379 | "alignLevel": null 380 | } 381 | }, 382 | { 383 | "gridPos": { 384 | "h": 9, 385 | "w": 12, 386 | "x": 12, 387 | "y": 18 388 | }, 389 | "id": 4, 390 | "interval": "0", 391 | "links": [], 392 | "options": { 393 | "maxValue": "50", 394 | "minValue": 0, 395 | "orientation": "auto", 396 | "showThresholdLabels": true, 397 | "showThresholdMarkers": true, 398 | "thresholds": [ 399 | { 400 | "color": "green", 401 | "index": 0, 402 | "value": null 403 | }, 404 | { 405 | "color": "red", 406 | "index": 1, 407 | "value": 10 408 | } 409 | ], 410 | "valueMappings": [], 411 | "valueOptions": { 412 | "decimals": null, 413 | "prefix": "", 414 | "stat": "sum", 415 | "suffix": " - 5xx", 416 | "unit": "reqps" 417 | } 418 | }, 419 | "pluginVersion": "6.1.6", 420 | "targets": [ 421 | { 422 | "expr": "sum(rate(rest_client_requests_total{code=~\"5..\",job=\"kubevirt-prometheus-metrics\"}[5m])) or vector(0)", 423 | "format": "time_series", 424 | 
"interval": "", 425 | "intervalFactor": 1, 426 | "legendFormat": "Code 5XX", 427 | "refId": "A" 428 | } 429 | ], 430 | "timeFrom": null, 431 | "timeShift": null, 432 | "title": "API client request - 5xx", 433 | "transparent": true, 434 | "type": "gauge" 435 | }, 436 | { 437 | "aliasColors": {}, 438 | "bars": false, 439 | "dashLength": 10, 440 | "dashes": false, 441 | "fill": 1, 442 | "gridPos": { 443 | "h": 8, 444 | "w": 12, 445 | "x": 0, 446 | "y": 24 447 | }, 448 | "id": 13, 449 | "legend": { 450 | "avg": false, 451 | "current": true, 452 | "max": false, 453 | "min": false, 454 | "show": true, 455 | "total": false, 456 | "values": true 457 | }, 458 | "lines": true, 459 | "linewidth": 1, 460 | "links": [], 461 | "nullPointMode": "null as zero", 462 | "percentage": false, 463 | "pointradius": 2, 464 | "points": false, 465 | "renderer": "flot", 466 | "seriesOverrides": [], 467 | "spaceLength": 10, 468 | "stack": false, 469 | "steppedLine": false, 470 | "targets": [ 471 | { 472 | "expr": "sum(rate(kubevirt_vm_network_traffic_bytes_total[5m])) by (domain,type)", 473 | "format": "time_series", 474 | "intervalFactor": 1, 475 | "legendFormat": "{{domain}} - {{type}}", 476 | "refId": "A" 477 | } 478 | ], 479 | "thresholds": [], 480 | "timeFrom": null, 481 | "timeRegions": [], 482 | "timeShift": null, 483 | "title": "VMs Network Traffic in bytes", 484 | "tooltip": { 485 | "shared": true, 486 | "sort": 0, 487 | "value_type": "individual" 488 | }, 489 | "type": "graph", 490 | "xaxis": { 491 | "buckets": null, 492 | "mode": "time", 493 | "name": null, 494 | "show": true, 495 | "values": [] 496 | }, 497 | "yaxes": [ 498 | { 499 | "format": "Bps", 500 | "label": null, 501 | "logBase": 1, 502 | "max": null, 503 | "min": null, 504 | "show": true 505 | }, 506 | { 507 | "format": "short", 508 | "label": null, 509 | "logBase": 1, 510 | "max": null, 511 | "min": null, 512 | "show": false 513 | } 514 | ], 515 | "yaxis": { 516 | "align": false, 517 | "alignLevel": null 518 | } 519 | }, 
520 | { 521 | "gridPos": { 522 | "h": 9, 523 | "w": 12, 524 | "x": 12, 525 | "y": 27 526 | }, 527 | "id": 5, 528 | "interval": "0", 529 | "links": [], 530 | "options": { 531 | "maxValue": "50", 532 | "minValue": 0, 533 | "orientation": "auto", 534 | "showThresholdLabels": true, 535 | "showThresholdMarkers": true, 536 | "thresholds": [ 537 | { 538 | "color": "green", 539 | "index": 0, 540 | "value": null 541 | }, 542 | { 543 | "color": "red", 544 | "index": 1, 545 | "value": 5 546 | } 547 | ], 548 | "valueMappings": [], 549 | "valueOptions": { 550 | "decimals": null, 551 | "prefix": "", 552 | "stat": "sum", 553 | "suffix": " - errors", 554 | "unit": "reqps" 555 | } 556 | }, 557 | "pluginVersion": "6.1.6", 558 | "targets": [ 559 | { 560 | "expr": "sum(rate(rest_client_requests_total{code=\"\",job=\"kubevirt-prometheus-metrics\"}[5m])) or vector(0)", 561 | "format": "time_series", 562 | "interval": "", 563 | "intervalFactor": 1, 564 | "legendFormat": "Code ", 565 | "refId": "A" 566 | } 567 | ], 568 | "timeFrom": null, 569 | "timeShift": null, 570 | "title": "API client request - errors", 571 | "transparent": true, 572 | "type": "gauge" 573 | } 574 | ], 575 | "refresh": "10s", 576 | "schemaVersion": 18, 577 | "style": "dark", 578 | "tags": [], 579 | "templating": { 580 | "list": [] 581 | }, 582 | "time": { 583 | "from": "now-5m", 584 | "to": "now" 585 | }, 586 | "timepicker": { 587 | "refresh_intervals": [ 588 | "5s", 589 | "10s", 590 | "30s", 591 | "1m", 592 | "5m", 593 | "15m", 594 | "30m", 595 | "1h", 596 | "2h", 597 | "1d" 598 | ], 599 | "time_options": [ 600 | "5m", 601 | "15m", 602 | "1h", 603 | "6h", 604 | "12h", 605 | "24h", 606 | "2d", 607 | "7d", 608 | "30d" 609 | ] 610 | }, 611 | "timezone": "", 612 | "title": "KubeVirt", 613 | "uid": "zCQVcNZWk", 614 | "version": 2 615 | } -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/student-materials/kubevirt-servicemonitor.yml: 
-------------------------------------------------------------------------------- 1 | apiVersion: monitoring.coreos.com/v1 2 | kind: ServiceMonitor 3 | metadata: 4 | labels: 5 | app: prometheus-operator-kubelet 6 | chart: prometheus-operator-5.7.0 7 | heritage: Tiller 8 | release: kubevirtlab 9 | prometheus.kubevirt.io: "" 10 | name: kubevirtlab-kubevirt 11 | namespace: prometheus 12 | spec: 13 | endpoints: 14 | - honorLabels: true 15 | port: metrics 16 | scheme: https 17 | tlsConfig: 18 | insecureSkipVerify: true 19 | jobLabel: prometheus.kubevirt.io 20 | namespaceSelector: 21 | matchNames: 22 | - kubevirt 23 | selector: 24 | matchLabels: 25 | prometheus.kubevirt.io: "" 26 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/student-materials/multus_nad_br1.yml: -------------------------------------------------------------------------------- 1 | apiVersion: "k8s.cni.cncf.io/v1" 2 | kind: NetworkAttachmentDefinition 3 | metadata: 4 | name: ovs-net-1 5 | namespace: 'kubevirt' 6 | spec: 7 | config: '{ 8 | "cniVersion": "0.3.1", 9 | "type": "ovs", 10 | "bridge": "br1" 11 | }' 12 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/student-materials/vm_containerdisk.yml: -------------------------------------------------------------------------------- 1 | apiVersion: kubevirt.io/v1alpha3 2 | kind: VirtualMachine 3 | metadata: 4 | labels: 5 | kubevirt.io/os: linux 6 | name: vm1 7 | spec: 8 | running: false 9 | template: 10 | metadata: 11 | labels: 12 | kubevirt.io/domain: vm1 13 | creationTimestamp: null 14 | spec: 15 | domain: 16 | cpu: 17 | cores: 2 18 | devices: 19 | disks: 20 | - disk: 21 | bus: virtio 22 | name: disk0 23 | - cdrom: 24 | bus: sata 25 | name: cloudinitdisk 26 | interfaces: 27 | - bridge: {} 28 | name: default 29 | machine: 30 | type: q35 31 | resources: 32 | requests: 33 | memory: 1024M 34 | networks: 35 | 
- name: default 36 | pod: {} 37 | volumes: 38 | - name: disk0 39 | containerDisk: 40 | image: kubevirt/fedora-cloud-container-disk-demo:latest 41 | - cloudInitNoCloud: 42 | userData: |- 43 | #cloud-config 44 | hostname: vm1 45 | password: fedora 46 | ssh_pwauth: True 47 | chpasswd: { expire: False } 48 | disable_root: false 49 | ssh_authorized_keys: 50 | - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5Qbj7vDf0uYQpeYb432g5R4YvYJaPfPA4EM4qc3lO62c7oUsWbZlZBl5neEWX41HGCIP4Zm1ybN9iiDyeIns6hg5OkU2vUGuPtV2KCAZOI7snzXeZxlrjsVMjMy/CYUlvIOAPxY4XzfzMMAJjIJni18R2PqVRI4f4SeSq3IIzpnOu2VQmqjFmmdybQY83BvBvWj6KLszAXkJk9LkZSAoktXimDBWFPQYikzZihLolRxwHzo21lXSw58D1N+6IeMudOviAte5yu6FBUN6dFYbt9dkLuH2/ONliFz/042n5UNp0wC5BLdpVwJpWqqrCVaeXBgla/gYm8YNZJIAlf8K5 kboumedh@vegeta.local 51 | name: cloudinitdisk 52 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/student-materials/vm_datavolume.yml: -------------------------------------------------------------------------------- 1 | apiVersion: kubevirt.io/v1alpha3 2 | kind: VirtualMachine 3 | metadata: 4 | labels: 5 | kubevirt.io/vm: vm2 6 | name: vm2 7 | namespace: default 8 | spec: 9 | running: true 10 | template: 11 | metadata: 12 | labels: 13 | kubevirt.io/vm: vm2 14 | spec: 15 | domain: 16 | devices: 17 | disks: 18 | - disk: 19 | bus: virtio 20 | name: datavolumedisk1 21 | resources: 22 | requests: 23 | memory: 128M 24 | volumes: 25 | - dataVolume: 26 | name: vm2-dv 27 | name: datavolumedisk1 28 | dataVolumeTemplates: 29 | - metadata: 30 | name: vm2-dv 31 | spec: 32 | pvc: 33 | accessModes: 34 | - ReadWriteOnce 35 | resources: 36 | requests: 37 | storage: 5109Mi 38 | storageClassName: local-volumes 39 | source: 40 | http: 41 | url: https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img 42 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/student-materials/vm_multus1.yml: 
-------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: kubevirt.io/v1alpha3 3 | kind: VirtualMachine 4 | metadata: 5 | labels: 6 | kubevirt-vm: fedora-multus-1 7 | name: fedora-multus-1 8 | spec: 9 | running: true 10 | template: 11 | metadata: 12 | labels: 13 | kubevirt.io/domain: fedora-multus-1 14 | spec: 15 | domain: 16 | devices: 17 | disks: 18 | - disk: 19 | bus: virtio 20 | name: disk0 21 | - disk: 22 | bus: virtio 23 | name: cloudinitdisk 24 | interfaces: 25 | - bridge: {} 26 | name: default 27 | - bridge: {} 28 | macAddress: 20:37:cf:e0:ad:f1 29 | name: ovs-net-1 30 | machine: 31 | type: "" 32 | resources: 33 | requests: 34 | memory: 1024M 35 | networks: 36 | - name: default 37 | pod: {} 38 | - multus: 39 | networkName: ovs-net-1 40 | name: ovs-net-1 41 | volumes: 42 | - name: disk0 43 | containerDisk: 44 | image: docker.io/kubevirt/fedora-cloud-container-disk-demo:latest 45 | - cloudInitNoCloud: 46 | userData: |- 47 | #cloud-config 48 | password: fedora 49 | chpasswd: { expire: False } 50 | ssh_pwauth: True 51 | runcmd: 52 | - nmcli con mod 'Wired connection 1' ipv4.address 11.0.0.5/24 53 | - nmcli con mod 'Wired connection 1' ipv4.method static 54 | - nmcli con up 'Wired connection 1' 55 | name: cloudinitdisk 56 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/files/student-materials/vm_multus2.yml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: kubevirt.io/v1alpha3 3 | kind: VirtualMachine 4 | metadata: 5 | labels: 6 | kubevirt-vm: fedora-multus-2 7 | name: fedora-multus-2 8 | spec: 9 | running: true 10 | template: 11 | metadata: 12 | labels: 13 | kubevirt.io/domain: fedora-multus-2 14 | spec: 15 | domain: 16 | devices: 17 | disks: 18 | - disk: 19 | bus: virtio 20 | name: disk0 21 | - disk: 22 | bus: virtio 23 | name: cloudinitdisk 24 | interfaces: 25 | - bridge: {} 26 | name: 
default 27 | - bridge: {} 28 | macAddress: 20:37:cf:e0:ad:f2 29 | name: ovs-br1 30 | machine: 31 | type: "" 32 | resources: 33 | requests: 34 | memory: 1024M 35 | networks: 36 | - name: default 37 | pod: {} 38 | - multus: 39 | networkName: ovs-br1 40 | name: ovs-br1 41 | volumes: 42 | - name: disk0 43 | containerDisk: 44 | image: docker.io/kubevirt/fedora-cloud-container-disk-demo:latest 45 | - cloudInitNoCloud: 46 | userData: |- 47 | #cloud-config 48 | password: fedora 49 | chpasswd: { expire: False } 50 | ssh_pwauth: True 51 | runcmd: 52 | - nmcli con mod 'Wired connection 1' ipv4.address 11.0.0.6/24 53 | - nmcli con mod 'Wired connection 1' ipv4.method static 54 | - nmcli con up 'Wired connection 1' 55 | name: cloudinitdisk 56 | 57 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/tasks/kubevirt.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Install virtctl 3 | get_url: 4 | url: https://github.com/kubevirt/kubevirt/releases/download/{{ kubevirt_version }}/virtctl-{{ kubevirt_version }}-linux-amd64 5 | dest: "{{ ansible_user_dir }}/.local/bin/virtctl" 6 | mode: 0755 7 | owner: "{{ ansible_user }}" 8 | validate_certs: false 9 | retries: 5 10 | delay: 5 11 | when: not testing | bool 12 | 13 | - name: Install virtctl on testing 14 | get_url: 15 | url: https://github.com/kubevirt/kubevirt/releases/download/{{ kubevirt_version }}/virtctl-{{ kubevirt_version }}-linux-amd64 16 | dest: '/root/.local/bin/virtctl' 17 | mode: 0755 18 | owner: "{{ ansible_user }}" 19 | validate_certs: false 20 | retries: 5 21 | delay: 5 22 | when: testing | bool 23 | 24 | - name: Create directory for KubeVirt manifests 25 | file: 26 | state: directory 27 | path: "{{ ansible_user_dir }}/kubevirt" 28 | mode: 0755 29 | owner: "{{ ansible_user }}" 30 | 31 | - name: Copy KubeVirt manifests 32 | copy: 33 | src: files/{{ item }} 34 | dest: "{{ ansible_user_dir 
}}/kubevirt/" 35 | mode: 0755 36 | owner: "{{ ansible_user }}" 37 | loop: 38 | - kubevirt-ui-custom-manifests 39 | - kubevirt-operator-manifests 40 | - cdi-operator-manifests 41 | 42 | - name: Copy student materials 43 | copy: 44 | src: files/student-materials 45 | dest: "{{ ansible_user_dir }}/" 46 | owner: "{{ ansible_user }}" 47 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - import_tasks: kubevirt.yml 2 | - import_tasks: multus.yml 3 | - import_tasks: ovs_cni.yml 4 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/tasks/multus.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Apply Multus manifests 3 | k8s: 4 | state: present 5 | resource_definition: "{{ lookup('file', 'files/multus-custom-manifests/{{ item }}.yml') }}" 6 | loop: 7 | - multus-ns 8 | - nad-crd 9 | - multus-clusterrole 10 | - multus-clusterrolebinding 11 | - multus-sa 12 | - multus-cni-configmap 13 | - multus-ds 14 | - cni-plugins 15 | - l2-bridge 16 | loop_control: 17 | pause: 10 18 | 19 | - name: Pausing for 2 minute 20 | wait_for: timeout=120 21 | delegate_to: localhost 22 | vars: 23 | ansible_become: False 24 | -------------------------------------------------------------------------------- /administrator/ansible/roles/kubevirt/tasks/ovs_cni.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Install OVS CNI 3 | k8s: 4 | state: present 5 | resource_definition: "{{ lookup('file', 'files/ovs-cni-manifests/kubernetes-ovs-cni.yml') }}" 6 | 7 | - name: Pausing for 1 minute 8 | wait_for: timeout=60 9 | delegate_to: localhost 10 | vars: 11 | ansible_become: False 12 | -------------------------------------------------------------------------------- 
/administrator/ansible/roles/local-storage/files/local-storage-manifests/provisioner.yml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: ConfigMap 4 | metadata: 5 | name: local-provisioner-config 6 | namespace: local-volume-provisioner 7 | data: 8 | storageClassMap: | 9 | local-volumes: 10 | hostDir: /mnt/storage 11 | mountDir: /mnt/storage 12 | blockCleanerCommand: 13 | - "/scripts/shred.sh" 14 | - "2" 15 | volumeMode: Filesystem 16 | fsType: xfs 17 | --- 18 | apiVersion: extensions/v1beta1 19 | kind: DaemonSet 20 | metadata: 21 | name: local-volume-provisioner 22 | namespace: local-volume-provisioner 23 | labels: 24 | app: local-volume-provisioner 25 | spec: 26 | selector: 27 | matchLabels: 28 | app: local-volume-provisioner 29 | template: 30 | metadata: 31 | labels: 32 | app: local-volume-provisioner 33 | spec: 34 | serviceAccountName: local-storage-admin 35 | containers: 36 | - image: "quay.io/external_storage/local-volume-provisioner:v2.1.0" 37 | imagePullPolicy: "Always" 38 | name: provisioner 39 | securityContext: 40 | privileged: true 41 | env: 42 | - name: MY_NODE_NAME 43 | valueFrom: 44 | fieldRef: 45 | fieldPath: spec.nodeName 46 | volumeMounts: 47 | - mountPath: /etc/provisioner/config 48 | name: provisioner-config 49 | readOnly: true 50 | - mountPath: /mnt/storage 51 | name: local-volumes 52 | mountPropagation: "HostToContainer" 53 | volumes: 54 | - name: provisioner-config 55 | configMap: 56 | name: local-provisioner-config 57 | - name: local-volumes 58 | hostPath: 59 | path: /mnt/storage 60 | 61 | --- 62 | apiVersion: v1 63 | kind: ServiceAccount 64 | metadata: 65 | name: local-storage-admin 66 | namespace: local-volume-provisioner 67 | 68 | --- 69 | apiVersion: rbac.authorization.k8s.io/v1 70 | kind: ClusterRoleBinding 71 | metadata: 72 | name: local-storage-provisioner-pv-binding 73 | namespace: local-volume-provisioner 74 | subjects: 75 | - kind: ServiceAccount 76 | name: 
local-storage-admin 77 | namespace: local-volume-provisioner 78 | roleRef: 79 | kind: ClusterRole 80 | name: system:persistent-volume-provisioner 81 | apiGroup: rbac.authorization.k8s.io 82 | --- 83 | apiVersion: rbac.authorization.k8s.io/v1 84 | kind: ClusterRole 85 | metadata: 86 | name: local-storage-provisioner-node-clusterrole 87 | namespace: local-volume-provisioner 88 | rules: 89 | - apiGroups: [""] 90 | resources: ["nodes"] 91 | verbs: ["get"] 92 | --- 93 | apiVersion: rbac.authorization.k8s.io/v1 94 | kind: ClusterRoleBinding 95 | metadata: 96 | name: local-storage-provisioner-node-binding 97 | namespace: local-volume-provisioner 98 | subjects: 99 | - kind: ServiceAccount 100 | name: local-storage-admin 101 | namespace: local-volume-provisioner 102 | roleRef: 103 | kind: ClusterRole 104 | name: local-storage-provisioner-node-clusterrole 105 | apiGroup: rbac.authorization.k8s.io 106 | -------------------------------------------------------------------------------- /administrator/ansible/roles/local-storage/files/local-storage-manifests/sc.yml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: storage.k8s.io/v1 3 | kind: StorageClass 4 | metadata: 5 | name: local-volumes 6 | provisioner: kubernetes.io/no-provisioner 7 | volumeBindingMode: WaitForFirstConsumer 8 | reclaimPolicy: Delete -------------------------------------------------------------------------------- /administrator/ansible/roles/local-storage/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Create local-volume-provisioner namespace 3 | k8s: 4 | state: present 5 | name: local-volume-provisioner 6 | kind: Namespace 7 | api_version: v1 8 | 9 | - name: Create StorageClass for local volumes 10 | k8s: 11 | state: present 12 | resource_definition: "{{ lookup('file', 'files/local-storage-manifests/sc.yml') }}" 13 | 14 | - name: Deploy local storage provisioner 15 | k8s: 16 | state: 
present 17 | resource_definition: "{{ lookup('file', 'files/local-storage-manifests/provisioner.yml') }}" 18 | -------------------------------------------------------------------------------- /administrator/ansible/roles/prometheus/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - import_tasks: prometheus.yml 2 | when: "'masters' in group_names" 3 | -------------------------------------------------------------------------------- /administrator/ansible/roles/prometheus/tasks/prometheus.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Create Prometheus namespace 3 | k8s: 4 | state: present 5 | name: prometheus 6 | kind: Namespace 7 | api_version: v1 8 | 9 | - name: Check for an existing Prometheus Operator Deployment 10 | k8s_facts: 11 | api_version: apps/v1 12 | kind: Deployment 13 | name: kubevirtlab-prometheus-ope-operator 14 | namespace: default 15 | register: promop_deployment 16 | 17 | - name: Install Prometheus 18 | command: ~/.local/bin/helm install --name kubevirtlab --wait --set grafana.adminPassword=kubevirtlab123 --set prometheus.service.type=NodePort --namespace prometheus stable/prometheus-operator 19 | when: not (promop_deployment['resources'] | length > 0) 20 | 21 | - name: Expose Grafana on a NodePort 22 | k8s: 23 | name: kubevirtlab-grafana-nodeport 24 | namespace: prometheus 25 | kind: Service 26 | api_version: v1 27 | resource_definition: 28 | spec: 29 | selector: 30 | app: grafana 31 | ports: 32 | - protocol: TCP 33 | port: 3000 34 | nodePort: 30300 35 | type: NodePort 36 | -------------------------------------------------------------------------------- /administrator/instructor-kickoff.md: -------------------------------------------------------------------------------- 1 | * Set expectations for attendees 2 | * This is targeted towards engineers who don’t have OCP or KubeVirt experience. If they have a lot of OCP, this will be some review.
The goal is to show them how to run KubeVirt locally and get a bit familiar with it. 3 | * Temp check 4 | * How many have used OpenShift? 5 | * How many have used KubeVirt? 6 | * Cover Requirements 7 | * Laptop with a modern browser for OCP console 8 | * SSH client 9 | * Smartphones and tablets are not really useful. 10 | * We don’t have a Windows .ppk key for PuTTY, but we can make one if they have a Windows box. 11 | * Explain the environment 12 | * One dedicated VM on GCP; they don’t need to know GCP, only how to log into the instance. 13 | * Let them know that this is the first time this lab has been given; there will be some bumps and bruises, and we’ll capture notes on it and be sure to improve it. 14 | * Explain what will be covered here: 15 | * Deployed OCP with oc cluster up 16 | * Explored the environment and some basic OpenShift commands 17 | * Deployed an application on OpenShift 18 | * Deployed and explored KubeVirt 19 | * Explored OCP web console 20 | * Deployed a virtual machine on OCP 21 | * Accessed the virtual machine 22 | * Deployed CDI 23 | * Deployed APB 24 | * This is a self-paced lab, not instructor-driven 25 | * They have 2 hours 26 | * If they don’t get through it all, we can leave the VM up for them or spin one up for them later 27 | * Takeaway: for you to be able to run through this yourself. All scripts are on GitHub. 28 | * Share out the URL for the GitHub repo 29 | * https:///cnvlab (points to the GitHub repo, if that is easier) 30 | * We can redeliver this at any time; spread the word. 31 | * Assign student IDs 32 | * Turn them loose, start the lab 33 | * Check in on the students after 10 minutes to make sure everyone got logged in.
34 | -------------------------------------------------------------------------------- /administrator/terraform/gcp/README.md: -------------------------------------------------------------------------------- 1 | # GCP Environment 2 | 3 | ## Requirements for Ansible and Terraform 4 | 5 | ### Terraform 6 | 7 | Install the Google Cloud SDK from [here](https://cloud.google.com/sdk/downloads#yum), then run these commands to log in from the CLI: 8 | ``` 9 | gcloud auth login 10 | gcloud auth application-default login 11 | ``` 12 | 13 | Now download the modules, fill in the variables, and run the plan: 14 | ``` 15 | terraform init -get -upgrade=true 16 | TF_LOG=DEBUG terraform plan -var-file varfiles/opensouthcode19.tfvars -refresh=true 17 | TF_LOG=DEBUG terraform apply -var-file varfiles/opensouthcode19.tfvars -refresh=true 18 | ``` 19 | 20 | **Note**: Does not work with Terraform 0.12 21 | - https://github.com/terraform-providers/terraform-provider-google/issues/3280 22 | 23 | ### Ansible 24 | 25 | It's a bit involved, but [it is very well explained here](https://devopscube.com/ansible-dymanic-inventry-google-cloud/); the commands below follow that guide. 26 | ``` 27 | pip3 install --user apache-libcloud 28 | mkdir -p ~/.ansible/inventory/ 29 | wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/gce.py -O ~/.ansible/inventory/gce.py 30 | wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/gce.ini -O ~/.ansible/inventory/gce.ini 31 | ``` 32 | 33 | Save your service account JSON file to _~/.ansible/inventory/gce.json_ and configure _gce.ini_ with values taken from it. A service account JSON looks like the following (this is a sample): 34 | ``` 35 | { 36 | "type": "service_account", 37 | "project_id": "devopscube-sandbox", 38 | "private_key_id": "sdfkjhsadfkjansdf9asdf87eraksd", 39 | "private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBaksdhfjkasdljf sALDIFUHW8klhklSDGKAPISD GIAJDGHIJLSDGJAFSHGJN;MLASDKJHFGHAILFN DGALIJDFHG;ALSDN J Lkhawu8a2 87356801w tljasdbjkh=\n-----END PRIVATE KEY-----\n", 40 | "client_email": "ansible-provisioning@devopscube-sandbox.iam.gserviceaccount.com", 41 | "client_id": "32453948568273645823", 42 | "auth_uri": "https://accounts.google.com/o/oauth2/auth", 43 | "token_uri": "https://accounts.google.com/o/oauth2/token", 44 | "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", 45 | "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/ansible-provisioning%40devopscube-sandbox.iam.gserviceaccount.com" 46 | } 47 | ``` 48 | 49 | In **gce.ini**, configure the following values. 50 | ``` 51 | gce_service_account_email_address = ansible-provisioning@devopscube-sandbox.iam.gserviceaccount.com 52 | gce_service_account_pem_file_path = /opt/ansible/service-account.json 53 | gce_project_id = devopscube-sandbox 54 | ``` 55 | 56 | Now use the module: 57 | ``` 58 | GCE_INI_PATH=~/.ansible/inventory/gce.ini ansible -i ~/.ansible/inventory/gce.py tag_ -m ping 59 | ``` 60 | 61 | To filter your nodes, apply a tag to them at creation time and filter the group by the tag name, as shown below: 62 | ``` 63 | GCE_INI_PATH=~/.ansible/inventory/gce.ini ansible -i ~/.ansible/inventory/gce.py tag_kubevirt -m ping 64 | 65 | Output: 66 | λ GCE_INI_PATH=~/.ansible/inventory/gce.ini ansible -i ~/.ansible/inventory/gce.py tag_kubevirt -m ping 67 | [WARNING]: Found both group and host with same name: kubevirt-button-master-build-1 68 | 69 | kubevirtlab-1 | SUCCESS => { 70 | "changed": false, 71 | "ping": "pong" 72 | } 73 | kubevirtlab-2 | SUCCESS => { 74 | "changed": false, 75 | "ping": "pong" 76 | } 77 | ```
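Once the dynamic inventory answers pings, the same pattern can drive the lab playbooks against the tagged instances. A minimal sketch, assuming the `tag_kubevirtlab` group from the varfiles and the `playbooks/kubernetes.yml` playbook in this repository; the `-u centos --become` flags are an assumption based on the nested CentOS boot image, so adjust them to whatever lab user Terraform actually configures:

```shell
# Run the Kubernetes setup playbook against every instance tagged "kubevirtlab".
# GCE_INI_PATH points the gce.py inventory script at its configuration file,
# and -l limits the run to the dynamically generated tag group.
GCE_INI_PATH=~/.ansible/inventory/gce.ini \
  ansible-playbook -i ~/.ansible/inventory/gce.py \
  -l tag_kubevirtlab -u centos --become \
  administrator/ansible/playbooks/kubernetes.yml
```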
-------------------------------------------------------------------------------- /administrator/terraform/gcp/gcp.tf: -------------------------------------------------------------------------------- 1 | provider "google" { 2 | project = "${var.gcp_project}" 3 | region = "${var.gcp_region}" 4 | zone = "${var.gcp_zone}" 5 | } 6 | 7 | resource "google_compute_network" "default" { 8 | name = "${var.gcp_network_name}" 9 | auto_create_subnetworks = "false" 10 | } 11 | 12 | resource "google_compute_firewall" "default" { 13 | name = "${var.gcp_firewall_rule_name}" 14 | description = "Firewall rules for Kubevirt lab" 15 | #network = "${google_compute_network.default.name}" 16 | network = "default" 17 | 18 | allow { 19 | protocol = "icmp" 20 | } 21 | 22 | allow { 23 | protocol = "tcp" 24 | ports = ["80", "443", "8443", "30300", "30000", "30090"] 25 | } 26 | 27 | target_tags = ["${var.gcp_instance_tag}"] 28 | } 29 | 30 | #module "google-dns-managed-zone" { 31 | # source = "github.com/Eimert/terraform-google-dns-managed-zone" 32 | # dns_name = "${var.gcp_network_name}" 33 | # dns_zone = "${var.dns_domain_name}" 34 | # description = "Kubevirt Laboratory for ${var.lab_description}" 35 | #} 36 | 37 | module "gcp_kubevirt_lab" { 38 | source = "github.com/codificat/terraform-google-compute-engine-instance" 39 | amount = "${var.gcp_instances}" 40 | region = "${var.gcp_region}" 41 | zone = "${var.gcp_zone}" 42 | # hostname format: name_prefix-amount 43 | name_prefix = "${var.hostname_prefix}" 44 | machine_type = "${var.gcp_instance_size}" 45 | disk_type = "pd-ssd" 46 | disk_size = "${var.gcp_boot_image_size_gb}" 47 | disk_image = "${var.gcp_boot_image}" 48 | 49 | dns_name = "${var.gcp_network_name}" 50 | dns_zone = "${var.dns_domain_name}" 51 | hostname_prefix = "${var.hostname_prefix}" 52 | 53 | user_data = "Kubevirt Laboratory for ${var.lab_description}" 54 | username = "${var.lab_username}" 55 | public_key_path = "~/.ssh/kubevirt-tutorial.pub" 56 | instance_tag = 
"${var.gcp_instance_tag}" 57 | } 58 | -------------------------------------------------------------------------------- /administrator/terraform/gcp/varfiles/containerdays-2019.tfvars: -------------------------------------------------------------------------------- 1 | gcp_project="cnvlab-209908" 2 | gcp_region="europe-west4" 3 | gcp_zone="europe-west4-a" 4 | gcp_firewall_rule_name="kubevirtlab-firewall" 5 | gcp_network_name="kubevirt-tutorial" 6 | gcp_instances="19" 7 | gcp_instance_size="n1-standard-2" 8 | gcp_boot_image="nested-centos7" 9 | gcp_instance_tag="kubevirtlab" 10 | gcp_sa="kubevirt-tutorial" 11 | host_bridge_iface="br0" 12 | dns_domain_name="try.kubevirt.me." 13 | ssh_pub_key="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDCu9J6jXMMRlKyORwRPt9wjTgqLaDWkJyrbv84PM+Kpi+WxeTutNcTZ44QkWIlvKQW0jEasgLADscPyKykvhoRCwei54G4erdERD+k3EkdR01AwWA2tFGgt0Aw8SVvwzomMvqEXDSs9Nqarzi4gb5I2tQxSIItW61Yjd3Xms5k2WKpZGvNQhkdCUb53cBP27jJuF9rGVKQNliErkhb4unw6m2BBfQ0QflCkIlbzPI0U5Fof3IhdgVPtY2Jvs3NTLMp9cvb235AnqPkiUuj7jp9jGrJAeIQIszF/bX43KrFF6QRUZ9batJaKFgpONXqGWoo0P5GWxjWHkf8zwFPbEC9 kubevirt-tutorial" 14 | hostname_prefix="kubevirtlab" 15 | lab_description="containerdays.io 2019" 16 | -------------------------------------------------------------------------------- /administrator/terraform/gcp/varfiles/kubevirt-prow.tfvars: -------------------------------------------------------------------------------- 1 | gcp_project="cnvlab-209908" 2 | gcp_region="europe-west4" 3 | gcp_zone="europe-west4-a" 4 | gcp_firewall_rule_name="kubevirt-lab-firewall" 5 | gcp_network_name="sjr" 6 | gcp_instances="1" 7 | gcp_instance_size="n1-standard-2" 8 | gcp_boot_image="nested-centos7" 9 | gcp_instance_tag="kubevirtcomm" 10 | gcp_sa="jparrill-SA" 11 | host_bridge_iface="br0" 12 | dns_domain_name="gce.sexylinux.net." 
13 | ssh_pub_key="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD0IFmpMQHlk87299njLxEgMJ3BAVN0zXlPQvE/vT1rHrluD+/vosXNdpzCEMSd3VQHduzXTOhNxYYAEgL3vy9EgCWnofJ96aPTLUz6aNdviltkfwtn8npPQ7ojnsa02ATHUjqI5ZbiQo2BcJScx3bEr/nvlczcuV6QF0EmKTPAEYRM1QQtE3TpozEAjOzElQkMepZc+RxI9k3HoSlWRiZK9o2mu96Y+aaCs9hXlmiYL7fbPVMnN83U3NMAAGqzUXT0QXjdVIuxEEvRYX2vE4LqjAopmTvfLy6c3VvO88w/0nbabQCoiWSTkZ/Wh4Pv0WVAyuahnr99sURQ5j2Zmd2f jparrill@deimos.localdomain" 14 | hostname_prefix="kubevirt-prow" 15 | lab_description="Kubevirt Community Infra purposes" 16 | -------------------------------------------------------------------------------- /administrator/terraform/gcp/varfiles/kubevirtest.tfvars: -------------------------------------------------------------------------------- 1 | gcp_project="cnvlab-209908" 2 | gcp_region="europe-west4" 3 | gcp_zone="europe-west4-a" 4 | gcp_firewall_rule_name="kubevirt-lab-firewall" 5 | gcp_network_name="kubevirt-tutorial" 6 | gcp_instances="1" 7 | gcp_instance_size="n1-standard-2" 8 | gcp_boot_image="nested-centos7" 9 | gcp_instance_tag="kubevirtlab" 10 | gcp_sa="jparrill-SA" 11 | host_bridge_iface="br0" 12 | dns_domain_name="try.kubevirt.me."
13 | ssh_pub_key="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD0IFmpMQHlk87299njLxEgMJ3BAVN0zXlPQvE/vT1rHrluD+/vosXNdpzCEMSd3VQHduzXTOhNxYYAEgL3vy9EgCWnofJ96aPTLUz6aNdviltkfwtn8npPQ7ojnsa02ATHUjqI5ZbiQo2BcJScx3bEr/nvlczcuV6QF0EmKTPAEYRM1QQtE3TpozEAjOzElQkMepZc+RxI9k3HoSlWRiZK9o2mu96Y+aaCs9hXlmiYL7fbPVMnN83U3NMAAGqzUXT0QXjdVIuxEEvRYX2vE4LqjAopmTvfLy6c3VvO88w/0nbabQCoiWSTkZ/Wh4Pv0WVAyuahnr99sURQ5j2Zmd2f jparrill@deimos.localdomain" 14 | hostname_prefix="kubevirtest" 15 | lab_description="Test purposes" 16 | -------------------------------------------------------------------------------- /administrator/terraform/gcp/varfiles/opensouthcode19.tfvars: -------------------------------------------------------------------------------- 1 | gcp_project="cnvlab-209908" 2 | gcp_region="europe-west4" 3 | gcp_zone="europe-west4-a" 4 | gcp_firewall_rule_name="kubevirt-lab-firewall" 5 | gcp_network_name="sjr" 6 | gcp_instances="20" 7 | gcp_instance_size="n1-standard-2" 8 | gcp_boot_image="nested-centos7" 9 | gcp_instance_tag="kubevirtlab" 10 | gcp_sa="jparrill-SA" 11 | host_bridge_iface="br0" 12 | dns_domain_name="gce.sexylinux.net." 
13 | ssh_pub_key="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD0IFmpMQHlk87299njLxEgMJ3BAVN0zXlPQvE/vT1rHrluD+/vosXNdpzCEMSd3VQHduzXTOhNxYYAEgL3vy9EgCWnofJ96aPTLUz6aNdviltkfwtn8npPQ7ojnsa02ATHUjqI5ZbiQo2BcJScx3bEr/nvlczcuV6QF0EmKTPAEYRM1QQtE3TpozEAjOzElQkMepZc+RxI9k3HoSlWRiZK9o2mu96Y+aaCs9hXlmiYL7fbPVMnN83U3NMAAGqzUXT0QXjdVIuxEEvRYX2vE4LqjAopmTvfLy6c3VvO88w/0nbabQCoiWSTkZ/Wh4Pv0WVAyuahnr99sURQ5j2Zmd2f jparrill@deimos.localdomain" 14 | hostname_prefix="kubevirtlab" 15 | lab_description="OpenSouthCode 2019" 16 | -------------------------------------------------------------------------------- /administrator/terraform/gcp/variables.tf: -------------------------------------------------------------------------------- 1 | variable "gcp_project" { 2 | type = "string" 3 | default = "cnvlab-209908" 4 | description = "GCE Project to work with" 5 | } 6 | 7 | variable "gcp_region" { 8 | type = "string" 9 | default = "europe-west4" 10 | description = "GCE Region to work with" 11 | } 12 | 13 | variable "gcp_zone" { 14 | type = "string" 15 | default = "europe-west4-a" 16 | description = "GCE Zone to work with" 17 | } 18 | 19 | variable "gcp_firewall_rule_name" { 20 | type = "string" 21 | default = "gcp_firewall_rule_name" 22 | description = "GCE Name for firewall bucket rule" 23 | } 24 | 25 | variable "gcp_network_name" { 26 | type = "string" 27 | default = "malpar-io" 28 | description = "GCE Network Name" 29 | } 30 | 31 | variable "gcp_instances" { 32 | type = "string" 33 | default = "1" 34 | description = "Number of instances to create in GCE" 35 | } 36 | 37 | variable "gcp_instance_size" { 38 | type = "string" 39 | default = "n1-standard-4" 40 | description = "GCE Size of the instances to create" 41 | } 42 | 43 | variable "gcp_boot_image" { 44 | type = "string" 45 | default = "nested-centos7" 46 | description = "GCE Image to boot on the created instances" 47 | } 48 | 49 | variable "gcp_boot_image_size_gb" { 50 | type = "string" 51 | default = "20" 52 | description = "GCE Image size 
to boot on the created instances in GB" 53 | } 54 | 55 | variable "gcp_instance_tag" { 56 | type = "string" 57 | default = "kubevirtlab" 58 | description = "GCE Tag for instances created" 59 | } 60 | 61 | variable "gcp_sa" { 62 | type = "string" 63 | description = "GCE SA to use for provision" 64 | } 65 | 66 | variable "ssh_pub_key" { 67 | type = "string" 68 | description = "SSH Public key, it's added to the cloud user" 69 | default = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD0IFmpMQHlk87299njLxEgMJ3BAVN0zXlPQvE/vT1rHrluD+/vosXNdpzCEMSd3VQHduzXTOhNxYYAEgL3vy9EgCWnofJ96aPTLUz6aNdviltkfwtn8npPQ7ojnsa02ATHUjqI5ZbiQo2BcJScx3bEr/nvlczcuV6QF0EmKTPAEYRM1QQtE3TpozEAjOzElQkMepZc+RxI9k3HoSlWRiZK9o2mu96Y+aaCs9hXlmiYL7fbPVMnN83U3NMAAGqzUXT0QXjdVIuxEEvRYX2vE4LqjAopmTvfLy6c3VvO88w/0nbabQCoiWSTkZ/Wh4Pv0WVAyuahnr99sURQ5j2Zmd2f jparrill@deimos.localdomain" 70 | } 71 | 72 | variable "dns_domain_name" { 73 | type = "string" 74 | default = "kubevirtlab.local" 75 | description = "DNS domain name" 76 | } 77 | 78 | variable "host_bridge_iface" { 79 | type = "string" 80 | description = "Bridge interface the VMs will use for network connection" 81 | } 82 | 83 | variable "hostname_prefix" { 84 | type = "string" 85 | default = "kubevirt-lab" 86 | description = "Prefix added to hostnames" 87 | } 88 | 89 | variable "lab_description" { 90 | type = "string" 91 | default = "OpenSouthCode 2019" 92 | description = "User data for new user" 93 | } 94 | 95 | variable "lab_username" { 96 | type = "string" 97 | default = "kubevirt" 98 | description = "New user for Laboratory" 99 | } 100 | -------------------------------------------------------------------------------- /administrator/terraform/libvirt/cloud-init.cfg: -------------------------------------------------------------------------------- 1 | #cloud-config 2 | fqdn: ${host_name}.${cluster_dns_domain} 3 | hostname: ${host_name} 4 | manage_etc_hosts: true 5 | timezone: etc/UTC 6 | users: 7 | - name: kubevirt 8 | sudo: ['ALL=(ALL) NOPASSWD:ALL'] 9 |
lock_passwd: false 10 | ssh_authorized_keys: 11 | - ${cloud_ssh_key} 12 | shell: /bin/bash 13 | growpart: 14 | mode: auto 15 | devices: ['/'] 16 | -------------------------------------------------------------------------------- /administrator/terraform/libvirt/libvirt.tf: -------------------------------------------------------------------------------- 1 | provider "libvirt" { 2 | uri = "${var.libvirt_url}" 3 | } 4 | 5 | resource "libvirt_volume" "mastervol" { 6 | name = "disk-master.img" 7 | base_volume_name = "${var.base_image}" 8 | size = "${var.osdisk}" 9 | } 10 | 11 | resource "libvirt_volume" "datavols" { 12 | name = "disk-data-${count.index}.img" 13 | size = "${var.pvs_disk_size}" 14 | count = 5 15 | } 16 | 17 | data "template_file" "user_data_kubemaster" { 18 | template = "${file("${path.module}/cloud-init.cfg")}" 19 | vars { 20 | host_name = "${var.hostname_prefix}-kubemaster" 21 | cluster_dns_domain = "${var.dns_domain_name}" 22 | cloud_ssh_key = "${var.ssh_pub_key}" 23 | } 24 | } 25 | 26 | data "template_file" "network_config" { 27 | template = "${file("${path.module}/network_config.cfg")}" 28 | } 29 | 30 | resource "libvirt_cloudinit_disk" "commoninit" { 31 | name = "commoninit.iso" 32 | user_data = "${data.template_file.user_data_kubemaster.rendered}" 33 | network_config = "${data.template_file.network_config.rendered}" 34 | } 35 | 36 | resource "libvirt_domain" "kubemaster" { 37 | name = "kubemaster" 38 | memory = "${var.memory}" 39 | vcpu = "${var.vcpus}" 40 | cloudinit = "${libvirt_cloudinit_disk.commoninit.id}" 41 | qemu_agent = true 42 | 43 | disk { 44 | volume_id = "${libvirt_volume.mastervol.id}" 45 | } 46 | 47 | disk { 48 | volume_id = "${element(libvirt_volume.datavols.*.id, 0)}" 49 | } 50 | 51 | disk { 52 | volume_id = "${element(libvirt_volume.datavols.*.id, 1)}" 53 | } 54 | 55 | disk { 56 | volume_id = "${element(libvirt_volume.datavols.*.id, 2)}" 57 | } 58 | 59 | disk { 60 | volume_id = "${element(libvirt_volume.datavols.*.id, 3)}" 61 | } 
62 | 63 | disk { 64 | volume_id = "${element(libvirt_volume.datavols.*.id, 4)}" 65 | } 66 | 67 | cpu { 68 | mode = "host-passthrough" 69 | } 70 | 71 | console { 72 | type = "pty" 73 | target_port = "0" 74 | target_type = "serial" 75 | } 76 | 77 | graphics { 78 | type = "vnc" 79 | listen_type = "address" 80 | } 81 | 82 | network_interface { 83 | bridge = "${var.host_bridge_iface}" 84 | hostname = "${var.hostname_prefix}-kubemaster" 85 | mac = "${var.master_mac_address}" 86 | wait_for_lease = true 87 | } 88 | 89 | network_interface { 90 | bridge = "${var.host_bridge_iface}" 91 | } 92 | } 93 | -------------------------------------------------------------------------------- /administrator/terraform/libvirt/network_config.cfg: -------------------------------------------------------------------------------- 1 | version: 1 2 | config: 3 | - type: physical 4 | name: eth0 5 | subnets: 6 | - type: dhcp 7 | - type: physical 8 | name: eth1 9 | subnets: 10 | - control: manual 11 | type: static -------------------------------------------------------------------------------- /administrator/terraform/libvirt/varfiles/jparrill.tf: -------------------------------------------------------------------------------- 1 | libvirt_url="qemu+ssh://jparrill@192.168.1.101/system" 2 | host_bridge_iface="virbr2" 3 | dns_domain_name="covenant" 4 | ssh_pub_key="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD0IFmpMQHlk87299njLxEgMJ3BAVN0zXlPQvE/vT1rHrluD+/vosXNdpzCEMSd3VQHduzXTOhNxYYAEgL3vy9EgCWnofJ96aPTLUz6aNdviltkfwtn8npPQ7ojnsa02ATHUjqI5ZbiQo2BcJScx3bEr/nvlczcuV6QF0EmKTPAEYRM1QQtE3TpozEAjOzElQkMepZc+RxI9k3HoSlWRiZK9o2mu96Y+aaCs9hXlmiYL7fbPVMnN83U3NMAAGqzUXT0QXjdVIuxEEvRYX2vE4LqjAopmTvfLy6c3VvO88w/0nbabQCoiWSTkZ/Wh4Pv0WVAyuahnr99sURQ5j2Zmd2f jparrill@deimos.localdomain" 5 | hostname_prefix="k8s" 6 | 7 | -------------------------------------------------------------------------------- /administrator/terraform/libvirt/variables.tf: 
-------------------------------------------------------------------------------- 1 | variable "libvirt_url" { 2 | type = "string" 3 | default = "qemu:///system" 4 | description = "Libvirt URL terraform will connect to" 5 | } 6 | 7 | variable "base_image" { 8 | type = "string" 9 | default = "CentOS-7-x86_64-GenericCloud.qcow2" 10 | description = "Base image used to spawn VMs" 11 | } 12 | 13 | variable "vcpus" { 14 | type = "string" 15 | default = "4" 16 | description = "vCPUs to assign to the VMs" 17 | } 18 | 19 | variable "memory" { 20 | type = "string" 21 | default = "16384" 22 | description = "Memory, in megabytes, to assign to the VMs" 23 | } 24 | 25 | variable "osdisk" { 26 | type = "string" 27 | default = "32212254720" 28 | description = "OS disk size" 29 | } 30 | 31 | variable "pvs_disk_size" { 32 | type = "string" 33 | default = "10737418240" 34 | description = "PVs data disks size" 35 | } 36 | 37 | variable "ssh_pub_key" { 38 | type = "string" 39 | description = "SSH Public key, it's added to the cloud user" 40 | default = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5Qbj7vDf0uYQpeYb432g5R4YvYJaPfPA4EM4qc3lO62c7oUsWbZlZBl5neEWX41HGCIP4Zm1ybN9iiDyeIns6hg5OkU2vUGuPtV2KCAZOI7snzXeZxlrjsVMjMy/CYUlvIOAPxY4XzfzMMAJjIJni18R2PqVRI4f4SeSq3IIzpnOu2VQmqjFmmdybQY83BvBvWj6KLszAXkJk9LkZSAoktXimDBWFPQYikzZihLolRxwHzo21lXSw58D1N+6IeMudOviAte5yu6FBUN6dFYbt9dkLuH2/ONliFz/042n5UNp0wC5BLdpVwJpWqqrCVaeXBgla/gYm8YNZJIAlf8K5 kboumedh@vegeta.local" 41 | } 42 | 43 | variable "dns_domain_name" { 44 | type = "string" 45 | default = "kubevirtlab.local" 46 | description = "DNS domain name" 47 | } 48 | 49 | variable "host_bridge_iface" { 50 | type = "string" 51 | description = "Bridge interface the VMs will use for network connection" 52 | } 53 | 54 | variable "master_mac_address" { 55 | type = "string" 56 | default = "e6:fa:a6:92:a1:af" 57 | description = "kubemaster mac address, useful for DHCP reservations" 58 | } 59 | 60 | variable "hostname_prefix" { 61 | type = "string" 62 |
default = "kubevirt-lab" 63 | description = "Prefix added to hostnames" 64 | } 65 | -------------------------------------------------------------------------------- /hack/README.md: -------------------------------------------------------------------------------- 1 | # Hacks 2 | 3 | ## Testing 4 | 5 | To run the tests, you just need to execute them with the `make` command, after a previous run of `make build` to generate the script files. The build command will take care of downloading the proper dependencies. 6 | 7 | ### Testing Deep Dive 8 | 9 | The base concept for Lab testing is to use `mdsh` to generate `labX.sh` from the _labX.md_ file and launch the tests in the correct order. 10 | 11 | To get mdsh, just do this: 12 | 13 | ``` 14 | cd $HOME && git clone https://github.com/bashup/mdsh.git 15 | mkdir $HOME/bin 16 | ln -s $HOME/mdsh/bin/mdsh $HOME/bin/mdsh 17 | ``` 18 | 19 | An example of this: 20 | 21 | - Go to `labs/lab001` and execute 22 | ``` 23 | mdsh --out lab001.sh -c lab001.md && cat lab001.sh 24 | ``` 25 | 26 | - Output: 27 | ``` 28 | wget https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/master/labs/lab001/RSA/kubevirt-tutorial 29 | chmod 600 kubevirt-tutorial 30 | ssh-add kubevirt-tutorial 31 | ``` 32 | -------------------------------------------------------------------------------- /hack/build: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # This file is part of the KubeVirt project 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); 6 | # you may not use this file except in compliance with the License. 7 | # You may obtain a copy of the License at 8 | # 9 | # http://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | # 17 | # Copyright 2019 Red Hat, Inc. 18 | # 19 | 20 | set +e 21 | 22 | function install_mdsh() { 23 | # Download and install mdsh 24 | mkdir -p "${HOME}/.local/bin/" 25 | wget -O "${HOME}/.local/bin/mdsh" https://raw.githubusercontent.com/bashup/mdsh/master/bin/mdsh 26 | chmod 755 "${HOME}/.local/bin/mdsh" 27 | export "PATH=${PATH}:${HOME}/.local/bin" 28 | } 29 | 30 | function validations() { 31 | # Common duties to perform before the script execution 32 | BASE_PATH="$(pwd)" 33 | BUILD_PATH="${BASE_PATH}/build" 34 | mkdir -p "${BUILD_PATH}" 35 | 36 | # Look for mdsh executable 37 | type mdsh >/dev/null 2>&1 || install_mdsh 38 | 39 | if [[ ! -e ${BASE_PATH}/${1} ]]; then 40 | echo "Error: File or folder doesn't exist" 41 | exit 1 42 | fi 43 | 44 | } 45 | 46 | function create_binaries() { 47 | # This function will generate the resulting scripts 48 | # from the MD files 49 | echo "Generating script files..." 50 | md_path="$1" 51 | 52 | if [[ -d ${BASE_PATH}/${md_path} ]]; then 53 | while IFS= read -r -d $'\0'; do 54 | mdsh --out "${BUILD_PATH}/$(basename ${REPLY%.*}).sh" -c "${REPLY}" 55 | done < <(find "${BASE_PATH}/${md_path}" -name "*.md" -print0) 56 | elif [[ -f "${BASE_PATH}/${md_path}" ]]; then 57 | mdsh --out "${BUILD_PATH}/$(basename ${md_path%.*}).sh" -c "${md_path}" 58 | else 59 | echo "Error: File or folder doesn't exist" 60 | exit 1 61 | fi 62 | } 63 | 64 | function concat_binaries() { 65 | # Concatenate sh files into the all-in-one test 66 | find "${BUILD_PATH}" -name "*.sh" -print0 | sort -z | xargs -0 cat > "/tmp/all-labs.sh" 67 | rm -f ${BUILD_PATH}/* 68 | mv "/tmp/all-labs.sh" "${BUILD_PATH}/all-labs.sh" 69 | echo "Done!"
70 | } 71 | 72 | # $1 is the relative path where the MD files are located 73 | validations "$1" 74 | create_binaries "$1" 75 | -------------------------------------------------------------------------------- /hack/test_lab: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # This file is part of the KubeVirt project 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); 6 | # you may not use this file except in compliance with the License. 7 | # You may obtain a copy of the License at 8 | # 9 | # http://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | # 17 | # Copyright 2019 Red Hat, Inc. 18 | # 19 | 20 | set +e 21 | #set -euo pipefail 22 | 23 | TEST_FILE=$1 24 | EXPECTED_RC=${2:-PASS} 25 | BASE_FOLDER="$(dirname "${BASH_SOURCE[0]}" >/dev/null 2>&1 && pwd)/.." 26 | LOG_FOLDER="${BASE_FOLDER}/logs" 27 | TEST_NAME=$(basename ${TEST_FILE%.*}) 28 | LOG_FILE="${LOG_FOLDER}/${TEST_NAME}.log" 29 | RC=PASS 30 | REAL_TIME=0 31 | 32 | function validations() { 33 | # Pre-validations that the script needs 34 | 35 | if [[ ! -d ${LOG_FOLDER} ]]; then 36 | mkdir -p ${LOG_FOLDER} || exit 1 37 | fi 38 | 39 | touch "${LOG_FILE}" || exit 1 40 | 41 | if [[ ! -f ${TEST_FILE} ]]; then 42 | echo "Test file: ${TEST_FILE} not found" 43 | exit 1 44 | else 45 | if [[ !
${TEST_FILE#.*} =~ '.sh' ]]; then 46 | echo "You need a file with a '.sh' extension in order to be tested" 47 | exit 0 48 | fi 49 | fi 50 | 51 | } 52 | 53 | function load_test() { 54 | # This function will load the raw test file to be executed 55 | declare -A TEST_MAP 56 | export TIMEFORMAT="%3R" 57 | 58 | while IFS= read -r _command; do 59 | if [[ -z "$_command" || ${_command} =~ ^#.* ]];then 60 | continue 61 | else 62 | # This creates 2 new fds, in order to redirect the time command output 63 | # Also the command-execution time values are stored in an array, which may be useful 64 | exec 3>&1 4>&2 65 | echo "Command: ${_command}" 1>>${LOG_FILE} 66 | echo "Output:" 1>>${LOG_FILE} 67 | TEST_MAP[${_command}]=$( { time eval $_command &>>${LOG_FILE}; } 2>&1 ) || RC=ERR 68 | echo "Time: ${TEST_MAP[${_command}]}" 1>>${LOG_FILE} 69 | echo 1>>${LOG_FILE} 70 | REAL_TIME="$(echo "${REAL_TIME} + ${TEST_MAP[${_command}]}" | bc -l)" 71 | exec 3>&- 4>&- 72 | fi 73 | done < "${TEST_FILE}" 74 | # If you need to go through the map, just uncomment this 75 | #for _command in "${!TEST_MAP[@]}"; do echo "$_command - ${TEST_MAP[$_command]}"; echo ; echo; done 76 | 77 | } 78 | 79 | function test_report() { 80 | # How the test will be reported to the user 81 | echo "-------" 82 | echo "Test Name: ${TEST_NAME}" 83 | echo "Test Result: ${RC}" 84 | echo "Test Expected Result: ${EXPECTED_RC}" 85 | echo "Real Time: ${REAL_TIME}" 86 | } 87 | 88 | function upload_artifacts() { 89 | # This function will move the logs generated to the $ARTIFACTS env var 90 | # which points to /logs/artifacts/ 91 | cp -r ${LOG_FOLDER} ${ARTIFACTS} 92 | } 93 | 94 | function test_end() { 95 | # This function will end the test based on the RC value 96 | if [[ ${RC} == "PASS" ]]; then 97 | exit 0 98 | elif [[ ${RC} == "ERR" ]]; then 99 | exit 1 100 | fi 101 | } 102 | 103 | echo -e "\n\nStarting $TEST_FILE test_lab" 104 | echo "#################### SOURCE OF TEST ####################" 105 | cat ${TEST_FILE} 106 | echo -e
"#################### SOURCE OF TEST ####################\n" 107 | validations 108 | load_test 109 | test_report 110 | upload_artifacts 111 | echo "Finished $TEST_FILE test_lab" 112 | test_end 113 | -------------------------------------------------------------------------------- /hack/tests: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # This file is part of the KubeVirt project 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); 6 | # you may not use this file except in compliance with the License. 7 | # You may obtain a copy of the License at 8 | # 9 | # http://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | # 17 | # Copyright 2019 Red Hat, Inc.
18 | # 19 | 20 | set +e 21 | 22 | function find_student_materials(){ 23 | # When we work with prow, by default the workspace is located in /workspace 24 | # instead of $HOME, so we need to redirect the materials properly 25 | ln -s /workspace/student-materials ~/student-materials 26 | ln -s /workspace/kubevirt ~/kubevirt 27 | 28 | # Workaround, error: the path "$HOME/student-materials/kubevirt-servicemonitor.yml" does not exist 29 | # seems like the executor does not understand env vars 30 | mkdir -p /home/kubevirt 31 | ln -s /workspace/kubevirt /home/kubevirt/kubevirt 32 | ln -s /workspace/student-materials /home/kubevirt/student-materials 33 | } 34 | 35 | function validations() { 36 | # Common duties to perform before the script execution 37 | KUTU_PATH="$(pwd)" 38 | mkdir -p "${HOME}/.local/bin" 39 | KUCI_PATH="${KUTU_PATH}/../kubevirtci" 40 | KUCI_REPO="https://github.com/kubevirt/kubevirtci.git" 41 | K8S_VERS="${1}" 42 | BUILD_PATH="${KUTU_PATH}/build" 43 | HACK_PATH="${KUTU_PATH}/hack" 44 | export PATH=${PATH}:${HOME}/.local/bin 45 | 46 | # Installing apt source 47 | add-apt-repository 'deb http://ftp.debian.org/debian stretch main contrib non-free' 48 | apt-get update 49 | 50 | # Install bc on debian based OS 51 | wget http://ftp.us.debian.org/debian/pool/main/b/bc/bc_1.06.95-9+b3_amd64.deb -O /workspace/bc.deb 52 | apt-get install /workspace/bc.deb 53 | 54 | # Install Python3 55 | apt-get -y install python3 wget 56 | wget -O get-pip.py https://bootstrap.pypa.io/pip/3.5/get-pip.py 57 | python3 get-pip.py 58 | 59 | # Hack default python to be python3 to avoid later ansible issues 60 | update-alternatives --install /usr/bin/python python /usr/bin/python3 9 61 | update-alternatives --config python 62 | 63 | ls -l /usr/bin/python 64 | 65 | # Install ansible 66 | pip3 install ansible 67 | 68 | # Install kubernetes 69 | pip3 install kubernetes 70 | 71 | # Install openshift libraries 72 | pip3 install openshift 73 | 74 | # Check student-materials 75 | [[ -d
${HOME}/student-materials ]] || find_student_materials 76 | 77 | # Set K8s version to spin up, if empty 78 | [[ -z ${K8S_VERS} ]] && K8S_VERS="k8s-1.13.3" 79 | 80 | # Download KubevirtCI repo 81 | if [[ ! -d ${KUCI_PATH} ]]; then 82 | mkdir -p "${KUCI_PATH}" 83 | git clone ${KUCI_REPO} ${KUCI_PATH} 84 | fi 85 | } 86 | 87 | function k8s_cluster() { 88 | # Go to KubevirtCI path 89 | cd ${KUCI_PATH} 90 | 91 | # Once on KubevirtCI repo, spin up a K8s cluster 92 | export TARGET=${K8S_VERS} 93 | export KUBEVIRT_NUM_SECONDARY_NICS=1 94 | make cluster-up 95 | 96 | # Get into the cluster context 97 | export KUBECONFIG=$(cluster-up/kubeconfig.sh) 98 | } 99 | 100 | function lab_provision() { 101 | # Provision laboratory into K8s 102 | cd ${KUTU_PATH}/administrator/ansible 103 | ANSIBLE_ROLES_PATH=roles ansible-playbook -i "localhost," --connection=local playbooks/kubernetes.yml -e "TESTING=true" -e "TARGET=${TARGET}" 104 | } 105 | 106 | function lab_testing() { 107 | # Build Lab bin files and execute it 108 | cd ${KUTU_PATH} 109 | 110 | make build 111 | 112 | # Decorating the shell 113 | export HOME=/workspace 114 | export KUBECONFIG=$(${KUCI_PATH}/cluster-up/kubeconfig.sh) 115 | alias virtctl="virtctl --kubeconfig=${KUBECONFIG}" 116 | 117 | for test_file in $(ls -1 "${BUILD_PATH}"); do 118 | ${HACK_PATH}/test_lab ${BUILD_PATH}/${test_file} 119 | test_result="$?" 120 | 121 | ## Retry if fail 122 | if [[ ${test_result} -ne 0 ]]; then 123 | echo "Retrying Test..." 124 | ${HACK_PATH}/test_lab ${BUILD_PATH}/${test_file} 125 | test_result="$?" 
126 | fi 127 | 128 | test_results+=("${test_result}") 129 | if [[ ${test_result} -eq 0 ]]; then 130 | test_ok+=("${test_result}") 131 | else 132 | test_nok+=("${test_result}") 133 | fi 134 | done 135 | } 136 | 137 | function test_summarization() { 138 | echo "-------" 139 | echo "Tests Done: ${#test_results[@]}" 140 | echo "Tests in Passed state: ${#test_ok[@]}" 141 | echo "Tests in Error state: ${#test_nok[@]}" 142 | 143 | if [[ ${#test_nok[@]} -ge 1 ]]; then 144 | exit 1 145 | else 146 | exit 0 147 | fi 148 | } 149 | 150 | declare -a test_results test_ok test_nok 151 | 152 | validations "$1" 153 | k8s_cluster 154 | lab_provision 155 | lab_testing 156 | test_summarization 157 | -------------------------------------------------------------------------------- /labs/lab000/lab000.md: -------------------------------------------------------------------------------- 1 | # Lab 0: Overview 2 | 3 | This lab is targeted towards people who don’t have any KubeVirt experience. 4 | 5 | However, knowledge of Kubernetes and familiarity with the *kubectl* command line tool is assumed. 6 | 7 | The goal of the workshop is to show you how to deploy KubeVirt and get familiar with it. Each participant will work on an all-in-one Kubernetes cluster, on top of which we will: 8 | 9 | * Deploy and explore KubeVirt 10 | * Deploy a virtual machine (VM) using KubeVirt 11 | * Create a VM that has persistent storage 12 | * Explore the KubeVirt web UI 13 | * Create a multi-homed VM using Multus 14 | * Explore metrics through Grafana 15 | 16 | **DISCLAIMER:** This header is just for our own purposes.
If you find it during the laboratory, don't worry about it; **you don't need to execute it!**: 17 | 18 | ```bash @mdsh 19 | mdsh-lang-bash() { shell; } 20 | ``` 21 | 22 | ## Requirements 23 | 24 | The clusters for the lab are deployed externally, and the only requirement to follow the workshop is a laptop with a modern web browser to access the web UI and an SSH client to connect to the cluster and execute all the commands. 25 | 26 | ## Learn more 27 | 28 | More information about KubeVirt and related components can be found here: 29 | 30 | - [KubeVirt](http://kubevirt.io/) 31 | - [KubeVirt user guide](https://kubevirt.io/user-guide) 32 | - [KubeVirt source code](https://github.com/kubevirt/kubevirt) repository 33 | - [Kubernetes](https://kubernetes.io) 34 | 35 | [<< Back to README](../../README.md) | [Next: Introduction / Connectivity >>](../lab001/lab001.md) 36 | -------------------------------------------------------------------------------- /labs/lab001/RSA/kubevirt-tutorial: -------------------------------------------------------------------------------- 1 | -----BEGIN OPENSSH PRIVATE KEY----- 2 | b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABFwAAAAdzc2gtcn 3 | NhAAAAAwEAAQAAAQEAwrvSeo1zDEZSsjkcET7fcI04Ki2g1pCcq27/ODzPiqYvlsXk7rTX 4 | E2eOEJFiJbykFtIxGrICwA7HD8ispL4aEQsHoueBuHq3REQ/pNxJHUdNQMFgNrRRoLdAMP 5 | Elb8M6JjL6hFw0rPTamq84uIG+SNrUMUiCLVutWI3d15rOZNliqWRrzUIZHQlG+d3AT9u4 6 | ybhfaxlSkDZYhK5IW+Lp8OptgQX0NEH5QpCJW8zyNFORaH9yIXYFT7WNib7NzUyzKfXL29 7 | t+QJ6j5IlLo+46fYxqyQHiECLMxf21+NyqxRekEVGfW2rSWihYKTjV6hlqKND+RlsY1h5H 8 | /M8BT2xAvQAAA8DDCejEwwnoxAAAAAdzc2gtcnNhAAABAQDCu9J6jXMMRlKyORwRPt9wjT 9 | gqLaDWkJyrbv84PM+Kpi+WxeTutNcTZ44QkWIlvKQW0jEasgLADscPyKykvhoRCwei54G4 10 | erdERD+k3EkdR01AwWA2tFGgt0Aw8SVvwzomMvqEXDSs9Nqarzi4gb5I2tQxSIItW61Yjd 11 | 3Xms5k2WKpZGvNQhkdCUb53cBP27jJuF9rGVKQNliErkhb4unw6m2BBfQ0QflCkIlbzPI0 12 | U5Fof3IhdgVPtY2Jvs3NTLMp9cvb235AnqPkiUuj7jp9jGrJAeIQIszF/bX43KrFF6QRUZ 13 |
9batJaKFgpONXqGWoo0P5GWxjWHkf8zwFPbEC9AAAAAwEAAQAAAQEAriCZAvD80RsI00jx 14 | 6hHYZqJAeKa4TWSeU0U7fiQSSR51K1LldPXL5BQTGomFw8y8xZNKSV6nyujr4xdEGUPLtz 15 | WvrGFqw3Un7yk/58D6t+2MDL1dtUzkONvj0F+xZBCkLIglLrnseEOyPeM0yvdpGWhjmXYG 16 | wVxa0vZ4SlSo/c89XAW5uabGtNs1UKAzzrIHArcjQBAkU7cuSILLvwqBA29zE1299hRF0T 17 | xp0A+Iapakaj/nrvLfesME743I7ovRcMjhkGUVmefWV3UTFsJcydCcsaZmqBJbnLykE4I9 18 | 4sKjuWHfyGwcxHv+SHswLx6onqLforsLTnzeDmhJWSJdWQAAAIEAqSmacR7t8irDbUgm/u 19 | jRwFd7mhYBgpqdR7k1L3TEWJbdz0I1csM1UHjQ5fi0Z8CyEJOgKMkhkAZYfc1friOXu2TN 20 | Xwruxyv+/Ty0sr+LSwg4W/hbz/hf/eKSu17A3CByanW6X4iP6Au4TY4smsw+ONkyREKpxY 21 | 3eJIBTouH7R1QAAACBAORhF0nF2YBmGt0q4tlk0JZTQVQdPUbrRk8xT5EpIVV94I+ZfE78 22 | hTt22IIjLypjpovHviig/iwH0t3eRP6IRqXvnfcQSwg7zk/5X63PZPeFteWPiHpy4c5hMd 23 | 3jvLxELR6h683hCm3pTCurxe5cPiABKRr7Ys8Jp3oNLnymnda/AAAAgQDaSQWPsoBMVpKG 24 | zFxymY4cXrD2E7NdYO6GYxU8tSUF6DUGXsZ9+yeRZaCMLWIvHGuRyA1eTPhtifXkKJyAc9 25 | vg8ldDBepNd21LPvXz7KTI6a2V9pjh6VAcJZenre6JLWXSsgtplhKuXrOR7qEGEh7zCtXB 26 | YvwSOayzNEwuk8TjgwAAAAdwZXBAdWlvAQI= 27 | -----END OPENSSH PRIVATE KEY----- 28 | -------------------------------------------------------------------------------- /labs/lab001/RSA/kubevirt-tutorial.pub: -------------------------------------------------------------------------------- 1 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDCu9J6jXMMRlKyORwRPt9wjTgqLaDWkJyrbv84PM+Kpi+WxeTutNcTZ44QkWIlvKQW0jEasgLADscPyKykvhoRCwei54G4erdERD+k3EkdR01AwWA2tFGgt0Aw8SVvwzomMvqEXDSs9Nqarzi4gb5I2tQxSIItW61Yjd3Xms5k2WKpZGvNQhkdCUb53cBP27jJuF9rGVKQNliErkhb4unw6m2BBfQ0QflCkIlbzPI0U5Fof3IhdgVPtY2Jvs3NTLMp9cvb235AnqPkiUuj7jp9jGrJAeIQIszF/bX43KrFF6QRUZ9batJaKFgpONXqGWoo0P5GWxjWHkf8zwFPbEC9 kubevirt-tutorial 2 | -------------------------------------------------------------------------------- /labs/lab001/lab001.md: -------------------------------------------------------------------------------- 1 | # Lab 1: Student Connection Process 2 | 3 | In this section, we'll review how to connect to your instance. 
4 | 5 | **DISCLAIMER:** This header is just for our own purposes; **you don't need to execute it!**: 6 | 7 | ```bash @mdsh 8 | mdsh-lang-bash() { shell; } 9 | ``` 10 | 11 | ## Find your GCP Instance 12 | 13 | This lab is designed to accommodate several students. Each student will be provided with a cloud instance running on GCP with nested virtualization, 4 vCPUs and 16 GiB of RAM. No GCP knowledge is required for this lab. 14 | 15 | The naming convention for the lab VMs is: **kubevirtlab-\<number\>.try.kubevirt.me**. You will be assigned an instance number by the instructor. 16 | 17 | All the boxes have been provisioned with an SSH public key, so you can SSH into your instance using the SSH key located in the [RSA](./RSA) folder. 18 | 19 | ```shell 20 | ## LAB 001 21 | 22 | wget https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/master/labs/lab001/RSA/kubevirt-tutorial 23 | chmod 600 kubevirt-tutorial 24 | ``` 25 | 26 | - And then add it to your SSH agent 27 | ``` 28 | ssh-add kubevirt-tutorial 29 | ``` 30 | 31 | ## Connecting to your Instance 32 | 33 | This lab should be performed on **YOUR ASSIGNED INSTANCE** only, as the *kubevirt* user, unless otherwise instructed. 34 | 35 | **NOTE**: Please be respectful and only connect to your assigned instance. All instances on this lab use the **same** public key, so you could accidentally connect to the wrong system. If you have any issues, please inform your instructor(s). 36 | 37 | ``` 38 | $ ssh -i kubevirt-tutorial kubevirt@kubevirtlab-<number>.try.kubevirt.me 39 | 40 | The authenticity of host 'kubevirtlab-2.try.kubevirt.me (35.188.64.157)' can't be established. 41 | ECDSA key fingerprint is SHA256:36+hPGyR9ZxYRRfMngif8PXLR1yoVFCGZ1kylpNE8Sk. 42 | Are you sure you want to continue connecting (yes/no)? yes 43 | Warning: Permanently added 'kubevirtlab-2.try.kubevirt.me,35.188.64.157' (ECDSA) to the list of known hosts. 44 | ``` 45 | 46 | This means the host you are about to connect to is not in your *known_hosts* list.
Accept the fingerprint to connect to the instance. 47 | 48 | This concludes this section of the lab. 49 | 50 | [<< Previous: Lab Overview](../lab000/lab000.md) | [README](../../README.md) | [Next: Review the Kubernetes environment >>](../lab002/lab002.md) 51 | -------------------------------------------------------------------------------- /labs/lab002/lab002.md: -------------------------------------------------------------------------------- 1 | # Lab 2: Review the Kubernetes environment 2 | 3 | For the sake of time, some of the required setup has already been taken care of. The following steps were performed as part of the instance preparation: 4 | 5 | * Installed Kubernetes prerequisites 6 | * Deployed an all-in-one Kubernetes cluster using *kubeadm* 7 | * Deployed all the networking components: 8 | * Deployed [Flannel](https://coreos.com/flannel/docs/latest/) 9 | * Deployed [Multus CNI](https://01.org/kubernetes/building-blocks/multus-cni) 10 | * Deployed [OVS CNI](https://github.com/kubevirt/ovs-cni) 11 | * Deployed [local volumes provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) 12 | * Deployed [Prometheus Operator](https://github.com/coreos/prometheus-operator) 13 | * [Prometheus](https://prometheus.io) with its components 14 | * [Grafana](https://grafana.com) 15 | * Resource manifests to interact with KubeVirt components have been copied over to your *$HOME/student-materials* 16 | 17 | 18 | **DISCLAIMER:** This header is just for our own purposes; **you don't need to execute it!**: 19 | 20 | ```bash @mdsh 21 | mdsh-lang-bash() { shell; } 22 | ``` 23 | 24 | ## Verify the cluster 25 | 26 | Let's ask the cluster for its status: 27 | 28 | ```shell 29 | ## LAB 002 30 | 31 | kubectl version 32 | ``` 33 | 34 | - Output: 35 | 36 | ``` 37 | Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z",
GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} 38 | Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} 39 | ``` 40 | 41 | Now let's check that persistent volumes are ready and available: 42 | 43 | ```shell 44 | kubectl get pv 45 | ``` 46 | 47 | - Output: 48 | 49 | ``` 50 | NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE 51 | local-pv-234582fe 5109Mi RWO Delete Available local-volumes 2d 52 | local-pv-5d33c489 5109Mi RWO Delete Available local-volumes 2d 53 | local-pv-6035a584 5109Mi RWO Delete Available local-volumes 2d 54 | local-pv-a56cebb5 5109Mi RWO Delete Available local-volumes 2d 55 | ``` 56 | 57 | And finally, let's check that Prometheus and Grafana are running and exposed: 58 | 59 | ```shell 60 | kubectl get all -n prometheus 61 | ``` 62 | 63 | - Output: 64 | 65 | ``` 66 | NAME READY STATUS RESTARTS AGE 67 | pod/alertmanager-kubevirtlab-prometheus-ope-alertmanager-0 2/2 Running 0 2d 68 | pod/kubevirtlab-grafana-bf9db4bd9-r8pmt 2/2 Running 0 2d 69 | pod/kubevirtlab-kube-state-metrics-6544b5778c-gvqc9 1/1 Running 0 2d 70 | pod/kubevirtlab-prometheus-node-exporter-8dp4d 1/1 Running 0 2d 71 | pod/kubevirtlab-prometheus-ope-operator-78d5bbf8ff-rx9kv 1/1 Running 0 2d 72 | pod/prometheus-kubevirtlab-prometheus-ope-prometheus-0 3/3 Running 0 2d 73 | 74 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 75 | service/alertmanager-operated ClusterIP None 9093/TCP,6783/TCP 2d 76 | service/kubevirtlab-grafana ClusterIP 10.96.1.202 80/TCP 2d 77 | service/kubevirtlab-grafana-nodeport NodePort 10.111.14.110 3000:30300/TCP 2d 78 | service/kubevirtlab-kube-state-metrics ClusterIP 10.101.96.232 8080/TCP 2d 79 | service/kubevirtlab-prometheus-node-exporter ClusterIP 10.99.86.74 9100/TCP 2d 80 | service/kubevirtlab-prometheus-ope-alertmanager 
ClusterIP 10.105.253.180 9093/TCP 2d 81 | service/kubevirtlab-prometheus-ope-operator ClusterIP 10.109.209.37 8080/TCP 2d 82 | service/kubevirtlab-prometheus-ope-prometheus NodePort 10.109.224.249 9090:30090/TCP 2d 83 | service/prometheus-operated ClusterIP None 9090/TCP 2d 84 | 85 | NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE 86 | daemonset.apps/kubevirtlab-prometheus-node-exporter 1 1 1 1 1 2d 87 | 88 | NAME READY UP-TO-DATE AVAILABLE AGE 89 | deployment.apps/kubevirtlab-grafana 1/1 1 1 2d 90 | deployment.apps/kubevirtlab-kube-state-metrics 1/1 1 1 2d 91 | deployment.apps/kubevirtlab-prometheus-ope-operator 1/1 1 1 2d 92 | 93 | NAME DESIRED CURRENT READY AGE 94 | replicaset.apps/kubevirtlab-grafana-bf9db4bd9 1 1 1 2d 95 | replicaset.apps/kubevirtlab-kube-state-metrics-6544b5778c 1 1 1 2d 96 | replicaset.apps/kubevirtlab-prometheus-ope-operator-78d5bbf8ff 1 1 1 2d 97 | 98 | NAME READY AGE 99 | statefulset.apps/alertmanager-kubevirtlab-prometheus-ope-alertmanager 1/1 2d 100 | statefulset.apps/prometheus-kubevirtlab-prometheus-ope-prometheus 1/1 2d 101 | ``` 102 | 103 | Notice that both *PromUI* and *Grafana* have been exposed on the node's TCP ports 30090 and 30300. Both can be accessed with the following details: 104 | 105 | * PromUI 106 | * http://kubevirtlab-\<instance_number\>.try.kubevirt.me:30090 107 | * Grafana 108 | * http://kubevirtlab-\<instance_number\>.try.kubevirt.me:30300 109 | * Username: admin 110 | * Password: kubevirtlab123 111 | 112 | ## Recap 113 | 114 | * We've reviewed the available PVs in the cluster that we'll use later on 115 | * We've reviewed the Prometheus deployment, Grafana and PromUI services exposed on the node 116 | 117 | This concludes this section of the lab.
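The PromUI and Grafana endpoints above follow the instance naming convention from lab 1. As a quick sanity check, here is a small shell sketch that builds both URLs for a given instance number (the `instance=2` value is just an example; use your assigned number):

```shell
# Build the PromUI and Grafana URLs for an assigned instance number
# (2 is only an example here).
instance=2
promui_url="http://kubevirtlab-${instance}.try.kubevirt.me:30090"
grafana_url="http://kubevirtlab-${instance}.try.kubevirt.me:30300"
echo "${promui_url}"
echo "${grafana_url}"
```

Opening either URL in your browser should show the corresponding UI.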
118 | 119 | [<< Previous: Introduction / Connectivity](../lab001/lab001.md) | [README](../../README.md) | [Next: Deploy KubeVirt/CDI/UI >>](../lab003/lab003.md) 120 | -------------------------------------------------------------------------------- /labs/lab003/lab003.md: -------------------------------------------------------------------------------- 1 | # Lab 3: Deploy KubeVirt and CDI 2 | 3 | In this section we're going to deploy the following two components: 4 | 5 | * The [KubeVirt Operator](https://github.com/kubevirt/kubevirt) provides a set of Custom Resource Definitions (CRDs) and components required to manage VMs inside Kubernetes 6 | * The Containerized Data Importer [(CDI) Operator](https://github.com/kubevirt/containerized-data-importer) provides another set of CRDs to facilitate the management of persistent storage for KubeVirt-based VMs 7 | 8 | 9 | **DISCLAIMER:** This header is just for our own purposes, so **you don't need to execute it!**: 10 | 11 | ```bash @mdsh 12 | mdsh-lang-bash() { shell; } 13 | ``` 14 | 15 | ## Install the KubeVirt Operator 16 | 17 | We're going to start with the KubeVirt operator. 18 | 19 | Connect to your assigned instance and execute the following steps: 20 | 21 | ```shell 22 | ## LAB003 23 | 24 | kubectl apply -f $HOME/kubevirt/kubevirt-operator-manifests/kubevirt-operator.yaml 25 | kubectl config set-context --current --namespace=kubevirt 26 | while ! kubectl get deployment virt-operator -n kubevirt --no-headers && echo 'Looping...'; do sleep 5; done 27 | kubectl wait deployment virt-operator --for=condition=available --timeout=600s -n kubevirt 28 | ``` 29 | 30 | - Output: 31 | ``` 32 | deployment.extensions/virt-operator condition met 33 | ``` 34 | 35 | ```shell 36 | kubectl apply -f $HOME/kubevirt/kubevirt-operator-manifests/kubevirt-cr.yaml 37 | while !
kubectl get deployment virt-api -n kubevirt --no-headers && echo 'Looping...'; do sleep 5; done 38 | kubectl wait deployment virt-api --for condition=available --timeout=600s -n kubevirt 39 | ``` 40 | 41 | - Output: 42 | ``` 43 | deployment.extensions/virt-api condition met 44 | ``` 45 | 46 | ```shell 47 | while ! kubectl get deployment virt-controller -n kubevirt --no-headers && echo 'Looping...'; do sleep 5; done 48 | kubectl wait deployment virt-controller --for condition=available --timeout=600s -n kubevirt 49 | ``` 50 | 51 | - Output: 52 | ``` 53 | deployment.extensions/virt-controller condition met 54 | ``` 55 | 56 | Let's explore what is being deployed as a result: 57 | 58 | ```shell 59 | kubectl get all -l kubevirt.io 60 | ``` 61 | 62 | ``` 63 | NAME READY STATUS RESTARTS AGE 64 | pod/virt-api-58554748d5-5lzsd 1/1 Running 0 100m 65 | pod/virt-api-58554748d5-rg5rk 1/1 Running 0 100m 66 | pod/virt-controller-76765f49f9-wphx9 1/1 Running 0 99m 67 | pod/virt-controller-76765f49f9-xdhlp 1/1 Running 0 99m 68 | pod/virt-handler-4nv45 1/1 Running 0 99m 69 | pod/virt-operator-5ddb4674b9-cj2s6 1/1 Running 0 101m 70 | pod/virt-operator-5ddb4674b9-kvd8r 1/1 Running 0 101m 71 | 72 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 73 | service/kubevirt-prometheus-metrics ClusterIP 10.110.103.5 443/TCP 100m 74 | service/virt-api ClusterIP 10.97.212.57 443/TCP 100m 75 | 76 | NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE 77 | daemonset.apps/virt-handler 1 1 1 1 1 99m 78 | 79 | NAME READY UP-TO-DATE AVAILABLE AGE 80 | deployment.apps/virt-api 2/2 2 2 100m 81 | deployment.apps/virt-controller 2/2 2 2 99m 82 | deployment.apps/virt-operator 2/2 2 2 101m 83 | 84 | NAME DESIRED CURRENT READY AGE 85 | replicaset.apps/virt-api-58554748d5 2 2 2 100m 86 | replicaset.apps/virt-controller-76765f49f9 2 2 2 99m 87 | replicaset.apps/virt-operator-5ddb4674b9 2 2 2 101m 88 | ``` 89 | 90 | Let's verify the KubeVirt API is reachable: 91 | 92 | ```shell 93 | virtctl version 94 
| ``` 95 | 96 | - Output: 97 | 98 | ``` 99 | Client Version: version.Info{GitVersion:"v0.17.0", GitCommit:"c0f960702dce718419a767f3913669f539229ff0", GitTreeState:"clean", BuildDate:"2019-05-05T08:09:14Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"} 100 | Server Version: version.Info{GitVersion:"v0.17.0", GitCommit:"a067696ed6c25b0eab9dfcd01bbdc045f500f8ca", GitTreeState:"clean", BuildDate:"2019-05-06T14:58:11Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"} 101 | ``` 102 | ### Exercise 103 | 104 | Check that a few new Custom Resource Definitions (CRDs) are now available in the cluster. 105 | 106 | What are these? What is their purpose? What do they look like? 107 | 108 | Do you see any instances of these Custom Resources? 109 | 110 | ## Install the CDI operator 111 | 112 | ```shell 113 | kubectl apply -f $HOME/kubevirt/cdi-operator-manifests/cdi-operator.yaml 114 | kubectl config set-context --current --namespace=cdi 115 | while ! kubectl get deployment cdi-operator -n cdi --no-headers && echo 'Looping...'; do sleep 5; done 116 | kubectl wait deployment cdi-operator --for condition=available --timeout=600s -n cdi 117 | ``` 118 | 119 | - Output: 120 | 121 | ``` 122 | deployment.extensions/cdi-operator condition met 123 | ``` 124 | 125 | ```shell 126 | kubectl apply -f $HOME/kubevirt/cdi-operator-manifests/cdi-operator-cr.yaml 127 | 128 | while ! kubectl get deployment cdi-apiserver -n cdi --no-headers && echo 'Looping...'; do sleep 5; done 129 | while ! kubectl get deployment cdi-deployment -n cdi --no-headers && echo 'Looping...'; do sleep 5; done 130 | while ! kubectl get deployment cdi-uploadproxy -n cdi --no-headers && echo 'Looping...'; do sleep 5; done 131 | ``` 132 | 133 | - This will be the output for each component until it appears: 134 | ``` 135 | Error from server (NotFound): deployments.extensions "cdi-apiserver" not found 136 | Looping...
137 | ``` 138 | 139 | - And then: 140 | ```shell 141 | kubectl wait deployment -l cdi.kubevirt.io --for condition=available --timeout=600s -n cdi 142 | ``` 143 | 144 | - Output: 145 | 146 | ``` 147 | deployment.extensions/cdi-apiserver condition met 148 | deployment.extensions/cdi-deployment condition met 149 | deployment.extensions/cdi-uploadproxy condition met 150 | ``` 151 | 152 | Now, let's see what we've got deployed: 153 | 154 | ```shell 155 | kubectl get all 156 | ``` 157 | 158 | - Output: 159 | 160 | ``` 161 | NAME READY STATUS RESTARTS AGE 162 | pod/cdi-apiserver-d88b544bb-cbm27 1/1 Running 0 85m 163 | pod/cdi-deployment-6875d8bff8-qnp4p 1/1 Running 0 85m 164 | pod/cdi-operator-5f58bbbbcf-gmk2t 1/1 Running 0 87m 165 | pod/cdi-uploadproxy-cddbb95b-6dzkd 1/1 Running 0 85m 166 | 167 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 168 | service/cdi-api ClusterIP 10.103.200.116 443/TCP 85m 169 | service/cdi-uploadproxy ClusterIP 10.103.55.72 443/TCP 85m 170 | 171 | NAME READY UP-TO-DATE AVAILABLE AGE 172 | deployment.apps/cdi-apiserver 1/1 1 1 85m 173 | deployment.apps/cdi-deployment 1/1 1 1 85m 174 | deployment.apps/cdi-operator 1/1 1 1 87m 175 | deployment.apps/cdi-uploadproxy 1/1 1 1 85m 176 | 177 | NAME DESIRED CURRENT READY AGE 178 | replicaset.apps/cdi-apiserver-d88b544bb 1 1 1 85m 179 | replicaset.apps/cdi-deployment-6875d8bff8 1 1 1 85m 180 | replicaset.apps/cdi-operator-5f58bbbbcf 1 1 1 87m 181 | replicaset.apps/cdi-uploadproxy-cddbb95b 1 1 1 85m 182 | ``` 183 | 184 | ## Enable Prometheus to scrape KubeVirt metrics 185 | 186 | ``` 187 | kubectl apply -f $HOME/student-materials/kubevirt-servicemonitor.yml 188 | ``` 189 | 190 | - Output: 191 | 192 | ``` 193 | servicemonitor.monitoring.coreos.com/kubevirtlab-kubevirt created 194 | ``` 195 | 196 | ## Recap 197 | 198 | Let's summarize what happened in this lab: 199 | 200 | * We have installed the KubeVirt and CDI operators: 201 | * The operators enabled us to create instances of KubeVirt and CDI as Custom Resources (CRs). 202 | * By creating CRs for both KubeVirt and CDI, the operators deployed the necessary KubeVirt and CDI components. 203 | * We verified that KubeVirt's API is available using *virtctl*, the CLI tool to manage KubeVirt VMs. 204 | * Finally, we've deployed a *ServiceMonitor* object to ask Prometheus to scrape the KubeVirt components, including the VMs we'll be running in subsequent labs. 205 | 206 | 207 | This concludes this section. Take your time to review what's been deployed (all the resources, etc.) and then head off to the next lab! 208 | 209 | [<< Previous: Review the Kubernetes environment](../lab002/lab002.md) | [README](../../README.md) | [Next: Deploy our first Virtual Machine >>](../lab004/lab004.md) 210 | -------------------------------------------------------------------------------- /labs/lab004/lab004.md: -------------------------------------------------------------------------------- 1 | # Lab 4: Deploy your first Virtual Machine 2 | 3 | **DISCLAIMER:** This header is just for our own purposes, so **you don't need to execute it!**: 4 | 5 | ```bash @mdsh 6 | mdsh-lang-bash() { shell; } 7 | ``` 8 | 9 | We will deploy a VM that uses a [ContainerDisk](https://kubevirt.io/user-guide/virtual_machines/disks_and_volumes/#containerdisk). Therefore, the VM will not use any of the available Persistent Volumes (PVs), as ContainerDisk-based storage is ephemeral. 10 | 11 | ```shell 12 | ## LAB004 13 | 14 | kubectl config set-context --current --namespace=default 15 | kubectl apply -f $HOME/student-materials/vm_containerdisk.yml 16 | ``` 17 | 18 | - Output: 19 | ``` 20 | virtualmachine.kubevirt.io/vm1 created 21 | ``` 22 | 23 | Check the VM that we just created: 24 | 25 | ```shell 26 | kubectl get vm vm1 27 | ``` 28 | 29 | - Output: 30 | ``` 31 | NAME AGE RUNNING VOLUME 32 | vm1 24s false 33 | ``` 34 | 35 | Notice it's not running. This is because in its YAML definition we can find the following: 36 | 37 | ``` 38 | ...
39 | spec: 40 | running: false 41 | ... 42 | ``` 43 | 44 | ## Start the Virtual Machine 45 | 46 | The *VirtualMachine* object is the definition of our VM, but a running VM is represented by a *VirtualMachineInstance*. Our VM has not been instantiated yet; let's do so using *virtctl*: 47 | 48 | ```shell 49 | virtctl start vm1 50 | ``` 51 | 52 | - Output: 53 | ``` 54 | VM vm1 was scheduled to start 55 | ``` 56 | 57 | A *virt-launcher* Pod should be starting now, which will spawn the actual virtual machine instance (or VMI): 58 | 59 | - With these commands you can make sure the VMI has been created and the VM has started: 60 | ``` 61 | while ! kubectl get vmi vm1 -n default --no-headers && echo 'Looping...'; do sleep 5; done 62 | kubectl wait vmi vm1 --timeout=600s --for condition=Ready 63 | ``` 64 | 65 | - The usual Kubernetes command to watch this is: 66 | ``` 67 | kubectl get pods -w 68 | ``` 69 | 70 | - Output: 71 | ``` 72 | NAME READY STATUS RESTARTS AGE 73 | virt-launcher-vm1-2qflc 0/2 Running 0 12s 74 | virt-launcher-vm1-2qflc 0/2 Running 0 12s 75 | virt-launcher-vm1-2qflc 1/2 Running 0 17s 76 | virt-launcher-vm1-2qflc 2/2 Running 0 23s 77 | ``` 78 | 79 | We can also use *kubectl* to check the virtual machine and its instance: 80 | 81 | ```shell 82 | kubectl get vm vm1 83 | ``` 84 | 85 | - Output: 86 | ``` 87 | NAME AGE RUNNING VOLUME 88 | vm1 19m true 89 | ``` 90 | 91 | ```shell 92 | kubectl get vmi vm1 93 | ``` 94 | 95 | - Output: 96 | ``` 97 | NAME AGE PHASE IP NODENAME 98 | vm1 3m49s Running 10.244.0.22 kubevirtlab-2 99 | ``` 100 | 101 | Using *virtctl* we can connect to the VMI's console as follows: 102 | 103 | ``` 104 | virtctl console vm1 105 | ``` 106 | 107 | - Output: 108 | ``` 109 | Successfully connected to vm1 console.
The escape sequence is ^] 110 | 111 | Fedora 29 (Cloud Edition) 112 | Kernel 4.18.16-300.fc29.x86_64 on an x86_64 (ttyS0) 113 | 114 | vm1 login: fedora 115 | Password: fedora 116 | 117 | [fedora@vm1 ~]$ ping -c 1 kubevirt.io 118 | PING kubevirt.io (185.199.111.153) 56(84) bytes of data. 119 | 64 bytes from 185.199.111.153 (185.199.111.153): icmp_seq=1 ttl=54 time=5.30 ms 120 | 121 | --- kubevirt.io ping statistics --- 122 | 1 packets transmitted, 1 received, 0% packet loss, time 0ms 123 | rtt min/avg/max/mdev = 5.303/5.303/5.303/0.000 ms 124 | [fedora@vm1 ~]$ exit 125 | 126 | Fedora 29 (Cloud Edition) 127 | Kernel 4.18.16-300.fc29.x86_64 on an x86_64 (ttyS0) 128 | 129 | vm1 login: 130 | ``` 131 | 132 | To verify that the VM is up and running from outside its console, retrieve the VMI's IP address (you can then check connectivity to it, for example over SSH) with this command: 133 | ``` 134 | VM_IP="$(kubectl get vmi vm1 -o jsonpath='{.status.interfaces[0].ipAddress}')" 135 | ``` 136 | 137 | **NOTE**: To exit the console press *Ctrl+]* 138 | 139 | There is also a graphical console to connect to the VMs, using *virtctl vnc* instead of *virtctl console*. This requires you to have a VNC client like *remote-viewer* installed on your system. In this lab we are executing *virtctl* remotely on the lab instance, and therefore we can't use this option. Instead, we will access a VM's graphical console in a later lab through the web UI. 140 | 141 | You can shut down the VM using this command: 142 | ```shell 143 | virtctl stop vm1 144 | ``` 145 | 146 | And you can delete the VM definition using this one: 147 | ```shell 148 | kubectl delete vm vm1 -n default 149 | ``` 150 | 151 | 152 | ### Exercise 153 | 154 | Can you operate on the VM and VMI objects using standard *kubectl* commands without relying on *virtctl*? 155 | 156 | In particular, can you start or stop a VM without *virtctl*? 157 | 158 | ## Recap 159 | 160 | * We defined a *VirtualMachine* object on the cluster. This didn't actually instantiate the *vm1* VM.
161 | * We started the *vm1* using *virtctl*, which instantiated it by creating the *VirtualMachineInstance* object 162 | * *kubectl patch* could have also been used to start *vm1* 163 | * Finally, we connected to the *vm1's* serial console using *virtctl* 164 | 165 | This concludes this section of the lab. Before heading off to the next lab, spend some time describing and exploring the objects this lab created on the cluster. 166 | 167 | [<< Previous: Deploy KubeVirt/CDI/UI](../lab003/lab003.md) | [README](../../README.md) | [Next: Deploy a VM using a DataVolume >>](../lab005/lab005.md) 168 | -------------------------------------------------------------------------------- /labs/lab005/lab005.md: -------------------------------------------------------------------------------- 1 | # Lab 5: Deploy a VM using a DataVolume 2 | 3 | **DISCLAIMER:** This header is just for our own purposes, so **you don't need to execute it!**: 4 | 5 | ```bash @mdsh 6 | mdsh-lang-bash() { shell; } 7 | ``` 8 | 9 | There is another VM definition that uses a [DataVolume](https://kubevirt.io/user-guide/docs/latest/creating-virtual-machines/disks-and-volumes.html#datavolume) instead of a ContainerDisk: 10 | 11 | ```shell 12 | ## LAB005 13 | 14 | cat $HOME/student-materials/vm_datavolume.yml 15 | ``` 16 | 17 | Now let's create this VM and observe the image import process before the VMI is created.
**Note:** the image import is relatively fast and the importer pod terminates quickly, so be ready to watch the logs shortly after the VM creation or you will miss the opportunity: 18 | 19 | ```shell 20 | kubectl apply -f $HOME/student-materials/vm_datavolume.yml 21 | ``` 22 | 23 | - Output: 24 | ``` 25 | virtualmachine.kubevirt.io/vm2 created 26 | ``` 27 | 28 | ```shell 29 | kubectl get pods | grep importer 30 | ``` 31 | 32 | - Output: 33 | ``` 34 | importer-vm2-dv-sgch7 35 | ``` 36 | 37 | ``` 38 | kubectl logs -f importer-vm2-dv-sgch7 39 | ``` 40 | 41 | - Output: 42 | ``` 43 | I0517 14:43:45.670580 1 importer.go:58] Starting importer 44 | I0517 14:43:45.670866 1 importer.go:100] begin import process 45 | I0517 14:43:47.098597 1 data-processor.go:237] Calculating available size 46 | I0517 14:43:47.102760 1 data-processor.go:245] Checking out file system volume size. 47 | I0517 14:43:47.102885 1 data-processor.go:249] Request image size not empty. 48 | I0517 14:43:47.102915 1 data-processor.go:254] Target size 10692100096. 49 | I0517 14:43:47.271400 1 data-processor.go:167] New phase: Convert 50 | I0517 14:43:47.271430 1 data-processor.go:173] Validating image 51 | I0517 14:43:49.478205 1 qemu.go:205] 0.00 52 | I0517 14:43:51.845307 1 qemu.go:205] 1.19 53 | I0517 14:43:51.846846 1 qemu.go:205] 2.38 54 | ... 55 | I0517 14:44:04.279388 1 qemu.go:205] 99.21 56 | I0517 14:44:04.296137 1 data-processor.go:167] New phase: Resize 57 | W0517 14:44:04.316865 1 data-processor.go:224] Available space less than requested size, resizing image to available space 10692100096. 
58 | I0517 14:44:04.317260 1 data-processor.go:230] Expanding image size to: 10692100096 59 | I0517 14:44:04.373807 1 data-processor.go:167] New phase: Complete 60 | ``` 61 | 62 | Once the VM has started we can again connect to its serial console and check if we have the extra space that the Containerized Data Importer (CDI) made for us: 63 | 64 | ``` 65 | virtctl console vm2 66 | ``` 67 | 68 | - Output: 69 | ``` 70 | Successfully connected to vm2 console. The escape sequence is ^] 71 | 72 | login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root. 73 | cirros login: cirros 74 | Password: gocubsgo 75 | ``` 76 | 77 | - Once inside of the VM: 78 | ```shell 79 | df -h / 80 | ``` 81 | 82 | - Output: 83 | ``` 84 | Filesystem Size Used Available Use% Mounted on 85 | /dev/vda1 4.8G 24.1M 4.6G 1% / 86 | ``` 87 | 88 | ## Explore the DataVolume 89 | 90 | ```shell 91 | while ! kubectl describe dv vm2-dv -n default && echo 'Looping...'; do sleep 5; done 92 | ``` 93 | 94 | - Output: 95 | ``` 96 | Name: vm2-dv 97 | Namespace: default 98 | Labels: kubevirt.io/created-by=09dda4d2-78b2-11e9-b502-e6faa692a1af 99 | Annotations: kubevirt.io/delete-after-timestamp: 1558104328 100 | API Version: cdi.kubevirt.io/v1alpha1 101 | Kind: DataVolume 102 | Metadata: 103 | Creation Timestamp: 2019-05-17T14:42:57Z 104 | Generation: 15 105 | Owner References: 106 | API Version: kubevirt.io/v1alpha3 107 | Block Owner Deletion: true 108 | Controller: true 109 | Kind: VirtualMachine 110 | Name: vm2 111 | UID: 09dda4d2-78b2-11e9-b502-e6faa692a1af 112 | Resource Version: 286134 113 | Self Link: /apis/cdi.kubevirt.io/v1alpha1/namespaces/default/datavolumes/vm2-dv 114 | UID: 10054733-78b2-11e9-b502-e6faa692a1af 115 | Spec: 116 | Pvc: 117 | Access Modes: 118 | ReadWriteOnce 119 | Data Source: 120 | Resources: 121 | Requests: 122 | Storage: 10229Mi 123 | Storage Class Name: local-volumes 124 | Source: 125 | Http: 126 | URL: 
https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img 127 | Status: 128 | Phase: Succeeded 129 | Progress: 100.0% 130 | Events: 131 | Type Reason Age From Message 132 | ---- ------ ---- ---- ------- 133 | Normal ImportScheduled 3m55s datavolume-controller Import into vm2-dv scheduled 134 | Normal ImportInProgress 3m38s datavolume-controller Import into vm2-dv in progress 135 | Normal Synced 3m13s (x16 over 3m55s) datavolume-controller DataVolume synced successfully 136 | Normal ImportSucceeded 3m13s datavolume-controller Successfully imported into PVC vm2-dv 137 | ``` 138 | 139 | The associated Persistent Volume Claim (PVC) has the same name as the DataVolume, *vm2-dv*. Let's describe it as well, it contains some interesting data: 140 | 141 | ```shell 142 | while ! kubectl describe pvc vm2-dv -n default && echo 'Looping...'; do sleep 5; done 143 | ``` 144 | 145 | - Output: 146 | ``` 147 | Name: vm2-dv 148 | Namespace: default 149 | StorageClass: local-volumes 150 | Status: Bound 151 | Volume: local-pv-6035a584 152 | Labels: app=containerized-data-importer 153 | cdi-controller=vm2-dv 154 | Annotations: cdi.kubevirt.io/storage.contentType: kubevirt 155 | cdi.kubevirt.io/storage.import.endpoint: https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img 156 | cdi.kubevirt.io/storage.import.importPodName: importer-vm2-dv-sgch7 157 | cdi.kubevirt.io/storage.import.source: http 158 | cdi.kubevirt.io/storage.pod.phase: Succeeded 159 | pv.kubernetes.io/bind-completed: yes 160 | pv.kubernetes.io/bound-by-controller: yes 161 | Finalizers: [kubernetes.io/pvc-protection] 162 | Capacity: 10229Mi 163 | Access Modes: RWO 164 | VolumeMode: Filesystem 165 | Events: 166 | Type Reason Age From Message 167 | ---- ------ ---- ---- ------- 168 | Normal WaitForFirstConsumer 22m (x3 over 22m) persistentvolume-controller waiting for first consumer to be created before binding 169 | Mounted By: virt-launcher-vm2-92pm2 170 | ``` 171 | 172 | Pay special attention to 
the *Annotations* section, where CDI adds interesting metadata such as the endpoint the image was imported from, the import phase, and the importer pod name. 173 | 174 | You can shut down the VM using this command: 175 | ``` 176 | virtctl stop vm2 177 | ``` 178 | 179 | And you can delete the VM definition using this one: 180 | ```shell 181 | kubectl delete vm vm2 -n default 182 | ``` 183 | 184 | ## Recap 185 | 186 | * We created a second VM, *vm2*. This one included a *DataVolume* template which instructs KubeVirt and CDI to: 187 | * Create a PVC which uses a PV already created on the cluster. 188 | * Run a CDI importer pod that takes the source, in this case a URL, and imports the image directly into the PV bound to the PVC. 189 | * Once that process finishes, a *virt-launcher* pod starts using the same PVC, with the imported image, as its boot disk. 190 | * A detail that might go unnoticed: CDI is able to detect the requested storage size, the available space (accounting for file system overhead) and resize the image accordingly. 191 | * We connected to *vm2's* serial console and verified that the root partition was expanded to fill the underlying disk size. This is usually done through configuration services like *cloud-init*. 192 | 193 | This concludes this section of the lab. Spend some time checking out all the objects and how they relate to each other, then head off to the next lab!
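For reference, the essential fields of the *DataVolume* we just explored can be condensed into a trimmed sketch (values taken from the describe output above; the `cdi.kubevirt.io/v1alpha1` API version matches this lab's CDI release and may differ in newer ones):

```yaml
# Trimmed DataVolume sketch: an HTTP source imported into a local-volumes PVC
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: vm2-dv
spec:
  source:
    http:
      url: https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10229Mi
    storageClassName: local-volumes
```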
194 | 195 | [<< Previous: Deploy our first Virtual Machine](../lab004/lab004.md) | [README](../../README.md) | [Next: Exploring the KubeVirt UI >>](../lab006/lab006.md) 196 | -------------------------------------------------------------------------------- /labs/lab006/images/kwebui-01.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab006/images/kwebui-01.png -------------------------------------------------------------------------------- /labs/lab006/images/kwebui-02.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab006/images/kwebui-02.png -------------------------------------------------------------------------------- /labs/lab006/images/kwebui-03.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab006/images/kwebui-03.png -------------------------------------------------------------------------------- /labs/lab006/images/kwebui-04.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab006/images/kwebui-04.png -------------------------------------------------------------------------------- /labs/lab006/images/kwebui-05.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab006/images/kwebui-05.png -------------------------------------------------------------------------------- /labs/lab006/images/kwebui-06.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab006/images/kwebui-06.png -------------------------------------------------------------------------------- /labs/lab006/lab006.md: -------------------------------------------------------------------------------- 1 | # Lab 6: Install and explore the KubeVirt Web UI 2 | 3 | 4 | **DISCLAIMER:** This header is just for our own purposes, so **you don't need to execute it!**: 5 | 6 | ```bash @mdsh 7 | mdsh-lang-bash() { shell; } 8 | ``` 9 | 10 | ## Install the KubeVirt Web UI 11 | 12 | In this lab we are going to deploy KubeVirt's web UI, which eases the management of VMs. 13 | 14 | ```shell 15 | ## LAB006 16 | 17 | kubectl config set-context --current --namespace=kubevirt 18 | kubectl apply -f $HOME/kubevirt/kubevirt-ui-custom-manifests/kubevirt_ui.yml 19 | while ! kubectl get rc kubevirt-web-ui -n kubevirt --no-headers && echo 'Looping...'; do sleep 5; done 20 | kubectl wait pod -l app=kubevirt-web-ui --for condition=Ready --timeout=600s -n kubevirt 21 | ``` 22 | 23 | - Output: 24 | ``` 25 | pod/kubevirt-web-ui-qmpwl condition met 26 | ``` 27 | 28 | Now, check the three resources deployed for the Web UI: 29 | 30 | ```shell 31 | kubectl get all -l app=kubevirt-web-ui 32 | ``` 33 | 34 | - Output: 35 | ``` 36 | NAME READY STATUS RESTARTS AGE 37 | pod/kubevirt-web-ui-qmpwl 1/1 Running 0 78m 38 | 39 | NAME DESIRED CURRENT READY AGE 40 | replicationcontroller/kubevirt-web-ui 1 1 1 78m 41 | 42 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 43 | service/kubevirt-web-ui NodePort 10.96.35.132 9000:30000/TCP 78m 44 | ``` 45 | 46 | **NOTE**: Take a look at the *kubevirt-web-ui* service: the UI should be available on your node's assigned hostname at port 30000.
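The `9000:30000/TCP` value in the `PORT(S)` column reads as `<servicePort>:<nodePort>/<protocol>`. A small shell sketch of pulling the node port out of that mapping:

```shell
# The PORT(S) column is formatted as <servicePort>:<nodePort>/<protocol>;
# extract the node port from the mapping shown above.
ports="9000:30000/TCP"
node_port="${ports#*:}"      # strip the service port -> "30000/TCP"
node_port="${node_port%%/*}" # strip the protocol     -> "30000"
echo "${node_port}"
```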
47 | 48 | ## Explore the KubeVirt UI 49 | 50 | Using your browser, head to *http://\<instance_hostname\>:30000* and you'll be greeted by a status page showing the health of the cluster and a stream of events. 51 | 52 | ![Cluster status page](images/kwebui-01.png) 53 | 54 | On the left side navigation bar, click on *Workloads* and then *VirtualMachines*. You'll be presented with a view of the VMs defined in the cluster; click on *vm1* to open up its details page. 55 | 56 | ![VM1 details](images/kwebui-02.png) 57 | 58 | Notice all the available tabs (*YAML*, *Consoles*, ...) and click on *Consoles* 59 | 60 | ![VM1 VNC Console](images/kwebui-03.png) 61 | 62 | Now click on the *actions* button; it will present you with a few options: 63 | 64 | ![VM actions](images/kwebui-04.png) 65 | 66 | You can interact with the VM from the UI as well. The *Migrate Virtual Machine* action is not really available for this lab because we are running on a single-node cluster (so we can't migrate a VM from one node to another), but clicking it on a multi-node cluster would trigger a live migration of the VM. 67 | 68 | Now let's go back to *Workloads*, *VirtualMachines* and review the details for *vm2*. Take a closer look at its disks by clicking on the *Disks* tab: 69 | 70 | ![VM2 storage details](images/kwebui-05.png) 71 | 72 | To see even more details, go to the *Storage* section and click on *Persistent Volume Claims*. There you will see a summary of *PVCs* in the selected namespace, and *vm2-dv* should be listed there. Click on it to see its details: 73 | 74 | ![vm2-dv details](images/kwebui-06.png) 75 | 76 | Here we can see that the PVC has an owner, the *vm2-dv* *DataVolume*, and that a Persistent Volume is bound to this claim (in this case the *local-pv-2049c47d* *PV*).
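This ownership is recorded in the PVC's own metadata as an *ownerReferences* entry. A trimmed, illustrative sketch of that stanza (UIDs omitted; exact values will differ in your cluster):

```yaml
# Trimmed PVC metadata: the vm2-dv DataVolume owns (and garbage-collects) this claim
metadata:
  name: vm2-dv
  namespace: default
  ownerReferences:
  - apiVersion: cdi.kubevirt.io/v1alpha1
    kind: DataVolume
    name: vm2-dv
    controller: true
    blockOwnerDeletion: true
```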
77 | 78 | ## Recap 79 | 80 | * We deployed the KubeVirt web UI 81 | * We connected to the web UI, which is exposed on the node's IP at port 30000 82 | * We checked the VMs in the cluster 83 | * We checked the available actions and attributes 84 | 85 | This concludes this lab! Take your time exploring the UI and when you're ready head to the next one! 86 | 87 | [<< Previous: Deploy a VM using a DataVolume](../lab005/lab005.md) | [README](../../README.md) | [Next: Using the KubeVirt UI to interact with VMs >>](../lab007/lab007.md) 88 | -------------------------------------------------------------------------------- /labs/lab007/images/basic_settings.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab007/images/basic_settings.png -------------------------------------------------------------------------------- /labs/lab007/images/new_project.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab007/images/new_project.png -------------------------------------------------------------------------------- /labs/lab007/images/new_vm_wizard.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab007/images/new_vm_wizard.png -------------------------------------------------------------------------------- /labs/lab007/images/overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab007/images/overview.png -------------------------------------------------------------------------------- /labs/lab007/images/pod_overview.png:
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab007/images/pod_overview.png -------------------------------------------------------------------------------- /labs/lab007/images/pods.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab007/images/pods.png -------------------------------------------------------------------------------- /labs/lab007/images/start_vm.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab007/images/start_vm.png -------------------------------------------------------------------------------- /labs/lab007/images/ui.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab007/images/ui.png -------------------------------------------------------------------------------- /labs/lab007/images/vm_console.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab007/images/vm_console.png -------------------------------------------------------------------------------- /labs/lab007/lab007.md: -------------------------------------------------------------------------------- 1 | # Lab 7: Using the Kubevirt UI to interact with VMs 2 | 3 | In this section we will deploy and interact with VMs using KubeVirt's web-based user interface. 4 | 5 | You can then access the web UI at `http://kubevirtlab-.:30000` and use it to: 6 | 7 | * stop/start/delete/... 
VMs 8 | * Create new ones 9 | * Access VM consoles through your browser 10 | 11 | ![kubevirt-ui](images/ui.png) 12 | 13 | **DISCLAIMER:** This header is just for our own tooling, so **you don't need to execute it!**: 14 | 15 | ```bash @mdsh 16 | mdsh-lang-bash() { shell; } 17 | ``` 18 | 19 | ## Using the KubeVirt web UI 20 | 21 | ### Create a Virtual Machine 22 | 23 | > **NOTE:** at the time of this writing, the WebUI is fully functional on 24 | > OpenShift but not yet in Kubernetes (we are working on it! :D). In Kubernetes, 25 | > if you try to spin up a VM using the *wizard* the namespaces will not be 26 | > shown, preventing this workflow from completing. For now, in a Kubernetes 27 | > environment, just skip this section about the create VM wizard and look at the 28 | > other UI features. 29 | 30 | Click the `Create Virtual Machine` drop-down and select `Create with Wizard`. 31 | 32 | ![create virtual machine wizard](images/new_vm_wizard.png) 33 | 34 | In the `Basic Settings` step, configure the following: 35 | 36 | * Name: `test` 37 | * Namespace: `myproject` 38 | * Provision Source: `Container` 39 | * Container Image: `docker.io/kubevirt/cirros-container-disk-demo:latest` 40 | * Operating System: `fedora29` 41 | * Flavor: `Custom` 42 | * Memory: `1` 43 | * CPUs: `1` 44 | * Workload Profile: `generic` 45 | 46 | ![create virtual machine wizard](images/basic_settings.png) 47 | 48 | Click `Next >` through the remaining steps and finish the wizard. 49 | 50 | ### Controlling the State of the VM 51 | 52 | To start the virtual machine, click the cog and select `Start Virtual Machine`.
53 | 54 | ![start vm](images/start_vm.png) 55 | 56 | Now click the virtual machine link `test`. 57 | 58 | ### Virtual Machine Overview and Console 59 | 60 | While looking at a VM's detailed page: 61 | 62 | ![overview](images/overview.png) 63 | 64 | click the *Consoles* tab to view its graphical (VNC) console: 65 | 66 | ![overview](images/vm_console.png) 67 | 68 | ### Associated Pods 69 | 70 | Virtual Machines run inside pods. Navigate to *Workloads -> Pods* to see a list of the pods that are currently running in the selected namespace: 71 | 72 | ![pods](images/pods.png) 73 | 74 | Then clicking on a specific pod's name will provide an overview of the pod: 75 | 76 | ![pods](images/pod_overview.png) 77 | 78 | #### Exercise 79 | 80 | - The pod details page has a *Terminal* tab. Is there any relationship between that tab and the *Consoles* tab in the VM details page? 81 | 82 | - Is there more than one container in the pod? What does each container do? 83 | 84 | This concludes this section of the lab. 85 | 86 | [<< Previous: Exploring the KubeVirt UI](../lab006/lab006.md) | [README](../../README.md) | [Next: Deploy a multi-homed VM using Multus >>](../lab008/lab008.md) 87 | -------------------------------------------------------------------------------- /labs/lab008/lab008.md: -------------------------------------------------------------------------------- 1 | # Lab 8: Deploy a multi-homed VM using Multus 2 | 3 | In this lab we will run virtual machines with multiple network interfaces (NICs) by leveraging the Multus and OVS CNI plugins. 4 | 5 | Having these two plugins makes it possible to define multiple networks in our Kubernetes cluster. Once the *Network Attachment Definitions* (NADs) are present, it's only a matter of defining the VM's NICs and linking them to each network as needed. 6 | 7 | > **WARNING: this lab contains deliberate errors!** Simple copy&paste will 8 | > *not* work in this lab. This was done to help you understand what you are 9 | > doing :), enjoy!
10 | 11 | **DISCLAIMER:** This header is just for our own tooling, so **you don't need to execute it!**: 12 | 13 | ```bash @mdsh 14 | mdsh-lang-bash() { shell; } 15 | ``` 16 | 17 | ## Open vSwitch Configuration 18 | 19 | Since we are using the OVS CNI plugin, we need to configure dedicated Open vSwitch bridges. 20 | 21 | The lab instances already have an OVS bridge named `br1`. 22 | 23 | As a reference, if we were to provision the bridge manually, we would just need to execute the following command: 24 | 25 | You will need to execute `ovs-vsctl` commands as root, so either prefix them with `sudo` or switch to the root user: 26 | ``` 27 | sudo su - 28 | ``` 29 | 30 | ``` 31 | ## LAB008 32 | 33 | ovs-vsctl add-br br1 34 | ``` 35 | 36 | To see the already provisioned bridge, execute this command: 37 | 38 | ``` 39 | ovs-vsctl show 40 | ``` 41 | 42 | - Output: 43 | ``` 44 | 48244a9a-507f-457e-ae78-574aa318d98e 45 | Bridge "br1" 46 | Port "eth1" 47 | Interface "eth1" 48 | type: internal 49 | Port "br1" 50 | Interface "br1" 51 | type: internal 52 | ovs_version: "2.0.0" 53 | ``` 54 | 55 | In a production environment, we would do the same on each of the cluster nodes and attach a dedicated physical interface to the bridge. 56 | 57 | ## Create a Network Attachment Definition 58 | 59 | A *NetworkAttachmentDefinition* (a Custom Resource) configures the CNI plugin within its *config* section. Among other settings, it specifies the bridge that OVS will use to attach the Pod's virtual interface.
60 | 61 | ``` 62 | apiVersion: "k8s.cni.cncf.io/v1" 63 | kind: NetworkAttachmentDefinition 64 | metadata: 65 | name: ovs-net-1 66 | namespace: 'multus' 67 | spec: 68 | config: '{ 69 | "cniVersion": "0.3.1", 70 | "type": "ovs", 71 | "bridge": "br1" 72 | }' 73 | ``` 74 | 75 | Let's now create a new NAD which will use the bridge *br1*: 76 | 77 | 78 | ```shell 79 | kubectl config set-context --current --namespace=default 80 | kubectl apply -f $HOME/student-materials/multus_nad_br1.yml 81 | ``` 82 | 83 | ## Create Virtual Machine 84 | 85 | So far we've seen *VirtualMachine* resource definitions that connect to one single network, the cluster's default or PodNetwork. 86 | 87 | To use the newly created network, the *VirtualMachine* object needs to reference it and include a new interface that will use it, similar to what we would do to attach volumes to a regular Pod: 88 | 89 | ``` 90 | ... 91 | ... 92 | interfaces: 93 | - bridge: {} 94 | name: default 95 | - bridge: {} 96 | macAddress: 20:37:cf:e0:ad:f1 97 | name: ovs-net-1 98 | ... 99 | ... 100 | networks: 101 | - name: default 102 | pod: {} 103 | - multus: 104 | networkName: ovs-net-1 105 | name: ovs-net-1 106 | ``` 107 | 108 | Create two VMs named **fedora-multus-1** and **fedora-multus-2**, both with a secondary NIC pointing to the previously created bridge/network attachment definition: 109 | 110 | ```shell 111 | kubectl apply -f $HOME/student-materials/vm_multus1.yml 112 | kubectl apply -f $HOME/student-materials/vm_multus2.yml 113 | ``` 114 | 115 | ## Verifying the network connectivity 116 | 117 | Locate the IPs of the two VMs, open two connections to your GCP instance and connect to both VM serial consoles: 118 | 119 | ``` 120 | virtctl console fedora-multus-1 121 | virtctl console fedora-multus-2 122 | ``` 123 | 124 | Confirm the VMs got two network interfaces, *eth0* and *eth1*: 125 | 126 | ``` 127 | [root@fedora-multus-1 ~]# ip a 128 | ``` 129 | 130 | - Output: 131 | ``` 132 | ...OUTPUT... 133 | 2: eth0: ... 
134 | 3: eth1: mtu 1500 qdisc noop state UP group default qlen 1000 135 | link/ether 20:37:cf:e0:ad:f1 brd ff:ff:ff:ff:ff:ff 136 | ``` 137 | 138 | Both VMs should already have an IP address on their *eth1*; cloud-init was used to configure it. The *fedora-multus-1* VM should have the IP address *11.0.0.5* and the *fedora-multus-2* VM should have *11.0.0.6*. Let's use ping or SSH to verify the connectivity: 139 | 140 | ``` 141 | ping 11.0.0.(5|6) 142 | ``` 143 | 144 | Or 145 | 146 | ``` 147 | ssh fedora@11.0.0.(5|6) 148 | ``` 149 | 150 | To stop the VMs, execute these commands: 151 | ``` 152 | virtctl stop fedora-multus-1 153 | virtctl stop fedora-multus-2 154 | ``` 155 | 156 | And to delete the VMs, execute these: 157 | ```shell 158 | kubectl delete vm fedora-multus-1 -n default 159 | kubectl delete vm fedora-multus-2 -n default 160 | ``` 161 | 162 | [<< Previous: Using the KubeVirt UI to interact with VMs](../lab007/lab007.md) | [README](../../README.md) | [Next: Exploring Kubevirt metrics >>](../lab009/lab009.md) 163 | -------------------------------------------------------------------------------- /labs/lab009/images/grafana-01.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab009/images/grafana-01.png -------------------------------------------------------------------------------- /labs/lab009/images/grafana-02.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab009/images/grafana-02.png -------------------------------------------------------------------------------- /labs/lab009/images/grafana-03.png: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab009/images/grafana-03.png -------------------------------------------------------------------------------- /labs/lab009/images/promui-01.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab009/images/promui-01.png -------------------------------------------------------------------------------- /labs/lab009/images/promui-02.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab009/images/promui-02.png -------------------------------------------------------------------------------- /labs/lab009/images/promui-03.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/lab009/images/promui-03.png -------------------------------------------------------------------------------- /labs/lab009/lab009.md: -------------------------------------------------------------------------------- 1 | # Lab 9: Exploring KubeVirt metrics 2 | 3 | ## Using the PromUI 4 | 5 | Using your browser, head to your node's hostname on port 30090 to find the PromUI. 6 | 7 | ![PromUI landing page](images/promui-01.png) 8 | 9 | Metrics in KubeVirt fall into two types: 10 | 11 | * Metrics for components 12 | * Go internal metrics 13 | * API requests 14 | * Metrics for VMs 15 | * Resident memory in bytes 16 | * Total network traffic in bytes 17 | * Total storage IOPs 18 | * Total storage times in ms 19 | * Total storage traffic in bytes 20 | * CPU usage in seconds 21 | 22 | Note that metrics are a relatively new feature in KubeVirt and are under development.
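Since many of the metrics listed above are ever-growing counters, PromQL functions are usually applied to turn them into something readable. As a small sketch (assuming the metric names above are exported by your cluster), here are two queries you could paste into the PromUI expression box:

```
# Per-second network traffic per VM, averaged over the last 5 minutes
rate(kubevirt_vm_network_traffic_bytes_total[5m])

# KubeVirt API client requests per second, grouped by HTTP return code
sum by (code) (rate(rest_client_requests_total{job="kubevirt-prometheus-metrics"}[5m]))
```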
23 | 24 | Let's query PromUI for the resident memory that our VMs are currently using. The metric's name is `kubevirt_vm_memory_resident_bytes`: 25 | 26 | ![PromUI VMs memory](images/promui-02.png) 27 | 28 | We could also use any of the following VM metric names: 29 | 30 | * `kubevirt_vm_network_traffic_bytes_total` 31 | * `kubevirt_vm_storage_iops_total` 32 | * `kubevirt_vm_storage_times_ms_total` 33 | * `kubevirt_vm_storage_traffic_bytes_total` 34 | 35 | **NOTE**: it is useful to have a look at [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/) and its functions; some are helpful for shaping the metrics into more useful data. 36 | 37 | Now let's check the API metrics using `rest_client_requests_total{job="kubevirt-prometheus-metrics"}`: 38 | 39 | ![PromUI API metrics](images/promui-03.png) 40 | 41 | Again, these will tend to be ever-growing numbers, thus PromQL functions are needed. Note the information that Prometheus gives us, like code, Pod, method... we could make use of those to create monitoring dashboards and alerting. 42 | 43 | ## Using Grafana 44 | 45 | In the student materials (*$HOME/student-materials*) you'll find a [Grafana](https://grafana.org) dashboard we've prepared for you. Let's import it and see what it shows; perform the following steps on your own workstation/laptop: 46 | 47 | ```shell 48 | ## LAB009 49 | 50 | wget https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/master/administrator/ansible/roles/kubevirt/files/student-materials/KubeVirt-grafana-dashboard.json 51 | ``` 52 | 53 | Now, point your browser to your node's hostname on port 30300. You will be greeted by Grafana's login page.
Use the following authentication details: 54 | 55 | * User: admin 56 | * Password: kubevirtlab123 57 | 58 | ![Grafana login page](images/grafana-01.png) 59 | 60 | To import the dashboard, click on the *plus* symbol on the left side of the page and then *Import*: 61 | 62 | ![Grafana dashboard import page](images/grafana-02.png) 63 | 64 | Click on the *Upload .json File* button, find the dashboard file and import it! 65 | 66 | The dashboard will open right away: 67 | 68 | ![Grafana KubeVirt dashboard](images/grafana-03.png) 69 | 70 | The dashboard counts the different return codes from the API client requests we saw earlier on PromUI and graphs CPU, memory and IOPs for all the VMs running on the cluster. 71 | 72 | This concludes this lab! 73 | 74 | [<< Previous: Deploy a multi-homed VM using Multus](../lab008/lab008.md) | [README](../../README.md) | [Next: Conclusion >>](../lab010/lab010.md) 75 | -------------------------------------------------------------------------------- /labs/lab010/lab010.md: -------------------------------------------------------------------------------- 1 | # Lab 10: Conclusion 2 | 3 | Now that you have completed the lab, let's review what was covered here: 4 | 5 | * Deploy Kubevirt using Kubevirt-Operator 6 | * Deploy CDI using CDI-Operator 7 | * Deploy the Kubevirt web UI 8 | * Deploy a simple Virtual Machine on Kubevirt 9 | * Access the Virtual Machine 10 | * Deploy a VM using persistent storage, with a DataVolume importing an image from a URL thanks to CDI 11 | * Explore the VMs and their attributes through the KubeVirt web UI 12 | * Deploy and use VMs through the Kubevirt web UI 13 | * Create multi-homed VMs using Multus and OVS 14 | * Explore KubeVirt metrics from PromUI and Grafana 15 | 16 | Thanks for joining us today and don't forget to submit that PR if you found any issues!
:+1: 17 | 18 | [<< Previous: Exploring Kubevirt metrics](../lab009/lab009.md) | [Back to README](../../README.md) 19 | -------------------------------------------------------------------------------- /labs/slides/OpenSouthCode-2019.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/kubevirt-tutorial/a4208f62d36374ea1211c72b0cffcce884a5b6ae/labs/slides/OpenSouthCode-2019.pdf --------------------------------------------------------------------------------