├── .gitignore ├── docs ├── assets │ ├── check.png │ ├── pause.png │ ├── play.png │ ├── actions.png │ ├── ellipsis.png │ ├── favicon.ico │ ├── KubeVirt_icon.png │ ├── architecture.png │ ├── book_pages_bg.jpg │ ├── favicon32x32.png │ ├── architecture-simple.png │ ├── kubevirt-pre-256x256.png │ ├── virtio_custom_install_0.png │ ├── virtio_custom_install_1.png │ ├── virtio_custom_install_2.png │ ├── virtio_custom_install_3.png │ ├── virtio_custom_install_4.png │ ├── virtio_custom_install_5.png │ ├── virtio_driver_install_0.png │ ├── virtio_driver_install_1.png │ ├── virtio_driver_install_2.png │ ├── virtio_driver_install_3.png │ └── asciibinder-logo-horizontal.png ├── compute │ ├── live-migration.png │ ├── decentralized-live-migration.png │ ├── OWNERS │ ├── .pages │ ├── hugepages.md │ ├── client_passthrough.md │ ├── vsock.md │ ├── resources_requests_and_limits.md │ ├── memory_hotplug.md │ ├── run_strategies.md │ ├── persistent_tpm_and_uefi_state.md │ ├── memory_dump.md │ ├── mediated_devices_configuration.md │ ├── node_assignment.md │ ├── cpu_hotplug.md │ └── windows_virtio_drivers.md ├── debug_virt_stack │ ├── kubevirt-demo-debug-qemu-strace.png │ ├── .pages │ ├── OWNERS │ ├── virsh-commands.md │ └── logging.md ├── network │ ├── OWNERS │ ├── .pages │ ├── networkpolicy.md │ ├── dns.md │ ├── net_binding_plugins │ │ └── slirp.md │ ├── istio_service_mesh.md │ └── service_objects.md ├── storage │ ├── OWNERS │ └── .pages ├── cluster_admin │ ├── OWNERS │ ├── .pages │ ├── gitops.md │ ├── operations_on_Arm64.md │ ├── annotations_and_labels.md │ ├── device_status_on_Arm64.md │ ├── activating_feature_gates.md │ ├── virtual_machines_on_Arm64.md │ ├── customize_components.md │ ├── ksm.md │ ├── api_validation.md │ ├── feature_gate_status_on_Arm64.md │ ├── unresponsive_nodes.md │ ├── tekton_tasks.md │ ├── authorization.md │ ├── scheduler.md │ ├── migration_policies.md │ └── updating_and_deletion.md ├── .pages ├── quickstarts.md ├── user_workloads │ ├── basic_use.md │ ├── .pages │ ├── legacy_windows.md │ ├── virtctl_client_tool.md │ ├── deploy_common_instancetypes.md │ ├── vm_rollout_strategies.md │ ├── virtual_machine_instances.md │ ├── lifecycle.md │ ├── creating_it_pref.md │ ├── guest_agent_information.md │ ├── boot_from_external_source.md │ ├── component_monitoring.md │ └── guest_operating_system_information.md ├── stylesheets │ └── extra.css ├── index.md ├── contributing.md └── _redirects ├── _config └── src │ └── Gemfile ├── CONTRIBUTING.md ├── OWNERS ├── netlify.toml ├── update_changelog.sh ├── Rakefile ├── OWNERS_ALIASES └── README.md /.gitignore: -------------------------------------------------------------------------------- 1 | # Site is autogenerated 2 | site/ 3 | ./Dockerfile 4 | ./Gemfile.lock 5 | -------------------------------------------------------------------------------- /docs/assets/check.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/check.png -------------------------------------------------------------------------------- /docs/assets/pause.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/pause.png -------------------------------------------------------------------------------- /docs/assets/play.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/play.png 
-------------------------------------------------------------------------------- /docs/assets/actions.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/actions.png -------------------------------------------------------------------------------- /docs/assets/ellipsis.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/ellipsis.png -------------------------------------------------------------------------------- /docs/assets/favicon.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/favicon.ico -------------------------------------------------------------------------------- /docs/assets/KubeVirt_icon.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/KubeVirt_icon.png -------------------------------------------------------------------------------- /docs/assets/architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/architecture.png -------------------------------------------------------------------------------- /docs/assets/book_pages_bg.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/book_pages_bg.jpg -------------------------------------------------------------------------------- /docs/assets/favicon32x32.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/favicon32x32.png -------------------------------------------------------------------------------- /docs/compute/live-migration.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/compute/live-migration.png -------------------------------------------------------------------------------- /docs/assets/architecture-simple.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/architecture-simple.png -------------------------------------------------------------------------------- /docs/assets/kubevirt-pre-256x256.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/kubevirt-pre-256x256.png -------------------------------------------------------------------------------- /docs/assets/virtio_custom_install_0.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/virtio_custom_install_0.png -------------------------------------------------------------------------------- /docs/assets/virtio_custom_install_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/virtio_custom_install_1.png -------------------------------------------------------------------------------- /docs/assets/virtio_custom_install_2.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/virtio_custom_install_2.png -------------------------------------------------------------------------------- /docs/assets/virtio_custom_install_3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/virtio_custom_install_3.png -------------------------------------------------------------------------------- /docs/assets/virtio_custom_install_4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/virtio_custom_install_4.png -------------------------------------------------------------------------------- /docs/assets/virtio_custom_install_5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/virtio_custom_install_5.png -------------------------------------------------------------------------------- /docs/assets/virtio_driver_install_0.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/virtio_driver_install_0.png -------------------------------------------------------------------------------- /docs/assets/virtio_driver_install_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/virtio_driver_install_1.png -------------------------------------------------------------------------------- /docs/assets/virtio_driver_install_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/virtio_driver_install_2.png -------------------------------------------------------------------------------- /docs/assets/virtio_driver_install_3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/virtio_driver_install_3.png -------------------------------------------------------------------------------- /docs/assets/asciibinder-logo-horizontal.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/assets/asciibinder-logo-horizontal.png -------------------------------------------------------------------------------- /docs/compute/decentralized-live-migration.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/compute/decentralized-live-migration.png -------------------------------------------------------------------------------- /docs/debug_virt_stack/kubevirt-demo-debug-qemu-strace.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kubevirt/user-guide/HEAD/docs/debug_virt_stack/kubevirt-demo-debug-qemu-strace.png -------------------------------------------------------------------------------- /docs/debug_virt_stack/.pages: -------------------------------------------------------------------------------- 1 | nav: 2 | - debug.md 3 | - logging.md 4 | - privileged-node-debugging.md 5 | - 
virsh-commands.md 6 | - launch-qemu-strace.md 7 | - launch-qemu-gdb.md 8 | -------------------------------------------------------------------------------- /docs/compute/OWNERS: -------------------------------------------------------------------------------- 1 | filters: 2 | ".*": 3 | reviewers: 4 | - sig-compute-reviewers 5 | approvers: 6 | - sig-compute-approvers 7 | emeritus_approvers: 8 | - emeritus_approvers 9 | -------------------------------------------------------------------------------- /docs/network/OWNERS: -------------------------------------------------------------------------------- 1 | filters: 2 | ".*": 3 | reviewers: 4 | - sig-network-reviewers 5 | approvers: 6 | - sig-network-approvers 7 | emeritus_approvers: 8 | - emeritus_approvers 9 | -------------------------------------------------------------------------------- /docs/storage/OWNERS: -------------------------------------------------------------------------------- 1 | filters: 2 | ".*": 3 | reviewers: 4 | - sig-storage-reviewers 5 | approvers: 6 | - sig-storage-approvers 7 | emeritus_approvers: 8 | - emeritus_approvers 9 | -------------------------------------------------------------------------------- /docs/cluster_admin/OWNERS: -------------------------------------------------------------------------------- 1 | filters: 2 | ".*": 3 | reviewers: 4 | - sig-compute-reviewers 5 | approvers: 6 | - sig-compute-approvers 7 | emeritus_approvers: 8 | - emeritus_approvers 9 | -------------------------------------------------------------------------------- /docs/debug_virt_stack/OWNERS: -------------------------------------------------------------------------------- 1 | filters: 2 | ".*": 3 | reviewers: 4 | - sig-compute-reviewers 5 | approvers: 6 | - sig-compute-approvers 7 | emeritus_approvers: 8 | - emeritus_approvers 9 | -------------------------------------------------------------------------------- /docs/network/.pages: -------------------------------------------------------------------------------- 1 | nav: 2 | - dns.md 3 | - hotplug_interfaces.md 4 | - interfaces_and_networks.md 5 | - istio_service_mesh.md 6 | - network_binding_plugins.md 7 | - networkpolicy.md 8 | - service_objects.md 9 | -------------------------------------------------------------------------------- /docs/storage/.pages: -------------------------------------------------------------------------------- 1 | nav: 2 | - clone_api.md 3 | - containerized_data_importer.md 4 | - disks_and_volumes.md 5 | - export_api.md 6 | - guestfs.md 7 | - hotplug_volumes.md 8 | - snapshot_restore_api.md 9 | - volume_migration.md 10 | -------------------------------------------------------------------------------- /_config/src/Gemfile: -------------------------------------------------------------------------------- 1 | Encoding.default_external = Encoding::UTF_8 2 | Encoding.default_internal = Encoding::UTF_8 3 | 4 | source "https://rubygems.org" 5 | 6 | gem "rake" 7 | gem "html-proofer" 8 | gem "premonition", "~> 2.0.1" 9 | gem "unicode_utils" 10 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing guidelines 2 | 3 | Please see the [contributing guide](https://kubevirt.io/user-guide/contributing) for information on how and where to contribute to the project. 4 | 5 | The contents of this repository is published to [kubevirt.io/user-guide](https://kubevirt.io/user-guide). 
6 | -------------------------------------------------------------------------------- /docs/.pages: -------------------------------------------------------------------------------- 1 | nav: 2 | - Welcome: index.md 3 | - architecture.md 4 | - Quickstarts: quickstarts.md 5 | - Cluster Administration: cluster_admin 6 | - User Workloads: user_workloads 7 | - Compute: compute 8 | - Network: network 9 | - Storage: storage 10 | - Release Notes: release_notes.md 11 | - contributing.md 12 | - Virtualization Debugging: debug_virt_stack 13 | -------------------------------------------------------------------------------- /OWNERS: -------------------------------------------------------------------------------- 1 | filters: 2 | ".*": 3 | reviewers: 4 | - reviewers 5 | approvers: 6 | - approvers 7 | emeritus_approvers: 8 | - ashleyschuett 9 | - codificat 10 | - hroyrh 11 | - danielBelenky 12 | - mazzystr 13 | - AlonaKaplan 14 | - cwilkers 15 | 16 | "^docs/.*": 17 | labels: 18 | - kind/documentation 19 | "^site/.*": 20 | labels: 21 | - kind/website 22 | -------------------------------------------------------------------------------- /docs/compute/.pages: -------------------------------------------------------------------------------- 1 | nav: 2 | - client_passthrough.md 3 | - cpu_hotplug.md 4 | - dedicated_cpu_resources.md 5 | - host-devices.md 6 | - hugepages.md 7 | - live_migration.md 8 | - decentralized_live_migration.md 9 | - mediated_devices_configuration.md 10 | - memory_hotplug.md 11 | - node_assignment.md 12 | - node_overcommit.md 13 | - numa.md 14 | - persistent_tpm_and_uefi_state.md 15 | - resources_requests_and_limits.md 16 | - run_strategies.md 17 | - virtual_hardware.md 18 | - memory_dump.md 19 | - vsock.md 20 | -------------------------------------------------------------------------------- /docs/quickstarts.md: -------------------------------------------------------------------------------- 1 | --- 2 | hide: 3 | - navigation 4 | - toc 5 | --- 6 | 7 | 8 | ## Quickstart Guides 9 | 10 | Killercoda provides an interactive environment for exploring KubeVirt scenarios: 11 | 12 | - [KubeVirt on killercoda](https://killercoda.com/kubevirt) 13 | 14 | Guides for deploying KubeVirt with different Kubernetes tools: 15 | 16 | - [KubeVirt on minikube](https://kubevirt.io/quickstart_minikube/) 17 | 18 | - [KubeVirt on kind](https://kubevirt.io/quickstart_kind/) 19 | 20 | - [KubeVirt on cloud providers](https://kubevirt.io/quickstart_cloud/) 21 | -------------------------------------------------------------------------------- /netlify.toml: -------------------------------------------------------------------------------- 1 | [build] 2 | publish = "./site" 3 | command = """ 4 | source /opt/buildhome/python3.8/bin/activate; 5 | pip3 install mkdocs mkdocs-awesome-pages-plugin mkdocs-material mkdocs-redirects; 6 | sed -i 's|site_url: https://kubevirt.io/docs|site_url: https://kubevirt.io/|' /opt/build/repo/mkdocs.yml; 7 | sed -i 's/docs_dir: docs/docs_dir:/' /opt/build/repo/mkdocs.yml; 8 | echo '*** BEGIN /opt/build/repo/mkdocs.yml ***'; 9 | cat /opt/build/repo/mkdocs.yml; 10 | echo '*** END /opt/build/repo/mkdocs.yml ***'; 11 | mkdocs build -f /opt/build/repo/mkdocs.yml -d /opt/build/repo/site 12 | """ 13 | -------------------------------------------------------------------------------- /docs/cluster_admin/.pages: -------------------------------------------------------------------------------- 1 | nav: 2 | - installation.md 3 | - updating_and_deletion.md 4 | - activating_feature_gates.md 5 | - 
annotations_and_labels.md 6 | - api_validation.md 7 | - authorization.md 8 | - confidential_computing.md 9 | - customize_components.md 10 | - gitops.md 11 | - ksm.md 12 | - migration_policies.md 13 | - node_maintenance.md 14 | - scheduler.md 15 | - tekton_tasks.md 16 | - unresponsive_nodes.md 17 | - ARM cluster: 18 | - device_status_on_Arm64.md 19 | - feature_gate_status_on_Arm64.md 20 | - operations_on_Arm64.md 21 | - virtual_machines_on_Arm64.md 22 | -------------------------------------------------------------------------------- /docs/user_workloads/basic_use.md: -------------------------------------------------------------------------------- 1 | # Basic use 2 | 3 | Using KubeVirt should be fairly natural if you are used to working with 4 | Kubernetes. 5 | 6 | The primary way of using KubeVirt is by working with the KubeVirt kinds 7 | in the Kubernetes API: 8 | 9 | $ kubectl create -f vmi.yaml 10 | $ kubectl wait --for=condition=Ready vmis/my-vmi 11 | $ kubectl get vmis 12 | $ kubectl delete vmis testvmi 13 | 14 | The following pages describe how to use and discover the API, manage, 15 | and access virtual machines. 16 | 17 | ## User Interface 18 | 19 | KubeVirt does not come with a UI, it is only extending the Kubernetes 20 | API with virtualization functionality. 21 | -------------------------------------------------------------------------------- /docs/user_workloads/.pages: -------------------------------------------------------------------------------- 1 | nav: 2 | - lifecycle.md 3 | - basic_use.md 4 | - creating_vms.md 5 | - creating_it_pref.md 6 | - virtctl_client_tool.md 7 | - accessing_virtual_machines.md 8 | - boot_from_external_source.md 9 | - startup_scripts.md 10 | - windows_virtio_drivers.md 11 | - legacy_windows.md 12 | - Monitoring: 13 | - component_monitoring.md 14 | - guest_agent_information.md 15 | - guest_operating_system_information.md 16 | - liveness_and_readiness_probes.md 17 | - Workloads: 18 | - instancetypes.md 19 | - deploy_common_instancetypes.md 20 | - hook-sidecar.md 21 | - presets.md 22 | - templates.md 23 | - pool.md 24 | - replicaset.md 25 | - virtual_machine_instances.md 26 | - vm_rollout_strategies.md 27 | -------------------------------------------------------------------------------- /docs/compute/hugepages.md: -------------------------------------------------------------------------------- 1 | # Hugepages support 2 | 3 | For hugepages support you need at least Kubernetes version `1.9`. 4 | 5 | ## Enable feature-gate 6 | 7 | To enable hugepages on Kubernetes, check the [official 8 | documentation](https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/). 9 | 10 | To enable hugepages on OKD, check the [official 11 | documentation](https://docs.openshift.org/3.9/scaling_performance/managing_hugepages.html#huge-pages-prerequisites). 12 | 13 | ## Pre-allocate hugepages on a node 14 | 15 | To pre-allocate hugepages on boot time, you will need to specify 16 | hugepages under kernel boot parameters `hugepagesz=2M hugepages=64` and 17 | restart your machine. 18 | 19 | You can find more about hugepages under [official 20 | documentation](https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt). 
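## Consuming hugepages from a VMI

Once hugepages are pre-allocated on the node, a VirtualMachineInstance can request hugepage-backed memory through `spec.domain.memory.hugepages`. The following is a minimal sketch (not part of the original page) that assumes 2Mi hugepages are available on the node and reuses the demo container disk shown elsewhere in this guide:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: hugepages-vmi
spec:
  domain:
    memory:
      hugepages:
        pageSize: "2Mi"   # must match a page size pre-allocated on the node
    resources:
      requests:
        memory: 1Gi       # guest memory to be backed by hugepages
    devices:
      disks:
        - name: containerdisk
          disk:
            bus: virtio
  volumes:
    - name: containerdisk
      containerDisk:
        image: kubevirt/fedora-cloud-container-disk-demo:latest
```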
21 | -------------------------------------------------------------------------------- /docs/stylesheets/extra.css: -------------------------------------------------------------------------------- 1 | :root, [data-md-color-primary="teal"] { 2 | --md-primary-fg-color: #0db2b6; 3 | } 4 | 5 | :root, [data-md-color-accent="teal"] { 6 | --md-accent-fg-color: #006166; 7 | } 8 | 9 | .md-nav a, .md-typeset a { 10 | filter: brightness(80%); 11 | } 12 | 13 | .md-tabs__link { 14 | opacity: 0.85; 15 | font-weight: bold; 16 | } 17 | 18 | .md-tabs__link--active { 19 | opacity: 1.0; 20 | text-underline-offset: 8px; 21 | text-decoration-line: underline; 22 | text-decoration-thickness: 2px; 23 | } 24 | 25 | @media screen and (min-width: 76.25em) { 26 | .md-nav label, .md-nav__item--section > .md-nav__link[for] { 27 | font-size: 115%; 28 | color: #00797f; 29 | } 30 | } 31 | 32 | .md-typeset code { 33 | border: 1px solid #e1e4e5; 34 | } 35 | 36 | .md-typeset__table { 37 | min-width: 100%; 38 | } 39 | -------------------------------------------------------------------------------- /docs/cluster_admin/gitops.md: -------------------------------------------------------------------------------- 1 | # Managing KubeVirt with GitOps 2 | 3 | The GitOps way uses Git repositories as a single source of truth to deliver 4 | infrastructure as code. Automation is employed to keep the desired and the live 5 | state of clusters in sync at all times. This means any change to a repository 6 | is automatically applied to one or more clusters, while changes to a cluster are 7 | automatically reverted to the state described in the single source of truth. 8 | 9 | With GitOps, separating testing and production environments, improving 10 | the availability of applications, and working with multi-cluster environments 11 | all become considerably easier. 12 | 13 | ## Demo repository 14 | 15 | A demo with a detailed explanation of how to manage KubeVirt with GitOps can be 16 | found [here](https://github.com/0xFelix/gitops-demo). 17 | 18 | The demo uses [Open Cluster Management](https://open-cluster-management.io/) 19 | and [ArgoCD](https://argoproj.github.io/cd/) to deploy KubeVirt and virtual 20 | machines across multiple clusters. -------------------------------------------------------------------------------- /docs/cluster_admin/operations_on_Arm64.md: -------------------------------------------------------------------------------- 1 | # Arm64 Operations 2 | 3 | This page summarizes all operations that are not supported on Arm64. 4 | 5 | ## Hotplug Network Interfaces 6 | 7 | Hotplug Network Interfaces are not supported on Arm64, because the image ghcr.io/k8snetworkplumbingwg/multus-cni:snapshot-thick does not support the Arm64 platform. For more information, please refer to https://github.com/k8snetworkplumbingwg/multus-cni/pull/1027. 8 | 9 | ## Hugepages support 10 | 11 | The hugepages feature is not supported on Arm64. The hugepage mechanism differs between X86_64 and Arm64, and KubeVirt is currently only verified on systems with a 4k page size. 12 | 13 | ## Export API 14 | 15 | The Export API is partially supported on the Arm64 platform. As CDI is not supported yet, the export of DataVolumes and MemoryDumps is not supported on Arm64. 16 | 17 | ## Virtual machine memory dump 18 | 19 | As explained above, MemoryDump requires CDI and is not yet supported on Arm64. 20 | 21 | ## Mediated devices and virtual GPUs 22 | 23 | This is not verified on the Arm64 platform.
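As a practical aside (not part of the original page), you can check which nodes in a cluster are actually Arm64 with a standard `kubectl` query before relying on the support matrix above:

```shell
kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture
```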
24 | -------------------------------------------------------------------------------- /update_changelog.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash -x 2 | set -e 3 | 4 | [[ -e kubevirt ]] || git clone https://github.com/kubevirt/kubevirt.git kubevirt 5 | git -C kubevirt checkout main 6 | git -C kubevirt fetch --tags 7 | 8 | releases() { 9 | # I'm sure some one can do better here 10 | if [ $# -eq 1 ]; then 11 | git -C kubevirt tag | sort -rV | egrep -v "alpha|rc|cnv" | grep "$1" 12 | else 13 | git -C kubevirt tag | sort -rV | egrep -v "alpha|rc|cnv" | head -1 14 | fi 15 | } 16 | 17 | features_for() { 18 | git -C kubevirt show $1 | grep Date: | head -n1 | sed -e "s/Date:\s\+/Released on: /" 19 | git -C kubevirt show $1 | sed -n "/changes$/,/Contributors/ p" | sed -e '1d;2d;$d' 20 | } 21 | 22 | gen_changelog() { 23 | IFS=$'\n' 24 | sed -i -e "s/# KubeVirt release notes//" ./docs/release_notes.md 25 | REL_NOTES=$(for REL in `releases $1`; do 26 | echo -e "## $REL\n" ; 27 | features_for $REL 28 | done) 29 | printf '%s %s\n' "$REL_NOTES `cat docs/release_notes.md`" > ./docs/release_notes.md 30 | sed -i "1 i\# KubeVirt release notes\n" ./docs/release_notes.md 31 | sed -i 's/[ \t]*$//' docs/release_notes.md 32 | } 33 | 34 | gen_changelog $1 35 | -------------------------------------------------------------------------------- /docs/user_workloads/legacy_windows.md: -------------------------------------------------------------------------------- 1 | # Running legacy Windows versions 2 | 3 | Legacy Windows versions like Windows XP or Windows Server 2003 are unable to 4 | boot on KubeVirt out of the box. This is due to the combination of the Q35 5 | machine-type and a 64-Bit PCI hole reported to the guest, 6 | that is not supported by these operating systems. 7 | 8 | To run legacy Windows versions on KubeVirt, reporting of the 64-Bit PCI hole 9 | needs to be disabled. This can be achieved by adding the 10 | `kubevirt.io/disablePCIHole64` annotation with a value of `true` to a 11 | `VirtualMachineInstance`'s annotations. 12 | 13 | ## Example 14 | 15 | With this `VirtualMachine` definition a legacy Windows guest is able to boot: 16 | 17 | ```yaml 18 | apiVersion: kubevirt.io/v1 19 | kind: VirtualMachine 20 | metadata: 21 | name: xp 22 | spec: 23 | runStrategy: Always 24 | template: 25 | metadata: 26 | annotations: 27 | kubevirt.io/disablePCIHole64: "true" 28 | spec: 29 | domain: 30 | devices: {} 31 | memory: 32 | guest: 512Mi 33 | terminationGracePeriodSeconds: 180 34 | volumes: 35 | - containerDisk: 36 | image: my/windowsxp:containerdisk 37 | name: xp-containerdisk-0 38 | ``` 39 | -------------------------------------------------------------------------------- /docs/cluster_admin/annotations_and_labels.md: -------------------------------------------------------------------------------- 1 | # Annotations and labels 2 | 3 | KubeVirt builds on and exposes a number of labels and annotations that 4 | either are used for internal implementation needs or expose useful 5 | information to API users. This page documents the labels and annotations 6 | that may be useful for regular API consumers. This page intentionally 7 | does *not* list labels and annotations that are merely part of internal 8 | implementation. 9 | 10 | **Note:** Annotations and labels that are not specific to KubeVirt are 11 | also documented 12 | [here](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/). 
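All of the labels described below can be consumed with ordinary Kubernetes label selectors. For example, listing all virt-launcher pods across namespaces (a sketch using the `kubevirt.io=virt-launcher` label value documented in the next section):

```shell
kubectl get pods -A -l kubevirt.io=virt-launcher
```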
13 | 14 | ## kubevirt.io 15 | 16 | Example: `kubevirt.io=virt-launcher` 17 | 18 | Used on: Pod 19 | 20 | This label marks resources that belong to KubeVirt. An optional value 21 | may indicate which specific KubeVirt component a resource belongs to. 22 | This label may be used to list all resources that belong to KubeVirt, 23 | for example, to uninstall it from a cluster. 24 | 25 | ### kubevirt.io/schedulable 26 | 27 | Example: `kubevirt.io/schedulable=true` 28 | 29 | Used on: Node 30 | 31 | This label declares whether a particular node is available for 32 | scheduling virtual machine instances on it. 33 | 34 | ### kubevirt.io/heartbeat 35 | 36 | Example: `kubevirt.io/heartbeat=2018-07-03T20:07:25Z` 37 | 38 | Used on: Node 39 | 40 | This annotation is regularly updated by virt-handler to help determine 41 | if a particular node is alive and hence should be available for new 42 | virtual machine instance scheduling. 43 | -------------------------------------------------------------------------------- /docs/cluster_admin/device_status_on_Arm64.md: -------------------------------------------------------------------------------- 1 | # Device Status on Arm64 2 | 3 | This page is based on https://github.com/kubevirt/kubevirt/issues/8916 4 | 5 | Devices | Description | Status on Arm64 6 | -- | -- | -- 7 | DisableHotplug |   | supported 8 | Disks | sata/ virtio bus | support virtio bus 9 | Watchdog | i6300esb | not supported 10 | UseVirtioTransitional | virtio-transitional | supported 11 | Interfaces | e1000/ virtio-net-device | support virtio-net-device 12 | Inputs | tablet virtio/usb bus | supported 13 | AutoattachPodInterface | connect to /net/tun (devices.kubevirt.io/tun) | supported 14 | AutoattachGraphicsDevice | create a virtio-gpu device / vga device | support virtio-gpu 15 | AutoattachMemBalloon | virtio-balloon-pci-non-transitional | supported 16 | AutoattachInputDevice | auto add tablet | supported 17 | Rng | virtio-rng-pci-non-transitional host:/dev/urandom | supported 18 | BlockMultiQueue | "driver":"virtio-blk-pci-non-transitional","num-queues":$cpu_number | supported 19 | NetworkInterfaceMultiQueue | -netdev tap,fds=21:23:24:25,vhost=on,vhostfds=26:27:28:29,id=hostua-default#fd number equals to queue number | supported 20 | GPUs |   | not verified 21 | Filesystems | virtiofs, vhost-user-fs-pci, need to enable featuregate: ExperimentalVirtiofsSupport | supported 22 | ClientPassthrough | https://www.linaro.org/blog/kvm-pciemsi-passthrough-armarm64/on x86_64, iommu need to be enabled | not verified 23 | Sound | ich9/ ac97 | not supported 24 | TPM | tpm-tis-devicehttps://qemu.readthedocs.io/en/latest/specs/tpm.html | supported 25 | Sriov | vfio-pci | not verified 26 | -------------------------------------------------------------------------------- /docs/cluster_admin/activating_feature_gates.md: -------------------------------------------------------------------------------- 1 | # Activating feature gates 2 | 3 | KubeVirt has a set of features that are not mature enough to be enabled by 4 | default. As such, they are protected by a Kubernetes concept called 5 | [feature gates](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/). 
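Before changing anything, you can inspect which feature gates are currently enabled in the KubeVirt CR. This sketch assumes KubeVirt is installed in the `kubevirt` namespace, as in the examples below:

```bash
kubectl get kubevirt kubevirt -n kubevirt \
  -o jsonpath='{.spec.configuration.developerConfiguration.featureGates}'
```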
6 | 7 | ## How to activate a feature gate 8 | You can activate a specific feature gate directly in KubeVirt's CR, by 9 | provisioning the following yaml, which uses the `LiveMigration` feature gate 10 | as an example: 11 | ```bash 12 | cat << END > enable-feature-gate.yaml 13 | --- 14 | apiVersion: kubevirt.io/v1 15 | kind: KubeVirt 16 | metadata: 17 | name: kubevirt 18 | namespace: kubevirt 19 | spec: 20 | configuration: 21 | developerConfiguration: 22 | featureGates: 23 | - LiveMigration 24 | END 25 | 26 | kubectl apply -f enable-feature-gate.yaml 27 | ``` 28 | 29 | Alternatively, the existing kubevirt CR can be altered: 30 | ```bash 31 | kubectl edit kubevirt kubevirt -n kubevirt 32 | ``` 33 | 34 | ```yaml 35 | ... 36 | spec: 37 | configuration: 38 | developerConfiguration: 39 | featureGates: 40 | - DataVolumes 41 | - LiveMigration 42 | ``` 43 | 44 | **Note:** the name of the feature gates is case sensitive. 45 | 46 | The snippet above assumes KubeVirt is installed in the `kubevirt` namespace. 47 | Change the namespace to suite your installation. 48 | 49 | ## List of feature gates 50 | The list of feature gates (which evolve in time) can be checked directly from 51 | the [source code](https://github.com/kubevirt/kubevirt/blob/main/pkg/virt-config/featuregate/active.go) (use the const string values in the yaml configuration). 52 | 53 | -------------------------------------------------------------------------------- /docs/user_workloads/virtctl_client_tool.md: -------------------------------------------------------------------------------- 1 | # Download and Install the virtctl Command Line Interface 2 | 3 | ## Download the `virtctl` client tool 4 | 5 | Basic VirtualMachineInstance operations can be performed with the stock 6 | `kubectl` utility. However, the `virtctl` binary utility is required to 7 | use advanced features such as: 8 | 9 | - Serial and graphical console access 10 | 11 | It also provides convenience commands for: 12 | 13 | - Starting and stopping VirtualMachineInstances 14 | 15 | - Live migrating VirtualMachineInstances and canceling live migrations 16 | 17 | - Uploading virtual machine disk images 18 | 19 | There are two ways to get it: 20 | 21 | - the most recent version of the tool can be retrieved from the 22 | [official release 23 | page](https://github.com/kubevirt/kubevirt/releases) 24 | 25 | - it can be installed as a `kubectl` plugin using 26 | [krew](https://krew.dev/) 27 | 28 | Example: 29 | 30 | ``` 31 | export VERSION=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt) 32 | wget https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-amd64 33 | ``` 34 | 35 | ## Install `virtctl` with `krew` 36 | 37 | It is required to [install `krew` plugin 38 | manager](https://github.com/kubernetes-sigs/krew/#installation) 39 | beforehand. If `krew` is installed, `virtctl` can be installed via 40 | `krew`: 41 | 42 | $ kubectl krew install virt 43 | 44 | Then `virtctl` can be used as a kubectl plugin. For a list of available 45 | commands run: 46 | 47 | $ kubectl virt help 48 | 49 | Every occurrence throughout this guide of 50 | 51 | $ ./virtctl ... 52 | 53 | should then be read as 54 | 55 | $ kubectl virt ... 
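For reference, a few typical invocations when `virtctl` is installed as a plugin (using a hypothetical VirtualMachine named `testvm`):

```
# Start and stop a VirtualMachine
kubectl virt start testvm
kubectl virt stop testvm

# Open the serial console or a VNC session
kubectl virt console testvm
kubectl virt vnc testvm
```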
56 | -------------------------------------------------------------------------------- /docs/cluster_admin/virtual_machines_on_Arm64.md: -------------------------------------------------------------------------------- 1 | # Virtual Machines on Arm64 2 | 3 | This page summarizes all unsupported Virtual Machine configurations and the different default setups on the Arm64 platform. 4 | 5 | ## Virtual hardware 6 | 7 | ### Machine Type 8 | 9 | Currently, we only support one machine type, `virt`, which is set by default. 10 | 11 | ### BIOS/UEFI 12 | 13 | On the Arm64 platform, we only support UEFI boot, which is set by default. UEFI secure boot is not supported. 14 | 15 | ### CPU 16 | 17 | #### Node-labeller 18 | 19 | Currently, Node-labeller is partially supported on the Arm64 platform. It does not yet support parsing virsh_domcapabilities.xml and capabilities.xml, and extracting related information such as CPU features. 20 | 21 | #### Model 22 | 23 | `host-passthrough` is the only model supported on Arm64. This CPU model is set by default on the Arm64 platform. 24 | 25 | ### Clock 26 | 27 | `kvm` and `hyperv` timers are not supported on the Arm64 platform. 28 | 29 | ### Video and Graphics Device 30 | 31 | We do not support VGA devices but use virtio-gpu by default. 32 | 33 | ### Hugepages 34 | 35 | Hugepages are not supported on the Arm64 platform. 36 | 37 | ### Resources Requests and Limits 38 | 39 | CPU pinning is supported on the Arm64 platform. 40 | 41 | ## NUMA 42 | 43 | As Hugepages are a precondition of the NUMA feature, and Hugepages are not enabled on the Arm64 platform, the NUMA feature does not work on Arm64. 44 | 45 | ## Disks and Volumes 46 | 47 | Arm64 only supports the virtio and scsi disk bus types. 48 | 49 | ## Interface and Networks 50 | ### macvlan 51 | 52 | We do not support the `macvlan` network because the project https://github.com/kubevirt/macvtap-cni does not support Arm64. 53 | 54 | ### SRIOV 55 | 56 | This class of devices is not verified on the Arm64 platform. 57 | 58 | ## Liveness and Readiness Probes 59 | 60 | The `Watchdog` device is not supported on the Arm64 platform. 61 | -------------------------------------------------------------------------------- /docs/index.md: -------------------------------------------------------------------------------- 1 | --- 2 | hide: 3 | - navigation 4 | --- 5 | 6 | # Welcome 7 | 8 | The KubeVirt User Guide is divided into the following sections: 9 | 10 | * Architecture: Technical and conceptual overview of KubeVirt components 11 | * Quickstarts: A list of resources to help you learn KubeVirt basics 12 | * Cluster Administration: Cluster-level administration concepts and tasks 13 | * User Workloads: Creating, customizing, using, and monitoring virtual machines 14 | * Compute: Resource allocation and optimization for the virtualization layer 15 | * Network: Concepts and tasks for the networking and service layers 16 | * Storage: Concepts and tasks for the storage layer, including importing and exporting.
17 | * Release Notes: The release notes for all KubeVirt releases 18 | * Contributing: How you can contribute to this guide or the KubeVirt project 19 | * Virtualization Debugging: How to debug your KubeVirt cluster and virtual resources 20 | 21 | ## Try it out 22 | 23 | - Kubevirt on Killercoda: 24 | 25 | - Kubevirt on Minikube: 26 | 27 | - Kubevirt on Kind: 28 | 29 | - Kubevirt on cloud providers: 30 | 31 | ## KubeVirt Labs 32 | 33 | - [Use KubeVirt](https://kubevirt.io/labs/kubernetes/lab1.html) 34 | 35 | - [Experiment with Containerized Data Importer (CDI)](https://kubevirt.io/labs/kubernetes/lab2.html) 36 | 37 | - [Experiment with KubeVirt Upgrades](https://kubevirt.io/labs/kubernetes/lab3.html) 38 | 39 | - [Live Migration](https://kubevirt.io/labs/kubernetes/migration.html) 40 | 41 | ## Getting help 42 | 43 | - File a bug: 44 | 45 | - Mailing list: 46 | 47 | - Slack: 48 | 49 | ## Developer 50 | 51 | - Start contributing: [Contributing](./contributing.md) 52 | 53 | - API Reference: 54 | 55 | ## Privacy 56 | 57 | - Check our privacy policy at: 58 | 59 | - We do use Open Source Plan for Rendering Pull 60 | Requests to the documentation repository 61 | -------------------------------------------------------------------------------- /docs/user_workloads/deploy_common_instancetypes.md: -------------------------------------------------------------------------------- 1 | # Deploy common-instancetypes 2 | 3 | The [`kubevirt/common-instancetypes`](https://github.com/kubevirt/common-instancetypes) provide a set of [instancetypes and preferences](../user_workloads/instancetypes.md) to help create KubeVirt [`VirtualMachines`](http://kubevirt.io/api-reference/main/definitions.html#_v1_virtualmachine). 4 | 5 | Beginning with the 1.1 release of KubeVirt, cluster wide resources can be deployed directly through KubeVirt, without another operator. 6 | This allows deployment of a set of default instancetypes and preferences along side KubeVirt. 7 | 8 | With the [`v1.4.0`](https://github.com/kubevirt/kubevirt/releases/tag/v1.4.0) release of KubeVirt, common-instancetypes are now deployed by default. 9 | 10 | ## **FEATURE STATE:** 11 | 12 | * Alpha - [`v1.1.0`](https://github.com/kubevirt/kubevirt/releases/tag/v1.1.0) 13 | * Beta - [`v1.2.0`](https://github.com/kubevirt/kubevirt/releases/tag/v1.2.0) 14 | * GA - [`v1.4.0`](https://github.com/kubevirt/kubevirt/releases/tag/v1.4.0) (Enabled by default) 15 | 16 | ## Control deployment of common-instancetypes 17 | 18 | To explicitly enable or disable the deployment of cluster-wide common-instancetypes through the KubeVirt `virt-operator` use the `spec.configuration.commonInstancetypesDeployment.enabled` configurable. 19 | 20 | ```shell 21 | kubectl patch -n kubevirt kv/kubevirt --type merge -p '{"spec":{"configuration":{"commonInstancetypesDeployment":{"enabled": false}}}}' 22 | ``` 23 | 24 | ## Deploy common-instancetypes manually 25 | 26 | For customization purposes or to install namespaced resources, common-instancetypes can also be deployed by hand. 27 | 28 | To install all resources provided by the [`kubevirt/common-instancetypes`](https://github.com/kubevirt/common-instancetypes) project without further customizations, simply apply with [`kustomize`](https://kustomize.io/) enabled (-k flag): 29 | 30 | ```shell 31 | kubectl apply -k https://github.com/kubevirt/common-instancetypes.git 32 | ``` 33 | 34 | Alternatively, targets for each of the available custom resource types (e.g. namespaced instancetypes) are available. 
35 | 36 | For example, to deploy `VirtualMachineInstancetypes`, run the following command: 37 | 38 | ```shell 39 | kubectl apply -k https://github.com/kubevirt/common-instancetypes.git/VirtualMachineInstancetypes 40 | ``` 41 | -------------------------------------------------------------------------------- /docs/debug_virt_stack/virsh-commands.md: -------------------------------------------------------------------------------- 1 | # Execute virsh commands in virt-launcher pod 2 | 3 | A powerful utility for checking and troubleshooting the VM state is [`virsh`](https://www.libvirt.org/manpages/virsh.html), and it is already installed in the `compute` container of the virt-launcher pod. 4 | 5 | For example, it is possible to run arbitrary QMP commands. 6 | 7 | For a full list of QMP commands, please refer to the [QEMU documentation](https://qemu-project.gitlab.io/qemu/interop/qemu-qmp-ref.html). 8 | 9 | ```console 10 | $ kubectl get po 11 | NAME READY STATUS RESTARTS AGE 12 | virt-launcher-vmi-debug-tools-fk64q 3/3 Running 0 44m 13 | $ kubectl exec -ti virt-launcher-vmi-debug-tools-fk64q -- bash 14 | bash-5.1$ virsh list 15 | Id Name State 16 | ----------------------------------------- 17 | 1 default_vmi-debug-tools running 18 | bash-5.1$ virsh qemu-monitor-command default_vmi-debug-tools query-status --pretty 19 | { 20 | "return": { 21 | "status": "running", 22 | "singlestep": false, 23 | "running": true 24 | }, 25 | "id": "libvirt-439" 26 | } 27 | bash-5.1$ virsh qemu-monitor-command default_vmi-debug-tools query-kvm --pretty 28 | { 29 | "return": { 30 | "enabled": true, 31 | "present": true 32 | }, 33 | "id": "libvirt-438" 34 | } 35 | ``` 36 | 37 | Another useful virsh command is `qemu-monitor-event`. Once invoked, it observes and reports QEMU events. 38 | 39 | The following example shows the events generated for pausing and unpausing the guest. 40 | 41 | ```console 42 | $ kubectl get po 43 | NAME READY STATUS RESTARTS AGE 44 | virt-launcher-vmi-ephemeral-nqcld 3/3 Running 0 57m 45 | $ kubectl exec -ti virt-launcher-vmi-ephemeral-nqcld -- virsh qemu-monitor-event --pretty --loop 46 | ``` 47 | 48 | Then you can, for example, pause and unpause the guest and check the triggered events: 49 | ```console 50 | $ virtctl pause vmi vmi-ephemeral 51 | VMI vmi-ephemeral was scheduled to pause 52 | $ virtctl unpause vmi vmi-ephemeral 53 | VMI vmi-ephemeral was scheduled to unpause 54 | ``` 55 | 56 | From the monitored events: 57 | ```console 58 | $ kubectl exec -ti virt-launcher-vmi-ephemeral-nqcld -- virsh qemu-monitor-event --pretty --loop 59 | event STOP at 1698405797.422823 for domain 'default_vmi-ephemeral': 60 | event RESUME at 1698405823.162458 for domain 'default_vmi-ephemeral': 61 | ``` 62 | -------------------------------------------------------------------------------- /docs/user_workloads/vm_rollout_strategies.md: -------------------------------------------------------------------------------- 1 | # VM Rollout Strategies 2 | 3 | In KubeVirt, the VM rollout strategy defines how changes to a VM object affect a running guest. 4 | In other words, it defines when and how changes to a VM object get propagated to its corresponding VMI object. 5 | 6 | There are currently two rollout strategies: `LiveUpdate` and `Stage`. 7 | Only one can be specified, and the default is `Stage`. 8 | 9 | ## LiveUpdate 10 | 11 | The `LiveUpdate` VM rollout strategy tries to propagate VM object changes to running VMIs as soon as possible.
12 | For example, changing the number of CPU sockets will trigger a [CPU hotplug](../compute/cpu_hotplug.md). 13 | 14 | Enable the `LiveUpdate` VM rollout strategy in the KubeVirt CR: 15 | 16 | ```yaml 17 | apiVersion: kubevirt.io/v1 18 | kind: KubeVirt 19 | spec: 20 | configuration: 21 | vmRolloutStrategy: "LiveUpdate" 22 | ``` 23 | 24 | ## Stage 25 | 26 | The `Stage` VM rollout strategy stages every change made to the VM object until its next reboot. 27 | 28 | ```yaml 29 | apiVersion: kubevirt.io/v1 30 | kind: KubeVirt 31 | spec: 32 | configuration: 33 | vmRolloutStrategy: "Stage" 34 | ``` 35 | 36 | ## RestartRequired condition 37 | 38 | Any change made to a VM object when the rollout strategy is `Stage` will trigger the `RestartRequired` VM condition. 39 | When the rollout strategy is `LiveUpdate`, only non-propagatable changes will trigger the condition. 40 | 41 | Once the `RestartRequired` condition is set on a VM object, no further changes can be propagated, even if the strategy is set to `LiveUpdate`. 42 | Changes will become effective on next reboot, and the condition will be removed. 43 | 44 | ## Limitations 45 | 46 | The current implementation has the following limitations: 47 | 48 | - Once the `RestartRequired` condition is set, the only way to get rid of it is to restart the VM. In the future, we plan on implementing a way to get rid of it by reverting the VM template spec to its last non-RestartRequired state. 49 | - Cluster defaults are excluded from this logic. It means that changing a cluster-wide setting that impacts VM specs will not be live-updated, regardless of the rollout strategy. 50 | - The `RestartRequired` condition comes with a message stating what kind of change triggered the condition (CPU/memory/other). That message pertains only to the first change that triggered the condition. Additional changes that would usually trigger the condition will just get staged and no additional `RestartRequired` condition will be added. 51 | -------------------------------------------------------------------------------- /docs/network/networkpolicy.md: -------------------------------------------------------------------------------- 1 | # NetworkPolicy 2 | 3 | Before creating NetworkPolicy objects, make sure you are using a 4 | networking solution which supports NetworkPolicy. Network isolation is 5 | controlled entirely by NetworkPolicy objects. By default, all vmis in a 6 | namespace are accessible from other vmis and network endpoints. To 7 | isolate one or more vmis in a project, you can create NetworkPolicy 8 | objects in that namespace to indicate the allowed incoming connections. 9 | 10 | > Note: vmis and pods are treated equally by network policies, since 11 | > labels are passed through to the pods which contain the running vmi. 12 | > With other words, labels on vmis can be matched by `spec.podSelector` 13 | > on the policy. 14 | 15 | ## Create NetworkPolicy to Deny All Traffic 16 | 17 | To make a project "deny by default" add a NetworkPolicy object that 18 | matches all vmis but accepts no traffic. 
19 | 20 | ```yaml 21 | kind: NetworkPolicy 22 | apiVersion: networking.k8s.io/v1 23 | metadata: 24 | name: deny-by-default 25 | spec: 26 | podSelector: {} 27 | ingress: [] 28 | ``` 29 | 30 | ## Create NetworkPolicy to only Accept connections from vmis within namespaces 31 | 32 | To make vmis accept connections from other vmis in the same namespace, 33 | but reject all other connections from vmis in other namespaces: 34 | 35 | ```yaml 36 | kind: NetworkPolicy 37 | apiVersion: networking.k8s.io/v1 38 | metadata: 39 | name: allow-same-namespace 40 | spec: 41 | podSelector: {} 42 | ingress: 43 | - from: 44 | - podSelector: {} 45 | ``` 46 | 47 | ## Create NetworkPolicy to only allow HTTP and HTTPS traffic 48 | 49 | To enable only HTTP and HTTPS access to the vmis, add a NetworkPolicy 50 | object similar to: 51 | 52 | ```yaml 53 | kind: NetworkPolicy 54 | apiVersion: networking.k8s.io/v1 55 | metadata: 56 | name: allow-http-https 57 | spec: 58 | podSelector: {} 59 | ingress: 60 | - ports: 61 | - protocol: TCP 62 | port: 8080 63 | - protocol: TCP 64 | port: 8443 65 | ``` 66 | 67 | ## Create NetworkPolicy to deny traffic by labels 68 | 69 | To make one specific vmi with a label `type: test` to reject all traffic 70 | from other vmis, create: 71 | 72 | ```yaml 73 | kind: NetworkPolicy 74 | apiVersion: networking.k8s.io/v1 75 | metadata: 76 | name: deny-by-label 77 | spec: 78 | podSelector: 79 | matchLabels: 80 | type: test 81 | ingress: [] 82 | ``` 83 | 84 | Kubernetes NetworkPolicy Documentation can be found here: [Kubernetes 85 | NetworkPolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) 86 | -------------------------------------------------------------------------------- /docs/user_workloads/virtual_machine_instances.md: -------------------------------------------------------------------------------- 1 | # Virtual Machines Instances 2 | 3 | The `VirtualMachineInstance` type conceptionally has two parts: 4 | 5 | - Information for making scheduling decisions 6 | 7 | - Information about the virtual machine API 8 | 9 | Every `VirtualMachineInstance` object represents a single running 10 | virtual machine instance. 11 | 12 | ## API Overview 13 | 14 | With the installation of KubeVirt, new types are added to the Kubernetes 15 | API to manage Virtual Machines. 16 | 17 | You can interact with the new resources (via `kubectl`) as you would 18 | with any other API resource. 19 | 20 | ## VirtualMachineInstance API 21 | 22 | > Note: A full API reference is available at 23 | > . 24 | 25 | Here is an example of a VirtualMachineInstance object: 26 | 27 | ```yaml 28 | apiVersion: kubevirt.io/v1 29 | kind: VirtualMachineInstance 30 | metadata: 31 | name: testvmi-nocloud 32 | spec: 33 | terminationGracePeriodSeconds: 30 34 | domain: 35 | resources: 36 | requests: 37 | memory: 1024M 38 | devices: 39 | disks: 40 | - name: containerdisk 41 | disk: 42 | bus: virtio 43 | - name: emptydisk 44 | disk: 45 | bus: virtio 46 | - disk: 47 | bus: virtio 48 | name: cloudinitdisk 49 | volumes: 50 | - name: containerdisk 51 | containerDisk: 52 | image: kubevirt/fedora-cloud-container-disk-demo:latest 53 | - name: emptydisk 54 | emptyDisk: 55 | capacity: "2Gi" 56 | - name: cloudinitdisk 57 | cloudInitNoCloud: 58 | userData: |- 59 | #cloud-config 60 | password: fedora 61 | chpasswd: { expire: False } 62 | ``` 63 | 64 | This example uses a fedora cloud image in combination with cloud-init 65 | and an ephemeral empty disk with a capacity of `2Gi`. 
For the sake of 66 | simplicity, the volume sources in this example are ephemeral and don't 67 | require a provisioner in your cluster. 68 | 69 | ## Additional Information 70 | 71 | - Using instancetypes and preferences with a VirtualMachine: 72 | [Instancetypes and preferences](../user_workloads/instancetypes.md) 73 | 74 | - More information about persistent and ephemeral volumes: 75 | [Disks and Volumes](../storage/disks_and_volumes.md) 76 | 77 | - How to access a VirtualMachineInstance via `console` or `vnc`: 78 | [Console Access](../user_workloads/accessing_virtual_machines.md) 79 | 80 | - How to customize VirtualMachineInstances with `cloud-init`: 81 | [Cloud Init](../user_workloads/startup_scripts.md#cloud-init) 82 | -------------------------------------------------------------------------------- /docs/compute/client_passthrough.md: -------------------------------------------------------------------------------- 1 | # Client Passthrough 2 | 3 | KubeVirt includes support for redirecting devices from the client's machine to the VMI with the 4 | help of the virtctl command. 5 | 6 | ## USB Redirection 7 | 8 | Support for redirecting a client's USB devices was introduced in release v0.44. This feature is not 9 | enabled by default. To enable it, add an empty `clientPassthrough` under devices, as such: 10 | 11 | ```yaml 12 | spec: 13 | domain: 14 | devices: 15 | clientPassthrough: {} 16 | ``` 17 | 18 | This configuration currently adds 4 USB slots to the VMI that can only be used with virtctl. 19 | 20 | There are two ways of redirecting the same USB device: either using the device's vendor and product 21 | information or its actual bus and device address. On Linux, you can gather this information 22 | with `lsusb`; a redacted example is shown below: 23 | 24 | ```shell 25 | > lsusb 26 | Bus 002 Device 008: ID 0951:1666 Kingston Technology DataTraveler 100 G3/G4/SE9 G2/50 27 | Bus 001 Device 003: ID 13d3:5406 IMC Networks Integrated Camera 28 | Bus 001 Device 010: ID 0781:55ae SanDisk Corp. Extreme 55AE 29 | ``` 30 | 31 | ### Using Vendor and Product 32 | 33 | Redirecting the Kingston storage device: 34 | ``` 35 | virtctl usbredir 0951:1666 vmi-name 36 | ``` 37 | 38 | 39 | ### Using Bus and Device address 40 | 41 | Redirecting the integrated camera: 42 | ``` 43 | virtctl usbredir 01-03 vmi-name 44 | ``` 45 | 46 | ### Requirements for `virtctl usbredir` 47 | 48 | The `virtctl` command uses an application called `usbredirect` to handle the client's USB device by 49 | unplugging the device from the client OS and channeling the communication between the device and the 50 | VMI. 51 | 52 | #### usbredirect 53 | 54 | The `usbredirect` binary comes from the [usbredir][] project and is supported by most Linux distros. 55 | You can either fetch the [latest release][] or the [MSI installer][] for Windows support. 56 | 57 | [usbredir]: https://gitlab.freedesktop.org/spice/usbredir/ 58 | [latest release]: https://www.spice-space.org/download/usbredir/ 59 | [MSI installer]: https://www.spice-space.org/download/windows/usbredirect/ 60 | 61 | #### Permissions 62 | 63 | Managing USB devices requires privileged access in most operating systems. The user running 64 | `virtctl usbredir` needs to be privileged or to run it in a privileged manner (e.g. with `sudo`). 65 | 66 | #### Windows support 67 | 68 | - Redirecting USB devices on Windows requires the installation of [UsbDk][]. 69 | - Be sure to have `usbredirect` included in the PATH environment variable.
70 | 71 | [UsbDk]: https://github.com/daynix/UsbDk 72 | -------------------------------------------------------------------------------- /Rakefile: -------------------------------------------------------------------------------- 1 | #encoding: utf-8 2 | namespace :links do 3 | require 'html-proofer' 4 | require 'optparse' 5 | 6 | files_ignore = [] 7 | note = '(This can take a few mins to run) ' 8 | 9 | desc 'Build site' 10 | task :build do 11 | if ARGV.length > 0 12 | if ARGV.include? "quiet" 13 | quiet = '-q' 14 | 15 | # BLACK MAGIC TO HIJACK ARG AS A TASK 16 | task ARGV.last.to_sym do ; end 17 | else 18 | quiet = '' 19 | 20 | # BLACK MAGIC TO HIJACK ARG AS A TASK 21 | task ARGV.last.to_sym do ; end 22 | end 23 | end 24 | 25 | puts 26 | puts "Building site..." 27 | sh 'mkdocs build -f mkdocs.yml -d site' + ' ' + String(quiet) 28 | end 29 | 30 | desc 'Checks html files for broken external links' 31 | task :test_external, [:ARGV] do 32 | # Verify regex at https://regex101.com 33 | options = { 34 | :assume_extension => true, 35 | :log_level => :info, 36 | :external_only => true, 37 | :internal_domains => ["https://instructor.labs.sysdeseng.com", "https://www.youtube.com"], 38 | :url_ignore => [ /http(s)?:\/\/(www.)?katacoda.com.*/ ], 39 | :url_swap => {'https://kubevirt.io/' => '',}, 40 | :http_status_ignore => [0, 400, 429, 999] 41 | } 42 | 43 | parser = OptionParser.new 44 | parser.banner = "Usage: rake -- [arguments]" 45 | # Added option -u which will remove the url_swap option to from the map 46 | parser.on("-u", "--us", "Remove url_swap from htmlProofer") do |url_swap| 47 | options.delete(:url_swap) 48 | end 49 | 50 | args = parser.order!(ARGV) {} 51 | parser.parse!(args) 52 | 53 | puts 54 | puts "Checks html files for broken external links " + note + "..." 55 | HTMLProofer.check_directory("./site", options).run 56 | end 57 | 58 | desc 'Checks html files for broken internal links' 59 | task :test_internal do 60 | options = { 61 | :assume_extension => true, 62 | :allow_hash_href => true, 63 | :log_level => :info, 64 | :disable_external => true, 65 | :url_swap => {'/user-guide' => '',}, 66 | :http_status_ignore => [0, 200, 400, 429, 999] 67 | } 68 | 69 | puts 70 | puts "Checks html files for broken internal links " + note + "..." 71 | HTMLProofer.check_directory("./site", options).run 72 | end 73 | end 74 | 75 | desc 'The default task will execute all tests in a row' 76 | task :default => ['links:build', 'links:test_external', 'links:test_internal'] 77 | -------------------------------------------------------------------------------- /docs/cluster_admin/customize_components.md: -------------------------------------------------------------------------------- 1 | 2 | ## Customize KubeVirt Components 3 | 4 | 5 | ### Customize components using patches 6 | 7 | > :warning: If the patch created is invalid KubeVirt will not be able to update or deploy the system. This is intended for special use cases and should not be used unless you know what you are doing. 8 | 9 | Valid resource types are: Deployment, DaemonSet, Service, ValidatingWebhookConfiguraton, MutatingWebhookConfiguration, APIService, and CertificateSecret. More information can be found in the [API spec](http://kubevirt.io/api-reference/master/definitions.html#_v1_customizecomponentspatch). 
10 | 11 | 12 | Example customization patch: 13 | ```yaml 14 | --- 15 | apiVersion: kubevirt.io/v1 16 | kind: KubeVirt 17 | metadata: 18 | name: kubevirt 19 | namespace: kubevirt 20 | spec: 21 | certificateRotateStrategy: {} 22 | configuration: {} 23 | customizeComponents: 24 | patches: 25 | - resourceType: Deployment 26 | resourceName: virt-controller 27 | patch: '[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]' 28 | type: json 29 | - resourceType: Deployment 30 | resourceName: virt-controller 31 | patch: '{"metadata":{"annotations":{"patch": "true"}}}' 32 | type: strategic 33 | ``` 34 | 35 | The above example will update the `virt-controller` deployment to have an annotation in it's metadata that says `patch: true` and will remove the livenessProbe from the container definition. 36 | 37 | 38 | ### Customize Flags 39 | 40 | > :warning: If the flags are invalid or become invalid on update the component will not be able to run 41 | 42 | 43 | By using the customize flag option, whichever component the flags are to be applied to, all default flags will be removed and only the flags specified will be used. The available resources to change the flags on are `api`, `controller` and `handler`. You can find our more details about the API in the [API spec](http://kubevirt.io/api-reference/master/definitions.html#_v1_flags). 44 | 45 | ```yaml 46 | apiVersion: kubevirt.io/v1 47 | kind: KubeVirt 48 | metadata: 49 | name: kubevirt 50 | namespace: kubevirt 51 | spec: 52 | certificateRotateStrategy: {} 53 | configuration: {} 54 | customizeComponents: 55 | flags: 56 | api: 57 | v: "5" 58 | port: "8443" 59 | console-server-port: "8186" 60 | subresources-only: "true" 61 | ``` 62 | 63 | The above example would produce a `virt-api` pod with the following command 64 | 65 | ```yaml 66 | ... 67 | spec: 68 | .... 69 | container: 70 | - name: virt-api 71 | command: 72 | - virt-api 73 | - --v 74 | - "5" 75 | - --console-server-port 76 | - "8186" 77 | - --port 78 | - "8443" 79 | - --subresources-only 80 | - "true" 81 | ... 82 | ``` 83 | -------------------------------------------------------------------------------- /docs/user_workloads/lifecycle.md: -------------------------------------------------------------------------------- 1 | # Lifecycle 2 | 3 | Every `VirtualMachineInstance` represents a single virtual machine 4 | *instance*. In general, the management of VirtualMachineInstances is 5 | kept similar to how `Pods` are managed: Every VM that is defined in the 6 | cluster is expected to be running, just like Pods. Deleting a 7 | VirtualMachineInstance is equivalent to shutting it down, this is also 8 | equivalent to how Pods behave. 9 | 10 | 11 | ## Launching a virtual machine 12 | 13 | In order to start a VirtualMachineInstance, you just need to create a 14 | `VirtualMachineInstance` object using `kubectl`: 15 | 16 | $ kubectl create -f vmi.yaml 17 | 18 | 19 | ## Listing virtual machines 20 | 21 | VirtualMachineInstances can be listed by querying for 22 | VirtualMachineInstance objects: 23 | 24 | $ kubectl get vmis 25 | 26 | 27 | ## Retrieving a virtual machine instance definition 28 | 29 | A single VirtualMachineInstance definition can be retrieved by getting 30 | the specific VirtualMachineInstance object: 31 | 32 | $ kubectl get vmis testvmi 33 | 34 | 35 | ## Stopping a virtual machine instance 36 | 37 | To stop the VirtualMachineInstance, you just need to delete the 38 | corresponding `VirtualMachineInstance` object using `kubectl`. 
39 | 40 | $ kubectl delete -f vmi.yaml 41 | # OR 42 | $ kubectl delete vmis testvmi 43 | 44 | > **Note:** Stopping a VirtualMachineInstance implies that it will be 45 | > deleted from the cluster. You will not be able to start this 46 | > VirtualMachineInstance object again. 47 | 48 | ## Starting and stopping a virtual machine 49 | 50 | Virtual machines, in contrast to VirtualMachineInstances, have a running state. Thus on VM you can define if it 51 | should be running, or not. VirtualMachineInstances are, if they are defined in the cluster, always running and consuming resources. 52 | 53 | `virtctl` is used in order to start and stop a VirtualMachine: 54 | 55 | $ virtctl start my-vm 56 | $ virtctl stop my-vm 57 | 58 | > **Note:** You can force stop a VM (which is like pulling the power cord, 59 | > with all its implications like data inconsistencies or 60 | > [in the worst case] data loss) by 61 | 62 | $ virtctl stop my-vm --grace-period 0 --force 63 | 64 | ## Pausing and unpausing a virtual machine 65 | 66 | > **Note:** Pausing in this context refers to libvirt's `virDomainSuspend` command: 67 | > "The process is frozen without further access to CPU resources and I/O but the memory used by the domain at the hypervisor level will stay allocated" 68 | 69 | To pause a virtual machine, you need the `virtctl` command line tool. Its `pause` command works on either `VirtualMachine` s 70 | or `VirtualMachinesInstance` s: 71 | 72 | $ virtctl pause vm testvm 73 | # OR 74 | $ virtctl pause vmi testvm 75 | 76 | Paused VMIs have a `Paused` condition in their status: 77 | 78 | $ kubectl get vmi testvm -o=jsonpath='{.status.conditions[?(@.type=="Paused")].message}' 79 | VMI was paused by user 80 | 81 | Unpausing works similar to pausing: 82 | 83 | $ virtctl unpause vm testvm 84 | # OR 85 | $ virtctl unpause vmi testvm 86 | -------------------------------------------------------------------------------- /OWNERS_ALIASES: -------------------------------------------------------------------------------- 1 | aliases: 2 | reviewers: 3 | - aburdenthehand 4 | - dhiller 5 | - fabiand 6 | - jean-edouard 7 | - mhenriks 8 | - phoracek 9 | - vladikr 10 | approvers: 11 | - aburdenthehand 12 | - davidvossel 13 | - dhiller 14 | - fabiand 15 | - jean-edouard 16 | - mhenriks 17 | - phoracek 18 | - rmohr 19 | - stu-gott 20 | - vladikr 21 | 22 | # 23 | # SIG Test 24 | # 25 | sig-test-reviewers: 26 | - kbidarkar 27 | sig-test-approvers: 28 | - kbidarkar 29 | - phoracek 30 | - enp0s3 31 | - xpivarc 32 | - acardace 33 | - dhiller 34 | 35 | # 36 | # SIG Network 37 | # Owns anything related to networking. 38 | # 39 | sig-network-reviewers: 40 | - EdDev 41 | - RamLavi 42 | - ormergi 43 | - orelmisan 44 | sig-network-approvers: 45 | - EdDev 46 | - orelmisan 47 | 48 | # 49 | # SIG Scale 50 | # Owns to keep kubevirt's scalability comparable to Kubernetes'. 51 | # 52 | sig-scale-approvers: 53 | - rthallisey 54 | sig-scale-reviewers: 55 | - rthallisey 56 | 57 | # 58 | # SIG Storage 59 | # Owns anything related to storage. 60 | # 61 | sig-storage-approvers: 62 | - mhenriks 63 | - alicefr 64 | sig-storage-reviewers: 65 | - awels 66 | - akalenyu 67 | - ShellyKa13 68 | 69 | # 70 | # SIG API 71 | # Owns the API including API life-cycle, deprecation, 72 | # and backwards compatibility. 
73 | # 74 | sig-api-approvers: [] 75 | sig-api-reviewers: [] 76 | 77 | # 78 | # SIG Compute 79 | # Owns everything which is taking place on a node, for example 80 | # (but not limited to) groups, SELinux, node labels, … 81 | # And everything on the cluster level such as RBAC, controller, … 82 | # 83 | sig-compute-approvers: 84 | - jean-edouard 85 | - iholder101 86 | sig-compute-reviewers: 87 | - victortoso 88 | 89 | # 90 | # SIG Observability 91 | # Owns the responsibility to keep kubevirt observable by i.e. 92 | # having mertics, alters, and runbooks. 93 | # 94 | sig-observability-approvers: 95 | - sradco 96 | - machadovilaca 97 | sig-observability-reviewers: 98 | - machadovilaca 99 | - avlitman 100 | - assafad 101 | 102 | # 103 | # SIG Release 104 | # Owns the release process, including the schedule, and tools. 105 | # 106 | sig-release-approvers: 107 | - acardace 108 | - fossedihelm 109 | - xpivarc 110 | sig-release-reviewers: 111 | - acardace 112 | - fossedihelm 113 | - xpivarc 114 | 115 | # 116 | # SIG Buildsystem 117 | # Owns bazel, and ensures that kubevirt can be build. 118 | # 119 | sig-buildsystem-approvers: 120 | - brianmcarey 121 | - dhiller 122 | - xpivarc 123 | sig-buildsystem-reviewers: 124 | - brianmcarey 125 | - enp0s3 126 | - xpivarc 127 | 128 | # 129 | # SIG Architecture 130 | # Owns the overall architecture, and supporting the growth, health, 131 | # openess of KubeVirt. 132 | # 133 | sig-architecture-approvers: [] 134 | sig-architecture-reviewers: [] 135 | -------------------------------------------------------------------------------- /docs/user_workloads/creating_it_pref.md: -------------------------------------------------------------------------------- 1 | # Creating Instance Types and Preferences by using virtctl 2 | 3 | As of KubeVirt v1.0, you can use virtctl subcommands to create instance 4 | types and preferences. 5 | 6 | ## Creating Instance Types 7 | 8 | The virtctl subcommand `create instancetype` allows easy creation of an instance 9 | type manifest from the command line. The command also provides several flags 10 | that can be used to create your desired manifest. 11 | 12 | There are two required flags that need to be specified: 13 | 14 | 1. `--cpu`: the number of vCPUs to be requested 15 | 2. `--memory`: the amount of memory to be requested 16 | 17 | Additionally, there are several optional flags that can be used, such as 18 | specifying a list of GPUs for passthrough, choosing the desired IOThreadsPolicy, 19 | or simply providing the name of our instance type. 20 | 21 | By default, the command creates cluster-wide instance types. If the user 22 | wants to create the namespaced version, they need to provide the namespaced 23 | flag. The namespace name can be specified by using the `--namespace` flag. 
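Because the subcommand only generates a manifest and prints it to standard output, a common pattern is to pipe the result directly into `kubectl` so the object is created in a single step (a sketch, not something the command itself requires):

```shell
# Generate a cluster-wide instance type manifest and create it immediately
virtctl create instancetype --cpu 2 --memory 256Mi | kubectl apply -f -
```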
24 | 25 | For a complete list of flags and their descriptions, use the following command: 26 | 27 | ```shell 28 | virtctl create instancetype -h 29 | ``` 30 | 31 | ### Examples 32 | 33 | Create a manifest for a VirtualMachineClusterInstancetype with the required 34 | `--cpu` and `--memory` flags: 35 | 36 | ```shell 37 | virtctl create instancetype --cpu 2 --memory 256Mi 38 | ``` 39 | 40 | Create a manifest for a VirtualMachineInstancetype with a specified namespace: 41 | 42 | ```shell 43 | virtctl create instancetype --cpu 2 --memory 256Mi --namespace my-namespace 44 | ``` 45 | 46 | Create a manifest for a VirtualMachineInstancetype without a specified 47 | namespace name: 48 | 49 | ```shell 50 | virtctl create instancetype --cpu 2 --memory 256Mi --namespaced 51 | ``` 52 | 53 | ## Creating Preferences 54 | 55 | The virtctl subcommand `create preference` allows easy creation of a preference 56 | manifest from the command line. This command serves as a starting point to 57 | create the basic structure of a preference manifest, as it does not allow 58 | specifying all the options that are supported in preferences. 59 | 60 | The current set of flags allows us, for example, to specify the preferred CPU 61 | topology, machine type or a storage class. 62 | 63 | By default, the command creates cluster-wide preferences. If the user wants to 64 | create the namespaced version, they need to provide the namespaced flag. The 65 | namespace name can be specified by using the `--namespace` flag. 66 | 67 | For a complete list of flags and their descriptions, use the following command: 68 | 69 | ```shell 70 | virtctl create preference -h 71 | ``` 72 | 73 | ### Examples 74 | 75 | Create a manifest for a VirtualMachineClusterPreference with a preferred CPU 76 | topology: 77 | 78 | ```shell 79 | virtctl create preference --cpu-topology preferSockets 80 | ``` 81 | 82 | Create a manifest for a VirtualMachinePreference with a specified namespace: 83 | 84 | ```shell 85 | virtctl create preference --namespace my-namespace 86 | ``` 87 | 88 | Create a manifest for a VirtualMachinePreference with the preferred storage 89 | class: 90 | 91 | ```shell 92 | virtctl create preference --namespaced --volume-storage-class my-storage 93 | ``` 94 | -------------------------------------------------------------------------------- /docs/compute/vsock.md: -------------------------------------------------------------------------------- 1 | # VSOCK 2 | 3 | VM Sockets (vsock) is a fast and efficient guest-host communication mechanism. 4 | 5 | ## Background 6 | 7 | Right now KubeVirt uses virtio-serial for local guest-host communication. Currently it used in KubeVirt by libvirt and qemu to communicate with the qemu-guest-agent. Virtio-serial can also be used by other agents, but it is a little bit cumbersome due to: 8 | 9 | - A small set of ports on the virtio-serial device 10 | - Low bandwidth 11 | - No socket based communication possible, which requires every agent to establish their own protocols, or work with translation layers like SLIP to be able to use protocols like gRPC for reliable communication. 12 | - No easy and supportable way to get a virtio-serial socket assigned and being able to access it without entering the virt-launcher pod. 13 | - Due to the point above, privileges are required for services. 14 | 15 | With [virtio-vsock](https://man7.org/linux/man-pages/man7/vsock.7.html) we get support for easy guest-host communication which solves the above issues from a user/admin perspective. 
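Once the feature gate is enabled and a VM with a VSOCK device is running (both are shown in the Usage section below), the CID assigned by `virt-controller` can be read back from the VMI status, for example (a sketch using the VMI name from the example below):

```shell
# Print the CID that virt-controller assigned to the VMI
kubectl get vmi testvmi-vsock -o jsonpath='{.status.VSOCKCID}'
```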
16 | 17 | ## Usage 18 | 19 | ### Feature Gate 20 | 21 | To enable VSOCK in KubeVirt cluster, the user may expand the `featureGates` 22 | field in the KubeVirt CR by adding the `VSOCK` to it. 23 | 24 | ```yaml 25 | apiVersion: kubevirt.io/v1 26 | kind: Kubevirt 27 | metadata: 28 | name: kubevirt 29 | namespace: kubevirt 30 | spec: 31 | ... 32 | configuration: 33 | developerConfiguration: 34 | featureGates: 35 | - "VSOCK" 36 | ``` 37 | 38 | Alternatively, users can edit an existing kubevirt CR: 39 | 40 | `kubectl edit kubevirt kubevirt -n kubevirt` 41 | 42 | ```yaml 43 | ... 44 | spec: 45 | configuration: 46 | developerConfiguration: 47 | featureGates: 48 | - "VSOCK" 49 | ``` 50 | 51 | ### Virtual Machine Instance 52 | 53 | To attach VSOCK device to a Virtual Machine, the user has to add `autoattachVSOCK: true` in a `devices` section of Virtual Machine Instance specification: 54 | 55 | ```yaml 56 | apiVersion: kubevirt.io/v1 57 | kind: VirtualMachineInstance 58 | metadata: 59 | name: testvmi-vsock 60 | spec: 61 | domain: 62 | resources: 63 | requests: 64 | memory: 64M 65 | devices: 66 | autoattachVSOCK: true 67 | ``` 68 | 69 | This will expose VSOCK device to the VM. The `CID` will be assigned randomly by `virt-controller`, and exposed to the Virtual Machine Instance status: 70 | 71 | ```yaml 72 | status: 73 | VSOCKCID: 123 74 | ``` 75 | 76 | ## Security 77 | 78 | > **_NOTE:_** The `/dev/vhost-vsock` device is *NOT NEEDED* to connect or bind to a VSOCK socket. 79 | 80 | To make VSOCK feature secure, following measures are put in place: 81 | 82 | - The whole VSOCK features will live behind a feature gate. 83 | - By default the first 1024 ports of a vsock device are privileged. Services trying to bind to those require `CAP_NET_BIND_SERVICE` capability. 84 | - `AF_VSOCK` socket syscall gets blocked in containerd 1.7+ (containerd/containerd#7442). It is right now the responsibility of the vendor to ensure that the used CRI selects a default seccomp policy which blocks VSOCK socket calls in a similar way like it was done for containerd. 85 | - CIDs are assigned by `virt-controller` and are unique per Virtual Machine Instance to ensure that `virt-handler` has an easy way of tracking the identity without races. While this still allows `virt-launcher` to fake-use an assigned CID, it eliminates possible assignment races which attackers could make use-of to redirect VSOCK calls. 86 | -------------------------------------------------------------------------------- /docs/cluster_admin/ksm.md: -------------------------------------------------------------------------------- 1 | # KSM Management 2 | 3 | Kernel Samepage Merging ([KSM](http://www.linux-kvm.org/page/KSM)) 4 | allows de-duplication of memory. KSM tries to find identical Memory Pages and merge 5 | those to free memory. 6 | >Further Information: 7 | >- [KSM (Kernel Samepage Merging) feature](https://www.thomas-krenn.com/en/wiki/KSM_(Kernel_Samepage_Merging)_feature) 8 | >- [Kernel Same-page Merging (KSM)](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/chap-ksm) 9 | 10 | ## Enabling KSM through KubeVirt CR 11 | KSM can be enabled on nodes by `spec.configuration.ksmConfiguration` in the KubeVirt CR. 12 | `ksmConfiguration` instructs on which nodes KSM will be enabled, exposing a `nodeLabelSelector`. 
13 | `nodeLabelSelector` is a [LabelSelector](https://github.com/kubernetes/apimachinery/blob/60180f072f73eafec72ef9f2c418a6bb1357d434/pkg/apis/meta/v1/types.go#L1195) 14 | and defines the filter, based on the node labels. If a node's labels match the label selector term, 15 | then on that node, KSM will be enabled. 16 | >**NOTE** 17 | >If `nodeLabelSelector` is nil KSM will not be enabled on any nodes. 18 | >Empty `nodeLabelSelector` will enable KSM on every node. 19 | 20 | #### Examples: 21 | 22 | - Enabling KSM on nodes in which the hostname is `node01` or `node03`: 23 | ```yaml 24 | spec: 25 | configuration: 26 | ksmConfiguration: 27 | nodeLabelSelector: 28 | matchExpressions: 29 | - key: kubernetes.io/hostname 30 | operator: In 31 | values: 32 | - node01 33 | - node03 34 | ``` 35 | 36 | - Enabling KSM on nodes with labels `kubevirt.io/first-label: true`, `kubevirt.io/second-label: true`: 37 | ```yaml 38 | spec: 39 | configuration: 40 | ksmConfiguration: 41 | nodeLabelSelector: 42 | matchLabels: 43 | kubevirt.io/first-label: "true" 44 | kubevirt.io/second-label: "true" 45 | ``` 46 | 47 | - Enabling KSM on every node: 48 | ```yaml 49 | spec: 50 | configuration: 51 | ksmConfiguration: 52 | nodeLabelSelector: {} 53 | ``` 54 | 55 | 56 | ## Annotation and restore mechanism 57 | 58 | On those nodes where KubeVirt enables the KSM via configuration, an annotation will be 59 | added (`kubevirt.io/ksm-handler-managed`). 60 | This annotation is an internal record to keep track of which nodes are currently 61 | managed by virt-handler, so that it is possible to distinguish which nodes should be restored 62 | in case of future ksmConfiguration changes. 63 | 64 | Let's imagine this scenario: 65 | 66 | 1. There are 3 nodes in the cluster and one of them(`node01`) has KSM externally enabled. 67 | 2. An admin patches the KubeVirt CR adding a ksmConfiguration which enables ksm for `node02` and `node03`. 68 | 3. After a while, an admin patches again the KubeVirt CR deleting the ksmConfiguration. 69 | 70 | Thanks to the annotation, the virt-handler is able to disable ksm on only those nodes where it 71 | itself had enabled it(`node02` `node03`), leaving the others unchanged (`node01`). 72 | 73 | ## Node labelling 74 | 75 | KubeVirt can discover on which nodes KSM is enabled and will mark them 76 | with a special label (`kubevirt.io/ksm-enabled`) with value `true`. 77 | This label can be used to schedule the vms in nodes with KSM enabled or not. 78 | ```yaml 79 | apiVersion: kubevirt.io/v1 80 | kind: VirtualMachine 81 | metadata: 82 | name: testvm 83 | spec: 84 | runStrategy: Always 85 | template: 86 | metadata: 87 | labels: 88 | kubevirt.io/vm: testvm 89 | spec: 90 | nodeSelector: 91 | kubevirt.io/ksm-enabled: "true" 92 | [...] 93 | ``` 94 | -------------------------------------------------------------------------------- /docs/cluster_admin/api_validation.md: -------------------------------------------------------------------------------- 1 | # API Validation 2 | 3 | The KubeVirt VirtualMachineInstance API is implemented using a 4 | Kubernetes Custom Resource Definition (CRD). Because of this, KubeVirt 5 | is able to leverage a couple of features Kubernetes provides in order to 6 | perform validation checks on our API as objects created and updated on 7 | the cluster. 8 | 9 | ## How API Validation Works 10 | 11 | ### CRD OpenAPIv3 Schema 12 | 13 | The KubeVirt API is registered with Kubernetes at install time through a 14 | series of CRD definitions. 
KubeVirt includes an OpenAPIv3 schema in 15 | these definitions which indicates to the Kubernetes Apiserver some very 16 | basic information about our API, such as what fields are required and 17 | what type of data is expected for each value. 18 | 19 | This OpenAPIv3 schema validation is installed automatically and requires 20 | no thought on the users part to enable. 21 | 22 | ### Admission Control Webhooks 23 | 24 | The OpenAPIv3 schema validation is limited. It only validates the 25 | general structure of a KubeVirt object looks correct. It does not 26 | however verify that the contents of that object make sense. 27 | 28 | With OpenAPIv3 validation alone, users can easily make simple mistakes 29 | (like not referencing a volume's name correctly with a disk) and the 30 | cluster will still accept the object. However, the 31 | VirtualMachineInstance will of course not start if these errors in the 32 | API exist. Ideally we'd like to catch configuration issues as early as 33 | possible and not allow an object to even be posted to the cluster if we 34 | can detect there's a problem with the object's Spec. 35 | 36 | In order to perform this advanced validation, KubeVirt implements its 37 | own admission controller which is registered with kubernetes as an 38 | admission controller webhook. This webhook is registered with Kubernetes 39 | at install time. As KubeVirt objects are posted to the cluster, the 40 | Kubernetes API server forwards Creation requests to our webhook for 41 | validation before persisting the object into storage. 42 | 43 | Note however that the KubeVirt admission controller requires features to 44 | be enabled on the cluster in order to be enabled. 45 | 46 | ### Enabling KubeVirt Admission Controller on Kubernetes 47 | 48 | When provisioning a new Kubernetes cluster, ensure that both the 49 | **MutatingAdmissionWebhook** and **ValidatingAdmissionWebhook** values 50 | are present in the Apiserver's **--admission-control** cli argument. 51 | 52 | Below is an example of the **--admission-control** values we use during 53 | development 54 | 55 | ``` 56 | --admission-control='Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota' 57 | ``` 58 | 59 | Note that the old --admission-control flag was deprecated in 1.10 and replaced with --enable-admission-plugins. 60 | **MutatingAdmissionWebhook** and **ValidatingAdmissionWebhook** are enabled by default. 61 | 62 | ### Enabling KubeVirt Admission Controller on OKD 63 | 64 | OKD also requires the admission control webhooks to be enabled at 65 | install time. The process is slightly different though. With OKD, we 66 | enable webhooks using an admission plugin. 67 | 68 | These admission control plugins can be configured in openshift-ansible 69 | by setting the following value in ansible inventory file. 
70 | 71 | ``` 72 | openshift_master_admission_plugin_config={"ValidatingAdmissionWebhook":{"configuration":{"kind": "DefaultAdmissionConfig","apiVersion": "v1","disable": false}},"MutatingAdmissionWebhook":{"configuration":{"kind": "DefaultAdmissionConfig","apiVersion": "v1","disable": false}}} 73 | ``` 74 | -------------------------------------------------------------------------------- /docs/network/dns.md: -------------------------------------------------------------------------------- 1 | # DNS records 2 | 3 | In order to create unique DNS records per VirtualMachineInstance, it is 4 | possible to set `spec.hostname` and `spec.subdomain`. If a subdomain is 5 | set and a headless service with a name, matching the subdomain, exists, 6 | kube-dns will create unique DNS entries for every VirtualMachineInstance 7 | which matches the selector of the service. Have a look at the [DNS for 8 | Services and Pods 9 | documentation](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods-hostname-and-subdomain-fields) 10 | for additional information. 11 | 12 | The following example consists of a VirtualMachine and a headless 13 | Service which matches the labels and the subdomain of the 14 | VirtualMachineInstance: 15 | 16 | ```yaml 17 | apiVersion: kubevirt.io/v1 18 | kind: VirtualMachineInstance 19 | metadata: 20 | name: vmi-fedora 21 | labels: 22 | expose: me 23 | spec: 24 | hostname: "myvmi" 25 | subdomain: "mysubdomain" 26 | domain: 27 | devices: 28 | disks: 29 | - disk: 30 | bus: virtio 31 | name: containerdisk 32 | - disk: 33 | bus: virtio 34 | name: cloudinitdisk 35 | resources: 36 | requests: 37 | memory: 1024M 38 | terminationGracePeriodSeconds: 0 39 | volumes: 40 | - name: containerdisk 41 | containerDisk: 42 | image: kubevirt/fedora-cloud-registry-disk-demo:latest 43 | - cloudInitNoCloud: 44 | userDataBase64: IyEvYmluL2Jhc2gKZWNobyAiZmVkb3JhOmZlZG9yYSIgfCBjaHBhc3N3ZAo= 45 | name: cloudinitdisk 46 | --- 47 | apiVersion: v1 48 | kind: Service 49 | metadata: 50 | name: mysubdomain 51 | spec: 52 | selector: 53 | expose: me 54 | clusterIP: None 55 | ports: 56 | - name: foo # Actually, no port is needed. 57 | port: 1234 58 | targetPort: 1234 59 | ``` 60 | 61 | As a consequence, when we enter the VirtualMachineInstance via e.g. 62 | `virtctl console vmi-fedora` and ping `myvmi.mysubdomain` we see that we 63 | find a DNS entry for `myvmi.mysubdomain.default.svc.cluster.local` which 64 | points to `10.244.0.57`, which is the IP of the VirtualMachineInstance 65 | (not of the Service): 66 | 67 | [fedora@myvmi ~]$ ping myvmi.mysubdomain 68 | PING myvmi.mysubdomain.default.svc.cluster.local (10.244.0.57) 56(84) bytes of data. 69 | 64 bytes from myvmi.mysubdomain.default.svc.cluster.local (10.244.0.57): icmp_seq=1 ttl=64 time=0.029 ms 70 | [fedora@myvmi ~]$ ip a 71 | 2: eth0: mtu 1500 qdisc fq_codel state UP group default qlen 1000 72 | link/ether 0a:58:0a:f4:00:39 brd ff:ff:ff:ff:ff:ff 73 | inet 10.244.0.57/24 brd 10.244.0.255 scope global dynamic eth0 74 | valid_lft 86313556sec preferred_lft 86313556sec 75 | inet6 fe80::858:aff:fef4:39/64 scope link 76 | valid_lft forever preferred_lft forever 77 | 78 | So `spec.hostname` and `spec.subdomain` get translated to a DNS A-record 79 | of the form 80 | `...svc.cluster.local`. 81 | If no `spec.hostname` is set, then we fall back to the 82 | VirtualMachineInstance name itself. The resulting DNS A-record looks 83 | like this then: 84 | `...svc.cluster.local`. 
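For example, if `spec.hostname` were omitted from the VirtualMachineInstance above, the A-record would be derived from the VMI name instead, and it could be resolved from another pod in the cluster like this (a sketch):

    nslookup vmi-fedora.mysubdomain.default.svc.cluster.local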
85 | 86 | > **Note** To resolve short names like `myvmi.mysubdomain` using search domains, the guest's `/etc/resolv.conf` must include `options ndots:2` or higher. 87 | > In a Linux guest if `options ndots` is not specified, the system defaults to `ndots:1` as indicated in resolver spec](https://man7.org/linux/man-pages/man5/resolv.conf.5.html). 88 | > KubeVirt does not configure the guest system resolver. It can either be pre-configured in the VM image or configured at runtime using cloud-init. -------------------------------------------------------------------------------- /docs/network/net_binding_plugins/slirp.md: -------------------------------------------------------------------------------- 1 | # Slirp 2 | 3 | ## Overview 4 | 5 | [SLIRP](https://en.wikipedia.org/wiki/Slirp) provides user-space 6 | network connectivity. 7 | 8 | > **Note:** in `slirp` mode, the only supported protocols are TCP and 9 | > UDP. ICMP is *not* supported. 10 | 11 | ## Slirp network binding plugin 12 | [v1.1.0] 13 | 14 | The binding plugin replaces the [core `slirp` binding](../../network/interfaces_and_networks.md#slirp) 15 | API. 16 | 17 | > **Note**: The network binding plugin infrastructure is in Alpha stage. 18 | > Please use them with care. 19 | 20 | The slirp binding plugin consists of the following components: 21 | 22 | - Sidecar image. 23 | 24 | As described in the [definition & flow](../../network/network_binding_plugins.md#definition--flow) section, 25 | the slirp plugin needs to: 26 | 27 | - Assure access to the sidecar image. 28 | - Enable the network binding plugin framework FG. 29 | - Register the binding plugin on the Kubevirt CR. 30 | - Reference the network binding by name from the VM spec interface. 31 | 32 | > **Note**: In order for the core slirp binding to use the network binding plugin 33 | > the registered name for this binding should be `slirp`. 34 | 35 | ### Feature Gate 36 | If not already set, add the `NetworkBindingPlugins` FG. 37 | ``` 38 | kubectl patch kubevirts -n kubevirt kubevirt --type=json -p='[{"op": "add", "path": "/spec/configuration/developerConfiguration/featureGates/-", "value": "NetworkBindingPlugins"}]' 39 | ``` 40 | 41 | > **Note**: The specific slirp plugin has no FG by its own. It is up to the cluster 42 | > admin to decide if the plugin is to be available in the cluster. 43 | 44 | ### Slirp Registration 45 | As described in the [registration section](../../network/network_binding_plugins.md#register), slirp binding plugin 46 | configuration needs to be added to the kubevirt CR. 47 | 48 | To register the slirp binding, patch the kubevirt CR as follows: 49 | ```yaml 50 | kubectl patch kubevirts -n kubevirt kubevirt --type=json -p='[{"op": "add", "path": "/spec/configuration/network", "value": { 51 | "binding": { 52 | "slirp": { 53 | "sidecarImage": "quay.io/kubevirt/network-slirp-binding:v1.1.0" 54 | } 55 | } 56 | }}]' 57 | ``` 58 | 59 | ### VM Slirp Network Interface 60 | Set the VM network interface binding name to reference the one defined in the 61 | kubevirt CR. 
62 | 63 | ```yaml 64 | --- 65 | apiVersion: kubevirt.io/v1 66 | kind: VirtualMachine 67 | metadata: 68 | labels: 69 | kubevirt.io/vm: vm-net-binding-slirp 70 | name: vm-net-binding-passt 71 | spec: 72 | runStrategy: Always 73 | template: 74 | metadata: 75 | labels: 76 | kubevirt.io/vm: vm-net-binding-slirp 77 | spec: 78 | domain: 79 | devices: 80 | disks: 81 | - disk: 82 | bus: virtio 83 | name: containerdisk 84 | - disk: 85 | bus: virtio 86 | name: cloudinitdisk 87 | interfaces: 88 | - name: slirpnet 89 | binding: 90 | name: slirp 91 | rng: {} 92 | resources: 93 | requests: 94 | memory: 1024M 95 | networks: 96 | - name: slirpnet 97 | pod: {} 98 | terminationGracePeriodSeconds: 0 99 | volumes: 100 | - containerDisk: 101 | image: quay.io/kubevirt/fedora-with-test-tooling-container-disk:v1.1.0 102 | name: containerdisk 103 | - cloudInitNoCloud: 104 | networkData: | 105 | version: 2 106 | ethernets: 107 | eth0: 108 | dhcp4: true 109 | name: cloudinitdisk 110 | ``` 111 | -------------------------------------------------------------------------------- /docs/cluster_admin/feature_gate_status_on_Arm64.md: -------------------------------------------------------------------------------- 1 | # Feature Gate Status on Arm64 2 | 3 | This page is based on https://github.com/kubevirt/kubevirt/issues/9749 4 | It records the feature gate status on Arm64 platform. Here is the explanation of the status: 5 | 6 | - **Supported**: the feature gate support on Arm64 platform. 7 | - **Not supported yet**: there are some dependencies of the feature gate not support Arm64, so this feature does not support for now. We may support the dependencies in the future. 8 | - **Not supported**: The feature gate is not support on Arm64. 9 | - **Not verified**: The feature has not been verified yet. 10 | 11 | 12 | FEATURE GATE | STATUS | NOTES 13 | -- | -- | -- 14 | ExpandDisksGate | Not supported yet| CDI is needed 15 | CPUManager | Supported | use taskset to do CPU pinning, do not support kvm-hint-dedicated (this is only works on x86 platform) 16 | NUMAFeatureGate | Not supported yet | Need to support Hugepage on Arm64 17 | IgnitionGate | Supported | This feature is only used for CoreOS/RhCOS 18 | LiveMigrationGate | Supported | Verified live migration with masquerade network 19 | SRIOVLiveMigrationGate | Not verified | Need two same Machine and SRIOV device 20 | HypervStrictCheckGate | Not supported | Hyperv does not work on Arm64 21 | SidecarGate | Supported |   22 | GPUGate | Not verified | Need GPU device 23 | HostDevicesGate | Not verified | Need GPU or sound card 24 | SnapshotGate | Supported | Need snapshotter support https://github.com/kubernetes-csi/external-snapshotter 25 | VMExportGate | Partially supported | Need snapshotter support https://kubevirt.io/user-guide/operations/export_api/, support exporting pvc, not support exporting DataVolumes and MemoryDump which rely on CDI 26 | HotplugVolumesGate | Not supported yet | Rely on datavolume and CDI 27 | HostDiskGate | Supported |   28 | VirtIOFSGate | Supported |   29 | MacvtapGate | Not supported yet | quay.io/kubevirt/macvtap-cni not support Arm64, https://github.com/kubevirt/macvtap-cni#deployment 30 | PasstGate | Supported | VM have same ip with pods; start a process for network /usr/bin/passt --runas 107 -e -t 8080 31 | DownwardMetricsFeatureGate | need more information | It used to let guest get host information, failed on both Arm64 and x86_64.

The block device is successfully attached and the following information can be seen:
 `-blockdev {"driver":"file","filename":"/var/run/kubevirt-private/downwardapi-disks/vhostmd0","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}`

However, the guest is unable to retrieve the information via `vm-dump-metrics`:

`LIBMETRICS: read_mdisk(): Unable to read metrics disk`
`LIBMETRICS: get_virtio_metrics(): Unable to export metrics: open(/dev/virtio-ports/org.github.vhostmd.1) No such file or directory`
`LIBMETRICS: get_virtio_metrics(): Unable to read metrics` 32 | NonRootDeprecated | Supported |   33 | NonRoot | Supported |   34 | Root | Supported |   35 | ClusterProfiler | Supported | 36 | WorkloadEncryptionSEV | Not supported | SEV is only available on x86_64 37 | VSOCKGate | Supported |   38 | HotplugNetworkIfacesGate | Not supported yet | Need to setup *multus-cni* and *multus-dynamic-networks-controller*: https://github.com/k8snetworkplumbingwg/multus-cni
`cat ./deployments/multus-daemonset-thick.yml \| kubectl apply -f -`
https://github.com/k8snetworkplumbingwg/multus-dynamic-networks-controller
`kubectl apply -f manifests/dynamic-networks-controller.yaml`

Currently, the image ghcr.io/k8snetworkplumbingwg/multus-cni:snapshot-thick does not support Arm64 server. For more information please refer to https://github.com/k8snetworkplumbingwg/multus-cni/pull/1027. 39 | CommonInstancetypesDeploymentGate | Not supported yet | Support of common-instancetypes instancetypes needs to be tested, common-instancetypes preferences for ARM workloads are still missing 40 | 41 | -------------------------------------------------------------------------------- /docs/cluster_admin/unresponsive_nodes.md: -------------------------------------------------------------------------------- 1 | # Unresponsive nodes 2 | 3 | KubeVirt has its own node daemon, called virt-handler. In addition to 4 | the usual k8s methods of detecting issues on nodes, the virt-handler 5 | daemon has its own heartbeat mechanism. This allows for fine-tuned error 6 | handling of VirtualMachineInstances. 7 | 8 | ## virt-handler heartbeat 9 | 10 | `virt-handler` periodically tries to update the 11 | `kubevirt.io/schedulable` label and the `kubevirt.io/heartbeat` 12 | annotation on the node it is running on: 13 | 14 | $ kubectl get nodes -o yaml 15 | apiVersion: v1 16 | items: 17 | - apiVersion: v1 18 | kind: Node 19 | metadata: 20 | annotations: 21 | kubevirt.io/heartbeat: 2018-11-05T09:42:25Z 22 | creationTimestamp: 2018-11-05T08:55:53Z 23 | labels: 24 | beta.kubernetes.io/arch: amd64 25 | beta.kubernetes.io/os: linux 26 | cpumanager: "false" 27 | kubernetes.io/hostname: node01 28 | kubevirt.io/schedulable: "true" 29 | node-role.kubernetes.io/control-plane: "" 30 | 31 | If a `VirtualMachineInstance` gets scheduled, the scheduler is only 32 | considering nodes where `kubevirt.io/schedulable` is `true`. This can be 33 | seen when looking on the corresponding pod of a 34 | `VirtualMachineInstance`: 35 | 36 | $ kubectl get pods virt-launcher-vmi-nocloud-ct6mr -o yaml 37 | apiVersion: v1 38 | kind: Pod 39 | metadata: 40 | [...] 41 | spec: 42 | [...] 43 | nodeName: node01 44 | nodeSelector: 45 | kubevirt.io/schedulable: "true" 46 | [...] 47 | 48 | In case there is a communication issue or the host goes down, 49 | `virt-handler` can't update its labels and annotations any-more. Once 50 | the last `kubevirt.io/heartbeat` timestamp is older than five minutes, 51 | the KubeVirt node-controller kicks in and sets the 52 | `kubevirt.io/schedulable` label to `false`. As a consequence no more 53 | VMIs will be schedule to this node until virt-handler is connected 54 | again. 55 | 56 | ## Deleting stuck VMIs when virt-handler is unresponsive 57 | 58 | In cases where `virt-handler` has some issues but the node is in general 59 | fine, a `VirtualMachineInstance` can be deleted as usual via 60 | `kubectl delete vmi `. Pods of a `VirtualMachineInstance` will be 61 | told by the cluster-controllers they should shut down. As soon as the 62 | Pod is gone, the `VirtualMachineInstance` will be moved to `Failed` 63 | state, if `virt-handler` did not manage to update it's heartbeat in the 64 | meantime. If `virt-handler` could recover in the meantime, 65 | `virt-handler` will move the `VirtualMachineInstance` to failed state 66 | instead of the cluster-controllers. 67 | 68 | ## Deleting stuck VMIs when the whole node is unresponsive 69 | 70 | If the whole node is unresponsive, deleting a `VirtualMachineInstance` 71 | via `kubectl delete vmi ` alone will never remove the 72 | `VirtualMachineInstance`. In this case all pods on the unresponsive node 73 | need to be force-deleted: First make sure that the node is really dead. 
74 | Then delete all pods on the node via a force-delete: 75 | `kubectl delete pod --force --grace-period=0 `. 76 | 77 | As soon as the pod disappears and the heartbeat from virt-handler timed 78 | out, the VMIs will be moved to `Failed` state. If they were already 79 | marked for deletion they will simply disappear. If not, they can be 80 | deleted and will disappear almost immediately. 81 | 82 | ## Timing considerations 83 | 84 | It takes up to five minutes until the KubeVirt cluster components can 85 | detect that virt-handler is unhealthy. During that time-frame it is 86 | possible that new VMIs are scheduled to the affected node. If 87 | virt-handler is not capable of connecting to these pods on the node, the 88 | pods will sooner or later go to failed state. As soon as the cluster 89 | finally detects the issue, the VMIs will be set to failed by the 90 | cluster. 91 | -------------------------------------------------------------------------------- /docs/contributing.md: -------------------------------------------------------------------------------- 1 | --- 2 | hide: 3 | - navigation 4 | --- 5 | 6 | # Contributing 7 | 8 | Welcome!! And thank you for taking the first step to contributing to the KubeVirt project. On this page you should be able to find all the information required to get started on your contribution journey, as well as information on how to become a community member and grow into roles of responsibility. 9 | 10 | If you think something might be missing from this page, please help us by [raising a bug](https://github.com/kubevirt/user-guide/issues)! 11 | 12 | ## Prerequisites 13 | 14 | Reviewing the following will prepare you for contributing: 15 | 16 | * If this is your first step in the world of open source, consider reading the CNCF's [Start Contributing to Open Source](https://contribute.cncf.io/contributors/getting-started/) page for an introduction to key concepts. 17 | * You should be comfortable with git. Most contributions follow the GitHub workflow of fork, branch, commit, open pull request, review changes, and merge to work effectively in the KubeVirt community. If you're new to git, [git-scm.com](https://git-scm.com/doc) has a nice set of tutorials. 18 | * Familiarize yourself with the various repositories of the [KubeVirt](https://github.com/kubevirt) GitHub organization. 19 | * Try the one of our [quick start labs](https://kubevirt.io/user-guide/) on [killercoda](https://killercoda.com/kubevirt), [minikube](https://kubevirt.io/quickstart_minikube/), or [kind](https://kubevirt.io/quickstart_kind/). 20 | * See the "Other ways to contribute" section below. 21 | 22 | For code contributors: 23 | 24 | * You need to be familiar with writing code in golang. See the [golang tour](https://tour.golang.org/welcome/1) to familiarize yourself. 25 | * To contribute to the core of the project, read the Developer [contribution page](https://github.com/kubevirt/kubevirt/blob/main/CONTRIBUTING.md) and the [getting started page](https://github.com/kubevirt/kubevirt/blob/main/docs/getting-started.md) in the kubevirt/kubevirt repo. 26 | * Alternatively, to contribute to its storage management add-on, check out the [kubevirt/containerized-data-importer](https://github.com/kubevirt/containerized-data-importer/tree/main) (CDI) repo, and their [contribution page](https://github.com/kubevirt/containerized-data-importer/blob/main/CONTRIBUTING.md). 
27 | 28 | ## Your first contribution 29 | 30 | The following will help you decide where to start: 31 | 32 | * Check a repository issues list and label `good-first-issue` for issues that make good entry points. 33 | * Open a pull request using GitHub to documentation. The tutorials found here can be helpful https://lab.github.com/ 34 | * Review a pull request from other community members for accuracy and language. 35 | 36 | ## Important community resources 37 | 38 | You should familiarize yourself with the following documents, which are critical to being a member of the community: 39 | 40 | * [Code of Conduct](https://github.com/kubevirt/kubevirt/blob/main/CODE_OF_CONDUCT.md): Everyone is expected to abide by the CoC to ensure an open and welcoming environment. Our CoC is based off the [CNCF Code of conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md) which also has a variety of translations. 41 | * [Our community membership policy](https://github.com/kubevirt/community/blob/main/membership_policy.md): How to become a member and grow into roles of responsibility. 42 | * [Project governance](https://github.com/kubevirt/community/blob/main/GOVERNANCE.md): Project Maintainer responsibilities. 43 | 44 | ## Other ways to contribute 45 | 46 | * Visit the KubeVirt [community page](https://kubevirt.io/community/), participate on Twitter or Slack, learn about local meetups and events. 47 | * Visit the KubeVirt [website repository](https://github.com/kubevirt/kubevirt.github.io) and submit a blog post, case study or lab. 48 | * Visit the KubeVirt [user-guide repository](https://github.com/kubevirt/user-guide) and find feature documentation that could use an update. 49 | -------------------------------------------------------------------------------- /docs/compute/resources_requests_and_limits.md: -------------------------------------------------------------------------------- 1 | # Resources requests and limits 2 | 3 | In this document, we are talking about the resources values set on the virt-launcher compute container, referred to as "the container" below for simplicity. 4 | 5 | ## CPU 6 | 7 | Note: dedicated CPUs (and isolated emulator thread) are ignored here as they have a [dedicated page](../compute/dedicated_cpu_resources.md). 8 | 9 | ### CPU requests on the container 10 | - By default, the container requests (1/cpuAllocationRatio) CPU per vCPU. The number of vCPUs is sockets * cores * threads, defaults to 1. 11 | - cpuAllocationRatio defaults to 10 but can be changed in the CR. 12 | - If a CPU limit is manually set on the VM(I) and no CPU request is, the CPU requests on the container will match the CPU limits 13 | - Manually setting CPU requests on the VM(I) will override all of the above and be the CPU requests for the container 14 | 15 | ### CPU limits on the container 16 | - By default, no CPU limit is set on the container 17 | - If auto CPU limits is enabled (see next section), then the container will have a CPU limit of 1 per vCPU 18 | - Manually setting CPU limits on the VM(I) will override all of the above and be the CPU limits for the container 19 | 20 | ### Auto CPU limits 21 | KubeVirt automatically set CPU limits on VM(I)s if all the following conditions are true: 22 | 23 | - The namespace where the VMI will be created has a ResourceQuota containing cpu limits. 24 | - The VMI has no manually set cpu limits. 25 | - The VMI is not requesting dedicated CPU. 
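When all of these conditions hold (or when the label selector described next matches the creation namespace), the limit that was applied can be inspected directly on the virt-launcher compute container (a sketch; `virt-launcher-myvm-abcde` is a placeholder, substitute the name of the virt-launcher pod backing your VMI):

```shell
# Show the requests/limits that ended up on the compute container of the virt-launcher pod
kubectl get pod virt-launcher-myvm-abcde \
  -o jsonpath='{.spec.containers[?(@.name=="compute")].resources}'
```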
26 | 27 | Additionally, you can add the namespaceLabelSelector in the KubeVirt CR, forcing CPU limits to be set. 28 | 29 | In both cases, the VM(I) created will have a CPU limit of 1 per vCPU. 30 | 31 | #### autoCPULimitNamespaceLabelSelector configuration 32 | Cluster admins can define a label selector in the KubeVirt CR. 33 | Once that label selector is defined, if the creation namespace matches the selector, all VM(I)s created in it will have a CPU limits set. 34 | 35 | Example: 36 | 37 | - CR: 38 | ```yaml 39 | apiVersion: kubevirt.io/v1 40 | kind: KubeVirt 41 | metadata: 42 | name: kubevirt 43 | namespace: kubevirt 44 | spec: 45 | configuration: 46 | autoCPULimitNamespaceLabelSelector: 47 | matchLabels: 48 | autoCpuLimit: "true" 49 | ``` 50 | 51 | - Namespace: 52 | ```yaml 53 | apiVersion: v1 54 | kind: Namespace 55 | metadata: 56 | labels: 57 | autoCpuLimit: "true" 58 | kubernetes.io/metadata.name: default 59 | name: default 60 | ``` 61 | 62 | ## Memory 63 | ### Memory requests on the container 64 | - VM(I)s must specify a desired amount of memory, in either spec.domain.memory.guest or spec.domain.resources.requests.memory (ignoring hugepages, see the [dedicated page](../compute/hugepages.md)). If both are set, the memory requests take precedence. A calculated amount of overhead will be added to it, forming the memory request value for the container. 65 | 66 | ### Memory limits on the container 67 | - By default, no memory limit is set on the container 68 | - If auto memory limits is enabled (see next section), then the container will have a limit of 2x the requested memory. 69 | - Manually setting a memory limit on the VM(I) will set the same value on the container 70 | 71 | #### Warnings 72 | - Memory limits have to be more than memory requests + overhead, otherwise the container will have memory requests > limits and be rejected by Kubernetes. 73 | - Memory usage bursts could lead to VM crashes when memory limits are set 74 | 75 | 76 | ### Auto memory limits 77 | KubeVirt automatically set memory limits on VM(I)s if all the following conditions are true: 78 | 79 | - The namespace where the VMI will be created has a ResourceQuota containing memory limits. 80 | - The VMI has no manually set memory limits. 81 | - The VMI is not requesting dedicated CPU. 82 | 83 | If all the previous conditions are true, the memory limits will be set to a value (`2x`) of the memory requests. 84 | This ratio can be adjusted, per namespace, by adding the annotation `alpha.kubevirt.io/auto-memory-limits-ratio`, 85 | with the desired custom value. 86 | For example, with `alpha.kubevirt.io/auto-memory-limits-ratio: 1.2`, the memory limits set will be equal to (`1.2x`) of the memory requests. 87 | -------------------------------------------------------------------------------- /docs/cluster_admin/tekton_tasks.md: -------------------------------------------------------------------------------- 1 | # KubeVirt Tekton 2 | 3 | ### Prerequisites 4 | - [Tekton](https://tekton.dev/) 5 | - [KubeVirt](https://kubevirt.io/) 6 | - [CDI](https://github.com/kubevirt/containerized-data-importer) 7 | 8 | ## KubeVirt Tekton Tasks 9 | ### What are KubeVirt Tekton Tasks? 
10 | KubeVirt-specific Tekton Tasks, which are focused on: 11 | 12 | - Creating and managing resources (VMs, DataVolumes) 13 | - Executing commands in VMs 14 | - Manipulating disk images with libguestfs tools 15 | 16 | KubeVirt Tekton Tasks and example Pipelines are available in [artifacthub.io](https://artifacthub.io/packages/search?org=kubevirt&sort=relevance&page=1) from where you can easily deploy them to your cluster. 17 | 18 | ### Existing Tasks 19 | 20 | #### Create Virtual Machines 21 | - create-vm-from-manifest - create a VM from provided manifest or with virtctl. 22 | - create-vm-from-template - create a VM from template (works only on OpenShift). 23 | 24 | #### Utilize Templates 25 | - copy-template - Copies the given template and creates a new one (works only on OpenShift). 26 | - modify-vm-template - Modifies a template with user provided data (works only on OpenShift). 27 | 28 | #### Modify Data Objects 29 | - modify-data-object - Creates / modifies / deletes a datavolume / datasource 30 | 31 | #### Generate SSH Keys 32 | - generate-ssh-keys - Generates a private and public key pair, and injects it into a VM. 33 | 34 | #### Execute commands in Virtual Machines 35 | - execute-in-vm - Execute commands over SSH in a VM. 36 | - cleanup-vm - Execute commands and/or stop/delete VMs. 37 | 38 | #### Manipulate PVCs with libguestfs tools 39 | - disk-virt-customize - execute virt-customize commands in PVCs. 40 | - disk-virt-sysprep- execute virt-sysprep commands in PVCs. 41 | 42 | #### Wait for Virtual Machine Instance Status 43 | - wait-for-vmi-status - Waits for a VMI to be running. 44 | 45 | #### Modify Windows iso 46 | - modify-windows-iso-file - modifies windows iso (replaces prompt bootloader with no-prompt bootloader) and replaces original iso 47 | in PVC with updated one. This helps with automated installation of Windows in EFI boot mode. By default Windows in EFI boot mode 48 | uses a prompt bootloader, which will not continue with the boot process until a key is pressed. By replacing it with the non-prompt 49 | bootloader no key press is required to boot into the Windows installer. 50 | 51 | ### Example Pipeline 52 | All these Tasks can be used for creating [Pipelines](https://github.com/tektoncd/pipeline/blob/main/docs/pipelines.md). 53 | We prepared example Pipelines which show what can you do with the KubeVirt Tasks. 54 | 55 | - [Windows efi installer](https://github.com/kubevirt/kubevirt-tekton-tasks/blob/main/release/pipelines/windows-efi-installer/windows-efi-installer.yaml) - This Pipeline will prepare a Windows 10/11/2k22 datavolume with virtio drivers installed. User has to provide a working link to a Windows 10/11/2k22 iso file. The Pipeline is suitable for Windows versions, which requires EFI (e.g. Windows 10/11/2k22). More information about Pipeline can be found [here](https://github.com/kubevirt/kubevirt-tekton-tasks/blob/main/release/pipelines/windows-efi-installer/README.md) 56 | 57 | - [Windows customize](https://github.com/kubevirt/kubevirt-tekton-tasks/blob/main/release/pipelines/windows-customize/windows-customize.yaml) - This Pipeline will install a SQL server or a VS Code in a Windows VM. More information about Pipeline can be found [here](https://github.com/kubevirt/kubevirt-tekton-tasks/blob/main/release/pipelines/windows-customize/README.md) 58 | 59 | !!! 
note 60 | - If you define a different namespace for Pipelines and a different namespace for Tasks, you will have to create a [cluster resolver](https://tekton.dev/docs/pipelines/cluster-resolver/) object.
61 | - By default, example Pipelines create the resulting datavolume in the `kubevirt-os-images` namespace.
62 | - In case you would like to create the resulting datavolume in a different namespace (by specifying the `baseDvNamespace` attribute in the Pipeline), additional RBAC permissions will be required (the list of all required RBAC permissions can be found [here](https://github.com/kubevirt/kubevirt-tekton-tasks/tree/main/release/tasks/modify-data-object#usage-in-different-namespaces)).
63 | - In case you would like to live migrate the VM while a given Pipeline is running, the following [prerequisities](../operations/live_migration.md#limitations) must be met 64 | -------------------------------------------------------------------------------- /docs/compute/memory_hotplug.md: -------------------------------------------------------------------------------- 1 | # Memory Hotplug 2 | 3 | Memory hotplug was introduced in KubeVirt version 1.1, enabling the dynamic resizing of the amount of memory available to a running VM. 4 | 5 | ## Limitations 6 | * Memory hotplug is currently only supported on the x86_64,arm64 architectures. 7 | * Linux guests running at least Linux v5.8 are fully supported. 8 | * Windows guests support has been added to virtio-win, but it should be considered unstable. 9 | * Current hotplug implementation involves live-migration of the VM workload. 10 | * VirtualMachines must have at least 1GiB of memory to support memory-hotplug. 11 | 12 | 13 | # Configuration 14 | 15 | ### Configure the Workload Update Strategy 16 | 17 | Configure `LiveMigrate` as `workloadUpdateStrategy` in the KubeVirt CR, since the current implementation of the hotplug process requires the VM to live-migrate. 18 | 19 | ```yaml 20 | apiVersion: kubevirt.io/v1 21 | kind: KubeVirt 22 | spec: 23 | workloadUpdateStrategy: 24 | workloadUpdateMethods: 25 | - LiveMigrate 26 | ``` 27 | 28 | ### Configure the VM rollout strategy 29 | 30 | Finally, set the VM rollout strategy to `LiveUpdate`, so that the changes made to the VM object propagate to the VMI without a restart. 31 | This is also done in the KubeVirt CR configuration: 32 | 33 | ```yaml 34 | apiVersion: kubevirt.io/v1 35 | kind: KubeVirt 36 | spec: 37 | configuration: 38 | vmRolloutStrategy: "LiveUpdate" 39 | ``` 40 | 41 | **NOTE:** If memory hotplug is enabled/disabled on an already running VM, a reboot is necessary for the changes to take effect. 42 | 43 | More information can be found on the [VM Rollout Strategies](../user_workloads/vm_rollout_strategies.md) page. 44 | 45 | ### [OPTIONAL] Set a cluster-wide maximum amount of memory 46 | 47 | You can set the maximum amount of memory for the guest using a cluster level setting in the KubeVirt CR. 48 | 49 | ```yaml 50 | apiVersion: kubevirt.io/v1 51 | kind: KubeVirt 52 | spec: 53 | configuration: 54 | liveUpdateConfiguration: 55 | maxGuest: 8Gi 56 | ``` 57 | 58 | The VM-level configuration will take precedence over the cluster-wide one. 59 | 60 | ## Memory Hotplug in Action 61 | 62 | First we set the rollout strategy to `LiveUpdate` and `LiveMigrate` as `workloadUpdateStrategy` in the KubeVirt CR. 63 | 64 | ```sh 65 | $ kubectl --namespace kubevirt patch kv kubevirt -p='[{"op": "add", "path": "/spec/configuration/vmRolloutStrategy", "value": "LiveUpdate"}]' --type='json' 66 | $ kubectl --namespace kubevirt patch kv kubevirt -p='[{"op": "add", "path": "/spec/workloadUpdateStrategy/workloadUpdateMethods", "value": ["LiveMigrate"]}]' --type='json' 67 | ``` 68 | 69 | Now we create a VM with memory hotplug enabled. 
70 | 71 | ```yaml 72 | apiVersion: kubevirt.io/v1 73 | kind: VirtualMachine 74 | metadata: 75 | name: vm-alpine 76 | spec: 77 | runStrategy: Always 78 | template: 79 | spec: 80 | domain: 81 | memory: 82 | guest: 1Gi 83 | devices: 84 | interfaces: 85 | - masquerade: {} 86 | model: virtio 87 | name: default 88 | disks: 89 | - disk: 90 | bus: virtio 91 | name: containerdisk 92 | networks: 93 | - name: default 94 | pod: {} 95 | volumes: 96 | - containerDisk: 97 | image: registry:5000/kubevirt/alpine-container-disk-demo:devel 98 | name: containerdisk 99 | ``` 100 | 101 | The Virtual Machine will automatically start and once booted it will report the currently available memory to the guest in the `status.memory` field inside the VMI. 102 | 103 | ```sh 104 | $ kubectl get vmi vm-cirros -o json | jq .status.memory 105 | ``` 106 | ```json 107 | { 108 | "guestAtBoot": "1Gi", 109 | "guestCurrent": "1Gi", 110 | "guestRequested": "1Gi" 111 | } 112 | ``` 113 | 114 | Since the Virtual Machine is now running we can patch the VM object to double the available guest memory so that we'll go from 1Gi to 2Gi. 115 | 116 | ```sh 117 | $ kubectl patch vm vm-cirros -p='[{"op": "replace", "path": "/spec/template/spec/domain/memory/guest", "value": "2Gi"}]' --type='json' 118 | ``` 119 | 120 | After the hotplug request is processed and the Virtual Machine is live migrated, the new amount of memory should be available to the guest 121 | and visible in the VMI object. 122 | 123 | ```sh 124 | $ kubectl get vmi vm-cirros -o json | jq .status.memory 125 | ``` 126 | ```json 127 | { 128 | "guestAtBoot": "1Gi", 129 | "guestCurrent": "2Gi", 130 | "guestRequested": "2Gi" 131 | } 132 | ``` 133 | -------------------------------------------------------------------------------- /docs/debug_virt_stack/logging.md: -------------------------------------------------------------------------------- 1 | # Control libvirt logging for each component 2 | 3 | Generally, cluster admins can control the log verbosity of each KubeVirt component in KubeVirt CR. For more details, please, check the [KubeVirt documentation](https://kubevirt.io/user-guide/operations/debug/#log-verbosity). 4 | 5 | Nonetheless, regular users can also adjust the qemu component logging to have a finer control over it. The annotation `kubevirt.io/libvirt-log-filters` enables you to modify each component's log level. 6 | 7 | Example: 8 | ```yaml 9 | apiVersion: kubevirt.io/v1 10 | kind: VirtualMachineInstance 11 | metadata: 12 | annotations: 13 | kubevirt.io/libvirt-log-filters: "2:qemu.qemu_monitor 3:*" 14 | labels: 15 | special: vmi-debug-tools 16 | name: vmi-debug-tools 17 | spec: 18 | domain: 19 | devices: 20 | disks: 21 | - disk: 22 | bus: virtio 23 | name: containerdisk 24 | - disk: 25 | bus: virtio 26 | name: cloudinitdisk 27 | rng: {} 28 | resources: 29 | requests: 30 | memory: 1024M 31 | volumes: 32 | - containerDisk: 33 | image: registry:5000/kubevirt/fedora-with-test-tooling-container-disk:devel 34 | name: containerdisk 35 | - cloudInitNoCloud: 36 | userData: |- 37 | #cloud-config 38 | password: fedora 39 | chpasswd: { expire: False } 40 | name: cloudinitdisk 41 | ``` 42 | 43 | Then, it is possible to obtain the logs from the virt-launcher output: 44 | 45 | ```console 46 | $ kubectl get pods 47 | NAME READY STATUS RESTARTS AGE 48 | virt-launcher-vmi-debug-tools-fk64q 3/3 Running 0 64s 49 | $ kubectl logs virt-launcher-vmi-debug-tools-fk64q 50 | [..] 
51 | {"component":"virt-launcher","level":"info","msg":"QEMU_MONITOR_RECV_EVENT: mon=0x7faa8801f5d0 event={\"timestamp\": {\"seconds\": 1698324640, \"microseconds\": 523652}, \"event\": \"NIC_RX_FILTER_CHANGED\", \"data\": {\"name\": \"ua-default\", \"path\": \"/machine/peripheral/ua-default/virtio-backend\"}}","pos":"qemuMonitorJSONIOProcessLine:205","subcomponent":"libvirt","thread":"80","timestamp":"2023-10-26T12:50:40.523000Z"} 52 | {"component":"virt-launcher","level":"info","msg":"QEMU_MONITOR_RECV_EVENT: mon=0x7faa8801f5d0 event={\"timestamp\": {\"seconds\": 1698324644, \"microseconds\": 165626}, \"event\": \"VSERPORT_CHANGE\", \"data\": {\"open\": true, \"id\": \"channel0\"}}","pos":"qemuMonitorJSONIOProcessLine:205","subcomponent":"libvirt","thread":"80","timestamp":"2023-10-26T12:50:44.165000Z"} 53 | [..] 54 | {"component":"virt-launcher","level":"info","msg":"QEMU_MONITOR_RECV_EVENT: mon=0x7faa8801f5d0 event={\"timestamp\": {\"seconds\": 1698324646, \"microseconds\": 707666}, \"event\": \"RTC_CHANGE\", \"data\": {\"offset\": 0, \"qom-path\": \"/machine/unattached/device[8]\"}}","pos":"qemuMonitorJSONIOProcessLine:205","subcomponent":"libvirt","thread":"80","timestamp":"2023-10-26T12:50:46.708000Z"} 55 | [..] 56 | ``` 57 | 58 | The annotation enables the filter from the container creation. However, in 59 | certain cases you might desire to change the logging level dynamically once the container and libvirt have already been started. In this case, [`virt-admin`](https://libvirt.org/manpages/virt-admin.html) comes to the rescue. 60 | 61 | Example: 62 | ```console 63 | $ kubectl get pods 64 | NAME READY STATUS RESTARTS AGE 65 | virt-launcher-vmi-ephemeral-nqcld 3/3 Running 0 26m 66 | $ kubectl exec -ti virt-launcher-vmi-ephemeral-nqcld -- virt-admin -c virtqemud:///session daemon-log-filters "1:libvirt 1:qemu 1:conf 1:security 3:event 3:json 3:file 3:object 1:util" 67 | $ kubectl exec -ti virt-launcher-vmi-ephemeral-nqcld -- virt-admin -c virtqemud:///session daemon-log-filters 68 | Logging filters: 1:*libvirt* 1:*qemu* 1:*conf* 1:*security* 3:*event* 3:*json* 3:*file* 3:*object* 1:*util* 69 | ``` 70 | 71 | Otherwise, if you prefer to redirect the output to a file and fetch it later, you can rely on `kubectl cp` to retrieve the file. In this case, we are saving the file in the `/var/run/libvirt` directory because the compute container has the permissions to write there. 
72 | 73 | Example: 74 | ```console 75 | $ kubectl get pods 76 | NAME READY STATUS RESTARTS AGE 77 | virt-launcher-vmi-ephemeral-nqcld 3/3 Running 0 26m 78 | $ kubectl exec -ti virt-launcher-vmi-ephemeral-nqcld -- virt-admin -c virtqemud:///session daemon-log-outputs "1:file:/var/run/libvirt/libvirtd.log" 79 | $ kubectl cp virt-launcher-vmi-ephemeral-nqcld:/var/run/libvirt/libvirtd.log libvirt-kubevirt.log 80 | tar: Removing leading `/' from member names 81 | ``` 82 | -------------------------------------------------------------------------------- /docs/_redirects: -------------------------------------------------------------------------------- 1 | /operations/customize_components /cluster_admin/customize_components 2 | /operations/installation /cluster_admin/installation 3 | /operations/updating_and_deletion /cluster_admin/updating_and_deletion 4 | /operations/basic_use /user_workloads/basic_use 5 | /operations/customize_components /cluster_admin/customize_components 6 | /operations/deploy_common_instancetypes /user_workloads/deploy_common_instancetypes 7 | /operations/api_validation /cluster_admin/api_validation 8 | /operations/debug /debug_virt_stack/debug 9 | /operations/virtctl_client_tool /user_workloads/virtctl_client_tool 10 | /operations/live_migration /compute/live_migration 11 | /operations/hotplug_interfaces /network/hotplug_interfaces 12 | /operations/hotplug_volumes /storage/hotplug_volumes 13 | /operations/client_passthrough /compute/client_passthrough 14 | /operations/snapshot_restore_api /storage/snapshot_restore_api 15 | /operations/scheduler /cluster_admin/scheduler 16 | /operations/hugepages /compute/hugepages 17 | /operations/component_monitoring /user_workloads/component_monitoring 18 | /operations/authorization /cluster_admin/authorization 19 | /operations/annotations_and_labels /cluster_admin/annotations_and_labels 20 | /operations/node_assignment /compute/node_assignment 21 | /operations/node_maintenance /cluster_admin/node_maintenance 22 | /operations/node_overcommit /compute/node_overcommit 23 | /operations/unresponsive_nodes /cluster_admin/unresponsive_nodes 24 | /operations/containerized_data_importer /storage/containerized_data_importer 25 | /operations/activating_feature_gates /cluster_admin/activating_feature_gates 26 | /operations/export_api /storage/export_api 27 | /operations/clone_api /storage/clone_api 28 | /operations/memory_dump /compute/memory_dump 29 | /operations/mediated_devices_configuration /compute/mediated_devices_configuration 30 | /operations/migration_policies /cluster_admin/migration_policies 31 | /operations/ksm /cluster_admin/ksm 32 | /operations/gitops /cluster_admin/gitops 33 | /operations/operations_on_Arm64 /cluster_admin/operations_on_Arm64 34 | /operations/feature_gate_status_on_Arm64 /cluster_admin/feature_gate_status_on_Arm64 35 | /operations/cpu_hotplug /compute/cpu_hotplug 36 | /operations/memory_hotplug /compute/memory_hotplug 37 | /operations/vm_rollout_strategies /user_workloads/vm_rollout_strategies 38 | /operations/hook-sidecar /user_workloads/hook-sidecar 39 | /virtual_machines/virtual_machine_instances /user_workloads/virtual_machine_instances 40 | /virtual_machines/creating_vms /user_workloads/creating_vms 41 | /virtual_machines/lifecycle /user_workloads/lifecycle 42 | /virtual_machines/run_strategies /compute/run_strategies 43 | /virtual_machines/instancetypes /user_workloads/instancetypes 44 | /virtual_machines/presets /user_workloads/presets 45 | /virtual_machines/virtual_hardware /compute/virtual_hardware 46 | 
/virtual_machines/dedicated_cpu_resources /compute/dedicated_cpu_resources 47 | /virtual_machines/numa /compute/numa 48 | /virtual_machines/disks_and_volumes /storage/disks_and_volumes 49 | /virtual_machines/interfaces_and_networks /network/interfaces_and_networks 50 | /virtual_machines/network_binding_plugins /network/network_binding_plugins 51 | /virtual_machines/istio_service_mesh /network/istio_service_mesh 52 | /virtual_machines/networkpolicy /network/networkpolicy 53 | /virtual_machines/host-devices /compute/host-devices 54 | /virtual_machines/windows_virtio_drivers /user_workloads/windows_virtio_drivers 55 | /virtual_machines/guest_operating_system_information /user_workloads/guest_operating_system_information 56 | /virtual_machines/guest_agent_information /user_workloads/guest_agent_information 57 | /virtual_machines/liveness_and_readiness_probes /user_workloads/liveness_and_readiness_probes 58 | /virtual_machines/accessing_virtual_machines /user_workloads/accessing_virtual_machines 59 | /virtual_machines/startup_scripts /user_workloads/startup_scripts 60 | /virtual_machines/service_objects /network/service_objects 61 | /virtual_machines/templates /user_workloads/templates 62 | /virtual_machines/tekton_tasks /cluster_admin/tekton_tasks 63 | /virtual_machines/replicaset /user_workloads/replicaset 64 | /virtual_machines/pool /user_workloads/pool 65 | /virtual_machines/dns /network/dns 66 | /virtual_machines/boot_from_external_source /user_workloads/boot_from_external_source 67 | /virtual_machines/confidential_computing /cluster_admin/confidential_computing 68 | /virtual_machines/vsock /compute/vsock 69 | /virtual_machines/virtual_machines_on_Arm64 /cluster_admin/virtual_machines_on_Arm64 70 | /virtual_machines/device_status_on_Arm64 /cluster_admin/device_status_on_Arm64 71 | /virtual_machines/persistent_tpm_and_uefi_state /compute/persistent_tpm_and_uefi_state 72 | /virtual_machines/resources_requests_and_limits /compute/resources_requests_and_limits 73 | /virtual_machines/guestfs /storage/guestfs 74 | -------------------------------------------------------------------------------- /docs/compute/run_strategies.md: -------------------------------------------------------------------------------- 1 | # Run Strategies 2 | 3 | ## Overview 4 | 5 | VirtualMachines have a `Running` setting that determines whether or not 6 | there should be a guest running or not. Because KubeVirt will always 7 | immediately restart a VirtualMachineInstance for VirtualMachines with 8 | `spec.running: true`, a simple boolean is not always enough to fully 9 | describe desired behavior. For instance, there are cases when a user 10 | would like the ability to shut down a guest from inside the virtual 11 | machine. With `spec.running: true`, KubeVirt would immediately restart 12 | the VirtualMachineInstance. 13 | 14 | ## RunStrategy 15 | 16 | To allow for greater variation of user states, the `RunStrategy` field 17 | has been introduced. This is mutually exclusive with `Running` as they 18 | have somewhat overlapping conditions. There are currently five 19 | RunStrategies defined: 20 | 21 | - Always: The system is tasked with keeping the VM in a running 22 | state. 23 | This is achieved by respawning a VirtualMachineInstance whenever 24 | the current one terminated in a controlled (e.g. shutdown from 25 | inside the guest) or uncontrolled (e.g. crash) way. 26 | This behavior is equal to `spec.running: true`. 
27 | 28 | - RerunOnFailure: Similar to `Always`, except that the VM is only 29 | restarted if it terminated in an uncontrolled way (e.g. crash) 30 | and due to an infrastructure reason (e.g. the node crashed, 31 | the KVM-related process was OOM-killed). 32 | This allows a user to determine when the VM should be shut down 33 | by initiating the shut down inside the guest. 34 | Note: Guest-side crashes (e.g. BSOD) are not covered by this. 35 | In such cases liveness checks or the use of a watchdog can help. 36 | 37 | - Once: The VM will run once and not be restarted upon completion, 38 | regardless of whether the completion phase is Failure or Success. 39 | 40 | - Manual: The system will not automatically turn the VM on or off; 41 | instead, the user manually controls the VM status by issuing 42 | start, stop, and restart commands on the VirtualMachine 43 | subresource endpoints. 44 | 45 | - Halted: The system is asked to ensure that no VM is running. 46 | This is achieved by stopping any VirtualMachineInstance that is 47 | associated with the VM. If a guest is already running, it will be 48 | stopped. 49 | This behavior is equal to `spec.running: false`. 50 | 51 | *Note*: `RunStrategy` and `running` are mutually exclusive, because 52 | they can be contradictory. The API server will reject VirtualMachine 53 | resources that define both. 54 | 55 | ### Virtctl 56 | 57 | The `start`, `stop` and `restart` methods of virtctl will invoke their 58 | respective subresources of VirtualMachines. This can have an effect on 59 | the runStrategy of the VirtualMachine as below: 60 | 61 |
| RunStrategy    | start          | stop           | restart        |
|----------------|----------------|----------------|----------------|
| Always         | -              | Halted         | Always         |
| RerunOnFailure | RerunOnFailure | RerunOnFailure | RerunOnFailure |
| Manual         | Manual         | Manual         | Manual         |
| Halted         | Always         | -              | -              |
103 | 104 | Table entries marked with `-` don't make sense, so won't have an effect 105 | on RunStrategy. 106 | 107 | ## RunStrategy Examples 108 | 109 | ### Always 110 | 111 | An example usage of the Always RunStrategy. 112 | 113 | ```yaml 114 | apiVersion: kubevirt.io/v1 115 | kind: VirtualMachine 116 | metadata: 117 | labels: 118 | kubevirt.io/vm: vm-cirros 119 | name: vm-cirros 120 | spec: 121 | runStrategy: Always 122 | template: 123 | metadata: 124 | labels: 125 | kubevirt.io/vm: vm-cirros 126 | spec: 127 | domain: 128 | devices: 129 | disks: 130 | - disk: 131 | bus: virtio 132 | name: containerdisk 133 | terminationGracePeriodSeconds: 0 134 | volumes: 135 | - containerDisk: 136 | image: kubevirt/cirros-container-disk-demo:latest 137 | name: containerdisk 138 | ``` 139 | -------------------------------------------------------------------------------- /docs/compute/persistent_tpm_and_uefi_state.md: -------------------------------------------------------------------------------- 1 | # Persistent TPM and UEFI state 2 | 3 | **FEATURE STATE:** KubeVirt v1.0.0 4 | 5 | For both TPM and UEFI, libvirt supports persisting data created by a virtual machine as files on the virtualization host. 6 | In KubeVirt, the virtualization host is the virt-launcher pod, which is ephemeral (created on VM start and destroyed on VM stop). 7 | As of v1.0.0, KubeVirt supports using a PVC to persist those files. KubeVirt usually refers to that storage area as "backend storage". 8 | 9 | ## Backend storage 10 | 11 | KubeVirt automatically creates backend storage PVCs for VMs that need it. However, to persist TPM and UEFI state, the admin must first enable the `VMPersistentState` feature gate. The KubeVirt CR configuration option `vmStateStorageClass` may be used to manually specify a storage class, otherwise the default storage class will be used. 12 | 13 | Here's an example of KubeVirt CR that sets both: 14 | ```yaml 15 | apiVersion: kubevirt.io/v1 16 | kind: KubeVirt 17 | spec: 18 | configuration: 19 | vmStateStorageClass: "nfs-csi" 20 | developerConfiguration: 21 | featureGates: 22 | - VMPersistentState 23 | ``` 24 | 25 | ### Notes: 26 | - Backend storage is currently incompatible with VM snapshot. It is planned to add snapshot support in the future. 27 | 28 | ## TPM with persistent state 29 | 30 | Since KubeVirt v0.53.0, a TPM device can be added to a VM (with just `tpm: {}`). However, the data stored in it does not persist across reboots. 31 | Support for persistence was added in v1.0.0 using a simple `persistent` boolean parameter that default to false, to preserve previous behavior. 32 | Of course, backend storage must first be configured before adding a persistent TPM to a VM. See above. 33 | Here's a portion of a VM definition that includes a persistent TPM: 34 | ```yaml 35 | apiVersion: kubevirt.io/v1 36 | kind: VirtualMachine 37 | metadata: 38 | name: vm 39 | spec: 40 | template: 41 | spec: 42 | domain: 43 | devices: 44 | tpm: 45 | persistent: true 46 | ``` 47 | 48 | 49 | In order for the persistent tpm volume to be created successfully you must ensure your storage classes and [storage profiles](https://github.com/kubevirt/containerized-data-importer/blob/main/doc/storageprofile.md) are configured correctly. 50 | The persistent tpm volume will be created with the below access mode if one of the constraints for the access mode is true. 51 | 52 | * RWX (ReadWriteMany): 53 | * the respective storage profile has any `claimPropertySet` in `claimPropertySets` with Filesystem volume mode and RWX access mode. 
54 | * the kubevirt cluster config has `VMStateStorageClass` set and the storage profile does not exist. 55 | * the kubevirt cluster config has `VMStateStorageClass` set and the storage profile exists but `claimPropertySets` is an empty list. 56 | 57 | * RWO (ReadWriteOnce): 58 | * the respective storage profile has `claimPropertySets` where all `claimPropertySet` in `claimPropertySets` have Filesystem volume mode and RWO access mode but not RWX. 59 | * the kubevirt cluster config has `VMStateStorageClass` **unset** and the storage profile does not exist. 60 | * the kubevirt cluster config has `VMStateStorageClass` **unset** and the storage profile exists but `claimPropertySets` is an empty list. 61 | 62 | ### Uses 63 | - The Microsoft Windows 11 installer requires the presence of a TPM device, even though it doesn't actually use it. Persistence is not required in this case, however. 64 | - Some disk encryption software has optional or mandatory TPM support. For example, BitLocker requires a persistent TPM device. 65 | 66 | ### Notes 67 | - The TPM device exposed to the virtual machine is fully emulated (vTPM). The worker nodes do not need to have a TPM device. 68 | - When TPM persistence is enabled, the `tpm-crb` model is used (instead of `tpm-tis`, which is used for non-persistent vTPMs). 69 | - A virtual TPM does not provide the same security guarantees as a physical one. 70 | 71 | ## EFI with persistent VARS 72 | 73 | EFI support is handled by libvirt using OVMF. OVMF data usually consists of two files, CODE and VARS. VARS is where persistent data from the guest can be stored. 74 | When EFI persistence is enabled on a VM, the VARS file will be persisted inside the backend storage. 75 | Of course, backend storage must first be configured before enabling EFI persistence on a VM. See above. 76 | Here's a portion of a VM definition that includes a persistent EFI: 77 | ```yaml 78 | apiVersion: kubevirt.io/v1 79 | kind: VirtualMachine 80 | metadata: 81 | name: vm 82 | spec: 83 | template: 84 | spec: 85 | domain: 86 | firmware: 87 | bootloader: 88 | efi: 89 | persistent: true 90 | ``` 91 | 92 | ### Uses 93 | - Preserving user-created Secure Boot certificates. 94 | - Preserving EFI firmware settings, like language or display resolution. 95 | 96 | ### Notes 97 | - The boot entries/order can, and most likely will, get overridden by libvirt. This is done to satisfy the VM specification. Do not expect manual boot setting changes to persist. 98 | -------------------------------------------------------------------------------- /docs/compute/memory_dump.md: -------------------------------------------------------------------------------- 1 | # Virtual machine memory dump 2 | 3 | KubeVirt supports getting a VM memory dump for analysis purposes. 4 | The memory dump can be used to diagnose, identify and resolve issues in the VM. It typically provides information about the last state of the programs, applications and system before they were terminated or crashed. 5 | 6 | > *Note* This memory dump is not used for saving VM state and resuming it later. 7 | 8 | ## Prerequisites 9 | 10 | ### Hot plug Feature Gate 11 | 12 | The memory dump process mounts a PVC to the virt-launcher in order to get the output in that PVC, hence the hot plug volumes feature gate must be enabled. The 13 | [feature gates](../cluster_admin/activating_feature_gates.md#how-to-activate-a-feature-gate) 14 | field in the KubeVirt CR must be expanded by adding `HotplugVolumes` to it.
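For example, a minimal sketch of the relevant KubeVirt CR snippet with the feature gate enabled (any feature gates already present in the list must be kept alongside it):

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      featureGates:
        # Required so the memory dump PVC can be hotplugged into the virt-launcher pod
        - HotplugVolumes
```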
15 | 16 | ## Virtctl support 17 | 18 | ### Get memory dump 19 | 20 | Now lets assume we have a running VM and the name of the VM is 'my-vm'. 21 | We can either dump to an existing pvc, or request one to be created. 22 | 23 | #### Existing PVC 24 | The size of the PVC must be big enough to hold the memory dump. The calculation is (VMMemorySize + 100Mi) * FileSystemOverhead, Where `VMMemorySize` is the memory size, 25 | 100Mi is reserved space for the memory dump overhead and `FileSystemOverhead` is the value used to adjust requested PVC size with the filesystem overhead. 26 | also the PVC must have a `FileSystem` volume mode. 27 | 28 | Example for such PVC: 29 | 30 | ```yaml 31 | apiVersion: v1 32 | kind: PersistentVolumeClaim 33 | metadata: 34 | name: my-pvc 35 | spec: 36 | accessModes: 37 | - ReadWriteOnce 38 | resources: 39 | requests: 40 | storage: 2Gi 41 | storageClassName: rook-ceph-block 42 | volumeMode: Filesystem 43 | ``` 44 | 45 | We can get a memory dump of the VM to the PVC by using the 'memory-dump get' command available with virtctl 46 | 47 | ```bash 48 | $ virtctl memory-dump get my-vm --claim-name=my-pvc 49 | ``` 50 | 51 | #### On demand PVC 52 | For on demand PVC, we need to add `--create-claim` flag to the virtctl request: 53 | 54 | ```bash 55 | $ virtctl memory-dump get my-vm --claim-name=new-pvc --create-claim 56 | ``` 57 | 58 | A PVC with size big enough for the dump will be created. We can also request specific storage class and access mode with appropriate flags. 59 | 60 | #### Download memory dump 61 | By adding the `--output` flag, the memory will be dumped to the PVC and then downloaded to the given output path. 62 | 63 | ```bash 64 | $ virtctl memory-dump get myvm --claim-name=memoryvolume --create-claim --output=memoryDump.dump.gz 65 | ``` 66 | 67 | For downloading the last memory dump from the PVC associated with the VM, without triggering another memory dump, use the memory dump download command. 68 | 69 | ```bash 70 | $ virtctl memory-dump download myvm --output=memoryDump.dump.gz 71 | ``` 72 | 73 | For downloading a memory dump from a PVC already disassociated from the VM you can use the [virtctl vmexport command](https://github.com/kubevirt/user-guide/blob/main/docs/operations/export_api.md) 74 | 75 | ### Monitoring the memory dump 76 | Information regarding the memory dump process will be available on the VM's status section 77 | ```yaml 78 | memoryDumpRequest: 79 | claimName: memory-dump 80 | phase: Completed 81 | startTimestamp: "2022-03-29T11:00:04Z" 82 | endTimestamp: "2022-03-29T11:00:09Z" 83 | fileName: my-vm-my-pvc-20220329-110004 84 | ``` 85 | 86 | During the process the volumeStatus on the VMI will be updated with the process information such as the attachment pod information and messages, if all goes well once the process is completed, the PVC is unmounted from the virt-launcher pod and the volumeStatus is deleted. 87 | A memory dump annotation will be added to the PVC with the memory dump file name. 88 | 89 | ### Retriggering the memory dump 90 | Getting a new memory dump to the same PVC is possible without the need to use any flag: 91 | ```bash 92 | $ virtctl memory-dump get my-vm 93 | ``` 94 | 95 | > *Note* Each memory-dump command will delete the previous dump in that PVC. 96 | 97 | In order to get a memory dump to a different PVC you need to 'remove' the current memory-dump PVC and then do a new get with the new PVC name. 
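For example, a sketch of that sequence, where `my-new-pvc` is only an illustrative claim name:

```bash
# Disassociate the current memory dump PVC from the VM
$ virtctl memory-dump remove my-vm

# Trigger a new dump into a freshly created claim
$ virtctl memory-dump get my-vm --claim-name=my-new-pvc --create-claim
```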
98 | 99 | ### Remove memory dump 100 | 101 | As mentioned in order to remove the associated memory dump PVC you need to run a 'memory-dump remove' command. This will allow you to replace the current PVC and get the memory dump to a new one. 102 | 103 | ```bash 104 | $ virtctl memory-dump remove my-vm 105 | ``` 106 | 107 | ## Handle the memory dump 108 | Once the memory dump process is completed the PVC will hold the output. 109 | You can manage the dump in one of the following ways: 110 | - Download the memory dump 111 | - Create a pod with troubleshooting tools that will mount the PVC and inspect it within the pod. 112 | - Include the memory dump in the VM Snapshot (will include both the memory dump and the disks) to save a snapshot of the VM in that point of time and inspect it when needed. (The VM Snapshot can be exported and downloaded). 113 | 114 | The output of the memory dump can be inspected with memory analysis tools for example [Volatility3](https://github.com/volatilityfoundation/volatility3) 115 | -------------------------------------------------------------------------------- /docs/user_workloads/guest_agent_information.md: -------------------------------------------------------------------------------- 1 | # Guest Agent information 2 | 3 | Guest Agent (GA) is an optional component that can run inside of Virtual Machines. 4 | The GA provides plenty of additional runtime information about the running operating system (OS). 5 | More technical detail about available GA commands is available [here](https://qemu.weilnetz.de/doc/3.1/qemu-ga-ref.html). 6 | 7 | 8 | ## Guest Agent info in Virtual Machine status 9 | 10 | GA presence in the Virtual Machine is signaled with a condition in the VirtualMachineInstance status. 11 | The condition tells that the GA is connected and can be used. 12 | 13 | GA condition on VirtualMachineInstance 14 | 15 | ```yaml 16 | status: 17 | conditions: 18 | - lastProbeTime: "2020-02-28T10:22:59Z" 19 | lastTransitionTime: null 20 | status: "True" 21 | type: AgentConnected 22 | ``` 23 | 24 | When the GA is connected, additional OS information is shown in the status. 25 | This information comprises: 26 | 27 | - guest info, which contains OS runtime data 28 | - interfaces info, which shows QEMU interfaces merged with GA interfaces info. 29 | 30 | Below is the example of the information shown in the VirtualMachineInstance status. 31 | 32 | GA info with merged into status 33 | 34 | ```yaml 35 | status: 36 | guestOSInfo: 37 | id: fedora 38 | kernelRelease: 4.18.16-300.fc29.x86_64 39 | kernelVersion: '#1 SMP Sat Oct 20 23:24:08 UTC 2018' 40 | name: Fedora 41 | prettyName: Fedora 29 (Cloud Edition) 42 | version: "29" 43 | versionId: "29" 44 | interfaces: 45 | - infoSource: domain, guest-agent 46 | interfaceName: eth0 47 | ipAddress: 10.244.0.23/24 48 | ipAddresses: 49 | - 10.244.0.23/24 50 | - fe80::858:aff:fef4:17/64 51 | mac: 0a:58:0a:f4:00:17 52 | name: default 53 | ``` 54 | 55 | When the Guest Agent is not present in the Virtual Machine, the Guest Agent information is not shown. No error is reported because the Guest Agent is an optional component. 56 | 57 | The infoSource field indicates where the info is gathered from. 
Valid values: 58 | 59 | - domain: the info is based on the domain spec 60 | - guest-agent: the info is based on the Guest Agent report 61 | - domain, guest-agent: the info is based on both the domain spec and the Guest Agent report 62 | 63 | ## Guest Agent info available through the API 64 | 65 | The data shown in the VirtualMachineInstance status is a subset of the information available. 66 | The rest of the data is available via the REST API exposed in the Kubernetes `kube-api` server. 67 | 68 | There are three new subresources added to the VirtualMachineInstance object: 69 | 70 | - guestosinfo 71 | - userlist 72 | - filesystemlist 73 | 74 | 75 | The whole GA data is returned via the `guestosinfo` subresource, available at the following API endpoint: 76 | 77 | /apis/subresources.kubevirt.io/v1/namespaces/{namespace}/virtualmachineinstances/{name}/guestosinfo 78 | 79 | 80 | GuestOSInfo sample data: 81 | 82 | ```json 83 | { 84 | "fsInfo": { 85 | "disks": [ 86 | { 87 | "diskName": "vda1", 88 | "fileSystemType": "ext4", 89 | "mountPoint": "/", 90 | "totalBytes": 0, 91 | "usedBytes": 0 92 | } 93 | ] 94 | }, 95 | "guestAgentVersion": "2.11.2", 96 | "hostname": "testvmi6m5krnhdlggc9mxfsrnhlxqckgv5kqrwcwpgr5mdpv76grrk", 97 | "metadata": { 98 | "creationTimestamp": null 99 | }, 100 | "os": { 101 | "id": "fedora", 102 | "kernelRelease": "4.18.16-300.fc29.x86_64", 103 | "kernelVersion": "#1 SMP Sat Oct 20 23:24:08 UTC 2018", 104 | "machine": "x86_64", 105 | "name": "Fedora", 106 | "prettyName": "Fedora 29 (Cloud Edition)", 107 | "version": "29 (Cloud Edition)", 108 | "versionId": "29" 109 | }, 110 | "timezone": "UTC, 0" 111 | } 112 | ``` 113 | 114 | The FSInfo and UserList items are capped at a maximum of 10 entries each, as a precaution for VMs with thousands of users. 115 | 116 | The full list of filesystems is available through the `filesystemlist` subresource, which is exposed at the following endpoint: 117 | 118 | /apis/subresources.kubevirt.io/v1/namespaces/{namespace}/virtualmachineinstances/{name}/filesystemlist 119 | 120 | Filesystem sample data: 121 | 122 | ```json 123 | { 124 | "items": [ 125 | { 126 | "diskName": "vda1", 127 | "fileSystemType": "ext4", 128 | "mountPoint": "/", 129 | "totalBytes": 3927900160, 130 | "usedBytes": 1029201920 131 | } 132 | ], 133 | "metadata": {} 134 | } 135 | ``` 136 | 137 | The full list of users is available through the `userlist` subresource, which is exposed at the following endpoint: 138 | 139 | /apis/subresources.kubevirt.io/v1/namespaces/{namespace}/virtualmachineinstances/{name}/userlist 140 | 141 | Userlist sample data: 142 | 143 | ```json 144 | { 145 | "items": [ 146 | { 147 | "loginTime": 1580467675.876078, 148 | "userName": "fedora" 149 | } 150 | ], 151 | "metadata": {} 152 | } 153 | ``` 154 | 155 | User LoginTime is in fractional seconds since epoch time. It is left for the consumer to convert to the desired format. 156 | -------------------------------------------------------------------------------- /docs/cluster_admin/authorization.md: -------------------------------------------------------------------------------- 1 | # Authorization 2 | 3 | KubeVirt authorization is performed using the Kubernetes Role-Based 4 | Access Control (RBAC) system. RBAC allows cluster admins to grant 5 | access to cluster resources by binding RBAC roles to users. 6 | 7 | For example, an admin creates an RBAC role that represents the 8 | permissions required to create a VirtualMachineInstance.
The admin can 9 | then bind that role to users in order to grant them the permissions 10 | required to launch a VirtualMachineInstance. 11 | 12 | With RBAC roles, admins can grant users targeted access to various 13 | KubeVirt features. 14 | 15 | ## KubeVirt Default RBAC ClusterRoles 16 | 17 | KubeVirt comes with a set of predefined RBAC ClusterRoles that can be 18 | used to grant users permissions to access KubeVirt Resources. 19 | 20 | ### Default View Role 21 | 22 | The **kubevirt.io:view** ClusterRole gives users permissions to view all 23 | KubeVirt resources in the cluster. The permissions to create, delete, 24 | modify or access any KubeVirt resources beyond viewing the resource's 25 | spec are not included in this role. This means a user with this role 26 | could see that a VirtualMachineInstance is running, but neither shutdown 27 | nor gain access to that VirtualMachineInstance via console/VNC. 28 | 29 | ### Default Edit Role 30 | 31 | The **kubevirt.io:edit** ClusterRole gives users permissions to modify 32 | all KubeVirt resources in the cluster. For example, a user with this 33 | role can create new VirtualMachineInstances, delete 34 | VirtualMachineInstances, and gain access to both console and VNC. 35 | 36 | ### Default Admin Role 37 | 38 | The **kubevirt.io:admin** ClusterRole grants users full permissions to 39 | all KubeVirt resources, including the ability to delete collections of 40 | resources. 41 | 42 | The admin role also grants users access to view and modify the KubeVirt 43 | runtime config. This config exists within the Kubevirt Custom Resource under 44 | the `configuration` key in the namespace the KubeVirt operator is running. 45 | 46 | > *NOTE* Users are only guaranteed the ability to modify the kubevirt 47 | > runtime configuration if a ClusterRoleBinding is used. A RoleBinding 48 | > will work to provide kubevirt CR access only if the RoleBinding 49 | > targets the same namespace that the kubevirt CR exists in. 50 | 51 | ### Binding Default ClusterRoles to Users 52 | 53 | The KubeVirt default ClusterRoles are granted to users by creating 54 | either a ClusterRoleBinding or RoleBinding object. 55 | 56 | #### Binding within All Namespaces 57 | 58 | With a ClusterRoleBinding, users receive the permissions granted by the 59 | role across all namespaces. 60 | 61 | #### Binding within Single Namespace 62 | 63 | With a RoleBinding, users receive the permissions granted by the role 64 | only within a targeted namespace. 65 | 66 | ## Extending Kubernetes Default Roles with KubeVirt permissions 67 | 68 | The aggregated ClusterRole Kubernetes feature facilitates combining 69 | multiple ClusterRoles into a single aggregated ClusterRole. This feature 70 | is commonly used to extend the default Kubernetes roles with permissions 71 | to access custom resources that do not exist in the Kubernetes core. 72 | 73 | In order to extend the default Kubernetes roles to provide permission to 74 | access KubeVirt resources, we need to add the following labels to the 75 | KubeVirt ClusterRoles. 
76 | 77 | kubectl label clusterrole kubevirt.io:admin rbac.authorization.k8s.io/aggregate-to-admin=true 78 | kubectl label clusterrole kubevirt.io:edit rbac.authorization.k8s.io/aggregate-to-edit=true 79 | kubectl label clusterrole kubevirt.io:view rbac.authorization.k8s.io/aggregate-to-view=true 80 | 81 | By adding these labels, any user with a RoleBinding or 82 | ClusterRoleBinding involving one of the default Kubernetes roles will 83 | automatically gain access to the equivalent KubeVirt roles as well. 84 | 85 | More information about aggregated cluster roles can be found 86 | [here](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles) 87 | 88 | ## Creating Custom RBAC Roles 89 | 90 | If the default KubeVirt ClusterRoles are not expressive enough, admins 91 | can create their own custom RBAC roles to grant user access to KubeVirt 92 | resources. The creation of a RBAC role is inclusive only, meaning 93 | there's no way to deny access. Instead access is only granted. 94 | 95 | Below is an example of what KubeVirt's default admin ClusterRole looks 96 | like. A custom RBAC role can be created by reducing the permissions in 97 | this example role. 98 | 99 | ```yaml 100 | apiVersion: rbac.authorization.k8s.io/v1beta1 101 | kind: ClusterRole 102 | metadata: 103 | name: my-custom-rbac-role 104 | labels: 105 | kubevirt.io: "" 106 | rules: 107 | - apiGroups: 108 | - subresources.kubevirt.io 109 | resources: 110 | - virtualmachineinstances/console 111 | - virtualmachineinstances/vnc 112 | verbs: 113 | - get 114 | - apiGroups: 115 | - kubevirt.io 116 | resources: 117 | - virtualmachineinstances 118 | - virtualmachines 119 | - virtualmachineinstancepresets 120 | - virtualmachineinstancereplicasets 121 | verbs: 122 | - get 123 | - delete 124 | - create 125 | - update 126 | - patch 127 | - list 128 | - watch 129 | - deletecollection 130 | ``` -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # KubeVirt User-Guide 2 | 3 | [![Netlify Status](https://api.netlify.com/api/v1/badges/2430a4f6-4a28-4e60-853d-f0cc395e13bb/deploy-status)](https://app.netlify.com/sites/kubevirt-user-guide/deploys) 4 | 5 | ## Contributing contents 6 | 7 | We more than welcome contributions to KubeVirt documentation. Please reach out if you happen to have an idea or find an issue with our contents! 8 | 9 | ## Get started 10 | 11 | ### Fork this repository 12 | 13 | ### Make changes to your fork 14 | 15 | You can find the markdown that powers the user guide in `./docs`, most commits are to that area. 16 | 17 | We use [mkdocs](https://www.mkdocs.org/) markdown engine with [mkdocs-awesome-pages](https://github.com/lukasgeiter/mkdocs-awesome-pages-plugin/) plugin 18 | - mkdocs config file 19 | - Each subdirectory of `./docs` contains a `.pages` file. We use this to force the ordering of pages. Alphabetical ordering is not ideal for technical documentation. 20 | 21 | #### Sign your commits 22 | 23 | Signature verification on commits are required -- you may sign your commits by running: 24 | 25 | ```console 26 | $ git commit -s -m "The commit message" file1 file 2 ... 
27 | ``` 28 | 29 | If you need to sign all commits from a certain point (for example, `main`), you may run: 30 | 31 | ```console 32 | git rebase --exec 'git commit --amend --no-edit -n -s' -i main 33 | ``` 34 | 35 | Signed commit messages generally take the following form: 36 | 37 | ``` 38 | 39 | 40 | Signed-off-by: 41 | ``` 42 | 43 | 44 | ### Test your changes locally: 45 | 46 | ```console 47 | $ make build_img 48 | $ make check_spelling 49 | $ make check_links 50 | $ make run 51 | ``` 52 | 53 | **NOTE** If you use `docker` you may need to set `CONTAINER_ENGINE` and `BUILD_ENGINE`: 54 | 55 | ```console 56 | $ export CONTAINER_ENGINE=docker 57 | $ export BUILD_ENGINE=docker 58 | $ make run 59 | ``` 60 | 61 | 62 | Open your web browser to http://0.0.0.0:8000 and validate page rendering 63 | 64 | 65 | ### Create a pull request to `kubevirt/user-guide` 66 | 67 | After you have vetted your changes, make a PR to `kubevirt/user-guide` so that others can review. 68 | 69 | ## Makefile Help 70 | 71 | ```console 72 | Makefile for user-guide mkdocs application 73 | 74 | Usage: 75 | make 76 | 77 | Env Variables: 78 | CONTAINER_ENGINE Set container engine, [*podman*, docker] 79 | BUILD_ENGINE Set build engine, [*podman*, buildah, docker] 80 | SELINUX_ENABLED Enable SELinux on containers, [*False*, True] 81 | LOCAL_SERVER_PORT Port on which the local mkdocs server will run, [*8000*] 82 | 83 | Targets: 84 | help Show help 85 | check_links Check external and internal links 86 | check_spelling Check spelling on site content 87 | build_img Build image: userguide 88 | build_image_yaspeller Build image: yaspeller 89 | build Build site. This target should only be used by Prow jobs. 90 | run Run site. App available @ http://0.0.0.0:8000 91 | status Container status 92 | stop Stop site 93 | stop_yaspeller Stop yaspeller image 94 | ``` 95 | 96 | ### Environment Variables 97 | 98 | * `CONTAINER_ENGINE`: Some of us use `docker`. Some of us use `podman` (default: `podman`). 99 | 100 | * `BUILD_ENGINE`: Some of us use `docker`. Some of us use `podman` or `buildah` (default: `podman`). 101 | 102 | * `SELINUX_ENABLED`: Some of us run SELinux enabled. Set to `True` to enable container mount labelling. 103 | 104 | * `PYTHON`: Change the `python` executable used (default: `python3.7`). 105 | 106 | * `PIP`: Change the `pip` executable used (default: `pip3`). 107 | 108 | * `LOCAL_SERVER_PORT`: Port on which the local `mkdocs` server will run, i.e. `http://localhost:` (default: `8000`). 109 | 110 | * `DEBUG`: This is normally hidden. Set to `True` to echo target commands to terminal. 111 | 112 | ### Targets: 113 | 114 | * check_links: HTMLProofer is used to check any links to external websites as we as any cross-page links 115 | 116 | * check_spelling: yaspeller is used to check spelling. Feel free to update to the dictionary file as needed (`kubevirt/project-infra/images/yaspeller/.yaspeller.json`). 117 | 118 | * build_img: mkdocs project does not provide a container image. Use this target to build an image packed with python and mkdocs app. ./docs will be mounted. ./site will be mounted as tmpfs...changes here are lost. 119 | 120 | * build_image_yaspeller: yaspeller project does not provide a container image. User this target to Build an image packed with nodejs and yaspeller app. ./docs will be mounted. yaspeller will check content for spelling and other bad forms of English. 121 | 122 | * status: Basically `${BUILD_ENGINE} ps` for an easy way to see what's running. 
123 | 124 | * stop: Stop container and app 125 | 126 | * stop_yaspeller: Sometimes yaspeller goes bonkers. Stop it here. 127 | 128 | ## Getting help 129 | 130 | - File a bug: 131 | 132 | - Mailing list: 133 | 134 | - Slack: 135 | 136 | ## Developer 137 | 138 | - Start contributing: 139 | 140 | ## Privacy 141 | 142 | - Check our privacy policy at: 143 | 144 | - We do use Open Source Plan for rendering Pull Requests to the documentation repository 145 | -------------------------------------------------------------------------------- /docs/cluster_admin/scheduler.md: -------------------------------------------------------------------------------- 1 | # KubeVirt Scheduler 2 | Scheduling is the process of matching Pods/VMs to Nodes. By default, the scheduler used is 3 | [kube-scheduler](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/). 4 | Further details can be found at [Kubernetes Scheduler Documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/). 5 | 6 | Custom schedulers can be used if the default scheduler does not satisfy your needs. For instance, you might want to schedule 7 | VMs using a load aware scheduler such as [Trimaran Schedulers](https://cloud.redhat.com/blog/improving-the-resource-efficiency-for-openshift-clusters-via-trimaran-schedulers). 8 | 9 | ## Creating a Custom Scheduler 10 | KubeVirt is compatible with custom schedulers. The configuration steps are described in the [Official Kubernetes 11 | Documentation](https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers). 12 | Please note, the Kubernetes version KubeVirt is running on and the Kubernetes version used to build the custom 13 | scheduler have to match. 14 | To get the Kubernetes version KubeVirt is running on, you can run the following command: 15 | 16 | ```shell 17 | $ kubectl version 18 | Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.13", GitCommit:"a43c0904d0de10f92aa3956c74489c45e6453d6e", GitTreeState:"clean", BuildDate:"2022-08-17T18:28:56Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"} 19 | Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.13", GitCommit:"a43c0904d0de10f92aa3956c74489c45e6453d6e", GitTreeState:"clean", BuildDate:"2022-08-17T18:23:45Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"} 20 | ``` 21 | 22 | Pay attention to the `Server` line. 23 | In this case, the Kubernetes version is `v1.22.13`. 24 | You have to checkout the matching Kubernetes version and [build the Kubernetes project](https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/#package-the-scheduler): 25 | 26 | ```shell 27 | $ cd kubernetes 28 | $ git checkout v1.22.13 29 | $ make 30 | ``` 31 | 32 | Then, you can follow the configuration steps described [here](https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/). 33 | Additionally, the ClusterRole `system:kube-scheduler` needs permissions to use the verbs `watch`, `list` and `get` on StorageClasses. 34 | 35 | ```yaml 36 | - apiGroups: 37 | - storage.k8s.io 38 | resources: 39 | - storageclasses 40 | verbs: 41 | - watch 42 | - list 43 | - get 44 | ``` 45 | 46 | 47 | ## Scheduling VMs with the Custom Scheduler 48 | 49 | The second scheduler should be up and running. You can check it with: 50 | 51 | ```shell 52 | $ kubectl get all -n kube-system 53 | ``` 54 | 55 | The deployment `my-scheduler` should be up and running if everything is setup properly. 
56 | In order to launch the VM using the custom scheduler, you need to set the `SchedulerName` in the VM's spec to `my-scheduler`. 57 | Here is an example VM definition: 58 | 59 | ```yaml 60 | apiVersion: kubevirt.io/v1 61 | kind: VirtualMachine 62 | metadata: 63 | name: vm-fedora 64 | spec: 65 | runStrategy: Always 66 | template: 67 | spec: 68 | schedulerName: my-scheduler 69 | domain: 70 | devices: 71 | disks: 72 | - name: containerdisk 73 | disk: 74 | bus: virtio 75 | - name: cloudinitdisk 76 | disk: 77 | bus: virtio 78 | rng: {} 79 | resources: 80 | requests: 81 | memory: 1Gi 82 | terminationGracePeriodSeconds: 180 83 | volumes: 84 | - containerDisk: 85 | image: quay.io/containerdisks/fedora:latest 86 | name: containerdisk 87 | - cloudInitNoCloud: 88 | userData: |- 89 | #cloud-config 90 | chpasswd: 91 | expire: false 92 | password: fedora 93 | user: fedora 94 | name: cloudinitdisk 95 | ``` 96 | In case the specified `SchedulerName` does not match any existing scheduler, the `virt-launcher` pod will stay in state 97 | [Pending](https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/#verifying-that-the-pods-were-scheduled-using-the-desired-schedulers), 98 | until the specified scheduler can be found. 99 | You can check if the VM has been scheduled using the `my-scheduler` checking the `virt-launcher` pod events associated 100 | with the VM. The pod should have been scheduled with `my-scheduler`. 101 | 102 | ```shell 103 | $ kubectl get pods 104 | NAME READY STATUS RESTARTS AGE 105 | virt-launcher-vm-fedora-dpc87 2/2 Running 0 24m 106 | 107 | $ kubectl describe pod virt-launcher-vm-fedora-dpc87 108 | [...] 109 | Events: 110 | Type Reason Age From Message 111 | ---- ------ ---- ---- ------- 112 | Normal Scheduled 21m my-scheduler Successfully assigned default/virt-launcher-vm-fedora-dpc87 to node01 113 | [...] 114 | ``` -------------------------------------------------------------------------------- /docs/user_workloads/boot_from_external_source.md: -------------------------------------------------------------------------------- 1 | # Booting From External Source 2 | 3 | When installing a new guest virtual machine OS, it is often useful to boot directly from a kernel and initrd stored in 4 | the host physical machine OS, allowing command line arguments to be passed directly to the installer. 5 | 6 | Booting from an external source is supported in Kubevirt starting from [version v0.42.0-rc.0](https://github.com/kubevirt/kubevirt/releases/tag/v0.42.0-rc.0). 7 | This enables the capability to define a Virtual Machine that will use a custom kernel / initrd binary, with possible 8 | custom arguments, during its boot process. 9 | 10 | The binaries are provided though a container image. 11 | The container is pulled from the container registry and resides on the local node hosting the VMs. 12 | 13 | ## Use cases 14 | Some use cases for this may be: 15 | - For a kernel developer it may be very convenient to launch VMs that are defined to boot from the latest kernel binary 16 | that is often being changed. 17 | - Initrd can be set with files that need to reside on-memory during all the VM's life-cycle. 
18 | 19 | ## Workflow 20 | Defining an external boot source can be done in the following way: 21 | ```yaml 22 | apiVersion: kubevirt.io/v1 23 | kind: VirtualMachine 24 | metadata: 25 | name: ext-kernel-boot-vm 26 | spec: 27 | runStrategy: Manual 28 | template: 29 | spec: 30 | domain: 31 | devices: {} 32 | firmware: 33 | kernelBoot: 34 | container: 35 | image: vmi_ext_boot/kernel_initrd_binaries_container:latest 36 | initrdPath: /boot/initramfs-virt 37 | kernelPath: /boot/vmlinuz-virt 38 | imagePullPolicy: Always 39 | imagePullSecret: IfNotPresent 40 | kernelArgs: console=ttyS0 41 | resources: 42 | requests: 43 | memory: 1Gi 44 | ``` 45 | 46 | Notes: 47 | 48 | - `initrdPath` and `kernelPath` define the path for the binaries inside the container. 49 | 50 | - Kernel and Initrd binaries must be owned by `qemu` user & group. 51 | - To change ownership: `chown qemu:qemu ` when `` is the binary file. 52 | 53 | - `kernelArgs` can only be provided if a kernel binary is provided (i.e. `kernelPath` not defined). These 54 | arguments will be passed to the default kernel the VM boots from. 55 | 56 | - `imagePullSecret` and `imagePullPolicy` are optional 57 | 58 | - if `imagePullPolicy` is `Always` and the container image is updated then the VM will be booted 59 | into the new kernel when VM restarts 60 | 61 | 62 | Booting the kernel with a root file system can be done by specifying the root file system as a 63 | [disk](https://kubevirt.io/user-guide/storage/disks_and_volumes/) (e.g. containerDisk): 64 | ```yaml 65 | apiVersion: kubevirt.io/v1 66 | kind: VirtualMachine 67 | metadata: 68 | name: ext-kernel-boot-vm 69 | spec: 70 | runStrategy: Manual 71 | instancetype: 72 | name: u1.medium 73 | preference: 74 | name: fedora 75 | template: 76 | spec: 77 | domain: 78 | devices: 79 | disks: 80 | - name: kernel-modules 81 | cdrom: 82 | bus: sata 83 | firmware: 84 | kernelBoot: 85 | container: 86 | image: custom-containerdisk:latest 87 | initrdPath: /boot/initramfs 88 | kernelPath: /boot/vmlinuz 89 | imagePullPolicy: Always 90 | kernelArgs: "no_timer_check console=tty1 console=ttyS0,115200n8 systemd=off root=/dev/vda4 rootflags=subvol=root" 91 | volumes: 92 | - name: root-filesystem 93 | containerDisk: 94 | image: custom-containerdisk:latest 95 | imagePullPolicy: Always 96 | - name: kernel-modules 97 | containerDisk: 98 | image: custom-containerdisk:latest 99 | path: /boot/kernel-modules.isofs 100 | imagePullPolicy: Always 101 | - name: cloudinitdisk 102 | cloudInitNoCloud: 103 | userData: |- 104 | #cloud-config 105 | chpasswd: 106 | expire: false 107 | password: fedora 108 | user: fedora 109 | runcmd: 110 | - "sudo mkdir /mnt/kernel-modules-disk" 111 | - "sudo mount /dev/sr0 /mnt/kernel-modules-disk" 112 | - "sudo tar -xvf /mnt/kernel-modules-disk/kernel_m.tgz --directory /usr/lib/modules/" 113 | ``` 114 | 115 | Notes: 116 | 117 | - It is not necessary to package the root file system into the same `containerDisk` as the kernel. The file system 118 | could also be pulled in via something like a `dataVolumeDisk`. 119 | 120 | - If the custom kernel was configured to build modules, we need to install these in the root file system. For this 121 | example the kernel modules were bundled into a tarball and packaged into an isofs. This isofs is then added to 122 | the `containerDisk` and unpacked into the root file system with the cloudinit runcmd. 
123 | 124 | - Following the [containerDisk Workflow Example](https://kubevirt.io/user-guide/storage/disks_and_volumes/#containerdisk-workflow-example), 125 | we need to add the initramfs, kernel, root file system and kernel modules isofs to our custom-containerdisk: 126 | ```dockerfile 127 | FROM scratch 128 | ADD --chown=107:107 initramfs /boot/ 129 | ADD --chown=107:107 vmlinuz /boot/ 130 | ADD --chown=107:107 https://download.fedoraproject.org/pub/fedora/linux/releases/42/Cloud/x86_64/images/Fedora-Cloud-Base-Generic-42-1.1.x86_64.qcow2 /disk/ 131 | ADD --chown=107:107 kernel-modules.isofs /boot/ 132 | ``` 133 | - The `kernelArgs` must specify the correct root device. In this case, the arguments were simply copied from the cloud 134 | image's `/proc/cmdline`. 135 | -------------------------------------------------------------------------------- /docs/user_workloads/component_monitoring.md: -------------------------------------------------------------------------------- 1 | # Component monitoring 2 | 3 | All KubeVirt system-components expose Prometheus metrics at their 4 | `/metrics` REST endpoint. 5 | 6 | You can consult the complete and up-to-date metric list at [kubevirt/monitoring](https://github.com/kubevirt/monitoring/blob/main/docs/metrics.md). 7 | 8 | ## Custom Service Discovery 9 | 10 | Prometheus supports service discovery based on 11 | [Pods](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#pod) 12 | and 13 | [Endpoints](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#endpoints) 14 | out of the box. Both can be used to discover KubeVirt services. 15 | 16 | All Pods which expose metrics are labeled with `prometheus.kubevirt.io` 17 | and contain a port-definition which is called `metrics`. In the KubeVirt 18 | release-manifests, the default `metrics` port is `8443`. 19 | 20 | The above labels and port informations are collected by a `Service` 21 | called `kubevirt-prometheus-metrics`. Kubernetes automatically creates a 22 | corresponding `Endpoint` with an equal name: 23 | 24 | ```bash 25 | kubectl get endpoints -n kubevirt kubevirt-prometheus-metrics -o yaml 26 | ``` 27 | 28 | ```yaml 29 | apiVersion: v1 30 | kind: Endpoints 31 | metadata: 32 | labels: 33 | kubevirt.io: "" 34 | prometheus.kubevirt.io: "" 35 | name: kubevirt-prometheus-metrics 36 | namespace: kubevirt 37 | subsets: 38 | - addresses: 39 | - ip: 10.244.0.5 40 | nodeName: node01 41 | targetRef: 42 | kind: Pod 43 | name: virt-handler-cjzg6 44 | namespace: kubevirt 45 | resourceVersion: "4891" 46 | uid: c67331f9-bfcf-11e8-bc54-525500d15501 47 | - ip: 10.244.0.6 48 | [...] 49 | ports: 50 | - name: metrics 51 | port: 8443 52 | protocol: TCP 53 | ``` 54 | 55 | By watching this endpoint for added and removed IPs to 56 | `subsets.addresses` and appending the `metrics` port from 57 | `subsets.ports`, it is possible to always get a complete list of 58 | ready-to-be-scraped Prometheus targets. 59 | 60 | ## Integrating with the prometheus-operator 61 | 62 | The [prometheus-operator](https://github.com/coreos/prometheus-operator) 63 | can make use of the `kubevirt-prometheus-metrics` service to 64 | automatically create the appropriate Prometheus config. 65 | 66 | KubeVirt's `virt-operator` checks if the `ServiceMonitor` custom 67 | resource exists when creating an install strategy for deployment. 68 | KubeVirt will automatically create a `ServiceMonitor` resource in the 69 | `monitorNamespace`, as well as an appropriate role and rolebinding in 70 | KubeVirt's namespace. 
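Assuming the default `openshift-monitoring` monitor namespace (see the settings described below), a quick way to confirm that the `ServiceMonitor` was created is:

```bash
$ kubectl get servicemonitors -n openshift-monitoring
```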
71 | 72 | Three settings are exposed in the `KubeVirt` custom resource to direct 73 | KubeVirt to create these resources correctly: 74 | 75 | - `monitorNamespace`: The namespace that prometheus-operator runs in. 76 | Defaults to `openshift-monitoring`. 77 | 78 | - `monitorAccount`: The serviceAccount that prometheus-operator runs 79 | with. Defaults to `prometheus-k8s`. 80 | 81 | - `serviceMonitorNamespace`: The namespace that the serviceMonitor runs in. 82 | Defaults to be `monitorNamespace` 83 | 84 | Please note that if you decide to set `serviceMonitorNamespace` than this 85 | namespace must be included in `serviceMonitorNamespaceSelector` field of 86 | Prometheus spec. 87 | 88 | If the prometheus-operator for a given deployment uses these defaults, 89 | then these values can be omitted. 90 | 91 | An example of the KubeVirt resource depicting these default values: 92 | 93 | ```yaml 94 | apiVersion: kubevirt.io/v1 95 | kind: KubeVirt 96 | metadata: 97 | name: kubevirt 98 | spec: 99 | monitorNamespace: openshift-monitoring 100 | monitorAccount: prometheus-k8s 101 | ``` 102 | ## Integrating with the OKD cluster-monitoring-operator 103 | 104 | After the 105 | [cluster-monitoring-operator](https://github.com/openshift/cluster-monitoring-operator) 106 | is up and running, KubeVirt will detect the existence of the 107 | `ServiceMonitor` resource. Because the definition contains the 108 | `openshift.io/cluster-monitoring` label, it will automatically be picked 109 | up by the cluster monitor. 110 | 111 | ## Metrics about Virtual Machines 112 | 113 | The endpoints report metrics related to the runtime behaviour of the 114 | Virtual Machines. All the relevant metrics are prefixed with 115 | `kubevirt_vmi`. 116 | 117 | The metrics have labels that allow to connect to the VMI objects they 118 | refer to. At minimum, the labels will expose `node`, `name` and 119 | `namespace` of the related VMI object. 120 | 121 | For example, reported metrics could look like 122 | 123 | ``` 124 | kubevirt_vmi_memory_resident_bytes{domain="default_vm-test-01",name="vm-test-01",namespace="default",node="node01"} 2.5595904e+07 125 | kubevirt_vmi_network_traffic_bytes_total{domain="default_vm-test-01",interface="vnet0",name="vm-test-01",namespace="default",node="node01",type="rx"} 8431 126 | kubevirt_vmi_network_traffic_bytes_total{domain="default_vm-test-01",interface="vnet0",name="vm-test-01",namespace="default",node="node01",type="tx"} 1835 127 | kubevirt_vmi_vcpu_seconds_total{domain="default_vm-test-01",id="0",name="vm-test-01",namespace="default",node="node01",state="1"} 19 128 | ``` 129 | 130 | Please note the `domain` label in the above example. This label is 131 | deprecated and it will be removed in a future release. You should 132 | identify the VMI using the `node`, `namespace`, `name` labels instead. 
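For example, a sketch of a query that aggregates VMI network traffic using the recommended labels instead of `domain` (adjust the metric and rate window to your needs):

```
sum by (namespace, name) (rate(kubevirt_vmi_network_traffic_bytes_total[5m]))
```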
133 | 134 | ## Important Queries 135 | 136 | ### Detecting connection issues for the REST client 137 | 138 | Use the following query to get a counter for all REST call which 139 | indicate connection issues: 140 | 141 | rest_client_requests_total{code=""} 142 | 143 | If this counter is continuously increasing, it is an indicator that the 144 | corresponding KubeVirt component has general issues to connect to the 145 | apiserver 146 | -------------------------------------------------------------------------------- /docs/compute/mediated_devices_configuration.md: -------------------------------------------------------------------------------- 1 | # Mediated devices and virtual GPUs 2 | ## Configuring mediated devices and virtual GPUs 3 | 4 | KubeVirt aims to facilitate the configuration of mediated devices on large clusters. 5 | Administrators can use the `mediatedDevicesConfiguration` API in the KubeVirt CR to 6 | create or remove mediated devices in a declarative way, by providing a list of the desired mediated device types that they expect to be configured in the cluster. 7 | 8 | You can also include the `nodeMediatedDeviceTypes` option to provide a more specific configuration that targets a specific node or a group of nodes directly with a node selector. 9 | The `nodeMediatedDeviceTypes` option must be used in combination with `mediatedDevicesTypes` 10 | in order to override the global configuration set in the `mediatedDevicesTypes` section. 11 | 12 | KubeVirt will use the provided configuration to automatically create the relevant mdev/vGPU devices on nodes that can support it. 13 | 14 | Currently, a single mdev type per card will be configured. 15 | The maximum amount of instances of the selected mdev type will be configured per card. 16 | 17 | > Note: Some vendors, such as NVIDIA, require a driver to be installed on the nodes to provide mediated devices, including vGPUs. 18 | 19 | Example snippet of a KubeVirt CR configuration that includes both `nodeMediatedDeviceTypes` and `mediatedDevicesTypes`: 20 | ```yaml 21 | spec: 22 | configuration: 23 | mediatedDevicesConfiguration: 24 | mediatedDevicesTypes: 25 | - nvidia-222 26 | - nvidia-228 27 | nodeMediatedDeviceTypes: 28 | - nodeSelector: 29 | kubernetes.io/hostname: nodeName 30 | mediatedDevicesTypes: 31 | - nvidia-234 32 | ``` 33 | 34 | ## Configuration scenarios 35 | ### Example: Large cluster with multiple cards on each node 36 | 37 | On nodes with multiple cards that can support similar vGPU types, the relevant desired types will be created in a round-robin manner. 38 | 39 | For example, considering the following KubeVirt CR configuration: 40 | 41 | ```yaml 42 | spec: 43 | configuration: 44 | mediatedDevicesConfiguration: 45 | mediatedDevicesTypes: 46 | - nvidia-222 47 | - nvidia-228 48 | - nvidia-105 49 | - nvidia-108 50 | ``` 51 | 52 | This cluster has nodes with two different PCIe cards: 53 | 54 | 1. Nodes with 3 Tesla T4 cards, where each card can support multiple devices types: 55 | * nvidia-222 56 | * nvidia-223 57 | * nvidia-228 58 | * ... 59 | 60 | 2. Nodes with 2 Tesla V100 cards, where each card can support multiple device types: 61 | * nvidia-105 62 | * nvidia-108 63 | * nvidia-217 64 | * nvidia-299 65 | * ... 66 | 67 | KubeVirt will then create the following devices: 68 | 69 | 1. Nodes with 3 Tesla T4 cards will be configured with: 70 | * 16 vGPUs of type nvidia-222 on card 1 71 | * 2 vGPUs of type nvidia-228 on card 2 72 | * 16 vGPUs of type nvidia-222 on card 3 73 | 2. 
Nodes with 2 Tesla V100 cards will be configured with: 74 | * 16 vGPUs of type nvidia-105 on card 1 75 | * 2 vGPUs of type nvidia-108 on card 2 76 | 77 | 78 | ### Example: Single card on a node, multiple desired vGPU types are supported 79 | 80 | When nodes only have a single card, the first supported type from the list will be configured. 81 | 82 | For example, consider the following list of desired types, where nvidia-223 and nvidia-224 are supported: 83 | 84 | ```yaml 85 | spec: 86 | configuration: 87 | mediatedDevicesConfiguration: 88 | mediatedDevicesTypes: 89 | - nvidia-223 90 | - nvidia-224 91 | ``` 92 | In this case, nvidia-223 will be configured on the node because it is the first supported type in the list. 93 | 94 | ## Overriding configuration on a specifc node 95 | 96 | To override the global configuration set by `mediatedDevicesTypes`, include the `nodeMediatedDeviceTypes` option, specifying the node selector and the `mediatedDevicesTypes` that you want to override for that node. 97 | 98 | ### Example: Overriding the configuration for a specific node in a large cluster with multiple cards on each node 99 | 100 | In this example, the KubeVirt CR includes the `nodeMediatedDeviceTypes` option to override the global configuration specifically for node 2, which will only use the nvidia-234 type. 101 | 102 | ```yaml 103 | spec: 104 | configuration: 105 | mediatedDevicesConfiguration: 106 | mediatedDevicesTypes: 107 | - nvidia-230 108 | - nvidia-223 109 | - nvidia-224 110 | nodeMediatedDeviceTypes: 111 | - nodeSelector: 112 | kubernetes.io/hostname: node2 113 | mediatedDevicesTypes: 114 | - nvidia-234 115 | ``` 116 | 117 | The cluster has two nodes that both have 3 Tesla T4 cards. 118 | 119 | * Each card can support a long list of types, including: 120 | * nvidia-222 121 | * nvidia-223 122 | * nvidia-224 123 | * nvidia-230 124 | * ... 125 | 126 | KubeVirt will then create the following devices: 127 | 128 | 1. Node 1 129 | * type nvidia-230 on card 1 130 | * type nvidia-223 on card 2 131 | 2. Node 2 132 | * type nvidia-234 on card 1 and card 2 133 | 134 | Node 1 has been configured in a round-robin manner based on the global configuration but node 2 only uses the nvidia-234 that was specified for it. 135 | 136 | ## Updating and Removing vGPU types 137 | 138 | Changes made to the `mediatedDevicesTypes` section of the KubeVirt CR will trigger a re-evaluation of the configured mdevs/vGPU types on the cluster nodes. 139 | 140 | Any change to the node labels that match the `nodeMediatedDeviceTypes` nodeSelector in the KubeVirt CR will trigger a similar re-evaluation. 141 | 142 | Consequently, mediated devices will be reconfigured or entirely removed based on the updated configuration. 143 | 144 | ## Assigning vGPU/MDEV to a Virtual Machine 145 | See the [Host Devices Assignment](../compute/host-devices.md) to learn how to consume the newly created mediated devices/vGPUs. 146 | -------------------------------------------------------------------------------- /docs/user_workloads/guest_operating_system_information.md: -------------------------------------------------------------------------------- 1 | # Guest Operating System Information 2 | 3 | Guest operating system identity for the VirtualMachineInstance will be 4 | provided by the label `kubevirt.io/os` : 5 | 6 | ```yaml 7 | metadata: 8 | name: myvmi 9 | labels: 10 | kubevirt.io/os: win2k12r2 11 | ``` 12 | 13 | The `kubevirt.io/os` label is based on the short OS identifier from 14 | [libosinfo](https://libosinfo.org/) database. 
The following Short IDs 15 | are currently supported: 16 |
| Short ID  | Name                             | Version | Family | ID                                                                          |
| --------- | -------------------------------- | ------- | ------ | --------------------------------------------------------------------------- |
| win2k12r2 | Microsoft Windows Server 2012 R2 | 6.3     | winnt  | https://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-2012-r2 |
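Once the label is set, VirtualMachineInstances can be selected by guest OS like any other labelled resource, for example (an illustrative kubectl invocation):

```
kubectl get vmis -l kubevirt.io/os=win2k12r2
```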
44 | 45 | ## Use with presets 46 | 47 | A VirtualMachineInstancePreset representing an operating system with a 48 | `kubevirt.io/os` label could be applied on any given 49 | VirtualMachineInstance that have and match the `kubevirt.io/os` label. 50 | 51 | Default presets for the OS identifiers above are included in the current 52 | release. 53 | 54 | ### Windows Server 2012R2 `VirtualMachineInstancePreset` Example 55 | 56 | ```yaml 57 | apiVersion: kubevirt.io/v1 58 | kind: VirtualMachineInstancePreset 59 | metadata: 60 | name: windows-server-2012r2 61 | selector: 62 | matchLabels: 63 | kubevirt.io/os: win2k12r2 64 | spec: 65 | domain: 66 | cpu: 67 | cores: 2 68 | resources: 69 | requests: 70 | memory: 2G 71 | features: 72 | acpi: {} 73 | apic: {} 74 | hyperv: 75 | relaxed: {} 76 | vapic: {} 77 | spinlocks: 78 | spinlocks: 8191 79 | clock: 80 | utc: {} 81 | timer: 82 | hpet: 83 | present: false 84 | pit: 85 | tickPolicy: delay 86 | rtc: 87 | tickPolicy: catchup 88 | hyperv: {} 89 | --- 90 | apiVersion: kubevirt.io/v1 91 | kind: VirtualMachineInstance 92 | metadata: 93 | labels: 94 | kubevirt.io/os: win2k12r2 95 | name: windows2012r2 96 | spec: 97 | terminationGracePeriodSeconds: 0 98 | domain: 99 | firmware: 100 | uuid: 5d307ca9-b3ef-428c-8861-06e72d69f223 101 | devices: 102 | disks: 103 | - name: server2012r2 104 | disk: 105 | dev: vda 106 | volumes: 107 | - name: server2012r2 108 | persistentVolumeClaim: 109 | claimName: my-windows-image 110 | ``` 111 | 112 | Once the `VirtualMachineInstancePreset` is applied to the 113 | `VirtualMachineInstance`, the resulting resource would look like this: 114 | 115 | ```yaml 116 | apiVersion: kubevirt.io/v1 117 | kind: VirtualMachineInstance 118 | metadata: 119 | annotations: 120 | presets.virtualmachineinstances.kubevirt.io/presets-applied: kubevirt.io/v1 121 | virtualmachineinstancepreset.kubevirt.io/windows-server-2012r2: kubevirt.io/v1 122 | labels: 123 | kubevirt.io/os: win2k12r2 124 | name: windows2012r2 125 | spec: 126 | terminationGracePeriodSeconds: 0 127 | domain: 128 | cpu: 129 | cores: 2 130 | resources: 131 | requests: 132 | memory: 2G 133 | features: 134 | acpi: {} 135 | apic: {} 136 | hyperv: 137 | relaxed: {} 138 | vapic: {} 139 | spinlocks: 140 | spinlocks: 8191 141 | clock: 142 | utc: {} 143 | timer: 144 | hpet: 145 | present: false 146 | pit: 147 | tickPolicy: delay 148 | rtc: 149 | tickPolicy: catchup 150 | hyperv: {} 151 | firmware: 152 | uuid: 5d307ca9-b3ef-428c-8861-06e72d69f223 153 | devices: 154 | disks: 155 | - name: server2012r2 156 | disk: 157 | dev: vda 158 | volumes: 159 | - name: server2012r2 160 | persistentVolumeClaim: 161 | claimName: my-windows-image 162 | ``` 163 | 164 | For more information see [VirtualMachineInstancePresets](../user_workloads/presets.md) 165 | 166 | ### HyperV optimizations 167 | 168 | KubeVirt supports quite a lot of so-called "HyperV enlightenments", 169 | which are optimizations for Windows Guests. Some of these optimization 170 | may require an up to date host kernel support to work properly, or to 171 | deliver the maximum performance gains. 172 | 173 | KubeVirt can perform extra checks on the hosts before to run Hyper-V 174 | enabled VMs, to make sure the host has no known issues with Hyper-V 175 | support, properly expose all the required features and thus we can 176 | expect optimal performance. 
These checks are disabled by default for 177 | backward compatibility and because they depend on the 178 | [node-feature-discovery](https://github.com/kubernetes-sigs/node-feature-discovery) 179 | and on extra configuration. 180 | 181 | To enable strict host checking, the user may expand the `featureGates` 182 | field in the KubeVirt CR by adding the `HypervStrictCheck` to it. 183 | 184 | ```yaml 185 | apiVersion: kubevirt.io/v1 186 | kind: Kubevirt 187 | metadata: 188 | name: kubevirt 189 | namespace: kubevirt 190 | spec: 191 | ... 192 | configuration: 193 | developerConfiguration: 194 | featureGates: 195 | - "HypervStrictCheck" 196 | ``` 197 | 198 | Alternatively, users can edit an existing kubevirt CR: 199 | 200 | ```bash 201 | kubectl edit kubevirt kubevirt -n kubevirt 202 | ``` 203 | 204 | ```yaml 205 | ... 206 | spec: 207 | configuration: 208 | developerConfiguration: 209 | featureGates: 210 | - "HypervStrictCheck" 211 | - "CPUManager" 212 | ``` 213 | -------------------------------------------------------------------------------- /docs/compute/node_assignment.md: -------------------------------------------------------------------------------- 1 | # Node assignment 2 | 3 | You can constrain the VM to only run on specific nodes or to prefer 4 | running on specific nodes: 5 | 6 | - **nodeSelector** 7 | - **Affinity and anti-affinity** 8 | - **Taints and Tolerations** 9 | 10 | 11 | ## nodeSelector 12 | 13 | Setting `spec.nodeSelector` requirements, constrains the scheduler to 14 | only schedule VMs on nodes, which contain the specified labels. In the 15 | following example the vmi contains the labels `cpu: slow` and 16 | `storage: fast`: 17 | 18 | ```yaml 19 | metadata: 20 | name: testvmi-ephemeral 21 | apiVersion: kubevirt.io/v1 22 | kind: VirtualMachineInstance 23 | spec: 24 | nodeSelector: 25 | cpu: slow 26 | storage: fast 27 | domain: 28 | resources: 29 | requests: 30 | memory: 64M 31 | devices: 32 | disks: 33 | - name: mypvcdisk 34 | lun: {} 35 | volumes: 36 | - name: mypvcdisk 37 | persistentVolumeClaim: 38 | claimName: mypvc 39 | ``` 40 | 41 | Thus the scheduler will only schedule the vmi to nodes which contain 42 | these labels in their metadata. It works exactly like the Pods 43 | `nodeSelector`. See the [Pod nodeSelector 44 | Documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) 45 | for more examples. 46 | 47 | 48 | ## Affinity and anti-affinity 49 | 50 | The `spec.affinity` field allows specifying hard- and soft-affinity for 51 | VMs. It is possible to write matching rules against workloads (VMs and 52 | Pods) and Nodes. Since VMs are a workload type based on Pods, 53 | Pod-affinity affects VMs as well. 
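Node-affinity rules follow the same schema as for Pods. As a minimal sketch (the label key and value below are illustrative), a VMI can be restricted to amd64 nodes with:

```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/arch
            operator: In
            values:
            - amd64
```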
54 | 55 | An example for `podAffinity` and `podAntiAffinity` may look like this: 56 | 57 | ```yaml 58 | metadata: 59 | name: testvmi-ephemeral 60 | apiVersion: kubevirt.io/v1 61 | kind: VirtualMachineInstance 62 | spec: 63 | nodeSelector: 64 | cpu: slow 65 | storage: fast 66 | domain: 67 | resources: 68 | requests: 69 | memory: 64M 70 | devices: 71 | disks: 72 | - name: mypvcdisk 73 | lun: {} 74 | affinity: 75 | podAffinity: 76 | requiredDuringSchedulingIgnoredDuringExecution: 77 | - labelSelector: 78 | matchExpressions: 79 | - key: security 80 | operator: In 81 | values: 82 | - S1 83 | topologyKey: failure-domain.beta.kubernetes.io/zone 84 | podAntiAffinity: 85 | preferredDuringSchedulingIgnoredDuringExecution: 86 | - weight: 100 87 | podAffinityTerm: 88 | labelSelector: 89 | matchExpressions: 90 | - key: security 91 | operator: In 92 | values: 93 | - S2 94 | topologyKey: kubernetes.io/hostname 95 | volumes: 96 | - name: mypvcdisk 97 | persistentVolumeClaim: 98 | claimName: mypvc 99 | ``` 100 | 101 | Affinity and anti-affinity works exactly like the Pods `affinity`. This 102 | includes `podAffinity`, `podAntiAffinity`, `nodeAffinity` and 103 | `nodeAntiAffinity`. See the [Pod affinity and anti-affinity 104 | Documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) 105 | for more examples and details. 106 | 107 | 108 | ## Taints and Tolerations 109 | 110 | Affinity as described above, is a property of VMs that attracts them to 111 | a set of nodes (either as a preference or a hard requirement). Taints 112 | are the opposite - they allow a node to repel a set of VMs. 113 | 114 | Taints and tolerations work together to ensure that VMs are not 115 | scheduled onto inappropriate nodes. One or more taints are applied to a 116 | node; this marks that the node should not accept any VMs that do not 117 | tolerate the taints. Tolerations are applied to VMs, and allow (but do 118 | not require) the VMs to schedule onto nodes with matching taints. 119 | 120 | You add a taint to a node using kubectl taint. For example, 121 | 122 | kubectl taint nodes node1 key=value:NoSchedule 123 | 124 | An example for `tolerations` may look like this: 125 | 126 | ```yaml 127 | metadata: 128 | name: testvmi-ephemeral 129 | apiVersion: kubevirt.io/v1 130 | kind: VirtualMachineInstance 131 | spec: 132 | nodeSelector: 133 | cpu: slow 134 | storage: fast 135 | domain: 136 | resources: 137 | requests: 138 | memory: 64M 139 | devices: 140 | disks: 141 | - name: mypvcdisk 142 | lun: {} 143 | tolerations: 144 | - key: "key" 145 | operator: "Equal" 146 | value: "value" 147 | effect: "NoSchedule" 148 | ``` 149 | ## Node balancing with Descheduler 150 | 151 | In some cases we might need to rebalance the cluster on current scheduling policy 152 | and load conditions. [Descheduler](https://github.com/kubernetes-sigs/descheduler) 153 | can find pods, which violates e.g. scheduling decisions and evict them based on descheduler 154 | policies. Kubevirt VMs are handled as pods with local storage, so by default, 155 | descheduler will not evict them. But it can be easily overridden by adding special 156 | annotation to the VMI template in the VM: 157 | 158 | ```console 159 | spec: 160 | template: 161 | metadata: 162 | annotations: 163 | descheduler.alpha.kubernetes.io/evict: true 164 | ``` 165 | 166 | This annotation will cause, that the descheduler will be able to evict the VM's pod which can then be 167 | scheduled by scheduler on different nodes. 
A VirtualMachine will never restart or re-create a 168 | VirtualMachineInstance until the current instance of the VirtualMachineInstance is deleted from the cluster. 169 | 170 | ## Live update 171 | 172 | When the [VM rollout strategy](../user_workloads/vm_rollout_strategies.md) is set to `LiveUpdate`, changes to a VM's 173 | node selector or affinities will dynamically propagate to the VMI (unless the `RestartRequired` condition is set). 174 | Changes to tolerations will not dynamically propagate, and will trigger a `RestartRequired` condition if changed on a 175 | running VM. 176 | 177 | Modifications of the node selector / affinities will only take effect on the next [migration](live_migration.md); the change 178 | alone will not trigger one. 179 | -------------------------------------------------------------------------------- /docs/network/istio_service_mesh.md: -------------------------------------------------------------------------------- 1 | # Istio service mesh 2 | 3 | A service mesh allows you to monitor, visualize and control traffic between pods. 4 | KubeVirt supports running VMs as part of an Istio service mesh. 5 | 6 | ## Limitations 7 | 8 | - Istio service mesh is only supported with the pod network masquerade or passt binding. 9 | 10 | - Istio uses a [list of ports](https://istio.io/latest/docs/ops/deployment/requirements/#ports-used-by-istio) for its own purposes; these ports must not be explicitly specified in a VMI interface. 11 | 12 | - Istio only supports IPv4. 13 | 14 | ## Prerequisites 15 | 16 | - This guide assumes that Istio is already deployed and uses the Istio CNI Plugin. See the [Istio documentation](https://istio.io/latest/docs/) for more information. 17 | 18 | - Optionally, the `istioctl` binary for troubleshooting. See the Istio [installation instructions](https://istio.io/latest/docs/setup/getting-started/). 19 | 20 | - The target namespace where the VM is created must be labelled with the `istio-injection=enabled` label. 21 | 22 | - If Multus is used to manage CNI, the following `NetworkAttachmentDefinition` is required in the application namespace: 23 | ```yaml 24 | apiVersion: "k8s.cni.cncf.io/v1" 25 | kind: NetworkAttachmentDefinition 26 | metadata: 27 | name: istio-cni 28 | ``` 29 | 30 | ## Create a VirtualMachineInstance with Istio proxy injection enabled 31 | 32 | The example below specifies a VMI with a masquerade network interface and the `sidecar.istio.io/inject` annotation to register the VM with the service mesh. 33 | 34 | ```yaml 35 | apiVersion: kubevirt.io/v1 36 | kind: VirtualMachineInstance 37 | metadata: 38 | annotations: 39 | sidecar.istio.io/inject: "true" 40 | labels: 41 | app: vmi-istio 42 | name: vmi-istio 43 | spec: 44 | domain: 45 | devices: 46 | interfaces: 47 | - name: default 48 | masquerade: {} 49 | disks: 50 | - disk: 51 | bus: virtio 52 | name: containerdisk 53 | resources: 54 | requests: 55 | memory: 1024M 56 | networks: 57 | - name: default 58 | pod: {} 59 | terminationGracePeriodSeconds: 0 60 | volumes: 61 | - name: containerdisk 62 | containerDisk: 63 | image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel 64 | ``` 65 | 66 | Istio expects each application to be associated with at least one Kubernetes service.
Create the following Service exposing port 8080: 67 | 68 | ```yaml 69 | apiVersion: v1 70 | kind: Service 71 | metadata: 72 | name: vmi-istio 73 | spec: 74 | selector: 75 | app: vmi-istio 76 | ports: 77 | - port: 8080 78 | name: http 79 | protocol: TCP 80 | ``` 81 | 82 | **Note:** Each Istio enabled VMI must feature the `sidecar.istio.io/inject` annotation instructing KubeVirt to perform necessary network configuration. 83 | 84 | ## Verification 85 | 86 | Verify istio-proxy sidecar is deployed and able to synchronize with Istio control plane using `istioctl proxy-status` command. See Istio [Debbuging Envoy and Istiod](https://istio.io/latest/docs/ops/diagnostic-tools/proxy-cmd/) documentation section for more information about `proxy-status` subcommand. 87 | 88 | ```shell 89 | $ kubectl get pods 90 | NAME READY STATUS RESTARTS AGE 91 | virt-launcher-vmi-istio-ncx7r 3/3 Running 0 7s 92 | 93 | $ kubectl get pods virt-launcher-vmi-istio-ncx7r -o jsonpath='{.spec.containers[*].name}' 94 | compute volumecontainerdisk istio-proxy 95 | 96 | $ istioctl proxy-status 97 | NAME CDS LDS EDS RDS ISTIOD VERSION 98 | ... 99 | virt-launcher-vmi-istio-ncx7r.default SYNCED SYNCED SYNCED SYNCED istiod-7c4d8c7757-hshj5 1.10.0 100 | ``` 101 | 102 | ## Troubleshooting 103 | 104 | ### Istio sidecar is not deployed 105 | 106 | ```shell 107 | $ kubectl get pods 108 | NAME READY STATUS RESTARTS AGE 109 | virt-launcher-vmi-istio-jnw6p 2/2 Running 0 37s 110 | 111 | $ kubectl get pods virt-launcher-vmi-istio-jnw6p -o jsonpath='{.spec.containers[*].name}' 112 | compute volumecontainerdisk 113 | ``` 114 | 115 | **Resolution:** Make sure the `istio-injection=enabled` is added to the target namespace. If the issue persists, consult [relevant part of Istio documentation](https://istio.io/latest/docs/ops/configuration/mesh/injection-concepts/). 116 | 117 | ### Istio sidecar is not ready 118 | ```shell 119 | $ kubectl get pods 120 | NAME READY STATUS RESTARTS AGE 121 | virt-launcher-vmi-istio-lg5gp 2/3 Running 0 90s 122 | 123 | $ kubectl describe pod virt-launcher-vmi-istio-lg5gp 124 | ... 125 | Warning Unhealthy 2d8h (x3 over 2d8h) kubelet Readiness probe failed: Get "http://10.244.186.222:15021/healthz/ready": dial tcp 10.244.186.222:15021: connect: no route to host 126 | Warning Unhealthy 2d8h (x4 over 2d8h) kubelet Readiness probe failed: Get "http://10.244.186.222:15021/healthz/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) 127 | ``` 128 | 129 | **Resolution:** Make sure the `sidecar.istio.io/inject: "true"` annotation is defined in the created VMI and that masquerade or passt binding is used for pod network interface. 130 | 131 | ### Virt-launcher pod for VMI is stuck at initialization phase 132 | ```shell 133 | $ kubectl get pods 134 | NAME READY STATUS RESTARTS AGE 135 | virt-launcher-vmi-istio-44mws 0/3 Init:0/3 0 29s 136 | 137 | $ kubectl describe pod virt-launcher-vmi-istio-44mws 138 | ... 139 | Multus: [default/virt-launcher-vmi-istio-44mws]: error loading k8s delegates k8s args: TryLoadPodDelegates: error in getting k8s network for pod: GetNetworkDelegates: failed getting the delegate: getKubernetesDelegate: cannot find a network-attachment-definition (istio-cni) in namespace (default): network-attachment-definitions.k8s.cni.cncf.io "istio-cni" not found 140 | 141 | ``` 142 | 143 | **Resolution:** Make sure the `istio-cni` NetworkAttachmentDefinition (provided in the [Prerequisites](#prerequisites) section) is created in the target namespace. 
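For example, assuming the VMI runs in the `default` namespace (as in the error message above), the missing definition could be created with a single command:

```shell
kubectl apply -n default -f - <<EOF
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: istio-cni
EOF
```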
144 | -------------------------------------------------------------------------------- /docs/network/service_objects.md: -------------------------------------------------------------------------------- 1 | # Service objects 2 | 3 | Once the VirtualMachineInstance is started, in order to connect to a 4 | VirtualMachineInstance, you can create a `Service` object for a 5 | VirtualMachineInstance. Currently, three types of service are supported: 6 | `ClusterIP`, `NodePort` and `LoadBalancer`. The default type is 7 | `ClusterIP`. 8 | 9 | > **Note**: Labels on a VirtualMachineInstance are passed through to the 10 | > pod, so simply add your labels for service creation to the 11 | > VirtualMachineInstance. From there on it works like exposing any other 12 | > k8s resource, by referencing these labels in a service. 13 | 14 | 15 | ## Expose VirtualMachineInstance as a ClusterIP Service 16 | 17 | Give a VirtualMachineInstance with the label `special: key`: 18 | 19 | ```yaml 20 | apiVersion: kubevirt.io/v1 21 | kind: VirtualMachineInstance 22 | metadata: 23 | name: vmi-ephemeral 24 | labels: 25 | special: key 26 | spec: 27 | domain: 28 | devices: 29 | disks: 30 | - disk: 31 | bus: virtio 32 | name: containerdisk 33 | resources: 34 | requests: 35 | memory: 64M 36 | volumes: 37 | - name: containerdisk 38 | containerDisk: 39 | image: kubevirt/cirros-registry-disk-demo:latest 40 | ``` 41 | 42 | we can expose its SSH port (22) by creating a `ClusterIP` service: 43 | 44 | ```yaml 45 | apiVersion: v1 46 | kind: Service 47 | metadata: 48 | name: vmiservice 49 | spec: 50 | ports: 51 | - port: 27017 52 | protocol: TCP 53 | targetPort: 22 54 | selector: 55 | special: key 56 | type: ClusterIP 57 | ``` 58 | 59 | You just need to create this `ClusterIP` service by using `kubectl`: 60 | 61 | $ kubectl create -f vmiservice.yaml 62 | 63 | Alternatively, the VirtualMachineInstance could be exposed using the 64 | `virtctl` command: 65 | 66 | $ virtctl expose virtualmachineinstance vmi-ephemeral --name vmiservice --port 27017 --target-port 22 67 | 68 | Notes: \* If `--target-port` is not set, it will be take the same value 69 | as `--port` \* The cluster IP is usually allocated automatically, but it 70 | may also be forced into a value using the `--cluster-ip` flag (assuming 71 | value is in the valid range and not taken) 72 | 73 | Query the service object: 74 | 75 | $ kubectl get service 76 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 77 | vmiservice ClusterIP 172.30.3.149 27017/TCP 2m 78 | 79 | You can connect to the VirtualMachineInstance by service IP and service 80 | port inside the cluster network: 81 | 82 | $ ssh cirros@172.30.3.149 -p 27017 83 | 84 | 85 | ## Expose VirtualMachineInstance as a NodePort Service 86 | 87 | To expose a VirtualMachineInstance (VMI) via a Kubernetes service (e.g., NodePort, ClusterIP, LoadBalancer), ensure the associated VirtualMachine (VM) template includes the appropriate labels. These labels are inherited by the underlying pod created by the VMI, allowing the service to target the correct pod via its selector. 88 | 89 | For further details, refer to Kubernetes documentation on [Services and Selectors](https://kubernetes.io/docs/concepts/services-networking/service/#services-with-selectors). 
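For example, a minimal VirtualMachine sketch whose template carries the `special: key` label used by the services below could look like this (the name and image are illustrative, mirroring the VMI above):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-ephemeral
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        special: key          # inherited by the VMI and its launcher pod, so service selectors can match it
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
        resources:
          requests:
            memory: 64M
      volumes:
      - name: containerdisk
        containerDisk:
          image: kubevirt/cirros-registry-disk-demo:latest
```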
90 | 91 | Expose the SSH port (22) of a VirtualMachineInstance running on KubeVirt 92 | by creating a `NodePort` service: 93 | 94 | ```yaml 95 | apiVersion: v1 96 | kind: Service 97 | metadata: 98 | name: nodeport 99 | spec: 100 | externalTrafficPolicy: Cluster 101 | ports: 102 | - name: nodeport 103 | nodePort: 30000 104 | port: 27017 105 | protocol: TCP 106 | targetPort: 22 107 | selector: 108 | special: key 109 | type: NodePort 110 | ``` 111 | 112 | You just need to create this `NodePort` service by using `kubectl`: 113 | 114 | $ kubectl create -f nodeport.yaml 115 | 116 | Alternatively, the VirtualMachineInstance could be exposed using the 117 | `virtctl` command: 118 | 119 | $ virtctl expose virtualmachineinstance vmi-ephemeral --name nodeport --type NodePort --port 27017 --target-port 22 --node-port 30000 120 | 121 | Notes: \* If `--node-port` is not set, its value will be allocated 122 | dynamically (in the range above 30000) \* If the `--node-port` value is 123 | set, it must be unique across all services 124 | 125 | The service can be listed by querying for the service objects: 126 | 127 | $ kubectl get service 128 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 129 | nodeport NodePort 172.30.232.73 27017:30000/TCP 5m 130 | 131 | Connect to the VirtualMachineInstance by using a node IP and node port 132 | outside the cluster network: 133 | 134 | $ ssh cirros@$NODE_IP -p 30000 135 | 136 | 137 | ## Expose VirtualMachineInstance as a LoadBalancer Service 138 | 139 | Expose the RDP port (3389) of a VirtualMachineInstance running on 140 | KubeVirt by creating `LoadBalancer` service. Here is an example: 141 | 142 | ```yaml 143 | apiVersion: v1 144 | kind: Service 145 | metadata: 146 | name: lbsvc 147 | spec: 148 | externalTrafficPolicy: Cluster 149 | ports: 150 | - port: 27017 151 | protocol: TCP 152 | targetPort: 3389 153 | selector: 154 | special: key 155 | type: LoadBalancer 156 | ``` 157 | 158 | You could create this `LoadBalancer` service by using `kubectl`: 159 | 160 | $ kubectl create -f lbsvc.yaml 161 | 162 | Alternatively, the VirtualMachineInstance could be exposed using the 163 | `virtctl` command: 164 | 165 | $ virtctl expose virtualmachineinstance vmi-ephemeral --name lbsvc --type LoadBalancer --port 27017 --target-port 3389 166 | 167 | Note that the external IP of the service could be forced to a value 168 | using the `--external-ip` flag (no validation is performed on this 169 | value). 170 | 171 | The service can be listed by querying for the service objects: 172 | 173 | $ kubectl get svc 174 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 175 | lbsvc LoadBalancer 172.30.27.5 172.29.10.235,172.29.10.235 27017:31829/TCP 5s 176 | 177 | Use `vinagre` client to connect your VirtualMachineInstance by using the 178 | public IP and port. 179 | 180 | Note that here the external port here (31829) was dynamically allocated. 181 | -------------------------------------------------------------------------------- /docs/cluster_admin/migration_policies.md: -------------------------------------------------------------------------------- 1 | # Migration Policies 2 | 3 | Migration policies provides a new way of applying migration configurations to Virtual Machines. The policies 4 | can refine Kubevirt CR's `MigrationConfiguration` that sets the cluster-wide migration configurations. This way, 5 | the cluster-wide settings serve as a default that can be refined (i.e. changed, removed or added) by the migration 6 | policy. 
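For reference, these cluster-wide defaults live in the KubeVirt CR; a minimal sketch (the field values here are illustrative, under the `spec.configuration.migrations` stanza) might look like:

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    migrations:
      bandwidthPerMigration: 64Mi
      completionTimeoutPerGiB: 800
      parallelMigrationsPerCluster: 5
```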
7 | 8 | Please bear in mind that migration policies are in version `v1alpha1`. This means that this API is not fully stable 9 | yet and that APIs may change in the future. 10 | 11 | ## Overview 12 | 13 | KubeVirt supports [Live Migrations](../compute/live_migration.md) of Virtual Machine workloads. 14 | Before migration policies were introduced, migration settings could be configurable only on the cluster-wide 15 | scope by editing [KubevirtCR's spec](https://kubevirt.io/api-reference/master/definitions.html#_v1_kubevirtspec) 16 | or more specifically [MigrationConfiguration](https://kubevirt.io/api-reference/master/definitions.html#_v1_migrationconfiguration) 17 | CRD. 18 | 19 | Several aspects (although not all) of migration behaviour that can be customized are: 20 | - Bandwidth 21 | - Auto-convergence 22 | - Post/Pre-copy 23 | - Max number of parallel migrations 24 | - Timeout 25 | 26 | Migration policies generalize the concept of defining migration configurations, so it would be 27 | possible to apply different configurations to specific groups of VMs. 28 | 29 | Such capability can be useful for a lot of different use cases on which there is a need to differentiate 30 | between different workloads. Differentiation of different configurations could be needed because different 31 | workloads are considered to be in different priorities, security segregation, workloads with different 32 | requirements, help to converge workloads which aren't migration-friendly, and many other reasons. 33 | 34 | ## API Examples 35 | 36 | #### Migration Configurations 37 | Currently the MigrationPolicy spec will only include the following configurations from KubevirtCR's 38 | MigrationConfiguration (in the future more configurations that aren't part of Kubevirt CR are intended to be added): 39 | ```yaml 40 | apiVersion: migrations.kubevirt.io/v1alpha1 41 | kind: MigrationPolicy 42 | spec: 43 | allowAutoConverge: true 44 | bandwidthPerMigration: 217Ki 45 | completionTimeoutPerGiB: 23 46 | allowPostCopy: false 47 | ``` 48 | 49 | All above fields are optional. When omitted, the configuration will be applied as defined in 50 | KubevirtCR's MigrationConfiguration. This way, KubevirtCR will serve as a configurable set of defaults for both 51 | VMs that are not bound to any MigrationPolicy and VMs that are bound to a MigrationPolicy that does not 52 | define all fields of the configurations. 53 | 54 | #### Matching Policies to VMs 55 | 56 | Next in the spec are the selectors that define the group of VMs on which to apply the policy. The options to do so 57 | are the following. 
58 | 59 | **This policy applies to the VMs in namespaces that have all the required labels:** 60 | ```yaml 61 | apiVersion: migrations.kubevirt.io/v1alpha1 62 | kind: MigrationPolicy 63 | spec: 64 | selectors: 65 | namespaceSelector: 66 | hpc-workloads: true # Matches a key and a value 67 | ``` 68 | 69 | **This policy applies for the VMs that have all the required labels:** 70 | ```yaml 71 | apiVersion: migrations.kubevirt.io/v1alpha1 72 | kind: MigrationPolicy 73 | spec: 74 | selectors: 75 | virtualMachineInstanceSelector: 76 | workload-type: db # Matches a key and a value 77 | ``` 78 | 79 | **It is also possible to combine the previous two:** 80 | ```yaml 81 | apiVersion: migrations.kubevirt.io/v1alpha1 82 | kind: MigrationPolicy 83 | spec: 84 | selectors: 85 | namespaceSelector: 86 | hpc-workloads: true 87 | virtualMachineInstanceSelector: 88 | workload-type: db 89 | ``` 90 | 91 | #### Full Manifest: 92 | 93 | ```yaml 94 | apiVersion: migrations.kubevirt.io/v1alpha1 95 | kind: MigrationPolicy 96 | metadata: 97 | name: my-awesome-policy 98 | spec: 99 | # Migration Configuration 100 | allowAutoConverge: true 101 | bandwidthPerMigration: 217Ki 102 | completionTimeoutPerGiB: 23 103 | allowPostCopy: false 104 | 105 | # Matching to VMs 106 | selectors: 107 | namespaceSelector: 108 | hpc-workloads: true 109 | virtualMachineInstanceSelector: 110 | workload-type: db 111 | ``` 112 | ## Policies' Precedence 113 | 114 | It is possible that multiple policies apply to the same VMI. In such cases, the precedence is in the 115 | same order as the bullets above (VMI labels first, then namespace labels). It is not allowed to define 116 | two policies with the exact same selectors. 117 | 118 | If multiple policies apply to the same VMI: 119 | * The most detailed policy will be applied, that is, the policy with the highest number of matching labels 120 | 121 | * If multiple policies match to a VMI with the same number of matching labels, the policies will be sorted by the 122 | lexicographic order of the matching labels keys. The first one in this order will be applied. 123 | 124 | ### Example 125 | 126 | For example, let's imagine a VMI with the following labels: 127 | 128 | * size: small 129 | 130 | * os: fedora 131 | 132 | * gpu: nvidia 133 | 134 | And let's say the namespace to which the VMI belongs contains the following labels: 135 | 136 | * priority: high 137 | 138 | * bandwidth: medium 139 | 140 | * hpc-workload: true 141 | 142 | The following policies are listed by their precedence (high to low): 143 | 144 | 1) VMI labels: `{size: small, gpu: nvidia}`, Namespace labels: `{priority:high, bandwidth: medium}` 145 | 146 | * Matching labels: 4, First key in lexicographic order: `bandwidth`. 147 | 148 | 2) VMI labels: `{size: small, gpu: nvidia}`, Namespace labels: `{priority:high, hpc-workload:true}` 149 | 150 | * Matching labels: 4, First key in lexicographic order: `gpu`. 151 | 152 | 3) VMI labels: `{size: small, gpu: nvidia}`, Namespace labels: `{priority:high}` 153 | 154 | * Matching labels: 3, First key in lexicographic order: `gpu`. 155 | 156 | 4) VMI labels: `{size: small}`, Namespace labels: `{priority:high, hpc-workload:true}` 157 | 158 | * Matching labels: 3, First key in lexicographic order: `hpc-workload`. 159 | 160 | 5) VMI labels: `{gpu: nvidia}`, Namespace labels: `{priority:high}` 161 | 162 | * Matching labels: 2, First key in lexicographic order: `gpu`. 
163 | 164 | 6) VMI labels: `{gpu: nvidia}`, Namespace labels: `{}` 165 | 166 | * Matching labels: 1, First key in lexicographic order: `gpu`. 167 | 168 | 7) VMI labels: `{gpu: intel}`, Namespace labels: `{priority:high}` 169 | 170 | * VMI label does not match - policy cannot be applied. 171 | -------------------------------------------------------------------------------- /docs/compute/cpu_hotplug.md: -------------------------------------------------------------------------------- 1 | # CPU Hotplug 2 | 3 | The CPU hotplug feature was introduced in KubeVirt v1.0, making it possible to configure the VM workload 4 | to allow for adding or removing virtual CPUs while the VM is running. 5 | 6 | ### Abstract 7 | A **virtual CPU** (vCPU) is the CPU that is seen to the Guest VM OS. A VM owner can manage the amount of vCPUs from the VM spec template using the CPU topology fields (`spec.template.spec.domain.cpu`). The `cpu` object has the integers `cores,sockets,threads` so that the virtual CPU is calculated by the following formula: `cores * sockets * threads`. 8 | 9 | Before CPU hotplug was introduced, the VM owner could change these integers in the VM template while the VM is running, and they were staged until the next boot cycle. With CPU hotplug, it is possible to patch the `sockets` integer in the VM template and the change will take effect right away. 10 | 11 | Per each new socket that is hot-plugged, the amount of new vCPUs that would be seen by the guest is `cores * threads`, since the overall calculation of vCPUs is `cores * sockets * threads`. 12 | 13 | ## Configuration 14 | 15 | ### Configure the workload update strategy 16 | Current implementation of the hotplug process requires the VM to live-migrate. 17 | The migration will be triggered automatically by the workload updater. The workload update strategy in the KubeVirt CR must be configured with `LiveMigrate`, as follows: 18 | 19 | ```yaml 20 | apiVersion: kubevirt.io/v1 21 | kind: KubeVirt 22 | spec: 23 | workloadUpdateStrategy: 24 | workloadUpdateMethods: 25 | - LiveMigrate 26 | ``` 27 | 28 | ### Configure the VM rollout strategy 29 | Hotplug requires a VM rollout strategy of `LiveUpdate`, so that the changes made to the VM object propagate to the VMI without a restart. 30 | This is also done in the KubeVirt CR configuration: 31 | 32 | ```yaml 33 | apiVersion: kubevirt.io/v1 34 | kind: KubeVirt 35 | spec: 36 | configuration: 37 | vmRolloutStrategy: "LiveUpdate" 38 | ``` 39 | 40 | More information can be found on the [VM Rollout Strategies](../user_workloads/vm_rollout_strategies.md) page 41 | 42 | ### [OPTIONAL] Set maximum sockets or hotplug ratio 43 | You can explicitly set the maximum amount of sockets in three ways: 44 | 45 | 1. with a value VM level 46 | 2. with a value at the cluster level 47 | 3. with a ratio at the cluster level (`maxSockets = ratio * sockets`). 48 | 49 | Note: the third way (cluster-level ratio) will also affect other quantitative hotplug resources like memory. 50 | 51 | 52 | 53 | 54 | 59 | 64 | 69 | 70 | 71 | 84 | 96 | 97 | 109 | 110 |
**VM level**

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      domain:
        cpu:
          maxSockets: 8
```

**Cluster level value**

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
spec:
  configuration:
    liveUpdateConfiguration:
      maxCpuSockets: 8
```

**Cluster level ratio**

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
spec:
  configuration:
    liveUpdateConfiguration:
      maxHotplugRatio: 4
```
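For example, the cluster-level value could be applied by patching the KubeVirt CR in place (an illustrative command, assuming the default `kubevirt` resource in the `kubevirt` namespace):

```
kubectl patch kv kubevirt -n kubevirt --type merge \
  -p '{"spec":{"configuration":{"liveUpdateConfiguration":{"maxCpuSockets":8}}}}'
```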
111 | 112 | The VM-level configuration will take precedence over the cluster-wide configuration. 113 | 114 | 115 | 116 | ## Hotplug process 117 | Let's assume we have a running VM with the 4 vCPUs, which were configured with `sockets:4 cores:1 threads:1` 118 | In the VMI status we can observe the current CPU topology the VM is running with: 119 | 120 | ```yaml 121 | apiVersion: kubevirt.io/v1 122 | kind: VirtualMachineInstance 123 | ... 124 | status: 125 | currentCPUTopology: 126 | cores: 1 127 | sockets: 4 128 | threads: 1 129 | ``` 130 | Now we want to hotplug another socket, by patching the VM object: 131 | 132 | ``` 133 | kubectl patch vm vm-cirros --type='json' \ 134 | -p='[{"op": "replace", "path": "/spec/template/spec/domain/cpu/sockets", "value": 5}]' 135 | ``` 136 | We can observe the CPU hotplug process in the VMI status: 137 | 138 | ```yaml 139 | status: 140 | conditions: 141 | - lastProbeTime: null 142 | lastTransitionTime: null 143 | status: "True" 144 | type: LiveMigratable 145 | - lastProbeTime: null 146 | lastTransitionTime: null 147 | status: "True" 148 | type: HotVCPUChange 149 | currentCPUTopology: 150 | cores: 1 151 | sockets: 4 152 | threads: 1 153 | ``` 154 | 155 | Please note the condition `HotVCPUChange` that indicates the hotplug process is taking place. 156 | Also you can notice the VirtualMachineInstanceMigration object that was created for the VM in subject: 157 | 158 | ``` 159 | NAME PHASE VMI 160 | kubevirt-workload-update-kflnl Running vm-cirros 161 | ``` 162 | When the hotplug process has completed, the `currentCPUTopology` will be updated with the new number of sockets and the migration 163 | is marked as successful. 164 | 165 | ```yaml 166 | #kubectl get vmi vm-cirros -oyaml 167 | 168 | apiVersion: kubevirt.io/v1 169 | kind: VirtualMachineInstance 170 | metadata: 171 | name: vm-cirros 172 | spec: 173 | domain: 174 | cpu: 175 | cores: 1 176 | sockets: 5 177 | threads: 1 178 | ... 179 | ... 180 | status: 181 | currentCPUTopology: 182 | cores: 1 183 | sockets: 5 184 | threads: 1 185 | 186 | 187 | #kubectl get vmim -l kubevirt.io/vmi-name=vm-cirros 188 | NAME PHASE VMI 189 | kubevirt-workload-update-cgdgd Succeeded vm-cirros 190 | ``` 191 | 192 | ## Limitations 193 | * VPCU hotplug is currently not supported by ARM64 architecture. 194 | * Current hotplug implementation involves live-migration of the VM workload. 195 | 196 | ### With NetworkInterfaceMultiQueue 197 | 198 | When a VM has VM.spec.template.spec.domain.devices.networkInterfaceMultiQueue set to `true`: 199 | 200 | - On a CPU hotplug scenario, the VM owner will certainly gain the benefit of the additional vCPUs. 201 | However, any network interfaces that were already present before the CPU hotplug will retain their initial queue count. 202 | This is a characteristic of the underlying virtualization technology. 203 | - If the VM owner would like the network interface queue count to align with the new vCPU configuration to maximize performance, a restart of the VM will be required. Of course, this decision is entirely up to the VM owner. 204 | - Any new virtio network interfaces hotplugged after the CPU hotplug - will automatically have their queue count configured to match the updated vCPU configuration. 205 | 206 | > **Note:** This scenario is blocked in release-1.3 and release-1.2. 
-------------------------------------------------------------------------------- /docs/cluster_admin/updating_and_deletion.md: -------------------------------------------------------------------------------- 1 | # Updating and deletion 2 | 3 | ## Updating KubeVirt Control Plane 4 | 5 | Zero downtime rolling updates are supported starting with release 6 | `v0.17.0` onward. Updating from any release prior to the KubeVirt 7 | `v0.17.0` release is not supported. 8 | 9 | > Note: Updating is only supported from N-1 to N release. 10 | 11 | Updates are triggered one of two ways. 12 | 13 | 1. By changing the imageTag value in the KubeVirt CR's spec. 14 | 15 | For example, updating from `v0.17.0-alpha.1` to `v0.17.0` is as simple 16 | as patching the KubeVirt CR with the `imageTag: v0.17.0` value. From 17 | there the KubeVirt operator will begin the process of rolling out the 18 | new version of KubeVirt. Existing VM/VMIs will remain uninterrupted both 19 | during and after the update succeeds. 20 | 21 | ``` 22 | $ kubectl patch kv kubevirt -n kubevirt --type=json -p '[{ "op": "add", "path": "/spec/imageTag", "value": "v0.17.0" }]' 23 | ``` 24 | 25 | 2. Or, by updating the kubevirt operator if no imageTag value is set. 26 | 27 | When no imageTag value is set in the kubevirt CR, the system assumes 28 | that the version of KubeVirt is locked to the version of the operator. 29 | This means that updating the operator will result in the underlying 30 | KubeVirt installation being updated as well. 31 | 32 | ``` 33 | $ export RELEASE=v0.26.0 34 | $ kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml 35 | ``` 36 | 37 | The first way provides a fine granular approach where you have full 38 | control over what version of KubeVirt is installed independently of what 39 | version of the KubeVirt operator you might be running. The second 40 | approach allows you to lock both the operator and operand to the same 41 | version. 42 | 43 | Newer KubeVirt may require additional or extended RBAC rules. In this 44 | case, the #1 update method may fail, because the virt-operator present 45 | in the cluster doesn't have these RBAC rules itself. In this case, you 46 | need to update the `virt-operator` first, and then proceed to update 47 | kubevirt. See [this issue for more 48 | details](https://github.com/kubevirt/kubevirt/issues/2533). 49 | 50 | ## Updating KubeVirt Workloads 51 | 52 | Workload updates are supported as an opt in feature starting with `v0.39.0` 53 | 54 | By default, when KubeVirt is updated this only involves the control plane 55 | components. Any existing VirtualMachineInstance (VMI) workloads that are 56 | running before an update occurs remain 100% untouched. The workloads 57 | continue to run and are not interrupted as part of the default update process. 58 | 59 | It's important to note that these VMI workloads do involve components such as 60 | libvirt, qemu, and virt-launcher, which can optionally be updated during the 61 | KubeVirt update process as well. However that requires opting in to having 62 | virt-operator perform automated actions on workloads. 63 | 64 | Opting in to VMI updates involves configuring the `workloadUpdateStrategy` 65 | field on the KubeVirt CR. This field controls the methods virt-operator will 66 | use to when updating the VMI workload pods. 67 | 68 | There are two methods supported. 
69 | 70 | **LiveMigrate:** Which results in VMIs being updated by live migrating the 71 | virtual machine guest into a new pod with all the updated components enabled. 72 | 73 | **Evict:** Which results in the VMI's pod being shutdown. If the VMI is 74 | controlled by a higher level VirtualMachine object with `runStrategy: always`, 75 | then a new VMI will spin up in a new pod with updated components. 76 | 77 | The least disruptive way to update VMI workloads is to use LiveMigrate. Any 78 | VMI workload that is not live migratable will be left untouched. If live 79 | migration is not enabled in the cluster, then the only option available for 80 | virt-operator managed VMI updates is the Evict method. 81 | 82 | 83 | **Example: Enabling VMI workload updates via LiveMigration** 84 | 85 | ```yaml 86 | apiVersion: kubevirt.io/v1 87 | kind: KubeVirt 88 | metadata: 89 | name: kubevirt 90 | namespace: kubevirt 91 | spec: 92 | imagePullPolicy: IfNotPresent 93 | workloadUpdateStrategy: 94 | workloadUpdateMethods: 95 | - LiveMigrate 96 | ``` 97 | 98 | **Example: Enabling VMI workload updates via Evict with batch tunings** 99 | 100 | The batch tunings allow configuring how quickly VMI's are evicted. In large 101 | clusters, it's desirable to ensure that VMI's are evicted in batches in order 102 | to distribute load. 103 | 104 | ```yaml 105 | apiVersion: kubevirt.io/v1 106 | kind: KubeVirt 107 | metadata: 108 | name: kubevirt 109 | namespace: kubevirt 110 | spec: 111 | imagePullPolicy: IfNotPresent 112 | workloadUpdateStrategy: 113 | workloadUpdateMethods: 114 | - Evict 115 | batchEvictionSize: 10 116 | batchEvictionInterval: "1m" 117 | ``` 118 | 119 | 120 | **Example: Enabling VMI workload updates with both LiveMigrate and Evict** 121 | 122 | When both LiveMigrate and Evict are specified, then any workloads which are 123 | live migratable will be guaranteed to be live migrated. Only workloads which 124 | are not live migratable will be evicted. 125 | 126 | 127 | ```yaml 128 | apiVersion: kubevirt.io/v1 129 | kind: KubeVirt 130 | metadata: 131 | name: kubevirt 132 | namespace: kubevirt 133 | spec: 134 | imagePullPolicy: IfNotPresent 135 | workloadUpdateStrategy: 136 | workloadUpdateMethods: 137 | - LiveMigrate 138 | - Evict 139 | batchEvictionSize: 10 140 | batchEvictionInterval: "1m" 141 | ``` 142 | 143 | ## Deleting KubeVirt 144 | 145 | To delete the KubeVirt you should first to delete `KubeVirt` custom 146 | resource and then delete the KubeVirt operator. 147 | 148 | $ export RELEASE=v0.17.0 149 | $ kubectl delete -n kubevirt kubevirt kubevirt --wait=true # --wait=true should anyway be default 150 | $ kubectl delete apiservices v1.subresources.kubevirt.io # this needs to be deleted to avoid stuck terminating namespaces 151 | $ kubectl delete mutatingwebhookconfigurations virt-api-mutator # not blocking but would be left over 152 | $ kubectl delete validatingwebhookconfigurations virt-operator-validator # not blocking but would be left over 153 | $ kubectl delete validatingwebhookconfigurations virt-api-validator # not blocking but would be left over 154 | $ kubectl delete -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml --wait=false 155 | 156 | > Note: If by mistake you deleted the operator first, the KV custom 157 | > resource will get stuck in the `Terminating` state, to fix it, delete 158 | > manually finalizer from the resource. 159 | > 160 | > Note: The `apiservice` and the `webhookconfigurations` need to be 161 | > deleted manually due to a bug. 
162 | > 163 | > $ kubectl -n kubevirt patch kv kubevirt --type=json -p '[{ "op": "remove", "path": "/metadata/finalizers" }]' 164 | -------------------------------------------------------------------------------- /docs/compute/windows_virtio_drivers.md: -------------------------------------------------------------------------------- 1 | # Windows virtio drivers 2 | 3 | Purpose of this document is to explain how to install virtio drivers for 4 | Microsoft Windows running in a fully virtualized guest. 5 | 6 | ## Do I need virtio drivers? 7 | 8 | Yes. Without the virtio drivers, you cannot use paravirtualized hardware properly. It would either not work, or will have a severe performance penalty. 9 | 10 | For more information about VirtIO and paravirtualization, see [VirtIO and paravirtualization](https://wiki.libvirt.org/page/Virtio) 11 | 12 | For more details on configuring your VirtIO driver please refer to [Installing VirtIO driver on a new Windows virtual machine](https://docs.openshift.com/container-platform/4.10/virt/virtual_machines/virt-installing-virtio-drivers-on-new-windows-vm.html) and [Installing VirtIO driver on an existing Windows virtual machine](https://docs.openshift.com/container-platform/4.10/virt/virtual_machines/virt-installing-virtio-drivers-on-existing-windows-vm.html). 13 | 14 | 15 | ## Which drivers I need to install? 16 | 17 | There are usually up to 8 possible devices that are required to run 18 | Windows smoothly in a virtualized environment. KubeVirt currently 19 | supports only: 20 | 21 | - **viostor**, the block driver, applies to SCSI Controller in the 22 | Other devices group. 23 | 24 | - **viorng**, the entropy source driver, applies to PCI Device in the 25 | Other devices group. 26 | 27 | - **NetKVM**, the network driver, applies to Ethernet Controller in 28 | the Other devices group. Available only if a virtio NIC is 29 | configured. 30 | 31 | Other virtio drivers, that exists and might be supported in the future: 32 | 33 | - Balloon, the balloon driver, applies to PCI Device in the Other 34 | devices group 35 | 36 | - vioserial, the paravirtual serial driver, applies to PCI Simple 37 | Communications Controller in the Other devices group. 38 | 39 | - vioscsi, the SCSI block driver, applies to SCSI Controller in the 40 | Other devices group. 41 | 42 | - qemupciserial, the emulated PCI serial driver, applies to PCI Serial 43 | Port in the Other devices group. 44 | 45 | - qxl, the paravirtual video driver, applied to Microsoft Basic 46 | Display Adapter in the Display adapters group. 47 | 48 | - pvpanic, the paravirtual panic driver, applies to Unknown device in 49 | the Other devices group. 50 | 51 | > **Note** 52 | > 53 | > Some drivers are required in the installation phase. When you are 54 | > installing Windows onto the virtio block storage you have to provide 55 | > an appropriate virtio driver. Namely, choose viostor driver for your 56 | > version of Microsoft Windows, eg. does not install XP driver when you 57 | > run Windows 10. 58 | > 59 | > Other drivers can be installed after the successful windows 60 | > installation. Again, please install only drivers matching your Windows 61 | > version. 62 | 63 | ### How to install during Windows install? 64 | 65 | To install drivers before the Windows starts its install, make sure you 66 | have virtio-win package attached to your VirtualMachine as SATA CD-ROM. 67 | In the Windows installation, choose advanced install and load driver. 
Then please navigate to the loaded Virtio CD-ROM and install one of viostor 69 | or vioscsi, depending on which one you have set up. 70 | 71 | Step by step screenshots: 72 | 73 | ![Windows install](../assets/virtio_custom_install_0.png) 74 | 75 | ![Custom install](../assets/virtio_custom_install_1.png) 76 | 77 | ![Disk install](../assets/virtio_custom_install_2.png) 78 | 79 | ![Choose medium](../assets/virtio_custom_install_3.png) 80 | 81 | ![Choose driver](../assets/virtio_custom_install_4.png) 82 | 83 | ![Continue install](../assets/virtio_custom_install_5.png) 84 | 85 | ### How to install after Windows install? 86 | 87 | After the Windows installation, please go to [Device 88 | Manager](https://support.microsoft.com/en-us/help/4026149/windows-open-device-manager). 89 | There you should see undetected devices in the "available devices" section. 90 | You can install the virtio drivers one by one by going through this list. 91 | 92 | ![Unknown devices](../assets/virtio_driver_install_0.png) 93 | 94 | ![Select driver by id](../assets/virtio_driver_install_1.png) 95 | 96 | ![Install driver](../assets/virtio_driver_install_2.png) 97 | 98 | ![Driver installed](../assets/virtio_driver_install_3.png) 99 | 100 | For more details on how to choose a proper driver and how to install the 101 | driver, please refer to the [Windows Guest Virtual Machines on Red Hat 102 | Enterprise Linux 7](https://access.redhat.com/articles/2470791). 103 | 104 | ## How to obtain virtio drivers? 105 | 106 | The virtio Windows drivers are distributed in the form of a 107 | [containerDisk](../storage/disks_and_volumes.md#containerdisk), 108 | which can simply be mounted to the VirtualMachine. The container image 109 | containing the disk is `quay.io/kubevirt/virtio-container-disk`, 110 | and the image 111 | can be pulled like any other container image: 112 | 113 | docker pull quay.io/kubevirt/virtio-container-disk 114 | 115 | However, pulling the image manually is not required; it will be downloaded 116 | by Kubernetes if not present when deploying the VirtualMachine. 117 | 118 | ## Attaching to VirtualMachine 119 | 120 | KubeVirt distributes virtio drivers for Microsoft Windows in the form of a 121 | container disk. The package contains the virtio drivers and the QEMU guest 122 | agent. The disk was tested on Microsoft Windows Server 2012. Supported 123 | Windows versions are XP and up. 124 | 125 | The package is intended to be used as a CD-ROM attached to the virtual 126 | machine with Microsoft Windows. It can be used as a SATA CD-ROM during the 127 | install phase or to provide drivers in an existing Windows installation. 128 | 129 | Attaching the virtio-win package can be done simply by adding a 130 | ContainerDisk to your VirtualMachine. 131 | 132 | spec: 133 | domain: 134 | devices: 135 | disks: 136 | - name: virtiocontainerdisk 137 | # Any other disk you want to use must go before virtioContainerDisk. 138 | # KubeVirt boots from disks in the order they are defined. 139 | # Therefore virtioContainerDisk must be after the bootable disk. 140 | # Another option is to choose the boot order explicitly: 141 | # - https://kubevirt.io/api-reference/v0.13.2/definitions.html#_v1_disk 142 | # NOTE: You either specify bootOrder explicitly or sort the items in 143 | # disks. You cannot do both at the same time.
144 | # bootOrder: 2 145 | cdrom: 146 | bus: sata 147 | volumes: 148 | - containerDisk: 149 | image: quay.io/kubevirt/virtio-container-disk 150 | name: virtiocontainerdisk 151 | 152 | Once you are done installing virtio drivers, you can remove virtio 153 | container disk by simply removing the disk from yaml specification and 154 | restarting the VirtualMachine. 155 | --------------------------------------------------------------------------------