├── .gitignore
├── README.md
├── Vagrantfile
├── architecture.png
├── architecture.uxf
├── bootstrap-talos.sh
├── bring-up.sh
├── grafana-datasources.yml
├── lib.rb
├── lib.sh
├── loki-config.yml
├── machinator
│   ├── Dockerfile
│   ├── amt_bmc.go
│   ├── bmc.go
│   ├── go.mod
│   ├── go.sum
│   ├── ipmi_bmc.go
│   ├── kubernetes.go
│   ├── machines-status.html
│   ├── main.go
│   ├── matchbox.go
│   ├── qmp.go
│   ├── redfish_bmc.go
│   ├── talos.go
│   └── wipe.sh
├── machines.py
├── machines.yaml
├── meshcommander
│   └── Dockerfile
├── provision-base.sh
├── provision-chart-cert-manager.sh
├── provision-chart-external-dns.sh
├── provision-chart-kubernetes-dashboard.sh
├── provision-chart-metallb.sh
├── provision-chart-traefik.sh
├── provision-chart-vector.sh
├── provision-chrony.sh
├── provision-crane.sh
├── provision-dnsmasq.sh
├── provision-docker-hub-auth.sh
├── provision-docker.sh
├── provision-etcdctl.sh
├── provision-example-daemonset.sh
├── provision-grafana.sh
├── provision-helm.sh
├── provision-iptables.sh
├── provision-ipxe.sh
├── provision-k9s.sh
├── provision-kubectl.sh
├── provision-loki.sh
├── provision-machinator.sh
├── provision-matchbox.sh
├── provision-meshcommander.sh
├── provision-pdns.sh
├── provision-registry.sh
├── provision-rescue.sh
├── provision-talos-poke.sh
├── provision-talos.sh
├── provision-theila.sh
├── provision-vector.sh
├── renovate.json5
├── renovate.sh
├── summary.sh
├── talos-poke
│   └── Dockerfile
├── vector-k8s.yml
├── vector.yml
└── wireshark.sh
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
.vagrant/
shared/
tmp/
*.log
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
This is a [Vagrant](https://www.vagrantup.com/) environment for playing with [Talos](https://www.talos.dev).

For playing with [Sidero](https://www.sidero.dev) see the [rgl/sidero-vagrant](https://github.com/rgl/sidero-vagrant) repository.

# Table Of Contents

* [Architecture](#architecture)
* [Usage](#usage)
* [Network Packet Capture](#network-packet-capture)
* [Network Booting](#network-booting)
* [Tested Physical Machines](#tested-physical-machines)
* [Troubleshoot](#troubleshoot)
* [Alternatives and Related Projects](#alternatives-and-related-projects)
* [References](#references)

# Architecture

![](architecture.png)

# Usage

Install docker, vagrant, vagrant-libvirt, and the [Ubuntu Base Box](https://github.com/rgl/ubuntu-vagrant).

Log in to docker hub to get [higher rate limits](https://www.docker.com/increase-rate-limits).

If you want to connect to the external physical network, you must configure your host network as described in [rgl/ansible-collection-tp-link-easy-smart-switch](https://github.com/rgl/ansible-collection-tp-link-easy-smart-switch#take-ownership-procedure) (e.g. have the `br-rpi` linux bridge) and set `CONFIG_PANDORA_BRIDGE_NAME` in the `Vagrantfile`.
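For example, to attach pandora and the cluster machines to that bridge, the `Vagrantfile` setting would look like the following (a minimal sketch; `br-rpi` is the bridge name from the linked procedure, and when the setting is left as `nil` a libvirt private network is used instead):

```ruby
# Vagrantfile: attach the VMs to the br-rpi host bridge.
# NB when this is nil, the VMs use a private libvirt network instead.
CONFIG_PANDORA_BRIDGE_NAME = 'br-rpi'
```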
Bring up the cluster virtual machines:

```bash
time ./bring-up.sh | tee bring-up.log
```

Access talos:

```bash
export TALOSCONFIG="$PWD/shared/talosconfig"
./shared/talosctl --nodes cp1,w1 version
```

Access kubernetes:

```bash
export KUBECONFIG="$PWD/shared/kubeconfig"
./shared/kubectl get nodes -o wide
```

Start an example service on each worker node:

```bash
vagrant ssh -c 'bash /vagrant/provision-example-daemonset.sh' pandora
```

Access the example service:

```bash
vagrant ssh -c "watch -n .2 'wget -qO- http://example-daemonset.\$(hostname --domain)?format=text | tail -25; kubectl get pod -l app=example-daemonset -o=custom-columns=NODE:.spec.nodeName,STATUS:.status.phase,NAME:.metadata.name'" pandora
```

List this repository's dependencies (and which have newer versions):

```bash
export GITHUB_COM_TOKEN='YOUR_GITHUB_PERSONAL_TOKEN'
./renovate.sh
```

## Network Packet Capture

You can easily capture and inspect traffic from the host with the `wireshark.sh` script, e.g., to capture the traffic from the `eth1` interface:

```bash
./wireshark.sh pandora eth1
```

## Host DNS resolver

To delegate the `talos.test` zone to the kubernetes-managed external DNS server (running in pandora), you need to configure your system to delegate that DNS zone to the pandora DNS server. One way to do that is to have your system use only dnsmasq as its DNS resolver.

For example, on my Ubuntu 22.04 Desktop, I have uninstalled `resolvconf`, disabled `NetworkManager`, and manually configured the network interfaces:

```bash
sudo su -l
for n in NetworkManager NetworkManager-wait-online NetworkManager-dispatcher network-manager; do
    systemctl mask --now $n
done
apt-get remove --purge resolvconf
cat >/etc/network/interfaces <<'EOF'
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

auto enp3s0
iface enp3s0 inet dhcp
EOF
reboot
```

Then, I replaced `systemd-resolved` with `dnsmasq`:

```bash
sudo su -l
apt-get install -y --no-install-recommends dnsutils dnsmasq
systemctl mask --now systemd-resolved
cat >/etc/dnsmasq.d/local.conf </etc/dnsmasq.d/talos.test.conf </etc/resolv.conf < 'unmap', :cache => 'unsafe'
```

config.vm.synced_folder '.', '/vagrant', type: 'nfs', nfs_version: '4.2', nfs_udp: false
end

config.vm.define :pandora do |config|
  config.vm.provider :libvirt do |lv, config|
    lv.cpus = 4
    lv.memory = 4*1024
    lv.machine_virtual_size = 16
    # configure the vagrant synced folder.
    lv.memorybacking :source, :type => 'memfd' # required for virtiofs.
    lv.memorybacking :access, :mode => 'shared' # required for virtiofs.
116 | config.vm.synced_folder '.', '/vagrant', type: 'virtiofs' 117 | end 118 | config.vm.hostname = CONFIG_PANDORA_FQDN 119 | if CONFIG_PANDORA_BRIDGE_NAME 120 | config.vm.network :public_network, 121 | dev: CONFIG_PANDORA_BRIDGE_NAME, 122 | mode: 'bridge', 123 | type: 'bridge', 124 | ip: CONFIG_PANDORA_IP 125 | else 126 | config.vm.network :private_network, 127 | ip: CONFIG_PANDORA_IP, 128 | libvirt__dhcp_enabled: false, 129 | libvirt__forward_mode: 'none' 130 | end 131 | config.vm.provision :shell, path: 'provision-base.sh' 132 | config.vm.provision :shell, path: 'provision-chrony.sh' 133 | config.vm.provision :shell, path: 'provision-iptables.sh' 134 | config.vm.provision :shell, path: 'provision-docker.sh' 135 | config.vm.provision :shell, path: 'provision-docker-hub-auth.sh', env: {'DOCKER_HUB_AUTH' => DOCKER_HUB_AUTH} if DOCKER_HUB_AUTH 136 | config.vm.provision :shell, path: 'provision-registry.sh' 137 | config.vm.provision :shell, path: 'provision-crane.sh' 138 | config.vm.provision :shell, path: 'provision-meshcommander.sh' 139 | config.vm.provision :shell, path: 'provision-pdns.sh', args: [CONFIG_PANDORA_IP] 140 | config.vm.provision :shell, path: 'provision-dnsmasq.sh', args: [CONFIG_PANDORA_IP, CONFIG_PANDORA_DHCP_RANGE, CONFIG_CONTROL_PLANE_VIP] 141 | config.vm.provision :shell, path: 'provision-matchbox.sh' 142 | config.vm.provision :shell, path: 'provision-ipxe.sh' 143 | config.vm.provision :shell, path: 'provision-rescue.sh' 144 | config.vm.provision :shell, path: 'provision-loki.sh' 145 | config.vm.provision :shell, path: 'provision-vector.sh' 146 | config.vm.provision :shell, path: 'provision-grafana.sh' 147 | config.vm.provision :shell, path: 'provision-machinator.sh' 148 | config.vm.provision :shell, path: 'provision-kubectl.sh', args: [CONFIG_KUBECTL_VERSION] 149 | config.vm.provision :shell, path: 'provision-etcdctl.sh', args: [CONFIG_ETCDCTL_VERSION] 150 | config.vm.provision :shell, path: 'provision-helm.sh', args: [CONFIG_HELM_VERSION] 151 | config.vm.provision :shell, path: 'provision-k9s.sh', args: [CONFIG_K9S_VERSION] 152 | config.vm.provision :shell, path: 'provision-talos-poke.sh' 153 | config.vm.provision :shell, path: 'provision-talos.sh', args: [CONFIG_TALOS_VERSION, CONFIG_KUBERNETES_VERSION, CONFIG_CONTROL_PLANE_VIP] 154 | config.vm.provision :shell, path: 'provision-theila.sh', args: [CONFIG_THEILA_VERSION] 155 | end 156 | 157 | virtual_machines.each do |name, arch, firmware, ip, mac, bmc_type, bmc_ip, bmc_port, bmc_qmp_port| 158 | config.vm.define name do |config| 159 | config.vm.box = nil 160 | config.vm.provider :libvirt do |lv, config| 161 | lv.loader = '/usr/share/ovmf/OVMF.fd' if firmware == 'uefi' 162 | lv.boot 'hd' 163 | lv.boot 'network' 164 | lv.storage :file, :size => '40G' 165 | if CONFIG_PANDORA_BRIDGE_NAME 166 | config.vm.network :public_network, 167 | dev: CONFIG_PANDORA_BRIDGE_NAME, 168 | mode: 'bridge', 169 | type: 'bridge', 170 | mac: mac, 171 | ip: ip, 172 | auto_config: false 173 | else 174 | config.vm.network :private_network, 175 | mac: mac, 176 | ip: ip, 177 | auto_config: false 178 | end 179 | lv.mgmt_attach = false 180 | lv.graphics_type = 'spice' 181 | lv.video_type = 'virtio' 182 | # set some BIOS settings that will help us identify this particular machine. 
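# NB inside the booted machine these values can be read back from sysfs
# (assuming a Linux guest), e.g.:
#
#   cat /sys/devices/virtual/dmi/id/product_uuid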
183 | # 184 | # QEMU | Linux 185 | # --------------------+---------------------------------------------- 186 | # type=1,manufacturer | /sys/devices/virtual/dmi/id/sys_vendor 187 | # type=1,product | /sys/devices/virtual/dmi/id/product_name 188 | # type=1,version | /sys/devices/virtual/dmi/id/product_version 189 | # type=1,serial | /sys/devices/virtual/dmi/id/product_serial 190 | # type=1,sku | dmidecode 191 | # type=1,uuid | /sys/devices/virtual/dmi/id/product_uuid 192 | # type=3,manufacturer | /sys/devices/virtual/dmi/id/chassis_vendor 193 | # type=3,family | /sys/devices/virtual/dmi/id/chassis_type 194 | # type=3,version | /sys/devices/virtual/dmi/id/chassis_version 195 | # type=3,serial | /sys/devices/virtual/dmi/id/chassis_serial 196 | # type=3,asset | /sys/devices/virtual/dmi/id/chassis_asset_tag 197 | [ 198 | 'type=1,manufacturer=your vendor name here', 199 | 'type=1,product=your product name here', 200 | 'type=1,version=your product version here', 201 | 'type=1,serial=your product serial number here', 202 | 'type=1,sku=your product SKU here', 203 | "type=1,uuid=00000000-0000-4000-8000-#{mac.tr(':', '')}", 204 | 'type=3,manufacturer=your chassis vendor name here', 205 | #'type=3,family=1', # TODO why this does not work on qemu from ubuntu 18.04? 206 | 'type=3,version=your chassis version here', 207 | 'type=3,serial=your chassis serial number here', 208 | "type=3,asset=your chassis asset tag here #{name}", 209 | ].each do |value| 210 | lv.qemuargs :value => '-smbios' 211 | lv.qemuargs :value => value 212 | end 213 | # expose the VM QMP socket. 214 | # see https://gist.github.com/rgl/dc38c6875a53469fdebb2e9c0a220c6c 215 | lv.qemuargs :value => '-qmp' 216 | lv.qemuargs :value => "tcp:#{bmc_ip}:#{bmc_qmp_port},server,nowait" 217 | config.vm.synced_folder '.', '/vagrant', disabled: true 218 | config.trigger.after :up do |trigger| 219 | trigger.ruby do |env, machine| 220 | vbmc_up(machine, bmc_type, bmc_ip, bmc_port) 221 | end 222 | end 223 | config.trigger.after :destroy do |trigger| 224 | trigger.ruby do |env, machine| 225 | vbmc_destroy(machine, bmc_type) 226 | end 227 | end 228 | end 229 | end 230 | end 231 | end 232 | -------------------------------------------------------------------------------- /architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rgl/talos-vagrant/8ef2eb6f5fb975ce5b29e5a469914a86f1dedd79/architecture.png -------------------------------------------------------------------------------- /bootstrap-talos.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | source /vagrant/lib.sh 3 | 4 | 5 | pandora_ip_address="$(jq -r .CONFIG_PANDORA_IP /vagrant/shared/config.json)" 6 | dns_domain="$(hostname --domain)" 7 | control_plane_fqdn="cp.$dns_domain" 8 | control_plane_vip="$(jq -r .CONFIG_CONTROL_PLANE_VIP /vagrant/shared/config.json)" 9 | 10 | 11 | # 12 | # bootstrap etcd. 
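# NB etcd only needs to be bootstrapped once, on a single control plane node;
# the bootstrap call below is retried until that node is ready to accept it.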
13 | # see https://www.talos.dev/v1.4/talos-guides/install/bare-metal-platforms/matchbox/#bootstrap-etcd 14 | 15 | control_plane_ips="$(cat /vagrant/shared/machines.json | jq -r '.[] | select(.role == "controlplane") | .ip')" 16 | first_control_plane_ip="$(echo "$control_plane_ips" | head -1)" 17 | 18 | function set-control-plane-dns-rr { 19 | local ip_address="$1" 20 | title 'Reconfiguring the Kubernetes control plane endpoint DNS A RR to the VIP' 21 | # see https://doc.powerdns.com/authoritative/http-api 22 | # see https://doc.powerdns.com/md/httpapi/api_spec/ 23 | http \ 24 | --print '' \ 25 | PATCH \ 26 | http://$pandora_ip_address:8081/api/v1/servers/localhost/zones/$dns_domain \ 27 | X-API-Key:vagrant \ 28 | rrsets:="$(cat </dev/null)" != "$ip_address" ]; do sleep 3; done 47 | dig $control_plane_fqdn 48 | } 49 | 50 | # ensure the k8s control plane DNS RR points to the talos managed VIP. 51 | set-control-plane-dns-rr $control_plane_vip 52 | 53 | title 'Adding the first control plane endpoint to the talosctl local configuration' 54 | rm -rf ~/.talos/* /vagrant/shared/kubeconfig 55 | install -d -m 700 ~/.talos 56 | install -m 600 /vagrant/shared/talosconfig ~/.talos/config 57 | talosctl config endpoints $first_control_plane_ip 58 | talosctl config nodes $first_control_plane_ip 59 | 60 | title 'Bootstrapping talos' 61 | t=$SECONDS 62 | while ! talosctl bootstrap; do sleep 10; done 63 | echo "talos is bootstrapped (took $(($SECONDS - t))s)!" 64 | 65 | title 'Waiting for talos to be healthy' 66 | t=$SECONDS 67 | controllers="$(jq -r '.[] | select(.type == "virtual" and .role == "controlplane") | .ip' /vagrant/shared/machines.json | tr '\n' ',' | sed 's/,$/\n/')" 68 | workers="$(jq -r '.[] | select(.type == "virtual" and .role == "worker") | .ip' /vagrant/shared/machines.json | tr '\n' ',' | sed 's/,$/\n/')" 69 | talosctl -n $first_control_plane_ip \ 70 | health \ 71 | --control-plane-nodes $controllers \ 72 | --worker-nodes $workers 73 | echo "healthy in $(($SECONDS - t))s" 74 | 75 | title 'Downloading Kubernetes config to ~/.kube/config' 76 | rm -rf ~/.kube/* 77 | talosctl kubeconfig 78 | chmod 600 ~/.kube/config 79 | rm -rf /home/vagrant/.kube 80 | install -d -m 700 -o vagrant -g vagrant /home/vagrant/.kube 81 | install -m 600 -o vagrant -g vagrant ~/.kube/config /home/vagrant/.kube/config 82 | 83 | title 'Adding all the control plane endpoints to the talosctl local configuration' 84 | talosctl config endpoints $control_plane_ips 85 | talosctl config nodes # NB this makes sure there are no default nodes. 86 | install -d -m 700 -o vagrant -g vagrant /home/vagrant/.talos 87 | install -m 600 -o vagrant -g vagrant ~/.talos/config /home/vagrant/.talos/config 88 | cp ~/.talos/config /vagrant/shared/talosconfig 89 | 90 | title 'Copying Kubernetes config to the host' 91 | sed "s,$control_plane_fqdn,$control_plane_vip,g" ~/.kube/config >/vagrant/shared/kubeconfig 92 | 93 | title 'Downloading etcd credentials to /etc/talos/etcd' 94 | rm -rf /etc/talos/etcd 95 | install -m 700 -d /etc/talos/etcd 96 | # TODO instead of using the kube-apiserver credentials, create new ones? 
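# NB talosctl read fetches a file from the node's filesystem, so the commands
# below copy the etcd CA certificate and the kube-apiserver etcd client
# certificate/key out of the first control plane node; note that anything
# holding that client certificate effectively has full access to etcd.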
talosctl -n $first_control_plane_ip read /system/secrets/etcd/ca.crt >/etc/talos/etcd/ca.crt
talosctl -n $first_control_plane_ip read /system/secrets/kubernetes/kube-apiserver/etcd-client.crt >/etc/talos/etcd/client.crt
talosctl -n $first_control_plane_ip read /system/secrets/kubernetes/kube-apiserver/etcd-client.key >/etc/talos/etcd/client.key
ETCDCTL_ENDPOINTS="$(echo $controllers | tr ',' '\n' | while read ip; do echo "https://$ip:2379"; done | tr '\n' ',' | sed -E 's/,$//')"
cat >/etc/profile.d/etcdctl.sh </vagrant/shared/images.txt
cat /vagrant/shared/images.txt

title 'Difference'
diff -u /vagrant/shared/talos-images.txt /vagrant/shared/images.txt || true


#
# show the environment summary.

bash /vagrant/summary.sh
--------------------------------------------------------------------------------
/bring-up.sh:
--------------------------------------------------------------------------------
#!/bin/bash
source lib.sh

# start the virtual machines.
# NB pandora must be brought up first: it provides PXE boot for the other machines.
title 'Starting pandora machine'
vagrant up --provider=libvirt --no-destroy-on-error pandora
title 'Starting the cluster machines'
vagrant up --provider=libvirt --no-destroy-on-error

# bootstrap the cluster.
title 'Bootstrapping the cluster'
vagrant ssh -c 'sudo bash /vagrant/bootstrap-talos.sh' pandora
--------------------------------------------------------------------------------
/grafana-datasources.yml:
--------------------------------------------------------------------------------
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: @@loki_base_url@@
    jsonData:
      maxLines: 1000
--------------------------------------------------------------------------------
/lib.rb:
--------------------------------------------------------------------------------
require 'open3'

def get_docker_hub_auth
  config_path = File.expand_path '~/.docker/config.json'
  return nil unless File.exist?
config_path 6 | config = JSON.load File.read(config_path) 7 | return nil unless config.has_key?('auths') && config['auths'].has_key?('https://index.docker.io/v1/') 8 | config['auths']['https://index.docker.io/v1/']['auth'] 9 | end 10 | 11 | def virtual_machines 12 | save_config 13 | save_virtual_machines_config 14 | machines = JSON.load(File.read('shared/machines.json')).select{|m| m['type'] == 'virtual'} 15 | machines.each_with_index.map do |m, i| 16 | [m['name'], m['arch'], m['firmware'], m['ip'], m['mac'], m['bmcType'], m['bmcIp'], m['bmcPort'], m['bmcQmpPort']] 17 | end 18 | end 19 | 20 | def save_config 21 | FileUtils.mkdir_p 'shared' 22 | File.write('shared/config.json', JSON.pretty_generate({ 23 | 'CONFIG_PANDORA_IP': CONFIG_PANDORA_IP, 24 | 'CONFIG_CONTROL_PLANE_VIP': CONFIG_CONTROL_PLANE_VIP, 25 | 'CONFIG_PANDORA_LOAD_BALANCER_RANGE': CONFIG_PANDORA_LOAD_BALANCER_RANGE, 26 | 'CONFIG_VECTOR_CHART_VERSION': CONFIG_VECTOR_CHART_VERSION, 27 | 'CONFIG_METALLB_CHART_VERSION': CONFIG_METALLB_CHART_VERSION, 28 | 'CONFIG_EXTERNAL_DNS_CHART_VERSION': CONFIG_EXTERNAL_DNS_CHART_VERSION, 29 | 'CONFIG_CERT_MANAGER_CHART_VERSION': CONFIG_CERT_MANAGER_CHART_VERSION, 30 | 'CONFIG_TRAEFIK_CHART_VERSION': CONFIG_TRAEFIK_CHART_VERSION, 31 | 'CONFIG_KUBERNETES_DASHBOARD_CHART_VERSION': CONFIG_KUBERNETES_DASHBOARD_CHART_VERSION, 32 | })) 33 | end 34 | 35 | def save_virtual_machines_config 36 | stdout, stderr, status = Open3.capture3('python3', 'machines.py', 'get-machines-json') 37 | if status.exitstatus != 0 38 | raise "failed to run python3 machines.py get-machines-json. status=#{status.exitstatus} stdout=#{stdout} stderr=#{stderr}" 39 | end 40 | FileUtils.mkdir_p 'shared' 41 | File.write('shared/machines.json', stdout) 42 | end 43 | 44 | def vbmc_domain_name(machine) 45 | "#{File.basename(File.dirname(__FILE__))}_#{machine.name}" 46 | end 47 | 48 | def vbmc_container_name(machine, bmc_type) 49 | "vbmc-emulator-#{bmc_type}-#{vbmc_domain_name(machine)}" 50 | end 51 | 52 | def vbmc_up(machine, bmc_type, bmc_ip, bmc_port) 53 | case bmc_type 54 | when 'redfish' 55 | vbmc_up_redfish(machine, bmc_type, bmc_ip, bmc_port) 56 | when 'ipmi' 57 | vbmc_up_ipmi(machine, bmc_type, bmc_ip, bmc_port) 58 | end 59 | end 60 | 61 | def vbmc_up_redfish(machine, bmc_type, bmc_ip, bmc_port) 62 | vbmc_destroy(machine, bmc_type) 63 | container_name = vbmc_container_name(machine, bmc_type) 64 | machine.ui.info("Creating the #{container_name} docker container...") 65 | stdout, stderr, status = Open3.capture3( 66 | 'docker', 67 | 'run', 68 | '--rm', 69 | '--name', 70 | container_name, 71 | '--detach', 72 | '-v', 73 | '/var/run/libvirt/libvirt-sock:/var/run/libvirt/libvirt-sock', 74 | '-v', 75 | '/var/run/libvirt/libvirt-sock-ro:/var/run/libvirt/libvirt-sock-ro', 76 | '-e', 77 | "SUSHY_EMULATOR_ALLOWED_INSTANCES=#{machine.id}", 78 | '-p', 79 | "#{bmc_ip}:#{bmc_port}:8000/tcp", 80 | 'ruilopes/sushy-vbmc-emulator') 81 | if status.exitstatus != 0 82 | if stderr.include? 'No such container' 83 | return 84 | end 85 | raise "failed to run the #{container_name} docker container. 
status=#{status.exitstatus} stdout=#{stdout} stderr=#{stderr}" 86 | end 87 | end 88 | 89 | def vbmc_up_ipmi(machine, bmc_type, bmc_ip, bmc_port) 90 | vbmc_destroy(machine, bmc_type) 91 | container_name = vbmc_container_name(machine, bmc_type) 92 | machine.ui.info("Creating the #{container_name} docker container...") 93 | stdout, stderr, status = Open3.capture3( 94 | 'docker', 95 | 'run', 96 | '--rm', 97 | '--name', 98 | container_name, 99 | '--detach', 100 | '-v', 101 | '/var/run/libvirt/libvirt-sock:/var/run/libvirt/libvirt-sock', 102 | '-v', 103 | '/var/run/libvirt/libvirt-sock-ro:/var/run/libvirt/libvirt-sock-ro', 104 | '-e', 105 | "VBMC_EMULATOR_DOMAIN_NAME=#{vbmc_domain_name(machine)}", 106 | '-e', 107 | "VBMC_EMULATOR_USERNAME=admin", 108 | '-e', 109 | "VBMC_EMULATOR_PASSWORD=password", 110 | '-p', 111 | "#{bmc_ip}:#{bmc_port}:6230/udp", 112 | 'ruilopes/vbmc-emulator') 113 | if status.exitstatus != 0 114 | if stderr.include? 'No such container' 115 | return 116 | end 117 | raise "failed to run the #{container_name} docker container. status=#{status.exitstatus} stdout=#{stdout} stderr=#{stderr}" 118 | end 119 | end 120 | 121 | def vbmc_destroy(machine, bmc_type) 122 | container_name = vbmc_container_name(machine, bmc_type) 123 | stdout, stderr, status = Open3.capture3('docker', 'inspect', container_name) 124 | if status.exitstatus != 0 125 | if stderr.include? 'No such object' 126 | return 127 | end 128 | raise "failed to inspect the #{container_name} docker container. status=#{status.exitstatus} stdout=#{stdout} stderr=#{stderr}" 129 | end 130 | machine.ui.info("Destroying the #{container_name} docker container...") 131 | stdout, stderr, status = Open3.capture3('docker', 'kill', '--signal', 'INT', container_name) 132 | if status.exitstatus != 0 133 | if stderr.include? 'No such container' 134 | return 135 | end 136 | raise "failed to kill the #{container_name} docker container. status=#{status.exitstatus} stdout=#{stdout} stderr=#{stderr}" 137 | end 138 | stdout, stderr, status = Open3.capture3('docker', 'wait', container_name) 139 | if status.exitstatus != 0 140 | if stderr.include? 'No such container' 141 | return 142 | end 143 | raise "failed to wait for the #{container_name} docker container. status=#{status.exitstatus} stdout=#{stdout} stderr=#{stderr}" 144 | end 145 | return 146 | end 147 | -------------------------------------------------------------------------------- /lib.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -eu -o pipefail -o errtrace 3 | 4 | 5 | function title { 6 | cat <= 0 && powerStateId < len(DmtfPowerStatesStrings) { 103 | return fmt.Sprintf("%s (#%d)", DmtfPowerStatesStrings[powerStateId], powerStateId) 104 | } 105 | return fmt.Sprintf("(#%d)", powerStateId) 106 | } 107 | 108 | type PowerState struct { 109 | PowerState int 110 | AvailablePowerStates []int 111 | } 112 | 113 | type Client struct { 114 | client *wsman.Client 115 | } 116 | 117 | // TODO wsman.NewClient default to InsecureSkipVerify at res.Transport = &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true},} 118 | // we MUST fix that by letting the user pass-in the http.Client. 
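// Example usage (a sketch; the endpoint, username, and password are
// hypothetical, and Intel AMT normally listens on port 16992 for HTTP
// and 16993 for TLS):
//
//	client, err := NewClient("http://10.3.0.131:16992/wsman", "admin", "password")
//	if err != nil {
//		log.Fatal(err)
//	}
//	powerState, err := client.GetPowerState()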
119 | func NewClient(url string, username string, password string) (*Client, error) { 120 | client, err := wsman.NewClient(url, username, password, true) 121 | if err != nil { 122 | return nil, fmt.Errorf("failed to create the wsman client: %v", err) 123 | } 124 | return &Client{ 125 | client: client, 126 | }, nil 127 | } 128 | 129 | // get the current power state. 130 | // NB this is equivalent to: 131 | // ./wscli -d -u {username} -p {password} -e {url} -a Get -r http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_AssociatedPowerManagementService 132 | // see getPowerState at https://github.com/rgl/intel-amt-toggle-power-example/blob/master/intel-amt-toggle-power-example.js 133 | // see CIM_AssociatedPowerManagementService at https://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide/HTMLDocuments/WS-Management_Class_Reference/CIM_AssociatedPowerManagementService.htm 134 | // see Get System Power State at https://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide/WordDocuments/getsystempowerstate.htm 135 | func (c *Client) GetPowerState() (*PowerState, error) { 136 | cimAssociatedPowerManagementServiceNs := "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_AssociatedPowerManagementService" 137 | request := c.client.Get(cimAssociatedPowerManagementServiceNs) 138 | response, err := request.Send() 139 | if err != nil { 140 | return nil, fmt.Errorf("failed to get CIM_AssociatedPowerManagementService: %v", err) 141 | } 142 | powerStateString := search.First(search.Tag("PowerState", cimAssociatedPowerManagementServiceNs), response.AllBodyElements()).Content 143 | powerState, err := strconv.Atoi(string(powerStateString)) 144 | if err != nil { 145 | return nil, fmt.Errorf("failed to parse CIM_AssociatedPowerManagementService.PowerState: %v", err) 146 | } 147 | availablePowerStatesNodes := search.All(search.Tag("AvailableRequestedPowerStates", cimAssociatedPowerManagementServiceNs), response.AllBodyElements()) 148 | availablePowerStates := make([]int, 0, len(availablePowerStatesNodes)) 149 | for _, node := range availablePowerStatesNodes { 150 | v, err := strconv.Atoi(string(node.Content)) 151 | if err != nil { 152 | return nil, fmt.Errorf("failed to parse CIM_AssociatedPowerManagementService.AvailableRequestedPowerStates: %v", err) 153 | } 154 | availablePowerStates = append(availablePowerStates, v) 155 | } 156 | return &PowerState{ 157 | PowerState: powerState, 158 | AvailablePowerStates: availablePowerStates, 159 | }, nil 160 | } 161 | 162 | // see setPowerState at https://github.com/rgl/intel-amt-toggle-power-example/blob/master/intel-amt-toggle-power-example.js 163 | // see CIM_PowerManagementService at https://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide/HTMLDocuments/WS-Management_Class_Reference/CIM_PowerManagementService.htm 164 | func (c *Client) SetPowerState(powerState int) error { 165 | // Create an Envelope like: 166 | // 167 | //
// <s:Envelope
//     xmlns:s="http://www.w3.org/2003/05/soap-envelope"
//     xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
//     xmlns:wsman="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
//   <s:Header>
//     <wsa:Action>http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_PowerManagementService/RequestPowerStateChange</wsa:Action>
//     <wsa:To>/wsman</wsa:To>
//     <wsman:ResourceURI>http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_PowerManagementService</wsman:ResourceURI>
//     <wsa:MessageID>1</wsa:MessageID>
//     <wsa:ReplyTo>
//       <wsa:Address>http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</wsa:Address>
//     </wsa:ReplyTo>
//     <wsman:OperationTimeout>PT60S</wsman:OperationTimeout>
//   </s:Header>
//   <s:Body>
//     <n1:RequestPowerStateChange_INPUT xmlns:n1="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_PowerManagementService">
//       <n1:PowerState>2</n1:PowerState>
//       <n1:ManagedElement>
//         <wsa:Address>http://schemas.xmlsoap.org/ws/2004/08/addressing</wsa:Address>
//         <wsa:ReferenceParameters>
//           <wsman:ResourceURI>http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ComputerSystem</wsman:ResourceURI>
//           <wsman:SelectorSet>
//             <wsman:Selector Name="CreationClassName">CIM_ComputerSystem</wsman:Selector>
//             <wsman:Selector Name="Name">ManagedSystem</wsman:Selector>
//           </wsman:SelectorSet>
//         </wsa:ReferenceParameters>
//       </n1:ManagedElement>
//     </n1:RequestPowerStateChange_INPUT>
//   </s:Body>
// </s:Envelope>
193 | 194 | cimPowerManagementServiceNs := "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_PowerManagementService" 195 | cimComputerSystemNs := "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ComputerSystem" 196 | request := c.client.Invoke(cimPowerManagementServiceNs, "RequestPowerStateChange") 197 | 198 | // PowerState parameter. 199 | request.Parameters("PowerState", strconv.Itoa(powerState)) 200 | 201 | // ManagedElement parameter. 202 | managedElementNode := request.MakeParameter("ManagedElement") 203 | request.AddParameter(managedElementNode) 204 | 205 | // ManagedElement.Address child. 206 | managedElementNode.AddChild(dom.ElemC("Address", wsman.NS_WSA, wsman.NS_WSA)) 207 | 208 | // ManagedElement.ReferenceParameters child. 209 | referenceParametersNode := dom.Elem("ReferenceParameters", wsman.NS_WSA) 210 | managedElementNode.AddChild(referenceParametersNode) 211 | 212 | // ManagedElement.ReferenceParameters.ResourceURI child. 213 | referenceParametersNode.AddChild(wsman.Resource(cimComputerSystemNs)) 214 | 215 | // ManagedElement.ReferenceParameters.SelectorSet child. 216 | selectorSetNode := dom.Elem("SelectorSet", wsman.NS_WSMAN) 217 | referenceParametersNode.AddChild(selectorSetNode) 218 | 219 | // ManagedElement.ReferenceParameters.SelectorSet.Selector CreationClassName child. 220 | creationClassNameNode := request.MakeSelector("CreationClassName") 221 | creationClassNameNode.Content = []byte("CIM_ComputerSystem") 222 | selectorSetNode.AddChild(creationClassNameNode) 223 | 224 | // ManagedElement.ReferenceParameters.SelectorSet.Selector Name child. 225 | nameNode := request.MakeSelector("Name") 226 | nameNode.Content = []byte("ManagedSystem") 227 | selectorSetNode.AddChild(nameNode) 228 | 229 | // send the request. 230 | response, err := request.Send() 231 | if err != nil { 232 | return fmt.Errorf("failed to call CIM_PowerManagementService.RequestPowerStateChange: %v", err) 233 | } 234 | 235 | // parse the response. 
	returnValueString := search.First(search.Tag("ReturnValue", cimPowerManagementServiceNs), response.AllBodyElements()).Content
	returnValue, err := strconv.Atoi(string(returnValueString))
	if err != nil {
		return fmt.Errorf("failed to convert response ReturnValue to an integer: %v", err)
	}
	if returnValue != 0 {
		if returnValue == 2 {
			return fmt.Errorf("failed with ReturnValue: %d (not ready; please note that not all state transitions are possible when remote desktop or IDE-R is enabled)", returnValue)
		} else {
			return fmt.Errorf("failed with ReturnValue: %d", returnValue)
		}
	}
	return nil
}

func (c *Client) ResetToBootDeviceOnce(bootDevice string) error {
	err := c.SetBootDeviceOnce(bootDevice)
	if err != nil {
		return fmt.Errorf("failed to set boot device: %v", err)
	}
	return c.Reset()
}

func (c *Client) Reset() error {
	powerState, err := c.GetPowerState()
	if err != nil {
		return fmt.Errorf("failed to get power state: %v", err)
	}
	var desiredPowerState int
	if powerState.PowerState == DmtfPowerStateOffSoft {
		desiredPowerState = DmtfPowerStateOn
	} else {
		desiredPowerState = DmtfPowerStateMasterBusReset
	}
	err = c.SetPowerState(desiredPowerState)
	if err != nil {
		return fmt.Errorf("failed to set power state: %v", err)
	}
	return nil
}

// see Set or Disable Boot Configuration Settings for the Next Boot at https://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide/WordDocuments/setordisablebootconfigurationsettingsforthenextboot.htm
func (c *Client) SetBootDeviceOnce(bootDevice string) error {
	cimBootConfigSettingNs := "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_BootConfigSetting"
	cimBootSourceSettingNs := "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_BootSourceSetting"

	request := c.client.Invoke(cimBootConfigSettingNs, "ChangeBootOrder")
	request.Selectors("InstanceID", "Intel(r) AMT: Boot Configuration 0")

	// Source parameter.
	sourceNode := request.MakeParameter("Source")
	request.AddParameter(sourceNode)

	// Source.Address child.
	sourceNode.AddChild(dom.ElemC("Address", wsman.NS_WSA, "http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous"))

	// Source.ReferenceParameters child.
	referenceParametersNode := dom.Elem("ReferenceParameters", wsman.NS_WSA)
	sourceNode.AddChild(referenceParametersNode)

	// Source.ReferenceParameters.ResourceURI child.
	referenceParametersNode.AddChild(wsman.Resource(cimBootSourceSettingNs))

	// Source.ReferenceParameters.SelectorSet child.
	selectorSetNode := dom.Elem("SelectorSet", wsman.NS_WSMAN)
	referenceParametersNode.AddChild(selectorSetNode)

	// Source.ReferenceParameters.SelectorSet.Selector InstanceID child.
	instanceIdNode := request.MakeSelector("InstanceID")
	switch bootDevice {
	case "pxe":
		instanceIdNode.Content = []byte("Intel(r) AMT: Force PXE Boot")
	case "hd":
		instanceIdNode.Content = []byte("Intel(r) AMT: Force Hard-drive Boot")
	case "cd":
		instanceIdNode.Content = []byte("Intel(r) AMT: Force CD/DVD Boot")
	default:
		return fmt.Errorf("unknown bootDevice %s", bootDevice)
	}
	selectorSetNode.AddChild(instanceIdNode)

	// send the request.
	response, err := request.Send()
	if err != nil {
		return fmt.Errorf("failed to call CIM_BootConfigSetting.ChangeBootOrder: %v", err)
	}

	// parse the response.
	returnValueString := search.First(search.Tag("ReturnValue", cimBootConfigSettingNs), response.AllBodyElements()).Content
	returnValue, err := strconv.Atoi(string(returnValueString))
	if err != nil {
		return fmt.Errorf("failed to convert response ReturnValue to an integer: %v", err)
	}
	if returnValue != 0 {
		return fmt.Errorf("failed with ReturnValue: %d", returnValue)
	}

	// 1: IsNextSingleUse (aka Once)
	return c.SetBootConfigRole(1)
}

func (c *Client) SetBootConfigRole(role int) error {
	cimBootServiceNs := "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_BootService"
	cimBootConfigSettingNs := "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_BootConfigSetting"

	request := c.client.Invoke(cimBootServiceNs, "SetBootConfigRole")
	request.Selectors("Name", "Intel(r) AMT Boot Service")

	// BootConfigSetting parameter.
	bootConfigSettingNode := request.MakeParameter("BootConfigSetting")
	request.AddParameter(bootConfigSettingNode)

	// BootConfigSetting.Address child.
	bootConfigSettingNode.AddChild(dom.ElemC("Address", wsman.NS_WSA, "http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous"))

	// BootConfigSetting.ReferenceParameters child.
	referenceParametersNode := dom.Elem("ReferenceParameters", wsman.NS_WSA)
	bootConfigSettingNode.AddChild(referenceParametersNode)

	// BootConfigSetting.ReferenceParameters.ResourceURI child.
	referenceParametersNode.AddChild(wsman.Resource(cimBootConfigSettingNs))

	// BootConfigSetting.ReferenceParameters.SelectorSet child.
	selectorSetNode := dom.Elem("SelectorSet", wsman.NS_WSMAN)
	referenceParametersNode.AddChild(selectorSetNode)

	// BootConfigSetting.ReferenceParameters.SelectorSet.Selector InstanceID child.
	instanceIdNode := request.MakeSelector("InstanceID")
	instanceIdNode.Content = []byte("Intel(r) AMT: Boot Configuration 0")
	selectorSetNode.AddChild(instanceIdNode)

	// Role parameter.
	request.Parameters("Role", strconv.Itoa(role))

	// send the request.
	response, err := request.Send()
	if err != nil {
		return fmt.Errorf("failed to call CIM_BootService.SetBootConfigRole: %v", err)
	}

	// parse the response.
377 | returnValueString := search.First(search.Tag("ReturnValue", cimBootServiceNs), response.AllBodyElements()).Content 378 | returnValue, err := strconv.Atoi(string(returnValueString)) 379 | if err != nil { 380 | return fmt.Errorf("failed to convert response ReturnValue to an integer: %v", err) 381 | } 382 | if returnValue != 0 { 383 | return fmt.Errorf("failed with ReturnValue: %d", returnValue) 384 | } 385 | 386 | return nil 387 | } 388 | -------------------------------------------------------------------------------- /machinator/bmc.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "fmt" 5 | ) 6 | 7 | func bmcGetPowerState(machine *Machine) (string, error) { 8 | switch machine.BmcType { 9 | case "ipmi": 10 | return ipmiBmcGetPowerState(machine) 11 | case "redfish": 12 | return redfishBmcGetPowerState(machine) 13 | case "amt": 14 | return amtBmcGetPowerState(machine) 15 | default: 16 | return "", fmt.Errorf("unknown bmc type: %s", machine.BmcType) 17 | } 18 | } 19 | 20 | func bmcResetToNetwork(machine *Machine, os string) error { 21 | err := matchboxSetMachineOs(machine, os) 22 | if err != nil { 23 | return err 24 | } 25 | switch machine.BmcType { 26 | case "ipmi": 27 | return ipmiBmcResetToNetwork(machine) 28 | case "redfish": 29 | return redfishBmcResetToNetwork(machine) 30 | case "amt": 31 | return amtBmcResetToNetwork(machine) 32 | default: 33 | return fmt.Errorf("unknown bmc type: %s", machine.BmcType) 34 | } 35 | } 36 | 37 | func bmcResetToDisk(machine *Machine) error { 38 | switch machine.BmcType { 39 | case "ipmi": 40 | return ipmiBmcResetToDisk(machine) 41 | case "redfish": 42 | return redfishBmcResetToDisk(machine) 43 | case "amt": 44 | return amtBmcResetToDisk(machine) 45 | default: 46 | return fmt.Errorf("unknown bmc type: %s", machine.BmcType) 47 | } 48 | } 49 | 50 | func bmcReset(machine *Machine) error { 51 | switch machine.BmcType { 52 | case "ipmi": 53 | return ipmiBmcReset(machine) 54 | case "redfish": 55 | return redfishBmcReset(machine) 56 | case "amt": 57 | return amtBmcReset(machine) 58 | default: 59 | return fmt.Errorf("unknown bmc type: %s", machine.BmcType) 60 | } 61 | } 62 | -------------------------------------------------------------------------------- /machinator/go.mod: -------------------------------------------------------------------------------- 1 | module github.com/rgl/linuxkit-vagrant/machinator 2 | 3 | require ( 4 | github.com/VictorLowther/simplexml v0.0.0-20180716164440-0bff93621230 5 | github.com/VictorLowther/wsman v0.0.0-20170302224735-f2a5e756d330 6 | github.com/digitalocean/go-qemu v0.0.0-20221209210016-f035778c97f7 7 | github.com/stmcginnis/gofish v0.14.0 8 | github.com/tomruk/oui v1.0.0 9 | ) 10 | 11 | require ( 12 | github.com/VictorLowther/soap v0.0.0-20150314151524-8e36fca84b22 // indirect 13 | github.com/digitalocean/go-libvirt v0.0.0-20221205150000-2939327a8519 // indirect 14 | github.com/satori/go.uuid v1.2.0 // indirect 15 | golang.org/x/sys v0.7.0 // indirect 16 | ) 17 | 18 | replace github.com/VictorLowther/wsman => github.com/rgl/wsman v0.0.1 19 | 20 | replace github.com/tomruk/oui => github.com/rgl/oui v1.0.1-0.20210624175153-a4c98e6f25ea 21 | 22 | go 1.20 23 | -------------------------------------------------------------------------------- /machinator/go.sum: -------------------------------------------------------------------------------- 1 | github.com/VictorLowther/simplexml v0.0.0-20180716164440-0bff93621230 
h1:t95Grn2mOPfb3+kPDWsNnj4dlNcxnvuR72IjY8eYjfQ= 2 | github.com/VictorLowther/simplexml v0.0.0-20180716164440-0bff93621230/go.mod h1:t2EzW1qybnPDQ3LR/GgeF0GOzHUXT5IVMLP2gkW1cmc= 3 | github.com/VictorLowther/soap v0.0.0-20150314151524-8e36fca84b22 h1:a0MBqYm44o0NcthLKCljZHe1mxlN6oahCQHHThnSwB4= 4 | github.com/VictorLowther/soap v0.0.0-20150314151524-8e36fca84b22/go.mod h1:/B7V22rcz4860iDqstGvia/2+IYWXf3/JdQCVd/1D2A= 5 | github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= 6 | github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= 7 | github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= 8 | github.com/digitalocean/go-libvirt v0.0.0-20221205150000-2939327a8519 h1:OpkN/n40cmKenDQS+IOAeW9DLhYy4DADSeZnouCEV/E= 9 | github.com/digitalocean/go-libvirt v0.0.0-20221205150000-2939327a8519/go.mod h1:WyJJyfmJ0gWJvjV+ZH4DOgtOYZc1KOvYyBXWCLKxsUU= 10 | github.com/digitalocean/go-qemu v0.0.0-20221209210016-f035778c97f7 h1:3OVJAbR131SnAXao7c9w8bFlAGH0oa29DCwsa88MJGk= 11 | github.com/digitalocean/go-qemu v0.0.0-20221209210016-f035778c97f7/go.mod h1:K4+o74YGNjOb9N6yyG+LPj1NjHtk+Qz0IYQPvirbaLs= 12 | github.com/kr/pretty v0.2.1 h1:Fmg33tUaq4/8ym9TJN1x7sLJnHVwhP33CNkpYV/7rwI= 13 | github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= 14 | github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= 15 | github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= 16 | github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= 17 | github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= 18 | github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= 19 | github.com/rgl/oui v1.0.1-0.20210624175153-a4c98e6f25ea h1:4qagGgl1Lmb8UOn/jY13xfV6j04HTviYpdu/O8ydUr4= 20 | github.com/rgl/oui v1.0.1-0.20210624175153-a4c98e6f25ea/go.mod h1:0V8BLBhlaaunndElRMZMqYFxVNwaxCPIcLIqjfz+QLg= 21 | github.com/rgl/wsman v0.0.1 h1:QBB7w63d7XzKTIMpGjOCv2YS0nqfkQQCwRMURSfIIjw= 22 | github.com/rgl/wsman v0.0.1/go.mod h1:U5mT9qfhoUy5Lm+btmpoJ0zFV4PSjg766adrMDbyE4c= 23 | github.com/satori/go.uuid v1.2.0 h1:0uYX9dsZ2yD7q2RtLRtPSdGDWzjeM3TbMJP9utgA0ww= 24 | github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0= 25 | github.com/stmcginnis/gofish v0.14.0 h1:geECNAiG33JDB2x2xDkerpOOuXFqxp5YP3EFE3vd5iM= 26 | github.com/stmcginnis/gofish v0.14.0/go.mod h1:BLDSFTp8pDlf/xDbLZa+F7f7eW0E/CHCboggsu8CznI= 27 | github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= 28 | github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= 29 | github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= 30 | github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= 31 | github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= 32 | github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk= 33 | github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= 34 | github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= 35 | golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= 36 | golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= 37 | golang.org/x/mod 
v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= 38 | golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= 39 | golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= 40 | golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= 41 | golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 42 | golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 43 | golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 44 | golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 45 | golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= 46 | golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= 47 | golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= 48 | golang.org/x/sys v0.7.0 h1:3jlCCIQZPdOYu1h8BkNvLz8Kgwtae2cagcG/VamtZRU= 49 | golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= 50 | golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= 51 | golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= 52 | golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= 53 | golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= 54 | golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= 55 | golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= 56 | golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= 57 | golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= 58 | golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= 59 | gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= 60 | gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= 61 | gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= 62 | gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= 63 | gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= 64 | gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= 65 | -------------------------------------------------------------------------------- /machinator/ipmi_bmc.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "bytes" 5 | "fmt" 6 | "log" 7 | "os" 8 | "os/exec" 9 | "regexp" 10 | "strconv" 11 | "strings" 12 | "time" 13 | ) 14 | 15 | var ( 16 | ipmiChassisPowerRegexp = regexp.MustCompile("^Chassis Power is (.+)$") 17 | ) 18 | 19 | type IpmitoolExecError struct { 20 | ExitCode int 21 | Stdout string 22 | Stderr string 23 | } 24 | 25 | func (err *IpmitoolExecError) Error() string { 26 | return fmt.Sprintf("failed to exec ipmitool: 
exitCode=%d stdout=%s stderr=%s", err.ExitCode, err.Stdout, err.Stderr) 27 | } 28 | 29 | func ipmitool(machine *Machine, args ...string) (string, error) { 30 | args = append([]string{ 31 | "-I", "lanplus", 32 | "-U", os.Getenv("IPMI_USERNAME"), 33 | "-P", os.Getenv("IPMI_PASSWORD"), 34 | "-H", machine.BmcIp, 35 | "-p", strconv.Itoa(machine.BmcPort)}, args...) 36 | 37 | var stderr, stdout bytes.Buffer 38 | 39 | cmd := exec.Command("ipmitool", args...) 40 | cmd.Stderr = &stderr 41 | cmd.Stdout = &stdout 42 | 43 | err := cmd.Run() 44 | 45 | if err != nil { 46 | exitCode := -1 47 | if exitError, ok := err.(*exec.ExitError); ok { 48 | exitCode = exitError.ProcessState.ExitCode() 49 | } 50 | return "", &IpmitoolExecError{ 51 | ExitCode: exitCode, 52 | Stdout: stdout.String(), 53 | Stderr: stderr.String(), 54 | } 55 | } 56 | 57 | return strings.TrimSpace(stdout.String()), nil 58 | } 59 | 60 | func ipmiBmcGetPowerState(machine *Machine) (string, error) { 61 | stdout, err := ipmitool(machine, "chassis", "power", "status") 62 | if err != nil { 63 | return "", err 64 | } 65 | m := ipmiChassisPowerRegexp.FindStringSubmatch(stdout) 66 | if m == nil { 67 | return "", fmt.Errorf("failed to parse ipmitool chassis power status. stdout=%s", stdout) 68 | } 69 | return m[1], nil 70 | } 71 | 72 | func ipmiBmcReset(machine *Machine) error { 73 | powerState, err := ipmiBmcGetPowerState(machine) 74 | if err != nil { 75 | return err 76 | } 77 | log.Printf("Current System PowerState: %s", powerState) 78 | 79 | if powerState == "off" { 80 | log.Printf("Forcing the system on...") 81 | _, err = ipmitool(machine, "chassis", "power", "on") 82 | if err != nil { 83 | return err 84 | } 85 | } else { 86 | log.Printf("Forcing the system restart...") 87 | _, err := ipmitool(machine, "chassis", "power", "reset") 88 | if err != nil { 89 | return err 90 | } 91 | } 92 | 93 | return nil 94 | } 95 | 96 | func ipmiBmcResetToNetwork(machine *Machine) error { 97 | return ipmiBmcResetToBootTarget(machine, "pxe") 98 | } 99 | 100 | func ipmiBmcResetToDisk(machine *Machine) error { 101 | return ipmiBmcResetToBootTarget(machine, "disk") 102 | } 103 | 104 | func ipmiBmcResetToBootTarget(machine *Machine, bootDevice string) error { 105 | powerState, err := ipmiBmcGetPowerState(machine) 106 | if err != nil { 107 | return err 108 | } 109 | log.Printf("Current System PowerState: %s", powerState) 110 | 111 | log.Printf("Setting the boot order to %s...", bootDevice) 112 | _, err = ipmitool(machine, "chassis", "bootdev", bootDevice) 113 | if err != nil { 114 | return err 115 | } 116 | 117 | if powerState != "off" { 118 | log.Printf("Forcing the system off...") 119 | _, err = ipmitool(machine, "chassis", "power", "off") 120 | if err != nil { 121 | return err 122 | } 123 | log.Printf("Waiting for the system to be off...") 124 | for { 125 | powerState, err = ipmiBmcGetPowerState(machine) 126 | if err == nil && powerState == "off" { 127 | break 128 | } 129 | time.Sleep(1 * time.Second) 130 | } 131 | } 132 | 133 | log.Printf("Forcing the system on...") 134 | _, err = ipmitool(machine, "chassis", "power", "on") 135 | if err != nil { 136 | return err 137 | } 138 | log.Printf("Waiting for the system to be on...") 139 | for { 140 | powerState, err = ipmiBmcGetPowerState(machine) 141 | if err == nil && powerState == "on" { 142 | break 143 | } 144 | time.Sleep(1 * time.Second) 145 | } 146 | 147 | // because vbmc-emulator does not support the setting the boot device to once. 148 | // we must revert the bootDevice to disk again. 
149 | // NB this will only work after a power-cycle (from off to on); which 150 | // makes this useless for my purposes of having it work immediately. 151 | // see https://storyboard.openstack.org/#!/story/2005368#comment-175052 152 | // NB the qemu boot order is determined by each individual device `bootindex`, e.g.; 153 | // -device scsi-hd,drive=drive0,bootindex=0 154 | // NB `-boot` seems to be only supported by BIOS (not UEFI). 155 | // NB setting the boot order from QMP seems to work, so we do that to 156 | // workaround the BMC. 157 | if bootDevice != "disk" { 158 | bootDevice = "disk" 159 | // NB we also do it from vbmc side; because that will apply once 160 | // the VM is started again; our qmp changes only last while the 161 | // VM is on. 162 | log.Printf("Resetting the boot order to %s...", bootDevice) 163 | _, err = ipmitool(machine, "chassis", "bootdev", bootDevice) 164 | if err != nil { 165 | return err 166 | } 167 | log.Printf("Resetting the boot order to %s (QMP)...", bootDevice) 168 | err = qmpSetBootOrderToDisk(machine) 169 | if err != nil { 170 | return err 171 | } 172 | } 173 | 174 | return nil 175 | } 176 | -------------------------------------------------------------------------------- /machinator/kubernetes.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "bytes" 5 | "crypto/x509" 6 | "encoding/base64" 7 | "encoding/json" 8 | "encoding/pem" 9 | "fmt" 10 | "os/exec" 11 | "sort" 12 | "strings" 13 | "time" 14 | ) 15 | 16 | type KubernetesNode struct { 17 | Name string `json:"name"` 18 | Status string `json:"status"` 19 | Roles string `json:"roles"` 20 | Version string `json:"version"` 21 | InternalIp string `json:"internalIp"` 22 | ExternalIp string `json:"externalIp"` 23 | Architecture string `json:"architecture"` 24 | Cpu string `json:"cpu"` 25 | Memory string `json:"memory"` 26 | OsImage string `json:"osImage"` 27 | KernelVersion string `json:"kernelVersion"` 28 | ContainerRuntime string `json:"containerRuntime"` 29 | } 30 | 31 | type KubernetesIngress struct { 32 | Name string `json:"name"` 33 | Namespace string `json:"namespace"` 34 | Url string `json:"url"` 35 | CertificateExpiresAt time.Time `json:"certificateExpiresAt"` 36 | } 37 | 38 | type KubernetesExecError struct { 39 | ExitCode int 40 | Stdout string 41 | Stderr string 42 | } 43 | 44 | func (err *KubernetesExecError) Error() string { 45 | return fmt.Sprintf("failed to exec kubectl: exitCode=%d stdout=%s stderr=%s", err.ExitCode, err.Stdout, err.Stderr) 46 | } 47 | 48 | func kubectl(args ...string) (string, error) { 49 | var stderr, stdout bytes.Buffer 50 | 51 | cmd := exec.Command("kubectl", args...) 
52 | cmd.Stderr = &stderr 53 | cmd.Stdout = &stdout 54 | 55 | err := cmd.Run() 56 | 57 | if err != nil { 58 | exitCode := -1 59 | if exitError, ok := err.(*exec.ExitError); ok { 60 | exitCode = exitError.ProcessState.ExitCode() 61 | } 62 | return "", &KubernetesExecError{ 63 | ExitCode: exitCode, 64 | Stdout: stdout.String(), 65 | Stderr: stderr.String(), 66 | } 67 | } 68 | 69 | return strings.TrimSpace(stdout.String()), nil 70 | } 71 | 72 | type kubernetesNodeAddress struct { 73 | Address string `json:"address"` 74 | Type string `json:"type"` 75 | } 76 | 77 | type kubernetesNodeCondition struct { 78 | Type string `json:"type"` 79 | Status string `json:"status"` 80 | Reason string `json:"reason"` 81 | } 82 | 83 | type kubernetesNodeStatus struct { 84 | Addresses []kubernetesNodeAddress `json:"addresses"` 85 | Capacity map[string]string `json:"capacity"` 86 | Conditions []kubernetesNodeCondition `json:"conditions"` 87 | NodeInfo map[string]string `json:"nodeInfo"` 88 | } 89 | 90 | type kubernetesNodeMetadata struct { 91 | Name string `json:"name"` 92 | Labels map[string]string `json:"labels"` 93 | } 94 | 95 | type kubernetesNode struct { 96 | Kind string `json:"kind"` 97 | Status kubernetesNodeStatus `json:"status"` 98 | Metadata kubernetesNodeMetadata `json:"metadata"` 99 | } 100 | 101 | type kubernetesGetNodesResponse struct { 102 | Items []kubernetesNode `json:"items"` 103 | } 104 | 105 | type kubernetesIngressMetadata struct { 106 | Name string `json:"name"` 107 | Namespace string `json:"namespace"` 108 | } 109 | 110 | type kubernetesIngressRule struct { 111 | Host string `json:"host"` 112 | } 113 | 114 | type kubernetesIngressTls struct { 115 | SecretName string `json:"secretName"` 116 | } 117 | 118 | type kubernetesIngressSpec struct { 119 | Rules []kubernetesIngressRule `json:"rules"` 120 | Tls []kubernetesIngressTls `json:"tls"` 121 | } 122 | 123 | type kubernetesIngress struct { 124 | Kind string `json:"kind"` 125 | Metadata kubernetesIngressMetadata `json:"metadata"` 126 | Spec kubernetesIngressSpec `json:"spec"` 127 | } 128 | 129 | type kubernetesGetKubernetesIngressesResponse struct { 130 | Items []kubernetesIngress `json:"items"` 131 | } 132 | 133 | func GetKubernetesNodes() ([]KubernetesNode, error) { 134 | stdout, err := kubectl("get", "nodes", "-o", "json") 135 | if err != nil { 136 | return nil, err 137 | } 138 | 139 | var response kubernetesGetNodesResponse 140 | if err := json.Unmarshal([]byte(stdout), &response); err != nil { 141 | return nil, err 142 | } 143 | 144 | nodes := make([]KubernetesNode, 0, len(response.Items)) 145 | 146 | for _, item := range response.Items { 147 | roles := make([]string, 0) 148 | for label := range item.Metadata.Labels { 149 | if strings.HasPrefix(label, "node-role.kubernetes.io/") { 150 | roles = append(roles, label[len("node-role.kubernetes.io/"):]) 151 | } 152 | } 153 | sort.Strings(roles) 154 | var internalIp string 155 | var externalIp string 156 | for _, address := range item.Status.Addresses { 157 | switch address.Type { 158 | case "InternalIP": 159 | internalIp = address.Address 160 | case "ExternalIP": 161 | externalIp = address.Address 162 | } 163 | } 164 | var status string 165 | for _, condition := range item.Status.Conditions { 166 | switch condition.Type { 167 | case "Ready": 168 | if condition.Status == "True" { 169 | status = "Ready" 170 | } else { 171 | status = "NotReady" 172 | } 173 | } 174 | } 175 | nodes = append(nodes, KubernetesNode{ 176 | Name: item.Metadata.Name, 177 | Status: status, 178 | Roles: strings.Join(roles, 
","), 179 | InternalIp: internalIp, 180 | ExternalIp: externalIp, 181 | Cpu: item.Status.Capacity["cpu"], 182 | Memory: item.Status.Capacity["memory"], 183 | Architecture: item.Status.NodeInfo["architecture"], 184 | Version: item.Status.NodeInfo["kubeletVersion"], 185 | OsImage: item.Status.NodeInfo["osImage"], 186 | KernelVersion: item.Status.NodeInfo["kernelVersion"], 187 | ContainerRuntime: item.Status.NodeInfo["containerRuntimeVersion"], 188 | }) 189 | } 190 | 191 | return nodes, nil 192 | } 193 | 194 | type kubernetesCertificateSecretData struct { 195 | TlsCrt string `json:"tls.crt"` 196 | } 197 | 198 | type kubernetesCertificateSecret struct { 199 | Data kubernetesCertificateSecretData `json:"data"` 200 | } 201 | 202 | func getKubernetesCertificate(namespace string, secretName string) (*x509.Certificate, error) { 203 | stdout, err := kubectl("get", "secret", "-n", namespace, secretName, "-o", "json") 204 | if err != nil { 205 | return nil, err 206 | } 207 | 208 | var secret kubernetesCertificateSecret 209 | if err := json.Unmarshal([]byte(stdout), &secret); err != nil { 210 | return nil, err 211 | } 212 | 213 | pemData, err := base64.StdEncoding.DecodeString(secret.Data.TlsCrt) 214 | if err != nil { 215 | return nil, err 216 | } 217 | 218 | block, _ := pem.Decode(pemData) 219 | 220 | return x509.ParseCertificate(block.Bytes) 221 | } 222 | 223 | func GetKubernetesIngresses() ([]KubernetesIngress, error) { 224 | stdout, err := kubectl("get", "ingress", "-A", "-o", "json") 225 | if err != nil { 226 | return nil, err 227 | } 228 | 229 | var response kubernetesGetKubernetesIngressesResponse 230 | if err := json.Unmarshal([]byte(stdout), &response); err != nil { 231 | return nil, err 232 | } 233 | 234 | ingresses := make([]KubernetesIngress, 0, len(response.Items)) 235 | 236 | for _, item := range response.Items { 237 | var certificateExpiresAt time.Time 238 | if len(item.Spec.Tls) > 0 { 239 | certificate, err := getKubernetesCertificate(item.Metadata.Namespace, item.Spec.Tls[0].SecretName) 240 | if err != nil { 241 | return nil, err 242 | } 243 | certificateExpiresAt = certificate.NotAfter.Local() 244 | } 245 | for _, rule := range item.Spec.Rules { 246 | ingresses = append(ingresses, KubernetesIngress{ 247 | Name: item.Metadata.Name, 248 | Namespace: item.Metadata.Namespace, 249 | Url: fmt.Sprintf("https://%s", rule.Host), 250 | CertificateExpiresAt: certificateExpiresAt, 251 | }) 252 | } 253 | } 254 | 255 | return ingresses, nil 256 | } 257 | -------------------------------------------------------------------------------- /machinator/machines-status.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Machinator 5 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 | 51 | 52 | 53 | 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | {{- range .MachinesStatus}} 62 | 63 | 64 | {{- if .BmcType}} 65 | 66 | 67 | {{- else}} 68 | 69 | 70 | {{- end}} 71 | 72 | 73 | 74 | 75 | 76 | 77 | 78 | 79 | {{- end}} 80 | 81 |
Machines
Talos ActionsPower ActionsPower StateNameIpMacMac VendorClient IdHostnameExpires At ({{.Location}})
{{.PowerState}}{{.Name}}{{.Ip}}{{.Mac}}{{.MacVendor}}{{.ClientId}}{{.Hostname}}{{if not .ExpiresAt.IsZero}}{{.ExpiresAt}}{{end}}
82 | 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | 91 | 92 | 93 | 94 | 95 | 96 | 97 | 98 | 99 | 100 | 101 | {{- range .KubernetesNodes}} 102 | 103 | 104 | 105 | 106 | 107 | 108 | 109 | 110 | 111 | 112 | 113 | 114 | 115 | 116 | {{- end}} 117 | 118 |
Kubernetes Nodes
NameStatusRolesVersionInternal IPExternal IPCpuMemoryArchitectureOs ImageKernel VersionContainer Runtime
{{.Name}}{{.Status}}{{.Roles}}{{.Version}}{{.InternalIp}}{{.ExternalIp}}{{.Cpu}}{{.Memory}}{{.Architecture}}{{.OsImage}}{{.KernelVersion}}{{.ContainerRuntime}}
119 | 120 | 121 | 122 | 123 | 124 | 125 | 126 | 127 | 128 | 129 | 130 | {{- range .KubernetesIngresses}} 131 | 132 | 133 | 134 | 135 | 136 | 137 | {{- end}} 138 | 139 |
Kubernetes Ingresses
NameNamespaceUrlCertificate Expires At ({{.Location}})
{{.Name}}{{.Namespace}}{{.Url}}{{if not .CertificateExpiresAt.IsZero}}{{.CertificateExpiresAt}}{{end}}
140 | 171 | 172 | -------------------------------------------------------------------------------- /machinator/main.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "bufio" 5 | _ "embed" 6 | "encoding/json" 7 | "flag" 8 | "fmt" 9 | "html/template" 10 | "io/ioutil" 11 | "log" 12 | "net/http" 13 | "os" 14 | "sort" 15 | "strconv" 16 | "strings" 17 | text_template "text/template" 18 | "time" 19 | _ "time/tzdata" // include the embedded timezone database. 20 | 21 | "github.com/tomruk/oui/ouidata" 22 | ) 23 | 24 | type MachineStatus struct { 25 | Type string `json:"type"` 26 | Name string `json:"name"` 27 | BmcType string `json:"bmcType"` 28 | PowerState string `json:"powerState"` 29 | Ip string `json:"ip"` 30 | Mac string `json:"mac"` 31 | MacVendor string `json:"macVendor"` 32 | Hostname string `json:"hostname"` 33 | ClientId string `json:"clientId"` 34 | ExpiresAt time.Time `json:"expiresAt"` 35 | } 36 | 37 | type machineStatusByName []MachineStatus 38 | 39 | func (a machineStatusByName) Len() int { return len(a) } 40 | func (a machineStatusByName) Swap(i, j int) { a[i], a[j] = a[j], a[i] } 41 | func (a machineStatusByName) Less(i, j int) bool { return a[i].Name < a[j].Name } 42 | 43 | // read from /vagrant/shared/machines.json 44 | type Machine struct { 45 | Type string `json:"type"` 46 | Name string `json:"name"` 47 | Role string `json:"role"` 48 | Arch string `json:"arch"` 49 | Ip string `json:"ip"` 50 | Mac string `json:"mac"` 51 | BmcType string `json:"bmcType"` 52 | BmcIp string `json:"bmcIp"` 53 | BmcPort int `json:"bmcPort"` 54 | BmcQmpPort int `json:"bmcQmpPort"` // NB used by the redfish BmcType to emulate "boot once" in libvirt. 55 | } 56 | 57 | // read from /var/lib/misc/dnsmasq.leases 58 | // each line is 59 | // e.g.
1624470573 ec:b1:d7:71:ff:f3 10.3.0.131 DESKTOP-8RFCDG6 01:ec:b1:d7:71:ff:f3 60 | type DhcpLease struct { 61 | ExpiresAt time.Time 62 | Mac string 63 | Ip string 64 | Hostname string 65 | ClientId string 66 | } 67 | 68 | func GetMachinesStatus() ([]MachineStatus, error) { 69 | ouiDb, err := ouidata.NewDB() 70 | if err != nil { 71 | return nil, err 72 | } 73 | 74 | machines, err := GetMachines("machines.json") 75 | if err != nil { 76 | return nil, err 77 | } 78 | 79 | dhcpLeases, err := GetDhcpLeases("dnsmasq.leases") 80 | if err != nil { 81 | return nil, err 82 | } 83 | 84 | machinesMap := make(map[string]Machine) 85 | for _, m := range machines { 86 | machinesMap[m.Mac] = m 87 | } 88 | 89 | machinesStatusMap := make(map[string]MachineStatus) 90 | 91 | for _, m := range machines { 92 | macVendor, _ := ouiDb.Lookup(m.Mac) 93 | powerState, _ := bmcGetPowerState(&m) 94 | machinesStatusMap[m.Mac] = MachineStatus{ 95 | Type: m.Type, 96 | Name: m.Name, 97 | BmcType: m.BmcType, 98 | PowerState: powerState, 99 | Ip: m.Ip, 100 | Mac: m.Mac, 101 | MacVendor: macVendor, 102 | } 103 | } 104 | 105 | for _, l := range dhcpLeases { 106 | if machine, ok := machinesStatusMap[l.Mac]; ok { 107 | machinesStatusMap[l.Mac] = MachineStatus{ 108 | Type: machine.Type, 109 | Name: machine.Name, 110 | BmcType: machine.BmcType, 111 | PowerState: machine.PowerState, 112 | Ip: l.Ip, 113 | Mac: machine.Mac, 114 | MacVendor: machine.MacVendor, 115 | Hostname: l.Hostname, 116 | ClientId: l.ClientId, 117 | ExpiresAt: l.ExpiresAt, 118 | } 119 | } else { 120 | macVendor, _ := ouiDb.Lookup(l.Mac) 121 | machinesStatusMap[l.Mac] = MachineStatus{ 122 | Ip: l.Ip, 123 | Mac: l.Mac, 124 | MacVendor: macVendor, 125 | Hostname: l.Hostname, 126 | ClientId: l.ClientId, 127 | ExpiresAt: l.ExpiresAt, 128 | } 129 | } 130 | } 131 | 132 | machineStatus := make([]MachineStatus, 0, len(machinesStatusMap)) 133 | 134 | for _, m := range machinesStatusMap { 135 | machineStatus = append(machineStatus, m) 136 | } 137 | 138 | sort.Sort(machineStatusByName(machineStatus)) 139 | 140 | return machineStatus, nil 141 | } 142 | 143 | func GetMachines(filePath string) ([]Machine, error) { 144 | data, err := ioutil.ReadFile(filePath) 145 | if err != nil { 146 | return nil, err 147 | } 148 | var machines []Machine 149 | if err := json.Unmarshal(data, &machines); err != nil { 150 | return nil, err 151 | } 152 | return machines, nil 153 | } 154 | 155 | func GetDhcpLeases(filePath string) ([]DhcpLease, error) { 156 | file, err := os.Open(filePath) 157 | if err != nil { 158 | return nil, err 159 | } 160 | defer file.Close() 161 | 162 | dhcpLeases := make([]DhcpLease, 0) 163 | 164 | scanner := bufio.NewScanner(file) 165 | 166 | for scanner.Scan() { 167 | fields := strings.Fields(scanner.Text()) 168 | timestamp, err := strconv.ParseInt(fields[0], 10, 64) 169 | if err != nil { 170 | return nil, err 171 | } 172 | dhcpLeases = append(dhcpLeases, DhcpLease{ 173 | ExpiresAt: time.Unix(timestamp, 0).Local(), 174 | Mac: fields[1], 175 | Ip: fields[2], 176 | Hostname: fields[3], 177 | ClientId: fields[4], 178 | }) 179 | } 180 | 181 | if err := scanner.Err(); err != nil { 182 | return nil, err 183 | } 184 | 185 | return dhcpLeases, nil 186 | } 187 | 188 | func executeMachineAction(machine *Machine, action string) error { 189 | switch action { 190 | case "talos-shutdown": 191 | return talosShutdown(machine) 192 | case "talos-reboot": 193 | return talosReboot(machine) 194 | case "talos-reset": 195 | return talosReset(machine) 196 | case "bmc-reset-to-network-talos": 197 | 
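// NB the "talos", "rescue" and "rescue-wipe" names used by the
// *-network-* actions map to the matchbox profiles selected by
// matchboxSetMachineOs (see matchbox.go).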
return bmcResetToNetwork(machine, "talos") 198 | case "bmc-reset-to-network-rescue": 199 | return bmcResetToNetwork(machine, "rescue") 200 | case "bmc-reset-to-network-rescue-wipe": 201 | return bmcResetToNetwork(machine, "rescue-wipe") 202 | case "bmc-reset-to-disk": 203 | return bmcResetToDisk(machine) 204 | case "bmc-reset": 205 | return bmcReset(machine) 206 | case "matchbox-network-talos": 207 | return matchboxSetMachineOs(machine, "talos") 208 | case "matchbox-network-rescue": 209 | return matchboxSetMachineOs(machine, "rescue") 210 | case "matchbox-network-rescue-wipe": 211 | return matchboxSetMachineOs(machine, "rescue-wipe") 212 | } 213 | return fmt.Errorf("unsupported machine action: %s", action) 214 | } 215 | 216 | //go:embed machines-status.html 217 | var machinesStatusTemplateText string 218 | 219 | var machinesStatusTemplate = template.Must(template.New("MachinesStatus").Parse(machinesStatusTemplateText)) 220 | 221 | type machinesStatusData struct { 222 | Location *time.Location 223 | MachinesStatus []MachineStatus 224 | KubernetesNodes []KubernetesNode 225 | KubernetesIngresses []KubernetesIngress 226 | } 227 | 228 | //go:embed wipe.sh 229 | var wipeScriptTemplateText string 230 | 231 | var wipeScriptTemplate = text_template.Must(text_template.New("WipeScript").Parse(wipeScriptTemplateText)) 232 | 233 | type wipeScriptData struct { 234 | WipedUrl string 235 | } 236 | 237 | type logWriter struct { 238 | } 239 | 240 | func (writer logWriter) Write(bytes []byte) (int, error) { 241 | return fmt.Print(time.Now().Format("2006-01-02T15:04:05.999Z") + " " + string(bytes)) 242 | } 243 | 244 | func main() { 245 | log.SetFlags(0) 246 | log.SetOutput(new(logWriter)) 247 | 248 | var listenAddress = flag.String("listen", ":8000", "Listen address.") 249 | 250 | flag.Parse() 251 | 252 | if flag.NArg() != 0 { 253 | flag.Usage() 254 | log.Fatalf("\nERROR You MUST NOT pass any positional arguments") 255 | } 256 | 257 | timezone, err := ioutil.ReadFile("/etc/timezone") 258 | if err != nil { 259 | log.Fatalf("\nERROR Failed to get the local time zone: %v", err) 260 | } 261 | 262 | location, err := time.LoadLocation(strings.TrimSpace(string(timezone))) 263 | if err != nil { 264 | log.Fatalf("\nERROR Failed to load local time zone: %v", err) 265 | } 266 | 267 | http.HandleFunc("/machines.json", func(w http.ResponseWriter, r *http.Request) { 268 | machinesStatus, err := GetMachinesStatus() 269 | if err != nil { 270 | http.Error(w, err.Error(), http.StatusInternalServerError) 271 | return 272 | } 273 | 274 | w.Header().Set("Content-Type", "application/json") 275 | 276 | json.NewEncoder(w).Encode(machinesStatus) 277 | }) 278 | 279 | http.HandleFunc("/action", func(w http.ResponseWriter, r *http.Request) { 280 | actionRequest := struct { 281 | Action string `json:"action"` 282 | MachineName string `json:"machineName"` 283 | }{} 284 | 285 | err = json.NewDecoder(r.Body).Decode(&actionRequest) 286 | if err != nil { 287 | http.Error(w, err.Error(), http.StatusInternalServerError) 288 | return 289 | } 290 | 291 | machines, err := GetMachines("machines.json") 292 | if err != nil { 293 | http.Error(w, err.Error(), http.StatusInternalServerError) 294 | return 295 | } 296 | 297 | for _, machine := range machines { 298 | if machine.Name == actionRequest.MachineName { 299 | err = executeMachineAction(&machine, actionRequest.Action) 300 | if err != nil { 301 | actionResult := struct { 302 | Error string `json:"error"` 303 | }{ 304 | Error: err.Error(), 305 | } 306 | log.Printf("ERROR: machine %s action %s failed 
with: %s", machine.Name, actionRequest.Action, actionResult.Error) 307 | w.WriteHeader(http.StatusInternalServerError) 308 | w.Header().Set("Content-Type", "application/json") 309 | json.NewEncoder(w).Encode(actionResult) 310 | } else { 311 | log.Printf("INFO: machine %s action %s succeeded", machine.Name, actionRequest.Action) 312 | w.WriteHeader(http.StatusOK) 313 | w.Header().Set("Content-Type", "application/json") 314 | w.Write([]byte("{}")) 315 | } 316 | return 317 | } 318 | } 319 | 320 | w.WriteHeader(http.StatusNotFound) 321 | w.Header().Set("Content-Type", "application/json") 322 | w.Write([]byte("{}")) 323 | }) 324 | 325 | http.HandleFunc("/wipe.sh", func(w http.ResponseWriter, r *http.Request) { 326 | wipedUrl := strings.Replace( 327 | fmt.Sprintf("http://%s%s", r.Host, r.URL.String()), 328 | "/wipe.sh", 329 | "/wiped", 330 | 1) 331 | w.Header().Set("Content-Type", "text/plain") 332 | err = wipeScriptTemplate.ExecuteTemplate(w, "WipeScript", wipeScriptData{ 333 | WipedUrl: wipedUrl, 334 | }) 335 | if err != nil { 336 | http.Error(w, err.Error(), http.StatusInternalServerError) 337 | return 338 | } 339 | }) 340 | 341 | http.HandleFunc("/wiped", func(w http.ResponseWriter, r *http.Request) { 342 | if r.Method != "POST" { 343 | http.Error(w, http.StatusText(http.StatusMethodNotAllowed), http.StatusMethodNotAllowed) 344 | return 345 | } 346 | 347 | mac := strings.ReplaceAll(r.URL.Query().Get("mac"), "-", ":") 348 | 349 | machines, err := GetMachines("machines.json") 350 | if err != nil { 351 | http.Error(w, err.Error(), http.StatusInternalServerError) 352 | return 353 | } 354 | 355 | for _, machine := range machines { 356 | if machine.Mac == mac { 357 | switch machine.BmcType { 358 | case "": 359 | err := executeMachineAction(&machine, "matchbox-network-talos") 360 | if err != nil { 361 | http.Error(w, err.Error(), http.StatusInternalServerError) 362 | return 363 | } 364 | w.Write([]byte("reboot")) 365 | default: 366 | err := executeMachineAction(&machine, "bmc-reset-to-network-talos") 367 | if err != nil { 368 | http.Error(w, err.Error(), http.StatusInternalServerError) 369 | return 370 | } 371 | } 372 | return 373 | } 374 | } 375 | 376 | w.WriteHeader(http.StatusNotFound) 377 | }) 378 | 379 | http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) { 380 | if r.URL.Path != "/" { 381 | http.Error(w, http.StatusText(http.StatusNotFound), http.StatusNotFound) 382 | return 383 | } 384 | 385 | machinesStatus, err := GetMachinesStatus() 386 | if err != nil { 387 | http.Error(w, err.Error(), http.StatusInternalServerError) 388 | return 389 | } 390 | 391 | kubernetesNodes, err := GetKubernetesNodes() 392 | if err != nil { 393 | http.Error(w, err.Error(), http.StatusInternalServerError) 394 | return 395 | } 396 | 397 | kubernetesIngresses, err := GetKubernetesIngresses() 398 | if err != nil { 399 | http.Error(w, err.Error(), http.StatusInternalServerError) 400 | return 401 | } 402 | 403 | w.Header().Set("Content-Type", "text/html") 404 | 405 | err = machinesStatusTemplate.ExecuteTemplate(w, "MachinesStatus", machinesStatusData{ 406 | Location: location, 407 | MachinesStatus: machinesStatus, 408 | KubernetesNodes: kubernetesNodes, 409 | KubernetesIngresses: kubernetesIngresses, 410 | }) 411 | if err != nil { 412 | http.Error(w, err.Error(), http.StatusInternalServerError) 413 | return 414 | } 415 | }) 416 | 417 | log.Printf("Listening at http://%s", *listenAddress) 418 | 419 | err = http.ListenAndServe(*listenAddress, nil) 420 | if err != nil { 421 | log.Fatalf("Failed to 
ListenAndServe: %v", err) 422 | } 423 | } 424 | -------------------------------------------------------------------------------- /machinator/matchbox.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "encoding/json" 5 | "fmt" 6 | "io/ioutil" 7 | "os" 8 | ) 9 | 10 | func matchboxSetMachineOs(machine *Machine, osName string) error { 11 | var profile string 12 | 13 | switch osName { 14 | case "talos": 15 | profile = fmt.Sprintf("%s-%s", machine.Role, machine.Arch) 16 | case "rescue": 17 | profile = fmt.Sprintf("rescue-%s", machine.Arch) 18 | case "rescue-wipe": 19 | profile = fmt.Sprintf("rescue-wipe-%s", machine.Arch) 20 | default: 21 | return fmt.Errorf("unknown os: %s", osName) 22 | } 23 | 24 | _, err := os.Stat(fmt.Sprintf("/var/lib/matchbox/profiles/%s.json", profile)) 25 | if err != nil { 26 | return err 27 | } 28 | 29 | machineGroupPath := fmt.Sprintf("/var/lib/matchbox/groups/%s.json", machine.Name) 30 | 31 | data, err := ioutil.ReadFile(machineGroupPath) 32 | if err != nil { 33 | return err 34 | } 35 | 36 | var group map[string]interface{} 37 | 38 | err = json.Unmarshal(data, &group) 39 | if err != nil { 40 | return err 41 | } 42 | 43 | group["profile"] = profile 44 | 45 | data, err = json.Marshal(group) 46 | if err != nil { 47 | return err 48 | } 49 | 50 | err = ioutil.WriteFile(machineGroupPath, data, 0644) 51 | 52 | return err 53 | } 54 | -------------------------------------------------------------------------------- /machinator/qmp.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "encoding/json" 5 | "fmt" 6 | "time" 7 | 8 | "github.com/digitalocean/go-qemu/qmp" 9 | ) 10 | 11 | type qomSetCommand struct { 12 | Execute string `json:"execute"` 13 | Arguments qomSetArguments `json:"arguments"` 14 | } 15 | 16 | type qomSetArguments struct { 17 | Path string `json:"path"` 18 | Property string `json:"property"` 19 | Value int `json:"value"` 20 | } 21 | 22 | // set the boot order to boot from disk. 23 | // NB this is equivalent to using the following commands: 24 | // ./qmp-shell /tmp/qmp-talos-vagrant_w1.socket 25 | // qom-list path=/machine/peripheral 26 | // qom-list path=/machine/peripheral/net0 27 | // qom-get path=/machine/peripheral/net0 property=bootindex 28 | // qom-set path=/machine/peripheral/net0 property=bootindex value=-1 29 | // qom-set path=/machine/peripheral/net0 property=bootindex value=1 30 | // qom-list path=/machine/peripheral/virtio-disk1 31 | // qom-get path=/machine/peripheral/virtio-disk1 property=bootindex 32 | // qom-set path=/machine/peripheral/virtio-disk1 property=bootindex value=-1 33 | // qom-set path=/machine/peripheral/virtio-disk1 property=bootindex value=1 34 | // NB we cannot set the same bootindex value. 35 | // see https://gist.github.com/rgl/dc38c6875a53469fdebb2e9c0a220c6c 36 | func qmpSetBootOrderToDisk(machine *Machine) error { 37 | monitor, err := qmp.NewSocketMonitor("tcp", fmt.Sprintf("%s:%d", machine.BmcIp, machine.BmcQmpPort), 2*time.Second) 38 | if err != nil { 39 | return err 40 | } 41 | if err := monitor.Connect(); err != nil { return err } 42 | defer monitor.Disconnect() 43 | 44 | // remove net0 from boot.
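// NB (illustrative) the qomSetCommand below marshals to the following
// single-line JSON document, which is what actually travels over the
// QMP socket:
//   {"execute":"qom-set","arguments":{"path":"/machine/peripheral/net0","property":"bootindex","value":-1}}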
45 | command := qomSetCommand{ 46 | Execute: "qom-set", 47 | Arguments: qomSetArguments{ 48 | Path: "/machine/peripheral/net0", 49 | Property: "bootindex", 50 | Value: -1, 51 | }, 52 | } 53 | commandJson, err := json.Marshal(command) 54 | if err != nil { 55 | return err 56 | } 57 | _, err = monitor.Run(commandJson) 58 | if err != nil { 59 | return err 60 | } 61 | 62 | // add virtio-disk1 to boot. 63 | command = qomSetCommand{ 64 | Execute: "qom-set", 65 | Arguments: qomSetArguments{ 66 | Path: "/machine/peripheral/virtio-disk1", 67 | Property: "bootindex", 68 | Value: 1, 69 | }, 70 | } 71 | commandJson, err = json.Marshal(command) 72 | if err != nil { 73 | return err 74 | } 75 | _, err = monitor.Run(commandJson) 76 | if err != nil { 77 | return err 78 | } 79 | return nil 80 | } 81 | -------------------------------------------------------------------------------- /machinator/redfish_bmc.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "fmt" 5 | "log" 6 | "time" 7 | 8 | "github.com/stmcginnis/gofish" 9 | "github.com/stmcginnis/gofish/redfish" 10 | ) 11 | 12 | func redfishBmcGetPowerState(machine *Machine) (string, error) { 13 | _, system, err := redfishBmcGetSystem(machine) 14 | if err != nil { 15 | return "", err 16 | } 17 | return string(system.PowerState), nil 18 | } 19 | 20 | func redfishBmcReset(machine *Machine) error { 21 | _, system, err := redfishBmcGetSystem(machine) 22 | if err != nil { 23 | return err 24 | } 25 | log.Printf("Forcing the system restart...") 26 | err = system.Reset(redfish.ForceRestartResetType) 27 | if err != nil { 28 | return err 29 | } 30 | return nil 31 | } 32 | 33 | func redfishBmcResetToNetwork(machine *Machine) error { 34 | return redfishBmcResetToBootTarget(machine, redfish.PxeBootSourceOverrideTarget) 35 | } 36 | 37 | func redfishBmcResetToDisk(machine *Machine) error { 38 | return redfishBmcResetToBootTarget(machine, redfish.HddBootSourceOverrideTarget) 39 | } 40 | 41 | func redfishBmcResetToBootTarget(machine *Machine, bootTarget redfish.BootSourceOverrideTarget) error { 42 | c, system, err := redfishBmcGetSystem(machine) 43 | if err != nil { 44 | return err 45 | } 46 | 47 | log.Printf("Current System State:") 48 | log.Printf(" ODataID: %s", system.ODataID) 49 | log.Printf(" UUID: %s", system.UUID) 50 | log.Printf(" Name: %s", system.Name) 51 | log.Printf(" PowerState: %s", system.PowerState) 52 | log.Printf(" SupportedResetTypes: %s", system.SupportedResetTypes) 53 | log.Printf(" BootSourceOverrideEnabled: %s", system.Boot.BootSourceOverrideEnabled) 54 | log.Printf(" BootSourceOverrideTarget: %s", system.Boot.BootSourceOverrideTarget) 55 | 56 | log.Printf("Setting the boot order to %s...", bootTarget) 57 | err = system.SetBoot(redfish.Boot{ 58 | // NB sushy-vbmc-emulator does not support Once :-( 59 | // see https://storyboard.openstack.org/#!/story/2005368#comment-175052 60 | BootSourceOverrideEnabled: redfish.OnceBootSourceOverrideEnabled, 61 | BootSourceOverrideTarget: bootTarget, 62 | }) 63 | if err != nil { 64 | return err 65 | } 66 | 67 | if system.PowerState != redfish.OffPowerState { 68 | log.Printf("Forcing the system off...") 69 | err = system.Reset(redfish.ForceOffResetType) 70 | if err != nil { 71 | return err 72 | } 73 | log.Printf("Waiting for the system to be off...") 74 | for { 75 | system, err = redfish.GetComputerSystem(c, system.ODataID) 76 | if err == nil && system.PowerState == redfish.OffPowerState { 77 | break 78 | } 79 | time.Sleep(1 * time.Second) 80 | } 
81 | } 82 | 83 | log.Printf("Forcing the system on...") 84 | err = system.Reset(redfish.ForceOnResetType) 85 | if err != nil { 86 | return err 87 | } 88 | log.Printf("Waiting for the system to be on...") 89 | for { 90 | system, err = redfish.GetComputerSystem(c, system.ODataID) 91 | if err == nil && system.PowerState == redfish.OnPowerState { 92 | break 93 | } 94 | time.Sleep(1 * time.Second) 95 | } 96 | 97 | // because sushy-vbmc-emulator does not support the 98 | // OnceBootSourceOverrideEnabled BootSourceOverrideEnabled method. 99 | // we must revert the bootTarget to disk again. 100 | // NB this will only work after a power-cycle (from off to on); which 101 | // makes this useless for my purposes of having it work immediately. 102 | // see https://storyboard.openstack.org/#!/story/2005368#comment-175052 103 | // NB the qemu boot order is determined by each individual device `bootindex`, e.g.; 104 | // -device scsi-hd,drive=drive0,bootindex=0 105 | // NB `-boot` seems to be only supported by BIOS (not UEFI). 106 | // NB setting the boot order from QMP seems to work, so we do that to 107 | // workaround the BMC. 108 | if bootTarget != redfish.HddBootSourceOverrideTarget { 109 | bootTarget = redfish.HddBootSourceOverrideTarget 110 | // NB we also do it from sushy side; because that will apply once 111 | // the VM is started again; our qmp changes only last while the 112 | // VM is on. 113 | log.Printf("Resetting the boot order to %s...", bootTarget) 114 | err = system.SetBoot(redfish.Boot{ 115 | BootSourceOverrideEnabled: redfish.OnceBootSourceOverrideEnabled, 116 | BootSourceOverrideTarget: bootTarget, 117 | }) 118 | if err != nil { 119 | return err 120 | } 121 | log.Printf("Resetting the boot order to %s (QMP)...", bootTarget) 122 | err = qmpSetBootOrderToDisk(machine) 123 | if err != nil { 124 | return err 125 | } 126 | } 127 | 128 | return nil 129 | } 130 | 131 | func redfishBmcGetSystem(machine *Machine) (*gofish.APIClient, *redfish.ComputerSystem, error) { 132 | if machine.BmcIp == "" { 133 | return nil, nil, fmt.Errorf("machine has no defined bmcIp") 134 | } 135 | if machine.BmcPort == 0 { 136 | return nil, nil, fmt.Errorf("machine has no defined bmcPort") 137 | } 138 | 139 | c, err := gofish.ConnectDefault(fmt.Sprintf("http://%s:%d", machine.BmcIp, machine.BmcPort)) 140 | if err != nil { 141 | return nil, nil, err 142 | } 143 | 144 | systems, err := c.Service.Systems() 145 | if err != nil { 146 | return nil, nil, err 147 | } 148 | if len(systems) == 0 { 149 | return nil, nil, fmt.Errorf("machine has no system") 150 | } 151 | if len(systems) > 1 { 152 | return nil, nil, fmt.Errorf("machine has more than one system") 153 | } 154 | 155 | return c, systems[0], nil 156 | } 157 | -------------------------------------------------------------------------------- /machinator/talos.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "bytes" 5 | "fmt" 6 | "os/exec" 7 | "strings" 8 | ) 9 | 10 | type TalosExecError struct { 11 | ExitCode int 12 | Stdout string 13 | Stderr string 14 | } 15 | 16 | func (err *TalosExecError) Error() string { 17 | return fmt.Sprintf("failed to exec talosctl: exitCode=%d stdout=%s stderr=%s", err.ExitCode, err.Stdout, err.Stderr) 18 | } 19 | 20 | func talosctl(args ...string) (string, error) { 21 | var stderr, stdout bytes.Buffer 22 | 23 | cmd := exec.Command("talosctl", args...) 
24 | cmd.Stderr = &stderr 25 | cmd.Stdout = &stdout 26 | 27 | err := cmd.Run() 28 | 29 | if err != nil { 30 | exitCode := -1 31 | if exitError, ok := err.(*exec.ExitError); ok { 32 | exitCode = exitError.ProcessState.ExitCode() 33 | } 34 | return "", &TalosExecError{ 35 | ExitCode: exitCode, 36 | Stdout: stdout.String(), 37 | Stderr: stderr.String(), 38 | } 39 | } 40 | 41 | return strings.TrimSpace(stdout.String()), nil 42 | } 43 | 44 | func talosShutdown(machine *Machine) error { 45 | // NB this will directly connect to the target machine (-e and 46 | // -n addresses are the same) without going through a control 47 | // plane node. 48 | _, err := talosctl("-e", machine.Name, "-n", machine.Name, "shutdown") 49 | 50 | return err 51 | } 52 | 53 | func talosReboot(machine *Machine) error { 54 | // NB this will directly connect to the target machine (-e and 55 | // -n addresses are the same) without going through a control 56 | // plane node. 57 | _, err := talosctl("-e", machine.Name, "-n", machine.Name, "reboot") 58 | 59 | return err 60 | } 61 | 62 | func talosReset(machine *Machine) error { 63 | // NB this will directly connect to the target machine (-e and 64 | // -n addresses are the same) without going through a control 65 | // plane node. 66 | _, err := talosctl("-e", machine.Name, "-n", machine.Name, "reset") 67 | 68 | return err 69 | } 70 | -------------------------------------------------------------------------------- /machinator/wipe.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -euo pipefail 3 | 4 | # NB this file is executed synchronously by the live-config service 5 | # (/lib/systemd/system/live-config.service) from the 6 | # /lib/live/config/9990-hooks hook script. 7 | # NB the systemd basic.target is only executed after this script 8 | # finishes (live-config.service has the WantedBy=basic.target 9 | # setting). 10 | 11 | # remove all the signatures from the install disk device.
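# NB wipefs --all only erases the filesystem/raid/partition-table
#    signatures (magic bytes), it does not zero the data, which is why
#    this is much faster than a full wipe; e.g., for an (illustrative)
#    /dev/sda install disk, the python script below boils down to:
#      wipefs --all /dev/sda
#      parted --script /dev/sda mklabel gpt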
12 | cat >/wipe.py <<'EOF' 13 | import json 14 | import requests 15 | import subprocess 16 | import time 17 | 18 | def parse_args(lines): 19 | args = {} 20 | for l in lines: 21 | parts = l.split('=', 1) 22 | k = parts[0] 23 | if len(parts) > 1: 24 | v = parts[1] 25 | else: 26 | v = None 27 | args[k] = v 28 | return args 29 | 30 | def get_cmdline(): 31 | with open('/proc/cmdline', 'r') as f: 32 | return parse_args(f.read().rstrip().split(' ')) 33 | 34 | def get_metadata(metadata_url): 35 | r = requests.get(metadata_url) 36 | r.raise_for_status() 37 | return parse_args(r.text.rstrip().split('\n')) 38 | 39 | cmdline = get_cmdline() 40 | 41 | metadata = get_metadata(cmdline['matchbox.metadata']) 42 | 43 | install_disk = metadata['INSTALLDISK'] 44 | 45 | if not install_disk: 46 | raise Exception('the INSTALLDISK metadata must not be empty') 47 | 48 | print(f'Wiping the {install_disk} disk device...') 49 | subprocess.run(['wipefs', '--all', install_disk], check=True) 50 | 51 | print(f'Creating an empty GPT label in the {install_disk} disk device...') 52 | subprocess.run(['parted', '--script', install_disk, 'mklabel', 'gpt'], check=True) 53 | 54 | print('Sync...') 55 | subprocess.run(['sync'], check=True) 56 | 57 | print(f'Calling {{.WipedUrl}}...') 58 | r = requests.post('{{.WipedUrl}}') 59 | r.raise_for_status() 60 | 61 | if r.text == 'reboot': 62 | for t in reversed(range(10)): 63 | print(f'Rebooting in T-{t+1}...') 64 | time.sleep(1) 65 | subprocess.run(['reboot'], check=True) 66 | EOF 67 | 68 | # configure the system to automatically execute wipe when the user logins. 69 | # NB the vagrant user is automatically logged in when the username=vagrant 70 | # is present in the kernel command line. 71 | # NB this file is sourced by the login shell. 72 | profile_wipe_sh_path='/etc/profile.d/Z99-wipe.sh' 73 | cat >$profile_wipe_sh_path </etc/sudoers.d/env_keep_apt 6 | chmod 440 /etc/sudoers.d/env_keep_apt 7 | export DEBIAN_FRONTEND=noninteractive 8 | 9 | 10 | # 11 | # make sure the package index cache is up-to-date before installing anything. 12 | 13 | apt-get update 14 | 15 | 16 | # 17 | # expand the root partition. 18 | 19 | apt-get install -y --no-install-recommends parted 20 | partition_device="$(findmnt -no SOURCE /)" 21 | partition_number="$(echo "$partition_device" | perl -ne '/(\d+)$/ && print $1')" 22 | disk_device="$(echo "$partition_device" | perl -ne '/(.+?)\d+$/ && print $1')" 23 | parted ---pretend-input-tty "$disk_device" <~/.vimrc <<'EOF' 41 | syntax on 42 | set background=dark 43 | set esckeys 44 | set ruler 45 | set laststatus=2 46 | set nobackup 47 | EOF 48 | 49 | 50 | # 51 | # configure the shell. 52 | 53 | cat >~/.bash_history <<'EOF' 54 | EOF 55 | 56 | cat >~/.bashrc <<'EOF' 57 | # If not running interactively, don't do anything 58 | [[ "$-" != *i* ]] && return 59 | 60 | export EDITOR=vim 61 | export PAGER=less 62 | 63 | alias l='ls -lF --color' 64 | alias ll='l -a' 65 | alias h='history 25' 66 | alias j='jobs -l' 67 | EOF 68 | 69 | cat >~/.inputrc <<'EOF' 70 | "\e[A": history-search-backward 71 | "\e[B": history-search-forward 72 | "\eOD": backward-word 73 | "\eOC": forward-word 74 | set show-all-if-ambiguous on 75 | set completion-ignore-case on 76 | EOF 77 | 78 | 79 | # 80 | # install git. 81 | 82 | apt-get install -y git 83 | 84 | 85 | # 86 | # configure git. 
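# NB since provisioning runs as root, these settings (and the dotfiles
#    above) end up in root's home directory (e.g. /root/.gitconfig);
#    they do not affect the vagrant user.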
87 | 88 | git config --global user.name 'Rui Lopes' 89 | git config --global user.email 'rgl@ruilopes.com' 90 | git config --global push.default simple 91 | git config --global core.autocrlf false 92 | 93 | 94 | # 95 | # install useful tools. 96 | 97 | apt-get install -y unzip 98 | apt-get install -y httpie 99 | apt-get install -y jq 100 | apt-get install -y python3-tabulate python3-yaml 101 | apt-get install -y bash-completion 102 | apt-get install -y p7zip-full 103 | 104 | # etherwake lets us power-on a machine by sending a Wake-on-LAN (WOL) 105 | # magic packet to its ethernet card. 106 | # e.g. etherwake -i eth1 00:e0:4c:01:93:a8 107 | apt-get install -y etherwake 108 | 109 | # arp-scan lets us discover nodes in the local network. 110 | apt-get install -y arp-scan 111 | -------------------------------------------------------------------------------- /provision-chart-cert-manager.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | source /vagrant/lib.sh 3 | 4 | cert_manager_chart_version="${1:-1.12.1}"; shift || true 5 | 6 | # provision cert-manager. 7 | # NB YOU MUST INSTALL CERT-MANAGER TO THE cert-manager NAMESPACE. the CRDs have it hard-coded. 8 | # NB YOU CANNOT INSTALL MULTIPLE INSTANCES OF CERT-MANAGER IN A CLUSTER. 9 | # NB the CRDs have to be installed separately from the chart. 10 | # TODO would it make sense to have a separate helm chart for installing the CRDs? 11 | # see https://artifacthub.io/packages/helm/cert-manager/cert-manager 12 | # see https://github.com/jetstack/cert-manager/tree/master/deploy/charts/cert-manager 13 | # see https://cert-manager.io/docs/installation/supported-releases/ 14 | # see https://cert-manager.io/docs/configuration/selfsigned/#bootstrapping-ca-issuers 15 | # see https://cert-manager.io/docs/usage/ingress/ 16 | helm repo add jetstack https://charts.jetstack.io 17 | helm repo update 18 | title 'Installing cert-manager crds' 19 | kubectl apply -f "https://github.com/jetstack/cert-manager/releases/download/v$cert_manager_chart_version/cert-manager.crds.yaml" 20 | title 'Installing cert-manager' 21 | helm upgrade --install \ 22 | cert-manager \ 23 | jetstack/cert-manager \ 24 | --namespace cert-manager \ 25 | --version "$cert_manager_chart_version" \ 26 | --create-namespace \ 27 | --wait \ 28 | --values <(cat </vagrant/shared/admin-token.txt 112 | -------------------------------------------------------------------------------- /provision-chart-metallb.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | source /vagrant/lib.sh 3 | 4 | # metallb chart. 5 | # see https://github.com/metallb/metallb/releases 6 | # see https://github.com/metallb/metallb/tree/v0.13.10/charts/metallb 7 | # see https://metallb.universe.tf/installation/#installation-with-helm 8 | # see https://metallb.universe.tf/configuration/#layer-2-configuration 9 | metallb_chart_version="${1:-0.13.10}"; shift || true 10 | metallb_ip_addresses="${1:-10.10.0.200-10.10.0.219}"; shift || true 11 | 12 | # add the metallb helm charts repository. 13 | helm repo add metallb https://metallb.github.io/metallb 14 | 15 | # search the chart and app versions, e.g., in this case we are using: 16 | # NAME CHART VERSION APP VERSION DESCRIPTION 17 | # metallb/metallb 0.13.10 v0.13.10 A network load-balancer implementation for Kube... 18 | helm search repo metallb/metallb --versions | head -5 19 | 20 | # create the namespace.
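# NB the metallb speaker requires elevated privileges (e.g. the NET_RAW
#    capability to answer ARP requests in L2 mode), hence the privileged
#    pod-security labels in the namespace created below.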
21 | # see https://github.com/metallb/metallb/blob/v0.13.10/config/native/ns.yaml 22 | # see https://github.com/metallb/metallb/issues/1457 23 | kubectl apply -f - <<'EOF' 24 | apiVersion: v1 25 | kind: Namespace 26 | metadata: 27 | name: metallb-system 28 | labels: 29 | pod-security.kubernetes.io/enforce: privileged 30 | pod-security.kubernetes.io/audit: privileged 31 | pod-security.kubernetes.io/warn: privileged 32 | EOF 33 | 34 | # install. 35 | helm upgrade --install \ 36 | metallb \ 37 | metallb/metallb \ 38 | --version $metallb_chart_version \ 39 | --namespace metallb-system \ 40 | --wait 41 | 42 | # advertise addresses using the L2 mode. 43 | # NB we have to sit in a loop until the metallb-webhook-service endpoint is 44 | # available. while its starting, it will fail with: 45 | # Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://metallb-webhook-service.cluster-metallb.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": dial tcp 10.103.0.220:443: connect: connection refused 46 | # see https://github.com/metallb/metallb/issues/1547 47 | while ! kubectl apply --namespace metallb-system -f - <>/etc/chrony/chrony.conf </usr/share/bash-completion/completions/crane 19 | -------------------------------------------------------------------------------- /provision-dnsmasq.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | source /vagrant/lib.sh 3 | 4 | 5 | dns_domain="$(hostname --domain)" 6 | ip_address="${1:-10.10.0.2}"; shift || true 7 | dhcp_range="${1:-10.10.0.100,10.10.0.200,10m}"; shift || true 8 | control_plane_vip="${1:-10.10.0.3}"; shift || true 9 | # NB since we are going to use the integrated talos vip mode, while 10 | # bootstrapping, the DNS A RR must point to the first node. after 11 | # bootstrap, bootstrap-talos.sh will modify the DNS A RR to point 12 | # to the VIP. 13 | first_control_plane_ip="$((jq -r '.[] | select(.role == "controlplane") | .ip' | head -1) /etc/resolv.conf </etc/dnsmasq.d/local.conf <~/.docker/config.json </etc/apt/keyrings/download.docker.com.gpg 22 | echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/download.docker.com.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \ 23 | >/etc/apt/sources.list.d/download_docker_com_linux_ubuntu.list 24 | apt-get update 25 | apt-cache madison docker-ce 26 | docker_apt_version="$(apt-cache madison docker-ce | awk "/:$docker_version[~-]/{print \$3}")" 27 | apt-get install -y "docker-ce=$docker_apt_version" "docker-ce-cli=$docker_apt_version" containerd.io 28 | 29 | # configure it. 30 | systemctl stop docker 31 | cat >/etc/docker/daemon.json <<'EOF' 32 | { 33 | "experimental": false, 34 | "debug": false, 35 | "features": { 36 | "buildkit": true 37 | }, 38 | "log-driver": "journald", 39 | "labels": [ 40 | "os=linux" 41 | ], 42 | "hosts": [ 43 | "unix://" 44 | ] 45 | } 46 | EOF 47 | # start docker without any command line flags as its entirely configured from daemon.json. 48 | install -d /etc/systemd/system/docker.service.d 49 | cat >/etc/systemd/system/docker.service.d/override.conf <<'EOF' 50 | [Service] 51 | ExecStart= 52 | ExecStart=/usr/bin/dockerd 53 | EOF 54 | systemctl daemon-reload 55 | systemctl start docker 56 | 57 | # let the vagrant user manage docker. 58 | usermod -aG docker vagrant 59 | 60 | # kick the tires. 
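# e.g. (illustrative) a quick smoke test of the daemon.json settings
#      applied above:
#   docker info --format '{{.LoggingDriver}}' # should print journald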
61 | ctr version 62 | docker version 63 | docker info 64 | docker network ls 65 | ip link 66 | bridge link 67 | #docker run --rm hello-world 68 | #docker run --rm alpine ping -c1 8.8.8.8 69 | #docker run --rm debian:11 ping -c1 8.8.8.8 70 | #docker run --rm debian:11-slim cat /etc/os-release 71 | #docker run --rm ubuntu:22.04 cat /etc/os-release 72 | -------------------------------------------------------------------------------- /provision-etcdctl.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | source /vagrant/lib.sh 3 | 4 | etcd_version="${1:-3.5.9}"; shift || true 5 | 6 | # install the binaries. 7 | url="https://github.com/etcd-io/etcd/releases/download/v$etcd_version/etcd-v$etcd_version-linux-amd64.tar.gz" 8 | filename="$(basename "$url")" 9 | wget -q "$url" 10 | rm -rf etcd && mkdir etcd 11 | tar xf "$filename" --strip-components 1 -C etcd 12 | install etcd/etcdctl /usr/local/bin 13 | rm -rf "$filename" etcd 14 | -------------------------------------------------------------------------------- /provision-example-daemonset.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | source /vagrant/lib.sh 3 | 4 | 5 | domain="$(hostname --domain)" 6 | 7 | 8 | kubectl apply -f - <datasources/datasources.yml 20 | 21 | # start grafana. 22 | # see https://grafana.com/docs/grafana/latest/installation/docker/ 23 | docker run \ 24 | -d \ 25 | --restart unless-stopped \ 26 | --name grafana \ 27 | -p 3000:3000 \ 28 | -e GF_AUTH_ANONYMOUS_ENABLED=true \ 29 | -e GF_AUTH_ANONYMOUS_ORG_ROLE=Admin \ 30 | -e GF_AUTH_DISABLE_LOGIN_FORM=true \ 31 | -v $PWD/datasources:/etc/grafana/provisioning/datasources \ 32 | grafana/grafana:$grafana_version 33 | -------------------------------------------------------------------------------- /provision-helm.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | source /vagrant/lib.sh 3 | 4 | # 5 | # deploy helm. 6 | 7 | # see https://github.com/helm/helm/releases 8 | helm_version="${1:-v3.12.0}"; shift || true 9 | 10 | # install helm. 11 | # see https://helm.sh/docs/intro/install/ 12 | echo "installing helm $helm_version client..." 13 | case `uname -m` in 14 | x86_64) 15 | wget -qO- "https://get.helm.sh/helm-$helm_version-linux-amd64.tar.gz" | tar xzf - --strip-components=1 linux-amd64/helm 16 | ;; 17 | armv7l) 18 | wget -qO- "https://get.helm.sh/helm-$helm_version-linux-arm.tar.gz" | tar xzf - --strip-components=1 linux-arm/helm 19 | ;; 20 | esac 21 | install helm /usr/local/bin 22 | rm helm 23 | 24 | # install the bash completion script. 25 | helm completion bash >/usr/share/bash-completion/completions/helm 26 | 27 | # kick the tires. 28 | title 'helm version' 29 | helm version 30 | -------------------------------------------------------------------------------- /provision-iptables.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | source /vagrant/lib.sh 3 | 4 | 5 | # configure apt for non-interactive mode. 6 | export DEBIAN_FRONTEND=noninteractive 7 | 8 | 9 | # 10 | # setup routing/forwarding/nat. 
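# in a nutshell, this makes this machine route traffic between all of
# its network interfaces and masquerade it out through eth0; the nat
# part is along the lines of (illustrative):
#   iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE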
11 | 12 | # these answers were obtained (after installing iptables-persistent) with: 13 | # #sudo debconf-show iptables-persistent 14 | # sudo apt-get install debconf-utils 15 | # # this way you can see the comments: 16 | # sudo debconf-get-selections 17 | # # this way you can just see the values needed for debconf-set-selections: 18 | # sudo debconf-get-selections | grep -E '^iptables-persistent\s+' | sort 19 | debconf-set-selections <<'EOF' 20 | iptables-persistent iptables-persistent/autosave_v4 boolean false 21 | iptables-persistent iptables-persistent/autosave_v6 boolean false 22 | EOF 23 | apt-get install -y iptables iptables-persistent 24 | 25 | # enable IPv4 forwarding. 26 | sysctl net.ipv4.ip_forward=1 27 | sed -i -E 's,^\s*#?\s*(net.ipv4.ip_forward=).+,\11,g' /etc/sysctl.conf 28 | 29 | # route between all interfaces. 30 | # nat through eth0. 31 | cat >/etc/iptables/rules.v4 </usr/share/bash-completion/completions/kubectl 12 | kubectl version --client 13 | cp /usr/local/bin/kubectl /vagrant/shared 14 | -------------------------------------------------------------------------------- /provision-loki.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -euxo pipefail 3 | 4 | # see https://github.com/grafana/loki/releases 5 | # see https://hub.docker.com/r/grafana/loki/tags 6 | # renovate: datasource=docker depName=grafana/loki 7 | loki_version="2.8.1" 8 | 9 | # destroy the existing loki container and data. 10 | docker rm --force loki && rm -rf ~/loki && mkdir ~/loki 11 | 12 | cd ~/loki 13 | 14 | cp /vagrant/loki-config.yml . 15 | 16 | # see https://grafana.com/docs/loki/latest/installation/docker/ 17 | # see https://grafana.com/docs/loki/latest/configuration/examples/#complete-local-config 18 | # see https://hub.docker.com/r/grafana/loki 19 | # see https://github.com/grafana/loki 20 | docker run \ 21 | -d \ 22 | --restart unless-stopped \ 23 | --name loki \ 24 | -p 3100:3100 \ 25 | -v "$PWD:/etc/loki" \ 26 | grafana/loki:$loki_version \ 27 | -config.file=/etc/loki/loki-config.yml 28 | 29 | # wait for loki to be ready. 30 | # see https://grafana.com/docs/loki/latest/api/ 31 | bash -euc 'while [ "$(wget -qO- http://localhost:3100/ready)" != "ready" ]; do sleep 5; done' 32 | #wget -qO- http://localhost:3100/metrics 33 | #wget -qO- http://localhost:3100/config | yq eval - 34 | 35 | # install logcli. 36 | # see https://grafana.com/docs/loki/latest/getting-started/logcli/ 37 | wget -q https://github.com/grafana/loki/releases/download/v$loki_version/logcli-linux-amd64.zip 38 | unzip logcli-linux-amd64.zip 39 | install -m 755 logcli-linux-amd64 /usr/local/bin/logcli 40 | rm logcli-linux-amd64* 41 | -------------------------------------------------------------------------------- /provision-machinator.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | source /vagrant/lib.sh 3 | cd /vagrant/machinator 4 | 5 | cp /vagrant/shared/machines.json "$HOME/machines.json" 6 | 7 | docker build -t machinator .
8 | 9 | docker rm -f machinator || true 10 | 11 | docker run \ 12 | -d \ 13 | --restart unless-stopped \ 14 | --name machinator \ 15 | -v "$HOME/machines.json:/machines.json:ro" \ 16 | -v "$HOME/.talos:/root/.talos:ro" \ 17 | -v "$HOME/.kube:/root/.kube:ro" \ 18 | -v "/var/lib/matchbox:/var/lib/matchbox" \ 19 | -v /var/lib/misc/dnsmasq.leases:/dnsmasq.leases:ro \ 20 | -v /etc/timezone:/etc/timezone:ro \ 21 | -v /etc/localtime:/etc/localtime:ro \ 22 | -e AMT_USERNAME='admin' \ 23 | -e AMT_PASSWORD='HeyH0Password!' \ 24 | -e IPMI_USERNAME='admin' \ 25 | -e IPMI_PASSWORD='password' \ 26 | -p 8000:8000 \ 27 | machinator 28 | -------------------------------------------------------------------------------- /provision-matchbox.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | source /vagrant/lib.sh 3 | 4 | 5 | # see https://github.com/poseidon/matchbox/releases 6 | # renovate: datasource=github-releases depName=poseidon/matchbox 7 | matchbox_version='0.10.0' 8 | matchbox_image="quay.io/poseidon/matchbox:v$matchbox_version" 9 | 10 | 11 | # 12 | # register the machines. 13 | 14 | mkdir -p /var/lib/matchbox/{assets,groups,profiles,ignition,cloud,generic} 15 | python3 /vagrant/machines.py 16 | 17 | 18 | # 19 | # restart dnsmasq. 20 | 21 | systemctl restart dnsmasq 22 | 23 | 24 | # 25 | # install matchbox. 26 | 27 | docker run \ 28 | -d \ 29 | --restart unless-stopped \ 30 | --name matchbox \ 31 | --net host \ 32 | -v /var/lib/matchbox:/var/lib/matchbox:Z \ 33 | "$matchbox_image" \ 34 | -address=0.0.0.0:80 \ 35 | -log-level=info 36 | -------------------------------------------------------------------------------- /provision-meshcommander.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | source /vagrant/lib.sh 3 | cd /vagrant/meshcommander 4 | 5 | docker rm -f meshcommander || true 6 | 7 | docker build -t meshcommander . 8 | 9 | docker run \ 10 | -d \ 11 | --restart unless-stopped \ 12 | --name meshcommander \ 13 | -p 4000:4000 \ 14 | -e NODE_ENV=production \ 15 | meshcommander 16 | -------------------------------------------------------------------------------- /provision-pdns.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | source /vagrant/lib.sh 3 | 4 | pandora_ip_address="${1:-10.10.0.2}"; shift || true 5 | domain="$(hostname --domain)" 6 | first_control_plane_ip="$((jq -r '.[] | select(.role == "controlplane") | .ip' | head -1) /etc/powerdns/pdns.d/gsqlite3.conf <<'EOF' 45 | launch=gsqlite3 46 | gsqlite3-database=/var/lib/powerdns/pdns.sqlite3 47 | EOF 48 | su pdns \ 49 | -s /bin/bash \ 50 | -c 'sqlite3 /var/lib/powerdns/pdns.sqlite3' \ 51 | /opt/registry/secrets/htpasswd 23 | 24 | # create the http secret. 25 | echo -n 'http secret' >/opt/registry/secrets/http 26 | 27 | # launch the registry. 28 | # see https://docs.docker.com/registry/deploying/ 29 | echo "starting the registry $registry_url..." 30 | install -d -m 700 /opt/registry/data 31 | docker run -d \ 32 | --restart=unless-stopped \ 33 | --name registry \ 34 | -p 5000:5000 \ 35 | -v /opt/registry/data:/var/lib/registry \ 36 | -v /opt/registry/secrets:/run/secrets \ 37 | -e REGISTRY_HTTP_SECRET=/run/secrets/http \ 38 | -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 \ 39 | -e REGISTRY_AUTH=htpasswd \ 40 | -e 'REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm' \ 41 | -e REGISTRY_AUTH_HTPASSWD_PATH=/run/secrets/htpasswd \ 42 | "$registry_image" 43 | 44 | # wait for the registry to be available. 
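# NB probing the /v2/ endpoint is the standard way to check a docker
#    registry: it returns 200 once the registry is up (or 401 when
#    authentication is required but credentials are missing).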
45 | echo "waiting for the registry $registry_url to become available..." 46 | while ! wget -q --spider --user "$registry_username" --password "$registry_password" "$registry_url/v2/"; do sleep 1; done; 47 | 48 | # login into the registry. 49 | echo "logging in the registry..." 50 | docker login $registry_host --username "$registry_username" --password-stdin </var/lib/matchbox/profiles/rescue-$rescue_arch.json </var/lib/matchbox/profiles/rescue-wipe-$rescue_arch.json </usr/local/bin/talos-poke <<'EOF_TALOS_POKE' 14 | #!/bin/bash 15 | set -euo pipefail 16 | 17 | node="${1:-cp1}"; shift || true 18 | command="${1:-images}"; shift || true 19 | registry_domain="$(hostname --fqdn)" 20 | registry_host="$registry_domain:5000" 21 | 22 | kubectl -n kube-system delete --grace-period=0 --force pod/talos-poke 2>&1 | grep -v NotFound || true 23 | 24 | kubectl -n kube-system apply -f - <&1 | grep -v NotFound || true 82 | 83 | EOF_TALOS_POKE 84 | -------------------------------------------------------------------------------- /provision-talos.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | source /vagrant/lib.sh 3 | 4 | 5 | dns_domain="$(hostname --domain)" 6 | talos_version="${1:-1.4.5}"; shift || true 7 | kubernetes_version="${1:-1.26.5}"; shift || true 8 | control_plane_vip="${1:-10.10.0.3}"; shift || true 9 | pandora_ip_address="$(jq -r .CONFIG_PANDORA_IP /vagrant/shared/config.json)" 10 | registry_domain="$(hostname --fqdn)" 11 | registry_host="$registry_domain:5000" 12 | 13 | 14 | # 15 | # download talos. 16 | 17 | assets=( 18 | vmlinuz-amd64 19 | initramfs-amd64.xz 20 | vmlinuz-arm64 21 | initramfs-arm64.xz 22 | ) 23 | for asset in ${assets[@]}; do 24 | wget -qO /var/lib/matchbox/assets/$asset "https://github.com/siderolabs/talos/releases/download/v$talos_version/$asset" 25 | done 26 | wget -qO /usr/local/bin/talosctl "https://github.com/siderolabs/talos/releases/download/v$talos_version/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64" 27 | chmod +x /usr/local/bin/talosctl 28 | cp /usr/local/bin/talosctl /vagrant/shared 29 | talosctl completion bash >/usr/share/bash-completion/completions/talosctl 30 | talosctl version --client 31 | 32 | 33 | # 34 | # install talos. 35 | # see https://www.talos.dev/v1.4/talos-guides/install/bare-metal-platforms/matchbox/ 36 | # see https://www.talos.dev/v1.4/talos-guides/network/vip/ 37 | # NB this generates yaml file that will be interpreted by matchbox as Go 38 | # templates. this means we can use matchbox metadata variables like 39 | # `installDisk`. you can see the end result at, e.g.: 40 | # http://10.3.0.2/generic?mac=08:00:27:00:00:00 41 | 42 | rm -rf talos 43 | mkdir -p talos 44 | pushd talos 45 | # NB wipe:true is too slow and wasteful for our use-case as it will zero the 46 | # entire device. instead, we have to net boot the rescue wipe image and 47 | # use wipefs to wipe the boot/install disk. 48 | # NB the kernel.kexec_load_disabled sysctl cannot be set to 0. so we must do 49 | # this with /machine/install/extraKernelArgs instead of using 50 | # /machine/sysctls. 
51 | cat >config-patch.yaml <config-patch-controlplane.yaml </vagrant/shared/talos-images.txt 132 | import glob 133 | import re 134 | import subprocess 135 | 136 | def run(*args): 137 | result = subprocess.run( 138 | args, 139 | check=True, 140 | text=True, 141 | stdout=subprocess.PIPE, 142 | stderr=subprocess.PIPE) 143 | for line in result.stdout.splitlines(): 144 | yield line 145 | 146 | def parse_image(image): 147 | m = re.match(r'(?P<name>.+?):(?P<tag>[^:]+)', image) 148 | return (m.group('name'), m.group('tag')) 149 | 150 | def get_bundled_images(): 151 | for image in run('talosctl', 'images'): 152 | yield parse_image(image) 153 | 154 | def get_generated_images(): 155 | for path in glob.glob('talos/**/*.yaml', recursive=True): 156 | with open(path, 'r') as f: 157 | for line in f: 158 | m = re.match(r'\s*image:\s*(?P<name>.+?):(?P<tag>[^:]+)', line.strip()) 159 | if not m: 160 | continue 161 | yield (m.group('name'), m.group('tag')) 162 | 163 | def get_images(): 164 | images = {name: tag for (name, tag) in get_bundled_images()} 165 | for (name, tag) in get_generated_images(): 166 | images[name] = tag 167 | for (name, tag) in images.items(): 168 | yield f'{name}:{tag}' 169 | 170 | for image in get_images(): 171 | print(image) 172 | EOF 173 | cat /vagrant/shared/talos-images.txt | while read source_image; do 174 | destination_image="$registry_host/$(echo $source_image | sed -E 's,^[^/]+/,,g')" 175 | crane copy --insecure "$source_image" "$destination_image" 176 | done 177 | # NB to also copy all the talosctl bundled images uncomment the following block. 178 | # NB the above python code will not copy the bundled images that are replaced 179 | # by talosctl gen config --kubernetes-version $kubernetes_version. 180 | # talosctl images | while read source_image; do 181 | # destination_image="$registry_host/$(echo $source_image | sed -E 's,^[^/]+/,,g')" 182 | # crane copy --insecure "$source_image" "$destination_image" 183 | # done 184 | 185 | # list the images available in the local registry. 186 | crane catalog --insecure $registry_host | sort | while read name; do 187 | crane ls --insecure "$registry_host/$name" | while read tag; do 188 | echo "$registry_host/$name:$tag" 189 | #crane manifest --insecure "$registry_host/$name:$tag" 190 | done 191 | done 192 | 193 | 194 | # 195 | # install into the pxe server. 196 | 197 | python3 /vagrant/machines.py 198 | systemctl restart dnsmasq 199 | -------------------------------------------------------------------------------- /provision-theila.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | source /vagrant/lib.sh 3 | 4 | 5 | theila_version="${1:-0.2.1}"; shift || true 6 | 7 | 8 | # download. 9 | wget -qO /usr/local/bin/theila "https://github.com/siderolabs/theila/releases/download/v$theila_version/theila-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64" 10 | chmod +x /usr/local/bin/theila 11 | 12 | # create and enable the systemd service unit. 13 | # TODO run as non-root user and somehow generate proper talos/k8s credentials for theila.
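# NB as-is, theila runs as root and reuses root's existing ~/.talos and
#    ~/.kube configurations (note the HOME=/root environment in the
#    unit below).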
14 | cat >/etc/systemd/system/theila.service <<'EOF' 15 | [Unit] 16 | Description=theila 17 | After=network.target 18 | 19 | [Service] 20 | Type=simple 21 | Environment=HOME=/root 22 | WorkingDirectory=/root 23 | ExecStart=/usr/local/bin/theila \ 24 | --address 0.0.0.0 \ 25 | --port 8080 26 | Restart=always 27 | 28 | [Install] 29 | WantedBy=multi-user.target 30 | EOF 31 | systemctl enable theila 32 | systemctl start theila 33 | -------------------------------------------------------------------------------- /provision-vector.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -euxo pipefail 3 | 4 | pandora_ip_address="$(jq -r .CONFIG_PANDORA_IP /vagrant/shared/config.json)" 5 | 6 | # see https://github.com/vectordotdev/vector/releases 7 | # see https://hub.docker.com/r/timberio/vector/ 8 | # renovate: datasource=docker depName=timberio/vector 9 | vector_version="0.30.0" 10 | 11 | # destroy the existing vector container and data. 12 | docker rm --force vector && rm -rf ~/vector && mkdir ~/vector 13 | 14 | cd ~/vector 15 | 16 | sed -E "s,http://localhost:3100,http://$pandora_ip_address:3100,g" /vagrant/vector.yml >vector.yml 17 | 18 | # see https://vector.dev/docs/reference/configuration/sources/socket/ 19 | # see https://vector.dev/docs/reference/configuration/sinks/loki/ 20 | # see https://github.com/vectordotdev/vector 21 | # see https://vector.dev/docs/setup/installation/platforms/docker/ 22 | docker run \ 23 | -d \ 24 | --restart unless-stopped \ 25 | --name vector \ 26 | -p 5170:5170 \ 27 | -v "$PWD/vector.yml:/etc/vector/vector.yml:ro" \ 28 | "timberio/vector:$vector_version-debian" \ 29 | -c /etc/vector/vector.yml 30 | -------------------------------------------------------------------------------- /renovate.json5: -------------------------------------------------------------------------------- 1 | // see https://docs.renovatebot.com/templates/ 2 | // see https://docs.renovatebot.com/modules/manager/ 3 | // see https://docs.renovatebot.com/modules/manager/regex/ 4 | // see https://docs.renovatebot.com/configuration-options/ 5 | { 6 | "$schema": "https://docs.renovatebot.com/renovate-schema.json", 7 | "regexManagers": [ 8 | // default datasources. 9 | { 10 | "fileMatch": [ 11 | "Vagrantfile", 12 | "Dockerfile", 13 | "\\.sh$" 14 | ], 15 | "matchStrings": [ 16 | "# renovate: datasource=(?<datasource>[^:]+?) depName=(?<depName>.+?)( versioning=(?<versioning>.+?))?( extractVersion=(?<extractVersion>.+?))?( registryUrl=(?<registryUrl>.+?))?\\s.+?[:=]\\s*[\"']?(?<currentValue>.+?)[\"']?\\s" 17 | ], 18 | "versioningTemplate": "{{#if versioning}}{{{versioning}}}{{else}}semver-coerced{{/if}}", 19 | "extractVersionTemplate": "{{#if extractVersion}}{{{extractVersion}}}{{else}}^v?(?<version>.+)${{/if}}" 20 | } 21 | ] 22 | } -------------------------------------------------------------------------------- /renovate.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -euo pipefail 3 | 4 | # this executes renovate against the local repository. 5 | # NB this uses a temporary gitea instance because running renovate against a 6 | # local directory is not (yet?) supported.
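# NB in practice this means: start a disposable gitea container at
#    http://localhost:3000, force-push this repository into it, and run
#    renovate against that gitea instance (see below).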
7 | # see https://github.com/renovatebot/renovate/issues/3609 8 | 9 | export RENOVATE_USERNAME='renovate' 10 | export RENOVATE_NAME='Renovate Bot' 11 | export RENOVATE_PASSWORD='password' 12 | export RENOVATE_ENDPOINT="http://localhost:3000" 13 | export GIT_PUSH_REPOSITORY="http://$RENOVATE_USERNAME:$RENOVATE_PASSWORD@localhost:3000/$RENOVATE_USERNAME/test.git" 14 | gitea_container_name="$(basename "$(dirname "$(realpath "${BASH_SOURCE[0]}")")")-renovate-gitea" 15 | 16 | # see https://hub.docker.com/r/gitea/gitea/tags 17 | # renovate: datasource=docker depName=gitea/gitea 18 | gitea_version='1.19.1' 19 | 20 | # see https://hub.docker.com/r/renovate/renovate/tags 21 | # renovate: datasource=docker depName=renovate/renovate extractVersion=(?<version>.+)-slim$ 22 | renovate_version='35.58.2' 23 | 24 | # clean. 25 | echo 'Deleting existing Gitea...' 26 | docker rm --force "$gitea_container_name" >/dev/null 2>&1 27 | echo 'Deleting existing temporary files...' 28 | rm -f tmp/renovate-* 29 | 30 | # start gitea in background. 31 | # see https://docs.gitea.io/en-us/config-cheat-sheet/ 32 | # see https://github.com/go-gitea/gitea/releases 33 | # see https://github.com/go-gitea/gitea/blob/v1.19.1/docker/root/etc/s6/gitea/setup 34 | echo 'Starting Gitea...' 35 | docker run \ 36 | --detach \ 37 | --name "$gitea_container_name" \ 38 | -v /etc/timezone:/etc/timezone:ro \ 39 | -v /etc/localtime:/etc/localtime:ro \ 40 | -e SECRET_KEY=abracadabra \ 41 | -p 3000:3000 \ 42 | "gitea/gitea:$gitea_version" \ 43 | >/dev/null 44 | # wait for gitea to be ready. 45 | bash -euc 'while [ -z "$(wget -qO- http://localhost:3000/api/v1/version | jq -r ".version | select(.!=null)")" ]; do sleep 5; done' 46 | 47 | # create user in gitea. 48 | echo "Creating Gitea $RENOVATE_USERNAME user..." 49 | docker exec --user git "$gitea_container_name" gitea admin user create \ 50 | --admin \ 51 | --email "$RENOVATE_USERNAME@example.com" \ 52 | --username "$RENOVATE_USERNAME" \ 53 | --password "$RENOVATE_PASSWORD" 54 | curl \ 55 | -s \ 56 | -u "$RENOVATE_USERNAME:$RENOVATE_PASSWORD" \ 57 | -X 'PATCH' \ 58 | -H 'Accept: application/json' \ 59 | -H 'Content-Type: application/json' \ 60 | -d "{\"full_name\":\"$RENOVATE_NAME\"}" \ 61 | "http://localhost:3000/api/v1/user/settings" \ 62 | | jq \ 63 | > /dev/null 64 | 65 | # create the user personal access token. 66 | # see https://docs.gitea.io/en-us/api-usage/ 67 | # see https://docs.gitea.io/en-us/oauth2-provider/#scopes 68 | # see https://try.gitea.io/api/swagger#/user/userCreateToken 69 | echo "Creating Gitea $RENOVATE_USERNAME user personal access token..." 70 | install -d tmp 71 | curl \ 72 | -s \ 73 | -u "$RENOVATE_USERNAME:$RENOVATE_PASSWORD" \ 74 | -X POST \ 75 | -H "Content-Type: application/json" \ 76 | -d '{"name": "renovate", "scopes": ["repo"]}' \ 77 | "http://localhost:3000/api/v1/users/$RENOVATE_USERNAME/tokens" \ 78 | | jq -r .sha1 \ 79 | >tmp/renovate-gitea-token.txt 80 | 81 | # try the token. 82 | export RENOVATE_TOKEN="$(cat tmp/renovate-gitea-token.txt)" 83 | curl \ 84 | -s \ 85 | -H "Authorization: token $RENOVATE_TOKEN" \ 86 | -H 'Accept: application/json' \ 87 | "http://localhost:3000/api/v1/user" \ 88 | | jq \ 89 | > /dev/null 90 | 91 | # create remote repository in gitea. 92 | echo "Creating Gitea $RENOVATE_USERNAME test repository..."
93 | curl \
94 |   -s \
95 |   -u "$RENOVATE_USERNAME:$RENOVATE_PASSWORD" \
96 |   -X POST \
97 |   -H 'Accept: application/json' \
98 |   -H 'Content-Type: application/json' \
99 |   -d '{"name": "test"}' \
100 |   http://localhost:3000/api/v1/user/repos \
101 |   | jq \
102 |   > /dev/null
103 | 
104 | # push the code to local gitea repository.
105 | # NB running renovate locally is not yet supported.
106 | # see https://github.com/renovatebot/renovate/issues/3609
107 | echo "Pushing local repository to Gitea $RENOVATE_USERNAME test repository..."
108 | git push --force "$GIT_PUSH_REPOSITORY"
109 | 
110 | # see https://docs.renovatebot.com/modules/platform/gitea/
111 | # see https://docs.renovatebot.com/self-hosted-configuration/#dryrun
112 | # see https://github.com/renovatebot/renovate/blob/main/docs/usage/examples/self-hosting.md
113 | # see https://github.com/renovatebot/renovate/tree/main/lib/modules/datasource
114 | # see https://github.com/renovatebot/renovate/tree/main/lib/modules/versioning
115 | export RENOVATE_TOKEN="$(cat tmp/renovate-gitea-token.txt)"
116 | # NB these can also be passed as raw positional arguments to docker run.
117 | export RENOVATE_REPOSITORIES="$RENOVATE_USERNAME/test"
118 | # see https://docs.github.com/en/rest/rate-limit#get-rate-limit-status-for-the-authenticated-user
119 | # see https://github.com/settings/tokens
120 | # NB this is only used for authentication. The token should not have any scope enabled.
121 | #export GITHUB_COM_TOKEN='TODO-YOUR-TOKEN'
122 | # let renovate create all the required pull requests.
123 | # see https://docs.renovatebot.com/configuration-options/#prhourlylimit
124 | # see https://docs.renovatebot.com/configuration-options/#prconcurrentlimit
125 | export RENOVATE_PR_HOURLY_LIMIT='0'
126 | export RENOVATE_PR_CONCURRENT_LIMIT='0'
127 | echo 'Running renovate...'
128 | install -d tmp
129 | # NB use --dry-run=lookup to avoid modifying the repository (e.g. to avoid
130 | # creating pull requests).
131 | docker run \
132 |   --rm \
133 |   --tty \
134 |   --interactive \
135 |   --net host \
136 |   --env GITHUB_COM_TOKEN \
137 |   --env RENOVATE_ENDPOINT \
138 |   --env RENOVATE_TOKEN \
139 |   --env RENOVATE_REPOSITORIES \
140 |   --env RENOVATE_PR_HOURLY_LIMIT \
141 |   --env RENOVATE_PR_CONCURRENT_LIMIT \
142 |   --env LOG_LEVEL=debug \
143 |   --env LOG_FORMAT=json \
144 |   "renovate/renovate:$renovate_version-slim" \
145 |   --platform=gitea \
146 |   --git-url=endpoint \
147 |   >tmp/renovate-log.json
148 | 
149 | echo 'Getting results...'
150 | # extract the errors.
151 | jq 'select(.err)' tmp/renovate-log.json >tmp/renovate-errors.json
152 | # extract the result from the renovate log.
153 | jq 'select(.msg == "packageFiles with updates") | .config' tmp/renovate-log.json >tmp/renovate-result.json
154 | # extract all the dependencies.
155 | jq 'to_entries[].value[] | {packageFile,dep:.deps[]}' tmp/renovate-result.json >tmp/renovate-dependencies.json
156 | # extract the dependencies that have updates.
157 | jq 'select((.dep.updates | length) > 0)' tmp/renovate-dependencies.json >tmp/renovate-dependencies-updates.json
158 | 
159 | # helpers.
160 | function show-title {
161 |   echo
162 |   echo '#'
163 |   echo "# $1"
164 |   echo '#'
165 |   echo
166 | }
167 | 
168 | # show dependencies.
169 | function show-dependencies {
170 |   show-title "$1"
171 |   (
172 |     printf 'packageFile\tdatasource\tdepName\tcurrentValue\tnewVersions\tskipReason\twarnings\n'
173 |     jq \
174 |       -r \
175 |       '[
176 |         .packageFile,
177 |         .dep.datasource,
178 |         .dep.depName,
179 |         .dep.currentValue,
180 |         (.dep | select(.updates) | .updates | map(.newVersion) | join(" | ")),
181 |         .dep.skipReason,
182 |         (.dep | select(.warnings) | .warnings | map(.message) | join(" | "))
183 |       ] | @tsv' \
184 |       "$2" \
185 |       | sort
186 |   ) | column -t -s "$(printf \\t)"
187 | }
188 | show-dependencies 'Dependencies' tmp/renovate-dependencies.json
189 | show-dependencies 'Dependencies Updates' tmp/renovate-dependencies-updates.json
190 | 
191 | # show errors.
192 | if [ "$(jq --slurp length tmp/renovate-errors.json)" -ne '0' ]; then
193 |   show-title errors
194 |   jq . tmp/renovate-errors.json
195 | fi
196 | 
197 | # show the gitea project.
198 | show-title "See PRs at http://localhost:3000/$RENOVATE_USERNAME/test/pulls (you can login as $RENOVATE_USERNAME:$RENOVATE_PASSWORD)"
--------------------------------------------------------------------------------
/summary.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | source /vagrant/lib.sh
3 | 
4 | 
5 | dns_domain="$(hostname --domain)"
6 | host_ip_address="$(ip addr show eth1 | perl -n -e'/ inet (\d+(\.\d+)+)/ && print $1')"
7 | first_vm_mac="$(cat /vagrant/shared/machines.json | jq -r '.[] | select(.type == "virtual") | .mac' | head -1)"
8 | 
9 | 
10 | title 'matchbox addresses'
11 | cat <<EOF
--------------------------------------------------------------------------------
/talos-poke/Dockerfile:
--------------------------------------------------------------------------------
32 | cat >/etc/profile.d/etcdctl.sh <<'STDIN'
33 | export ETCDCTL_CACERT=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt
34 | export ETCDCTL_CERT=/var/lib/rancher/k3s/server/tls/etcd/server-client.crt
35 | export ETCDCTL_KEY=/var/lib/rancher/k3s/server/tls/etcd/server-client.key
36 | STDIN
37 | EOF
38 | ENTRYPOINT ["/bin/bash", "-c", "sleep infinity"]
--------------------------------------------------------------------------------
/vector-k8s.yml:
--------------------------------------------------------------------------------
1 | sources:
2 |   k8s_logs:
3 |     type: kubernetes_logs
4 | sinks:
5 |   k8s_logs_loki:
6 |     type: loki
7 |     inputs:
8 |       - k8s_logs
9 |     endpoint: http://localhost:3100
10 |     labels:
11 |       job: k8s
12 |       host: ${VECTOR_SELF_NODE_NAME}
13 |       namespace: '{{ kubernetes.pod_namespace }}'
14 |       pod: '{{ kubernetes.pod_name }}'
15 |       container: '{{ kubernetes.container_name }}'
16 |     encoding:
17 |       codec: json
18 |     batch:
19 |       max_bytes: 1048576 # 1 MiB
20 | 
--------------------------------------------------------------------------------
/vector.yml:
--------------------------------------------------------------------------------
1 | sources:
2 |   talos_source:
3 |     type: socket
4 |     mode: tcp
5 |     address: 0.0.0.0:5170
6 |     decoding:
7 |       codec: json
8 |     host_key: __host
9 | transforms:
10 |   talos_transform:
11 |     type: remap
12 |     inputs:
13 |       - talos_source
14 |     source: |
15 |       .timestamp = del(."talos-time")
16 | sinks:
17 |   loki:
18 |     type: loki
19 |     inputs:
20 |       - talos_transform
21 |     endpoint: http://localhost:3100
22 |     labels:
23 |       job: talos
24 |       host: '{{ __host }}'
25 |       service: '{{ "talos-service" }}'
26 |       level: '{{ "talos-level" }}'
27 |     encoding:
28 |       codec: json
29 |       except_fields:
30 |         - __host
31 |         - '"talos-service"'
32 |         - '"talos-level"'
33 |     batch:
34 |       max_bytes: 1048576 # 1 MiB
35 | 
--------------------------------------------------------------------------------
/wireshark.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | source lib.sh
3 | 
4 | vm_name=${1:-pandora}; shift || true
5 | interface_name=${1:-eth1}; shift || true
6 | capture_filter=${1:-not port 22 and not port 3000 and not port 3100 and not port 4000 and not port 16992 and not port 16994}; shift || true
7 | 
8 | mkdir -p tmp
9 | vagrant ssh-config $vm_name >tmp/$vm_name-ssh-config.conf
10 | wireshark -o "gui.window_title:$vm_name $interface_name" -k -i <(ssh -F tmp/$vm_name-ssh-config.conf $vm_name "sudo tcpdump -s 0 -U -n -i $interface_name -w - $capture_filter")
11 | 
--------------------------------------------------------------------------------
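Besides the VM and interface names, `wireshark.sh` accepts an optional third positional argument that replaces the default capture filter. A minimal sketch of narrowing the capture (the filter string below is just an example; any valid tcpdump capture-filter expression works):

```bash
# capture only DHCP traffic (ports 67/68) on pandora's eth1 interface;
# the third argument overrides the script's default capture filter.
./wireshark.sh pandora eth1 'port 67 or port 68'
```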