├── ansible.cfg ├── post-install ├── README.md ├── roles │ ├── install-operators │ │ ├── templates │ │ │ ├── namespace.j2 │ │ │ ├── operatorgroup.j2 │ │ │ └── subscription.j2 │ │ └── tasks │ │ │ └── main.yaml │ ├── oauth-ldap │ │ ├── templates │ │ │ ├── oauth-ldap-secret.j2 │ │ │ ├── oauth-ldap-ca-config-map.j2 │ │ │ ├── cluster-admin.j2 │ │ │ └── oauth-ldap.j2 │ │ └── tasks │ │ │ └── main.yaml │ └── label-nodes │ │ └── tasks │ │ └── main.yaml ├── vars │ ├── main.yaml │ └── rhv-upi.yaml └── post-install.yaml ├── qemu-guest-agent ├── 0-namespace.yaml ├── 1-service-account.yaml ├── Dockerfile ├── 2-rbac.yaml └── 3-daemonset.yaml ├── roles ├── firewalld │ └── tasks │ │ └── main.yml ├── dhcpd │ ├── tasks │ │ └── main.yml │ └── templates │ │ └── dhcpd.conf.j2 ├── rhv-retire │ └── tasks │ │ └── main.yml ├── httpd │ ├── files │ │ └── ignition-downloader.php │ ├── tasks │ │ └── main.yml │ └── templates │ │ └── httpd.conf.j2 ├── haproxy │ ├── tasks │ │ └── main.yml │ └── templates │ │ └── haproxy.cfg.j2 ├── ipa-retire │ └── tasks │ │ └── main.yml ├── boot-instances │ └── tasks │ │ └── main.yml ├── ipa │ └── tasks │ │ └── main.yml └── rhv │ └── tasks │ └── main.yml ├── ocs ├── registry-cephfs-pvc.yaml ├── localstorage-operator.yaml ├── storageclass-osd.yaml ├── storageclass-mon.yaml ├── ocs-operator.yaml ├── storagecluster.yaml └── inventory-example-ocs.yaml ├── bootstrap-cleanup.yaml ├── provision.yml ├── retire.yml ├── util ├── iso-generator.sh └── verify-dns.sh ├── inventory-example.yml └── README.md /ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | retry_files_enabled = False 3 | host_key_checking = False 4 | forks = 20 5 | -------------------------------------------------------------------------------- /post-install/README.md: -------------------------------------------------------------------------------- 1 | # OpenShift Day 2 2 | 3 | Ansible roles and utilities for configuring OpenShift post install. 
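
## Usage

A minimal invocation sketch (the command line below is an assumption based on the files in this directory, not something the repository documents): `vault.yaml` is presumed to hold the LDAP bind credentials (`oauth_ldap_bind_user` / `oauth_ldap_bind_pass`) referenced by the `oauth-ldap` templates, and the `nodes`/`operators` structures consumed by `label-nodes` and `install-operators` are presumed to be supplied from `vars/rhv-upi.yaml`.

```bash
# Hypothetical run from the post-install/ directory.
# The k8s-based tasks pick up cluster credentials from the environment, so
# point KUBECONFIG at the kubeconfig produced by the installer (assumption;
# the oc-based label-nodes role uses oc_kubeconfig from vars/main.yaml instead).
export KUBECONFIG=/path/to/install-dir/auth/kubeconfig

ansible-playbook post-install.yaml \
  -e @vars/rhv-upi.yaml \
  --ask-vault-pass          # assuming vault.yaml is ansible-vault encrypted

# Individual roles can be run via their tags, e.g. only relabel nodes:
ansible-playbook post-install.yaml -e @vars/rhv-upi.yaml --ask-vault-pass --tags label-nodes
```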
-------------------------------------------------------------------------------- /qemu-guest-agent/0-namespace.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | metadata: 4 | name: qemu-guest-agent 5 | -------------------------------------------------------------------------------- /post-install/roles/install-operators/templates/namespace.j2: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | {{ oitem.namespace | to_nice_yaml(indent=2) }} -------------------------------------------------------------------------------- /qemu-guest-agent/1-service-account.yaml: -------------------------------------------------------------------------------- 1 | kind: ServiceAccount 2 | apiVersion: v1 3 | metadata: 4 | name: qga-service-account 5 | namespace: qemu-guest-agent 6 | -------------------------------------------------------------------------------- /post-install/roles/install-operators/templates/operatorgroup.j2: -------------------------------------------------------------------------------- 1 | apiVersion: operators.coreos.com/v1 2 | kind: OperatorGroup 3 | {{ oitem.operatorgroup | to_nice_yaml(indent=2) }} -------------------------------------------------------------------------------- /post-install/roles/install-operators/templates/subscription.j2: -------------------------------------------------------------------------------- 1 | apiVersion: operators.coreos.com/v1alpha1 2 | kind: Subscription 3 | {{ oitem.subscription | to_nice_yaml(indent=2) }} -------------------------------------------------------------------------------- /post-install/roles/oauth-ldap/templates/oauth-ldap-secret.j2: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Secret 3 | metadata: 4 | name: {{ oauth_ldap_secret_name }} 5 | namespace: openshift-config 6 | type: Opaque 7 | data: 8 | bindPassword: {{ oauth_ldap_bind_pass | b64encode }} -------------------------------------------------------------------------------- /post-install/roles/oauth-ldap/templates/oauth-ldap-ca-config-map.j2: -------------------------------------------------------------------------------- 1 | # 4 spaces required in front of each line in ca.crt 2 | apiVersion: v1 3 | kind: ConfigMap 4 | metadata: 5 | name: {{ oauth_ldap_ca_cm_name }} 6 | namespace: openshift-config 7 | data: 8 | ca.crt: | 9 | {{ oauth_ldap_ca_cert_data }} -------------------------------------------------------------------------------- /roles/firewalld/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: Install firewalld Package 2 | yum: 3 | name: 4 | - firewalld 5 | state: latest 6 | tags: 7 | - install-packages 8 | 9 | - name: Enable/Start firewalld Service 10 | systemd: 11 | name: firewalld 12 | state: started 13 | enabled: yes 14 | -------------------------------------------------------------------------------- /ocs/registry-cephfs-pvc.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: PersistentVolumeClaim 4 | metadata: 5 | name: registry 6 | namespace: openshift-image-registry 7 | spec: 8 | accessModes: 9 | - ReadWriteMany 10 | resources: 11 | requests: 12 | storage: 30Gi 13 | storageClassName: storagecluster-cephfs 14 | -------------------------------------------------------------------------------- /ocs/localstorage-operator.yaml: 
-------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: operators.coreos.com/v1alpha1 3 | kind: Subscription 4 | metadata: 5 | name: local-storage-operator 6 | namespace: openshift-storage 7 | spec: 8 | channel: "4.3" 9 | name: local-storage-operator 10 | source: redhat-operators 11 | sourceNamespace: openshift-marketplace 12 | -------------------------------------------------------------------------------- /post-install/roles/oauth-ldap/templates/cluster-admin.j2: -------------------------------------------------------------------------------- 1 | apiVersion: rbac.authorization.k8s.io/v1 2 | kind: ClusterRoleBinding 3 | metadata: 4 | name: cluster-admin-{{ user }} 5 | roleRef: 6 | apiGroup: rbac.authorization.k8s.io 7 | kind: ClusterRole 8 | name: cluster-admin 9 | subjects: 10 | - apiGroup: rbac.authorization.k8s.io 11 | kind: User 12 | name: {{ user }} 13 | -------------------------------------------------------------------------------- /post-install/vars/main.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | oc_bin: "/home/chris/bin/oc" 3 | oc_kubeconfig: "/home/chris/upi/rhv-upi/auth/kubeconfig" 4 | 5 | oauth_ldap_ca_cert: /etc/ipa/ca.crt 6 | oauth_ldap_ca_cm_name: ca-config-map 7 | oauth_ldap_url: ldap://idm1.umbrella.local/cn=users,cn=accounts,dc=umbrella,dc=local?uid 8 | oauth_ldap_secret_name: ldap-secret 9 | oauth_ldap_provider_name: "Umbrella IdM" 10 | 11 | cluster_admin_users: 12 | - chris -------------------------------------------------------------------------------- /post-install/roles/label-nodes/tasks/main.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Debug 3 | debug: 4 | msg: "oc label node {{ nitem.name }} {{ label }}" 5 | loop: "{{ nitem.labels }}" 6 | loop_control: 7 | loop_var: label 8 | 9 | - name: Label Node 10 | command: 11 | cmd: "{{ oc_bin }} --kubeconfig {{ oc_kubeconfig }} label node {{ nitem.name }} {{ label }} --overwrite" 12 | loop: "{{ nitem.labels }}" 13 | loop_control: 14 | loop_var: label -------------------------------------------------------------------------------- /bootstrap-cleanup.yaml: -------------------------------------------------------------------------------- 1 | - name: Remove bootstrap entry from haproxy 2 | hosts: loadbalancer 3 | gather_facts: no 4 | become: yes 5 | tasks: 6 | - name: Make sure bootstrap lines are commented out 7 | replace: 8 | path: /etc/haproxy/haproxy.cfg 9 | regexp: '(server bootstrap .*)' 10 | replace: '# \1' 11 | 12 | - name: Reload haproxy service 13 | service: 14 | name: haproxy 15 | state: reloaded 16 | -------------------------------------------------------------------------------- /roles/dhcpd/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: Install DHCP Server Package 2 | yum: 3 | name: 4 | - dhcp 5 | state: latest 6 | tags: 7 | - install-packages 8 | 9 | - name: Copy dhcpd.conf Template 10 | template: 11 | src: templates/dhcpd.conf.j2 12 | dest: /etc/dhcp/dhcpd.conf 13 | owner: root 14 | group: root 15 | mode: 0644 16 | setype: dhcp_etc_t 17 | 18 | - name: Enable/Start dhcpd Service 19 | systemd: 20 | name: dhcpd 21 | state: started 22 | enabled: yes 23 | -------------------------------------------------------------------------------- /roles/rhv-retire/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Delete VMs 3 | ovirt_vm: 4 
| auth: 5 | username: "{{ rhv_username }}" 6 | password: "{{ rhv_password }}" 7 | url: "https://{{ rhv_hostname }}/ovirt-engine/api" 8 | insecure: true 9 | state: absent 10 | name: "{{ item }}.{{ base_domain }}" 11 | cluster: "{{ hostvars[item].rhv_cluster }}" 12 | register: ovirt_vm_results 13 | with_items: 14 | - "{{ groups[provision_group] }}" 15 | 16 | - name: Debug ovirt_vm_results 17 | debug: 18 | var: ovirt_vm_results 19 | verbosity: 2 20 | -------------------------------------------------------------------------------- /ocs/storageclass-osd.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: "local.storage.openshift.io/v1" 2 | kind: "LocalVolume" 3 | metadata: 4 | name: "localstorage-ocs-osd" 5 | namespace: "openshift-storage" 6 | spec: 7 | nodeSelector: 8 | nodeSelectorTerms: 9 | - matchExpressions: 10 | - key: kubernetes.io/hostname 11 | operator: In 12 | values: 13 | - ocs-node0.rhv-upi.ocp.pwc.umbrella.local 14 | - ocs-node1.rhv-upi.ocp.pwc.umbrella.local 15 | - ocs-node2.rhv-upi.ocp.pwc.umbrella.local 16 | storageClassDevices: 17 | - storageClassName: "localstorage-ocs-osd-sc" 18 | volumeMode: Block 19 | devicePaths: 20 | - /dev/sdb 21 | -------------------------------------------------------------------------------- /post-install/roles/oauth-ldap/templates/oauth-ldap.j2: -------------------------------------------------------------------------------- 1 | apiVersion: config.openshift.io/v1 2 | kind: OAuth 3 | metadata: 4 | name: cluster 5 | spec: 6 | identityProviders: 7 | - name: {{ oauth_ldap_provider_name }} 8 | mappingMethod: claim 9 | type: LDAP 10 | ldap: 11 | attributes: 12 | id: 13 | - dn 14 | email: 15 | - mail 16 | name: 17 | - cn 18 | preferredUsername: 19 | - uid 20 | bindDN: {{ oauth_ldap_bind_user }} 21 | bindPassword: 22 | name: {{ oauth_ldap_secret_name }} 23 | ca: 24 | name: {{ oauth_ldap_ca_cm_name }} 25 | insecure: false 26 | url: {{ oauth_ldap_url }} 27 | -------------------------------------------------------------------------------- /ocs/storageclass-mon.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: "local.storage.openshift.io/v1" 2 | kind: "LocalVolume" 3 | metadata: 4 | name: "localstorage-ocs-mon" 5 | namespace: "openshift-storage" 6 | spec: 7 | nodeSelector: 8 | nodeSelectorTerms: 9 | - matchExpressions: 10 | - key: kubernetes.io/hostname 11 | operator: In 12 | values: 13 | - ocs-node0.rhv-upi.ocp.pwc.umbrella.local 14 | - ocs-node1.rhv-upi.ocp.pwc.umbrella.local 15 | - ocs-node2.rhv-upi.ocp.pwc.umbrella.local 16 | storageClassDevices: 17 | - storageClassName: "localstorage-ocs-mon-sc" 18 | volumeMode: Filesystem 19 | fsType: xfs 20 | devicePaths: 21 | - /dev/sdc 22 | -------------------------------------------------------------------------------- /qemu-guest-agent/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM registry.access.redhat.com/ubi8/ubi 2 | USER root 3 | LABEL summary="The QEMU Guest Agent" \ 4 | io.k8s.description="This package provides an agent to run inside guests, which communicates with the host over a virtio-serial channel named 'org.qemu.guest_agent.0'" \ 5 | io.k8s.display-name="QEMU Guest Agent" \ 6 | license="GPLv2+ and LGPLv2+ and BSD" \ 7 | architecture="x86_64" \ 8 | maintainer="Chris Keller " 9 | 10 | COPY qemu-guest-agent-4.2.0-16.module+el8.2.0+6092+4f2391c1.x86_64.rpm /qemu-guest-agent-4.2.0-16.module+el8.2.0+6092+4f2391c1.x86_64.rpm 11 | RUN rpm -ivh 
/qemu-guest-agent-4.2.0-16.module+el8.2.0+6092+4f2391c1.x86_64.rpm 12 | -------------------------------------------------------------------------------- /post-install/post-install.yaml: -------------------------------------------------------------------------------- 1 | - name: OpenShift Cluster Post Install Tasks 2 | hosts: localhost 3 | vars_files: 4 | - vars/main.yaml 5 | - vault.yaml 6 | tasks: 7 | - name: Configure OAuth 8 | include_role: 9 | name: oauth-ldap 10 | tags: 11 | - oauth-ldap 12 | 13 | - name: Label Nodes 14 | include_role: 15 | name: label-nodes 16 | loop: "{{ nodes }}" 17 | loop_control: 18 | loop_var: nitem 19 | tags: 20 | - label-nodes 21 | 22 | - name: Install Operators 23 | include_role: 24 | name: install-operators 25 | loop: "{{ operators }}" 26 | loop_control: 27 | loop_var: oitem 28 | tags: 29 | - install-operators -------------------------------------------------------------------------------- /ocs/ocs-operator.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: Namespace 4 | metadata: 5 | labels: 6 | openshift.io/cluster-monitoring: "true" 7 | name: openshift-storage 8 | spec: {} 9 | --- 10 | apiVersion: operators.coreos.com/v1 11 | kind: OperatorGroup 12 | metadata: 13 | name: openshift-storage-operatorgroup 14 | namespace: openshift-storage 15 | spec: 16 | serviceAccount: 17 | metadata: 18 | creationTimestamp: null 19 | targetNamespaces: 20 | - openshift-storage 21 | --- 22 | apiVersion: operators.coreos.com/v1alpha1 23 | kind: Subscription 24 | metadata: 25 | name: ocs-subscription 26 | namespace: openshift-storage 27 | spec: 28 | channel: "stable-4.4" 29 | name: ocs-operator 30 | source: redhat-operators 31 | sourceNamespace: openshift-marketplace -------------------------------------------------------------------------------- /ocs/storagecluster.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: ocs.openshift.io/v1 2 | kind: StorageCluster 3 | metadata: 4 | namespace: openshift-storage 5 | name: storagecluster 6 | spec: 7 | manageNodes: false 8 | monPVCTemplate: 9 | spec: 10 | storageClassName: localstorage-ocs-mon-sc 11 | accessModes: 12 | - ReadWriteOnce 13 | resources: 14 | requests: 15 | storage: 10Gi 16 | storageDeviceSets: 17 | - name: deviceset 18 | count: 3 19 | resources: {} 20 | placement: {} 21 | dataPVCTemplate: 22 | spec: 23 | storageClassName: localstorage-ocs-osd-sc 24 | accessModes: 25 | - ReadWriteOnce 26 | volumeMode: Block 27 | resources: 28 | requests: 29 | storage: 1Ti 30 | portable: true 31 | -------------------------------------------------------------------------------- /qemu-guest-agent/2-rbac.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | kind: ClusterRole 4 | metadata: 5 | annotations: 6 | openshift.io/description: "Role-Based Access to SCCs for QEMU Guest Agent" 7 | name: qga-privileged-user 8 | namespace: qemu-guest-agent 9 | rules: 10 | - apiGroups: 11 | - security.openshift.io 12 | resources: 13 | - securitycontextconstraints 14 | verbs: 15 | - use 16 | resourceNames: 17 | - privileged 18 | --- 19 | apiVersion: rbac.authorization.k8s.io/v1 20 | kind: RoleBinding 21 | metadata: 22 | name: qga-privileged-user 23 | namespace: qemu-guest-agent 24 | subjects: 25 | - kind: ServiceAccount 26 | name: qga-service-account 27 | roleRef: 28 | kind: ClusterRole 29 | apiGroup: rbac.authorization.k8s.io 30 | name: 
qga-privileged-user 31 | -------------------------------------------------------------------------------- /provision.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Provision OpenShift 4 Environment 3 | hosts: localhost 4 | vars_files: 5 | - vault.yml 6 | roles: 7 | - role: ipa 8 | tags: 9 | - ipa 10 | - role: rhv 11 | tags: 12 | - rhv 13 | 14 | - name: Setup Loadbalancer Host 15 | hosts: loadbalancer 16 | become: yes 17 | gather_facts: false 18 | roles: 19 | - role: dhcpd 20 | tags: 21 | - dhcpd 22 | - role: firewalld 23 | tags: 24 | - firewalld 25 | - role: haproxy 26 | tags: 27 | - haproxy 28 | - role: httpd 29 | tags: 30 | - httpd 31 | 32 | - name: Boot CoreOS Nodes 33 | hosts: localhost 34 | vars_files: 35 | - vault.yml 36 | roles: 37 | - role: boot-instances 38 | tags: 39 | - boot-instances 40 | -------------------------------------------------------------------------------- /roles/dhcpd/templates/dhcpd.conf.j2: -------------------------------------------------------------------------------- 1 | option domain-name "{{ base_domain }}"; 2 | option domain-name-servers {{ dhcp_server_dns_servers }}; 3 | default-lease-time 1800; 4 | max-lease-time 7200; 5 | authoritative; 6 | log-facility local7; 7 | 8 | subnet {{ dhcp_server_subnet }} netmask {{ dhcp_server_subnet_mask }} { 9 | option routers {{ dhcp_server_gateway }}; 10 | option subnet-mask {{ dhcp_server_subnet_mask }}; 11 | option domain-search "{{ base_domain }}"; 12 | option domain-name "{{ base_domain }}"; 13 | option domain-name-servers {{ dhcp_server_dns_servers }}; 14 | } 15 | 16 | {% for item in hostvars['localhost'].host_mac_list %} 17 | host {{ item.name }} { 18 | option host-name "{{ item.name }}.{{ base_domain }}"; 19 | hardware ethernet {{ item.mac }}; 20 | fixed-address {{ item.ip }}; 21 | } 22 | {% endfor %} 23 | -------------------------------------------------------------------------------- /post-install/roles/oauth-ldap/tasks/main.yaml: -------------------------------------------------------------------------------- 1 | - name: Configure OAuth LDAP Secret 2 | k8s: 3 | state: present 4 | definition: "{{ lookup('template', 'templates/oauth-ldap-secret.j2') }}" 5 | 6 | - name: Source CA Certificate 7 | set_fact: 8 | oauth_ldap_ca_cert_data: "{{ lookup('file', 'files/oauth-ldap-ca.crt') }}" 9 | 10 | - name: Configure OAuth LDAP CA ConfigMap 11 | k8s: 12 | state: present 13 | definition: "{{ lookup('template', 'templates/oauth-ldap-ca-config-map.j2') }}" 14 | 15 | - name: Update OAuth LDAP Configuration 16 | k8s: 17 | state: present 18 | definition: "{{ lookup('template', 'templates/oauth-ldap.j2') }}" 19 | 20 | - name: Create cluster-admin ClusterRoleBindings 21 | k8s: 22 | state: present 23 | definition: "{{ lookup('template', 'templates/cluster-admin.j2') }}" 24 | loop: "{{ cluster_admin_users }}" 25 | loop_control: 26 | loop_var: user -------------------------------------------------------------------------------- /roles/httpd/files/ignition-downloader.php: -------------------------------------------------------------------------------- 1 | 29 | -------------------------------------------------------------------------------- /roles/haproxy/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: Install Packages 2 | yum: 3 | name: 4 | - haproxy 5 | - libselinux-python 6 | - libsemanage-python 7 | state: latest 8 | tags: 9 | - install-packages 10 | 11 | - name: Copy haproxy.cfg Template 12 | template: 13 | src: 
templates/haproxy.cfg.j2 14 | dest: /etc/haproxy/haproxy.cfg 15 | owner: root 16 | group: root 17 | mode: 0644 18 | setype: etc_t 19 | 20 | - name: Enable haproxy_connect_any 21 | seboolean: 22 | name: haproxy_connect_any 23 | state: yes 24 | persistent: yes 25 | 26 | - name: Enable proxied ports in firewall 27 | firewalld: 28 | port: "{{ item }}/tcp" 29 | state: enabled 30 | permanent: yes 31 | immediate: yes 32 | loop: 33 | - 80 34 | - 443 35 | - 6443 36 | - 22623 37 | tags: 38 | - firewalld 39 | 40 | - name: Enable/Start haproxy Service 41 | systemd: 42 | name: haproxy 43 | state: started 44 | enabled: yes 45 | -------------------------------------------------------------------------------- /retire.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Cleanup Local known_hosts File 3 | hosts: localhost 4 | vars: 5 | # Fixes selinux issue w/ virtualenv 6 | ansible_python_interpreter: "/usr/bin/python" 7 | tasks: 8 | - name: Clean up ~/.ssh/known_hosts 9 | block: 10 | - name: Remove Lines w/ Inventory Hostname in known_hosts 11 | lineinfile: 12 | dest: ~/.ssh/known_hosts 13 | state: absent 14 | regexp: "^.*{{ item }}.{{ base_domain }}.*$" 15 | with_items: 16 | - "{{ groups[provision_group] }}" 17 | 18 | - name: Remove Lines w/ Inventory IP in known_hosts 19 | lineinfile: 20 | dest: ~/.ssh/known_hosts 21 | state: absent 22 | regexp: "^.*{{ lookup('dig', item) }}.*$" 23 | with_items: 24 | - "{{ groups[provision_group] }}" 25 | when: 26 | - cleanup_known_hosts is defined 27 | - cleanup_known_hosts 28 | 29 | - name: Retire Integrated VMs in Bulk 30 | hosts: localhost 31 | vars_files: 32 | - vault.yml 33 | roles: 34 | - role: rhv-retire 35 | tags: 36 | - rhv 37 | - role: ipa-retire 38 | tags: 39 | - ipa 40 | -------------------------------------------------------------------------------- /post-install/roles/install-operators/tasks/main.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Debug Namespace Rendering 3 | debug: 4 | msg: "{{ lookup('template', 'templates/namespace.j2') }}" 5 | when: 6 | - oitem.namespace is defined 7 | 8 | - name: "Create Namespace {{ oitem.namespace.metadata.name }}" 9 | k8s: 10 | state: present 11 | definition: "{{ lookup('template', 'templates/namespace.j2') }}" 12 | when: 13 | - oitem.namespace is defined 14 | 15 | - name: Debug OperatorGroup Rendering 16 | debug: 17 | msg: "{{ lookup('template', 'templates/operatorgroup.j2') }}" 18 | when: 19 | - oitem.operatorgroup is defined 20 | 21 | - name: "Create OperatorGroup for Namespace {{ oitem.namespace.metadata.name }}" 22 | k8s: 23 | state: present 24 | definition: "{{ lookup('template', 'templates/operatorgroup.j2') }}" 25 | when: 26 | - oitem.operatorgroup is defined 27 | 28 | - name: Debug Subscription Rendering 29 | debug: 30 | msg: "{{ lookup('template', 'templates/subscription.j2') }}" 31 | when: 32 | - oitem.subscription is defined 33 | 34 | - name: "Create Subscription for Operator {{ oitem.subscription.spec.name }}" 35 | k8s: 36 | state: present 37 | definition: "{{ lookup('template', 'templates/subscription.j2') }}" 38 | when: 39 | - oitem.subscription is defined -------------------------------------------------------------------------------- /roles/httpd/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: Install Packages 2 | yum: 3 | name: 4 | - httpd 5 | - php 6 | - policycoreutils-python 7 | state: latest 8 | tags: 9 | - install-packages 
10 | 11 | - name: Allow httpd to listen on TCP port {{ httpd_port }} 12 | seport: 13 | ports: "{{ httpd_port }}" 14 | proto: tcp 15 | setype: http_port_t 16 | state: present 17 | 18 | - name: Copy httpd.conf Template 19 | template: 20 | src: templates/httpd.conf.j2 21 | dest: /etc/httpd/conf/httpd.conf 22 | owner: root 23 | group: root 24 | mode: 0644 25 | setype: httpd_config_t 26 | 27 | - name: Copy Ignition PHP Script 28 | copy: 29 | src: files/ignition-downloader.php 30 | dest: /var/www/html/ignition-downloader.php 31 | owner: root 32 | group: root 33 | mode: 0644 34 | setype: httpd_sys_content_t 35 | 36 | - name: Copy ignition files 37 | copy: 38 | src: "{{ installation_directory }}/{{ item }}" 39 | dest: /var/www/html/ 40 | owner: root 41 | group: root 42 | mode: 0644 43 | setype: httpd_sys_content_t 44 | loop: 45 | - bootstrap.ign 46 | - master.ign 47 | - worker.ign 48 | - ocs.ign 49 | - infra.ign 50 | 51 | - name: Restore SELinux Contexts in Document Root 52 | shell: restorecon -R /var/www/html 53 | 54 | - name: Enable httpd port in firewall 55 | firewalld: 56 | port: "{{ httpd_port }}/tcp" 57 | state: enabled 58 | permanent: yes 59 | immediate: yes 60 | tags: 61 | - firewalld 62 | 63 | - name: Enable/Start httpd Service 64 | systemd: 65 | name: httpd 66 | state: started 67 | enabled: yes 68 | -------------------------------------------------------------------------------- /util/iso-generator.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Run this script as root 4 | 5 | if [ `whoami` != "root" ] ; then 6 | echo "Must be run as root!" 7 | exit 1 8 | fi 9 | 10 | VERSION=4.4.3-x86_64 11 | ISO_SOURCE=/tmp/rhcos-$VERSION-installer.x86_64.iso 12 | ISO_OUTPUT=/tmp/rhcos-$VERSION-installer.x86_64-auto.iso 13 | 14 | DIRECTORY_MOUNT=/tmp/rhcos-$VERSION-installer 15 | DIRECTORY_WORKING=/tmp/rhcos-$VERSION-installer-auto 16 | 17 | KP_WEBSERVER=lb.rhv-upi.ocp.pwc.umbrella.local:8080 18 | KP_COREOS_IMAGE=rhcos-$VERSION-metal.x86_64.raw.gz 19 | KP_BLOCK_DEVICE=sda 20 | 21 | if [ -d $DIRECTORY_MOUNT ] || [ -d $DIRECTORY_WORKING ] ; then 22 | echo "$DIRECTORY_MOUNT or $DIRECTORY_WORKING already exist!" 23 | exit 24 | fi 25 | 26 | # Setup 27 | 28 | mkdir -p $DIRECTORY_MOUNT $DIRECTORY_WORKING 29 | mount -o loop -t iso9660 $ISO_SOURCE $DIRECTORY_MOUNT 30 | 31 | if [ ! -f $DIRECTORY_MOUNT/isolinux/isolinux.cfg ] ; then 32 | echo "Unexpected contents in $DIRECTORY_MOUNT!" 33 | exit 34 | fi 35 | 36 | rsync -av $DIRECTORY_MOUNT/* $DIRECTORY_WORKING/ 37 | 38 | # Edit isolinux.cfg 39 | 40 | INST_INSTALL_DEV="coreos.inst.install_dev=$KP_BLOCK_DEVICE" 41 | INST_IMAGE_URL="coreos.inst.image_url=http:\/\/$KP_WEBSERVER\/$KP_COREOS_IMAGE" 42 | INST_IGNITION_URL="coreos.inst.ignition_url=http:\/\/$KP_WEBSERVER\/ignition-downloader.php" 43 | 44 | sed -i 's/default vesamenu.c32/default linux/' $DIRECTORY_WORKING/isolinux/isolinux.cfg 45 | sed -i 's/timeout 600/timeout 0/' $DIRECTORY_WORKING/isolinux/isolinux.cfg 46 | sed -i "/coreos.inst=yes/s|$| $INST_INSTALL_DEV $INST_IMAGE_URL $INST_IGNITION_URL|" $DIRECTORY_WORKING/isolinux/isolinux.cfg 47 | 48 | # Generate new ISO 49 | 50 | mkisofs -o $ISO_OUTPUT -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -input-charset utf-8 -JRV CoreOS $DIRECTORY_WORKING/ 51 | 52 | umount $DIRECTORY_MOUNT 53 | 54 | # Cleanup... 
55 | # rm -rf $DIRECTORY_MOUNT $DIRECTORY_WORKING 56 | -------------------------------------------------------------------------------- /qemu-guest-agent/3-daemonset.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: DaemonSet 3 | metadata: 4 | name: qemu-guest-agent 5 | namespace: qemu-guest-agent 6 | spec: 7 | selector: 8 | matchLabels: 9 | name: qemu-guest-agent 10 | template: 11 | metadata: 12 | labels: 13 | name: qemu-guest-agent 14 | spec: 15 | hostNetwork: true 16 | hostPID: true 17 | serviceAccountName: qga-service-account 18 | containers: 19 | - name: qemu-guest-agent 20 | securityContext: 21 | privileged: true 22 | capabilities: 23 | add: ["SYS_ADMIN"] 24 | allowPrivilegeEscalation: true 25 | procMount: Unmasked 26 | image: quay.io/nasx/qemu-guest-agent:latest 27 | command: ['/usr/bin/qemu-ga', '--verbose', '--method=virtio-serial', '--path=/dev/virtio-ports/org.qemu.guest_agent.0', '--blacklist=guest-file-open,guest-file-close,guest-file-read,guest-file-write,guest-file-seek,guest-file-flush,guest-exec,guest-exec-status'] 28 | volumeMounts: 29 | - name: host-dev 30 | mountPath: /dev 31 | - name: host-sys-cpu 32 | mountPath: /sys/devices/system/cpu 33 | - name: host-sys-memory 34 | mountPath: /sys/devices/system/memory 35 | - name: host-etc-os-release 36 | mountPath: /etc/os-release 37 | - name: host-etc-redhat-release 38 | mountPath: /etc/redhat-release 39 | tolerations: 40 | - key: node-role.kubernetes.io/master 41 | operator: Exists 42 | effect: NoSchedule 43 | - key: node.kubernetes.io/disk-pressure 44 | operator: Exists 45 | effect: NoSchedule 46 | volumes: 47 | - name: host-dev 48 | hostPath: 49 | path: /dev 50 | - name: host-sys-cpu 51 | hostPath: 52 | path: /sys/devices/system/cpu 53 | - name: host-sys-memory 54 | hostPath: 55 | path: /sys/devices/system/memory 56 | - name: host-etc-os-release 57 | hostPath: 58 | path: /etc/os-release 59 | - name: host-etc-redhat-release 60 | hostPath: 61 | path: /etc/redhat-release 62 | -------------------------------------------------------------------------------- /roles/ipa-retire/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: Delete SRV Record 2 | ipa_dnsrecord: 3 | name: "_etcd-server-ssl._tcp" 4 | zone_name: "{{ base_domain }}" 5 | record_type: SRV 6 | record_value: "0 10 2380 {{ item }}.{{ base_domain }}." 7 | ipa_host: "{{ ipa_hostname }}" 8 | ipa_user: "{{ ipa_username }}" 9 | ipa_pass: "{{ ipa_password }}" 10 | state: absent 11 | with_items: 12 | - etcd-0 13 | - etcd-1 14 | - etcd-2 15 | 16 | - name: Delete A Records 17 | ipa_dnsrecord: 18 | name: "{{ item }}" 19 | zone_name: "{{ base_domain }}" 20 | record_type: A 21 | record_value: "{{ hostvars[item].ip }}" 22 | ipa_host: "{{ ipa_hostname }}" 23 | ipa_user: "{{ ipa_username }}" 24 | ipa_pass: "{{ ipa_password }}" 25 | state: absent 26 | with_items: 27 | - "{{ groups[provision_group] }}" 28 | 29 | - name: Delete PTR Records 30 | ipa_dnsrecord: 31 | name: "{{ hostvars[item].ip.split('.')[-1] }}" 32 | zone_name: "{{ hostvars[item].ip.split('.')[-2] }}.{{ hostvars[item].ip.split('.')[-3] }}.{{ hostvars[item].ip.split('.')[-4] }}.in-addr.arpa." 33 | record_type: PTR 34 | record_value: "{{ item }}.{{ base_domain }}." 
35 | ipa_host: "{{ ipa_hostname }}" 36 | ipa_user: "{{ ipa_username }}" 37 | ipa_pass: "{{ ipa_password }}" 38 | state: absent 39 | with_items: 40 | - "{{ groups[provision_group] }}" 41 | 42 | - name: Delete api/api-int Records 43 | ipa_dnsrecord: 44 | name: "{{ item }}" 45 | zone_name: "{{ base_domain }}" 46 | record_type: A 47 | record_value: "{{ load_balancer_ip }}" 48 | ipa_host: "{{ ipa_hostname }}" 49 | ipa_user: "{{ ipa_username }}" 50 | ipa_pass: "{{ ipa_password }}" 51 | state: absent 52 | with_items: 53 | - api 54 | - api-int 55 | 56 | - name: Delete Wildcard for Applications 57 | ipa_dnsrecord: 58 | name: "*.apps" 59 | zone_name: "{{ base_domain }}" 60 | record_type: A 61 | record_value: "{{ load_balancer_ip }}" 62 | ipa_host: "{{ ipa_hostname }}" 63 | ipa_user: "{{ ipa_username }}" 64 | ipa_pass: "{{ ipa_password }}" 65 | state: absent 66 | 67 | - name: Delete etcd-x Records 68 | ipa_dnsrecord: 69 | name: "{{ hostvars[item].etcd_name }}" 70 | zone_name: "{{ base_domain }}" 71 | record_type: A 72 | record_value: "{{ hostvars[item].ip }}" 73 | ipa_host: "{{ ipa_hostname }}" 74 | ipa_user: "{{ ipa_username }}" 75 | ipa_pass: "{{ ipa_password }}" 76 | state: absent 77 | with_items: 78 | - "{{ groups[provision_group] }}" 79 | when: item is search("master") 80 | 81 | -------------------------------------------------------------------------------- /roles/boot-instances/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: Authenticate to oVirt 2 | ovirt_auth: 3 | username: "{{ rhv_username }}" 4 | password: "{{ rhv_password }}" 5 | url: "https://{{ rhv_hostname }}/ovirt-engine/api" 6 | insecure: True 7 | 8 | - name: Startup (install) Bootstrap Node 9 | ovirt_vm: 10 | auth: "{{ ovirt_auth }}" 11 | cluster: "{{ hostvars[item].rhv_cluster }}" 12 | name: "{{ item }}.{{ base_domain }}" 13 | state: running 14 | with_items: 15 | - "{{ groups[provision_group] }}" 16 | when: item is search("bootstrap") 17 | 18 | - name: Wait 30 Minutes for SSH on Bootstrap Node 19 | wait_for: 20 | host: "{{ hostvars['bootstrap'].ip }}" 21 | port: 22 22 | sleep: 10 23 | timeout: 1800 24 | 25 | - name: Startup (install) Master Nodes 26 | ovirt_vm: 27 | auth: "{{ ovirt_auth }}" 28 | cluster: "{{ hostvars[item].rhv_cluster }}" 29 | name: "{{ item }}.{{ base_domain }}" 30 | state: running 31 | with_items: 32 | - "{{ groups[provision_group] }}" 33 | when: item is search("master") 34 | 35 | - name: Wait 30 Minutes for SSH on Master Nodes 36 | wait_for: 37 | host: "{{ hostvars[item].ip }}" 38 | port: 22 39 | sleep: 10 40 | timeout: 1800 41 | with_items: 42 | - "{{ groups[provision_group] }}" 43 | when: item is search("master") 44 | 45 | - name: Startup (install) Worker Nodes 46 | ovirt_vm: 47 | auth: "{{ ovirt_auth }}" 48 | cluster: "{{ hostvars[item].rhv_cluster }}" 49 | name: "{{ item }}.{{ base_domain }}" 50 | state: running 51 | with_items: 52 | - "{{ groups[provision_group] }}" 53 | when: item is search("worker") 54 | 55 | - name: Wait 30 Minutes for SSH on Worker Nodes 56 | wait_for: 57 | host: "{{ hostvars[item].ip }}" 58 | port: 22 59 | sleep: 10 60 | timeout: 1800 61 | with_items: 62 | - "{{ groups[provision_group] }}" 63 | when: item is search("worker") 64 | 65 | - name: Startup (install) Other Nodes 66 | ovirt_vm: 67 | auth: "{{ ovirt_auth }}" 68 | cluster: "{{ hostvars[item].rhv_cluster }}" 69 | name: "{{ item }}.{{ base_domain }}" 70 | state: running 71 | with_items: 72 | - "{{ groups[provision_group] }}" 73 | when: 74 | - item is not search("worker") 
75 | - item is not search("bootstrap") 76 | - item is not search("master") 77 | 78 | - name: Wait 30 Minutes for SSH on Other Nodes 79 | wait_for: 80 | host: "{{ hostvars[item].ip }}" 81 | port: 22 82 | sleep: 10 83 | timeout: 1800 84 | with_items: 85 | - "{{ groups[provision_group] }}" 86 | when: 87 | - item is not search("worker") 88 | - item is not search("bootstrap") 89 | - item is not search("master") 90 | 91 | - name: Revoke the SSO token 92 | ovirt_auth: 93 | state: absent 94 | ovirt_auth: "{{ ovirt_auth }}" 95 | -------------------------------------------------------------------------------- /util/verify-dns.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Use this script to verify DNS entires for your OpenShift 4 cluster. 4 | # Checks for API end points, app wildcard, A/PTR records for nodes 5 | # and SRV/A for etcd. Be sure to adjust CLUSTER_NAME and BASE_DOMAIN 6 | # accordingly, as well as the NODES array to match your environment. 7 | # 8 | # $? returns 0 on success and 1 on failure. 9 | 10 | CLUSTER_NAME=rhv-upi 11 | BASE_DOMAIN=ocp.pwc.umbrella.local 12 | DIG=/usr/bin/dig 13 | 14 | NODES=(bootstrap master0 master1 master2 worker0 worker1 worker2 worker3 worker4 worker5) 15 | API=(api api-int) 16 | ETCD=(etcd-0 etcd-1 etcd-2) 17 | 18 | if [ ! -f $DIG ]; then 19 | echo "Could not find $DIG. Please ensure the bind-utils package is installed, or update the DIG variable to reflect the appropriate binary path." 20 | exit 1 21 | fi 22 | 23 | echo -e "Verifying node A/PTR records..." 24 | 25 | for i in ${NODES[@]} 26 | do 27 | RET=`$DIG A $i.$CLUSTER_NAME.$BASE_DOMAIN +short` 28 | 29 | if [ -z "$RET" ]; then 30 | echo "Could not resolve $i.$CLUSTER_NAME.$BASE_DOMAIN!" 31 | exit 1 32 | else 33 | echo "$i.$CLUSTER_NAME.$BASE_DOMAIN resolved to: $RET" 34 | 35 | PRET=`$DIG -x $RET +short` 36 | 37 | if [ -z "$PRET" ]; then 38 | echo "Could not resolve PTR record for $RET" 39 | exit 1 40 | else 41 | echo "PTR: $PRET" 42 | fi 43 | fi 44 | done 45 | 46 | echo -e "\nVerifying API A records..." 47 | 48 | for i in ${API[@]} 49 | do 50 | RET=`$DIG A $i.$CLUSTER_NAME.$BASE_DOMAIN +short` 51 | 52 | if [ -z "$RET" ]; then 53 | echo "Could not resolve $i.$CLUSTER_NAME.$BASE_DOMAIN!" 54 | exit 1 55 | else 56 | echo "$i.$CLUSTER_NAME.$BASE_DOMAIN resolved to: $RET" 57 | fi 58 | done 59 | 60 | echo -e "\nVerifying etcd A records..." 61 | 62 | for i in ${ETCD[@]} 63 | do 64 | RET=`$DIG A $i.$CLUSTER_NAME.$BASE_DOMAIN +short` 65 | 66 | if [ -z "$RET" ]; then 67 | echo "Could not resolve $i.$CLUSTER_NAME.$BASE_DOMAIN!" 68 | exit 1 69 | else 70 | echo "$i.$CLUSTER_NAME.$BASE_DOMAIN resolved to: $RET" 71 | fi 72 | done 73 | 74 | echo -e "\nVerifying etcd SRV record..." 75 | 76 | RET=`$DIG SRV _etcd-server-ssl._tcp.$CLUSTER_NAME.$BASE_DOMAIN +short` 77 | 78 | if [ -z "$RET" ]; then 79 | echo "Could not resolve _etcd-server-ssl._tcp.$CLUSTER_NAME.$BASE_DOMAIN" 80 | exit 1 81 | else 82 | echo -e "_etcd-server-ssl._tcp.$CLUSTER_NAME.$BASE_DOMAIN resolved to:\n$RET" 83 | fi 84 | 85 | echo -e "\nVerifying application wildcard A record..." 86 | 87 | RET=`$DIG A *.apps.$CLUSTER_NAME.$BASE_DOMAIN +short` 88 | 89 | if [ -z "$RET" ]; then 90 | echo "Could not resolve *.apps.$CLUSTER_NAME.$BASE_DOMAIN" 91 | exit 1 92 | else 93 | echo "*.apps.$CLUSTER_NAME.$BASE_DOMAIN resolved to: $RET" 94 | fi 95 | 96 | echo -e "\nVerification succeeded!" 
97 | exit 0 98 | -------------------------------------------------------------------------------- /roles/ipa/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: Generate IPA Session Cookie 2 | uri: 3 | url: "https://{{ ipa_hostname }}/ipa/session/login_password" 4 | validate_certs: no 5 | method: POST 6 | status_code: 200 7 | headers: 8 | Content-Type: "application/x-www-form-urlencoded" 9 | Accept: "text/plain" 10 | Referer: "https://{{ ipa_hostname }}/ipa" 11 | body: "user={{ ipa_username }}&password={{ ipa_password }}" 12 | register: ipa_session 13 | 14 | - name: Debug ipa_session 15 | debug: 16 | var: ipa_session 17 | verbosity: 2 18 | 19 | - name: Create SRV Records 20 | uri: 21 | url: "https://{{ ipa_hostname }}/ipa/session/json" 22 | validate_certs: no 23 | method: POST 24 | status_code: 200 25 | headers: 26 | Cookie: "{{ ipa_session.set_cookie }}" 27 | Accept: "application/json" 28 | Referer: "https://{{ ipa_hostname }}/ipa" 29 | body: 30 | method: dnsrecord_add 31 | params: 32 | - - "{{ base_domain }}." 33 | - _etcd-server-ssl._tcp 34 | - srv_part_priority: '0' 35 | srv_part_weight: '10' 36 | srv_part_port: '2380' 37 | srv_part_target: "{{ item }}.{{ base_domain }}." 38 | body_format: json 39 | with_items: 40 | - "etcd-0" 41 | - "etcd-1" 42 | - "etcd-2" 43 | 44 | - name: Create A/PTR Records 45 | uri: 46 | url: "https://{{ ipa_hostname }}/ipa/session/json" 47 | validate_certs: no 48 | method: POST 49 | status_code: 200 50 | headers: 51 | Cookie: "{{ ipa_session.set_cookie }}" 52 | Accept: "application/json" 53 | Referer: "https://{{ ipa_hostname }}/ipa" 54 | body: 55 | method: dnsrecord_add 56 | params: 57 | - - "{{ base_domain }}." 58 | - "{{ item }}" 59 | - a_part_ip_address: "{{ hostvars[item].ip }}" 60 | a_extra_create_reverse: true 61 | body_format: json 62 | with_items: 63 | - "{{ groups[provision_group] }}" 64 | 65 | - name: Create api/api-int Records 66 | ipa_dnsrecord: 67 | name: "{{ item }}" 68 | zone_name: "{{ base_domain }}" 69 | record_type: A 70 | record_value: "{{ load_balancer_ip }}" 71 | ipa_host: "{{ ipa_hostname }}" 72 | ipa_user: "{{ ipa_username }}" 73 | ipa_pass: "{{ ipa_password }}" 74 | validate_certs: "{{ ipa_validate_certs }}" 75 | state: present 76 | with_items: 77 | - api 78 | - api-int 79 | 80 | - name: Create Wildcard for Applications 81 | ipa_dnsrecord: 82 | name: "*.apps" 83 | zone_name: "{{ base_domain }}" 84 | record_type: A 85 | record_value: "{{ load_balancer_ip }}" 86 | ipa_host: "{{ ipa_hostname }}" 87 | ipa_user: "{{ ipa_username }}" 88 | ipa_pass: "{{ ipa_password }}" 89 | validate_certs: "{{ ipa_validate_certs }}" 90 | state: present 91 | 92 | - name: Create etcd-x Records 93 | ipa_dnsrecord: 94 | name: "{{ hostvars[item].etcd_name }}" 95 | zone_name: "{{ base_domain }}" 96 | record_type: A 97 | record_value: "{{ hostvars[item].ip }}" 98 | ipa_host: "{{ ipa_hostname }}" 99 | ipa_user: "{{ ipa_username }}" 100 | ipa_pass: "{{ ipa_password }}" 101 | validate_certs: "{{ ipa_validate_certs }}" 102 | state: present 103 | with_items: 104 | - "{{ groups[provision_group] }}" 105 | when: item is search("master") 106 | -------------------------------------------------------------------------------- /post-install/vars/rhv-upi.yaml: -------------------------------------------------------------------------------- 1 | nodes: 2 | - name: ocs-node0.rhv-upi.ocp.pwc.umbrella.local 3 | labels: 4 | - "cluster.ocs.openshift.io/openshift-storage=" 5 | - "node-role.kubernetes.io/ocs=" 6 | - 
"node-role.kubernetes.io/worker-" 7 | - name: ocs-node1.rhv-upi.ocp.pwc.umbrella.local 8 | labels: 9 | - "cluster.ocs.openshift.io/openshift-storage=" 10 | - "node-role.kubernetes.io/ocs=" 11 | - "node-role.kubernetes.io/worker-" 12 | - name: ocs-node2.rhv-upi.ocp.pwc.umbrella.local 13 | labels: 14 | - "cluster.ocs.openshift.io/openshift-storage=" 15 | - "node-role.kubernetes.io/ocs=" 16 | - "node-role.kubernetes.io/worker-" 17 | - name: infra-node0.rhv-upi.ocp.pwc.umbrella.local 18 | labels: 19 | - "node-role.kubernetes.io/infra=" 20 | - "node-role.kubernetes.io/worker-" 21 | - name: infra-node1.rhv-upi.ocp.pwc.umbrella.local 22 | labels: 23 | - "node-role.kubernetes.io/infra=" 24 | - "node-role.kubernetes.io/worker-" 25 | - name: infra-node2.rhv-upi.ocp.pwc.umbrella.local 26 | labels: 27 | - "node-role.kubernetes.io/infra=" 28 | - "node-role.kubernetes.io/worker-" 29 | operators: 30 | - namespace: 31 | metadata: 32 | name: openshift-storage 33 | labels: 34 | openshift.io/cluster-monitoring: 'true' 35 | operatorgroup: 36 | metadata: 37 | name: openshift-storage-operatorgroup 38 | namespace: openshift-storage 39 | spec: 40 | targetNamespaces: 41 | - openshift-storage 42 | subscription: 43 | metadata: 44 | name: ocs-subscription 45 | namespace: openshift-storage 46 | spec: 47 | channel: stable-4.4 48 | name: ocs-operator 49 | source: redhat-operators 50 | sourceNamespace: openshift-marketplace 51 | - subscription: 52 | metadata: 53 | name: local-storage-operator 54 | namespace: openshift-storage 55 | spec: 56 | channel: '4.4' 57 | name: local-storage-operator 58 | source: redhat-operators 59 | sourceNamespace: openshift-marketplace 60 | - namespace: 61 | metadata: 62 | name: openshift-operators-redhat 63 | labels: 64 | openshift.io/cluster-monitoring: 'true' 65 | operatorgroup: 66 | metadata: 67 | name: openshift-operators-redhat-operatorgroup 68 | namespace: openshift-operators-redhat 69 | spec: {} 70 | subscription: 71 | metadata: 72 | generateName: 'elasticsearch-' 73 | namespace: openshift-operators-redhat 74 | name: elasticsearch-operator 75 | spec: 76 | channel: '4.4' 77 | installPlanApproval: Automatic 78 | source: redhat-operators 79 | sourceNamespace: openshift-marketplace 80 | name: elasticsearch-operator 81 | - namespace: 82 | metadata: 83 | name: openshift-logging 84 | labels: 85 | openshift.io/cluster-monitoring: "true" 86 | spec: 87 | nodeSelector: 88 | node-role.kubernetes.io/infra: '' 89 | operatorgroup: 90 | metadata: 91 | name: cluster-logging 92 | namespace: openshift-logging 93 | spec: 94 | targetNamespaces: 95 | - openshift-logging 96 | subscription: 97 | metadata: 98 | name: cluster-logging 99 | namespace: openshift-logging 100 | spec: 101 | channel: '4.4' 102 | name: cluster-logging 103 | source: redhat-operators 104 | sourceNamespace: openshift-marketplace -------------------------------------------------------------------------------- /roles/rhv/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Authenticate to oVirt 3 | ovirt_auth: 4 | username: "{{ rhv_username }}" 5 | password: "{{ rhv_password }}" 6 | url: "https://{{ rhv_hostname }}/ovirt-engine/api" 7 | insecure: True 8 | 9 | 10 | - name: Create VMs 11 | ovirt_vm: 12 | auth: "{{ ovirt_auth }}" 13 | state: stopped 14 | delete_protected: False 15 | name: "{{ item }}.{{ base_domain }}" 16 | cluster: "{{ hostvars[item].rhv_cluster }}" 17 | type: server 18 | memory: "{{ hostvars[item].memory }}" 19 | cpu_sockets: "{{ hostvars[item].sockets }}" 20 | 
cpu_cores: "{{ hostvars[item].cores }}" 21 | cd_iso: "{{ hostvars[item].iso }}" 22 | boot_devices: 23 | - hd 24 | - cdrom 25 | tags: 26 | - create_vms 27 | register: ovirt_vm_results 28 | with_items: 29 | - "{{ groups[provision_group] }}" 30 | 31 | - name: Debug ovirt_vm_results 32 | debug: 33 | var: ovirt_vm_results 34 | verbosity: 2 35 | 36 | - name: Get VM Facts 37 | ovirt_vm_facts: 38 | auth: "{{ ovirt_auth }}" 39 | fetch_nested: True 40 | nested_attributes: True 41 | pattern: name={{ item }}.{{ base_domain }} and cluster={{ hostvars[item].rhv_cluster }} 42 | with_items: 43 | - "{{ groups[provision_group] }}" 44 | register: ovirt_vm_facts_results 45 | 46 | - name: Debug ovirt_vm_facts_results 47 | debug: 48 | var: ovirt_vm_facts_results 49 | verbosity: 2 50 | 51 | - name: Combine Applicable Disks & Hostnames into Dictionary for Easy Lookup 52 | set_fact: 53 | disk_name_dict: >- 54 | {{ 55 | disk_name_dict | default([]) + 56 | [ 57 | { 58 | 'name': item, 59 | 'disks': hostvars[item].disks | default([]) 60 | } 61 | ] 62 | }} 63 | tags: 64 | - disks 65 | with_items: 66 | - "{{ groups[provision_group] }}" 67 | 68 | - name: Add Disks to VMs 69 | ovirt_disk: 70 | auth: "{{ ovirt_auth }}" 71 | name: "{{ item.0.name }}.{{ base_domain }}-{{ item.1.name }}" 72 | bootable: "{{ item.1.bootable }}" 73 | vm_name: "{{ item.0.name }}.{{ base_domain }}" 74 | size: "{{ item.1.size }}" 75 | format: "{{ item.1.format }}" 76 | interface: "{{ item.1.interface }}" 77 | storage_domain: "{{ item.1.storage_domain }}" 78 | activate: true 79 | timeout: 900 80 | tags: 81 | - disks 82 | with_subelements: 83 | - "{{ disk_name_dict }}" 84 | - disks 85 | - skip_missing: False 86 | 87 | - name: Combine Applicable NICs & Hostnames into Dictionary for Easy Lookup 88 | set_fact: 89 | nic_name_dict: >- 90 | {{ 91 | nic_name_dict | default([]) + 92 | [ 93 | { 94 | 'name': item, 95 | 'nics': hostvars[item].nics | default([]) 96 | } 97 | ] 98 | }} 99 | with_items: 100 | - "{{ groups[provision_group] }}" 101 | 102 | - name: Add NICs to VMs 103 | ovirt_nic: 104 | auth: "{{ ovirt_auth }}" 105 | name: "{{ item.1.name }}" 106 | interface: "{{ item.1.interface }}" 107 | network: "{{ item.1.network }}" 108 | profile: "{{ item.1.network }}" 109 | mac_address: "{{ item.1.mac_address | default(omit) }}" 110 | vm: "{{ item.0.name }}.{{ base_domain }}" 111 | tags: 112 | - create_nics 113 | with_subelements: 114 | - "{{ nic_name_dict }}" 115 | - nics 116 | - skip_missing: False 117 | 118 | - name: Get VM NIC Facts 119 | ovirt_nic_facts: 120 | auth: "{{ ovirt_auth }}" 121 | name: eth0 122 | vm: "{{ item }}.{{ base_domain }}" 123 | register: ovirt_nic_facts_results 124 | with_items: 125 | - "{{ groups[provision_group] }}" 126 | 127 | - name: Combine Hostname/IP/MAC into Dictionary for Easy Lookup 128 | set_fact: 129 | host_mac_list: >- 130 | {{ 131 | host_mac_list | default([]) + 132 | [ 133 | { 134 | 'name': item.item, 135 | 'mac': item.ansible_facts.ovirt_nics[0].mac.address, 136 | 'ip': hostvars[item.item]['ip'] 137 | } 138 | ] 139 | }} 140 | with_items: 141 | - "{{ ovirt_nic_facts_results.results }}" 142 | 143 | - name: Debug host_mac_list 144 | debug: 145 | var: host_mac_list 146 | 147 | - name: Revoke the SSO token 148 | ovirt_auth: 149 | state: absent 150 | ovirt_auth: "{{ ovirt_auth }}" 151 | 152 | #- name: Build VM/eth0 Results for DHCP 153 | # set_fact: 154 | # host_mac_list: "{{ host_mac_list | default([]) }} + [ '{{ item.item }}.{{ base_domain }} - {{ item.ansible_facts.ovirt_nics[0].mac.address }}' ]" 155 | # with_items: 156 | 
# - "{{ ovirt_nic_facts_results.results }}" 157 | 158 | #- name: Show Host MAC List 159 | # debug: 160 | # var: host_mac_list 161 | -------------------------------------------------------------------------------- /roles/haproxy/templates/haproxy.cfg.j2: -------------------------------------------------------------------------------- 1 | #--------------------------------------------------------------------- 2 | # Example configuration for a possible web application. See the 3 | # full configuration options online. 4 | # 5 | # http://haproxy.1wt.eu/download/1.4/doc/configuration.txt 6 | # 7 | #--------------------------------------------------------------------- 8 | 9 | #--------------------------------------------------------------------- 10 | # Global settings 11 | #--------------------------------------------------------------------- 12 | global 13 | # to have these messages end up in /var/log/haproxy.log you will 14 | # need to: 15 | # 16 | # 1) configure syslog to accept network log events. This is done 17 | # by adding the '-r' option to the SYSLOGD_OPTIONS in 18 | # /etc/sysconfig/syslog 19 | # 20 | # 2) configure local2 events to go to the /var/log/haproxy.log 21 | # file. A line like the following can be added to 22 | # /etc/sysconfig/syslog 23 | # 24 | # local2.* /var/log/haproxy.log 25 | # 26 | log 127.0.0.1 local2 27 | 28 | chroot /var/lib/haproxy 29 | pidfile /var/run/haproxy.pid 30 | maxconn 4000 31 | user haproxy 32 | group haproxy 33 | daemon 34 | 35 | # turn on stats unix socket 36 | stats socket /var/lib/haproxy/stats 37 | 38 | #--------------------------------------------------------------------- 39 | # common defaults that all the 'listen' and 'backend' sections will 40 | # use if not designated in their block 41 | #--------------------------------------------------------------------- 42 | defaults 43 | mode http 44 | log global 45 | option httplog 46 | option dontlognull 47 | option http-server-close 48 | option forwardfor except 127.0.0.0/8 49 | option redispatch 50 | retries 3 51 | timeout http-request 10s 52 | timeout queue 1m 53 | timeout connect 10s 54 | timeout client 1m 55 | timeout server 1m 56 | timeout http-keep-alive 10s 57 | timeout check 10s 58 | maxconn 3000 59 | 60 | frontend frontend_6443 61 | bind *:6443 62 | mode tcp 63 | option tcplog 64 | timeout client 1m 65 | default_backend backend_6443 66 | 67 | frontend frontend_22623 68 | bind *:22623 69 | mode tcp 70 | option tcplog 71 | timeout client 1m 72 | default_backend backend_22623 73 | 74 | frontend frontend_443 75 | bind *:443 76 | mode tcp 77 | option tcplog 78 | timeout client 1m 79 | default_backend backend_443 80 | 81 | frontend backend_80 82 | bind *:80 83 | mode tcp 84 | option tcplog 85 | timeout client 1m 86 | default_backend backend_80 87 | 88 | backend backend_6443 89 | mode tcp 90 | option tcplog 91 | option log-health-checks 92 | option redispatch 93 | log global 94 | balance roundrobin 95 | timeout connect 10s 96 | timeout server 1m 97 | # Remove the bootstrap server after ./openshift-install wait-for bootstrap-complete 98 | server bootstrap {{ hostvars['bootstrap'].ip }}:6443 check 99 | {% for item in groups[provision_group] %} 100 | {% if 'master' in item %} 101 | server {{ item }} {{ hostvars[item].ip }}:6443 check 102 | {% endif %} 103 | {% endfor %} 104 | 105 | backend backend_22623 106 | mode tcp 107 | option tcplog 108 | option log-health-checks 109 | option redispatch 110 | log global 111 | balance roundrobin 112 | timeout connect 10s 113 | timeout server 1m 114 | # Remove the 
bootstrap server after ./openshift-install wait-for bootstrap-complete 115 | server bootstrap {{ hostvars['bootstrap'].ip }}:22623 check 116 | {% for item in groups[provision_group] %} 117 | {% if 'master' in item %} 118 | server {{ item }} {{ hostvars[item].ip }}:22623 check 119 | {% endif %} 120 | {% endfor %} 121 | 122 | backend backend_443 123 | mode tcp 124 | option tcplog 125 | option log-health-checks 126 | option redispatch 127 | log global 128 | balance roundrobin 129 | timeout connect 10s 130 | timeout server 1m 131 | {% for item in groups[provision_group] %} 132 | {% if 'worker' in item %} 133 | server {{ item }} {{ hostvars[item].ip }}:443 check 134 | {% endif %} 135 | {% endfor %} 136 | 137 | backend backend_80 138 | mode tcp 139 | option tcplog 140 | option log-health-checks 141 | option redispatch 142 | log global 143 | balance roundrobin 144 | timeout connect 10s 145 | timeout server 1m 146 | {% for item in groups[provision_group] %} 147 | {% if 'worker' in item %} 148 | server {{ item }} {{ hostvars[item].ip }}:80 check 149 | {% endif %} 150 | {% endfor %} 151 | 152 | # Enable stats on :9000/haproxy_stats (no auth required) 153 | 154 | listen stats 155 | bind 0.0.0.0:9000 156 | mode http 157 | stats enable 158 | stats uri /haproxy_stats 159 | -------------------------------------------------------------------------------- /inventory-example.yml: -------------------------------------------------------------------------------- 1 | --- 2 | all: 3 | vars: 4 | provision_group: pg 5 | iso_name: rhcos-4.3.0-x86_64-installer-auto.iso 6 | base_domain: rhv-upi.ocp.pwc.umbrella.local 7 | dhcp_server_dns_servers: 172.16.10.11 8 | dhcp_server_gateway: 172.16.10.254 9 | dhcp_server_subnet_mask: 255.255.255.0 10 | dhcp_server_subnet: 172.16.10.0 11 | load_balancer_ip: 172.16.10.114 12 | ipa_validate_certs: yes 13 | installation_directory: /home/chris/git/openshift4-rhv-upi/local/install/rhv-upi 14 | children: 15 | webserver: 16 | hosts: 17 | lb.rhv-upi.ocp.pwc.umbrella.local: 18 | httpd_port: 8888 19 | loadbalancer: 20 | hosts: 21 | lb.rhv-upi.ocp.pwc.umbrella.local: 22 | pg: 23 | hosts: 24 | bootstrap: 25 | rhv_cluster: "r710s" 26 | ip: 172.16.10.150 27 | memory: 24GiB 28 | sockets: 1 29 | cores: 4 30 | iso: "{{ iso_name }}" 31 | disks: 32 | - name: os 33 | bootable: yes 34 | size: 140GiB 35 | format: cow 36 | interface: virtio_scsi 37 | storage_domain: nfs-vms 38 | nics: 39 | - name: eth0 40 | network: lab 41 | interface: virtio 42 | # mac_address: You can provide specific MAC address here 43 | master0: 44 | rhv_cluster: "r710s" 45 | etcd_name: etcd-0 46 | ip: 172.16.10.151 47 | memory: 24GiB 48 | sockets: 1 49 | cores: 4 50 | iso: "{{ iso_name }}" 51 | disks: 52 | - name: os 53 | bootable: yes 54 | size: 140GiB 55 | format: cow 56 | interface: virtio_scsi 57 | storage_domain: nfs-vms 58 | nics: 59 | - name: eth0 60 | network: lab 61 | interface: virtio 62 | master1: 63 | rhv_cluster: "r710s" 64 | etcd_name: etcd-1 65 | ip: 172.16.10.152 66 | memory: 24GiB 67 | sockets: 1 68 | cores: 4 69 | iso: "{{ iso_name }}" 70 | disks: 71 | - name: os 72 | bootable: yes 73 | size: 140GiB 74 | format: cow 75 | interface: virtio_scsi 76 | storage_domain: nfs-vms 77 | nics: 78 | - name: eth0 79 | network: lab 80 | interface: virtio 81 | master2: 82 | rhv_cluster: "r710s" 83 | etcd_name: etcd-2 84 | ip: 172.16.10.153 85 | memory: 24GiB 86 | sockets: 1 87 | cores: 4 88 | iso: "{{ iso_name }}" 89 | disks: 90 | - name: os 91 | bootable: yes 92 | size: 140GiB 93 | format: cow 94 | interface: virtio_scsi 
95 | storage_domain: nfs-vms 96 | nics: 97 | - name: eth0 98 | network: lab 99 | interface: virtio 100 | worker0: 101 | rhv_cluster: "r710s" 102 | ip: 172.16.10.154 103 | memory: 24GiB 104 | sockets: 1 105 | cores: 4 106 | iso: "{{ iso_name }}" 107 | disks: 108 | - name: os 109 | bootable: yes 110 | size: 140GiB 111 | format: cow 112 | interface: virtio_scsi 113 | storage_domain: nfs-vms 114 | nics: 115 | - name: eth0 116 | network: lab 117 | interface: virtio 118 | worker1: 119 | rhv_cluster: "r710s" 120 | ip: 172.16.10.155 121 | memory: 24GiB 122 | sockets: 1 123 | cores: 4 124 | iso: "{{ iso_name }}" 125 | disks: 126 | - name: os 127 | bootable: yes 128 | size: 140GiB 129 | format: cow 130 | interface: virtio_scsi 131 | storage_domain: nfs-vms 132 | nics: 133 | - name: eth0 134 | network: lab 135 | interface: virtio 136 | worker2: 137 | rhv_cluster: "r710s" 138 | ip: 172.16.10.156 139 | memory: 24GiB 140 | sockets: 1 141 | cores: 4 142 | iso: "{{ iso_name }}" 143 | disks: 144 | - name: os 145 | bootable: yes 146 | size: 140GiB 147 | format: cow 148 | interface: virtio_scsi 149 | storage_domain: nfs-vms 150 | nics: 151 | - name: eth0 152 | network: lab 153 | interface: virtio 154 | worker3: 155 | rhv_cluster: "r710s" 156 | ip: 172.16.10.157 157 | memory: 24GiB 158 | sockets: 1 159 | cores: 4 160 | iso: "{{ iso_name }}" 161 | disks: 162 | - name: os 163 | bootable: yes 164 | size: 140GiB 165 | format: cow 166 | interface: virtio_scsi 167 | storage_domain: nfs-vms 168 | nics: 169 | - name: eth0 170 | network: lab 171 | interface: virtio 172 | -------------------------------------------------------------------------------- /ocs/inventory-example-ocs.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | all: 3 | vars: 4 | provision_group: pg 5 | iso_name: rhcos-4.3.0-x86_64-installer-auto.iso 6 | base_domain: rhv-upi.ocp.pwc.umbrella.local 7 | dhcp_server_dns_servers: 172.16.10.11 8 | dhcp_server_gateway: 172.16.10.254 9 | dhcp_server_subnet_mask: 255.255.255.0 10 | dhcp_server_subnet: 172.16.10.0 11 | load_balancer_ip: 172.16.10.115 12 | installation_directory: /home/chris/upi/rhv-upi 13 | children: 14 | webserver: 15 | hosts: 16 | lb.rhv-upi.ocp.pwc.umbrella.local: 17 | httpd_port: 8080 18 | loadbalancer: 19 | hosts: 20 | lb.rhv-upi.ocp.pwc.umbrella.local: 21 | pg: 22 | hosts: 23 | bootstrap: 24 | rhv_cluster: "r710s" 25 | ip: 172.16.10.150 26 | memory: 24GiB 27 | sockets: 1 28 | cores: 4 29 | iso: "{{ iso_name }}" 30 | disks: 31 | - name: os 32 | bootable: yes 33 | size: 100GiB 34 | format: cow 35 | interface: virtio_scsi 36 | storage_domain: nfs-vms 37 | nics: 38 | - name: eth0 39 | network: lab 40 | interface: virtio 41 | master0: 42 | rhv_cluster: "r710s" 43 | etcd_name: etcd-0 44 | ip: 172.16.10.151 45 | memory: 24GiB 46 | sockets: 1 47 | cores: 4 48 | iso: "{{ iso_name }}" 49 | disks: 50 | - name: os 51 | bootable: yes 52 | size: 100GiB 53 | format: cow 54 | interface: virtio_scsi 55 | storage_domain: nfs-vms 56 | nics: 57 | - name: eth0 58 | network: lab 59 | interface: virtio 60 | master1: 61 | rhv_cluster: "r710s" 62 | etcd_name: etcd-1 63 | ip: 172.16.10.152 64 | memory: 24GiB 65 | sockets: 1 66 | cores: 4 67 | iso: "{{ iso_name }}" 68 | disks: 69 | - name: os 70 | bootable: yes 71 | size: 100GiB 72 | format: cow 73 | interface: virtio_scsi 74 | storage_domain: nfs-vms 75 | nics: 76 | - name: eth0 77 | network: lab 78 | interface: virtio 79 | master2: 80 | rhv_cluster: "r710s" 81 | etcd_name: etcd-2 82 | ip: 172.16.10.153 83 | memory: 
24GiB 84 | sockets: 1 85 | cores: 4 86 | iso: "{{ iso_name }}" 87 | disks: 88 | - name: os 89 | bootable: yes 90 | size: 100GiB 91 | format: cow 92 | interface: virtio_scsi 93 | storage_domain: nfs-vms 94 | nics: 95 | - name: eth0 96 | network: lab 97 | interface: virtio 98 | worker0: 99 | rhv_cluster: "r710-hv1" 100 | ip: 172.16.10.154 101 | memory: 48GiB 102 | sockets: 2 103 | cores: 6 104 | iso: "{{ iso_name }}" 105 | disks: 106 | - name: os 107 | bootable: yes 108 | size: 100GiB 109 | format: cow 110 | interface: virtio_scsi 111 | storage_domain: rhv-r710-hv1-cluster-vms 112 | - name: mon 113 | bootable: no 114 | size: 10GiB 115 | format: cow 116 | interface: virtio_scsi 117 | storage_domain: nvme-vms 118 | - name: osd 119 | bootable: no 120 | size: 300GiB 121 | format: cow 122 | interface: virtio_scsi 123 | storage_domain: nvme-vms 124 | nics: 125 | - name: eth0 126 | network: lab 127 | interface: virtio 128 | worker1: 129 | rhv_cluster: "r710-hv1" 130 | ip: 172.16.10.155 131 | memory: 48GiB 132 | sockets: 2 133 | cores: 6 134 | iso: "{{ iso_name }}" 135 | disks: 136 | - name: os 137 | bootable: yes 138 | size: 100GiB 139 | format: cow 140 | interface: virtio_scsi 141 | storage_domain: rhv-r710-hv1-cluster-vms 142 | - name: mon 143 | bootable: no 144 | size: 10GiB 145 | format: cow 146 | interface: virtio_scsi 147 | storage_domain: nvme-vms 148 | - name: osd 149 | bootable: no 150 | size: 300GiB 151 | format: cow 152 | interface: virtio_scsi 153 | storage_domain: nvme-vms 154 | nics: 155 | - name: eth0 156 | network: lab 157 | interface: virtio 158 | worker2: 159 | rhv_cluster: "r710-hv1" 160 | ip: 172.16.10.156 161 | memory: 48GiB 162 | sockets: 2 163 | cores: 6 164 | iso: "{{ iso_name }}" 165 | disks: 166 | - name: os 167 | bootable: yes 168 | size: 100GiB 169 | format: cow 170 | interface: virtio_scsi 171 | storage_domain: rhv-r710-hv1-cluster-vms 172 | - name: mon 173 | bootable: no 174 | size: 10GiB 175 | format: cow 176 | interface: virtio_scsi 177 | storage_domain: nvme-vms 178 | - name: osd 179 | bootable: no 180 | size: 300GiB 181 | format: cow 182 | interface: virtio_scsi 183 | storage_domain: nvme-vms 184 | nics: 185 | - name: eth0 186 | network: lab 187 | interface: virtio 188 | worker3: 189 | rhv_cluster: "r710-hv1" 190 | ip: 172.16.10.157 191 | memory: 24GiB 192 | sockets: 1 193 | cores: 4 194 | iso: "{{ iso_name }}" 195 | disks: 196 | - name: os 197 | bootable: yes 198 | size: 100GiB 199 | format: cow 200 | interface: virtio_scsi 201 | storage_domain: rhv-r710-hv1-cluster-vms 202 | nics: 203 | - name: eth0 204 | network: lab 205 | interface: virtio 206 | worker4: 207 | rhv_cluster: "r710-hv1" 208 | ip: 172.16.10.158 209 | memory: 24GiB 210 | sockets: 1 211 | cores: 4 212 | iso: "{{ iso_name }}" 213 | disks: 214 | - name: os 215 | bootable: yes 216 | size: 100GiB 217 | format: cow 218 | interface: virtio_scsi 219 | storage_domain: rhv-r710-hv1-cluster-vms 220 | nics: 221 | - name: eth0 222 | network: lab 223 | interface: virtio 224 | worker5: 225 | rhv_cluster: "r710-hv1" 226 | ip: 172.16.10.159 227 | memory: 24GiB 228 | sockets: 1 229 | cores: 4 230 | iso: "{{ iso_name }}" 231 | disks: 232 | - name: os 233 | bootable: yes 234 | size: 100GiB 235 | format: cow 236 | interface: virtio_scsi 237 | storage_domain: rhv-r710-hv1-cluster-vms 238 | nics: 239 | - name: eth0 240 | network: lab 241 | interface: virtio 242 | -------------------------------------------------------------------------------- /roles/httpd/templates/httpd.conf.j2: 
-------------------------------------------------------------------------------- 1 | # 2 | # This is the main Apache HTTP server configuration file. It contains the 3 | # configuration directives that give the server its instructions. 4 | # See for detailed information. 5 | # In particular, see 6 | # 7 | # for a discussion of each configuration directive. 8 | # 9 | # Do NOT simply read the instructions in here without understanding 10 | # what they do. They're here only as hints or reminders. If you are unsure 11 | # consult the online docs. You have been warned. 12 | # 13 | # Configuration and logfile names: If the filenames you specify for many 14 | # of the server's control files begin with "/" (or "drive:/" for Win32), the 15 | # server will use that explicit path. If the filenames do *not* begin 16 | # with "/", the value of ServerRoot is prepended -- so 'log/access_log' 17 | # with ServerRoot set to '/www' will be interpreted by the 18 | # server as '/www/log/access_log', where as '/log/access_log' will be 19 | # interpreted as '/log/access_log'. 20 | 21 | # 22 | # ServerRoot: The top of the directory tree under which the server's 23 | # configuration, error, and log files are kept. 24 | # 25 | # Do not add a slash at the end of the directory path. If you point 26 | # ServerRoot at a non-local disk, be sure to specify a local disk on the 27 | # Mutex directive, if file-based mutexes are used. If you wish to share the 28 | # same ServerRoot for multiple httpd daemons, you will need to change at 29 | # least PidFile. 30 | # 31 | ServerRoot "/etc/httpd" 32 | 33 | # 34 | # Listen: Allows you to bind Apache to specific IP addresses and/or 35 | # ports, instead of the default. See also the 36 | # directive. 37 | # 38 | # Change this to Listen on specific IP addresses as shown below to 39 | # prevent Apache from glomming onto all bound IP addresses. 40 | # 41 | #Listen 12.34.56.78:80 42 | Listen {{ httpd_port }} 43 | 44 | # 45 | # Dynamic Shared Object (DSO) Support 46 | # 47 | # To be able to use the functionality of a module which was built as a DSO you 48 | # have to place corresponding `LoadModule' lines at this location so the 49 | # directives contained in it are actually available _before_ they are used. 50 | # Statically compiled modules (those listed by `httpd -l') do not need 51 | # to be loaded here. 52 | # 53 | # Example: 54 | # LoadModule foo_module modules/mod_foo.so 55 | # 56 | Include conf.modules.d/*.conf 57 | 58 | # 59 | # If you wish httpd to run as a different user or group, you must run 60 | # httpd as root initially and it will switch. 61 | # 62 | # User/Group: The name (or #number) of the user/group to run httpd as. 63 | # It is usually good practice to create a dedicated user and group for 64 | # running httpd, as with most system services. 65 | # 66 | User apache 67 | Group apache 68 | 69 | # 'Main' server configuration 70 | # 71 | # The directives in this section set up the values used by the 'main' 72 | # server, which responds to any requests that aren't handled by a 73 | # definition. These values also provide defaults for 74 | # any containers you may define later in the file. 75 | # 76 | # All of these directives may appear inside containers, 77 | # in which case these default settings will be overridden for the 78 | # virtual host being defined. 79 | # 80 | 81 | # 82 | # ServerAdmin: Your address, where problems with the server should be 83 | # e-mailed. This address appears on some server-generated pages, such 84 | # as error documents. e.g. 
admin@your-domain.com 85 | # 86 | ServerAdmin root@localhost 87 | 88 | # 89 | # ServerName gives the name and port that the server uses to identify itself. 90 | # This can often be determined automatically, but we recommend you specify 91 | # it explicitly to prevent problems during startup. 92 | # 93 | # If your host doesn't have a registered DNS name, enter its IP address here. 94 | # 95 | #ServerName www.example.com:80 96 | 97 | # 98 | # Deny access to the entirety of your server's filesystem. You must 99 | # explicitly permit access to web content directories in other 100 | # blocks below. 101 | # 102 | 103 | AllowOverride none 104 | Require all denied 105 | 106 | 107 | # 108 | # Note that from this point forward you must specifically allow 109 | # particular features to be enabled - so if something's not working as 110 | # you might expect, make sure that you have specifically enabled it 111 | # below. 112 | # 113 | 114 | # 115 | # DocumentRoot: The directory out of which you will serve your 116 | # documents. By default, all requests are taken from this directory, but 117 | # symbolic links and aliases may be used to point to other locations. 118 | # 119 | DocumentRoot "/var/www/html" 120 | 121 | # 122 | # Relax access to content within /var/www. 123 | # 124 | 125 | AllowOverride None 126 | # Allow open access: 127 | Require all granted 128 | 129 | 130 | # Further relax access to the default document root: 131 | 132 | # 133 | # Possible values for the Options directive are "None", "All", 134 | # or any combination of: 135 | # Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews 136 | # 137 | # Note that "MultiViews" must be named *explicitly* --- "Options All" 138 | # doesn't give it to you. 139 | # 140 | # The Options directive is both complicated and important. Please see 141 | # http://httpd.apache.org/docs/2.4/mod/core.html#options 142 | # for more information. 143 | # 144 | Options Indexes FollowSymLinks 145 | 146 | # 147 | # AllowOverride controls what directives may be placed in .htaccess files. 148 | # It can be "All", "None", or any combination of the keywords: 149 | # Options FileInfo AuthConfig Limit 150 | # 151 | AllowOverride None 152 | 153 | # 154 | # Controls who can get stuff from this server. 155 | # 156 | Require all granted 157 | 158 | 159 | # 160 | # DirectoryIndex: sets the file that Apache will serve if a directory 161 | # is requested. 162 | # 163 | 164 | DirectoryIndex index.html 165 | 166 | 167 | # 168 | # The following lines prevent .htaccess and .htpasswd files from being 169 | # viewed by Web clients. 170 | # 171 | 172 | Require all denied 173 | 174 | 175 | # 176 | # ErrorLog: The location of the error log file. 177 | # If you do not specify an ErrorLog directive within a 178 | # container, error messages relating to that virtual host will be 179 | # logged here. If you *do* define an error logfile for a 180 | # container, that host's errors will be logged there and not here. 181 | # 182 | ErrorLog "logs/error_log" 183 | 184 | # 185 | # LogLevel: Control the number of messages logged to the error_log. 186 | # Possible values include: debug, info, notice, warn, error, crit, 187 | # alert, emerg. 188 | # 189 | LogLevel warn 190 | 191 | 192 | # 193 | # The following directives define some format nicknames for use with 194 | # a CustomLog directive (see below). 
195 | # 196 | LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined 197 | LogFormat "%h %l %u %t \"%r\" %>s %b" common 198 | 199 | 200 | # You need to enable mod_logio.c to use %I and %O 201 | LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio 202 | 203 | 204 | # 205 | # The location and format of the access logfile (Common Logfile Format). 206 | # If you do not define any access logfiles within a 207 | # container, they will be logged here. Contrariwise, if you *do* 208 | # define per- access logfiles, transactions will be 209 | # logged therein and *not* in this file. 210 | # 211 | #CustomLog "logs/access_log" common 212 | 213 | # 214 | # If you prefer a logfile with access, agent, and referer information 215 | # (Combined Logfile Format) you can use the following directive. 216 | # 217 | CustomLog "logs/access_log" combined 218 | 219 | 220 | 221 | # 222 | # Redirect: Allows you to tell clients about documents that used to 223 | # exist in your server's namespace, but do not anymore. The client 224 | # will make a new request for the document at its new location. 225 | # Example: 226 | # Redirect permanent /foo http://www.example.com/bar 227 | 228 | # 229 | # Alias: Maps web paths into filesystem paths and is used to 230 | # access content that does not live under the DocumentRoot. 231 | # Example: 232 | # Alias /webpath /full/filesystem/path 233 | # 234 | # If you include a trailing / on /webpath then the server will 235 | # require it to be present in the URL. You will also likely 236 | # need to provide a section to allow access to 237 | # the filesystem path. 238 | 239 | # 240 | # ScriptAlias: This controls which directories contain server scripts. 241 | # ScriptAliases are essentially the same as Aliases, except that 242 | # documents in the target directory are treated as applications and 243 | # run by the server when requested rather than as documents sent to the 244 | # client. The same rules about trailing "/" apply to ScriptAlias 245 | # directives as to Alias. 246 | # 247 | ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" 248 | 249 | 250 | 251 | # 252 | # "/var/www/cgi-bin" should be changed to whatever your ScriptAliased 253 | # CGI directory exists, if you have that configured. 254 | # 255 | 256 | AllowOverride None 257 | Options None 258 | Require all granted 259 | 260 | 261 | 262 | # 263 | # TypesConfig points to the file containing the list of mappings from 264 | # filename extension to MIME-type. 265 | # 266 | TypesConfig /etc/mime.types 267 | 268 | # 269 | # AddType allows you to add to or override the MIME configuration 270 | # file specified in TypesConfig for specific file types. 271 | # 272 | #AddType application/x-gzip .tgz 273 | # 274 | # AddEncoding allows you to have certain browsers uncompress 275 | # information on the fly. Note: Not all browsers support this. 276 | # 277 | #AddEncoding x-compress .Z 278 | #AddEncoding x-gzip .gz .tgz 279 | # 280 | # If the AddEncoding directives above are commented-out, then you 281 | # probably should define those extensions to indicate media types: 282 | # 283 | AddType application/x-compress .Z 284 | AddType application/x-gzip .gz .tgz 285 | 286 | # 287 | # AddHandler allows you to map certain file extensions to "handlers": 288 | # actions unrelated to filetype. 
These can be either built into the server 289 | # or added with the Action directive (see below) 290 | # 291 | # To use CGI scripts outside of ScriptAliased directories: 292 | # (You will also need to add "ExecCGI" to the "Options" directive.) 293 | # 294 | #AddHandler cgi-script .cgi 295 | 296 | # For type maps (negotiated resources): 297 | #AddHandler type-map var 298 | 299 | # 300 | # Filters allow you to process content before it is sent to the client. 301 | # 302 | # To parse .shtml files for server-side includes (SSI): 303 | # (You will also need to add "Includes" to the "Options" directive.) 304 | # 305 | AddType text/html .shtml 306 | AddOutputFilter INCLUDES .shtml 307 | 308 | 309 | # 310 | # Specify a default charset for all content served; this enables 311 | # interpretation of all content as UTF-8 by default. To use the 312 | # default browser choice (ISO-8859-1), or to allow the META tags 313 | # in HTML content to override this choice, comment out this 314 | # directive: 315 | # 316 | AddDefaultCharset UTF-8 317 | 318 | 319 | # 320 | # The mod_mime_magic module allows the server to use various hints from the 321 | # contents of the file itself to determine its type. The MIMEMagicFile 322 | # directive tells the module where the hint definitions are located. 323 | # 324 | MIMEMagicFile conf/magic 325 | 326 | 327 | # 328 | # Customizable error responses come in three flavors: 329 | # 1) plain text 2) local redirects 3) external redirects 330 | # 331 | # Some examples: 332 | #ErrorDocument 500 "The server made a boo boo." 333 | #ErrorDocument 404 /missing.html 334 | #ErrorDocument 404 "/cgi-bin/missing_handler.pl" 335 | #ErrorDocument 402 http://www.example.com/subscription_info.html 336 | # 337 | 338 | # 339 | # EnableMMAP and EnableSendfile: On systems that support it, 340 | # memory-mapping or the sendfile syscall may be used to deliver 341 | # files. This usually improves server performance, but must 342 | # be turned off when serving from networked-mounted 343 | # filesystems or if support for these functions is otherwise 344 | # broken on your system. 345 | # Defaults if commented: EnableMMAP On, EnableSendfile Off 346 | # 347 | #EnableMMAP off 348 | EnableSendfile on 349 | 350 | # Supplemental configuration 351 | # 352 | # Load config files in the "/etc/httpd/conf.d" directory, if any. 353 | IncludeOptional conf.d/*.conf 354 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Provisioning OpenShift 4.4 on RHV Using Baremetal UPI 2 | 3 | This repository contains a set of playbooks to help facilitate the deployment of OpenShift 4.4 on RHV. 4 | 5 | ## Background 6 | 7 | The playbooks/scripts in this repository should help you automate the vast majority of an OpenShift 4.4 UPI deployment on RHV. Be sure to read the requirements section below. My initial installation of OCP on RHV was a little cumbersome, so I opted to automate the majority of the installation to allow for iterative deployments. 8 | 9 | The biggest challenge was the installation of the Red Hat Enterprise Linux CoreOS (RHCOS) nodes themselves and that is the focal point of the automation. The playbooks/scripts provided are essentially an automated walk through of the standard baremetal UPI installation instructions but tailored for RHV. 
10 | 11 | To automate the deployment of RHCOS, the standard boot ISO is modified so the installation automatically starts with specific kernel parameters. The parameters for each node type (bootstrap, masters and workers) are the same with the exception of `coreos.inst.ignition_url`. To simplify the process, the boot ISO is made to reference a PHP script that offers up the correct ignition config based on the requesting host's DNS name. This method allows the same boot ISO to be used for each node type. 12 | 13 | Before provisioning the RHCOS nodes, a lot of prep work needs to be completed. This includes creating the proper DNS entries for the environment, configuring a DHCP server, configuring a load balancer and configuring a web server to store ignition configs and other installation artifacts. Ansible playbooks are provided to automate much of this process. 14 | 15 | ## Specific Automations 16 | 17 | * Deployment of RHCOS on RHV 18 | * Creation of all SRV, A and PTR records in IdM 19 | * Deployment of httpd Server for Installation Artifacts and Logic 20 | * Deployment of HAProxy and Applicable Configuration 21 | * Deployment of dhcpd and Applicable Static IP Assignment 22 | * Ordered Starting (i.e. installation) of VMs 23 | 24 | ## Requirements 25 | 26 | To leverage the automation in this guide, you need to bring the following: 27 | 28 | * RHV Environment (tested on 4.3.9) 29 | * IdM Server with DNS Enabled 30 | * Must have Proper Forward/Reverse Zones Configured 31 | * RHEL 7 Server which will act as a Web Server, Load Balancer and DHCP Server 32 | * Only Repository Requirement is `rhel-7-server-rpms` 33 | 34 | ### Naming Convention 35 | 36 | All hostnames must follow this format: 37 | 38 | * bootstrap.\<base_domain\> 39 | * master0.\<base_domain\> 40 | * masterX.\<base_domain\> 41 | * worker0.\<base_domain\> 42 | * workerX.\<base_domain\> 43 | 44 | ## UPI Installation Notes 45 | 46 | * Bootstrap SSL Certificate is only valid for 24 hours 47 | * etcd/master naming convention conforms to a 0-based index (i.e. master0, master1, master2...not master1, master2, master3) 48 | * qemu-guest-agent is not included in RHCOS (instructions for installing it via a daemonset or MachineConfigs are provided below) 49 | 50 | # Installing 51 | 52 | Read through the [Installing on baremetal](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.4/html-single/installing_on_bare_metal/index) installation documentation before proceeding. 53 | 54 | ## Clone this Repository 55 | 56 | Find a good working directory and clone this repository using the following command: 57 | 58 | ```console 59 | $ git clone https://github.com/sa-ne/openshift4-rhv-upi.git 60 | ``` 61 | 62 | ## Create DNS Zones in IdM 63 | 64 | Log in to your IdM server and make sure a reverse zone is configured for your subnet. My lab has a subnet of `172.16.10.0` so the corresponding reverse zone is called `10.16.172.in-addr.arpa.`. Make sure a forward zone is configured as well. It should be whatever is defined in the `base_domain` variable in your Ansible inventory file (`rhv-upi.ocp.pwc.umbrella.local` in this example). 65 | 66 | ## Creating Inventory File for Ansible 67 | 68 | An example inventory file is included for Ansible (`inventory-example.yml`). Use this file as a baseline. Make sure to configure the appropriate number of master/worker nodes for your deployment. 
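As you edit your copy of the inventory, it can help to confirm the file still parses and that hosts land in the expected groups. One quick, optional check (assuming Ansible is installed on your workstation and your copy is named `inventory.yml`) is to dump the group/host tree with the stock `ansible-inventory` command:

```console
$ ansible-inventory -i inventory.yml --graph
```

You should see the `webserver`, `loadbalancer` and `pg` groups, with the bootstrap, master and worker hosts listed under `pg`.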
69 | 70 | The following global variables will need to be modified (the default values are what I use in my lab, consider them examples): 71 | 72 | |Variable|Description| 73 | |:---|:---| 74 | |iso_name|The name of the custom ISO file in the RHV ISO domain| 75 | |base\_domain|The base DNS domain. Not to be confused with the base domain in the UPI instructions. Our base\_domain variable in this case is `\<cluster_name\>.\<base_domain\>` (the OpenShift cluster name prepended to the base domain used in `install-config.yaml`)| 76 | |dhcp\_server\_dns\_servers|DNS server assigned by DHCP server| 77 | |dhcp\_server\_gateway|Gateway assigned by DHCP server| 78 | |dhcp\_server\_subnet\_mask|Subnet mask assigned by DHCP server| 79 | |dhcp\_server\_subnet|IP Subnet used to configure dhcpd.conf| 80 | |load\_balancer\_ip|The IP address of your load balancer (the server that HAProxy will be installed on)| 81 | |ipa\_validate\_certs|Enable or disable validation of the certificates for your IdM server (default: `yes`)| 82 | |installation_directory|The directory that you will be using with the `openshift-install` command for generating ignition files| 83 | 84 | For the individual node configuration, be sure to update the hosts in the `pg` hostgroup. Several parameters will need to be changed for _each_ host including `ip`, `storage_domain` and `network`. You can also specify `mac_address` for each of the VMs in its `network` section (if you don't, VMs will obtain their MAC address from the cluster's MAC pool automatically). Match up your RHV environment with the inventory file. 85 | 86 | Under the `webserver` and `loadbalancer` groups, include the FQDN of each host. Also make sure you configure the `httpd_port` variable for the web server host. In this example, the web server that will serve up installation artifacts and the load balancer (HAProxy) are the same host. 87 | 88 | ## Creating an Ansible Vault 89 | 90 | In the directory that contains your cloned copy of this git repo, create an Ansible vault called vault.yml as follows: 91 | 92 | ```console 93 | $ ansible-vault create vault.yml 94 | ``` 95 | 96 | The vault requires the following variables. Adjust the values to suit your environment. 97 | 98 | ```yaml 99 | --- 100 | rhv_hostname: "rhevm.pwc.umbrella.local" 101 | rhv_username: "admin@internal" 102 | rhv_password: "changeme" 103 | ipa_hostname: "idm1.umbrella.local" 104 | ipa_username: "admin" 105 | ipa_password: "changeme" 106 | ``` 107 | 108 | ## Download the OpenShift Installer 109 | 110 | The OpenShift Installer releases are stored [here](https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/). Find the installer, right-click on the "Download Now" button and select copy link. Then pull the installer using curl (be sure to quote the URL) as shown (Linux client used as an example): 111 | 112 | ```console 113 | $ curl -o openshift-install-linux-4.4.3.tar.gz https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-install-linux-4.4.3.tar.gz 114 | ``` 115 | 116 | Extract the archive and continue. 117 | 118 | ## Creating Ignition Configs 119 | 120 | After you download the installer, we need to create our ignition configs using the `openshift-install` command. Create a file called `install-config.yaml` similar to the one shown below. This example shows 3 masters and 4 worker nodes. 
121 | 122 | ```yaml 123 | apiVersion: v1 124 | baseDomain: ocp.pwc.umbrella.local 125 | compute: 126 | - name: worker 127 | replicas: 4 128 | controlPlane: 129 | name: master 130 | replicas: 3 131 | metadata: 132 | name: rhv-upi 133 | networking: 134 | clusterNetworks: 135 | - cidr: 10.128.0.0/14 136 | hostPrefix: 23 137 | networkType: OpenShiftSDN 138 | serviceNetwork: 139 | - 172.30.0.0/16 140 | platform: 141 | none: {} 142 | fips: false 143 | pullSecret: '{ ... }' 144 | sshKey: 'ssh-rsa ... user@host' 145 | ``` 146 | 147 | You will need to modify baseDomain, pullSecret and sshKey (be sure to use your _public_ key) with the appropriate values. Next, copy `install-config.yaml` into your working directory (`/home/chris/upi/rhv-upi` in this example) and run the OpenShift installer as follows to generate your Ignition configs. 148 | 149 | Your pull secret can be obtained from the [OpenShift start page](https://cloud.redhat.com/openshift/install/metal/user-provisioned). 150 | 151 | ```console 152 | $ ./openshift-install create ignition-configs --dir=/home/chris/upi/rhv-upi 153 | ``` 154 | 155 | ## Staging Content 156 | 157 | Next, we need the RHCOS image. These images are stored [here](https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.4/latest/). On our web server, download the RHCOS image (BIOS, not UEFI) to the document root (assuming `/var/www/html`). 158 | 159 | _NOTE: You may be wondering about SELinux contexts since httpd is not installed. Fear not, our playbooks will handle that during the installation phase._ 160 | 161 | ```console 162 | $ sudo mkdir -p /var/www/html 163 | ``` 164 | 165 | ```console 166 | $ sudo curl -o /var/www/html/rhcos-4.4.3-x86_64-metal.x86_64.raw.gz https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/latest/latest/rhcos-4.4.3-x86_64-metal.x86_64.raw.gz 167 | ``` 168 | 169 | Ignition files generated in the previous step will be copied to the web server automatically as part of the `httpd` role. If you intend to skip that role, copy `bootstrap.ign`, `master.ign` and `worker.ign` from your working directory to `/var/www/html` on your web server manually now. 170 | 171 | ## Generating Boot ISOs 172 | 173 | We will use a bootable ISO to install RHCOS on our virtual machines. We need to pass several parameters to the kernel (see below). This can be cumbersome, so to speed things along we will generate a single boot ISO that can be used for bootstrap, master and worker nodes. During the installation, a playbook will install the PHP script on your web server. This script will serve up the appropriate ignition config based on the requesting server's DNS name. 174 | 175 | __Kernel Parameters__ 176 | 177 | Note these parameters are for reference only. Specify the appropriate values for your environment in `util/iso-generator.sh` and run the script to generate an ISO specific to your environment. 178 | 179 | * coreos.inst=yes 180 | * coreos.inst.install\_dev=sda 181 | * coreos.inst.image\_url=http://example.com/rhcos-4.4.3-x86_64-metal.raw.gz 182 | * coreos.inst.ignition\_url=http://example.com/ignition-downloader.php 183 | 184 | ### Obtaining RHCOS Install ISO 185 | 186 | Next, we need the RHCOS installer ISO (stored [here](https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.4/latest/)). Download the ISO file as shown. Be sure to check the directory for the latest version. 
187 | 188 | ```console 189 | $ curl -o /tmp/rhcos-4.4.3-x86_64-installer.x86_64.iso https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.4/latest/rhcos-4.4.3-x86_64-installer.x86_64.iso 190 | ``` 191 | 192 | ### Modifying the ISO 193 | 194 | A script is provided to recreate an ISO that will automatically boot with the appropriate kernel parameters. Locate the script and modify the variables at the top to suit your environment. 195 | 196 | Most parameters can be left alone. You WILL need to change at least the `KP_WEBSERVER` variable to point to the web server hosting your ignition configs and RHCOS image. 197 | 198 | ```shell-script 199 | VERSION=4.4.3-x86_64 200 | ISO_SOURCE=/tmp/rhcos-$VERSION-installer.x86_64.iso 201 | ISO_OUTPUT=/tmp/rhcos-$VERSION-installer.x86_64-auto.iso 202 | 203 | DIRECTORY_MOUNT=/tmp/rhcos-$VERSION-installer 204 | DIRECTORY_WORKING=/tmp/rhcos-$VERSION-installer-auto 205 | 206 | KP_WEBSERVER=lb.rhv-upi.ocp.pwc.umbrella.local:8888 207 | KP_COREOS_IMAGE=rhcos-$VERSION-metal.x86_64.raw.gz 208 | KP_BLOCK_DEVICE=sda 209 | ``` 210 | 211 | Running the script (make sure to do this as root) should produce output similar to the following: 212 | 213 | ```console 214 | (rhv) 0 chris@umbrella.local@toaster:~ $ sudo ./util/iso-generator.sh 215 | mount: /tmp/rhcos-4.4.3-x86_64-installer: WARNING: device write-protected, mounted read-only. 221 | sending incremental file list 222 | README.md 223 | EFI/ 224 | EFI/fedora/ 225 | EFI/fedora/grub.cfg 226 | images/ 227 | images/efiboot.img 228 | images/initramfs.img 229 | images/vmlinuz 230 | isolinux/ 231 | isolinux/boot.cat 232 | isolinux/boot.msg 233 | isolinux/isolinux.bin 234 | isolinux/isolinux.cfg 235 | isolinux/ldlinux.c32 236 | isolinux/libcom32.c32 237 | isolinux/libutil.c32 238 | isolinux/vesamenu.c32 239 | 240 | sent 75,912,546 bytes received 295 bytes 151,825,682.00 bytes/sec 241 | total size is 75,893,080 speedup is 1.00 242 | Size of boot image is 4 sectors -> No emulation 243 | 13.43% done, estimate finish Tue Jun 4 20:28:18 2019 244 | 26.88% done, estimate finish Tue Jun 4 20:28:18 2019 245 | 40.28% done, estimate finish Tue Jun 4 20:28:18 2019 246 | 53.72% done, estimate finish Tue Jun 4 20:28:18 2019 247 | 67.12% done, estimate finish Tue Jun 4 20:28:18 2019 248 | 80.56% done, estimate finish Tue Jun 4 20:28:18 2019 249 | 93.96% done, estimate finish Tue Jun 4 20:28:18 2019 250 | Total translation table size: 2048 251 | Total rockridge attributes bytes: 2086 252 | Total directory bytes: 8192 253 | Path table size(bytes): 66 254 | Max brk space used 1c000 255 | 37255 extents written (72 MB) 256 | ``` 257 | 258 | Copy the ISO to your ISO domain in RHV. After that you can clean up the /tmp directory with `rm -rf /tmp/rhcos*`. Make sure to update the `iso_name` variable in your Ansible inventory file with the correct name (`rhcos-4.4.3-x86_64-installer.x86_64-auto.iso` in this example). 259 | 260 | At this point we have completed the staging process and can let Ansible take over. 
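If you want an optional sanity check before cleaning up `/tmp` and booting anything, you can loop-mount the generated ISO and confirm the kernel parameters were embedded. This sketch assumes the BIOS boot entries live in `isolinux/isolinux.cfg` inside the ISO, which is where the installer ISO normally keeps them; adjust the path if your ISO differs.

```console
$ sudo mount -o loop /tmp/rhcos-4.4.3-x86_64-installer.x86_64-auto.iso /mnt
$ grep coreos.inst /mnt/isolinux/isolinux.cfg
$ sudo umount /mnt
```

The `coreos.inst.image_url` and `coreos.inst.ignition_url` values should point at your `KP_WEBSERVER` host.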
261 | 262 | ## Deploying OpenShift 4.4 on RHV with Ansible 263 | 264 | To kick off the installation, simply run the provision.yml playbook as follows: 265 | 266 | ```console 267 | $ ansible-playbook -i inventory.yml --ask-vault-pass provision.yml 268 | ``` 269 | 270 | The order of operations for the `provision.yml` playbook is as follows: 271 | 272 | * Create DNS Entries in IdM 273 | * Create VMs in RHV 274 | - Create VMs 275 | - Create Disks 276 | - Create NICs 277 | * Configure Load Balancer Host 278 | - Install and Configure dhcpd 279 | - Install and Configure HAProxy 280 | - Install and Configure httpd 281 | * Boot VMs 282 | - Start bootstrap VM and wait for SSH 283 | - Start master VMs and wait for SSH 284 | - Start worker VMs and wait for SSH 285 | 286 | Once the playbook completes (it should take several minutes), continue with the instructions. 287 | 288 | ### Skipping Portions of Automation 289 | 290 | If you already have your own DNS, DHCP or Load Balancer, you can skip those portions of the automation by passing the appropriate `--skip-tags` argument to the `ansible-playbook` command. 291 | 292 | Each step of the automation is placed in its own role, tagged `ipa`, `dhcpd` and `haproxy` respectively. If you have your own DHCP configured, you can skip that portion as follows: 293 | 294 | ```console 295 | $ ansible-playbook -i inventory.yml --ask-vault-pass --skip-tags dhcpd provision.yml 296 | ``` 297 | 298 | All three roles could be skipped using the following command: 299 | 300 | ```console 301 | $ ansible-playbook -i inventory.yml --ask-vault-pass --skip-tags dhcpd,ipa,haproxy provision.yml 302 | ``` 303 | 304 | ## Finishing the Deployment 305 | 306 | Once the VMs boot, RHCOS will be installed and the nodes will automatically start configuring themselves. From this point we are essentially following the Baremetal UPI instructions starting with [Creating the Cluster](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.4/html-single/installing_on_bare_metal/index#installation-installing-bare-metal_installing-bare-metal). 307 | 308 | Run the following command to ensure the bootstrap process completes (be sure to adjust the `--dir` flag with your working directory): 309 | 310 | ```console 311 | $ ./openshift-install --dir=/home/chris/upi/rhv-upi wait-for bootstrap-complete 312 | INFO Waiting up to 30m0s for the Kubernetes API at https://api.rhv-upi.ocp.pwc.umbrella.local:6443... 313 | INFO API v1.17.1+b9b84e0 up 314 | INFO Waiting up to 30m0s for bootstrapping to complete... 315 | INFO It is now safe to remove the bootstrap resources 316 | ``` 317 | 318 | Once this openshift-install command completes successfully, log in to the load balancer and comment out the references to the bootstrap server in `/etc/haproxy/haproxy.cfg`. There should be two references, one in the backend configuration `backend_22623` and one in the backend configuration `backend_6443`. Alternatively, you can just run this utility playbook to achieve the same result: 319 | 320 | ```console 321 | ansible-playbook -i inventory.yml bootstrap-cleanup.yaml 322 | ``` 323 | 324 | Lastly, refer to the baremetal UPI documentation and complete [Logging into the cluster](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.4/html-single/installing_on_bare_metal/index#cli-logging-in-kubeadmin_installing-bare-metal) and all remaining steps. 325 | 326 | # Installing QEMU Guest Agent 327 | 328 | RHCOS includes the `open-vm-tools` package by default but does not include `qemu-guest-agent`. 
To work around this, we can run the `qemu-ga` daemon via a container using a DaemonSet or by copying the `qemu-ga` binary and requisite configuration to RHCOS using a MachineConfig. 329 | 330 | ## Option 1 - Installing QEMU Guest Agent using a DaemonSet 331 | 332 | *Note: The DaemonSet option is not completely working. Guest memory is not reported to RHV. This is a work in progress.* 333 | 334 | The Dockerfile used to build the container is included in the `qemu-guest-agent` directory in this git repository. You will need to download the latest `qemu-guest-agent` RPM from access.redhat.com and place it alongside the Dockerfile to successfully build the container. The container was built using buildah as follows: 335 | 336 | ```console 337 | buildah bud -t qemu-guest-agent --format docker 338 | ``` 339 | 340 | However, building the container is for documentation purposes only. The DaemonSet is already configured to pull the `qemu-guest-agent` container from quay.io. 341 | 342 | To start the deployment, we first need to create the namespace. To do that, run the following command: 343 | 344 | ```console 345 | oc create -f qemu-guest-agent/0-namespace.yaml 346 | ``` 347 | 348 | Next, we need to create the service account: 349 | 350 | ```console 351 | oc create -f qemu-guest-agent/1-service-account.yaml 352 | ``` 353 | 354 | The pods will require the privileged SCC, so add the appropriate RBAC as follows: 355 | 356 | ```console 357 | oc create -f qemu-guest-agent/2-rbac.yaml 358 | ``` 359 | 360 | Finally, deploy the DaemonSet by running: 361 | 362 | ```console 363 | oc create -f qemu-guest-agent/3-daemonset.yaml 364 | ``` 365 | 366 | ## Option 2 - Installing QEMU Guest Agent using MachineConfigs 367 | 368 | The following files were extracted from the `qemu-guest-agent` RPM, base64 encoded and added to the ignition portion of the Machine Config. 369 | 370 | * /etc/qemu-ga/fsfreeze-hook 371 | * /etc/sysconfig/qemu-ga 372 | * /etc/udev/rules.d/99-qemu-guest-agent.rules 373 | * /usr/bin/qemu-ga 374 | 375 | Since the `/usr` filesystem on RHCOS is mounted read-only, the `qemu-ga` binary was placed in `/opt/qemu-guest-agent/bin/qemu-ga`. Left alone, the `qemu-guest-agent` service will fail to start because the `qemu-ga` binary does not have the appropriate SELinux contexts. To work around this, an additional service named `qemu-guest-agent-selinux` is added to force the appropriate contexts before the `qemu-guest-agent` service starts. Both services are added via the `systemd` portion of the ignition config. 376 | 377 | To add the `qemu-guest-agent` service to your worker nodes, simply run the following command: 378 | 379 | ```console 380 | oc create -f 50-worker-qemu-guest-agent.yaml 381 | ``` 382 | 383 | When applied, the Machine Config Operator will perform a rolling reboot of all worker nodes in your cluster. 384 | 385 | Similarly, the `qemu-guest-agent` service can be applied to your master nodes using the following command: 386 | 387 | ```console 388 | oc create -f 50-master-qemu-guest-agent.yaml 389 | ``` 390 | 391 | # Installing OpenShift Container Storage (OCS) 392 | 393 | This guide will show you how to deploy OpenShift Container Storage (OCS) using a bare metal methodology and local storage. 394 | 395 | ## Requirements 396 | 397 | For typical deployments, OCS will require three dedicated worker nodes with the following VM specs: 398 | 399 | * 48GB RAM 400 | * 12 vCPU 401 | * 1 OSD Disk (300Gi) 402 | * 1 MON Disk (10Gi) 403 | 404 | The OSD disks can really be any size but 300Gi is used in this example. 
Also, an example inventory file (`ocs/inventory-example-ocs.yaml`) that shows how to add multiple disks to a worker node is included in the `ocs` directory of this repository. 405 | 406 | ## Labeling Nodes 407 | 408 | Before we begin an installation, we need to label our OCS nodes with the label `cluster.ocs.openshift.io/openshift-storage`. Label each node with the following command: 409 | 410 | ```console 411 | $ oc label node workerX.rhv-upi.ocp.pwc.umbrella.local cluster.ocs.openshift.io/openshift-storage='' 412 | ``` 413 | 414 | ## Deploying OCS and Local Storage Operator 415 | 416 | OCS will use the default storage class (typically `gp2` in AWS and VMware deployments) to create the PVCs used for the OSD and MON disks. Since our RHV deployment does not have an existing storage class, we will use the Local Storage Operator to create two storage classes with PVs backed by block devices on our OCS nodes. 417 | 418 | ### Deploying OCS Operator 419 | 420 | To deploy the OCS operator, run the following command: 421 | 422 | ```console 423 | $ oc create -f ocs/ocs-operator.yaml 424 | ``` 425 | 426 | ### Deploying the Local Storage Operator 427 | 428 | To deploy the Local Storage operator, run the following command (note that this will install the Local Storage Operator and supporting operators in the same namespace as the OCS operator): 429 | 430 | ```console 431 | $ oc create -f ocs/localstorage-operator.yaml 432 | ``` 433 | 434 | ### Verifying 435 | 436 | To verify the operators were successfully installed, run the following: 437 | 438 | ```console 439 | $ oc get csv -n openshift-storage 440 | NAME DISPLAY VERSION REPLACES PHASE 441 | awss3operator.1.0.1 AWS S3 Operator 1.0.1 awss3operator.1.0.0 Succeeded 442 | local-storage-operator.4.2.15-202001171551 Local Storage 4.2.15-202001171551 Succeeded 443 | ocs-operator.v4.2.1 OpenShift Container Storage 4.2.1 Succeeded 444 | ``` 445 | 446 | You should see phase `Succeeded` for all operators. 447 | 448 | ## Creating Storage Classes for OSD and MON Disks 449 | 450 | Next, we will create two storage classes using the Local Storage Operator: one for the OSD disks and another for the MON disks. Two storage classes are used as the OSDs require `volumeMode: block` and the MONs require `volumeMode: filesystem`. 451 | 452 | Log in to your worker nodes over SSH and verify the locations of your block devices (this can be done with the `lsblk` command). In this example, the OSD disks on each node are located at `/dev/sdb` and the MON disks are located at `/dev/sdc`. 453 | 454 | Modify the `ocs/storageclass-mon.yaml` and `ocs/storageclass-osd.yaml` files to suit your environment. Pay special attention to the `nodeSelectors` and `devicePaths` fields. 455 | 456 | Once you have the right values, create the storage classes as follows: 457 | 458 | ```console 459 | $ oc create -f ocs/storageclass-mon.yaml 460 | ``` 461 | 462 | ```console 463 | $ oc create -f ocs/storageclass-osd.yaml 464 | ``` 465 | 466 | To verify, check for the appropriate storage classes and persistent volumes as follows (output may vary slightly): 467 | 468 | ```console 469 | $ oc get sc 470 | NAME PROVISIONER AGE 471 | localstorage-ocs-mon-sc kubernetes.io/no-provisioner 49m 472 | localstorage-ocs-osd-sc kubernetes.io/no-provisioner 49m 473 | ``` 474 | 475 | ```console 476 | $ oc get pv 477 | NAME CAPACITY ACCESS MODES RECLAIM POLICY ... STORAGECLASS REASON AGE 478 | local-pv-1e5a8670 300Gi RWO Delete ... localstorage-ocs-osd-sc 6h53m 479 | local-pv-60e5cc95 300Gi RWO Delete ... 
localstorage-ocs-osd-sc 6h53m 480 | local-pv-bc51d61e 300Gi RWO Delete ... localstorage-ocs-osd-sc 6h53m 481 | local-pv-6b8a2749 10Gi RWO Delete ... localstorage-ocs-mon-sc 6h53m 482 | local-pv-709ff523 10Gi RWO Delete ... localstorage-ocs-mon-sc 6h53m 483 | local-pv-a8913854 10Gi RWO Delete ... localstorage-ocs-mon-sc 6h53m 484 | ``` 485 | 486 | ## Provisioning OCS Cluster 487 | 488 | Modify the file `ocs/storagecluster.yaml` and adjust the storage requests accordingly. These requests must match the underlying PV sizes in the corresponding storage class. 489 | 490 | To create the cluster, run the following command: 491 | 492 | ```console 493 | $ oc create -f ocs/storagecluster.yaml 494 | ``` 495 | 496 | The installation process should take approximately 5 minutes. Run `oc get pods -n openshift-storage -w` to observe the process. 497 | 498 | To verify the installation is complete, run the following: 499 | 500 | ```console 501 | $ oc get storagecluster storagecluster -ojson -n openshift-storage | jq .status 502 | { 503 | "cephBlockPoolsCreated": true, 504 | "cephFilesystemsCreated": true, 505 | "cephObjectStoreUsersCreated": true, 506 | "cephObjectStoresCreated": true, 507 | ... 508 | } 509 | ``` 510 | 511 | All fields should be marked `true`. 512 | 513 | ## Adding Storage for OpenShift Registry 514 | 515 | OCS provides RBD and CephFS backed storage classes for use within the cluster. We can leverage the CephFS storage class to create a PVC for the OpenShift registry. 516 | 517 | Modify the `ocs/registry-cephfs-pvc.yaml` file and adjust the size of the claim. Then run the following to create the PVC: 518 | 519 | ```console 520 | $ oc create -f ocs/registry-cephfs-pvc.yaml 521 | ``` 522 | 523 | To reconfigure the registry to use our new PVC, run the following: 524 | 525 | ```console 526 | $ oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"managementState":"Managed","storage":{"pvc":{"claim":"registry"}}}}' 527 | ``` 528 | 529 | 530 | # Retiring 531 | 532 | Playbooks are also provided to remove VMs from RHV and DNS entries from IdM. To do this, run the retirement playbook as follows: 533 | 534 | ```console 535 | $ ansible-playbook -i inventory.yml --ask-vault-pass retire.yml 536 | ``` 537 | --------------------------------------------------------------------------------