├── .gitignore ├── README.md ├── inventory ├── group_vars │ ├── all │ └── computes ├── host_vars │ ├── compute01 │ ├── compute02 │ └── controller └── hosts ├── linuxboot-ci.yml └── roles ├── compute ├── files │ └── var │ │ └── lib │ │ └── kvm │ │ └── templates │ │ └── ubuntu.xml ├── handlers │ └── main.yml └── tasks │ ├── kvm.yml │ ├── main.yml │ ├── munge.yml │ ├── nfs.yml │ ├── packages.yml │ └── slurm.yml ├── controller ├── files │ └── usr │ │ └── local │ │ └── bin │ │ └── kvmjob.sh ├── handlers │ └── main.yml ├── tasks │ ├── api.yml │ ├── dhcp.yml │ ├── main.yml │ ├── munge.yml │ ├── nfs.yml │ ├── scripts.yml │ └── slurm.yml └── templates │ ├── etc │ ├── default │ │ └── linuxboot-ci-api.j2 │ ├── dnsmasq.conf.j2 │ ├── ethers.j2 │ ├── exports.j2 │ ├── slurm-llnl │ │ └── slurm.conf.j2 │ └── systemd │ │ └── system │ │ └── linuxboot-ci-api.service.j2 │ └── lib │ └── systemd │ └── system │ └── dnsmasq.service.j2 └── ubuntu-common ├── tasks ├── dns.yml ├── env.yml ├── main.yml ├── ssh.yml ├── tools.yml └── users.yml └── templates ├── .ssh └── config └── etc ├── apt └── apt.conf.j2 ├── environment ├── hostname.j2 ├── hosts.j2 ├── resolvconf └── resolv.conf.d │ └── tail.j2 └── sudoers.d └── linuxboot.j2 /.gitignore: -------------------------------------------------------------------------------- 1 | *.retry 2 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Linuxboot CI 2 | 3 | Linuxboot CI is a continuous integration platform dedicated to building and testing Linuxboot firmware. This 4 | repository contains deployment automation tools for the Linuxboot CI platform. 5 | 6 | ## Prerequisites 7 | 8 | In order to deploy Linuxboot CI you need at least two machines on a single subnet. The first machine will be the controller node and the other one will be a compute node (job runner). 9 | 10 | __Note.__ 11 | > Currently, deployment works only on Debian-based distributions and is tested on Ubuntu 16.04 only. 12 | 13 | The controller node can be either a bare metal server or a virtual machine, but compute nodes should be bare metal servers because they use virtualization to sandbox jobs. 14 | 15 | Additionally, you need a deployer host to orchestrate the Linuxboot CI deployment. This host can be any machine with 16 | [Ansible](https://www.ansible.com/) installed (at least version 2.4). This host must be able to reach the controller node using SSH. 17 | 18 | Also: 19 | 20 | * Controller and Compute hosts must have a sudoer account that does not require a password 21 | * Controller and Compute hosts must be reachable over SSH without a password from the Deployer host 22 | 23 | __Sample infrastructure__ 24 | 25 | ``` 26 | |-----------------| |-----------------| 27 | | Controller | | Compute | 28 | |-----------------| |-----------------| 29 | | 10.0.3.2 | 10.0.3.4 30 | | | 31 | ----------------------------------------------------------------- 10.0.3.0/24 32 | | 33 | | 10.0.3.100 34 | |-----------------| 35 | | Deployer | 36 | |-----------------| 37 | ``` 38 | 39 | ## Prepare your configuration 40 | 41 | Clone this repository on the Deployer host and edit the configuration files in the `inventory` folder: 42 | 43 | * Host usernames 44 | * Host IP addresses 45 | 46 | In the default configuration, the Controller node is a DHCP server for the compute nodes. If you already have a DHCP server on your network or if you have set up static addressing, you can disable it by setting the variable `dhcp["enabled"]` to `False`.
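For example, to disable the built-in DHCP server, override the controller's host variables along these lines (a minimal sketch reusing the `dhcp` block already defined in `inventory/host_vars/controller`; adjust the interface and range to your network):

```
# inventory/host_vars/controller (excerpt)
dhcp:
  enabled: False        # set to True to let the controller hand out DHCP leases
  interface: enp12s0    # NIC facing the compute subnet
  range:
    start: 10.0.3.100
    end: 10.0.3.240
  lease_time: 180m
```

When `dhcp["enabled"]` is `False`, the dnsmasq-related tasks of the `controller` role are simply skipped.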
47 | 48 | ## Run deployment 49 | 50 | Once the configuration is set up for your environment, you are good to go. On the Deployer, from the root of this 51 | repository tree, run 52 | 53 | ``` 54 | $ ansible-playbook -i inventory/hosts linuxboot-ci.yml 55 | ``` 56 | 57 | When the command completes, the platform is up and running. 58 | 59 | ## Run your first job 60 | 61 | It's time to run your first job. You need a git repository containing a CI descriptor `.ci.yml`. The sample repository [linuxboot/linuxboot-ci-test](https://github.com/linuxboot/linuxboot-ci-test) is used to perform 62 | some tests. Each branch in this repository is a different test case. 63 | 64 | Interaction with the CI platform is achieved using a REST API. The API specification can be found in 65 | [linuxboot/linuxboot-ci-api](https://github.com/linuxboot/linuxboot-ci-api). 66 | 67 | __Example__ 68 | 69 | Submit a job using the `curl` client 70 | 71 | ``` 72 | curl -i -X POST "http://:1234/v1/jobs" -H "X-Auth-Secret: ..." -d ' 73 | { 74 | "repository": { 75 | "url": "https://github.com/linuxboot/linuxboot-ci-test.git" 76 | } 77 | } 78 | ' 79 | ``` 80 | 81 | ## Platform Architecture 82 | 83 | __To Do__ 84 | > The global architecture of the platform should be described here 85 | -------------------------------------------------------------------------------- /inventory/group_vars/all: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | # 4 | # Set to True if an HTTP proxy is needed to access the Internet 5 | # 6 | behind_proxy: False 7 | 8 | # 9 | # HTTP Proxy URL if needed 10 | # 11 | #proxy_url: 12 | 13 | # 14 | # If an SSH proxy command is needed to reach the inventory hosts, 15 | # set the correct IP address in the next two lines; otherwise comment them out 16 | # 17 | ssh_proxy_addr: 149.13.123.7 18 | ansible_ssh_common_args: "-o ProxyCommand='ssh -W %h:%p -q ubuntu@{{ ssh_proxy_addr }}'" 19 | 20 | 21 | network: 22 | gateway: 10.0.3.1 23 | netmask: 255.255.255.0 24 | 25 | slurm: 26 | controller: 27 | name: controller 28 | listen_addr: 10.0.3.2 29 | compute01: 30 | name: compute01 31 | addr: 10.0.3.4 32 | mac: 08:9e:01:fc:74:e2 33 | compute02: 34 | name: compute02 35 | addr: 10.0.3.6 36 | mac: 08:9e:01:fe:fc:8c 37 | -------------------------------------------------------------------------------- /inventory/group_vars/computes: -------------------------------------------------------------------------------- 1 | --- 2 | -------------------------------------------------------------------------------- /inventory/host_vars/compute01: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | ansible_user: ubuntu 4 | ansible_host: 10.0.3.4 5 | 6 | linux_hostname: compute01 7 | -------------------------------------------------------------------------------- /inventory/host_vars/compute02: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | ansible_user: ubuntu 4 | ansible_host: 10.0.3.6 5 | 6 | linux_hostname: compute02 7 | -------------------------------------------------------------------------------- /inventory/host_vars/controller: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | ansible_user: ubuntu 4 | ansible_host: 10.0.3.2 5 | 6 | linux_hostname: controller 7 | 8 | dhcp: 9 | enabled: True 10 | interface: enp12s0 11 | range: 12 | start: 10.0.3.100 13 | end: 10.0.3.240 14 | lease_time: 180m 15 | -------------------------------------------------------------------------------- /inventory/hosts:
-------------------------------------------------------------------------------- 1 | [controllers] 2 | controller 3 | 4 | [computes] 5 | compute01 6 | compute02 7 | 8 | [all:children] 9 | controllers 10 | computes 11 | -------------------------------------------------------------------------------- /linuxboot-ci.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - hosts: all 4 | roles: 5 | - ubuntu-common 6 | 7 | - hosts: controllers 8 | roles: 9 | - controller 10 | 11 | - hosts: computes 12 | roles: 13 | - compute 14 | -------------------------------------------------------------------------------- /roles/compute/files/var/lib/kvm/templates/ubuntu.xml: -------------------------------------------------------------------------------- 1 | 2 | %SLURM_VM_NAME% 3 | %SLURM_VM_UUID% 4 | 16777216 5 | 16777216 6 | 20 7 | 8 | hvm 9 | 10 | 11 | 12 | 13 | 14 | 15 | destroy 16 | restart 17 | destroy 18 | 19 | /usr/bin/kvm 20 | 21 | 22 | 23 | 24 |
25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 |
33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | 41 | -------------------------------------------------------------------------------- /roles/compute/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: restart munge 4 | service: 5 | name: munge 6 | state: restarted 7 | become: yes 8 | 9 | - name: restart slurmd 10 | service: 11 | name: slurmd 12 | state: restarted 13 | become: yes 14 | -------------------------------------------------------------------------------- /roles/compute/tasks/kvm.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Install KVM and Virtualization tools 4 | apt: 5 | name: "{{ item }}" 6 | state: present 7 | update_cache: yes 8 | become: yes 9 | with_items: 10 | - qemu-kvm 11 | - qemu-block-extra 12 | - qemu-system-common 13 | - qemu-system-x86 14 | - qemu-utils 15 | - ipxe-qemu 16 | - libvirt-bin 17 | - libguestfs-tools 18 | 19 | - name: Configure user linuxboot for KVM 20 | user: 21 | name: linuxboot 22 | shell: /bin/bash 23 | groups: kvm,libvirtd 24 | append: true 25 | become: yes 26 | 27 | - name: Ensure directories for KVM data exists 28 | file: 29 | path: "{{ item }}" 30 | state: directory 31 | mode: 0755 32 | owner: "linuxboot" 33 | group: "ci" 34 | become: yes 35 | with_items: 36 | - /var/lib/kvm 37 | - /var/lib/kvm/images 38 | - /var/lib/kvm/templates 39 | - /var/lib/kvm/vms 40 | 41 | - name: Copy KVM templates 42 | template: 43 | src: "{{ item }}" 44 | dest: "/var/lib/kvm/templates/" 45 | mode: 0444 46 | owner: root 47 | group: root 48 | with_fileglob: 49 | - var/lib/kvm/templates/*.xml 50 | become: yes 51 | -------------------------------------------------------------------------------- /roles/compute/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - import_tasks: packages.yml 4 | tags: 5 | - packages 6 | 7 | - import_tasks: munge.yml 8 | tags: 9 | - munge 10 | 11 | - import_tasks: slurm.yml 12 | tags: 13 | - slurm 14 | 15 | - import_tasks: kvm.yml 16 | tags: 17 | - kvm 18 | 19 | - import_tasks: nfs.yml 20 | tags: 21 | - nfs 22 | -------------------------------------------------------------------------------- /roles/compute/tasks/munge.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Install Munge 4 | apt: 5 | name: munge 6 | state: present 7 | update_cache: yes 8 | become: yes 9 | 10 | - name: Get local key file stats 11 | stat: 12 | path: /etc/munge/munge.key 13 | register: keyfile_local 14 | become: yes 15 | 16 | - name: Get controller key file stats 17 | stat: 18 | path: /etc/munge/munge.key 19 | register: keyfile_ctl 20 | become: yes 21 | delegate_to: controller 22 | 23 | - name: Temporary "read for all" permission on original key file 24 | file: 25 | path: /etc/munge/munge.key 26 | state: file 27 | mode: 0444 28 | owner: munge 29 | group: munge 30 | become: yes 31 | delegate_to: controller 32 | when: keyfile_local.stat.checksum != keyfile_ctl.stat.checksum 33 | 34 | - name: Ensure secret key is in sync with controller 35 | synchronize: 36 | src: /etc/munge/munge.key 37 | dest: /etc/munge/munge.key 38 | delegate_to: controller 39 | become: yes 40 | notify: restart munge 41 | when: keyfile_local.stat.checksum != keyfile_ctl.stat.checksum 42 | 43 | - name: Set good permission on key file 44 | file: 45 | path: /etc/munge/munge.key 46 | state: file 47 | mode: 0400 48 | owner: munge 49 | group: munge 50 | become: 
yes 51 | when: keyfile_local.stat.checksum != keyfile_ctl.stat.checksum 52 | 53 | - name: Restore good permission on original key file 54 | file: 55 | path: /etc/munge/munge.key 56 | state: file 57 | mode: 0400 58 | owner: munge 59 | group: munge 60 | become: yes 61 | delegate_to: controller 62 | when: keyfile_local.stat.checksum != keyfile_ctl.stat.checksum 63 | -------------------------------------------------------------------------------- /roles/compute/tasks/nfs.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Install NFS client 4 | apt: 5 | name: "{{ item }}" 6 | state: present 7 | update_cache: yes 8 | become: yes 9 | with_items: 10 | - nfs-common 11 | 12 | - name: Ensure directory for job data exists 13 | file: 14 | path: "/var/lib/ci" 15 | state: directory 16 | mode: 0755 17 | owner: "linuxboot" 18 | group: "ci" 19 | become: yes 20 | 21 | - name: Mount NFS Folder 22 | mount: 23 | path: /var/lib/ci 24 | src: "{{ slurm['controller']['listen_addr'] }}:/var/lib/ci" 25 | fstype: nfs 26 | opts: auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 27 | state: mounted 28 | become: yes 29 | -------------------------------------------------------------------------------- /roles/compute/tasks/packages.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Install some needed packages using apt 4 | apt: 5 | name: "{{ item }}" 6 | state: present 7 | update_cache: yes 8 | become: yes 9 | with_items: 10 | - git 11 | - jq 12 | - python-pip 13 | 14 | - name: Install some needed packages using pip 15 | pip: 16 | name: "{{ item }}" 17 | become: yes 18 | with_items: 19 | - yq 20 | -------------------------------------------------------------------------------- /roles/compute/tasks/slurm.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Install Slurm packages 4 | apt: 5 | name: "{{ item }}" 6 | state: present 7 | update_cache: yes 8 | become: yes 9 | with_items: 10 | - slurmd 11 | 12 | - name: Slurmd configuration file 13 | synchronize: 14 | src: /etc/slurm-llnl/slurm.conf 15 | dest: /etc/slurm-llnl/slurm.conf 16 | delegate_to: controller 17 | become: yes 18 | notify: restart slurmd 19 | 20 | - name: Enable and ensure started service slurmd 21 | systemd: 22 | name: slurmd 23 | enabled: yes 24 | state: started 25 | daemon_reload: yes 26 | become: yes 27 | -------------------------------------------------------------------------------- /roles/controller/files/usr/local/bin/kvmjob.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # This script is executed on the compute node 4 | #SBATCH -p compile # partition (queue) 5 | #SBATCH -N 1 # number of nodes 6 | #SBATCH -n 1 # number of cores 7 | #SBATCH --mem 100 # memory pool for all cores 8 | #SBATCH -t 0-4:00 # time (D-HH:MM) 9 | #SBATCH -o /var/lib/ci/%N.job.%j.out # STDOUT 10 | #SBATCH -e /var/lib/ci/%N.job.%j.err # STDERR 11 | 12 | set -e 13 | set -x 14 | 15 | export LANGUAGE=en_US.UTF-8 16 | export LC_ALL=en_US.UTF-8 17 | 18 | if [ -z "$SLURM_JOB_ID" ] 19 | then 20 | { 21 | echo "WARNING : Job not launched by Slurm" 22 | echo " If you are doing this for testing for testing purpose it can be fine" 23 | echo " If not, there is maybe something wrong somewhere !" 
24 | } >&2 25 | 26 | SLURM_JOB_ID=$(cat /dev/urandom | tr -cd 'a-f0-9' | head -c 32) 27 | fi 28 | 29 | git_url=$1 30 | git_branch=$2 31 | 32 | machine_name="ubuntu" 33 | 34 | vmTemplate="/var/lib/kvm/templates/$machine_name.xml" 35 | sourceImage="/var/lib/kvm/images/$machine_name.img" 36 | 37 | jobDir="/var/lib/ci/$SLURM_JOB_ID" 38 | sourcesDir=${jobDir}/sources 39 | artifactsDir=${jobDir}/artifacts 40 | vmImage="/var/lib/kvm/vms/${SLURM_JOB_ID}.img" 41 | vmConfig="${jobDir}/vm.xml" 42 | vmName="job-$SLURM_JOB_ID" 43 | vmUser="sds" 44 | vmIP= 45 | 46 | mkdir -p ${jobDir} 47 | 48 | # 49 | # Function call by a trap when script exits 50 | # 51 | cleanupAndExit() { 52 | if [ $(virsh list | grep ${vmName} | wc -l) -eq 1 ] ; then 53 | log "Destroying virtual machine..." 54 | virsh destroy ${vmName} >&2 55 | fi 56 | 57 | log "Cleanup..." 58 | rm -f ${vmImage} 59 | 60 | if [ ! -e ${jobDir}/status ] ; then 61 | echo 1 > ${jobDir}/status 62 | fi 63 | 64 | local status=$(cat ${jobDir}/status) 65 | local message=$(cat ${jobDir}/error_msg) 66 | 67 | if [ "${status}" -ne 0 ] ; then 68 | log "" 69 | if [ -n "${message}" ] ; then 70 | log "${message}" 71 | fi 72 | log "Build failed with status code ${status}" 73 | else 74 | log "Success !" 75 | fi 76 | } 77 | 78 | trap cleanupAndExit EXIT 79 | 80 | cp ${sourceImage} ${vmImage} 81 | 82 | log() { 83 | echo "$(date '+[%Y-%m-%d %H:%M:%S]') ${1}" 84 | } 85 | 86 | # 87 | # Put SSH public key to be able to log into the VM and set the user as a sudoer 88 | # 89 | configureImage() { 90 | local mountDir=/tmp/mnt.$SLURM_JOB_ID 91 | mkdir -p ${mountDir} 92 | sudo guestmount -a ${vmImage} -m /dev/sda1 ${mountDir} 93 | sudo mkdir -p ${mountDir}/home/${vmUser}/.ssh 94 | 95 | local sudoersFile=$(mktemp) 96 | echo "${vmUser} ALL=(ALL) NOPASSWD: ALL" >> ${sudoersFile} 97 | sudo mv ${sudoersFile} ${mountDir}/etc/sudoers.d/${vmUser} 98 | sudo chown -f root:root ${mountDir}/etc/sudoers.d/${vmUser} 99 | 100 | sudo bash -c "cat $HOME/.ssh/id_rsa.pub >> ${mountDir}/home/${vmUser}/.ssh/authorized_keys" 101 | sudo chown -Rf 1000:1000 ${mountDir}/home/${vmUser}/.ssh 102 | sudo guestunmount ${mountDir} 103 | sudo rm -rf ${mountDir} 104 | } 105 | 106 | # 107 | # Generate XML configuration for virsh from a template 108 | # 109 | generateVMConfiguration() { 110 | cat /var/images/ubuntu.xml \ 111 | | sed "s/%SLURM_VM_NAME%/${vmName}/" \ 112 | | sed "s/%SLURM_VM_UUID%/$(uuidgen)/" \ 113 | | sed "s/%SLURM_VM_IMAGE%/$(echo ${vmImage} | sed 's/\//\\\//g')/" > ${vmConfig} 114 | } 115 | 116 | # 117 | # Find IP address for VM from its name 118 | # 119 | # $1 - Machine name 120 | # 121 | findVMIP() { 122 | local name="$1" 123 | arp -an | grep "`virsh dumpxml ${name} | grep "mac address" | sed "s/.*'\(.*\)'.*/\1/g"`" | awk '{ gsub(/[\(\)]/,"",$2); print $2 }' 124 | } 125 | 126 | # 127 | # Run virtual machine from XML file descriptor 128 | # 129 | runVM() { 130 | virsh create ${vmConfig} >&2 131 | } 132 | 133 | # 134 | # Function call by a trap when script exits 135 | # 136 | cleanupAndExit() { 137 | if [ $(virsh list | grep ${vmName} | wc -l) -eq 1 ] ; then 138 | log "Destroying virtual machine..." 139 | virsh destroy ${vmName} >&2 140 | fi 141 | 142 | log "Cleanup..." 143 | rm -f ${vmImage} 144 | 145 | if [ ! 
-e ${jobDir}/status ] ; then 146 | echo 1 > ${jobDir}/status 147 | fi 148 | 149 | local status=$(cat ${jobDir}/status) 150 | local message=$(cat ${jobDir}/error_msg) 151 | 152 | if [ "${status}" -ne 0 ] ; then 153 | log "" 154 | if [ -n "${message}" ] ; then 155 | log "${message}" 156 | fi 157 | log "Build failed with status code ${status}" 158 | else 159 | log "Success !" 160 | fi 161 | } 162 | 163 | # 164 | # Wait for machine to get an IP address. When the IP address is found, it is written 165 | # in file ${jobDir}/vm_ip. If this file is empty when the function returns, it probably 166 | # means there's an issue somewhere 167 | # 168 | waitForVMToGetIP() { 169 | local max_retry=60 170 | local vmIP="" 171 | 172 | while [ -z "${vmIP}" ] && [ "${max_retry}" -gt 0 ] 173 | do 174 | sleep 5 175 | vmIP=$(findVMIP ${vmName}) 176 | max_retry=$((max_retry-1)) 177 | done 178 | 179 | echo ${vmIP} > ${jobDir}/vm_ip 180 | } 181 | 182 | # 183 | # Wait for the VM to be reachable on SSH port 184 | # 185 | waitForVMToBeReachable() { 186 | local is_ssh_running= 187 | local max_retry=60 188 | 189 | while [ -z "${is_ssh_running}" ] 190 | do 191 | sleep 5 192 | is_ssh_running=$(nmap -p22 $vmIP | grep -i open) 193 | done 194 | 195 | if [ -n "$is_ssh_running" ] 196 | then 197 | echo true > ${jobDir}/vm_ssh 198 | fi 199 | } 200 | 201 | log "=== Initialize building sandbox ==============================================" 202 | 203 | log "Setting up virtual machine configuration..." 204 | 205 | configureImage 206 | generateVMConfiguration 207 | 208 | log "Running virtual machine..." 209 | 210 | runVM 211 | 212 | log "Waiting for virtual machine network..." 213 | 214 | waitForVMToGetIP 215 | 216 | vmIP=$(cat ${jobDir}/vm_ip) 217 | 218 | if [ "$vmIP" == "" ]; then 219 | # We got an error the VM doesn't have networking 220 | log "ERROR : No IP detected" 221 | exit 1 222 | fi 223 | 224 | log "Waiting for virtual machine accessible from SSH..." 225 | 226 | waitForVMToBeReachable 227 | 228 | if [ "$(cat ${jobDir}/vm_ssh)" != "true" ] 229 | then 230 | # ssh server is not running 231 | log "ERROR - SSH server not running in VM" 232 | exit 1 233 | fi 234 | 235 | ### Get sources from Git 236 | 237 | log "Fetching source code from git repository '${git_url}'..." 238 | 239 | if [ -n "${git_branch}" ]; then 240 | git_branch="-b ${git_branch}" 241 | fi 242 | 243 | git clone --depth 1 ${git_branch} ${git_url} ${sourcesDir} 244 | 245 | ### Check Repository have de CI file descriptor 246 | 247 | log "Reading CI descriptor .ci.yml" 248 | 249 | if [ ! -f ${sourcesDir}/.ci.yml ] ; then 250 | echo 1 > status 251 | echo "No build descriptor .ci.yml can be found in the source code repository." 
> error_msg 252 | destroyVM 253 | exit 0 254 | fi 255 | 256 | ### Generate bash script from YAML descriptor 257 | cat ${sourcesDir}/.ci.yml | yq .script | jq -r .[] > ${sourcesDir}/.ci.sh 258 | 259 | ### Copy source git repository into sandbox 260 | scp -r ${sourcesDir} ${vmUser}@${vmIP}: 261 | 262 | ### Install additionnal tools into sandbox 263 | 264 | log "Install additionnal tools into sandbox" 265 | 266 | ssh -t ${vmUser}@${vmIP} >&2 <<-'EOF' 267 | set -x 268 | sudo apt update 269 | sudo apt install -y moreutils 270 | EOF 271 | 272 | log "=== Running scripts ==========================================================" 273 | 274 | echo "<% USER LOG PLACEHOLDER %>" 275 | 276 | ssh -t ${vmUser}@${vmIP} >&2 <<-'EOF' 277 | set -x 278 | mkdir ci 279 | touch ci/status 280 | touch ci/error_msg 281 | touch ci/log 282 | EOF 283 | 284 | # 285 | # TODO Do something smarter to share files between host and VM 286 | # 287 | GetLogFileFromVM() { 288 | while true ; do 289 | scp ${vmUser}@${vmIP}:ci/log ${jobDir} < /dev/null > /dev/null 290 | sleep 10 291 | done 292 | } 293 | GetLogFileFromVM & 294 | 295 | 296 | ### Run build 297 | ssh -t ${vmUser}@${vmIP} >&2 <<-'EOF' 298 | set -x 299 | cd sources 300 | 301 | bash .ci.sh > ../ci/log 2>&1 302 | 303 | status_code=$? 304 | echo ${status_code} > ~/ci/status 305 | 306 | exit 0 307 | EOF 308 | 309 | log "=== Finish job ===============================================================" 310 | 311 | log "Reading build information from sandbox..." 312 | 313 | ### Get job status and log files and stdout stderr outputs 314 | scp ${vmUser}@${vmIP}:ci/* ${jobDir} 315 | 316 | status_code=$(cat ${jobDir}/status) 317 | if [[ -n "${status_code}" && "${status_code}" -ne 0 ]] ; then 318 | log "ERROR : Job failed with status code ${status_code}" > ${jobDir}/error_msg 319 | exit ${status_code} 320 | fi 321 | 322 | ### Extract build artifacts from the VM 323 | 324 | log "Extract build artifacts from sandbox..." 325 | 326 | mkdir ${artifactsDir} 327 | 328 | yaml_artifacts=$(cat ${sourcesDir}/.ci.yml | yq .artifacts) 329 | if [ "${yaml_artifacts}" != "null" ] ; then 330 | for artifact in $(echo ${yaml_artifacts} | jq -r .[]) ; do 331 | # Check artifact exists 332 | if [ "$(ssh ${vmUser}@${vmIP} ls sources/${artifact} | wc -l)" -eq 0 ] ; then 333 | echo "ERROR : Job failed because artifact \"${artifact}\" cannot be found" > ${jobDir}/error_msg 334 | echo 1 > ${jobDir}/status 335 | exit 1 336 | fi 337 | # Copy artifact from VM 338 | log "Getting build artifacts '${artifact}'..." 
339 | scp ${vmUser}@${vmIP}:sources/${artifact} ${artifactsDir}/${artifact} 340 | done 341 | fi 342 | -------------------------------------------------------------------------------- /roles/controller/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: restart dnsmasq 4 | service: 5 | name: dnsmasq 6 | state: restarted 7 | become: yes 8 | 9 | - name: restart munge 10 | service: 11 | name: munge 12 | state: restarted 13 | become: yes 14 | 15 | - name: restart slurmctld 16 | service: 17 | name: slurmctld 18 | state: restarted 19 | become: yes 20 | 21 | - name: restart nfs-kernel-server 22 | service: 23 | name: nfs-kernel-server 24 | state: restarted 25 | become: yes 26 | 27 | - name: restart linuxboot-ci-api 28 | service: 29 | name: linuxboot-ci-api 30 | state: restarted 31 | become: yes 32 | -------------------------------------------------------------------------------- /roles/controller/tasks/api.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Install Linuxboot CI API 4 | get_url: 5 | url: https://github.com/linuxboot/linuxboot-ci-api/releases/download/0.1.0/linuxboot-ci-api-amd64-linux 6 | dest: /usr/local/bin/linuxboot-ci-api 7 | mode: 0555 8 | owner: root 9 | group: root 10 | checksum: sha256:8031d5089c44e0f172a40e7f279037d06d8ce428fb514b7454ab05b693c59b97 11 | become: yes 12 | notify: restart linuxboot-ci-api 13 | 14 | - name: Systemd configuration for Linuxboot CI API 15 | template: 16 | src: "{{ item }}.j2" 17 | dest: "/{{ item }}" 18 | owner: root 19 | group: root 20 | mode: 0644 21 | become: yes 22 | with_items: 23 | - etc/default/linuxboot-ci-api 24 | - etc/systemd/system/linuxboot-ci-api.service 25 | notify: restart linuxboot-ci-api 26 | 27 | - name: Enable service linuxboot-ci-api 28 | systemd: 29 | name: linuxboot-ci-api 30 | enabled: yes 31 | state: started 32 | daemon_reload: yes 33 | become: yes 34 | -------------------------------------------------------------------------------- /roles/controller/tasks/dhcp.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Install Dnsmasq 4 | apt: 5 | name: dnsmasq 6 | state: present 7 | update_cache: yes 8 | become: yes 9 | 10 | - name: Ensure Dnsmasq log file exists 11 | copy: 12 | content: "" 13 | dest: /var/log/dnsmasq.log 14 | force: no 15 | owner: dnsmasq 16 | group: root 17 | mode: 0644 18 | become: yes 19 | 20 | - name: Create TFTP root directory 21 | file: 22 | path: /var/tftp 23 | state: directory 24 | mode: 0755 25 | owner: root 26 | group: root 27 | become: yes 28 | 29 | - name: Update Dnsmasq configuration 30 | template: 31 | src: etc/dnsmasq.conf.j2 32 | dest: /etc/dnsmasq.conf 33 | owner: root 34 | group: root 35 | mode: 0644 36 | become: yes 37 | notify: restart dnsmasq 38 | 39 | - name: Update Dnsmasq service configuration 40 | template: 41 | src: lib/systemd/system/dnsmasq.service.j2 42 | dest: /lib/systemd/system/dnsmasq.service 43 | owner: root 44 | group: root 45 | mode: 0644 46 | become: yes 47 | notify: restart dnsmasq 48 | 49 | - name: Enable service dnsmasq 50 | systemd: 51 | name: dnsmasq 52 | enabled: yes 53 | state: started 54 | daemon_reload: yes 55 | become: yes 56 | 57 | - name: Update /etc/ethers configuration 58 | template: 59 | src: etc/ethers.j2 60 | dest: /etc/ethers 61 | owner: root 62 | group: root 63 | mode: 0644 64 | become: yes 65 | notify: restart dnsmasq 66 | 
-------------------------------------------------------------------------------- /roles/controller/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - import_tasks: dhcp.yml 4 | tags: 5 | - dhcp 6 | when: dhcp['enabled'] 7 | 8 | - import_tasks: munge.yml 9 | tags: 10 | - munge 11 | 12 | - import_tasks: slurm.yml 13 | tags: 14 | - slurm 15 | 16 | - import_tasks: scripts.yml 17 | tags: 18 | - scripts 19 | 20 | - import_tasks: nfs.yml 21 | tags: 22 | - nfs 23 | 24 | - import_tasks: api.yml 25 | tags: 26 | - api 27 | -------------------------------------------------------------------------------- /roles/controller/tasks/munge.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Install Munge 4 | apt: 5 | name: munge 6 | state: present 7 | update_cache: yes 8 | become: yes 9 | 10 | - name: Check if secret key have been initialized 11 | stat: 12 | path: /etc/munge/key_initialized 13 | register: key_initialized 14 | become: yes 15 | 16 | - name: Generate secret key 17 | shell: | 18 | dd if=/dev/urandom bs=1 count=1024 > /etc/munge/munge.key 19 | touch /etc/munge/key_initialized 20 | when: key_initialized.stat.exists == False 21 | become: yes 22 | notify: restart munge 23 | -------------------------------------------------------------------------------- /roles/controller/tasks/nfs.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Install NFS Server 4 | apt: 5 | name: "{{ item }}" 6 | state: present 7 | update_cache: yes 8 | become: yes 9 | with_items: 10 | - nfs-kernel-server 11 | 12 | - name: Ensure directory for job data exists 13 | file: 14 | path: "/var/lib/ci" 15 | state: directory 16 | mode: 0755 17 | owner: "linuxboot" 18 | group: "ci" 19 | become: yes 20 | 21 | - name: Export NFS folder 22 | template: 23 | src: "{{ item }}.j2" 24 | dest: "/{{ item }}" 25 | owner: root 26 | group: root 27 | mode: 0644 28 | become: yes 29 | with_items: 30 | - etc/exports 31 | notify: restart nfs-kernel-server 32 | -------------------------------------------------------------------------------- /roles/controller/tasks/scripts.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Deploy utility scripts 4 | template: 5 | src: "{{ item }}" 6 | dest: "/usr/local/bin/{{ item | basename | regex_replace('\\.sh','') }}" 7 | mode: 0555 8 | owner: root 9 | group: root 10 | with_fileglob: 11 | - usr/local/bin/*.sh 12 | become: yes 13 | -------------------------------------------------------------------------------- /roles/controller/tasks/slurm.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Install Slurm packages 4 | apt: 5 | name: "{{ item }}" 6 | state: present 7 | update_cache: yes 8 | become: yes 9 | with_items: 10 | - slurmctld 11 | - slurm-client 12 | 13 | - name: Slurmctld configuration file 14 | template: 15 | src: "{{ item }}.j2" 16 | dest: "/{{ item }}" 17 | owner: root 18 | group: root 19 | mode: 0644 20 | become: yes 21 | with_items: 22 | - etc/slurm-llnl/slurm.conf 23 | notify: restart slurmctld 24 | 25 | - name: Ensure slurm data files exists 26 | copy: 27 | content: "" 28 | dest: "{{ item }}" 29 | force: no 30 | owner: slurm 31 | group: slurm 32 | mode: 0644 33 | become: yes 34 | with_items: 35 | - /var/lib/slurm-llnl/slurmctld/accounting.db 36 | - /var/log/slurm_jobcomp.log 37 | notify: restart slurmctld 38 | 
39 | - name: Enable and ensure started service slurmctld 40 | systemd: 41 | name: slurmctld 42 | enabled: yes 43 | state: started 44 | daemon_reload: yes 45 | become: yes 46 | -------------------------------------------------------------------------------- /roles/controller/templates/etc/default/linuxboot-ci-api.j2: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/linuxboot/linuxboot-ci/33a05b954616264a7ed546fbe5227e001cad39fe/roles/controller/templates/etc/default/linuxboot-ci-api.j2 -------------------------------------------------------------------------------- /roles/controller/templates/etc/dnsmasq.conf.j2: -------------------------------------------------------------------------------- 1 | ############################################################################### 2 | # # 3 | # WARNING : Generated file - Do no edit manually # 4 | # # 5 | ############################################################################### 6 | 7 | interface={{ dhcp['interface'] }} 8 | 9 | dhcp-range={{ dhcp['interface'] }},{{ dhcp['range']['start'] }},{{ dhcp['range']['end'] }},{{ network['netmask'] }},{{ dhcp['lease_time'] }} 10 | dhcp-option=3,{{ network['gateway'] }} 11 | 12 | log-queries 13 | log-dhcp 14 | log-facility=/var/log/dnsmasq.log 15 | 16 | server=8.8.8.8 17 | 18 | read-ethers 19 | -------------------------------------------------------------------------------- /roles/controller/templates/etc/ethers.j2: -------------------------------------------------------------------------------- 1 | # Compute Node 1 2 | {{ slurm['compute01']['mac'] }} {{ slurm['compute01']['addr'] }} 3 | 4 | # Compute Node 2 5 | {{ slurm['compute02']['mac'] }} {{ slurm['compute02']['addr'] }} 6 | -------------------------------------------------------------------------------- /roles/controller/templates/etc/exports.j2: -------------------------------------------------------------------------------- 1 | /var/lib/ci {{ slurm['compute01']['addr'] }}(rw,sync,no_subtree_check) 2 | /var/lib/ci {{ slurm['compute02']['addr'] }}(rw,sync,no_subtree_check) 3 | -------------------------------------------------------------------------------- /roles/controller/templates/etc/slurm-llnl/slurm.conf.j2: -------------------------------------------------------------------------------- 1 | # slurm.conf file generated by configurator.html. 2 | # Put this file on all nodes of your cluster. 3 | # See the slurm.conf man page for more information. 
4 | # 5 | ControlMachine={{ slurm['controller']['name'] }} 6 | ControlAddr={{ slurm['controller']['listen_addr'] }} 7 | #BackupController= 8 | #BackupAddr= 9 | # 10 | AuthType=auth/munge 11 | CacheGroups=0 12 | #CheckpointType=checkpoint/none 13 | CryptoType=crypto/munge 14 | #DisableRootJobs=NO 15 | #EnforcePartLimits=NO 16 | #EpilogSlurmctld= 17 | #Epilog= 18 | #FirstJobId=1 19 | #MaxJobId=999999 20 | #GresTypes= 21 | #GroupUpdateForce=0 22 | #GroupUpdateTime=600 23 | #JobCheckpointDir=/var/lib/slurm-llnl/checkpoint 24 | #JobCredentialPrivateKey= 25 | #JobCredentialPublicCertificate= 26 | #JobFileAppend=0 27 | #JobRequeue=1 28 | #JobSubmitPlugins=1 29 | #KillOnBadExit=0 30 | #LaunchType=launch/slurm 31 | #Licenses=foo*4,bar 32 | #MailProg=/usr/bin/mail 33 | #MaxJobCount=5000 34 | #MaxStepCount=40000 35 | #MaxTasksPerNode=128 36 | MpiDefault=none 37 | #MpiParams=ports=#-# 38 | #PluginDir= 39 | #PlugStackConfig= 40 | #PrivateData=jobs 41 | ProctrackType=proctrack/pgid 42 | #Prolog= 43 | #PrologFlags= 44 | #PrologSlurmctld= 45 | #PropagatePrioProcess=0 46 | #PropagateResourceLimits= 47 | #PropagateResourceLimitsExcept= 48 | #RebootProgram= 49 | ReturnToService=1 50 | #SallocDefaultCommand= 51 | SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid 52 | SlurmctldPort=6817 53 | SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid 54 | SlurmdPort=6818 55 | SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd 56 | SlurmUser=slurm 57 | #SlurmdUser=root 58 | #SrunEpilog= 59 | #SrunProlog= 60 | StateSaveLocation=/var/lib/slurm-llnl/slurmctld 61 | SwitchType=switch/none 62 | TaskPlugin=task/none 63 | #TaskPluginParam= 64 | TaskProlog=/bin/hostname 65 | #TopologyPlugin=topology/tree 66 | #TmpFS=/tmp 67 | #TrackWCKey=no 68 | #TreeWidth= 69 | #UnkillableStepProgram= 70 | #UsePAM=0 71 | # 72 | # 73 | # TIMERS 74 | #BatchStartTimeout=10 75 | #CompleteWait=0 76 | #EpilogMsgTime=2000 77 | #GetEnvTimeout=2 78 | #HealthCheckInterval=0 79 | #HealthCheckProgram= 80 | InactiveLimit=0 81 | KillWait=30 82 | #MessageTimeout=10 83 | #ResvOverRun=0 84 | MinJobAge=300 85 | #OverTimeLimit=0 86 | SlurmctldTimeout=120 87 | SlurmdTimeout=300 88 | #UnkillableStepTimeout=60 89 | #VSizeFactor=0 90 | Waittime=0 91 | # 92 | # 93 | # SCHEDULING 94 | #DefMemPerCPU=0 95 | FastSchedule=1 96 | #MaxMemPerCPU=0 97 | #SchedulerRootFilter=1 98 | #SchedulerTimeSlice=30 99 | SchedulerType=sched/backfill 100 | SchedulerPort=7321 101 | SelectType=select/cons_res 102 | SelectTypeParameters=CR_Core_Memory 103 | # 104 | # 105 | # JOB PRIORITY 106 | #PriorityFlags= 107 | #PriorityType=priority/basic 108 | #PriorityDecayHalfLife= 109 | #PriorityCalcPeriod= 110 | #PriorityFavorSmall= 111 | #PriorityMaxAge= 112 | #PriorityUsageResetPeriod= 113 | #PriorityWeightAge= 114 | #PriorityWeightFairshare= 115 | #PriorityWeightJobSize= 116 | #PriorityWeightPartition= 117 | #PriorityWeightQOS= 118 | # 119 | # 120 | # LOGGING AND ACCOUNTING 121 | #AccountingStorageEnforce=0 122 | #AccountingStorageHost= 123 | AccountingStorageLoc=/var/lib/slurm-llnl/slurmctld/accounting.db 124 | #AccountingStoragePass= 125 | #AccountingStoragePort= 126 | AccountingStorageType=accounting_storage/filetxt 127 | #AccountingStorageUser= 128 | AccountingStoreJobComment=YES 129 | ClusterName=cluster 130 | #DebugFlags= 131 | #JobCompHost= 132 | #JobCompLoc= 133 | #JobCompPass= 134 | #JobCompPort= 135 | JobCompType=jobcomp/filetxt 136 | #JobCompUser= 137 | #JobContainerPlugin=job_container/none 138 | JobAcctGatherFrequency=30 139 | JobAcctGatherType=jobacct_gather/linux 140 | SlurmctldDebug=3 141 | 
SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log 142 | SlurmdDebug=3 143 | SlurmdLogFile=/var/log/slurm-llnl/slurmd.log 144 | #SlurmSchedLogFile= 145 | #SlurmSchedLogLevel= 146 | # 147 | # 148 | # POWER SAVE SUPPORT FOR IDLE NODES (optional) 149 | #SuspendProgram= 150 | #ResumeProgram= 151 | #SuspendTimeout= 152 | #ResumeTimeout= 153 | #ResumeRate= 154 | #SuspendExcNodes= 155 | #SuspendExcParts= 156 | #SuspendRate= 157 | #SuspendTime= 158 | # 159 | # 160 | # COMPUTE NODES 161 | NodeName=compute01 NodeAddr={{ slurm['compute01']['addr'] }} RealMemory=61440 Sockets=2 CoresPerSocket=10 ThreadsPerCore=1 State=UNKNOWN 162 | NodeName=compute02 NodeAddr={{ slurm['compute02']['addr'] }} RealMemory=61440 Sockets=2 CoresPerSocket=10 ThreadsPerCore=1 State=UNKNOWN 163 | PartitionName=compile Nodes=compute01,compute02 Default=YES MaxTime=240 State=UP 164 | -------------------------------------------------------------------------------- /roles/controller/templates/etc/systemd/system/linuxboot-ci-api.service.j2: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Linuxboot CI API 3 | After=network-online.target 4 | 5 | [Service] 6 | EnvironmentFile=/etc/default/linuxboot-ci-api 7 | User=linuxboot 8 | Group=ci 9 | ExecStart=/usr/local/bin/linuxboot-ci-api run 10 | KillMode=process 11 | Restart=on-failure 12 | 13 | [Install] 14 | WantedBy=multi-user.target 15 | -------------------------------------------------------------------------------- /roles/controller/templates/lib/systemd/system/dnsmasq.service.j2: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=dnsmasq - A lightweight DHCP and caching DNS server 3 | Requires=network.target 4 | 5 | [Service] 6 | Type=forking 7 | Restart=always 8 | PIDFile=/var/run/dnsmasq/dnsmasq.pid 9 | 10 | # Test the config file and refuse starting if it is not valid. 11 | ExecStartPre=/usr/sbin/dnsmasq --test 12 | 13 | # We run dnsmasq via the /etc/init.d/dnsmasq script which acts as a 14 | # wrapper picking up extra configuration files and then execs dnsmasq 15 | # itself, when called with the "systemd-exec" function. 16 | ExecStart=/etc/init.d/dnsmasq systemd-exec 17 | 18 | # The systemd-*-resolvconf functions configure (and deconfigure) 19 | # resolvconf to work with the dnsmasq DNS server. They're called liek 20 | # this to get correct error handling (ie don't start-resolvconf if the 21 | # dnsmasq daemon fails to start. 
22 | ExecStartPost=/etc/init.d/dnsmasq systemd-start-resolvconf 23 | ExecStop=/etc/init.d/dnsmasq systemd-stop-resolvconf 24 | 25 | 26 | ExecReload=/bin/kill -HUP $MAINPID 27 | 28 | [Install] 29 | WantedBy=multi-user.target 30 | -------------------------------------------------------------------------------- /roles/ubuntu-common/tasks/dns.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Setup DNS configuration 4 | template: 5 | src: etc/resolvconf/resolv.conf.d/tail.j2 6 | dest: /etc/resolvconf/resolv.conf.d/tail 7 | owner: root 8 | group: root 9 | mode: 0644 10 | become: yes 11 | register: dnsconfig 12 | 13 | - name: Regenerate DNS configuration 14 | shell: resolvconf -u 15 | become: yes 16 | when: dnsconfig.changed 17 | -------------------------------------------------------------------------------- /roles/ubuntu-common/tasks/env.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Read Current hostname 4 | shell: hostname 5 | register: cur_hostname 6 | changed_when: False 7 | become: yes 8 | 9 | - name: Set Linux hostname 10 | shell: "hostname {{ linux_hostname }}" 11 | when: '"stdout" in cur_hostname and cur_hostname.stdout != linux_hostname' 12 | become: yes 13 | 14 | - name: Persist Linux hostname 15 | template: 16 | src: "{{ item }}.j2" 17 | dest: "/{{ item }}" 18 | owner: root 19 | group: root 20 | mode: 0644 21 | become: yes 22 | with_items: 23 | - etc/hostname 24 | - etc/hosts 25 | 26 | - name: Setup APT Proxy configuration 27 | template: 28 | src: etc/apt/apt.conf.j2 29 | dest: /etc/apt/apt.conf 30 | owner: root 31 | group: root 32 | mode: 0644 33 | become: yes 34 | 35 | - name: Setup environment configuration 36 | template: 37 | src: etc/environment 38 | dest: /etc/environment 39 | owner: root 40 | group: root 41 | mode: 0644 42 | become: yes 43 | -------------------------------------------------------------------------------- /roles/ubuntu-common/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - import_tasks: env.yml 4 | tags: 5 | - env 6 | 7 | - import_tasks: users.yml 8 | tags: 9 | - users 10 | 11 | - import_tasks: dns.yml 12 | tags: 13 | - dns 14 | 15 | - import_tasks: ssh.yml 16 | tags: 17 | - ssh 18 | 19 | - import_tasks: tools.yml 20 | tags: 21 | - tools 22 | -------------------------------------------------------------------------------- /roles/ubuntu-common/tasks/ssh.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Ensure .ssh directory exists 4 | file: 5 | path: "{{ item.home }}/.ssh" 6 | state: directory 7 | mode: 0755 8 | owner: "{{ item.user }}" 9 | group: "{{ item.group }}" 10 | with_items: 11 | - { user: "{{ ansible_env.USER }}", group: "{{ ansible_env.USER }}", home: "{{ ansible_env.HOME }}" } 12 | - { user: "linuxboot", group: "ci", home: "/home/linuxboot" } 13 | - { user: "root", group: "root", home: "/root" } 14 | become: yes 15 | 16 | - name: Ensure file .ssh/authorized_keys exists 17 | shell: > 18 | if [ ! 
-e {{ item.home }}/.ssh/authorized_keys ] ; then 19 | sudo -u {{ item.user }} touch {{ item.home }}/.ssh/authorized_keys ; 20 | fi 21 | args: 22 | creates: "{{ item.home }}/.ssh/authorized_keys" 23 | with_items: 24 | - { user: "{{ ansible_env.USER }}", home: "{{ ansible_env.HOME }}" } 25 | - { user: "linuxboot", home: "/home/linuxboot" } 26 | - { user: "root", home: "/root" } 27 | become: yes 28 | 29 | - name: Deploy SSH configuration 30 | template: 31 | src: ".ssh/config" 32 | dest: "{{ item.home }}/.ssh/config" 33 | mode: 0444 34 | owner: "{{ item.user }}" 35 | group: "{{ item.group }}" 36 | with_items: 37 | - { user: "{{ ansible_env.USER }}", group: "{{ ansible_env.USER }}", home: "{{ ansible_env.HOME }}" } 38 | - { user: "linuxboot", group: "ci", home: "/home/linuxboot" } 39 | - { user: "root", group: "root", home: "/root" } 40 | become: yes 41 | 42 | - name: Ensure SSH Keypair exists 43 | shell: "ssh-keygen -b 2048 -t rsa -f {{ ansible_env.HOME }}/.ssh/id_rsa -q -N ''" 44 | args: 45 | creates: "{{ ansible_env.HOME }}/.ssh/id_rsa" 46 | 47 | - name: Read SSH public key 48 | shell: "cat {{ ansible_env.HOME }}/.ssh/id_rsa.pub" 49 | register: ssh_pub_key 50 | changed_when: False 51 | 52 | - name: Aggregate SSH public keys 53 | set_fact: 54 | ssh_keys: "{{ ansible_play_hosts | map('extract', hostvars, 'ssh_pub_key') | map(attribute='stdout') | list }}" 55 | run_once: True 56 | 57 | - name: Copy Private SSH Key 58 | copy: 59 | remote_src: true 60 | src: "{{ ansible_env.HOME }}/.ssh/id_rsa" 61 | dest: "{{ item.home }}/.ssh/id_rsa" 62 | mode: 0400 63 | owner: "{{ item.user }}" 64 | group: "{{ item.group }}" 65 | become: yes 66 | with_items: 67 | - { user: "linuxboot", group: "ci", home: "/home/linuxboot" } 68 | - { user: "root", group: "root", home: "/root" } 69 | 70 | - name: Copy Public SSH Key 71 | copy: 72 | remote_src: true 73 | src: "{{ ansible_env.HOME }}/.ssh/id_rsa.pub" 74 | dest: "{{ item.home }}/.ssh/id_rsa.pub" 75 | mode: 0400 76 | owner: "{{ item.user }}" 77 | group: "{{ item.group }}" 78 | become: yes 79 | with_items: 80 | - { user: "linuxboot", group: "ci", home: "/home/linuxboot" } 81 | - { user: "root", group: "root", home: "/root" } 82 | 83 | - name: 84 | debug: 85 | var: ssh_keys 86 | run_once: True 87 | 88 | - name: Add SSH public key in ~/.ssh/authorized_keys 89 | lineinfile: 90 | path: "{{ item[0] }}/.ssh/authorized_keys" 91 | line: "{{ item[1] }}" 92 | with_nested: 93 | - ["{{ ansible_env.HOME }}", "/home/linuxboot", "/root"] 94 | - "{{ ssh_keys }}" 95 | become: yes 96 | -------------------------------------------------------------------------------- /roles/ubuntu-common/tasks/tools.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Install some packages 4 | apt: 5 | name: "{{ item }}" 6 | state: present 7 | update_cache: yes 8 | with_items: 9 | - nmap 10 | - tcpdump 11 | - fping 12 | - vim 13 | become: yes 14 | -------------------------------------------------------------------------------- /roles/ubuntu-common/tasks/users.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - group: 4 | name: ci 5 | gid: 2222 6 | become: yes 7 | 8 | - user: 9 | name: linuxboot 10 | shell: /bin/bash 11 | group: ci 12 | groups: sudo 13 | append: yes 14 | uid: 2222 15 | become: yes 16 | 17 | - name: Set linuxboot user sudoer 18 | template: 19 | src: etc/sudoers.d/linuxboot.j2 20 | dest: /etc/sudoers.d/linuxboot 21 | owner: root 22 | group: root 23 | mode: 0444 24 | become: yes 25 | 
-------------------------------------------------------------------------------- /roles/ubuntu-common/templates/.ssh/config: -------------------------------------------------------------------------------- 1 | 2 | Host * 3 | StrictHostKeyChecking no 4 | -------------------------------------------------------------------------------- /roles/ubuntu-common/templates/etc/apt/apt.conf.j2: -------------------------------------------------------------------------------- 1 | {% if behind_proxy %} 2 | Acquire::http::Proxy "{{ proxy_url }}"; 3 | {% endif %} 4 | -------------------------------------------------------------------------------- /roles/ubuntu-common/templates/etc/environment: -------------------------------------------------------------------------------- 1 | PATH="/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" 2 | {% if behind_proxy %} 3 | http_proxy='{{ proxy_url }}' 4 | https_proxy='{{ proxy_url }}' 5 | {% endif %} 6 | -------------------------------------------------------------------------------- /roles/ubuntu-common/templates/etc/hostname.j2: -------------------------------------------------------------------------------- 1 | {{ linux_hostname }} 2 | -------------------------------------------------------------------------------- /roles/ubuntu-common/templates/etc/hosts.j2: -------------------------------------------------------------------------------- 1 | 127.0.0.1 localhost 2 | 127.0.1.1 {{ linux_hostname }} 3 | 4 | # The following lines are desirable for IPv6 capable hosts 5 | ::1 localhost ip6-localhost ip6-loopback 6 | ff02::1 ip6-allnodes 7 | ff02::2 ip6-allrouters 8 | -------------------------------------------------------------------------------- /roles/ubuntu-common/templates/etc/resolvconf/resolv.conf.d/tail.j2: -------------------------------------------------------------------------------- 1 | {% if not behind_proxy %} 2 | nameserver 8.8.8.8 3 | nameserver 8.8.4.4 4 | {% endif %} 5 | -------------------------------------------------------------------------------- /roles/ubuntu-common/templates/etc/sudoers.d/linuxboot.j2: -------------------------------------------------------------------------------- 1 | linuxboot ALL=(ALL:ALL) NOPASSWD: ALL --------------------------------------------------------------------------------