├── .gitignore
├── CONTRIBUTORS.md
├── INSTALL.md
├── README.md
├── docs
│   ├── BASE-CEPH.md
│   ├── BASE-NETWORKING.md
│   ├── BASE.md
│   ├── Bridge topology.odp
│   ├── COMPUTES.md
│   ├── CONTROLLERS.md
│   ├── NETWORKS.md
│   ├── TEST.md
│   ├── archi_reseau.png
│   ├── archi_reseau.svg
│   ├── archi_reseau_20150206.png
│   ├── bridge_topology_allinone.jpg
│   ├── bridge_topology_controller_compute.jpg
│   ├── bridge_topology_network.jpg
│   ├── ha.png
│   ├── ha.svg
│   ├── opensteak_ha.png
│   └── opensteak_ha_20150209.png
└── infra
    ├── .gitignore
    ├── config
    │   ├── arnaud.yaml
    │   └── infra.sample.yaml
    ├── configure_foreman.py
    ├── create_foreman.py
    ├── foreman
    │   ├── README.md
    │   ├── files
    │   │   ├── id_rsa
    │   │   ├── id_rsa.pub
    │   │   └── puppet_master
    │   │       ├── etc
    │   │       │   ├── puppet
    │   │       │   │   ├── auth.conf
    │   │       │   │   ├── hiera.yaml
    │   │       │   │   ├── hieradata
    │   │       │   │   │   └── production
    │   │       │   │   │       └── nodes
    │   │       │   │   │           └── proxy.DOMAIN.yaml
    │   │       │   │   └── manifests
    │   │       │   │       └── site.pp
    │   │       │   └── r10k.yaml
    │   │       ├── patches
    │   │       │   ├── add_require_json_openstack.rb.patch
    │   │       │   └── check_virsh_secret_using_secret_get_value.patch
    │   │       └── usr
    │   │           └── local
    │   │               └── bin
    │   │                   └── opensteak-r10k-update
    │   ├── provisioning_templates
    │   │   ├── preseed_default.tpl
    │   │   ├── preseed_default_finish.tpl
    │   │   └── preseed_default_pxelinux.tpl
    │   └── templates
    │       ├── common.yaml
    │       ├── install.sh
    │       ├── kvm-config
    │       ├── meta-data
    │       └── user-data
    ├── install_opensteak.py
    ├── opensteak
    │   ├── .gitignore
    │   ├── __init__.py
    │   ├── argparser.py
    │   ├── conf.py
    │   ├── printer.py
    │   ├── templateparser.py
    │   └── virsh.py
    └── tools
        ├── delete_vm.sh
        ├── foreman_cli.py
        └── prepare_jump.sh

/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | env/
12 | build/
13 | develop-eggs/
14 | dist/
15 | downloads/
16 | eggs/
17 | .eggs/
18 | lib/
19 | lib64/
20 | parts/
21 | sdist/
22 | var/
23 | *.egg-info/
24 | .installed.cfg
25 | *.egg
26 |
27 | # PyInstaller
28 | # Usually these files are written by a python script from a template
29 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
30 | *.manifest
31 | *.spec
32 |
33 | # Installer logs
34 | pip-log.txt
35 | pip-delete-this-directory.txt
36 |
37 | # Unit test / coverage reports
38 | htmlcov/
39 | .tox/
40 | .coverage
41 | .coverage.*
42 | .cache
43 | nosetests.xml
44 | coverage.xml
45 | *,cover
46 |
47 | # Translations
48 | *.mo
49 | *.pot
50 |
51 | # Django stuff:
52 | *.log
53 |
54 | # Sphinx documentation
55 | docs/_build/
56 |
57 | # PyBuilder
58 | target/
59 |
--------------------------------------------------------------------------------
/CONTRIBUTORS.md:
--------------------------------------------------------------------------------
1 | # Contributors
2 |
3 | * Arnaud Morin (arnaudmorinol) - Orange
4 | * David Blaisonneau (davidblaisonneau-orange) - Orange
5 | * Valentin Boucher (boucherv) - Orange
6 | * Pawel Chomicki (pchomik) - Nokia
7 | * Juan Antonio Osorio (JAORMX) - Ericsson
8 |
--------------------------------------------------------------------------------
/INSTALL.md:
--------------------------------------------------------------------------------
1 | # OpenSteak install with Foreman
2 |
3 | ## Prerequisites
4 |
5 | * 1 medium server installed with Ubuntu 14.04, connected to the admin network
6 | * At least 1 controller server, which will host the OpenStack VMs
7 |
8 | ## Get the code and configure
9 |
10 | * Install dependencies:
11 |
12 | ```
13 | sudo apt-get install libvirt-bin git qemu-kvm genisoimage bridge-utils
14 | sudo service libvirt-bin restart
15 | ```
16 |
17 | * Get the code:
18 |
19 | ```
20 | git clone https://github.com/Orange-OpenSource/opnfv.git
21 | ```
22 |
23 | * Configure, check the config, and check again.
24 | Edit both files:
25 |   * ```~/opnfv/infra/config/infra.yaml```
26 |   * ```~/opnfv/infra/foreman/templates/common.yaml```
27 |
28 |
29 | * Check the config file one more time...
30 |
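Both files are plain YAML, so a quick syntax check can catch mistakes early. A minimal sketch, assuming `python3` with PyYAML is available on the jumphost (this check is not part of the OpenSteak tooling):

```bash
# Fail fast on YAML syntax errors before running the installers
python3 -c "import yaml, sys; yaml.safe_load(open(sys.argv[1]))" \
    ~/opnfv/infra/config/infra.yaml && echo "infra.yaml: syntax OK"
```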
31 | ## Prepare libvirt
32 |
33 | This part will be added to the script in a future release.
34 |
35 | Set the default libvirt pool during the install, not before.
36 |
37 | ### Set the network
38 |
39 | We need to set the admin interface on a bridge.
40 |
41 | If this interface is eth0 with IP 192.168.1.4 and gateway 192.168.1.1,
42 | and the default IP on the existing bridge (virbr0) is 192.168.122.1,
43 | these are the lines you need to execute:
44 |
45 | ```
46 | ubuntu@jumphost:~$ sudo ip a d 192.168.122.1/24 dev virbr0
47 | ubuntu@jumphost:~$ sudo ip a a 192.168.1.4/24 dev virbr0
48 | ubuntu@jumphost:~$ sudo ip a d 192.168.1.4/24 dev eth0 && sudo brctl addif virbr0 eth0
49 | ubuntu@jumphost:~$ sudo ip r a default dev virbr0 via 192.168.1.1
50 | ```
51 |
52 | Save the config: edit ```/etc/network/interfaces``` and set the changed interfaces:
53 | ```
54 | # Set up interfaces manually, avoiding conflicts with, e.g., network manager
55 | iface eth0 inet manual
56 | # Bridge setup
57 | iface virbr0 inet static
58 |     bridge_ports eth0
59 |     address 192.168.1.4
60 |     broadcast 192.168.1.255
61 |     netmask 255.255.255.0
62 |     gateway 192.168.1.1
63 |
64 | ```
65 |
66 |
67 | ### Issue #9
68 | * Create the default_pool.xml file:
69 |
70 | ```
71 | <pool type='dir'>
72 |   <name>default</name>
73 |   <target>
74 |     <path>/var/lib/libvirt/images</path>
75 |   </target>
76 | </pool>
77 | ```
78 |
79 | * Create the pool:
80 |
81 | ```sudo virsh pool-define default_pool.xml```
82 |
83 | * Start it and set autostart:
84 |
85 | ```
86 | ubuntu@jumphost:~$ sudo virsh pool-start default
87 | ubuntu@jumphost:~$ sudo virsh pool-autostart default
88 | ```
89 |
90 | * Refresh and check:
91 |
92 | ```
93 | ubuntu@jumphost:~$ sudo virsh pool-refresh default
94 | ubuntu@jumphost:~$ sudo virsh pool-list --all
95 | ```
96 |
97 | ### Issue #10
98 |
99 | ```
100 | ubuntu@jumphost:~$ sudo wget -O /var/lib/libvirt/images/trusty-server-cloudimg-amd64-disk1.img http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
101 | ```
102 |
103 | ## Install Foreman VM
104 |
105 | ```
106 | ubuntu@jumphost:~$ cd opnfv/infra/
107 | ubuntu@jumphost:~/opnfv/infra$ sudo python3 create_foreman.py
108 | ```
109 |
110 | When done, you can watch the creation process with:
111 |
112 | ```sudo tail -f /var/log/libvirt/qemu/foreman-serial.log```
113 |
114 |
115 | ## Configure Foreman
116 |
117 | * Install the Foreman Python API:
118 |
119 | ```
120 | ubuntu@jumphost:~/opnfv/infra$ sudo pip3 install foreman
121 | ```
122 |
123 | * Configure Foreman:
124 |
125 | ```
126 | ubuntu@jumphost:~/opnfv/infra$ sudo python3 configure_foreman.py
127 | ```
128 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # OpenSteak Getting Started Installation
2 |
3 | [![Join the chat at https://gitter.im/Orange-OpenSource/opnfv](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/Orange-OpenSource/opnfv?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
4 |
5 | OpenStack installer and configuration by Orange Labs (project code name: OpenSteak)
6 |
7 | ## Introduction
8 | This repo contains the tools and scripts to install a full OpenStack Juno on Ubuntu 14.04.
9 |
10 | It aims to propose a **High Availability** deployment with **Bare Metal** provisioning.
11 |
12 | The configuration is done automatically with **Puppet**, **Python scripts** and **Foreman**.
13 |
14 | To keep the module dependencies up to date (and to download modules automatically), we use **r10k**.
15 |
16 | The storage is handled by **Ceph**.
17 |
18 | The only thing you have to provide is a valid **YAML** configuration file.
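For a first idea of what that file contains, here is a heavily trimmed sketch in the spirit of the `infra/config/infra.sample.yaml` and `infra/config/arnaud.yaml` files shipped in this repo (the values below are illustrative, not a working configuration):

```yaml
# Illustrative excerpt only -- see infra/config/infra.sample.yaml for the real template
domains: "infra.opensteak.fr"
operatingsystems: "Ubuntu 14.04 Cloud"
subnets: "Admin"
subnetsList:
  Admin:
    domain: 'infra.opensteak.fr'
    data:
      network: "192.168.4.0"
      mask: "255.255.255.0"
      gateway: "192.168.4.1"
```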
19 |
20 | ### Puppet modules
21 |
22 | We use pure **Puppet modules from OpenStack**:
23 |
24 | https://wiki.openstack.org/wiki/Puppet#Puppet_Modules
25 |
26 | We also use an **OpenSteak Puppet module** to supersede the OpenStack modules:
27 |
28 | https://github.com/Orange-OpenSource/opnfv-puppet
29 |
30 | All dependencies between Puppet modules are managed by **r10k**:
31 |
32 | https://github.com/Orange-OpenSource/opnfv-r10k
33 |
34 |
35 | ## Installation
36 |
37 | see [INSTALL](INSTALL.md)
38 |
39 | ## Architecture
40 | ### Basic setup
41 |
42 | In a lab configuration, to optimize resource usage:
43 |
44 | * All nodes are *compute* and *storage* nodes: they all contain nova-compute, neutron-compute and a Ceph OSD
45 | * 2 nodes are also *controllers*, containing KVM VMs for the OpenStack bricks, a DNS node and HAProxy
46 | * 1 node is a network gateway to external networks
47 |
48 | ![Image of Bridging topology - Controller and compute](https://github.com/Orange-OpenSource/opnfv/raw/master/docs/bridge_topology_controller_compute.jpg)
49 |
50 | ![Image of Bridging topology - Network](https://github.com/Orange-OpenSource/opnfv/raw/master/docs/bridge_topology_network.jpg)
51 |
52 | On each server, we have at least 4 network bridges:
53 |
54 | * **br-int**: Integration bridge. Tags/untags VLANs for the VMs; the VM veth interfaces are attached to this bridge.
55 | * **br-vm**: Distributed bridge between compute nodes. Used to transport VM flows. In our case, we use the physical interface named **em5** as support for this bridge.
56 | * **br-adm**: Administration bridge. This is used by the controller part of OpenStack (which runs in KVM) and for other administrative tasks. In our case, we use the physical interface named **em3** as support for this bridge. This network needs internet access through a default gateway to proceed with the installation.
57 | * **br-storage**: Storage bridge, used by the Ceph cluster. In our case, we use the physical interface named **em4** as support for this bridge.
58 |
59 | The network server will also have one additional bridge:
60 |
61 | * **br-ex**: Bridge to communicate with external networks. In our case, we use the physical interface named **em2** as support for this bridge.
62 |
63 |
64 | ![Image of Basic setup](https://github.com/Orange-OpenSource/opnfv/raw/master/docs/archi_reseau.png)
65 |
66 |
67 | ### How do we handle OpenStack functions
68 | Each controller part of OpenStack is created separately in a KVM machine, so that it can easily be updated or redeployed.
69 |
70 | Each KVM machine is automatically created by a script (opensteak-create-vm) and its basic configuration comes through cloud-init. OpenStack-related configuration is handled by Puppet.
71 |
72 | ### How do we provide HA
73 | The work is still in progress, but we plan to use HAProxy in front of the nodes, with VRRP IPs and weighted routes.
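Since this part is work in progress, here is only a rough sketch of the kind of HAProxy stanza we have in mind for an API endpoint (the addresses, port and backend names below are hypothetical, not taken from this repo):

```
# haproxy.cfg sketch -- hypothetical addresses and backend names
listen keystone_public
    bind 192.168.1.100:5000
    balance roundrobin
    server controller1 192.168.1.11:5000 check inter 2000 rise 2 fall 3
    server controller2 192.168.1.12:5000 check inter 2000 rise 2 fall 3
```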
74 |
75 | ![Image of HA](https://raw.githubusercontent.com/Orange-OpenSource/opnfv/master/docs/opensteak_ha.png)
76 |
77 | ## Status
78 | * Puppet modules:
79 |   * MySQL: OK
80 |   * Rabbit: OK
81 |   * Keystone: OK
82 |   * Glance: OK
83 |     * with Ceph: OK
84 |   * Cinder: OK
85 |     * with Ceph: OK
86 |   * Nova: OK
87 |     * with Ceph: OK
88 |   * Neutron: OK
89 | * Bare metal provisioning: OK with Foreman
90 | * OpenDaylight: WiP
91 | * High Availability: WiP
92 |
93 |
--------------------------------------------------------------------------------
/docs/BASE-CEPH.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | **Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
4 |
5 | - [Ceph Installation](#ceph-installation)
6 |   - [](#)
7 |   - [ Intro](#intro)
8 |   - [Architecture](#architecture)
9 |   - [Ceph-admin machine preparation](#ceph-admin-machine-preparation)
10 |     - [ Install ceph-deploy](#install-ceph-deploy)
11 |     - [ Install ntp](#install-ntp)
12 |     - [ Create a ceph user on each node (ceph-admin included)](#create-a-ceph-user-on-each-node-ceph-admin-included)
13 |     - [Add each node in hosts file (ceph-admin included)](#add-each-node-in-hosts-file-ceph-admin-included)
14 |     - [Create and copy a passwordless ssh key to each node](#create-and-copy-a-passwordless-ssh-key-to-each-node)
15 |     - [Create a .ssh/config file to connect automatically](#create-a-sshconfig-file-to-connect-automatically)
16 |   - [ Ceph storage cluster](#ceph-storage-cluster)
17 |     - [Prepare folder](#prepare-folder)
18 |     - [Deploy initial monitor on first node](#deploy-initial-monitor-on-first-node)
19 |     - [Configure ceph](#configure-ceph)
20 |     - [Install ceph in all nodes](#install-ceph-in-all-nodes)
21 |     - [Create initial monitor and gather the keys](#create-initial-monitor-and-gather-the-keys)
22 |     - [ Create and add OSD](#create-and-add-osd)
23 |     - [Prepare all nodes to administer the cluster](#prepare-all-nodes-to-administer-the-cluster)
24 |     - [Add a metadata server in first node](#add-a-metadata-server-in-first-node)
25 |   - [Extend](#extend)
26 |     - [Extend the OSD pool](#extend-the-osd-pool)
27 |     - [ Extend the monitors](#extend-the-monitors)
28 |   - [Check status](#check-status)
29 |   - [Create a file system](#create-a-file-system)
30 |   - [ Mount file system](#mount-file-system)
31 |   - [Customize for OpenStack](#customize-for-openstack)
32 |     - [ Add a ceph pool in libvirt](#add-a-ceph-pool-in-libvirt)
33 |     - [ Configure OpenStack Ceph Clients](#configure-openstack-ceph-clients)
34 |     - [ Create Ceph Pool in libvirt so that controller machines can be integrated in it](#create-ceph-pool-in-libvirt-so-that-controller-machines-can-be-integrated-in-it)
35 |   - [ Purge conf if needed](#purge-conf-if-needed)
36 |   - [TODO](#todo)
37 |
38 |
39 |
40 | # Ceph Installation
41 |
42 | ---
43 | ## Intro
44 | Ceph is used to build a storage system across all machines.
45 |
46 | ## Architecture
47 | We consider the following architecture:
48 |
49 | TODO: add schema (4 machines: ceph-admin, 3 ceph-nodes: opensteak9{2,3,4})
50 |
51 | Networks:
52 | ```
53 | 192.168.0.0/24 is the cluster network (used for storage)
54 | 192.168.1.0/24 is the management network (used for admin tasks)
55 | ```
56 |
57 |
58 | ## Ceph-admin machine preparation
59 |
60 | This is done on an Ubuntu 14.04 64-bit server.
61 |
62 | ### Install ceph-deploy
63 | ```bash
64 | wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
65 | echo deb http://ceph.com/debian-firefly/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
66 | sudo apt-get update && sudo apt-get install ceph-deploy
67 | ```
68 |
69 | ### Install ntp
70 | ```bash
71 | sudo apt-get install ntp
72 | sudo service ntp restart
73 | ```
74 |
75 | ### Create a ceph user on each node (ceph-admin included)
76 | ```bash
77 | sudo useradd -d /home/ceph -m ceph
78 | sudo passwd ceph
79 | ```
80 |
81 | Add sudo rights:
82 | ```bash
83 | echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
84 | sudo chmod 0440 /etc/sudoers.d/ceph
85 | ```
86 |
87 | * *Note: if you think this can be a security threat, remove the ceph user from sudoers after the installation is complete*
88 |
89 | * *Note 2: the ceph documentation asks for this user: http://ceph.com/docs/master/rados/deployment/preflight-checklist/?highlight=sudoers*
90 |
91 |
92 | ### Add each node in hosts file (ceph-admin included)
93 | ```bash
94 | sudo bash -c ' cat << EOF >> /etc/hosts
95 | 192.168.1.200 ceph-admin
96 | 192.168.1.92 opensteak92
97 | 192.168.1.93 opensteak93
98 | 192.168.1.94 opensteak94
99 | EOF'
100 | ```
101 |
102 | ### Create and copy a passwordless ssh key to each node
103 | ```bash
104 | ssh-keygen
105 | ssh-copy-id ceph@ceph-admin
106 | ssh-copy-id ceph@opensteak92
107 | ssh-copy-id ceph@opensteak93
108 | ssh-copy-id ceph@opensteak94
109 | ```
110 |
111 | ### Create a .ssh/config file to connect automatically
112 | ```bash
113 | cat << EOF >> .ssh/config
114 | Host ceph-admin
115 |     Hostname ceph-admin
116 |     User ceph
117 | Host opensteak92
118 |     Hostname opensteak92
119 |     User ceph
120 | Host opensteak93
121 |     Hostname opensteak93
122 |     User ceph
123 | Host opensteak94
124 |     Hostname opensteak94
125 |     User ceph
126 | EOF
127 | ```
128 |
129 | ## Ceph storage cluster
130 | All these commands must be run from the ceph-admin machine as a regular user.
131 |
132 | ### Prepare folder
133 | ```bash
134 | mkdir ceph-cluster
135 | cd ceph-cluster/
136 | ```
137 |
138 | ### Deploy initial monitor on first node
139 | ```bash
140 | ceph-deploy new opensteak92
141 | ```
142 |
143 | ### Configure ceph
144 | We set the default pool size to 2 and define the public/cluster networks:
145 |
146 | ```bash
147 | cat << EOF >> ceph.conf
148 | osd pool default size = 2
149 | public network = 192.168.1.0/24
150 | cluster network = 192.168.0.0/24
151 | EOF
152 | ```
153 |
154 | ### Install ceph in all nodes
155 | ```bash
156 | ceph-deploy --username ceph install ceph-admin opensteak92 opensteak93 opensteak94
157 | ```
158 |
159 | ### Create initial monitor and gather the keys
160 | ```bash
161 | ceph-deploy --username ceph mon create-initial
162 | ```
163 |
164 | ### Create and add OSD
165 | We will use a hard disk (/dev/sdb) for storage: http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-osd/
166 |
167 | ```bash
168 | ceph-deploy --username ceph osd create opensteak93:sdb
169 | ceph-deploy --username ceph osd create opensteak94:sdb
170 | ```
171 |
172 | ### Prepare all nodes to administer the cluster
173 | Prepare all nodes with the ceph.conf and the ceph.client.admin.keyring, so that they can administer the cluster:
174 |
175 | ```bash
176 | ceph-deploy admin ceph-admin opensteak92 opensteak93 opensteak94
177 | sudo chmod +r /etc/ceph/ceph.client.admin.keyring
178 | ```
179 |
180 | ### Add a metadata server in first node
181 | ```bash
182 | ceph-deploy --username ceph mds create opensteak92
183 | ```
184 |
185 | ## Extend
186 | ### Extend the OSD pool
187 | We decided to extend the OSD pool by adding the first node as well:
188 |
189 | ```bash
190 | ceph-deploy --username ceph osd create opensteak92:sdb
191 | ```
192 |
193 | ### Extend the monitors
194 | In the same spirit, extend the monitors by adding the last two nodes, then check the status:
195 | ```bash
196 | ceph-deploy --username ceph mon create opensteak93 opensteak94
197 | ceph quorum_status --format json-pretty
198 | ```
199 |
200 | ## Check status
201 | ```bash
202 | ceph health
203 | ```
204 |
205 | ## Create a file system
206 | Check the osd pools:
207 | ```bash
208 | ceph osd lspools
209 | ```
210 |
211 | If you don't have data and metadata pools, create them:
212 | ```bash
213 | ceph osd pool create cephfs_data 64
214 | ceph osd pool create cephfs_metadata 64
215 | ```
216 |
217 | Then create a filesystem on top of these pools:
218 | ```bash
219 | ceph fs new cephfs cephfs_metadata cephfs_data
220 | ```
221 |
222 | And check again:
223 | ```bash
224 | ceph osd lspools
225 | ```
226 |
227 | This should produce:
228 | ```bash
229 | 0 rbd,1 cephfs_data,2 cephfs_metadata,
230 | ```
231 |
232 | You can check as well with:
233 | ```bash
234 | $ ceph fs ls
235 | name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
236 |
237 | $ ceph mds stat
238 | e5: 1/1/1 up {0=opensteak92=up:active}
239 | ```
240 |
241 | ## Mount file system
242 | For each node on which you want to mount Ceph at **/mnt/cephfs/**, run:
243 | ```bash
244 | ssh opensteak9x "cat /etc/ceph/ceph.client.admin.keyring |grep key|awk '{print \$3}'|sudo tee /etc/ceph/ceph.client.admin.key"
245 |
246 | ssh opensteak9x "sudo mkdir /mnt/cephfs"
247 |
248 | ssh opensteak9x "echo '192.168.1.92:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/ceph.client.admin.key,noatime 0 2' | sudo tee --append /etc/fstab && sudo mount /mnt/cephfs"
249 | ```
250 |
251 | This will add a line to fstab, so the file system will automatically be mounted on boot.
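The three commands above use a placeholder host (`opensteak9x`); to run them on every node in one pass, a small loop along these lines can be used (a sketch, reusing the node names and the passwordless SSH configured earlier in this guide):

```bash
# Mount CephFS on all nodes -- sketch reusing the commands above
for node in opensteak92 opensteak93 opensteak94; do
  ssh $node "cat /etc/ceph/ceph.client.admin.keyring | grep key | awk '{print \$3}' | sudo tee /etc/ceph/ceph.client.admin.key"
  ssh $node "sudo mkdir -p /mnt/cephfs"
  ssh $node "echo '192.168.1.92:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/ceph.client.admin.key,noatime 0 2' | sudo tee --append /etc/fstab && sudo mount /mnt/cephfs"
done
```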
252 |
253 | ## Customize for OpenStack
254 | ### Add a ceph pool in libvirt
255 |
256 | ```bash
257 | ceph osd pool create volumes 128
258 | ceph osd pool create images 128
259 | ceph osd pool create backups 128
260 | ceph osd pool create vms 128
261 | ```
262 | ### Configure OpenStack Ceph Clients
263 |
264 | ```bash
265 | ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
266 | ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
267 | ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
268 | ```
269 |
270 | ### Create Ceph Pool in libvirt so that controller machines can be integrated in it
271 |
272 | ```bash
273 | cd /usr/local/opensteak/infra/kvm/
274 | virsh pool-define ceph_pool.xml
275 | virsh pool-start ceph
276 | virsh pool-autostart ceph
277 | virsh pool-autostart default
278 | virsh pool-refresh default
279 | virsh pool-refresh ceph
280 | virsh vol-list ceph
281 | ```
282 |
283 | ## Purge conf if needed
284 | Do this only to restart from scratch.
285 |
286 | Erase the disks:
287 |
288 | ```bash
289 | ceph-deploy --username ceph disk zap opensteak92:sdb
290 | ceph-deploy --username ceph disk zap opensteak93:sdb
291 | ceph-deploy --username ceph disk zap opensteak94:sdb
292 | ```
293 |
294 | Purge the packages and data:
295 |
296 | ```bash
297 | ceph-deploy --username ceph purge opensteak92
298 | ceph-deploy --username ceph purgedata opensteak92
299 | ceph-deploy --username ceph purge opensteak93
300 | ceph-deploy --username ceph purgedata opensteak93
301 | ceph-deploy --username ceph purge opensteak94
302 | ceph-deploy --username ceph purgedata opensteak94
303 | ```
304 |
305 |
306 |
307 | ## TODO
308 |
309 | * create a python/bash script that will install & check that the cluster is well configured (do all of this automatically)
310 | * create a conf file that will be used by the above script to describe the architecture?
311 |
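As a very first step towards that script, here is a bash sketch of the "check" half only (it assumes it runs on a node with admin rights on the cluster, as configured above):

```bash
#!/bin/bash
# Sketch: verify overall cluster health, monitor quorum and OSD states
if [ "$(ceph health)" != "HEALTH_OK" ]; then
    echo "WARNING: cluster is not healthy:" >&2
    ceph health detail >&2
fi
echo "Monitors in quorum:"
ceph quorum_status --format json-pretty | grep -A5 '"quorum_names"'
if ceph osd tree | grep -wq down; then
    echo "ERROR: some OSDs are down" >&2
    exit 1
fi
echo "All OSDs are up."
```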
--------------------------------------------------------------------------------
/docs/BASE-NETWORKING.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | **Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
4 |
5 | - [Network configuration](#network-configuration)
6 |   - [Installation](#installation)
7 |     - [ 4 interfaces configuration](#4-interfaces-configuration)
8 |     - [ 2 interfaces configuration](#2-interfaces-configuration)
9 |   - [Restart](#restart)
10 |
11 |
12 |
13 | # Network configuration
14 | Before proceeding, be sure to have already completed the [base](BASE.md) install.
15 |
16 | Info on network config for OpenStack:
17 |
18 | * https://openstack.redhat.com/Networking_in_too_much_detail
19 | * http://docs.openstack.org/juno/install-guide/install/apt/content/neutron-initial-networks.html
20 |
21 |
22 | ## Installation
23 |
24 | We provide two bash scripts to automate the configuration:
25 |
26 | * 1 script for servers with 4 interfaces (which is the default)
27 | * 1 script for servers with 2 interfaces (with trunk access on eth0 to allow both br-adm & br-storage to share the interface)
28 |
29 | ### 4 interfaces configuration
30 |
31 | ```bash
32 | cd /usr/local/opensteak/
33 | cp infra/network/interfaces /etc/network/interfaces
34 | cp infra/network/interfaces.d/4_interfaces_servers/* /etc/network/interfaces.d/
35 | ```
36 |
37 | Then replace **XXX** with the last octet of your server's IP address. Here is an example for 92:
38 | ```bash
39 | perl -i -pe 's/XXX/92/g' /etc/network/interfaces.d/*
40 | ```
41 |
42 | ### 2 interfaces configuration
43 |
44 | Do the same, but take care with the configuration: we use eth0 as a trunk port, with access mode for br-adm & VLAN 600 for br-storage.
45 |
46 | ```bash
47 | cd /usr/local/opensteak/
48 | cp infra/network/interfaces /etc/network/interfaces
49 | cp infra/network/interfaces.d/2_interfaces_servers/* /etc/network/interfaces.d/
50 | ```
51 |
52 | Then replace **XXX** with the last octet of your server's IP address. Here is an example for 92:
53 | ```bash
54 | perl -i -pe 's/XXX/92/g' /etc/network/interfaces.d/*
55 | ```
56 |
57 | ## Restart
58 |
59 | Warning: this might kill your SSH connection:
60 |
61 | ```bash
62 | ifdown eth0 && ifup eth0
63 | ifdown eth1 && ifup eth1
64 | ifdown br-adm && ifup br-adm
65 | ifdown br-vm && ifup br-vm
66 | ifdown br-storage && ifup br-storage
67 | service openvswitch-switch restart
68 | ```
69 |
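Because the sequence above can cut the very SSH session you are typing in, a common safety net is to schedule a reboot before touching the interfaces, and cancel it once you are sure you can still log in (generic `at` usage, not an OpenSteak tool):

```bash
# Safety net: the server reboots (and comes back up with the config from
# /etc/network/interfaces) if the change locks you out
echo "reboot" | at now + 10 minutes
# ... apply the ifdown/ifup sequence above, verify your access, then:
atq         # find the job number of the pending reboot
atrm 1      # cancel it (replace 1 with the job number shown by atq)
```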
--------------------------------------------------------------------------------
/docs/BASE.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | **Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
4 |
5 | - [Base infra install](#base-infra-install)
6 |   - [Dependencies](#dependencies)
7 |   - [ Clone this repo](#clone-this-repo)
8 |   - [ Create config file from template](#create-config-file-from-template)
9 |   - [Libvirt default pool](#libvirt-default-pool)
10 |   - [ Import binaries](#import-binaries)
11 |
12 |
13 |
14 | # Base infra install
15 | At this point, we assume that you already have an Ubuntu 14.04 server up and running.
16 |
17 | ## Dependencies
18 |
19 | As **root**, run:
20 |
21 | ```bash
22 | wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
23 | dpkg -i puppetlabs-release-trusty.deb
24 | apt-get update
25 | apt-get upgrade
26 | apt-get dist-upgrade
27 | apt-get install vim git hiera ntp virtinst genisoimage curl qemu-system-x86 qemu-system-common qemu-keymaps ceph-common ipxe-qemu openvswitch-switch puppet
28 | service ntp restart
29 | service libvirt-bin restart
30 | ```
31 |
32 | ## Clone this repo
33 |
34 | We expect you to clone this repo into the /usr/local/ folder, as the scripts will look for the necessary files there.
35 |
36 | ```bash
37 | cd /usr/local
38 | git clone https://github.com/Orange-OpenSource/opnfv.git opensteak
39 | ```
40 |
41 | ## Create config file from template
42 |
43 | The common.yaml file is the only file that you should tweak in order to set up your OpenSteak installation.
44 |
45 | ```bash
46 | cp /usr/local/opensteak/infra/config/common.yaml.tpl /usr/local/opensteak/infra/config/common.yaml
47 | vim /usr/local/opensteak/infra/config/common.yaml
48 | ```
49 |
50 | To help you generate Ceph keys, you can use:
51 |
52 | ```bash
53 | ceph-authtool --gen-print-key
54 | ```
55 |
56 | To generate UUIDs, you can use:
57 |
58 | ```bash
59 | uuidgen
60 | ```
61 |
62 | ## Libvirt default pool
63 | Libvirt needs a pool to store the virtual machines:
64 |
65 | ```bash
66 | cd /usr/local/opensteak/infra/kvm/
67 | virsh pool-create default_pool.xml
68 | ```
69 |
70 | ## Import binaries
71 | To help deploy the future controller VMs:
72 |
73 | ```bash
74 | cp bin/* /usr/local/bin/
75 | chmod +x /usr/local/bin/opensteak*
76 | ```
77 |
78 | This will create at least the opensteak-create-vm script, which will help you automatically create OpenSteak VMs.
79 |
--------------------------------------------------------------------------------
/docs/Bridge topology.odp:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Orange-OpenSource/opnfv/9d1e12b17dced4bda1a392adcef2d617fce1d5f2/docs/Bridge topology.odp
--------------------------------------------------------------------------------
/docs/COMPUTES.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | **Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
4 |
5 | - [Compute](#compute)
6 |   - [Puppet](#puppet)
7 |   - [Create a volume](#create-a-volume)
8 |
9 |
10 |
11 | # Compute
12 | Each compute node is configured through Puppet as well. To continue, be sure to have a base install with a valid network configuration.
13 |
14 | ## Puppet
15 |
16 | ```bash
17 | puppet agent -t -v
18 | ```
19 |
20 | Test if it works well from keystone:
21 |
22 | ```bash
23 | cd /root
24 | source os-creds-admin
25 | openstack compute service list
26 | +------------------+-------------+----------+----------+-------+----------------------------+
27 | | Binary           | Host        | Zone     | Status   | State | Updated At                 |
28 | +------------------+-------------+----------+----------+-------+----------------------------+
29 | | nova-consoleauth | nova        | internal | enabled  | up    | 2015-02-26T14:09:04.000000 |
30 | | nova-scheduler   | nova        | internal | enabled  | up    | 2015-02-26T14:09:03.000000 |
31 | | nova-conductor   | nova        | internal | enabled  | up    | 2015-02-26T14:09:04.000000 |
32 | | nova-cert        | nova        | internal | enabled  | up    | 2015-02-26T14:09:04.000000 |
33 | | nova-compute     | opensteak93 | nova     | enabled  | up    | 2015-02-26T14:08:57.000000 |
34 | +------------------+-------------+----------+----------+-------+----------------------------+
35 | openstack host list
36 | +-------------+-------------+----------+
37 | | Host Name   | Service     | Zone     |
38 | +-------------+-------------+----------+
39 | | nova        | consoleauth | internal |
40 | | nova        | scheduler   | internal |
41 | | nova        | conductor   | internal |
42 | | nova        | cert        | internal |
43 | | opensteak93 | compute     | nova     |
44 | +-------------+-------------+----------+
45 |
46 | ```
47 |
48 |
49 | Test if neutron-openvswitch-agent is present as well:
50 |
51 | ```bash
52 | neutron agent-list
53 | +--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
54 | | id                                   | agent_type         | host        | alive | admin_state_up | binary                    |
55 | +--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
56 | | f7447c91-6cf5-49b1-a83d-4defc330e6eb | Open vSwitch agent | opensteak93 | :-)   | True           | neutron-openvswitch-agent |
57 | +--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
58 | ```
59 |
60 |
61 | ## Create a volume
62 | Once your compute node is ready, you can check that volumes are correctly created in Ceph.
63 |
64 | From keystone, create a volume:
65 |
66 |
67 | ```bash
68 | cd /root
69 | source os-creds-admin
70 |
71 | cinder create --display-name demo-volume1 1
72 | +---------------------+--------------------------------------+
73 | | Property            | Value                                |
74 | +---------------------+--------------------------------------+
75 | | attachments         | []                                   |
76 | | availability_zone   | nova                                 |
77 | | bootable            | false                                |
78 | | created_at          | 2015-03-04T14:36:12.422919           |
79 | | display_description | None                                 |
80 | | display_name        | demo-volume1                         |
81 | | encrypted           | False                                |
82 | | id                  | 6d343dde-1525-488f-85a3-1985614551ea |
83 | | metadata            | {}                                   |
84 | | size                | 1                                    |
85 | | snapshot_id         | None                                 |
86 | | source_volid        | None                                 |
87 | | status              | creating                             |
88 | | volume_type         | None                                 |
89 | +---------------------+--------------------------------------+
90 |
91 | cinder list
92 | +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
93 | | ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
94 | +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
95 | | 6d343dde-1525-488f-85a3-1985614551ea | available | demo-volume1 | 1    | None        | false    |             |
96 | +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
97 |
98 |
99 | ```
100 |
101 | From a compute node, try:
102 |
103 | ```bash
104 | rbd -p vms ls
105 |
106 | ```
107 |
108 | This should return a line with the volume id, like:
109 |
110 | ```bash
111 | volume-6d343dde-1525-488f-85a3-1985614551ea
112 | ```
113 |
114 |
--------------------------------------------------------------------------------
/docs/CONTROLLERS.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | **Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
4 |
5 | - [Controllers VM installation](#controllers-vm-installation)
6 |   - [Puppet master](#puppet-master)
7 |   - [DNS](#dns)
8 |   - [RabbitMQ](#rabbitmq)
9 |   - [MySQL](#mysql)
10 |   - [Keystone](#keystone)
11 |   - [Glance](#glance)
12 |     - [With Ceph](#with-ceph)
13 |       - [Convert from qcow2 to raw](#convert-from-qcow2-to-raw)
14 |       - [Upload to glance](#upload-to-glance)
15 |   - [Nova (controller part)](#nova-controller-part)
16 |   - [Neutron (controller part)](#neutron-controller-part)
17 |   - [Cinder](#cinder)
18 | - [Move a VM](#move-a-vm)
19 |   - [ On host A (old host)](#on-host-a-old-host)
20 |     - [Shutdown VM](#shutdown-vm)
21 |     - [Delete the VM](#delete-the-vm)
22 |   - [ On host B (new host)](#on-host-b-new-host)
23 |     - [ Create the VM](#create-the-vm)
24 |     - [Start the VM](#start-the-vm)
25 |
26 |
27 |
28 | # Controllers VM installation
29 |
30 | Each controller part of OpenStack is installed in a KVM-based virtual machine.
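All of the sections below rely on the same helper script; the general pattern is the following (only the flags that actually appear on this page are shown -- check `opensteak-create-vm` itself for the full usage):

```bash
# General pattern used below; <name> and <profile> are placeholders
opensteak-create-vm --name <name> [--cloud-init <profile>]
```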
31 |
32 | ```bash
33 | cd /usr/local/opensteak/infra/kvm/vm_configs
34 | ```
35 |
36 | ## Puppet master
37 |
38 | This is the first machine that we install, as it contains all the configuration for the other machines.
39 |
40 | To create the machine, run:
41 |
42 | ```bash
43 | opensteak-create-vm --name puppet --cloud-init puppet-master
44 | ```
45 |
46 | It should configure itself by grabbing the *common.yaml* file from */usr/local/opensteak/infra/config/common.yaml*.
47 |
48 | r10k will also fetch all the Puppet modules that are needed.
49 |
50 | ## DNS
51 |
52 | ```bash
53 | opensteak-create-vm --name dns --cloud-init dns
54 | ```
55 |
56 | ## RabbitMQ
57 |
58 | ```bash
59 | opensteak-create-vm --name rabbitmq1
60 | ```
61 |
62 | ## MySQL
63 |
64 | ```bash
65 | opensteak-create-vm --name mysql1
66 | ```
67 |
68 | Check that it listens correctly on 0.0.0.0:3306:
69 |
70 | ```bash
71 | netstat -laputen | grep 3306
72 | ```
73 |
74 | You can connect to MySQL with (the password is defined in your common.yaml file):
75 |
76 | ```bash
77 | mysql -u root -p
78 | ```
79 |
80 | ## Keystone
81 |
82 | ```bash
83 | opensteak-create-vm --name keystone1
84 | ```
85 |
86 | Test that it works (SSH into the VM first):
87 |
88 | ```bash
89 | cd /root
90 | source os-creds-admin
91 | openstack service list
92 | ```
93 |
94 | You should have:
95 |
96 | ```bash
97 | +----------------------------------+----------+-----------+
98 | | ID                               | Name     | Type      |
99 | +----------------------------------+----------+-----------+
100 | | 28efd5d67e444d4abde377562394ff05 | neutron  | network   |
101 | | 41c2b2f2603944a4ae141ad10ad9b436 | cinderv2 | volumev2  |
102 | | 7443585082eb4b0bbb7ff062831d5ce8 | nova_ec2 | ec2       |
103 | | 96bf087f57514478b73474d3ec5b5050 | cinder   | volume    |
104 | | b4f2a47081e94b98bc5cff5d13bc4999 | nova     | compute   |
105 | | dfba7dbe490e4dce9c5b4b93e647df15 | keystone | identity  |
106 | | ef303427135847abbb2e979fb03ff819 | glance   | image     |
107 | | f3144de7a2e244768486237fbcfd4819 | novav3   | computev3 |
108 | +----------------------------------+----------+-----------+
109 | ```
110 |
111 | ## Glance
112 |
113 | ```bash
114 | opensteak-create-vm --name glance1
115 | ```
116 | ### First test
117 | From the keystone node:
118 | ```bash
119 | cd /root
120 | source os-creds-admin
121 | glance image-list
122 | +--------------------------------------+--------------+-------------+------------------+----------+--------+
123 | | ID                                   | Name         | Disk Format | Container Format | Size     | Status |
124 | +--------------------------------------+--------------+-------------+------------------+----------+--------+
125 | +--------------------------------------+--------------+-------------+------------------+----------+--------+
126 | ```
127 |
128 | ### Import cirros image
129 |
130 | ```bash
131 | mkdir images && cd images
132 | wget http://cdn.download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
133 | glance image-create \
134 |   --name "cirros-0.3.3-x86_64" \
135 |   --file cirros-0.3.3-x86_64-disk.img \
136 |   --disk-format qcow2 \
137 |   --container-format bare \
138 |   --visibility public \
139 |   --progress
140 | ```
141 |
142 | ### Import Ubuntu 14.04 image
143 |
144 | ```bash
145 | wget https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
146 | glance image-create \
147 |   --name "Ubuntu 14.04.1 LTS" \
148 |   --file trusty-server-cloudimg-amd64-disk1.img \
149 |   --disk-format qcow2 \
150 |   --container-format bare \
151 |   --visibility public \
152 |   --progress
153 | ```
154 |
155 | ### Check image list
156 | ```bash
157 | glance image-list
158 | +--------------------------------------+--------------------+-------------+------------------+-----------+--------+
159 | | ID                                   | Name               | Disk Format | Container Format | Size      | Status |
160 | +--------------------------------------+--------------------+-------------+------------------+-----------+--------+
161 | | 5d01c794-d63a-43e2-a356-3d0912b6b046 | CirrOS 0.3.1       | qcow2       | bare             | 13147648  | active |
162 | | 92ad5e49-12fa-4d29-8a7a-c7dea05823cd | Ubuntu 14.04.1 LTS | qcow2       | bare             | 267452928 | active |
163 | +--------------------------------------+--------------------+-------------+------------------+-----------+--------+
164 | ```
165 |
166 | ### With Ceph
167 | When Ceph is installed as a backend for Glance, you must upload images in raw format instead of qcow2:
168 |
169 | #### Convert from qcow2 to raw
170 |
171 | ```bash
172 | qemu-img convert -f qcow2 -O raw trusty-server-cloudimg-amd64-disk1.img trusty-server-cloudimg-amd64-disk1.raw
173 | ```
174 |
175 | #### Upload to glance
176 |
177 | ```bash
178 | glance image-create \
179 |   --name "Ubuntu 14.04.1 LTS" \
180 |   --file trusty-server-cloudimg-amd64-disk1.raw \
181 |   --disk-format raw \
182 |   --container-format bare \
183 |   --is-public True \
184 |   --progress
185 | ```
186 |
187 | ## Nova (controller part)
188 |
189 | ```bash
190 | opensteak-create-vm --name nova1
191 | ```
192 |
193 | Test if it works well from keystone:
194 |
195 | ```bash
196 | cd /root
197 | source os-creds-admin
198 | openstack compute service list
199 | +------------------+------+----------+---------+-------+----------------------------+
200 | | Binary           | Host | Zone     | Status  | State | Updated At                 |
201 | +------------------+------+----------+---------+-------+----------------------------+
202 | | nova-consoleauth | nova | internal | enabled | up    | 2015-02-20T09:21:18.000000 |
203 | | nova-scheduler   | nova | internal | enabled | up    | 2015-02-20T09:17:44.000000 |
204 | | nova-conductor   | nova | internal | enabled | up    | 2015-02-20T09:18:28.000000 |
205 | | nova-cert        | nova | internal | enabled | up    | 2015-02-20T09:21:18.000000 |
206 | +------------------+------+----------+---------+-------+----------------------------+
207 | openstack host list
208 | +-----------+-------------+----------+
209 | | Host Name | Service     | Zone     |
210 | +-----------+-------------+----------+
211 | | nova      | consoleauth | internal |
212 | | nova      | scheduler   | internal |
213 | | nova      | conductor   | internal |
214 | | nova      | cert        | internal |
215 | +-----------+-------------+----------+
216 |
217 | ```
218 |
219 |
220 | ## Neutron (controller part)
221 |
222 | ```bash
223 | opensteak-create-vm --name neutron1
224 | ```
225 |
226 | Test if it works well from keystone:
227 |
228 | ```bash
229 | cd /root
230 | source os-creds-admin
231 | openstack extension list --network -c Name -c Alias
232 | +-----------------------------------------------+-----------------------+
233 | | Name                                          | Alias                 |
234 | +-----------------------------------------------+-----------------------+
235 | | security-group                                | security-group        |
236 | | L3 Agent Scheduler                            | l3_agent_scheduler    |
237 | | Neutron L3 Configurable external gateway mode | ext-gw-mode           |
238 | | Port Binding                                  | binding               |
239 | | Provider Network                              | provider              |
240 | | agent                                         | agent                 |
241 | | Quota management support                      | quotas                |
242 | | DHCP Agent Scheduler                          | dhcp_agent_scheduler  |
243 | | Multi Provider Network                        | multi-provider        |
244 | | Neutron external network                      | external-net          |
245 | | Neutron L3 Router                             | router                |
246 | | Allowed Address Pairs                         | allowed-address-pairs |
247 | | Neutron Extra DHCP opts                       | extra_dhcp_opt        |
248 | | Neutron Extra Route                           | extraroute            |
249 | +-----------------------------------------------+-----------------------+
250 | ```
251 |
252 |
253 | ## Cinder
254 |
255 |
256 | ```bash
257 | opensteak-create-vm --name cinder1
258 | ```
259 |
260 | Test if it works well from keystone:
261 |
262 | ```bash
263 | cd /root
264 | source os-creds-admin
265 | cinder service-list
266 | +------------------+--------+------+---------+-------+----------------------------+-----------------+
267 | | Binary           | Host   | Zone | Status  | State | Updated_at                 | Disabled Reason |
268 | +------------------+--------+------+---------+-------+----------------------------+-----------------+
269 | | cinder-scheduler | cinder | nova | enabled | up    | 2015-03-04T14:35:10.000000 | None            |
270 | | cinder-volume    | cinder | nova | enabled | up    | 2015-03-04T14:35:11.000000 | None            |
271 | +------------------+--------+------+---------+-------+----------------------------+-----------------+
272 |
273 | ```
274 |
275 | # Move a VM
276 | If you need to move a VM from one controller server to another, follow the steps below.
277 |
278 | We suppose that the VM disk is located on the Ceph mount point (/mnt/cephfs/).
279 |
280 | ## On host A (old host)
281 | ### Shutdown VM
282 | First, shut down the VM:
283 |
284 | ```bash
285 | virsh destroy glance1
286 | ```
287 |
288 | Despite the name, this will **NOT** delete the VM; it just forces it off. You can check the result with:
289 |
290 | ```bash
291 | virsh list --all
292 | ```
293 |
294 | ### Delete the VM
295 |
296 | ```bash
297 | virsh undefine glance1
298 | ```
299 |
300 | ## On host B (new host)
301 | ### Create the VM
302 | Now create the VM on the new host:
303 |
304 | ```bash
305 | cd /usr/local/opensteak/infra/kvm/vm_configs
306 | opensteak-create-vm --name glance1
307 | ```
308 |
309 | This will just create the VM config in a subfolder named glance1:
310 |
311 | ```bash
312 | ls glance1
313 | config.log glance1.xml meta-data user-data
314 | ```
315 |
316 | ### Start the VM
317 |
318 | ```bash
319 | virsh define glance1/glance1.xml
320 | virsh autostart glance1
321 | virsh start glance1 --console
322 | ```
323 |
324 |
--------------------------------------------------------------------------------
/docs/NETWORKS.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | **Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
4 |
5 | - [External network machine configuration (the one with neutron)](#external-network-machine-configuration-the-one-with-neutron)
6 |   - [Puppet](#puppet)
7 |   - [Verification](#verification)
8 |   - [ Create networks](#create-networks)
9 |   - [Run a VM to test](#run-a-vm-to-test)
10 |
11 |
12 |
13 | # External network machine configuration (the one with neutron)
14 |
15 | ## Puppet
16 | `puppet agent -t -v`
17 |
18 | ## Verification
19 | Test from keystone:
20 |
21 | ```bash
22 | root@keystone:~/images# neutron agent-list
23 | +--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
24 | | id                                   | agent_type         | host        | alive | admin_state_up | binary                    |
25 | +--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
26 | | 541ccb55-de6f-4e5b-bc28-0e362b42e672 | DHCP agent         | opensteak99 | :-)   | True           | neutron-dhcp-agent        |
27 | | 83e43c64-c8ec-49cc-b9ce-0a1da2a62758 | Metadata agent     | opensteak99 | :-)   | True           | neutron-metadata-agent    |
28 | | a7088fa9-d6c1-44a5-af40-3071e1933bd8 | L3 agent           | opensteak99 | :-)   | True           | neutron-l3-agent          |
29 | | a7768722-aa0f-48a8-bff8-d4af9c3d0992 | Open vSwitch agent | opensteak99 | :-)   | True           | neutron-openvswitch-agent |
30 | | f7447c91-6cf5-49b1-a83d-4defc330e6eb | Open vSwitch agent | opensteak93 | :-)   | True           | neutron-openvswitch-agent |
31 | +--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
32 | ```
33 |
34 | ## Create networks
35 | Commands to create the networks:
36 |
37 | ```bash
38 | neutron net-create Externe --router:external --provider:physical_network physnet-ex --provider:network_type flat
39 | neutron subnet-create --name "161.105.252.0/24" --allocation-pool start=161.105.252.107,end=161.105.252.108 --disable-dhcp --gateway 161.105.252.1 Externe 161.105.252.0/24
40 | neutron net-create demo
41 | neutron subnet-create --name "192.168.42.0/24" --gateway 192.168.42.1 demo 192.168.42.0/24
42 | neutron router-create demo-router
43 | neutron router-gateway-set demo-router Externe
44 | neutron router-interface-add demo-router "192.168.42.0/24"
45 | ```
46 |
47 | ## Run a VM to test
48 |
49 | From keystone, open the default security group:
50 |
51 | ```bash
52 | neutron security-group-rule-create --protocol icmp --direction ingress default
53 | neutron security-group-rule-create --protocol icmp --direction egress default
54 | neutron security-group-rule-create --protocol tcp --port-range-min 1 --port-range-max 65000 --direction ingress default
55 | neutron security-group-rule-create --protocol tcp --port-range-min 1 --port-range-max 65000 --direction egress default
56 | neutron security-group-rule-create --protocol udp --port-range-min 1 --port-range-max 65000 --direction ingress default
57 | neutron security-group-rule-create --protocol udp --port-range-min 1 --port-range-max 65000 --direction egress default
58 | ```
59 |
60 | Still from keystone, create a keypair and boot a test VM:
61 |
62 | ```bash
63 | ssh-keygen
64 | openstack keypair create --public-key /root/.ssh/id_rsa.pub demo-key
65 | openstack server create --flavor m1.small --image "Ubuntu 14.04.1 LTS" --nic net-id=demo --security-group default --key-name demo-key demo-instance1
66 |
67 | ```
68 |
69 | Then add a floating IP:
70 |
71 | ```bash
72 | neutron floatingip-create Externe
73 | nova floating-ip-associate demo-instance1 161.105.252.108
74 | ```
75 |
76 | And try to connect to the VM:
77 |
78 | ```bash
79 | ssh -i .ssh/id_rsa ubuntu@161.105.252.108 -o UserKnownHostsFile=/dev/null
80 | ```
81 |
--------------------------------------------------------------------------------
/docs/TEST.md:
--------------------------------------------------------------------------------
1 | # Testing
2 |
3 | ---
4 | ## Intro
5 | In order to perform functional and performance testing, we use a dedicated VM.
6 |
7 | TODO: add the test-tool VM to the architecture and puppetise the test-tool VM
8 |
9 | On this VM, we installed:
10 | * Rally: https://wiki.openstack.org/wiki/Rally
11 |
12 | TODO: RobotFramework (http://robotframework.org), SIPp (http://sipp.sourceforge.net/)
13 |
14 |
15 |
16 | ## Installation & Configuration
17 |
18 | ### Rally
19 |
20 | * Log on the test-tool VM
21 | * Install Rally (https://rally.readthedocs.org/en/latest/tutorial/step_0_installation.html)
22 |
23 | An openrc file (with info from your OpenStack) is needed; the password is required during the configuration procedure.
24 |
25 | * Check:
26 | ```bash
27 | # rally deployment check
28 | keystone endpoints are valid and following service are available:
29 | +-------------+-----------+------------+
30 | | Services    | Type      | Status     |
31 | +-----------+-------------+------------+
32 | | cinder    | volume      | Available  |
33 | | cinderv2  | volumev2    | Available  |
34 | | glance    | image       | Available  |
35 | | keystone  | identity    | Available  |
36 | | neutron   | network     | Available  |
37 | | nova      | compute     | Available  |
38 | | nova_ec2  | compute_ec2 | Available  |
39 | | novav3    | computev3   | Available  |
40 | +-----------+-------------+------------+
41 |
42 | ```
43 |
44 | * For Rally scenarios, follow https://rally.readthedocs.org/en/latest/tutorial/step_1_setting_up_env_and_running_benchmark_from_samples.html
45 | ```bash
46 | # rally task start ./samples/tasks/scenarios/nova/my-boot-and-delete.json
47 | --------------------------------------------------------------------------------
48 | Preparing input task
49 | --------------------------------------------------------------------------------
50 |
51 | Input task is:
52 | {
53 |     "NovaServers.boot_and_delete_server": [
54 |         {
55 |             "args": {
56 |                 "flavor": {
57 |                     "name": "m1.small"
58 |                 },
59 |                 "image": {
60 |                     "name": "^ubuntu-14.10-64b"
61 |                 },
62 |                 "force_delete": false
63 |             },
64 |             "runner": {
65 |                 "type": "constant",
66 |                 "times": 10,
67 |                 "concurrency": 2
68 |             },
69 |             "context": {
70 |                 "users": {
71 |                     "tenants": 3,
72 |                     "users_per_tenant": 2
73 |                 }
74 |             }
75 |         }
76 |     ]
77 | }
78 |
79 | --------------------------------------------------------------------------------
80 | Task f42c8aed-00a6-4715-9951-945b4fb97c32: started
81 | --------------------------------------------------------------------------------
82 |
83 | Benchmarking... This can take a while...
84 |
85 | To track task status use:
86 |
87 | rally task status
88 | or
89 | rally task detailed
90 |
91 | --------------------------------------------------------------------------------
92 | Task f42c8aed-00a6-4715-9951-945b4fb97c32: finished
93 | --------------------------------------------------------------------------------
94 |
95 | test scenario NovaServers.boot_and_delete_server
96 | args position 0
97 | args values:
98 | OrderedDict([(u'runner', OrderedDict([(u'type', u'constant'), (u'concurrency', 2), (u'times', 10)])), (u'args', OrderedDict([(u'force_delete', False), (u'flavor', OrderedDict([(u'name', u'm1.small')])), (u'image', OrderedDict([(u'name', u'^ubuntu-14.10-64b')]))])), (u'context', OrderedDict([(u'users', OrderedDict([(u'project_domain', u'default'), (u'users_per_tenant', 2), (u'tenants', 3), (u'resource_management_workers', 30), (u'user_domain', u'default')]))]))])
99 | +--------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
100 | | action             | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
101 | +--------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
102 | | nova.boot_server   | 4.675     | 5.554     | 6.357     | 6.289         | 6.323         | 100.0%  | 10    |
103 | | nova.delete_server | 2.365     | 3.301     | 4.728     | 4.553         | 4.64          | 100.0%  | 10    |
104 | | total              | 7.303     | 8.857     | 10.789    | 10.543        | 10.666        | 100.0%  | 10    |
105 | +--------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
106 | Load duration: 45.7972288132
107 | Full duration: 58.912060976
108 |
109 | HINTS:
110 | * To plot HTML graphics with this data, run:
111 | rally task report f42c8aed-00a6-4715-9951-945b4fb97c32 --out output.html
112 |
113 | * To get raw JSON output of task results, run:
114 | rally task results f42c8aed-00a6-4715-9951-945b4fb97c32
115 |
116 | Using task: f42c8aed-00a6-4715-9951-945b4fb97c32
117 |
118 | ```
119 | * For Tempest, follow the instructions at https://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler
120 | * At first, the Rally scenarios were fine but the Tempest scenarios failed due to configuration issues.
121 | Apply patch https://review.openstack.org/#/c/163330/:
122 | ```bash
123 | pip uninstall rally && cd ./rally && python setup.py install
124 | ```
125 |
126 | You should now be able to run Rally/Tempest against your OpenStack:
127 | ```bash
128 | root@rally:~/rally# rally verify start
129 | [...]
130 | tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest
131 | test_attach_volumes_with_nonexistent_volume_id[compute,gate,id-f5e56b0a-5d02-43c1-a2a7-c9b792c2e3f6,negative]FAIL
132 | test_create_volume_with_invalid_size[gate,id-1ed83a8a-682d-4dfb-a30e-ee63ffd6c049,negative]OK 0.02
133 | test_create_volume_with_nonexistent_snapshot_id[gate,id-0c36f6ae-4604-4017-b0a9-34fdc63096f9,negative]OK 0.04
134 | test_create_volume_with_nonexistent_source_volid[gate,id-47c73e08-4be8-45bb-bfdf-0c4e79b88344,negative]OK 0.05
135 | test_create_volume_with_nonexistent_volume_type[gate,id-10254ed8-3849-454e-862e-3ab8e6aa01d2,negative]OK 0.02
136 | test_create_volume_with_out_passing_size[gate,id-9387686f-334f-4d31-a439-33494b9e2683,negative]OK 0.02
137 | test_create_volume_with_size_negative[gate,id-8b472729-9eba-446e-a83b-916bdb34bef7,negative]OK 0.02
138 | [...]
139 | Ran 933 tests in 1020.200s
140 |
141 | FAILED (failures=186)
142 | Test set 'full' has been finished with error. Check log for details
143 |
144 | ```
145 |
146 | It is possible to get a better view of the results:
147 | ```bash
148 | # rally verify list
149 | +--------------------------------------+--------------------------------------+----------+-------+----------+----------------------------+----------------+----------+
150 | | UUID                                 | Deployment UUID                      | Set name | Tests | Failures | Created at                 | Duration       | Status   |
151 | +--------------------------------------+--------------------------------------+----------+-------+----------+----------------------------+----------------+----------+
152 | | b1de3608-dbee-40e7-84c4-1c756ca0347c | e7d70ddf-9be0-4681-9456-aa8dce515e0e | None     | 0     | 0        | 2015-03-11 08:48:04.416793 | 0:00:00.102275 | running  |
153 | | ff0d9285-184f-47d5-9474-7475135ae8cf | e7d70ddf-9be0-4681-9456-aa8dce515e0e | full     | 933   | 186      | 2015-03-11 09:57:01.836611 | 0:18:08.360204 | finished |
154 | | fec2fd0a-a4ef-4064-a292-95e9da68025c | e7d70ddf-9be0-4681-9456-aa8dce515e0e | full     | 933   | 186      | 2015-03-12 09:46:40.818691 | 0:17:02.316443 | finished |
155 | +--------------------------------------+--------------------------------------+----------+-------+----------+----------------------------+----------------+----------+
156 |
157 | rally verify show fec2fd0a-a4ef-4064-a292-95e9da68025c
158 | Total results of verification:
159 |
160 | +--------------------------------------+--------------------------------------+----------+-------+----------+----------------------------+----------+
161 | | UUID                                 | Deployment UUID                      | Set name | Tests | Failures | Created at                 | Status   |
162 | +--------------------------------------+--------------------------------------+----------+-------+----------+----------------------------+----------+
163 | | fec2fd0a-a4ef-4064-a292-95e9da68025c | e7d70ddf-9be0-4681-9456-aa8dce515e0e | full     | 933   | 186      | 2015-03-12 09:46:40.818691 | finished |
164 | +--------------------------------------+--------------------------------------+----------+-------+----------+----------------------------+----------+
165 |
166 | Tests:
167 |
168 | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+--------+
169 | | name                                                                                                                                                                                                        | time      | status |
170 | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+--------+
171 | | tearDownClass (tempest.api.image.v1.test_images.CreateRegisterImagesTest)                                                                                                                                   | 0.0       | FAIL   |
172 | | tearDownClass (tempest.api.image.v1.test_images.UpdateImageMetaTest)                                                                                                                                        | 0.0       | FAIL   |
173 | [...]
174 | | tempest.cli.simple_read_only.volume.test_cinder.SimpleReadOnlyCinderClientTest.test_cinder_quota_show[id-18166673-ffa8-4df3-b60c-6375532288bc]                                                             | 1.309555  | OK     |
175 | | tempest.cli.simple_read_only.volume.test_cinder.SimpleReadOnlyCinderClientTest.test_cinder_rate_limits[id-b2c66ed9-ca96-4dc4-94cc-8083e664e516]                                                            | 1.277704  | OK     |
176 | | tempest.cli.simple_read_only.volume.test_cinder.SimpleReadOnlyCinderClientTest.test_cinder_region_list[id-95a2850c-35b4-4159-bb93-51647a5ad232]                                                            | 1.105877  | FAIL   |
177 | | tempest.cli.simple_read_only.volume.test_cinder.SimpleReadOnlyCinderClientTest.test_cinder_retries_list[id-6d97fcd2-5dd1-429d-af70-030c949d86cd]                                                           | 1.306407  | OK     |
178 | | tempest.cli.simple_read_only.volume.test_cinder.SimpleReadOnlyCinderClientTest.test_cinder_service_list[id-301b5ae1-9591-4e9f-999c-d525a9bdf822]                                                           | 1.24909   | OK     |
179 | | tempest.cli.simple_read_only.volume.test_cinder.SimpleReadOnlyCinderClientTest.test_cinder_snapshot_list[id-7a19955b-807c-481a-a2ee-9d76733eac28]                                                          | 1.270242  | OK     |
180 | [...]
181 | | tempest.thirdparty.boto.test_s3_ec2_images.S3ImagesTest                                                                                                                                                    | 0.0       | SKIP   |
182 | | tempest.thirdparty.boto.test_s3_objects.S3BucketsTest.test_create_get_delete_object[id-4eea567a-b46a-405b-a475-6097e1faebde]                                                                               | 0.239222  | FAIL   |
183 | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+--------+
184 |
185 |
186 | ```
187 | Rally includes a reporting tool:
188 | https://rally.readthedocs.org/en/latest/tutorial/step_1_setting_up_env_and_running_benchmark_from_samples.html
189 |
190 |
191 |
192 |
193 |
194 | ## Test description
195 |
196 | ### Rally
197 |
198 | By default, the different Rally scenarios are:
199 | ```bash
200 |
201 | ls samples/tasks/scenarios/
202 | authenticate cinder dummy heat mistral nova README.rst sahara vm
203 | ceilometer designate glance keystone neutron quotas requests tempest-do-not-run-against-production zaqar
204 |
205 | ```
206 |
207 | Tempest tests can be retrieved at https://github.com/openstack/tempest
208 |
209 |
210 | ## Automation
211 |
--------------------------------------------------------------------------------
/docs/archi_reseau.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Orange-OpenSource/opnfv/9d1e12b17dced4bda1a392adcef2d617fce1d5f2/docs/archi_reseau.png
--------------------------------------------------------------------------------
/docs/archi_reseau_20150206.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Orange-OpenSource/opnfv/9d1e12b17dced4bda1a392adcef2d617fce1d5f2/docs/archi_reseau_20150206.png
--------------------------------------------------------------------------------
/docs/bridge_topology_allinone.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Orange-OpenSource/opnfv/9d1e12b17dced4bda1a392adcef2d617fce1d5f2/docs/bridge_topology_allinone.jpg
--------------------------------------------------------------------------------
/docs/bridge_topology_controller_compute.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Orange-OpenSource/opnfv/9d1e12b17dced4bda1a392adcef2d617fce1d5f2/docs/bridge_topology_controller_compute.jpg
-------------------------------------------------------------------------------- /docs/bridge_topology_network.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Orange-OpenSource/opnfv/9d1e12b17dced4bda1a392adcef2d617fce1d5f2/docs/bridge_topology_network.jpg -------------------------------------------------------------------------------- /docs/ha.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Orange-OpenSource/opnfv/9d1e12b17dced4bda1a392adcef2d617fce1d5f2/docs/ha.png -------------------------------------------------------------------------------- /docs/opensteak_ha.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Orange-OpenSource/opnfv/9d1e12b17dced4bda1a392adcef2d617fce1d5f2/docs/opensteak_ha.png -------------------------------------------------------------------------------- /docs/opensteak_ha_20150209.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Orange-OpenSource/opnfv/9d1e12b17dced4bda1a392adcef2d617fce1d5f2/docs/opensteak_ha_20150209.png -------------------------------------------------------------------------------- /infra/.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | env/ 12 | build/ 13 | develop-eggs/ 14 | dist/ 15 | downloads/ 16 | eggs/ 17 | .eggs/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | var/ 23 | *.egg-info/ 24 | .installed.cfg 25 | *.egg 26 | 27 | # PyInstaller 28 | # Usually these files are written by a python script from a template 29 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
30 | *.manifest 31 | *.spec 32 | 33 | # Installer logs 34 | pip-log.txt 35 | pip-delete-this-directory.txt 36 | 37 | # Unit test / coverage reports 38 | htmlcov/ 39 | .tox/ 40 | .coverage 41 | .coverage.* 42 | .cache 43 | nosetests.xml 44 | coverage.xml 45 | *,cover 46 | 47 | # Translations 48 | *.mo 49 | *.pot 50 | 51 | # Django stuff: 52 | *.log 53 | 54 | # Sphinx documentation 55 | docs/_build/ 56 | 57 | # PyBuilder 58 | target/ 59 | -------------------------------------------------------------------------------- /infra/config/arnaud.yaml: -------------------------------------------------------------------------------- 1 | domains: "infra.opensteak.fr" 2 | media: "Ubuntu mirror" 3 | environments: "production" 4 | operatingsystems: "Ubuntu 14.04 Cloud" 5 | subnets: "Admin" 6 | compute_profiles: "Test" 7 | smart_proxies: "foreman.infra.opensteak.fr" 8 | ptables: "Preseed default" 9 | architectures: "x86_64" 10 | defaultController: "controller1.infra.opensteak.fr" 11 | hostgroups: "controller_VM" 12 | 13 | operatingSystemsList: 14 | "Ubuntu 14.04.2 LTS": 15 | name: "Ubuntu" 16 | description: "Ubuntu 14.04.2 LTS" 17 | major: "14" 18 | minor: "04" 19 | family: "Debian" 20 | release_name: "trusty" 21 | password_hash: "MD5" 22 | templates: 23 | - Preseed default 24 | - Preseed default PXELinux 25 | - Preseed default finish 26 | media: 27 | - Ubuntu mirror 28 | ptables: 29 | - Preseed default 30 | "Ubuntu 14.04 Cloud": 31 | name: "Ubuntu14.04Cloud" 32 | description: "Ubuntu 14.04 Cloud" 33 | major: "14" 34 | minor: "04" 35 | family: "Debian" 36 | release_name: "trusty" 37 | password_hash: "MD5" 38 | templates: 39 | - Preseed default 40 | - Preseed default user data 41 | - Preseed default finish 42 | media: 43 | - Ubuntu mirror 44 | ptables: 45 | - Preseed default 46 | 47 | configTemplatesList: 48 | Preseed default: "foreman/provisioning_templates/preseed_default.tpl" 49 | Preseed default finish: "foreman/provisioning_templates/preseed_default_finish.tpl" 50 | Preseed default PXELinux: "foreman/provisioning_templates/preseed_default_pxelinux.tpl" 51 | 52 | hostgroupTop: 53 | name: 'test' 54 | classes: 55 | - "ntp" 56 | - "opensteak::puppet" 57 | subnet: "Admin" 58 | params: 59 | password: 'password' 60 | 61 | hostgroupsList: 62 | hostgroupController: 63 | name: 'controller' 64 | classes: 65 | - "opensteak::base-network" 66 | - "opensteak::libvirt" 67 | params: 68 | global_sshkey: #Build in the configure_foreman.py 69 | hostgroupControllerVM: 70 | name: 'controller_VM' 71 | classes: 72 | - "opensteak::apt" 73 | params: 74 | global_sshkey: #Build in the configure_foreman.py 75 | password: 'password' 76 | hostgroupCompute: 77 | name: 'compute' 78 | classes: 79 | - "opensteak::neutron-compute" 80 | - "opensteak::nova-compute" 81 | hostgroupNetwork: 82 | name: 'network' 83 | classes: 84 | - "opensteak::neutron-network" 85 | 86 | subnetsList: 87 | Admin: 88 | domain: 'infra.opensteak.fr' 89 | shared: False 90 | data: 91 | network: "192.168.4.0" 92 | mask: "255.255.255.0" 93 | vlanid: 94 | gateway: "192.168.4.1" 95 | dns_primary: "192.168.4.4" 96 | from: "192.168.4.20" 97 | to: "192.168.4.170" 98 | ipam: "DHCP" 99 | boot_mode: "DHCP" 100 | Storage: 101 | domain: 'storage.opensteak.fr' 102 | shared: False 103 | data: 104 | network: "192.168.0.0" 105 | mask: "255.255.255.0" 106 | vlanid: 107 | from: "192.168.0.20" 108 | to: "192.168.0.170" 109 | ipam: "DHCP" 110 | boot_mode: "DHCP" 111 | VM: 112 | domain: 'vm.opensteak.fr' 113 | shared: False 114 | data: 115 | network: "192.168.2.0" 116 | mask: 
"255.255.255.0" 117 | vlanid: 118 | from: "192.168.2.20" 119 | to: "192.168.2.170" 120 | ipam: "DHCP" 121 | boot_mode: "DHCP" 122 | 123 | foreman: 124 | ip: "192.168.4.4" 125 | admin: "admin" 126 | password: "password" 127 | cpu: "2" 128 | ram: "2097152" 129 | iso: "trusty-server-cloudimg-amd64-disk1.img" 130 | disksize: "5G" 131 | force: True 132 | dns: "8.8.8.8" 133 | bridge: "br-libvirt" 134 | bridge_type: "openvswitch" 135 | templatesFolder: "foreman/templates" 136 | filesFolder: "foreman/files" 137 | classes: 138 | "opensteak::dhcp": 139 | dnsdomain: #Build with the other parameters 140 | interfaces: 141 | - "eth0" 142 | # - "eth1" 143 | # - "eth2" 144 | pools: #Build with the other parameters 145 | "opensteak::known-hosts": 146 | known_hosts_file: "/usr/share/foreman/.ssh/known_hosts" 147 | hosts: #Build with the controller list 148 | owner: foreman 149 | "opensteak::metadata": 150 | foreman_admin: #Build from foreman::admin 151 | foreman_password: #Build from foreman::password 152 | foreman_fqdn: #Build from domain name 153 | 154 | controllersList: 155 | controller1: 156 | controllerName: "controller1.infra.opensteak.fr" 157 | operatingSystem: "Ubuntu 14.04.2 LTS" 158 | macAddress: "00:1a:a0:cd:14:5a" 159 | password: "password" 160 | ipmiMacAddress: "40:f2:e9:2a:4d:e6" 161 | impiIpAddress: "192.168.1.199" 162 | impiUser: "user" 163 | impiPassword: "password" 164 | params: 165 | ovs_config: 166 | - "br-adm:eth2:dhcp" 167 | - "br-vm:eth1:dhcp" 168 | - "br-ex:eth0:none" 169 | 170 | controllersAttributes: 171 | cloudImagePath: '/var/lib/libvirt/images/trusty-server-cloudimg-amd64-disk1.img' 172 | adminBridge: 'br-adm' 173 | 174 | computesList: 175 | compute1: 176 | name: "compute1.infra.opensteak.fr" 177 | operatingSystem: "Ubuntu 14.04.2 LTS" 178 | macAddress: "00:24:e8:d2:43:2a" 179 | password: "password" 180 | ipmiMacAddress: "40:f2:e9:2a:4d:e2" 181 | impiIpAddress: "192.168.4.252" 182 | impiUser: "user" 183 | impiPassword: "password" 184 | params: 185 | bridge_uplinks: 186 | - "br-vm:wlan0" 187 | 188 | opensteak: 189 | vm_list: 190 | - mysql 191 | - rabbitmq 192 | - keystone 193 | - glance 194 | - nova 195 | - neutron 196 | - cinder 197 | 198 | vm: 199 | mysql: 200 | puppet_classes: 201 | - "opensteak::mysql" 202 | description: "Opensteak infra MySQL" 203 | rabbitmq: 204 | puppet_classes: 205 | - "opensteak::rabbitmq" 206 | description: "Opensteak infra RabbitMQ" 207 | keystone: 208 | puppet_classes: 209 | - "opensteak::keystone" 210 | - "opensteak::key" 211 | description: "Opensteak infra Keystone" 212 | glance: 213 | puppet_classes: 214 | - "opensteak::glance" 215 | description: "Opensteak infra Glance" 216 | nova: 217 | puppet_classes: 218 | - "opensteak::nova" 219 | description: "Opensteak infra Nova Controller" 220 | neutron: 221 | puppet_classes: 222 | - "opensteak::neutron-controller" 223 | description: "Opensteak infra Neutron Controller" 224 | cinder: 225 | puppet_classes: 226 | - "opensteak::cinder" 227 | description: "Opensteak infra Cinder" 228 | 229 | -------------------------------------------------------------------------------- /infra/config/infra.sample.yaml: -------------------------------------------------------------------------------- 1 | domains: "infra.opensteak.fr" 2 | media: "Ubuntu mirror" 3 | environments: "production" 4 | operatingsystems: "Ubuntu 14.04 Cloud" 5 | subnets: "Admin" 6 | compute_profiles: "Test" 7 | smart_proxies: "foreman.infra.opensteak.fr" 8 | ptables: "Preseed default" 9 | architectures: "x86_64" 10 | defaultController: 
"controller1.infra.opensteak.fr" 11 | hostgroups: "controller_VM" 12 | 13 | operatingSystemsList: 14 | "Ubuntu 14.04.2 LTS": 15 | name: "Ubuntu" 16 | description: "Ubuntu 14.04.2 LTS" 17 | major: "14" 18 | minor: "04" 19 | family: "Debian" 20 | release_name: "trusty" 21 | password_hash: "MD5" 22 | templates: 23 | - Preseed default 24 | - Preseed default PXELinux 25 | - Preseed default finish 26 | media: 27 | - Ubuntu mirror 28 | ptables: 29 | - Preseed default 30 | "Ubuntu 14.04 Cloud": 31 | name: "Ubuntu14.04Cloud" 32 | description: "Ubuntu 14.04 Cloud" 33 | major: "14" 34 | minor: "04" 35 | family: "Debian" 36 | release_name: "trusty" 37 | password_hash: "MD5" 38 | templates: 39 | - Preseed default 40 | - Preseed default user data 41 | - Preseed default finish 42 | media: 43 | - Ubuntu mirror 44 | ptables: 45 | - Preseed default 46 | 47 | configTemplatesList: 48 | Preseed default: "foreman/provisioning_templates/preseed_default.tpl" 49 | Preseed default finish: "foreman/provisioning_templates/preseed_default_finish.tpl" 50 | Preseed default PXELinux: "foreman/provisioning_templates/preseed_default_pxelinux.tpl" 51 | 52 | hostgroupTop: 53 | name: 'opensteak' 54 | classes: 55 | - "ntp" 56 | - "opensteak::puppet" 57 | subnet: "Admin" 58 | params: 59 | password: "password" 60 | 61 | hostgroupsList: 62 | hostgroupController: 63 | name: 'controller' 64 | classes: 65 | - "opensteak::base-network" 66 | - "opensteak::libvirt" 67 | params: 68 | global_sshkey: #Build in the configure_foreman.py 69 | hostgroupControllerVM: 70 | name: 'controller_VM' 71 | classes: 72 | - "opensteak::apt" 73 | params: 74 | global_sshkey: #Build in the configure_foreman.py 75 | password: "password" 76 | hostgroupCompute: 77 | name: 'compute' 78 | classes: 79 | - "opensteak::neutron-compute" 80 | - "opensteak::nova-compute" 81 | hostgroupNetwork: 82 | name: 'network' 83 | classes: 84 | - "opensteak::neutron-network" 85 | 86 | subnetsList: 87 | Admin: 88 | domain: 'infra.opensteak.fr' 89 | shared: False 90 | data: 91 | network: "192.168.1.0" 92 | mask: "255.255.255.0" 93 | vlanid: 94 | gateway: "192.168.1.1" 95 | dns_primary: "192.168.1.2" 96 | from: "192.168.1.20" 97 | to: "192.168.1.170" 98 | ipam: "DHCP" 99 | boot_mode: "DHCP" 100 | Storage: 101 | domain: 'storage.opensteak.fr' 102 | shared: False 103 | data: 104 | network: "192.168.0.0" 105 | mask: "255.255.255.0" 106 | vlanid: 107 | from: "192.168.0.20" 108 | to: "192.168.0.170" 109 | ipam: "DHCP" 110 | boot_mode: "DHCP" 111 | VM: 112 | domain: 'vm.opensteak.fr' 113 | shared: False 114 | data: 115 | network: "192.168.2.0" 116 | mask: "255.255.255.0" 117 | vlanid: 118 | from: "192.168.2.20" 119 | to: "192.168.2.170" 120 | ipam: "DHCP" 121 | boot_mode: "DHCP" 122 | 123 | foreman: 124 | ip: "192.168.1.5" 125 | admin: "admin" 126 | password: "password" 127 | cpu: "2" 128 | ram: "2097152" 129 | iso: "trusty-server-cloudimg-amd64-disk1.img" 130 | disksize: "5G" 131 | force: True 132 | dns: "127.0.0.1 8.8.8.8" # do not remove 127.0.0.1 from the list 133 | bridge: "virbr0" 134 | bridge_type: "linuxbridge" # set to openvswitch if you are Arnaud 135 | templatesFolder: "foreman/templates" 136 | filesFolder: "foreman/files" 137 | classes: 138 | "opensteak::dhcp": 139 | dnsdomain: #Build with the other parameters 140 | interfaces: 141 | - "eth0" 142 | # - "eth1" 143 | # - "eth2" 144 | pools: #Build with the other parameters 145 | "opensteak::known-hosts": 146 | known_hosts_file: "/usr/share/foreman/.ssh/known_hosts" 147 | hosts: #Build with the controller list 148 | owner: 
foreman 149 | "opensteak::metadata": 150 | foreman_admin: #Build from foreman::admin 151 | foreman_password: #Build from foreman::password 152 | foreman_fqdn: #Build from domain name 153 | 154 | controllersList: 155 | controller98: 156 | controllerName: "controller98.infra.opensteak.fr" 157 | operatingSystem: "Ubuntu 14.04.2 LTS" 158 | macAddress: "40:f2:e9:2a:30:3b" 159 | password: "password" 160 | ipmiMacAddress: "40:f2:e9:2a:30:3e" 161 | impiIpAddress: "192.168.1.198" 162 | impiUser: "user" 163 | impiPassword: "password" 164 | params: 165 | ovs_config: 166 | - "br-adm:em3:dhcp" 167 | - "br-vm:em5:dhcp" 168 | - "br-ex:em2:none" 169 | 170 | controllersAttributes: 171 | cloudImagePath: '/var/lib/libvirt/images/trusty-server-cloudimg-amd64-disk1.img' 172 | adminBridge: 'br-adm' 173 | 174 | computesList: 175 | compute99: 176 | name: "compute99.infra.opensteak.fr" 177 | operatingSystem: "Ubuntu 14.04.2 LTS" 178 | macAddress: "00:24:e8:d2:43:2a" 179 | password: "password" 180 | ipmiMacAddress: "40:f2:e9:2a:4d:e6" 181 | impiIpAddress: "192.168.1.199" 182 | impiUser: "user" 183 | impiPassword: "password" 184 | params: 185 | bridge_uplinks: 186 | - "br-vm:em5" 187 | - "br-ex:em2" 188 | 189 | opensteak: 190 | vm_list: 191 | - mysql 192 | - rabbitmq 193 | - keystone 194 | - glance 195 | - nova 196 | - neutron 197 | - cinder 198 | 199 | vm: 200 | mysql: 201 | puppet_classes: 202 | - "opensteak::mysql" 203 | description: "Opensteak infra MySQL" 204 | rabbitmq: 205 | puppet_classes: 206 | - "opensteak::rabbitmq" 207 | description: "Opensteak infra RabbitMQ" 208 | keystone: 209 | puppet_classes: 210 | - "opensteak::keystone" 211 | - "opensteak::key" 212 | description: "Opensteak infra Keystone" 213 | glance: 214 | puppet_classes: 215 | - "opensteak::glance" 216 | description: "Opensteak infra Glance" 217 | nova: 218 | puppet_classes: 219 | - "opensteak::nova" 220 | description: "Opensteak infra Nova Controller" 221 | neutron: 222 | puppet_classes: 223 | - "opensteak::neutron-controller" 224 | description: "Opensteak infra Neutron Controller" 225 | cinder: 226 | puppet_classes: 227 | - "opensteak::cinder" 228 | description: "Opensteak infra Cinder" 229 | 230 | -------------------------------------------------------------------------------- /infra/configure_foreman.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 2 | # -*- coding: utf-8 -*- 3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may 4 | # not use this file except in compliance with the License. You may obtain 5 | # a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT 11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the 12 | # License for the specific language governing permissions and limitations 13 | # under the License. 
14 | # 15 | # Authors: 16 | # @author: David Blaisonneau 17 | # @author: Arnaud Morin 18 | 19 | from opensteak.conf import OpenSteakConfig 20 | from foreman import Foreman 21 | from opensteak.printer import OpenSteakPrinter 22 | import argparse 23 | from pprint import pprint as pp 24 | 25 | 26 | p = OpenSteakPrinter() 27 | 28 | # 29 | # Check for params 30 | # 31 | p.header("Check parameters") 32 | args = {} 33 | 34 | # Update args with values from CLI 35 | parser = argparse.ArgumentParser(description='This script will configure ' 36 | 'foreman.', 37 | usage='%(prog)s [options]') 38 | parser.add_argument('-c', '--config', 39 | help='YAML config file to use (default is ' 40 | 'config/infra.yaml).', 41 | default='config/infra.yaml') 42 | parser.add_argument('-d', '--disable_update', 43 | help='Disable Puppet class update on foreman. This can ' 44 | 'be used when the configuration has already ' 45 | 'been done.', 46 | default=False, 47 | action='store_true') 48 | args.update(vars(parser.parse_args())) 49 | 50 | # Open config file 51 | conf = OpenSteakConfig(config_file=args["config"]) 52 | 53 | a = {} 54 | a["admin"] = conf["foreman"]["admin"] 55 | a["password"] = conf["foreman"]["password"] 56 | a["ip"] = conf["foreman"]["ip"] 57 | # Update args with values from config file 58 | args.update(a) 59 | del a 60 | 61 | # p.list_id(args) 62 | 63 | # 64 | # Prepare classes 65 | # 66 | foreman = Foreman(login=args["admin"], 67 | password=args["password"], 68 | ip=args["ip"]) 69 | 70 | ############################################## 71 | p.header("Check smart proxy") 72 | ############################################## 73 | 74 | smartProxy = conf['smart_proxies'] 75 | smartProxyId = foreman.smartProxies[smartProxy]['id'] 76 | p.status(bool(smartProxyId), 'Check smart proxy ' + smartProxy + ' exists') 77 | 78 | ############################################## 79 | p.header("Check and create - Environment production") 80 | ############################################## 81 | 82 | environment = conf['environments'] 83 | environmentId = foreman.environments.checkAndCreate(environment, {}) 84 | p.status(bool(environmentId), 'Environment ' + environment) 85 | 86 | ############################################## 87 | p.header("Get puppet classes") 88 | ############################################## 89 | 90 | # Reload the smart proxy to get the latest puppet classes 91 | if not args["disable_update"]: 92 | p.status(bool(foreman.smartProxies.importPuppetClasses(smartProxyId)), 93 | 'Import puppet classes from proxy '+smartProxy, 94 | '{}\n >> {}'.format(foreman.api.errorMsg, foreman.api.url)) 95 | 96 | # Get the list of puppet classes ids 97 | puppetClassesId = {} 98 | puppetClassesId['hostgroupTop'] = {} 99 | for pclass in conf['hostgroupTop']['classes']: 100 | puppetClassesId['hostgroupTop'][pclass] =\ 101 | foreman.puppetClasses[pclass]['id'] 102 | p.status(bool(puppetClassesId['hostgroupTop'][pclass]), 103 | 'Puppet Class "{}"'.format(pclass)) 104 | for name in conf['hostgroupsList']: 105 | puppetClassesId[name] = {} 106 | for pclass in conf['hostgroupsList'][name]['classes']: 107 | puppetClassesId[name][pclass] = foreman.puppetClasses[pclass]['id'] 108 | p.status(bool(puppetClassesId[name][pclass]), 109 | 'Puppet Class "{}"'.format(pclass)) 110 | puppetClassesId['foreman'] = {} 111 | for pclass in conf['foreman']['classes']: 112 | puppetClassesId['foreman'][pclass] =\ 113 | foreman.puppetClasses[pclass]['id'] 114 | p.status(bool(puppetClassesId['foreman'][pclass]), 115 | 'Puppet Class "{}"'.format(pclass)) 116
| 117 | ############################################## 118 | p.header("Check templates") 119 | ############################################## 120 | 121 | tplList = [] 122 | for data in conf['operatingSystemsList'].values(): 123 | if 'templates' in data.keys(): 124 | tplList.extend(data['templates']) 125 | tplList = set(tplList) 126 | for tpl in tplList: 127 | p.status(tpl in foreman.configTemplates, 128 | 'Template "{}" is present'.format(tpl)) 129 | 130 | # Overwrite some provisioning templates with files 131 | for templateName, templateFile in conf['configTemplatesList'].items(): 132 | # Overwrite only if template exists in foreman 133 | if templateName in foreman.configTemplates: 134 | with open(templateFile) as f: 135 | data = f.read() 136 | # Set 137 | foreman.configTemplates[templateName]['template'] = data 138 | # Check 139 | p.status(foreman.configTemplates[templateName]['template'] == data, 140 | 'Template "{}" is set'.format(templateName)) 141 | 142 | ############################################## 143 | p.header("Check and create - OS") 144 | ############################################## 145 | 146 | operatingSystems = conf['operatingSystemsList'] 147 | osIds = set() 148 | for os, data in operatingSystems.items(): 149 | templates = media = ptables = [] 150 | if 'templates' in data.keys(): 151 | templates = data.pop('templates') 152 | if 'media' in data.keys(): 153 | media = data.pop('media') 154 | if 'ptables' in data.keys(): 155 | ptables = data.pop('ptables') 156 | osId = foreman.operatingSystems.checkAndCreate(os, data) 157 | p.status(bool(osId), 'Operating system ' + os) 158 | osIds.add(osId) 159 | # Set medias 160 | media_ids = list(map(lambda x: foreman.media[x]['id'], media)) 161 | foreman.operatingSystems[os]['media'] = media_ids 162 | p.status(list(foreman.operatingSystems[os]['media'].keys()) == media, 163 | 'Media for operating system ' + os) 164 | # Set ptables 165 | ptables_ids = list(map(lambda x: foreman.ptables[x]['id'], ptables)) 166 | foreman.operatingSystems[os]['ptables'] = ptables_ids 167 | p.status(list(foreman.operatingSystems[os] 168 | ['ptables'].keys()) == ptables, 169 | 'PTables for operating system ' + os) 170 | # Set templates 171 | # - First check OS is added in the template 172 | for tpl in templates: 173 | p.status(foreman.configTemplates[tpl].checkOrAddOS(os, osId), 174 | "OS {} is in template {}".format(os, tpl)) 175 | for tpl in templates: 176 | p.status(foreman.operatingSystems[os].checkOrAddDefaultTemplate( 177 | foreman.configTemplates[tpl]), 178 | 'Add template "{}" to Operating system {}'.format(tpl, os)) 179 | 180 | ############################################## 181 | p.header("Check and create - Architecture") 182 | ############################################## 183 | 184 | architecture = conf['architectures'] 185 | architectureId = foreman.architectures.checkAndCreate(architecture, {}, osIds) 186 | p.status(bool(architectureId), 'Architecture ' + architecture) 187 | 188 | ############################################## 189 | p.header("Check and create - Domains") 190 | ############################################## 191 | 192 | domain = conf['domains'] 193 | domainId = foreman.domains.checkAndCreate(domain, {}) 194 | p.status(bool(domainId), 'Domain ' + domain) 195 | 196 | ############################################## 197 | p.header("Check and create - Subnets") 198 | ############################################## 199 | 200 | confSubnets = conf['subnetsList'] 201 | for name, data in confSubnets.items(): 202 | payload = data['data'] 203 | 
if 'dhcp_id' not in data['data'].keys(): 204 | payload['dhcp_id'] = smartProxyId 205 | if 'tftp_id' not in data['data'].keys(): 206 | payload['tftp_id'] = smartProxyId 207 | if 'dns_id' not in data['data'].keys(): 208 | payload['dns_id'] = smartProxyId 209 | subnetId = foreman.subnets.checkAndCreate(name, payload, domainId) 210 | netmaskshort = sum([bin(int(x)).count('1') 211 | for x in data['data']['mask'].split('.')]) 212 | p.status(bool(subnetId), 213 | 'Subnet {} ({}/{})'.format(name, 214 | data['data']['network'], 215 | netmaskshort)) 216 | 217 | ############################################## 218 | p.header("Check and create - Hostgroups") 219 | ############################################## 220 | 221 | hg_parent = conf['hostgroupTop']['name'] 222 | payload = {"environment_name": conf['environments'], 223 | "subnet_name": conf['hostgroupTop']['subnet'], 224 | "domain_name": domain} 225 | hg_parentId = foreman.hostgroups.checkAndCreate( 226 | hg_parent, 227 | payload, 228 | conf['hostgroupTop'], 229 | False, 230 | puppetClassesId['hostgroupTop'] 231 | ) 232 | p.status(bool(hg_parentId), 'Hostgroup {}'.format(hg_parent)) 233 | 234 | for hg in conf['hostgroupsList'].keys(): 235 | key = hg_parent + '_' + conf['hostgroupsList'][hg]['name'] 236 | payload = {"title": hg_parent + '/' + conf['hostgroupsList'][hg]['name'], 237 | "parent_id": hg_parentId} 238 | # Get back SSH key from foreman/files/id_rsa.pub file 239 | if ('params' in conf['hostgroupsList'][hg] and 240 | 'global_sshkey' in conf['hostgroupsList'][hg]['params'] and 241 | conf['hostgroupsList'][hg]['params']['global_sshkey'] is None): 242 | with open("{0}/id_rsa.pub".format(conf['foreman']['filesFolder']), 243 | 'r') as content_file: 244 | fullKeyString = content_file.read() 245 | keyParts = fullKeyString.strip().split(None, 3) 246 | conf['hostgroupsList'][hg]['params']['global_sshkey'] = keyParts[1] 247 | p.status(bool(foreman.hostgroups.checkAndCreate(key, payload, 248 | conf['hostgroupsList'][hg], 249 | hg_parent, 250 | puppetClassesId[hg])), 251 | 'Sub Hostgroup {}'.format(conf['hostgroupsList'][hg]['name'])) 252 | 253 | 254 | ############################################## 255 | p.header("Authorize Foreman to do puppet runs") 256 | ############################################## 257 | 258 | foreman.settings['puppetrun']['value'] = 'true' 259 | p.status(foreman.settings['puppetrun']['value'], 260 | 'Set puppetrun parameter to True') 261 | 262 | ############################################## 263 | p.header("Configure Foreman host") 264 | ############################################## 265 | 266 | hostName = "foreman.{}".format(conf['domains']) 267 | foremanHost = foreman.hosts[hostName] 268 | 269 | # Add puppet classes to foreman 270 | p.status(foreman.hosts[hostName].checkAndCreateClasses( 271 | puppetClassesId['foreman'].values()), 272 | "Add puppet classes to foreman host") 273 | 274 | # Add smart class parameters of opensteak::dhcp to foreman 275 | className = 'opensteak::dhcp' 276 | scp = foreman.puppetClasses[className].smartClassParametersList() 277 | for k, v in conf['foreman']['classes'][className].items(): 278 | if v is None: 279 | # Construct the var pools if no value 280 | if k == 'pools': 281 | v = {'pools': dict()} 282 | for subn in conf['subnetsList'].values(): 283 | v['pools'][subn['domain']] = dict() 284 | v['pools'][subn['domain']]['network'] = subn['data']['network'] 285 | v['pools'][subn['domain']]['netmask'] = subn['data']['mask'] 286 | v['pools'][subn['domain']]['range'] =\ 287 | subn['data']['from'] + ' ' + 
subn['data']['to'] 288 | if 'gateway' in subn['data'].keys(): 289 | v['pools'][subn['domain']]['gateway'] =\ 290 | subn['data']['gateway'] 291 | # Construct the var dnsdomain if no value 292 | elif k == 'dnsdomain': 293 | v = list() 294 | for subn in conf['subnetsList'].values(): 295 | v.append(subn['domain']) 296 | revZone = subn['data']['network'].split('.')[::-1] 297 | revMask = subn['data']['mask'].split('.')[::-1] 298 | while revMask[0] != '255': 299 | revZone = revZone[1::] 300 | revMask = revMask[1::] 301 | v.append('.'.join(revZone) + '.in-addr.arpa') 302 | scp_id = scp[k] 303 | foreman.smartClassParameters[scp_id].setOverrideValue(v, hostName) 304 | 305 | # Add smart class parameters of opensteak::metadata to foreman 306 | className = "opensteak::metadata" 307 | scp = foreman.puppetClasses[className].smartClassParametersList() 308 | for k, v in conf['foreman']['classes'][className].items(): 309 | if v is None: 310 | # Construct the vars 311 | if k == 'foreman_admin': 312 | v = conf['foreman']['admin'] 313 | elif k == 'foreman_password': 314 | v = conf['foreman']['password'] 315 | elif k == 'foreman_fqdn': 316 | v = 'foreman.{}'.format(conf['domains']) 317 | scp_id = scp[k] 318 | foreman.smartClassParameters[scp_id].setOverrideValue(v, hostName) 319 | 320 | # Add smart class parameters of opensteak::known hosts to foreman 321 | className = "opensteak::known-hosts" 322 | scp = foreman.puppetClasses[className].smartClassParametersList() 323 | for k, v in conf['foreman']['classes'][className].items(): 324 | if v is None: 325 | # Construct the var hosts if no value 326 | if k == 'hosts': 327 | v = ','.join(list(map(lambda x: x['controllerName'], 328 | conf['controllersList'].values()))) 329 | scp_id = scp[k] 330 | foreman.smartClassParameters[scp_id].setOverrideValue(v, hostName) 331 | 332 | foremanSCP = set([x['parameter'] 333 | for x in foreman.hosts[hostName] 334 | ['smart_class_parameters'].values()]) 335 | awaitedSCP = set(conf['foreman']['classes'][className].keys()) 336 | p.status(awaitedSCP.issubset(foremanSCP), 337 | "Add smart class parameters to class {} on foreman host" 338 | .format(className)) 339 | 340 | # Run puppet on foreman 341 | p.status(bool(foreman.hosts[hostName].puppetRun()), 342 | 'Run puppet on foreman host') 343 | 344 | 345 | ############################################## 346 | p.header("Add controller nodes") 347 | ############################################## 348 | 349 | for c in conf['controllersList']: 350 | 351 | # Create the controller 352 | cConf = conf['controllersList'][c] 353 | hostName = cConf['controllerName'] 354 | payload = { 355 | "host": { 356 | "name": hostName, 357 | "environment_id": foreman.environments[conf['environments']]['id'], 358 | "mac": cConf['macAddress'], 359 | "domain_id": foreman.domains[conf['domains']]['id'], 360 | "subnet_id": foreman.subnets[conf['subnets']]['id'], 361 | "ptable_id": foreman.ptables[conf['ptables']]['id'], 362 | "medium_id": foreman.media[conf['media']]['id'], 363 | "architecture_id": foreman.architectures[ 364 | conf['architectures']]['id'], 365 | "operatingsystem_id": foreman.operatingSystems[ 366 | cConf['operatingSystem']]['id'], 367 | "puppet_proxy_id": foreman.smartProxies[ 368 | conf['smart_proxies']]['id'], 369 | "hostgroup_id": foreman.hostgroups['{}_{}'.format( 370 | conf['hostgroupTop']['name'], 371 | conf['hostgroupsList']['hostgroupController']['name'])]['id'], 372 | "root_pass": cConf['password'] 373 | } 374 | } 375 | payloadBMC = {"ip": cConf['impiIpAddress'], 376 | "mac": 
cConf['ipmiMacAddress'], 377 | "type": "bmc", 378 | "managed": False, 379 | "identifier": "ipmi", 380 | "username": cConf['impiUser'], 381 | "password": cConf['impiPassword'], 382 | "provider": "IPMI", 383 | "virtual": False} 384 | controllerId = foreman.hosts.createController(hostName, 385 | payload, payloadBMC) 386 | p.status(bool(controllerId), "Create controller {}".format(hostName)) 387 | 388 | # Configure OVS - for opensteak::base-network 389 | ovs_config = cConf['params']['ovs_config'] 390 | pClass = 'opensteak::base-network' 391 | scp = foreman.puppetClasses[pClass].smartClassParametersList() 392 | scp_id = scp['ovs_config'] 393 | res = foreman.smartClassParameters[scp_id].setOverrideValue(ovs_config, 394 | hostName) 395 | p.status(bool(res), 396 | "Configure {} for the controller {}".format(pClass, hostName)) 397 | # Configure OVS - for opensteak::libvirt 398 | pClass = 'opensteak::libvirt' 399 | scp = foreman.puppetClasses[pClass].smartClassParametersList() 400 | scp_id = scp['ovs_config'] 401 | res = foreman.smartClassParameters[scp_id].setOverrideValue(ovs_config, 402 | hostName) 403 | p.status(bool(res), 404 | "Configure {} for the controller {}".format(pClass, hostName)) 405 | 406 | # Add the controller to the list of computeRessources 407 | res = foreman.computeResources.checkAndCreate( 408 | hostName, 409 | {'description': 'OpenSteak compute ressource', 410 | 'display_type': 'SPICE', 411 | 'name': hostName, 412 | 'provider': 'Libvirt', 413 | 'set_console_password': True, 414 | 'url': 'qemu+ssh://ubuntu@{}/system'.format(hostName)}) 415 | p.status(bool(res), 416 | 'Set {} as a compute ressource'.format(hostName)) 417 | 418 | computeRessourcesId = foreman.computeResources[hostName]['id'] 419 | # Declare the cloud image to the controller 420 | if 'UbuntuCloudImage' not in foreman.computeResources[ 421 | computeRessourcesId]['images'].keys(): 422 | image = {'architecture_id': foreman.architectures[ 423 | conf['architectures']]['id'], 424 | 'compute_resource_id': computeRessourcesId, 425 | 'name': 'UbuntuCloudImage', 426 | 'operatingsystem_id': foreman.operatingSystems[ 427 | conf['operatingsystems']]['id'], 428 | 'username': 'ubuntu', 429 | 'uuid': conf['controllersAttributes']['cloudImagePath']} 430 | p.status(bool(foreman.computeResources[foreman.computeResources[ 431 | hostName]['id']]['images'].append(image)), 432 | 'Add image "UbuntuCloudImage" in the compute ' 433 | 'ressource {}'.format(hostName)) 434 | 435 | ############################################## 436 | p.header("Add compute nodes") 437 | ############################################## 438 | 439 | for c in conf['computesList']: 440 | 441 | # Create the compute 442 | cConf = conf['computesList'][c] 443 | hostName = cConf['name'] 444 | payload = { 445 | "host": { 446 | "name": hostName, 447 | "environment_id": foreman.environments[conf['environments']]['id'], 448 | "mac": cConf['macAddress'], 449 | "domain_id": foreman.domains[conf['domains']]['id'], 450 | "subnet_id": foreman.subnets[conf['subnets']]['id'], 451 | "ptable_id": foreman.ptables[conf['ptables']]['id'], 452 | "medium_id": foreman.media[conf['media']]['id'], 453 | "architecture_id": foreman.architectures[ 454 | conf['architectures']]['id'], 455 | "operatingsystem_id": foreman.operatingSystems[ 456 | cConf['operatingSystem']]['id'], 457 | "puppet_proxy_id": foreman.smartProxies[ 458 | conf['smart_proxies']]['id'], 459 | "hostgroup_id": foreman.hostgroups['{}_{}'.format( 460 | conf['hostgroupTop']['name'], 461 | 
conf['hostgroupsList']['hostgroupCompute']['name'])]['id'], 462 | "root_pass": cConf['password'] 463 | } 464 | } 465 | payloadBMC = {"ip": cConf['impiIpAddress'], 466 | "mac": cConf['ipmiMacAddress'], 467 | "type": "bmc", 468 | "managed": False, 469 | "identifier": "ipmi", 470 | "username": cConf['impiUser'], 471 | "password": cConf['impiPassword'], 472 | "provider": "IPMI", 473 | "virtual": False} 474 | computeId = foreman.hosts.createController( hostName, 475 | payload, payloadBMC) 476 | p.status(bool(computeId), "Create compute {}".format(hostName)) 477 | 478 | # # Configure OVS - for opensteak::base-network 479 | # ovs_config = cConf['params']['ovs_config'] 480 | # pClass = 'opensteak::base-network' 481 | # scp = foreman.puppetClasses[pClass].smartClassParametersList() 482 | # scp_id = scp['ovs_config'] 483 | # res = foreman.smartClassParameters[scp_id].setOverrideValue(ovs_config, 484 | # hostName) 485 | # p.status(bool(res), 486 | # "Configure {} for the controller {}".format(pClass, hostName)) 487 | 488 | # Configure Neutron compute 489 | bridge_uplinks = cConf['params']['bridge_uplinks'] 490 | pClass = 'opensteak::neutron-compute' 491 | scp = foreman.puppetClasses[pClass].smartClassParametersList() 492 | scp_id = scp['bridge_uplinks'] 493 | res = foreman.smartClassParameters[scp_id].setOverrideValue(bridge_uplinks, 494 | hostName) 495 | p.status(bool(res), 496 | "Configure {} for the controller {}".format(pClass, hostName)) 497 | 498 | # # Add the controller to the list of computeRessources 499 | # res = foreman.computeResources.checkAndCreate( 500 | # hostName, 501 | # {'description': 'OpenSteak compute ressource', 502 | # 'display_type': 'SPICE', 503 | # 'name': hostName, 504 | # 'provider': 'Libvirt', 505 | # 'set_console_password': True, 506 | # 'url': 'qemu+ssh://ubuntu@{}/system'.format(hostName)}) 507 | # p.status(bool(res), 508 | # 'Set {} as a compute ressource'.format(hostName)) 509 | # 510 | # computeRessourcesId = foreman.computeResources[hostName]['id'] 511 | # # Declare the cloud image to the controller 512 | # if 'UbuntuCloudImage' not in foreman.computeResources[ 513 | # computeRessourcesId]['images'].keys(): 514 | # image = {'architecture_id': foreman.architectures[ 515 | # conf['architectures']]['id'], 516 | # 'compute_resource_id': computeRessourcesId, 517 | # 'name': 'UbuntuCloudImage', 518 | # 'operatingsystem_id': foreman.operatingSystems[ 519 | # conf['operatingsystems']]['id'], 520 | # 'username': 'ubuntu', 521 | # 'uuid': conf['controllersAttributes']['cloudImagePath']} 522 | # p.status(bool(foreman.computeResources[foreman.computeResources[ 523 | # hostName]['id']]['images'].append(image)), 524 | # 'Add image "UbuntuCloudImage" in the compute ' 525 | # 'ressource {}'.format(hostName)) 526 | 527 | # ############################################# 528 | # p.header("Clean") 529 | # ############################################# 530 | # 531 | # # Delete Sub Hostgroups 532 | # for hg in conf['hostgroupsList'].keys(): 533 | # key = hg_parent + '_' + conf['hostgroupsList'][hg]['name'] 534 | # del(foreman.hostgroups[key]) 535 | # p.status(bool(key not in foreman.hostgroups), 536 | # 'Delete sub hostgroup {}'.format(conf[ 537 | # 'hostgroupsList'][hg]['name'])) 538 | # 539 | # # Delete Top hostgroup 540 | # hg_parent = conf['hostgroupTop']['name'] 541 | # del(foreman.hostgroups[hg_parent]) 542 | # p.status(bool(hg_parent not in foreman.hostgroups), 543 | # 'Delete top hostgroup {}'.format(hg_parent)) 544 | # 545 | # # Delete subnets or remove domain from its 546 | # 
for name, data in confSubnets.items(): 547 | # subnetId = foreman.subnets[name]['id'] 548 | # p.status(foreman.subnets.removeDomain(subnetId, domainId), 549 | # 'Remove domain {} from subnet {}'.format(domain, name)) 550 | # if not data['shared']: 551 | # del(foreman.subnets[subnetId]) 552 | # p.status(name not in foreman.subnets, 'Delete subnet ' + name) 553 | # 554 | # # Delete domain 555 | # del(foreman.domains[domainId]) 556 | # p.status(domainId not in foreman.domains, 'Delete domain ' + domain) 557 | -------------------------------------------------------------------------------- /infra/create_foreman.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 2 | # -*- coding: utf-8 -*- 3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may 4 | # not use this file except in compliance with the License. You may obtain 5 | # a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT 11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the 12 | # License for the specific language governing permissions and limitations 13 | # under the License. 14 | # 15 | # Authors: 16 | # @author: David Blaisonneau 17 | # @author: Arnaud Morin 18 | 19 | """ 20 | Create Virtual Machines 21 | """ 22 | 23 | # TODO: be sure that we are running as root 24 | 25 | from opensteak.conf import OpenSteakConfig 26 | from opensteak.printer import OpenSteakPrinter 27 | # from opensteak.argparser import OpenSteakArgParser 28 | from opensteak.templateparser import OpenSteakTemplateParser 29 | from opensteak.virsh import OpenSteakVirsh 30 | from pprint import pprint as pp 31 | # from ipaddress import IPv4Address 32 | import argparse 33 | import tempfile 34 | import shutil 35 | import os 36 | # import sys 37 | 38 | p = OpenSteakPrinter() 39 | 40 | # 41 | # Check for params 42 | # 43 | p.header("Check parameters") 44 | args = {} 45 | 46 | # Update args with values from CLI 47 | parser = argparse.ArgumentParser(description='This script will create a foreman VM.', usage='%(prog)s [options]') 48 | parser.add_argument('-c', '--config', help='YAML config file to use (default is config/infra.yaml).', default='config/infra.yaml') 49 | args.update(vars(parser.parse_args())) 50 | 51 | # Open config file 52 | conf = OpenSteakConfig(config_file=args["config"]) 53 | # pp(conf.dump()) 54 | 55 | a = {} 56 | a["name"] = "foreman" 57 | a["ip"] = conf["foreman"]["ip"] 58 | a["netmask"] = conf["subnetsList"]["Admin"]["data"]["mask"] 59 | a["netmaskshort"] = sum([bin(int(x)).count('1') 60 | for x in conf["subnetsList"]["Admin"] 61 | ["data"]["mask"] 62 | .split('.')]) 63 | a["gateway"] = conf["subnetsList"]["Admin"]["data"]["gateway"] 64 | a["network"] = conf["subnetsList"]["Admin"]["data"]["network"] 65 | a["admin"] = conf["foreman"]["admin"] 66 | a["password"] = conf["foreman"]["password"] 67 | a["cpu"] = conf["foreman"]["cpu"] 68 | a["ram"] = conf["foreman"]["ram"] 69 | a["iso"] = conf["foreman"]["iso"] 70 | a["disksize"] = conf["foreman"]["disksize"] 71 | a["force"] = conf["foreman"]["force"] 72 | a["dhcprange"] = "{0} {1}".format(conf["subnetsList"]["Admin"] 73 | ["data"]["from"], 74 | conf["subnetsList"]["Admin"] 75 | ["data"]["to"]) 76 | a["domain"] = conf["domains"] 77 | reverse_octets = str(conf["foreman"]["ip"]).split('.')[-2::-1] 78 | a["reversedns"] =
'.'.join(reverse_octets) + '.in-addr.arpa' 79 | a["dns"] = conf["foreman"]["dns"] 80 | a["bridge"] = conf["foreman"]["bridge"] 81 | if conf["foreman"]["bridge_type"] == "openvswitch": 82 | a["bridgeconfig"] = "" 83 | else: 84 | # no specific config for linuxbridge 85 | a["bridgeconfig"] = "" 86 | 87 | # Update args with values from config file 88 | args.update(a) 89 | del a 90 | 91 | p.list_id(args) 92 | 93 | # Ask confirmation 94 | if args["force"] is not True: 95 | p.ask_validation() 96 | 97 | # Create the VM 98 | p.header("Initiate configuration") 99 | 100 | ### 101 | # Work on templates 102 | ### 103 | # Create temporary folders and files 104 | tempFolder = tempfile.mkdtemp(dir="/tmp") 105 | tempFiles = {} 106 | 107 | for f in os.listdir(conf["foreman"]["templatesFolder"]): 108 | tempFiles[f] = "{0}/{1}".format(tempFolder, f) 109 | try: 110 | OpenSteakTemplateParser("{0}/{1}".format( 111 | conf["foreman"]["templatesFolder"], f), 112 | tempFiles[f], args) 113 | except Exception as err: 114 | p.status(False, msg=("Something went wrong when trying to create " 115 | "the file {0} from the template " 116 | "{1}/{2}").format(tempFiles[f], 117 | conf["foreman"]["templatesFolder"], 118 | f), 119 | failed="{0}".format(err)) 120 | 121 | ### 122 | # Work on files 123 | ### 124 | for f in os.listdir(conf["foreman"]["filesFolder"]): 125 | tempFiles[f] = "{0}/{1}".format(tempFolder, f) 126 | fFullPath = "{0}/{1}".format(conf["foreman"]["filesFolder"], f) 127 | if os.path.isdir(fFullPath): 128 | shutil.copytree(fFullPath, "{0}/{1}".format(tempFiles[f], f)) 129 | else: 130 | shutil.copyfile(fFullPath, tempFiles[f]) 131 | 132 | p.status(True, msg="Temporary files created:") 133 | p.list_id(tempFiles) 134 | 135 | 136 | ### 137 | # Delete if already exists 138 | ### 139 | 140 | # Get all volumes and VM 141 | p.header("Virsh calls") 142 | OpenSteakVirsh = OpenSteakVirsh() 143 | volumeList = OpenSteakVirsh.volumeList() 144 | domainList = OpenSteakVirsh.domainList() 145 | # p.list_id(volumeList) 146 | # p.list_id(domainList) 147 | 148 | # TODO: check that the default image is in the list 149 | # (trusty-server-cloudimg-amd64-disk1.img by default) 150 | 151 | # Delete the volume if exists 152 | try: 153 | oldVolume = volumeList[args["name"]] 154 | 155 | # Ask confirmation 156 | if args["force"] is not True: 157 | p.ask_validation() 158 | 159 | status = OpenSteakVirsh.volumeDelete(volumeList[args["name"]]) 160 | if (status["stderr"]): 161 | p.status(False, msg=status["stderr"]) 162 | p.status(True, msg=status["stdout"]) 163 | except KeyError as err: 164 | # no old volume, do nothing 165 | pass 166 | 167 | # Delete the VM if exists 168 | try: 169 | vmStatus = domainList[args["name"]] 170 | 171 | # Ask confirmation 172 | if args["force"] is not True: 173 | p.ask_validation() 174 | 175 | # Destroy (stop) 176 | if vmStatus == "running": 177 | status = OpenSteakVirsh.domainDestroy(args["name"]) 178 | if (status["stderr"]): 179 | p.status(False, msg=status["stderr"]) 180 | p.status(True, msg=status["stdout"]) 181 | 182 | # Undefine (delete) 183 | status = OpenSteakVirsh.domainUndefine(args["name"]) 184 | if (status["stderr"]): 185 | p.status(False, msg=status["stderr"]) 186 | p.status(True, msg=status["stdout"]) 187 | except KeyError as err: 188 | # no old VM defined, do nothing 189 | pass 190 | 191 | ### 192 | # Create the configuration image file from metadata and userdata 193 | ### 194 | status = OpenSteakVirsh.generateConfiguration(args["name"], tempFiles) 195 | if (status["stderr"]): 196 | p.status(False, 
msg=status["stderr"]) 197 | p.status(True, msg=("Configuration generated successfully in " 198 | "/var/lib/libvirt/images/{0}-configuration.iso") 199 | .format(args["name"])) 200 | 201 | # Refresh the pool 202 | status = OpenSteakVirsh.poolRefresh() 203 | if (status["stderr"]): 204 | p.status(False, msg=status["stderr"]) 205 | p.status(True, msg=status["stdout"]) 206 | 207 | ### 208 | # Create the new VM 209 | ### 210 | # Create the volume from a clone 211 | status = OpenSteakVirsh.volumeClone(args["iso"], args["name"]) 212 | if (status["stderr"]): 213 | p.status(False, msg=status["stderr"]) 214 | p.status(True, msg=status["stdout"]) 215 | 216 | # Resize the volume 217 | status = OpenSteakVirsh.volumeResize(args["name"], args["disksize"]) 218 | if (status["stderr"]): 219 | p.status(False, msg=status["stderr"]) 220 | p.status(True, msg=status["stdout"]) 221 | 222 | # Create the VM 223 | status = OpenSteakVirsh.domainDefine(tempFiles["kvm-config"]) 224 | if (status["stderr"]): 225 | p.status(False, msg=status["stderr"]) 226 | p.status(True, msg=status["stdout"]) 227 | 228 | 229 | ### 230 | # Start the VM 231 | ### 232 | status = OpenSteakVirsh.domainStart(args["name"]) 233 | if (status["stderr"]): 234 | p.status(False, msg=status["stderr"]) 235 | p.status(True, msg=status["stdout"]) 236 | 237 | p.status(True, msg="Log file is at: /var/log/libvirt/qemu/{0}-serial.log" 238 | .format(args["name"])) 239 | 240 | p.header("Done") 241 | 242 | # Delete temporary dir 243 | shutil.rmtree(tempFolder) 244 | -------------------------------------------------------------------------------- /infra/foreman/README.md: -------------------------------------------------------------------------------- 1 | The files located in the "files" folder will be copied as-is into the foreman VM configuration disk. 2 | The files located in the "templates" folder will be rendered and then copied into the foreman VM configuration disk; a sketch of this rendering step follows below. 3 | 4 | The configuration disk should be mounted on /mnt.
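For context on the rendering step above: create_foreman.py processes each template with `OpenSteakTemplateParser(source, destination, args)` before copying the result onto the configuration disk. The real substitution logic lives in opensteak/templateparser.py; the stand-in below is only a sketch of the call shape, and its use of `$key` placeholders in the sense of Python's `string.Template` is an assumption, not the repository's actual syntax.

```python
# Sketch only: a stand-in with the same call shape as OpenSteakTemplateParser
# in create_foreman.py (template path in, rendered file out, args dict).
# Assumes $key placeholders in the style of string.Template; the real
# substitution syntax is defined in opensteak/templateparser.py.
from string import Template


class MiniTemplateParser:
    def __init__(self, src, dst, args):
        # Read the template, substitute known keys, leave unknown ones intact
        with open(src) as template_file:
            rendered = Template(template_file.read()).safe_substitute(args)
        with open(dst, "w") as output_file:
            output_file.write(rendered)


# Example call, mirroring the loop in create_foreman.py (paths hypothetical):
# MiniTemplateParser("foreman/templates/user-data",
#                    "/tmp/tmpdir/user-data",
#                    {"name": "foreman", "ip": "192.168.1.5"})
```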
5 | -------------------------------------------------------------------------------- /infra/foreman/files/id_rsa: -------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | MIIEpQIBAAKCAQEAz0jMplucYXoe0xJ21ASL98PGbwZYCI5Xr4/kHXOdGvHvZr3z 3 | 58tWU1Ta4qMf0qa272VsdQiO1pCmSlqrDW5C9rEeqLhhRX/yLbgv35mOdjRoIIAX 4 | 6RfNniT/xXrfvPZYdw603fIbbw5igTRwc6W5QvJHRcKRKb762Vw2gPSS0GgFBLCk 5 | vC2kQbW4cfP+9elo86FAhNBs2TbBHLc9H2W+9KzYfgsigjJLsgRXL6/uhu3+sL2d 6 | 3F1J9Nhyy3aoUOVxD2YPJlJvzYhLZcSXgXI+Oi0gZmhh3uImc4WRyOihK5jRpJaw 7 | desygyXo4lVskzxBjm7L9ynbCNMOO85ZVVJGxQIDAQABAoIBAQCaOWcSy4yRtiPj 8 | FZTV8MAXS1GD36t2SjoRhLTL+O5GUwW1YtVrfA2xmKv2/jm6KJJpkgPdG83y9NLU 9 | 9ZrZNlWaaHQQQocVB7ovrB/qdLzbU+i5bbTcl/pDlPG8g8yeMoflpUqK7AzfV0uR 10 | KGwWj5JErjC7RaVt8wt+164xykbFyZeUu9htNthFD/OPaIPqgv6AoJdEULyGrTbd 11 | SRyJ01n0beGkB0o+0dnOEO34K+pU0Zzk+rAcOEl3UNkpxOzedEFOR6NdnX1eH4t4 12 | a6OZgskcVjyxFQPAyhcSkQ2iWncQx2ritTclst4NFjBae5hwYgEB4S9ZN5IOueMH 13 | eYhxYthNAoGBAPXtSDmRGPc4EHDBrbgDn4vhxK7QN35bWFW1KvHLD0hBBJO57GqT 14 | jGCJsbkw6peERuFV8qq+Bvz0nvlKl9humB1djlndUETksUTrNz73XxpJJ8L5parF 15 | okx0QLMXONOP5b6yGWYay3QD0gNz/HYVf//oDTdWRhbq5EY6VarOagfjAoGBANfG 16 | UrlxEYHwq3TE7unvgaao5Vpmw8Hqir2bnl2zKmPoV8ds/V+paMnV6Hhzgzu3bKgF 17 | ukZgAizEcfvxrxnfIraRJTI5xgBoIl8gdbsWkLre4qKpVSAkw4JLyzVVlXCyKYHp 18 | ocjeNVbO5Z2Yft0cv30LfeX+DEDeQS12RHLu/Sc3AoGBAMns2ZfC5p/encknje8A 19 | spjVeHwdJOOQNxiwl6FPHK40DIELcO4VVnbRuGaZnpVoHBbbTlQZkX1TkdCZCdLB 20 | BA9giQiKamUW7eLry0HdNW5M0OQLvZZZjih+b71c/ODhTz/j1mz65UDN/jutmYaP 21 | orjJnUhpg0U/+s0bCsojj/YHAoGBAKtsMhiFjaUv8OdJ9Y0A7H3dPKk/b1JF5YeR 22 | dJV4W7sXwXT8T6eKTWfce14GV0JADSDHvB9g8xlh0DSa48OoFEn6shRe9cEo+fWd 23 | Mis6WC0+Gcukv65TxsdjM8PhhGIOCQ/e7ttIPhQDN0Sm/FLqHe9YC+OGm3GFoT5e 24 | 8S5mU9StAoGABFwqkFELU84twzKYJCVPZPktwtfrD0Hkbd9pk0ebuSnQ3bATFIyU 25 | CDspTADbY2IgC53u+XAhTd5BOsicTtMM9x1p5EOglbK1ANagWuGlzVfdbp+bmql9 26 | S8AaH22lha5vCfHHfAN2NSkQ+ABZnNpP66nFx06VcyEYkhuZgd6s5A0= 27 | -----END RSA PRIVATE KEY----- 28 | -------------------------------------------------------------------------------- /infra/foreman/files/id_rsa.pub: -------------------------------------------------------------------------------- 1 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPSMymW5xheh7TEnbUBIv3w8ZvBlgIjlevj+Qdc50a8e9mvfPny1ZTVNriox/SprbvZWx1CI7WkKZKWqsNbkL2sR6ouGFFf/ItuC/fmY52NGgggBfpF82eJP/Fet+89lh3DrTd8htvDmKBNHBzpblC8kdFwpEpvvrZXDaA9JLQaAUEsKS8LaRBtbhx8/716WjzoUCE0GzZNsEctz0fZb70rNh+CyKCMkuyBFcvr+6G7f6wvZ3cXUn02HLLdqhQ5XEPZg8mUm/NiEtlxJeBcj46LSBmaGHe4iZzhZHI6KErmNGklrB16zKDJejiVWyTPEGObsv3KdsI0w47zllVUkbF arnaud@l-bibicy 2 | -------------------------------------------------------------------------------- /infra/foreman/files/puppet_master/etc/puppet/auth.conf: -------------------------------------------------------------------------------- 1 | # This is the default auth.conf file, which implements the default rules 2 | # used by the puppet master. (That is, the rules below will still apply 3 | # even if this file is deleted.) 4 | # 5 | # The ACLs are evaluated in top-down order. More specific stanzas should 6 | # be towards the top of the file and more general ones at the bottom; 7 | # otherwise, the general rules may "steal" requests that should be 8 | # governed by the specific rules. 9 | # 10 | # See http://docs.puppetlabs.com/guides/rest_auth_conf.html for a more complete 11 | # description of auth.conf's behavior. 
12 | # 13 | # Supported syntax: 14 | # Each stanza in auth.conf starts with a path to match, followed 15 | # by optional modifiers, and finally, a series of allow or deny 16 | # directives. 17 | # 18 | # Example Stanza 19 | # --------------------------------- 20 | # path /path/to/resource # simple prefix match 21 | # # path ~ regex # alternately, regex match 22 | # [environment envlist] 23 | # [method methodlist] 24 | # [auth[enthicated] {yes|no|on|off|any}] 25 | # allow [host|backreference|*|regex] 26 | # deny [host|backreference|*|regex] 27 | # allow_ip [ip|cidr|ip_wildcard|*] 28 | # deny_ip [ip|cidr|ip_wildcard|*] 29 | # 30 | # The path match can either be a simple prefix match or a regular 31 | # expression. `path /file` would match both `/file_metadata` and 32 | # `/file_content`. Regex matches allow the use of backreferences 33 | # in the allow/deny directives. 34 | # 35 | # The regex syntax is the same as for Ruby regex, and captures backreferences 36 | # for use in the `allow` and `deny` lines of that stanza 37 | # 38 | # Examples: 39 | # 40 | # path ~ ^/path/to/resource # Equivalent to `path /path/to/resource`. 41 | # allow * # Allow all authenticated nodes (since auth 42 | # # defaults to `yes`). 43 | # 44 | # path ~ ^/catalog/([^/]+)$ # Permit nodes to access their own catalog (by 45 | # allow $1 # certname), but not any other node's catalog. 46 | # 47 | # path ~ ^/file_(metadata|content)/extra_files/ # Only allow certain nodes to 48 | # auth yes # access the "extra_files" 49 | # allow /^(.+)\.example\.com$/ # mount point; note this must 50 | # allow_ip 192.168.100.0/24 # go ABOVE the "/file" rule, 51 | # # since it is more specific. 52 | # 53 | # environment:: restrict an ACL to a comma-separated list of environments 54 | # method:: restrict an ACL to a comma-separated list of HTTP methods 55 | # auth:: restrict an ACL to an authenticated or unauthenticated request 56 | # the default when unspecified is to restrict the ACL to authenticated requests 57 | # (ie exactly as if auth yes was present). 58 | # 59 | 60 | ### Authenticated ACLs - these rules apply only when the client 61 | ### has a valid certificate and is thus authenticated 62 | 63 | # allow nodes to retrieve their own catalog 64 | path ~ ^/catalog/([^/]+)$ 65 | method find 66 | allow $1 67 | 68 | # allow nodes to retrieve their own node definition 69 | path ~ ^/node/([^/]+)$ 70 | method find 71 | allow $1 72 | 73 | # allow all nodes to access the certificates services 74 | path /certificate_revocation_list/ca 75 | method find 76 | allow * 77 | 78 | # allow all nodes to store their own reports 79 | path ~ ^/report/([^/]+)$ 80 | method save 81 | allow $1 82 | 83 | # Allow all nodes to access all file services; this is necessary for 84 | # pluginsync, file serving from modules, and file serving from custom 85 | # mount points (see fileserver.conf). Note that the `/file` prefix matches 86 | # requests to both the file_metadata and file_content paths. See "Examples" 87 | # above if you need more granular access control for custom mount points. 88 | path /file 89 | allow * 90 | 91 | ### Unauthenticated ACLs, for clients without valid certificates; authenticated 92 | ### clients can also access these paths, though they rarely need to. 
93 | 94 | # allow access to the CA certificate; unauthenticated nodes need this 95 | # in order to validate the puppet master's certificate 96 | path /certificate/ca 97 | auth any 98 | method find 99 | allow * 100 | 101 | # allow nodes to retrieve the certificate they requested earlier 102 | path /certificate/ 103 | auth any 104 | method find 105 | allow * 106 | 107 | # allow nodes to request a new certificate 108 | path /certificate_request 109 | auth any 110 | method find, save 111 | allow * 112 | 113 | path /v2.0/environments 114 | method find 115 | allow * 116 | 117 | path ~ /certificate_status/([^/]+)$ 118 | method find,destroy 119 | auth no 120 | allow __FOREMAN_HOST__ 121 | 122 | path /run 123 | auth any 124 | method save 125 | allow __FOREMAN_HOST__ 126 | 127 | 128 | # deny everything else; this ACL is not strictly necessary, but 129 | # illustrates the default policy. 130 | path / 131 | auth any 132 | -------------------------------------------------------------------------------- /infra/foreman/files/puppet_master/etc/puppet/hiera.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | :backends: 3 | # Use yaml to store data 4 | - yaml 5 | :yaml: 6 | # Per environment data 7 | :datadir: /etc/puppet/hieradata/%{environment}/ 8 | :hierarchy: 9 | - nodes/%{::fqdn} 10 | - common 11 | -------------------------------------------------------------------------------- /infra/foreman/files/puppet_master/etc/puppet/hieradata/production/nodes/proxy.DOMAIN.yaml: -------------------------------------------------------------------------------- 1 | nginx::nginx_vhosts: 2 | "keystone.%{hiera('domain')}": 3 | listen_port: 5000 4 | proxy: "http://keystone.%{hiera('domain')}:5000" 5 | "glance.%{hiera('domain')}": 6 | listen_port: 9292 7 | proxy: "http://glance.%{hiera('domain')}:9292" 8 | "nova.%{hiera('domain')}": 9 | listen_port: 8774 10 | proxy: "http://nova.%{hiera('domain')}:8774" 11 | "neutron.%{hiera('domain')}": 12 | listen_port: 9696 13 | proxy: "http://neutron.%{hiera('domain')}:9696" 14 | "cinder.%{hiera('domain')}": 15 | listen_port: 8776 16 | proxy: "http://cinder.%{hiera('domain')}:8776" 17 | "%{hiera('domain')}": 18 | listen_port: 80 19 | proxy: "http://horizon.%{hiera('domain')}" 20 | "%{hiera('domain')}": 21 | listen_port: 443 22 | ssl: true 23 | ssl_cert: "/var/lib/puppet/ssl/certs/proxy.%{hiera('domain')}.pem" 24 | ssl_key: "/var/lib/puppet/ssl/private_keys/proxy.%{hiera('domain')}.pem" 25 | proxy: "http://horizon.%{hiera('domain')}" 26 | proxy_redirect: "http://stack.opensteak.fr /" 27 | 28 | -------------------------------------------------------------------------------- /infra/foreman/files/puppet_master/etc/puppet/manifests/site.pp: -------------------------------------------------------------------------------- 1 | # 2 | # Copyright (C) 2015 Orange Labs 3 | # 4 | # This software is distributed under the terms and conditions of the 'Apache-2.0' 5 | # license which can be found in the file 'LICENSE.txt' in this package distribution 6 | # or at 'http://www.apache.org/licenses/LICENSE-2.0'. 
7 | # 8 | # Authors: Arnaud Morin 9 | # David Blaisonneau 10 | # 11 | 12 | # 13 | # The general site.pp 14 | # 15 | 16 | # Set exec path for all modules 17 | Exec { path => '/usr/bin:/usr/sbin:/bin:/sbin' } 18 | 19 | -------------------------------------------------------------------------------- /infra/foreman/files/puppet_master/etc/r10k.yaml: -------------------------------------------------------------------------------- 1 | # The location to use for storing cached Git repos 2 | :cachedir: '/var/cache/r10k' 3 | 4 | # A list of git repositories to create 5 | :sources: 6 | # This will clone the git repository and instantiate an environment per 7 | # branch in /etc/puppet/environments 8 | :my-org: 9 | remote: 'https://github.com/Orange-OpenSource/opnfv-r10k.git' 10 | basedir: '/etc/puppet/environments' 11 | -------------------------------------------------------------------------------- /infra/foreman/files/puppet_master/patches/add_require_json_openstack.rb.patch: -------------------------------------------------------------------------------- 1 | --- openstack.rb.ori 2015-01-22 13:35:39.375910000 +0000 2 | +++ openstack.rb 2015-01-22 13:36:12.139910000 +0000 3 | @@ -1,6 +1,7 @@ 4 | # TODO: This needs to be extracted out into openstacklib in the Kilo cycle 5 | require 'csv' 6 | require 'puppet' 7 | +require 'json' 8 | 9 | class Puppet::Error::OpenstackAuthInputError < Puppet::Error 10 | end 11 | 12 | -------------------------------------------------------------------------------- /infra/foreman/files/puppet_master/patches/check_virsh_secret_using_secret_get_value.patch: -------------------------------------------------------------------------------- 1 | --- a/manifests/compute/rbd.pp 2 | +++ b/manifests/compute/rbd.pp 3 | @@ -77,7 +77,7 @@ class nova::compute::rbd ( 4 | 5 | exec { 'set-secret-value virsh': 6 | command => "/usr/bin/virsh secret-set-value --secret $(cat /etc/nova/virsh.secret) --base64 $(ceph auth get-key ${rbd_keyring})", 7 | - unless => "/usr/bin/virsh secret-list | grep ${rbd_keyring}", 8 | + unless => "/usr/bin/virsh secret-get-value ${libvirt_rbd_secret_uuid} | grep $(ceph auth get-key ${rbd_keyring})", 9 | require => Exec['get-or-set virsh secret'] 10 | } 11 | 12 | 13 | -------------------------------------------------------------------------------- /infra/foreman/files/puppet_master/usr/local/bin/opensteak-r10k-update: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | PRGNAME=$(basename $0) 3 | PATCHES="/usr/local/opensteak/infra/puppet_master/patches" 4 | 5 | # Check if we are root 6 | if [ "Z$USER" != "Zroot" ] ; then 7 | echo "You need to execute this script as root:" 8 | echo "sudo $PRGNAME" 9 | exit 1 10 | fi 11 | 12 | # Run r10k 13 | r10k deploy environment -pv 14 | 15 | # Apply the patches 16 | echo " * Applying some patches" 17 | 18 | # COMMENTED ON 2015/06/17 as seems to be patched upstream. TO BE CONFIRMED. 19 | # add_require_json_openstack.rb.patch 20 | #echo " -> add_require_json_openstack.rb.patch" 21 | #cd /etc/puppet/environments/production/modules/keystone/lib/puppet/provider/ 22 | #patch < $PATCHES/add_require_json_openstack.rb.patch 23 | 24 | # COMMENTED ON 2015/03/31 as seems to be patched upstream. TO BE CONFIRMED. 
25 | # check_virsh_secret_using_secret_get_value.patch 26 | #echo " -> check_virsh_secret_using_secret_get_value.patch" 27 | #cd /etc/puppet/environments/production/modules/nova/ 28 | #patch -p1 < $PATCHES/check_virsh_secret_using_secret_get_value.patch 29 | -------------------------------------------------------------------------------- /infra/foreman/provisioning_templates/preseed_default.tpl: -------------------------------------------------------------------------------- 1 | <%# 2 | kind: provision 3 | name: Preseed default 4 | oses: 5 | - Debian 6.0 6 | - Debian 7. 7 | - Debian 8. 8 | - Ubuntu 10.04 9 | - Ubuntu 12.04 10 | - Ubuntu 13.04 11 | - Ubuntu 14.04 12 | %> 13 | <% 14 | # safemode renderer does not support unary negation 15 | pm_set = @host.puppetmaster.empty? ? false : true 16 | proxy_string = @host.params['http-proxy'] ? " http://#{@host.params['http-proxy']}:#{@host.params['http-proxy-port']}" : '' 17 | puppet_enabled = pm_set || @host.params['force-puppet'] && @host.params['force-puppet'] == 'true' 18 | salt_enabled = @host.params['salt_master'] ? true : false 19 | %> 20 | # Locale 21 | d-i debian-installer/locale string <%= @host.params['lang'] || 'en_US' %> 22 | # country and keyboard settings are automatic. Keep them ... 23 | # ... for wheezy and newer: 24 | d-i keyboard-configuration/xkb-keymap seen true 25 | # ... for squeeze and older: 26 | d-i console-keymaps-at/keymap seen true 27 | 28 | <% subnet = @host.subnet -%> 29 | <% if subnet.respond_to?(:dhcp_boot_mode?) -%> 30 | <% dhcp = subnet.dhcp_boot_mode? && !@static -%> 31 | <% else -%> 32 | <% dhcp = !@static -%> 33 | <% end -%> 34 | <% unless dhcp -%> 35 | # Static network configuration. 36 | d-i preseed/early_command string /bin/killall.sh; /bin/netcfg 37 | d-i netcfg/disable_dhcp boolean true 38 | d-i netcfg/get_ipaddress string <%= @host.ip %> 39 | d-i netcfg/get_netmask string <%= subnet.mask %> 40 | d-i netcfg/get_nameservers string <%= [subnet.dns_primary,subnet.dns_secondary].reject{|n| n.blank?}.join(' ') %> 41 | d-i netcfg/get_gateway string <%= subnet.gateway %> 42 | d-i netcfg/confirm_static boolean true 43 | <% end -%> 44 | 45 | # Network configuration 46 | d-i netcfg/choose_interface select auto 47 | d-i netcfg/get_hostname string <%= @host %> 48 | d-i netcfg/get_domain string <%= @host.domain %> 49 | d-i netcfg/wireless_wep string 50 | 51 | d-i hw-detect/load_firmware boolean true 52 | 53 | # Mirror settings 54 | d-i mirror/country string manual 55 | d-i mirror/http/hostname string <%= @preseed_server %> 56 | d-i mirror/http/directory string <%= @preseed_path %> 57 | d-i mirror/http/proxy string<%= proxy_string %> 58 | d-i mirror/codename string <%= @host.operatingsystem.release_name %> 59 | d-i mirror/suite string <%= @host.operatingsystem.release_name %> 60 | d-i mirror/udeb/suite string <%= @host.operatingsystem.release_name %> 61 | 62 | # Time settings 63 | d-i clock-setup/utc boolean true 64 | d-i time/zone string <%= @host.params['time-zone'] || 'UTC' %> 65 | 66 | # NTP 67 | d-i clock-setup/ntp boolean true 68 | d-i clock-setup/ntp-server string <%= @host.params['ntp-server'] || '0.debian.pool.ntp.org' %> 69 | 70 | # Set alignment for automatic partitioning 71 | # Choices: cylinder, minimal, optimal 72 | #d-i partman/alignment select cylinder 73 | 74 | <%= @host.diskLayout %> 75 | 76 | # Install different kernel 77 | #d-i base-installer/kernel/image string linux-server 78 | 79 | # User settings 80 | d-i passwd/root-password-crypted password <%= root_pass %> 81 | user-setup-udeb 
passwd/root-login boolean true 82 | 83 | # The user's name and login. 84 | d-i passwd/make-user boolean true 85 | user-setup-udeb passwd/make-user boolean true 86 | passwd passwd/user-fullname string ubuntu 87 | passwd passwd/username string ubuntu 88 | d-i passwd/user-password-crypted password <%= root_pass %> 89 | d-i passwd/user-default-groups string ubuntu adm dialout cdrom floppy sudo audio dip video plugdev netdev 90 | d-i user-setup/encrypt-home boolean false 91 | user-setup-udeb user-setup/encrypt-home boolean false 92 | 93 | 94 | <% repos = 0 %> 95 | <% if puppet_enabled && @host.params['enable-puppetlabs-repo'] && @host.params['enable-puppetlabs-repo'] == 'true' -%> 96 | # Puppetlabs products 97 | d-i apt-setup/local<%= repos %>/repository string \ 98 | http://apt.puppetlabs.com <%= @host.operatingsystem.release_name %> main 99 | d-i apt-setup/local<%= repos %>/comment string Puppetlabs products 100 | d-i apt-setup/local<%= repos %>/source boolean true 101 | d-i apt-setup/local<%= repos %>/key string http://apt.puppetlabs.com/pubkey.gpg 102 | <% repos += 1 -%> 103 | # Puppetlabs dependencies 104 | d-i apt-setup/local<%= repos %>/repository string \ 105 | http://apt.puppetlabs.com <%= @host.operatingsystem.release_name %> dependencies 106 | d-i apt-setup/local<%= repos %>/comment string Puppetlabs dependencies 107 | d-i apt-setup/local<%= repos %>/source boolean true 108 | d-i apt-setup/local<%= repos %>/key string http://apt.puppetlabs.com/pubkey.gpg 109 | <% repos += 1 -%> 110 | <% end -%> 111 | 112 | <% if salt_enabled -%> 113 | <% salt_package = 'salt-minion' -%> 114 | <% if @host.params['enable-saltstack-repo'] && @host.params['enable-saltstack-repo'] == 'true' -%> 115 | <% if @host.operatingsystem.name == 'Debian' -%> 116 | d-i apt-setup/local<%= repos %>/repository string http://debian.saltstack.com/debian <%= @host.operatingsystem.release_name %>-saltstack main 117 | d-i apt-setup/local<%= repos %>/comment string SaltStack Repository 118 | d-i apt-setup/local<%= repos %>/key string http://debian.saltstack.com/debian-salt-team-joehealy.gpg.key 119 | <% repos += 1 -%> 120 | <% end -%> 121 | <% if @host.operatingsystem.name == 'Ubuntu' -%> 122 | d-i apt-setup/local<%= repos %>/repository string http://ppa.launchpad.net/saltstack/salt/ubuntu <%= @host.operatingsystem.release_name %> main 123 | d-i apt-setup/local<%= repos %>/comment string SaltStack Repository 124 | d-i apt-setup/local<%= repos %>/key string http://keyserver.ubuntu.com/pks/lookup?op=get&search=0x4759FA960E27C0A6 125 | <% repos += 1 -%> 126 | <% end -%> 127 | <% end -%> 128 | <% else -%> 129 | <% salt_package = '' -%> 130 | <% end -%> 131 | 132 | # Install minimal task set (see tasksel --task-packages minimal) 133 | tasksel tasksel/first multiselect minimal, ssh-server, openssh-server 134 | 135 | <% if puppet_enabled %> 136 | <% if @host.operatingsystem.name == 'Ubuntu' and @host.operatingsystem.major.to_i == 10 -%> 137 | <% puppet_package = 'puppet/lucid-backports' -%> 138 | d-i apt-setup/backports boolean true 139 | <% else -%> 140 | <% puppet_package = 'puppet' -%> 141 | <% end -%> 142 | <% else -%> 143 | <% puppet_package = '' -%> 144 | <% end -%> 145 | 146 | # Install some base packages 147 | d-i pkgsel/include string <%= puppet_package %> <%= salt_package %> lsb-release 148 | d-i pkgsel/update-policy select unattended-upgrades 149 | 150 | popularity-contest popularity-contest/participate boolean false 151 | 152 | # Boot loader settings 153 | #grub-pc grub-pc/hidden_timeout boolean false 154 | #grub-pc 
grub-pc/timeout string 10 155 | d-i grub-installer/only_debian boolean true 156 | d-i grub-installer/with_other_os boolean true 157 | <% if @host.params['install-disk'] -%> 158 | d-i grub-installer/bootdev string <%= @host.params['install-disk'] %> 159 | <% elsif @host.operatingsystem.name == 'Debian' and @host.operatingsystem.major.to_i >= 8 -%> 160 | d-i grub-installer/bootdev string default 161 | <% end -%> 162 | d-i finish-install/reboot_in_progress note 163 | 164 | d-i preseed/late_command string wget -Y off <%= @static ? "'#{foreman_url('finish')}&static=true'" : foreman_url('finish') %> -O /target/tmp/finish.sh && in-target chmod +x /tmp/finish.sh && in-target /tmp/finish.sh && rm -f /usr/lib/finish-install.d/55netcfg-copy-config 165 | -------------------------------------------------------------------------------- /infra/foreman/provisioning_templates/preseed_default_finish.tpl: -------------------------------------------------------------------------------- 1 | <%# 2 | kind: finish 3 | name: Preseed default finish 4 | oses: 5 | - Debian 6.0 6 | - Debian 7. 7 | - Debian 8. 8 | - Ubuntu 10.04 9 | - Ubuntu 12.04 10 | - Ubuntu 13.04 11 | - Ubuntu 14.04 12 | %> 13 | <% 14 | # safemode renderer does not support unary negation 15 | pm_set = @host.puppetmaster.empty? ? false : true 16 | puppet_enabled = pm_set || @host.params['force-puppet'] && @host.params['force-puppet'] == 'true' 17 | salt_enabled = @host.params['salt_master'] ? true : false 18 | chef_enabled = @host.respond_to?(:chef_proxy) && @host.chef_proxy 19 | %> 20 | 21 | <% subnet = @host.subnet -%> 22 | <% if subnet.respond_to?(:dhcp_boot_mode?) -%> 23 | <% dhcp = subnet.dhcp_boot_mode? && !@static -%> 24 | <% else -%> 25 | <% dhcp = !@static -%> 26 | <% end -%> 27 | <% unless dhcp -%> 28 | # host and domain name need setting as these values may have come from dhcp if pxe booting 29 | /bin/sed -i "s/^search.*$/search <%= @host.domain %>/g" /etc/resolv.conf 30 | /bin/sed -i "s/.*dns-search.*/\tdns-search <%= @host.domain %>/g" /etc/network/interfaces 31 | /bin/sed -i "s/^<%= @host.ip %>.*/<%= @host.ip %>\t<%= @host.shortname %>.<%= @host.domain %>\t<%= @host.shortname %>/g" /etc/hosts 32 | /bin/echo <%= @host.shortname %> > /etc/hostname 33 | <% end -%> 34 | <% @host.interfaces.each do |interface| %> 35 | echo '\ngot interface <%= interface.identifier %> with mac <%= interface.mac %>' >>/root/install_log 36 | <% if interface.identifier != "" && interface.identifier != "ipmi" %> 37 | echo '\nauto <%= interface.identifier %>\niface <%= interface.identifier %> inet dhcp' >>/etc/network/interfaces 38 | <% end %> 39 | <% end %> 40 | <% if puppet_enabled %> 41 | cat > /etc/puppet/puppet.conf << EOF 42 | <%= snippet 'puppet.conf' %> 43 | EOF 44 | if [ -f "/etc/default/puppet" ] 45 | then 46 | /bin/sed -i 's/^START=no/START=yes/' /etc/default/puppet 47 | fi 48 | /bin/touch /etc/puppet/namespaceauth.conf 49 | /usr/bin/puppet agent --enable 50 | /usr/bin/puppet agent --config /etc/puppet/puppet.conf --onetime --tags no_such_tag <%= @host.puppetmaster.blank? ? 
'' : "--server #{@host.puppetmaster}" %> --no-daemonize 51 | <% end -%> 52 | 53 | <% if @host.info['parameters']['realm'] && @host.otp && @host.realm && @host.realm.realm_type == 'FreeIPA' -%> 54 | <%= snippet 'freeipa_register' %> 55 | <% end -%> 56 | 57 | <% if salt_enabled %> 58 | cat > /etc/salt/minion << EOF 59 | <%= snippet 'saltstack_minion' %> 60 | EOF 61 | # Running salt-call to trigger key signing 62 | salt-call --no-color --grains >/dev/null 63 | <% end -%> 64 | 65 | <% if chef_enabled %> 66 | <%= respond_to?(:chef_bootstrap) ? chef_bootstrap(@host) : snippet_if_exists("chef-client omnibus bootstrap") %> 67 | <% end%> 68 | 69 | /usr/bin/wget --no-proxy --quiet --output-document=/dev/null --no-check-certificate <%= foreman_url %> 70 | -------------------------------------------------------------------------------- /infra/foreman/provisioning_templates/preseed_default_pxelinux.tpl: -------------------------------------------------------------------------------- 1 | <%# 2 | kind: PXELinux 3 | name: Preseed default PXELinux 4 | oses: 5 | - Debian 6.0 6 | - Debian 7. 7 | - Debian 8. 8 | - Ubuntu 10.04 9 | - Ubuntu 12.04 10 | - Ubuntu 13.04 11 | - Ubuntu 14.04 12 | %> 13 | 14 | <% if @host.operatingsystem.name == 'Debian' -%> 15 | <% keyboard_params = "auto=true domain=#{@host.domain}" -%> 16 | <% else -%> 17 | <% keyboard_params = 'console-setup/ask_detect=false console-setup/layout=USA console-setup/variant=USA keyboard-configuration/layoutcode=us localechooser/translation/warn-light=true localechooser/translation/warn-severe=true' -%> 18 | <% end -%> 19 | DEFAULT linux 20 | 21 | LABEL linux 22 | KERNEL <%= @kernel %> 23 | APPEND initrd=<%= @initrd %> interface=auto url=<%= foreman_url('provision')%> ramdisk_size=10800 root=/dev/rd/0 rw auto hostname=<%= @host.name %> <%= keyboard_params %> locale=<%= @host.params['lang'] || 'en_US' %> 24 | IPAPPEND 2 25 | -------------------------------------------------------------------------------- /infra/foreman/templates/common.yaml: -------------------------------------------------------------------------------- 1 | # common.yaml 2 | --- 3 | 4 | ### 5 | ## OpenStack passwords 6 | ### 7 | ceph_password: "password" 8 | admin_password: "password" 9 | mysql_service_password: "password" 10 | mysql_root_password: "password" 11 | rabbitmq_password: "password" 12 | glance_password: "password" 13 | nova_password: "password" 14 | neutron_shared_secret: "password" 15 | neutron_password: "password" 16 | cinder_password: "password" 17 | keystone_admin_token: "password" 18 | horizon_secret_key: "12345" 19 | 20 | domain: "${domain}" 21 | 22 | ### 23 | ## Class parameters 24 | ### 25 | # Puppet 26 | opensteak::puppet::foreman_fqdn: "foreman.%{hiera('domain')}" 27 | 28 | # Rabbit 29 | opensteak::rabbitmq::rabbitmq_password: "%{hiera('rabbitmq_password')}" 30 | 31 | # MySQL 32 | opensteak::mysql::root_password: "%{hiera('mysql_root_password')}" 33 | opensteak::mysql::mysql_password: "%{hiera('mysql_service_password')}" 34 | 35 | # Key 36 | opensteak::key::password: "%{hiera('admin_password')}" 37 | opensteak::key::stack_domain: "%{hiera('domain')}" 38 | 39 | # Keystone 40 | opensteak::keystone::mysql_password: "%{hiera('mysql_root_password')}" 41 | opensteak::keystone::rabbitmq_password: "%{hiera('rabbitmq_password')}" 42 | opensteak::keystone::keystone_token: "%{hiera('keystone_admin_token')}" 43 | opensteak::keystone::stack_domain: "%{hiera('domain')}" 44 | opensteak::keystone::admin_mail: "admin@opensteak.fr" 45 | opensteak::keystone::admin_password: 
"%{hiera('admin_password')}" 46 | opensteak::keystone::glance_password: "%{hiera('glance_password')}" 47 | opensteak::keystone::nova_password: "%{hiera('nova_password')}" 48 | opensteak::keystone::neutron_password: "%{hiera('neutron_password')}" 49 | opensteak::keystone::cinder_password: "%{hiera('cinder_password')}" 50 | 51 | # Glance 52 | opensteak::glance::mysql_password: "%{hiera('mysql_root_password')}" 53 | opensteak::glance::rabbitmq_password: "%{hiera('rabbitmq_password')}" 54 | opensteak::glance::stack_domain: "%{hiera('domain')}" 55 | opensteak::glance::glance_password: "%{hiera('glance_password')}" 56 | 57 | # Nova 58 | opensteak::nova::mysql_password: "%{hiera('mysql_root_password')}" 59 | opensteak::nova::rabbitmq_password: "%{hiera('rabbitmq_password')}" 60 | opensteak::nova::stack_domain: "%{hiera('domain')}" 61 | opensteak::nova::nova_password: "%{hiera('nova_password')}" 62 | opensteak::nova::neutron_password: "%{hiera('neutron_password')}" 63 | opensteak::nova::neutron_shared: "%{hiera('neutron_shared_secret')}" 64 | 65 | # Cinder 66 | opensteak::cinder::mysql_password: "%{hiera('mysql_root_password')}" 67 | opensteak::cinder::rabbitmq_password: "%{hiera('rabbitmq_password')}" 68 | opensteak::cinder::stack_domain: "%{hiera('domain')}" 69 | opensteak::cinder::nova_password: "%{hiera('cinder_password')}" 70 | 71 | # Compute 72 | opensteak::nova-compute::mysql_password: "%{hiera('mysql_root_password')}" 73 | opensteak::nova-compute::rabbitmq_password: "%{hiera('rabbitmq_password')}" 74 | opensteak::nova-compute::stack_domain: "%{hiera('domain')}" 75 | opensteak::nova-compute::neutron_password: "%{hiera('neutron_password')}" 76 | 77 | 78 | # Neutron controller 79 | opensteak::neutron-controller::mysql_password: "%{hiera('mysql_root_password')}" 80 | opensteak::neutron-controller::rabbitmq_password: "%{hiera('rabbitmq_password')}" 81 | opensteak::neutron-controller::stack_domain: "%{hiera('domain')}" 82 | opensteak::neutron-controller::nova_password: "%{hiera('nova_password')}" 83 | opensteak::neutron-controller::neutron_password: "%{hiera('neutron_password')}" 84 | # Neutron compute 85 | opensteak::neutron-compute::mysql_password: "%{hiera('mysql_root_password')}" 86 | opensteak::neutron-compute::rabbitmq_password: "%{hiera('rabbitmq_password')}" 87 | opensteak::neutron-compute::stack_domain: "%{hiera('domain')}" 88 | opensteak::neutron-compute::neutron_password: "%{hiera('neutron_password')}" 89 | opensteak::neutron-compute::neutron_shared: "%{hiera('neutron_shared_secret')}" 90 | opensteak::neutron-compute::infra_nodes: 91 | server186: 92 | ip: 192.168.1.27 93 | bridge_uplinks: 94 | - 'br-vm:p3p1' 95 | server187: 96 | ip: 192.168.1.155 97 | bridge_uplinks: 98 | - 'br-vm:p3p1' 99 | server188: 100 | ip: 192.168.1.116 101 | bridge_uplinks: 102 | - 'br-vm:p3p1' 103 | server189: 104 | ip: 192.168.1.117 105 | bridge_uplinks: 106 | - 'br-vm:p3p1' 107 | # Neutron network 108 | opensteak::neutron-network::mysql_password: "%{hiera('mysql_root_password')}" 109 | opensteak::neutron-network::rabbitmq_password: "%{hiera('rabbitmq_password')}" 110 | opensteak::neutron-network::stack_domain: "%{hiera('domain')}" 111 | opensteak::neutron-network::neutron_password: "%{hiera('neutron_password')}" 112 | opensteak::neutron-network::neutron_shared: "%{hiera('neutron_shared_secret')}" 113 | opensteak::neutron-network::infra_nodes: 114 | server98: 115 | ip: 192.168.1.58 116 | bridge_uplinks: 117 | - 'br-ex:em2' 118 | - 'br-vm:em5' 119 | 120 | # Horizon 121 | opensteak::horizon::stack_domain: 
"%{hiera('domain')}" 122 | opensteak::horizon::secret_key: "%{hiera('horizon_secret_key')}" 123 | -------------------------------------------------------------------------------- /infra/foreman/templates/install.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # -*- coding: utf-8 -*- 3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may 4 | # not use this file except in compliance with the License. You may obtain 5 | # a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT 11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the 12 | # License for the specific language governing permissions and limitations 13 | # under the License. 14 | # 15 | # Authors: 16 | # @author: David Blaisonneau 17 | # @author: Arnaud Morin 18 | 19 | ### Set vars 20 | NAME="${name}" 21 | DOMAIN="${domain}" 22 | DATEE=$$(date +%F-%Hh%M) 23 | IP="${ip}" 24 | DHCP_RANGE="${dhcprange}" 25 | REVERSE_DNS="${reversedns}" 26 | DNS_FORWARDER="${dns}" 27 | ADMIN="${admin}" 28 | PASSWORD="${password}" 29 | 30 | ### Set correct env 31 | #dpkg-reconfigure locales 32 | export LC_CTYPE=en_US.UTF-8 33 | export LANG=en_US.UTF-8 34 | unset LC_ALL 35 | umask 0022 36 | 37 | ### Check hostname is on the public interface 38 | echo "* Ensure hostname point to external IP" 39 | # Remove useless lines 40 | perl -i -pe 's/^127.0.1.1.*\n$$//' /etc/hosts 41 | perl -i -pe "s/^$${IP}.*\n$$//" /etc/hosts 42 | # Append a line 43 | echo "$${IP} $${NAME}.$${DOMAIN} $${NAME}" >> /etc/hosts 44 | 45 | ### Dependencies 46 | echo "* Install dependencies" 47 | apt-get update 48 | apt-get -y install ca-certificates wget git isc-dhcp-server 49 | 50 | ### Set AppArmor 51 | echo "* Set App armor" 52 | cat /etc/apparmor.d/local/usr.sbin.dhcpd | grep '/etc/bind/rndc.key r,' >/dev/null 53 | if [ $$? -eq 1 ] ; then 54 | echo "/etc/bind/rndc.key r," >> /etc/apparmor.d/local/usr.sbin.dhcpd 55 | service apparmor reload 56 | fi 57 | 58 | ### Prepare repos 59 | echo "* Enable Puppet labs repo" 60 | if [ "Z" = "Z$$(dpkg -l |grep 'ii puppetlabs-release')" ] ; then 61 | wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb 62 | dpkg -i puppetlabs-release-trusty.deb 63 | apt-get update 64 | fi 65 | 66 | # Install puppetmaster 67 | echo "* Install puppetmaster" 68 | if [ "Z" = "Z$$(dpkg -l |grep 'ii puppetmaster')" ] ; then 69 | apt-get -y install puppetmaster 70 | fi 71 | 72 | # Enable the Foreman repo 73 | echo "* Enable Foreman repo" 74 | if [ ! 
-e /etc/apt/sources.list.d/foreman.list ] ; then 75 | echo "deb http://deb.theforeman.org/ trusty 1.8" > /etc/apt/sources.list.d/foreman.list 76 | echo "deb http://deb.theforeman.org/ plugins 1.8" >> /etc/apt/sources.list.d/foreman.list 77 | wget -q http://deb.theforeman.org/pubkey.gpg -O- | apt-key add - 78 | apt-get update 79 | fi 80 | 81 | ### Install Foreman 82 | echo "* Install foreman-installer" 83 | if [ "Z" = "Z$$(dpkg -l |grep 'ii foreman-installer')" ] ; then 84 | apt-get -y install foreman-installer 85 | fi 86 | if [ "Z" = "Z$$(gem list --local |grep rubyipmi)" ] ; then 87 | gem install -q rubyipmi 88 | fi 89 | 90 | ### Execute foreman installer 91 | echo "* Execute foreman installer" 92 | 93 | foreman-installer \ 94 | --foreman-admin-username="$$ADMIN" \ 95 | --foreman-admin-password="$$PASSWORD" \ 96 | --enable-foreman-plugin-templates \ 97 | --enable-foreman-plugin-discovery \ 98 | --foreman-plugin-discovery-install-images=true \ 99 | --puppet-listen=true \ 100 | --enable-foreman-compute-libvirt 101 | 102 | 103 | foreman-installer \ 104 | --foreman-admin-username="$$ADMIN" \ 105 | --foreman-admin-password="$$PASSWORD" \ 106 | --enable-foreman-plugin-templates \ 107 | --enable-foreman-plugin-discovery \ 108 | --foreman-plugin-discovery-install-images=true \ 109 | --enable-foreman-compute-libvirt \ 110 | --enable-foreman-proxy \ 111 | --foreman-proxy-bmc=true \ 112 | --foreman-proxy-tftp=true \ 113 | --foreman-proxy-tftp-servername="$$IP" \ 114 | --foreman-proxy-dhcp=true \ 115 | --foreman-proxy-dhcp-interface="eth0" \ 116 | --foreman-proxy-dhcp-gateway="$$IP" \ 117 | --foreman-proxy-dhcp-range="$$DHCP_RANGE" \ 118 | --foreman-proxy-dhcp-nameservers="$$IP" \ 119 | --foreman-proxy-dns=true \ 120 | --foreman-proxy-dns-interface="eth0" \ 121 | --foreman-proxy-dns-zone="$$DOMAIN" \ 122 | --foreman-proxy-dns-reverse="$$REVERSE_DNS" \ 123 | --foreman-proxy-dns-forwarders="$$DNS_FORWARDER" \ 124 | --foreman-proxy-foreman-base-url="https://localhost" 125 | 126 | ### Sync community templates for last ubuntu versions 127 | 128 | echo "* Sync community templates for last ubuntu versions" 129 | foreman-rake templates:sync 130 | 131 | ### Install OpenSteak files 132 | 133 | echo "* Set puppet auth" 134 | echo "*.$$DOMAIN" > /etc/puppet/autosign.conf 135 | if [ -e /etc/puppet/auth.conf ] ; then 136 | # Make a backup 137 | mv /etc/puppet/auth.conf /etc/puppet/auth.conf.$$DATEE 138 | fi 139 | cp /mnt/puppet_master/etc/puppet/auth.conf /etc/puppet/auth.conf 140 | perl -i -pe "s/__FOREMAN_HOST__/$${NAME}.$${DOMAIN}/" /etc/puppet/auth.conf 141 | 142 | # Set Hiera Conf 143 | echo "* Push Hiera conf into /etc/puppet/" 144 | if [ -e /etc/puppet/hiera.yaml ] ; then 145 | # Make a backup 146 | mv /etc/puppet/hiera.yaml /etc/puppet/hiera.yaml.$$DATEE 147 | fi 148 | cp /mnt/puppet_master/etc/puppet/hiera.yaml /etc/puppet/hiera.yaml 149 | if [ -e /etc/hiera.yaml ] ; then 150 | rm /etc/hiera.yaml 151 | fi 152 | ln -s /etc/puppet/hiera.yaml /etc/hiera.yaml 153 | cp -rf /mnt/puppet_master/etc/puppet/hieradata /etc/puppet/ 154 | rename s/DOMAIN/$$DOMAIN/ /etc/puppet/hieradata/production/nodes/*.yaml 155 | cp /mnt/puppet_master/etc/puppet/manifests/site.pp /etc/puppet/manifests/site.pp 156 | cp /mnt/*.yaml /etc/puppet/hieradata/production/ 157 | chgrp puppet /etc/puppet/hieradata/production/*.yaml 158 | 159 | # Install and config r10k 160 | echo "* Install and setup r10k" 161 | if [ "Z" = "Z$$(gem list --local |grep r10k)" ] ; then 162 | gem install -q r10k 163 | fi 164 | if [ -e /etc/r10k.yaml ] ; then 165 | 
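# (Same backup-then-replace pattern as for auth.conf and hiera.yaml above;
# /etc/r10k.yaml points r10k at the opnfv-r10k control repository so that
# opensteak-r10k-update can populate /etc/puppet/environments.)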
# Make a backup
166 | mv /etc/r10k.yaml /etc/r10k.yaml.$$DATEE
167 | fi
168 | cp /mnt/puppet_master/etc/r10k.yaml /etc/r10k.yaml
169 | 
170 | # Install opensteak-r10k-update script
171 | echo "* Install opensteak-r10k-update script into /usr/local/bin"
172 | cp /mnt/puppet_master/usr/local/bin/opensteak-r10k-update /usr/local/bin/opensteak-r10k-update
173 | chmod +x /usr/local/bin/opensteak-r10k-update
174 | 
175 | echo "* Run r10k. You can re-run r10k by calling:"
176 | echo "   opensteak-r10k-update"
177 | opensteak-r10k-update
178 | 
179 | ### Install VIM puppet
180 | echo "* Install VIM puppet"
181 | if [ ! -d ~/.vim/autoload ] ; then
182 |   mkdir -p ~/.vim/autoload
183 | fi
184 | if [ ! -d ~/.vim/bundle ] ; then
185 |   mkdir -p ~/.vim/bundle
186 | fi
187 | curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim
188 | cat << EOF > ~/.vimrc
189 | execute pathogen#infect()
190 | syntax on
191 | filetype plugin indent on
192 | EOF
193 | cd ~/.vim/bundle
194 | if [ ! -d vim-puppet ] ; then
195 |   git clone https://github.com/rodjek/vim-puppet.git > /dev/null
196 | fi
197 | 
198 | ### Gen SSH key for foreman
199 | echo "* SSH Key"
200 | cp /mnt/id_rsa /usr/share/foreman/.ssh/
201 | cp /mnt/id_rsa.pub /usr/share/foreman/.ssh/
202 | chown -R foreman:foreman /usr/share/foreman/.ssh/
203 | chmod 700 /usr/share/foreman/.ssh/
204 | chmod 600 /usr/share/foreman/.ssh/id_rsa
205 | 
206 | ### Run puppet
207 | puppet agent -t -v
--------------------------------------------------------------------------------
/infra/foreman/templates/kvm-config:
--------------------------------------------------------------------------------
1 | <!-- libvirt domain template: the XML element markup was stripped from this
2 |      dump; the tags below are reconstructed around the values that survived
3 |      (${name}, ${ram}, ${cpu}, hvm, preserve/restart/restart,
4 |      qemu-system-x86_64). -->
5 | <domain type='kvm'>
6 |   <name>${name}</name>
7 |   <memory>${ram}</memory>
8 |   <currentMemory>${ram}</currentMemory>
9 |   <vcpu>${cpu}</vcpu>
10 |   <os>
11 |     <type arch='x86_64'>hvm</type>
12 |   </os>
13 |   <on_poweroff>preserve</on_poweroff>
14 |   <on_reboot>restart</on_reboot>
15 |   <on_crash>restart</on_crash>
16 |   <devices>
17 |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
18 |     <!-- disk, interface, serial/console and graphics elements were lost
19 |          with the markup and are not reconstructed -->
20 |   </devices>
21 | </domain>
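
The `${name}`, `${ram}` and `${cpu}` placeholders in `kvm-config` (and the `$$` escapes in `install.sh`, which stand for a literal `$`) follow Python `string.Template` syntax. A minimal sketch of how such a template might be rendered and handed to libvirt — the file path and parameter values here are illustrative, not the project's actual install flow:

```python
#!/usr/bin/env python3
# Sketch: render a string.Template file and define the guest with virsh.
# Parameter values are placeholders; real ones would come from the infra config.
from string import Template
import subprocess

params = {
    'name': 'foreman',   # VM name
    'ram': '4194304',    # memory for the domain
    'cpu': '4',          # vCPU count
}

with open('infra/foreman/templates/kvm-config') as src:
    xml = Template(src.read()).substitute(params)

with open('/tmp/foreman-kvm.xml', 'w') as dst:
    dst.write(xml)

# Register (but do not start) the domain from the rendered XML.
subprocess.check_call(['virsh', 'define', '/tmp/foreman-kvm.xml'])
```

`substitute()` raises `KeyError` on any placeholder left unfilled, which is a cheap way to catch an incomplete config; `safe_substitute()` would silently leave the token in the output.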
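
The `%{hiera('key')}` tokens in `common.yaml` are resolved by Hiera on the puppet master when catalogs are compiled. To trace the indirection by hand — for instance, which literal `opensteak::keystone::admin_password` ends up as — here is a rough stand-in for the lookup, assuming PyYAML is available (Puppet/Hiera do the real resolution):

```python
#!/usr/bin/env python3
# Illustration only: mimic Hiera's %{hiera('...')} interpolation over
# common.yaml to show how the indirect keys resolve.
import re
import yaml  # PyYAML

with open('infra/foreman/templates/common.yaml') as f:
    data = yaml.safe_load(f)

TOKEN = re.compile(r"%\{hiera\('([^']+)'\)\}")

def resolve(key):
    """Follow %{hiera('...')} references until only literals remain."""
    value = data[key]
    while isinstance(value, str) and TOKEN.search(value):
        value = TOKEN.sub(lambda m: str(resolve(m.group(1))), value)
    return value

print(resolve("opensteak::keystone::admin_password"))  # -> password
print(resolve("opensteak::puppet::foreman_fqdn"))      # -> foreman.${domain}
```

Note that the second lookup still contains `${domain}`: that placeholder belongs to the install-time templating above, not to Hiera, so the two substitution layers stay independent.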