├── README.md
├── ansible.cfg
├── group_vars
│   └── all
├── hosts
├── images
│   ├── os_architecture.png
│   ├── os_dep_diagram.png
│   ├── os_images.png
│   ├── os_instances.png
│   ├── os_log_network.png
│   ├── os_network_detailed.png
│   ├── os_networks.png
│   ├── os_phy_network.png
│   ├── os_projects.png
│   ├── os_sysinfo.png
│   └── os_vnc.png
├── playbooks
│   ├── image.yml
│   ├── tenant.yml
│   └── vm.yml
├── roles
│   ├── common
│   │   ├── files
│   │   │   ├── RPM-GPG-KEY-EPEL-6
│   │   │   ├── epel-openstack-grizzly.repo
│   │   │   ├── epel.repo
│   │   │   ├── openvswitch-kmod-rhel6.spec
│   │   │   ├── openvswitch-kmod.files
│   │   │   ├── openvswitch.spec
│   │   │   └── selinux
│   │   ├── handlers
│   │   │   └── main.yml
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       ├── hosts.j2
│   │       └── iptables.j2
│   ├── compute
│   │   ├── files
│   │   │   ├── guestfs.py
│   │   │   └── qemu.conf.j2
│   │   ├── handlers
│   │   │   └── main.yml
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       ├── nova.conf.j2
│   │       └── ovs_quantum_plugin.ini.j2
│   └── controller
│       ├── files
│       │   └── sysctl.conf
│       ├── handlers
│       │   └── main.yml
│       ├── tasks
│       │   ├── base.yml
│       │   ├── cinder.yml
│       │   ├── glance.yml
│       │   ├── horizon.yml
│       │   ├── keystone.yml
│       │   ├── main.yml
│       │   ├── nova.yml
│       │   └── quantum.yml
│       └── templates
│           ├── cinder.conf.j2
│           ├── dhcp_agent.ini.j2
│           ├── glance-api.conf.j2
│           ├── glance-registry.conf.j2
│           ├── keystone.conf.j2
│           ├── keystone_data.sh.j2
│           ├── keystonerc.j2
│           ├── l3_agent.ini.ext.j2
│           ├── l3_agent.ini.j2
│           ├── local_settings.j2
│           ├── nova.conf.j2
│           ├── ovs_quantum_plugin.ini.j2
│           ├── quantum.conf.j2
│           └── targets.conf.j2
└── site.yml
/README.md:
--------------------------------------------------------------------------------
1 | # Deploying OpenStack with Ansible
2 |
3 | - Requires Ansible 1.2
4 | - Expects CentOS/RHEL 6 hosts (64 bit)
5 | - Kernel Version 2.6.32-358.2.1.el6.x86_64
6 |
7 |
8 | ## A Primer on OpenStack Architecture
9 |
10 | ![Alt text](/images/os_architecture.png "Arch")
11 |
12 | As the above diagram depicts, there are several key components that make up the
13 | OpenStack cloud infrastructure. A brief description of each service is as
14 | follows:
15 |
16 | - Dashboard (Horizon): A web-based GUI that provides an interface for the end
17 | user or administrator to interact with the OpenStack cloud infrastructure.
18 |
19 | - Nova: The Nova component is primarily responsible for taking API requests
20 | from end users/admins to create virtual machines and coordinating their
21 | creation and delivery. The Nova component consists of several
22 | subcomponents/processes which help Nova deliver the virtual machines. Let's
23 | have a brief look at them:
24 |
25 | - nova-api: Runs the API server, which accepts and responds to the
26 | end-user/admin requests.
27 |
28 | - nova-compute: Runs on the compute nodes and interacts with hypervisors to
29 | create/terminate virtual machines.
30 |
31 | - nova-volume: Responsible for providing persistent storage to VMs--it is
32 | gradually being migrated to Cinder.
33 |
34 | - nova-network: Responsible for networking requirements like adding firewall
35 | rules, creating bridges, etc. This component is also slowly being migrated to
36 | Quantum services.
37 |
38 | - nova-scheduler: Responsible for choosing the best compute server where the
39 | requested virtual machine should run.
40 |
41 | - nova-console: Responsible for providing the VM's console via VNC to end
42 | users.
43 |
44 | - Glance: This service provides end users/admins with options to add/remove
45 | images to the OpenStack infrastructure, which are used to deploy virtual
46 | machines.
47 |
48 | - Keystone: The Keystone component is responsible for providing identity and
49 | authentication services to all other services that require them.
50 |
51 | - Quantum: Quantum is primarily responsible for providing networks and
52 | connectivity to the Nova instances.
53 |
54 | - Swift: The Swift service provides end users/admins with a highly scalable and
55 | fault-tolerant object store.
56 |
57 | - Cinder: Provides permanent disk storage services to Nova instances.
58 |
59 |
60 | ## Ansible's Example Deployment Diagram
61 |
62 | ![Alt text](/images/os_dep_diagram.png "diagram")
63 |
64 | The above diagram shows how the Ansible playbooks deploy OpenStack, combining all
65 | the management processes onto a single node (the controller node) and the compute
66 | service onto the other nodes (compute nodes).
67 |
68 | The management services deployed/configured on the controller node are as
69 | follows:
70 |
71 | - Nova: Uses MySQL to store VM info and state. The communication between
72 | the various Nova components is handled by the Apache QPID messaging system.
73 |
74 | - Quantum: Uses MySQL as its backend database and QPID for interprocess
75 | communication. The Quantum service uses the Open vSwitch plugin to provide
76 | network services. L2 and L3 services are taken care of by Quantum, while the
77 | firewall service is provided by Nova.
78 |
79 | - Cinder: Uses MySQL as the backend database and provides persistent storage to
80 | instances via LVM and the tgtd daemon (iSCSI).
81 |
82 | - Keystone: Uses MySQL to store the tenant/user details and provides identity
83 | and authorization services to all other components.
84 |
85 | - Glance: Uses MySQL as the backend database, and images are stored on the
86 | local filesystem.
87 |
88 | - Horizon: Uses memcached to store user session details and provides the UI
89 | for end users/admins.
90 |
91 | The playbooks also configure the compute nodes with the following components:
92 |
93 | - Nova Compute: The Nova Compute service is configured to use qemu as the
94 | hypervisor and uses libvirt to create/configure/terminate virtual machines.
95 |
96 | - Quantum Agent: A Quantum agent is also configured on each compute node; it
97 | uses Open vSwitch to provide L2 services to the running virtual machines.
98 |
99 |
100 | ## Physical Network Deployment Diagram
101 |
102 | ![Alt text](/images/os_phy_network.png "phy_net")
103 |
104 | The diagram in Fig 1.a shows a typical OpenStack production network setup, which
105 | consists of four networks: a management network for managing the servers,
106 | a data network which carries all the inter-VM traffic,
107 | an external network which facilitates the flow of network traffic destined
108 | for an outside network (this network/interface would be attached to the network
109 | controller), and a fourth, the API network, which would be used by
110 | end users to access the APIs for cloud operations.
111 |
112 | For the sake of simplicity, the Ansible example deployment simplifies the
113 | network topology by deploying two networks: the first network handles the
114 | traffic for management/API and data, while the second network/interface is
115 | used for external access.
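In practice this means every node needs one NIC carrying the management/API/data traffic (with the host's IP on it), and the controller additionally needs a second, unaddressed NIC for the external traffic. A quick way to sanity-check that layout before running the playbooks is sketched below; the interface names are assumptions and will differ per environment (the external one is set via `quantum_external_interface` in group_vars/all):

    # eth0: management/API/data network -- should hold the host's IP address
    ip -o -4 addr show eth0

    # eth2: external NIC on the controller -- should be up but carry no IP
    ip -o -4 addr show eth2      # expect no output
    ip link show eth2            # look for "state UP"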
116 |
117 |
118 | ### Logical Network Deployment Diagram
119 |
120 | ![Alt text](/images/os_log_network.png "log_network")
121 |
122 | OpenStack provides several logical network deployment options, such as flat
123 | networks, a provider router with private networks, and tenant routers with private
124 | networks. The above diagram shows the network deployment carried out by the
125 | Ansible playbooks, which is referred to as "provider router with private
126 | networks".
127 |
128 | As can be seen above, all the tenants get their own private networks,
129 | of which there can be one or many. The provider, in this case the cloud admin, has
130 | a dedicated router with an external interface for all outgoing
131 | traffic. In case any of the tenants require external access (internet/other
132 | datacenter networks), they can attach their private network to the router and
133 | get external access.
134 |
135 |
136 | ## Network Under The Hood
137 |
138 | ![Alt text](/images/os_network_detailed.png "Detailed network")
139 |
140 | As described previously, the Ansible example playbooks deploy OpenStack with a
141 | network topology of "provider router with private networks for tenants". The
142 | above diagram gives a brief description of the data flow in this network topology.
143 |
144 | As the first step during deployment, the Ansible playbooks create a few Open vSwitch
145 | software bridges on the controller and compute nodes (an ovs-vsctl sketch of the
146 | resulting layout follows these descriptions):
147 |
148 | ### Controller Node:
149 |
150 | - br-int: Known as the integration bridge, this will have all the interfaces of the
151 | router attached to it.
152 |
153 | - br-ex: Known as the external bridge, this will have the interfaces attached
154 | to it that have an external IP address. For example, when an external network
155 | is created and a gateway is set, a new interface is created and attached to the
156 | br-ex bridge with the gateway IP. All floating IPs will have an interface
157 | created and attached to this bridge. A physical NIC is also added to this
158 | bridge so that external traffic flows through this NIC to reach the next
159 | router.
160 |
161 |
162 | ### Compute Node:
163 |
164 | - br-int: This integration bridge will have all the interfaces from virtual
165 | machines attached to it.
166 |
167 | Apart from the above bridges, there will be one more bridge automatically
168 | created by OpenStack for communication between the compute nodes and the network
169 | controller node.
170 |
171 | - br-tun: The tunnel bridge is created on both the compute nodes and the
172 | network controller node, and it has two interfaces attached to it:
173 |
174 | - A patch port/interface: This acts like an uplink between the integration
175 | bridge and the tunnel bridge, so that all the traffic at the integration
176 | bridge is forwarded to the tunnel bridge.
177 |
178 | - A GRE tunnel port/interface: A GRE tunnel encapsulates all the
179 | traffic arriving at its interface with its own IP address and delivers it to
180 | the opposite endpoint. In our case all the compute nodes have a tunnel
181 | set up with the network controller as the remote endpoint, so the tunnel interface
182 | in the br-tun bridge on a compute node delivers all the VM traffic coming from
183 | the br-int interface to the tunnel endpoint interface in the controller's
184 | br-tun bridge.
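For reference, the bridge layout described above corresponds roughly to the Open vSwitch commands below. This is only an illustrative sketch -- the playbooks and the Quantum OVS agent create the bridges, patch ports and GRE tunnels themselves; the NIC name and the tunnel IP are example values taken from elsewhere in this document:

    # Controller (network) node
    ovs-vsctl add-br br-int
    ovs-vsctl add-br br-ex
    ovs-vsctl add-port br-ex eth2    # physical NIC carrying external traffic

    # Both controller and compute nodes: tunnel bridge, patch pair and GRE port
    ovs-vsctl add-br br-tun
    ovs-vsctl add-port br-int patch-tun -- set interface patch-tun type=patch options:peer=patch-int
    ovs-vsctl add-port br-tun patch-int -- set interface patch-int type=patch options:peer=patch-tun
    ovs-vsctl add-port br-tun gre-1 -- set interface gre-1 type=gre options:remote_ip=192.168.2.41

    # "ovs-vsctl show" should then list the bridges with their patch and GRE ports

On a compute node the GRE port's remote_ip points at the controller's data interface (192.168.2.41 in the walkthrough below), and the controller's GRE port points back at the compute node's data interface.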
185 |
186 |
187 | ## Network Operations In Detail
188 |
189 | ### A private tenant network is created:
190 |
191 | When a private network for a tenant is created, the first tasks are to:
192 |
193 | - Create an interface, assign it an IP from that range, and attach it to the
194 | br-int bridge on the controller node. This interface will be used by the
195 | dhcp-agent process running on the node to give out dynamic IP addresses to VMs
196 | that get created on this subnet. So, as the above diagram shows, if the tenant
197 | subnet created is 10.0.0.0/24, an IP of 10.0.0.3 would be assigned to a vNIC
198 | added to the br-int bridge, and this interface would be used by the dnsmasq process
199 | to give IPs to VMs in this subnet.
200 |
201 | - Also, a virtual interface is created and added to the br-int bridge;
202 | this interface is typically given the first IP in the subnet created and
203 | acts as the default gateway for instances that have an interface attached to
204 | this subnet. So if the subnet created is 10.0.0.0/24, an IP of 10.0.0.1 would be
205 | given to the router interface attached to the br-int bridge.
206 |
207 |
208 | ### A new VM is created:
209 |
210 | As per the above diagram, consider that a tenant VM is created and running on
211 | the 10.0.0.0/24 subnet with an IP of 10.0.0.5 on a compute node.
212 |
213 | - During the VM creation process, the Open vSwitch agent on the compute node
214 | attaches the vNIC of the VM to the br-int bridge. The br-tun bridge
215 | is set up with a tunnel interface whose source is the compute node
216 | and whose remote endpoint is the network controller.
217 |
218 | - A patch interface is created between the two bridges so that all
219 | traffic on the br-int bridge is forwarded to the br-tun bridge.
220 |
221 |
222 | ### External network is created and gateway set on router:
223 |
224 | As the figure above depicts, an external network is used to route the traffic
225 | from the internal/private OpenStack networks to the external network/internet.
226 | External network creation expects a physical interface added to the br-ex
227 | bridge, and that physical interface must be able to transfer packets to the next hop router for
228 | the external network. As per the above diagram, 1.1.1.1 would be an interface on
229 | the next hop router for the OpenStack controller.
230 |
231 | - The first thing OpenStack does on external network creation is
232 | create a virtual interface named "qg-xxx" on the br-ex bridge and
233 | assign the second IP of the subnet to this interface, e.g. 1.1.1.2.
234 |
235 | - OpenStack also adds a default route to the controller, which would be the
236 | first IP of the subnet, e.g. 1.1.1.1.
237 |
238 |
239 | ### Associate a Floating IP to a VM:
240 |
241 | Floating IPs are used to give access to a VM from external networks, and to
242 | give external access to the VMs in the private network (a sketch of the
243 | resulting NAT rules follows this list).
244 |
245 | - When a floating IP is associated with a VM, a virtual interface is created on
246 | the br-ex bridge and the public IP is assigned to this interface.
247 |
248 | - A destination NAT rule is set up in iptables so that the traffic coming
249 | to a particular external IP is translated to its corresponding private
250 | IP.
251 |
252 | - A source NAT rule is set up in iptables so that traffic originating from a
253 | particular private IP is NATted to its corresponding external IP so that it can
254 | access the external network.
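Using the example addresses from the walkthrough below (fixed IP 10.0.0.5, floating IP 1.1.1.10), the net effect of those two rules can be sketched as the following iptables pair. This is an illustration only -- the actual rules are installed and managed by the Quantum L3 agent in its own NAT chains:

    # Inbound: traffic for the floating IP is DNATed to the VM's private address
    iptables -t nat -A PREROUTING  -d 1.1.1.10/32 -j DNAT --to-destination 10.0.0.5

    # Outbound: traffic from the VM's private address is SNATed to the floating IP
    iptables -t nat -A POSTROUTING -s 10.0.0.5/32 -j SNAT --to-source 1.1.1.10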
255 |
256 | ### A step-by-step look at a VM in a private network accessing the internet:
257 |
258 | As per the above diagram, consider the tenant VM is on the 10.0.0.0/24 subnet
259 | and has an IP of 10.0.0.5. An external network is created with a subnet of
260 | 1.1.1.0/24 and is attached to the external router. A floating IP is also given
261 | to the tenant's VM: 1.1.1.10. So in the case of the VM pinging an external IP, the
262 | sequence of events would be as follows:
263 |
264 | 1. Since the destination is in an external subnet, the VM forwards
265 | the packet to its default gateway, which in this case is 10.0.0.1, a
266 | virtual interface in the br-int bridge on the controller node.
267 |
268 | 2. The packet traverses the patch interface from the br-int bridge to
269 | the br-tun bridge.
270 |
271 | 3. The br-tun bridge on the compute node has a tunnel endpoint with
272 | 192.168.2.42 as its source and 192.168.2.41, the data
273 | interface of the controller node, as its remote endpoint. The packet gets encapsulated in
274 | GRE and is transferred to the network controller node.
275 |
276 | 4. Upon reaching the br-tun bridge on the network controller, the GRE packet
277 | is stripped and forwarded to the br-int bridge through the patch interface.
278 |
279 | 5. The 10.0.0.1 interface intercepts the packet and, based on the routing
280 | rules, forwards the packet to the appropriate interface, e.g. 1.1.1.2.
281 |
282 | 6. The SNAT rule on the POSTROUTING chain of iptables NATs the source IP
283 | from 10.0.0.5 to 1.1.1.10 and the packet is sent out to the next hop router:
284 | 1.1.1.1.
285 |
286 | 7. On receiving a reply, the PREROUTING chain does a destination NAT:
287 | 1.1.1.10 is changed back to 10.0.0.5, and the reply follows the same route to reach the VM.
288 |
289 |
290 | ## Deploying OpenStack with Ansible
291 |
292 | As discussed above, this example deploys OpenStack with all of
293 | the management services on a single host (the controller node) and
294 | the hypervisors (compute nodes) on the other nodes.
295 |
296 |
297 | #### Prerequisites:
298 |
299 | - CentOS version 6.4 (64-bit)
300 | - Kernel Version: 2.6.32-358.2.1.el6.x86_64
301 | - Controller Node: Should have an extra NIC (for external traffic routing) and
302 | an extra disk or partition for Cinder volumes.
303 |
304 | Before the playbooks are run, please make sure that the following parameters in
305 | group_vars/all suit your environment (a quick sanity check is sketched after this list):
306 |
307 | - quantum_external_interface: eth2 # The NIC that will be used for external
308 | traffic. Make sure there is no IP assigned to it and that it is enabled.
309 |
310 | - cinder_volume_dev: /dev/vdb # An additional disk or partition that will be
311 | used for Cinder volumes. Please make sure that it is not mounted and that no
312 | volume groups have been created on this disk.
313 |
314 | - external_subnet_cidr: 1.1.1.0/24 # The subnet that will be used for external
315 | network access. If you are just testing, any subnet should be fine, and the VMs
316 | should be accessible from the controller using this external IP. If you need the
317 | Ansible host to access the VMs using this public IP, make sure the Ansible host has
318 | an IP in this segment and that interface is on the same broadcast domain as the
319 | second interface on the network controller.
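A quick way to review these three settings and verify that the controller actually matches them before running site.yml. The values shown are the examples from this section (eth2, /dev/vdb, 1.1.1.0/24) and should be replaced with your own:

    grep -E '^(quantum_external_interface|cinder_volume_dev|external_subnet_cidr)' group_vars/all
    # quantum_external_interface: eth2
    # cinder_volume_dev: /dev/vdb
    # external_subnet_cidr: 1.1.1.0/24

    # On the controller: the external NIC must be up with no IP assigned,
    # and the Cinder device must be unmounted with no existing volume group
    ip addr show eth2
    mount | grep vdb    # expect no output
    pvs | grep vdb      # expect no output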
320 |
321 |
322 | ### Operation:
323 |
324 | Once the prerequisites are satisfied and the group variables are changed to
325 | suit your environment, modify the inventory file 'hosts' to match your
326 | environment. Here's an example inventory:
327 |
328 |     [openstack_controller]
329 |     openstack-controller
330 |
331 |     [openstack_compute]
332 |     openstack-compute
333 |
334 | Run the playbook to deploy the OpenStack cloud:
335 |
336 |     ansible-playbook -i hosts site.yml
337 |
338 | Once the playbooks complete, you can check the deployment by logging into the
339 | Horizon console at http://controller-ip/dashboard. The login credentials are
340 | as defined in group_vars/all. The state of all services can be verified by
341 | checking the "System Info" tab.
342 |
343 | ![Alt text](/images/os_sysinfo.png "sysinfo")
344 |
345 |
346 | ### Uploading an Image to OpenStack
347 |
348 | Once OpenStack is deployed, an image can be uploaded to Glance for VM creation
349 | using the following command. This uploads a cirros image from a URL directly
350 | to Glance.
351 |
352 | Note: if your external network is not a valid network, this download of the image
353 | from the Internet might not work, as OpenStack adds a default route on your
354 | controller pointing to the first IP of the external network. So please make sure you have
355 | the right default route set on your controller.
356 |
357 |     ansible-playbook -i hosts playbooks/image.yml -e "image_name=cirros image_url=https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img"
358 |
359 | We can verify the image status by checking the "Images" tab in Horizon.
360 |
361 | ![Alt text](/images/os_images.png "Images")
362 |
363 |
364 | ### Create a Tenant and its Private Network
365 |
366 | The following command shows how to create a tenant and a private network. It
367 | creates a tenant by the name of "tenant1", creates an admin
368 | user for that tenant by the name of "t1admin", and also provisions a private network/subnet with
369 | a CIDR of 2.2.2.0/24 for its VMs.
370 |
371 |     ansible-playbook -i hosts playbooks/tenant.yml -e "tenant_name=tenant1 tenant_username=t1admin tenant_password=abc network_name=t1net subnet_name=t1subnet subnet_cidr=2.2.2.0/24 tunnel_id=3"
372 |
373 | The status of the tenant and the network can be verified from Horizon. The
374 | tenant list is available in the "Projects" tab.
375 |
376 | ![Alt text](/images/os_projects.png "projects")
377 |
378 | The network topology is visible in the "Projects -> Network Topology" tab.
379 |
380 | ![Alt text](/images/os_networks.png "networks")
381 |
382 |
383 | ### Creating a VM for the Tenant
384 |
385 | To create a VM for a tenant, we can issue the following command. The command
386 | creates a new VM for tenant1. The VM is attached to the network created above,
387 | t1net, and the Ansible user's public key is injected into the node so that
388 | the VM is ready to be managed by Ansible.
389 |
390 |     ansible-playbook -i hosts playbooks/vm.yml -e "tenant_name=tenant1 tenant_username=t1admin tenant_password=abc network_name=t1net vm_name=t1vm flavor_id=6 keypair_name=t1keypair image_name=cirros"
391 |
392 | Once created, the VM can be verified by going to the "Instances" tab in Horizon.
393 |
394 | ![Alt text](/images/os_instances.png "Instances")
395 |
396 | We can also get the console of the VM from the Horizon UI by clicking on the
397 | "Console" drop-down menu.
398 |
399 | ![Alt text](/images/os_vnc.png "console")
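Besides Horizon, the objects created above can also be checked from the controller's command line with the standard Grizzly clients. This sketch assumes the keystonerc credentials file rendered by the playbooks (roles/controller/templates/keystonerc.j2) has been sourced; its exact location depends on your setup:

    source keystonerc
    nova list --all-tenants   # the new VM (t1vm) should show as ACTIVE
    glance image-list         # the cirros image uploaded earlier
    quantum net-list          # t1net plus the external network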
400 |
401 |
402 | ### Managing OpenStack VMs Dynamically from Ansible
403 |
404 | There are multiple ways to manage the VMs deployed in OpenStack via Ansible.
405 |
406 | 1. Use the inventory script. While creating the virtual machine, pass the
407 | metadata parameter to the VM specifying the group to which it should belong,
408 | and the inventory script will generate hosts arranged by their group name.
409 |
410 | 2. If the requirement is to deploy and manage the virtual machines using the
411 | same playbook, this can be achieved by writing a play to create the virtual
412 | machines and add those VMs dynamically to the inventory using the 'add_host'
413 | module. Then write another play targeting the group name defined in the add_host
414 | module and implement the tasks to be carried out on those new VMs.
415 |
416 |
--------------------------------------------------------------------------------
/ansible.cfg:
--------------------------------------------------------------------------------
1 | # config file for ansible -- http://ansible.github.com
2 | # nearly all parameters can be overridden in ansible-playbook or with command line flags
3 | # ansible will read ~/.ansible.cfg or /etc/ansible/ansible.cfg, whichever it finds first
4 |
5 | [defaults]
6 |
7 | # location of inventory file, eliminates need to specify -i
8 |
9 | hostfile = /etc/ansible/hosts
10 |
11 | # location of ansible library, eliminates need to specify --module-path
12 |
13 | library = /usr/share/ansible
14 |
15 | # default module name used in /usr/bin/ansible when -m is not specified
16 |
17 | module_name = command
18 |
19 | # location for ansible log file. If set, will store output from ansible
20 | # and ansible-playbook. If enabling, you may wish to configure
21 | # logrotate.
22 |
23 | #log_path = /var/log/ansible.log
24 |
25 | # home directory where temp files are stored on remote systems. Should
26 | # almost always contain $HOME or be a directory writeable by all users
27 |
28 | remote_tmp = $HOME/.ansible/tmp
29 |
30 | # the default pattern for ansible-playbooks ("hosts:")
31 |
32 | pattern = *
33 |
34 | # the default number of forks (parallelism) to be used. Usually you
35 | # can crank this up.
36 |
37 | forks=5
38 |
39 | # the timeout used by various connection types. Usually this corresponds
40 | # to an SSH timeout
41 |
42 | timeout=10
43 |
44 | # when using --poll or "poll:" in an ansible playbook, and not specifying
45 | # an explicit poll interval, use this interval
46 |
47 | poll_interval=15
48 |
49 | # when specifying --sudo to /usr/bin/ansible or "sudo:" in a playbook,
50 | # and not specifying "--sudo-user" or "sudo_user" respectively, sudo
51 | # to this user account
52 |
53 | sudo_user=root
54 |
55 | # the following forces ansible to always ask for the sudo password (instead of having
56 | # to add -K to the commandline). Or you can use the environment variable (ANSIBLE_ASK_SUDO_PASS)
57 |
58 | #ask_sudo_pass=True
59 |
60 | # the following forces ansible to always ask for the ssh-password (-k)
61 | # can also be set by the environment variable ANSIBLE_ASK_PASS
62 |
63 | #ask_pass=True
64 |
65 | # connection to use when -c is not specified
66 |
67 | transport=paramiko
68 |
69 | # remote SSH port to be used when --port or "port:" or an equivalent inventory
70 | # variable is not specified.
71 |
72 | remote_port=22
73 |
74 | # if set, always run /usr/bin/ansible commands as this user, and assume this value
75 | # if "user:" is not set in a playbook.
If not set, use the current Unix user 76 | # as the default 77 | 78 | #remote_user=root 79 | 80 | # the default sudo executable. If a sudo alternative with a sudo-compatible interface 81 | # is used, specify its executable name as the default 82 | 83 | sudo_exe=sudo 84 | 85 | # the default flags passed to sudo 86 | # sudo_flags=-H 87 | 88 | # all commands executed under sudo are passed as arguments to a shell command 89 | # This shell command defaults to /bin/sh 90 | # Changing this helps the situation where a user is only allowed to run 91 | # e.g. /bin/bash with sudo privileges 92 | 93 | # executable = /bin/sh 94 | 95 | # how to handle hash defined in several places 96 | # hash can be merged, or replaced 97 | # if you use replace, and have multiple hashes named 'x', the last defined 98 | # will override the previously defined one 99 | # if you use merge here, hash will cumulate their keys, but keys will still 100 | # override each other 101 | # replace is the default value, and is how ansible always handled hash variables 102 | # 103 | # hash_behaviour=replace 104 | 105 | # How to handle variable replacement - as of 1.2, Jinja2 variable syntax is 106 | # preferred, but we still support the old $variable replacement too. 107 | # If you change legacy_playbook_variables to no then Ansible will no longer 108 | # try to do replacement on $variable style variables. 109 | # 110 | # legacy_playbook_variables=yes 111 | 112 | # if you need to use jinja2 extensions, you can list them here 113 | # use a coma to separate extensions, e.g. : 114 | # jinja2_extensions=jinja2.ext.do,jinja2.ext.i18n 115 | # no extensions are loaded by default 116 | 117 | #jinja2_extensions= 118 | 119 | # if set, always use this private key file for authentication, same as if passing 120 | # --private-key to ansible or ansible-playbook 121 | 122 | #private_key_file=/path/to/file 123 | 124 | # format of string $ansible_managed available within Jinja2 templates, replacing 125 | # {file}, {host} and {uid} with template filename, host and owner respectively. 126 | # The resulting string is passed through strftime(3) so it may contain any 127 | # time-formatting specifiers. 128 | # 129 | # Example: ansible_managed = DONT TOUCH {file}: call {uid} at {host} for changes 130 | ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host} 131 | 132 | # additional plugin paths for non-core plugins 133 | 134 | action_plugins = /usr/share/ansible_plugins/action_plugins 135 | callback_plugins = /usr/share/ansible_plugins/callback_plugins 136 | connection_plugins = /usr/share/ansible_plugins/connection_plugins 137 | lookup_plugins = /usr/share/ansible_plugins/lookup_plugins 138 | vars_plugins = /usr/share/ansible_plugins/vars_plugins 139 | filter_plugins = /usr/share/ansible_plugins/filter_plugins 140 | 141 | # set to 1 if you don't want cowsay support. Alternatively, set ANSIBLE_NOCOWS=1 142 | # in your environment 143 | # nocows = 1 144 | 145 | [paramiko_connection] 146 | 147 | # nothing to configure yet 148 | 149 | [ssh_connection] 150 | 151 | # if uncommented, sets the ansible ssh arguments to the following. 
Leaving off ControlPersist 152 | # will result in poor performance, so use transport=paramiko on older platforms rather than 153 | # removing it 154 | 155 | ssh_args=-o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r 156 | 157 | # the following makes ansible use scp if the connection type is ssh (default is sftp) 158 | 159 | #scp_if_ssh=True 160 | 161 | 162 | -------------------------------------------------------------------------------- /group_vars/all: -------------------------------------------------------------------------------- 1 | --- 2 | # Variables for openstack 3 | 4 | keystone_db_pass: Rottinva 5 | keystone_admin_token: 03a776b545249a6f6b2f 6 | 7 | glance_db_pass: RedebTix 8 | glance_filesystem_store_datadir: /var/lib/glance/images/ 9 | 10 | nova_db_pass: Eawjoof2 11 | nova_libvirt_type: kvm 12 | 13 | quantum_db_pass: poufujusio 14 | quantum_external_interface: eth1 15 | 16 | cinder_db_pass: gamBefcel 17 | cinder_volume_dev: /dev/md127 18 | 19 | external_network_name: external_network 20 | external_subnet_name: external_sub_network 21 | external_subnet_cidr: 10.227.1.0/16 22 | external_router_name: external_router 23 | # 24 | #By default a tenant admin is created and below is the password for the admin user of the admin tenant. 25 | # 26 | admin_tenant: admin 27 | admin_tenant_user: admin 28 | admin_pass: waxDoptewv 29 | 30 | -------------------------------------------------------------------------------- /hosts: -------------------------------------------------------------------------------- 1 | [openstack_controller] 2 | server1.4aths.friocorte.com 3 | 4 | [openstack_compute] 5 | server1.4aths.friocorte.com 6 | 7 | -------------------------------------------------------------------------------- /images/os_architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/c54075027a909be22816fc38f7ae209fa1a6b7b4/images/os_architecture.png -------------------------------------------------------------------------------- /images/os_dep_diagram.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/c54075027a909be22816fc38f7ae209fa1a6b7b4/images/os_dep_diagram.png -------------------------------------------------------------------------------- /images/os_images.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/c54075027a909be22816fc38f7ae209fa1a6b7b4/images/os_images.png -------------------------------------------------------------------------------- /images/os_instances.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/c54075027a909be22816fc38f7ae209fa1a6b7b4/images/os_instances.png -------------------------------------------------------------------------------- /images/os_log_network.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/c54075027a909be22816fc38f7ae209fa1a6b7b4/images/os_log_network.png -------------------------------------------------------------------------------- /images/os_network_detailed.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/c54075027a909be22816fc38f7ae209fa1a6b7b4/images/os_network_detailed.png -------------------------------------------------------------------------------- /images/os_networks.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/c54075027a909be22816fc38f7ae209fa1a6b7b4/images/os_networks.png -------------------------------------------------------------------------------- /images/os_phy_network.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/c54075027a909be22816fc38f7ae209fa1a6b7b4/images/os_phy_network.png -------------------------------------------------------------------------------- /images/os_projects.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/c54075027a909be22816fc38f7ae209fa1a6b7b4/images/os_projects.png -------------------------------------------------------------------------------- /images/os_sysinfo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/c54075027a909be22816fc38f7ae209fa1a6b7b4/images/os_sysinfo.png -------------------------------------------------------------------------------- /images/os_vnc.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/c54075027a909be22816fc38f7ae209fa1a6b7b4/images/os_vnc.png -------------------------------------------------------------------------------- /playbooks/image.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Upload an image 3 | - hosts: openstack_controller 4 | tasks: 5 | - name: upload the file to glance 6 | glance_image: login_username={{ admin_tenant_user }} login_password={{ admin_pass }} 7 | login_tenant_name={{ admin_tenant }} name={{ image_name }} container_format=bare 8 | disk_format=qcow2 state=present copy_from={{ image_url }} 9 | 10 | -------------------------------------------------------------------------------- /playbooks/tenant.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Creates tenant user and role 3 | 4 | - hosts: openstack_controller 5 | tasks: 6 | - name: Create Tenant 7 | keystone_user: token={{ keystone_admin_token }} tenant={{ tenant_name }} tenant_description="New Tenant" 8 | 9 | - name: Create the user for tenant 10 | keystone_user: token={{ keystone_admin_token }} user={{ tenant_username }} tenant={{ tenant_name }} 11 | password={{ tenant_password }} 12 | 13 | - name: Assign role to the created user 14 | keystone_user: token={{ keystone_admin_token }} role=admin user={{ tenant_username }} tenant={{ tenant_name }} 15 | 16 | - name: Create a network for the tenant 17 | quantum_network: state=present login_username={{ admin_tenant_user }} login_password={{ admin_pass }} 18 | provider_network_type=gre login_tenant_name={{ admin_tenant }} 19 | provider_segmentation_id={{ tunnel_id }} tenant_name={{ tenant_name }} name={{ network_name }} 20 | 21 | - name: Create a subnet for the network 22 | quantum_subnet: state=present login_username={{ admin_tenant_user }} login_password={{ admin_pass }} 23 | login_tenant_name={{ admin_tenant 
}} tenant_name={{ tenant_name }} 24 | network_name={{ network_name }} name={{ subnet_name }} cidr={{ subnet_cidr }} 25 | 26 | 27 | - name: Add the network interface to the router 28 | quantum_router_interface: state=present login_username={{ admin_tenant_user }} login_password={{ admin_pass }} login_tenant_name={{ admin_tenant }} tenant_name={{ tenant_name }} router_name={{ external_router_name }} 29 | subnet_name={{ subnet_name }} 30 | -------------------------------------------------------------------------------- /playbooks/vm.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Playbook to create vm for a tenant 3 | 4 | - hosts: openstack_controller 5 | tasks: 6 | - name: Get the image id from glance 7 | glance_image: login_username={{ tenant_username }} login_password={{ tenant_password }} 8 | login_tenant_name={{ tenant_name }} name={{ image_name }} state=present copy_from="http://127.0.0.1/dummy" 9 | register: image 10 | 11 | - name: Create keypair for tenant 12 | nova_keypair: state=present login_username={{ tenant_username }} login_password={{ tenant_password }} 13 | login_tenant_name={{ tenant_name }} name={{ keypair_name }} 14 | public_key="{{ lookup('file','~/.ssh/id_rsa.pub') }}" 15 | 16 | - name: Get network id for the tenant 17 | quantum_network: state=present login_username={{ tenant_username }} login_password={{ tenant_password }} 18 | login_tenant_name={{ tenant_name }} name={{ network_name }} 19 | register: network 20 | 21 | - name: Create a vm for the tenant 22 | nova_compute: 23 | state: present 24 | login_username: '{{ tenant_username }}' 25 | login_password: '{{ tenant_password }}' 26 | login_tenant_name: '{{ tenant_name }}' 27 | name: '{{ vm_name }}' 28 | image_id: '{{ image.id }}' 29 | key_name: '{{ keypair_name }}' 30 | wait_for: 440 31 | flavor_id: '{{ flavor_id }}' 32 | nics: 33 | - net-id: '{{ network.id }}' 34 | meta: 35 | hostname: test1 36 | group: uge_master 37 | register: vm 38 | 39 | - name: Assign a public ip to vm 40 | quantum_floating_ip: state=present login_username={{ tenant_username }} login_password={{ tenant_password }} 41 | login_tenant_name={{ tenant_name }} network_name={{ external_network_name }} instance_name={{ vm_name }} 42 | 43 | 44 | - name: Let's wait till the instance comes up 45 | wait_for: port=22 delay=10 host={{ vm.private_ip }} 46 | -------------------------------------------------------------------------------- /roles/common/files/RPM-GPG-KEY-EPEL-6: -------------------------------------------------------------------------------- 1 | -----BEGIN PGP PUBLIC KEY BLOCK----- 2 | Version: GnuPG v1.4.5 (GNU/Linux) 3 | 4 | mQINBEvSKUIBEADLGnUj24ZVKW7liFN/JA5CgtzlNnKs7sBg7fVbNWryiE3URbn1 5 | JXvrdwHtkKyY96/ifZ1Ld3lE2gOF61bGZ2CWwJNee76Sp9Z+isP8RQXbG5jwj/4B 6 | M9HK7phktqFVJ8VbY2jfTjcfxRvGM8YBwXF8hx0CDZURAjvf1xRSQJ7iAo58qcHn 7 | XtxOAvQmAbR9z6Q/h/D+Y/PhoIJp1OV4VNHCbCs9M7HUVBpgC53PDcTUQuwcgeY6 8 | pQgo9eT1eLNSZVrJ5Bctivl1UcD6P6CIGkkeT2gNhqindRPngUXGXW7Qzoefe+fV 9 | QqJSm7Tq2q9oqVZ46J964waCRItRySpuW5dxZO34WM6wsw2BP2MlACbH4l3luqtp 10 | Xo3Bvfnk+HAFH3HcMuwdaulxv7zYKXCfNoSfgrpEfo2Ex4Im/I3WdtwME/Gbnwdq 11 | 3VJzgAxLVFhczDHwNkjmIdPAlNJ9/ixRjip4dgZtW8VcBCrNoL+LhDrIfjvnLdRu 12 | vBHy9P3sCF7FZycaHlMWP6RiLtHnEMGcbZ8QpQHi2dReU1wyr9QgguGU+jqSXYar 13 | 1yEcsdRGasppNIZ8+Qawbm/a4doT10TEtPArhSoHlwbvqTDYjtfV92lC/2iwgO6g 14 | YgG9XrO4V8dV39Ffm7oLFfvTbg5mv4Q/E6AWo/gkjmtxkculbyAvjFtYAQARAQAB 15 | tCFFUEVMICg2KSA8ZXBlbEBmZWRvcmFwcm9qZWN0Lm9yZz6JAjYEEwECACAFAkvS 16 | 
KUICGw8GCwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRA7Sd8qBgi4lR/GD/wLGPv9 17 | qO39eyb9NlrwfKdUEo1tHxKdrhNz+XYrO4yVDTBZRPSuvL2yaoeSIhQOKhNPfEgT 18 | 9mdsbsgcfmoHxmGVcn+lbheWsSvcgrXuz0gLt8TGGKGGROAoLXpuUsb1HNtKEOwP 19 | Q4z1uQ2nOz5hLRyDOV0I2LwYV8BjGIjBKUMFEUxFTsL7XOZkrAg/WbTH2PW3hrfS 20 | WtcRA7EYonI3B80d39ffws7SmyKbS5PmZjqOPuTvV2F0tMhKIhncBwoojWZPExft 21 | HpKhzKVh8fdDO/3P1y1Fk3Cin8UbCO9MWMFNR27fVzCANlEPljsHA+3Ez4F7uboF 22 | p0OOEov4Yyi4BEbgqZnthTG4ub9nyiupIZ3ckPHr3nVcDUGcL6lQD/nkmNVIeLYP 23 | x1uHPOSlWfuojAYgzRH6LL7Idg4FHHBA0to7FW8dQXFIOyNiJFAOT2j8P5+tVdq8 24 | wB0PDSH8yRpn4HdJ9RYquau4OkjluxOWf0uRaS//SUcCZh+1/KBEOmcvBHYRZA5J 25 | l/nakCgxGb2paQOzqqpOcHKvlyLuzO5uybMXaipLExTGJXBlXrbbASfXa/yGYSAG 26 | iVrGz9CE6676dMlm8F+s3XXE13QZrXmjloc6jwOljnfAkjTGXjiB7OULESed96MR 27 | XtfLk0W5Ab9pd7tKDR6QHI7rgHXfCopRnZ2VVQ== 28 | =V/6I 29 | -----END PGP PUBLIC KEY BLOCK----- 30 | -------------------------------------------------------------------------------- /roles/common/files/epel-openstack-grizzly.repo: -------------------------------------------------------------------------------- 1 | # Place this file in your /etc/yum.repos.d/ directory 2 | 3 | [epel-openstack-grizzly] 4 | name=OpenStack Folsom Repository for EPEL 6 5 | baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6 6 | enabled=1 7 | skip_if_unavailable=1 8 | gpgcheck=0 9 | priority=98 10 | -------------------------------------------------------------------------------- /roles/common/files/epel.repo: -------------------------------------------------------------------------------- 1 | [epel] 2 | name=Extra Packages for Enterprise Linux 6 - $basearch 3 | #baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch 4 | mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch 5 | failovermethod=priority 6 | enabled=1 7 | gpgcheck=1 8 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 9 | 10 | [epel-debuginfo] 11 | name=Extra Packages for Enterprise Linux 6 - $basearch - Debug 12 | #baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug 13 | mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch 14 | failovermethod=priority 15 | enabled=0 16 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 17 | gpgcheck=1 18 | 19 | [epel-source] 20 | name=Extra Packages for Enterprise Linux 6 - $basearch - Source 21 | #baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS 22 | mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch 23 | failovermethod=priority 24 | enabled=0 25 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 26 | gpgcheck=1 27 | -------------------------------------------------------------------------------- /roles/common/files/openvswitch-kmod-rhel6.spec: -------------------------------------------------------------------------------- 1 | # Generated automatically -- do not modify! -*- buffer-read-only: t -*- 2 | # Spec file for Open vSwitch kernel modules on Red Hat Enterprise 3 | # Linux 6. 4 | 5 | # Copyright (C) 2011, 2012 Nicira, Inc. 6 | # 7 | # Copying and distribution of this file, with or without modification, 8 | # are permitted in any medium without royalty provided the copyright 9 | # notice and this notice are preserved. This file is offered as-is, 10 | # without warranty of any kind. 
11 | 12 | %define oname openvswitch 13 | 14 | Name: %{oname}-kmod 15 | Version: 1.10.0 16 | Release: 1%{?dist} 17 | Summary: Open vSwitch kernel module 18 | 19 | Group: System/Kernel 20 | License: GPLv2 21 | URL: http://openvswitch.org/ 22 | Source0: %{oname}-%{version}.tar.gz 23 | Source1: %{oname}-kmod.files 24 | BuildRoot: %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX) 25 | BuildRequires: %kernel_module_package_buildreqs 26 | 27 | # Without this we get an empty openvswitch-debuginfo package (whose name 28 | # conflicts with the openvswitch-debuginfo package for OVS userspace). 29 | %undefine _enable_debug_packages 30 | 31 | # Use -D 'kversion 2.6.32-131.6.1.el6.x86_64' to build package 32 | # for specified kernel version. 33 | %{?kversion:%define kernel_version %kversion} 34 | 35 | # Use -D 'kflavors default debug kdump' to build packages for 36 | # specified kernel variants. 37 | %{!?kflavors:%define kflavors default} 38 | 39 | %kernel_module_package -n %{oname} -f %{SOURCE1} %kflavors 40 | 41 | %description 42 | Open vSwitch Linux kernel module. 43 | 44 | %prep 45 | 46 | %setup -n %{oname}-%{version} 47 | cat > %{oname}.conf << EOF 48 | override %{oname} * extra/%{oname} 49 | override %{oname} * weak-updates/%{oname} 50 | EOF 51 | 52 | %build 53 | for flavor in %flavors_to_build; do 54 | mkdir _$flavor 55 | (cd _$flavor && ../configure --with-linux="%{kernel_source $flavor}") 56 | %{__make} -C _$flavor/datapath/linux %{?_smp_mflags} 57 | done 58 | 59 | %install 60 | export INSTALL_MOD_PATH=$RPM_BUILD_ROOT 61 | export INSTALL_MOD_DIR=extra/%{oname} 62 | for flavor in %flavors_to_build ; do 63 | make -C %{kernel_source $flavor} modules_install \ 64 | M="`pwd`"/_$flavor/datapath/linux 65 | done 66 | install -d %{buildroot}%{_sysconfdir}/depmod.d/ 67 | install -m 644 %{oname}.conf %{buildroot}%{_sysconfdir}/depmod.d/ 68 | 69 | %clean 70 | rm -rf $RPM_BUILD_ROOT 71 | -------------------------------------------------------------------------------- /roles/common/files/openvswitch-kmod.files: -------------------------------------------------------------------------------- 1 | %defattr(644,root,root,755) 2 | /lib/modules/%2-%1 3 | /etc/depmod.d/openvswitch.conf 4 | -------------------------------------------------------------------------------- /roles/common/files/openvswitch.spec: -------------------------------------------------------------------------------- 1 | # Generated automatically -- do not modify! -*- buffer-read-only: t -*- 2 | # Spec file for Open vSwitch on Red Hat Enterprise Linux. 3 | 4 | # Copyright (C) 2009, 2010, 2011, 2012 Nicira, Inc. 5 | # 6 | # Copying and distribution of this file, with or without modification, 7 | # are permitted in any medium without royalty provided the copyright 8 | # notice and this notice are preserved. This file is offered as-is, 9 | # without warranty of any kind. 10 | 11 | Name: openvswitch 12 | Summary: Open vSwitch daemon/database/utilities 13 | Group: System Environment/Daemons 14 | URL: http://www.openvswitch.org/ 15 | Vendor: Nicira, Inc. 16 | Version: 1.10.0 17 | 18 | License: ASL 2.0 19 | Release: 1 20 | Source: openvswitch-%{version}.tar.gz 21 | Buildroot: /tmp/openvswitch-rpm 22 | Requires: openvswitch-kmod, logrotate, python 23 | 24 | %description 25 | Open vSwitch provides standard network bridging functions and 26 | support for the OpenFlow protocol for remote per-flow control of 27 | traffic. 
28 | 29 | %prep 30 | %setup -q 31 | 32 | %build 33 | ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=%{_localstatedir} --enable-ssl 34 | make %{_smp_mflags} 35 | 36 | %install 37 | rm -rf $RPM_BUILD_ROOT 38 | make install DESTDIR=$RPM_BUILD_ROOT 39 | 40 | rhel_cp() { 41 | base=$1 42 | mode=$2 43 | dst=$RPM_BUILD_ROOT/$(echo $base | sed 's,_,/,g') 44 | install -D -m $mode rhel/$base $dst 45 | } 46 | rhel_cp etc_init.d_openvswitch 0755 47 | rhel_cp etc_logrotate.d_openvswitch 0644 48 | rhel_cp etc_sysconfig_network-scripts_ifup-ovs 0755 49 | rhel_cp etc_sysconfig_network-scripts_ifdown-ovs 0755 50 | rhel_cp usr_share_openvswitch_scripts_sysconfig.template 0644 51 | 52 | docdir=$RPM_BUILD_ROOT/usr/share/doc/openvswitch-%{version} 53 | install -d -m755 "$docdir" 54 | install -m 0644 FAQ rhel/README.RHEL "$docdir" 55 | install python/compat/uuid.py $RPM_BUILD_ROOT/usr/share/openvswitch/python 56 | install python/compat/argparse.py $RPM_BUILD_ROOT/usr/share/openvswitch/python 57 | 58 | # Get rid of stuff we don't want to make RPM happy. 59 | rm \ 60 | $RPM_BUILD_ROOT/usr/bin/ovs-controller \ 61 | $RPM_BUILD_ROOT/usr/share/man/man8/ovs-controller.8 \ 62 | $RPM_BUILD_ROOT/usr/bin/ovs-test \ 63 | $RPM_BUILD_ROOT/usr/bin/ovs-l3ping \ 64 | $RPM_BUILD_ROOT/usr/share/man/man8/ovs-test.8 \ 65 | $RPM_BUILD_ROOT/usr/share/man/man8/ovs-l3ping.8 \ 66 | $RPM_BUILD_ROOT/usr/sbin/ovs-vlan-bug-workaround \ 67 | $RPM_BUILD_ROOT/usr/share/man/man8/ovs-vlan-bug-workaround.8 68 | 69 | install -d -m 755 $RPM_BUILD_ROOT/var/lib/openvswitch 70 | 71 | %clean 72 | rm -rf $RPM_BUILD_ROOT 73 | 74 | %post 75 | # Create default or update existing /etc/sysconfig/openvswitch. 76 | SYSCONFIG=/etc/sysconfig/openvswitch 77 | TEMPLATE=/usr/share/openvswitch/scripts/sysconfig.template 78 | if [ ! -e $SYSCONFIG ]; then 79 | cp $TEMPLATE $SYSCONFIG 80 | else 81 | for var in $(awk -F'[ :]' '/^# [_A-Z0-9]+:/{print $2}' $TEMPLATE) 82 | do 83 | if ! 
grep $var $SYSCONFIG >/dev/null 2>&1; then 84 | echo >> $SYSCONFIG 85 | sed -n "/$var:/,/$var=/p" $TEMPLATE >> $SYSCONFIG 86 | fi 87 | done 88 | fi 89 | 90 | # Ensure all required services are set to run 91 | /sbin/chkconfig --add openvswitch 92 | /sbin/chkconfig openvswitch on 93 | 94 | %preun 95 | if [ "$1" = "0" ]; then # $1 = 0 for uninstall 96 | /sbin/service openvswitch stop 97 | /sbin/chkconfig --del openvswitch 98 | fi 99 | 100 | %postun 101 | if [ "$1" = "0" ]; then # $1 = 0 for uninstall 102 | rm -f /etc/openvswitch/conf.db 103 | rm -f /etc/sysconfig/openvswitch 104 | rm -f /etc/openvswitch/vswitchd.cacert 105 | fi 106 | 107 | exit 0 108 | 109 | %files 110 | %defattr(-,root,root) 111 | /etc/init.d/openvswitch 112 | %config(noreplace) /etc/logrotate.d/openvswitch 113 | /etc/sysconfig/network-scripts/ifup-ovs 114 | /etc/sysconfig/network-scripts/ifdown-ovs 115 | /usr/bin/ovs-appctl 116 | /usr/bin/ovs-benchmark 117 | /usr/bin/ovs-dpctl 118 | /usr/bin/ovs-ofctl 119 | /usr/bin/ovs-parse-backtrace 120 | /usr/bin/ovs-parse-leaks 121 | /usr/bin/ovs-pcap 122 | /usr/bin/ovs-pki 123 | /usr/bin/ovs-tcpundump 124 | /usr/bin/ovs-vlan-test 125 | /usr/bin/ovs-vsctl 126 | /usr/bin/ovsdb-client 127 | /usr/bin/ovsdb-tool 128 | /usr/sbin/ovs-bugtool 129 | /usr/sbin/ovs-vswitchd 130 | /usr/sbin/ovsdb-server 131 | /usr/share/man/man1/ovs-benchmark.1.gz 132 | /usr/share/man/man1/ovs-pcap.1.gz 133 | /usr/share/man/man1/ovs-tcpundump.1.gz 134 | /usr/share/man/man1/ovsdb-client.1.gz 135 | /usr/share/man/man1/ovsdb-server.1.gz 136 | /usr/share/man/man1/ovsdb-tool.1.gz 137 | /usr/share/man/man5/ovs-vswitchd.conf.db.5.gz 138 | /usr/share/man/man8/ovs-appctl.8.gz 139 | /usr/share/man/man8/ovs-bugtool.8.gz 140 | /usr/share/man/man8/ovs-ctl.8.gz 141 | /usr/share/man/man8/ovs-dpctl.8.gz 142 | /usr/share/man/man8/ovs-ofctl.8.gz 143 | /usr/share/man/man8/ovs-parse-backtrace.8.gz 144 | /usr/share/man/man8/ovs-parse-leaks.8.gz 145 | /usr/share/man/man8/ovs-pki.8.gz 146 | /usr/share/man/man8/ovs-vlan-test.8.gz 147 | /usr/share/man/man8/ovs-vsctl.8.gz 148 | /usr/share/man/man8/ovs-vswitchd.8.gz 149 | /usr/share/openvswitch/bugtool-plugins/ 150 | /usr/share/openvswitch/python/ 151 | /usr/share/openvswitch/scripts/ovs-bugtool-* 152 | /usr/share/openvswitch/scripts/ovs-check-dead-ifs 153 | /usr/share/openvswitch/scripts/ovs-ctl 154 | /usr/share/openvswitch/scripts/ovs-lib 155 | /usr/share/openvswitch/scripts/ovs-save 156 | /usr/share/openvswitch/scripts/sysconfig.template 157 | /usr/share/openvswitch/vswitch.ovsschema 158 | /usr/share/doc/openvswitch-%{version}/FAQ 159 | /usr/share/doc/openvswitch-%{version}/README.RHEL 160 | /var/lib/openvswitch 161 | -------------------------------------------------------------------------------- /roles/common/files/selinux: -------------------------------------------------------------------------------- 1 | # This file controls the state of SELinux on the system. 2 | # SELINUX= can take one of these three values: 3 | # enforcing - SELinux security policy is enforced. 4 | # permissive - SELinux prints warnings instead of enforcing. 5 | # disabled - No SELinux policy is loaded. 6 | SELINUX=disabled 7 | # SELINUXTYPE= can take one of these two values: 8 | # targeted - Targeted processes are protected, 9 | # mls - Multi Level Security protection. 
10 | SELINUXTYPE=targeted 11 | -------------------------------------------------------------------------------- /roles/common/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers for the common tasks 3 | 4 | - name: restart iptables 5 | service: name=iptables state=restarted 6 | 7 | - name: restart services 8 | service: name={{ item }} state=restarted 9 | with_items: 10 | - ntpd 11 | - messagebus 12 | 13 | 14 | -------------------------------------------------------------------------------- /roles/common/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # The common tasks 3 | 4 | - name: copy yum repo files 5 | copy: src={{ item }} dest=/etc/yum.repos.d/ 6 | with_items: 7 | - epel.repo 8 | - epel-openstack-grizzly.repo 9 | 10 | - name: Create the GPG key for EPEL 11 | copy: src=RPM-GPG-KEY-EPEL-6 dest=/etc/pki/rpm-gpg 12 | 13 | - name: Install required packages 14 | yum: name={{ item }} state=installed 15 | with_items: 16 | - ntp 17 | - sudo 18 | - scsi-target-utils 19 | - dbus 20 | - libselinux-python 21 | - openssl-devel 22 | notify: restart services 23 | 24 | - name: Remove previous openvwitch if installed 25 | yum: name=openvswitch-1.9.0 state=absent 26 | 27 | - name: Check if custom Openvswitch is installed 28 | shell: rpm -qa | grep kmod-openvswitch-1.10 | cut -d- -f3 29 | register: vswitch_version 30 | 31 | - name: Install Development tools 32 | shell: yum -y groupinstall "Development Tools" 33 | when: vswitch_version.stdout != '1.10.0' 34 | 35 | - name: Create the directories for compiling the openvswitch source 36 | file: path=/root/rpmbuild/SOURCES state=directory 37 | 38 | - name: Get the Openvswitch stable release 39 | get_url: url=http://openvswitch.org/releases/openvswitch-1.10.0.tar.gz dest=/root/rpmbuild/SOURCES/ 40 | when: vswitch_version.stdout != '1.10.0' 41 | 42 | - name: Copy the kmod spec file 43 | copy: src=openvswitch-kmod.files dest=/root/rpmbuild/SOURCES/ 44 | 45 | - name: Copy the spec files for rpmbuild 46 | copy: src={{ item }} dest=/root 47 | with_items: 48 | - openvswitch-kmod-rhel6.spec 49 | - openvswitch.spec 50 | 51 | - name: Build the Openvswitch kernel module. 52 | shell: chdir=/root rpmbuild -bb /root/openvswitch-kmod-rhel6.spec 53 | creates=/root/rpmbuild/RPMS/x86_64/kmod-openvswitch-1.10.0-1.el6.x86_64.rpm 54 | 55 | - name: Build the Openvswitch userspace application. 56 | shell: chdir=/root rpmbuild -bb /root/openvswitch.spec 57 | creates=/root/rpmbuild/RPMS/x86_64/openvswitch-1.10.0-1.x86_64.rpm 58 | 59 | - name: Install the custom vswitch kernel rpm. 60 | shell: yum -y localinstall /root/rpmbuild/RPMS/x86_64/kmod-openvswitch-1.10.0-1.el6.x86_64.rpm 61 | creates=/lib/modules/2.6.32-358.2.1.el6.x86_64/extra/openvswitch/openvswitch.ko 62 | 63 | - name: Install the custom openvswitch rpm 64 | shell: yum -y localinstall /root/rpmbuild/RPMS/x86_64/openvswitch-1.10.0-1.x86_64.rpm 65 | creates=/usr/share/doc/openvswitch-1.10.90/FAQ 66 | 67 | - name: Create the hosts entry. 68 | template: src=hosts.j2 dest=/etc/hosts 69 | 70 | - name: start the openvswitch service. 
71 | service: name=openvswitch state=started 72 | 73 | - name: Disable selinux dynamically 74 | shell: creates=/etc/sysconfig/selinux.disabled setenforce 0 ; touch /etc/sysconfig/selinux.disabled 75 | 76 | - name: Disable SELinux in conf file 77 | copy: src=selinux dest=/etc/sysconfig/selinux 78 | 79 | - name: Disable iptables 80 | service: name=iptables state=stopped enabled=no 81 | 82 | 83 | 84 | -------------------------------------------------------------------------------- /roles/common/templates/hosts.j2: -------------------------------------------------------------------------------- 1 | 127.0.0.1 localhost 2 | {% for host in groups['all'] %} 3 | 4 | {{ hostvars[host].ansible_default_ipv4.address }} {{ host }} 5 | 6 | {% endfor %} 7 | 8 | -------------------------------------------------------------------------------- /roles/common/templates/iptables.j2: -------------------------------------------------------------------------------- 1 | # Firewall configuration written by system-config-firewall 2 | # Manual customization of this file is not recommended_ 3 | *filter 4 | :INPUT ACCEPT [0:0] 5 | :FORWARD ACCEPT [0:0] 6 | :OUTPUT ACCEPT [0:0] 7 | {% if 'openstack_controller' in group_names %} 8 | -A INPUT -p gre -j ACCEPT 9 | -A INPUT -p tcp -m multiport --dports 5000,35357 -j ACCEPT 10 | -A INPUT -p tcp -m multiport --dports 8080 -j ACCEPT 11 | -A INPUT -p tcp -m multiport --dports 9292 -j ACCEPT 12 | -A INPUT -p tcp -m multiport --dports 3260,8776 -j ACCEPT 13 | -A INPUT -p tcp -m multiport --dports 5672 -j ACCEPT 14 | -A INPUT -p tcp -m multiport --dports 5900:5999 -j ACCEPT 15 | -A INPUT -p tcp -m multiport --dports 6080 -j ACCEPT 16 | -A INPUT -p tcp -m multiport --dports 8773,8774,8775 -j ACCEPT 17 | -A INPUT -p tcp -m multiport --dports 9696 -j ACCEPT 18 | -A INPUT -p tcp -m multiport --dports 443 -j ACCEPT 19 | -A INPUT -p tcp -m multiport --dports 80 -j ACCEPT 20 | -A INPUT -p tcp -m multiport --dports 6080 -j ACCEPT 21 | {% endif %} 22 | -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT 23 | -A INPUT -p icmp -j ACCEPT 24 | -A INPUT -i lo -j ACCEPT 25 | -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT 26 | -A INPUT -j REJECT --reject-with icmp-host-prohibited 27 | -A FORWARD -j REJECT --reject-with icmp-host-prohibited 28 | COMMIT 29 | 30 | 31 | 32 | -------------------------------------------------------------------------------- /roles/compute/files/guestfs.py: -------------------------------------------------------------------------------- 1 | # vim: tabstop=4 shiftwidth=4 softtabstop=4 2 | 3 | # Copyright 2012 Red Hat, Inc. 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); you may 6 | # not use this file except in compliance with the License. You may obtain 7 | # a copy of the License at 8 | # 9 | # http://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT 13 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the 14 | # License for the specific language governing permissions and limitations 15 | # under the License. 
16 | 17 | from eventlet import tpool 18 | import guestfs 19 | 20 | from nova import exception 21 | from nova.openstack.common import log as logging 22 | from nova.virt.disk.vfs import api as vfs 23 | from nova.virt.libvirt import driver as libvirt_driver 24 | 25 | 26 | LOG = logging.getLogger(__name__) 27 | 28 | guestfs = None 29 | 30 | 31 | class VFSGuestFS(vfs.VFS): 32 | 33 | """ 34 | This class implements a VFS module that uses the libguestfs APIs 35 | to access the disk image. The disk image is never mapped into 36 | the host filesystem, thus avoiding any potential for symlink 37 | attacks from the guest filesystem. 38 | """ 39 | def __init__(self, imgfile, imgfmt='raw', partition=None): 40 | super(VFSGuestFS, self).__init__(imgfile, imgfmt, partition) 41 | 42 | global guestfs 43 | if guestfs is None: 44 | guestfs = __import__('guestfs') 45 | 46 | self.handle = None 47 | 48 | def setup_os(self): 49 | if self.partition == -1: 50 | self.setup_os_inspect() 51 | else: 52 | self.setup_os_static() 53 | 54 | def setup_os_static(self): 55 | LOG.debug(_("Mount guest OS image %(imgfile)s partition %(part)s"), 56 | {'imgfile': self.imgfile, 'part': str(self.partition)}) 57 | 58 | if self.partition: 59 | self.handle.mount_options("", "/dev/sda%d" % self.partition, "/") 60 | else: 61 | self.handle.mount_options("", "/dev/sda", "/") 62 | 63 | def setup_os_inspect(self): 64 | LOG.debug(_("Inspecting guest OS image %s"), self.imgfile) 65 | roots = self.handle.inspect_os() 66 | 67 | if len(roots) == 0: 68 | raise exception.NovaException(_("No operating system found in %s") 69 | % self.imgfile) 70 | 71 | if len(roots) != 1: 72 | LOG.debug(_("Multi-boot OS %(roots)s") % {'roots': str(roots)}) 73 | raise exception.NovaException( 74 | _("Multi-boot operating system found in %s") % 75 | self.imgfile) 76 | 77 | self.setup_os_root(roots[0]) 78 | 79 | def setup_os_root(self, root): 80 | LOG.debug(_("Inspecting guest OS root filesystem %s"), root) 81 | mounts = self.handle.inspect_get_mountpoints(root) 82 | 83 | if len(mounts) == 0: 84 | raise exception.NovaException( 85 | _("No mount points found in %(root)s of %(imgfile)s") % 86 | {'root': root, 'imgfile': self.imgfile}) 87 | 88 | mounts.sort(key=lambda mount: mount[1]) 89 | for mount in mounts: 90 | LOG.debug(_("Mounting %(dev)s at %(dir)s") % 91 | {'dev': mount[1], 'dir': mount[0]}) 92 | self.handle.mount_options("", mount[1], mount[0]) 93 | 94 | def setup(self): 95 | LOG.debug(_("Setting up appliance for %(imgfile)s %(imgfmt)s") % 96 | {'imgfile': self.imgfile, 'imgfmt': self.imgfmt}) 97 | self.handle = tpool.Proxy(guestfs.GuestFS()) 98 | 99 | try: 100 | self.handle.add_drive_opts(self.imgfile, format=self.imgfmt) 101 | if self.handle.get_attach_method() == 'libvirt': 102 | libvirt_url = 'libvirt:' + libvirt_driver.LibvirtDriver.uri() 103 | self.handle.set_attach_method(libvirt_url) 104 | self.handle.launch() 105 | 106 | self.setup_os() 107 | 108 | self.handle.aug_init("/", 0) 109 | except RuntimeError, e: 110 | # dereference object and implicitly close() 111 | self.handle = None 112 | raise exception.NovaException( 113 | _("Error mounting %(imgfile)s with libguestfs (%(e)s)") % 114 | {'imgfile': self.imgfile, 'e': e}) 115 | except Exception: 116 | self.handle = None 117 | raise 118 | 119 | def teardown(self): 120 | LOG.debug(_("Tearing down appliance")) 121 | 122 | try: 123 | try: 124 | self.handle.aug_close() 125 | except RuntimeError, e: 126 | LOG.warn(_("Failed to close augeas %s"), e) 127 | 128 | try: 129 | self.handle.shutdown() 130 | except 
AttributeError: 131 | # Older libguestfs versions haven't an explicit shutdown 132 | pass 133 | except RuntimeError, e: 134 | LOG.warn(_("Failed to shutdown appliance %s"), e) 135 | 136 | try: 137 | self.handle.close() 138 | except AttributeError: 139 | # Older libguestfs versions haven't an explicit close 140 | pass 141 | except RuntimeError, e: 142 | LOG.warn(_("Failed to close guest handle %s"), e) 143 | finally: 144 | # dereference object and implicitly close() 145 | self.handle = None 146 | 147 | @staticmethod 148 | def _canonicalize_path(path): 149 | if path[0] != '/': 150 | return '/' + path 151 | return path 152 | 153 | def make_path(self, path): 154 | LOG.debug(_("Make directory path=%(path)s") % locals()) 155 | path = self._canonicalize_path(path) 156 | self.handle.mkdir_p(path) 157 | 158 | def append_file(self, path, content): 159 | LOG.debug(_("Append file path=%(path)s") % locals()) 160 | path = self._canonicalize_path(path) 161 | self.handle.write_append(path, content) 162 | 163 | def replace_file(self, path, content): 164 | LOG.debug(_("Replace file path=%(path)s") % locals()) 165 | path = self._canonicalize_path(path) 166 | self.handle.write(path, content) 167 | 168 | def read_file(self, path): 169 | LOG.debug(_("Read file path=%(path)s") % locals()) 170 | path = self._canonicalize_path(path) 171 | return self.handle.read_file(path) 172 | 173 | def has_file(self, path): 174 | LOG.debug(_("Has file path=%(path)s") % locals()) 175 | path = self._canonicalize_path(path) 176 | try: 177 | self.handle.stat(path) 178 | return True 179 | except RuntimeError: 180 | return False 181 | 182 | def set_permissions(self, path, mode): 183 | LOG.debug(_("Set permissions path=%(path)s mode=%(mode)s") % locals()) 184 | path = self._canonicalize_path(path) 185 | self.handle.chmod(mode, path) 186 | 187 | def set_ownership(self, path, user, group): 188 | LOG.debug(_("Set ownership path=%(path)s " 189 | "user=%(user)s group=%(group)s") % locals()) 190 | path = self._canonicalize_path(path) 191 | uid = 0 192 | gid = 0 193 | LOG.debug(_("chown uid=%(uid)d gid=%(gid)s") % locals()) 194 | self.handle.chown(uid, gid, path) 195 | -------------------------------------------------------------------------------- /roles/compute/files/qemu.conf.j2: -------------------------------------------------------------------------------- 1 | user = "root" 2 | group = "root" 3 | 4 | cgroup_device_acl = [ 5 | "/dev/null", "/dev/full", "/dev/zero", 6 | "/dev/random", "/dev/urandom", 7 | "/dev/ptmx", "/dev/kvm", "/dev/kqemu", 8 | "/dev/rtc","/dev/hpet", "/dev/net/tun", 9 | ] 10 | 11 | clear_emulator_capabilities = 0 12 | -------------------------------------------------------------------------------- /roles/compute/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Handler for the compute node 3 | 4 | - name: restart libvirtd 5 | service: name=libvirtd state=restarted 6 | 7 | - name: restart nova compute 8 | service: name=openstack-nova-compute state=restarted enabled=yes 9 | 10 | - name: restart quantum ovsagent 11 | service: name=quantum-openvswitch-agent state=restarted enabled=yes 12 | 13 | 14 | -------------------------------------------------------------------------------- /roles/compute/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Tasks for the openstack compute nodes 3 | 4 | - name: Install the required virtualization packages 5 | yum: name={{ item }} state=installed 6 | with_items: 7 | 
- qemu-img 8 | - qemu-kvm 9 | - qemu-kvm-tools 10 | - libvirt 11 | - libvirt-client 12 | - libvirt-python 13 | - libguestfs-tools 14 | 15 | - name: Configure qemu.conf in libvirt 16 | copy: src=qemu.conf.j2 dest=/etc/libvirt/qemu.conf 17 | notify: restart libvirtd 18 | 19 | - name: service libvirt start 20 | service: name=libvirtd state=started enabled=yes 21 | 22 | - name: create link for the qemu 23 | file: src=/usr/libexec/qemu-kvm dest=/usr/bin/qemu-system-x86_64 state=link 24 | 25 | - name: Install packages for openstack 26 | yum: name={{ item }} state=installed 27 | with_items: 28 | - openstack-nova-compute 29 | - openstack-quantum-openvswitch 30 | 31 | - name: Create the internal bridges for openvswitch 32 | shell: creates=/etc/quantum/br-int.created /usr/bin/ovs-vsctl add-br br-int; touch /etc/quantum/br-int.created 33 | 34 | - name: Copy the quantum.conf configuration files 35 | template: src=roles/controller/templates/quantum.conf.j2 dest=/etc/quantum/quantum.conf 36 | 37 | - name: Copy the quantum ovs agent configuration files 38 | template: src=ovs_quantum_plugin.ini.j2 dest=/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini 39 | notify: restart quantum ovsagent 40 | 41 | - name: Copy the nova configuration files 42 | template: src=nova.conf.j2 dest=/etc/nova/nova.conf 43 | notify: restart nova compute 44 | 45 | - name: Give permissions to lock folder 46 | file: path=/var/lock state=directory owner=root group=root mode=0777 47 | 48 | - name: Copy the modified file for key injection bug 49 | copy: src=guestfs.py dest=/usr/lib/python2.6/site-packages/nova/virt/disk/vfs/ 50 | -------------------------------------------------------------------------------- /roles/compute/templates/nova.conf.j2: -------------------------------------------------------------------------------- 1 | {% set head_node = hostvars[groups['openstack_controller'][0]] %} 2 | [DEFAULT] 3 | 4 | # LOGS/STATE 5 | verbose=True 6 | logdir=/var/log/nova 7 | state_path=/var/lib/nova 8 | lock_path=/var/lock/nova 9 | rootwrap_config=/etc/nova/rootwrap.conf 10 | 11 | # SCHEDULER 12 | compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler 13 | 14 | # VOLUMES 15 | volume_api_class=nova.volume.cinder.API 16 | 17 | 18 | # DATABASE 19 | sql_connection=mysql://nova:{{ nova_db_pass }}@{{ head_node.ansible_default_ipv4.address }}/nova 20 | 21 | # COMPUTE 22 | libvirt_type={{ nova_libvirt_type }} 23 | compute_driver=libvirt.LibvirtDriver 24 | instance_name_template=instance-%08x 25 | api_paste_config=/etc/nova/api-paste.ini 26 | 27 | # COMPUTE/APIS: if you have separate configs for separate services 28 | # this flag is required for both nova-api and nova-compute 29 | allow_resize_to_same_host=True 30 | 31 | # APIS 32 | osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions 33 | ec2_dmz_host={{ head_node.ansible_default_ipv4.address }} 34 | s3_host={{ head_node.ansible_default_ipv4.address }} 35 | 36 | # QPID 37 | rpc_backend=nova.rpc.impl_qpid 38 | qpid_hostname={{ head_node.ansible_default_ipv4.address }} 39 | 40 | # GLANCE 41 | image_service=nova.image.glance.GlanceImageService 42 | glance_api_servers={{ head_node.ansible_default_ipv4.address }}:9292 43 | 44 | # NETWORK 45 | network_api_class = nova.network.quantumv2.api.API 46 | quantum_admin_username = admin 47 | quantum_admin_password = {{ admin_pass }} 48 | quantum_admin_auth_url = http://{{ head_node.ansible_default_ipv4.address }}:35357/v2.0 49 | quantum_auth_strategy = keystone 50 | quantum_admin_tenant_name = admin 51 | 
quantum_url = http://{{ head_node.ansible_default_ipv4.address }}:9696/ 52 | #libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver 53 | libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtOpenVswitchDriver 54 | #security_group_api=quantum 55 | #firewall_driver=nova.virt.firewall.NoopFirewallDriver 56 | service_quantum_metadata_proxy=true 57 | libvirt_cpu_mode=host-passthrough 58 | 59 | # Change my_ip to match each host 60 | my_ip={{ ansible_default_ipv4.address }} 61 | 62 | # NOVNC CONSOLE 63 | novncproxy_base_url=http://{{ head_node.ansible_default_ipv4.address }}:6080/vnc_auto.html 64 | # Change vncserver_proxyclient_address and vncserver_listen to match each compute host 65 | vncserver_proxyclient_address={{ ansible_default_ipv4.address }} 66 | vncserver_listen={{ ansible_default_ipv4.address }} 67 | libvirt_cpu_mode=none 68 | 69 | # AUTHENTICATION 70 | auth_strategy = keystone 71 | [keystone_authtoken] 72 | admin_tenant_name = admin 73 | admin_user = admin 74 | admin_password = {{ admin_pass }} 75 | auth_host = {{ head_node.ansible_default_ipv4.address }} 76 | auth_port = 35357 77 | auth_protocol = http 78 | signing_dir = /tmp/keystone-signing-nova 79 | 80 | 81 | -------------------------------------------------------------------------------- /roles/compute/templates/ovs_quantum_plugin.ini.j2: -------------------------------------------------------------------------------- 1 | [DATABASE] 2 | sql_connection = mysql://quantum:{{ quantum_db_pass }}@{{ hostvars[groups['openstack_controller'][0]].ansible_default_ipv4.address }}/ovs_quantum 3 | reconnect_interval = 2 4 | 5 | [OVS] 6 | tenant_network_type = gre 7 | tunnel_id_ranges = 1:1000 8 | enable_tunneling = True 9 | local_ip = {{ ansible_default_ipv4.address }} 10 | 11 | [AGENT] 12 | polling_interval = 2 13 | root_helper=sudo quantum-rootwrap /etc/quantum/rootwrap.conf 14 | 15 | [SECURITYGROUP] 16 | firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver 17 | 18 | -------------------------------------------------------------------------------- /roles/controller/files/sysctl.conf: -------------------------------------------------------------------------------- 1 | # Kernel sysctl configuration file for Red Hat Linux 2 | # 3 | # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and 4 | # sysctl.conf(5) for more details. 5 | 6 | # Controls IP packet forwarding 7 | net.ipv4.ip_forward = 1 8 | 9 | # Controls source route verification 10 | net.ipv4.conf.default.rp_filter = 1 11 | 12 | # Do not accept source routing 13 | net.ipv4.conf.default.accept_source_route = 0 14 | 15 | # Controls the System Request debugging functionality of the kernel 16 | kernel.sysrq = 0 17 | 18 | # Controls whether core dumps will append the PID to the core filename. 19 | # Useful for debugging multi-threaded applications. 20 | kernel.core_uses_pid = 1 21 | 22 | # Controls the use of TCP syncookies 23 | net.ipv4.tcp_syncookies = 1 24 | 25 | # Disable netfilter on bridges. 
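# (These bridge-netfilter toggles are left commented out here, so the kernel
# defaults apply; setups that filter bridged VM traffic with iptables-based
# security groups may need them set to 1.)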
26 | #net.bridge.bridge-nf-call-ip6tables = 0 27 | #net.bridge.bridge-nf-call-iptables = 0 28 | #net.bridge.bridge-nf-call-arptables = 0 29 | 30 | # Controls the default maxmimum size of a mesage queue 31 | kernel.msgmnb = 65536 32 | 33 | # Controls the maximum size of a message, in bytes 34 | kernel.msgmax = 65536 35 | 36 | # Controls the maximum shared segment size, in bytes 37 | kernel.shmmax = 68719476736 38 | 39 | # Controls the maximum number of shared memory segments, in pages 40 | kernel.shmall = 4294967296 41 | -------------------------------------------------------------------------------- /roles/controller/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Handlers for the controller plays 3 | 4 | - name: restart keystone 5 | service: name=openstack-keystone state=restarted 6 | 7 | - name: restart glance 8 | service: name={{ item }} state=restarted 9 | with_items: 10 | - openstack-glance-api 11 | - openstack-glance-registry 12 | 13 | - name: restart nova 14 | service: name={{ item }} state=restarted 15 | with_items: 16 | - openstack-nova-api 17 | - openstack-nova-scheduler 18 | - openstack-nova-network 19 | - openstack-nova-cert 20 | - openstack-nova-console 21 | - openstack-nova-conductor 22 | 23 | - name: restart quantum 24 | service: name={{ item }} state=restarted 25 | with_items: 26 | - quantum-server 27 | - quantum-l3-agent 28 | - quantum-dhcp-agent 29 | - quantum-openvswitch-agent 30 | 31 | - name: restart cinder 32 | service: name={{ item }} state=restarted enabled=yes 33 | with_items: 34 | - openstack-cinder-api 35 | - openstack-cinder-scheduler 36 | - openstack-cinder-volume 37 | 38 | - name: restart tgtd 39 | service: name=tgtd state=restarted 40 | 41 | - name: reload kernel parameters 42 | shell: sysctl -p 43 | 44 | - name: restart horizon 45 | service: name={{ item }} state=restarted 46 | with_items: 47 | - openstack-nova-consoleauth 48 | - openstack-nova-novncproxy 49 | - httpd 50 | - memcached 51 | -------------------------------------------------------------------------------- /roles/controller/tasks/base.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Tasks for the base controller node 3 | 4 | - name: Install the base packages 5 | yum: name={{ item }} state=installed 6 | with_items: 7 | - mysql 8 | - mysql-server 9 | - MySQL-python 10 | - qpid-cpp-client 11 | - qpid-cpp-server 12 | - openstack-utils 13 | 14 | - name: start the services 15 | service: name={{ item }} state=started enabled=yes 16 | with_items: 17 | - mysqld 18 | - qpidd 19 | 20 | 21 | 22 | 23 | 24 | -------------------------------------------------------------------------------- /roles/controller/tasks/cinder.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Tasks for the cinder controller node 3 | 4 | - name: Install OpenStack cinder packages. 
5 | yum: name=openstack-cinder state=installed 6 | 7 | - name: Setup DB for cinder 8 | shell: /usr/bin/openstack-db --init --service cinder -p {{ cinder_db_pass }} -r " " -y 9 | creates=/var/lib/mysql/cinder 10 | 11 | - name: copy configuration file for cinder 12 | template: src=cinder.conf.j2 dest=/etc/cinder/cinder.conf 13 | notify: restart cinder 14 | 15 | - name: copy tgtd configuration file for cinder 16 | template: src=targets.conf.j2 dest=/etc/tgt/targets.conf 17 | notify: restart tgtd 18 | 19 | - name: start the tgtd service 20 | service: name=tgtd state=started enabled=yes 21 | 22 | - name: DB sync for the cinder services 23 | shell: cinder-manage db sync; touch /etc/cinder/db.synced 24 | creates=/etc/cinder/db.synced 25 | 26 | - name: create the volume group for cinder 27 | shell: vgcreate cinder-volumes {{ cinder_volume_dev }} 28 | creates=/etc/lvm/backup/cinder-volumes 29 | 30 | - name: Start the services for cinder 31 | service: name={{ item }} state=started enabled=yes 32 | with_items: 33 | - openstack-cinder-api 34 | - openstack-cinder-scheduler 35 | - openstack-cinder-volume 36 | 37 | 38 | -------------------------------------------------------------------------------- /roles/controller/tasks/glance.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Tasks for the glance controller node 3 | 4 | - name: Install glance OpenStack components 5 | yum: name=openstack-glance state=installed 6 | 7 | - name: Setup DB for glance 8 | shell: /usr/bin/openstack-db --init --service glance -p {{ glance_db_pass }} -r " " -y 9 | creates=/var/lib/mysql/glance 10 | 11 | - name: Copy the glance-api configuration file 12 | template: src=glance-api.conf.j2 dest=/etc/glance/glance-api.conf 13 | notify: restart glance 14 | 15 | - name: Copy the glance-registry configuration file 16 | template: src=glance-registry.conf.j2 dest=/etc/glance/glance-registry.conf 17 | notify: restart glance 18 | 19 | - name: DB sync for Glance 20 | shell: /usr/bin/glance-manage db_sync; touch /etc/glance/db.synced 21 | creates=/etc/glance/db.synced 22 | 23 | - name: start the glance services 24 | service: name={{ item }} state=started enabled=yes 25 | with_items: 26 | - openstack-glance-api 27 | - openstack-glance-registry 28 | 29 | -------------------------------------------------------------------------------- /roles/controller/tasks/horizon.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Tasks for the Horizon controller node 3 | 4 | - name: Install OpenStack Horizon packages. 5 | yum: name={{ item }} state=installed 6 | with_items: 7 | - memcached 8 | - python-memcached 9 | - mod_wsgi 10 | - openstack-dashboard 11 | - openstack-nova-novncproxy 12 | 13 | - name: Copy the configuration files for horizon 14 | template: src=local_settings.j2 dest=/etc/openstack-dashboard/local_settings 15 | notify: restart horizon 16 | 17 | - name: Start the services for horizon 18 | service: name={{ item }} state=started enabled=yes 19 | with_items: 20 | - openstack-nova-consoleauth 21 | - openstack-nova-novncproxy 22 | - httpd 23 | - memcached 24 | 25 | 26 | -------------------------------------------------------------------------------- /roles/controller/tasks/keystone.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Tasks for the keystone controller node 3 | 4 | - name: Install OpenStack keystone packages.
5 | yum: name={{ item }} state=installed 6 | with_items: 7 | - openstack-keystone 8 | - python-keystoneclient 9 | 10 | - name: Setup DB for keystone 11 | shell: creates=/var/lib/mysql/keystone /usr/bin/openstack-db --init --service keystone -p {{ keystone_db_pass }} -r " " -y 12 | 13 | - name: Create certs for keystone 14 | shell: creates=/etc/keystone/ssl/certs/ca.pem /usr/bin/keystone-manage pki_setup; chmod 777 /var/lock 15 | 16 | - name: Copy the configuration files for keystone 17 | template: src=keystone.conf.j2 dest=/etc/keystone/keystone.conf 18 | notify: restart keystone 19 | 20 | - name: Change ownership of all files to keystone 21 | file: path=/etc/keystone recurse=yes owner=keystone group=keystone state=directory 22 | 23 | - name: Start the keystone services 24 | service: name=openstack-keystone state=started enabled=yes 25 | 26 | - name: DB sync for keystone 27 | shell: creates=/etc/keystone/db.synced /usr/bin/keystone-manage db_sync; touch /etc/keystone/db.synced 28 | 29 | - name: Copy the file for data insertion into keystone 30 | template: src=keystone_data.sh.j2 dest=/tmp/keystone_data.sh mode=0755 31 | 32 | - name: Copy the keystonerc file 33 | template: src=keystonerc.j2 dest=/root/keystonerc mode=0755 34 | 35 | - name: Upload the data to the keystone database 36 | shell: creates=/etc/keystone/db.updated /tmp/keystone_data.sh; touch /etc/keystone/db.updated 37 | 38 | -------------------------------------------------------------------------------- /roles/controller/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # The main include file for the controller node 3 | 4 | - include: base.yml 5 | - include: keystone.yml 6 | - include: glance.yml 7 | - include: nova.yml 8 | - include: cinder.yml 9 | - include: horizon.yml 10 | - include: quantum.yml 11 | 12 | -------------------------------------------------------------------------------- /roles/controller/tasks/nova.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Tasks for the nova controller node 3 | 4 | - name: Install OpenStack nova packages.
5 | yum: name=openstack-nova state=installed 6 | 7 | - name: Setup DB for nova 8 | shell: /usr/bin/openstack-db --init --service nova -p {{ nova_db_pass }} -r " " -y 9 | creates=/var/lib/mysql/nova 10 | 11 | - name: DB sync for the nova services 12 | shell: nova-manage db sync; touch /etc/nova/db.synced 13 | creates=/etc/nova/db.synced 14 | 15 | - name: copy configuration file for nova 16 | template: src=nova.conf.j2 dest=/etc/nova/nova.conf 17 | notify: restart nova 18 | 19 | - name: Start the services for nova 20 | service: name={{ item }} state=started enabled=yes 21 | with_items: 22 | - openstack-nova-api 23 | - openstack-nova-scheduler 24 | - openstack-nova-network 25 | - openstack-nova-cert 26 | - openstack-nova-console 27 | - openstack-nova-conductor 28 | 29 | -------------------------------------------------------------------------------- /roles/controller/tasks/quantum.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Tasks for the quantum controller node 3 | 4 | - name: Install packages for quantum 5 | yum: name={{ item }} state=installed 6 | with_items: 7 | - openstack-quantum 8 | - openstack-quantum-openvswitch 9 | 10 | 11 | - name: Enable ipv4 forwarding in the host 12 | sysctl: name=net.ipv4.ip_forward value=1 reload=yes 13 | 14 | - name: Setup DB for quantum 15 | shell: /usr/bin/quantum-server-setup -q {{ quantum_db_pass }} -r " " -u quantum --plugin openvswitch -y 16 | creates=/var/lib/mysql/ovs_quantum 17 | 18 | - name: Give right to quantum user. 19 | mysql_user: name=quantum password={{ quantum_db_pass }} priv=*.*:ALL host='localhost' state=present 20 | 21 | - name: Give right to quantum user. 22 | mysql_user: name=quantum password={{ quantum_db_pass }} priv=*.*:ALL host='%' state=present 23 | 24 | - name: Copy the quantum.conf configuration files 25 | template: src=quantum.conf.j2 dest=/etc/quantum/quantum.conf 26 | notify: restart quantum 27 | tags: test 28 | 29 | - name: Copy the quantum dhcp agent configuration files 30 | template: src=dhcp_agent.ini.j2 dest=/etc/quantum/dhcp_agent.ini 31 | notify: restart quantum 32 | 33 | 34 | - name: Copy the quantum ovs agent configuration files 35 | template: src=ovs_quantum_plugin.ini.j2 dest=/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini 36 | notify: restart quantum 37 | 38 | - name: Create the external bridges for openvswitch 39 | shell: /usr/bin/ovs-vsctl add-br br-ex; touch /etc/quantum/br-ex.created 40 | creates=/etc/quantum/br-ex.created 41 | 42 | - name: Create the internal bridges for openvswitch 43 | shell: /usr/bin/ovs-vsctl add-br br-int; touch /etc/quantum/br-int.created 44 | creates=/etc/quantum/br-int.created 45 | 46 | - name: Add the interface for the external bridge 47 | shell: /usr/bin/ovs-vsctl add-port br-ex {{ quantum_external_interface }}; touch /etc/quantum/br-ext.interface 48 | creates=/etc/quantum/br-ext.interface 49 | 50 | - name: copy configuration file for nova 51 | template: src=nova.conf.j2 dest=/etc/nova/nova.conf 52 | notify: restart nova 53 | 54 | - name: Start the quantum services 55 | service: name={{ item }} state=started enabled=yes 56 | with_items: 57 | - quantum-server 58 | - quantum-dhcp-agent 59 | - quantum-openvswitch-agent 60 | 61 | - local_action: pause seconds=20 62 | 63 | - name: create the external network 64 | quantum_network: state=present login_username={{ admin_tenant_user }} login_password={{ admin_pass }} provider_network_type=local 65 | login_tenant_name={{ admin_tenant }} name={{ external_network_name }} 
router_external=true 66 | register: network 67 | 68 | - name: create external router 69 | quantum_router: state=present login_username={{ admin_tenant_user }} login_password={{ admin_pass }} 70 | login_tenant_name={{ admin_tenant }} name={{ external_router_name }} 71 | register: router 72 | 73 | - name: create the subnet for external network 74 | quantum_subnet: state=present login_username={{ admin_tenant_user }} login_password={{ admin_pass }} 75 | login_tenant_name={{ admin_tenant }} enable_dhcp=false network_name={{ external_network_name }} 76 | name={{ external_subnet_name }} cidr={{ external_subnet_cidr }} 77 | 78 | - name: Copy the quantum l3 agent configuration files 79 | template: src=l3_agent.ini.j2 dest=/etc/quantum/l3_agent.ini 80 | notify: restart quantum 81 | 82 | - name: Start the quantum l3 services 83 | service: name=quantum-l3-agent state=started enabled=yes 84 | 85 | - name: create external interface for router 86 | quantum_router_gateway: state=present login_username={{ admin_tenant_user }} login_password={{ admin_pass }} 87 | login_tenant_name={{ admin_tenant }} router_name={{ external_router_name }} 88 | network_name={{ external_network_name }} 89 | 90 | -------------------------------------------------------------------------------- /roles/controller/templates/cinder.conf.j2: -------------------------------------------------------------------------------- 1 | [DEFAULT] 2 | logdir = /var/log/cinder 3 | state_path = /var/lib/cinder 4 | lock_path = /var/lib/cinder/tmp 5 | volumes_dir = /etc/cinder/volumes 6 | iscsi_helper = tgtadm 7 | sql_connection = mysql://cinder:{{ cinder_db_pass }}@localhost/cinder 8 | rpc_backend = cinder.openstack.common.rpc.impl_qpid 9 | rootwrap_config = /etc/cinder/rootwrap.conf 10 | auth_strategy = keystone 11 | 12 | [keystone_authtoken] 13 | admin_tenant_name = admin 14 | admin_user = admin 15 | admin_password = {{ admin_pass }} 16 | auth_host = localhost 17 | auth_port = 35357 18 | auth_protocol = http 19 | signing_dirname = /tmp/keystone-signing-cinder 20 | -------------------------------------------------------------------------------- /roles/controller/templates/dhcp_agent.ini.j2: -------------------------------------------------------------------------------- 1 | [DEFAULT] 2 | interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver 3 | dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq 4 | auth_url = http://{{ hostvars[groups['openstack_controller'][0]].ansible_default_ipv4.address }}:35357/v2.0/ 5 | admin_username = admin 6 | admin_password = {{ admin_pass }} 7 | admin_tenant_name = admin 8 | use_namespaces = False 9 | root_helper=sudo quantum-rootwrap /etc/quantum/rootwrap.conf 10 | -------------------------------------------------------------------------------- /roles/controller/templates/glance-api.conf.j2: -------------------------------------------------------------------------------- 1 | [DEFAULT] 2 | default_store = file 3 | bind_host = 0.0.0.0 4 | bind_port = 9292 5 | log_file = /var/log/glance/api.log 6 | backlog = 4096 7 | sql_connection = mysql://glance:{{ glance_db_pass }}@localhost/glance 8 | sql_idle_timeout = 3600 9 | workers = 1 10 | enable_v1_api = True 11 | enable_v2_api = True 12 | registry_host = 0.0.0.0 13 | registry_port = 9191 14 | registry_client_protocol = http 15 | qpid_notification_exchange = glance 16 | qpid_notification_topic = notifications 17 | qpid_host = localhost 18 | qpid_port = 5672 19 | qpid_username = 20 | qpid_password = 21 | qpid_sasl_mechanisms = 22 | qpid_reconnect_timeout = 0 23 | 
qpid_reconnect_limit = 0 24 | qpid_reconnect_interval_min = 0 25 | qpid_reconnect_interval_max = 0 26 | qpid_reconnect_interval = 0 27 | qpid_heartbeat = 5 28 | # Set to 'ssl' to enable SSL 29 | qpid_protocol = tcp 30 | qpid_tcp_nodelay = True 31 | filesystem_store_datadir = {{ glance_filesystem_store_datadir }} 32 | delayed_delete = False 33 | image_cache_dir = /var/lib/glance/image-cache/ 34 | 35 | [keystone_authtoken] 36 | auth_host = 127.0.0.1 37 | auth_port = 35357 38 | auth_protocol = http 39 | admin_tenant_name = admin 40 | admin_user = admin 41 | admin_password = {{ admin_pass }} 42 | 43 | [paste_deploy] 44 | config_file = /etc/glance/glance-api-paste.ini 45 | flavor=keystone 46 | 47 | -------------------------------------------------------------------------------- /roles/controller/templates/glance-registry.conf.j2: -------------------------------------------------------------------------------- 1 | [DEFAULT] 2 | bind_host = 0.0.0.0 3 | bind_port = 9191 4 | log_file = /var/log/glance/registry.log 5 | backlog = 4096 6 | sql_connection = mysql://glance:{{ glance_db_pass }}@localhost/glance 7 | sql_idle_timeout = 3600 8 | api_limit_max = 1000 9 | limit_param_default = 25 10 | 11 | [keystone_authtoken] 12 | auth_host = 127.0.0.1 13 | auth_port = 35357 14 | auth_protocol = http 15 | admin_tenant_name = admin 16 | admin_user = admin 17 | admin_password = {{ admin_pass }} 18 | 19 | [paste_deploy] 20 | config_file = /etc/glance/glance-registry-paste.ini 21 | flavor=keystone 22 | 23 | -------------------------------------------------------------------------------- /roles/controller/templates/keystone.conf.j2: -------------------------------------------------------------------------------- 1 | [DEFAULT] 2 | 3 | log_file = /var/log/keystone/keystone.log 4 | admin_token = {{ keystone_admin_token }} 5 | 6 | [sql] 7 | connection = mysql://keystone:{{ keystone_db_pass }}@localhost/keystone 8 | 9 | [identity] 10 | driver = keystone.identity.backends.sql.Identity 11 | 12 | [trust] 13 | 14 | [catalog] 15 | template_file = /etc/keystone/default_catalog.templates 16 | driver = keystone.catalog.backends.sql.Catalog 17 | 18 | [token] 19 | driver = keystone.token.backends.sql.Token 20 | 21 | [policy] 22 | 23 | [ec2] 24 | driver = keystone.contrib.ec2.backends.sql.Ec2 25 | 26 | [ssl] 27 | 28 | [signing] 29 | 30 | [ldap] 31 | 32 | [auth] 33 | methods = password,token 34 | password = keystone.auth.plugins.password.Password 35 | token = keystone.auth.plugins.token.Token 36 | 37 | [filter:debug] 38 | paste.filter_factory = keystone.common.wsgi:Debug.factory 39 | 40 | [filter:token_auth] 41 | paste.filter_factory = keystone.middleware:TokenAuthMiddleware.factory 42 | 43 | [filter:admin_token_auth] 44 | paste.filter_factory = keystone.middleware:AdminTokenAuthMiddleware.factory 45 | 46 | [filter:xml_body] 47 | paste.filter_factory = keystone.middleware:XmlBodyMiddleware.factory 48 | 49 | [filter:json_body] 50 | paste.filter_factory = keystone.middleware:JsonBodyMiddleware.factory 51 | 52 | [filter:user_crud_extension] 53 | paste.filter_factory = keystone.contrib.user_crud:CrudExtension.factory 54 | 55 | [filter:crud_extension] 56 | paste.filter_factory = keystone.contrib.admin_crud:CrudExtension.factory 57 | 58 | [filter:ec2_extension] 59 | paste.filter_factory = keystone.contrib.ec2:Ec2Extension.factory 60 | 61 | [filter:s3_extension] 62 | paste.filter_factory = keystone.contrib.s3:S3Extension.factory 63 | 64 | [filter:url_normalize] 65 | paste.filter_factory = keystone.middleware:NormalizingFilter.factory 66 
| 67 | [filter:sizelimit] 68 | paste.filter_factory = keystone.middleware:RequestBodySizeLimiter.factory 69 | 70 | [filter:stats_monitoring] 71 | paste.filter_factory = keystone.contrib.stats:StatsMiddleware.factory 72 | 73 | [filter:stats_reporting] 74 | paste.filter_factory = keystone.contrib.stats:StatsExtension.factory 75 | 76 | [filter:access_log] 77 | paste.filter_factory = keystone.contrib.access:AccessLogMiddleware.factory 78 | 79 | [app:public_service] 80 | paste.app_factory = keystone.service:public_app_factory 81 | 82 | [app:service_v3] 83 | paste.app_factory = keystone.service:v3_app_factory 84 | 85 | [app:admin_service] 86 | paste.app_factory = keystone.service:admin_app_factory 87 | 88 | [pipeline:public_api] 89 | pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug ec2_extension user_crud_extension public_service 90 | 91 | [pipeline:admin_api] 92 | pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug stats_reporting ec2_extension s3_extension crud_extension admin_service 93 | 94 | [pipeline:api_v3] 95 | pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug stats_reporting ec2_extension s3_extension service_v3 96 | 97 | [app:public_version_service] 98 | paste.app_factory = keystone.service:public_version_app_factory 99 | 100 | [app:admin_version_service] 101 | paste.app_factory = keystone.service:admin_version_app_factory 102 | 103 | [pipeline:public_version_api] 104 | pipeline = access_log sizelimit stats_monitoring url_normalize xml_body public_version_service 105 | 106 | [pipeline:admin_version_api] 107 | pipeline = access_log sizelimit stats_monitoring url_normalize xml_body admin_version_service 108 | 109 | [composite:main] 110 | use = egg:Paste#urlmap 111 | /v2.0 = public_api 112 | /v3 = api_v3 113 | / = public_version_api 114 | 115 | [composite:admin] 116 | use = egg:Paste#urlmap 117 | /v2.0 = admin_api 118 | /v3 = api_v3 119 | / = admin_version_api 120 | -------------------------------------------------------------------------------- /roles/controller/templates/keystone_data.sh.j2: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # Copyright 2013 OpenStack LLC 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); you may 6 | # not use this file except in compliance with the License. You may obtain 7 | # a copy of the License at 8 | # 9 | # http://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT 13 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the 14 | # License for the specific language governing permissions and limitations 15 | # under the License. 
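# This script is rendered from keystone_data.sh.j2 by the keystone tasks and run
# once on the controller: it creates the admin tenant, user and role, then
# registers the core services (keystone, nova, cinder, glance, quantum, ec2,
# swift) and their endpoints against the controller address filled in below.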
16 | 17 | CONTROLLER_PUBLIC_ADDRESS={{ ansible_default_ipv4.address }} 18 | CONTROLLER_ADMIN_ADDRESS={{ ansible_default_ipv4.address }} 19 | CONTROLLER_INTERNAL_ADDRESS={{ ansible_default_ipv4.address }} 20 | 21 | TOOLS_DIR=$(cd $(dirname "$0") && pwd) 22 | KEYSTONE_CONF=${KEYSTONE_CONF:-/etc/keystone/keystone.conf} 23 | if [[ -r "$KEYSTONE_CONF" ]]; then 24 | EC2RC="$(dirname "$KEYSTONE_CONF")/ec2rc" 25 | elif [[ -r "$TOOLS_DIR/../etc/keystone.conf" ]]; then 26 | # assume git checkout 27 | KEYSTONE_CONF="$TOOLS_DIR/../etc/keystone.conf" 28 | EC2RC="$TOOLS_DIR/../etc/ec2rc" 29 | else 30 | KEYSTONE_CONF="" 31 | EC2RC="ec2rc" 32 | fi 33 | 34 | # Extract some info from Keystone's configuration file 35 | if [[ -r "$KEYSTONE_CONF" ]]; then 36 | CONFIG_SERVICE_TOKEN=$(sed 's/[[:space:]]//g' $KEYSTONE_CONF | grep ^admin_token= | cut -d'=' -f2) 37 | CONFIG_ADMIN_PORT=$(sed 's/[[:space:]]//g' $KEYSTONE_CONF | grep ^admin_port= | cut -d'=' -f2) 38 | fi 39 | 40 | export SERVICE_TOKEN=${SERVICE_TOKEN:-$CONFIG_SERVICE_TOKEN} 41 | if [[ -z "$SERVICE_TOKEN" ]]; then 42 | echo "No service token found." 43 | echo "Set SERVICE_TOKEN manually from keystone.conf admin_token." 44 | exit 1 45 | fi 46 | 47 | export SERVICE_ENDPOINT=${SERVICE_ENDPOINT:-http://$CONTROLLER_PUBLIC_ADDRESS:${CONFIG_ADMIN_PORT:-35357}/v2.0} 48 | 49 | function get_id () { 50 | echo `"$@" | grep ' id ' | awk '{print $4}'` 51 | } 52 | 53 | # 54 | # Default tenant 55 | # 56 | ADMIN_TENANT=$(get_id keystone tenant-create --name={{ admin_tenant }} \ 57 | --description "Admin Tenant") 58 | 59 | ADMIN_USER=$(get_id keystone user-create --name={{ admin_tenant_user }} \ 60 | --pass={{ admin_pass }}) 61 | 62 | ADMIN_ROLE=$(get_id keystone role-create --name=admin) 63 | 64 | keystone user-role-add --user-id $ADMIN_USER \ 65 | --role-id $ADMIN_ROLE \ 66 | --tenant-id $ADMIN_TENANT 67 | 68 | 69 | # 70 | # Keystone service 71 | # 72 | KEYSTONE_SERVICE=$(get_id \ 73 | keystone service-create --name=keystone \ 74 | --type=identity \ 75 | --description="Keystone Identity Service") 76 | if [[ -z "$DISABLE_ENDPOINTS" ]]; then 77 | keystone endpoint-create --region RegionOne --service-id $KEYSTONE_SERVICE \ 78 | --publicurl "http://$CONTROLLER_PUBLIC_ADDRESS:\$(public_port)s/v2.0" \ 79 | --adminurl "http://$CONTROLLER_ADMIN_ADDRESS:\$(admin_port)s/v2.0" \ 80 | --internalurl "http://$CONTROLLER_INTERNAL_ADDRESS:\$(public_port)s/v2.0" 81 | fi 82 | 83 | # 84 | # Nova service 85 | # 86 | NOVA_SERVICE=$(get_id \ 87 | keystone service-create --name=nova \ 88 | --type=compute \ 89 | --description="Nova Compute Service") 90 | if [[ -z "$DISABLE_ENDPOINTS" ]]; then 91 | keystone endpoint-create --region RegionOne --service-id $NOVA_SERVICE \ 92 | --publicurl "http://$CONTROLLER_PUBLIC_ADDRESS:\$(compute_port)s/v1.1/\$(tenant_id)s" \ 93 | --adminurl "http://$CONTROLLER_ADMIN_ADDRESS:\$(compute_port)s/v1.1/\$(tenant_id)s" \ 94 | --internalurl "http://$CONTROLLER_INTERNAL_ADDRESS:\$(compute_port)s/v1.1/\$(tenant_id)s" 95 | fi 96 | 97 | # 98 | # Volume service 99 | # 100 | VOLUME_SERVICE=$(get_id \ 101 | keystone service-create --name=cinder \ 102 | --type=volume \ 103 | --description="Cinder Volume Service") 104 | if [[ -z "$DISABLE_ENDPOINTS" ]]; then 105 | keystone endpoint-create --region RegionOne --service-id $VOLUME_SERVICE \ 106 | --publicurl "http://$CONTROLLER_PUBLIC_ADDRESS:8776/v1/\$(tenant_id)s" \ 107 | --adminurl "http://$CONTROLLER_ADMIN_ADDRESS:8776/v1/\$(tenant_id)s" \ 108 | --internalurl "http://$CONTROLLER_INTERNAL_ADDRESS:8776/v1/\$(tenant_id)s" 
109 | fi 110 | 111 | # 112 | # Image service 113 | # 114 | GLANCE_SERVICE=$(get_id \ 115 | keystone service-create --name=glance \ 116 | --type=image \ 117 | --description="Glance Image Service") 118 | if [[ -z "$DISABLE_ENDPOINTS" ]]; then 119 | keystone endpoint-create --region RegionOne --service-id $GLANCE_SERVICE \ 120 | --publicurl "http://$CONTROLLER_PUBLIC_ADDRESS:9292" \ 121 | --adminurl "http://$CONTROLLER_ADMIN_ADDRESS:9292" \ 122 | --internalurl "http://$CONTROLLER_INTERNAL_ADDRESS:9292" 123 | fi 124 | # 125 | # Quantum service 126 | # 127 | QUANTUM_SERVICE=$(get_id \ 128 | keystone service-create --name=quantum \ 129 | --type=network \ 130 | --description="Quantum Network Service") 131 | if [[ -z "$DISABLE_ENDPOINTS" ]]; then 132 | keystone endpoint-create --region RegionOne --service-id $QUANTUM_SERVICE \ 133 | --publicurl "http://$CONTROLLER_PUBLIC_ADDRESS:9696" \ 134 | --adminurl "http://$CONTROLLER_ADMIN_ADDRESS:9696" \ 135 | --internalurl "http://$CONTROLLER_INTERNAL_ADDRESS:9696" 136 | fi 137 | 138 | # 139 | # EC2 service 140 | # 141 | EC2_SERVICE=$(get_id \ 142 | keystone service-create --name=ec2 \ 143 | --type=ec2 \ 144 | --description="EC2 Compatibility Layer") 145 | if [[ -z "$DISABLE_ENDPOINTS" ]]; then 146 | keystone endpoint-create --region RegionOne --service-id $EC2_SERVICE \ 147 | --publicurl "http://$CONTROLLER_PUBLIC_ADDRESS:8773/services/Cloud" \ 148 | --adminurl "http://$CONTROLLER_ADMIN_ADDRESS:8773/services/Admin" \ 149 | --internalurl "http://$CONTROLLER_INTERNAL_ADDRESS:8773/services/Cloud" 150 | fi 151 | 152 | # 153 | # Swift service 154 | # 155 | SWIFT_SERVICE=$(get_id \ 156 | keystone service-create --name=swift \ 157 | --type="object-store" \ 158 | --description="Swift Service") 159 | if [[ -z "$DISABLE_ENDPOINTS" ]]; then 160 | keystone endpoint-create --region RegionOne --service-id $SWIFT_SERVICE \ 161 | --publicurl "http://$CONTROLLER_PUBLIC_ADDRESS:8888/v1/AUTH_\$(tenant_id)s" \ 162 | --adminurl "http://$CONTROLLER_ADMIN_ADDRESS:8888/v1" \ 163 | --internalurl "http://$CONTROLLER_INTERNAL_ADDRESS:8888/v1/AUTH_\$(tenant_id)s" 164 | fi 165 | 166 | # create ec2 creds and parse the secret and access key returned 167 | RESULT=$(keystone ec2-credentials-create --tenant-id=$ADMIN_TENANT --user-id=$ADMIN_USER) 168 | ADMIN_ACCESS=`echo "$RESULT" | grep access | awk '{print $4}'` 169 | ADMIN_SECRET=`echo "$RESULT" | grep secret | awk '{print $4}'` 170 | 171 | # write the secret and access to ec2rc 172 | cat > $EC2RC <