├── images
│   ├── os_vnc.png
│   ├── os_images.png
│   ├── os_networks.png
│   ├── os_projects.png
│   ├── os_sysinfo.png
│   ├── os_instances.png
│   ├── os_architecture.png
│   ├── os_dep_diagram.png
│   ├── os_log_network.png
│   ├── os_phy_network.png
│   └── os_network_detailed.png
├── roles
│   ├── controller
│   │   ├── templates
│   │   │   ├── targets.conf.j2
│   │   │   ├── l3_agent.ini.ext.j2
│   │   │   ├── keystonerc.j2
│   │   │   ├── ovs_quantum_plugin.ini.j2
│   │   │   ├── dhcp_agent.ini.j2
│   │   │   ├── l3_agent.ini.j2
│   │   │   ├── glance-registry.conf.j2
│   │   │   ├── cinder.conf.j2
│   │   │   ├── quantum.conf.j2
│   │   │   ├── glance-api.conf.j2
│   │   │   ├── nova.conf.j2
│   │   │   ├── keystone.conf.j2
│   │   │   ├── keystone_data.sh.j2
│   │   │   └── local_settings.j2
│   │   ├── tasks
│   │   │   ├── main.yml
│   │   │   ├── base.yml
│   │   │   ├── horizon.yml
│   │   │   ├── nova.yml
│   │   │   ├── glance.yml
│   │   │   ├── cinder.yml
│   │   │   ├── keystone.yml
│   │   │   └── quantum.yml
│   │   ├── files
│   │   │   └── sysctl.conf
│   │   └── handlers
│   │       └── main.yml
│   ├── common
│   │   ├── files
│   │   │   ├── openvswitch-kmod.files
│   │   │   ├── epel-openstack-grizzly.repo
│   │   │   ├── selinux
│   │   │   ├── epel.repo
│   │   │   ├── RPM-GPG-KEY-EPEL-6
│   │   │   ├── openvswitch-kmod-rhel6.spec
│   │   │   └── openvswitch.spec
│   │   ├── templates
│   │   │   ├── hosts.j2
│   │   │   └── iptables.j2
│   │   ├── handlers
│   │   │   └── main.yml
│   │   └── tasks
│   │       └── main.yml
│   └── compute
│       ├── files
│       │   ├── qemu.conf.j2
│       │   └── guestfs.py
│       ├── handlers
│       │   └── main.yml
│       ├── templates
│       │   ├── ovs_quantum_plugin.ini.j2
│       │   └── nova.conf.j2
│       └── tasks
│           └── main.yml
├── hosts
├── site.yml
├── playbooks
│   ├── image.yml
│   ├── tenant.yml
│   └── vm.yml
├── group_vars
│   └── all
├── ansible.cfg
└── README.md
--------------------------------------------------------------------------------
/images/os_vnc.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/HEAD/images/os_vnc.png
--------------------------------------------------------------------------------
/images/os_images.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/HEAD/images/os_images.png
--------------------------------------------------------------------------------
/images/os_networks.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/HEAD/images/os_networks.png
--------------------------------------------------------------------------------
/images/os_projects.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/HEAD/images/os_projects.png
--------------------------------------------------------------------------------
/images/os_sysinfo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/HEAD/images/os_sysinfo.png
--------------------------------------------------------------------------------
/images/os_instances.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/HEAD/images/os_instances.png
--------------------------------------------------------------------------------
/roles/controller/templates/targets.conf.j2:
--------------------------------------------------------------------------------
include /etc/cinder/volumes/*

default-driver iscsi
--------------------------------------------------------------------------------
/images/os_architecture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/HEAD/images/os_architecture.png
--------------------------------------------------------------------------------
/images/os_dep_diagram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/HEAD/images/os_dep_diagram.png
--------------------------------------------------------------------------------
/images/os_log_network.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/HEAD/images/os_log_network.png
--------------------------------------------------------------------------------
/images/os_phy_network.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/HEAD/images/os_phy_network.png
--------------------------------------------------------------------------------
/hosts:
--------------------------------------------------------------------------------
[openstack_controller]
server1.4aths.friocorte.com

[openstack_compute]
server1.4aths.friocorte.com
--------------------------------------------------------------------------------
/images/os_network_detailed.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/goozbach/ansible-redhat-openstack/HEAD/images/os_network_detailed.png
--------------------------------------------------------------------------------
/roles/common/files/openvswitch-kmod.files:
--------------------------------------------------------------------------------
%defattr(644,root,root,755)
/lib/modules/%2-%1
/etc/depmod.d/openvswitch.conf
--------------------------------------------------------------------------------
/roles/controller/templates/l3_agent.ini.ext.j2:
--------------------------------------------------------------------------------
gateway_external_network_id = {{ network.id }}
router_id = {{ router.id }}
--------------------------------------------------------------------------------
/roles/common/templates/hosts.j2:
--------------------------------------------------------------------------------
127.0.0.1   localhost
{% for host in groups['all'] %}

{{ hostvars[host].ansible_default_ipv4.address }} {{ host }}

{% endfor %}
--------------------------------------------------------------------------------
/roles/controller/templates/keystonerc.j2:
--------------------------------------------------------------------------------
export OS_USERNAME=admin
export OS_PASSWORD={{ admin_pass }}
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
--------------------------------------------------------------------------------
/site.yml:
--------------------------------------------------------------------------------
---
# The main OpenStack site deployment playbook

- hosts: all
  roles:
    - common

- hosts: openstack_controller
  roles:
    - controller

- hosts: openstack_compute
  roles:
    - compute
--------------------------------------------------------------------------------
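The inventory above plus site.yml drive the whole deployment. A minimal usage sketch, assuming Ansible is installed on a machine that can SSH to the inventory hosts as root and that the repository root is the working directory:

    # Apply the common role everywhere, then the controller and compute roles
    ansible-playbook -i hosts site.yml

    # Any variable from group_vars/all can be overridden for a single run
    ansible-playbook -i hosts site.yml -e "admin_pass=SomeOtherSecret"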
/roles/controller/tasks/main.yml:
--------------------------------------------------------------------------------
---
# The main include file for the controller node

- include: base.yml
- include: keystone.yml
- include: glance.yml
- include: nova.yml
- include: cinder.yml
- include: horizon.yml
- include: quantum.yml
--------------------------------------------------------------------------------
/roles/common/handlers/main.yml:
--------------------------------------------------------------------------------
---
# Handlers for the common tasks

- name: restart iptables
  service: name=iptables state=restarted

- name: restart services
  service: name={{ item }} state=restarted
  with_items:
    - ntpd
    - messagebus
--------------------------------------------------------------------------------
/roles/compute/files/qemu.conf.j2:
--------------------------------------------------------------------------------
user = "root"
group = "root"

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc","/dev/hpet", "/dev/net/tun",
]

clear_emulator_capabilities = 0
--------------------------------------------------------------------------------
/roles/common/files/epel-openstack-grizzly.repo:
--------------------------------------------------------------------------------
# Place this file in your /etc/yum.repos.d/ directory

[epel-openstack-grizzly]
name=OpenStack Grizzly Repository for EPEL 6
baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6
enabled=1
skip_if_unavailable=1
gpgcheck=0
priority=98
--------------------------------------------------------------------------------
/roles/compute/handlers/main.yml:
--------------------------------------------------------------------------------
---
# Handlers for the compute node

- name: restart libvirtd
  service: name=libvirtd state=restarted

- name: restart nova compute
  service: name=openstack-nova-compute state=restarted enabled=yes

- name: restart quantum ovsagent
  service: name=quantum-openvswitch-agent state=restarted enabled=yes
--------------------------------------------------------------------------------
/playbooks/image.yml:
--------------------------------------------------------------------------------
---
# Upload an image
- hosts: openstack_controller
  tasks:
    - name: upload the file to glance
      glance_image: login_username={{ admin_tenant_user }} login_password={{ admin_pass }}
                    login_tenant_name={{ admin_tenant }} name={{ image_name }} container_format=bare
                    disk_format=qcow2 state=present copy_from={{ image_url }}
--------------------------------------------------------------------------------
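image.yml takes image_name and image_url at run time (the admin credentials come from group_vars/all). A hedged example invocation; the CirrOS URL is only an illustration of a qcow2 image location:

    ansible-playbook -i hosts playbooks/image.yml \
      -e "image_name=cirros image_url=http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img"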
/roles/controller/templates/ovs_quantum_plugin.ini.j2:
--------------------------------------------------------------------------------
[DATABASE]
sql_connection = mysql://quantum:{{ quantum_db_pass }}@localhost/ovs_quantum
reconnect_interval = 2

[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
local_ip = {{ ansible_default_ipv4.address }}

[AGENT]
polling_interval = 2
root_helper=sudo quantum-rootwrap /etc/quantum/rootwrap.conf

[SECURITYGROUP]
--------------------------------------------------------------------------------
/roles/controller/templates/dhcp_agent.ini.j2:
--------------------------------------------------------------------------------
[DEFAULT]
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq
auth_url = http://{{ hostvars[groups['openstack_controller'][0]].ansible_default_ipv4.address }}:35357/v2.0/
admin_username = admin
admin_password = {{ admin_pass }}
admin_tenant_name = admin
use_namespaces = False
root_helper=sudo quantum-rootwrap /etc/quantum/rootwrap.conf
--------------------------------------------------------------------------------
/roles/controller/tasks/base.yml:
--------------------------------------------------------------------------------
---
# Tasks for the base controller node

- name: Install the base packages
  yum: name={{ item }} state=installed
  with_items:
    - mysql
    - mysql-server
    - MySQL-python
    - qpid-cpp-client
    - qpid-cpp-server
    - openstack-utils

- name: start the services
  service: name={{ item }} state=started enabled=yes
  with_items:
    - mysqld
    - qpidd
--------------------------------------------------------------------------------
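Everything later on the controller depends on the MySQL and Qpid services that base.yml starts. A quick sanity check on the controller before continuing (a sketch, assuming a root shell there):

    service mysqld status && service qpidd status
    # Both daemons should be listening on their default ports (3306 and 5672)
    netstat -tlnp | egrep ':3306|:5672'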
/roles/common/files/selinux:
--------------------------------------------------------------------------------
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
--------------------------------------------------------------------------------
/roles/controller/templates/l3_agent.ini.j2:
--------------------------------------------------------------------------------
[DEFAULT]
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
auth_url = http://{{ hostvars[groups['openstack_controller'][0]].ansible_default_ipv4.address }}:35357/v2.0/
admin_username = admin
admin_password = {{ admin_pass }}
admin_tenant_name = admin
use_namespaces = False
gateway_external_network_id = {{ network.id }}
router_id = {{ router.id }}
external_network_bridge = br-ex
root_helper=sudo quantum-rootwrap /etc/quantum/rootwrap.conf
--------------------------------------------------------------------------------
/roles/compute/templates/ovs_quantum_plugin.ini.j2:
--------------------------------------------------------------------------------
[DATABASE]
sql_connection = mysql://quantum:{{ quantum_db_pass }}@{{ hostvars[groups['openstack_controller'][0]].ansible_default_ipv4.address }}/ovs_quantum
reconnect_interval = 2

[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
local_ip = {{ ansible_default_ipv4.address }}

[AGENT]
polling_interval = 2
root_helper=sudo quantum-rootwrap /etc/quantum/rootwrap.conf

[SECURITYGROUP]
firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
--------------------------------------------------------------------------------
/roles/controller/templates/glance-registry.conf.j2:
--------------------------------------------------------------------------------
[DEFAULT]
bind_host = 0.0.0.0
bind_port = 9191
log_file = /var/log/glance/registry.log
backlog = 4096
sql_connection = mysql://glance:{{ glance_db_pass }}@localhost/glance
sql_idle_timeout = 3600
api_limit_max = 1000
limit_param_default = 25

[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = admin
admin_user = admin
admin_password = {{ admin_pass }}

[paste_deploy]
config_file = /etc/glance/glance-registry-paste.ini
flavor=keystone
--------------------------------------------------------------------------------
/roles/controller/templates/cinder.conf.j2:
--------------------------------------------------------------------------------
[DEFAULT]
logdir = /var/log/cinder
state_path = /var/lib/cinder
lock_path = /var/lib/cinder/tmp
volumes_dir = /etc/cinder/volumes
iscsi_helper = tgtadm
sql_connection = mysql://cinder:{{ cinder_db_pass }}@localhost/cinder
rpc_backend = cinder.openstack.common.rpc.impl_qpid
rootwrap_config = /etc/cinder/rootwrap.conf
auth_strategy = keystone

[keystone_authtoken]
admin_tenant_name = admin
admin_user = admin
admin_password = {{ admin_pass }}
auth_host = localhost
auth_port = 35357
auth_protocol = http
signing_dirname = /tmp/keystone-signing-cinder
--------------------------------------------------------------------------------
/roles/controller/tasks/horizon.yml:
--------------------------------------------------------------------------------
---
# Tasks for the Horizon controller node

- name: Install OpenStack Horizon packages.
  yum: name={{ item }} state=installed
  with_items:
    - memcached
    - python-memcached
    - mod_wsgi
    - openstack-dashboard
    - openstack-nova-novncproxy

- name: Copy the configuration files for horizon
  template: src=local_settings.j2 dest=/etc/openstack-dashboard/local_settings
  notify: restart horizon

- name: Start the services for horizon
  service: name={{ item }} state=started enabled=yes
  with_items:
    - openstack-nova-consoleauth
    - openstack-nova-novncproxy
    - httpd
    - memcached
--------------------------------------------------------------------------------
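Once horizon.yml has run, the dashboard should answer over HTTP on the controller. A smoke test from any machine that can reach it (the hostname is the one from the inventory; /dashboard is the packaged default path):

    curl -I http://server1.4aths.friocorte.com/dashboard
    # Expect a 200, or a redirect to the login page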
/group_vars/all:
--------------------------------------------------------------------------------
---
# Variables for openstack

keystone_db_pass: Rottinva
keystone_admin_token: 03a776b545249a6f6b2f

glance_db_pass: RedebTix
glance_filesystem_store_datadir: /var/lib/glance/images/

nova_db_pass: Eawjoof2
nova_libvirt_type: kvm

quantum_db_pass: poufujusio
quantum_external_interface: eth1

cinder_db_pass: gamBefcel
cinder_volume_dev: /dev/md127

external_network_name: external_network
external_subnet_name: external_sub_network
external_subnet_cidr: 10.227.1.0/16
external_router_name: external_router

#
# By default a tenant admin is created and below is the password for the
# admin user of the admin tenant.
#
admin_tenant: admin
admin_tenant_user: admin
admin_pass: waxDoptewv
--------------------------------------------------------------------------------
/roles/controller/templates/quantum.conf.j2:
--------------------------------------------------------------------------------
{% set head_node = hostvars[groups['openstack_controller'][0]] %}
[DEFAULT]
lock_path = $state_path/lock
bind_host = 0.0.0.0
bind_port = 9696
api_paste_config = api-paste.ini
auth_strategy = keystone
control_exchange = quantum
allow_overlapping_ips = False
notification_topics = notifications
core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
rpc_backend = quantum.openstack.common.rpc.impl_qpid
qpid_hostname = {{ head_node.ansible_default_ipv4.address }}

[QUOTAS]

[DEFAULT_SERVICETYPE]

[AGENT]

[keystone_authtoken]
auth_host = {{ head_node.ansible_default_ipv4.address }}
auth_port = 35357
auth_protocol = http
signing_dir = /var/lib/quantum/keystone-signing
admin_tenant_name = admin
admin_user = admin
admin_password = {{ admin_pass }}
--------------------------------------------------------------------------------
/roles/controller/tasks/nova.yml:
--------------------------------------------------------------------------------
---
# Tasks for the nova controller node

- name: Install OpenStack nova packages.
  yum: name=openstack-nova state=installed

- name: Setup DB for nova
  shell: /usr/bin/openstack-db --init --service nova -p {{ nova_db_pass }} -r " " -y
         creates=/var/lib/mysql/nova

- name: DB sync for the nova services
  shell: nova-manage db sync; touch /etc/nova/db.synced
         creates=/etc/nova/db.synced

- name: copy configuration file for nova
  template: src=nova.conf.j2 dest=/etc/nova/nova.conf
  notify: restart nova

- name: Start the services for nova
  service: name={{ item }} state=started enabled=yes
  with_items:
    - openstack-nova-api
    - openstack-nova-scheduler
    - openstack-nova-network
    - openstack-nova-cert
    - openstack-nova-console
    - openstack-nova-conductor
--------------------------------------------------------------------------------
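After the nova tasks finish, each control service registers itself in the database. A quick health check on the controller once the services have had a moment to check in:

    nova-manage service list
    # Every service should show :-) in the State column; XXX means it never reported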
/roles/controller/tasks/glance.yml:
--------------------------------------------------------------------------------
---
# Tasks for the glance controller node

- name: Install glance OpenStack components
  yum: name=openstack-glance state=installed

- name: Setup DB for glance
  shell: /usr/bin/openstack-db --init --service glance -p {{ glance_db_pass }} -r " " -y
         creates=/var/lib/mysql/glance

- name: Copy the API configuration file for glance
  template: src=glance-api.conf.j2 dest=/etc/glance/glance-api.conf
  notify: restart glance

- name: Copy the registry configuration file for glance
  template: src=glance-registry.conf.j2 dest=/etc/glance/glance-registry.conf
  notify: restart glance

- name: DB sync for Glance
  shell: /usr/bin/glance-manage db_sync; touch /etc/glance/db.synced
         creates=/etc/glance/db.synced

- name: start the glance services
  service: name={{ item }} state=started enabled=yes
  with_items:
    - openstack-glance-api
    - openstack-glance-registry
--------------------------------------------------------------------------------
/roles/common/files/epel.repo:
--------------------------------------------------------------------------------
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 6 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1
--------------------------------------------------------------------------------
/roles/controller/tasks/cinder.yml:
--------------------------------------------------------------------------------
---
# Tasks for the cinder controller node

- name: Install OpenStack cinder packages.
  yum: name=openstack-cinder state=installed

- name: Setup DB for cinder
  shell: /usr/bin/openstack-db --init --service cinder -p {{ cinder_db_pass }} -r " " -y
         creates=/var/lib/mysql/cinder

- name: copy configuration file for cinder
  template: src=cinder.conf.j2 dest=/etc/cinder/cinder.conf
  notify: restart cinder

- name: copy tgtd configuration file for cinder
  template: src=targets.conf.j2 dest=/etc/tgt/targets.conf
  notify: restart tgtd

- name: start the tgtd service
  service: name=tgtd state=started enabled=yes

- name: DB sync for the cinder services
  shell: cinder-manage db sync; touch /etc/cinder/db.synced
         creates=/etc/cinder/db.synced

- name: create the volume group for cinder
  shell: vgcreate cinder-volumes {{ cinder_volume_dev }}
         creates=/etc/lvm/backup/cinder-volumes

- name: Start the services for cinder
  service: name={{ item }} state=started enabled=yes
  with_items:
    - openstack-cinder-api
    - openstack-cinder-scheduler
    - openstack-cinder-volume
--------------------------------------------------------------------------------
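With tgtd running and the cinder-volumes volume group created, the volume path can be exercised end to end. A sketch, assuming /root/keystonerc from the keystone role and the python-cinderclient package are present on the controller:

    source /root/keystonerc
    cinder create --display-name smoke-test 1
    cinder list    # the 1 GB volume should reach the "available" status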
/roles/controller/files/sysctl.conf:
--------------------------------------------------------------------------------
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 1

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.
#net.bridge.bridge-nf-call-ip6tables = 0
#net.bridge.bridge-nf-call-iptables = 0
#net.bridge.bridge-nf-call-arptables = 0

# Controls the default maximum size of a message queue
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
--------------------------------------------------------------------------------
/roles/controller/templates/glance-api.conf.j2:
--------------------------------------------------------------------------------
[DEFAULT]
default_store = file
bind_host = 0.0.0.0
bind_port = 9292
log_file = /var/log/glance/api.log
backlog = 4096
sql_connection = mysql://glance:{{ glance_db_pass }}@localhost/glance
sql_idle_timeout = 3600
workers = 1
enable_v1_api = True
enable_v2_api = True
registry_host = 0.0.0.0
registry_port = 9191
registry_client_protocol = http
qpid_notification_exchange = glance
qpid_notification_topic = notifications
qpid_host = localhost
qpid_port = 5672
qpid_username =
qpid_password =
qpid_sasl_mechanisms =
qpid_reconnect_timeout = 0
qpid_reconnect_limit = 0
qpid_reconnect_interval_min = 0
qpid_reconnect_interval_max = 0
qpid_reconnect_interval = 0
qpid_heartbeat = 5
# Set to 'ssl' to enable SSL
qpid_protocol = tcp
qpid_tcp_nodelay = True
filesystem_store_datadir = {{ glance_filesystem_store_datadir }}
delayed_delete = False
image_cache_dir = /var/lib/glance/image-cache/

[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = admin
admin_user = admin
admin_password = {{ admin_pass }}

[paste_deploy]
config_file = /etc/glance/glance-api-paste.ini
flavor=keystone
--------------------------------------------------------------------------------
/roles/common/templates/iptables.j2:
--------------------------------------------------------------------------------
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
{% if 'openstack_controller' in group_names %}
-A INPUT -p gre -j ACCEPT
-A INPUT -p tcp -m multiport --dports 5000,35357 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 8080 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 9292 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 3260,8776 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 5672 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 5900:5999 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 6080 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 8773,8774,8775 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 9696 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 443 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 80 -j ACCEPT
{% endif %}
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
--------------------------------------------------------------------------------
/roles/controller/handlers/main.yml:
--------------------------------------------------------------------------------
---
# Handlers for the controller plays

- name: restart keystone
  service: name=openstack-keystone state=restarted

- name: restart glance
  service: name={{ item }} state=restarted
  with_items:
    - openstack-glance-api
    - openstack-glance-registry

- name: restart nova
  service: name={{ item }} state=restarted
  with_items:
    - openstack-nova-api
    - openstack-nova-scheduler
    - openstack-nova-network
    - openstack-nova-cert
    - openstack-nova-console
    - openstack-nova-conductor

- name: restart quantum
  service: name={{ item }} state=restarted
  with_items:
    - quantum-server
    - quantum-l3-agent
    - quantum-dhcp-agent
    - quantum-openvswitch-agent

- name: restart cinder
  service: name={{ item }} state=restarted enabled=yes
  with_items:
    - openstack-cinder-api
    - openstack-cinder-scheduler
    - openstack-cinder-volume

- name: restart tgtd
  service: name=tgtd state=restarted

- name: reload kernel parameters
  shell: sysctl -p

- name: restart horizon
  service: name={{ item }} state=restarted
  with_items:
    - openstack-nova-consoleauth
    - openstack-nova-novncproxy
    - httpd
    - memcached
--------------------------------------------------------------------------------
/roles/controller/tasks/keystone.yml:
--------------------------------------------------------------------------------
---
# Tasks for the keystone controller node

- name: Install OpenStack keystone packages.
  yum: name={{ item }} state=installed
  with_items:
    - openstack-keystone
    - python-keystoneclient

- name: Setup DB for keystone
  shell: creates=/var/lib/mysql/keystone /usr/bin/openstack-db --init --service keystone -p {{ keystone_db_pass }} -r " " -y

- name: Create certs for keystone
  shell: creates=/etc/keystone/ssl/certs/ca.pem /usr/bin/keystone-manage pki_setup; chmod 777 /var/lock

- name: Copy the configuration files for keystone
  template: src=keystone.conf.j2 dest=/etc/keystone/keystone.conf
  notify: restart keystone

- name: Change ownership of all files to keystone
  file: path=/etc/keystone recurse=yes owner=keystone group=keystone state=directory

- name: Start the keystone services
  service: name=openstack-keystone state=started enabled=yes

- name: DB sync for keystone
  shell: creates=/etc/keystone/db.synced /usr/bin/keystone-manage db_sync; touch /etc/keystone/db.synced

- name: Copy the file for data insertion into keystone
  template: src=keystone_data.sh.j2 dest=/tmp/keystone_data.sh mode=0755

- name: Copy the keystonerc file
  template: src=keystonerc.j2 dest=/root/keystonerc mode=0755

- name: Upload the data to keystone database
  shell: creates=/etc/keystone/db.updated /tmp/keystone_data.sh; touch /etc/keystone/db.updated
--------------------------------------------------------------------------------
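keystone.yml drops /root/keystonerc and seeds the service catalog through keystone_data.sh. A quick verification sketch on the controller:

    source /root/keystonerc
    keystone user-list
    keystone endpoint-list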
/roles/common/files/RPM-GPG-KEY-EPEL-6:
--------------------------------------------------------------------------------
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.5 (GNU/Linux)

mQINBEvSKUIBEADLGnUj24ZVKW7liFN/JA5CgtzlNnKs7sBg7fVbNWryiE3URbn1
JXvrdwHtkKyY96/ifZ1Ld3lE2gOF61bGZ2CWwJNee76Sp9Z+isP8RQXbG5jwj/4B
M9HK7phktqFVJ8VbY2jfTjcfxRvGM8YBwXF8hx0CDZURAjvf1xRSQJ7iAo58qcHn
XtxOAvQmAbR9z6Q/h/D+Y/PhoIJp1OV4VNHCbCs9M7HUVBpgC53PDcTUQuwcgeY6
pQgo9eT1eLNSZVrJ5Bctivl1UcD6P6CIGkkeT2gNhqindRPngUXGXW7Qzoefe+fV
QqJSm7Tq2q9oqVZ46J964waCRItRySpuW5dxZO34WM6wsw2BP2MlACbH4l3luqtp
Xo3Bvfnk+HAFH3HcMuwdaulxv7zYKXCfNoSfgrpEfo2Ex4Im/I3WdtwME/Gbnwdq
3VJzgAxLVFhczDHwNkjmIdPAlNJ9/ixRjip4dgZtW8VcBCrNoL+LhDrIfjvnLdRu
vBHy9P3sCF7FZycaHlMWP6RiLtHnEMGcbZ8QpQHi2dReU1wyr9QgguGU+jqSXYar
1yEcsdRGasppNIZ8+Qawbm/a4doT10TEtPArhSoHlwbvqTDYjtfV92lC/2iwgO6g
YgG9XrO4V8dV39Ffm7oLFfvTbg5mv4Q/E6AWo/gkjmtxkculbyAvjFtYAQARAQAB
tCFFUEVMICg2KSA8ZXBlbEBmZWRvcmFwcm9qZWN0Lm9yZz6JAjYEEwECACAFAkvS
KUICGw8GCwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRA7Sd8qBgi4lR/GD/wLGPv9
qO39eyb9NlrwfKdUEo1tHxKdrhNz+XYrO4yVDTBZRPSuvL2yaoeSIhQOKhNPfEgT
9mdsbsgcfmoHxmGVcn+lbheWsSvcgrXuz0gLt8TGGKGGROAoLXpuUsb1HNtKEOwP
Q4z1uQ2nOz5hLRyDOV0I2LwYV8BjGIjBKUMFEUxFTsL7XOZkrAg/WbTH2PW3hrfS
WtcRA7EYonI3B80d39ffws7SmyKbS5PmZjqOPuTvV2F0tMhKIhncBwoojWZPExft
HpKhzKVh8fdDO/3P1y1Fk3Cin8UbCO9MWMFNR27fVzCANlEPljsHA+3Ez4F7uboF
p0OOEov4Yyi4BEbgqZnthTG4ub9nyiupIZ3ckPHr3nVcDUGcL6lQD/nkmNVIeLYP
x1uHPOSlWfuojAYgzRH6LL7Idg4FHHBA0to7FW8dQXFIOyNiJFAOT2j8P5+tVdq8
wB0PDSH8yRpn4HdJ9RYquau4OkjluxOWf0uRaS//SUcCZh+1/KBEOmcvBHYRZA5J
l/nakCgxGb2paQOzqqpOcHKvlyLuzO5uybMXaipLExTGJXBlXrbbASfXa/yGYSAG
iVrGz9CE6676dMlm8F+s3XXE13QZrXmjloc6jwOljnfAkjTGXjiB7OULESed96MR
XtfLk0W5Ab9pd7tKDR6QHI7rgHXfCopRnZ2VVQ==
=V/6I
-----END PGP PUBLIC KEY BLOCK-----
--------------------------------------------------------------------------------
/playbooks/tenant.yml:
--------------------------------------------------------------------------------
---
# Creates tenant, user and role

- hosts: openstack_controller
  tasks:
    - name: Create Tenant
      keystone_user: token={{ keystone_admin_token }} tenant={{ tenant_name }} tenant_description="New Tenant"

    - name: Create the user for tenant
      keystone_user: token={{ keystone_admin_token }} user={{ tenant_username }} tenant={{ tenant_name }}
                     password={{ tenant_password }}

    - name: Assign role to the created user
      keystone_user: token={{ keystone_admin_token }} role=admin user={{ tenant_username }} tenant={{ tenant_name }}

    - name: Create a network for the tenant
      quantum_network: state=present login_username={{ admin_tenant_user }} login_password={{ admin_pass }}
                       provider_network_type=gre login_tenant_name={{ admin_tenant }}
                       provider_segmentation_id={{ tunnel_id }} tenant_name={{ tenant_name }} name={{ network_name }}

    - name: Create a subnet for the network
      quantum_subnet: state=present login_username={{ admin_tenant_user }} login_password={{ admin_pass }}
                      login_tenant_name={{ admin_tenant }} tenant_name={{ tenant_name }}
                      network_name={{ network_name }} name={{ subnet_name }} cidr={{ subnet_cidr }}

    - name: Add the network interface to the router
      quantum_router_interface: state=present login_username={{ admin_tenant_user }} login_password={{ admin_pass }}
                                login_tenant_name={{ admin_tenant }} tenant_name={{ tenant_name }}
                                router_name={{ external_router_name }} subnet_name={{ subnet_name }}
--------------------------------------------------------------------------------
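tenant.yml takes all of its tenant-specific variables on the command line (external_router_name comes from group_vars/all). A hedged example that creates a tenant with its own GRE network; the names and tunnel id are illustrations:

    ansible-playbook -i hosts playbooks/tenant.yml \
      -e "tenant_name=demo tenant_username=demo tenant_password=demopass \
          network_name=demo_net subnet_name=demo_sub subnet_cidr=192.168.10.0/24 tunnel_id=5"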
/roles/compute/tasks/main.yml:
--------------------------------------------------------------------------------
---
# Tasks for the openstack compute nodes

- name: Install the required virtualization packages
  yum: name={{ item }} state=installed
  with_items:
    - qemu-img
    - qemu-kvm
    - qemu-kvm-tools
    - libvirt
    - libvirt-client
    - libvirt-python
    - libguestfs-tools

- name: Configure qemu.conf in libvirt
  copy: src=qemu.conf.j2 dest=/etc/libvirt/qemu.conf
  notify: restart libvirtd

- name: Start the libvirtd service
  service: name=libvirtd state=started enabled=yes

- name: create link for the qemu
  file: src=/usr/libexec/qemu-kvm dest=/usr/bin/qemu-system-x86_64 state=link

- name: Install packages for openstack
  yum: name={{ item }} state=installed
  with_items:
    - openstack-nova-compute
    - openstack-quantum-openvswitch

- name: Create the internal bridges for openvswitch
  shell: creates=/etc/quantum/br-int.created /usr/bin/ovs-vsctl add-br br-int; touch /etc/quantum/br-int.created

- name: Copy the quantum.conf configuration files
  template: src=roles/controller/templates/quantum.conf.j2 dest=/etc/quantum/quantum.conf

- name: Copy the quantum ovs agent configuration files
  template: src=ovs_quantum_plugin.ini.j2 dest=/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
  notify: restart quantum ovsagent

- name: Copy the nova configuration files
  template: src=nova.conf.j2 dest=/etc/nova/nova.conf
  notify: restart nova compute

- name: Give permissions to lock folder
  file: path=/var/lock state=directory owner=root group=root mode=0777

- name: Copy the modified file for key injection bug
  copy: src=guestfs.py dest=/usr/lib/python2.6/site-packages/nova/virt/disk/vfs/
--------------------------------------------------------------------------------
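The compute role assumes hardware virtualization, since nova_libvirt_type is kvm in group_vars/all. A quick check on a compute node before applying the role:

    egrep -c '(vmx|svm)' /proc/cpuinfo    # non-zero means the CPU supports KVM
    lsmod | grep kvm                      # kvm plus kvm_intel or kvm_amd once libvirtd is up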
/playbooks/vm.yml:
--------------------------------------------------------------------------------
---
# Playbook to create a vm for a tenant

- hosts: openstack_controller
  tasks:
    - name: Get the image id from glance
      glance_image: login_username={{ tenant_username }} login_password={{ tenant_password }}
                    login_tenant_name={{ tenant_name }} name={{ image_name }} state=present copy_from="http://127.0.0.1/dummy"
      register: image

    - name: Create keypair for tenant
      nova_keypair: state=present login_username={{ tenant_username }} login_password={{ tenant_password }}
                    login_tenant_name={{ tenant_name }} name={{ keypair_name }}
                    public_key="{{ lookup('file','~/.ssh/id_rsa.pub') }}"

    - name: Get network id for the tenant
      quantum_network: state=present login_username={{ tenant_username }} login_password={{ tenant_password }}
                       login_tenant_name={{ tenant_name }} name={{ network_name }}
      register: network

    - name: Create a vm for the tenant
      nova_compute:
        state: present
        login_username: '{{ tenant_username }}'
        login_password: '{{ tenant_password }}'
        login_tenant_name: '{{ tenant_name }}'
        name: '{{ vm_name }}'
        image_id: '{{ image.id }}'
        key_name: '{{ keypair_name }}'
        wait_for: 440
        flavor_id: '{{ flavor_id }}'
        nics:
          - net-id: '{{ network.id }}'
        meta:
          hostname: test1
          group: uge_master
      register: vm

    - name: Assign a public ip to the vm
      quantum_floating_ip: state=present login_username={{ tenant_username }} login_password={{ tenant_password }}
                           login_tenant_name={{ tenant_name }} network_name={{ external_network_name }} instance_name={{ vm_name }}

    - name: Wait till the instance comes up
      wait_for: port=22 delay=10 host={{ vm.private_ip }}
--------------------------------------------------------------------------------
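vm.yml reuses the tenant credentials and ends by waiting for SSH on the instance's private address. An example run against the demo tenant created above; flavor_id=1 assumes the default m1.tiny flavor:

    ansible-playbook -i hosts playbooks/vm.yml \
      -e "tenant_name=demo tenant_username=demo tenant_password=demopass network_name=demo_net \
          image_name=cirros keypair_name=demo_key vm_name=test1 flavor_id=1"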
/roles/common/files/openvswitch-kmod-rhel6.spec:
--------------------------------------------------------------------------------
# Generated automatically -- do not modify!    -*- buffer-read-only: t -*-
# Spec file for Open vSwitch kernel modules on Red Hat Enterprise
# Linux 6.

# Copyright (C) 2011, 2012 Nicira, Inc.
#
# Copying and distribution of this file, with or without modification,
# are permitted in any medium without royalty provided the copyright
# notice and this notice are preserved.  This file is offered as-is,
# without warranty of any kind.

%define oname openvswitch

Name:           %{oname}-kmod
Version:        1.10.0
Release:        1%{?dist}
Summary:        Open vSwitch kernel module

Group:          System/Kernel
License:        GPLv2
URL:            http://openvswitch.org/
Source0:        %{oname}-%{version}.tar.gz
Source1:        %{oname}-kmod.files
BuildRoot:      %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX)
BuildRequires:  %kernel_module_package_buildreqs

# Without this we get an empty openvswitch-debuginfo package (whose name
# conflicts with the openvswitch-debuginfo package for OVS userspace).
%undefine _enable_debug_packages

# Use -D 'kversion 2.6.32-131.6.1.el6.x86_64' to build package
# for specified kernel version.
%{?kversion:%define kernel_version %kversion}

# Use -D 'kflavors default debug kdump' to build packages for
# specified kernel variants.
%{!?kflavors:%define kflavors default}

%kernel_module_package -n %{oname} -f %{SOURCE1} %kflavors

%description
Open vSwitch Linux kernel module.

%prep

%setup -n %{oname}-%{version}
cat > %{oname}.conf << EOF
override %{oname} * extra/%{oname}
override %{oname} * weak-updates/%{oname}
EOF

%build
for flavor in %flavors_to_build; do
    mkdir _$flavor
    (cd _$flavor && ../configure --with-linux="%{kernel_source $flavor}")
    %{__make} -C _$flavor/datapath/linux %{?_smp_mflags}
done

%install
export INSTALL_MOD_PATH=$RPM_BUILD_ROOT
export INSTALL_MOD_DIR=extra/%{oname}
for flavor in %flavors_to_build ; do
    make -C %{kernel_source $flavor} modules_install \
        M="`pwd`"/_$flavor/datapath/linux
done
install -d %{buildroot}%{_sysconfdir}/depmod.d/
install -m 644 %{oname}.conf %{buildroot}%{_sysconfdir}/depmod.d/

%clean
rm -rf $RPM_BUILD_ROOT
--------------------------------------------------------------------------------
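The spec's own comments show how to target a kernel explicitly; a sketch of building the module by hand for the running kernel (the common role automates this via its rpmbuild tasks):

    cp roles/common/files/openvswitch-kmod-rhel6.spec /root
    rpmbuild -bb -D "kversion $(uname -r)" /root/openvswitch-kmod-rhel6.spec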
/roles/compute/templates/nova.conf.j2:
--------------------------------------------------------------------------------
{% set head_node = hostvars[groups['openstack_controller'][0]] %}
[DEFAULT]

# LOGS/STATE
verbose=True
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
rootwrap_config=/etc/nova/rootwrap.conf

# SCHEDULER
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

# VOLUMES
volume_api_class=nova.volume.cinder.API

# DATABASE
sql_connection=mysql://nova:{{ nova_db_pass }}@{{ head_node.ansible_default_ipv4.address }}/nova

# COMPUTE
libvirt_type={{ nova_libvirt_type }}
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini

# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True

# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host={{ head_node.ansible_default_ipv4.address }}
s3_host={{ head_node.ansible_default_ipv4.address }}

# QPID
rpc_backend=nova.rpc.impl_qpid
qpid_hostname={{ head_node.ansible_default_ipv4.address }}

# GLANCE
image_service=nova.image.glance.GlanceImageService
glance_api_servers={{ head_node.ansible_default_ipv4.address }}:9292

# NETWORK
network_api_class = nova.network.quantumv2.api.API
quantum_admin_username = admin
quantum_admin_password = {{ admin_pass }}
quantum_admin_auth_url = http://{{ head_node.ansible_default_ipv4.address }}:35357/v2.0
quantum_auth_strategy = keystone
quantum_admin_tenant_name = admin
quantum_url = http://{{ head_node.ansible_default_ipv4.address }}:9696/
#libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtOpenVswitchDriver
#security_group_api=quantum
#firewall_driver=nova.virt.firewall.NoopFirewallDriver
service_quantum_metadata_proxy=true
libvirt_cpu_mode=host-passthrough

# Change my_ip to match each host
my_ip={{ ansible_default_ipv4.address }}

# NOVNC CONSOLE
novncproxy_base_url=http://{{ head_node.ansible_default_ipv4.address }}:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address={{ ansible_default_ipv4.address }}
vncserver_listen={{ ansible_default_ipv4.address }}
libvirt_cpu_mode=none

# AUTHENTICATION
auth_strategy = keystone
[keystone_authtoken]
admin_tenant_name = admin
admin_user = admin
admin_password = {{ admin_pass }}
auth_host = {{ head_node.ansible_default_ipv4.address }}
auth_port = 35357
auth_protocol = http
signing_dir = /tmp/keystone-signing-nova
--------------------------------------------------------------------------------
/roles/controller/templates/nova.conf.j2:
--------------------------------------------------------------------------------
{% set head_node = hostvars[groups['openstack_controller'][0]] %}
[DEFAULT]

# LOGS/STATE
verbose=True
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
rootwrap_config=/etc/nova/rootwrap.conf

# SCHEDULER
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

# VOLUMES
volume_api_class=nova.volume.cinder.API

# DATABASE
sql_connection=mysql://nova:{{ nova_db_pass }}@localhost/nova

# COMPUTE
libvirt_type={{ nova_libvirt_type }}
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini

# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True

# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host={{ head_node.ansible_default_ipv4.address }}
s3_host={{ head_node.ansible_default_ipv4.address }}

# QPID
rpc_backend=nova.rpc.impl_qpid
qpid_hostname={{ head_node.ansible_default_ipv4.address }}

# GLANCE
image_service=nova.image.glance.GlanceImageService
glance_api_servers={{ head_node.ansible_default_ipv4.address }}:9292

# NETWORK
network_api_class = nova.network.quantumv2.api.API
quantum_admin_username = admin
quantum_admin_password = {{ admin_pass }}
quantum_admin_auth_url = http://{{ head_node.ansible_default_ipv4.address }}:35357/v2.0
quantum_auth_strategy = keystone
quantum_admin_tenant_name = admin
quantum_url = http://{{ head_node.ansible_default_ipv4.address }}:9696/
#libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtOpenVswitchDriver
#security_group_api=quantum
#firewall_driver=nova.virt.firewall.NoopFirewallDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
service_quantum_metadata_proxy=true
libvirt_cpu_mode=host-passthrough

# Change my_ip to match each host
my_ip={{ ansible_default_ipv4.address }}

# NOVNC CONSOLE
novncproxy_base_url=http://{{ head_node.ansible_default_ipv4.address }}:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address={{ ansible_default_ipv4.address }}
vncserver_listen={{ ansible_default_ipv4.address }}
libvirt_cpu_mode=none

# AUTHENTICATION
auth_strategy = keystone
[keystone_authtoken]
admin_tenant_name = admin
admin_user = admin
admin_password = {{ admin_pass }}
auth_host = {{ head_node.ansible_default_ipv4.address }}
auth_port = 35357
auth_protocol = http
signing_dir = /tmp/keystone-signing-nova
--------------------------------------------------------------------------------
/roles/common/tasks/main.yml:
--------------------------------------------------------------------------------
---
# The common tasks

- name: copy yum repo files
  copy: src={{ item }} dest=/etc/yum.repos.d/
  with_items:
    - epel.repo
    - epel-openstack-grizzly.repo

- name: Create the GPG key for EPEL
  copy: src=RPM-GPG-KEY-EPEL-6 dest=/etc/pki/rpm-gpg

- name: Install required packages
  yum: name={{ item }} state=installed
  with_items:
    - ntp
    - sudo
    - scsi-target-utils
    - dbus
    - libselinux-python
    - openssl-devel
  notify: restart services

- name: Remove previous openvswitch if installed
  yum: name=openvswitch-1.9.0 state=absent

- name: Check if custom Openvswitch is installed
  shell: rpm -qa | grep kmod-openvswitch-1.10 | cut -d- -f3
  register: vswitch_version

- name: Install Development tools
  shell: yum -y groupinstall "Development Tools"
  when: vswitch_version.stdout != '1.10.0'

- name: Create the directories for compiling the openvswitch source
  file: path=/root/rpmbuild/SOURCES state=directory

- name: Get the Openvswitch stable release
  get_url: url=http://openvswitch.org/releases/openvswitch-1.10.0.tar.gz dest=/root/rpmbuild/SOURCES/
  when: vswitch_version.stdout != '1.10.0'

- name: Copy the kmod spec file
  copy: src=openvswitch-kmod.files dest=/root/rpmbuild/SOURCES/

- name: Copy the spec files for rpmbuild
  copy: src={{ item }} dest=/root
  with_items:
    - openvswitch-kmod-rhel6.spec
    - openvswitch.spec

- name: Build the Openvswitch kernel module.
  shell: chdir=/root rpmbuild -bb /root/openvswitch-kmod-rhel6.spec
         creates=/root/rpmbuild/RPMS/x86_64/kmod-openvswitch-1.10.0-1.el6.x86_64.rpm

- name: Build the Openvswitch userspace application.
  shell: chdir=/root rpmbuild -bb /root/openvswitch.spec
         creates=/root/rpmbuild/RPMS/x86_64/openvswitch-1.10.0-1.x86_64.rpm

- name: Install the custom vswitch kernel rpm.
  shell: yum -y localinstall /root/rpmbuild/RPMS/x86_64/kmod-openvswitch-1.10.0-1.el6.x86_64.rpm
         creates=/lib/modules/2.6.32-358.2.1.el6.x86_64/extra/openvswitch/openvswitch.ko

- name: Install the custom openvswitch rpm
  shell: yum -y localinstall /root/rpmbuild/RPMS/x86_64/openvswitch-1.10.0-1.x86_64.rpm
         creates=/usr/share/doc/openvswitch-1.10.90/FAQ

- name: Create the hosts entry.
  template: src=hosts.j2 dest=/etc/hosts

- name: start the openvswitch service.
  service: name=openvswitch state=started

- name: Disable selinux dynamically
  shell: creates=/etc/sysconfig/selinux.disabled setenforce 0 ; touch /etc/sysconfig/selinux.disabled

- name: Disable SELinux in conf file
  copy: src=selinux dest=/etc/sysconfig/selinux

- name: Disable iptables
  service: name=iptables state=stopped enabled=no
--------------------------------------------------------------------------------
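After the common role runs, the rebuilt Open vSwitch should be loaded and its daemon answering on every node. A quick check:

    lsmod | grep openvswitch
    ovs-vsctl show    # prints the bridge layout; br-int/br-ex appear once the role-specific tasks run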
/roles/controller/tasks/quantum.yml:
--------------------------------------------------------------------------------
---
# Tasks for the quantum controller node

- name: Install packages for quantum
  yum: name={{ item }} state=installed
  with_items:
    - openstack-quantum
    - openstack-quantum-openvswitch

- name: Enable ipv4 forwarding in the host
  sysctl: name=net.ipv4.ip_forward value=1 reload=yes

- name: Setup DB for quantum
  shell: /usr/bin/quantum-server-setup -q {{ quantum_db_pass }} -r " " -u quantum --plugin openvswitch -y
         creates=/var/lib/mysql/ovs_quantum

- name: Give rights to the quantum user (localhost)
  mysql_user: name=quantum password={{ quantum_db_pass }} priv=*.*:ALL host='localhost' state=present

- name: Give rights to the quantum user (any host)
  mysql_user: name=quantum password={{ quantum_db_pass }} priv=*.*:ALL host='%' state=present

- name: Copy the quantum.conf configuration files
  template: src=quantum.conf.j2 dest=/etc/quantum/quantum.conf
  notify: restart quantum
  tags: test

- name: Copy the quantum dhcp agent configuration files
  template: src=dhcp_agent.ini.j2 dest=/etc/quantum/dhcp_agent.ini
  notify: restart quantum

- name: Copy the quantum ovs agent configuration files
  template: src=ovs_quantum_plugin.ini.j2 dest=/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
  notify: restart quantum

- name: Create the external bridges for openvswitch
  shell: /usr/bin/ovs-vsctl add-br br-ex; touch /etc/quantum/br-ex.created
         creates=/etc/quantum/br-ex.created

- name: Create the internal bridges for openvswitch
  shell: /usr/bin/ovs-vsctl add-br br-int; touch /etc/quantum/br-int.created
         creates=/etc/quantum/br-int.created

- name: Add the interface for the external bridge
  shell: /usr/bin/ovs-vsctl add-port br-ex {{ quantum_external_interface }}; touch /etc/quantum/br-ext.interface
         creates=/etc/quantum/br-ext.interface

- name: copy configuration file for nova
  template: src=nova.conf.j2 dest=/etc/nova/nova.conf
  notify: restart nova

- name: Start the quantum services
  service: name={{ item }} state=started enabled=yes
  with_items:
    - quantum-server
    - quantum-dhcp-agent
    - quantum-openvswitch-agent

- local_action: pause seconds=20

- name: create the external network
  quantum_network: state=present login_username={{ admin_tenant_user }} login_password={{ admin_pass }} provider_network_type=local
                   login_tenant_name={{ admin_tenant }} name={{ external_network_name }} router_external=true
  register: network

- name: create external router
  quantum_router: state=present login_username={{ admin_tenant_user }} login_password={{ admin_pass }}
                  login_tenant_name={{ admin_tenant }} name={{ external_router_name }}
  register: router

- name: create the subnet for external network
  quantum_subnet: state=present login_username={{ admin_tenant_user }} login_password={{ admin_pass }}
                  login_tenant_name={{ admin_tenant }} enable_dhcp=false network_name={{ external_network_name }}
                  name={{ external_subnet_name }} cidr={{ external_subnet_cidr }}

- name: Copy the quantum l3 agent configuration files
  template: src=l3_agent.ini.j2 dest=/etc/quantum/l3_agent.ini
  notify: restart quantum

- name: Start the quantum l3 services
  service: name=quantum-l3-agent state=started enabled=yes

- name: create external interface for router
  quantum_router_gateway: state=present login_username={{ admin_tenant_user }} login_password={{ admin_pass }}
                          login_tenant_name={{ admin_tenant }} router_name={{ external_router_name }}
                          network_name={{ external_network_name }}
--------------------------------------------------------------------------------
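quantum.yml should leave behind one external network, subnet and router. A verification sketch using the grizzly-era quantum client on the controller:

    source /root/keystonerc
    quantum net-list
    quantum subnet-list
    quantum router-list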
keystone.service:admin_app_factory 87 | 88 | [pipeline:public_api] 89 | pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug ec2_extension user_crud_extension public_service 90 | 91 | [pipeline:admin_api] 92 | pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug stats_reporting ec2_extension s3_extension crud_extension admin_service 93 | 94 | [pipeline:api_v3] 95 | pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug stats_reporting ec2_extension s3_extension service_v3 96 | 97 | [app:public_version_service] 98 | paste.app_factory = keystone.service:public_version_app_factory 99 | 100 | [app:admin_version_service] 101 | paste.app_factory = keystone.service:admin_version_app_factory 102 | 103 | [pipeline:public_version_api] 104 | pipeline = access_log sizelimit stats_monitoring url_normalize xml_body public_version_service 105 | 106 | [pipeline:admin_version_api] 107 | pipeline = access_log sizelimit stats_monitoring url_normalize xml_body admin_version_service 108 | 109 | [composite:main] 110 | use = egg:Paste#urlmap 111 | /v2.0 = public_api 112 | /v3 = api_v3 113 | / = public_version_api 114 | 115 | [composite:admin] 116 | use = egg:Paste#urlmap 117 | /v2.0 = admin_api 118 | /v3 = api_v3 119 | / = admin_version_api 120 | -------------------------------------------------------------------------------- /roles/common/files/openvswitch.spec: -------------------------------------------------------------------------------- 1 | # Generated automatically -- do not modify! -*- buffer-read-only: t -*- 2 | # Spec file for Open vSwitch on Red Hat Enterprise Linux. 3 | 4 | # Copyright (C) 2009, 2010, 2011, 2012 Nicira, Inc. 5 | # 6 | # Copying and distribution of this file, with or without modification, 7 | # are permitted in any medium without royalty provided the copyright 8 | # notice and this notice are preserved. This file is offered as-is, 9 | # without warranty of any kind. 10 | 11 | Name: openvswitch 12 | Summary: Open vSwitch daemon/database/utilities 13 | Group: System Environment/Daemons 14 | URL: http://www.openvswitch.org/ 15 | Vendor: Nicira, Inc. 16 | Version: 1.10.0 17 | 18 | License: ASL 2.0 19 | Release: 1 20 | Source: openvswitch-%{version}.tar.gz 21 | Buildroot: /tmp/openvswitch-rpm 22 | Requires: openvswitch-kmod, logrotate, python 23 | 24 | %description 25 | Open vSwitch provides standard network bridging functions and 26 | support for the OpenFlow protocol for remote per-flow control of 27 | traffic. 
28 | 29 | %prep 30 | %setup -q 31 | 32 | %build 33 | ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=%{_localstatedir} --enable-ssl 34 | make %{_smp_mflags} 35 | 36 | %install 37 | rm -rf $RPM_BUILD_ROOT 38 | make install DESTDIR=$RPM_BUILD_ROOT 39 | 40 | rhel_cp() { 41 | base=$1 42 | mode=$2 43 | dst=$RPM_BUILD_ROOT/$(echo $base | sed 's,_,/,g') 44 | install -D -m $mode rhel/$base $dst 45 | } 46 | rhel_cp etc_init.d_openvswitch 0755 47 | rhel_cp etc_logrotate.d_openvswitch 0644 48 | rhel_cp etc_sysconfig_network-scripts_ifup-ovs 0755 49 | rhel_cp etc_sysconfig_network-scripts_ifdown-ovs 0755 50 | rhel_cp usr_share_openvswitch_scripts_sysconfig.template 0644 51 | 52 | docdir=$RPM_BUILD_ROOT/usr/share/doc/openvswitch-%{version} 53 | install -d -m755 "$docdir" 54 | install -m 0644 FAQ rhel/README.RHEL "$docdir" 55 | install python/compat/uuid.py $RPM_BUILD_ROOT/usr/share/openvswitch/python 56 | install python/compat/argparse.py $RPM_BUILD_ROOT/usr/share/openvswitch/python 57 | 58 | # Get rid of stuff we don't want to make RPM happy. 59 | rm \ 60 | $RPM_BUILD_ROOT/usr/bin/ovs-controller \ 61 | $RPM_BUILD_ROOT/usr/share/man/man8/ovs-controller.8 \ 62 | $RPM_BUILD_ROOT/usr/bin/ovs-test \ 63 | $RPM_BUILD_ROOT/usr/bin/ovs-l3ping \ 64 | $RPM_BUILD_ROOT/usr/share/man/man8/ovs-test.8 \ 65 | $RPM_BUILD_ROOT/usr/share/man/man8/ovs-l3ping.8 \ 66 | $RPM_BUILD_ROOT/usr/sbin/ovs-vlan-bug-workaround \ 67 | $RPM_BUILD_ROOT/usr/share/man/man8/ovs-vlan-bug-workaround.8 68 | 69 | install -d -m 755 $RPM_BUILD_ROOT/var/lib/openvswitch 70 | 71 | %clean 72 | rm -rf $RPM_BUILD_ROOT 73 | 74 | %post 75 | # Create default or update existing /etc/sysconfig/openvswitch. 76 | SYSCONFIG=/etc/sysconfig/openvswitch 77 | TEMPLATE=/usr/share/openvswitch/scripts/sysconfig.template 78 | if [ ! -e $SYSCONFIG ]; then 79 | cp $TEMPLATE $SYSCONFIG 80 | else 81 | for var in $(awk -F'[ :]' '/^# [_A-Z0-9]+:/{print $2}' $TEMPLATE) 82 | do 83 | if ! 
grep $var $SYSCONFIG >/dev/null 2>&1; then 84 | echo >> $SYSCONFIG 85 | sed -n "/$var:/,/$var=/p" $TEMPLATE >> $SYSCONFIG 86 | fi 87 | done 88 | fi 89 | 90 | # Ensure all required services are set to run 91 | /sbin/chkconfig --add openvswitch 92 | /sbin/chkconfig openvswitch on 93 | 94 | %preun 95 | if [ "$1" = "0" ]; then # $1 = 0 for uninstall 96 | /sbin/service openvswitch stop 97 | /sbin/chkconfig --del openvswitch 98 | fi 99 | 100 | %postun 101 | if [ "$1" = "0" ]; then # $1 = 0 for uninstall 102 | rm -f /etc/openvswitch/conf.db 103 | rm -f /etc/sysconfig/openvswitch 104 | rm -f /etc/openvswitch/vswitchd.cacert 105 | fi 106 | 107 | exit 0 108 | 109 | %files 110 | %defattr(-,root,root) 111 | /etc/init.d/openvswitch 112 | %config(noreplace) /etc/logrotate.d/openvswitch 113 | /etc/sysconfig/network-scripts/ifup-ovs 114 | /etc/sysconfig/network-scripts/ifdown-ovs 115 | /usr/bin/ovs-appctl 116 | /usr/bin/ovs-benchmark 117 | /usr/bin/ovs-dpctl 118 | /usr/bin/ovs-ofctl 119 | /usr/bin/ovs-parse-backtrace 120 | /usr/bin/ovs-parse-leaks 121 | /usr/bin/ovs-pcap 122 | /usr/bin/ovs-pki 123 | /usr/bin/ovs-tcpundump 124 | /usr/bin/ovs-vlan-test 125 | /usr/bin/ovs-vsctl 126 | /usr/bin/ovsdb-client 127 | /usr/bin/ovsdb-tool 128 | /usr/sbin/ovs-bugtool 129 | /usr/sbin/ovs-vswitchd 130 | /usr/sbin/ovsdb-server 131 | /usr/share/man/man1/ovs-benchmark.1.gz 132 | /usr/share/man/man1/ovs-pcap.1.gz 133 | /usr/share/man/man1/ovs-tcpundump.1.gz 134 | /usr/share/man/man1/ovsdb-client.1.gz 135 | /usr/share/man/man1/ovsdb-server.1.gz 136 | /usr/share/man/man1/ovsdb-tool.1.gz 137 | /usr/share/man/man5/ovs-vswitchd.conf.db.5.gz 138 | /usr/share/man/man8/ovs-appctl.8.gz 139 | /usr/share/man/man8/ovs-bugtool.8.gz 140 | /usr/share/man/man8/ovs-ctl.8.gz 141 | /usr/share/man/man8/ovs-dpctl.8.gz 142 | /usr/share/man/man8/ovs-ofctl.8.gz 143 | /usr/share/man/man8/ovs-parse-backtrace.8.gz 144 | /usr/share/man/man8/ovs-parse-leaks.8.gz 145 | /usr/share/man/man8/ovs-pki.8.gz 146 | /usr/share/man/man8/ovs-vlan-test.8.gz 147 | /usr/share/man/man8/ovs-vsctl.8.gz 148 | /usr/share/man/man8/ovs-vswitchd.8.gz 149 | /usr/share/openvswitch/bugtool-plugins/ 150 | /usr/share/openvswitch/python/ 151 | /usr/share/openvswitch/scripts/ovs-bugtool-* 152 | /usr/share/openvswitch/scripts/ovs-check-dead-ifs 153 | /usr/share/openvswitch/scripts/ovs-ctl 154 | /usr/share/openvswitch/scripts/ovs-lib 155 | /usr/share/openvswitch/scripts/ovs-save 156 | /usr/share/openvswitch/scripts/sysconfig.template 157 | /usr/share/openvswitch/vswitch.ovsschema 158 | /usr/share/doc/openvswitch-%{version}/FAQ 159 | /usr/share/doc/openvswitch-%{version}/README.RHEL 160 | /var/lib/openvswitch 161 | -------------------------------------------------------------------------------- /ansible.cfg: -------------------------------------------------------------------------------- 1 | # config file for ansible -- http://ansible.github.com 2 | # nearly all parameters can be overridden in ansible-playbook or with command line flags 3 | # ansible will read ~/.ansible.cfg or /etc/ansible/ansible.cfg, whichever it finds first 4 | 5 | [defaults] 6 | 7 | # location of inventory file, eliminates need to specify -i 8 | 9 | hostfile = /etc/ansible/hosts 10 | 11 | # location of ansible library, eliminates need to specify --module-path 12 | 13 | library = /usr/share/ansible 14 | 15 | # default module name used in /usr/bin/ansible when -m is not specified 16 | 17 | module_name = command 18 | 19 | # location for ansible log file. 
20 | # and ansible-playbook. If enabling, you may wish to configure
21 | # logrotate.
22 | 
23 | #log_path = /var/log/ansible.log
24 | 
25 | # home directory where temp files are stored on remote systems. Should
26 | # almost always contain $HOME or be a directory writeable by all users
27 | 
28 | remote_tmp = $HOME/.ansible/tmp
29 | 
30 | # the default pattern for ansible-playbooks ("hosts:")
31 | 
32 | pattern = *
33 | 
34 | # the default number of forks (parallelism) to be used. Usually you
35 | # can crank this up.
36 | 
37 | forks=5
38 | 
39 | # the timeout used by various connection types. Usually this corresponds
40 | # to an SSH timeout
41 | 
42 | timeout=10
43 | 
44 | # when using --poll or "poll:" in an ansible playbook, and not specifying
45 | # an explicit poll interval, use this interval
46 | 
47 | poll_interval=15
48 | 
49 | # when specifying --sudo to /usr/bin/ansible or "sudo:" in a playbook,
50 | # and not specifying "--sudo-user" or "sudo_user" respectively, sudo
51 | # to this user account
52 | 
53 | sudo_user=root
54 | 
55 | # the following forces ansible to always ask for the sudo password (instead of having
56 | # to add -K to the commandline). Or you can use the environment variable (ANSIBLE_ASK_SUDO_PASS)
57 | 
58 | #ask_sudo_pass=True
59 | 
60 | # the following forces ansible to always ask for the ssh-password (-k)
61 | # can also be set by the environment variable ANSIBLE_ASK_PASS
62 | 
63 | #ask_pass=True
64 | 
65 | # connection to use when -c is not specified
66 | 
67 | transport=paramiko
68 | 
69 | # remote SSH port to be used when --port or "port:" or an equivalent inventory
70 | # variable is not specified.
71 | 
72 | remote_port=22
73 | 
74 | # if set, always run /usr/bin/ansible commands as this user, and assume this value
75 | # if "user:" is not set in a playbook. If not set, use the current Unix user
76 | # as the default
77 | 
78 | #remote_user=root
79 | 
80 | # the default sudo executable. If a sudo alternative with a sudo-compatible interface
81 | # is used, specify its executable name as the default
82 | 
83 | sudo_exe=sudo
84 | 
85 | # the default flags passed to sudo
86 | # sudo_flags=-H
87 | 
88 | # all commands executed under sudo are passed as arguments to a shell command
89 | # This shell command defaults to /bin/sh
90 | # Changing this helps the situation where a user is only allowed to run
91 | # e.g. /bin/bash with sudo privileges
92 | 
93 | # executable = /bin/sh
94 | 
95 | # how to handle hashes defined in several places
96 | # hashes can be merged or replaced
97 | # if you use replace, and have multiple hashes named 'x', the last defined
98 | # will override the previously defined one
99 | # if you use merge here, hashes will accumulate their keys, but keys will still
100 | # override each other
101 | # replace is the default value, and is how ansible has always handled hash variables
102 | #
103 | # hash_behaviour=replace
104 | 
105 | # How to handle variable replacement - as of 1.2, Jinja2 variable syntax is
106 | # preferred, but we still support the old $variable replacement too.
107 | # If you change legacy_playbook_variables to no then Ansible will no longer
108 | # try to do replacement on $variable style variables.
109 | #
110 | # legacy_playbook_variables=yes
111 | 
112 | # if you need to use jinja2 extensions, you can list them here
113 | # use a comma to separate extensions, e.g.:
114 | # jinja2_extensions=jinja2.ext.do,jinja2.ext.i18n
115 | # no extensions are loaded by default
116 | 
117 | #jinja2_extensions=
118 | 
119 | # if set, always use this private key file for authentication, same as if passing
120 | # --private-key to ansible or ansible-playbook
121 | 
122 | #private_key_file=/path/to/file
123 | 
124 | # format of string $ansible_managed available within Jinja2 templates, replacing
125 | # {file}, {host} and {uid} with template filename, host and owner respectively.
126 | # The resulting string is passed through strftime(3) so it may contain any
127 | # time-formatting specifiers.
128 | #
129 | # Example: ansible_managed = DONT TOUCH {file}: call {uid} at {host} for changes
130 | ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
131 | 
132 | # additional plugin paths for non-core plugins
133 | 
134 | action_plugins = /usr/share/ansible_plugins/action_plugins
135 | callback_plugins = /usr/share/ansible_plugins/callback_plugins
136 | connection_plugins = /usr/share/ansible_plugins/connection_plugins
137 | lookup_plugins = /usr/share/ansible_plugins/lookup_plugins
138 | vars_plugins = /usr/share/ansible_plugins/vars_plugins
139 | filter_plugins = /usr/share/ansible_plugins/filter_plugins
140 | 
141 | # set to 1 if you don't want cowsay support. Alternatively, set ANSIBLE_NOCOWS=1
142 | # in your environment
143 | # nocows = 1
144 | 
145 | [paramiko_connection]
146 | 
147 | # nothing to configure yet
148 | 
149 | [ssh_connection]
150 | 
151 | # if uncommented, sets the ansible ssh arguments to the following. Leaving off ControlPersist
152 | # will result in poor performance, so use transport=paramiko on older platforms rather than
153 | # removing it
154 | 
155 | ssh_args=-o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r
156 | 
157 | # the following makes ansible use scp if the connection type is ssh (default is sftp)
158 | 
159 | #scp_if_ssh=True
160 | 
161 | 
--------------------------------------------------------------------------------
/roles/controller/templates/keystone_data.sh.j2:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | 
3 | # Copyright 2013 OpenStack LLC
4 | #
5 | # Licensed under the Apache License, Version 2.0 (the "License"); you may
6 | # not use this file except in compliance with the License. You may obtain
7 | # a copy of the License at
8 | #
9 | #     http://www.apache.org/licenses/LICENSE-2.0
10 | #
11 | # Unless required by applicable law or agreed to in writing, software
12 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
13 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
14 | # License for the specific language governing permissions and limitations
15 | # under the License.
16 | 
17 | CONTROLLER_PUBLIC_ADDRESS={{ ansible_default_ipv4.address }}
18 | CONTROLLER_ADMIN_ADDRESS={{ ansible_default_ipv4.address }}
19 | CONTROLLER_INTERNAL_ADDRESS={{ ansible_default_ipv4.address }}
20 | 
21 | TOOLS_DIR=$(cd $(dirname "$0") && pwd)
22 | KEYSTONE_CONF=${KEYSTONE_CONF:-/etc/keystone/keystone.conf}
23 | if [[ -r "$KEYSTONE_CONF" ]]; then
24 |     EC2RC="$(dirname "$KEYSTONE_CONF")/ec2rc"
25 | elif [[ -r "$TOOLS_DIR/../etc/keystone.conf" ]]; then
26 |     # assume git checkout
27 |     KEYSTONE_CONF="$TOOLS_DIR/../etc/keystone.conf"
28 |     EC2RC="$TOOLS_DIR/../etc/ec2rc"
29 | else
30 |     KEYSTONE_CONF=""
31 |     EC2RC="ec2rc"
32 | fi
33 | 
34 | # Extract some info from Keystone's configuration file
35 | if [[ -r "$KEYSTONE_CONF" ]]; then
36 |     CONFIG_SERVICE_TOKEN=$(sed 's/[[:space:]]//g' $KEYSTONE_CONF | grep ^admin_token= | cut -d'=' -f2)
37 |     CONFIG_ADMIN_PORT=$(sed 's/[[:space:]]//g' $KEYSTONE_CONF | grep ^admin_port= | cut -d'=' -f2)
38 | fi
39 | 
40 | export SERVICE_TOKEN=${SERVICE_TOKEN:-$CONFIG_SERVICE_TOKEN}
41 | if [[ -z "$SERVICE_TOKEN" ]]; then
42 |     echo "No service token found."
43 |     echo "Set SERVICE_TOKEN manually from keystone.conf admin_token."
44 |     exit 1
45 | fi
46 | 
47 | export SERVICE_ENDPOINT=${SERVICE_ENDPOINT:-http://$CONTROLLER_PUBLIC_ADDRESS:${CONFIG_ADMIN_PORT:-35357}/v2.0}
48 | 
49 | function get_id () {
50 |     echo `"$@" | grep ' id ' | awk '{print $4}'`
51 | }
52 | 
53 | #
54 | # Default tenant
55 | #
56 | ADMIN_TENANT=$(get_id keystone tenant-create --name={{ admin_tenant }} \
57 |     --description "Admin Tenant")
58 | 
59 | ADMIN_USER=$(get_id keystone user-create --name={{ admin_tenant_user }} \
60 |     --pass={{ admin_pass }})
61 | 
62 | ADMIN_ROLE=$(get_id keystone role-create --name=admin)
63 | 
64 | keystone user-role-add --user-id $ADMIN_USER \
65 |     --role-id $ADMIN_ROLE \
66 |     --tenant-id $ADMIN_TENANT
67 | 
68 | 
69 | #
70 | # Keystone service
71 | #
72 | KEYSTONE_SERVICE=$(get_id \
73 |     keystone service-create --name=keystone \
74 |     --type=identity \
75 |     --description="Keystone Identity Service")
76 | if [[ -z "$DISABLE_ENDPOINTS" ]]; then
77 |     keystone endpoint-create --region RegionOne --service-id $KEYSTONE_SERVICE \
78 |         --publicurl "http://$CONTROLLER_PUBLIC_ADDRESS:\$(public_port)s/v2.0" \
79 |         --adminurl "http://$CONTROLLER_ADMIN_ADDRESS:\$(admin_port)s/v2.0" \
80 |         --internalurl "http://$CONTROLLER_INTERNAL_ADDRESS:\$(public_port)s/v2.0"
81 | fi
82 | 
83 | #
84 | # Nova service
85 | #
86 | NOVA_SERVICE=$(get_id \
87 |     keystone service-create --name=nova \
88 |     --type=compute \
89 |     --description="Nova Compute Service")
90 | if [[ -z "$DISABLE_ENDPOINTS" ]]; then
91 |     keystone endpoint-create --region RegionOne --service-id $NOVA_SERVICE \
92 |         --publicurl "http://$CONTROLLER_PUBLIC_ADDRESS:\$(compute_port)s/v1.1/\$(tenant_id)s" \
93 |         --adminurl "http://$CONTROLLER_ADMIN_ADDRESS:\$(compute_port)s/v1.1/\$(tenant_id)s" \
94 |         --internalurl "http://$CONTROLLER_INTERNAL_ADDRESS:\$(compute_port)s/v1.1/\$(tenant_id)s"
95 | fi
96 | 
97 | #
98 | # Volume service
99 | #
100 | VOLUME_SERVICE=$(get_id \
101 |     keystone service-create --name=cinder \
102 |     --type=volume \
103 |     --description="Cinder Volume Service")
104 | if [[ -z "$DISABLE_ENDPOINTS" ]]; then
105 |     keystone endpoint-create --region RegionOne --service-id $VOLUME_SERVICE \
106 |         --publicurl "http://$CONTROLLER_PUBLIC_ADDRESS:8776/v1/\$(tenant_id)s" \
107 |         --adminurl "http://$CONTROLLER_ADMIN_ADDRESS:8776/v1/\$(tenant_id)s" \
108 |         --internalurl "http://$CONTROLLER_INTERNAL_ADDRESS:8776/v1/\$(tenant_id)s"
109 | fi
110 | 
111 | #
112 | # Image service
113 | #
114 | GLANCE_SERVICE=$(get_id \
115 |     keystone service-create --name=glance \
116 |     --type=image \
117 |     --description="Glance Image Service")
118 | if [[ -z "$DISABLE_ENDPOINTS" ]]; then
119 |     keystone endpoint-create --region RegionOne --service-id $GLANCE_SERVICE \
120 |         --publicurl "http://$CONTROLLER_PUBLIC_ADDRESS:9292" \
121 |         --adminurl "http://$CONTROLLER_ADMIN_ADDRESS:9292" \
122 |         --internalurl "http://$CONTROLLER_INTERNAL_ADDRESS:9292"
123 | fi
124 | #
125 | # Quantum service
126 | #
127 | QUANTUM_SERVICE=$(get_id \
128 |     keystone service-create --name=quantum \
129 |     --type=network \
130 |     --description="Quantum Network Service")
131 | if [[ -z "$DISABLE_ENDPOINTS" ]]; then
132 |     keystone endpoint-create --region RegionOne --service-id $QUANTUM_SERVICE \
133 |         --publicurl "http://$CONTROLLER_PUBLIC_ADDRESS:9696" \
134 |         --adminurl "http://$CONTROLLER_ADMIN_ADDRESS:9696" \
135 |         --internalurl "http://$CONTROLLER_INTERNAL_ADDRESS:9696"
136 | fi
137 | 
138 | #
139 | # EC2 service
140 | #
141 | EC2_SERVICE=$(get_id \
142 |     keystone service-create --name=ec2 \
143 |     --type=ec2 \
144 |     --description="EC2 Compatibility Layer")
145 | if [[ -z "$DISABLE_ENDPOINTS" ]]; then
146 |     keystone endpoint-create --region RegionOne --service-id $EC2_SERVICE \
147 |         --publicurl "http://$CONTROLLER_PUBLIC_ADDRESS:8773/services/Cloud" \
148 |         --adminurl "http://$CONTROLLER_ADMIN_ADDRESS:8773/services/Admin" \
149 |         --internalurl "http://$CONTROLLER_INTERNAL_ADDRESS:8773/services/Cloud"
150 | fi
151 | 
152 | #
153 | # Swift service
154 | #
155 | SWIFT_SERVICE=$(get_id \
156 |     keystone service-create --name=swift \
157 |     --type="object-store" \
158 |     --description="Swift Service")
159 | if [[ -z "$DISABLE_ENDPOINTS" ]]; then
160 |     keystone endpoint-create --region RegionOne --service-id $SWIFT_SERVICE \
161 |         --publicurl "http://$CONTROLLER_PUBLIC_ADDRESS:8888/v1/AUTH_\$(tenant_id)s" \
162 |         --adminurl "http://$CONTROLLER_ADMIN_ADDRESS:8888/v1" \
163 |         --internalurl "http://$CONTROLLER_INTERNAL_ADDRESS:8888/v1/AUTH_\$(tenant_id)s"
164 | fi
165 | 
166 | # create ec2 creds and parse the secret and access key returned
167 | RESULT=$(keystone ec2-credentials-create --tenant-id=$ADMIN_TENANT --user-id=$ADMIN_USER)
168 | ADMIN_ACCESS=`echo "$RESULT" | grep access | awk '{print $4}'`
169 | ADMIN_SECRET=`echo "$RESULT" | grep secret | awk '{print $4}'`
170 | 
171 | # write the secret and access to ec2rc
172 | cat > $EC2RC <<EOF
173 | ADMIN_ACCESS=$ADMIN_ACCESS
174 | ADMIN_SECRET=$ADMIN_SECRET
175 | EOF
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
378 | ... can be verified by going to the "Project -> Network Topology" tab.
379 | 
380 | ![Alt text](/images/os_networks.png "networks")
381 | 
382 | 
383 | ### Creating a VM for the Tenant
384 | 
385 | To create a VM for a tenant, we can issue the following command. The command
386 | creates a new VM for tenant1. The VM is attached to the network created above,
387 | t1net, and the Ansible user's public key is injected into the node so that
388 | the VM is ready to be managed by Ansible.
389 | 
390 |     ansible-playbook -i hosts playbooks/vm.yml -e "tenant_name=tenant1 tenant_username=t1admin tenant_password=abc network_name=t1net vm_name=t1vm flavor_id=6 keypair_name=t1keypair image_name=cirros"
391 | 
392 | Once created, the VM can be verified by going to the "Instances" tab in Horizon.
393 | 
394 | ![Alt text](/images/os_instances.png "Instances")
395 | 
396 | We can also get the console of the VM from the Horizon UI by clicking on the
397 | "Console" drop-down menu.
398 | 
399 | ![Alt text](/images/os_vnc.png "console")
400 | 
401 | 
402 | ### Managing OpenStack VMs Dynamically from Ansible
403 | 
404 | There are multiple ways to manage the VMs deployed in OpenStack via Ansible.
405 | 
406 | 1. Use the inventory script. While creating the virtual machine, pass a
407 | metadata parameter to the VM specifying the group to which it should belong;
408 | the inventory script will then generate an inventory with the hosts arranged
409 | by their group name.
410 | 
411 | 2. If the requirement is to deploy and manage the virtual machines from the
412 | same playbook, write a play that creates the virtual machines and adds them
413 | dynamically to the in-memory inventory with the 'add_host' module. Then write
414 | another play targeting the group name defined in add_host and implement the
415 | tasks to be carried out on those new VMs (see the sketch below).
416 | 
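417 | A rough sketch of the second approach is shown below. It is illustrative
418 | only and is not a playbook shipped in this repository: the 'nova_compute'
419 | parameters and the 'info.accessIPv4' return field are assumptions based on
420 | the Ansible 1.x OpenStack modules, and names such as 'web1', 'launched',
421 | and the image and flavor values are placeholders for your environment.
422 | 
423 |     - hosts: localhost
424 |       gather_facts: no
425 |       tasks:
426 |         # Boot the VM and tag it with a group in its metadata, so the
427 |         # OpenStack dynamic inventory script (approach 1) can group it too.
428 |         - name: boot a VM carrying a group tag in its metadata
429 |           nova_compute: state=present login_username=t1admin login_password=abc
430 |                         login_tenant_name=tenant1 name=web1 image_id={{ image_id }}
431 |                         flavor_id=6 key_name=t1keypair meta=group=webservers wait=yes
432 |           register: vm
433 | 
434 |         # 'info.accessIPv4' is an assumption; inspect the registered
435 |         # variable on your installation to find the address field.
436 |         - name: add the new VM to an in-memory inventory group
437 |           add_host: name={{ vm.info.accessIPv4 }} groups=launched
438 | 
439 |     - hosts: launched
440 |       remote_user: cirros
441 |       tasks:
442 |         - name: run a task on the freshly booted VM
443 |           command: uptime
444 | 
--------------------------------------------------------------------------------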