├── Cluster-deployment-collapsed.png ├── Cluster-deployment-segregated.png ├── HA-keepalived.md ├── HA_Architecture-full.png ├── README.md ├── build-all.sh ├── full-stack.md ├── ha-openstack.md ├── iha-install-10.sh ├── iha-install-9.sh ├── iha-uninstall.sh ├── keepalived ├── Controller-network.jpg ├── Highlevelarch.jpg ├── Keepalived-arch.jpg ├── Mariadb-haproxy.jpg ├── Rabbitmq_clustering.jpg ├── ceilometer-config.md ├── cinder-config.md ├── compute-node.md ├── controller-node.md ├── galera-bootstrap.md ├── galera-config.md ├── glance-config.md ├── haproxy-config.md ├── heat-config.md ├── horizon-config.md ├── keepalived-config.md ├── keystone-config.md ├── memcached-config.md ├── mongodb-config.md ├── mongodb-recovery.md ├── neutron-config.md ├── nova-config.md ├── phd-setup │ ├── ceilometer.scenario │ ├── cinder.scenario │ ├── compute.scenario │ ├── galera.scenario │ ├── glance.scenario │ ├── ha-collapsed.variables │ ├── heat.scenario │ ├── horizon.scenario │ ├── hypervisors.scenario │ ├── keepalived.scenario │ ├── keystone.scenario │ ├── lb.scenario │ ├── memcached.scenario │ ├── mongodb.scenario │ ├── neutron.scenario │ ├── nova.scenario │ ├── rabbitmq.scenario │ ├── readme.txt │ ├── redis.scenario │ ├── sahara.scenario │ ├── serverprep.scenario │ ├── swift.scenario │ ├── test.sh │ └── trove.scenario ├── rabbitmq-config.md ├── rabbitmq-restart.md ├── redis-config.md ├── sahara-config.md ├── swift-config.md └── trove-config.md ├── make-vm └── pcmk ├── NovaCompute ├── NovaEvacuate ├── baremetal-rollback.scenario ├── baremetal.scenario ├── basic-cluster.scenario ├── beaker.scenario ├── ceilometer-test.sh ├── ceilometer.scenario ├── cinder-test.sh ├── cinder.scenario ├── compute-cluster.scenario ├── compute-common.scenario ├── compute-managed.scenario ├── controller-managed.scenario ├── galera-test.sh ├── galera.scenario ├── gateway.scenario ├── glance-test.sh ├── glance.scenario ├── ha-collapsed.variables ├── ha-segregated.variables ├── hacks.scenario ├── heat-test.sh ├── heat.scenario ├── horizon-test.sh ├── horizon.scenario ├── keystone-test.sh ├── keystone.scenario ├── lb.scenario ├── memcached.scenario ├── mongodb.scenario ├── mrg.variables ├── neutron-agents.scenario ├── neutron-server.scenario ├── neutron-test.sh ├── nova-test.sh ├── nova.scenario ├── nova_client.py ├── rabbitmq-test.sh ├── rabbitmq.scenario ├── swift-aco.scenario ├── swift-test.sh ├── swift.scenario ├── virt-hosts.scenario ├── vmsnap-rollback.scenario └── vmsnap.scenario /Cluster-deployment-collapsed.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/beekhof/osp-ha-deploy/2373fbfd39fc896198de37cbf1d8271748d7994c/Cluster-deployment-collapsed.png -------------------------------------------------------------------------------- /Cluster-deployment-segregated.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/beekhof/osp-ha-deploy/2373fbfd39fc896198de37cbf1d8271748d7994c/Cluster-deployment-segregated.png -------------------------------------------------------------------------------- /HA_Architecture-full.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/beekhof/osp-ha-deploy/2373fbfd39fc896198de37cbf1d8271748d7994c/HA_Architecture-full.png -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 
| Introduction 2 | ------------ 3 | 4 | This repository contains the description of two highly available OpenStack architectures using RDO or Red Hat Enterprise Linux OpenStack Platform: 5 | 6 | - An architecture based on [Pacemaker](ha-openstack.md). 7 | - An architecture based on [application-native tools and Keepalived](HA-keepalived.md). 8 | 9 | Each architecture includes a description and implementation instructions. Be sure to understand the constraints and limitations of each architecture, and choose the one that better suits your needs. 10 | 11 | How you can help 12 | ---------------- 13 | 14 | Feedback is encouraged for this project. Feel free to submit patches and report issues. 15 | -------------------------------------------------------------------------------- /full-stack.md: -------------------------------------------------------------------------------- 1 | This is what the result of `pcs status` should look like: 2 | 3 | Cluster name: rhos-node 4 | Last updated: Fri Mar 6 22:06:28 2015 5 | Last change: Fri Mar 6 22:03:52 2015 6 | Stack: corosync 7 | Current DC: rhos6-node2 (2) - partition with quorum 8 | Version: 1.1.12-a14efad 9 | 3 Nodes configured 10 | 121 Resources configured 11 | 12 | 13 | Online: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 14 | 15 | Full list of resources: 16 | 17 | fence1 (stonith:fence_xvm): Started rhos6-node1 18 | fence2 (stonith:fence_xvm): Started rhos6-node2 19 | fence3 (stonith:fence_xvm): Started rhos6-node3 20 | Clone Set: lb-haproxy-clone [lb-haproxy] 21 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 22 | vip-db (ocf::heartbeat:IPaddr2): Started rhos6-node1 23 | vip-qpid (ocf::heartbeat:IPaddr2): Started rhos6-node2 24 | vip-keystone (ocf::heartbeat:IPaddr2): Started rhos6-node3 25 | vip-glance (ocf::heartbeat:IPaddr2): Started rhos6-node1 26 | vip-cinder (ocf::heartbeat:IPaddr2): Started rhos6-node2 27 | vip-swift (ocf::heartbeat:IPaddr2): Started rhos6-node3 28 | vip-neutron (ocf::heartbeat:IPaddr2): Started rhos6-node1 29 | vip-nova (ocf::heartbeat:IPaddr2): Started rhos6-node2 30 | vip-horizon (ocf::heartbeat:IPaddr2): Started rhos6-node3 31 | vip-heat (ocf::heartbeat:IPaddr2): Started rhos6-node1 32 | vip-ceilometer (ocf::heartbeat:IPaddr2): Started rhos6-node2 33 | vip-rabbitmq (ocf::heartbeat:IPaddr2): Started rhos6-node3 34 | Master/Slave Set: galera-master [galera] 35 | Masters: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 36 | Clone Set: mongodb-clone [mongodb] 37 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 38 | Clone Set: memcached-clone [memcached] 39 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 40 | Clone Set: rabbitmq-server-clone [rabbitmq-server] 41 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 42 | Clone Set: keystone-clone [keystone] 43 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 44 | Clone Set: glance-fs-clone [glance-fs] 45 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 46 | Clone Set: glance-registry-clone [glance-registry] 47 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 48 | Clone Set: glance-api-clone [glance-api] 49 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 50 | Clone Set: cinder-api-clone [cinder-api] 51 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 52 | Clone Set: cinder-scheduler-clone [cinder-scheduler] 53 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 54 | cinder-volume (systemd:openstack-cinder-volume): Started rhos6-node1 55 | Clone Set: swift-fs-clone [swift-fs] 56 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 57 | Clone Set: swift-account-clone
[swift-account] 58 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 59 | Clone Set: swift-container-clone [swift-container] 60 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 61 | Clone Set: swift-object-clone [swift-object] 62 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 63 | Clone Set: swift-proxy-clone [swift-proxy] 64 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 65 | swift-object-expirer (systemd:openstack-swift-object-expirer): Started rhos6-node2 66 | Clone Set: neutron-server-clone [neutron-server] 67 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 68 | Clone Set: neutron-scale-clone [neutron-scale] (unique) 69 | neutron-scale:0 (ocf::neutron:NeutronScale): Started rhos6-node3 70 | neutron-scale:1 (ocf::neutron:NeutronScale): Started rhos6-node1 71 | neutron-scale:2 (ocf::neutron:NeutronScale): Started rhos6-node2 72 | Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup] 73 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 74 | Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup] 75 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 76 | Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent] 77 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 78 | Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent] 79 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 80 | Clone Set: neutron-l3-agent-clone [neutron-l3-agent] 81 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 82 | Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent] 83 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 84 | ceilometer-central (systemd:openstack-ceilometer-central): Started rhos6-node3 85 | Clone Set: ceilometer-collector-clone [ceilometer-collector] 86 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 87 | Clone Set: ceilometer-api-clone [ceilometer-api] 88 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 89 | Clone Set: ceilometer-delay-clone [ceilometer-delay] 90 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 91 | Clone Set: ceilometer-alarm-evaluator-clone [ceilometer-alarm-evaluator] 92 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 93 | Clone Set: ceilometer-alarm-notifier-clone [ceilometer-alarm-notifier] 94 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 95 | Clone Set: ceilometer-notification-clone [ceilometer-notification] 96 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 97 | Clone Set: heat-api-clone [heat-api] 98 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 99 | Clone Set: heat-api-cfn-clone [heat-api-cfn] 100 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 101 | Clone Set: heat-api-cloudwatch-clone [heat-api-cloudwatch] 102 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 103 | heat-engine (systemd:openstack-heat-engine): Started rhos6-node1 104 | Clone Set: horizon-clone [horizon] 105 | Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ] 106 | 107 | PCSD Status: 108 | rhos6-node1: Online 109 | rhos6-node2: Online 110 | rhos6-node3: Online 111 | 112 | Daemon Status: 113 | corosync: active/enabled 114 | pacemaker: active/enabled 115 | pcsd: active/enabled 116 | -------------------------------------------------------------------------------- /iha-uninstall.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -ex 4 | 5 | source stackrc 6 | 7 | # If your machines don't conform to this structure, try calling the script like this: 8 | # COMPUTE_PATTERN=mycompute ./iha-uninstall.sh upgrade 9 | : ${COMPUTE_PATTERN=novacompute} 10 | : 
${CONTROLLER_PATTERN=controller} 11 | 12 | COMPUTES=$(nova list | grep ${COMPUTE_PATTERN} | awk -F\| '{ print $3}' | tr '\n' ' ') 13 | CONTROLLERS=$(nova list | grep ${CONTROLLER_PATTERN} | awk -F\| '{ print $3}' | tr '\n' ' ') 14 | 15 | FIRST_COMPUTE=$(echo $COMPUTES | awk '{print $1}') 16 | FIRST_CONTROLLER=$(echo $CONTROLLERS | awk '{print $1}') 17 | 18 | ssh ${FIRST_CONTROLLER} -- sudo pcs property set stonith-enabled=false 19 | 20 | if [ "$1" = "upgrade" ]; then 21 | 22 | SERVICES="nova-evacuate neutron-openvswitch-agent-compute libvirtd-compute-clone ceilometer-compute-clone nova-compute-checkevacuate-clone nova-compute-clone" 23 | 24 | helper=iha-helper-remove.sh 25 | cat <<EOF > $helper 26 | set -ex 27 | pcs property set maintenance-mode=true 28 | 29 | for resource in $COMPUTES $SERVICES $FUDGE; do 30 | pcs resource cleanup \${resource} 31 | pcs --force resource delete \${resource} 32 | done 33 | 34 | for node in $COMPUTES; do 35 | cibadmin --delete --xml-text "" 36 | cibadmin --delete --xml-text "" 37 | done 38 | 39 | pcs property set maintenance-mode=false --wait 40 | 41 | EOF 42 | 43 | scp $helper heat-admin@${FIRST_CONTROLLER}: 44 | ssh heat-admin@${FIRST_CONTROLLER} -- sudo bash $helper 45 | fi 46 | 47 | helper=iha-helper-reenable.sh 48 | cat <<EOF > $helper 49 | set -ex 50 | for service in neutron-openvswitch-agent openstack-ceilometer-compute openstack-nova-compute libvirtd; do 51 | systemctl enable \${service} 52 | done 53 | EOF 54 | 55 | for node in $COMPUTES; do scp $helper heat-admin@${node}: ; ssh heat-admin@${node} -- sudo bash $helper ; done 56 | 57 | -------------------------------------------------------------------------------- /keepalived/Controller-network.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/beekhof/osp-ha-deploy/2373fbfd39fc896198de37cbf1d8271748d7994c/keepalived/Controller-network.jpg -------------------------------------------------------------------------------- /keepalived/Highlevelarch.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/beekhof/osp-ha-deploy/2373fbfd39fc896198de37cbf1d8271748d7994c/keepalived/Highlevelarch.jpg -------------------------------------------------------------------------------- /keepalived/Keepalived-arch.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/beekhof/osp-ha-deploy/2373fbfd39fc896198de37cbf1d8271748d7994c/keepalived/Keepalived-arch.jpg -------------------------------------------------------------------------------- /keepalived/Mariadb-haproxy.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/beekhof/osp-ha-deploy/2373fbfd39fc896198de37cbf1d8271748d7994c/keepalived/Mariadb-haproxy.jpg -------------------------------------------------------------------------------- /keepalived/Rabbitmq_clustering.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/beekhof/osp-ha-deploy/2373fbfd39fc896198de37cbf1d8271748d7994c/keepalived/Rabbitmq_clustering.jpg -------------------------------------------------------------------------------- /keepalived/ceilometer-config.md: -------------------------------------------------------------------------------- 1 | Introduction 2 | ------------ 3 | 4 | In terms of high availability, the Ceilometer central agent deserves special attention.
This agent had to run in a single node until the Juno release cycle, since there was no way to coordinate multiple agents and ensure they would not duplicate metrics. Now, multiple central agent instances can run in parallel with workload partitioning among these running instances, using the tooz library with a Redis backend for coordination. See [here](http://docs.openstack.org/admin-guide-cloud/telemetry-data-collection.html#support-for-ha-deployment) for additional information. 5 | 6 | The following commands will be executed on all controller nodes, unless otherwise stated. 7 | 8 | You can find a phd scenario file [here](phd-setup/ceilometer.scenario). 9 | 10 | Install software 11 | ---------------- 12 | 13 | yum install -y openstack-ceilometer-api openstack-ceilometer-central openstack-ceilometer-collector openstack-ceilometer-common openstack-ceilometer-alarm python-ceilometer python-ceilometerclient python-redis 14 | 15 | 16 | Configure ceilometer 17 | -------------------- 18 | 19 | openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_uri http://controller-vip.example.com:5000/ 20 | openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_plugin password 21 | openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_url http://controller-vip.example.com:35357/ 22 | openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken username ceilometer 23 | openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken password ceilometertest 24 | openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken project_name services 25 | openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT memcache_servers hacontroller1:11211,hacontroller2:11211,hacontroller3:11211 26 | openstack-config --set /etc/ceilometer/ceilometer.conf oslo_messaging_rabbit rabbit_hosts hacontroller1,hacontroller2,hacontroller3 27 | openstack-config --set /etc/ceilometer/ceilometer.conf oslo_messaging_rabbit rabbit_ha_queues true 28 | openstack-config --set /etc/ceilometer/ceilometer.conf publisher telemetry_secret ceilometersecret 29 | openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_auth_url http://controller-vip.example.com:5000/v2.0 30 | openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_username ceilometer 31 | openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_tenant_name services 32 | openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_password ceilometertest 33 | openstack-config --set /etc/ceilometer/ceilometer.conf database connection mongodb://hacontroller1,hacontroller2,hacontroller3:27017/ceilometer?replicaSet=ceilometer 34 | openstack-config --set /etc/ceilometer/ceilometer.conf database max_retries -1 35 | 36 | # keep last 5 days data only (value is in secs) 37 | openstack-config --set /etc/ceilometer/ceilometer.conf database metering_time_to_live 432000 38 | openstack-config --set /etc/ceilometer/ceilometer.conf api host 192.168.1.22X 39 | 40 | Configure coordination URL 41 | -------------------------- 42 | 43 | openstack-config --set /etc/ceilometer/ceilometer.conf coordination backend_url 'redis://hacontroller1:26379?sentinel=mymaster&sentinel_fallback=hacontroller2:26379&sentinel_fallback=hacontroller3:26379' 44 | 45 | Enable and start Ceilometer services, open firewall ports 46 | --------------------------------------------------------- 47 | 48 | systemctl start 
openstack-ceilometer-central 49 | systemctl enable openstack-ceilometer-central 50 | systemctl start openstack-ceilometer-collector 51 | systemctl enable openstack-ceilometer-collector 52 | systemctl start openstack-ceilometer-api 53 | systemctl enable openstack-ceilometer-api 54 | systemctl start openstack-ceilometer-alarm-evaluator 55 | systemctl enable openstack-ceilometer-alarm-evaluator 56 | systemctl start openstack-ceilometer-alarm-notifier 57 | systemctl enable openstack-ceilometer-alarm-notifier 58 | systemctl start openstack-ceilometer-notification 59 | systemctl enable openstack-ceilometer-notification 60 | firewall-cmd --add-port=8777/tcp 61 | firewall-cmd --add-port=8777/tcp --permanent 62 | firewall-cmd --add-port=4952/udp 63 | firewall-cmd --add-port=4952/udp --permanent 64 | 65 | Tests 66 | ----- 67 | 68 | On any node: 69 | 70 | . /root/keystonerc_admin 71 | 72 | for m in storage.objects image network volume instance ; do ceilometer sample-list -m $m | tail -2 ; done 73 | -------------------------------------------------------------------------------- /keepalived/cinder-config.md: -------------------------------------------------------------------------------- 1 | Introduction 2 | ------------ 3 | 4 | Cinder will be configured in this example to use the NFS backend driver. Instructions for any other backend driver will only differ in the `volume_driver` config option and any driver-specific options. 5 | 6 | The following commands will be executed on all controller nodes, unless otherwise stated. 7 | 8 | You can find a phd scenario file [here](phd-setup/cinder.scenario). 9 | 10 | Install software 11 | ---------------- 12 | 13 | yum install -y openstack-cinder openstack-utils openstack-selinux python-memcached 14 | 15 | Configure 16 | --------- 17 | 18 | openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:cindertest@controller-vip.example.com/cinder 19 | openstack-config --set /etc/cinder/cinder.conf database max_retries -1 20 | openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone 21 | openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller-vip.example.com:5000/ 22 | openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_plugin password 23 | openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller-vip.example.com:35357/ 24 | openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder 25 | openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password cindertest 26 | openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name services 27 | openstack-config --set /etc/cinder/cinder.conf DEFAULT notification_driver messaging 28 | openstack-config --set /etc/cinder/cinder.conf DEFAULT control_exchange cinder 29 | openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_host controller-vip.example.com 30 | openstack-config --set /etc/cinder/cinder.conf DEFAULT memcache_servers hacontroller1:11211,hacontroller2:11211,hacontroller3:11211 31 | openstack-config --set /etc/cinder/cinder.conf DEFAULT host rhos7-cinder 32 | openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_listen 192.168.1.22X 33 | openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_hosts hacontroller1,hacontroller2,hacontroller3 34 | openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_ha_queues true 35 | openstack-config --set /etc/cinder/cinder.conf keymgr 
encryption_auth_url http://controller-vip.example.com:5000/v3 36 | 37 | **Note:** We are setting a single "host" entry for all nodes, this is related to the A/P issues with cinder-volume. 38 | 39 | Configure NFS driver 40 | -------------------- 41 | 42 | # Choose whatever NFS share is used 43 | 44 | cat > /etc/cinder/nfs_exports << EOF 45 | 192.168.1.4:/volumeUSB1/usbshare/openstack/cinder 46 | EOF 47 | 48 | chown root:cinder /etc/cinder/nfs_exports 49 | chmod 0640 /etc/cinder/nfs_exports 50 | openstack-config --set /etc/cinder/cinder.conf DEFAULT nfs_shares_config /etc/cinder/nfs_exports 51 | openstack-config --set /etc/cinder/cinder.conf DEFAULT nfs_sparsed_volumes true 52 | openstack-config --set /etc/cinder/cinder.conf DEFAULT nfs_mount_options v3 53 | openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.nfs.NfsDriver 54 | 55 | Manage DB 56 | --------- 57 | 58 | On node 1: 59 | 60 | su cinder -s /bin/sh -c "cinder-manage db sync" 61 | 62 | Start services 63 | -------------- 64 | 65 | On node 1: 66 | 67 | systemctl start openstack-cinder-api 68 | systemctl start openstack-cinder-scheduler 69 | systemctl start openstack-cinder-volume 70 | systemctl enable openstack-cinder-api 71 | systemctl enable openstack-cinder-scheduler 72 | systemctl enable openstack-cinder-volume 73 | 74 | **Note:** If this node crashes, it should be manually started on another node. Refer to [this bug](https://bugzilla.redhat.com/show_bug.cgi?id=1193229) for additional information. 75 | 76 | On nodes 2 and 3: 77 | 78 | systemctl start openstack-cinder-api 79 | systemctl start openstack-cinder-scheduler 80 | systemctl enable openstack-cinder-api 81 | systemctl enable openstack-cinder-scheduler 82 | 83 | Open firewall ports 84 | ------------------- 85 | 86 | On all nodes: 87 | 88 | firewall-cmd --add-port=8776/tcp 89 | firewall-cmd --add-port=8776/tcp --permanent 90 | 91 | Test 92 | ---- 93 | 94 | On any node: 95 | 96 | . /root/keystonerc_demo 97 | cinder create --display-name test 1 98 | cinder extend test 4 99 | cinder delete test 100 | -------------------------------------------------------------------------------- /keepalived/controller-node.md: -------------------------------------------------------------------------------- 1 | Introduction 2 | ------------ 3 | 4 | Controller nodes will run all OpenStack services in this architecture, while compute nodes will only act as hypervisors. 5 | 6 | Environment description 7 | ----------------------- 8 | 9 | ### Network setup 10 | 11 | The basic requirements for this environment include 5 nodes, with the network setup described in the following diagram: 12 | 13 | ![](Controller-network.jpg "Network setup") 14 | 15 | - The external network is used by the Neutron floating IPs, and for any external access. The hypervisor nodes (hacompute1 and hacompute2) do not need to be connected to this network, but in the demo setup they are connected for testing purposes. 16 | - The internal network will carry all other traffic: API traffic, tenant networks and storage traffic. 17 | - A router will provide connectivity for the controller nodes to the floating IP network as required by Sahara for instance management. In this configuration example, it will also allow virtual machines to access the controller nodes, allowing Trove instances access to the RabbitMQ server (**please note this is not recommended for a production setup**, refer to [trove-config.md](the Trove section) for details). 
If neither Sahara nor Trove is used, the router is not required. 18 | 19 | Please note this is a minimal test setup. Any production setup should separate internal and external API traffic, tenant networks and storage traffic in different network segments, and route traffic between instances and the controller nodes via a firewall. 20 | 21 | ### Node setup 22 | 23 | The following table provides the system details for the environment used during testing. 24 | 25 | 26 | | Hostname | NIC 1 | NIC 2 | Disk 1 | Disk 2 | 27 | |--------------------|--------------------------------------|-------------------|------------------|-----------------| 28 | |hacontroller1 |eth0: no IP, used for provider network|eth1: 192.168.1.221| 60 GB (/dev/vda) | 8 GB (/dev/vdb) | 29 | |hacontroller2 |eth0: no IP, used for provider network|eth1: 192.168.1.222| 60 GB (/dev/vda) | 8 GB (/dev/vdb) | 30 | |hacontroller3 |eth0: no IP, used for provider network|eth1: 192.168.1.223| 60 GB (/dev/vda) | 8 GB (/dev/vdb) | 31 | |controller-vip (virtual hostname)| |eth1: 192.168.1.220| | | 32 | |hacompute1 |eth0: 10.10.10.224 (-priv) |eth1: 192.168.1.224| 60 GB (/dev/vda) | | 33 | |hacompute2 |eth0: 10.10.10.225 (-priv) |eth1: 192.168.1.225| 60 GB (/dev/vda) | | 34 | 35 | All nodes have SELinux set to *Enforcing* and an active firewall. 36 | 37 | In the detailed installation notes, remember to substitute any occurrence of 192.168.1.22X with the IP of the node being configured. 38 | 39 | #### Single vs multiple VIPs 40 | 41 | Please note that the current document uses a single virtual IP for the whole stack of OpenStack API services. It is also possible to use multiple virtual IPs, one per API service. Using multiple virtual IPs will allow for load distribution between the three HAProxy instances. On the other hand, the number of required IP addresses increases, and the initial configuration can be slightly more complex if performed manually. 42 | 43 | #### Base operating system installation 44 | 45 | All nodes start from a *minimal* CentOS 7 or RHEL 7 installation, after which the required software channels are enabled. 46 | 47 | - For RDO Liberty, follow the steps specified in the [RDO wiki](https://openstack.redhat.com/Repositories) 48 | 49 | **Note:** For now, use [this release RPM](http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm) to set up the required repositories. 50 | 51 | - For the [Red Hat Enterprise Linux OpenStack Platform](http://www.redhat.com/openstack), run the following commands to enable the required repositories: 52 | 53 | 
subscription-manager register --username=${RHSMUSER} --password=${RHSMPASS} 
 54 | subscription-manager attach --auto
 55 | subscription-manager repos --disable \* 
 56 | subscription-manager repos --enable rhel-7-server-rpms 
 57 | subscription-manager repos --enable rhel-7-server-rh-common-rpms 
 58 | subscription-manager repos --enable rhel-7-server-openstack-7.0-rpms
 59 | yum -y update 
 60 | reboot
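Before continuing, it can be worth confirming that the expected repositories are now active. This is only a suggested verification step, using standard subscription-manager and yum tooling:

    subscription-manager repos --list-enabled | grep -E 'rhel-7-server|openstack'
    yum repolist enabled
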
61 | 62 | The systems also had NetworkManager disabled: 63 | 64 | systemctl disable NetworkManager 65 | systemctl enable network 66 | 67 | If the installed system cannot access the gateway, be sure there is a “GATEWAY=xxx” entry in the relevant ifcfg file: 68 | 69 | NAME=eth0 70 | ... 71 | GATEWAY=192.168.1.1 72 | 73 | **Note:** It is very important to make sure that NTP is correctly configured on all nodes. Failure to do so can cause environment instability. 74 | 75 | The following phd scenario files have been created: 76 | 77 | - A file with the [VM creation and initial setup](phd-setup/hypervisors.scenario) 78 | - A file with some [basic system preparation tasks](phd-setup/serverprep.scenario) 79 | 80 | #### Configuration steps 81 | 82 | **NOTE:** before moving on, please remember that the following instructions have been created for the above-mentioned scenario (3 controller nodes, with a specific network setup). If you are using this guide to implement your own environment, pay close attention to any IP substitution and other changes that may apply to your environment. Also, do not forget to set sensible passwords. 83 | 84 | The configuration steps can be divided into: 85 | 86 | - Installing/configuring/enabling all core non-Openstack services 87 | - [HAProxy](haproxy-config.md) 88 | - [Galera](galera-config.md) 89 | - [RabbitMQ](rabbitmq-config.md) 90 | - [Memcached](memcached-config.md) 91 | - [Redis](redis-config.md) 92 | - [MongoDB](mongodb-config.md) 93 | - [Keepalived](keepalived-config.md) 94 | - Installing/configuring/enabling all core Openstack services 95 | - [Keystone](keystone-config.md) 96 | - [Glance](glance-config.md) 97 | - [Cinder](cinder-config.md) 98 | - [Swift](swift-config.md) 99 | - [Neutron](neutron-config.md) 100 | - [Nova](nova-config.md) 101 | - [Ceilometer](ceilometer-config.md) 102 | - [Heat](heat-config.md) 103 | - [Horizon](horizon-config.md) 104 | - [Trove](trove-config.md) 105 | - [Sahara](sahara-config.md) 106 | 107 | On each section, a link to a [phd-based](https://github.com/davidvossel/phd) scenario file is provided, as a reference. 108 | -------------------------------------------------------------------------------- /keepalived/galera-bootstrap.md: -------------------------------------------------------------------------------- 1 | Introduction 2 | ------------ 3 | 4 | Here is an outline of the steps needed to re-establish/bootstrap Galera quorum. 5 | 6 | 1. Determine loss of quorum 7 | 2. Determine systems with last activity 8 | 3. Start first DB on first node 9 | 4. Start DB on remaining nodes 10 | 11 | Determine loss of quorum 12 | ------------------------ 13 | 14 | Confirm in the */var/log/mariadb/mariadb.log* on each system, looking for Errors 15 | 16 | 140929 11:25:40 [ERROR] WSREP: Local state seqno (1399488) is greater than group seqno (10068): states diverged. Aborting to avoid potential data loss. Remove '/var/lib/mysql//grastate.dat' file and restart if you wish to continue. (FATAL) 17 | 140929 11:25:40 [ERROR] Aborting 18 | [root@ospha2 ~]# 19 | 20 | Also the clustercheck command should so that there are some systems not in sync 21 | 22 | [root@ospha2 ~]# clustercheck 23 | HTTP/1.1 503 Service Unavailable 24 | Content-Type: text/plain 25 | Connection: close 26 | Content-Length: 36 27 | 28 | Galera cluster node is not synced. 
29 | [root@ospha2 ~]# 30 | 31 | Determine systems with last activity 32 | ------------------------------------ 33 | 34 | In this section we attempt to determine which system or systems has the highest valid sequence number for the for the latest UUID. 35 | 36 | ### Orderly shutdown 37 | 38 | If the cluster shutdown correctly the `/var/lib/mysql/grastate.dat` file will have positive numbers for the seqno. Note which system or systems have the greatest seqno. However, if any system has a `-1` value, that indicates the shutdown was not clean and another method to determine the seqno is needed. 39 | 40 | [root@ospha2 ~]# cat /var/lib/mysql/grastate.dat 41 | # GALERA saved state 42 | version: 2.1 43 | uuid: b048715d-4369-11e4-b7ef-af1999a6c989 44 | seqno: -1 45 | cert_index: 46 | [root@ospha2 ~]# 47 | 48 | ### Disorderly Shutdown 49 | 50 | The seqno is in the `/var/log/mariadb/mariadb.log` file. Search for lines with "Found save state", ignoring any -1 values. The last value on each line is in the form UUID:seqno. 51 | 52 | [root@ospha1 ~]# tail -n 1000 /var/log/mariadb/mariadb.log | grep "Found saved state" | grep -v ":-1" 53 | 140923 17:49:19 [Note] WSREP: Found saved state: b048715d-4369-11e4-b7ef-af1999a6c989:2229 54 | 140924 15:37:13 [Note] WSREP: Found saved state: b048715d-4369-11e4-b7ef-af1999a6c989:2248 55 | 140929 11:24:26 [Note] WSREP: Found saved state: b048715d-4369-11e4-b7ef-af1999a6c989:10060 56 | [root@ospha1 ~]# 57 | 58 | [root@ospha2 ~]# tail -n 1000 /var/log/mariadb/mariadb.log | grep "Found saved state" | grep -v ":-1" 59 | 140926 14:58:16 [Note] WSREP: Found saved state: b048715d-4369-11e4-b7ef-af1999a6c989:171535 60 | 140929 11:24:28 [Note] WSREP: Found saved state: b048715d-4369-11e4-b7ef-af1999a6c989:1399488 61 | [root@ospha2 ~]# 62 | 63 | [root@ospha3 ~]# tail -n 2000 /var/log/mariadb/mariadb.log | grep "Found saved state" | grep -v ":-1" 64 | 140923 17:36:57 [Note] WSREP: Found saved state: b048715d-4369-11e4-b7ef-af1999a6c989:36 65 | 140923 17:43:18 [Note] WSREP: Found saved state: b048715d-4369-11e4-b7ef-af1999a6c989:785 66 | [root@ospha3 ~]# 67 | 68 | Notice all servers have the same UUID (b048715d-4369-11e4-b7ef-af1999a6c989), but server *ospha2* has the largest seqno (1399488). 69 | 70 | Start first DB on first node 71 | ---------------------------- 72 | 73 | The following command will initiate the Galera cluster. Since ospha2 had the highest seqno, that is the node to start first. 74 | 75 | [root@ospha2 ~]# sudo -u mysql /usr/libexec/mysqld --wsrep-cluster-address='gcomm://' & 76 | [1] 1910 77 | [root@ospha2 ~]# 140929 16:31:00 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295 78 | 140929 16:31:00 [Warning] Could not increase number of max_open_files to more than 1024 (request: 1835) 79 | /usr/libexec/mysqld: Query cache is disabled (resize or similar command in progress); repeat this command later 80 | 81 | Verify that this brought the this node into sync. 82 | 83 | [root@ospha2 ~]# clustercheck 84 | HTTP/1.1 200 OK 85 | Content-Type: text/plain 86 | Connection: close 87 | Content-Length: 32 88 | 89 | Galera cluster node is synced. 90 | [root@ospha2 ~]# 91 | 92 | Start DB on remaining nodes 93 | --------------------------- 94 | 95 | On another cluster member, start the database, and then verify this node reports synced. 
96 | 97 | [root@ospha1 ~]# systemctl start mariadb 98 | [root@ospha1 ~]# clustercheck 99 | HTTP/1.1 200 OK 100 | Content-Type: text/plain 101 | Connection: close 102 | Content-Length: 32 103 | 104 | Galera cluster node is synced. 105 | [root@ospha1 ~]# 106 | 107 | Once `clustercheck` returns 200 on all nodes, restart MariaDB on the first node. 108 | 109 | kill 110 | systemctl start mariadb 111 | 112 | -------------------------------------------------------------------------------- /keepalived/glance-config.md: -------------------------------------------------------------------------------- 1 | Introduction 2 | ------------ 3 | 4 | The following commands will be executed on all controller nodes, unless otherwise stated. 5 | 6 | You can find a phd scenario file [here](phd-setup/glance.scenario). 7 | 8 | Install software 9 | ---------------- 10 | 11 | yum install -y openstack-glance openstack-utils openstack-selinux nfs-utils 12 | 13 | Configure glance-api and glance-registry 14 | ---------------------------------------- 15 | 16 | openstack-config --set /etc/glance/glance-api.conf database connection mysql://glance:glancetest@controller-vip.example.com/glance 17 | openstack-config --set /etc/glance/glance-api.conf database max_retries -1 18 | openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone 19 | openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller-vip.example.com:5000/ 20 | openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_plugin password 21 | openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller-vip.example.com:35357/ 22 | openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance 23 | openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password glancetest 24 | openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name services 25 | openstack-config --set /etc/glance/glance-api.conf DEFAULT notification_driver messaging 26 | openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host 192.168.1.22X 27 | openstack-config --set /etc/glance/glance-api.conf DEFAULT registry_host controller-vip.example.com 28 | openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_hosts hacontroller1,hacontroller2,hacontroller3 29 | openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_ha_queues true 30 | openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glance:glancetest@controller-vip.example.com/glance 31 | openstack-config --set /etc/glance/glance-registry.conf database max_retries -1 32 | openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone 33 | openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller-vip.example.com:5000/ 34 | openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_plugin password 35 | openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller-vip.example.com:35357/ 36 | openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance 37 | openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password glancetest 38 | openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name services 39 | openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_host 192.168.1.22X 
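The `192.168.1.22X` value used above must be replaced with the address of the node being configured. If you prefer not to edit it by hand, the bind address can be derived on each node, similar to what the phd scenario files do. The snippet below is only a sketch and assumes eth1 carries the internal/API network, as in the node table:

    # Illustrative helper: use this node's eth1 address as the Glance bind address
    myip=$(ip -4 addr show eth1 | awk '/inet /{split($2,a,"/"); print a[1]; exit}')
    openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host ${myip}
    openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_host ${myip}
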
40 | 41 | Manage DB 42 | --------- 43 | 44 | On node 1: 45 | 46 | su glance -s /bin/sh -c "glance-manage db_sync" 47 | 48 | Configure backend 49 | ----------------- 50 | 51 | For this setup, NFS will be used. Add the NFS mount to `/etc/fstab`, making sure it is mounted on `/var/lib/glance`. Be aware the last two columns in `fstab` need to be "0 0" on RHEL/CentOS 7, due to [this bug](https://bugzilla.redhat.com/show_bug.cgi?id=1120367). You may also find [this bug](https://bugzilla.redhat.com/show_bug.cgi?id=1203820) if using NFS v3 shares. 52 | 53 | Also, note there is currently a known SELinux issue when using an NFS backend for Glance. See [this bug](https://bugzilla.redhat.com/show_bug.cgi?id=1219406) for a description and fix. 54 | 55 | On all nodes: 56 | 57 | chown glance:nobody /var/lib/glance 58 | 59 | Start services and open firewall ports 60 | -------------------------------------- 61 | 62 | systemctl start openstack-glance-registry 63 | systemctl start openstack-glance-api 64 | systemctl enable openstack-glance-registry 65 | systemctl enable openstack-glance-api 66 | firewall-cmd --add-port=9191/tcp 67 | firewall-cmd --add-port=9191/tcp --permanent 68 | firewall-cmd --add-port=9292/tcp 69 | firewall-cmd --add-port=9292/tcp --permanent 70 | 71 | Test 72 | ---- 73 | 74 | On any node: 75 | 76 | . /root/keystonerc_admin 77 | wget http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img 78 | glance image-create --name "cirros" --disk-format qcow2 --container-format bare --file cirros-0.3.3-x86_64-disk.img --visibility public 79 | glance image-list 80 | -------------------------------------------------------------------------------- /keepalived/heat-config.md: -------------------------------------------------------------------------------- 1 | Introduction 2 | ------------ 3 | 4 | The following commands will be executed on all controller nodes, unless otherwise stated. 5 | 6 | You can find a phd scenario file [here](phd-setup/heat.scenario). 7 | 8 | Install software 9 | ---------------- 10 | 11 | yum install -y openstack-heat-engine openstack-heat-api openstack-heat-api-cfn openstack-heat-api-cloudwatch python-heatclient openstack-utils python-glanceclient 12 | 13 | Configure Heat domain 14 | --------------------- 15 | 16 | To allow non-admin users to create Heat stacks, a Keystone domain needs to be created. Run the following commands to create the Heat domain, and configure Heat to use it. 17 | 18 | On node 1: 19 | 20 | . 
/root/keystonerc_admin 21 | openstack role create heat_stack_user 22 | openstack token issue 23 | 24 | Take note of the token ID issued, then: 25 | 26 | openstack --os-token=${TOKEN_ID} --os-url=http://controller-vip.example.com:5000/v3 --os-identity-api-version=3 domain create heat --description "Owns users and projects created by heat" 27 | openstack --os-token=${TOKEN_ID} --os-url=http://controller-vip.example.com:5000/v3 --os-identity-api-version=3 user create --password heattest --domain heat --description "Manages users and projects created by heat" heat_domain_admin 28 | openstack --os-token=${TOKEN_ID} --os-url=http://controller-vip.example.com:5000/v3 --os-identity-api-version=3 role add --user heat_domain_admin --domain heat admin 29 | 30 | On all nodes: 31 | 32 | openstack-config --set /etc/heat/heat.conf DEFAULT stack_domain_admin_password heattest 33 | openstack-config --set /etc/heat/heat.conf DEFAULT stack_domain_admin heat_domain_admin 34 | openstack-config --set /etc/heat/heat.conf DEFAULT stack_user_domain_name heat 35 | 36 | Configure Heat 37 | -------------- 38 | 39 | openstack-config --set /etc/heat/heat.conf database connection mysql://heat:heattest@controller-vip.example.com/heat 40 | openstack-config --set /etc/heat/heat.conf database max_retries -1 41 | openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_uri http://controller-vip.example.com:5000/ 42 | openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_plugin password 43 | openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_url http://controller-vip.example.com:35357/ 44 | openstack-config --set /etc/heat/heat.conf keystone_authtoken username heat 45 | openstack-config --set /etc/heat/heat.conf keystone_authtoken password heattest 46 | openstack-config --set /etc/heat/heat.conf keystone_authtoken project_name services 47 | openstack-config --set /etc/heat/heat.conf keystone_authtoken keystone_ec2_uri http://controller-vip.example.com:35357/v2.0 48 | openstack-config --set /etc/heat/heat.conf keystone_authtoken identity_uri http://controller-vip.example.com:35357 49 | openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_tenant_name services 50 | openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_user heat 51 | openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_password heattest 52 | openstack-config --set /etc/heat/heat.conf ec2authtoken auth_uri http://controller-vip.example.com:5000/v2.0 53 | openstack-config --set /etc/heat/heat.conf DEFAULT memcache_servers hacontroller1:11211,hacontroller2:11211,hacontroller3:11211 54 | openstack-config --set /etc/heat/heat.conf heat_api bind_host 192.168.1.22X 55 | openstack-config --set /etc/heat/heat.conf heat_api_cfn bind_host 192.168.1.22X 56 | openstack-config --set /etc/heat/heat.conf heat_api_cloudwatch bind_host 192.168.1.22X 57 | openstack-config --set /etc/heat/heat.conf DEFAULT heat_metadata_server_url controller-vip.example.com:8000 58 | openstack-config --set /etc/heat/heat.conf DEFAULT heat_waitcondition_server_url controller-vip.example.com:8000/v1/waitcondition 59 | openstack-config --set /etc/heat/heat.conf DEFAULT heat_watch_server_url controller-vip.example.com:8003 60 | openstack-config --set /etc/heat/heat.conf oslo_messaging_rabbit rabbit_hosts hacontroller1,hacontroller2,hacontroller3 61 | openstack-config --set /etc/heat/heat.conf oslo_messaging_rabbit rabbit_ha_queues true 62 | openstack-config --set /etc/heat/heat.conf DEFAULT rpc_backend rabbit 63 | 
openstack-config --set /etc/heat/heat.conf DEFAULT notification_driver heat.openstack.common.notifier.rpc_notifier 64 | openstack-config --set /etc/heat/heat.conf DEFAULT enable_cloud_watch_lite false 65 | 66 | 67 | Manage DB 68 | --------- 69 | 70 | On node 1: 71 | 72 | su heat -s /bin/sh -c "heat-manage db_sync" 73 | 74 | Start services, open firewall ports 75 | ----------------------------------- 76 | 77 | On all nodes: 78 | 79 | systemctl start openstack-heat-api 80 | systemctl start openstack-heat-api-cfn 81 | systemctl start openstack-heat-api-cloudwatch 82 | systemctl start openstack-heat-engine 83 | systemctl enable openstack-heat-api 84 | systemctl enable openstack-heat-api-cfn 85 | systemctl enable openstack-heat-api-cloudwatch 86 | systemctl enable openstack-heat-engine 87 | firewall-cmd --add-port=8000/tcp 88 | firewall-cmd --add-port=8000/tcp --permanent 89 | firewall-cmd --add-port=8003/tcp 90 | firewall-cmd --add-port=8003/tcp --permanent 91 | firewall-cmd --add-port=8004/tcp 92 | firewall-cmd --add-port=8004/tcp --permanent 93 | -------------------------------------------------------------------------------- /keepalived/horizon-config.md: -------------------------------------------------------------------------------- 1 | Introduction 2 | ------------ 3 | 4 | The following commands will be executed on all controller nodes, unless otherwise stated. 5 | 6 | You can find a phd scenario file [here](phd-setup/horizon.scenario). 7 | 8 | Install software 9 | ---------------- 10 | 11 | yum install -y mod_wsgi httpd mod_ssl python-memcached openstack-dashboard 12 | 13 | Set secret key 14 | -------------- 15 | 16 | On node 1: 17 | 18 | openssl rand -hex 10 19 | 20 | Take note of the generated random value, then on all nodes: 21 | 22 | sed -i -e "s#SECRET_KEY.*#SECRET_KEY = 'VALUE'#g#" /etc/openstack-dashboard/local_settings 23 | 24 | Configure local\_settings and httpd.conf 25 | ---------------------------------------- 26 | 27 | sed -i -e "s#ALLOWED_HOSTS.*#ALLOWED_HOSTS = ['*',]#g" \ 28 | -e "s#^CACHES#SESSION_ENGINE = 'django.contrib.sessions.backends.cache'\nCACHES#g#" \ 29 | -e "s#locmem.LocMemCache'#memcached.MemcachedCache',\n\t'LOCATION' : [ 'hacontroller1:11211', 'hacontroller2:11211', 'hacontroller3:11211', ]#g" \ 30 | -e 's#OPENSTACK_HOST =.*#OPENSTACK_HOST = "controller-vip.example.com"#g' \ 31 | -e "s#^LOCAL_PATH.*#LOCAL_PATH = '/var/lib/openstack-dashboard'#g" \ 32 | /etc/openstack-dashboard/local_settings 33 | 34 | Restart httpd and open firewall port 35 | ------------------------------------ 36 | 37 | systemctl daemon-reload 38 | systemctl restart httpd 39 | firewall-cmd --add-port=80/tcp 40 | firewall-cmd --add-port=80/tcp --permanent 41 | -------------------------------------------------------------------------------- /keepalived/keepalived-config.md: -------------------------------------------------------------------------------- 1 | Introduction 2 | ------------ 3 | 4 | [Keepalived](http://www.keepalived.org/) provides simple and robust facilities for load balancing and high-availability to Linux system and Linux based infrastructures. In this highly available OpenStack architecture, it is used to provide high availability to the virtual IP(s) used by HAProxy. High-availability is achieved by VRRP protocol, a fundamental brick for router failover. 5 | 6 | ![](Keepalived-arch.jpg "Keepalived architecture") 7 | 8 | The following commands will be executed on all controller nodes, unless otherwise stated. 
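Once keepalived is running on all three controllers (see below), only one node should hold the virtual IP at any given time. The following one-liner is an illustrative way to check which node that is, assuming the eth1 interface and the 192.168.1.220 VIP used throughout this guide:

    # Prints a message only on the node that currently owns the VIP
    ip -4 addr show eth1 | grep -q 192.168.1.220 && echo "VIP 192.168.1.220 is active on $(hostname)"
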
9 | 10 | You can find a phd scenario file [here](phd-setup/keepalived.scenario). 11 | 12 | Install software 13 | ---------------- 14 | 15 | yum -y install keepalived psmisc 16 | 17 | Create configuration file 18 | ------------------------- 19 | 20 | On all nodes: 21 | 22 | cat > /etc/keepalived/keepalived.conf << EOF 23 | 24 | vrrp_script chk_haproxy { 25 | script "/usr/bin/killall -0 haproxy" 26 | interval 2 27 | } 28 | 29 | vrrp_instance VI_PUBLIC { 30 | interface eth1 31 | state BACKUP 32 | virtual_router_id 52 33 | priority 101 34 | virtual_ipaddress { 35 | 192.168.1.220 dev eth1 36 | } 37 | track_script { 38 | chk_haproxy 39 | } 40 | # Avoid failback 41 | nopreempt 42 | } 43 | 44 | vrrp_sync_group VG1 45 | group { 46 | VI_PUBLIC 47 | } 48 | EOF 49 | 50 | 51 | Open firewall rules and start services 52 | -------------------------------------- 53 | 54 | On all nodes: 55 | 56 | firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -i eth1 -d 224.0.0.0/8 -j ACCEPT 57 | firewall-cmd --direct --perm --add-rule ipv4 filter INPUT 0 -i eth1 -d 224.0.0.0/8 -j ACCEPT 58 | systemctl start keepalived 59 | systemctl enable keepalived 60 | -------------------------------------------------------------------------------- /keepalived/memcached-config.md: -------------------------------------------------------------------------------- 1 | Introduction 2 | ------------ 3 | 4 | Memcached is a general-purpose distributed memory caching system. It is used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce the number of times an external data source must be read. 5 | 6 | **Note:** Access to memcached is not handled by HAproxy because replicated access is currently only in an experimental state. Instead consumers must be supplied with the full list of hosts running memcached. 7 | 8 | The following commands will be executed on all controller nodes. 9 | 10 | You can find a phd scenario file [here](phd-setup/memcached.scenario). 11 | 12 | Install and enable memcached 13 | ---------------------------- 14 | 15 | yum install -y memcached 16 | systemctl start memcached 17 | systemctl enable memcached 18 | firewall-cmd --add-port=11211/tcp 19 | firewall-cmd --add-port=11211/tcp --permanent 20 | -------------------------------------------------------------------------------- /keepalived/mongodb-config.md: -------------------------------------------------------------------------------- 1 | Introduction 2 | ------------ 3 | 4 | MongoDB can provide high availability through the use of replica sets. A replica set in MongoDB is a group of mongod processes that maintain the same data set, where one of the nodes is specified as master and the rest as slaves. Clients are explicitly told to connect to the replica set, by specifying all its members. In case of a node failure, the client should transparently reconnect to a surviving replica. 5 | 6 | The following commands will be executed on all controller nodes, unless otherwise stated. 7 | 8 | You can find a phd scenario file [here](phd-setup/mongodb.scenario). 
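Once the replica set has been created (see the steps below), its member states can also be checked non-interactively. This one-liner is an illustrative addition using the standard mongo shell, not a required step:

    # Lists each replica set member and its current state
    mongo --quiet --eval 'rs.status().members.forEach(function(m) { print(m.name + " " + m.stateStr); })'

One member should report PRIMARY and the remaining members SECONDARY.
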
9 | 10 | Install packages 11 | ---------------- 12 | 13 | yum install -y mongodb mongodb-server 14 | 15 | Listen to external connections, and configure replication set 16 | ------------------------------------------------------------- 17 | 18 | sed -i -e 's#bind_ip.*#bind_ip = 0.0.0.0#g' /etc/mongod.conf 19 | echo "replSet = ceilometer" >> /etc/mongod.conf 20 | 21 | Start services and enable firewall ports 22 | ---------------------------------------- 23 | 24 | systemctl start mongod 25 | systemctl enable mongod 26 | firewall-cmd --add-port=27017/tcp 27 | firewall-cmd --add-port=27017/tcp --permanent 28 | 29 | Create replica set 30 | ------------------ 31 | 32 | On node 1: 33 | 34 | mongo 35 | 36 | 37 | > rs.initiate() 38 | > sleep(10000) 39 | > rs.add("hacontroller2.example.com"); 40 | > rs.add("hacontroller3.example.com"); 41 | 42 | And verify: 43 | 44 | > rs.status() 45 | 46 | Until all nodes show `"stateStr" : "PRIMARY"` or `"stateStr" : "SECONDARY"`, then: 47 | 48 | > quit() 49 | -------------------------------------------------------------------------------- /keepalived/mongodb-recovery.md: -------------------------------------------------------------------------------- 1 | Introduction 2 | ------------ 3 | 4 | MongoDB usually does a good job at re-forming a replica set after a full cluster reboot. In case of any failure, [its documentation](http://docs.mongodb.org/v2.6/tutorial/troubleshoot-replica-sets/) provides an excellent reference on how to troubleshoot and fix any error. 5 | -------------------------------------------------------------------------------- /keepalived/nova-config.md: -------------------------------------------------------------------------------- 1 | Introduction 2 | ------------ 3 | 4 | The following commands will be executed on all controller nodes, unless otherwise stated. 5 | 6 | You can find a phd scenario file [here](phd-setup/nova.scenario). 
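As a suggested functional check once the services below have been configured and started, the Nova control services can be listed from any node. This assumes the /root/keystonerc_admin credentials file used elsewhere in this guide:

    . /root/keystonerc_admin
    nova service-list

The nova-consoleauth, nova-scheduler and nova-conductor entries for the three controllers should be reported as enabled and up.
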
7 | 8 | Install software 9 | ---------------- 10 | 11 | yum install -y openstack-nova-console openstack-nova-novncproxy openstack-utils openstack-nova-api openstack-nova-conductor openstack-nova-scheduler python-cinderclient python-memcached 12 | 13 | Configure Nova API 14 | ------------------ 15 | 16 | openstack-config --set /etc/nova/nova.conf DEFAULT memcached_servers hacontroller1:11211,hacontroller2:11211,hacontroller3:11211 17 | openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_host 192.168.1.22X 18 | openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller-vip.example.com:6080/vnc_auto.html 19 | openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 192.168.1.22X 20 | openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 192.168.1.22X 21 | openstack-config --set /etc/nova/nova.conf database connection mysql://nova:novatest@controller-vip.example.com/nova 22 | openstack-config --set /etc/nova/nova.conf database max_retries -1 23 | openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone 24 | openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen 192.168.1.22X 25 | openstack-config --set /etc/nova/nova.conf DEFAULT metadata_host 192.168.1.22X 26 | openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen 192.168.1.22X 27 | openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen_port 8775 28 | openstack-config --set /etc/nova/nova.conf DEFAULT glance_host controller-vip.example.com 29 | openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API 30 | openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver 31 | openstack-config --set /etc/nova/nova.conf libvirt vif_driver nova.virt.libvirt.vif.LibvirtGenericVIFDriver 32 | openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron 33 | openstack-config --set /etc/nova/nova.conf cinder cinder_catalog_info volume:cinder:internalURL 34 | openstack-config --set /etc/nova/nova.conf conductor use_local false 35 | openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts hacontroller1,hacontroller2,hacontroller3 36 | openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_ha_queues True 37 | openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True 38 | openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret metatest 39 | openstack-config --set /etc/nova/nova.conf neutron url http://controller-vip.example.com:9696/ 40 | openstack-config --set /etc/nova/nova.conf neutron project_domain_id default 41 | openstack-config --set /etc/nova/nova.conf neutron project_name services 42 | openstack-config --set /etc/nova/nova.conf neutron user_domain_id default 43 | openstack-config --set /etc/nova/nova.conf neutron username neutron 44 | openstack-config --set /etc/nova/nova.conf neutron password neutrontest 45 | openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller-vip.example.com:35357/ 46 | openstack-config --set /etc/nova/nova.conf neutron auth_uri http://controller-vip.example.com:5000/ 47 | openstack-config --set /etc/nova/nova.conf neutron auth_plugin password 48 | openstack-config --set /etc/nova/nova.conf neutron region_name regionOne 49 | 50 | # REQUIRED FOR A/A scheduler 51 | openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_host_subset_size 30 52 | openstack-config --set /etc/nova/api-paste.ini 
filter:authtoken auth_plugin password 53 | openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_url http://controller-vip.example.com:35357/ 54 | openstack-config --set /etc/nova/api-paste.ini filter:authtoken username compute 55 | openstack-config --set /etc/nova/api-paste.ini filter:authtoken password novatest 56 | openstack-config --set /etc/nova/api-paste.ini filter:authtoken project_name services 57 | openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_uri http://controller-vip.example.com:5000/ 58 | 59 | 60 | Only run the following command if you are creating a test environment where your hypervisors will be virtual machines 61 | 62 | openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu 63 | 64 | Manage DB 65 | --------- 66 | 67 | On node 1: 68 | 69 | su nova -s /bin/sh -c "nova-manage db sync" 70 | 71 | Start services, open firewall ports 72 | ----------------------------------- 73 | 74 | On all nodes: 75 | 76 | systemctl start openstack-nova-consoleauth 77 | systemctl start openstack-nova-novncproxy 78 | systemctl start openstack-nova-api 79 | systemctl start openstack-nova-scheduler 80 | systemctl start openstack-nova-conductor 81 | systemctl enable openstack-nova-consoleauth 82 | systemctl enable openstack-nova-novncproxy 83 | systemctl enable openstack-nova-api 84 | systemctl enable openstack-nova-scheduler 85 | systemctl enable openstack-nova-conductor 86 | 87 | firewall-cmd --add-port=8773-8775/tcp 88 | firewall-cmd --add-port=8773-8775/tcp --permanent 89 | firewall-cmd --add-port=6080/tcp 90 | firewall-cmd --add-port=6080/tcp --permanent 91 | -------------------------------------------------------------------------------- /keepalived/phd-setup/ceilometer.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 
18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | # - Install Ceilometer 22 | # - Configuring Ceilometer 23 | # - Start services and add firewall rules 24 | 25 | ################################# 26 | # Scenario Requirements Section # 27 | ################################# 28 | = VARIABLES = 29 | 30 | PHD_VAR_network_nic_internal 31 | PHD_VAR_network_nic_external 32 | PHD_VAR_network_hosts_vip 33 | PHD_VAR_network_hosts_controllers 34 | PHD_VAR_network_hosts_rabbitmq 35 | PHD_VAR_network_hosts_memcache 36 | PHD_VAR_network_hosts_mongodb 37 | PHD_VAR_network_ips_controllers 38 | PHD_VAR_network_neutron_externalgateway 39 | PHD_VAR_network_neutron_externalnetwork 40 | PHD_VAR_network_neutron_allocpoolstart 41 | PHD_VAR_network_neutron_allocpoolend 42 | 43 | ################################# 44 | # Scenario Requirements Section # 45 | ################################# 46 | = REQUIREMENTS = 47 | nodes: 1 48 | 49 | ###################### 50 | # Deployment Scripts # 51 | ###################### 52 | = SCRIPTS = 53 | 54 | target=all 55 | .... 56 | myip=$(ip a |grep ${PHD_VAR_network_nic_internal} | grep inet | awk '{print $2}' | awk -F/ '{print $1}' | head -n 1) 57 | 58 | yum install -y openstack-ceilometer-api openstack-ceilometer-central openstack-ceilometer-collector openstack-ceilometer-common openstack-ceilometer-alarm python-ceilometer python-ceilometerclient 59 | 60 | openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken identity_uri http://${PHD_VAR_network_hosts_vip}:35357/ 61 | openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_tenant_name services 62 | openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_user ceilometer 63 | openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_password ceilometertest 64 | openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT memcache_servers ${PHD_VAR_network_hosts_memcache} 65 | openstack-config --set /etc/ceilometer/ceilometer.conf oslo_messaging_rabbit rabbit_hosts ${PHD_VAR_network_hosts_rabbitmq} 66 | openstack-config --set /etc/ceilometer/ceilometer.conf oslo_messaging_rabbit rabbit_ha_queues true 67 | openstack-config --set /etc/ceilometer/ceilometer.conf publisher telemetry_secret ceilometersecret 68 | openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_auth_url http://${PHD_VAR_network_hosts_vip}:5000/v2.0 69 | openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_username ceilometer 70 | openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_tenant_name services 71 | openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_password ceilometertest 72 | openstack-config --set /etc/ceilometer/ceilometer.conf database connection mongodb://${PHD_VAR_network_hosts_mongodb}:27017/ceilometer?replicaSet=ceilometer 73 | openstack-config --set /etc/ceilometer/ceilometer.conf database max_retries -1 74 | 75 | # keep last 5 days data only (value is in secs) 76 | openstack-config --set /etc/ceilometer/ceilometer.conf database metering_time_to_live 432000 77 | openstack-config --set /etc/ceilometer/ceilometer.conf api host ${myip} 78 | 79 | IFS=', ' read -a controller_names <<< "${PHD_VAR_network_hosts_controllers}" 80 | 81 | openstack-config --set /etc/ceilometer/ceilometer.conf coordination backend_url 
"redis://${controller_names[0]}:26379?sentinel=mymaster&sentinel_fallback=${controller_names[0]}:26379&sentinel_fallback=${controller_names[0]}:26379" 82 | 83 | systemctl start openstack-ceilometer-central 84 | systemctl enable openstack-ceilometer-central 85 | systemctl start openstack-ceilometer-collector 86 | systemctl enable openstack-ceilometer-collector 87 | systemctl start openstack-ceilometer-api 88 | systemctl enable openstack-ceilometer-api 89 | systemctl start openstack-ceilometer-alarm-evaluator 90 | systemctl enable openstack-ceilometer-alarm-evaluator 91 | systemctl start openstack-ceilometer-alarm-notifier 92 | systemctl enable openstack-ceilometer-alarm-notifier 93 | systemctl start openstack-ceilometer-notification 94 | systemctl enable openstack-ceilometer-notification 95 | firewall-cmd --add-port=8777/tcp 96 | firewall-cmd --add-port=8777/tcp --permanent 97 | firewall-cmd --add-port=4952/udp 98 | firewall-cmd --add-port=4952/udp --permanent 99 | .... 100 | -------------------------------------------------------------------------------- /keepalived/phd-setup/cinder.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | # - Install Cinder 22 | # - Configure Cinder, using a NFS v3 backend 23 | # - Start services, open firewall rules 24 | 25 | ################################# 26 | # Scenario Requirements Section # 27 | ################################# 28 | = VARIABLES = 29 | 30 | PHD_VAR_network_nic_internal 31 | PHD_VAR_network_hosts_vip 32 | PHD_VAR_network_ips_controllers 33 | PHD_VAR_network_hosts_rabbitmq 34 | PHD_VAR_network_hosts_memcache 35 | PHD_VAR_network_nfs_cindershare 36 | 37 | ################################# 38 | # Scenario Requirements Section # 39 | ################################# 40 | = REQUIREMENTS = 41 | nodes: 1 42 | 43 | ###################### 44 | # Deployment Scripts # 45 | ###################### 46 | = SCRIPTS = 47 | 48 | target=all 49 | .... 
50 | myip=$(ip a |grep ${PHD_VAR_network_nic_internal} | grep inet | awk '{print $2}' | awk -F/ '{print $1}' | head -n 1) 51 | 52 | yum install -y openstack-cinder openstack-utils openstack-selinux python-memcached 53 | 54 | openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:cindertest@${PHD_VAR_network_hosts_vip}/cinder 55 | openstack-config --set /etc/cinder/cinder.conf database max_retries -1 56 | openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone 57 | openstack-config --set /etc/cinder/cinder.conf keystone_authtoken identity_uri http://${PHD_VAR_network_hosts_vip}:35357/ 58 | openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name services 59 | openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder 60 | openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password cindertest 61 | openstack-config --set /etc/cinder/cinder.conf DEFAULT notification_driver messaging 62 | openstack-config --set /etc/cinder/cinder.conf DEFAULT control_exchange cinder 63 | openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_host ${PHD_VAR_network_hosts_vip} 64 | openstack-config --set /etc/cinder/cinder.conf DEFAULT memcache_servers ${PHD_VAR_network_hosts_memcache} 65 | openstack-config --set /etc/cinder/cinder.conf DEFAULT host rhos7-cinder 66 | openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_listen ${myip} 67 | openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_hosts ${PHD_VAR_network_hosts_rabbitmq} 68 | openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_ha_queues true 69 | 70 | cat > /etc/cinder/nfs_exports << EOF 71 | ${PHD_VAR_network_nfs_cindershare} 72 | EOF 73 | 74 | chown root:cinder /etc/cinder/nfs_exports 75 | chmod 0640 /etc/cinder/nfs_exports 76 | openstack-config --set /etc/cinder/cinder.conf DEFAULT nfs_shares_config /etc/cinder/nfs_exports 77 | openstack-config --set /etc/cinder/cinder.conf DEFAULT nfs_sparsed_volumes true 78 | openstack-config --set /etc/cinder/cinder.conf DEFAULT nfs_mount_options v3 79 | openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.nfs.NfsDriver 80 | 81 | .... 82 | 83 | target=$PHD_ENV_nodes1 84 | .... 85 | su cinder -s /bin/sh -c "cinder-manage db sync" 86 | systemctl start openstack-cinder-volume 87 | systemctl enable openstack-cinder-volume 88 | .... 89 | 90 | target=all 91 | .... 92 | systemctl start openstack-cinder-api 93 | systemctl start openstack-cinder-scheduler 94 | systemctl enable openstack-cinder-api 95 | systemctl enable openstack-cinder-scheduler 96 | firewall-cmd --add-port=8776/tcp 97 | firewall-cmd --add-port=8776/tcp --permanent 98 | .... 99 | -------------------------------------------------------------------------------- /keepalived/phd-setup/galera.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. 
When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | # - Install MariaDB / Galera 22 | # - Configure Galera 23 | # - Bootstrap an initial Galera cluster 24 | # - Create databases for OpenStack services 25 | 26 | ################################# 27 | # Scenario Requirements Section # 28 | ################################# 29 | = VARIABLES = 30 | 31 | PHD_VAR_network_nic_internal 32 | 33 | ################################# 34 | # Scenario Requirements Section # 35 | ################################# 36 | = REQUIREMENTS = 37 | nodes: 1 38 | 39 | ###################### 40 | # Deployment Scripts # 41 | ###################### 42 | = SCRIPTS = 43 | 44 | target=all 45 | .... 46 | yum install -y mariadb-galera-server xinetd rsync psmisc 47 | 48 | cat > /etc/sysconfig/clustercheck << EOF 49 | MYSQL_USERNAME="clustercheck" 50 | MYSQL_PASSWORD="redhat" 51 | MYSQL_HOST="localhost" 52 | MYSQL_PORT="3306" 53 | EOF 54 | 55 | systemctl start mysqld 56 | mysql -e "CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY 'redhat';" 57 | systemctl stop mysqld 58 | 59 | myip=$(ip a |grep ${PHD_VAR_network_nic_internal} | grep inet | awk '{print $2}' | awk -F/ '{print $1}' | head -n 1) 60 | 61 | cat > /etc/my.cnf.d/galera.cnf << EOF 62 | [mysqld] 63 | skip-name-resolve=1 64 | binlog_format=ROW 65 | default-storage-engine=innodb 66 | innodb_autoinc_lock_mode=2 67 | innodb_locks_unsafe_for_binlog=1 68 | max_connections=2048 69 | query_cache_size=0 70 | query_cache_type=0 71 | bind_address=${myip} 72 | wsrep_provider=/usr/lib64/galera/libgalera_smm.so 73 | wsrep_cluster_name="galera_cluster" 74 | wsrep_cluster_address="gcomm://192.168.1.221,192.168.1.222,192.168.1.223" 75 | wsrep_slave_threads=1 76 | wsrep_certify_nonPK=1 77 | wsrep_max_ws_rows=131072 78 | wsrep_max_ws_size=1073741824 79 | wsrep_debug=0 80 | wsrep_convert_LOCK_to_trx=0 81 | wsrep_retry_autocommit=1 82 | wsrep_auto_increment_control=1 83 | wsrep_drupal_282555_workaround=0 84 | wsrep_causal_reads=0 85 | wsrep_notify_cmd= 86 | wsrep_sst_method=rsync 87 | EOF 88 | 89 | mkdir -p /etc/systemd/system/mariadb.service.d/ 90 | cat > /etc/systemd/system/mariadb.service.d/limits.conf << EOF 91 | [Service] 92 | LimitNOFILE=16384 93 | EOF 94 | 95 | cat > /etc/xinetd.d/galera-monitor << EOF 96 | service galera-monitor 97 | { 98 | port = 9200 99 | disable = no 100 | socket_type = stream 101 | protocol = tcp 102 | wait = no 103 | user = root 104 | group = root 105 | groups = yes 106 | server = /usr/bin/clustercheck 107 | type = UNLISTED 108 | per_source = UNLIMITED 109 | log_on_success = 110 | log_on_failure = HOST 111 | flags = REUSE 112 | } 113 | EOF 114 | 115 | systemctl daemon-reload 116 | systemctl enable xinetd 117 | systemctl start xinetd 118 | systemctl enable haproxy 119 | systemctl start haproxy 120 | 121 | firewall-cmd --add-service=mysql 122 | firewall-cmd --add-port=4444/tcp 123 | firewall-cmd --add-port=4567/tcp 124 | firewall-cmd --add-port=4568/tcp 125 | firewall-cmd --add-port=4568/tcp --permanent 126 | firewall-cmd --add-service=mysql --permanent 127 | firewall-cmd --add-port=4567/tcp --permanent 128 | firewall-cmd --add-port=4444/tcp --permanent 129 | firewall-cmd --add-port=9300/tcp 130 | firewall-cmd --add-port=9300/tcp --permanent 131 | 
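# Port 9200 is served by the galera-monitor xinetd service defined above
# (which wraps the clustercheck script); HAProxy can use it as an HTTP health
# check to decide which Galera nodes are safe to receive database traffic.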
firewall-cmd --add-port=9200/tcp 132 | firewall-cmd --add-port=9200/tcp --permanent 133 | 134 | systemctl enable mariadb 135 | .... 136 | 137 | target=$PHD_ENV_nodes1 138 | .... 139 | # This is required to allow sudo execution without a tty 140 | sed -i 's/Defaults requiretty/Defaults !requiretty/g' /etc/sudoers 141 | nohup sudo -u mysql /usr/libexec/mysqld --wsrep-cluster-address='gcomm://' < /dev/null > /dev/null 2>&1 & 142 | sleep 30 143 | # A little cleanup 144 | sed -i 's/Defaults !requiretty/Defaults requiretty/g' /etc/sudoers 145 | .... 146 | 147 | target=$PHD_ENV_nodes2 148 | .... 149 | 150 | systemctl start mariadb 151 | sleep 10 152 | .... 153 | 154 | target=$PHD_ENV_nodes3 155 | .... 156 | 157 | systemctl start mariadb 158 | sleep 10 159 | .... 160 | 161 | target=$PHD_ENV_nodes1 162 | .... 163 | 164 | cat > /tmp/mysql.sql << EOF 165 | use mysql; 166 | GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED by 'mysqltest' WITH GRANT OPTION; 167 | CREATE DATABASE keystone; 168 | GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystonetest'; 169 | CREATE DATABASE glance; 170 | GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glancetest'; 171 | CREATE DATABASE cinder; 172 | GRANT ALL ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cindertest'; 173 | CREATE DATABASE neutron; 174 | GRANT ALL ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutrontest'; 175 | CREATE DATABASE nova; 176 | GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'novatest'; 177 | CREATE DATABASE heat; 178 | GRANT ALL ON heat.* TO 'heat'@'%' IDENTIFIED BY 'heattest'; 179 | CREATE DATABASE sahara; 180 | GRANT ALL ON sahara.* TO 'sahara'@'%' IDENTIFIED BY 'saharatest'; 181 | CREATE DATABASE trove; 182 | GRANT ALL ON trove.* TO 'trove'@'%' IDENTIFIED BY 'trovetest'; 183 | FLUSH PRIVILEGES; 184 | EOF 185 | 186 | killall mysqld 187 | # it takes some time for mysqld to actually stop after you kill it 188 | sleep 30 189 | systemctl start mariadb 190 | 191 | mysql < /tmp/mysql.sql > /tmp/mysql.out 192 | rm -f /tmp/mysql.sql 193 | mysqladmin flush-hosts 194 | 195 | .... 196 | -------------------------------------------------------------------------------- /keepalived/phd-setup/glance.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 
18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | # - Installing Glance 22 | # - Configuring Glance 23 | # - Starting services and opening firewall rules 24 | 25 | ################################# 26 | # Scenario Requirements Section # 27 | ################################# 28 | = VARIABLES = 29 | 30 | PHD_VAR_network_nic_internal 31 | PHD_VAR_network_hosts_vip 32 | PHD_VAR_network_ips_controllers 33 | PHD_VAR_network_hosts_rabbitmq 34 | PHD_VAR_network_nfs_glanceshare 35 | 36 | ################################# 37 | # Scenario Requirements Section # 38 | ################################# 39 | = REQUIREMENTS = 40 | nodes: 1 41 | 42 | ###################### 43 | # Deployment Scripts # 44 | ###################### 45 | = SCRIPTS = 46 | 47 | target=all 48 | .... 49 | myip=$(ip a |grep ${PHD_VAR_network_nic_internal} | grep inet | awk '{print $2}' | awk -F/ '{print $1}' | head -n 1) 50 | 51 | yum install -y openstack-glance openstack-utils openstack-selinux nfs-utils 52 | openstack-config --set /etc/glance/glance-api.conf database connection mysql://glance:glancetest@${PHD_VAR_network_hosts_vip}/glance 53 | openstack-config --set /etc/glance/glance-api.conf database max_retries -1 54 | openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone 55 | openstack-config --set /etc/glance/glance-api.conf keystone_authtoken identity_uri http://${PHD_VAR_network_hosts_vip}:35357/ 56 | openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name services 57 | openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance 58 | openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password glancetest 59 | openstack-config --set /etc/glance/glance-api.conf DEFAULT notification_driver messaging 60 | openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host ${myip} 61 | openstack-config --set /etc/glance/glance-api.conf DEFAULT registry_host ${PHD_VAR_network_hosts_vip} 62 | openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_hosts ${PHD_VAR_network_hosts_rabbitmq} 63 | openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_ha_queues true 64 | openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glance:glancetest@${PHD_VAR_network_hosts_vip}/glance 65 | openstack-config --set /etc/glance/glance-registry.conf database max_retries -1 66 | openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone 67 | openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken identity_uri http://${PHD_VAR_network_hosts_vip}:35357/ 68 | openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name services 69 | openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance 70 | openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password glancetest 71 | openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_host ${myip} 72 | .... 73 | 74 | target=$PHD_ENV_nodes1 75 | .... 76 | su glance -s /bin/sh -c "glance-manage db_sync" 77 | .... 78 | 79 | target=all 80 | .... 
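# The block below mounts the shared NFS export (PHD_VAR_network_nfs_glanceshare)
# on /var/lib/glance so that all controllers see the same image store, then
# starts the Glance services and opens their registry/API ports.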
81 | echo "${PHD_VAR_network_nfs_glanceshare} /var/lib/glance nfs vers=3 0 0" >> /etc/fstab 82 | # Workaround for bz#1203820 83 | systemctl start rpcbind 84 | systemctl start nfs-config 85 | systemctl start rpc-statd 86 | mount -a 87 | chown glance:nobody /var/lib/glance 88 | systemctl start openstack-glance-registry 89 | systemctl start openstack-glance-api 90 | systemctl enable openstack-glance-registry 91 | systemctl enable openstack-glance-api 92 | firewall-cmd --add-port=9191/tcp 93 | firewall-cmd --add-port=9191/tcp --permanent 94 | firewall-cmd --add-port=9292/tcp 95 | firewall-cmd --add-port=9292/tcp --permanent 96 | .... 97 | -------------------------------------------------------------------------------- /keepalived/phd-setup/ha-collapsed.variables: -------------------------------------------------------------------------------- 1 | # Expanded to $PHD_VAR_network_domain, $PHD_VAR_network_internal, etc by PHD 2 | # Each scenario file will verify that values have been provided for the variables it requires. 3 | 4 | deployment: collapsed 5 | 6 | hypervisors: 7 | controllers: controller1:oslab1,controller2:oslab2,controller3:oslab3 8 | computenodes: compute1:oslab1,compute2:oslab3 9 | 10 | rhn: 11 | user: jpena@redhat.com 12 | pass: OSA07novp7her 13 | 14 | network: 15 | ips: 16 | vip: 192.168.1.220 17 | controllers: 192.168.1.221,192.168.1.222,192.168.1.223 18 | computeinternal: 192.168.1.224,192.168.1.225 19 | computeexternal: 10.10.10.224,10.10.10.225 20 | gateway: 192.168.1.1 21 | netmask: 255.255.255.0 22 | nfs: 23 | glanceshare: 192.168.1.4:/volumeUSB1/usbshare/openstack/glance 24 | cindershare: 192.168.1.4:/volumeUSB1/usbshare/openstack/cinder 25 | nic: 26 | base: 54:52:00 27 | internal: eth0 28 | external: eth1 29 | hosts: 30 | vip: controller-vip.example.com 31 | controllers: controller1.example.com, controller2.example.com, controller3.example.com 32 | compute: compute1.example.com, compute2.example.com 33 | mongodb: controller1,controller2,controller3 34 | memcache: controller1:11211,controller2:11211,controller3:11211 35 | rabbitmq: controller1,controller2,controller3 36 | domain: example.com 37 | neutron: 38 | externalgateway: 10.10.10.1 39 | externalnetwork: 10.10.10.0/24 40 | allocpoolstart: 10.10.10.100 41 | allocpoolend: 10.10.10.150 42 | ssh: 43 | pubkey: ssh-dss AAAAB3NzaC1kc3MAAACBANTRU30h4EC1sv5kb/KJBDYx0D/YKWAVzMJpeWLzFrECF41Tymte8W5s4oz0CzpQQH4r19YFpeS+jingeakkRc749ImwhJE77yRmPsjaEDe3OZXW+Udz4dikE5HBjuVlEL9UN3yoGfBZCXo9Jm+oRJCH+wx36Pl69IHRZwJIAvQfAAAAFQDTdIeHmLP77PvjG5jo6eEYDTedXwAAAIEAvwKlws7V3qy6Ckd26FcAr1noN+myBdVVpb960F7BPWIxXkA9z12o+3y5JEUJRVTplRf8pBz1JwKdsYS7WJj8teSslmwbQz0kQ61uSNkWqBZGOLE/ddQxCJbD3OTXRZ289A/2mRBcil8mIg82JY1AzNr1OaQQ82ZP35nISA/jeQMAAACBAIeuHCLqkYRf7Xuv2ztFOhPzLrJRRSm7Zcpdxp5Ano7R68Yq1RQbMn0lUIG4QMssviQPyklF/xHYGJLUXBIUfNORn7BYqHikbcguUHKiwiC9io9xR9+zlvgIAIU6bbnsNT9MofTdrRT+H7df6cDP8NO43w8dWFXWX4K3NUU0YjbM 44 | 45 | components: serverprep lb galera rabbitmq keystone memcache glance cinder swift-brick swift neutron nova horizon heat mongodb ceilometer node 46 | 47 | osp: 48 | major: 7 49 | 50 | env: 51 | password: cluster 52 | 53 | # TODO... 54 | password: 55 | cluster: foo 56 | keystone: bar 57 | 58 | -------------------------------------------------------------------------------- /keepalived/phd-setup/heat.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. 
The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | # - Installing Heat 22 | # - Creating the required Heat domain 23 | # - Configuring Heat 24 | # - Starting services and opening firewall rules 25 | 26 | ################################# 27 | # Scenario Requirements Section # 28 | ################################# 29 | = VARIABLES = 30 | 31 | PHD_VAR_network_nic_internal 32 | PHD_VAR_network_nic_external 33 | PHD_VAR_network_hosts_vip 34 | PHD_VAR_network_ips_controllers 35 | PHD_VAR_network_hosts_rabbitmq 36 | PHD_VAR_network_hosts_memcache 37 | PHD_VAR_network_neutron_externalgateway 38 | PHD_VAR_network_neutron_externalnetwork 39 | PHD_VAR_network_neutron_allocpoolstart 40 | PHD_VAR_network_neutron_allocpoolend 41 | 42 | ################################# 43 | # Scenario Requirements Section # 44 | ################################# 45 | = REQUIREMENTS = 46 | nodes: 1 47 | 48 | ###################### 49 | # Deployment Scripts # 50 | ###################### 51 | = SCRIPTS = 52 | 53 | target=all 54 | .... 55 | yum install -y openstack-heat-engine openstack-heat-api openstack-heat-api-cfn openstack-heat-api-cloudwatch python-heatclient openstack-utils python-glanceclient 56 | .... 57 | 58 | target=$PHD_ENV_nodes1 59 | .... 60 | . /root/keystonerc_admin 61 | openstack role create heat_stack_user 62 | TOKEN_ID=$(openstack token issue --format value --column id) 63 | 64 | openstack --os-token=${TOKEN_ID} --os-url=http://${PHD_VAR_network_hosts_vip}:5000/v3 --os-identity-api-version=3 domain create heat --description "Owns users and projects created by heat" 65 | openstack --os-token=${TOKEN_ID} --os-url=http://${PHD_VAR_network_hosts_vip}:5000/v3 --os-identity-api-version=3 user create --password heattest --domain heat --description "Manages users and projects created by heat" heat_domain_admin 66 | openstack --os-token=${TOKEN_ID} --os-url=http://${PHD_VAR_network_hosts_vip}:5000/v3 --os-identity-api-version=3 role add --user heat_domain_admin --domain heat admin 67 | .... 68 | 69 | target=all 70 | .... 71 | myip=$(ip a |grep ${PHD_VAR_network_nic_internal} | grep inet | awk '{print $2}' | awk -F/ '{print $1}' | head -n 1) 72 | 73 | . 
/root/keystonerc_admin 74 | TOKEN_ID=$(openstack token issue --format value --column id) 75 | HEAT_DOMAIN_ID=$(openstack --os-token=${TOKEN_ID} --os-url=http://${PHD_VAR_network_hosts_vip}:5000/v3 --os-identity-api-version=3 domain show heat --column id --format value) 76 | 77 | openstack-config --set /etc/heat/heat.conf DEFAULT stack_domain_admin_password heattest 78 | openstack-config --set /etc/heat/heat.conf DEFAULT stack_domain_admin heat_domain_admin 79 | openstack-config --set /etc/heat/heat.conf DEFAULT stack_user_domain_id ${HEAT_DOMAIN_ID} 80 | 81 | openstack-config --set /etc/heat/heat.conf database connection mysql://heat:heattest@${PHD_VAR_network_hosts_vip}/heat 82 | openstack-config --set /etc/heat/heat.conf database max_retries -1 83 | openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_tenant_name services 84 | openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_user heat 85 | openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_password heattest 86 | openstack-config --set /etc/heat/heat.conf keystone_authtoken service_host ${PHD_VAR_network_hosts_vip} 87 | openstack-config --set /etc/heat/heat.conf keystone_authtoken identity_uri http://${PHD_VAR_network_hosts_vip}:35357/ 88 | openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_uri http://${PHD_VAR_network_hosts_vip}:35357/v2.0 89 | openstack-config --set /etc/heat/heat.conf keystone_authtoken keystone_ec2_uri http://${PHD_VAR_network_hosts_vip}:35357/v2.0 90 | openstack-config --set /etc/heat/heat.conf ec2authtoken auth_uri http://${PHD_VAR_network_hosts_vip}:5000/v2.0 91 | openstack-config --set /etc/heat/heat.conf DEFAULT memcache_servers ${PHD_VAR_network_hosts_memcache} 92 | openstack-config --set /etc/heat/heat.conf heat_api bind_host ${myip} 93 | openstack-config --set /etc/heat/heat.conf heat_api_cfn bind_host ${myip} 94 | openstack-config --set /etc/heat/heat.conf heat_api_cloudwatch bind_host ${myip} 95 | openstack-config --set /etc/heat/heat.conf DEFAULT heat_metadata_server_url http://${PHD_VAR_network_hosts_vip}:8000 96 | openstack-config --set /etc/heat/heat.conf DEFAULT heat_waitcondition_server_url http://${PHD_VAR_network_hosts_vip}:8000/v1/waitcondition 97 | openstack-config --set /etc/heat/heat.conf DEFAULT heat_watch_server_url http://${PHD_VAR_network_hosts_vip}:8003 98 | openstack-config --set /etc/heat/heat.conf oslo_messaging_rabbit rabbit_hosts ${PHD_VAR_network_hosts_rabbitmq} 99 | openstack-config --set /etc/heat/heat.conf oslo_messaging_rabbit rabbit_ha_queues true 100 | openstack-config --set /etc/heat/heat.conf DEFAULT rpc_backend rabbit 101 | openstack-config --set /etc/heat/heat.conf DEFAULT notification_driver heat.openstack.common.notifier.rpc_notifier 102 | openstack-config --set /etc/heat/heat.conf DEFAULT enable_cloud_watch_lite false 103 | .... 104 | 105 | target=$PHD_ENV_nodes1 106 | .... 107 | su heat -s /bin/sh -c "heat-manage db_sync" 108 | .... 109 | 110 | target=all 111 | .... 
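# Ports opened below: 8000 = heat-api-cfn, 8003 = heat-api-cloudwatch,
# 8004 = heat-api (the native orchestration API).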
112 | systemctl start openstack-heat-api 113 | systemctl start openstack-heat-api-cfn 114 | systemctl start openstack-heat-api-cloudwatch 115 | systemctl start openstack-heat-engine 116 | systemctl enable openstack-heat-api 117 | systemctl enable openstack-heat-api-cfn 118 | systemctl enable openstack-heat-api-cloudwatch 119 | systemctl enable openstack-heat-engine 120 | firewall-cmd --add-port=8000/tcp 121 | firewall-cmd --add-port=8000/tcp --permanent 122 | firewall-cmd --add-port=8003/tcp 123 | firewall-cmd --add-port=8003/tcp --permanent 124 | firewall-cmd --add-port=8004/tcp 125 | firewall-cmd --add-port=8004/tcp --permanent 126 | .... 127 | -------------------------------------------------------------------------------- /keepalived/phd-setup/horizon.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | # - Installing Horizon packages 22 | # - Configuring the OpenStack dashboard 23 | # - Starting the service and opening firewall rules 24 | 25 | ################################# 26 | # Scenario Requirements Section # 27 | ################################# 28 | = VARIABLES = 29 | 30 | PHD_VAR_network_nic_internal 31 | PHD_VAR_network_ips_controllers 32 | PHD_VAR_network_hosts_memcache 33 | PHD_VAR_network_hosts_vip 34 | 35 | ################################# 36 | # Scenario Requirements Section # 37 | ################################# 38 | = REQUIREMENTS = 39 | nodes: 1 40 | 41 | ###################### 42 | # Deployment Scripts # 43 | ###################### 44 | = SCRIPTS = 45 | 46 | target=all 47 | .... 48 | yum install -y mod_wsgi httpd mod_ssl python-memcached openstack-dashboard 49 | .... 50 | 51 | target=$PHD_ENV_nodes1 52 | .... 53 | IFS=', ' read -a controller_ips <<< "${PHD_VAR_network_ips_controllers}" 54 | 55 | openssl rand -hex 10 > /tmp/horizon_secret.key 56 | scp -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -p /tmp/horizon_secret.key ${controller_ips[1]}:/tmp 57 | scp -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -p /tmp/horizon_secret.key ${controller_ips[2]}:/tmp 58 | .... 59 | 60 | target=all 61 | .... 
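# The sed expressions below inject the shared SECRET_KEY generated on node 1,
# switch Horizon's session/cache backend to the memcached cluster, and point
# OPENSTACK_HOST at the virtual IP in /etc/openstack-dashboard/local_settings.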
62 | IFS=', ' read -a memcached_hosts <<< "${PHD_VAR_network_hosts_memcache}" 63 | 64 | SECRET_KEY=$(cat /tmp/horizon_secret.key) 65 | rm -f /tmp/horizon_secret.key 66 | 67 | sed -i -e "s#SECRET_KEY.*#SECRET_KEY = \'${SECRET_KEY}\'#g#" /etc/openstack-dashboard/local_settings 68 | 69 | sed -i -e "s#ALLOWED_HOSTS.*#ALLOWED_HOSTS = ['*',]#g" \ 70 | -e "s#^CACHES#SESSION_ENGINE = 'django.contrib.sessions.backends.cache'\nCACHES#g#" \ 71 | -e "s#locmem.LocMemCache'#memcached.MemcachedCache',\n\t'LOCATION' : [ '${memcached_hosts[0]}', '${memcached_hosts[1]}', '${memcached_hosts[2]}', ]#g" \ 72 | -e 's#OPENSTACK_HOST =.*#OPENSTACK_HOST = "${PHD_VAR_network_hosts_vip}"#g' \ 73 | -e "s#^LOCAL_PATH.*#LOCAL_PATH = '/var/lib/openstack-dashboard'#g" \ 74 | /etc/openstack-dashboard/local_settings 75 | 76 | systemctl daemon-reload 77 | systemctl restart httpd 78 | firewall-cmd --add-port=80/tcp 79 | firewall-cmd --add-port=80/tcp --permanent 80 | .... 81 | 82 | 83 | -------------------------------------------------------------------------------- /keepalived/phd-setup/keepalived.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | # - Installing keepalived 22 | # - Configuring keepalived 23 | # - Starting the service and opening the required firewall rule 24 | 25 | ################################# 26 | # Scenario Requirements Section # 27 | ################################# 28 | = VARIABLES = 29 | 30 | PHD_VAR_network_ips_vip 31 | PHD_VAR_network_nic_internal 32 | 33 | ################################# 34 | # Scenario Requirements Section # 35 | ################################# 36 | = REQUIREMENTS = 37 | nodes: 1 38 | 39 | ###################### 40 | # Deployment Scripts # 41 | ###################### 42 | = SCRIPTS = 43 | 44 | target=all 45 | .... 46 | yum install -y keepalived psmisc 47 | 48 | cat > /etc/keepalived/keepalived.conf << EOF 49 | vrrp_script chk_haproxy { 50 | script "/usr/bin/killall -0 haproxy" 51 | interval 2 52 | } 53 | 54 | vrrp_instance VI_PUBLIC { 55 | interface ${PHD_VAR_network_nic_internal} 56 | state BACKUP 57 | virtual_router_id 52 58 | priority 101 59 | virtual_ipaddress { 60 | ${PHD_VAR_network_ips_vip} dev ${PHD_VAR_network_nic_internal} 61 | } 62 | track_script { 63 | chk_haproxy 64 | } 65 | } 66 | 67 | vrrp_sync_group VG1 68 | group { 69 | VI_PUBLIC 70 | } 71 | EOF 72 | 73 | firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -i ${PHD_VAR_network_nic_internal} -d 224.0.0.0/8 -j ACCEPT 74 | firewall-cmd --direct --perm --add-rule ipv4 filter INPUT 0 -i ${PHD_VAR_network_nic_internal} -d 224.0.0.0/8 -j ACCEPT 75 | systemctl start keepalived 76 | systemctl enable keepalived 77 | .... 
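# A quick way to check VRRP behaviour after this step (illustrative, using the
# same variables as above):
#   ip addr show ${PHD_VAR_network_nic_internal} | grep ${PHD_VAR_network_ips_vip}
# Only the current keepalived master should list the VIP, and it should move to
# another controller if haproxy is stopped on the master (the chk_haproxy
# track_script fails and the instance releases the address).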
78 | -------------------------------------------------------------------------------- /keepalived/phd-setup/memcached.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | # - Installing memcached 22 | # - Starting memcached, opening firewall rules 23 | 24 | ################################# 25 | # Scenario Requirements Section # 26 | ################################# 27 | = VARIABLES = 28 | 29 | PHD_VAR_network_ips_vip 30 | 31 | ################################# 32 | # Scenario Requirements Section # 33 | ################################# 34 | = REQUIREMENTS = 35 | nodes: 1 36 | 37 | ###################### 38 | # Deployment Scripts # 39 | ###################### 40 | = SCRIPTS = 41 | 42 | target=all 43 | .... 44 | 45 | yum install -y memcached 46 | systemctl start memcached 47 | systemctl enable memcached 48 | firewall-cmd --add-port=11211/tcp 49 | firewall-cmd --add-port=11211/tcp --permanent 50 | .... 51 | -------------------------------------------------------------------------------- /keepalived/phd-setup/mongodb.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | # - Installing MongoDB 22 | # - Setting up MongoDB, including a 3-way replica set 23 | # - Starting MongoDB, opening firewall ports 24 | 25 | ################################# 26 | # Scenario Requirements Section # 27 | ################################# 28 | = VARIABLES = 29 | 30 | PHD_VAR_network_hosts_mongodb 31 | 32 | ################################# 33 | # Scenario Requirements Section # 34 | ################################# 35 | = REQUIREMENTS = 36 | nodes: 1 37 | 38 | ###################### 39 | # Deployment Scripts # 40 | ###################### 41 | = SCRIPTS = 42 | 43 | target=all 44 | .... 
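# The replica set name "ceilometer" configured below must match the
# "?replicaSet=ceilometer" parameter in the ceilometer.conf database
# connection string set up in ceilometer.scenario.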
45 | 46 | yum install -y mongodb mongodb-server 47 | sed -i -e 's#bind_ip.*#bind_ip = 0.0.0.0#g' /etc/mongod.conf 48 | echo "replSet = ceilometer" >> /etc/mongod.conf 49 | systemctl start mongod 50 | systemctl enable mongod 51 | firewall-cmd --add-port=27017/tcp 52 | firewall-cmd --add-port=27017/tcp --permanent 53 | .... 54 | 55 | target=$PHD_ENV_nodes1 56 | .... 57 | IFS=', ' read -a controller_names <<< "${PHD_VAR_network_hosts_mongodb}" 58 | 59 | cat > /tmp/mongoinit.js << EOF 60 | rs.initiate() 61 | sleep(10000) 62 | rs.add("${controller_names[1]}"); 63 | rs.add("${controller_names[2]}"); 64 | EOF 65 | 66 | mongo /tmp/mongoinit.js 67 | .... 68 | 69 | -------------------------------------------------------------------------------- /keepalived/phd-setup/nova.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | # - Installing Nova 22 | # - Configuring Nova 23 | # - Starting service, opening firewall ports 24 | 25 | ################################# 26 | # Scenario Requirements Section # 27 | ################################# 28 | = VARIABLES = 29 | 30 | PHD_VAR_network_nic_internal 31 | PHD_VAR_network_nic_external 32 | PHD_VAR_network_hosts_vip 33 | PHD_VAR_network_ips_controllers 34 | PHD_VAR_network_hosts_rabbitmq 35 | PHD_VAR_network_hosts_memcache 36 | PHD_VAR_network_neutron_externalgateway 37 | PHD_VAR_network_neutron_externalnetwork 38 | PHD_VAR_network_neutron_allocpoolstart 39 | PHD_VAR_network_neutron_allocpoolend 40 | 41 | ################################# 42 | # Scenario Requirements Section # 43 | ################################# 44 | = REQUIREMENTS = 45 | nodes: 1 46 | 47 | ###################### 48 | # Deployment Scripts # 49 | ###################### 50 | = SCRIPTS = 51 | 52 | target=all 53 | .... 
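# As with the other services, the Nova APIs below bind to this node's internal
# IP (myip); clients reach them through the virtual IP, where HAProxy balances
# requests across the three controllers.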
54 | myip=$(ip a |grep ${PHD_VAR_network_nic_internal} | grep inet | awk '{print $2}' | awk -F/ '{print $1}' | head -n 1) 55 | 56 | yum install -y openstack-nova-console openstack-nova-novncproxy openstack-utils openstack-nova-api openstack-nova-conductor openstack-nova-scheduler python-cinderclient python-memcached 57 | 58 | openstack-config --set /etc/nova/nova.conf DEFAULT memcached_servers ${PHD_VAR_network_hosts_memcache} 59 | openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address ${myip} 60 | openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen ${myip} 61 | openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_host ${myip} 62 | openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://${PHD_VAR_network_hosts_vip}:6080/vnc_auto.html 63 | openstack-config --set /etc/nova/nova.conf database connection mysql://nova:novatest@${PHD_VAR_network_hosts_vip}/nova 64 | openstack-config --set /etc/nova/nova.conf database max_retries -1 65 | openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone 66 | openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen ${myip} 67 | openstack-config --set /etc/nova/nova.conf DEFAULT metadata_host ${myip} 68 | openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen ${myip} 69 | openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen_port 8775 70 | openstack-config --set /etc/nova/nova.conf DEFAULT glance_host ${PHD_VAR_network_hosts_vip} 71 | openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API 72 | openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver 73 | openstack-config --set /etc/nova/nova.conf libvirt vif_driver nova.virt.libvirt.vif.LibvirtGenericVIFDriver 74 | openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron 75 | openstack-config --set /etc/nova/nova.conf cinder cinder_catalog_info volume:cinder:internalURL 76 | openstack-config --set /etc/nova/nova.conf conductor use_local false 77 | openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts ${PHD_VAR_network_hosts_rabbitmq} 78 | openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_ha_queues True 79 | openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True 80 | openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret metatest 81 | openstack-config --set /etc/nova/nova.conf neutron url http://${PHD_VAR_network_hosts_vip}:9696/ 82 | openstack-config --set /etc/nova/nova.conf neutron admin_tenant_name services 83 | openstack-config --set /etc/nova/nova.conf neutron admin_username neutron 84 | openstack-config --set /etc/nova/nova.conf neutron admin_password neutrontest 85 | openstack-config --set /etc/nova/nova.conf neutron admin_auth_url http://${PHD_VAR_network_hosts_vip}:35357/v2.0 86 | openstack-config --set /etc/nova/nova.conf neutron region_name regionOne 87 | 88 | # REQUIRED FOR A/A scheduler 89 | openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_host_subset_size 30 90 | openstack-config --set /etc/nova/api-paste.ini filter:authtoken identity_uri http://${PHD_VAR_network_hosts_vip}:35357/ 91 | openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_tenant_name services 92 | openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_user compute 93 | openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_password 
novatest 94 | 95 | # WARNING: Only run the following if your hypervisors are VMs 96 | openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu 97 | .... 98 | 99 | target=$PHD_ENV_nodes1 100 | .... 101 | su nova -s /bin/sh -c "nova-manage db sync" 102 | .... 103 | 104 | target=all 105 | .... 106 | systemctl start openstack-nova-consoleauth 107 | systemctl start openstack-nova-novncproxy 108 | systemctl start openstack-nova-api 109 | systemctl start openstack-nova-scheduler 110 | systemctl start openstack-nova-conductor 111 | systemctl enable openstack-nova-consoleauth 112 | systemctl enable openstack-nova-novncproxy 113 | systemctl enable openstack-nova-api 114 | systemctl enable openstack-nova-scheduler 115 | systemctl enable openstack-nova-conductor 116 | 117 | firewall-cmd --add-port=8773-8775/tcp 118 | firewall-cmd --add-port=8773-8775/tcp --permanent 119 | firewall-cmd --add-port=6080/tcp 120 | firewall-cmd --add-port=6080/tcp --permanent 121 | .... 122 | -------------------------------------------------------------------------------- /keepalived/phd-setup/rabbitmq.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | # - Installing RabbitMQ 22 | # - Configuring RabbitMQ, with a 3-node cluster 23 | # - Setting TCP keepalived kernel parameters 24 | # - Starting the RabbitMQ cluster, opening firewall ports 25 | 26 | ################################# 27 | # Scenario Requirements Section # 28 | ################################# 29 | = VARIABLES = 30 | 31 | PHD_VAR_network_nic_internal 32 | PHD_VAR_network_ips_controllers 33 | PHD_VAR_network_hosts_rabbitmq 34 | 35 | ################################# 36 | # Scenario Requirements Section # 37 | ################################# 38 | = REQUIREMENTS = 39 | nodes: 1 40 | 41 | ###################### 42 | # Deployment Scripts # 43 | ###################### 44 | = SCRIPTS = 45 | 46 | target=all 47 | .... 48 | yum -y install rabbitmq-server 49 | .... 50 | 51 | target=$PHD_ENV_nodes1 52 | .... 53 | 54 | IFS=', ' read -a controller_ips <<< "${PHD_VAR_network_ips_controllers}" 55 | 56 | cat > /etc/rabbitmq/rabbitmq-env.conf << EOF 57 | NODE_IP_ADDRESS=${controller_ips[0]} 58 | EOF 59 | 60 | systemctl start rabbitmq-server 61 | systemctl stop rabbitmq-server 62 | scp -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -p /var/lib/rabbitmq/.erlang.cookie ${controller_ips[1]}:/tmp 63 | scp -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -p /var/lib/rabbitmq/.erlang.cookie ${controller_ips[2]}:/tmp 64 | .... 65 | 66 | target=$PHD_ENV_nodes2 67 | .... 
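# Nodes 2 and 3 reuse the Erlang cookie distributed from node 1; every member
# of a RabbitMQ cluster must share the same /var/lib/rabbitmq/.erlang.cookie,
# otherwise the nodes will refuse to join each other.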
68 | 69 | IFS=', ' read -a controller_ips <<< "${PHD_VAR_network_ips_controllers}" 70 | 71 | cat > /etc/rabbitmq/rabbitmq-env.conf << EOF 72 | NODE_IP_ADDRESS=${controller_ips[1]} 73 | EOF 74 | 75 | cp /tmp/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie 76 | chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie 77 | .... 78 | 79 | target=$PHD_ENV_nodes3 80 | .... 81 | 82 | IFS=', ' read -a controller_ips <<< "${PHD_VAR_network_ips_controllers}" 83 | 84 | cat > /etc/rabbitmq/rabbitmq-env.conf << EOF 85 | NODE_IP_ADDRESS=${controller_ips[2]} 86 | EOF 87 | 88 | cp /tmp/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie 89 | chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie 90 | .... 91 | 92 | target=all 93 | .... 94 | IFS=', ' read -a nodes <<< "${PHD_VAR_network_hosts_rabbitmq}" 95 | 96 | cat > /etc/rabbitmq/rabbitmq.config << EOF 97 | 98 | [ 99 | {rabbit, [ 100 | {cluster_nodes, {['rabbit@${nodes[0]}', 'rabbit@${nodes[1]}', 'rabbit@${nodes[2]}'], disc}}, 101 | {cluster_partition_handling, ignore}, 102 | {default_user, <<"guest">>}, 103 | {default_pass, <<"guest">>}, 104 | {tcp_listen_options, [binary, 105 | {packet, raw}, 106 | {reuseaddr, true}, 107 | {backlog, 128}, 108 | {nodelay, true}, 109 | {exit_on_close, false}, 110 | {keepalive, true}]} 111 | ]}, 112 | {kernel, [ 113 | {inet_dist_listen_max, 44001}, 114 | {inet_dist_listen_min, 44001} 115 | ]} 116 | ]. 117 | 118 | EOF 119 | 120 | cat > /etc/sysctl.d/tcpka.conf << EOF 121 | net.ipv4.tcp_keepalive_intvl = 1 122 | net.ipv4.tcp_keepalive_probes = 5 123 | net.ipv4.tcp_keepalive_time = 5 124 | EOF 125 | 126 | sysctl -p /etc/sysctl.d/tcpka.conf 127 | 128 | firewall-cmd --add-port=5672/tcp 129 | firewall-cmd --add-port=4369/tcp 130 | firewall-cmd --add-port=5672/tcp --permanent 131 | firewall-cmd --add-port=4369/tcp --permanent 132 | firewall-cmd --add-port=44001/tcp 133 | firewall-cmd --add-port=44001/tcp --permanent 134 | systemctl enable rabbitmq-server 135 | systemctl start rabbitmq-server 136 | .... 137 | 138 | target=$PHD_ENV_nodes1 139 | .... 140 | rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}' 141 | .... 142 | -------------------------------------------------------------------------------- /keepalived/phd-setup/readme.txt: -------------------------------------------------------------------------------- 1 | Assumptions: 2 | 3 | 1. We have a basic rhel7-base image. It is just a cloud image where we have manually injected a root ssh key, and allow direct root login (so the account is not locked). About the rest of the parameters: 4 | 5 | - NIC: default NAT 6 | - Disk: 60 GB, qcow2, virtio 7 | - CPU: 1 8 | - RAM: 4 GB 9 | 10 | 2. About SSH keys, it is important that the base image includes SSH keys in /root/.ssh/authorized_keys for: 11 | 12 | - The system running phd 13 | - The hypervisor that will run the VM 14 | 15 | -------------------------------------------------------------------------------- /keepalived/phd-setup/redis.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. 
When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | # - Installing Redis 22 | # - Setting up Redis, including a master and two slaves 23 | # - Setting up Sentinel for HA 24 | # - Starting services, opening firewall ports 25 | 26 | ################################# 27 | # Scenario Requirements Section # 28 | ################################# 29 | = VARIABLES = 30 | 31 | PHD_VAR_network_nic_internal 32 | PHD_VAR_network_ips_controllers 33 | PHD_VAR_network_hosts_rabbitmq 34 | 35 | ################################# 36 | # Scenario Requirements Section # 37 | ################################# 38 | = REQUIREMENTS = 39 | nodes: 1 40 | 41 | ###################### 42 | # Deployment Scripts # 43 | ###################### 44 | = SCRIPTS = 45 | 46 | target=all 47 | .... 48 | yum install -y redis 49 | .... 50 | 51 | target=$PHD_ENV_nodes1 52 | .... 53 | IFS=', ' read -a controller_ips <<< "${PHD_VAR_network_ips_controllers}" 54 | 55 | sed --in-place "s/bind 127.0.0.1/bind 127.0.0.1 ${controller_ips[0]}/" /etc/redis.conf 56 | .... 57 | 58 | target=$PHD_ENV_nodes2 59 | .... 60 | 61 | IFS=', ' read -a controller_ips <<< "${PHD_VAR_network_ips_controllers}" 62 | 63 | sed --in-place "s/bind 127.0.0.1/bind 127.0.0.1 ${controller_ips[1]}/" /etc/redis.conf 64 | echo slaveof ''${controller_ips[0]}'' 6379 >> /etc/redis.conf 65 | .... 66 | 67 | target=$PHD_ENV_nodes3 68 | .... 69 | 70 | IFS=', ' read -a controller_ips <<< "${PHD_VAR_network_ips_controllers}" 71 | 72 | sed --in-place "s/bind 127.0.0.1/bind 127.0.0.1 ${controller_ips[2]}/" /etc/redis.conf 73 | echo slaveof ''${controller_ips[0]}'' 6379 >> /etc/redis.conf 74 | .... 75 | 76 | target=all 77 | .... 78 | IFS=', ' read -a controller_ips <<< "${PHD_VAR_network_ips_controllers}" 79 | 80 | cat > /etc/redis-sentinel.conf << EOF 81 | 82 | sentinel monitor mymaster ${controller_ips[0]} 6379 2 83 | sentinel down-after-milliseconds mymaster 30000 84 | sentinel failover-timeout mymaster 180000 85 | sentinel parallel-syncs mymaster 1 86 | min-slaves-to-write 1 87 | min-slaves-max-lag 10 88 | logfile /var/log/redis/sentinel.log 89 | EOF 90 | 91 | firewall-cmd --add-port=6379/tcp 92 | firewall-cmd --add-port=6379/tcp --permanent 93 | firewall-cmd --add-port=26379/tcp 94 | firewall-cmd --add-port=26379/tcp --permanent 95 | systemctl enable redis 96 | systemctl start redis 97 | systemctl enable redis-sentinel 98 | systemctl start redis-sentinel 99 | 100 | .... 101 | 102 | -------------------------------------------------------------------------------- /keepalived/phd-setup/sahara.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. 
When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | # - Installing Sahara 22 | # - Configuring Sahara 23 | # - Starting services, opening firewall ports 24 | 25 | ################################# 26 | # Scenario Requirements Section # 27 | ################################# 28 | = VARIABLES = 29 | 30 | PHD_VAR_network_nic_internal 31 | PHD_VAR_network_nic_external 32 | PHD_VAR_network_hosts_vip 33 | PHD_VAR_network_ips_controllers 34 | PHD_VAR_network_hosts_rabbitmq 35 | PHD_VAR_network_hosts_memcache 36 | PHD_VAR_network_neutron_externalgateway 37 | PHD_VAR_network_neutron_externalnetwork 38 | PHD_VAR_network_neutron_allocpoolstart 39 | PHD_VAR_network_neutron_allocpoolend 40 | 41 | ################################# 42 | # Scenario Requirements Section # 43 | ################################# 44 | = REQUIREMENTS = 45 | nodes: 1 46 | 47 | ###################### 48 | # Deployment Scripts # 49 | ###################### 50 | = SCRIPTS = 51 | 52 | target=all 53 | .... 54 | myip=$(ip a |grep ${PHD_VAR_network_nic_internal} | grep inet | awk '{print $2}' | awk -F/ '{print $1}' | head -n 1) 55 | 56 | yum install -y openstack-sahara-api openstack-sahara-engine openstack-sahara-common openstack-sahara python-saharaclient 57 | 58 | openstack-config --set /etc/sahara/sahara.conf DEFAULT host ${myip} 59 | openstack-config --set /etc/sahara/sahara.conf DEFAULT use_floating_ips True 60 | openstack-config --set /etc/sahara/sahara.conf DEFAULT use_neutron True 61 | openstack-config --set /etc/sahara/sahara.conf DEFAULT rpc_backend rabbit 62 | openstack-config --set /etc/sahara/sahara.conf oslo_messaging_rabbit rabbit_hosts ${PHD_VAR_network_hosts_rabbitmq} 63 | openstack-config --set /etc/sahara/sahara.conf oslo_messaging_rabbit rabbit_port 5672 64 | openstack-config --set /etc/sahara/sahara.conf oslo_messaging_rabbit rabbit_use_ssl False 65 | openstack-config --set /etc/sahara/sahara.conf oslo_messaging_rabbit rabbit_userid guest 66 | openstack-config --set /etc/sahara/sahara.conf oslo_messaging_rabbit rabbit_password guest 67 | openstack-config --set /etc/sahara/sahara.conf oslo_messaging_rabbit rabbit_login_method AMQPLAIN 68 | openstack-config --set /etc/sahara/sahara.conf oslo_messaging_rabbit rabbit_ha_queues true 69 | openstack-config --set /etc/sahara/sahara.conf DEFAULT notification_topics notifications 70 | openstack-config --set /etc/sahara/sahara.conf database connection mysql://sahara:saharatest@${PHD_VAR_network_hosts_vip}/sahara 71 | openstack-config --set /etc/sahara/sahara.conf keystone_authtoken auth_uri http://${PHD_VAR_network_hosts_vip}:5000/v2.0 72 | openstack-config --set /etc/sahara/sahara.conf keystone_authtoken identity_uri http://${PHD_VAR_network_hosts_vip}:35357/ 73 | openstack-config --set /etc/sahara/sahara.conf keystone_authtoken admin_user sahara 74 | openstack-config --set /etc/sahara/sahara.conf keystone_authtoken admin_password saharatest 75 | openstack-config --set /etc/sahara/sahara.conf keystone_authtoken admin_tenant_name services 76 | openstack-config --set /etc/sahara/sahara.conf DEFAULT log_file /var/log/sahara/sahara.log 77 | .... 78 | 79 | target=$PHD_ENV_nodes1 80 | .... 81 | sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head 82 | .... 
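# sahara-db-manage only needs to run once, hence the single-node target above;
# it creates the schema in the "sahara" database set up earlier in galera.scenario.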
83 | 84 | target=all 85 | .... 86 | firewall-cmd --add-port=8386/tcp 87 | firewall-cmd --add-port=8386/tcp --permanent 88 | systemctl enable openstack-sahara-api 89 | systemctl enable openstack-sahara-engine 90 | systemctl start openstack-sahara-api 91 | systemctl start openstack-sahara-engine 92 | .... 93 | 94 | -------------------------------------------------------------------------------- /keepalived/phd-setup/serverprep.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | # - Tweaking the IP stack to allow nonlocal binding and adjusting keepalive timings 22 | # - Configuring haproxy 23 | # - Adding the virtual IPs to the cluster 24 | # - Putting haproxy under the cluster's control 25 | 26 | ################################# 27 | # Scenario Requirements Section # 28 | ################################# 29 | = VARIABLES = 30 | 31 | PHD_VAR_network_ips_controllers 32 | PHD_VAR_network_ips_computeinternal 33 | PHD_VAR_network_hosts_controllers 34 | PHD_VAR_network_hosts_compute 35 | PHD_VAR_network_ips_vip 36 | PHD_VAR_network_hosts_vip 37 | 38 | ################################# 39 | # Scenario Requirements Section # 40 | ################################# 41 | = REQUIREMENTS = 42 | nodes: 1 43 | 44 | ###################### 45 | # Deployment Scripts # 46 | ###################### 47 | = SCRIPTS = 48 | 49 | target=all 50 | .... 51 | 52 | IFS=', ' read -a controller_names <<< "${PHD_VAR_network_hosts_controllers}" 53 | IFS=', ' read -a controller_ips <<< "${PHD_VAR_network_ips_controllers}" 54 | IFS=', ' read -a compute_names <<< "${PHD_VAR_network_hosts_compute}" 55 | IFS=', ' read -a compute_ips <<< "${PHD_VAR_network_ips_computeinternal}" 56 | 57 | addhosts="" 58 | 59 | for item in "${controller_names[@]}" 60 | do 61 | shortname=$(echo ${controller_names[item]} | awk -F. '{print $1}') 62 | 63 | addhosts=$(printf "${addhosts}\n${controller_ips[item]} ${controller_names[item]} ${shortname}") 64 | done 65 | 66 | for item in "${controller_names[@]}" 67 | do 68 | shortname=$(echo ${compute_names[item]} | awk -F. '{print $1}') 69 | 70 | addhosts=$(printf "${addhosts}\n${compute_ips[item]} ${compute_names[item]} ${shortname}") 71 | done 72 | 73 | shortname=$(echo ${PHD_VAR_network_hosts_vip} | awk -F. '{print $1}') 74 | addhosts=$(printf "${addhosts}\n${PHD_VAR_network_ips_vip} ${PHD_VAR_network_hosts_vip} ${shortname}") 75 | 76 | echo "$addhosts" >> /etc/hosts 77 | .... 
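# Illustration only: for readers not using phd, the block above boils down to
# pairing each hostname with its address by array index. The sketch below uses
# placeholder values (hostnames and IPs are examples matching the network used
# elsewhere in this guide) and only prints the entries it would add.

target=local
....
controller_names=(hacontroller1.example.com hacontroller2.example.com hacontroller3.example.com)
controller_ips=(192.168.1.221 192.168.1.222 192.168.1.223)

addhosts=""
for i in "${!controller_names[@]}"; do
    shortname=${controller_names[$i]%%.*}
    addhosts=$(printf "${addhosts}\n${controller_ips[$i]} ${controller_names[$i]} ${shortname}")
done

echo "$addhosts"
....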
78 | -------------------------------------------------------------------------------- /keepalived/phd-setup/test.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e 4 | 5 | declare -A nodeMap 6 | declare -A variables 7 | declare -A cluster 8 | 9 | nodeMap["hypervisors"]="oslab1 oslab2 oslab3" 10 | nodeMap["controllers"]="controller1 controller2 controller3" 11 | nodeMap["compute"]="compute1 compute2" 12 | nodeMap["serverprep"]="controller1 controller2 controller3 compute1 compute2" 13 | 14 | variables["nodes"]="" 15 | variables["components"]="hypervisors serverprep lb galera rabbitmq memcached redis mongodb keepalived keystone glance cinder swift neutron nova ceilometer heat horizon sahara trove compute" 16 | #variables["components"]="compute" 17 | variables["network_domain"]="example.com" 18 | variables["config"]="ha-collapsed" 19 | 20 | 21 | function create_phd_definition() { 22 | scenario=$1 23 | definition=$2 24 | rm -f ${definition} 25 | 26 | nodes=${variables["nodes"]} 27 | 28 | if [ "x$nodes" = x ]; then 29 | nodes=${nodeMap[$scenario]} 30 | fi 31 | 32 | if [ "x$nodes" = "x" ]; then 33 | for n in `seq 1 3`; do 34 | nodes="$nodes controller${n}" 35 | done 36 | fi 37 | 38 | nodelist="nodes=" 39 | for node in $nodes; do 40 | nodelist="${nodelist}${node}.${variables["network_domain"]} " 41 | done 42 | 43 | echo "$nodelist" >> ${definition} 44 | cat ${definition} 45 | } 46 | 47 | 48 | function run_phd() { 49 | phd_exec -s ./${1}.scenario -d ./phd.${1}.conf -V ./${variables["config"]}.variables 50 | } 51 | 52 | scenarios=${variables["components"]} 53 | 54 | 55 | for scenario in $scenarios; do 56 | create_phd_definition ${scenario} ./phd.${scenario}.conf 57 | echo "$(date) :: Beginning scenario $scenario" 58 | run_phd ${scenario} 59 | done 60 | -------------------------------------------------------------------------------- /keepalived/rabbitmq-config.md: -------------------------------------------------------------------------------- 1 | Introduction 2 | ------------ 3 | 4 | RabbitMQ can create a native cluster, by grouping several nodes and replicating all message queues. Clustered RabbitMQ environments tolerate the failure of individual nodes. Nodes can be started and stopped at will. 5 | 6 | ![](Rabbitmq_clustering.jpg "RabbitMQ clustering") 7 | 8 | **Note:** Access to RabbitMQ is not handled by HAproxy, as there are known issues integrating RabbitMQ with HAProxy. Instead consumers must be supplied with the full list of hosts running RabbitMQ with `rabbit_hosts` and `rabbit_ha_queues` options, and connect directly to one or more RabbitMQ servers. Reconnections in case of a node failure are handled automatically. 9 | 10 | The following commands will be executed on all controller nodes, unless stated otherwise. 11 | 12 | You can find a phd scenario file [here](phd-setup/rabbitmq.scenario). 
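To make this concrete, each service's oslo.messaging settings list every broker explicitly rather than pointing at a VIP. The pattern is repeated in the individual service guides below; shown here against nova.conf purely as an illustration:

    openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts hacontroller1,hacontroller2,hacontroller3
    openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_ha_queues true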
13 | 14 | Install package 15 | --------------- 16 | 17 | yum -y install rabbitmq-server 18 | 19 | Create erlang cookie and distribute 20 | ----------------------------------- 21 | 22 | **On node 1:** 23 | 24 | cat > /etc/rabbitmq/rabbitmq-env.conf << EOF 25 | NODE_IP_ADDRESS=192.168.1.221 26 | EOF 27 | 28 | systemctl start rabbitmq-server 29 | systemctl stop rabbitmq-server 30 | scp -p /var/lib/rabbitmq/.erlang.cookie hacontroller2:/var/lib/rabbitmq 31 | scp -p /var/lib/rabbitmq/.erlang.cookie hacontroller3:/var/lib/rabbitmq 32 | 33 | Set permissions for erlang cookie 34 | --------------------------------- 35 | 36 | **On node 2 and node 3:** 37 | 38 | 39 | chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie 40 | 41 | cat > /etc/rabbitmq/rabbitmq-env.conf << EOF 42 | NODE_IP_ADDRESS=192.168.1.22X 43 | EOF 44 | 45 | ### Create rabbitmq configuration 46 | 47 | **On all nodes:** 48 | 49 | cat > /etc/rabbitmq/rabbitmq.config << EOF 50 | 51 | [ 52 | {rabbit, [ 53 | {cluster_nodes, {['rabbit@hacontroller1', 'rabbit@hacontroller2', 'rabbit@hacontroller3'], disc}}, 54 | {cluster_partition_handling, ignore}, 55 | {default_user, <<"guest">>}, 56 | {default_pass, <<"guest">>}, 57 | {tcp_listen_options, [binary, 58 | {packet, raw}, 59 | {reuseaddr, true}, 60 | {backlog, 128}, 61 | {nodelay, true}, 62 | {exit_on_close, false}, 63 | {keepalive, true}]} 64 | ]}, 65 | {kernel, [ 66 | {inet_dist_listen_max, 44001}, 67 | {inet_dist_listen_min, 44001} 68 | ]} 69 | ]. 70 | 71 | EOF 72 | 73 | Set kernel TCP keepalive parameters 74 | ----------------------------------- 75 | 76 | cat > /etc/sysctl.d/tcpka.conf << EOF 77 | net.ipv4.tcp_keepalive_intvl = 1 78 | net.ipv4.tcp_keepalive_probes = 5 79 | net.ipv4.tcp_keepalive_time = 5 80 | EOF 81 | 82 | sysctl -p /etc/sysctl.d/tcpka.conf 83 | 84 | Start services and open firewall ports 85 | -------------------------------------- 86 | 87 | firewall-cmd --add-port=5672/tcp 88 | firewall-cmd --add-port=4369/tcp 89 | firewall-cmd --add-port=5672/tcp --permanent 90 | firewall-cmd --add-port=4369/tcp --permanent 91 | firewall-cmd --add-port=44001/tcp 92 | firewall-cmd --add-port=44001/tcp --permanent 93 | systemctl enable rabbitmq-server 94 | systemctl start rabbitmq-server 95 | 96 | And check everything is going fine by running: 97 | 98 |
 # rabbitmqctl cluster_status
 99 | 
100 | Cluster status of node rabbit@hacontroller1 ...
101 | [{nodes,[{disc,[rabbit@hacontroller1,rabbit@hacontroller2,
102 | rabbit@hacontroller3]}]},
103 | {running_nodes,[rabbit@hacontroller3,rabbit@hacontroller2,
104 | rabbit@hacontroller1]},
105 | {cluster_name,<<"rabbit@hacontroller1.example.com">>}, {partitions,[]}]
106 | 
107 | ...done.
108 | 
109 | 
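All three controllers should be listed under `running_nodes`, and the `partitions` list should be empty. As a convenience, the partition entry can be pulled out directly:

    rabbitmqctl cluster_status | grep partitions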
110 | Set HA mode for all queues 111 | -------------------------- 112 | 113 | **On node 1:** 114 | 115 | rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}' 116 | -------------------------------------------------------------------------------- /keepalived/rabbitmq-restart.md: -------------------------------------------------------------------------------- 1 | Introduction 2 | ------------ 3 | 4 | In general, RabbitMQ does a good job at restarting the cluster when all nodes are stared at the same time. However, we may find times were this is not the case, and we will have to restart the cluster manually. 5 | 6 | According to [](http://previous.rabbitmq.com/v3_3_x/clustering.html) *"the last node to go down must be the first node to be brought online. If this doesn't happen, the nodes will wait 30 seconds for the last disc node to come back online, and fail afterwards."*. Thus, it is necessary to find the last node going down and start it. Depending on how the nodes were started, you may see some nodes running and some stopped. 7 | 8 | Checking RabbitMQ cluster status 9 | -------------------------------- 10 | 11 | Run the following command to verify the current RabbitMQ cluster status: 12 | 13 | rabbitmqctl cluster_status 14 | 15 | Cluster status of node rabbit@hacontroller3 ... 16 | [{nodes,[{disc,[rabbit@hacontroller1,rabbit@hacontroller2, 17 | rabbit@hacontroller3]}]}, 18 | {running_nodes,[rabbit@hacontroller2,rabbit@hacontroller1, 19 | rabbit@hacontroller3]}, 20 | {cluster_name,<<"rabbit@hacontroller1.example.com">>}, 21 | {partitions,[]}] 22 | 23 | ### Some nodes are running 24 | 25 | If some nodes are running, the most probable reason is that the failed nodes timed out before finding the last node to come back online. In this case, start rabbitmq-server on the failed nodes. 26 | 27 | [root@hacontroller1 ~]# systemctl start rabbitmq-server 28 | 29 | ### None of the nodes are running 30 | 31 | In this case, we need to find which node should be started first. 32 | 33 | Select a node as first node, start rabbitmq-server, then start it on the remaining nodes. 34 | 35 | [root@hacontroller2 ~]# systemctl start rabbitmq-server 36 | [root@hacontroller3 ~]# systemctl start rabbitmq-server 37 | [root@hacontroller1 ~]# systemctl start rabbitmq-server 38 | 39 | Check that all nodes are running rabbitmq-server. If not, stop any surviving rabbitmq-server and select a different node as first node. 40 | -------------------------------------------------------------------------------- /keepalived/redis-config.md: -------------------------------------------------------------------------------- 1 | Introduction 2 | ------------ 3 | 4 | Redis in a key-value cache and store, used by Ceilometer with the [tooz](https://github.com/openstack/tooz) library. It uses an master-slave architecture for high availability, where a single node is used for writes and a number of slaves replicate data from it. Using [Sentinel](http://redis.io/topics/sentinel), it is possible monitor node health and fail over automatically to another node if needed. By configuring Ceilometer to access the Sentinel processes, high availability from the consumer point of view is transparent. 5 | 6 | The following commands will be executed on all controller nodes, unless otherwise stated. 7 | 8 | You can find a phd scenario file [here](phd-setup/redis.scenario). 
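As an illustration of the consumer side, Ceilometer's coordination backend can be pointed at the Sentinel processes rather than at a single Redis node. The option name and URL syntax below are based on the tooz documentation rather than taken from this repository, so treat them as a sketch and check [ceilometer-config.md](ceilometer-config.md) for the settings actually used in this guide:

    openstack-config --set /etc/ceilometer/ceilometer.conf coordination backend_url \
        'redis://hacontroller1:26379?sentinel=mymaster&sentinel_fallback=hacontroller2:26379&sentinel_fallback=hacontroller3:26379'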
9 | 10 | Install redis 11 | ------------- 12 | 13 | yum install -y redis 14 | 15 | Configure bind IP, set master and slaves 16 | ---------------------------------------- 17 | 18 | On all nodes: 19 | 20 | sed --in-place 's/bind 127.0.0.1/bind 127.0.0.1 192.168.1.X/' /etc/redis.conf 21 | 22 | On node2 and 3: 23 | 24 | echo slaveof '''' 6379 >> /etc/redis.conf 25 | 26 | Configure Sentinel, used for master failover 27 | -------------------------------------------- 28 | 29 | **On all nodes:** 30 | 31 | cat > /etc/redis-sentinel.conf << EOF 32 | 33 | sentinel monitor mymaster 6379 2 34 | sentinel down-after-milliseconds mymaster 30000 35 | sentinel failover-timeout mymaster 180000 36 | sentinel parallel-syncs mymaster 1 37 | min-slaves-to-write 1 38 | min-slaves-max-lag 10 39 | logfile /var/log/redis/sentinel.log 40 | EOF 41 | 42 | Configure firewall, start services 43 | ---------------------------------- 44 | 45 | firewall-cmd --add-port=6379/tcp 46 | firewall-cmd --add-port=6379/tcp --permanent 47 | firewall-cmd --add-port=26379/tcp 48 | firewall-cmd --add-port=26379/tcp --permanent 49 | systemctl enable redis 50 | systemctl start redis 51 | systemctl enable redis-sentinel 52 | systemctl start redis-sentinel 53 | -------------------------------------------------------------------------------- /keepalived/sahara-config.md: -------------------------------------------------------------------------------- 1 | Introduction 2 | ------------ 3 | 4 | **Important:** this configuration assumes that the controller nodes can access the floating IP network (10.10.10.0/24 in the example configuration). This was not the case on [the controller node configuration](controller-node.md), because the NIC used for the provider network did not have an IP address. You can accomplish this by setting up routes in the default gateway (192.168.1.1), or creating a separate route on the controller nodes. 5 | 6 | The following commands will be executed on all controller nodes, unless otherwise stated. 7 | 8 | You can find a phd scenario file [here](phd-setup/sahara.scenario). 
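If you take the second approach, the route on each controller node could look like the following; the next hop and interface name are examples, so adjust them to your network layout:

    # next hop and interface name (eth1) are examples -- adapt to your environment
    ip route add 10.10.10.0/24 via 192.168.1.1
    echo "10.10.10.0/24 via 192.168.1.1" >> /etc/sysconfig/network-scripts/route-eth1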
9 | 10 | Install software 11 | ---------------- 12 | 13 | yum install -y openstack-sahara-api openstack-sahara-engine openstack-sahara-common openstack-sahara python-saharaclient 14 | 15 | Configure Sahara 16 | ---------------- 17 | 18 | openstack-config --set /etc/sahara/sahara.conf DEFAULT host 192.168.1.22X 19 | openstack-config --set /etc/sahara/sahara.conf DEFAULT use_floating_ips True 20 | openstack-config --set /etc/sahara/sahara.conf DEFAULT use_neutron True 21 | openstack-config --set /etc/sahara/sahara.conf DEFAULT rpc_backend rabbit 22 | openstack-config --set /etc/sahara/sahara.conf oslo_messaging_rabbit rabbit_hosts hacontroller1,hacontroller2,hacontroller3 23 | openstack-config --set /etc/sahara/sahara.conf oslo_messaging_rabbit rabbit_port 5672 24 | openstack-config --set /etc/sahara/sahara.conf oslo_messaging_rabbit rabbit_use_ssl False 25 | openstack-config --set /etc/sahara/sahara.conf oslo_messaging_rabbit rabbit_userid guest 26 | openstack-config --set /etc/sahara/sahara.conf oslo_messaging_rabbit rabbit_password guest 27 | openstack-config --set /etc/sahara/sahara.conf oslo_messaging_rabbit rabbit_login_method AMQPLAIN 28 | openstack-config --set /etc/sahara/sahara.conf oslo_messaging_rabbit rabbit_ha_queues true 29 | openstack-config --set /etc/sahara/sahara.conf DEFAULT notification_topics notifications 30 | openstack-config --set /etc/sahara/sahara.conf database connection mysql://sahara:saharatest@controller-vip.example.com/sahara 31 | openstack-config --set /etc/sahara/sahara.conf keystone_authtoken auth_uri http://controller-vip.example.com:5000/ 32 | openstack-config --set /etc/sahara/sahara.conf keystone_authtoken auth_plugin password 33 | openstack-config --set /etc/sahara/sahara.conf keystone_authtoken auth_url http://controller-vip.example.com:35357/ 34 | openstack-config --set /etc/sahara/sahara.conf keystone_authtoken username sahara 35 | openstack-config --set /etc/sahara/sahara.conf keystone_authtoken password saharatest 36 | openstack-config --set /etc/sahara/sahara.conf keystone_authtoken project_name services 37 | openstack-config --set /etc/sahara/sahara.conf keystone_authtoken admin_tenant_name services 38 | openstack-config --set /etc/sahara/sahara.conf keystone_authtoken admin_user sahara 39 | openstack-config --set /etc/sahara/sahara.conf keystone_authtoken admin_password saharatest 40 | openstack-config --set /etc/sahara/sahara.conf DEFAULT log_file /var/log/sahara/sahara.log 41 | 42 | Manage DB 43 | --------- 44 | 45 | On node 1: 46 | 47 | sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head 48 | 49 | 50 | Start services, open firewall ports 51 | ----------------------------------- 52 | firewall-cmd --add-port=8386/tcp 53 | firewall-cmd --add-port=8386/tcp --permanent 54 | systemctl enable openstack-sahara-api 55 | systemctl enable openstack-sahara-engine 56 | systemctl start openstack-sahara-api 57 | systemctl start openstack-sahara-engine 58 | 59 | Testing 60 | ------- 61 | 62 | On node 1, run the following commands to test the Sahara API: 63 | 64 | . /root/keystonerc_admin 65 | sahara plugin-list 66 | 67 | Further Sahara testing requires creating a specific virtual machine image, which is outside the scope of this document. You can find instructions on [the Sahara wiki](http://docs.openstack.org/developer/sahara/devref/quickstart.html#upload-an-image-to-the-image-service). 
68 | -------------------------------------------------------------------------------- /keepalived/swift-config.md: -------------------------------------------------------------------------------- 1 | Introduction 2 | ------------ 3 | 4 | We need to have an additional disk, `/dev/vdb` in our test available for Swift usage. 5 | 6 | The following commands will be executed on all controller nodes, unless otherwise stated. 7 | 8 | You can find a phd scenario file [here](phd-setup/swift.scenario). 9 | 10 | Install software 11 | ---------------- 12 | 13 | yum install -y openstack-swift-object openstack-swift-container openstack-swift-account openstack-swift-proxy openstack-utils rsync xfsprogs 14 | 15 | Create XFS file system for additional disk, and mount it 16 | -------------------------------------------------------- 17 | 18 | mkfs.xfs /dev/vdb 19 | mkdir -p /srv/node/vdb 20 | echo "/dev/vdb /srv/node/vdb xfs defaults 1 2" >> /etc/fstab 21 | mount -a 22 | chown -R swift:swift /srv/node 23 | restorecon -R /srv/node 24 | 25 | Configure account, container and object services 26 | ------------------------------------------------ 27 | 28 | openstack-config --set /etc/swift/object-server.conf DEFAULT bind_ip 192.168.1.22X 29 | openstack-config --set /etc/swift/object-server.conf DEFAULT devices /srv/node 30 | openstack-config --set /etc/swift/account-server.conf DEFAULT bind_ip 192.168.1.22X 31 | openstack-config --set /etc/swift/account-server.conf DEFAULT devices /srv/node 32 | openstack-config --set /etc/swift/container-server.conf DEFAULT bind_ip 192.168.1.22X 33 | openstack-config --set /etc/swift/container-server.conf DEFAULT devices /srv/node 34 | chown -R root:swift /etc/swift 35 | 36 | Start account, container and object services, open firewall ports 37 | ----------------------------------------------------------------- 38 | 39 | systemctl start openstack-swift-account 40 | systemctl start openstack-swift-container 41 | systemctl start openstack-swift-object 42 | systemctl enable openstack-swift-account 43 | systemctl enable openstack-swift-container 44 | systemctl enable openstack-swift-object 45 | 46 | firewall-cmd --add-port=6200/tcp 47 | firewall-cmd --add-port=6200/tcp --permanent 48 | firewall-cmd --add-port=6201/tcp 49 | firewall-cmd --add-port=6201/tcp --permanent 50 | firewall-cmd --add-port=6202/tcp 51 | firewall-cmd --add-port=6202/tcp --permanent 52 | 53 | Configure swift proxy and object expirer 54 | ---------------------------------------- 55 | 56 | openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_uri http://controller-vip.example.com:5000/ 57 | openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_plugin password 58 | openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_url http://controller-vip.example.com:35357/ 59 | openstack-config --set /etc/swift/proxy-server.conf filter:authtoken username swift 60 | openstack-config --set /etc/swift/proxy-server.conf filter:authtoken password swifttest 61 | openstack-config --set /etc/swift/proxy-server.conf filter:authtoken project_name services 62 | openstack-config --set /etc/swift/proxy-server.conf filter:cache memcache_servers hacontroller1:11211,hacontroller2:11211,hacontroller3:11211 63 | openstack-config --set /etc/swift/proxy-server.conf DEFAULT bind_ip 192.168.1.22X 64 | openstack-config --set /etc/swift/object-expirer.conf filter:cache memcache_servers hacontroller1:11211,hacontroller2:11211,hacontroller3:11211 65 | openstack-config --set 
/etc/swift/object-expirer.conf object-expirer concurrency 100 66 | 67 | Set Ceilometer hook 68 | ------------------- 69 | 70 | On node 1: 71 | 72 | cat >> /etc/swift/swift.conf << EOF 73 | [filter:ceilometer] 74 | use = egg:ceilometer#swift 75 | [pipeline:main] 76 | pipeline = healthcheck cache authtoken keystoneauth proxy-server ceilometer 77 | EOF 78 | 79 | Configure hash path suffix 80 | -------------------------- 81 | 82 | On node 1: 83 | 84 | openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix $(openssl rand -hex 10) 85 | 86 | Create rings 87 | ------------ 88 | 89 | On node 1: 90 | 91 | swift-ring-builder /etc/swift/object.builder create 16 3 24 92 | swift-ring-builder /etc/swift/container.builder create 16 3 24 93 | swift-ring-builder /etc/swift/account.builder create 16 3 24 94 | swift-ring-builder /etc/swift/account.builder add z1-192.168.1.221:6202/vdb 10 95 | swift-ring-builder /etc/swift/container.builder add z1-192.168.1.221:6201/vdb 10 96 | swift-ring-builder /etc/swift/object.builder add z1-192.168.1.221:6200/vdb 10 97 | swift-ring-builder /etc/swift/account.builder add z2-192.168.1.222:6202/vdb 10 98 | swift-ring-builder /etc/swift/container.builder add z2-192.168.1.222:6201/vdb 10 99 | swift-ring-builder /etc/swift/object.builder add z2-192.168.1.222:6200/vdb 10 100 | swift-ring-builder /etc/swift/account.builder add z3-192.168.1.223:6202/vdb 10 101 | swift-ring-builder /etc/swift/container.builder add z3-192.168.1.223:6201/vdb 10 102 | swift-ring-builder /etc/swift/object.builder add z3-192.168.1.223:6200/vdb 10 103 | swift-ring-builder /etc/swift/account.builder rebalance 104 | swift-ring-builder /etc/swift/container.builder rebalance 105 | swift-ring-builder /etc/swift/object.builder rebalance 106 | 107 | cd /etc/swift 108 | tar cvfz /tmp/swift_configs.tgz swift.conf *.builder *.gz 109 | scp /tmp/swift_configs.tgz hacontroller2:/tmp 110 | scp /tmp/swift_configs.tgz hacontroller3:/tmp 111 | chown -R root:swift /etc/swift 112 | 113 | Import swift configuration from node 1 114 | -------------------------------------- 115 | 116 | On nodes 2 and 3: 117 | 118 | cd /etc/swift 119 | tar xvfz /tmp/swift_configs.tgz 120 | chown -R root:swift /etc/swift 121 | restorecon -R /etc/swift 122 | 123 | Start services, open firewall ports 124 | ----------------------------------- 125 | 126 | On all nodes: 127 | 128 | systemctl start openstack-swift-proxy 129 | systemctl enable openstack-swift-proxy 130 | systemctl start openstack-swift-object-expirer 131 | systemctl enable openstack-swift-object-expirer 132 | firewall-cmd --add-port=8080/tcp 133 | firewall-cmd --add-port=8080/tcp --permanent 134 | 135 | Test 136 | ---- 137 | 138 | On any node: 139 | 140 | . 
/root/keystonerc_admin 141 | swift list 142 | swift upload test /tmp/cirros-0.3.3-x86_64-disk.img 143 | swift list 144 | swift list test 145 | swift download test tmp/cirros-0.3.3-x86_64-disk.img 146 | -------------------------------------------------------------------------------- /make-vm: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | onlyone=$1 4 | 5 | export PHD_VAR_network_nic_base="54:52:00" 6 | export PHD_VAR_vm_base="/srv/rhos6-rhel7-vms/rhos6-rhel7-base.img" 7 | export PHD_VAR_vm_vcpu="1" 8 | export PHD_VAR_vm_ram="2048" 9 | export PHD_ENV_nodes1=east-01.lab.bos.redhat.com 10 | 11 | cat<<-EOF > /localvms/template.xml 12 | 13 | VM_NAME 14 | ${PHD_VAR_vm_ram}000 15 | ${PHD_VAR_vm_ram}000 16 | ${PHD_VAR_vm_cpus} 17 | 18 | hvm 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | destroy 28 | restart 29 | restart 30 | 31 | /usr/libexec/qemu-kvm 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 | 51 | 52 | 53 | 54 | 55 | EOF 56 | sequence=16 57 | lastoct="$(hostname -s | sed -e 's#^[a-z]*-##g' -e 's#^0*##g')" 58 | offset="$(echo ${PHD_ENV_nodes1} | sed -e 's#^[a-z]*-##g' -e 's#^0*##g' -e 's#\..*##')" 59 | for section in lb db rabbitmq memcache glance cinder swift-brick swift neutron nova horizon heat mongodb ceilometer qpid node keystone; do 60 | 61 | dobuild=0 62 | if [ -z $onlyone ]; then 63 | dobuild=1 64 | elif [ $onlyone = $section ]; then 65 | dobuild=1 66 | fi 67 | 68 | if [ $dobuild = 1 ]; then 69 | cd /localvms/ 70 | target=rhos6-${section}$(( ${lastoct} - ${offset} )) 71 | virsh destroy $target > /dev/null 2>&1 72 | virsh undefine $target > /dev/null 2>&1 73 | cp template.xml ${target}.xml 74 | sed -i.sed s#VM_NAME#${target}#g ${target}.xml 75 | sed -i.sed s#EXTERNAL_MAC#${PHD_VAR_network_nic_base}:0${lastoct}:00:${sequence}#g ${target}.xml 76 | sed -i.sed s#INTERNAL_MAC#${PHD_VAR_network_nic_base}:0${lastoct}:01:${sequence}#g ${target}.xml 77 | sed -i.sed s:source\ file.*\/:source\ file=\'/localvms/${target}.cow\'\/:g ${target}.xml 78 | diff -u template.xml ${target}.xml 79 | rm -f /localvms/${target}.cow 80 | qemu-img create -b /localvms/$(basename ${PHD_VAR_vm_base}) -f qcow2 /localvms/${target}.cow 81 | virsh define ${target}.xml 82 | if [ $? != 0 ]; then exit 1; fi 83 | virsh start ${target} 84 | if [ $? != 0 ]; then exit 1; fi 85 | rm ${target}.xml.sed ${target}.xml 86 | fi 87 | 88 | sequence=$((sequence + 1)) 89 | done 90 | -------------------------------------------------------------------------------- /pcmk/baremetal-rollback.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 
18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # We start with 3 (or more, up to 16) nodes running a minimal CentOS 6 21 | # 22 | # Tasks to be performed include: 23 | # - setting up the required repositories from which to download Openstack and the HA-Addon 24 | # - disabling firewalls and SElinux. This is a necessary evil until the proper policies can be written. 25 | # - creating network bridges for use by VMs hosting OpenStack services 26 | # - normalizing network interface names 27 | # - fixing multicast 28 | # - removing /home and making the root partition as large as possible to maximumize the amount of space available to openstack 29 | 30 | ################################# 31 | # Scenario Requirements Section # 32 | ################################# 33 | = VARIABLES = 34 | 35 | ################################# 36 | # Scenario Requirements Section # 37 | ################################# 38 | = REQUIREMENTS = 39 | nodes: 9 40 | 41 | ###################### 42 | # Deployment Scripts # 43 | ###################### 44 | = SCRIPTS = 45 | 46 | target=all 47 | .... 48 | lvconvert --merge /dev/mapper/*baremetal_snap 49 | .... 50 | 51 | target=local 52 | .... 53 | # Reboot each node and wait for it to return 54 | 55 | # disable set -e when calling phd_cmd_* because 56 | # phd doesn't manage all return codes properly 57 | set +e 58 | for node in $(echo $PHD_ENV_nodes); do 59 | phd_cmd_exec "reboot > /dev/null 2>&1" "$node" 60 | phd_wait_connection 2400 $node || exit 1 61 | done 62 | .... 63 | 64 | target=all 65 | .... 66 | # wait for the old snapshot to be merged/deleted 67 | loop=0 68 | while ! lvcreate -s -n baremetal_snap -l100%FREE /dev/mapper/*root; do 69 | sleep 1 70 | if [ "$loop" = 240 ]; then 71 | echo "Unknown error waiting for old snap to be deleted/merged" 72 | exit 1 73 | fi 74 | loop=$((loop + 1)) 75 | done 76 | .... 77 | 78 | -------------------------------------------------------------------------------- /pcmk/baremetal.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # We start with 3 (or more, up to 16) nodes running a minimal CentOS 6 21 | # 22 | # Tasks to be performed include: 23 | # - setting up the required repositories from which to download Openstack and the HA-Addon 24 | # - disabling firewalls and SElinux. This is a necessary evil until the proper policies can be written. 
25 | # - creating network bridges for use by VMs hosting OpenStack services 26 | # - normalizing network interface names 27 | # - fixing multicast 28 | # - removing /home and making the root partition as large as possible to maximumize the amount of space available to openstack 29 | 30 | ################################# 31 | # Scenario Requirements Section # 32 | ################################# 33 | = VARIABLES = 34 | 35 | PHD_VAR_network_domain 36 | PHD_VAR_network_internal 37 | PHD_VAR_network_nic_external 38 | PHD_VAR_network_nic_internal 39 | PHD_VAR_network_named_forwarders 40 | PHD_VAR_rpm_download 41 | 42 | ################################# 43 | # Scenario Requirements Section # 44 | ################################# 45 | = REQUIREMENTS = 46 | nodes: 9 47 | 48 | ###################### 49 | # Deployment Scripts # 50 | ###################### 51 | = SCRIPTS = 52 | 53 | target=all 54 | .... 55 | 56 | yum install -y http://rhos-release.virt.bos.redhat.com/repos/rhos-release/rhos-release-latest.noarch.rpm wget 57 | 58 | rhos-release 7 59 | 60 | wget -O /etc/yum.repos.d/test.repo http://www.kronosnet.org/testrepo/test.repo 61 | 62 | yum clean all 63 | yum update -y 64 | 65 | # ntpd conflicts with chrony 66 | yum erase -y chrony 67 | rm -f /etc/chrony* 68 | 69 | yum install -y pacemaker fence-agents resource-agents pcs libvirt qemu-kvm bind-utils net-tools tcpdump ntp ntpdate sos nfs-utils 70 | 71 | # The cluster shouldn't need NTP configured, but without it the 72 | # network goes bye-bye when using DHCP 73 | # 74 | # Must point to clock.redhat.com to work internally 75 | sed -i s/^server.*// /etc/ntp.conf 76 | echo "server $PHD_VAR_network_clock iburst" >> /etc/ntp.conf 77 | echo $PHD_VAR_network_clock > /etc/ntp/step-tickers 78 | 79 | #sync_to_hardware clock 80 | echo "SYNC_HWCLOCK=yes" >> /etc/sysconfig/ntpdate 81 | 82 | systemctl enable ntpdate 83 | 84 | systemctl enable ntpd 85 | 86 | sed -i -e 's/=enforcing/=disabled/g' /etc/sysconfig/selinux 87 | sed -i -e 's/=enforcing/=disabled/g' /etc/selinux/config 88 | 89 | systemctl disable firewalld 90 | 91 | systemctl enable libvirtd 92 | systemctl start libvirtd 93 | virsh net-destroy default 94 | virsh net-undefine default 95 | 96 | lastoct="$(hostname -s | sed -e 's#^[a-z]*-##g' -e 's#^0*##g')" 97 | 98 | cat > /etc/sysconfig/network-scripts/ifcfg-ext0 << EOF 99 | DEVICE=ext0 100 | NAME=ext0 101 | TYPE=Bridge 102 | BOOTPROTO=dhcp 103 | ONBOOT=yes 104 | IPV6INIT=yes 105 | IPV6_AUTOCONF=yes 106 | EOF 107 | 108 | cat > /etc/sysconfig/network-scripts/ifcfg-vmnet0 << EOF 109 | DEVICE=vmnet0 110 | NAME=vmnet0 111 | TYPE=Bridge 112 | BOOTPROTO=static 113 | ONBOOT=yes 114 | IPV6INIT=yes 115 | IPV6_AUTOCONF=yes 116 | IPADDR=${PHD_VAR_network_internal}.$lastoct 117 | NETMASK=255.255.255.0 118 | NETWORK=${PHD_VAR_network_internal}.0 119 | EOF 120 | 121 | for device in `ls -1 /sys/class/net`; do 122 | case ${PHD_VAR_network_nic_external} in 123 | *${device}*) 124 | cat > /etc/sysconfig/network-scripts/ifcfg-${device} << EOF 125 | DEVICE=${device} 126 | BOOTPROTO=none 127 | ONBOOT=yes 128 | BRIDGE=ext0 129 | NAME=${device} 130 | EOF 131 | ;; 132 | esac 133 | case ${PHD_VAR_network_nic_internal} in 134 | *${device}*) 135 | cat > /etc/sysconfig/network-scripts/ifcfg-${device} << EOF 136 | DEVICE=${device} 137 | BOOTPROTO=none 138 | ONBOOT=yes 139 | BRIDGE=vmnet0 140 | NAME=${device} 141 | EOF 142 | ;; 143 | esac 144 | done 145 | 146 | if grep -q ip_forward /etc/sysctl.conf; then 147 | sed -i -e 's#ip_forward.*#ip_forward = 1#g' /etc/sysctl.conf 148 | 
else 149 | echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf 150 | fi 151 | 152 | echo "echo 1 > /sys/class/net/ext0/bridge/multicast_querier" >> /etc/rc.d/rc.local 153 | echo "echo 1 > /sys/class/net/vmnet0/bridge/multicast_querier" >> /etc/rc.d/rc.local 154 | chmod +x /etc/rc.d/rc.local 155 | 156 | # Turn off auto-generation of resolv.conf so we can override it 157 | # make a backup that we will need for gateway scenario otherwise 158 | # we have a catch 22 159 | rm -f /etc/resolv.conf.backup 160 | cp /etc/resolv.conf /etc/resolv.conf.backup 161 | echo PEERDNS=no >> /etc/sysconfig/network-scripts/ifcfg-ext0 162 | echo search vmnet.${PHD_VAR_network_domain} ${PHD_VAR_network_domain} > /etc/resolv.conf 163 | echo nameserver ${PHD_VAR_network_internal}.1 >> /etc/resolv.conf 164 | 165 | # get rid of /home from beaker 166 | sed -i -e 's#.*home.*##g' /etc/fstab 167 | umount /home 168 | 169 | # remove home lv 170 | lvremove -f /dev/mapper/*home 171 | 172 | # expand root lv to 50% vg 173 | lvresize -f -l50%VG /dev/mapper/*root 174 | 175 | # expand root fs 176 | xfs_growfs /dev/mapper/*-root 177 | 178 | # create the snapshot 179 | lvcreate -s -n baremetal_snap -l100%FREE /dev/mapper/*root 180 | 181 | # regenerate initramfs to include dm-snapshot modules/utils 182 | for i in $(ls /boot/vmlinuz-*x86*); do 183 | ver=$(basename $i | sed -e 's#vmlinuz-##g') 184 | dracut -f --kver $ver 185 | done 186 | 187 | .... 188 | 189 | # Implied by the reboot below 190 | #target=all 191 | #.... 192 | #service network restart 193 | #/etc/rc.local 194 | #.... 195 | 196 | target=local 197 | .... 198 | # Reboot each node and wait for it to return 199 | 200 | # disable set -e when calling phd_cmd_* because 201 | # phd doesn't manage all return codes properly 202 | set +e 203 | for node in $(echo $PHD_ENV_nodes); do 204 | phd_cmd_exec "reboot > /dev/null 2>&1" "$node" 205 | phd_wait_connection 2400 $node || exit 1 206 | done 207 | .... 208 | -------------------------------------------------------------------------------- /pcmk/basic-cluster.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 
18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | # - installing the cluster software 22 | # - enabling the pcs daemon to allow remote management 23 | # - setting a password for the hacluster user for use with pcs 24 | # - authenticating to pcs on the other hosts with the hacluster user and password 25 | # - creating and starting the cluster 26 | # - configuring fencing using the multicast addresses specified for fence_virt on the bare metal hosts 27 | 28 | ################################# 29 | # Scenario Requirements Section # 30 | ################################# 31 | 32 | = VARIABLES = 33 | 34 | PHD_VAR_env_password 35 | PHD_VAR_network_clock 36 | PHD_VAR_network_internal 37 | PHD_VAR_secrets_fence_xvm 38 | 39 | ################################# 40 | # Scenario Requirements Section # 41 | ################################# 42 | = REQUIREMENTS = 43 | nodes: 1 44 | 45 | ###################### 46 | # Deployment Scripts # 47 | ###################### 48 | = SCRIPTS = 49 | 50 | target=all 51 | .... 52 | 53 | yum install -y http://rhos-release.virt.bos.redhat.com/repos/rhos-release/rhos-release-latest.noarch.rpm 54 | 55 | rhos-release 7 56 | 57 | wget -O /etc/yum.repos.d/test.repo http://www.kronosnet.org/testrepo/test.repo 58 | 59 | yum update -y 60 | 61 | # install the packages 62 | yum install -y pcs pacemaker corosync fence-agents-all resource-agents 63 | 64 | # enable pcsd 65 | systemctl enable pcsd 66 | systemctl start pcsd 67 | 68 | systemctl disable firewalld 69 | systemctl stop firewalld 70 | 71 | # The cluster shouldn't need NTP configured, but without it the 72 | # network goes bye-bye when using DHCP 73 | # 74 | # Must point to clock.redhat.com to work internally 75 | sed -i s/^server.*// /etc/ntp.conf 76 | echo "server $PHD_VAR_network_clock iburst" >> /etc/ntp.conf 77 | echo $PHD_VAR_network_clock > /etc/ntp/step-tickers 78 | 79 | #sync_to_hardware clock 80 | echo "SYNC_HWCLOCK=yes" >> /etc/sysconfig/ntpdate 81 | 82 | systemctl enable ntpdate 83 | systemctl start ntpdate 84 | 85 | systemctl enable ntpd 86 | systemctl start ntpd 87 | 88 | # set a password for hacluster user. password should be the same on all nodes 89 | echo ${PHD_VAR_env_password} | passwd --stdin hacluster 90 | 91 | # Now mount /srv so that we can use $PHD_VAR_osp_configdir further down 92 | 93 | if grep -q srv /etc/fstab; then 94 | echo /srv is already mounted; 95 | else 96 | mkdir -p /srv 97 | echo "${PHD_VAR_network_internal}.1:/srv /srv nfs defaults,v3 0 0" >> /etc/fstab 98 | mount /srv 99 | fi 100 | 101 | # Set up the authkey 102 | mkdir -p /etc/cluster 103 | echo ${PHD_VAR_secrets_fence_xvm} > /etc/cluster/fence_xvm.key 104 | 105 | .... 106 | 107 | target=$PHD_ENV_nodes1 108 | .... 
109 | short_nodes="" 110 | for node in $PHD_ENV_nodes; do 111 | short_nodes="${short_nodes} $(echo ${node} | sed s/\\..*//g)" 112 | done 113 | 114 | # autheticate nodes, requires all nodes to have pcsd up and running 115 | # the -p option is used to give the password on command line and make it easier to script 116 | pcs cluster auth $short_nodes -u hacluster -p ${PHD_VAR_env_password} --force 117 | 118 | # Construct and start the cluster 119 | # The cluster needs a unique name, base it on the first node's name 120 | pcs cluster setup --force --name $(echo $PHD_ENV_nodes1 | sed -e s/[0-9]//g -e s/\\..*//g ) ${short_nodes} 121 | pcs cluster enable --all 122 | pcs cluster start --all 123 | 124 | # Give the cluster a moment to come up 125 | sleep 5 126 | 127 | # Assumes bare metal nodes are: 128 | # - named with a trailing offset 129 | # - numbered sequentially 130 | # - the first node is a proxy 131 | # - total of three nodes 132 | # - are configured with fence-virtd (see virt-hosts.scenario) 133 | # 134 | # If any of the above is not true, change the fencing devices below 135 | 136 | pcs stonith create fence1 fence_xvm multicast_address=225.0.0.2 137 | pcs stonith create fence2 fence_xvm multicast_address=225.0.0.3 138 | pcs stonith create fence3 fence_xvm multicast_address=225.0.0.4 139 | 140 | # For clones this is not so important 141 | # 142 | # However we really don't wan't VIPs moving around as it can cause 143 | # fencing to fail (eg. if vip-db is stopped, then keystone won't 144 | # function and we can't confirm if nova recognized the compute node is 145 | # down) 146 | pcs resource defaults resource-stickiness=INFINITY 147 | 148 | .... 149 | -------------------------------------------------------------------------------- /pcmk/beaker.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # We start with 3 (or more, up to 16) nodes running a minimal CentOS 6 21 | # 22 | # Tasks to be performed include: 23 | # - setting up the required repositories from which to download Openstack and the HA-Addon 24 | # - disabling firewalls and SElinux. This is a necessary evil until the proper policies can be written. 
25 | # - creating network bridges for use by VMs hosting OpenStack services 26 | # - normalizing network interface names 27 | # - fixing multicast 28 | # - removing /home and making the root partition as large as possible to maximumize the amount of space available to openstack 29 | 30 | ################################# 31 | # Scenario Requirements Section # 32 | ################################# 33 | = VARIABLES = 34 | 35 | PHD_VAR_network_domain 36 | PHD_VAR_beaker_disttree 37 | 38 | ################################# 39 | # Scenario Requirements Section # 40 | ################################# 41 | = REQUIREMENTS = 42 | nodes: 7 43 | 44 | ###################### 45 | # Deployment Scripts # 46 | ###################### 47 | = SCRIPTS = 48 | 49 | target=local 50 | .... 51 | # Reboot each node and wait for it to return 52 | # REQUIRES kinit and we don't check for completition yet 53 | for node in $(echo $PHD_ENV_nodes); do 54 | loop=0 55 | bkr system-provision --distro-tree ${PHD_VAR_beaker_disttree} $node 56 | while [ "$(ping $node -c 3 -q | grep 'packets transmitted' | sed -e 's#.*transmitted, ##g' -e 's# received.*##g')" != 0 ] && \ 57 | [ "$loop" -lt 180 ]; do 58 | loop=$((loop + 1)) 59 | echo "Waiting for beaker to kick in ($loop)" 60 | sleep 1 61 | done 62 | done 63 | 64 | # disable set -e when calling phd_cmd_* because 65 | # phd doesn't manage all return codes properly 66 | set +e 67 | for node in $(echo $PHD_ENV_nodes); do 68 | phd_wait_connection 2400 $node || exit 1 69 | done 70 | .... 71 | -------------------------------------------------------------------------------- /pcmk/ceilometer-test.sh: -------------------------------------------------------------------------------- 1 | . ${PHD_VAR_env_configdir}/keystonerc_admin 2 | for m in storage.objects image network volume instance ; do ceilometer sample-list -m $m | tail -2 ; done 3 | 4 | # https://bugzilla.redhat.com/show_bug.cgi?id=1127526#c2 <- for A/A testing 5 | -------------------------------------------------------------------------------- /pcmk/cinder-test.sh: -------------------------------------------------------------------------------- 1 | . ${PHD_VAR_env_configdir}/keystonerc_admin 2 | 3 | openstack volume list 4 | openstack volume create --size 10 test-volume 5 | openstack volume list 6 | openstack volume delete test-volume 7 | openstack volume list 8 | -------------------------------------------------------------------------------- /pcmk/cinder.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 
18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | 22 | ################################# 23 | # Scenario Requirements Section # 24 | ################################# 25 | = VARIABLES = 26 | 27 | PHD_VAR_deployment 28 | PHD_VAR_osp_major 29 | PHD_VAR_osp_configdir 30 | PHD_VAR_network_internal 31 | PHD_VAR_network_hosts_memcache 32 | PHD_VAR_network_hosts_rabbitmq 33 | 34 | ################################# 35 | # Scenario Requirements Section # 36 | ################################# 37 | = REQUIREMENTS = 38 | nodes: 1 39 | 40 | ###################### 41 | # Deployment Scripts # 42 | ###################### 43 | = SCRIPTS = 44 | 45 | target=all 46 | .... 47 | yum install -y openstack-cinder openstack-utils python-memcached python-keystonemiddleware python-openstackclient 48 | 49 | # Pending ack from cinder team 50 | #openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v1_api false 51 | #openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v2_api true 52 | 53 | openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:cindertest@vip-db/cinder 54 | openstack-config --set /etc/cinder/cinder.conf database max_retries -1 55 | 56 | openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone 57 | openstack-config --set /etc/cinder/cinder.conf keystone_authtoken identity_uri http://vip-keystone:35357/ 58 | openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://vip-keystone:5000/ 59 | openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name services 60 | openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder 61 | openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password cindertest 62 | 63 | openstack-config --set /etc/cinder/cinder.conf DEFAULT notification_driver messaging 64 | openstack-config --set /etc/cinder/cinder.conf DEFAULT control_exchange cinder 65 | 66 | openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_host vip-glance 67 | 68 | openstack-config --set /etc/cinder/cinder.conf DEFAULT memcache_servers ${PHD_VAR_network_hosts_memcache} 69 | 70 | openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_hosts ${PHD_VAR_network_hosts_rabbitmq} 71 | openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_ha_queues true 72 | openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit heartbeat_timeout_threshold 60 73 | 74 | # rdo${PHD_VAR_osp_major}-cinder isn't the name of a real host or an IP 75 | # Its the name which we should advertise ourselves as and for A/P it should be the same everywhere 76 | openstack-config --set /etc/cinder/cinder.conf DEFAULT host rdo${PHD_VAR_osp_major}-cinder 77 | openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_listen $(ip addr show dev eth1 scope global | grep dynamic| sed -e 's#.*inet ##g' -e 's#/.*##g') 78 | openstack-config --set /etc/cinder/cinder.conf DEFAULT nfs_shares_config /etc/cinder/nfs_exports 79 | openstack-config --set /etc/cinder/cinder.conf DEFAULT nfs_sparsed_volumes true 80 | openstack-config --set /etc/cinder/cinder.conf DEFAULT nfs_mount_options v3 81 | 82 | openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.nfs.NfsDriver 83 | 84 | # NOTE: this config section is to enable and configure the NFS cinder driver. 
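# Optional sanity check (illustration only, not required by the scenario):
# confirm the NFS server referenced in nfs_exports below is actually exporting
# before cinder-volume starts. showmount ships with nfs-utils.
showmount -e ${PHD_VAR_network_internal}.1 || echo "WARNING: no NFS exports visible from this node"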
85 | 86 | # Create the directory on the server 87 | mkdir -p $PHD_VAR_osp_configdir/cinder 88 | 89 | chown -R cinder:cinder $PHD_VAR_osp_configdir/cinder 90 | 91 | cat > /etc/cinder/nfs_exports << EOF 92 | ${PHD_VAR_network_internal}.1:$PHD_VAR_osp_configdir/cinder 93 | EOF 94 | 95 | chown root:cinder /etc/cinder/nfs_exports 96 | chmod 0640 /etc/cinder/nfs_exports 97 | 98 | .... 99 | 100 | 101 | target=$PHD_ENV_nodes1 102 | .... 103 | su cinder -s /bin/sh -c "cinder-manage db sync" 104 | 105 | # create services in pacemaker 106 | pcs resource create openstack-cinder-api systemd:openstack-cinder-api --clone interleave=true 107 | pcs resource create openstack-cinder-scheduler systemd:openstack-cinder-scheduler --clone interleave=true 108 | 109 | # Volume must be A/P for now. See https://bugzilla.redhat.com/show_bug.cgi?id=1193229 110 | pcs resource create openstack-cinder-volume systemd:openstack-cinder-volume 111 | 112 | pcs constraint order start openstack-cinder-api-clone then openstack-cinder-scheduler-clone 113 | pcs constraint colocation add openstack-cinder-scheduler-clone with openstack-cinder-api-clone 114 | pcs constraint order start openstack-cinder-scheduler-clone then openstack-cinder-volume 115 | pcs constraint colocation add openstack-cinder-volume with openstack-cinder-scheduler-clone 116 | 117 | if [ $PHD_VAR_deployment = collapsed ]; then 118 | pcs constraint order start openstack-keystone-clone then openstack-cinder-api-clone 119 | fi 120 | .... 121 | 122 | -------------------------------------------------------------------------------- /pcmk/compute-cluster.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | 22 | ################################# 23 | # Scenario Requirements Section # 24 | ################################# 25 | = REQUIREMENTS = 26 | nodes: 1 27 | 28 | = VARIABLES = 29 | 30 | PHD_VAR_env_password 31 | PHD_VAR_network_internal 32 | 33 | ###################### 34 | # Deployment Scripts # 35 | ###################### 36 | = SCRIPTS = 37 | 38 | target=all 39 | .... 40 | # make sure we are not running on a controller node 41 | pcs resource show swift-fs > /dev/null 2>&1 42 | if [ $? != 0 ]; then 43 | 44 | pcs resource show computenode > /dev/null 2>&1 45 | if [ $? = 0 ]; then 46 | exit 0 47 | fi 48 | 49 | # We must be either the first node to run, or configuring 50 | # single-node clusters for a segregated deployment 51 | 52 | pcs cluster auth $(hostname -s) -u hacluster -p ${PHD_VAR_env_password} --force 53 | 54 | pcs cluster setup --name $(hostname -s)-compute $(hostname -s) 55 | pcs cluster enable --all 56 | pcs cluster start --all 57 | 58 | sleep 30 59 | 60 | # hack!! 
IPMI does not work from localhost to localhost on those test nodes!. Use 61 | # proper fencing settings! 62 | # TODO: Add SBD support 63 | pcs property set stonith-enabled=false 64 | 65 | pcs resource create nova-compute-fs Filesystem device="${PHD_VAR_network_internal}.1:$PHD_VAR_osp_configdir/instances" directory="/var/lib/nova/instances" fstype="nfs" options="v3" op start timeout=240 --group computenode 66 | pcs resource create neutron-openvswitch-agent systemd:neutron-openvswitch-agent --group computenode 67 | pcs resource create libvirtd systemd:libvirtd --group computenode 68 | pcs resource create ceilometer-compute systemd:openstack-ceilometer-compute --group computenode 69 | pcs resource create nova-compute systemd:openstack-nova-compute --group computenode 70 | fi 71 | 72 | .... 73 | -------------------------------------------------------------------------------- /pcmk/compute-managed.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | 22 | ################################# 23 | # Scenario Requirements Section # 24 | ################################# 25 | = REQUIREMENTS = 26 | nodes: 1 27 | 28 | = VARIABLES = 29 | 30 | PHD_VAR_deployment 31 | PHD_VAR_osp_configdir 32 | PHD_VAR_network_domain 33 | PHD_VAR_network_internal 34 | 35 | ###################### 36 | # Deployment Scripts # 37 | ###################### 38 | = SCRIPTS = 39 | 40 | target=all 41 | .... 42 | 43 | if [ $PHD_VAR_deployment = segregated ]; then 44 | echo "We don't document managed compute nodes in a segregated environment yet" 45 | 46 | # Certainly none of the location constraints would work and the 47 | # resource-discovery options are mostly redundant 48 | exit 1 49 | fi 50 | 51 | yum install -y pacemaker-remote resource-agents pcs 52 | 53 | if [ ! -e $PHD_VAR_osp_configdir/pcmk-authkey ]; then 54 | dd if=/dev/urandom of=$PHD_VAR_osp_configdir/pcmk-authkey bs=4096 count=1 55 | fi 56 | 57 | mkdir -p /etc/pacemaker 58 | cp $PHD_VAR_osp_configdir/pcmk-authkey /etc/pacemaker/authkey 59 | 60 | if [ -z "$(pidof pacemakerd)" ]; then 61 | chkconfig pacemaker_remote on 62 | service pacemaker_remote start 63 | fi 64 | .... 
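# The nodes prepared above only run pacemaker_remote; wiring them into the
# controller cluster is handled by controller-managed.scenario in this
# directory. Purely as an orientation, registering a remote node from the
# controller cluster generally takes a form like the following, where
# "compute-1" is a placeholder hostname and option defaults vary by release:
#
#   pcs resource create compute-1 ocf:pacemaker:remote reconnect_interval=60 op monitor interval=20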
65 | -------------------------------------------------------------------------------- /pcmk/galera-test.sh: -------------------------------------------------------------------------------- 1 | clustercheck 2 | 3 | # verify sync is done 4 | mysql 5 | SHOW STATUS LIKE 'wsrep%'; 6 | quit 7 | -------------------------------------------------------------------------------- /pcmk/galera.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | 22 | ################################# 23 | # Scenario Requirements Section # 24 | ################################# 25 | = VARIABLES = 26 | 27 | PHD_VAR_env_password 28 | PHD_VAR_network_domain 29 | PHD_VAR_deployment 30 | 31 | ################################# 32 | # Scenario Requirements Section # 33 | ################################# 34 | = REQUIREMENTS = 35 | nodes: 1 36 | 37 | ###################### 38 | # Deployment Scripts # 39 | ###################### 40 | = SCRIPTS = 41 | 42 | target=all 43 | .... 44 | yum install -y mariadb-galera-server xinetd rsync 45 | 46 | if [ $PHD_VAR_deployment = collapsed ]; then 47 | # Allowing the proxy to perform health checks on galera while we're initializing it is... problematic. 48 | pcs resource disable haproxy 49 | fi 50 | 51 | cat > /etc/sysconfig/clustercheck << EOF 52 | MYSQL_USERNAME="clustercheck" 53 | MYSQL_PASSWORD="${PHD_VAR_env_password}" 54 | MYSQL_HOST="localhost" 55 | MYSQL_PORT="3306" 56 | EOF 57 | 58 | # workaround some old buggy mariadb packages 59 | # that created log files as root:root and newer 60 | # packages would fail to start.... 61 | chown mysql:mysql /var/log/mariadb -R 62 | 63 | systemctl start mysqld 64 | 65 | # required for clustercheck to work 66 | mysql -e "CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${PHD_VAR_env_password}';" 67 | systemctl stop mysqld 68 | 69 | # Configure galera cluster 70 | # NOTE: wsrep ssl encryption is strongly recommended and should be enabled 71 | # on all production deployments. This how-to does NOT display how to 72 | # configure ssl. The shell expansion points to the internal IP address of the 73 | # node. 
74 | 75 | cat > /etc/my.cnf.d/galera.cnf << EOF 76 | [mysqld] 77 | skip-name-resolve=1 78 | binlog_format=ROW 79 | default-storage-engine=innodb 80 | innodb_autoinc_lock_mode=2 81 | innodb_locks_unsafe_for_binlog=1 82 | query_cache_size=0 83 | query_cache_type=0 84 | bind_address=$(ip addr show dev eth1 scope global | grep dynamic| sed -e 's#.*inet ##g' -e 's#/.*##g') 85 | 86 | wsrep_provider=/usr/lib64/galera/libgalera_smm.so 87 | wsrep_cluster_name="galera_cluster" 88 | wsrep_slave_threads=1 89 | wsrep_certify_nonPK=1 90 | wsrep_max_ws_rows=131072 91 | wsrep_max_ws_size=1073741824 92 | wsrep_debug=0 93 | wsrep_convert_LOCK_to_trx=0 94 | wsrep_retry_autocommit=1 95 | wsrep_auto_increment_control=1 96 | wsrep_drupal_282555_workaround=0 97 | wsrep_causal_reads=0 98 | wsrep_notify_cmd= 99 | wsrep_sst_method=rsync 100 | EOF 101 | 102 | cat > /etc/xinetd.d/galera-monitor << EOF 103 | service galera-monitor 104 | { 105 | port = 9200 106 | disable = no 107 | socket_type = stream 108 | protocol = tcp 109 | wait = no 110 | user = root 111 | group = root 112 | groups = yes 113 | server = /usr/bin/clustercheck 114 | type = UNLISTED 115 | per_source = UNLIMITED 116 | log_on_success = 117 | log_on_failure = HOST 118 | flags = REUSE 119 | } 120 | EOF 121 | 122 | systemctl enable xinetd 123 | systemctl start xinetd 124 | 125 | .... 126 | 127 | target=$PHD_ENV_nodes1 128 | .... 129 | # node_list must be of the form node1,node2,node3 130 | # 131 | # node names must be in the form that the cluster knows them as 132 | # (ie. no domains) and there can't be a trailing comma (hence the 133 | # extra weird sed command) 134 | node_list=$(echo $PHD_ENV_nodes | sed -e s/.vmnet.${PHD_VAR_network_domain}\ /,/g -e s/.vmnet.${PHD_VAR_network_domain}//) 135 | pcs resource create galera galera enable_creation=true wsrep_cluster_address="gcomm://${node_list}" additional_parameters='--open-files-limit=16384' meta master-max=3 ordered=true op promote timeout=300s on-fail=block --master 136 | 137 | if [ $PHD_VAR_deployment = collapsed ]; then 138 | # Now we can re-enable the proxy 139 | pcs resource enable haproxy 140 | fi 141 | 142 | # wait for galera to start and become promoted 143 | loop=0; while ! clustercheck > /dev/null 2>&1 && [ "$loop" -lt 60 ]; do 144 | echo waiting for galera to be promoted 145 | loop=$((loop + 1)) 146 | sleep 5 147 | done 148 | 149 | # this one can fail depending on who bootstrapped the cluster 150 | for node in $PHD_ENV_nodes; do 151 | mysql -e "DROP USER ''@'${node}';" || true 152 | mysql -e "DROP USER 'root'@'${node}';" || true 153 | done 154 | 155 | galera_script=galera.setup 156 | echo "" > $galera_script 157 | 158 | echo "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED by 'mysqltest' WITH GRANT OPTION;" >> $galera_script 159 | 160 | for db in keystone glance cinder neutron nova heat; do 161 | cat << EOF >> $galera_script 162 | CREATE DATABASE ${db}; 163 | GRANT ALL ON ${db}.* TO '${db}'@'%' IDENTIFIED BY '${db}test'; 164 | EOF 165 | done 166 | 167 | echo "FLUSH PRIVILEGES;" >> $galera_script 168 | #echo "quit" >> $galera_script 169 | 170 | if [ "$loop" -ge 60 ]; then 171 | echo Timeout waiting for galera 172 | else 173 | mysql mysql < $galera_script 174 | mysqladmin flush-hosts 175 | fi 176 | 177 | .... 178 | -------------------------------------------------------------------------------- /pcmk/glance-test.sh: -------------------------------------------------------------------------------- 1 | . ${PHD_VAR_env_configdir}/keystonerc_admin 2 | 3 | if [ !
-f ${PHD_VAR_env_configdir}/cirros-0.3.2-x86_64-disk.img ]; then 4 | wget -O ${PHD_VAR_env_configdir}/cirros-0.3.2-x86_64-disk.img http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img 5 | fi 6 | 7 | openstack image create --container-format bare --disk-format qcow2 --public --file ${PHD_VAR_env_configdir}/cirros-0.3.2-x86_64-disk.img cirros 8 | 9 | openstack image list 10 | -------------------------------------------------------------------------------- /pcmk/glance.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | 22 | ################################# 23 | # Scenario Requirements Section # 24 | ################################# 25 | = VARIABLES = 26 | 27 | PHD_VAR_network_hosts_rabbitmq 28 | PHD_VAR_osp_configdir 29 | PHD_VAR_deployment 30 | 31 | ################################# 32 | # Scenario Requirements Section # 33 | ################################# 34 | = REQUIREMENTS = 35 | nodes: 1 36 | 37 | ###################### 38 | # Deployment Scripts # 39 | ###################### 40 | = SCRIPTS = 41 | 42 | 43 | target=all 44 | .... 
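# NOTE: the 'vip-db', 'vip-keystone' and 'vip-glance' names used below are the
# HAProxy virtual IPs set up in the lb/haproxy step, and the glance/glancetest
# credentials are assumed to match the '${db}test' database grants created in
# galera.scenario and the service user registered during keystone setup.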
45 | yum install -y openstack-glance openstack-utils python-openstackclient 46 | 47 | # Configure the API service 48 | 49 | openstack-config --set /etc/glance/glance-api.conf database connection mysql://glance:glancetest@vip-db/glance 50 | openstack-config --set /etc/glance/glance-api.conf database max_retries -1 51 | 52 | openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone 53 | 54 | openstack-config --set /etc/glance/glance-api.conf keystone_authtoken identity_uri http://vip-keystone:35357/ 55 | openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://vip-keystone:5000/ 56 | openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name services 57 | openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance 58 | openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password glancetest 59 | 60 | openstack-config --set /etc/glance/glance-api.conf DEFAULT notification_driver messaging 61 | 62 | openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_hosts ${PHD_VAR_network_hosts_rabbitmq} 63 | openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_ha_queues true 64 | openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit heartbeat_timeout_threshold 60 65 | 66 | openstack-config --set /etc/glance/glance-api.conf DEFAULT registry_host vip-glance 67 | openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host $(ip addr show dev eth1 scope global | grep dynamic| sed -e 's#.*inet ##g' -e 's#/.*##g') 68 | 69 | # Configure the registry service 70 | 71 | openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glance:glancetest@vip-db/glance 72 | openstack-config --set /etc/glance/glance-registry.conf database max_retries -1 73 | 74 | openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone 75 | 76 | openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken identity_uri http://vip-keystone:35357/ 77 | openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://vip-keystone:5000/ 78 | openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name services 79 | openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance 80 | openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password glancetest 81 | 82 | openstack-config --set /etc/glance/glance-registry.conf DEFAULT notification_driver messaging 83 | 84 | openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_hosts ${PHD_VAR_network_hosts_rabbitmq} 85 | openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_ha_queues true 86 | openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit heartbeat_timeout_threshold 60 87 | 88 | openstack-config --set /etc/glance/glance-registry.conf DEFAULT registry_host vip-glance 89 | openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_host $(ip addr show dev eth1 scope global | grep dynamic| sed -e 's#.*inet ##g' -e 's#/.*##g') 90 | 91 | # create the NFS share mountpoint on the nfs server 92 | mkdir -p $PHD_VAR_osp_configdir/glance 93 | .... 94 | 95 | target=$PHD_ENV_nodes1 96 | .... 97 | # Now have pacemaker mount the NFS share as service. 
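# This assumes the ${PHD_VAR_network_internal}.1 host already exports
# $PHD_VAR_osp_configdir over NFSv3; an illustrative /etc/exports entry on that
# server would look like the following (example values only):
#
#   /srv/RDO-7/configs 192.168.124.0/24(rw,sync,no_root_squash)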
98 | pcs resource create glance-fs Filesystem device="${PHD_VAR_network_internal}.1:$PHD_VAR_osp_configdir/glance" directory="/var/lib/glance" fstype="nfs" options="v3" --clone 99 | 100 | # wait for glance-fs to be started and running 101 | sleep 5 102 | 103 | # Make sure it's writable 104 | chown glance:nobody /var/lib/glance 105 | 106 | # Now populate the database 107 | su glance -s /bin/sh -c "glance-manage db_sync" 108 | 109 | pcs resource create openstack-glance-registry systemd:openstack-glance-registry --clone interleave=true 110 | pcs resource create openstack-glance-api systemd:openstack-glance-api --clone interleave=true 111 | 112 | pcs constraint order start openstack-glance-fs-clone then openstack-glance-registry-clone 113 | pcs constraint colocation add openstack-glance-registry-clone with openstack-glance-fs-clone 114 | pcs constraint order start openstack-glance-registry-clone then openstack-glance-api-clone 115 | pcs constraint colocation add openstack-glance-api-clone with openstack-glance-registry-clone 116 | 117 | if [ $PHD_VAR_deployment = collapsed ]; then 118 | pcs constraint order start openstack-keystone-clone then openstack-glance-registry-clone 119 | fi 120 | .... 121 | 122 | -------------------------------------------------------------------------------- /pcmk/ha-collapsed.variables: -------------------------------------------------------------------------------- 1 | # Expanded to $PHD_VAR_network_domain, $PHD_VAR_network_internal, etc by PHD 2 | # Each scenario file will verify that values have been provided for the variables it requires. 3 | 4 | # Deployment types: collapsed | segregated 5 | deployment: collapsed 6 | 7 | network: 8 | clock: clock.redhat.com 9 | domain: lab.bos.redhat.com 10 | internal: 192.168.124 11 | named: 12 | forwarders: 10.16.36.29 10.11.5.19 10.5.30.160 13 | nic: 14 | base: 54:52:00 15 | external: enp1s0 eth0 16 | internal: enp2s0 eth1 17 | hosts: 18 | gateway: east-01 19 | 20 | # These services are not able to live behind a proxy, so we must list the hosts explicitly 21 | # In the segregated case, it is the per-service guests we will create 22 | # In the collapsed case, it is the members of the single cluster we will create 23 | mongodb: rdo7-node1,rdo7-node2,rdo7-node3 24 | memcache: rdo7-node1:11211,rdo7-node2:11211,rdo7-node3:11211 25 | rabbitmq: rdo7-node1,rdo7-node2,rdo7-node3 26 | 27 | # We use a centrally defined list so that segregated guests, DNS, and DHCP are all created consistently 28 | # Changing this list in any way will probably break the haproxy configuration 29 | # - ONLY add entries 30 | # - ALWAYS do so to the end of the list 31 | # - LOOK for PHD_VAR_components to see places that may be affected 32 | components: lb db rabbitmq keystone memcache glance cinder swift-brick swift neutron nova horizon heat mongodb ceilometer qpid node 33 | 34 | # this is RHEL7.1 GA 35 | # https://beaker.engineering.redhat.com/distros/ 36 | # Use simple search to find release ex: RHEL-7.1 37 | # click on the GA -> and take the ID from Server install 38 | beaker: 39 | disttree: 69383 40 | 41 | rpm: 42 | download: download.devel.redhat.com 43 | major: 7 44 | minor: 1 45 | #base: rel-eng/latest-RHEL-7/compose/Server/x86_64/os/ 46 | #cluster: rel-eng/latest-RHEL-7/compose/Server/x86_64/os/addons/HighAvailability/ 47 | base: released/RHEL-7/7.1/Server/x86_64/os/ 48 | cluster: released/RHEL-7/7.1/Server/x86_64/os/addons/HighAvailability/ 49 | updates: brewroot/repos/rhel-7.1-z-build/latest/x86_64/ 50 | 51 | osp: 52 | configdir: 
/srv/RDO-7/configs 53 | major: 7 54 | 55 | # generate with openssl rand -hex 10 56 | secrets: 57 | fence_xvm: bcf4e54dcfbfecb9e62c 58 | keystone_secret: 114a23ae49996a26d916 59 | swift_prefix: 8f2d8a3e326a078c5edf 60 | swift_suffix: 8d33d13b18e3eef6d3ca 61 | horizon_secret: 357a57d6a9a2353ccde5 62 | 63 | # Dedicate all cores to the sole VM that we'll create per host 64 | vm: 65 | cpus: 4 66 | ram: 8048 67 | disk: 25G 68 | base: rdo7-rhel7-base.img 69 | key: AAAAB3NzaC1yc2EAAAADAQABAAABAQDHs2qRMxtqEpr7gJygHAn2rSWKUS/FlJ9oLG7cRtzLyhIl+oSrs30KrdzkgsGTZqSEwfKM8f2LGF08x5HbN2cIDc9YhnwHQNnb8qDIXY2UqzpyLUzckctOMSiRSz/qYxeutDYGg/p1lPzPdWQPympFVIoAzCRDhogX26kXQTpKs7uUzEvZCnnzSn2I9ynchKGP3TlOzTaZHqJM4bj5+KqvUTH2ifvX3EgolP/XtIWjW54zhQnlDuS2UsDd8vvB8ZRrgtaFEXhCSivvazE8zMVAOxCFNYjnh+SvV96VB+hEjqQQeDSdhkgC2huHwsAB3Y9XCkyFe6DEfKuQZwLJjlTZ 70 | 71 | # I set the password to 'cluster', USE A SAFER ONE 72 | env: 73 | password: cluster 74 | 75 | # TODO... 76 | password: 77 | cluster: foo 78 | keystone: bar 79 | -------------------------------------------------------------------------------- /pcmk/ha-segregated.variables: -------------------------------------------------------------------------------- 1 | # Expanded to $PHD_VAR_network_domain, $PHD_VAR_network_internal, etc by PHD 2 | # Each scenario file will verify that values have been provided for the variables it requires. 3 | 4 | # Deployment types: collapsed | segregated 5 | deployment: segregated 6 | 7 | network: 8 | domain: lab.bos.redhat.com 9 | internal: 192.168.124 10 | named: 11 | forwarders: 10.16.36.29 10.11.5.19 10.5.30.160 12 | nic: 13 | base: 54:52:00 14 | external: enp1s0 eth0 15 | internal: enp2s0 eth1 16 | hosts: 17 | gateway: east-01 18 | 19 | # These services are not able to live behind a proxy, so we must list the hosts explicitly 20 | # In the segregated case, it is the per-service guests we will create 21 | # In the collapsed case, it is the members of the single cluster we will create 22 | mongodb: rhos6-mongodb1,rhos6-mongodb2,rhos6-mongodb3 23 | memcache: rhos6-memcache1:11211,rhos6-memcache2:11211,rhos6-memcache3:11211 24 | rabbitmq: rhos6-rabbitmq1,rhos6-rabbitmq2,rhos6-rabbitmq3 25 | 26 | # We use a centrally defined list so that segregated guests, DNS, and DHCP are all created consistently 27 | # Changing this list in any way will probably break the haproxy configuration 28 | # - ONLY add entries 29 | # - ALWAYS do so to the end of the list 30 | # - LOOK for PHD_VAR_components to see places that may be affected 31 | components: lb db rabbitmq keystone memcache glance cinder swift-brick swift neutron nova horizon heat mongodb ceilometer qpid node 32 | 33 | rpm: 34 | download: download.devel.redhat.com 35 | rhel: 7.1 36 | osp: 6.0 37 | # Optional 38 | # beta: -Beta 39 | 40 | vm: 41 | cpus: 1 42 | ram: 2048 43 | disk: 25G 44 | base: rhos6-rhel7-base.img 45 | key: AAAAB3NzaC1yc2EAAAADAQABAAABAQDHs2qRMxtqEpr7gJygHAn2rSWKUS/FlJ9oLG7cRtzLyhIl+oSrs30KrdzkgsGTZqSEwfKM8f2LGF08x5HbN2cIDc9YhnwHQNnb8qDIXY2UqzpyLUzckctOMSiRSz/qYxeutDYGg/p1lPzPdWQPympFVIoAzCRDhogX26kXQTpKs7uUzEvZCnnzSn2I9ynchKGP3TlOzTaZHqJM4bj5+KqvUTH2ifvX3EgolP/XtIWjW54zhQnlDuS2UsDd8vvB8ZRrgtaFEXhCSivvazE8zMVAOxCFNYjnh+SvV96VB+hEjqQQeDSdhkgC2huHwsAB3Y9XCkyFe6DEfKuQZwLJjlTZ 46 | 47 | # I set the password to 'cluster', USE A SAFER ONE 48 | env: 49 | password: cluster 50 | configdir: /srv/RDO-6.0/configs 51 | -------------------------------------------------------------------------------- /pcmk/hacks.scenario: 
-------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | # - installing the cluster software 22 | # - enabling the pcs daemon to allow remote management 23 | # - setting a password for the hacluster user for use with pcs 24 | # - authenticating to pcs on the other hosts with the hacluster user and password 25 | # - creating and starting the cluster 26 | # - configuring fencing using the multicast addresses specified for fence_virt on the bare metal hosts 27 | 28 | ################################# 29 | # Scenario Requirements Section # 30 | ################################# 31 | = REQUIREMENTS = 32 | nodes: 1 33 | 34 | ###################### 35 | # Deployment Scripts # 36 | ###################### 37 | = SCRIPTS = 38 | 39 | # hack to set hostnames till we figure out why it doesn't work 40 | # from kickstart anymore 41 | target=local 42 | .... 43 | for node in $(echo $PHD_ENV_nodes); do 44 | phd_cmd_exec "hostnamectl set-hostname $node" "$node" 45 | phd_cmd_exec "hostname" "$node" 46 | done 47 | .... 48 | -------------------------------------------------------------------------------- /pcmk/heat-test.sh: -------------------------------------------------------------------------------- 1 | # TEST: 2 | # Requires a compute node! 3 | 4 | . ${PHD_VAR_env_configdir}/keystonerc_admin 5 | 6 | nova keypair-add --pub_key ~/.ssh/authorized_keys heat-userkey-test 7 | 8 | cat > /root/ha_test.yaml << EOF 9 | heat_template_version: 2013-05-23 10 | 11 | description: > 12 | HA test. 
13 | 14 | parameters: 15 | key_name: 16 | type: string 17 | description: Name of keypair to assign to servers 18 | image: 19 | type: string 20 | description: Name of image to use for servers 21 | flavor: 22 | type: string 23 | description: Flavor to use for servers 24 | private_net_id: 25 | type: string 26 | description: ID of private network into which servers get deployed 27 | private_subnet_id: 28 | type: string 29 | description: ID of private sub network into which servers get deployed 30 | 31 | resources: 32 | server1: 33 | type: OS::Nova::Server 34 | properties: 35 | name: Server1 36 | image: { get_param: image } 37 | flavor: { get_param: flavor } 38 | key_name: { get_param: key_name } 39 | networks: 40 | - port: { get_resource: server1_port } 41 | 42 | server1_port: 43 | type: OS::Neutron::Port 44 | properties: 45 | network_id: { get_param: private_net_id } 46 | fixed_ips: 47 | - subnet_id: { get_param: private_subnet_id } 48 | 49 | outputs: 50 | server1_private_ip: 51 | description: IP address of server1 in private network 52 | value: { get_attr: [ server1, first_address ] } 53 | EOF 54 | 55 | privatenetid=$(neutron net-list |grep internal_lan | awk '{print $2}') 56 | privatesubnetid=$(neutron subnet-list |grep internal_subnet|awk '{print $2}') 57 | 58 | heat stack-create testtest --template-file=/root/ha_test.yaml --parameters="key_name=heat-userkey-test;image=cirros;flavor=m1.large;private_net_id=$privatenetid;private_subnet_id=$privatesubnetid" 59 | 60 | heat stack-list 61 | 62 | heat stack-delete testtest 63 | 64 | -------------------------------------------------------------------------------- /pcmk/heat.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | 22 | ################################# 23 | # Scenario Requirements Section # 24 | ################################# 25 | = REQUIREMENTS = 26 | nodes: 1 27 | 28 | = VARIABLES = 29 | 30 | PHD_VAR_deployment 31 | PHD_VAR_network_hosts_memcache 32 | PHD_VAR_network_hosts_rabbitmq 33 | 34 | ###################### 35 | # Deployment Scripts # 36 | ###################### 37 | = SCRIPTS = 38 | 39 | target=all 40 | .... 
41 | yum install -y openstack-heat-engine openstack-heat-api openstack-heat-api-cfn openstack-heat-api-cloudwatch python-heatclient openstack-utils python-glanceclient 42 | 43 | openstack-config --set /etc/heat/heat.conf database connection mysql://heat:heattest@vip-db/heat 44 | openstack-config --set /etc/heat/heat.conf database max_retries -1 45 | 46 | openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_tenant_name services 47 | openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_user heat 48 | openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_password heattest 49 | openstack-config --set /etc/heat/heat.conf keystone_authtoken service_host vip-keystone 50 | openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_host vip-keystone 51 | openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_uri http://vip-keystone:35357/v2.0 52 | openstack-config --set /etc/heat/heat.conf keystone_authtoken keystone_ec2_uri http://vip-keystone:35357/v2.0 53 | openstack-config --set /etc/heat/heat.conf ec2authtoken auth_uri http://vip-keystone:5000/v2.0 54 | 55 | openstack-config --set /etc/heat/heat.conf DEFAULT memcache_servers ${PHD_VAR_network_hosts_memcache} 56 | openstack-config --set /etc/heat/heat.conf oslo_messaging_rabbit rabbit_hosts ${PHD_VAR_network_hosts_rabbitmq} 57 | openstack-config --set /etc/heat/heat.conf oslo_messaging_rabbit rabbit_ha_queues true 58 | openstack-config --set /etc/heat/heat.conf oslo_messaging_rabbit heartbeat_timeout_threshold 60 59 | 60 | 61 | openstack-config --set /etc/heat/heat.conf heat_api bind_host $(ip addr show dev eth1 scope global | grep dynamic| sed -e 's#.*inet ##g' -e 's#/.*##g') 62 | openstack-config --set /etc/heat/heat.conf heat_api_cfn bind_host $(ip addr show dev eth1 scope global | grep dynamic| sed -e 's#.*inet ##g' -e 's#/.*##g') 63 | openstack-config --set /etc/heat/heat.conf heat_api_cloudwatch bind_host $(ip addr show dev eth1 scope global | grep dynamic| sed -e 's#.*inet ##g' -e 's#/.*##g') 64 | openstack-config --set /etc/heat/heat.conf DEFAULT heat_metadata_server_url vip-heat:8000 65 | openstack-config --set /etc/heat/heat.conf DEFAULT heat_waitcondition_server_url vip-heat:8000/v1/waitcondition 66 | openstack-config --set /etc/heat/heat.conf DEFAULT heat_watch_server_url vip-heat:8003 67 | 68 | openstack-config --set /etc/heat/heat.conf DEFAULT rpc_backend heat.openstack.common.rpc.impl_kombu 69 | 70 | openstack-config --set /etc/heat/heat.conf DEFAULT notification_driver heat.openstack.common.notifier.rpc_notifier 71 | 72 | # disable CWLiteAlarm that is incompatible with A/A 73 | openstack-config --set /etc/heat/heat.conf DEFAULT enable_cloud_watch_lite false 74 | 75 | .... 76 | 77 | target=$PHD_ENV_nodes1 78 | ....
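# Optional sanity check before populating the database: confirm the settings
# written in the previous block landed where expected ('openstack-config --get'
# comes from the same openstack-utils package used throughout these scenarios).
#
#   openstack-config --get /etc/heat/heat.conf database connection
#   openstack-config --get /etc/heat/heat.conf oslo_messaging_rabbit rabbit_hosts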
79 | su heat -s /bin/sh -c "heat-manage db_sync" 80 | 81 | pcs resource create openstack-heat-api systemd:openstack-heat-api --clone interleave=true 82 | pcs resource create openstack-heat-api-cfn systemd:openstack-heat-api-cfn --clone interleave=true 83 | pcs resource create openstack-heat-api-cloudwatch systemd:openstack-heat-api-cloudwatch --clone interleave=true 84 | pcs resource create openstack-heat-engine systemd:openstack-heat-engine --clone interleave=true 85 | 86 | pcs constraint order start openstack-heat-api-clone then openstack-heat-api-cfn-clone 87 | pcs constraint colocation add openstack-heat-api-cfn-clone with openstack-heat-api-clone 88 | pcs constraint order start openstack-heat-api-cfn-clone then openstack-heat-api-cloudwatch-clone 89 | pcs constraint colocation add openstack-heat-api-cloudwatch-clone with openstack-heat-api-cfn-clone 90 | pcs constraint order start openstack-heat-api-cloudwatch-clone then openstack-heat-engine-clone 91 | pcs constraint colocation add openstack-heat-engine-clone with openstack-heat-api-cloudwatch-clone 92 | 93 | if [ $PHD_VAR_deployment = collapsed ]; then 94 | pcs constraint order start openstack-ceilometer-notification-clone then openstack-heat-api-clone 95 | fi 96 | .... 97 | 98 | -------------------------------------------------------------------------------- /pcmk/horizon-test.sh: -------------------------------------------------------------------------------- 1 | 2 | It should be possible now to login to http://vip-horizon/dashboard with admin account. 3 | Note that it is still not possible to deploy instances since compute nodes are not attached. 4 | -------------------------------------------------------------------------------- /pcmk/horizon.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | 22 | ################################# 23 | # Scenario Requirements Section # 24 | ################################# 25 | = VARIABLES = 26 | 27 | PHD_VAR_secrets_horizon_secret 28 | PHD_VAR_network_hosts_memcache 29 | 30 | ################################# 31 | # Scenario Requirements Section # 32 | ################################# 33 | = REQUIREMENTS = 34 | nodes: 1 35 | 36 | ###################### 37 | # Deployment Scripts # 38 | ###################### 39 | = SCRIPTS = 40 | 41 | target=all 42 | .... 43 | yum install -y mod_wsgi httpd mod_ssl openstack-dashboard 44 | 45 | # NOTE this is a rather scary sed and replace operation to configure horizon 46 | # in one shot, scriptable way. 47 | # Keypoints: 48 | # set ALLOWED_HOSTS to access the web service. 49 | # BE AWARE that this command will allow access from everywhere! 
50 | # connection CACHES to memcacehed 51 | # connect with keystone for authentication 52 | # fix a LOCAL_PATH to point to the correct location. 53 | 54 | horizonememcachenodes=$(echo ${PHD_VAR_network_hosts_memcache} | sed -e "s#,#', '#g" -e "s#^#[ '#g" -e "s#\$#', ]#g") 55 | 56 | sed -i \ 57 | -e "s#ALLOWED_HOSTS.*#ALLOWED_HOSTS = ['*',]#g" \ 58 | -e "s#^CACHES#SESSION_ENGINE = 'django.contrib.sessions.backends.cache'\nCACHES#g#" \ 59 | -e "s#locmem.LocMemCache'#memcached.MemcachedCache',\n\t'LOCATION' : $horizonememcachenodes#g" \ 60 | -e 's#OPENSTACK_HOST =.*#OPENSTACK_HOST = "vip-keystone"#g' \ 61 | -e "s#^LOCAL_PATH.*#LOCAL_PATH = '/var/lib/openstack-dashboard'#g" \ 62 | -e "s#SECRET_KEY.*#SECRET_KEY = '${PHD_VAR_secrets_horizon_secret}'#g#" \ 63 | /etc/openstack-dashboard/local_settings 64 | 65 | # workaround buggy packages 66 | echo "COMPRESS_OFFLINE = True" >> /etc/openstack-dashboard/local_settings 67 | python /usr/share/openstack-dashboard/manage.py compress 68 | 69 | # NOTE: fix apache config to listen only on a given interface (internal) 70 | sed -i -e 's/^Listen.*/Listen '$(ip addr show dev eth1 scope global | grep dynamic| sed -e 's#.*inet ##g' -e 's#/.*##g')':80/g' /etc/httpd/conf/httpd.conf 71 | 72 | # NOTE: enable server-status. this is required by pacemaker to verify apache is 73 | # responding. Only allow from localhost. 74 | cat > /etc/httpd/conf.d/server-status.conf << EOF 75 | 76 | SetHandler server-status 77 | Order deny,allow 78 | Deny from all 79 | Allow from localhost 80 | 81 | EOF 82 | 83 | .... 84 | 85 | 86 | target=$PHD_ENV_nodes1 87 | .... 88 | pcs resource create httpd apache --clone interleave=true 89 | .... 90 | -------------------------------------------------------------------------------- /pcmk/keystone-test.sh: -------------------------------------------------------------------------------- 1 | # TEST (might require logout/login to reset the environmet that was set before 2 | # during initial bootstrap) 3 | 4 | unset SERVICE_TOKEN 5 | unset SERVICE_ENDPOINT 6 | . ${PHD_VAR_env_configdir}/keystonerc_user 7 | openstack user show demo 8 | . ${PHD_VAR_env_configdir}/keystonerc_admin 9 | openstack user list 10 | -------------------------------------------------------------------------------- /pcmk/memcached.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | 22 | ################################# 23 | # Scenario Requirements Section # 24 | ################################# 25 | = REQUIREMENTS = 26 | nodes: 1 27 | 28 | ###################### 29 | # Deployment Scripts # 30 | ###################### 31 | = SCRIPTS = 32 | 33 | target=all 34 | .... 35 | yum install -y memcached 36 | .... 
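# There is no memcached-test.sh in this directory; a minimal manual check, once
# the clone created below is running, is to ask each instance for its stats.
# Host names assume the collapsed rdo7-node1..3 layout from ha-collapsed.variables
# and that nc/nmap-ncat is available:
#
#   for h in rdo7-node1 rdo7-node2 rdo7-node3; do
#       printf 'stats\nquit\n' | nc $h 11211 | grep -E 'uptime|curr_connections'
#   done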
37 | 38 | target=$PHD_ENV_nodes1 39 | .... 40 | pcs resource create memcached systemd:memcached --clone interleave=true 41 | .... 42 | -------------------------------------------------------------------------------- /pcmk/mongodb.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | 22 | ################################# 23 | # Scenario Requirements Section # 24 | ################################# 25 | = REQUIREMENTS = 26 | nodes: 1 27 | 28 | ###################### 29 | # Deployment Scripts # 30 | ###################### 31 | = SCRIPTS = 32 | 33 | target=all 34 | .... 35 | yum install -y mongodb mongodb-server 36 | 37 | # set binding IP address and replica set 38 | # also use smallfiles = true to stall installation while allocating N GB of journals 39 | sed -i \ 40 | -e 's#.*bind_ip.*#bind_ip = 0.0.0.0#g' \ 41 | -e 's/.*replSet.*/replSet = ceilometer/g' \ 42 | -e 's/.*smallfiles.*/smallfiles = true/g' \ 43 | /etc/mongod.conf 44 | 45 | # required to bootstrap mongodb 46 | systemctl start mongod 47 | systemctl stop mongod 48 | .... 49 | 50 | target=$PHD_ENV_nodes1 51 | .... 52 | pcs resource create mongod systemd:mongod op start timeout=300s --clone 53 | 54 | # Setup replica (need to wait for mongodb to settle down first!) 55 | sleep 20 56 | 57 | # Careful with the node names here, must match FQDN 58 | rm -f /root/mongo_replica_setup.js 59 | cat > /root/mongo_replica_setup.js << EOF 60 | rs.initiate() 61 | sleep(10000) 62 | EOF 63 | 64 | for node in $PHD_ENV_nodes; do 65 | cat >> /root/mongo_replica_setup.js << EOF 66 | rs.add("$node"); 67 | EOF 68 | done 69 | 70 | mongo /root/mongo_replica_setup.js 71 | rm -f /root/mongo_replica_setup.js 72 | .... 73 | -------------------------------------------------------------------------------- /pcmk/mrg.variables: -------------------------------------------------------------------------------- 1 | # Expanded to $PHD_VAR_network_domain, $PHD_VAR_network_internal, etc by PHD 2 | # Each scenario file will verify that values have been provided for the variables it requires. 
3 | 4 | # Deployment types: collapsed | segregated 5 | deployment: collapsed 6 | 7 | network: 8 | clock: clock.redhat.com 9 | domain: mpc.lab.eng.bos.redhat.com 10 | internal: 192.168.16 11 | named: 12 | forwarders: 10.16.36.29 10.11.5.19 10.5.30.160 13 | nic: 14 | base: 54:52:00 15 | external: em1 eno1 enp1s0 eth0 16 | internal: em2 eno2 enp2s0 eth1 17 | hosts: 18 | gateway: mrg-01 19 | 20 | # These services are not able to live behind a proxy, so we must list the hosts explicitly 21 | # In the segregated case, it is the per-service guests we will create 22 | # In the collapsed case, it is the members of the single cluster we will create 23 | mongodb: rdo7-node1,rdo7-node2,rdo7-node3 24 | memcache: rdo7-node1:11211,rdo7-node2:11211,rdo7-node3:11211 25 | rabbitmq: rdo7-node1,rdo7-node2,rdo7-node3 26 | 27 | # We use a centrally defined list so that segregated guests, DNS, and DHCP are all created consistently 28 | # Changing this list in any way will probably break the haproxy configuration 29 | # - ONLY add entries 30 | # - ALWAYS do so to the end of the list 31 | # - LOOK for PHD_VAR_components to see places that may be affected 32 | components: lb db rabbitmq keystone memcache glance cinder swift-brick swift neutron nova horizon heat mongodb ceilometer qpid node 33 | 34 | # this is RHEL7.1 GA 35 | # https://beaker.engineering.redhat.com/distros/ 36 | # Use simple search to find release ex: RHEL-7.1 37 | # click on the GA -> and take the ID from Server install 38 | beaker: 39 | disttree: 69383 40 | 41 | rpm: 42 | download: download.devel.redhat.com 43 | major: 7 44 | minor: 1 45 | base: released/RHEL-7/7.1/Server/x86_64/os/ 46 | cluster: released/RHEL-7/7.1/Server/x86_64/os/addons/HighAvailability/ 47 | updates: brewroot/repos/rhel-7.1-z-build/latest/x86_64/ 48 | 49 | osp: 50 | configdir: /srv/RDO-7/configs 51 | major: 7 52 | 53 | # generate with openssl rand -hex 10 54 | secrets: 55 | fence_xvm: bcf4e54dcfbfecb9e62c 56 | keystone_secret: 114a23ae49996a26d916 57 | swift_prefix: 8f2d8a3e326a078c5edf 58 | swift_suffix: 8d33d13b18e3eef6d3ca 59 | horizon_secret: 357a57d6a9a2353ccde5 60 | 61 | # Dedicate all cores to the sole VM that we'll create per host 62 | vm: 63 | cpus: 4 64 | ram: 8048 65 | disk: 25G 66 | base: rdo7-rhel7-base.img 67 | key: AAAAB3NzaC1yc2EAAAADAQABAAABAQDHs2qRMxtqEpr7gJygHAn2rSWKUS/FlJ9oLG7cRtzLyhIl+oSrs30KrdzkgsGTZqSEwfKM8f2LGF08x5HbN2cIDc9YhnwHQNnb8qDIXY2UqzpyLUzckctOMSiRSz/qYxeutDYGg/p1lPzPdWQPympFVIoAzCRDhogX26kXQTpKs7uUzEvZCnnzSn2I9ynchKGP3TlOzTaZHqJM4bj5+KqvUTH2ifvX3EgolP/XtIWjW54zhQnlDuS2UsDd8vvB8ZRrgtaFEXhCSivvazE8zMVAOxCFNYjnh+SvV96VB+hEjqQQeDSdhkgC2huHwsAB3Y9XCkyFe6DEfKuQZwLJjlTZ 68 | 69 | # I set the password to 'cluster', USE A SAFER ONE 70 | env: 71 | password: cluster 72 | 73 | # TODO... 74 | password: 75 | cluster: foo 76 | keystone: bar 77 | -------------------------------------------------------------------------------- /pcmk/neutron-test.sh: -------------------------------------------------------------------------------- 1 | # TEST/create your first network 2 | # It is not possible to test neutron completely until the full deployment is complete as it is required to run instances to verify network connectivity. 3 | 4 | . ${PHD_VAR_env_configdir}/keystonerc_admin 5 | 6 | # WARNING: openstack client is NOT ready to manage neutron! 
7 | 8 | neutron net-create internal_lan 9 | neutron subnet-create --ip_version 4 --gateway 192.168.100.1 --name "internal_subnet" internal_lan 192.168.100.0/24 10 | neutron net-create public_lan --router:external 11 | neutron subnet-create --gateway 10.16.151.254 --allocation-pool start=10.16.144.76,end=10.16.144.83 --disable-dhcp --name public_subnet public_lan 10.16.144.0/21 12 | neutron router-create router 13 | neutron router-gateway-set router public_lan 14 | neutron router-interface-add router internal_subnet 15 | -------------------------------------------------------------------------------- /pcmk/nova-test.sh: -------------------------------------------------------------------------------- 1 | . ${PHD_VAR_env_configdir}/keystonerc_admin 2 | nova usage 3 | nova usage-list -------------------------------------------------------------------------------- /pcmk/rabbitmq-test.sh: -------------------------------------------------------------------------------- 1 | rabbitmqctl cluster_status 2 | rabbitmqctl list_policies 3 | -------------------------------------------------------------------------------- /pcmk/rabbitmq.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | 22 | ################################# 23 | # Scenario Requirements Section # 24 | ################################# 25 | = VARIABLES = 26 | 27 | PHD_VAR_deployment 28 | PHD_VAR_osp_configdir 29 | PHD_VAR_network_domain 30 | PHD_VAR_network_internal 31 | 32 | ################################# 33 | # Scenario Requirements Section # 34 | ################################# 35 | = REQUIREMENTS = 36 | nodes: 1 37 | 38 | ###################### 39 | # Deployment Scripts # 40 | ###################### 41 | = SCRIPTS = 42 | 43 | target=all 44 | .... 45 | yum install -y rabbitmq-server 46 | 47 | # NOTE: we need to bind the service to the internal IP address 48 | 49 | cat > /etc/rabbitmq/rabbitmq-env.conf << EOF 50 | NODE_IP_ADDRESS=$(ip addr show dev eth1 scope global | grep dynamic| sed -e 's#.*inet ##g' -e 's#/.*##g') 51 | EOF 52 | 53 | # required to generate the cookies 54 | systemctl start rabbitmq-server 55 | systemctl stop rabbitmq-server 56 | 57 | mkdir -p $PHD_VAR_osp_configdir 58 | if [ ! -e $PHD_VAR_osp_configdir/rabbitmq_erlang_cookie ]; then 59 | cp /var/lib/rabbitmq/.erlang.cookie $PHD_VAR_osp_configdir/rabbitmq_erlang_cookie 60 | fi 61 | 62 | # the cookie has to be the same across all nodes. Copy around as preferred, I am 63 | # using my NFS commodity storage. Also check for file permission/ownership. I 64 | # workaround that step by using 'cat' vs cp. 65 | cat $PHD_VAR_osp_configdir/rabbitmq_erlang_cookie > /var/lib/rabbitmq/.erlang.cookie 66 | .... 
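# Before creating the cluster resource below, it is worth double checking that
# the cookie really is identical everywhere and still owned by rabbitmq (a
# manual check, not part of the scenario):
#
#   md5sum /var/lib/rabbitmq/.erlang.cookie $PHD_VAR_osp_configdir/rabbitmq_erlang_cookie
#   ls -l /var/lib/rabbitmq/.erlang.cookie    # typically rabbitmq:rabbitmq, mode 400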
67 | 68 | 69 | target=$PHD_ENV_nodes1 70 | .... 71 | pcs resource create rabbitmq rabbitmq-cluster set_policy='ha-all ^(?!amq\.).* {"ha-mode":"all"}' meta notify=true --clone ordered=true interleave=true 72 | .... 73 | -------------------------------------------------------------------------------- /pcmk/swift-aco.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | 22 | ################################# 23 | # Scenario Requirements Section # 24 | ################################# 25 | = REQUIREMENTS = 26 | nodes: 3 27 | 28 | = VARIABLES = 29 | 30 | PHD_VAR_deployment 31 | PHD_VAR_secrets_swift_prefix 32 | PHD_VAR_secrets_swift_suffix 33 | 34 | ###################### 35 | # Deployment Scripts # 36 | ###################### 37 | = SCRIPTS = 38 | 39 | target=all 40 | .... 41 | yum install -y openstack-swift-object openstack-swift-container openstack-swift-account openstack-utils rsync xfsprogs 42 | 43 | # If you have a dedicated device, format it here and have the cluster mount it as per 'swift-fs' below 44 | # We don't, but swift wants a different partition than /, so we'll create a loopback file and mount that for 'swift-fs' 45 | 46 | mkdir -p /local/swiftstorage/target 47 | truncate --size=1G /local/swift.img 48 | losetup /dev/loop0 /local/swift.img 49 | mkfs.xfs /dev/loop0 50 | mount /dev/loop0 /local/swiftstorage/target 51 | chown -R swift:swift /local 52 | umount /dev/loop0 53 | 54 | # Some extra magic to set up the loopback device after a reboot 55 | echo "losetup /dev/loop0 /local/swift.img" >> /etc/rc.d/rc.local 56 | chmod a+x /etc/rc.d/rc.local 57 | 58 | openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_prefix ${PHD_VAR_secrets_swift_prefix} 59 | openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix ${PHD_VAR_secrets_swift_suffix} 60 | openstack-config --set /etc/swift/swift.conf filter:ceilometer use egg:ceilometer#swift 61 | openstack-config --set /etc/swift/swift.conf pipeline:main pipeline "healthcheck cache authtoken keystoneauth proxy-server ceilometer" 62 | 63 | openstack-config --set /etc/swift/object-server.conf DEFAULT bind_ip 0.0.0.0 64 | openstack-config --set /etc/swift/object-server.conf DEFAULT devices /local/swiftstorage 65 | openstack-config --set /etc/swift/object-server.conf DEFAULT mount_check false 66 | openstack-config --set /etc/swift/account-server.conf DEFAULT bind_ip 0.0.0.0 67 | openstack-config --set /etc/swift/account-server.conf DEFAULT devices /local/swiftstorage 68 | openstack-config --set /etc/swift/account-server.conf DEFAULT mount_check false 69 | openstack-config --set /etc/swift/container-server.conf DEFAULT bind_ip 0.0.0.0 70 | openstack-config --set 
/etc/swift/container-server.conf DEFAULT devices /local/swiftstorage 71 | openstack-config --set /etc/swift/container-server.conf DEFAULT mount_check false 72 | 73 | openstack-config --set /etc/swift/object-server.conf DEFAULT mount_check false 74 | openstack-config --set /etc/swift/account-server.conf DEFAULT mount_check false 75 | openstack-config --set /etc/swift/container-server.conf DEFAULT mount_check false 76 | 77 | chown -R root:swift /etc/swift 78 | .... 79 | 80 | target=all 81 | .... 82 | if ! pcs resource show swift-fs > /dev/null 2>&1; then 83 | 84 | # We must be either the first node to run, or configuring 85 | # single-node clusters for a segregated deployment 86 | 87 | pcs resource create swift-fs Filesystem device="/dev/loop0" directory="/local/swiftstorage/target" fstype="xfs" force_clones="yes" --clone interleave=true 88 | 89 | pcs resource create swift-account systemd:openstack-swift-account --clone interleave=true 90 | pcs constraint colocation add swift-account-clone with swift-fs-clone 91 | pcs constraint order start swift-fs-clone then swift-account-clone 92 | 93 | pcs resource create swift-container systemd:openstack-swift-container --clone interleave=true 94 | pcs constraint colocation add swift-container-clone with swift-account-clone 95 | pcs constraint order start swift-account-clone then swift-container-clone 96 | 97 | pcs resource create swift-object systemd:openstack-swift-object --clone interleave=true 98 | pcs constraint colocation add swift-object-clone with swift-container-clone 99 | pcs constraint order start swift-container-clone then swift-object-clone 100 | if [ $PHD_VAR_deployment = collapsed ]; then 101 | pcs constraint order start openstack-keystone-clone then swift-account-clone 102 | fi 103 | fi 104 | .... 105 | -------------------------------------------------------------------------------- /pcmk/swift-test.sh: -------------------------------------------------------------------------------- 1 | . ${PHD_VAR_env_configdir}/keystonerc_admin 2 | 3 | openstack container list 4 | openstack container create test 5 | openstack container list 6 | 7 | openstack object list test 8 | truncate --size=1M /tmp/foobar 9 | openstack object create test /tmp/foobar 10 | openstack object list test 11 | openstack object delete test /tmp/foobar 12 | openstack object list test 13 | 14 | openstack container delete test 15 | openstack container list 16 | -------------------------------------------------------------------------------- /pcmk/swift.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 
18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # Tasks to be performed at this step include: 21 | 22 | ################################# 23 | # Scenario Requirements Section # 24 | ################################# 25 | = VARIABLES = 26 | 27 | PHD_VAR_deployment 28 | PHD_VAR_network_internal 29 | PHD_VAR_network_hosts_memcache 30 | PHD_VAR_osp_configdir 31 | PHD_VAR_osp_major 32 | PHD_VAR_secrets_swift_prefix 33 | PHD_VAR_secrets_swift_suffix 34 | 35 | ################################# 36 | # Scenario Requirements Section # 37 | ################################# 38 | = REQUIREMENTS = 39 | nodes: 1 40 | 41 | ###################### 42 | # Deployment Scripts # 43 | ###################### 44 | = SCRIPTS = 45 | 46 | target=all 47 | .... 48 | yum install -y openstack-swift-proxy openstack-utils python-swiftclient python-openstackclient 49 | .... 50 | 51 | target=$PHD_ENV_nodes1 52 | .... 53 | # NOTE: you MUST refer to the swift-ring-builder documentation in order to 54 | # configure prope data redundancy and set those values properly. 55 | # This is just a generic example that will store 3 copies of the same data for 56 | # proof-of-concept purposes. 57 | 58 | swift-ring-builder /etc/swift/object.builder create 16 3 24 59 | swift-ring-builder /etc/swift/container.builder create 16 3 24 60 | swift-ring-builder /etc/swift/account.builder create 16 3 24 61 | 62 | # .76,.77 and .78 are the addresses of rdo${PHD_VAR_osp_major}-swift-brick{1,2,3} 63 | # 'target' here comes from the swift-fs resource created on the proxy 64 | # pcs resource create swift-fs Filesystem device="/local/swiftsource" directory="/local/swiftstorage/target" fstype="none" options="bind" 65 | 66 | if [ $PHD_VAR_deployment = collapsed ]; then 67 | baseip=102 68 | else 69 | baseip=75 70 | fi 71 | 72 | for i in 1 2 3; do 73 | target=$((baseip + $i)) 74 | swift-ring-builder /etc/swift/account.builder add z$i-${PHD_VAR_network_internal}.${target}:6202/target 10 75 | swift-ring-builder /etc/swift/container.builder add z$i-${PHD_VAR_network_internal}.${target}:6201/target 10 76 | swift-ring-builder /etc/swift/object.builder add z$i-${PHD_VAR_network_internal}.${target}:6200/target 10 77 | done 78 | 79 | swift-ring-builder /etc/swift/account.builder rebalance 80 | swift-ring-builder /etc/swift/container.builder rebalance 81 | swift-ring-builder /etc/swift/object.builder rebalance 82 | 83 | mkdir -p $PHD_VAR_osp_configdir/swift 84 | cp /etc/swift/*.builder /etc/swift/*.gz $PHD_VAR_osp_configdir/swift/ 85 | 86 | .... 87 | 88 | target=all 89 | .... 
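# (For reference: the 'create 16 3 24' calls in the previous block request
# 2^16 partitions, 3 replicas of each object, and a 24 hour minimum between
# successive moves of any one partition -- consult the swift-ring-builder
# documentation before reusing those values.)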
90 | 91 | openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_prefix ${PHD_VAR_secrets_swift_prefix} 92 | openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix ${PHD_VAR_secrets_swift_suffix} 93 | openstack-config --set /etc/swift/swift.conf filter:ceilometer use egg:ceilometer#swift 94 | openstack-config --set /etc/swift/swift.conf pipeline:main pipeline "healthcheck cache authtoken keystoneauth proxy-server ceilometer" 95 | 96 | openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_tenant_name services 97 | openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_user swift 98 | openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_password swifttest 99 | openstack-config --set /etc/swift/proxy-server.conf filter:authtoken identity_uri http://vip-keystone:35357/ 100 | openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_uri http://vip-keystone:5000/ 101 | openstack-config --set /etc/swift/proxy-server.conf DEFAULT bind_ip $(ip addr show dev eth1 scope global | grep dynamic| sed -e 's#.*inet ##g' -e 's#/.*##g') 102 | 103 | openstack-config --set /etc/swift/object-expirer.conf object-expirer concurrency 100 104 | 105 | openstack-config --set /etc/swift/proxy-server.conf filter:cache memcache_servers ${PHD_VAR_network_hosts_memcache} 106 | openstack-config --set /etc/swift/object-expirer.conf filter:cache memcache_servers ${PHD_VAR_network_hosts_memcache} 107 | 108 | cp $PHD_VAR_osp_configdir/swift/*.builder $PHD_VAR_osp_configdir/swift/*.gz /etc/swift/ 109 | chown -R root:swift /etc/swift 110 | 111 | .... 112 | 113 | target=$PHD_ENV_nodes1 114 | .... 115 | pcs resource create swift-proxy systemd:openstack-swift-proxy --clone interleave=true 116 | pcs resource create swift-object-expirer systemd:openstack-swift-object-expirer 117 | 118 | pcs constraint order start swift-proxy-clone then swift-object-expirer 119 | 120 | if [ $PHD_VAR_deployment = collapsed ]; then 121 | pcs constraint order start swift-account-clone then swift-proxy-clone 122 | fi 123 | .... 124 | -------------------------------------------------------------------------------- /pcmk/vmsnap-rollback.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # We start with 3 (or more, up to 16) nodes running a minimal CentOS 6 21 | # 22 | # Tasks to be performed include: 23 | # - setting up the required repositories from which to download Openstack and the HA-Addon 24 | # - disabling firewalls and SElinux. This is a necessary evil until the proper policies can be written. 
25 | # - creating network bridges for use by VMs hosting OpenStack services 26 | # - normalizing network interface names 27 | # - fixing multicast 28 | # - removing /home and making the root partition as large as possible to maximumize the amount of space available to openstack 29 | 30 | ################################# 31 | # Scenario Requirements Section # 32 | ################################# 33 | = VARIABLES = 34 | PHD_ENV_snapshot_name 35 | PHD_VAR_osp_configdir 36 | 37 | ################################# 38 | # Scenario Requirements Section # 39 | ################################# 40 | = REQUIREMENTS = 41 | nodes: 3 42 | 43 | ###################### 44 | # Deployment Scripts # 45 | ###################### 46 | = SCRIPTS = 47 | 48 | target=all 49 | .... 50 | # get list of all VMs configured on a given host 51 | vmlist=$(virsh list --all --name) 52 | 53 | for vm in $vmlist; do 54 | # if the requested snapshot doesn't exist, don't continue with 55 | # disruptive actions 56 | if [ ! -f /localvms/${vm}-${PHD_ENV_snapshot_name}.cow ]; then 57 | echo "ERROR: Snapshot ${PHD_ENV_snapshot_name} does not exists" 58 | exit 1 59 | fi 60 | 61 | current=$(virsh snapshot-current --name $vm) 62 | if [ "$current" = ${PHD_ENV_snapshot_name} ]; then 63 | echo "ERROR: We are already at snapshot: ${PHD_ENV_snapshot_name}" 64 | exit 1 65 | fi 66 | 67 | # make sure that the we are rolling back to a snap 68 | # that is parent of the current 69 | snaplist=$(virsh snapshot-list --tree $vm | sed -e 's#.*|.*##g' -e 's#.*+- ##g') 70 | for snap in $snaplist; do 71 | if [ "$snap" = "${PHD_ENV_snapshot_name}" ]; then 72 | # we found requested snap before current and we are good 73 | echo "found snap ${PHD_ENV_snapshot_name}" 74 | break; 75 | fi 76 | if [ "$snap" = "$current" ]; then 77 | # current is before the requested snap, error 78 | echo "ERROR: Requested snap ${PHD_ENV_snapshot_name} is not a parent of current $current snap" 79 | exit 1 80 | fi 81 | done 82 | done 83 | 84 | .... 85 | 86 | target=all 87 | .... 88 | # to roll back we need to kill hard the VMs but we want to make 89 | # sure that the other nodes won't fence before we get to 90 | # revert properly 91 | systemctl stop fence_virtd 92 | 93 | vmlist=$(virsh list --all --name) 94 | for vm in $vmlist; do 95 | rollback="" 96 | 97 | virsh destroy $vm 98 | 99 | snaplist=$(virsh snapshot-list --tree $vm | sed -e 's#.*|.*##g' -e 's#.*+- ##g') 100 | 101 | # we need to proceed from the bottom of the list 102 | for snap in $snaplist; do 103 | rollback="$snap $rollback" 104 | done 105 | 106 | # kill all disks 107 | for snap in $rollback; do 108 | if [ "$snap" = "${PHD_ENV_snapshot_name}" ]; then 109 | break; 110 | fi 111 | rm -f /localvms/${vm}-${snap}.cow 112 | # needs extra cleaning 113 | case $snap in 114 | cinder) 115 | rm -rf $PHD_VAR_osp_configdir/cinder 116 | ;; 117 | glance) 118 | rm -rf $PHD_VAR_osp_configdir/glance 119 | ;; 120 | compute) 121 | rm -rf $PHD_VAR_osp_configdir/instances/* 122 | ;; 123 | esac 124 | done 125 | 126 | # drop all childs 127 | virsh snapshot-delete --domain $vm ${PHD_ENV_snapshot_name} --metadata --children-only 128 | 129 | virsh detach-disk $vm vda --config 130 | virsh attach-disk $vm /localvms/$vm-${PHD_ENV_snapshot_name}.cow vda \ 131 | --sourcetype file --type disk \ 132 | --driver qemu --subdriver qcow2 \ 133 | --config 134 | 135 | done 136 | 137 | .... 138 | 139 | target=all 140 | .... 
141 | 142 | systemctl start fence_virtd 143 | # get list of all VMs configured on a given host 144 | vmlist=$(virsh list --all --name) 145 | 146 | for vm in $vmlist; do 147 | virsh start $vm 148 | done 149 | .... 150 | -------------------------------------------------------------------------------- /pcmk/vmsnap.scenario: -------------------------------------------------------------------------------- 1 | # This file can be used directly by 'phd', see 'build-all.sh' in this 2 | # directory for how it can be invoked. The only requirement is a list 3 | # of nodes you'd like it to modify. 4 | # 5 | # The scope of each command-block is controlled by the preceeding 6 | # 'target' line. 7 | # 8 | # - target=all 9 | # The commands are executed on evey node provided 10 | # 11 | # - target=local 12 | # The commands are executed from the node hosting phd. When not 13 | # using phd, they should be run from some other independant host 14 | # (such as the puppet master) 15 | # 16 | # - target=$PHD_ENV_nodes{N} 17 | # The commands are executed on the Nth node provided. 18 | # For example, to run on only the first node would be target=$PHD_ENV_nodes1 19 | # 20 | # We start with 3 (or more, up to 16) nodes running a minimal CentOS 6 21 | # 22 | # Tasks to be performed include: 23 | # - setting up the required repositories from which to download Openstack and the HA-Addon 24 | # - disabling firewalls and SElinux. This is a necessary evil until the proper policies can be written. 25 | # - creating network bridges for use by VMs hosting OpenStack services 26 | # - normalizing network interface names 27 | # - fixing multicast 28 | # - removing /home and making the root partition as large as possible to maximumize the amount of space available to openstack 29 | 30 | ################################# 31 | # Scenario Requirements Section # 32 | ################################# 33 | = VARIABLES = 34 | PHD_ENV_snapshot_name 35 | 36 | ################################# 37 | # Scenario Requirements Section # 38 | ################################# 39 | = REQUIREMENTS = 40 | nodes: 3 41 | 42 | ###################### 43 | # Deployment Scripts # 44 | ###################### 45 | = SCRIPTS = 46 | 47 | # links to snapshot management: 48 | # https://kashyapc.fedorapeople.org/virt/infra.next-2015/Advanced-Snapshots-with-libvirt-and-QEMU.pdf 49 | # http://lists.openstack.org/pipermail/openstack-dev/2013-June/010212.html 50 | # https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html-single/Virtualization_Deployment_and_Administration_Guide/index.html#warning-live-snapshots 51 | 52 | target=all 53 | .... 54 | # get list of all VMs configured on a given host 55 | vmlist=$(virsh list --all --name) 56 | 57 | # check if snapshot already exists, otherwise take the snapshot 58 | for vm in $vmlist; do 59 | if [ -f /localvms/${vm}-${PHD_ENV_snapshot_name}.cow ]; then 60 | echo "Snapshot ${PHD_ENV_snapshot_name} already exists" 61 | exit 0 62 | fi 63 | virsh snapshot-create-as --domain $vm ${PHD_ENV_snapshot_name} --diskspec vda,file=/localvms/${vm}-${PHD_ENV_snapshot_name}.cow,snapshot=external --disk-only --atomic 64 | done 65 | .... 66 | --------------------------------------------------------------------------------
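Once vmsnap.scenario has run, the external snapshot chain can be inspected per VM with stock virsh/qemu-img commands. This is only a manual spot check, not part of any scenario; 'rdo7-node1' stands in for whichever domain name virsh reports:

    virsh snapshot-list --domain rdo7-node1 --tree
    qemu-img info /localvms/rdo7-node1-<snapshot_name>.cow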