├── LICENSE
├── README.md
├── openstack-arch.png
├── pillar
│   ├── openstack
│   │   ├── cinder.sls
│   │   ├── glance.sls
│   │   ├── horizon.sls
│   │   ├── keystone.sls
│   │   ├── neutron.sls
│   │   ├── nova.sls
│   │   └── rabbit.sls
│   └── top.sls
└── states
    ├── openstack-mitaka
    │   ├── all-in-one.sls
    │   ├── cinder
    │   │   ├── config.sls
    │   │   ├── files
    │   │   │   ├── api-paste.ini
    │   │   │   ├── cinder.conf
    │   │   │   ├── cinder_data.sh
    │   │   │   ├── exports
    │   │   │   ├── logging.conf
    │   │   │   ├── nfs_shares
    │   │   │   ├── openstack-cinder-api
    │   │   │   ├── openstack-cinder-scheduler
    │   │   │   ├── openstack-cinder-volume
    │   │   │   ├── policy.json
    │   │   │   ├── rootwrap.conf
    │   │   │   └── rootwrap.d
    │   │   │       └── volume.filters
    │   │   ├── nfs.sls
    │   │   └── server.sls
    │   ├── compute.sls
    │   ├── control.sls
    │   ├── glance
    │   │   ├── files
    │   │   │   ├── config
    │   │   │   │   ├── glance-api.conf
    │   │   │   │   ├── glance-cache.conf
    │   │   │   │   ├── glance-registry.conf
    │   │   │   │   ├── glance-scrubber.conf
    │   │   │   │   ├── policy.json
    │   │   │   │   └── schema-image.json
    │   │   │   ├── glance_init.sh
    │   │   │   ├── openstack-glance-api
    │   │   │   ├── openstack-glance-registry
    │   │   │   └── openstack-glance-scrubber
    │   │   ├── init.sls
    │   │   └── server.sls
    │   ├── horizon
    │   │   ├── files
    │   │   │   └── config
    │   │   │       └── local_settings
    │   │   └── server.sls
    │   ├── init
    │   │   ├── base.sls
    │   │   ├── compute_base.sls
    │   │   └── files
    │   │       ├── compute_pip.txt
    │   │       ├── control_pip.txt
    │   │       ├── epel-6.repo
    │   │       ├── foreman.repo
    │   │       ├── ntp.conf
    │   │       ├── pip.conf
    │   │       ├── puppetlabs.repo
    │   │       └── rdo-release.repo
    │   ├── keystone
    │   │   ├── files
    │   │   │   ├── config
    │   │   │   │   ├── default_catalog.templates
    │   │   │   │   ├── keystone.conf
    │   │   │   │   ├── logging.conf
    │   │   │   │   └── policy.json
    │   │   │   ├── keystone_admin
    │   │   │   ├── keystone_init.sh
    │   │   │   └── openstack-keystone
    │   │   ├── init.sls
    │   │   └── server.sls
    │   ├── mysql
    │   │   ├── cinder.sls
    │   │   ├── files
    │   │   │   └── my.cnf
    │   │   ├── glance.sls
    │   │   ├── init.sls
    │   │   ├── keystone.sls
    │   │   ├── neutron.sls
    │   │   ├── nova.sls
    │   │   └── server.sls
    │   ├── neutron
    │   │   ├── config.sls
    │   │   ├── dhcp_agent.sls
    │   │   ├── files
    │   │   │   ├── config
    │   │   │   │   ├── dhcp_agent.ini
    │   │   │   │   ├── fwaas_driver.ini
    │   │   │   │   ├── l3_agent.ini
    │   │   │   │   ├── lbaas_agent.ini
    │   │   │   │   ├── metadata_agent.ini
    │   │   │   │   ├── neutron.conf
    │   │   │   │   ├── plugins.ini
    │   │   │   │   ├── plugins
    │   │   │   │   │   ├── linuxbridge
    │   │   │   │   │   │   └── linuxbridge_conf.ini
    │   │   │   │   │   └── ml2
    │   │   │   │   │       ├── ml2_conf.ini
    │   │   │   │   │       ├── ml2_conf_arista.ini
    │   │   │   │   │       ├── ml2_conf_brocade.ini
    │   │   │   │   │       ├── ml2_conf_cisco.ini
    │   │   │   │   │       ├── ml2_conf_mlnx.ini
    │   │   │   │   │       ├── ml2_conf_ncs.ini
    │   │   │   │   │       ├── ml2_conf_odl.ini
    │   │   │   │   │       ├── ml2_conf_ofa.ini
    │   │   │   │   │       └── restproxy.ini
    │   │   │   │   ├── policy.json
    │   │   │   │   ├── release
    │   │   │   │   ├── rootwrap.conf
    │   │   │   │   └── vpn_agent.ini
    │   │   │   ├── neutron-dhcp-agent
    │   │   │   ├── neutron-l3-agent
    │   │   │   ├── neutron-lbaas-agent
    │   │   │   ├── neutron-linuxbridge-agent
    │   │   │   ├── neutron-metadata-agent
    │   │   │   ├── neutron-ovs-cleanup
    │   │   │   ├── neutron-server
    │   │   │   ├── neutron-vpn-agent
    │   │   │   └── neutron_init.sh
    │   │   ├── init.sls
    │   │   ├── lbaas_agent.sls
    │   │   ├── linuxbridge_agent.sls
    │   │   └── server.sls
    │   ├── nova
    │   │   ├── compute.sls
    │   │   ├── config.sls
    │   │   ├── control.sls
    │   │   ├── files
    │   │   │   ├── config
    │   │   │   │   ├── api-paste.ini
    │   │   │   │   ├── nova.conf
    │   │   │   │   ├── policy.json
    │   │   │   │   ├── release
    │   │   │   │   └── rootwrap.conf
    │   │   │   ├── nova_init.sh
    │   │   │   ├── openstack-nova-api
    │   │   │   ├── openstack-nova-cert
    │   │   │   ├── openstack-nova-compute
    │   │   │   ├── openstack-nova-conductor
    │   │   │   ├── openstack-nova-console
    │   │   │   ├── openstack-nova-consoleauth
    │   │   │   ├── openstack-nova-metadata-api
    │   │   │   ├── openstack-nova-novncproxy
    │   │   │   ├── openstack-nova-scheduler
    │   │   │   ├── openstack-nova-spicehtml5proxy
    │   │   │   └── openstack-nova-xvpvncproxy
    │   │   └── init.sls
    │   └── rabbitmq
    │       └── server.sls
    └── top.sls

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Automated OpenStack Stein Deployment with SaltStack

Preface
====

**Background**

- First: an earlier edition of this project deployed the OpenStack I (Icehouse) release with SaltStack, installing everything from source packages.
- However: users reported that the source-based installation was cumbersome, and pip mirror and network problems broke the automated deployment for many of them.
- Therefore: the project was rewritten to install from yum packages instead, and in 2019 it was updated to the Stein release.
- Finally: suggestions and feedback are welcome. QQ: 57459267

**A Friendly Note**

- This document is intended for users who are familiar with OpenStack and can complete an OpenStack deployment by hand. If you are not yet familiar with OpenStack, you can refer to the courses I recorded:
- "Automated Operations in Practice with SaltStack": [http://www.devopsedu.com/front/couinfo/49]
- "Building an Enterprise Private Cloud with OpenStack": [http://www.devopsedu.com/front/couinfo/101]

Usage
====

**1. OpenStack Architecture**

![Architecture](https://github.com/unixhot/saltstack-openstack/blob/master/openstack-arch.png)

**2. Overview**

- 1. Each service has its own directory of SLS files, and each of those directories contains a files directory holding template files and scripts.
- 2. Each service has a Pillar file that defines its configuration-related values, such as IP addresses, network interfaces, usernames, and passwords.

**Deployment Steps**

## 1. System Initialization (Required)

1.1 Set the hostname on every node
```
[root@linux-node1 ~]# cat /etc/hostname
linux-node1.example.com

[root@linux-node2 ~]# cat /etc/hostname
linux-node2.example.com

[root@linux-node3 ~]# cat /etc/hostname
linux-node3.example.com

```
1.2 Configure /etc/hosts so that the hostnames resolve
```
[root@linux-node1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.11 linux-node1 linux-node1.example.com
192.168.56.12 linux-node2 linux-node2.example.com
192.168.56.13 linux-node3 linux-node3.example.com

```
1.3 Disable SELinux
```
[root@linux-node1 ~]# vim /etc/sysconfig/selinux
SELINUX=disabled    # change this to disabled
```

1.4 Keep NetworkManager and the firewall from starting at boot
```
[root@linux-node1 ~]# systemctl disable firewalld
[root@linux-node1 ~]# systemctl disable NetworkManager
```

## 2. Install Salt-SSH and Clone This Project

2.1 Set up passwordless SSH login from the deployment node to all other nodes (including itself)
```bash
[root@linux-node1 ~]# ssh-keygen -t rsa
[root@linux-node1 ~]# ssh-copy-id linux-node1
[root@linux-node1 ~]# ssh-copy-id linux-node2
[root@linux-node1 ~]# ssh-copy-id linux-node3
```

2.2 Install Salt SSH (note: older Salt SSH releases do not support defining Grains in the Roster; version 2017.7.4 or later is required)
```
[root@linux-node1 ~]# yum install https://mirrors.aliyun.com/epel/epel-release-latest-7.noarch.rpm
[root@linux-node1 ~]# yum install -y https://mirrors.aliyun.com/saltstack/yum/redhat/salt-repo-latest-2.el7.noarch.rpm
[root@linux-node1 ~]# sed -i "s/repo.saltstack.com/mirrors.aliyun.com\/saltstack/g" /etc/yum.repos.d/salt-latest.repo
[root@linux-node1 ~]# yum install -y salt-ssh git unzip
```

2.3 Configure SaltStack to manage the other nodes
```yaml
[root@linux-node1 ~]# vim /etc/salt/roster
linux-node1:
  host: 192.168.56.11
  user: root
  priv: /root/.ssh/id_rsa

linux-node2:
  host: 192.168.56.12
  user: root
  priv: /root/.ssh/id_rsa
```

2.4 Configure the state and pillar file roots
```
[root@linux-node1 ~]# vim /etc/salt/master
file_roots:
  base:
    - /srv/salt/

pillar_roots:
  base:
    - /srv/pillar
[root@linux-node1 ~]# mkdir -p /srv/{salt,pillar}

```

2.5 Clone the project
```
# git clone https://github.com/unixhot/salt-openstack.git
# cd salt-openstack/
```
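2.6 Copy the states and pillar data into the Salt roots

The top files and SLS includes refer to the state tree as `openstack` (for example `openstack.all-in-one` and `salt://openstack/cinder/...`), so the cloned `states/openstack-mitaka` directory presumably needs to be published under that name inside the roots configured in step 2.4. A minimal sketch, assuming exactly that layout:

```bash
# Assumption: the state tree is exposed as "openstack" under /srv/salt,
# matching the salt:// paths and include names used throughout this repo.
[root@linux-node1 ~]# cp -r salt-openstack/states/openstack-mitaka /srv/salt/openstack
[root@linux-node1 ~]# cp salt-openstack/states/top.sls /srv/salt/top.sls
[root@linux-node1 ~]# cp -r salt-openstack/pillar/openstack /srv/pillar/openstack
[root@linux-node1 ~]# cp salt-openstack/pillar/top.sls /srv/pillar/top.sls
```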
## 3. Edit the Pillar Configuration for Each Service

Each service has a Pillar file under the pillar directory; adjust the values there (IP addresses, network interfaces, usernames, passwords, and so on) for your environment.

## 4. Edit top.sls

**All-In-One (all services on a single server)**

```yaml
base:
  '*':
    - openstack.all-in-one
```

***Control node + compute node***

```yaml
base:
  'control':
    - openstack.control
  'compute':
    - openstack.compute
```

***Multi-node***

```yaml
base:
  'mysql':
    - openstack.mysql.server
  'rabbitmq':
    - openstack.rabbitmq.server
  'keystone':
    - openstack.keystone.server
  'glance':
    - openstack.glance.server
  'nova-control':
    - openstack.nova.control
  'nova-compute':
    - openstack.nova.compute
    - openstack.neutron.linuxbridge_agent
  'neutron-server':
    - openstack.neutron.server
  'cinder-server':
    - openstack.cinder.server
  'horizon':
    - openstack.horizon.server
```
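## 5. Apply the States

With the roster, file roots, Pillar data, and top file in place, the deployment can be driven entirely through salt-ssh. A minimal sketch — first verify that every roster entry answers, then apply the highstate; `state.highstate` renders `top.sls` per target, so the same two commands cover the all-in-one, control+compute, and multi-node layouts:

```bash
# test.ping and state.highstate are standard Salt functions; the '*' target
# assumes the roster entries defined in step 2.3.
[root@linux-node1 ~]# salt-ssh '*' test.ping
[root@linux-node1 ~]# salt-ssh '*' state.highstate
```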
## Known Bugs

* Neutron's neutron.conf needs the tenant id of the `service` tenant, which has to be looked up and then set manually:
```
keystone tenant-list | awk '/ service / { print $2 }'
nova_admin_tenant_id =
```

--------------------------------------------------------------------------------
/openstack-arch.png:
--------------------------------------------------------------------------------
 https://raw.githubusercontent.com/unixhot/salt-openstack/75d63ccd4c3810c8d88e0e8331d873bcc46417a3/openstack-arch.png
--------------------------------------------------------------------------------
/pillar/openstack/cinder.sls:
--------------------------------------------------------------------------------
1 | cinder:
2 |   MYSQL_SERVER: 192.168.56.22
3 |   CINDER_DBNAME: cinder
4 |   CINDER_USER: cinder
5 |   CINDER_PASS: cinder
6 |   DB_ALLOW: cinder.*
7 |   HOST_ALLOW: 192.168.0.0/255.255.0.0
8 |   RABBITMQ_HOST: 192.168.56.22
9 |   RABBITMQ_PORT: 5672
10 |   RABBITMQ_USER: guest
11 |   RABBITMQ_PASS: guest
12 |   AUTH_KEYSTONE_HOST: 192.168.56.22
13 |   AUTH_KEYSTONE_PORT: 35357
14 |   AUTH_KEYSTONE_PROTOCOL: http
15 |   AUTH_ADMIN_PASS: admin
16 |   ADMIN_PASSWD: admin
17 |   ADMIN_TOKEN: 5ba5e30637c0dedbc411
18 |   CONTROL_IP: 192.168.56.22
19 |   NFS_IP: 192.168.56.22
20 |   IPADDR: salt['network.ip_addrs']
--------------------------------------------------------------------------------
/pillar/openstack/glance.sls:
--------------------------------------------------------------------------------
1 | glance:
2 |   MYSQL_SERVER: 192.168.56.22
3 |   GLANCE_IP: 192.168.56.22
4 |   GLANCE_DB_USER: glance
5 |   GLANCE_DB_NAME: glance
6 |   GLANCE_DB_PASS: glance
7 |   DB_ALLOW: glance.*
8 |   HOST_ALLOW: 192.168.0.0/255.255.0.0
9 |   RABBITMQ_HOST: 192.168.56.22
10 |   RABBITMQ_PORT: 5672
11 |   RABBITMQ_USER: guest
12 |   RABBITMQ_PASS: guest
13 |   AUTH_KEYSTONE_HOST: 192.168.56.22
14 |   AUTH_KEYSTONE_PORT: 35357
15 |   AUTH_KEYSTONE_PROTOCOL: http
16 |   AUTH_GLANCE_ADMIN_TENANT: service
17 |   AUTH_GLANCE_ADMIN_USER: glance
18 |   AUTH_GLANCE_ADMIN_PASS: glance
--------------------------------------------------------------------------------
/pillar/openstack/horizon.sls:
--------------------------------------------------------------------------------
1 | horizon:
2 |   ALLOWED_HOSTS: ['127.0.0.1', '192.168.56.22']
3 |   OPENSTACK_HOST: "192.168.56.22"
--------------------------------------------------------------------------------
/pillar/openstack/keystone.sls:
--------------------------------------------------------------------------------
1 | keystone:
2 |   MYSQL_SERVER: 192.168.56.22
3 |   KEYSTONE_IP: 192.168.56.22
4 |   KEYSTONE_ADMIN_TOKEN: ADMIN
5 |   KEYSTONE_ADMIN_TENANT: admin
6 |   KEYSTONE_ADMIN_USER: admin
7 |   KEYSTONE_ADMIN_PASSWD: admin
8 |   KEYSTONE_ROLE_NAME: admin
9 |   KEYSTONE_AUTH_URL: http://192.168.56.22:35357/v2.0
10 |   KEYSTONE_DB_NAME: keystone
11 |   KEYSTONE_DB_USER: keystone
12 |   KEYSTONE_DB_PASS: keystone
13 |   DB_ALLOW: keystone.*
14 |   HOST_ALLOW: 192.168.0.0/255.255.0.0
--------------------------------------------------------------------------------
/pillar/openstack/neutron.sls:
--------------------------------------------------------------------------------
1 | neutron:
2 |   MYSQL_SERVER: 192.168.56.22
3 |   NEUTRON_IP: 192.168.56.22
4 |   NEUTRON_DB_NAME: neutron
5 |   NEUTRON_DB_USER: neutron
6 |   NEUTRON_DB_PASS: neutron
7 |   AUTH_KEYSTONE_HOST: 192.168.56.22
8 |   AUTH_KEYSTONE_PORT: 35357
9 |   AUTH_KEYSTONE_PROTOCOL: http
10 |   AUTH_ADMIN_PASS: neutron
11 |   VM_INTERFACE: eth0
12 |   NOVA_URL: http://192.168.56.22:8774/v2
13 |   NOVA_ADMIN_USER: nova
14 |   NOVA_ADMIN_PASS: nova
15 |   NOVA_ADMIN_TENANT: service
16 |   NOVA_ADMIN_TENANT_ID: b28c286fd0f84130a2722065976623ea
17 |   NOVA_ADMIN_AUTH_URL: http://192.168.56.22:35357/v2.0
18 |   AUTH_NEUTRON_ADMIN_TENANT: service
19 |   AUTH_NEUTRON_ADMIN_USER: neutron
20 |   AUTH_NEUTRON_ADMIN_PASS: neutron
21 |   DB_ALLOW: neutron.*
22 |   HOST_ALLOW: 192.168.0.0/255.255.0.0
--------------------------------------------------------------------------------
/pillar/openstack/nova.sls:
--------------------------------------------------------------------------------
1 | nova:
2 |   MYSQL_SERVER: 192.168.56.22
3 |   NOVA_IP: 192.168.56.22
4 |   NOVA_DB_NAME: nova
5 |   NOVA_DB_USER: nova
6 |   NOVA_DB_PASS: nova
7 |   DB_ALLOW: nova.*
8 |   HOST_ALLOW: 192.168.0.0/255.255.0.0
9 |   RABBITMQ_HOST: 192.168.56.22
10 |   RABBITMQ_PORT: 5672
11 |   RABBITMQ_USER: guest
12 |   RABBITMQ_PASS: guest
13 |   AUTH_KEYSTONE_HOST: 192.168.56.22
14 |   AUTH_KEYSTONE_PORT: 35357
15 |   AUTH_KEYSTONE_PROTOCOL: http
16 |   AUTH_NOVA_ADMIN_TENANT: service
17 |   AUTH_NOVA_ADMIN_USER: nova
18 |   AUTH_NOVA_ADMIN_PASS: nova
19 |   GLANCE_HOST: 192.168.56.22
20 |   AUTH_KEYSTONE_URI: http://192.168.56.22:5000
21 |   NEUTRON_URL: http://192.168.56.22:9696
22 |   NEUTRON_ADMIN_USER: neutron
23 |   NEUTRON_ADMIN_PASS: neutron
24 |   NEUTRON_ADMIN_TENANT: service
25 |   NEUTRON_ADMIN_AUTH_URL: http://192.168.56.22:5000/v2.0
26 |   NOVNCPROXY_BASE_URL: http://192.168.56.22:6080/vnc_auto.html
27 |   AUTH_URI: http://192.168.56.22:5000
--------------------------------------------------------------------------------
/pillar/openstack/rabbit.sls:
--------------------------------------------------------------------------------
1 | rabbit:
2 |   RABBITMQ_HOST: 192.168.56.22
3 |   RABBITMQ_PORT: 5672
4 |   RABBITMQ_USER: guest
5 |   RABBITMQ_PASS: guest
--------------------------------------------------------------------------------
/pillar/top.sls:
--------------------------------------------------------------------------------
1 | base:
2 |   '*':
3 |     - openstack.keystone
4 |     - openstack.glance
5 |     - openstack.nova
6 |     - openstack.neutron
7 |     - openstack.cinder
8 |     - openstack.horizon
9 |     - openstack.rabbit
--------------------------------------------------------------------------------
/states/openstack-mitaka/all-in-one.sls:
--------------------------------------------------------------------------------
1 | include:
2 |   - openstack.init.base
3 |   - openstack.rabbitmq.server
4 |   - openstack.mysql.server
5 |   - openstack.keystone.server
6 |   - openstack.glance.server
7 |   - openstack.nova.control
8 |   - openstack.nova.compute
9 |   - openstack.neutron.server
10 |   - openstack.neutron.linuxbridge_agent
11 |   - openstack.horizon.server
--------------------------------------------------------------------------------
/states/openstack-mitaka/cinder/config.sls:
--------------------------------------------------------------------------------
1 |
/etc/cinder/cinder.conf: 2 | file.managed: 3 | - source: salt://openstack/cinder/files/cinder.conf 4 | - mode: 644 5 | - user: root 6 | - group: root 7 | - template: jinja 8 | - defaults: 9 | RABBITMQ_HOST: {{ pillar['cinder']['RABBITMQ_HOST'] }} 10 | RABBITMQ_PORT: {{ pillar['cinder']['RABBITMQ_PORT'] }} 11 | RABBITMQ_USER: {{ pillar['cinder']['RABBITMQ_USER'] }} 12 | RABBITMQ_PASS: {{ pillar['cinder']['RABBITMQ_PASS'] }} 13 | AUTH_KEYSTONE_HOST: {{ pillar['cinder']['AUTH_KEYSTONE_HOST'] }} 14 | AUTH_KEYSTONE_PORT: {{ pillar['cinder']['AUTH_KEYSTONE_PORT'] }} 15 | AUTH_KEYSTONE_PROTOCOL: {{ pillar['cinder']['AUTH_KEYSTONE_PROTOCOL'] }} 16 | AUTH_ADMIN_PASS: {{ pillar['cinder']['AUTH_ADMIN_PASS'] }} 17 | CONTROL_IP: {{ pillar['cinder']['CONTROL_IP'] }} 18 | MYSQL_SERVER: {{ pillar['cinder']['MYSQL_SERVER'] }} 19 | CINDER_USER: {{ pillar['cinder']['CINDER_USER'] }} 20 | CINDER_PASS: {{ pillar['cinder']['CINDER_PASS'] }} 21 | CINDER_DBNAME: {{ pillar['cinder']['CINDER_DBNAME'] }} 22 | -------------------------------------------------------------------------------- /states/openstack-mitaka/cinder/files/api-paste.ini: -------------------------------------------------------------------------------- 1 | ############# 2 | # OpenStack # 3 | ############# 4 | 5 | [composite:osapi_volume] 6 | use = call:cinder.api:root_app_factory 7 | /: apiversions 8 | /v1: openstack_volume_api_v1 9 | /v2: openstack_volume_api_v2 10 | 11 | [composite:openstack_volume_api_v1] 12 | use = call:cinder.api.middleware.auth:pipeline_factory 13 | noauth = request_id faultwrap sizelimit noauth apiv1 14 | keystone = request_id faultwrap sizelimit authtoken keystonecontext apiv1 15 | keystone_nolimit = request_id faultwrap sizelimit authtoken keystonecontext apiv1 16 | 17 | [composite:openstack_volume_api_v2] 18 | use = call:cinder.api.middleware.auth:pipeline_factory 19 | noauth = request_id faultwrap sizelimit noauth apiv2 20 | keystone = request_id faultwrap sizelimit authtoken keystonecontext apiv2 21 | keystone_nolimit = request_id faultwrap sizelimit authtoken keystonecontext apiv2 22 | 23 | [filter:request_id] 24 | paste.filter_factory = cinder.openstack.common.middleware.request_id:RequestIdMiddleware.factory 25 | 26 | [filter:faultwrap] 27 | paste.filter_factory = cinder.api.middleware.fault:FaultWrapper.factory 28 | 29 | [filter:noauth] 30 | paste.filter_factory = cinder.api.middleware.auth:NoAuthMiddleware.factory 31 | 32 | [filter:sizelimit] 33 | paste.filter_factory = cinder.api.middleware.sizelimit:RequestBodySizeLimiter.factory 34 | 35 | [app:apiv1] 36 | paste.app_factory = cinder.api.v1.router:APIRouter.factory 37 | 38 | [app:apiv2] 39 | paste.app_factory = cinder.api.v2.router:APIRouter.factory 40 | 41 | [pipeline:apiversions] 42 | pipeline = faultwrap osvolumeversionapp 43 | 44 | [app:osvolumeversionapp] 45 | paste.app_factory = cinder.api.versions:Versions.factory 46 | 47 | ########## 48 | # Shared # 49 | ########## 50 | 51 | [filter:keystonecontext] 52 | paste.filter_factory = cinder.api.middleware.auth:CinderKeystoneContext.factory 53 | 54 | [filter:authtoken] 55 | paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory 56 | -------------------------------------------------------------------------------- /states/openstack-mitaka/cinder/files/cinder_data.sh: -------------------------------------------------------------------------------- 1 | export OS_TENANT_NAME="admin" 2 | export OS_USERNAME="admin" 3 | export OS_PASSWORD="{{ADMIN_PASSWD}}" 4 | export 
OS_AUTH_URL="http://{{CONTROL_IP}}:5000/v2.0/" 5 | 6 | function get_id () { 7 | echo `"$@" | grep ' id ' | awk '{print $4}'` 8 | } 9 | 10 | CINDER_SERVICE=$(get_id \ 11 | keystone service-create --name=cinder \ 12 | --type=volume \ 13 | --description="OpenStack Block Storage") 14 | 15 | keystone endpoint-create --service-id="$CINDER_SERVICE" \ 16 | --publicurl=http://{{CONTROL_IP}}:8776/v1/%\(tenant_id\)s \ 17 | --adminurl=http://{{CONTROL_IP}}:8776/v1/%\(tenant_id\)s \ 18 | --internalurl=http://{{CONTROL_IP}}:8776/v1/%\(tenant_id\)s 19 | -------------------------------------------------------------------------------- /states/openstack-mitaka/cinder/files/exports: -------------------------------------------------------------------------------- 1 | /data/nfs *(rw,no_root_squash) 2 | -------------------------------------------------------------------------------- /states/openstack-mitaka/cinder/files/logging.conf: -------------------------------------------------------------------------------- 1 | [loggers] 2 | keys = root, cinder 3 | 4 | [handlers] 5 | keys = stderr, stdout, watchedfile, syslog, null 6 | 7 | [formatters] 8 | keys = context, default 9 | 10 | [logger_root] 11 | level = WARNING 12 | handlers = null 13 | 14 | [logger_cinder] 15 | level = INFO 16 | handlers = stderr 17 | qualname = cinder 18 | 19 | [logger_amqplib] 20 | level = WARNING 21 | handlers = stderr 22 | qualname = amqplib 23 | 24 | [logger_sqlalchemy] 25 | level = WARNING 26 | handlers = stderr 27 | qualname = sqlalchemy 28 | # "level = INFO" logs SQL queries. 29 | # "level = DEBUG" logs SQL queries and results. 30 | # "level = WARNING" logs neither. (Recommended for production systems.) 31 | 32 | [logger_boto] 33 | level = WARNING 34 | handlers = stderr 35 | qualname = boto 36 | 37 | [logger_suds] 38 | level = INFO 39 | handlers = stderr 40 | qualname = suds 41 | 42 | [logger_eventletwsgi] 43 | level = WARNING 44 | handlers = stderr 45 | qualname = eventlet.wsgi.server 46 | 47 | [handler_stderr] 48 | class = StreamHandler 49 | args = (sys.stderr,) 50 | formatter = context 51 | 52 | [handler_stdout] 53 | class = StreamHandler 54 | args = (sys.stdout,) 55 | formatter = context 56 | 57 | [handler_watchedfile] 58 | class = handlers.WatchedFileHandler 59 | args = ('cinder.log',) 60 | formatter = context 61 | 62 | [handler_syslog] 63 | class = handlers.SysLogHandler 64 | args = ('/dev/log', handlers.SysLogHandler.LOG_USER) 65 | formatter = context 66 | 67 | [handler_null] 68 | class = cinder.openstack.common.log.NullHandler 69 | formatter = default 70 | args = () 71 | 72 | [formatter_context] 73 | class = cinder.openstack.common.log.ContextFormatter 74 | 75 | [formatter_default] 76 | format = %(message)s 77 | -------------------------------------------------------------------------------- /states/openstack-mitaka/cinder/files/nfs_shares: -------------------------------------------------------------------------------- 1 | {{NFS_IP}}:/data/nfs 2 | -------------------------------------------------------------------------------- /states/openstack-mitaka/cinder/files/openstack-cinder-api: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # openstack-cinder-api OpenStack cinder API Server 4 | # 5 | # chkconfig: - 98 02 6 | # description: At the heart of the cloud framework is an API Server. 
\ 7 | # This API Server makes command and control of the \ 8 | # hypervisor, storage, and networking programmatically \ 9 | # available to users in realization of the definition \ 10 | # of cloud computing. 11 | 12 | ### BEGIN INIT INFO 13 | # Provides: 14 | # Required-Start: $remote_fs $network $syslog 15 | # Required-Stop: $remote_fs $syslog 16 | # Default-Stop: 0 1 6 17 | # Short-Description: OpenStack cinder API Server 18 | # Description: At the heart of the cloud framework is an API Server. 19 | # This API Server makes command and control of the 20 | # hypervisor, storage, and networking programmatically 21 | # available to users in realization of the definition 22 | # of cloud computing. 23 | ### END INIT INFO 24 | 25 | . /etc/rc.d/init.d/functions 26 | 27 | suffix=api 28 | prog=openstack-cinder-$suffix 29 | exec="/usr/bin/cinder-$suffix" 30 | config="/etc/cinder/cinder.conf" 31 | pidfile="/var/run/cinder/cinder-$suffix.pid" 32 | logfile="/var/log/cinder/$suffix.log" 33 | 34 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 35 | 36 | lockfile=/var/lock/subsys/$prog 37 | 38 | start() { 39 | [ -x $exec ] || exit 5 40 | [ -f $config ] || exit 6 41 | echo -n $"Starting $prog: " 42 | daemon --user root --pidfile $pidfile "$exec --config-file $config --logfile $logfile &>/dev/null & echo \$! > $pidfile" 43 | retval=$? 44 | echo 45 | [ $retval -eq 0 ] && touch $lockfile 46 | return $retval 47 | } 48 | 49 | stop() { 50 | echo -n $"Stopping $prog: " 51 | killproc -p $pidfile $prog 52 | retval=$? 53 | echo 54 | [ $retval -eq 0 ] && rm -f $lockfile 55 | return $retval 56 | } 57 | 58 | restart() { 59 | stop 60 | start 61 | } 62 | 63 | reload() { 64 | restart 65 | } 66 | 67 | force_reload() { 68 | restart 69 | } 70 | 71 | rh_status() { 72 | status -p $pidfile $prog 73 | } 74 | 75 | rh_status_q() { 76 | rh_status >/dev/null 2>&1 77 | } 78 | 79 | 80 | case "$1" in 81 | start) 82 | rh_status_q && exit 0 83 | $1 84 | ;; 85 | stop) 86 | rh_status_q || exit 0 87 | $1 88 | ;; 89 | restart) 90 | $1 91 | ;; 92 | reload) 93 | rh_status_q || exit 7 94 | $1 95 | ;; 96 | force-reload) 97 | force_reload 98 | ;; 99 | status) 100 | rh_status 101 | ;; 102 | condrestart|try-restart) 103 | rh_status_q || exit 0 104 | restart 105 | ;; 106 | *) 107 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 108 | exit 2 109 | esac 110 | exit $? 111 | -------------------------------------------------------------------------------- /states/openstack-mitaka/cinder/files/openstack-cinder-scheduler: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # openstack-cinder-scheduler OpenStack cinder Scheduler 4 | # 5 | # chkconfig: - 98 02 6 | # description: Determines which physical hardware to allocate to a virtual resource 7 | 8 | ### BEGIN INIT INFO 9 | # Provides: 10 | # Required-Start: $remote_fs $network $syslog 11 | # Required-Stop: $remote_fs $syslog 12 | # Default-Stop: 0 1 6 13 | # Short-Description: OpenStack cinder Scheduler 14 | # Description: Determines which physical hardware to allocate to a virtual resource 15 | ### END INIT INFO 16 | 17 | . /etc/rc.d/init.d/functions 18 | 19 | suffix=scheduler 20 | prog=openstack-cinder-$suffix 21 | exec="/usr/bin/cinder-$suffix" 22 | config="/etc/cinder/cinder.conf" 23 | pidfile="/var/run/cinder/cinder-$suffix.pid" 24 | logfile="/var/log/cinder/$suffix.log" 25 | 26 | [ -e /etc/sysconfig/$prog ] && . 
/etc/sysconfig/$prog 27 | 28 | lockfile=/var/lock/subsys/$prog 29 | 30 | start() { 31 | [ -x $exec ] || exit 5 32 | [ -f $config ] || exit 6 33 | echo -n $"Starting $prog: " 34 | daemon --user root --pidfile $pidfile "$exec --config-file $config --logfile $logfile &>/dev/null & echo \$! > $pidfile" 35 | retval=$? 36 | echo 37 | [ $retval -eq 0 ] && touch $lockfile 38 | return $retval 39 | } 40 | 41 | stop() { 42 | echo -n $"Stopping $prog: " 43 | killproc -p $pidfile $prog 44 | retval=$? 45 | echo 46 | [ $retval -eq 0 ] && rm -f $lockfile 47 | return $retval 48 | } 49 | 50 | restart() { 51 | stop 52 | start 53 | } 54 | 55 | reload() { 56 | restart 57 | } 58 | 59 | force_reload() { 60 | restart 61 | } 62 | 63 | rh_status() { 64 | status -p $pidfile $prog 65 | } 66 | 67 | rh_status_q() { 68 | rh_status >/dev/null 2>&1 69 | } 70 | 71 | 72 | case "$1" in 73 | start) 74 | rh_status_q && exit 0 75 | $1 76 | ;; 77 | stop) 78 | rh_status_q || exit 0 79 | $1 80 | ;; 81 | restart) 82 | $1 83 | ;; 84 | reload) 85 | rh_status_q || exit 7 86 | $1 87 | ;; 88 | force-reload) 89 | force_reload 90 | ;; 91 | status) 92 | rh_status 93 | ;; 94 | condrestart|try-restart) 95 | rh_status_q || exit 0 96 | restart 97 | ;; 98 | *) 99 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 100 | exit 2 101 | esac 102 | exit $? 103 | -------------------------------------------------------------------------------- /states/openstack-mitaka/cinder/files/openstack-cinder-volume: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # openstack-cinder-volume OpenStack cinder Volume Worker 4 | # 5 | # chkconfig: - 98 02 6 | # description: Volume Workers interact with iSCSI storage to manage \ 7 | # LVM-based instance volumes. Specific functions include: \ 8 | # * Create Volumes \ 9 | # * Delete Volumes \ 10 | # * Establish Compute volumes 11 | 12 | ### BEGIN INIT INFO 13 | # Provides: 14 | # Required-Start: $remote_fs $network $syslog 15 | # Required-Stop: $remote_fs $syslog 16 | # Default-Stop: 0 1 6 17 | # Short-Description: OpenStack cinder Volume Worker 18 | # Description: Volume Workers interact with iSCSI storage to manage 19 | # LVM-based instance volumes. Specific functions include: 20 | # * Create Volumes 21 | # * Delete Volumes 22 | # * Establish Compute volumes 23 | ### END INIT INFO 24 | 25 | . /etc/rc.d/init.d/functions 26 | 27 | suffix=volume 28 | prog=openstack-cinder-$suffix 29 | exec="/usr/bin/cinder-$suffix" 30 | config="/etc/cinder/cinder.conf" 31 | pidfile="/var/run/cinder/cinder-$suffix.pid" 32 | logfile="/var/log/cinder/$suffix.log" 33 | 34 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 35 | 36 | lockfile=/var/lock/subsys/$prog 37 | 38 | start() { 39 | [ -x $exec ] || exit 5 40 | [ -f $config ] || exit 6 41 | echo -n $"Starting $prog: " 42 | daemon --user root --pidfile $pidfile "$exec --config-file $config --logfile $logfile &>/dev/null & echo \$! > $pidfile" 43 | retval=$? 44 | echo 45 | [ $retval -eq 0 ] && touch $lockfile 46 | return $retval 47 | } 48 | 49 | stop() { 50 | echo -n $"Stopping $prog: " 51 | killproc -p $pidfile $prog 52 | retval=$? 
53 | echo 54 | [ $retval -eq 0 ] && rm -f $lockfile 55 | return $retval 56 | } 57 | 58 | restart() { 59 | stop 60 | start 61 | } 62 | 63 | reload() { 64 | restart 65 | } 66 | 67 | force_reload() { 68 | restart 69 | } 70 | 71 | rh_status() { 72 | status -p $pidfile $prog 73 | } 74 | 75 | rh_status_q() { 76 | rh_status >/dev/null 2>&1 77 | } 78 | 79 | 80 | case "$1" in 81 | start) 82 | rh_status_q && exit 0 83 | $1 84 | ;; 85 | stop) 86 | rh_status_q || exit 0 87 | $1 88 | ;; 89 | restart) 90 | $1 91 | ;; 92 | reload) 93 | rh_status_q || exit 7 94 | $1 95 | ;; 96 | force-reload) 97 | force_reload 98 | ;; 99 | status) 100 | rh_status 101 | ;; 102 | condrestart|try-restart) 103 | rh_status_q || exit 0 104 | restart 105 | ;; 106 | *) 107 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 108 | exit 2 109 | esac 110 | exit $? 111 | -------------------------------------------------------------------------------- /states/openstack-mitaka/cinder/files/policy.json: -------------------------------------------------------------------------------- 1 | { 2 | "context_is_admin": [["role:admin"]], 3 | "admin_or_owner": [["is_admin:True"], ["project_id:%(project_id)s"]], 4 | "default": [["rule:admin_or_owner"]], 5 | 6 | "admin_api": [["is_admin:True"]], 7 | 8 | "volume:create": [], 9 | "volume:get_all": [], 10 | "volume:get_volume_metadata": [], 11 | "volume:get_volume_admin_metadata": [["rule:admin_api"]], 12 | "volume:delete_volume_admin_metadata": [["rule:admin_api"]], 13 | "volume:update_volume_admin_metadata": [["rule:admin_api"]], 14 | "volume:get_snapshot": [], 15 | "volume:get_all_snapshots": [], 16 | "volume:extend": [], 17 | "volume:update_readonly_flag": [], 18 | "volume:retype": [], 19 | 20 | "volume_extension:types_manage": [["rule:admin_api"]], 21 | "volume_extension:types_extra_specs": [["rule:admin_api"]], 22 | "volume_extension:volume_type_encryption": [["rule:admin_api"]], 23 | "volume_extension:volume_encryption_metadata": [["rule:admin_or_owner"]], 24 | "volume_extension:extended_snapshot_attributes": [], 25 | "volume_extension:volume_image_metadata": [], 26 | 27 | "volume_extension:quotas:show": [], 28 | "volume_extension:quotas:update": [["rule:admin_api"]], 29 | "volume_extension:quota_classes": [], 30 | 31 | "volume_extension:volume_admin_actions:reset_status": [["rule:admin_api"]], 32 | "volume_extension:snapshot_admin_actions:reset_status": [["rule:admin_api"]], 33 | "volume_extension:volume_admin_actions:force_delete": [["rule:admin_api"]], 34 | "volume_extension:snapshot_admin_actions:force_delete": [["rule:admin_api"]], 35 | "volume_extension:volume_admin_actions:migrate_volume": [["rule:admin_api"]], 36 | "volume_extension:volume_admin_actions:migrate_volume_completion": [["rule:admin_api"]], 37 | 38 | "volume_extension:volume_host_attribute": [["rule:admin_api"]], 39 | "volume_extension:volume_tenant_attribute": [["rule:admin_or_owner"]], 40 | "volume_extension:volume_mig_status_attribute": [["rule:admin_api"]], 41 | "volume_extension:hosts": [["rule:admin_api"]], 42 | "volume_extension:services": [["rule:admin_api"]], 43 | "volume:services": [["rule:admin_api"]], 44 | 45 | "volume:create_transfer": [], 46 | "volume:accept_transfer": [], 47 | "volume:delete_transfer": [], 48 | "volume:get_all_transfers": [], 49 | 50 | "backup:create" : [], 51 | "backup:delete": [], 52 | "backup:get": [], 53 | "backup:get_all": [], 54 | "backup:restore": [], 55 | "backup:backup-import": [["rule:admin_api"]], 56 | "backup:backup-export": 
[["rule:admin_api"]], 57 | 58 | "snapshot_extension:snapshot_actions:update_snapshot_status": [] 59 | } 60 | -------------------------------------------------------------------------------- /states/openstack-mitaka/cinder/files/rootwrap.conf: -------------------------------------------------------------------------------- 1 | # Configuration for cinder-rootwrap 2 | # This file should be owned by (and only-writeable by) the root user 3 | 4 | [DEFAULT] 5 | # List of directories to load filter definitions from (separated by ','). 6 | # These directories MUST all be only writeable by root ! 7 | filters_path=/etc/cinder/rootwrap.d,/usr/share/cinder/rootwrap 8 | 9 | # List of directories to search executables in, in case filters do not 10 | # explicitely specify a full path (separated by ',') 11 | # If not specified, defaults to system PATH environment variable. 12 | # These directories MUST all be only writeable by root ! 13 | exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin 14 | 15 | # Enable logging to syslog 16 | # Default value is False 17 | use_syslog=False 18 | 19 | # Which syslog facility to use. 20 | # Valid values include auth, authpriv, syslog, local0, local1... 21 | # Default value is 'syslog' 22 | syslog_log_facility=syslog 23 | 24 | # Which messages to log. 25 | # INFO means log all usage 26 | # ERROR means only log unsuccessful attempts 27 | syslog_log_level=ERROR 28 | -------------------------------------------------------------------------------- /states/openstack-mitaka/cinder/files/rootwrap.d/volume.filters: -------------------------------------------------------------------------------- 1 | # cinder-rootwrap command filters for volume nodes 2 | # This file should be owned by (and only-writeable by) the root user 3 | 4 | [Filters] 5 | # cinder/volume/iscsi.py: iscsi_helper '--op' ... 6 | ietadm: CommandFilter, ietadm, root 7 | tgtadm: CommandFilter, tgtadm, root 8 | tgt-admin: CommandFilter, tgt-admin, root 9 | cinder-rtstool: CommandFilter, cinder-rtstool, root 10 | 11 | # LVM related show commands 12 | pvs: EnvFilter, env, root, LC_ALL=C, pvs 13 | vgs: EnvFilter, env, root, LC_ALL=C, vgs 14 | lvs: EnvFilter, env, root, LC_ALL=C, lvs 15 | lvdisplay: EnvFilter, env, root, LC_ALL=C, lvdisplay 16 | 17 | # cinder/volume/driver.py: 'lvcreate', '-L', sizestr, '-n', volume_name,.. 18 | # cinder/volume/driver.py: 'lvcreate', '-L', ... 19 | lvcreate: CommandFilter, lvcreate, root 20 | 21 | # cinder/volume/driver.py: 'dd', 'if=%s' % srcstr, 'of=%s' % deststr,... 22 | dd: CommandFilter, dd, root 23 | 24 | # cinder/volume/driver.py: 'lvremove', '-f', %s/%s % ... 25 | lvremove: CommandFilter, lvremove, root 26 | 27 | # cinder/volume/driver.py: 'lvrename', '%(vg)s', '%(orig)s' '(new)s'... 28 | lvrename: CommandFilter, lvrename, root 29 | 30 | # cinder/volume/driver.py: 'lvextend', '-L' '%(new_size)s', '%(lv_name)s' ... 31 | lvextend: CommandFilter, lvextend, root 32 | 33 | # cinder/brick/local_dev/lvm.py: 'lvchange -a y -K ' 34 | lvchange: CommandFilter, lvchange, root 35 | 36 | # cinder/volume/driver.py: 'iscsiadm', '-m', 'discovery', '-t',... 37 | # cinder/volume/driver.py: 'iscsiadm', '-m', 'node', '-T', ... 38 | iscsiadm: CommandFilter, iscsiadm, root 39 | 40 | # cinder/volume/drivers/lvm.py: 'shred', '-n3' 41 | # cinder/volume/drivers/lvm.py: 'shred', '-n0', '-z', '-s%dMiB' 42 | shred: CommandFilter, shred, root 43 | 44 | #cinder/volume/.py: utils.temporary_chown(path, 0), ... 
45 | chown: CommandFilter, chown, root 46 | ionice_1: RegExpFilter, ionice, root, ionice, -c[0-3]( -n[0-7])?, dd, if=\S+, of=\S+, count=\d+, bs=\S+ 47 | ionice_2: RegExpFilter, ionice, root, ionice, -c[0-3]( -n[0-7])?, dd, if=\S+, of=\S+, count=\d+, bs=\S+, iflag=direct, oflag=direct 48 | ionice_3: RegExpFilter, ionice, root, ionice, -c[0-3]( -n[0-7])?, dd, if=\S+, of=\S+, count=\d+, bs=\S+, conv=fdatasync 49 | 50 | # cinder/volume/driver.py 51 | dmsetup: CommandFilter, dmsetup, root 52 | ln: CommandFilter, ln, root 53 | 54 | # cinder/image/image_utils.py 55 | qemu-img: EnvFilter, env, root, LC_ALL=C, qemu-img 56 | qemu-img_convert: CommandFilter, qemu-img, root 57 | 58 | udevadm: CommandFilter, udevadm, root 59 | 60 | # cinder/volume/driver.py: utils.read_file_as_root() 61 | cat: CommandFilter, cat, root 62 | 63 | # cinder/volume/nfs.py 64 | stat: CommandFilter, stat, root 65 | mount: CommandFilter, mount, root 66 | df: CommandFilter, df, root 67 | du: CommandFilter, du, root 68 | truncate: CommandFilter, truncate, root 69 | chmod: CommandFilter, chmod, root 70 | rm: CommandFilter, rm, root 71 | 72 | # cinder/volume/drivers/netapp/nfs.py: 73 | netapp_nfs_find: RegExpFilter, find, root, find, ^[/]*([^/\0]+(/+)?)*$, -maxdepth, \d+, -name, img-cache.*, -amin, \+\d+ 74 | 75 | # cinder/volume/drivers/glusterfs.py 76 | chgrp: CommandFilter, chgrp, root 77 | 78 | # cinder/volumes/drivers/hds/hds.py: 79 | hus-cmd: CommandFilter, hus-cmd, root 80 | hus-cmd_local: CommandFilter, /usr/local/bin/hus-cmd, root 81 | 82 | # cinder/brick/initiator/connector.py: 83 | ls: CommandFilter, ls, root 84 | tee: CommandFilter, tee, root 85 | multipath: CommandFilter, multipath, root 86 | systool: CommandFilter, systool, root 87 | 88 | # cinder/volume/drivers/block_device.py 89 | blockdev: CommandFilter, blockdev, root 90 | 91 | # cinder/volume/drivers/ibm/gpfs.py 92 | mv: CommandFilter, mv, root 93 | mmgetstate: CommandFilter, /usr/lpp/mmfs/bin/mmgetstate, root 94 | mmclone: CommandFilter, /usr/lpp/mmfs/bin/mmclone, root 95 | mmlsattr: CommandFilter, /usr/lpp/mmfs/bin/mmlsattr, root 96 | mmchattr: CommandFilter, /usr/lpp/mmfs/bin/mmchattr, root 97 | mmlsconfig: CommandFilter, /usr/lpp/mmfs/bin/mmlsconfig, root 98 | mmlsfs: CommandFilter, /usr/lpp/mmfs/bin/mmlsfs, root 99 | mmlspool: CommandFilter, /usr/lpp/mmfs/bin/mmlspool, root 100 | mkfs: CommandFilter, mkfs, root 101 | 102 | # cinder/volume/drivers/ibm/gpfs.py 103 | # cinder/volume/drivers/ibm/ibmnas.py 104 | find_maxdepth_inum: RegExpFilter, find, root, find, ^[/]*([^/\0]+(/+)?)*$, -maxdepth, \d+, -inum, \d+ 105 | 106 | # cinder/brick/initiator/connector.py: 107 | aoe-revalidate: CommandFilter, aoe-revalidate, root 108 | aoe-discover: CommandFilter, aoe-discover, root 109 | aoe-flush: CommandFilter, aoe-flush, root 110 | 111 | # cinder/brick/initiator/linuxscsi.py: 112 | sg_scan: CommandFilter, sg_scan, root 113 | 114 | #cinder/backup/services/tsm.py 115 | dsmc:CommandFilter,/usr/bin/dsmc,root 116 | -------------------------------------------------------------------------------- /states/openstack-mitaka/cinder/nfs.sls: -------------------------------------------------------------------------------- 1 | nfs-install: 2 | pkg.installed: 3 | - names: 4 | - nfs-utils 5 | - rpcbind 6 | service.running: 7 | - name: nfs 8 | - enable: True 9 | - require: 10 | - file: /etc/exports 11 | 12 | rpcbind: 13 | service.running: 14 | - enable: True 15 | - require: 16 | - service: nfs 17 | 18 | /data/nfs: 19 | file.directory: 20 | - user: root 21 | - group: root 22 | - 
makedirs: True
23 | 
24 | /etc/exports:
25 |   file.managed:
26 |     - source: salt://openstack/cinder/files/exports
27 |     - user: root
28 |     - group: root
29 |     - mode: 644
30 | 
31 | /etc/cinder/nfs_shares:
32 |   file.managed:
33 |     - source: salt://openstack/cinder/files/nfs_shares
34 |     - user: root
35 |     - group: root
36 |     - mode: 644
37 |     - template: jinja
38 |     - defaults:
39 |       NFS_IP: {{ pillar['cinder']['NFS_IP'] }}
--------------------------------------------------------------------------------
/states/openstack-mitaka/cinder/server.sls:
--------------------------------------------------------------------------------
1 | include:
2 |   - openstack.cinder.config
3 |   - openstack.cinder.nfs
4 | 
5 | cinder-server:
6 |   pkg.installed:
7 |     - names:
8 |       - openstack-cinder
9 |       - python-cinderclient
10 |   file.managed:
11 |     - name: /etc/init.d/
12 | 
13 | /usr/local/bin/cinder_data.sh:
14 |   file.managed:
15 |     - source: salt://openstack/cinder/files/cinder_data.sh
16 |     - mode: 755
17 |     - user: root
18 |     - group: root
19 |     - template: jinja
20 |     - defaults:
21 |       ADMIN_PASSWD: {{ pillar['cinder']['ADMIN_PASSWD'] }}
22 |       ADMIN_TOKEN: {{ pillar['cinder']['ADMIN_TOKEN'] }}
23 |       CONTROL_IP: {{ pillar['cinder']['CONTROL_IP'] }}
24 | 
25 | cinder-keystone-init:
26 |   cmd.run:
27 |     - name: bash /usr/local/bin/cinder_data.sh && touch /etc/cinder-datainit.lock
28 |     - require:
29 |       - file: /usr/local/bin/cinder_data.sh
30 |     - unless: test -f /etc/cinder-datainit.lock
31 | 
32 | cinder-db-init:
33 |   cmd.run:
34 |     - name: cinder-manage db sync && touch /etc/cinder-db-sync.lock
35 |     - require:
36 |       - mysql_grants: cinder-mysql
37 |     - unless: test -f /etc/cinder-db-sync.lock
38 | 
39 | openstack-cinder-api:
40 |   file.managed:
41 |     - name: /etc/init.d/openstack-cinder-api
42 |     - source: salt://openstack/cinder/files/openstack-cinder-api
43 |     - mode: 755
44 |     - user: root
45 |     - group: root
46 |   cmd.run:
47 |     - name: chkconfig --add openstack-cinder-api
48 |     - unless: chkconfig --list | grep openstack-cinder-api
49 |     - require:
50 |       - file: openstack-cinder-api
51 |   service.running:
52 |     - enable: True
53 |     - watch:
54 |       - file: /etc/cinder/api-paste.ini
55 |       - file: /etc/cinder/policy.json
56 |       - file: /etc/cinder/rootwrap.conf
57 |       - file: /etc/cinder/rootwrap.d
58 |       - file: /etc/cinder/cinder.conf
59 |       - file: openstack-cinder-api
60 |     - require:
61 |       - cmd: cinder-keystone-init
62 |       - cmd: openstack-cinder-api
63 |       - cmd: cinder-db-init
64 |       - file: /var/log/cinder
65 |       - file: /var/lib/cinder
66 | 
67 | openstack-cinder-scheduler:
68 |   file.managed:
69 |     - name: /etc/init.d/openstack-cinder-scheduler
70 |     - source: salt://openstack/cinder/files/openstack-cinder-scheduler
71 |     - mode: 755
72 |     - user: root
73 |     - group: root
74 |   cmd.run:
75 |     - name: chkconfig --add openstack-cinder-scheduler
76 |     - unless: chkconfig --list | grep openstack-cinder-scheduler
77 |     - require:
78 |       - file: openstack-cinder-scheduler
79 |   service.running:
80 |     - enable: True
81 |     - watch:
82 |       - file: /etc/cinder/api-paste.ini
83 |       - file: /etc/cinder/policy.json
84 |       - file: /etc/cinder/rootwrap.conf
85 |       - file: /etc/cinder/rootwrap.d
86 |       - file: /etc/cinder/cinder.conf
87 |       - file: openstack-cinder-scheduler
88 |     - require:
89 |       - cmd: cinder-keystone-init
90 |       - cmd: openstack-cinder-scheduler
91 |       - cmd: cinder-db-init
92 |       - file: /var/log/cinder
93 |       - file: /var/lib/cinder
94 | 
95 | openstack-cinder-volume:
96 |   file.managed:
97 |     - name: /etc/init.d/openstack-cinder-volume
98 |     - source:
salt://openstack/cinder/files/openstack-cinder-volume 99 | - mode: 755 100 | - user: root 101 | - group: root 102 | cmd.run: 103 | - name: chkconfig --add openstack-cinder-volume 104 | - unless: chkconfig --list | grep openstack-cinder-volume 105 | - require: 106 | - file: openstack-cinder-volume 107 | service.running: 108 | - enable: True 109 | - watch: 110 | - file: /etc/cinder/api-paste.ini 111 | - file: /etc/cinder/policy.json 112 | - file: /etc/cinder/rootwrap.conf 113 | - file: /etc/cinder/rootwrap.d 114 | - file: /etc/cinder/cinder.conf 115 | - file: openstack-cinder-volume 116 | - require: 117 | - cmd: cinder-keystone-init 118 | - cmd: openstack-cinder-volume 119 | - cmd: cinder-db-init 120 | - file: /var/log/cinder 121 | - file: /var/lib/cinder 122 | -------------------------------------------------------------------------------- /states/openstack-mitaka/compute.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - openstack.init.base 3 | - openstack.nova.compute 4 | - openstack.neutron.linuxbridge_agent 5 | -------------------------------------------------------------------------------- /states/openstack-mitaka/control.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - openstack.init.base 3 | - openstack.rabbitmq.server 4 | - openstack.mysql.server 5 | - openstack.mysql.init 6 | - openstack.keystone.server 7 | - openstack.glance.server 8 | - openstack.nova.control 9 | - openstack.horizon.server 10 | - openstack.neutron.server 11 | -------------------------------------------------------------------------------- /states/openstack-mitaka/glance/files/config/glance-cache.conf: -------------------------------------------------------------------------------- 1 | [DEFAULT] 2 | # Show more verbose log output (sets INFO log level output) 3 | #verbose=True 4 | 5 | # Show debugging output in logs (sets DEBUG log level output) 6 | #debug=False 7 | 8 | # Log to this file. Make sure you do not set the same log file for both the API 9 | # and registry servers! 10 | # 11 | # If `log_file` is omitted and `use_syslog` is false, then log messages are 12 | # sent to stdout as a fallback. 13 | #log_file=/var/log/glance/image-cache.log 14 | 15 | # Send logs to syslog (/dev/log) instead of to file specified by `log_file` 16 | #use_syslog=False 17 | 18 | # Directory that the Image Cache writes data to 19 | #image_cache_dir=/var/lib/glance/image-cache/ 20 | 21 | # Number of seconds after which we should consider an incomplete image to be 22 | # stalled and eligible for reaping 23 | #image_cache_stall_time=86400 24 | 25 | # Max cache size in bytes 26 | #image_cache_max_size=10737418240 27 | 28 | # Address to find the registry server 29 | #registry_host=0.0.0.0 30 | 31 | # Port the registry server is listening on 32 | #registry_port=9191 33 | 34 | # Auth settings if using Keystone 35 | # auth_url = http://127.0.0.1:5000/v2.0/ 36 | # admin_tenant_name = %SERVICE_TENANT_NAME% 37 | # admin_user = %SERVICE_USER% 38 | # admin_password = %SERVICE_PASSWORD% 39 | 40 | # List of which store classes and store class locations are 41 | # currently known to glance at startup. 
42 | # known_stores = glance.store.filesystem.Store, 43 | # glance.store.http.Store, 44 | # glance.store.rbd.Store, 45 | # glance.store.s3.Store, 46 | # glance.store.swift.Store, 47 | # glance.store.cinder.Store, 48 | 49 | # ============ Filesystem Store Options ======================== 50 | 51 | # Directory that the Filesystem backend store 52 | # writes image data to 53 | #filesystem_store_datadir=/var/lib/glance/images/ 54 | 55 | # ============ Swift Store Options ============================= 56 | 57 | # Version of the authentication service to use 58 | # Valid versions are '2' for keystone and '1' for swauth and rackspace 59 | #swift_store_auth_version=2 60 | 61 | # Address where the Swift authentication service lives 62 | # Valid schemes are 'http://' and 'https://' 63 | # If no scheme specified, default to 'https://' 64 | # For swauth, use something like '127.0.0.1:8080/v1.0/' 65 | #swift_store_auth_address=127.0.0.1:5000/v2.0/ 66 | 67 | # User to authenticate against the Swift authentication service 68 | # If you use Swift authentication service, set it to 'account':'user' 69 | # where 'account' is a Swift storage account and 'user' 70 | # is a user in that account 71 | #swift_store_user=jdoe:jdoe 72 | 73 | # Auth key for the user authenticating against the 74 | # Swift authentication service 75 | #swift_store_key=a86850deb2742ec3cb41518e26aa2d89 76 | 77 | # Container within the account that the account should use 78 | # for storing images in Swift 79 | #swift_store_container=glance 80 | 81 | # Do we create the container if it does not exist? 82 | #swift_store_create_container_on_put=False 83 | 84 | # What size, in MB, should Glance start chunking image files 85 | # and do a large object manifest in Swift? By default, this is 86 | # the maximum object size in Swift, which is 5GB 87 | #swift_store_large_object_size=5120 88 | 89 | # When doing a large object manifest, what size, in MB, should 90 | # Glance write chunks to Swift? This amount of data is written 91 | # to a temporary disk buffer during the process of chunking 92 | # the image file, and the default is 200MB 93 | #swift_store_large_object_chunk_size=200 94 | 95 | # Whether to use ServiceNET to communicate with the Swift storage servers. 96 | # (If you aren't RACKSPACE, leave this False!) 97 | # 98 | # To use ServiceNET for authentication, prefix hostname of 99 | # `swift_store_auth_address` with 'snet-'. 100 | # Ex. https://example.com/v1.0/ -> https://snet-example.com/v1.0/ 101 | #swift_enable_snet=False 102 | 103 | # ============ S3 Store Options ============================= 104 | 105 | # Address where the S3 authentication service lives 106 | # Valid schemes are 'http://' and 'https://' 107 | # If no scheme specified, default to 'http://' 108 | #s3_store_host=127.0.0.1:8080/v1.0/ 109 | 110 | # User to authenticate against the S3 authentication service 111 | #s3_store_access_key=<20-charAWSaccesskey> 112 | 113 | # Auth key for the user authenticating against the 114 | # S3 authentication service 115 | #s3_store_secret_key=<40-charAWSsecretkey> 116 | 117 | # Container within the account that the account should use 118 | # for storing images in S3. Note that S3 has a flat namespace, 119 | # so you need a unique bucket name for your glance images. An 120 | # easy way to do this is append your AWS access key to "glance". 121 | # S3 buckets in AWS *must* be lowercased, so remember to lowercase 122 | # your AWS access key if you use it in your bucket name below! 
123 | #s3_store_bucket=glance 124 | 125 | # Do we create the bucket if it does not exist? 126 | #s3_store_create_bucket_on_put=False 127 | 128 | # When sending images to S3, the data will first be written to a 129 | # temporary buffer on disk. By default the platform's temporary directory 130 | # will be used. If required, an alternative directory can be specified here. 131 | # s3_store_object_buffer_dir = /path/to/dir 132 | 133 | # ============ Cinder Store Options =========================== 134 | 135 | # Info to match when looking for cinder in the service catalog 136 | # Format is : separated values of the form: 137 | # :: (string value) 138 | #cinder_catalog_info=volume:cinder:publicURL 139 | 140 | # Override service catalog lookup with template for cinder endpoint 141 | # e.g. http://localhost:8776/v1/%(project_id)s (string value) 142 | #cinder_endpoint_template= 143 | 144 | # Region name of this node (string value) 145 | #os_region_name= 146 | 147 | # Location of ca certicates file to use for cinder client requests 148 | # (string value) 149 | #cinder_ca_certificates_file= 150 | 151 | # Number of cinderclient retries on failed http calls (integer value) 152 | #cinder_http_retries=3 153 | 154 | # Allow to perform insecure SSL requests to cinder (boolean value) 155 | #cinder_api_insecure=False 156 | 157 | # ============ VMware Datastore Store Options ===================== 158 | 159 | # ESX/ESXi or vCenter Server target system. 160 | # The server value can be an IP address or a DNS name 161 | # e.g. 127.0.0.1, 127.0.0.1:443, www.vmware-infra.com 162 | #vmware_server_host= 163 | 164 | # Server username (string value) 165 | #vmware_server_username= 166 | 167 | # Server password (string value) 168 | #vmware_server_password= 169 | 170 | # Inventory path to a datacenter (string value) 171 | # Value optional when vmware_server_ip is an ESX/ESXi host: if specified 172 | # should be `ha-datacenter`. 
173 | #vmware_datacenter_path= 174 | 175 | # Datastore associated with the datacenter (string value) 176 | #vmware_datastore_name= 177 | 178 | # The number of times we retry on failures 179 | # e.g., socket error, etc (integer value) 180 | #vmware_api_retry_count=10 181 | 182 | # The interval used for polling remote tasks 183 | # invoked on VMware ESX/VC server in seconds (integer value) 184 | #vmware_task_poll_interval=5 185 | 186 | # Absolute path of the folder containing the images in the datastore 187 | # (string value) 188 | #vmware_store_image_dir=/openstack_glance 189 | 190 | # Allow to perform insecure SSL requests to the target system (boolean value) 191 | #vmware_api_insecure=False 192 | 193 | # ================= Security Options ========================== 194 | 195 | # AES key for encrypting store 'location' metadata, including 196 | # -- if used -- Swift or S3 credentials 197 | # Should be set to a random string of length 16, 24 or 32 bytes 198 | # metadata_encryption_key = <16, 24 or 32 char registry metadata key> 199 | -------------------------------------------------------------------------------- /states/openstack-mitaka/glance/files/config/glance-registry.conf: -------------------------------------------------------------------------------- 1 | [DEFAULT] 2 | # Show more verbose log output (sets INFO log level output) 3 | #verbose=True 4 | 5 | # Show debugging output in logs (sets DEBUG log level output) 6 | debug=true 7 | 8 | # Address to bind the registry server 9 | #bind_host=0.0.0.0 10 | 11 | # Port the bind the registry server to 12 | #bind_port=9191 13 | 14 | # Log to this file. Make sure you do not set the same log file for both the API 15 | # and registry servers! 16 | # 17 | # If `log_file` is omitted and `use_syslog` is false, then log messages are 18 | # sent to stdout as a fallback. 19 | #log_file=/var/log/glance/registry.log 20 | 21 | # Backlog requests when creating socket 22 | #backlog=4096 23 | 24 | # TCP_KEEPIDLE value in seconds when creating socket. 25 | # Not supported on OS X. 26 | #tcp_keepidle=600 27 | 28 | # API to use for accessing data. Default value points to sqlalchemy 29 | # package. 30 | #data_api=glance.db.sqlalchemy.api 31 | 32 | # Enable Registry API versions individually or simultaneously 33 | #enable_v1_registry=True 34 | #enable_v2_registry=True 35 | 36 | # Limit the api to return `param_limit_max` items in a call to a container. If 37 | # a larger `limit` query param is provided, it will be reduced to this value. 38 | #api_limit_max=1000 39 | 40 | # If a `limit` query param is not provided in an api request, it will 41 | # default to `limit_param_default` 42 | #limit_param_default=25 43 | 44 | # Role used to identify an authenticated user as administrator 45 | #admin_role=admin 46 | 47 | # Whether to automatically create the database tables. 48 | # Default: False 49 | #db_auto_create=False 50 | 51 | # Enable DEBUG log messages from sqlalchemy which prints every database 52 | # query and response. 53 | # Default: False 54 | #sqlalchemy_debug=True 55 | 56 | # ================= Syslog Options ============================ 57 | 58 | # Send logs to syslog (/dev/log) instead of to file specified 59 | # by `log_file` 60 | #use_syslog=False 61 | 62 | # Facility to use. If unset defaults to LOG_USER. 
63 | #syslog_log_facility=LOG_LOCAL1 64 | 65 | # ================= SSL Options =============================== 66 | 67 | # Certificate file to use when starting registry server securely 68 | #cert_file=/path/to/certfile 69 | 70 | # Private key file to use when starting registry server securely 71 | #key_file=/path/to/keyfile 72 | 73 | # CA certificate file to use to verify connecting clients 74 | #ca_file=/path/to/cafile 75 | 76 | # ================= Database Options ========================== 77 | 78 | [database] 79 | # The file name to use with SQLite (string value) 80 | #sqlite_db=glance.sqlite 81 | 82 | # If True, SQLite uses synchronous mode (boolean value) 83 | #sqlite_synchronous=True 84 | 85 | # The backend to use for db (string value) 86 | # Deprecated group/name - [DEFAULT]/db_backend 87 | #backend=sqlalchemy 88 | 89 | # The SQLAlchemy connection string used to connect to the 90 | # database (string value) 91 | # Deprecated group/name - [DEFAULT]/sql_connection 92 | # Deprecated group/name - [DATABASE]/sql_connection 93 | # Deprecated group/name - [sql]/connection 94 | connection=mysql://{{GLANCE_DB_USER}}:{{GLANCE_DB_PASS}}@{{MYSQL_SERVER}}/{{GLANCE_DB_NAME}} 95 | 96 | # The SQL mode to be used for MySQL sessions. This option, 97 | # including the default, overrides any server-set SQL mode. To 98 | # use whatever SQL mode is set by the server configuration, 99 | # set this to no value. Example: mysql_sql_mode= (string 100 | # value) 101 | #mysql_sql_mode=TRADITIONAL 102 | 103 | # Timeout before idle sql connections are reaped (integer 104 | # value) 105 | # Deprecated group/name - [DEFAULT]/sql_idle_timeout 106 | # Deprecated group/name - [DATABASE]/sql_idle_timeout 107 | # Deprecated group/name - [sql]/idle_timeout 108 | #idle_timeout=3600 109 | 110 | # Minimum number of SQL connections to keep open in a pool 111 | # (integer value) 112 | # Deprecated group/name - [DEFAULT]/sql_min_pool_size 113 | # Deprecated group/name - [DATABASE]/sql_min_pool_size 114 | #min_pool_size=1 115 | 116 | # Maximum number of SQL connections to keep open in a pool 117 | # (integer value) 118 | # Deprecated group/name - [DEFAULT]/sql_max_pool_size 119 | # Deprecated group/name - [DATABASE]/sql_max_pool_size 120 | #max_pool_size= 121 | 122 | # Maximum db connection retries during startup. (setting -1 123 | # implies an infinite retry count) (integer value) 124 | # Deprecated group/name - [DEFAULT]/sql_max_retries 125 | # Deprecated group/name - [DATABASE]/sql_max_retries 126 | #max_retries=10 127 | 128 | # Interval between retries of opening a sql connection 129 | # (integer value) 130 | # Deprecated group/name - [DEFAULT]/sql_retry_interval 131 | # Deprecated group/name - [DATABASE]/reconnect_interval 132 | #retry_interval=10 133 | 134 | # If set, use this value for max_overflow with sqlalchemy 135 | # (integer value) 136 | # Deprecated group/name - [DEFAULT]/sql_max_overflow 137 | # Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow 138 | #max_overflow= 139 | 140 | # Verbosity of SQL debugging information. 
0=None, 141 | # 100=Everything (integer value) 142 | # Deprecated group/name - [DEFAULT]/sql_connection_debug 143 | #connection_debug=0 144 | 145 | # Add python stack traces to SQL as comment strings (boolean 146 | # value) 147 | # Deprecated group/name - [DEFAULT]/sql_connection_trace 148 | #connection_trace=False 149 | 150 | # If set, use this value for pool_timeout with sqlalchemy 151 | # (integer value) 152 | # Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout 153 | #pool_timeout= 154 | 155 | # Enable the experimental use of database reconnect on 156 | # connection lost (boolean value) 157 | #use_db_reconnect=False 158 | 159 | # seconds between db connection retries (integer value) 160 | #db_retry_interval=1 161 | 162 | # Whether to increase interval between db connection retries, 163 | # up to db_max_retry_interval (boolean value) 164 | #db_inc_retry_interval=True 165 | 166 | # max seconds between db connection retries, if 167 | # db_inc_retry_interval is enabled (integer value) 168 | #db_max_retry_interval=10 169 | 170 | # maximum db connection retries before error is raised. 171 | # (setting -1 implies an infinite retry count) (integer value) 172 | #db_max_retries=20 173 | 174 | [keystone_authtoken] 175 | auth_host={{AUTH_KEYSTONE_HOST}} 176 | auth_port={{AUTH_KEYSTONE_PORT}} 177 | auth_protocol={{AUTH_KEYSTONE_PROTOCOL}} 178 | admin_tenant_name={{AUTH_GLANCE_ADMIN_TENANT}} 179 | admin_user={{AUTH_GLANCE_ADMIN_USER}} 180 | admin_password={{AUTH_GLANCE_ADMIN_PASS}} 181 | 182 | [paste_deploy] 183 | # Name of the paste configuration file that defines the available pipelines 184 | #config_file=/usr/share/glance/glance-registry-dist-paste.ini 185 | 186 | # Partial name of a pipeline in your paste configuration file with the 187 | # service name removed. For example, if your paste section name is 188 | # [pipeline:glance-registry-keystone], you would configure the flavor below 189 | # as 'keystone'. 190 | flavor=keystone 191 | -------------------------------------------------------------------------------- /states/openstack-mitaka/glance/files/config/glance-scrubber.conf: -------------------------------------------------------------------------------- 1 | [DEFAULT] 2 | # Show more verbose log output (sets INFO log level output) 3 | #verbose=True 4 | 5 | # Show debugging output in logs (sets DEBUG log level output) 6 | #debug=False 7 | 8 | # Log to this file. Make sure you do not set the same log file for both the API 9 | # and registry servers! 10 | # 11 | # If `log_file` is omitted and `use_syslog` is false, then log messages are 12 | # sent to stdout as a fallback. 
13 | #log_file=/var/log/glance/scrubber.log 14 | 15 | # Send logs to syslog (/dev/log) instead of to file specified by `log_file` 16 | #use_syslog=False 17 | 18 | # Should we run our own loop or rely on cron/scheduler to run us 19 | #daemon=False 20 | 21 | # Loop time between checking for new items to schedule for delete 22 | #wakeup_time=300 23 | 24 | # Directory that the scrubber will use to remind itself of what to delete 25 | # Make sure this is also set in glance-api.conf 26 | #scrubber_datadir=/var/lib/glance/scrubber 27 | 28 | # Only one server in your deployment should be designated the cleanup host 29 | #cleanup_scrubber=False 30 | 31 | # pending_delete items older than this time are candidates for cleanup 32 | #cleanup_scrubber_time=86400 33 | 34 | # Address to find the registry server for cleanups 35 | #registry_host=0.0.0.0 36 | 37 | # Port the registry server is listening on 38 | #registry_port=9191 39 | 40 | # Auth settings if using Keystone 41 | # auth_url = http://127.0.0.1:5000/v2.0/ 42 | # admin_tenant_name = %SERVICE_TENANT_NAME% 43 | # admin_user = %SERVICE_USER% 44 | # admin_password = %SERVICE_PASSWORD% 45 | 46 | # Directory to use for lock files. Default to a temp directory 47 | # (string value). This setting needs to be the same for both 48 | # glance-scrubber and glance-api. 49 | #lock_path= 50 | 51 | # ================= Security Options ========================== 52 | 53 | # AES key for encrypting store 'location' metadata, including 54 | # -- if used -- Swift or S3 credentials 55 | # Should be set to a random string of length 16, 24 or 32 bytes 56 | #metadata_encryption_key=<16, 24 or 32 char registry metadata key> 57 | -------------------------------------------------------------------------------- /states/openstack-mitaka/glance/files/config/policy.json: -------------------------------------------------------------------------------- 1 | { 2 | "context_is_admin": "role:admin", 3 | "default": "", 4 | 5 | "add_image": "", 6 | "delete_image": "", 7 | "get_image": "", 8 | "get_images": "", 9 | "modify_image": "", 10 | "publicize_image": "", 11 | "copy_from": "", 12 | 13 | "download_image": "", 14 | "upload_image": "", 15 | 16 | "delete_image_location": "", 17 | "get_image_location": "", 18 | "set_image_location": "", 19 | 20 | "add_member": "", 21 | "delete_member": "", 22 | "get_member": "", 23 | "get_members": "", 24 | "modify_member": "", 25 | 26 | "manage_image_cache": "role:admin", 27 | 28 | "get_task": "", 29 | "get_tasks": "", 30 | "add_task": "", 31 | "modify_task": "" 32 | } 33 | -------------------------------------------------------------------------------- /states/openstack-mitaka/glance/files/config/schema-image.json: -------------------------------------------------------------------------------- 1 | { 2 | "kernel_id": { 3 | "type": "string", 4 | "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", 5 | "description": "ID of image stored in Glance that should be used as the kernel when booting an AMI-style image." 6 | }, 7 | "ramdisk_id": { 8 | "type": "string", 9 | "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", 10 | "description": "ID of image stored in Glance that should be used as the ramdisk when booting an AMI-style image." 11 | }, 12 | "instance_uuid": { 13 | "type": "string", 14 | "description": "ID of instance used to create this image." 
15 | }, 16 | "architecture": { 17 | "description": "Operating system architecture as specified in http://docs.openstack.org/trunk/openstack-compute/admin/content/adding-images.html", 18 | "type": "string" 19 | }, 20 | "os_distro": { 21 | "description": "Common name of operating system distribution as specified in http://docs.openstack.org/trunk/openstack-compute/admin/content/adding-images.html", 22 | "type": "string" 23 | }, 24 | "os_version": { 25 | "description": "Operating system version as specified by the distributor", 26 | "type": "string" 27 | } 28 | } 29 | -------------------------------------------------------------------------------- /states/openstack-mitaka/glance/files/glance_init.sh: -------------------------------------------------------------------------------- 1 | export OS_TENANT_NAME="{{KEYSTONE_ADMIN_TENANT}}" 2 | export OS_USERNAME="{{KEYSTONE_ADMIN_USER}}" 3 | export OS_PASSWORD="{{KEYSTONE_ADMIN_PASSWD}}" 4 | export OS_AUTH_URL="{{KEYSTONE_AUTH_URL}}" 5 | 6 | keystone user-create --name={{AUTH_GLANCE_ADMIN_USER}} --pass={{AUTH_GLANCE_ADMIN_PASS}} --email=glance@example.com 7 | keystone user-role-add --user={{AUTH_GLANCE_ADMIN_USER}} --tenant={{AUTH_GLANCE_ADMIN_TENANT}} --role=admin 8 | 9 | function get_id () { 10 | echo `"$@" | grep ' id ' | awk '{print $4}'` 11 | } 12 | 13 | GLANCE_SERVICE=$(get_id \ 14 | keystone service-create --name=glance \ 15 | --type=image \ 16 | --description="OpenStack Image Service") 17 | 18 | keystone endpoint-create --service-id="$GLANCE_SERVICE" \ 19 | --publicurl="http://{{GLANCE_IP}}:9292" \ 20 | --adminurl="http://{{GLANCE_IP}}:9292" \ 21 | --internalurl="http://{{GLANCE_IP}}:9292" 22 | -------------------------------------------------------------------------------- /states/openstack-mitaka/glance/files/openstack-glance-api: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # openstack-glance-api OpenStack Image Service API server 4 | # 5 | # chkconfig: - 98 02 6 | # description: OpenStack Image Service (code-named Glance) API server 7 | 8 | ### BEGIN INIT INFO 9 | # Provides: 10 | # Required-Start: $remote_fs $network $syslog 11 | # Required-Stop: $remote_fs $syslog 12 | # Default-Stop: 0 1 6 13 | # Short-Description: Glance API server 14 | # Description: OpenStack Image Service (code-named Glance) API server 15 | ### END INIT INFO 16 | 17 | . /etc/rc.d/init.d/functions 18 | 19 | suffix=api 20 | prog=openstack-glance-$suffix 21 | exec="/usr/bin/glance-$suffix" 22 | config="/etc/glance/glance-$suffix.conf" 23 | pidfile="/var/run/glance/glance-$suffix.pid" 24 | startuplog="/var/log/glance/$prog-startup.log" 25 | timeout=60 26 | wrapper="/usr/share/glance/daemon_notify.sh" 27 | 28 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 29 | 30 | lockfile=/var/lock/subsys/$prog 31 | 32 | start() { 33 | [ -x $exec ] || exit 5 34 | [ -f $config ] || exit 6 35 | echo -n $"Starting $prog: " 36 | daemon --user glance --pidfile $pidfile "$wrapper $exec $pidfile $startuplog $timeout" 37 | retval=$? 38 | echo 39 | [ $retval -eq 0 ] && touch $lockfile 40 | return $retval 41 | } 42 | 43 | stop() { 44 | echo -n $"Stopping $prog: " 45 | killproc -p $pidfile $prog 46 | retval=$? 
47 | echo 48 | [ $retval -eq 0 ] && rm -f $lockfile 49 | return $retval 50 | } 51 | 52 | restart() { 53 | stop 54 | start 55 | } 56 | 57 | reload() { 58 | restart 59 | } 60 | 61 | force_reload() { 62 | restart 63 | } 64 | 65 | rh_status() { 66 | status -p $pidfile $prog 67 | } 68 | 69 | rh_status_q() { 70 | rh_status >/dev/null 2>&1 71 | } 72 | 73 | 74 | case "$1" in 75 | start) 76 | rh_status_q && exit 0 77 | $1 78 | ;; 79 | stop) 80 | rh_status_q || exit 0 81 | $1 82 | ;; 83 | restart) 84 | $1 85 | ;; 86 | reload) 87 | rh_status_q || exit 7 88 | $1 89 | ;; 90 | force-reload) 91 | force_reload 92 | ;; 93 | status) 94 | rh_status 95 | ;; 96 | condrestart|try-restart) 97 | rh_status_q || exit 0 98 | restart 99 | ;; 100 | *) 101 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 102 | exit 2 103 | esac 104 | exit $? 105 | -------------------------------------------------------------------------------- /states/openstack-mitaka/glance/files/openstack-glance-registry: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # openstack-glance-registry OpenStack Image Service registry server 4 | # 5 | # chkconfig: - 98 02 6 | # description: OpenStack Image Service (code-named Glance) registry server 7 | 8 | ### BEGIN INIT INFO 9 | # Provides: 10 | # Required-Start: $remote_fs $network $syslog 11 | # Required-Stop: $remote_fs $syslog 12 | # Default-Stop: 0 1 6 13 | # Short-Description: Glance registry server 14 | # Description: OpenStack Image Service (code-named Glance) registry server 15 | ### END INIT INFO 16 | 17 | . /etc/rc.d/init.d/functions 18 | 19 | suffix=registry 20 | prog=openstack-glance-$suffix 21 | exec="/usr/bin/glance-$suffix" 22 | config="/etc/glance/glance-$suffix.conf" 23 | pidfile="/var/run/glance/glance-$suffix.pid" 24 | startuplog="/var/log/glance/$prog-startup.log" 25 | timeout=60 26 | wrapper="/usr/share/glance/daemon_notify.sh" 27 | 28 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 29 | 30 | lockfile=/var/lock/subsys/$prog 31 | 32 | start() { 33 | [ -x $exec ] || exit 5 34 | [ -f $config ] || exit 6 35 | echo -n $"Starting $prog: " 36 | daemon --user glance --pidfile $pidfile "$wrapper $exec $pidfile $startuplog $timeout" 37 | retval=$? 38 | echo 39 | [ $retval -eq 0 ] && touch $lockfile 40 | return $retval 41 | } 42 | 43 | stop() { 44 | echo -n $"Stopping $prog: " 45 | killproc -p $pidfile $prog 46 | retval=$? 47 | echo 48 | [ $retval -eq 0 ] && rm -f $lockfile 49 | return $retval 50 | } 51 | 52 | restart() { 53 | stop 54 | start 55 | } 56 | 57 | reload() { 58 | restart 59 | } 60 | 61 | force_reload() { 62 | restart 63 | } 64 | 65 | rh_status() { 66 | status -p $pidfile $prog 67 | } 68 | 69 | rh_status_q() { 70 | rh_status >/dev/null 2>&1 71 | } 72 | 73 | 74 | case "$1" in 75 | start) 76 | rh_status_q && exit 0 77 | $1 78 | ;; 79 | stop) 80 | rh_status_q || exit 0 81 | $1 82 | ;; 83 | restart) 84 | $1 85 | ;; 86 | reload) 87 | rh_status_q || exit 7 88 | $1 89 | ;; 90 | force-reload) 91 | force_reload 92 | ;; 93 | status) 94 | rh_status 95 | ;; 96 | condrestart|try-restart) 97 | rh_status_q || exit 0 98 | restart 99 | ;; 100 | *) 101 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 102 | exit 2 103 | esac 104 | exit $?
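All three Glance init scripts in this directory follow the same SysV pattern: `daemon` starts the service as the `glance` user through the `daemon_notify.sh` wrapper, the pidfile drives `status` and `killproc`, and `/var/lock/subsys/$prog` records a successful start. A minimal smoke test of that contract, assuming the script above has already been installed to `/etc/init.d` by the `server.sls` state shown further below:

```bash
# Hedged sketch: exercises only the interface the script above defines.
/etc/init.d/openstack-glance-registry start        # daemon() writes /var/run/glance/glance-registry.pid
/etc/init.d/openstack-glance-registry status       # status -p reads that same pidfile
ls -l /var/lock/subsys/openstack-glance-registry   # lockfile exists only if start returned 0
/etc/init.d/openstack-glance-registry condrestart  # no-op unless status reports the daemon running
```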
105 | -------------------------------------------------------------------------------- /states/openstack-mitaka/glance/files/openstack-glance-scrubber: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # openstack-glance-scrubber OpenStack Image Service scrubber daemon 4 | # 5 | # chkconfig: - 98 02 6 | # description: OpenStack Image Service (code-named Glance) scrubber daemon 7 | 8 | ### BEGIN INIT INFO 9 | # Provides: 10 | # Required-Start: $remote_fs $network $syslog 11 | # Required-Stop: $remote_fs $syslog 12 | # Default-Stop: 0 1 6 13 | # Short-Description: Glance scrubber daemon 14 | # Description: OpenStack Image Service (code-named Glance) scrubber daemon 15 | ### END INIT INFO 16 | 17 | . /etc/rc.d/init.d/functions 18 | 19 | suffix=scrubber 20 | prog=openstack-glance-$suffix 21 | exec="/usr/bin/glance-$suffix" 22 | user_config="/etc/glance/glance-$suffix.conf" 23 | pidfile="/var/run/glance/glance-$suffix.pid" 24 | startuplog="/var/log/glance/$prog-startup.log" 25 | timeout=60 26 | wrapper="/usr/share/glance/daemon_notify.sh" 27 | 28 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 29 | 30 | lockfile=/var/lock/subsys/$prog 31 | 32 | start() { 33 | [ -x $exec ] || exit 5 34 | [ -f $user_config ] || exit 7 35 | echo -n $"Starting $prog: " 36 | daemon --user glance --pidfile $pidfile "$wrapper $exec $pidfile $startuplog $timeout" 37 | retval=$? 38 | echo 39 | [ $retval -eq 0 ] && touch $lockfile 40 | return $retval 41 | } 42 | 43 | stop() { 44 | echo -n $"Stopping $prog: " 45 | killproc -p $pidfile $prog 46 | retval=$? 47 | echo 48 | [ $retval -eq 0 ] && rm -f $lockfile 49 | return $retval 50 | } 51 | 52 | restart() { 53 | stop 54 | start 55 | } 56 | 57 | reload() { 58 | restart 59 | } 60 | 61 | force_reload() { 62 | restart 63 | } 64 | 65 | rh_status() { 66 | status -p $pidfile $prog 67 | } 68 | 69 | rh_status_q() { 70 | rh_status >/dev/null 2>&1 71 | } 72 | 73 | 74 | case "$1" in 75 | start) 76 | rh_status_q && exit 0 77 | $1 78 | ;; 79 | stop) 80 | rh_status_q || exit 0 81 | $1 82 | ;; 83 | restart) 84 | $1 85 | ;; 86 | reload) 87 | rh_status_q || exit 7 88 | $1 89 | ;; 90 | force-reload) 91 | force_reload 92 | ;; 93 | status) 94 | rh_status 95 | ;; 96 | condrestart|try-restart) 97 | rh_status_q || exit 0 98 | restart 99 | ;; 100 | *) 101 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 102 | exit 2 103 | esac 104 | exit $? 
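The `glance/init.sls` state that follows guards its one-shot Keystone registration with a lock file (`unless: test -f /etc/glance-init.lock`), so a failed bootstrap is not retried on the next run. A sketch of how to re-trigger it by hand from the salt-ssh node, assuming the roster from the README; `linux-node1` is that example target, not something these states hard-code:

```bash
# Hedged sketch: clear the lock, then re-apply glance (server.sls includes glance.init,
# so the service requisites inside init.sls resolve).
salt-ssh 'linux-node1' cmd.run 'rm -f /etc/glance-init.lock'
salt-ssh 'linux-node1' state.sls openstack.glance.server
```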
105 | -------------------------------------------------------------------------------- /states/openstack-mitaka/glance/init.sls: -------------------------------------------------------------------------------- 1 | glance-init: 2 | file.managed: 3 | - name: /usr/local/bin/glance_init.sh 4 | - source: salt://openstack/glance/files/glance_init.sh 5 | - mode: 755 6 | - user: root 7 | - group: root 8 | - template: jinja 9 | - defaults: 10 | KEYSTONE_ADMIN_TENANT: {{ pillar['keystone']['KEYSTONE_ADMIN_TENANT'] }} 11 | KEYSTONE_ADMIN_USER: {{ pillar['keystone']['KEYSTONE_ADMIN_USER'] }} 12 | KEYSTONE_ADMIN_PASSWD: {{ pillar['keystone']['KEYSTONE_ADMIN_PASSWD'] }} 13 | KEYSTONE_AUTH_URL: {{ pillar['keystone']['KEYSTONE_AUTH_URL'] }} 14 | GLANCE_IP: {{ pillar['glance']['GLANCE_IP'] }} 15 | AUTH_GLANCE_ADMIN_TENANT: {{ pillar['glance']['AUTH_GLANCE_ADMIN_TENANT'] }} 16 | AUTH_GLANCE_ADMIN_USER: {{ pillar['glance']['AUTH_GLANCE_ADMIN_USER'] }} 17 | AUTH_GLANCE_ADMIN_PASS: {{ pillar['glance']['AUTH_GLANCE_ADMIN_PASS'] }} 18 | cmd.run: 19 | - name: sleep 10 && bash /usr/local/bin/glance_init.sh && touch /etc/glance-init.lock 20 | - require: 21 | - file: glance-init 22 | - service: openstack-glance-api 23 | - service: openstack-glance-registry 24 | - unless: test -f /etc/glance-init.lock 25 | -------------------------------------------------------------------------------- /states/openstack-mitaka/glance/server.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - openstack.glance.init 3 | glance-install: 4 | pkg.installed: 5 | - names: 6 | - openstack-glance 7 | - python-glanceclient 8 | - require: 9 | - file: rdo_repo 10 | 11 | /etc/glance: 12 | file.recurse: 13 | - source: salt://openstack/glance/files/config 14 | - user: glance 15 | - group: glance 16 | - template: jinja 17 | - defaults: 18 | MYSQL_SERVER: {{ pillar['keystone']['MYSQL_SERVER'] }} 19 | GLANCE_DB_PASS: {{ pillar['glance']['GLANCE_DB_PASS'] }} 20 | GLANCE_DB_USER: {{ pillar['glance']['GLANCE_DB_USER'] }} 21 | GLANCE_DB_NAME: {{ pillar['glance']['GLANCE_DB_NAME'] }} 22 | RABBITMQ_HOST: {{ pillar['rabbit']['RABBITMQ_HOST'] }} 23 | RABBITMQ_PORT: {{ pillar['rabbit']['RABBITMQ_PORT'] }} 24 | RABBITMQ_USER: {{ pillar['rabbit']['RABBITMQ_USER'] }} 25 | RABBITMQ_PASS: {{ pillar['rabbit']['RABBITMQ_PASS'] }} 26 | AUTH_KEYSTONE_HOST: {{ pillar['glance']['AUTH_KEYSTONE_HOST'] }} 27 | AUTH_KEYSTONE_PORT: {{ pillar['glance']['AUTH_KEYSTONE_PORT'] }} 28 | AUTH_KEYSTONE_PROTOCOL: {{ pillar['glance']['AUTH_KEYSTONE_PROTOCOL'] }} 29 | AUTH_GLANCE_ADMIN_TENANT: {{ pillar['glance']['AUTH_GLANCE_ADMIN_TENANT'] }} 30 | AUTH_GLANCE_ADMIN_USER: {{ pillar['glance']['AUTH_GLANCE_ADMIN_USER'] }} 31 | AUTH_GLANCE_ADMIN_PASS: {{ pillar['glance']['AUTH_GLANCE_ADMIN_PASS'] }} 32 | 33 | glance-db-sync: 34 | cmd.run: 35 | - name: yum install -y python-crypto && glance-manage db_sync && touch /etc/glance-datasync.lock && chown glance:glance /var/log/glance/* 36 | - require: 37 | - mysql_grants: glance-mysql 38 | - pkg: glance-install 39 | - unless: test -f /etc/glance-datasync.lock 40 | 41 | openstack-glance-api: 42 | file.managed: 43 | - name: /etc/init.d/openstack-glance-api 44 | - source: salt://openstack/glance/files/openstack-glance-api 45 | - mode: 755 46 | - user: root 47 | - group: root 48 | service: 49 | - running 50 | - enable: True 51 | - watch: 52 | - file: /etc/glance 53 | - require: 54 | - pkg: glance-install 55 | - cmd: glance-db-sync 56 | 57 | openstack-glance-registry: 58 | file.managed: 59 | - name: 
/etc/init.d/openstack-glance-registry 60 | - source: salt://openstack/glance/files/openstack-glance-registry 61 | - mode: 755 62 | - user: root 63 | - group: root 64 | service: 65 | - running 66 | - enable: True 67 | - watch: 68 | - file: /etc/glance 69 | - require: 70 | - pkg: glance-install 71 | - cmd: glance-db-sync 72 | -------------------------------------------------------------------------------- /states/openstack-mitaka/horizon/server.sls: -------------------------------------------------------------------------------- 1 | openstack_dashboard: 2 | pkg.installed: 3 | - names: 4 | - httpd 5 | - mod_wsgi 6 | - memcached 7 | - python-memcached 8 | - openstack-dashboard 9 | file.managed: 10 | - name: /etc/openstack-dashboard/local_settings 11 | - source: salt://openstack/horizon/files/config/local_settings 12 | - user: apache 13 | - group: apache 14 | - template: jinja 15 | - defaults: 16 | ALLOWED_HOSTS: {{ pillar['horizon']['ALLOWED_HOSTS'] }} 17 | OPENSTACK_HOST: {{ pillar['horizon']['OPENSTACK_HOST'] }} 18 | service.running: 19 | - name: httpd 20 | - enable: True 21 | - reload: True 22 | - require: 23 | - pkg: openstack_dashboard 24 | - watch: 25 | - file: openstack_dashboard 26 | -------------------------------------------------------------------------------- /states/openstack-mitaka/init/base.sls: -------------------------------------------------------------------------------- 1 | 2 | 3 | Structured training courses: 4 | 5 | 《Automated Operations Engineer》 6 | 《Cloud Computing and Container Architect》 7 | 《DevOps Engineer: Operations Development Track》 8 | 《DevOps Sandbox: The Phoenix Project Simulation Exercise》 9 | 《Fully Open-Source End-to-End DevOps Deployment Pipeline》 10 | 《EXIN DevOps Professional Certification Training》 11 | 《Enterprise-Level DevOps Implementation Framework and Practice Cases》 12 | 13 | 14 | 15 | -------------------------------------------------------------------------------- /states/openstack-mitaka/init/compute_base.sls: -------------------------------------------------------------------------------- 1 | messagebus: 2 | service.running: 3 | - name: messagebus 4 | - enable: True 5 | 6 | libvirtd: 7 | pkg.installed: 8 | - names: 9 | - libvirt 10 | - libvirt-python 11 | - libvirt-client 12 | service.running: 13 | - name: libvirtd 14 | - enable: True 15 | 16 | avahi-daemon: 17 | pkg.installed: 18 | - name: avahi 19 | service.running: 20 | - name: avahi-daemon 21 | - enable: True 22 | - require: 23 | - pkg: avahi-daemon 24 | -------------------------------------------------------------------------------- /states/openstack-mitaka/init/files/compute_pip.txt: -------------------------------------------------------------------------------- 1 | alembic>=0.4.1 2 | amqplib>=0.6.1 3 | anyjson>=0.3.3 4 | argparse 5 | Babel>=1.3 6 | boto>=2.12.0,!=2.13.0 7 | eventlet>=0.13.0 8 | greenlet>=0.3.2 9 | httplib2>=0.7.5 10 | iso8601>=0.1.9 11 | Jinja2 12 | jsonrpclib 13 | jsonschema>=2.0.0,<3.0.0 14 | kombu>=2.4.8 15 | lxml>=2.3 16 | netaddr>=0.7.6 17 | oslo.config>=1.2.0 18 | oslo.messaging>=1.3.0a9 19 | oslo.rootwrap 20 | paramiko>=1.9.0 21 | Paste 22 | PasteDeploy>=1.5.0 23 | pbr>=0.6,<1.0 24 | pyasn1 25 | pycadf>=0.4.1 26 | python-cinderclient>=1.0.6 27 | python-glanceclient>=0.9.0 28 | python-keystoneclient>=0.7.0 29 | python-neutronclient>=2.3.4,<3 30 | python-novaclient>=2.17.0 31 | requests>=1.1 32 | Routes>=1.12.3 33 | six>=1.5.2 34 | SQLAlchemy>=0.7.8,<=0.9.99 35 | sqlalchemy-migrate>=0.8.2,!=0.8.4 36 | stevedore>=0.14 37 | suds>=0.4 38 | WebOb>=1.2.3 39 | websockify>=0.5.1,<0.6 40 | wsgiref>=0.1.2 41 | -------------------------------------------------------------------------------- /states/openstack-mitaka/init/files/control_pip.txt:
-------------------------------------------------------------------------------- 1 | alembic>=0.4.1 2 | amqplib>=0.6.1 3 | anyjson>=0.3.3 4 | argparse 5 | Babel>=1.3 6 | boto>=2.12.0,!=2.13.0 7 | Django>=1.4,<1.7 8 | django_compressor>=1.3 9 | django_openstack_auth>=1.1.4 10 | dogpile.cache>=0.5.0 11 | eventlet>=0.13.0 12 | greenlet>=0.3.2 13 | httplib2>=0.7.5 14 | iso8601>=0.1.9 15 | Jinja2 16 | jsonrpclib 17 | jsonschema>=2.0.0,<3.0.0 18 | kombu>=2.4.8 19 | lesscpy>=0.9j 20 | lockfile>=0.8 21 | lxml>=2.3 22 | netaddr>=0.7.6 23 | oauthlib>=0.6 24 | ordereddict 25 | oslo.config>=1.2.0 26 | oslo.messaging>=1.3.0a9 27 | oslo.rootwrap 28 | oslo.vmware>=0.2 29 | paramiko>=1.9.0 30 | passlib 31 | Paste 32 | PasteDeploy>=1.5.0 33 | pbr>=0.6,<1.0 34 | pyasn1 35 | pycadf>=0.4.1 36 | pycrypto>=2.6 37 | pyOpenSSL>=0.11 38 | python-ceilometerclient>=1.0.6 39 | python-cinderclient>=1.0.6 40 | python-glanceclient>=0.9.0 41 | python-heatclient>=0.2.3 42 | python-keystoneclient>=0.7.0 43 | python-neutronclient>=2.3.4,<3 44 | python-novaclient>=2.17.0 45 | python-swiftclient>=1.6 46 | python-troveclient>=1.0.3 47 | pytz>=2010h 48 | requests>=1.1 49 | Routes>=1.12.3 50 | rtslib-fb>=2.1.39 51 | six>=1.6.0 52 | SQLAlchemy>=0.7.8,<=0.9.99 53 | sqlalchemy-migrate>=0.8.2,!=0.8.4 54 | stevedore>=0.14 55 | suds>=0.4 56 | taskflow>=0.1.3,<0.2 57 | WebOb>=1.2.3 58 | websockify>=0.5.1,<0.6 59 | wsgiref>=0.1.2 60 | -------------------------------------------------------------------------------- /states/openstack-mitaka/init/files/epel-6.repo: -------------------------------------------------------------------------------- 1 | [epel] 2 | name=Extra Packages for Enterprise Linux 6 - $basearch 3 | baseurl=http://mirrors.aliyun.com/epel/6/$basearch 4 | http://mirrors.aliyuncs.com/epel/6/$basearch 5 | #mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch 6 | failovermethod=priority 7 | enabled=1 8 | gpgcheck=0 9 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 10 | 11 | [epel-debuginfo] 12 | name=Extra Packages for Enterprise Linux 6 - $basearch - Debug 13 | baseurl=http://mirrors.aliyun.com/epel/6/$basearch/debug 14 | http://mirrors.aliyuncs.com/epel/6/$basearch/debug 15 | #mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch 16 | failovermethod=priority 17 | enabled=0 18 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 19 | gpgcheck=0 20 | 21 | [epel-source] 22 | name=Extra Packages for Enterprise Linux 6 - $basearch - Source 23 | baseurl=http://mirrors.aliyun.com/epel/6/SRPMS 24 | http://mirrors.aliyuncs.com/epel/6/SRPMS 25 | #mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch 26 | failovermethod=priority 27 | enabled=0 28 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 29 | gpgcheck=0 -------------------------------------------------------------------------------- /states/openstack-mitaka/init/files/foreman.repo: -------------------------------------------------------------------------------- 1 | [foreman] 2 | name=Foreman stable 3 | baseurl=http://yum.theforeman.org/releases/1.5/el6/x86_64 4 | enabled=1 5 | gpgcheck=1 6 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-foreman 7 | 8 | [foreman-source] 9 | name=Foreman stable - source 10 | baseurl=http://yum.theforeman.org/releases/1.5/el6/source 11 | enabled=0 12 | gpgcheck=1 13 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-foreman 14 | 15 | [foreman-plugins] 16 | name=Foreman stable - plugins 17 | baseurl=http://yum.theforeman.org/plugins/1.5/el6/x86_64 18 | 
enabled=1 19 | gpgcheck=0 20 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-foreman 21 | 22 | [foreman-plugins-source] 23 | name=Foreman stable - plugins source 24 | baseurl=http://yum.theforeman.org/plugins/1.5/el6/source 25 | enabled=0 26 | gpgcheck=0 27 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-foreman 28 | -------------------------------------------------------------------------------- /states/openstack-mitaka/init/files/ntp.conf: -------------------------------------------------------------------------------- 1 | # For more information about this file, see the man pages 2 | # ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5). 3 | 4 | driftfile /var/lib/ntp/drift 5 | 6 | # Permit time synchronization with our time source, but do not 7 | # permit the source to query or modify the service on this system. 8 | restrict default kod nomodify notrap nopeer noquery 9 | restrict -6 default kod nomodify notrap nopeer noquery 10 | 11 | # Permit all access over the loopback interface. This could 12 | # be tightened as well, but to do so would effect some of 13 | # the administrative functions. 14 | restrict 127.0.0.1 15 | restrict -6 ::1 16 | 17 | # Hosts on local network are less restricted. 18 | #restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap 19 | 20 | # Use public servers from the pool.ntp.org project. 21 | # Please consider joining the pool (http://www.pool.ntp.org/join.html). 22 | server 0.centos.pool.ntp.org iburst 23 | server 1.centos.pool.ntp.org iburst 24 | server 2.centos.pool.ntp.org iburst 25 | server 3.centos.pool.ntp.org iburst 26 | 27 | #broadcast 192.168.1.255 autokey # broadcast server 28 | #broadcastclient # broadcast client 29 | #broadcast 224.0.1.1 autokey # multicast server 30 | #multicastclient 224.0.1.1 # multicast client 31 | #manycastserver 239.255.254.254 # manycast server 32 | #manycastclient 239.255.254.254 autokey # manycast client 33 | 34 | # Enable public key cryptography. 35 | #crypto 36 | 37 | includefile /etc/ntp/crypto/pw 38 | 39 | # Key file containing the keys and key identifiers used when operating 40 | # with symmetric key cryptography. 41 | keys /etc/ntp/keys 42 | 43 | # Specify the key identifiers which are trusted. 44 | #trustedkey 4 8 42 45 | 46 | # Specify the key identifier to use with the ntpdc utility. 47 | #requestkey 8 48 | 49 | # Specify the key identifier to use with the ntpq utility. 50 | #controlkey 8 51 | 52 | # Enable writing of statistics records. 
53 | #statistics clockstats cryptostats loopstats peerstats 54 | -------------------------------------------------------------------------------- /states/openstack-mitaka/init/files/pip.conf: -------------------------------------------------------------------------------- 1 | [global] 2 | timeout = 60 3 | index-url = http://pypi.douban.com/simple/ 4 | -------------------------------------------------------------------------------- /states/openstack-mitaka/init/files/puppetlabs.repo: -------------------------------------------------------------------------------- 1 | [puppetlabs-products] 2 | name=Puppet Labs Products - $basearch 3 | baseurl=http://yum.puppetlabs.com/el/6/products/$basearch 4 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs 5 | enabled=1 6 | gpgcheck=1 7 | 8 | [puppetlabs-deps] 9 | name=Puppet Labs Dependencies - $basearch 10 | baseurl=http://yum.puppetlabs.com/el/6/dependencies/$basearch 11 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs 12 | enabled=1 13 | gpgcheck=1 14 | 15 | [puppetlabs-devel] 16 | name=Puppet Labs Devel - $basearch 17 | baseurl=http://yum.puppetlabs.com/el/6/devel/$basearch 18 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs 19 | enabled=0 20 | gpgcheck=1 21 | 22 | [puppetlabs-products-source] 23 | name=Puppet Labs Products - $basearch - Source 24 | baseurl=http://yum.puppetlabs.com/el/6/products/SRPMS 25 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs 26 | failovermethod=priority 27 | enabled=0 28 | gpgcheck=1 29 | 30 | [puppetlabs-deps-source] 31 | name=Puppet Labs Source Dependencies - $basearch - Source 32 | baseurl=http://yum.puppetlabs.com/el/6/dependencies/SRPMS 33 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs 34 | enabled=0 35 | gpgcheck=1 36 | 37 | [puppetlabs-devel-source] 38 | name=Puppet Labs Devel - $basearch - Source 39 | baseurl=http://yum.puppetlabs.com/el/6/devel/SRPMS 40 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs 41 | enabled=0 42 | gpgcheck=1 43 | -------------------------------------------------------------------------------- /states/openstack-mitaka/init/files/rdo-release.repo: -------------------------------------------------------------------------------- 1 | [openstack-icehouse] 2 | name=OpenStack Icehouse Repository 3 | baseurl=http://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6/ 4 | enabled=1 5 | skip_if_unavailable=0 6 | gpgcheck=1 7 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Icehouse 8 | priority=98 9 | -------------------------------------------------------------------------------- /states/openstack-mitaka/keystone/files/config/default_catalog.templates: -------------------------------------------------------------------------------- 1 | # config for templated.Catalog, using camelCase because I don't want to do 2 | # translations for keystone compat 3 | catalog.RegionOne.identity.publicURL = http://localhost:$(public_port)s/v2.0 4 | catalog.RegionOne.identity.adminURL = http://localhost:$(admin_port)s/v2.0 5 | catalog.RegionOne.identity.internalURL = http://localhost:$(public_port)s/v2.0 6 | catalog.RegionOne.identity.name = Identity Service 7 | 8 | # fake compute service for now to help novaclient tests work 9 | catalog.RegionOne.compute.publicURL = http://localhost:$(compute_port)s/v1.1/$(tenant_id)s 10 | catalog.RegionOne.compute.adminURL = http://localhost:$(compute_port)s/v1.1/$(tenant_id)s 11 | catalog.RegionOne.compute.internalURL = http://localhost:$(compute_port)s/v1.1/$(tenant_id)s 12 | catalog.RegionOne.compute.name = Compute 
Service 13 | 14 | catalog.RegionOne.volume.publicURL = http://localhost:8776/v1/$(tenant_id)s 15 | catalog.RegionOne.volume.adminURL = http://localhost:8776/v1/$(tenant_id)s 16 | catalog.RegionOne.volume.internalURL = http://localhost:8776/v1/$(tenant_id)s 17 | catalog.RegionOne.volume.name = Volume Service 18 | 19 | catalog.RegionOne.ec2.publicURL = http://localhost:8773/services/Cloud 20 | catalog.RegionOne.ec2.adminURL = http://localhost:8773/services/Admin 21 | catalog.RegionOne.ec2.internalURL = http://localhost:8773/services/Cloud 22 | catalog.RegionOne.ec2.name = EC2 Service 23 | 24 | catalog.RegionOne.image.publicURL = http://localhost:9292/v1 25 | catalog.RegionOne.image.adminURL = http://localhost:9292/v1 26 | catalog.RegionOne.image.internalURL = http://localhost:9292/v1 27 | catalog.RegionOne.image.name = Image Service 28 | -------------------------------------------------------------------------------- /states/openstack-mitaka/keystone/files/config/logging.conf: -------------------------------------------------------------------------------- 1 | [loggers] 2 | keys=root,access 3 | 4 | [handlers] 5 | keys=production,file,access_file,devel 6 | 7 | [formatters] 8 | keys=minimal,normal,debug 9 | 10 | 11 | ########### 12 | # Loggers # 13 | ########### 14 | 15 | [logger_root] 16 | level=WARNING 17 | handlers=file 18 | 19 | [logger_access] 20 | level=INFO 21 | qualname=access 22 | handlers=access_file 23 | 24 | 25 | ################ 26 | # Log Handlers # 27 | ################ 28 | 29 | [handler_production] 30 | class=handlers.SysLogHandler 31 | level=ERROR 32 | formatter=normal 33 | args=(('localhost', handlers.SYSLOG_UDP_PORT), handlers.SysLogHandler.LOG_USER) 34 | 35 | [handler_file] 36 | class=handlers.WatchedFileHandler 37 | level=WARNING 38 | formatter=normal 39 | args=('error.log',) 40 | 41 | [handler_access_file] 42 | class=handlers.WatchedFileHandler 43 | level=INFO 44 | formatter=minimal 45 | args=('access.log',) 46 | 47 | [handler_devel] 48 | class=StreamHandler 49 | level=NOTSET 50 | formatter=debug 51 | args=(sys.stdout,) 52 | 53 | 54 | ################## 55 | # Log Formatters # 56 | ################## 57 | 58 | [formatter_minimal] 59 | format=%(message)s 60 | 61 | [formatter_normal] 62 | format=(%(name)s): %(asctime)s %(levelname)s %(message)s 63 | 64 | [formatter_debug] 65 | format=(%(name)s): %(asctime)s %(levelname)s %(module)s %(funcName)s %(message)s 66 | -------------------------------------------------------------------------------- /states/openstack-mitaka/keystone/files/config/policy.json: -------------------------------------------------------------------------------- 1 | { 2 | "admin_required": "role:admin or is_admin:1", 3 | "service_role": "role:service", 4 | "service_or_admin": "rule:admin_required or rule:service_role", 5 | "owner" : "user_id:%(user_id)s", 6 | "admin_or_owner": "rule:admin_required or rule:owner", 7 | 8 | "default": "rule:admin_required", 9 | 10 | "identity:get_region": "", 11 | "identity:list_regions": "", 12 | "identity:create_region": "rule:admin_required", 13 | "identity:update_region": "rule:admin_required", 14 | "identity:delete_region": "rule:admin_required", 15 | 16 | "identity:get_service": "rule:admin_required", 17 | "identity:list_services": "rule:admin_required", 18 | "identity:create_service": "rule:admin_required", 19 | "identity:update_service": "rule:admin_required", 20 | "identity:delete_service": "rule:admin_required", 21 | 22 | "identity:get_endpoint": "rule:admin_required", 23 | "identity:list_endpoints": 
"rule:admin_required", 24 | "identity:create_endpoint": "rule:admin_required", 25 | "identity:update_endpoint": "rule:admin_required", 26 | "identity:delete_endpoint": "rule:admin_required", 27 | 28 | "identity:get_domain": "rule:admin_required", 29 | "identity:list_domains": "rule:admin_required", 30 | "identity:create_domain": "rule:admin_required", 31 | "identity:update_domain": "rule:admin_required", 32 | "identity:delete_domain": "rule:admin_required", 33 | 34 | "identity:get_project": "rule:admin_required", 35 | "identity:list_projects": "rule:admin_required", 36 | "identity:list_user_projects": "rule:admin_or_owner", 37 | "identity:create_project": "rule:admin_required", 38 | "identity:update_project": "rule:admin_required", 39 | "identity:delete_project": "rule:admin_required", 40 | 41 | "identity:get_user": "rule:admin_required", 42 | "identity:list_users": "rule:admin_required", 43 | "identity:create_user": "rule:admin_required", 44 | "identity:update_user": "rule:admin_required", 45 | "identity:delete_user": "rule:admin_required", 46 | "identity:change_password": "rule:admin_or_owner", 47 | 48 | "identity:get_group": "rule:admin_required", 49 | "identity:list_groups": "rule:admin_required", 50 | "identity:list_groups_for_user": "rule:admin_or_owner", 51 | "identity:create_group": "rule:admin_required", 52 | "identity:update_group": "rule:admin_required", 53 | "identity:delete_group": "rule:admin_required", 54 | "identity:list_users_in_group": "rule:admin_required", 55 | "identity:remove_user_from_group": "rule:admin_required", 56 | "identity:check_user_in_group": "rule:admin_required", 57 | "identity:add_user_to_group": "rule:admin_required", 58 | 59 | "identity:get_credential": "rule:admin_required", 60 | "identity:list_credentials": "rule:admin_required", 61 | "identity:create_credential": "rule:admin_required", 62 | "identity:update_credential": "rule:admin_required", 63 | "identity:delete_credential": "rule:admin_required", 64 | 65 | "identity:ec2_get_credential": "rule:admin_or_owner", 66 | "identity:ec2_list_credentials": "rule:admin_or_owner", 67 | "identity:ec2_create_credential": "rule:admin_or_owner", 68 | "identity:ec2_delete_credential": "rule:admin_required or (rule:owner and user_id:%(target.credential.user_id)s)", 69 | 70 | "identity:get_role": "rule:admin_required", 71 | "identity:list_roles": "rule:admin_required", 72 | "identity:create_role": "rule:admin_required", 73 | "identity:update_role": "rule:admin_required", 74 | "identity:delete_role": "rule:admin_required", 75 | 76 | "identity:check_grant": "rule:admin_required", 77 | "identity:list_grants": "rule:admin_required", 78 | "identity:create_grant": "rule:admin_required", 79 | "identity:revoke_grant": "rule:admin_required", 80 | 81 | "identity:list_role_assignments": "rule:admin_required", 82 | 83 | "identity:get_policy": "rule:admin_required", 84 | "identity:list_policies": "rule:admin_required", 85 | "identity:create_policy": "rule:admin_required", 86 | "identity:update_policy": "rule:admin_required", 87 | "identity:delete_policy": "rule:admin_required", 88 | 89 | "identity:check_token": "rule:admin_required", 90 | "identity:validate_token": "rule:service_or_admin", 91 | "identity:validate_token_head": "rule:service_or_admin", 92 | "identity:revocation_list": "rule:service_or_admin", 93 | "identity:revoke_token": "rule:admin_or_owner", 94 | 95 | "identity:create_trust": "user_id:%(trust.trustor_user_id)s", 96 | "identity:get_trust": "rule:admin_or_owner", 97 | "identity:list_trusts": "", 98 | 
"identity:list_roles_for_trust": "", 99 | "identity:check_role_for_trust": "", 100 | "identity:get_role_for_trust": "", 101 | "identity:delete_trust": "", 102 | 103 | "identity:create_consumer": "rule:admin_required", 104 | "identity:get_consumer": "rule:admin_required", 105 | "identity:list_consumers": "rule:admin_required", 106 | "identity:delete_consumer": "rule:admin_required", 107 | "identity:update_consumer": "rule:admin_required", 108 | 109 | "identity:authorize_request_token": "rule:admin_required", 110 | "identity:list_access_token_roles": "rule:admin_required", 111 | "identity:get_access_token_role": "rule:admin_required", 112 | "identity:list_access_tokens": "rule:admin_required", 113 | "identity:get_access_token": "rule:admin_required", 114 | "identity:delete_access_token": "rule:admin_required", 115 | 116 | "identity:list_projects_for_endpoint": "rule:admin_required", 117 | "identity:add_endpoint_to_project": "rule:admin_required", 118 | "identity:check_endpoint_in_project": "rule:admin_required", 119 | "identity:list_endpoints_for_project": "rule:admin_required", 120 | "identity:remove_endpoint_from_project": "rule:admin_required", 121 | 122 | "identity:create_identity_provider": "rule:admin_required", 123 | "identity:list_identity_providers": "rule:admin_required", 124 | "identity:get_identity_providers": "rule:admin_required", 125 | "identity:update_identity_provider": "rule:admin_required", 126 | "identity:delete_identity_provider": "rule:admin_required", 127 | 128 | "identity:create_protocol": "rule:admin_required", 129 | "identity:update_protocol": "rule:admin_required", 130 | "identity:get_protocol": "rule:admin_required", 131 | "identity:list_protocols": "rule:admin_required", 132 | "identity:delete_protocol": "rule:admin_required", 133 | 134 | "identity:create_mapping": "rule:admin_required", 135 | "identity:get_mapping": "rule:admin_required", 136 | "identity:list_mappings": "rule:admin_required", 137 | "identity:delete_mapping": "rule:admin_required", 138 | "identity:update_mapping": "rule:admin_required", 139 | 140 | "identity:list_projects_for_groups": "", 141 | "identity:list_domains_for_groups": "", 142 | 143 | "identity:list_revoke_events": "" 144 | } 145 | -------------------------------------------------------------------------------- /states/openstack-mitaka/keystone/files/keystone_admin: -------------------------------------------------------------------------------- 1 | export OS_TENANT_NAME="{{KEYSTONE_ADMIN_TENANT}}" 2 | export OS_USERNAME="{{KEYSTONE_ADMIN_USER}}" 3 | export OS_PASSWORD="{{KEYSTONE_ADMIN_PASSWD}}" 4 | export OS_AUTH_URL="{{KEYSTONE_AUTH_URL}}" 5 | -------------------------------------------------------------------------------- /states/openstack-mitaka/keystone/files/keystone_init.sh: -------------------------------------------------------------------------------- 1 | export OS_SERVICE_TOKEN="{{KEYSTONE_ADMIN_TOKEN}}" 2 | export OS_SERVICE_ENDPOINT="{{KEYSTONE_AUTH_URL}}" 3 | 4 | keystone user-create --name={{KEYSTONE_ADMIN_USER}} --pass="{{KEYSTONE_ADMIN_PASSWD}}" 5 | keystone tenant-create --name={{KEYSTONE_ADMIN_TENANT}} --description="Admin Tenant" 6 | keystone role-create --name={{KEYSTONE_ROLE_NAME}} 7 | keystone user-role-add --user={{KEYSTONE_ADMIN_USER}} --tenant={{KEYSTONE_ADMIN_TENANT}} --role={{KEYSTONE_ROLE_NAME}} 8 | keystone user-role-add --user={{KEYSTONE_ADMIN_USER}} --role=_member_ --tenant={{KEYSTONE_ADMIN_TENANT}} 9 | keystone tenant-create --name=service 10 | 11 | function get_id () { 12 | echo `"$@" | grep ' id ' | 
awk '{print $4}'` 13 | } 14 | #Keystone Service and Endpoint 15 | KEYSTONE_SERVICE_ID=$(get_id \ 16 | keystone service-create --name=keystone \ 17 | --type=identity \ 18 | --description="Keystone Identity Service") 19 | keystone endpoint-create --service-id="$KEYSTONE_SERVICE_ID" \ 20 | --publicurl="http://{{KEYSTONE_IP}}:5000/v2.0" \ 21 | --adminurl="http://{{KEYSTONE_IP}}:35357/v2.0" \ 22 | --internalurl="http://{{KEYSTONE_IP}}:5000/v2.0" 23 | -------------------------------------------------------------------------------- /states/openstack-mitaka/keystone/files/openstack-keystone: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # keystone OpenStack Identity Service 4 | # 5 | # chkconfig: - 98 02 6 | # description: keystone provides APIs to \ 7 | # * Authenticate users and provide a token \ 8 | # * Validate tokens 9 | ### END INIT INFO 10 | 11 | . /etc/rc.d/init.d/functions 12 | 13 | prog=keystone 14 | exec="/usr/bin/$prog-all" 15 | config="/etc/$prog/$prog.conf" 16 | distconfig="/usr/share/$prog/$prog-dist.conf" 17 | pidfile="/var/run/$prog/$prog.pid" 18 | startuplog="/var/log/$prog/$prog-startup.log" 19 | timeout=60 20 | wrapper="/usr/share/$prog/daemon_notify.sh" 21 | 22 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 23 | 24 | lockfile=/var/lock/subsys/$prog 25 | 26 | start() { 27 | [ -x $exec ] || exit 5 28 | [ -f $config ] || exit 6 29 | echo -n $"Starting $prog: " 30 | daemon --user keystone --pidfile $pidfile "$wrapper $exec $pidfile $startuplog $timeout" 31 | retval=$? 32 | echo 33 | [ $retval -eq 0 ] && touch $lockfile 34 | return $retval 35 | } 36 | 37 | stop() { 38 | echo -n $"Stopping $prog: " 39 | killproc -p $pidfile $prog 40 | retval=$? 41 | echo 42 | [ $retval -eq 0 ] && rm -f $lockfile 43 | return $retval 44 | } 45 | 46 | restart() { 47 | stop 48 | start 49 | } 50 | 51 | reload() { 52 | restart 53 | } 54 | 55 | force_reload() { 56 | restart 57 | } 58 | 59 | rh_status() { 60 | status -p $pidfile $prog 61 | } 62 | 63 | rh_status_q() { 64 | rh_status >/dev/null 2>&1 65 | } 66 | 67 | 68 | case "$1" in 69 | start) 70 | rh_status_q && exit 0 71 | $1 72 | ;; 73 | stop) 74 | rh_status_q || exit 0 75 | $1 76 | ;; 77 | restart) 78 | $1 79 | ;; 80 | reload) 81 | rh_status_q || exit 7 82 | $1 83 | ;; 84 | force-reload) 85 | force_reload 86 | ;; 87 | status) 88 | rh_status 89 | ;; 90 | condrestart|try-restart) 91 | rh_status_q || exit 0 92 | restart 93 | ;; 94 | *) 95 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 96 | exit 2 97 | esac 98 | exit $?
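After `keystone_init.sh` above has run, the bootstrap can be verified with the same legacy `keystone` CLI the script itself uses, once the `/root/keystone_admin` credentials file installed by `server.sls` below has been sourced. A hedged manual check, not part of the states themselves:

```bash
# Hedged sketch: read-only checks against the freshly bootstrapped Keystone.
source /root/keystone_admin   # exports OS_TENANT_NAME, OS_USERNAME, OS_PASSWORD, OS_AUTH_URL
keystone user-list            # should list the admin user created by keystone_init.sh
keystone tenant-list          # the admin tenant plus the bare 'service' tenant
keystone endpoint-list        # the v2.0 identity endpoint on ports 5000 and 35357
```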
99 | -------------------------------------------------------------------------------- /states/openstack-mitaka/keystone/init.sls: -------------------------------------------------------------------------------- 1 | keystone-init: 2 | file.managed: 3 | - name: /usr/local/bin/keystone_init.sh 4 | - source: salt://openstack/keystone/files/keystone_init.sh 5 | - mode: 755 6 | - user: root 7 | - group: root 8 | - template: jinja 9 | - defaults: 10 | KEYSTONE_ADMIN_TOKEN: {{ pillar['keystone']['KEYSTONE_ADMIN_TOKEN'] }} 11 | KEYSTONE_ADMIN_TENANT: {{ pillar['keystone']['KEYSTONE_ADMIN_TENANT'] }} 12 | KEYSTONE_ADMIN_USER: {{ pillar['keystone']['KEYSTONE_ADMIN_USER'] }} 13 | KEYSTONE_ADMIN_PASSWD: {{ pillar['keystone']['KEYSTONE_ADMIN_PASSWD'] }} 14 | KEYSTONE_ROLE_NAME: {{ pillar['keystone']['KEYSTONE_ROLE_NAME'] }} 15 | KEYSTONE_AUTH_URL: {{ pillar['keystone']['KEYSTONE_AUTH_URL'] }} 16 | KEYSTONE_IP: {{ pillar['keystone']['KEYSTONE_IP'] }} 17 | cmd.run: 18 | - name: sleep 10 && bash /usr/local/bin/keystone_init.sh && touch /etc/keystone-init.lock 19 | - require: 20 | - file: keystone-init 21 | - service: openstack-keystone 22 | - unless: test -f /etc/keystone-init.lock 23 | -------------------------------------------------------------------------------- /states/openstack-mitaka/keystone/server.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - openstack.keystone.init 3 | 4 | keystone-install: 5 | pkg.installed: 6 | - names: 7 | - openstack-keystone 8 | - python-keystoneclient 9 | - require: 10 | - file: rdo_repo 11 | 12 | /etc/keystone: 13 | file.recurse: 14 | - source: salt://openstack/keystone/files/config 15 | - user: keystone 16 | - group: keystone 17 | - template: jinja 18 | - defaults: 19 | KEYSTONE_ADMIN_TOKEN: {{ pillar['keystone']['KEYSTONE_ADMIN_TOKEN'] }} 20 | MYSQL_SERVER: {{ pillar['keystone']['MYSQL_SERVER'] }} 21 | KEYSTONE_DB_PASS: {{ pillar['keystone']['KEYSTONE_DB_PASS'] }} 22 | KEYSTONE_DB_USER: {{ pillar['keystone']['KEYSTONE_DB_USER'] }} 23 | KEYSTONE_DB_NAME: {{ pillar['keystone']['KEYSTONE_DB_NAME'] }} 24 | 25 | keystone-pki-setup: 26 | cmd.run: 27 | - name: keystone-manage pki_setup --keystone-user keystone --keystone-group keystone && chown -R keystone:keystone /etc/keystone/ssl && chmod -R o-rwx /etc/keystone/ssl 28 | - require: 29 | - pkg: keystone-install 30 | - unless: test -d /etc/keystone/ssl 31 | 32 | keystone-db-sync: 33 | cmd.run: 34 | - name: keystone-manage db_sync && touch /etc/keystone-datasync.lock && chown keystone:keystone /var/log/keystone/* 35 | - require: 36 | - mysql_grants: keystone-mysql 37 | - pkg: keystone-install 38 | - file: /etc/keystone 39 | - unless: test -f /etc/keystone-datasync.lock 40 | 41 | openstack-keystone: 42 | file.managed: 43 | - name: /etc/init.d/openstack-keystone 44 | - source: salt://openstack/keystone/files/openstack-keystone 45 | - user: root 46 | - group: root 47 | - mode: 755 48 | service.running: 49 | - enable: True 50 | - reload: True 51 | - require: 52 | - pkg: keystone-install 53 | - watch: 54 | - file: /etc/keystone 55 | - cmd: keystone-db-sync 56 | 57 | /root/keystone_admin: 58 | file.managed: 59 | - source: salt://openstack/keystone/files/keystone_admin 60 | - mode: 755 61 | - user: root 62 | - group: root 63 | - template: jinja 64 | - defaults: 65 | KEYSTONE_ADMIN_TENANT: {{ pillar['keystone']['KEYSTONE_ADMIN_TENANT'] }} 66 | KEYSTONE_ADMIN_USER: {{ pillar['keystone']['KEYSTONE_ADMIN_USER'] }} 67 | KEYSTONE_ADMIN_PASSWD: {{ 
pillar['keystone']['KEYSTONE_ADMIN_PASSWD'] }} 68 | KEYSTONE_AUTH_URL: {{ pillar['keystone']['KEYSTONE_AUTH_URL'] }} 69 | -------------------------------------------------------------------------------- /states/openstack-mitaka/mysql/cinder.sls: -------------------------------------------------------------------------------- 1 | cinder-mysql: 2 | mysql_database.present: 3 | - name: {{ pillar['cinder']['CINDER_DBNAME'] }} 4 | - require: 5 | - service: mysql-server 6 | mysql_user.present: 7 | - name: {{ pillar['cinder']['CINDER_USER'] }} 8 | - host: {{ pillar['cinder']['HOST_ALLOW'] }} 9 | - password: {{ pillar['cinder']['CINDER_PASS'] }} 10 | - require: 11 | - mysql_database: cinder-mysql 12 | mysql_grants.present: 13 | - grant: all 14 | - database: {{ pillar['cinder']['DB_ALLOW'] }} 15 | - user: {{ pillar['cinder']['CINDER_USER'] }} 16 | - host: {{ pillar['cinder']['HOST_ALLOW'] }} 17 | - require: 18 | - mysql_user: cinder-mysql 19 | -------------------------------------------------------------------------------- /states/openstack-mitaka/mysql/files/my.cnf: -------------------------------------------------------------------------------- 1 | # Example MySQL config file for medium systems. 2 | # 3 | # This is for a system with little memory (32M - 64M) where MySQL plays 4 | # an important part, or systems up to 128M where MySQL is used together with 5 | # other programs (such as a web server) 6 | # 7 | # MySQL programs look for option files in a set of 8 | # locations which depend on the deployment platform. 9 | # You can copy this option file to one of those 10 | # locations. For information about these locations, see: 11 | # http://dev.mysql.com/doc/mysql/en/option-files.html 12 | # 13 | # In this file, you can use all long options that a program supports. 14 | # If you want to know which options a program supports, run the program 15 | # with the "--help" option. 16 | 17 | # The following options will be passed to all MySQL clients 18 | [client] 19 | #password = your_password 20 | port = 3306 21 | socket = /var/lib/mysql/mysql.sock 22 | 23 | # Here follows entries for some specific programs 24 | 25 | # The MySQL server 26 | [mysqld] 27 | port = 3306 28 | socket = /var/lib/mysql/mysql.sock 29 | skip-locking 30 | key_buffer_size = 16M 31 | max_allowed_packet = 1M 32 | table_open_cache = 64 33 | sort_buffer_size = 512K 34 | net_buffer_length = 8K 35 | read_buffer_size = 256K 36 | read_rnd_buffer_size = 512K 37 | myisam_sort_buffer_size = 8M 38 | #For OpenStack 39 | default-storage-engine = innodb 40 | innodb_file_per_table 41 | collation-server = utf8_general_ci 42 | init-connect = 'SET NAMES utf8' 43 | character-set-server = utf8 44 | 45 | # Don't listen on a TCP/IP port at all. This can be a security enhancement, 46 | # if all processes that need to connect to mysqld run on the same host. 47 | # All interaction with mysqld must be made via Unix sockets or named pipes. 48 | # Note that using this option without enabling named pipes on Windows 49 | # (via the "enable-named-pipe" option) will render mysqld useless! 
50 | # 51 | #skip-networking 52 | 53 | # Replication Master Server (default) 54 | # binary logging is required for replication 55 | log-bin=mysql-bin 56 | 57 | # binary logging format - mixed recommended 58 | binlog_format=mixed 59 | 60 | # required unique id between 1 and 2^32 - 1 61 | # defaults to 1 if master-host is not set 62 | # but will not function as a master if omitted 63 | server-id = 1 64 | 65 | # Replication Slave (comment out master section to use this) 66 | # 67 | # To configure this host as a replication slave, you can choose between 68 | # two methods : 69 | # 70 | # 1) Use the CHANGE MASTER TO command (fully described in our manual) - 71 | # the syntax is: 72 | # 73 | # CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>, 74 | # MASTER_USER=<user>, MASTER_PASSWORD=<password> ; 75 | # 76 | # where you replace <host>, <user>, <password> by quoted strings and 77 | # <port> by the master's port number (3306 by default). 78 | # 79 | # Example: 80 | # 81 | # CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306, 82 | # MASTER_USER='joe', MASTER_PASSWORD='secret'; 83 | # 84 | # OR 85 | # 86 | # 2) Set the variables below. However, in case you choose this method, then 87 | # start replication for the first time (even unsuccessfully, for example 88 | # if you mistyped the password in master-password and the slave fails to 89 | # connect), the slave will create a master.info file, and any later 90 | # change in this file to the variables' values below will be ignored and 91 | # overridden by the content of the master.info file, unless you shutdown 92 | # the slave server, delete master.info and restart the slave server. 93 | # For that reason, you may want to leave the lines below untouched 94 | # (commented) and instead use CHANGE MASTER TO (see above) 95 | # 96 | # required unique id between 2 and 2^32 - 1 97 | # (and different from the master) 98 | # defaults to 2 if master-host is set 99 | # but will not function as a slave if omitted 100 | #server-id = 2 101 | # 102 | # The replication master for this slave - required 103 | #master-host = <hostname> 104 | # 105 | # The username the slave will use for authentication when connecting 106 | # to the master - required 107 | #master-user = <username> 108 | # 109 | # The password the slave will authenticate with when connecting to 110 | # the master - required 111 | #master-password = <password> 112 | # 113 | # The port the master is listening on.
114 | # optional - defaults to 3306 115 | #master-port = <port> 116 | # 117 | # binary logging - not required for slaves, but recommended 118 | #log-bin=mysql-bin 119 | 120 | # Uncomment the following if you are using InnoDB tables 121 | #innodb_data_home_dir = /var/lib/mysql 122 | #innodb_data_file_path = ibdata1:10M:autoextend 123 | #innodb_log_group_home_dir = /var/lib/mysql 124 | # You can set .._buffer_pool_size up to 50 - 80 % 125 | # of RAM but beware of setting memory usage too high 126 | #innodb_buffer_pool_size = 16M 127 | #innodb_additional_mem_pool_size = 2M 128 | # Set .._log_file_size to 25 % of buffer pool size 129 | #innodb_log_file_size = 5M 130 | #innodb_log_buffer_size = 8M 131 | #innodb_flush_log_at_trx_commit = 1 132 | #innodb_lock_wait_timeout = 50 133 | 134 | [mysqldump] 135 | quick 136 | max_allowed_packet = 16M 137 | 138 | [mysql] 139 | no-auto-rehash 140 | # Remove the next comment character if you are not familiar with SQL 141 | #safe-updates 142 | 143 | [myisamchk] 144 | key_buffer_size = 20M 145 | sort_buffer_size = 20M 146 | read_buffer = 2M 147 | write_buffer = 2M 148 | 149 | [mysqlhotcopy] 150 | interactive-timeout 151 | -------------------------------------------------------------------------------- /states/openstack-mitaka/mysql/glance.sls: -------------------------------------------------------------------------------- 1 | glance-mysql: 2 | mysql_database.present: 3 | - name: {{ pillar['glance']['GLANCE_DB_NAME'] }} 4 | - require: 5 | - service: mysql-server 6 | mysql_user.present: 7 | - name: {{ pillar['glance']['GLANCE_DB_USER'] }} 8 | - host: {{ pillar['glance']['HOST_ALLOW'] }} 9 | - password: {{ pillar['glance']['GLANCE_DB_PASS'] }} 10 | - require: 11 | - mysql_database: glance-mysql 12 | mysql_grants.present: 13 | - grant: all 14 | - database: {{ pillar['glance']['DB_ALLOW'] }} 15 | - user: {{ pillar['glance']['GLANCE_DB_USER'] }} 16 | - host: {{ pillar['glance']['HOST_ALLOW'] }} 17 | - require: 18 | - mysql_user: glance-mysql 19 | -------------------------------------------------------------------------------- /states/openstack-mitaka/mysql/init.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - openstack.mysql.keystone 3 | - openstack.mysql.glance 4 | - openstack.mysql.nova 5 | - openstack.mysql.neutron 6 | - openstack.mysql.cinder 7 | -------------------------------------------------------------------------------- /states/openstack-mitaka/mysql/keystone.sls: -------------------------------------------------------------------------------- 1 | keystone-mysql: 2 | mysql_database.present: 3 | - name: {{ pillar['keystone']['KEYSTONE_DB_NAME'] }} 4 | - require: 5 | - service: mysql-server 6 | mysql_user.present: 7 | - name: {{ pillar['keystone']['KEYSTONE_DB_USER'] }} 8 | - host: {{ pillar['keystone']['HOST_ALLOW'] }} 9 | - password: {{ pillar['keystone']['KEYSTONE_DB_PASS'] }} 10 | - require: 11 | - mysql_database: keystone-mysql 12 | mysql_grants.present: 13 | - grant: all 14 | - database: {{ pillar['keystone']['DB_ALLOW'] }} 15 | - user: {{ pillar['keystone']['KEYSTONE_DB_USER'] }} 16 | - host: {{ pillar['keystone']['HOST_ALLOW'] }} 17 | - require: 18 | - mysql_user: keystone-mysql 19 | -------------------------------------------------------------------------------- /states/openstack-mitaka/mysql/neutron.sls: -------------------------------------------------------------------------------- 1 | neutron-mysql: 2 | mysql_database.present: 3 | - name: {{ pillar['neutron']['NEUTRON_DB_NAME'] }} 4 | - require:
5 | - service: mysql-server 6 | mysql_user.present: 7 | - name: {{ pillar['neutron']['NEUTRON_DB_USER'] }} 8 | - host: {{ pillar['neutron']['HOST_ALLOW'] }} 9 | - password: {{ pillar['neutron']['NEUTRON_DB_PASS'] }} 10 | - require: 11 | - mysql_database: neutron-mysql 12 | mysql_grants.present: 13 | - grant: all 14 | - database: {{ pillar['neutron']['DB_ALLOW'] }} 15 | - user: {{ pillar['neutron']['NEUTRON_DB_USER'] }} 16 | - host: {{ pillar['neutron']['HOST_ALLOW'] }} 17 | - require: 18 | - mysql_user: neutron-mysql 19 | -------------------------------------------------------------------------------- /states/openstack-mitaka/mysql/nova.sls: -------------------------------------------------------------------------------- 1 | nova-mysql: 2 | mysql_database.present: 3 | - name: {{ pillar['nova']['NOVA_DB_NAME'] }} 4 | - require: 5 | - service: mysql-server 6 | mysql_user.present: 7 | - name: {{ pillar['nova']['NOVA_DB_USER'] }} 8 | - host: {{ pillar['nova']['HOST_ALLOW'] }} 9 | - password: {{ pillar['nova']['NOVA_DB_PASS'] }} 10 | - require: 11 | - mysql_database: nova-mysql 12 | mysql_grants.present: 13 | - grant: all 14 | - database: {{ pillar['nova']['DB_ALLOW'] }} 15 | - user: {{ pillar['nova']['NOVA_DB_USER'] }} 16 | - host: {{ pillar['nova']['HOST_ALLOW'] }} 17 | - require: 18 | - mysql_user: nova-mysql 19 | -------------------------------------------------------------------------------- /states/openstack-mitaka/mysql/server.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - openstack.mysql.init 3 | 4 | mysql-server: 5 | pkg.installed: 6 | - name: mysql-server 7 | 8 | file.managed: 9 | - name: /etc/my.cnf 10 | - source: salt://openstack/mysql/files/my.cnf 11 | 12 | service.running: 13 | - name: mysqld 14 | - enable: True 15 | - require: 16 | - pkg: mysql-server 17 | - watch: 18 | - file: mysql-server 19 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/config.sls: -------------------------------------------------------------------------------- 1 | /etc/neutron: 2 | file.recurse: 3 | - source: salt://openstack/neutron/files/config 4 | - user: neutron 5 | - group: neutron 6 | - template: jinja 7 | - defaults: 8 | MYSQL_SERVER: {{ pillar['neutron']['MYSQL_SERVER'] }} 9 | NEUTRON_IP: {{ pillar['neutron']['NEUTRON_IP'] }} 10 | NEUTRON_DB_NAME: {{ pillar['neutron']['NEUTRON_DB_NAME'] }} 11 | NEUTRON_DB_USER: {{ pillar['neutron']['NEUTRON_DB_USER'] }} 12 | NEUTRON_DB_PASS: {{ pillar['neutron']['NEUTRON_DB_PASS'] }} 13 | AUTH_KEYSTONE_HOST: {{ pillar['neutron']['AUTH_KEYSTONE_HOST'] }} 14 | AUTH_KEYSTONE_PORT: {{ pillar['neutron']['AUTH_KEYSTONE_PORT'] }} 15 | AUTH_KEYSTONE_PROTOCOL: {{ pillar['neutron']['AUTH_KEYSTONE_PROTOCOL'] }} 16 | AUTH_ADMIN_PASS: {{ pillar['neutron']['AUTH_ADMIN_PASS'] }} 17 | NOVA_URL: {{ pillar['neutron']['NOVA_URL'] }} 18 | NOVA_ADMIN_USER: {{ pillar['neutron']['NOVA_ADMIN_USER'] }} 19 | NOVA_ADMIN_PASS: {{ pillar['neutron']['NOVA_ADMIN_PASS'] }} 20 | NOVA_ADMIN_TENANT: {{ pillar['neutron']['NOVA_ADMIN_TENANT'] }} 21 | NOVA_ADMIN_AUTH_URL: {{ pillar['neutron']['NOVA_ADMIN_AUTH_URL'] }} 22 | RABBITMQ_HOST: {{ pillar['rabbit']['RABBITMQ_HOST'] }} 23 | RABBITMQ_PORT: {{ pillar['rabbit']['RABBITMQ_PORT'] }} 24 | RABBITMQ_USER: {{ pillar['rabbit']['RABBITMQ_USER'] }} 25 | RABBITMQ_PASS: {{ pillar['rabbit']['RABBITMQ_PASS'] }} 26 | AUTH_NEUTRON_ADMIN_TENANT: {{ pillar['neutron']['AUTH_NEUTRON_ADMIN_TENANT'] }} 27 | AUTH_NEUTRON_ADMIN_USER: {{ 
pillar['neutron']['AUTH_NEUTRON_ADMIN_USER'] }} 28 | AUTH_NEUTRON_ADMIN_PASS: {{ pillar['neutron']['AUTH_NEUTRON_ADMIN_PASS'] }} 29 | VM_INTERFACE: {{ pillar['neutron']['VM_INTERFACE'] }} 30 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/dhcp_agent.sls: -------------------------------------------------------------------------------- 1 | openstack-neutron-dhcp-agent: 2 | file.managed: 3 | - name: /etc/init.d/openstack-neutron-dhcp-agent 4 | - source: salt://openstack/neutron/files/openstack-neutron-dhcp-agent 5 | - mode: 755 6 | - user: root 7 | - group: root 8 | cmd.run: 9 | - name: chkconfig --add openstack-neutron-dhcp-agent 10 | - unless: chkconfig --list | grep openstack-neutron-dhcp-agent 11 | - require: 12 | - file: openstack-neutron-dhcp-agent 13 | service.disabled: 14 | - enable: False 15 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/dhcp_agent.ini: -------------------------------------------------------------------------------- 1 | [DEFAULT] 2 | # Show debugging output in log (sets DEBUG log level output) 3 | debug = true 4 | 5 | # The DHCP agent will resync its state with Neutron to recover from any 6 | # transient notification or rpc errors. The interval is number of 7 | # seconds between attempts. 8 | # resync_interval = 5 9 | 10 | # The DHCP agent requires an interface driver be set. Choose the one that best 11 | # matches your plugin. 12 | # interface_driver = 13 | 14 | # Example of interface_driver option for OVS based plugins(OVS, Ryu, NEC, NVP, 15 | # BigSwitch/Floodlight) 16 | # interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver 17 | 18 | # Name of Open vSwitch bridge to use 19 | # ovs_integration_bridge = br-int 20 | 21 | # Use veth for an OVS interface or not. 22 | # Support kernels with limited namespace support 23 | # (e.g. RHEL 6.5) so long as ovs_use_veth is set to True. 24 | # ovs_use_veth = False 25 | 26 | # Example of interface_driver option for LinuxBridge 27 | interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver 28 | 29 | # The agent can use other DHCP drivers. Dnsmasq is the simplest and requires 30 | # no additional setup of the DHCP server. 31 | dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq 32 | 33 | # Allow overlapping IP (Must have kernel build with CONFIG_NET_NS=y and 34 | # iproute2 package that supports namespaces). 35 | use_namespaces = True 36 | 37 | # The DHCP server can assist with providing metadata support on isolated 38 | # networks. Setting this value to True will cause the DHCP server to append 39 | # specific host routes to the DHCP request. The metadata service will only 40 | # be activated when the subnet does not contain any router port. The guest 41 | # instance must be configured to request host routes via DHCP (Option 121). 42 | # enable_isolated_metadata = False 43 | 44 | # Allows for serving metadata requests coming from a dedicated metadata 45 | # access network whose cidr is 169.254.169.254/16 (or larger prefix), and 46 | # is connected to a Neutron router from which the VMs send metadata 47 | # request. In this case DHCP Option 121 will not be injected in VMs, as 48 | # they will be able to reach 169.254.169.254 through a router. 49 | # This option requires enable_isolated_metadata = True 50 | # enable_metadata_network = False 51 | 52 | # Number of threads to use during sync process. 
Should not exceed connection 53 | # pool size configured on server. 54 | # num_sync_threads = 4 55 | 56 | # Location to store DHCP server config files 57 | # dhcp_confs = $state_path/dhcp 58 | 59 | # Domain to use for building the hostnames 60 | # dhcp_domain = openstacklocal 61 | 62 | # Override the default dnsmasq settings with this file 63 | # dnsmasq_config_file = 64 | 65 | # Comma-separated list of DNS servers which will be used by dnsmasq 66 | # as forwarders. 67 | # dnsmasq_dns_servers = 68 | 69 | # Limit number of leases to prevent a denial-of-service. 70 | # dnsmasq_lease_max = 16777216 71 | 72 | # Location to DHCP lease relay UNIX domain socket 73 | # dhcp_lease_relay_socket = $state_path/dhcp/lease_relay 74 | 75 | # Location of Metadata Proxy UNIX domain socket 76 | # metadata_proxy_socket = $state_path/metadata_proxy 77 | 78 | # dhcp_delete_namespaces, which is false by default, can be set to True if 79 | # namespaces can be deleted cleanly on the host running the dhcp agent. 80 | # Do not enable this until you understand the problem with the Linux iproute 81 | # utility mentioned in https://bugs.launchpad.net/neutron/+bug/1052535 and 82 | # you are sure that your version of iproute does not suffer from the problem. 83 | # If True, namespaces will be deleted when a dhcp server is disabled. 84 | # dhcp_delete_namespaces = False 85 | 86 | # Timeout for ovs-vsctl commands. 87 | # If the timeout expires, ovs commands will fail with ALARMCLOCK error. 88 | # ovs_vsctl_timeout = 10 89 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/fwaas_driver.ini: -------------------------------------------------------------------------------- 1 | [fwaas] 2 | #driver = neutron.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver 3 | #enabled = True 4 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/l3_agent.ini: -------------------------------------------------------------------------------- 1 | [DEFAULT] 2 | # Show debugging output in log (sets DEBUG log level output) 3 | # debug = False 4 | 5 | # L3 requires that an interface driver be set. Choose the one that best 6 | # matches your plugin. 7 | # interface_driver = 8 | 9 | # Example of interface_driver option for OVS based plugins (OVS, Ryu, NEC) 10 | # that supports L3 agent 11 | # interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver 12 | 13 | # Use veth for an OVS interface or not. 14 | # Support kernels with limited namespace support 15 | # (e.g. RHEL 6.5) so long as ovs_use_veth is set to True. 16 | # ovs_use_veth = False 17 | 18 | # Example of interface_driver option for LinuxBridge 19 | # interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver 20 | 21 | # Allow overlapping IP (Must have kernel build with CONFIG_NET_NS=y and 22 | # iproute2 package that supports namespaces). 23 | # use_namespaces = True 24 | 25 | # If use_namespaces is set as False then the agent can only configure one router. 26 | 27 | # This is done by setting the specific router_id. 28 | # router_id = 29 | 30 | # When external_network_bridge is set, each L3 agent can be associated 31 | # with no more than one external network. This value should be set to the UUID 32 | # of that external network. To allow L3 agent support multiple external 33 | # networks, both the external_network_bridge and gateway_external_network_id 34 | # must be left empty. 
35 | # gateway_external_network_id = 36 | 37 | # Indicates that this L3 agent should also handle routers that do not have 38 | # an external network gateway configured. This option should be True only 39 | # for a single agent in a Neutron deployment, and may be False for all agents 40 | # if all routers must have an external network gateway 41 | # handle_internal_only_routers = True 42 | 43 | # Name of bridge used for external network traffic. This should be set to 44 | # empty value for the linux bridge. when this parameter is set, each L3 agent 45 | # can be associated with no more than one external network. 46 | # external_network_bridge = br-ex 47 | 48 | # TCP Port used by Neutron metadata server 49 | # metadata_port = 9697 50 | 51 | # Send this many gratuitous ARPs for HA setup. Set it below or equal to 0 52 | # to disable this feature. 53 | # send_arp_for_ha = 0 54 | 55 | # seconds between re-sync routers' data if needed 56 | # periodic_interval = 40 57 | 58 | # seconds to start to sync routers' data after 59 | # starting agent 60 | # periodic_fuzzy_delay = 5 61 | 62 | # enable_metadata_proxy, which is true by default, can be set to False 63 | # if the Nova metadata server is not available 64 | # enable_metadata_proxy = True 65 | 66 | # Location of Metadata Proxy UNIX domain socket 67 | # metadata_proxy_socket = $state_path/metadata_proxy 68 | 69 | # router_delete_namespaces, which is false by default, can be set to True if 70 | # namespaces can be deleted cleanly on the host running the L3 agent. 71 | # Do not enable this until you understand the problem with the Linux iproute 72 | # utility mentioned in https://bugs.launchpad.net/neutron/+bug/1052535 and 73 | # you are sure that your version of iproute does not suffer from the problem. 74 | # If True, namespaces will be deleted when a router is destroyed. 75 | # router_delete_namespaces = False 76 | 77 | # Timeout for ovs-vsctl commands. 78 | # If the timeout expires, ovs commands will fail with ALARMCLOCK error. 79 | # ovs_vsctl_timeout = 10 80 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/lbaas_agent.ini: -------------------------------------------------------------------------------- 1 | [DEFAULT] 2 | # Show debugging output in log (sets DEBUG log level output). 3 | # debug = False 4 | 5 | # The LBaaS agent will resync its state with Neutron to recover from any 6 | # transient notification or rpc errors. The interval is number of 7 | # seconds between attempts. 8 | # periodic_interval = 10 9 | 10 | # LBaas requires an interface driver be set. Choose the one that best 11 | # matches your plugin. 12 | # interface_driver = 13 | 14 | # Example of interface_driver option for OVS based plugins (OVS, Ryu, NEC, NVP, 15 | # BigSwitch/Floodlight) 16 | # interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver 17 | 18 | # Use veth for an OVS interface or not. 19 | # Support kernels with limited namespace support 20 | # (e.g. RHEL 6.5) so long as ovs_use_veth is set to True. 21 | # ovs_use_veth = False 22 | 23 | # Example of interface_driver option for LinuxBridge 24 | interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver 25 | 26 | # The agent requires drivers to manage the loadbalancer. HAProxy is the opensource version. 
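# NOTE (annotation, not upstream text): the HaproxyNSDriver selected below needs the haproxy package on the agent host; in this repo, states/openstack-mitaka/neutron/lbaas_agent.sls installs it (pkg.installed: haproxy).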
27 | # Multiple device drivers reflecting different service providers could be specified: 28 | # device_driver = path.to.provider1.driver.Driver 29 | # device_driver = path.to.provider2.driver.Driver 30 | # Default is: 31 | device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver 32 | 33 | [haproxy] 34 | # Location to store config and state files 35 | # loadbalancer_state_path = $state_path/lbaas 36 | 37 | # The user group 38 | # user_group = nogroup 39 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/metadata_agent.ini: -------------------------------------------------------------------------------- 1 | [DEFAULT] 2 | # Show debugging output in log (sets DEBUG log level output) 3 | # debug = True 4 | 5 | # The Neutron user information for accessing the Neutron API. 6 | auth_url = http://localhost:5000/v2.0 7 | auth_region = RegionOne 8 | # Turn off verification of the certificate for ssl 9 | # auth_insecure = False 10 | # Certificate Authority public key (CA cert) file for ssl 11 | # auth_ca_cert = 12 | admin_tenant_name = %SERVICE_TENANT_NAME% 13 | admin_user = %SERVICE_USER% 14 | admin_password = %SERVICE_PASSWORD% 15 | 16 | # Network service endpoint type to pull from the keystone catalog 17 | # endpoint_type = adminURL 18 | 19 | # IP address used by Nova metadata server 20 | # nova_metadata_ip = 127.0.0.1 21 | 22 | # TCP Port used by Nova metadata server 23 | # nova_metadata_port = 8775 24 | 25 | # When proxying metadata requests, Neutron signs the Instance-ID header with a 26 | # shared secret to prevent spoofing. You may select any string for a secret, 27 | # but it must match here and in the configuration used by the Nova Metadata 28 | # Server. NOTE: Nova uses a different key: neutron_metadata_proxy_shared_secret 29 | # metadata_proxy_shared_secret = 30 | 31 | # Location of Metadata Proxy UNIX domain socket 32 | # metadata_proxy_socket = $state_path/metadata_proxy 33 | 34 | # Number of separate worker processes for metadata server 35 | # metadata_workers = 0 36 | 37 | # Number of backlog requests to configure the metadata server socket with 38 | # metadata_backlog = 128 39 | 40 | # URL to connect to the cache backend. 41 | # Example of URL using memory caching backend 42 | # with ttl set to 5 seconds: cache_url = memory://?default_ttl=5 43 | # default_ttl=0 parameter will cause cache entries to never expire. 44 | # Otherwise default_ttl specifies time in seconds a cache entry is valid for. 45 | # No cache is used in case no value is passed. 46 | # cache_url = 47 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/plugins.ini: -------------------------------------------------------------------------------- 1 | [ml2] 2 | # (ListOpt) List of network type driver entrypoints to be loaded from 3 | # the neutron.ml2.type_drivers namespace. 4 | # 5 | type_drivers = local,flat,vlan,gre,vxlan 6 | # Example: type_drivers = flat,vlan,gre,vxlan 7 | 8 | # (ListOpt) Ordered list of network_types to allocate as tenant 9 | # networks. The default value 'local' is useful for single-box testing 10 | # but provides no connectivity between hosts. 11 | # 12 | tenant_network_types = flat,vlan,gre,vxlan 13 | # Example: tenant_network_types = vlan,gre,vxlan 14 | 15 | # (ListOpt) Ordered list of networking mechanism driver entrypoints 16 | # to be loaded from the neutron.ml2.mechanism_drivers namespace. 
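# NOTE (annotation, not upstream text): both drivers are listed below, but the agents this repo actually deploys (see dhcp_agent.ini and linuxbridge_agent.sls) use the Linux bridge path; the openvswitch entry appears unused by the shipped states.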
17 | mechanism_drivers = linuxbridge,openvswitch 18 | # Example: mechanism_drivers = openvswitch,mlnx 19 | # Example: mechanism_drivers = arista 20 | # Example: mechanism_drivers = cisco,logger 21 | # Example: mechanism_drivers = openvswitch,brocade 22 | # Example: mechanism_drivers = linuxbridge,brocade 23 | 24 | [ml2_type_flat] 25 | # (ListOpt) List of physical_network names with which flat networks 26 | # can be created. Use * to allow flat networks with arbitrary 27 | # physical_network names. 28 | # 29 | flat_networks = physnet1 30 | # Example:flat_networks = physnet1,physnet2 31 | # Example:flat_networks = * 32 | 33 | [ml2_type_vlan] 34 | # (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples 35 | # specifying physical_network names usable for VLAN provider and 36 | # tenant networks, as well as ranges of VLAN tags on each 37 | # physical_network available for allocation as tenant networks. 38 | # 39 | # network_vlan_ranges = 40 | # Example: network_vlan_ranges = physnet1:1000:2999,physnet2 41 | 42 | [ml2_type_gre] 43 | # (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation 44 | # tunnel_id_ranges = 45 | 46 | [ml2_type_vxlan] 47 | # (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating 48 | # ranges of VXLAN VNI IDs that are available for tenant network allocation. 49 | # 50 | # vni_ranges = 51 | 52 | # (StrOpt) Multicast group for the VXLAN interface. When configured, will 53 | # enable sending all broadcast traffic to this multicast group. When left 54 | # unconfigured, will disable multicast VXLAN mode. 55 | # 56 | # vxlan_group = 57 | # Example: vxlan_group = 239.1.1.1 58 | 59 | [securitygroup] 60 | # Controls if neutron security group is enabled or not. 61 | # It should be false when you use nova security group. 62 | enable_security_group = True 63 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/plugins/linuxbridge/linuxbridge_conf.ini: -------------------------------------------------------------------------------- 1 | [vlans] 2 | # (StrOpt) Type of network to allocate for tenant networks. The 3 | # default value 'local' is useful only for single-box testing and 4 | # provides no connectivity between hosts. You MUST change this to 5 | # 'vlan' and configure network_vlan_ranges below in order for tenant 6 | # networks to provide connectivity between hosts. Set to 'none' to 7 | # disable creation of tenant networks. 8 | # 9 | # tenant_network_type = local 10 | # Example: tenant_network_type = vlan 11 | 12 | # (ListOpt) Comma-separated list of 13 | # [<physical_network>[:<vlan_min>:<vlan_max>]] tuples enumerating ranges 14 | # of VLAN IDs on named physical networks that are available for 15 | # allocation. All physical networks listed are available for flat and 16 | # VLAN provider network creation. Specified ranges of VLAN IDs are 17 | # available for tenant network allocation if tenant_network_type is 18 | # 'vlan'. If empty, only local networks may be created. 19 | # 20 | network_vlan_ranges = physnet1 21 | # Example: network_vlan_ranges = physnet1:1000:2999 22 | 23 | [linux_bridge] 24 | # (ListOpt) Comma-separated list of 25 | # <physical_network>:<physical_interface> tuples mapping physical 26 | # network names to the agent's node-specific physical network 27 | # interfaces to be used for flat and VLAN networks. All physical 28 | # networks listed in network_vlan_ranges on the server should have 29 | # mappings to appropriate interfaces on each agent. 
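# NOTE (annotation, not upstream text): {{VM_INTERFACE}} below is a Jinja placeholder that Salt fills from pillar['neutron']['VM_INTERFACE'] when neutron/config.sls recurses this config tree; set it to the NIC carrying VM traffic, e.g. eth1 (illustrative value, matching the example underneath).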
30 | # 31 | physical_interface_mappings = physnet1:{{VM_INTERFACE}} 32 | # Example: physical_interface_mappings = physnet1:eth1 33 | 34 | [vxlan] 35 | # (BoolOpt) enable VXLAN on the agent 36 | # VXLAN support can be enabled when agent is managed by ml2 plugin using 37 | # linuxbridge mechanism driver. Useless if set while using linuxbridge plugin. 38 | # enable_vxlan = False 39 | # 40 | # (IntOpt) use specific TTL for vxlan interface protocol packets 41 | # ttl = 42 | # 43 | # (IntOpt) use specific TOS for vxlan interface protocol packets 44 | # tos = 45 | # 46 | # (StrOpt) multicast group to use for broadcast emulation. 47 | # This group must be the same on all the agents. 48 | # vxlan_group = 224.0.0.1 49 | # 50 | # (StrOpt) Local IP address to use for VXLAN endpoints (required) 51 | # local_ip = 52 | # 53 | # (BoolOpt) Flag to enable l2population extension. This option should be used 54 | # in conjunction with ml2 plugin l2population mechanism driver (in that case, 55 | # both linuxbridge and l2population mechanism drivers should be loaded). 56 | # It enables plugin to populate VXLAN forwarding table, in order to limit 57 | # the use of broadcast emulation (multicast will be turned off if kernel and 58 | # iproute2 supports unicast flooding - requires 3.11 kernel and iproute2 3.10) 59 | # l2_population = False 60 | 61 | [agent] 62 | # Agent's polling interval in seconds 63 | # polling_interval = 2 64 | 65 | # (BoolOpt) Enable server RPC compatibility with old (pre-havana) 66 | # agents. 67 | # 68 | # rpc_support_old_agents = False 69 | # Example: rpc_support_old_agents = True 70 | 71 | [securitygroup] 72 | # Firewall driver for realizing neutron security group function 73 | # firewall_driver = neutron.agent.firewall.NoopFirewallDriver 74 | firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver 75 | # Example: firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver 76 | 77 | # Controls if neutron security group is enabled or not. 78 | # It should be false when you use nova security group. 79 | enable_security_group = True 80 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/plugins/ml2/ml2_conf.ini: -------------------------------------------------------------------------------- 1 | [ml2] 2 | # (ListOpt) List of network type driver entrypoints to be loaded from 3 | # the neutron.ml2.type_drivers namespace. 4 | # 5 | type_drivers = local,flat,vlan,gre,vxlan 6 | # Example: type_drivers = flat,vlan,gre,vxlan 7 | 8 | # (ListOpt) Ordered list of network_types to allocate as tenant 9 | # networks. The default value 'local' is useful for single-box testing 10 | # but provides no connectivity between hosts. 11 | # 12 | tenant_network_types = flat,vlan,gre,vxlan 13 | # Example: tenant_network_types = vlan,gre,vxlan 14 | 15 | # (ListOpt) Ordered list of networking mechanism driver entrypoints 16 | # to be loaded from the neutron.ml2.mechanism_drivers namespace. 17 | mechanism_drivers = linuxbridge,openvswitch 18 | # Example: mechanism_drivers = openvswitch,mlnx 19 | # Example: mechanism_drivers = arista 20 | # Example: mechanism_drivers = cisco,logger 21 | # Example: mechanism_drivers = openvswitch,brocade 22 | # Example: mechanism_drivers = linuxbridge,brocade 23 | 24 | [ml2_type_flat] 25 | # (ListOpt) List of physical_network names with which flat networks 26 | # can be created. Use * to allow flat networks with arbitrary 27 | # physical_network names. 
28 | # 29 | flat_networks = physnet1 30 | # Example:flat_networks = physnet1,physnet2 31 | # Example:flat_networks = * 32 | 33 | [ml2_type_vlan] 34 | # (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples 35 | # specifying physical_network names usable for VLAN provider and 36 | # tenant networks, as well as ranges of VLAN tags on each 37 | # physical_network available for allocation as tenant networks. 38 | # 39 | # network_vlan_ranges = 40 | # Example: network_vlan_ranges = physnet1:1000:2999,physnet2 41 | 42 | [ml2_type_gre] 43 | # (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation 44 | # tunnel_id_ranges = 45 | 46 | [ml2_type_vxlan] 47 | # (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating 48 | # ranges of VXLAN VNI IDs that are available for tenant network allocation. 49 | # 50 | # vni_ranges = 51 | 52 | # (StrOpt) Multicast group for the VXLAN interface. When configured, will 53 | # enable sending all broadcast traffic to this multicast group. When left 54 | # unconfigured, will disable multicast VXLAN mode. 55 | # 56 | # vxlan_group = 57 | # Example: vxlan_group = 239.1.1.1 58 | 59 | [securitygroup] 60 | # Controls if neutron security group is enabled or not. 61 | # It should be false when you use nova security group. 62 | enable_security_group = True 63 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/plugins/ml2/ml2_conf_arista.ini: -------------------------------------------------------------------------------- 1 | # Defines configuration options specific for Arista ML2 Mechanism driver 2 | 3 | [ml2_arista] 4 | # (StrOpt) EOS IP address. This is required field. If not set, all 5 | # communications to Arista EOS will fail 6 | # 7 | # eapi_host = 8 | # Example: eapi_host = 192.168.0.1 9 | # 10 | # (StrOpt) EOS command API username. This is required field. 11 | # if not set, all communications to Arista EOS will fail. 12 | # 13 | # eapi_username = 14 | # Example: arista_eapi_username = admin 15 | # 16 | # (StrOpt) EOS command API password. This is required field. 17 | # if not set, all communications to Arista EOS will fail. 18 | # 19 | # eapi_password = 20 | # Example: eapi_password = my_password 21 | # 22 | # (StrOpt) Defines if hostnames are sent to Arista EOS as FQDNs 23 | # ("node1.domain.com") or as short names ("node1"). This is 24 | # optional. If not set, a value of "True" is assumed. 25 | # 26 | # use_fqdn = 27 | # Example: use_fqdn = True 28 | # 29 | # (IntOpt) Sync interval in seconds between Neutron plugin and EOS. 30 | # This field defines how often the synchronization is performed. 31 | # This is an optional field. If not set, a value of 180 seconds 32 | # is assumed. 33 | # 34 | # sync_interval = 35 | # Example: sync_interval = 60 36 | # 37 | # (StrOpt) Defines Region Name that is assigned to this OpenStack Controller. 38 | # This is useful when multiple OpenStack/Neutron controllers are 39 | # managing the same Arista HW clusters. Note that this name must 40 | # match with the region name registered (or known) to keystone 41 | # service. Authentication with Keystone is performed by EOS. 42 | # This is optional. If not set, a value of "RegionOne" is assumed. 
43 | # 44 | # region_name = 45 | # Example: region_name = RegionOne 46 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/plugins/ml2/ml2_conf_brocade.ini: -------------------------------------------------------------------------------- 1 | [ml2_brocade] 2 | # username = <mgmt admin username> 3 | # password = <mgmt admin password> 4 | # address = <switch mgmt ip address> 5 | # ostype = NOS 6 | # osversion = autodetect | n.n.n 7 | # physical_networks = physnet1,physnet2 8 | # 9 | # Example: 10 | # username = admin 11 | # password = password 12 | # address = 10.24.84.38 13 | # ostype = NOS 14 | # osversion = 4.1.1 15 | # physical_networks = physnet1,physnet2 16 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/plugins/ml2/ml2_conf_cisco.ini: -------------------------------------------------------------------------------- 1 | [ml2_cisco] 2 | 3 | # (StrOpt) A short prefix to prepend to the VLAN number when creating a 4 | # VLAN interface. For example, if an interface is being created for 5 | # VLAN 2001 it will be named 'q-2001' using the default prefix. 6 | # 7 | # vlan_name_prefix = q- 8 | # Example: vlan_name_prefix = vnet- 9 | 10 | # (BoolOpt) A flag to enable round robin scheduling of routers for SVI. 11 | # svi_round_robin = False 12 | 13 | # 14 | # (StrOpt) The name of the physical_network managed via the Cisco Nexus Switch. 15 | # This string value must be present in the ml2_conf.ini network_vlan_ranges 16 | # variable. 17 | # 18 | # managed_physical_network = 19 | # Example: managed_physical_network = physnet1 20 | 21 | # Cisco Nexus Switch configurations. 22 | # Each switch to be managed by Openstack Neutron must be configured here. 23 | # 24 | # Cisco Nexus Switch Format. 25 | # [ml2_mech_cisco_nexus:<IP address of switch>] 26 | # <hostname>=<intf_type:port> (1) 27 | # ssh_port=<ssh port> (2) 28 | # username=<credential username> (3) 29 | # password=<credential password> (4) 30 | # 31 | # (1) For each host connected to a port on the switch, specify the hostname 32 | # and the Nexus physical port (interface) it is connected to. 33 | # (2) The TCP port for connecting via SSH to manage the switch. This is 34 | # port number 22 unless the switch has been configured otherwise. 35 | # (3) The username for logging into the switch to manage it. 36 | # (4) The password for logging into the switch to manage it. 37 | # 38 | # Example: 39 | # [ml2_mech_cisco_nexus:1.1.1.1] 40 | # compute1=1/1 41 | # compute2=1/2 42 | # ssh_port=22 43 | # username=admin 44 | # password=mySecretPassword 45 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/plugins/ml2/ml2_conf_mlnx.ini: -------------------------------------------------------------------------------- 1 | [eswitch] 2 | # (StrOpt) Type of Network Interface to allocate for VM: 3 | # mlnx_direct or hostdev according to libvirt terminology 4 | # vnic_type = mlnx_direct 5 | # (BoolOpt) Enable server compatibility with old nova 6 | # apply_profile_patch = False 7 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/plugins/ml2/ml2_conf_ncs.ini: -------------------------------------------------------------------------------- 1 | # Defines configuration options specific to the Tail-f NCS Mechanism Driver 2 | 3 | [ml2_ncs] 4 | # (StrOpt) Tail-f NCS HTTP endpoint for REST access to the OpenStack 5 | # subtree. 6 | # If this is not set then no HTTP requests will be made. 
7 | # 8 | # url = 9 | # Example: url = http://ncs/api/running/services/openstack 10 | 11 | # (StrOpt) Username for HTTP basic authentication to NCS. 12 | # This is an optional parameter. If unspecified then no authentication is used. 13 | # 14 | # username = 15 | # Example: username = admin 16 | 17 | # (StrOpt) Password for HTTP basic authentication to NCS. 18 | # This is an optional parameter. If unspecified then no authentication is used. 19 | # 20 | # password = 21 | # Example: password = admin 22 | 23 | # (IntOpt) Timeout in seconds to wait for NCS HTTP request completion. 24 | # This is an optional parameter, default value is 10 seconds. 25 | # 26 | # timeout = 27 | # Example: timeout = 15 28 | 29 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/plugins/ml2/ml2_conf_odl.ini: -------------------------------------------------------------------------------- 1 | # Configuration for the OpenDaylight MechanismDriver 2 | 3 | [ml2_odl] 4 | # (StrOpt) OpenDaylight REST URL 5 | # If this is not set then no HTTP requests will be made. 6 | # 7 | # url = 8 | # Example: url = http://192.168.56.1:8080/controller/nb/v2/neutron 9 | 10 | # (StrOpt) Username for HTTP basic authentication to ODL. 11 | # 12 | # username = 13 | # Example: username = admin 14 | 15 | # (StrOpt) Password for HTTP basic authentication to ODL. 16 | # 17 | # password = 18 | # Example: password = admin 19 | 20 | # (IntOpt) Timeout in seconds to wait for ODL HTTP request completion. 21 | # This is an optional parameter, default value is 10 seconds. 22 | # 23 | # timeout = 10 24 | # Example: timeout = 15 25 | 26 | # (IntOpt) Timeout in minutes to wait for a Tomcat session timeout. 27 | # This is an optional parameter, default value is 30 minutes. 28 | # 29 | # session_timeout = 30 30 | # Example: session_timeout = 60 31 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/plugins/ml2/ml2_conf_ofa.ini: -------------------------------------------------------------------------------- 1 | # Defines configuration options specific to the OpenFlow Agent Mechanism Driver 2 | 3 | [ovs] 4 | # Please refer to configuration options to the OpenvSwitch 5 | 6 | [agent] 7 | # (IntOpt) Number of seconds to retry acquiring an Open vSwitch datapath. 8 | # This is an optional parameter, default value is 60 seconds. 9 | # 10 | # get_datapath_retry_times = 11 | # Example: get_datapath_retry_times = 30 12 | 13 | # Please refer to configuration options to the OpenvSwitch else the above. 14 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/plugins/ml2/restproxy.ini: -------------------------------------------------------------------------------- 1 | # Config file for neutron-proxy-plugin. 
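# NOTE (annotation, not upstream text): this BigSwitch/Floodlight restproxy sample ships for reference only; with mechanism_drivers = linuxbridge,openvswitch in ml2_conf.ini, nothing in this repo loads it.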
2 | 3 | [restproxy] 4 | # All configuration for this plugin is in section '[restproxy]' 5 | # 6 | # The following parameters are supported: 7 | # servers : <host:port>[,<host:port>]* (Error if not set) 8 | # server_auth : <username:password> (default: no auth) 9 | # server_ssl : True | False (default: True) 10 | # ssl_cert_directory : <path> (default: /etc/neutron/plugins/bigswitch/ssl) 11 | # no_ssl_validation : True | False (default: False) 12 | # ssl_sticky : True | False (default: True) 13 | # sync_data : True | False (default: False) 14 | # auto_sync_on_failure : True | False (default: True) 15 | # server_timeout : <seconds> (default: 10 seconds) 16 | # neutron_id : <string> (default: neutron-<hostname>) 17 | # add_meta_server_route : True | False (default: True) 18 | # thread_pool_size : <int> (default: 4) 19 | 20 | # A comma separated list of BigSwitch or Floodlight servers and port numbers. The plugin proxies the requests to the BigSwitch/Floodlight server, which performs the networking configuration. Note that only one server is needed per deployment, but you may wish to deploy multiple servers to support failover. 21 | servers=localhost:8080 22 | 23 | # The username and password for authenticating against the BigSwitch or Floodlight controller. 24 | # server_auth=username:password 25 | 26 | # Use SSL when connecting to the BigSwitch or Floodlight controller. 27 | # server_ssl=True 28 | 29 | # Directory which contains the ca_certs and host_certs to be used to validate 30 | # controller certificates. 31 | # ssl_cert_directory=/etc/neutron/plugins/bigswitch/ssl/ 32 | 33 | # If a certificate does not exist for a controller, trust and store the first 34 | # certificate received for that controller and use it to validate future 35 | # connections to that controller. 36 | # ssl_sticky=True 37 | 38 | # Do not validate the controller certificates for SSL 39 | # Warning: This will not provide protection against man-in-the-middle attacks 40 | # no_ssl_validation=False 41 | 42 | # Sync data on connect 43 | # sync_data=False 44 | 45 | # If neutron fails to create a resource because the backend controller 46 | # doesn't know of a dependency, automatically trigger a full data 47 | # synchronization to the controller. 48 | # auto_sync_on_failure=True 49 | 50 | # Maximum number of seconds to wait for proxy request to connect and complete. 
51 | # server_timeout=10 52 | 53 | # User defined identifier for this Neutron deployment 54 | # neutron_id = 55 | 56 | # Flag to decide if a route to the metadata server should be injected into the VM 57 | # add_meta_server_route = True 58 | 59 | # Number of threads to use to handle large volumes of port creation requests 60 | # thread_pool_size = 4 61 | 62 | [nova] 63 | # Specify the VIF_TYPE that will be controlled on the Nova compute instances 64 | # options: ivs or ovs 65 | # default: ovs 66 | # vif_type = ovs 67 | 68 | # Overrides for vif types based on nova compute node host IDs 69 | # Comma separated list of host IDs to fix to a specific VIF type 70 | # The VIF type is taken from the end of the configuration item 71 | # node_override_vif_<vif_type> 72 | # For example, the following would set the VIF type to IVS for 73 | # host-id1 and host-id2 74 | # node_override_vif_ivs=host-id1,host-id2 75 | 76 | [router] 77 | # Specify the default router rules installed in newly created tenant routers 78 | # Specify multiple times for multiple rules 79 | # Format is <tenant>:<source>:<destination>:<action> 80 | # Optionally, a comma-separated list of nexthops may be included after <action> 81 | # Use an * to specify default for all tenants 82 | # Default is any any allow for all tenants 83 | # tenant_default_router_rule=*:any:any:permit 84 | 85 | # Maximum number of rules that a single router may have 86 | # Default is 200 87 | # max_router_rules=200 88 | 89 | [restproxyagent] 90 | 91 | # Specify the name of the bridge used on compute nodes 92 | # for attachment. 93 | # Default: br-int 94 | # integration_bridge=br-int 95 | 96 | # Change the frequency of polling by the restproxy agent. 97 | # Value is seconds 98 | # Default: 5 99 | # polling_interval=5 100 | 101 | # Virtual switch type on the compute node. 102 | # Options: ovs or ivs 103 | # Default: ovs 104 | # virtual_switch_type = ovs 105 | 106 | [securitygroup] 107 | # Controls if neutron security group is enabled or not. 108 | # It should be false when you use nova security group. 
109 | # enable_security_group = True 110 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/policy.json: -------------------------------------------------------------------------------- 1 | { 2 | "context_is_admin": "role:admin", 3 | "admin_or_owner": "rule:context_is_admin or tenant_id:%(tenant_id)s", 4 | "admin_or_network_owner": "rule:context_is_admin or tenant_id:%(network:tenant_id)s", 5 | "admin_only": "rule:context_is_admin", 6 | "regular_user": "", 7 | "shared": "field:networks:shared=True", 8 | "shared_firewalls": "field:firewalls:shared=True", 9 | "external": "field:networks:router:external=True", 10 | "default": "rule:admin_or_owner", 11 | 12 | "subnets:private:read": "rule:admin_or_owner", 13 | "subnets:private:write": "rule:admin_or_owner", 14 | "subnets:shared:read": "rule:regular_user", 15 | "subnets:shared:write": "rule:admin_only", 16 | 17 | "create_subnet": "rule:admin_or_network_owner", 18 | "get_subnet": "rule:admin_or_owner or rule:shared", 19 | "update_subnet": "rule:admin_or_network_owner", 20 | "delete_subnet": "rule:admin_or_network_owner", 21 | 22 | "create_network": "", 23 | "get_network": "rule:admin_or_owner or rule:shared or rule:external", 24 | "get_network:router:external": "rule:regular_user", 25 | "get_network:segments": "rule:admin_only", 26 | "get_network:provider:network_type": "rule:admin_only", 27 | "get_network:provider:physical_network": "rule:admin_only", 28 | "get_network:provider:segmentation_id": "rule:admin_only", 29 | "get_network:queue_id": "rule:admin_only", 30 | "create_network:shared": "rule:admin_only", 31 | "create_network:router:external": "rule:admin_only", 32 | "create_network:segments": "rule:admin_only", 33 | "create_network:provider:network_type": "rule:admin_only", 34 | "create_network:provider:physical_network": "rule:admin_only", 35 | "create_network:provider:segmentation_id": "rule:admin_only", 36 | "update_network": "rule:admin_or_owner", 37 | "update_network:segments": "rule:admin_only", 38 | "update_network:shared": "rule:admin_only", 39 | "update_network:provider:network_type": "rule:admin_only", 40 | "update_network:provider:physical_network": "rule:admin_only", 41 | "update_network:provider:segmentation_id": "rule:admin_only", 42 | "delete_network": "rule:admin_or_owner", 43 | 44 | "create_port": "", 45 | "create_port:mac_address": "rule:admin_or_network_owner", 46 | "create_port:fixed_ips": "rule:admin_or_network_owner", 47 | "create_port:port_security_enabled": "rule:admin_or_network_owner", 48 | "create_port:binding:host_id": "rule:admin_only", 49 | "create_port:binding:profile": "rule:admin_only", 50 | "create_port:mac_learning_enabled": "rule:admin_or_network_owner", 51 | "get_port": "rule:admin_or_owner", 52 | "get_port:queue_id": "rule:admin_only", 53 | "get_port:binding:vif_type": "rule:admin_only", 54 | "get_port:binding:vif_details": "rule:admin_only", 55 | "get_port:binding:host_id": "rule:admin_only", 56 | "get_port:binding:profile": "rule:admin_only", 57 | "update_port": "rule:admin_or_owner", 58 | "update_port:fixed_ips": "rule:admin_or_network_owner", 59 | "update_port:port_security_enabled": "rule:admin_or_network_owner", 60 | "update_port:binding:host_id": "rule:admin_only", 61 | "update_port:binding:profile": "rule:admin_only", 62 | "update_port:mac_learning_enabled": "rule:admin_or_network_owner", 63 | "delete_port": "rule:admin_or_owner", 64 | 65 | "create_router:external_gateway_info:enable_snat": "rule:admin_only", 66 | 
"update_router:external_gateway_info:enable_snat": "rule:admin_only", 67 | 68 | "create_firewall": "", 69 | "get_firewall": "rule:admin_or_owner", 70 | "create_firewall:shared": "rule:admin_only", 71 | "get_firewall:shared": "rule:admin_only", 72 | "update_firewall": "rule:admin_or_owner", 73 | "delete_firewall": "rule:admin_or_owner", 74 | 75 | "create_firewall_policy": "", 76 | "get_firewall_policy": "rule:admin_or_owner or rule:shared_firewalls", 77 | "create_firewall_policy:shared": "rule:admin_or_owner", 78 | "update_firewall_policy": "rule:admin_or_owner", 79 | "delete_firewall_policy": "rule:admin_or_owner", 80 | 81 | "create_firewall_rule": "", 82 | "get_firewall_rule": "rule:admin_or_owner or rule:shared_firewalls", 83 | "update_firewall_rule": "rule:admin_or_owner", 84 | "delete_firewall_rule": "rule:admin_or_owner", 85 | 86 | "create_qos_queue": "rule:admin_only", 87 | "get_qos_queue": "rule:admin_only", 88 | 89 | "update_agent": "rule:admin_only", 90 | "delete_agent": "rule:admin_only", 91 | "get_agent": "rule:admin_only", 92 | 93 | "create_dhcp-network": "rule:admin_only", 94 | "delete_dhcp-network": "rule:admin_only", 95 | "get_dhcp-networks": "rule:admin_only", 96 | "create_l3-router": "rule:admin_only", 97 | "delete_l3-router": "rule:admin_only", 98 | "get_l3-routers": "rule:admin_only", 99 | "get_dhcp-agents": "rule:admin_only", 100 | "get_l3-agents": "rule:admin_only", 101 | "get_loadbalancer-agent": "rule:admin_only", 102 | "get_loadbalancer-pools": "rule:admin_only", 103 | 104 | "create_router": "rule:regular_user", 105 | "get_router": "rule:admin_or_owner", 106 | "update_router:add_router_interface": "rule:admin_or_owner", 107 | "update_router:remove_router_interface": "rule:admin_or_owner", 108 | "delete_router": "rule:admin_or_owner", 109 | 110 | "create_floatingip": "rule:regular_user", 111 | "update_floatingip": "rule:admin_or_owner", 112 | "delete_floatingip": "rule:admin_or_owner", 113 | "get_floatingip": "rule:admin_or_owner", 114 | 115 | "create_network_profile": "rule:admin_only", 116 | "update_network_profile": "rule:admin_only", 117 | "delete_network_profile": "rule:admin_only", 118 | "get_network_profiles": "", 119 | "get_network_profile": "", 120 | "update_policy_profiles": "rule:admin_only", 121 | "get_policy_profiles": "", 122 | "get_policy_profile": "", 123 | 124 | "create_metering_label": "rule:admin_only", 125 | "delete_metering_label": "rule:admin_only", 126 | "get_metering_label": "rule:admin_only", 127 | 128 | "create_metering_label_rule": "rule:admin_only", 129 | "delete_metering_label_rule": "rule:admin_only", 130 | "get_metering_label_rule": "rule:admin_only", 131 | 132 | "get_service_provider": "rule:regular_user", 133 | "get_lsn": "rule:admin_only", 134 | "create_lsn": "rule:admin_only" 135 | } 136 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/release: -------------------------------------------------------------------------------- 1 | [Neutron] 2 | vendor = Fedora Project 3 | product = OpenStack Neutron 4 | package = 5.el6 5 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/rootwrap.conf: -------------------------------------------------------------------------------- 1 | # Configuration for neutron-rootwrap 2 | # This file should be owned by (and only-writeable by) the root user 3 | 4 | [DEFAULT] 5 | # List of directories to load filter definitions from (separated by ','). 
6 | # These directories MUST all be only writeable by root ! 7 | filters_path=/etc/neutron/rootwrap.d,/usr/share/neutron/rootwrap,/etc/quantum/rootwrap.d,/usr/share/quantum/rootwrap 8 | 9 | # List of directories to search executables in, in case filters do not 10 | # explicitely specify a full path (separated by ',') 11 | # If not specified, defaults to system PATH environment variable. 12 | # These directories MUST all be only writeable by root ! 13 | exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin 14 | 15 | # Enable logging to syslog 16 | # Default value is False 17 | use_syslog=False 18 | 19 | # Which syslog facility to use. 20 | # Valid values include auth, authpriv, syslog, local0, local1... 21 | # Default value is 'syslog' 22 | syslog_log_facility=syslog 23 | 24 | # Which messages to log. 25 | # INFO means log all usage 26 | # ERROR means only log unsuccessful attempts 27 | syslog_log_level=ERROR 28 | 29 | [xenapi] 30 | # XenAPI configuration is only required by the L2 agent if it is to 31 | # target a XenServer/XCP compute host's dom0. 32 | xenapi_connection_url= 33 | xenapi_connection_username=root 34 | xenapi_connection_password= 35 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/config/vpn_agent.ini: -------------------------------------------------------------------------------- 1 | [DEFAULT] 2 | # VPN-Agent configuration file 3 | # Note vpn-agent inherits l3-agent, so you can use configs on l3-agent also 4 | 5 | [vpnagent] 6 | # vpn device drivers which vpn agent will use 7 | # If we want to use multiple drivers, we need to define this option multiple times. 8 | vpn_device_driver=neutron.services.vpn.device_drivers.ipsec.OpenSwanDriver 9 | # vpn_device_driver=neutron.services.vpn.device_drivers.cisco_ipsec.CiscoCsrIPsecDriver 10 | # vpn_device_driver=another_driver 11 | 12 | [ipsec] 13 | # Status check interval 14 | ipsec_status_check_interval=60 15 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/neutron-dhcp-agent: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # neutron-dhcp-agent OpenStack Neutron DHCP Agent 4 | # 5 | # chkconfig: - 98 02 6 | # description: OpenStack Neutron DHCP Agent 7 | ### END INIT INFO 8 | 9 | . /etc/rc.d/init.d/functions 10 | 11 | proj=neutron 12 | plugin=dhcp-agent 13 | prog=$proj-$plugin 14 | exec="/usr/bin/$prog" 15 | configs=( 16 | "/etc/neutron/neutron.conf" \ 17 | "/etc/neutron/dhcp_agent.ini" \ 18 | ) 19 | pidfile="/var/run/$proj/$prog.pid" 20 | 21 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 22 | 23 | lockfile=/var/lock/subsys/$prog 24 | 25 | start() { 26 | [ -x $exec ] || exit 5 27 | for config in ${configs[@]}; do 28 | [ -f $config ] || exit 6 29 | done 30 | echo -n $"Starting $prog: " 31 | daemon --user neutron --pidfile $pidfile "$exec --log-file /var/log/$proj/$plugin.log ${configs[@]/#/--config-file } &>/dev/null & echo \$! > $pidfile" 32 | retval=$? 33 | echo 34 | [ $retval -eq 0 ] && touch $lockfile 35 | return $retval 36 | } 37 | 38 | stop() { 39 | echo -n $"Stopping $prog: " 40 | killproc -p $pidfile $prog 41 | retval=$? 
42 | echo 43 | [ $retval -eq 0 ] && rm -f $lockfile 44 | return $retval 45 | } 46 | 47 | restart() { 48 | stop 49 | start 50 | } 51 | 52 | reload() { 53 | restart 54 | } 55 | 56 | force_reload() { 57 | restart 58 | } 59 | 60 | rh_status() { 61 | status -p $pidfile $prog 62 | } 63 | 64 | rh_status_q() { 65 | rh_status >/dev/null 2>&1 66 | } 67 | 68 | 69 | case "$1" in 70 | start) 71 | rh_status_q && exit 0 72 | $1 73 | ;; 74 | stop) 75 | rh_status_q || exit 0 76 | $1 77 | ;; 78 | restart) 79 | $1 80 | ;; 81 | reload) 82 | rh_status_q || exit 7 83 | $1 84 | ;; 85 | force-reload) 86 | force_reload 87 | ;; 88 | status) 89 | rh_status 90 | ;; 91 | condrestart|try-restart) 92 | rh_status_q || exit 0 93 | restart 94 | ;; 95 | *) 96 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 97 | exit 2 98 | esac 99 | exit $? 100 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/neutron-l3-agent: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # neutron-l3-agent OpenStack Neutron Layer 3 Agent 4 | # 5 | # chkconfig: - 98 02 6 | # description: OpenStack Neutron Layer 3 Agent 7 | ### END INIT INFO 8 | 9 | . /etc/rc.d/init.d/functions 10 | 11 | proj=neutron 12 | plugin=l3-agent 13 | prog=$proj-$plugin 14 | exec="/usr/bin/$prog" 15 | configs=( 16 | "/usr/share/$proj/$proj-dist.conf" \ 17 | "/etc/$proj/$proj.conf" \ 18 | "/etc/$proj/l3_agent.ini" \ 19 | "/etc/$proj/fwaas_driver.ini" \ 20 | ) 21 | pidfile="/var/run/$proj/$prog.pid" 22 | 23 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 24 | 25 | lockfile=/var/lock/subsys/$prog 26 | 27 | start() { 28 | [ -x $exec ] || exit 5 29 | for config in ${configs[@]}; do 30 | [ -f $config ] || exit 6 31 | done 32 | echo -n $"Starting $prog: " 33 | daemon --user neutron --pidfile $pidfile "$exec --log-file /var/log/$proj/$plugin.log ${configs[@]/#/--config-file } &>/dev/null & echo \$! > $pidfile" 34 | retval=$? 35 | echo 36 | [ $retval -eq 0 ] && touch $lockfile 37 | return $retval 38 | } 39 | 40 | stop() { 41 | echo -n $"Stopping $prog: " 42 | killproc -p $pidfile $prog 43 | retval=$? 44 | echo 45 | [ $retval -eq 0 ] && rm -f $lockfile 46 | return $retval 47 | } 48 | 49 | restart() { 50 | stop 51 | start 52 | } 53 | 54 | reload() { 55 | restart 56 | } 57 | 58 | force_reload() { 59 | restart 60 | } 61 | 62 | rh_status() { 63 | status -p $pidfile $prog 64 | } 65 | 66 | rh_status_q() { 67 | rh_status >/dev/null 2>&1 68 | } 69 | 70 | 71 | case "$1" in 72 | start) 73 | rh_status_q && exit 0 74 | $1 75 | ;; 76 | stop) 77 | rh_status_q || exit 0 78 | $1 79 | ;; 80 | restart) 81 | $1 82 | ;; 83 | reload) 84 | rh_status_q || exit 7 85 | $1 86 | ;; 87 | force-reload) 88 | force_reload 89 | ;; 90 | status) 91 | rh_status 92 | ;; 93 | condrestart|try-restart) 94 | rh_status_q || exit 0 95 | restart 96 | ;; 97 | *) 98 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 99 | exit 2 100 | esac 101 | exit $? 102 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/neutron-lbaas-agent: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # neutron-lbaas-agent OpenStack Neutron LBaaS Agent 4 | # 5 | # chkconfig: - 98 02 6 | # description: OpenStack Neutron LBaaS Agent 7 | ### END INIT INFO 8 | 9 | . 
/etc/rc.d/init.d/functions 10 | 11 | proj=neutron 12 | plugin=lbaas-agent 13 | prog=$proj-$plugin 14 | exec="/usr/bin/$prog" 15 | configs=( 16 | "/etc/neutron/neutron.conf" \ 17 | "/etc/neutron/lbaas_agent.ini" \ 18 | ) 19 | pidfile="/var/run/$proj/$prog.pid" 20 | 21 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 22 | 23 | lockfile=/var/lock/subsys/$prog 24 | 25 | start() { 26 | [ -x $exec ] || exit 5 27 | for config in ${configs[@]}; do 28 | [ -f $config ] || exit 6 29 | done 30 | echo -n $"Starting $prog: " 31 | daemon --user neutron --pidfile $pidfile "$exec --log-file /var/log/$proj/$plugin.log ${configs[@]/#/--config-file } &>/dev/null & echo \$! > $pidfile" 32 | retval=$? 33 | echo 34 | [ $retval -eq 0 ] && touch $lockfile 35 | return $retval 36 | } 37 | 38 | stop() { 39 | echo -n $"Stopping $prog: " 40 | killproc -p $pidfile $prog 41 | retval=$? 42 | echo 43 | [ $retval -eq 0 ] && rm -f $lockfile 44 | return $retval 45 | } 46 | 47 | restart() { 48 | stop 49 | start 50 | } 51 | 52 | reload() { 53 | restart 54 | } 55 | 56 | force_reload() { 57 | restart 58 | } 59 | 60 | rh_status() { 61 | status -p $pidfile $prog 62 | } 63 | 64 | rh_status_q() { 65 | rh_status >/dev/null 2>&1 66 | } 67 | 68 | 69 | case "$1" in 70 | start) 71 | rh_status_q && exit 0 72 | $1 73 | ;; 74 | stop) 75 | rh_status_q || exit 0 76 | $1 77 | ;; 78 | restart) 79 | $1 80 | ;; 81 | reload) 82 | rh_status_q || exit 7 83 | $1 84 | ;; 85 | force-reload) 86 | force_reload 87 | ;; 88 | status) 89 | rh_status 90 | ;; 91 | condrestart|try-restart) 92 | rh_status_q || exit 0 93 | restart 94 | ;; 95 | *) 96 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 97 | exit 2 98 | esac 99 | exit $? 100 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/neutron-linuxbridge-agent: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # neutron-linuxbridge-agent OpenStack linuxbridge plugin 4 | # 5 | # chkconfig: - 98 02 6 | # description: Support VLANs using Linux bridging 7 | ### END INIT INFO 8 | 9 | . /etc/rc.d/init.d/functions 10 | 11 | proj=neutron 12 | plugin=linuxbridge-agent 13 | prog=$proj-$plugin 14 | exec="/usr/bin/$prog" 15 | configs=( 16 | "/etc/neutron/neutron.conf" \ 17 | "/etc/neutron/plugins/ml2/ml2_conf.ini" \ 18 | "/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini" \ 19 | ) 20 | pidfile="/var/run/$proj/$prog.pid" 21 | 22 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 23 | 24 | lockfile=/var/lock/subsys/$prog 25 | 26 | start() { 27 | [ -x $exec ] || exit 5 28 | for config in ${configs[@]}; do 29 | [ -f $config ] || exit 6 30 | done 31 | echo -n $"Starting $prog: " 32 | daemon --user neutron --pidfile $pidfile "$exec --log-file /var/log/$proj/$plugin.log ${configs[@]/#/--config-file } &>/dev/null & echo \$! > $pidfile" 33 | retval=$? 34 | echo 35 | [ $retval -eq 0 ] && touch $lockfile 36 | return $retval 37 | } 38 | 39 | stop() { 40 | echo -n $"Stopping $prog: " 41 | killproc -p $pidfile $prog 42 | retval=$? 
43 | echo 44 | [ $retval -eq 0 ] && rm -f $lockfile 45 | return $retval 46 | } 47 | 48 | restart() { 49 | stop 50 | start 51 | } 52 | 53 | reload() { 54 | restart 55 | } 56 | 57 | force_reload() { 58 | restart 59 | } 60 | 61 | rh_status() { 62 | status -p $pidfile $prog 63 | } 64 | 65 | rh_status_q() { 66 | rh_status >/dev/null 2>&1 67 | } 68 | 69 | 70 | case "$1" in 71 | start) 72 | rh_status_q && exit 0 73 | $1 74 | ;; 75 | stop) 76 | rh_status_q || exit 0 77 | $1 78 | ;; 79 | restart) 80 | $1 81 | ;; 82 | reload) 83 | rh_status_q || exit 7 84 | $1 85 | ;; 86 | force-reload) 87 | force_reload 88 | ;; 89 | status) 90 | rh_status 91 | ;; 92 | condrestart|try-restart) 93 | rh_status_q || exit 0 94 | restart 95 | ;; 96 | *) 97 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 98 | exit 2 99 | esac 100 | exit $? 101 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/neutron-metadata-agent: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # neutron-metadata-agent OpenStack Neutron Metadata Agent 4 | # 5 | # chkconfig: - 98 02 6 | # description: OpenStack Neutron Metadata Agent 7 | ### END INIT INFO 8 | 9 | . /etc/rc.d/init.d/functions 10 | 11 | proj=neutron 12 | plugin=metadata-agent 13 | prog=$proj-$plugin 14 | exec="/usr/bin/$prog" 15 | configs=( 16 | "/usr/share/$proj/$proj-dist.conf" \ 17 | "/etc/$proj/$proj.conf" \ 18 | "/etc/$proj/metadata_agent.ini" \ 19 | ) 20 | pidfile="/var/run/$proj/$prog.pid" 21 | 22 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 23 | 24 | lockfile=/var/lock/subsys/$prog 25 | 26 | start() { 27 | [ -x $exec ] || exit 5 28 | for config in ${configs[@]}; do 29 | [ -f $config ] || exit 6 30 | done 31 | echo -n $"Starting $prog: " 32 | daemon --user neutron --pidfile $pidfile "$exec --log-file /var/log/$proj/$plugin.log ${configs[@]/#/--config-file } &>/dev/null & echo \$! > $pidfile" 33 | retval=$? 34 | echo 35 | [ $retval -eq 0 ] && touch $lockfile 36 | return $retval 37 | } 38 | 39 | stop() { 40 | echo -n $"Stopping $prog: " 41 | killproc -p $pidfile $prog 42 | retval=$? 43 | echo 44 | [ $retval -eq 0 ] && rm -f $lockfile 45 | return $retval 46 | } 47 | 48 | restart() { 49 | stop 50 | start 51 | } 52 | 53 | reload() { 54 | restart 55 | } 56 | 57 | force_reload() { 58 | restart 59 | } 60 | 61 | rh_status() { 62 | status -p $pidfile $prog 63 | } 64 | 65 | rh_status_q() { 66 | rh_status >/dev/null 2>&1 67 | } 68 | 69 | 70 | case "$1" in 71 | start) 72 | rh_status_q && exit 0 73 | $1 74 | ;; 75 | stop) 76 | rh_status_q || exit 0 77 | $1 78 | ;; 79 | restart) 80 | $1 81 | ;; 82 | reload) 83 | rh_status_q || exit 7 84 | $1 85 | ;; 86 | force-reload) 87 | force_reload 88 | ;; 89 | status) 90 | rh_status 91 | ;; 92 | condrestart|try-restart) 93 | rh_status_q || exit 0 94 | restart 95 | ;; 96 | *) 97 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 98 | exit 2 99 | esac 100 | exit $? 101 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/neutron-ovs-cleanup: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # neutron-ovs-cleanup OpenStack Open vSwitch cleanup utility 4 | # 5 | # chkconfig: - 97 02 6 | # description: Purge Open vSwitch of the Neutron devices 7 | ### END INIT INFO 8 | 9 | . 
/etc/rc.d/init.d/functions 10 | 11 | proj=neutron 12 | prog=$proj-ovs-cleanup 13 | exec="/usr/bin/$prog" 14 | pidfile="/var/run/$proj/$prog.pid" 15 | configs=( 16 | "/usr/share/$proj/$proj-dist.conf" \ 17 | "/etc/$proj/$proj.conf" \ 18 | "/etc/$proj/plugins/openvswitch/ovs_neutron_plugin.ini" \ 19 | ) 20 | configs_str=${configs[@]/#/--config-file } 21 | 22 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 23 | 24 | lockfile=/var/lock/subsys/$prog 25 | 26 | start() { 27 | [ -x $exec ] || exit 5 28 | for config in ${configs[@]}; do 29 | [ -f $config ] || exit 6 30 | done 31 | runuser -s /bin/bash neutron -c "$exec --log-file /var/log/$proj/ovs-cleanup.log $configs_str &>/dev/null" 32 | retval=$? 33 | [ $retval -eq 0 ] && touch $lockfile 34 | return $retval 35 | } 36 | 37 | case "$1" in 38 | start) 39 | $1 40 | ;; 41 | stop|restart|reload|force-reload|status|condrestart|try-restart) 42 | # Do nothing 43 | ;; 44 | *) 45 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 46 | exit 2 47 | esac 48 | exit $? 49 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/neutron-server: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # neutron OpenStack Software Defined Networking Service 4 | # 5 | # chkconfig: - 98 02 6 | # description: neutron provides an API to \ 7 | # * request and configure virtual networks 8 | ### END INIT INFO 9 | 10 | . /etc/rc.d/init.d/functions 11 | 12 | prog=neutron 13 | exec="/usr/bin/$prog-server" 14 | configs=( 15 | "/etc/neutron/neutron.conf" \ 16 | "/etc/neutron/plugins/ml2/ml2_conf.ini" \ 17 | "/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini" \ 18 | ) 19 | pidfile="/var/run/$prog/$prog.pid" 20 | logfile="/var/log/$prog/server.log" 21 | 22 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 23 | 24 | lockfile=/var/lock/subsys/$prog-server 25 | 26 | start() { 27 | [ -x $exec ] || exit 5 28 | for config in ${configs[@]}; do 29 | [ -f $config ] || exit 6 30 | done 31 | echo -n $"Starting $prog: " 32 | daemon --user neutron --pidfile $pidfile "$exec ${configs[@]/#/--config-file } --log-file $logfile &>/dev/null & echo \$! > $pidfile" 33 | retval=$? 34 | echo 35 | [ $retval -eq 0 ] && touch $lockfile 36 | return $retval 37 | } 38 | 39 | stop() { 40 | echo -n $"Stopping $prog: " 41 | killproc -p $pidfile $prog 42 | retval=$? 43 | echo 44 | [ $retval -eq 0 ] && rm -f $lockfile 45 | return $retval 46 | } 47 | 48 | restart() { 49 | stop 50 | start 51 | } 52 | 53 | reload() { 54 | restart 55 | } 56 | 57 | force_reload() { 58 | restart 59 | } 60 | 61 | rh_status() { 62 | status -p $pidfile $prog 63 | } 64 | 65 | rh_status_q() { 66 | rh_status >/dev/null 2>&1 67 | } 68 | 69 | 70 | case "$1" in 71 | start) 72 | rh_status_q && exit 0 73 | $1 74 | ;; 75 | stop) 76 | rh_status_q || exit 0 77 | $1 78 | ;; 79 | restart) 80 | $1 81 | ;; 82 | reload) 83 | rh_status_q || exit 7 84 | $1 85 | ;; 86 | force-reload) 87 | force_reload 88 | ;; 89 | status) 90 | rh_status 91 | ;; 92 | condrestart|try-restart) 93 | rh_status_q || exit 0 94 | restart 95 | ;; 96 | *) 97 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 98 | exit 2 99 | esac 100 | exit $? 
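# Usage sketch (annotation, not part of the upstream script; assumes the file is
# installed as /etc/init.d/neutron-server and registered with chkconfig the same
# way the SLS states register the agents):
#   chkconfig --add neutron-server && chkconfig neutron-server on
#   service neutron-server start
#   service neutron-server status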
101 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/neutron-vpn-agent: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # neutron-vpn-agent OpenStack Neutron VPN Agent 4 | # 5 | # chkconfig: - 98 02 6 | # description: OpenStack Neutron VPN Agent 7 | ### END INIT INFO 8 | 9 | . /etc/rc.d/init.d/functions 10 | 11 | proj=neutron 12 | plugin=vpn-agent 13 | prog=$proj-$plugin 14 | exec="/usr/bin/$prog" 15 | configs=( 16 | "/usr/share/$proj/$proj-dist.conf" \ 17 | "/etc/$proj/$proj.conf" \ 18 | "/etc/$proj/vpn_agent.ini" \ 19 | "/etc/$proj/l3_agent.ini" \ 20 | "/etc/$proj/fwaas_driver.ini" \ 21 | ) 22 | pidfile="/var/run/$proj/$prog.pid" 23 | 24 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 25 | 26 | lockfile=/var/lock/subsys/$prog 27 | 28 | start() { 29 | [ -x $exec ] || exit 5 30 | for config in ${configs[@]}; do 31 | [ -f $config ] || exit 6 32 | done 33 | echo -n $"Starting $prog: " 34 | daemon --user neutron --pidfile $pidfile "$exec --log-file /var/log/$proj/$plugin.log ${configs[@]/#/--config-file } &>/dev/null & echo \$! > $pidfile" 35 | retval=$? 36 | echo 37 | [ $retval -eq 0 ] && touch $lockfile 38 | return $retval 39 | } 40 | 41 | stop() { 42 | echo -n $"Stopping $prog: " 43 | killproc -p $pidfile $prog 44 | retval=$? 45 | echo 46 | [ $retval -eq 0 ] && rm -f $lockfile 47 | return $retval 48 | } 49 | 50 | restart() { 51 | stop 52 | start 53 | } 54 | 55 | reload() { 56 | restart 57 | } 58 | 59 | force_reload() { 60 | restart 61 | } 62 | 63 | rh_status() { 64 | status -p $pidfile $prog 65 | } 66 | 67 | rh_status_q() { 68 | rh_status >/dev/null 2>&1 69 | } 70 | 71 | 72 | case "$1" in 73 | start) 74 | rh_status_q && exit 0 75 | $1 76 | ;; 77 | stop) 78 | rh_status_q || exit 0 79 | $1 80 | ;; 81 | restart) 82 | $1 83 | ;; 84 | reload) 85 | rh_status_q || exit 7 86 | $1 87 | ;; 88 | force-reload) 89 | force_reload 90 | ;; 91 | status) 92 | rh_status 93 | ;; 94 | condrestart|try-restart) 95 | rh_status_q || exit 0 96 | restart 97 | ;; 98 | *) 99 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 100 | exit 2 101 | esac 102 | exit $? 
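
These are classic SysV scripts (note the `chkconfig: - 98 02` header), so each one must be registered before `service` can manage it. A hedged usage sketch for a CentOS 6 style host:

```bash
# Register the script at the priorities encoded in its chkconfig header,
# enable it at boot, then start it and check its pidfile-tracked status.
chkconfig --add neutron-vpn-agent
chkconfig neutron-vpn-agent on
service neutron-vpn-agent start
service neutron-vpn-agent status
```
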
103 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/files/neutron_init.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | export OS_TENANT_NAME="{{KEYSTONE_ADMIN_TENANT}}" 3 | export OS_USERNAME="{{KEYSTONE_ADMIN_USER}}" 4 | export OS_PASSWORD="{{KEYSTONE_ADMIN_PASSWD}}" 5 | export OS_AUTH_URL="{{KEYSTONE_AUTH_URL}}" 6 | 7 | keystone user-create --name={{AUTH_NEUTRON_ADMIN_USER}} --pass={{AUTH_NEUTRON_ADMIN_PASS}} --email=neutron@example.com 8 | keystone user-role-add --user={{AUTH_NEUTRON_ADMIN_USER}} --tenant={{AUTH_NEUTRON_ADMIN_TENANT}} --role=admin 9 | 10 | function get_id () { 11 | echo `"$@" | grep ' id ' | awk '{print $4}'` 12 | } 13 | 14 | NEUTRON_SERVICE=$(get_id \ 15 | keystone service-create --name=neutron \ 16 | --type=network \ 17 | --description="OpenStack Networking Service") 18 | 19 | keystone endpoint-create --service-id="$NEUTRON_SERVICE" \ 20 | --publicurl="http://{{NEUTRON_IP}}:9696" \ 21 | --adminurl="http://{{NEUTRON_IP}}:9696" \ 22 | --internalurl="http://{{NEUTRON_IP}}:9696" 23 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/init.sls: -------------------------------------------------------------------------------- 1 | neutron-init: 2 | file.managed: 3 | - name: /usr/local/bin/neutron_init.sh 4 | - source: salt://openstack/neutron/files/neutron_init.sh 5 | - mode: 755 6 | - user: root 7 | - group: root 8 | - template: jinja 9 | - defaults: 10 | KEYSTONE_ADMIN_TENANT: {{ pillar['keystone']['KEYSTONE_ADMIN_TENANT'] }} 11 | KEYSTONE_ADMIN_USER: {{ pillar['keystone']['KEYSTONE_ADMIN_USER'] }} 12 | KEYSTONE_ADMIN_PASSWD: {{ pillar['keystone']['KEYSTONE_ADMIN_PASSWD'] }} 13 | KEYSTONE_AUTH_URL: {{ pillar['keystone']['KEYSTONE_AUTH_URL'] }} 14 | NEUTRON_IP: {{ pillar['neutron']['NEUTRON_IP'] }} 15 | AUTH_NEUTRON_ADMIN_TENANT: {{ pillar['neutron']['AUTH_NEUTRON_ADMIN_TENANT'] }} 16 | AUTH_NEUTRON_ADMIN_USER: {{ pillar['neutron']['AUTH_NEUTRON_ADMIN_USER'] }} 17 | AUTH_NEUTRON_ADMIN_PASS: {{ pillar['neutron']['AUTH_NEUTRON_ADMIN_PASS'] }} 18 | cmd.run: 19 | - name: bash /usr/local/bin/neutron_init.sh && touch /etc/neutron-datainit.lock 20 | - require: 21 | - file: /usr/local/bin/neutron_init.sh 22 | - unless: test -f /etc/neutron-datainit.lock 23 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/lbaas_agent.sls: -------------------------------------------------------------------------------- 1 | neutron-lbaas-agent: 2 | pkg.installed: 3 | - name: haproxy 4 | file.managed: 5 | - name: /etc/init.d/neutron-lbaas-agent 6 | - source: salt://openstack/neutron/files/neutron-lbaas-agent 7 | - mode: 755 8 | - user: root 9 | - group: root 10 | service.running: 11 | - enable: True 12 | - watch: 13 | - file: /etc/neutron 14 | - require: 15 | - pkg: neutron-lbaas-agent 16 | - file: neutron-lbaas-agent 17 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/linuxbridge_agent.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - openstack.neutron.config 3 | 4 | neutron-linuxbridge-agent: 5 | pkg.installed: 6 | - names: 7 | - openstack-neutron 8 | - openstack-neutron-ml2 9 | - python-neutronclient 10 | - openstack-neutron-linuxbridge 11 | file.managed: 12 | - name: /etc/init.d/neutron-linuxbridge-agent 13 | - source: 
salt://openstack/neutron/files/neutron-linuxbridge-agent 14 | - mode: 755 15 | - user: root 16 | - group: root 17 | service.running: 18 | - name: neutron-linuxbridge-agent 19 | - enable: True 20 | - watch: 21 | - file: /etc/neutron 22 | - file: neutron-linuxbridge-agent 23 | - require: 24 | - pkg: neutron-linuxbridge-agent 25 | -------------------------------------------------------------------------------- /states/openstack-mitaka/neutron/server.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - openstack.neutron.config 3 | - openstack.neutron.linuxbridge_agent 4 | - openstack.neutron.init 5 | 6 | neutron-server: 7 | pkg.installed: 8 | - names: 9 | - openstack-neutron 10 | - openstack-neutron-ml2 11 | - python-neutronclient 12 | - openstack-neutron-linuxbridge 13 | file.managed: 14 | - name: /etc/init.d/neutron-server 15 | - source: salt://openstack/neutron/files/neutron-server 16 | - mode: 755 17 | - user: root 18 | - group: root 19 | service.running: 20 | - name: neutron-server 21 | - enable: True 22 | - watch: 23 | - file: /etc/neutron 24 | - require: 25 | - cmd: neutron-init 26 | - pkg: neutron-server 27 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/compute.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - openstack.nova.config 3 | 4 | messagebus: 5 | service.running: 6 | - name: messagebus 7 | - enable: True 8 | 9 | libvirtd: 10 | pkg.installed: 11 | - names: 12 | - libvirt 13 | - libvirt-python 14 | - libvirt-client 15 | service.running: 16 | - name: libvirtd 17 | - enable: True 18 | 19 | avahi-daemon: 20 | pkg.installed: 21 | - name: avahi 22 | service.running: 23 | - name: avahi-daemon 24 | - enable: True 25 | - require: 26 | - pkg: avahi-daemon 27 | 28 | nova-compute-service: 29 | pkg.installed: 30 | - names: 31 | - openstack-nova-compute 32 | - sysfsutils 33 | service.running: 34 | - name: openstack-nova-compute 35 | - enable: True 36 | - watch: 37 | - file: /etc/nova 38 | - pkg: nova-compute-service 39 | - require: 40 | - service: messagebus 41 | - service: libvirtd 42 | - service: avahi-daemon 43 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/config.sls: -------------------------------------------------------------------------------- 1 | /etc/nova: 2 | file.recurse: 3 | - source: salt://openstack/nova/files/config 4 | - user: nova 5 | - group: nova 6 | - template: jinja 7 | - defaults: 8 | MYSQL_SERVER: {{ pillar['nova']['MYSQL_SERVER'] }} 9 | NOVA_IP: {{ pillar['nova']['NOVA_IP'] }} 10 | NOVA_DB_PASS: {{ pillar['nova']['NOVA_DB_PASS'] }} 11 | NOVA_DB_USER: {{ pillar['nova']['NOVA_DB_USER'] }} 12 | NOVA_DB_NAME: {{ pillar['nova']['NOVA_DB_NAME'] }} 13 | RABBITMQ_HOST: {{ pillar['rabbit']['RABBITMQ_HOST'] }} 14 | RABBITMQ_PORT: {{ pillar['rabbit']['RABBITMQ_PORT'] }} 15 | RABBITMQ_USER: {{ pillar['rabbit']['RABBITMQ_USER'] }} 16 | RABBITMQ_PASS: {{ pillar['rabbit']['RABBITMQ_PASS'] }} 17 | AUTH_KEYSTONE_HOST: {{ pillar['nova']['AUTH_KEYSTONE_HOST'] }} 18 | AUTH_KEYSTONE_PORT: {{ pillar['nova']['AUTH_KEYSTONE_PORT'] }} 19 | AUTH_KEYSTONE_PROTOCOL: {{ pillar['nova']['AUTH_KEYSTONE_PROTOCOL'] }} 20 | AUTH_NOVA_ADMIN_TENANT: {{ pillar['nova']['AUTH_NOVA_ADMIN_TENANT'] }} 21 | AUTH_NOVA_ADMIN_USER: {{ pillar['nova']['AUTH_NOVA_ADMIN_USER'] }} 22 | AUTH_NOVA_ADMIN_PASS: {{ pillar['nova']['AUTH_NOVA_ADMIN_PASS'] }} 23 | NEUTRON_URL: {{ 
pillar['nova']['NEUTRON_URL'] }} 24 | NEUTRON_ADMIN_USER: {{ pillar['nova']['NEUTRON_ADMIN_USER'] }} 25 | NEUTRON_ADMIN_PASS: {{ pillar['nova']['NEUTRON_ADMIN_PASS'] }} 26 | NEUTRON_ADMIN_TENANT: {{ pillar['nova']['NEUTRON_ADMIN_TENANT'] }} 27 | NEUTRON_ADMIN_AUTH_URL: {{ pillar['nova']['NEUTRON_ADMIN_AUTH_URL'] }} 28 | NOVNCPROXY_BASE_URL: {{ pillar['nova']['NOVNCPROXY_BASE_URL'] }} 29 | VNCSERVER_PROXYCLIENT: {{ grains['fqdn'] }} 30 | AUTH_URI: {{ pillar['nova']['AUTH_URI'] }} 31 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/control.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - openstack.nova.config 3 | - openstack.nova.init 4 | 5 | nova-control-install: 6 | pkg.installed: 7 | - names: 8 | - openstack-nova-api 9 | - openstack-nova-cert 10 | - openstack-nova-conductor 11 | - openstack-nova-console 12 | - openstack-nova-novncproxy 13 | - openstack-nova-scheduler 14 | - python-novaclient 15 | - require: 16 | - file: rdo_repo 17 | 18 | nova-db-sync: 19 | cmd.run: 20 | - name: nova-manage db sync && touch /etc/nova-dbsync.lock && chown nova:nova /var/log/nova/* 21 | - require: 22 | - mysql_grants: nova-mysql 23 | - pkg: nova-control-install 24 | - unless: test -f /etc/nova-dbsync.lock 25 | 26 | nova-api-service: 27 | file.managed: 28 | - name: /etc/init.d/openstack-nova-api 29 | - source: salt://openstack/nova/files/openstack-nova-api 30 | - user: root 31 | - group: root 32 | - mode: 755 33 | service.running: 34 | - name: openstack-nova-api 35 | - enable: True 36 | - watch: 37 | - file: /etc/nova 38 | - require: 39 | - pkg: nova-control-install 40 | - cmd: nova-db-sync 41 | 42 | nova-cert-service: 43 | file.managed: 44 | - name: /etc/init.d/openstack-nova-cert 45 | - source: salt://openstack/nova/files/openstack-nova-cert 46 | - user: root 47 | - group: root 48 | - mode: 755 49 | service.running: 50 | - name: openstack-nova-cert 51 | - enable: True 52 | - watch: 53 | - file: /etc/nova 54 | - require: 55 | - pkg: nova-control-install 56 | - cmd: nova-db-sync 57 | 58 | nova-conductor-service: 59 | file.managed: 60 | - name: /etc/init.d/openstack-nova-conductor 61 | - source: salt://openstack/nova/files/openstack-nova-conductor 62 | - user: root 63 | - group: root 64 | - mode: 755 65 | service.running: 66 | - name: openstack-nova-conductor 67 | - enable: True 68 | - watch: 69 | - file: /etc/nova 70 | - require: 71 | - pkg: nova-control-install 72 | - cmd: nova-db-sync 73 | 74 | nova-consoleauth-service: 75 | file.managed: 76 | - name: /etc/init.d/openstack-nova-consoleauth 77 | - source: salt://openstack/nova/files/openstack-nova-consoleauth 78 | - user: root 79 | - group: root 80 | - mode: 755 81 | service.running: 82 | - name: openstack-nova-consoleauth 83 | - enable: True 84 | - watch: 85 | - file: /etc/nova 86 | - require: 87 | - pkg: nova-control-install 88 | - cmd: nova-db-sync 89 | 90 | nova-novncproxy-service: 91 | file.managed: 92 | - name: /etc/init.d/openstack-nova-novncproxy 93 | - source: salt://openstack/nova/files/openstack-nova-novncproxy 94 | - user: root 95 | - group: root 96 | - mode: 755 97 | service.running: 98 | - name: openstack-nova-novncproxy 99 | - enable: True 100 | - watch: 101 | - file: /etc/nova 102 | - require: 103 | - pkg: nova-control-install 104 | - cmd: nova-db-sync 105 | 106 | nova-scheduler-service: 107 | file.managed: 108 | - name: /etc/init.d/openstack-nova-scheduler 109 | - source: 
salt://openstack/nova/files/openstack-nova-scheduler 110 | - user: root 111 | - group: root 112 | - mode: 755 113 | service.running: 114 | - name: openstack-nova-scheduler 115 | - enable: True 116 | - watch: 117 | - file: /etc/nova 118 | - require: 119 | - pkg: nova-control-install 120 | - cmd: nova-db-sync 121 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/files/config/api-paste.ini: -------------------------------------------------------------------------------- 1 | ############ 2 | # Metadata # 3 | ############ 4 | [composite:metadata] 5 | use = egg:Paste#urlmap 6 | /: meta 7 | 8 | [pipeline:meta] 9 | pipeline = ec2faultwrap logrequest metaapp 10 | 11 | [app:metaapp] 12 | paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory 13 | 14 | ####### 15 | # EC2 # 16 | ####### 17 | 18 | [composite:ec2] 19 | use = egg:Paste#urlmap 20 | /services/Cloud: ec2cloud 21 | 22 | [composite:ec2cloud] 23 | use = call:nova.api.auth:pipeline_factory 24 | noauth = ec2faultwrap logrequest ec2noauth cloudrequest validator ec2executor 25 | keystone = ec2faultwrap logrequest ec2keystoneauth cloudrequest validator ec2executor 26 | 27 | [filter:ec2faultwrap] 28 | paste.filter_factory = nova.api.ec2:FaultWrapper.factory 29 | 30 | [filter:logrequest] 31 | paste.filter_factory = nova.api.ec2:RequestLogging.factory 32 | 33 | [filter:ec2lockout] 34 | paste.filter_factory = nova.api.ec2:Lockout.factory 35 | 36 | [filter:ec2keystoneauth] 37 | paste.filter_factory = nova.api.ec2:EC2KeystoneAuth.factory 38 | 39 | [filter:ec2noauth] 40 | paste.filter_factory = nova.api.ec2:NoAuth.factory 41 | 42 | [filter:cloudrequest] 43 | controller = nova.api.ec2.cloud.CloudController 44 | paste.filter_factory = nova.api.ec2:Requestify.factory 45 | 46 | [filter:authorizer] 47 | paste.filter_factory = nova.api.ec2:Authorizer.factory 48 | 49 | [filter:validator] 50 | paste.filter_factory = nova.api.ec2:Validator.factory 51 | 52 | [app:ec2executor] 53 | paste.app_factory = nova.api.ec2:Executor.factory 54 | 55 | ############# 56 | # OpenStack # 57 | ############# 58 | 59 | [composite:osapi_compute] 60 | use = call:nova.api.openstack.urlmap:urlmap_factory 61 | /: oscomputeversions 62 | /v1.1: openstack_compute_api_v2 63 | /v2: openstack_compute_api_v2 64 | /v3: openstack_compute_api_v3 65 | 66 | [composite:openstack_compute_api_v2] 67 | use = call:nova.api.auth:pipeline_factory 68 | noauth = faultwrap sizelimit noauth ratelimit osapi_compute_app_v2 69 | keystone = faultwrap sizelimit authtoken keystonecontext ratelimit osapi_compute_app_v2 70 | keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v2 71 | 72 | [composite:openstack_compute_api_v3] 73 | use = call:nova.api.auth:pipeline_factory_v3 74 | noauth = faultwrap sizelimit noauth_v3 osapi_compute_app_v3 75 | keystone = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v3 76 | 77 | [filter:faultwrap] 78 | paste.filter_factory = nova.api.openstack:FaultWrapper.factory 79 | 80 | [filter:noauth] 81 | paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory 82 | 83 | [filter:noauth_v3] 84 | paste.filter_factory = nova.api.openstack.auth:NoAuthMiddlewareV3.factory 85 | 86 | [filter:ratelimit] 87 | paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory 88 | 89 | [filter:sizelimit] 90 | paste.filter_factory = nova.api.sizelimit:RequestBodySizeLimiter.factory 91 | 92 | [app:osapi_compute_app_v2] 93 | paste.app_factory 
= nova.api.openstack.compute:APIRouter.factory 94 | 95 | [app:osapi_compute_app_v3] 96 | paste.app_factory = nova.api.openstack.compute:APIRouterV3.factory 97 | 98 | [pipeline:oscomputeversions] 99 | pipeline = faultwrap oscomputeversionapp 100 | 101 | [app:oscomputeversionapp] 102 | paste.app_factory = nova.api.openstack.compute.versions:Versions.factory 103 | 104 | ########## 105 | # Shared # 106 | ########## 107 | 108 | [filter:keystonecontext] 109 | paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory 110 | 111 | [filter:authtoken] 112 | paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory 113 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/files/config/release: -------------------------------------------------------------------------------- 1 | [Nova] 2 | vendor = RDO Project 3 | product = OpenStack Nova 4 | package = 3.el6 5 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/files/config/rootwrap.conf: -------------------------------------------------------------------------------- 1 | # Configuration for nova-rootwrap 2 | # This file should be owned by (and only-writeable by) the root user 3 | 4 | [DEFAULT] 5 | # List of directories to load filter definitions from (separated by ','). 6 | # These directories MUST all be only writeable by root ! 7 | filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap 8 | 9 | # List of directories to search executables in, in case filters do not 10 | # explicitely specify a full path (separated by ',') 11 | # If not specified, defaults to system PATH environment variable. 12 | # These directories MUST all be only writeable by root ! 13 | exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin 14 | 15 | # Enable logging to syslog 16 | # Default value is False 17 | use_syslog=False 18 | 19 | # Which syslog facility to use. 20 | # Valid values include auth, authpriv, syslog, user0, user1... 21 | # Default value is 'syslog' 22 | syslog_log_facility=syslog 23 | 24 | # Which messages to log. 
25 | # INFO means log all usage 26 | # ERROR means only log unsuccessful attempts 27 | syslog_log_level=ERROR 28 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/files/nova_init.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | export OS_TENANT_NAME="{{KEYSTONE_ADMIN_TENANT}}" 3 | export OS_USERNAME="{{KEYSTONE_ADMIN_USER}}" 4 | export OS_PASSWORD="{{KEYSTONE_ADMIN_PASSWD}}" 5 | export OS_AUTH_URL="{{KEYSTONE_AUTH_URL}}" 6 | 7 | keystone user-create --name={{AUTH_NOVA_ADMIN_USER}} --pass={{AUTH_NOVA_ADMIN_PASS}} --email=nova@example.com 8 | keystone user-role-add --user={{AUTH_NOVA_ADMIN_USER}} --tenant={{AUTH_NOVA_ADMIN_TENANT}} --role=admin 9 | 10 | function get_id () { 11 | echo `"$@" | grep ' id ' | awk '{print $4}'` 12 | } 13 | 14 | NOVA_SERVICE=$(get_id \ 15 | keystone service-create --name=nova \ 16 | --type=compute \ 17 | --description="OpenStack Compute Service") 18 | 19 | keystone endpoint-create --service-id="$NOVA_SERVICE" \ 20 | --publicurl=http://{{NOVA_IP}}:8774/v2/%\(tenant_id\)s \ 21 | --adminurl=http://{{NOVA_IP}}:8774/v2/%\(tenant_id\)s \ 22 | --internalurl=http://{{NOVA_IP}}:8774/v2/%\(tenant_id\)s 23 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/files/openstack-nova-api: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # openstack-nova-api OpenStack Nova API Server 4 | # 5 | # chkconfig: - 98 02 6 | # description: At the heart of the cloud framework is an API Server. \ 7 | # This API Server makes command and control of the \ 8 | # hypervisor, storage, and networking programmatically \ 9 | # available to users in realization of the definition \ 10 | # of cloud computing. 11 | 12 | ### BEGIN INIT INFO 13 | # Provides: 14 | # Required-Start: $remote_fs $network $syslog 15 | # Required-Stop: $remote_fs $syslog 16 | # Default-Stop: 0 1 6 17 | # Short-Description: OpenStack Nova API Server 18 | # Description: At the heart of the cloud framework is an API Server. 19 | # This API Server makes command and control of the 20 | # hypervisor, storage, and networking programmatically 21 | # available to users in realization of the definition 22 | # of cloud computing. 23 | ### END INIT INFO 24 | 25 | . /etc/rc.d/init.d/functions 26 | 27 | suffix=api 28 | prog=openstack-nova-$suffix 29 | exec="/usr/bin/nova-$suffix" 30 | config="/etc/nova/nova.conf" 31 | pidfile="/var/run/nova/nova-$suffix.pid" 32 | logfile="/var/log/nova/$suffix.log" 33 | 34 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 35 | 36 | lockfile=/var/lock/subsys/$prog 37 | 38 | start() { 39 | [ -x $exec ] || exit 5 40 | [ -f $config ] || exit 6 41 | echo -n $"Starting $prog: " 42 | daemon --user nova --pidfile $pidfile "$exec --logfile $logfile &>/dev/null & echo \$! > $pidfile" 43 | retval=$? 44 | echo 45 | [ $retval -eq 0 ] && touch $lockfile 46 | return $retval 47 | } 48 | 49 | stop() { 50 | echo -n $"Stopping $prog: " 51 | killproc -p $pidfile $prog 52 | retval=$? 
53 | echo 54 | [ $retval -eq 0 ] && rm -f $lockfile 55 | return $retval 56 | } 57 | 58 | restart() { 59 | stop 60 | start 61 | } 62 | 63 | reload() { 64 | restart 65 | } 66 | 67 | force_reload() { 68 | restart 69 | } 70 | 71 | rh_status() { 72 | status -p $pidfile $prog 73 | } 74 | 75 | rh_status_q() { 76 | rh_status >/dev/null 2>&1 77 | } 78 | 79 | 80 | case "$1" in 81 | start) 82 | rh_status_q && exit 0 83 | $1 84 | ;; 85 | stop) 86 | rh_status_q || exit 0 87 | $1 88 | ;; 89 | restart) 90 | $1 91 | ;; 92 | reload) 93 | rh_status_q || exit 7 94 | $1 95 | ;; 96 | force-reload) 97 | force_reload 98 | ;; 99 | status) 100 | rh_status 101 | ;; 102 | condrestart|try-restart) 103 | rh_status_q || exit 0 104 | restart 105 | ;; 106 | *) 107 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 108 | exit 2 109 | esac 110 | exit $? 111 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/files/openstack-nova-cert: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # openstack-nova-cert OpenStack Nova cert Worker 4 | # 5 | # chkconfig: - 98 02 6 | # description: cert manages auth cert access and creation 7 | 8 | ### BEGIN INIT INFO 9 | # Provides: 10 | # Required-Start: $remote_fs $network $syslog 11 | # Required-Stop: $remote_fs $syslog 12 | # Default-Stop: 0 1 6 13 | # Short-Description: OpenStack Nova Cert Manager 14 | # Description: cert manages auth cert access and creation 15 | ### END INIT INFO 16 | 17 | . /etc/rc.d/init.d/functions 18 | 19 | suffix=cert 20 | prog=openstack-nova-$suffix 21 | exec="/usr/bin/nova-$suffix" 22 | config="/etc/nova/nova.conf" 23 | pidfile="/var/run/nova/nova-$suffix.pid" 24 | logfile="/var/log/nova/$suffix.log" 25 | 26 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 27 | 28 | lockfile=/var/lock/subsys/$prog 29 | 30 | start() { 31 | [ -x $exec ] || exit 5 32 | [ -f $config ] || exit 6 33 | echo -n $"Starting $prog: " 34 | daemon --user nova --pidfile $pidfile "$exec --logfile $logfile &>/dev/null & echo \$! > $pidfile" 35 | retval=$? 36 | echo 37 | [ $retval -eq 0 ] && touch $lockfile 38 | return $retval 39 | } 40 | 41 | stop() { 42 | echo -n $"Stopping $prog: " 43 | killproc -p $pidfile $prog 44 | retval=$? 45 | echo 46 | [ $retval -eq 0 ] && rm -f $lockfile 47 | return $retval 48 | } 49 | 50 | restart() { 51 | stop 52 | start 53 | } 54 | 55 | reload() { 56 | restart 57 | } 58 | 59 | force_reload() { 60 | restart 61 | } 62 | 63 | rh_status() { 64 | status -p $pidfile $prog 65 | } 66 | 67 | rh_status_q() { 68 | rh_status >/dev/null 2>&1 69 | } 70 | 71 | 72 | case "$1" in 73 | start) 74 | rh_status_q && exit 0 75 | $1 76 | ;; 77 | stop) 78 | rh_status_q || exit 0 79 | $1 80 | ;; 81 | restart) 82 | $1 83 | ;; 84 | reload) 85 | rh_status_q || exit 7 86 | $1 87 | ;; 88 | force-reload) 89 | force_reload 90 | ;; 91 | status) 92 | rh_status 93 | ;; 94 | condrestart|try-restart) 95 | rh_status_q || exit 0 96 | restart 97 | ;; 98 | *) 99 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 100 | exit 2 101 | esac 102 | exit $? 
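
All of the nova init scripts in this directory are one template with a different `suffix`; the program, pidfile, and logfile are all derived from it. A minimal sketch of the shared start logic, with the RHEL `daemon()` helper replaced by plain bash so it runs anywhere; names are illustrative:

```bash
#!/bin/bash
# Sketch of the start() pattern shared by the openstack-nova-* scripts.
suffix=cert
prog=openstack-nova-$suffix
exec_path="/usr/bin/nova-$suffix"
pidfile="/var/run/nova/nova-$suffix.pid"
logfile="/var/log/nova/$suffix.log"

[ -x "$exec_path" ] || exit 5         # LSB: 5 = program is not installed
[ -f /etc/nova/nova.conf ] || exit 6  # LSB: 6 = program is not configured

# Launch in the background and record the PID, as the real scripts do
# through daemon --user nova --pidfile ...
"$exec_path" --logfile "$logfile" &>/dev/null &
echo $! > "$pidfile"
```
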
103 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/files/openstack-nova-compute: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # openstack-nova-compute OpenStack Nova Compute Worker 4 | # 5 | # chkconfig: - 98 02 6 | # description: Compute workers manage computing instances on host \ 7 | # machines. Through the API, commands are dispatched \ 8 | # to compute workers to: \ 9 | # * Run instances \ 10 | # * Terminate instances \ 11 | # * Reboot instances \ 12 | # * Attach volumes \ 13 | # * Detach volumes \ 14 | # * Get console output 15 | 16 | ### BEGIN INIT INFO 17 | # Provides: 18 | # Required-Start: $remote_fs $network $syslog 19 | # Required-Stop: $remote_fs $syslog 20 | # Default-Stop: 0 1 6 21 | # Short-Description: OpenStack Nova Compute Worker 22 | # Description: Compute workers manage computing instances on host 23 | # machines. Through the API, commands are dispatched 24 | # to compute workers to: 25 | # * Run instances 26 | # * Terminate instances 27 | # * Reboot instances 28 | # * Attach volumes 29 | # * Detach volumes 30 | # * Get console output 31 | ### END INIT INFO 32 | 33 | . /etc/rc.d/init.d/functions 34 | 35 | suffix=compute 36 | prog=openstack-nova-$suffix 37 | exec="/usr/bin/nova-$suffix" 38 | config="/etc/nova/nova.conf" 39 | pidfile="/var/run/nova/nova-$suffix.pid" 40 | logfile="/var/log/nova/$suffix.log" 41 | 42 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 43 | 44 | lockfile=/var/lock/subsys/$prog 45 | 46 | start() { 47 | [ -x $exec ] || exit 5 48 | [ -f $config ] || exit 6 49 | echo -n $"Starting $prog: " 50 | daemon --user nova --pidfile $pidfile "$exec --logfile $logfile &>/dev/null & echo \$! > $pidfile" 51 | retval=$? 52 | echo 53 | [ $retval -eq 0 ] && touch $lockfile 54 | return $retval 55 | } 56 | 57 | stop() { 58 | echo -n $"Stopping $prog: " 59 | killproc -p $pidfile $prog 60 | retval=$? 61 | echo 62 | [ $retval -eq 0 ] && rm -f $lockfile 63 | return $retval 64 | } 65 | 66 | restart() { 67 | stop 68 | start 69 | } 70 | 71 | reload() { 72 | restart 73 | } 74 | 75 | force_reload() { 76 | restart 77 | } 78 | 79 | rh_status() { 80 | status -p $pidfile $prog 81 | } 82 | 83 | rh_status_q() { 84 | rh_status >/dev/null 2>&1 85 | } 86 | 87 | 88 | case "$1" in 89 | start) 90 | rh_status_q && exit 0 91 | $1 92 | ;; 93 | stop) 94 | rh_status_q || exit 0 95 | $1 96 | ;; 97 | restart) 98 | $1 99 | ;; 100 | reload) 101 | rh_status_q || exit 7 102 | $1 103 | ;; 104 | force-reload) 105 | force_reload 106 | ;; 107 | status) 108 | rh_status 109 | ;; 110 | condrestart|try-restart) 111 | rh_status_q || exit 0 112 | restart 113 | ;; 114 | *) 115 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 116 | exit 2 117 | esac 118 | exit $? 119 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/files/openstack-nova-conductor: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # openstack-nova-conductor OpenStack Nova Compute DB Access service 4 | # 5 | # chkconfig: - 98 02 6 | # description: Implementation of an S3-like storage server based on local files. 
7 | 8 | ### BEGIN INIT INFO 9 | # Provides: 10 | # Required-Start: $remote_fs $network $syslog 11 | # Required-Stop: $remote_fs $syslog 12 | # Default-Stop: 0 1 6 13 | # Short-Description: OpenStack Nova Compute DB Access service 14 | # Description: Service to handle database access for compute nodes 15 | ### END INIT INFO 16 | 17 | . /etc/rc.d/init.d/functions 18 | 19 | suffix=conductor 20 | prog=openstack-nova-$suffix 21 | exec="/usr/bin/nova-$suffix" 22 | config="/etc/nova/nova.conf" 23 | pidfile="/var/run/nova/nova-$suffix.pid" 24 | logfile="/var/log/nova/$suffix.log" 25 | 26 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 27 | 28 | lockfile=/var/lock/subsys/$prog 29 | 30 | start() { 31 | [ -x $exec ] || exit 5 32 | [ -f $config ] || exit 6 33 | echo -n $"Starting $prog: " 34 | daemon --user nova --pidfile $pidfile "$exec --logfile $logfile &>/dev/null & echo \$! > $pidfile" 35 | retval=$? 36 | echo 37 | [ $retval -eq 0 ] && touch $lockfile 38 | return $retval 39 | } 40 | 41 | stop() { 42 | echo -n $"Stopping $prog: " 43 | killproc -p $pidfile $prog 44 | retval=$? 45 | echo 46 | [ $retval -eq 0 ] && rm -f $lockfile 47 | return $retval 48 | } 49 | 50 | restart() { 51 | stop 52 | start 53 | } 54 | 55 | reload() { 56 | restart 57 | } 58 | 59 | force_reload() { 60 | restart 61 | } 62 | 63 | rh_status() { 64 | status -p $pidfile $prog 65 | } 66 | 67 | rh_status_q() { 68 | rh_status >/dev/null 2>&1 69 | } 70 | 71 | 72 | case "$1" in 73 | start) 74 | rh_status_q && exit 0 75 | $1 76 | ;; 77 | stop) 78 | rh_status_q || exit 0 79 | $1 80 | ;; 81 | restart) 82 | $1 83 | ;; 84 | reload) 85 | rh_status_q || exit 7 86 | $1 87 | ;; 88 | force-reload) 89 | force_reload 90 | ;; 91 | status) 92 | rh_status 93 | ;; 94 | condrestart|try-restart) 95 | rh_status_q || exit 0 96 | restart 97 | ;; 98 | *) 99 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 100 | exit 2 101 | esac 102 | exit $? 103 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/files/openstack-nova-console: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # openstack-nova-console OpenStack Nova Console Proxy 4 | # 5 | # chkconfig: - 98 02 6 | # description: OpenStack Nova Console Proxy Server 7 | 8 | ### BEGIN INIT INFO 9 | # Provides: 10 | # Required-Start: $remote_fs $network $syslog 11 | # Required-Stop: $remote_fs $syslog 12 | # Default-Stop: 0 1 6 13 | # Short-Description: OpenStack Nova Console Proxy 14 | # Description: OpenStack Nova Console Proxy Server 15 | ### END INIT INFO 16 | 17 | . /etc/rc.d/init.d/functions 18 | 19 | suffix=console 20 | prog=openstack-nova-$suffix 21 | exec="/usr/bin/nova-$suffix" 22 | config="/etc/nova/nova.conf" 23 | pidfile="/var/run/nova/nova-$suffix.pid" 24 | logfile="/var/log/nova/$suffix.log" 25 | 26 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 27 | 28 | lockfile=/var/lock/subsys/$prog 29 | 30 | start() { 31 | [ -x $exec ] || exit 5 32 | [ -f $config ] || exit 6 33 | echo -n $"Starting $prog: " 34 | daemon --user nova --pidfile $pidfile "$exec --logfile $logfile &>/dev/null & echo \$! > $pidfile" 35 | retval=$? 36 | echo 37 | [ $retval -eq 0 ] && touch $lockfile 38 | return $retval 39 | } 40 | 41 | stop() { 42 | echo -n $"Stopping $prog: " 43 | killproc -p $pidfile $prog 44 | retval=$? 
45 | echo 46 | [ $retval -eq 0 ] && rm -f $lockfile 47 | return $retval 48 | } 49 | 50 | restart() { 51 | stop 52 | start 53 | } 54 | 55 | reload() { 56 | restart 57 | } 58 | 59 | force_reload() { 60 | restart 61 | } 62 | 63 | rh_status() { 64 | status -p $pidfile $prog 65 | } 66 | 67 | rh_status_q() { 68 | rh_status >/dev/null 2>&1 69 | } 70 | 71 | 72 | case "$1" in 73 | start) 74 | rh_status_q && exit 0 75 | $1 76 | ;; 77 | stop) 78 | rh_status_q || exit 0 79 | $1 80 | ;; 81 | restart) 82 | $1 83 | ;; 84 | reload) 85 | rh_status_q || exit 7 86 | $1 87 | ;; 88 | force-reload) 89 | force_reload 90 | ;; 91 | status) 92 | rh_status 93 | ;; 94 | condrestart|try-restart) 95 | rh_status_q || exit 0 96 | restart 97 | ;; 98 | *) 99 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 100 | exit 2 101 | esac 102 | exit $? 103 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/files/openstack-nova-consoleauth: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # openstack-nova-console OpenStack Nova Console Auth Proxy 4 | # 5 | # chkconfig: - 98 02 6 | # description: OpenStack Nova Console Auth Proxy Server 7 | 8 | ### BEGIN INIT INFO 9 | # Provides: 10 | # Required-Start: $remote_fs $network $syslog 11 | # Required-Stop: $remote_fs $syslog 12 | # Default-Stop: 0 1 6 13 | # Short-Description: OpenStack Nova Console Auth Proxy 14 | # Description: OpenStack Nova Console Auth Proxy Server 15 | ### END INIT INFO 16 | 17 | . /etc/rc.d/init.d/functions 18 | 19 | suffix=consoleauth 20 | prog=openstack-nova-$suffix 21 | exec="/usr/bin/nova-$suffix" 22 | config="/etc/nova/nova.conf" 23 | pidfile="/var/run/nova/nova-$suffix.pid" 24 | logfile="/var/log/nova/$suffix.log" 25 | 26 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 27 | 28 | lockfile=/var/lock/subsys/$prog 29 | 30 | start() { 31 | [ -x $exec ] || exit 5 32 | [ -f $config ] || exit 6 33 | echo -n $"Starting $prog: " 34 | daemon --user nova --pidfile $pidfile "$exec --logfile $logfile &>/dev/null & echo \$! > $pidfile" 35 | retval=$? 36 | echo 37 | [ $retval -eq 0 ] && touch $lockfile 38 | return $retval 39 | } 40 | 41 | stop() { 42 | echo -n $"Stopping $prog: " 43 | killproc -p $pidfile $prog 44 | retval=$? 45 | echo 46 | [ $retval -eq 0 ] && rm -f $lockfile 47 | return $retval 48 | } 49 | 50 | restart() { 51 | stop 52 | start 53 | } 54 | 55 | reload() { 56 | restart 57 | } 58 | 59 | force_reload() { 60 | restart 61 | } 62 | 63 | rh_status() { 64 | status -p $pidfile $prog 65 | } 66 | 67 | rh_status_q() { 68 | rh_status >/dev/null 2>&1 69 | } 70 | 71 | 72 | case "$1" in 73 | start) 74 | rh_status_q && exit 0 75 | $1 76 | ;; 77 | stop) 78 | rh_status_q || exit 0 79 | $1 80 | ;; 81 | restart) 82 | $1 83 | ;; 84 | reload) 85 | rh_status_q || exit 7 86 | $1 87 | ;; 88 | force-reload) 89 | force_reload 90 | ;; 91 | status) 92 | rh_status 93 | ;; 94 | condrestart|try-restart) 95 | rh_status_q || exit 0 96 | restart 97 | ;; 98 | *) 99 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 100 | exit 2 101 | esac 102 | exit $? 
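
Every script here follows the same pidfile/lockfile lifecycle: `start()` records the PID under /var/run/nova and touches a lockfile in /var/lock/subsys (which is what makes the rc system run `stop` at shutdown), and `stop()` removes it. A hedged inspection sketch using the paths from the scripts above:

```bash
# Inspect the runtime artifacts the init scripts maintain.
suffix=consoleauth
cat /var/run/nova/nova-$suffix.pid             # PID recorded by start()
ls -l /var/lock/subsys/openstack-nova-$suffix  # created on successful start
tail -n 20 /var/log/nova/$suffix.log           # the --logfile target
```
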
103 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/files/openstack-nova-metadata-api: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # openstack-nova-metadata-api OpenStack Nova Metadata API Server 4 | # 5 | # chkconfig: - 98 02 6 | # description: Metadata provider for guests 7 | 8 | ### BEGIN INIT INFO 9 | # Provides: 10 | # Required-Start: $remote_fs $network $syslog 11 | # Required-Stop: $remote_fs $syslog 12 | # Default-Stop: 0 1 6 13 | # Short-Description: OpenStack Nova Metadata API Server 14 | # Description: OpenStack Nova Metadata API Server 15 | ### END INIT INFO 16 | 17 | . /etc/rc.d/init.d/functions 18 | 19 | suffix=metadata-api 20 | prog=openstack-nova-$suffix 21 | exec="/usr/bin/nova-api-metadata" 22 | config="/etc/nova/nova.conf" 23 | pidfile="/var/run/nova/nova-$suffix.pid" 24 | logfile="/var/log/nova/$suffix.log" 25 | 26 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 27 | 28 | lockfile=/var/lock/subsys/$prog 29 | 30 | start() { 31 | [ -x $exec ] || exit 5 32 | [ -f $config ] || exit 6 33 | echo -n $"Starting $prog: " 34 | daemon --user nova --pidfile $pidfile "$exec --logfile $logfile &>/dev/null & echo \$! > $pidfile" 35 | retval=$? 36 | echo 37 | [ $retval -eq 0 ] && touch $lockfile 38 | return $retval 39 | } 40 | 41 | stop() { 42 | echo -n $"Stopping $prog: " 43 | killproc -p $pidfile $prog 44 | retval=$? 45 | echo 46 | [ $retval -eq 0 ] && rm -f $lockfile 47 | return $retval 48 | } 49 | 50 | restart() { 51 | stop 52 | start 53 | } 54 | 55 | reload() { 56 | restart 57 | } 58 | 59 | force_reload() { 60 | restart 61 | } 62 | 63 | rh_status() { 64 | status -p $pidfile $prog 65 | } 66 | 67 | rh_status_q() { 68 | rh_status >/dev/null 2>&1 69 | } 70 | 71 | 72 | case "$1" in 73 | start) 74 | rh_status_q && exit 0 75 | $1 76 | ;; 77 | stop) 78 | rh_status_q || exit 0 79 | $1 80 | ;; 81 | restart) 82 | $1 83 | ;; 84 | reload) 85 | rh_status_q || exit 7 86 | $1 87 | ;; 88 | force-reload) 89 | force_reload 90 | ;; 91 | status) 92 | rh_status 93 | ;; 94 | condrestart|try-restart) 95 | rh_status_q || exit 0 96 | restart 97 | ;; 98 | *) 99 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 100 | exit 2 101 | esac 102 | exit $? 103 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/files/openstack-nova-novncproxy: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # openstack-nova-novncproxy OpenStack Nova Console noVNC Proxy 4 | # 5 | # chkconfig: - 98 02 6 | # description: OpenStack Nova Console noVNC Proxy Server 7 | 8 | ### BEGIN INIT INFO 9 | # Provides: 10 | # Required-Start: $remote_fs $network $syslog 11 | # Required-Stop: $remote_fs $syslog 12 | # Default-Stop: 0 1 6 13 | # Short-Description: OpenStack Nova Console noVNC Proxy 14 | # Description: OpenStack Nova Console noVNC Proxy Server 15 | ### END INIT INFO 16 | 17 | . /etc/rc.d/init.d/functions 18 | 19 | suffix=novncproxy 20 | prog=openstack-nova-$suffix 21 | exec="/usr/bin/nova-$suffix" 22 | config="/etc/nova/nova.conf" 23 | pidfile="/var/run/nova/nova-$suffix.pid" 24 | logfile="/var/log/nova/$suffix.log" 25 | 26 | [ -e /etc/sysconfig/$prog ] && . 
/etc/sysconfig/$prog 27 | 28 | lockfile=/var/lock/subsys/$prog 29 | 30 | start() { 31 | [ -x $exec ] || exit 5 32 | [ -f $config ] || exit 6 33 | echo -n $"Starting $prog: " 34 | daemon --user nova --pidfile $pidfile "$exec --web /usr/share/novnc/ ${OPTIONS} &>/dev/null & echo \$! > $pidfile" 35 | retval=$? 36 | echo 37 | [ $retval -eq 0 ] && touch $lockfile 38 | return $retval 39 | } 40 | 41 | stop() { 42 | echo -n $"Stopping $prog: " 43 | killproc -p $pidfile $prog 44 | retval=$? 45 | echo 46 | [ $retval -eq 0 ] && rm -f $lockfile 47 | return $retval 48 | } 49 | 50 | restart() { 51 | stop 52 | start 53 | } 54 | 55 | reload() { 56 | restart 57 | } 58 | 59 | force_reload() { 60 | restart 61 | } 62 | 63 | rh_status() { 64 | status -p $pidfile $prog 65 | } 66 | 67 | rh_status_q() { 68 | rh_status >/dev/null 2>&1 69 | } 70 | 71 | 72 | case "$1" in 73 | start) 74 | rh_status_q && exit 0 75 | $1 76 | ;; 77 | stop) 78 | rh_status_q || exit 0 79 | $1 80 | ;; 81 | restart) 82 | $1 83 | ;; 84 | reload) 85 | rh_status_q || exit 7 86 | $1 87 | ;; 88 | force-reload) 89 | force_reload 90 | ;; 91 | status) 92 | rh_status 93 | ;; 94 | condrestart|try-restart) 95 | rh_status_q || exit 0 96 | restart 97 | ;; 98 | *) 99 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 100 | exit 2 101 | esac 102 | exit $? 103 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/files/openstack-nova-scheduler: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # openstack-nova-scheduler OpenStack Nova Scheduler 4 | # 5 | # chkconfig: - 98 02 6 | # description: Determines which physical hardware to allocate to a virtual resource 7 | 8 | ### BEGIN INIT INFO 9 | # Provides: 10 | # Required-Start: $remote_fs $network $syslog 11 | # Required-Stop: $remote_fs $syslog 12 | # Default-Stop: 0 1 6 13 | # Short-Description: OpenStack Nova Scheduler 14 | # Description: Determines which physical hardware to allocate to a virtual resource 15 | ### END INIT INFO 16 | 17 | . /etc/rc.d/init.d/functions 18 | 19 | suffix=scheduler 20 | prog=openstack-nova-$suffix 21 | exec="/usr/bin/nova-$suffix" 22 | config="/etc/nova/nova.conf" 23 | pidfile="/var/run/nova/nova-$suffix.pid" 24 | logfile="/var/log/nova/$suffix.log" 25 | 26 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 27 | 28 | lockfile=/var/lock/subsys/$prog 29 | 30 | start() { 31 | [ -x $exec ] || exit 5 32 | [ -f $config ] || exit 6 33 | echo -n $"Starting $prog: " 34 | daemon --user nova --pidfile $pidfile "$exec --logfile $logfile &>/dev/null & echo \$! > $pidfile" 35 | retval=$? 36 | echo 37 | [ $retval -eq 0 ] && touch $lockfile 38 | return $retval 39 | } 40 | 41 | stop() { 42 | echo -n $"Stopping $prog: " 43 | killproc -p $pidfile $prog 44 | retval=$? 
45 | echo 46 | [ $retval -eq 0 ] && rm -f $lockfile 47 | return $retval 48 | } 49 | 50 | restart() { 51 | stop 52 | start 53 | } 54 | 55 | reload() { 56 | restart 57 | } 58 | 59 | force_reload() { 60 | restart 61 | } 62 | 63 | rh_status() { 64 | status -p $pidfile $prog 65 | } 66 | 67 | rh_status_q() { 68 | rh_status >/dev/null 2>&1 69 | } 70 | 71 | 72 | case "$1" in 73 | start) 74 | rh_status_q && exit 0 75 | $1 76 | ;; 77 | stop) 78 | rh_status_q || exit 0 79 | $1 80 | ;; 81 | restart) 82 | $1 83 | ;; 84 | reload) 85 | rh_status_q || exit 7 86 | $1 87 | ;; 88 | force-reload) 89 | force_reload 90 | ;; 91 | status) 92 | rh_status 93 | ;; 94 | condrestart|try-restart) 95 | rh_status_q || exit 0 96 | restart 97 | ;; 98 | *) 99 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 100 | exit 2 101 | esac 102 | exit $? 103 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/files/openstack-nova-spicehtml5proxy: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # openstack-nova-spicehtml5proxy OpenStack Nova Spice HTML5 Proxy 4 | # 5 | # chkconfig: - 98 02 6 | # description: OpenStack Nova Spice HTML5 Proxy Server 7 | 8 | ### BEGIN INIT INFO 9 | # Provides: 10 | # Required-Start: $remote_fs $network $syslog 11 | # Required-Stop: $remote_fs $syslog 12 | # Default-Stop: 0 1 6 13 | # Short-Description: OpenStack Nova Spice HTML5 Proxy 14 | # Description: OpenStack Nova Spice HTML5 Server 15 | ### END INIT INFO 16 | 17 | . /etc/rc.d/init.d/functions 18 | 19 | suffix=spicehtml5proxy 20 | prog=openstack-nova-$suffix 21 | exec="/usr/bin/nova-$suffix" 22 | config="/etc/nova/nova.conf" 23 | pidfile="/var/run/nova/nova-$suffix.pid" 24 | logfile="/var/log/nova/$suffix.log" 25 | 26 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 27 | 28 | lockfile=/var/lock/subsys/$prog 29 | 30 | start() { 31 | [ -x $exec ] || exit 5 32 | [ -f $config ] || exit 6 33 | echo -n $"Starting $prog: " 34 | daemon --user nova --pidfile $pidfile "$exec --logfile $logfile &>/dev/null & echo \$! > $pidfile" 35 | retval=$? 36 | echo 37 | [ $retval -eq 0 ] && touch $lockfile 38 | return $retval 39 | } 40 | 41 | stop() { 42 | echo -n $"Stopping $prog: " 43 | killproc -p $pidfile $prog 44 | retval=$? 45 | echo 46 | [ $retval -eq 0 ] && rm -f $lockfile 47 | return $retval 48 | } 49 | 50 | restart() { 51 | stop 52 | start 53 | } 54 | 55 | reload() { 56 | restart 57 | } 58 | 59 | force_reload() { 60 | restart 61 | } 62 | 63 | rh_status() { 64 | status -p $pidfile $prog 65 | } 66 | 67 | rh_status_q() { 68 | rh_status >/dev/null 2>&1 69 | } 70 | 71 | 72 | case "$1" in 73 | start) 74 | rh_status_q && exit 0 75 | $1 76 | ;; 77 | stop) 78 | rh_status_q || exit 0 79 | $1 80 | ;; 81 | restart) 82 | $1 83 | ;; 84 | reload) 85 | rh_status_q || exit 7 86 | $1 87 | ;; 88 | force-reload) 89 | force_reload 90 | ;; 91 | status) 92 | rh_status 93 | ;; 94 | condrestart|try-restart) 95 | rh_status_q || exit 0 96 | restart 97 | ;; 98 | *) 99 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 100 | exit 2 101 | esac 102 | exit $? 
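
The console proxies above (noVNC and SPICE HTML5, plus the XVP VNC proxy that follows) only relay traffic; a client first asks the compute API for a console URL, which nova builds from `novncproxy_base_url` (rendered from pillar['nova']['NOVNCPROXY_BASE_URL'] in config.sls). A hedged Mitaka-era client sketch; the instance name is a placeholder and the usual OS_* credentials are assumed to be in the environment:

```bash
# Request browser-ready console URLs for an instance.
nova get-vnc-console demo-instance novnc
nova get-spice-console demo-instance spice-html5
```
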
103 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/files/openstack-nova-xvpvncproxy: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # openstack-nova-xvpvncproxy OpenStack Nova Console XVP VNC Proxy 4 | # 5 | # chkconfig: - 98 02 6 | # description: OpenStack Nova Console XVP VNC Proxy Server 7 | 8 | ### BEGIN INIT INFO 9 | # Provides: 10 | # Required-Start: $remote_fs $network $syslog 11 | # Required-Stop: $remote_fs $syslog 12 | # Default-Stop: 0 1 6 13 | # Short-Description: OpenStack Nova Console XVP VNC Proxy 14 | # Description: OpenStack Nova Console XVP VNC Proxy Server 15 | ### END INIT INFO 16 | 17 | . /etc/rc.d/init.d/functions 18 | 19 | suffix=xvpvncproxy 20 | prog=openstack-nova-$suffix 21 | exec="/usr/bin/nova-$suffix" 22 | config="/etc/nova/nova.conf" 23 | pidfile="/var/run/nova/nova-$suffix.pid" 24 | logfile="/var/log/nova/$suffix.log" 25 | 26 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 27 | 28 | lockfile=/var/lock/subsys/$prog 29 | 30 | start() { 31 | [ -x $exec ] || exit 5 32 | [ -f $config ] || exit 6 33 | echo -n $"Starting $prog: " 34 | daemon --user nova --pidfile $pidfile "$exec --logfile $logfile &>/dev/null & echo \$! > $pidfile" 35 | retval=$? 36 | echo 37 | [ $retval -eq 0 ] && touch $lockfile 38 | return $retval 39 | } 40 | 41 | stop() { 42 | echo -n $"Stopping $prog: " 43 | killproc -p $pidfile $prog 44 | retval=$? 45 | echo 46 | [ $retval -eq 0 ] && rm -f $lockfile 47 | return $retval 48 | } 49 | 50 | restart() { 51 | stop 52 | start 53 | } 54 | 55 | reload() { 56 | restart 57 | } 58 | 59 | force_reload() { 60 | restart 61 | } 62 | 63 | rh_status() { 64 | status -p $pidfile $prog 65 | } 66 | 67 | rh_status_q() { 68 | rh_status >/dev/null 2>&1 69 | } 70 | 71 | 72 | case "$1" in 73 | start) 74 | rh_status_q && exit 0 75 | $1 76 | ;; 77 | stop) 78 | rh_status_q || exit 0 79 | $1 80 | ;; 81 | restart) 82 | $1 83 | ;; 84 | reload) 85 | rh_status_q || exit 7 86 | $1 87 | ;; 88 | force-reload) 89 | force_reload 90 | ;; 91 | status) 92 | rh_status 93 | ;; 94 | condrestart|try-restart) 95 | rh_status_q || exit 0 96 | restart 97 | ;; 98 | *) 99 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 100 | exit 2 101 | esac 102 | exit $? 
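
The `nova/init.sls` that follows, like the keystone, glance, and neutron equivalents, renders a bootstrap script from pillar and runs it exactly once, guarded by a lock file. The guard is plain shell; a minimal sketch of the same idempotency pattern:

```bash
# One-shot bootstrap: the lock file makes repeated highstate runs a no-op,
# mirroring the cmd.run name/unless pair in the state below.
if [ ! -f /etc/nova-datainit.lock ]; then
    bash /usr/local/bin/nova_init.sh && touch /etc/nova-datainit.lock
fi
```
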
103 | -------------------------------------------------------------------------------- /states/openstack-mitaka/nova/init.sls: -------------------------------------------------------------------------------- 1 | nova-init: 2 | file.managed: 3 | - name: /usr/local/bin/nova_init.sh 4 | - source: salt://openstack/nova/files/nova_init.sh 5 | - mode: 755 6 | - user: root 7 | - group: root 8 | - template: jinja 9 | - defaults: 10 | KEYSTONE_ADMIN_TENANT: {{ pillar['keystone']['KEYSTONE_ADMIN_TENANT'] }} 11 | KEYSTONE_ADMIN_USER: {{ pillar['keystone']['KEYSTONE_ADMIN_USER'] }} 12 | KEYSTONE_ADMIN_PASSWD: {{ pillar['keystone']['KEYSTONE_ADMIN_PASSWD'] }} 13 | KEYSTONE_AUTH_URL: {{ pillar['keystone']['KEYSTONE_AUTH_URL'] }} 14 | NOVA_IP: {{ pillar['nova']['NOVA_IP'] }} 15 | AUTH_NOVA_ADMIN_TENANT: {{ pillar['nova']['AUTH_NOVA_ADMIN_TENANT'] }} 16 | AUTH_NOVA_ADMIN_USER: {{ pillar['nova']['AUTH_NOVA_ADMIN_USER'] }} 17 | AUTH_NOVA_ADMIN_PASS: {{ pillar['nova']['AUTH_NOVA_ADMIN_PASS'] }} 18 | cmd.run: 19 | - name: bash /usr/local/bin/nova_init.sh && touch /etc/nova-datainit.lock 20 | - require: 21 | - file: nova-init 22 | - unless: test -f /etc/nova-datainit.lock 23 | -------------------------------------------------------------------------------- /states/openstack-mitaka/rabbitmq/server.sls: -------------------------------------------------------------------------------- 1 | rabbitmq-server: 2 | pkg.installed: 3 | - name: rabbitmq-server 4 | service.running: 5 | - name: rabbitmq-server 6 | - enable: True 7 | - require: 8 | - pkg: rabbitmq-server 9 | -------------------------------------------------------------------------------- /states/top.sls: -------------------------------------------------------------------------------- 1 | base: 2 | 'linux-node1.example.com': 3 | - openstack.control 4 | 'linux-node2.example.com': 5 | - openstack.compute 6 | --------------------------------------------------------------------------------
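
With `states/top.sls` mapping linux-node1.example.com to openstack.control and linux-node2.example.com to openstack.compute, a hedged sketch of rolling the states out over the Salt-SSH roster; the dry run is optional but recommended:

```bash
# Preview, then apply, the states matched by top.sls on every roster host.
salt-ssh '*' state.highstate test=True
salt-ssh '*' state.highstate
```
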