├── requirements.txt ├── doc ├── source │ ├── _static │ │ ├── placeholder │ │ └── scripts │ │ │ ├── gate-check-commit.sh │ │ │ ├── minions │ │ │ ├── config.conf │ │ │ ├── compute.conf │ │ │ └── control.conf │ │ │ ├── envs │ │ │ ├── openstack_cluster_kilo_ubuntu_opencontrail.env.example │ │ │ ├── openstack_single_kilo_ubuntu_opencontrail.env.example │ │ │ ├── openstack_single_kilo_ubuntu_neutron.env.example │ │ │ ├── openstack_cluster_kilo_redhat_opencontrail.env.example │ │ │ ├── openstack_single_kilo_redhat_opencontrail.env.example │ │ │ ├── openstack_cluster.env.example │ │ │ └── openstack_single.env.example │ │ │ ├── requirements │ │ │ └── heat.txt │ │ │ ├── Vagrantfile │ │ │ ├── bootstrap │ │ │ ├── bootstrap-salt-minion.sh │ │ │ └── bootstrap-salt-master.sh │ │ │ └── openstack_single.hot │ ├── install │ │ ├── figures │ │ │ ├── meta_host.png │ │ │ ├── meta_system.png │ │ │ ├── meta_service.png │ │ │ ├── openstack_system.png │ │ │ ├── server_topology.jpg │ │ │ ├── mda_system_composition.png │ │ │ └── production_architecture.png │ │ ├── install-infrastructure-validate.rst │ │ ├── navigation.txt │ │ ├── overview-security.rst │ │ ├── install-openstack.rst │ │ ├── install-config.rst │ │ ├── install-infrastructure.rst │ │ ├── overview.rst │ │ ├── configure.rst │ │ ├── overview-requirements.rst │ │ ├── index.rst │ │ ├── install-infrastructure-orchestrate.rst │ │ ├── install-openstack-validate.rst │ │ ├── configure-image.rst │ │ ├── overview-pillar.rst │ │ ├── install-openstack-orchestrate.rst │ │ ├── install-config-validate.rst │ │ ├── configure-telemetry.rst │ │ ├── configure-orchestrate.rst │ │ ├── configure-volume.rst │ │ ├── overview-salt.rst │ │ ├── install-config-minion.rst │ │ ├── configure-network.rst │ │ ├── configure-compute.rst │ │ ├── install-config-master.rst │ │ ├── overview-server-topology.rst │ │ ├── configure-dashboard.rst │ │ ├── overview-mda.rst │ │ └── configure-infrastructure.rst │ ├── operate │ │ ├── figures │ │ │ ├── kibana_fields.png │ │ │ ├── 
graphite_composer.png │ │ │ ├── graphite_functions.png │ │ │ ├── heka_message_flow.png │ │ │ ├── kibana_save_search.png │ │ │ ├── heka_overview_diagram.png │ │ │ └── kibana_index_pattern.png │ │ ├── navigation.txt │ │ ├── troubleshoot.rst │ │ ├── monitoring.rst │ │ ├── index.rst │ │ ├── overview.rst │ │ ├── monitoring-events.rst │ │ ├── troubleshoot-database.rst │ │ ├── monitoring-logs.rst │ │ ├── monitoring-meters.rst │ │ └── troubleshoot-networking.rst │ ├── develop │ │ ├── navigation.txt │ │ ├── quickstart.rst │ │ ├── testing.rst │ │ ├── extending.rst │ │ ├── index.rst │ │ ├── testing-metadata.rst │ │ ├── testing-coding-style.rst │ │ ├── extending-formulas.rst │ │ ├── testing-integration.rst │ │ ├── quickstart-vagrant.rst │ │ ├── extending-ecosystem.rst │ │ ├── extending-contribute.rst │ │ └── quickstart-heat.rst │ ├── index.rst │ └── conf.py └── Makefile ├── .gitreview ├── test-requirements.txt ├── setup.cfg ├── setup.py ├── .gitignore ├── tox.ini ├── README.rst ├── .gitmodules └── LICENSE.txt /requirements.txt: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /doc/source/_static/placeholder: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /doc/source/_static/scripts/gate-check-commit.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | -------------------------------------------------------------------------------- /.gitreview: -------------------------------------------------------------------------------- 1 | [gerrit] 2 | host=review.opendev.org 3 | port=29418 4 | project=openstack/openstack-salt.git 5 | -------------------------------------------------------------------------------- /doc/source/_static/scripts/minions/config.conf: 
-------------------------------------------------------------------------------- 1 | 2 | id: config.openstack.local 3 | 4 | master: 10.10.10.200 5 | -------------------------------------------------------------------------------- /doc/source/_static/scripts/minions/compute.conf: -------------------------------------------------------------------------------- 1 | 2 | id: compute.openstack.local 3 | 4 | master: 10.10.10.200 5 | -------------------------------------------------------------------------------- /doc/source/_static/scripts/minions/control.conf: -------------------------------------------------------------------------------- 1 | 2 | id: control.openstack.local 3 | 4 | master: 10.10.10.200 5 | -------------------------------------------------------------------------------- /doc/source/install/figures/meta_host.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openstack-archive/openstack-salt/HEAD/doc/source/install/figures/meta_host.png -------------------------------------------------------------------------------- /doc/source/install/figures/meta_system.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openstack-archive/openstack-salt/HEAD/doc/source/install/figures/meta_system.png -------------------------------------------------------------------------------- /doc/source/install/figures/meta_service.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openstack-archive/openstack-salt/HEAD/doc/source/install/figures/meta_service.png -------------------------------------------------------------------------------- /doc/source/operate/figures/kibana_fields.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/openstack-archive/openstack-salt/HEAD/doc/source/operate/figures/kibana_fields.png -------------------------------------------------------------------------------- /doc/source/install/figures/openstack_system.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openstack-archive/openstack-salt/HEAD/doc/source/install/figures/openstack_system.png -------------------------------------------------------------------------------- /doc/source/install/figures/server_topology.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openstack-archive/openstack-salt/HEAD/doc/source/install/figures/server_topology.jpg -------------------------------------------------------------------------------- /doc/source/operate/figures/graphite_composer.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openstack-archive/openstack-salt/HEAD/doc/source/operate/figures/graphite_composer.png -------------------------------------------------------------------------------- /doc/source/operate/figures/graphite_functions.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openstack-archive/openstack-salt/HEAD/doc/source/operate/figures/graphite_functions.png -------------------------------------------------------------------------------- /doc/source/operate/figures/heka_message_flow.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openstack-archive/openstack-salt/HEAD/doc/source/operate/figures/heka_message_flow.png -------------------------------------------------------------------------------- /doc/source/operate/figures/kibana_save_search.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/openstack-archive/openstack-salt/HEAD/doc/source/operate/figures/kibana_save_search.png -------------------------------------------------------------------------------- /doc/source/operate/figures/heka_overview_diagram.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openstack-archive/openstack-salt/HEAD/doc/source/operate/figures/heka_overview_diagram.png -------------------------------------------------------------------------------- /doc/source/operate/figures/kibana_index_pattern.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openstack-archive/openstack-salt/HEAD/doc/source/operate/figures/kibana_index_pattern.png -------------------------------------------------------------------------------- /doc/source/install/figures/mda_system_composition.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openstack-archive/openstack-salt/HEAD/doc/source/install/figures/mda_system_composition.png -------------------------------------------------------------------------------- /doc/source/install/figures/production_architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openstack-archive/openstack-salt/HEAD/doc/source/install/figures/production_architecture.png -------------------------------------------------------------------------------- /doc/source/install/install-infrastructure-validate.rst: -------------------------------------------------------------------------------- 1 | 2 | Validate infrastructure services 3 | ================================ 4 | 5 | 6 | -------------- 7 | 8 | ..
include:: navigation.txt 9 | -------------------------------------------------------------------------------- /doc/source/develop/navigation.txt: -------------------------------------------------------------------------------- 1 | 2 | * `Documentation Home <../index.html>`_ 3 | * `Installation Manual <../install/index.html>`_ 4 | * `Operations Manual <../operate/index.html>`_ 5 | * `Development Documentation `_ 6 | -------------------------------------------------------------------------------- /doc/source/install/navigation.txt: -------------------------------------------------------------------------------- 1 | 2 | * `Documentation Home <../index.html>`_ 3 | * `Installation Manual `_ 4 | * `Operations Manual <../operate/index.html>`_ 5 | * `Development Documentation <../develop/index.html>`_ 6 | -------------------------------------------------------------------------------- /doc/source/operate/navigation.txt: -------------------------------------------------------------------------------- 1 | 2 | * `Documentation Home <../index.html>`_ 3 | * `Installation Manual <../install/index.html>`_ 4 | * `Operations Manual `_ 5 | * `Development Documentation <../develop/index.html>`_ 6 | -------------------------------------------------------------------------------- /doc/source/_static/scripts/envs/openstack_cluster_kilo_ubuntu_opencontrail.env.example: -------------------------------------------------------------------------------- 1 | parameters: 2 | public_net_id: ext-net 3 | instance_image: trusty-server-cloudimg-amd64 4 | config_image: trusty-server-cloudimg-amd64 5 | key_value: public part of your SSH key 6 | 7 | -------------------------------------------------------------------------------- /doc/source/_static/scripts/envs/openstack_single_kilo_ubuntu_opencontrail.env.example: -------------------------------------------------------------------------------- 1 | parameters: 2 | public_net_id: ext-net 3 | instance_image: trusty-server-cloudimg-amd64 4 | config_image: 
trusty-server-cloudimg-amd64 5 | key_value: public part of your SSH key 6 | 7 | -------------------------------------------------------------------------------- /doc/source/install/overview-security.rst: -------------------------------------------------------------------------------- 1 | 2 | Security issues 3 | ================== 4 | 5 | Encrypted communication 6 | ----------------------- 7 | 8 | 9 | 10 | System permissions 11 | ------------------ 12 | 13 | 14 | -------------- 15 | 16 | .. include:: navigation.txt 17 | -------------------------------------------------------------------------------- /doc/source/_static/scripts/envs/openstack_single_kilo_ubuntu_neutron.env.example: -------------------------------------------------------------------------------- 1 | parameters: 2 | public_net_id: ext-net 3 | instance_image: trusty-server-cloudimg-amd64 4 | config_image: trusty-server-cloudimg-amd64 5 | key_value: public part of your SSH key 6 | os_networking: neutron 7 | 8 | -------------------------------------------------------------------------------- /doc/source/_static/scripts/envs/openstack_cluster_kilo_redhat_opencontrail.env.example: -------------------------------------------------------------------------------- 1 | parameters: 2 | public_net_id: ext-net 3 | instance_image: CentOS-7-x86_64-GenericCloud 4 | config_image: trusty-server-cloudimg-amd64 5 | key_value: public part of your SSH key 6 | os_distribution: redhat 7 | 8 | -------------------------------------------------------------------------------- /doc/source/_static/scripts/envs/openstack_single_kilo_redhat_opencontrail.env.example: -------------------------------------------------------------------------------- 1 | parameters: 2 | public_net_id: ext-net 3 | instance_image: CentOS-7-x86_64-GenericCloud 4 | config_image: trusty-server-cloudimg-amd64 5 | key_value: public part of your SSH key 6 | os_distribution: redhat 7 | 8 | 
-------------------------------------------------------------------------------- /doc/source/install/install-openstack.rst: -------------------------------------------------------------------------------- 1 | 2 | Chapter 5. Install OpenStack services 3 | ===================================== 4 | 5 | .. toctree:: 6 | 7 | install-openstack-orchestrate.rst 8 | install-openstack-validate.rst 9 | 10 | -------------- 11 | 12 | .. include:: navigation.txt 13 | -------------------------------------------------------------------------------- /doc/source/operate/troubleshoot.rst: -------------------------------------------------------------------------------- 1 | 2 | ==================================== 3 | Chapter 3. Troubleshooting OpenStack 4 | ==================================== 5 | 6 | .. toctree:: 7 | 8 | troubleshoot-database.rst 9 | troubleshoot-networking.rst 10 | 11 | -------------- 12 | 13 | .. include:: navigation.txt -------------------------------------------------------------------------------- /doc/source/develop/quickstart.rst: -------------------------------------------------------------------------------- 1 | `Home `_ OpenStack-Salt Development Documentation 2 | 3 | Chapter 1. Quick start 4 | ======================= 5 | 6 | 7 | .. toctree:: 8 | 9 | quickstart-heat.rst 10 | quickstart-vagrant.rst 11 | 12 | 13 | -------------- 14 | 15 | .. 
include:: navigation.txt 16 | -------------------------------------------------------------------------------- /test-requirements.txt: -------------------------------------------------------------------------------- 1 | flake8==2.2.4 2 | hacking>=0.10.0,<0.11 3 | pep8==1.5.7 4 | pyflakes==0.8.1 5 | mccabe==0.2.1 # capped for flake8 6 | bashate>=0.2 # Apache-2.0 7 | 8 | # this is required for the docs build jobs 9 | sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 10 | oslosphinx>=2.5.0 # Apache-2.0 11 | reno>=0.1.1 # Apache-2.0 -------------------------------------------------------------------------------- /doc/source/develop/testing.rst: -------------------------------------------------------------------------------- 1 | `Home `_ OpenStack-Salt Development Documentation 2 | 3 | Chapter 3. Testing 4 | ================== 5 | 6 | .. toctree:: 7 | 8 | testing-coding-style.rst 9 | testing-metadata.rst 10 | testing-integration.rst 11 | 12 | -------------- 13 | 14 | .. include:: navigation.txt 15 | -------------------------------------------------------------------------------- /doc/source/develop/extending.rst: -------------------------------------------------------------------------------- 1 | `Home `_ OpenStack-Salt Development Documentation 2 | 3 | Chapter 2. Extending 4 | ===================== 5 | 6 | 7 | .. toctree:: 8 | 9 | extending-ecosystem.rst 10 | extending-formulas.rst 11 | extending-contribute.rst 12 | 13 | 14 | -------------- 15 | 16 | .. include:: navigation.txt 17 | -------------------------------------------------------------------------------- /doc/source/install/install-config.rst: -------------------------------------------------------------------------------- 1 | `Home `_ OpenStack-Salt Installation Manual 2 | 3 | Chapter 3. Install configuration service 4 | ======================================== 5 | 6 | .. toctree:: 7 | 8 | install-config-master.rst 9 | install-config-minion.rst 10 | install-config-validate.rst 11 | 12 | -------------- 13 | 14 | .. 
include:: navigation.txt 15 | -------------------------------------------------------------------------------- /doc/source/install/install-infrastructure.rst: -------------------------------------------------------------------------------- 1 | `Home `_ OpenStack-Salt Installation Manual 2 | 3 | Chapter 4. Install infrastructure services 4 | ========================================== 5 | 6 | .. toctree:: 7 | 8 | install-infrastructure-orchestrate.rst 9 | install-infrastructure-validate.rst 10 | 11 | 12 | -------------- 13 | 14 | .. include:: navigation.txt 15 | -------------------------------------------------------------------------------- /doc/source/install/overview.rst: -------------------------------------------------------------------------------- 1 | `Home `_ OpenStack-Salt Installation Manual 2 | 3 | Chapter 1. Overview 4 | ------------------- 5 | 6 | .. toctree:: 7 | 8 | overview-salt.rst 9 | overview-mda.rst 10 | overview-pillar.rst 11 | overview-requirements.rst 12 | overview-server-topology.rst 13 | overview-security.rst 14 | 15 | 16 | -------------- 17 | 18 | .. include:: navigation.txt 19 | -------------------------------------------------------------------------------- /doc/source/operate/monitoring.rst: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | =========================================== 9 | Chapter 2. Monitoring, Metering and Logging 10 | =========================================== 11 | 12 | .. toctree:: 13 | 14 | monitoring-events.rst 15 | monitoring-meters.rst 16 | monitoring-logs.rst 17 | 18 | -------------- 19 | 20 | ..
include:: navigation.txt 21 | -------------------------------------------------------------------------------- /doc/source/operate/index.rst: -------------------------------------------------------------------------------- 1 | ================================ 2 | OpenStack-Salt operations manual 3 | ================================ 4 | 5 | `Home `_ OpenStack-Salt Operations Manual 6 | 7 | 8 | Overview 9 | ^^^^^^^^ 10 | 11 | .. toctree:: 12 | 13 | overview.rst 14 | 15 | 16 | Monitoring 17 | ^^^^^^^^^^ 18 | 19 | .. toctree:: 20 | 21 | monitoring.rst 22 | 23 | 24 | Troubleshooting 25 | ^^^^^^^^^^^^^^^ 26 | 27 | .. toctree:: 28 | 29 | troubleshoot.rst 30 | -------------------------------------------------------------------------------- /doc/source/install/configure.rst: -------------------------------------------------------------------------------- 1 | `Home `_ OpenStack-Salt Installation Manual 2 | 3 | Chapter 2. Configuration 4 | ------------------------ 5 | 6 | .. toctree:: 7 | 8 | configure-initial.rst 9 | configure-infrastructure.rst 10 | configure-compute.rst 11 | configure-network.rst 12 | configure-volume.rst 13 | configure-image.rst 14 | configure-telemetry.rst 15 | configure-orchestrate.rst 16 | configure-dashboard.rst 17 | 18 | -------------- 19 | 20 | .. include:: navigation.txt 21 | -------------------------------------------------------------------------------- /doc/source/install/overview-requirements.rst: -------------------------------------------------------------------------------- 1 | 2 | Installation requirements 3 | ========================= 4 | 5 | Salt master 6 | ------------ 7 | 8 | Required items: 9 | 10 | - Ubuntu 14.04 LTS (Trusty Tahr) 11 | - Synchronized network time (NTP) client. 12 | - Python 2.7 or later. 13 | 14 | Salt minions 15 | ------------ 16 | 17 | Required items: 18 | 19 | - Ubuntu Server 14.04 LTS (Trusty Tahr) 64-bit operating system 20 | - Synchronized NTP client. 21 | 22 | -------------- 23 | 24 | .. 
include:: navigation.txt 25 | -------------------------------------------------------------------------------- /doc/source/_static/scripts/requirements/heat.txt: -------------------------------------------------------------------------------- 1 | 2 | python-cinderclient>=1.3.1,<1.4.0 3 | python-glanceclient>=0.19.0,<0.20.0 4 | #python-heatclient>=0.6.0,<0.7.0 5 | git+https://github.com/tcpcloud/python-heatclient.git@stable/juno#egg=heatclient 6 | python-keystoneclient>=1.6.0,<1.7.0 7 | python-neutronclient>=2.2.6,<2.3.0 8 | python-novaclient>=2.19.0,<2.20.0 9 | python-swiftclient>=2.5.0,<2.6.0 10 | 11 | oslo.config>=2.2.0,<2.3.0 12 | oslo.i18n>=2.3.0,<2.4.0 13 | oslo.serialization>=1.8.0,<1.9.0 14 | oslo.utils>=1.4.0,<1.5.0 15 | -------------------------------------------------------------------------------- /doc/source/install/index.rst: -------------------------------------------------------------------------------- 1 | ================================== 2 | OpenStack-Salt installation manual 3 | ================================== 4 | 5 | `Home `_ OpenStack-Salt Installation Manual 6 | 7 | 8 | Overview 9 | ^^^^^^^^ 10 | 11 | .. toctree:: 12 | 13 | overview.rst 14 | 15 | 16 | Configuration 17 | ^^^^^^^^^^^^^ 18 | 19 | .. toctree:: 20 | 21 | configure.rst 22 | 23 | 24 | Installation 25 | ^^^^^^^^^^^^ 26 | 27 | .. 
toctree:: 28 | 29 | install-config.rst 30 | install-infrastructure.rst 31 | install-openstack.rst 32 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [metadata] 2 | name = openstack-salt 3 | summary = SaltStack formulas for deploying OpenStack 4 | description-file = 5 | README.rst 6 | author = OpenStack 7 | author-email = openstack-dev@lists.openstack.org 8 | home-page = http://www.openstack.org/ 9 | classifier = 10 | Intended Audience :: Developers 11 | Intended Audience :: System Administrators 12 | License :: OSI Approved :: Apache Software License 13 | Operating System :: POSIX :: Linux 14 | 15 | [build_sphinx] 16 | all_files = 1 17 | build-dir = doc/build 18 | source-dir = doc/source 19 | 20 | [pbr] 21 | warnerrors = True 22 | 23 | [wheel] 24 | universal = 1 25 | -------------------------------------------------------------------------------- /doc/source/develop/index.rst: -------------------------------------------------------------------------------- 1 | 2 | Development documentation 3 | ========================= 4 | 5 | In this section, you will find documentation relevant to developing 6 | openstack-salt. 7 | 8 | Contents: 9 | 10 | 11 | Quick start 12 | ^^^^^^^^^^^ 13 | 14 | .. toctree:: 15 | :maxdepth: 2 16 | 17 | quickstart.rst 18 | 19 | 20 | Extending 21 | ^^^^^^^^^ 22 | 23 | .. toctree:: 24 | :maxdepth: 2 25 | 26 | extending.rst 27 | 28 | 29 | Testing 30 | ^^^^^^^ 31 | 32 | .. toctree:: 33 | :maxdepth: 2 34 | 35 | testing.rst 36 | 37 | 38 | Indices and tables 39 | ================== 40 | 41 | * :ref:`genindex` 42 | * :ref:`modindex` 43 | * :ref:`search` 44 | -------------------------------------------------------------------------------- /doc/source/index.rst: -------------------------------------------------------------------------------- 1 | .. 
openstack-salt documentation master file, created by 2 | sphinx-quickstart on Mon Nov 25 09:13:17 2015. 3 | 4 | =============================== 5 | OpenStack-Salt's Documentation 6 | =============================== 7 | 8 | Overview 9 | ======== 10 | 11 | This project provides scalable and reliable IT automation using SaltStack for installing and operating OpenStack cloud deployments. 12 | 13 | Contents: 14 | 15 | .. toctree:: 16 | :maxdepth: 2 17 | 18 | install/index 19 | operate/index 20 | develop/index 21 | 22 | 23 | Indices and tables 24 | ================== 25 | 26 | * :ref:`genindex` 27 | * :ref:`modindex` 28 | * :ref:`search` 29 | -------------------------------------------------------------------------------- /doc/source/install/install-infrastructure-orchestrate.rst: -------------------------------------------------------------------------------- 1 | 2 | Orchestrate infrastructure services 3 | =================================== 4 | 5 | First, execute the basic states on all nodes to ensure that the Salt minion, system and 6 | OpenSSH are set up: 7 | 8 | .. code-block:: bash 9 | 10 | salt '*' state.sls linux,salt,openssh,ntp 11 | 12 | 13 | Support infrastructure deployment 14 | --------------------------------- 15 | 16 | The metering node is deployed by running highstate: 17 | 18 | .. code-block:: bash 19 | 20 | salt 'mtr*' state.highstate 21 | 22 | On the monitoring node, git needs to be set up first: 23 | 24 | .. code-block:: bash 25 | 26 | salt 'mon*' state.sls git 27 | salt 'mon*' state.highstate 28 | 29 | 30 | -------------- 31 | 32 | .. include:: navigation.txt 33 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # Copyright (c) 2013 Hewlett-Packard Development Company, L.P. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 13 | # implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | 17 | # THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT 18 | import setuptools 19 | 20 | setuptools.setup( 21 | setup_requires=['pbr'], 22 | pbr=True) 23 | -------------------------------------------------------------------------------- /doc/source/install/install-openstack-validate.rst: -------------------------------------------------------------------------------- 1 | 2 | Validate OpenStack services 3 | =========================== 4 | 5 | Everything should be up and running now. You should execute a few checks 6 | before continuing. Execute the following checks on one or all of the control nodes. 7 | 8 | Check GlusterFS status: 9 | 10 | .. code-block:: bash 11 | 12 | gluster peer status 13 | gluster volume status 14 | 15 | Check Galera status (execute on one of the controllers): 16 | 17 | .. code-block:: bash 18 | 19 | mysql -p -e'SHOW STATUS;' 20 | 21 | Check OpenContrail status: 22 | 23 | .. code-block:: bash 24 | 25 | contrail-status 26 | 27 | Check OpenStack services: 28 | 29 | .. code-block:: bash 30 | 31 | nova-manage service list 32 | cinder-manage service list 33 | 34 | Source the Keystone credentials and try the Nova API: 35 | 36 | .. code-block:: bash 37 | 38 | source keystonerc 39 | nova list 40 | 41 | -------------- 42 | 43 | ..
include:: navigation.txt 44 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Compiled source # 2 | ################### 3 | *.com 4 | *.class 5 | *.dll 6 | *.exe 7 | *.o 8 | *.so 9 | *.pyc 10 | build/ 11 | dist/ 12 | doc/build/ 13 | doc/source/_static/scripts/.vagrant/ 14 | 15 | # Packages # 16 | ############ 17 | # it's better to unpack these files and commit the raw source 18 | # git has its own built in compression methods 19 | *.7z 20 | *.dmg 21 | *.gz 22 | *.iso 23 | *.jar 24 | *.rar 25 | *.tar 26 | *.zip 27 | 28 | # Logs and databases # 29 | ###################### 30 | *.log 31 | *.sql 32 | *.sqlite 33 | 34 | # OS generated files # 35 | ###################### 36 | .DS_Store 37 | .DS_Store? 38 | ._* 39 | .Spotlight-V100 40 | .Trashes 41 | .idea 42 | .tox 43 | *.sublime* 44 | *.egg-info 45 | Icon? 46 | ehthumbs.db 47 | Thumbs.db 48 | .eggs 49 | 50 | # User driven backup files # 51 | ############################ 52 | *.bak 53 | 54 | # Generated by pbr while building docs 55 | ###################################### 56 | AUTHORS 57 | ChangeLog 58 | 59 | # Files created by releasenotes build 60 | releasenotes/build 61 | -------------------------------------------------------------------------------- /doc/source/develop/testing-metadata.rst: -------------------------------------------------------------------------------- 1 | 2 | Metadata testing 3 | ================ 4 | 5 | Pillars are tree-like structures of data defined on the Salt Master and passed through to the minions. They allow confidential, targeted data to be securely sent only to the relevant minion. Pillar is therefore one of the most important systems when using Salt. 
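As a concrete illustration of the pillar data the tests consume, each formula ships example pillars alongside its states. The following is a minimal, hypothetical example pillar for a formula named ``service`` (the file path, keys and values are placeholders for illustration, not taken from this repository):

.. code-block:: yaml

    # tests/pillar/single.sls -- hypothetical example pillar for a single-node setup
    service:
      server:
        enabled: true
        bind:
          address: 0.0.0.0
          port: 8080

One such example pillar would exist per supported deployment setup, so the testing scenarios can exercise every code path of the formula.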
6 | 7 | 8 | Testing scenarios 9 | ----------------- 10 | 11 | The testing plan tests each formula with the example pillars, covering all possible deployment setups: 12 | 13 | The first test run covers a ``state.show_sls`` call to ensure that the state parses properly, with debug output. 14 | 15 | The second test covers ``state.sls`` to run the state definition; it then runs ``state.sls`` again, capturing the output and asserting that ``^Not Run:`` is not present in it. If it is, the state cannot detect by itself whether it has to be run and is therefore not idempotent. 16 | 17 | 18 | metadata.yml 19 | ~~~~~~~~~~~~ 20 | 21 | .. code-block:: yaml 22 | 23 | name: "service" 24 | version: "0.2" 25 | source: "https://github.com/tcpcloud/salt-formula-service" 26 | 27 | 28 | -------------- 29 | 30 | .. include:: navigation.txt 31 | -------------------------------------------------------------------------------- /doc/source/install/configure-image.rst: -------------------------------------------------------------------------------- 1 | 2 | Configuring the Image service 3 | ============================= 4 | 5 | .. code-block:: yaml 6 | 7 | glance: 8 | server: 9 | enabled: true 10 | version: kilo 11 | policy: 12 | publicize_image: 13 | - "role:admin" 14 | - "role:image_manager" 15 | database: 16 | engine: mysql 17 | host: 127.0.0.1 18 | port: 3306 19 | name: glance 20 | user: glance 21 | password: pwd 22 | identity: 23 | engine: keystone 24 | host: 127.0.0.1 25 | port: 35357 26 | tenant: service 27 | user: glance 28 | password: pwd 29 | message_queue: 30 | engine: rabbitmq 31 | host: 127.0.0.1 32 | port: 5672 33 | user: openstack 34 | password: pwd 35 | virtual_host: '/openstack' 36 | storage: 37 | engine: file 38 | images: 39 | - name: "CirrOS 0.3.1" 40 | format: qcow2 41 | file: cirros-0.3.1-x86_64-disk.img 42 | source: http://cdn.download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img 43 | public: true 44 | 45 | -------------- 46 | 47 | ..
include:: navigation.txt 48 | -------------------------------------------------------------------------------- /doc/source/operate/overview.rst: -------------------------------------------------------------------------------- 1 | `Home `_ OpenStack-Salt Operations Manual 2 | 3 | Chapter 1. Overview 4 | ------------------- 5 | 6 | 7 | 8 | The overall health of the systems is measured continuously. The metering system collects metrics from the systems and stores them in a time-series database for further evaluation and analysis. The log collecting system collects logs from all systems, transforms them into a unified form and stores them for analysis. The monitoring system checks the functionality of the separate systems and raises events in case of a threshold breach. The monitoring system may also query the log and time-series databases for accident patterns and raise an event if an anomaly is detected. 9 | 10 | .. figure:: /operate/figures/monitoring_system.svg 11 | :width: 100% 12 | :align: center 13 | 14 | **The difference between monitoring and metering** 15 | 16 | Monitoring is generally used to check the functionality of the overall system and to figure out whether the hardware of the installation needs to be scaled up. With monitoring, we also do not care that much if we lose some samples in between. Metering is required for gathering usage information as a base for resource utilisation. Many monitoring checks are simple meter checks with threshold definitions. 17 | 18 | -------------- 19 | 20 | ..
include:: navigation.txt 21 | -------------------------------------------------------------------------------- /doc/source/_static/scripts/envs/openstack_cluster.env.example: -------------------------------------------------------------------------------- 1 | parameters: 2 | public_net_id: name of external network 3 | instance_image: image for OpenStack instances, must correspond with os_distribution 4 | config_image: image for Salt master node, currently only Ubuntu 14.04 supported 5 | key_value: paste your SSH key here 6 | salt_source: salt-master installation source, options: pkg/pip, default: pkg 7 | salt_version: salt-master version, options: latest/specific, default: latest 8 | formula_source: salt formulas source, options: git/pkg, default: git 9 | formula_path: path to formulas, default: /usr/share/salt-formulas 10 | formula_branch: formulas git branch, default: master 11 | reclass_address: reclass git repository, default: https://github.com/tcpcloud/openstack-salt-model.git 12 | reclass_branch: reclass git branch, default: master 13 | os_version: OpenStack release version, options: kilo, default: kilo 14 | os_distribution: OpenStack nodes distribution, options: ubuntu, redhat, debian, default: ubuntu 15 | os_networking: OpenStack networking engine, options: opencontrail, neutron, default: opencontrail 16 | os_deployment: OpenStack architecture, options: single/cluster, default: single 17 | config_hostname: salt-master hostname, default: config 18 | config_domain: salt-master domain, default: openstack.local 19 | config_address: salt-master internal IP address, default: 10.10.10.200 20 | 21 | -------------------------------------------------------------------------------- /doc/source/_static/scripts/envs/openstack_single.env.example: -------------------------------------------------------------------------------- 1 | parameters: 2 | public_net_id: name of external network 3 | instance_image: image for OpenStack instances, must correspond with 
os_distribution 4 | config_image: image for Salt master node, currently only Ubuntu 14.04 supported 5 | key_value: paste your SSH key here 6 | salt_source: salt-master installation source, options: pkg/pip, default: pkg 7 | salt_version: salt-master version, options: latest/specific, default: latest 8 | formula_source: salt formulas source, options: git/pkg, default: git 9 | formula_path: path to formulas, default: /usr/share/salt-formulas 10 | formula_branch: formulas git branch, default: master 11 | reclass_address: reclass git repository, default: https://github.com/tcpcloud/openstack-salt-model.git 12 | reclass_branch: reclass git branch, default: master 13 | os_version: OpenStack release version, options: kilo, default: kilo 14 | os_distribution: OpenStack nodes distribution, options: ubuntu, redhat, debian, default: ubuntu 15 | os_networking: OpenStack networking engine, options: opencontrail, neutron, default: opencontrail 16 | os_deployment: OpenStack architecture, options: single/cluster, default: single 17 | config_hostname: salt-master hostname, default: config 18 | config_domain: salt-master domain, default: openstack.local 19 | config_address: salt-master internal IP address, default: 10.10.10.200 20 | 21 | -------------------------------------------------------------------------------- /doc/source/install/overview-pillar.rst: -------------------------------------------------------------------------------- 1 | 2 | Service classification 3 | ====================== 4 | 5 | Pillar is an interface for Salt designed to offer global values that are distributed to all minions. The ext_pillar option allows for any number of external pillar interfaces to be called to populate the pillar data. 6 | 7 | Pillar metadata 8 | --------------- 9 | 10 | Pillar data is managed in a similar way as the Salt State Tree. 
It is the default metadata source for minions. 11 | 12 | Reclass 13 | ------- 14 | 15 | Reclass is an “external node classifier” (ENC) for Salt, Ansible or Puppet and has the ability to merge data sources in a recursive way and to interpolate variables. 16 | 17 | reclass installation 18 | ~~~~~~~~~~~~~~~~~~~~ 19 | 20 | First we will install the application and then configure it. 21 | 22 | .. code-block:: bash 23 | 24 | cd /tmp 25 | git clone https://github.com/madduck/reclass.git 26 | cd reclass 27 | python setup.py install 28 | mkdir /etc/reclass 29 | vim /etc/reclass/reclass-config.yml 30 | 31 | Set the content to the following to set up reclass as the salt-master metadata source. 32 | 33 | .. code-block:: yaml 34 | 35 | storage_type: yaml_fs 36 | pretty_print: True 37 | output: yaml 38 | inventory_base_uri: /srv/salt/reclass 39 | 40 | To test reclass, you can use the CLI to get the complete service catalog (``reclass-salt --top``) or the pillar data of a single node (``reclass-salt --pillar``). 41 | 42 | 43 | -------------- 44 | 45 | .. include:: navigation.txt 46 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | [tox] 2 | minversion = 1.6 3 | skipsdist = True 4 | envlist = docs,pep8,bashate,releasenotes 5 | 6 | [testenv] 7 | usedevelop = True 8 | install_command = pip install -U {opts} {packages} 9 | setenv = VIRTUAL_ENV={envdir} 10 | deps = -r{toxinidir}/test-requirements.txt 11 | 12 | [testenv:docs] 13 | commands= 14 | python setup.py build_sphinx 15 | 16 | # environment used by the -infra templated docs job 17 | [testenv:venv] 18 | deps = -r{toxinidir}/test-requirements.txt 19 | commands = {posargs} 20 | 21 | # Run hacking/flake8 check for all python files 22 | [testenv:pep8] 23 | deps = flake8 24 | whitelist_externals = bash 25 | commands = 26 | bash -c "grep -Irl \ 27 | -e '!/usr/bin/env python' \ 28 | -e '!/bin/python' \ 29 | -e '!/usr/bin/python' \ 30 | --exclude-dir '.*' \ 31 | --exclude-dir 'doc' \ 32 | --exclude-dir
'*.egg' \ 33 | --exclude-dir '*.egg-info' \ 34 | --exclude-dir '*templates' \ 35 | --exclude 'tox.ini' \ 36 | --exclude '*.sh' \ 37 | {toxinidir} | xargs flake8 --verbose" 38 | 39 | 40 | # Run bashate check for all bash scripts 41 | # Ignores the following rules: 42 | # E003: Indent not multiple of 4 (we prefer to use multiples of 2) 43 | [testenv:bashate] 44 | deps = bashate 45 | whitelist_externals = bash 46 | commands = 47 | bash -c "grep -Irl \ 48 | -e '!/usr/bin/env bash' \ 49 | -e '!/bin/bash' \ 50 | -e '!/bin/sh' \ 51 | --exclude-dir '.*' \ 52 | --exclude-dir '*.egg' \ 53 | --exclude-dir '*.egg-info' \ 54 | --exclude 'tox.ini' \ 55 | {toxinidir} | xargs bashate --verbose --ignore=E003" 56 | -------------------------------------------------------------------------------- /doc/source/operate/monitoring-events.rst: -------------------------------------------------------------------------------- 1 | 2 | Event monitoring 3 | ================ 4 | 5 | **Monitoring Service (Sensu)** 6 | 7 | Sensu is often described as the “monitoring router”. Essentially, Sensu takes the results of “check” scripts run across many systems, and if certain conditions are met, passes their information to one or more “handlers”. Checks are used, for example, to determine if a service like Apache is up or down. Checks can also be used to collect data, such as MySQL query statistics or Rails application metrics. Handlers take actions, using result information, such as sending an email, messaging a chat room, or adding a data point to a graph. There are several types of handlers, but the most common and most powerful is “pipe”, a script that receives data via standard input. Check and handler scripts can be written in any language, and the community repository continues to grow! 
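The check-and-handler flow described above can be illustrated with a minimal, hypothetical check script. The process name and messages here are illustrative, not part of any shipped configuration; Sensu follows the Nagios plugin convention of exit codes 0 = OK, 1 = WARNING, 2 = CRITICAL.

```shell
# check_process: a minimal Sensu-style check (hypothetical example).
# A Sensu check definition would point at a script like this; the check's
# output and exit status are routed to handlers when the status is non-zero.
check_process() {
  local proc="$1"
  # pgrep -x matches the exact process name
  if pgrep -x "$proc" > /dev/null 2>&1; then
    echo "CheckProcess OK: ${proc} is running"
    return 0
  else
    echo "CheckProcess CRITICAL: ${proc} not found"
    return 2
  fi
}
```

A handler would then receive the check result as JSON on standard input and could, for example, send an email or post to a chat room.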
8 | 9 | Sensu properties: 10 | 11 | * Written in Ruby, using EventMachine 12 | * Great test coverage with continuous integration via Travis CI 13 | * Can use existing Nagios plugins 14 | * Configuration all in JSON 15 | * Has a message-oriented architecture, using RabbitMQ and JSON payloads 16 | * Packages are “omnibus”, for consistency, isolation, and low-friction deployment 17 | 18 | Sensu embraces modern infrastructure design, works elegantly with configuration management tools, and is built for the cloud. 19 | 20 | The Sensu framework contains a number of components. The following diagram depicts these core elements and how they interact with one another. 21 | 22 | .. image:: https://sensuapp.org/docs/latest/img/sensu-diagram-87a902f0.gif 23 | 24 | -------------- 25 | 26 | .. include:: navigation.txt 27 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | 2 | ============== 3 | OpenStack Salt 4 | ============== 5 | 6 | OpenStack-Salt is a project which aims to deploy production environments from source in a way that makes it scalable while also being simple to operate and upgrade. 7 | 8 | For an overview of the mission, repositories and related Wiki home page, please see the formal `Home Page`_ for the project. 9 | 10 | For those looking to test openstack-salt using an All-In-One (AIO) build, please see the `Quick Start`_ guide. 11 | 12 | For more detailed Installation and Operator documentation, please see the `Install Guide`_. 13 | 14 | If openstack-salt is missing something you'd like to see included, then we encourage you to see the `Developer Documentation`_ for more details on how you can get involved. 15 | 16 | Developers wishing to work on the OpenStack Salt project should always base their work on the latest Salt code, available from the master Git repository at `Source`_.
17 | 18 | If you have some questions, or would like some assistance with achieving your goals, then please feel free to reach out to us on the 19 | `OpenStack Mailing Lists`_ (particularly openstack-operators or openstack-dev) or on IRC in ``#openstack-salt`` on the `freenode network`_. 20 | 21 | .. _Home Page: https://wiki.openstack.org/wiki/OpenStackSalt 22 | .. _Install Guide: http://openstack-salt.tcpcloud.eu/install/index.html 23 | .. _Quick Start: http://openstack-salt.tcpcloud.eu/develop/quickstart.html 24 | .. _Developer Documentation: http://openstack-salt.tcpcloud.eu/develop/index.html 25 | .. _Source: http://git.openstack.org/cgit/openstack/openstack-salt 26 | .. _OpenStack Mailing Lists: http://lists.openstack.org/ 27 | .. _freenode network: https://freenode.net/ 28 | -------------------------------------------------------------------------------- /doc/source/develop/testing-coding-style.rst: -------------------------------------------------------------------------------- 1 | 2 | Coding style testing 3 | ==================== 4 | 5 | Formulas are pre-written Salt States. They are as open-ended as Salt States themselves and can be used for tasks such as installing a package, configuring and starting a service, setting up users or permissions, and many other common tasks. They have certain rules that need to be adhered to. 6 | 7 | Using double quotes with no variables 8 | ------------------------------------- 9 | 10 | In general, it's a bad idea. All strings which do not contain dynamic content (variables) should use single quotes instead of double quotes. 11 | 12 | 13 | Line length above 80 characters 14 | ------------------------------- 15 | 16 | Keep lines within the 'standard code width limit' of 80 characters; for historical reasons, the `IBM punch card <http://en.wikipedia.org/wiki/Punched_card>`_ had exactly 80 columns. 17 | 18 | Single line declarations 19 | ------------------------ 20 | 21 | Avoid extending your code by adding single-line declarations.
Avoiding them makes your code much cleaner and easier to parse or grep when searching for those declarations. 22 | 23 | No newline at the end of the file 24 | --------------------------------- 25 | 26 | Each line should be terminated in a newline character, including the last one. Some programs have problems processing the last line of a file if it isn't newline terminated. See this `Stack Overflow thread <http://stackoverflow.com/questions/729692/why-should-files-end-with-a-newline>`_. 27 | 28 | Trailing whitespace characters 29 | ------------------------------ 30 | 31 | Trailing whitespace takes up more space than necessary, and regexp-based searches may not return the expected lines because of trailing whitespace(s). 32 | 33 | -------------- 34 | 35 | .. include:: navigation.txt 36 | -------------------------------------------------------------------------------- /.gitmodules: -------------------------------------------------------------------------------- 1 | [submodule "formulas/ceilometer"] 2 | path = formulas/ceilometer 3 | url = https://git.openstack.org/openstack/salt-formula-ceilometer 4 | [submodule "formulas/cinder"] 5 | path = formulas/cinder 6 | url = https://git.openstack.org/openstack/salt-formula-cinder 7 | [submodule "formulas/glance"] 8 | path = formulas/glance 9 | url = https://git.openstack.org/openstack/salt-formula-glance 10 | [submodule "formulas/heat"] 11 | path = formulas/heat 12 | url = https://git.openstack.org/openstack/salt-formula-heat 13 | [submodule "formulas/horizon"] 14 | path = formulas/horizon 15 | url = https://git.openstack.org/openstack/salt-formula-horizon 16 | [submodule "formulas/keystone"] 17 | path = formulas/keystone 18 | url = https://git.openstack.org/openstack/salt-formula-keystone 19 | [submodule "formulas/midonet"] 20 | path = formulas/midonet 21 | url = https://git.openstack.org/openstack/salt-formula-midonet 22 | [submodule "formulas/neutron"] 23 | path = formulas/neutron 24 | url = https://git.openstack.org/openstack/salt-formula-neutron 25 |
[submodule "formulas/nova"] 26 | path = formulas/nova 27 | url = https://git.openstack.org/openstack/salt-formula-nova 28 | [submodule "formulas/opencontrail"] 29 | path = formulas/opencontrail 30 | url = https://git.openstack.org/openstack/salt-formula-opencontrail 31 | [submodule "formulas/swift"] 32 | path = formulas/swift 33 | url = https://git.openstack.org/openstack/salt-formula-swift 34 | [submodule "formulas/linux"] 35 | path = formulas/linux 36 | url = https://github.com/tcpcloud/salt-formula-linux.git 37 | [submodule "formulas/openssh"] 38 | path = formulas/openssh 39 | url = https://github.com/tcpcloud/salt-formula-openssh.git 40 | [submodule "formulas/salt"] 41 | path = formulas/salt 42 | url = https://github.com/tcpcloud/salt-formula-salt.git 43 | -------------------------------------------------------------------------------- /doc/source/develop/extending-formulas.rst: -------------------------------------------------------------------------------- 1 | `Home `_ OpenStack-Salt Development Documentation 2 | 3 | Formula guidelines 4 | ================== 5 | 6 | The OpenStack-Salt formulas are stored in the formulas directory `/usr/share/salt-formulas/env`. 7 | 8 | There are several top-level formulas that are run to prepare the host machines 9 | before actually deploying OpenStack and associated services. 10 | 11 | Running Formulas 12 | ---------------- 13 | 14 | The recommended way of running formulas is through the Salt orchestration framework. 15 | Orchestration is accomplished in Salt primarily through the `Orchestrate Runner`_. 16 | 17 | .. code-block:: bash 18 | 19 | salt-run state.orchestrate orch.system 20 | 21 | Or call a formula directly with the Salt state module. 22 | 23 | .. code-block:: bash 24 | 25 | salt-call state.sls formula 26 | 27 | Setting up the Physical Hosts 28 | ----------------------------- 29 | 30 | Run `salt-run state.orchestrate orch.bare_metal` to set up the physical hosts for further setup.
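For illustration, an orchestration state such as ``orch.bare_metal`` could be sketched roughly as follows; the target pattern and the formula list are assumptions for the sketch, not the shipped definition.

```yaml
# Illustrative Orchestrate Runner state (orch/bare_metal.sls sketch).
# Targets and formula names are assumptions; adjust to your deployment.
linux_base:
  salt.state:
    - tgt: '*'
    - sls:
      - linux
      - openssh
      - ntp

salt_minion:
  salt.state:
    - tgt: '*'
    - sls:
      - salt.minion
    - require:
      - salt: linux_base
```

Each ``salt.state`` block applies the listed formulas on the matched minions, and ``require`` orders the steps.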
31 | 32 | Setting up Infrastructure Services 33 | ---------------------------------- 34 | 35 | Infrastructure services such as RabbitMQ, memcached, Galera, and the monitoring services are not actually OpenStack services, but OpenStack relies on them. 36 | 37 | Run `salt-run state.orchestrate orch.infrastructure` to install these services. 38 | 39 | Setting up OpenStack Services 40 | ----------------------------- 41 | 42 | Running `salt-run state.orchestrate orch.openstack` will install the following OpenStack services: 43 | 44 | * Keystone 45 | * Nova 46 | * Glance 47 | * Cinder 48 | * Neutron 49 | * Horizon 50 | 51 | Optional services: 52 | 53 | * Heat 54 | * Ceilometer 55 | * Swift 56 | 57 | .. _Orchestrate Runner: https://docs.saltstack.com/en/latest/ref/runners/index.html 58 | 59 | -------------- 60 | 61 | .. include:: navigation.txt 62 | -------------------------------------------------------------------------------- /doc/source/install/install-openstack-orchestrate.rst: -------------------------------------------------------------------------------- 1 | 2 | Orchestrate OpenStack services 3 | ================================ 4 | 5 | Control nodes deployment 6 | ------------------------- 7 | 8 | Next you can deploy the basic services: 9 | 10 | * keepalived - this service will set up a virtual IP on the controllers 11 | * rabbitmq 12 | * GlusterFS server service 13 | 14 | .. code-block:: bash 15 | 16 | salt 'ctl*' state.sls keepalived,rabbitmq,glusterfs.server.service 17 | 18 | Now you can deploy the Galera MySQL and GlusterFS cluster node by node. 19 | 20 | .. code-block:: bash 21 | 22 | salt 'ctl01*' state.sls glusterfs.server,galera 23 | salt 'ctl02*' state.sls glusterfs.server,galera 24 | salt 'ctl03*' state.sls glusterfs.server,galera 25 | 26 | Next you need to ensure that GlusterFS is mounted. Permission errors are ok at 27 | this point, because some users and groups do not exist yet. 28 | ..
code-block:: bash 30 | 31 | salt 'ctl*' state.sls glusterfs.client 32 | 33 | Finally you can execute highstate to deploy the remaining services. Again, run 34 | this node by node. 35 | 36 | .. code-block:: bash 37 | 38 | salt 'ctl01*' state.highstate 39 | salt 'ctl02*' state.highstate 40 | salt 'ctl03*' state.highstate 41 | 42 | 43 | Compute nodes deployment 44 | ~~~~~~~~~~~~~~~~~~~~~~~~ 45 | 46 | Simply run highstate (ideally twice): 47 | 48 | .. code-block:: bash 49 | 50 | salt 'cmp*' state.highstate 51 | 52 | Dashboard and support infrastructure 53 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 54 | 55 | Web and metering nodes can be deployed by running highstate: 56 | 57 | .. code-block:: bash 58 | 59 | salt 'web*' state.highstate 60 | salt 'mtr*' state.highstate 61 | 62 | On the monitoring node, git needs to be set up first: 63 | 64 | .. code-block:: bash 65 | 66 | salt 'mon*' state.sls git 67 | salt 'mon*' state.highstate 68 | 69 | -------------- 70 | 71 | .. include:: navigation.txt 72 | -------------------------------------------------------------------------------- /doc/source/install/install-config-validate.rst: -------------------------------------------------------------------------------- 1 | 2 | Validate Configuration Node 3 | ============================ 4 | 5 | Now it's time to validate your configuration infrastructure. 6 | 7 | Check the validity of reclass data for the entire infrastructure: 8 | 9 | .. code-block:: bash 10 | 11 | reclass-salt --top 12 | 13 | It will return the service catalog of the entire infrastructure. 14 | 15 | Get reclass data for a specific node: 16 | 17 | .. code-block:: bash 18 | 19 | reclass-salt --pillar ctl01.workshop.cloudlab.cz 20 | 21 | Verify that all Salt minions are accepted on the master: 22 | 23 | ..
code-block:: bash 24 | 25 | root@cfg01:~# salt-key 26 | Accepted Keys: 27 | cfg01.workshop.cloudlab.cz 28 | mtr01.workshop.cloudlab.cz 29 | Denied Keys: 30 | Unaccepted Keys: 31 | Rejected Keys: 32 | 33 | Verify that all Salt minions are responding: 34 | 35 | .. code-block:: bash 36 | 37 | root@cfg01:~# salt '*workshop.cloudlab.cz' test.ping 38 | cfg01.workshop.cloudlab.cz: 39 | True 40 | mtr01.workshop.cloudlab.cz: 41 | True 42 | web01.workshop.cloudlab.cz: 43 | True 44 | cmp02.workshop.cloudlab.cz: 45 | True 46 | cmp01.workshop.cloudlab.cz: 47 | True 48 | mon01.workshop.cloudlab.cz: 49 | True 50 | ctl02.workshop.cloudlab.cz: 51 | True 52 | ctl01.workshop.cloudlab.cz: 53 | True 54 | ctl03.workshop.cloudlab.cz: 55 | True 56 | 57 | Get IP addresses of minions: 58 | 59 | .. code-block:: bash 60 | 61 | root@cfg01:~# salt "*.workshop.cloudlab.cz" grains.get ipv4" 62 | 63 | Show top states (installed services) for all nodes in the infrastructure. 64 | 65 | .. code-block:: bash 66 | 67 | root@cfg01:~# salt '*' state.show_top 68 | [INFO ] Loading fresh modules for state activity 69 | nodeXXX: 70 | ---------- 71 | base: 72 | - git 73 | - linux 74 | - ntp 75 | - salt 76 | - collectd 77 | - openssh 78 | - reclass 79 | 80 | -------------- 81 | 82 | .. include:: navigation.txt 83 | -------------------------------------------------------------------------------- /doc/source/install/configure-telemetry.rst: -------------------------------------------------------------------------------- 1 | 2 | Configuring the Telemetry service 3 | ================================= 4 | 5 | Control nodes 6 | ------------- 7 | 8 | Ceilometer API 9 | ************** 10 | 11 | .. 
code-block:: yaml 12 | 13 | ceilometer: 14 | server: 15 | enabled: true 16 | version: havana 17 | cluster: true 18 | secret: pwd 19 | bind: 20 | host: 127.0.0.1 21 | port: 8777 22 | identity: 23 | engine: keystone 24 | host: 127.0.0.1 25 | port: 35357 26 | tenant: service 27 | user: ceilometer 28 | password: pwd 29 | message_queue: 30 | engine: rabbitmq 31 | host: 127.0.0.1 32 | port: 5672 33 | user: openstack 34 | password: pwd 35 | virtual_host: '/openstack' 36 | rabbit_ha_queues: true 37 | database: 38 | engine: mongodb 39 | host: 127.0.0.1 40 | port: 27017 41 | name: ceilometer 42 | user: ceilometer 43 | password: pwd 44 | 45 | Compute nodes 46 | ------------- 47 | 48 | Ceilometer Graphite publisher 49 | ***************************** 50 | 51 | .. code-block:: yaml 52 | 53 | ceilometer: 54 | server: 55 | enabled: true 56 | publisher: 57 | graphite: 58 | enabled: true 59 | host: 10.0.0.1 60 | port: 2003 61 | 62 | Ceilometer agent 63 | **************** 64 | 65 | .. code-block:: yaml 66 | 67 | ceilometer: 68 | agent: 69 | enabled: true 70 | version: havana 71 | secret: pwd 72 | identity: 73 | engine: keystone 74 | host: 127.0.0.1 75 | port: 35357 76 | tenant: service 77 | user: ceilometer 78 | password: pwd 79 | message_queue: 80 | engine: rabbitmq 81 | host: 127.0.0.1 82 | port: 5672 83 | user: openstack 84 | password: pwd 85 | virtual_host: '/openstack' 86 | rabbit_ha_queues: true 87 | 88 | -------------- 89 | 90 | .. include:: navigation.txt 91 | -------------------------------------------------------------------------------- /doc/source/operate/troubleshoot-database.rst: -------------------------------------------------------------------------------- 1 | 2 | Troubleshooting the database 3 | ============================ 4 | 5 | MySQL Galera 6 | ************ 7 | 8 | The MySQL Galera cluster status can be verified with the following command: 9 | 10 | .. code-block:: bash 11 | 12 | root@ctl01:~# mysql -uroot -pXXXX -e "show status;" 13 | ...
14 | | wsrep_local_state_comment | Synced | 15 | | wsrep_cert_index_size | 41 | 16 | | wsrep_causal_reads | 0 | 17 | | wsrep_incoming_addresses | 10.0.106.72:3306,10.0.106.73:3306,10.0.106.71:3306 | 18 | | wsrep_cluster_conf_id | 29 | 19 | | wsrep_cluster_size | 3 | 20 | ... 21 | 22 | Rejoining one node 23 | ------------------ 24 | 25 | The MySQL Galera cluster is built from 3 nodes. Failure of one node does not cause any database outage and should be solved by restarting the mysql service. If the node cannot be rejoined to the cluster, several files must be removed first: 26 | 27 | .. code-block:: bash 28 | 29 | rm -rf /var/lib/mysql/grastate* 30 | rm -rf /var/lib/mysql/ib_log* 31 | service mysql start 32 | 33 | 34 | Restarting the whole cluster 35 | ---------------------------- 36 | 37 | In case of an outage of all three mysql cluster nodes, the cluster must be started in a specific order with specific parameters. First, check that all mysql processes on all nodes are killed. 38 | 39 | **Node 1** - configure ``wsrep_cluster_address`` without any IP addresses and start mysql 40 | 41 | .. code-block:: bash 42 | 43 | vim /etc/mysql/my.cnf 44 | .... 45 | wsrep_cluster_address=gcomm:// 46 | .... 47 | 48 | service mysql start 49 | 50 | **Node 2** and **Node 3** 51 | 52 | .. code-block:: bash 53 | 54 | rm -rf /var/lib/mysql/grastate* 55 | rm -rf /var/lib/mysql/ib_log* 56 | service mysql start 57 | 58 | -------------- 59 | 60 | .. include:: navigation.txt 61 | -------------------------------------------------------------------------------- /doc/source/install/configure-orchestrate.rst: -------------------------------------------------------------------------------- 1 | 2 | Configuring the Orchestrate service 3 | =================================== 4 | 5 | Heat server 6 | ----------- 7 | 8 | Heat control services 9 | ********************* 10 | 11 | ..
code-block:: yaml 12 | 13 | heat: 14 | server: 15 | enabled: true 16 | version: icehouse 17 | bind: 18 | metadata: 19 | address: 10.0.106.10 20 | port: 8000 21 | waitcondition: 22 | address: 10.0.106.10 23 | port: 8000 24 | watch: 25 | address: 10.0.106.10 26 | port: 8003 27 | cloudwatch: 28 | host: 10.0.106.20 29 | api: 30 | host: 10.0.106.20 31 | api_cfn: 32 | host: 10.0.106.20 33 | database: 34 | engine: mysql 35 | host: 10.0.106.20 36 | port: 3306 37 | name: heat 38 | user: heat 39 | password: password 40 | identity: 41 | engine: keystone 42 | host: 10.0.106.20 43 | port: 35357 44 | tenant: service 45 | user: heat 46 | password: password 47 | message_queue: 48 | engine: rabbitmq 49 | host: 10.0.106.20 50 | port: 5672 51 | user: openstack 52 | password: password 53 | virtual_host: '/openstack' 54 | ha_queues: True 55 | 56 | Heat template deployment 57 | ************************ 58 | 59 | .. code-block:: yaml 60 | 61 | heat: 62 | control: 63 | enabled: true 64 | system: 65 | web_production: 66 | format: hot 67 | template_file: /srv/heat/template/web_cluster.hot 68 | environment: /srv/heat/env/web_cluster/prd.env 69 | web_staging: 70 | format: hot 71 | template_file: /srv/heat/template/web_cluster.hot 72 | environment: /srv/heat/env/web_cluster/stg.env 73 | 74 | Heat client 75 | ----------- 76 | 77 | .. code-block:: yaml 78 | 79 | heat: 80 | client: 81 | enabled: true 82 | source: 83 | engine: git 84 | address: git@repo.domain.com/heat-templates.git 85 | revision: master 86 | 87 | -------------- 88 | 89 | .. include:: navigation.txt 90 | -------------------------------------------------------------------------------- /doc/source/install/configure-volume.rst: -------------------------------------------------------------------------------- 1 | 2 | Configuring the Volume service 3 | ============================== 4 | 5 | Ceph backend 6 | ------------------- 7 | 8 | .. 
code-block:: yaml 9 | 10 | cinder: 11 | controller: 12 | enabled: true 13 | backend: 14 | ceph_backend: 15 | type_name: standard-iops 16 | backend: ceph_backend 17 | pool: volumes 18 | engine: ceph 19 | user: cinder 20 | secret_uuid: da74ccb7-aa59-1721-a172-0126b1aa4e3e 21 | client_cinder_key: AQDOavlU6BsSJsmoAnFR906mvdgdfRqLHwu0Uw== 22 | 23 | 24 | Hitachi VSP backend 25 | ------------------- 26 | 27 | Cinder setup with Hitachi VSP 28 | 29 | .. code-block:: yaml 30 | 31 | cinder: 32 | controller: 33 | enabled: true 34 | backend: 35 | hus100_backend: 36 | name: HUS100 37 | backend: hus100_backend 38 | engine: hitachi_vsp 39 | connection: FC 40 | 41 | 42 | IBM Storwize backend 43 | -------------------- 44 | 45 | .. code-block:: yaml 46 | 47 | cinder: 48 | volume: 49 | enabled: true 50 | backend: 51 | 7k2_SAS: 52 | engine: storwize 53 | type_name: 7k2 SAS disk 54 | host: 192.168.0.1 55 | port: 22 56 | user: username 57 | password: pass 58 | connection: FC/iSCSI 59 | multihost: true 60 | multipath: true 61 | pool: SAS7K2 62 | 10k_SAS: 63 | engine: storwize 64 | type_name: 10k SAS disk 65 | host: 192.168.0.1 66 | port: 22 67 | user: username 68 | password: pass 69 | connection: FC/iSCSI 70 | multihost: true 71 | multipath: true 72 | pool: SAS10K 73 | 74 | Solidfire backend 75 | ------------------- 76 | 77 | Cinder setup with Hitachi VPS 78 | 79 | .. code-block:: yaml 80 | 81 | cinder: 82 | controller: 83 | enabled: true 84 | backend: 85 | hus100_backend: 86 | name: HUS100 87 | backend: hus100_backend 88 | engine: hitachi_vsp 89 | connection: FC 90 | 91 | -------------- 92 | 93 | ..
include:: navigation.txt 94 | -------------------------------------------------------------------------------- /doc/source/operate/monitoring-logs.rst: -------------------------------------------------------------------------------- 1 | 2 | Log monitoring 3 | ============================ 4 | 5 | **Log Processing Service (Heka, ElasticSearch)** 6 | 7 | Our logging stack currently contains the following services: 8 | 9 | * Heka - log collection, streaming and processing 10 | * RabbitMQ - AMQP message broker 11 | * Elasticsearch - indexed log storage 12 | * Kibana - UI for log analysis 13 | 14 | The following diagram shows the Heka message flow through the components. 15 | 16 | .. figure :: /operate/figures/heka_message_flow.png 17 | :width: 75% 18 | :align: center 19 | 20 | Heka 21 | **** 22 | 23 | Heka is an open source stream processing software system developed by Mozilla. 24 | Heka is a “Swiss Army Knife” type tool for data processing, useful for a wide variety of different tasks, such as: 25 | 26 | * Loading and parsing log files from a file system. 27 | * Accepting statsd type metrics data for aggregation and forwarding to upstream time series data stores such as graphite or InfluxDB. 28 | * Launching external processes to gather operational data from the local system. 29 | * Performing real time analysis, graphing, and anomaly detection on any data flowing through the Heka pipeline. 30 | * Shipping data from one location to another via the use of an external transport (such as AMQP) or directly (via TCP). 31 | * Delivering processed data to one or more persistent data stores. 32 | 33 | Heka overview diagram 34 | 35 | .. figure :: /operate/figures/heka_overview_diagram.png 36 | :width: 75% 37 | :align: center 38 | 39 | ElasticSearch 40 | ************* 41 | 42 | Elasticsearch is a search server based on Lucene. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
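As a sketch, indexed logs can be queried directly over the HTTP interface, e.g. with ``curl -XGET 'http://<elasticsearch-host>:9200/<log-index>/_search' -d @query.json``, where ``query.json`` could contain something like the following. The field name is an assumption and depends on how the Heka encoder maps log records to documents.

```json
{
  "query": {
    "bool": {
      "must": [
        { "match": { "severity_label": "ERROR" } }
      ]
    }
  },
  "size": 5
}
```

Kibana builds the same kind of queries behind the scenes when you search the index from its UI.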
43 | 44 | Kibana 45 | ****** 46 | 47 | Kibana is an open source data visualization plugin for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. Users can create bar, line and scatter plots, or pie charts and maps on top of large volumes of data. 48 | 49 | Kibana dashboard 50 | **************** 51 | 52 | 1. Set up an index pattern 53 | 54 | .. figure :: /operate/figures/kibana_index_pattern.png 55 | :width: 100% 56 | :align: center 57 | 58 | 2. Select fields of interest 59 | 60 | .. figure :: /operate/figures/kibana_fields.png 61 | :width: 25% 62 | :align: center 63 | 64 | 3. Save the default search 65 | 66 | .. figure :: /operate/figures/kibana_save_search.png 67 | :width: 75% 68 | :align: center 69 | 70 | -------------- 71 | 72 | .. include:: navigation.txt 73 | -------------------------------------------------------------------------------- /doc/source/install/overview-salt.rst: -------------------------------------------------------------------------------- 1 | 2 | SaltStack configuration 3 | ======================= 4 | 5 | OpenStack-Salt uses the Salt configuration platform to install and manage OpenStack. Salt is an automation platform that greatly simplifies system and application deployment. The Salt infrastructure uses the asynchronous and reliable RAET protocol to communicate, which provides fast task execution and message transport. 6 | 7 | Salt uses *formulas* to define resources written in the YAML language that orchestrate the individual parts of the system into a working entity. For more information, see `Salt Formulas `_. 8 | 9 | This guide refers to the host running the Salt formulas and metadata service as the *master* and the hosts on which Salt installs OpenStack-Salt as the *minions*. 10 | 11 | A recommended minimal layout for deployments involves five target hosts in total: three infrastructure hosts and two compute hosts. All hosts require one network interface.
More information on setting up target hosts can be found in `the section called "Server topology" `_. 12 | 13 | For more information on physical, logical, and virtual network interfaces within hosts, see `the section called "Server networking" `_. 14 | 15 | 16 | Using SaltStack 17 | --------------- 18 | 19 | Remote execution principles carry over to all aspects of the Salt platform. Commands are made of: 20 | 21 | - **Target** - matches minions by ID; globbing, regular expressions, grain matching, node groups, and compound matching are possible 22 | - **Function** - commands have the form ``module.function``; arguments are YAML formatted, and compound commands are possible 23 | 24 | 25 | Targeting minions 26 | ~~~~~~~~~~~~~~~~~~ 27 | 28 | Examples of different ways of targeting minions: 29 | 30 | .. code-block:: bash 31 | 32 | salt '*' test.version 33 | salt -E '.*' apache.signal restart 34 | salt -G 'os:Fedora' test.version 35 | salt '*' cmd.exec_code python 'import sys; print sys.version' 36 | 37 | 38 | SaltStack commands 39 | ~~~~~~~~~~~~~~~~~~ 40 | 41 | Show the minion's internal facts (grains) 42 | 43 | .. code-block:: bash 44 | 45 | salt-call grains.items 46 | 47 | Show the minion's external parameters (pillar) 48 | 49 | .. code-block:: bash 50 | 51 | salt-call pillar.data 52 | 53 | Run the full configuration catalog 54 | 55 | .. code-block:: bash 56 | 57 | salt-call state.highstate 58 | 59 | Run one given service from the catalog 60 | 61 | .. code-block:: bash 62 | 63 | salt-call state.sls servicename 64 | 65 | 66 | -------------- 67 | 68 | ..
include:: navigation.txt 69 | -------------------------------------------------------------------------------- /doc/source/_static/scripts/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | boxes = { 5 | 'ubuntu/trusty64' => { 6 | 'name' => 'ubuntu/trusty64', 7 | 'url' => 'ubuntu/trusty64' 8 | }, 9 | } 10 | 11 | Vagrant.configure("2") do |config| 12 | 13 | config.vm.define :openstack_config do |openstack_config| 14 | 15 | openstack_config.vm.hostname = 'config.openstack.local' 16 | openstack_config.vm.box = 'ubuntu/trusty64' 17 | openstack_config.vm.box_url = boxes['ubuntu/trusty64']['url'] 18 | openstack_config.vm.network :private_network, ip: "10.10.10.200" 19 | 20 | openstack_config.vm.provider :virtualbox do |vb| 21 | vb.customize ["modifyvm", :id, "--memory", 512] 22 | vb.customize ["modifyvm", :id, "--cpus", 1] 23 | vb.name = 'openstack-config' 24 | vb.gui = false 25 | end 26 | 27 | openstack_config.vm.provision :salt do |salt| 28 | salt.minion_config = "minions/config.conf" 29 | salt.colorize = true 30 | salt.bootstrap_options = "-F -c /tmp -P" 31 | end 32 | 33 | end 34 | 35 | config.vm.define :openstack_control do |openstack_control| 36 | 37 | openstack_control.vm.hostname = 'control.openstack.local' 38 | openstack_control.vm.box = 'ubuntu/trusty64' 39 | openstack_control.vm.box_url = boxes['ubuntu/trusty64']['url'] 40 | openstack_control.vm.network :private_network, ip: "10.10.10.201" 41 | 42 | openstack_control.vm.provider :virtualbox do |vb| 43 | vb.customize ["modifyvm", :id, "--memory", 4096] 44 | vb.customize ["modifyvm", :id, "--cpus", 1] 45 | vb.name = 'openstack-control' 46 | vb.gui = false 47 | end 48 | 49 | openstack_control.vm.provision :salt do |salt| 50 | salt.minion_config = "minions/control.conf" 51 | salt.colorize = true 52 | salt.bootstrap_options = "-F -c /tmp -P" 53 | end 54 | 55 | end 56 | 57 | config.vm.define :openstack_compute do 
|openstack_compute| 58 | 59 | openstack_compute.vm.hostname = 'compute.openstack.local' 60 | openstack_compute.vm.box = 'ubuntu/trusty64' 61 | openstack_compute.vm.box_url = boxes['ubuntu/trusty64']['url'] 62 | openstack_compute.vm.network :private_network, ip: "10.10.10.202" 63 | 64 | openstack_compute.vm.provider :virtualbox do |vb| 65 | vb.customize ["modifyvm", :id, "--memory", 1024] 66 | vb.customize ["modifyvm", :id, "--cpus", 1] 67 | vb.name = 'openstack-compute' 68 | vb.gui = false 69 | end 70 | 71 | openstack_compute.vm.provision :salt do |salt| 72 | salt.minion_config = "minions/compute.conf" 73 | salt.colorize = true 74 | salt.bootstrap_options = "-F -c /tmp -P" 75 | end 76 | 77 | end 78 | 79 | end 80 | -------------------------------------------------------------------------------- /doc/source/install/install-config-minion.rst: -------------------------------------------------------------------------------- 1 | 2 | Target nodes installation 3 | ========================= 4 | 5 | On most distributions, you can set up a Salt Minion with the `Salt Bootstrap `_. 6 | 7 | .. note:: 8 | 9 | In every two-step example, examine the downloaded file before running it 10 | to ensure that it does what you expect. 11 | 12 | 13 | Using ``curl`` to install the latest development version from git: 14 | 15 | .. code:: console 16 | 17 | curl -L https://bootstrap.saltstack.com -o install_salt.sh 18 | sudo sh install_salt.sh git develop 19 | 20 | Using ``wget`` to install your distribution's stable packages: 21 | 22 | .. code:: console 23 | 24 | wget -O install_salt.sh https://bootstrap.saltstack.com 25 | sudo sh install_salt.sh 26 | 27 | Install a specific version from git using ``wget``: 28 | 29 | ..
code:: console 30 | 31 | wget -O install_salt.sh https://bootstrap.saltstack.com 32 | sudo sh install_salt.sh -P git v2015.5 33 | 34 | In the above example we added ``-P``, which allows pip packages to be installed if required, but 35 | it is not a necessary flag for git-based bootstraps. 36 | 37 | Basic minion configuration 38 | --------------------------- 39 | 40 | Salt configuration is very simple. The only requirement for setting up a minion is to set the location of the master in the minion configuration file. 41 | 42 | The configuration files will be installed to :file:`/etc/salt` and are named 43 | after the respective components, :file:`/etc/salt/master`, and :file:`/etc/salt/minion`. 44 | 45 | Setting ``Salt Master host`` 46 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 47 | 48 | Although there are many Salt Minion configuration options, configuring 49 | a Salt Minion is very simple. By default a Salt Minion will 50 | try to connect to the DNS name "salt"; if the Minion is able to 51 | resolve that name correctly, no configuration is needed. 52 | 53 | If the DNS name "salt" does not resolve to point to the correct 54 | location of the Master, redefine the "master" directive in the minion 55 | configuration file, typically ``/etc/salt/minion``, as follows: 56 | 57 | .. code-block:: diff 58 | 59 | - #master: salt 60 | + master: 10.0.0.1 61 | 62 | Setting ``Salt minion ID`` 63 | ~~~~~~~~~~~~~~~~~~~~~~~~~~ 64 | 65 | Next, explicitly declare the id for this minion to use. Since Salt uses detached ids, it is possible to run multiple minions on the same machine but with different ids. 66 | 67 | .. code-block:: yaml 68 | 69 | id: foo.bar.com 70 | 71 | After updating the configuration files, restart the Salt minion. 72 | 73 | ..
code-block:: bash 74 | 75 | # Ubuntu 76 | service salt-minion restart 77 | 78 | # Redhat 79 | systemctl enable salt-minion.service 80 | systemctl start salt-minion 81 | 82 | See the `minion configuration reference `_ 83 | for more details about other configurable options. 84 | 85 | -------------- 86 | 87 | .. include:: navigation.txt 88 | -------------------------------------------------------------------------------- /doc/source/install/configure-network.rst: -------------------------------------------------------------------------------- 1 | 2 | Configuring the Network service 3 | =============================== 4 | 5 | Control nodes 6 | ------------- 7 | 8 | .. code-block:: yaml 9 | 10 | neutron: 11 | server: 12 | enabled: true 13 | version: kilo 14 | plugin: ml2/contrail 15 | bind: 16 | address: 172.20.0.1 17 | port: 9696 18 | tunnel_type: vxlan 19 | public_networks: 20 | - name: public 21 | subnets: 22 | - name: public-subnet 23 | gateway: 10.0.0.1 24 | network: 10.0.0.0/24 25 | pool_start: 10.0.5.20 26 | pool_end: 10.0.5.200 27 | dhcp: False 28 | database: 29 | engine: mysql 30 | host: 127.0.0.1 31 | port: 3306 32 | name: neutron 33 | user: neutron 34 | password: pwd 35 | identity: 36 | engine: keystone 37 | host: 127.0.0.1 38 | port: 35357 39 | user: neutron 40 | password: pwd 41 | tenant: service 42 | message_queue: 43 | engine: rabbitmq 44 | host: 127.0.0.1 45 | port: 5672 46 | user: openstack 47 | password: pwd 48 | virtual_host: '/openstack' 49 | metadata: 50 | host: 127.0.0.1 51 | port: 8775 52 | password: pass 53 | fwaas: false 54 | 55 | Network nodes 56 | ------------- 57 | 58 | .. 
code-block:: yaml 59 | 60 | neutron: 61 | bridge: 62 | enabled: true 63 | version: kilo 64 | tunnel_type: vxlan 65 | bind: 66 | address: 172.20.0.2 67 | database: 68 | engine: mysql 69 | host: 127.0.0.1 70 | port: 3306 71 | name: neutron 72 | user: neutron 73 | password: pwd 74 | identity: 75 | engine: keystone 76 | host: 127.0.0.1 77 | port: 35357 78 | user: neutron 79 | password: pwd 80 | tenant: service 81 | message_queue: 82 | engine: rabbitmq 83 | host: 127.0.0.1 84 | port: 5672 85 | user: openstack 86 | password: pwd 87 | virtual_host: '/openstack' 88 | 89 | Compute nodes 90 | ------------- 91 | 92 | .. code-block:: yaml 93 | 94 | neutron: 95 | switch: 96 | enabled: true 97 | version: kilo 98 | migration: True 99 | tunnel_type: vxlan 100 | bind: 101 | address: 172.20.0.100 102 | database: 103 | engine: mysql 104 | host: 127.0.0.1 105 | port: 3306 106 | name: neutron 107 | user: neutron 108 | password: pwd 109 | identity: 110 | engine: keystone 111 | host: 127.0.0.1 112 | port: 35357 113 | user: neutron 114 | password: pwd 115 | tenant: service 116 | message_queue: 117 | engine: rabbitmq 118 | host: 127.0.0.1 119 | port: 5672 120 | user: openstack 121 | password: pwd 122 | virtual_host: '/openstack' 123 | 124 | -------------- 125 | 126 | ..
include:: navigation.txt 127 | -------------------------------------------------------------------------------- /doc/source/_static/scripts/bootstrap/bootstrap-salt-minion.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash -e 2 | 3 | # 4 | # ENVIRONMENT 5 | # 6 | 7 | OS_DISTRIBUTION=${OS_DISTRIBUTION:-ubuntu} 8 | OS_NETWORKING=${OS_NETWORKING:-opencontrail} 9 | OS_DEPLOYMENT=${OS_DEPLOYMENT:-single} 10 | OS_SYSTEM="${OS_DISTRIBUTION}_${OS_NETWORKING}_${OS_DEPLOYMENT}" 11 | 12 | SALT_SOURCE=${SALT_SOURCE:-pkg} 13 | SALT_VERSION=${SALT_VERSION:-latest} 14 | 15 | FORMULA_SOURCE=${FORMULA_SOURCE:-git} 16 | FORMULA_PATH=${FORMULA_PATH:-/usr/share/salt-formulas} 17 | FORMULA_BRANCH=${FORMULA_BRANCH:-master} 18 | FORMULA_REPOSITORY=${FORMULA_REPOSITORY:-deb [arch=amd64] http://apt.tcpcloud.eu/nightly/ trusty tcp-salt} 19 | FORMULA_GPG=${FORMULA_GPG:-http://apt.tcpcloud.eu/public.gpg} 20 | 21 | if [ "$FORMULA_SOURCE" == "git" ]; then 22 | SALT_ENV="dev" 23 | elif [ "$FORMULA_SOURCE" == "pkg" ]; then 24 | SALT_ENV="prd" 25 | fi 26 | 27 | RECLASS_ADDRESS=${RECLASS_ADDRESS:-https://github.com/tcpcloud/openstack-salt-model.git} 28 | RECLASS_BRANCH=${RECLASS_BRANCH:-master} 29 | RECLASS_SYSTEM=${RECLASS_SYSTEM:-$OS_SYSTEM} 30 | 31 | CONFIG_HOSTNAME=${CONFIG_HOSTNAME:-config} 32 | CONFIG_DOMAIN=${CONFIG_DOMAIN:-openstack.local} 33 | CONFIG_HOST=${CONFIG_HOSTNAME}.${CONFIG_DOMAIN} 34 | CONFIG_ADDRESS=${CONFIG_ADDRESS:-10.10.10.200} 35 | 36 | MINION_MASTER=${MINION_MASTER:-$CONFIG_ADDRESS} 37 | MINION_HOSTNAME=${MINION_HOSTNAME:-minion} 38 | MINION_ID=${MINION_HOSTNAME}.${CONFIG_DOMAIN} 39 | 40 | install_salt_minion_pkg() 41 | { 42 | echo -e "\nInstalling salt minion ...\n" 43 | 44 | if [ "$SALT_VERSION" == "latest" ]; then 45 | apt-get install -y salt-common salt-minion 46 | else 47 | apt-get install -y --force-yes salt-common=$SALT_VERSION salt-minion=$SALT_VERSION 48 | fi 49 | 50 | echo -e "\nPreparing base OS repository ...\n" 
51 | 52 | echo -e "deb [arch=amd64] http://apt.tcpcloud.eu/nightly/ trusty main security extra tcp" > /etc/apt/sources.list 53 | wget -O - http://apt.tcpcloud.eu/public.gpg | apt-key add - 54 | 55 | apt-get clean 56 | apt-get update 57 | 58 | if [ "$SALT_VERSION" == "latest" ]; then 59 | apt-get install -y salt-common salt-minion 60 | else 61 | apt-get install -y --force-yes salt-common=$SALT_VERSION salt-minion=$SALT_VERSION 62 | fi 63 | 64 | echo -e "\nInstalling salt minion ...\n" 65 | 66 | [ ! -d /etc/salt/minion.d ] && mkdir -p /etc/salt/minion.d 67 | echo -e "master: $MINION_MASTER\nid: $MINION_ID" > /etc/salt/minion.d/minion.conf 68 | 69 | service salt-minion restart 70 | } 71 | 72 | install_salt_minion_pip() 73 | { 74 | echo -e "\nInstalling salt minion ...\n" 75 | 76 | echo -e "\nPreparing base OS repository ...\n" 77 | 78 | echo -e "deb [arch=amd64] http://apt.tcpcloud.eu/nightly/ trusty main security extra tcp" > /etc/apt/sources.list 79 | wget -O - http://apt.tcpcloud.eu/public.gpg | apt-key add - 80 | 81 | apt-get clean 82 | apt-get update 83 | 84 | echo -e "\nInstalling salt minion ...\n" 85 | 86 | if [ -x "`which invoke-rc.d 2>/dev/null`" -a -x "/etc/init.d/salt-minion" ] ; then 87 | apt-get purge -y salt-minion salt-common && apt-get autoremove -y 88 | fi 89 | 90 | apt-get install -y python-pip python-dev zlib1g-dev reclass git 91 | 92 | if [ "$SALT_VERSION" == "latest" ]; then 93 | pip install salt 94 | else 95 | pip install salt==$SALT_VERSION 96 | fi 97 | 98 | [ ! 
-d /etc/salt/minion.d ] && mkdir -p /etc/salt/minion.d 99 | echo -e "master: $MINION_MASTER\nid: $MINION_ID" > /etc/salt/minion.d/minion.conf 100 | 101 | wget -O /etc/init.d/salt-minion https://anonscm.debian.org/cgit/pkg-salt/salt.git/plain/debian/salt-minion.init && chmod 755 /etc/init.d/salt-minion 102 | ln -s /usr/local/bin/salt-minion /usr/bin/salt-minion 103 | 104 | service salt-minion restart 105 | } 106 | 107 | if [ "$SALT_SOURCE" == "pkg" ]; then 108 | install_salt_minion_pkg 109 | elif [ "$SALT_SOURCE" == "pip" ]; then 110 | install_salt_minion_pip 111 | fi 112 | -------------------------------------------------------------------------------- /doc/source/install/configure-compute.rst: -------------------------------------------------------------------------------- 1 | 2 | Configuring the Compute service 3 | ================================ 4 | 5 | KVM backend 6 | ------------------- 7 | 8 | Control nodes 9 | ************* 10 | 11 | Nova services on the control node 12 | 13 | .. 
code-block:: yaml 14 | 15 | nova: 16 | controller: 17 | version: kilo 18 | enabled: true 19 | security_group: true 20 | cpu_allocation_ratio: 8.0 21 | ram_allocation_ratio: 1.0 22 | bind: 23 | public_address: 10.0.0.122 24 | public_name: openstack.domain.com 25 | novncproxy_port: 6080 26 | database: 27 | engine: mysql 28 | host: 127.0.0.1 29 | port: 3306 30 | name: nova 31 | user: nova 32 | password: pwd 33 | identity: 34 | engine: keystone 35 | host: 127.0.0.1 36 | port: 35357 37 | user: nova 38 | password: pwd 39 | tenant: service 40 | message_queue: 41 | engine: rabbitmq 42 | host: 127.0.0.1 43 | port: 5672 44 | user: openstack 45 | password: pwd 46 | virtual_host: '/openstack' 47 | network: 48 | engine: neutron 49 | host: 127.0.0.1 50 | port: 9696 51 | identity: 52 | engine: keystone 53 | host: 127.0.0.1 54 | port: 35357 55 | user: neutron 56 | password: pwd 57 | tenant: service 58 | metadata: 59 | password: password 60 | 61 | Nova services from custom package repository 62 | 63 | .. code-block:: yaml 64 | 65 | nova: 66 | controller: 67 | version: kilo 68 | source: 69 | engine: pkg 70 | address: http://... 71 | .... 72 | 73 | Compute nodes 74 | ************* 75 | 76 | Nova services on compute node with Neutron networking 77 | 78 | .. 
code-block:: yaml 79 | 80 | nova: 81 | compute: 82 | version: kilo 83 | enabled: true 84 | virtualization: kvm 85 | security_group: true 86 | bind: 87 | vnc_address: 172.20.0.100 88 | vnc_port: 6080 89 | vnc_name: openstack.domain.com 90 | vnc_protocol: http 91 | database: 92 | engine: mysql 93 | host: 127.0.0.1 94 | port: 3306 95 | name: nova 96 | user: nova 97 | password: pwd 98 | identity: 99 | engine: keystone 100 | host: 127.0.0.1 101 | port: 35357 102 | user: nova 103 | password: pwd 104 | tenant: service 105 | message_queue: 106 | engine: rabbitmq 107 | host: 127.0.0.1 108 | port: 5672 109 | user: openstack 110 | password: pwd 111 | virtual_host: '/openstack' 112 | image: 113 | engine: glance 114 | host: 127.0.0.1 115 | port: 9292 116 | network: 117 | engine: neutron 118 | host: 127.0.0.1 119 | port: 9696 120 | identity: 121 | engine: keystone 122 | host: 127.0.0.1 123 | port: 35357 124 | user: neutron 125 | password: pwd 126 | tenant: service 127 | qemu: 128 | max_files: 4096 129 | max_processes: 4096 130 | 131 | Nova services on compute node with OpenContrail 132 | 133 | .. code-block:: yaml 134 | 135 | nova: 136 | compute: 137 | enabled: true 138 | ... 139 | networking: contrail 140 | 141 | Nova services on compute node with memcached caching 142 | 143 | .. code-block:: yaml 144 | 145 | nova: 146 | compute: 147 | enabled: true 148 | ... 149 | cache: 150 | engine: memcached 151 | members: 152 | - host: 127.0.0.1 153 | port: 11211 154 | - host: 127.0.0.1 155 | port: 11211 156 | 157 | -------------- 158 | 159 | .. include:: navigation.txt 160 | -------------------------------------------------------------------------------- /doc/source/develop/testing-integration.rst: -------------------------------------------------------------------------------- 1 | 2 | Integration testing 3 | =================== 4 | 5 | There are requirements, in addition to Salt's requirements, which need to be installed in order to run the test suite. Install them with the command below. 6 | 7 | ..
code-block:: bash 8 | 9 | pip install -r requirements/dev_python27.txt 10 | 11 | Once all required packages are installed, use ``tests/runtests.py`` to run all of the tests included in Salt's test suite. For more information, see ``--help``. 12 | 13 | Running the tests 14 | ----------------- 15 | 16 | An alternative way of invoking the test suite is available in setup.py: 17 | 18 | .. code-block:: bash 19 | 20 | ./setup.py test 21 | 22 | Instead of running the entire test suite, there are several ways to run only specific groups of tests or individual tests: 23 | 24 | * Run unit tests only: ./tests/runtests.py --unit-tests 25 | * Run unit and integration tests for states: ./tests/runtests.py --state 26 | * Run integration tests for an individual module: ./tests/runtests.py -n integration.modules.virt 27 | * Run unit tests for an individual module: ./tests/runtests.py -n unit.modules.virt_test 28 | * Run an individual test by using the class and test name (this example is for the test_default_kvm_profile test in the integration.module.virt) 29 | 30 | Running Unit tests without integration test daemons 31 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 32 | 33 | Since the unit tests do not require a master or minion to execute, it is often useful to be able to run unit tests individually, or as a whole group, without having to start up the integration testing daemons. Starting up the master, minion, and syndic daemons takes a lot of time before the tests can even start running and is unnecessary to run unit tests. To run unit tests without invoking the integration test daemons, simply remove the ``tests/`` portion of the runtests.py command: 34 | 35 | .. code-block:: bash 36 | 37 | ./runtests.py --unit 38 | 39 | All of the other options to run individual tests, entire classes of tests, or entire test modules still apply. 40 | 41 | 42 | Destructive integration tests 43 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 44 | 45 | Salt is used to change the settings and behavior of systems.
In order to effectively test Salt's functionality, some integration tests are written to make actual changes to the underlying system. These tests are referred to as "destructive tests". Some examples of destructive tests are adding a user or installing packages. By default, destructive tests are disabled and will be skipped. 46 | 47 | Generally, destructive tests should clean up after themselves by attempting to restore the system to its original state. For instance, if a new user is created during a test, the user should be deleted after the related test(s) have completed. However, no guarantees are made that test clean-up will complete successfully. Therefore, running destructive tests should be done with caution. 48 | 49 | To run tests marked as destructive, set the ``--run-destructive`` flag: 50 | 51 | .. code-block:: bash 52 | 53 | ./tests/runtests.py --run-destructive 54 | 55 | Automated test runs 56 | ------------------- 57 | 58 | The Jenkins server executes a series of tests across supported platforms. The tests executed from OpenStack-Salt's Jenkins server create fresh virtual machines for each test run, then execute destructive tests on the new, clean virtual machines. 59 | 60 | When a pull request is submitted to OpenStack-Salt's repository, Jenkins runs Salt's test suite on a couple of virtual machines to gauge the pull request's viability to merge into OpenStack-Salt's develop branch. If these initial tests pass, the pull request can then be merged into OpenStack-Salt's develop branch by one of OpenStack-Salt's core developers, pending their discretion. If the initial tests fail, core developers may request changes to the pull request. If the failure is unrelated to the changes in question, core developers may merge the pull request despite the initial failure. 61 | 62 | Once the pull request is merged into OpenStack-Salt's develop branch, a new set of Jenkins virtual machines will begin executing the test suite.
The develop branch tests use many more virtual machines to provide more comprehensive results. 63 | 64 | -------------- 65 | 66 | .. include:: navigation.txt 67 | -------------------------------------------------------------------------------- /doc/source/install/install-config-master.rst: -------------------------------------------------------------------------------- 1 | 2 | ======================== 3 | Configuration node setup 4 | ======================== 5 | 6 | 7 | Configuring the operating system 8 | ================================ 9 | 10 | The configuration files will be installed to :file:`/etc/salt` and are named 11 | after the respective components, :file:`/etc/salt/master`, and 12 | :file:`/etc/salt/minion`. 13 | 14 | By default the Salt master listens on ports 4505 and 4506 on all 15 | interfaces (0.0.0.0). To bind Salt to a specific IP, redefine the 16 | "interface" directive in the master configuration file, typically 17 | ``/etc/salt/master``, as follows: 18 | 19 | .. code-block:: diff 20 | 21 | - #interface: 0.0.0.0 22 | + interface: 10.0.0.1 23 | 24 | After updating the configuration file, restart the Salt master. 25 | 26 | Make sure that the mentioned ports are open in your network firewall. 27 | 28 | Open the Salt master configuration file 29 | 30 | .. code-block:: bash 31 | 32 | vim /etc/salt/master.d/master.conf 33 | 34 | and set the content to the following, enabling the ``dev`` environment and the reclass metadata source. 35 | 36 | .. code-block:: yaml 37 | 38 | file_roots: 39 | base: 40 | - /srv/salt/env/dev 41 | - /srv/salt/env/base 42 | 43 | pillar_opts: False 44 | 45 | reclass: &reclass 46 | storage_type: yaml_fs 47 | inventory_base_uri: /srv/salt/reclass 48 | 49 | ext_pillar: 50 | - reclass: *reclass 51 | 52 | master_tops: 53 | reclass: *reclass 54 | 55 | Then open the reclass configuration file and set its content to the following, to set up reclass as the salt-master metadata source. 56 | 57 | ..
code-block:: bash 58 | 59 | vim /etc/reclass/reclass-config.yml 60 | 61 | .. code-block:: yaml 62 | 63 | storage_type: yaml_fs 64 | pretty_print: True 65 | output: yaml 66 | inventory_base_uri: /srv/salt/reclass 67 | 68 | Restart the master service 69 | 70 | .. code-block:: bash 71 | 72 | # Ubuntu 73 | service salt-master restart 74 | # Redhat 75 | systemctl enable salt-master.service 76 | systemctl start salt-master 77 | 78 | 79 | See the `master configuration reference `_ 80 | for more details about other configurable options. 81 | 82 | 83 | 84 | Setting up package repository 85 | ================================ 86 | 87 | Use ``curl`` to install your distribution's stable packages. Examine the downloaded file ``install_salt.sh`` to ensure that it contains what you expect (a bash script). You need to perform this step even for the salt-master installation, as it adds the official SaltStack package management PPA repository. 88 | 89 | .. code:: console 90 | 91 | apt-get install vim curl git-core 92 | curl -L https://bootstrap.saltstack.com -o install_salt.sh 93 | sudo sh install_salt.sh 94 | 95 | After installing salt-minion, install the Salt master from the apt repository with the ``apt-get`` command. 96 | 97 | .. code-block:: bash 98 | 99 | sudo apt-get install salt-minion salt-master reclass 100 | 101 | .. Note:: 102 | 103 | Installation is tested on Ubuntu Linux 12.04/14.04, but should work on any distribution with Python 2.7 installed. 104 | You should keep Salt components at the current stable version. 105 | 106 | 107 | Configuring Secure Shell (SSH) keys 108 | =================================== 109 | 110 | Generate an SSH key file for accessing your reclass metadata and development formulas. 111 | 112 | .. code-block:: bash 113 | 114 | mkdir /root/.ssh 115 | ssh-keygen -b 4096 -t rsa -f /root/.ssh/id_rsa -q -N "" 116 | chmod 400 /root/.ssh/id_rsa 117 | 118 | Create the SaltStack environment file root; we will use the ``dev`` environment. 119 | 120 | ..
code-block:: bash 121 | 122 | mkdir /srv/salt/env/dev -p 123 | 124 | Get the reclass metadata definition from the git server. 125 | 126 | .. code-block:: bash 127 | 128 | git clone git@github.com:tcpcloud/workshop-salt-model.git /srv/salt/reclass 129 | 130 | Get the core formulas from the git repository server needed to set up the rest. 131 | 132 | .. code-block:: bash 133 | 134 | git clone git@github.com:tcpcloud/salt-formula-linux.git /srv/salt/env/dev/linux -b develop 135 | git clone git@github.com:tcpcloud/salt-formula-salt.git /srv/salt/env/dev/salt -b develop 136 | git clone git@github.com:tcpcloud/salt-formula-openssh.git /srv/salt/env/dev/openssh -b develop 137 | git clone git@github.com:tcpcloud/salt-formula-git.git /srv/salt/env/dev/git -b develop 138 | 139 | 140 | -------------- 141 | 142 | .. include:: navigation.txt 143 | -------------------------------------------------------------------------------- /doc/source/install/overview-server-topology.rst: -------------------------------------------------------------------------------- 1 | 2 | Server Topology 3 | ================== 4 | 5 | High availability is the default environment setup. The reference architecture covers only the HA deployment. HA provides replicated servers to prevent single points of failure. Single node deployments are supported for development environments in Vagrant and Heat. 6 | 7 | The production setup consists of several roles of physical nodes: 8 | 9 | * Foreman/Ubuntu MaaS 10 | * KVM Control cluster 11 | * Compute nodes 12 | 13 | Server role description 14 | ----------------------- 15 | 16 | Virtual Machine nodes: 17 | 18 | SaltMaster node 19 | ~~~~~~~~~~~~~~~ 20 | 21 | The SaltMaster node contains the supporting components for deploying the OpenStack cloud, such as the Salt master, git repositories, and a package repository.
22 | 23 | OpenStack controller nodes 24 | ~~~~~~~~~~~~~~~~~~~~~~~~~~ 25 | 26 | The controller is a fail-over cluster hosting the OpenStack core cloud components (Nova, Neutron, Cinder, Glance), OpenContrail control roles, and a multi-master database for all OpenStack services. 27 | 28 | OpenContrail controller nodes 29 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 30 | 31 | The OpenContrail controller is a fail-over cluster hosting OpenContrail Config, Neutron, Control, and other services like Cassandra, Zookeeper, Redis, HAProxy, and Keepalived, fully operated in high availability. 32 | 33 | OpenContrail analytics node 34 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~ 35 | 36 | The OpenContrail Analytics node is a fail-over cluster for OpenContrail analytics. 37 | 38 | Database node 39 | ~~~~~~~~~~~~~ 40 | 41 | MySQL Galera nodes contain the multi-master database for all OpenStack and Monitoring services. 42 | 43 | Telemetry node 44 | ~~~~~~~~~~~~~~ 45 | 46 | The Ceilometer node is separated from the central controllers for better performance, maintenance, and upgrades. A MongoDB cluster is used for storing telemetry data. 47 | 48 | Proxy node 49 | ~~~~~~~~~~~~~~ 50 | 51 | This node proxies all OpenStack APIs and dashboards. 52 | 53 | Monitoring node 54 | ~~~~~~~~~~~~~~~~~~~~ 55 | 56 | This node contains modules for TCP Monitoring, which include the Sensu open source monitoring framework, RabbitMQ, and KEDB. 57 | 58 | Billometer node 59 | ~~~~~~~~~~~~~~~~~~~ 60 | 61 | This node contains modules for TCP Billing, which include the Horizon dashboard. 62 | 63 | Metering node 64 | ~~~~~~~~~~~~~~~~~ 65 | 66 | This node contains Graphite, which is a highly scalable real-time graphing system. It includes Graphite's processing backend, Carbon, and its fixed-size database, Whisper. 67 | 68 | .. figure:: figures/production_architecture.png 69 | :width: 100% 70 | :align: center 71 | 72 | Reference Architecture 73 | -------------------------- 74 | 75 | ..
figure:: figures/server_topology.jpg 76 | :width: 100% 77 | :align: center 78 | 79 | Reclass model for: 80 | 81 | * 1x Salt master 82 | * 3x OpenStack, OpenContrail control nodes 83 | * 2x Openstack compute nodes 84 | * 1x Ceilometer, Graphite metering nodes 85 | * 1x Sensu monitoring node 86 | * 1x Heka, ElasticSearch, Kibana node 87 | 88 | .. list-table:: Nodes in setup 89 | :header-rows: 1 90 | 91 | * - **Hostname** 92 | - **Description** 93 | - **IP Address** 94 | * - cfg01 95 | - Salt Master 96 | - 172.10.10.100 97 | * - ctl01 98 | - Openstack & Opencontrail controller 99 | - 172.10.10.101 100 | * - ctl02 101 | - Openstack & Opencontrail controller 102 | - 172.10.10.102 103 | * - ctl03 104 | - Openstack & Opencontrail controller 105 | - 172.10.10.103 106 | * - web01 107 | - Openstack Dashboard and API proxy 108 | - 172.10.10.104 109 | * - cmp01 110 | - Compute node 111 | - 172.10.10.105 112 | * - cmp02 113 | - Compute node 114 | - 172.10.10.106 115 | * - mon01 116 | - Ceilometer 117 | - 172.10.10.107 118 | * - mtr01 119 | - Monitoring node 120 | - 172.10.10.108 121 | 122 | All hosts are deployed in `workshop.cloudlab.cz` domain. 123 | 124 | Instructions for reclass modification 125 | ------------------------------------------ 126 | 127 | - Fork this repository 128 | - Make customizations according to your environment: 129 | 130 | - ``classes/system/openssh/server/single.yml`` 131 | 132 | - setup public SSH key 133 | - disable password auth 134 | - comment out root password 135 | 136 | - ``nodes/cfg01.workshop.cloudlab.cz.yml`` and 137 | ``classes/system/reclass/storage/system/workshop.yml`` 138 | 139 | - fix IP addresses 140 | - fix domain 141 | 142 | - ``classes/system/openstack/common/workshop.yml`` 143 | 144 | - fix passwords and keys 145 | - fix IP addresses 146 | 147 | - ``classes/billometer/server/single.yml`` - set password 148 | 149 | - ``classes/system/graphite`` - set password 150 | 151 | -------------- 152 | 153 | .. 
include:: navigation.txt 154 | -------------------------------------------------------------------------------- /doc/source/operate/monitoring-meters.rst: -------------------------------------------------------------------------------- 1 | 2 | Meter monitoring 3 | ============================ 4 | 5 | Collectd/Graphite 6 | ***************** 7 | 8 | Collectd gathers statistics about the system it is running on and stores this information. Those statistics can then be used to find current performance bottlenecks (i.e. performance analysis) and predict future system load (i.e. capacity planning). It's written in C for performance and portability, allowing it to run on systems without a scripting language or cron daemon, such as embedded systems. At the same time it includes optimizations and features to handle hundreds of thousands of data sets. It comes with over 90 plugins which range from standard cases to very specialized and advanced topics. It provides powerful networking features and is extensible in numerous ways. 9 | 10 | Graphite is an enterprise-scale monitoring tool that runs well on cheap hardware. It was originally designed and written by Chris Davis at Orbitz in 2006 as a side project that ultimately grew to be a foundational monitoring tool. In 2008, Orbitz allowed Graphite to be released under the open source Apache 2.0 license. Since then Chris has continued to work on Graphite and has deployed it at other companies including Sears, where it serves as a pillar of the e-commerce monitoring system. Today many large companies use it. 11 | 12 | What Graphite does not do is collect data for you; however, there are some tools out there that know how to send data to Graphite. Even though it often requires a little code, sending data to Graphite is very simple. 13 | 14 | There are three basic types of meters that are stored in the time-series database.
15 | 16 | * Cumulative: increasing over time (network or disk usage counters) 17 | * Gauge: discrete items (number of connected users) and fluctuating values (system load) 18 | * Delta: values changing over time (bandwidth) 19 | 20 | Graphite consists of three software components: 21 | 22 | * carbon - a Twisted daemon that listens for time-series data 23 | * whisper - a simple database library for storing time-series data (similar in design to RRD) 24 | * graphite - a Django webapp that renders graphs on demand using Cairo 25 | 26 | Graphite composer 27 | ***************** 28 | 29 | The Graphite composer is a graphical tool for browsing Graphite metrics and tuning the functions applied to them. 30 | 31 | .. figure:: /operate/figures/graphite_composer.png 32 | :width: 100% 33 | :align: center 34 | 35 | Graphite metrics functions 36 | ************************** 37 | 38 | Metrics can be adjusted by applying functions to them within the Graphite composer. 39 | 40 | .. figure:: /operate/figures/graphite_functions.png 41 | :width: 100% 42 | :align: center 43 | 44 | Aside from storing time-series data, Graphite offers many functions that transform series into a more useful form, for example deriving a delta from a cumulative metric or vice versa. 45 | 46 | **integral(seriesList)** 47 | 48 | This will show the sum over time, sort of like a continuous addition function. Useful for finding totals or trends in metrics that are collected per minute. 49 | 50 | Example: 51 | 52 | .. code-block:: text 53 | 54 | &target=integral(company.sales.perMinute) 55 | 56 | This would start at zero on the left side of the graph, adding the sales each minute, and show the total sales for the selected time period at the right side (time now, or the time specified by ``&until=``). 57 | 58 | **derivative(seriesList)** 59 | 60 | This is the opposite of the integral function.
This is useful for taking a running total metric and calculating the delta between subsequent data points. 61 | 62 | This function does not normalize for periods of time, as a true derivative would. Instead, see the perSecond() function to calculate a rate of change over time. 63 | 64 | Example: 65 | 66 | .. code-block:: text 67 | 68 | &target=derivative(company.server.application01.ifconfig.TXPackets) 69 | 70 | **sumSeries(*seriesLists)** 71 | 72 | Short form: sum() 73 | 74 | This will add metrics together and return the sum at each datapoint. (See integral for a sum over time.) 75 | 76 | Example: 77 | 78 | .. code-block:: text 79 | 80 | &target=sum(company.server.application*.requestsHandled) 81 | 82 | This would show the sum of all requests handled per minute (provided requestsHandled is collected once a minute). If metrics with different retention rates are combined, the coarsest metric is graphed, and the sum of the other metrics is averaged for the metrics with finer retention rates. 83 | 84 | Read more about functions at http://graphite.readthedocs.org/en/latest/functions.html#module-graphite.render.functions 85 | 86 | Get aggregated CPU usage of a node as a percentage 87 | ************************************************** 88 | 89 | In this task we'll derive a useful metric from the data gathered by collectd. CPU usage is exposed as counters, so we sum the counters of all states across all CPUs, sum the idle-state counters across all CPUs, and turn these two series into a percentage with the asPercent function. Because we are computing a ratio, no derivative is needed. 90 | 91 | ..
code-block:: text 92 | 93 | sumSeries(default_prd.nodename.cpu.*.idle) 94 | # sum of all CPU idle state counters 95 | 96 | sumSeries(default_prd.nodename.cpu.*.*) 97 | # sum of all CPU state counters 98 | 99 | asPercent(sumSeries(default_prd.nodename.cpu.*.idle), sumSeries(default_prd.nodename.cpu.*.*)) 100 | # gives the percentage ratio of the two metrics, in our case the share of idle CPU; to get the share of used CPU, scale the resulting series by -1 and offset it by 100. 101 | 102 | Send an arbitrary metric to Graphite 103 | ************************************ 104 | 105 | It is possible to send metrics to Graphite from a bash script or from within an application. Graphite understands messages in this format: 106 | 107 | .. code-block:: text 108 | 109 | metric_path value timestamp\n 110 | 111 | * `metric_path` is the metric namespace that you want to populate. 112 | * `value` is the value that you want to assign to the metric at this time. 113 | * `timestamp` is the Unix epoch time. 114 | 115 | Try sending the following metric from any node in the cluster: 116 | 117 | .. code-block:: bash 118 | 119 | root@cfg01:~# echo "test.bash.stats 42 `date +%s`" | nc mon01 2003 120 | 121 | -------------- 122 | 123 | .. include:: navigation.txt 124 | -------------------------------------------------------------------------------- /doc/source/develop/quickstart-vagrant.rst: -------------------------------------------------------------------------------- 1 | `Home `_ OpenStack-Salt Development Documentation 2 | 3 | OpenStack-Salt Vagrant deployment 4 | ================================= 5 | 6 | All-in-one (AIO) deployments are a great way to set up an OpenStack-Salt cloud 7 | for: 8 | 9 | * a service development environment 10 | * an overview of how all of the OpenStack services and roles play together 11 | * a simple lab deployment for testing 12 | 13 | Although AIO builds aren't suitable for large production deployments, they're 14 | great for small proof-of-concept deployments.
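Before provisioning an AIO lab, it can be worth confirming that the candidate host supports hardware-assisted virtualization and has enough memory and disk for the VMs. The snippet below is a hypothetical pre-flight check for a Linux host (it is not part of the OpenStack-Salt scripts):

```shell
# Hypothetical pre-flight check for an AIO lab host; not part of the
# OpenStack-Salt tooling. Assumes a Linux host with /proc mounted.

# Look for the Intel (vmx) or AMD (svm) virtualization CPU flags.
if grep -Eqw 'vmx|svm' /proc/cpuinfo; then
    echo "OK: CPU exposes hardware-assisted virtualization (vmx/svm)"
else
    echo "WARNING: no vmx/svm CPU flags found; VMs will be very slow" >&2
fi

# Report total RAM and free disk space on / in whole gigabytes.
ram_gb=$(awk '/^MemTotal:/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
disk_gb=$(df -Pk / | awk 'NR==2 {printf "%d", $4 / 1024 / 1024}')
echo "RAM: ${ram_gb} GB, free disk on /: ${disk_gb} GB"
```

If the warning appears, enable VT-x/AMD-V in the host firmware or pick different hardware before continuing.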
15 | 16 | It's strongly recommended to have hardware that meets the following 17 | requirements before starting an AIO deployment: 18 | 19 | * CPU with `hardware-assisted virtualization`_ support 20 | * At least 80GB disk space 21 | * 8GB RAM 22 | 23 | Vagrant setup 24 | ------------- 25 | 26 | Installing Vagrant is straightforward on most operating systems. Go to the 27 | `Vagrant downloads page`_ and get the appropriate installer or package for 28 | your platform. Install the package using standard procedures for your 29 | operating system. 30 | 31 | The installer will automatically add vagrant to your system path so that it is 32 | available in your shell. Try logging out and logging back in to your system (this 33 | is sometimes necessary, particularly on Windows) to pick up the updated system 34 | path. 35 | 36 | Add the generic Ubuntu 14.04 (``ubuntu/trusty64``) box for the VirtualBox provider. 37 | 38 | .. code-block:: bash 39 | 40 | $ vagrant box add ubuntu/trusty64 41 | 42 | ==> box: Loading metadata for box 'ubuntu/trusty64' 43 | box: URL: https://atlas.hashicorp.com/ubuntu/trusty64 44 | ==> box: Adding box 'ubuntu/trusty64' (v20160122.0.0) for provider: virtualbox 45 | box: Downloading: https://vagrantcloud.com/ubuntu/boxes/trusty64/versions/20160122.0.0/providers/virtualbox.box 46 | ==> box: Successfully added box 'ubuntu/trusty64' (v20160122.0.0) for 'virtualbox'! 47 | 48 | 49 | Environment setup 50 | ----------------- 51 | 52 | The environment consists of three nodes. 53 | 54 | ..
list-table:: 55 | :stub-columns: 1 56 | 57 | * - **FQDN** 58 | - **Role** 59 | - **IP** 60 | * - config.openstack.local 61 | - Salt master node 62 | - 10.10.10.200 63 | * - control.openstack.local 64 | - OpenStack control node 65 | - 10.10.10.201 66 | * - compute.openstack.local 67 | - OpenStack compute node 68 | - 10.10.10.202 69 | 70 | 71 | 72 | Minion configuration files 73 | ~~~~~~~~~~~~~~~~~~~~~~~~~~ 74 | 75 | Download the openstack-salt repository. 76 | 77 | Look at the configuration files for each node deployed. 78 | 79 | ``scripts/minions/config.conf`` configuration: 80 | 81 | .. literalinclude:: /_static/scripts/minions/config.conf 82 | :language: yaml 83 | 84 | ``scripts/minions/control.conf`` configuration: 85 | 86 | .. literalinclude:: /_static/scripts/minions/control.conf 87 | :language: yaml 88 | 89 | ``scripts/minions/compute.conf`` configuration: 90 | 91 | .. literalinclude:: /_static/scripts/minions/compute.conf 92 | :language: yaml 93 | 94 | 95 | Vagrant configuration file 96 | ~~~~~~~~~~~~~~~~~~~~~~~~~~ 97 | 98 | The main Vagrant configuration for the OpenStack-Salt deployment is located at 99 | ``scripts/Vagrantfile``. 100 | 101 | .. literalinclude:: /_static/scripts/Vagrantfile 102 | :language: ruby 103 | 104 | 105 | Salt master bootstrap from package 106 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 107 | 108 | The salt-master bootstrap script is located at 109 | ``scripts/bootstrap/salt-master-pkg.sh`` and is accessible from the virtual 110 | machine at ``/vagrant/bootstrap/salt-master-pkg.sh``. 111 | 112 | .. literalinclude:: /_static/scripts/bootstrap/salt-master-pkg.sh 113 | :language: bash 114 | 115 | Salt master pip based bootstrap 116 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 117 | 118 | The salt-master bootstrap script is located at 119 | ``scripts/bootstrap/salt-master-pip.sh`` and is accessible from the virtual 120 | machine at ``/vagrant/bootstrap/salt-master-pip.sh``. 121 | 122 | ..
literalinclude:: /_static/scripts/bootstrap/salt-master-pip.sh 123 | :language: bash 124 | 125 | Launching the Vagrant nodes 126 | --------------------------- 127 | 128 | Check the status of the deployment environment. 129 | 130 | .. code-block:: bash 131 | 132 | $ cd /srv/vagrant-openstack 133 | $ vagrant status 134 | 135 | Current machine states: 136 | 137 | openstack_config not created (virtualbox) 138 | openstack_control not created (virtualbox) 139 | openstack_compute not created (virtualbox) 140 | 141 | Set up the OpenStack-Salt config node, then launch it and connect to it using the 142 | following commands. It cannot be provisioned by the Vagrant Salt provisioner, as the 143 | Salt master is not configured yet. 144 | 145 | .. code-block:: bash 146 | 147 | $ vagrant up openstack_config 148 | $ vagrant ssh openstack_config 149 | 150 | 151 | Bootstrap Salt master 152 | ~~~~~~~~~~~~~~~~~~~~~ 153 | 154 | Bootstrap the salt-master service on the config node; it can be configured 155 | with the following parameters: 156 | 157 | .. code-block:: bash 158 | 159 | $ export RECLASS_ADDRESS=https://github.com/tcpcloud/openstack-salt-model.git 160 | $ export CONFIG_HOST=config.openstack.local 161 | 162 | To deploy salt-master from packages, run on the config node: 163 | 164 | .. code-block:: bash 165 | 166 | $ /vagrant/bootstrap/salt-master-pkg.sh 167 | 168 | To deploy salt-master from pip, run on the config node: 169 | 170 | .. code-block:: bash 171 | 172 | $ /vagrant/bootstrap/salt-master-pip.sh 173 | 174 | Now set up the OpenStack-Salt control node. Launch it using the following command: 175 | 176 | .. code-block:: bash 177 | 178 | $ vagrant up openstack_control 179 | 180 | Now set up the OpenStack-Salt compute node. Launch it using the following command: 181 | 182 | .. code-block:: bash 183 | 184 | $ vagrant up openstack_compute 185 | 186 | To orchestrate the services across all nodes, run the following command on the 187 | config node: 188 | 189 | ..
code-block:: bash 190 | 191 | $ salt-run state.orchestrate orchestrate 192 | 193 | The installation is now complete; you should be able to access the user interface 194 | of the cloud deployment at your control node. 195 | 196 | .. _hardware-assisted virtualization: https://en.wikipedia.org/wiki/Hardware-assisted_virtualization 197 | .. _Vagrant downloads page: https://www.vagrantup.com/downloads.html 198 | 199 | 200 | -------------- 201 | 202 | .. include:: navigation.txt 203 | -------------------------------------------------------------------------------- /doc/source/develop/extending-ecosystem.rst: -------------------------------------------------------------------------------- 1 | `Home `_ OpenStack-Salt Development Documentation 2 | 3 | Formula ecosystem 4 | ================= 5 | 6 | The OpenStack-Salt formulas are divided into several groups according to their purpose. Formulas share the same structure and metadata definitions, and expose vital information into the Salt Mine for monitoring and audit. 7 | 8 | **Infrastructure services** 9 | Core services needed for basic infrastructure operation. 10 | 11 | **Supplemental services** 12 | Support services such as databases, proxies, and application servers. 13 | 14 | **OpenStack services** 15 | All supported OpenStack cloud platform services. 16 | 17 | **Monitoring services** 18 | Monitoring, metering and log collecting tools implementing a complete monitoring stack. 19 | 20 | **Integration services** 21 | Continuous integration services for automated integration and delivery pipelines. 22 | 23 | Each service group contains several individual service formulas, listed in the following tables. 24 | 25 | 26 | Infrastructure services 27 | ----------------------- 28 | 29 | Core services needed for basic infrastructure operation. 30 | 31 | ..
list-table:: 32 | :widths: 33 66 33 | :header-rows: 1 34 | :stub-columns: 1 35 | 36 | * - Formula 37 | - Repository 38 | * - foreman 39 | - https://github.com/tcpcloud/salt-formula-foreman 40 | * - freeipa 41 | - https://github.com/tcpcloud/salt-formula-freeipa 42 | * - git 43 | - https://github.com/tcpcloud/salt-formula-git 44 | * - glusterfs 45 | - https://github.com/tcpcloud/salt-formula-glusterfs 46 | * - iptables 47 | - https://github.com/tcpcloud/salt-formula-iptables 48 | * - linux 49 | - https://github.com/tcpcloud/salt-formula-linux 50 | * - maas 51 | - https://git.tcpcloud.eu/salt-formulas/maas-formula 52 | * - ntp 53 | - https://github.com/tcpcloud/salt-formula-ntp 54 | * - openssh 55 | - https://github.com/tcpcloud/salt-formula-openssh 56 | * - reclass 57 | - https://github.com/tcpcloud/salt-formula-reclass 58 | * - salt 59 | - https://github.com/tcpcloud/salt-formula-salt 60 | 61 | 62 | Supplemental services 63 | --------------------- 64 | 65 | Support services as databases, proxies, application servers. 66 | 67 | .. 
list-table:: 68 | :widths: 33 66 69 | :header-rows: 1 70 | :stub-columns: 1 71 | 72 | * - Formula 73 | - Repository 74 | * - apache 75 | - https://github.com/tcpcloud/salt-formula-apache 76 | * - bind 77 | - https://github.com/tcpcloud/salt-formula-bind 78 | * - dovecot 79 | - https://github.com/tcpcloud/salt-formula-dovecot 80 | * - elasticsearch 81 | - https://github.com/tcpcloud/salt-formula-elasticsearch 82 | * - galera 83 | - https://github.com/tcpcloud/salt-formula-galera 84 | * - git 85 | - https://github.com/tcpcloud/salt-formula-git 86 | * - haproxy 87 | - https://github.com/tcpcloud/salt-formula-haproxy 88 | * - keepalived 89 | - https://github.com/tcpcloud/salt-formula-keepalived 90 | * - letsencrypt 91 | - https://github.com/tcpcloud/salt-formula-letsencrypt 92 | * - memcached 93 | - https://github.com/tcpcloud/salt-formula-memcached 94 | * - mongodb 95 | - https://github.com/tcpcloud/salt-formula-mongodb 96 | * - mysql 97 | - https://github.com/tcpcloud/salt-formula-mysql 98 | * - nginx 99 | - https://github.com/tcpcloud/salt-formula-nginx 100 | * - postfix 101 | - https://github.com/tcpcloud/salt-formula-postfix 102 | * - postgresql 103 | - https://github.com/tcpcloud/salt-formula-postgresql 104 | * - rabbitmq 105 | - https://github.com/tcpcloud/salt-formula-rabbitmq 106 | * - redis 107 | - https://github.com/tcpcloud/salt-formula-redis 108 | * - supervisor 109 | - https://github.com/tcpcloud/salt-formula-supervisor 110 | * - varnish 111 | - https://github.com/tcpcloud/salt-formula-varnish 112 | 113 | 114 | OpenStack services 115 | ------------------ 116 | 117 | All supported OpenStack cloud platform services. 118 | 119 | .. 
list-table:: 120 | :widths: 33 66 121 | :header-rows: 1 122 | :stub-columns: 1 123 | 124 | * - Formula 125 | - Repository 126 | * - ceilometer 127 | - https://github.com/openstack/salt-formula-ceilometer 128 | * - cinder 129 | - https://github.com/openstack/salt-formula-cinder 130 | * - glance 131 | - https://github.com/openstack/salt-formula-glance 132 | * - heat 133 | - https://github.com/openstack/salt-formula-heat 134 | * - horizon 135 | - https://github.com/openstack/salt-formula-horizon 136 | * - keystone 137 | - https://github.com/openstack/salt-formula-keystone 138 | * - magnum 139 | - https://github.com/tcpcloud/salt-formula-magnum 140 | * - midonet 141 | - https://github.com/openstack/salt-formula-midonet 142 | * - murano 143 | - https://github.com/tcpcloud/salt-formula-murano 144 | * - neutron 145 | - https://github.com/openstack/salt-formula-neutron 146 | * - nova 147 | - https://github.com/openstack/salt-formula-nova 148 | * - opencontrail 149 | - https://github.com/openstack/salt-formula-opencontrail 150 | * - swift 151 | - https://github.com/openstack/salt-formula-swift 152 | 153 | 154 | Monitoring services 155 | ------------------- 156 | 157 | Monitoring, metering and log collecting tools implementing complete monitoring stack. 158 | 159 | .. 
list-table:: 160 | :widths: 33 66 161 | :header-rows: 1 162 | :stub-columns: 1 163 | 164 | * - Formula 165 | - Repository 166 | * - collectd 167 | - https://github.com/tcpcloud/salt-formula-collectd 168 | * - graphite 169 | - https://github.com/tcpcloud/salt-formula-graphite 170 | * - heka 171 | - https://github.com/tcpcloud/salt-formula-heka 172 | * - kibana 173 | - https://github.com/tcpcloud/salt-formula-kibana 174 | * - sensu 175 | - https://github.com/tcpcloud/salt-formula-sensu 176 | * - sphinx 177 | - https://github.com/tcpcloud/salt-formula-sphinx 178 | * - statsd 179 | - https://github.com/tcpcloud/salt-formula-sensu 180 | 181 | 182 | Integration services 183 | -------------------- 184 | 185 | Continuous integration services for automated integration and delivery pipelines. 186 | 187 | .. list-table:: 188 | :widths: 33 66 189 | :header-rows: 1 190 | :stub-columns: 1 191 | 192 | * - Formula 193 | - Repository 194 | * - aptly 195 | - https://github.com/tcpcloud/salt-formula-aptly 196 | * - gerrit 197 | - https://git.tcpcloud.eu/salt-formulas/gerrit-formula 198 | * - gitlab 199 | - https://github.com/tcpcloud/salt-formula-gitlab 200 | * - jenkins 201 | - https://github.com/tcpcloud/salt-formula-jenkins 202 | * - owncloud 203 | - https://github.com/tcpcloud/salt-formula-owncloud 204 | * - roundcube 205 | - https://github.com/tcpcloud/salt-formula-roundcube 206 | 207 | 208 | -------------- 209 | 210 | .. include:: navigation.txt 211 | -------------------------------------------------------------------------------- /doc/source/operate/troubleshoot-networking.rst: -------------------------------------------------------------------------------- 1 | 2 | Troubleshooting networking 3 | ============================ 4 | 5 | OpenContrail 6 | ************ 7 | 8 | Contrail-status provides information status of all contrail services. All of them should be active except contrail-device-manager, contrail-schema and contrail-svc-monitor. 
These services run in the active state on only one node in the cluster at a time; the active instance is switched dynamically in case of failure. 9 | 10 | .. code-block:: bash 11 | 12 | root@ctl01:~# contrail-status 13 | 14 | == Contrail Control == 15 | supervisor-control: active 16 | contrail-control active 17 | contrail-control-nodemgr active 18 | contrail-dns active 19 | contrail-named active 20 | 21 | == Contrail Analytics == 22 | supervisor-analytics: active 23 | contrail-analytics-api active 24 | contrail-analytics-nodemgr active 25 | contrail-collector active 26 | contrail-query-engine active 27 | contrail-snmp-collector active 28 | contrail-topology active 29 | 30 | == Contrail Config == 31 | supervisor-config: active 32 | contrail-api:0 active 33 | contrail-config-nodemgr active 34 | contrail-device-manager initializing 35 | contrail-discovery:0 active 36 | contrail-schema initializing 37 | contrail-svc-monitor initializing 38 | ifmap active 39 | 40 | == Contrail Web UI == 41 | supervisor-webui: active 42 | contrail-webui active 43 | contrail-webui-middleware active 44 | 45 | == Contrail Database == 46 | supervisor-database: active 47 | contrail-database active 48 | contrail-database-nodemgr active 49 | 50 | == Contrail Support Services == 51 | supervisor-support-service: active 52 | rabbitmq-server active 53 | 54 | For all of its services, OpenContrail uses the Python daemon supervisord, which groups specific services into logical groups. It is installed automatically with the Contrail packages. 55 | 56 | * supervisor-support-service 57 | * supervisor-openstack 58 | * supervisor-database 59 | * supervisor-config 60 | * supervisor-analytics 61 | * supervisor-control 62 | * supervisor-webui 63 | 64 | Services can be restarted as a whole supervisor group 65 | 66 | .. code-block:: bash 67 | 68 | service supervisor-openstack restart 69 | 70 | or as individual services inside a supervisor group 71 | 72 | ..
code-block:: bash 73 | 74 | root@ctl01:~# supervisorctl -s unix:///tmp/supervisord_support_service.sock status 75 | rabbitmq-server RUNNING pid 1335, uptime 2 days, 21:11:55 76 | 77 | root@ctl01:~# supervisorctl -s unix:///tmp/supervisord_openstack.sock status 78 | cinder-api RUNNING pid 57685, uptime 2 days, 0:10:39 79 | cinder-scheduler RUNNING pid 57675, uptime 2 days, 0:10:44 80 | glance-api RUNNING pid 9317, uptime 2 days, 21:08:52 81 | glance-registry RUNNING pid 9352, uptime 2 days, 21:08:51 82 | heat-api RUNNING pid 9393, uptime 2 days, 21:08:50 83 | heat-engine RUNNING pid 9351, uptime 2 days, 21:08:51 84 | keystone RUNNING pid 9325, uptime 2 days, 21:08:52 85 | nova-api RUNNING pid 9339, uptime 2 days, 21:08:51 86 | nova-conductor RUNNING pid 9300, uptime 2 days, 21:08:53 87 | nova-console RUNNING pid 9330, uptime 2 days, 21:08:52 88 | nova-consoleauth RUNNING pid 9319, uptime 2 days, 21:08:52 89 | nova-novncproxy RUNNING pid 9299, uptime 2 days, 21:08:53 90 | nova-objectstore RUNNING pid 9321, uptime 2 days, 21:08:52 91 | nova-scheduler RUNNING pid 9344, uptime 2 days, 21:08:51 92 | 93 | root@ctl01:~# supervisorctl -s unix:///tmp/supervisord_database.sock status 94 | contrail-database RUNNING pid 1349, uptime 2 days, 21:12:33 95 | contrail-database-nodemgr RUNNING pid 1347, uptime 2 days, 21:12:33 96 | 97 | root@ctl01:~# supervisorctl -s unix:///tmp/supervisord_config.sock status 98 | contrail-api:0 RUNNING pid 49848, uptime 2 days, 20:11:54 99 | contrail-config-nodemgr RUNNING pid 49845, uptime 2 days, 20:11:54 100 | contrail-device-manager RUNNING pid 49849, uptime 2 days, 20:11:54 101 | contrail-discovery:0 RUNNING pid 49847, uptime 2 days, 20:11:54 102 | contrail-schema RUNNING pid 49850, uptime 2 days, 20:11:54 103 | contrail-svc-monitor RUNNING pid 49851, uptime 2 days, 20:11:54 104 | ifmap RUNNING pid 49846, uptime 2 days, 20:11:54 105 | 115 | root@ctl01:~# supervisorctl -s unix:///tmp/supervisord_analytics.sock status 116 | contrail-analytics-api RUNNING pid 1346, uptime 2 days, 21:13:17 117 | contrail-analytics-nodemgr RUNNING pid 1340, uptime 2 days, 21:13:17 118 | contrail-collector RUNNING pid 1344, uptime 2 days, 21:13:17 119 | contrail-query-engine RUNNING pid 1345, uptime 2 days, 21:13:17 120 | contrail-snmp-collector RUNNING pid 1341, uptime 2 days, 21:13:17 121 | contrail-topology RUNNING pid 1343, uptime 2 days, 21:13:17 122 | 123 | root@ctl01:~# supervisorctl -s unix:///tmp/supervisord_control.sock status 124 | contrail-control RUNNING pid 1330, uptime 2 days, 21:13:29 125 | contrail-control-nodemgr RUNNING pid 1328, uptime 2 days, 21:13:29 126 | contrail-dns RUNNING pid 1331, uptime 2 days, 21:13:29 127 | contrail-named RUNNING pid 1333, uptime 2 days, 21:13:29 128 | 129 | root@ctl01:~# supervisorctl -s unix:///tmp/supervisord_webui.sock status 130 | contrail-webui RUNNING pid 1339, uptime 2 days, 21:13:44 131 | contrail-webui-middleware RUNNING pid 1342, uptime 2 days, 21:13:44 132 | 133 | -------------- 134 | 135 | ..
include:: navigation.txt 136 | -------------------------------------------------------------------------------- /doc/source/develop/extending-contribute.rst: -------------------------------------------------------------------------------- 1 | `Home `_ OpenStack-Salt Development Documentation 2 | 3 | Contributor guidelines 4 | ====================== 5 | 6 | Bugs 7 | ---- 8 | 9 | Bugs should be filed on `Bug Launchpad`_ for OpenStack-Salt. 10 | 11 | When submitting a bug, or working on a bug, please ensure the following 12 | criteria are met: 13 | 14 | * The description clearly states or describes the original problem or root 15 | cause of the problem. 16 | * Include historical information on how the problem was identified. 17 | * Any relevant logs are included. 18 | * If the issue is a bug that needs fixing in a branch other than master, 19 | please note the associated branch within the launchpad issue. 20 | * The provided information should be totally self-contained. External access 21 | to web services/sites should not be needed. 22 | * Steps to reproduce the problem if possible. 23 | 24 | Tags 25 | ~~~~ 26 | 27 | If it's a bug that needs fixing in a branch in addition to master, add a 28 | '\-backport-potential' tag (e.g. ``kilo-backport-potential``). 29 | There are predefined tags that will auto-complete. 30 | 31 | Status 32 | ~~~~~~ 33 | 34 | Please leave the **status** of an issue alone until someone confirms it or 35 | a member of the bugs team triages it. While waiting for the issue to be 36 | confirmed or triaged the status should remain as **New**. 37 | 38 | Importance 39 | ~~~~~~~~~~ 40 | 41 | Should only be touched if it is a Blocker/Gating issue. If it is, please 42 | set to **High**, and only use **Critical** if you have found a bug that 43 | can take down whole infrastructures. Once the importance has been changed 44 | the status should be changed to *Triaged* by someone other than the bug 45 | creator. 
46 | 47 | Triaging bugs 48 | ~~~~~~~~~~~~~ 49 | 50 | Reported bugs need prioritization, confirmation, and shouldn't go stale. 51 | If you care about OpenStack stability but don't want to actively 52 | develop the formulas used within the "openstack-salt" 53 | project, consider contributing in the area of bug triage, which helps 54 | immensely. The whole process is described in the upstream 55 | `Bug Triage Documentation`_. 56 | 57 | .. _Bug Launchpad: https://bugs.launchpad.net/openstack-salt 58 | .. _Bug Triage Documentation: https://wiki.openstack.org/wiki/BugTriage 59 | 60 | Submitting Code 61 | --------------- 62 | 63 | * Write good commit messages. We follow the OpenStack 64 | "`Git Commit Good Practice`_" guide. If you have any questions regarding how 65 | to write good commit messages, please review the upstream OpenStack 66 | documentation. 67 | * Changes to the project should be submitted for review via the Gerrit tool, 68 | following the `workflow documented here`_. 69 | * Pull requests submitted through GitHub will be ignored and closed without 70 | regard. 71 | * All feature additions/deletions should be accompanied by a blueprint/spec, 72 | e.g. adding additional active agents to Neutron, developing a new service 73 | role, etc. 74 | * Before creating a blueprint/spec, an associated issue should be raised on 75 | Launchpad. This issue will be triaged and a determination will be made on 76 | how large the change is and whether or not the change warrants a 77 | blueprint/spec. Both features and bug fixes may require the creation of a 78 | blueprint/spec. This requirement will be voted on by core reviewers and will 79 | be based on the size and impact of the change. 80 | * All blueprints/specs should be voted on and approved by core reviewers 81 | before any associated code will be merged. For more information on 82 | blueprints/specs, please review the 83 | `upstream OpenStack Blueprint documentation`_.
At the time the 84 | blueprint/spec is voted on, a determination will be made whether or not the 85 | work will be backported to any of the "released" branches. 86 | * Patches should be focused on solving one problem at a time. If the review is 87 | overly complex or generally large, the initial commit will receive a "**-2**" 88 | and the contributor will be asked to split the patch up across multiple 89 | reviews. In the case of complex feature additions the design and 90 | implementation of the feature should be done in such a way that it can be 91 | submitted in multiple patches using dependencies. Using dependent changes 92 | should always aim to result in a working build throughout the dependency 93 | chain. Documentation is available for `advanced gerrit usage`_ too. 94 | * All patch sets should adhere to the Salt Style Guide listed here as well 95 | as to the `Salt best practices`_ when possible. 96 | * All changes should be clearly listed in the commit message, with an 97 | associated bug id/blueprint along with any extra information where 98 | applicable. 99 | * Refactoring work should never include additional "rider" features. Features 100 | that may pertain to something that was re-factored should be raised as an 101 | issue and submitted in prior or subsequent patches. 102 | 103 | .. _Git Commit Good Practice: https://wiki.openstack.org/wiki/GitCommitMessages 104 | .. _workflow documented here: http://docs.openstack.org/infra/manual/developers.html#development-workflow 105 | .. _upstream OpenStack Blueprint documentation: https://wiki.openstack.org/wiki/Blueprints 106 | .. _advanced gerrit usage: http://www.mediawiki.org/wiki/Gerrit/Advanced_usage 107 | .. _Salt best practices: https://docs.saltstack.com/en/latest/topics/best_practices.html 108 | 109 | Backporting 110 | ----------- 111 | * Backporting is defined as the act of reproducing a change from another 112 | branch. Unclean/squashed/modified cherry-picks and complete 113 | reimplementations are OK.
114 | * Backporting is often done by using the same code (via cherry picking), but 115 | this is not always the case. This method is preferred when the cherry-pick 116 | provides a complete solution for the targeted problem. 117 | * When cherry-picking a commit from one branch to another, the commit message 118 | should be amended with any files that may have been in conflict while 119 | performing the cherry-pick operation. Additionally, cherry-pick commit 120 | messages should contain the original commit *SHA* near the bottom of the new 121 | commit message. This can be done with ``cherry-pick -x``. Here's more 122 | information on `Submitting a change to a branch for review`_. 123 | * Every backport commit must still only solve one problem, as per the 124 | guidelines in `Submitting Code`_. 125 | * If a backport is a squashed set of cherry-picked commits, the original SHAs 126 | should be referenced in the commit message and the reason for squashing the 127 | commits should be clearly explained. 128 | * When a cherry-pick is modified in any way, the changes made and the reasons 129 | for them must be explicitly expressed in the commit message. 130 | * Refactoring work must not be backported to a "released" branch. 131 | 132 | .. _Submitting a change to a branch for review: http://www.mediawiki.org/wiki/Gerrit/Advanced_usage#Submitting_a_change_to_a_branch_for_review_.28.22backporting.22.29 133 | 134 | Style Guide 135 | ----------- 136 | 137 | When creating states and other definitions for use in Salt, please create them 138 | using the YAML dictionary format. 139 | 140 | Example YAML dictionary format: 141 | 142 | .. code-block:: yaml 143 | 144 | - name: The name of the tasks 145 | module_name: 146 | thing1: "some-stuff" 147 | thing2: "some-other-stuff" 148 | tags: 149 | - some-tag 150 | - some-other-tag 151 | 152 | 153 | Example of what **NOT** to do: 154 | 155 | ..
code-block:: yaml 156 | 157 | - name: The name of the tasks 158 | module_name: thing1="some-stuff" thing2="some-other-stuff" 159 | tags: some-tag 160 | 161 | .. code-block:: yaml 162 | 163 | - name: The name of the tasks 164 | module_name: > 165 | thing1="some-stuff" 166 | thing2="some-other-stuff" 167 | tags: some-tag 168 | 169 | 170 | Usage of the ">" and "|" operators should be limited to Salt conditionals 171 | and command modules such as the Salt ``shell`` or ``command``. 172 | 173 | -------------- 174 | 175 | .. include:: navigation.txt 176 | -------------------------------------------------------------------------------- /doc/Makefile: -------------------------------------------------------------------------------- 1 | # Makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | PAPER = 8 | BUILDDIR = build 9 | 10 | # User-friendly check for sphinx-build 11 | ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) 12 | $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) 13 | endif 14 | 15 | # Internal variables. 
16 | PAPEROPT_a4 = -D latex_paper_size=a4 17 | PAPEROPT_letter = -D latex_paper_size=letter 18 | ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source 19 | # the i18n builder cannot share the environment and doctrees with the others 20 | I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source 21 | 22 | .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext 23 | 24 | help: 25 | @echo "Please use \`make ' where is one of" 26 | @echo " html to make standalone HTML files" 27 | @echo " dirhtml to make HTML files named index.html in directories" 28 | @echo " singlehtml to make a single large HTML file" 29 | @echo " pickle to make pickle files" 30 | @echo " json to make JSON files" 31 | @echo " htmlhelp to make HTML files and a HTML help project" 32 | @echo " qthelp to make HTML files and a qthelp project" 33 | @echo " applehelp to make an Apple Help Book" 34 | @echo " devhelp to make HTML files and a Devhelp project" 35 | @echo " epub to make an epub" 36 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" 37 | @echo " latexpdf to make LaTeX files and run them through pdflatex" 38 | @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" 39 | @echo " text to make text files" 40 | @echo " man to make manual pages" 41 | @echo " texinfo to make Texinfo files" 42 | @echo " info to make Texinfo files and run them through makeinfo" 43 | @echo " gettext to make PO message catalogs" 44 | @echo " changes to make an overview of all changed/added/deprecated items" 45 | @echo " xml to make Docutils-native XML files" 46 | @echo " pseudoxml to make pseudoxml-XML files for display purposes" 47 | @echo " linkcheck to check all external links for integrity" 48 | @echo " doctest to run all doctests embedded in the documentation (if enabled)" 49 | @echo " coverage to run coverage check of the documentation (if enabled)" 50 
| 51 | clean: 52 | rm -rf $(BUILDDIR)/* 53 | 54 | html: 55 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html 56 | @echo 57 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." 58 | 59 | dirhtml: 60 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml 61 | @echo 62 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." 63 | 64 | singlehtml: 65 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml 66 | @echo 67 | @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." 68 | 69 | pickle: 70 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle 71 | @echo 72 | @echo "Build finished; now you can process the pickle files." 73 | 74 | json: 75 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json 76 | @echo 77 | @echo "Build finished; now you can process the JSON files." 78 | 79 | htmlhelp: 80 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp 81 | @echo 82 | @echo "Build finished; now you can run HTML Help Workshop with the" \ 83 | ".hhp project file in $(BUILDDIR)/htmlhelp." 84 | 85 | qthelp: 86 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp 87 | @echo 88 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \ 89 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:" 90 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/openstack-salt.qhcp" 91 | @echo "To view the help file:" 92 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/openstack-salt.qhc" 93 | 94 | applehelp: 95 | $(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp 96 | @echo 97 | @echo "Build finished. The help book is in $(BUILDDIR)/applehelp." 98 | @echo "N.B. You won't be able to view it unless you put it in" \ 99 | "~/Library/Documentation/Help or install it in your application" \ 100 | "bundle." 101 | 102 | devhelp: 103 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp 104 | @echo 105 | @echo "Build finished." 
106 | @echo "To view the help file:" 107 | @echo "# mkdir -p $$HOME/.local/share/devhelp/openstack-salt" 108 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/openstack-salt" 109 | @echo "# devhelp" 110 | 111 | epub: 112 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub 113 | @echo 114 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub." 115 | 116 | latex: 117 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 118 | @echo 119 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 120 | @echo "Run \`make' in that directory to run these through (pdf)latex" \ 121 | "(use \`make latexpdf' here to do that automatically)." 122 | 123 | latexpdf: 124 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 125 | @echo "Running LaTeX files through pdflatex..." 126 | $(MAKE) -C $(BUILDDIR)/latex all-pdf 127 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 128 | 129 | latexpdfja: 130 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 131 | @echo "Running LaTeX files through platex and dvipdfmx..." 132 | $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja 133 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 134 | 135 | text: 136 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text 137 | @echo 138 | @echo "Build finished. The text files are in $(BUILDDIR)/text." 139 | 140 | man: 141 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man 142 | @echo 143 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man." 144 | 145 | texinfo: 146 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 147 | @echo 148 | @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." 149 | @echo "Run \`make' in that directory to run these through makeinfo" \ 150 | "(use \`make info' here to do that automatically)." 151 | 152 | info: 153 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 154 | @echo "Running Texinfo files through makeinfo..." 
155 | make -C $(BUILDDIR)/texinfo info 156 | @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." 157 | 158 | gettext: 159 | $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale 160 | @echo 161 | @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." 162 | 163 | changes: 164 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes 165 | @echo 166 | @echo "The overview file is in $(BUILDDIR)/changes." 167 | 168 | linkcheck: 169 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck 170 | @echo 171 | @echo "Link check complete; look for any errors in the above output " \ 172 | "or in $(BUILDDIR)/linkcheck/output.txt." 173 | 174 | doctest: 175 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest 176 | @echo "Testing of doctests in the sources finished, look at the " \ 177 | "results in $(BUILDDIR)/doctest/output.txt." 178 | 179 | coverage: 180 | $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage 181 | @echo "Testing of coverage in the sources finished, look at the " \ 182 | "results in $(BUILDDIR)/coverage/python.txt." 183 | 184 | xml: 185 | $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml 186 | @echo 187 | @echo "Build finished. The XML files are in $(BUILDDIR)/xml." 188 | 189 | pseudoxml: 190 | $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml 191 | @echo 192 | @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." 193 | -------------------------------------------------------------------------------- /doc/source/conf.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # -*- coding: utf-8 -*- 3 | # 4 | # -- General configuration ------------------------------------------------ 5 | 6 | # If your documentation needs a minimal Sphinx version, state it here. 7 | # needs_sphinx = '1.0' 8 | 9 | # Add any Sphinx extension module names here, as strings. 
They can be 10 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 11 | # ones. 12 | extensions = [ 13 | 'sphinx.ext.autodoc', 14 | 'oslosphinx' 15 | ] 16 | 17 | # The link to the browsable source code (for the left hand menu) 18 | oslosphinx_cgit_link = 'http://git.openstack.org/cgit/openstack/openstack-salt' 19 | 20 | # Add any paths that contain templates here, relative to this directory. 21 | templates_path = ['_templates'] 22 | 23 | # The suffix(es) of source filenames. 24 | # You can specify multiple suffix as a list of string: 25 | # source_suffix = ['.rst', '.md'] 26 | source_suffix = '.rst' 27 | 28 | # The encoding of source files. 29 | # source_encoding = 'utf-8-sig' 30 | 31 | # The master toctree document. 32 | master_doc = 'index' 33 | 34 | # General information about the project. 35 | project = 'openstack-salt' 36 | copyright = '2015, openstack-salt contributors' 37 | author = 'openstack-salt contributors' 38 | 39 | # The version info for the project you're documenting, acts as replacement for 40 | # |version| and |release|, also used in various other places throughout the 41 | # built documents. 42 | # 43 | # The short X.Y version. 44 | version = 'master' 45 | # The full version, including alpha/beta/rc tags. 46 | release = 'master' 47 | 48 | # The language for content autogenerated by Sphinx. Refer to documentation 49 | # for a list of supported languages. 50 | # 51 | # This is also used if you do content translation via gettext catalogs. 52 | # Usually you set "language" from the command line for these cases. 53 | language = None 54 | 55 | # There are two options for replacing |today|: either, you set today to some 56 | # non-false value, then it is used: 57 | # today = '' 58 | # Else, today_fmt is used as the format for a strftime call. 59 | # today_fmt = '%B %d, %Y' 60 | 61 | # List of patterns, relative to source directory, that match files and 62 | # directories to ignore when looking for source files. 
63 | exclude_patterns = [] 64 | 65 | # The reST default role (used for this markup: `text`) to use for all 66 | # documents. 67 | # default_role = None 68 | 69 | # If true, '()' will be appended to :func: etc. cross-reference text. 70 | # add_function_parentheses = True 71 | 72 | # If true, the current module name will be prepended to all description 73 | # unit titles (such as .. function::). 74 | # add_module_names = True 75 | 76 | # If true, sectionauthor and moduleauthor directives will be shown in the 77 | # output. They are ignored by default. 78 | # show_authors = False 79 | 80 | # The name of the Pygments (syntax highlighting) style to use. 81 | pygments_style = 'sphinx' 82 | 83 | # A list of ignored prefixes for module index sorting. 84 | # modindex_common_prefix = [] 85 | 86 | # If true, keep warnings as "system message" paragraphs in the built documents. 87 | # keep_warnings = False 88 | 89 | # If true, `todo` and `todoList` produce output, else they produce nothing. 90 | todo_include_todos = False 91 | 92 | 93 | # -- Options for HTML output ---------------------------------------------- 94 | 95 | # The theme to use for HTML and HTML Help pages. See the documentation for 96 | # a list of builtin themes. 97 | # html_theme = 'alabaster' 98 | 99 | # Theme options are theme-specific and customize the look and feel of a theme 100 | # further. For a list of options available for each theme, see the 101 | # documentation. 102 | # html_theme_options = {} 103 | 104 | # Add any paths that contain custom themes here, relative to this directory. 105 | # html_theme_path = [] 106 | 107 | # The name for this set of Sphinx documents. If None, it defaults to 108 | # " v documentation". 109 | # html_title = None 110 | 111 | # A shorter title for the navigation bar. Default is the same as html_title. 112 | # html_short_title = None 113 | 114 | # The name of an image file (relative to this directory) to place at the top 115 | # of the sidebar. 
116 | # html_logo = None 117 | 118 | # The name of an image file (within the static path) to use as favicon of the 119 | # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 120 | # pixels large. 121 | # html_favicon = None 122 | 123 | # Add any paths that contain custom static files (such as style sheets) here, 124 | # relative to this directory. They are copied after the builtin static files, 125 | # so a file named "default.css" will overwrite the builtin "default.css". 126 | html_static_path = ['_static'] 127 | 128 | # Add any extra paths that contain custom files (such as robots.txt or 129 | # .htaccess) here, relative to this directory. These files are copied 130 | # directly to the root of the documentation. 131 | # html_extra_path = [] 132 | 133 | # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, 134 | # using the given strftime format. 135 | # html_last_updated_fmt = '%b %d, %Y' 136 | 137 | # If true, SmartyPants will be used to convert quotes and dashes to 138 | # typographically correct entities. 139 | # html_use_smartypants = True 140 | 141 | # Custom sidebar templates, maps document names to template names. 142 | # html_sidebars = {} 143 | 144 | # Additional templates that should be rendered to pages, maps page names to 145 | # template names. 146 | # html_additional_pages = {} 147 | 148 | # If false, no module index is generated. 149 | # html_domain_indices = True 150 | 151 | # If false, no index is generated. 152 | # html_use_index = True 153 | 154 | # If true, the index is split into individual pages for each letter. 155 | # html_split_index = False 156 | 157 | # If true, links to the reST sources are added to the pages. 158 | # html_show_sourcelink = True 159 | 160 | # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. 161 | # html_show_sphinx = True 162 | 163 | # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. 
164 | # html_show_copyright = True 165 | 166 | # If true, an OpenSearch description file will be output, and all pages will 167 | # contain a tag referring to it. The value of this option must be the 168 | # base URL from which the finished HTML is served. 169 | # html_use_opensearch = '' 170 | 171 | # This is the file name suffix for HTML files (e.g. ".xhtml"). 172 | # html_file_suffix = None 173 | 174 | # Language to be used for generating the HTML full-text search index. 175 | # Sphinx supports the following languages: 176 | # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja' 177 | # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr' 178 | # html_search_language = 'en' 179 | 180 | # A dictionary with options for the search language support, empty by default. 181 | # Now only 'ja' uses this config value 182 | # html_search_options = {'type': 'default'} 183 | 184 | # The name of a javascript file (relative to the configuration directory) that 185 | # implements a search results scorer. If empty, the default will be used. 186 | # html_search_scorer = 'scorer.js' 187 | 188 | # Output file base name for HTML help builder. 189 | htmlhelp_basename = 'openstack-saltdoc' 190 | 191 | # -- Options for LaTeX output --------------------------------------------- 192 | 193 | latex_elements = { 194 | # The paper size ('letterpaper' or 'a4paper'). 195 | # 'papersize': 'letterpaper', 196 | 197 | # The font size ('10pt', '11pt' or '12pt'). 198 | # 'pointsize': '10pt', 199 | 200 | # Additional stuff for the LaTeX preamble. 201 | # 'preamble': '', 202 | 203 | # Latex figure (float) alignment 204 | # 'figure_align': 'htbp', 205 | } 206 | 207 | # Grouping the document tree into LaTeX files. List of tuples 208 | # (source start file, target name, title, 209 | # author, documentclass [howto, manual, or own class]). 
210 | latex_documents = [ 211 | (master_doc, 'openstack-salt.tex', 212 | 'openstack-salt Documentation', 213 | 'openstack-salt contributors', 'manual'), 214 | ] 215 | 216 | # The name of an image file (relative to this directory) to place at the top of 217 | # the title page. 218 | # latex_logo = None 219 | 220 | # For "manual" documents, if this is true, then toplevel headings are parts, 221 | # not chapters. 222 | # latex_use_parts = False 223 | 224 | # If true, show page references after internal links. 225 | # latex_show_pagerefs = False 226 | 227 | # If true, show URL addresses after external links. 228 | # latex_show_urls = False 229 | 230 | # Documents to append as an appendix to all manuals. 231 | # latex_appendices = [] 232 | 233 | # If false, no module index is generated. 234 | # latex_domain_indices = True 235 | 236 | 237 | # -- Options for manual page output --------------------------------------- 238 | 239 | # One entry per manual page. List of tuples 240 | # (source start file, name, description, authors, manual section). 241 | man_pages = [ 242 | (master_doc, 'openstack-salt', 243 | 'OpenStack-Salt Documentation', 244 | [author], 1) 245 | ] 246 | 247 | # If true, show URL addresses after external links. 248 | # man_show_urls = False 249 | 250 | 251 | # -- Options for Texinfo output ------------------------------------------- 252 | 253 | # Grouping the document tree into Texinfo files. List of tuples 254 | # (source start file, target name, title, author, 255 | # dir menu entry, description, category) 256 | texinfo_documents = [ 257 | (master_doc, 'openstack-salt', 258 | 'OpenStack-Salt Documentation', 259 | author, 'openstack-salt', 'One line description of project.', 260 | 'Miscellaneous'), 261 | ] 262 | 263 | # Documents to append as an appendix to all manuals. 264 | # texinfo_appendices = [] 265 | 266 | # If false, no module index is generated. 
267 | # texinfo_domain_indices = True 268 | 269 | # How to display URL addresses: 'footnote', 'no', or 'inline'. 270 | # texinfo_show_urls = 'footnote' 271 | 272 | # If true, do not generate a @detailmenu in the "Top" node's menu. 273 | # texinfo_no_detailmenu = False 274 | -------------------------------------------------------------------------------- /doc/source/install/configure-dashboard.rst: -------------------------------------------------------------------------------- 1 | 2 | Configuring the Dashboard service 3 | =================================== 4 | 5 | OS Horizon from package 6 | ----------------------- 7 | 8 | Simple Horizon setup 9 | ******************** 10 | 11 | .. code-block:: yaml 12 | 13 | linux: 14 | system: 15 | name: horizon 16 | repo: 17 | - cloudarchive-kilo: 18 | enabled: true 19 | source: 'deb http://ubuntu-cloud.archive.canonical.com/ubuntu trusty-updates/kilo main' 20 | pgpcheck: 0 21 | horizon: 22 | server: 23 | manage_repo: true 24 | enabled: true 25 | secret_key: SECRET 26 | host: 27 | name: cloud.lab.cz 28 | cache: 29 | engine: 'memcached' 30 | host: '127.0.0.1' 31 | port: 11211 32 | prefix: 'CACHE_HORIZON' 33 | identity: 34 | engine: 'keystone' 35 | host: '127.0.0.1' 36 | port: 5000 37 | api_version: 2 38 | mail: 39 | host: '127.0.0.1' 40 | 41 | Simple Horizon setup with branding 42 | ********************************** 43 | 44 | .. code-block:: yaml 45 | 46 | horizon: 47 | server: 48 | enabled: true 49 | branding: 'OpenStack Company Dashboard' 50 | default_dashboard: 'admin' 51 | help_url: 'http://doc.domain.com' 52 | 53 | Horizon setup with SSL 54 | ********************** 55 | 56 | .. 
code-block:: yaml 57 | 58 | horizon: 59 | server: 60 | enabled: true 61 | secret_key: MEGASECRET 62 | version: juno 63 | ssl: 64 | enabled: true 65 | authority: CA_Authority 66 | host: 67 | name: cloud.lab.cz 68 | cache: 69 | engine: 'memcached' 70 | host: '127.0.0.1' 71 | port: 11211 72 | prefix: 'CACHE_HORIZON' 73 | identity: 74 | engine: 'keystone' 75 | host: '127.0.0.1' 76 | port: 5000 77 | api_version: 2 78 | mail: 79 | host: '127.0.0.1' 80 | 81 | Horizon setup with multiple regions 82 | *********************************** 83 | 84 | .. code-block:: yaml 85 | 86 | horizon: 87 | server: 88 | enabled: true 89 | version: juno 90 | secret_key: MEGASECRET 91 | cache: 92 | engine: 'memcached' 93 | host: '127.0.0.1' 94 | port: 11211 95 | prefix: 'CACHE_HORIZON' 96 | identity: 97 | engine: 'keystone' 98 | host: '127.0.0.1' 99 | port: 5000 100 | api_version: 2 101 | mail: 102 | host: '127.0.0.1' 103 | regions: 104 | - name: cluster1 105 | address: http://cluster1.example.com:5000/v2.0 106 | - name: cluster2 107 | address: http://cluster2.example.com:5000/v2.0 108 | 109 | Horizon setup with sensu plugin 110 | ******************************* 111 | 112 | .. code-block:: yaml 113 | 114 | horizon: 115 | server: 116 | enabled: true 117 | version: juno 118 | sensu_api: 119 | host: localhost 120 | port: 4567 121 | plugins: 122 | - name: monitoring 123 | app: horizon_monitoring 124 | source: 125 | type: git 126 | address: git@repo1.robotice.cz:django/horizon-monitoring.git 127 | revision: master 128 | - name: api-mask 129 | app: api_mask 130 | mask_url: 'custom-url.cz' 131 | mask_protocol: 'http' 132 | source: 133 | type: git 134 | address: git@repo1.robotice.cz:django/horizon-api-mask.git 135 | revision: master 136 | 137 | Horizon Sensu plugin with multiple endpoints 138 | ******************************************** 139 | 140 | .. 
code-block:: yaml 141 | 142 | horizon: 143 | server: 144 | enabled: true 145 | version: juno 146 | sensu_api: 147 | dc1: 148 | host: localhost 149 | port: 4567 150 | dc2: 151 | host: anotherhost 152 | port: 4567 153 | 154 | Horizon setup with Billometer plugin 155 | ************************************ 156 | 157 | .. code-block:: yaml 158 | 159 | horizon: 160 | server: 161 | enabled: true 162 | version: juno 163 | billometer_api: 164 | host: localhost 165 | port: 9753 166 | api_version: 1 167 | plugins: 168 | - name: billing 169 | app: horizon_billing 170 | source: 171 | type: git 172 | address: git@repo1.robotice.cz:django/horizon-billing.git 173 | revision: master 174 | 175 | Horizon setup with Contrail plugin 176 | ********************************** 177 | 178 | .. code-block:: yaml 179 | 180 | horizon: 181 | server: 182 | enabled: true 183 | version: icehouse 184 | plugins: 185 | - name: contrail 186 | app: contrail_openstack_dashboard 187 | override: true 188 | source: 189 | type: git 190 | address: git@repo1.robotice.cz:django/horizon-contrail.git 191 | revision: master 192 | 193 | Horizon setup with sentry log handler 194 | ************************************* 195 | 196 | .. code-block:: yaml 197 | 198 | horizon: 199 | server: 200 | enabled: true 201 | version: juno 202 | ... 203 | logging: 204 | engine: raven 205 | dsn: http://pub:private@sentry1.test.cz/2 206 | 207 | OS Horizon from Git repository (multisite support) 208 | -------------------------------------------------- 209 | 210 | Simple Horizon setup 211 | ******************** 212 | 213 | .. 
code-block:: yaml 214 | 215 | horizon: 216 | server: 217 | enabled: true 218 | app: 219 | default: 220 | secret_key: MEGASECRET 221 | source: 222 | engine: git 223 | address: https://github.com/openstack/horizon.git 224 | revision: stable/kilo 225 | cache: 226 | engine: 'memcached' 227 | host: '127.0.0.1' 228 | port: 11211 229 | prefix: 'CACHE_DEFAULT' 230 | identity: 231 | engine: 'keystone' 232 | host: '127.0.0.1' 233 | port: 5000 234 | api_version: 2 235 | mail: 236 | host: '127.0.0.1' 237 | 238 | Themed Horizon multisite 239 | ************************ 240 | 241 | .. code-block:: yaml 242 | 243 | horizon: 244 | server: 245 | enabled: true 246 | app: 247 | openstack1c: 248 | secret_key: SECRET1 249 | source: 250 | engine: git 251 | address: https://github.com/openstack/horizon.git 252 | revision: stable/kilo 253 | plugin: 254 | contrail: 255 | app: contrail_openstack_dashboard 256 | override: true 257 | source: 258 | type: git 259 | address: git@repo1.robotice.cz:django/horizon-contrail.git 260 | revision: master 261 | theme: 262 | app: site1_theme 263 | source: 264 | type: git 265 | address: git@repo1.domain.com:django/horizon-site1-theme.git 266 | cache: 267 | engine: 'memcached' 268 | host: '127.0.0.1' 269 | port: 11211 270 | prefix: 'CACHE_SITE1' 271 | identity: 272 | engine: 'keystone' 273 | host: '127.0.0.1' 274 | port: 5000 275 | api_version: 2 276 | mail: 277 | host: '127.0.0.1' 278 | openstack2: 279 | secret_key: SECRET2 280 | source: 281 | engine: git 282 | address: https://repo1.domain.com/openstack/horizon.git 283 | revision: stable/kilo 284 | plugin: 285 | contrail: 286 | app: contrail_openstack_dashboard 287 | override: true 288 | source: 289 | type: git 290 | address: git@repo1.domain.com:django/horizon-contrail.git 291 | revision: master 292 | monitoring: 293 | app: horizon_monitoring 294 | source: 295 | type: git 296 | address: git@domain.com:django/horizon-monitoring.git 297 | revision: master 298 | theme: 299 | app: bootswatch_theme 300 | 
source: 301 | type: git 302 | address: git@repo1.robotice.cz:django/horizon-bootswatch-theme.git 303 | revision: master 304 | cache: 305 | engine: 'memcached' 306 | host: '127.0.0.1' 307 | port: 11211 308 | prefix: 'CACHE_SITE2' 309 | identity: 310 | engine: 'keystone' 311 | host: '127.0.0.1' 312 | port: 5000 313 | api_version: 3 314 | mail: 315 | host: '127.0.0.1' 316 | 317 | Horizon with API versions override 318 | ********************************** 319 | 320 | .. code-block:: yaml 321 | 322 | horizon: 323 | server: 324 | enabled: true 325 | app: 326 | openstack_api_override: 327 | secret_key: SECRET 328 | api_versions: 329 | identity: 3 330 | volume: 2 331 | source: 332 | engine: git 333 | address: https://github.com/openstack/horizon.git 334 | revision: stable/kilo 335 | 336 | Horizon with changed dashboard behaviour 337 | **************************************** 338 | 339 | .. code-block:: yaml 340 | 341 | horizon: 342 | server: 343 | enabled: true 344 | app: 345 | openstack_dashboard_override: 346 | secret_key: SECRET 347 | dashboards: 348 | settings: 349 | enabled: true 350 | project: 351 | enabled: false 352 | order: 10 353 | admin: 354 | enabled: false 355 | order: 20 356 | source: 357 | engine: git 358 | address: https://github.com/openstack/horizon.git 359 | revision: stable/kilo 360 | 361 | -------------- 362 | 363 | .. include:: navigation.txt 364 | -------------------------------------------------------------------------------- /doc/source/install/overview-mda.rst: -------------------------------------------------------------------------------- 1 | 2 | Model-driven architectures 3 | ========================== 4 | 5 | Model Driven Architecture (MDA) is an answer to the growing complexity of systems controlled by configuration management tools. It provides unified node classification with atomic service definitions. 6 | 7 | Core principles for deploying model-driven architectures: 
8 | 9 | - Atomicity - Services are separated at such a level of granularity that each can be deployed and replaced independently. 10 | - Reusability/replaceability - Different services serving the same role can be replaced without affecting neighbouring services. 11 | - System roles - Services may implement various roles, most often client/server variations. 12 | - Dynamic resources - Service metadata must always be available for dynamically created resources. 13 | - Change management - The strength lies not in describing the static state of a service but rather in the process of ever-changing improvements. 14 | 15 | 16 | Sample MDA Scenario 17 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 18 | 19 | The following image shows an example system with a reasonable number of services, some of which are outsourced to third-party providers. The OpenStack architecture is too big to fit here. 20 | 21 | .. figure:: figures/openstack_system.png 22 | :width: 100% 23 | :align: center 24 | 25 | We can identify several layers within the larger application systems. 26 | 27 | * Proxy layer - Distributing load to the application layer 28 | * Application layer - Application with caches 29 | * Persistence layer - Databases 30 | 31 | .. figure:: figures/mda_system_composition.png 32 | :width: 100% 33 | :align: center 34 | 35 | Application systems are supported by shared subsystems that span multiple application systems. These usually are: 36 | 37 | - Access & control system - SSH access, orchestration engine access, user authentication 38 | - Monitoring system - Events and metric collections 39 | - Essential systems - DNS, NTP, MTA clients 40 | 41 | 42 | Service Decomposition 43 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 44 | 45 | The following chapter shows the service decomposition of a GitLab all-in-one box. 46 | 47 | Server level 48 | ^^^^^^^^^^^^^^^^^^^^ 49 | 50 | Servers contain one or more systems that bring business value and several maintenance systems that are common to any node. The following list shows the basic systems found on almost any node. 
51 | 52 | - Application systems - Business Logic implementations 53 | - Access & control system - SSH access, orchestration engine access, user authentication 54 | - Monitoring system - Events and metric collections 55 | - Essential systems - DNS, NTP, MTA clients 56 | 57 | The systems can be seen in the following picture. 58 | 59 | .. figure:: figures/meta_host.png 60 | :width: 100% 61 | :align: center 62 | 63 | Multiple servers with various systems are defined by the reclass definition to simplify the node definitions. 64 | 65 | 66 | .. code-block:: yaml 67 | 68 | reclass: 69 | storage: 70 | node: 71 | tpc_int_prd_vcs01: 72 | name: vcs01 73 | domain: int.prd.tcp.cloudlab.cz 74 | classes: 75 | - system.linux.system.kvm 76 | - system.openssh.server.team.tcpcloud 77 | - system.gitlab.server.single 78 | params: 79 | salt_master_host: ${_param:reclass_config_master} 80 | gitlab_server_version: '7.7' 81 | gitlab_server_name: git.tcpcloud.eu 82 | gitlab_server_email: gitlab@robotice.cz 83 | gitlab_client_host: git.tcpcloud.eu 84 | gitlab_client_token: 85 | nginx_site_gitlab_ssl_authority: 86 | nginx_site_gitlab_server_host: git.tcpcloud.eu 87 | mysql_admin_password: 88 | mysql_gitlab_password: 89 | tpc_int_prd_svc01: 90 | .. 91 | tpc_int_prd_svc0N: 92 | .. 93 | 94 | The actual generated definition is shown in the following code. All of the logic is encapsulated by systems that are customised by a set of parameters, some of which may be GPG encrypted. This form is better to use during development, as reclass-powered metadata requires explicit state declarations that may not be automatic and only get in the way at the development stage. 95 | 96 | .. 
code-block:: yaml 97 | 98 | classes: 99 | - system.linux.system.kvm 100 | - system.openssh.server.team.tcpcloud 101 | - system.gitlab.server.single 102 | parameters: 103 | _param: 104 | salt_master_host: master1.robotice.cz 105 | mysql_admin_password: 106 | gitlab_client_token: 107 | gitlab_server_name: git.tcpcloud.eu 108 | gitlab_client_host: git.tcpcloud.eu 109 | mysql_gitlab_password: 110 | nginx_site_gitlab_server_host: git.tcpcloud.eu 111 | gitlab_server_email: gitlab@robotice.cz 112 | gitlab_server_version: '7.7' 113 | nginx_site_gitlab_ssl_authority: 114 | linux: 115 | system: 116 | name: vcs01 117 | domain: int.prd.tcp.cloudlab.cz 118 | 119 | System level 120 | ^^^^^^^^^^^^^^^^^^^^ 121 | 122 | Each server usually runs multiple systems, and a system may span one or more servers. There are definitions of all-in-one solutions for common systems for development purposes. 123 | 124 | .. figure:: figures/meta_system.png 125 | :width: 100% 126 | :align: center 127 | 128 | The systems are classified by the following simple rule: 129 | 130 | - Single - Usually an all-in-one application system on a node (Taiga, Gitlab) 131 | - Multi - Multiple all-in-one application systems on a node (Horizon, Wordpress) 132 | - Cluster - The node is part of a larger cluster of nodes (OpenStack controllers, larger webapp applications) 133 | 134 | Redis itself does not form any system but is part of many well-known applications; the following example shows the usage of Redis within a complex Gitlab system. 135 | 136 | .. 
code-block:: yaml 137 | 138 | classes: 139 | - service.git.client 140 | - service.gitlab.server.single 141 | - service.nginx.server.single 142 | - service.mysql.server.local 143 | - service.redis.server.local 144 | parameters: 145 | _param: 146 | nginx_site_gitlab_host: ${linux:network:fqdn} 147 | mysql_admin_user: root 148 | mysql_admin_password: password 149 | mysql_gitlab_password: password 150 | mysql: 151 | server: 152 | database: 153 | gitlab: 154 | encoding: UTF8 155 | locale: cs_CZ 156 | users: 157 | - name: gitlab 158 | password: ${_param:mysql_gitlab_password} 159 | host: 127.0.0.1 160 | rights: all privileges 161 | nginx: 162 | server: 163 | site: 164 | gitlab_server: 165 | enabled: true 166 | type: gitlab 167 | name: server 168 | ssl: 169 | enabled: true 170 | authority: ${_param:nginx_site_gitlab_ssl_authority} 171 | certificate: ${_param:nginx_site_gitlab_host} 172 | mode: secure 173 | host: 174 | name: ${_param:nginx_site_gitlab_server_host} 175 | port: 443 176 | 177 | Service level 178 | ^^^^^^^^^^^^^^^^^^^^ 179 | 180 | The services are the atomic units of configuration management. A SaltStack formula or a Puppet recipe with a default metadata set can be considered a service. Each service implements one or more roles and, together with other services, forms systems. The following list shows the decomposition: 181 | 182 | - Formula - Set of states that together perform an atomic service 183 | - State - Declarative definition of various resources (package, files, services) 184 | - Module - Imperative interaction enforcing the defined state for each State 185 | 186 | Given the Redis formula from the Gitlab example, we set a basic set of parameters that can be used for the actual service configuration as well as for the configuration of supporting services. 187 | 188 | 189 | .. figure:: figures/meta_service.png 190 | :width: 100% 191 | :align: center 192 | 193 | The following code shows a sample Redis formula with several states: pkg, sysctl, file and service. 194 | 195 | .. 
code-block:: yaml 196 | 197 | {%- from "redis/map.jinja" import server with context %} 198 | {%- if server.enabled %} 199 | 200 | redis_packages: 201 | pkg.installed: 202 | - names: {{ server.pkgs }} 203 | 204 | vm.overcommit_memory: 205 | sysctl.present: 206 | - value: 1 207 | 208 | {{ server.conf_dir }}: 209 | file.managed: 210 | - source: {{ conf_file_source }} 211 | - template: jinja 212 | - user: root 213 | - group: root 214 | - mode: 644 215 | - require: 216 | - pkg: redis_packages 217 | 218 | redis_service: 219 | service.running: 220 | - enable: true 221 | - name: {{ server.service }} 222 | - watch: 223 | - file: {{ server.conf_dir }} 224 | 225 | {%- endif %} 226 | 227 | Along with the actual node definition, there are service-level metadata fragments for common situations. The following fragment shows a localhost-only Redis database. 228 | 229 | .. code-block:: yaml 230 | 231 | applications: 232 | - redis 233 | parameters: 234 | redis: 235 | server: 236 | enabled: true 237 | bind: 238 | address: 127.0.0.1 239 | port: 6379 240 | protocol: tcp 241 | 242 | Service Decomposition 243 | ~~~~~~~~~~~~~~~~~~~~~ 244 | 245 | reclass has some features that make it unique: 246 | 247 | - Including and deep merging of data structures through the usage of classes 248 | - Parameter interpolation for cross-referencing parameter values 249 | 250 | The system aggregation level: 251 | 252 | ..
code-block:: yaml 253 | 254 | classes: 255 | - system.linux.system.kvm 256 | - system.openssh.server.team.tcpcloud 257 | - system.gitlab.server.single 258 | parameters: 259 | _param: 260 | salt_master_host: master1.robotice.cz 261 | mysql_admin_password: 262 | gitlab_client_token: 263 | gitlab_server_name: git.tcpcloud.eu 264 | gitlab_client_host: git.tcpcloud.eu 265 | mysql_gitlab_password: 266 | nginx_site_gitlab_server_host: git.tcpcloud.eu 267 | gitlab_server_email: gitlab@robotice.cz 268 | gitlab_server_version: '7.7' 269 | nginx_site_gitlab_ssl_authority: 270 | 271 | The single system level: 272 | 273 | .. code-block:: yaml 274 | 275 | classes: 276 | - service.git.client 277 | - service.gitlab.server.single 278 | - service.nginx.server.single 279 | - service.mysql.server.local 280 | - service.redis.server.local 281 | parameters: 282 | _param: 283 | nginx_site_gitlab_host: ${linux:network:fqdn} 284 | mysql_admin_user: root 285 | mysql_admin_password: password 286 | mysql_gitlab_password: password 287 | 288 | The single service level: 289 | 290 | .. code-block:: yaml 291 | 292 | applications: 293 | - redis 294 | parameters: 295 | redis: 296 | server: 297 | enabled: true 298 | bind: 299 | address: 127.0.0.1 300 | port: 6379 301 | protocol: tcp 302 | 303 | -------------- 304 | 305 | .. 
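The two reclass features shown across these levels (class merging and ``${path:to:value}`` parameter interpolation) can be sketched with a small resolver. This is an illustrative simplification under stated assumptions, not reclass's actual implementation: real reclass also performs deep class merging, reference escaping, and multi-pass resolution. The function name ``interpolate`` and the sample data are hypothetical.

```python
import re

# Hypothetical, simplified sketch of reclass-style ${a:b:c} interpolation;
# real reclass also deep-merges parameters from included classes.
REF = re.compile(r"\$\{([^}]+)\}")

def interpolate(params):
    """Resolve ${path:to:value} references against a nested dict."""
    def lookup(path):
        node = params
        for key in path.split(":"):
            node = node[key]  # each ':' descends one level
        return node

    def resolve(value):
        if isinstance(value, dict):
            return {k: resolve(v) for k, v in value.items()}
        if isinstance(value, list):
            return [resolve(v) for v in value]
        if isinstance(value, str):
            return REF.sub(lambda m: str(lookup(m.group(1))), value)
        return value

    return resolve(params)

# Mirrors the Gitlab example above: _param values are cross-referenced.
params = {
    "_param": {"mysql_gitlab_password": "password"},
    "mysql": {"server": {"database": {"gitlab": {"users": [
        {"name": "gitlab",
         "password": "${_param:mysql_gitlab_password}"}]}}}},
}
resolved = interpolate(params)
print(resolved["mysql"]["server"]["database"]["gitlab"]["users"][0]["password"])
# prints: password
```

The single-pass resolution is enough for flat ``_param`` references like those in the examples above; chained references would need repeated passes.

..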
include:: navigation.txt -------------------------------------------------------------------------------- /doc/source/_static/scripts/bootstrap/bootstrap-salt-master.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash -e 2 | 3 | # 4 | # ENVIRONMENT 5 | # 6 | 7 | OS_DISTRIBUTION=${OS_DISTRIBUTION:-ubuntu} 8 | OS_NETWORKING=${OS_NETWORKING:-opencontrail} 9 | OS_VERSION=${OS_VERSION:-mitaka} 10 | OS_DEPLOYMENT=${OS_DEPLOYMENT:-single} 11 | OS_SYSTEM="${OS_VERSION}_${OS_DISTRIBUTION}_${OS_NETWORKING}_${OS_DEPLOYMENT}" 12 | 13 | SALT_SOURCE=${SALT_SOURCE:-pkg} 14 | SALT_VERSION=${SALT_VERSION:-latest} 15 | 16 | FORMULA_SOURCE=${FORMULA_SOURCE:-git} 17 | FORMULA_PATH=${FORMULA_PATH:-/usr/share/salt-formulas} 18 | FORMULA_BRANCH=${FORMULA_BRANCH:-master} 19 | FORMULA_REPOSITORY=${FORMULA_REPOSITORY:-deb [arch=amd64] http://apt.tcpcloud.eu/nightly/ trusty tcp-salt} 20 | FORMULA_GPG=${FORMULA_GPG:-http://apt.tcpcloud.eu/public.gpg} 21 | 22 | if [ "$FORMULA_SOURCE" == "git" ]; then 23 | SALT_ENV="dev" 24 | elif [ "$FORMULA_SOURCE" == "pkg" ]; then 25 | SALT_ENV="prd" 26 | fi 27 | 28 | RECLASS_ADDRESS=${RECLASS_ADDRESS:-https://github.com/tcpcloud/openstack-salt-model.git} 29 | RECLASS_BRANCH=${RECLASS_BRANCH:-master} 30 | RECLASS_SYSTEM=${RECLASS_SYSTEM:-$OS_SYSTEM} 31 | 32 | CONFIG_HOSTNAME=${CONFIG_HOSTNAME:-config} 33 | CONFIG_DOMAIN=${CONFIG_DOMAIN:-openstack.local} 34 | CONFIG_HOST=${CONFIG_HOSTNAME}.${CONFIG_DOMAIN} 35 | CONFIG_ADDRESS=${CONFIG_ADDRESS:-10.10.10.200} 36 | 37 | MINION_MASTER=${MINION_MASTER:-$CONFIG_ADDRESS} 38 | MINION_HOSTNAME=${MINION_HOSTNAME:-minion} 39 | MINION_ID=${MINION_HOSTNAME}.${CONFIG_DOMAIN} 40 | 41 | install_salt_master_pkg() 42 | { 43 | echo -e "\nPreparing base OS repository ...\n" 44 | 45 | echo -e "deb [arch=amd64] http://apt.tcpcloud.eu/nightly/ trusty main security extra tcp" > /etc/apt/sources.list 46 | wget -O - http://apt.tcpcloud.eu/public.gpg | apt-key add - 47 | 48 | apt-get 
clean 49 | apt-get update 50 | 51 | echo -e "\nInstalling salt master ...\n" 52 | 53 | apt-get install reclass git -y 54 | 55 | if [ "$SALT_VERSION" == "latest" ]; then 56 | apt-get install -y salt-common salt-master salt-minion 57 | else 58 | apt-get install -y --force-yes salt-common=$SALT_VERSION salt-master=$SALT_VERSION salt-minion=$SALT_VERSION 59 | fi 60 | 61 | configure_salt_master 62 | 63 | install_salt_minion_pkg "master" 64 | 65 | echo -e "\nRestarting services ...\n" 66 | service salt-master restart 67 | [ -f /etc/salt/pki/minion/minion_master.pub ] && rm -f /etc/salt/pki/minion/minion_master.pub 68 | service salt-minion restart 69 | salt-call pillar.data > /dev/null 2>&1 70 | } 71 | 72 | install_salt_master_pip() 73 | { 74 | echo -e "\nPreparing base OS repository ...\n" 75 | 76 | echo -e "deb [arch=amd64] http://apt.tcpcloud.eu/nightly/ trusty main security extra tcp" > /etc/apt/sources.list 77 | wget -O - http://apt.tcpcloud.eu/public.gpg | apt-key add - 78 | 79 | apt-get clean 80 | apt-get update 81 | 82 | echo -e "\nInstalling salt master ...\n" 83 | 84 | if [ -x "`which invoke-rc.d 2>/dev/null`" -a -x "/etc/init.d/salt-minion" ] ; then 85 | apt-get purge -y salt-minion salt-common && apt-get autoremove -y 86 | fi 87 | 88 | apt-get install -y python-pip python-dev zlib1g-dev reclass git 89 | 90 | if [ "$SALT_VERSION" == "latest" ]; then 91 | pip install salt 92 | else 93 | pip install salt==$SALT_VERSION 94 | fi 95 | 96 | wget -O /etc/init.d/salt-master https://anonscm.debian.org/cgit/pkg-salt/salt.git/plain/debian/salt-master.init && chmod 755 /etc/init.d/salt-master 97 | ln -s /usr/local/bin/salt-master /usr/bin/salt-master 98 | 99 | configure_salt_master 100 | 101 | install_salt_minion_pkg "master" 102 | 103 | echo -e "\nRestarting services ...\n" 104 | service salt-master restart 105 | [ -f /etc/salt/pki/minion/minion_master.pub ] && rm -f /etc/salt/pki/minion/minion_master.pub 106 | service salt-minion restart 107 | salt-call pillar.data > 
/dev/null 2>&1 108 | } 109 | 110 | configure_salt_master() 111 | { 112 | 113 | [ ! -d /etc/salt/master.d ] && mkdir -p /etc/salt/master.d 114 | 115 | cat << 'EOF' > /etc/salt/master.d/master.conf 116 | file_roots: 117 | base: 118 | - /usr/share/salt-formulas/env 119 | pillar_opts: False 120 | open_mode: True 121 | reclass: &reclass 122 | storage_type: yaml_fs 123 | inventory_base_uri: /srv/salt/reclass 124 | ext_pillar: 125 | - reclass: *reclass 126 | master_tops: 127 | reclass: *reclass 128 | EOF 129 | 130 | echo "Configuring reclass ..." 131 | 132 | [ ! -d /etc/reclass ] && mkdir /etc/reclass 133 | cat << 'EOF' > /etc/reclass/reclass-config.yml 134 | storage_type: yaml_fs 135 | pretty_print: True 136 | output: yaml 137 | inventory_base_uri: /srv/salt/reclass 138 | EOF 139 | 140 | git clone ${RECLASS_ADDRESS} /srv/salt/reclass -b ${RECLASS_BRANCH} 141 | 142 | if [ ! -f "/srv/salt/reclass/nodes/${CONFIG_HOST}.yml" ]; then 143 | 144 | cat << EOF > /srv/salt/reclass/nodes/${CONFIG_HOST}.yml 145 | classes: 146 | - service.git.client 147 | - system.linux.system.single 148 | - system.openssh.client.workshop 149 | - system.salt.master.single 150 | - system.salt.master.formula.$FORMULA_SOURCE 151 | - system.reclass.storage.salt 152 | - system.reclass.storage.system.$RECLASS_SYSTEM 153 | parameters: 154 | _param: 155 | reclass_data_repository: "$RECLASS_ADDRESS" 156 | reclass_data_revision: $RECLASS_BRANCH 157 | salt_formula_branch: $FORMULA_BRANCH 158 | reclass_config_master: $CONFIG_ADDRESS 159 | single_address: $CONFIG_ADDRESS 160 | salt_master_host: 127.0.0.1 161 | salt_master_base_environment: $SALT_ENV 162 | linux: 163 | system: 164 | name: $CONFIG_HOSTNAME 165 | domain: $CONFIG_DOMAIN 166 | EOF 167 | 168 | if [ "$SALT_VERSION" == "latest" ]; then 169 | 170 | cat << EOF >> /srv/salt/reclass/nodes/${CONFIG_HOST}.yml 171 | salt: 172 | master: 173 | accept_policy: open_mode 174 | source: 175 | engine: $SALT_SOURCE 176 | minion: 177 | source: 178 | engine: $SALT_SOURCE 
179 | EOF 180 | 181 | else 182 | 183 | cat << EOF >> /srv/salt/reclass/nodes/${CONFIG_HOST}.yml 184 | salt: 185 | master: 186 | accept_policy: open_mode 187 | source: 188 | engine: $SALT_SOURCE 189 | version: $SALT_VERSION 190 | minion: 191 | source: 192 | engine: $SALT_SOURCE 193 | version: $SALT_VERSION 194 | EOF 195 | 196 | fi 197 | 198 | fi 199 | 200 | service salt-master restart 201 | } 202 | 203 | install_salt_minion_pkg() 204 | { 205 | echo -e "\nInstalling salt minion ...\n" 206 | 207 | if [ "$SALT_VERSION" == "latest" ]; then 208 | apt-get install -y salt-common salt-minion 209 | else 210 | apt-get install -y --force-yes salt-common=$SALT_VERSION salt-minion=$SALT_VERSION 211 | fi 212 | 219 | [ ! -d /etc/salt/minion.d ] && mkdir -p /etc/salt/minion.d 220 | echo -e "master: 127.0.0.1\nid: $CONFIG_HOST" > /etc/salt/minion.d/minion.conf 221 | 222 | service salt-minion restart 223 | } 224 | 225 | install_salt_minion_pip() 226 | { 227 | echo -e "\nInstalling salt minion ...\n" 228 | 229 | [ ! -d /etc/salt/minion.d ] && mkdir -p /etc/salt/minion.d 230 | echo -e "master: 127.0.0.1\nid: $CONFIG_HOST" > /etc/salt/minion.d/minion.conf 231 | 232 | wget -O /etc/init.d/salt-minion https://anonscm.debian.org/cgit/pkg-salt/salt.git/plain/debian/salt-minion.init && chmod 755 /etc/init.d/salt-minion 233 | ln -s /usr/local/bin/salt-minion /usr/bin/salt-minion 234 | 235 | service salt-minion restart 236 | } 237 | 238 | install_salt_formula_pkg() 239 | { 240 | echo "Configuring necessary formulas ..." 241 | which wget > /dev/null || (apt-get update; apt-get install -y wget) 242 | 243 | echo "${FORMULA_REPOSITORY}" > /etc/apt/sources.list.d/salt-formulas.list 244 | wget -O - "${FORMULA_GPG}" | apt-key add - 245 | 246 | apt-get clean 247 | apt-get update 248 | 249 | [ !
-d /srv/salt/reclass/classes/service ] && mkdir -p /srv/salt/reclass/classes/service 250 | 251 | declare -a formula_services=("linux" "reclass" "salt" "openssh" "ntp" "git" "nginx" "collectd" "sensu" "heka" "sphinx") 252 | 253 | for formula_service in "${formula_services[@]}"; do 254 | echo -e "\nConfiguring salt formula ${formula_service} ...\n" 255 | [ ! -d "${FORMULA_PATH}/env/${formula_service}" ] && \ 256 | apt-get install -y salt-formula-${formula_service} 257 | [ ! -L "/srv/salt/reclass/classes/service/${formula_service}" ] && \ 258 | ln -s ${FORMULA_PATH}/reclass/service/${formula_service} /srv/salt/reclass/classes/service/${formula_service} 259 | done 260 | 261 | [ ! -d /srv/salt/env ] && mkdir -p /srv/salt/env 262 | [ ! -L /srv/salt/env/prd ] && ln -s ${FORMULA_PATH}/env /srv/salt/env/prd 263 | } 264 | 265 | install_salt_formula_git() 266 | { 267 | echo "Configuring necessary formulas ..." 268 | 269 | [ ! -d /srv/salt/reclass/classes/service ] && mkdir -p /srv/salt/reclass/classes/service 270 | 271 | declare -a formula_services=("linux" "reclass" "salt" "openssh" "ntp" "git" "nginx" "collectd" "sensu" "heka" "sphinx") 272 | 273 | for formula_service in "${formula_services[@]}"; do 274 | echo -e "\nConfiguring salt formula ${formula_service} ...\n" 275 | [ ! -d "${FORMULA_PATH}/env/_formulas/${formula_service}" ] && \ 276 | git clone https://github.com/tcpcloud/salt-formula-${formula_service}.git ${FORMULA_PATH}/env/_formulas/${formula_service} -b ${FORMULA_BRANCH} 277 | [ ! -L "/usr/share/salt-formulas/env/${formula_service}" ] && \ 278 | ln -s ${FORMULA_PATH}/env/_formulas/${formula_service}/${formula_service} /usr/share/salt-formulas/env/${formula_service} 279 | [ ! -L "/srv/salt/reclass/classes/service/${formula_service}" ] && \ 280 | ln -s ${FORMULA_PATH}/env/_formulas/${formula_service}/metadata/service /srv/salt/reclass/classes/service/${formula_service} 281 | done 282 | 283 | [ ! -d /srv/salt/env ] && mkdir -p /srv/salt/env 284 | [ ! 
-L /srv/salt/env/dev ] && ln -s /usr/share/salt-formulas/env /srv/salt/env/dev 285 | } 286 | 287 | run_salt_states() 288 | { 289 | echo -e "\nReclass metadata ...\n" 290 | reclass --nodeinfo ${CONFIG_HOST} 291 | 292 | echo -e "\nSalt grains metadata ...\n" 293 | salt-call grains.items --no-color 294 | 295 | echo -e "\nSalt pillar metadata ...\n" 296 | salt-call pillar.data --no-color 297 | 298 | echo -e "\nRunning necessary base states ...\n" 299 | salt-call --retcode-passthrough state.sls linux,salt.minion,salt --no-color 300 | 301 | echo -e "\nRunning complete state ...\n" 302 | salt-call --retcode-passthrough state.highstate --no-color 303 | } 304 | 305 | if [ "$SALT_SOURCE" == "pkg" ]; then 306 | install_salt_master_pkg 307 | install_salt_minion_pkg 308 | elif [ "$SALT_SOURCE" == "pip" ]; then 309 | install_salt_master_pip 310 | install_salt_minion_pip 311 | fi 312 | 313 | if [ "$FORMULA_SOURCE" == "pkg" ]; then 314 | install_salt_formula_pkg 315 | elif [ "$FORMULA_SOURCE" == "git" ]; then 316 | install_salt_formula_git 317 | fi 318 | 319 | run_salt_states 320 | -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. 
For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 
47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. 
Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "{}" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright 2015 tcp cloud a.s. 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /doc/source/install/configure-infrastructure.rst: -------------------------------------------------------------------------------- 1 | 2 | Configuring the infrastructure services 3 | ======================================= 4 | 5 | RabbitMQ 6 | -------- 7 | 8 | RabbitMQ single node 9 | ******************** 10 | 11 | RabbitMQ as AMQP broker with admin user and vhosts 12 | 13 | .. code-block:: yaml 14 | 15 | rabbitmq: 16 | server: 17 | enabled: true 18 | bind: 19 | address: 0.0.0.0 20 | port: 5672 21 | secret_key: rabbit_master_cookie 22 | admin: 23 | name: adminuser 24 | password: pwd 25 | plugins: 26 | - amqp_client 27 | - rabbitmq_management 28 | virtual_hosts: 29 | - enabled: true 30 | host: '/monitor' 31 | user: 'monitor' 32 | password: 'password' 33 | 34 | RabbitMQ as a Stomp broker 35 | 36 | .. 
code-block:: yaml 37 | 38 | rabbitmq: 39 | server: 40 | enabled: true 41 | secret_key: rabbit_master_cookie 42 | bind: 43 | address: 0.0.0.0 44 | port: 5672 45 | virtual_hosts: 46 | - enabled: true 47 | host: '/monitor' 48 | user: 'monitor' 49 | password: 'password' 50 | plugins: 51 | - rabbitmq_stomp 52 | 53 | RabbitMQ cluster 54 | **************** 55 | 56 | RabbitMQ as base cluster node 57 | 58 | .. code-block:: yaml 59 | 60 | rabbitmq: 61 | server: 62 | enabled: true 63 | bind: 64 | address: 0.0.0.0 65 | port: 5672 66 | secret_key: rabbit_master_cookie 67 | admin: 68 | name: adminuser 69 | password: pwd 70 | cluster: 71 | enabled: true 72 | role: master 73 | mode: disc 74 | members: 75 | - name: openstack1 76 | host: 10.10.10.212 77 | - name: openstack2 78 | host: 10.10.10.213 79 | 80 | HA Queues definition 81 | 82 | .. code-block:: yaml 83 | 84 | rabbitmq: 85 | server: 86 | enabled: true 87 | ... 88 | virtual_hosts: 89 | - enabled: true 90 | host: '/monitor' 91 | user: 'monitor' 92 | password: 'password' 93 | policies: 94 | - name: HA 95 | pattern: '^(?!amq\.).*' 96 | definition: '{"ha-mode": "all"}' 97 | 98 | 99 | MySQL 100 | ----- 101 | 102 | MySQL database - simple 103 | *********************** 104 | 105 | .. code-block:: yaml 106 | 107 | mysql: 108 | server: 109 | enabled: true 110 | version: '5.5' 111 | admin: 112 | user: root 113 | password: pwd 114 | bind: 115 | address: '127.0.0.1' 116 | port: 3306 117 | database: 118 | name: 119 | encoding: 'utf8' 120 | users: 121 | - name: 'username' 122 | password: 'password' 123 | host: 'localhost' 124 | rights: 'all privileges' 125 | 126 | MySQL database - configured 127 | *************************** 128 | 129 | .. 
code-block:: yaml 130 | 131 | mysql: 132 | server: 133 | enabled: true 134 | version: '5.5' 135 | admin: 136 | user: root 137 | password: pwd 138 | bind: 139 | address: '127.0.0.1' 140 | port: 3306 141 | key_buffer: 250M 142 | max_allowed_packet: 32M 143 | max_connections: 1000 144 | thread_stack: 512K 145 | thread_cache_size: 64 146 | query_cache_limit: 16M 147 | query_cache_size: 96M 148 | force_encoding: utf8 149 | database: 150 | name: 151 | encoding: 'utf8' 152 | users: 153 | - name: 'username' 154 | password: 'password' 155 | host: 'localhost' 156 | rights: 'all privileges' 157 | 158 | Galera database cluster 159 | ----------------------- 160 | 161 | Galera cluster master node 162 | ************************** 163 | 164 | .. code-block:: yaml 165 | 166 | galera: 167 | master: 168 | enabled: true 169 | name: openstack 170 | bind: 171 | address: 192.168.0.1 172 | port: 3306 173 | members: 174 | - host: 192.168.0.1 175 | port: 4567 176 | - host: 192.168.0.2 177 | port: 4567 178 | admin: 179 | user: root 180 | password: pwd 181 | database: 182 | name: 183 | encoding: 'utf8' 184 | users: 185 | - name: 'username' 186 | password: 'password' 187 | host: 'localhost' 188 | rights: 'all privileges' 189 | 190 | Galera cluster slave node 191 | ************************* 192 | 193 | .. code-block:: yaml 194 | 195 | galera: 196 | slave: 197 | enabled: true 198 | name: openstack 199 | bind: 200 | address: 192.168.0.2 201 | port: 3306 202 | members: 203 | - host: 192.168.0.1 204 | port: 4567 205 | - host: 192.168.0.2 206 | port: 4567 207 | admin: 208 | user: root 209 | password: pass 210 | 211 | Galera cluster - Usage 212 | 213 | MySQL Galera check scripts 214 | 215 | .. code-block:: bash 216 | 217 | mysql> SHOW STATUS LIKE 'wsrep%'; 218 | 219 | mysql> SHOW STATUS LIKE 'wsrep_cluster_size'; 220 | 221 | Galera monitoring command, performed from an extra server 222 | 223 | ..
code-block:: bash 224 | 225 | garbd -a gcomm://ipaddrofone:4567 -g my_wsrep_cluster -l /tmp/1.out -d 226 | 227 | 1. salt-call state.sls mysql 228 | 2. Comment out everything starting with wsrep* (wsrep_provider, wsrep_cluster, wsrep_sst) 229 | 3. service mysql start 230 | 4. Run mysql_secure_installation on each node and set the root password. 231 | 232 | .. code-block:: bash 233 | 234 | Enter current password for root (enter for none): 235 | OK, successfully used password, moving on... 236 | 237 | Setting the root password ensures that nobody can log into the MySQL 238 | root user without the proper authorisation. 239 | 240 | Set root password? [Y/n] y 241 | New password: 242 | Re-enter new password: 243 | Password updated successfully! 244 | Reloading privilege tables.. 245 | ... Success! 246 | 247 | By default, a MySQL installation has an anonymous user, allowing anyone 248 | to log into MySQL without having to have a user account created for 249 | them. This is intended only for testing, and to make the installation 250 | go a bit smoother. You should remove them before moving into a 251 | production environment. 252 | 253 | Remove anonymous users? [Y/n] y 254 | ... Success! 255 | 256 | Normally, root should only be allowed to connect from 'localhost'. This 257 | ensures that someone cannot guess at the root password from the network. 258 | 259 | Disallow root login remotely? [Y/n] n 260 | ... skipping. 261 | 262 | By default, MySQL comes with a database named 'test' that anyone can 263 | access. This is also intended only for testing, and should be removed 264 | before moving into a production environment. 265 | 266 | Remove test database and access to it? [Y/n] y 267 | - Dropping test database... 268 | ... Success! 269 | - Removing privileges on test database... 270 | ... Success! 271 | 272 | Reloading the privilege tables will ensure that all changes made so far 273 | will take effect immediately. 274 | 275 | Reload privilege tables now? [Y/n] y 276 | ... Success!
277 | 278 | Cleaning up... 279 | 280 | 5. service mysql stop 281 | 6. Uncomment all wsrep* lines again; on the first server only, keep just wsrep_cluster_address='gcomm://' in my.cnf. 282 | 7. Start the first node. 283 | 8. Start the third node, which connects to the first one. 284 | 9. Start the second node, which connects to the third one. 285 | 10. After the cluster is up, change the cluster address on the first node without restarting the database, and update my.cnf accordingly. 286 | 287 | .. code-block:: bash 288 | 289 | mysql> SET GLOBAL wsrep_cluster_address='gcomm://10.0.0.2'; 290 | 291 | Metering database (Graphite) 292 | ---------------------------- 293 | 294 | 1. Set up the monitoring node for metering. 295 | 296 | .. code-block:: bash 297 | 298 | root@cfg01:~# salt 'mon01*' state.sls git,rabbitmq,postgresql 299 | root@cfg01:~# salt 'mon01*' state.sls graphite,apache 300 | 301 | 2. Make some manual adjustments. 302 | 303 | .. code-block:: bash 304 | 305 | root@mon01:~# service carbon-aggregator start 306 | root@mon01:~# apt-get install python-django=1.6.1-2ubuntu0.11 307 | root@mon01:~# service apache2 restart 308 | 309 | 3. Update all client nodes in the infrastructure for the metrics service. 310 | 311 | .. code-block:: bash 312 | 313 | root@cfg01:~# salt "*" state.sls collectd.client 314 | 315 | 4. Check the metering service output in the browser: 316 | 317 | http://185.22.97.69:8080 318 | 319 | Monitoring server (Sensu) 320 | ------------------------- 321 | 322 | Installation 323 | ************ 324 | 325 | 1. Set up the monitoring node. 326 | 327 | .. code-block:: bash 328 | 329 | root@cfg01:~# salt 'mon01*' state.sls git,rabbitmq,redis 330 | root@cfg01:~# salt 'mon01*' state.sls sensu 331 | 332 | 2. Update all client nodes in the infrastructure. 333 | 334 | .. code-block:: bash 335 | 336 | root@cfg01:~# salt "*" state.sls sensu.client 337 | 338 | 3. Update the check definitions based on the model on the Sensu server. 340 | ..
code-block:: bash 341 | 342 | root@cfg01:~# salt "*" state.sls sensu.client 343 | root@cfg01:~# salt "*" state.sls salt 344 | root@cfg01:~# salt "*" mine.flush 345 | root@cfg01:~# salt "*" mine.update 346 | root@cfg01:~# salt "*" service.restart salt-minion 347 | root@cfg01:~# salt "mon*" state.sls sensu.server 348 | 349 | # as a one-liner 350 | 351 | salt "*" state.sls sensu.client; salt "*" state.sls salt.minion; salt "*" mine.flush; salt "*" mine.update; salt "*" service.restart salt-minion; salt "mon*" state.sls sensu.server 352 | 353 | salt 'mon*' service.restart rabbitmq-server; salt 'mon*' service.restart sensu-server; salt 'mon*' service.restart sensu-api; salt '*' service.restart sensu-client 354 | 355 | 4. View the monitored infrastructure in the web user interface. 356 | 357 | .. code-block:: bash 358 | 359 | http://185.22.97.69:8088 360 | 361 | Creating checks 362 | --------------- 363 | 364 | Checks can be created in two ways. 365 | 366 | Service driven checks 367 | ********************* 368 | 369 | Checks are created and populated by existing services. The check definition is stored in ``formula_name/files/sensu.conf``. For example, the SSH service creates a check that verifies the sshd process is running. 370 | 371 | .. code-block:: yaml 372 | 373 | local_openssh_server_proc: 374 | command: "PATH=$PATH:/usr/lib64/nagios/plugins:/usr/lib/nagios/plugins check_procs -a '/usr/sbin/sshd' -u root -c 1:1" 375 | interval: 60 376 | occurrences: 1 377 | subscribers: 378 | - local-openssh-server 379 | 380 | Arbitrary check definitions 381 | *************************** 382 | 383 | These custom checks are created from definition files located in ``system.sensu.server.checks``; this class must be included in the monitoring node definition. 384 | 385 | ..
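code-block:: yaml

   # Illustration only -- a hypothetical reclass node definition snippet,
   # not taken from the upstream model: the class is pulled into the
   # monitoring node by listing it under ``classes``.
   classes:
   - system.sensu.server.checks

The checks themselves are then defined under the class parameters:

..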
code-block:: yaml 386 | 387 | parameters: 388 | sensu: 389 | server: 390 | checks: 391 | - name: local_service_name_proc 392 | command: "PATH=$PATH:/usr/lib64/nagios/plugins:/usr/lib/nagios/plugins check_procs -C service-name" 393 | interval: 60 394 | occurrences: 1 395 | subscribers: 396 | - local-service-name-server 397 | 398 | Create file ``/etc/sensu/conf.d/check_graphite.json``: 399 | 400 | .. code-block:: json 401 | 402 | { 403 | "checks": { 404 | "remote_graphite_users": { 405 | "subscribers": [ 406 | "remote-network" 407 | ], 408 | "command": "~/sensu-plugins-graphite/bin/check-graphite-stats.rb --host 127.0.0.1 --period -2mins --target 'default_prd.*.users.users' --warn 1 --crit 2", 409 | "handlers": [ 410 | "default" 411 | ], 412 | "occurrences": 1, 413 | "interval": 30 414 | } 415 | } 416 | } 417 | 418 | Restart the sensu-server service: 419 | 420 | .. code-block:: bash 421 | 422 | root@mon01:~# service sensu-server restart 423 | 424 | -------------- 425 | 426 | .. include:: navigation.txt 427 | -------------------------------------------------------------------------------- /doc/source/_static/scripts/openstack_single.hot: -------------------------------------------------------------------------------- 1 | heat_template_version: 2013-05-23 2 | description: Base Heat stack with simple OS setup 3 | parameters: 4 | key_name: 5 | type: string 6 | default: openstack_salt_key 7 | key_value: 8 | type: string 9 | salt_source: 10 | type: string 11 | default: pkg 12 | salt_version: 13 | type: string 14 | default: latest 15 | formula_source: 16 | type: string 17 | default: git 18 | formula_path: 19 | type: string 20 | default: /usr/share/salt-formulas 21 | formula_branch: 22 | type: string 23 | default: master 24 | reclass_address: 25 | type: string 26 | default: https://github.com/tcpcloud/openstack-salt-model.git 27 | reclass_branch: 28 | type: string 29 | default: master 30 | os_version: 31 | type: string 32 | default: kilo 33 | os_distribution: 34 | type: string 35 |
default: ubuntu 36 | os_networking: 37 | type: string 38 | default: opencontrail 39 | os_deployment: 40 | type: string 41 | default: single 42 | config_hostname: 43 | type: string 44 | default: config 45 | config_domain: 46 | type: string 47 | default: openstack.local 48 | config_address: 49 | type: string 50 | default: 10.10.10.200 51 | ctl01_name: 52 | type: string 53 | default: control 54 | cmp01_name: 55 | type: string 56 | default: compute 57 | prx01_name: 58 | type: string 59 | default: proxy 60 | instance_flavor: 61 | type: string 62 | description: Instance type for servers 63 | default: m1.small 64 | constraints: 65 | - allowed_values: [m1.tiny, m1.small, m1.medium, m1.large] 66 | description: instance_type must be a valid instance type 67 | config_image: 68 | type: string 69 | description: Image name to use for Salt master. 70 | default: ubuntu-14-04-x64-1452267252 71 | instance_image: 72 | type: string 73 | description: Image name to use for the servers. 74 | default: ubuntu-14-04-x64-1452267252 75 | public_net_id: 76 | type: string 77 | description: ID or name of public network for which floating IP addresses will be allocated 78 | router_name: 79 | type: string 80 | description: Name of router to be created 81 | default: openstack-salt-router 82 | private_net_name: 83 | type: string 84 | description: Name of private network to be created 85 | default: openstack-salt-net 86 | private_net_cidr: 87 | type: string 88 | description: Private network address (CIDR notation) 89 | default: 10.10.10.0/24 90 | instance_flavor_controller: 91 | type: string 92 | description: Instance type for controllers 93 | default: m1.large 94 | constraints: 95 | - allowed_values: [m1.tiny, m1.small, m1.medium, m1.large] 96 | description: instance_type must be a valid instance type 97 | instance_flavor_compute: 98 | type: string 99 | description: Instance type for compute nodes 100 | default: m1.medium 101 | constraints: 102 | - allowed_values: [m1.tiny, m1.small, m1.medium, 
m1.large] 103 | description: instance_type must be a valid instance type 104 | instance_flavor_support: 105 | type: string 106 | description: Instance type for support nodes (web, monitoring, etc.) 107 | default: m1.small 108 | constraints: 109 | - allowed_values: [m1.tiny, m1.small, m1.medium, m1.large] 110 | description: instance_type must be a valid instance type 111 | resources: 112 | keypair: 113 | type: OS::Nova::KeyPair 114 | properties: 115 | name: { get_param: key_name } 116 | public_key: { get_param: key_value } 117 | save_private_key: false 118 | private_net: 119 | type: OS::Neutron::Net 120 | properties: 121 | name: { get_param: private_net_name } 122 | private_subnet: 123 | type: OS::Neutron::Subnet 124 | properties: 125 | name: { get_param: private_net_name } 126 | network_id: { get_resource: private_net } 127 | cidr: { get_param: private_net_cidr } 128 | router: 129 | type: OS::Neutron::Router 130 | properties: 131 | name: { get_param: router_name } 132 | external_gateway_info: 133 | network: { get_param: public_net_id } 134 | router_interface: 135 | type: OS::Neutron::RouterInterface 136 | properties: 137 | router_id: { get_resource: router } 138 | subnet_id: { get_resource: private_subnet } 139 | security_group: 140 | type: OS::Neutron::SecurityGroup 141 | properties: 142 | name: { get_param: router_name } 143 | rules: 144 | - protocol: tcp 145 | remote_ip_prefix: 0.0.0.0/0 146 | - protocol: icmp 147 | remote_ip_prefix: 0.0.0.0/0 148 | cfg01_floating_ip: 149 | type: OS::Nova::FloatingIP 150 | properties: 151 | pool: { get_param: public_net_id } 152 | cfg01_floating_ip_association: 153 | type: OS::Nova::FloatingIPAssociation 154 | properties: 155 | floating_ip: { get_resource: cfg01_floating_ip } 156 | server_id: { get_resource: cfg01_instance } 157 | cfg01_port: 158 | type: OS::Neutron::Port 159 | properties: 160 | network_id: { get_resource: private_net } 161 | fixed_ips: 162 | - ip_address: 10.10.10.200 163 | security_groups: 164 | - default 165 
| - { get_resource: security_group } 166 | cfg01_instance: 167 | type: OS::Nova::Server 168 | properties: 169 | image: { get_param: config_image } 170 | flavor: { get_param: instance_flavor } 171 | key_name: { get_resource: keypair } 172 | name: { list_join: [ '.', [ { get_param: config_hostname }, { get_param: config_domain } ]] } 173 | networks: 174 | - port: { get_resource: cfg01_port } 175 | user_data_format: RAW 176 | user_data: 177 | str_replace: 178 | template: | 179 | #!/bin/bash 180 | 181 | export SALT_SOURCE=$SALT_SOURCE 182 | export SALT_VERSION=$SALT_VERSION 183 | export FORMULA_SOURCE=$FORMULA_SOURCE 184 | export FORMULA_PATH=$FORMULA_PATH 185 | export FORMULA_BRANCH=$FORMULA_BRANCH 186 | export RECLASS_ADDRESS=$RECLASS_ADDRESS 187 | export RECLASS_BRANCH=$RECLASS_BRANCH 188 | export RECLASS_SYSTEM=$RECLASS_SYSTEM 189 | export CONFIG_HOSTNAME=$CONFIG_HOSTNAME 190 | export CONFIG_DOMAIN=$CONFIG_DOMAIN 191 | export CONFIG_HOST=$CONFIG_HOST 192 | export CONFIG_ADDRESS=$CONFIG_ADDRESS 193 | 194 | BOOTSTRAP 195 | params: 196 | $SALT_SOURCE: { get_param: salt_source } 197 | $SALT_VERSION: { get_param: salt_version } 198 | $FORMULA_SOURCE: { get_param: formula_source } 199 | $FORMULA_PATH: { get_param: formula_path } 200 | $FORMULA_BRANCH: { get_param: formula_branch } 201 | $RECLASS_ADDRESS: { get_param: reclass_address } 202 | $RECLASS_BRANCH: { get_param: reclass_branch } 203 | $RECLASS_SYSTEM: { list_join: [ '_', [ { get_param: os_version }, { get_param: os_distribution }, { get_param: os_networking }, { get_param: os_deployment } ]] } 204 | $CONFIG_HOSTNAME: { get_param: config_hostname } 205 | $CONFIG_DOMAIN: { get_param: config_domain } 206 | $CONFIG_HOST: { list_join: [ '.', [ { get_param: config_hostname }, { get_param: config_domain } ]] } 207 | $CONFIG_ADDRESS: { get_param: config_address } 208 | BOOTSTRAP: { get_file: bootstrap/bootstrap-salt-master.sh } 209 | ctl01_port: 210 | type: OS::Neutron::Port 211 | properties: 212 | network_id: { 
get_resource: private_net } 213 | fixed_ips: 214 | - ip_address: 10.10.10.201 215 | security_groups: 216 | - default 217 | - { get_resource: security_group } 218 | ctl01_instance: 219 | type: OS::Nova::Server 220 | properties: 221 | image: { get_param: instance_image } 222 | flavor: { get_param: instance_flavor_controller } 223 | key_name: { get_resource: keypair } 224 | name: { list_join: [ '.', [ { get_param: ctl01_name }, { get_param: config_domain } ]] } 225 | networks: 226 | - port: { get_resource: ctl01_port } 227 | user_data_format: RAW 228 | user_data: 229 | str_replace: 230 | template: | 231 | #!/bin/bash 232 | export SALT_SOURCE=$SALT_SOURCE 233 | export SALT_VERSION=$SALT_VERSION 234 | export MINION_MASTER=$MINION_MASTER 235 | export MINION_HOSTNAME=$MINION_HOSTNAME 236 | export MINION_ID=$MINION_ID 237 | 238 | BOOTSTRAP 239 | 240 | params: 241 | $SALT_SOURCE: { get_param: salt_source } 242 | $SALT_VERSION: { get_param: salt_version } 243 | $MINION_MASTER: { get_param: config_address } 244 | $MINION_HOSTNAME: { get_param: ctl01_name } 245 | $MINION_ID: { list_join: [ '.', [ { get_param: ctl01_name }, { get_param: config_domain } ]] } 246 | BOOTSTRAP: { get_file: bootstrap/bootstrap-salt-minion.sh } 247 | cmp01_port: 248 | type: OS::Neutron::Port 249 | properties: 250 | network_id: { get_resource: private_net } 251 | fixed_ips: 252 | - ip_address: 10.10.10.202 253 | security_groups: 254 | - default 255 | - { get_resource: security_group } 256 | cmp01_instance: 257 | type: OS::Nova::Server 258 | properties: 259 | image: { get_param: instance_image } 260 | flavor: { get_param: instance_flavor_compute } 261 | key_name: { get_resource: keypair } 262 | name: { list_join: [ '.', [ { get_param: cmp01_name }, { get_param: config_domain } ]] } 263 | networks: 264 | - port: { get_resource: cmp01_port } 265 | user_data_format: RAW 266 | user_data: 267 | str_replace: 268 | template: | 269 | #!/bin/bash 270 | export SALT_SOURCE=$SALT_SOURCE 271 | export 
SALT_VERSION=$SALT_VERSION 272 | export MINION_MASTER=$MINION_MASTER 273 | export MINION_HOSTNAME=$MINION_HOSTNAME 274 | export MINION_ID=$MINION_ID 275 | 276 | BOOTSTRAP 277 | 278 | params: 279 | $SALT_SOURCE: { get_param: salt_source } 280 | $SALT_VERSION: { get_param: salt_version } 281 | $MINION_MASTER: { get_param: config_address } 282 | $MINION_HOSTNAME: { get_param: cmp01_name } 283 | $MINION_ID: { list_join: [ '.', [ { get_param: cmp01_name }, { get_param: config_domain } ]] } 284 | BOOTSTRAP: { get_file: bootstrap/bootstrap-salt-minion.sh } 285 | prx01_floating_ip: 286 | type: OS::Nova::FloatingIP 287 | properties: 288 | pool: { get_param: public_net_id } 289 | prx01_floating_ip_association: 290 | type: OS::Nova::FloatingIPAssociation 291 | properties: 292 | floating_ip: { get_resource: prx01_floating_ip } 293 | server_id: { get_resource: prx01_instance } 294 | prx01_port: 295 | type: OS::Neutron::Port 296 | properties: 297 | network_id: { get_resource: private_net } 298 | fixed_ips: 299 | - ip_address: 10.10.10.203 300 | security_groups: 301 | - default 302 | - { get_resource: security_group } 303 | prx01_instance: 304 | type: OS::Nova::Server 305 | properties: 306 | image: { get_param: instance_image } 307 | flavor: { get_param: instance_flavor_support } 308 | key_name: { get_resource: keypair } 309 | name: { list_join: [ '.', [ { get_param: prx01_name }, { get_param: config_domain } ]] } 310 | networks: 311 | - port: { get_resource: prx01_port } 312 | user_data_format: RAW 313 | user_data: 314 | str_replace: 315 | template: | 316 | #!/bin/bash 317 | export SALT_SOURCE=$SALT_SOURCE 318 | export SALT_VERSION=$SALT_VERSION 319 | export MINION_MASTER=$MINION_MASTER 320 | export MINION_HOSTNAME=$MINION_HOSTNAME 321 | export MINION_ID=$MINION_ID 322 | 323 | BOOTSTRAP 324 | 325 | params: 326 | $SALT_SOURCE: { get_param: salt_source } 327 | $SALT_VERSION: { get_param: salt_version } 328 | $MINION_MASTER: { get_param: config_address } 329 | $MINION_HOSTNAME: { 
get_param: prx01_name } 330 | $MINION_ID: { list_join: [ '.', [ { get_param: prx01_name }, { get_param: config_domain } ]] } 331 | BOOTSTRAP: { get_file: bootstrap/bootstrap-salt-minion.sh } 332 | -------------------------------------------------------------------------------- /doc/source/develop/quickstart-heat.rst: -------------------------------------------------------------------------------- 1 | `Home `_ OpenStack-Salt Development Documentation 2 | 3 | OpenStack-Salt Heat deployment 4 | ============================== 5 | 6 | All-in-one (AIO) deployments are a great way to set up an OpenStack-Salt cloud for: 7 | 8 | * a service development environment 9 | * an overview of how all of the OpenStack services and roles play together 10 | * a simple lab deployment for testing 11 | 12 | It is possible to run a full-size proof-of-concept deployment on OpenStack with a `Heat template`. The cluster deployment has the following requirements: 13 | 14 | * At least 200GB disk space 15 | * 70GB RAM 16 | 17 | The single-node deployment has the following requirements: 18 | 19 | * At least 80GB disk space 20 | * 16GB RAM 21 | 22 | 23 | Available Heat templates 24 | ------------------------ 25 | 26 | We have prepared two generic OpenStack-Salt lab templates: OpenStack in a single configuration and OpenStack in a cluster configuration. Both are deployed by a custom parametrized bootstrap script, which sets up the Salt master with the OpenStack-Salt formula ecosystem and example metadata. 27 | 28 | Openstack-salt single setup 29 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~ 30 | 31 | The ``openstack_single`` environment consists of three nodes. 32 | 33 | ..
list-table:: 34 | :stub-columns: 1 35 | 36 | * - **FQDN** 37 | - **Role** 38 | - **IP** 39 | * - config.openstack.local 40 | - Salt master node 41 | - 10.10.10.200 42 | * - control.openstack.local 43 | - OpenStack control node 44 | - 10.10.10.201 45 | * - compute.openstack.local 46 | - OpenStack compute node 47 | - 10.10.10.202 48 | 49 | 50 | Openstack-salt cluster setup 51 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 52 | 53 | The ``openstack_cluster`` environment consists of six nodes. 54 | 55 | .. list-table:: 56 | :stub-columns: 1 57 | 58 | * - **FQDN** 59 | - **Role** 60 | - **IP** 61 | * - config.openstack.local 62 | - Salt master node 63 | - 10.10.10.200 64 | * - control01.openstack.local 65 | - OpenStack control node 66 | - 10.10.10.201 67 | * - control02.openstack.local 68 | - OpenStack control node 69 | - 10.10.10.202 70 | * - control03.openstack.local 71 | - OpenStack control node 72 | - 10.10.10.203 73 | * - compute01.openstack.local 74 | - OpenStack compute node 75 | - 10.10.10.211 76 | * - compute02.openstack.local 77 | - OpenStack compute node 78 | - 10.10.10.212 79 | 80 | 81 | Heat client setup 82 | ----------------- 83 | 84 | The preffered way of installing OpenStack clients is isolated Python 85 | environment. To creat Python environment and install compatible OpenStack 86 | clients, you need to install build tools first. 87 | 88 | Ubuntu installation 89 | ~~~~~~~~~~~~~~~~~~~ 90 | 91 | Install required packages: 92 | 93 | .. code-block:: bash 94 | 95 | $ apt-get install python-dev python-pip python-virtualenv build-essential 96 | 97 | Now create and activate virtualenv `venv-heat` so you can install specific 98 | versions of OpenStack clients. 99 | 100 | .. code-block:: bash 101 | 102 | $ virtualenv venv-heat 103 | $ source ./venv-heat/bin/activate 104 | 105 | Use `requirements.txt` from the `OpenStack-Salt heat templates repository`_ to install 106 | tested versions of clients into activated environment. 107 | 108 | .. 
code-block:: bash 109 | 110 | $ pip install -r requirements.txt 111 | 112 | A summary of the OpenStack clients follows. These clients were tested with the Juno and Kilo 113 | OpenStack versions. 114 | 115 | .. literalinclude:: /_static/scripts/requirements/heat.txt 116 | :language: text 117 | 118 | 119 | If everything goes right, you should be able to use the OpenStack clients `heat`, 120 | `nova`, etc. 121 | 122 | 123 | Connecting to OpenStack cloud 124 | ----------------------------- 125 | 126 | Set up OpenStack credentials so you can use the OpenStack clients. You can 127 | download the ``openrc`` file from the OpenStack dashboard and source it, or create 128 | it manually with your credentials: 129 | 130 | .. code-block:: bash 131 | 132 | $ vim ~/openrc 133 | 134 | export OS_AUTH_URL=https://:5000/v2.0 135 | export OS_USERNAME= 136 | export OS_PASSWORD= 137 | export OS_TENANT_NAME= 138 | 139 | Now source the OpenStack credentials: 140 | 141 | .. code-block:: bash 142 | 143 | $ source openrc 144 | 145 | To test your sourced variables: 146 | 147 | .. code-block:: bash 148 | 149 | $ env | grep OS 150 | 151 | Some resources are required for the Heat environment deployment. 152 | 153 | Get network ID 154 | ~~~~~~~~~~~~~~ 155 | 156 | The public network is needed for setting up both testing Heat stacks. The network ID can be found in the OpenStack dashboard or by running the following command: 157 | 158 | 159 | .. code-block:: bash 160 | 161 | $ neutron net-list 162 | 163 | 164 | Get image ID 165 | ~~~~~~~~~~~~ 166 | 167 | An image ID is required to run the OpenStack-Salt lab templates: Ubuntu 14.04 LTS is required as config_image, and an image of one of the supported platforms is required as instance_image, used for the OpenStack instances. To list the currently installed images, run: 168 | 169 | .. code-block:: bash 170 | 171 | $ glance image-list 172 | 173 | 174 | Launching the Heat stack 175 | ------------------------ 176 | 177 | Download the Heat templates from this repository. 178 | 179 | ..
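code-block:: bash

   # Assumption: the git@ URL used below requires an SSH key registered
   # with GitHub. Without one, the same repository can be cloned
   # read-only over HTTPS instead:
   $ git clone https://github.com/openstack/openstack-salt.git

..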
code-block:: bash 180 | 181 | $ git clone git@github.com:openstack/openstack-salt.git 182 | $ cd doc/source/_static/scripts/ 183 | 184 | Now you need to customize the env files for the stacks; see the examples in the envs directory ``doc/source/_static/scripts/envs`` and set the required parameters. 185 | 186 | Full examples of the env files for the two respective stacks: 187 | 188 | OpenStack templates generic parameters 189 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 190 | 191 | **public_net_id** 192 | name of the external network 193 | 194 | **instance_image** 195 | image for the OpenStack instances, must correspond with os_distribution 196 | 197 | **config_image** 198 | image for the Salt master node, currently only Ubuntu 14.04 is supported 199 | 200 | **key_value** 201 | paste your SSH key here 202 | 203 | **salt_source** 204 | salt-master installation source 205 | 206 | options: 207 | - pkg 208 | - pip 209 | 210 | default: 211 | - pkg 212 | 213 | **salt_version** 214 | salt-master version 215 | 216 | options: 217 | - latest 218 | - 2015.8.11 219 | - 2015.8.10 220 | - 2015.8.9 221 | - ...
222 | 223 | default: 224 | - latest 225 | 226 | **formula_source** 227 | salt formulas source 228 | 229 | options: 230 | - git 231 | - pkg 232 | 233 | default: 234 | - git 235 | 236 | **formula_path** 237 | path to formulas 238 | 239 | default: 240 | - /usr/share/salt-formulas 241 | 242 | **formula_branch** 243 | formulas git branch 244 | 245 | default: 246 | - master 247 | 248 | **reclass_address** 249 | reclass git repository 250 | 251 | default: 252 | - https://github.com/tcpcloud/openstack-salt-model.git 253 | 254 | **reclass_branch** 255 | reclass git branch 256 | 257 | default: 258 | - master 259 | 260 | **os_version** 261 | OpenStack release version 262 | 263 | options: 264 | - kilo 265 | 266 | default: 267 | - kilo 268 | 269 | **os_distribution** 270 | OpenStack nodes distribution 271 | 272 | options: 273 | - ubuntu 274 | - redhat 275 | 276 | default: 277 | - ubuntu 278 | 279 | **os_networking** 280 | OpenStack networking engine 281 | 282 | options: 283 | - opencontrail 284 | - neutron 285 | 286 | default: 287 | - opencontrail 288 | 289 | **os_deployment** 290 | OpenStack architecture 291 | 292 | options: 293 | - single 294 | - cluster 295 | 296 | default: 297 | - single 298 | 299 | **config_hostname** 300 | salt-master hostname 301 | 302 | default: 303 | - config 304 | 305 | **config_address** 306 | salt-master internal IP address 307 | 308 | default: 309 | - 10.10.10.200 310 | 311 | OpenStack single specific parameters 312 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 313 | 314 | **config_domain** 315 | salt-master domain 316 | 317 | default: 318 | - openstack.local 319 | 320 | **ctl01_name** 321 | OS controller hostname 322 | 323 | default: 324 | - control 325 | 326 | **cmp01_name** 327 | OS compute hostname 328 | 329 | default: 330 | - compute 331 | 332 | **prx01_name** 333 | OS proxy hostname 334 | 335 | default: 336 | - proxy 337 | 338 | OpenStack cluster specific parameters 339 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 340 | 341 | **config_domain** 342 | 
salt-master domain 343 | 344 | default: 345 | - openstack-ha.local 346 | 347 | **ctl01_name** 348 | OS controller 1 hostname 349 | 350 | default: 351 | - control01 352 | 353 | **ctl02_name** 354 | OS controller 2 hostname 355 | 356 | default: 357 | - control02 358 | 359 | **ctl03_name** 360 | OS controller 3 hostname 361 | 362 | default: 363 | - control03 364 | 365 | **cmp01_name** 366 | OS compute 1 hostname 367 | 368 | default: 369 | - compute01 370 | 371 | **cmp02_name** 372 | OS compute 2 hostname 373 | 374 | default: 375 | - compute02 376 | 377 | **prx01_name** 378 | OS proxy hostname 379 | 380 | default: 381 | - proxy 382 | 383 | openstack_single.hot environment examples 384 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 385 | 386 | **OpenStack Kilo on Ubuntu with OpenContrail networking** 387 | 388 | ``/_static/scripts/envs/openstack_single_kilo_ubuntu_opencontrail.env.example`` 389 | 390 | .. literalinclude:: /_static/scripts/envs/openstack_single_kilo_ubuntu_opencontrail.env.example 391 | :language: yaml 392 | 393 | **OpenStack Kilo on Ubuntu with Neutron DVR networking** 394 | 395 | ``/_static/scripts/envs/openstack_single_kilo_ubuntu_neutron.env.example`` 396 | 397 | .. literalinclude:: /_static/scripts/envs/openstack_single_kilo_ubuntu_neutron.env.example 398 | :language: yaml 399 | 400 | **OpenStack Kilo on CentOS/RHEL with OpenContrail networking** 401 | 402 | ``/_static/scripts/envs/openstack_single_kilo_redhat_opencontrail.env.example`` 403 | 404 | .. literalinclude:: /_static/scripts/envs/openstack_single_kilo_redhat_opencontrail.env.example 405 | :language: yaml 406 | 407 | openstack_cluster.hot environment examples 408 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 409 | 410 | **OpenStack Kilo on Ubuntu with OpenContrail networking** 411 | 412 | ``/_static/scripts/envs/openstack_cluster_kilo_ubuntu_opencontrail.env.example`` 413 | 414 | .. 
literalinclude:: /_static/scripts/envs/openstack_cluster_kilo_ubuntu_opencontrail.env.example 415 | :language: yaml 416 | 417 | **OpenStack Kilo on CentOS/RHEL with OpenContrail networking** 418 | 419 | ``/_static/scripts/envs/openstack_cluster_kilo_redhat_opencontrail.env.example`` 420 | 421 | .. literalinclude:: /_static/scripts/envs/openstack_cluster_kilo_redhat_opencontrail.env.example 422 | :language: yaml 423 | 424 | .. code-block:: bash 425 | 426 | $ heat stack-create -e envs/ENV_FILE -f openstack_single.hot 427 | $ heat stack-create -e envs/ENV_FILE -f openstack_cluster.hot 428 | 429 | If everything goes right, the stack should be ready in a few minutes. You can verify this by running the following commands: 430 | 431 | .. code-block:: bash 432 | 433 | $ heat stack-list 434 | $ nova list 435 | 436 | You should also be able to log in as root to the public IP provided by the ``nova list`` command. When this cluster is deployed, you can log in to the instances through the Salt master node. 437 | 438 | Current state of supported env configurations 439 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 440 | 441 | .. list-table:: 442 | :stub-columns: 1 443 | 444 | * - **ENV configuration** 445 | - **Single Status** 446 | - **Cluster Status** 447 | * - OS Kilo Ubuntu with OpenContrail 448 | - Stable 449 | - Stable 450 | * - OS Kilo RedHat with OpenContrail 451 | - Experimental 452 | - Experimental 453 | * - OS Kilo Ubuntu with Neutron DVR 454 | - Experimental 455 | - NONE 456 | 457 | OpenStack Heat templates 458 | ~~~~~~~~~~~~~~~~~~~~~~~~ 459 | 460 | **OpenStack single Heat template** 461 | 462 | ``/_static/scripts/openstack_single.hot`` 463 | 464 | .. literalinclude:: /_static/scripts/openstack_single.hot 465 | :language: yaml 466 | 467 | **OpenStack cluster Heat template** 468 | 469 | ``/_static/scripts/openstack_cluster.hot`` 470 | 471 | ..
literalinclude:: /_static/scripts/openstack_cluster.hot 472 | :language: yaml 473 | 474 | Openstack-salt testing labs 475 | --------------------------- 476 | 477 | You can use publicly available labs offered by technology partners. 478 | 479 | Testing lab at `tcp cloud` 480 | ~~~~~~~~~~~~~~~~~~~~~~~~~~ 481 | 482 | The company tcp cloud has provided 100 cores and 400 GB of RAM, divided into 5 separate projects, each with quotas set to 20 cores and 80 GB of RAM. Each project is capable of running both single and cluster deployments. 483 | 484 | Endpoint URL: 485 | **https://cloudempire-api.tcpcloud.eu:35357/v2.0** 486 | 487 | .. list-table:: 488 | :stub-columns: 1 489 | 490 | * - **User** 491 | - **Project** 492 | - **Domain** 493 | * - openstack_salt_user01 494 | - openstack_salt_lab01 495 | - default 496 | * - openstack_salt_user02 497 | - openstack_salt_lab02 498 | - default 499 | * - openstack_salt_user03 500 | - openstack_salt_lab03 501 | - default 502 | * - openstack_salt_user04 503 | - openstack_salt_lab04 504 | - default 505 | * - openstack_salt_user05 506 | - openstack_salt_lab05 507 | - default 508 | 509 | To get the access credentials and full support, visit the ``#openstack-salt`` IRC channel. 510 | 511 | -------------- 512 | 513 | .. include:: navigation.txt 514 | --------------------------------------------------------------------------------