├── .github
│   └── stale.yml
├── INSTALL.md
├── LICENSE.md
├── README.md
├── ansible.cfg
├── inventory
│   ├── localhost
│   ├── rax.py
│   └── static
├── kafka_rax.sh
├── kafka_static.sh
├── playbooks
│   ├── group_vars
│   │   └── all
│   ├── kafka.yml
│   ├── provision_rax.yml
│   └── roles
│       ├── common
│       │   ├── files
│       │   │   └── tuned.conf
│       │   ├── handlers
│       │   │   └── main.yml
│       │   ├── tasks
│       │   │   ├── firewall.yml
│       │   │   ├── main.yml
│       │   │   └── partitioning.yml
│       │   ├── templates
│       │   │   └── hosts.j2
│       │   └── vars
│       │       ├── debian.yml
│       │       ├── redhat-6.yml
│       │       ├── redhat-7.yml
│       │       └── redhat-amazon.yml
│       ├── kafka
│       │   ├── defaults
│       │   │   └── main.yml
│       │   ├── files
│       │   │   ├── kafka.sh
│       │   │   ├── log4j.properties
│       │   │   └── tools-log4j.properties
│       │   ├── handlers
│       │   │   └── main.yml
│       │   ├── tasks
│       │   │   └── main.yml
│       │   └── templates
│       │       ├── kafka-server.j2
│       │       └── server.properties.j2
│       └── zookeeper
│           ├── defaults
│           │   └── main.yml
│           ├── files
│           │   ├── log4j.properties
│           │   └── zookeeper.sh
│           ├── handlers
│           │   └── main.yml
│           ├── tasks
│           │   └── main.yml
│           └── templates
│               ├── myid.j2
│               ├── zoo.cfg.j2
│               └── zookeeper-server.j2
└── provision_rax.sh

/.github/stale.yml:
--------------------------------------------------------------------------------
1 | # Number of days of inactivity before an issue becomes stale
2 | daysUntilStale: 30
3 | # Number of days of inactivity before a stale issue is closed
4 | daysUntilClose: 7
5 | # Issues with these labels will never be considered stale
6 | exemptLabels:
7 |   - security
8 | # Label to use when marking an issue as stale
9 | staleLabel: stale
10 | # Comment to post when marking an issue as stale. Set to `false` to disable
11 | markComment: >
12 |   This issue has been automatically marked as stale because it has not had
13 |   recent activity. It will be closed if no further activity occurs. Thank you
14 |   for your contributions.
15 | # Comment to post when closing a stale issue. Set to `false` to disable
16 | closeComment: false
17 | 
--------------------------------------------------------------------------------
/INSTALL.md:
--------------------------------------------------------------------------------
1 | ansible-kafka installation guide
2 | ---------
3 | 
4 | * These Ansible playbooks can build a Cloud environment and install Kafka with Zookeeper on it. Follow this [link](#install-kafka-on-rackspace-cloud).
5 | 
6 | * They can also install Kafka on existing Linux devices, be it dedicated devices in a datacenter or VMs running on a hypervisor. Follow this [link](#install-kafka-on-existing-devices).
7 | 
8 | 
9 | ---
10 | 
11 | 
12 | # Install Kafka on Rackspace Cloud
13 | 
14 | ## Build setup
15 | 
16 | The first step is to set up the build node / workstation.
17 | 
18 | This build node or workstation will run the Ansible code and build the Kafka cluster (it can itself be a Kafka node).
19 | 
20 | This node needs to be able to contact the cluster devices via SSH and the Rackspace APIs via HTTPS.
21 | 
22 | The following steps must be followed to install Ansible and the prerequisites on this build node / workstation, depending on its operating system:
23 | 
24 | ### CentOS/RHEL 6
25 | 
26 | 1. Install Ansible and git:
27 | 
28 | ```
29 | sudo su -
30 | yum -y remove python-crypto
31 | yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
32 | yum repolist; yum install gcc gcc-c++ python-pip python-devel sshpass git vim-enhanced -y
33 | pip install ansible pyrax importlib oslo.config==3.0.0
34 | ```
35 | 
36 | 2. 
Generate SSH public/private key pair (press Enter for defaults): 37 | 38 | ``` 39 | ssh-keygen -q -t rsa 40 | ``` 41 | 42 | ### CentOS/RHEL 7 43 | 44 | 1. Install Ansible and git: 45 | 46 | ``` 47 | sudo su - 48 | yum install https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm 49 | yum repolist; yum install gcc gcc-c++ python-pip python-devel sshpass git vim-enhanced -y 50 | pip install ansible pyrax 51 | ``` 52 | 53 | 2. Generate SSH public/private key pair (press Enter for defaults): 54 | 55 | ``` 56 | ssh-keygen -q -t rsa 57 | ``` 58 | 59 | ### Ubuntu 14+ / Debian 8 60 | 61 | 1. Install Ansible and git: 62 | 63 | ``` 64 | sudo su - 65 | apt-get update; apt-get -y install python-pip python-dev sshpass git vim 66 | pip install ansible pyrax 67 | ``` 68 | 69 | 2. Generate SSH public/private key pair (press Enter for defaults): 70 | 71 | ``` 72 | ssh-keygen -q -t rsa 73 | ``` 74 | 75 | ## Setup the Rackspace credentials files 76 | 77 | The cloud environment requires the standard [pyrax](https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) credentials file that looks like this: 78 | ``` 79 | [rackspace_cloud] 80 | username = my_username 81 | api_key = 01234567890abcdef 82 | ``` 83 | 84 | Replace `my_username` with your Rackspace Cloud username and `01234567890abcdef` with your API key. 85 | 86 | Save this file as `.raxpub` under the home folder of the user running the playbook. 87 | 88 | 89 | ## Clone the repository 90 | 91 | On the same build node / workstation, run the following: 92 | 93 | ``` 94 | cd; git clone https://github.com/rackerlabs/ansible-kafka 95 | ``` 96 | 97 | 98 | ## Set the global variables 99 | 100 | Modify the file at `~/ansible-kafka/playbooks/group_vars/all` to set the cluster configuration. 101 | 102 | The following table will describe the most important variables: 103 | 104 | | Variable | Description | 105 | | -------------------| ----------------------------------------------------------------------------------------------- | 106 | | kafka_version | The version of Kafka to install. | 107 | | apache_mirror | The mirror used to install Kafka and Zookeeper. | 108 | | kafka_port | The Kafka port (default is 9092). | 109 | | cluster_interface | Should be set to the network device that the Kafka nodes will listen on for client connections. | 110 | | data_disks_devices | The device name of the data disk(s), usually starting with `xvde` for Rackspace Servers. If the disk is already partitioned / mounted or the root volume is to be used, set this variable to `[]`. | 111 | 112 | 113 | ## Set the cloud variables 114 | 115 | Modify the same file at `~/ansible-kafka/playbooks/group_vars/all` to set the cloud configuration. 116 | 117 | | Variable | Description | 118 | | ----------- | --------------------------------------------------------------------------------------------------------- | 119 | | region | The cloud region where nodes will be built. | 120 | | domain | The domain name used to build the nodes. | 121 | | nodes_count | The desired number of nodes to be built. | 122 | | image | The OS image to be used. Can be `CentOS 6 (PVHVM)`, `CentOS 7 (PVHVM)` or `Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)`. | 123 | | flavor | [Size flavor](https://developer.rackspace.com/docs/cloud-servers/v2/developer-guide/#supported-flavors-for-cloud-servers) of the nodes. | 124 | | ssh keyfile | The SSH keyfile that will be placed on cluster nodes at build time. | 125 | | ssh keyname | The name of the SSH key. 
Make sure you change this if another key was previously used with the same name. |
125 | 
126 | 
127 | 
128 | If Rackspace Block Storage is to be built for storing Kafka data, set the following options:
129 | 
130 | | Variable | Description |
131 | | ------------------ | ----------------------------------------------------------------------------------- |
132 | | build_cbs | Set to `true` to build CBS. `data_disks_devices` also needs to be set. |
133 | | disks_size | The size of the disk(s) in GB. |
134 | | disks_type | The type of the disk(s), can be `SATA` or `SSD`. |
135 | 
136 | 
137 | ## Provision the Cloud environment
138 | 
139 | The first step is to run the script that will provision the Cloud environment.
140 | 
141 | Set the environment variable `RAX_CREDS_FILE` to point to the Rackspace credentials file(s) [set previously](#setup-the-rackspace-credentials-files).
142 | 
143 | ```
144 | export RAX_CREDS_FILE="/root/.raxpub"
145 | 
146 | cd ~/ansible-kafka/ && bash provision_rax.sh
147 | ```
148 | 
149 | 
150 | ## Kafka Installation
151 | 
152 | Run the following to proceed with the Zookeeper and Kafka installation.
153 | 
154 | Set the environment variable `RAX_CREDS_FILE` to point to the Rackspace credentials file(s) [set previously](#setup-the-rackspace-credentials-files).
155 | 
156 | ```
157 | export RAX_CREDS_FILE="/root/.raxpub"
158 | 
159 | cd ~/ansible-kafka/ && bash kafka_rax.sh
160 | ```
161 | 
162 | 
163 | ## Test
164 | 
165 | Run the following on a Zookeeper node (one of the first 3 nodes):
166 | 
167 | ```
168 | zkCli.sh -cmd ls /brokers/ids
169 | zkCli.sh -cmd get /brokers/ids/1
170 | ```
171 | 
172 | It should return a list of all Kafka brokers and some information from the first broker.
173 | 
174 | 
175 | ---
176 | 
177 | 
178 | # Install Kafka on existing devices
179 | 
180 | 
181 | ## Build setup
182 | 
183 | The first step is to set up the build node / workstation.
184 | 
185 | This build node or workstation will run the Ansible code and build the Kafka cluster (it can itself be a Kafka node).
186 | 
187 | This node needs to be able to contact the cluster devices via SSH.
188 | 
189 | All the SSH logins must be known / prepared in advance; alternatively, SSH public-key authentication can be used.
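For example, one minimal way to prepare public-key authentication from the build node / workstation is to generate a key once and copy it to each node (the IP address below is only a placeholder for one of your devices, and `ssh-copy-id` needs password access to that node just this once):

```
ssh-keygen -q -t rsa
ssh-copy-id root@192.168.0.1
```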
190 | 191 | The following steps must be followed to install Ansible and the prerequisites on this build node / workstation, depending on its operating system: 192 | 193 | ### CentOS/RHEL 6 194 | 195 | Install Ansible and git: 196 | 197 | ``` 198 | sudo su - 199 | yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm 200 | yum repolist; yum install python-pip python-devel sshpass git vim-enhanced -y 201 | pip install ansible 202 | ``` 203 | 204 | ### CentOS/RHEL 7 205 | 206 | Install Ansible and git: 207 | 208 | ``` 209 | sudo su - 210 | yum install https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm 211 | yum repolist; yum install python-pip python-devel sshpass git vim-enhanced -y 212 | pip install ansible 213 | ``` 214 | 215 | ### Ubuntu 14+ / Debian 8 216 | 217 | Install Ansible and git: 218 | 219 | ``` 220 | sudo su - 221 | apt-get update; apt-get -y install python-pip python-dev sshpass git vim 222 | pip install ansible 223 | ``` 224 | 225 | 226 | ## Clone the repository 227 | 228 | On the same build node / workstation, run the following: 229 | 230 | ``` 231 | cd; git clone https://github.com/rackerlabs/ansible-kafka 232 | ``` 233 | 234 | 235 | ## Set the global variables 236 | 237 | Modify the file at `~/ansible-kafka/playbooks/group_vars/all` to set the cluster configuration. 238 | 239 | The following table will describe the most important variables: 240 | 241 | | Variable | Description | 242 | | -------------------| ----------------------------------------------------------------------------------------------- | 243 | | kafka_version | The version of Kafka to install. | 244 | | apache_mirror | The mirror used to install Kafka and Zookeeper. | 245 | | kafka_port | The Kafka port (default is 9092). | 246 | | cluster_interface | Should be set to the network device that the Kafka nodes will listen on for client connections. | 247 | | data_disks_devices | The device name of the data disk(s). If the disk is already partitioned / mounted or the root volume is to be used, set this variable to `[]`. | 248 | 249 | 250 | ## Set the inventory 251 | 252 | Modify the inventory file at `~/ansible-kafka/inventory/static` and add the nodes to install Kafka on. 253 | 254 | - For each node, set the `ansible_host` to the IP address that is reachable from the build node / workstation. 255 | 256 | - Then set `ansible_user=root` and `ansible_ssh_pass` if the node allows for root user logins. If these are not set, public-key authentication will be used. 257 | 258 | - If root logins are not allowed then sudo can be used, set `ansible_user` to a user that can sudo. 259 | 260 | - Example inventory with 3 nodes: 261 | 262 | ``` 263 | [kafka-nodes] 264 | kafka-01 ansible_host=192.168.0.1 ansible_user=root ansible_ssh_pass=AsdQwe123 265 | kafka-02 ansible_host=192.168.0.2 ansible_user=root ansible_ssh_pass=AsdQwe123 266 | kafka-03 ansible_host=192.168.0.3 ansible_user=root ansible_ssh_pass=AsdQwe123 267 | ``` 268 | 269 | 270 | ## Kafka Installation 271 | 272 | Run the following to proceed with the Zookeeper and Kafka installation. 273 | 274 | ``` 275 | cd ~/ansible-kafka/ && bash kafka_static.sh 276 | ``` 277 | 278 | 279 | ## Test 280 | 281 | Run the following on a Zookeeper node (one of the first 3 nodes): 282 | 283 | ``` 284 | zkCli.sh -cmd ls /brokers/ids 285 | zkCli.sh -cmd get /brokers/ids/1 286 | ``` 287 | 288 | It should return a list of all Kafka brokers and some information from the first broker. 
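As an optional end-to-end check (not part of the original playbooks), the Kafka command-line tools that the kafka role adds to the PATH can be used to push a test message through the cluster. This is only a sketch assuming the default `kafka_port` of 9092, Kafka 0.9.x tool syntax and a cluster of at least 3 brokers; the topic name is arbitrary:

```
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 3 --topic smoke-test
echo "hello kafka" | kafka-console-producer.sh --broker-list localhost:9092 --topic smoke-test
kafka-console-consumer.sh --zookeeper localhost:2181 --topic smoke-test --from-beginning --max-messages 1
```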
289 | 290 | -------------------------------------------------------------------------------- /LICENSE.md: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2015 Alexandru Anghel 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ansible-kafka 2 | --------- 3 | This Ansible playbook will build a Kafka cluster with Zookeeper. 4 | 5 | You can pre-build a Rackspace cloud environment or run the playbook against an existing environment. 6 | 7 | --- 8 | 9 | ## [Installation] (id:installation) 10 | 11 | See [INSTALL.md](../master/INSTALL.md) for installation and build instructions. 12 | 13 | 14 | ## [Requirements] (id:requirements) 15 | 16 | - Ansible >= 2.0. 17 | 18 | - Expects RHEL/CentOS 6/7 or Ubuntu 14 hosts. 19 | 20 | - Building the Rackspace Cloud environment requires the `pyrax` Python module: [pyrax link](https://github.com/rackspace/pyrax). 21 | 22 | 23 | ## [Features] (id:features) 24 | 25 | - It installs Zookeeper for Kafka (Zookeeper is installed on the first 3 nodes only). 26 | 27 | - It supports static inventory if the environment is pre-built (`inventory/static` file). 28 | 29 | - The `id` is automatically set (on the inventory items in alphabetical order). 30 | 31 | - The data drives can be customized and can be put on top of Rackspace Cloud Block Storage. 32 | 33 | - It includes init scripts for both Zookeeper and Kafka. 34 | 35 | 36 | ## [Inventory] (id:inventory) 37 | 38 | - The cloud environment requires the standard `pyrax` credentials file that looks like this: 39 | ```` 40 | [rackspace_cloud] 41 | username = my_username 42 | api_key = 01234567890abcdef 43 | ```` 44 | 45 | This file will be referenced in the `RAX_CREDS_FILE` environment variable. 46 | 47 | By default, the file is expected to be: `~/.raxpub`. 48 | 49 | - When provisioning Kafka on existing infrastructure edit `inventory/static` and add the nodes. 50 | 51 | 52 | ## [Configuration] (id:configuration) 53 | 54 | To customize, change the variables under `playbooks/group_vars` and `playbooks/roles` folders: 55 | 56 | 1. **playbooks/group_vars/all**: contains cluster and cloud settings 57 | 1. **playbooks/roles/zookeeper/defaults/main.yml**: Zookeeper specific settings 58 | 1. 
**playbooks/roles/kafka/defaults/main.yml**: Kafka specific settings 59 | 60 | For a one-node cluster, set `nodes_count` in `playbooks/group_vars/all` to 1. 61 | 62 | 63 | ## [Scripts] (id:scripts) 64 | 65 | To provision a Rackspace Cloud environment, run the `provision_rax.sh` script after you've customized the variables: 66 | ```` 67 | bash provision_rax.sh 68 | ```` 69 | 70 | Similarly, run the one of the `kafka_*.sh` scripts, depending what type of environment you have. 71 | 72 | - For a Rackspace Cloud environment: 73 | ```` 74 | bash kafka_rax.sh 75 | ```` 76 | 77 | - For static / prebuilt environments: 78 | ```` 79 | bash kafka_static.sh 80 | ```` 81 | -------------------------------------------------------------------------------- /ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | host_key_checking = False 3 | timeout = 30 4 | pipelining=True 5 | -------------------------------------------------------------------------------- /inventory/localhost: -------------------------------------------------------------------------------- 1 | [localhost] 2 | localhost ansible_connection=local -------------------------------------------------------------------------------- /inventory/rax.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | # (c) 2013, Jesse Keating , 5 | # Matt Martz 6 | # 7 | # This file is part of Ansible. 8 | # 9 | # Ansible is free software: you can redistribute it and/or modify 10 | # it under the terms of the GNU General Public License as published by 11 | # the Free Software Foundation, either version 3 of the License, or 12 | # (at your option) any later version. 13 | # 14 | # Ansible is distributed in the hope that it will be useful, 15 | # but WITHOUT ANY WARRANTY; without even the implied warranty of 16 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 17 | # GNU General Public License for more details. 18 | # 19 | # You should have received a copy of the GNU General Public License 20 | # along with Ansible. If not, see . 21 | 22 | """ 23 | Rackspace Cloud Inventory 24 | 25 | Authors: 26 | Jesse Keating , 28 | Matt Martz 29 | 30 | 31 | Description: 32 | Generates inventory that Ansible can understand by making API request to 33 | Rackspace Public Cloud API 34 | 35 | When run against a specific host, this script returns variables similar to: 36 | rax_os-ext-sts_task_state 37 | rax_addresses 38 | rax_links 39 | rax_image 40 | rax_os-ext-sts_vm_state 41 | rax_flavor 42 | rax_id 43 | rax_rax-bandwidth_bandwidth 44 | rax_user_id 45 | rax_os-dcf_diskconfig 46 | rax_accessipv4 47 | rax_accessipv6 48 | rax_progress 49 | rax_os-ext-sts_power_state 50 | rax_metadata 51 | rax_status 52 | rax_updated 53 | rax_hostid 54 | rax_name 55 | rax_created 56 | rax_tenant_id 57 | rax_loaded 58 | 59 | Configuration: 60 | rax.py can be configured using a rax.ini file or via environment 61 | variables. The rax.ini file should live in the same directory along side 62 | this script. 63 | 64 | The section header for configuration values related to this 65 | inventory plugin is [rax] 66 | 67 | [rax] 68 | creds_file = ~/.rackspace_cloud_credentials 69 | regions = IAD,ORD,DFW 70 | env = prod 71 | meta_prefix = meta 72 | access_network = public 73 | access_ip_version = 4 74 | 75 | Each of these configurations also has a corresponding environment variable. 76 | An environment variable will override a configuration file value. 
77 | 78 | creds_file: 79 | Environment Variable: RAX_CREDS_FILE 80 | 81 | An optional configuration that points to a pyrax-compatible credentials 82 | file. 83 | 84 | If not supplied, rax.py will look for a credentials file 85 | at ~/.rackspace_cloud_credentials. It uses the Rackspace Python SDK, 86 | and therefore requires a file formatted per the SDK's specifications. 87 | 88 | https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md 89 | 90 | regions: 91 | Environment Variable: RAX_REGION 92 | 93 | An optional environment variable to narrow inventory search 94 | scope. If used, needs a value like ORD, DFW, SYD (a Rackspace 95 | datacenter) and optionally accepts a comma-separated list. 96 | 97 | environment: 98 | Environment Variable: RAX_ENV 99 | 100 | A configuration that will use an environment as configured in 101 | ~/.pyrax.cfg, see 102 | https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md 103 | 104 | meta_prefix: 105 | Environment Variable: RAX_META_PREFIX 106 | Default: meta 107 | 108 | A configuration that changes the prefix used for meta key/value groups. 109 | For compatibility with ec2.py set to "tag" 110 | 111 | access_network: 112 | Environment Variable: RAX_ACCESS_NETWORK 113 | Default: public 114 | 115 | A configuration that will tell the inventory script to use a specific 116 | server network to determine the ansible_ssh_host value. If no address 117 | is found, ansible_ssh_host will not be set. Accepts a comma-separated 118 | list of network names, the first found wins. 119 | 120 | access_ip_version: 121 | Environment Variable: RAX_ACCESS_IP_VERSION 122 | Default: 4 123 | 124 | A configuration related to "access_network" that will attempt to 125 | determine the ansible_ssh_host value for either IPv4 or IPv6. If no 126 | address is found, ansible_ssh_host will not be set. 127 | Acceptable values are: 4 or 6. Values other than 4 or 6 128 | will be ignored, and 4 will be used. Accepts a comma-separated list, 129 | the first found wins. 
130 | 131 | Examples: 132 | List server instances 133 | $ RAX_CREDS_FILE=~/.raxpub rax.py --list 134 | 135 | List servers in ORD datacenter only 136 | $ RAX_CREDS_FILE=~/.raxpub RAX_REGION=ORD rax.py --list 137 | 138 | List servers in ORD and DFW datacenters 139 | $ RAX_CREDS_FILE=~/.raxpub RAX_REGION=ORD,DFW rax.py --list 140 | 141 | Get server details for server named "server.example.com" 142 | $ RAX_CREDS_FILE=~/.raxpub rax.py --host server.example.com 143 | 144 | Use the instance private IP to connect (instead of public IP) 145 | $ RAX_CREDS_FILE=~/.raxpub RAX_ACCESS_NETWORK=private rax.py --list 146 | """ 147 | 148 | import os 149 | import re 150 | import sys 151 | import argparse 152 | import warnings 153 | import collections 154 | import ConfigParser 155 | 156 | from six import iteritems 157 | 158 | from ansible.constants import get_config, mk_boolean 159 | 160 | try: 161 | import json 162 | except ImportError: 163 | import simplejson as json 164 | 165 | try: 166 | import pyrax 167 | from pyrax.utils import slugify 168 | except ImportError: 169 | print('pyrax is required for this module') 170 | sys.exit(1) 171 | 172 | from time import time 173 | 174 | 175 | NON_CALLABLES = (basestring, bool, dict, int, list, type(None)) 176 | 177 | 178 | def load_config_file(): 179 | p = ConfigParser.ConfigParser() 180 | config_file = os.path.join(os.path.dirname(os.path.realpath(__file__)), 181 | 'rax.ini') 182 | try: 183 | p.read(config_file) 184 | except ConfigParser.Error: 185 | return None 186 | else: 187 | return p 188 | p = load_config_file() 189 | 190 | 191 | def rax_slugify(value): 192 | return 'rax_%s' % (re.sub('[^\w-]', '_', value).lower().lstrip('_')) 193 | 194 | 195 | def to_dict(obj): 196 | instance = {} 197 | for key in dir(obj): 198 | value = getattr(obj, key) 199 | if isinstance(value, NON_CALLABLES) and not key.startswith('_'): 200 | key = rax_slugify(key) 201 | instance[key] = value 202 | 203 | return instance 204 | 205 | 206 | def host(regions, hostname): 207 | hostvars = {} 208 | 209 | for region in regions: 210 | # Connect to the region 211 | cs = pyrax.connect_to_cloudservers(region=region) 212 | for server in cs.servers.list(): 213 | if server.name == hostname: 214 | for key, value in to_dict(server).items(): 215 | hostvars[key] = value 216 | 217 | # And finally, add an IP address 218 | hostvars['ansible_ssh_host'] = server.accessIPv4 219 | print(json.dumps(hostvars, sort_keys=True, indent=4)) 220 | 221 | 222 | def _list_into_cache(regions): 223 | groups = collections.defaultdict(list) 224 | hostvars = collections.defaultdict(dict) 225 | images = {} 226 | cbs_attachments = collections.defaultdict(dict) 227 | 228 | prefix = get_config(p, 'rax', 'meta_prefix', 'RAX_META_PREFIX', 'meta') 229 | 230 | networks = get_config(p, 'rax', 'access_network', 'RAX_ACCESS_NETWORK', 231 | 'public', islist=True) 232 | try: 233 | ip_versions = map(int, get_config(p, 'rax', 'access_ip_version', 234 | 'RAX_ACCESS_IP_VERSION', 4, 235 | islist=True)) 236 | except: 237 | ip_versions = [4] 238 | else: 239 | ip_versions = [v for v in ip_versions if v in [4, 6]] 240 | if not ip_versions: 241 | ip_versions = [4] 242 | 243 | # Go through all the regions looking for servers 244 | for region in regions: 245 | # Connect to the region 246 | cs = pyrax.connect_to_cloudservers(region=region) 247 | if cs is None: 248 | warnings.warn( 249 | 'Connecting to Rackspace region "%s" has caused Pyrax to ' 250 | 'return None. Is this a valid region?' 
% region, 251 | RuntimeWarning) 252 | continue 253 | for server in cs.servers.list(): 254 | # Create a group on region 255 | groups[region].append(server.name) 256 | 257 | # Check if group metadata key in servers' metadata 258 | group = server.metadata.get('group') 259 | if group: 260 | groups[group].append(server.name) 261 | 262 | for extra_group in server.metadata.get('groups', '').split(','): 263 | if extra_group: 264 | groups[extra_group].append(server.name) 265 | 266 | # Add host metadata 267 | for key, value in to_dict(server).items(): 268 | hostvars[server.name][key] = value 269 | 270 | hostvars[server.name]['rax_region'] = region 271 | 272 | for key, value in iteritems(server.metadata): 273 | groups['%s_%s_%s' % (prefix, key, value)].append(server.name) 274 | 275 | groups['instance-%s' % server.id].append(server.name) 276 | groups['flavor-%s' % server.flavor['id']].append(server.name) 277 | 278 | # Handle boot from volume 279 | if not server.image: 280 | if not cbs_attachments[region]: 281 | cbs = pyrax.connect_to_cloud_blockstorage(region) 282 | for vol in cbs.list(): 283 | if mk_boolean(vol.bootable): 284 | for attachment in vol.attachments: 285 | metadata = vol.volume_image_metadata 286 | server_id = attachment['server_id'] 287 | cbs_attachments[region][server_id] = { 288 | 'id': metadata['image_id'], 289 | 'name': slugify(metadata['image_name']) 290 | } 291 | image = cbs_attachments[region].get(server.id) 292 | if image: 293 | server.image = {'id': image['id']} 294 | hostvars[server.name]['rax_image'] = server.image 295 | hostvars[server.name]['rax_boot_source'] = 'volume' 296 | images[image['id']] = image['name'] 297 | else: 298 | hostvars[server.name]['rax_boot_source'] = 'local' 299 | 300 | try: 301 | imagegroup = 'image-%s' % images[server.image['id']] 302 | groups[imagegroup].append(server.name) 303 | groups['image-%s' % server.image['id']].append(server.name) 304 | except KeyError: 305 | try: 306 | image = cs.images.get(server.image['id']) 307 | except cs.exceptions.NotFound: 308 | groups['image-%s' % server.image['id']].append(server.name) 309 | else: 310 | images[image.id] = image.human_id 311 | groups['image-%s' % image.human_id].append(server.name) 312 | groups['image-%s' % server.image['id']].append(server.name) 313 | 314 | # And finally, add an IP address 315 | ansible_ssh_host = None 316 | # use accessIPv[46] instead of looping address for 'public' 317 | for network_name in networks: 318 | if ansible_ssh_host: 319 | break 320 | if network_name == 'public': 321 | for version_name in ip_versions: 322 | if ansible_ssh_host: 323 | break 324 | if version_name == 6 and server.accessIPv6: 325 | ansible_ssh_host = server.accessIPv6 326 | elif server.accessIPv4: 327 | ansible_ssh_host = server.accessIPv4 328 | if not ansible_ssh_host: 329 | addresses = server.addresses.get(network_name, []) 330 | for address in addresses: 331 | for version_name in ip_versions: 332 | if ansible_ssh_host: 333 | break 334 | if address.get('version') == version_name: 335 | ansible_ssh_host = address.get('addr') 336 | break 337 | if ansible_ssh_host: 338 | hostvars[server.name]['ansible_ssh_host'] = ansible_ssh_host 339 | 340 | if hostvars: 341 | groups['_meta'] = {'hostvars': hostvars} 342 | 343 | with open(get_cache_file_path(regions), 'w') as cache_file: 344 | json.dump(groups, cache_file) 345 | 346 | 347 | def get_cache_file_path(regions): 348 | regions_str = '.'.join([reg.strip().lower() for reg in regions]) 349 | ansible_tmp_path = os.path.join(os.path.expanduser("~"), '.ansible', 'tmp') 
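    # The cache file lives under ~/.ansible/tmp and its name includes the
    # authenticated pyrax username plus the region list, so runs against
    # different accounts or regions do not overwrite each other's cache.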
350 | if not os.path.exists(ansible_tmp_path): 351 | os.makedirs(ansible_tmp_path) 352 | return os.path.join(ansible_tmp_path, 353 | 'ansible-rax-%s-%s.cache' % ( 354 | pyrax.identity.username, regions_str)) 355 | 356 | 357 | def _list(regions, refresh_cache=True): 358 | cache_max_age = int(get_config(p, 'rax', 'cache_max_age', 359 | 'RAX_CACHE_MAX_AGE', 600)) 360 | 361 | if (not os.path.exists(get_cache_file_path(regions)) or 362 | refresh_cache or 363 | (time() - os.stat(get_cache_file_path(regions))[-1]) > cache_max_age): 364 | # Cache file doesn't exist or older than 10m or refresh cache requested 365 | _list_into_cache(regions) 366 | 367 | with open(get_cache_file_path(regions), 'r') as cache_file: 368 | groups = json.load(cache_file) 369 | print(json.dumps(groups, sort_keys=True, indent=4)) 370 | 371 | 372 | def parse_args(): 373 | parser = argparse.ArgumentParser(description='Ansible Rackspace Cloud ' 374 | 'inventory module') 375 | group = parser.add_mutually_exclusive_group(required=True) 376 | group.add_argument('--list', action='store_true', 377 | help='List active servers') 378 | group.add_argument('--host', help='List details about the specific host') 379 | parser.add_argument('--refresh-cache', action='store_true', default=True, 380 | help=('Force refresh of cache, making API requests to' 381 | 'RackSpace (default: False - use cache files)')) 382 | return parser.parse_args() 383 | 384 | 385 | def setup(): 386 | default_creds_file = os.path.expanduser('~/.rackspace_cloud_credentials') 387 | 388 | env = get_config(p, 'rax', 'environment', 'RAX_ENV', None) 389 | if env: 390 | pyrax.set_environment(env) 391 | 392 | keyring_username = pyrax.get_setting('keyring_username') 393 | 394 | # Attempt to grab credentials from environment first 395 | creds_file = get_config(p, 'rax', 'creds_file', 396 | 'RAX_CREDS_FILE', None) 397 | if creds_file is not None: 398 | creds_file = os.path.expanduser(creds_file) 399 | else: 400 | # But if that fails, use the default location of 401 | # ~/.rackspace_cloud_credentials 402 | if os.path.isfile(default_creds_file): 403 | creds_file = default_creds_file 404 | elif not keyring_username: 405 | sys.stderr.write('No value in environment variable %s and/or no ' 406 | 'credentials file at %s\n' 407 | % ('RAX_CREDS_FILE', default_creds_file)) 408 | sys.exit(1) 409 | 410 | identity_type = pyrax.get_setting('identity_type') 411 | pyrax.set_setting('identity_type', identity_type or 'rackspace') 412 | 413 | region = pyrax.get_setting('region') 414 | 415 | try: 416 | if keyring_username: 417 | pyrax.keyring_auth(keyring_username, region=region) 418 | else: 419 | pyrax.set_credential_file(creds_file, region=region) 420 | except Exception as e: 421 | sys.stderr.write("%s: %s\n" % (e, e.message)) 422 | sys.exit(1) 423 | 424 | regions = [] 425 | if region: 426 | regions.append(region) 427 | else: 428 | region_list = get_config(p, 'rax', 'regions', 'RAX_REGION', 'all', 429 | islist=True) 430 | for region in region_list: 431 | region = region.strip().upper() 432 | if region == 'ALL': 433 | regions = pyrax.regions 434 | break 435 | elif region not in pyrax.regions: 436 | sys.stderr.write('Unsupported region %s' % region) 437 | sys.exit(1) 438 | elif region not in regions: 439 | regions.append(region) 440 | 441 | return regions 442 | 443 | 444 | def main(): 445 | args = parse_args() 446 | regions = setup() 447 | if args.list: 448 | _list(regions, refresh_cache=args.refresh_cache) 449 | elif args.host: 450 | host(regions, args.host) 451 | sys.exit(0) 452 | 453 | 454 | 
if __name__ == '__main__': 455 | main() 456 | -------------------------------------------------------------------------------- /inventory/static: -------------------------------------------------------------------------------- 1 | [kafka-nodes] 2 | kafka-01 ansible_host=192.168.0.1 ansible_user=root ansible_ssh_pass=AsdQwe123 3 | kafka-02 ansible_host=192.168.0.2 ansible_user=root ansible_ssh_pass=AsdQwe123 4 | kafka-03 ansible_host=192.168.0.3 ansible_user=root ansible_ssh_pass=AsdQwe123 5 | -------------------------------------------------------------------------------- /kafka_rax.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | VARS="${VARS} ANSIBLE_SCP_IF_SSH=y ANSIBLE_HOST_KEY_CHECKING=False" 4 | 5 | RAX_CREDS_FILE=${RAX_CREDS_FILE:-"${HOME}/.raxpub"} 6 | RAX_REGION=ALL 7 | 8 | if [ -n "${RAX_CREDS_FILE}" ] && [ -f "${RAX_CREDS_FILE}" ] 9 | then 10 | echo "Rackspace Cloud will be used." 11 | VARS="${VARS} RAX_CREDS_FILE=${RAX_CREDS_FILE}" 12 | else 13 | echo "RAX_CREDS_FILE could not be found." 14 | echo "Exiting..." 15 | exit 1 16 | fi 17 | 18 | export $VARS 19 | ansible-playbook -f 20 -i inventory/rax.py playbooks/kafka.yml 20 | -------------------------------------------------------------------------------- /kafka_static.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | VARS="${VARS} ANSIBLE_SCP_IF_SSH=y ANSIBLE_HOST_KEY_CHECKING=False" 4 | export $VARS 5 | 6 | ansible-playbook -f 20 -i inventory/static playbooks/kafka.yml 7 | -------------------------------------------------------------------------------- /playbooks/group_vars/all: -------------------------------------------------------------------------------- 1 | --- 2 | kafka_version: 0.9.0.1 3 | apache_mirror: apache.mirror.anlx.net 4 | kafka_port: 9092 5 | zookeeper_client_port: 2181 6 | 7 | # cluster_interface: 'eth0' 8 | data_disks_devices: [] 9 | #data_disks_devices: ['xvde'] 10 | data_disks_filesystem: xfs 11 | 12 | cloud: 13 | region: 'LON' 14 | domain: 'localnet' 15 | nodes_count: 3 16 | image: 'CentOS 7 (PVHVM)' 17 | flavor: 'general1-2' 18 | cbs: 19 | build_cbs: false 20 | disks_size: 100 21 | disks_type: 'SATA' 22 | allowed_external_ips: ['127.0.0.1'] 23 | ssh: 24 | keyname: 'kafka-ssh-key' 25 | keyfile: '~/.ssh/id_rsa.pub' 26 | 27 | # set to true to show host variables 28 | debug: false 29 | -------------------------------------------------------------------------------- /playbooks/kafka.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: "Add nodes to required groups" 3 | hosts: localhost 4 | connection: local 5 | gather_facts: False 6 | tasks: 7 | - name: "Add a maximum of 3 nodes to the zookeeper-nodes group" 8 | add_host: 9 | name: "{{ hostvars[item.1].inventory_hostname }}" 10 | ansible_host: "{{ hostvars[item.1].ansible_host|default(hostvars[item.1].ansible_ssh_host) }}" 11 | ansible_user: "{{ hostvars[item.1].ansible_user|default('root') }}" 12 | ansible_ssh_pass: "{{ hostvars[item.1].ansible_ssh_pass|default('') }}" 13 | ansible_become_user: root 14 | ansible_become_pass: "{{ hostvars[item.1].ansible_ssh_pass|default('') }}" 15 | id: "{{ item.0 + 1 }}" 16 | groups: zookeeper-nodes 17 | with_indexed_items: "{{ groups['kafka-nodes']|sort|batch(3)|list|first }}" 18 | 19 | - name: "Add all cluster nodes to the kafka-nodes group" 20 | add_host: 21 | name: "{{ hostvars[item.1].inventory_hostname }}" 22 | ansible_host: "{{ 
hostvars[item.1].ansible_host|default(hostvars[item.1].ansible_ssh_host) }}" 23 | ansible_user: "{{ hostvars[item.1].ansible_user|default('root') }}" 24 | ansible_ssh_pass: "{{ hostvars[item.1].ansible_ssh_pass|default('') }}" 25 | ansible_become_user: root 26 | ansible_become_pass: "{{ hostvars[item.1].ansible_ssh_pass|default('') }}" 27 | id: "{{ item.0 + 1 }}" 28 | groups: kafka-nodes 29 | with_indexed_items: "{{ groups['kafka-nodes']|sort }}" 30 | 31 | - name: "Show debug info" 32 | hosts: kafka-nodes 33 | become: yes 34 | tasks: 35 | - name: "Show Kafka nodes info" 36 | debug: var=hostvars[inventory_hostname] 37 | when: debug 38 | 39 | - name: "Apply the common role to all nodes" 40 | hosts: kafka-nodes 41 | become: yes 42 | roles: 43 | - common 44 | 45 | - name: "Apply the zookeeper role to the Zookeeper nodes" 46 | hosts: zookeeper-nodes 47 | become: yes 48 | roles: 49 | - zookeeper 50 | 51 | - name: "Apply the kafka role to the Kafka nodes" 52 | hosts: kafka-nodes 53 | become: yes 54 | roles: 55 | - kafka 56 | -------------------------------------------------------------------------------- /playbooks/provision_rax.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: "Build the environment" 3 | hosts: localhost 4 | connection: local 5 | gather_facts: False 6 | tasks: 7 | - name: Create the SSH keypair 8 | local_action: 9 | module: rax_keypair 10 | credentials: "{{ credentials_file }}" 11 | name: "{{ cloud.ssh.keyname }}" 12 | public_key: "{{ lookup('file', cloud.ssh.keyfile) }}" 13 | region: "{{ cloud.region }}" 14 | 15 | - name: "Create kafka nodes" 16 | local_action: 17 | module: rax 18 | credentials: "{{ credentials_file }}" 19 | name: "kafka-%02d.{{ cloud.domain }}" 20 | image: "{{ cloud.image }}" 21 | flavor: "{{ cloud.flavor }}" 22 | count: "{{ cloud.nodes_count }}" 23 | region: "{{ cloud.region }}" 24 | key_name: "{{ cloud.ssh.keyname }}" 25 | exact_count: yes 26 | auto_increment: true 27 | group: kafka-nodes 28 | state: present 29 | wait: true 30 | wait_timeout: 900 31 | register: rax_kafka 32 | when: cloud.nodes_count > 0 33 | 34 | - name: "Create CBS Volumes for kafka nodes" 35 | local_action: 36 | module: rax_cbs 37 | credentials: "{{ credentials_file }}" 38 | name: "{{ item[0].name }}_{{ item[1] }}" 39 | volume_type: "{{ cloud.cbs.disks_type }}" 40 | size: "{{ cloud.cbs.disks_size }}" 41 | region: "{{ cloud.region }}" 42 | state: present 43 | wait: true 44 | with_nested: 45 | - "{{ rax_kafka.instances }}" 46 | - "{{ data_disks_devices }}" 47 | when: cloud.nodes_count > 0 and cloud.cbs.build_cbs 48 | 49 | - name: "Attach CBS Volumes to kafka nodes" 50 | local_action: 51 | module: rax_cbs_attachments 52 | credentials: "{{ credentials_file }}" 53 | volume: "{{ item[0].name }}_{{ item[1] }}" 54 | server: "{{ item[0].id }}" 55 | region: "{{ cloud.region }}" 56 | device: "/dev/{{ item[1] }}" 57 | state: present 58 | wait: true 59 | with_nested: 60 | - "{{ rax_kafka.instances }}" 61 | - "{{ data_disks_devices }}" 62 | when: cloud.nodes_count > 0 and cloud.cbs.build_cbs 63 | 64 | - name: "Show debug info" 65 | hosts: localhost 66 | connection: local 67 | gather_facts: False 68 | tasks: 69 | - name: "Show rax_kafka info" 70 | debug: var=rax_kafka 71 | when: debug 72 | -------------------------------------------------------------------------------- /playbooks/roles/common/files/tuned.conf: -------------------------------------------------------------------------------- 1 | # 2 | # tuned configuration 3 | # 4 | 5 | [main] 6 | 
include=throughput-performance 7 | 8 | [vm] 9 | transparent_hugepages=never 10 | 11 | [sysctl] 12 | vm.swappiness=1 13 | -------------------------------------------------------------------------------- /playbooks/roles/common/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Restart iptables 3 | service: name=iptables state=restarted enabled=yes 4 | 5 | - name: Reload firewalld 6 | command: firewall-cmd --reload 7 | 8 | - name: Run update-grub 9 | shell: "{{ update_grub }}" 10 | when: ansible_distribution_major_version|int > 6 11 | -------------------------------------------------------------------------------- /playbooks/roles/common/tasks/firewall.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Set iptables between cluster nodes 3 | lineinfile: dest=/etc/sysconfig/iptables 4 | insertbefore="^-A INPUT" 5 | line="-A INPUT -s {{ hostvars[item]['ansible_'~hostvars[item].cluster_interface|default(hostvars[item].ansible_default_ipv4.alias)]['ipv4']['address'] }}/32 -j ACCEPT" 6 | state=present 7 | with_items: "{{ play_hosts }}" 8 | when: ansible_os_family == "RedHat" and ansible_distribution_major_version == "6" 9 | notify: Restart iptables 10 | 11 | - name: Set iptables to allow cluster access from external IPs 12 | lineinfile: dest=/etc/sysconfig/iptables 13 | insertbefore="^-A INPUT" 14 | line="-A INPUT -s {{ item }}/32 -j ACCEPT" 15 | state=present 16 | with_items: "{{ cloud.allowed_external_ips }}" 17 | when: ansible_os_family == "RedHat" and ansible_distribution_major_version == "6" 18 | notify: Restart iptables 19 | 20 | - name: Set firewalld rules between cluster nodes 21 | command: firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="{{ hostvars[item]['ansible_'~hostvars[item].cluster_interface|default(hostvars[item].ansible_default_ipv4.alias)]['ipv4']['address'] }}" accept' 22 | # firewalld: source={{ hostvars[item][['ansible_', hostvars[item]['cluster_interface']]|join]['ipv4']['address'] }} state=enabled 23 | with_items: "{{ play_hosts }}" 24 | notify: Reload firewalld 25 | when: ansible_os_family == "RedHat" and ansible_distribution_major_version == "7" 26 | 27 | - name: Set firewalld to allow cluster access from external IPs 28 | command: firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="{{ item }}" accept' 29 | # firewalld: source={{ item }} state=enabled 30 | with_items: "{{ cloud.allowed_external_ips }}" 31 | notify: Reload firewalld 32 | when: ansible_os_family == "RedHat" and ansible_distribution_major_version == "7" 33 | 34 | - name: Set UFW rules between cluster nodes 35 | ufw: rule=allow src={{ hostvars[item]['ansible_'~hostvars[item].cluster_interface|default(hostvars[item].ansible_default_ipv4.alias)]['ipv4']['address'] }} 36 | with_items: "{{ play_hosts }}" 37 | when: ansible_distribution == "Ubuntu" 38 | 39 | - name: Set UFW to allow cluster access from external IPs 40 | ufw: rule=allow src={{ item }} 41 | with_items: "{{ cloud.allowed_external_ips }}" 42 | when: ansible_distribution == "Ubuntu" 43 | -------------------------------------------------------------------------------- /playbooks/roles/common/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Load OS specific variables 3 | include_vars: "{{ item }}" 4 | with_first_found: 5 | - files: 6 | - "{{ ansible_os_family|lower }}-{{ 
ansible_distribution_major_version }}.yml" 7 | - "{{ ansible_os_family|lower }}-{{ ansible_distribution|lower }}.yml" 8 | - "{{ ansible_os_family|lower }}.yml" 9 | - defaults.yml 10 | paths: 11 | - ../vars 12 | 13 | - name: Install epel-release 14 | yum: 15 | name: "{{ epel_yum }}" 16 | state: present 17 | update_cache: yes 18 | ignore_errors: true 19 | register: epel_result 20 | when: ansible_os_family == "RedHat" 21 | 22 | - name: Install epel-release (rpm) 23 | yum: 24 | name: "{{ epel_rpm_url }}" 25 | state: present 26 | when: ansible_os_family == "RedHat" and epel_result|failed 27 | 28 | - name: Ensure required packages are installed (yum) 29 | yum: 30 | name: "{{ item }}" 31 | update_cache: yes 32 | state: installed 33 | with_items: "{{ packages|default([]) }}" 34 | when: ansible_os_family == "RedHat" 35 | 36 | - name: Ensure required packages are installed (apt) 37 | apt: 38 | name: "{{ item }}" 39 | update_cache: yes 40 | state: installed 41 | with_items: "{{ packages|default([]) }}" 42 | when: ansible_os_family == "Debian" 43 | 44 | - name: Upgrade all packages (yum) 45 | yum: name=* state=latest 46 | when: ansible_os_family == "RedHat" 47 | 48 | - name: Upgrade all packages (apt) 49 | apt: upgrade=dist 50 | when: ansible_os_family == "Debian" 51 | 52 | - name: Make sure the NTP service is stopped 53 | service: name={{ ntp_service }} state=stopped 54 | 55 | - name: Force NTP sync 56 | command: "{{ ntp_sync }}" 57 | 58 | - name: Start the NTP service 59 | service: name={{ ntp_service }} state=started enabled=yes 60 | 61 | - name: Set nofile limits 62 | lineinfile: dest=/etc/security/limits.conf 63 | insertbefore="^# End of file" 64 | state=present 65 | line="{{ item }}" 66 | with_items: 67 | - "* soft nofile 32768" 68 | - "* hard nofile 32768" 69 | 70 | - name: Set nproc limits 71 | lineinfile: dest=/etc/security/limits.d/90-nproc.conf 72 | insertafter=EOF 73 | state=present 74 | create=yes 75 | line="{{ item }}" 76 | mode=0644 77 | with_items: 78 | - "* soft nproc 32768" 79 | - "* hard nproc 32768" 80 | 81 | - name: Set swappiness to 1 82 | sysctl: name=vm.swappiness value=1 state=present ignoreerrors=yes 83 | 84 | - name: Get number of kernels in grub.conf 85 | shell: grep -E "^[[:blank:]]*kernel" /boot/grub/grub.conf | grep -v transparent_hugepage 86 | register: grep_result 87 | when: ansible_os_family == "RedHat" and (ansible_distribution == "Amazon" or ansible_distribution_major_version == "6") 88 | ignore_errors: true 89 | 90 | - name: Disable Transparent Huge Pages in Grub 1 91 | lineinfile: dest=/boot/grub/grub.conf 92 | backrefs=True 93 | state=present 94 | regexp='(^\s*kernel(\s+(?!transparent_hugepage=never)[\w=/\-\.\,]+)*)\s*$' 95 | line='\1 transparent_hugepage=never' 96 | with_items: "{{ grep_result.stdout_lines | default('') }}" 97 | when: ansible_os_family == "RedHat" and (ansible_distribution == "Amazon" or ansible_distribution_major_version == "6") 98 | 99 | - name: Disable Transparent Huge Pages in Grub 2 100 | lineinfile: dest=/etc/default/grub 101 | state=present 102 | line='GRUB_CMDLINE_LINUX=$GRUB_CMDLINE_LINUX" transparent_hugepage=never"' 103 | when: ansible_distribution_major_version|int > 6 104 | notify: Run update-grub 105 | 106 | - meta: flush_handlers 107 | 108 | - name: Disable Transparent Huge Pages until reboot 109 | shell: echo never > /sys/kernel/mm/transparent_hugepage/enabled && echo never > /sys/kernel/mm/transparent_hugepage/defrag 110 | ignore_errors: true 111 | 112 | - name: Set the tuned profile 113 | copy: src=tuned.conf 114 | 
dest=/etc/tuned/cassandra/ 115 | mode=0755 116 | when: ansible_os_family == "RedHat" and ansible_distribution_major_version == "7" 117 | 118 | - name: Activate the tuned profile 119 | shell: tuned-adm profile cassandra 120 | when: ansible_os_family == "RedHat" and ansible_distribution_major_version == "7" 121 | 122 | - name: Set hosts file 123 | template: src=hosts.j2 dest=/etc/hosts mode=0644 124 | 125 | - name: Include firewall.yml 126 | include: firewall.yml 127 | when: rax_id is defined 128 | 129 | - name: Include partitioning.yml 130 | include: partitioning.yml 131 | with_items: 132 | - "{{ data_disks_devices|default([]) }}" 133 | 134 | - name: Mount kafka data disks under /disk/{0..n} 135 | mount: state=mounted 136 | src="/dev/{{ item.1 }}1" 137 | name="/disk/{{ item.0 + 1 }}" 138 | fstype="{{ data_disks_filesystem }}" 139 | opts=defaults,noatime 140 | dump=0 141 | passno=0 142 | with_indexed_items: "{{ data_disks_devices|default([]) }}" 143 | when: data_disks_devices is defined and data_disks_devices and ansible_devices[item.1] is defined 144 | -------------------------------------------------------------------------------- /playbooks/roles/common/tasks/partitioning.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Get disk alignment for disk {{ item }} 3 | shell: if [[ -e /sys/block/{{ item }}/queue/optimal_io_size && -e /sys/block/{{ item }}/alignment_offset && -e /sys/block/{{ item }}/queue/physical_block_size ]]; then echo $[$(( ($(cat /sys/block/{{ item }}/queue/optimal_io_size) + $(cat /sys/block/{{ item }}/alignment_offset)) / $(cat /sys/block/{{ item }}/queue/physical_block_size) )) | 2048]; else echo 2048; fi 4 | when: ansible_devices[item] is defined 5 | register: disk_offset 6 | 7 | - name: Partition disk {{ item }} 8 | shell: parted -s -a optimal /dev/{{ item }} mklabel gpt mkpart primary {{ data_disks_filesystem }} {{ disk_offset.stdout|default("2048") }}s 100% && sleep 5 && partprobe /dev/{{ item }}; sleep 5 9 | when: ansible_devices[item] is defined and ansible_devices[item]['partitions'] == {} 10 | 11 | - name: Create the filesystem on disk {{ item }} 12 | filesystem: dev="/dev/{{ item }}1" fstype="{{ data_disks_filesystem }}" 13 | when: ansible_devices[item] is defined 14 | 15 | - name: Disable periodic fsck on {{ item }} 16 | shell: tune2fs -c0 -i0 /dev/{{ item }}1 17 | when: ansible_devices[item] is defined and (data_disks_filesystem == "ext4" or data_disks_filesystem == "ext3") 18 | -------------------------------------------------------------------------------- /playbooks/roles/common/templates/hosts.j2: -------------------------------------------------------------------------------- 1 | 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 2 | ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 3 | {% for node in groups['kafka-nodes'] %} 4 | {{ hostvars[node]['ansible_'~hostvars[node].cluster_interface|default(hostvars[node].ansible_default_ipv4.alias)]['ipv4']['address'] }} {{ hostvars[node]['ansible_nodename'] }} {{ hostvars[node]['ansible_hostname'] }} 5 | {% endfor %} 6 | -------------------------------------------------------------------------------- /playbooks/roles/common/vars/debian.yml: -------------------------------------------------------------------------------- 1 | packages: 2 | - default-jdk 3 | - python-simplejson 4 | - man-db 5 | - vim 6 | - sysstat 7 | - lsof 8 | - netcat 9 | - ntp 10 | - parted 11 | - iotop 12 | - unzip 13 | - curl 14 | - wget 15 
| - ifenslave 16 | - xfsprogs 17 | 18 | update_grub: "/usr/sbin/update-grub" 19 | 20 | ntp_service: ntp 21 | ntp_sync: "/usr/sbin/ntpd -gq" 22 | 23 | datastax_repo: "debian-datastax_repo.j2" 24 | datastax_repo_file: "/etc/apt/sources.list.d/datastax.sources.list" 25 | -------------------------------------------------------------------------------- /playbooks/roles/common/vars/redhat-6.yml: -------------------------------------------------------------------------------- 1 | packages: 2 | - java-1.8.0-openjdk-devel 3 | - python-simplejson 4 | - python-httplib2 5 | - man 6 | - man-pages 7 | - vim-enhanced 8 | - sysstat 9 | - lsof 10 | - jwhois 11 | - nc 12 | - libselinux-python 13 | - ntp 14 | - parted 15 | - openssh-clients 16 | - iotop 17 | - unzip 18 | - curl 19 | - wget 20 | - xfsprogs 21 | 22 | ntp_service: ntpd 23 | ntp_sync: "/usr/sbin/ntpd -gq" 24 | 25 | epel_rpm_url: "http://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm" 26 | epel_yum: "epel-release" 27 | 28 | datastax_repo: "redhat-datastax_repo.j2" 29 | datastax_repo_file: "/etc/yum.repos.d/datastax.repo" 30 | datastax_repo_key: "http://rpm.datastax.com/rpm/repo_key" 31 | -------------------------------------------------------------------------------- /playbooks/roles/common/vars/redhat-7.yml: -------------------------------------------------------------------------------- 1 | packages: 2 | - java-1.8.0-openjdk-devel 3 | - sysstat 4 | - lsof 5 | - parted 6 | - openssh-clients 7 | - unzip 8 | - curl 9 | - wget 10 | - iotop 11 | - ntpdate 12 | - chrony 13 | - nmap-ncat 14 | 15 | update_grub: 'cd && grub2-mkconfig -o "$(readlink -n /etc/grub2.cfg)"' 16 | 17 | ntp_service: chronyd 18 | ntp_sync: "systemctl restart ntpdate" 19 | 20 | epel_rpm_url: "http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm" 21 | epel_yum: "epel-release" 22 | 23 | datastax_repo: "redhat-datastax_repo.j2" 24 | datastax_repo_file: "/etc/yum.repos.d/datastax.repo" 25 | datastax_repo_key: "http://rpm.datastax.com/rpm/repo_key" 26 | -------------------------------------------------------------------------------- /playbooks/roles/common/vars/redhat-amazon.yml: -------------------------------------------------------------------------------- 1 | packages: 2 | - java-1.8.0-openjdk-devel 3 | - python-simplejson 4 | - python26-httplib2 5 | - python27-httplib2 6 | - man 7 | - man-pages 8 | - vim-enhanced 9 | - sysstat 10 | - lsof 11 | - jwhois 12 | - nc 13 | - libselinux-python 14 | - ntp 15 | - parted 16 | - openssh-clients 17 | - iotop 18 | - unzip 19 | - curl 20 | - wget 21 | - xfsprogs 22 | 23 | ntp_service: ntpd 24 | ntp_sync: "/usr/sbin/ntpd -gq" 25 | 26 | epel_rpm_url: "http://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm" 27 | epel_yum: "epel-release" 28 | 29 | resolv_conf: "/etc/resolv.conf" 30 | 31 | networking_path: "/etc/sysconfig/network-scripts" 32 | networking_eth: "redhat-ifcfg-eth.j2" 33 | networking_bond: "redhat-ifcfg-bond.j2" 34 | -------------------------------------------------------------------------------- /playbooks/roles/kafka/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | kafka_user: kafka 3 | kafka_group: kafka 4 | kafka_conf_dir: /etc/kafka/config 5 | kafka_log_dir: /var/log/kafka 6 | -------------------------------------------------------------------------------- /playbooks/roles/kafka/files/kafka.sh: -------------------------------------------------------------------------------- 1 | export PATH=/opt/kafka/bin:$PATH 2 | 
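# This file is copied to /etc/profile.d/ by the kafka role so that the Kafka
# command-line tools (kafka-topics.sh, kafka-console-producer.sh, ...) are on
# the PATH of every login shell.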
-------------------------------------------------------------------------------- /playbooks/roles/kafka/files/log4j.properties: -------------------------------------------------------------------------------- 1 | # Licensed to the Apache Software Foundation (ASF) under one or more 2 | # contributor license agreements. See the NOTICE file distributed with 3 | # this work for additional information regarding copyright ownership. 4 | # The ASF licenses this file to You under the Apache License, Version 2.0 5 | # (the "License"); you may not use this file except in compliance with 6 | # the License. You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | 16 | log4j.rootLogger=INFO, stdout 17 | 18 | log4j.appender.stdout=org.apache.log4j.ConsoleAppender 19 | log4j.appender.stdout.layout=org.apache.log4j.PatternLayout 20 | log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n 21 | 22 | log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender 23 | log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH 24 | log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log 25 | log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout 26 | log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n 27 | 28 | log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender 29 | log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH 30 | log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log 31 | log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout 32 | log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n 33 | 34 | log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender 35 | log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH 36 | log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log 37 | log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout 38 | log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n 39 | 40 | log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender 41 | log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH 42 | log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log 43 | log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout 44 | log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n 45 | 46 | log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender 47 | log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH 48 | log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log 49 | log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout 50 | log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n 51 | 52 | log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender 53 | log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH 54 | log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log 55 | log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout 56 | 
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n 57 | 58 | # Turn on all our debugging info 59 | #log4j.logger.kafka.producer.async.DefaultEventHandler=DEBUG, kafkaAppender 60 | #log4j.logger.kafka.client.ClientUtils=DEBUG, kafkaAppender 61 | #log4j.logger.kafka.perf=DEBUG, kafkaAppender 62 | #log4j.logger.kafka.perf.ProducerPerformance$ProducerThread=DEBUG, kafkaAppender 63 | #log4j.logger.org.I0Itec.zkclient.ZkClient=DEBUG 64 | log4j.logger.kafka=INFO, kafkaAppender 65 | 66 | log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender 67 | log4j.additivity.kafka.network.RequestChannel$=false 68 | 69 | #log4j.logger.kafka.network.Processor=TRACE, requestAppender 70 | #log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender 71 | #log4j.additivity.kafka.server.KafkaApis=false 72 | log4j.logger.kafka.request.logger=WARN, requestAppender 73 | log4j.additivity.kafka.request.logger=false 74 | 75 | log4j.logger.kafka.controller=TRACE, controllerAppender 76 | log4j.additivity.kafka.controller=false 77 | 78 | log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender 79 | log4j.additivity.kafka.log.LogCleaner=false 80 | 81 | log4j.logger.state.change.logger=TRACE, stateChangeAppender 82 | log4j.additivity.state.change.logger=false 83 | 84 | #Change this to debug to get the actual audit log for authorizer. 85 | log4j.logger.kafka.authorizer.logger=WARN, authorizerAppender 86 | log4j.additivity.kafka.authorizer.logger=false 87 | 88 | -------------------------------------------------------------------------------- /playbooks/roles/kafka/files/tools-log4j.properties: -------------------------------------------------------------------------------- 1 | # Licensed to the Apache Software Foundation (ASF) under one or more 2 | # contributor license agreements. See the NOTICE file distributed with 3 | # this work for additional information regarding copyright ownership. 4 | # The ASF licenses this file to You under the Apache License, Version 2.0 5 | # (the "License"); you may not use this file except in compliance with 6 | # the License. You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 
15 | 16 | log4j.rootLogger=WARN, stderr 17 | 18 | log4j.appender.stderr=org.apache.log4j.ConsoleAppender 19 | log4j.appender.stderr.layout=org.apache.log4j.PatternLayout 20 | log4j.appender.stderr.layout.ConversionPattern=[%d] %p %m (%c)%n 21 | log4j.appender.stderr.Target=System.err 22 | -------------------------------------------------------------------------------- /playbooks/roles/kafka/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Reload systemd 3 | command: systemctl daemon-reload 4 | when: ansible_os_family == "RedHat" and ansible_distribution_major_version == "7" 5 | 6 | - name: Restart kafka 7 | service: name=kafka-server enabled=yes state=restarted 8 | -------------------------------------------------------------------------------- /playbooks/roles/kafka/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Add group "kafka" 3 | group: name={{ kafka_group }} system=yes 4 | 5 | - name: Add user "kafka" 6 | user: name={{ kafka_user }} group={{ kafka_group }} shell=/sbin/nologin system=yes 7 | 8 | - name: Download the tarball 9 | get_url: url="http://{{ apache_mirror }}/kafka/{{ kafka_version }}/kafka_2.11-{{ kafka_version }}.tgz" dest=/opt/kafka_2.11-{{ kafka_version }}.tgz 10 | 11 | - name: Extract the tarball 12 | unarchive: src=/opt/kafka_2.11-{{ kafka_version }}.tgz dest=/opt/ copy=no creates=/opt/kafka_2.11-{{ kafka_version }} 13 | 14 | - name: Symlink install directory 15 | file: src=/opt/kafka_2.11-{{ kafka_version }} path=/opt/kafka state=link 16 | 17 | - name: Set permissions to Kafka folder 18 | file: path=/opt/kafka/ owner=root group=root recurse=yes 19 | 20 | - name: Create /kafka/data directory 21 | file: path=/kafka/data state=directory owner={{ kafka_user }} group={{ kafka_group }} 22 | when: data_disks_devices is not defined or not data_disks_devices or data_disks_devices|length <= 0 23 | 24 | - name: Create /disk/{0..n}/kafka/data directories 25 | file: path="/disk/{{ item.0 + 1 }}/kafka/data" state=directory owner={{ kafka_user }} group={{ kafka_group }} 26 | with_indexed_items: "{{ data_disks_devices }}" 27 | when: data_disks_devices is defined and data_disks_devices 28 | 29 | - name: Create Kafka log directory 30 | file: path={{ kafka_log_dir }} state=directory owner={{ kafka_user }} group={{ kafka_group }} 31 | 32 | - name: Link the log directory 33 | file: src={{ kafka_log_dir }} path=/opt/kafka/logs state=link 34 | 35 | - name: Create Kafka conf directory 36 | file: path={{ kafka_conf_dir }} state=directory owner=root group=root mode=755 37 | 38 | - name: Install Kafka init script 39 | template: src=kafka-server.j2 dest=/etc/init.d/kafka-server mode=0755 40 | notify: Reload systemd 41 | 42 | - meta: flush_handlers 43 | 44 | - name: Configure Kafka server 45 | template: src=server.properties.j2 dest="{{kafka_conf_dir}}/server.properties" 46 | notify: Restart kafka 47 | 48 | - name: Upload log4j configs 49 | copy: src={{ item }} dest="{{ kafka_conf_dir }}/" owner=root group=root mode=644 50 | with_items: 51 | - log4j.properties 52 | - tools-log4j.properties 53 | notify: Restart kafka 54 | 55 | - meta: flush_handlers 56 | 57 | - name: Make sure Kafka is running 58 | service: name=kafka-server enabled=yes state=started 59 | 60 | - name: Add Kafka binaries to PATH 61 | copy: src=kafka.sh dest=/etc/profile.d/ owner=root group=root mode=644 62 | -------------------------------------------------------------------------------- 
/playbooks/roles/kafka/templates/kafka-server.j2: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # kafka-server This shell script takes care of starting and stopping Kafka 4 | # 5 | # chkconfig: 2345 81 19 6 | # 7 | ### BEGIN INIT INFO 8 | # Provides: kafka-server 9 | # Required-Start: $network $syslog $zookeeper-server 10 | # Required-Stop: $network $syslog 11 | # Default-Start: 2 3 4 5 12 | # Default-Stop: 0 1 6 13 | # Description: Kafka server 14 | # Short-Description: start and stop kafka-server 15 | ### END INIT INFO 16 | # 17 | # - based on https://fedoraproject.org/wiki/Packaging:SysVInitScript?rd=Packaging/SysVInitScript 18 | 19 | ## Source function library. 20 | . /etc/rc.d/init.d/functions 2> /dev/null || . /lib/lsb/init-functions 2> /dev/null 21 | 22 | prog="kafka-server" 23 | 24 | # Get instance specific config file 25 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 26 | [ -e /etc/default/$prog ] && . /etc/default/$prog 27 | 28 | lockfile=/var/lock/subsys/$prog 29 | 30 | # For SELinux we need to use 'runuser' not 'su' 31 | if [ -x "/sbin/runuser" ]; then 32 | SU="/sbin/runuser -s /bin/sh" 33 | else 34 | SU="/bin/su -s /bin/sh" 35 | fi 36 | 37 | # Define the kafka username 38 | KAFKA_USER="${KAFKA_USER:-{{ kafka_user }}}" 39 | 40 | # Define the kafka group 41 | KAFKA_GROUP="${KAFKA_GROUP:-`id -gn $KAFKA_USER 2> /dev/null`}" 42 | 43 | # Kafka home 44 | KAFKA_HOME="${KAFKA_HOME:-/opt/kafka}" 45 | 46 | # Kafka startup script 47 | KAFKA_START_SCRIPT="${KAFKA_HOME}/bin/kafka-server-start.sh" 48 | 49 | # Kafka configuration file 50 | KAFKA_CONFIG="{{ kafka_conf_dir }}/server.properties" 51 | 52 | # Time to wait in seconds, before sending SIGKILL to the process 53 | SHUTDOWN_WAIT=${SHUTDOWN_WAIT:-"30"} 54 | 55 | # Whether to output "attempting to shut down" messages or not 56 | SHUTDOWN_VERBOSE=${SHUTDOWN_VERBOSE:-false} 57 | 58 | retval="0" 59 | 60 | # Defining log_success, log_failure and log_warning to not depend on lsb being installed 61 | function log_success() { 62 | echo -n $* 63 | (success 2> /dev/null && echo) || log_end_msg 0 2> /dev/null 64 | } 65 | 66 | 67 | function log_failure() { 68 | echo -n $* 69 | (failure 2> /dev/null && echo) || log_end_msg 1 2> /dev/null 70 | } 71 | 72 | 73 | function log_warning() { 74 | echo -n $* 75 | (warning 2> /dev/null && echo) || log_end_msg 255 2> /dev/null 76 | } 77 | 78 | 79 | function checklockfile() { 80 | if [ -f $lockfile ]; then 81 | pid="$(/usr/bin/pgrep -d , -u ${KAFKA_USER} -G ${KAFKA_GROUP} java)" 82 | [ "$?" == "2" ] && exit 2 83 | # The lockfile exists but the process is not running 84 | if [ -z "$pid" ]; then 85 | retval="2" 86 | fi 87 | fi 88 | } 89 | 90 | 91 | function status() { 92 | checklockfile 93 | if [ "$retval" -eq "2" ]; then 94 | retval=2 95 | [ "$log_messages" = "true" ] && log_failure "$prog lockfile exists but process is not running" 96 | else 97 | pid="$(/usr/bin/pgrep -d , -u ${KAFKA_USER} -G ${KAFKA_GROUP} java)" 98 | [ "$?" == "2" ] && exit 2 99 | if [ -z "$pid" ]; then 100 | [ "$log_messages" = "true" ] && log_success "$prog is stopped" 101 | retval="3" 102 | else 103 | [ "$log_messages" = "true" ] && log_success "$prog (pid ${pid}) is running..." 104 | retval="0" 105 | fi 106 | fi 107 | } 108 | 109 | 110 | function start() { 111 | echo -n "Starting $prog: " 112 | log_messages=false 113 | status 114 | if [ "$retval" -eq "0" ]; then 115 | log_success "${prog} (pid ${pid}) is already running..." 
116 | else 117 | if [ -r /etc/rc.d/init.d/functions ]; then 118 | daemon --user $KAFKA_USER $KAFKA_START_SCRIPT -daemon $KAFKA_CONFIG 119 | else 120 | $SU - $KAFKA_USER -c "$KAFKA_START_SCRIPT -daemon $KAFKA_CONFIG" 121 | fi 122 | sleep 3 123 | status 124 | if [ "$retval" -eq "3" ]; then 125 | log_failure "Error code ${retval}" 126 | else 127 | retval=0 128 | log_success 129 | touch $lockfile 2> /dev/null 130 | fi 131 | fi 132 | } 133 | 134 | 135 | function stop() { 136 | echo -n "Stopping $prog: " 137 | log_messages=false 138 | status 139 | if [ "$retval" -ne "0" ]; then 140 | log_success "${prog} is already stopped..." 141 | rm -f $lockfile 142 | retval=0 143 | else 144 | java_pids="$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}')" 145 | # Try issuing SIGTERM 146 | kill -15 $java_pids 147 | sleep 3 148 | count="0" 149 | until [ "$(ps --pid $pid | grep -c $pid)" -eq "0" ] || [ "$count" -gt "$SHUTDOWN_WAIT" ]; do 150 | [ "$SHUTDOWN_VERBOSE" = "true" ] && echo "waiting for processes $pid to exit" 151 | sleep 1 152 | let count="${count}+1" 153 | done 154 | if [ "$count" -gt "$SHUTDOWN_WAIT" ]; then 155 | [ "$SHUTDOWN_VERBOSE" = "true" ] && log_warning "killing processes which did not stop after ${SHUTDOWN_WAIT} seconds" 156 | kill -9 $pid 157 | fi 158 | log_success 159 | rm -f $lockfile 160 | retval=0 161 | fi 162 | } 163 | 164 | 165 | function restart() { 166 | stop 167 | sleep 1 168 | [ "$retval" -eq "0" ] && start 169 | } 170 | 171 | case "$1" in 172 | start|stop|restart) 173 | $1 174 | ;; 175 | force-reload) 176 | restart 177 | ;; 178 | status) 179 | log_messages=true 180 | status 181 | ;; 182 | try-restart|condrestart) 183 | if status >/dev/null ; then 184 | restart 185 | fi 186 | ;; 187 | reload) 188 | echo "Configuration reload not possible with $prog. Use restart." 189 | retval="3" 190 | ;; 191 | *) 192 | echo "Usage: $0 {start|stop|status|restart|try-restart|force-reload}" 193 | exit 2 194 | esac 195 | 196 | exit $retval 197 | -------------------------------------------------------------------------------- /playbooks/roles/kafka/templates/server.properties.j2: -------------------------------------------------------------------------------- 1 | # Licensed to the Apache Software Foundation (ASF) under one or more 2 | # contributor license agreements. See the NOTICE file distributed with 3 | # this work for additional information regarding copyright ownership. 4 | # The ASF licenses this file to You under the Apache License, Version 2.0 5 | # (the "License"); you may not use this file except in compliance with 6 | # the License. You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | # see kafka.server.KafkaConfig for additional details and defaults 16 | 17 | ############################# Server Basics ############################# 18 | 19 | # The id of the broker. This must be set to a unique integer for each broker. 
20 | broker.id={{ hostvars[inventory_hostname]['id'] }} 21 | 22 | ############################# Socket Server Settings ############################# 23 | 24 | # The port the socket server listens on 25 | port={{ kafka_port }} 26 | 27 | # Hostname the broker will bind to. If not set, the server will bind to all interfaces 28 | #host.name=localhost 29 | host.name={{ hostvars[inventory_hostname]['ansible_nodename'] }} 30 | 31 | # Hostname the broker will advertise to producers and consumers. If not set, it uses the 32 | # value for "host.name" if configured. Otherwise, it will use the value returned from 33 | # java.net.InetAddress.getCanonicalHostName(). 34 | #advertised.host.name= 35 | 36 | # The port to publish to ZooKeeper for clients to use. If this is not set, 37 | # it will publish the same port that the broker binds to. 38 | #advertised.port= 39 | 40 | # The number of threads handling network requests 41 | num.network.threads=2 42 | 43 | # The number of threads doing disk I/O 44 | num.io.threads=8 45 | 46 | # The send buffer (SO_SNDBUF) used by the socket server 47 | socket.send.buffer.bytes=1048576 48 | 49 | # The receive buffer (SO_RCVBUF) used by the socket server 50 | socket.receive.buffer.bytes=1048576 51 | 52 | # The maximum size of a request that the socket server will accept (protection against OOM) 53 | socket.request.max.bytes=104857600 54 | 55 | 56 | ############################# Log Basics ############################# 57 | 58 | # A comma seperated list of directories under which to store log files 59 | log.dirs={% for disk in ansible_mounts if disk.mount | match("/disk/*") %}{{ disk.mount }}/kafka/data{% if not loop.last %},{% endif %}{% else %}/kafka/data{%- endfor %} 60 | 61 | # The default number of log partitions per topic. More partitions allow greater 62 | # parallelism for consumption, but this will also result in more files across 63 | # the brokers. 64 | num.partitions=2 65 | 66 | ############################# Log Flush Policy ############################# 67 | 68 | # Messages are immediately written to the filesystem but by default we only fsync() to sync 69 | # the OS cache lazily. The following configurations control the flush of data to disk. 70 | # There are a few important trade-offs here: 71 | # 1. Durability: Unflushed data may be lost if you are not using replication. 72 | # 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush. 73 | # 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to exceessive seeks. 74 | # The settings below allow one to configure the flush policy to flush data after a period of time or 75 | # every N messages (or both). This can be done globally and overridden on a per-topic basis. 76 | 77 | # The number of messages to accept before forcing a flush of data to disk 78 | #log.flush.interval.messages=10000 79 | 80 | # The maximum amount of time a message can sit in a log before we force a flush 81 | #log.flush.interval.ms=1000 82 | 83 | ############################# Log Retention Policy ############################# 84 | 85 | # The following configurations control the disposal of log segments. The policy can 86 | # be set to delete segments after a period of time, or after a given size has accumulated. 87 | # A segment will be deleted whenever *either* of these criteria are met. Deletion always happens 88 | # from the end of the log. 
89 | 90 | # The minimum age of a log file to be eligible for deletion 91 | log.retention.hours=168 92 | 93 | # A size-based retention policy for logs. Segments are pruned from the log as long as the remaining 94 | # segments don't drop below log.retention.bytes. 95 | #log.retention.bytes=1073741824 96 | 97 | # The maximum size of a log segment file. When this size is reached a new log segment will be created. 98 | log.segment.bytes=536870912 99 | 100 | # The interval at which log segments are checked to see if they can be deleted according 101 | # to the retention policies 102 | log.retention.check.interval.ms=60000 103 | 104 | # By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires. 105 | # If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction. 106 | log.cleaner.enable=false 107 | 108 | ############################# Zookeeper ############################# 109 | 110 | # Zookeeper connection string (see zookeeper docs for details). 111 | # This is a comma separated host:port pairs, each corresponding to a zk 112 | # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002". 113 | # You can also append an optional chroot string to the urls to specify the 114 | # root directory for all kafka znodes. 115 | zookeeper.connect={% for server in groups['zookeeper-nodes'] %}{{ hostvars[server]['ansible_nodename'] }}:{{ zookeeper_client_port }}{% if not loop.last %},{% endif %}{% endfor %} 116 | 117 | 118 | # Timeout in ms for connecting to zookeeper 119 | zookeeper.connection.timeout.ms=1000000 120 | -------------------------------------------------------------------------------- /playbooks/roles/zookeeper/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | zookeeper_version: 3.4.10 3 | zookeeper_user: zookeeper 4 | zookeeper_group: zookeeper 5 | zookeeper_conf_dir: /etc/zookeeper/conf 6 | zookeeper_log_dir: /var/log/zookeeper 7 | zookeeper_data_dir: /var/lib/zookeeper 8 | zookeeper_maxClientCnxns: 50 9 | -------------------------------------------------------------------------------- /playbooks/roles/zookeeper/files/log4j.properties: -------------------------------------------------------------------------------- 1 | # Define some default values that can be overridden by system properties 2 | zookeeper.root.logger=INFO, CONSOLE 3 | zookeeper.console.threshold=INFO 4 | zookeeper.log.dir=. 5 | zookeeper.log.file=zookeeper.log 6 | zookeeper.log.threshold=DEBUG 7 | zookeeper.tracelog.dir=. 
8 | zookeeper.tracelog.file=zookeeper_trace.log 9 | 10 | # 11 | # ZooKeeper Logging Configuration 12 | # 13 | 14 | # Format is " (, )+ 15 | 16 | # DEFAULT: console appender only 17 | log4j.rootLogger=${zookeeper.root.logger} 18 | 19 | # Example with rolling log file 20 | #log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE 21 | 22 | # Example with rolling log file and tracing 23 | #log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE 24 | 25 | # 26 | # Log INFO level and above messages to the console 27 | # 28 | log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender 29 | log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold} 30 | log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout 31 | log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n 32 | 33 | # 34 | # Add ROLLINGFILE to rootLogger to get log file output 35 | # Log DEBUG level and above messages to a log file 36 | log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender 37 | log4j.appender.ROLLINGFILE.Threshold=${zookeeper.log.threshold} 38 | log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/${zookeeper.log.file} 39 | 40 | # Max log file size of 10MB 41 | log4j.appender.ROLLINGFILE.MaxFileSize=10MB 42 | # uncomment the next line to limit number of backup files 43 | #log4j.appender.ROLLINGFILE.MaxBackupIndex=10 44 | 45 | log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout 46 | log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n 47 | 48 | 49 | # 50 | # Add TRACEFILE to rootLogger to get log file output 51 | # Log DEBUG level and above messages to a log file 52 | log4j.appender.TRACEFILE=org.apache.log4j.FileAppender 53 | log4j.appender.TRACEFILE.Threshold=TRACE 54 | log4j.appender.TRACEFILE.File=${zookeeper.tracelog.dir}/${zookeeper.tracelog.file} 55 | 56 | log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout 57 | ### Notice we are including log4j's NDC here (%x) 58 | log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L][%x] - %m%n 59 | -------------------------------------------------------------------------------- /playbooks/roles/zookeeper/files/zookeeper.sh: -------------------------------------------------------------------------------- 1 | export PATH=/opt/zookeeper/bin:$PATH 2 | -------------------------------------------------------------------------------- /playbooks/roles/zookeeper/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Reload systemd 3 | command: systemctl daemon-reload 4 | when: ansible_os_family == "RedHat" and ansible_distribution_major_version == "7" 5 | 6 | - name: Restart zookeeper 7 | service: name=zookeeper-server enabled=yes state=restarted 8 | -------------------------------------------------------------------------------- /playbooks/roles/zookeeper/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Add group "zookeeper" 3 | group: name={{ zookeeper_group }} system=yes 4 | 5 | - name: Add user "zookeeper" 6 | user: name={{ zookeeper_group }} group={{ zookeeper_group }} home={{ zookeeper_data_dir }} shell=/sbin/nologin system=yes 7 | 8 | - name: Download the tarball 9 | get_url: url=http://{{ apache_mirror }}/zookeeper/stable/zookeeper-{{ zookeeper_version }}.tar.gz dest=/opt/zookeeper-{{ zookeeper_version }}.tar.gz 10 | 11 | - name: Extract the tarball 12 | unarchive: 
src=/opt/zookeeper-{{ zookeeper_version }}.tar.gz dest=/opt/ copy=no creates=/opt/zookeeper-{{ zookeeper_version }} 13 | 14 | - name: Symlink install directory 15 | file: src=/opt/zookeeper-{{ zookeeper_version }} path=/opt/zookeeper state=link 16 | 17 | - name: Set permissions to Zookeeper folder 18 | file: path=/opt/zookeeper/ owner=root group=root recurse=yes 19 | 20 | - name: Create Zookeeper data directory 21 | file: path={{ zookeeper_data_dir }} state=directory owner={{ zookeeper_user }} group={{ zookeeper_group }} 22 | 23 | - name: Create Zookeeper log directory 24 | file: path={{ zookeeper_log_dir }} state=directory owner={{ zookeeper_user }} group={{ zookeeper_group }} 25 | 26 | - name: Create Zookeeper conf directory 27 | file: path={{ zookeeper_conf_dir }} state=directory owner=root group=root mode=755 28 | 29 | - name: Install Zookeeper init script 30 | template: src=zookeeper-server.j2 dest=/etc/init.d/zookeeper-server mode=0755 31 | notify: Reload systemd 32 | 33 | - meta: flush_handlers 34 | 35 | - name: Configure Zookeeper server 36 | template: src=zoo.cfg.j2 dest="{{zookeeper_conf_dir}}/zoo.cfg" 37 | notify: Restart zookeeper 38 | 39 | - name: Configure myid 40 | template: src=myid.j2 dest="{{ zookeeper_data_dir }}/myid" 41 | notify: Restart zookeeper 42 | 43 | - name: Upload log4j config 44 | copy: src=log4j.properties dest="{{ zookeeper_conf_dir }}/log4j.properties" owner=root group=root mode=644 45 | notify: Restart zookeeper 46 | 47 | - meta: flush_handlers 48 | 49 | - name: Make sure Zookeeper is running 50 | service: name=zookeeper-server enabled=yes state=started 51 | 52 | - name: Add Zookeeper binaries to PATH 53 | copy: src=zookeeper.sh dest=/etc/profile.d/ owner=root group=root mode=644 54 | -------------------------------------------------------------------------------- /playbooks/roles/zookeeper/templates/myid.j2: -------------------------------------------------------------------------------- 1 | {{ hostvars[inventory_hostname]['id'] }} -------------------------------------------------------------------------------- /playbooks/roles/zookeeper/templates/zoo.cfg.j2: -------------------------------------------------------------------------------- 1 | # Licensed to the Apache Software Foundation (ASF) under one or more 2 | # contributor license agreements. See the NOTICE file distributed with 3 | # this work for additional information regarding copyright ownership. 4 | # The ASF licenses this file to You under the Apache License, Version 2.0 5 | # (the "License"); you may not use this file except in compliance with 6 | # the License. You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | 16 | maxClientCnxns={{ zookeeper_maxClientCnxns }} 17 | # The number of milliseconds of each tick 18 | tickTime=2000 19 | # The number of ticks that the initial 20 | # synchronization phase can take 21 | initLimit=10 22 | # The number of ticks that can pass between 23 | # sending a request and getting an acknowledgement 24 | syncLimit=5 25 | # the directory where the snapshot is stored. 
26 | dataDir={{ zookeeper_data_dir }} 27 | # the port at which the clients will connect 28 | clientPort={{ zookeeper_client_port }} 29 | 30 | {% for server in play_hosts %} 31 | server.{{ hostvars[server]['id'] }}={{ hostvars[server]['ansible_nodename'] }}:2888:3888 32 | {% endfor %} 33 | -------------------------------------------------------------------------------- /playbooks/roles/zookeeper/templates/zookeeper-server.j2: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Licensed to the Apache Software Foundation (ASF) under one or more 4 | # contributor license agreements. See the NOTICE file distributed with 5 | # this work for additional information regarding copyright ownership. 6 | # The ASF licenses this file to You under the Apache License, Version 2.0 7 | # (the "License"); you may not use this file except in compliance with 8 | # the License. You may obtain a copy of the License at 9 | # 10 | # http://www.apache.org/licenses/LICENSE-2.0 11 | # 12 | # Unless required by applicable law or agreed to in writing, software 13 | # distributed under the License is distributed on an "AS IS" BASIS, 14 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | # See the License for the specific language governing permissions and 16 | # limitations under the License. 17 | 18 | # chkconfig: 2345 80 20 19 | # description: Summary: ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. All of these kinds of services are used in some form or another by distributed applications. Each time they are implemented there is a lot of work that goes into fixing the bugs and race conditions that are inevitable. Because of the difficulty of implementing these kinds of services, applications initially usually skimp on them ,which make them brittle in the presence of change and difficult to manage. Even when done correctly, different implementations of these services lead to management complexity when the applications are deployed. 20 | # processname: java 21 | # pidfile: /var/run/zookeeper/zookeeper-server.pid 22 | ### BEGIN INIT INFO 23 | # Provides: zookeeper-server 24 | # Required-Start: $network $local_fs 25 | # Required-Stop: 26 | # Should-Start: $named 27 | # Should-Stop: 28 | # Default-Start: 2 3 4 5 29 | # Default-Stop: 0 1 6 30 | # Short-Description: ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. 
31 | ### END INIT INFO 32 | set -e 33 | 34 | PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin 35 | 36 | NAME=zookeeper-server 37 | DESC="ZooKeeper daemon" 38 | PID_FILE=/var/run/zookeeper/zookeeper-server.pid 39 | install -d -m 0755 -o {{ zookeeper_user }} -g {{ zookeeper_group }} /var/run/zookeeper/ 40 | 41 | DODTIME=3 42 | 43 | export ZOOPIDFILE=${PID_FILE} 44 | export ZOOKEEPER_HOME=${ZOOKEEPER_HOME:-/opt/zookeeper} 45 | export ZOOKEEPER_CONF=${ZOOKEEPER_CONF:-{{ zookeeper_conf_dir }}} 46 | export ZOOCFGDIR=${ZOOCFGDIR:-$ZOOKEEPER_CONF} 47 | export CLASSPATH=$CLASSPATH:$ZOOKEEPER_CONF:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/* 48 | export ZOO_LOG_DIR=${ZOO_LOG_DIR:-{{ zookeeper_log_dir }}} 49 | export ZOO_LOG4J_PROP=${ZOO_LOG4J_PROP:-INFO,ROLLINGFILE} 50 | export JVMFLAGS=${JVMFLAGS:--Dzookeeper.log.threshold=INFO} 51 | export ZOO_DATADIR_AUTOCREATE_DISABLE=${ZOO_DATADIR_AUTOCREATE_DISABLE:-true} 52 | DAEMON_SCRIPT="env CLASSPATH=$CLASSPATH $ZOOKEEPER_HOME/bin/zkServer.sh" 53 | 54 | 55 | # Checks if the given pid represents a live process. 56 | # Returns 0 if the pid is a live process, 1 otherwise 57 | hadoop_is_process_alive() { 58 | local pid="$1" 59 | ps -fp $pid | grep $pid | grep zookeeper > /dev/null 2>&1 60 | } 61 | 62 | hadoop_check_pidfile() { 63 | local pidfile="$1" # IN 64 | local pid 65 | 66 | pid=`cat "$pidfile" 2>/dev/null` 67 | if [ "$pid" = '' ]; then 68 | # The file probably does not exist or is empty. 69 | return 1 70 | fi 71 | 72 | set -- $pid 73 | pid="$1" 74 | 75 | hadoop_is_process_alive $pid 76 | } 77 | 78 | hadoop_process_kill() { 79 | local pid="$1" # IN 80 | local signal="$2" # IN 81 | local second 82 | 83 | kill -$signal $pid 2>/dev/null 84 | 85 | # Wait a bit to see if the dirty job has really been done 86 | for second in 0 1 2 3 4 5 6 7 8 9 10; do 87 | if hadoop_is_process_alive "$pid"; then 88 | # Success 89 | return 0 90 | fi 91 | 92 | sleep 1 93 | done 94 | 95 | # Timeout 96 | return 1 97 | } 98 | hadoop_stop_pidfile() { 99 | local pidfile="$1" # IN 100 | local pid 101 | 102 | pid=`cat "$pidfile" 2>/dev/null` 103 | if [ "$pid" = '' ]; then 104 | # The file probably does not exist or is empty. Success 105 | return 0 106 | fi 107 | 108 | set -- $pid 109 | pid="$1" 110 | 111 | # First try the easy way 112 | if hadoop_process_kill "$pid" 15; then 113 | return 0 114 | fi 115 | 116 | # Otherwise try the hard way 117 | if hadoop_process_kill "$pid" 9; then 118 | return 0 119 | fi 120 | 121 | return 1 122 | } 123 | 124 | 125 | start() { 126 | su -s /bin/bash zookeeper -c "${DAEMON_SCRIPT} start" 127 | sleep 5 128 | } 129 | stop() { 130 | if hadoop_check_pidfile $PID_FILE ; then 131 | su -s /bin/bash zookeeper -c "${DAEMON_SCRIPT} stop" 132 | fi 133 | } 134 | 135 | case "$1" in 136 | start) 137 | start 138 | ;; 139 | stop) 140 | stop 141 | ;; 142 | force-stop) 143 | echo -n "Forcefully stopping $DESC: " 144 | hadoop_stop_pidfile $PID_FILE 145 | if hadoop_check_pidfile $PID_FILE ; then 146 | echo "$NAME." 147 | else 148 | echo " ERROR." 149 | fi 150 | ;; 151 | force-reload|condrestart|try-restart) 152 | # check wether $DAEMON is running. If so, restart 153 | hadoop_check_pidfile $PID_FILE && $0 restart 154 | ;; 155 | restart|reload) 156 | echo -n "Restarting $DESC: " 157 | stop 158 | [ -n "$DODTIME" ] && sleep $DODTIME 159 | $0 start 160 | ;; 161 | status) 162 | echo -n "$NAME is " 163 | if hadoop_check_pidfile $PID_FILE ; then 164 | echo "running" 165 | else 166 | echo "not running." 
167 | exit 1 168 | fi 169 | ;; 170 | init) 171 | if hadoop_check_pidfile $PID_FILE ; then 172 | echo "Error: $DESC is running. Stop it first." >&2 173 | exit 1 174 | else 175 | shift 176 | su -s /bin/bash zookeeper -c "zookeeper-server-initialize $*" 177 | fi 178 | ;; 179 | *) 180 | N=/etc/init.d/$NAME 181 | # echo "Usage: $N {start|stop|restart|reload|force-reload}" >&2 182 | echo "Usage: $N {start|stop|restart|force-reload|status|force-stop|condrestart|try-restart|init}" >&2 183 | 184 | exit 1 185 | ;; 186 | esac 187 | 188 | exit 0 189 | -------------------------------------------------------------------------------- /provision_rax.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | RAX_CREDS_FILE=${RAX_CREDS_FILE:-"${HOME}/.raxpub"} 4 | RAX_REGION=ALL 5 | 6 | if [ -n "${RAX_CREDS_FILE}" ] && [ -f "${RAX_CREDS_FILE}" ] 7 | then 8 | echo "Rackspace Cloud will be used." 9 | EXTRA_VARS="${EXTRA_VARS} credentials_file=$RAX_CREDS_FILE" 10 | else 11 | echo "RAX_CREDS_FILE could not be found." 12 | echo "Exiting..." 13 | exit 1 14 | fi 15 | 16 | ansible-playbook -i inventory/localhost -e "$EXTRA_VARS" playbooks/provision_rax.yml 17 | --------------------------------------------------------------------------------
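(Editorial note, not part of the repository files: provision_rax.sh above is driven through environment variables — RAX_CREDS_FILE defaults to ${HOME}/.raxpub, and because the script appends credentials_file to whatever is already in EXTRA_VARS, additional `-e` style overrides can be pre-seeded the same way. A hedged example invocation from the repository root; `nodes_count=3` is just an illustrative override, not a required setting:

```
# Run from the repository root so inventory/localhost and playbooks/ resolve.
RAX_CREDS_FILE="${HOME}/.raxpub" \
EXTRA_VARS="nodes_count=3" \
bash provision_rax.sh
```
)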