├── .gitignore ├── AUTHORS ├── CHANGELOG.rst ├── FORMULA ├── LICENSE ├── README.rst ├── VERSION ├── Vagrantfile ├── _modules └── vault.py ├── _pillar └── vault.py ├── _renderers └── vault.py ├── _states └── vault.py ├── _utils ├── data_structures.py └── vault.py ├── minion.conf ├── pillar.example ├── requirements.dev.txt ├── salt-top.example ├── scripts ├── gitfs_deps.sh ├── testinfra.sh └── vagrant_setup.sh ├── tests └── test_hashicorp-vault.py └── vault ├── configure.sls ├── files ├── README.rst ├── vault.service ├── vault.upstart └── vault_service.sh ├── init.sls ├── initialize.sls ├── install.sls ├── install_module_dependencies.sls ├── map.jinja ├── service.sls ├── templates └── README.rst ├── tests ├── init.sls ├── test_configure.sls └── test_install.sls └── upgrade.sls /.gitignore: -------------------------------------------------------------------------------- 1 | .vagrant/* 2 | *~ 3 | -------------------------------------------------------------------------------- /AUTHORS: -------------------------------------------------------------------------------- 1 | ======= 2 | Authors 3 | ======= 4 | 5 | * Tobias Macey (blarghmatey) 6 | -------------------------------------------------------------------------------- /CHANGELOG.rst: -------------------------------------------------------------------------------- 1 | vault formula 2 | ======================= 3 | 4 | 201702 (2017-02-14) 5 | ------------------- 6 | - Added an upgrade state 7 | - Refactored test organization 8 | - Moved installation out of `init.sls` 9 | 10 | 201605 (2016-05-25) 11 | ------------------- 12 | - Updated to be compatible with Salt version 2016.11 by adding `enforce_toplevel: False` 13 | - Updated to use official release of HVAC library 14 | 15 | 201605 (2016-05-25) 16 | ------------------- 17 | 18 | - First release 19 | -------------------------------------------------------------------------------- /FORMULA: -------------------------------------------------------------------------------- 1 | name: vault 2 | os: RedHat, CentOS, Ubuntu, Debian 3 | os_family: RedHat, Debian 4 | version: 201702 5 | release: 1 6 | summary: SaltStack formula to install and configure Vault from Hashicorp for managing secrets in your infrastructure 7 | description: SaltStack formula to install and configure Vault from Hashicorp for managing secrets in your infrastructure 8 | top_level_dir: vault 9 | recommended: 10 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | 2 | Copyright (c) 2016, Massachusetts Institute of Technology 3 | All rights reserved. 4 | 5 | Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 6 | 7 | * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 8 | 9 | * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 10 | 11 | * Neither the name of nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. 
12 | 13 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 14 | 15 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | =============== 2 | vault 3 | =============== 4 | 5 | SaltStack formula to install and configure Vault from Hashicorp for managing secrets in your infrastructure 6 | 7 | .. note:: 8 | 9 | This formula includes custom execution and state modules that must be synced to the target minion/master prior to executing the formula. These modules additionally require the `hvac` library to be installed for the extensions to be made available. 10 | 11 | 12 | Available states 13 | ================ 14 | 15 | .. contents:: 16 | :local: 17 | 18 | ``vault`` 19 | ------------------- 20 | 21 | Install and start the Vault server 22 | 23 | ``vault.configure`` 24 | ------------------------ 25 | 26 | Create a configuration file for the installed Vault server and restart the Vault service 27 | 28 | ``vault.initialize`` 29 | -------------------- 30 | 31 | Initialize and optionally unseal the installed Vault server. If PGP public keys or Keybase usernames are provided then the sealing keys will be regenerated after unsealing and then backed up to the Vault server. 32 | 33 | ``vault.upgrade`` 34 | ----------------- 35 | 36 | Do an in place upgrade of Vault to the version specified in the `vault:overrides:version` pillar value. 37 | 38 | ``vault.tests`` 39 | ---------------- 40 | 41 | Execute the tests for the associated state files. 42 | 43 | 44 | Template 45 | ======== 46 | 47 | This formula was created from a cookiecutter template. 48 | 49 | See https://github.com/mitodl/saltstack-formula-cookiecutter. 50 | -------------------------------------------------------------------------------- /VERSION: -------------------------------------------------------------------------------- 1 | 201702 2 | -------------------------------------------------------------------------------- /Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | # All Vagrant configuration is done below. The "2" in Vagrant.configure 5 | # configures the configuration version (we support older styles for 6 | # backwards compatibility). Please don't change it unless you know what 7 | # you're doing. 8 | Vagrant.configure(2) do |config| 9 | # The most common configuration options are documented and commented below. 10 | # For a complete reference, please see the online documentation at 11 | # https://docs.vagrantup.com. 12 | 13 | # Every Vagrant development environment requires a box. You can search for 14 | # boxes at https://atlas.hashicorp.com/search. 
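  # The boxes defined below cover the Debian and RedHat families listed in
  # FORMULA. When exercising the formula inside one of them, pillar data along
  # these lines drives the states described in README.rst (illustrative sketch
  # only; vault:overrides:version is the key read by vault.upgrade, and
  # pillar.example in this repository is the authoritative reference):
  #
  #   vault:
  #     overrides:
  #       version: x.y.z   # the Vault release to install or upgrade to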
15 | 16 | config.vm.define "debian" do |debian| 17 | debian.vm.box = "debian/stretch64" 18 | end 19 | 20 | config.vm.define "centos" do |centos| 21 | centos.vm.box = "centos/7" 22 | end 23 | 24 | config.vm.define "ubuntu" do |ubuntu| 25 | ubuntu.vm.box = "ubuntu/xenial64" 26 | end 27 | 28 | # Disable automatic box update checking. If you disable this, then 29 | # boxes will only be checked for updates when the user runs 30 | # `vagrant box outdated`. This is not recommended. 31 | # config.vm.box_check_update = false 32 | 33 | # Create a forwarded port mapping which allows access to a specific port 34 | # within the machine from a port on the host machine. In the example below, 35 | # accessing "localhost:8080" will access port 80 on the guest machine. 36 | # config.vm.network "forwarded_port", guest: 80, host: 8080 37 | 38 | # Create a private network, which allows host-only access to the machine 39 | # using a specific IP. 40 | # config.vm.network "private_network", ip: "192.168.33.10" 41 | 42 | # Create a public network, which generally matched to bridged network. 43 | # Bridged networks make the machine appear as another physical device on 44 | # your network. 45 | # config.vm.network "public_network" 46 | 47 | # Share an additional folder to the guest VM. The first argument is 48 | # the path on the host to the actual folder. The second argument is 49 | # the path on the guest to mount the folder. And the optional third 50 | # argument is a set of non-required options. 51 | # config.vm.synced_folder "../data", "/vagrant_data" 52 | config.vm.synced_folder ".", "/srv/salt/", type: "rsync", rsync__exclude: ".git" 53 | config.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: ".git" 54 | 55 | # Provider-specific configuration so you can fine-tune various 56 | # backing providers for Vagrant. These expose provider-specific options. 57 | # Example for VirtualBox: 58 | # 59 | # config.vm.provider "virtualbox" do |vb| 60 | # # Display the VirtualBox GUI when booting the machine 61 | # vb.gui = true 62 | # 63 | # # Customize the amount of memory on the VM: 64 | # vb.memory = "1024" 65 | # end 66 | # 67 | # View the documentation for the provider you are using for more 68 | # information on available options. 69 | 70 | # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies 71 | # such as FTP and Heroku are also available. See the documentation at 72 | # https://docs.vagrantup.com/v2/push/atlas.html for more information. 73 | # config.push.define "atlas" do |push| 74 | # push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME" 75 | # end 76 | 77 | # Enable provisioning with a shell script. Additional provisioners such as 78 | # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the 79 | # documentation for more information about their specific syntax and use. 
80 | # config.vm.provision "shell", inline: <<-SHELL 81 | # sudo apt-get update 82 | # sudo apt-get install -y apache2 83 | # SHELL 84 | config.vm.provision "shell", path: "scripts/vagrant_setup.sh" 85 | config.vm.provision "shell", path: "scripts/gitfs_deps.sh" 86 | config.vm.provision :salt do |salt| 87 | salt.minion_config = 'minion.conf' 88 | salt.bootstrap_options = '-U -Z' 89 | salt.masterless = true 90 | salt.run_highstate = true 91 | salt.colorize = true 92 | salt.verbose = true 93 | end 94 | 95 | end 96 | -------------------------------------------------------------------------------- /_modules/vault.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | This module provides methods for interacting with Hashicorp Vault via the HVAC 4 | library. 5 | """ 6 | from __future__ import absolute_import 7 | 8 | import logging 9 | from datetime import datetime, timedelta 10 | 11 | log = logging.getLogger(__name__) 12 | 13 | try: 14 | import requests 15 | DEPS_INSTALLED = True 16 | except ImportError: 17 | log.debug('Unable to import the requests library.') 18 | DEPS_INSTALLED = False 19 | 20 | __all__ = ['initialize', 'is_initialized'] 21 | 22 | SEVEN_DAYS = (7 * 24 * 60 * 60) 23 | 24 | 25 | def __init__(opts): 26 | if DEPS_INSTALLED: 27 | _register_functions() 28 | 29 | 30 | class InsufficientParameters(Exception): 31 | pass 32 | 33 | 34 | def __virtual__(): 35 | return DEPS_INSTALLED 36 | 37 | 38 | def initialize(secret_shares=5, secret_threshold=3, pgp_keys=None, 39 | keybase_users=None, unseal=True): 40 | success = True 41 | if keybase_users and isinstance(keybase_users, list): 42 | keybase_keys = [] 43 | for user in keybase_users: 44 | log.debug('Retrieving public keys for Keybase user {}.' 
45 | .format(user)) 46 | keybase_keys.append(__utils__['vault.get_keybase_pubkey'](user)) 47 | pgp_keys = pgp_keys or [] 48 | pgp_keys.extend(keybase_keys) 49 | if pgp_keys and len(pgp_keys) < secret_shares: 50 | raise InsufficientParameters('The number of PGP keys does not match' 51 | ' the number of secret shares.') 52 | client = __utils__['vault.build_client']() 53 | try: 54 | if pgp_keys and not unseal: 55 | secrets = client.initialize(secret_shares, secret_threshold, 56 | pgp_keys) 57 | else: 58 | secrets = client.initialize(secret_shares, secret_threshold) 59 | sealing_keys = secrets['keys'] 60 | root_token = secrets['root_token'] 61 | if unseal: 62 | __utils__['vault.wait_after_init'](client) 63 | log.debug('Unsealing Vault with generated sealing keys.') 64 | __utils__['vault.unseal'](sealing_keys) 65 | except __utils__['vault.vault_error']() as e: 66 | log.exception(e) 67 | success = False 68 | sealing_keys = None 69 | try: 70 | if pgp_keys and unseal: 71 | __utils__['vault.wait_after_init'](client) 72 | log.debug('Regenerating PGP encrypted keys and backing them up.') 73 | log.debug('PGP keys: {}'.format(pgp_keys)) 74 | client.token = root_token 75 | __utils__['vault.rekey'](secret_shares, secret_threshold, 76 | sealing_keys, pgp_keys, root_token) 77 | encrypted_sealing_keys = client.get_backed_up_keys()['keys'] 78 | if encrypted_sealing_keys: 79 | sealing_keys = encrypted_sealing_keys 80 | except __utils__['vault.vault_error']() as e: 81 | log.error('Vault was initialized but PGP encrypted keys were not able to' 82 | ' be generated after unsealing.') 83 | log.debug('Failed to rekey and backup the sealing keys.') 84 | log.exception(e) 85 | client.token = root_token 86 | return success, sealing_keys, root_token 87 | 88 | 89 | def scan_leases(prefix='', time_horizon=SEVEN_DAYS, send_events=True): 90 | """Scan all leases and generate an event for any that are near expiration 91 | 92 | :param prefix: The prefix path of leases that you want to scan 93 | :param time_horizon: How far in advance you want to be alerted for expiring leases (seconds) 94 | :param send_events: Boolean to specify whether to fire events for matched leases 95 | :returns: List of lease info for leases expiring soon 96 | :rtype: list 97 | 98 | """ 99 | client = __utils__['vault.build_client']() 100 | try: 101 | prefixes = client.list('sys/leases/lookup/{0}'.format(prefix)) 102 | except __utils__['vault.vault_error']() as e: 103 | log.exception('Failed to retrieve lease information for prefix %s', 104 | prefix) 105 | return [] 106 | if prefixes: 107 | prefixes = prefixes.get('data', {}).get('keys', []) 108 | else: 109 | prefixes = [] 110 | expiring_leases = [] 111 | for node in prefixes: 112 | if node.endswith('/'): 113 | log.debug('Recursing into path %s for prefix %s', node, prefix) 114 | expiring_leases.extend(scan_leases('{0}/{1}'.format( 115 | prefix.strip('/'), node), time_horizon)) 116 | else: 117 | log.debug('Retrieving lease information for %s/%s', prefix, node) 118 | try: 119 | lease_info = client.write( 120 | 'sys/leases/lookup', 121 | lease_id='{0}/{1}'.format(prefix.strip('/'), node)) 122 | except __utils__['vault.vault_error']() as e: 123 | log.exception('Failed to retrieve lease information for %s', 124 | '{0}/{1}'.format(prefix.strip('/'), node)) 125 | continue 126 | lease_expiry = datetime.strptime( 127 | lease_info.get('data', {}).get('expire_time')[:-4], 128 | '%Y-%m-%dT%H:%M:%S.%f') 129 | lease_lifetime = lease_expiry - datetime.utcnow() 130 | if lease_lifetime < timedelta(seconds=time_horizon): 
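                # A lease expiring inside the time horizon is collected below
                # and, when send_events is True, announced on the Salt event
                # bus under the tag vault/lease/expiring/<prefix>/<node>.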
131 | if send_events: 132 | __salt__['event.send']( 133 | 'vault/lease/expiring/{0}/{1}'.format(prefix, node), 134 | data=lease_info.get('data', {})) 135 | expiring_leases.append(lease_info.get('data', {})) 136 | return expiring_leases 137 | 138 | 139 | def clean_expired_leases(prefix='', time_horizon=0): 140 | """Scan all leases and delete any that have an expiration beyond the specified time horizon 141 | 142 | :param prefix: The prefix path of leases that you want to scan 143 | :param time_horizon: How far in advance you want to be alerted for expiring leases (seconds) 144 | :returns: List of lease info for leases that were deleted 145 | :rtype: list 146 | 147 | """ 148 | client = __utils__['vault.build_client']() 149 | expired_leases = scan_leases(prefix, time_horizon, send_events=False) 150 | for index, lease in enumerate(expired_leases): 151 | try: 152 | client.write('sys/leases/revoke', lease_id=lease['id']) 153 | except __utils__['vault.vault_error'](): 154 | log.exception('Failed to revoke lease %s', lease['id']) 155 | expired_leases.pop(index) 156 | continue 157 | return expired_leases 158 | 159 | 160 | def check_cached_lease(path, cache_prefix='', **kwargs): 161 | """Check whether cached leases have expired and if they are renewable. 162 | 163 | :param path: path to the full vault cache path 164 | :param cache_prefix: usually the minion_id 165 | :param **kwargs: other data that the function might require 166 | :rtype: list, dict 167 | 168 | """ 169 | lease_valid = None 170 | cache_base_path = __opts__.get('vault.cache_base_path', 171 | 'secret/pillar_cache') 172 | cache_path = '/'.join((cache_base_path, cache_prefix, path)) 173 | renewal_threshold = __opts__.get('vault.lease_renewal_threshold', 174 | {'days': 7}) 175 | vault_client = __utils__['vault.build_client']() 176 | 177 | vault_data = vault_client.read(cache_path) 178 | 179 | if vault_data: 180 | vault_data = vault_data['data']['value'] 181 | lease = vault_client.get_lease(vault_data['lease_id']) 182 | 183 | if (lease and timedelta(seconds=lease['data']['ttl']) > 184 | timedelta(**renewal_threshold)): 185 | lease_valid = True 186 | else: 187 | lease_valid = False 188 | vault_client.delete(cache_path) 189 | vault_data = None 190 | 191 | if not vault_data or not lease_valid: 192 | __salt__['event.send']( 193 | 'vault/cache/miss/{0}'.format(cache_path), 194 | data={'message': 'The cached lease at {0} is either invalid or ' 195 | 'expired and not renewable. It will be ' 196 | 'regenerated and cached with new data.' 197 | .format(cache_path)}) 198 | return cache_path, vault_client, vault_data 199 | 200 | 201 | def cached_read(path, cache_prefix='', **kwargs): 202 | """Generate new secret through vault read function and copy it to the vault 203 | cache path. 
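    A minimal usage sketch (the credential path is borrowed from the pillar
    example in _pillar/vault.py and the minion id is an assumption)::

        salt 'minion01' vault.cached_read rabbitmq/creds/user cache_prefix=minion01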
204 | 205 | :param path: path to the full vault cache path 206 | :param cache_prefix: usually the minion_id 207 | :param **kwargs: other data that the function might require 208 | :rtype: list, dict 209 | 210 | """ 211 | cache_path, vault_client, vault_data = check_cached_lease(path, 212 | cache_prefix=cache_prefix, 213 | **kwargs) 214 | if not vault_data: 215 | vault_data = vault_client.read(path) 216 | if vault_data is None: 217 | log.info("Failed to load Vault data from path: %s", path) 218 | vault_data['created'] = datetime.utcnow().isoformat() 219 | vault_client.write(cache_path, value=vault_data) 220 | vault_data = vault_client.read(cache_path)['data']['value'] 221 | 222 | return vault_data 223 | 224 | 225 | def cached_write(path, cache_prefix='', **kwargs): 226 | """Generate new secret through vault write function and copy it to the vault 227 | cache path. 228 | 229 | :param path: path to the full vault cache path 230 | :param cache_prefix: usually the minion_id 231 | :param **kwargs: other data that the function might require 232 | :rtype: list, dict 233 | 234 | """ 235 | cache_path, vault_client, vault_data = check_cached_lease(path, 236 | cache_prefix=cache_prefix, 237 | **kwargs) 238 | if not vault_data: 239 | vault_data = vault_client.write(path, **kwargs) 240 | if vault_data is None: 241 | log.info("Failed to load Vault data from path: %s", path) 242 | vault_data['created'] = datetime.utcnow().isoformat() 243 | vault_client.write(cache_path, value=vault_data) 244 | vault_data = vault_client.read(cache_path)['data']['value'] 245 | 246 | return vault_data 247 | 248 | 249 | def list_cache_paths(prefix=None, cache_filter=''): 250 | client = __utils__['vault.build_client']() 251 | if not prefix: 252 | prefix = __opts__.get('vault.cache_base_path', 253 | 'secret/pillar_cache') 254 | 255 | caches = client.list( 256 | prefix 257 | ).get('data', {}).get('keys', []) 258 | cache_paths = [] 259 | for node in caches: 260 | if node.endswith('/'): 261 | log.debug('Recursing into path %s for prefix %s', node, prefix) 262 | cache_paths.extend(list_cache_paths(prefix='{0}/{1}'.format( 263 | prefix.strip('/'), node), cache_filter=cache_filter)) 264 | else: 265 | cache_paths.append('{0}/{1}'.format(prefix.strip('/'), node)) 266 | 267 | cache_paths = [path for path in cache_paths if cache_filter in path] 268 | return cache_paths 269 | 270 | 271 | def list_cached_data(prefix=None, cache_filter='', attribute_path=''): 272 | client = __utils__['vault.build_client']() 273 | cache_paths = list_cache_paths(prefix, cache_filter) 274 | cached_data = [] 275 | for path in cache_paths: 276 | cache_data = client.read(path) 277 | if attribute_path: 278 | cache_data = __utils__['data.traverse_dict'](cache_data, 279 | attribute_path) 280 | cached_data.append((path, cache_data)) 281 | return cached_data 282 | 283 | 284 | def purge_cache_data(cache_filter): 285 | """Scan cached leases and delete any that match the given prefix 286 | 287 | :param prefix: The prefix path of cached leases that you want to purge 288 | :returns: List of lease ids that were deleted 289 | :rtype: list 290 | 291 | """ 292 | client = __utils__['vault.build_client']() 293 | cached_leases = list_cache_paths(cache_filter=cache_filter) 294 | for path in cached_leases: 295 | client.delete(path) 296 | 297 | return cached_leases 298 | 299 | 300 | def _register_functions(): 301 | log.info('Utils object is: {0}'.format(__utils__)) 302 | for method_name in dir(__utils__['vault.vault_client']()): 303 | if not method_name.startswith('_'): 304 | method 
= getattr(__utils__['vault.vault_client'](), method_name) 305 | if not isinstance(method, property): 306 | if method_name == 'list': 307 | method_name = 'list_values' 308 | globals()[method_name] = __utils__['vault.bind_client'](method) 309 | -------------------------------------------------------------------------------- /_pillar/vault.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | ''' 4 | Retrieve data from Hashicorp Vault with the option of caching returned data 5 | 6 | To retrieve data from Vault you can write your pillar as follows: 7 | 8 | .. code:: yaml 9 | 10 | foo_pillar: 11 | - static_value: __vault__::secret/value[data.value] 12 | - dynamic_user: __vault__:cached:rabbitmq/creds/user[data.username] 13 | - dynamic_password: __vault__:cached:rabbitmq/creds/user[data.password] 14 | 15 | ''' 16 | 17 | from __future__ import absolute_import, print_function, unicode_literals 18 | import logging 19 | 20 | import salt.loader 21 | 22 | log = logging.getLogger(__name__) 23 | 24 | 25 | def ext_pillar(minion_id, pillar, *args, **kwargs): 26 | render_function = salt.loader.render(__opts__, __salt__).get("vault") 27 | return render_function(pillar, cache_prefix=minion_id) 28 | -------------------------------------------------------------------------------- /_renderers/vault.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | ''' 4 | Supported Syntax 5 | 6 | __vault__:cache:path/to/data>attribute>subattribute 7 | __vault__:gen_if_missing[32]:path/to/key>attribute 8 | ''' 9 | import logging 10 | from datetime import datetime, timedelta 11 | 12 | import six 13 | import salt.loader 14 | 15 | log = logging.getLogger(__name__) 16 | local_cache = {} 17 | 18 | __utils__ = {} 19 | 20 | 21 | def __init__(opts): 22 | global __utils__ 23 | __utils__.update(salt.loader.utils(opts)) 24 | 25 | 26 | def _read(path, *args, **kwargs): 27 | vault_client = __utils__['vault.build_client']() 28 | try: 29 | vault_data = local_cache[path] 30 | except KeyError: 31 | vault_data = local_cache[path] = vault_client.read(path) 32 | return vault_data 33 | 34 | 35 | def _gen_if_missing(path, string_length=42, **kwargs): 36 | vault_client = __utils__['vault.build_client']() 37 | try: 38 | vault_data = local_cache[path] 39 | except KeyError: 40 | vault_data = local_cache[path] = vault_client.read(path) 41 | 42 | if not vault_data: 43 | new_value = __salt__['random.get_str'](string_length) 44 | vault_client.write(path, value=new_value) 45 | vault_data = local_cache[path] = vault_client.read(path) 46 | 47 | return vault_data 48 | 49 | 50 | def dispatch(func_name): 51 | func_dict = { 52 | '': _read, 53 | 'cache': __salt__['vault.cached_read'], 54 | 'gen_if_missing': _gen_if_missing 55 | } 56 | return func_dict[func_name] 57 | 58 | 59 | def leaf_filter(leaf_data): 60 | return (isinstance(leaf_data, six.string_types) 61 | and leaf_data.startswith('__vault__')) 62 | 63 | 64 | def render(data, 65 | saltenv='base', 66 | sls='', 67 | argline='', 68 | cache_prefix='', 69 | **kwargs): 70 | # Traverse data structure to leaf nodes 71 | for leaf_node, location, container in __utils__[ 72 | 'data_structures.traverse_leaf_nodes']( 73 | data, leaf_filter): 74 | # Parse leaf nodes 75 | instructions, path = leaf_node.split(':', 2)[1:] 76 | # Replace values in matching leaf nodes 77 | parsed_path = path.split('>') 78 | log.debug("Trying to load data from Vault at path %s", parsed_path[0]) 79 | vault_data = 
dispatch(instructions)(parsed_path[0],
 80 |                                             cache_prefix=cache_prefix,
 81 |                                             **kwargs)
 82 |         container[location] = __utils__['data.traverse_dict'](
 83 |             vault_data, ':'.join(parsed_path[1:]))
 84 |     return data
 85 | 
--------------------------------------------------------------------------------
/_states/vault.py:
--------------------------------------------------------------------------------
 1 | from __future__ import absolute_import
 2 | 
 3 | import logging
 4 | import os
 5 | 
 6 | import salt.config
 7 | import salt.syspaths
 8 | import salt.utils
 9 | import salt.exceptions
10 | 
11 | log = logging.getLogger(__name__)
12 | 
13 | try:
14 |     import requests
15 |     DEPS_INSTALLED = True
16 | except ImportError:
17 |     log.debug('Unable to import the requests library.')
18 |     DEPS_INSTALLED = False
19 | 
20 | __all__ = ['initialized']
21 | 
22 | 
23 | def __virtual__():
24 |     return DEPS_INSTALLED
25 | 
26 | 
27 | def initialized(name, secret_shares=5, secret_threshold=3, pgp_keys=None,
28 |                 keybase_users=None, unseal=True):
29 |     """
30 |     Ensure that the vault instance has been initialized and run the
31 |     initialization if it has not.
32 | 
33 |     :param name: The id used for the state definition
34 |     :param secret_shares: The number of secret shares to use for the
35 |                           initialization key
36 |     :param secret_threshold: The number of keys required to unseal the vault
37 |     :param pgp_keys: List of PGP public key strings to use for encrypting
38 |                      the sealing keys
39 |     :param keybase_users: List of Keybase users to retrieve public PGP keys
40 |                           for to use in encrypting the sealing keys
41 |     :param unseal: Whether to unseal the vault during initialization
42 |     :returns: Result of the execution
43 |     :rtype: dict
44 |     """
45 |     ret = {'name': name,
46 |            'comment': '',
47 |            'result': '',
48 |            'changes': {}}
49 |     initialized = __salt__['vault.is_initialized']()
50 | 
51 |     if initialized:
52 |         ret['result'] = True
53 |         ret['comment'] = 'Vault is already initialized'
54 |     elif __opts__['test']:
55 |         ret['result'] = None
56 |         ret['comment'] = 'Vault will be initialized.'
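    # Illustrative SLS usage of this state (the state id and values are
    # assumptions; pgp_keys/keybase_users are optional, see the docstring above):
    #
    #   initialize_vault:
    #     vault.initialized:
    #       - secret_shares: 5
    #       - secret_threshold: 3
    #       - keybase_users:
    #         - some_keybase_user
    #       - unseal: True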
57 | else: 58 | success, sealing_keys, root_token = __salt__['vault.initialize']( 59 | secret_shares, secret_threshold, pgp_keys, keybase_users, unseal 60 | ) if not initialized else (True, {}, '') 61 | ret['result'] = success 62 | ret['changes'] = { 63 | 'root_credentials': { 64 | 'new': { 65 | 'sealing_keys': sealing_keys, 66 | 'root_token': root_token 67 | }, 68 | 'old': {} 69 | } 70 | } 71 | ret['comment'] = 'Vault has {}initialized'.format( 72 | '' if success else 'failed to be ') 73 | return ret 74 | 75 | 76 | def auth_backend_enabled(name, backend_type, description='', mount_point=None): 77 | """ 78 | Ensure that the named backend has been enabled 79 | 80 | :param name: ID for state definition 81 | :param backend_type: The type of authentication backend to enable 82 | :param description: The description to set for the backend 83 | :param mount_point: The root path at which the backend will be mounted 84 | :returns: The result of the state execution 85 | :rtype: dict 86 | """ 87 | backends = __salt__['vault.list_auth_backends']() 88 | setting_dict = {'type': backend_type, 'description': description} 89 | backend_enabled = False 90 | ret = {'name': name, 91 | 'comment': '', 92 | 'result': '', 93 | 'changes': {'old': backends}} 94 | 95 | for path, settings in __salt__['vault.list_auth_backends']().get('data', {}).items(): 96 | if (path.strip('/') == mount_point or backend_type and 97 | settings['type'] == backend_type): 98 | backend_enabled = True 99 | 100 | if backend_enabled: 101 | ret['comment'] = ('The {auth_type} backend mounted at {mount} is already' 102 | ' enabled.'.format(auth_type=backend_type, 103 | mount=mount_point)) 104 | ret['result'] = True 105 | elif __opts__['test']: 106 | ret['result'] = None 107 | else: 108 | try: 109 | __salt__['vault.enable_auth_backend'](backend_type, 110 | description=description, 111 | mount_point=mount_point) 112 | ret['result'] = True 113 | ret['changes']['new'] = __salt__[ 114 | 'vault.list_auth_backends']() 115 | except __utils__['vault.vault_error']() as e: 116 | ret['result'] = False 117 | log.exception(e) 118 | ret['comment'] = ('The {backend} has been successfully mounted at ' 119 | '{mount}.'.format(backend=backend_type, 120 | mount=mount_point)) 121 | return ret 122 | 123 | 124 | def audit_backend_enabled(name, backend_type, description='', options=None, 125 | backend_name=None): 126 | if not backend_name: 127 | backend_name = backend_type 128 | backends = __salt__['vault.list_audit_backends']().get('data', {}) 129 | setting_dict = {'type': backend_type, 'description': description} 130 | backend_enabled = False 131 | ret = {'name': name, 132 | 'comment': '', 133 | 'result': '', 134 | 'changes': {'old': backends}} 135 | 136 | for path, settings in __salt__['vault.list_audit_backends']().items(): 137 | if (path.strip('/') == backend_type and 138 | settings['type'] == backend_type): 139 | backend_enabled = True 140 | 141 | if backend_enabled: 142 | ret['comment'] = ('The {audit_type} backend is already enabled.' 
143 | .format(audit_type=backend_type)) 144 | ret['result'] = True 145 | elif __opts__['test']: 146 | ret['result'] = None 147 | else: 148 | try: 149 | __salt__['vault.enable_audit_backend'](backend_type, 150 | description=description, 151 | name=backend_name) 152 | ret['result'] = True 153 | ret['changes']['new'] = __salt__[ 154 | 'vault.list_audit_backends']() 155 | ret['comment'] = ('The {backend} audit backend has been ' 156 | 'successfully enabled.'.format( 157 | backend=backend_type)) 158 | except __utils__['vault.vault_error']() as e: 159 | ret['result'] = False 160 | log.exception(e) 161 | return ret 162 | 163 | 164 | def secret_backend_enabled(name, backend_type, description='', mount_point=None, 165 | connection_config_path=None, connection_config=None, 166 | lease_max=None, lease_default=None, ttl_max=None, 167 | ttl_default=None, override=False): 168 | """ 169 | 170 | :param name: The ID for the state definition 171 | :param backend_type: The type of the backend to be enabled (e.g. MySQL) 172 | :param description: The description to set for the enabled backend 173 | :param mount_point: The root path for the backend 174 | :param connection_config_path: The full path to the endpoint used for 175 | configuring the connection (needed for 176 | e.g. Consul) 177 | :param connection_config: The configuration settings for the backend 178 | connection 179 | :param lease_max: The maximum allowed lease for credentials retrieved from 180 | the backend 181 | :param lease_default: The default allowed lease for credentials retrieved from 182 | the backend 183 | :param ttl_max: The maximum TTL for a lease generated by the backend. Uses 184 | the mounts//tune endpoint. 185 | :param ttl_default: The default TTL for a lease generated by the backend. 186 | Uses the mounts//tune endpoint. 
187 | :param override: Specifies whether to override the settings for an existing mount 188 | :returns: The result of the execution 189 | :rtype: dict 190 | 191 | """ 192 | backends = __salt__['vault.list_secret_backends']().get('data', {}) 193 | backend_enabled = False 194 | ret = {'name': name, 195 | 'comment': '', 196 | 'result': '', 197 | 'changes': {'old': backends}} 198 | 199 | for path, settings in __salt__['vault.list_secret_backends']().get('data', {}).items(): 200 | if (path.strip('/') == mount_point and 201 | settings['type'] == backend_type): 202 | backend_enabled = True 203 | 204 | if backend_enabled and not override: 205 | ret['comment'] = ('The {secret_type} backend mounted at {mount} is already' 206 | ' enabled.'.format(secret_type=backend_type, 207 | mount=mount_point)) 208 | ret['result'] = True 209 | elif __opts__['test']: 210 | ret['result'] = None 211 | else: 212 | try: 213 | __salt__['vault.enable_secret_backend'](backend_type, 214 | description=description, 215 | mount_point=mount_point) 216 | ret['result'] = True 217 | ret['changes']['new'] = __salt__[ 218 | 'vault.list_secret_backends']() 219 | except __utils__['vault.vault_error']() as e: 220 | ret['result'] = False 221 | log.exception(e) 222 | if connection_config: 223 | if not connection_config_path: 224 | connection_config_path = '{mount}/config/connection'.format( 225 | mount=mount_point) 226 | try: 227 | __salt__['vault.write'](connection_config_path, 228 | **connection_config) 229 | except __utils__['vault.vault_error']() as e: 230 | ret['comment'] += ('The backend was enabled but the connection ' 231 | 'could not be configured\n') 232 | log.exception(e) 233 | raise salt.exceptions.CommandExecutionError(str(e)) 234 | if ttl_max or ttl_default: 235 | ttl_config_path = 'sys/mounts/{mount}/tune'.format( 236 | mount=mount_point) 237 | if ttl_default > ttl_max: 238 | raise salt.exceptions.SaltInvocationError( 239 | 'The specified default ttl is longer than the maximum') 240 | if ttl_max and not ttl_default: 241 | ttl_default = ttl_max 242 | if ttl_default and not ttl_max: 243 | ttl_max = ttl_default 244 | try: 245 | log.debug('Tuning the mount ttl to be: Max={ttl_max}, ' 246 | 'Default={ttl_default}'.format( 247 | ttl_max=ttl_max, ttl_default=ttl_default)) 248 | __salt__['vault.write'](ttl_config_path, 249 | default_lease_ttl=ttl_default, 250 | max_lease_ttl=ttl_max) 251 | except __utils__['vault.vault_error']() as e: 252 | ret['comment'] += ('The backend was enabled but the connection ' 253 | 'ttl could not be tuned\n'.format(e)) 254 | log.exception(e) 255 | raise salt.exceptions.CommandExecutionError(str(e)) 256 | if lease_max or lease_default: 257 | lease_config_path = '{mount}/config/lease'.format( 258 | mount=mount_point) 259 | if lease_default > lease_max: 260 | raise salt.exceptions.SaltInvocationError( 261 | 'The specified default lease is longer than the maximum') 262 | if lease_max and not lease_default: 263 | lease_default = lease_max 264 | if lease_default and not lease_max: 265 | lease_max = lease_default 266 | try: 267 | log.debug('Tuning the lease config to be: Max={lease_max}, ' 268 | 'Default={lease_default}'.format( 269 | lease_max=lease_max, lease_default=lease_default)) 270 | __salt__['vault.write'](lease_config_path, 271 | ttl=lease_default, 272 | max_ttl=lease_max) 273 | except __utils__['vault.vault_error']() as e: 274 | ret['comment'] += ('The backend was enabled but the lease ' 275 | 'length could not be configured\n'.format(e)) 276 | log.exception(e) 277 | raise 
salt.exceptions.CommandExecutionError(str(e)) 278 | ret['comment'] += ('The {backend} has been successfully mounted at ' 279 | '{mount}.'.format(backend=backend_type, 280 | mount=mount_point)) 281 | return ret 282 | 283 | def app_id_created(name, app_id, policies, display_name=None, 284 | mount_point='app-id', **kwargs): 285 | ret = {'name': app_id, 286 | 'comment': '', 287 | 'result': False, 288 | 'changes': {}} 289 | current_id = __salt__['vault.get_app_id'](app_id, mount_point) 290 | if (current_id.get('data') is not None and 291 | current_id['data'].get('policies') == policies): 292 | ret['result'] = True 293 | ret['comment'] = ('The app-id {app_id} exists with the specified ' 294 | 'policies'.format(app_id=app_id)) 295 | elif __opts__['test']: 296 | ret['result'] = None 297 | if current_id['data'] is None: 298 | ret['changes']['old'] = {} 299 | ret['comment'] = 'The app-id {app_id} will be created.'.format( 300 | app_id=app_id) 301 | elif current_id['data']['policies'] != policies: 302 | ret['changes']['old'] = current_id 303 | ret['comment'] = ('The app-id {app_id} will have its policies ' 304 | 'updated'.format(app_id=app_id)) 305 | else: 306 | try: 307 | new_id = __salt__['vault.create_app_id'](app_id, 308 | policies, 309 | display_name, 310 | mount_point, 311 | **kwargs) 312 | ret['result'] = True 313 | ret['comment'] = ('Successfully created app-id {app_id}'.format( 314 | app_id=app_id)) 315 | ret['changes'] = { 316 | 'old': current_id, 317 | 'new': __salt__['vault.get_app_id'](app_id, mount_point) 318 | } 319 | except __utils__['vault.vault_error']() as e: 320 | log.exception(e) 321 | ret['result'] = False 322 | ret['comment'] = ('Encountered an error while attempting to ' 323 | 'create app id.') 324 | return ret 325 | 326 | 327 | def policy_present(name, rules): 328 | """ 329 | Ensure that the named policy exists and has the defined rules set 330 | 331 | :param name: The name of the policy 332 | :param rules: The rules to set on the policy 333 | :returns: The result of the state execution 334 | :rtype: dict 335 | """ 336 | current_policy = __salt__['vault.get_policy'](name, parse=True) 337 | ret = {'name': name, 338 | 'comment': '', 339 | 'result': False, 340 | 'changes': {}} 341 | if current_policy == rules: 342 | ret['result'] = True 343 | ret['comment'] = ('The {policy_name} policy already exists with the ' 344 | 'given rules.'.format(policy_name=name)) 345 | elif __opts__['test']: 346 | ret['result'] = None 347 | if current_policy: 348 | ret['changes']['old'] = current_policy 349 | ret['changes']['new'] = rules 350 | ret['comment'] = ('The {policy_name} policy will be {suffix}.'.format( 351 | policy_name=name, 352 | suffix='updated' if current_policy else 'created')) 353 | else: 354 | try: 355 | __salt__['vault.set_policy'](name, rules) 356 | ret['result'] = True 357 | ret['comment'] = ('The {policy_name} policy was successfully ' 358 | 'created/updated.'.format(policy_name=name)) 359 | ret['changes']['old'] = current_policy 360 | ret['changes']['new'] = rules 361 | except __utils__['vault.vault_error']() as e: 362 | log.exception(e) 363 | ret['comment'] = ('The {policy_name} policy failed to be ' 364 | 'created/updated'.format(policy_name=name)) 365 | return ret 366 | 367 | def policy_absent(name): 368 | """ 369 | Ensure that the named policy is not present 370 | 371 | :param name: The name of the policy to be deleted 372 | :returns: The result of the state execution 373 | :rtype: dict 374 | """ 375 | current_policy = __salt__['vault.get_policy'](name, parse=True) 376 | 
ret = {'name': name, 377 | 'comment': '', 378 | 'result': False, 379 | 'changes': {}} 380 | if not current_policy: 381 | ret['result'] = True 382 | ret['comment'] = ('The {policy_name} policy is not present.'.format( 383 | policy_name=name)) 384 | elif __opts__['test']: 385 | ret['result'] = None 386 | if current_policy: 387 | ret['changes']['old'] = current_policy 388 | ret['changes']['new'] = {} 389 | ret['comment'] = ('The {policy_name} policy {suffix}.'.format( 390 | policy_name=name, 391 | suffix='will be deleted' if current_policy else 'is not present')) 392 | else: 393 | try: 394 | __salt__['vault.delete_policy'](name) 395 | ret['result'] = True 396 | ret['comment'] = ('The {policy_name} policy was successfully ' 397 | 'deleted.') 398 | ret['changes']['old'] = current_policy 399 | ret['changes']['new'] = {} 400 | except __utils__['vault.vault_error']() as e: 401 | log.exception(e) 402 | ret['comment'] = ('The {policy_name} policy failed to be ' 403 | 'created/updated'.format(policy_name=name)) 404 | return ret 405 | 406 | 407 | def role_present(name, mount_point, options, override=False): 408 | """ 409 | Ensure that the named role exists. If it does not already exist then it 410 | will be created with the specified options. 411 | 412 | :param name: The name of the role 413 | :param mount_point: The mount point of the target backend 414 | :param options: A dictionary of the configuration options for the role 415 | :param override: Write the role definition even if there is already one 416 | present. Useful if the existing role doesn't match the 417 | desired state. 418 | :returns: Result of executing the state 419 | :rtype: dict 420 | """ 421 | current_role = __salt__['vault.read']('{mount}/roles/{name}'.format( 422 | mount=mount_point, name=name)) 423 | ret = {'name': name, 424 | 'comment': '', 425 | 'result': False, 426 | 'changes': {}} 427 | if current_role and not override: 428 | ret['result'] = True 429 | ret['comment'] = ('The {role} role already exists with the ' 430 | 'given rules.'.format(role=name)) 431 | elif __opts__['test']: 432 | ret['result'] = None 433 | if current_role: 434 | ret['changes']['old'] = current_role 435 | ret['changes']['new'] = None 436 | ret['comment'] = ('The {role} role {suffix}.'.format( 437 | role=name, 438 | suffix='already exists' if current_role else 'will be created')) 439 | else: 440 | try: 441 | response = __salt__['vault.write']('{mount}/roles/{role}'.format( 442 | mount=mount_point, role=name), **options) 443 | ret['result'] = True 444 | ret['comment'] = ('The {role} role was successfully ' 445 | 'created.'.format(role=name)) 446 | ret['changes']['old'] = current_role 447 | ret['changes']['new'] = response 448 | except __utils__['vault.vault_error']() as e: 449 | log.exception(e) 450 | ret['comment'] = ('The {role} role failed to be ' 451 | 'created'.format(role=name)) 452 | return ret 453 | 454 | 455 | def role_absent(name, mount_point): 456 | """ 457 | Ensure that the named role does not exist. 
458 | 459 | :param name: The name of the role to be deleted if present 460 | :param mount_point: The mount point of the target backend 461 | :returns: The result of the stae execution 462 | :rtype: dict 463 | """ 464 | current_role = __salt__['vault.read']('{mount}/roles/{name}'.format( 465 | mount=mount_point, name=name)) 466 | ret = {'name': name, 467 | 'comment': '', 468 | 'result': False, 469 | 'changes': {}} 470 | if current_role: 471 | ret['changes']['old'] = current_role 472 | ret['changes']['new'] = None 473 | else: 474 | ret['changes'] = None 475 | ret['result'] = True 476 | if __opts__['test']: 477 | ret['result'] = None 478 | return ret 479 | try: 480 | __salt__['vault.delete']('{mount}/roles/{name}'.format( 481 | mount=mount_point, name=name)) 482 | ret['result'] = True 483 | except __utils__['vault.vault_error']() as e: 484 | log.exception(e) 485 | raise salt.exceptions.SaltInvocationError(e) 486 | return ret 487 | 488 | def ec2_role_created(name, 489 | role, 490 | bound_ami_id=None, 491 | bound_iam_role_arn=None, 492 | bound_account_id=None, 493 | bound_iam_instance_profile_arn=None, 494 | role_tag=None, 495 | ttl=None, 496 | max_ttl=None, 497 | policies=None, 498 | allow_instance_migration=False, 499 | disallow_reauthentication=False, 500 | period="", 501 | update_role=False, 502 | **kwargs): 503 | """ 504 | Ensure that the specified EC2 role exists so that it can be used for 505 | authenticating with the Vault EC2 backend. 506 | 507 | :param name: Contains the id of the state definition 508 | :param role: The name of the EC2 role 509 | :param bound_ami_id: The AMI ID to bind the role to 510 | :param bound_iam_role_arn: The IAM role ARN to bind the Vault EC2 role to 511 | :param bound_account_id: The account ID to bind the role to 512 | :param bound_iam_instance_profile_arn: The instance profile ARN to bind the 513 | role to 514 | :param role_tag: The EC2 tag to use for specifying role access 515 | :param ttl: The ttl of the credentials granted when authenticating with 516 | this role 517 | :param max_ttl: The ttl of the credentials granted when authenticating with 518 | this role 519 | :param policies: The policies to grant on this role 520 | :param allow_instance_migration: Whether to allow for instance migration 521 | :param disallow_reauthentication: Whether this role should allow 522 | reauthenticating against Vault 523 | :returns: The result of the execution 524 | :rtype: dict 525 | """ 526 | try: 527 | current_role = __salt__['vault.get_ec2_role'](role) 528 | except __utils__['vault.vault_error'](): 529 | current_role = None 530 | 531 | role_params = dict( 532 | role=role, 533 | bound_ami_id=bound_ami_id, 534 | role_tag=role_tag, 535 | bound_iam_role_arn=bound_iam_role_arn, 536 | bound_account_id=bound_account_id, 537 | bound_iam_instance_profile_arn=bound_iam_instance_profile_arn, 538 | ttl=ttl, max_ttl=max_ttl, 539 | policies=','.join(policies), 540 | allow_instance_migration=allow_instance_migration, 541 | disallow_reauthentication=disallow_reauthentication, 542 | period=period, 543 | **kwargs 544 | ) 545 | 546 | current_params = (current_role or {}).get('data', {}) 547 | 548 | ret = {'name': name, 549 | 'comment': '', 550 | 'result': False, 551 | 'changes': {}} 552 | 553 | if current_role and not update_role: 554 | ret['result'] = True 555 | ret['comment'] = 'The {0} role already exists'.format(role) 556 | elif __opts__['test']: 557 | ret['result'] = None 558 | if current_role: 559 | ret['comment'] = ('The {0} role will be updated with the given ' 560 | 
'parameters').format(role) 561 | ret['changes']['old'] = current_params 562 | ret['changes']['new'] = role_params 563 | else: 564 | ret['comment'] = ('The {0} role will be created') 565 | else: 566 | try: 567 | __salt__['vault.create_vault_ec2_client_configuration']() 568 | __salt__['vault.create_ec2_role']( 569 | **{k: str(v) for k, v in role_params.items() if not v is None}) 570 | ret['result'] = True 571 | ret['comment'] = 'Successfully created the {0} role.'.format(role) 572 | ret['changes']['new'] = __salt__['vault.get_ec2_role'](role) 573 | ret['changes']['old'] = current_role or {} 574 | except __utils__['vault.vault_error']() as e: 575 | log.exception(e) 576 | ret['result'] = False 577 | ret['comment'] = 'Failed to create the {0} role.'.format(role) 578 | return ret 579 | 580 | 581 | def ec2_minion_authenticated(name, role, pkcs7=None, nonce=None, 582 | is_master=False, client_conf_files=None): 583 | """Authenticate a minion using EC2 auth and write the client token to the 584 | configuration file to be used for subsequent calls to vault. 585 | 586 | :param name: String, unused 587 | :param role: The role that the minion is to be authenticated against 588 | :param pkcs7: The pkcs7 key for the minion, will be fetched from EC2 589 | metadata if not passed to the function. 590 | :param nonce: An arbitrary string to be used for future authentication attempts. 591 | Will be generated automatically by Vault if not provided. 592 | :param is_master: Boolean value to determine whether the configuration file 593 | needs to be written out for the master as well. 594 | :param client_conf_file: One or more file paths for where the client token 595 | and nonce will be written to. 596 | :returns: client token and lease information 597 | :rtype: dict 598 | 599 | """ 600 | ret = { 601 | 'name': name, 602 | 'comment': '', 603 | 'result': False, 604 | 'changes': {} 605 | } 606 | try: 607 | is_authenticated = __salt__['vault.is_authenticated']() 608 | except (__utils__['vault.vault_error']('InvalidRequest'), __utils__['vault.vault_error']('InvalidPath')) as e: 609 | log.exception(e) 610 | raise 611 | if not is_authenticated: 612 | ret['comment'] = ('The minion will be authenticated to Vault using ' 613 | 'the EC2 authentication backend.') 614 | else: 615 | ret['comment'] = ('The minion is already authenticated. 
No ' 616 | 'action will be performed.') 617 | if __opts__['test']: 618 | ret['result'] = None 619 | else: 620 | try: 621 | if not pkcs7: 622 | pkcs7 = ''.join( 623 | __salt__['http.query']( 624 | 'http://169.254.169.254/latest/dynamic/instance-identity/pkcs7' 625 | ).get('body', '').splitlines()) 626 | if not nonce and __salt__['config.get']('vault.nonce'): 627 | nonce = __salt__['config.get']('vault.nonce') 628 | auth_result = __salt__['vault.auth_ec2'](pkcs7=pkcs7, role=role, 629 | nonce=nonce) 630 | log.debug('Auth response attributes: {}'.format( 631 | auth_result['auth'].keys())) 632 | client_config = { 633 | 'vault.{0}'.format(k): v for k, v in auth_result['auth'].items() 634 | } 635 | 636 | client_config['vault.token'] = client_config.pop('vault.client_token') 637 | 638 | if nonce: 639 | client_config['vault.nonce'] = nonce 640 | 641 | vault_conf_files = [] 642 | if not client_conf_files: 643 | vault_conf_files.append(os.path.join( 644 | salt.syspaths.CONFIG_DIR, 645 | os.path.dirname(__salt__['config.get']('default_include')), 646 | '99_vault_client.conf')) 647 | if is_master: 648 | vault_conf_files.append(os.path.join( 649 | salt.syspaths.CONFIG_DIR, 650 | os.path.dirname( 651 | salt.config.apply_master_config( 652 | {})['default_include']), 653 | '99_vault_client.conf')) 654 | else: 655 | if not isinstance(client_conf_files, list): 656 | client_conf_files = [client_conf_files] 657 | vault_conf_files.extend(client_conf_files) 658 | for fpath in vault_conf_files: 659 | with open(fpath, 'w') as vault_conf: 660 | for k, v in client_config.items(): 661 | vault_conf.write('{key}: {value}\n'.format(key=k, value=v)) 662 | ret['changes']['new'] = auth_result 663 | ret['changes']['old'] = {} 664 | ret['comment'] = 'Successfully authenticated using EC2 backend' 665 | ret['result'] = True 666 | except (__utils__['vault.vault_error']('InvalidRequest'), __utils__['vault.vault_error']('InvalidPath')) as e: 667 | log.exception(e) 668 | ret['result'] = False 669 | ret['comment'] = 'Failed to authenticate' 670 | return ret 671 | -------------------------------------------------------------------------------- /_utils/data_structures.py: -------------------------------------------------------------------------------- 1 | import logging 2 | 3 | log = logging.getLogger(__name__) 4 | 5 | 6 | class DuplicateElement(Exception): 7 | pass 8 | 9 | 10 | class UniqueSet(set): 11 | def add(self, elem): 12 | if elem in self: 13 | raise DuplicateElement('%s already exists in the set', elem) 14 | return super(UniqueSet, self).add(elem) 15 | 16 | 17 | def traverse_leaf_nodes(dict_or_list, leaf_filter=lambda x: True): 18 | """Walk a data structure and perform an action on 19 | leaf nodes matching a filter. 
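    For example (illustrative; ordering follows dict insertion order)::

        >>> traverse_leaf_nodes({'a': ['x', {'b': 'y'}]})
        [('x', 0, ['x', {'b': 'y'}]), ('y', 'b', {'b': 'y'})]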
20 | 21 | :param dict_or_list: Arbitrarily nested data structure to traverse 22 | :returns: List of 3-tuples (leaf_node, leaf_index, leaf_container) 23 | :rtype: List 24 | 25 | """ 26 | processed_objects = UniqueSet() 27 | processed_objects.add(id(dict_or_list)) 28 | leaf_nodes = [] 29 | 30 | def _is_leaf_node(element): 31 | return not isinstance(element, (list, dict, set, tuple)) 32 | 33 | def _process_sequence(sequence_object, 34 | leaf_filter=None): 35 | for index, element in enumerate(sequence_object): 36 | try: 37 | processed_objects.add(id(element)) 38 | except DuplicateElement: 39 | continue 40 | if _is_leaf_node(element): 41 | if not leaf_filter(element): 42 | continue 43 | leaf_nodes.append((element, index, sequence_object)) 44 | else: 45 | _process_router(element, leaf_filter) 46 | 47 | def _process_dict(dict_object, 48 | leaf_filter=None): 49 | for k, v in dict_object.items(): 50 | try: 51 | processed_objects.add(id(v)) 52 | except DuplicateElement: 53 | continue 54 | if _is_leaf_node(v): 55 | if not leaf_filter(v): 56 | continue 57 | leaf_nodes.append((v, k, dict_object)) 58 | else: 59 | _process_router(v, leaf_filter) 60 | 61 | def _process_router(dict_or_list, leaf_filter=None): 62 | if isinstance(dict_or_list, dict): 63 | _process_dict(dict_or_list, leaf_filter) 64 | else: 65 | _process_sequence(dict_or_list, leaf_filter) 66 | 67 | _process_router(dict_or_list, leaf_filter) 68 | return leaf_nodes 69 | -------------------------------------------------------------------------------- /_utils/vault.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | ''' 3 | :maintainer: SaltStack 4 | :maturity: new 5 | :platform: all 6 | 7 | Utilities supporting modules for Hashicorp Vault. Configuration instructions are 8 | documented in the execution module docs. 
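A sketch of the configuration these helpers expect, inferred from the keys read
in ``_get_vault_connection`` below (values are placeholders, not authoritative):

.. code:: yaml

    vault:
      url: https://vault.example.com:8200
      verify: /etc/ssl/certs/ca-bundle.pem
      auth:
        method: approle
        role_id: <role-id>
        secret_id: <secret-id>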
9 | ''' 10 | 11 | from __future__ import absolute_import, print_function, unicode_literals 12 | import base64 13 | import logging 14 | import os 15 | import requests 16 | import json 17 | import time 18 | from functools import wraps 19 | 20 | import six 21 | import salt.crypt 22 | import salt.exceptions 23 | import salt.utils.versions 24 | 25 | try: 26 | import hcl 27 | HAS_HCL_PARSER = True 28 | except ImportError: 29 | HAS_HCL_PARSER = False 30 | 31 | try: 32 | from urlparse import urljoin 33 | except ImportError: 34 | from urllib.parse import urljoin 35 | 36 | log = logging.getLogger(__name__) 37 | logging.getLogger("requests").setLevel(logging.WARNING) 38 | 39 | # Load the __salt__ dunder if not already loaded (when called from utils-module) 40 | __salt__ = None 41 | 42 | 43 | def __virtual__(): # pylint: disable=expected-2-blank-lines-found-0 44 | try: 45 | global __salt__ # pylint: disable=global-statement 46 | if not __salt__: 47 | __salt__ = salt.loader.minion_mods(__opts__) 48 | return True 49 | except Exception as e: 50 | log.error("Could not load __salt__: %s", e) 51 | return False 52 | 53 | 54 | def _get_token_and_url_from_master(): 55 | ''' 56 | Get a token with correct policies for the minion, and the url to the Vault 57 | service 58 | ''' 59 | minion_id = __grains__['id'] 60 | pki_dir = __opts__['pki_dir'] 61 | 62 | # When rendering pillars, the module executes on the master, but the token 63 | # should be issued for the minion, so that the correct policies are applied 64 | if __opts__.get('__role', 'minion') == 'minion': 65 | private_key = '{0}/minion.pem'.format(pki_dir) 66 | log.debug('Running on minion, signing token request with key %s', 67 | private_key) 68 | signature = base64.b64encode( 69 | salt.crypt.sign_message(private_key, minion_id)) 70 | result = __salt__['publish.runner']( 71 | 'vault.generate_token', arg=[minion_id, signature]) 72 | else: 73 | private_key = '{0}/master.pem'.format(pki_dir) 74 | log.debug( 75 | 'Running on master, signing token request for %s with key %s', 76 | minion_id, private_key) 77 | signature = base64.b64encode( 78 | salt.crypt.sign_message(private_key, minion_id)) 79 | result = __salt__['saltutil.runner']( 80 | 'vault.generate_token', 81 | minion_id=minion_id, 82 | signature=signature, 83 | impersonated_by_master=True) 84 | 85 | if not result: 86 | log.error('Failed to get token from master! No result returned - ' 87 | 'is the peer publish configuration correct?') 88 | raise salt.exceptions.CommandExecutionError(result) 89 | if not isinstance(result, dict): 90 | log.error('Failed to get token from master! ' 91 | 'Response is not a dict: %s', result) 92 | raise salt.exceptions.CommandExecutionError(result) 93 | if 'error' in result: 94 | log.error('Failed to get token from master! ' 95 | 'An error was returned: %s', result['error']) 96 | raise salt.exceptions.CommandExecutionError(result) 97 | return { 98 | 'url': result['url'], 99 | 'token': result['token'], 100 | 'verify': result['verify'], 101 | } 102 | 103 | 104 | def _get_vault_connection(): 105 | ''' 106 | Get the connection details for calling Vault, from local configuration if 107 | it exists, or from the master otherwise 108 | ''' 109 | 110 | def _use_local_config(): 111 | log.debug('Using Vault connection details from local config') 112 | try: 113 | if __opts__['vault']['auth']['method'] == 'approle': 114 | verify = __opts__['vault'].get('verify', None) 115 | if _selftoken_expired(): 116 | log.debug('Vault token expired. 
Recreating one') 117 | # Requesting a short ttl token 118 | url = '{0}/v1/auth/approle/login'.format( 119 | __opts__['vault']['url']) 120 | payload = {'role_id': __opts__['vault']['auth']['role_id']} 121 | if 'secret_id' in __opts__['vault']['auth']: 122 | payload['secret_id'] = __opts__['vault']['auth'][ 123 | 'secret_id'] 124 | response = requests.post(url, json=payload, verify=verify) 125 | if response.status_code != 200: 126 | errmsg = 'An error occured while getting a token from approle' 127 | raise salt.exceptions.CommandExecutionError(errmsg) 128 | __opts__['vault']['auth']['token'] = response.json()[ 129 | 'auth']['client_token'] 130 | return { 131 | 'url': __opts__['vault']['url'], 132 | 'token': __opts__['vault']['auth']['token'], 133 | 'verify': __opts__['vault'].get('verify', None) 134 | } 135 | except KeyError as err: 136 | errmsg = 'Minion has "vault" config section, but could not find key "{0}" within'.format( 137 | err.message) 138 | raise salt.exceptions.CommandExecutionError(errmsg) 139 | 140 | if 'vault' in __opts__ and __opts__.get('__role', 'minion') == 'master': 141 | return _use_local_config() 142 | elif any((__opts__['local'], __opts__['file_client'] == 'local', 143 | __opts__['master_type'] == 'disable')): 144 | return _use_local_config() 145 | else: 146 | log.debug('Contacting master for Vault connection details') 147 | return _get_token_and_url_from_master() 148 | 149 | 150 | def make_request(method, resource, profile=None, **args): 151 | ''' 152 | Make a request to Vault 153 | ''' 154 | if profile is not None and profile.keys().remove('driver') is not None: 155 | # Deprecated code path 156 | return make_request_with_profile(method, resource, profile, **args) 157 | 158 | connection = _get_vault_connection() 159 | token, vault_url = connection['token'], connection['url'] 160 | if 'verify' not in args: 161 | args['verify'] = connection['verify'] 162 | 163 | url = "{0}/{1}".format(vault_url, resource) 164 | headers = {'X-Vault-Token': token, 'Content-Type': 'application/json'} 165 | response = requests.request(method, url, headers=headers, **args) 166 | 167 | return response 168 | 169 | 170 | def make_request_with_profile(method, resource, profile, **args): 171 | ''' 172 | DEPRECATED! Make a request to Vault, with a profile including connection 173 | details. 174 | ''' 175 | salt.utils.versions.warn_until( 176 | 'Fluorine', 177 | 'Specifying Vault connection data within a \'profile\' has been ' 178 | 'deprecated. Please see the documentation for details on the new ' 179 | 'configuration schema. 
Support for this function will be removed ' 180 | 'in Salt Fluorine.') 181 | url = '{0}://{1}:{2}/v1/{3}'.format( 182 | profile.get('vault.scheme', 'https'), 183 | profile.get('vault.host'), 184 | profile.get('vault.port'), 185 | resource, 186 | ) 187 | token = os.environ.get('VAULT_TOKEN', profile.get('vault.token')) 188 | if token is None: 189 | raise salt.exceptions.CommandExecutionError( 190 | 'A token was not configured') 191 | 192 | headers = {'X-Vault-Token': token, 'Content-Type': 'application/json'} 193 | response = requests.request(method, url, headers=headers, **args) 194 | 195 | return response 196 | 197 | 198 | def _selftoken_expired(): 199 | ''' 200 | Validate the current token exists and is still valid 201 | ''' 202 | try: 203 | verify = __opts__['vault'].get('verify', None) 204 | url = '{0}/v1/auth/token/lookup-self'.format(__opts__['vault']['url']) 205 | if 'token' not in __opts__['vault']['auth']: 206 | return True 207 | headers = {'X-Vault-Token': __opts__['vault']['auth']['token']} 208 | response = requests.get(url, headers=headers, verify=verify) 209 | if response.status_code != 200: 210 | return True 211 | return False 212 | except Exception as e: 213 | raise salt.exceptions.CommandExecutionError( 214 | 'Error while looking up self token : {0}'.format(e)) 215 | 216 | 217 | class VaultError(Exception): 218 | def __init__(self, message=None, errors=None): 219 | if errors: 220 | message = ', '.join(errors) 221 | 222 | self.errors = errors 223 | 224 | super(VaultError, self).__init__(message) 225 | 226 | 227 | class InvalidRequest(VaultError): 228 | pass 229 | 230 | 231 | class Unauthorized(VaultError): 232 | pass 233 | 234 | 235 | class Forbidden(VaultError): 236 | pass 237 | 238 | 239 | class InvalidPath(VaultError): 240 | pass 241 | 242 | 243 | class RateLimitExceeded(VaultError): 244 | pass 245 | 246 | 247 | class InternalServerError(VaultError): 248 | pass 249 | 250 | 251 | class VaultNotInitialized(VaultError): 252 | pass 253 | 254 | 255 | class VaultDown(VaultError): 256 | pass 257 | 258 | 259 | class UnexpectedError(VaultError): 260 | pass 261 | 262 | 263 | class VaultClient(object): 264 | def __init__(self, 265 | url='http://localhost:8200', 266 | token=None, 267 | cert=None, 268 | verify=True, 269 | timeout=30, 270 | proxies=None, 271 | allow_redirects=True, 272 | session=None): 273 | 274 | if not session: 275 | session = requests.Session() 276 | self.allow_redirects = allow_redirects 277 | self.session = session 278 | self.token = token 279 | 280 | self._url = url 281 | self._kwargs = { 282 | 'cert': cert, 283 | 'verify': verify, 284 | 'timeout': timeout, 285 | 'proxies': proxies, 286 | } 287 | 288 | def read(self, path, wrap_ttl=None): 289 | """ 290 | GET / 291 | """ 292 | try: 293 | log.trace('Reading vault data from %s', path) 294 | return self._get('/v1/{0}'.format(path), wrap_ttl=wrap_ttl).json() 295 | except InvalidPath: 296 | return None 297 | 298 | def list(self, path): 299 | """ 300 | GET /?list=true 301 | """ 302 | try: 303 | payload = {'list': True} 304 | return self._get('/v1/{}'.format(path), params=payload).json() 305 | except InvalidPath: 306 | return None 307 | 308 | def update(self, path, translate_newlines=False, wrap_ttl=None, **kwargs): 309 | """ 310 | PUT / 311 | """ 312 | if translate_newlines: 313 | for k, v in kwargs.items(): 314 | if isinstance(v, six.string_types): 315 | kwargs[k] = v.replace(r'\n', '\n') 316 | 317 | response = self._put( 318 | '/v1/{0}'.format(path), json=kwargs, wrap_ttl=wrap_ttl) 319 | 320 | if response.status_code 
== 200: 321 | return response.json() 322 | 323 | def write(self, path, translate_newlines=False, wrap_ttl=None, **kwargs): 324 | """ 325 | POST / 326 | """ 327 | if translate_newlines: 328 | for k, v in kwargs.items(): 329 | if isinstance(v, six.string_types): 330 | kwargs[k] = v.replace(r'\n', '\n') 331 | 332 | response = self._post( 333 | '/v1/{0}'.format(path), json=kwargs, wrap_ttl=wrap_ttl) 334 | 335 | if response.status_code == 200: 336 | return response.json() 337 | 338 | def delete(self, path): 339 | """ 340 | DELETE / 341 | """ 342 | self._delete('/v1/{0}'.format(path)) 343 | 344 | def unwrap(self, token): 345 | """ 346 | GET /cubbyhole/response 347 | X-Vault-Token: 348 | """ 349 | path = "cubbyhole/response" 350 | _token = self.token 351 | try: 352 | self.token = token 353 | return json.loads(self.read(path)['data']['response']) 354 | finally: 355 | self.token = _token 356 | 357 | def is_initialized(self): 358 | """ 359 | GET /sys/init 360 | """ 361 | return self._get('/v1/sys/init').json()['initialized'] 362 | 363 | # def initialize(self, secret_shares=5, secret_threshold=3, pgp_keys=None): 364 | # """ 365 | # PUT /sys/init 366 | # """ 367 | # params = { 368 | # 'secret_shares': secret_shares, 369 | # 'secret_threshold': secret_threshold, 370 | # } 371 | 372 | # if pgp_keys: 373 | # if len(pgp_keys) != secret_shares: 374 | # raise ValueError('Length of pgp_keys must equal secret shares') 375 | 376 | # params['pgp_keys'] = pgp_keys 377 | 378 | # return self._put('/v1/sys/init', json=params).json() 379 | 380 | @property 381 | def seal_status(self): 382 | """ 383 | GET /sys/seal-status 384 | """ 385 | return self._get('/v1/sys/seal-status').json() 386 | 387 | def is_sealed(self): 388 | return self.seal_status['sealed'] 389 | 390 | def seal(self): 391 | """ 392 | PUT /sys/seal 393 | """ 394 | self._put('/v1/sys/seal') 395 | 396 | def unseal(self, key, reset=False): 397 | """ 398 | PUT /sys/unseal 399 | """ 400 | params = {'key': key, 'reset': reset} 401 | 402 | return self._put('/v1/sys/unseal', json=params).json() 403 | 404 | def unseal_multi(self, keys): 405 | result = None 406 | 407 | for key in keys: 408 | result = self.unseal(key) 409 | if not result['sealed']: 410 | break 411 | 412 | return result 413 | 414 | @property 415 | def key_status(self): 416 | """ 417 | GET /sys/key-status 418 | """ 419 | return self._get('/v1/sys/key-status').json() 420 | 421 | def rotate(self): 422 | """ 423 | PUT /sys/rotate 424 | """ 425 | self._put('/v1/sys/rotate') 426 | 427 | @property 428 | def rekey_status(self): 429 | """ 430 | GET /sys/rekey/init 431 | """ 432 | return self._get('/v1/sys/rekey/init').json() 433 | 434 | def start_rekey(self, 435 | secret_shares=5, 436 | secret_threshold=3, 437 | pgp_keys=None, 438 | backup=False): 439 | """ 440 | PUT /sys/rekey/init 441 | """ 442 | params = { 443 | 'secret_shares': secret_shares, 444 | 'secret_threshold': secret_threshold, 445 | } 446 | 447 | if pgp_keys: 448 | if len(pgp_keys) != secret_shares: 449 | raise ValueError('Length of pgp_keys must equal secret shares') 450 | 451 | params['pgp_keys'] = pgp_keys 452 | params['backup'] = backup 453 | 454 | resp = self._put('/v1/sys/rekey/init', json=params) 455 | if resp.text: 456 | return resp.json() 457 | 458 | def cancel_rekey(self): 459 | """ 460 | DELETE /sys/rekey/init 461 | """ 462 | self._delete('/v1/sys/rekey/init') 463 | 464 | def rekey(self, key, nonce=None): 465 | """ 466 | PUT /sys/rekey/update 467 | """ 468 | params = { 469 | 'key': key, 470 | } 471 | 472 | if nonce: 473 | 
params['nonce'] = nonce 474 | 475 | return self._put('/v1/sys/rekey/update', json=params).json() 476 | 477 | def rekey_multi(self, keys, nonce=None): 478 | result = None 479 | 480 | for key in keys: 481 | result = self.rekey(key, nonce=nonce) 482 | if 'complete' in result and result['complete']: 483 | break 484 | 485 | return result 486 | 487 | def get_backed_up_keys(self): 488 | """ 489 | GET /sys/rekey/backup 490 | """ 491 | return self._get('/v1/sys/rekey/backup').json() 492 | 493 | @property 494 | def ha_status(self): 495 | """ 496 | GET /sys/leader 497 | """ 498 | return self._get('/v1/sys/leader').json() 499 | 500 | def get_lease(self, lease_id): 501 | try: 502 | lease = self.write('sys/leases/lookup', lease_id=lease_id) 503 | except InvalidRequest: 504 | log.warning('The following lease_id is not valid: %s', lease_id) 505 | lease = None 506 | 507 | return lease 508 | 509 | def renew_secret(self, lease_id, increment=None): 510 | """ 511 | PUT /sys/leases/renew 512 | """ 513 | params = { 514 | 'lease_id': lease_id, 515 | 'increment': increment, 516 | } 517 | return self._put('/v1/sys/leases/renew', json=params).json() 518 | 519 | def revoke_secret(self, lease_id): 520 | """ 521 | PUT /sys/revoke/ 522 | """ 523 | self._put('/v1/sys/revoke/{0}'.format(lease_id)) 524 | 525 | def revoke_secret_prefix(self, path_prefix): 526 | """ 527 | PUT /sys/revoke-prefix/ 528 | """ 529 | self._put('/v1/sys/revoke-prefix/{0}'.format(path_prefix)) 530 | 531 | def revoke_self_token(self): 532 | """ 533 | PUT /auth/token/revoke-self 534 | """ 535 | self._put('/v1/auth/token/revoke-self') 536 | 537 | def list_secret_backends(self): 538 | """ 539 | GET /sys/mounts 540 | """ 541 | return self._get('/v1/sys/mounts').json() 542 | 543 | def enable_secret_backend(self, 544 | backend_type, 545 | description=None, 546 | mount_point=None, 547 | config=None): 548 | """ 549 | POST /sys/auth/ 550 | """ 551 | if not mount_point: 552 | mount_point = backend_type 553 | 554 | params = { 555 | 'type': backend_type, 556 | 'description': description, 557 | 'config': config, 558 | } 559 | 560 | self._post('/v1/sys/mounts/{0}'.format(mount_point), json=params) 561 | 562 | def tune_secret_backend(self, 563 | backend_type, 564 | mount_point=None, 565 | default_lease_ttl=None, 566 | max_lease_ttl=None): 567 | """ 568 | POST /sys/mounts//tune 569 | """ 570 | 571 | if not mount_point: 572 | mount_point = backend_type 573 | 574 | params = { 575 | 'default_lease_ttl': default_lease_ttl, 576 | 'max_lease_ttl': max_lease_ttl 577 | } 578 | 579 | self._post('/v1/sys/mounts/{0}/tune'.format(mount_point), json=params) 580 | 581 | def get_secret_backend_tuning(self, backend_type, mount_point=None): 582 | """ 583 | GET /sys/mounts//tune 584 | """ 585 | if not mount_point: 586 | mount_point = backend_type 587 | 588 | return self._get('/v1/sys/mounts/{0}/tune'.format(mount_point)).json() 589 | 590 | def disable_secret_backend(self, mount_point): 591 | """ 592 | DELETE /sys/mounts/ 593 | """ 594 | self._delete('/v1/sys/mounts/{0}'.format(mount_point)) 595 | 596 | def remount_secret_backend(self, from_mount_point, to_mount_point): 597 | """ 598 | POST /sys/remount 599 | """ 600 | params = { 601 | 'from': from_mount_point, 602 | 'to': to_mount_point, 603 | } 604 | 605 | self._post('/v1/sys/remount', json=params) 606 | 607 | def list_policies(self): 608 | """ 609 | GET /sys/policy 610 | """ 611 | return self._get('/v1/sys/policy').json()['policies'] 612 | 613 | def get_policy(self, name, parse=False): 614 | """ 615 | GET /sys/policy/ 616 | """ 617 
| try: 618 | policy = self._get( 619 | '/v1/sys/policy/{0}'.format(name)).json()['rules'] 620 | if parse: 621 | if not HAS_HCL_PARSER: 622 | raise ImportError('pyhcl is required for policy parsing') 623 | policy = hcl.loads(policy) 624 | 625 | return policy 626 | except InvalidPath: 627 | return None 628 | 629 | def set_policy(self, name, rules): 630 | """ 631 | PUT /sys/policy/ 632 | """ 633 | 634 | if isinstance(rules, dict): 635 | rules = json.dumps(rules) 636 | 637 | params = { 638 | 'rules': rules, 639 | } 640 | 641 | self._put('/v1/sys/policy/{0}'.format(name), json=params) 642 | 643 | def delete_policy(self, name): 644 | """ 645 | DELETE /sys/policy/ 646 | """ 647 | self._delete('/v1/sys/policy/{0}'.format(name)) 648 | 649 | def list_audit_backends(self): 650 | """ 651 | GET /sys/audit 652 | """ 653 | return self._get('/v1/sys/audit').json() 654 | 655 | def enable_audit_backend(self, 656 | backend_type, 657 | description=None, 658 | options=None, 659 | name=None): 660 | """ 661 | POST /sys/audit/ 662 | """ 663 | if not name: 664 | name = backend_type 665 | 666 | params = { 667 | 'type': backend_type, 668 | 'description': description, 669 | 'options': options, 670 | } 671 | 672 | self._post('/v1/sys/audit/{0}'.format(name), json=params) 673 | 674 | def disable_audit_backend(self, name): 675 | """ 676 | DELETE /sys/audit/ 677 | """ 678 | self._delete('/v1/sys/audit/{0}'.format(name)) 679 | 680 | def audit_hash(self, name, input): 681 | """ 682 | POST /sys/audit-hash 683 | """ 684 | params = { 685 | 'input': input, 686 | } 687 | return self._post( 688 | '/v1/sys/audit-hash/{0}'.format(name), json=params).json() 689 | 690 | def create_token(self, 691 | role=None, 692 | token_id=None, 693 | policies=None, 694 | meta=None, 695 | no_parent=False, 696 | lease=None, 697 | display_name=None, 698 | num_uses=None, 699 | no_default_policy=False, 700 | ttl=None, 701 | orphan=False, 702 | wrap_ttl=None, 703 | renewable=None, 704 | explicit_max_ttl=None, 705 | period=None): 706 | """ 707 | POST /auth/token/create 708 | POST /auth/token/create/ 709 | POST /auth/token/create-orphan 710 | """ 711 | params = { 712 | 'id': token_id, 713 | 'policies': policies, 714 | 'meta': meta, 715 | 'no_parent': no_parent, 716 | 'display_name': display_name, 717 | 'num_uses': num_uses, 718 | 'no_default_policy': no_default_policy, 719 | 'renewable': renewable 720 | } 721 | 722 | if lease: 723 | params['lease'] = lease 724 | else: 725 | params['ttl'] = ttl 726 | params['explicit_max_ttl'] = explicit_max_ttl 727 | 728 | if explicit_max_ttl: 729 | params['explicit_max_ttl'] = explicit_max_ttl 730 | 731 | if period: 732 | params['period'] = period 733 | 734 | if orphan: 735 | return self._post( 736 | '/v1/auth/token/create-orphan', json=params, 737 | wrap_ttl=wrap_ttl).json() 738 | elif role: 739 | return self._post( 740 | '/v1/auth/token/create/{0}'.format(role), 741 | json=params, 742 | wrap_ttl=wrap_ttl).json() 743 | else: 744 | return self._post( 745 | '/v1/auth/token/create', json=params, 746 | wrap_ttl=wrap_ttl).json() 747 | 748 | def lookup_token(self, token=None, accessor=False, wrap_ttl=None): 749 | """ 750 | GET /auth/token/lookup/ 751 | GET /auth/token/lookup-accessor/ 752 | GET /auth/token/lookup-self 753 | """ 754 | if token: 755 | if accessor: 756 | path = '/v1/auth/token/lookup-accessor/{0}'.format(token) 757 | return self._post(path, wrap_ttl=wrap_ttl).json() 758 | else: 759 | return self._get( 760 | '/v1/auth/token/lookup/{0}'.format(token)).json() 761 | else: 762 | return self._get( 763 | 
'/v1/auth/token/lookup-self', wrap_ttl=wrap_ttl).json() 764 | 765 | def revoke_token(self, token, orphan=False, accessor=False): 766 | """ 767 | POST /auth/token/revoke/ 768 | POST /auth/token/revoke-orphan/ 769 | POST /auth/token/revoke-accessor/ 770 | """ 771 | if accessor and orphan: 772 | msg = ("revoke_token does not support 'orphan' and 'accessor' " 773 | "flags together") 774 | raise InvalidRequest(msg) 775 | elif accessor: 776 | self._post('/v1/auth/token/revoke-accessor/{0}'.format(token)) 777 | elif orphan: 778 | self._post('/v1/auth/token/revoke-orphan/{0}'.format(token)) 779 | else: 780 | self._post('/v1/auth/token/revoke/{0}'.format(token)) 781 | 782 | def revoke_token_prefix(self, prefix): 783 | """ 784 | POST /auth/token/revoke-prefix/ 785 | """ 786 | self._post('/v1/auth/token/revoke-prefix/{0}'.format(prefix)) 787 | 788 | def renew_token(self, token=None, increment=None, wrap_ttl=None): 789 | """ 790 | POST /auth/token/renew/ 791 | POST /auth/token/renew-self 792 | """ 793 | params = { 794 | 'increment': increment, 795 | } 796 | 797 | if token: 798 | path = '/v1/auth/token/renew/{0}'.format(token) 799 | return self._post(path, json=params, wrap_ttl=wrap_ttl).json() 800 | else: 801 | return self._post( 802 | '/v1/auth/token/renew-self', json=params, 803 | wrap_ttl=wrap_ttl).json() 804 | 805 | def create_token_role(self, 806 | role, 807 | allowed_policies=None, 808 | disallowed_policies=None, 809 | orphan=None, 810 | period=None, 811 | renewable=None, 812 | path_suffix=None, 813 | explicit_max_ttl=None): 814 | """ 815 | POST /auth/token/roles/ 816 | """ 817 | params = { 818 | 'allowed_policies': allowed_policies, 819 | 'disallowed_policies': disallowed_policies, 820 | 'orphan': orphan, 821 | 'period': period, 822 | 'renewable': renewable, 823 | 'path_suffix': path_suffix, 824 | 'explicit_max_ttl': explicit_max_ttl 825 | } 826 | return self._post('/v1/auth/token/roles/{0}'.format(role), json=params) 827 | 828 | def token_role(self, role): 829 | """ 830 | Returns the named token role. 831 | """ 832 | return self.read('auth/token/roles/{0}'.format(role)) 833 | 834 | def delete_token_role(self, role): 835 | """ 836 | Deletes the named token role. 
837 | """ 838 | return self.delete('auth/token/roles/{0}'.format(role)) 839 | 840 | def list_token_roles(self): 841 | """ 842 | GET /auth/token/roles?list=true 843 | """ 844 | return self.list('auth/token/roles') 845 | 846 | def logout(self, revoke_token=False): 847 | """ 848 | Clears the token used for authentication, optionally revoking it 849 | before doing so 850 | """ 851 | if revoke_token: 852 | self.revoke_self_token() 853 | 854 | self.token = None 855 | 856 | def is_authenticated(self): 857 | """ 858 | Helper method which returns the authentication status of the client 859 | """ 860 | if not self.token: 861 | return False 862 | 863 | try: 864 | self.lookup_token() 865 | return True 866 | except Forbidden: 867 | return False 868 | except InvalidPath: 869 | return False 870 | except InvalidRequest: 871 | return False 872 | 873 | def auth_app_id(self, 874 | app_id, 875 | user_id, 876 | mount_point='app-id', 877 | use_token=True): 878 | """ 879 | POST /auth//login 880 | """ 881 | params = { 882 | 'app_id': app_id, 883 | 'user_id': user_id, 884 | } 885 | 886 | return self.auth( 887 | '/v1/auth/{0}/login'.format(mount_point), 888 | json=params, 889 | use_token=use_token) 890 | 891 | def auth_tls(self, mount_point='cert', use_token=True): 892 | """ 893 | POST /auth//login 894 | """ 895 | return self.auth( 896 | '/v1/auth/{0}/login'.format(mount_point), use_token=use_token) 897 | 898 | def auth_userpass(self, 899 | username, 900 | password, 901 | mount_point='userpass', 902 | use_token=True, 903 | **kwargs): 904 | """ 905 | POST /auth//login/ 906 | """ 907 | params = { 908 | 'password': password, 909 | } 910 | 911 | params.update(kwargs) 912 | 913 | return self.auth( 914 | '/v1/auth/{0}/login/{1}'.format(mount_point, username), 915 | json=params, 916 | use_token=use_token) 917 | 918 | def auth_ec2(self, pkcs7, nonce=None, role=None, use_token=True): 919 | """ 920 | POST /auth/aws/login 921 | """ 922 | params = {'pkcs7': pkcs7} 923 | if nonce: 924 | params['nonce'] = nonce 925 | if role: 926 | params['role'] = role 927 | 928 | return self.auth( 929 | '/v1/auth/aws/login', json=params, use_token=use_token) 930 | 931 | def create_userpass(self, 932 | username, 933 | password, 934 | policies, 935 | mount_point='userpass', 936 | **kwargs): 937 | """ 938 | POST /auth//users/ 939 | """ 940 | 941 | # Users can have more than 1 policy. It is easier for the user to pass 942 | # in the policies as a list so if they do, we need to convert 943 | # to a , delimited string. 944 | if isinstance(policies, (list, set, tuple)): 945 | policies = ','.join(policies) 946 | 947 | params = {'password': password, 'policies': policies} 948 | params.update(kwargs) 949 | 950 | return self._post( 951 | '/v1/auth/{}/users/{}'.format(mount_point, username), json=params) 952 | 953 | def delete_userpass(self, username, mount_point='userpass'): 954 | """ 955 | DELETE /auth//users/ 956 | """ 957 | return self._delete('/v1/auth/{}/users/{}'.format( 958 | mount_point, username)) 959 | 960 | def create_app_id(self, 961 | app_id, 962 | policies, 963 | display_name=None, 964 | mount_point='app-id', 965 | **kwargs): 966 | """ 967 | POST /auth//map/app-id/ 968 | """ 969 | 970 | # app-id can have more than 1 policy. It is easier for the user to 971 | # pass in the policies as a list so if they do, we need to convert 972 | # to a , delimited string. 973 | if isinstance(policies, (list, set, tuple)): 974 | policies = ','.join(policies) 975 | 976 | params = {'value': policies} 977 | 978 | # Only use the display_name if it has a value. 
Made it a named param 979 | # for user convienence instead of leaving it as part of the kwargs 980 | if display_name: 981 | params['display_name'] = display_name 982 | 983 | params.update(kwargs) 984 | 985 | return self._post( 986 | '/v1/auth/{}/map/app-id/{}'.format(mount_point, app_id), 987 | json=params) 988 | 989 | def get_app_id(self, app_id, mount_point='app-id', wrap_ttl=None): 990 | """ 991 | GET /auth//map/app-id/ 992 | """ 993 | path = '/v1/auth/{0}/map/app-id/{1}'.format(mount_point, app_id) 994 | return self._get(path, wrap_ttl=wrap_ttl).json() 995 | 996 | def delete_app_id(self, app_id, mount_point='app-id'): 997 | """ 998 | DELETE /auth//map/app-id/ 999 | """ 1000 | return self._delete('/v1/auth/{0}/map/app-id/{1}'.format( 1001 | mount_point, app_id)) 1002 | 1003 | def create_user_id(self, 1004 | user_id, 1005 | app_id, 1006 | cidr_block=None, 1007 | mount_point='app-id', 1008 | **kwargs): 1009 | """ 1010 | POST /auth//map/user-id/ 1011 | """ 1012 | 1013 | # user-id can be associated to more than 1 app-id (aka policy). 1014 | # It is easier for the user to pass in the policies as a list so if 1015 | # they do, we need to convert to a , delimited string. 1016 | if isinstance(app_id, (list, set, tuple)): 1017 | app_id = ','.join(app_id) 1018 | 1019 | params = {'value': app_id} 1020 | 1021 | # Only use the cidr_block if it has a value. Made it a named param for 1022 | # user convienence instead of leaving it as part of the kwargs 1023 | if cidr_block: 1024 | params['cidr_block'] = cidr_block 1025 | 1026 | params.update(kwargs) 1027 | 1028 | return self._post( 1029 | '/v1/auth/{}/map/user-id/{}'.format(mount_point, user_id), 1030 | json=params) 1031 | 1032 | def get_user_id(self, user_id, mount_point='app-id', wrap_ttl=None): 1033 | """ 1034 | GET /auth//map/user-id/ 1035 | """ 1036 | path = '/v1/auth/{0}/map/user-id/{1}'.format(mount_point, user_id) 1037 | return self._get(path, wrap_ttl=wrap_ttl).json() 1038 | 1039 | def delete_user_id(self, user_id, mount_point='app-id'): 1040 | """ 1041 | DELETE /auth//map/user-id/ 1042 | """ 1043 | return self._delete('/v1/auth/{0}/map/user-id/{1}'.format( 1044 | mount_point, user_id)) 1045 | 1046 | def create_vault_ec2_client_configuration(self, 1047 | access_key=None, 1048 | secret_key=None, 1049 | endpoint=None): 1050 | """ 1051 | POST /auth/aws/config/client 1052 | """ 1053 | params = {} 1054 | if access_key: 1055 | params['access_key'] = access_key 1056 | if secret_key: 1057 | params['secret_key'] = secret_key 1058 | if endpoint is not None: 1059 | params['endpoint'] = endpoint 1060 | 1061 | return self._post('/v1/auth/aws/config/client', json=params) 1062 | 1063 | def get_vault_ec2_client_configuration(self): 1064 | """ 1065 | GET /auth/aws/config/client 1066 | """ 1067 | return self._get('/v1/auth/aws/config/client').json() 1068 | 1069 | def delete_vault_ec2_client_configuration(self): 1070 | """ 1071 | DELETE /auth/aws/config/client 1072 | """ 1073 | return self._delete('/v1/auth/aws/config/client') 1074 | 1075 | def create_vault_ec2_certificate_configuration(self, cert_name, 1076 | aws_public_cert): 1077 | """ 1078 | POST /auth/aws/config/certificate/ 1079 | """ 1080 | params = {'cert_name': cert_name, 'aws_public_cert': aws_public_cert} 1081 | return self._post( 1082 | '/v1/auth/aws/config/certificate/{0}'.format(cert_name), 1083 | json=params) 1084 | 1085 | def get_vault_ec2_certificate_configuration(self, cert_name): 1086 | """ 1087 | GET /auth/aws/config/certificate/ 1088 | """ 1089 | return 
self._get('/v1/auth/aws/config/certificate/{0}'.format( 1090 | cert_name)).json() 1091 | 1092 | def list_vault_ec2_certificate_configurations(self): 1093 | """ 1094 | GET /auth/aws/config/certificates?list=true 1095 | """ 1096 | params = {'list': True} 1097 | return self._get( 1098 | '/v1/auth/aws/config/certificates', params=params).json() 1099 | 1100 | def create_ec2_role(self, 1101 | role, 1102 | bound_ami_id=None, 1103 | bound_account_id=None, 1104 | bound_iam_role_arn=None, 1105 | bound_iam_instance_profile_arn=None, 1106 | role_tag=None, 1107 | max_ttl=None, 1108 | policies=None, 1109 | allow_instance_migration=False, 1110 | disallow_reauthentication=False, 1111 | period="", 1112 | **kwargs): 1113 | """ 1114 | POST /auth/aws/role/ 1115 | """ 1116 | params = { 1117 | 'role': role, 1118 | 'disallow_reauthentication': disallow_reauthentication, 1119 | 'allow_instance_migration': allow_instance_migration, 1120 | 'period': period 1121 | } 1122 | if bound_ami_id is not None: 1123 | params['bound_ami_id'] = bound_ami_id 1124 | if bound_account_id is not None: 1125 | params['bound_account_id'] = bound_account_id 1126 | if bound_iam_role_arn is not None: 1127 | params['bound_iam_role_arn'] = bound_iam_role_arn 1128 | if bound_iam_instance_profile_arn is not None: 1129 | params[ 1130 | 'bound_iam_instance_profile_arn'] = bound_iam_instance_profile_arn 1131 | if role_tag is not None: 1132 | params['role_tag'] = role_tag 1133 | if max_ttl is not None: 1134 | params['max_ttl'] = max_ttl 1135 | if policies is not None: 1136 | params['policies'] = policies 1137 | params.update(**kwargs) 1138 | return self._post( 1139 | '/v1/auth/aws/role/{0}'.format(role), json=params) 1140 | 1141 | def get_ec2_role(self, role): 1142 | """ 1143 | GET /auth/aws/role/ 1144 | """ 1145 | return self._get('/v1/auth/aws/role/{0}'.format(role)).json() 1146 | 1147 | def delete_ec2_role(self, role): 1148 | """ 1149 | DELETE /auth/aws/role/ 1150 | """ 1151 | return self._delete('/v1/auth/aws/role/{0}'.format(role)) 1152 | 1153 | def list_ec2_roles(self): 1154 | """ 1155 | GET /auth/aws/roles?list=true 1156 | """ 1157 | try: 1158 | return self._get( 1159 | '/v1/auth/aws/roles', params={ 1160 | 'list': True 1161 | }).json() 1162 | except InvalidPath: 1163 | return None 1164 | 1165 | def create_ec2_role_tag(self, 1166 | role, 1167 | policies=None, 1168 | max_ttl=None, 1169 | instance_id=None, 1170 | disallow_reauthentication=False, 1171 | allow_instance_migration=False): 1172 | """ 1173 | POST /auth/aws/role//tag 1174 | """ 1175 | params = { 1176 | 'role': role, 1177 | 'disallow_reauthentication': disallow_reauthentication, 1178 | 'allow_instance_migration': allow_instance_migration 1179 | } 1180 | if max_ttl is not None: 1181 | params['max_ttl'] = max_ttl 1182 | if policies is not None: 1183 | params['policies'] = policies 1184 | if instance_id is not None: 1185 | params['instance_id'] = instance_id 1186 | return self._post( 1187 | '/v1/auth/aws/role/{0}/tag'.format(role), json=params).json() 1188 | 1189 | def auth_ldap(self, 1190 | username, 1191 | password, 1192 | mount_point='ldap', 1193 | use_token=True, 1194 | **kwargs): 1195 | """ 1196 | POST /auth//login/ 1197 | """ 1198 | params = { 1199 | 'password': password, 1200 | } 1201 | 1202 | params.update(kwargs) 1203 | 1204 | return self.auth( 1205 | '/v1/auth/{0}/login/{1}'.format(mount_point, username), 1206 | json=params, 1207 | use_token=use_token) 1208 | 1209 | def auth_github(self, token, mount_point='github', use_token=True): 1210 | """ 1211 | POST /auth//login 1212 | 
""" 1213 | params = { 1214 | 'token': token, 1215 | } 1216 | 1217 | return self.auth( 1218 | '/v1/auth/{0}/login'.format(mount_point), 1219 | json=params, 1220 | use_token=use_token) 1221 | 1222 | def auth(self, url, use_token=True, **kwargs): 1223 | response = self._post(url, **kwargs).json() 1224 | 1225 | if use_token: 1226 | self.token = response['auth']['client_token'] 1227 | 1228 | return response 1229 | 1230 | def list_auth_backends(self): 1231 | """ 1232 | GET /sys/auth 1233 | """ 1234 | return self._get('/v1/sys/auth').json() 1235 | 1236 | def enable_auth_backend(self, 1237 | backend_type, 1238 | description=None, 1239 | mount_point=None): 1240 | """ 1241 | POST /sys/auth/ 1242 | """ 1243 | if not mount_point: 1244 | mount_point = backend_type 1245 | 1246 | params = { 1247 | 'type': backend_type, 1248 | 'description': description, 1249 | } 1250 | 1251 | self._post('/v1/sys/auth/{0}'.format(mount_point), json=params) 1252 | 1253 | def disable_auth_backend(self, mount_point): 1254 | """ 1255 | DELETE /sys/auth/ 1256 | """ 1257 | self._delete('/v1/sys/auth/{0}'.format(mount_point)) 1258 | 1259 | def create_role(self, role_name, **kwargs): 1260 | """ 1261 | POST /auth/approle/role/ 1262 | """ 1263 | 1264 | self._post('/v1/auth/approle/role/{0}'.format(role_name), json=kwargs) 1265 | 1266 | def list_roles(self): 1267 | """ 1268 | GET /auth/approle/role 1269 | """ 1270 | 1271 | return self._get('/v1/auth/approle/role?list=true').json() 1272 | 1273 | def get_role_id(self, role_name): 1274 | """ 1275 | GET /auth/approle/role//role-id 1276 | """ 1277 | 1278 | url = '/v1/auth/approle/role/{0}/role-id'.format(role_name) 1279 | return self._get(url).json()['data']['role_id'] 1280 | 1281 | def set_role_id(self, role_name, role_id): 1282 | """ 1283 | POST /auth/approle/role//role-id 1284 | """ 1285 | 1286 | url = '/v1/auth/approle/role/{0}/role-id'.format(role_name) 1287 | params = {'role_id': role_id} 1288 | self._post(url, json=params) 1289 | 1290 | def get_role(self, role_name): 1291 | """ 1292 | GET /auth/approle/role/ 1293 | """ 1294 | return self._get('/v1/auth/approle/role/{0}'.format(role_name)).json() 1295 | 1296 | def create_role_secret_id(self, role_name, meta=None, cidr_list=None): 1297 | """ 1298 | POST /auth/approle/role//secret-id 1299 | """ 1300 | 1301 | url = '/v1/auth/approle/role/{0}/secret-id'.format(role_name) 1302 | params = {} 1303 | if meta is not None: 1304 | params['metadata'] = json.dumps(meta) 1305 | if cidr_list is not None: 1306 | params['cidr_list'] = cidr_list 1307 | return self._post(url, json=params).json() 1308 | 1309 | def get_role_secret_id(self, role_name, secret_id): 1310 | """ 1311 | POST /auth/approle/role//secret-id/lookup 1312 | """ 1313 | url = '/v1/auth/approle/role/{0}/secret-id/lookup'.format(role_name) 1314 | params = {'secret_id': secret_id} 1315 | return self._post(url, json=params).json() 1316 | 1317 | def list_role_secrets(self, role_name): 1318 | """ 1319 | GET /auth/approle/role//secret-id?list=true 1320 | """ 1321 | url = '/v1/auth/approle/role/{0}/secret-id?list=true'.format(role_name) 1322 | return self._get(url).json() 1323 | 1324 | def get_role_secret_id_accessor(self, role_name, secret_id_accessor): 1325 | """ 1326 | GET /auth/approle/role//secret-id-accessor/ 1327 | """ 1328 | url = '/v1/auth/approle/role/{0}/secret-id-accessor/{1}'.format( 1329 | role_name, secret_id_accessor) 1330 | return self._get(url).json() 1331 | 1332 | def delete_role_secret_id(self, role_name, secret_id): 1333 | """ 1334 | POST 
/auth/approle/role//secret-id/destroy 1335 | """ 1336 | url = '/v1/auth/approle/role/{0}/secret-id/destroy'.format(role_name) 1337 | params = {'secret_id': secret_id} 1338 | self._post(url, json=params) 1339 | 1340 | def delete_role_secret_id_accessor(self, role_name, secret_id_accessor): 1341 | """ 1342 | DELETE /auth/approle/role//secret-id/ 1343 | """ 1344 | url = '/v1/auth/approle/role/{0}/secret-id-accessor/{1}'.format( 1345 | role_name, secret_id_accessor) 1346 | self._delete(url) 1347 | 1348 | def create_role_custom_secret_id(self, role_name, secret_id, meta=None): 1349 | """ 1350 | POST /auth/approle/role//custom-secret-id 1351 | """ 1352 | url = '/v1/auth/approle/role/{0}/custom-secret-id'.format(role_name) 1353 | params = {'secret_id': secret_id} 1354 | if meta is not None: 1355 | params['meta'] = meta 1356 | return self._post(url, json=params).json() 1357 | 1358 | def auth_approle(self, 1359 | role_id, 1360 | secret_id=None, 1361 | mount_point='approle', 1362 | use_token=True): 1363 | """ 1364 | POST /auth/approle/login 1365 | """ 1366 | params = {'role_id': role_id} 1367 | if secret_id is not None: 1368 | params['secret_id'] = secret_id 1369 | 1370 | return self.auth( 1371 | '/v1/auth/{0}/login'.format(mount_point), 1372 | json=params, 1373 | use_token=use_token) 1374 | 1375 | def transit_create_key(self, 1376 | name, 1377 | convergent_encryption=None, 1378 | derived=None, 1379 | exportable=None, 1380 | key_type=None, 1381 | mount_point='transit'): 1382 | """ 1383 | POST //keys/ 1384 | """ 1385 | url = '/v1/{0}/keys/{1}'.format(mount_point, name) 1386 | params = {} 1387 | if convergent_encryption is not None: 1388 | params['convergent_encryption'] = convergent_encryption 1389 | if derived is not None: 1390 | params['derived'] = derived 1391 | if exportable is not None: 1392 | params['exportable'] = exportable 1393 | if key_type is not None: 1394 | params['type'] = key_type 1395 | 1396 | return self._post(url, json=params) 1397 | 1398 | def transit_read_key(self, name, mount_point='transit'): 1399 | """ 1400 | GET //keys/ 1401 | """ 1402 | url = '/v1/{0}/keys/{1}'.format(mount_point, name) 1403 | return self._get(url).json() 1404 | 1405 | def transit_list_keys(self, mount_point='transit'): 1406 | """ 1407 | GET //keys?list=true 1408 | """ 1409 | url = '/v1/{0}/keys?list=true'.format(mount_point) 1410 | return self._get(url).json() 1411 | 1412 | def transit_delete_key(self, name, mount_point='transit'): 1413 | """ 1414 | DELETE //keys/ 1415 | """ 1416 | url = '/v1/{0}/keys/{1}'.format(mount_point, name) 1417 | return self._delete(url) 1418 | 1419 | def transit_update_key(self, 1420 | name, 1421 | min_decryption_version=None, 1422 | min_encryption_version=None, 1423 | deletion_allowed=None, 1424 | mount_point='transit'): 1425 | """ 1426 | POST //keys//config 1427 | """ 1428 | url = '/v1/{0}/keys/{1}/config'.format(mount_point, name) 1429 | params = {} 1430 | if min_decryption_version is not None: 1431 | params['min_decryption_version'] = min_decryption_version 1432 | if min_encryption_version is not None: 1433 | params['min_encryption_version'] = min_encryption_version 1434 | if deletion_allowed is not None: 1435 | params['deletion_allowed'] = deletion_allowed 1436 | 1437 | return self._post(url, json=params) 1438 | 1439 | def transit_rotate_key(self, name, mount_point='transit'): 1440 | """ 1441 | POST //keys//rotate 1442 | """ 1443 | url = '/v1/{0}/keys/{1}/rotate'.format(mount_point, name) 1444 | return self._post(url) 1445 | 1446 | def transit_export_key(self, 1447 | name, 
1448 | key_type, 1449 | version=None, 1450 | mount_point='transit'): 1451 | """ 1452 | GET //export//(/) 1453 | """ 1454 | if version is not None: 1455 | url = '/v1/{0}/export/{1}/{2}/{3}'.format(mount_point, key_type, 1456 | name, version) 1457 | else: 1458 | url = '/v1/{0}/export/{1}/{2}'.format(mount_point, key_type, name) 1459 | return self._get(url).json() 1460 | 1461 | def transit_encrypt_data(self, 1462 | name, 1463 | plaintext, 1464 | context=None, 1465 | key_version=None, 1466 | nonce=None, 1467 | batch_input=None, 1468 | key_type=None, 1469 | convergent_encryption=None, 1470 | mount_point='transit'): 1471 | """ 1472 | POST //encrypt/ 1473 | """ 1474 | url = '/v1/{0}/encrypt/{1}'.format(mount_point, name) 1475 | params = {'plaintext': plaintext} 1476 | if context is not None: 1477 | params['context'] = context 1478 | if key_version is not None: 1479 | params['key_version'] = key_version 1480 | if nonce is not None: 1481 | params['nonce'] = nonce 1482 | if batch_input is not None: 1483 | params['batch_input'] = batch_input 1484 | if key_type is not None: 1485 | params['type'] = key_type 1486 | if convergent_encryption is not None: 1487 | params['convergent_encryption'] = convergent_encryption 1488 | 1489 | return self._post(url, json=params).json() 1490 | 1491 | def transit_decrypt_data(self, 1492 | name, 1493 | ciphertext, 1494 | context=None, 1495 | nonce=None, 1496 | batch_input=None, 1497 | mount_point='transit'): 1498 | """ 1499 | POST //decrypt/ 1500 | """ 1501 | url = '/v1/{0}/decrypt/{1}'.format(mount_point, name) 1502 | params = {'ciphertext': ciphertext} 1503 | if context is not None: 1504 | params['context'] = context 1505 | if nonce is not None: 1506 | params['nonce'] = nonce 1507 | if batch_input is not None: 1508 | params['batch_input'] = batch_input 1509 | 1510 | return self._post(url, json=params).json() 1511 | 1512 | def transit_rewrap_data(self, 1513 | name, 1514 | ciphertext, 1515 | context=None, 1516 | key_version=None, 1517 | nonce=None, 1518 | batch_input=None, 1519 | mount_point='transit'): 1520 | """ 1521 | POST //rewrap/ 1522 | """ 1523 | url = '/v1/{0}/rewrap/{1}'.format(mount_point, name) 1524 | params = {'ciphertext': ciphertext} 1525 | if context is not None: 1526 | params['context'] = context 1527 | if key_version is not None: 1528 | params['key_version'] = key_version 1529 | if nonce is not None: 1530 | params['nonce'] = nonce 1531 | if batch_input is not None: 1532 | params['batch_input'] = batch_input 1533 | 1534 | return self._post(url, json=params).json() 1535 | 1536 | def transit_generate_data_key(self, 1537 | name, 1538 | key_type, 1539 | context=None, 1540 | nonce=None, 1541 | bits=None, 1542 | mount_point='transit'): 1543 | """ 1544 | POST //datakey// 1545 | """ 1546 | url = '/v1/{0}/datakey/{1}/{2}'.format(mount_point, key_type, name) 1547 | params = {} 1548 | if context is not None: 1549 | params['context'] = context 1550 | if nonce is not None: 1551 | params['nonce'] = nonce 1552 | if bits is not None: 1553 | params['bits'] = bits 1554 | 1555 | return self._post(url, json=params).json() 1556 | 1557 | def transit_generate_rand_bytes(self, 1558 | data_bytes=None, 1559 | output_format=None, 1560 | mount_point='transit'): 1561 | """ 1562 | POST //random(/) 1563 | """ 1564 | if data_bytes is not None: 1565 | url = '/v1/{0}/random/{1}'.format(mount_point, data_bytes) 1566 | else: 1567 | url = '/v1/{0}/random'.format(mount_point) 1568 | 1569 | params = {} 1570 | if output_format is not None: 1571 | params["format"] = output_format 1572 | 1573 | 
return self._post(url, json=params).json() 1574 | 1575 | def transit_hash_data(self, 1576 | hash_input, 1577 | algorithm=None, 1578 | output_format=None, 1579 | mount_point='transit'): 1580 | """ 1581 | POST //hash(/) 1582 | """ 1583 | if algorithm is not None: 1584 | url = '/v1/{0}/hash/{1}'.format(mount_point, algorithm) 1585 | else: 1586 | url = '/v1/{0}/hash'.format(mount_point) 1587 | 1588 | params = {'input': hash_input} 1589 | if output_format is not None: 1590 | params['format'] = output_format 1591 | 1592 | return self._post(url, json=params).json() 1593 | 1594 | def transit_generate_hmac(self, 1595 | name, 1596 | hmac_input, 1597 | key_version=None, 1598 | algorithm=None, 1599 | mount_point='transit'): 1600 | """ 1601 | POST //hmac/(/) 1602 | """ 1603 | if algorithm is not None: 1604 | url = '/v1/{0}/hmac/{1}/{2}'.format(mount_point, name, algorithm) 1605 | else: 1606 | url = '/v1/{0}/hmac/{1}'.format(mount_point, name) 1607 | params = {'input': hmac_input} 1608 | if key_version is not None: 1609 | params['key_version'] = key_version 1610 | 1611 | return self._post(url, json=params).json() 1612 | 1613 | def transit_sign_data(self, 1614 | name, 1615 | input_data, 1616 | key_version=None, 1617 | algorithm=None, 1618 | context=None, 1619 | prehashed=None, 1620 | mount_point='transit'): 1621 | """ 1622 | POST //sign/(/) 1623 | """ 1624 | if algorithm is not None: 1625 | url = '/v1/{0}/sign/{1}/{2}'.format(mount_point, name, algorithm) 1626 | else: 1627 | url = '/v1/{0}/sign/{1}'.format(mount_point, name) 1628 | 1629 | params = {'input': input_data} 1630 | if key_version is not None: 1631 | params['key_version'] = key_version 1632 | if context is not None: 1633 | params['context'] = context 1634 | if prehashed is not None: 1635 | params['prehashed'] = prehashed 1636 | 1637 | return self._post(url, json=params).json() 1638 | 1639 | def transit_verify_signed_data(self, 1640 | name, 1641 | input_data, 1642 | algorithm=None, 1643 | signature=None, 1644 | hmac=None, 1645 | context=None, 1646 | prehashed=None, 1647 | mount_point='transit'): 1648 | """ 1649 | POST //verify/(/) 1650 | """ 1651 | if algorithm is not None: 1652 | url = '/v1/{0}/verify/{1}/{2}'.format(mount_point, name, algorithm) 1653 | else: 1654 | url = '/v1/{0}/verify/{1}'.format(mount_point, name) 1655 | 1656 | params = {'input': input_data} 1657 | if signature is not None: 1658 | params['signature'] = signature 1659 | if hmac is not None: 1660 | params['hmac'] = hmac 1661 | if context is not None: 1662 | params['context'] = context 1663 | if prehashed is not None: 1664 | params['prehashed'] = prehashed 1665 | 1666 | return self._post(url, json=params).json() 1667 | 1668 | def close(self): 1669 | """ 1670 | Close the underlying Requests session 1671 | """ 1672 | self.session.close() 1673 | 1674 | def _get(self, url, **kwargs): 1675 | return self.__request('get', url, **kwargs) 1676 | 1677 | def _post(self, url, **kwargs): 1678 | return self.__request('post', url, **kwargs) 1679 | 1680 | def _put(self, url, **kwargs): 1681 | return self.__request('put', url, **kwargs) 1682 | 1683 | def _delete(self, url, **kwargs): 1684 | return self.__request('delete', url, **kwargs) 1685 | 1686 | def __request(self, method, url, headers=None, **kwargs): 1687 | url = urljoin(self._url, url) 1688 | 1689 | if not headers: 1690 | headers = {} 1691 | 1692 | if self.token: 1693 | headers['X-Vault-Token'] = self.token 1694 | 1695 | wrap_ttl = kwargs.pop('wrap_ttl', None) 1696 | if wrap_ttl: 1697 | headers['X-Vault-Wrap-TTL'] = str(wrap_ttl) 1698 
| 1699 | _kwargs = self._kwargs.copy() 1700 | _kwargs.update(kwargs) 1701 | 1702 | response = self.session.request( 1703 | method, url, headers=headers, allow_redirects=False, **_kwargs) 1704 | 1705 | # NOTE(ianunruh): workaround for https://github.com/ianunruh/hvac/issues/51 1706 | while response.is_redirect and self.allow_redirects: 1707 | url = urljoin(self._url, response.headers['Location']) 1708 | response = self.session.request( 1709 | method, url, headers=headers, allow_redirects=False, **_kwargs) 1710 | 1711 | if response.status_code >= 400 and response.status_code < 600: 1712 | text = errors = None 1713 | if response.headers.get('Content-Type') == 'application/json': 1714 | errors = response.json().get('errors') 1715 | if errors is None: 1716 | text = response.text 1717 | self.__raise_error(response.status_code, text, errors=errors) 1718 | 1719 | return response 1720 | 1721 | def __raise_error(self, status_code, message=None, errors=None): 1722 | if status_code == 400: 1723 | raise InvalidRequest(message, errors=errors) 1724 | elif status_code == 401: 1725 | raise Unauthorized(message, errors=errors) 1726 | elif status_code == 403: 1727 | raise Forbidden(message, errors=errors) 1728 | elif status_code == 404: 1729 | raise InvalidPath(message, errors=errors) 1730 | elif status_code == 429: 1731 | raise RateLimitExceeded(message, errors=errors) 1732 | elif status_code == 500: 1733 | raise InternalServerError(message, errors=errors) 1734 | elif status_code == 501: 1735 | raise VaultNotInitialized(message, errors=errors) 1736 | elif status_code == 503: 1737 | raise VaultDown(message, errors=errors) 1738 | else: 1739 | raise UnexpectedError(message) 1740 | 1741 | 1742 | def cache_client(client_builder): 1743 | _client = [] 1744 | @wraps(client_builder) 1745 | def get_client(*args, **kwargs): 1746 | if not _client: 1747 | _client.append(client_builder(*args, **kwargs)) 1748 | return _client[0] 1749 | return get_client 1750 | 1751 | 1752 | @cache_client 1753 | def build_client(url='https://localhost:8200', 1754 | token=None, 1755 | cert=None, 1756 | verify=True, 1757 | timeout=30, 1758 | proxies=None, 1759 | allow_redirects=True, 1760 | session=None): 1761 | client_kwargs = locals() 1762 | for k, v in client_kwargs.items(): 1763 | if k.startswith('_'): 1764 | continue 1765 | arg_val = __salt__['config.get']('vault.{key}'.format(key=k), v) 1766 | log.debug('Setting {0} parameter for HVAC client to {1}.' 
1767 | .format(k, arg_val)) 1768 | client_kwargs[k] = arg_val 1769 | return VaultClient(**client_kwargs) 1770 | 1771 | 1772 | def vault_client(): 1773 | return VaultClient 1774 | 1775 | 1776 | def vault_error(error_type=None): 1777 | if error_type is None: 1778 | return VaultError 1779 | else: 1780 | error_dict = { 1781 | 'InvalidRequest': InvalidRequest, 1782 | 'Unauthorized': Unauthorized, 1783 | 'Forbidden': Forbidden, 1784 | 'InvalidPath': InvalidPath, 1785 | 'RateLimitExceeded': RateLimitExceeded, 1786 | 'InternalServerError': InternalServerError, 1787 | 'VaultNotInitialized': VaultNotInitialized, 1788 | 'VaultDown': VaultDown, 1789 | 'UnexpectedError': UnexpectedError 1790 | } 1791 | return error_dict[error_type] 1792 | 1793 | 1794 | def bind_client(unbound_function): 1795 | @wraps(unbound_function) 1796 | def bound_function(*args, **kwargs): 1797 | filtered_kwargs = {k: v for k, v in kwargs.items() 1798 | if not k.startswith('_')} 1799 | ignore_invalid = filtered_kwargs.pop('ignore_invalid', None) 1800 | client = build_client() 1801 | try: 1802 | return unbound_function(client, *args, **filtered_kwargs) 1803 | except InvalidRequest: 1804 | if ignore_invalid: 1805 | return None 1806 | else: 1807 | raise 1808 | return bound_function 1809 | 1810 | 1811 | def get_keybase_pubkey(username): 1812 | """ 1813 | Return the base64 encoded public PGP key for a keybase user. 1814 | """ 1815 | # Retrieve the text of the public key stored in Keybase 1816 | user = requests.get('https://keybase.io/{username}/key.asc'.format( 1817 | username=username)) 1818 | # Explicitly raise an exception if there is an HTTP error. No-op on success 1819 | user.raise_for_status() 1820 | # Process the key to only include the contents and not the wrapping 1821 | # contents (e.g. ----BEGIN PGP KEY---) 1822 | key_lines = user.text.strip('\n').split('\n') 1823 | key_lines = key_lines[key_lines.index(''):-2] 1824 | return ''.join(key_lines) 1825 | 1826 | 1827 | def unseal(sealing_keys): 1828 | client = build_client() 1829 | client.unseal_multi(sealing_keys) 1830 | 1831 | 1832 | def rekey(secret_shares, secret_threshold, sealing_keys, pgp_keys, root_token): 1833 | client = build_client(token=root_token) 1834 | rekey = client.start_rekey(secret_shares, secret_threshold, pgp_keys, 1835 | backup=True) 1836 | client.rekey_multi(sealing_keys, nonce=rekey['nonce']) 1837 | 1838 | 1839 | def wait_after_init(client, retries=5): 1840 | '''This function will allow for a configurable delay before attempting 1841 | to issue requests after an initialization. 
This is necessary because when 1842 | running on an HA backend there is a short period where the Vault instance 1843 | will be on standby while it acquires the lock.''' 1844 | ready = False 1845 | while retries > 0 and not ready: 1846 | try: 1847 | status = client.read('sys/health') 1848 | ready = (status.get('initialized') and not status.get('sealed') 1849 | and not status.get('standby')) 1850 | except VaultError: 1851 | pass 1852 | if ready: 1853 | break 1854 | retries -= 1 1855 | time.sleep(1) 1856 | -------------------------------------------------------------------------------- /minion.conf: -------------------------------------------------------------------------------- 1 | file_client: local 2 | 3 | fileserver_backend: 4 | - git 5 | - roots 6 | 7 | gitfs_provider: gitpython 8 | 9 | gitfs_remotes: 10 | - https://github.com/mitodl/salt-extensions: 11 | - root: extensions 12 | 13 | vault.verify: False 14 | -------------------------------------------------------------------------------- /pillar.example: -------------------------------------------------------------------------------- 1 | vault: 2 | overrides: 3 | keybase_users: 4 | - renaissancedev 5 | - pdpinch 6 | - bdero 7 | secret_shares: 3 8 | secret_threshold: 2 9 | config: 10 | backend: 11 | file: 12 | path: /var/vault/data 13 | -------------------------------------------------------------------------------- /requirements.dev.txt: -------------------------------------------------------------------------------- 1 | pytest 2 | pytest-xdist 3 | testinfra 4 | -------------------------------------------------------------------------------- /salt-top.example: -------------------------------------------------------------------------------- 1 | base: 2 | '*': 3 | - vault 4 | - vault.tests 5 | -------------------------------------------------------------------------------- /scripts/gitfs_deps.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [ $(which apt-get) ]; 4 | then 5 | sudo apt-get update 6 | PKG_MANAGER="apt-get" 7 | PKGS="python python-dev git curl" 8 | else 9 | PKG_MANAGER="yum" 10 | PKGS="python python-devel git curl" 11 | fi 12 | 13 | sudo $PKG_MANAGER -y install $PKGS 14 | 15 | if [ $(which pip) ]; 16 | then 17 | echo '' 18 | else 19 | curl -L "https://bootstrap.pypa.io/get-pip.py" > get_pip.py 20 | sudo python get_pip.py 21 | rm get_pip.py 22 | sudo pip install gitpython 23 | fi 24 | -------------------------------------------------------------------------------- /scripts/testinfra.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [[ -z $(which pip) ]] 4 | then 5 | sudo salt-call --local pkg.install python-pip 6 | fi 7 | if [[ -z $(which testinfra) ]] 8 | then 9 | sudo pip install testinfra 10 | fi 11 | if [ "$(ls /vagrant)" ] 12 | then 13 | SRCDIR=/vagrant 14 | else 15 | SRCDIR=/home/vagrant/sync 16 | fi 17 | sudo rm -rf $SRCDIR/tests/__pycache__ 18 | testinfra $SRCDIR/tests 19 | -------------------------------------------------------------------------------- /scripts/vagrant_setup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | sudo mkdir -p /srv/salt 4 | sudo mkdir -p /srv/pillar 5 | sudo cp /srv/salt/pillar.example /srv/pillar/pillar.sls 6 | echo "\ 7 | base: 8 | '*': 9 | - pillar" | sudo tee /srv/pillar/top.sls 10 | sudo cp /srv/salt/salt-top.example /srv/salt/top.sls 11 | 
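The test module that follows in /tests/test_hashicorp-vault.py is currently a stub that only asserts a tautology. A fuller testinfra check could mirror the formula's own test states (vault/tests/test_install.sls and vault/tests/test_configure.sls). The sketch below is illustrative only and not part of the repository; it assumes testinfra's built-in host fixture and the paths used by the install and configure states:

"""Hedged sketch of a fuller testinfra module; not part of the formula."""


def test_vault_binary_installed(host):
    # install.sls extracts the release zip to /usr/local/bin/vault with mode 0755
    vault_bin = host.file('/usr/local/bin/vault')
    assert vault_bin.exists
    assert vault_bin.is_file
    assert vault_bin.mode == 0o755


def test_vault_config_present(host):
    # configure.sls renders the pillar-driven config to /etc/vault/vault.json
    assert host.file('/etc/vault/vault.json').is_file


def test_vault_service_running(host):
    # service.sls enables and starts the 'vault' service
    vault_svc = host.service('vault')
    assert vault_svc.is_enabled
    assert vault_svc.is_running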
-------------------------------------------------------------------------------- /tests/test_hashicorp-vault.py: -------------------------------------------------------------------------------- 1 | """Use testinfra and py.test to verify formula works properly""" 2 | 3 | def test_stub(): 4 | assert True is not False 5 | -------------------------------------------------------------------------------- /vault/configure.sls: -------------------------------------------------------------------------------- 1 | {% from "vault/map.jinja" import vault with context %} 2 | 3 | include: 4 | - .service 5 | 6 | configure_vault_server: 7 | file.managed: 8 | - name: /etc/vault/vault.json 9 | - makedirs: True 10 | - contents: | 11 | {{ vault.config | json(indent=2, sort_keys=True) | indent(8) }} 12 | - watch_in: 13 | - service: vault_service_running 14 | - require_in: 15 | - service: vault_service_running 16 | -------------------------------------------------------------------------------- /vault/files/README.rst: -------------------------------------------------------------------------------- 1 | Files 2 | ===== 3 | 4 | Put your files here (e.g. scripts, configuration files etc.). 5 | 6 | They should be relevant to vault-formula and should be used as is 7 | (i.e. those should NOT be templates that are renderend using Jinja or any other template engine) 8 | -------------------------------------------------------------------------------- /vault/files/vault.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Hashicorp Vault secret management service 3 | Documentation=http://vaultproject.io 4 | 5 | [Service] 6 | Type=forking 7 | ExecStart=/usr/local/bin/vault server -config /etc/vault 2>&1 & 8 | ExecReload=/bin/kill -HUP $MAINPID 9 | KillSignal=SIGTERM 10 | Restart=always 11 | PIDFile=/var/run/vault.pid 12 | TimeoutStartSec=1 13 | User={{ vault.user }} 14 | Group={{ vault.group }} 15 | 16 | [Install] 17 | WantedBy=multi-user.target 18 | -------------------------------------------------------------------------------- /vault/files/vault.upstart: -------------------------------------------------------------------------------- 1 | description "Hashicorp Vault secrets management server" 2 | author "Tobias Macey" 3 | start on filesystem or runlevel [2345] 4 | stop on shutdown 5 | 6 | script 7 | /usr/local/bin/vaultserver start 8 | end script 9 | 10 | pre-stop script 11 | /usr/local/bin/vaultserver stop 12 | end script 13 | -------------------------------------------------------------------------------- /vault/files/vault_service.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | PIDFILE=/var/run/vault.pid 4 | 5 | start () { 6 | /usr/local/bin/vault server -config /etc/vault 2>&1 & 7 | echo $! 
> $PIDFILE 8 | echo "Vult is starting" 9 | } 10 | 11 | stop () { 12 | if [ -e $PIDFILE ] 13 | then 14 | /bin/kill $(cat $PIDFILE) 15 | echo "Stopping Vault" 16 | else 17 | echo 'Vault is not running' 18 | fi 19 | } 20 | 21 | reload () { 22 | if [ -e $PIDFILE ] 23 | then 24 | /bin/kill -1 $(cat $PIDFILE) 25 | echo "Vault reloaded" 26 | else 27 | echo 'Vault is not running' 28 | fi 29 | } 30 | 31 | case $1 in 32 | start) 33 | start 34 | ;; 35 | stop) 36 | stop 37 | ;; 38 | reload) 39 | reload 40 | ;; 41 | restart) 42 | stop 43 | sleep 1 44 | start 45 | ;; 46 | *) exit 1 47 | esac 48 | -------------------------------------------------------------------------------- /vault/init.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - .install 3 | - .configure 4 | - .service 5 | - .install_module_dependencies 6 | -------------------------------------------------------------------------------- /vault/initialize.sls: -------------------------------------------------------------------------------- 1 | {% from "vault/map.jinja" import vault with context %} 2 | 3 | include: 4 | - .install_module_dependencies 5 | 6 | install_hvac_library: 7 | pip.installed: 8 | - name: hvac 9 | - reload_modules: True 10 | 11 | initialize_vault_server: 12 | vault.initialized: 13 | - secret_shares: {{ vault.secret_shares }} 14 | - secret_threshold: {{ vault.secret_threshold }} 15 | - unseal: {{ vault.unseal }} 16 | - pgp_keys: {{ vault.pgp_keys }} 17 | - keybase_users: {{ vault.keybase_users }} 18 | -------------------------------------------------------------------------------- /vault/install.sls: -------------------------------------------------------------------------------- 1 | {% from "vault/map.jinja" import vault, vault_service with context %} 2 | 3 | include: 4 | - .configure 5 | - .service 6 | - .install_module_dependencies 7 | 8 | install_vault_binary: 9 | archive.extracted: 10 | - name: /usr/local/bin/ 11 | - source: {{ vault.repo_base_url }}/{{ vault.version }}/vault_{{ vault.version }}_linux_{{ vault.architecture_dict[grains['osarch']] }}.zip 12 | - source_hash: {{ vault.repo_base_url }}/{{ vault.version }}/vault_{{ vault.version }}_SHA256SUMS 13 | - archive_format: zip 14 | - if_missing: /usr/local/bin/vault 15 | - source_hash_update: True 16 | - enforce_toplevel: False 17 | file.managed: 18 | - name: /usr/local/bin/vault 19 | - mode: '0755' 20 | - require: 21 | - archive: install_vault_binary 22 | - require_in: 23 | - file: configure_vault_server 24 | 25 | install_vault_server_management_script: 26 | file.managed: 27 | - name: /usr/local/bin/vaultserver 28 | - source: salt://vault/files/vault_service.sh 29 | - mode: '0755' 30 | - require_in: 31 | - service: vault_service_running 32 | 33 | install_vault_init_configuration: 34 | file.managed: 35 | - name: {{ vault_service.init_file }} 36 | - source: {{ vault_service.init_source }} 37 | - require_in: 38 | - service: vault_service_running 39 | 40 | {% if salt.grains.get('init') == 'systemd' %} 41 | reload_systemd_units: 42 | cmd.wait: 43 | - name: systemctl daemon-reload 44 | - watch: 45 | - file: install_vault_init_configuration 46 | - require_in: 47 | - service: vault_service_running 48 | {% endif %} 49 | 50 | ensure_vault_ssl_directory: 51 | file.directory: 52 | - name: {{ vault.ssl_directory }}/certs 53 | - makedirs: True 54 | 55 | {% if vault.ssl.get('cert_source') or vault.ssl.get('cert_contents') %} 56 | setup_vault_ssl_cert: 57 | file.managed: 58 | - name: {{vault.ssl_directory}}/certs/{{ 
vault.ssl.cert_file }} 59 | {% if vault.ssl.get('cert_source') %} 60 | - source: {{ vault.ssl.cert_source }} 61 | {% elif vault.ssl.get('cert_contents') %} 62 | - contents: | 63 | {{ vault.ssl.cert_contents | indent(8) }} 64 | {% endif %} 65 | - makedirs: True 66 | - require_in: 67 | - service: vault_service_running 68 | 69 | setup_vault_ssl_key: 70 | file.managed: 71 | - name: {{vault.ssl_directory}}/certs/{{ vault.ssl.key_file }} 72 | {% if vault.ssl.get('key_source') %} 73 | - source: {{ vault.ssl.key_source }} 74 | {% elif vault.ssl.get('key_contents') %} 75 | - contents: | 76 | {{ vault.ssl.key_contents | indent(8) }} 77 | {% endif %} 78 | - makedirs: True 79 | - require_in: 80 | - service: vault_service_running 81 | {% else %} 82 | install_tls_module_dependency: 83 | pip.installed: 84 | - name: pyopenssl 85 | - reload_modules: True 86 | - require: 87 | - pkg: install_package_dependencies 88 | 89 | setup_vault_ssl_cert: 90 | module.run: 91 | - name: tls.create_self_signed_cert 92 | - tls_dir: '' 93 | - cacert_path: {{ vault.ssl_directory }} 94 | - makedirs: True 95 | {% for arg, val in salt.pillar.get('vault:ssl:cert_params', 96 | {'CN': 'vault.example.com'}).items() -%} 97 | - {{ arg }}: {{ val }} 98 | {% endfor -%} 99 | - require: 100 | - pip: install_tls_module_dependency 101 | - require_in: 102 | - service: vault_service_running 103 | {% endif %} 104 | -------------------------------------------------------------------------------- /vault/install_module_dependencies.sls: -------------------------------------------------------------------------------- 1 | {% from "vault/map.jinja" import vault, vault_service with context %} 2 | 3 | install_package_dependencies: 4 | pkg.installed: 5 | - pkgs: {{ vault.module_dependencies.pkgs }} 6 | - reload_modules: True 7 | 8 | install_vault_pip_executable: 9 | cmd.run: 10 | - name: | 11 | curl -L "https://bootstrap.pypa.io/get-pip.py" > get_pip.py 12 | {{ salt.grains.get('pythonexecutable') }} get_pip.py 13 | rm get_pip.py 14 | - reload_modules: True 15 | - unless: {{ salt.grains.get('pythonexecutable') }} -m pip --version 16 | -------------------------------------------------------------------------------- /vault/map.jinja: -------------------------------------------------------------------------------- 1 | {% set vault = salt.grains.filter_by({ 2 | 'default': { 3 | 'architecture_dict': { 4 | 'x86_64': 'amd64', 5 | 'amd64': 'amd64', 6 | 'i386': '386', 7 | 'i686': '386' 8 | }, 9 | 'service': 'vault', 10 | 'ssl_directory': '/etc/salt/ssl', 11 | 'ssl': {}, 12 | 'config': { 13 | 'listener': { 14 | 'tcp': { 15 | 'address': 'localhost:8200', 16 | 'tls_cert_file': '/etc/salt/ssl/certs/vault.example.com.crt', 17 | 'tls_key_file': '/etc/salt/ssl/certs/vault.example.com.key' 18 | } 19 | }, 20 | }, 21 | 'version': '0.6.0', 22 | 'repo_base_url': 'https://releases.hashicorp.com/vault', 23 | 'secret_shares': 5, 24 | 'secret_threshold': 3, 25 | 'unseal': True, 26 | 'pgp_keys': [], 27 | 'keybase_users': [] 28 | }, 29 | 'Debian': { 30 | 'module_dependencies': { 31 | 'pkgs': ['libffi-dev', 'python-dev', 'gcc', 'libssl-dev', 'python', 'curl'], 32 | 'pip_deps': ['pyopenssl', 'hvac', 'testinfra'] 33 | } 34 | }, 35 | 'RedHat': { 36 | 'module_dependencies': { 37 | 'pkgs': ['libffi-devel', 'openssl-devel', 'gcc', 'python-devel', 'curl'], 38 | 'pip_deps': ['pyopenssl', 'hvac', 'testinfra'] 39 | } 40 | }, 41 | }, grain='os_family', merge=salt.pillar.get('vault:overrides'), base='default') %} 42 | 43 | {% set vault_service = salt.grains.filter_by({ 44 | 'systemd': { 45 | 
'init_file': '/etc/systemd/system/vault.service', 46 | 'init_source': 'salt://vault/files/vault.service' 47 | }, 48 | 'upstart': { 49 | 'init_file': '/etc/init/vault.conf', 50 | 'init_source': 'salt://vault/files/vault.upstart' 51 | } 52 | }, grain='init', merge=salt.pillar.get('vault:service')) %} 53 | -------------------------------------------------------------------------------- /vault/service.sls: -------------------------------------------------------------------------------- 1 | vault_service_running: 2 | service.running: 3 | - name: vault 4 | - enable: True 5 | - reload: True 6 | -------------------------------------------------------------------------------- /vault/templates/README.rst: -------------------------------------------------------------------------------- 1 | Templates 2 | ========= 3 | 4 | Put your templates here. 5 | 6 | They should be relevant to vault-formula and should be used 7 | to generate files using a template engine (e.g. Jinja). 8 | -------------------------------------------------------------------------------- /vault/tests/init.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - vault.install_module_dependencies 3 | - .test_install 4 | - .test_configure 5 | 6 | install_testinfra_library_for_vault_testing: 7 | pip.installed: 8 | - name: testinfra 9 | - reload_modules: True 10 | - order: 1 11 | -------------------------------------------------------------------------------- /vault/tests/test_configure.sls: -------------------------------------------------------------------------------- 1 | test_config_file_present: 2 | testinfra.file: 3 | - name: /etc/vault/vault.json 4 | - exists: True 5 | - is_file: True 6 | -------------------------------------------------------------------------------- /vault/tests/test_install.sls: -------------------------------------------------------------------------------- 1 | {% from "vault/map.jinja" import vault, vault_service with context %} 2 | 3 | test_vault_is_installed: 4 | testinfra.file: 5 | - name: /usr/local/bin/vault 6 | - exists: True 7 | - mode: 8 | expected: 493 9 | comparison: eq 10 | 11 | test_vault_service_script_present: 12 | testinfra.file: 13 | - name: /usr/local/bin/vaultserver 14 | - exists: True 15 | - is_file: True 16 | - mode: 17 | expected: 493 18 | comparison: eq 19 | 20 | test_vault_service_init_file_present: 21 | testinfra.file: 22 | - name: {{ vault_service.init_file }} 23 | - exists: True 24 | - is_file: True 25 | 26 | test_vault_service_enabled: 27 | testinfra.service: 28 | - name: vault 29 | - is_enabled: True 30 | - is_running: True 31 | -------------------------------------------------------------------------------- /vault/upgrade.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - .install 3 | - .service 4 | 5 | rename_old_vault_binary_for_backup: 6 | file.rename: 7 | - name: /usr/local/bin/vault.bak 8 | - source: /usr/local/bin/vault 9 | - require_in: 10 | - archive: install_vault_binary 11 | 12 | extend: 13 | vault_service_running: 14 | service: 15 | - reload: False 16 | - watch: 17 | - archive: install_vault_binary 18 | --------------------------------------------------------------------------------
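As a closing illustration of the client implemented in _utils/vault.py, the sketch below exercises VaultClient directly against a local Vault server. It is a hypothetical example, not part of the formula: the URL, token, and secret path are assumptions, and within the formula the client is normally obtained through build_client() so that connection details are layered in from the minion's vault.* configuration values.

# Hedged usage sketch for the VaultClient defined in _utils/vault.py.
# The URL, token, and secret path are illustrative assumptions only.
from vault import VaultClient  # assumes _utils/vault.py is importable as `vault`

client = VaultClient(url='https://localhost:8200',
                     token='example-root-token',
                     verify=False)  # default formula config uses a self-signed cert

if not client.is_initialized():
    raise SystemExit('Vault has not been initialized; apply vault.initialize first')

# write() POSTs the keyword arguments as JSON to /v1/<path>; read() GETs the
# path back and returns the decoded JSON body, or None when the path is invalid.
client.write('secret/example', password='s3cr3t')
stored = client.read('secret/example')
if stored is not None:
    print(stored['data']['password'])

client.close()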