├── .gitignore ├── README.md ├── TIPS.md ├── conftest.py ├── copyright ├── dibctl ├── __init__.py ├── commands.py ├── config.py ├── dib.py ├── do_tests.py ├── image_preprocessing.py ├── osclient.py ├── prepare_os.py ├── pytest_runner.py ├── shell_runner.py ├── ssh.py ├── timeout.py └── version.py ├── docs ├── example_configs │ ├── images.yaml │ ├── test.yaml │ └── upload.yaml ├── exit_codes.md └── tests_examples │ ├── pytest.md │ ├── pytest_examples │ ├── example.py │ └── fail.py │ └── shell_examples.d │ ├── 01-simple_success.bash │ ├── 02-simple_failure.bash │ └── 03-check_ssh.bash ├── doctests ├── __init__.py └── test_docs.py ├── integration_tests ├── README.md ├── cassettes │ ├── test_command_rotate_bad_passwd.yaml │ ├── test_command_rotate_dry_run_no_candidates.yaml │ ├── test_command_rotate_dry_run_with_candidates.yaml │ ├── test_command_rotate_no_candidates.yaml │ ├── test_command_rotate_nova_forbidden.yaml │ ├── test_command_rotate_only_used_obsolete.yaml │ ├── test_command_rotate_used_and_unused_images.yaml │ ├── test_command_rotate_with_candidates.yaml │ ├── test_instance_in_error_state_code_70.yaml │ ├── test_instance_is_not_answer_port_code_71.yaml │ ├── test_non_existing_image_code_50.yaml │ ├── test_non_existing_network_code_60.yaml │ ├── test_test_existing_image_success.yaml │ ├── test_test_image_with_min_size_more_than_flavor_code_60.yaml │ ├── test_test_image_with_sizes.yaml │ ├── test_test_imagetest_failed_code_80.yaml │ ├── test_test_normal.yaml │ ├── test_upload_bad_credentials.yaml │ ├── test_upload_image_normal_no_obsolete.yaml │ ├── test_upload_image_normal_no_obsolete_convertion.yaml │ ├── test_upload_image_normal_obsolete.yaml │ └── test_upload_image_with_sizes_and_protection.yaml ├── clear_creds.py ├── conftest.py ├── damaged.img.qcow2 ├── dibctl │ ├── images.yaml │ ├── test.yaml │ └── upload.yaml ├── empty.img.qcow2 ├── simple_fail.bash ├── simple_success.bash ├── test_command_rotate.py ├── test_command_test.py ├── test_command_upload.py ├── test_simple_no_net.py └── xenial.img.qcow2 ├── requirements.txt ├── setup.py ├── specs ├── README.md ├── conf.d.md ├── external_commands.md ├── preprocessing.md └── rotate.md ├── test-requirements.txt ├── tests ├── __init__.py ├── bad_configs │ ├── 00-example.image.yaml │ ├── 00-example.test.yaml │ ├── 00-example.upload.yaml │ ├── 01-no-flavor-and-flavor-id.test.yaml │ ├── 02-wrong-tests-env-vars.test.yaml │ ├── 03-wrong_type_for_protected.upload.yaml │ ├── 04-wrong_type_for_min_disk.upload.yaml │ ├── 05-wrong_value_for_min_ram.upload.yaml │ ├── 06-unknown_variable_in_glance_section.upload.yaml │ └── README ├── test_bad_configs.py ├── test_commands.py ├── test_config.py ├── test_dib.py ├── test_do_tests.py ├── test_image_preprocessing.py ├── test_osclient.py ├── test_prepare_os.py ├── test_pytest_runner.py ├── test_shell_runner.py ├── test_ssh.py └── test_timeout.py ├── tox.ini └── workflow.md /.gitignore: -------------------------------------------------------------------------------- 1 | .venv 2 | *.pyc 3 | __pycache__ 4 | .cache 5 | .coverage 6 | dibctl.egg-info/ 7 | build/ 8 | os_cred 9 | dibtests 10 | integration_tests/cassettes/*.old 11 | integration_tests/images.yaml 12 | integration_tests/test.yaml 13 | integration_tests/upload.yaml 14 | integration_tests/test.yaml.secret 15 | integration_tests/upload.yaml.secret 16 | integration_tests/images.yaml.secret 17 | -------------------------------------------------------------------------------- /TIPS.md: 
-------------------------------------------------------------------------------- 1 | How to debug images 2 | ------------------- 3 | 4 | Sooner or later you will find that your image does not work. It's normal, and fixing it is part of 5 | making the image work. 6 | 7 | This file describes a few debugging techniques for different types of failures. 8 | 9 | Instance can't spawn 10 | ==================== 11 | 1. Check if you have your own `OS_*` credentials in the shell environment. They override 12 | test.yaml/upload.yaml settings. 13 | 2. Check if you can spawn an instance with the same settings. 14 | 15 | Instance spawns but becomes ERROR 16 | ================================ 17 | 18 | This is an issue on the OpenStack side. Check the logs. Often this is 'No valid hosts found'. 19 | Check that you are using a proper flavor and availability\_zone (if you use them). 20 | 21 | One possible mistake: the specified 'net-id' network is full and there are no 22 | free IPs left. 23 | 24 | Instance becomes active but there is no connection 25 | ================================================= 26 | 1. Enable the devuser element in the `dib.elements` section of the image config 27 | 2. Set the user, password and passwordless sudo (in `dib.environment_variables`): 28 | ``` 29 | DIB_DEV_USER_PWDLESS_SUDO: yes 30 | DIB_DEV_USER_USERNAME: dibdebug 31 | DIB_DEV_USER_PASSWORD: joj0quilUst 32 | ``` 33 | 3. Rebuild the image 34 | 4. Repeat the test with `--keep-failed-instance` 35 | 36 | It will fail again, but now you can log into the instance via the console and see why 37 | it didn't receive an IP (or has trouble with SSH). 38 | 39 | (!) Do not forget to remove the devuser element and rebuild the image after you are done, otherwise 40 | your clients/users would be really upset to find an unknown user with a password and sudo 41 | in their systems. 42 | 43 | Common reasons: 44 | 1. No cloud-init or no proper data source for cloud-init. 45 | 2. Security groups (which prevent incoming connections by default) 46 | 3. Incorrect networks (local network instead of 'internet') 47 | 4. Wrong port settings 48 | 5. `wait_for_port` timeout is too short (a common mistake for baremetal instances - they usually take more time to boot) 49 | 50 | Instance fails tests 51 | ====================== 52 | 53 | Use the `--shell` option to get a shell on the failed instance and inspect it. 54 | If you want to keep the instance after you log out of the shell, use `exit 42`.
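A typical debugging session combines the options above. A minimal sketch (the `xenial` image label comes from the example configs, and the exact command-line syntax is an assumption - check `dibctl --help` for your version):
```
# Re-run the test, keeping the failed instance around for console inspection
# (the image label is an example)
dibctl test xenial --keep-failed-instance
# Or open an interactive ssh shell on the test instance instead of running tests
dibctl test xenial --shell
```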
55 | 56 | -------------------------------------------------------------------------------- /conftest.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | from mock import sentinel 3 | import mock 4 | 5 | 6 | class MockSock(object): 7 | 8 | def __init__(self, sequence): 9 | self.sequence = sequence 10 | 11 | AF_INET = sentinel.AF_INET 12 | SOCK_STREAM = sentinel.SOCK_STREAM 13 | 14 | def socket(self, arg1, arg2): 15 | assert arg1 is self.AF_INET 16 | assert arg2 is self.SOCK_STREAM 17 | return self 18 | 19 | def connect_ex(self, ip): 20 | next = self.sequence[0] 21 | if next is None: 22 | return -1 # Loop forever with None 23 | self.sequence = self.sequence[1:] 24 | return next 25 | 26 | 27 | @pytest.fixture(scope="module") 28 | def MockSocket(request): 29 | return MockSock 30 | 31 | 32 | class MockTimeClass(object): 33 | def __init__(self): 34 | self.wallclock = 42 35 | 36 | def sleep(self, shift): 37 | self.wallclock += shift 38 | 39 | def time(self): 40 | self.wallclock += 1 41 | return self.wallclock 42 | 43 | 44 | @pytest.fixture(scope="module") 45 | def MockTime(request): 46 | return MockTimeClass 47 | 48 | 49 | @pytest.fixture 50 | def quick_commands(MockSocket, MockTime): 51 | def full_read(ignore_self, filename): 52 | return open(filename, 'rb', buffering=65536).read() 53 | from dibctl import commands 54 | with mock.patch.object(commands.prepare_os, "time", MockTime()): 55 | with mock.patch.object(commands.prepare_os, "socket", MockSocket([0])): 56 | with mock.patch.object( 57 | commands.prepare_os.uuid, 58 | "uuid4", 59 | return_value='deadbeaf-4078-11e7-8228-000000000000' 60 | ): 61 | with mock.patch.object( 62 | commands.do_tests.prepare_os.osclient.OSClient, 63 | '_file_to_upload', 64 | full_read 65 | ): 66 | yield commands 67 | -------------------------------------------------------------------------------- /copyright: -------------------------------------------------------------------------------- 1 | Copyright 2017 Servers.com 2 | 3 | Licensed under the Apache License, Version 2.0 (the "License"); 4 | you may not use this file except in compliance with the License. 5 | You may obtain a copy of the License at 6 | 7 | http://www.apache.org/licenses/LICENSE-2.0 8 | 9 | Unless required by applicable law or agreed to in writing, software 10 | distributed under the License is distributed on an "AS IS" BASIS, 11 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | See the License for the specific language governing permissions and 13 | limitations under the License. 
14 | -------------------------------------------------------------------------------- /dibctl/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/serverscom/dibctl/0749a79e64113e951cdfddfdcb4df4d061040fe2/dibctl/__init__.py -------------------------------------------------------------------------------- /dibctl/dib.py: -------------------------------------------------------------------------------- 1 | '''Wrapper around diskimage builder''' 2 | import subprocess 3 | import os 4 | import sys 5 | import pprint 6 | import pkg_resources 7 | import semantic_version 8 | 9 | 10 | class NoElementsError(IndexError): 11 | pass 12 | 13 | 14 | class BadDibVersion(EnvironmentError): 15 | pass 16 | 17 | 18 | class BadVersion(ValueError): 19 | pass 20 | 21 | 22 | class NoDibError(BadDibVersion): 23 | pass 24 | 25 | 26 | def _version(text): 27 | try: 28 | return semantic_version.Version.coerce(text) 29 | except StandardError as e: 30 | raise BadVersion("''%s' is not a proper version: %s" % (text, e.message)) 31 | 32 | 33 | def validate_version(min_version, max_version): 34 | if min_version or max_version: 35 | dib_version = get_installed_version() 36 | if min_version: 37 | if dib_version < _version(min_version): 38 | raise BadDibVersion( 39 | 'dib.min_version (%s) for image is greater that installed (%s)' % ( 40 | min_version, 41 | dib_version 42 | ) 43 | ) 44 | if max_version: 45 | if dib_version > _version(max_version): 46 | raise BadDibVersion( 47 | 'dib.max_version (%s) for image is less than installed (%s)' % ( 48 | max_version, 49 | dib_version 50 | ) 51 | ) 52 | return True 53 | 54 | 55 | def get_installed_version(): 56 | try: 57 | version = pkg_resources.get_distribution('diskimage_builder').version 58 | except pkg_resources.DistributionNotFound: 59 | raise NoDibError("diskimage-builder is not available (as python package)") 60 | return semantic_version.Version.coerce(version) 61 | 62 | 63 | class DIB(): 64 | def __init__( 65 | self, 66 | filename, 67 | elements, 68 | arch="amd64", 69 | exec_path=None, 70 | offline=False, 71 | tracing=False, 72 | additional_options=[], 73 | env={} 74 | ): 75 | if not elements: 76 | raise NoElementsError("No elements to build") 77 | self.elements = elements 78 | self.filename = filename 79 | self.exec_path = exec_path or 'disk-image-create' 80 | self.arch = arch 81 | self.tracing = tracing 82 | self.offline = offline 83 | self.additional_options = additional_options 84 | self.env = env 85 | self.cmdline = [] 86 | self._create_cmdline() 87 | 88 | def _create_cmdline(self): 89 | self.cmdline = [self.exec_path] 90 | self.cmdline.extend(['-a', self.arch]) 91 | if self.tracing: 92 | self.cmdline.append('-x') 93 | if self.offline: 94 | self.cmdline.append('--offline') 95 | self.cmdline.extend(["-o", self.filename]) 96 | if self.additional_options: 97 | self.cmdline.extend(self.additional_options) 98 | self.cmdline.extend(self.elements) 99 | 100 | def _prep_env(self): 101 | new_env = dict(os.environ) 102 | new_env.update({'ARCH': self.arch}) 103 | new_env.update(self.env) 104 | return new_env 105 | 106 | def print_settings(self, env): 107 | print("Will run disk-image-create:") 108 | print("Environment:") 109 | pprint.pprint(env) 110 | print("Command line: %s" % " ".join(self.cmdline)) 111 | 112 | def run(self): 113 | env = self._prep_env() 114 | self.print_settings(env) 115 | sys.stdout.flush() 116 | dib_process = subprocess.Popen( 117 | self.cmdline, 118 | stdout=sys.stdout, 119 | stdin=None, 120 
| stderr=sys.stderr, 121 | env=env, 122 | bufsize=1 123 | ) 124 | self.returncode = dib_process.wait() 125 | return self.returncode 126 | -------------------------------------------------------------------------------- /dibctl/do_tests.py: -------------------------------------------------------------------------------- 1 | import prepare_os 2 | import pytest_runner 3 | import shell_runner 4 | import config 5 | import os 6 | 7 | 8 | class TestError(EnvironmentError): 9 | pass 10 | 11 | 12 | class PortWaitError(EnvironmentError): 13 | pass 14 | 15 | 16 | class UnsupportedRunnerError(TestError): 17 | pass 18 | 19 | 20 | class BadTestConfigError(TestError): 21 | pass 22 | 23 | 24 | class DoTests(object): 25 | DEFAULT_PORT_WAIT_TIMEOUT = 61 26 | # DEFAULT_PORT = 22 27 | 28 | def __init__( 29 | self, 30 | image, 31 | test_env, 32 | image_uuid=None, 33 | upload_only=False, 34 | continue_on_fail=False, 35 | keep_failed_image=False, 36 | keep_failed_instance=False 37 | ): 38 | ''' 39 | - image - entry of images.yaml 40 | - env - environment to use (entry of test_environments.yaml) 41 | - image_uuid - override image-related things 42 | - upload_only - don't run tests 43 | ''' 44 | self.keep_failed_image = keep_failed_image 45 | self.keep_failed_instance = keep_failed_instance 46 | self.continue_on_fail = continue_on_fail 47 | self.image = image 48 | self.override_image_uuid = image_uuid 49 | if image_uuid: 50 | self.delete_image = False 51 | else: 52 | self.delete_image = True 53 | self.tests_list = image.get('tests.tests_list', []) 54 | self.environment_variables = self.make_env_vars(image, test_env) 55 | self.test_env = test_env 56 | 57 | def make_env_vars(self, image_cfg, test_env_cfg): 58 | combined = {} 59 | combined.update(os.environ) 60 | combined.update(image_cfg.get('tests.environment_variables', {})) 61 | combined.update(test_env_cfg.get('tests.environment_variables', {})) 62 | return combined 63 | 64 | def check_if_keep_stuff_after_fail(self, prep_os): 65 | if self.keep_failed_instance: 66 | prep_os.update_instance_delete_status(delete=False) 67 | prep_os.update_keypair_delete_status(delete=False) 68 | if self.keep_failed_image: 69 | prep_os.update_image_delete_status(delete=False) 70 | self.report(prep_os) 71 | 72 | @staticmethod 73 | def get_runner(test): 74 | RUNNERS = { 75 | 'pytest': pytest_runner.runner, 76 | 'shell': shell_runner.runner 77 | } 78 | found = False 79 | for runner_name, runner_func in RUNNERS.iteritems(): 80 | if runner_name in test: 81 | if found: 82 | raise BadTestConfigError("Duplicate runner name in %s" % str(test)) 83 | name = runner_name 84 | runner = runner_func 85 | path = test[runner_name] 86 | found = True 87 | if not found: 88 | raise BadTestConfigError("No known runner names found in %s" % str(test)) 89 | return name, runner, path 90 | 91 | def run_test(self, ssh, test, instance_config, vars): 92 | runner_name, runner, path = self.get_runner(test) 93 | print("Running tests %s: %s." % (runner_name, path)) 94 | timeout_val = test.get('timeout', 300) 95 | if runner( 96 | path, 97 | ssh, 98 | instance_config, 99 | vars, 100 | timeout_val=timeout_val, 101 | continue_on_fail=self.continue_on_fail 102 | ): 103 | print("Done running tests %s: %s." % (runner_name, path)) 104 | return True 105 | else: 106 | print("Some %s: %s tests have failed." 
% (runner_name, path)) 107 | if self.continue_on_fail: 108 | print("Continue testing.") 109 | return True 110 | else: 111 | print("Stop testing due to previous error.") 112 | return False 113 | 114 | def run_all_tests(self, prep_os): 115 | success = True 116 | 117 | for test in self.tests_list: 118 | if self.run_test(self.ssh, test, prep_os, self.environment_variables) is not True: 119 | self.check_if_keep_stuff_after_fail(prep_os) 120 | success = False 121 | break 122 | return success 123 | 124 | def init_ssh(self, prep_os): 125 | if 'ssh' in self.image['tests']: 126 | self.ssh = prep_os.ssh # continue refactoring this! 127 | else: 128 | self.ssh = None 129 | 130 | def wait_port(self, prep_os): 131 | if 'wait_for_port' in self.image['tests']: 132 | port = self.image['tests']['wait_for_port'] 133 | port_wait_timeout = config.get_max( 134 | self.image, 135 | self.test_env, 136 | 'tests.port_wait_timeout', 137 | self.DEFAULT_PORT_WAIT_TIMEOUT 138 | ) 139 | port_available = prep_os.wait_for_port(port, port_wait_timeout) 140 | if not port_available: 141 | self.check_if_keep_stuff_after_fail(prep_os) 142 | raise PortWaitError("Timeout while waiting instance to accept connection on port %s." % port) 143 | return True 144 | else: 145 | return False 146 | 147 | def process(self, shell_only, shell_on_errors): 148 | prep_os = prepare_os.PrepOS( 149 | self.image, 150 | self.test_env, 151 | override_image=self.override_image_uuid, 152 | delete_image=self.delete_image 153 | ) 154 | with prep_os: 155 | self.init_ssh(prep_os) 156 | self.wait_port(prep_os) 157 | if shell_only: 158 | result = self.open_shell( 159 | prep_os.ssh, 160 | 'Opening ssh shell to instance without running tests' 161 | ) 162 | self.check_if_keep_stuff_after_fail(prep_os) 163 | return result 164 | result = self.run_all_tests(prep_os) 165 | if not result: 166 | print("Some tests failed") 167 | if shell_on_errors: 168 | self.open_shell(prep_os.ssh, 'There was an test error and asked to open --shell') 169 | self.check_if_keep_stuff_after_fail(prep_os) 170 | return result 171 | else: 172 | print("All tests passed successfully.") 173 | return result 174 | 175 | def open_shell(self, ssh, reason): 176 | if not ssh: 177 | raise TestError('Asked to open ssh shell to server, but there is no ssh section in the image config') 178 | message = reason + '\nUse "exit 42" to keep instance\n' 179 | status = ssh.shell(os.environ, message) 180 | if status == 42: # magical constant! 
181 | self.keep_failed_instance = True 182 | return status 183 | 184 | @staticmethod 185 | def report_item(onthologic_name, item): 186 | if not item["was_removed"] and not item['preexisted'] and not item['deletable']: 187 | print("%s: %s (%s), will not be removed" % (onthologic_name, item['id'], item['name'])) 188 | 189 | @staticmethod 190 | def report_ssh(ssh): 191 | if ssh: 192 | print("You may use following line to access server") 193 | print(" ".join(ssh.command_line())) 194 | 195 | def report(self, prep_os): 196 | image_status = prep_os.image_status() 197 | instance_status = prep_os.instance_status() 198 | keypair_status = prep_os.keypair_status() 199 | self.report_item("Keypair", keypair_status) 200 | self.report_item("Image", image_status) 201 | self.report_item("Instance", instance_status) 202 | if not instance_status['was_removed'] and not instance_status['deletable']: 203 | self.report_ssh(prep_os.ssh) 204 | 205 | def reconfigure_for_existing_instance(self, instance, private_key_file=None): 206 | raise NotImplementedError 207 | -------------------------------------------------------------------------------- /dibctl/image_preprocessing.py: -------------------------------------------------------------------------------- 1 | import config 2 | import subprocess 3 | import sys 4 | import os 5 | 6 | 7 | class PreprocessError(Exception): 8 | pass 9 | 10 | 11 | class Preprocess(object): 12 | def __init__(self, input_filename, glance_data, preprocessing_settings): 13 | self.input_filename = input_filename 14 | self.glance_data = glance_data 15 | self.preprocessing_settings = preprocessing_settings 16 | self.delete_after = self.preprocessing_settings.get('delete_processed_after_upload', True) 17 | self.use_existing = self.preprocessing_settings.get('use_existing', False) 18 | 19 | def prep_output_name(self, allowed_vars): 20 | try: 21 | return self.preprocessing_settings["output_filename"] % allowed_vars 22 | except (KeyError, ValueError) as e: 23 | raise config.InvaidConfigError( 24 | 'Invalid string interpolation in output filename: %s: %s' % ( 25 | self.preprocessing_settings["output_filename"], 26 | e.message 27 | ) 28 | ) 29 | 30 | def prep_cmdline(self, allowed_vars): 31 | try: 32 | return self.preprocessing_settings['cmdline'] % allowed_vars 33 | except (KeyError, ValueError) as e: 34 | raise config.InvaidConfigError( 35 | 'Invalid string interpolation in cmdline: %s: %s' % ( 36 | self.preprocessing_settings["cmdline"], 37 | e.message 38 | ) 39 | ) 40 | 41 | def interpolate(self): 42 | allowed_vars = { 43 | 'input_filename': self.input_filename, 44 | 'disk_format': self.glance_data.get('disk_format', 'qcow2'), 45 | 'container_format': self.glance_data.get('container_format', 'bare') 46 | } 47 | 48 | self.output_filename = self.prep_output_name(allowed_vars) 49 | allowed_vars['output_filename'] = self.output_filename 50 | self.command_line = self.prep_cmdline(allowed_vars) 51 | 52 | def __enter__(self): 53 | if not self.preprocessing_settings: 54 | return self.input_filename 55 | 56 | self.interpolate() 57 | self.run() 58 | return self.output_filename 59 | 60 | def __exit__(self, exc_type, exc_value, traceback): 61 | if self.preprocessing_settings: 62 | if self.delete_after and not self.use_existing: 63 | os.remove(self.output_filename) 64 | 65 | def run(self): 66 | if os.path.isfile(self.output_filename): 67 | if self.use_existing: 68 | return self.output_filename 69 | else: 70 | os.remove(self.output_filename) 71 | sys.stdout.flush() 72 | try: 73 | print("Preprocessing...") 74 | 
subprocess.check_call(self.command_line, shell=True, stdout=sys.stdout, stderr=sys.stderr, stdin=None) 75 | except subprocess.CalledProcessError as e: 76 | raise PreprocessError('Preprocessing failed with code %s' % e.returncode) 77 | if not os.path.isfile(self.output_filename): 78 | raise PreprocessError('There is no output file %s after preprocess had finished') 79 | print("Preprocessing done.") 80 | sys.stdout.flush() 81 | -------------------------------------------------------------------------------- /dibctl/pytest_runner.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import pytest 3 | import paramiko 4 | 5 | 6 | class DibCtlPlugin(object): 7 | def __init__(self, ssh, tos, environment_variables): 8 | self.cached_ssh_backend = None 9 | self.env_vars = environment_variables 10 | self.tos = tos 11 | self.ssh_data = ssh 12 | self.enable_control_master = False # I wasn't able to make it stable enough, so it's disabled 13 | try: 14 | import testinfra 15 | self.testinfra = testinfra 16 | except ImportError: 17 | print("Warning: no testinfra installed, ssh_backend fixture is unavaiable") 18 | self.testinfra = None 19 | self.sshclient = paramiko.SSHClient() 20 | 21 | @pytest.fixture 22 | def flavor(self, request): 23 | return self.tos.flavor 24 | 25 | def flavor_meta(self, request): 26 | return self.tos.flavor().get_keys() 27 | 28 | @pytest.fixture 29 | def ips(self, request): 30 | return self.tos.ips() 31 | 32 | @pytest.fixture 33 | def ips_v4(self, request): 34 | return self.tos.ips_by_version(version=4) 35 | 36 | @pytest.fixture 37 | def ips_v6(self, request): 38 | return self.tos.ips_by_version(version=6) 39 | 40 | @pytest.fixture 41 | def main_ip(self, request): 42 | return self.tos.ip 43 | 44 | @pytest.fixture 45 | def network(self, request): 46 | return self.tos.os_instance.interface_list() 47 | 48 | @pytest.fixture 49 | def instance(self, request): 50 | return self.tos.os_instance 51 | 52 | @pytest.fixture 53 | def ssh(self, request): 54 | return self.ssh_data.info() 55 | 56 | @pytest.fixture 57 | def wait_for_port(self, request): 58 | def wfp(port=None, timeout=None): 59 | if port is None: 60 | port = 22 # FIXME From image configuration!!!! 61 | if timeout is None: 62 | timeout = 60 # FIXME from image configuration!!!! 63 | return self.tos.wait_for_port(port, timeout) 64 | return wfp 65 | 66 | @pytest.fixture 67 | def ssh_backend(self, request): 68 | if not self.ssh_data: 69 | raise ValueError("no ssh settings available in image config") 70 | if not self.cached_ssh_backend: 71 | if not self.testinfra: 72 | raise ImportError( 73 | "ssh_backend is unavailable because testinfra module" 74 | "is not found." 
75 | ) 76 | self.cached_ssh_backend = self.testinfra.get_backend( 77 | self.ssh_data.connector(), 78 | ssh_config=self.ssh_data.config() 79 | ) 80 | return self.cached_ssh_backend 81 | 82 | @pytest.fixture 83 | def environment_variables(self, request): 84 | return self.env_vars 85 | 86 | @pytest.fixture 87 | def port(self, request): 88 | raise NotImplementedError 89 | return {'ip': 0, 'port': 0, 'timeout': 0, 'delay': 0.0} 90 | 91 | @pytest.fixture 92 | def nova(self, request): 93 | return self.tos.os.nova 94 | 95 | @pytest.fixture 96 | def glance(self, request): 97 | return self.tos.os.glance 98 | 99 | @pytest.fixture 100 | def image_info(self, request): 101 | return self.tos.get_image_info() 102 | 103 | @pytest.fixture 104 | def image_config(self, request): 105 | return self.tos.image 106 | 107 | @pytest.fixture 108 | def console_output(self, request): 109 | return self.tos.os_instance.get_console_output() 110 | 111 | @pytest.fixture 112 | def ssh_client(self): 113 | self.sshclient.set_missing_host_key_policy(paramiko.AutoAddPolicy()) 114 | return self.sshclient 115 | 116 | 117 | def runner(path, ssh, tos, environment_variables, timeout_val, continue_on_fail): 118 | cmdline = [path, '-v', '-s'] 119 | dibctl_plugin = DibCtlPlugin(ssh, tos, environment_variables) 120 | sys.stdout.flush() 121 | if not continue_on_fail: 122 | cmdline.append('-x') 123 | result = pytest.main(cmdline, plugins=[dibctl_plugin]) 124 | sys.stdout.flush() 125 | if result == 0: 126 | return True 127 | else: 128 | return False 129 | -------------------------------------------------------------------------------- /dibctl/shell_runner.py: -------------------------------------------------------------------------------- 1 | import subprocess 2 | import os 3 | import sys 4 | import itertools 5 | 6 | ENV_PREFIX = 'DIBCTL' 7 | 8 | 9 | class BadRunnerError(EnvironmentError): 10 | pass 11 | 12 | 13 | def unwrap_config(prefix, config): 14 | 'returns linear key-value list from a tree' 15 | return_value = {} 16 | if type(config) is dict: 17 | for key, value in config.iteritems(): 18 | new_prefix = prefix + '_' + str(key).upper() 19 | return_value.update(unwrap_config(new_prefix, value)) 20 | elif type(config) is list: 21 | for element in config: 22 | return_value.update(unwrap_config(prefix, element)) 23 | elif config is None: 24 | pass # do not add anything, return empty dict 25 | else: 26 | return_value = {prefix: str(config)} 27 | return return_value 28 | 29 | 30 | def gather_tests(path): 31 | if os.path.isdir(path): 32 | filelist = [os.path.join(path, f) for f in os.listdir(path)] 33 | filelist.sort() 34 | all_files = map(gather_tests, filelist) 35 | return list(itertools.chain.from_iterable(filter(None, all_files))) 36 | elif os.path.isfile(path) and os.access(path, os.X_OK): 37 | if '/' not in path: 38 | return ['./' + path] 39 | else: 40 | return [path] 41 | else: 42 | return None 43 | 44 | 45 | def run_shell_test(path, env): 46 | print("Running %s" % path) 47 | try: 48 | sys.stdout.flush() 49 | subprocess.check_call(path, stdout=sys.stdout, stderr=sys.stderr, env=env) 50 | return True 51 | except subprocess.CalledProcessError: 52 | return False 53 | 54 | 55 | def runner(path, ssh, tos, vars, timeout_val, continue_on_fail): 56 | result = True 57 | tests = gather_tests(path) 58 | if tests is None: 59 | raise BadRunnerError('Path %s is not a test file or a dir' % path) 60 | config = dict(os.environ) 61 | config.update(unwrap_config(ENV_PREFIX, tos.get_env_config())) 62 | if ssh: 63 | config.update(ssh.env_vars('DIBCTL_')) 64 
| config.update(unwrap_config(ENV_PREFIX, vars)) 65 | for test in tests: 66 | test_successfull = run_shell_test(test, config) 67 | if test_successfull: 68 | print("Test %s succeeded" % test) 69 | continue 70 | elif continue_on_fail: 71 | print("Test %s failed, continue other tests" % test) 72 | result = False 73 | continue 74 | else: 75 | print("Test %s failed, skipping all other tests" % test) 76 | result = False 77 | break 78 | return result 79 | -------------------------------------------------------------------------------- /dibctl/ssh.py: -------------------------------------------------------------------------------- 1 | import tempfile 2 | import subprocess 3 | import sys 4 | 5 | 6 | class SSH(object): 7 | 8 | COMMAND_NAME = 'ssh' 9 | 10 | def __init__(self, ip, username, private_key, port=22, override_ssh_key_filename=None): 11 | self.ip = ip 12 | self.username = username 13 | self.private_key = private_key 14 | self.port = port 15 | self.private_key_file = None 16 | self.config_file = None 17 | self.override_ssh_key_filename = override_ssh_key_filename 18 | 19 | def user_host_and_port(self): 20 | if self.port == 22: 21 | return self.username + '@' + self.ip 22 | else: 23 | return "{user}@{host}:{port}".format( 24 | user=self.username, 25 | host=self.ip, 26 | port=str(self.port) 27 | ) 28 | 29 | def key_file(self): 30 | ''' 31 | creates file with private ssh key 32 | file exists until instance of SSH class 33 | is removed. 34 | Returns filename of that file 35 | ''' 36 | if self.override_ssh_key_filename: 37 | return self.override_ssh_key_filename 38 | if not self.private_key_file: 39 | self.private_key_file = tempfile.NamedTemporaryFile( 40 | prefix='dibctl_key_', 41 | delete=True 42 | ) 43 | self.private_key_file.write(self.private_key) 44 | self.private_key_file.flush() 45 | return self.private_key_file.name 46 | 47 | def keep_key_file(self): 48 | '''saves file in persistent way''' 49 | if self.override_ssh_key_filename: 50 | return self.override_ssh_key_filename 51 | if self.private_key_file: 52 | del self.private_key_file 53 | self.private_key_file = tempfile.NamedTemporaryFile(prefix='saved_dibctl_key_', delete=False) 54 | self.private_key_file.write(self.private_key) 55 | self.private_key_file.close() 56 | return self.private_key_file.name 57 | 58 | def command_line(self): 59 | command_line = [ 60 | self.COMMAND_NAME, 61 | "-o", "StrictHostKeyChecking=no", 62 | "-o", "UserKnownHostsFile=/dev/null", 63 | "-o", "UpdateHostKeys=no", 64 | "-o", "PasswordAuthentication=no", 65 | "-i", self.key_file(), 66 | self.user_host_and_port() 67 | ] 68 | return command_line 69 | 70 | def connector(self): 71 | return 'ssh://%s' % self.ip 72 | 73 | def config(self): 74 | ''' 75 | creates file with ssh configuration 76 | file exists until instance of SSH class 77 | is removed. 
78 | Returns filename of that config 79 | ''' 80 | if not self.config_file: 81 | self.config_file = tempfile.NamedTemporaryFile(prefix="dibctl_config_") 82 | self.config_file.write('# Automatically generated by dibctl\n\n') 83 | self.config_file.write('Host %s\n' % self.ip) 84 | self.config_file.write('\tUser %s\n' % self.username) 85 | self.config_file.write('\tStrictHostKeyChecking no\n') 86 | self.config_file.write('\tUserKnownHostsFile /dev/null\n') 87 | self.config_file.write('\tUpdateHostKeys no\n') 88 | self.config_file.write('\tPasswordAuthentication no\n') 89 | self.config_file.write('\tIdentityFile %s\n' % self.key_file()) 90 | self.config_file.write('\tPort %s\n' % self.port) 91 | self.config_file.flush() 92 | return self.config_file.name 93 | 94 | def env_vars(self, prefix): 95 | ''' 96 | return list of env vars related to SSH with 97 | given prefix 98 | ''' 99 | result = { 100 | prefix + 'SSH_IP': self.ip, 101 | prefix + 'SSH_USERNAME': self.username, 102 | prefix + 'SSH_PORT': str(self.port), 103 | prefix + 'SSH_CONFIG': self.config(), 104 | prefix + 'SSH_PRIVATE_KEY': self.key_file(), 105 | prefix + 'SSH': ' '.join(self.command_line()) 106 | } 107 | return result 108 | 109 | def shell(self, env, message): 110 | '''Opens a user-accessible shell to machine''' 111 | command_line = self.command_line() 112 | print(message) 113 | print("Executing shell: %s" % " ".join(command_line)) 114 | return subprocess.call(command_line, stdin=sys.stdin, stderr=sys.stderr, stdout=sys.stdout, env=env) 115 | 116 | def info(self): 117 | result = { 118 | 'ip': self.ip, 119 | 'private_key_file': self.key_file(), 120 | 'username': self.username, 121 | 'port': self.port, 122 | 'config': self.config(), 123 | 'command_line': self.command_line(), 124 | 'connector': self.connector() 125 | } 126 | return result 127 | -------------------------------------------------------------------------------- /dibctl/timeout.py: -------------------------------------------------------------------------------- 1 | import signal 2 | 3 | 4 | class TimeoutError(EnvironmentError): 5 | pass 6 | 7 | 8 | class timeout(object): 9 | def __init__(self, timeout): 10 | self.timeout = timeout 11 | 12 | def raise_timeout(self, signum, frame): 13 | if signum == signal.SIGALRM: 14 | raise TimeoutError("Timed out after %s" % self.timeout) 15 | else: 16 | raise RuntimeError("Signal %s, have no idea what to do" % signum) 17 | 18 | def __enter__(self): 19 | signal.signal(signal.SIGALRM, self.raise_timeout) 20 | signal.alarm(self.timeout) 21 | 22 | def __exit__(self, exc_type, exc_val, exc_tb): 23 | signal.alarm(0) 24 | -------------------------------------------------------------------------------- /dibctl/version.py: -------------------------------------------------------------------------------- 1 | VERSION = '0.8.2' 2 | NAME = 'dibctl' 3 | VERSION_STRING = "%s %s" % (NAME, VERSION) 4 | -------------------------------------------------------------------------------- /docs/example_configs/images.yaml: -------------------------------------------------------------------------------- 1 | xenial: 2 | filename: xenial.img.qcow2 3 | glance: 4 | endpoint: "http://foo.example.com" #not implmented yet 5 | name: 'Example of Ubuntu Xenial' 6 | upload_timeout: 1800 7 | container_format: bare 8 | disk_format: qcow2 9 | min_disk: 4 10 | min_ram: 1024 11 | protected: False 12 | properties: 13 | foo: "bar" 14 | public: False 15 | dib: 16 | min_version: '2.0.0' 17 | max_version: '9.0.2' 18 | environment_variables: 19 | DIB_RELEASE: xenial 20 | elements: 
21 | - ubuntu-minimal 22 | - vm 23 | cli_options: 24 | - '-x' 25 | nova: 26 | create_timeout: 5 27 | active_timeout: 14 28 | keypair_timeout: 5 29 | cleanup_timeout: 30 30 | tests: 31 | ssh: 32 | username: cloud-user 33 | wait_for_port: 22 34 | port_wait_timeout: 30 35 | environment_name: example_env 36 | environment_variables: 37 | foo: bar 38 | tests_list: 39 | - shell: docs/tests_examples/shell_examples.d/ 40 | timeout: 300 41 | - pytest: docs/tests_examples/pytest_examples/example.py 42 | timeout: 300 43 | external_build: 44 | - cmdline: "build something --output %(filename)s" 45 | external_tests: 46 | - cmdline: "test image %(filename)s" 47 | - cmdline: "test2 %(filename)s" 48 | -------------------------------------------------------------------------------- /docs/example_configs/test.yaml: -------------------------------------------------------------------------------- 1 | example_env: 2 | ssl_insecure: True 3 | ssl_ca_path: /etc/ssl/certs 4 | disable_warnings: False 5 | keystone: 6 | api_version: 2 7 | #username: 'will be taken from OS_USERNAME enviroment variable' 8 | #password: 'same (OS_PASSWORD)' 9 | #project_name: 'same (OS_TENANT_NAME' 10 | user_domain_id: default 11 | project_domain_id: default 12 | #auth_url: 'same (OS_AUTH_URL)' 13 | nova: 14 | # api_version: 2 #not implemented yet 15 | flavor: "30" # fuzzy search, try use it as id, then as name. Use flavor_id to use only flavor_id 16 | nics: 17 | - net_id: fe2acef0-4383-4432-8fca-f9e23f835dd5 18 | v4_fixed_ip: '192.168.0.1' 19 | - net_id: 631a37ed-8ab5-4463-aae0-eb06d841ee6a 20 | port_id: 4c7d017c-6d98-4ce3-b547-2c30be70bab1 21 | - net_id: db26f178-c617-47d2-928e-5ed3c0f34b43 22 | - net_id: 824c3c9a-5ce2-4ab6-84c5-fd296623d3ab 23 | v6_fixed_ip: '2a00:1450:400e:80a::200e' 24 | - net_id: 1280925a-7a0e-40ce-a4b2-2ec4c6a362f7 25 | tag: 'foo' 26 | userdata_file: path/to/file 27 | #security_group: # not implemented yet 28 | # - default 29 | # - allow_everything 30 | #hint: 31 | # key1: value1 32 | # key2: value2 33 | config_drive: true 34 | main_nic_regexp: internet 35 | availability_zone: az2 36 | create_timeout: 10 37 | active_timeout: 999 38 | keypair_timeout: 5 39 | cleanup_timeout: 120 40 | tests: 41 | port_wait_timeout: 3000 42 | environment_variables: 43 | foo: bar # has prority over os.environ and images.tests.environment_variables 44 | #neutron: 45 | # is: not_implemented_yet 46 | glance: 47 | upload_timeout: 1666 48 | api_version: 1 # 2 is not yet supported 49 | protected: True 50 | properties: 51 | key1: value1 52 | key2: value2 53 | #tag: 54 | # - tag1 55 | example_env2: 56 | keystone: 57 | auth_url: http://auth.example.com/ 58 | nova: 59 | flavor_id: '30' 60 | nics: 61 | - net_id: fe2acef0-4383-4432-8fca-f9e23f835dd5 62 | userdata: '{"json": "here"}' 63 | availability_zone: 'az2' 64 | -------------------------------------------------------------------------------- /docs/example_configs/upload.yaml: -------------------------------------------------------------------------------- 1 | non_production_environment: 2 | keystone: 3 | auth_url: https://auth.lab-2.azure.com:5000/v2.0 4 | tenant_name: tenantname 5 | username: mock_user 6 | password: mock_password 7 | ssl_insecure: true 8 | ssl_ca_path: /etc/ssl/certs 9 | disable_warnings: True 10 | 11 | azure-us-1: 12 | keystone: 13 | auth_url: https://auth.us01.azure.com:5000/v2.0 14 | tenant_name: tenantname 15 | #username: may use OS_USERNAME environemnt variable 16 | password: mock_password 17 | region3: 18 | keystone: 19 | auth_url: https://auth.lab-2.azure.com:5000/v2.0 
20 | tenant_name: tenantname 21 | username: mock_user 22 | password: mock_password 23 | preprocessing: 24 | # order of interpolation: 25 | # 1. input_filename is set according to image 'filename' attribute 26 | # or --input command line option 27 | # 2. output_filename is processed 28 | # 3. cmdline is processed 29 | # available variable names: 30 | # - input_filename 31 | # - output_filename 32 | # - disk_format 33 | # - container_format 34 | cmdline: "qemu-img convert %(input_filename)s %(output_filename)s -O raw" 35 | output_filename: "%(input_filename)s.raw" 36 | glance: 37 | disk_format: raw 38 | container_format: bare 39 | protected: True 40 | min_disk: 35 41 | 42 | external_upload_example: 43 | external_upload: 44 | - cmdline: upload-command1 %(output_filename)s 45 | - cmdline: other-command %(output_filename)s 46 | -------------------------------------------------------------------------------- /docs/exit_codes.md: -------------------------------------------------------------------------------- 1 | Exit codes 2 | ---------- 3 | 0 - everything is fine (success) 4 | 1 - generic unspecified error (if it annoys you, report a bug for your case and I'll add a code) 5 | 6 | 10 - dibctl config not found or there is an error in the config 7 | 11 - dibctl couldn't find the requested item in configuration files (image, environment, etc) 8 | 12 - Not enough credentials in the configuration file or environment to continue 9 | 18 - preprocessing during image upload has failed (exit code for cmdline is not zero) 10 | 20 - Authorization failure from keystone 11 | 50 - Glance returns 'HTTPNotFoundError', which usually means that the uuid in --use-existing-image is not found in Glance 12 | 60 - Nova returns BadRequest (unfortunately there is no way to distinguish between codes) 13 | 61 - Nova request was rejected (Forbidden). Mostly affects the rotate command, which requires higher privileges 14 | 70 - Instance became ERROR after creation 15 | 71 - Timeout while waiting for port (after the instance became ACTIVE). 16 | 80 - Some tests have failed. 17 | -------------------------------------------------------------------------------- /docs/tests_examples/pytest.md: -------------------------------------------------------------------------------- 1 | Pytest tests 2 | --- 3 | 4 | Dibctl may use py.test as a test framework. Usage of the testinfra plugin is highly recommended, but not mandatory. 5 | 6 | All pytest tests follow the generic rules for py.test. If any test fails, dibctl will report the test run as failed. 7 | 8 | Tests written for pytest may use a set of fixtures with information about the instance and image under test. 9 | 10 | List of available fixtures: 11 | - flavor - information about the flavor used to run the test instance. 12 | - flavor_meta - meta information for the flavor 13 | - ips - list of all ips on all interfaces of the instance 14 | - ips_v4 - list of all IPv4 addresses on all interfaces 15 | - ips_v6 - same as above with IPv6 addresses 16 | - main_ip - single value with the IPv4 address selected according to main_nic_regexp. 17 | - network - additional information about all interfaces, including expected MAC addresses, subnets, etc. 18 | - ssh - information about the ssh connection to the test instance. Includes the main ip, path to the private key, and the username to connect to the instance 19 | - ssh_backend - prepared testinfra backend for the instance. 20 | - environment_variables - environment_variables from the image config.
21 | - port - information about the ip/port used to check server availability prior to the test 22 | 23 | flavor fixture 24 | --- 25 | The flavor fixture is a copy of the 'flavor' object from novaclient. It contains: 26 | - flavor.id - flavor id 27 | - flavor.name - flavor name 28 | - flavor.disk - disk size (in GB) 29 | - flavor.mem - memory size (in MB) 30 | - flavor.vcpus - number of VCPUs 31 | 32 | flavor_meta fixture 33 | --- 34 | It is a dict with a key:value structure, containing all metadata for the flavor used to start the test instance. 35 | 36 | ips 37 | --- 38 | It is a list of all (IPv4 and IPv6) addresses allocated by neutron/nova. Does not include any floating IPs. 39 | 40 | ips_v4 41 | --- 42 | It is a list of all IPv4 addresses allocated by neutron/nova. Does not include any floating IPs. 43 | 44 | ips_v6 45 | --- 46 | It is a list of all IPv6 addresses allocated by neutron/nova. Does not include any floating IPs. 47 | 48 | main_ip 49 | --- 50 | It's a single unicode string containing the IPv4 address which was chosen as main_ip according to main_nic_regexp (or the single ip if the instance has one interface with one IP address) 51 | 52 | network 53 | --- 54 | It contains the result of the nova.instance.interface_list() call. 55 | 56 | ssh 57 | --- 58 | It's a dictionary with the following elements: 59 | - ip 60 | - private_key_file 61 | - key_name 62 | - username (may be None if none was specified in the image configuration) 63 | 64 | ssh_backend 65 | --- 66 | It's a testinfra ssh backend 67 | 68 | environment_variables 69 | --- 70 | It's a dictionary from the environment_variables section of the tests section of the image config. 71 | 72 | port 73 | --- 74 | It's a dictionary containing information about the port used to check the connection to the instance prior to running tests. 75 | Contains: 76 | - ip (main ip) - unicode string 77 | - port (int) 78 | - timeout (int) - expected maximum timeout 79 | - delay (float) - how long the wait process took before the instance replied on the given port 80 | 81 | nova 82 | --- 83 | A fixture giving access to the nova client (python-novaclient) with established credentials to 84 | OpenStack. 85 | 86 | glance 87 | --- 88 | This fixture returns a configured glance client (python-glanceclient) with an established session. 89 | 90 | image_info 91 | --- 92 | Glance metadata of the instance's boot image. 93 | 94 | image_config 95 | --- 96 | Dictionary of the current image configuration items from `image.yaml`. 97 | 98 | console_output 99 | --- 100 | Full text console log of the instance, as stored by OpenStack. 101 | 102 | ssh_client 103 | --- 104 | An unconfigured instance of [paramiko.SSHClient()](https://docs.paramiko.org/en/stable/api/client.html#paramiko.client.SSHClient). 105 | May be used for testing user authentication with a password.
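Putting a few of these fixtures together, a minimal example test module might look like this (the concrete assertions - IP membership, a non-zero disk size, a 'login:' marker in the console log - are illustrative assumptions, not dibctl requirements):
```
# Illustrative sketch using the documented fixtures; the specific checks
# below are assumptions made for the example.
def test_main_ip_is_among_allocated_ips(main_ip, ips_v4):
    assert main_ip in ips_v4


def test_flavor_has_nonzero_disk(flavor):
    assert flavor.disk > 0


def test_console_shows_login_prompt(console_output):
    assert 'login:' in console_output
```
Such a file is wired in through the `tests_list` section of the image config with the `pytest` runner, as shown in `docs/example_configs/images.yaml`.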
-------------------------------------------------------------------------------- /docs/tests_examples/pytest_examples/example.py: -------------------------------------------------------------------------------- 1 | def test_example(ssh): 2 | print (ssh['username'], ssh['private_key_file'], ssh['key_name'], ssh['ip']) 3 | -------------------------------------------------------------------------------- /docs/tests_examples/pytest_examples/fail.py: -------------------------------------------------------------------------------- 1 | # This test displays how to fail tests properly 2 | def test_fail(): 3 | assert True is False 4 | -------------------------------------------------------------------------------- /docs/tests_examples/shell_examples.d/01-simple_success.bash: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo Nothing to test 4 | echo "This is an example of successful test" 5 | echo "DIB variables:" 6 | env|grep DIB 7 | 8 | exit 0 9 | -------------------------------------------------------------------------------- /docs/tests_examples/shell_examples.d/02-simple_failure.bash: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # to make this test fail change it to executable (chmod +x) 3 | 4 | echo "This is an example of a failing test" 5 | 6 | exit -1 7 | -------------------------------------------------------------------------------- /docs/tests_examples/shell_examples.d/03-check_ssh.bash: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e 4 | set -x 5 | echo "This test checking if ssh is available" 6 | sleep 5 # to allow cloudinit to setup ssh login 7 | timeout --foreground 120 $ssh uname -a 8 | -------------------------------------------------------------------------------- /doctests/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/serverscom/dibctl/0749a79e64113e951cdfddfdcb4df4d061040fe2/doctests/__init__.py -------------------------------------------------------------------------------- /doctests/test_docs.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import os 3 | import inspect 4 | import sys 5 | import pytest 6 | import mock 7 | 8 | 9 | ourfilename = os.path.abspath(inspect.getfile(inspect.currentframe())) 10 | currentdir = os.path.dirname(ourfilename) 11 | parentdir = os.path.dirname(currentdir) 12 | 13 | 14 | @pytest.fixture 15 | def config(): 16 | from dibctl import config 17 | return config 18 | 19 | 20 | @pytest.fixture 21 | def commands(): 22 | from dibctl import commands 23 | return commands 24 | 25 | 26 | # this is integration test 27 | # it uses 'docs/example_configs/upload.yaml' file to 28 | # ensure that examples and code stay in sync 29 | # plus this is a good way to store huge input data sample 30 | # it mainly checks the schema but also all other 31 | # bits related to 'forced config'. 
No mocks involved 32 | 33 | images_yaml = os.path.join(parentdir, "docs/example_configs/images.yaml") 34 | test_yaml = os.path.join(parentdir, "docs/example_configs/test.yaml") 35 | upload_yaml = os.path.join(parentdir, "docs/example_configs/upload.yaml") 36 | 37 | 38 | def test_integration_uploadenvconfig_schema_from_docs_example(config): 39 | config.UploadEnvConfig(upload_yaml) 40 | 41 | 42 | def test_integration_testenvconfig_schema_from_docs_example(config): 43 | config.TestEnvConfig(test_yaml) 44 | 45 | 46 | def test_integration_imageconfig_schema_from_docs_example(config): 47 | config.ImageConfig(images_yaml) 48 | 49 | 50 | def test_call_actual_dibctl_validate(commands): 51 | commands.main([ 52 | 'validate', 53 | '--images-config', images_yaml, 54 | '--upload-config', upload_yaml, 55 | '--test-config', test_yaml 56 | ]) == 0 57 | 58 | 59 | if __name__ == "__main__": 60 | file_to_test = os.path.join( 61 | parentdir, 62 | os.path.basename(parentdir), 63 | os.path.basename(ourfilename).replace("test_", '') 64 | ) 65 | pytest.main([ 66 | "-vv" 67 | ] + sys.argv) 68 | -------------------------------------------------------------------------------- /integration_tests/README.md: -------------------------------------------------------------------------------- 1 | HOWTO 2 | ----- 3 | 4 | Add a new test 5 | -------------- 6 | 1. Copy test.yaml.secret to test.yaml 7 | 2. Copy upload.yaml.secret to upload.yaml 8 | 3. Add the new test 9 | 4. Run the `./clear_creds.py` script 10 | 11 | Updating an existing test 12 | ---------------------- 13 | 1. Copy test.yaml.secret to test.yaml 14 | 2. Copy upload.yaml.secret to upload.yaml 15 | 3. Remove its cassette from the cassettes directory 16 | 4. Check your test instructions (in the test docstring) 17 | 5. Update your configs 18 | 6. Run the test 19 | 7. Run the `./clear_creds.py` script 20 | 21 | 22 | New users 23 | --------- 24 | If you just want to run the tests, use `py.test`. 25 | 26 | If you want to update cassettes, then you need 27 | a well-configured OpenStack. Some tests 28 | require a special policy.json - check the docstrings 29 | for details: 30 | 1. Copy dibctl/test.yaml into test.yaml 31 | 2. Copy dibctl/upload.yaml into upload.yaml 32 | 3. Fill in actual OpenStack credentials 33 | in both files 34 | 4. VERY CAREFULLY read the docstring for the test. Some tests 35 | need to be updated in parallel, some require updating 36 | the test code (uuids) after the cassette run, and 37 | some require manual preparation of the installation 38 | (mostly the rotate command). 39 | 5. Remove the corresponding cassettes 40 | 6. Run the tests in the proper order (`py.test ... -k`) 41 | 7. Run `./clear_creds.py` 42 | 43 | Please be careful not to commit your '.secret' files 44 | to git.
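As a concrete walkthrough of the 'Updating an existing test' steps above (the cassette and test names are examples taken from this directory; substitute your own):
```
# Example session; the cassette/test name is illustrative.
cp test.yaml.secret test.yaml
cp upload.yaml.secret upload.yaml
rm cassettes/test_upload_bad_credentials.yaml
py.test -k test_upload_bad_credentials
./clear_creds.py
```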
45 | -------------------------------------------------------------------------------- /integration_tests/cassettes/test_command_rotate_bad_passwd.yaml: -------------------------------------------------------------------------------- 1 | interactions: 2 | - request: 3 | body: null 4 | headers: 5 | Accept: ['*/*'] 6 | Accept-Encoding: ['gzip, deflate'] 7 | Connection: [keep-alive] 8 | User-Agent: [python-requests/2.12.4] 9 | pytest-filtered: ['true'] 10 | method: GET 11 | uri: https://auth.servers.nl01.cloud.servers.com:5000/v2.0 12 | response: 13 | body: {string: !!python/unicode '{"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", 14 | "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], 15 | "id": "v2.0", "links": [{"href": "https://auth.servers.nl01.cloud.servers.com:5000/v2.0/", 16 | "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", 17 | "rel": "describedby"}]}}'} 18 | headers: 19 | content-length: ['362'] 20 | content-type: [application/json] 21 | date: ['Wed, 16 Aug 2017 12:41:22 GMT'] 22 | vary: [X-Auth-Token] 23 | x-distribution: [Ubuntu] 24 | x-openstack-request-id: [req-2a626ad1-e029-48c3-ba3c-f0619e833f1c] 25 | status: {code: 200, message: OK} 26 | - request: 27 | body: !!python/unicode '{"auth": "{\"tenantName\": \"pyvcr\", \"passwordCredentials\": 28 | {\"username\": \"username\", \"password\": \"password\"}}"}' 29 | headers: 30 | Accept: [application/json] 31 | Accept-Encoding: ['gzip, deflate'] 32 | Connection: [keep-alive] 33 | Content-Length: ['119'] 34 | Content-Type: [application/json] 35 | User-Agent: [dibctl keystoneauth1/2.20.0 python-requests/2.12.4 CPython/2.7.13] 36 | pytest-filtered: ['true'] 37 | method: POST 38 | uri: https://auth.servers.nl01.cloud.servers.com:5000/v2.0/tokens 39 | response: 40 | body: {string: !!python/unicode '{"error": {"message": "The request you have made 41 | requires authentication.", "code": 401, "title": "Unauthorized"}}'} 42 | headers: 43 | content-length: ['114'] 44 | content-type: [application/json] 45 | date: ['Wed, 16 Aug 2017 12:41:23 GMT'] 46 | vary: [X-Auth-Token] 47 | www-authenticate: ['Keystone uri="https://auth.servers.nl01.cloud.servers.com:5000"'] 48 | x-distribution: [Ubuntu] 49 | x-openstack-request-id: [req-0649076a-fc56-4ec6-ad6d-480c5d98bf71] 50 | status: {code: 401, message: Unauthorized} 51 | version: 1 52 | -------------------------------------------------------------------------------- /integration_tests/cassettes/test_command_rotate_nova_forbidden.yaml: -------------------------------------------------------------------------------- 1 | interactions: 2 | - request: 3 | body: null 4 | headers: 5 | Accept: ['*/*'] 6 | Accept-Encoding: ['gzip, deflate'] 7 | Connection: [keep-alive] 8 | User-Agent: [python-requests/2.12.4] 9 | pytest-filtered: ['true'] 10 | method: GET 11 | uri: https://auth.nova-lab-1.mgm.servers.com:5000/v2.0 12 | response: 13 | body: {string: !!python/unicode '{"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", 14 | "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], 15 | "id": "v2.0", "links": [{"href": "https://auth.nova-lab-1.mgm.servers.com:5000/v2.0/", 16 | "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", 17 | "rel": "describedby"}]}}'} 18 | headers: 19 | content-length: ['358'] 20 | content-type: [application/json] 21 | date: ['Wed, 16 Aug 2017 13:21:53 GMT'] 22 | vary: [X-Auth-Token] 23 | x-distribution: 
[Ubuntu] 24 | x-openstack-request-id: [req-9ec1c1b0-9770-41a9-ac6d-5c84d4770eb6] 25 | status: {code: 200, message: OK} 26 | - request: 27 | body: !!python/unicode '{"auth": "{\"tenantName\": \"pyvcr\", \"passwordCredentials\": 28 | {\"username\": \"username\", \"password\": \"password\"}}"}' 29 | headers: 30 | Accept: [application/json] 31 | Accept-Encoding: ['gzip, deflate'] 32 | Connection: [keep-alive] 33 | Content-Length: ['120'] 34 | Content-Type: [application/json] 35 | User-Agent: [dibctl keystoneauth1/2.20.0 python-requests/2.12.4 CPython/2.7.13] 36 | pytest-filtered: ['true'] 37 | method: POST 38 | uri: https://auth.nova-lab-1.mgm.servers.com:5000/v2.0/tokens 39 | response: 40 | body: {string: !!python/unicode '{"access": {"token": {"issued_at": "2017-08-16T13:21:54.000000Z", 41 | "expires": "2038-01-15T16:17:18Z", "id": "consealed id", "tenant": {"id": 42 | "4e632076f7004f908c8da67345a7592e", "enabled": true, "description": "", "name": 43 | "consealed name"}, "audit_ids": ["bTNG410dQ5-waBBc5ao2sA"]}, "serviceCatalog": 44 | [{"endpoints": [{"adminURL": "http://nova-api.p.nova-lab-1.servers.com:28774/v2/4e632076f7004f908c8da67345a7592e", 45 | "region": "lab-1", "internalURL": "http://nova-api.p.nova-lab-1.servers.com:28774/v2/4e632076f7004f908c8da67345a7592e", 46 | "id": "2e61ac6cc99345a1b152779600918c2e", "publicURL": "https://compute.nova-lab-1.mgm.servers.com:8774/v2/4e632076f7004f908c8da67345a7592e"}], 47 | "endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints": [{"adminURL": 48 | "http://neutron-server.p.nova-lab-1.servers.com:29696", "region": "lab-1", 49 | "internalURL": "http://neutron-server.p.nova-lab-1.servers.com:29696", "id": 50 | "6ec649b0f7cc4554a041defadf552fc1", "publicURL": "https://network.nova-lab-1.mgm.servers.com:9696"}], 51 | "endpoints_links": [], "type": "network", "name": "neutron"}, {"endpoints": 52 | [{"adminURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v2/4e632076f7004f908c8da67345a7592e", 53 | "region": "lab-1", "internalURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v2/4e632076f7004f908c8da67345a7592e", 54 | "id": "09d47ca5543b41849da1a7690acd5844", "publicURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v2/4e632076f7004f908c8da67345a7592e"}], 55 | "endpoints_links": [], "type": "volumev2", "name": "cinderv2"}, {"endpoints": 56 | [{"adminURL": "http://glance-api.p.nova-lab-1.servers.com:29292/v2", "region": 57 | "lab-1", "internalURL": "http://glance-api.p.nova-lab-1.servers.com:29292/v2", 58 | "id": "2982f5e7320040acb9fe1dac441a8551", "publicURL": "https://images.nova-lab-1.mgm.servers.com:9292/v2"}], 59 | "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [{"adminURL": 60 | "http://ceilometer-api.p.nova-lab-1.servers.com:28777", "region": "lab-1", 61 | "internalURL": "http://ceilometer-api.p.nova-lab-1.servers.com:28777", "id": 62 | "1cc0a58b1bab45509ec3991bf7111a3f", "publicURL": "https://metering.nova-lab-1.mgm.servers.com:8777"}], 63 | "endpoints_links": [], "type": "metering", "name": "ceilometer"}, {"endpoints": 64 | [{"adminURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v1/4e632076f7004f908c8da67345a7592e", 65 | "region": "lab-1", "internalURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v1/4e632076f7004f908c8da67345a7592e", 66 | "id": "6ae4b74962fc4b5abb4fed44dbf534a5", "publicURL": "https://volume.nova-lab-1.mgm.servers.com:8776/v1/4e632076f7004f908c8da67345a7592e"}], 67 | "endpoints_links": [], "type": "volume", "name": "cinder"}, {"endpoints": 68 | [{"adminURL": 
"http://keystone.p.nova-lab-1.servers.com:35357/v2.0", "region": 69 | "lab-1", "internalURL": "http://keystone.p.nova-lab-1.servers.com:5001/v2.0", 70 | "id": "19226f12c0bb4438a6032b9c55c1a8e9", "publicURL": "https://auth.nova-lab-1.mgm.servers.com:5000/v2.0"}], 71 | "endpoints_links": [], "type": "identity", "name": "keystone"}], "user": {"username": 72 | "consealed username", "roles_links": [], "id": "cc876061ad6a459eb7ff29e1ee04a3a6", 73 | "roles": [{"name": "_member_"}], "name": "consealed name"}, "metadata": {"is_admin": 74 | 0, "roles": ["9fe2ff9ee4384b1894a90878d3e92bab"]}}}'} 75 | headers: 76 | content-length: ['3266'] 77 | content-type: [application/json] 78 | date: ['Wed, 16 Aug 2017 13:21:54 GMT'] 79 | vary: [X-Auth-Token] 80 | x-distribution: [Ubuntu] 81 | x-openstack-request-id: [req-f9a72f9d-275e-4165-aeb6-64baf563ef18] 82 | status: {code: 200, message: OK} 83 | - request: 84 | body: null 85 | headers: 86 | Accept: [application/json] 87 | Accept-Encoding: ['gzip, deflate'] 88 | Connection: [keep-alive] 89 | User-Agent: [python-novaclient] 90 | pytest-filtered: ['true'] 91 | method: GET 92 | uri: https://compute.nova-lab-1.mgm.servers.com:8774/v2/4e632076f7004f908c8da67345a7592e/servers/detail?all_tenants=1 93 | response: 94 | body: {string: !!python/unicode '{"forbidden": {"message": "Policy doesn''t allow 95 | os_compute_api:servers:detail:get_all_tenants to be performed.", "code": 403}}'} 96 | headers: 97 | content-length: ['126'] 98 | content-type: [application/json; charset=UTF-8] 99 | date: ['Wed, 16 Aug 2017 13:21:55 GMT'] 100 | x-compute-request-id: [req-b76dc82f-d49a-42a0-95c9-fb4263f454c6] 101 | status: {code: 403, message: Forbidden} 102 | version: 1 103 | -------------------------------------------------------------------------------- /integration_tests/cassettes/test_non_existing_image_code_50.yaml: -------------------------------------------------------------------------------- 1 | interactions: 2 | - request: 3 | body: null 4 | headers: 5 | Accept: ['*/*'] 6 | Accept-Encoding: ['gzip, deflate'] 7 | Connection: [keep-alive] 8 | User-Agent: [python-requests/2.12.4] 9 | pytest-filtered: ['true'] 10 | method: GET 11 | uri: https://auth.servers.nl01.cloud.servers.com:5000/v2.0 12 | response: 13 | body: {string: !!python/unicode '{"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", 14 | "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], 15 | "id": "v2.0", "links": [{"href": "https://auth.servers.nl01.cloud.servers.com:5000/v2.0/", 16 | "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", 17 | "rel": "describedby"}]}}'} 18 | headers: 19 | content-length: ['362'] 20 | content-type: [application/json] 21 | date: ['Wed, 16 Aug 2017 15:51:10 GMT'] 22 | vary: [X-Auth-Token] 23 | x-distribution: [Ubuntu] 24 | x-openstack-request-id: [req-45b3afe9-d2d4-40fc-ace4-79724cb673b5] 25 | status: {code: 200, message: OK} 26 | - request: 27 | body: !!python/unicode '{"auth": "{\"tenantName\": \"pyvcr\", \"passwordCredentials\": 28 | {\"username\": \"username\", \"password\": \"password\"}}"}' 29 | headers: 30 | Accept: [application/json] 31 | Accept-Encoding: ['gzip, deflate'] 32 | Connection: [keep-alive] 33 | Content-Length: ['105'] 34 | Content-Type: [application/json] 35 | User-Agent: [dibctl keystoneauth1/2.20.0 python-requests/2.12.4 CPython/2.7.13] 36 | pytest-filtered: ['true'] 37 | method: POST 38 | uri: https://auth.servers.nl01.cloud.servers.com:5000/v2.0/tokens 39 | response: 40 | 
body: {string: !!python/unicode '{"access": {"token": {"issued_at": "2017-08-16T15:51:10.000000Z", 41 | "expires": "2038-01-15T16:17:18Z", "id": "consealed id", "tenant": {"id": 42 | "1d7f6604ebb54c69820f9d157bcea5f9", "enabled": true, "description": "", "name": 43 | "consealed name"}, "audit_ids": ["Tan3bpXOQG6b2emQwekqeQ"]}, "serviceCatalog": 44 | [{"endpoints": [{"adminURL": "http://glance-api.p.nova-ams-1.servers.com:29292/v2", 45 | "region": "ams-1", "internalURL": "http://glance-api.p.nova-ams-1.servers.com:29292/v2", 46 | "id": "50cef89163cc43ca8bda7e369cc52e43", "publicURL": "https://images.servers.nl01.cloud.servers.com:9292/v2"}], 47 | "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [{"adminURL": 48 | "http://nova-api.p.nova-ams-1.servers.com:28774/v2/1d7f6604ebb54c69820f9d157bcea5f9", 49 | "region": "ams-1", "internalURL": "http://nova-api.p.nova-ams-1.servers.com:28774/v2/1d7f6604ebb54c69820f9d157bcea5f9", 50 | "id": "1728d1eb1fce49299f3b9af2b483ea07", "publicURL": "https://compute.servers.nl01.cloud.servers.com:8774/v2/1d7f6604ebb54c69820f9d157bcea5f9"}], 51 | "endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints": [{"adminURL": 52 | "http://ceilometer-api.p.nova-ams-1.servers.com:28777", "region": "ams-1", 53 | "internalURL": "http://ceilometer-api.p.nova-ams-1.servers.com:28777", "id": 54 | "1143aedd4705418dbefc5ca672f12371", "publicURL": "https://metering.servers.nl01.cloud.servers.com:8777"}], 55 | "endpoints_links": [], "type": "metering", "name": "ceilometer"}, {"endpoints": 56 | [{"adminURL": "http://keystone.p.nova-ams-1.servers.com:35357/v2.0", "region": 57 | "ams-1", "internalURL": "http://keystone.p.nova-ams-1.servers.com:5001/v2.0", 58 | "id": "50072ef2be7d4a23911ddfde497605bf", "publicURL": "https://auth.servers.nl01.cloud.servers.com:5000/v2.0"}], 59 | "endpoints_links": [], "type": "identity", "name": "keystone"}, {"endpoints": 60 | [{"adminURL": "http://neutron-server.p.nova-ams-1.servers.com:29696", "region": 61 | "ams-1", "internalURL": "http://neutron-server.p.nova-ams-1.servers.com:29696", 62 | "id": "604ba7d60702440ab82f7cccb01a63d7", "publicURL": "https://network.servers.nl01.cloud.servers.com:9696"}], 63 | "endpoints_links": [], "type": "network", "name": "neutron"}], "user": {"username": 64 | "consealed username", "roles_links": [], "id": "05678d47b3ce4c7bab64a729b39a63e0", 65 | "roles": [{"name": "_member_"}], "name": "consealed name"}, "metadata": {"is_admin": 66 | 0, "roles": ["9fe2ff9ee4384b1894a90878d3e92bab"]}}}'} 67 | headers: 68 | content-length: ['2400'] 69 | content-type: [application/json] 70 | date: ['Wed, 16 Aug 2017 15:51:10 GMT'] 71 | vary: [X-Auth-Token] 72 | x-distribution: [Ubuntu] 73 | x-openstack-request-id: [req-6fa28849-8531-4c7f-a468-1526069700df] 74 | status: {code: 200, message: OK} 75 | - request: 76 | body: null 77 | headers: 78 | Accept: ['*/*'] 79 | Accept-Encoding: ['gzip, deflate'] 80 | Connection: [keep-alive] 81 | Content-Type: [application/octet-stream] 82 | User-Agent: [python-glanceclient] 83 | pytest-filtered: ['true'] 84 | method: HEAD 85 | uri: https://images.servers.nl01.cloud.servers.com:9292/v2/v1/images/deadbeaf-0000-0000-0000-b7a14cdd1169 86 | response: 87 | body: {string: !!python/unicode ''} 88 | headers: 89 | content-length: ['0'] 90 | content-type: [text/plain; charset=UTF-8] 91 | date: ['Wed, 16 Aug 2017 15:51:11 GMT'] 92 | x-openstack-request-id: [req-c495e8eb-237d-48c1-9770-b7e83ff8e4ca] 93 | status: {code: 404, message: Not Found} 94 | version: 1 95 | 
-------------------------------------------------------------------------------- /integration_tests/cassettes/test_upload_bad_credentials.yaml: -------------------------------------------------------------------------------- 1 | interactions: 2 | - request: 3 | body: null 4 | headers: 5 | Accept: ['*/*'] 6 | Accept-Encoding: ['gzip, deflate'] 7 | Connection: [keep-alive] 8 | User-Agent: [python-requests/2.12.4] 9 | pytest-filtered: ['true'] 10 | method: GET 11 | uri: https://auth.servers.nl01.cloud.servers.com:5000/v2.0 12 | response: 13 | body: {string: !!python/unicode '{"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", 14 | "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], 15 | "id": "v2.0", "links": [{"href": "https://auth.servers.nl01.cloud.servers.com:5000/v2.0/", 16 | "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", 17 | "rel": "describedby"}]}}'} 18 | headers: 19 | content-length: ['362'] 20 | content-type: [application/json] 21 | date: ['Wed, 16 Aug 2017 12:22:09 GMT'] 22 | vary: [X-Auth-Token] 23 | x-distribution: [Ubuntu] 24 | x-openstack-request-id: [req-efb20470-666e-406f-af1d-dbd981e7322c] 25 | status: {code: 200, message: OK} 26 | - request: 27 | body: !!python/unicode '{"auth": "{\"tenantName\": \"pyvcr\", \"passwordCredentials\": 28 | {\"username\": \"username\", \"password\": \"password\"}}"}' 29 | headers: 30 | Accept: [application/json] 31 | Accept-Encoding: ['gzip, deflate'] 32 | Connection: [keep-alive] 33 | Content-Length: ['119'] 34 | Content-Type: [application/json] 35 | User-Agent: [dibctl keystoneauth1/2.20.0 python-requests/2.12.4 CPython/2.7.13] 36 | pytest-filtered: ['true'] 37 | method: POST 38 | uri: https://auth.servers.nl01.cloud.servers.com:5000/v2.0/tokens 39 | response: 40 | body: {string: !!python/unicode '{"error": {"message": "The request you have made 41 | requires authentication.", "code": 401, "title": "Unauthorized"}}'} 42 | headers: 43 | content-length: ['114'] 44 | content-type: [application/json] 45 | date: ['Wed, 16 Aug 2017 12:22:09 GMT'] 46 | vary: [X-Auth-Token] 47 | www-authenticate: ['Keystone uri="https://auth.servers.nl01.cloud.servers.com:5000"'] 48 | x-distribution: [Ubuntu] 49 | x-openstack-request-id: [req-d45206d9-3a02-4087-a87b-afa0fdd44f2c] 50 | status: {code: 401, message: Unauthorized} 51 | version: 1 52 | -------------------------------------------------------------------------------- /integration_tests/cassettes/test_upload_image_normal_no_obsolete.yaml: -------------------------------------------------------------------------------- 1 | interactions: 2 | - request: 3 | body: null 4 | headers: 5 | pytest-filtered: ['true'] 6 | method: GET 7 | uri: https://auth.nova-lab-1.mgm.servers.com:5000/v2.0 8 | response: 9 | body: {string: !!python/unicode '{"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", 10 | "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], 11 | "id": "v2.0", "links": [{"href": "https://auth.nova-lab-1.mgm.servers.com:5000/v2.0/", 12 | "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", 13 | "rel": "describedby"}]}}'} 14 | headers: 15 | content-length: ['358'] 16 | content-type: [application/json] 17 | date: ['Thu, 17 Aug 2017 09:25:13 GMT'] 18 | vary: [X-Auth-Token] 19 | x-distribution: [Ubuntu] 20 | x-openstack-request-id: [req-cf439cae-dc7d-4385-8e07-3cea5fc1c1ac] 21 | status: {code: 200, message: OK} 22 | - request: 23 | 
body: !!python/unicode '{"auth": "{\"tenantName\": \"pyvcr\", \"passwordCredentials\": 24 | {\"username\": \"username\", \"password\": \"password\"}}"}' 25 | headers: 26 | Content-Type: [application/json] 27 | pytest-filtered: ['true'] 28 | method: POST 29 | uri: https://auth.nova-lab-1.mgm.servers.com:5000/v2.0/tokens 30 | response: 31 | body: {string: !!python/unicode '{"access": {"token": {"issued_at": "2017-08-17T09:25:13.000000Z", 32 | "expires": "2038-01-15T16:17:18Z", "id": "consealed id", "tenant": {"id": 33 | "61ed529dc6024dc5968acf32b6f4142c", "enabled": true, "description": "", "name": 34 | "consealed name"}, "audit_ids": ["1Q6aQW1JSCG2sp-iJ_rKOw"]}, "serviceCatalog": 35 | [{"endpoints": [{"adminURL": "http://nova-api.p.nova-lab-1.servers.com:28774/v2/61ed529dc6024dc5968acf32b6f4142c", 36 | "region": "lab-1", "id": "2e61ac6cc99345a1b152779600918c2e", "internalURL": 37 | "http://nova-api.p.nova-lab-1.servers.com:28774/v2/61ed529dc6024dc5968acf32b6f4142c", 38 | "publicURL": "https://compute.nova-lab-1.mgm.servers.com:8774/v2/61ed529dc6024dc5968acf32b6f4142c"}], 39 | "endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints": [{"adminURL": 40 | "http://neutron-server.p.nova-lab-1.servers.com:29696", "region": "lab-1", 41 | "id": "6ec649b0f7cc4554a041defadf552fc1", "internalURL": "http://neutron-server.p.nova-lab-1.servers.com:29696", 42 | "publicURL": "https://network.nova-lab-1.mgm.servers.com:9696"}], "endpoints_links": 43 | [], "type": "network", "name": "neutron"}, {"endpoints": [{"adminURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v2/61ed529dc6024dc5968acf32b6f4142c", 44 | "region": "lab-1", "id": "09d47ca5543b41849da1a7690acd5844", "internalURL": 45 | "http://nova-api.p.nova-lab-1.servers.com:28776/v2/61ed529dc6024dc5968acf32b6f4142c", 46 | "publicURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v2/61ed529dc6024dc5968acf32b6f4142c"}], 47 | "endpoints_links": [], "type": "volumev2", "name": "cinderv2"}, {"endpoints": 48 | [{"adminURL": "http://glance-api.p.nova-lab-1.servers.com:29292/v2", "region": 49 | "lab-1", "id": "2982f5e7320040acb9fe1dac441a8551", "internalURL": "http://glance-api.p.nova-lab-1.servers.com:29292/v2", 50 | "publicURL": "https://images.nova-lab-1.mgm.servers.com:9292/v2"}], "endpoints_links": 51 | [], "type": "image", "name": "glance"}, {"endpoints": [{"adminURL": "http://ceilometer-api.p.nova-lab-1.servers.com:28777", 52 | "region": "lab-1", "id": "1cc0a58b1bab45509ec3991bf7111a3f", "internalURL": 53 | "http://ceilometer-api.p.nova-lab-1.servers.com:28777", "publicURL": "https://metering.nova-lab-1.mgm.servers.com:8777"}], 54 | "endpoints_links": [], "type": "metering", "name": "ceilometer"}, {"endpoints": 55 | [{"adminURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v1/61ed529dc6024dc5968acf32b6f4142c", 56 | "region": "lab-1", "id": "6ae4b74962fc4b5abb4fed44dbf534a5", "internalURL": 57 | "http://nova-api.p.nova-lab-1.servers.com:28776/v1/61ed529dc6024dc5968acf32b6f4142c", 58 | "publicURL": "https://volume.nova-lab-1.mgm.servers.com:8776/v1/61ed529dc6024dc5968acf32b6f4142c"}], 59 | "endpoints_links": [], "type": "volume", "name": "cinder"}, {"endpoints": 60 | [{"adminURL": "http://keystone.p.nova-lab-1.servers.com:35357/v2.0", "region": 61 | "lab-1", "id": "19226f12c0bb4438a6032b9c55c1a8e9", "internalURL": "http://keystone.p.nova-lab-1.servers.com:5001/v2.0", 62 | "publicURL": "https://auth.nova-lab-1.mgm.servers.com:5000/v2.0"}], "endpoints_links": 63 | [], "type": "identity", "name": "keystone"}], "user": {"username": 
"consealed 64 | username", "roles_links": [], "id": "155f21999dd34a679797a5eb86b0ee20", "roles": 65 | [{"name": "images"}, {"name": "_member_"}], "name": "consealed name"}, "metadata": 66 | {"is_admin": 0, "roles": ["d80c47ca65a6448fb92b0b948178aa88", "9fe2ff9ee4384b1894a90878d3e92bab"]}}}'} 67 | headers: 68 | content-length: ['3326'] 69 | content-type: [application/json] 70 | date: ['Thu, 17 Aug 2017 09:25:13 GMT'] 71 | vary: [X-Auth-Token] 72 | x-distribution: [Ubuntu] 73 | x-openstack-request-id: [req-f7432cb0-cc31-4633-b947-2beec8ca166e] 74 | status: {code: 200, message: OK} 75 | - request: 76 | body: !!python/unicode 'test 77 | 78 | ' 79 | headers: 80 | Content-Type: [application/octet-stream] 81 | pytest-filtered: ['true'] 82 | x-image-meta-container_format: [bare] 83 | x-image-meta-disk_format: [qcow2] 84 | x-image-meta-is_public: ['False'] 85 | x-image-meta-min_disk: ['0'] 86 | x-image-meta-min_ram: ['0'] 87 | x-image-meta-name: [Empty image] 88 | x-image-meta-protected: ['False'] 89 | method: POST 90 | uri: https://images.nova-lab-1.mgm.servers.com:9292/v2/v1/images 91 | response: 92 | body: {string: !!python/unicode '{"image": {"status": "active", "deleted": false, 93 | "container_format": "bare", "min_ram": 0, "updated_at": "2017-08-17T09:25:15.000000", 94 | "owner": "61ed529dc6024dc5968acf32b6f4142c", "min_disk": 0, "is_public": false, 95 | "deleted_at": null, "id": "06957589-e664-47db-af4a-3712f9476cac", "size": 96 | 5, "virtual_size": null, "name": "Empty image", "checksum": "d8e8fca2dc0f896fd7cb4cb0031ba249", 97 | "created_at": "2017-08-17T09:25:15.000000", "disk_format": "qcow2", "properties": 98 | {}, "protected": false}}'} 99 | headers: 100 | content-length: ['491'] 101 | content-type: [application/json] 102 | date: ['Thu, 17 Aug 2017 09:25:15 GMT'] 103 | etag: [d8e8fca2dc0f896fd7cb4cb0031ba249] 104 | location: ['http://images.nova-lab-1.mgm.servers.com:9292/v1/images/06957589-e664-47db-af4a-3712f9476cac'] 105 | x-openstack-request-id: [req-52c20f65-eeec-48bb-a927-afed283e920f] 106 | status: {code: 201, message: Created} 107 | - request: 108 | body: null 109 | headers: 110 | Content-Type: [application/octet-stream] 111 | pytest-filtered: ['true'] 112 | method: GET 113 | uri: https://images.nova-lab-1.mgm.servers.com:9292/v2/v1/images/detail?limit=20&name=Empty+image 114 | response: 115 | body: {string: !!python/unicode '{"images": [{"status": "active", "deleted_at": 116 | null, "name": "Empty image", "deleted": false, "container_format": "bare", 117 | "created_at": "2017-08-17T09:25:15.000000", "disk_format": "qcow2", "updated_at": 118 | "2017-08-17T09:25:15.000000", "min_disk": 0, "protected": false, "id": "06957589-e664-47db-af4a-3712f9476cac", 119 | "min_ram": 0, "checksum": "d8e8fca2dc0f896fd7cb4cb0031ba249", "owner": "61ed529dc6024dc5968acf32b6f4142c", 120 | "is_public": false, "virtual_size": null, "properties": {}, "size": 5}, {"status": 121 | "active", "deleted_at": null, "name": "Empty image", "deleted": false, "container_format": 122 | "bare", "created_at": "2017-08-16T12:47:24.000000", "disk_format": "raw", 123 | "updated_at": "2017-08-16T12:47:25.000000", "min_disk": 0, "protected": false, 124 | "id": "0389c7ea-130b-44eb-8dc5-02aa909e5450", "min_ram": 0, "checksum": "d8e8fca2dc0f896fd7cb4cb0031ba249", 125 | "owner": "61ed529dc6024dc5968acf32b6f4142c", "is_public": false, "virtual_size": 126 | null, "properties": {}, "size": 5}]}'} 127 | headers: 128 | content-length: ['974'] 129 | content-type: [application/json; charset=UTF-8] 130 | date: ['Thu, 17 Aug 2017 
09:25:15 GMT'] 131 | x-openstack-request-id: [req-608e9dbb-6fb7-49ac-8fbc-dcf12e3444ab] 132 | status: {code: 200, message: OK} 133 | - request: 134 | body: null 135 | headers: 136 | Content-Type: [application/octet-stream] 137 | pytest-filtered: ['true'] 138 | x-glance-registry-purge-props: ['false'] 139 | x-image-meta-name: [Obsolete Empty image] 140 | x-image-meta-property-obsolete: ['True'] 141 | method: PUT 142 | uri: https://images.nova-lab-1.mgm.servers.com:9292/v2/v1/images/0389c7ea-130b-44eb-8dc5-02aa909e5450 143 | response: 144 | body: {string: !!python/unicode '{"image": {"status": "active", "deleted": false, 145 | "container_format": "bare", "min_ram": 0, "updated_at": "2017-08-17T09:25:16.000000", 146 | "owner": "61ed529dc6024dc5968acf32b6f4142c", "min_disk": 0, "is_public": false, 147 | "deleted_at": null, "id": "0389c7ea-130b-44eb-8dc5-02aa909e5450", "size": 148 | 5, "virtual_size": null, "name": "Obsolete Empty image", "checksum": "d8e8fca2dc0f896fd7cb4cb0031ba249", 149 | "created_at": "2017-08-16T12:47:24.000000", "disk_format": "raw", "properties": 150 | {"obsolete": "True"}, "protected": false}}'} 151 | headers: 152 | content-length: ['516'] 153 | content-type: [application/json] 154 | date: ['Thu, 17 Aug 2017 09:25:16 GMT'] 155 | etag: [d8e8fca2dc0f896fd7cb4cb0031ba249] 156 | x-openstack-request-id: [req-042aa4b4-07e4-4c9b-925a-610caad64202] 157 | status: {code: 200, message: OK} 158 | version: 1 159 | -------------------------------------------------------------------------------- /integration_tests/cassettes/test_upload_image_normal_no_obsolete_convertion.yaml: -------------------------------------------------------------------------------- 1 | interactions: 2 | - request: 3 | body: null 4 | headers: 5 | pytest-filtered: ['true'] 6 | method: GET 7 | uri: https://auth.nova-lab-1.mgm.servers.com:5000/v2.0 8 | response: 9 | body: {string: !!python/unicode '{"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", 10 | "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], 11 | "id": "v2.0", "links": [{"href": "https://auth.nova-lab-1.mgm.servers.com:5000/v2.0/", 12 | "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", 13 | "rel": "describedby"}]}}'} 14 | headers: 15 | content-length: ['358'] 16 | content-type: [application/json] 17 | date: ['Thu, 17 Aug 2017 09:25:19 GMT'] 18 | vary: [X-Auth-Token] 19 | x-distribution: [Ubuntu] 20 | x-openstack-request-id: [req-99bf75b5-cb3c-4ac4-9fbc-65d07412a01c] 21 | status: {code: 200, message: OK} 22 | - request: 23 | body: !!python/unicode '{"auth": "{\"tenantName\": \"pyvcr\", \"passwordCredentials\": 24 | {\"username\": \"username\", \"password\": \"password\"}}"}' 25 | headers: 26 | Content-Type: [application/json] 27 | pytest-filtered: ['true'] 28 | method: POST 29 | uri: https://auth.nova-lab-1.mgm.servers.com:5000/v2.0/tokens 30 | response: 31 | body: {string: !!python/unicode '{"access": {"token": {"issued_at": "2017-08-17T09:25:19.000000Z", 32 | "expires": "2038-01-15T16:17:18Z", "id": "consealed id", "tenant": {"id": 33 | "61ed529dc6024dc5968acf32b6f4142c", "enabled": true, "description": "", "name": 34 | "consealed name"}, "audit_ids": ["9d4hQIUSQnKJ2nbS8b72dg"]}, "serviceCatalog": 35 | [{"endpoints": [{"adminURL": "http://nova-api.p.nova-lab-1.servers.com:28774/v2/61ed529dc6024dc5968acf32b6f4142c", 36 | "region": "lab-1", "internalURL": "http://nova-api.p.nova-lab-1.servers.com:28774/v2/61ed529dc6024dc5968acf32b6f4142c", 37 | "id": 
"2e61ac6cc99345a1b152779600918c2e", "publicURL": "https://compute.nova-lab-1.mgm.servers.com:8774/v2/61ed529dc6024dc5968acf32b6f4142c"}], 38 | "endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints": [{"adminURL": 39 | "http://neutron-server.p.nova-lab-1.servers.com:29696", "region": "lab-1", 40 | "internalURL": "http://neutron-server.p.nova-lab-1.servers.com:29696", "id": 41 | "6ec649b0f7cc4554a041defadf552fc1", "publicURL": "https://network.nova-lab-1.mgm.servers.com:9696"}], 42 | "endpoints_links": [], "type": "network", "name": "neutron"}, {"endpoints": 43 | [{"adminURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v2/61ed529dc6024dc5968acf32b6f4142c", 44 | "region": "lab-1", "internalURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v2/61ed529dc6024dc5968acf32b6f4142c", 45 | "id": "09d47ca5543b41849da1a7690acd5844", "publicURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v2/61ed529dc6024dc5968acf32b6f4142c"}], 46 | "endpoints_links": [], "type": "volumev2", "name": "cinderv2"}, {"endpoints": 47 | [{"adminURL": "http://glance-api.p.nova-lab-1.servers.com:29292/v2", "region": 48 | "lab-1", "internalURL": "http://glance-api.p.nova-lab-1.servers.com:29292/v2", 49 | "id": "2982f5e7320040acb9fe1dac441a8551", "publicURL": "https://images.nova-lab-1.mgm.servers.com:9292/v2"}], 50 | "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [{"adminURL": 51 | "http://ceilometer-api.p.nova-lab-1.servers.com:28777", "region": "lab-1", 52 | "internalURL": "http://ceilometer-api.p.nova-lab-1.servers.com:28777", "id": 53 | "1cc0a58b1bab45509ec3991bf7111a3f", "publicURL": "https://metering.nova-lab-1.mgm.servers.com:8777"}], 54 | "endpoints_links": [], "type": "metering", "name": "ceilometer"}, {"endpoints": 55 | [{"adminURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v1/61ed529dc6024dc5968acf32b6f4142c", 56 | "region": "lab-1", "internalURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v1/61ed529dc6024dc5968acf32b6f4142c", 57 | "id": "6ae4b74962fc4b5abb4fed44dbf534a5", "publicURL": "https://volume.nova-lab-1.mgm.servers.com:8776/v1/61ed529dc6024dc5968acf32b6f4142c"}], 58 | "endpoints_links": [], "type": "volume", "name": "cinder"}, {"endpoints": 59 | [{"adminURL": "http://keystone.p.nova-lab-1.servers.com:35357/v2.0", "region": 60 | "lab-1", "internalURL": "http://keystone.p.nova-lab-1.servers.com:5001/v2.0", 61 | "id": "19226f12c0bb4438a6032b9c55c1a8e9", "publicURL": "https://auth.nova-lab-1.mgm.servers.com:5000/v2.0"}], 62 | "endpoints_links": [], "type": "identity", "name": "keystone"}], "user": {"username": 63 | "consealed username", "roles_links": [], "id": "155f21999dd34a679797a5eb86b0ee20", 64 | "roles": [{"name": "images"}, {"name": "_member_"}], "name": "consealed name"}, 65 | "metadata": {"is_admin": 0, "roles": ["d80c47ca65a6448fb92b0b948178aa88", 66 | "9fe2ff9ee4384b1894a90878d3e92bab"]}}}'} 67 | headers: 68 | content-length: ['3326'] 69 | content-type: [application/json] 70 | date: ['Thu, 17 Aug 2017 09:25:19 GMT'] 71 | vary: [X-Auth-Token] 72 | x-distribution: [Ubuntu] 73 | x-openstack-request-id: [req-93b9c39b-690c-4a8b-8541-f02245159a5e] 74 | status: {code: 200, message: OK} 75 | - request: 76 | body: !!python/unicode 'test 77 | 78 | ' 79 | headers: 80 | Content-Type: [application/octet-stream] 81 | pytest-filtered: ['true'] 82 | x-image-meta-container_format: [bare] 83 | x-image-meta-disk_format: [raw] 84 | x-image-meta-is_public: ['False'] 85 | x-image-meta-min_disk: ['0'] 86 | x-image-meta-min_ram: ['0'] 87 | x-image-meta-name: 
[Empty image] 88 | x-image-meta-protected: ['False'] 89 | method: POST 90 | uri: https://images.nova-lab-1.mgm.servers.com:9292/v2/v1/images 91 | response: 92 | body: {string: !!python/unicode '{"image": {"status": "active", "deleted": false, 93 | "container_format": "bare", "min_ram": 0, "updated_at": "2017-08-17T09:25:21.000000", 94 | "owner": "61ed529dc6024dc5968acf32b6f4142c", "min_disk": 0, "is_public": false, 95 | "deleted_at": null, "id": "dc7131e8-b6a0-404f-918f-13c3781e7301", "size": 96 | 5, "virtual_size": null, "name": "Empty image", "checksum": "d8e8fca2dc0f896fd7cb4cb0031ba249", 97 | "created_at": "2017-08-17T09:25:20.000000", "disk_format": "raw", "properties": 98 | {}, "protected": false}}'} 99 | headers: 100 | content-length: ['489'] 101 | content-type: [application/json] 102 | date: ['Thu, 17 Aug 2017 09:25:20 GMT'] 103 | etag: [d8e8fca2dc0f896fd7cb4cb0031ba249] 104 | location: ['http://images.nova-lab-1.mgm.servers.com:9292/v1/images/dc7131e8-b6a0-404f-918f-13c3781e7301'] 105 | x-openstack-request-id: [req-4ed09f41-8677-4f8c-8c19-bcd452f13de7] 106 | status: {code: 201, message: Created} 107 | - request: 108 | body: null 109 | headers: 110 | Content-Type: [application/octet-stream] 111 | pytest-filtered: ['true'] 112 | method: GET 113 | uri: https://images.nova-lab-1.mgm.servers.com:9292/v2/v1/images/detail?limit=20&name=Empty+image 114 | response: 115 | body: {string: !!python/unicode '{"images": [{"status": "active", "deleted_at": 116 | null, "name": "Empty image", "deleted": false, "container_format": "bare", 117 | "created_at": "2017-08-17T09:25:20.000000", "disk_format": "raw", "updated_at": 118 | "2017-08-17T09:25:21.000000", "min_disk": 0, "protected": false, "id": "dc7131e8-b6a0-404f-918f-13c3781e7301", 119 | "min_ram": 0, "checksum": "d8e8fca2dc0f896fd7cb4cb0031ba249", "owner": "61ed529dc6024dc5968acf32b6f4142c", 120 | "is_public": false, "virtual_size": null, "properties": {}, "size": 5}, {"status": 121 | "active", "deleted_at": null, "name": "Empty image", "deleted": false, "container_format": 122 | "bare", "created_at": "2017-08-17T09:25:18.000000", "disk_format": "qcow2", 123 | "updated_at": "2017-08-17T09:25:18.000000", "min_disk": 0, "protected": false, 124 | "id": "d01189e4-57eb-4a99-aa4a-7a482e9063fe", "min_ram": 0, "checksum": "d8e8fca2dc0f896fd7cb4cb0031ba249", 125 | "owner": "61ed529dc6024dc5968acf32b6f4142c", "is_public": false, "virtual_size": 126 | null, "properties": {}, "size": 5}]}'} 127 | headers: 128 | content-length: ['974'] 129 | content-type: [application/json; charset=UTF-8] 130 | date: ['Thu, 17 Aug 2017 09:25:21 GMT'] 131 | x-openstack-request-id: [req-93b85814-20e6-4bdc-b8b3-6ccafb391d9f] 132 | status: {code: 200, message: OK} 133 | - request: 134 | body: null 135 | headers: 136 | Content-Type: [application/octet-stream] 137 | pytest-filtered: ['true'] 138 | x-glance-registry-purge-props: ['false'] 139 | x-image-meta-name: [Obsolete Empty image] 140 | x-image-meta-property-obsolete: ['True'] 141 | method: PUT 142 | uri: https://images.nova-lab-1.mgm.servers.com:9292/v2/v1/images/d01189e4-57eb-4a99-aa4a-7a482e9063fe 143 | response: 144 | body: {string: !!python/unicode '{"image": {"status": "active", "deleted": false, 145 | "container_format": "bare", "min_ram": 0, "updated_at": "2017-08-17T09:25:21.000000", 146 | "owner": "61ed529dc6024dc5968acf32b6f4142c", "min_disk": 0, "is_public": false, 147 | "deleted_at": null, "id": "d01189e4-57eb-4a99-aa4a-7a482e9063fe", "size": 148 | 5, "virtual_size": null, "name": "Obsolete Empty image", 
"checksum": "d8e8fca2dc0f896fd7cb4cb0031ba249", 149 | "created_at": "2017-08-17T09:25:18.000000", "disk_format": "qcow2", "properties": 150 | {"obsolete": "True"}, "protected": false}}'} 151 | headers: 152 | content-length: ['518'] 153 | content-type: [application/json] 154 | date: ['Thu, 17 Aug 2017 09:25:21 GMT'] 155 | etag: [d8e8fca2dc0f896fd7cb4cb0031ba249] 156 | x-openstack-request-id: [req-5deed933-4ee2-4e12-bb30-e52688f89d2b] 157 | status: {code: 200, message: OK} 158 | version: 1 159 | -------------------------------------------------------------------------------- /integration_tests/cassettes/test_upload_image_normal_obsolete.yaml: -------------------------------------------------------------------------------- 1 | interactions: 2 | - request: 3 | body: null 4 | headers: 5 | pytest-filtered: ['true'] 6 | method: GET 7 | uri: https://auth.nova-lab-1.mgm.servers.com:5000/v2.0 8 | response: 9 | body: {string: !!python/unicode '{"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", 10 | "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], 11 | "id": "v2.0", "links": [{"href": "https://auth.nova-lab-1.mgm.servers.com:5000/v2.0/", 12 | "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", 13 | "rel": "describedby"}]}}'} 14 | headers: 15 | content-length: ['358'] 16 | content-type: [application/json] 17 | date: ['Thu, 17 Aug 2017 09:25:16 GMT'] 18 | vary: [X-Auth-Token] 19 | x-distribution: [Ubuntu] 20 | x-openstack-request-id: [req-dd627d0f-a09d-481a-947e-d8398e640cd7] 21 | status: {code: 200, message: OK} 22 | - request: 23 | body: !!python/unicode '{"auth": "{\"tenantName\": \"pyvcr\", \"passwordCredentials\": 24 | {\"username\": \"username\", \"password\": \"password\"}}"}' 25 | headers: 26 | Content-Type: [application/json] 27 | pytest-filtered: ['true'] 28 | method: POST 29 | uri: https://auth.nova-lab-1.mgm.servers.com:5000/v2.0/tokens 30 | response: 31 | body: {string: !!python/unicode '{"access": {"token": {"issued_at": "2017-08-17T09:25:17.000000Z", 32 | "expires": "2038-01-15T16:17:18Z", "id": "consealed id", "tenant": {"id": 33 | "61ed529dc6024dc5968acf32b6f4142c", "enabled": true, "description": "", "name": 34 | "consealed name"}, "audit_ids": ["i6MKktn3R0ei9x7LMF8ONg"]}, "serviceCatalog": 35 | [{"endpoints": [{"adminURL": "http://nova-api.p.nova-lab-1.servers.com:28774/v2/61ed529dc6024dc5968acf32b6f4142c", 36 | "region": "lab-1", "internalURL": "http://nova-api.p.nova-lab-1.servers.com:28774/v2/61ed529dc6024dc5968acf32b6f4142c", 37 | "id": "2e61ac6cc99345a1b152779600918c2e", "publicURL": "https://compute.nova-lab-1.mgm.servers.com:8774/v2/61ed529dc6024dc5968acf32b6f4142c"}], 38 | "endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints": [{"adminURL": 39 | "http://neutron-server.p.nova-lab-1.servers.com:29696", "region": "lab-1", 40 | "internalURL": "http://neutron-server.p.nova-lab-1.servers.com:29696", "id": 41 | "6ec649b0f7cc4554a041defadf552fc1", "publicURL": "https://network.nova-lab-1.mgm.servers.com:9696"}], 42 | "endpoints_links": [], "type": "network", "name": "neutron"}, {"endpoints": 43 | [{"adminURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v2/61ed529dc6024dc5968acf32b6f4142c", 44 | "region": "lab-1", "internalURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v2/61ed529dc6024dc5968acf32b6f4142c", 45 | "id": "09d47ca5543b41849da1a7690acd5844", "publicURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v2/61ed529dc6024dc5968acf32b6f4142c"}], 46 | 
"endpoints_links": [], "type": "volumev2", "name": "cinderv2"}, {"endpoints": 47 | [{"adminURL": "http://glance-api.p.nova-lab-1.servers.com:29292/v2", "region": 48 | "lab-1", "internalURL": "http://glance-api.p.nova-lab-1.servers.com:29292/v2", 49 | "id": "2982f5e7320040acb9fe1dac441a8551", "publicURL": "https://images.nova-lab-1.mgm.servers.com:9292/v2"}], 50 | "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [{"adminURL": 51 | "http://ceilometer-api.p.nova-lab-1.servers.com:28777", "region": "lab-1", 52 | "internalURL": "http://ceilometer-api.p.nova-lab-1.servers.com:28777", "id": 53 | "1cc0a58b1bab45509ec3991bf7111a3f", "publicURL": "https://metering.nova-lab-1.mgm.servers.com:8777"}], 54 | "endpoints_links": [], "type": "metering", "name": "ceilometer"}, {"endpoints": 55 | [{"adminURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v1/61ed529dc6024dc5968acf32b6f4142c", 56 | "region": "lab-1", "internalURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v1/61ed529dc6024dc5968acf32b6f4142c", 57 | "id": "6ae4b74962fc4b5abb4fed44dbf534a5", "publicURL": "https://volume.nova-lab-1.mgm.servers.com:8776/v1/61ed529dc6024dc5968acf32b6f4142c"}], 58 | "endpoints_links": [], "type": "volume", "name": "cinder"}, {"endpoints": 59 | [{"adminURL": "http://keystone.p.nova-lab-1.servers.com:35357/v2.0", "region": 60 | "lab-1", "internalURL": "http://keystone.p.nova-lab-1.servers.com:5001/v2.0", 61 | "id": "19226f12c0bb4438a6032b9c55c1a8e9", "publicURL": "https://auth.nova-lab-1.mgm.servers.com:5000/v2.0"}], 62 | "endpoints_links": [], "type": "identity", "name": "keystone"}], "user": {"username": 63 | "consealed username", "roles_links": [], "id": "155f21999dd34a679797a5eb86b0ee20", 64 | "roles": [{"name": "images"}, {"name": "_member_"}], "name": "consealed name"}, 65 | "metadata": {"is_admin": 0, "roles": ["d80c47ca65a6448fb92b0b948178aa88", 66 | "9fe2ff9ee4384b1894a90878d3e92bab"]}}}'} 67 | headers: 68 | content-length: ['3326'] 69 | content-type: [application/json] 70 | date: ['Thu, 17 Aug 2017 09:25:17 GMT'] 71 | vary: [X-Auth-Token] 72 | x-distribution: [Ubuntu] 73 | x-openstack-request-id: [req-a550184a-f3ab-4ec7-863b-743ccc5f12c2] 74 | status: {code: 200, message: OK} 75 | - request: 76 | body: !!python/unicode 'test 77 | 78 | ' 79 | headers: 80 | Content-Type: [application/octet-stream] 81 | pytest-filtered: ['true'] 82 | x-image-meta-container_format: [bare] 83 | x-image-meta-disk_format: [qcow2] 84 | x-image-meta-is_public: ['False'] 85 | x-image-meta-min_disk: ['0'] 86 | x-image-meta-min_ram: ['0'] 87 | x-image-meta-name: [Empty image] 88 | x-image-meta-protected: ['False'] 89 | method: POST 90 | uri: https://images.nova-lab-1.mgm.servers.com:9292/v2/v1/images 91 | response: 92 | body: {string: !!python/unicode '{"image": {"status": "active", "deleted": false, 93 | "container_format": "bare", "min_ram": 0, "updated_at": "2017-08-17T09:25:18.000000", 94 | "owner": "61ed529dc6024dc5968acf32b6f4142c", "min_disk": 0, "is_public": false, 95 | "deleted_at": null, "id": "d01189e4-57eb-4a99-aa4a-7a482e9063fe", "size": 96 | 5, "virtual_size": null, "name": "Empty image", "checksum": "d8e8fca2dc0f896fd7cb4cb0031ba249", 97 | "created_at": "2017-08-17T09:25:18.000000", "disk_format": "qcow2", "properties": 98 | {}, "protected": false}}'} 99 | headers: 100 | content-length: ['491'] 101 | content-type: [application/json] 102 | date: ['Thu, 17 Aug 2017 09:25:18 GMT'] 103 | etag: [d8e8fca2dc0f896fd7cb4cb0031ba249] 104 | location: 
['http://images.nova-lab-1.mgm.servers.com:9292/v1/images/d01189e4-57eb-4a99-aa4a-7a482e9063fe'] 105 | x-openstack-request-id: [req-148725d5-2682-4d76-881a-2e049ee54733] 106 | status: {code: 201, message: Created} 107 | - request: 108 | body: null 109 | headers: 110 | Content-Type: [application/octet-stream] 111 | pytest-filtered: ['true'] 112 | method: GET 113 | uri: https://images.nova-lab-1.mgm.servers.com:9292/v2/v1/images/detail?limit=20&name=Empty+image 114 | response: 115 | body: {string: !!python/unicode '{"images": [{"status": "active", "deleted_at": 116 | null, "name": "Empty image", "deleted": false, "container_format": "bare", 117 | "created_at": "2017-08-17T09:25:18.000000", "disk_format": "qcow2", "updated_at": 118 | "2017-08-17T09:25:18.000000", "min_disk": 0, "protected": false, "id": "d01189e4-57eb-4a99-aa4a-7a482e9063fe", 119 | "min_ram": 0, "checksum": "d8e8fca2dc0f896fd7cb4cb0031ba249", "owner": "61ed529dc6024dc5968acf32b6f4142c", 120 | "is_public": false, "virtual_size": null, "properties": {}, "size": 5}, {"status": 121 | "active", "deleted_at": null, "name": "Empty image", "deleted": false, "container_format": 122 | "bare", "created_at": "2017-08-17T09:25:15.000000", "disk_format": "qcow2", 123 | "updated_at": "2017-08-17T09:25:15.000000", "min_disk": 0, "protected": false, 124 | "id": "06957589-e664-47db-af4a-3712f9476cac", "min_ram": 0, "checksum": "d8e8fca2dc0f896fd7cb4cb0031ba249", 125 | "owner": "61ed529dc6024dc5968acf32b6f4142c", "is_public": false, "virtual_size": 126 | null, "properties": {}, "size": 5}]}'} 127 | headers: 128 | content-length: ['976'] 129 | content-type: [application/json; charset=UTF-8] 130 | date: ['Thu, 17 Aug 2017 09:25:18 GMT'] 131 | x-openstack-request-id: [req-62e63c83-828d-4af6-95fb-d45ee7cbfe39] 132 | status: {code: 200, message: OK} 133 | - request: 134 | body: null 135 | headers: 136 | Content-Type: [application/octet-stream] 137 | pytest-filtered: ['true'] 138 | x-glance-registry-purge-props: ['false'] 139 | x-image-meta-name: [Obsolete Empty image] 140 | x-image-meta-property-obsolete: ['True'] 141 | method: PUT 142 | uri: https://images.nova-lab-1.mgm.servers.com:9292/v2/v1/images/06957589-e664-47db-af4a-3712f9476cac 143 | response: 144 | body: {string: !!python/unicode '{"image": {"status": "active", "deleted": false, 145 | "container_format": "bare", "min_ram": 0, "updated_at": "2017-08-17T09:25:19.000000", 146 | "owner": "61ed529dc6024dc5968acf32b6f4142c", "min_disk": 0, "is_public": false, 147 | "deleted_at": null, "id": "06957589-e664-47db-af4a-3712f9476cac", "size": 148 | 5, "virtual_size": null, "name": "Obsolete Empty image", "checksum": "d8e8fca2dc0f896fd7cb4cb0031ba249", 149 | "created_at": "2017-08-17T09:25:15.000000", "disk_format": "qcow2", "properties": 150 | {"obsolete": "True"}, "protected": false}}'} 151 | headers: 152 | content-length: ['518'] 153 | content-type: [application/json] 154 | date: ['Thu, 17 Aug 2017 09:25:18 GMT'] 155 | etag: [d8e8fca2dc0f896fd7cb4cb0031ba249] 156 | x-openstack-request-id: [req-8cd4d179-96d8-4c46-bd20-fca2f08e3263] 157 | status: {code: 200, message: OK} 158 | version: 1 159 | -------------------------------------------------------------------------------- /integration_tests/cassettes/test_upload_image_with_sizes_and_protection.yaml: -------------------------------------------------------------------------------- 1 | interactions: 2 | - request: 3 | body: null 4 | headers: 5 | pytest-filtered: ['true'] 6 | method: GET 7 | uri: https://auth.nova-lab-1.mgm.servers.com:5000/v2.0 8 | 
response: 9 | body: {string: !!python/unicode '{"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", 10 | "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], 11 | "id": "v2.0", "links": [{"href": "https://auth.nova-lab-1.mgm.servers.com:5000/v2.0/", 12 | "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", 13 | "rel": "describedby"}]}}'} 14 | headers: 15 | content-length: ['358'] 16 | content-type: [application/json] 17 | date: ['Thu, 17 Aug 2017 11:06:17 GMT'] 18 | vary: [X-Auth-Token] 19 | x-distribution: [Ubuntu] 20 | x-openstack-request-id: [req-01e6e09e-1fc6-4605-9815-e9f539c0bfbb] 21 | status: {code: 200, message: OK} 22 | - request: 23 | body: !!python/unicode '{"auth": "{\"tenantName\": \"pyvcr\", \"passwordCredentials\": 24 | {\"username\": \"username\", \"password\": \"password\"}}"}' 25 | headers: 26 | Content-Type: [application/json] 27 | pytest-filtered: ['true'] 28 | method: POST 29 | uri: https://auth.nova-lab-1.mgm.servers.com:5000/v2.0/tokens 30 | response: 31 | body: {string: !!python/unicode '{"access": {"token": {"issued_at": "2017-08-17T11:06:17.000000Z", 32 | "expires": "2038-01-15T16:17:18Z", "id": "consealed id", "tenant": {"id": 33 | "61ed529dc6024dc5968acf32b6f4142c", "enabled": true, "description": "", "name": 34 | "consealed name"}, "audit_ids": ["Eh510ctiQ3KtViGV7nRw9A"]}, "serviceCatalog": 35 | [{"endpoints": [{"adminURL": "http://nova-api.p.nova-lab-1.servers.com:28774/v2/61ed529dc6024dc5968acf32b6f4142c", 36 | "region": "lab-1", "internalURL": "http://nova-api.p.nova-lab-1.servers.com:28774/v2/61ed529dc6024dc5968acf32b6f4142c", 37 | "id": "2e61ac6cc99345a1b152779600918c2e", "publicURL": "https://compute.nova-lab-1.mgm.servers.com:8774/v2/61ed529dc6024dc5968acf32b6f4142c"}], 38 | "endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints": [{"adminURL": 39 | "http://neutron-server.p.nova-lab-1.servers.com:29696", "region": "lab-1", 40 | "internalURL": "http://neutron-server.p.nova-lab-1.servers.com:29696", "id": 41 | "6ec649b0f7cc4554a041defadf552fc1", "publicURL": "https://network.nova-lab-1.mgm.servers.com:9696"}], 42 | "endpoints_links": [], "type": "network", "name": "neutron"}, {"endpoints": 43 | [{"adminURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v2/61ed529dc6024dc5968acf32b6f4142c", 44 | "region": "lab-1", "internalURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v2/61ed529dc6024dc5968acf32b6f4142c", 45 | "id": "09d47ca5543b41849da1a7690acd5844", "publicURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v2/61ed529dc6024dc5968acf32b6f4142c"}], 46 | "endpoints_links": [], "type": "volumev2", "name": "cinderv2"}, {"endpoints": 47 | [{"adminURL": "http://glance-api.p.nova-lab-1.servers.com:29292/v2", "region": 48 | "lab-1", "internalURL": "http://glance-api.p.nova-lab-1.servers.com:29292/v2", 49 | "id": "2982f5e7320040acb9fe1dac441a8551", "publicURL": "https://images.nova-lab-1.mgm.servers.com:9292/v2"}], 50 | "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [{"adminURL": 51 | "http://ceilometer-api.p.nova-lab-1.servers.com:28777", "region": "lab-1", 52 | "internalURL": "http://ceilometer-api.p.nova-lab-1.servers.com:28777", "id": 53 | "1cc0a58b1bab45509ec3991bf7111a3f", "publicURL": "https://metering.nova-lab-1.mgm.servers.com:8777"}], 54 | "endpoints_links": [], "type": "metering", "name": "ceilometer"}, {"endpoints": 55 | [{"adminURL": 
"http://nova-api.p.nova-lab-1.servers.com:28776/v1/61ed529dc6024dc5968acf32b6f4142c", 56 | "region": "lab-1", "internalURL": "http://nova-api.p.nova-lab-1.servers.com:28776/v1/61ed529dc6024dc5968acf32b6f4142c", 57 | "id": "6ae4b74962fc4b5abb4fed44dbf534a5", "publicURL": "https://volume.nova-lab-1.mgm.servers.com:8776/v1/61ed529dc6024dc5968acf32b6f4142c"}], 58 | "endpoints_links": [], "type": "volume", "name": "cinder"}, {"endpoints": 59 | [{"adminURL": "http://keystone.p.nova-lab-1.servers.com:35357/v2.0", "region": 60 | "lab-1", "internalURL": "http://keystone.p.nova-lab-1.servers.com:5001/v2.0", 61 | "id": "19226f12c0bb4438a6032b9c55c1a8e9", "publicURL": "https://auth.nova-lab-1.mgm.servers.com:5000/v2.0"}], 62 | "endpoints_links": [], "type": "identity", "name": "keystone"}], "user": {"username": 63 | "consealed username", "roles_links": [], "id": "155f21999dd34a679797a5eb86b0ee20", 64 | "roles": [{"name": "images"}, {"name": "_member_"}], "name": "consealed name"}, 65 | "metadata": {"is_admin": 0, "roles": ["d80c47ca65a6448fb92b0b948178aa88", 66 | "9fe2ff9ee4384b1894a90878d3e92bab"]}}}'} 67 | headers: 68 | content-length: ['3326'] 69 | content-type: [application/json] 70 | date: ['Thu, 17 Aug 2017 11:06:17 GMT'] 71 | vary: [X-Auth-Token] 72 | x-distribution: [Ubuntu] 73 | x-openstack-request-id: [req-f4b71ae6-7438-46ec-be28-998808eefadb] 74 | status: {code: 200, message: OK} 75 | - request: 76 | body: !!python/unicode 'test 77 | 78 | ' 79 | headers: 80 | Content-Type: [application/octet-stream] 81 | pytest-filtered: ['true'] 82 | x-image-meta-container_format: [bare] 83 | x-image-meta-disk_format: [qcow2] 84 | x-image-meta-is_public: ['False'] 85 | x-image-meta-min_disk: ['30'] 86 | x-image-meta-min_ram: ['4'] 87 | x-image-meta-name: [Empty image] 88 | x-image-meta-protected: ['True'] 89 | method: POST 90 | uri: https://images.nova-lab-1.mgm.servers.com:9292/v2/v1/images 91 | response: 92 | body: {string: !!python/unicode '{"image": {"status": "active", "deleted": false, 93 | "container_format": "bare", "min_ram": 4, "updated_at": "2017-08-17T11:06:19.000000", 94 | "owner": "61ed529dc6024dc5968acf32b6f4142c", "min_disk": 30, "is_public": 95 | false, "deleted_at": null, "id": "45f16360-f7c2-450d-ace0-3bf046f104bd", "size": 96 | 5, "virtual_size": null, "name": "Empty image", "checksum": "d8e8fca2dc0f896fd7cb4cb0031ba249", 97 | "created_at": "2017-08-17T11:06:18.000000", "disk_format": "qcow2", "properties": 98 | {}, "protected": true}}'} 99 | headers: 100 | content-length: ['491'] 101 | content-type: [application/json] 102 | date: ['Thu, 17 Aug 2017 11:06:18 GMT'] 103 | etag: [d8e8fca2dc0f896fd7cb4cb0031ba249] 104 | location: ['http://images.nova-lab-1.mgm.servers.com:9292/v1/images/45f16360-f7c2-450d-ace0-3bf046f104bd'] 105 | x-openstack-request-id: [req-ec83fad0-c93a-410a-85d0-b5362c808099] 106 | status: {code: 201, message: Created} 107 | - request: 108 | body: null 109 | headers: 110 | Content-Type: [application/octet-stream] 111 | pytest-filtered: ['true'] 112 | method: GET 113 | uri: https://images.nova-lab-1.mgm.servers.com:9292/v2/v1/images/detail?limit=20&name=Empty+image 114 | response: 115 | body: {string: !!python/unicode '{"images": [{"status": "active", "deleted_at": 116 | null, "name": "Empty image", "deleted": false, "container_format": "bare", 117 | "created_at": "2017-08-17T11:06:18.000000", "disk_format": "qcow2", "updated_at": 118 | "2017-08-17T11:06:19.000000", "min_disk": 30, "protected": true, "id": "45f16360-f7c2-450d-ace0-3bf046f104bd", 119 | "min_ram": 4, 
"checksum": "d8e8fca2dc0f896fd7cb4cb0031ba249", "owner": "61ed529dc6024dc5968acf32b6f4142c", 120 | "is_public": false, "virtual_size": null, "properties": {}, "size": 5}]}'} 121 | headers: 122 | content-length: ['494'] 123 | content-type: [application/json; charset=UTF-8] 124 | date: ['Thu, 17 Aug 2017 11:06:18 GMT'] 125 | x-openstack-request-id: [req-ecfe8040-5b71-4291-ac57-cb00821b200a] 126 | status: {code: 200, message: OK} 127 | version: 1 128 | -------------------------------------------------------------------------------- /integration_tests/clear_creds.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | ''' 3 | This script cleanup credentials: 4 | - replaces passwords with word 'password' 5 | - replaces usernames with name 'username' 6 | 7 | It cleanup test.yaml and upload.yaml files. 8 | 9 | Non cleaned files (originals) are moved to .secret: 10 | - test.yaml -> test.yaml.secret 11 | - upload.yaml -> upload.yaml.secret 12 | 13 | Cleared copies stored in dibctl: 14 | - dibctl/test.yaml 15 | - dibctl/upload.yaml 16 | 17 | Addtitionally it checks all cassetess for found passwords 18 | and usernames in all cassettes. They shouldn't be there. 19 | ''' 20 | import yaml 21 | import os 22 | 23 | CFG_LIST = ('test.yaml', 'upload.yaml') 24 | pw_replacements = [] 25 | un_replacements = [] 26 | CASSETTES_LOCATION = 'cassettes' 27 | PUBLIC_CONFIG_LOCATION = 'dibctl' 28 | 29 | for cfgname in CFG_LIST: 30 | 31 | obj = yaml.load(open(cfgname, 'r')) 32 | 33 | for e in obj: 34 | if 'username' in obj[e]['keystone']: 35 | un_replacements.append(obj[e]['keystone']['username']) 36 | obj[e]['keystone']['username'] = "username" 37 | 38 | if 'password' in obj[e]['keystone']: 39 | pw_replacements.append(obj[e]['keystone']['password']) 40 | obj[e]['keystone']['password'] = "password" 41 | if 'tenant_name' in obj[e]['keystone']: 42 | obj[e]['keystone']['tenant_name'] = "pyvcr" 43 | open(os.path.join(PUBLIC_CONFIG_LOCATION, cfgname), 'w').write( 44 | yaml.dump(obj) 45 | ) 46 | os.rename(cfgname, cfgname + '.secret') 47 | print("Cleared passwords: %s\nCleared usernames: %s" % ( 48 | pw_replacements, 49 | un_replacements 50 | )) 51 | 52 | 53 | def check_location(location, pw_list): 54 | for f in os.listdir(location): 55 | path = os.path.join(location, f) 56 | cassette = open(path, 'r').read() 57 | for item in pw_list: 58 | if cassette.find(item) != -1: 59 | print("CRITICAL!!!! 
Found password %s in %s" % ( 60 | item, path 61 | )) 62 | 63 | 64 | check_location(CASSETTES_LOCATION, pw_replacements) 65 | check_location(PUBLIC_CONFIG_LOCATION, pw_replacements) 66 | -------------------------------------------------------------------------------- /integration_tests/conftest.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import vcr 3 | import json 4 | import pytest 5 | import sys 6 | import copy 7 | 8 | 9 | class HappyVCR(object): 10 | 11 | def __init__(self): 12 | self.VCR = vcr.VCR( 13 | cassette_library_dir='cassettes/', 14 | record_mode='once', 15 | match_on=['uri', 'method', 'headers', 'body'], 16 | filter_headers=( 17 | 'Content-Length', 'User-Agent', 18 | 'Accept-Encoding', 'Connection', 'Accept' 19 | ), 20 | before_record_request=self.filter_request, 21 | before_record_response=self.filter_response 22 | ) 23 | self.VCR.decode_compressed_response = True 24 | logging.basicConfig() 25 | vcr_log = logging.getLogger('vcr') 26 | vcr_log.setLevel(logging.INFO) 27 | ch = logging.FileHandler('/tmp/requests.log', mode='w') 28 | formatter = logging.Formatter( 29 | '%(asctime)s - %(name)s - %(levelname)s - %(message)s' 30 | ) 31 | ch.setFormatter(formatter) 32 | ch.setLevel(logging.INFO) 33 | vcr_log.addHandler(ch) 34 | vcr_log.debug('Set up logging') 35 | self.log = vcr_log 36 | root = logging.getLogger() 37 | root.setLevel(logging.INFO) 38 | ch = logging.StreamHandler(sys.stdout) 39 | ch.setLevel(logging.INFO) 40 | formatter = logging.Formatter( 41 | '%(asctime)s - %(name)s - %(levelname)s - %(message)s' 42 | ) 43 | ch.setFormatter(formatter) 44 | root.addHandler(ch) 45 | self.count = 0 46 | 47 | def filter_request(self, request): 48 | # if 'pytest-filtered' in request.headers: 49 | # self.log.debug("repeated request to %s %s, ignoring" % ( 50 | # request.method, 51 | # request.uri 52 | # )) 53 | # return request 54 | request = copy.deepcopy(request) 55 | request.add_header('pytest-filtered', 'true') 56 | self.log.debug("filter request: %s: %s, %s, %s" % ( 57 | id(request), self.count, request.method, request.uri 58 | )) 59 | if 'X-Auth-Token' in request.headers: 60 | self.log.debug("old token %s" % request.headers['X-Auth-Token']) 61 | request.headers.pop('X-Auth-Token') 62 | self.log.debug('Consealing X-Auth-Token header in %s (%s)' % ( 63 | request.uri, id(request) 64 | )) 65 | request.headers.pop('x-distribution', None) 66 | if 'tokens' in request.uri and request.method == 'POST': 67 | self.log.debug("%s: Token request detected", id(request)) 68 | unsafe = json.loads(request.body) 69 | replacement = '{"tenantName": "pyvcr", ' + \ 70 | '"passwordCredentials": ' + \ 71 | '{"username": "username", "password": "password"}}' 72 | if 'auth' in unsafe: 73 | self.log.debug("old creds %s" % str(unsafe['auth'])) 74 | unsafe['auth'] = replacement 75 | safe = unsafe 76 | request.body = json.dumps(safe) 77 | self.log.debug('Consealing request credentials in %s (%s)' % ( 78 | request.uri, id(request) 79 | )) 80 | if 'images' in request.uri and request.method == 'POST': 81 | if len(request.body) > 256: 82 | self.log.debug("Body is too large (%s bytes), truncating" % ( 83 | len(request.body))) 84 | request.body = "'1f\r\nBody was too large, " + \ 85 | "truncated.\n\r\n0\r\n\r\n'" 86 | return request 87 | 88 | def filter_response(self, response): 89 | body = response['body'] 90 | if 'string' in body: 91 | try: 92 | decoded_string = json.loads(body['string']) 93 | except: 94 | self.log.debug("non-json body, ignoring") 95 | 
return response 96 | if 'access' in decoded_string: 97 | access = copy.deepcopy(decoded_string['access']) 98 | if 'token' in access: 99 | access['token']['expires'] = '2038-01-15T16:17:18Z' 100 | self.log.debug("Patching token expiration date") 101 | access['token']['id'] = 'consealed id' 102 | self.log.debug("Consealing token id") 103 | access['token']['tenant']['description'] = '' 104 | self.log.debug("Consealing tenant description") 105 | access['token']['tenant']['name'] = 'consealed name' 106 | self.log.debug("Consealing tenant name") 107 | if 'user' in access: 108 | access['user']['username'] = 'consealed username' 109 | self.log.debug("Consealing username") 110 | access['user']['username'] = 'consealed username' 111 | self.log.debug("Consealing username") 112 | access['user']['name'] = 'consealed name' 113 | self.log.debug("Consealing user name") 114 | response['body']['string'] = json.dumps({'access': access}) 115 | else: 116 | self.log.debug("no access section in the body, ignoring") 117 | return response 118 | 119 | 120 | @pytest.fixture(scope="function") 121 | def happy_vcr(request): 122 | vcr = HappyVCR() 123 | return vcr.VCR.use_cassette 124 | -------------------------------------------------------------------------------- /integration_tests/damaged.img.qcow2: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/serverscom/dibctl/0749a79e64113e951cdfddfdcb4df4d061040fe2/integration_tests/damaged.img.qcow2 -------------------------------------------------------------------------------- /integration_tests/dibctl/images.yaml: -------------------------------------------------------------------------------- 1 | xenial: 2 | filename: xenial.img.qcow2 3 | glance: 4 | upload_timeout: 7200 5 | name: 'LAB Ubuntu Xenial 16.04 (x86_64)' 6 | properties: 7 | display_order: "4" 8 | public: False 9 | dib: 10 | environment_variables: 11 | ARCH: 'amd64' 12 | ELEMENTS_PATH: "elements" 13 | elements: 14 | - serverscom-ubuntu-xenial 15 | tests: 16 | ssh: 17 | username: cloud-user 18 | wait_for_port: 22 19 | port_wait_timeout: 30 20 | environment_name: nl01 21 | tests_list: 22 | - shell: ./simple_success.bash 23 | 24 | xenial-sizes: 25 | filename: xenial.img.qcow2 26 | glance: 27 | upload_timeout: 7200 28 | name: 'LAB Ubuntu Xenial 16.04 (x86_64)' 29 | properties: 30 | display_order: "4" 31 | public: False 32 | min_disk: 4 33 | min_ram: 1000 34 | dib: 35 | environment_variables: 36 | ARCH: 'amd64' 37 | ELEMENTS_PATH: "elements" 38 | elements: 39 | - serverscom-ubuntu-xenial 40 | tests: 41 | ssh: 42 | username: cloud-user 43 | wait_for_port: 22 44 | port_wait_timeout: 30 45 | environment_name: lab1 46 | tests_list: 47 | - shell: ./simple_success.bash 48 | 49 | xenial_fail: 50 | filename: xenial.img.qcow2 51 | glance: 52 | upload_timeout: 1200 53 | name: 'LAB Ubuntu Xenial 16.04 (x86_64)' 54 | properties: 55 | display_order: "4" 56 | public: False 57 | dib: 58 | environment_variables: 59 | ARCH: 'amd64' 60 | ELEMENTS_PATH: "elements" 61 | elements: 62 | - serverscom-ubuntu-xenial 63 | tests: 64 | ssh: 65 | username: cloud-user 66 | wait_for_port: 22 67 | port_wait_timeout: 30 68 | environment_name: nl01 69 | tests_list: 70 | - shell: ./simple_fail.bash 71 | 72 | overrided_raw_format: 73 | filename: empty.img.qcow2 74 | glance: 75 | name: "Empty image" 76 | tests: 77 | environment_name: env_with_format_override 78 | tests_list: 79 | - shell: ./simple_success.bash 80 | 
-------------------------------------------------------------------------------- /integration_tests/dibctl/test.yaml: -------------------------------------------------------------------------------- 1 | bad_network_id: 2 | keystone: {auth_url: 'https://auth.servers.nl01.cloud.servers.com:5000/v2.0', password: password, 3 | tenant_name: pyvcr, username: username} 4 | nova: 5 | flavor: SSD.30 6 | nics: 7 | - {net_id: deadbeaf-7577-4706-9a41-fc88d8bee945} 8 | env_with_format_override: 9 | glance: {disk_format: raw} 10 | keystone: {auth_url: 'https://auth.servers.nl01.cloud.servers.com:5000/v2.0', password: password, 11 | tenant_name: pyvcr, username: username} 12 | nova: 13 | flavor: SSD.30 14 | nics: 15 | - {net_id: c7ccdf1c-e6c3-4257-8b2e-740748c14564} 16 | env_with_sizes: 17 | glance: {min_disk: 20, min_ram: 2, protected: false} 18 | keystone: {auth_url: 'https://auth.nova-lab-1.mgm.servers.com:5000/v2.0', password: password, 19 | tenant_name: pyvcr, username: username} 20 | nova: 21 | flavor: SSD.50 22 | main_nic_regexp: internet 23 | nics: 24 | - {net_id: 89e03f44-d874-4c79-bbb5-30d5c576fef3} 25 | ssl_insecure: true 26 | lab1: 27 | keystone: {auth_url: 'https://auth.nova-lab-1.mgm.servers.com:5000/v2.0', password: password, 28 | tenant_name: pyvcr, username: username} 29 | nova: 30 | flavor: SSD.30 31 | main_nic_regexp: internet 32 | nics: 33 | - {net_id: 89e03f44-d874-4c79-bbb5-30d5c576fef3} 34 | ssl_insecure: true 35 | lab1-size-too-big: 36 | glance: {min_disk: 50, min_ram: 16768} 37 | keystone: {auth_url: 'https://auth.nova-lab-1.mgm.servers.com:5000/v2.0', password: password, 38 | tenant_name: pyvcr, username: username} 39 | nova: 40 | flavor: SSD.30 41 | main_nic_regexp: internet 42 | nics: 43 | - {net_id: 89e03f44-d874-4c79-bbb5-30d5c576fef3} 44 | ssl_insecure: true 45 | nl01: 46 | keystone: {auth_url: 'https://auth.servers.nl01.cloud.servers.com:5000/v2.0', password: password, 47 | tenant_name: pyvcr, username: username} 48 | nova: 49 | flavor: SSD.30 50 | main_nic_regexp: internet 51 | nics: 52 | - {net_id: fe2acef0-4383-4432-8fca-f9e23f835dd5} 53 | - {net_id: a3af8097-f348-4767-97c3-b9bf75263ef9} 54 | - {net_id: c7ccdf1c-e6c3-4257-8b2e-740748c14564} 55 | -------------------------------------------------------------------------------- /integration_tests/dibctl/upload.yaml: -------------------------------------------------------------------------------- 1 | env_with_failed_convertion: 2 | keystone: {auth_url: 'https://auth.servers.nl01.cloud.servers.com:5000/v2.0', password: password, 3 | tenant_name: pyvcr, username: username} 4 | preprocessing: {cmdline: exit 1, output_filename: doesntmatter} 5 | env_with_sizes: 6 | glance: {min_disk: 30, min_ram: 4, protected: true} 7 | keystone: {auth_url: 'https://auth.nova-lab-1.mgm.servers.com:5000/v2.0', password: password, 8 | tenant_name: pyvcr, username: username} 9 | ssl_insecure: true 10 | upload_env_1: 11 | keystone: {auth_url: 'https://auth.nova-lab-1.mgm.servers.com:5000/v2.0', password: password, 12 | tenant_name: pyvcr, username: username} 13 | ssl_insecure: true 14 | upload_env_1_no_enough_priveleges: 15 | keystone: {auth_url: 'https://auth.nova-lab-1.mgm.servers.com:5000/v2.0', password: password, 16 | tenant_name: pyvcr, username: username} 17 | ssl_insecure: true 18 | upload_env_2: 19 | keystone: {auth_url: 'https://auth.nova-lab-1.mgm.servers.com:5000/v2.0', password: password, 20 | tenant_name: pyvcr, username: username} 21 | ssl_insecure: true 22 | upload_env_bad_credentials: 23 | keystone: {auth_url: 
'https://auth.servers.nl01.cloud.servers.com:5000/v2.0', password: password, 24 | tenant_name: pyvcr, username: username} 25 | upload_env_raw: 26 | glance: {container_format: bare, disk_format: raw} 27 | keystone: {auth_url: 'https://auth.nova-lab-1.mgm.servers.com:5000/v2.0', password: password, 28 | tenant_name: pyvcr, username: username} 29 | ssl_insecure: true 30 | -------------------------------------------------------------------------------- /integration_tests/empty.img.qcow2: -------------------------------------------------------------------------------- 1 | test 2 | -------------------------------------------------------------------------------- /integration_tests/simple_fail.bash: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | exit 1 3 | -------------------------------------------------------------------------------- /integration_tests/simple_success.bash: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | exit 0 3 | -------------------------------------------------------------------------------- /integration_tests/test_command_rotate.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | 4 | def setup_module(module): 5 | global saved_curdir 6 | global log 7 | saved_curdir = os.getcwd() 8 | for forbidden in [ 9 | 'OS_AUTH_URL', 10 | 'OS_USERNAME', 11 | 'OS_PASSWORD', 12 | 'OS_TENANT_NAME', 13 | 'OS_PROJECT_NAME', 14 | 'OS_PROJECT' 15 | ]: 16 | if forbidden in os.environ: 17 | del os.environ[forbidden] 18 | if 'integration_tests' not in saved_curdir: 19 | os.chdir('integration_tests') 20 | 21 | 22 | def teardown_module(module): 23 | global saved_curdir 24 | os.chdir(saved_curdir) 25 | 26 | 27 | # HAPPY tests 28 | # to rewrite a cassette one needs to run (update) the upload tests 29 | # test order is important. 30 | 31 | def test_command_rotate_dry_run_with_candidates( 32 | quick_commands, 33 | capsys, 34 | happy_vcr 35 | ): 36 | ''' 37 | This test checks that: 38 | - rotate can see images 39 | - dry run is respected (we run this test twice) 40 | 41 | This test is hard to update as it depends on 42 | precise uuid values in the output. If the cassette was 43 | overwritten, one needs to update the values in the asserts. 44 | 45 | This test needs to have no used images (with booted instances) 46 | ''' 47 | with happy_vcr('test_command_rotate_dry_run_with_candidates.yaml'): 48 | for iteration in (1, 2): 49 | assert quick_commands.main([ 50 | 'rotate', 51 | 'upload_env_1', 52 | '--dry-run' 53 | ]) == 0 54 | out = capsys.readouterr()[0] 55 | assert '7b055f53-3221-43b3-a047-9ad7ddebbb5f' in out # first 56 | assert '45d69b26-6c4b-48de-adc5-0a83183e01dd' in out # last 57 | 58 | 59 | def test_command_rotate_with_candidates(quick_commands, capsys, happy_vcr): 60 | ''' 61 | This test removes unused obsolete images 62 | This test needs to have no used images (with booted instances) 63 | 64 | A cassette update for this test is destructive, and to repeat it 65 | one needs to update the upload tests. 
66 | ''' 67 | with happy_vcr('test_command_rotate_with_candidates.yaml'): 68 | assert quick_commands.main([ 69 | 'rotate', 70 | 'upload_env_1' 71 | ]) == 0 72 | out = capsys.readouterr()[0] 73 | assert 'will be removed' in out 74 | assert '7b055f53-3221-43b3-a047-9ad7ddebbb5f' in out # first 75 | assert '45d69b26-6c4b-48de-adc5-0a83183e01dd' in out # last 76 | 77 | 78 | def test_command_rotate_no_candidates(quick_commands, capsys, happy_vcr): 79 | ''' 80 | This test checks the case when there are no images to remove; 81 | it is expected to be updated after test_command_rotate_with_candidates 82 | ''' 83 | with happy_vcr('test_command_rotate_no_candidates.yaml'): 84 | assert quick_commands.main([ 85 | 'rotate', 86 | 'upload_env_1' 87 | ]) == 0 88 | out = capsys.readouterr()[0] 89 | assert 'No unused obsolete images found' in out 90 | 91 | 92 | def test_command_rotate_used_and_unused_images( 93 | quick_commands, 94 | capsys, 95 | happy_vcr 96 | ): 97 | ''' 98 | This test checks that we really remove only unused images. 99 | It requires a special configuration for reproduction: 100 | - An image with a used and an unused obsolete copy 101 | 102 | To update this test: 103 | - Clean the environment of obsolete images 104 | - Boot any image into an instance 105 | - Upload a new copy of the image twice. The 1st copy causes 'used obsolete' 106 | and the second copy causes 'unused obsolete' 107 | 108 | - UUID updates (inside the test) are required 109 | ''' 110 | with happy_vcr('test_command_rotate_used_and_unused_images.yaml'): 111 | assert quick_commands.main([ 112 | 'rotate', 113 | 'upload_env_1' 114 | ]) == 0 115 | out = capsys.readouterr()[0] 116 | assert 'will be removed' in out 117 | assert '8b6dad09-3dc1-49bd-b43e-07cfc7ef02a7' in out # unused 118 | assert 'a7eb42e6-9d60-409d-b1b9-cdc8a45de50a' not in out # used & obs. 119 | assert '8b68b1bd-bdbf-48f8-b472-dc336c17c032' not in out # current 120 | 121 | 122 | def test_command_rotate_only_used_obsolete( 123 | quick_commands, 124 | capsys, 125 | happy_vcr 126 | ): 127 | ''' 128 | This test checks that we ignore used obsolete images 129 | It's very similar to test_command_rotate_no_candidates, 130 | and the difference is in the environment (inside Openstack) 131 | It requires a special configuration for reproduction: 132 | - An image with a used obsolete copy 133 | 134 | To update this test: 135 | - Clean the environment of obsolete images 136 | - Boot any image into an instance 137 | - Upload a new copy of the image once. 138 | 139 | - UUID updates (inside the test) are required 140 | ''' 141 | with happy_vcr('test_command_rotate_only_used_obsolete.yaml'): 142 | assert quick_commands.main([ 143 | 'rotate', 144 | 'upload_env_1' 145 | ]) == 0 146 | out = capsys.readouterr()[0] 147 | assert 'No unused obsolete images found' in out 148 | assert 'a7eb42e6-9d60-409d-b1b9-cdc8a45de50a' not in out # used & obs. 
149 | assert '8b68b1bd-bdbf-48f8-b472-dc336c17c032' not in out # current 150 | 151 | 152 | # SAD tests 153 | 154 | def test_command_rotate_no_env(quick_commands): 155 | ''' This test checks how dibctl fails if the env is not found in the config ''' 156 | assert quick_commands.main([ 157 | 'rotate', 158 | 'unknown_upload_env' 159 | ]) == 11 160 | 161 | 162 | def test_command_rotate_bad_passwd(quick_commands, happy_vcr): 163 | ''' 164 | This test checks the failure code for a bad password situation 165 | ''' 166 | with happy_vcr('test_command_rotate_bad_passwd.yaml'): 167 | assert quick_commands.main([ 168 | 'rotate', 169 | 'upload_env_bad_credentials' 170 | ]) == 20 171 | 172 | 173 | def test_command_rotate_nova_forbidden(quick_commands, happy_vcr): 174 | ''' 175 | This test checks if the forbidden message from nova is handled 176 | correctly. 177 | If the user has no permission to view instances of other users, 178 | then nova returns the message: 179 | Policy doesn't allow os_compute_api:servers:detail:get_all_tenants 180 | to be performed 181 | 182 | To update the cassette one needs to change 183 | os_compute_api:servers:detail:get_all_tenants in policy.json or 184 | use an account without the special privilege. 185 | ''' 186 | with happy_vcr('test_command_rotate_nova_forbidden.yaml'): 187 | assert quick_commands.main([ 188 | 'rotate', 189 | 'upload_env_1_no_enough_priveleges' 190 | ]) == 61 191 | -------------------------------------------------------------------------------- /integration_tests/test_command_test.py: -------------------------------------------------------------------------------- 1 | import os 2 | import mock 3 | 4 | 5 | def setup_module(module): 6 | global saved_curdir 7 | global log 8 | saved_curdir = os.getcwd() 9 | for forbidden in [ 10 | 'OS_AUTH_URL', 11 | 'OS_USERNAME', 12 | 'OS_PASSWORD', 13 | 'OS_TENANT_NAME', 14 | 'OS_PROJECT_NAME', 15 | 'OS_PROJECT' 16 | ]: 17 | if forbidden in os.environ: 18 | del os.environ[forbidden] 19 | if 'integration_tests' not in saved_curdir: 20 | os.chdir('integration_tests') 21 | 22 | 23 | def teardown_module(module): 24 | global saved_curdir 25 | os.chdir(saved_curdir) 26 | 27 | 28 | # HAPPY tests 29 | 30 | def test_test_normal(quick_commands, happy_vcr, capfd): 31 | ''' 32 | This test checks if we can run the test 33 | workflow: 34 | - Upload new image 35 | - Create new keypair 36 | - Start new instance 37 | - cleanup 38 | 39 | Please note: it does not upload a real image (it's too slow to do). 40 | Mocked functions: 41 | - check if port is alive 42 | - the test succeeds anyway (simple_success.bash) 43 | 44 | To update this test: 45 | - check if network uuids are valid (in config) 46 | - it needs normal user privileges 47 | ''' 48 | with happy_vcr('test_test_normal.yaml'): 49 | assert quick_commands.main([ 50 | 'test', 51 | 'xenial' 52 | ]) == 0 53 | out = capfd.readouterr()[0] 54 | assert 'All tests passed successfully' in out 55 | assert 'Removing instance' in out 56 | assert 'Removing ssh key' in out 57 | assert 'Removing image' in out 58 | assert 'Clearing done' in out 59 | 60 | 61 | def test_test_existing_image_success(quick_commands, happy_vcr, capfd): 62 | ''' 63 | This test checks if we can run the test 64 | workflow: 65 | - Create new keypair 66 | - Start new instance 67 | - cleanup 68 | 69 | Please note: it does not upload a real image (it's too slow to do). 70 | Mocked functions: 71 | - check if port is alive 72 | - the test succeeds anyway (simple_success.bash) 73 | 74 | To update this test: 75 | - check if network uuids are valid (in config) 76 | - check if image uuid is valid (it 
should be uploaded) 77 | - it need normal user priveleges 78 | ''' 79 | with happy_vcr('test_test_existing_image_success.yaml'): 80 | assert quick_commands.main([ 81 | 'test', 82 | 'xenial', 83 | '--use-existing-image', 84 | 'f4ffd69e-20c9-40e6-b22a-808f00bf6458' 85 | ]) == 0 86 | out = capfd.readouterr()[0] 87 | assert 'All tests passed successfully' in out 88 | assert 'Removing instance' in out 89 | assert 'Removing ssh key' in out 90 | assert 'Not removing image' in out 91 | assert 'Removing image' not in out 92 | assert 'Clearing done' in out 93 | 94 | 95 | def test_test_image_with_sizes(quick_commands, happy_vcr, capfd): 96 | ''' 97 | This test check if we can do test workflow with image 98 | with sizes (min_disk/min_ram) 99 | 100 | Please note: it does not upload real image (it's too slow to do), 101 | Mocked functions: 102 | - check if port is alive 103 | - test successfull anyway (simple_success.bash) 104 | 105 | To update this test: 106 | - check if network uuids are vaild (in config) 107 | - it need normal user priveleges 108 | ''' 109 | with happy_vcr('test_test_image_with_sizes.yaml'): 110 | assert quick_commands.main([ 111 | 'test', 112 | 'xenial-sizes' 113 | ]) == 0 114 | out = capfd.readouterr()[0] 115 | assert 'All tests passed successfully' in out 116 | assert 'Removing instance' in out 117 | assert 'Removing ssh key' in out 118 | assert 'Removing image' in out 119 | assert 'Clearing done' in out 120 | 121 | 122 | # SAD tests 123 | 124 | def test_test_imagetest_failed_code_80(quick_commands, happy_vcr, capfd): 125 | ''' 126 | This test checks how dibctl handle 'test failed' 127 | situation when image test failed. 128 | It expected to start instance successfully. 129 | It uses exsisting image 130 | 131 | This test uses mocked port ready and image test which always fails. 132 | 133 | To update this test: 134 | - check if network uuids are vaild (in config) 135 | - check if image uuid is valid (it should be uploaded) 136 | - it need normal user priveleges 137 | ''' 138 | with happy_vcr('test_test_imagetest_failed_code_80.yaml'): 139 | assert quick_commands.main([ 140 | 'test', 141 | 'xenial_fail', 142 | '--use-existing-image', 143 | '2eb14fc3-4edc-4068-8748-988f369302c2' 144 | ]) == 80 # this code is inside __command() in TestCommand 145 | out = capfd.readouterr()[0] 146 | assert 'Some tests failed' in out 147 | assert 'Removing instance' in out 148 | assert 'Removing ssh key' in out 149 | assert 'Not removing image' in out 150 | assert 'Removing image' not in out 151 | assert 'Clearing done' in out 152 | 153 | 154 | def test_non_existing_image_code_50(quick_commands, happy_vcr): 155 | ''' 156 | This test checks how dibctl handle when there is no image 157 | inside openstack and --use-existing-image option is used. 158 | 159 | To update this test: 160 | - check if network uuids are vaild (in config) 161 | - check if image with given uuid does not exist 162 | - it need normal user priveleges 163 | ''' 164 | with happy_vcr('test_non_existing_image_code_50.yaml'): 165 | assert quick_commands.main([ 166 | 'test', 167 | 'xenial', 168 | '--use-existing-image', 169 | 'deadbeaf-0000-0000-0000-b7a14cdd1169' 170 | ]) == 50 171 | 172 | 173 | def test_non_existing_network_code_60(quick_commands, happy_vcr, capfd): 174 | ''' 175 | This test checks how dibctl handle when there is no network 176 | found. 
177 | 178 | To update this test: 179 | - check if network uuids are invaild (in config) 180 | - it need normal user priveleges 181 | ''' 182 | with happy_vcr('test_non_existing_network_code_60.yaml'): 183 | assert quick_commands.main([ 184 | 'test', 185 | 'xenial', 186 | '--environment', 187 | 'bad_network_id' 188 | ]) == 60 189 | out = capfd.readouterr()[0] 190 | assert 'Removing image' in out 191 | assert 'Removing ssh key' in out 192 | 193 | 194 | def test_instance_is_not_answer_port_code_71(quick_commands, happy_vcr, capfd): 195 | ''' 196 | This test checks how dibctl handle when instance does not 197 | reply to port in timely manner. 198 | 199 | It uses mocked 'non answering port' 200 | 201 | To update this test: 202 | - check if network uuids are vaild (in config) 203 | - it need normal user priveleges 204 | ''' 205 | with happy_vcr('test_instance_is_not_answer_port_code_71.yaml'): 206 | quick_commands.prepare_os.socket.sequence = [None] 207 | assert quick_commands.main([ 208 | 'test', 209 | 'xenial' 210 | ]) == 71 211 | out = capfd.readouterr()[0] 212 | assert 'Instance is not accepting connection' in out 213 | assert 'Removing instance' in out 214 | assert 'Removing ssh key' in out 215 | assert 'Removing image' in out 216 | 217 | 218 | def test_instance_in_error_state_code_70(quick_commands, happy_vcr, capfd): 219 | ''' 220 | This test check how handled instance in ERROR state. 221 | To make instance 'ERROR' we use low-level trick with invalid 222 | image format (damaged.img.qcow2), which cause errors on libvirt+qemu. 223 | 224 | To update this test: 225 | - check if network uuids are vaild (in config) 226 | - it need normal user priveleges 227 | ''' 228 | with happy_vcr('test_instance_in_error_state_code_70.yaml'): 229 | assert quick_commands.main([ 230 | 'test', 231 | 'xenial', 232 | '--input', 233 | 'damaged.img.qcow2' 234 | ]) == 70 235 | out = capfd.readouterr()[0] 236 | assert "is 'ERROR' (expected 'ACTIVE')" in out 237 | assert 'Removing instance' in out 238 | assert 'Removing ssh key' in out 239 | assert 'Removing image' in out 240 | 241 | 242 | def test_test_image_with_min_size_more_than_flavor_code_60( 243 | quick_commands, happy_vcr, capfd 244 | ): 245 | ''' 246 | This test check behavior if environment has glance section with 247 | min_size > flavor allows. 
248 | 249 | Please note: it does not upload real image (it's too slow to do), 250 | Mocked functions: 251 | - check if port is alive 252 | - test successfull anyway (simple_success.bash) 253 | 254 | To update this test: 255 | - check if network uuids are vaild (in config) 256 | - it need normal user priveleges 257 | ''' 258 | with happy_vcr( 259 | 'test_test_image_with_min_size_more_than_flavor_code_60.yaml' 260 | ): 261 | assert quick_commands.main([ 262 | 'test', 263 | 'xenial-sizes', 264 | '--environment', 265 | 'lab1-size-too-big' 266 | ]) == 60 267 | out = capfd.readouterr()[0] 268 | assert 'Removing ssh key' in out 269 | assert 'Removing image' in out 270 | assert 'Clearing done' in out 271 | -------------------------------------------------------------------------------- /integration_tests/test_command_upload.py: -------------------------------------------------------------------------------- 1 | 2 | import os 3 | import mock 4 | 5 | 6 | def setup_module(module): 7 | global saved_curdir 8 | global log 9 | saved_curdir = os.getcwd() 10 | for forbidden in [ 11 | 'OS_AUTH_URL', 12 | 'OS_USERNAME', 13 | 'OS_PASSWORD', 14 | 'OS_TENANT_NAME', 15 | 'OS_PROJECT_NAME', 16 | 'OS_PROJECT' 17 | ]: 18 | if forbidden in os.environ: 19 | del os.environ[forbidden] 20 | if 'integration_tests' not in saved_curdir: 21 | os.chdir('integration_tests') 22 | 23 | 24 | def teardown_modlue(module): 25 | global saved_curdir 26 | os.chdir(saved_curdir) 27 | 28 | 29 | # HAPPY TESTS (normal workflow) 30 | 31 | def test_upload_image_normal_no_obsolete(quick_commands, happy_vcr): 32 | ''' 33 | Thise test check if we can upload image. 34 | It assumes that we have no other copies of image to obsolete 35 | ''' 36 | with happy_vcr('test_upload_image_normal_no_obsolete.yaml'): 37 | assert quick_commands.main([ 38 | 'upload', 39 | 'overrided_raw_format', 40 | 'upload_env_1' 41 | ]) == 0 42 | 43 | 44 | def test_upload_image_normal_obsolete(quick_commands, happy_vcr): 45 | ''' 46 | Thise test check if we can upload image and obsolete older copy. 47 | It assumes that we have older copy of the image 48 | ''' 49 | with happy_vcr('test_upload_image_normal_obsolete.yaml'): 50 | assert quick_commands.main([ 51 | 'upload', 52 | 'overrided_raw_format', 53 | 'upload_env_1' 54 | ]) == 0 55 | 56 | 57 | def test_upload_image_normal_no_obsolete_convertion(quick_commands, happy_vcr): 58 | ''' 59 | Thise test check if we can convert image before upload and then 60 | upload it. 61 | It assumes that we have older copy of the image to obsolete. 62 | (obsoletion is not the part of the test but it makes easier to 63 | rerecord test) 64 | ''' 65 | with happy_vcr('test_upload_image_normal_no_obsolete_convertion.yaml'): 66 | assert quick_commands.main([ 67 | 'upload', 68 | 'overrided_raw_format', 69 | 'upload_env_raw' 70 | ]) == 0 71 | 72 | 73 | def test_upload_image_with_sizes_and_protection(quick_commands, happy_vcr): 74 | ''' 75 | Thise test check if we honor min_disk/min_ram/protected 76 | in glance section of upload config. 77 | It assumes that we have no other copies of image to obsolete 78 | 79 | To update this test, remove older copy of image (mind that image 80 | is protected!) 
81 | ''' 82 | with happy_vcr('test_upload_image_with_sizes_and_protection.yaml'): 83 | assert quick_commands.main([ 84 | 'upload', 85 | 'overrided_raw_format', 86 | 'env_with_sizes' 87 | ]) == 0 88 | 89 | 90 | # SAD TESTS (handling errors) 91 | 92 | def test_upload_image_bad_credentials(quick_commands, happy_vcr): 93 | ''' 94 | This test should fail due to bad credentials 95 | it's a bit akward as we conseal 'bad credentials' from cassette, 96 | but nevertheless it's still valid 97 | ''' 98 | with happy_vcr('test_upload_bad_credentials.yaml'): 99 | assert quick_commands.main([ 100 | 'upload', 101 | 'overrided_raw_format', 102 | 'upload_env_bad_credentials' 103 | ]) == 20 104 | 105 | 106 | def test_upload_image_error_for_convertion(quick_commands): 107 | ''' 108 | This test should fail on convertion, no actual upload 109 | should happen 110 | ''' 111 | assert quick_commands.main([ 112 | 'upload', 113 | 'xenial', 114 | 'env_with_failed_convertion' 115 | ]) == 18 116 | -------------------------------------------------------------------------------- /integration_tests/test_simple_no_net.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | 4 | # this file contains tests which do not require vcr or http 5 | 6 | def setup_module(module): 7 | global curdir 8 | curdir = os.getcwd() 9 | for forbidden in [ 10 | 'OS_AUTH_URL', 11 | 'OS_USERNAME', 12 | 'OS_PASSWORD', 13 | 'OS_TENANT_NAME', 14 | 'OS_PROJECT_NAME', 15 | 'OS_PROJECT' 16 | ]: 17 | if forbidden in os.environ: 18 | del os.environ[forbidden] 19 | if 'integration_tests' not in curdir: 20 | os.chdir('integration_tests') 21 | 22 | 23 | def teardown_modlue(module): 24 | global curdir 25 | os.chdir(curdir) 26 | 27 | 28 | def test_no_image_config_code_10(quick_commands): 29 | assert quick_commands.main([ 30 | 'test', 31 | 'xenial', 32 | '--images-config', 'not_existing_name' 33 | ]) == 10 34 | 35 | 36 | def test_no_test_config_code_10(quick_commands): 37 | assert quick_commands.main([ 38 | 'test', 39 | 'xenial', 40 | '--test-config', 'not_existing_name' 41 | ]) == 10 42 | 43 | 44 | def test_no_upload_config_code_10(quick_commands): 45 | assert quick_commands.main([ 46 | 'upload', 47 | 'xenial', 48 | 'nowhere', 49 | '--upload-config', 'not_existing_name' 50 | ]) == 10 51 | 52 | 53 | def test_not_found_in_config_code_11(quick_commands): 54 | assert quick_commands.main([ 55 | 'test', 56 | 'no_such_image_in_config' 57 | ]) == 11 58 | -------------------------------------------------------------------------------- /integration_tests/xenial.img.qcow2: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/serverscom/dibctl/0749a79e64113e951cdfddfdcb4df4d061040fe2/integration_tests/xenial.img.qcow2 -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | PyYAML>=3.1.0 2 | diskimage-builder 3 | python-glanceclient 4 | python-novaclient 5 | ipaddress 6 | keystoneauth1 7 | pytest < 4.0.0 8 | pytest-timeout 9 | testinfra 10 | jsonschema 11 | urllib3 12 | semantic_version 13 | paramiko 14 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | from setuptools import setup, find_packages, Command 3 | import sys 4 | from dibctl import version 5 | 6 | 7 | class PyTest(Command): 8 
| user_options = [('pytest-args=', 'a', "Arguments to pass to py.test")]
9 |
10 | def initialize_options(self):
11 | pass
12 |
13 | def finalize_options(self):
14 | pass
15 |
16 | def run(self):
17 | import pytest
18 | print("Running unit tests")
19 | error_code = pytest.main([
20 | 'build',
21 | '--ignore', 'build/doctests',
22 | '--ignore', 'build/tests/test_bad_configs.py',
23 | '--ignore', 'build/integration_tests'
24 | ])
25 | if error_code:
26 | sys.exit(error_code)
27 | print("Running integration tests for docs examples")
28 | # doctests should be run against current dir, not 'build'
29 | # because config examples are not copied to build
30 | # (they are installed as config files)
31 | error_code = pytest.main(['doctests/'])
32 | if error_code:
33 | sys.exit(error_code)
34 |
35 |
36 | setup(
37 | name="dibctl",
38 | version=version.VERSION,
39 | description="diskimage-builder control",
40 | author="George Shuklin",
41 | author_email="george.shuklin@gmail.com",
42 | url="http://github.com/serverscom/dibctl",
43 | packages=find_packages(),
44 | install_requires=[
45 | 'PyYAML',
46 | 'keystoneauth1',
47 | 'python-glanceclient',
48 | 'python-novaclient',
49 | 'pytest-timeout',
50 | 'jsonschema',
51 | 'pytest', # not a mistake - we use pytest as a part of the app
52 | 'semantic_version',
53 | 'requests',
54 | 'urllib3',
55 | 'paramiko'
56 | ],
57 | entry_points="""
58 | [console_scripts]
59 | dibctl=dibctl.commands:main
60 | # """,
61 | cmdclass={'test': PyTest},
62 | long_description="""diskimage-builder control"""
63 | )
64 |
--------------------------------------------------------------------------------
/specs/README.md:
--------------------------------------------------------------------------------
1 | This is a spec directory. It contains brief descriptions of features prior to their implementation.
2 |
3 | Therefore, implemented features may differ from the specs.
4 |
--------------------------------------------------------------------------------
/specs/conf.d.md:
--------------------------------------------------------------------------------
1 | Conf.d support
2 | --------------
3 |
4 | Production use of dibctl shows that the current configuration scheme for images,
5 | tests and uploads is hard to maintain. An average image configuration is about
6 | 40 lines long. With ~10 images that is a 400-line config, which is hard to navigate
7 | and manage. Additionally, grouping images in one file prevents seamless
8 | merging of independent changes for different images.
9 |
10 | Proposal
11 | ===========
12 |
13 | Add support for conf.d style configurations. Each top-level element of the config
14 | (image, test, upload) may be placed in a separate file. Global configuration
15 | is still honored (for example, there is no need for splitting upload.yaml so far,
16 | so it can be left alone as a single file).
17 |
18 | Proposed structure
19 | ==================
20 | Old search structures will be left intact. New search paths:
21 |
22 | First:
23 | - `./images.d/*.yaml`
24 | - `./test.d/*.yaml`
25 | - `./upload.d/*.yaml`
26 |
27 | Second:
28 | - `./dibctl/images.d/*.yaml`
29 | - `./dibctl/test.d/*.yaml`
30 | - `./dibctl/upload.d/*.yaml`
31 |
32 | Last:
33 | - `/etc/dibctl/images.d/*.yaml`
34 | - `/etc/dibctl/test.d/*.yaml`
35 | - `/etc/dibctl/upload.d/*.yaml`
36 |
37 |
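For illustration only: the lookup order above, combined with the merge rules described in the next section, could be implemented roughly like the sketch below. This is not the actual config module; the function name and the notification format are made up.

```
import glob

import yaml


def load_images_config():
    """Merge images.yaml and images.d/*.yaml, least specific location first."""
    search_order = [
        '/etc/dibctl/images.yaml',
        '/etc/dibctl/images.d/*.yaml',
        './dibctl/images.yaml',
        './dibctl/images.d/*.yaml',
        './images.yaml',
        './images.d/*.yaml',
    ]
    merged = {}
    for pattern in search_order:
        for path in sorted(glob.glob(pattern)):  # alphabetical order inside *.d
            with open(path) as f:
                data = yaml.safe_load(f) or {}
            for label, value in data.items():
                if label in merged:
                    print("%s: label %r overrides earlier definition" % (path, label))
                merged[label] = value  # new labels are added, conflicting labels override
    return merged
```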
38 | Joining instead of override
39 | ===========================
40 | As those are independent files, we need to change the way configuration files are
41 | loaded. The old approach was `first win`: the search continued until a config was found
42 | (separately for each config).
43 |
44 | The new approach will use a 'merge' strategy: we start from the lowest
45 | priority and merge/override entries from configs with higher priority.
46 |
47 | Each config contains a hashtable (dictionary), and the merge policy for those will
48 | be:
49 | - For new elements: add to the hashtable
50 | - For conflicting labels (keys): override
51 |
52 | New lookup rules (for images; test/upload follow the same pattern):
53 | - `/etc/dibctl/images.yaml`
54 | - `/etc/dibctl/images.d/*.yaml` (alphabetical order)
55 | - `./dibctl/images.yaml`
56 | - `./dibctl/images.d/*.yaml` (alphabetical order)
57 | - `./images.yaml`
58 | - `./images.d/*.yaml` (alphabetical order)
59 |
60 | If the user passes --images-config, that file is used without searching other locations.
61 |
62 | Each entry in the configuration files will be validated separately
63 | (big changes in the validation sequence).
64 |
65 | When an element is overridden, a notification about the override will be printed.
66 |
67 |
--------------------------------------------------------------------------------
/specs/external_commands.md:
--------------------------------------------------------------------------------
1 | External commands
2 | -------------------------
3 |
4 | There was a request to add external test and upload
5 | features into dibctl.
6 |
7 | Dibctl is tightly linked with Openstack. Live images
8 | for distributions can't be tested and uploaded
9 | by means of Openstack, but the build process may be performed
10 | by DIB. To unify the build process it's convenient to
11 | keep the image description (environment variables, elements)
12 | inside the configuration, just as for Openstack images.
13 |
14 | ## Proposed solution
15 | Add a full set of external commands:
16 | - External build
17 | - External test
18 | - External upload
19 |
20 | Each of them would be able to replace (or add to, in the case of
21 | external_tests) the native workflow with a single external command.
22 |
23 | ### external build
24 | External build will be called instead of dib. If there are
25 | both external_build and dib sections, the dib section will be called
26 | first, and external_build later. This should allow building the
27 | image with DIB, but tweaking it later. This is slightly different
28 | from the preprocessing option for upload, as preprocessing should
29 | be performed after tests, while external_build should produce a new
30 | image which would require new tests.
31 |
32 | External build may produce more than one artifact out of the build
33 | process; therefore, there should be a section for artifact processing (see the corresponding blueprint; there is none at
34 | the moment of this blueprint's creation).
35 |
36 | ### external_tests
37 | External tests are applied alongside the normal 'tests'
38 | section. As tests shouldn't modify the original image, the
39 | order of execution is not important. For now I plan to execute
40 | external tests prior to the native ones, but this could change in the future.
41 |
42 | External tests are executed consecutively and rely on the
43 | return code. By default code 0 is 'success', and any other
44 | return (exit) code is 'failure'. It's possible to change
45 | the expected exit code ranges, but I'll leave this function
46 | unimplemented for now (i.e. 0 is 'ok', any other is 'bad').
47 |
48 | Both external_build and external_tests are placed inside
49 | image.yaml (external_tests do not need a test_environment).
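To make the contract concrete, running a single external command (test, build or upload) boils down to something like the sketch below. This is illustrative only: the function name is made up, and the interpolation variables are assumed to be the same ones used for preprocessing.

```
import shlex
import subprocess


def run_external_command(cmdline, variables):
    """Interpolate the configured cmdline and run it.

    Exit code 0 means 'success'; any other exit code means 'failure'.
    """
    rendered = cmdline.format(**variables)  # same .format() interpolation as preprocessing
    return subprocess.call(shlex.split(rendered)) == 0


# e.g. a hypothetical entry such as
#   cmdline: 'validate-live-image {output_filename}'
# would be treated as failed on any non-zero exit code.
```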
50 |
51 | ### external_upload
52 | External upload is very different from tests and build,
53 | as one upload excludes the other (if someone wants to upload an
54 | image to more than one region, two calls of dibctl with different region names should be used).
55 | This would require an additional check in the config load
56 | section.
57 |
58 | Preprocessing would happen for both types of upload before
59 | the upload itself.
60 |
61 | ## Interpolation
62 |
63 | All cmdlines for all three external commands will be
64 | interpolated with a list of variables (not defined yet) in
65 | the same way as the cmdline for preprocessing.
66 |
67 | Note: if preprocessing was performed, it will change the
68 | output_filename variable to the new name (for external_upload).
69 |
70 | ## Implementation details
71 | 1. Add options to the configuration files.
72 | 2. Refactor the interpolation code out of the 'preprocess' module.
73 | 3. Create a generic 'external_commands' module to handle
74 | preprocess, external_build, external_tests and external_upload.
75 | 4. Create a stable set of variables and rules for their modification after interpolation.
76 | 5. Make the current dib call conditional without failing (if there is no dib section)
77 | 6. Add external_build code after dib processing
78 | 7. Make existing tests conditional (without failing)
79 | 8. Add special handling for the '--keep*' options
80 | 9. Add special handling for the 'shell' command
81 | 10. Add external_tests code after the normal tests.
82 | 11. Add external_upload code into the upload command.
83 | 12. Incorporate the changes into the user documentation
84 |
--------------------------------------------------------------------------------
/specs/preprocessing.md:
--------------------------------------------------------------------------------
1 | # Conversion before upload
2 |
3 | We need some way to convert images prior to uploading them into some regions.
4 |
5 | ## Configuration details
6 |
7 | There is a new section in upload.yaml, named 'preprocessing'.
8 | It is similar to the dib section of images.yaml and describes how to convert.
9 | It can contain a few variables:
10 | - cmdline - contains the command line with substitutions, using Python's .format() function.
11 | (I need to think about an explicit list of variables for substitution to avoid chaos).
12 | - input\_filename - name of the file before processing
13 | - output\_filename - name of the file after processing
14 | - use\_existing - use the file with output\_filename if it exists (if use\_existing is false, the existing file will be removed and processing performed as a mandatory stage)
15 | - delete\_processed\_after\_upload - self-describing.
16 |
17 | Variables for substitution:
18 |
19 | - 'input\_filename'
20 | - 'output\_filename'
21 | - 'container\_format'
22 | - 'disk\_format'
23 |
24 | ## Implementation details
25 | If there is a preprocessing section, it will be used before upload.
26 | A module (akin to dib.py) will handle calling it and processing errors.
27 |
28 | Need to take into account caching and changes in output\_filename.
29 |
30 |
31 | ## PROBLEM
32 |
33 | As we change the format, we need to change the upload options for the image. I think we need to add a glance section and use a smart merge (with priority from the upload section) to allow
34 | overrides of image properties (especially the format).
35 |
36 | ## Format
37 | Right now the format is hardcoded, so we need to have format specs in the glance section. Normally it should be in the image, but it can be overridden by preprocessing.
38 | Add new variables `disk_format` and `container_format` into the glance section.
39 |
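For illustration, the whole preprocessing step could look roughly like the sketch below. This is not the actual module: the function signature is made up, and disk\_format/container\_format are assumed to come from the merged glance section.

```
import os
import shlex
import subprocess


def preprocess(section, input_filename, disk_format, container_format):
    """Convert the image before upload according to the 'preprocessing' section."""
    output_filename = section['output_filename']
    if os.path.exists(output_filename):
        if section.get('use_existing'):
            return output_filename      # reuse the already converted file
        os.remove(output_filename)      # otherwise redo the conversion
    cmdline = section['cmdline'].format(
        input_filename=input_filename,
        output_filename=output_filename,
        disk_format=disk_format,
        container_format=container_format,
    )
    if subprocess.call(shlex.split(cmdline)) != 0:
        raise RuntimeError('Preprocessing failed: %s' % cmdline)
    return output_filename
```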
--------------------------------------------------------------------------------
/specs/rotate.md:
--------------------------------------------------------------------------------
1 | The rotate feature has been planned from the very start, but I had no time to deal with it.
2 |
3 | # Feature description
4 | When a new image is uploaded, images with the same name are marked as 'obsolete' by using a name prefix
5 | ("Obsolete ") and adding the meta 'obsolete' to the image.
6 |
7 | We do this to keep images available until all instances which had been started from this image have, finally,
8 | been removed or rebuilt from other images. (The reason why we do this is complicated; it is
9 | related to the '\_base' thing in nova-compute and the way instances migrate.)
10 |
11 | To get the list of all instances which use a given image we need escalated privileges (admin privileges),
12 | as we need to look at other tenants' instance lists.
13 |
14 | Escalated privileges are the main reason why rotate is a separate command from 'upload': we can happily
15 | upload and mark the old image obsolete without peeking into other tenants' instance lists.
16 |
17 | # Workflow
18 | I see two ways to use the rotate command:
19 | - administrative console (dibctl rotate regionname), where the command is run by an operator with their
20 | own set of OS\_\* environment variables.
21 | - a special job in Jenkins where those credentials are passed in a secure manner (more secure than the
22 | default login/password for image upload).
23 |
24 | # How it works
25 | 1. Get all instances (nova list --all)
26 | 2. Get all obsolete images (images with the obsolete meta; the name is ignored)
27 | 3. Filter out images for which there is an instance with 'base\_image\_ref' pointing to them.
28 | 4. Remove all remaining (unused obsolete) images
29 |
30 | # Command line options
31 | --dry-run should just print candidates for removal without actual deletion.
32 |
33 | # Privileges
34 | To allow dibctl to perform rotation properly it should be able to see all tenants' instances.
35 | To do this one needs to update `/etc/nova/policy.json`:
36 | Default value:
37 | ```
38 | "os_compute_api:servers:detail:get_all_tenants": "is_admin:True"
39 | ```
40 | Proposed value:
41 | ```
42 | "os_compute_api:servers:detail:get_all_tenants": "is_admin:True or role:imagerole"
43 | ```
44 | Where `imagerole` is a role assigned to the user for image manipulations (you should create it manually with keystone)
45 |
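For illustration, the 'How it works' steps above boil down to a set difference. The sketch below is not the real osclient code: it assumes plain dicts for instances and images and a delete callback, all of which are illustrative.

```
def rotate(instances, obsolete_images, delete_image, dry_run=False):
    """Remove obsolete images that no instance was booted from."""
    # images still referenced by some instance (via base_image_ref) must stay
    used = {inst['base_image_ref'] for inst in instances if inst.get('base_image_ref')}
    # obsolete_images is assumed to already hold only images carrying the 'obsolete' meta
    candidates = [img for img in obsolete_images if img['id'] not in used]
    if not candidates:
        print('No unused obsolete images found')
    for img in candidates:
        print('%s (%s) will be removed' % (img.get('name'), img['id']))
        if not dry_run:
            delete_image(img['id'])
    return candidates
```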
--------------------------------------------------------------------------------
/test-requirements.txt:
--------------------------------------------------------------------------------
1 | mock
2 | testinfra
3 | vcrpy
4 |
--------------------------------------------------------------------------------
/tests/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/serverscom/dibctl/0749a79e64113e951cdfddfdcb4df4d061040fe2/tests/__init__.py
--------------------------------------------------------------------------------
/tests/bad_configs/00-example.image.yaml:
--------------------------------------------------------------------------------
1 | []
2 |
--------------------------------------------------------------------------------
/tests/bad_configs/00-example.test.yaml:
--------------------------------------------------------------------------------
1 | True
2 |
--------------------------------------------------------------------------------
/tests/bad_configs/00-example.upload.yaml:
--------------------------------------------------------------------------------
1 | "invalid"
2 |
--------------------------------------------------------------------------------
/tests/bad_configs/01-no-flavor-and-flavor-id.test.yaml:
--------------------------------------------------------------------------------
1 | both_flavor_and_flavor_id:
2 | keystone:
3 | auth_url: http://auth.example.com/
4 | nova:
5 | flavor_id: '30'
6 | flavor: '30'
7 | nics:
8 | - net_id: fe2acef0-4383-4432-8fca-f9e23f835dd5
9 |
10 |
--------------------------------------------------------------------------------
/tests/bad_configs/02-wrong-tests-env-vars.test.yaml:
--------------------------------------------------------------------------------
1 | example_env:
2 | tests:
3 | environment_variables:
4 | foo: 3 #must be string
5 |
--------------------------------------------------------------------------------
/tests/bad_configs/03-wrong_type_for_protected.upload.yaml:
--------------------------------------------------------------------------------
1 | example_env:
2 | glance:
3 | protected: 0
4 |
--------------------------------------------------------------------------------
/tests/bad_configs/04-wrong_type_for_min_disk.upload.yaml:
--------------------------------------------------------------------------------
1 | example_env:
2 | glance:
3 | min_disk: "30"
4 |
--------------------------------------------------------------------------------
/tests/bad_configs/05-wrong_value_for_min_ram.upload.yaml:
--------------------------------------------------------------------------------
1 | example_env:
2 | glance:
3 | min_ram: -1
4 |
--------------------------------------------------------------------------------
/tests/bad_configs/06-unknown_variable_in_glance_section.upload.yaml:
--------------------------------------------------------------------------------
1 | example_env:
2 | glance:
3 | min-ram: 3
4 |
--------------------------------------------------------------------------------
/tests/bad_configs/README:
-------------------------------------------------------------------------------- 1 | This is examples of config files which should be threated as invalid. 2 | None of them should pass validation. They are used to ensure that 3 | config schemas are actually not only permissive (doctests) but also 4 | catches bad examples. 5 | -------------------------------------------------------------------------------- /tests/test_bad_configs.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import os 3 | import inspect 4 | import pytest 5 | 6 | 7 | @pytest.fixture 8 | def config_module(): 9 | from dibctl import config 10 | return config 11 | 12 | 13 | def gather_configs(ctype): 14 | PATH = 'bad_configs' 15 | ourfilename = os.path.abspath(inspect.getfile(inspect.currentframe())) 16 | currentdir = os.path.dirname(ourfilename) 17 | for f in os.listdir(os.path.join(currentdir, PATH)): 18 | if not f.endswith(ctype + '.yaml'): 19 | continue 20 | yield os.path.join(currentdir, PATH, f) 21 | 22 | 23 | @pytest.mark.parametrize('conf', gather_configs('image')) 24 | def test_image_conf(config_module, conf): 25 | with pytest.raises(config_module.InvaidConfigError): 26 | config_module.ImageConfig(config_file=conf) 27 | 28 | 29 | @pytest.mark.parametrize('conf', gather_configs('test')) 30 | def test_test_conf(config_module, conf): 31 | with pytest.raises(config_module.InvaidConfigError): 32 | config_module.TestEnvConfig(config_file=conf) 33 | 34 | 35 | @pytest.mark.parametrize('conf', gather_configs('upload')) 36 | def test_upload_conf(config_module, conf): 37 | with pytest.raises(config_module.InvaidConfigError): 38 | config_module.UploadEnvConfig(config_file=conf) 39 | -------------------------------------------------------------------------------- /tests/test_commands.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import os 3 | import inspect 4 | import sys 5 | import pytest 6 | import mock 7 | from mock import sentinel 8 | import argparse 9 | 10 | 11 | @pytest.fixture 12 | def commands(): 13 | from dibctl import commands 14 | return commands 15 | 16 | 17 | @pytest.fixture 18 | def config(): 19 | from dibctl import config 20 | return config 21 | 22 | 23 | @pytest.fixture 24 | def cred(): 25 | return { 26 | 'os_auth_url': 'mock', 27 | 'os_tenant_name': 'mock', 28 | 'os_username': 'mock', 29 | 'os_password': 'mock' 30 | } 31 | 32 | 33 | @pytest.fixture 34 | def mock_image_cfg(): 35 | image = { 36 | 'glance': { 37 | 'name': 'foo' 38 | }, 39 | 'dib': { 40 | 'elements': ['foo'] 41 | }, 42 | 'filename': sentinel.filename, 43 | 'tests': { 44 | 'environment_name': 'env' 45 | } 46 | } 47 | return image 48 | 49 | 50 | @pytest.fixture 51 | def mock_env_cfg(): 52 | env = { 53 | 'keystone': { 54 | 'auth_url': sentinel.auth, 55 | 'tenant_name': sentinel.tenant, 56 | 'password': sentinel.password, 57 | 'username': sentinel.user 58 | }, 59 | 'nova': { 60 | 'flavor': 'example', 61 | 'nics': [ 62 | {'net_id': sentinel.net_id} 63 | ] 64 | } 65 | } 66 | return env 67 | 68 | 69 | def create_subparser(ParserClass): 70 | parser = argparse.ArgumentParser() 71 | subparsers = parser.add_subparsers(title='commands') 72 | inst = ParserClass(subparsers) 73 | return parser, inst 74 | 75 | 76 | @pytest.mark.parametrize('args, expected', [ 77 | [['generic', '--debug'], True], 78 | [['generic'], False] 79 | ]) 80 | def test_GenericCommand_debug(commands, args, expected): 81 | parser = 
create_subparser(commands.GenericCommand)[0] 82 | args = parser.parse_args(args) 83 | assert args.debug == expected 84 | assert args.command.__func__ == commands.GenericCommand.command.__func__ 85 | 86 | 87 | def test_GenericCommand_no_command(commands): 88 | parser = create_subparser(commands.GenericCommand)[0] 89 | args = parser.parse_args(['generic']) 90 | with pytest.raises(NotImplementedError): 91 | args.command(sentinel.args) 92 | 93 | 94 | def test_BuildCommand_actual(commands): 95 | parser, obj = create_subparser(commands.BuildCommand) 96 | args = parser.parse_args(['build', 'label']) 97 | assert args.imagelabel == 'label' 98 | assert args.filename is None 99 | assert args.images_config is None 100 | with mock.patch.object(obj, "_command"): 101 | with mock.patch.object(commands.config, "ImageConfig"): 102 | args.command(args) 103 | assert obj.image 104 | 105 | 106 | def test_BuildCommand_output(commands): 107 | parser = create_subparser(commands.BuildCommand)[0] 108 | args = parser.parse_args(['build', 'label', '--output', 'foo']) 109 | assert args.filename is 'foo' 110 | 111 | 112 | def test_BuildCommand_img_config(commands): 113 | parser = create_subparser(commands.BuildCommand)[0] 114 | args = parser.parse_args(['build', '--images-config', 'bar', 'label']) 115 | assert args.images_config is 'bar' 116 | 117 | 118 | def test_BuildCommand_prepare(commands, mock_image_cfg): 119 | parser, obj = create_subparser(commands.BuildCommand) 120 | args = parser.parse_args(['build', 'label']) 121 | assert args.imagelabel == 'label' 122 | with mock.patch.object(obj, "_run"): 123 | with mock.patch.object(commands.config, "ImageConfig", return_value={'label': mock_image_cfg}): 124 | args.command(args) 125 | assert obj.dib 126 | 127 | 128 | def test_BuildCommand_run_success(commands, mock_image_cfg, capsys): 129 | parser, obj = create_subparser(commands.BuildCommand) 130 | args = parser.parse_args(['build', 'label']) 131 | assert args.imagelabel == 'label' 132 | with mock.patch.object(commands.dib.DIB, "run") as mock_run: 133 | mock_run.return_value = 0 134 | with mock.patch.object(commands.config, "ImageConfig", return_value={'label': mock_image_cfg}): 135 | assert args.command(args) == 0 136 | assert mock_run.called 137 | s_in = capsys.readouterr()[0] 138 | assert 'successfully' in s_in 139 | 140 | 141 | def test_BuildCommand_run_error(commands, mock_image_cfg, capsys): 142 | parser, obj = create_subparser(commands.BuildCommand) 143 | args = parser.parse_args(['build', 'label']) 144 | assert args.imagelabel == 'label' 145 | with mock.patch.object(commands.dib.DIB, "run") as mock_run: 146 | mock_run.return_value = 1 147 | with mock.patch.object(commands.config, "ImageConfig", return_value={'label': mock_image_cfg}): 148 | assert args.command(args) == 1 149 | assert mock_run.called 150 | s_in = capsys.readouterr()[0] 151 | assert 'Error' in s_in 152 | 153 | 154 | @pytest.mark.parametrize('status, exit_code', [ 155 | [True, 0], 156 | [False, 80] 157 | ]) 158 | def test_TestCommand_actual(commands, status, exit_code): 159 | parser, obj = create_subparser(commands.TestCommand) 160 | args = parser.parse_args(['test', 'label']) 161 | assert args.imagelabel == 'label' 162 | with mock.patch.object(commands.config, "ImageConfig"): 163 | with mock.patch.object(commands.config, "TestEnvConfig"): 164 | with mock.patch.object(commands.do_tests, "DoTests") as dt: 165 | dt.return_value.process.return_value = status 166 | assert args.command(args) == exit_code 167 | assert obj.image 168 | 169 | 170 | def 
test_TestCommand_input(commands): 171 | parser = create_subparser(commands.TestCommand)[0] 172 | args = parser.parse_args(['test', 'label', '--input', 'file']) 173 | with mock.patch.object(commands.config, "TestEnvConfig"): 174 | with mock.patch.object(commands.config, "ImageConfig"): 175 | with mock.patch.object(commands.do_tests, "DoTests"): 176 | args.command(args) 177 | assert args.filename == 'file' 178 | 179 | 180 | def test_TestCommand_test_env(commands): 181 | parser = create_subparser(commands.TestCommand)[0] 182 | args = parser.parse_args(['test', 'label', '--test-config', 'cfg']) 183 | with mock.patch.object(commands.config, "TestEnvConfig") as mock_tec: 184 | mock_tec.get.return_value = sentinel.data 185 | with mock.patch.object(commands.config, "ImageConfig"): 186 | with mock.patch.object(commands.do_tests, "DoTests"): 187 | args.command(args) 188 | assert args.test_config == 'cfg' 189 | 190 | 191 | def test_TestCommand_env_name(commands): 192 | parser = create_subparser(commands.TestCommand)[0] 193 | args = parser.parse_args(['test', 'label', '--environment', 'env']) 194 | assert args.envlabel == 'env' 195 | with mock.patch.object(commands.config, "TestEnvConfig"): 196 | with mock.patch.object(commands.config, "ImageConfig"): 197 | with mock.patch.object(commands.do_tests, "DoTests"): 198 | args.command(args) 199 | assert args.envlabel == 'env' 200 | 201 | 202 | def test_TestCommand_existing(commands): 203 | parser = create_subparser(commands.TestCommand)[0] 204 | args = parser.parse_args(['test', 'label', '--use-existing-image', 'myuuid']) 205 | assert args.uuid == 'myuuid' 206 | with mock.patch.object(commands.config, "TestEnvConfig"): 207 | with mock.patch.object(commands.config, "ImageConfig"): 208 | with mock.patch.object(commands.do_tests, "DoTests"): 209 | args.command(args) 210 | assert args.uuid == 'myuuid' 211 | 212 | 213 | def test_TestCommand_keep_image(commands): 214 | parser = create_subparser(commands.TestCommand)[0] 215 | args = parser.parse_args(['test', 'label', '--keep-failed-image']) 216 | assert args.keep_failed_image is True 217 | with mock.patch.object(commands.config, "TestEnvConfig"): 218 | with mock.patch.object(commands.config, "ImageConfig"): 219 | with mock.patch.object(commands.do_tests, "DoTests"): 220 | args.command(args) 221 | assert args.keep_failed_image is True 222 | 223 | 224 | def test_TestCommand_keep_instance(commands): 225 | parser = create_subparser(commands.TestCommand)[0] 226 | args = parser.parse_args(['test', 'label', '--keep-failed-instance']) 227 | assert args.keep_failed_instance is True 228 | with mock.patch.object(commands.config, "TestEnvConfig"): 229 | with mock.patch.object(commands.config, "ImageConfig"): 230 | with mock.patch.object(commands.do_tests, "DoTests"): 231 | args.command(args) 232 | assert args.keep_failed_instance is True 233 | 234 | 235 | def test_TestCommand_actual_no_tests(commands): 236 | parser, obj = create_subparser(commands.TestCommand) 237 | args = parser.parse_args(['test', 'label']) 238 | assert args.imagelabel == 'label' 239 | with mock.patch.object(commands.config, "ImageConfig") as ic: 240 | ic.return_value.__getitem__.return_value = {} 241 | with mock.patch.object(commands.config, "TestEnvConfig"): 242 | with pytest.raises(commands.NoTestsError): 243 | args.command(args) 244 | 245 | 246 | def test_TestCommand_actual_no_proper_env(commands): 247 | parser, obj = create_subparser(commands.TestCommand) 248 | args = parser.parse_args(['test', 'label']) 249 | assert args.imagelabel == 'label' 250 | 
with mock.patch.object(commands.config, "ImageConfig") as ic: 251 | ic.return_value.__getitem__.return_value = {'tests': {'something': 'unrelated'}} 252 | with mock.patch.object(commands.config, "TestEnvConfig"): 253 | with pytest.raises(commands.TestEnvironmentNotFoundError): 254 | args.command(args) 255 | 256 | 257 | def test_TestCommand_actual_proper_env_override(commands): 258 | parser, obj = create_subparser(commands.TestCommand) 259 | args = parser.parse_args(['test', 'label', '--environment', 'foo']) 260 | assert args.imagelabel == 'label' 261 | with mock.patch.object(commands.config, "ImageConfig") as ic: 262 | ic.return_value.get.return_value = {'tests': {'environment_name': 'unrelated', 'tests_list': []}} 263 | with mock.patch.object(commands.config, "TestEnvConfig") as tec: 264 | tec.return_value = {'foo': 'bar'} 265 | with mock.patch.object(commands.do_tests, "DoTests"): 266 | args.command(args) 267 | assert obj.test_env == 'bar' 268 | 269 | 270 | def test_UploadCommand_actual_with_obsolete(commands, cred, mock_env_cfg, mock_image_cfg, config): 271 | parser, obj = create_subparser(commands.UploadCommand) 272 | args = parser.parse_args(['upload', 'label', 'uploadlabel']) 273 | assert args.imagelabel == 'label' 274 | assert args.no_obsolete is False 275 | assert args.images_config is None 276 | assert args.filename is None 277 | 278 | with mock.patch.object(commands.config, "UploadEnvConfig", autospec=True, strict=True) as uec: 279 | uec.return_value = config.Config({'uploadlabel': mock_env_cfg}) 280 | with mock.patch.object(commands.osclient, "OSClient") as mock_os: 281 | mock_os.return_value.older_images.return_value = [sentinel.one, sentinel.two] 282 | with mock.patch.object(commands.config, "ImageConfig", autospec=True, strict=True) as ic: 283 | ic.return_value = config.Config({'label': mock_image_cfg}) 284 | args.command(args) 285 | 286 | 287 | def test_UploadCommand_no_glance_section(commands, mock_env_cfg, config): 288 | img_config = {'filename': 'foobar'} 289 | parser, obj = create_subparser(commands.UploadCommand) 290 | args = parser.parse_args(['upload', 'label', 'uploadlabel']) 291 | with mock.patch.object(commands.config, "UploadEnvConfig") as uec: 292 | uec.return_value = config.Config({"uploadlabel": mock_env_cfg}) 293 | with mock.patch.object(commands.osclient, "OSClient"): 294 | with mock.patch.object(commands.config, "ImageConfig") as ic: 295 | ic.return_value.__getitem__.return_value = img_config 296 | with pytest.raises(commands.NotFoundInConfigError): 297 | args.command(args) 298 | 299 | 300 | def test_UploadCommand_obsolete(commands): 301 | parser = create_subparser(commands.UploadCommand)[0] 302 | args = parser.parse_args(['upload', 'label', 'uploadlabel', '--no-obsolete']) 303 | assert args.no_obsolete is True 304 | 305 | 306 | def test_RotateCommand_actual(commands, mock_env_cfg, config): 307 | parser, obj = create_subparser(commands.RotateCommand) 308 | args = parser.parse_args(['rotate', 'uploadlabel']) 309 | with mock.patch.object(commands.config, "UploadEnvConfig") as uec: 310 | uec.return_value = config.Config({"uploadlabel": mock_env_cfg}) 311 | with mock.patch.object(commands.osclient, "OSClient"): 312 | args.command(args) 313 | assert obj.upload_env 314 | 315 | 316 | def test_ObsoleteCommand_actual(commands, mock_env_cfg, config): 317 | parser, obj = create_subparser(commands.ObsoleteCommand) 318 | args = parser.parse_args(['mark-obsolete', 'uploadlabel', 'myuuid']) 319 | assert args.uuid == 'myuuid' 320 | with mock.patch.object(commands.config, 
"UploadEnvConfig") as uec: 321 | uec.return_value = config.Config({'uploadlabel': mock_env_cfg}) 322 | with mock.patch.object(commands.osclient, "OSClient"): 323 | args.command(args) 324 | assert obj.upload_env 325 | 326 | 327 | def test_TransferCommand_simple(commands): 328 | parser = create_subparser(commands.TransferCommand)[0] 329 | args = parser.parse_args(['transfer', 'myuuid']) 330 | assert args.uuid == 'myuuid' 331 | assert args.src_auth_url is None 332 | assert args.dst_auth_url is None 333 | assert args.src_tenant_name is None 334 | assert args.dst_tenant_name is None 335 | assert args.src_username is None 336 | assert args.dst_username is None 337 | assert args.src_password is None 338 | assert args.dst_password is None 339 | assert args.ignore_meta is False 340 | assert args.ignore_membership is False 341 | args.command(args) 342 | 343 | 344 | @pytest.mark.parametrize("opt", [ 345 | "src-auth-url", 346 | "dst-auth-url", 347 | "src-tenant-name", 348 | "dst-tenant-name", 349 | "src-username", 350 | "dst-username", 351 | "src-password", 352 | "dst-password", 353 | ]) 354 | def test_TransferCommand_args_param(commands, opt): 355 | parser = create_subparser(commands.TransferCommand)[0] 356 | args = parser.parse_args(['transfer', 'myuuid', "--" + opt, 'foo']) 357 | name = opt.replace('-', '_') 358 | assert args.__getattribute__(name) == 'foo' 359 | 360 | 361 | @pytest.mark.parametrize("opt", [ 362 | "ignore-meta", 363 | "ignore-membership" 364 | ]) 365 | def test_TransferCommand_args_param_ignore(commands, opt): 366 | parser = create_subparser(commands.TransferCommand)[0] 367 | args = parser.parse_args(['transfer', 'myuuid', "--" + opt]) 368 | name = opt.replace('-', '_') 369 | assert args.__getattribute__(name) is True 370 | 371 | 372 | def test_Main_empty_cmdline(commands): 373 | with pytest.raises(SystemExit): 374 | commands.Main([]) 375 | 376 | 377 | def test_Main_build_success(commands, mock_image_cfg): 378 | with mock.patch.object(commands.config, "ImageConfig", return_value={'label': mock_image_cfg}): 379 | with mock.patch.object(commands.dib.DIB, 'run', return_value=0): 380 | m = commands.Main(['build', 'label']) 381 | assert m.run() == 0 382 | 383 | 384 | def test_Main_build_error(commands, mock_image_cfg): 385 | with mock.patch.object(commands.config, "ImageConfig", return_value={'label': mock_image_cfg}): 386 | with mock.patch.object(commands.dib.DIB, 'run', return_value=1): 387 | m = commands.Main(['build', 'label']) 388 | assert m.run() == 1 389 | 390 | 391 | def test_main_build(commands, mock_image_cfg): 392 | with mock.patch.object(commands.config, "ImageConfig", return_value={'label': mock_image_cfg}): 393 | with mock.patch.object(commands.dib.DIB, 'run', return_value=1): 394 | commands.main(['build', 'label']) == 1 395 | 396 | 397 | def test_main_premature_exit_config(commands): 398 | with mock.patch.object(commands.config, "ImageConfig") as m: 399 | m.side_effect = commands.config.NotFoundInConfigError 400 | commands.main(['build', 'label']) == 10 401 | 402 | 403 | @pytest.mark.parametrize('exc', [ 404 | IOError 405 | ]) 406 | def test_main_test_command_with_exceptions(commands, mock_image_cfg, mock_env_cfg, exc): 407 | with mock.patch.object(commands.config, "ImageConfig", return_value={'label': mock_image_cfg}): 408 | with mock.patch.object(commands.config, "TestEnvConfig", return_value={'env': mock_env_cfg}): 409 | with mock.patch.object(commands.do_tests.DoTests, 'process', side_effect=exc): 410 | commands.main(['test', 'label']) == 1 411 | 412 | 413 | def 
test_init(commands): 414 | with mock.patch.object(commands, "Main") as m: 415 | m.return_value.run.return_value = 42 416 | with mock.patch.object(commands, "__name__", "__main__"): 417 | with mock.patch.object(commands.sys, 'exit') as mock_exit: 418 | commands.init() 419 | assert mock_exit.call_args[0][0] == 42 420 | 421 | 422 | if __name__ == "__main__": 423 | ourfilename = os.path.abspath(inspect.getfile(inspect.currentframe())) 424 | currentdir = os.path.dirname(ourfilename) 425 | parentdir = os.path.dirname(currentdir) 426 | file_to_test = os.path.join( 427 | parentdir, 428 | os.path.basename(parentdir), 429 | os.path.basename(ourfilename).replace("test_", '') 430 | ) 431 | pytest.main([ 432 | "-vv", 433 | "--cov", file_to_test, 434 | "--cov-report", "term-missing" 435 | ] + sys.argv) 436 | -------------------------------------------------------------------------------- /tests/test_dib.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import os 3 | import inspect 4 | import sys 5 | import pytest 6 | import mock 7 | from mock import sentinel 8 | 9 | 10 | @pytest.fixture 11 | def dib(): 12 | from dibctl import dib 13 | return dib 14 | 15 | 16 | @pytest.fixture 17 | def DIB(dib): 18 | DIB = dib.DIB(sentinel.filename, [sentinel.element]) 19 | return DIB 20 | 21 | 22 | def test_dib_cmdline_all_defaults(dib): 23 | dib = dib.DIB(sentinel.filename, [sentinel.element]) 24 | assert dib.cmdline == ['disk-image-create', '-a', 'amd64', '-o', sentinel.filename, sentinel.element] 25 | 26 | 27 | def test_dib_cmdline_no_tracing(dib): 28 | dib = dib.DIB(sentinel.filename, [sentinel.element], tracing=False) 29 | assert '-x' not in dib.cmdline 30 | 31 | 32 | def test_dib_cmdline_tracing(dib): 33 | dib = dib.DIB(sentinel.filename, [sentinel.element], tracing=True) 34 | assert '-x' in dib.cmdline 35 | 36 | 37 | def test_dib_cmdline_offline(dib): 38 | dib = dib.DIB(sentinel.filename, [sentinel.element], offline=True, tracing=False) 39 | assert '--offline' in dib.cmdline 40 | 41 | 42 | def test_dib_cmdline_additional_options(dib): 43 | opts = [sentinel.opt1, sentinel.opt2] 44 | dib = dib.DIB(sentinel.filename, [sentinel.element], additional_options=opts) 45 | assert opts[0] in dib.cmdline 46 | assert opts[1] in dib.cmdline 47 | 48 | 49 | def test_dib_cmdline_no_elements(dib): 50 | with pytest.raises(dib.NoElementsError): 51 | dib = dib.DIB(sentinel.filename, []) 52 | 53 | 54 | def test_prep_env(dib): 55 | with mock.patch.object(dib.os, "environ", {"key1": "value1"}): 56 | dib = dib.DIB(sentinel.filename, [sentinel.element], env={"key1": "value2", "key2": "value2"}) 57 | new_env = dib._prep_env() 58 | assert new_env["key1"] == "value2" 59 | assert new_env["key2"] == "value2" 60 | 61 | 62 | def test_prep_env_empy(dib): 63 | with mock.patch.object(dib.os, "environ", {"key1": "value1"}): 64 | dib = dib.DIB(sentinel.filename, [sentinel.element]) 65 | new_env = dib._prep_env() 66 | assert new_env["key1"] == "value1" 67 | 68 | 69 | def test_run_mocked_check_output(dib, DIB, capsys): 70 | with mock.patch.object(dib.subprocess, "Popen"): 71 | dib = dib.DIB("filename42", ["element42"]) 72 | dib.run() 73 | out = capsys.readouterr()[0] 74 | assert 'filename42' in out 75 | assert 'element42' in out 76 | 77 | 78 | def test_run_echoed(dib, DIB): 79 | dib = dib.DIB("filename", ["element1", "element2"], exec_path="echo") 80 | assert dib.run() == 0 81 | 82 | 83 | def test_get_installed_version_normal(dib): 84 | with mock.patch.object(dib.pkg_resources, 
'get_distribution') as mock_gd: 85 | mock_gd.return_value.version = '0.1.1' 86 | assert dib.get_installed_version() == dib.semantic_version.Version('0.1.1') 87 | 88 | 89 | def test_get_installed_version_normal_with_rc(dib): 90 | with mock.patch.object(dib.pkg_resources, 'get_distribution') as mock_gd: 91 | mock_gd.return_value.version = '2.0.0rc2' 92 | assert dib.get_installed_version() == dib.semantic_version.Version('2.0.0-rc2') 93 | 94 | 95 | def test_get_installed_version_no(dib): 96 | with mock.patch.object(dib.pkg_resources, 'get_distribution') as mock_gd: 97 | mock_gd.side_effect = dib.pkg_resources.DistributionNotFound 98 | with pytest.raises(dib.NoDibError): 99 | dib.get_installed_version() 100 | 101 | 102 | def test_get_installed_version_actual(dib): 103 | assert dib.get_installed_version() is not None 104 | 105 | 106 | def test_version_good(dib): 107 | assert dib._version('1.0.0') == dib.semantic_version.Version('1.0.0') 108 | 109 | 110 | def test_version_not_bad(dib): 111 | assert dib._version('1.0.0rc1') == dib.semantic_version.Version('1.0.0-rc1') 112 | 113 | 114 | def test_version_bad(dib): 115 | with pytest.raises(dib.BadVersion): 116 | dib._version('not-a-version') 117 | 118 | 119 | @pytest.mark.parametrize('min_version, max_version', [ 120 | [None, None], 121 | ['0.0.1', None], 122 | [None, '999.999.999'], 123 | ['0.0.1', '999.99.999'] 124 | ]) 125 | def test_validate_version_pass(dib, min_version, max_version): 126 | assert dib.validate_version(min_version, max_version) is True 127 | 128 | 129 | @pytest.mark.parametrize('min_version, max_version', [ 130 | [None, '0.0.1'], 131 | ['999.999.999', '0.0.1'], 132 | ['999.999.999', None], 133 | ['0.0.1', '0.0.1'], 134 | ['999.999.999', '999.99.999'] 135 | ]) 136 | def test_validate_version_not_pass(dib, min_version, max_version): 137 | with pytest.raises(dib.BadDibVersion): 138 | dib.validate_version(min_version, max_version) 139 | 140 | 141 | if __name__ == "__main__": 142 | ourfilename = os.path.abspath(inspect.getfile(inspect.currentframe())) 143 | currentdir = os.path.dirname(ourfilename) 144 | parentdir = os.path.dirname(currentdir) 145 | file_to_test = os.path.join( 146 | parentdir, 147 | os.path.basename(parentdir), 148 | os.path.basename(ourfilename).replace("test_", '') 149 | ) 150 | pytest.main([ 151 | "-vv", 152 | "--cov", file_to_test, 153 | "--cov-report", "term-missing" 154 | ] + sys.argv) 155 | -------------------------------------------------------------------------------- /tests/test_do_tests.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import mock 3 | import pytest 4 | import os 5 | import inspect 6 | import sys 7 | from mock import sentinel 8 | 9 | 10 | @pytest.fixture 11 | def Config(): 12 | from dibctl import config 13 | return config.Config 14 | 15 | 16 | @pytest.fixture 17 | def do_tests(): 18 | from dibctl import do_tests 19 | return do_tests 20 | 21 | 22 | @pytest.fixture 23 | def mock_env(Config): 24 | return Config({ 25 | 'nova': { 26 | 'flavor': 'some flavor' 27 | } 28 | }) 29 | 30 | 31 | @pytest.fixture 32 | def mock_image(Config): 33 | return Config({ 34 | 'tests': { 35 | 'wait_for_port': 22, 36 | 'tests_list': [{'pytest': sentinel.path1}, {'shell': sentinel.path2}] 37 | } 38 | }) 39 | 40 | 41 | def test_init_no_tests(do_tests): 42 | image = {} 43 | env = {} 44 | dt = do_tests.DoTests(image, env) 45 | assert dt.tests_list == [] 46 | 47 | 48 | def test_init_no_override(do_tests): 49 | image = {} 50 | dt = do_tests.DoTests(image, {}, 
image_uuid=sentinel.uuid) 51 | assert dt.tests_list == [] 52 | assert dt.delete_image is False 53 | assert dt.override_image_uuid == sentinel.uuid 54 | 55 | 56 | def test_init_tests(do_tests, Config): 57 | image = Config({ 58 | 'tests': { 59 | 'tests_list': ['test'] 60 | } 61 | }) 62 | env = Config({}) 63 | dt = do_tests.DoTests(image, env) 64 | assert dt.tests_list == ['test'] 65 | 66 | 67 | @pytest.mark.parametrize("os_env, img, tenv, combined", [ 68 | [{}, {}, {}, {}], 69 | [{'a': 'b'}, {}, {}, {'a': 'b'}], 70 | [{}, {'a': 'b'}, {}, {'a': 'b'}], 71 | [{}, {}, {'a': 'b'}, {'a': 'b'}], 72 | [{'a': '1'}, {'a': '2'}, {'a': '3'}, {'a': '3'}], 73 | [{'a': '1'}, {'a': '2'}, {}, {'a': '2'}], 74 | [{'a': '1', 'b': '1'}, {'a': '2', 'c': '1'}, {'a': '3', 'd': '1'}, {'a': '3', 'b': '1', 'c': '1', 'd': '1'}], 75 | ]) 76 | def test_make_env_vars(do_tests, Config, os_env, img, tenv, combined): 77 | img_cfg = Config({'tests': {'environment_variables': img}}) 78 | tenv_cfg = Config({'tests': {'environment_variables': tenv}}) 79 | with mock.patch.object(do_tests.os, "environ", os_env): 80 | dt = do_tests.DoTests(img_cfg, tenv_cfg) 81 | assert dt.environment_variables == combined 82 | 83 | 84 | def test_run_test_bad_config(do_tests): 85 | dt = do_tests.DoTests({}, {}) 86 | with pytest.raises(do_tests.BadTestConfigError): 87 | dt.run_test(sentinel.ssh, {'one': 1, 'two': 2}, sentinel.config, sentinel.env) 88 | 89 | 90 | def test_run_test_bad_runner(do_tests): 91 | dt = do_tests.DoTests({}, {}) 92 | with pytest.raises(do_tests.BadTestConfigError): 93 | dt.run_test(sentinel.ssh, {'badrunner': 1}, sentinel.config, sentinel.env) 94 | 95 | 96 | def test_run_test_duplicate_runner(do_tests): 97 | dt = do_tests.DoTests({}, {}) 98 | with pytest.raises(do_tests.BadTestConfigError): 99 | dt.run_test(sentinel.ssh, {'pytest': 1, 'shell': 2}, sentinel.config, sentinel.env) 100 | 101 | 102 | @pytest.mark.parametrize('continue_on_fail, result, expected', [ 103 | [True, False, True], 104 | [True, True, True], 105 | [False, True, True], 106 | [False, False, False], 107 | ]) 108 | @pytest.mark.parametrize('runner', ['pytest', 'shell']) 109 | def test_run_test_matrix(do_tests, runner, continue_on_fail, result, expected): 110 | dt = do_tests.DoTests({}, {}, continue_on_fail=continue_on_fail) 111 | with mock.patch.multiple(do_tests, pytest_runner=mock.DEFAULT, shell_runner=mock.DEFAULT) as mock_rs: 112 | mock_r = mock_rs[runner + '_runner'] 113 | mock_r.runner.return_value = result 114 | assert dt.run_test(sentinel.ssh, {runner: sentinel.path}, sentinel.config, sentinel.var) is expected 115 | assert mock_r.runner.called 116 | 117 | 118 | def test_init_ssh_with_data(do_tests): 119 | env = { 120 | 'nova': { 121 | 'flavor': 'some flavor' 122 | } 123 | } 124 | image = { 125 | 'tests': { 126 | 'tests_list': [], 127 | 'ssh': { 128 | 'username': 'user' 129 | } 130 | } 131 | } 132 | dt = do_tests.DoTests(image, env) 133 | dt.init_ssh(mock.MagicMock()) 134 | assert dt.ssh is not None 135 | 136 | 137 | def test_wait_port_good(do_tests, Config): 138 | env = { 139 | 'nova': { 140 | 'flavor': 'some flavor' 141 | } 142 | } 143 | image = { 144 | 'tests': { 145 | 'tests_list': [], 146 | 'wait_for_port': 22, 147 | 'port_wait_timeout': 180 148 | } 149 | } 150 | dt = do_tests.DoTests(Config(image), Config(env)) 151 | mock_prep_os = mock.MagicMock() 152 | assert dt.wait_port(mock_prep_os) is True 153 | assert mock_prep_os.wait_for_port.call_args == mock.call(22, 180) 154 | 155 | 156 | @pytest.mark.parametrize('env_timeout, image_timeout, result', [ 
157 | [1, 2, 2], 158 | [2, 1, 2], 159 | ]) 160 | def test_get_port_timeout_uses_max(do_tests, Config, env_timeout, image_timeout, result): 161 | env = { 162 | 'nova': { 163 | 'flavor': 'some flavor' 164 | }, 165 | 'tests': { 166 | 'port_wait_timeout': env_timeout 167 | } 168 | } 169 | image = { 170 | 'tests': { 171 | 'tests_list': [], 172 | 'wait_for_port': 22, 173 | 'port_wait_timeout': image_timeout 174 | } 175 | } 176 | dt = do_tests.DoTests(Config(image), Config(env)) 177 | mock_prep_os = mock.MagicMock() 178 | dt.wait_port(mock_prep_os) 179 | assert mock_prep_os.wait_for_port.call_args == mock.call(22, result) 180 | 181 | 182 | @pytest.mark.parametrize('true_value', [ 183 | 'was_removed', 184 | 'preexisted', 185 | 'deletable' 186 | ]) 187 | def test_report_item_silent(do_tests, true_value, capsys): 188 | data = { 189 | 'was_removed': False, 190 | 'preexisted': False, 191 | 'deletable': False, 192 | 'id': 'some_id', 193 | 'name': 'some name' 194 | } 195 | data[true_value] = True 196 | do_tests.DoTests.report_item('name', data) 197 | assert 'will not be removed' not in capsys.readouterr()[0] 198 | 199 | 200 | def test_report_ssh(do_tests, capsys): 201 | ssh = mock.MagicMock() 202 | ssh.command_line.return_value = ['some', 'command', 'line'] 203 | do_tests.DoTests.report_ssh(ssh) 204 | assert 'some command line' in capsys.readouterr()[0] 205 | 206 | 207 | def test_report(do_tests, mock_env, mock_image): 208 | dt = do_tests.DoTests(mock_image, mock_env) 209 | prep_os = mock.MagicMock() 210 | mock_status = { 211 | 'was_removed': False, 212 | 'preexisted': False, 213 | 'deletable': False, 214 | 'id': 'some_id', 215 | 'name': 'some_name' 216 | } 217 | prep_os.image_status.return_value = mock_status 218 | prep_os.instance_status.return_value = mock_status 219 | prep_os.keypair_status.return_value = mock_status 220 | dt.report(prep_os) 221 | 222 | 223 | def test_report_item(do_tests, capsys): 224 | data = { 225 | 'was_removed': False, 226 | 'preexisted': False, 227 | 'deletable': False, 228 | 'id': 'some_id', 229 | 'name': 'some name' 230 | } 231 | do_tests.DoTests.report_item('name', data) 232 | assert 'will not be removed' in capsys.readouterr()[0] 233 | 234 | 235 | def test_get_port_timeout_uses_env(do_tests, Config): 236 | env = { 237 | 'nova': { 238 | 'flavor': 'some flavor' 239 | }, 240 | 'tests': { 241 | 'port_wait_timeout': 42 242 | } 243 | } 244 | image = { 245 | 'tests': { 246 | 'tests_list': [], 247 | 'wait_for_port': 22 248 | } 249 | } 250 | dt = do_tests.DoTests(Config(image), Config(env)) 251 | mock_prep_os = mock.MagicMock() 252 | dt.wait_port(mock_prep_os) 253 | assert mock_prep_os.wait_for_port.call_args == mock.call(22, 42) 254 | 255 | 256 | def test_get_port_timeout_uses_img(do_tests, Config): 257 | env = { 258 | 'nova': { 259 | 'flavor': 'some flavor' 260 | }, 261 | } 262 | image = { 263 | 'tests': { 264 | 'tests_list': [], 265 | 'wait_for_port': 22, 266 | 'port_wait_timeout': 42 267 | } 268 | } 269 | dt = do_tests.DoTests(Config(image), Config(env)) 270 | mock_prep_os = mock.MagicMock() 271 | dt.wait_port(mock_prep_os) 272 | assert mock_prep_os.wait_for_port.call_args == mock.call(22, 42) 273 | 274 | 275 | def test_get_port_timeout_uses_default(do_tests): 276 | env = { 277 | 'nova': { 278 | 'flavor': 'some flavor' 279 | }, 280 | } 281 | image = { 282 | 'tests': { 283 | 'tests_list': [], 284 | 'wait_for_port': 22 285 | } 286 | } 287 | dt = do_tests.DoTests(image, env) 288 | mock_prep_os = mock.MagicMock() 289 | dt.wait_port(mock_prep_os) 290 | assert 
mock_prep_os.wait_for_port.call_args == mock.call(22, 61) # magical constant! 291 | 292 | 293 | def test_wait_port_no_port(do_tests): 294 | env = { 295 | 'nova': { 296 | 'flavor': 'some flavor' 297 | } 298 | } 299 | image = { 300 | 'tests': { 301 | 'tests_list': [], 302 | } 303 | } 304 | dt = do_tests.DoTests(image, env) 305 | mock_prep_os = mock.MagicMock() 306 | assert dt.wait_port(mock_prep_os) is False 307 | 308 | 309 | def test_wait_port_timeout(do_tests): 310 | env = { 311 | 'nova': { 312 | 'flavor': 'some flavor' 313 | } 314 | } 315 | image = { 316 | 'tests': { 317 | 'tests_list': [], 318 | 'wait_for_port': 42 319 | } 320 | } 321 | dt = do_tests.DoTests(image, env) 322 | mock_prep_os = mock.MagicMock() 323 | mock_prep_os.wait_for_port.return_value = False 324 | with pytest.raises(do_tests.PortWaitError): 325 | dt.wait_port(mock_prep_os) 326 | 327 | 328 | @pytest.mark.parametrize('port', [False, 22]) 329 | def test_process_minimal(do_tests, port, capsys): 330 | env = { 331 | 'nova': { 332 | 'flavor': 'some flavor' 333 | } 334 | } 335 | image = { 336 | 'tests': { 337 | 'wait_for_port': port, 338 | 'tests_list': [] 339 | } 340 | } 341 | dt = do_tests.DoTests(image, env) 342 | with mock.patch.object(do_tests.prepare_os, "PrepOS"): 343 | assert dt.process(False, False) is True 344 | assert 'passed' in capsys.readouterr()[0] 345 | 346 | 347 | def refactor_test_process_port_timeout(do_tests): 348 | env = { 349 | 'nova': { 350 | 'flavor': 'some flavor' 351 | } 352 | } 353 | image = { 354 | 'tests': { 355 | 'wait_for_port': 22, 356 | 'tests_list': [] 357 | } 358 | } 359 | dt = do_tests.DoTests(image, env) 360 | with mock.patch.object(do_tests.prepare_os, "PrepOS") as mock_prep_os_class: 361 | mock_prep_os = mock.MagicMock() 362 | mock_prep_os.wait_for_port.return_value = False 363 | mock_enter = mock.MagicMock() 364 | mock_enter.__enter__.return_value = mock_prep_os 365 | mock_prep_os_class.return_value = mock_enter 366 | with pytest.raises(do_tests.TestError): 367 | dt.process(False, False) 368 | 369 | 370 | def test_process_with_tests(do_tests, capsys): 371 | env = { 372 | 'nova': { 373 | 'flavor': 'some flavor' 374 | } 375 | } 376 | image = { 377 | 'tests': { 378 | 'wait_for_port': 22, 379 | 'tests_list': [{'pytest': sentinel.path1}, {'shell': sentinel.path2}] 380 | } 381 | } 382 | dt = do_tests.DoTests(image, env) 383 | with mock.patch.multiple(do_tests, pytest_runner=mock.DEFAULT, shell_runner=mock.DEFAULT): 384 | with mock.patch.object(do_tests.prepare_os, "PrepOS") as mock_prep_os_class: 385 | mock_prep_os = mock.MagicMock() 386 | mock_enter = mock.MagicMock() 387 | mock_enter.__enter__.return_value = mock_prep_os 388 | mock_prep_os_class.return_value = mock_enter 389 | assert dt.process(False, False) is True 390 | 391 | 392 | def test_process_shell_only(do_tests, capsys): 393 | env = { 394 | 'nova': { 395 | 'flavor': 'some flavor' 396 | } 397 | } 398 | image = { 399 | 'tests': { 400 | 'wait_for_port': 22, 401 | 'tests_list': [{'pytest': sentinel.path1}, {'shell': sentinel.path2}] 402 | } 403 | } 404 | with mock.patch.object(do_tests.prepare_os, "PrepOS"): 405 | with mock.patch.object(do_tests.DoTests, "open_shell", return_value=sentinel.result): 406 | dt = do_tests.DoTests(image, env) 407 | assert dt.process(shell_only=True, shell_on_errors=False) == sentinel.result 408 | 409 | 410 | def test_process_all_tests_fail(do_tests, capsys, Config): 411 | env = { 412 | 'nova': { 413 | 'flavor': 'some flavor' 414 | } 415 | } 416 | image = { 417 | 'tests': { 418 | 'wait_for_port': 22, 419 | 
'tests_list': [{'pytest': sentinel.path1}, {'pytest': sentinel.path2}] 420 | } 421 | } 422 | dt = do_tests.DoTests(Config(image), Config(env)) 423 | dt.ssh = mock.MagicMock() 424 | with mock.patch.object(do_tests.pytest_runner, "runner") as runner: 425 | runner.side_effect = [False, ValueError("Shouldn't be called")] 426 | with mock.patch.object(do_tests.prepare_os, "PrepOS") as mock_prep_os_class: 427 | mock_prep_os = mock.MagicMock() 428 | mock_enter = mock.MagicMock() 429 | mock_enter.__enter__.return_value = mock_prep_os 430 | mock_prep_os_class.return_value = mock_enter 431 | assert dt.process(False, False) is False 432 | assert runner.call_count == 1 433 | 434 | 435 | def test_process_all_tests_fail_open_shell(do_tests, Config): 436 | env = { 437 | 'nova': { 438 | 'flavor': 'some flavor' 439 | } 440 | } 441 | image = { 442 | 'tests': { 443 | 'wait_for_port': 22, 444 | 'tests_list': [{'pytest': sentinel.path1}, {'pytest': sentinel.path2}] 445 | } 446 | } 447 | dt = do_tests.DoTests(Config(image), Config(env)) 448 | dt.ssh = mock.MagicMock() 449 | with mock.patch.object(do_tests.pytest_runner, "runner") as runner: 450 | runner.side_effect = [False, ValueError("Shouldn't be called")] 451 | with mock.patch.object(do_tests.prepare_os, "PrepOS") as mock_prep_os_class: 452 | mock_prep_os = mock.MagicMock() 453 | mock_enter = mock.MagicMock() 454 | mock_enter.__enter__.return_value = mock_prep_os 455 | mock_prep_os_class.return_value = mock_enter 456 | with mock.patch.object(dt, 'open_shell') as mock_open_shell: 457 | assert dt.process(False, shell_on_errors=True) is False 458 | assert mock_open_shell.called 459 | 460 | 461 | @pytest.mark.parametrize('result', [True, False]) 462 | def test_run_all_tests(do_tests, result, Config): 463 | env = { 464 | 'nova': { 465 | 'flavor': 'some flavor' 466 | } 467 | } 468 | image = { 469 | 'tests': { 470 | 'wait_for_port': 22, 471 | 'tests_list': [{'pytest': sentinel.path1}, {'pytest': sentinel.path2}] 472 | } 473 | } 474 | with mock.patch.object(do_tests.DoTests, "run_test", return_value=result): 475 | dt = do_tests.DoTests(Config(image), Config(env)) 476 | dt.ssh = mock.MagicMock() 477 | assert dt.run_all_tests(mock.MagicMock()) is result 478 | 479 | 480 | @pytest.mark.parametrize('retval, keep', [ 481 | [0, False], 482 | [1, False], 483 | [42, True] 484 | ]) 485 | def test_open_shell(do_tests, retval, keep): 486 | env = { 487 | 'nova': { 488 | 'flavor': 'some flavor' 489 | } 490 | } 491 | image = { 492 | 'tests': { 493 | 'ssh': {'username': 'user'}, 494 | 'wait_for_port': 22, 495 | 'tests_list': [{'pytest': sentinel.path1}, {'pytest': sentinel.path2}] 496 | } 497 | } 498 | dt = do_tests.DoTests(image, env) 499 | mock_ssh = mock.MagicMock() 500 | mock_ssh.shell.return_value = retval 501 | dt.open_shell(mock_ssh, 'reason') 502 | assert dt.keep_failed_instance == keep 503 | assert 'exit 42' in mock_ssh.shell.call_args[0][1] 504 | 505 | 506 | def test_open_shell_no_ssh_config(do_tests): 507 | env = { 508 | 'nova': { 509 | 'flavor': 'some flavor' 510 | } 511 | } 512 | image = { 513 | } 514 | dt = do_tests.DoTests(image, env) 515 | with pytest.raises(do_tests.TestError): 516 | dt.open_shell(None, 'reason') 517 | 518 | 519 | @pytest.mark.parametrize('kins', [True, False]) 520 | @pytest.mark.parametrize('kimg', [True, False]) 521 | def test_check_if_keep_stuff_after_fail_code_coverage(do_tests, kins, kimg): 522 | env = { 523 | 'nova': { 524 | 'flavor': 'some flavor' 525 | }, 526 | } 527 | image = { 528 | 'tests': { 529 | 'tests_list': [], 530 | 
'wait_for_port': 22 531 | } 532 | } 533 | dt = do_tests.DoTests(image, env) 534 | dt.keep_failed_instance = kins 535 | dt.keep_failed_image = kimg 536 | dt.check_if_keep_stuff_after_fail(mock.MagicMock()) 537 | 538 | 539 | if __name__ == "__main__": 540 | ourfilename = os.path.abspath(inspect.getfile(inspect.currentframe())) 541 | currentdir = os.path.dirname(ourfilename) 542 | parentdir = os.path.dirname(currentdir) 543 | file_to_test = os.path.join( 544 | parentdir, 545 | os.path.basename(parentdir), 546 | os.path.basename(ourfilename).replace("test_", '', 1) 547 | ) 548 | pytest.main([ 549 | "-vv", 550 | "--cov", file_to_test, 551 | "--cov-report", "term-missing" 552 | ] + sys.argv) 553 | -------------------------------------------------------------------------------- /tests/test_image_preprocessing.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import os 3 | import inspect 4 | import sys 5 | import pytest 6 | import tempfile 7 | 8 | 9 | @pytest.fixture 10 | def i_p(): 11 | from dibctl import image_preprocessing 12 | return image_preprocessing 13 | 14 | 15 | @pytest.mark.parametrize('input, output', [ 16 | ["", ""], 17 | ["foo", "foo"], 18 | ["%(input_filename)s", "name"], 19 | ["%(input_filename)s.%(disk_format)s", "name.raw"], 20 | ["%(input_filename)s.%(container_format)s", "name.bare"], 21 | ]) 22 | def test_prep_output_name_normal(i_p, input, output): 23 | p = i_p.Preprocess('name', {'disk_format': 'raw'}, {'output_filename': input, 'cmdline': 'ignore'}) 24 | p.interpolate() 25 | assert p.output_filename == output 26 | 27 | 28 | @pytest.mark.parametrize('input', [ 29 | "%", 30 | "%{foo}", 31 | "/tmp/tmpyPhzXr" 32 | "%(unknown_variable)", 33 | "%(unknown_variable)s", 34 | "%(bad" 35 | ]) 36 | def test_prep_output_name_bad(i_p, input): 37 | p = i_p.Preprocess('name', {'disk_format': 'raw'}, {'output_filename': input, 'cmdline': 'ignore'}) 38 | with pytest.raises(i_p.config.ConfigError): 39 | p.interpolate() 40 | 41 | 42 | @pytest.mark.parametrize('input, output', [ 43 | ["", ""], 44 | ["foo bar", "foo bar"], 45 | ["qemu-img %(input_filename)s %(output_filename)s", "qemu-img name name.raw"], 46 | ["qemu-img -O %(disk_format)s %(input_filename)s %(output_filename)s", "qemu-img -O raw name name.raw"] 47 | ]) 48 | def test_prep_output_cmdline_normal(i_p, input, output): 49 | p = i_p.Preprocess( 50 | 'name', 51 | {'disk_format': 'raw'}, 52 | {'output_filename': "%(input_filename)s.%(disk_format)s", 'cmdline': input} 53 | ) 54 | p.interpolate() 55 | assert p.command_line == output 56 | 57 | 58 | @pytest.mark.parametrize('input', [ 59 | "%", 60 | "%{foo}", 61 | "%(unknown_variable)", 62 | "%(unknown_variable)s", 63 | "%(bad" 64 | ]) 65 | def test_prep_output_cmdline_bad(i_p, input): 66 | p = i_p.Preprocess( 67 | 'name', 68 | {'disk_format': 'raw'}, 69 | {'output_filename': "%(input_filename)s.%(disk_format)s", 'cmdline': input} 70 | ) 71 | with pytest.raises(i_p.config.ConfigError): 72 | p.interpolate() 73 | 74 | 75 | def test_context_no_preprocessing(i_p): 76 | with i_p.Preprocess('name', {}, {}) as new_name: 77 | assert new_name == 'name' 78 | 79 | 80 | def test_context_reuse_existing_file_no_exec(i_p): 81 | with tempfile.NamedTemporaryFile() as t: 82 | with i_p.Preprocess( 83 | 'foo', 84 | {}, 85 | {'output_filename': t.name, "cmdline": "exit 1", "use_existing": True} 86 | ) as new_name: 87 | assert new_name == t.name 88 | 89 | 90 | def test_context_use_existing_no_file(i_p): 91 | with tempfile.NamedTemporaryFile() as t: 92 
| tempname = t.name 93 | del t 94 | with i_p.Preprocess( 95 | tempname, 96 | {}, 97 | {'output_filename': tempname, "cmdline": "touch %(input_filename)s", "use_existing": True} 98 | ) as new_name: 99 | assert new_name == tempname 100 | os.remove(tempname) 101 | 102 | 103 | def test_context_delete_after_upload_no_use_existing(i_p): 104 | with tempfile.NamedTemporaryFile() as t: 105 | tempname = t.name 106 | del t 107 | with i_p.Preprocess( 108 | tempname, 109 | {}, 110 | {'output_filename': tempname, "cmdline": "touch %(input_filename)s"} 111 | ) as new_name: 112 | assert new_name == tempname 113 | assert os.path.isfile(tempname) is False 114 | 115 | 116 | def test_context_file_existed_before_upload(i_p): 117 | with tempfile.NamedTemporaryFile(delete=False) as t: 118 | tempname = t.name 119 | t.write("no!") 120 | del t 121 | with i_p.Preprocess( 122 | tempname, 123 | {}, 124 | {'output_filename': tempname, "cmdline": "touch %(input_filename)s"} 125 | ) as new_name: 126 | assert new_name == tempname 127 | assert "no!" not in open(tempname, 'r').read() 128 | assert os.path.isfile(tempname) is False 129 | 130 | 131 | def test_context_do_not_delete_after_upload_no_use_existing(i_p): 132 | with tempfile.NamedTemporaryFile() as t: 133 | tempname = t.name 134 | del t 135 | with i_p.Preprocess( 136 | tempname, 137 | {}, 138 | {'output_filename': tempname, "cmdline": "touch %(input_filename)s", 'delete_processed_after_upload': False} 139 | ) as new_name: 140 | assert new_name == tempname 141 | assert os.path.isfile(tempname) is True 142 | os.remove(tempname) 143 | 144 | 145 | def test_context_assertion_on_exec(i_p): 146 | with tempfile.NamedTemporaryFile() as t: 147 | tempname = t.name 148 | del t 149 | with pytest.raises(i_p.PreprocessError): 150 | with i_p.Preprocess( 151 | tempname, 152 | {}, 153 | {'output_filename': tempname, "cmdline": "exit 1"} 154 | ): 155 | pass 156 | 157 | 158 | def test_context_no_file_created(i_p): 159 | with tempfile.NamedTemporaryFile() as t: 160 | tempname = t.name 161 | del t 162 | with pytest.raises(i_p.PreprocessError): 163 | with i_p.Preprocess( 164 | tempname, 165 | {}, 166 | {'output_filename': tempname, "cmdline": "exit 0"} 167 | ): 168 | pass 169 | 170 | 171 | def test_context_manager_without_processing(i_p): 172 | with tempfile.NamedTemporaryFile() as t: 173 | tempname = t.name 174 | del t 175 | with i_p.Preprocess( 176 | tempname, 177 | {}, 178 | {} 179 | ) as new_name: 180 | assert tempname == new_name 181 | 182 | 183 | if __name__ == "__main__": 184 | ourfilename = os.path.abspath(inspect.getfile(inspect.currentframe())) 185 | currentdir = os.path.dirname(ourfilename) 186 | parentdir = os.path.dirname(currentdir) 187 | file_to_test = os.path.join( 188 | parentdir, 189 | os.path.basename(parentdir), 190 | os.path.basename(ourfilename).replace("test_", '') 191 | ) 192 | pytest.main([ 193 | "-vv", 194 | "--cov", file_to_test, 195 | "--cov-report", "term-missing" 196 | ] + sys.argv) 197 | -------------------------------------------------------------------------------- /tests/test_pytest_runner.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import mock 3 | import pytest 4 | import os 5 | import inspect 6 | import sys 7 | from mock import sentinel 8 | import paramiko 9 | 10 | 11 | @pytest.fixture 12 | def pytest_runner(): 13 | from dibctl import pytest_runner 14 | return pytest_runner 15 | 16 | 17 | @pytest.fixture 18 | def ssh(): 19 | from dibctl import ssh 20 | return ssh 21 | 22 | 23 | 
@pytest.fixture 24 | def prepare_os(): 25 | from dibctl import prepare_os 26 | return prepare_os 27 | 28 | 29 | @pytest.fixture 30 | def dcp(pytest_runner, ssh): 31 | tos = mock.MagicMock() 32 | tos.ip = '192.168.0.1' 33 | tos.os_instance.interface_list.return_value = [sentinel.iface1, sentinel.iface2] 34 | tos.flavor.return_value.get_keys.return_value = {'name': 'value'} 35 | tos.key_name = 'foo-key-name' 36 | tos.os_key_private_file = 'private-file' 37 | tos.ips.return_value = [sentinel.ip1, sentinel.ip2] 38 | tos.ips_by_version.return_value = [sentinel.ip3, sentinel.ip4] 39 | tos.get_image_info.return_value = sentinel.image_info 40 | tos.image = sentinel.image 41 | tos.os_instance.get_console_output.return_value = sentinel.console_out 42 | s = ssh.SSH('192.168.0.1', 'root', 'secret') 43 | dcp = pytest_runner.DibCtlPlugin(s, tos, {}) 44 | return dcp 45 | 46 | 47 | @pytest.mark.parametrize("code, status", [ 48 | [0, True], 49 | [-1, False], 50 | [1, False] 51 | ]) 52 | def test_runner_status(pytest_runner, code, status): 53 | with mock.patch.object(pytest_runner, "DibCtlPlugin"): 54 | with mock.patch.object(pytest_runner.pytest, "main", return_value=code): 55 | assert pytest_runner.runner( 56 | sentinel.path, 57 | sentinel.ssh, 58 | sentinel.tos, 59 | sentinel.environment_variables, 60 | sentinel.timeout_val, 61 | False, 62 | ) == status 63 | 64 | 65 | def test_runner_status_cont_on_fail_true(pytest_runner): 66 | with mock.patch.object(pytest_runner, "DibCtlPlugin"): 67 | with mock.patch.object(pytest_runner.pytest, "main", return_value=-1) as mock_main: 68 | pytest_runner.runner( 69 | sentinel.path, 70 | sentinel.ssh, 71 | sentinel.tos, 72 | sentinel.environment_variables, 73 | sentinel.timeout_val, 74 | False, 75 | ) 76 | assert '-x' in mock_main.call_args[0][0] 77 | 78 | 79 | def test_runner_status_cont_on_fail_false(pytest_runner): 80 | with mock.patch.object(pytest_runner, "DibCtlPlugin"): 81 | with mock.patch.object(pytest_runner.pytest, "main", return_value=-1) as mock_main: 82 | pytest_runner.runner( 83 | sentinel.path, 84 | sentinel.ssh, 85 | sentinel.tos, 86 | sentinel.environment_variables, 87 | sentinel.timeout_val, 88 | True 89 | ) 90 | assert '-x' not in mock_main.call_args[0][0] 91 | 92 | 93 | def test_DibCtlPlugin_init_soft_import(dcp): 94 | assert dcp.testinfra 95 | 96 | 97 | def test_DibCtlPlugin_init_no_testinfra(pytest_runner): 98 | with mock.patch.dict(sys.modules, {'testinfra': None}): 99 | dcp = pytest_runner.DibCtlPlugin( 100 | sentinel.ssh, 101 | mock.MagicMock(), 102 | {} 103 | ) 104 | assert dcp.testinfra is None 105 | with pytest.raises(ImportError): 106 | dcp.ssh_backend(mock.MagicMock()) 107 | 108 | 109 | def test_DibCtlPlugin_flavor_fixture(dcp): 110 | assert dcp.flavor(sentinel.request) 111 | 112 | 113 | def test_DibCtlPlugin_flavor_meta_fixture(dcp): 114 | assert dcp.flavor_meta(sentinel.request) == {'name': 'value'} 115 | 116 | 117 | def test_DibCtlPlugin_instance_fixture(dcp): 118 | assert dcp.instance(sentinel.request) 119 | 120 | 121 | def test_DibCtlPlugin_network_fixture(dcp): 122 | assert dcp.network(sentinel.request) == [sentinel.iface1, sentinel.iface2] 123 | 124 | 125 | def test_DibCtlPlugin_wait_for_port_fixture(dcp): 126 | dcp.wait_for_port(sentinel.request)() 127 | assert dcp.tos.wait_for_port.call_args == mock.call(22, 60) 128 | 129 | 130 | def test_DibCtlPlugin_ips_fixture(dcp): 131 | assert dcp.ips(sentinel.request) == [sentinel.ip1, sentinel.ip2] 132 | 133 | 134 | def test_DibCtlPlugin_ips_v4_fixture(dcp): 135 | assert 
dcp.ips_v4(sentinel.request) == [sentinel.ip3, sentinel.ip4] 136 | 137 | 138 | def test_DibCtlPlugin_main_ip_fixture(dcp): 139 | assert dcp.main_ip(sentinel.request) == '192.168.0.1' 140 | 141 | 142 | def test_DibCtlPlugin_image_info_fixture(dcp): 143 | assert dcp.image_info(sentinel.request) == sentinel.image_info 144 | 145 | 146 | def test_DibCtlPlugin_image_config_fixture(dcp): 147 | assert dcp.image_config(sentinel.request) == sentinel.image 148 | 149 | 150 | def test_DibCtlPlugin_console_output_fixture(dcp): 151 | assert dcp.console_output(sentinel.request) == sentinel.console_out 152 | 153 | 154 | def test_DibCtlPlugin_ssh_client_fixture(dcp): 155 | assert isinstance(dcp.ssh_client(), paramiko.client.SSHClient) 156 | 157 | 158 | @pytest.mark.parametrize('key, value', [ 159 | ['ip', '192.168.0.1'], 160 | ['username', 'root'] 161 | ]) 162 | def test_DibCtlPlugin_ssh_fixture(dcp, key, value): 163 | ssh = dcp.ssh(sentinel.request) 164 | assert ssh[key] == value 165 | 166 | 167 | if __name__ == "__main__": 168 | ourfilename = os.path.abspath(inspect.getfile(inspect.currentframe())) 169 | currentdir = os.path.dirname(ourfilename) 170 | parentdir = os.path.dirname(currentdir) 171 | file_to_test = os.path.join( 172 | parentdir, 173 | os.path.basename(parentdir), 174 | os.path.basename(ourfilename).replace("test_", '', 1) 175 | ) 176 | pytest.main([ 177 | "-vv", 178 | "--cov", file_to_test, 179 | "--cov-report", "term-missing" 180 | ] + sys.argv) 181 | -------------------------------------------------------------------------------- /tests/test_shell_runner.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import mock 3 | import pytest 4 | import os 5 | import inspect 6 | import sys 7 | from mock import sentinel 8 | import tempfile 9 | import uuid 10 | 11 | 12 | @pytest.fixture 13 | def ssh(): 14 | from dibctl import ssh 15 | return ssh 16 | 17 | 18 | @pytest.fixture 19 | def shell_runner(): 20 | from dibctl import shell_runner 21 | return shell_runner 22 | 23 | 24 | @pytest.mark.parametrize("input, output", [ 25 | [[], {}], 26 | [{}, {}], 27 | [None, {}], 28 | ["foo", {"P": "foo"}], 29 | [1, {"P": "1"}], 30 | [True, {"P": "True"}], 31 | [{"sublevel": "value"}, {"P_SUBLEVEL": "value"}], 32 | [{1: "value"}, {"P_1": "value"}], 33 | [{"one": "value", "two": "value"}, {"P_ONE": "value", "P_TWO": "value"}], 34 | [[{"one": "value"}, {"two": "value"}], {"P_ONE": "value", "P_TWO": "value"}], 35 | [{"sublevel1": {"sublevel2": "value"}}, {"P_SUBLEVEL1_SUBLEVEL2": "value"}], 36 | [{"one": "foo", "sublevel1": {"sublevel2": "value"}}, {"P_SUBLEVEL1_SUBLEVEL2": "value", "P_ONE": "foo"}], 37 | [{"one": ["foo", "bar"]}, {"P_ONE": "bar"}], # invalid 38 | ]) 39 | def test_unwrap_config(shell_runner, input, output): 40 | assert shell_runner.unwrap_config("P", input) == output 41 | 42 | 43 | def test_run_shell_test_success(shell_runner): 44 | with mock.patch.object(shell_runner.subprocess, "check_call", autospec=True) as mock_call: 45 | assert shell_runner.run_shell_test(sentinel.path, sentinel.evn) 46 | 47 | 48 | def test_run_shell_test_fail(shell_runner): 49 | with mock.patch.object(shell_runner.subprocess, "check_call") as mock_call: 50 | mock_call.side_effect = shell_runner.subprocess.CalledProcessError(0, [], "") 51 | assert shell_runner.run_shell_test(sentinel.path, sentinel.evn) is False 52 | 53 | 54 | def test_gather_tests_bad_file(shell_runner): 55 | assert shell_runner.gather_tests('/dev/null') is None 56 | 57 | 58 | def 
test_gather_tests_single_file_no_exec(shell_runner): 59 | ftmp = tempfile.NamedTemporaryFile(delete=False) 60 | os.chmod(ftmp.name, 0o600) 61 | assert shell_runner.gather_tests(ftmp.name) is None 62 | ftmp.close() 63 | os.remove(ftmp.name) 64 | 65 | 66 | def test_gather_tests_single_file_exec(shell_runner): 67 | ftmp = tempfile.NamedTemporaryFile(delete=False) 68 | os.chmod(ftmp.name, 0o700) 69 | assert shell_runner.gather_tests(ftmp.name) == ftmp.name 70 | ftmp.close() 71 | os.remove(ftmp.name) 72 | 73 | 74 | def test_gather_tests_dir_tree_exec_only(shell_runner): 75 | uuid1 = str(uuid.uuid4()) 76 | uuid2 = str(uuid.uuid4()) 77 | uuid3 = str(uuid.uuid4()) 78 | tdir = tempfile.mkdtemp() 79 | tdir2 = os.path.join(tdir, uuid1) 80 | tdir3 = os.path.join(tdir2, uuid3) 81 | os.mkdir(tdir2) 82 | os.mkdir(tdir3) 83 | tmp1 = os.path.join(tdir, uuid2) 84 | tmp2 = os.path.join(tdir3, uuid3) 85 | file(tmp1, 'w').write('pytest') 86 | file(tmp2, 'w').write('pytest') 87 | os.chmod(tmp1, 0o600) 88 | os.chmod(tmp2, 0o700) 89 | assert shell_runner.gather_tests(tdir) == [tmp2] 90 | os.remove(tmp1) 91 | os.remove(tmp2) 92 | os.rmdir(tdir3) 93 | os.rmdir(tdir2) 94 | os.rmdir(tdir) 95 | 96 | 97 | def test_runner_bad_path(shell_runner): 98 | with pytest.raises(shell_runner.BadRunnerError): 99 | shell_runner.runner('/dev/null', sentinel.ssh, sentinel.config, sentinel.vars, sentinel.timeout, continue_on_fail=False) 100 | 101 | 102 | def test_runner_empty_tests(shell_runner, ssh): 103 | tos = mock.MagicMock() 104 | tos.ip = '192.168.1.1' 105 | tos.os_key_private_file = '~/.ssh/config' 106 | vars = {} 107 | s = ssh.SSH('192.168.1.1', 'user', 'secret') 108 | with mock.patch.object(shell_runner, "gather_tests", return_value=[]): 109 | assert shell_runner.runner(sentinel.path, s, tos, vars, sentinel.timeout, continue_on_fail=False) is True 110 | del s 111 | 112 | 113 | def test_runner_all_ok(shell_runner, ssh): 114 | tos = mock.MagicMock() 115 | tos.ip = '192.168.1.1' 116 | tos.os_key_private_file = '~/.ssh/config' 117 | vars = {} 118 | with mock.patch.object(shell_runner, "gather_tests", return_value=["test1", "test2"]): 119 | with mock.patch.object(shell_runner, "run_shell_test", return_value=True): 120 | s = ssh.SSH('192.168.1.1', 'user', 'secret') 121 | assert shell_runner.runner(sentinel.path, s, tos, vars, sentinel.timeout, continue_on_fail=False) is True 122 | del s 123 | 124 | 125 | def test_runner_all_no_continue(shell_runner, ssh): 126 | tos = mock.MagicMock() 127 | tos.ip = '192.168.1.1' 128 | tos.os_key_private_file = '~/.ssh/config' 129 | vars = {} 130 | s = ssh.SSH('192.168.1.1', 'user', 'secret') 131 | with mock.patch.object(shell_runner, "gather_tests", return_value=["test1", "test2"]): 132 | with mock.patch.object(shell_runner, "run_shell_test", return_value=False) as mock_run: 133 | assert shell_runner.runner(sentinel.path, s, tos, vars, sentinel.timeout, continue_on_fail=False) is False 134 | assert mock_run.call_count == 1 135 | del s 136 | 137 | 138 | def test_runner_all_with_continue(shell_runner, ssh): 139 | tos = mock.MagicMock() 140 | tos.ip = '192.168.1.1' 141 | tos.os_key_private_file = '~/.ssh/config' 142 | vars = {} 143 | s = ssh.SSH('192.168.1.1', 'user', 'secret') 144 | with mock.patch.object(shell_runner, "gather_tests", return_value=["test1", "test2"]): 145 | with mock.patch.object(shell_runner, "run_shell_test", return_value=False) as mock_run: 146 | assert shell_runner.runner(sentinel.path, s, tos, vars, sentinel.timeout, continue_on_fail=True) is False 147 | assert mock_run.call_count
== 2 148 | del s 149 | 150 | 151 | if __name__ == "__main__": 152 | ourfilename = os.path.abspath(inspect.getfile(inspect.currentframe())) 153 | currentdir = os.path.dirname(ourfilename) 154 | parentdir = os.path.dirname(currentdir) 155 | file_to_test = os.path.join( 156 | parentdir, 157 | os.path.basename(parentdir), 158 | os.path.basename(ourfilename).replace("test_", '', 1) 159 | ) 160 | pytest.main([ 161 | "-vv", 162 | "--cov", file_to_test, 163 | "--cov-report", "term-missing" 164 | ] + sys.argv) 165 | -------------------------------------------------------------------------------- /tests/test_ssh.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import os 3 | import inspect 4 | import sys 5 | import pytest 6 | import mock 7 | from mock import sentinel 8 | import tempfile 9 | 10 | 11 | @pytest.fixture 12 | def ssh(): 13 | from dibctl import ssh 14 | return ssh 15 | 16 | 17 | @pytest.mark.parametrize('ip, port, user, output', [ 18 | ['192.168.1.1', 22, 'user', 'user@192.168.1.1'], 19 | ['192.168.1.1', 1222, 'user', 'user@192.168.1.1:1222'] 20 | ]) 21 | def test_user_host_and_port(ssh, ip, port, user, output): 22 | s = ssh.SSH(ip, user, None, port) 23 | assert s.user_host_and_port() == output 24 | 25 | 26 | def test_key_file_with_override(ssh): 27 | t = tempfile.NamedTemporaryFile() 28 | t.write('secret') 29 | t.flush() 30 | s = ssh.SSH(sentinel.ip, sentinel.user, None, sentinel.port, override_ssh_key_filename=t.name) 31 | assert s.key_file() == t.name 32 | del t 33 | 34 | 35 | def test_key_file(ssh): 36 | with mock.patch.object(ssh.tempfile, 'NamedTemporaryFile') as mock_tmp: 37 | mock_tmp.return_value.name = sentinel.filename 38 | s = ssh.SSH(sentinel.ip, sentinel.user, "secret", sentinel.port) 39 | assert s.key_file() == sentinel.filename 40 | assert mock_tmp.return_value.write.call_args == mock.call('secret') 41 | 42 | 43 | def test_key_file_name(ssh): 44 | s = ssh.SSH(sentinel.ip, sentinel.user, "secret", sentinel.port) 45 | f = s.key_file() 46 | assert 'dibctl_key' in f 47 | 48 | 49 | def test_key_file_content(ssh): 50 | s = ssh.SSH(sentinel.ip, sentinel.user, "secret", sentinel.port) 51 | f = s.key_file() 52 | assert open(f, 'r').read() == "secret" 53 | 54 | 55 | def test_key_file_remove_afterwards(ssh): 56 | s = ssh.SSH(sentinel.ip, sentinel.user, "secret", sentinel.port) 57 | f = s.key_file() 58 | del s 59 | with pytest.raises(IOError): 60 | open(f, 'r') 61 | 62 | 63 | def test_keep_key_file_with_override(ssh): 64 | t = tempfile.NamedTemporaryFile() 65 | t.write('secret') 66 | t.flush() 67 | s = ssh.SSH(sentinel.ip, sentinel.user, None, sentinel.port, override_ssh_key_filename=t.name) 68 | assert s.keep_key_file() == t.name 69 | del t 70 | 71 | 72 | def test_keep_key_file_name(ssh): 73 | s = ssh.SSH(sentinel.ip, sentinel.user, "secret", sentinel.port) 74 | f = s.keep_key_file() 75 | assert 'saved_dibctl_key_' in f 76 | os.remove(f) 77 | 78 | 79 | def test_keep_key_file_content(ssh): 80 | s = ssh.SSH(sentinel.ip, sentinel.user, "secret", sentinel.port) 81 | f = s.keep_key_file() 82 | assert 'secret' in open(f, 'r').read() 83 | os.remove(f) 84 | 85 | 86 | def test_keep_key_file_kept_after_removal(ssh): 87 | s = ssh.SSH(sentinel.ip, sentinel.user, "secret", sentinel.port) 88 | f = s.keep_key_file() 89 | del s 90 | assert 'secret' in open(f, 'r').read() 91 | os.remove(f) 92 | 93 | 94 | def test_key_file_keep_key_file(ssh): 95 | s = ssh.SSH(sentinel.ip, sentinel.user, "secret", sentinel.port) 96 | f1 = s.key_file() 97 | f2 
= s.keep_key_file() 98 | with pytest.raises(IOError): 99 | open(f1, 'r') 100 | assert 'secret' == open(f2, 'r').read() 101 | os.remove(f2) 102 | 103 | 104 | def test_command_line(ssh): 105 | s = ssh.SSH('192.168.0.1', 'user', 'secret') 106 | cmdline = " ".join(s.command_line()) 107 | assert 'user@192.168.0.1' in cmdline 108 | assert "-o StrictHostKeyChecking=no" in cmdline 109 | assert "-o UserKnownHostsFile=/dev/null" in cmdline 110 | assert "-o UpdateHostKeys=no" in cmdline 111 | assert "-o PasswordAuthentication=no" in cmdline 112 | assert "-i " in cmdline 113 | del s 114 | 115 | 116 | def test_connector(ssh): 117 | s = ssh.SSH('192.168.0.1', sentinel.user, sentinel.key) 118 | assert s.connector() == 'ssh://192.168.0.1' 119 | 120 | 121 | def test_config_name(ssh): 122 | s = ssh.SSH('192.168.0.1', 'user', 'secret') 123 | assert 'dibctl_config_' in s.config() 124 | del s 125 | 126 | 127 | def test_config_content(ssh): 128 | s = ssh.SSH('192.168.0.1', 'user', 'secret', port=99) 129 | cfg = s.config() 130 | with open(cfg, 'r') as c: 131 | data = c.read() 132 | assert "User user" in data 133 | assert "Host 192.168.0.1" in data 134 | assert "StrictHostKeyChecking no" in data 135 | assert "UserKnownHostsFile /dev/null" in data 136 | assert "UpdateHostKeys no" in data 137 | assert "PasswordAuthentication no" in data 138 | assert "Port 99" in data 139 | assert "IdentityFile" in data 140 | del s 141 | 142 | 143 | def test_config_afterwards(ssh): 144 | s = ssh.SSH('192.168.0.1', 'user', 'secret') 145 | cfg = s.config() 146 | del s 147 | with pytest.raises(IOError): 148 | open(cfg, 'r') 149 | 150 | 151 | def test_env_vars(ssh): 152 | s = ssh.SSH('192.168.0.1', 'user', 'secret') 153 | env = s.env_vars('TEST_') 154 | assert env['TEST_SSH_USERNAME'] == 'user' 155 | assert env['TEST_SSH_IP'] == '192.168.0.1' 156 | assert env['TEST_SSH_PORT'] == '22' 157 | assert open(env['TEST_SSH_PRIVATE_KEY'], 'r').read() == 'secret' 158 | assert "Host" in open(env['TEST_SSH_CONFIG'], 'r').read() 159 | assert "192.168.0.1" in env['TEST_SSH'] 160 | del s 161 | 162 | 163 | def test_shell_simple_run(ssh): 164 | with mock.patch.object(ssh.SSH, "COMMAND_NAME", "echo"): 165 | s = ssh.SSH('192.168.0.1', 'user', 'secret') 166 | rfd, wfd = os.pipe() 167 | w = os.fdopen(wfd, 'w', 0) 168 | with mock.patch.multiple(ssh.sys, stdout=w, stderr=w, stdin=None): 169 | s.shell({}, 'test message') 170 | output = os.read(rfd, 1000) 171 | assert 'echo' in output # should be ssh, but test demands 172 | assert 'user@192.168.0.1' in output 173 | 174 | 175 | @pytest.mark.parametrize('key, value', [ 176 | ['ip', '192.168.0.1'], 177 | ['username', 'user'], 178 | ['port', 22] 179 | ]) 180 | def test_info(ssh, key, value): 181 | s = ssh.SSH('192.168.0.1', 'user', 'secret') 182 | i = s.info() 183 | assert i[key] == value 184 | del s 185 | 186 | 187 | @pytest.mark.parametrize('key', [ 188 | 'config', 189 | 'private_key_file', 190 | 'config', 191 | 'command_line', 192 | 'connector' 193 | ]) 194 | def test_info_2(ssh, key): 195 | s = ssh.SSH('192.168.0.1', 'user', 'secret') 196 | i = s.info() 197 | assert key in i 198 | del s 199 | 200 | 201 | if __name__ == "__main__": 202 | ourfilename = os.path.abspath(inspect.getfile(inspect.currentframe())) 203 | currentdir = os.path.dirname(ourfilename) 204 | parentdir = os.path.dirname(currentdir) 205 | file_to_test = os.path.join( 206 | parentdir, 207 | os.path.basename(parentdir), 208 | os.path.basename(ourfilename).replace("test_", '') 209 | ) 210 | pytest.main([ 211 | "-vv", 212 | "--cov", file_to_test, 
213 | "--cov-report", "term-missing" 214 | ] + sys.argv) 215 | -------------------------------------------------------------------------------- /tests/test_timeout.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import os 3 | import inspect 4 | import sys 5 | import pytest 6 | import mock 7 | import time 8 | from mock import sentinel 9 | 10 | 11 | @pytest.fixture 12 | def timeout(): 13 | from dibctl import timeout 14 | return timeout 15 | 16 | 17 | def test_timeout_real_timeout(timeout): 18 | with pytest.raises(timeout.TimeoutError): 19 | with timeout.timeout(1): 20 | import time 21 | time.sleep(2) 22 | 23 | 24 | def test_timeout_after_exception(timeout): 25 | # based on real bug: alarm wasn't cleared after error handler 26 | with pytest.raises(ValueError): 27 | with timeout.timeout(1): 28 | raise ValueError 29 | time.sleep(2) 30 | 31 | 32 | def test_timeout_check_no_error(timeout): 33 | with timeout.timeout(1): 34 | pass 35 | 36 | 37 | def test_timeout_bad_signal(timeout): 38 | t = timeout.timeout(1) 39 | with pytest.raises(RuntimeError): 40 | t.raise_timeout("NOT A TIMEOUT", sentinel.frame) 41 | 42 | 43 | if __name__ == "__main__": 44 | ourfilename = os.path.abspath(inspect.getfile(inspect.currentframe())) 45 | currentdir = os.path.dirname(ourfilename) 46 | parentdir = os.path.dirname(currentdir) 47 | file_to_test = os.path.join( 48 | parentdir, 49 | os.path.basename(parentdir), 50 | os.path.basename(ourfilename).replace("test_", '', 1) 51 | ) 52 | pytest.main([ 53 | "-vv", 54 | "--cov", file_to_test, 55 | "--cov-report", "term-missing" 56 | ] + sys.argv) 57 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | [flake8] 2 | max-line-length=120 3 | -------------------------------------------------------------------------------- /workflow.md: -------------------------------------------------------------------------------- 1 | All changes go to the master branch (directly or merged from feature-branches). 2 | Packaging-related things are stored in 'debian' branch. 3 | Versions are pinned by tags. Tag format: X.Y.Z, semantic versioning. 4 | 5 | --------------------------------------------------------------------------------