├── .gitignore ├── .travis.yml ├── DESCRIPTION.rst ├── README.md ├── azure_flocker_driver ├── __init__.py ├── azure_storage_driver.py ├── azure_utils │ ├── __init__.py │ ├── arm_disk_manager.py │ ├── run_tests.sh │ ├── test_disk_manager.py │ └── vhd.py ├── lun.py ├── package.json └── test_azure_driver.py ├── example.azure_agent.yml ├── requirements.txt ├── setup.py └── tox.ini /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | # secrets file 6 | .config.yml 7 | # Mac OS X 8 | .DS_STORE 9 | 10 | # Temp Flocker 11 | azure_flocker_driver/flocker 12 | 13 | # Twisted Test Trial Temp folder 14 | _trial_temp 15 | 16 | # C extensions 17 | *.so 18 | 19 | # Secret Config Settings 20 | azure_storage_config.yml 21 | 22 | # Certs/Keys 23 | *.pem 24 | 25 | # Distribution / packaging 26 | .Python 27 | env/ 28 | build/ 29 | develop-eggs/ 30 | dist/ 31 | downloads/ 32 | eggs/ 33 | .eggs/ 34 | lib/ 35 | lib64/ 36 | parts/ 37 | sdist/ 38 | var/ 39 | *.egg-info/ 40 | .installed.cfg 41 | *.egg 42 | 43 | # PyInstaller 44 | # Usually these files are written by a python script from a template 45 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 46 | *.manifest 47 | *.spec 48 | 49 | # Installer logs 50 | pip-log.txt 51 | pip-delete-this-directory.txt 52 | 53 | # Unit test / coverage reports 54 | htmlcov/ 55 | .tox/ 56 | .coverage 57 | .coverage.* 58 | .cache 59 | nosetests.xml 60 | coverage.xml 61 | *,cover 62 | 63 | # Translations 64 | *.mo 65 | *.pot 66 | 67 | # Django stuff: 68 | *.log 69 | 70 | # Sphinx documentation 71 | docs/_build/ 72 | 73 | # PyBuilder 74 | target/ 75 | 76 | *.swp 77 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: python 2 | python: 3 | - "2.7" 4 | # command to install dependencies 5 | install: "sudo pip install -r requirements.txt" 6 | # command to run tests 7 | script: "sudo -H tox --develop && cat /home/travis/build/sedouard/azure-flocker-driver/.tox/lint/log/lint-1.log" 8 | -------------------------------------------------------------------------------- /DESCRIPTION.rst: -------------------------------------------------------------------------------- 1 | Azure Plugin/Driver for ClusterHQ/flocker 2 | ====================== 3 | 4 | This is a plugin driver for the [Flocker](https://clusterhq.com/) project which delivers Fast, local, persistent storage for Docker containers, Multi-host container management, Database migrations, and Flexible shared storage (SAN, NAS or block) for Docker when you want it 5 | 6 | ## Description 7 | Flocker can help orchestrate and provision storage to your clustered docker container microservices applications. Use cases include --> 8 | - Seamlessly Running Stateful Microservices 9 | - Run Databases in Containers 10 | - MongoDB, Cassandra, Postgres, MySQL, and more! 11 | - Generally orchestrate and schedule your container applications across a cluster that optionally provides flexible shared storage when you need it. 12 | - Use it with [Docker Native Extensions](https://github.com/ClusterHQ/flocker-docker-plugin) 13 | 14 | ## Authors 15 | This work is based off of the initial work of [Steve Edouard](https://github.com/sedouard). It is currently being maintained by [Jim Spring](https://github.com/jmspring). 
16 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Azure Driver for ClusterHQ/Flocker 2 | 3 | 4 | [![Build Status](https://travis-ci.org/CatalystCode/azure-flocker-driver.svg?branch=master)](https://travis-ci.org/CatalystCode/azure-flocker-driver) 5 | [![Code Climate](https://codeclimate.com/github/CatalystCode/azure-flocker-driver/badges/gpa.svg)](https://codeclimate.com/github/CatalystCode/azure-flocker-driver) 6 | 7 | *Tested against Flocker 1.14.0* 8 | 9 | This block storage driver for [Flocker](https://clusterhq.com/) enables the use of data disks with Azure VMs. 10 | 11 | ## Overview 12 | Flocker is an open-source Container Data Volume Manager for your Dockerized applications. 13 | 14 | Typical Docker data volumes are tied to a single server. With Flocker datasets, the data volume can move with a container between different hosts in your cluster. This flexibility allows stateful container services to access data no matter where the container is placed. 15 | 16 | ## Prerequisites 17 | 18 | The following components are required before using the Azure Driver for Flocker: 19 | 20 | * A working Flocker installation on Azure 21 | * Azure VMs with at least 4 data disk slots. 22 | 23 | **Flocker** 24 | 25 | You must first have Flocker installed on your node. Instructions on getting started with Flocker can be found on the [Flocker](https://clusterhq.com/flocker/getting-started) web site. 26 | 27 | 28 | ## Installation 29 | 30 | **Download Driver** 31 | 32 | Download the Azure driver to the node on which you want to use Azure storage. This process will need to be performed for each node in your cluster. 33 | 34 | ```bash 35 | git clone https://github.com/CatalystCode/azure-flocker-driver 36 | cd azure-flocker-driver 37 | sudo /opt/flocker/bin/pip install . 38 | ``` 39 | 40 | **_NOTE:_** Make sure to use the pip from the Python environment installed with Flocker (`/opt/flocker/bin/pip`), or the driver will not be installed correctly. 41 | 42 | **Configure Flocker** 43 | 44 | After the Azure Flocker driver is installed on the local node, Flocker must be configured to use that driver. 45 | 46 | Configuration is set in Flocker's agent.yml file. Copy the example agent file installed with the driver to get started: 47 | 48 | ```bash 49 | sudo cp /etc/flocker/example.azure_agent.yml /etc/flocker/agent.yml 50 | sudo vi /etc/flocker/agent.yml 51 | ``` 52 | 53 | Edit the agent.yml file to include the required Azure configuration settings. Sample placeholder information is included in the example file. 54 | 55 | The configuration values are described below: 56 | 57 | ```yaml 58 | version: 1 59 | control-service: 60 | hostname: "" 61 | port: 4524 62 | 63 | dataset: 64 | backend: "azure_flocker_driver" 65 | client_id: "" 66 | tenant_id: "" 67 | client_secret: "" 68 | subscription_id: "" 69 | storage_account_name: "" 70 | storage_account_key: "" 71 | storage_account_container: "" 72 | group_name: "" 73 | location: "" 74 | async_timeout: 100000 75 | debug: "false" 76 | ``` 77 | 78 | **_NOTE:_** The agent configuration should match between all nodes of the cluster. 79 | 80 | 81 | **Test Configuration** 82 | 83 | To validate agent settings and make sure everything will work as expected, you may run the following tests from the downloaded driver directory.
84 | 85 | ```bash 86 | cd azure-flocker-driver 87 | export FLOCKER_CONFIG="/etc/flocker/agent.yml" 88 | sudo trial test_azure_driver.py 89 | ``` 90 | 91 | Several tests will be run to verify the functionality of the driver. Test action logging is written to the file driver.log in the local directory. 92 | 93 | ## Getting Help 94 | For general Flocker issues, you can either contact [Flocker](http://docs.clusterhq.com/en/latest/gettinginvolved/contributing.html#talk-to-us) or file a [GitHub Issue](https://github.com/clusterhq/flocker/issues). 95 | 96 | You can also connect with ClusterHQ help on [IRC](https://webchat.freenode.net/) in the \#clusterhq channel. 97 | 98 | For specific issues with the Azure Driver for Flocker, file a [GitHub Issue](https://github.com/CatalystCode/azure-flocker-driver/issues). 99 | 100 | If you have any suggestions for improvements, please feel free to fork the repository, make your changes, and submit a pull request to have them considered for merging. Community collaboration is welcome! 101 | 102 | **As a community project, no warranties are provided for the use of this code.** 103 | 104 | ## License 105 | Licensed under the Apache License, Version 2.0 (the "License"); 106 | you may not use this file except in compliance with the License. 107 | You may obtain a copy of the License at 108 | 109 | http://www.apache.org/licenses/LICENSE-2.0 110 | 111 | Unless required by applicable law or agreed to in writing, software 112 | distributed under the License is distributed on an "AS IS" BASIS, 113 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 114 | See the License for the specific language governing permissions and 115 | limitations under the License. 116 | -------------------------------------------------------------------------------- /azure_flocker_driver/__init__.py: -------------------------------------------------------------------------------- 1 | from flocker.node import BackendDescription, DeployerType 2 | from .azure_storage_driver import ( 3 | azure_driver_from_configuration 4 | ) 5 | 6 | 7 | def api_factory(**kwargs): 8 | 9 | return azure_driver_from_configuration( 10 | client_id=kwargs['client_id'], 11 | client_secret=kwargs['client_secret'], 12 | tenant_id=kwargs['tenant_id'], 13 | subscription_id=kwargs['subscription_id'], 14 | storage_account_name=kwargs['storage_account_name'], 15 | storage_account_key=kwargs['storage_account_key'], 16 | storage_account_container=kwargs['storage_account_container'], 17 | group_name=kwargs['group_name'], 18 | location=kwargs['location'], 19 | debug=kwargs['debug']) 20 | 21 | FLOCKER_BACKEND = BackendDescription( 22 | name=u"azure_flocker_driver", 23 | needs_reactor=True, needs_cluster_id=False, 24 | api_factory=api_factory, deployer_type=DeployerType.block) 25 | -------------------------------------------------------------------------------- /azure_flocker_driver/azure_storage_driver.py: -------------------------------------------------------------------------------- 1 | from uuid import UUID 2 | import socket 3 | from bitmath import Byte, GiB 4 | from zope.interface import implementer 5 | import eliot 6 | import threading 7 | 8 | from azure.storage.blob import PageBlobService 9 | from azure.common.credentials import ServicePrincipalCredentials 10 | from azure.mgmt.resource.resources import ResourceManagementClient 11 | from azure.mgmt.compute import ComputeManagementClient 12 | from azure_utils.arm_disk_manager import DiskManager 13 | from lun import Lun 14 | 15 | from
flocker.node.agents.blockdevice import IBlockDeviceAPI, \ 16 | BlockDeviceVolume, UnknownVolume, UnattachedVolume, \ 17 | AlreadyAttachedVolume 18 | 19 | _logger = eliot.Logger() 20 | 21 | _vmstate_lock = threading.Lock() 22 | 23 | 24 | # Logging Helpers 25 | def log_info(message): 26 | eliot.Message.new(info=message).write(_logger) 27 | 28 | 29 | def log_error(message): 30 | eliot.Message.new(error=message).write(_logger) 31 | 32 | 33 | class UnsupportedVolumeSize(Exception): 34 | """ 35 | The requested volume size is not supported. 36 | Sizes must be a whole number of GiB. 37 | :param UUID dataset_id: The volume dataset_id 38 | """ 39 | 40 | def __init__(self, dataset_id): 41 | if not isinstance(dataset_id, UUID): 42 | raise TypeError( 43 | 'Unexpected dataset_id type. ' 44 | + 'Expected UUID. Got {!r}.'.format( 45 | dataset_id)) 46 | Exception.__init__(self, dataset_id) 47 | self.dataset_id = dataset_id 48 | 49 | 50 | class AsynchronousTimeout(Exception): 51 | 52 | def __init__(self): 53 | pass 54 | 55 | 56 | @implementer(IBlockDeviceAPI) 57 | class AzureStorageBlockDeviceAPI(object): 58 | """ 59 | An ``IBlockDeviceAPI`` backed by Azure page blob data disks, 60 | managed through the Azure Resource Manager APIs. 61 | """ 62 | 63 | def __init__(self, **azure_config): 64 | """ 65 | :param dict azure_config: The ``dataset`` section of the agent 66 | configuration, containing the service principal credentials 67 | (client_id, client_secret, tenant_id), the subscription_id, 68 | the storage account name, key and container, and the 69 | resource group name and location used for the cluster's 70 | data disks. 71 | """ 72 | self._instance_id = self.compute_instance_id() 73 | creds = ServicePrincipalCredentials( 74 | client_id=azure_config['client_id'], 75 | secret=azure_config['client_secret'], 76 | tenant=azure_config['tenant_id']) 77 | self._resource_client = ResourceManagementClient( 78 | creds, 79 | azure_config['subscription_id']) 80 | self._compute_client = ComputeManagementClient( 81 | creds, 82 | azure_config['subscription_id']) 83 | self._azure_storage_client = PageBlobService( 84 | account_name=azure_config['storage_account_name'], 85 | account_key=azure_config['storage_account_key']) 86 | self._manager = DiskManager(self._resource_client, 87 | self._compute_client, 88 | self._azure_storage_client, 89 | azure_config['storage_account_container'], 90 | azure_config['group_name'], 91 | azure_config['location']) 92 | self._storage_account_name = azure_config['storage_account_name'] 93 | self._disk_container_name = azure_config['storage_account_container'] 94 | self._resource_group = azure_config['group_name'] 95 | 96 | def allocation_unit(self): 97 | """ 98 | 1GiB is the minimum allocation unit for azure disks 99 | return int: 1 GiB 100 | """ 101 | 102 | return int(GiB(1).to_Byte().value) 103 | 104 | def compute_instance_id(self): 105 | """ 106 | Return this node's host name as its unique compute instance id. 107 | """ 108 | 109 | # Node host names should be unique within a vnet 110 | 111 | return unicode(socket.gethostname()) 112 | 113 | def create_volume(self, dataset_id, size): 114 | """ 115 | Create a new volume. 116 | :param UUID dataset_id: The Flocker dataset ID of the dataset on this 117 | volume. 118 | :param int size: The size of the new volume in bytes. 119 | :returns: A ``BlockDeviceVolume`` describing 120 | the volume that has been created.
121 | """ 122 | size_in_gb = Byte(size).to_GiB().value 123 | 124 | if size_in_gb % 1 != 0: 125 | raise UnsupportedVolumeSize(dataset_id) 126 | 127 | disk_label = self._disk_label_for_dataset_id(dataset_id) 128 | self._manager.create_disk(disk_label, size_in_gb) 129 | 130 | return BlockDeviceVolume( 131 | blockdevice_id=unicode(disk_label), 132 | size=size, 133 | attached_to=None, 134 | dataset_id=dataset_id) 135 | 136 | def destroy_volume(self, blockdevice_id): 137 | """ 138 | Destroy an existing volume. 139 | :param unicode blockdevice_id: The unique identifier for the volume to 140 | destroy. 141 | :raises UnknownVolume: If the supplied ``blockdevice_id`` does not 142 | exist. 143 | :return: ``None`` 144 | """ 145 | log_info('Destorying block device: ' + str(blockdevice_id)) 146 | disks = self._manager.list_disks() 147 | target_disk = None 148 | for disk in disks: 149 | if disk.name == blockdevice_id: 150 | target_disk = disk 151 | break 152 | 153 | if target_disk is None: 154 | raise UnknownVolume(blockdevice_id) 155 | 156 | self._manager.destroy_disk(target_disk.name) 157 | 158 | def attach_volume(self, blockdevice_id, attach_to): 159 | """ 160 | Attach ``blockdevice_id`` to ``host``. 161 | :param unicode blockdevice_id: The unique identifier for the block 162 | device being attached. 163 | :param unicode attach_to: An identifier like the one returned by the 164 | ``compute_instance_id`` method indicating the node to which to 165 | attach the volume. 166 | :raises UnknownVolume: If the supplied ``blockdevice_id`` does not 167 | exist. 168 | :raises AlreadyAttachedVolume: If the supplied ``blockdevice_id`` is 169 | already attached. 170 | :returns: A ``BlockDeviceVolume`` with a ``host`` attribute set to 171 | ``host``. 172 | """ 173 | 174 | log_info('Attempting to attach ' + str(blockdevice_id) 175 | + ' to ' + str(attach_to)) 176 | 177 | _vmstate_lock.acquire() 178 | try: 179 | # Make sure disk is present. Also, need the disk size is needed. 180 | disks = self._manager.list_disks() 181 | target_disk = None 182 | for disk in disks: 183 | if disk.name == blockdevice_id: 184 | target_disk = disk 185 | break 186 | if target_disk is None: 187 | raise UnknownVolume(blockdevice_id) 188 | 189 | (disk, vmname, lun) = self._get_disk_vmname_lun(blockdevice_id) 190 | if vmname is not None: 191 | raise AlreadyAttachedVolume(blockdevice_id) 192 | 193 | self._manager.attach_disk( 194 | str(attach_to), 195 | target_disk.name, 196 | int(GiB(bytes=target_disk.properties.content_length))) 197 | finally: 198 | _vmstate_lock.release() 199 | 200 | log_info('disk attached') 201 | 202 | return self._blockdevicevolume_from_azure_volume( 203 | blockdevice_id, 204 | target_disk.properties.content_length, 205 | attach_to) 206 | 207 | def detach_volume(self, blockdevice_id): 208 | """ 209 | Detach ``blockdevice_id`` from whatever host it is attached to. 210 | :param unicode blockdevice_id: The unique identifier for the block 211 | device being detached. 212 | :raises UnknownVolume: If the supplied ``blockdevice_id`` does not 213 | exist. 214 | :raises UnattachedVolume: If the supplied ``blockdevice_id`` is 215 | not attached to anything. 
216 | :returns: ``None`` 217 | """ 218 | 219 | _vmstate_lock.acquire() 220 | try: 221 | (target_disk, vm_name, lun) = \ 222 | self._get_disk_vmname_lun(blockdevice_id) 223 | 224 | if target_disk is None: 225 | raise UnknownVolume(blockdevice_id) 226 | 227 | if lun is None: 228 | raise UnattachedVolume(blockdevice_id) 229 | 230 | self._manager.detach_disk(vm_name, target_disk) 231 | finally: 232 | _vmstate_lock.release() 233 | 234 | def get_device_path(self, blockdevice_id): 235 | """ 236 | Return the device path that has been allocated to the block device on 237 | the host to which it is currently attached. 238 | :param unicode blockdevice_id: The unique identifier for the block 239 | device. 240 | :raises UnknownVolume: If the supplied ``blockdevice_id`` does not 241 | exist. 242 | :raises UnattachedVolume: If the supplied ``blockdevice_id`` is 243 | not attached to a host. 244 | :returns: A ``FilePath`` for the device. 245 | """ 246 | 247 | (target_disk, vm_name, lun) = \ 248 | self._get_disk_vmname_lun(blockdevice_id) 249 | 250 | if target_disk is None: 251 | raise UnknownVolume(blockdevice_id) 252 | 253 | if lun is None: 254 | raise UnattachedVolume(blockdevice_id) 255 | 256 | return Lun.get_device_path_for_lun(lun) 257 | 258 | def _get_details_for_disks(self, disks_in): 259 | """ 260 | Give a list of disks, returns a ''list'' of ''BlockDeviceVolume''s 261 | """ 262 | disk_info = [] 263 | disks = dict((d.name, d) for d in disks_in) 264 | 265 | # first handle disks attached to vms 266 | vms_info = {} 267 | vms = self._compute_client.virtual_machines.list(self._resource_group) 268 | for vm in vms: 269 | vm_info = self._compute_client.virtual_machines.get( 270 | self._resource_group, 271 | vm.name, 272 | expand="instanceView") 273 | vms_info[vm.name] = vm_info 274 | for data_disk in vm.storage_profile.data_disks: 275 | if 'flocker-' in data_disk.name: 276 | disk_name = data_disk.name.replace('.vhd', '') 277 | if disk_name in disks: 278 | disk_info.append( 279 | self._blockdevicevolume_from_azure_volume( 280 | disk_name, 281 | self._gibytes_to_bytes(data_disk.disk_size_gb), 282 | vm.name)) 283 | del disks[disk_name] 284 | else: 285 | # We have a data disk mounted that isn't in the known 286 | # list of blobs. 287 | log_info( 288 | "Disk attached, but not known in container: " + 289 | disk_name) 290 | for vm_info in vms_info: 291 | vm = vms_info[vm_info] 292 | if vm.instance_view is not None: 293 | for disk in vm.instance_view.disks: 294 | if 'flocker-' in disk.name: 295 | disk_name = disk.name.replace('.vhd', '') 296 | if disk_name in disks: 297 | disk_size = disks[disk_name].properties.\ 298 | content_length 299 | disk_size = disk_size - (disk_size % 300 | (1024 * 1024 * 1024)) 301 | disk_info.append( 302 | self._blockdevicevolume_from_azure_volume( 303 | disk_name, 304 | disk_size, 305 | vm.name)) 306 | del disks[disk_name] 307 | 308 | # each remaining disk should be added as not attached 309 | for disk in disks: 310 | if 'flocker-' in disk: 311 | disk_info.append(self._blockdevicevolume_from_azure_volume( 312 | disk.replace('.vhd', ''), 313 | disks[disk].properties.content_length, 314 | None)) 315 | 316 | return disk_info 317 | 318 | def list_volumes(self): 319 | """ 320 | List all the block devices available via the back end API. 321 | :returns: A ``list`` of ``BlockDeviceVolume``s. 
322 | """ 323 | disks = self._manager.list_disks() 324 | disk_list = self._get_details_for_disks(disks) 325 | return disk_list 326 | 327 | def _disk_label_for_dataset_id(self, dataset_id): 328 | """ 329 | Returns a disk label for a given Dataset ID 330 | :param unicode dataset_id: The identifier of the dataset 331 | :returns string: A string representing the disk label 332 | """ 333 | label = 'flocker-' + str(dataset_id) 334 | return label 335 | 336 | def _dataset_id_for_disk_label(self, disk_label): 337 | """ 338 | Returns a UUID representing the Dataset ID for the given disk 339 | label 340 | :param string disk_label: The disk label 341 | :returns UUID: The UUID of the dataset 342 | """ 343 | return UUID(disk_label.replace('flocker-', '')) 344 | 345 | def _get_disk_vmname_lun(self, blockdevice_id): 346 | target_disk = None 347 | target_lun = None 348 | vm_name = None 349 | 350 | disk_list = self._manager.list_disks() 351 | for disk in disk_list: 352 | if 'flocker-' not in disk.name: 353 | continue 354 | if disk.name == blockdevice_id: 355 | target_disk = disk 356 | break 357 | if target_disk is None: 358 | return (None, None, None) 359 | 360 | vm_info = None 361 | vm_disk_info = None 362 | vms = self._compute_client.virtual_machines.list(self._resource_group) 363 | for vm in vms: 364 | for disk in vm.storage_profile.data_disks: 365 | if disk.name == target_disk.name: 366 | vm_disk_info = disk 367 | vm_info = vm 368 | break 369 | if vm_disk_info is not None: 370 | break 371 | if vm_info is not None: 372 | vm_name = vm_info.name 373 | target_lun = vm_disk_info.lun 374 | 375 | return (target_disk.name, vm_name, target_lun) 376 | 377 | def _gibytes_to_bytes(self, size): 378 | 379 | return int(GiB(size).to_Byte().value) 380 | 381 | def _blockdevicevolume_from_azure_volume(self, label, size, 382 | attached_to_name): 383 | 384 | # azure will report the disk size including the 512 byte footer 385 | # however flocker expects the exact value it requested for disk size 386 | # so remove the offset when reportint the size to flocker by 512 bytes 387 | if (size % self.allocation_unit()) == 512: 388 | size = size - 512 389 | 390 | return BlockDeviceVolume( 391 | blockdevice_id=unicode(label), 392 | size=int(size), 393 | attached_to=attached_to_name, 394 | dataset_id=self._dataset_id_for_disk_label(label) 395 | ) # disk labels are formatted as flocker- 396 | 397 | 398 | def azure_driver_from_configuration(client_id, 399 | client_secret, 400 | tenant_id, 401 | subscription_id, 402 | storage_account_name, 403 | storage_account_key, 404 | storage_account_container, 405 | group_name, 406 | location, 407 | debug): 408 | """ 409 | Returns Flocker Azure BlockDeviceAPI from plugin config yml. 
410 | :param dictonary config: The Dictonary representing 411 | the data from the configuration yaml 412 | """ 413 | return AzureStorageBlockDeviceAPI( 414 | client_id=client_id, 415 | client_secret=client_secret, 416 | tenant_id=tenant_id, 417 | subscription_id=subscription_id, 418 | storage_account_name=storage_account_name, 419 | storage_account_key=storage_account_key, 420 | storage_account_container=storage_account_container, 421 | group_name=group_name, 422 | location=location, 423 | debug=debug) 424 | -------------------------------------------------------------------------------- /azure_flocker_driver/azure_utils/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CatalystCode/azure-flocker-driver/1a834925b8455524976e519f5f35c4acc619e927/azure_flocker_driver/azure_utils/__init__.py -------------------------------------------------------------------------------- /azure_flocker_driver/azure_utils/arm_disk_manager.py: -------------------------------------------------------------------------------- 1 | from azure.mgmt.compute.models import DataDisk 2 | from azure.mgmt.compute.models import VirtualHardDisk 3 | from bitmath import GiB 4 | from vhd import Vhd 5 | import uuid 6 | import time 7 | 8 | 9 | class AzureAsynchronousTimeout(Exception): 10 | 11 | def __init__(self): 12 | pass 13 | 14 | 15 | class AzureInsufficientLuns(Exception): 16 | 17 | def __init__(self): 18 | pass 19 | 20 | 21 | class AzureElementNotFound(Exception): 22 | 23 | def __init__(self): 24 | pass 25 | 26 | 27 | class AzureVMSizeNotSupported(Exception): 28 | 29 | def __init__(self): 30 | pass 31 | 32 | 33 | class AzureOperationNotAllowed(Exception): 34 | 35 | def __init__(self): 36 | pass 37 | 38 | 39 | class DiskManager(object): 40 | 41 | # Resource provider constants 42 | STORAGE_RESOURCE_PROVIDER_NAME = "Microsoft.Storage" 43 | STORAGE_RESORUCE_PROVIDER_VERSION = "2016-01-01" 44 | LUN0_RESERVED_VHD_NAME_SUFFIX = "lun0_reserved" 45 | 46 | def __init__(self, 47 | resource_client, 48 | compute_client, 49 | storage_client, 50 | disk_container_name, 51 | group_name, 52 | location, 53 | async_timeout=600): 54 | self._resource_client = resource_client 55 | self._compute_client = compute_client 56 | self._resource_group = group_name 57 | self._location = location 58 | self._storage_client = storage_client 59 | self._disk_container = disk_container_name 60 | self._async_timeout = async_timeout 61 | 62 | # ensure the container exists. 
63 | self._storage_client.create_container(disk_container_name) 64 | 65 | def _str_array_to_lower(self, str_arry): 66 | array = [] 67 | for s in str_arry: 68 | array.append(s.lower().replace(' ', '')) 69 | return array 70 | 71 | def _get_max_luns_for_vm_size(self, vm_size): 72 | max_luns = 0 73 | vmSizes = self._compute_client.virtual_machine_sizes.list( 74 | self._location) 75 | for vm in vmSizes: 76 | if vm.name == vm_size: 77 | max_luns = vm.max_data_disk_count 78 | break 79 | return max_luns 80 | 81 | def _is_lun_0_empty(self, diskInfo): 82 | lun0Empty = True 83 | for disk in diskInfo: 84 | if disk.lun == 0: 85 | lun0Empty = False 86 | break 87 | return lun0Empty 88 | 89 | def _compute_next_lun(self, total_luns, data_disks): 90 | nextLun = -1 91 | usedLuns = [] 92 | for i in range(0, len(data_disks)): 93 | usedLuns.append(data_disks[i].lun) 94 | for lun in range(1, total_luns): 95 | if lun not in usedLuns: 96 | nextLun = lun 97 | break 98 | if nextLun == -1: 99 | raise AzureInsufficientLuns() 100 | return nextLun 101 | 102 | def _attach_disk(self, vm_name, vhd_name, vhd_size_in_gibs, lun): 103 | self._attach_or_detach_disk(vm_name, vhd_name, vhd_size_in_gibs, lun) 104 | 105 | timeout_count = 0 106 | while self.is_disk_attached(vm_name, vhd_name) is False: 107 | time.sleep(1) 108 | timeout_count += 1 109 | if timeout_count > self._async_timeout: 110 | raise AzureAsynchronousTimeout() 111 | 112 | def attach_disk(self, vm_name, vhd_name, vhd_size_in_gibs): 113 | # get VM information 114 | vm = self.get_vm(vm_name) 115 | vm_size = vm.hardware_profile.vm_size 116 | vm_luns = self._get_max_luns_for_vm_size(vm_size) 117 | 118 | # first check and see if we need to add a special place holder 119 | # on lun-0 120 | if self._is_lun_0_empty(vm.storage_profile.data_disks): 121 | lun0_disk_name = vm_name + "-" + self.LUN0_RESERVED_VHD_NAME_SUFFIX 122 | print("Need to attach reserved disk named '%s' to lun 0" % 123 | lun0_disk_name) 124 | self.create_disk(lun0_disk_name, 1) 125 | self._attach_disk(vm_name, lun0_disk_name, 1, 0) 126 | vm = self.get_vm(vm_name) 127 | 128 | lun = self._compute_next_lun(vm_luns, vm.storage_profile.data_disks) 129 | self._attach_disk(vm_name, vhd_name, vhd_size_in_gibs, lun) 130 | return 131 | 132 | def detach_disk(self, vm_name, vhd_name, allow_lun0_detach=False): 133 | self._attach_or_detach_disk(vm_name, vhd_name, 0, 134 | 0, True, allow_lun0_detach) 135 | timeout_count = 0 136 | while self.is_disk_attached(vm_name, vhd_name) is True: 137 | time.sleep(1) 138 | timeout_count += 1 139 | 140 | if timeout_count > self._async_timeout: 141 | raise AzureAsynchronousTimeout() 142 | 143 | return 144 | 145 | def list_disks(self): 146 | # will list a max of 5000 blobs, but there really shouldn't 147 | # be that many 148 | disks = self._storage_client.list_blobs(self._disk_container) 149 | return_disks = [] 150 | for disk in disks: 151 | disk.name = disk.name.replace('.vhd', '') 152 | return_disks.append(disk) 153 | return return_disks 154 | 155 | def destroy_disk(self, disk_name): 156 | self._storage_client.delete_blob(self._disk_container, 157 | disk_name + '.vhd') 158 | return 159 | 160 | def create_disk(self, disk_name, size_in_gibs): 161 | size_in_bytes = int(GiB(size_in_gibs).to_Byte().value) 162 | link = Vhd.create_blank_vhd(self._storage_client, 163 | self._disk_container, 164 | disk_name + '.vhd', 165 | size_in_bytes) 166 | return link 167 | 168 | def is_disk_attached(self, vm_name, disk_name): 169 | disks = self.list_attached_disks(vm_name) 170 | return disk_name in 
[d.name for d in disks] 171 | 172 | def _is_disk_successfully_attached(self, vm_name, disk_name): 173 | vm = self.get_vm(vm_name=vm_name, expand="instanceView") 174 | 175 | for disk_instance in vm.instance_view.disks: 176 | if disk_instance.name == disk_name: 177 | return disk_instance.statuses[0].code == \ 178 | "ProvisioningState/succeeded" 179 | 180 | return False 181 | 182 | def list_attached_disks(self, vm_name): 183 | vm = self.get_vm(vm_name=vm_name, expand="instanceView") 184 | 185 | disks_in_model = vm.storage_profile.data_disks 186 | disk_names = [d.name for d in disks_in_model] 187 | 188 | # If there's a disk in the instance view which is not in the model, 189 | # that disk is stuck. Add it to our list since we need to know 190 | # about stuck disks. 191 | for disk_instance in vm.instance_view.disks: 192 | if disk_instance.name not in disk_names: 193 | disk = DataDisk(lun=-1, 194 | name=disk_instance.name) 195 | disks_in_model.append(disk) 196 | 197 | return disks_in_model 198 | 199 | def get_vm(self, vm_name, expand=None): 200 | return self._compute_client.virtual_machines.get( 201 | resource_group_name=self._resource_group, 202 | vm_name=vm_name, 203 | expand=expand) 204 | 205 | def _update_vm(self, vm_name, vm): 206 | # To ensure the VM update will be an async update even if the 207 | # VM did not change, we force a change to the VM by setting a 208 | # tag with a new UUID in every PUT request 209 | if vm.tags is not None: 210 | vm.tags['updateId'] = str(uuid.uuid4()) 211 | else: 212 | vm.tags = {'updateId': str(uuid.uuid4())} 213 | 214 | return self._compute_client.virtual_machines.create_or_update( 215 | self._resource_group, 216 | vm_name, 217 | vm) 218 | 219 | def _attach_or_detach_disk(self, 220 | vm_name, 221 | vhd_name, 222 | vhd_size_in_gibs, 223 | lun, 224 | detach=False, 225 | allow_lun_0_detach=False, 226 | is_from_retry=False): 227 | vmcompute = self.get_vm(vm_name) 228 | 229 | if (not detach): 230 | vhd_url = self._storage_client.make_blob_url(self._disk_container, 231 | vhd_name + ".vhd") 232 | print("Attach disk name %s lun %s uri %s" % 233 | (vhd_name, lun, vhd_url)) 234 | disk = DataDisk(lun=lun, 235 | name=vhd_name, 236 | vhd=VirtualHardDisk(vhd_url), 237 | caching="None", 238 | create_option="attach", 239 | disk_size_gb=vhd_size_in_gibs) 240 | vmcompute.storage_profile.data_disks.append(disk) 241 | else: 242 | for i in range(len(vmcompute.storage_profile.data_disks)): 243 | disk = vmcompute.storage_profile.data_disks[i] 244 | if disk.name == vhd_name: 245 | if disk.lun == 0 and not allow_lun_0_detach: 246 | # lun-0 is special, throw an exception if attempting 247 | # to detach that disk.
248 | raise AzureOperationNotAllowed() 249 | 250 | print("Detach disk name %s lun %s uri %s" % 251 | (disk.name, disk.lun, disk.vhd.uri)) 252 | del vmcompute.storage_profile.data_disks[i] 253 | break 254 | 255 | result = self._update_vm(vm_name, vmcompute) 256 | start = time.time() 257 | while True: 258 | time.sleep(2) 259 | waited_sec = int(abs(time.time() - start)) 260 | if waited_sec > self._async_timeout: 261 | raise AzureAsynchronousTimeout() 262 | 263 | if not result.done(): 264 | continue 265 | 266 | updated = self.get_vm(vm_name) 267 | 268 | print("Waited for %s s provisioningState is %s" % 269 | (waited_sec, updated.provisioning_state)) 270 | 271 | if updated.provisioning_state == "Succeeded": 272 | print("Operation finshed") 273 | break 274 | 275 | if updated.provisioning_state == "Failed": 276 | print("Provisioning ended up in failed state.") 277 | 278 | # Recovery from failed disk atatch-detach operation. 279 | # For Attach Disk: Detach The Disk then Try To Attach Again 280 | # For Detach Disk: Call update again, which always sets a tag, 281 | # which forces the service to retry. 282 | 283 | # is_from_retry is checked so we are not stuck in a loop 284 | # calling ourself 285 | if not is_from_retry and not detach: 286 | print("Detach disk %s after failure, then try attach again" 287 | % vhd_name) 288 | 289 | self._attach_or_detach_disk(vm_name, 290 | vhd_name, 291 | vhd_size_in_gibs, 292 | lun, 293 | detach=True, 294 | allow_lun_0_detach=True, 295 | is_from_retry=True) 296 | 297 | print("Retry disk action for disk %s" % vhd_name) 298 | result = self._update_vm(vm_name, vmcompute) 299 | -------------------------------------------------------------------------------- /azure_flocker_driver/azure_utils/run_tests.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | for i in {1..100} 4 | do 5 | echo "Trial $i" 6 | trial test_disk_manager.py 7 | done 8 | 9 | -------------------------------------------------------------------------------- /azure_flocker_driver/azure_utils/test_disk_manager.py: -------------------------------------------------------------------------------- 1 | from arm_disk_manager import DiskManager 2 | from arm_disk_manager import AzureOperationNotAllowed 3 | from twisted.trial import unittest 4 | from eliot import Logger 5 | from azure.storage.blob import PageBlobService 6 | from azure.common.credentials import ServicePrincipalCredentials 7 | from azure.mgmt.resource.resources import ResourceManagementClient 8 | from azure.mgmt.compute import ComputeManagementClient 9 | import os 10 | import yaml 11 | 12 | azure_config = None 13 | _logger = Logger() 14 | config_file_path = os.environ.get('AZURE_CONFIG_FILE') 15 | 16 | if config_file_path is not None: 17 | config_file = open(config_file_path) 18 | config = yaml.load(config_file.read()) 19 | azure_config = config['azure_settings'] 20 | 21 | 22 | class DiskCreateTestCase(unittest.TestCase): 23 | 24 | def setUp(self): 25 | creds = ServicePrincipalCredentials( 26 | client_id=azure_config['client_id'], 27 | secret=azure_config['client_secret'], 28 | tenant=azure_config['tenant_id']) 29 | self._resource_client = ResourceManagementClient( 30 | creds, 31 | azure_config['subscription_id']) 32 | self._compute_client = ComputeManagementClient( 33 | creds, 34 | azure_config['subscription_id']) 35 | self._page_blob_service = PageBlobService( 36 | account_name=azure_config['storage_account_name'], 37 | account_key=azure_config['storage_account_key']) 38 | self._manager = 
DiskManager( 39 | self._resource_client, 40 | self._compute_client, 41 | self._page_blob_service, 42 | azure_config['storage_account_container'], 43 | azure_config['group_name'], 44 | azure_config['location']) 45 | 46 | def _has_lun0_disk(self, node_name): 47 | vm_disks = self._manager.list_attached_disks(node_name) 48 | for disk in vm_disks: 49 | if disk.lun == 0: 50 | return True 51 | return False 52 | 53 | def list_disks(self): 54 | return self._manager.list_disks() 55 | 56 | def _create_disk(self, vhd_name, vhd_size_in_gibs): 57 | print("creating disk " + vhd_name + ", size " + str(vhd_size_in_gibs)) 58 | link = self._manager.create_disk(vhd_name, vhd_size_in_gibs) 59 | disks = self.list_disks() 60 | found = False 61 | for disk in disks: 62 | if disk.name == vhd_name: 63 | found = True 64 | break 65 | self.assertEqual(found, True, 66 | 'Expected disk: ' + vhd_name + 67 | ' to be listed in DiskManager.list_disks') 68 | return link 69 | 70 | def _attach_disk(self, node_name, vhd_name, vhd_size_in_gibs): 71 | print("attaching disk " + vhd_name + ", node " + node_name) 72 | self._manager.attach_disk(node_name, vhd_name, vhd_size_in_gibs) 73 | attached = self._manager.is_disk_attached(node_name, vhd_name) 74 | self.assertEqual(attached, True, 75 | 'Expected disk: ' + vhd_name + 76 | ' should be attached to vm: ' + node_name) 77 | 78 | def _check_for_lun_0_disk(self, node_name, vhd_name): 79 | print("checking for lun 0 on " + node_name) 80 | lun0Disk = None 81 | vm_disks = self._manager.list_attached_disks(node_name) 82 | for disk in vm_disks: 83 | if disk.lun == 0: 84 | lun0Disk = disk.name.replace('.vhd', '') 85 | break 86 | self.assertNotEqual(lun0Disk, None, 87 | 'After an attach of any disk, lun0 should ' 88 | 'have a disk attached to vm: ' + node_name) 89 | self.assertNotEqual(lun0Disk, vhd_name, 90 | 'Lun-0 disk is special and should not be ' 91 | 'the same as the test_vhd ' + vhd_name) 92 | return lun0Disk 93 | 94 | def _try_to_detach_disk_on_lun_0(self, node_name, lun0_disk): 95 | print("attempted lun 0 detach on " + node_name) 96 | exceptionCaught = False 97 | try: 98 | self._manager.detach_disk(node_name, lun0_disk) 99 | except AzureOperationNotAllowed: 100 | exceptionCaught = True 101 | self.assertEqual(exceptionCaught, True, 102 | 'Detaching lun-0 should result in an ' 103 | 'exception. vm: ' + node_name + ', ' 104 | 'lun-0 disk: ' + lun0_disk) 105 | 106 | def _detach_disk(self, node_name, vhd_name, allow_lun0=False): 107 | print("detach disk " + vhd_name + ", node " + node_name) 108 | self._manager.detach_disk(node_name, vhd_name, allow_lun0) 109 | attached = self._manager.is_disk_attached(node_name, vhd_name) 110 | self.assertEqual(attached, False, 111 | 'Expected disk: ' + 112 | azure_config['test_vhd_name'] + 113 | ' should not be attached to vm: ' + node_name) 114 | 115 | def _destroy_disk(self, vhd_name): 116 | print("destroy disk " + vhd_name) 117 | self._manager.destroy_disk(vhd_name) 118 | disks = self.list_disks() 119 | found = False 120 | for disk in disks: 121 | if disk.name == vhd_name: 122 | found = True 123 | break 124 | self.assertEqual(found, False, 'Expected disk: ' + vhd_name + 125 | ' should not to be listed in ' 126 | 'DiskManager.list_disks') 127 | 128 | def test_delete_lun_0(self): 129 | node_name = azure_config['test_vm_name'] 130 | vhd_name = azure_config['test_vhd_name'] 131 | 132 | # does the VM have an existing disk attached to Lun 0? 
133 | # if so, we will not remove it at end of the test 134 | lun0_exists = self._has_lun0_disk(node_name) 135 | 136 | # create the test vhd 137 | self._create_disk(vhd_name, 2) 138 | 139 | # attach it 140 | self._attach_disk(node_name, vhd_name, 2) 141 | 142 | # make sure there is a disk on lun 0 143 | lun0_disk = self._check_for_lun_0_disk(node_name, vhd_name) 144 | 145 | # attempt to delete the disk on lun 0 146 | self._try_to_detach_disk_on_lun_0(node_name, lun0_disk) 147 | 148 | # detach the test vhd 149 | self._detach_disk(node_name, vhd_name) 150 | 151 | # delete the test_vhd 152 | self._destroy_disk(vhd_name) 153 | 154 | # to return VM to proper state, if lun 0 was not present 155 | # at start of test, then clean up to original state 156 | if not lun0_exists: 157 | self._detach_disk(node_name, lun0_disk, True) 158 | self._destroy_disk(lun0_disk) 159 | 160 | def test_create_blank_vhd(self): 161 | # create a blank vhd and verify the link 162 | link = self._create_disk(azure_config['test_vhd_name'], 2) 163 | self.assertEqual(link, 164 | 'https://' + 165 | self._page_blob_service.account_name + 166 | '.blob.core.windows.net/' + 167 | azure_config['storage_account_container'] + 168 | '/' + azure_config['test_vhd_name'] + '.vhd') 169 | 170 | # delete the test vhd 171 | self._destroy_disk(azure_config['test_vhd_name']) 172 | -------------------------------------------------------------------------------- /azure_flocker_driver/azure_utils/vhd.py: -------------------------------------------------------------------------------- 1 | import datetime 2 | import uuid 3 | 4 | 5 | class AzureOperationFailed(Exception): 6 | 7 | def __init__(self): 8 | pass 9 | 10 | 11 | class Vhd(object): 12 | 13 | def __init__(): 14 | return 15 | 16 | @staticmethod 17 | def create_blank_vhd(azure_storage_client, 18 | container_name, 19 | name, 20 | size_in_bytes): 21 | # VHD size must be aligned on a megabyte boundary. The 22 | # current calling function converts from gigabytes to bytes, 23 | # but ideally a check should be added. 24 | # 25 | # The blob itself, must include a footer which is an additional 26 | # 512 bytes. So, the size is increased accordingly to allow 27 | # for the vhd footer. 
28 | size_in_bytes_with_footer = size_in_bytes + 512 29 | 30 | # Create a new page blob as a blank disk 31 | azure_storage_client.create_container(container_name) 32 | azure_storage_client.create_blob( 33 | container_name=container_name, 34 | blob_name=name, 35 | content_length=size_in_bytes_with_footer) 36 | 37 | # for disk to be a valid vhd it requires a vhd footer 38 | # on the last 512 bytes 39 | vhd_footer = Vhd.generate_vhd_footer(size_in_bytes) 40 | azure_storage_client.update_page( 41 | container_name=container_name, 42 | blob_name=name, 43 | page=vhd_footer, 44 | start_range=size_in_bytes_with_footer-512, 45 | end_range=size_in_bytes_with_footer-1) 46 | 47 | return azure_storage_client.make_blob_url(container_name, name) 48 | 49 | @staticmethod 50 | def calculate_geometry(size): 51 | # this value taken from how Azure generates geometry values for VHDs 52 | vhd_sector_length = 512 53 | 54 | total_sectors = size / vhd_sector_length 55 | if total_sectors > 65535 * 16 * 255: 56 | total_sectors = 65535 * 16 * 255 57 | 58 | if total_sectors > 65535 * 16 * 63: 59 | sectors_per_track = 255 60 | heads = 16 61 | cylinder_times_heads = int(total_sectors / sectors_per_track) 62 | else: 63 | sectors_per_track = 17 64 | cylinder_times_heads = int(total_sectors / sectors_per_track) 65 | 66 | heads = int((cylinder_times_heads + 1023) / 1024) 67 | if heads < 4: 68 | heads = 4 69 | 70 | if cylinder_times_heads >= (heads * 1024) or heads > 16: 71 | sectors_per_track = 31 72 | heads = 16 73 | cylinder_times_heads = int(total_sectors / sectors_per_track) 74 | 75 | if cylinder_times_heads >= (heads * 1024): 76 | sectors_per_track = 63 77 | heads = 16 78 | cylinder_times_heads = int(total_sectors / sectors_per_track) 79 | cylinders = int(cylinder_times_heads / heads) 80 | 81 | return cylinders, heads, sectors_per_track 82 | 83 | @staticmethod 84 | def generate_vhd_footer(size): 85 | """ 86 | Generate a binary VHD Footer 87 | # Fixed VHD Footer Format Specification 88 | # spec: 89 | # https://technet.microsoft.com/en-us/virtualization/bb676673.aspx#E3B 90 | # Field Size (bytes) 91 | # Cookie 8 92 | # Features 4 93 | # Version 4 94 | # Data Offset 4 95 | # TimeStamp 4 96 | # Creator App 4 97 | # Creator Ver 4 98 | # CreatorHostOS 4 99 | # Original Size 8 100 | # Current Size 8 101 | # Disk Geo 4 102 | # Disk Type 4 103 | # Checksum 4 104 | # Unique ID 16 105 | # Saved State 1 106 | # Reserved 427 107 | # 108 | 109 | """ 110 | # TODO Are we taking any unreliable dependencies of the content of 111 | # the azure VHD footer? 112 | footer_dict = {} 113 | # the ascii string 'conectix' 114 | footer_dict['cookie'] = \ 115 | bytearray([0x63, 0x6f, 0x6e, 0x65, 0x63, 0x74, 0x69, 0x78]) 116 | # no features enabled 117 | footer_dict['features'] = bytearray([0x00, 0x00, 0x00, 0x02]) 118 | # current file version 119 | footer_dict['version'] = bytearray([0x00, 0x01, 0x00, 0x00]) 120 | # in the case of a fixed disk, this is set to -1 121 | footer_dict['data_offset'] = \ 122 | bytearray([0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 123 | 0xff]) 124 | # hex representation of seconds since january 1st 2000 125 | footer_dict['timestamp'] = Vhd._generate_timestamp() 126 | # ascii code for 'wa' = windowsazure 127 | footer_dict['creator_app'] = bytearray([0x77, 0x61, 0x00, 0x00]) 128 | # ascii code for version of creator application 129 | footer_dict['creator_version'] = \ 130 | bytearray([0x00, 0x07, 0x00, 0x00]) 131 | # creator host os. 
windows or mac, ascii for 'wi2k' 132 | footer_dict['creator_os'] = \ 133 | bytearray([0x57, 0x69, 0x32, 0x6b]) 134 | footer_dict['original_size'] = \ 135 | bytearray.fromhex(hex(size).replace('0x', '').zfill(16)) 136 | footer_dict['current_size'] = \ 137 | bytearray.fromhex(hex(size).replace('0x', '').zfill(16)) 138 | # given the size, calculate the geometry -- 2 bytes cylinders, 139 | # 1 byte heads, 1 byte sectors 140 | (cylinders, heads, sectors) = Vhd.calculate_geometry(size) 141 | footer_dict['disk_geometry'] = \ 142 | bytearray([((cylinders >> 8) & 0xff), (cylinders & 0xff), 143 | (heads & 0xff), (sectors & 0xff)]) 144 | # 0x2 = fixed hard disk 145 | footer_dict['disk_type'] = bytearray([0x00, 0x00, 0x00, 0x02]) 146 | # a uuid 147 | footer_dict['unique_id'] = bytearray.fromhex(uuid.uuid4().hex) 148 | # saved state and reserved 149 | footer_dict['saved_reserved'] = bytearray(428) 150 | 151 | footer_dict['checksum'] = Vhd._compute_checksum(footer_dict) 152 | 153 | return bytes(Vhd._combine_byte_arrays(footer_dict)) 154 | 155 | @staticmethod 156 | def _generate_timestamp(): 157 | hevVal = hex(long(datetime.datetime.now().strftime("%s")) - 946684800) 158 | return bytearray.fromhex(hevVal.replace( 159 | 'L', '').replace('0x', '').zfill(8)) 160 | 161 | @staticmethod 162 | def _compute_checksum(vhd_data): 163 | 164 | if 'checksum' in vhd_data: 165 | del vhd_data['checksum'] 166 | 167 | wholeArray = Vhd._combine_byte_arrays(vhd_data) 168 | 169 | total = 0 170 | for byte in wholeArray: 171 | total += byte 172 | 173 | # ones compliment 174 | total = ~total 175 | 176 | def tohex(val, nbits): 177 | return hex((val + (1 << nbits)) % (1 << nbits)) 178 | 179 | return bytearray.fromhex(tohex(total, 32).replace('0x', '')) 180 | 181 | @staticmethod 182 | def _combine_byte_arrays(vhd_data): 183 | wholeArray = vhd_data['cookie'] \ 184 | + vhd_data['features'] \ 185 | + vhd_data['version'] \ 186 | + vhd_data['data_offset'] \ 187 | + vhd_data['timestamp'] \ 188 | + vhd_data['creator_app'] \ 189 | + vhd_data['creator_version'] \ 190 | + vhd_data['creator_os'] \ 191 | + vhd_data['original_size'] \ 192 | + vhd_data['current_size'] \ 193 | + vhd_data['disk_geometry'] \ 194 | + vhd_data['disk_type'] 195 | 196 | if 'checksum' in vhd_data: 197 | wholeArray += vhd_data['checksum'] 198 | 199 | wholeArray += vhd_data['unique_id'] \ 200 | + vhd_data['saved_reserved'] 201 | 202 | return wholeArray 203 | -------------------------------------------------------------------------------- /azure_flocker_driver/lun.py: -------------------------------------------------------------------------------- 1 | import subprocess 2 | import os 3 | 4 | from twisted.python.filepath import FilePath 5 | 6 | 7 | class Lun(object): 8 | 9 | device_path = '' 10 | lun = '' 11 | 12 | def __init__(): 13 | return 14 | 15 | @staticmethod 16 | def rescan_scsi(): 17 | with open(os.devnull, 'w') as shutup: 18 | subprocess.call(['fdisk', '-l'], stdout=shutup, stderr=shutup) 19 | 20 | # Returns a string representing the block device path based 21 | # on a provided lun slot 22 | @staticmethod 23 | def get_device_path_for_lun(lun): 24 | """ 25 | Returns a FilePath representing the path of the device 26 | with the sepcified LUN. TODO: Is it valid to predict the 27 | path based on the LUN? 
28 | return FilePath: The FilePath representing the attached disk 29 | """ 30 | Lun.rescan_scsi() 31 | if lun > 31: 32 | raise Exception('valid lun parameter is 0 - 31, inclusive') 33 | base = '/dev/sd' 34 | 35 | # luns go 0-31 36 | ascii_base = ord('c') 37 | 38 | return FilePath(base + chr(ascii_base + lun), False) 39 | -------------------------------------------------------------------------------- /azure_flocker_driver/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "azure-flocker-driver", 3 | "version": "1.0.0", 4 | "description": "[![Build Status](https://travis-ci.org/sedouard/azure-flocker-driver.svg?branch=master)](https://travis-ci.org/sedouard/azure-flocker-driver)\r [![Code Climate](https://codeclimate.com/github/sedouard/azure-flocker-driver/badges/gpa.svg)](https://codeclimate.com/github/sedouard/azure-flocker-driver)", 5 | "main": "index.js", 6 | "scripts": { 7 | "test": "echo \"Error: no test specified\" && exit 1" 8 | }, 9 | "repository": { 10 | "type": "git", 11 | "url": "git+https://github.com/sedouard/azure-flocker-driver.git" 12 | }, 13 | "author": "", 14 | "license": "ISC", 15 | "bugs": { 16 | "url": "https://github.com/sedouard/azure-flocker-driver/issues" 17 | }, 18 | "homepage": "https://github.com/sedouard/azure-flocker-driver#readme", 19 | "dependencies": { 20 | "adal-node": "^0.1.13" 21 | } 22 | } 23 | -------------------------------------------------------------------------------- /azure_flocker_driver/test_azure_driver.py: -------------------------------------------------------------------------------- 1 | """ 2 | Functional tests for 3 | ``flocker.node.agents.blockdevice.AzureStorageBlockDeviceAPI`` 4 | """ 5 | import bitmath 6 | import logging 7 | import os 8 | from uuid import uuid4 9 | import yaml 10 | 11 | from flocker.node.agents import blockdevice 12 | from flocker.node.agents.test.test_blockdevice import ( 13 | make_iblockdeviceapi_tests) 14 | from twisted.python.components import proxyForInterface 15 | from zope.interface import implementer 16 | 17 | from azure_storage_driver import ( 18 | azure_driver_from_configuration 19 | ) 20 | 21 | MIN_ALLOCATION_SIZE = bitmath.GiB(1).bytes 22 | MIN_ALLOCATION_UNIT = MIN_ALLOCATION_SIZE 23 | 24 | LOG = logging.getLogger(__name__) 25 | 26 | 27 | @implementer(blockdevice.IBlockDeviceAPI) 28 | class TestDriver(proxyForInterface(blockdevice.IBlockDeviceAPI, 'original')): 29 | """Wrapper around driver class to provide test cleanup.""" 30 | def __init__(self, original): 31 | self.original = original 32 | self.volumes = {} 33 | 34 | def _cleanup(self): 35 | """Clean up testing artifacts.""" 36 | for vol in self.volumes.keys(): 37 | # Make sure it has been cleanly removed 38 | try: 39 | self.original.detach_volume(self.volumes[vol]) 40 | except Exception: 41 | pass 42 | 43 | try: 44 | self.original.destroy_volume(self.volumes[vol]) 45 | except Exception: 46 | LOG.exception('Error cleaning up volume.') 47 | 48 | def create_volume(self, dataset_id, size): 49 | """Track all volume creation.""" 50 | blockdevvol = self.original.create_volume(dataset_id, size) 51 | self.volumes[u"%s" % dataset_id] = blockdevvol.blockdevice_id 52 | return blockdevvol 53 | 54 | 55 | def create_driver(**config): 56 | return azure_driver_from_configuration( 57 | client_id=config['client_id'], 58 | client_secret=config['client_secret'], 59 | tenant_id=config['tenant_id'], 60 | subscription_id=config['subscription_id'], 61 | storage_account_name=config['storage_account_name'], 62 | 
storage_account_key=config['storage_account_key'], 63 | storage_account_container=config['storage_account_container'], 64 | group_name=config['group_name'], 65 | location=config['location'], 66 | debug=config['debug']) 67 | 68 | 69 | def api_factory(test_case): 70 | """Create a test instance of the block driver. 71 | 72 | :param test_case: The specific test case instance. 73 | :return: A test configured driver instance. 74 | """ 75 | logging.basicConfig( 76 | format='%(asctime)s %(levelname)-7s [%(threadName)-19s]: %(message)s', 77 | datefmt='%Y-%m-%d %H:%M:%S', 78 | level=logging.DEBUG, 79 | filename='../driver.log') 80 | test_config_path = os.environ.get( 81 | 'FLOCKER_CONFIG', 82 | '../example.azure_agent.yml') 83 | if not os.path.exists(test_config_path): 84 | raise Exception('Functional test configuration not found.') 85 | 86 | with open(test_config_path) as config_file: 87 | config = yaml.load(config_file.read()) 88 | 89 | config = config.get('dataset', {}) 90 | test_driver = TestDriver( 91 | create_driver( 92 | **config)) 93 | test_case.addCleanup(test_driver._cleanup) 94 | return test_driver 95 | 96 | 97 | class AzureStorageBlockDeviceAPIInterfaceTests( 98 | make_iblockdeviceapi_tests( 99 | blockdevice_api_factory=( 100 | lambda test_case: api_factory(test_case) 101 | ), 102 | minimum_allocatable_size=MIN_ALLOCATION_SIZE, 103 | device_allocation_unit=MIN_ALLOCATION_UNIT, 104 | unknown_blockdevice_id_factory=lambda test: unicode(uuid4()))): 105 | 106 | def repeat_retries(self): 107 | """ 108 | Override retry policy from base class. 109 | 110 | Azure disks are eventually consistent. Detach 111 | operations may continue to show up in listing 112 | results until the VM state settled. 113 | 114 | Retry for 180 seconds. 115 | """ 116 | return [2] * 90 117 | -------------------------------------------------------------------------------- /example.azure_agent.yml: -------------------------------------------------------------------------------- 1 | "version": 1 2 | "control-service": 3 | "hostname": "" 4 | "port": 4524 5 | 6 | dataset: 7 | backend: "azure_flocker_driver" 8 | client_id: "" 9 | tenant_id: "" 10 | client_secret: "" 11 | subscription_id: "" 12 | storage_account_name: "" 13 | storage_account_key: "" 14 | storage_account_container: "" 15 | group_name: "" 16 | location: "" 17 | async_timeout: 100000 18 | debug: "false" 19 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | azure==2.0.0rc5 2 | azure-storage==0.32.0 3 | msrest>=0.4.0,<0.5.0 4 | msrestazure>=0.4.0,<0.5.0 5 | bitmath==1.3.0.2 6 | characteristic==14.3.0 7 | eliot==0.11.0 8 | flake8==3.0.4 9 | futures==3.0.5 10 | mccabe==0.5.2 11 | mimic==2.1.0 12 | pep8==1.7.0 13 | pluggy==0.3.1 14 | py==1.4.31 15 | pyflakes==1.2.3 16 | python-dateutil==2.5.3 17 | PyYAML==3.11 18 | requests==2.10.0 19 | six==1.10.0 20 | testtools==2.2.0 21 | tox==2.3.1 22 | Twisted==16.2.0 23 | virtualenv==15.0.3 24 | wheel==0.29.0 25 | zope.interface==4.2.0 26 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | from setuptools import setup, find_packages 2 | import codecs # To use a consistent encoding 3 | 4 | # Get the long description from the relevant file 5 | with codecs.open('DESCRIPTION.rst', encoding='utf-8') as f: 6 | long_description = f.read() 7 | 8 | with 
codecs.open('requirements.txt', encoding='utf-8') as f: 9 | install_requires = f.readlines() 10 | 11 | setup( 12 | name='azure_flocker_driver', 13 | version='1.0', 14 | description='Azure Backend Plugin for ClusterHQ/Flocker', 15 | long_description=long_description, 16 | author='Jim Spring , ' 17 | 'Joseph Romeo ', 18 | author_email='jaspring@microsoft.com', 19 | url='https://github.com/CatalystCode/azure-flocker-driver', 20 | license='Apache 2.0', 21 | 22 | classifiers=[ 23 | 24 | 'Development Status :: 3 - Alpha', 25 | 26 | 'Intended Audience :: System Administrators', 27 | 'Intended Audience :: Developers', 28 | 'Topic :: Software Development :: Libraries :: Python Modules', 29 | 30 | 'License :: OSI Approved :: Apache Software License', 31 | 32 | # Python versions supported 33 | 'Programming Language :: Python :: 2.7', 34 | ], 35 | 36 | keywords='backend, plugin, flocker, docker, python', 37 | packages=find_packages(exclude=['test*']), 38 | install_requires=install_requires, 39 | data_files=[('/etc/flocker', ['example.azure_agent.yml'])] 40 | ) 41 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | [tox] 2 | envlist = lint 3 | 4 | [testenv:lint] 5 | basepython = python2.7 6 | changedir = {toxinidir} 7 | commands = 8 | pip install flake8 9 | flake8 azure_flocker_driver 10 | --------------------------------------------------------------------------------
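A closing note on local development: in addition to the driver tests shown in the README, the repository wires style checks through tox (tox.ini runs flake8 over azure_flocker_driver) and ships a lower-level DiskManager suite in azure_flocker_driver/azure_utils. The snippet below is a minimal sketch of invoking both from a checkout; it assumes Python 2.7 with the pinned requirements installed, and the YAML file passed via AZURE_CONFIG_FILE is a placeholder you must supply yourself (test_disk_manager.py expects an azure_settings section with the same credential and storage keys as the agent's dataset section, plus test_vm_name and test_vhd_name).

```bash
# Style checks, exactly as defined in tox.ini (flake8 over azure_flocker_driver).
tox -e lint

# DiskManager tests: test_disk_manager.py reads the settings file named by
# AZURE_CONFIG_FILE; the path below is a placeholder for your own secrets file.
export AZURE_CONFIG_FILE="$HOME/azure_storage_config.yml"
cd azure_flocker_driver/azure_utils
trial test_disk_manager.py
```

These tests run against a real Azure subscription, so expect them to create, attach, detach, and delete page blob data disks on the configured test VM.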