├── .gitignore ├── README.md ├── azext_kube ├── __init__.py ├── _help.py ├── cli_utils.py ├── kube_operations.py ├── kubewrapper.py └── storage.py ├── setup.cfg └── setup.py /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | *.egg-info/ 24 | .installed.cfg 25 | *.egg 26 | MANIFEST 27 | 28 | # PyInstaller 29 | # Usually these files are written by a python script from a template 30 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 31 | *.manifest 32 | *.spec 33 | 34 | # Installer logs 35 | pip-log.txt 36 | pip-delete-this-directory.txt 37 | 38 | # Unit test / coverage reports 39 | htmlcov/ 40 | .tox/ 41 | .coverage 42 | .coverage.* 43 | .cache 44 | nosetests.xml 45 | coverage.xml 46 | *.cover 47 | .hypothesis/ 48 | .pytest_cache/ 49 | 50 | # Translations 51 | *.mo 52 | *.pot 53 | 54 | # Django stuff: 55 | *.log 56 | .static_storage/ 57 | .media/ 58 | local_settings.py 59 | 60 | # Flask stuff: 61 | instance/ 62 | .webassets-cache 63 | 64 | # Scrapy stuff: 65 | .scrapy 66 | 67 | # Sphinx documentation 68 | docs/_build/ 69 | 70 | # PyBuilder 71 | target/ 72 | 73 | # Jupyter Notebook 74 | .ipynb_checkpoints 75 | 76 | # pyenv 77 | .python-version 78 | 79 | # celery beat schedule file 80 | celerybeat-schedule 81 | 82 | # SageMath parsed files 83 | *.sage.py 84 | 85 | # Environments 86 | .env 87 | .venv 88 | env/ 89 | venv/ 90 | ENV/ 91 | env.bak/ 92 | venv.bak/ 93 | 94 | # Spyder project settings 95 | .spyderproject 96 | .spyproject 97 | 98 | # Rope project settings 99 | .ropeproject 100 | 101 | # mkdocs documentation 102 | /site 103 | 104 | # mypy 105 | .mypy_cache/ 106 | 107 | # VS Code 108 | .vscode 109 | 110 | # custom 111 | deploy.sh -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Azure CLI Extension for ACS/AKS Infrastructure Operations 2 | 3 | ![Python](https://img.shields.io/pypi/pyversions/azure-cli.svg?maxAge=2592000) 4 | 5 | This CLI extension provides various general-purpose operations for Kubernetes clusters running on Azure. 6 | 7 | It currently supports migrating Persistent Volumes and Data Disks between a source ACS (Azure Container Service) cluster and a target AKS (Managed Kubernetes) cluster, as well as exporting the entire cluster state for backup/restore scenarios. 8 | 9 | ## Features 10 | 11 | The CLI extension will let you: 12 | 13 | - Take a snapshot of cluster state for backup/restore purposes 14 | - Migrate Persistent Volume resources from ACS to AKS 15 | - Migrate Persistent Volume resources from AKS to AKS 16 | - Move Unmanaged Data Disks from ACS to AKS 17 | - Move Managed Data Disks from ACS to AKS 18 | - Migrate clusters between regions 19 | 20 | ### Cluster Backup Features 21 | 22 | The following platforms are supported for cluster backup functionality: 23 | 24 | - AKS 25 | - ACS 26 | - ACS-Engine 27 | - OpenShift 28 | - Tectonic 29 | 30 | ## Installation 31 | 32 | ### Step 0: Install/Update Azure CLI 33 | 34 | Make sure you have the latest version of the Azure CLI installed.
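To verify your local setup, the commands below can be used; note that `az upgrade` only exists on newer Azure CLI releases, so older installations should instead be updated by reinstalling as described next.

```bash
# Show the installed Azure CLI and extension versions
az --version

# Upgrade the CLI in place (newer Azure CLI releases only)
az upgrade
```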
35 | 36 | If you don't have the Azure CLI installed, follow the installation instructions on [GitHub](https://github.com/Azure/azure-cli) or [Microsoft Docs](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) to set up the Azure CLI in your environment. 37 | 38 | ### Step 1: Install the extension 39 | 40 | Navigate to this project's Releases tab on GitHub to see the list of releases, then run the `az extension add` command with the `--source` parameter. 41 | 42 | The argument to `--source` is either the download URL of your chosen release (the extension package ends with '.whl'), or the local path to the downloaded release package. 43 | 44 | `az extension add --source <URL or local path to the .whl package>` 45 | 46 | For example, to install version 0.0.1: 47 | 48 | `az extension add --source 'https://github.com/yaron2/azure-kube-cli/releases/download/0.0.1/azure_kube_cli-0.0.1-py2.py3-none-any.whl'` 49 | 50 | ## Command-Line Usage 51 | 52 | ```bash 53 | Group 54 | az kube 55 | 56 | Commands: 57 | copy-volumes: Copy Persistent Volumes from ACS to AKS. 58 | copy-aks-volumes: Copy Persistent Volumes from AKS to AKS. 59 | export : Export a Kubernetes cluster's resources to disk. 60 | ``` 61 | 62 | ## Usage Examples 63 | 64 | ### Migrate an ACS cluster to AKS 65 | 66 | `az kube copy-volumes --source-acs-name=myacs --target-aks-name=myaks --acs-resourcegroup=rg1 --aks-resourcegroup=rg2` 67 | 68 | ### Migrate using your own source and target kubeconfigs 69 | 70 | `az kube copy-volumes --source-kubeconfig=~/.source --target-kubeconfig=~/.target --source-acs-name=myacs --target-aks-name=myaks` 71 | 72 | ### Migrate AKS cluster volumes to another AKS cluster 73 | 74 | `az kube copy-aks-volumes --source-aks-name test-kube-source --source-aks-resourcegroup test-kube-source --target-aks-name test-kube-target --target-aks-resourcegroup test-kube-target --source-kubeconfig source.config --target-kubeconfig target.config` 75 | 76 | ### Back up a cluster's state 77 | 78 | `az kube export --kubeconfig=./myconfig` 79 | 80 | ### Back up a cluster's state to a custom directory 81 | 82 | `az kube export --kubeconfig=./myconfig --output-dir=./backup` 83 | 84 | ### Restore or migrate a cluster's state 85 | 86 | Once you have run the `az kube export` command, the cluster's configuration is exported to a file named cluster.json, either in the directory where the command was executed or in the custom directory you specified. Run `kubectl apply` on cluster.json to import the saved cluster configuration, making sure your kubeconfig points at the cluster context you want to restore to. 87 | 88 | `kubectl apply -f cluster.json` 89 | 90 | ## Development 91 | 92 | Extension development depends on a local Azure CLI dev environment. First, follow these [instructions](https://github.com/Azure/azure-cli/blob/master/doc/configuring_your_machine.md) to prepare your machine. 93 | 94 | Next, set your `AZURE_EXTENSION_DIR` environment variable to a target extension deployment directory. This overrides the standard extension directory. 95 | 96 | Example: `export AZURE_EXTENSION_DIR=~/.azure/devcliextensions/` 97 | 98 | Run the following command to install the extension and all of its dependencies into the extension deployment directory. 99 | 100 | `pip install -U --target <extension dir>/azure_kube_cli_ext <path to this source code>` 101 | 102 | Repeat the above command as needed. 103 | 104 | At this point, assuming the setup is good, the extension should be loaded and you should see the extension command space.
Use `az --debug` and `az extension list` for debugging this step. 105 | 106 | Helpful [Reference](https://github.com/Azure/azure-cli/tree/master/doc/extensions) docs for Az CLI Extension development 107 | -------------------------------------------------------------------------------- /azext_kube/__init__.py: -------------------------------------------------------------------------------- 1 | from ._help import helps 2 | from azure.cli.core import AzCommandsLoader 3 | 4 | 5 | class KubeCommandsLoader(AzCommandsLoader): 6 | def __init__(self, cli_ctx=None): 7 | from azure.cli.core.commands import CliCommandType 8 | custom_type = CliCommandType( 9 | operations_tmpl='azext_kube.kube_operations#{}') 10 | super(KubeCommandsLoader, self).__init__( 11 | cli_ctx=cli_ctx, custom_command_type=custom_type) 12 | 13 | def load_command_table(self, args): 14 | with self.command_group('kube') as g: 15 | g.custom_command('copy-volumes', 'copy_volumes') 16 | g.custom_command('copy-aks-volumes', 'copy_aks_volumes') 17 | g.custom_command('export', 'export_cluster_to_dir') 18 | return self.command_table 19 | 20 | def load_arguments(self, _): 21 | with self.argument_context('kube copy-volumes') as c: 22 | c.argument('source_acs_name', options_list=['--source-acs-name']) 23 | c.argument('target_aks_name', options_list=['--target-aks-name']) 24 | c.argument('acs_resource_group', options_list=['--acs-resourcegroup']) 25 | c.argument('aks_resource_group', options_list=['--aks-resourcegroup']) 26 | c.argument('source_kubeconfig', options_list=['--source-kubeconfig']) 27 | c.argument('target_kubeconfig', options_list=['--target-kubeconfig']) 28 | with self.argument_context('kube copy-aks-volumes') as c: 29 | c.argument('source_aks_name', options_list=['--source-aks-name']) 30 | c.argument('target_aks_name', options_list=['--target-aks-name']) 31 | c.argument('source_aks_resource_group', options_list=['--source-aks-resourcegroup']) 32 | c.argument('target_aks_resource_group', options_list=['--target-aks-resourcegroup']) 33 | c.argument('source_kubeconfig', options_list=['--source-kubeconfig']) 34 | c.argument('target_kubeconfig', options_list=['--target-kubeconfig']) 35 | with self.argument_context('kube export') as c: 36 | c.argument('kubeconfig_path', options_list=['--kubeconfig']) 37 | c.argument('output_dir', options_list=['--output-dir']) 38 | 39 | 40 | COMMAND_LOADER_CLS = KubeCommandsLoader 41 | -------------------------------------------------------------------------------- /azext_kube/_help.py: -------------------------------------------------------------------------------- 1 | from knack.help_files import helps 2 | 3 | helps['kube'] = """ 4 | type: group 5 | short-summary: Infrastructure operations for Azure Kubernetes clusters. 6 | """ 7 | 8 | helps['kube copy-volumes'] = """ 9 | type: command 10 | short-summary: Copy Persistent Volumes from ACS to AKS. 11 | long-summary: > 12 | Creates Managed Disks in target AKS resource group using storage snapshots and copies PV k8s resources from ACS cluster to AKS. 13 | Supports cross-region copies. 
14 | parameters: 15 | - name: --source-acs-name 16 | type: string 17 | short-summary: Name of the source ACS instance 18 | - name: --target-aks-name 19 | type: string 20 | short-summary: Name of the target AKS instance 21 | - name: --acs-resourcegroup 22 | type: string 23 | short-summary: Name of the ACS cluster resource group 24 | - name: --aks-resourcegroup 25 | type: string 26 | short-summary: Name of the AKS cluster resource group 27 | - name: --source-kubeconfig 28 | type: string 29 | short-summary: Path to the source cluster kubeconfig file 30 | - name: --target-kubeconfig 31 | type: string 32 | short-summary: Path to the target cluster kubeconfig file 33 | examples: 34 | - name: Copy all Persistent Volumes from ACS to AKS 35 | text: > 36 | az kube copy-volumes --source-acs-name=acs-cluster --target-aks-name=aks-cluster --acs-resourcegroup=rg1 --aks-resourcegroup=rg2 37 | - name: Copy using a custom Kubeconfig file 38 | text: > 39 | az kube copy-volumes --source-kubeconfig=/.myconfig --source-acs-name=acs-cluster --target-aks-name=aks-cluster 40 | """ 41 | 42 | helps['kube copy-aks-volumes'] = """ 43 | type: command 44 | short-summary: Copy Persistent Volumes from one AKS cluster to another. 45 | long-summary: > 46 | Creates Managed Disks in target AKS resource group using storage snapshots and copies PV k8s resources from one AKS cluster to another AKS cluster. 47 | Supports cross-region copies. 48 | parameters: 49 | - name: --source-aks-name 50 | type: string 51 | short-summary: Name of the source AKS instance 52 | - name: --target-aks-name 53 | type: string 54 | short-summary: Name of the target AKS instance 55 | - name: --source-aks-resourcegroup 56 | type: string 57 | short-summary: Name of the source AKS cluster resource group 58 | - name: --target-aks-resourcegroup 59 | type: string 60 | short-summary: Name of the target AKS cluster resource group 61 | - name: --source-kubeconfig 62 | type: string 63 | short-summary: Path to the source cluster kubeconfig file 64 | - name: --target-kubeconfig 65 | type: string 66 | short-summary: Path to the target cluster kubeconfig file 67 | examples: 68 | - name: Copy all Persistent Volumes from one AKS cluster to another 69 | text: > 70 | az kube copy-aks-volumes --source-aks-name=source-aks-cluster --target-aks-name=target-aks-cluster --source-aks-resourcegroup=rg1 --target-aks-resourcegroup=rg2 71 | - name: Copy using a custom Kubeconfig file 72 | text: > 73 | az kube copy-aks-volumes --source-kubeconfig=/.myconfig --source-aks-name=source-aks-cluster-name --target-aks-name=target-aks-cluster-name 74 | """ 75 | 76 | helps['kube export'] = """ 77 | type: command 78 | short-summary: Export a Kubernetes cluster's resources to disk. 79 | long-summary: > 80 | Export replication controllers, secrets, limits, quotas, deployments, services, config maps, daemon sets, stateful sets 81 | and horizontal pod autoscalers to disk in a format that allows for a later restore. 82 | parameters: 83 | - name: --kubeconfig 84 | type: string 85 | short-summary: Path to the cluster kubeconfig file 86 | - name: --output-dir 87 | type: string 88 | short-summary: The directory to write the backup to 89 | examples: 90 | - name: Export a cluster to a directory 91 | text: > 92 | az kube export --kubeconfig=./myconfig --output-dir=./cluster 93 | """ 94 | -------------------------------------------------------------------------------- /azext_kube/cli_utils.py: -------------------------------------------------------------------------------- 1 | # 
-------------------------------------------------------------------------------------------- 2 | # Copyright (c) Microsoft Corporation. All rights reserved. 3 | # Licensed under the MIT License. See License.txt in the project root for license information. 4 | # -------------------------------------------------------------------------------------------- 5 | 6 | import json 7 | import sys 8 | from subprocess import STDOUT, CalledProcessError, check_output 9 | 10 | from knack.log import get_logger 11 | from knack.util import CLIError 12 | 13 | logger = get_logger(__name__) 14 | 15 | def az_cli(cmd, env=None): 16 | cli_cmd = prepare_cli_command(cmd) 17 | json_cmd_output = run_cli_command(cli_cmd, env=env) 18 | return json_cmd_output 19 | 20 | # pylint: disable=inconsistent-return-statements 21 | def run_cli_command(cmd, return_as_json=True, empty_json_as_error=False, env=None): 22 | try: 23 | cmd_output = check_output(cmd, stderr=STDOUT, universal_newlines=True, env=env) 24 | logger.debug('command: %s ended with output: %s', cmd, cmd_output) 25 | 26 | if return_as_json: 27 | if cmd_output: 28 | try: 29 | json_output = json.loads(cmd_output) 30 | except ValueError: 31 | # the CLI occasionally emits output that is not valid JSON; treat it as empty 32 | return "" 33 | return json_output 34 | elif empty_json_as_error: 35 | raise CLIError("Command returned an unexpected empty string.") 36 | else: 37 | return None 38 | else: 39 | return cmd_output 40 | except CalledProcessError as ex: 41 | logger.error('command failed: %s', cmd) 42 | logger.error('output: %s', ex.output) 43 | pass 44 | except: 45 | logger.error('command ended with an error: %s', cmd) 46 | raise 47 | 48 | 49 | def prepare_cli_command(cmd, output_as_json=True): 50 | full_cmd = [sys.executable, '-m', 'azure.cli'] + cmd 51 | 52 | if output_as_json: 53 | full_cmd += ['--output', 'json'] 54 | else: 55 | full_cmd += ['--output', 'tsv'] 56 | 57 | # tag newly created resources, containers don't have tags 58 | if 'create' in cmd and ('container' not in cmd): 59 | create_tags = True 60 | for idx, arg in enumerate(cmd): 61 | if arg == '--tags': 62 | create_tags = False 63 | cmd[idx+1] = cmd[idx+1] + ' created_by=disk-copy-extension' 64 | 65 | if not create_tags: 66 | full_cmd += ['--tags', 'created_by=disk-copy-extension'] 67 | 68 | return full_cmd -------------------------------------------------------------------------------- /azext_kube/kube_operations.py: -------------------------------------------------------------------------------- 1 | from knack.log import get_logger 2 | from knack.util import CLIError 3 | from .cli_utils import az_cli 4 | from collections import namedtuple 5 | from azext_kube import storage 6 | from azext_kube import kubewrapper 7 | import uuid 8 | import os 9 | 10 | 11 | def get_clusters_info(source_acs_name, acs_resourcegroup, target_aks_name, aks_resourcegroup): 12 | acs_clusters = az_cli(['acs', 'list']) 13 | if acs_clusters: 14 | filtered_acs = [c for c in acs_clusters if c['resourceGroup'].lower() == acs_resourcegroup.lower() and c['name'] == source_acs_name] 15 | 16 | if not filtered_acs: 17 | raise CLIError( 18 | 'ACS Cluster with name {0} not found'.format(source_acs_name)) 19 | 20 | source_acs = filtered_acs[0] 21 | print("Found source ACS cluster with name {0}".format(source_acs_name)) 22 | source_resource_group = source_acs['resourceGroup'] 23 | source_location = source_acs['location'] 24 | else: 25 | source_resource_group = acs_resourcegroup 26 | source_location = input("ACS Location:")  # raw_input is unavailable on Python 3 27 | aks_clusters = az_cli(['aks', 'list']) 28 | filtered_aks = [c for c in aks_clusters
if c['resourceGroup'].lower() == aks_resourcegroup.lower() and c['name'] == target_aks_name] 29 | 30 | if not filtered_aks: 31 | raise CLIError( 32 | 'AKS Cluster with name {0} not found'.format(target_aks_name)) 33 | 34 | target_aks = filtered_aks[0] 35 | print("Found target AKS cluster with name {0}".format(target_aks_name)) 36 | 37 | target_resource_group = target_aks['resourceGroup'] 38 | target_mc_resource_group = 'mc_{0}_{1}_{2}'.format(target_resource_group, target_aks['name'], target_aks['location']) 39 | target_location = target_aks['location'] 40 | 41 | ClusterInfo = namedtuple('ClusterInfo', 'acs_resource_group aks_resource_group aks_mc_resource_group acs_name aks_name acs_location aks_location') 42 | 43 | info = ClusterInfo(acs_resource_group=source_resource_group, acs_name=source_acs_name, 44 | aks_mc_resource_group=target_mc_resource_group, aks_resource_group=target_resource_group, 45 | aks_name=target_aks_name, acs_location=source_location, aks_location=target_location) 46 | 47 | return info 48 | 49 | 50 | def get_aks_clusters_info(source_aks_name, source_aks_resourcegroup, target_aks_name, target_aks_resourcegroup): 51 | 52 | aks_clusters = az_cli(['aks', 'list']) 53 | 54 | # Target AKS cluster info 55 | 56 | filtered_target_aks = [c for c in aks_clusters if c['resourceGroup'].lower() == target_aks_resourcegroup.lower() and c['name'] == target_aks_name] 57 | 58 | if not filtered_target_aks: 59 | raise CLIError( 60 | 'Target AKS Cluster with name {0} not found'.format(target_aks_name)) 61 | 62 | target_aks = filtered_target_aks[0] 63 | print("Found target AKS cluster with name {0}".format(target_aks_name)) 64 | 65 | target_resource_group = target_aks['resourceGroup'] 66 | target_mc_resource_group = 'mc_{0}_{1}_{2}'.format(target_resource_group, target_aks['name'], target_aks['location']) 67 | target_location = target_aks['location'] 68 | 69 | # Source AKS cluster info 70 | filtered_source_aks = [c for c in aks_clusters if c['resourceGroup'].lower() == source_aks_resourcegroup.lower() and c['name'] == source_aks_name] 71 | 72 | if not filtered_source_aks: 73 | raise CLIError( 74 | 'Source AKS Cluster with name {0} not found'.format(source_aks_name)) 75 | 76 | source_aks = filtered_source_aks[0] 77 | print("Found source AKS cluster with name {0}".format(source_aks_name)) 78 | 79 | source_resource_group = source_aks['resourceGroup'] 80 | source_mc_resource_group = 'mc_{0}_{1}_{2}'.format(source_resource_group, source_aks['name'], source_aks['location']) 81 | source_location = source_aks['location'] 82 | 83 | 84 | 85 | ClusterInfo = namedtuple('ClusterInfo', 'source_aks_resource_group source_aks_mc_resource_group target_aks_resource_group target_aks_mc_resource_group source_cluster_name target_cluster_name source_aks_location target_aks_location') 86 | 87 | info = ClusterInfo(source_aks_resource_group=source_resource_group, source_aks_mc_resource_group=source_mc_resource_group, source_cluster_name=source_aks_name, 88 | target_aks_mc_resource_group=target_mc_resource_group, target_aks_resource_group=target_resource_group, 89 | target_cluster_name=target_aks_name, source_aks_location=source_location, target_aks_location=target_location) 90 | 91 | return info 92 | 93 | 94 | 95 | def get_kubeconfig(cli_service, name, resource_group): 96 | output_path = os.path.join(os.getcwd(), str(uuid.uuid4())) 97 | 98 | if cli_service.lower() == 'aks': 99 | az_cli([cli_service, 'get-credentials', '-n', name, '-g', resource_group, '-f', output_path]) 100 | elif cli_service.lower() == 'acs': 101 | az_cli([cli_service,
'kubernetes', 'get-credentials', '-n', name, '-g', resource_group, '-f', output_path]) 102 | 103 | if not os.path.isfile(output_path): 104 | raise CLIError( 105 | 'Failed getting credentials for {0} cluster with cmd {1}'.format(cli_service, name)) 106 | 107 | return output_path 108 | 109 | 110 | def get_pvs_from_source(kubeconfig_path): 111 | pvs = kubewrapper.get_persistent_volumes(kubeconfig_path) 112 | 113 | if not pvs: 114 | raise CLIError( 115 | "No Persistent Volumes found to migrate") 116 | 117 | return pvs 118 | 119 | 120 | def export_cluster_to_dir(kubeconfig_path, output_dir = os.getcwd()): 121 | kubewrapper.export_cluster_to_dir(kubeconfig_path, output_dir) 122 | print("Cluster exported successfully") 123 | 124 | 125 | def copy_volumes(source_acs_name, target_aks_name, acs_resourcegroup, aks_resourcegroup, source_kubeconfig=None, target_kubeconfig=None): 126 | clusters_info = get_clusters_info(source_acs_name, acs_resourcegroup, target_aks_name, aks_resourcegroup) 127 | 128 | remove_source_config = False 129 | remove_target_config = False 130 | 131 | if not source_kubeconfig: 132 | source_kubeconfig = get_kubeconfig('acs', clusters_info.acs_name, clusters_info.acs_resource_group) 133 | remove_source_config = True 134 | 135 | source_pvs = get_pvs_from_source(source_kubeconfig) 136 | 137 | print("Starting migration of {0} Persistent Volumes from ACS cluster {1} in {2} to AKS cluster {3} in {4}".format(len(source_pvs), clusters_info.acs_name, 138 | clusters_info.acs_location, clusters_info.aks_name, clusters_info.aks_location)) 139 | 140 | for pv in source_pvs: 141 | if pv.spec.azure_disk.kind.lower() == 'managed': 142 | disk_name = pv.spec.azure_disk.disk_name 143 | disk = storage.copy_disk_to_disk(clusters_info.acs_resource_group, disk_name ,clusters_info.aks_mc_resource_group) 144 | pv.target_disk_name = disk['name'] 145 | pv.target_disk_uri = disk['id'] 146 | else: 147 | disk_uri = pv.spec.azure_disk.disk_uri 148 | disk = storage.copy_vhd_to_disk(disk_uri, clusters_info.aks_mc_resource_group) 149 | pv.target_disk_uri = disk['creationData']['sourceUri'] 150 | print("Disks migration successful") 151 | print("Starting Persistent Volume creation on target cluster") 152 | 153 | if not target_kubeconfig: 154 | target_kubeconfig = get_kubeconfig('aks', clusters_info.aks_name, clusters_info.aks_resource_group) 155 | remove_target_config = True 156 | 157 | # this is a shady business right here. 
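# Re-create each source PV on the target cluster, pointing its azureDisk source at the
# copied disk; kubewrapper.create_pv_from_current_pv reads the target_disk_name /
# target_disk_uri fields populated above. Note that pv_copy_result below only reflects
# the last PV processed.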
158 | for pv in source_pvs: 159 | pv_copy_result = kubewrapper.create_pv_from_current_pv(target_kubeconfig, pv) 160 | 161 | if pv_copy_result: 162 | print("Persistent Volume migration successful") 163 | else: 164 | print("Failed migrating Persistent Volumes") 165 | 166 | if remove_source_config: 167 | os.remove(source_kubeconfig) 168 | 169 | if remove_target_config: 170 | os.remove(target_kubeconfig) 171 | 172 | print("All done") 173 | 174 | 175 | def copy_aks_volumes(source_aks_name, target_aks_name, source_aks_resourcegroup, target_aks_resourcegroup, source_kubeconfig=None, target_kubeconfig=None): 176 | clusters_info = get_aks_clusters_info(source_aks_name, source_aks_resourcegroup, target_aks_name, target_aks_resourcegroup) 177 | 178 | remove_source_config = False 179 | remove_target_config = False 180 | 181 | if not source_kubeconfig: 182 | source_kubeconfig = get_kubeconfig('aks', clusters_info.source_cluster_name, clusters_info.source_aks_resource_group) 183 | remove_source_config = True 184 | 185 | source_pvs = get_pvs_from_source(source_kubeconfig) 186 | 187 | print("Starting migration of {0} Persistent Volumes from AKS cluster {1} in {2} to AKS cluster {3} in {4}".format(len(source_pvs), clusters_info.source_cluster_name, 188 | clusters_info.source_aks_location, clusters_info.target_cluster_name, clusters_info.target_aks_location)) 189 | 190 | for pv in source_pvs: 191 | if pv.spec.azure_disk.kind.lower() == 'managed': 192 | disk_name = pv.spec.azure_disk.disk_name 193 | disk = storage.copy_disk_to_disk(clusters_info.source_aks_mc_resource_group, disk_name, clusters_info.target_aks_mc_resource_group) 194 | pv.target_disk_name = disk['name'] 195 | pv.target_disk_uri = disk['id'] 196 | else: 197 | disk_uri = pv.spec.azure_disk.disk_uri 198 | disk = storage.copy_vhd_to_disk(disk_uri, clusters_info.target_aks_mc_resource_group) 199 | pv.target_disk_uri = disk['creationData']['sourceUri'] 200 | print("Disks migration successful") 201 | print("Starting Persistent Volume creation on target cluster") 202 | 203 | if not target_kubeconfig: 204 | target_kubeconfig = get_kubeconfig('aks', clusters_info.target_cluster_name, clusters_info.target_aks_resource_group) 205 | remove_target_config = True 206 | 207 | # this is a shady business right here.
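# As in copy_volumes above: re-create each PV on the target AKS cluster against its
# copied disk; pv_copy_result below reflects only the last PV processed.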
208 | for pv in source_pvs: 209 | pv_copy_result = kubewrapper.create_pv_from_current_pv(target_kubeconfig, pv) 210 | 211 | if pv_copy_result: 212 | print("Persistent Volume migration successful") 213 | else: 214 | print("Failed migrating Persistent Volumes") 215 | 216 | if remove_source_config: 217 | os.remove(source_kubeconfig) 218 | 219 | if remove_target_config: 220 | os.remove(target_kubeconfig) 221 | 222 | print("All done") -------------------------------------------------------------------------------- /azext_kube/kubewrapper.py: -------------------------------------------------------------------------------- 1 | from kubernetes import client, config 2 | from kubernetes.client.rest import ApiException 3 | import json 4 | import re 5 | import os 6 | 7 | type_map = {'kubernetes.client.models.v1_replication_controller.V1ReplicationController': 'ReplicationController', 8 | 'kubernetes.client.models.v1_limit_range.V1LimitRange': 'LimitRange', 'kubernetes.client.models.v1_service.V1Service': 'Service', 9 | 'kubernetes.client.models.v1_config_map.V1ConfigMap': 'ConfigMap', 'kubernetes.client.models.v1_resource_quota.V1ResourceQuota': 'ResourceQuota', 10 | 'kubernetes.client.models.v1_daemon_set': 'DaemonSet', 'kubernetes.client.models.v1_deployment.V1Deployment': 'Deployment', 11 | 'kubernetes.client.models.v1_stateful_set.V1StatefulSet': 'StatefulSet', 'kubernetes.client.models.v1_horizontal_pod_autoscaler': 'V1HorizontalPodAutoscaler', 12 | 'kubernetes.client.models.v1_secret.V1Secret': 'Secret', 'kubernetes.client.models.v1_namespace.V1Namespace': 'Namespace', 13 | 'kubernetes.client.models.v1beta1_storage_class.V1beta1StorageClass': 'StorageClass'} 14 | 15 | def get_persistent_volumes(kubeconfig_path): 16 | config.load_kube_config(kubeconfig_path) 17 | v1 = client.CoreV1Api() 18 | pvs = v1.list_persistent_volume() 19 | return pvs.items 20 | 21 | 22 | def create_pv_from_current_pv(kubeconfig_path, pv): 23 | config.load_kube_config(kubeconfig_path) 24 | v1 = client.CoreV1Api() 25 | 26 | newpv = client.V1PersistentVolume() 27 | newpv.kind = "PersistentVolume" 28 | newpv.api_version = "v1" 29 | newpv.metadata = client.V1ObjectMeta() 30 | newpv.metadata.name = pv.metadata.name 31 | 32 | newpv.spec = client.V1PersistentVolumeSpec() 33 | newpv.spec.capacity = pv.spec.capacity 34 | newpv.spec.access_modes = pv.spec.access_modes 35 | newpv.spec.azure_disk = pv.spec.azure_disk 36 | newpv.spec.azure_disk.disk_uri = pv.target_disk_uri 37 | if pv.spec.azure_disk.kind.lower() == 'managed': 38 | newpv.spec.azure_disk.disk_name = pv.target_disk_name 39 | else: 40 | newpv.spec.azure_disk.disk_uri = pv.target_disk_uri 41 | try: 42 | v1.create_persistent_volume(newpv) 43 | except ApiException: 44 | return False 45 | 46 | return True 47 | 48 | def get_cluster_resources(kubeconfig_path): 49 | config.load_kube_config(kubeconfig_path) 50 | coreV1 = client.CoreV1Api() 51 | extV1 = client.ExtensionsV1beta1Api() 52 | appsV1 = client.AppsV1beta1Api() 53 | storageV1 = client.StorageV1beta1Api() 54 | autoscaleV1 = client.AutoscalingV1Api() 55 | 56 | fields_to_remove_for_ns = ['status', 'uid', 'selfLink', 'resourceVersion', 'creationTimestamp', 'generation'] 57 | fields_to_remove_for_resources = ['clusterIP', 'claimRef', 'securityContext', 'terminationGracePeriodSeconds', 'restartPolicy', 'nodePort', 'dnsPolicy'] 58 | fields_to_remove_for_resources.extend(fields_to_remove_for_ns) 59 | 60 | ns_exclusions = ['kube-system'] 61 | 62 | response = coreV1.list_namespace() 63 | all_namespaces = 
client.ApiClient().sanitize_for_serialization(fix_kubernetes_objects(response.items)) 64 | namespaces = [n for n in all_namespaces if n['metadata']['name'] not in ns_exclusions] 65 | 66 | resources = [] 67 | 68 | storage_classes = client.ApiClient().sanitize_for_serialization(fix_kubernetes_objects(storageV1.list_storage_class().items)) 69 | resources.extend(storage_classes) 70 | 71 | list_of_operations = [coreV1.list_namespaced_replication_controller,coreV1.list_namespaced_limit_range,coreV1.list_namespaced_service, 72 | coreV1.list_namespaced_config_map,coreV1.list_namespaced_resource_quota,extV1.list_namespaced_daemon_set,extV1.list_namespaced_deployment, 73 | appsV1.list_namespaced_stateful_set,autoscaleV1.list_namespaced_horizontal_pod_autoscaler,coreV1.list_namespaced_secret] 74 | 75 | 76 | for ns in namespaces: 77 | __delete_keys_from_dict__(ns, fields_to_remove_for_ns) 78 | ns_name = ns['metadata']['name'] 79 | 80 | for op in list_of_operations: 81 | res = client.ApiClient().sanitize_for_serialization(fix_kubernetes_objects(op(ns_name).items)) 82 | resources.extend(res) 83 | 84 | resources.append(ns) 85 | 86 | 87 | filtered = [] 88 | 89 | for r in resources: 90 | if 'type' in r and r['type'] == 'kubernetes.io/service-account-token': 91 | continue 92 | 93 | # bypass kubernetes python client bug... again. manually detect StorageClasses... 94 | if 'provisioner' in r: 95 | r['apiVersion'] = 'storage.k8s.io/v1' 96 | else: 97 | r['apiVersion'] = 'v1' 98 | 99 | __delete_keys_from_dict__(r, fields_to_remove_for_resources) 100 | filtered.append(r) 101 | 102 | return filtered 103 | 104 | 105 | def fix_kubernetes_objects(items, api_version = 'v1'): 106 | # bypass kubernetes python client bug... Kind is not returned for list operations 107 | for item in items: 108 | item_type = str(type(item)) 109 | item.kind = type_map[re.findall(r"'(.*?)'", item_type, re.DOTALL)[0]] 110 | 111 | return items 112 | 113 | def export_cluster_to_dir(kubeconfig_path, output_dir): 114 | resources = get_cluster_resources(kubeconfig_path) 115 | items_obj = {'kind': 'List', 'items': resources, 'apiVersion': 'v1'} 116 | 117 | if not os.path.isdir(output_dir): 118 | os.makedirs(output_dir) 119 | 120 | output_file = open(os.path.join(output_dir, "cluster.json"), 'w') 121 | output_file.write(json.dumps(items_obj)) 122 | 123 | output_file.close() 124 | 125 | 126 | def __delete_keys_from_dict__(dict_del, lst_keys): 127 | for k in lst_keys: 128 | try: 129 | del dict_del[k] 130 | except KeyError: 131 | pass 132 | for v in dict_del.values(): 133 | if isinstance(v, dict): 134 | __delete_keys_from_dict__(v, lst_keys) 135 | 136 | return dict_del 137 | -------------------------------------------------------------------------------- /azext_kube/storage.py: -------------------------------------------------------------------------------- 1 | # Taken from https://github.com/noelbundick/azure-cli-disk-copy-extension 2 | 3 | import random 4 | import re 5 | import time 6 | 7 | from knack.util import CLIError 8 | 9 | from .cli_utils import az_cli 10 | 11 | blob_regex = re.compile( 12 | 'https://(?P<storage_account>.*).blob.core.windows.net/(?P<container>.*)/(?P<blob>.*)') 13 | file_regex = re.compile(r'(?P<filename>.*)\.(?P<extension>.*)') 14 | 15 | 16 | def copy_vhd_to_disk(source_vhd_uri, target_resource_group_name, 17 | target_disk_name=None, target_disk_sku=None, temp_storage_account_name=None): 18 | # TODO: Write map of (blob uri):(managed disk resource id) to output file if specified 19 | # TODO: move validation to a dedicated validator 20 | 21 | # Ensure source blob uri is valid 22 | blob_match = 
blob_regex.match(source_vhd_uri) 23 | if not blob_match: 24 | raise CLIError('--source-uri did not match format of a blob URI') 25 | 26 | # Use blob file name if target disk name wasn't specified 27 | if target_disk_name is None: 28 | file_match = file_regex.match(blob_match.group('blob')) 29 | target_disk_name = file_match.group('filename') 30 | 31 | # Ensure target disk does not already exist 32 | target_disk = get_disk(target_resource_group_name, target_disk_name) 33 | if target_disk: 34 | raise CLIError('{0} already exists in resource group {1}. Cannot overwrite an existing disk'.format( 35 | target_disk_name, target_resource_group_name)) 36 | 37 | # Ensure that the target resource group does exist 38 | target_rg = assert_resource_group(target_resource_group_name) 39 | 40 | # Ensure that the source storage account exists 41 | source_storage_acct_name = blob_match.group('storage_account') 42 | source_storage_acct = assert_storage_account(source_storage_acct_name) 43 | 44 | # Use storage acct sku if disk sku wasn't specified 45 | if target_disk_sku is None: 46 | target_disk_sku = get_matching_disk_sku(source_storage_acct) 47 | 48 | # Ensure that a temp storage account exists if it was specified 49 | if temp_storage_account_name is not None: 50 | assert_storage_account(temp_storage_account_name) 51 | 52 | if source_storage_acct['location'].lower() == target_rg['location'].lower(): 53 | disk = sameregion_copy_vhd_to_disk( 54 | source_vhd_uri, source_storage_acct, target_resource_group_name, target_disk_name, target_disk_sku) 55 | else: 56 | disk = crossregion_copy_vhd_to_disk(source_vhd_uri, blob_match, source_storage_acct, 57 | target_rg, target_disk_name, target_disk_sku, temp_storage_account_name) 58 | 59 | return disk 60 | 61 | 62 | def create_or_use_storage_account(storage_account_name, resource_group_name): 63 | storage_account = az_cli(['storage', 'account', 'list', 64 | '--query', "[?name=='{0}'] | [0]".format(storage_account_name)]) 65 | if storage_account: 66 | return storage_account 67 | 68 | storage_account = az_cli(['storage', 'account', 'create', 69 | '-n', storage_account_name, 70 | '-g', resource_group_name, 71 | '--sku', 'Standard_LRS', 72 | '--https-only', 'true', 73 | '--tags', 'disk-copy-temp', 74 | '--encryption-services', 'blob']) 75 | return storage_account 76 | 77 | 78 | def create_blob_snapshot(blob_uri): 79 | blob_match = blob_regex.match(blob_uri) 80 | storage_account_name = blob_match.group('storage_account') 81 | storage_container = blob_match.group('container') 82 | blob_name = blob_match.group('blob') 83 | 84 | env = {} 85 | env['AZURE_STORAGE_ACCOUNT'] = storage_account_name 86 | blob_snapshot = az_cli(['storage', 'blob', 'snapshot', 87 | '-c', storage_container, 88 | '-n', blob_name], env=env) 89 | return blob_snapshot 90 | 91 | 92 | def create_blob_container(storage_account_name, container_name): 93 | env = {} 94 | env['AZURE_STORAGE_ACCOUNT'] = storage_account_name 95 | blob_container = az_cli(['storage', 'container', 'create', 96 | '-n', container_name], env=env) 97 | return blob_container 98 | 99 | 100 | def get_storage_account_key(storage_account_rg, storage_account_name): 101 | key = az_cli(['storage', 'account', 'keys', 'list', 102 | '-g', storage_account_rg, 103 | '-n', storage_account_name]) 104 | return key[0]['value'] 105 | 106 | 107 | def start_blob_copy(source_resource_group, source_storage_account_name, source_container, source_blob, source_snapshot, 108 | target_storage_account_name, destination_container=None, destination_blob=None): 109 | 
source_storage_account_key = get_storage_account_key( 110 | source_resource_group, source_storage_account_name) 111 | 112 | if destination_container is None: 113 | destination_container = source_container 114 | 115 | if destination_blob is None: 116 | destination_blob = source_blob 117 | 118 | env = {} 119 | env['AZURE_STORAGE_ACCOUNT'] = target_storage_account_name 120 | blob_copy = az_cli(['storage', 'blob', 'copy', 'start', 121 | '--source-account-name', source_storage_account_name, 122 | '--source-account-key', source_storage_account_key, 123 | '--source-container', source_container, 124 | '--source-blob', source_blob, 125 | '--source-snapshot', source_snapshot, 126 | '--destination-container', destination_container, 127 | '--destination-blob', destination_blob], env=env) 128 | return blob_copy 129 | 130 | 131 | def get_storage_blob(blob_uri): 132 | blob_match = blob_regex.match(blob_uri) 133 | storage_account_name = blob_match.group('storage_account') 134 | storage_container = blob_match.group('container') 135 | blob_name = blob_match.group('blob') 136 | 137 | env = {} 138 | env['AZURE_STORAGE_ACCOUNT'] = storage_account_name 139 | blob = az_cli(['storage', 'blob', 'show', 140 | '-c', storage_container, 141 | '-n', blob_name], env=env) 142 | return blob 143 | 144 | 145 | def create_disk_from_blob(blob_uri, resource_group_name, disk_name, disk_sku): 146 | disk = az_cli(['disk', 'create', 147 | '-n', disk_name, 148 | '-g', resource_group_name, 149 | '--sku', disk_sku, 150 | '--source', blob_uri]) 151 | return disk 152 | 153 | 154 | def delete_blob_snapshot(blob_uri, snapshot): 155 | blob_match = blob_regex.match(blob_uri) 156 | storage_account_name = blob_match.group('storage_account') 157 | storage_container = blob_match.group('container') 158 | blob_name = blob_match.group('blob') 159 | 160 | env = {} 161 | env['AZURE_STORAGE_ACCOUNT'] = storage_account_name 162 | az_cli(['storage', 'blob', 'delete', 163 | '-c', storage_container, 164 | '-n', blob_name, 165 | '--snapshot', snapshot], env=env) 166 | 167 | def crossregion_copy_vhd_to_disk(source_vhd_uri, blob_match, source_storage_acct, 168 | target_rg, target_disk_name, target_disk_sku, 169 | temp_storage_account_name=None): 170 | # Create a blob snapshot 171 | blob_snapshot = create_blob_snapshot(source_vhd_uri) 172 | 173 | # Use a temporary storage account to copy the snapshot 174 | temp_storage_created = False 175 | if temp_storage_account_name is None: 176 | # TODO: reuse a temp storage account if one isn't specified & one already exists in the target RG. 
Use tags 177 | temp_storage_account_name = 'diskcopytemp{0}'.format( 178 | random.randint(0, 100000)) 179 | temp_storage_created = True 180 | temp_storage_acct = create_or_use_storage_account( 181 | temp_storage_account_name, target_rg['name']) 182 | create_blob_container(temp_storage_account_name, 183 | blob_match.group('container')) 184 | 185 | # Copy the blob across regions 186 | start_blob_copy(source_storage_acct['resourceGroup'], source_storage_acct['name'], blob_match.group( 187 | 'container'), blob_match.group('blob'), blob_snapshot['snapshot'], temp_storage_account_name) 188 | temp_blob_uri = 'https://{0}.blob.core.windows.net/{1}/{2}'.format( 189 | temp_storage_account_name, blob_match.group('container'), blob_match.group('blob')) 190 | 191 | # TODO: for very long running copies, it might be better to register a function + event grid listener, but this works to start 192 | while True: 193 | temp_blob = get_storage_blob(temp_blob_uri) 194 | copy_status = temp_blob['properties']['copy']['status'] 195 | copy_progress = temp_blob['properties']['copy']['progress'] 196 | if copy_status == 'success': 197 | break 198 | time.sleep(5) 199 | # TODO: Cleanup storage accounts and snapshots 200 | # Create a disk from the temporary blob 201 | disk = create_disk_from_blob( 202 | temp_blob_uri, target_rg['name'], target_disk_name, target_disk_sku) 203 | 204 | # Clean up blob snapshot 205 | delete_blob_snapshot(source_vhd_uri, blob_snapshot['snapshot']) 206 | 207 | # Clean up temp storage account if auto-created 208 | if temp_storage_created: 209 | delete_resource(temp_storage_acct['id']) 210 | 211 | return disk 212 | 213 | 214 | def create_snapshot_from_blob(snapshot_name, resource_group_name, blob_uri): 215 | snapshot = az_cli(['snapshot', 'create', 216 | '-n', snapshot_name, 217 | '-g', resource_group_name, 218 | '--tags', 'disk-copy-temp', 219 | '--source', blob_uri]) 220 | return snapshot 221 | 222 | 223 | def delete_resource(resource_id): 224 | az_cli(['resource', 'delete', 225 | '--ids', resource_id]) 226 | 227 | 228 | def sameregion_copy_vhd_to_disk(source_vhd_uri, source_storage_acct, target_resource_group_name, target_disk_name, target_disk_sku): 229 | # Create a disk from a snapshot 230 | snapshot_name = 'snapshot_{0}'.format(random.randint(0, 100000)) 231 | snapshot = create_snapshot_from_blob( 232 | snapshot_name, source_storage_acct['resourceGroup'], source_vhd_uri) 233 | disk = create_disk_from_snapshot( 234 | snapshot['id'], target_resource_group_name, target_disk_name, target_disk_sku) 235 | 236 | # Clean up snapshot 237 | delete_resource(snapshot['id']) 238 | 239 | return disk 240 | 241 | 242 | def get_sas_for_snapshot(snapshot_id): 243 | sas = az_cli(['snapshot', 'grant-access', '--duration-in-seconds', '86400', 244 | '--ids', snapshot_id]) 245 | return sas['accessSas'] 246 | 247 | def create_snapshot_from_disk(snapshot_name, resource_group_name, disk_name): 248 | snapshot = az_cli(['snapshot', 'create', 249 | '-n', snapshot_name, 250 | '-g', resource_group_name, 251 | '--tags', 'disk-copy-temp', 252 | '--source', disk_name]) 253 | return snapshot 254 | 255 | 256 | def sameregion_copy_disk_to_disk(source_rg, source_disk_name, target_resource_group_name, target_disk_name, target_disk_sku): 257 | source_snapshot_name = 'snapshot_{0}'.format(random.randint(0, 100000)) 258 | source_snapshot = create_snapshot_from_disk(source_snapshot_name, source_rg['name'], source_disk_name) 259 | disk = create_disk_from_snapshot(source_snapshot['id'], target_resource_group_name, target_disk_name, 
target_disk_sku) 260 | 261 | # Clean up snapshot 262 | delete_resource(source_snapshot['id']) 263 | 264 | return disk 265 | 266 | def start_blob_copy_with_sas(snapshot_blob_sas, blob_name, target_storage_account_name, target_storage_account_key, target_storage_container_name): 267 | blob_copy = az_cli(['storage', 'blob', 'copy', 'start', 268 | '--source-uri', snapshot_blob_sas, 269 | '-b', blob_name, 270 | '--account-name', target_storage_account_name, 271 | '--account-key', target_storage_account_key, 272 | '-c', target_storage_container_name]) 273 | return blob_copy 274 | 275 | def wait_for_blob_success(blob_uri): 276 | while True: 277 | blob = get_storage_blob(blob_uri) 278 | copy_status = blob['properties']['copy']['status'] 279 | copy_progress = blob['properties']['copy']['progress'] 280 | 281 | if copy_status == 'success': 282 | break 283 | time.sleep(5) 284 | 285 | return blob 286 | 287 | def revoke_sas_for_snapshot(snapshot_id): 288 | az_cli(['snapshot', 'revoke-access', 289 | '--ids', snapshot_id]) 290 | 291 | def crossregion_copy_disk_to_disk(source_rg, source_disk_name, 292 | target_rg, target_disk_name, target_disk_sku, 293 | temp_storage_account_name=None): 294 | 295 | # Create a snapshot 296 | source_snapshot_name = 'snapshot_{0}'.format(random.randint(0, 100000)) 297 | source_snapshot = create_snapshot_from_disk(source_snapshot_name, source_rg['name'], source_disk_name) 298 | 299 | # Use a temporary storage account to copy the snapshot 300 | temp_storage_created = False 301 | if temp_storage_account_name is None: 302 | # TODO: reuse a temp storage account if one isn't specified & one already exists in the target RG. Use tags 303 | temp_storage_account_name = 'diskcopytemp{0}'.format(random.randint(0, 100000)) 304 | temp_storage_created = True 305 | temp_storage_acct = create_or_use_storage_account(temp_storage_account_name, target_rg['name']) 306 | temp_blob_container_name = 'disks' 307 | create_blob_container(temp_storage_account_name, temp_blob_container_name) 308 | 309 | # Generate a limited-time SAS url to access the source snapshot 310 | sas_url = get_sas_for_snapshot(source_snapshot['id']) 311 | 312 | # Copy the blob across regions 313 | temp_storage_acct_key = get_storage_account_key(temp_storage_acct['resourceGroup'], temp_storage_acct['name']) 314 | temp_blob_name = '{0}.vhd'.format(source_disk_name) 315 | start_blob_copy_with_sas(sas_url, temp_blob_name, temp_storage_account_name, temp_storage_acct_key, temp_blob_container_name) 316 | temp_blob_uri ='https://{0}.blob.core.windows.net/{1}/{2}'.format(temp_storage_account_name, temp_blob_container_name, temp_blob_name) 317 | 318 | # TODO: for very long running copies, it might be better to register a function + event grid listener, but this works to start 319 | wait_for_blob_success(temp_blob_uri) 320 | 321 | # Create a disk from the temporary blob 322 | disk = create_disk_from_blob(temp_blob_uri, target_rg['name'], target_disk_name, target_disk_sku) 323 | 324 | # Revoke SAS after copy 325 | revoke_sas_for_snapshot(source_snapshot['id']) 326 | 327 | # Clean up snapshot 328 | delete_resource(source_snapshot['id']) 329 | 330 | # Clean up temp storage account if auto-created 331 | if temp_storage_created: 332 | delete_resource(temp_storage_acct['id']) 333 | 334 | return disk 335 | 336 | def create_disk_from_snapshot(snapshot_id, resource_group_name, disk_name, disk_sku): 337 | disk = az_cli(['disk', 'create', 338 | '-n', disk_name, 339 | '-g', resource_group_name, 340 | '--sku', disk_sku, 341 | '--source', snapshot_id]) 
342 | return disk 343 | 344 | 345 | def get_matching_disk_sku(source_storage_acct): 346 | source_storage_acct_tier = source_storage_acct['sku']['tier'] 347 | if source_storage_acct_tier == 'Premium': 348 | disk_sku = 'Premium_LRS' 349 | else: 350 | disk_sku = 'Standard_LRS' 351 | return disk_sku 352 | 353 | 354 | def assert_resource_group(resource_group_name): 355 | resource_group = az_cli(['group', 'show', 356 | '-n', resource_group_name]) 357 | if not resource_group: 358 | raise CLIError( 359 | 'Resource group {0} not found'.format(resource_group_name)) 360 | 361 | return resource_group 362 | 363 | 364 | def assert_storage_account(storage_account_name): 365 | storage_account = az_cli(['storage', 'account', 'list', 366 | '--query', "[?name=='{0}'] | [0]".format(storage_account_name)]) 367 | if not storage_account: 368 | raise CLIError('Storage account {0} not found'.format( 369 | storage_account_name)) 370 | 371 | return storage_account 372 | 373 | 374 | def get_disk(resource_group_name, disk_name): 375 | disk = az_cli(['disk', 'show', 376 | '-n', disk_name, 377 | '-g', resource_group_name]) 378 | return disk 379 | 380 | def copy_disk_to_disk(source_resource_group_name, source_disk_name, target_resource_group_name, 381 | target_disk_name=None, target_disk_sku=None, temp_storage_account_name=None): 382 | # Check that source and destination resource groups exist 383 | target_rg = assert_resource_group(target_resource_group_name) 384 | source_rg = assert_resource_group(source_resource_group_name) 385 | 386 | # Ensure source disk exists 387 | source_disk = get_disk(source_resource_group_name, source_disk_name) 388 | if not source_disk: 389 | raise CLIError('{0} does not exist in resource group {1}.'.format(source_disk_name, source_resource_group_name)) 390 | 391 | # Use source disk name if target disk name wasn't specified 392 | if target_disk_name is None: 393 | target_disk_name = source_disk_name 394 | 395 | # Ensure target disk does not already exist 396 | target_disk = get_disk(target_resource_group_name, target_disk_name) 397 | if target_disk: 398 | raise CLIError('{0} already exists in resource group {1}. Cannot overwrite an existing disk'.format(target_disk_name, target_resource_group_name)) 399 | 400 | # Use source disk sku if target disk sku wasn't specified 401 | if target_disk_sku is None: 402 | target_disk_sku = source_disk['sku']['name'] 403 | 404 | # Ensure that a temp storage account exists if it was specified 405 | if temp_storage_account_name is not None: 406 | assert_storage_account(temp_storage_account_name) 407 | 408 | if source_rg['location'].lower() == target_rg['location'].lower(): 409 | disk = sameregion_copy_disk_to_disk(source_rg, source_disk_name, target_resource_group_name, target_disk_name, target_disk_sku) 410 | else: 411 | 412 | disk = crossregion_copy_disk_to_disk(source_rg, source_disk_name, target_rg, target_disk_name, target_disk_sku, temp_storage_account_name) 413 | 414 | return disk 415 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [bdist_wheel] 2 | universal=1 -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | # -------------------------------------------------------------------------------------------- 4 | # Copyright (c) Microsoft Corporation. All rights reserved. 
5 | # Licensed under the MIT License. See License.txt in the project root for license information. 6 | # -------------------------------------------------------------------------------------------- 7 | 8 | from setuptools import setup, find_packages 9 | 10 | VERSION = "0.0.1" 11 | 12 | CLASSIFIERS = [ 13 | 'Development Status :: 4 - Beta', 14 | 'Intended Audience :: Developers', 15 | 'Intended Audience :: System Administrators', 16 | 'Programming Language :: Python', 17 | 'Programming Language :: Python :: 2', 18 | 'Programming Language :: Python :: 2.7', 19 | 'Programming Language :: Python :: 3', 20 | 'Programming Language :: Python :: 3.4', 21 | 'Programming Language :: Python :: 3.5', 22 | 'Programming Language :: Python :: 3.6', 23 | 'License :: OSI Approved :: MIT License', 24 | ] 25 | 26 | DEPENDENCIES = [ 27 | 'kubernetes' 28 | ] 29 | 30 | setup( 31 | name='azure-kube-cli', 32 | version=VERSION, 33 | description='Azure Kubernetes Utils CLI Extension', 34 | long_description='An Azure CLI extension that allows for general purpose Kubernetes cluster operations', 35 | license='MIT', 36 | author='Microsoft', 37 | author_email='yaronsc@microsoft.com', 38 | url='https://github.com/yaron2/azure-kube-cli', 39 | classifiers=CLASSIFIERS, 40 | packages=find_packages(), 41 | install_requires=DEPENDENCIES 42 | ) --------------------------------------------------------------------------------
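For local development builds, the extension can also be packaged from this source tree and installed directly, instead of downloading a release. A minimal sketch, assuming the `wheel` package is available in your Python environment (the exact filename under `dist/` depends on the version declared in setup.py):

```bash
# Build a universal wheel into ./dist (setup.cfg sets universal=1)
python setup.py bdist_wheel

# Install the freshly built wheel as an Azure CLI extension
az extension add --source dist/azure_kube_cli-0.0.1-py2.py3-none-any.whl
```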