├── .gitignore ├── CONTRIBUTING.md ├── LICENSE.txt ├── NOTICE.txt ├── README.md ├── nsx_cleanup.py ├── nsx_config.py └── openshift-ansible-nsx ├── hosts ├── install.yaml ├── jumphost_prepare.yaml ├── ncp.yaml ├── ncp_prep.yaml ├── nsx_cleanup.yaml ├── openshift_install.yaml ├── roles ├── ncp │ └── tasks │ │ └── main.yaml ├── ncp_prep │ └── tasks │ │ └── main.yaml ├── nsx_cleanup │ └── tasks │ │ └── main.yaml ├── nsx_config │ └── tasks │ │ └── main.yaml └── openshift_installation │ └── tasks │ └── main.yaml └── vars └── global.yaml /.gitignore: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/nsx-integration-for-openshift/0e3c5b80ba30a46b989e9bbd13c50b8f6d900270/.gitignore -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | # Contributing to nsx-integration-for-openshift 4 | 5 | The nsx-integration-for-openshift project team welcomes contributions from the community. If you wish to contribute code and you have not 6 | signed our contributor license agreement (CLA), our bot will update the issue when you open a Pull Request. For any 7 | questions about the CLA process, please refer to our [FAQ](https://cla.vmware.com/faq). 
8 | 9 | ## Community 10 | 11 | ## Getting Started 12 | 13 | ## Contribution Flow 14 | 15 | This is a rough outline of what a contributor's workflow looks like: 16 | 17 | - Create a topic branch from where you want to base your work 18 | - Make commits of logical units 19 | - Make sure your commit messages are in the proper format (see below) 20 | - Push your changes to a topic branch in your fork of the repository 21 | - Submit a pull request 22 | 23 | Example: 24 | 25 | ``` shell 26 | git remote add upstream https://github.com/vmware/nsx-integration-for-openshift.git 27 | git checkout -b my-new-feature master 28 | git commit -a 29 | git push origin my-new-feature 30 | ``` 31 | 32 | ### Staying In Sync With Upstream 33 | 34 | When your branch gets out of sync with the vmware/master branch, use the following to update: 35 | 36 | ``` shell 37 | git checkout my-new-feature 38 | git fetch upstream 39 | git pull --rebase upstream master 40 | git push --force-with-lease origin my-new-feature 41 | ``` 42 | 43 | ### Updating pull requests 44 | 45 | If your PR fails to pass CI or needs changes based on code review, you'll most likely want to squash these changes into 46 | existing commits. 47 | 48 | If your pull request contains a single commit or your changes are related to the most recent commit, you can simply 49 | amend the commit. 50 | 51 | ``` shell 52 | git add . 53 | git commit --amend 54 | git push --force-with-lease origin my-new-feature 55 | ``` 56 | 57 | If you need to squash changes into an earlier commit, you can use: 58 | 59 | ``` shell 60 | git add . 61 | git commit --fixup <commit> 62 | git rebase -i --autosquash master 63 | git push --force-with-lease origin my-new-feature 64 | ``` 65 | 66 | Be sure to add a comment to the PR indicating your new changes are ready to review, as GitHub does not generate a 67 | notification when you `git push`. 
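The squash workflow above can be tried end-to-end in a scratch repository. This sketch (throwaway directory, example commit messages) shows `git commit --fixup` creating a `fixup!` commit and `git rebase --autosquash` folding it into the earlier commit non-interactively:

``` shell
# Throwaway demo of the --fixup/--autosquash flow described above.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo one > file.txt
git add file.txt; git commit -qm "Add file"
echo two > other.txt
git add other.txt; git commit -qm "Add other"
# Fix something that belongs in the first commit, then mark that commit
echo fixed > file.txt
git add file.txt
git commit -q --fixup :/"Add file"     # creates a "fixup! Add file" commit
# Accept the generated todo list as-is; autosquash has already reordered it
GIT_SEQUENCE_EDITOR=: git rebase -i --autosquash --root
git log --format=%s                    # "Add other" then "Add file"; the fixup is gone
```

`GIT_SEQUENCE_EDITOR=:` stands in for the interactive editor so the rebase runs unattended; in the real workflow you would omit it and rebase onto `master` rather than `--root`.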
68 | 69 | ### Code Style 70 | 71 | ### Formatting Commit Messages 72 | 73 | We follow the conventions on [How to Write a Git Commit Message](http://chris.beams.io/posts/git-commit/). 74 | 75 | Be sure to include any related GitHub issue references in the commit message. See 76 | [GFM syntax](https://guides.github.com/features/mastering-markdown/#GitHub-flavored-markdown) for referencing issues 77 | and commits. 78 | 79 | ## Reporting Bugs and Creating Issues 80 | 81 | When opening a new issue, try to roughly follow the commit message format conventions above. 82 | 83 | ## Repository Structure 84 | -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/nsx-integration-for-openshift/0e3c5b80ba30a46b989e9bbd13c50b8f6d900270/LICENSE.txt -------------------------------------------------------------------------------- /NOTICE.txt: -------------------------------------------------------------------------------- 1 | nsx-integration-for-openshift 2 | 3 | Copyright 2017 VMware, Inc. All Rights Reserved. 4 | 5 | This product is licensed to you under the Apache 2.0 license (the "License"). You may not use this product except in compliance with the Apache 2.0 License. 6 | 7 | This product may include a number of subcomponents with separate copyright notices and license terms. Your use of these subcomponents is subject to the terms and conditions of the subcomponent's license, as noted in the LICENSE file. 8 | 9 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | # nsx-integration-for-openshift 4 | This repository contains Ansible playbooks for installing NSX-T Container Plugin for OpenShift. 
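The playbooks in `openshift-ansible-nsx` are driven by the `hosts` inventory listed in the tree above. A minimal sketch of what such an inventory might contain (group names and addresses here are illustrative only, not taken from this repository; consult the shipped `hosts` file and the official guide for the real layout):

``` ini
# Hypothetical inventory sketch -- groups and IPs are examples only.
# The playbooks would then be run with e.g.: ansible-playbook -i hosts install.yaml
[masters]
10.0.0.10

[nodes]
10.0.0.10
10.0.0.11
10.0.0.12
```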
5 | 6 | ## Version 7 | The playbooks in the master branch are compatible with NSX-T Container Plugin 2.2.x and OpenShift Container Platform 3.7 and 3.9. 8 | 9 | ## Documentation 10 | See the [official guide](https://docs.vmware.com/en/VMware-NSX-T/2.2/nsxt_22_ncp_openshift.pdf). 11 | 12 | ## Contributing 13 | The nsx-integration-for-openshift project team welcomes contributions from the community. If you wish to contribute code and you have not 14 | signed our contributor license agreement (CLA), our bot will update the issue when you open a Pull Request. For any 15 | questions about the CLA process, please refer to our [FAQ](https://cla.vmware.com/faq). For more detailed information, 16 | refer to [CONTRIBUTING.md](CONTRIBUTING.md). 17 | 18 | ## License 19 | See the [License](LICENSE.txt). 20 | -------------------------------------------------------------------------------- /nsx_cleanup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | # Copyright 2015 VMware Inc 4 | # All Rights Reserved 5 | # 6 | # Licensed under the Apache License, Version 2.0 (the "License"); you may 7 | # not use this file except in compliance with the License. You may obtain 8 | # a copy of the License at 9 | # 10 | # http://www.apache.org/licenses/LICENSE-2.0 11 | # 12 | # Unless required by applicable law or agreed to in writing, software 13 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT 14 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the 15 | # License for the specific language governing permissions and limitations 16 | # under the License. 
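The script below identifies which NSX objects it may delete by scanning each object's tags: NCP marks everything it creates with a `{'scope': 'ncp/cluster', 'tag': <cluster-name>}` tag, and `NSXClient._is_ncp_resource` / `get_ncp_resources` select on exactly that. The selection logic can be sketched standalone (with made-up example data) as:

```python
# Standalone sketch of the tag-matching used by NSXClient below:
# NCP-owned objects carry a {'scope': 'ncp/cluster', 'tag': <cluster>} tag.

def is_ncp_resource(tags, cluster):
    """Return True if the tag list marks an object as owned by `cluster`."""
    return any(t.get('scope') == 'ncp/cluster' and t.get('tag') == cluster
               for t in tags)

def filter_ncp_resources(resources, cluster):
    """Keep only resources tagged as created by NCP for `cluster`."""
    return [r for r in resources
            if is_ncp_resource(r.get('tags', []), cluster)]

# Example: two logical switches, only one NCP-owned
switches = [
    {'display_name': 'ls-ncp',
     'tags': [{'scope': 'ncp/cluster', 'tag': 'occl-1'}]},
    {'display_name': 'ls-manual', 'tags': []},
]
print([s['display_name'] for s in filter_ncp_resources(switches, 'occl-1')])
# ['ls-ncp']
```

This is why the cleanup is safe to run against a shared NSX manager: untagged (manually created) objects are never selected for deletion.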
17 | 18 | import optparse 19 | import os 20 | import sys 21 | 22 | import requests 23 | from requests.packages.urllib3.exceptions import InsecurePlatformWarning 24 | from requests.packages.urllib3.exceptions import InsecureRequestWarning 25 | 26 | 27 | class NSXClient(object): 28 | """Base NSX REST client""" 29 | 30 | def __init__(self, host, username, password, nsx_cert, key, 31 | ca_cert, cluster, remove, t0_name, all_res): 32 | self.host = host 33 | self.username = username 34 | self.password = password 35 | self.nsx_cert = nsx_cert 36 | self.key = key 37 | self.use_cert = bool(self.nsx_cert and self.key) 38 | self.ca_cert = ca_cert 39 | self._cluster = cluster 40 | self._remove = remove 41 | self._t0_name = t0_name 42 | self._all_res = all_res 43 | self.resource_to_url = { 44 | 'TransportZone': '/transport-zones', 45 | 'LogicalRouter': '/logical-routers', 46 | 'IpBlock': '/pools/ip-blocks', 47 | 'IpPool': '/pools/ip-pools', 48 | 'LogicalSwitch': '/logical-switches', 49 | 'LogicalPort': '/logical-ports', 50 | 'LogicalRouterPort': '/logical-router-ports', 51 | 'VIF': '/fabric/vifs', 52 | 'VM': '/fabric/virtual-machines', 53 | 'LoadBalancerService': '/loadbalancer/services', 54 | 'FirewallSection': '/firewall/sections', 55 | 'NSGroup': '/ns-groups', 56 | 'IPSets': '/ip-sets', 57 | 'VirtualServer': '/loadbalancer/virtual-servers', 58 | 'LoadBalancerRule': '/loadbalancer/rules', 59 | 'LoadBalancerPool': '/loadbalancer/pools', 60 | 'IPSubnets': '/pools/ip-subnets', 61 | 'SwitchingProfile': '/switching-profiles', 62 | 'Certificates': '/trust-management/certificates', 63 | 'PersistenceProfile': '/loadbalancer/persistence-profiles' 64 | } 65 | self.header = {'X-Allow-Overwrite': 'true'} 66 | self.authenticate() 67 | self._t0 = self._get_tier0_routers() 68 | 69 | def _get_tier0_routers(self): 70 | all_t0_routers = self.get_logical_routers(tier='TIER0') 71 | if not self._t0_name: 72 | tier0_routers = self.get_ncp_resources(all_t0_routers) 73 | else: 74 | tier0_routers = 
[] 75 | for _t0_router in all_t0_routers: 76 | if self._t0_name == _t0_router['display_name']: 77 | tier0_routers.append(_t0_router) 78 | if not tier0_routers: 79 | raise Exception("Error: Missing cluster tier-0 router") 80 | if len(tier0_routers) > 1: 81 | raise Exception("Found %d tier-0 routers " % len(tier0_routers)) 82 | return tier0_routers[0] 83 | 84 | def _resource_url(self, resource_type): 85 | return self.host + '/api/v1' + self.resource_to_url[resource_type] 86 | 87 | def make_get_call(self, full_url): 88 | if self.use_cert: 89 | return requests.get('https://' + full_url, cert=(self.nsx_cert, 90 | self.key), 91 | headers=self.header, 92 | verify=False).json() 93 | else: 94 | return requests.get('https://' + full_url, auth=(self.username, 95 | self.password), 96 | headers=self.header, 97 | verify=False).json() 98 | 99 | def make_post_call(self, full_url, body): 100 | if self.use_cert: 101 | return requests.post('https://' + full_url, cert=(self.nsx_cert, 102 | self.key), 103 | headers=self.header, 104 | verify=False, json=body) 105 | else: 106 | return requests.post('https://' + full_url, auth=(self.username, 107 | self.password), 108 | headers=self.header, 109 | verify=False, json=body) 110 | 111 | def make_delete_call(self, full_url): 112 | if self.use_cert: 113 | return requests.delete('https://' + full_url, cert=(self.nsx_cert, 114 | self.key), 115 | headers=self.header, 116 | verify=False) 117 | else: 118 | return requests.delete('https://' + full_url, auth=(self.username, 119 | self.password), 120 | headers=self.header, 121 | verify=False) 122 | 123 | def get_resource_by_type(self, resource_type): 124 | resource_url = self._resource_url(resource_type) 125 | print(resource_url) 126 | res = [] 127 | r_json = self.make_get_call(resource_url) 128 | while 'cursor' in r_json: 129 | res += r_json['results'] 130 | url_with_paging = resource_url + '?' 
+ 'cursor=' + r_json['cursor'] 131 | r_json = self.make_get_call(url_with_paging) 132 | res += r_json['results'] 133 | return res 134 | 135 | def get_resource_by_type_and_id(self, resource_type, uuid): 136 | resource_url = self._resource_url(resource_type) + '/' + uuid 137 | print(resource_url) 138 | return self.make_get_call(resource_url) 139 | 140 | def get_resource_by_query_param(self, resource_type, query_param_type, 141 | query_param_id): 142 | resource_url = self._resource_url(resource_type) 143 | full_url = (resource_url + '/?' + 144 | query_param_type + '=' + query_param_id) 145 | print(full_url) 146 | return self.make_get_call(full_url) 147 | 148 | def get_resource_by_param(self, resource_type, param_type, param_val): 149 | resource_url = self._resource_url(resource_type) 150 | full_url = resource_url + '?' + param_type + '=' + param_val 151 | print(full_url) 152 | return self.make_get_call(full_url) 153 | 154 | def get_secondary_resource(self, resource_type, uuid, secondary_resource): 155 | resource_url = self._resource_url(resource_type) 156 | print(resource_url) 157 | full_url = resource_url + '/' + uuid + '/' + secondary_resource 158 | print(full_url) 159 | return self.make_get_call(full_url) 160 | 161 | def delete_secondary_resource_by_id( 162 | self, resource_type, uuid, secondary_resource, secondary_uuid): 163 | resource_url = self._resource_url(resource_type) 164 | full_url = (resource_url + '/' + uuid + '/' + secondary_resource + 165 | '/' + secondary_uuid) 166 | print(full_url) 167 | res = self.make_delete_call(full_url) 168 | if res.status_code != requests.codes.ok: 169 | raise Exception(res.text) 170 | 171 | def delete_resource_by_type_and_id(self, resource_type, uuid): 172 | resource_url = self._resource_url(resource_type) + '/' + uuid 173 | print(resource_url) 174 | res = self.make_delete_call(resource_url) 175 | if res.status_code != requests.codes.ok: 176 | raise Exception(res.text) 177 | 178 | def 
delete_resource_by_type_and_id_and_param(self, resource_type, uuid, 179 | param_type, param_val): 180 | resource_url = self._resource_url(resource_type) + '/' + uuid 181 | full_url = resource_url + '?' + param_type + '=' + param_val 182 | print(full_url) 183 | res = self.make_delete_call(full_url) 184 | if res.status_code != requests.codes.ok: 185 | raise Exception(res.text) 186 | 187 | # used to update with API calls: POST url/resource/uuid?para=para_val 188 | def update_resource_by_type_and_id_and_param(self, resource_type, uuid, 189 | param_type, param_val, body): 190 | resource_url = self._resource_url(resource_type) + '/' + uuid 191 | full_url = resource_url + '?' + param_type + '=' + param_val 192 | print(full_url) 193 | res = self.make_post_call(full_url, body) 194 | if res.status_code != requests.codes.ok: 195 | raise Exception(res.text) 196 | return res 197 | 198 | def get_logical_ports(self): 199 | """ 200 | Retrieve all logical ports on NSX backend 201 | """ 202 | return self.get_resource_by_type('LogicalPort') 203 | 204 | def get_ncp_logical_ports(self): 205 | """ 206 | Retrieve all logical ports created by NCP 207 | """ 208 | lports = self.get_ncp_resources( 209 | self.get_logical_ports()) 210 | return lports 211 | 212 | def _cleanup_logical_ports(self, lports): 213 | # logical port vif detachment 214 | for lport in lports: 215 | if self.is_node_lsp(lport): 216 | continue 217 | try: 218 | self.delete_resource_by_type_and_id_and_param( 219 | 'LogicalPort', lport['id'], 'detach', 'true') 220 | except Exception as e: 221 | print("ERROR: Failed to delete logical port %s, error %s" % 222 | (lport['id'], e)) 223 | else: 224 | print("Successfully deleted logical port %s" % lport['id']) 225 | 226 | def cleanup_ncp_logical_ports(self): 227 | """ 228 | Delete all logical ports created by NCP 229 | """ 230 | ncp_lports = self.get_ncp_logical_ports() 231 | print("Number of NCP Logical Ports to be deleted: %s" % 232 | len(ncp_lports)) 233 | if not self._remove: 234 
| return 235 | self._cleanup_logical_ports(ncp_lports) 236 | 237 | def is_node_lsp(self, lport): 238 | # Node LSP can be updated by NCP to be parent VIF type, but could also 239 | # be a normal VIF without context before NCP updates it 240 | if lport.get('attachment'): 241 | if (lport['attachment']['attachment_type'] == 'VIF' and 242 | (not lport['attachment']['context'] or 243 | lport['attachment']['context']['vif_type'] == 'PARENT')): 244 | return True 245 | return False 246 | 247 | def _is_ncp_resource(self, tags): 248 | return any(tag.get('scope') == 'ncp/cluster' and 249 | tag.get('tag') == self._cluster for tag in tags) 250 | 251 | def _is_ncp_ha_resource(self, tags): 252 | return any(tag.get('scope') == 'ncp/ha' and 253 | tag.get('tag') == 'true' for tag in tags) 254 | 255 | def _is_ncp_shared_resource(self, tags): 256 | return any(tag.get('scope') == 'ncp/shared_resource' and 257 | tag.get('tag') == 'true' for tag in tags) 258 | 259 | def get_ncp_resources(self, resources): 260 | """ 261 | Get all logical resources created by NCP 262 | """ 263 | ncp_resources = [r for r in resources if 'tags' in r 264 | if self._is_ncp_resource(r['tags'])] 265 | return ncp_resources 266 | 267 | def get_ncp_shared_resources(self, resources): 268 | """ 269 | Get all logical resources with ncp/cluster tag 270 | """ 271 | ncp_shared_resources = [r for r in resources if 'tags' in r 272 | if self._is_ncp_shared_resource(r['tags'])] 273 | return ncp_shared_resources 274 | 275 | def get_logical_switches(self): 276 | """ 277 | Retrieve all logical switches on NSX backend 278 | """ 279 | return self.get_resource_by_type('LogicalSwitch') 280 | 281 | def get_ncp_logical_switches(self): 282 | """ 283 | Retrieve all logical switches created from NCP 284 | """ 285 | lswitches = self.get_ncp_resources( 286 | self.get_logical_switches()) 287 | 288 | return lswitches 289 | 290 | def get_lswitch_ports(self, ls_id): 291 | """ 292 | Return all the logical ports that belong to this lswitch 293 | 
""" 294 | lports = self.get_logical_ports() 295 | return [p for p in lports if p['logical_switch_id'] == ls_id] 296 | 297 | def cleanup_ncp_logical_switches(self): 298 | """ 299 | Delete all logical switches created from NCP 300 | """ 301 | lswitches = self.get_ncp_logical_switches() 302 | print("Number of Logical Switches to be deleted: %s" % 303 | len(lswitches)) 304 | for ls in lswitches: 305 | # Check if there are still ports on switch and blow them away 306 | # An example here is a metadata proxy port (this is not stored 307 | # in the DB so we are unable to delete it when reading ports 308 | # from the DB) 309 | lports = self.get_lswitch_ports(ls['id']) 310 | if lports: 311 | print("Number of orphan Logical Ports to be " 312 | "deleted: %s for ls %s" % (len(lports), 313 | ls['display_name'])) 314 | if self._remove: 315 | self._cleanup_logical_ports(lports) 316 | if not self._remove: 317 | continue 318 | try: 319 | self.delete_resource_by_type_and_id_and_param( 320 | 'LogicalSwitch', ls['id'], 'cascade', 'true') 321 | except Exception as e: 322 | print("ERROR: Failed to delete logical switch %s-%s, " 323 | "error %s" % (ls['display_name'], ls['id'], e)) 324 | else: 325 | print("Successfully deleted logical switch %s-%s" % 326 | (ls['display_name'], ls['id'])) 327 | 328 | # Unconfigure nat rules in T0 329 | if 'ip_pool_id' not in ls: 330 | continue 331 | ip_pool_id = ls['ip_pool_id'] 332 | try: 333 | ip_pool = self.get_resource_by_type_and_id('IpPool', 334 | ip_pool_id) 335 | except Exception as e: 336 | # TODO: Needs to look into ncp log to see why 337 | # the pool is gone during k8s conformance test 338 | print("Failed to get ip_pool %s" % ip_pool_id) 339 | continue 340 | subnet, subnet_id = None, None 341 | for tag in ip_pool['tags']: 342 | if tag.get('scope') == "ncp/subnet": 343 | subnet = tag.get('tag') 344 | if tag.get('scope') == "ncp/subnet_id": 345 | subnet_id = tag.get('tag') 346 | 347 | # Remove router port to logical switch using router port client 
348 | try: 349 | rep = self.get_resource_by_query_param( 350 | 'LogicalRouterPort', 'logical_switch_id', ls['id']) 351 | lports = rep['results'] 352 | for lp in lports: 353 | self.delete_resource_by_type_and_id( 354 | 'LogicalRouterPort', lp['id']) 355 | except Exception as e: 356 | print("Failed to delete logical router port by logical " 357 | "switch %s : %s" % (ls['display_name'], e)) 358 | else: 359 | print("Successfully deleted logical router port by logical " 360 | "switch %s" % ls['display_name']) 361 | 362 | if not subnet or not subnet_id: 363 | continue 364 | t0_id = self._t0['id'] 365 | print("Unconfiguring nat rules for %s from t0" % subnet) 366 | try: 367 | snat_rules = self.get_secondary_resource( 368 | 'LogicalRouter', t0_id, 'nat/rules') 369 | ncp_snat_rules = self.get_ncp_resources(snat_rules['results']) 370 | ncp_snat_rule = None 371 | for snat_rule in ncp_snat_rules: 372 | if snat_rule['match_source_network'] == subnet: 373 | ncp_snat_rule = snat_rule 374 | break 375 | self.release_snat_external_ip(ncp_snat_rule) 376 | self.delete_secondary_resource_by_id( 377 | 'LogicalRouter', t0_id, 'nat/rules', ncp_snat_rule['id']) 378 | except Exception as e: 379 | print("ERROR: Failed to unconfigure nat rule for %s " 380 | "from t0: %s" % (subnet, e)) 381 | else: 382 | print("Successfully unconfigured nat rule for %s " 383 | "from t0" % subnet) 384 | 385 | # Finally delete the subnet and ip_pool 386 | try: 387 | print("Deleting ip_pool %s" % ip_pool['display_name']) 388 | self._cleanup_ip_pool(ip_pool) 389 | print("Deleting IP block subnet %s" % subnet) 390 | self.delete_resource_by_type_and_id('IPSubnets', subnet_id) 391 | except Exception as e: 392 | print("ERROR: Failed to delete %s, error %s" % 393 | (subnet, e)) 394 | else: 395 | print("Successfully deleted subnet %s" % subnet) 396 | 397 | def get_firewall_sections(self): 398 | """ 399 | Retrieve all firewall sections 400 | """ 401 | return self.get_resource_by_type('FirewallSection') 402 | 403 | def 
get_ncp_firewall_sections(self): 404 | """ 405 | Retrieve all firewall sections created from NCP 406 | """ 407 | fw_sections = self.get_ncp_resources( 408 | self.get_firewall_sections()) 409 | return fw_sections 410 | 411 | def cleanup_ncp_firewall_sections(self): 412 | """ 413 | Cleanup all firewall sections created from NCP 414 | """ 415 | fw_sections = self.get_ncp_firewall_sections() 416 | print("Number of Firewall Sections to be deleted: %s" % 417 | len(fw_sections)) 418 | if not self._remove: 419 | return 420 | for fw in fw_sections: 421 | try: 422 | self.delete_resource_by_type_and_id_and_param( 423 | 'FirewallSection', fw['id'], 'cascade', 'true') 424 | except Exception as e: 425 | print("Failed to delete firewall section %s: %s" % 426 | (fw['display_name'], e)) 427 | else: 428 | print("Successfully deleted firewall section %s" % 429 | fw['display_name']) 430 | 431 | def get_ns_groups(self): 432 | return self.get_resource_by_type('NSGroup') 433 | 434 | def get_ns_ncp_groups(self): 435 | """ 436 | Retrieve all NSGroups on NSX backend 437 | """ 438 | ns_groups = self.get_ncp_resources(self.get_ns_groups()) 439 | return ns_groups 440 | 441 | def cleanup_ncp_ns_groups(self): 442 | """ 443 | Cleanup all NSGroups created by NCP 444 | """ 445 | ns_groups = self.get_ns_ncp_groups() 446 | print("Number of NSGroups to be deleted: %s" % len(ns_groups)) 447 | if not self._remove: 448 | return 449 | for nsg in ns_groups: 450 | try: 451 | self.delete_resource_by_type_and_id_and_param( 452 | 'NSGroup', nsg['id'], 'force', 'true') 453 | except Exception as e: 454 | print("Failed to delete NSGroup: %s: %s" % 455 | (nsg['display_name'], e)) 456 | else: 457 | print("Successfully deleted NSGroup: %s" % 458 | nsg['display_name']) 459 | 460 | def _escape_data(self, data): 461 | # ElasticSearch query_string requires slashes and dashes to 462 | # be escaped. 
We assume no other reserved character will be 463 | # used in tag scopes or values 464 | return data.replace('/', '\\/').replace('-', '\\-') 465 | 466 | def get_ip_sets(self): 467 | return self.get_resource_by_type('IPSets') 468 | 469 | def get_ncp_ip_sets(self): 470 | ip_sets = self.get_ncp_resources(self.get_ip_sets()) 471 | return ip_sets 472 | 473 | def cleanup_ncp_ip_sets(self): 474 | """ 475 | Cleanup all IP Sets created by NCP 476 | """ 477 | ip_sets = self.get_ncp_ip_sets() 478 | print("Number of IP-Sets to be deleted: %d" % len(ip_sets)) 479 | if not self._remove: 480 | return 481 | for ip_set in ip_sets: 482 | try: 483 | self.delete_resource_by_type_and_id_and_param( 484 | 'IPSets', ip_set['id'], 'force', 'true') 485 | except Exception as e: 486 | print("Failed to delete IPSet: %s: %s" % 487 | (ip_set['display_name'], e)) 488 | else: 489 | print("Successfully deleted IPSet: %s" % 490 | ip_set['display_name']) 491 | 492 | def get_logical_routers(self, tier=None): 493 | """ 494 | Retrieve all the logical routers based on router type. If tier 495 | is None, it will return all logical routers. 496 | """ 497 | lrouters = self.get_resource_by_type('LogicalRouter') 498 | if tier: 499 | lrouters = [router for router in lrouters 500 | if router['router_type'] == tier] 501 | return lrouters 502 | 503 | def get_logical_routers_by_uuid(self, uuid): 504 | """ 505 | Retrieve the logical router with specified UUID. 
506 | """ 507 | return self.get_resource_by_type_and_id('LogicalRouter', uuid) 508 | 509 | def get_ncp_logical_routers(self): 510 | """ 511 | Retrieve all logical routers created by NCP 512 | """ 513 | lrouters = self.get_logical_routers() 514 | return self.get_ncp_resources(lrouters) 515 | 516 | def get_logical_router_ports(self, lrouter): 517 | """ 518 | Get all logical ports attached to lrouter 519 | """ 520 | return self.get_resource_by_param('LogicalRouterPort', 521 | 'logical_router_id', 522 | lrouter['id'])['results'] 523 | 524 | def get_ncp_logical_router_ports(self, lrouter): 525 | """ 526 | Retrieve all logical router ports created by NCP 527 | """ 528 | lports = self.get_logical_router_ports(lrouter) 529 | return self.get_ncp_resources(lports) 530 | 531 | def cleanup_logical_router_ports(self, lrouter): 532 | """ 533 | Cleanup all logical ports on a logical router 534 | """ 535 | lports = self.get_ncp_logical_router_ports(lrouter) 536 | print("Number of logical router ports to be deleted: %s" % len(lports)) 537 | if not self._remove: 538 | return 539 | for lp in lports: 540 | try: 541 | self.delete_resource_by_type_and_id( 542 | 'LogicalRouterPort', lp['id']) 543 | except Exception as e: 544 | print("Failed to delete logical router port %s-%s, " 545 | "and response is %s" % 546 | (lp['display_name'], lp['id'], e)) 547 | else: 548 | print("Successfully deleted logical router port %s-%s" % 549 | (lp['display_name'], lp['id'])) 550 | 551 | def release_logical_router_external_ip(self, lr): 552 | external_ip = None 553 | external_pool_id = None 554 | if 'tags' in lr: 555 | for tag in lr['tags']: 556 | if tag.get('scope') == 'ncp/extpoolid': 557 | external_pool_id = tag.get('tag') 558 | if tag.get('scope') == 'ncp/snat_ip': 559 | external_ip = tag.get('tag') 560 | if not external_pool_id: 561 | return 562 | if not external_ip: 563 | return 564 | print("External ip %s to be released from pool %s" % 565 | (external_ip, 
external_pool_id)) 566 | if not self._remove: 567 | return 568 | try: 569 | body = {"allocation_id": external_ip} 570 | self.update_resource_by_type_and_id_and_param( 571 | 'IpPool', external_pool_id, 'action', 'RELEASE', 572 | body=body) 573 | except Exception as e: 574 | print("ERROR: Failed to release ip %s from external_pool %s, " 575 | "error %s" % (external_ip, external_pool_id, e)) 576 | else: 577 | print("Successfully release ip %s from external_pool %s" 578 | % (external_ip, external_pool_id)) 579 | 580 | def release_snat_external_ip(self, snat_rule): 581 | print("Releasing translated_network for snat %s" % snat_rule['id']) 582 | external_pool_id = None 583 | if 'tags' in snat_rule: 584 | for tag in snat_rule['tags']: 585 | if tag.get('scope') == 'ncp/extpoolid': 586 | external_pool_id = tag.get('tag') 587 | break 588 | if not external_pool_id: 589 | return 590 | external_ip = snat_rule.get('translated_network') 591 | if not external_ip: 592 | return 593 | print("External ip %s to be released from pool %s" % 594 | (external_ip, external_pool_id)) 595 | if not self._remove: 596 | return 597 | try: 598 | body = {"allocation_id": external_ip} 599 | self.update_resource_by_type_and_id_and_param( 600 | 'IpPool', external_pool_id, 'action', 'RELEASE', 601 | body=body) 602 | except Exception as e: 603 | print("ERROR: Failed to release ip %s from external_pool %s, " 604 | "error %s" % (external_ip, external_pool_id, e)) 605 | else: 606 | print("Successfully release ip %s from external_pool %s" 607 | % (external_ip, external_pool_id)) 608 | 609 | def cleanup_ncp_logical_routers(self): 610 | """ 611 | Delete all logical routers created by NCP 612 | To delete a logical router, we need to delete all logical 613 | ports on the router first. 
614 | We also need to release the ip assigned from external pool 615 | """ 616 | lrouters = self.get_ncp_logical_routers() 617 | print("Number of Logical Routers to be deleted: %s" % 618 | len(lrouters)) 619 | for lr in lrouters: 620 | self.cleanup_logical_router_ports(lr) 621 | self.release_logical_router_external_ip(lr) 622 | if not self._remove: 623 | continue 624 | if lr['router_type'] == 'TIER0': 625 | continue 626 | try: 627 | self.delete_resource_by_type_and_id_and_param( 628 | 'LogicalRouter', lr['id'], 'force', 'true') 629 | except Exception as e: 630 | print("ERROR: Failed to delete logical router %s-%s, " 631 | "error %s" % (lr['display_name'], lr['id'], e)) 632 | else: 633 | print("Successfully deleted logical router %s-%s" % 634 | (lr['display_name'], lr['id'])) 635 | 636 | def cleanup_ncp_router_ports(self): 637 | ncp_router_ports = self.get_ncp_resources( 638 | self.get_resource_by_type('LogicalRouterPort')) 639 | print("Number of orphan logical router ports to be deleted: %d" 640 | % len(ncp_router_ports)) 641 | if not self._remove: 642 | return 643 | for router_port in ncp_router_ports: 644 | try: 645 | self.delete_resource_by_type_and_id_and_param( 646 | 'LogicalRouterPort', router_port['id'], 'force', 'true') 647 | except Exception as e: 648 | print("Failed to delete logical router port %s-%s, " 649 | "and response is %s" % 650 | (router_port['display_name'], router_port['id'], e)) 651 | else: 652 | print("Successfully deleted logical router port %s-%s" % 653 | (router_port['display_name'], router_port['id'])) 654 | 655 | def cleanup_ncp_tier0_logical_ports(self): 656 | """ 657 | Delete all TIER0 logical router ports created by NCP 658 | Follows the same logic as delete_project in nsxapi 659 | """ 660 | tier1_routers = self.get_ncp_resources( 661 | self.get_logical_routers(tier='TIER1')) 662 | t0 = self._t0 663 | for t1 in tier1_routers: 664 | print("Router link port from %s to %s to be removed" % 665 | (t0['display_name'], t1['display_name'])) 
666 | try: 667 | self.remove_router_link_port(t1['id']) 668 | except Exception as e: 669 | print("Error removing router link port from %s to %s" % 670 | (t0['display_name'], t1['display_name']), e) 671 | else: 672 | if not self._remove: 673 | continue 674 | print("successfully remove link port for %s and %s" % 675 | (t1['display_name'], t0['display_name'])) 676 | 677 | def get_tier1_link_port(self, t1_uuid): 678 | logical_router_ports = self.get_resource_by_param( 679 | 'LogicalRouterPort', 'logical_router_id', t1_uuid)['results'] 680 | for port in logical_router_ports: 681 | if port['resource_type'] == 'LogicalRouterLinkPortOnTIER1': 682 | return port 683 | 684 | def remove_router_link_port(self, t1_uuid): 685 | tier1_link_port = self.get_tier1_link_port(t1_uuid) 686 | if not tier1_link_port: 687 | print("Warning: Logical router link port for tier1 router: %s " 688 | "not found at the backend", t1_uuid) 689 | return 690 | t1_link_port_id = tier1_link_port['id'] 691 | t0_link_port_id = ( 692 | tier1_link_port['linked_logical_router_port_id'].get('target_id')) 693 | print("Removing t1_link_port %s" % t1_link_port_id) 694 | print("Removing t0_link_port %s" % t0_link_port_id) 695 | if not self._remove: 696 | return 697 | self.delete_resource_by_type_and_id( 698 | 'LogicalRouterPort', t1_link_port_id) 699 | self.delete_resource_by_type_and_id( 700 | 'LogicalRouterPort', t0_link_port_id) 701 | 702 | def get_ip_pools(self): 703 | """ 704 | Retrieve all ip_pools on NSX backend 705 | """ 706 | return self.get_resource_by_type('IpPool') 707 | 708 | def get_ncp_get_ip_pools(self): 709 | """ 710 | Retrieve all logical switches created from NCP 711 | """ 712 | ip_pools = self.get_ncp_resources( 713 | self.get_ip_pools()) 714 | 715 | return ip_pools 716 | 717 | def _cleanup_ip_pool(self, ip_pool): 718 | if not ip_pool: 719 | return 720 | allocations = self.get_secondary_resource('IpPool', ip_pool['id'], 721 | 'allocations') 722 | print("Number of IPs to be released %s" % 
len(allocations)) 723 | if 'results' in allocations: 724 | for allocation in allocations['results']: 725 | allocated_ip = allocation['allocation_id'] 726 | body = {"allocation_id": allocated_ip} 727 | try: 728 | self.update_resource_by_type_and_id_and_param( 729 | 'IpPool', ip_pool['id'], 'action', 'RELEASE', 730 | body=body) 731 | except Exception as e: 732 | print("ERROR: Failed to release ip %s from Ip pool %s " 733 | "error: %s" % (allocated_ip, ip_pool['id'], e)) 734 | self.delete_resource_by_type_and_id_and_param('IpPool', ip_pool['id'], 735 | 'force', 'true') 736 | 737 | def cleanup_ncp_ip_pools(self): 738 | """ 739 | Delete all ip pools created from NCP 740 | """ 741 | ip_pools = self.get_ncp_get_ip_pools() 742 | print("Number of IP Pools to be deleted: %s" % 743 | len(ip_pools)) 744 | if not self._remove: 745 | return 746 | for ip_pool in ip_pools: 747 | if 'tags' in ip_pool: 748 | is_external = False 749 | for tag in ip_pool['tags']: 750 | if (tag.get('scope') == 'ncp/external' and 751 | tag.get('tag') == 'true'): 752 | is_external = True 753 | break 754 | if is_external: 755 | continue 756 | try: 757 | self._cleanup_ip_pool(ip_pool) 758 | except Exception as e: 759 | print("ERROR: Failed to delete ip pool %s:%s, " 760 | "error %s" % (ip_pool['display_name'], 761 | ip_pool['id'], e)) 762 | else: 763 | print("Successfully deleted ip pool %s-%s" % 764 | (ip_pool['display_name'], ip_pool['id'])) 765 | 766 | def cleanup_ncp_lb_services(self): 767 | lb_services = self.get_ncp_lb_services() 768 | print("Number of Loadbalance services to be deleted: %s" % 769 | len(lb_services)) 770 | if not self._remove: 771 | return 772 | for lb_svc in lb_services: 773 | try: 774 | self.delete_resource_by_type_and_id('LoadBalancerService', 775 | lb_svc['id']) 776 | except Exception as e: 777 | print("ERROR: Failed to delete lb_service %s-%s, error %s" % 778 | (lb_svc['display_name'], lb_svc['id'], e)) 779 | else: 780 | print("Successfully deleted lb_service %s-%s" % 781 | 
(lb_svc['display_name'], lb_svc['id']))
782 | 
783 | def get_ncp_lb_services(self):
784 | lb_services = self.get_lb_services()
785 | return self.get_ncp_resources(lb_services)
786 | 
787 | def get_lb_services(self):
788 | return self.get_resource_by_type('LoadBalancerService')
789 | 
790 | def cleanup_ncp_lb_virtual_servers(self):
791 | lb_virtual_servers = self.get_ncp_lb_virtual_servers()
792 | print("Number of loadbalancer virtual servers to be deleted: %s" %
793 | len(lb_virtual_servers))
794 | for lb_vs in lb_virtual_servers:
795 | self.release_lb_virtual_server_external_ip(lb_vs)
796 | if not self._remove:
797 | continue
798 | try:
799 | self.delete_resource_by_type_and_id('VirtualServer',
800 | lb_vs['id'])
801 | except Exception as e:
802 | print("ERROR: Failed to delete lb_virtual_server %s-%s, "
803 | "error %s" % (lb_vs['display_name'], lb_vs['id'], e))
804 | else:
805 | print("Successfully deleted lb_virtual_server %s-%s" %
806 | (lb_vs['display_name'], lb_vs['id']))
807 | 
808 | def release_lb_virtual_server_external_ip(self, lb_vs):
809 | if 'ip_address' not in lb_vs:
810 | return
811 | external_ip = lb_vs['ip_address']
812 | external_pool_id = None
813 | if 'tags' in lb_vs:
814 | for tag in lb_vs['tags']:
815 | if tag.get('scope') == 'ext_pool_id':
816 | external_pool_id = tag.get('tag')
817 | if not external_pool_id:
818 | return
819 | 
820 | print("Releasing external IP %s "
821 | "of lb virtual server %s-%s from external pool %s" %
822 | (external_ip, lb_vs['display_name'],
823 | lb_vs['id'], external_pool_id))
824 | if not self._remove:
825 | return
826 | try:
827 | body = {"allocation_id": external_ip}
828 | self.update_resource_by_type_and_id_and_param(
829 | 'IpPool', external_pool_id, 'action', 'RELEASE',
830 | body=body)
831 | except Exception as e:
832 | print("ERROR: Failed to release ip %s from external_pool %s, "
833 | "error %s" % (external_ip, external_pool_id, e))
834 | else:
835 | print("Successfully released ip %s from external_pool %s"
836 | % (external_ip, external_pool_id)) 837 | 838 | def get_ncp_lb_virtual_servers(self): 839 | lb_virtual_servers = self.get_virtual_servers() 840 | return self.get_ncp_resources(lb_virtual_servers) 841 | 842 | def get_virtual_servers(self): 843 | return self.get_resource_by_type('VirtualServer') 844 | 845 | def cleanup_ncp_lb_rules(self): 846 | lb_rules = self.get_ncp_lb_rules() 847 | print("Number of loadbalancer rules to be deleted: %s" % 848 | len(lb_rules)) 849 | if not self._remove: 850 | return 851 | for lb_rule in lb_rules: 852 | try: 853 | self.delete_resource_by_type_and_id('LoadBalancerRule', 854 | lb_rule['id']) 855 | except Exception as e: 856 | print("ERROR: Failed to delete lb_rule %s-%s, " 857 | "error %s" % (lb_rule['display_name'], 858 | lb_rule['id'], e)) 859 | else: 860 | print("Successfully deleted lb_rule %s-%s" % 861 | (lb_rule['display_name'], lb_rule['id'])) 862 | 863 | def get_ncp_lb_rules(self): 864 | lb_rules = self.get_lb_rules() 865 | return self.get_ncp_resources(lb_rules) 866 | 867 | def get_lb_rules(self): 868 | return self.get_resource_by_type('LoadBalancerRule') 869 | 870 | def cleanup_ncp_lb_pools(self): 871 | lb_pools = self.get_ncp_lb_pools() 872 | print("Number of loadbalancer pools to be deleted: %s" % 873 | len(lb_pools)) 874 | if not self._remove: 875 | return 876 | for lb_pool in lb_pools: 877 | try: 878 | self.delete_resource_by_type_and_id('LoadBalancerPool', 879 | lb_pool['id']) 880 | except Exception as e: 881 | print("ERROR: Failed to delete lb_pool %s-%s, " 882 | "error %s" % (lb_pool['display_name'], 883 | lb_pool['id'], e)) 884 | else: 885 | print("Successfully deleted lb_pool %s-%s" % 886 | (lb_pool['display_name'], lb_pool['id'])) 887 | 888 | def get_ncp_lb_pools(self): 889 | lb_pools = self.get_lb_pools() 890 | return self.get_ncp_resources(lb_pools) 891 | 892 | def get_lb_pools(self): 893 | return self.get_resource_by_type('LoadBalancerPool') 894 | 895 | def cleanup_ncp_persistence_profiles(self): 896 | 
persistence_profiles = self.get_ncp_persistence_profiles()
897 | print("Number of persistence profiles to be deleted: %s" %
898 | len(persistence_profiles))
899 | if not self._remove:
900 | return
901 | for persistence_profile in persistence_profiles:
902 | try:
903 | self.delete_resource_by_type_and_id('PersistenceProfile',
904 | persistence_profile['id'])
905 | except Exception as e:
906 | print("ERROR: Failed to delete persistence profile %s-%s, "
907 | "error %s" % (persistence_profile['display_name'],
908 | persistence_profile['id'], e))
909 | else:
910 | print("Successfully deleted persistence profile %s-%s" %
911 | (persistence_profile['display_name'],
912 | persistence_profile['id']))
913 | 
914 | def get_ncp_persistence_profiles(self):
915 | return self.get_ncp_resources(
916 | self.get_resource_by_type('PersistenceProfile'))
917 | 
918 | def get_ip_blocks(self):
919 | return self.get_resource_by_type('IpBlock')
920 | 
921 | def get_ncp_ip_blocks(self):
922 | ip_blocks = self.get_ip_blocks()
923 | return self.get_ncp_resources(ip_blocks)
924 | 
925 | def get_switching_profiles(self):
926 | sw_profiles = self.get_resource_by_type('SwitchingProfile')
927 | return sw_profiles
928 | 
929 | def get_ncp_switching_profiles(self):
930 | sw_profiles = self.get_switching_profiles()
931 | return self.get_ncp_resources(sw_profiles)
932 | 
933 | def get_l7_resource_certs(self):
934 | return self.get_resource_by_type('Certificates')
935 | 
936 | def get_ncp_l7_resource_certs(self):
937 | l7_resource_certs = self.get_l7_resource_certs()
938 | return self.get_ncp_resources(l7_resource_certs)
939 | 
940 | def cleanup_cert(self):
941 | if self.nsx_cert and self.key:
942 | try:
943 | os.close(self.fd)
944 | os.remove(self.certpath)
945 | print("Certificate file %s for NSX client connection "
946 | "has been removed" % self.certpath)
947 | except OSError as e:
948 | print("Error during cert file cleanup: %s" % e)
949 | 
950 | def cleanup_ncp_snat_rules(self):
951 | t0 =
self._t0 952 | snat_rules = self.get_secondary_resource( 953 | 'LogicalRouter', t0['id'], 'nat/rules') 954 | ncp_snat_rules = self.get_ncp_resources(snat_rules['results']) 955 | print("Number of snat rules to be deleted: %s" % 956 | len(ncp_snat_rules)) 957 | if not self._remove: 958 | return 959 | for snat_rule in ncp_snat_rules: 960 | print(snat_rule) 961 | try: 962 | self.release_snat_external_ip(snat_rule) 963 | self.delete_secondary_resource_by_id( 964 | 'LogicalRouter', t0['id'], 'nat/rules', snat_rule['id']) 965 | except Exception as e: 966 | print("ERROR: Failed to delete snat_rule for %s-%s, " 967 | "error %s" % (snat_rule['translated_network'], 968 | snat_rule['id'], e)) 969 | else: 970 | print("Successfully deleted snat_rule for %s-%s" % 971 | (snat_rule['translated_network'], snat_rule['id'])) 972 | 973 | def cleanup_ncp_ip_blocks(self): 974 | ip_blocks = self.get_ncp_ip_blocks() 975 | print("Number of ip blocks to be deleted: %s" % 976 | len(ip_blocks)) 977 | if not self._remove: 978 | return 979 | for ip_block in ip_blocks: 980 | try: 981 | self.delete_resource_by_type_and_id('IpBlock', 982 | ip_block['id']) 983 | except Exception as e: 984 | print("ERROR: Failed to delete ip_block %s-%s, " 985 | "error %s" % (ip_block['display_name'], 986 | ip_block['id'], e)) 987 | else: 988 | print("Successfully deleted ip_block %s-%s" % 989 | (ip_block['display_name'], ip_block['id'])) 990 | 991 | def cleanup_ncp_switching_profiles(self): 992 | ncp_switching_profiles = self.get_ncp_switching_profiles() 993 | print("Number of switching profiles to be deleted: %s" % 994 | len(ncp_switching_profiles)) 995 | if not self._remove: 996 | return 997 | for switching_profile in ncp_switching_profiles: 998 | try: 999 | self.delete_resource_by_type_and_id('SwitchingProfile', 1000 | switching_profile['id']) 1001 | except Exception as e: 1002 | print("ERROR: Failed to delete switching_profile %s-%s, " 1003 | "error %s" % (switching_profile['display_name'], 1004 | 
switching_profile['id'], e)) 1005 | else: 1006 | print("Successfully deleted switching_profile %s-%s" % 1007 | (switching_profile['display_name'], 1008 | switching_profile['id'])) 1009 | 1010 | def cleanup_ncp_external_ip_pools(self): 1011 | """ 1012 | Delete all external ip pools created from NCP 1013 | """ 1014 | ip_pools = self.get_ncp_get_ip_pools() 1015 | external_ip_pools = [] 1016 | for ip_pool in ip_pools: 1017 | if 'tags' in ip_pool: 1018 | for tag in ip_pool['tags']: 1019 | if (tag.get('scope') == 'ncp/external' and 1020 | tag.get('tag') == 'true'): 1021 | external_ip_pools.append(ip_pool) 1022 | print("Number of external IP Pools to be deleted: %s" % 1023 | len(external_ip_pools)) 1024 | if not self._remove: 1025 | return 1026 | 1027 | for ext_ip_pool in external_ip_pools: 1028 | try: 1029 | self._cleanup_ip_pool(ext_ip_pool) 1030 | except Exception as e: 1031 | print("ERROR: Failed to delete external ip pool %s:%s, " 1032 | "error %s" % (ext_ip_pool['display_name'], 1033 | ext_ip_pool['id'], e)) 1034 | else: 1035 | print("Successfully deleted external ip pool %s-%s" % 1036 | (ext_ip_pool['display_name'], ext_ip_pool['id'])) 1037 | 1038 | def cleanup_ncp_l7_resource_certs(self): 1039 | l7_resource_certs = self.get_ncp_l7_resource_certs() 1040 | print("Number of l7 resource certs to be deleted: %s" % 1041 | len(l7_resource_certs)) 1042 | if not self._remove: 1043 | return 1044 | for l7_resource_cert in l7_resource_certs: 1045 | try: 1046 | self.delete_resource_by_type_and_id('Certificates', 1047 | l7_resource_cert['id']) 1048 | except Exception as e: 1049 | print("ERROR: Failed to delete l7_resource_cert %s-%s, " 1050 | "error %s" % (l7_resource_cert['display_name'], 1051 | l7_resource_cert['id'], e)) 1052 | else: 1053 | print("Successfully deleted l7_resource_cert %s-%s" % 1054 | (l7_resource_cert['display_name'], 1055 | l7_resource_cert['id'])) 1056 | 1057 | def authenticate(self): 1058 | # make a get call to make sure response is not forbidden 1059 | 
full_url = self._resource_url('TransportZone')
1060 | if self.use_cert:
1061 | response = requests.get('https://' + full_url, cert=(self.nsx_cert,
1062 | self.key),
1063 | headers=self.header,
1064 | verify=False)
1065 | else:
1066 | response = requests.get('https://' + full_url,
1067 | auth=(self.username, self.password),
1068 | headers=self.header,
1069 | verify=False)
1070 | if response.status_code == requests.codes.forbidden:
1071 | print("ERROR: Authentication failed! "
1072 | "Please check your credentials.")
1073 | exit(1)
1074 | 
1075 | def cleanup_all(self):
1076 | """
1077 | Cleanup steps, in order:
1078 | 1. Cleanup firewall sections
1079 | 2. Cleanup NSGroups and IP sets
1080 | 3. Cleanup loadbalancer services and virtual servers
1081 | 4. Cleanup loadbalancer rules and pools
1082 | 5. Cleanup persistence profiles
1083 | 6. Cleanup tier-0 logical ports
1084 | 7. Cleanup logical ports and logical routers
1085 | 8. Cleanup logical router ports and logical switches
1086 | 9. Cleanup SNAT rules
1087 | 10. Cleanup ip pools
1088 | 11. Cleanup L7 resource certs
1089 | 12. Cleanup switching profiles
1090 | """
1091 | self.cleanup_ncp_firewall_sections()
1092 | self.cleanup_ncp_ns_groups()
1093 | self.cleanup_ncp_ip_sets()
1094 | self.cleanup_ncp_lb_services()
1095 | self.cleanup_ncp_lb_virtual_servers()
1096 | self.cleanup_ncp_lb_rules()
1097 | self.cleanup_ncp_lb_pools()
1098 | self.cleanup_ncp_persistence_profiles()
1099 | self.cleanup_ncp_tier0_logical_ports()
1100 | self.cleanup_ncp_logical_ports()
1101 | self.cleanup_ncp_logical_routers()
1102 | self.cleanup_ncp_router_ports()
1103 | self.cleanup_ncp_logical_switches()
1104 | self.cleanup_ncp_snat_rules()
1105 | self.cleanup_ncp_ip_pools()
1106 | self.cleanup_ncp_l7_resource_certs()
1107 | self.cleanup_ncp_switching_profiles()
1108 | if self._all_res:
1109 | self.cleanup_ncp_ip_blocks()
1110 | self.cleanup_ncp_external_ip_pools()
1111 | 
1112 | 
1113 | def validate_options(options):
1114 | if not options.mgr_ip or not options.cluster:
1115 | print("Required arguments missing.
Run ' -h' for usage") 1116 | sys.exit(1) 1117 | if (not options.password and not options.username and 1118 | not options.nsx_cert and not options.key): 1119 | print("Required authentication parameter missing. " 1120 | "Run ' -h' for usage") 1121 | sys.exit(1) 1122 | 1123 | if __name__ == "__main__": 1124 | 1125 | parser = optparse.OptionParser() 1126 | parser.add_option("--mgr-ip", dest="mgr_ip", help="NSX Manager IP address") 1127 | parser.add_option("-u", "--username", default="", dest="username", 1128 | help="NSX Manager username, ignored if nsx-cert is set") 1129 | parser.add_option("-p", "--password", default="", 1130 | dest="password", 1131 | help="NSX Manager password, ignored if nsx-cert is set") 1132 | parser.add_option("-n", "--nsx-cert", default="", dest="nsx_cert", 1133 | help="NSX certificate path") 1134 | parser.add_option("-k", "--key", default="", dest="key", 1135 | help="NSX client private key path") 1136 | parser.add_option("-c", "--cluster", dest="cluster", 1137 | help="Cluster to be removed") 1138 | parser.add_option("-t", "--ca-cert", default="", dest="ca_cert", 1139 | help="NSX ca_certificate") 1140 | parser.add_option("-r", "--remove", action='store_true', 1141 | dest="remove", help="CAVEAT: Removes NSX resources. " 1142 | "If not set will do dry-run.") 1143 | parser.add_option("--t0-name", dest="t0_name", 1144 | help="Specify the tier-0 router name. Must be " 1145 | "specified if Tier-0 router does not have the " 1146 | "cluster tag") 1147 | parser.add_option("--all-res", dest="all_res", 1148 | help=("Also clean up HA switching profile, ipblock, " 1149 | "external ippool. 
These resources could be " 1150 | "created by PAS NSX-T Tile"), action='store_true') 1151 | parser.add_option("--no-warning", action="store_true", dest="no_warning", 1152 | help="Disable urllib's insecure request warning") 1153 | (options, args) = parser.parse_args() 1154 | 1155 | if options.no_warning: 1156 | requests.packages.urllib3.disable_warnings(InsecureRequestWarning) 1157 | requests.packages.urllib3.disable_warnings(InsecurePlatformWarning) 1158 | 1159 | validate_options(options) 1160 | # Get NSX REST client 1161 | nsx_client = NSXClient(host=options.mgr_ip, 1162 | username=options.username, 1163 | password=options.password, 1164 | nsx_cert=options.nsx_cert, 1165 | key=options.key, 1166 | ca_cert=options.ca_cert, 1167 | cluster=options.cluster, 1168 | remove=options.remove, 1169 | t0_name=options.t0_name, 1170 | all_res=options.all_res) 1171 | nsx_client.cleanup_all() 1172 | -------------------------------------------------------------------------------- /nsx_config.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | ''' 3 | Created on Jun 8, 2017 4 | @author: yfan, skai 5 | ''' 6 | 7 | import argparse 8 | import atexit 9 | import json 10 | import logging 11 | import sys 12 | import time 13 | import requests 14 | from requests.packages.urllib3.exceptions import InsecureRequestWarning 15 | from urllib import urlencode 16 | from vmware_nsxlib import v3 # noqa 17 | from vmware_nsxlib.v3 import config # noqa 18 | 19 | 20 | logger = logging.getLogger(__name__) 21 | logger.setLevel(logging.INFO) 22 | formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s') 23 | ch = logging.StreamHandler() 24 | ch.setFormatter(formatter) 25 | logger.addHandler(ch) 26 | 27 | NCP_CLUSTER_KEY = "ncp/cluster" 28 | NCP_EXTERNAL_KEY = "ncp/external" 29 | NCP_NODE_KEY = "ncp/node_name" 30 | 31 | 32 | class TinyClient(object): 33 | """ 34 | A python version tiny client for NSX Transformer. 
35 | For single thread use only, no sync inside.
36 | """
37 | DEFAULT_VERSION = 'v1'
38 | 
39 | def __init__(self, args):
40 | nsxlib_config = config.NsxLibConfig(
41 | username=args.mp_user,
42 | password=args.mp_password,
43 | nsx_api_managers=args.mp_ip.split(','),
44 | ca_file=args.mp_cert_file)
45 | self.nsxlib = v3.NsxLib(nsxlib_config)
46 | self.content_type = "application/json"
47 | self.headers = {'content-type': self.content_type}
48 | 
49 | def _request(self, method, endpoint, payload="", url_parameters=None):
50 | """
51 | The only interface that sends requests to NSX Manager. All other
52 | calls go through this method.
53 | @param method: the http method, GET/POST/PUT/DELETE
54 | @param endpoint: the url of http call
55 | @param payload: the request body of http call
56 | @param url_parameters: the url parameter, a dict format
57 | """
58 | url_params_string = ""
59 | if url_parameters:
60 | if "?" in endpoint:
61 | url_params_string = "&%s" % urlencode(url_parameters)
62 | else:
63 | url_params_string = "?%s" % urlencode(url_parameters)
64 | request = "%s%s" % (endpoint, url_params_string)
65 | logger.info('request: %s', request)
66 | return self.nsxlib.client._rest_call(request, method, payload,
67 | self.headers)
68 | 
69 | def request(self, method, endpoint, payload="", params=None):
70 | """
71 | The user-facing wrapper around _request. All user calls
72 | go through this method.
73 | @param method: the http method, GET/POST/PUT/DELETE
74 | @param endpoint: the url of http call
75 | @param payload: the request body of http call
76 | @param params: short for the url parameter, a dict format
77 | """
78 | if not isinstance(payload, str):
79 | payload = json.dumps(payload)
80 | logger.info('method: %s, endpoint: %s, payload: %s, params: %s',
81 | method, endpoint, payload, params)
82 | return self._request(method, endpoint, payload,
83 | url_parameters=params)
84 | 
85 | def create(self, url, py_dict, params=None):
86 | """
87 | The create method.
88 | @param py_dict: the request body of http call, dict format
89 | @param params: short for the url parameter, dict format
90 | """
91 | return self.request('POST', url, payload=py_dict,
92 | params=params)
93 | 
94 | def read(self, url, object_id=None, params=None):
95 | """
96 | The read method.
97 | @param object_id: the id of the object to read; reads all if None
98 | @param params: short for the url parameter, dict format
99 | """
100 | if object_id:
101 | return self.request('GET', "%s/%s" % (url, object_id),
102 | params=params)
103 | return self.request('GET', url, params=params)
104 | 
105 | def search(self, search_params):
106 | """
107 | This exposes the search API.
108 | :param search_params: a dictionary to specify the filters
109 | """
110 | search_url = 'search'
111 | param_list = []
112 | for key, value in search_params.items():
113 | param_list.append('%s:%s' % (key, value))
114 | return self.request('GET', search_url, params={
115 | 'query': ' AND '.join(param_list)})
116 | 
117 | def update(self, url, object_id, py_dict, params=None):
118 | """
119 | The update method.
120 | @param py_dict: the request body of http call, dict format
121 | @param params: short for the url parameter, dict format
122 | """
123 | return self.request('PUT', "%s/%s" % (url, object_id),
124 | py_dict, params=params)
125 | 
126 | def delete(self, url, object_id, params=None):
127 | """
128 | The delete method.
129 | @param object_id: the id of the object to delete
130 | @param params: short for the url parameter, dict format
131 | """
132 | return self.request("DELETE", "%s/%s" % (url, object_id),
133 | params=params)
134 | 
135 | def get_all(self, params=None):
136 | """
137 | The wrapper method of read to get all objects.
138 | """
139 | res = self.read(params=params)
140 | if res:
141 | return res['results']
142 | return []
143 | 
144 | 
145 | def getargs():
146 | parser = argparse.ArgumentParser()
147 | parser.add_argument('--mp',
148 | dest="mp_ip",
149 | default="",
150 | help='IP of NSX manager')
151 | parser.add_argument('--cert',
152 | dest="mp_cert_file",
153 | default="",
154 | help='Optional. The file that contains client '
155 | 'certificate and private key for '
156 | 'authentication. Defaults to empty str.')
157 | parser.add_argument('--user',
158 | dest="mp_user",
159 | default="admin",
160 | help='Optional. The MP User. Default: admin')
161 | parser.add_argument('--password',
162 | dest="mp_password",
163 | default="Admin!23Admin",
164 | help='Optional. The MP password. Default: '
165 | 'Admin!23Admin')
166 | parser.add_argument('--BMC',
167 | dest="for_BMC",
168 | default="false",
169 | help='If "true", skip configuring node_ls and node_lr')
170 | parser.add_argument('--k8scluster',
171 | dest="k8scluster",
172 | default="",
173 | help='The k8s/OpenShift cluster name for the whole '
174 | 'configuration. Default: k8scluster')
175 | parser.add_argument('--edge_cluster',
176 | dest="edge_cluster",
177 | default="",
178 | help='Name of the edge cluster for transport zone.')
179 | parser.add_argument('--tz',
180 | dest="tz",
181 | default="",
182 | help='Name of the transport zone to be created and '
183 | 'tagged. ')
184 | parser.add_argument('--tn',
185 | dest="tn",
186 | default="",
187 | help='Name of the transport node to be tagged.
')
188 | parser.add_argument('--t0',
189 | dest="t0",
190 | default="",
191 | help='Name of the tier-0 logical router to be created '
192 | 'and tagged.')
193 | 
194 | parser.add_argument('--pod_ipblock_name',
195 | dest="pod_ipblock_name",
196 | default="",
197 | help='name of IpBlock for pod traffic')
198 | parser.add_argument('--pod_ipblock_cidr',
199 | dest="pod_ipblock_cidr",
200 | default='',
201 | help='CIDR of IpBlock for pod traffic')
202 | 
203 | parser.add_argument('--snat_ippool_name',
204 | dest="snat_ippool_name",
205 | default='',
206 | help='name of IpPool for SNAT')
207 | parser.add_argument('--snat_ippool_cidr',
208 | dest="snat_ippool_cidr",
209 | default="",
210 | help='CIDR of IpPool for SNAT')
211 | 
212 | parser.add_argument('--start_range',
213 | dest="start_range",
214 | default="",
215 | help='Start ip of IpPool for SNAT')
216 | 
217 | parser.add_argument('--end_range',
218 | dest="end_range",
219 | default="",
220 | help='End ip of IpPool for SNAT')
221 | 
222 | parser.add_argument('--node',
223 | dest="node_list",
224 | default="",
225 | help='Optional. The Kubernetes node names '
226 | 'corresponding to the VM names, separated by "," '
227 | 'with no spaces.
Format: node1,node2,'
228 | 'node3.')
229 | 
230 | parser.add_argument('--node_ls',
231 | dest="node_ls",
232 | default="",
233 | help='Name of node logical switch')
234 | 
235 | parser.add_argument('--node_lr',
236 | dest="node_lr",
237 | default="",
238 | help='Name of node t1 logical router')
239 | 
240 | parser.add_argument('--node_network_cidr',
241 | dest="node_network_cidr",
242 | default="",
243 | help='Subnet to node ls, IP address/mask, '
244 | 'ex: 172.20.2.0/16')
245 | 
246 | parser.add_argument('--vc_host',
247 | dest='vc_host',
248 | default='',
249 | help='IP address of VC')
250 | 
251 | parser.add_argument('--vc_user',
252 | dest='vc_user',
253 | default='',
254 | help='User name of VC')
255 | 
256 | parser.add_argument('--vc_password',
257 | dest='vc_password',
258 | default='',
259 | help='Password of VC')
260 | 
261 | parser.add_argument('--vms',
262 | dest='vms',
263 | default='',
264 | help='Name of the vms, separated by comma')
265 | 
266 | parser.add_argument('--skip_verfication',
267 | dest='skip_verfication',
268 | default=True,
269 | help='Set to false if using a VC cert')
270 | 
271 | parser.add_argument('--cert_path',
272 | dest='cert_path',
273 | default='',
274 | help='Absolute path to VC cert')
275 | args = parser.parse_args()
276 | return args
277 | 
278 | 
279 | def add_tag(py_dict, tag_dict):
280 | """
281 | Helper function to add tags to the NSX object body.
282 | @param py_dict: the NSX object body as dict format
283 | @param tag_dict: tags to add. dict format.
284 | e.g. {"ncp/cluster": "k8scluster"}
285 | """
286 | # Check existing tags
287 | existing_tags = []
288 | if "tags" in py_dict:
289 | for item in py_dict["tags"]:
290 | existing_tags.append((item.get("scope"), item.get("tag")))
291 | else:
292 | py_dict["tags"] = []
293 | for (key, value) in tag_dict.items():
294 | tag = {"scope": key, "tag": value}
295 | # If the tag already exists, skip it.
296 | if (key, value) in existing_tags:
297 | pass
298 | else:
299 | py_dict["tags"].append(tag)
300 | return py_dict
301 | 
302 | 
303 | class VMNetworkManager(object):
304 | 
305 | def __init__(self, args):
306 | self.host = args.vc_host
307 | self.user = args.vc_user
308 | self.pwd = args.vc_password
309 | self.node_ls_name = args.node_ls
310 | self.node_list = args.node_list
311 | self.skip_verfication = args.skip_verfication
312 | self.vms = args.vms
313 | global cis_client
314 | from com.vmware import cis_client
315 | global hardware_client
316 | from com.vmware.vcenter.vm import hardware_client
317 | global vcenter_client
318 | from com.vmware import vcenter_client
319 | global get_requests_connector
320 | from vmware.vapi.lib.connect import get_requests_connector
321 | global create_session_security_context
322 | from vmware.vapi.security.session import \
323 | create_session_security_context
324 | global create_user_password_security_context
325 | from vmware.vapi.security.user_password import \
326 | create_user_password_security_context
327 | global StubConfigurationFactory
328 | from vmware.vapi.stdlib.client.factories import \
329 | StubConfigurationFactory
330 | 
331 | def get_jsonrpc_endpoint_url(self, host):
332 | # Stub requests are made against the /api HTTP
333 | # endpoint of the vCenter system.
334 | return "https://{}/api".format(host)
335 | 
336 | def connect(self, host, user, pwd,
337 | skip_verification=False,
338 | cert_path=None,
339 | suppress_warning=True):
340 | """
341 | Create an authenticated stub configuration object that can be used
342 | to issue requests against vCenter.
343 | 
344 | Returns a stub_config that stores the session identifier that can be
345 | used to issue authenticated requests against vCenter.
346 | """ 347 | host_url = self.get_jsonrpc_endpoint_url(host) 348 | 349 | session = requests.Session() 350 | if skip_verification: 351 | session = self.create_unverified_session(session, suppress_warning) 352 | elif cert_path: 353 | session.verify = cert_path 354 | connector = get_requests_connector(session=session, url=host_url) 355 | stub_config = StubConfigurationFactory.new_std_configuration(connector) 356 | 357 | return self.login(stub_config, user, pwd) 358 | 359 | def login(self, stub_config, user, pwd): 360 | """ 361 | Create an authenticated session with vCenter. 362 | Returns a stub_config that stores the session identifier that can 363 | be used to issue authenticated requests against vCenter. 364 | """ 365 | # Pass user credentials (user/password) in the security context to 366 | # authenticate. 367 | security_context = create_user_password_security_context(user, pwd) 368 | stub_config.connector.set_security_context(security_context) 369 | 370 | # Create the stub for the session service 371 | # and login by creating a session. 372 | session_svc = cis_client.Session(stub_config) 373 | session_id = session_svc.create() 374 | 375 | # Store the session identifier in the security 376 | # context of the stub and use that for all subsequent remote requests 377 | session_security_context = create_session_security_context(session_id) 378 | stub_config.connector.set_security_context(session_security_context) 379 | 380 | return stub_config 381 | 382 | def logout(self, stub_config): 383 | """ 384 | Delete session with vCenter. 385 | """ 386 | if stub_config: 387 | session_svc = cis_client.Session(stub_config) 388 | session_svc.delete() 389 | 390 | def create_unverified_session(self, session, suppress_warning=True): 391 | """ 392 | Create a unverified session to disable the certificate verification. 
393 | """ 394 | session.verify = False 395 | if suppress_warning: 396 | # Suppress unverified https request warnings 397 | requests.packages.urllib3.disable_warnings(InsecureRequestWarning) 398 | return session 399 | 400 | def get_vm(self, stub_config, vm_name): 401 | """ 402 | Return the identifier of a vm 403 | """ 404 | vm_svc = vcenter_client.VM(stub_config) 405 | names = set([vm_name]) 406 | vms = vm_svc.list(vcenter_client.VM.FilterSpec(names=names)) 407 | 408 | if len(vms) == 0: 409 | logger.info("VM with name ({}) not found".format(vm_name)) 410 | return None 411 | 412 | vm = vms[0].vm 413 | logger.info("Found VM '{}' ({})".format(vm_name, vm)) 414 | return vm 415 | 416 | def configure_vnic(self): 417 | # if user did not set vc host or user or password, do nothing 418 | if not self.host or not self.user or not self.pwd: 419 | return 420 | stub_config = self.connect(self.host, self.user, 421 | self.pwd, self.skip_verfication) 422 | atexit.register(self.logout, stub_config) 423 | vm_list = self.vms.split(',') 424 | node_list = self.node_list.split(',') 425 | for vm_name, node_name in zip(vm_list, node_list): 426 | vm = self.get_vm(stub_config, vm_name) 427 | if not vm: 428 | raise Exception('Existing vm with name ({}) is required. 
' 429 | 'Please create the vm first.'.format(vm_name)) 430 | 431 | # After node_ls is created, get the network with the same name 432 | network_svc = vcenter_client.Network(stub_config) 433 | filter = vcenter_client.Network.FilterSpec( 434 | names=set([self.node_ls_name])) 435 | network_summaries = network_svc.list(filter=filter) 436 | logger.info(network_summaries) 437 | if not network_summaries: 438 | raise Exception('Network with name %s not found on VC %s' % 439 | (self.node_ls_name, self.host)) 440 | 441 | network = network_summaries[0].network 442 | ethernet_svc = hardware_client.Ethernet(stub_config) 443 | logger.info('\n# List all Ethernet adapters for VM %s' % vm_name) 444 | nic_summaries = ethernet_svc.list(vm=vm) 445 | logger.info('vm.hardware.Ethernet.list({}) -> {}' 446 | .format(vm, nic_summaries)) 447 | 448 | # Get information for each Ethernet on the VM 449 | idle_nic = None 450 | finished_configuring_current_vm = False 451 | for nic_summary in nic_summaries: 452 | nic = nic_summary.nic 453 | nic_info = ethernet_svc.get(vm=vm, nic=nic) 454 | logger.info('vm.hardware.Ethernet.get({}, {}) -> {}'. 455 | format(vm, nic, nic_info)) 456 | if (nic_info.state == 'CONNECTED' and 457 | nic_info.backing.network == network): 458 | logger.info("Nic for the network has been configured. 
" 459 | "Finished configuring current VM.") 460 | finished_configuring_current_vm = True 461 | break 462 | if nic_info.state == 'NOT_CONNECTED': 463 | idle_nic = nic 464 | if finished_configuring_current_vm: 465 | continue 466 | 467 | network_type = hardware_client.Ethernet.BackingType.OPAQUE_NETWORK 468 | if not idle_nic: 469 | logger.info("No available vnic found, creating new vnic.") 470 | nic_create_spec = hardware_client.Ethernet.CreateSpec( 471 | start_connected=True, 472 | allow_guest_control=True, 473 | wake_on_lan_enabled=True, 474 | backing=hardware_client.Ethernet.BackingSpec( 475 | type=network_type, 476 | network=network)) 477 | idle_nic = ethernet_svc.create(vm, nic_create_spec) 478 | logger.info("Created new vnic {}.".format(idle_nic)) 479 | 480 | else: 481 | logger.info("Idle vnic {} found, updating it's network." 482 | .format(idle_nic)) 483 | nic_update_spec = hardware_client.Ethernet.UpdateSpec( 484 | backing=hardware_client.Ethernet.BackingSpec( 485 | type=network_type, 486 | network=network), 487 | start_connected=True 488 | ) 489 | ethernet_svc.update(vm, idle_nic, nic_update_spec) 490 | logger.info("Updated vnic {} with network {}." 
491 | .format(idle_nic, network))
492 | idle_nic_info = ethernet_svc.get(vm=vm, nic=idle_nic)
493 | if idle_nic_info.state == 'NOT_CONNECTED':
494 | logger.info("Connecting vnic {} with vm {}"
495 | .format(idle_nic, vm_name))
496 | ethernet_svc.connect(vm, idle_nic)
497 | 
498 | 
499 | class NSXResourceManager(object):
500 | def __init__(self, api_client):
501 | self.api_client = api_client
502 | 
503 | self.resource_to_url = {
504 | 'TransportZone': 'transport-zones',
505 | 'TransportNode': 'transport-nodes',
506 | 'LogicalRouter': 'logical-routers',
507 | 'IpBlock': 'pools/ip-blocks',
508 | 'IpPool': 'pools/ip-pools',
509 | 'LogicalSwitch': 'logical-switches',
510 | 'LogicalPort': 'logical-ports',
511 | 'LogicalRouterPort': 'logical-router-ports',
512 | 'VIF': 'fabric/vifs',
513 | 'VM': 'fabric/virtual-machines'
514 | }
515 | 
516 | self.secondary_resource_to_url = {
517 | 'Routing_Advertisement': '/routing/advertisement',
518 | 'Routing_Redistribution': '/routing/redistribution'
519 | }
520 | 
521 | def get_resource_by_type_and_name(self, resource_type, resource_name,
522 | use_search_api=True):
523 | search_params = {
524 | 'resource_type': resource_type,
525 | 'display_name': resource_name,
526 | }
527 | if use_search_api:
528 | response = self.api_client.search(search_params)
529 | result_count = response.get('result_count', 0)
530 | if result_count > 1:
531 | raise Exception('More than one resource found for type %s and '
532 | 'name %s' % (resource_type, resource_name))
533 | return response['results'][0] if result_count else None
534 | else:
535 | result = self.get_all(resource_type)
536 | resources = []
537 | for r in result:
538 | if search_params and all(r.get(k) == v
539 | for k, v in search_params.items()):
540 | resources.append(r)
541 | if len(resources) > 1:
542 | raise Exception('More than one resource found for type %s and '
543 | 'name %s' % (resource_type, resource_name))
544 | return resources[0] if resources else None
545 | 
546 | def
get_or_create_resource(self, resource_type, resource_name, 547 | params=None, use_search_api=True): 548 | resource = self.get_resource_by_type_and_name( 549 | resource_type, resource_name, 550 | use_search_api=use_search_api) 551 | if not resource: 552 | logger.info('Resource of type %s and name %s not found, creating', 553 | resource_type, resource_name) 554 | # create resource, and return it 555 | resource_dict = {'display_name': resource_name} 556 | if params: 557 | resource_dict.update(params) 558 | resource = self.api_client.create( 559 | self.resource_to_url[resource_type], resource_dict) 560 | logger.debug('obtained resource: %s', resource) 561 | return resource 562 | 563 | def get_all(self, resource_type, params=None): 564 | """ 565 | Wrapper around read() that returns all objects of the given type. 566 | """ 567 | res = self.api_client.read( 568 | self.resource_to_url[resource_type], params=params) 569 | if res: 570 | return res['results'] 571 | return [] 572 | 573 | def get_mac_table_for_lp(self, lp_id): 574 | url = '/logical-ports/%s/mac-table?source=realtime' % lp_id 575 | response = self.api_client.read(url) 576 | return response['results'] 577 | 578 | def update_resource(self, resource): 579 | url = self.resource_to_url[resource['resource_type']] 580 | self.api_client.update(url, resource['id'], resource) 581 | 582 | def update_secondary_resource(self, resource_type, resource_id, 583 | secondary_resource_type, 584 | secondary_resource): 585 | url = (self.resource_to_url[resource_type] + '/' + resource_id + '/' + 586 | self.secondary_resource_to_url[secondary_resource_type]) 587 | self.api_client.update(url, "", secondary_resource) 588 | 589 | def get_secondary_resource(self, resource_type, resource_id, 590 | secondary_resource_type): 591 | url = (self.resource_to_url[resource_type] + '/' + resource_id + '/' + 592 | self.secondary_resource_to_url[secondary_resource_type]) 593 | return self.api_client.read(url) 594 | 595 | 596 | class ConfigurationManager(object): 597 | 
def __init__(self, args, api_client): 598 | self.resource_manager = NSXResourceManager(api_client) 599 | 600 | self.manager_ip = args.mp_ip 601 | self.username = args.mp_user 602 | self.password = args.mp_password 603 | 604 | self.for_bmc = False if args.for_BMC.lower() == 'false' else True 605 | 606 | self.cluster_name = args.k8scluster 607 | self.transport_zone_name = args.tz 608 | self.transport_node_names = args.tn.split(',') 609 | 610 | self.t0_router_name = args.t0 611 | self.edge_cluster_name = args.edge_cluster 612 | 613 | self.pod_ipblock_name = args.pod_ipblock_name 614 | self.pod_ipblock_cidr = args.pod_ipblock_cidr 615 | self.snat_ippool_name = args.snat_ippool_name 616 | self.snat_ippool_cidr = args.snat_ippool_cidr 617 | self.start_range = args.start_range 618 | self.end_range = args.end_range 619 | 620 | if not self.for_bmc: 621 | self.vm_network_manager = VMNetworkManager(args) 622 | self.mac_to_node_name = {} 623 | self.node_ls_name = args.node_ls 624 | self.node_lr_name = args.node_lr 625 | self.node_network_cidr = args.node_network_cidr 626 | self.vm_list = args.vms.split(',') 627 | self.node_list = args.node_list.split(',') 628 | 629 | def _has_tags(self, resource, required_tags): 630 | if not required_tags: 631 | raise Exception('The required tags dictionary is empty') 632 | if 'tags' not in resource: 633 | return False 634 | 635 | current_tags = resource['tags'] 636 | current_keys = set(tag['scope'] for tag in current_tags) 637 | for required_tag_key, required_tag_value in required_tags.items(): 638 | if required_tag_key not in current_keys: 639 | return False 640 | 641 | required_tag = { 642 | 'scope': required_tag_key, 643 | 'tag': required_tag_value, 644 | } 645 | if required_tag not in current_tags: 646 | logger.warning('One of the existing tags has the same key as ' 647 | 'the required tag %s. 
Existing tags: %s', 648 | required_tag, current_tags) 649 | return True 650 | 651 | def _handle_general_configuration(self, resource_type, resource_name, 652 | params=None, required_tags=None, 653 | use_search_api=True): 654 | """ 655 | The algorithm for 'general configuration' is: Check that the resource 656 | with the specified type and name exists and that only one exists. 657 | Raise an exception if more than one resource is found. 658 | If none is found, create one. 659 | Then continue and tag the resource if it doesn't yet have all of the 660 | required tags. 661 | For some resources (like Transport Zone), the payload from the search 662 | API is not usable by the NSX API, so we just use the NSX client API 663 | """ 664 | 665 | resource = self.resource_manager.get_or_create_resource( 666 | resource_type, resource_name, params, 667 | use_search_api=use_search_api) 668 | return resource 669 | 670 | def handle_transport_zone(self): 671 | params = { 672 | 'host_switch_name': 'nsxvswitch', 673 | 'transport_type': 'OVERLAY', 674 | } 675 | required_tags = {NCP_CLUSTER_KEY: self.cluster_name} 676 | overlay_tz = self._handle_general_configuration( 677 | 'TransportZone', self.transport_zone_name, params, required_tags, 678 | use_search_api=False) 679 | sys.stdout.write("overlay_tz: %s " % overlay_tz['id']) 680 | 681 | def handle_transport_node(self): 682 | for transport_node_name in self.transport_node_names: 683 | required_tags = [{"scope": NCP_CLUSTER_KEY, 684 | "tag": self.cluster_name}, 685 | {"scope": NCP_NODE_KEY, 686 | "tag": transport_node_name}] 687 | transport_node = self._handle_general_configuration( 688 | 'TransportNode', transport_node_name, None, None, 689 | use_search_api=False) 690 | transport_node['tags'] = required_tags 691 | self.resource_manager.update_resource(transport_node) 692 | 693 | def handle_t0_router(self): 694 | edge_cluster = self.resource_manager.get_resource_by_type_and_name( 695 | 'EdgeCluster', self.edge_cluster_name) 696 | if not edge_cluster: 697 | 
logger.critical('No edge cluster with name %s found. ' 698 | 'Configuration of T0 router is aborted.', 699 | self.edge_cluster_name) 700 | return 701 | 702 | params = { 703 | 'router_type': 'TIER0', 704 | 'edge_cluster_id': edge_cluster['id'], 705 | 'high_availability_mode': 'ACTIVE_STANDBY', 706 | } 707 | required_tags = {NCP_CLUSTER_KEY: self.cluster_name} 708 | self._handle_general_configuration( 709 | 'LogicalRouter', self.t0_router_name, params, required_tags, 710 | use_search_api=False) 711 | t0 = self.resource_manager.get_resource_by_type_and_name( 712 | 'LogicalRouter', self.t0_router_name, use_search_api=False) 713 | redistribution = self.resource_manager.get_secondary_resource( 714 | 'LogicalRouter', t0['id'], 'Routing_Redistribution') 715 | redistribution['bgp_enabled'] = True 716 | self.resource_manager.update_secondary_resource( 717 | 'LogicalRouter', t0['id'], 718 | 'Routing_Redistribution', redistribution) 719 | sys.stdout.write("t0_router: %s " % t0['id']) 720 | 721 | def _handle_ipblock(self, ipblock_name, ipblock_cidr, required_tags): 722 | # handle ipblock configuration for a specific block name 723 | params = {'cidr': ipblock_cidr} 724 | ipblock = self._handle_general_configuration( 725 | 'IpBlock', ipblock_name, params, required_tags) 726 | sys.stdout.write("container_ip_block: %s " % ipblock['id']) 727 | 728 | def _handle_ippool(self, ippool_name, ippool_cidr, 729 | start_range, end_range, required_tags): 730 | # handle ip pool configuration for a specific pool name 731 | params = {"subnets": [ 732 | { 733 | "allocation_ranges": [ 734 | { 735 | "start": start_range, 736 | "end": end_range 737 | } 738 | ], 739 | "cidr": ippool_cidr}] 740 | } 741 | ippool = self._handle_general_configuration( 742 | 'IpPool', ippool_name, params, required_tags) 743 | sys.stdout.write("external_ip_pool: %s " % ippool['id']) 744 | 745 | def handle_ipblocks(self): 746 | # IP block for pod traffic 747 | self._handle_ipblock(self.pod_ipblock_name, 
self.pod_ipblock_cidr, { 748 | NCP_CLUSTER_KEY: self.cluster_name}) 749 | 750 | # IP pool for SNAT 751 | self._handle_ippool(self.snat_ippool_name, self.snat_ippool_cidr, 752 | self.start_range, self.end_range, 753 | {NCP_EXTERNAL_KEY: 'true', 754 | NCP_CLUSTER_KEY: self.cluster_name}) 755 | 756 | def handle_t1_router(self): 757 | # Get node_lr. Create it if not present 758 | # After creation, connect it to T0 and node-ls 759 | node_lr_name = self.node_lr_name 760 | node_lr = self.resource_manager.get_resource_by_type_and_name( 761 | 'LogicalRouter', node_lr_name) 762 | # we first check if node_lr has been configured or not 763 | if not node_lr: 764 | t0_router = self.resource_manager.get_resource_by_type_and_name( 765 | 'LogicalRouter', self.t0_router_name) 766 | if not t0_router: 767 | logger.critical('No T0 router with name %s found. ' 768 | 'Configuration of T1 %s router is aborted.' % 769 | (self.t0_router_name, node_lr_name)) 770 | return 771 | 772 | params = { 773 | 'router_type': 'TIER1', 774 | 'high_availability_mode': 'ACTIVE_STANDBY', 775 | } 776 | node_lr = self.resource_manager.get_or_create_resource( 777 | 'LogicalRouter', self.node_lr_name, params) 778 | 779 | # Then we add router link ports on t0 and t1 780 | t1_id = node_lr['id'] 781 | t0_id = t0_router['id'] 782 | t0_router_port_name = "Link_to_%s" % node_lr_name 783 | params1 = { 784 | 'display_name': t0_router_port_name, 785 | 'resource_type': 'LogicalRouterLinkPortOnTIER0', 786 | 'logical_router_id': t0_id, 787 | 'tags': [] 788 | } 789 | t0_router_port = self.resource_manager.get_or_create_resource( 790 | 'LogicalRouterPort', t0_router_port_name, params1) 791 | t1_router_port_name = "Link_to_%s" % t0_router['display_name'] 792 | params2 = { 793 | 'display_name': t1_router_port_name, 794 | 'resource_type': 'LogicalRouterLinkPortOnTIER1', 795 | 'logical_router_id': t1_id, 796 | 'tags': [], 797 | 'linked_logical_router_port_id': { 798 | 'target_id': t0_router_port['id']} 799 | } 800 | 
self.resource_manager.get_or_create_resource( 801 | 'LogicalRouterPort', t1_router_port_name, params2) 802 | 803 | t1_router_switch_port_name = "Link_to_%s" % self.node_ls_name 804 | node_ls_port = self.resource_manager.get_resource_by_type_and_name( 805 | 'LogicalPort', 'To_%s' % self.node_lr_name) 806 | ip = self.node_network_cidr.split('/')[0] 807 | cidr_len = self.node_network_cidr.split('/')[1] 808 | params3 = { 809 | 'display_name': t1_router_switch_port_name, 810 | 'resource_type': 'LogicalRouterDownLinkPort', 811 | 'logical_router_id': t1_id, 812 | 'tags': [], 813 | 'linked_logical_switch_port_id': { 814 | 'target_id': node_ls_port['id']}, 815 | 'subnets': [{ 816 | 'ip_addresses': [ip], 817 | 'prefix_length': cidr_len 818 | }] 819 | } 820 | self.resource_manager.get_or_create_resource( 821 | 'LogicalRouterPort', t1_router_switch_port_name, params3) 822 | 823 | # Finally we enable node_lr for route advertisement 824 | advertisement = self.resource_manager.get_secondary_resource( 825 | 'LogicalRouter', node_lr['id'], 'Routing_Advertisement') 826 | advertisement['enabled'] = True 827 | advertisement['advertise_nsx_connected_routes'] = True 828 | advertisement['advertise_lb_vip'] = True 829 | advertisement['advertise_static_routes'] = True 830 | advertisement['advertise_nat_routes'] = True 831 | advertisement['advertise_lb_snat_ip'] = True 832 | self.resource_manager.update_secondary_resource( 833 | 'LogicalRouter', node_lr['id'], 834 | 'Routing_Advertisement', advertisement) 835 | 836 | def handle_vif(self): 837 | # get node-ls. 
Create it if it doesn't exist yet 838 | transport_zone = self.resource_manager.get_resource_by_type_and_name( 839 | 'TransportZone', self.transport_zone_name) 840 | 841 | params_1 = { 842 | 'transport_zone_id': transport_zone['id'], 843 | 'admin_state': 'UP', 844 | 'replication_mode': 'MTEP', 845 | } 846 | node_ls = self.resource_manager.get_or_create_resource( 847 | 'LogicalSwitch', self.node_ls_name, params_1) 848 | 849 | # create logical ports to node_lr 850 | logical_port_to_node_ls = 'To_%s' % self.node_lr_name 851 | params_2 = { 852 | 'logical_switch_id': node_ls['id'], 853 | 'display_name': logical_port_to_node_ls, 854 | 'admin_state': 'UP', 855 | 'attachment': None, 856 | 'tags': [], 857 | 'address_bindings': None 858 | } 859 | self.resource_manager.get_or_create_resource( 860 | 'LogicalPort', logical_port_to_node_ls, params_2) 861 | 862 | logger.info('node_ls: %s', node_ls) 863 | 864 | # After node ls is created, configure vnic if user provided 865 | # Sleep 20 seconds for vc to discover node ls 866 | time.sleep(20) 867 | self.vm_network_manager.configure_vnic() 868 | # Sleep 10 seconds for nsx to discover vif connected to node_ls 869 | time.sleep(10) 870 | vms_id = {} 871 | id_port = {} 872 | for vm_name in self.vm_list: 873 | params = {"display_name": vm_name} 874 | vm_info = self.resource_manager.get_all('VM', params=params) 875 | if not vm_info: 876 | logger.warning("Cannot find the VM %s" % vm_name) 877 | else: 878 | vms_id[vm_name] = vm_info[0]['external_id'] 879 | 880 | for vmid in vms_id.values(): 881 | if not vmid: 882 | continue 883 | params = {"owner_vm_id": vmid} 884 | vm_infos = self.resource_manager.get_all('VIF', params=params) 885 | for vm_info in vm_infos: 886 | if 'lport_attachment_id' in vm_info: 887 | # In KVM, the vm management also has 888 | # lport_attachment_id, but there is no lsp for it. 
889 | att_id = vm_info["lport_attachment_id"] 890 | lsp_info = self.resource_manager.get_all( 891 | 'LogicalPort', params={"attachment_id": att_id}) 892 | for lsp in lsp_info: 893 | if lsp['logical_switch_id'] == node_ls['id']: 894 | id_port[vmid] = lsp 895 | break 896 | 897 | for (vm_name, node_name) in zip(self.vm_list, self.node_list): 898 | lsp = id_port.get(vms_id[vm_name]) 899 | if not lsp: 900 | logger.warning("Cannot find any VIF on the VM %s" % vm_name) 901 | else: 902 | required_tags = { 903 | NCP_CLUSTER_KEY: self.cluster_name, 904 | NCP_NODE_KEY: node_name 905 | } 906 | logger.info('required tags: %s, port: %s', required_tags, lsp) 907 | if not self._has_tags(lsp, required_tags): 908 | lsp = add_tag(lsp, required_tags) 909 | self.resource_manager.update_resource(lsp) 910 | logger.info("The logical port for the VM %s has been tagged " 911 | "with k8s cluster and node name.", vm_name) 912 | 913 | def configure_all(self): 914 | self.handle_transport_zone() 915 | self.handle_t0_router() 916 | self.handle_ipblocks() 917 | if not self.for_bmc: 918 | self.handle_vif() 919 | self.handle_t1_router() 920 | else: 921 | self.handle_transport_node() 922 | 923 | 924 | def main(): 925 | cmd_line_args = getargs() 926 | api_client = TinyClient(cmd_line_args) 927 | config_manager = ConfigurationManager(cmd_line_args, api_client) 928 | config_manager.configure_all() 929 | 930 | 931 | if __name__ == '__main__': 932 | main() 933 | 934 | -------------------------------------------------------------------------------- /openshift-ansible-nsx/hosts: -------------------------------------------------------------------------------- 1 | [masters] 2 | admin.rhel.osmaster ansible_ssh_host=101.101.101.4 3 | 4 | 5 | [etcd] 6 | osmaster 7 | 8 | 9 | [nodes] 10 | admin.rhel.osmaster ansible_ssh_host=101.101.101.4 openshift_ip=101.101.101.4 openshift_schedulable=true openshift_hostname=admin.rhel.osmaster 11 | admin.rhel.osnode ansible_ssh_host=101.101.101.5 openshift_ip=101.101.101.5 
openshift_hostname=admin.rhel.osnode 12 | 13 | 14 | [nodes:vars] 15 | openshift_node_labels="{'region': 'primary', 'zone': 'east'}" 16 | 17 | 18 | [masters:vars] 19 | openshift_node_labels="{'region': 'infra', 'zone': 'default'}" 20 | 21 | 22 | # Create an OSEv3 group that contains the masters and nodes groups 23 | [OSEv3:children] 24 | masters 25 | nodes 26 | etcd 27 | 28 | 29 | # Set variables common for all OSEv3 hosts 30 | [OSEv3:vars] 31 | # SSH user, this user should allow ssh based auth without requiring a password 32 | ansible_ssh_user=root 33 | openshift_master_default_subdomain=apps.user.local 34 | 35 | # Tell OpenShift to use CNI for networking 36 | os_sdn_network_plugin_name=cni 37 | openshift_use_openshift_sdn=false 38 | openshift_use_nsx=true 39 | 40 | openshift_node_sdn_mtu=1500 41 | 42 | # In a test/dev environment, these requirements may not be satisfied, so skip the checks. 43 | # However, enabling them is recommended for a production environment (comment this line out to enable them). 44 | openshift_disable_check=disk_availability,memory_availability,docker_storage,package_version 45 | 46 | # Do not let OpenShift manage the registry and router 47 | openshift_hosted_manage_registry=false 48 | openshift_hosted_manage_router=false 49 | openshift_enable_service_catalog=false 50 | openshift_web_console_install=false 51 | openshift_cluster_monitoring_operator_install=false 52 | openshift_console_install=false 53 | 54 | # Set openshift_deployment_type 55 | openshift_deployment_type=openshift-enterprise 56 | 57 | 58 | [jumphost] 59 | # This is the host where nsx_config.py and nsx_cleanup.py can be run 60 | # (usually one of masters/nodes). 61 | # nsx_config.py requires dependencies downloaded from the Internet. Thus, use 62 | # a jumphost with Internet access if the cluster is air-gapped. 
63 | jumphost ansible_ssh_host=101.101.101.4 -------------------------------------------------------------------------------- /openshift-ansible-nsx/install.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - import_playbook: ncp_prep.yaml 3 | - import_playbook: openshift_install.yaml 4 | - import_playbook: ncp.yaml 5 | -------------------------------------------------------------------------------- /openshift-ansible-nsx/jumphost_prepare.yaml: -------------------------------------------------------------------------------- 1 | # to run: ansible-playbook -i /PATH/TO/HOSTS/hosts jumphost_prepare.yaml 2 | --- 3 | - hosts: 127.0.0.1 4 | tasks: 5 | - name: Install python-devel 6 | yum: 7 | name: python-devel 8 | state: latest 9 | 10 | - name: Install vmware-nsxlib 11 | pip: 12 | name: vmware-nsxlib 13 | version: 11.1.4 14 | state: present 15 | 16 | - name: Download vSphere Automation SDK 17 | git: 18 | repo: "https://github.com/vmware/vsphere-automation-sdk-python" 19 | dest: /tmp/vsphere-automation-sdk-python/ 20 | version: v6.6.1 21 | 22 | - name: Install vSphere Automation SDK 23 | pip: 24 | requirements: /tmp/vsphere-automation-sdk-python/requirements.txt 25 | extra_args: --extra-index-url /tmp/vsphere-automation-sdk-python/lib/ 26 | state: present 27 | 28 | - name: remove vsphere sdk git dir 29 | file: 30 | path: /tmp/vsphere-automation-sdk-python/ 31 | state: absent 32 | 33 | - name: Install pyOpenSSL 16.2.0 34 | pip: 35 | name: pyOpenSSL 36 | version: 16.2.0 37 | -------------------------------------------------------------------------------- /openshift-ansible-nsx/ncp.yaml: -------------------------------------------------------------------------------- 1 | # to run: ansible-playbook -i /PATH/TO/HOSTS/hosts ncp.yaml 2 | --- 3 | # run the ncp role to bring up the NCP and agent pods 4 | - hosts: masters[0] 5 | vars_files: 6 | - vars/global.yaml 7 | roles: 8 | - ncp 9 | 
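The playbooks above ultimately drive nsx_config.py (shown earlier), whose NSXResourceManager follows a simple get-or-create pattern: search for exactly one resource by type and display name, create it when missing, and fail on ambiguity. The following is a minimal, self-contained sketch of that pattern; `FakeClient` is a hypothetical in-memory stand-in for the NSX API client, for illustration only (the real script talks to the NSX manager over REST):

```python
# Sketch of the get-or-create pattern used by NSXResourceManager.
# FakeClient is hypothetical: an in-memory store instead of the NSX REST API.

class FakeClient:
    def __init__(self):
        self.resources = []  # all created resource dicts

    def search(self, params):
        # Return resources whose fields match every search parameter
        results = [r for r in self.resources
                   if all(r.get(k) == v for k, v in params.items())]
        return {'result_count': len(results), 'results': results}

    def create(self, url, resource):
        # Assign a fake id and persist the resource
        resource = dict(resource, id='id-%d' % len(self.resources))
        self.resources.append(resource)
        return resource


def get_or_create(client, resource_type, name, params=None):
    response = client.search({'resource_type': resource_type,
                              'display_name': name})
    if response['result_count'] > 1:
        raise Exception('More than one resource found for type %s and '
                        'name %s' % (resource_type, name))
    if response['result_count'] == 1:
        return response['results'][0]
    body = {'resource_type': resource_type, 'display_name': name}
    body.update(params or {})
    return client.create('fake-url', body)


client = FakeClient()
first = get_or_create(client, 'IpBlock', 'pod-block', {'cidr': '10.4.0.0/16'})
second = get_or_create(client, 'IpBlock', 'pod-block')
print(first['id'] == second['id'])  # the second call finds the existing block
```

Because every handler in nsx_config.py funnels through this pattern, re-running the script against an already-configured NSX manager is intended to be a no-op rather than creating duplicates.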
-------------------------------------------------------------------------------- /openshift-ansible-nsx/ncp_prep.yaml: -------------------------------------------------------------------------------- 1 | # to run: ansible-playbook -i /PATH/TO/HOSTS/hosts ncp_prep.yaml 2 | --- 3 | # run the ncp_prep role on each node of the cluster 4 | - hosts: OSEv3 5 | vars_files: 6 | - vars/global.yaml 7 | roles: 8 | - ncp_prep 9 | - hosts: jumphost 10 | vars_files: 11 | - vars/global.yaml 12 | roles: 13 | - nsx_config 14 | -------------------------------------------------------------------------------- /openshift-ansible-nsx/nsx_cleanup.yaml: -------------------------------------------------------------------------------- 1 | # to run: ansible-playbook -i /PATH/TO/HOSTS/hosts nsx_cleanup.yaml 2 | --- 3 | # run the nsx_cleanup role on the jumphost 4 | - hosts: jumphost 5 | vars_files: 6 | - vars/global.yaml 7 | roles: 8 | - nsx_cleanup 9 | -------------------------------------------------------------------------------- /openshift-ansible-nsx/openshift_install.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: masters[0] 3 | vars_files: 4 | - vars/global.yaml 5 | roles: 6 | - openshift_installation 7 | 8 | -------------------------------------------------------------------------------- /openshift-ansible-nsx/roles/ncp/tasks/main.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | ## start kube_proxy for oc 3.10 and later 3 | - name: Add library to ansible.cfg 4 | replace: 5 | path: "{{ openshift_dir }}/ansible.cfg" 6 | regexp: "\\[defaults\\]" 7 | replace: "[defaults]\nlibrary = roles/lib_utils/library/" 8 | when: manual_start_kube_proxy != false 9 | 10 | - name: create create_proxy under playbooks dir 11 | shell: "echo '---\n- import_playbook: byo/openshift_facts.yml\n- hosts: masters' > {{ openshift_dir }}/playbooks/create_proxy.yaml" 12 | when: manual_start_kube_proxy != false 13 | 
14 | - name: append roles section to create_proxy playbook 15 | shell: 'echo " roles:" >> {{ openshift_dir }}/playbooks/create_proxy.yaml' 16 | when: manual_start_kube_proxy != false 17 | 18 | - name: append kube_proxy_and_dns role to create_proxy playbook 19 | shell: 'echo " - kube_proxy_and_dns" >> {{ openshift_dir }}/playbooks/create_proxy.yaml' 20 | when: manual_start_kube_proxy != false 21 | 22 | - name: run playbook create_proxy 23 | shell: "ansible-playbook -i {{ playbook_dir }}/hosts {{ openshift_dir }}/playbooks/create_proxy.yaml" 24 | when: manual_start_kube_proxy != false 25 | 26 | ## Configure Service Account for NCP 27 | - name: Check if nsx-system project exists 28 | command: oc get project nsx-system 29 | register: nsx_project_result 30 | ignore_errors: True 31 | 32 | - name: Display the output msg 33 | debug: 34 | msg: "{{ nsx_project_result }}" 35 | 36 | - name: Create the project if it doesn't exist 37 | command: oc new-project nsx-system 38 | when: "'Error' in nsx_project_result.stderr" 39 | 40 | - command: oc project nsx-system 41 | 42 | - name: Download ncp-rbac 43 | get_url: 44 | url: "{{ ncp_rbac_yaml_url }}" 45 | dest: /tmp/ncp-rbac.yml 46 | force: yes 47 | 48 | - name: Change API version for ncp-rbac 49 | replace: 50 | path: /tmp/ncp-rbac.yml 51 | regexp: "^apiVersion: rbac.authorization.k8s.io/v1.*" 52 | replace: "apiVersion: v1" 53 | 54 | - name: Enable Route resource API access 55 | replace: 56 | path: /tmp/ncp-rbac.yml 57 | regexp: "# - routes" 58 | replace: "- routes" 59 | 60 | - name: Comment out apiGroup for bmc env 61 | replace: 62 | path: /tmp/ncp-rbac.yml 63 | regexp: "apiGroup: rbac.authorization.k8s.io" 64 | replace: "# apiGroup: rbac.authorization.k8s.io" 65 | when: BMC == true or BMC == True 66 | 67 | - name: Apply ncp-rbac 68 | command: oc apply -f /tmp/ncp-rbac.yml 69 | 70 | - name: check if /etc/nsx-ujo dir exists 71 | stat: 72 | path: /etc/nsx-ujo 73 | register: ujo_exists 74 | 75 | - name: Create dir if missing 76 | file: path=/etc/nsx-ujo 
state=directory 77 | when: 78 | ujo_exists.stat.exists == False 79 | 80 | - name: Check if a token file is already present 81 | stat: 82 | path: /etc/nsx-ujo/ncp_token 83 | register: ncp_exists 84 | 85 | - name: Get token name for ncp 86 | shell: "kubectl get serviceaccount ncp-svc-account -o yaml | grep -A1 secrets | tail -n1 | awk {'print $3'}" 87 | register: ncp_secret_result 88 | 89 | - name: Copy ncp token to file 90 | shell: "kubectl get secret {{ ncp_secret_result.stdout }} -o yaml | grep 'token:' | awk {'print $2'} | base64 -d > /etc/nsx-ujo/ncp_token" 91 | 92 | - name: Check if a token file is already present 93 | stat: 94 | path: /etc/nsx-ujo/node_agent_token 95 | register: node_agent_exists 96 | 97 | - name: Get node agent token name 98 | shell: "kubectl get serviceaccount nsx-node-agent-svc-account -o yaml | grep -A1 secrets | tail -n1 | awk {'print $3'}" 99 | register: node_agent_secret_result 100 | 101 | - name: Copy node agent token to file 102 | shell: "kubectl get secret {{ node_agent_secret_result.stdout }} -o yaml | grep 'token:' | awk {'print $2'} | base64 -d > /etc/nsx-ujo/node_agent_token" 103 | 104 | ## Configure Security Context Constraint for NCP 105 | - name: Download ncp-os-scc yaml 106 | get_url: 107 | url: "{{ ncp_scc_yaml_url }}" 108 | dest: /tmp/ncp-os-scc.yml 109 | force: yes 110 | 111 | - name: Set DAC_OVERRIDE in ncp-os-scc 112 | replace: 113 | path: /tmp/ncp-os-scc.yml 114 | regexp: "# - DAC_OVERRIDE" 115 | replace: "- DAC_OVERRIDE" 116 | when: BMC == true or BMC == True 117 | 118 | - name: Load ncp-scc 119 | command: oc apply -f /tmp/ncp-os-scc.yml 120 | 121 | - name: Add ncp-svc-account to ncp-scc 122 | command: oc adm policy add-scc-to-user ncp-scc -z ncp-svc-account 123 | 124 | - name: Add nsx-node-agent-svc-account to ncp-scc 125 | command: oc adm policy add-scc-to-user ncp-scc -z nsx-node-agent-svc-account 126 | 127 | ### Obtain YAML files to run NCP and nsx-node-agent 128 | - name: Download NCP yaml 129 | get_url: 130 
| url: "{{ ncp_yaml_url }}" 131 | dest: /tmp/ncp-rc.yml 132 | force: yes 133 | 134 | - name: Set service account for ncp 135 | replace: 136 | path: /tmp/ncp-rc.yml 137 | regexp: "#serviceAccountName: ncp-svc-account" 138 | replace: "serviceAccountName: ncp-svc-account" 139 | 140 | - name: Set cluster name 141 | lineinfile: 142 | path: /tmp/ncp-rc.yml 143 | regexp: "^ *#?cluster =.*" 144 | line: " cluster = {{ cluster_name }}" 145 | 146 | - name: Set adaptor in ncp.ini 147 | replace: 148 | path: /tmp/ncp-rc.yml 149 | regexp: "#adaptor = kubernetes" 150 | replace: "adaptor = openshift" 151 | 152 | - name: Set use_native_loadbalancer 153 | replace: 154 | path: /tmp/ncp-rc.yml 155 | regexp: "#use_native_loadbalancer = False" 156 | replace: "use_native_loadbalancer = {{ use_native_loadbalancer }}" 157 | 158 | - name: Specify overlay_tz in ncp-rc 159 | replace: 160 | path: /tmp/ncp-rc.yml 161 | regexp: "#overlay_tz = " 162 | replace: "overlay_tz = {{ nsx_transport_zone_name }}" 163 | 164 | ## Configure NSX top router name. From NSX 2.4.0 release onwards this is called 165 | ## top_tier_router before this it is tier0_router. 
166 | - name: Specify T0 router in ncp-rc 167 | replace: 168 | path: /tmp/ncp-rc.yml 169 | regexp: "#tier0_router = " 170 | replace: "tier0_router = {{ nsx_t0_router_name }}" 171 | 172 | - name: Specify Top T0 router in ncp-rc 173 | replace: 174 | path: /tmp/ncp-rc.yml 175 | regexp: "#top_tier_router = " 176 | replace: "top_tier_router = {{ nsx_t0_router_name }}" 177 | 178 | - name: Specify container_ip_blocks in ncp-rc 179 | replace: 180 | path: /tmp/ncp-rc.yml 181 | regexp: "#container_ip_blocks = " 182 | replace: "container_ip_blocks = {{ pod_ipblock_name }}" 183 | 184 | - name: Specify external_ip_pools in ncp-rc 185 | replace: 186 | path: /tmp/ncp-rc.yml 187 | regexp: "#external_ip_pools = " 188 | replace: "external_ip_pools = {{ snat_ippool_name }}" 189 | 190 | - name: Set apiserver host IP 191 | lineinfile: 192 | path: /tmp/ncp-rc.yml 193 | regexp: "^ *#?apiserver_host_ip =.*" 194 | line: " apiserver_host_ip = {{ apiserver_host_ip }}" 195 | 196 | - name: Set apiserver host port 197 | lineinfile: 198 | path: /tmp/ncp-rc.yml 199 | regexp: "^ *#?apiserver_host_port =.*" 200 | line: " apiserver_host_port = 8443" 201 | 202 | - name: Set NSX manager IP 203 | lineinfile: 204 | path: /tmp/ncp-rc.yml 205 | regexp: "^ *#?nsx_api_managers =.*" 206 | line: " nsx_api_managers = {{ nsx_manager_ip }}" 207 | 208 | - name: Set NSX Policy Flag 209 | lineinfile: 210 | path: /tmp/ncp-rc.yml 211 | regexp: "^ policy_nsxapi" 212 | insertafter: "^ *nsx_api_managers" 213 | line: " policy_nsxapi = {{ policy_nsxapi | replace('$', '$$') }}" 214 | when: use_cert != true 215 | 216 | - name: Set NSX API password 217 | lineinfile: 218 | path: /tmp/ncp-rc.yml 219 | regexp: "^ nsx_api_password" 220 | insertafter: "^ *nsx_api_managers" 221 | line: " nsx_api_password = {{ nsx_api_password | replace('$', '$$') }}" 222 | when: use_cert != true 223 | 224 | - name: Set NSX API username 225 | lineinfile: 226 | path: /tmp/ncp-rc.yml 227 | regexp: "^ nsx_api_user" 228 | insertafter: "^ 
*nsx_api_managers" 229 | line: " nsx_api_user = {{ nsx_api_user }}" 230 | when: use_cert != true 231 | 232 | - name: Skip checking certs 233 | lineinfile: 234 | path: /tmp/ncp-rc.yml 235 | regexp: "^ *#?insecure =.*" 236 | line: " insecure = True" 237 | 238 | - name: Set certificate file if use_cert is set to true 239 | replace: 240 | path: /tmp/ncp-rc.yml 241 | regexp: "#nsx_api_cert_file = " 242 | replace: "nsx_api_cert_file = /etc/nsx-ujo/nsx_cert" 243 | when: use_cert == true 244 | 245 | - name: Set private key file if use_cert is set to true 246 | replace: 247 | path: /tmp/ncp-rc.yml 248 | regexp: "#nsx_api_private_key_file = " 249 | replace: "nsx_api_private_key_file = /etc/nsx-ujo/nsx_priv_key" 250 | when: use_cert == true 251 | 252 | - name: Set node_type to BAREMETAL for bmc env 253 | replace: 254 | path: /tmp/ncp-rc.yml 255 | regexp: "#node_type = HOSTVM" 256 | replace: "node_type = BAREMETAL" 257 | when: BMC == true or BMC == True 258 | 259 | # Ensure idempotency 260 | - name: Check whether /tmp/ncp-rc.yml contains "nsx-priv-key" 261 | command: grep "nsx-priv-key" /tmp/ncp-rc.yml 262 | register: checkpriv 263 | ignore_errors: yes 264 | 265 | - name: Add nsx-priv-key volumeMount if /tmp/ncp-rc.yml does not contain "nsx-priv-key" 266 | lineinfile: 267 | path: /tmp/ncp-rc.yml 268 | regexp: "^ volumeMounts" 269 | insertafter: "^ *volumeMounts:" 270 | line: " - name: nsx-priv-key\n mountPath: /etc/nsx-ujo/nsx_priv_key" 271 | when: checkpriv.rc != 0 and use_cert == true 272 | 273 | - name: Add nsx-priv-key volume if /tmp/ncp-rc.yml does not contain "nsx-priv-key" 274 | lineinfile: 275 | path: /tmp/ncp-rc.yml 276 | regexp: "^ volumes" 277 | insertafter: "^ *volumes:" 278 | line: " - name: nsx-priv-key\n hostPath:\n path: {{ nsx_api_private_key_file }}" 279 | when: checkpriv.rc != 0 and use_cert == true 280 | 281 | - name: Check whether /tmp/ncp-rc.yml contains "nsx-cert" 282 | command: grep "nsx-cert" /tmp/ncp-rc.yml 283 | register: checkcert 284 | ignore_errors: yes 285 | 286 | - name: Add nsx-cert volumeMount if /tmp/ncp-rc.yml does 
not contain "nsx-cert" 287 | lineinfile: 288 | path: /tmp/ncp-rc.yml 289 | regexp: "^ volumeMounts" 290 | insertafter: "^ *volumeMounts:" 291 | line: " - name: nsx-cert\n mountPath: /etc/nsx-ujo/nsx_cert" 292 | when: checkcert.rc != 0 and use_cert == true 293 | 294 | - name: Add nsx-cert volume if /tmp/ncp-rc.yml does not contain "nsx-cert" 295 | lineinfile: 296 | path: /tmp/ncp-rc.yml 297 | regexp: "^ volumes" 298 | insertafter: "^ *volumes:" 299 | line: " - name: nsx-cert\n hostPath:\n path: {{ nsx_api_cert_file }}" 300 | when: checkcert.rc != 0 and use_cert == true 301 | 302 | - name: Start NCP Replication Controller 303 | command: oc apply -f /tmp/ncp-rc.yml 304 | 305 | ## nsx-node-agent YAML 306 | - name: Download agent yaml 307 | get_url: 308 | url: "{{ agent_yaml_url }}" 309 | dest: /tmp/nsx-node-agent-ds.yml 310 | 311 | - name: Set service account for nsx-node-agent 312 | replace: 313 | path: /tmp/nsx-node-agent-ds.yml 314 | regexp: "#serviceAccountName: nsx-node-agent-svc-account" 315 | replace: "serviceAccountName: nsx-node-agent-svc-account" 316 | 317 | - name: Set cluster name 318 | lineinfile: 319 | path: /tmp/nsx-node-agent-ds.yml 320 | regexp: "^ *#?cluster =.*" 321 | line: " cluster = {{ cluster_name }}" 322 | 323 | - name: Set apiserver host IP 324 | lineinfile: 325 | path: /tmp/nsx-node-agent-ds.yml 326 | regexp: "^ *#?apiserver_host_ip =.*" 327 | line: " apiserver_host_ip = {{ apiserver_host_ip }}" 328 | 329 | - name: Set apiserver host port 330 | lineinfile: 331 | path: /tmp/nsx-node-agent-ds.yml 332 | regexp: "^ *#?apiserver_host_port =.*" 333 | line: " apiserver_host_port = 8443" 334 | 335 | - name: Set node_type to BAREMETAL for bmc env for node-agent 336 | replace: 337 | path: /tmp/nsx-node-agent-ds.yml 338 | regexp: "#node_type = HOSTVM" 339 | replace: "node_type = BAREMETAL" 340 | when: BMC == true or BMC == True 341 | 342 | - name: Set ovs_bridge to nsx-managed for bmc env for node-agent 343 | replace: 344 | path: /tmp/nsx-node-agent-ds.yml 345 | regexp: 
"#ovs_bridge = br-int" 346 | replace: "ovs_bridge = nsx-managed" 347 | when: BMC == true or BMC == True 348 | 349 | - name: Set DAC_OVERRIDE for bmc env for node-agent 350 | replace: 351 | path: /tmp/nsx-node-agent-ds.yml 352 | regexp: "# - DAC_OVERRIDE" 353 | replace: "- DAC_OVERRIDE" 354 | when: BMC == true or BMC == True 355 | 356 | - name: Mount nestdb-sock for bmc env for node-agent 357 | replace: 358 | path: /tmp/nsx-node-agent-ds.yml 359 | regexp: "# - name: nestdb-sock" 360 | replace: "- name: nestdb-sock" 361 | when: BMC == true or BMC == True 362 | 363 | - name: Mount nestdb-sock for bmc env for node-agent 364 | replace: 365 | path: /tmp/nsx-node-agent-ds.yml 366 | regexp: "# mountPath: /var/run/vmware/nestdb/nestdb-server.sock" 367 | replace: " mountPath: /var/run/vmware/nestdb/nestdb-server.sock" 368 | when: BMC == true or BMC == True 369 | 370 | - name: Volume nestdb-sock for bmc env for node-agent 371 | replace: 372 | path: /tmp/nsx-node-agent-ds.yml 373 | regexp: "# - name: nestdb-sock" 374 | replace: "- name: nestdb-sock" 375 | when: BMC == true or BMC == True 376 | 377 | - name: Volume nestdb-sock for bmc env for node-agent 378 | replace: 379 | path: /tmp/nsx-node-agent-ds.yml 380 | regexp: "# hostPath:" 381 | replace: " hostPath:" 382 | when: BMC == true or BMC == True 383 | 384 | - name: Volume nestdb-sock for bmc env for node-agent 385 | replace: 386 | path: /tmp/nsx-node-agent-ds.yml 387 | regexp: "# path: /var/run/vmware/nestdb/nestdb-server.sock" 388 | replace: " path: /var/run/vmware/nestdb/nestdb-server.sock" 389 | when: BMC == true or BMC == True 390 | 391 | - name: Volume nestdb-sock for bmc env for node-agent 392 | replace: 393 | path: /tmp/nsx-node-agent-ds.yml 394 | regexp: "# type: Socket" 395 | replace: " type: Socket" 396 | when: BMC == true or BMC == True 397 | 398 | - name: Override default node selector 399 | command: oc patch namespace nsx-system -p '{"metadata":{"annotations":{"openshift.io/node-selector":""}}}' 400 | 401 | - 
name: Start nsx-node-agent DaemonSet 402 | command: oc apply -f /tmp/nsx-node-agent-ds.yml 403 | 404 | # After ncp starts, start Openshift Registry and Router service 405 | - name: Switch to default project 406 | command: oc project default 407 | 408 | - name: Grant router service account access 409 | command: oc adm policy add-scc-to-user hostnetwork -z router 410 | 411 | - name: Grant cluster view permission 412 | command: oc adm policy add-cluster-role-to-user cluster-reader system:serviceaccount:default:router 413 | 414 | - name: Start Openshift Router service 415 | command: oc adm router router --service-account=router 416 | 417 | # In OS 3.7, if you have "registry" service account or "registry-registry-role" 418 | # created, it will "fail" with "error" as below: 419 | # serviceaccounts "registry" already exists 420 | # clusterrole "registry-registry-role" already exists 421 | # this should just be a warning as you'd get in adding router. 422 | # You can ignore this as the registry will be created successfully anyway 423 | - name: Grant Registry service access 424 | command: oc adm policy add-scc-to-user hostnetwork -z registry 425 | 426 | - name: Start Openshift Registry service 427 | command: oc adm registry --service-account=registry 428 | 429 | -------------------------------------------------------------------------------- /openshift-ansible-nsx/roles/ncp_prep/tasks/main.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Download CNI rpm 3 | get_url: 4 | url: "{{ cni_url }}" 5 | dest: /tmp/cni.rpm 6 | 7 | - name: Install CNI 8 | yum: 9 | name: /tmp/cni.rpm 10 | state: present 11 | 12 | - name: Download OVS 13 | get_url: 14 | url: "{{ ovs_url }}" 15 | dest: /tmp/ovs.rpm 16 | when: BMC == false 17 | 18 | - name: Download OVS kmod 19 | get_url: 20 | url: "{{ kmod_ovs_url }}" 21 | dest: /tmp/kmod_ovs.rpm 22 | when: BMC == false 23 | 24 | - name: Install OVS 25 | yum: 26 | name: /tmp/ovs.rpm 27 | state: 
present 28 | tags: ovs 29 | when: BMC == false 30 | 31 | - name: Install OVS Kernel Module 32 | yum: 33 | name: /tmp/kmod_ovs.rpm 34 | state: present 35 | when: BMC == false 36 | 37 | # https://docs.ansible.com/ansible/systemd_module.html 38 | # systemctl start openvswitch.service 39 | - name: Start OVS service 40 | systemd: 41 | name: openvswitch 42 | state: started 43 | when: BMC == false 44 | 45 | - name: Create OVS bridge 46 | openvswitch_bridge: bridge=br-int state=present fail_mode=standalone 47 | args: 48 | external_ids: 49 | bridge-id: "br-int" 50 | tags: ovs 51 | when: BMC == false 52 | 53 | - name: Add the Uplink Port to OVS 54 | openvswitch_port: 55 | bridge: br-int 56 | port: "{{ uplink_port }}" 57 | state: present 58 | set: Interface {{ uplink_port }} ofport_request=1 59 | tags: ovs 60 | when: BMC == false 61 | 62 | - name: Bring up br-int 63 | command: "ip link set br-int up" 64 | when: BMC == false 65 | 66 | - name: Bring up node-if 67 | command: "ip link set {{ uplink_port }} up" 68 | when: BMC == false 69 | 70 | ## Get NCP tar file and make it docker image 71 | - name: Download ncp image tar file 72 | get_url: 73 | url: "{{ ncp_image_url }}" 74 | dest: /tmp/nsx_ncp_rhel.tar 75 | force: no 76 | 77 | # Using docker_image to load tar file to docker image 78 | - name: Load image from the image tar file 79 | shell: docker load -i /tmp/nsx_ncp_rhel.tar 80 | 81 | - name: Register the docker image name 82 | shell: docker images | grep nsx-ncp-rhel | head -n1 83 | register: nsx_ncp_rhel 84 | 85 | - name: Tag image as nsx-ncp 86 | shell: docker tag "{{ nsx_ncp_rhel.stdout.split()[0] }}" nsx-ncp 87 | -------------------------------------------------------------------------------- /openshift-ansible-nsx/roles/nsx_cleanup/tasks/main.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Pip install requests 2.21.0 3 | command: "pip install requests==2.21.0" 4 | 5 | - name: Cleanup nsx resource with nsx_cert 
and priv_key 6 | command: "python {{ nsx_cleanup_script_path }} --nsx-cert {{ nsx_api_cert_file }} 7 | --key {{ nsx_api_private_key_file }} --mgr-ip {{ nsx_manager_ip }} 8 | --cluster {{ cluster_name }} --remove --t0-uuid {{ t0_uuid }} --no-warning" 9 | register: cleanup_out 10 | when: use_cert == true 11 | 12 | - name: Cleanup nsx resource with username/password 13 | command: "python {{ nsx_cleanup_script_path }} --username {{ nsx_api_user }} 14 | --password {{ nsx_api_password }} --mgr-ip {{ nsx_manager_ip }} 15 | --cluster {{ cluster_name }} --remove --t0-uuid {{ t0_uuid }} --no-warning" 16 | register: cleanup_out 17 | when: use_cert == false 18 | -------------------------------------------------------------------------------- /openshift-ansible-nsx/roles/nsx_config/tasks/main.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Install python-devel 3 | yum: 4 | name: python-devel 5 | state: latest 6 | 7 | - name: Install gcc 8 | yum: 9 | name: gcc 10 | state: present 11 | 12 | - name: Install virtualenv 13 | pip: 14 | name: virtualenv 15 | 16 | - name: remove existing nsx-config virtualenv 17 | file: 18 | state: absent 19 | path: "/tmp/nsx-config" 20 | 21 | - name: create virtualenv 22 | command: virtualenv /tmp/nsx-config 23 | 24 | - name: Download VSphere Automation SDK 25 | git: 26 | repo: "https://github.com/vmware/vsphere-automation-sdk-python" 27 | dest: /tmp/vsphere-automation-sdk-python/ 28 | version: v6.6.1 29 | when: BMC == false 30 | 31 | - name: Install vmware-nsxlib 32 | command: "/tmp/nsx-config/bin/pip install vmware-nsxlib==11.1.4" 33 | 34 | - name: Install VSphere Automation SDK 35 | command: "/tmp/nsx-config/bin/pip install -r /tmp/vsphere-automation-sdk-python/requirements.txt --extra-index-url /tmp/vsphere-automation-sdk-python/lib/" 36 | when: BMC == false 37 | 38 | - name: Remove OpenSSL in virtualenv 39 | file: 40 | state: absent 41 | path: 
"/tmp/nsx-config/lib/python2.7/site-packages/OpenSSL" 42 | 43 | - name: Install pyOpenSSL 16.2.0 44 | command: "/tmp/nsx-config/bin/pip install pyOpenSSL==16.2.0" 45 | 46 | - name: remove vsphere sdk git dir 47 | file: 48 | path: /tmp/vsphere-automation-sdk-python/ 49 | state: absent 50 | when: BMC == false 51 | 52 | - name: NSX management plane resource configuration with cert 53 | command: "/tmp/nsx-config/bin/python '{{ playbook_dir }}'/../nsx_config.py --cert {{ nsx_cert_file_path }} --BMC {{ BMC }} --tn {{ nsx_transport_node_names }} --mp {{ nsx_manager_ip }} --k8scluster {{ cluster_name }} --edge_cluster {{ nsx_edge_cluster_name }} --tz {{ nsx_transport_zone_name }} --t0 {{ nsx_t0_router_name }} --pod_ipblock_name {{ pod_ipblock_name }} --pod_ipblock_cidr {{ pod_ipblock_cidr }} --snat_ippool_name {{ snat_ippool_name }} --snat_ippool_cidr {{ snat_ippool_cidr }} --start_range {{ start_range }} --end_range {{ end_range }} --node {{ os_node_name_list }} --node_ls {{ nsx_node_ls_name }} --node_lr {{ nsx_node_lr_name }} --node_network_cidr {{ node_network_cidr }} --vc_host {{ vc_host }} --vc_user {{ vc_user }} --vc_password {{ vc_password }} --vms {{ vms }}" 54 | when: perform_nsx_config == True and use_cert == True and BMC == false 55 | 56 | - name: NSX management plane resource configuration with user name 57 | command: "/tmp/nsx-config/bin/python '{{ playbook_dir }}'/../nsx_config.py --user {{ nsx_api_user }} --password {{ nsx_api_password }} --BMC {{ BMC }} --tn {{ nsx_transport_node_names }} --mp {{ nsx_manager_ip }} --k8scluster {{ cluster_name }} --edge_cluster {{ nsx_edge_cluster_name }} --tz {{ nsx_transport_zone_name }} --t0 {{ nsx_t0_router_name }} --pod_ipblock_name {{ pod_ipblock_name }} --pod_ipblock_cidr {{ pod_ipblock_cidr }} --snat_ippool_name {{ snat_ippool_name }} --snat_ippool_cidr {{ snat_ippool_cidr }} --start_range {{ start_range }} --end_range {{ end_range }} --node {{ os_node_name_list }} --node_ls {{ nsx_node_ls_name }} --node_lr {{ 
nsx_node_lr_name }} --node_network_cidr {{ node_network_cidr }} --vc_host {{ vc_host }} --vc_user {{ vc_user }} --vc_password {{ vc_password }} --vms {{ vms }}" 58 | when: perform_nsx_config == True and use_cert == False and BMC == false 59 | 60 | - name: BMC NSX management plane resource configuration with cert 61 | command: "/tmp/nsx-config/bin/python '{{ playbook_dir }}'/../nsx_config.py --cert {{ nsx_cert_file_path }} --BMC {{ BMC }} --tn {{ nsx_transport_node_names }} --mp {{ nsx_manager_ip }} --k8scluster {{ cluster_name }} --edge_cluster {{ nsx_edge_cluster_name }} --tz {{ nsx_transport_zone_name }} --t0 {{ nsx_t0_router_name }} --pod_ipblock_name {{ pod_ipblock_name }} --pod_ipblock_cidr {{ pod_ipblock_cidr }} --snat_ippool_name {{ snat_ippool_name }} --snat_ippool_cidr {{ snat_ippool_cidr }} --start_range {{ start_range }} --end_range {{ end_range }}" 62 | when: perform_nsx_config == True and use_cert == True and BMC != false 63 | 64 | - name: BMC NSX management plane resource configuration with user name 65 | command: "/tmp/nsx-config/bin/python '{{ playbook_dir }}'/../nsx_config.py --user {{ nsx_api_user }} --password {{ nsx_api_password }} --BMC {{ BMC }} --tn {{ nsx_transport_node_names }} --mp {{ nsx_manager_ip }} --k8scluster {{ cluster_name }} --edge_cluster {{ nsx_edge_cluster_name }} --tz {{ nsx_transport_zone_name }} --t0 {{ nsx_t0_router_name }} --pod_ipblock_name {{ pod_ipblock_name }} --pod_ipblock_cidr {{ pod_ipblock_cidr }} --snat_ippool_name {{ snat_ippool_name }} --snat_ippool_cidr {{ snat_ippool_cidr }} --start_range {{ start_range }} --end_range {{ end_range }}" 66 | when: perform_nsx_config == True and use_cert == False and BMC != false 67 | 68 | - name: remove nsx-config virtualenv 69 | file: 70 | state: absent 71 | path: "/tmp/nsx-config" 72 | -------------------------------------------------------------------------------- /openshift-ansible-nsx/roles/openshift_installation/tasks/main.yaml: 
--------------------------------------------------------------------------------
1 | ---
2 | - name: Get Openshift playbook from repo
3 |   git:
4 |     repo: "https://github.com/openshift/openshift-ansible.git"
5 |     dest: /tmp/openshift-ansible
6 |     version: "{{ openshift_version }}"
7 |   when: perform_openshift_installation == True
8 |
9 | - name: Run Openshift installation prerequisites
10 |   command: "ansible-playbook -i {{ host_path }}/hosts /tmp/openshift-ansible/playbooks/prerequisites.yml"
11 |   when: perform_openshift_installation == True
12 |
13 | - name: Run Openshift installation
14 |   command: "ansible-playbook -i {{ host_path }}/hosts /tmp/openshift-ansible/playbooks/deploy_cluster.yml"
15 |   when: perform_openshift_installation == True
16 |
17 | - name: Restart Openshift master services
18 |   command: "systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers"
19 |   when: perform_openshift_installation == False
20 |
--------------------------------------------------------------------------------
/openshift-ansible-nsx/vars/global.yaml:
--------------------------------------------------------------------------------
1 | cni_url:
2 | ovs_url:
3 | kmod_ovs_url:
4 | uplink_port:
5 | ncp_image_url:
6 | ncp_yaml_url:
7 | agent_yaml_url:
8 | ncp_scc_yaml_url:
9 | ncp_rbac_yaml_url:
10 |
11 | perform_nsx_config: True
12 | # If use_cert is set to true, you can skip setting nsx user and password
13 | nsx_api_user:
14 | nsx_api_password:
15 | use_cert: false
16 | # Uncomment and set the correct path for the following line if use_cert is set to true
17 | # nsx_cert_file_path:
18 | nsx_manager_ip: 10.161.115.70
19 | nsx_edge_cluster_name: edgecluster1
20 | nsx_transport_zone_name: 1-transportzone-460
21 |
22 | # Enable NSX Policy
23 | policy_nsxapi: false
24 |
25 | # No need to specify if BMC is set to true
26 | os_node_name_list: osmaster
27 | node_network_cidr: 172.10.0.1/16
28 | vc_host: 10.161.107.101
29 | vc_user:
30 | vc_password:
31 | vms: centos74
32 |
33 | nsx_t0_router_name: t0
34 | pod_ipblock_name: podIPBlock
35 | pod_ipblock_cidr: 172.20.0.0/16
36 | snat_ippool_name: externalIP
37 | snat_ippool_cidr: 172.30.0.0/16
38 | start_range: 172.30.0.1
39 | end_range: 172.30.255.254
40 | cluster_name: occl-one
41 |
42 | # No need to specify if BMC is set to true
43 | nsx_node_ls_name: node_ls
44 | nsx_node_lr_name: node_lr
45 |
46 | apiserver_host_ip: 10.161.125.73
47 | use_native_loadbalancer: False
48 | # Uncomment and set the correct path for the following two lines if use_cert is set to true
49 | # nsx_api_cert_file:
50 | # nsx_api_private_key_file:
51 |
52 | # Set to true if manual start of kube proxy is needed
53 | manual_start_kube_proxy: false
54 | openshift_dir:
55 |
56 | # Set when cleanup is required; parameters are cluster_name, nsx_manager_ip,
57 | # nsx_api_user, nsx_api_password
58 | nsx_cleanup_script_path:
59 | # t0 uuid of the cluster
60 | t0_uuid:
61 | # If use_cert is set to true, you can skip setting nsx user and password
62 | # Uncomment and set the correct path for the following two lines if use_cert is set to true
63 | # nsx_api_cert_file:
64 | # nsx_api_private_key_file:
65 |
66 | # When configuring a BMC environment, the OVS, bridge and vc configuration is not needed
67 | BMC: false
68 | nsx_transport_node_names:
69 |
70 | # Specify the openshift version when using openshift_installation
71 | perform_openshift_installation: True
72 | host_path: "{{ inventory_dir }}"
73 | openshift_version: release-3.11
--------------------------------------------------------------------------------
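The `# Ensure idempotency` checks in roles/ncp/tasks/main.yaml rely on grep's exit status: 0 when the pattern is already in the file, non-zero when it is not, which the playbook reads back through `checkpriv.rc` / `checkcert.rc` and uses to decide whether to insert the volume entries. A minimal shell sketch of that gate (the demo file path and its contents are illustrative, not from the repo):

```shell
# Illustrative stand-in for /tmp/ncp-rc.yml with the mount already present
printf 'volumeMounts:\n- name: nsx-priv-key\n' > /tmp/demo-ncp-rc.yml

# grep -q exits 0 if the pattern is present, non-zero otherwise;
# the playbook registers this exit code and only inserts when rc != 0
if grep -q "nsx-priv-key" /tmp/demo-ncp-rc.yml; then
  echo "skip insert"    # corresponds to checkpriv.rc == 0
else
  echo "do insert"      # corresponds to checkpriv.rc != 0
fi
# prints "skip insert"
```

Running the play twice therefore leaves the file unchanged on the second pass, which is why `ignore_errors: yes` is set on the grep task: a non-zero grep exit is expected on a fresh file.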
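The `Tag image as nsx-ncp` task in roles/ncp_prep/tasks/main.yaml takes the first whitespace-separated column of the registered `docker images | grep nsx-ncp-rhel | head -n1` output (the repository name) via `stdout.split()[0]` and passes it to `docker tag`. A sketch of that column extraction against a made-up `docker images` row (the repository name, tag, and sizes below are hypothetical):

```shell
# Hypothetical single row of `docker images` output, matched by grep nsx-ncp-rhel
row="registry.example.com/nsx-ncp-rhel   2.3.2   0123456789ab   2 weeks ago   703MB"

# stdout.split()[0] in the playbook corresponds to the first column: the repository name
image=$(echo "$row" | head -n1 | awk '{print $1}')
echo "$image"
# prints "registry.example.com/nsx-ncp-rhel"
```

Note that the extracted value is the repository name only; the tag column is discarded, so `docker tag` resolves the reference with its default tag.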