├── .gitignore
├── README.md
├── catalog-source-template
├── image-content-source-template
├── known-bad-images
├── mirror-operator-catalogue.py
├── offline_operators_list
├── offline_operators_list.yaml
├── requirements.txt
└── upgradepath.py

/.gitignore:
--------------------------------------------------------------------------------
1 | /.vscode
2 | /content
3 | /publish*
4 | /run*
5 | index*
6 | opm
7 | new-mirror.sh
8 | auth.json
9 | __pycache__
10 | launch*
11 | test.py
12 | notes.txt
13 | 

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # OpenShift Offline Operator Catalogue
2 | 
3 | Please note: This script is now deprecated. Please use [oc-mirror](https://github.com/openshift/oc-mirror), the officially supported Red Hat tooling for mirroring OLM operators and related images.
4 | 
5 | ----
6 | 
7 | This script will:
8 | 
9 | - Create a custom operator catalogue based on the desired operators
10 | - Mirror the required images to a local registry
11 | - (NEW) Optionally figure out the upgrade path to the latest version of an operator and mirror those images as well
12 | - Generate ImageContentSourcePolicy YAML
13 | - Generate CatalogSource YAML
14 | 
15 | 
16 | Why create this?
17 | 
18 | Because the current [catalogue build and mirror](https://docs.openshift.com/container-platform/4.6/operators/admin/olm-restricted-networks.html) process mirrors all versions of every operator, which results in a large number of unnecessary mirrored images. For my use case only 100 images were required, but I ended up with 1200 mirrored images.
19 | 
20 | ## Note
21 | 
22 | This script has been updated for OpenShift 4.10+. When mirroring operators for OCP 4.10+, upgrade paths are not supported.
23 | 
24 | ## Requirements
25 | 
26 | This tool was tested with the following versions of the runtime and utilities.
27 | 
28 | 1. RHEL 8.2 or Fedora 33 (for the opm tool, RHEL 8 or a Fedora equivalent is a hard requirement due to the dependency on glibc 2.28+)
29 | 2. Python 3.7.6 (with the pyyaml and jinja2 libraries; see requirements.txt)
30 |    a. pip install --requirement requirements.txt
31 | 3. Podman v2.0+ (anything below 1.8 may run into issues with multi-arch manifests)
32 | 4. Skopeo 1.0+ (anything below 1.0 may have issues with the newer manifests)
33 | 5. oc CLI 4.6.9+
34 | 
35 | Please note this only works with operators that meet the following criteria:
36 | 
37 | 1. Have a CSV in the manifest that contains a full list of related images (see the illustrative snippet below)
38 | 2. The related images are referenced by SHA digest
39 | 
40 | For a full list of operators that work offline please see link below
41 | 
42 | 
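To make the criteria above concrete, here is a hypothetical, abbreviated ClusterServiceVersion (CSV) excerpt showing the kind of `spec.relatedImages` section the script relies on. The operator name, repositories, and digests below are placeholders, not entries from a real catalogue:

```yaml
# Hypothetical CSV excerpt: every related image is pinned by sha256 digest
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v1.2.3
spec:
  relatedImages:
    - name: example-operator
      image: registry.redhat.io/example/example-operator@sha256:<digest>
    - name: example-operand
      image: registry.redhat.io/example/example-operand@sha256:<digest>
```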
43 | ## Running the script
44 | 
45 | 1. Install the tools listed in the requirements section
46 | 2. Login to your offline registry using podman (this is the registry where you will be publishing the catalogue and related images; you can use the --authfile option instead)
47 | 3. Login to registry.redhat.io using podman (you can use the --authfile option instead)
48 | 4. Login to quay.io using podman (you can use the --authfile option instead)
49 | 5. Update the offline_operator_list.yaml file with the operators you want to include in the catalog creation and mirroring. See for list of supported offline operators
50 | 6. Run the script (sample command below; see the arguments section for more details)
51 | 
52 | ```Shell
53 | mirror-operator-catalogue.py \
54 |   --catalog-version 1.0.0 \
55 |   --authfile /var/run/containers/0/auth.json \
56 |   --registry-olm local_registry_url:5000 \
57 |   --registry-catalog local_registry_url:5000 \
58 |   --operator-file ./offline_operator_list \
59 |   --icsp-scope=namespace
60 | ```
61 | 
62 | 7. Disable the default operator sources
63 | 
64 | ```Shell
65 | oc patch OperatorHub cluster --type json \
66 |   -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
67 | ```
68 | 
69 | 8. Apply the YAML files in the publish folder. The image content source policy will create a new MCO render, which will start a rolling reboot of your cluster nodes. You have to wait until that is complete before attempting to install operators from the catalogue (see the example below).
70 | 
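For step 8, a minimal sequence might look like the following. This sketch assumes the default `publish` output directory (which holds the generated `*--icsp.yaml` and `*--catalogsource.yaml` files) and uses the MachineConfigPool status to tell when the ICSP-triggered rolling reboot has finished:

```Shell
# Apply the generated CatalogSource and ImageContentSourcePolicy YAML
oc apply -f ./publish/

# Watch the MachineConfigPools; wait until the master and worker pools
# report UPDATED=True and UPDATING=False before installing operators
oc get mcp -w
```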
71 | ### Script Arguments
72 | 
73 | ##### --authfile
74 | 
75 | Optional:
76 | 
77 | The location of the auth.json file generated when you use podman or docker to log in to registries. The auth file is located either in your home directory under .docker, or at /run/user/your_uid/containers/auth.json or /var/run/containers/your_uid/auth.json.
78 | 
79 | If you already have a `pull-secret.json` file with credentials for all registries (quay.io, registry.redhat.io, private registry) you don't need to log in to the registries with podman.
80 | 
81 | ##### --registry-olm
82 | 
83 | Required:
84 | 
85 | The URL of the destination registry that the operator images will be mirrored to
86 | 
87 | ##### --registry-catalog
88 | 
89 | Required:
90 | 
91 | The URL of the destination registry that the operator catalogue image will be published to
92 | 
93 | ##### --catalog-version
94 | 
95 | Optional:
96 | Default: "1.0.0"
97 | 
98 | Arbitrary version number used to tag your catalogue image. Unless you are interested in doing A/B testing, keep the same version for all subsequent runs.
99 | 
100 | ##### --ocp-version
101 | 
102 | Optional:
103 | Default: 4.6
104 | 
105 | The version of OCP that will be used to download the opm CLI
106 | 
107 | ##### --operator-channel
108 | 
109 | Optional:
110 | Default: 4.6
111 | 
112 | The operator channel to create the custom catalogue from
113 | 
114 | ##### --operator-list
115 | 
116 | Required if --operator-file and --operator-yaml-file are not set
117 | 
118 | List of operators to include in your custom catalogue. If this argument is used, the --operator-file argument should not be used.
119 | 
120 | The entries should be separated by spaces
121 | 
122 | Example:
123 | 
124 | ```Shell
125 | --operator-list kubevirt-hyperconverged local-storage-operator
126 | ```
127 | 
128 | ##### --operator-file
129 | 
130 | Required if --operator-list or --operator-yaml-file is not set
131 | 
132 | Location of the file containing a list of operators to include in your custom catalogue. The entries should be in plain text with no quotes, one operator name per line. If this argument is used, --operator-list should not be used.
133 | 
134 | Example operator list file content:
135 | 
136 | ```Shell
137 | local-storage-operator
138 | cluster-logging
139 | codeready-workspaces
140 | ```
141 | 
142 | ##### --operator-yaml-file
143 | 
144 | Required if --operator-list or --operator-file is not set
145 | 
146 | Location of the file containing a list of operators to include in your custom catalogue. Each entry includes a "name" property and an optional "start_version". If the start_version property is not set, only the latest version of the operator in the default channel will be mirrored. If it is set, the automation figures out the shortest upgrade path to the latest version and mirrors the images from those versions as well. At the end of the run you can check the mirror_log.txt file in the publish directory to see the upgrade path required for each operator. For the version, only include the X.Y.Z digits. Even though there is some sanitization of the version number, matching is easier and more accurate if this convention is followed.
147 | 
148 | Example operator list file content:
149 | 
150 | ```yaml
151 | operators:
152 |   - name: kubevirt-hyperconverged
153 |     start_version: 2.5.5
154 |   - name: local-storage-operator
155 |   - name: cluster-logging
156 |   - name: jaeger-product
157 |     start_version: 1.17.8
158 |   - name: kiali-ossm
159 |   - name: codeready-workspaces
160 |     start_version: 2.7.0
161 | ```
162 | 
163 | ##### --icsp-scope
164 | 
165 | Optional:
166 | Default: namespace
167 | 
168 | Scope of the registry mirrors in the imagecontentsourcepolicy file. Allowed values: namespace, registry. Defaults to namespace.
169 | 
170 | ##### --mirror-images
171 | 
172 | Optional
173 | Default: True
174 | 
175 | If set to True, all related images will be mirrored to the registry provided by the --registry-olm argument. Otherwise images will not be mirrored. Set to False if you are using a registry proxy and don't need to mirror images locally.
176 | 
177 | ## Updating The Catalogue
178 | 
179 | To update the catalogue, run the script the same way you did the first time. As of OCP 4.6 you no longer have to increment the version of the catalog. The catalog will query for a newer version of the image every 10 minutes (by default).
180 | 
181 | ## Script Notes
182 | 
183 | Unfortunately, just because an image is listed in the related images spec doesn't mean it exists or is even used by the operator, for example registry.redhat.io/openshift4/ose-promtail from the logging operator. I have put that image in the known-bad-images file to avoid attempting to mirror it. Other images will be added as I find them. One way to check a suspect image is shown below.
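If a mirror run keeps failing on a particular image, one way to confirm whether the image actually exists in the source registry, before adding it to the known-bad-images file, is to query it directly with skopeo. This is only a suggested check, not part of the script; substitute the failing image reference and your own auth file:

```Shell
# Check whether a related image is resolvable in the source registry.
# A "manifest unknown" error suggests the entry is a candidate for known-bad-images.
skopeo inspect --authfile /var/run/containers/0/auth.json \
  docker://registry.redhat.io/openshift4/ose-promtail@sha256:<digest>
```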
184 | 185 | ## Local Docker Registry 186 | 187 | If you need a to create a local secured registry follow the instructions from the link below 188 | 189 | -------------------------------------------------------------------------------- /catalog-source-template: -------------------------------------------------------------------------------- 1 | apiVersion: operators.coreos.com/v1alpha1 2 | kind: CatalogSource 3 | metadata: 4 | name: {{ CatalogSourceName }} 5 | namespace: openshift-marketplace 6 | spec: 7 | displayName: {{ CatalogSourceDisplayName }} 8 | image: {{ CatalogSourceImage }} 9 | publisher: Red Hat 10 | sourceType: grpc 11 | updateStrategy: 12 | registryPoll: 13 | interval: 10m0s 14 | -------------------------------------------------------------------------------- /image-content-source-template: -------------------------------------------------------------------------------- 1 | apiVersion: operator.openshift.io/v1alpha1 2 | kind: ImageContentSourcePolicy 3 | metadata: 4 | name: olm-image-content-source 5 | spec: 6 | repositoryDigestMirrors: [] 7 | -------------------------------------------------------------------------------- /known-bad-images: -------------------------------------------------------------------------------- 1 | registry.redhat.io/openshift4/ose-promtail@sha256:1264aa92ebc6cccf46da3a35fbb54421b806dda5640c7e9706e6e815d13f509d 2 | registry.redhat.io/openshift4/ose-promtail@sha256:9e2daf10101a4e482bf65a1bc2f5f1472555b3f2fa3cc33157624363eff6676a 3 | registry.connect.redhat.com/splunk/spark@sha256:4adf17b546f168a5c580b2445508793f4bfa4c84a41458a0b2fefc6522465a45 4 | registry.connect.redhat.com/rocketchat/rocketchat@sha256:d26a76943471e057a088acd4896eab5b7cdc478069656de0f896ee66ffa61cf7 5 | quay.io/rhc4tp/rocketchat-operator@sha256:e556b5222ee2b12c0cbae134f3b081712df6077a5c4f8da693a07674f87fb571 6 | -------------------------------------------------------------------------------- /mirror-operator-catalogue.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | import os 3 | import sys 4 | import re 5 | import tarfile 6 | import yaml 7 | import subprocess 8 | import argparse 9 | import urllib.request 10 | from jinja2 import Template 11 | from pathlib import Path 12 | import upgradepath 13 | import sqlite3 14 | import json 15 | import shutil 16 | from natsort import natsorted 17 | 18 | def is_number(string): 19 | try: 20 | float(string) 21 | return True 22 | except ValueError: 23 | return False 24 | 25 | parser = argparse.ArgumentParser( 26 | description='Mirror individual operators to an offline registry') 27 | parser.add_argument( 28 | "--authfile", 29 | default=None, 30 | help="Pull secret with credentials") 31 | parser.add_argument( 32 | "--registry-olm", 33 | metavar="REGISTRY", 34 | required=True, 35 | help="Registry to copy the operator images") 36 | parser.add_argument( 37 | "--registry-catalog", 38 | metavar="REGISTRY", 39 | required=True, 40 | help="Registry to copy the catalog image") 41 | parser.add_argument( 42 | "--catalog-version", 43 | default="1.0.0", 44 | help="Tag for the catalog image") 45 | parser.add_argument( 46 | "--ocp-version", 47 | default="4.8", 48 | help="OpenShift Y Stream. Only use X.Y version do not use Z. Default 4.8") 49 | parser.add_argument( 50 | "--operator-channel", 51 | default="4.8", 52 | help="Operator Channel. Default 4.8") 53 | parser.add_argument( 54 | "--operator-image-name", 55 | default="redhat-operators", 56 | help="Operator Image short Name. 
Default redhat-operators") 57 | parser.add_argument( 58 | "--operator-catalog-image-url", 59 | default="registry.redhat.io/redhat/redhat-operator-index", 60 | help="Operator Index Image URL without version. Default registry.redhat.io/redhat/redhat-operator-index") 61 | group = parser.add_mutually_exclusive_group(required=True) 62 | group.add_argument( 63 | "--operator-list", 64 | nargs="*", 65 | metavar="OPERATOR", 66 | help="List of operators to mirror, space delimeted") 67 | group.add_argument( 68 | "--operator-file", 69 | metavar="FILE", 70 | help="Specify a file containing the operators to mirror") 71 | group.add_argument( 72 | "--operator-yaml-file", 73 | metavar="FILE", 74 | help="Specify a YAML file containing operator list to mirror") 75 | parser.add_argument( 76 | "--icsp-scope", 77 | default="namespace", 78 | help="Scope of registry mirrors in imagecontentsourcepolicy file. Allowed values: namespace, registry. Defaults to: namespace") 79 | parser.add_argument( 80 | "--output", 81 | default="publish", 82 | help="Directory to create YAML files, must be relative to script path") 83 | parser.add_argument( 84 | "--mirror-images", 85 | default="True", 86 | help="Boolean: Mirror related images. Default is True") 87 | parser.add_argument( 88 | "--add-tags-to-images-mirrored-by-digest", 89 | default="False", 90 | help="Boolean: add tags to images mirrored by digest. Default is False") 91 | parser.add_argument( 92 | "--delete-publish", 93 | default="True", 94 | help="Boolean: Delete publish directory. Default is True") 95 | parser.add_argument( 96 | "--run-dir", 97 | default="", 98 | help="Run directory for script, must be an absolute path, only handy if running script in a container") 99 | parser.add_argument( 100 | "--opm-path", 101 | default="", 102 | help="Full path of the opm binary if you want to use your own instead of the tool downloading it for you") 103 | parser.add_argument( 104 | "--oc-cli-path", 105 | default="oc", 106 | help="Full path of oc cli") 107 | parser.add_argument( 108 | "--custom-operator-catalog-image-url", 109 | default="", 110 | help="DO NOT USE THIS - This argument is depricated and will no longer work") 111 | parser.add_argument( 112 | "--custom-operator-catalog-image-and-tag", 113 | default="", 114 | help="custom operator catalog image name including the tag") 115 | parser.add_argument( 116 | "--custom-operator-catalog-name", 117 | default="custom-redhat-operators", 118 | help="custom operator catalog name") 119 | 120 | try: 121 | args = parser.parse_args() 122 | except Exception as exc: 123 | print("An exception occurred while parsing arguements list") 124 | print(exc) 125 | sys.exit(1) 126 | 127 | # Global Variables 128 | if args.run_dir != "": 129 | script_root_dir = args.run_dir 130 | else: 131 | script_root_dir = os.path.dirname(os.path.realpath(__file__)) 132 | 133 | publish_root_dir = os.path.join(script_root_dir, args.output) 134 | run_root_dir = os.path.join(script_root_dir, "run") 135 | mirror_images = args.mirror_images 136 | add_tags_to_images_mirrored_by_digest = args.add_tags_to_images_mirrored_by_digest 137 | delete_publish = args.delete_publish 138 | operator_image_list = [] 139 | operator_data_list = {} 140 | operator_known_bad_image_list_file = os.path.join( 141 | script_root_dir, "known-bad-images") 142 | quay_rh_base_url = "https://quay.io/cnr/api/v1/packages/" 143 | redhat_operators_image_name = args.operator_image_name 144 | redhat_operators_packages_url = "https://quay.io/cnr/api/v1/packages?namespace=" + args.operator_image_name 145 | 
image_content_source_policy_template_file = os.path.join( 146 | script_root_dir, "image-content-source-template") 147 | catalog_source_template_file = os.path.join( 148 | script_root_dir, "catalog-source-template") 149 | ocp_version = args.ocp_version 150 | operator_channel = args.operator_channel 151 | operator_index_version = ":v" + operator_channel if is_number(operator_channel) else ":" + operator_channel 152 | redhat_operators_catalog_image_url = args.operator_catalog_image_url + operator_index_version 153 | 154 | custom_redhat_operators_image_name = "custom-" + args.operator_image_name 155 | custom_redhat_operators_display_name = re.sub(r'^redhat-', 'red hat-', args.operator_image_name) 156 | custom_redhat_operators_display_name = re.sub('-', ' ', custom_redhat_operators_display_name).title() 157 | 158 | if args.custom_operator_catalog_image_url: 159 | print("--custom-operator-catalog-image-url is no longer supported. \n") 160 | print("Use --custom-operator-catalog-image-and-tag instead") 161 | exit(1) 162 | elif args.custom_operator_catalog_image_and_tag: 163 | custom_redhat_operators_catalog_image_url = args.registry_catalog + "/" + args.custom_operator_catalog_image_and_tag 164 | elif args.custom_operator_catalog_name: 165 | custom_redhat_operators_catalog_image_url = args.registry_catalog + "/" + args.custom_operator_catalog_name + ":" + args.catalog_version 166 | else: 167 | custom_redhat_operators_catalog_image_url = args.registry_catalog + "/custom-" + args.operator_catalog_image_url.split('/')[2] + ":" + args.catalog_version 168 | 169 | oc_cli_path = args.oc_cli_path 170 | 171 | image_content_source_policy_output_file = os.path.join( 172 | publish_root_dir, custom_redhat_operators_image_name + '--icsp.yaml') 173 | catalog_source_output_file = os.path.join( 174 | publish_root_dir, custom_redhat_operators_image_name + '--catalogsource.yaml') 175 | mapping_file=os.path.join( 176 | publish_root_dir, custom_redhat_operators_image_name + '--mapping.txt') 177 | image_manifest_file = os.path.join( 178 | publish_root_dir, custom_redhat_operators_image_name + '--image_manifest.txt') 179 | mirror_summary_file = os.path.join( 180 | publish_root_dir, custom_redhat_operators_image_name + '--mirror_log.txt') 181 | 182 | def main(): 183 | run_temp = os.path.join(run_root_dir, "temp") 184 | mirror_summary_path = Path(mirror_summary_file) 185 | 186 | # Create publish, run and temp paths 187 | if delete_publish.lower() == "true": 188 | print("Will delete the publish dir...") 189 | delete_publish_bool = True 190 | else: 191 | print("--delete-publish=false Skipping deleting the publish dir") 192 | delete_publish_bool = False 193 | RecreatePath(publish_root_dir, delete_publish_bool) 194 | RecreatePath(run_root_dir) 195 | RecreatePath(run_temp) 196 | 197 | print("Starting Catalog Build and Mirror...") 198 | print("Getting opm CLI...") 199 | if args.opm_path != "": 200 | opm_cli_path = args.opm_path 201 | else: 202 | opm_cli_path = GetOpmCli(run_temp) 203 | 204 | print("Getting the list of operators for custom catalogue..") 205 | operators = GetWhiteListedOperators() 206 | 207 | # # NEED TO BE LOGGED IN TO REGISTRY.REDHAT.IO WITHOUT AUTHFILE ARGUMENT 208 | print("Pruning OLM catalogue...") 209 | if int(operator_channel.split('.')[0]) > 3 and int(operator_channel.split('.')[1]) > 10: 210 | operators = PruneFileBasedCatalog(opm_cli_path, operators, run_temp) 211 | print("Writing summary data..") 212 | CreateSummaryFileForFileBasedatalog(operators, mirror_summary_path) 213 | else: 214 | 
PruneSqliteBasedCatalog(opm_cli_path, operators, run_temp) 215 | 216 | print("Extracting custom catalogue database...") 217 | db_path = ExtractIndexDb() 218 | 219 | print("Create upgrade matrix for selected operators...") 220 | for operator in operators: 221 | operator.upgrade_path = upgradepath.GetShortestUpgradePath(operator.name, operator.start_version, db_path) 222 | 223 | print("Getting list of images to be mirrored...") 224 | GetImageListToMirror(operators, db_path) 225 | 226 | print("Writing summary data..") 227 | CreateSummaryFile(operators, mirror_summary_path) 228 | 229 | 230 | images = getImages(operators) 231 | if mirror_images.lower() == "true": 232 | print("Mirroring related images to offline registry...") 233 | MirrorImagesToLocalRegistry(images) 234 | else: 235 | print("--mirror-images=false Skipping image mirroring") 236 | 237 | 238 | print("Creating Image Content Source Policy YAML...") 239 | CreateImageContentSourcePolicyFile(images) 240 | 241 | print("Creating Mapping File...") 242 | CreateMappingFile(images) 243 | 244 | print("Creating Image manifest file...") 245 | CreateManifestFile(images) 246 | 247 | print("Creating Catalog Source YAML...") 248 | CreateCatalogSourceYaml(custom_redhat_operators_catalog_image_url, custom_redhat_operators_image_name, custom_redhat_operators_display_name) 249 | 250 | print("Catalogue creation and image mirroring complete") 251 | print("See Publish folder for the image content source policy and catalog source yaml files to apply to your cluster") 252 | 253 | cmd_args = "sudo rm -rf {}".format(run_root_dir) 254 | subprocess.run(cmd_args, shell=True, check=True) 255 | 256 | 257 | 258 | def CreateSummaryFileForFileBasedatalog(operators, mirror_summary_path): 259 | with open(mirror_summary_path, "w") as f: 260 | for operator in operators: 261 | f.write(operator.name + '\n') 262 | f.write("============================================================\n \n") 263 | for bundle in operator.operator_bundles: 264 | f.write(bundle.name + '\n') 265 | f.write("Image List \n") 266 | f.write("---------------------------------------- \n") 267 | for relatedImage in bundle.relatedImages: 268 | f.write(relatedImage["image"] + "\n") 269 | f.write("---------------------------------------- \n \n") 270 | f.write("============================================================\n \n \n") 271 | 272 | 273 | def GetOcCli(run_temp): 274 | base_url = "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/" 275 | archive_name = "openshift-client-linux.tar.gz" 276 | ocp_bin_channel = "fast" 277 | ocp_bin_release_url= base_url + ocp_bin_channel + "-" + ocp_version + "/" + archive_name 278 | print(ocp_bin_release_url) 279 | archive_file_path = os.path.join(run_temp, archive_name) 280 | 281 | print("Downloading oc Cli...") 282 | urllib.request.urlretrieve(ocp_bin_release_url, archive_file_path) 283 | 284 | print("Extracting oc Cli...") 285 | tf = tarfile.open(archive_file_path) 286 | tf.extractall(run_root_dir) 287 | 288 | return os.path.join(run_root_dir, "oc") 289 | 290 | 291 | def GetOpmCli(run_temp): 292 | 293 | base_url = "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/" 294 | archive_name = "opm-linux.tar.gz" 295 | channel = "fast" 296 | opm_bin_release_url = base_url + channel + "-" + ocp_version + "/" + archive_name 297 | print(opm_bin_release_url) 298 | archive_file_path = os.path.join(run_temp, archive_name) 299 | 300 | print("Downloading opm Cli...") 301 | urllib.request.urlretrieve(opm_bin_release_url, archive_file_path) 302 | 303 | 
print("Extracting oc Cli...") 304 | tf = tarfile.open(archive_file_path) 305 | tf.extractall(run_root_dir) 306 | 307 | return os.path.join(run_root_dir, "opm") 308 | 309 | 310 | def GetWhiteListedOperators(): 311 | try: 312 | operators = [] 313 | operator_list = [] 314 | 315 | if args.operator_file: 316 | with open(args.operator_file) as f: 317 | operators = f.read().splitlines() 318 | for operator in operators: 319 | operator_list.append(OperatorSpec(operator, "")) 320 | 321 | elif args.operator_yaml_file: 322 | with open(args.operator_yaml_file) as f: 323 | data = yaml.safe_load(f) 324 | for operator in data["operators"]: 325 | operator_list.append(OperatorSpec(GetFieldValue(operator, "name"), GetFieldValue(operator, "start_version"))) 326 | 327 | elif args.operator_list: 328 | operators = args.operator_list 329 | for operator in operators: 330 | operator_list.append(OperatorSpec(operator, "")) 331 | 332 | return operator_list 333 | 334 | except Exception as exc: 335 | print("An exception occurred while reading operator list file") 336 | print(exc) 337 | sys.exit(1) 338 | 339 | 340 | def CreateSummaryFile(operators, mirror_summary_path): 341 | with open(mirror_summary_path, "w") as f: 342 | for operator in operators: 343 | f.write(operator.name + '\n') 344 | f.write("Upgrade Path: ") 345 | upgrade_path = operator.start_version + " -> " 346 | for version in operator.upgrade_path: 347 | upgrade_path += version + " -> " 348 | upgrade_path = upgrade_path[:-4] 349 | f.write(upgrade_path) 350 | f.write("\n") 351 | f.write("============================================================\n \n") 352 | for bundle in operator.operator_bundles: 353 | f.write("[Version: " + bundle.version + "]\n") 354 | f.write("Image List \n") 355 | f.write("---------------------------------------- \n") 356 | for image in bundle.relatedImages: 357 | f.write(image + "\n") 358 | f.write("---------------------------------------- \n \n") 359 | f.write("============================================================\n \n \n") 360 | 361 | # Returns an empty string if field does not exist 362 | def GetFieldValue(data, field): 363 | if field in data: 364 | return data[field] 365 | else: 366 | return "" 367 | 368 | # Read data from rendered index 369 | def readJsonFile(cdata): 370 | objects = [] 371 | with open(cdata) as f: 372 | braceCount = 0 373 | jsonStr = '' 374 | for jsonObj in f: 375 | braceCount += jsonObj.count('{') 376 | braceCount -= jsonObj.count('}') 377 | jsonStr += jsonObj 378 | if braceCount == 0: 379 | objects.append(json.loads(jsonStr)) 380 | jsonStr = '' 381 | return objects 382 | 383 | # Create a custom catalog with selected operators from newer file based catalog 384 | def PruneFileBasedCatalog(opm_cli_path, operators, run_temp): 385 | script_root_dir = os.path.dirname(os.path.realpath(__file__)) 386 | prune_path = os.path.join(run_temp, "pruned-catalog") 387 | configs_path = os.path.join(prune_path, "configs") 388 | cdata = os.path.join(configs_path, "data.out") 389 | pdata = os.path.join(configs_path, "index.json") 390 | render_command = f"{opm_cli_path} render {redhat_operators_catalog_image_url}" 391 | if args.authfile: 392 | # Copy to correct folder for opm 393 | HOME = os.getenv('HOME') 394 | docker_cfg = os.path.join(HOME, ".docker", "config.json") 395 | shutil.copyfile(args.authfile, docker_cfg) 396 | else: 397 | print("You must pass an auth file with the '--authfile' option") 398 | exit(1) 399 | #os.chdir(run_temp) 400 | 401 | if not os.path.exists(configs_path): 402 | print(f"Creating config path 
('{configs_path}')") 403 | os.makedirs(configs_path, exist_ok=True ) 404 | if not os.path.exists(cdata): 405 | print(f"Running: '{render_command}'") 406 | data = subprocess.run(render_command, shell=True, check=True, capture_output=True) 407 | with open(cdata, 'a') as out: 408 | out.write(data.stdout.decode('utf-8').strip()) 409 | objects = readJsonFile(cdata) 410 | allowed = [] 411 | for operator in operators: 412 | allowed.append(operator.name) 413 | operators = [] 414 | for obj in objects: 415 | tmp = [] 416 | version = '' 417 | if obj['schema'] == 'olm.package' and obj['name'] in allowed: 418 | operator = OperatorSpec(obj['name'], "") 419 | operator.defaultChannel = obj['defaultChannel'] 420 | operator.icon = obj['icon'] 421 | operators.append(operator) 422 | elif obj['schema'] == 'olm.channel' and obj['package'] in allowed and obj['name'] == operator.defaultChannel: 423 | operator = next(operator for operator in operators if operator.name == obj['package']) 424 | channel = OperatorChannel(obj['name']) 425 | channel.package = obj['package'] 426 | for ent in obj['entries']: 427 | tmp.append(ent['name']) 428 | version = natsorted(tmp)[-1] 429 | entry = next((ent for ent in obj['entries'] if ent['name'] == version), None) 430 | channel.entries = [entry] 431 | operator.operator_channels.append(channel) 432 | elif obj['schema'] == 'olm.bundle' and obj['package'] in allowed: 433 | operator = next(operator for operator in operators if operator.name == obj['package']) 434 | for ent in operator.operator_channels[0].entries: 435 | if obj['name'] == ent['name']: 436 | bundle = OperatorBundle(obj['name'], version) 437 | bundle.package = obj['package'] 438 | bundle.image = obj['image'] 439 | for relatedImage in obj['relatedImages']: 440 | bundle.relatedImages.append(relatedImage) 441 | for property in obj['properties']: 442 | bundle.properties.append(property) 443 | operator.operator_bundles.append(bundle) 444 | # GetFileBasedImageListToMirror(operators) 445 | os.remove(cdata) 446 | print(f"writing index.json") 447 | with open(pdata, 'a') as index: 448 | for operator in operators: 449 | package = { "schema": "olm.package", "name": operator.name, "defaultChannel": operator.defaultChannel, "icon": operator.icon } 450 | index.write(json.dumps(package, indent=2)) 451 | for c in operator.operator_channels: 452 | chan = {"schema": "olm.channel","name": c.name, "package": c.package,"entries": c.entries} 453 | index.write(json.dumps(chan, indent=2)) 454 | for b in operator.operator_bundles: 455 | bund ={"schema": "olm.bundle","name":b.name, "package": b.package, "image": b.image, "properties": b.properties, "relatedImages": b.relatedImages} 456 | index.write(json.dumps(bund, indent=2)) 457 | dockerfile_cmd = f"{opm_cli_path} generate dockerfile {configs_path}" 458 | print(f"Running '{dockerfile_cmd}'") 459 | subprocess.run(dockerfile_cmd, shell=True, check=True, capture_output=True) 460 | os.chdir(prune_path) 461 | build_cmd = f"podman build -t {custom_redhat_operators_catalog_image_url} -f configs.Dockerfile" 462 | print(f"Running '{build_cmd}'") 463 | try: 464 | build_data = subprocess.run(build_cmd, shell=True, check=True, capture_output=True) 465 | except subprocess.CalledProcessError: 466 | print(build_data.stderr.decode('utf-8').strip()) 467 | except: 468 | print("Something went wrong building, bailing...") 469 | print(build_data.stdout.decode('utf-8').strip()) 470 | push_cmd = f"podman push {custom_redhat_operators_catalog_image_url} --tls-verify=false --authfile {args.authfile}" 471 | 472 | 
print(f"Pushing custom catalogue {custom_redhat_operators_catalog_image_url} to registry...") 473 | print(f"Running '{push_cmd}'") 474 | try: 475 | push_data = subprocess.run(push_cmd, shell=True, check=True, capture_output=True) 476 | except subprocess.CalledProcessError as e: 477 | print("Something went wrong pushing (auth?), bailing...") 478 | exit(1) 479 | print(push_data.stdout.decode('utf-8').strip()) 480 | os.chdir(script_root_dir) 481 | return operators 482 | 483 | # Create a custom catalogue with selected operators from older sqlite3 based catalog 484 | def PruneSqliteBasedCatalog(opm_cli_path, operators, run_temp): 485 | if args.authfile: 486 | # Copy to correct folder for opm 487 | HOME = os.getenv('HOME') 488 | docker_cfg = os.path.join(HOME, ".docker", "config.json") 489 | shutil.copyfile(args.authfile, docker_cfg) 490 | else: 491 | print("You must pass an auth file with the '--authfile' option") 492 | exit(1) 493 | 494 | operator_list = GetListOfCommaDelimitedOperatorList(operators) 495 | cmd = f"{opm_cli_path} index prune -f {redhat_operators_catalog_image_url}" 496 | cmd += f" -p {operator_list}" # local-storage-operator,cluster-logging,kubevirt-hyperconverged " 497 | cmd += f" -t {custom_redhat_operators_catalog_image_url}" 498 | print(f"Running: {cmd}") 499 | 500 | os.chdir(run_temp) 501 | subprocess.run(cmd, shell=True, check=True) 502 | generate_command = f"{opm_cli_path} generate dockerfile pruned-catalog/configs" 503 | 504 | os.chdir(script_root_dir) 505 | 506 | push_cmd = f"podman push {custom_redhat_operators_catalog_image_url} --tls-verify=false --authfile {args.authfile}" 507 | 508 | print(f"Pushing custom catalogue {custom_redhat_operators_catalog_image_url} to registry...") 509 | print(f"Running '{push_cmd}'") 510 | subprocess.run(push_cmd, shell=True, check=True) 511 | print("Finished push") 512 | 513 | 514 | def GetImageListToMirror(operators, db_path): 515 | con = sqlite3.connect(db_path) 516 | cur = con.cursor() 517 | for operator in operators: 518 | for version in operator.upgrade_path: 519 | 520 | # Get Operator bundle name 521 | cmd = "select default_channel from package where name like '%" + operator.name + "%';" 522 | 523 | result = cur.execute(cmd).fetchall() 524 | if len(result) == 1: 525 | channel = result[0][0] 526 | 527 | cmd = "select operatorbundle_name from channel_entry where package_name like '" + operator.name + "' and channel_name like '" + channel + "' and operatorbundle_name like '%" + version + "%';" 528 | result = cur.execute(cmd).fetchall() 529 | 530 | if len(result) > 0: 531 | bundle_name = result[0][0] 532 | 533 | bundle = OperatorBundle(bundle_name, version) 534 | 535 | # Get related images for the operator bundle 536 | cmd = "select image from related_image where operatorbundle_name like '%" + bundle_name + "%';" 537 | 538 | result = cur.execute(cmd).fetchall() 539 | if len(result) > 0: 540 | for image in result: 541 | bundle.relatedImages.append(image[0]) 542 | 543 | # Get bundle images for operator bundle 544 | cmd = "select bundlepath from operatorbundle where (name like '%" + operator.name + "%' or bundlepath like '%" + operator.name + "%') and version='" + version + "';" 545 | 546 | result = cur.execute(cmd).fetchall() 547 | if len(result) > 0: 548 | for image in result: 549 | bundle.relatedImages.append(image[0]) 550 | 551 | operator.operator_bundles.append(bundle) 552 | 553 | 554 | def ExtractIndexDb(): 555 | cmd = oc_cli_path + " image extract " + custom_redhat_operators_catalog_image_url 556 | cmd += " -a " + args.authfile 
+ " --path /database/index.db:" + run_root_dir + " --confirm --insecure" 557 | subprocess.run(cmd, shell=True, check=True) 558 | 559 | return os.path.join(run_root_dir, "index.db") 560 | 561 | 562 | # Get a non duplicate list of images 563 | def getImages(operators): 564 | image_list = [] 565 | for operator in operators: 566 | for bundle in operator.operator_bundles: 567 | for image in bundle.relatedImages: 568 | if type(image) is dict: 569 | if image['image'] not in image_list: 570 | image_list.append(image['image']) 571 | else: 572 | if image not in image_list: 573 | image_list.append(image) 574 | return image_list 575 | 576 | 577 | def MirrorImagesToLocalRegistry(images): 578 | print("Copying image list to offline registry...") 579 | failed_image_list = [] 580 | image_count = len(images) 581 | cur_image_count = 1 582 | for image in images: 583 | PrintBreakLine() 584 | print( 585 | "Mirroring image " + 586 | str(cur_image_count) + 587 | " of " + 588 | str(image_count)) 589 | if isBadImage(image) == False: 590 | destUrl = GenerateDestUrl(image) 591 | max_retries = 5 592 | retries = 0 593 | success = False 594 | while retries < max_retries and success == False: 595 | if (retries > 0 ): 596 | print("RETRY ATTEMPT: " + str(retries)) 597 | try: 598 | print("Image: " + image) 599 | CopyImageToDestinationRegistry(image, destUrl, args.authfile) 600 | success = True 601 | except subprocess.CalledProcessError as e: 602 | print("ERROR Copying image: " + image) 603 | print("TO") 604 | print(destUrl) 605 | if (e.output is not None): 606 | print("exception:" + e.output) 607 | print("ERROR copying image!") 608 | retries+=1 609 | if not success: 610 | failed_image_list.append(image) 611 | 612 | else: 613 | print("Known bad image: {}\n{}".format(image, "ignoring...")) 614 | 615 | cur_image_count = cur_image_count + 1 616 | PrintBreakLine() 617 | print("Finished mirroring related images.") 618 | 619 | if len(failed_image_list) > 0: 620 | print("Failed to copy the following images:") 621 | PrintBreakLine() 622 | for image in failed_image_list: 623 | print(image) 624 | PrintBreakLine() 625 | 626 | 627 | # Create Image Content Source Policy Yaml to apply to OCP cluster 628 | def CreateImageContentSourcePolicyFile(images): 629 | with open(image_content_source_policy_template_file) as f: 630 | icpt = yaml.safe_load(f) 631 | 632 | repoList = GetRepoListToMirror(images) 633 | 634 | for key in repoList: 635 | icpt['spec']['repositoryDigestMirrors'].append( 636 | {'mirrors': [repoList[key]], 'source': key}) 637 | 638 | with open(image_content_source_policy_output_file, "w") as f: 639 | yaml.dump(icpt, f, default_flow_style=False) 640 | 641 | 642 | # Get a List of repos to mirror 643 | def GetRepoListToMirror(images): 644 | reg = r"^(.*\/){2}" 645 | if args.icsp_scope == "registry": 646 | reg = r"^(.*?\/){1}" 647 | sourceList = [] 648 | mirrorList = {} 649 | for image in images: 650 | source = re.match(reg, image) 651 | if source is None: 652 | sourceRepo = image[:image.find("@")] 653 | else: 654 | sourceRepo = source.group()[:-1] 655 | sourceList.append( 656 | sourceRepo) if sourceRepo not in sourceList else sourceList 657 | 658 | for source in sourceList: 659 | mirrorList[source] = GenerateDestUrl(source) 660 | 661 | return mirrorList 662 | 663 | 664 | def CreateMappingFile(images): 665 | repoList = GetSourceToMirrorMapping(images) 666 | with open(mapping_file, "w") as f: 667 | for key in repoList: 668 | f.write(key + "=" + repoList[key]) 669 | f.write('\n') 670 | 671 | 672 | def CreateManifestFile(images): 673 | 
with open(image_manifest_file, "w") as f: 674 | for image in images: 675 | f.write(image) 676 | f.write("\n") 677 | 678 | 679 | def isBadImage(image): 680 | with open(operator_known_bad_image_list_file, 'r') as f: 681 | for bad_image in (l.rstrip('\n') for l in f): 682 | if bad_image == image: 683 | return True 684 | return False 685 | 686 | 687 | def GenerateDestUrl(image_url): 688 | print(f"{image_url}") 689 | res = image_url.find("/") 690 | if res != -1: 691 | GenDestUrl = args.registry_olm + image_url[res:] 692 | else: 693 | GenDestUrl = args.registry_olm 694 | 695 | if add_tags_to_images_mirrored_by_digest.lower() == "true": 696 | GenDestUrl = re.sub(r'@sha256:', ':', GenDestUrl) 697 | 698 | return GenDestUrl 699 | 700 | 701 | def CopyImageToDestinationRegistry( 702 | sourceImageUrl, destinationImageUrl, authfile=None): 703 | if args.authfile: 704 | cmd_args = "skopeo copy --dest-tls-verify=false --authfile {} -a docker://{} docker://{}".format( 705 | authfile, sourceImageUrl, destinationImageUrl) 706 | else: 707 | cmd_args = "skopeo copy --dest-tls-verify=false -a docker://{} docker://{}".format( 708 | sourceImageUrl, destinationImageUrl) 709 | subprocess.run(cmd_args, shell=True, check=True) 710 | 711 | 712 | # Get a Mapping of source to mirror images 713 | def GetSourceToMirrorMapping(images): 714 | reg = r"^(.*@){1}" 715 | mapping = {} 716 | for image in images: 717 | source = re.match(reg, image) 718 | if source is None: 719 | sourceRepo = image 720 | else: 721 | sourceRepo = source.group()[:-1] 722 | 723 | mapping[image] = GenerateDestUrl(sourceRepo) 724 | 725 | return mapping 726 | 727 | 728 | def CreateCatalogSourceYaml(image_url, image_name, display_name): 729 | with open(catalog_source_template_file, 'r') as f: 730 | templateFile = Template(f.read()) 731 | content = templateFile.render(CatalogSourceImage=image_url, CatalogSourceName=image_name, CatalogSourceDisplayName=display_name) 732 | with open(catalog_source_output_file, "w") as f: 733 | f.write(content) 734 | 735 | 736 | def GetListOfCommaDelimitedOperatorList(operators): 737 | operator_list = "" 738 | for item in operators: 739 | operator_list += item.name + "," 740 | 741 | operator_list = operator_list[:-1] 742 | return operator_list 743 | 744 | 745 | def RecreatePath(item_path, delete_if_exists = True): 746 | path = Path(item_path) 747 | if path.exists() and delete_if_exists: 748 | cmd_args = "sudo rm -rf {}".format(item_path) 749 | print("Running: " + str(cmd_args)) 750 | subprocess.run(cmd_args, shell=True, check=True) 751 | 752 | if not path.exists(): 753 | os.makedirs(item_path, exist_ok=True) 754 | 755 | def PrintBreakLine(): 756 | print("----------------------------------------------") 757 | 758 | 759 | class OperatorSpec: 760 | def __init__(self, name, start_version): 761 | self.name = name 762 | self.start_version = start_version 763 | self.upgrade_path = "" 764 | self.operator_bundles = [] 765 | self.operator_channels = [] 766 | self.defaultChannel = "" 767 | self.icon = {} 768 | 769 | 770 | class OperatorChannel: 771 | def __init__(self, name): 772 | self.name = name 773 | self.package = "" 774 | self.entries = [] 775 | 776 | 777 | class OperatorBundle: 778 | def __init__(self, name, version): 779 | self.name = name 780 | self.package = "" 781 | self.image = "" 782 | self.version = version 783 | self.properties = [] 784 | self.relatedImages = [] 785 | 786 | 787 | if __name__ == "__main__": 788 | main() 789 | -------------------------------------------------------------------------------- 
/offline_operators_list: -------------------------------------------------------------------------------- 1 | kubevirt-hyperconverged 2 | local-storage-operator 3 | cluster-logging 4 | codeready-workspaces 5 | quay-bridge-operator 6 | quay-operator 7 | openshift-gitops-operator 8 | rhacs-operator 9 | devspaces 10 | odf-operator 11 | advanced-cluster-management 12 | sandboxed-containers-operator 13 | servicemeshoperator 14 | openshift-pipelines-operator-rh 15 | serverless-operator 16 | -------------------------------------------------------------------------------- /offline_operators_list.yaml: -------------------------------------------------------------------------------- 1 | operators: 2 | - name: kubevirt-hyperconverged 3 | # start_version: 2.6.5 4 | - name: local-storage-operator 5 | - name: ocs-operator 6 | - name: cluster-logging 7 | - name: elasticsearch-operator 8 | - name: jaeger-product 9 | # start_version: 1.17.8 10 | - name: kiali-ossm 11 | - name: servicemeshoperator 12 | - name: rhacs-operator 13 | - name: quay-operator 14 | - name: codeready-workspaces 15 | # start_version: 2.7.0 16 | - name: ocs-operator 17 | - name: serverless-operator 18 | - name: ansible-automation-platform-operator 19 | # - name: sysdig-certified 20 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | pyyaml 2 | jinja2 3 | packaging 4 | natsort 5 | -------------------------------------------------------------------------------- /upgradepath.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | import sys 3 | import re 4 | import sqlite3 5 | from packaging import version 6 | 7 | 8 | def GetVersion(name): 9 | index = name.find(".") 10 | version = name[index+1:] 11 | 12 | while True: 13 | if version[0].isalpha(): 14 | version = version[1:] 15 | else: 16 | break 17 | return version 18 | 19 | 20 | def GetLatestVersion(operator_name, db_path): 21 | con = sqlite3.connect(db_path) 22 | cur = con.cursor() 23 | # Get default channel 24 | cmd = "select default_channel from package where name like '%" + operator_name + "%';" 25 | 26 | result = cur.execute(cmd).fetchall() 27 | if len(result) == 1: 28 | channel = result[0][0] 29 | 30 | # get version from default cahnnel 31 | cmd = "select head_operatorbundle_name from channel where package_name like '" + operator_name + "' and name like '" + channel + "'" 32 | result = cur.execute(cmd).fetchall() 33 | 34 | if len(result) == 1: 35 | version = GetVersion(result[0][0]) 36 | return version 37 | 38 | 39 | def GetVersionMatrix(version, matrix): 40 | for item in matrix: 41 | if GetVersion(item) == version: 42 | return matrix[item][1] 43 | 44 | 45 | def SanitizeVersion(version): 46 | index = 0 47 | for i in range(len(version)): 48 | if version[i].isnumeric() or version[i] == '.': 49 | continue 50 | else: 51 | index = i 52 | break 53 | 54 | if index == 0: 55 | return version 56 | else: 57 | print(version[:index]) 58 | return version[:index] 59 | 60 | 61 | def VersionEval(version1, version2, symbol): 62 | v1 = version.parse(SanitizeVersion(version1)) 63 | v2 = version.parse(SanitizeVersion(version2)) 64 | if symbol == "<": 65 | return v1 < v2 66 | elif symbol == "<=": 67 | return v1 <= v2 68 | elif symbol == ">": 69 | return v1 > v2 70 | elif symbol == ">=": 71 | return v1 >= v2 72 | 73 | 74 | def GetUpgradeMatrix(operator, start_version, latest_version, db_path): 75 | con = 
sqlite3.connect(db_path) 76 | cur = con.cursor() 77 | 78 | # Get Operator bundle name 79 | cmd = "select default_channel from package where name like '%" + operator + "%';" 80 | 81 | result = cur.execute(cmd).fetchall() 82 | if len(result) == 1: 83 | channel = result[0][0] 84 | 85 | cmd = "select head_operatorbundle_name from channel where package_name like '" + operator + "' and name like '" + channel + "'" 86 | result = cur.execute(cmd).fetchall() 87 | 88 | if len(result) == 1: 89 | bundle_name = result[0][0] 90 | index = bundle_name.find(".") 91 | bundle_name = bundle_name[:index] 92 | 93 | 94 | cmd = "select name,skiprange,version,replaces from operatorbundle where (name like '%" + \ 95 | bundle_name + "%' or bundlepath like '%" + bundle_name + "%');" 96 | result = cur.execute(cmd) 97 | myDict = {} 98 | 99 | bundle = [] 100 | for row in result: 101 | bundle_entry = [] 102 | for column in row: 103 | bundle_entry.append(column) 104 | 105 | if VersionEval(bundle_entry[2], latest_version, "<="): 106 | bundle.append(bundle_entry) 107 | 108 | 109 | for entry in bundle: 110 | name = entry[0] 111 | myDict[name] = [entry[2], []] 112 | 113 | for entry in bundle: 114 | replaces = entry[3] 115 | if replaces and replaces in myDict and entry[2] not in myDict[replaces][1]: 116 | myDict[replaces][1].append(entry[2]) 117 | 118 | # Check to see if start version has a bendle in the channel 119 | bundle_exists = False 120 | for entry in bundle: 121 | if entry[2] == start_version: 122 | bundle_exists = True 123 | break 124 | 125 | if not bundle_exists: 126 | myDict["unknown." + start_version] = [start_version, []] 127 | 128 | 129 | 130 | for entry in bundle: 131 | skiprange = entry[1] 132 | 133 | if skiprange: 134 | range = skiprange.split(' ') 135 | min = range[0] 136 | min_index = re.search(r"\d", min).start() 137 | min_oper = min[:min_index] 138 | min_version = min[min_index:] 139 | 140 | max = range[1] 141 | max_index = re.search(r"\d", max).start() 142 | max_oper = max[:max_index] 143 | max_version = max[max_index:] 144 | 145 | for k, v in myDict.items(): 146 | if VersionEval(v[0], min_version, min_oper): 147 | if VersionEval(v[0], max_version, max_oper): 148 | if entry[2] not in v[1]: 149 | v[1].append(entry[2]) 150 | 151 | return myDict 152 | 153 | 154 | def GetHighestVersionFromMatrix(version_matrix): 155 | next_version = version_matrix[0] 156 | for app_version in version_matrix: 157 | if version.parse(next_version) < version.parse(app_version): 158 | next_version = app_version 159 | return next_version 160 | 161 | 162 | def GetUpgradePaths(operator, start_version, latest_version, matrix, upgrade_paths, continue_upgrade_path): 163 | upgrade_path = continue_upgrade_path 164 | upgrade_path_complete = False 165 | current_version = start_version 166 | while upgrade_path_complete == False: 167 | current_version_matrix = GetVersionMatrix(current_version, matrix) 168 | 169 | if current_version_matrix: 170 | for v in range(1, len(current_version_matrix)): 171 | alternate_path_matrix = upgrade_path.copy() 172 | alternate_path_matrix.append(current_version_matrix[v]) 173 | GetUpgradePaths(operator, current_version_matrix[v], latest_version, matrix, upgrade_paths, alternate_path_matrix) 174 | 175 | upgrade_path.append(current_version_matrix[0]) 176 | if current_version_matrix[0] == latest_version: 177 | upgrade_path_complete = True 178 | else: 179 | current_version = current_version_matrix[0] 180 | 181 | else: 182 | print("There is no upgrade path for " + operator + " version " + start_version) 183 | 
sys.exit(1) 184 | 185 | # Probably won't need this but just in case there is a weird edge case 186 | if VersionEval(SanitizeVersion(current_version), latest_version, ">="): 187 | upgrade_path_complete = True 188 | 189 | upgrade_paths.append(upgrade_path) 190 | 191 | 192 | def GetShortestUpgradePath(operator, start_version, db_path): 193 | 194 | latest_version = GetLatestVersion(operator, db_path) 195 | 196 | if latest_version != None: 197 | if start_version: 198 | matrix = GetUpgradeMatrix(operator, start_version, latest_version, db_path) 199 | upgrade_paths = [] 200 | GetUpgradePaths(operator, start_version, latest_version, matrix, upgrade_paths, []) 201 | 202 | shortest_path = upgrade_paths[0] 203 | for path in upgrade_paths: 204 | if len(path) < len(shortest_path): 205 | shortest_path = path 206 | else: 207 | shortest_path = [latest_version] 208 | 209 | else: 210 | shortest_path = [] 211 | 212 | return shortest_path --------------------------------------------------------------------------------