├── LICENSE
├── README.md
└── vercheck.py
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2020 Erico Mendonca
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # scc-tools
2 |
3 | A set of simple tools to interact with SUSE Customer Center (SCC).
4 |
5 | It uses the public APIs documented at https://scc.suse.com/api/package_search/v4/documentation
6 |
7 | [![build result](https://build.opensuse.org/projects/home:emendonca/packages/scc-tools/badge.svg?type=default)](https://build.opensuse.org/package/show/home:emendonca/scc-tools)
8 |
9 | ## vercheck
10 |
11 | This tool searches for the latest versions of packages on SCC.
12 |
13 | It started as a pet project to do a simple package search with the API, but it evolved into a much more complex tool that can analyze supportconfig archives, correlate product versions and repositories, and generate reports on package versions.
14 |
15 | Disclaimer: I'm a SUSE employee.
16 |
17 |
18 | Usage:
19 | ```
20 | # ./vercheck.py [-l|--list-products] -p|--product <product id> -n|--name <package name> [-s|--short] [-v|--verbose] [-1|--show-unknown] [-2|--show-differences] [-3|--show-uptodate] [-4|--show-unsupported] [-5|--show-suseorphans] [-6|--show-suseptf] [-o|--outputdir <dir>] [-d|--supportconfig <dir>] [-a|--arch <arch>] [-f|--force-refresh] [-V|--version]
21 |
22 | ```
23 |
24 | It uses compression and a single urllib3 pool instance to minimize the impact on the public server as much as possible. To speed things up I open multiple threads, but I consume the RPM list gradually so as not to flood the server.
25 | I also stick to resources that do NOT require authentication, inspired by the public package search at https://scc.suse.com/packages . That is why the product list is fetched from the unauthenticated product endpoint (https://scc.suse.com/api/package_search/products); it doesn't really change that often.
26 |
27 | Vercheck has an internal cache. It was added to sidestep rate-limiting issues on the SCC API servers (issue #22).
28 |
29 | This is how it works:
30 | 1) a JSON file (scc_data.json) is created/updated to hold cache entries, either in /var/cache/scc-tools or ~/.cache/scc-tools. The first of these locations that is writable is used, in that order of preference.
31 |
32 | 2) the cache currently holds entries for 5 days. This guarantees that fresh information can be retrieved in a reasonable timeframe if necessary (see the sketch after this list).
33 | Here's an example of an entry being dropped and immediately being queued for a refresh:
34 | ```
35 | cached data for zypper is too old (-6 days), discarding cache entry
36 | removing record from cache: {'id': 21851832, 'name': 'zypper', 'arch': 'x86_64', 'version': '1.14.51', 'release': '3.52.1', 'products': [{'id': 2219, 'name': 'SUSE Linux Enterprise Server LTSS', 'identifier': 'SLES-LTSS/15.1/x86_64', 'type': 'extension', 'free': False, 'edition': '15 SP1', 'architecture': 'x86_64'}], 'timestamp': '2022-03-12T02:15:30.193223', 'repository': 'Basesystem Module 15 SP2 x86_6415 SP2x86_64', 'product_id': 1939}
37 | searching for zypper for product ID 1939 in SCC
38 | ```
39 | 3) the same cache mechanism is implemented for Public Cloud images, using data from SUSE PINT (pint.suse.com). In this case, the information about active/inactive/deprecated/deleted images is kept for 5 days and automatically refreshed on the next run after it expires.
40 |
41 | 4) it also contains an internal table correlating each product to modules (taken from RMT). This is necessary in order to maintain cache consistency: sometimes a suitable updated package resides in a different module repository, and we need to know what the original product ID was in order to return the correct cache entry.
42 |
43 | 5) there is an additional command-line option:
44 | ```
45 | -f|--force-refresh Ignore cached data and retrieve latest data from SCC and public cloud info
46 | ```
47 | This ignores the cache and goes straight to SCC for the latest data (the results are still added to the cache at the end for later use).
48 |
49 | 6) it's also heavily multi-threaded, and as such needs considerably more complex data-locking logic.
50 |
51 | *IMPORTANT*: As we discovered through testing, running multiple parallel copies may "lose" some of the recently refreshed cache entries. This limitation is by design: each session reads the cached entries into memory, changes everything in-memory, and only writes it all back at the end. So whoever runs last "wins". This avoids thousands of small disk writes, and me possibly being called a "disk killer" :-)
52 | In the future I intend to implement a more robust cache backend (sqlite?) and address this. I might also merge it back into a single version of the script.
53 |
54 |
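For illustration, the expiry check boils down to something like this minimal sketch (a simplification, not the actual implementation; in vercheck the same logic lives in the CacheManager class and runs under a lock):

```
import json
from datetime import datetime

MAX_AGE_DAYS = 5  # cache entries older than this are discarded
CACHE_FILE = 'scc_data.json'  # in /var/cache/scc-tools or ~/.cache/scc-tools

def load_fresh_entries(path=CACHE_FILE):
    # the whole cache is read into memory and written back in one go
    with open(path) as f:
        entries = json.load(f)
    fresh = []
    for item in entries:
        age = datetime.now() - datetime.strptime(item['timestamp'],
                                                 "%Y-%m-%dT%H:%M:%S.%f")
        if age.days < MAX_AGE_DAYS:
            fresh.append(item)  # still valid, keep it
        else:
            # too old: drop it, so it gets queued for a refresh
            print('cached data for ' + item['name'] + ' is too old, discarding cache entry')
    return fresh
```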
55 | ### Examples
56 |
57 | * Listing the supported products (-l or --list-products):
58 |
59 | ```
60 | $ ./vercheck.py -l
61 | Known products list
62 | ID      Name
63 | -----------------------------------------------------
64 | 1117    SUSE Linux Enterprise Server 12 x86_64
65 | 1322    SUSE Linux Enterprise Server 12 SP1 x86_64
66 | 1357    SUSE Linux Enterprise Server 12 SP2 x86_64
67 | 1421    SUSE Linux Enterprise Server 12 SP3 x86_64
68 | 1625    SUSE Linux Enterprise Server 12 SP4 x86_64
69 | 1878    SUSE Linux Enterprise Server 12 SP5 x86_64
70 | 1319    SUSE Linux Enterprise Server for SAP 12 x86_64
71 | 1346    SUSE Linux Enterprise Server for SAP 12 SP1 x86_64
72 | 1414    SUSE Linux Enterprise Server for SAP 12 SP2 x86_64
73 | 1426    SUSE Linux Enterprise Server for SAP 12 SP3 x86_64
74 | 1755    SUSE Linux Enterprise Server for SAP 12 SP4 x86_64
75 | 1880    SUSE Linux Enterprise Server for SAP 12 SP5 x86_64
76 | 1575    SUSE Linux Enterprise Server 15 x86_64
77 | 1763    SUSE Linux Enterprise Server 15 SP1 x86_64
78 | 1939    SUSE Linux Enterprise Server 15 SP2 x86_64
79 | ...
80 |
81 | As of Oct 2023, 103 products are supported.
82 | ```
83 |
84 | Note: SLE 11 and derivatives are not supported for queries by the API, even though valid product numbers exist for them.
85 |
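The IDs above come straight from the public product endpoint, so the same list can be fetched with a few lines of Python; here is a minimal sketch using the same unauthenticated endpoint and headers that vercheck uses:

```
import json
import urllib3

http = urllib3.PoolManager(maxsize=5)
r = http.request('GET', 'https://scc.suse.com/api/package_search/products',
                 headers={'Accept-Encoding': 'gzip, deflate', 'Connection': 'close'})
if r.status == 200:
    # each entry carries id, name, identifier, type, free, architecture, version
    for p in json.loads(r.data.decode('utf-8'))['data']:
        print(p['id'], p['name'], p['version'], p['architecture'])
```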
86 | * Checking for the latest version of the package "glibc" for product 2465 (SLES 15 SP5 x86_64), verbose mode:
87 |
88 | ```
89 | $ ./vercheck.py -p 2465 -n glibc -v
90 | Using product ID 2465 (SUSE Linux Enterprise Server 15 SP5 x86_64)
91 | searching for package "glibc" in product id "2465" (SUSE Linux Enterprise Server 15 SP5 x86_64)
92 | found glibc for product ID 2465 (cached)
93 | latest version for glibc on product ID 2465(SUSE Linux Enterprise Server 15 SP5 x86_64) is 2.31-150300.46.1, found on Basesystem Module (sle-module-basesystem/15.5/x86_64)
94 | version 2.31-150300.46.1 is available on repository [Basesystem Module 15 SP5 x86_64 15 SP5 x86_64]
95 |
96 | ```
97 |
98 | Note that it correctly handles second- and third-order release numbers, and sorts them accordingly to get the latest version.
99 |
100 |
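Internally, two version strings are compared with LooseVersion, and anything LooseVersion cannot make sense of is handed to `zypper vcmp` (the "oracle"). A simplified sketch of that comparison logic:

```
import re
import subprocess
from distutils.version import LooseVersion

def is_newer(version_one, version_two):
    try:
        # try a plain LooseVersion comparison first
        return LooseVersion(version_one) >= LooseVersion(version_two)
    except TypeError:
        # versions that mix letters and numbers can confuse LooseVersion,
        # so ask zypper to compare them instead
        proc = subprocess.Popen(['/usr/bin/zypper', 'vcmp', version_one, version_two],
                                env={'LANG': 'C'}, stdout=subprocess.PIPE)
        output, _ = proc.communicate()
        return re.match(r'.*is newer than.*', output.decode('utf-8')) is not None
```

This is also why zypper is a hard requirement (see Requirements below).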
101 | * Checking for the latest version of the package "glibc" for product 2465 (SLES 15 SP5 x86_64), short answer:
102 | ```
103 | $ ./vercheck.py -p 2465 -n glibc -s
104 | searching for package "glibc" in product id "2465" (SUSE Linux Enterprise Server 15 SP5 x86_64)
105 | searching for glibc for product ID 2465 in SCC
106 | 2.31-150300.63.1
107 |
108 | ```
109 |
110 | * Analyzing a supportconfig:
111 | ```
112 | $ ./vercheck.py -d tests/SLE15SP5/scc_rmt_231027_1743
113 | loaded 2628 items from cache (/home/erico/.cache/scc-tools/scc_data.json)
114 | loaded 5 items from cache (/home/erico/.cache/scc-tools/public_cloud_amazon.json)
115 | * cached data OK (-4 days old)
116 | loaded 5 items from cache (/home/erico/.cache/scc-tools/public_cloud_google.json)
117 | * cached data OK (-4 days old)
118 | loaded 5 items from cache (/home/erico/.cache/scc-tools/public_cloud_microsoft.json)
119 | * cached data OK (-4 days old)
120 | --- AMAZON data as of 2023-10-27T17:24:07.180514
121 | * 1468 active images
122 | * 942 inactive images
123 | * 9060 deprecated images
124 | * 14763 deleted images
125 |
126 | --- MICROSOFT data as of 2023-10-27T17:24:07.180514
127 | * 187 active images
128 | * 147 inactive images
129 | * 548 deprecated images
130 | * 4543 deleted images
131 |
132 | --- GOOGLE data as of 2023-10-27T17:24:07.180514
133 | * 33 active images
134 | * 26 inactive images
135 | * 106 deprecated images
136 | * 708 deleted images
137 |
138 | --> Public cloud provider for tests/SLE15SP5/scc_rmt_231027_1743 is [none]
139 | --> not a public cloud image, continuing normal analysis
140 | Analyzing supportconfig directory: tests/SLE15SP5/scc_rmt_231027_1743
141 | product name = SUSE Linux Enterprise Server 15 SP5 x86_64 (id 2465, x86_64)
142 | found 986 total packages to check
143 | found Mesa-dri for product ID 2465 (cached)
144 | found Mesa for product ID 2465 (cached)
145 | found Mesa-gallium for product ID 2465 (cached)
146 | found Mesa-libEGL1 for product ID 2465 (cached)
147 | found Mesa-libGL1 for product ID 2465 (cached)
148 | found Mesa-libglapi0 for product ID 2465 (cached)
149 | ...
150 | thread search-yast2-ycp-ui-bindings is done
151 | thread search-zisofs-tools is done
152 | thread search-zstd is done
153 | thread search-zypper is done
154 | thread search-zypper-lifecycle-plugin is done
155 | thread search-zypper-log is done
156 | thread search-zypper-needs-restarting is done
157 |
158 | Done.
159 | writing CSV reports to /home/erico/Projetos/scc-tools
160 |
161 |
162 | ```
163 |
164 | This option analyzes a previously extracted supportconfig report. It finds the installed RPMs in the report and runs searches on ALL packages, in order to determine which ones are up to date, which have updates available, and which are not found in the official repositories. Packages from unsupported vendors get their own report. So do SUSE orphan packages (that is, packages that belong to another version of the OS and were left installed) and PTF (Program Temporary Fix) packages built by SUSE.
165 |
166 | It generates these six CSV reports:
167 | * vercheck-uptodate-[directory name].csv,
168 | * vercheck-different-[directory name].csv,
169 | * vercheck-notfound-[directory name].csv,
170 | * vercheck-unsupported-[directory name].csv,
171 | * vercheck-suseorphans-[directory name].csv, and
172 | * vercheck-suseptf-[directory name].csv
173 |
174 | respectively.
175 |
176 | An output directory can be specified by adding the "-o" (or --outputdir) parameter before the supportconfig directory:
177 | ```
178 | ./vercheck.py -o /tmp/reports -d ~/Documents/nts_dxl1lnxsl002_200616_1148
179 | ```
180 |
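Since the reports are plain comma-separated text, they are easy to post-process. A small sketch that reads the "different" report from the example above (the file name is hypothetical; it derives from the supportconfig directory name):

```
import csv

# report written by the -o /tmp/reports example above
with open('/tmp/reports/vercheck-different-nts_dxl1lnxsl002_200616_1148.csv') as f:
    # columns: package name, installed version, latest version, repository
    for name, current, latest, repository in csv.reader(f):
        print(f'{name}: {current} -> {latest} (from {repository})')
```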
181 | ## Requirements
182 |
183 | Dependencies: this utility depends on urllib3 and pyaml. It also uses zypper as a last-resort mechanism to verify versions. It will therefore **not** run on Debian-based systems, for example.
184 |
185 | For Tumbleweed (as of 03/25), this is the working pyaml RPM:
186 |
187 | zypper in python313-yamlcore
188 |
189 |
190 | ## Final considerations
191 |
192 | This utility only uses public resources maintained by SUSE LLC; no logins are necessary.
193 |
194 | I make no guarantees on availability or speed. I try to make sure that the information mined by vercheck is as accurate as possible, but errors can occur.
195 |
196 | If you find a bug or inconsistency, please open an issue! https://github.com/doccaz/scc-tools/issues
197 |
198 | // **end** //
199 |
--------------------------------------------------------------------------------
/vercheck.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python3
2 | import re
3 | import sys
4 | import os
5 | import time
6 | import subprocess
7 | import signal
8 | import getopt
9 | import pdb
10 | import weakref
11 | import warnings
12 | import json
13 | import urllib
14 | from datetime import datetime
15 | from threading import Thread, Lock, active_count
16 | from contextlib import contextmanager
17 | from distutils.version import LooseVersion
18 |
19 | # external libraries
20 | try:
21 |     import urllib3
22 |     import yaml
23 | except ImportError as e:
24 |     print(f"Please verify that you have the required Python library installed: {e}")
25 |     exit(1)
26 |
27 | # main class that deals with command lines, reports and everything else
28 | class SCCVersion():
29 |
30 |     version = '2.5'
31 |     build = '20250507'
32 |
33 |     # static product list (taken from RMT and other sources)
34 |     # rmt-cli products list --name "SUSE Linux Enterprise Server" --all
35 |     # rmt-cli products list --name "SUSE Linux Enterprise Desktop" --all
36 |     # rmt-cli products list --name "openSUSE" --all
37 |     #
38 |     # (replaced by this alternative, no authentication needed):
39 |     # https://scc.suse.com/api/package_search/products
40 |     #
41 |     product_list = {}
42 |
43 |     # all known module IDs, and corresponding product IDs (from RMT)
44 |     # modules: rmt-cli products list --all --csv | grep '15 SPx' | grep x86_64 | egrep -v 'Debuginfo|Sources' | egrep -i 'Module|PackageHub'
45 |     # related products: rmt-cli products list --all --csv | grep '15 SPx' | grep x86_64 | egrep -v 'Debuginfo|Sources' | egrep -iv 'Module|PackageHub'
46 |     # (replaced by a more clever logic)
47 |
48 |     # to get the list of product IDs:
49 |     # rmt-cli products list --name "SUSE Manager Server" --all
50 |
51 |     # SUSE Manager from 4.0 to 4.3 is a special case, as it had its own product entry.
52 |     # from 5.x onwards it's just a regular extension for SLE Micro.
53 | suma_product_list = { 54 | 1899: {'name': 'SUSE Manager Server 4.0', 'identifier': '4.0'}, 55 | 2012: {'name': 'SUSE Manager Server 4.1', 'identifier': '4.1'}, 56 | 2222: {'name': 'SUSE Manager Server 4.2', 'identifier': '4.2'}, 57 | 2378: {'name': 'SUSE Manager Server 4.3', 'identifier': '4.3'}, 58 | } 59 | 60 | # result lists 61 | uptodate = [] 62 | notfound = [] 63 | different = [] 64 | unsupported = [] 65 | suseorphans = [] 66 | suseptf = [] 67 | 68 | # selected product 69 | selected_product = {} 70 | 71 | # report flags 72 | show_unknown = False 73 | show_diff = False 74 | show_uptodate = False 75 | show_unsupported = False 76 | show_suseorphans = False 77 | show_suseptf = False 78 | 79 | # verbose messages 80 | verbose = False 81 | 82 | # base name for the reports 83 | sc_name = '' 84 | 85 | # maximum number of running threads 86 | max_threads = 35 87 | 88 | # time to wait before starting each chunk of threads 89 | wait_time = 5 90 | 91 | # override architecture 92 | arch = None 93 | 94 | # short responses (just package versions) 95 | short_response = False 96 | 97 | # force data refresh from SCC (ignore the cache) 98 | force_refresh = False 99 | 100 | # default output directory for the reports 101 | outputdir = os.getcwd() 102 | 103 | # cache manager singleton 104 | cm = None 105 | 106 | # thread list 107 | threads = [] 108 | 109 | def __init__(self): 110 | # ignore DeprecationWarnings for now to avoid polluting the output 111 | warnings.filterwarnings("ignore", category=DeprecationWarning) 112 | self.cm = CacheManager() 113 | 114 | def set_verbose(self, verbose): 115 | self.verbose = verbose 116 | 117 | def set_force_refresh(self, force_refresh): 118 | self.force_refresh = force_refresh 119 | 120 | def get_verbose(self, verbose): 121 | return self.verbose 122 | 123 | def cleanup(self, signalNumber, frame): 124 | print('\nokay, okay, I\'m leaving!') 125 | sys.exit(1) 126 | return 127 | 128 | def color(text, color, bold=True): 129 | esc = '\x1b[' 130 | ret = "" 131 | if bold: 132 | ret += esc + '1m' 133 | reset = esc + '0m' 134 | if color == 'red': 135 | ret += esc + '31m' + text + reset 136 | elif color == 'green': 137 | ret += esc + '32m' + text + reset 138 | elif color == 'yellow': 139 | ret += esc + '33m' + text + reset 140 | elif color == 'blue': 141 | ret += esc + '34m' + text + reset 142 | elif color == 'magenta': 143 | ret += esc + '35m' + text + reset 144 | elif color == 'cyan': 145 | ret += esc + '36m' + text + reset 146 | else: 147 | return text 148 | return ret 149 | 150 | def fetch_product_list(): 151 | print(f'-- Downloading product list from SCC...') 152 | 153 | # single instance for urllib3 pool 154 | http = urllib3.PoolManager(maxsize=5) 155 | 156 | # maximum retries for each thread 157 | max_tries = 3 158 | tries = 0 159 | 160 | valid_response = False 161 | connection_failed = False 162 | 163 | # server replies which are temporary errors (and can be retried) 164 | retry_states = [429, 502, 504] 165 | 166 | # server replies which are permanent errors (and cannot be retried) 167 | error_states = [400, 403, 404, 422, 500, -1] 168 | 169 | base_url = "https://scc.suse.com/api/package_search/products" 170 | 171 | while not valid_response and tries < max_tries: 172 | try: 173 | r = http.request('GET', base_url, headers={ 174 | 'Accept-Encoding': 'gzip, deflate', 'Connection': 'close'}) 175 | except Exception as e: 176 | print('Error while connecting: ' + str(e)) 177 | connection_failed = True 178 | 179 | if connection_failed: 180 | print('It appears the server is 
offline, giving up.') 181 | break 182 | elif r.status == 200: 183 | if tries > 0: 184 | print('got a good reply after %d tries' % (tries)) 185 | return_data = json.loads(r.data.decode('utf-8')) 186 | valid_response = True 187 | elif r.status in error_states: 188 | if r.data: 189 | json_data = json.loads(r.data.decode('utf-8')) 190 | print( 191 | 'cannot be processed due to error: [' + json_data['error'] + ']') 192 | print('got a fatal error (%d). Results will be incomplete!\nPlease contact the service administrators or try again later.' % (r.status)) 193 | break 194 | elif r.status in retry_states: 195 | tries = tries + 1 196 | print( 197 | 'got non-fatal reply (%d) from server, trying again in 5 seconds (try: %d/%d)' % (r.status, tries, max_tries)) 198 | time.sleep(5) 199 | continue 200 | else: 201 | print('got unknown error %d from the server!' % r.status) 202 | 203 | if valid_response: 204 | print('* ' + str(len(return_data['data'])) + ' products found.') 205 | # reprocess the data to fit our logic 206 | plist={} 207 | for p in return_data['data']: 208 | #'name': 'SUSE Manager Server', 'identifier': 'SUSE-Manager-Server/4.0/x86_64', 'type': 'base', 'free': False, 'architecture': 'x86_64', 'version': '4.0'} 209 | plist[p['id']] = {'id':p['id'], 'name':p['name'],'identifier':p['identifier'], 'type':p['type'], 'free':p['free'], 'architecture':p['architecture'], 'version':p['version']} 210 | return plist 211 | return {} 212 | 213 | 214 | def find_suma(self, directory_name): 215 | regex_suma = r"SUSE Manager release (.*) .*" 216 | try: 217 | f = open(directory_name + '/basic-environment.txt', 'r') 218 | text = f.read() 219 | f.close() 220 | matches_suma = re.search(regex_suma, text) 221 | for p in self.suma_product_list: 222 | if matches_suma is not None and matches_suma.group(1) == self.suma_product_list[p]['identifier']: 223 | return p 224 | except Exception as e: 225 | print('error: ' + str(e)) 226 | return -1 227 | 228 | def find_cpe(self, directory_name, architecture): 229 | regex_os = r".*\"cpe\:/o\:suse\:(sles|sled|sles_sap|sle-micro)\:(.*)\:?(.*)\"" 230 | 231 | try: 232 | with open(directory_name + '/basic-environment.txt', 'r') as f: 233 | text = f.read() 234 | f.close() 235 | 236 | matches_os = re.search(regex_os, text) 237 | if matches_os.groups() is not None: 238 | # print('found CPE: ' + str(matches_os.groups())) 239 | # print('found architecture: ' + architecture) 240 | probable_id = matches_os.group(1).upper() + '/' + matches_os.group(2).replace(':sp','.').replace(':', '.') + '/' + architecture.upper() 241 | print('probable identifier: ' + probable_id) 242 | for p in self.product_list.items(): 243 | if p[1]['identifier'].upper() == probable_id: 244 | print('found record: ' + str(p[1])) 245 | return p[1] 246 | 247 | except Exception as e: 248 | print('error: ' + str(e)) 249 | return None 250 | 251 | def find_arch(self, directory_name): 252 | regex = r"^Architecture:\s+(\w+)" 253 | 254 | try: 255 | f = open(directory_name + '/hardware.txt', 'r') 256 | text = f.read() 257 | f.close() 258 | matches = re.search(regex, text, re.MULTILINE) 259 | if matches != None: 260 | return matches.group(1) 261 | except Exception as e: 262 | print('error opening hardware.txt, trying basic-environment.txt...') 263 | try: 264 | f = open(directory_name + '/basic-environment.txt', 'r') 265 | text = f.read() 266 | f.close() 267 | regex = r"^Linux.* (\w+) GNU\/Linux$" 268 | matches = re.search(regex, text, re.MULTILINE) 269 | if matches != None: 270 | return matches.group(1) 271 | except Exception as 
e: 272 | print( 273 | 'could not determine architecture for the supportconfig directory. Please supply one with -a.') 274 | return 'unknown' 275 | return 'unknown' 276 | 277 | def read_rpmlist(self, directory_name): 278 | rpmlist = [] 279 | regex_start = r"(^NAME.*VERSION)\n" 280 | regex_package = r"(\S*)\s{2,}(\S{2,}.*)\s{2,}(.*)" 281 | regex_end = r"(^$)\n" 282 | try: 283 | f = open(directory_name + '/rpm.txt', 'r') 284 | text = f.readlines() 285 | f.close() 286 | 287 | found_start = False 288 | for line in text: 289 | matches = re.search(regex_start, line) 290 | if matches != None: 291 | found_start = True 292 | continue 293 | if found_start: 294 | matches = re.search(regex_end, line) 295 | if matches: 296 | break 297 | 298 | matches = re.search(regex_package, line) 299 | if matches: 300 | rpmname = matches.group(1) 301 | rpmdistro = matches.group(2).strip(' \t\n') 302 | rpmversion = matches.group(3) 303 | if rpmname.startswith('gpg-pubkey'): 304 | continue 305 | if rpmname != '' and rpmdistro != '' and rpmversion != '': 306 | rpmlist.append([rpmname, rpmdistro, rpmversion]) 307 | else: 308 | continue 309 | except Exception as e: 310 | print('error: ' + str(e)) 311 | 312 | return rpmlist 313 | 314 | def list_chunk(self, data, size): 315 | return (data[pos:pos + size] for pos in range(0, len(data), size)) 316 | 317 | def list_products(self): 318 | print('Known products list') 319 | print('ID' + '\t' + 'Name' + '\t\t\t\t' + 'Architecture') 320 | print('----------------------------------------------------------------') 321 | for p in self.product_list.items(): 322 | print(str(p[1]['id']) + '\t' + str(p[1]['name'])+ ' ' + str(p[1]['version']) + ' ' + str(p[1]['architecture'])) 323 | 324 | print('total: ' + str(len(self.product_list)) + ' products.') 325 | return 326 | 327 | def usage(self): 328 | print('Usage: ' + sys.argv[0] + ' [-l|--list-products] -p|--product product id -n|--name [-s|--short] [-v|--verbose] [-1|--show-unknown] [-2|--show-differences] [-3|--show-uptodate] [-4|--show-unsupported] [-5|--show-suseorphans] [-6|--show-suseptf] [-o|--outputdir] [-d|--supportconfig] [-a|--arch ] [-f|--force-refresh] [-V|--version]') 329 | return 330 | 331 | def show_version(self): 332 | print('SCC VerCheck version ' + SCCVersion.color(self.version + '-' + 333 | self.build, 'green') + ' by Erico Mendonca \n') 334 | return 335 | 336 | def show_help(self): 337 | self.usage() 338 | print('\n') 339 | print('-l|--list-products\t\tLists all supported products. Use this to get a valid product ID for further queries.') 340 | print('-p|--product \tSpecifies a valid product ID for queries. Mandatory for searches.') 341 | print('-n|--name \tSpecifies a package name to search. Exact matches only for now. Mandatory for searches.') 342 | print('-s|--short\t\t\tOnly outputs the latest version, useful for scripts') 343 | print('-v|--verbose\t\t\tOutputs extra information about the search and results') 344 | print('-1|--show-unknown\t\tshows unknown packages as they are found.') 345 | print('-2|--show-differences\t\tshows packages that have updates available as they are found.') 346 | print('-3|--show-uptodate\t\tshows packages that are on par with the updated versions as they are found.') 347 | print('-4|--show-unsupported\t\tshows packages that have a vendor that is different from the system it was collected from.') 348 | print('-5|--show-suseorphans\t\tshows packages that are from SUSE, but are now orphans (e.g. 
from different OS/product versions).') 349 | print('-6|--show-suseptf\t\tshows SUSE-made PTF (Program Temporary Fix) packages.') 350 | print('-o|--outputdir\t\t\tspecify an output directory for the reports. Default: current directory.') 351 | print('-d|--supportconfig\t\tAnalyzes a supportconfig directory and generates CSV reports for all packages described by types 1-6.') 352 | print('-a|--arch \t\t\tSupply an architecture for the supportconfig analysis.') 353 | print('-f|--force-refresh\t\tIgnore cached data and retrieve latest data from SCC and public cloud info') 354 | print('-V|--version\t\t\tShow program version') 355 | print('\n') 356 | return 357 | 358 | def test(self): 359 | self.threads = [] 360 | package_name = 'glibc' 361 | instance_nr = 0 362 | 363 | for k, v in self.product_list.items(): 364 | print('searching for package \"glibc\" in product id \"' + 365 | str(k) + '\" (' + v['name'] + ' ' + v['version'] + ')') 366 | self.threads.insert(instance_nr, PackageSearchEngine( 367 | instance_nr, k, package_name, v['name'], '0', self.force_refresh)) 368 | self.threads[instance_nr].start() 369 | instance_nr = instance_nr + 1 370 | 371 | # fetch results for all threads 372 | while len(self.threads) > 0: 373 | for thread_number, t in enumerate(self.threads): 374 | # if t.is_alive(): 375 | t.join(timeout=5) 376 | if t.is_alive(): 377 | print('thread ' + t.name + ' is not ready yet, skipping') 378 | self.threads.append(t) 379 | continue 380 | refined_data = t.get_results() 381 | 382 | # for thread_number in range(instance_nr): 383 | # threads[thread_number].join() 384 | # refined_data = threads[thread_number].get_results() 385 | try: 386 | print('[thread ' + str(thread_number) + ' ] latest version for ' + refined_data['query'] + ' on product ID ' + str( 387 | refined_data['product_id']) + ' is ' + refined_data['results'][0]['version'] + '-' + refined_data['results'][0]['release']) 388 | if self.verbose: 389 | for item in refined_data['results']: 390 | print('[thread ' + str(thread_number) + ' ] version ' + item['version'] + '-' + 391 | item['release'] + ' is available on repository [' + item['repository'] + ']') 392 | except IndexError: 393 | print('could not find any version for package ' + package_name) 394 | time.sleep(.1) 395 | return 396 | 397 | def search_package(self, product_id, package_name): 398 | 399 | self.threads = [] 400 | 401 | if product_id in self.suma_product_list: 402 | plist = self.suma_product_list 403 | else: 404 | plist = self.product_list 405 | 406 | print('searching for package \"' + package_name + '\" in product id \"' + 407 | str(product_id) + '\" (' + plist[product_id]['name'] + ' ' + plist[product_id]['version'] + ')') 408 | self.threads.insert(0, PackageSearchEngine( 409 | 0, product_id, package_name, plist[product_id]['name'], '0', self.force_refresh)) 410 | self.threads[0].start() 411 | 412 | # fetch results for the only thread 413 | self.threads[0].join() 414 | refined_data = self.threads[0].get_results() 415 | sle_results = [p for p in refined_data['results'] 416 | if 'SUSE Linux Enterprise' in p['repository']] 417 | 418 | try: 419 | if self.short_response: 420 | if len(sle_results) > 0: 421 | print(sle_results[0]['version'] + 422 | '-' + sle_results[0]['release']) 423 | else: 424 | print(refined_data['results'][0]['version'] + 425 | '-' + refined_data['results'][0]['release']) 426 | else: 427 | if len(sle_results) > 0: 428 | print('latest version for ' + SCCVersion.color(refined_data['query'], 'yellow') + ' on product ID ' + 
str(refined_data['product_id']) + '(' + SCCVersion.color(plist[product_id]['name'], 'yellow') + ') is ' + SCCVersion.color( 429 | sle_results[0]['version'] + '-' + sle_results[0]['release'], 'green') + ', found on ' + SCCVersion.color(sle_results[0]['products'][0]['name'] + ' (' + sle_results[0]['products'][0]['identifier'] + ')', 'green')) 430 | else: 431 | print('latest version for ' + SCCVersion.color(refined_data['query'], 'yellow') + ' on product ID ' + str(refined_data['product_id']) + '(' + SCCVersion.color(plist[product_id]['name'], 'yellow') + ') is ' + SCCVersion.color( 432 | refined_data['results'][0]['version'] + '-' + refined_data['results'][0]['release'], 'green') + ', found on ' + SCCVersion.color(refined_data['results'][0]['products'][0]['name'] + ' (' + refined_data['results'][0]['products'][0]['identifier'] + ')', 'green')) 433 | if self.verbose: 434 | for item in refined_data['results']: 435 | print('version ' + item['version'] + '-' + item['release'] + 436 | ' is available on repository [' + item['repository'] + ']') 437 | except IndexError: 438 | if self.short_response: 439 | print('none') 440 | else: 441 | print('could not find any version for package ' + package_name) 442 | return 443 | 444 | def ask_the_oracle(self, version_one, version_two): 445 | # we don't know how to parse this, let's ask zypper 446 | if self.verbose: 447 | print('don''t know how to compare: %s and %s, let''s ask the oracle' % ( 448 | version_one, version_two)) 449 | proc = subprocess.Popen(["/usr/bin/zypper", "vcmp", str(version_one), 450 | str(version_two)], env={"LANG": "C"}, stdout=subprocess.PIPE) 451 | output, err = proc.communicate() 452 | regex = r".*is newer than.*" 453 | if output is not None: 454 | matches = re.match(regex, output.decode('utf-8')) 455 | if matches is not None: 456 | if self.verbose: 457 | print('the oracle says: %s is newer' % str(version_one)) 458 | return True 459 | else: 460 | if self.verbose: 461 | print('the oracle says: %s is older' % str(version_one)) 462 | return False 463 | 464 | def is_newer(self, version_one, version_two): 465 | result = False 466 | ver_regex = r"(.*)-(.*)" 467 | try: 468 | matches_v1 = re.match(ver_regex, version_one) 469 | matches_v2 = re.match(ver_regex, version_two) 470 | 471 | v1 = LooseVersion(matches_v1.group(1) + '-' + matches_v1.group(2)) 472 | v2 = LooseVersion(matches_v2.group(1) + '-' + matches_v2.group(2)) 473 | except (IndexError, AttributeError): 474 | return self.ask_the_oracle(version_one, version_two) 475 | 476 | try: 477 | result = v1.__ge__(v2) 478 | except TypeError as e: 479 | return self.ask_the_oracle(version_one, version_two) 480 | 481 | return result 482 | 483 | def check_supportconfig(self, supportconfigdir): 484 | self.sc_name = supportconfigdir.rstrip(os.sep).split(os.sep)[-1] 485 | if self.sc_name == '.': 486 | self.sc_name = os.getcwd().split(os.sep)[-1] 487 | 488 | print('Analyzing supportconfig directory: ' + supportconfigdir) 489 | 490 | if self.arch: 491 | match_arch = self.arch 492 | else: 493 | match_arch = self.find_arch(supportconfigdir) 494 | match_os = self.find_cpe(supportconfigdir, match_arch) 495 | match_suma = self.find_suma(supportconfigdir) 496 | selected_product_id = -1 497 | 498 | 499 | if match_os is not None and match_arch != "unknown": 500 | print('product name = ' + match_os['name'] + ' (id ' + str( 501 | match_os['id']) + ', ' + match_arch + ')') 502 | selected_product_id = match_os['id'] 503 | self.selected_product = match_os 504 | # primary repositories for trusted updates should have 
this regex 505 | base_regex = r"(^SUSE Linux Enterprise.*|^Basesystem.*)" 506 | if match_suma != -1: 507 | print('found ' + self.suma_product_list[match_suma] 508 | ['name'] + ', will use alternate id ' + str(match_suma)) 509 | selected_product_id = match_suma 510 | # primary repositories for trusted updates should have this regex 511 | base_regex = r"^SUSE Manager.*" 512 | 513 | else: 514 | print('error while determining CPE. This is an unknown/unsupported combination!') 515 | exit(1) 516 | return ([], [], [], []) 517 | 518 | rpmlist = self.read_rpmlist(supportconfigdir) 519 | total = len(rpmlist) 520 | print('found ' + str(total) + ' total packages to check') 521 | 522 | count = 0 523 | self.threads = [] 524 | # fetch results for all threads 525 | for chunk in self.list_chunk(rpmlist, self.max_threads): 526 | for p in chunk: 527 | self.threads.insert(count, PackageSearchEngine( 528 | count, selected_product_id, p[0], p[1], p[2], self.force_refresh)) 529 | self.threads[count].start() 530 | count += 1 531 | progress = '[' + str(count) + '/' + str(total) + ']' 532 | sys.stdout.write('processing ' + progress) 533 | blank = ('\b' * (len(progress) + 11)) 534 | sys.stdout.write(blank) 535 | sys.stdout.flush() 536 | time.sleep(self.wait_time) 537 | 538 | print('gathering results... ') 539 | to_process = len([t for t in self.threads if t.processed == False]) 540 | while to_process > 0: 541 | for t in [t for t in self.threads if t.done and t.processed == False]: 542 | if self.verbose: 543 | print('joining thread ' + t.name + 544 | ' (waiting: ' + str(to_process) + ')...') 545 | t.join(timeout=5) 546 | # time.sleep(.001) 547 | if t.is_alive(): 548 | print('thread ' + t.name + ' is not ready yet, skipping') 549 | self.threads.append(t) 550 | continue 551 | # else: 552 | # print('thread ' + t.name + ' is dead') 553 | 554 | refined_data = t.get_results() 555 | # print('refined data = ' + str(refined_data)) 556 | try: 557 | target = match_os 558 | ver_regex = r"cpe:/o:suse:(sle-micro|sles|sled|sles_sap):(\d+)" 559 | if ('SL-Micro' in str(target['identifier']) or 'SLE-Micro' in str(target['identifier'])): 560 | target_version = 'SUSE Linux Enterprise 15' 561 | else: 562 | target_version = 'SUSE Linux Enterprise ' + \ 563 | target['version'].split('.')[0] 564 | 565 | # print("package does not exist, target_version is " + target_version) 566 | # print("supplied distro for package " + str(refined_data['query']) + ' is ' + str(refined_data['supplied_distro'])) 567 | # print("target identifier is " + target_version) 568 | if (('SLES' in str(target['identifier'])) and (str(refined_data['supplied_distro']) not in target_version)): 569 | self.unsupported.append( 570 | [refined_data['query'], refined_data['supplied_distro'], refined_data['supplied_version']]) 571 | 572 | if len(refined_data['results']) == 0: 573 | self.notfound.append( 574 | [refined_data['query'], refined_data['supplied_distro'], refined_data['supplied_version']]) 575 | else: 576 | latest = None 577 | for item in refined_data['results']: 578 | latest = item['version'] + '-' + item['release'] 579 | selected_repo = item['repository'] 580 | if (re.match(base_regex, item['repository']) is not None) and (self.is_newer(item['version'] + '-' + item['release'], refined_data['supplied_version'])): 581 | if self.verbose: 582 | print('---> found version %s-%s for package %s in repository %s which is a base repository, ignoring the rest' % ( 583 | item['version'], item['release'], refined_data['query'], item['repository'])) 584 | break 585 | if latest is 
None:
586 |                             latest = refined_data['results'][0]['version'] + \
587 |                                 '-' + refined_data['results'][0]['release']
588 |                             selected_repo = refined_data['results'][0]['repository']
589 |                             if self.verbose:
590 |                                 print('latest version for ' + refined_data['query'] + ' on product ID ' + str(refined_data['product_id']) + ' is ' + refined_data['results']
591 |                                       [0]['version'] + '-' + refined_data['results'][0]['release'] + ' in repository ' + refined_data['results'][0]['repository'])
592 |                         # print('latest = ' + latest)
593 |                         if self.is_newer(latest, refined_data['supplied_version']) and (latest != refined_data['supplied_version']):
594 |                             self.different.append(
595 |                                 [refined_data['query'], refined_data['supplied_version'], latest, selected_repo])
596 |                         else:
597 |                             self.uptodate.append(
598 |                                 [refined_data['query'], refined_data['supplied_version']])
599 |
600 |                     t.processed = True
601 |                     to_process = len(
602 |                         [t for t in self.threads if t.processed == False])
603 |                     time.sleep(.001)
604 |                 except IndexError:
605 |                     # print('[thread ' + str(thread_number) + '] could not find any version for package ' + refined_data['query'])
606 |                     pass
607 |                 except KeyError as e:
608 |                     print('Cannot find field: ' + str(e))
609 |                     pass
610 |                 print('thread ' + t.name + ' is done')
611 |                 time.sleep(.1)
612 |             sys.stdout.flush()
613 |             time.sleep(.1)
614 |
615 |         # check if there are SUSE orphan packages in notfound
616 |         self.notfound.sort()
617 |         for package, distribution, version in self.notfound.copy():
618 |             if 'SUSE Linux Enterprise' in distribution:
619 |                 if self.verbose:
620 |                     print('**** moving SUSE orphan package to appropriate list: ' +
621 |                           package + '-' + version + ' (' + distribution + ')')
622 |                 self.notfound.remove([package, distribution, version])
623 |                 self.suseorphans.append([package, distribution, version])
624 |
625 |         # check if there are SUSE PTF packages in unsupported
626 |         self.unsupported.sort()
627 |         for package, distribution, version in self.unsupported.copy():
628 |             if 'SUSE Linux Enterprise PTF' in distribution:
629 |                 if self.verbose:
630 |                     print('**** moving SUSE PTF package to appropriate list: ' +
631 |                           package + '-' + version + ' (' + distribution + ')')
632 |                 self.unsupported.remove([package, distribution, version])
633 |                 self.suseptf.append([package, distribution, version])
634 |
635 |         sys.stdout.write('\nDone.\n')
636 |         sys.stdout.flush()
637 |
638 |         return (self.uptodate, self.unsupported, self.notfound, self.different, self.suseorphans, self.suseptf)
639 |
640 |     def write_reports(self):
641 |         if len(self.uptodate) == 0:
642 |             print('no reports will be written (unsupported product?)')
643 |             return
644 |         else:
645 |             print('writing CSV reports to ' + self.outputdir + '\n')
646 |         try:
647 |             os.makedirs(self.outputdir, exist_ok=True)
648 |         except OSError as e:
649 |             print('error creating output directory at %s: %s' %
650 |                   (self.outputdir, str(e)))
651 |
652 |         try:
653 |             with open(os.path.join(self.outputdir, 'vercheck-uptodate-' + self.sc_name + '.csv'), 'w') as f:
654 |                 for p, c in self.uptodate:
655 |                     f.write(p + ',' + c + '\n')
656 |                 f.close()
657 |         except Exception as e:
658 |             print('Error writing file: ' + str(e))
659 |             return
660 |
661 |         try:
662 |             with open(os.path.join(self.outputdir, 'vercheck-notfound-' + self.sc_name + '.csv'), 'w') as f:
663 |                 for p, d, c in self.notfound:
664 |                     f.write(p + ',' + d + ',' + c + '\n')
665 |                 f.close()
666 |         except Exception as e:
667 |             print('Error writing file: ' + str(e))
668 |             return
669 |
670 |         try:
671 |             with open(os.path.join(self.outputdir, 
'vercheck-unsupported-' + self.sc_name + '.csv'), 'w') as f: 672 | for p, d, c in self.unsupported: 673 | f.write(p + ',' + d + ',' + c + '\n') 674 | f.close() 675 | except Exception as e: 676 | print('Error writing file: ' + str(e)) 677 | return 678 | 679 | try: 680 | with open(os.path.join(self.outputdir, 'vercheck-different-' + self.sc_name + '.csv'), 'w') as f: 681 | for p, c, l, r in self.different: 682 | f.write(p + ',' + c + ',' + l + ',' + r + '\n') 683 | f.close() 684 | except Exception as e: 685 | print('Error writing file: ' + str(e)) 686 | return 687 | 688 | try: 689 | with open(os.path.join(self.outputdir, 'vercheck-suseorphans-' + self.sc_name + '.csv'), 'w') as f: 690 | for p, d, c in self.suseorphans: 691 | f.write(p + ',' + d + ',' + c + '\n') 692 | f.close() 693 | except Exception as e: 694 | print('Error writing file: ' + str(e)) 695 | return 696 | 697 | try: 698 | with open(os.path.join(self.outputdir, 'vercheck-suseptf-' + self.sc_name + '.csv'), 'w') as f: 699 | for p, d, c in self.suseptf: 700 | f.write(p + ',' + d + ',' + c + '\n') 701 | f.close() 702 | except Exception as e: 703 | print('Error writing file: ' + str(e)) 704 | return 705 | 706 | field_size = 30 707 | if self.show_uptodate: 708 | print('\n\t\t--- Up-to-date packages ---\n') 709 | print(str.ljust('Name', field_size) + '\t' + 710 | str.ljust('Current Version', field_size)) 711 | print('=' * 80) 712 | for p, c in self.uptodate: 713 | print(str.ljust(p, field_size) + '\t' + c) 714 | print('\nTotal: ' + str(len(self.uptodate)) + ' packages') 715 | 716 | if self.show_diff: 717 | print('\n\t\t--- Different packages ---\n') 718 | print(str.ljust('Name', field_size) + '\t' + str.ljust('Current Version', field_size) + 719 | '\t' + str.ljust('Latest Version', field_size) + '\t' + str.ljust('Repository', field_size)) 720 | print('=' * 132) 721 | for p, c, l, r in self.different: 722 | print(str.ljust(p, field_size) + '\t' + str.ljust(c, field_size) + 723 | '\t' + str.ljust(l, field_size) + '\t' + str.ljust(r, field_size)) 724 | print('\nTotal: ' + str(len(self.different)) + ' packages') 725 | 726 | if self.show_unsupported: 727 | print('\n\t\t--- Unsupported packages ---\n') 728 | print(str.ljust('Name', field_size) + '\t' + str.ljust('Vendor', 729 | field_size) + '\t' + str.ljust('Current Version', field_size)) 730 | print('=' * 80) 731 | for p, c, l in self.unsupported: 732 | print(str.ljust(p, field_size) + '\t' + str.ljust(c, 733 | field_size) + '\t' + str.ljust(l, field_size)) 734 | print('\nTotal: ' + str(len(self.unsupported)) + ' packages') 735 | 736 | if self.show_unknown: 737 | print('\n\t\t--- Unknown packages ---\n') 738 | print(str.ljust('Name', field_size) + '\t' + str.ljust('Vendor', 739 | field_size) + '\t' + str.ljust('Current Version', field_size)) 740 | print('=' * 80) 741 | for p, c, l in self.notfound: 742 | print(str.ljust(p, field_size) + '\t' + str.ljust(c, 743 | field_size) + '\t' + str.ljust(l, field_size)) 744 | print('\nTotal: ' + str(len(self.notfound)) + ' packages') 745 | 746 | if self.show_suseorphans: 747 | print('\n\t\t--- SUSE orphan packages ---\n') 748 | print(str.ljust('Name', field_size) + '\t' + str.ljust('Vendor', 749 | field_size) + '\t' + str.ljust('Current Version', field_size)) 750 | print('=' * 80) 751 | for p, c, l in self.suseorphans: 752 | print(str.ljust(p, field_size) + '\t' + str.ljust(c, 753 | field_size) + '\t' + str.ljust(l, field_size)) 754 | print('\nTotal: ' + str(len(self.suseorphans)) + ' packages') 755 | 756 | if self.show_suseptf: 757 | 
print('\n\t\t--- SUSE PTF packages ---\n') 758 | print(str.ljust('Name', field_size) + '\t' + str.ljust('Vendor', 759 | field_size) + '\t' + str.ljust('Current Version', field_size)) 760 | print('=' * 80) 761 | for p, c, l in self.suseptf: 762 | print(str.ljust(p, field_size) + '\t' + str.ljust(c, 763 | field_size) + '\t' + str.ljust(l, field_size)) 764 | print('\nTotal: ' + str(len(self.suseptf)) + ' packages') 765 | 766 | return 767 | 768 | 769 | # separate class instantiated by each thread, does a search and posts results 770 | class PackageSearchEngine(Thread): 771 | 772 | # number of concurrent threads 773 | max_threads = 20 774 | 775 | # single instance for urllib3 pool 776 | http = urllib3.PoolManager(maxsize=5) 777 | 778 | # set default socket options 779 | # HTTPConnection.default_socket_options += [ (socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) ] 780 | # HTTPConnection.debuglevel = 15 781 | 782 | # maximum retries for each thread 783 | max_tries = 5 784 | 785 | # server replies which are temporary errors (and can be retried) 786 | retry_states = [429, 502, 504] 787 | 788 | # server replies which are permanent errors (and cannot be retried) 789 | error_states = [400, 403, 404, 422, 500] 790 | 791 | results = {} 792 | 793 | def __init__(self, instance_nr, product_id, package_name, supplied_distro, supplied_version, force_refresh): 794 | super(PackageSearchEngine, self).__init__( 795 | name='search-' + package_name) 796 | urllib3.disable_warnings() 797 | self.instance_nr = instance_nr 798 | self.product_id = product_id 799 | self.package_name = package_name 800 | self.supplied_distro = supplied_distro 801 | self.supplied_version = supplied_version 802 | self.force_refresh = force_refresh 803 | self.cm = CacheManager() 804 | self.done = False 805 | self.processed = False 806 | 807 | def mySort(self, e): 808 | 809 | v = e['version'] 810 | try: 811 | real_v = re.match(r"(.*)\+[a-zA-Z].*\-", v).group(1) 812 | v = real_v 813 | except (IndexError, AttributeError): 814 | pass 815 | 816 | if e['release'][0].isalpha(): 817 | r = e['release'][e['release'].index('.')+1:] 818 | else: 819 | r = e['release'] 820 | # print('release %s will be considered as %s' % (e['release'], release)) 821 | return LooseVersion(v + '-' + r) 822 | 823 | def get_results(self): 824 | return {'product_id': self.product_id, 'query': self.package_name, 'supplied_distro': self.supplied_distro, 'supplied_version': self.supplied_version, 'results': self.results} 825 | 826 | def run(self): 827 | # print('[Thread ' + str(self.instance_nr) + '] looking for ' + self.package_name + ' on product id ' + str(self.product_id)) 828 | tries = 0 829 | valid_response = False 830 | refined_data = [] 831 | return_data = [] 832 | cached = False 833 | 834 | # load the local cache if it exists and checks for valid data 835 | cached_data = self.cm.get_cache_data() 836 | product_list = {} 837 | if (self.cm.initialized) and self.force_refresh is False: 838 | try: 839 | item, product = self.cm.get_record( 840 | self.product_id, self.package_name) 841 | if item is None: 842 | cached = False 843 | else: 844 | if ((item['name'] == self.package_name) and (product is not None)): 845 | age = datetime.strptime( 846 | item['timestamp'], "%Y-%m-%dT%H:%M:%S.%f") - datetime.now() 847 | cached = True 848 | if age.days > self.cm.get_max_age(): 849 | 850 | item['repository'] = product['name'] 851 | item['product_id'] = self.product_id 852 | refined_data.append(item) 853 | else: 854 | print('cached data for ' + self.package_name + 855 | ' is too old ( ' + 
str(age.days) + ' days), discarding cache entry')
856 |                             self.cm.remove_record(item)
857 |                             cached = False
858 |             except KeyError as e:
859 |                 print('invalid cache entry for ' + self.package_name +
860 |                       ', removing (reason: ' + str(e) + ')')
861 |                 self.cm.remove_record(item)
862 |
863 |         if (cached):
864 |             self.sort_and_deliver(refined_data)
865 |             print('found ' + self.package_name + ' for product ID ' +
866 |                   str(self.product_id) + ' (cached)')
867 |             return
868 |         else:
869 |             while not valid_response and tries < self.max_tries:
870 |                 try:
871 |                     r = self.http.request('GET', 'https://scc.suse.com/api/package_search/packages?product_id=' + str(self.product_id) +
872 |                                           '&query=' + urllib.parse.quote(self.package_name), headers={'Accept-Encoding': 'gzip, deflate', 'Accept':'application/vnd.scc.suse.com.v4+json', 'Connection': 'close'})
873 |                 except Exception as e:
874 |                     print('Error while connecting: ' + str(e))
875 |                     exit(1)
876 |
877 |                 return_data = {}
878 |
879 |                 if r.status == 200:
880 |                     if tries > 0:
881 |                         print('thread %d got a good reply after %d tries' %
882 |                               (self.instance_nr, tries))
883 |                     return_data = json.loads(r.data.decode('utf-8'))
884 |                     valid_response = True
885 |                 elif r.status in self.error_states:
886 |                     if r.data:
887 |                         json_data = json.loads(r.data.decode('utf-8'))
888 |                         print(
889 |                             'cannot be processed due to error: [' + json_data['error'] + ']')
890 |                     print('thread %d got a fatal error (%d). Results will be incomplete!\nPlease contact the service administrators or try again later.' % (
891 |                         self.instance_nr, r.status))
892 |                     break
893 |                 elif r.status in self.retry_states:
894 |                     print('thread %d got non-fatal reply (%d) from server, trying again in 5 seconds ' %
895 |                           (self.instance_nr, r.status))
896 |                     time.sleep(5)
897 |                     tries = tries + 1
898 |                     continue
899 |                 else:
900 |                     print('got unknown error %d from the server!' 
% r.status) 901 | 902 | if return_data: 903 | for item in return_data['data']: 904 | # discard items that do not match exactly our query 905 | if item['name'] != self.package_name: 906 | # print('discarding item: ' + item) 907 | continue 908 | else: 909 | # valid data, add it to the cache and to the results 910 | # print('added result: ' + item['name'] + ' ' + item['version'] + '-' + item['release']) 911 | item['repository'] = item['products'][0]['name'] + ' ' + \ 912 | item['products'][0]['architecture'] 913 | item['timestamp'] = datetime.now().isoformat() 914 | refined_data.append(item) 915 | self.cm.add_record(item) 916 | 917 | self.sort_and_deliver(refined_data) 918 | return 919 | 920 | def sort_and_deliver(self, refined_data): 921 | # sort and deliver the data 922 | try: 923 | refined_data.sort(reverse=True, key=self.mySort) 924 | except TypeError as e: 925 | # sometimes the version is so wildly mixed with letters that the sorter gets confused 926 | # but it's okay to ignore this 927 | # print('*** warning: sorting error due to strange version (may be ignored): ' + str(e)) 928 | pass 929 | 930 | # print('refined data size: ' + str(len(refined_data))) 931 | self.results = refined_data 932 | self.done = True 933 | self.cm.write_cache() 934 | # del self.cm 935 | return 936 | 937 | 938 | # main program 939 | def main(): 940 | sv = SCCVersion() 941 | signal.signal(signal.SIGINT, sv.cleanup) 942 | 943 | try: 944 | opts, args = getopt.getopt(sys.argv[1:], "Vhp:n:lsvt123456a:d:o:f", ["version", "help", "product=", "name=", "list-products", "short", "verbose", "test", "show-unknown", 945 | "show-differences", "show-uptodate", "show-unsupported", "show-suseorphans", "show-suseptf", "arch=", "supportconfig=", "outputdir=", "force-refresh"]) 946 | except getopt.GetoptError as err: 947 | print(err) 948 | sv.usage() 949 | exit(2) 950 | 951 | product_id = -1 952 | package_name = '' 953 | short_response = False 954 | global show_unknown, show_diff, show_uptodate, show_unsupported, show_suseorphans, show_suseptf 955 | global uptodate, different, notfound, unsupported, suseorphans, suseptf 956 | 957 | for o, a in opts: 958 | if o in ("-h", "--help"): 959 | sv.show_help() 960 | exit(1) 961 | if o in ("-V", "--version"): 962 | sv.show_version() 963 | exit(1) 964 | elif o in ("-a", "--arch"): 965 | sv.arch = a 966 | elif o in ("-s", "--short"): 967 | sv.short_response = True 968 | elif o in ("-p", "--product"): 969 | product_id = int(a) 970 | elif o in ("-n", "--name"): 971 | package_name = a 972 | elif o in ("-o", "--outputdir"): 973 | sv.outputdir = a 974 | elif o in ("-l", "--list-products"): 975 | sv.product_list = SCCVersion.fetch_product_list() 976 | sv.list_products() 977 | exit(0) 978 | elif o in ("-1", "--show-unknown"): 979 | sv.show_unknown = True 980 | elif o in ("-2", "--show-differences"): 981 | sv.show_diff = True 982 | elif o in ("-3", "--show-uptodate"): 983 | sv.show_uptodate = True 984 | elif o in ("-4", "--show-unsupported"): 985 | sv.show_unsupported = True 986 | elif o in ("-5", "--show-suseorphans"): 987 | sv.show_suseorphans = True 988 | elif o in ("-6", "--show-suseptf"): 989 | sv.show_suseptf = True 990 | elif o in ("-v", "--verbose"): 991 | sv.set_verbose(True) 992 | elif o in ("-f", "--force-refresh"): 993 | sv.set_force_refresh(True) 994 | elif o in ("-t", "--test"): 995 | sv.product_list = SCCVersion.fetch_product_list() 996 | sv.test() 997 | exit(0) 998 | elif o in ("-d", "--supportconfig"): 999 | sv.product_list = SCCVersion.fetch_product_list() 1000 | supportconfigdir = a 
1001 | if os.path.isdir(a) is False: 1002 | print(f"Directory {a} does not exist.\nIf you're using multiple options in one parameter, make sure -d is the last one (e.g. -vd )") 1003 | exit(1) 1004 | pc = PublicCloudCheck(force_refresh=sv.force_refresh) 1005 | if (pc.analyze(supportconfigdir)): 1006 | print( 1007 | f"--> Image ID is [{SCCVersion.color(pc.get_results()['name'], 'yellow')}]") 1008 | if pc.get_results()['unsupported']: 1009 | print( 1010 | f"--> This image is {SCCVersion.color('UNSUPPORTED', 'red')} (not found in PINT data), continuing normal package analysis") 1011 | uptodate, unsupported, notfound, different, suseorphans, suseptf = sv.check_supportconfig( 1012 | supportconfigdir) 1013 | sv.write_reports() 1014 | else: 1015 | pc.get_report() 1016 | exit(0) 1017 | else: 1018 | uptodate, unsupported, notfound, different, suseorphans, suseptf = sv.check_supportconfig( 1019 | supportconfigdir) 1020 | sv.write_reports() 1021 | 1022 | exit(0) 1023 | else: 1024 | assert False, "invalid option" 1025 | 1026 | if product_id == -1 or package_name == '': 1027 | print('Please specify a product ID and package name.') 1028 | sv.usage() 1029 | exit(2) 1030 | sv.product_list = SCCVersion.fetch_product_list() 1031 | if product_id in sv.suma_product_list: 1032 | plist = sv.suma_product_list 1033 | elif product_id in sv.product_list: 1034 | plist = sv.product_list 1035 | else: 1036 | plist=None 1037 | 1038 | if plist is None: 1039 | print('Product ID ' + str(product_id) + ' is unknown.') 1040 | exit(2) 1041 | else: 1042 | if sv.verbose: 1043 | pname = plist[product_id]['name'] + ' ' + plist[product_id]['version'] + ' ' + plist[product_id]['architecture'] 1044 | print('Using product ID ' + str(product_id) + 1045 | ' (' + pname + ')') 1046 | 1047 | sv.search_package(product_id, package_name) 1048 | 1049 | return 1050 | 1051 | # package cache 1052 | 1053 | 1054 | class Singleton(type): 1055 | # Inherit from "type" in order to gain access to method __call__ 1056 | def __init__(self, *args, **kwargs): 1057 | self.__instance = None # Create a variable to store the object reference 1058 | super().__init__(*args, **kwargs) 1059 | 1060 | def __call__(self, *args, **kwargs): 1061 | if self.__instance is None: 1062 | # if the object has not already been created 1063 | # Call the __init__ method of the subclass and save the reference 1064 | self.__instance = super().__call__(*args, **kwargs) 1065 | return self.__instance 1066 | else: 1067 | # if object reference already exists; return it 1068 | return self.__instance 1069 | 1070 | 1071 | class CacheManager(metaclass=Singleton): 1072 | cache_data = [] 1073 | max_age_days = -5 # entries from the cache over 5 days old are discarded 1074 | user_cache_dir = os.path.join(os.getenv('HOME'), '.cache/scc-tools') 1075 | default_cache_dir = '/var/cache/scc-tools' 1076 | cache_file = 'scc_data.json' 1077 | active_cache_file = '' 1078 | _lock = Lock() 1079 | initialized = False 1080 | verbose = False 1081 | 1082 | def __init__(self): 1083 | if (os.access(self.default_cache_dir, os.W_OK)): 1084 | self.active_cache_file = os.path.join( 1085 | self.default_cache_dir, self.cache_file) 1086 | else: 1087 | self.active_cache_file = os.path.join( 1088 | self.user_cache_dir, self.cache_file) 1089 | if (os.path.exists(self.user_cache_dir) is False): 1090 | os.makedirs(self.user_cache_dir) 1091 | 1092 | self.load_cache() 1093 | # print('my cache has ' + len(self.cache_data) + ' entries') 1094 | weakref.finalize(self, self.write_cache) 1095 | 1096 | @contextmanager 1097 | def 
acquire_timeout(self, timeout):
1098 |         # acquire the lock (with a timeout) and hold it for the whole with-block
1099 |         result = self._lock.acquire(timeout=timeout)
1100 |         try:
1101 |             yield result
1102 |         finally:
1103 |             if result:
1104 |                 self._lock.release()
1105 |
1106 |     # loads the JSON cache if available
1107 |     def load_cache(self):
1108 |         try:
1109 |             with self.acquire_timeout(2) as acquired:
1110 |                 if acquired:
1111 |                     if not self.initialized:
1112 |                         # if the default directory is writeable, use it
1113 |                         with open(self.active_cache_file, "r") as f:
1114 |                             self.cache_data = json.loads(f.read())
1115 |                         self.initialized = True
1116 |                         print('loaded ' + str(len(self.cache_data)) +
1117 |                               ' items from cache (' + self.active_cache_file + ')')
1118 |         except IOError:
1119 |             # print('Error reading the package cache from ' + self.active_cache_file + '(non-fatal)')
1120 |             return False
1121 |         except json.decoder.JSONDecodeError:
1122 |             print('Invalid cache data (non-fatal)')
1123 |             # print('Data read: ' + '[' + self.cache_data + ']')
1124 |             return False
1125 |         return True
1126 |
1127 |     # saves the package data for later use
1128 |     def write_cache(self):
1129 |         try:
1130 |             with self.acquire_timeout(2) as acquired:
1131 |                 if acquired:
1132 |                     if self.verbose:
1133 |                         print('writing ' + str(len(self.cache_data)) +
1134 |                               ' items to cache at ' + self.active_cache_file)
1135 |                     with open(self.active_cache_file, "w+") as f:
1136 |                         f.write(json.dumps(self.cache_data,
1137 |                                 default=self.dt_parser))
1138 |                 else:
1139 |                     print('write_cache: could not acquire lock!')
1140 |                     exit(1)
1141 |         except IOError:
1142 |             print(
1143 |                 'Error saving package cache at ' + self.active_cache_file + ' (non-fatal)')
1144 |             return False
1145 |         return True
1146 |
1147 |     # fetches a record from the cache
1148 |     def get_record(self, product_id, package_name):
1149 |         for item in self.cache_data:
1150 |             if package_name == item['name']:
1151 |                 for p in item['products']:
1152 |                     if product_id == p['id']:
1153 |                         if self.verbose:
1154 |                             print("* cache hit: " + str(item))
1155 |                         item['repository'] = p['name']
1156 |                         return item, p
1157 |
1158 |         return None, None
1159 |
1160 |     # removes a record from the cache
1161 |     def remove_record(self, record):
1162 |         with self.acquire_timeout(5) as acquired:
1163 |             if acquired:
1164 |                 for item in self.cache_data.copy():
1165 |                     if record['id'] == item['id']:
1166 |                         if self.verbose:
1167 |                             print('removing record from cache: ' + str(record))
1168 |                         self.cache_data.remove(item)
1169 |             else:
1170 |                 print('remove_record: could not acquire lock!')
1171 |                 exit(1)
1172 |         # print('items in cache: ' + str(len(self.cache_data)))
1173 |
1174 |     # adds a new record to the cache
1175 |     def add_record(self, record):
1176 |         # print('appending record to cache: ' + record)
1177 |         with self.acquire_timeout(5) as acquired:
1178 |             if acquired:
1179 |                 found = False
1180 |                 for item in self.cache_data:
1181 |                     if record['id'] == item['id'] and record['name'] == item['name']:
1182 |                         found = True
1183 |                         break
1184 |                 if (found is False):
1185 |                     if self.verbose:
1186 |                         print("* cache: added record for " + str(record['id']))
1187 |                     self.cache_data.append(record)
1188 |                 # else:
1189 |                 #     print('cache: rejecting duplicate item')
1190 |             else:
1191 |                 print('add_record: could not acquire lock!')
1192 |                 exit(1)
1193 |         # print('items in cache: ' + str(len(self.cache_data)))
1194 |
1195 |     def get_max_age(self):
1196 |         return self.max_age_days
1197 |
1198 |     def dt_parser(self, dt):
1199 |         if isinstance(dt, datetime):
1200 |             return dt.isoformat()
1201 |
1202 | 
    def get_cache_data(self):
        return self.cache_data


class PublicImageCacheManager():
    cache_data = []
    max_age_days = 5  # cache entries older than 5 days are refreshed
    user_cache_dir = os.path.join(os.getenv('HOME'), '.cache/scc-tools')
    default_cache_dir = '/var/cache/scc-tools'
    provider = ''
    cache_file = ''
    initialized = False
    failed = False

    def __init__(self, provider, force_refresh=False):
        self.provider = provider
        self.cache_file = 'public_cloud_' + provider + '.json'

        # prefer the system-wide cache directory if it is writable,
        # otherwise fall back to a per-user cache directory
        if os.access(self.default_cache_dir, os.W_OK):
            self.active_cache_file = os.path.join(
                self.default_cache_dir, self.cache_file)
        else:
            self.active_cache_file = os.path.join(
                self.user_cache_dir, self.cache_file)
            if not os.path.exists(self.user_cache_dir):
                os.makedirs(self.user_cache_dir)

        if self.load_cache():
            age = datetime.now() - datetime.strptime(
                self.cache_data['timestamp'], "%Y-%m-%dT%H:%M:%S.%f")
            if force_refresh or age.days >= self.get_max_age():
                print(f'* forcing metadata refresh for public images for {provider}')
                tmp_cache_data = self.get_image_states(provider)
                if len(tmp_cache_data) > 0:
                    self.cache_data = tmp_cache_data
                    with open(self.active_cache_file, 'w') as f:
                        f.write(json.dumps(self.cache_data))
            else:
                print(f'* public images cached data OK ({age.days} days old)')
        else:
            print(f'* cached data for {provider} does not exist, downloading')
            tmp_cache_data = self.get_image_states(provider)
            if len(tmp_cache_data) > 0:
                self.cache_data = tmp_cache_data
                with open(self.active_cache_file, 'w') as f:
                    f.write(json.dumps(self.cache_data))

        self.initialized = True

    def load_cache(self):
        try:
            if not self.initialized:
                with open(self.active_cache_file, "r") as f:
                    self.cache_data = json.loads(f.read())
                self.initialized = True
                print('loaded ' + str(len(self.cache_data)) +
                      ' items from cache (' + self.active_cache_file + ')')
        except IOError:
            # a missing cache file is expected on first run (non-fatal)
            return False
        except json.decoder.JSONDecodeError:
            print('Invalid cache data (non-fatal)')
            return False
        return True

    def get_cache_data(self):
        return self.cache_data

    def get_max_age(self):
        return self.max_age_days

    def dt_parser(self, dt):
        # serialize datetime objects to ISO-8601 when writing JSON
        if isinstance(dt, datetime):
            return dt.isoformat()
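    # Illustrative note (inferred from the URL built below, not documented
    # here): the public cloud info service exposes per-provider image lists
    # as plain JSON, e.g.
    #
    #   https://susepubliccloudinfo.suse.com/v1/amazon/images/active.json
    #
    # where the provider is one of 'amazon', 'google' or 'microsoft' and the
    # list type is one of 'active', 'inactive', 'deprecated' or 'deleted'.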
"https://susepubliccloudinfo.suse.com/v1/" + \ 1300 | provider + "/images/" + list_type + ".json" 1301 | 1302 | while not valid_response and tries < max_tries: 1303 | try: 1304 | r = http.request('GET', base_url, headers={ 1305 | 'Accept-Encoding': 'gzip, deflate', 'Connection': 'close'}) 1306 | except Exception as e: 1307 | print('Error while connecting: ' + str(e)) 1308 | connection_failed = True 1309 | 1310 | if connection_failed: 1311 | print('It appears the server is offline, giving up.') 1312 | break 1313 | elif r.status == 200: 1314 | if tries > 0: 1315 | print('got a good reply after %d tries' % (tries)) 1316 | return_data = json.loads(r.data.decode('utf-8')) 1317 | valid_response = True 1318 | elif r.status in error_states: 1319 | if r.data: 1320 | json_data = json.loads(r.data.decode('utf-8')) 1321 | print( 1322 | 'cannot be processed due to error: [' + json_data['error'] + ']') 1323 | print('got a fatal error (%d). Results will be incomplete!\nPlease contact the service administrators or try again later.' % (r.status)) 1324 | break 1325 | elif r.status in retry_states: 1326 | tries = tries + 1 1327 | print( 1328 | 'got non-fatal reply (%d) from server, trying again in 5 seconds (try: %d/%d)' % (r.status, tries, max_tries)) 1329 | time.sleep(5) 1330 | continue 1331 | else: 1332 | print('got unknown error %d from the server!' % r.status) 1333 | 1334 | if valid_response: 1335 | return return_data['images'] 1336 | 1337 | return {} 1338 | 1339 | def get_image_states(self, provider): 1340 | image_data = {} 1341 | self.failed = False 1342 | image_data['timestamp'] = datetime.now().isoformat() 1343 | image_data['incomplete'] = False 1344 | image_data['active'] = self.fetch_image_states(provider, 'active') 1345 | if len(image_data['active']) == 0: 1346 | print('cannot download cloud data for active images at the moment, will use cached data if available.') 1347 | self.failed = True 1348 | 1349 | image_data['inactive'] = self.fetch_image_states(provider, 'inactive') 1350 | if len(image_data['inactive']) == 0: 1351 | print('cannot download cloud data for inactive images at the moment, will use cached data if available.') 1352 | self.failed = True 1353 | 1354 | image_data['deprecated'] = self.fetch_image_states( 1355 | provider, 'deprecated') 1356 | if len(image_data['deprecated']) == 0: 1357 | print('cannot download cloud data for deprecated images at the moment, will use cached data if available.') 1358 | self.failed = True 1359 | 1360 | image_data['deleted'] = self.fetch_image_states(provider, 'deleted') 1361 | if len(image_data['deleted']) == 0: 1362 | print('cannot download cloud data for deleted images at the moment, will use cached data if available.') 1363 | self.failed = True 1364 | 1365 | if self.failed: 1366 | image_data['incomplete'] = True 1367 | 1368 | return image_data 1369 | 1370 | 1371 | class PublicCloudCheck(): 1372 | aws_image_data = {} 1373 | gcp_image_data = {} 1374 | azure_image_data = {} 1375 | valid_states = ['active', 'inactive', 'deprecated', 'deleted'] 1376 | match_data = {} 1377 | aws_cm = None 1378 | azure_cm = None 1379 | gcp_cm = None 1380 | 1381 | def __init__(self, verbose=True, force_refresh=False): 1382 | self.match_data = {} 1383 | self.aws_cm = PublicImageCacheManager(provider='amazon', force_refresh=force_refresh) 1384 | self.gcp_cm = PublicImageCacheManager(provider='google', force_refresh=force_refresh) 1385 | self.azure_cm = PublicImageCacheManager(provider='microsoft', force_refresh=force_refresh) 1386 | self.aws_image_data = 
class PublicCloudCheck():
    aws_image_data = {}
    gcp_image_data = {}
    azure_image_data = {}
    valid_states = ['active', 'inactive', 'deprecated', 'deleted']
    match_data = {}
    aws_cm = None
    azure_cm = None
    gcp_cm = None

    def __init__(self, verbose=True, force_refresh=False):
        self.match_data = {}
        self.aws_cm = PublicImageCacheManager(provider='amazon', force_refresh=force_refresh)
        self.gcp_cm = PublicImageCacheManager(provider='google', force_refresh=force_refresh)
        self.azure_cm = PublicImageCacheManager(provider='microsoft', force_refresh=force_refresh)
        self.aws_image_data = self.aws_cm.get_cache_data()
        self.gcp_image_data = self.gcp_cm.get_cache_data()
        self.azure_image_data = self.azure_cm.get_cache_data()
        if verbose:
            # print a per-provider summary of the cached image data
            for label, data in (('AMAZON', self.aws_image_data),
                                ('MICROSOFT', self.azure_image_data),
                                ('GOOGLE', self.gcp_image_data)):
                print(f"--- {label} data as of {data['timestamp']}")
                if data.get('incomplete'):
                    print("*** data may be incomplete (previous failure downloading)")
                for state in self.valid_states:
                    print(f"* {len(data[state])} {state} images")
                print()

    def analyze(self, basedir):
        self.provider = self.get_public_image_type(basedir)
        print(f"--> Public cloud provider for {basedir} is [{SCCVersion.color(self.provider, 'yellow')}]")
        if self.provider == "unknown":
            print('--> this image has invalid (but present) public cloud metadata. Continuing normal analysis.')
            return False
        elif self.provider == 'none':
            print('--> not a public cloud image, continuing normal analysis')
            return False
        else:
            self.match_data = self.process_public_cloud(basedir, self.provider)
            return True

    def get_results(self):
        return self.match_data

    def get_report(self):
        if self.match_data['license_type'] != '':
            print(f"--> license type is [{self.match_data['license_type']}]")

        if self.match_data['version'] != '':
            print(f"--> Results for search on image [{self.match_data['name']}], version [{self.match_data['version']}]:")
        else:
            print(f"--> Results for search on image [{self.match_data['name']}]:")

        if self.match_data['unsupported']:
            print(f"*** Unsupported image found for public cloud {self.provider}")
        else:
            for state in self.valid_states:
                for item in self.match_data[state]:
                    if 'id' in item.keys():
                        print(f"{state.upper()}: image id [{item['id']}] ({item['name']})")
                    else:
                        print(f"{state.upper()}: image [{item['name']}]")
                    print(f"* publish date: [{item['publishedon']}]")
                    print(f"* more info: [{item['changeinfo']}]")
                    print(f"* deprecated since [{item['deprecatedon']}]")
                    print(f"* deleted on [{item['deletedon']}]")
                    print(f"* replaced by [{item['replacementname']}]")
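    # Illustrative note (an assumption inferred from the regexes and the
    # 'ami-id:' lookup below, not a documented format): a supportconfig from
    # a public cloud instance records the metadata tool invocation at the top
    # of public_cloud/metadata.txt, e.g. for EC2 (AMI id is hypothetical):
    #
    #   # /usr/bin/ec2metadata
    #   ami-id: ami-0123456789abcdef0
    #
    # which is what get_public_image_type() keys on to detect the provider.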
    def get_public_image_type(self, basedir):
        gcp_regex = r"^\# /usr/bin/gcemetadata"
        azure_regex = r"^\# /usr/bin/azuremetadata"
        aws_regex = r"^\# /usr/bin/ec2metadata"

        meta_file = basedir + '/public_cloud/metadata.txt'
        if os.path.isfile(meta_file):
            with open(meta_file, 'r') as f:
                contents = f.read()
            if re.search(gcp_regex, contents, re.MULTILINE):
                return 'google'
            elif re.search(azure_regex, contents, re.MULTILINE):
                return 'microsoft'
            elif re.search(aws_regex, contents, re.MULTILINE):
                return 'amazon'
            else:
                return 'unknown'
        else:
            return 'none'

    def process_public_cloud(self, basedir, image_type):
        meta_file = basedir + '/public_cloud/metadata.txt'

        match_active_images = []
        match_inactive_images = []
        match_deprecated_images = []
        match_deleted_images = []
        is_unsupported = False
        name = ''
        version = ''
        license_type = ''

        if image_type == 'microsoft':
            # Azure image test
            with open(meta_file, 'r') as f:
                metadata = yaml.safe_load(f)

            query_image = metadata['compute']['storageProfile']['imageReference']
            # if it's not an offer from the marketplace, there is nothing to query
            if query_image['offer'] is None:
                query = 'None:None'
                name = "None:None"
                version = "None"
            else:
                query = query_image['publisher'].lower() + ':' + \
                    query_image['offer'] + ':' + query_image['sku']
                name = query_image['offer'] + ':' + query_image['sku']
                version = query_image['version']
                if metadata['compute']['licenseType'] == 'SLES_BYOS':
                    license_type = 'BYOS'
                else:
                    license_type = 'PAYG'

            regex_image = r"^(" + query + "):"

            for image in self.azure_image_data['active']:
                if re.match(regex_image, image['urn']):
                    match_active_images.append(image)

            for image in self.azure_image_data['inactive']:
                if re.match(regex_image, image['urn']):
                    match_inactive_images.append(image)

            for image in self.azure_image_data['deprecated']:
                if re.match(regex_image, image['urn']):
                    match_deprecated_images.append(image)

        elif image_type == 'google':
            # GCP image test
            regex_image = r"^projects/(.*)/global/images/(.*)"
            md_str = ''
            with open(meta_file, 'r') as f:
                contents = f.readlines()

            for line in contents:
                md_str = re.match(r"^image:(.*)$", line)
                if md_str:
                    break

            query_project = re.match(regex_image, md_str.group(1).strip()).group(1)
            query_image = re.match(regex_image, md_str.group(1).strip()).group(2)
            name = query_project
            version = query_image

            for image in self.gcp_image_data['active']:
                if image['project'] == query_project and image['name'] == query_image:
                    match_active_images.append(image)

            for image in self.gcp_image_data['inactive']:
                if image['project'] == query_project and image['name'] == query_image:
                    match_inactive_images.append(image)

            for image in self.gcp_image_data['deprecated']:
                if image['project'] == query_project and image['name'] == query_image:
                    match_deprecated_images.append(image)

        elif image_type == 'amazon':
            # Amazon image test
            with open(meta_file, 'r') as f:
                contents = f.read()

            md_str = re.search(r"^ami-id:(.*)$", contents, re.MULTILINE)
            query_image = md_str.group(1).strip()
            name = query_image

            for image in self.aws_image_data['active']:
                if image['id'] == query_image:
                    match_active_images.append(image)

            for image in self.aws_image_data['inactive']:
                if image['id'] == query_image:
                    match_inactive_images.append(image)

            for image in self.aws_image_data['deprecated']:
                if image['id'] == query_image:
                    match_deprecated_images.append(image)

        # no match in any state list means the image is unsupported
        if (len(match_active_images) == 0 and len(match_inactive_images) == 0
                and len(match_deprecated_images) == 0):
            is_unsupported = True

        # make the final object
        match_data = {
            'name': name,
            'version': version,
            'active': match_active_images,
            'inactive': match_inactive_images,
            'deprecated': match_deprecated_images,
            'deleted': match_deleted_images,
            'license_type': license_type,
            'unsupported': is_unsupported
        }

        return match_data
if __name__ == "__main__":
    main()
--------------------------------------------------------------------------------