├── .gitignore
├── LICENSE
├── README.md
├── conf.toml
├── database_manager.py
├── gng.py
├── gngupdate.sh
├── pivnet.py
└── requirements.txt

/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | 
5 | # C extensions
6 | *.so
7 | 
8 | # Distribution / packaging
9 | .Python
10 | env/
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | *.egg-info/
23 | .installed.cfg
24 | *.egg
25 | 
26 | # PyInstaller
27 | # Usually these files are written by a python script from a template
28 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
29 | *.manifest
30 | *.spec
31 | 
32 | # Installer logs
33 | pip-log.txt
34 | pip-delete-this-directory.txt
35 | 
36 | # Unit test / coverage reports
37 | htmlcov/
38 | .tox/
39 | .coverage
40 | .coverage.*
41 | .cache
42 | nosetests.xml
43 | coverage.xml
44 | *.cover
45 | 
46 | # Translations
47 | *.mo
48 | *.pot
49 | 
50 | # Django stuff:
51 | *.log
52 | 
53 | # Sphinx documentation
54 | docs/_build/
55 | 
56 | # PyBuilder
57 | target/
58 | 
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | The MIT License (MIT)
2 | 
3 | Copyright (c) 2015 Matt Reider
4 | 
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 
23 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # gng
2 | Grab-N-Go - a little Python tool to download files from Pivotal Network and upload them to one or many Ops Managers.
3 | 
4 | ## Setup
5 | 
6 | Python 3.5.1 and pip 8.1.2 were used for this release. macOS ships with Python 2.7, so you will want to install a newer version of Python and pip. Visit [python.org](https://www.python.org/downloads/) for the latest downloads of Python, and [pypa.io](https://pip.pypa.io/en/stable/installing/) for pip. Put the newer version in a different bin directory than the one used by the system (e.g., `/usr/local/bin`). I added a couple of aliases, `alias pip='pip3.5'` and `alias python='python3.5'`, to avoid accidental use of the system Python.
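For example, in your shell profile (a sketch only — adjust the version suffix to whatever you installed from python.org):

```
# ~/.bash_profile -- assumes the python.org 3.5 install is on your PATH
alias python='python3.5'
alias pip='pip3.5'
```

With the aliases in place, you can install the dependencies with `pip install -r requirements.txt`.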
7 | 
8 | You need to edit `conf.toml` and enter your Pivotal Network API key.
9 | 
10 | Get your Pivotal Network API key by logging into http://network.pivotal.io - it's at the bottom of your profile page.
11 | 
12 | Edit the `conf.toml` file and replace the dummy key:
13 | 
14 | ```
15 | api_key = "BlahBlahBlahBlah"
16 | ```
17 | 
18 | ## Update local product list
19 | 
20 | The first thing you need to do before downloading / uploading Pivotal products is to update a local database of products. We will use the database for the other commands that follow. You might create a cron script to do a nightly update and email the differences.
21 | 
22 | ```
23 | $ python gng.py --update
24 | ```
25 | 
26 | ## Dump the product list to a text file
27 | 
28 | The database created by update now has over 1000 records. Dump list outputs CSV, so a spreadsheet can easily be used to sort and filter the list. The CSV file includes the MD5 digest and download URL should you want to create new tools to manage your own repository of Pivotal software.
29 | 
30 | ```
31 | $ python gng.py --dump-list [filename.csv]
32 | ```
33 | 
34 | ## Update and dump at night
35 | The helper script `gngupdate.sh` is useful to run from `launchd` or `cron`. The following is an example launchd job, which you would place in `/Users/someone/Library/LaunchAgents/net.example.launched.gng.plist`.
36 | 
37 | ```
38 | <?xml version="1.0" encoding="UTF-8"?>
39 | <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
40 | <plist version="1.0">
41 | <dict>
42 |     <key>EnvironmentVariables</key>
43 |     <dict>
44 |         <key>PATH</key>
45 |         <string>/Library/Frameworks/Python.framework/Versions/3.5/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/someone/bin</string>
46 |     </dict>
47 |     <key>Label</key>
48 |     <string>net.example.launched.gng</string>
49 |     <key>ProgramArguments</key>
50 |     <array>
51 |         <string>/Users/someone/Documents/PivotalCF/gng/gngupdate.sh</string>
52 |     </array>
53 |     <key>StandardErrorPath</key>
54 |     <string>/Users/someone/Documents/PivotalCF/gngupdate.stderr</string>
55 |     <key>StandardOutPath</key>
56 |     <string>/Users/someone/Documents/PivotalCF/gngupdate.stdout</string>
57 |     <key>StartCalendarInterval</key>
58 |     <array>
59 |         <dict>
60 |             <key>Hour</key>
61 |             <integer>1</integer>
62 |             <key>Minute</key>
63 |             <integer>13</integer>
64 |         </dict>
65 |     </array>
66 |     <key>WorkingDirectory</key>
67 |     <string>/Users/someone/Documents/PivotalCF</string>
68 | </dict>
69 | </plist>
70 | ```
71 | To help out the PivNet operators, please set the run time to an hour when you are asleep.
72 | 
73 | ## Cut and paste the files you want to download
74 | 
75 | Open the dump you created, and cut / paste the lines that contain the files you want to download. You can name this new file whatever you want; you can also filter rows with command-line tools, as shown below.
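For example, to grab every Redis row from a dump (the file names here are hypothetical):

```
$ grep ',p-redis-' 20160620-0113.csv >> my-downloads.csv
```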
Here's what it might look like:
76 | 
77 | ```
78 | Elastic Runtime,1.7.6,cf-1.7.6-build.4.pivotal,6/14/16,6879dea005f5010c6ebc3eb00fca8a34,https://network.pivotal.io/api/v2/products/elastic-runtime/releases/1875/product_files/4964/download
79 | MySQL,1.7.8,p-mysql-1.7.8.pivotal,5/19/16,5b89db5c9af13230e1b23847de2921f7,https://network.pivotal.io/api/v2/products/p-mysql/releases/1770/product_files/4696/download
80 | PCF Metrics,PCF Metrics 1.0.6,apm-1.0.6.pivotal,6/10/16,c32ae281cc97306e5a40c994f25faa53,https://network.pivotal.io/api/v2/products/pcf-metrics/releases/1860/product_files/4919/download
81 | PCF Metrics,PCF Log Search v1.0.0,p-logsearch-1.0.0.pivotal,6/6/16,40e928d67a836d4bff27c4b67e3f3e52,https://network.pivotal.io/api/v2/products/pcf-metrics/releases/1832/product_files/4856/download
82 | PCF Metrics,PCF JMX Bridge 1.7.2,p-metrics-1.7.2.pivotal,5/20/16,07f7a1689658df4e11e5cff5925d0545,https://network.pivotal.io/api/v2/products/pcf-metrics/releases/1777/product_files/4710/download
83 | Push Notification Service,1.4.10,p-push-notifications-1.4.10.1.pivotal,6/16/16,f04a1f1efd840036197b0ee6d3e07b98,https://network.pivotal.io/api/v2/products/push-notification-service/releases/1896/product_files/5010/download
84 | RabbitMQ,1.6.2,p-rabbitmq-1.6.2.pivotal,6/15/16,fb57184e2eca5fba836f4a688842c327,https://network.pivotal.io/api/v2/products/pivotal-rabbitmq-service/releases/1882/product_files/4985/download
85 | Redis,1.5.15,p-redis-1.5.15.pivotal,6/14/16,9f10214bca9a20c9039ff6bad6aa3c55,https://network.pivotal.io/api/v2/products/p-redis/releases/1876/product_files/4965/download
86 | Session State Caching Powered by GemFire,1.2.0,p-ssc-gemfire-1.2.0.0.pivotal,6/2/16,0dfea7de7bdab6e72d980adcaff5c68b,https://network.pivotal.io/api/v2/products/p-ssc-gemfire/releases/1821/product_files/4826/download
87 | Single Sign-On,1.1.1,Single_Sign-On_1.1.1.pivotal,5/5/16,876eab74baa2e9b82df279aff90453e8,https://network.pivotal.io/api/v2/products/p-identity/releases/1732/product_files/4526/download
88 | Spring Cloud Services,1.0.10,p-spring-cloud-services-1.0.10.pivotal,6/14/16,ad280e91dfac1d5fbdcd55129cd4aad7,https://network.pivotal.io/api/v2/products/p-spring-cloud-services/releases/1881/product_files/4976/download
89 | ```
90 | 
91 | ## Download the files you want
92 | 
93 | Download verifies the MD5 for each file. In the case of a mismatch, download will retry a few times before skipping the file and reporting the error. Download can also be used for files that cannot be uploaded to Ops Manager (e.g., buildpacks). Create a download directory and run the download command as follows:
94 | 
95 | ```
96 | $ mkdir [directory]
97 | $ python gng.py --download [download-list.csv] --path [directory]
98 | ```
99 | 
100 | ## Upload all the files to all the Ops Managers
101 | 
102 | You can upload these files to more than one Ops Manager. Just create a TOML file with a `[[opsmanager]]` entry for each Ops Manager you want to maintain. We'll call ours `ops-manager-list.toml`:
103 | 
104 | ```
105 | [[opsmanager]]
106 | url = "some domain name or IP address"
107 | access_token = "some token"
108 | [[opsmanager]]
109 | url = "some other domain name or IP address"
110 | access_token = "some other token"
111 | ```
112 | The `url` is just the domain name or IP address (e.g., localhost, opsmanager.example.com, 10.0.45.67) without the scheme; the uploader adds `https://` itself, as shown below. `localhost` is convenient when running Ops Manager on a public IaaS like AWS.
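This is the line in `upload` (in `pivnet.py`) that builds the request URL from the `url` field:

```
c.setopt(c.URL, "https://" + opsman["url"] + "/api/v0/available_products")
```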
113 | 
114 | See this [support article](https://discuss.zendesk.com/hc/en-us/articles/217039538-How-to-download-and-upload-Pivotal-Cloud-Foundry-products-via-API) for getting the access token.
115 | 
116 | The upload command is as follows:
117 | 
118 | ```
119 | $ python gng.py --upload [ops-manager-list.toml] --path [directory]
120 | ```
121 | ## Cautions and next steps
122 | There is minimal error handling, but you have the source!
123 | To keep the source consistent as it changes, please run [autopep8](https://github.com/hhatto/autopep8) before checking code into the project.
124 | 
125 | ##### Feature backlog:
126 | * Robust error handling
127 | * Evaluate [Requests](http://docs.python-requests.org/) versus [PycURL](http://pycurl.io); this [post](http://stackoverflow.com/questions/15461995/python-requests-vs-pycurl-performance) is a place to start
128 | * Handle Stemcells
129 | * Dump list needs a "newer than date" option
130 | * Upload needs to check whether a file already exists on Ops Manager before uploading, to avoid the "meta-data" error and the waste of bandwidth
131 | * auth_token expires; check that the token is still valid before iterating through the uploads
132 | * PycURL may time out while Ops Manager processes a large upload such as Elastic Runtime (rare, but the error has been seen; perhaps caused by a LB between the utility and Ops Manager)
133 | * Investigate whether Ops Manager 1.6 and earlier need to be supported
134 | * Once the MD5 is returned by the release API, the call to `get_file_details` won't be needed; currently that would save over 5300 requests.
135 | 
136 | 
137 | 
138 | 
--------------------------------------------------------------------------------
/conf.toml:
--------------------------------------------------------------------------------
1 | api_key = "BlahBlahBlahBlahBlah"
2 | 
--------------------------------------------------------------------------------
/database_manager.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | 
3 | from sqlalchemy import ForeignKey
4 | from sqlalchemy.engine import create_engine
5 | from sqlalchemy.orm import sessionmaker
6 | from sqlalchemy.schema import Column
7 | from sqlalchemy import Integer, String
8 | from sqlalchemy.ext.declarative import declarative_base
9 | 
10 | '''
11 | SQLAlchemy-related classes and methods.
12 | '''
13 | 
14 | 
15 | class Database:
16 |     """Wraps the SQLite product database and its SQLAlchemy session."""
17 | 
18 |     def __init__(self, database_path=None, debug_mode=False):
19 |         self.database = database_path
20 |         self.debug_mode = debug_mode
21 |         self.start_engine()
22 | 
23 |         # self.clear_all_tables()
24 | 
25 |     def start_engine(self):
26 | 
27 |         # CONNECT
28 |         if not self.database:
29 |             self.database = 'data.db'
30 |         self.engine = create_engine('sqlite:///' + self.database, echo=False)
31 |         self.connection = self.engine.connect()
32 | 
33 |         # Create a Session for SQLAlchemy ...
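        # sessionmaker() returns a Session factory bound to the engine;
        # Base.metadata.create_all() only creates tables that are missing,
        # so running start_engine() on every invocation is safe.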
34 |         Session = sessionmaker(bind=self.engine)
35 |         self.session = Session()
36 |         Base.metadata.create_all(self.engine)
37 | 
38 |     def clear_all_tables(self):
39 | 
40 |         for table in reversed(Base.metadata.sorted_tables):
41 |             self.connection.execute(table.delete())
42 |         self.session.commit()
43 | 
44 |     def connection_close(self):
45 |         self.connection.close()
46 | 
47 |     def commit(self):
48 |         self.session.commit()
49 | 
50 |     def get_product_details(self, slug):
51 |         # note: 'slug' is matched against Product.name, so callers pass the display name
52 |         data = self.session.query(
53 |             Product.id, Product.slug).filter(
54 |             Product.name == slug).first()
55 |         # print(data)
56 |         return data
57 | 
58 |     def get_release_id(self, product_slug, version):
59 |         # print(product_slug)
60 |         # print(version)
61 |         data = self.session.query(
62 |             Release.id).filter(
63 |             Release.product_slug == product_slug).filter(
64 |             Release.version == version.strip()).first()
65 |         # print(data)
66 |         return data
67 | 
68 |     def get_file_details(self, release_id, file_name):
69 |         # print(release_id)
70 |         # print(file_name)
71 |         data = self.session.query(
72 |             ProductFile.id,
73 |             ProductFile.release_id,
74 |             ProductFile.filename,
75 |             ProductFile.download_url,
76 |             ProductFile.md5,
77 |             ProductFile.release_date).filter(
78 |             ProductFile.release_id == release_id).filter(
79 |             ProductFile.filename == file_name).first()
80 |         # print(data)
81 |         return data
82 | 
83 |     def check_file_exists(self, file):
84 |         return self.session.query(
85 |             ProductFile.filename).filter(
86 |             ProductFile.filename == file).first()
87 | 
88 | 
89 | Base = declarative_base()
90 | 
91 | 
92 | class Product(Base):
93 |     __tablename__ = 'product'
94 |     id = Column(Integer, primary_key=True)
95 |     slug = Column(String)
96 |     name = Column(String)
97 |     file_groups_url = Column(String)
98 |     product_files_url = Column(String)
99 | 
100 | 
101 | class Release(Base):
102 |     __tablename__ = 'release'
103 |     id = Column(Integer, primary_key=True)
104 |     product_slug = Column(String, ForeignKey('product.slug'))
105 |     version = Column(String)
106 |     file_groups_url = Column(String)
107 |     product_files_url = Column(String)
108 | 
109 | 
110 | class ProductFile(Base):
111 |     __tablename__ = 'product_file'
112 |     id = Column(Integer, primary_key=True)
113 |     release_id = Column(Integer, ForeignKey('release.id'), primary_key=True)
114 |     filename = Column(String)
115 |     download_url = Column(String)
116 |     md5 = Column(String)
117 |     release_date = Column(String)
118 | 
--------------------------------------------------------------------------------
/gng.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | 
3 | import pytoml as toml
4 | import os
5 | from argparse import ArgumentParser
6 | from pivnet import PivNetUpdater, DBDumper, PivNetDownloader, PivNetUploader
7 | 
8 | parser = ArgumentParser()
9 | parser.add_argument(
10 |     "--update",
11 |     action='store_true',
12 |     help="update a local SQLite database of all products, releases, and files from Pivotal Network")
13 | parser.add_argument(
14 |     "--dump-list",
15 |     nargs=1,
16 |     metavar='filename',
17 |     action="store",
18 |     help="Create a list of files from the local SQLite database so the user can cut / paste the files they want to download")
19 | parser.add_argument(
20 |     "--download",
21 |     nargs=1,
22 |     metavar="filename",
23 |     help="Parse the user-created file of downloads and download them to a path")
24 | parser.add_argument(
25 |     "--upload",
26 |     nargs=1,
27 |     metavar="filename",
28 |     help="Scan a path of files, make sure they are valid 
(or skip the check with --force) and upload them to Ops Managers")
29 | parser.add_argument(
30 |     "--path",
31 |     nargs=argparse.REMAINDER,
32 |     metavar="path",
33 |     help="Path to which either upload or download will happen")
34 | parser.add_argument(
35 |     "--force",
36 |     action="store_true",
37 |     help="Force upload of files even if they are not in the local database")
38 | 
39 | conf = "conf.toml"
40 | args = parser.parse_args()
41 | vargs = vars(args)
42 | # print(vargs)
43 | 
44 | if not os.path.exists(conf):
45 |     print("no valid conf.toml with key")
46 |     exit(-1)
47 | with open(conf) as conffile:
48 |     config = toml.loads(conffile.read())
49 | api_key = config.get('api_key')
50 | if not api_key:
51 |     print("no valid conf.toml with key")
52 |     exit(-1)
53 | 
54 | if "update" in vargs and vargs["update"]:
55 |     msg = PivNetUpdater(api_key).update_db()
56 |     if msg:
57 |         print(msg)
58 | elif "dump_list" in vargs and vargs["dump_list"]:
59 |     filename = vargs["dump_list"][0]
60 |     print("Going to dump list to " + filename)
61 |     DBDumper().dump_list(filename)
62 | elif "download" in vargs and vargs["download"]:
63 |     filename = vargs["download"][0]
64 |     if not os.path.exists(filename):
65 |         print('download list is required')
66 |         exit(-1)
67 |     if "path" in vargs and vargs["path"]:
68 |         download_path = vargs["path"][0]
69 |         if not os.path.exists(download_path):
70 |             print("Path is not valid")
71 |             exit(-1)
72 |         PivNetDownloader(api_key).download_files(filename, download_path)
73 |     else:
74 |         print("Path is required")
75 | elif "upload" in vargs and vargs["upload"]:
76 |     filename = vargs["upload"][0]
77 |     if not filename or not os.path.exists(filename):
78 |         print('ops manager target list is required')
79 |         exit(-1)
80 |     try:
81 |         with open(filename, 'rb') as conffile:
82 |             opsmgr_config = toml.load(conffile)
83 | 
84 |     except Exception as e:
85 |         print('Error: %s, %s' % (e, e.args))
86 |         print('Incorrect TOML format')
87 |         exit(-1)
88 | 
89 |     if "path" in vargs and vargs["path"]:
90 |         upload_folder = vargs["path"][0]
91 |         if not os.path.exists(upload_folder):
92 |             print("Path is not valid")
93 |             exit(-1)
94 | 
95 |         # vars(args) always contains the 'force' key (store_true defaults to
96 |         # False), so test its value rather than its presence
97 |         force = vargs.get("force", False)
98 |         msg = PivNetUploader().upload_files(
99 |             config=opsmgr_config, folder_path=upload_folder, force=force)
100 |         if msg:
101 |             print(msg)
102 |             exit(-1)
103 |     else:
104 |         print("Path is required")
105 | else:
106 |     parser.print_help()
107 | 
--------------------------------------------------------------------------------
/gngupdate.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | 
3 | if python3.5 gng/gng.py --update; then
4 |     filebase=$(date "+%Y%m%d-%H%M")
5 |     if python3.5 gng/gng.py --dump-list ${filebase}.tmp; then
6 |         head -n 1 ${filebase}.tmp > ${filebase}.csv
7 |         (sed '1d' ${filebase}.tmp | sort -t ',' -k '4,4r' -k '1,1d' -k '3,3f') >> ${filebase}.csv
8 |         rm ${filebase}.tmp
9 |     fi
10 | fi
11 | 
--------------------------------------------------------------------------------
/pivnet.py:
--------------------------------------------------------------------------------
1 | import zipfile
2 | 
3 | from database_manager import Database, Product, Release, ProductFile
4 | from sqlalchemy import exc
5 | 
6 | import os
7 | import json
8 | import pytoml as toml
9 | import requests
10 | import codecs
11 | import pycurl
12 | import csv
13 | import hashlib
14 | import sys
15 | 
16 | all_products_url = '/api/v2/products'
17 | end_point = 'https://network.pivotal.io'
18 | 
19 | proxies = {
20 | }
21 | 
22 | 
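# proxies is passed straight to the requests calls below; to route PivNet
# traffic through a proxy, populate the dict above, e.g.
# proxies = {'https': 'http://proxy.example.com:8080'}  # hypothetical address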
23 | class PivNetUploader:
24 | 
25 |     def __init__(self):
26 |         self.database = Database()
27 | 
28 |     def upload_files(self, config, folder_path, force=False):
29 |         file_names = []
30 |         print('folder path: %s, force=%s' % (folder_path, force))
31 |         files = os.listdir(folder_path)
32 |         for file in files:
33 |             # if not file.endswith(('.pivotal', '.tgz')):
34 |             if not file.endswith('.pivotal'):
35 |                 print(
36 |                     'Skipping %s - only tiles (.pivotal) are uploaded for now.' %
37 |                     (file))
38 |             elif not os.path.isfile(os.path.join(folder_path, file)):
39 |                 print('Skipping %s - is not a file.' % (file))
40 |             elif not self.database.check_file_exists(file) and not force:
41 |                 print(
42 |                     'Skipping %s - is not in the product db. Either remove the unknown file, or use --force to upload every file in %s whether it is in the db or not.' %
43 |                     (file, folder_path))
44 |             else:
45 |                 file_names.append(file)
46 |         self.upload(config, folder_path, file_names)
47 | 
48 |     def upload(self, config, dest_path, file_names):
49 |         # print(config)
50 |         for opsman in config["opsmanager"]:
51 |             if "access_token" not in opsman:
52 |                 print('No Ops Manager access_token')
53 |                 return
54 |             elif "url" not in opsman:
55 |                 print('No Ops Manager URL')
56 |                 return
57 |             else:
58 |                 access_token = opsman["access_token"]
59 |                 url = opsman["url"]
60 | 
61 |             for filename in file_names:
62 |                 full_path = os.path.join(dest_path, filename)
63 |                 # print(
64 |                 #     'access_token = %s, url = %s, file = %s' %
65 |                 #     (access_token, url, full_path))
66 |                 # continue
67 |                 try:
68 |                     c = pycurl.Curl()
69 |                     c.setopt(
70 |                         c.URL,
71 |                         "https://" +
72 |                         opsman["url"] +
73 |                         "/api/v0/available_products")
74 |                     c.setopt(
75 |                         pycurl.HTTPHEADER, [
76 |                             'Authorization: bearer %s' %
77 |                             (access_token)])
78 |                     c.setopt(pycurl.VERBOSE, 0)
79 |                     c.setopt(c.SSL_VERIFYPEER, 0)
80 |                     c.setopt(c.SSL_VERIFYHOST, 0)
81 |                     c.setopt(c.NOPROGRESS, 0)
82 |                     c.setopt(
83 |                         c.HTTPPOST, [
84 |                             ('product[file]', (c.FORM_FILE, full_path,)), ])
85 |                     print('Uploading %s' % (full_path))
86 |                     c.perform()
87 |                 finally:
88 |                     c.close()
89 | 
90 | 
91 | class PivNetDownloader:
92 | 
93 |     def __init__(self, api_key):
94 |         self.token = api_key
95 |         self.secure_url = end_point
96 |         self.headers = {'content-type': 'application/json'}
97 |         self.secure_headers = {
98 |             'content-type': 'application/json',
99 |             'Accept': 'application/json',
100 |             'Authorization': 'Token token=' + self.token}
101 |         self.database = Database()
102 |         # print('token=%s, secure_url=%s, headers=%s, secure_headers=%s' %
103 |         #       (self.token, self.secure_url, self.headers, self.secure_headers))
104 | 
105 |     def download_files(self, file_name, download_path):
106 |         with open(file_name) as infile:
107 |             reader = csv.reader(infile, dialect='excel')
108 |             for row in reader:
109 |                 # print(row)
110 |                 name = row[0].strip()
111 |                 release_version = row[1].strip()
112 |                 file_name = row[2].strip()
113 | 
114 |                 data = self.database.get_product_details(name)
115 |                 if data:
116 |                     # print(data)
117 |                     product_id = data[0]
118 |                     slug = data[1]
119 |                 else:
120 |                     print('Could not get product details for %s' % (name))
121 |                     continue
122 | 
123 |                 # print(
124 |                 #     'slug=%s,version=%s,file=%s' %
125 |                 #     (slug, release_version, file_name))
126 |                 data = self.database.get_release_id(slug, release_version)
127 |                 # print(data)
128 |                 if data:
129 |                     release_id = data[0]
130 |                     file_data = self.database.get_file_details(
131 |                         release_id, file_name)
132 |                     if file_data:
133 |                         # print(file_data)
134 |                         file_id = file_data[0]
135 |                         file_name = file_data[2]
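                        # remaining file_data columns (see get_file_details):
                        # [3] download_url, [4] md5, [5] release_date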
136 |                         url = file_data[3]
137 |                         md5 = file_data[4]
138 |                         # print(
139 |                         #     'URL = %s, filename = %s, md5 = %s' %
140 |                         #     (url, file_name, md5))
141 |                         self.acceptEULA(product_id, release_id)
142 | 
143 |                         md5_download = ""
144 |                         i = 0
145 |                         while md5 != md5_download:
146 |                             md5_download = self.downloadFile(
147 |                                 url,
148 |                                 file_name,
149 |                                 download_path)
150 |                             i += 1
151 |                             if i > 2:
152 |                                 break
153 |                         if md5 != md5_download:
154 |                             print(
155 |                                 'MD5 from PivNet does not match the download (%s != %s)' %
156 |                                 (md5, md5_download))
157 |                     else:
158 |                         print(
159 |                             'Could not get file details for release ID=%s, file name=%s' %
160 |                             (release_id, file_name))
161 |                 else:
162 |                     print(
163 |                         'Could not get release ID for slug=%s, version=%s' %
164 |                         (slug, release_version))
165 | 
166 |     def acceptEULA(self, product_id, release_id):
167 |         url = self.secure_url + "/api/v2/products/" + \
168 |             str(product_id) + "/releases/" + str(release_id) + "/eula_acceptance"
169 |         r = requests.post(url, headers=self.secure_headers, proxies=proxies)
170 |         return r
171 | 
172 |     def downloadFile(
173 |             self,
174 |             url,
175 |             file_name,
176 |             download_path):
177 |         sig = hashlib.md5()
178 |         r = requests.post(
179 |             url,
180 |             headers=self.secure_headers,
181 |             stream=True,
182 |             proxies=proxies)
183 |         # print(download_path)
184 |         print("Downloading %s from %s" % (file_name, url))
185 |         full_path = os.path.join(download_path, file_name)
186 |         with open(full_path, 'wb') as f:
187 |             for chunk in r.iter_content(chunk_size=8192):
188 |                 if chunk:  # filter out keep-alive new chunks
189 |                     f.write(chunk)
190 |                     sig.update(chunk)
191 |                     f.flush()
192 |         return sig.hexdigest().lower()
193 | 
194 |     def unzipper(self, file_name):
195 |         path = "product_files"
196 |         subdir = file_name[:-8]  # remove '.pivotal'
197 |         if not os.path.exists("product_files/" + subdir):
198 |             os.makedirs("product_files/" + subdir)
199 |         with zipfile.ZipFile("product_files/" + file_name, "r") as z:
200 |             z.extractall("product_files/" + subdir)
201 |         return subdir
202 | 
203 | 
204 | class DBDumper:
205 | 
206 |     def __init__(self):
207 |         self.database = Database()
208 | 
209 |     def dump_list(self, outfile_name):
210 |         final_list = []
211 |         for product in self.database.session.query(
212 |                 Product).order_by(Product.name).all():
213 |             product_list = []
214 | 
215 |             for release in self.database.session.query(Release).filter(
216 |                     Release.product_slug == product.slug).order_by(
217 |                     Release.version).all():
218 |                 for file in self.database.session.query(ProductFile).filter(
219 |                         ProductFile.release_id == release.id).order_by(
220 |                         ProductFile.filename).all():
221 |                     final_list.append([product.name,
222 |                                        release.version,
223 |                                        file.filename,
224 |                                        file.release_date,
225 |                                        file.md5,
226 |                                        file.download_url])
227 |         if len(final_list) > 0:
228 |             with open(outfile_name, 'w+', newline='') as outfile:
229 |                 writer = csv.writer(outfile, dialect='excel')
230 |                 writer.writerow(['Product Name',
231 |                                  'Release Version',
232 |                                  'Filename',
233 |                                  'Release Date',
234 |                                  'MD5',
235 |                                  'Download URL'])
236 |                 writer.writerows(final_list)
237 |             print('Product list dumped to [%s]' % outfile_name)
238 |             print('Cut and paste files to download into my-downloads.csv and run gng --download my-downloads.csv --path foo')
239 |         else:
240 |             print("You need to update your Pivotal Network DB. 
Please run gng --update") 241 | 242 | 243 | class PivNetUpdater: 244 | 245 | def __init__(self, api_key): 246 | self.token = api_key 247 | self.secure_url = end_point 248 | self.secure_headers = { 249 | 'content-type': 'application/json', 250 | 'Accept': 'application/json', 251 | 'Authorization': 'Token token=' + self.token} 252 | self.database = Database() 253 | # print('token=%s, secure_url=%s, secure_headers=%s' % 254 | # (self.token, self.secure_url, self.secure_headers)) 255 | 256 | def update_db(self): 257 | self.database.clear_all_tables() 258 | products = self.getProducts() 259 | for product in products: 260 | product_id = product.get('id') 261 | slug = product.get('slug') 262 | pname = product.get('name') 263 | p_file_groups = product.get('_links').get( 264 | 'file_groups').get('href') 265 | p_product_files = product.get('_links').get( 266 | 'product_files').get('href') 267 | p = Product( 268 | id=product_id, 269 | slug=slug, 270 | name=pname, 271 | file_groups_url=p_file_groups, 272 | product_files_url=p_product_files) 273 | self.database.session.add(p) 274 | self.database.commit() 275 | # print( 276 | # 'Found product %s,%s,%s,%s,%s' % 277 | # (product_id, 278 | # slug, 279 | # pname, 280 | # p_file_groups, 281 | # p_product_files)) 282 | # if p_file_groups: 283 | # p_groups = self.getFileGroups(p_file_groups) 284 | # if p_groups: 285 | # for p_group in p_groups: 286 | # print( 287 | # json.dumps( 288 | # p_group, 289 | # sort_keys=True, 290 | # indent=4)) 291 | # if p_product_files: 292 | # p_files = self.getProductFiles(p_product_files) 293 | # if p_files: 294 | # for p_file in p_files: 295 | # print( 296 | # json.dumps( 297 | # p_file, 298 | # sort_keys=True, 299 | # indent=4)) 300 | releases = self.getReleases(slug) 301 | if releases: 302 | for release in releases: 303 | rid = release.get('id') 304 | version = release.get('version') 305 | r_file_groups = release.get('_links').get( 306 | 'file_groups').get('href') 307 | r_product_files = release.get('_links').get( 308 | 'product_files').get('href') 309 | r = Release( 310 | id=rid, 311 | version=version, 312 | product_slug=p.slug, 313 | file_groups_url=r_file_groups, 314 | product_files_url=r_product_files) 315 | self.database.session.add(r) 316 | self.database.commit() 317 | # print( 318 | # 'Found release %s,%s,%s,%s,%s' % 319 | # (rid, version, p.slug, r_file_groups, r_product_files)) 320 | if r_file_groups: 321 | groups = self.getFileGroups(r_file_groups) 322 | if groups: 323 | for group in groups: 324 | self.addFiles( 325 | group.get('product_files'), product_id, rid) 326 | self.addFiles(self.getProductFiles( 327 | r_product_files), product_id, rid) 328 | print("Local Pivotal Network db has been updated.") 329 | 330 | def addFiles(self, files, product_id, rid): 331 | if files: 332 | for file in files: 333 | try: 334 | file_id = file.get('id') 335 | file_detail = self.getProductFile( 336 | product_id, rid, file_id) 337 | url = file_detail.get('_links').get('download').get('href') 338 | name = file_detail.get( 339 | 'aws_object_key').split('/')[-1] 340 | md5 = file_detail.get('md5').lower() 341 | released_at = file_detail.get('released_at') 342 | f = ProductFile( 343 | id=file_id, 344 | release_id=rid, 345 | filename=name, 346 | download_url=url, 347 | md5=md5, 348 | release_date=released_at) 349 | # print( 350 | # 'file id=%s,release id=%s,filename=%s,url=%s,md5=%s,date=%s' % 351 | # (f.id, r.id, name, url, md5, released_at)) 352 | self.database.session.add(f) 353 | self.database.commit() 354 | except exc.IntegrityError: 
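                # ProductFile's primary key is composite (id, release_id), so
                # the same file id can legitimately appear under several
                # releases; only true repeats of the pair (e.g. a file listed
                # both in a file group and in product_files) land here.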
355 |                     self.database.session.rollback()
356 |                     print('Duplicate: %s' % (file_detail))
357 |                 except Exception:
358 |                     print(
359 |                         'addFiles (%s, %s, %s) exception: %s' %
360 |                         (product_id, rid, file_id, sys.exc_info()[0]))
361 |                 # print(
362 |                 #     json.dumps(
363 |                 #         file,
364 |                 #         sort_keys=True,
365 |                 #         indent=4))
366 |                 # print(
367 |                 #     json.dumps(
368 |                 #         file_detail,
369 |                 #         sort_keys=True,
370 |                 #         indent=4))
371 | 
372 |     def getProducts(self):
373 |         url = self.secure_url + "/api/v2/products/"
374 |         for i in range(0, 3):
375 |             try:
376 |                 r = requests.get(
377 |                     url, headers=self.secure_headers, proxies=proxies)
378 |                 data = json.loads(r.content.decode('utf-8'))
379 |                 # print('getProducts %s' % (url))
380 |                 # print(json.dumps(data, sort_keys=True, indent=4))
381 |                 products = data.get('products')
382 |                 return products
383 |             except requests.exceptions.RequestException as e:
384 |                 print('getProducts (i=%s) %s e=%s' % (i, url, e))
385 |         print('getProducts giving up after %s tries' % (i + 1))
386 | 
387 |     def getReleases(self, slug):
388 |         url = self.secure_url + "/api/v2/products/" + slug + "/releases"
389 |         for i in range(0, 3):
390 |             try:
391 |                 r = requests.get(
392 |                     url, headers=self.secure_headers, proxies=proxies)
393 |                 data = json.loads(r.content.decode('utf-8'))
394 |                 # print('getReleases %s' % (url))
395 |                 # print(json.dumps(data, sort_keys=True, indent=4))
396 |                 releases = data.get('releases')
397 |                 return releases
398 |             except requests.exceptions.RequestException as e:
399 |                 print('getReleases (i=%s) %s e=%s' % (i, url, e))
400 |         print('getReleases giving up after %s tries' % (i + 1))
401 | 
402 |     def getFileGroups(self, url):
403 |         for i in range(0, 3):
404 |             try:
405 |                 r = requests.get(
406 |                     url, headers=self.secure_headers, proxies=proxies)
407 |                 data = json.loads(r.content.decode('utf-8'))
408 |                 # print('getFileGroups %s' % (url))
409 |                 # print(json.dumps(data, sort_keys=True, indent=4))
410 |                 file_groups = data.get('file_groups')
411 |                 return file_groups
412 |             except requests.exceptions.RequestException as e:
413 |                 print('getFileGroups (i=%s) %s e=%s' % (i, url, e))
414 |         print('getFileGroups giving up after %s tries' % (i + 1))
415 | 
416 |     def getProductFiles(self, url):
417 |         for i in range(0, 3):
418 |             try:
419 |                 r = requests.get(
420 |                     url, headers=self.secure_headers, proxies=proxies)
421 |                 data = json.loads(r.content.decode('utf-8'))
422 |                 # print('getProductFiles %s' % (url))
423 |                 # print(json.dumps(data, sort_keys=True, indent=4))
424 |                 product_files = data.get('product_files')
425 |                 return product_files
426 |             except requests.exceptions.RequestException as e:
427 |                 print('getProductFiles (i=%s) %s e=%s' % (i, url, e))
428 |         print('getProductFiles giving up after %s tries' % (i + 1))
429 | 
430 |     def getProductFile(self, product_id, release_id, file_id):
431 |         url = self.secure_url + '/api/v2/products/' + str(product_id) \
432 |             + '/releases/' + str(release_id) + '/product_files/' \
433 |             + str(file_id)
434 |         # print(url)
435 |         for i in range(0, 3):
436 |             try:
437 |                 r = requests.get(
438 |                     url, headers=self.secure_headers, proxies=proxies)
439 |                 data = json.loads(r.content.decode('utf-8'))
440 |                 # print('getProductFile %s' % (url))
441 |                 # print(json.dumps(data, sort_keys=True, indent=4))
442 |                 product_file = data.get('product_file')
443 |                 return product_file
444 |             except requests.exceptions.RequestException as e:
445 |                 print('getProductFile (i=%s) %s e=%s' % (i, url, e))
446 |         print('getProductFile giving up after %s tries' % (i + 1))
447 | 
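Note: the five get* methods above repeat the same three-attempt GET pattern. A possible consolidation (a sketch only — `get_json` is a hypothetical helper, not part of the current source):

```
    def get_json(self, url, key, attempts=3):
        """Hypothetical shared helper for the retry pattern used above."""
        for i in range(attempts):
            try:
                r = requests.get(
                    url, headers=self.secure_headers, proxies=proxies)
                return json.loads(r.content.decode('utf-8')).get(key)
            except requests.exceptions.RequestException as e:
                print('get_json (attempt %s of %s) %s e=%s' %
                      (i + 1, attempts, url, e))
        print('get_json giving up on %s after %s tries' % (url, attempts))
```

Each caller would then reduce to a one-liner, e.g. `return self.get_json(url, 'products')`.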
-------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | pycurl==7.43.0 2 | requests==2.10.0 3 | SQLAlchemy==1.0.14 4 | pytoml==0.1.10 --------------------------------------------------------------------------------