├── .gitignore
├── LICENSE
├── README.md
├── buffer
│   └── README.md
├── drivedl
│   ├── __init__.py
│   ├── drivedl.py
│   └── util.py
├── requirements.txt
└── setup.py

/.gitignore:
--------------------------------------------------------------------------------
1 | buffer/*
2 | !buffer/README.md
3 | __pycache__/
4 | tokens/
5 | drivedl.egg-info/
6 | dist/
7 | build/
8 | venv/
9 | .idea/
10 |
11 | credentials.json
12 | config.json
13 | token.pickle
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2020 Archit Date
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | [![PyPI version](https://badge.fury.io/py/drivedl.svg)](https://badge.fury.io/py/drivedl)
2 |
3 | # drivedl
4 |
5 | This is a CLI tool for concurrently downloading directories from any drive type (My Drive, Team Drive or Shared with me).
6 |
7 | The tool currently requires the `'https://www.googleapis.com/auth/drive'` scope. This scope could be tightened, since all the script really needs is permission to traverse and download data from the drives. Feel free to open a PR with a narrower scope if one is more appropriate.
8 |
9 | ## Installation:
10 |
11 | - Install the package from PyPI using the following command:
12 | ```bash
13 | $ pip install drivedl
14 | ```
15 | - Run `drivedl` on the command line after installation; you will be asked to download a `credentials.json` file and place it in a specific directory.
16 | - Run `drivedl --add` after completing the previous step to add an account by signing in. (You will be redirected to a browser sign-in page.)
17 | - Congrats! You have successfully set up drivedl! Read the usage instructions below to see how to use the tool.
18 |
19 | ## Usage:
20 |
21 | ```bash
22 | $ drivedl <folder_id / url> <destination_path>
23 | ```
24 | It is as straightforward as that!
25 |
26 | Note that on the first run, you will have to authorize the scope of the application. This is pretty straightforward as well!
27 |
28 | ## Skipping existing files:
29 |
30 | Adding the `--skip` argument to your command will skip files that already exist instead of redownloading them.
31 | - By default, everything is downloaded and nothing is skipped.
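For example, a folder can be downloaded into a specific directory while skipping anything already present (the link and path below are placeholders; a bare folder ID works in place of the full link as well):

```bash
$ drivedl "https://drive.google.com/drive/folders/<folder_id>" "D:/Google Drive Downloads" --skip
```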
32 |
33 | ## File Abuse
34 |
35 | Adding the `--abuse` argument allows downloading files that Google has flagged for abuse.
36 | Passing it acknowledges that you are downloading a file Google has marked as potential malware or spam.
37 | An example of the underlying error can be found in [this issue](https://github.com/prasmussen/gdrive/issues/182).
38 |
39 | ## Assigning extra processes:
40 |
41 | Adding the `--proc` argument followed by an integer spawns that many worker processes to do the download. The default is 5 processes.
42 | - Example: `--proc 10` for 10 processes
43 |
44 | ## Downloading using process map instead of an iterated map:
45 |
46 | Adding the `--noiter` argument tells the program to download via `Pool.map` instead of `Pool.imap_unordered`. This can be faster, with the drawback that no progress bar is shown because there is no iterable to track. Recommended if speed is of the essence.
47 |
48 | ## Add extra accounts:
49 |
50 | Run the following command to add a new account. (An added account is also searched whenever drivedl looks for a file or folder.)
51 | ```bash
52 | $ drivedl --add
53 | ```
54 | You will have to authorize the application's scope for the new account as well. The token is saved automatically for future use once permission is granted!
55 |
56 | ## Searches:
57 |
58 | If you add `--search` to your command, you can look up a folder by keywords instead of supplying its link or ID. The search goes through all drives of all registered accounts and returns a maximum of 10 results per drive; there is no cap on the total number of results. The search is limited to folders and will not index loose files.
59 |
60 | An example of usage is as follows:
61 | ```
62 | $ drivedl "avengers endgame" --search "D:/Google Drive Downloads"
63 | ```
64 | This also works with default path configurations (stated below).
65 |
66 | ## Default Path [Optional]
67 |
68 | ```bash
69 | $ drivedl --path <default_path>
70 | ```
71 |
72 | This lets you set a default download location. Once a default path is configured, downloads go there whenever no destination path is given on the command line.
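The default path is saved to a `config.json` file in drivedl's working directory. A minimal sketch of what that file ends up containing (the path value below is just an example):

```json
{
    "default_path": "D:/Google Drive Downloads"
}
```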
73 |
74 | ## Debugging:
75 |
76 | Adding `--debug` writes a log file once the entire task is completed so that any problems can be documented. This is helpful when opening GitHub issues, since it makes it easier to pinpoint what went wrong in the script.
77 |
78 | ## Testing bleeding edge commits:
79 |
80 | - Download `credentials.json` for a Desktop Drive application. Instructions on how to get it can be found [here](https://developers.google.com/drive/api/v3/quickstart/python) (refer to Step 1).
81 | - Save the `credentials.json` file in the same directory as `drivedl.py`.
82 | - Install the Drive API dependencies for Python by running the following command:
83 | ```bash
84 | $ pip install -r requirements.txt
85 | ```
86 | - Usage is the same as above, except the script is invoked via `$ python drivedl.py`.
87 |
88 | ## TODO:
89 |
90 | - [x] Add URL parsing
91 | - [x] Add default path
92 | - [x] Single file download support
93 | - [x] Color support
94 | - [x] Multi-Account support
95 | - [x] Skip existing files
96 | - [x] Search functionality
97 | - [x] Functionality to download docs
98 |
--------------------------------------------------------------------------------
/buffer/README.md:
--------------------------------------------------------------------------------
1 | This is where the files will be downloaded as a buffer before being transferred to the destination.
--------------------------------------------------------------------------------
/drivedl/__init__.py:
--------------------------------------------------------------------------------
1 | import os, sys
2 |
3 | sys.path.append(os.path.dirname(os.path.abspath(__file__)))
--------------------------------------------------------------------------------
/drivedl/drivedl.py:
--------------------------------------------------------------------------------
1 | from __future__ import print_function
2 | import pickle, util, sys, tqdm, time, json, uuid
3 | import os.path
4 | from multiprocessing import Pool
5 | from colorama import Fore, Style
6 | from googleapiclient.discovery import build
7 | from google_auth_oauthlib.flow import InstalledAppFlow
8 | from google.auth.transport.requests import Request
9 | from googleapiclient.errors import HttpError
10 |
11 | # If modifying these scopes, delete the saved token files in tokens/.
12 | SCOPES = ['https://www.googleapis.com/auth/drive']
13 | PROCESS_COUNT = 5
14 |
15 | def add_account():
16 |     tokenfile = f'token_{str(uuid.uuid4())}.pickle'
17 |     os.makedirs('tokens', exist_ok=True)
18 |     flow = InstalledAppFlow.from_client_secrets_file(
19 |         'credentials.json', SCOPES)
20 |     creds = flow.run_local_server(port=0)
21 |     # Save the credentials for the next run
22 |     with open(f'tokens/{tokenfile}', 'wb') as token:
23 |         pickle.dump(creds, token)
24 |
25 | def migrate():
26 |     # Migrate a legacy token.pickle file into the tokens folder
27 |     os.makedirs('tokens', exist_ok=True)
28 |     if os.path.exists('token.pickle'):
29 |         print("Older token configuration detected. Migrating to the new setup")
30 |         new_name = f'token_{str(uuid.uuid4())}.pickle'
31 |         os.rename('token.pickle', f'tokens/{new_name}')
32 |
33 | def get_accounts():
34 |     return [x for x in os.listdir('tokens') if x.endswith('.pickle')]
35 |
36 | def get_service(tokenfile):
37 |     """Builds and returns an authorized Drive v3 service for the given
38 |     token file, refreshing or re-running the OAuth flow as needed.
39 |     """
40 |     creds = None
41 |     # The token file stores the user's access and refresh tokens, and is
42 |     # created automatically when the authorization flow completes for the
43 |     # first time.
44 |     if os.path.exists(f'tokens/{tokenfile}'):
45 |         with open(f'tokens/{tokenfile}', 'rb') as token:
46 |             creds = pickle.load(token)
47 |     # If there are no (valid) credentials available, let the user log in.
48 |     if not creds or not creds.valid:
49 |         if creds and creds.expired and creds.refresh_token:
50 |             creds.refresh(Request())
51 |         else:
52 |             flow = InstalledAppFlow.from_client_secrets_file(
53 |                 'credentials.json', SCOPES)
54 |             creds = flow.run_local_server(port=0)
55 |         # Save the credentials for the next run
56 |         with open(f'tokens/{tokenfile}', 'wb') as token:
57 |             pickle.dump(creds, token)
58 |
59 |     service = build('drive', 'v3', credentials=creds)
60 |     return service
61 |
62 | def download_helper(args):
63 |     rlc = util.download(*args)
64 |     return (args[1]['name'], rlc)
65 |
66 | def mapped_dl(args):
67 |     rlc = util.download(*args, noiter=True)
68 |     return (args[1]['name'], rlc)
69 |
70 | def main(console_call=True):
71 |     # Set path
72 |     if console_call:
73 |         os.chdir(os.path.dirname(os.path.abspath(__file__)))
74 |     else:
75 |         os.chdir(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
76 |
77 |     if not os.path.exists('credentials.json'):
78 |         print(f"Missing Credentials!\nEnable Drive API by clicking the blue button at https://developers.google.com/drive/api/v3/quickstart/python \nDownload the credentials.json file and save it here: {os.getcwd()}")
79 |
80 |     migrate()
81 |     accounts = get_accounts()
82 |     service = None
83 |     search = False
84 |     skip = False
85 |     noiter = False
86 |     abuse = False
87 |
88 |     # File Listing
89 |     if len(sys.argv) < 2:
90 |         print(f"Usage: {sys.argv[0]} <folder_id / url> <destination_path>")
91 |         sys.exit(-1)
92 |     else:
93 |         if sys.argv[1] == '--path':
94 |             util.save_default_path(sys.argv[2])
95 |             sys.exit(0)
96 |         if sys.argv[1] == '--add':
97 |             add_account()
98 |             print("Account successfully added!")
99 |             sys.exit(0)
100 |         if '--search' in sys.argv:
101 |             search = True
102 |             sys.argv.remove('--search')
103 |         if '--skip' in sys.argv:
104 |             skip = True
105 |             sys.argv.remove('--skip')
106 |         if '--abuse' in sys.argv:
107 |             abuse = True
108 |             sys.argv.remove('--abuse')
109 |         if '--debug' in sys.argv:
110 |             util.DEBUG = True
111 |             sys.argv.remove('--debug')
112 |         if '--noiter' in sys.argv:
113 |             noiter = True
114 |             sys.argv.remove('--noiter')
115 |         if '--proc' in sys.argv:
116 |             index = sys.argv.index('--proc')
117 |             PROCESS_COUNT = int(sys.argv[index + 1])
118 |             sys.argv.pop(index + 1)
119 |             sys.argv.pop(index)
120 |         else:
121 |             PROCESS_COUNT = 5
122 |         folderid = util.get_folder_id(sys.argv[1])
123 |         if len(sys.argv) > 2:
124 |             destination = sys.argv[2]
125 |         elif os.path.isfile('config.json'):
126 |             with open('config.json', 'r') as f:
127 |                 config = json.load(f)
128 |             destination = config['default_path']
129 |         else:
130 |             destination = os.getcwd()
131 |
132 |     file_dest = []
133 |
134 |     def build_files():
135 |         try:
136 |             kwargs = {'top': folderid, 'by_name': False}
137 |             for path, root, dirs, files in util.walk(service, **kwargs):
138 |                 path = ["".join([c for c in dirname if c.isalpha() or c.isdigit() or c in [' ', '-', '_', '.', '(', ')', '[', ']']]).rstrip() for dirname in path]
139 |                 for f in files:
140 |                     dest = os.path.join(destination, os.path.join(*path))
141 |                     f['name'] = "".join([c for c in f['name'] if c.isalpha() or c.isdigit() or c in [' ', '-', '_', '.', '(', ')', '[', ']']]).rstrip()
142 |                     file_dest.append((service, f, dest, skip, abuse))
143 |             if file_dest:
144 |                 # First valid account found, break to prevent further searches
145 |                 return True
146 |         except ValueError: # mimetype is not a folder
147 |             dlfile = service.files().get(fileId=folderid, supportsAllDrives=True).execute()
148 |             print(f"\nNot a valid folder ID.\nDownloading the file: {dlfile['name']}")
149 |             # Only use a single process for downloading 1 file
150 |             util.download(service, dlfile, destination, skip, abuse)
151 |             sys.exit(0)
152 |         except HttpError:
153 |             print(f"{Fore.RED}File not found in account: {acc}{Style.RESET_ALL}")
154 |             return False
155 |
156 |     searches = []
157 |     for acc in accounts:
158 |         service = get_service(acc)
159 |         if not search:
160 |             valid = build_files()
161 |             if valid: break
162 |         else:
163 |             searches += util.querysearch(service, folderid, 'rand', True)
164 |
165 |     if searches:
166 |         print("\nEnter the number of the folder you want to download:")
167 |         for i in range(len(searches)):
168 |             print(f" {Fore.YELLOW}{str(i+1).ljust(3)}{Style.RESET_ALL} {searches[i]['name']}")
169 |         print(f"\n{Fore.GREEN}Index:{Style.RESET_ALL} ", end='')
170 |         index = int(input()) - 1
171 |         if index + 1 > len(searches):
172 |             print(f"{Fore.RED}Invalid Index. Exiting.{Style.RESET_ALL}")
173 |             sys.exit(1)
174 |         folderid = searches[index]['id']
175 |         build_files()
176 |
177 |     if service is None:
178 |         # No accounts found with access to the drive link, exit gracefully
179 |         print("No valid accounts with access to the file/folder.")
180 |         print("Have you run the drivedl --add command? Exiting...")
181 |         sys.exit(1)
182 |     try:
183 |         p = Pool(PROCESS_COUNT)
184 |         if noiter:
185 |             p.map(mapped_dl, file_dest)
186 |         else:
187 |             pbar = tqdm.tqdm(p.imap_unordered(download_helper, file_dest), total=len(file_dest))
188 |             start = time.time()
189 |             for i in pbar:
190 |                 rlc = i[1]
191 |                 status, main_str, end_str = util.get_download_status(rlc, start)
192 |                 pbar.write(status + main_str + f' {i[0]}' + end_str)
193 |         p.close()
194 |         p.join()
195 |         if util.DEBUG:
196 |             util.debug_write(f'debug_{int(time.time())}.log')
197 |     except ImportError:
198 |         # Multiprocessing is not supported (example: Android Devices)
199 |         for fd in file_dest:
200 |             download_helper(fd)
201 |
202 | if __name__ == '__main__':
203 |     main(False)
204 |
--------------------------------------------------------------------------------
/drivedl/util.py:
--------------------------------------------------------------------------------
1 | from googleapiclient.discovery import build
2 | from google_auth_oauthlib.flow import InstalledAppFlow
3 | from google.auth.transport.requests import Request
4 | from googleapiclient.http import MediaIoBaseDownload
5 | from colorama import Fore, Style
6 | import io, os, shutil, uuid, sys, json, time
7 |
8 | FOLDER = 'application/vnd.google-apps.folder'
9 | DEBUG = False
10 | CHUNK_SIZE = 20 * 1024 * 1024 # 20MB chunks
11 |
12 | DEBUG_STATEMENTS = [] # cache all debug statements
13 |
14 | def debug_write(logfile):
15 |     with open(logfile, 'w') as f:
16 |         f.write('\n'.join(DEBUG_STATEMENTS))
17 |     print(f"{Fore.YELLOW}DEBUG LOG SAVED HERE:{Style.RESET_ALL} {logfile}")
18 |
19 | def list_td(service):
20 |     # Call the Drive v3 API
21 |     results = service.drives().list(pageSize=100).execute()
22 |
23 |     if not results['drives']:
24 |         return None
25 |     else:
26 |         return results['drives']
27 |
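# Note: iterfiles() below composes a Drive v3 `q` filter from its arguments and then
# pages through files().list() results. For example (with placeholder values),
# iterfiles(service, name='Reports', is_folder=True, parent='abc123') queries with
#   q = "name = 'Reports' and mimeType = 'application/vnd.google-apps.folder' and 'abc123' in parents"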
28 | def iterfiles(service, name=None, is_folder=None, parent=None, order_by='folder,name,createdTime'):
29 |     q = []
30 |     if name is not None:
31 |         q.append("name = '%s'" % name.replace("'", "\\'"))
32 |     if is_folder is not None:
33 |         q.append("mimeType %s '%s'" % ('=' if is_folder else '!=', FOLDER))
34 |     if parent is not None:
35 |         q.append("'%s' in parents" % parent.replace("'", "\\'"))
36 |     params = {'pageToken': None, 'orderBy': order_by, 'includeItemsFromAllDrives': True, 'supportsAllDrives': True}
37 |     if q:
38 |         params['q'] = ' and '.join(q)
39 |     while True:
40 |         response = service.files().list(**params).execute()
41 |         for f in response['files']:
42 |             yield f
43 |         try:
44 |             params['pageToken'] = response['nextPageToken']
45 |         except KeyError:
46 |             return
47 |
48 | def walk(service, top='root', by_name=False):
49 |     if by_name:
50 |         top, = iterfiles(service, name=top, is_folder=True)
51 |     else:
52 |         top = service.files().get(fileId=top, supportsAllDrives=True).execute()
53 |         if top['mimeType'] != FOLDER:
54 |             raise ValueError('not a folder: %r' % top)
55 |     stack = [((top['name'],), top)]
56 |     print(f"Indexing: {Fore.YELLOW}{top['name']}{Style.RESET_ALL}\nFolder ID: {Fore.YELLOW}{top['id']}{Style.RESET_ALL}\n")
57 |     while stack:
58 |         path, top = stack.pop()
59 |         dirs, files = is_file = [], []  # is_file[False] collects folders, is_file[True] collects files
60 |         for f in iterfiles(service, parent=top['id']):
61 |             is_file[f['mimeType'] != FOLDER].append(f)
62 |         yield path, top, dirs, files
63 |         if dirs:
64 |             stack.extend((path + (d['name'],), d) for d in reversed(dirs))
65 |
66 | def querysearch(service, name=None, drive_id=None, is_folder=None, parent=None, order_by='folder,name,createdTime'):
67 |     q = []
68 |     items = []
69 |     if name is not None:
70 |         q.append("name contains '%s'" % name.replace("'", "\\'"))
71 |     if is_folder is not None:
72 |         q.append("mimeType %s '%s'" % ('=' if is_folder else '!=', FOLDER))
73 |     if parent is not None:
74 |         q.append("'%s' in parents" % parent.replace("'", "\\'"))
75 |     if drive_id is None:
76 |         params = {'pageToken': None, 'orderBy': order_by, 'includeItemsFromAllDrives': True, 'supportsAllDrives': True}
77 |     else:
78 |         params = {'pageToken': None, 'orderBy': order_by, 'includeItemsFromAllDrives': True, 'supportsAllDrives': True, 'corpora': 'allDrives'}
79 |     if q:
80 |         params['q'] = ' and '.join(q)
81 |     while len(items) < 10:
82 |         response = service.files().list(**params).execute()
83 |         for f in response['files']:
84 |             items.append(f)
85 |         try:
86 |             params['pageToken'] = response['nextPageToken']
87 |         except KeyError:
88 |             break
89 |     return items
90 |
91 | def download(service, file, destination, skip=False, abuse=False, noiter=False):
92 |     # add file extension if we don't have one
93 |     mimeType = file['mimeType']
94 |     if "application/vnd.google-apps" in mimeType:
95 |         if "form" in mimeType: return -1
96 |         elif "document" in mimeType:
97 |             ext_file = '.docx'
98 |         elif "spreadsheet" in mimeType:
99 |             ext_file = '.xlsx'
100 |         elif "presentation" in mimeType:
101 |             ext_file = '.pptx'
102 |         else:
103 |             ext_file = '.pdf'
104 |         root, ext = os.path.splitext(file['name'])
105 |         if not ext:
106 |             file['name'] = file['name'] + ext_file
107 |     # file is a dictionary with file id as well as name
108 |     if skip and os.path.exists(os.path.join(destination, file['name'])):
109 |         return -1
110 |     if "application/vnd.google-apps" in mimeType:
111 |         if "form" in mimeType: return -1
112 |         elif "document" in mimeType:
113 |             dlfile = service.files().export_media(fileId=file['id'], mimeType='application/vnd.openxmlformats-officedocument.wordprocessingml.document')
114 |         elif "spreadsheet" in mimeType:
115 |             dlfile = service.files().export_media(fileId=file['id'], mimeType='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet')
116 |         elif "presentation" in mimeType:
117 |             dlfile = service.files().export_media(fileId=file['id'], mimeType='application/vnd.openxmlformats-officedocument.presentationml.presentation')
118 |         else:
119 |             dlfile = service.files().export_media(fileId=file['id'], mimeType='application/pdf')
120 |     else:
121 |         dlfile = service.files().get_media(fileId=file['id'], supportsAllDrives=True, acknowledgeAbuse=abuse)
122 |     rand_id = str(uuid.uuid4())
123 |     os.makedirs('buffer', exist_ok=True)
124 |     fh = io.FileIO(os.path.join('buffer', rand_id), 'wb')
125 |     downloader = MediaIoBaseDownload(fh, dlfile, chunksize=CHUNK_SIZE)
126 |     if noiter: print(f"{Fore.GREEN}Downloading{Style.RESET_ALL} {file['name']} ...")
127 |     done = False
128 |     rate_limit_count = 0
129 |     while done is False and rate_limit_count < 20:
130 |         try:
131 |             status, done = downloader.next_chunk()
132 |         except Exception as ex:
133 |             if "abuse" in str(ex).lower():
134 |                 if not noiter: print()
135 |                 print(f"{Fore.RED}Abuse error for file{Style.RESET_ALL} {file['name']} ...")
136 |                 rate_limit_count = 21
137 |             DEBUG_STATEMENTS.append(f'File Name: {file["name"]}, File ID: {file["id"]}, Exception: {ex}')
138 |             rate_limit_count += 1
139 |     fh.close()
140 |     if noiter and rate_limit_count == 20: print(f"{Fore.RED}Error {Style.RESET_ALL} {file['name']} ...")
141 |     os.makedirs(destination, exist_ok=True)
142 |     while True:
143 |         try:
144 |             shutil.move(os.path.join('buffer', rand_id), os.path.join(destination, file['name']))
145 |             break
146 |         except PermissionError:
147 |             # wait out the file write before attempting to move
148 |             pass
149 |     return rate_limit_count
150 |
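# For reference, get_folder_id() below reduces the common Drive link formats to a bare
# ID (the ID shown is made up); anything that is not a drive.google.com link is
# returned unchanged:
#   https://drive.google.com/drive/folders/1AbCdEfGh?usp=sharing -> 1AbCdEfGh
#   https://drive.google.com/file/d/1AbCdEfGh/view?usp=sharing   -> 1AbCdEfGh
#   https://drive.google.com/open?id=1AbCdEfGh                   -> 1AbCdEfGh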
151 | def get_folder_id(link):
152 |     # function to isolate the folder id
153 |     if 'drive.google.com' in link:
154 |         link = link.split('/view')[0].split('/edit')[0] # strip /view and /edit suffixes from file links
155 |         link = link.rsplit('/', 1)[-1] # keep only the part after the final slash
156 |         link = link.split('?usp')[0] # ignore usp=sharing and usp=edit
157 |         # open?id=
158 |         link = link.rsplit('open?id=')[-1] # only take what is after open?id=
159 |         return link
160 |     else:
161 |         return link
162 |
163 | def save_default_path(path):
164 |     if os.path.isfile('config.json'):
165 |         with open('config.json', 'r') as f:
166 |             config = json.load(f)
167 |         config['default_path'] = path
168 |     else:
169 |         config = {}
170 |         config['default_path'] = path
171 |     with open('config.json', 'w') as f:
172 |         f.write(json.dumps(config, indent=4))
173 |
174 | def get_download_status(rlc, start):
175 |     if rlc == -1: # skipped file
176 |         status = f'{Fore.CYAN}Skipped: {Style.RESET_ALL} '
177 |     elif rlc == 0:
178 |         status = f'{Fore.GREEN}Downloaded:{Style.RESET_ALL} '
179 |     elif rlc < 20:
180 |         status = f'{Fore.YELLOW}Warning: {Style.RESET_ALL} '
181 |     else:
182 |         status = f'{Fore.RED}Error: {Style.RESET_ALL} '
183 |     time_req = str(int(time.time() - start)) + 's'
184 |     main_str = f'{Fore.BLUE}[Time: {time_req.rjust(5)}]{Style.RESET_ALL}'
185 |     end_str = ''
186 |     if rlc > 0 and rlc < 20:
187 |         end_str += f' [Rate Limit Count: {rlc}] File saved'
188 |     elif rlc >= 20:
189 |         end_str += f' [Rate Limit Count: {rlc}] Partial file saved'
190 |     return (status, main_str, end_str)
191 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | google-api-python-client
2 | google-auth-httplib2
3 | google-auth-oauthlib
4 | tqdm
5 | colorama
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup, find_packages
2 |
3 | package_name = "drivedl"
4 |
5 | with open("README.md", "r") as f:
6 |     readme = f.read()
7 |
8 | setup(
9 |     name="drivedl",
version="1.2", 11 | description="Download files from Google Drive (inclusive of teamdrives, shared with me and my drive) concurrently.", 12 | long_description=readme, 13 | long_description_content_type="text/markdown", 14 | url="https://github.com/architdate/drivedl", 15 | packages=find_packages(), 16 | python_requires=">=3.6", 17 | install_requires=[ 18 | "google-api-python-client", 19 | "google-auth-httplib2", 20 | "google-auth-oauthlib", 21 | "tqdm", 22 | "colorama" 23 | ], 24 | license='MIT', 25 | zip_safe=False, 26 | classifiers=[ 27 | "License :: OSI Approved :: MIT License", 28 | "Programming Language :: Python", 29 | "Programming Language :: Python :: 3.6", 30 | "Programming Language :: Python :: 3.7", 31 | "Programming Language :: Python :: 3.8", 32 | ], 33 | author="Archit Date", 34 | author_email="architdate@gmail.com", 35 | entry_points={ 36 | 'console_scripts': [ 37 | 'drivedl=drivedl.drivedl:main', 38 | ], 39 | }, 40 | ) --------------------------------------------------------------------------------