├── .gitignore ├── README.md ├── bandcamp.py ├── mkspectrograms.sh ├── redacted.py ├── redcamp.py ├── requirements.txt ├── scrape.py ├── setup.sh ├── transcode.py └── utils.py /.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__/ 2 | releases.txt 3 | cache.txt -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## Introduction 2 | `redcamp` is a script which assists in the process of uploading Bandcamp releases to Redacted. Inspired by [REDBetter](https://github.com/Mechazawa/REDBetter-crawler). 3 | 4 | ## Disclaimers 5 | * This script is meant as a helper tool, not a fully automated uploader. **Please review all your uploads before making them.** Failure to do so can result in a warning or worse. 6 | * Bandcamp has no restrictions on what can be uploaded. The script has no way to discern a good release from a "spam release". Please make sure your upload meets the criteria before uploading. 7 | > Zero Effort, Spam Releases - Creators that prolifically release zero effort content of no value (e.g Paul_DVR, Vap0rwave, Firmensprecher, Phyllomedusa, stretches), often solely for the purpose of spam or upload gain. 8 | * Although most FLACs on Bandcamp are truly lossless, some are transcodes of lossy sources. The script uses Lossless Audio Checker to check for possible transcodes, but it isn't 100% accurate. Please check the spectrals for each release before uploading. 9 | * The script is not perfect, and sometimes makes mistakes. Use at your own risk. 10 | 11 | ## Dependencies 12 | 13 | * Python 3.6 or newer 14 | * `mktorrent` 15 | * `coloredlogs`, `musicbrainzngs`, `mutagen`, `ptpimg_uploader`, and `verboselogs` Python modules 16 | * `sox` and `ffmpeg` 17 | * [Lossless Audio Checker](http://losslessaudiochecker.com/) 18 | * [Firefox](https://www.mozilla.org/en-US/firefox/) (for `scrape.py`) 19 | * `geckodriver_autoinstaller` and `selenium` Python modules (for `scrape.py`) 20 | 21 | 22 | ## Installation 23 | > :information_source: If you have Python installed, run `setup.sh` to install and configure all of the necessary dependencies (requires sudo). 24 | 25 | #### 1. Install Python 26 | 27 | Python is available [here](https://www.python.org/downloads/). 28 | 29 | #### 2. Install `mktorrent` 30 | 31 | `mktorrent` must be built from source, rather than installed using a package manager. For Linux systems, run the following commands in a temporary directory: 32 | 33 | ~~~~ 34 | $> git clone git@github.com:Rudde/mktorrent.git 35 | $> cd mktorrent 36 | $> make && sudo make install 37 | ~~~~ 38 | 39 | If you are on a seedbox and you lack the privileges to install packages, you are best off contacting your seedbox provider and asking them to install the listed packages. 40 | 41 | #### 3. Install `coloredlogs`, `musicbrainzngs`, `mutagen`, `ptpimg_uploader`, and `verboselogs` Python Modules 42 | 43 | ~~~~ 44 | pip install -r requirements.txt 45 | ~~~~ 46 | 47 | #### 4. Install `sox` and `ffmpeg` 48 | 49 | These should all be available from your package manager of choice: 50 | * Debian: `sudo apt-get install sox ffmpeg` 51 | * Ubuntu: `sudo apt install sox ffmpeg` 52 | * macOS: `brew install sox ffmpeg` 53 | 54 | #### 5. Install Lossless Audio Checker 55 | 56 | For Linux systems, run the following commands in the script's directory: 57 | 58 | ~~~~ 59 | wget --content-disposition "http://losslessaudiochecker.com/dl/LAC-Linux-64bit.tar.gz" 60 | tar xzvf LAC-Linux-64bit.tar.gz 61 | rm LAC-Linux-64bit.tar.gz 62 | ~~~~ 63 | 64 | > :information_source: Steps 6 and 7 are only required if you want to run `scrape.py`. 65 | 66 | #### 6. Install Firefox 67 | 68 | Firefox is available [here](https://www.mozilla.org/en-US/firefox/new/). 69 | 70 | #### 7. Install `geckodriver_autoinstaller` and `selenium` Python Modules 71 | 72 | ~~~~ 73 | pip install geckodriver_autoinstaller selenium 74 | ~~~~ 75 | 76 | 77 | ### Configuration 78 | Run `redcamp` by running the script included in the cloned repository: 79 | 80 | $> ./redcamp.py 81 | 82 | You will receive a notification stating that you should edit the configuration file located at: 83 | 84 | ~/.redcamp/config 85 | 86 | Open this file in your preferred text editor, and configure as desired. The options are as follows: 87 | 88 | ##### redacted 89 | 90 | * `api_key`: Your redacted.ch API key. Generate one in your access settings under your profile. 91 | * `session_cookie`: Your redacted.ch session cookie (optional; only needed to report Lossy WEB releases). 92 | * `data_dir`: The directory where your torrent downloads are stored. 93 | * `output_dir`: The directory where the releases will be downloaded. 94 | * `torrent_dir`: The directory where the generated `.torrent` files are stored. 95 | 96 | ##### ptpimg 97 | 98 | * `api_key`: Your ptpimg.me API key. To find it, log in to https://ptpimg.me, open the page source (i.e. "View -> Developer -> View source" menu in Chrome), find the string api_key and copy the hexadecimal string from the value attribute. 99 | 100 | You should also edit the variables `blacklisted_tags` and `cutoff_year` in `scrape.py` to suit your preferences. 101 | 
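For reference, a filled-in configuration might look something like the sketch below; every value here is a placeholder, so substitute your own keys and paths. The generated file also contains a `piece_length` option, which is passed to `mktorrent -l` as the piece-size exponent (a piece size of 2^n bytes), with 18 as the default.

~~~~
[redacted]
api_key = your_redacted_api_key
session_cookie = your_session_cookie_value
data_dir = ~/redcamp/data
output_dir = ~/redcamp/downloads
torrent_dir = ~/redcamp/torrents
piece_length = 18

[ptpimg]
api_key = your_ptpimg_api_key
~~~~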
102 | ## Usage 103 | ~~~~ 104 | usage: redcamp [-h] [--config CONFIG] [--download-releases] [--release-file RELEASE_FILE] 105 | 106 | optional arguments: 107 | -h, --help show this help message and exit 108 | --config CONFIG Location of configuration file (default: ~/.redcamp/config) 109 | --download-releases Download releases from file (default: False) 110 | --release-file RELEASE_FILE 111 | Location of release file (default: ./releases.txt) 112 | ~~~~ 113 | 114 | ### Examples 115 | To scrape the Bandcamp homepage for new free releases: 116 | 117 | $> ./scrape.py 118 | 119 | When you run the script, it will ask you for the number of releases to grab; I recommend no more than 50 at a time. It will scrape the release URL and the download link (FLAC) and save them to `releases.txt`. This script requires Firefox ≥ 60 and `geckodriver`. If you have issues using this script, I recommend commenting out the line `options.headless = True` and running it on a machine with a desktop environment so you can observe the output. If your Firefox version is too old, run it on a different machine and copy the release file over manually. 120 | 121 | To download the releases in `releases.txt` and upload them: 122 | 123 | $> ./redcamp.py --download-releases 124 | 125 | To process and upload the releases in `output_dir`: 126 | 127 | $> ./redcamp.py 128 | 129 | If your releases are downloaded automatically, REDCamp caches their URLs for later use; otherwise it will attempt to search Bandcamp for the album. Releases downloaded from Bandcamp follow the naming format `Artist - Album.zip`. Releases are tagged using metadata from Bandcamp and MusicBrainz. If any information is missing, the script will prompt you to enter it manually. It also checks whether a release is a duplicate on Redacted and skips it.
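As a rough illustration (hypothetical artist, album, and track names), a release that has been extracted, cleaned, and renamed ends up laid out like this before uploading, with any files whose extensions are not allowed on Redacted removed:

~~~~
Artist - Album [FLAC]/
├── 01 First Track.flac
├── 02 Second Track.flac
├── 03 Third Track.flac
└── cover.jpg
~~~~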
130 | 131 | Spectrals are automatically generated using `mkspectrograms.sh` and uploaded to ptpimg.me. If a session cookie is added, you can also report the album as a Lossy WEB. 132 | 133 | ## Bugs and Feature Requests 134 | If you have any issues using the script, or would like to suggest a feature, feel free to open an issue in the issue tracker, *provided that you have searched for similar issues already*. Pull requests are also welcome. 135 | 136 | ## Credits 137 | * [Mechazawa](https://github.com/Mechazawa) for [REDBetter](https://github.com/Mechazawa/REDBetter-crawler) 138 | * [AnstrommFeck](https://redacted.ch/user.php?id=7191) for [mkspectrograms.sh](https://redacted.ch/forums.php?action=viewthread&threadid=42695) 139 | * [Lossless Audio Checker](http://losslessaudiochecker.com/) -------------------------------------------------------------------------------- /bandcamp.py: -------------------------------------------------------------------------------- 1 | import re 2 | import urllib.error 3 | 4 | from bs4 import BeautifulSoup 5 | from datetime import datetime 6 | from urllib.parse import quote 7 | from urllib.request import urlopen 8 | 9 | def calc_length(tracks): 10 | length = 0 11 | for track in tracks: 12 | if track['length'].count(":") == 1: # "mm:ss" (otherwise "h:mm:ss") 13 | m, s = track['length'].split(":") 14 | length += 60 * int(m) + int(s) 15 | else: 16 | h, m, s = track['length'].split(":") 17 | length += 60 * 60 * int(h) + 60 * int(m) + int(s) 18 | m, s = divmod(length, 60) 19 | if m >= 60: 20 | h, m = divmod(m, 60) 21 | return f'{h:d}:{m:02d}:{s:02d}' 22 | return f'{m:02d}:{s:02d}' 23 | 24 | def parse_results(query, album_name, artist_name): 25 | soup = BeautifulSoup(urlopen(query).read(), 'html.parser') 26 | 27 | results = soup.find_all("li", {"class":"searchresult album"}) 28 | 29 | for result in results: 30 | info = result.find("div", {"class":"result-info"}) 31 | heading = info.find("div", {"class":"heading"}).find("a").contents[0] 32 | subhead = info.find("div", {"class":"subhead"}).contents[0] 33 | itemurl = info.find("div", {"class":"itemurl"}).find("a").contents[0] 34 | 35 | album = heading.strip() 36 | artist = re.sub(r'^by\s+', '', subhead.strip()) # remove the leading "by "; lstrip("by ") would also eat leading 'b'/'y' characters 37 | 38 | if album == album_name and artist == artist_name: 39 | return itemurl 40 | 41 | return None 42 | 43 | def get_album_url(album_name, artist_name): 44 | query_urls = [] 45 | query_urls.append('https://bandcamp.com/search?q=' + quote(album_name)) 46 | query_urls.append('https://bandcamp.com/search?q=' + quote(artist_name)) 47 | query_urls.append('https://bandcamp.com/search?q=' + quote(album_name) + "%20" + quote(artist_name)) 48 | query_urls.append('https://bandcamp.com/search?q=' + quote(artist_name) + "%20" + quote(album_name)) 49 | 50 | for query in query_urls: 51 | results = parse_results(query, album_name, artist_name) 52 | if results: 53 | return results 54 | 55 | return None 56 | 57 | def get_album_info(url): 58 | try: 59 | soup = BeautifulSoup(urlopen(url).read(), 'html.parser') 60 | except urllib.error.HTTPError: 61 | return False 62 | 63 | album = soup.find("h2", {"class":"trackTitle"}).contents[0].strip().replace("\u200B", "") 64 | artist = soup.find("span", {"itemprop":"byArtist"}).find("a").contents[0].replace("\u200B", "") 65 | cover_art = soup.find("a", {"class": "popupImage"})['href'] 66 | 67 | tracks = [] 68 | tracklist = soup.find_all("td", {"class":"title-col"}) 69 | for track in 
tracklist: 70 | name = track.find("span", {"class":"track-title"}).contents[0].replace("\u200B", "") 71 | element = track.find("span", {"class":"time secondaryText"}) 72 | if not element: 73 | continue 74 | length = element.contents[0].strip() 75 | tracks.append({"name":name, "length":length}) 76 | 77 | length = calc_length(tracks) 78 | 79 | credits = soup.find("div", {"class":"tralbumData tralbum-credits"}).contents[0].strip() 80 | release_year = datetime.strptime(credits.strip("released "), '%B %d, %Y').year 81 | 82 | tags = [] 83 | taglist = soup.find_all("a", {"class":"tag"}) 84 | for tag in taglist: 85 | tag_text = tag.contents[0] 86 | if tag_text.islower(): 87 | tags.append(tag_text.strip()) 88 | 89 | return {"album":album, "artist":artist, "cover_art":cover_art, "length":length, "release_year":release_year, "tracks":tracks, "tags":tags, "url":url} 90 | -------------------------------------------------------------------------------- /mkspectrograms.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # about: mkspectrograms.sh -- make spectrograms with SoX, using env_parallel if available 4 | # version: 01 5 | # depends: bash, sox, basename, dirname 6 | # recommends: env_parallel (GNU Parallel) 7 | # usage: $> mkspectrograms.sh [OPTION [ARGUMENT]]... [--] FLAC_OR_FOLDER [FLAC_OR_FOLDER]... 8 | 9 | #################### 10 | ### DEFAULT SETTINGS ### 11 | 12 | ## threads for env_parallel 13 | threads="4" 14 | 15 | ## SoX spectrogram commands 16 | full_spectrogram_command="-n remix 1 spectrogram -x 3000 -y 513 -z 120 -w Kaiser" 17 | zoom_spectrogram_command="-n remix 1 spectrogram -x 500 -y 1025 -z 120 -w Kaiser" # -S 1:00 -d 0:02 # -S: start time, -d: duration 18 | 19 | ## zoomed spectrogram settings 20 | zoom_start="3" # divisor used against track duration to set starting position of zoomed spectrograms 21 | zoom_total="2" # length of zoomed spectrograms in seconds 22 | 23 | ## spectrogram title length limit 24 | # (mostly just zoomed) titles may grow too long to fit within the width of a spectrogram, and then SoX will 25 | # just omit them entirely. So counting backwards from the end, cut longer titles down to this many characters 26 | full_title_length_limit="" 27 | zoom_title_length_limit="85" # leave unset ( eg: _limit="" -NOT- _limit="0" ) to ignore either limit 28 | 29 | ## homebrew bin -- for Mac users, Automator defaults to ignoring Homebrew (or /usr/local/bin and users' default PATH anyway) 30 | homebrew_bin="/usr/local/bin" 31 | 32 | ## SoX spectrogram commands - DEFAULTS 33 | #full_spectrogram_command="-n remix 1 spectrogram -x 3000 -y 513 -z 120 -w Kaiser" 34 | #zoom_spectrogram_command="-n remix 1 spectrogram -x 500 -y 1025 -z 120 -w Kaiser" # -S 1:00 -d 0:02 # -S: start time, -d: duration 35 | 36 | ### DEFAULT SETTINGS ### 37 | #################### 38 | 39 | 40 | ## functions ## 41 | _error () { printf >&2 '%sERROR%s: %s\n' $'\033[0;31m' $'\033[0m' "$@" ; } 42 | 43 | _help () { 44 | printf ' 45 | Usage: 46 | %s [OPTION [ARGUMENT]]... [--] FLAC_OR_FOLDER [FLAC_OR_FOLDER]... 47 | 48 | Options: 49 | -h, -H, --help Print this help text. 50 | -- End of options. Subsequent arguments treated as potential flacs. 51 | 52 | -t ARG, --threads ARG Set the number of concurrent threads to "ARG" if env_parallel is detected. 53 | 54 | -F, --full-only Only create full sized spectrograms. 55 | -Z, --zoom-only Only create zoomed spectrograms. 56 | 57 | -f ARG, --full-title ARG Set full spectrogram title length limit to "ARG". 
58 | -z ARG, --zoom-title ARG Set zoom spectrogram title length limit to "ARG". 59 | 60 | -s ARG, --zoom-start ARG Set the zoomed-spectrogram-start-point-divisor to "ARG". 61 | -l ARG, --zoom-length ARG Set the length in seconds of zoomed spectrograms to "ARG". 62 | 63 | ' "${0##*/}" 64 | } 65 | 66 | _spectrograms () { # takes -one- array index as argument 67 | # output dir 68 | [[ ! -d ${absolute_flac_dirs[$1]}/Spectrograms/ ]] && 69 | if ! mkdir -p "${absolute_flac_dirs[$1]}"/Spectrograms/ ;then 70 | _error "Failure creating output folder '${absolute_flac_dirs[$1]##*/}/Spectrograms/'." 71 | _error "Aborting spectrogram creation for ${flac_filenames[$1]}." 72 | return 1 73 | fi 74 | 75 | local spectrogram_title="${absolute_flac_dirs[$1]##*/}/${flac_filenames[$1]}" 76 | 77 | # full spectrogram 78 | [[ $zoom_only != "1" ]] && { 79 | [[ -n $full_title_length_limit && ${#spectrogram_title} -gt $full_title_length_limit ]] && spectrogram_title="... ${spectrogram_title: -$full_title_length_limit}" 80 | if ! sox "${absolute_flacs[$1]}" $full_spectrogram_command \ 81 | -t "$spectrogram_title" \ 82 | -c " ${flac_bit_depths[$1]} bit | ${flac_sample_rates[$1]} Hz | ${flac_durations[$1]%.*} sec" \ 83 | -o "${absolute_flac_dirs[$1]}"/Spectrograms/"${flac_filenames[$1]%.[Ff][Ll][Aa][Cc]}"-full.png 84 | then 85 | _error "Failure creating full spectrogram for '${absolute_flacs[$1]}'." 86 | local fail="1" 87 | fi 88 | } 89 | 90 | # zoomed spectrogram 91 | [[ $full_only != "1" ]] && { 92 | local start_zoom="$(( ${flac_durations[$1]%.*} / $zoom_start ))" 93 | local zoom_spectrogram_command="$zoom_spectrogram_command -S 0:$start_zoom -d 0:$zoom_total" 94 | 95 | [[ -n $zoom_title_length_limit && ${#spectrogram_title} -gt $zoom_title_length_limit ]] && spectrogram_title="... ${spectrogram_title: -$zoom_title_length_limit}" 96 | if ! sox "${absolute_flacs[$1]}" $zoom_spectrogram_command \ 97 | -t "$spectrogram_title" \ 98 | -c " ${flac_bit_depths[$1]} bit | ${flac_sample_rates[$1]} Hz | $zoom_total sec | starting @ $start_zoom sec" \ 99 | -o "${absolute_flac_dirs[$1]}"/Spectrograms/"${flac_filenames[$1]%.[Ff][Ll][Aa][Cc]}"-zoom.png 100 | then 101 | _error "Failure creating zoomed spectrogram for '${absolute_flacs[$1]}'." 102 | local fail="1" 103 | fi 104 | } 105 | 106 | [[ $fail -eq "1" ]] && return 1 107 | } 108 | 109 | 110 | ## runtime options ## 111 | while true ;do 112 | case "$1" in 113 | -f|--full-title) 114 | full_title_length_limit="$2" 115 | shift 2 116 | ;; 117 | -F|--full-only) 118 | full_only="1" 119 | shift 120 | ;; 121 | -h|-H|--help) 122 | _help ;exit 0 123 | ;; 124 | -l|--zoom-length) 125 | zoom_total="$2" 126 | shift 2 127 | ;; 128 | -s|--zoom-start) 129 | zoom_start="$2" 130 | shift 2 131 | ;; 132 | -t|--threads) 133 | threads="$2" 134 | shift 2 135 | ;; 136 | -z|--zoom-title) 137 | zoom_title_length_limit="$2" 138 | shift 2 139 | ;; 140 | -Z|--zoom-only) 141 | zoom_only="1" 142 | shift 143 | ;; 144 | --) 145 | break 146 | ;; 147 | -?*) 148 | _error "Unknown option: '$1'" ;_help ;exit 1 149 | ;; 150 | *) 151 | break 152 | ;; 153 | esac 154 | done 155 | [[ $full_only == "1" && $zoom_only == "1" ]] && unset full_only zoom_only 156 | 157 | 158 | ## etc ## 159 | [[ $# -lt "1" ]] && { _error "Nothing to do, please specify at least one flac file or folder. Use '--help' switch for more info." 
;exit 1 ; } 160 | [[ $PATH != *"$homebrew_bin"* ]] && PATH=$PATH:"$homebrew_bin" 161 | pwd="$PWD" 162 | 163 | 164 | ## flac/folder arguments ## 165 | shopt -s nullglob 166 | for arg in "$@" ;do 167 | if [[ -d $arg ]] ;then 168 | flacs+=( "$arg"/*.[Ff][Ll][Aa][Cc] ) 169 | elif [[ $arg == *.[Ff][Ll][Aa][Cc] ]] ;then 170 | flacs+=( "$arg" ) 171 | fi 172 | done 173 | shopt -u nullglob 174 | [[ ${#flacs[@]} -lt "1" ]] && { _error "No flac files found, aborting. Use '--help' switch for more info." ;exit 1 ; } 175 | for flac in "${flacs[@]}" ;do 176 | # can I get a reliable dupe-check without using realpath to fancify absolute paths first? 177 | if [[ ${flac:0:1} = "/" ]] ;then absolute_flacs+=( "$flac" ) ;else absolute_flacs+=( "${pwd}/${flac}" ) ;fi 178 | done 179 | 180 | 181 | ## making arrays 182 | for index in "${!absolute_flacs[@]}" ;do 183 | absolute_flac_dirs[$index]="$( dirname "${absolute_flacs[$index]}" )" 184 | flac_filenames[$index]="$( basename "${absolute_flacs[$index]}" )" 185 | flac_bit_depths[$index]="$( sox --i -b "${absolute_flacs[$index]}" )" 186 | flac_sample_rates[$index]="$( sox --i -r "${absolute_flacs[$index]}" )" 187 | flac_durations[$index]="$( sox --i -D "${absolute_flacs[$index]}" )" 188 | done 189 | 190 | 191 | ## run that motherfuckin' rhythm 'cause we're ready 192 | if command -v env_parallel > /dev/null 2>&1 ;then 193 | . "$( which env_parallel.bash )" # setup env_parallel... aaand apparently that's it, if you're not worried about a too-big environment? 194 | env_parallel --will-cite -j "$threads" _spectrograms ::: "${!absolute_flacs[@]}" 195 | else 196 | for index in "${!absolute_flacs[@]}" ;do 197 | _spectrograms "$index" 198 | done 199 | fi -------------------------------------------------------------------------------- /redacted.py: -------------------------------------------------------------------------------- 1 | import re 2 | import os 3 | import html 4 | import json 5 | import time 6 | import logging 7 | import requests 8 | 9 | import utils 10 | 11 | headers = { 12 | 'Connection': 'keep-alive', 13 | 'Cache-Control': 'max-age=0', 14 | 'User-Agent': 'REDCamp', 15 | 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 16 | 'Accept-Encoding': 'gzip,deflate,sdch', 17 | 'Accept-Language': 'en-US,en;q=0.8', 18 | 'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3' 19 | } 20 | 21 | types = { 22 | "Album": 1, 23 | "Soundtrack": 3, 24 | "EP": 5, 25 | "Anthology": 6, 26 | "Compilation": 7, 27 | "Single": 9, 28 | "Live album": 11, 29 | "Remix": 13, 30 | "Bootleg": 14, 31 | "Interview": 15, 32 | "Mixtape": 16, 33 | "Demo": 17, 34 | "Concert Recording": 18, 35 | "DJ Mix": 19, 36 | "Unknown": 21 37 | } 38 | 39 | class LoginException(Exception): 40 | pass 41 | 42 | class RequestException(Exception): 43 | pass 44 | 45 | class RedactedAPI: 46 | def __init__(self, api_key=None, logger=None): 47 | self.session = requests.Session() 48 | self.session.headers.update(headers) 49 | self.api_key = api_key 50 | self.authkey = None 51 | self.passkey = None 52 | self.tracker = "https://flacsfor.me/" 53 | self.last_request = time.time() 54 | self.rate_limit = 2.0 55 | self.logger = logger 56 | self._login() 57 | 58 | def _login(self): 59 | '''Logs in user using API key''' 60 | mainpage = 'https://redacted.ch/' 61 | 62 | self.session.headers.update(Authorization=self.api_key) 63 | try: 64 | accountinfo = self.request('index') 65 | self.authkey = accountinfo['authkey'] 66 | self.passkey = accountinfo['passkey'] 67 | except: 68 | raise LoginException 69 | 70 | def 
request(self, action, **kwargs): 71 | '''Makes an AJAX request at a given action page''' 72 | while time.time() - self.last_request < self.rate_limit: 73 | time.sleep(0.1) 74 | 75 | ajaxpage = 'https://redacted.ch/ajax.php' 76 | params = {'action': action} 77 | params.update(kwargs) 78 | r = self.session.get(ajaxpage, params=params, allow_redirects=False) 79 | self.last_request = time.time() 80 | if r.status_code == 404: 81 | return {} 82 | try: 83 | parsed = json.loads(r.content) 84 | if parsed['status'] != 'success': 85 | raise RequestException 86 | return parsed['response'] 87 | except ValueError: 88 | print(r.status_code) 89 | raise RequestException 90 | 91 | def get_artist(self, artist=None, format=None): 92 | '''Get all releases for a given artist''' 93 | res = self.request('artist', artistname=artist) 94 | if 'torrentgroup' not in res: 95 | return {'torrentgroup': []} 96 | torrentgroups = res['torrentgroup'] 97 | keep_releases = [] 98 | for release in torrentgroups: 99 | #Remove Unicode Chars 100 | name = html.unescape(release['groupName']).encode('ascii', 'ignore') 101 | release['groupName'] = name.decode("utf-8") 102 | keeptorrents = [] 103 | for t in release['torrent']: 104 | if not format or t['format'] == format: 105 | keeptorrents.append(t) 106 | release['torrent'] = list(keeptorrents) 107 | if len(release['torrent']): 108 | keep_releases.append(release) 109 | res['torrentgroup'] = keep_releases 110 | return res 111 | 112 | def is_duplicate(self, release): 113 | artist = release['artist'] 114 | if 'artists' in release and len(release['artists']): 115 | artist = release['artists'][0] 116 | 117 | group = self.get_artist(artist=artist, format="FLAC") 118 | for result in group['torrentgroup']: 119 | if result['groupName'] == release['album']: 120 | for torrent in result['torrent']: 121 | if torrent['format'] == "FLAC" and torrent['encoding'] == release['bitrate']: 122 | return True 123 | break 124 | 125 | return False 126 | 127 | def upload(self, torrent, release): 128 | upload = {} 129 | 130 | if 'artists' in release and len(release['artists']): 131 | for i, artist in enumerate(release['artists']): 132 | upload[f"artists[{i}]"] = artist 133 | upload[f"importance[{i}]"] = 1 134 | else: 135 | upload["artists[]"] = release['artist'] 136 | upload["importance[]"] = 1 137 | 138 | if 'release_title' in release: 139 | upload["remaster_title"] = release['release_title'] 140 | if 'record_label' in release: 141 | upload["remaster_record_label"] = release['record_label'] 142 | if 'catalogue_number' in release: 143 | upload["remaster_catalogue_number"] = release['catalogue_number'] 144 | 145 | if 'initial_year' in release: 146 | upload["year"] = int(release['initial_year']) 147 | else: 148 | upload["year"] = int(release['release_year']) 149 | 150 | upload["type"] = 0 151 | upload["title"] = release['album'] 152 | upload["releasetype"] = types[release['release_type']] 153 | upload["remaster_year"] = int(release['release_year']) 154 | upload["format"] = "FLAC" 155 | upload["bitrate"] = release['bitrate'] 156 | upload["media"] = "WEB" 157 | upload["tags"] = release['tags'] 158 | upload["image"] = release['cover_art'] 159 | upload["album_desc"] = release['album_description'] 160 | upload["release_desc"] = release['release_description'] 161 | 162 | files = {'file_input': open(torrent, 'rb')} 163 | 164 | r = self.session.post('https://redacted.ch/ajax.php?action=upload', data=upload, files=files) 165 | return json.loads(r.content) 166 | 167 | #We have to use a session cookie here because 
the API doesn't have a report endpoint 168 | def report_lossy(self, session_cookie, torrent, image, url): 169 | cookies = {'session': session_cookie} 170 | data = {'categoryid': 1, 'submit': True, 'type': 'lossywebapproval'} 171 | 172 | data['auth'] = self.authkey 173 | data['torrentid'] = int(torrent) 174 | data['proofimages'] = image 175 | data['extra'] = f"Downloaded from [url={url}]Bandcamp[/url]" 176 | 177 | r = requests.post('https://redacted.ch/reportsv2.php?action=takereport', cookies=cookies, data=data, headers=headers) 178 | return r.content -------------------------------------------------------------------------------- /redcamp.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | import bandcamp 4 | import redacted 5 | import transcode 6 | import utils 7 | 8 | import re 9 | import os 10 | import sys 11 | import json 12 | 13 | import argparse 14 | import configparser 15 | import hashlib 16 | import shutil 17 | import urllib.error 18 | import urllib.request 19 | 20 | import coloredlogs 21 | import logging 22 | import verboselogs 23 | 24 | import musicbrainzngs 25 | import mutagen 26 | 27 | allowed_extensions = [".ac3", ".accurip", ".azw3", ".chm", ".cue", ".djv", ".djvu", ".doc", ".dmg", ".dts", ".epub", ".ffp", ".flac", ".gif", ".htm", ".html", ".jpeg", ".jpg", ".lit", ".log", ".m3u", ".m3u8", ".m4a", ".m4b", ".md5", ".mobi", ".mp3", ".mp4", ".nfo", ".pdf", ".pls", ".png", ".rtf", ".sfv", ".txt"] 28 | 29 | tag_blacklist = ["web", "flac", "compilation", "demo", "dj.mix", "ep", "mixtape", "remix", "single", "soundtrack", "live", "soundboard", "hardcore", "garage"] 30 | 31 | types = { 32 | "Album": "Album", 33 | "Compilation": "Compilation", 34 | "DJ-mix": "DJ Mix", 35 | "EP": "EP", 36 | "Interview": "Interview", 37 | "Live": "Live album", 38 | "Mixtape/Street": "Mixtape", 39 | "Remix": "Remix", 40 | "Single": "Single", 41 | "Soundtrack": "Soundtrack" 42 | } 43 | 44 | type_matches = { 45 | "Compilation": "Compilation", 46 | "Demo": "Demo", 47 | "EP": "EP", 48 | "E.P.": "EP", 49 | "Live": "Live album", 50 | "Mixtape": "Mixtape", 51 | "OST": "Soundtrack", 52 | "Remix": "Remix", 53 | "Single": "Single", 54 | "Soundtrack": "Soundtrack" 55 | } 56 | 57 | verboselogs.install() 58 | logger = logging.getLogger(__name__) 59 | coloredlogs.DEFAULT_LOG_FORMAT='%(levelname)s %(message)s' 60 | coloredlogs.install(level='INFO', logger=logger) 61 | 62 | def rename_tracks(dir): 63 | album = None 64 | artist = None 65 | 66 | for root, dirs, files in os.walk(dir): 67 | for file in files: 68 | file_name, file_extension = os.path.splitext(file) 69 | if file_extension not in allowed_extensions: 70 | os.remove(os.path.join(root, file)) 71 | if file_extension == ".flac": 72 | file_path = os.path.join(root, file) 73 | metadata = mutagen.File(file_path) 74 | if not album and not artist: 75 | album = metadata['album'][0] 76 | artist = metadata['artist'][0] 77 | new_file = metadata['tracknumber'][0].rjust(2, '0') + " " + utils.clean(metadata['title'][0]) + ".flac" 78 | os.rename(file_path, os.path.join(root, new_file)) 79 | 80 | return [album, artist] 81 | 82 | def rename_dir(dir, artist, album): 83 | parent_dir = os.path.dirname(dir) 84 | new_dir = os.path.join(parent_dir, utils.clean(artist) + " - " + utils.clean(album) + " [FLAC]") 85 | os.rename(dir, new_dir) 86 | return new_dir 87 | 88 | def search_musicbrainz(release): 89 | results = musicbrainzngs.search_releases(artist=release['artist'], release=release['album'], limit=5) 90 | for 
result in results["release-list"]: 91 | for artist in result['artist-credit']: 92 | if not isinstance(artist, dict): 93 | continue 94 | if artist['name'] == release['artist'] and result['title'] == release['album']: 95 | #Get Release Type 96 | logger.info(f"Result: {result['title']} by {artist['name']}") 97 | primary_type = result['release-group'].get('primary-type') 98 | secondary_types = result['release-group'].get('secondary-type-list', []) 99 | for release_type in secondary_types: 100 | if release_type in types.keys(): 101 | release['release_type'] = types[release_type] 102 | break 103 | else: 104 | if primary_type in types.keys(): 105 | release['release_type'] = types[primary_type] 106 | else: 107 | release['release_type'] = "Album" 108 | #Get Record Label / Catalogue Number 109 | for label in result.get('label-info-list', []): 110 | release['record_label'] = label['label']['name'] 111 | if 'catalog-number' in label: 112 | release['catalogue_number'] = label['catalog-number'] 113 | break 114 | return 115 | 116 | def guess_type(release): 117 | if len(release['tracks']) == 1: 118 | return "Single" 119 | 120 | if release['artist'] == "Various": 121 | return "Compilation" 122 | 123 | for type in type_matches.keys(): 124 | match = type.lower() 125 | if match in release['album'].lower() or match in release['tags']: 126 | return type_matches[type] 127 | 128 | return "Album" 129 | 130 | def add_artists(release): 131 | release['artists'] = [] 132 | for track in release['tracks']: 133 | match = re.match(r'^(.+) - (.+)$', track['name']) 134 | if not match: 135 | break 136 | artist = match.group(1) 137 | if artist not in release['artists']: 138 | release['artists'].append(artist) 139 | else: 140 | release['release_type'] = "Compilation" 141 | release['artist'] = "Various Artists" 142 | 143 | 144 | def make_album_desc(release): 145 | description = "[size=4][b]Tracklist[/b][/size]\n" 146 | 147 | for i, track in enumerate(release['tracks']): 148 | description += "[b]" + str(i + 1).rjust(2) + ".[/b]" 149 | description += " " + track['name'] + " " 150 | description += "[i](" + track['length'] + ")[/i]\n" 151 | 152 | description += "\n[b]Total Length:[/b] " + release['length'] + "\n" 153 | description += "\nMore Information: [url=" + release['url'] + "]Bandcamp[/url]" 154 | 155 | return description 156 | 157 | def make_release_desc(release, links): 158 | description = "[hide=Spectrograms]\n" 159 | 160 | for link in links: 161 | description += f"[img]{link}[/img]\n" 162 | 163 | description += "[/hide]" 164 | description += "\n\nUploaded using [url=https://github.com/TrackerTools/REDCamp]REDCamp[/url]" 165 | 166 | return description 167 | 168 | def make_tagstr(release): 169 | blacklist = list(tag_blacklist) # copy so per-release entries don't accumulate in the global blacklist 170 | 171 | blacklist.append(re.sub(r'[ _\-\/]', '.', release['album'].lower())) 172 | blacklist.append(re.sub(r'[ _\-\/]', '.', release['artist'].lower())) 173 | blacklist.append(str(release['release_year'])) 174 | 175 | tags = [] 176 | for tag in release['tags']: 177 | tag = re.sub(r'[ _\-\/]', '.', tag).strip(".") 178 | tag = re.sub(r'[\'\(\)]', '', tag.replace("&", "and")) 179 | if tag not in blacklist and tag not in tags: 180 | tags.append(tag) 181 | 182 | return ", ".join(tags) 183 | 184 | def print_release(release): 185 | print("-----------------------------------------") 186 | print(f"Artist: {release['artist']}") 187 | if 'artists' in release and len(release['artists']): 188 | artists = ", ".join(release['artists']) 189 | print(f"Artists: {artists}") 190 | print(f"Album: {release['album']}") 191 | if 'release_title' in 
release: 192 | print(f"Release Title: {release['release_title']}") 193 | print(f"Release Type: {release['release_type']}") 194 | if 'initial_year' in release: 195 | print(f"Initial Year: {release['initial_year']}") 196 | print(f"Release Year: {release['release_year']}") 197 | if 'record_label' in release: 198 | print(f"Record Label: {release['record_label']}") 199 | if 'catalogue_number' in release: 200 | print(f"Catalogue Number: {release['catalogue_number']}") 201 | print(f"Format: FLAC") 202 | print(f"Bitrate: {release['bitrate']}") 203 | print(f"Media: WEB") 204 | print(f"Tags: {make_tagstr(release)}") 205 | print(f"Image: {release['cover_art']}") 206 | print(f"Album Description:\n{make_album_desc(release)}\n") 207 | print(f"Release Description:\n{release['release_description']}") 208 | print("-----------------------------------------") 209 | 210 | def main(): 211 | parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter, prog='redcamp') 212 | parser.add_argument('--config', help='Location of configuration file', default=os.path.expanduser('~/.redcamp/config')) 213 | parser.add_argument('--cache', help='Location of cache file', default=os.path.expanduser('~/.redcamp/cache')) 214 | parser.add_argument('--download-releases', help='Download releases from file', action='store_true') 215 | parser.add_argument('--release-file', help='Location of release file', default=os.path.abspath('./releases.txt')) 216 | 217 | args = parser.parse_args() 218 | config = configparser.RawConfigParser() 219 | 220 | try: 221 | open(args.config) 222 | config.read(args.config) 223 | except: 224 | if not os.path.exists(os.path.dirname(args.config)): 225 | os.makedirs(os.path.dirname(args.config)) 226 | config.add_section('redacted') 227 | config.set('redacted', 'api_key', '') 228 | config.set('redacted', 'session_cookie', '') 229 | config.set('redacted', 'data_dir', '') 230 | config.set('redacted', 'output_dir', '') 231 | config.set('redacted', 'torrent_dir', '') 232 | config.set('redacted', 'piece_length', '18') 233 | config.add_section('ptpimg') 234 | config.set('ptpimg', 'api_key', '') 235 | config.write(open(args.config, 'w')) 236 | logging.error(f"Please edit the config file: {args.config}") 237 | sys.exit(0) 238 | 239 | data_dir = os.path.expanduser(config.get('redacted', 'data_dir')) 240 | output_dir = os.path.expanduser(config.get('redacted', 'output_dir')) 241 | torrent_dir = os.path.expanduser(config.get('redacted', 'torrent_dir')) 242 | 243 | #Read Cache 244 | cache = utils.read_file(args.cache) 245 | if cache: 246 | cache = json.loads(cache) 247 | else: 248 | cache = {} 249 | 250 | #Download Releases 251 | if args.download_releases: 252 | for release in utils.read_file(args.release_file).strip().split("\n"): 253 | release_url, download_link = release.split(", ") 254 | 255 | try: 256 | response = urllib.request.urlopen(download_link) 257 | except urllib.error.HTTPError: 258 | logger.error(f"Invalid URL. Skipping...") 259 | continue 260 | 261 | file_name = response.info().get_filename().encode('latin-1').decode('utf-8') 262 | file_path = os.path.join(output_dir, file_name) 263 | 264 | if os.path.exists(file_path): 265 | continue 266 | 267 | while True: 268 | try: 269 | logger.info(f"Downloading Release: {file_name}") 270 | urllib.request.urlretrieve(download_link, file_path) 271 | break 272 | except urllib.error.ContentTooShortError: 273 | logger.error("Download Error. 
Restarting...") 274 | os.remove(file_path) 275 | continue 276 | 277 | cache[file_name] = release_url 278 | 279 | #Write Cache 280 | utils.write_file(args.cache, json.dumps(cache, indent=4, sort_keys=True)) 281 | 282 | api_key = config.get('redacted', 'api_key') 283 | 284 | try: 285 | session_cookie = os.path.expanduser(config.get('redacted', 'session_cookie')) 286 | except configparser.NoOptionError: 287 | session_cookie = None 288 | 289 | logger.info("Logging in to RED") 290 | api = redacted.RedactedAPI(api_key, logger) 291 | 292 | logger.info("Logging in to MusicBrainz") 293 | musicbrainzngs.set_useragent("REDCamp", "1.0", "https://github.com/TrackerTools/REDCamp") 294 | 295 | #Get Candidates 296 | logger.info("Getting Candidates") 297 | candidates = [] 298 | for root, dirs, files in os.walk(output_dir): 299 | for file in files: 300 | if (re.match(r'^(.+) - (.+).zip$', file)): 301 | candidates.append(os.path.join(root, file)) 302 | 303 | for release_path in candidates: 304 | #Unzip File 305 | release_file = os.path.basename(release_path) 306 | release_dir = utils.unzip_file(release_path, data_dir) 307 | 308 | #Rename Files 309 | album, artist = rename_tracks(release_dir) 310 | 311 | #Rename Directory 312 | release_dir = rename_dir(release_dir, artist, album) 313 | 314 | #Get Album Info from Bandcamp 315 | logger.info(f"Release: {album} by {artist}") 316 | if release_file in cache: 317 | url = cache[release_file] 318 | else: 319 | logger.info(f"Searching Bandcamp for {album}") 320 | url = bandcamp.get_album_url(album, artist) 321 | if not url: 322 | logger.error(f"No Results for {album}") 323 | url = input("Enter Album URL: ") 324 | 325 | release = bandcamp.get_album_info(url) 326 | if not release: 327 | logger.error("Invalid URL. Skipping...") 328 | shutil.rmtree(release_dir) 329 | continue 330 | 331 | #Get Album Info from MusicBrainz 332 | logger.info(f"Searching MusicBrainz for {album}") 333 | search_musicbrainz(release) 334 | 335 | #Guess Release Type 336 | if 'release_type' not in release: 337 | logger.warning("[WRN] No Release Type. 
Guessing...") 338 | release['release_type'] = guess_type(release) 339 | 340 | #Check Compilation 341 | if release['release_type'] == "Compilation" or release['artist'] != artist: 342 | add_artists(release) 343 | 344 | #Check Bitrate 345 | if transcode.is_24bit(release_dir): 346 | release['bitrate'] = "24bit Lossless" 347 | else: 348 | release['bitrate'] = "Lossless" 349 | 350 | #Check Lossless 351 | result = transcode.is_lossless(release_dir) 352 | if result != "Clean": 353 | logger.warning(f"LAC Reports Release as {result}") 354 | logger.warning(f"Please Check Spectrals") 355 | 356 | #Generate Spectrograms 357 | logger.info("Generating Spectrograms") 358 | spectral_links = transcode.make_spectrograms(release_dir, config.get('ptpimg', 'api_key')) 359 | release['release_description'] = make_release_desc(release, spectral_links) 360 | 361 | #Review Release 362 | skip = False 363 | 364 | while True: 365 | print_release(release) 366 | 367 | if not len(make_tagstr(release)): 368 | logger.error("No Tags") 369 | release['tags'] = input("Enter Tags: ").split(", ") 370 | continue 371 | 372 | print("[A]pply, [B]lacklist Tags, [E]dit, [S]kip") 373 | option = input("Select an option: ") 374 | 375 | if option == "A": 376 | break 377 | elif option == "B": 378 | tag_blacklist.extend(input("Tags: ").split(", ")) 379 | elif option == "E": 380 | print("Album, Artist, Year, Title, Type, Label") 381 | edit = input("Select an option: ") 382 | if edit == "Album": 383 | release['album'] = input("Album: ") 384 | if edit == "Artist": 385 | release['artist'] = input("Artist: ") 386 | elif edit == "Year": 387 | release['initial_year'] = input("Initial Year: ") 388 | elif edit == "Title": 389 | release['release_title'] = input("Release Title: ") 390 | elif edit == "Type": 391 | release['release_type'] = input("Release Type: ") 392 | if release['release_type'] == "Compilation": 393 | release['artists'] = input("Artists: ").split(", ") 394 | release['artist'] = "Various Artists" 395 | elif edit == "Label": 396 | release['record_label'] = input("Record Label: ") 397 | elif option == "S": 398 | skip = True 399 | break 400 | 401 | if skip: 402 | shutil.rmtree(release_dir) 403 | continue 404 | 405 | #Make Tags and Description 406 | release['tags'] = make_tagstr(release) 407 | release['album_description'] = make_album_desc(release) 408 | 409 | #Check for Duplicate Releases 410 | if api.is_duplicate(release): 411 | logger.info("Duplicate Release. Skipping...") 412 | shutil.rmtree(release_dir) 413 | break 414 | 415 | #Make Torrent 416 | torrent = os.path.join(output_dir, f"redcamp_{str(int(hashlib.md5(release_file.encode('utf-8')).hexdigest(), 16))[0:12]}.torrent") 417 | if not os.path.exists(torrent): 418 | transcode.make_torrent(torrent, release_dir, api.tracker, api.passkey, config.get('redacted', 'piece_length')) 419 | 420 | #Upload to RED 421 | response = api.upload(torrent, release) 422 | 423 | if response['status'] == 'failure': 424 | logging.error(f"Upload Failed: {response['error']}") 425 | shutil.rmtree(release_dir) 426 | os.remove(torrent) 427 | continue 428 | 429 | torrentid = response['response']['torrentid'] 430 | logger.success(f"Uploaded to https://redacted.ch/torrents.php?torrentid={torrentid}") 431 | 432 | shutil.move(torrent, torrent_dir) 433 | 434 | #Report Lossy WEB 435 | if session_cookie: 436 | option = input("Report Lossy WEB? 
[y/n]: ") 437 | if option == "y": 438 | api.report_lossy(session_cookie, torrentid, spectral_links[0], release['url']) 439 | 440 | os.remove(release_path) 441 | 442 | if __name__ == "__main__": 443 | main() -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | coloredlogs 2 | geckodriver_autoinstaller 3 | musicbrainzngs 4 | mutagen 5 | ptpimg_uploader 6 | selenium 7 | verboselogs -------------------------------------------------------------------------------- /scrape.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import time 3 | import utils 4 | 5 | from os import path 6 | from datetime import datetime 7 | from urllib.parse import urlparse 8 | 9 | import geckodriver_autoinstaller 10 | 11 | from selenium import webdriver 12 | from selenium.webdriver.common.by import By 13 | from selenium.webdriver.firefox.options import Options 14 | from selenium.webdriver.support.ui import WebDriverWait 15 | from selenium.common.exceptions import NoSuchElementException 16 | from selenium.webdriver.support import expected_conditions as EC 17 | from selenium.common.exceptions import ElementNotInteractableException 18 | 19 | blacklisted_tags = ["noise", "noisegrind", "harsh.noise"] 20 | cutoff_year = 2019 21 | 22 | def get_download_link(driver): 23 | driver.find_element_by_class_name("download-link.buy-link").click() 24 | driver.find_element_by_id("userPrice").send_keys("0") 25 | driver.find_element_by_link_text("download to your computer").click() 26 | try: 27 | driver.find_element_by_xpath("//div[@id='downloadButtons_download']/div[1]/button[1]").click() 28 | except ElementNotInteractableException: 29 | return False 30 | driver.find_element_by_class_name("item-format.button").click() 31 | try: 32 | driver.find_element_by_xpath("//span[@class='description selected' and text()='FLAC']/..") 33 | except NoSuchElementException: 34 | driver.find_element_by_xpath("//span[@class='description' and text()='FLAC']/..").click() 35 | element = WebDriverWait(driver, 60).until( 36 | EC.presence_of_element_located((By.LINK_TEXT, "Download")) 37 | ) 38 | return element.get_attribute("href") 39 | 40 | def main(): 41 | release_ct = int(input("Number of Releases: ")) 42 | 43 | cache = utils.read_file("./cache.txt") 44 | if cache: 45 | cache = cache.split("\n") 46 | else: 47 | cache = [] 48 | 49 | #Install geckodriver 50 | geckodriver_autoinstaller.install() 51 | 52 | options = Options() 53 | options.headless = True 54 | driver = webdriver.Firefox(options=options) 55 | 56 | #Load Bandcamp 57 | driver.get("https://bandcamp.com") 58 | driver.implicitly_wait(2) 59 | 60 | #Get New/Digital Releases 61 | driver.find_element_by_class_name("discover-pill.new").click() 62 | driver.find_element_by_class_name("discover-pill.digital").click() 63 | 64 | #Wait for Page 65 | WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.XPATH, "//a[@class='item-page' and text()='next']"))) 66 | 67 | releases = [] 68 | 69 | while len(releases) < release_ct: 70 | #Get Releases 71 | albums = driver.find_elements_by_xpath("//div[contains(@class, 'row discover-result') and contains(@class, 'result-current')]/div[@class='col col-3-12 discover-item']/a[2]") 72 | for album in albums: 73 | if len(releases) >= release_ct: 74 | break 75 | 76 | #Parse URL 77 | parsed_url = urlparse(album.get_attribute("href")) 78 | url = parsed_url.scheme + "://" + parsed_url.netloc + 
parsed_url.path 79 | 80 | #Check Cache 81 | if url in cache: 82 | continue 83 | cache.append(url) 84 | 85 | #Open New Tab 86 | driver.execute_script("window.open('');") 87 | driver.switch_to.window(driver.window_handles[1]) 88 | driver.get(url) 89 | 90 | #Check if Release Matches Filters 91 | try: 92 | price_element = driver.find_element_by_class_name("buyItemExtra.buyItemNyp.secondaryText") 93 | year_element = driver.find_element_by_xpath("//meta[@itemprop='datePublished']") 94 | tag_element = driver.find_elements_by_xpath("//a[@class='tag']") 95 | 96 | #Get Release Year 97 | year = int(datetime.strptime(year_element.get_attribute("content"), "%Y%m%d").year) 98 | 99 | #Check Tags 100 | blacklisted = False 101 | for tag in tag_element: 102 | if tag.text in blacklisted_tags: 103 | blacklisted = True 104 | break 105 | 106 | #Check Price and Release Year 107 | if price_element.text == "name your price" and year >= cutoff_year and not blacklisted: 108 | download_link = get_download_link(driver) 109 | if download_link: 110 | print(f"Match: {url}") 111 | releases.append(f"{url}, {download_link}") 112 | except NoSuchElementException: 113 | pass 114 | 115 | #Close New Tab 116 | driver.close() 117 | driver.switch_to.window(driver.window_handles[0]) 118 | else: 119 | #Load Next Page of Releases 120 | WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.XPATH, "//a[@class='item-page' and text()='next']"))).click() 121 | 122 | driver.close() 123 | 124 | utils.write_file("./releases.txt", "\n".join(releases)) 125 | utils.write_file("./cache.txt", "\n".join(cache)) 126 | 127 | if __name__ == "__main__": 128 | main() -------------------------------------------------------------------------------- /setup.sh: -------------------------------------------------------------------------------- 1 | #Install Python Modules 2 | pip install -r requirements.txt 3 | 4 | #Install Packages 5 | sudo apt install ffmpeg sox 6 | 7 | #Install mktorrent 8 | if ! command -v mktorrent > /dev/null; then 9 | git clone git@github.com:Rudde/mktorrent.git 10 | cd mktorrent 11 | make && sudo make install && cd .. #Return to the script's directory so the remaining steps run in the right place 12 | fi 13 | 14 | #Install Lossless Audio Checker 15 | wget --content-disposition "http://losslessaudiochecker.com/dl/LAC-Linux-64bit.tar.gz" 16 | tar xzvf LAC-Linux-64bit.tar.gz 17 | rm LAC-Linux-64bit.tar.gz 18 | 19 | chmod +x redcamp.py 20 | chmod +x mkspectrograms.sh -------------------------------------------------------------------------------- /transcode.py: -------------------------------------------------------------------------------- 1 | import re 2 | import os 3 | import shutil 4 | import subprocess 5 | import mutagen.flac 6 | 7 | import ptpimg_uploader 8 | 9 | def ext_matcher(*extensions): 10 | ''' 11 | Returns a function which checks if a filename has one of the specified extensions. 12 | ''' 13 | return lambda f: os.path.splitext(f)[-1].lower() in extensions 14 | 15 | def locate(root, match_function, ignore_dotfiles=True): 16 | ''' 17 | Yields all filenames within the root directory for which match_function returns True. 18 | ''' 19 | for path, dirs, files in os.walk(root): 20 | for filename in (os.path.abspath(os.path.join(path, filename)) for filename in sorted(files) if match_function(filename)): 21 | if ignore_dotfiles and os.path.basename(filename).startswith('.'): 22 | pass 23 | else: 24 | yield filename 25 | 26 | def is_24bit(flac_dir): 27 | ''' 28 | Returns True if any FLAC within flac_dir is 24 bit. 
29 | ''' 30 | flacs = (mutagen.flac.FLAC(flac_file) for flac_file in locate(flac_dir, ext_matcher('.flac'))) 31 | return any(flac.info.bits_per_sample > 16 for flac in flacs) 32 | 33 | def make_torrent(torrent, input_dir, tracker, passkey, piece_length): 34 | if not os.path.exists(os.path.dirname(torrent)): 35 | os.makedirs(os.path.dirname(torrent)) 36 | tracker_url = '%(tracker)s%(passkey)s/announce' % { 37 | 'tracker' : tracker, 38 | 'passkey' : passkey, 39 | } 40 | command = ["mktorrent", "-s", "RED", "-p", "-a", tracker_url, "-o", torrent, "-l", piece_length, input_dir] 41 | subprocess.check_output(command, stderr=subprocess.STDOUT) 42 | return torrent 43 | 44 | def is_lossless(flac_dir): 45 | flac_file = next(locate(flac_dir, ext_matcher('.flac'))) 46 | wav_file = os.path.splitext(flac_file)[0] + ".wav" 47 | subprocess.call(["ffmpeg", "-i", flac_file, wav_file], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) 48 | output = subprocess.check_output(["./LAC", wav_file]).decode() 49 | result = re.search(r'Result: (\w+)', output).group(1) 50 | os.remove(wav_file) 51 | return result 52 | 53 | def make_spectrograms(flac_dir, api_key): 54 | api = ptpimg_uploader.PtpimgUploader(api_key) 55 | flac_file = next(locate(flac_dir, ext_matcher('.flac'))) 56 | subprocess.call(["./mkspectrograms.sh", flac_file], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) 57 | spectral_dir = os.path.join(flac_dir, "Spectrograms") 58 | spectrals = locate(spectral_dir, ext_matcher('.png')) 59 | links = api.upload_files(*spectrals) 60 | shutil.rmtree(spectral_dir) 61 | return links -------------------------------------------------------------------------------- /utils.py: -------------------------------------------------------------------------------- 1 | import re 2 | import os 3 | import unicodedata 4 | 5 | from zipfile import ZipFile 6 | 7 | def read_file(path, mode='r'):  8 | if not os.path.exists(path): 9 | return "" 10 | with open(path, mode) as file: 11 | data = file.read() 12 | file.close() 13 | return data 14 | 15 | def write_file(path, data): 16 | with open(path, 'w') as file: 17 | file.write(data) 18 | file.close() 19 | 20 | def clean(value): 21 | value = re.sub(r'[\|\/]', '-', value) 22 | value = re.sub(r'[<>:"\\|?*]', '', value).strip() 23 | return value 24 | 25 | def unzip_file(file, dir): 26 | dir += "/" + os.path.splitext(os.path.basename(file))[0] # splitext instead of rstrip(".zip"), which would also strip trailing 'z'/'i'/'p' characters 27 | with ZipFile(file, 'r') as zf: 28 | zf.extractall(dir) 29 | return dir --------------------------------------------------------------------------------