├── .github └── FUNDING.yml ├── README.md ├── plexcache.py ├── plexcache_settings.json ├── plexcache_setup.py └── requirements.txt /.github/FUNDING.yml: -------------------------------------------------------------------------------- 1 | # These are supported funding model platforms 2 | 3 | github: bexem 4 | ko_fi: bexem 5 | custom: ["bexem.xyz", "buymeacoffee.com/bexem"] 6 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # PlexCache: Automate Plex Media Management 2 | 3 | Automate Plex media management: Efficiently transfer media from the On Deck/Watchlist to the cache, and seamlessly move watched media back to their respective locations. 4 | 5 | ## Overview 6 | 7 | PlexCache efficiently transfers media from the On Deck/Watchlist to the cache and moves watched media back to their respective locations. This Python script reduces energy consumption by minimizing the need to spin up the array/hard drive(s) when watching recurrent media like TV series. It achieves this by moving the media from the OnDeck and watchlist for the main user and/or other users. For TV shows/anime, it also fetches the next specified number of episodes. 8 | 9 | ## Features 10 | 11 | - Fetch a specified number of episodes from the "onDeck" for the main user and other users. 12 | - Skip fetching onDeck media for specified users. 13 | - Fetch a specified number of episodes from the "watchlist" for the main user and other users. 14 | - Skip fetching watchlist media for specified users. 15 | - Search only the specified libraries. 16 | - Check for free space before moving any file. 17 | - Move watched media present on the cache drive back to the array. 18 | - Move respective subtitles along with the media moved to or from the cache. 19 | - Filter media older than a specified number of days. 20 | - Run in debug mode for testing. 21 | - Use of a log file for easy debugging. 22 | - Use caching system to avoid wastful memory usage and cpu cycles. 23 | - Use of multitasking to optimize file transfer time. 24 | - Exit the script if any active session or skip the currently playing media. 25 | - Send Webhook messages according to set log level. 26 | - Find your missing unicorn. 27 | 28 | **Work in progress (pre-releases)** 29 | 30 | - Use symbolic links if the script is not running on UNRAID. **(UNTESTED)** 31 | 32 | ## Setup 33 | 34 | Please check out our [Wiki section](https://github.com/bexem/PlexCache/wiki) for the step-by-step guide on how to setup PlexCache on your system. 35 | 36 | ## Notes 37 | 38 | This script should be compatible with other systems, especially Linux-based ones, although I have primarily tested it on Unraid with plex as docker container running on Unraid. Work has been done to improve Windows interoperability. 39 | While I cannot support every case, it's worth checking the GitHub issues to see if your specific case has already been discussed. 40 | I will still try to help out, but please note that I make no promises in providing assistance for every scenario. 41 | **It is highly advised to use the setup script.** 42 | 43 | ## Disclaimer 44 | 45 | This script comes without any warranties, guarantees, or magic powers. By using this script, you accept that you're responsible for any consequences that may result. The author will not be held liable for data loss, corruption, or any other problems you may encounter. So, it's on you to make sure you have backups and test this script thoroughly before you unleash its awesome power. 46 | 47 | ## Acknowledgments 48 | 49 | I would like to express my sincere gratitude to brimur[^1] for providing the script that served as the foundation and inspiration for this project. I would also like to extend a heartfelt thank you to everyone who contributed and took the time to comment on the project. Your support and involvement mean a lot to me. ❤️ 50 | 51 | [^1]: [brimur/preCachePlexOnDeckEpiosodes.py](https://gist.github.com/brimur/95277e75ca399d5d52b61e6aa192d1cd) 52 | -------------------------------------------------------------------------------- /plexcache.py: -------------------------------------------------------------------------------- 1 | import os, json, logging, glob, socket, platform, shutil, ntpath, posixpath, re, requests, subprocess, time, sys 2 | from concurrent.futures import ThreadPoolExecutor, as_completed 3 | from datetime import datetime, timedelta 4 | from logging.handlers import RotatingFileHandler 5 | from pathlib import Path 6 | from plexapi.server import PlexServer 7 | from plexapi.video import Episode 8 | from plexapi.video import Movie 9 | from plexapi.myplex import MyPlexAccount 10 | from plexapi.exceptions import NotFound 11 | from plexapi.exceptions import BadRequest 12 | 13 | print("*** PlexCache ***") 14 | 15 | script_folder = "/mnt/user/system/plexcache/" # Folder path for the PlexCache script storing the settings, watchlist & watched cache files 16 | logs_folder = script_folder # Change this if you want your logs in a different folder 17 | log_level = "" # Set the desired logging level for webhook notifications. Defaults to INFO when left empty. (Options: debug, info, warning, error, critical) 18 | max_log_files = 5 # Maximum number of log files to keep 19 | 20 | notification = "system" # "Unraid" or "Webhook", or "Both"; "System" instead will automatically switch to unraid if the scripts detects running on unraid 21 | # Set the desired logging level for the notifications. 22 | unraid_level = "summary" 23 | webhook_level = "" 24 | # Leave empty for notifications only on ERROR. (Options: debug, info, warning, error, critical) 25 | # You can also set it to "summary" and it will notify on error but also give you a short summary at the end of each run. 26 | 27 | webhook_url = "" # Your webhook URL, leave empty for no notifications. 28 | webhook_headers = {} # Leave empty for Discord, otherwise edit it accordingly. (Slack example: "Content-Type": "application/json" "Authorization": "Bearer YOUR_SLACK_TOKEN" }) 29 | 30 | settings_filename = os.path.join(script_folder, "plexcache_settings.json") 31 | watchlist_cache_file = Path(os.path.join(script_folder, "plexcache_watchlist_cache.json")) 32 | watched_cache_file = Path(os.path.join(script_folder, "plexcache_watched_cache.json")) 33 | mover_cache_exclude_file = Path(os.path.join(script_folder, "plexcache_mover_files_to_exclude.txt")) 34 | if os.path.exists(mover_cache_exclude_file): 35 | os.remove(mover_cache_exclude_file) # Remove the existing 36 | 37 | RETRY_LIMIT = 3 38 | DELAY = 5 # in seconds 39 | permissions= 0o777 40 | 41 | log_file_pattern = "plexcache_log_*.log" 42 | summary_messages = [] 43 | files_moved = False 44 | # Define a new level called SUMMARY that is equivalent to INFO level 45 | SUMMARY = logging.WARNING + 1 46 | logging.addLevelName(SUMMARY, 'SUMMARY') 47 | 48 | start_time = time.time() # record start time 49 | 50 | class UnraidHandler(logging.Handler): 51 | SUMMARY = SUMMARY 52 | def __init__(self): 53 | super().__init__() 54 | self.notify_cmd_base = "/usr/local/emhttp/webGui/scripts/notify" 55 | if not os.path.isfile(self.notify_cmd_base) or not os.access(self.notify_cmd_base, os.X_OK): 56 | logging.warning(f"{self.notify_cmd_base} does not exist or is not executable. Unraid notifications will not be sent.") 57 | print(f"{self.notify_cmd_base} does not exist or is not executable. Unraid notifications will not be sent.") 58 | self.notify_cmd_base = None 59 | 60 | def emit(self, record): 61 | if self.notify_cmd_base: 62 | if record.levelno == SUMMARY: 63 | self.send_summary_unraid_notification(record) 64 | else: 65 | self.send_unraid_notification(record) 66 | 67 | def send_summary_unraid_notification(self, record): 68 | icon = 'normal' 69 | notify_cmd = f'{self.notify_cmd_base} -e "PlexCache" -s "Summary" -d "{record.msg}" -i "{icon}"' 70 | subprocess.call(notify_cmd, shell=True) 71 | 72 | def send_unraid_notification(self, record): 73 | # Map logging levels to icons 74 | level_to_icon = { 75 | 'WARNING': 'warning', 76 | 'ERROR': 'alert', 77 | 'INFO': 'normal', 78 | 'DEBUG': 'normal', 79 | 'CRITICAL': 'alert' 80 | } 81 | 82 | icon = level_to_icon.get(record.levelname, 'normal') # default to 'normal' if levelname is not found in the dictionary 83 | 84 | # Prepare the command with necessary arguments 85 | notify_cmd = f'{self.notify_cmd_base} -e "PlexCache" -s "{record.levelname}" -d "{record.msg}" -i "{icon}"' 86 | 87 | # Execute the command 88 | subprocess.call(notify_cmd, shell=True) 89 | 90 | class WebhookHandler(logging.Handler): 91 | SUMMARY = SUMMARY 92 | def __init__(self, webhook_url): 93 | super().__init__() 94 | self.webhook_url = webhook_url 95 | 96 | def emit(self, record): 97 | if record.levelno == SUMMARY: 98 | self.send_summary_webhook_message(record) 99 | else: 100 | self.send_webhook_message(record) 101 | 102 | def send_summary_webhook_message(self, record): 103 | summary = "Plex Cache Summary:\n" + record.msg 104 | payload = { 105 | "content": summary 106 | } 107 | headers = { 108 | "Content-Type": "application/json" 109 | } 110 | response = requests.post(self.webhook_url, data=json.dumps(payload), headers=headers) 111 | if not response.status_code == 204: 112 | print(f"Failed to send summary message. Error code: {response.status_code}") 113 | 114 | def send_webhook_message(self, record): 115 | payload = { 116 | "content": record.msg 117 | } 118 | headers = { 119 | "Content-Type": "application/json" 120 | } 121 | response = requests.post(self.webhook_url, data=json.dumps(payload), headers=headers) 122 | if not response.status_code == 204: 123 | print(f"Failed to send message. Error code: {response.status_code}") 124 | 125 | def check_and_create_folder(folder): 126 | # Check if the folder doesn't already exist 127 | if not os.path.exists(folder): 128 | try: 129 | # Create the folder with necessary parent directories 130 | os.makedirs(folder, exist_ok=True) 131 | except PermissionError: 132 | # Exit the program if the folder is not writable 133 | exit(f"{folder} not writable, please fix the variable accordingly.") 134 | 135 | # Check and create the script folder 136 | check_and_create_folder(script_folder) 137 | 138 | # Check and create the logs folder if it's different from the script folder 139 | if logs_folder != script_folder: 140 | check_and_create_folder(logs_folder) 141 | 142 | current_time = datetime.now().strftime("%Y%m%d_%H%M") # Get the current time and format it as YYYYMMDD_HHMM 143 | log_file = os.path.join(logs_folder, f"{log_file_pattern[:-5]}{current_time}.log") # Create a filename based on the current time 144 | latest_log_file = os.path.join(logs_folder, f"{log_file_pattern[:-5]}latest.log") # Create a filename for the latest log 145 | logger = logging.getLogger() # Get the root logger 146 | if log_level: 147 | log_level = log_level.lower() 148 | if log_level == "debug": 149 | logger.setLevel(logging.DEBUG) 150 | elif log_level == "info": 151 | logger.setLevel(logging.INFO) 152 | elif log_level == "warning": 153 | logger.setLevel(logging.WARNING) 154 | elif log_level == "error": 155 | logger.setLevel(logging.ERROR) 156 | elif log_level == "critical": 157 | logger.setLevel(logging.CRITICAL) 158 | else: 159 | print(f"Invalid webhook_level: {log_level}. Using default level: ERROR") 160 | logger.setLevel(logging.INFO) 161 | 162 | # Configure the rotating file handler 163 | handler = RotatingFileHandler(log_file, maxBytes=20*1024*1024, backupCount=max_log_files) # Create a rotating file handler 164 | handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')) # Set the log message format 165 | logger.addHandler(handler) # Add the file handler to the logger 166 | 167 | # Create or update the symbolic link to the latest log file 168 | if os.path.exists(latest_log_file): 169 | os.remove(latest_log_file) # Remove the existing link if it exists 170 | 171 | os.symlink(log_file, latest_log_file) # Create a new link to the latest log file 172 | def clean_old_log_files(logs_folder, log_file_pattern, max_log_files): 173 | # Find all log files that match the specified pattern in the logs folder 174 | existing_log_files = glob.glob(os.path.join(logs_folder, log_file_pattern)) 175 | # Sort the log files based on their last modification time 176 | existing_log_files.sort(key=os.path.getmtime) 177 | # Remove log files until the number of remaining log files is within the desired limit 178 | while len(existing_log_files) > max_log_files: 179 | # Remove the oldest log file from the list and delete it from the filesystem 180 | os.remove(existing_log_files.pop(0)) 181 | 182 | # Call the function to clean old log files 183 | clean_old_log_files(logs_folder, log_file_pattern, max_log_files) 184 | 185 | def check_os(): 186 | # Check the operating system 187 | os_name = platform.system() 188 | 189 | # Define information about different operating systems 190 | os_info = { 191 | 'Linux': {'path': "/mnt/user0/", 'msg': 'Script is currently running on Linux.'}, 192 | 'Darwin': {'path': None, 'msg': 'Script is currently running on macOS (untested).'}, 193 | 'Windows': {'path': None, 'msg': 'Script is currently running on Windows.'} 194 | } 195 | 196 | # Check if the operating system is recognized 197 | if os_name not in os_info: 198 | logging.critical('This is an unrecognized system. Exiting...') 199 | exit("Error: Unrecognized system.") 200 | 201 | # Determine if the system is Linux 202 | is_linux = True if os_name != 'Windows' else False 203 | 204 | # Check if the system is Unraid (specific to Linux) 205 | is_unraid = os.path.exists(os_info[os_name]['path']) if os_info[os_name]['path'] else False 206 | 207 | # Modify the information message if the system is Unraid 208 | if is_unraid: 209 | os_info[os_name]['msg'] += ' The script is also running on Unraid.' 210 | 211 | # Check if script is running inside a Docker container 212 | is_docker = os.path.exists('/.dockerenv') 213 | if is_docker: 214 | os_info[os_name]['msg'] += ' The script is running inside a Docker container.' 215 | 216 | # Log the information about the operating system 217 | logging.info(os_info[os_name]['msg']) 218 | 219 | return is_unraid, is_linux, is_docker 220 | 221 | # Call the check_os() function and store the returned values 222 | unraid, os_linux, is_docker = check_os() 223 | 224 | # Create and add the webhook handler to the logger 225 | if notification.lower() == "unraid" or notification.lower() == "system": 226 | if unraid and not is_docker: 227 | notification = "unraid" 228 | else: 229 | notification = "" 230 | if notification.lower() == "both": 231 | if unraid and is_docker: 232 | notification = "webhook" 233 | 234 | if notification.lower() == "both" or notification.lower() == "unraid": 235 | unraid_handler = UnraidHandler() 236 | if unraid_level: 237 | unraid_level = unraid_level.lower() 238 | if unraid_level == "debug": 239 | unraid_handler.setLevel(logging.DEBUG) 240 | elif unraid_level == "info": 241 | unraid_handler.setLevel(logging.INFO) 242 | elif unraid_level == "warning": 243 | unraid_handler.setLevel(logging.WARNING) 244 | elif unraid_level == "error": 245 | unraid_handler.setLevel(logging.ERROR) 246 | elif unraid_level == "critical": 247 | unraid_handler.setLevel(logging.CRITICAL) 248 | elif unraid_level.lower() == "summary": 249 | unraid_handler.setLevel(SUMMARY) 250 | else: 251 | print(f"Invalid unraid_level: {unraid_level}. Using default level: ERROR") 252 | unraid_handler.setLevel(logging.ERROR) 253 | else: 254 | unraid_handler.setLevel(logging.ERROR) 255 | logger.addHandler(unraid_handler) # Add the unraid handler to the logger 256 | 257 | # Create and add the webhook handler to the logger 258 | if notification.lower() == "both" or notification.lower() == "webhook": 259 | if webhook_url: 260 | webhook_handler = WebhookHandler(webhook_url) 261 | if webhook_level: 262 | webhook_level = webhook_level.lower() 263 | if webhook_level == "debug": 264 | webhook_handler.setLevel(logging.DEBUG) 265 | elif webhook_level == "info": 266 | webhook_handler.setLevel(logging.INFO) 267 | elif webhook_level == "warning": 268 | webhook_handler.setLevel(logging.WARNING) 269 | elif webhook_level == "error": 270 | webhook_handler.setLevel(logging.ERROR) 271 | elif webhook_level == "critical": 272 | webhook_handler.setLevel(logging.CRITICAL) 273 | elif webhook_level.lower() == "summary": 274 | webhook_handler.setLevel(SUMMARY) 275 | else: 276 | print(f"Invalid webhook_level: {webhook_level}. Using default level: ERROR") 277 | webhook_handler.setLevel(logging.ERROR) 278 | else: 279 | webhook_handler.setLevel(logging.ERROR) 280 | logger.addHandler(webhook_handler) # Add the webhook handler to the logger 281 | 282 | logging.info("*** PlexCache ***") 283 | 284 | # Remove "/" or "\" from a given path 285 | def remove_trailing_slashes(value): 286 | try: 287 | # Check if the value is a string 288 | if isinstance(value, str): 289 | # Check if the value contains a ':' and if the value with trailing slashes removed is empty 290 | if ':' in value and value.rstrip('/\\') == '': 291 | # Return the value with trailing slashes removed and add a backslash at the end 292 | return value.rstrip('/') + "\\" 293 | else: 294 | # Return the value with trailing slashes removed 295 | return value.rstrip('/\\') 296 | # Return the value if it is not a string 297 | return value 298 | except Exception as e: 299 | # Log an error if an exception occurs and raise it 300 | logging.error(f"Error occurred while removing trailing slashes: {e}") 301 | raise 302 | 303 | # Add "/" or "\" to a given path 304 | def add_trailing_slashes(value): 305 | try: 306 | # Check if the value does not contain a ':', indicating it's a Windows-style path 307 | if ':' not in value: 308 | # Add a leading "/" if the value does not start with it 309 | if not value.startswith("/"): 310 | value = "/" + value 311 | # Add a trailing "/" if the value does not end with it 312 | if not value.endswith("/"): 313 | value = value + "/" 314 | # Return the modified value 315 | return value 316 | except Exception as e: 317 | # Log an error if an exception occurs and raise it 318 | logging.error(f"Error occurred while adding trailing slashes: {e}") 319 | raise 320 | 321 | # Removed all "/" "\" from a given path 322 | def remove_all_slashes(value_list): 323 | try: 324 | # Iterate over each value in the list and remove leading and trailing slashes 325 | return [value.strip('/\\') for value in value_list] 326 | except Exception as e: 327 | logging.error(f"Error occurred while removing all slashes: {e}") 328 | raise 329 | 330 | # Convert the given path to a windows compatible path 331 | def convert_path_to_nt(value, drive_letter): 332 | try: 333 | if value.startswith('/'): 334 | # Add the drive letter to the beginning of the path 335 | value = drive_letter.rstrip(':\\') + ':' + value 336 | # Replace forward slashes with backslashes 337 | value = value.replace(posixpath.sep, ntpath.sep) 338 | # Normalize the path to remove redundant separators and references to parent directories 339 | return ntpath.normpath(value) 340 | except Exception as e: 341 | logging.error(f"Error occurred while converting path to Windows compatible: {e}") 342 | raise 343 | 344 | # Convert the given path to a linux/posix compatible path 345 | # If a drive letter is present, it will save it in the settings file. 346 | def convert_path_to_posix(value): 347 | try: 348 | # Save the drive letter if exists 349 | drive_letter = re.search(r'^[A-Za-z]:', value) # Check for a drive letter at the beginning of the path 350 | if drive_letter: 351 | drive_letter = drive_letter.group() + '\\' # Extract the drive letter and add a backslash 352 | else: 353 | drive_letter = None 354 | # Remove drive letter if exists 355 | value = re.sub(r'^[A-Za-z]:', '', value) # Remove the drive letter from the path 356 | # Replace backslashes with slashes 357 | value = value.replace(ntpath.sep, posixpath.sep) # Replace backslashes with forward slashes 358 | return posixpath.normpath(value), drive_letter # Normalize the path and return it along with the drive letter 359 | except Exception as e: 360 | logging.error(f"Error occurred while converting path to Posix compatible: {e}") 361 | raise 362 | 363 | # Convert path accordingly to the operating system the script is running 364 | # It assigns drive_letter = 'C:\\' if no drive was ever given/saved 365 | def convert_path(value, key, settings_data, drive_letter=None): 366 | try: 367 | # Normalize paths converting backslashes to slashes 368 | if os_linux: # Check if the operating system is Linux 369 | value, drive_letter = convert_path_to_posix(value) # Convert path to POSIX format 370 | if drive_letter: 371 | settings_data[f"{key}_drive"] = drive_letter # Save the drive letter in the settings data 372 | else: 373 | if drive_letter is None: 374 | if debug: 375 | print(f"Drive letter for {value} not found, using the default one 'C:\\'") 376 | logging.warning(f"Drive letter for {value} not found, using the default one 'C:\\'") 377 | drive_letter = 'C:\\' # Set the default drive letter to 'C:\' 378 | value = convert_path_to_nt(value, drive_letter) # Convert path to Windows format 379 | 380 | return value 381 | except Exception as e: 382 | logging.error(f"Error occurred while converting path: {e}") 383 | raise 384 | 385 | # Check if the settings file exists 386 | if os.path.exists(settings_filename): 387 | # Loading the settings file 388 | with open(settings_filename, 'r') as f: 389 | settings_data = json.load(f) 390 | else: 391 | logging.critical("Settings file not found, please fix the variable accordingly.") 392 | exit("Settings file not found, please fix the variable accordingly.") 393 | 394 | # Reads the settings file and all the settings 395 | try: 396 | # Extracting the 'firststart' flag from the settings data 397 | firststart = settings_data.get('firststart') 398 | if firststart: 399 | debug = True 400 | print("First start is set to true, setting debug mode temporarily to true.") 401 | logging.warning("First start is set to true, setting debug mode temporarily to true.") 402 | del settings_data['firststart'] 403 | else: 404 | debug = settings_data.get('debug') 405 | if firststart is not None: 406 | del settings_data['firststart'] 407 | 408 | # Extracting various settings from the settings data 409 | PLEX_URL = settings_data['PLEX_URL'] 410 | PLEX_TOKEN = settings_data['PLEX_TOKEN'] 411 | number_episodes = settings_data['number_episodes'] 412 | valid_sections = settings_data['valid_sections'] 413 | days_to_monitor = settings_data['days_to_monitor'] 414 | users_toggle = settings_data['users_toggle'] 415 | 416 | # Checking and assigning 'skip_ondeck' and 'skip_watchlist' values 417 | skip_ondeck = settings_data.get('skip_ondeck') 418 | skip_watchlist = settings_data.get('skip_watchlist') 419 | 420 | skip_users = settings_data.get('skip_users') 421 | if skip_users is not None: 422 | skip_ondeck = settings_data.get('skip_ondeck', skip_users) 423 | skip_watchlist = settings_data.get('skip_watchlist', skip_users) 424 | del settings_data['skip_users'] 425 | else: 426 | skip_ondeck = settings_data.get('skip_ondeck', []) 427 | skip_watchlist = settings_data.get('skip_watchlist', []) 428 | 429 | watchlist_toggle = settings_data['watchlist_toggle'] 430 | watchlist_episodes = settings_data['watchlist_episodes'] 431 | watchlist_cache_expiry = settings_data['watchlist_cache_expiry'] 432 | 433 | watched_cache_expiry = settings_data['watched_cache_expiry'] 434 | watched_move = settings_data['watched_move'] 435 | 436 | plex_source_drive = settings_data.get('plex_source_drive') 437 | plex_source = add_trailing_slashes(settings_data['plex_source']) 438 | 439 | cache_dir_drive = settings_data.get('cache_dir_drive') 440 | cache_dir = remove_trailing_slashes(settings_data['cache_dir']) 441 | cache_dir = convert_path(cache_dir, 'cache_dir', settings_data, cache_dir_drive) 442 | cache_dir = add_trailing_slashes(settings_data['cache_dir']) 443 | 444 | real_source_drive = settings_data.get('real_source_drive') 445 | real_source = remove_trailing_slashes(settings_data['real_source']) 446 | real_source = convert_path(real_source, 'real_source', settings_data, real_source_drive) 447 | real_source = add_trailing_slashes(settings_data['real_source']) 448 | 449 | nas_library_folders = remove_all_slashes(settings_data['nas_library_folders']) 450 | plex_library_folders = remove_all_slashes(settings_data['plex_library_folders']) 451 | 452 | exit_if_active_session = settings_data.get('exit_if_active_session') 453 | if exit_if_active_session is None: 454 | exit_if_active_session = not settings_data.get('skip') 455 | del settings_data['skip'] 456 | 457 | max_concurrent_moves_array = settings_data['max_concurrent_moves_array'] 458 | max_concurrent_moves_cache = settings_data['max_concurrent_moves_cache'] 459 | 460 | deprecated_unraid = settings_data.get('unraid') 461 | if deprecated_unraid is not None: 462 | del settings_data['unraid'] 463 | except KeyError as e: 464 | # Error handling for missing key in settings file 465 | logging.critical(f"Error: {e} not found in settings file, please re-run the setup or manually edit the settings file.") 466 | exit(f"Error: {e} not found in settings file, please re-run the setup or manually edit the settings file.") 467 | 468 | try: 469 | # Save the updated settings data back to the file 470 | with open(settings_filename, 'w') as f: 471 | settings_data['cache_dir'] = cache_dir 472 | settings_data['real_source'] = real_source 473 | settings_data['plex_source'] = plex_source 474 | settings_data['nas_library_folders'] = nas_library_folders 475 | settings_data['plex_library_folders'] = plex_library_folders 476 | settings_data['skip_ondeck'] = skip_ondeck 477 | settings_data['skip_watchlist'] = skip_watchlist 478 | settings_data['exit_if_active_session'] = exit_if_active_session 479 | json.dump(settings_data, f, indent=4) 480 | except Exception as e: 481 | logging.error(f"Error occurred while saving settings data: {e}") 482 | raise 483 | 484 | # Initialising necessary arrays 485 | processed_files = [] 486 | files_to_skip = [] 487 | media_to = [] 488 | media_to_cache = [] 489 | media_to_array = [] 490 | move_commands = [] 491 | skip_cache = "--skip-cache" in sys.argv 492 | debug = "--debug" in sys.argv 493 | 494 | # Connect to the Plex server 495 | try: 496 | plex = PlexServer(PLEX_URL, PLEX_TOKEN) 497 | except Exception as e: 498 | logging.critical(f"Error connecting to the Plex server: {e}") 499 | exit(f"Error connecting to the Plex server: {e}") 500 | 501 | # Check if any active session 502 | sessions = plex.sessions() # Get the list of active sessions 503 | if sessions: # Check if there are any active sessions 504 | if exit_if_active_session: # Check if the 'exit_if_active_session' boolean is set to true 505 | logging.warning('There is an active session. Exiting...') 506 | exit('There is an active session. Exiting...') 507 | else: 508 | for session in sessions: # Iterate over each active session 509 | try: 510 | media = str(session.source()) # Get the source of the session 511 | media_id = media[media.find(":") + 1:media.find(":", media.find(":") + 1)] # Extract the media ID from the source 512 | media_item = plex.fetchItem(int(media_id)) # Fetch the media item using the media ID 513 | media_title = media_item.title # Get the title of the media item 514 | media_type = media_item.type # Get the media type (e.g., show, movie) 515 | if media_type == "episode": # Check if the media type is an episode 516 | show_title = media_item.grandparentTitle # Get the title of the show 517 | print(f"Active session detected, skipping: {show_title} - {media_title}") # Print a message indicating the active session with show and episode titles 518 | logging.warning(f"Active session detected, skipping: {show_title} - {media_title}") # Log a warning message about the active session with show and episode titles 519 | elif media_type == "movie": # Check if the media type is a movie 520 | print(f"Active session detected, skipping: {media_title}") # Print a message indicating the active session with the movie title 521 | logging.warning(f"Active session detected, skipping: {media_title}") # Log a warning message about the active session with the movie title 522 | media_path = media_item.media[0].parts[0].file # Get the file path of the media item 523 | logging.info(f"Skipping: {media_path}") 524 | files_to_skip.append(media_path) # Add the file path to the list of files to skip 525 | except Exception as e: 526 | logging.error(f"Error occurred while processing session: {session} - {e}") # Log an error message if an exception occurs while processing the session 527 | else: 528 | logging.info('No active sessions found. Proceeding...') # Log an info message indicating no active sessions were found, and proceed with the code execution 529 | 530 | # Check if debug mode is active 531 | if debug: 532 | print("Debug mode is active, NO FILE WILL BE MOVED.") 533 | logging.getLogger().setLevel(logging.DEBUG) 534 | logging.warning("Debug mode is active, NO FILE WILL BE MOVED.") 535 | logging.info(f"Real source: {real_source}") 536 | logging.info(f"Cache dir: {cache_dir}") 537 | logging.info(f"Plex source: {plex_source}") 538 | logging.info(f"NAS folders: ({nas_library_folders}") 539 | logging.info(f"Plex folders: {plex_library_folders}") 540 | else: 541 | logging.getLogger().setLevel(logging.INFO) 542 | 543 | # Main function to fetch onDeck media files 544 | def fetch_on_deck_media_main(plex, valid_sections, days_to_monitor, number_episodes, users_toggle, skip_ondeck): 545 | try: 546 | users_to_fetch = [None] # Start with main user (None) 547 | 548 | if users_toggle: 549 | users_to_fetch += plex.myPlexAccount().users() 550 | # Filter out the users present in skip_ondeck 551 | users_to_fetch = [user for user in users_to_fetch if (user is None) or (user.get_token(plex.machineIdentifier) not in skip_ondeck)] 552 | 553 | with ThreadPoolExecutor(max_workers=10) as executor: 554 | futures = {executor.submit(fetch_on_deck_media, plex, valid_sections, days_to_monitor, number_episodes, user) for user in users_to_fetch} 555 | for future in as_completed(futures): 556 | try: 557 | yield from future.result() 558 | except Exception as e: 559 | print(f"An error occurred in fetch_on_deck_media: {e}") 560 | logging.error(f"An error occurred while fetching onDeck media for a user: {e}") 561 | 562 | except Exception as e: 563 | print(f"An error occurred in fetch_on_deck_media: {e}") 564 | logging.error(f"An error occurred in fetch_on_deck_media_main: {e}") 565 | 566 | def fetch_on_deck_media(plex, valid_sections, days_to_monitor, number_episodes, user=None): 567 | try: 568 | username, plex = get_plex_instance(plex, user) # Get the username and Plex instance 569 | if not plex: # Check if Plex instance is available 570 | return [] # Return an empty list 571 | 572 | print(f"Fetching {username}'s onDeck media...") # Print a message indicating that onDeck media is being fetched 573 | logging.info(f"Fetching {username}'s onDeck media...") # Log the message indicating that onDeck media is being fetched 574 | 575 | on_deck_files = [] # Initialize an empty list to store onDeck files 576 | # Get all sections available for the user 577 | available_sections = [section.key for section in plex.library.sections()] 578 | 579 | # Intersect available_sections and valid_sections 580 | filtered_sections = list(set(available_sections) & set(valid_sections)) 581 | 582 | for video in plex.library.onDeck(): # Iterate through the onDeck videos in the Plex library 583 | section_key = video.section().key # Get the section key of the video 584 | if not filtered_sections or section_key in filtered_sections: # Check if filtered_sections is empty or the video belongs to a valid section 585 | delta = datetime.now() - video.lastViewedAt # Calculate the time difference between now and the last viewed time of the video 586 | if delta.days <= days_to_monitor: # Check if the video was viewed within the specified number of days 587 | if isinstance(video, Episode): # Check if the video is an episode 588 | process_episode_ondeck(video, number_episodes, on_deck_files) # Process the episode and add it to the onDeck files list 589 | elif isinstance(video, Movie): # Check if the video is a movie 590 | process_movie_ondeck(video, on_deck_files) # Process the movie and add it to the onDeck files list 591 | 592 | return on_deck_files # Return the list of onDeck files 593 | 594 | except Exception as e: # Handle any exceptions that occur 595 | print(f"An error occurred while fetching onDeck media: {e}") # Print an error message indicating the exception 596 | logging.error(f"An error occurred while fetching onDeck media: {e}") # Log an error message indicating the exception 597 | return [] # Return an empty list 598 | 599 | # Function to fetch the Plex instance 600 | def get_plex_instance(plex, user): 601 | if user: 602 | username = user.title # Get the username 603 | try: 604 | return username, PlexServer(PLEX_URL, user.get_token(plex.machineIdentifier)) # Return username and PlexServer instance with user token 605 | except Exception as e: 606 | print(f"Error: Failed to Fetch {username} onDeck media. Error: {e}") # Print error message if failed to fetch onDeck media for the user 607 | logging.error(f"Error: Failed to Fetch {username} onDeck media. Error: {e}") # Log the error 608 | return None, None 609 | else: 610 | username = plex.myPlexAccount().title # Get the username from the Plex account 611 | return username, PlexServer(PLEX_URL, PLEX_TOKEN) # Return username and PlexServer instance with account token 612 | 613 | # Function to process the onDeck media files 614 | def process_episode_ondeck(video, number_episodes, on_deck_files): 615 | for media in video.media: 616 | on_deck_files.extend(part.file for part in media.parts) # Add file paths of media parts to onDeck files list 617 | show = video.grandparentTitle # Get the title of the show 618 | library_section = video.section() # Get the library section of the video 619 | episodes = list(library_section.search(show)[0].episodes()) # Search the library section for episodes of the show 620 | current_season = video.parentIndex # Get the index of the current season 621 | next_episodes = get_next_episodes(episodes, current_season, video.index, number_episodes) # Get the next episodes based on the current episode and season 622 | for episode in next_episodes: 623 | for media in episode.media: 624 | on_deck_files.extend(part.file for part in media.parts) # Add file paths of media parts of the next episodes to onDeck files list 625 | for part in media.parts: 626 | logging.info(f"OnDeck found: {(part.file)}") # Log the file path of the onDeck media part 627 | 628 | # Function to process the onDeck movies files 629 | def process_movie_ondeck(video, on_deck_files): 630 | for media in video.media: 631 | on_deck_files.extend(part.file for part in media.parts) # Add file paths of media parts to onDeck files list 632 | for part in media.parts: 633 | logging.info(f"OnDeck found: {(part.file)}") # Log the file path of the onDeck media part 634 | 635 | # Function to get the next episodes 636 | def get_next_episodes(episodes, current_season, current_episode_index, number_episodes): 637 | next_episodes = [] 638 | for episode in episodes: 639 | if (episode.parentIndex > current_season or (episode.parentIndex == current_season and episode.index > current_episode_index)) and len(next_episodes) < number_episodes: 640 | next_episodes.append(episode) # Add the episode to the next_episodes list if it comes after the current episode 641 | if len(next_episodes) == number_episodes: 642 | break # Stop iterating if the desired number of next episodes is reached 643 | return next_episodes # Return the list of next episodes 644 | 645 | # Function to search for a file in the Plex server 646 | def search_plex(plex, title): 647 | results = plex.search(title) 648 | return results[0] if len(results) > 0 else None 649 | 650 | def fetch_watchlist_media(plex, valid_sections, watchlist_episodes, users_toggle, skip_watchlist): 651 | 652 | def get_watchlist(token, user=None, retries=0): 653 | # Retrieve the watchlist for the specified user's token. 654 | account = MyPlexAccount(token=token) 655 | try: 656 | if user: 657 | account = account.switchHomeUser(f'{user.title}') 658 | return account.watchlist(filter='released') 659 | except (BadRequest, NotFound) as e: 660 | if "429" in str(e) and retries < RETRY_LIMIT: # Rate limit exceeded 661 | logging.warning(f"Rate limit exceeded. Retrying {retries + 1}/{RETRY_LIMIT}. Sleeping for {DELAY} seconds...") 662 | time.sleep(DELAY) 663 | return get_watchlist(token, user, retries + 1) 664 | elif isinstance(e, NotFound): 665 | logging.warning(f"Failed to switch to user {user.title if user else 'Unknown'}. Skipping...") 666 | return [] 667 | else: 668 | raise e 669 | 670 | 671 | def process_show(file, watchlist_episodes): 672 | #Process episodes of a TV show file up to a specified number. 673 | episodes = file.episodes() 674 | count = 0 675 | for episode in episodes[:watchlist_episodes]: 676 | if len(episode.media) > 0 and len(episode.media[0].parts) > 0: 677 | count += 1 678 | if not episode.isPlayed: 679 | yield episode.media[0].parts[0].file 680 | 681 | def process_movie(file): 682 | #Process a movie file. 683 | if not file.isPlayed: 684 | yield file.media[0].parts[0].file 685 | 686 | def fetch_user_watchlist(user): 687 | current_username = plex.myPlexAccount().title if user is None else user.title 688 | available_sections = [section.key for section in plex.library.sections()] 689 | filtered_sections = list(set(available_sections) & set(valid_sections)) 690 | 691 | if user and user.get_token(plex.machineIdentifier) in skip_watchlist: 692 | logging.info(f"Skipping {current_username}'s watchlist media...") 693 | return [] 694 | 695 | logging.info(f"Fetching {current_username}'s watchlist media...") 696 | try: 697 | watchlist = get_watchlist(PLEX_TOKEN, user) 698 | results = [] 699 | 700 | for item in watchlist: 701 | file = search_plex(plex, item.title) 702 | if file and (not filtered_sections or (file.librarySectionID in filtered_sections)): 703 | if file.TYPE == 'show': 704 | results.extend(process_show(file, watchlist_episodes)) 705 | else: 706 | results.extend(process_movie(file)) 707 | return results 708 | except Exception as e: 709 | logging.error(f"Error fetching watchlist for {current_username}: {str(e)}") 710 | return [] 711 | 712 | users_to_fetch = [None] # Start with main user (None) 713 | if users_toggle: 714 | users_to_fetch += plex.myPlexAccount().users() 715 | 716 | with ThreadPoolExecutor(max_workers=10) as executor: 717 | futures = {executor.submit(fetch_user_watchlist, user) for user in users_to_fetch} 718 | for future in as_completed(futures): 719 | retries = 0 720 | while retries < RETRY_LIMIT: 721 | try: 722 | yield from future.result() 723 | break 724 | except Exception as e: 725 | if "429" in str(e): # rate limit error 726 | logging.warning(f"Rate limit exceeded. Retrying in {DELAY} seconds...") 727 | time.sleep(DELAY) 728 | retries += 1 729 | else: 730 | logging.error(f"Error fetching watchlist media: {str(e)}") 731 | break 732 | 733 | # Function to fetch watched media files 734 | def get_watched_media(plex, valid_sections, last_updated, users_toggle): 735 | def fetch_user_watched_media(plex_instance, username, retries=0): 736 | try: 737 | print(f"Fetching {username}'s watched media...") 738 | logging.info(f"Fetching {username}'s watched media...") 739 | # Get all sections available for the user 740 | all_sections = [section.key for section in plex_instance.library.sections()] 741 | # Check if valid_sections is specified. If not, consider all available sections as valid. 742 | if 'valid_sections' in globals() and valid_sections: 743 | available_sections = list(set(all_sections) & set(valid_sections)) 744 | else: 745 | available_sections = all_sections 746 | # Filter sections the user has access to 747 | user_accessible_sections = [section for section in available_sections if section in all_sections] 748 | for section_key in user_accessible_sections: 749 | section = plex_instance.library.sectionByID(section_key) # Get the section object using its key 750 | # Search for videos in the section 751 | for video in section.search(unwatched=False): 752 | # Skip if the video was last viewed before the last_updated timestamp 753 | if video.lastViewedAt and last_updated and video.lastViewedAt < datetime.fromtimestamp(last_updated): 754 | continue 755 | # Process the video and yield the file path 756 | yield from process_video(video) 757 | 758 | except (BadRequest, NotFound) as e: 759 | if "429" in str(e) and retries < RETRY_LIMIT: # Rate limit exceeded 760 | print(f"Rate limit exceeded. Retrying {retries + 1}/{RETRY_LIMIT}. Sleeping for {DELAY} seconds...") 761 | logging.warning(f"Rate limit exceeded. Retrying {retries + 1}/{RETRY_LIMIT}. Sleeping for {DELAY} seconds...") 762 | time.sleep(DELAY) 763 | return fetch_user_watched_media(user_plex, username, retries + 1) 764 | elif isinstance(e, NotFound): 765 | print(f"Failed to switch to user {user.title if user else 'Unknown'}. Skipping...") 766 | logging.warning(f"Failed to switch to user {user.title if user else 'Unknown'}. Skipping...") 767 | return [] 768 | else: 769 | raise e 770 | 771 | def process_video(video): 772 | if video.TYPE == 'show': 773 | # Iterate through each episode of a show video 774 | for episode in video.episodes(): 775 | yield from process_episode(episode) 776 | else: 777 | # Get the file path of the video 778 | #if video.isPlayed: 779 | file_path = video.media[0].parts[0].file 780 | yield file_path 781 | 782 | def process_episode(episode): 783 | # Iterate through each media and part of an episode 784 | for media in episode.media: 785 | for part in media.parts: 786 | if episode.isPlayed: 787 | # Get the file path of the played episode 788 | file_path = part.file 789 | yield file_path 790 | 791 | # Create a ThreadPoolExecutor 792 | with ThreadPoolExecutor() as executor: 793 | main_username = plex.myPlexAccount().title 794 | 795 | # Start a new task for the main user 796 | futures = [executor.submit(fetch_user_watched_media, plex, main_username)] 797 | 798 | if users_toggle: 799 | for user in plex.myPlexAccount().users(): 800 | username = user.title 801 | user_token = user.get_token(plex.machineIdentifier) 802 | user_plex = PlexServer(PLEX_URL, user_token) 803 | 804 | # Start a new task for each other user 805 | futures.append(executor.submit(fetch_user_watched_media, user_plex, username)) 806 | 807 | # As each task completes, yield the results 808 | for future in as_completed(futures): 809 | try: 810 | yield from future.result() 811 | except Exception as e: 812 | print(f"An error occurred in get_watched_media: {e}") 813 | logging.error(f"An error occurred in get_watched_media: {e}") 814 | 815 | # Function to load watched media from cache 816 | def load_media_from_cache(cache_file): 817 | if cache_file.exists(): 818 | with cache_file.open('r') as f: 819 | try: 820 | data = json.load(f) 821 | if isinstance(data, dict): 822 | return set(data.get('media', [])), data.get('timestamp') 823 | elif isinstance(data, list): 824 | # cache file contains just a list of media, without timestamp 825 | return set(data), None 826 | except json.JSONDecodeError: 827 | # Clear the file and return an empty set 828 | with cache_file.open('w') as f: 829 | f.write(json.dumps({'media': [], 'timestamp': None})) 830 | return set(), None 831 | return set(), None 832 | 833 | # Modify the files paths from the paths given by plex to link actual files on the running system 834 | def modify_file_paths(files, plex_source, real_source, plex_library_folders, nas_library_folders): 835 | # Print and log a message indicating that file paths are being edited 836 | print("Editing file paths...") 837 | logging.info("Editing file paths...") 838 | 839 | # If no files are provided, return an empty list 840 | if files is None: 841 | return [] 842 | 843 | # Filter the files based on those that start with the plex_source path 844 | files = [file_path for file_path in files if file_path.startswith(plex_source)] 845 | 846 | # Iterate over each file path and modify it accordingly 847 | for i, file_path in enumerate(files): 848 | # Log the original file path 849 | logging.info(f"Original path: {file_path}") 850 | 851 | # Replace the plex_source with the real_source in the file path 852 | file_path = file_path.replace(plex_source, real_source, 1) # Replace the plex_source with the real_source, thanks to /u/planesrfun 853 | 854 | # Determine which library folder is in the file path 855 | for j, folder in enumerate(plex_library_folders): 856 | if folder in file_path: 857 | # Replace the plex library folder with the corresponding NAS library folder 858 | file_path = file_path.replace(folder, nas_library_folders[j]) 859 | break 860 | 861 | # Update the modified file path in the files list 862 | files[i] = file_path 863 | 864 | # Log the edited file path 865 | logging.info(f"Edited path: {file_path}") 866 | 867 | # Return the modified file paths or an empty list 868 | return files or [] 869 | 870 | def get_media_subtitles(media_files, files_to_skip=None, subtitle_extensions=[".srt", ".vtt", ".sbv", ".sub", ".idx"]): 871 | print("Fetching subtitles...") 872 | logging.info("Fetching subtitles...") 873 | 874 | files_to_skip = set() if files_to_skip is None else set(files_to_skip) 875 | processed_files = set() 876 | all_media_files = media_files.copy() 877 | 878 | for file in media_files: 879 | if file in files_to_skip or file in processed_files: 880 | continue 881 | processed_files.add(file) 882 | 883 | directory_path = os.path.dirname(file) 884 | if os.path.exists(directory_path): 885 | subtitle_files = find_subtitle_files(directory_path, file, subtitle_extensions) 886 | all_media_files.extend(subtitle_files) 887 | for subtitle_file in subtitle_files: 888 | logging.info(f"Subtitle found: {subtitle_file}") 889 | 890 | return all_media_files or [] 891 | 892 | def find_subtitle_files(directory_path, file, subtitle_extensions): 893 | file_name, _ = os.path.splitext(os.path.basename(file)) 894 | 895 | try: 896 | subtitle_files = [ 897 | entry.path 898 | for entry in os.scandir(directory_path) 899 | if entry.is_file() and entry.name.startswith(file_name) and entry.name != file and entry.name.endswith(tuple(subtitle_extensions)) 900 | ] 901 | except PermissionError as e: 902 | logging.error(f"Cannot access directory {directory_path}. Permission denied. Error: {e}") 903 | subtitle_files = [] 904 | except OSError as e: 905 | logging.error(f"Cannot access directory {directory_path}. Error: {e}") 906 | subtitle_files = [] 907 | 908 | return subtitle_files or [] 909 | 910 | # Function to convert size to readable format 911 | def convert_bytes_to_readable_size(size_bytes): 912 | if size_bytes >= (1024 ** 4): 913 | size = size_bytes / (1024 ** 4) 914 | unit = 'TB' 915 | elif size_bytes >= (1024 ** 3): 916 | size = size_bytes / (1024 ** 3) 917 | unit = 'GB' 918 | elif size_bytes >= (1024 ** 2): 919 | size = size_bytes / (1024 ** 2) 920 | unit = 'MB' 921 | else: 922 | size = size_bytes / 1024 923 | unit = 'KB' 924 | 925 | # Return the size and corresponding unit 926 | return size, unit 927 | 928 | # Function to check for free space 929 | def get_free_space(dir): 930 | if not os.path.exists(dir): 931 | logging.error(f"Invalid path, unable to calculate free space for: {dir}.") 932 | return 0 933 | stat = os.statvfs(dir) # Get the file system statistics for the specified directory 934 | free_space_bytes = stat.f_bfree * stat.f_frsize # Calculate the free space in bytes 935 | return convert_bytes_to_readable_size(free_space_bytes) # Convert the free space to a human-readable format 936 | 937 | # Function to calculate size of the files contained in the given array 938 | def get_total_size_of_files(files): 939 | total_size_bytes = sum(os.path.getsize(file) for file in files) # Calculate the total size of the files in bytes 940 | return convert_bytes_to_readable_size(total_size_bytes) # Convert the total size to a human-readable format 941 | 942 | # Function to filter the files, based on the destination 943 | def filter_files(files, destination, real_source, cache_dir, media_to_cache=None, files_to_skip=None): 944 | logging.info(f"Filtering media files for {destination}...") 945 | 946 | if files_to_skip: 947 | # Assuming you have a function modify_file_paths() that returns modified file paths. 948 | files_to_skip = modify_file_paths(files_to_skip, plex_source, real_source, plex_library_folders, nas_library_folders) 949 | 950 | try: 951 | if media_to_cache is None: 952 | media_to_cache = [] 953 | 954 | processed_files = set() 955 | media_to = [] 956 | cache_files_to_exclude = [] 957 | 958 | if not files: 959 | return [] 960 | 961 | for file in files: 962 | if file in processed_files or (files_to_skip and file in files_to_skip): 963 | continue 964 | processed_files.add(file) 965 | 966 | cache_file_name = get_cache_paths(file, real_source, cache_dir)[1] 967 | # Get the cache file name using the file's path, real_source, and cache_dir 968 | cache_files_to_exclude.append(cache_file_name) 969 | 970 | if destination == 'array': 971 | if should_add_to_array(file, cache_file_name, media_to_cache): 972 | media_to.append(file) 973 | logging.info(f"Adding file to array: {file}") 974 | 975 | elif destination == 'cache': 976 | if should_add_to_cache(file, cache_file_name): 977 | media_to.append(file) 978 | logging.info(f"Adding file to cache: {file}") 979 | 980 | if unraid: 981 | with open(mover_cache_exclude_file, "w") as file: 982 | for item in cache_files_to_exclude: 983 | file.write(str(item) + "\n") 984 | 985 | return media_to or [] 986 | 987 | except Exception as e: 988 | logging.error(f"Error occurred while filtering media files: {str(e)}") 989 | return [] 990 | 991 | def should_add_to_array(file, cache_file_name, media_to_cache): 992 | if file in media_to_cache: 993 | return False 994 | 995 | array_file = file.replace("/mnt/user/", "/mnt/user0/", 1) if unraid else file 996 | 997 | if os.path.isfile(array_file): 998 | # File already exists in the array 999 | if os.path.isfile(cache_file_name): 1000 | os.remove(cache_file_name) 1001 | logging.info(f"Removed cache version of file: {cache_file_name}") 1002 | return False # No need to add to array 1003 | return True # Otherwise, the file should be added to the array 1004 | 1005 | # Revised function 1006 | def should_add_to_cache(file, cache_file_name): 1007 | array_file = file.replace("/mnt/user/", "/mnt/user0/", 1) if unraid else file 1008 | 1009 | if os.path.isfile(cache_file_name) and os.path.isfile(array_file): 1010 | # Uncomment the following line if you want to remove the array version when the file exists in the cache 1011 | os.remove(array_file) 1012 | logging.info(f"Removed array version of file: {array_file}") 1013 | return False 1014 | return not os.path.isfile(cache_file_name) 1015 | 1016 | 1017 | # Check for free space before executing moving process 1018 | def check_free_space_and_move_files(media_files, destination, real_source, cache_dir, unraid, debug): 1019 | global files_moved, summary_messages 1020 | media_files_filtered = filter_files(media_files, destination, real_source, cache_dir, media_to_cache, files_to_skip) # Filter the media files based on certain criteria 1021 | total_size, total_size_unit = get_total_size_of_files(media_files_filtered) # Get the total size of the filtered media files 1022 | if total_size > 0: # If there are media files to be moved 1023 | logging.info(f"Total size of media files to be moved to {destination}: {total_size:.2f} {total_size_unit}") # Log the total size of media files 1024 | print(f"Total size of media files to be moved to {destination}: {total_size:.2f} {total_size_unit}") # Print the total size of media files 1025 | if files_moved: 1026 | summary_messages.append(f"Total size of media files moved to {destination}: {total_size:.2f} {total_size_unit}") 1027 | else: 1028 | summary_messages = [f"Total size of media files moved to {destination}: {total_size:.2f} {total_size_unit}"] 1029 | files_moved = True 1030 | free_space, free_space_unit = get_free_space(destination == 'cache' and cache_dir or real_source) # Get the free space on the destination drive 1031 | print(f"Free space on the {destination}: {free_space:.2f} {free_space_unit}") # Print the free space on the destination drive 1032 | logging.info(f"Free space on the {destination}: {free_space:.2f} {free_space_unit}") # Log the free space on the destination drive 1033 | if total_size * (1024 ** {'KB': 0, 'MB': 1, 'GB': 2, 'TB': 3}[total_size_unit]) > free_space * (1024 ** {'KB': 0, 'MB': 1, 'GB': 2, 'TB': 3}[free_space_unit]): 1034 | # If the total size of media files is greater than the free space on the destination drive 1035 | if not debug: 1036 | exit(f"Not enough space on {destination} drive.") 1037 | logging.critical(f"Not enough space on {destination} drive..") 1038 | else: 1039 | print(f"Not enough space on {destination} drive.") 1040 | logging.error(f"Not enough space on {destination} drive..") 1041 | logging.info(f"Moving media to {destination}...") # Log the start of the media moving process 1042 | print(f"Moving media to {destination}...") # Print the start of the media moving process 1043 | move_media_files(media_files_filtered, real_source, cache_dir, unraid, debug, destination, max_concurrent_moves_array, max_concurrent_moves_cache) # Move the media files to the destination 1044 | else: 1045 | print(f"Nothing to move to {destination}") # If there are no media files to move, print a message 1046 | logging.info(f"Nothing to move to {destination}") # If there are no media files to move, log a message 1047 | if not files_moved: 1048 | summary_messages = ["There were no files to move to any destination."] 1049 | else: 1050 | summary_messages.append("") 1051 | 1052 | # Function to check if given path exists, is a directory and the script has writing permissions 1053 | def check_path_exists(path): 1054 | # Check if the path exists 1055 | if not os.path.exists(path): 1056 | logging.critical(f"Path {path} does not exist.") 1057 | exit(f"Path {path} does not exist.") 1058 | 1059 | # Check if the path is a directory 1060 | if not os.path.isdir(path): 1061 | logging.critical(f"Path {path} is not a directory.") 1062 | exit(f"Path {path} is not a directory.") 1063 | 1064 | # Check if the script has writing permissions for the path 1065 | if not os.access(path, os.W_OK): 1066 | logging.critical(f"Path {path} is not writable.") 1067 | exit(f"Path {path} is not writable.") 1068 | 1069 | # Created the move command that gets executed from the function above 1070 | def move_media_files(files, real_source, cache_dir, unraid, debug, destination, max_concurrent_moves_array, max_concurrent_moves_cache): 1071 | # Print and log the destination directory 1072 | print(f"Moving media files to {destination}...") 1073 | logging.info(f"Moving media files to {destination}...") 1074 | 1075 | # Initialize the set of files to skip and the set of processed files 1076 | processed_files = set() 1077 | move_commands = [] 1078 | 1079 | # Iterate over each file to move 1080 | for file_to_move in files: 1081 | # Skip the file if it has already been processed 1082 | if file_to_move in processed_files: 1083 | continue 1084 | 1085 | # Add the file to the set of processed files 1086 | processed_files.add(file_to_move) 1087 | 1088 | # Get the user path, cache path, cache file name, and user file name 1089 | user_path, cache_path, cache_file_name, user_file_name = get_paths(file_to_move, real_source, cache_dir, unraid) 1090 | 1091 | # Get the move command for the current file 1092 | move = get_move_command(destination, cache_file_name, user_path, user_file_name, cache_path) 1093 | 1094 | # If a move command is obtained, append it to the list of move commands 1095 | if move is not None: 1096 | move_commands.append(move) 1097 | 1098 | # Execute the move commands 1099 | execute_move_commands(debug, move_commands, max_concurrent_moves_array, max_concurrent_moves_cache, destination) 1100 | 1101 | # Function to get the paths of the user and cache directories 1102 | def get_paths(file_to_move, real_source, cache_dir, unraid): 1103 | # Get the user path 1104 | user_path = os.path.dirname(file_to_move) 1105 | 1106 | # Get the relative path from the real source directory 1107 | relative_path = os.path.relpath(user_path, real_source) 1108 | 1109 | # Get the cache path by joining the cache directory with the relative path 1110 | cache_path = os.path.join(cache_dir, relative_path) 1111 | 1112 | # Get the cache file name by joining the cache path with the base name of the file to move 1113 | cache_file_name = os.path.join(cache_path, os.path.basename(file_to_move)) 1114 | 1115 | # Modify the user path if unraid is True 1116 | if unraid: 1117 | user_path = user_path.replace("/mnt/user/", "/mnt/user0/", 1) 1118 | 1119 | # Get the user file name by joining the user path with the base name of the file to move 1120 | user_file_name = os.path.join(user_path, os.path.basename(file_to_move)) 1121 | 1122 | return user_path, cache_path, cache_file_name, user_file_name 1123 | 1124 | # Locates the given file in the cache 1125 | def get_cache_paths(file, real_source, cache_dir): 1126 | # Get the cache path by replacing the real source directory with the cache directory 1127 | cache_path = os.path.dirname(file).replace(real_source, cache_dir, 1) 1128 | 1129 | # Get the cache file name by joining the cache path with the base name of the file 1130 | cache_file_name = os.path.join(cache_path, os.path.basename(file)) 1131 | 1132 | return cache_path, cache_file_name 1133 | 1134 | def move_file(move_cmd): 1135 | src, dest = move_cmd 1136 | try: 1137 | if os_linux: 1138 | stat_info = os.stat(src) 1139 | uid = stat_info.st_uid 1140 | gid = stat_info.st_gid 1141 | # Move the file first 1142 | shutil.move(src, dest) 1143 | # Then set the owner and group to the original values 1144 | os.chown(dest, uid, gid) 1145 | original_umask = os.umask(0) 1146 | os.chmod(dest, permissions) 1147 | os.umask(original_umask) 1148 | else: # Windows logic 1149 | shutil.move(src, dest) 1150 | # For more granular Windows permissions, you'd use the win32security module here. 1151 | logging.info(f"Moved file from {src} to {dest} with original permissions and owner.") 1152 | return 0 1153 | except (FileNotFoundError, PermissionError, Exception) as e: 1154 | logging.error(f"Error moving file: {str(e)}") 1155 | return 1 1156 | 1157 | #Helper function to create directory and set permissions 1158 | def create_directory_with_permissions(path, src_file_for_permissions): 1159 | if not os.path.exists(path): 1160 | if os_linux: # POSIX platform (Linux/Unix) 1161 | # Get the permissions of the source file 1162 | stat_info = os.stat(src_file_for_permissions) 1163 | mode = stat_info.st_mode 1164 | uid = stat_info.st_uid 1165 | gid = stat_info.st_gid 1166 | original_umask = os.umask(0) 1167 | os.makedirs(path, exist_ok=True) 1168 | os.chown(path, uid, gid) 1169 | os.chmod(path, permissions) 1170 | os.umask(original_umask) 1171 | else: # Windows platform 1172 | os.makedirs(path, exist_ok=True) 1173 | 1174 | #Function to get the move command for the given file 1175 | def get_move_command(destination, cache_file_name, user_path, user_file_name, cache_path): 1176 | move = None 1177 | if destination == 'array': 1178 | create_directory_with_permissions(user_path, cache_file_name) 1179 | if os.path.isfile(cache_file_name): 1180 | move = (cache_file_name, user_path) 1181 | elif destination == 'cache': 1182 | create_directory_with_permissions(cache_path, user_file_name) 1183 | if not os.path.isfile(cache_file_name): 1184 | move = (user_file_name, cache_path) 1185 | return move 1186 | 1187 | # Function to execute the given move commands 1188 | def execute_move_commands(debug, move_commands, max_concurrent_moves_array, max_concurrent_moves_cache, destination): 1189 | if debug: 1190 | for move_cmd in move_commands: 1191 | print(move_cmd) # Print the move command 1192 | logging.info(move_cmd) # Log the move command 1193 | else: 1194 | max_concurrent_moves = max_concurrent_moves_array if destination == 'array' else max_concurrent_moves_cache 1195 | with ThreadPoolExecutor(max_workers=max_concurrent_moves) as executor: 1196 | results = executor.map(move_file, move_commands) # Move the files using multiple threads 1197 | errors = [result for result in results if result != 0] # Collect any non-zero error codes 1198 | print(f"Finished moving files with {len(errors)} errors.") # Print the number of errors encountered during file moves 1199 | logging.info(f"Finished moving files with {len(errors)} errors.") 1200 | 1201 | def convert_time(execution_time_seconds): 1202 | # Calculate days, hours, minutes, and seconds 1203 | days, remainder = divmod(execution_time_seconds, 86400) 1204 | hours, remainder = divmod(remainder, 3600) 1205 | minutes, seconds = divmod(remainder, 60) 1206 | 1207 | # Create a human-readable string for the result 1208 | result_str = "" 1209 | if days > 0: 1210 | result_str += f"{int(days)} day{'s' if days > 1 else ''}, " 1211 | if hours > 0: 1212 | result_str += f"{int(hours)} hour{'s' if hours > 1 else ''}, " 1213 | if minutes > 0: 1214 | result_str += f"{int(minutes)} minute{'s' if minutes > 1 else ''}, " 1215 | if seconds > 0: 1216 | result_str += f"{int(seconds)} second{'s' if seconds > 1 else ''}" 1217 | 1218 | return result_str.rstrip(", ") 1219 | 1220 | # Function to check if internet is available 1221 | def is_connected(): 1222 | try: 1223 | socket.gethostbyname("www.google.com") 1224 | return True 1225 | except socket.error: 1226 | return False 1227 | 1228 | # Checks if the paths exists and are accessible 1229 | for path in [real_source, cache_dir]: 1230 | check_path_exists(path) 1231 | 1232 | # Fetch OnDeck Media 1233 | media_to_cache.extend(fetch_on_deck_media_main(plex, valid_sections, days_to_monitor, number_episodes, users_toggle, skip_ondeck)) 1234 | 1235 | # Edit file paths for the above fetched media 1236 | media_to_cache = modify_file_paths(media_to_cache, plex_source, real_source, plex_library_folders, nas_library_folders) 1237 | 1238 | # Fetches subtitles for the above fetched media 1239 | media_to_cache.extend(get_media_subtitles(media_to_cache, files_to_skip=files_to_skip)) 1240 | 1241 | # Watchlist logic: 1242 | # It will check if there is internet connection as plexapi requires to use a method which uses their server rather than plex 1243 | # If internet is not available or the cache is within the expiry date, it will use the cached file. 1244 | if watchlist_toggle: 1245 | try: 1246 | # Load previously watched media from cache and get the last update time 1247 | watchlist_media_set, last_updated = load_media_from_cache(watchlist_cache_file) 1248 | current_watchlist_set = set() 1249 | 1250 | if is_connected(): 1251 | # To fetch the watchlist media, internet connection is required due to a plexapi limitation 1252 | 1253 | # Check if the cache file doesn't exist, debug mode is enabled, or cache has expired 1254 | if skip_cache or (not watchlist_cache_file.exists()) or (debug) or (datetime.now() - datetime.fromtimestamp(watchlist_cache_file.stat().st_mtime) > timedelta(hours=watchlist_cache_expiry)): 1255 | print("Fetching watchlist media...") 1256 | logging.info("Fetching watchlist media...") 1257 | 1258 | # Fetch the watchlist media from Plex server 1259 | fetched_watchlist = fetch_watchlist_media(plex, valid_sections, watchlist_episodes, users_toggle=users_toggle, skip_watchlist=skip_watchlist) 1260 | 1261 | # Add new media paths to the cache 1262 | for file_path in fetched_watchlist: 1263 | current_watchlist_set.add(file_path) 1264 | if file_path not in watchlist_media_set: 1265 | media_to_cache.append(file_path) 1266 | 1267 | # Remove media that no longer exists in the watchlist 1268 | watchlist_media_set.intersection_update(current_watchlist_set) 1269 | 1270 | # Add new media to the watchlist media set 1271 | watchlist_media_set.update(media_to_cache) 1272 | 1273 | # Modify file paths and add subtitles 1274 | media_to_cache = modify_file_paths(media_to_cache, plex_source, real_source, plex_library_folders, nas_library_folders) 1275 | media_to_cache.extend(get_media_subtitles(media_to_cache, files_to_skip=files_to_skip)) 1276 | 1277 | # Update the cache file with the updated watchlist media set 1278 | with watchlist_cache_file.open('w') as f: 1279 | json.dump({'media': list(media_to_cache), 'timestamp': datetime.now().timestamp()}, f) 1280 | else: 1281 | # Load watchlist media from cache 1282 | print("Loading watchlist media from cache...") 1283 | logging.info("Loading watchlist media from cache...") 1284 | media_to_cache.extend(watchlist_media_set) 1285 | else: 1286 | # Handle no internet connection scenario 1287 | print("Unable to connect to the internet, skipping fetching new watchlist media due to plexapi limitation.") 1288 | logging.warning("Unable to connect to the internet, skipping fetching new watchlist media due to plexapi limitation.") 1289 | 1290 | # Load watchlist media from cache 1291 | print("Loading watchlist media from cache...") 1292 | logging.info("Loading watchlist media from cache...") 1293 | media_to_cache.extend(watchlist_media_set) 1294 | except Exception as e: 1295 | # Handle any exceptions that occur while processing the watchlist 1296 | print("An error occurred while processing the watchlist.") 1297 | logging.error("An error occurred while processing the watchlist: %s", str(e)) 1298 | 1299 | # Watched media logic 1300 | if watched_move: 1301 | try: 1302 | # Load watched media from cache 1303 | watched_media_set, last_updated = load_media_from_cache(watched_cache_file) 1304 | current_media_set = set() 1305 | 1306 | # Check if cache file doesn't exist or debug mode is enabled 1307 | if skip_cache or not watched_cache_file.exists() or debug or (datetime.now() - datetime.fromtimestamp(watched_cache_file.stat().st_mtime) > timedelta(hours=watched_cache_expiry)): 1308 | print("Fetching watched media...") 1309 | logging.info("Fetching watched media...") 1310 | 1311 | # Get watched media from Plex server 1312 | fetched_media = get_watched_media(plex, valid_sections, last_updated, users_toggle=users_toggle) 1313 | 1314 | # Add fetched media to the current media set 1315 | for file_path in fetched_media: 1316 | current_media_set.add(file_path) 1317 | 1318 | # Check if file is not already in the watched media set 1319 | if file_path not in watched_media_set: 1320 | media_to_array.append(file_path) 1321 | 1322 | # Add new media to the watched media set 1323 | watched_media_set.update(media_to_array) 1324 | 1325 | # Modify file paths and add subtitles 1326 | media_to_array = modify_file_paths(media_to_array, plex_source, real_source, plex_library_folders, nas_library_folders) 1327 | media_to_array.extend(get_media_subtitles(media_to_array, files_to_skip)) 1328 | 1329 | # Save updated watched media set to cache file 1330 | with watched_cache_file.open('w') as f: 1331 | json.dump({'media': list(media_to_array), 'timestamp': datetime.now().timestamp()}, f) 1332 | 1333 | else: 1334 | print("Loading watched media from cache...") 1335 | logging.info("Loading watched media from cache...") 1336 | # Add watched media from cache to the media array 1337 | media_to_array.extend(watched_media_set) 1338 | 1339 | except Exception as e: 1340 | # Handle any exceptions that occur while processing the watched media 1341 | print("An error occurred while processing the watched media.") 1342 | logging.error("An error occurred while processing the watched media: %s", str(e)) 1343 | 1344 | try: 1345 | # Check free space and move files 1346 | check_free_space_and_move_files(media_to_array, 'array', real_source, cache_dir, unraid, debug) 1347 | except Exception as e: 1348 | if not debug: 1349 | logging.critical(f"Error checking free space and moving media files to the array: {str(e)}") 1350 | exit(f"Error: {str(e)}") 1351 | else: 1352 | logging.error(f"Error checking free space and moving media files to the array: {str(e)}") 1353 | print(f"Error: {str(e)}") 1354 | 1355 | # Moving the files to the cache drive 1356 | try: 1357 | check_free_space_and_move_files(media_to_cache, 'cache', real_source, cache_dir, unraid, debug) 1358 | except Exception as e: 1359 | if not debug: 1360 | logging.critical(f"Error checking free space and moving media files to the cache: {str(e)}") 1361 | exit(f"Error: {str(e)}") 1362 | else: 1363 | logging.error(f"Error checking free space and moving media files to the cache: {str(e)}") 1364 | print(f"Error: {str(e)}") 1365 | 1366 | end_time = time.time() # record end time 1367 | execution_time_seconds = end_time - start_time # calculate execution time 1368 | execution_time = convert_time(execution_time_seconds) 1369 | 1370 | summary_messages.append(f"The script took approximately {execution_time} to execute.") 1371 | summary_message = ' '.join(summary_messages) 1372 | 1373 | logger.log(SUMMARY, summary_message) 1374 | 1375 | print(f"Execution time of the script: {execution_time}") 1376 | logging.info(f"Execution time of the script: {execution_time}") 1377 | 1378 | print("Thank you for using bexem's script: \nhttps://github.com/bexem/PlexCache") 1379 | logging.info("Thank you for using bexem's script: https://github.com/bexem/PlexCache") 1380 | logging.info("Also special thanks to: - /u/teshiburu2020 - /u/planesrfun - /u/trevski13 - /u/extrobe - /u/dsaunier-sunlight") 1381 | logging.info("*** The End ***") 1382 | logging.shutdown() 1383 | print("*** The End ***") -------------------------------------------------------------------------------- /plexcache_settings.json: -------------------------------------------------------------------------------- 1 | { 2 | "firststart": true, 3 | "PLEX_URL": "https://plex.domain.ext", 4 | "PLEX_TOKEN": "YourPlexTokenGoesHere", 5 | "plex_source": "/media/", 6 | "plex_library_folders": [ 7 | "movies", 8 | "anime", 9 | "tvseries" 10 | ], 11 | "valid_sections": [ 12 | 1, 13 | 2, 14 | 3 15 | ], 16 | "number_episodes": 10, 17 | "users_toggle": true, 18 | "watchlist_toggle": true, 19 | "watchlist_episodes": 5, 20 | "watchlist_cache_expiry": 48, 21 | "days_to_monitor": 183, 22 | "watched_move": true, 23 | "watched_cache_expiry": 48, 24 | "cache_dir": "/mnt/cache/", 25 | "real_source": "/mnt/user/", 26 | "nas_library_folders": [ 27 | "movies", 28 | "anime", 29 | "tvseries" 30 | ], 31 | "max_concurrent_moves_array": 2, 32 | "max_concurrent_moves_cache": 5, 33 | "debug": false, 34 | "skip_ondeck": [], 35 | "skip_watchlist": [], 36 | "exit_if_active_session": false 37 | } -------------------------------------------------------------------------------- /plexcache_setup.py: -------------------------------------------------------------------------------- 1 | import json, os, requests, ntpath, posixpath 2 | from urllib.parse import urlparse 3 | from plexapi.server import PlexServer 4 | from plexapi.exceptions import BadRequest 5 | 6 | # The script will create/edit the file in the same folder the script is located, but you can change that 7 | script_folder="." 8 | settings_filename = os.path.join(script_folder, "plexcache_settings.json") 9 | 10 | # Function to check for a valid plex url 11 | def is_valid_plex_url(url): 12 | try: 13 | response = requests.get(url) 14 | if 'X-Plex-Protocol' in response.headers: 15 | return True 16 | except requests.exceptions.RequestException: 17 | print (response.headers) 18 | print (response.headers) 19 | return False 20 | 21 | # Check if the given directory exists 22 | def check_directory_exists(folder): 23 | if not os.path.exists(folder): 24 | raise FileNotFoundError(f'Wrong path given, please edit the "{folder}" variable accordingly.') 25 | 26 | # Read the settings containet in the settings file 27 | def read_existing_settings(filename): 28 | with open(filename, 'r') as f: 29 | return json.load(f) 30 | 31 | # Write the given settings to the settings file 32 | def write_settings(filename, data): 33 | with open(filename, 'w') as f: 34 | json.dump(data, f, indent=4) 35 | 36 | # Convert the given path to linux/posix format 37 | def convert_path_to_posix(path): 38 | path = path.replace(ntpath.sep, posixpath.sep) 39 | return posixpath.normpath(path) 40 | 41 | # Conver the given path to a windows compatible format 42 | def convert_path_to_nt(path): 43 | path = path.replace(posixpath.sep, ntpath.sep) 44 | return ntpath.normpath(path) 45 | 46 | # Function to ask the user for a number and it then saves it as a setting 47 | def prompt_user_for_number(prompt_message, default_value, data_key, data_type=int): 48 | while True: 49 | user_input = input(prompt_message) or default_value 50 | if user_input.isdigit(): 51 | settings_data[data_key] = data_type(user_input) 52 | break 53 | else: 54 | print("User input is not a number") 55 | 56 | # Ask user for input for missing settings 57 | def setup(): 58 | settings_data['firststart'] = False 59 | 60 | # Check if the given url is a valid plex server address 61 | def is_valid_plex_url(url): 62 | try: 63 | result = urlparse(url) 64 | return all([result.scheme, result.netloc]) 65 | except ValueError: 66 | return False 67 | 68 | # Asks the user for the plex server address 69 | while 'PLEX_URL' not in settings_data: 70 | url = input('\nEnter your plex server address (Example: http://localhost:32400 or https://plex.mydomain.ext): ') 71 | if not url.strip(): # Check if url is not empty 72 | print("URL is not valid. It cannot be empty.") 73 | continue 74 | try: 75 | if is_valid_plex_url(url): 76 | print('Valid Plex URL') 77 | settings_data['PLEX_URL'] = url 78 | else: 79 | print('Invalid Plex URL') 80 | except requests.exceptions.RequestException: 81 | print("URL is not valid.") 82 | 83 | # Ask the user for the plex token, it then tests it 84 | # If successfull it will ask for libraries 85 | while 'PLEX_TOKEN' not in settings_data: 86 | token = input('\nEnter your plex token: ') 87 | if not token.strip(): # Check if token is not empty 88 | print("Token is not valid. It cannot be empty.") 89 | continue 90 | try: 91 | plex = PlexServer(settings_data['PLEX_URL'], token) 92 | user = plex.myPlexAccount().username # Fetching user information 93 | print(f"Connection successful! Currently connected as {user}") 94 | libraries = plex.library.sections() # This line should raise a BadRequest exception if the token is invalid 95 | # if the above line doesn't raise an exception, then the token is valid. 96 | settings_data['PLEX_TOKEN'] = token 97 | operating_system = plex.platform 98 | print(f"\nPlex is running on {operating_system}") 99 | valid_sections = [] 100 | plex_library_folders = [] 101 | while not valid_sections: 102 | for library in libraries: 103 | print(f"\nYour plex library name: {library.title}") 104 | include = input("Do you want to include this library? [Y/n] ") or 'yes' 105 | if include.lower() in ['n', 'no']: 106 | continue 107 | elif include.lower() in ['y', 'yes']: 108 | valid_sections.append(library.key) 109 | if 'plex_source' not in settings_data: 110 | location_index = 0 111 | location = library.locations[location_index] 112 | if operating_system.lower() == 'linux': 113 | location_index = 0 114 | location = library.locations[location_index] 115 | root_folder = (os.path.dirname(location)) 116 | else: 117 | location = convert_path_to_nt(location) 118 | root_folder = (ntpath.splitdrive(location)[0]) # Fix for plex_source 119 | print(f"\nPlex source path autoselected and set to: {root_folder}") 120 | settings_data['plex_source'] = root_folder 121 | for location in library.locations: 122 | if operating_system.lower() == 'linux': 123 | plex_library_folder = ("/" + os.path.basename(location)) 124 | plex_library_folder = plex_library_folder.strip('/') 125 | else: 126 | plex_library_folder = os.path.basename(location) 127 | plex_library_folder = plex_library_folder.split('\\')[-1] 128 | plex_library_folders.append(plex_library_folder) 129 | settings_data['plex_library_folders'] = plex_library_folders 130 | else: 131 | print("Invalid choice. Please enter either yes or no") 132 | if not valid_sections: 133 | print("You must select at least one library to include. Please try again.") 134 | settings_data['valid_sections'] = valid_sections 135 | except (BadRequest, requests.exceptions.RequestException): # Catch BadRequest if the token is invalid 136 | print('Unable to connect to Plex server. Please check your token.') 137 | except ValueError: 138 | print('Token is not valid. It cannot be empty.') 139 | except TypeError: 140 | print('An unexpected error occurred.') 141 | 142 | # Asks for how many episodes 143 | while 'number_episodes' not in settings_data: 144 | prompt_user_for_number('\nHow many episodes (digit) do you want fetch (onDeck)? (default: 5) ', '5', 'number_episodes') 145 | 146 | # Asks for how many days 147 | while 'days_to_monitor' not in settings_data: 148 | prompt_user_for_number('\nMaximum age of the media onDeck to be fetched? (default: 99) ', '99', 'days_to_monitor') 149 | 150 | # Asks for the watchlist media and if yes it will then ask for an expiry date for the cache file 151 | while 'watchlist_toggle' not in settings_data: 152 | watchlist = input('\nDo you want to fetch your watchlist media? [y/N] ') or 'no' 153 | if watchlist.lower() in ['n', 'no']: 154 | settings_data['watchlist_toggle'] = False 155 | settings_data['watchlist_episodes'] = 0 156 | settings_data['watchlist_cache_expiry'] = 1 157 | elif watchlist.lower() in ['y', 'yes']: 158 | settings_data['watchlist_toggle'] = True 159 | prompt_user_for_number('\nHow many episodes do you want fetch (watchlist) (default: 1)? ', '1', 'watchlist_episodes') 160 | prompt_user_for_number('\nDefine the watchlist cache expiry duration in hours (default: 6) ', '6', 'watchlist_cache_expiry') 161 | else: 162 | print("Invalid choice. Please enter either yes or no") 163 | 164 | # Enable all other users and if to skip specific user for the watchlist and/or ondeck media 165 | while 'users_toggle' not in settings_data: 166 | skip_users = [] 167 | skip_ondeck = [] 168 | skip_watchlist = [] 169 | fetch_all_users = input('\nDo you want to fetch onDeck/watchlist media from other users? [Y/n] ') or 'yes' 170 | if fetch_all_users.lower() not in ['y', 'yes', 'n', 'no']: 171 | print("Invalid choice. Please enter either yes or no") 172 | continue 173 | if fetch_all_users.lower() in ['y', 'yes']: 174 | settings_data['users_toggle'] = True 175 | skip_users_choice = input('\nWould you like to skip some of the users? [y/N]') or 'no' 176 | if skip_users_choice.lower() not in ['y', 'yes', 'n', 'no']: 177 | print("Invalid choice. Please enter either yes or no") 178 | continue 179 | if skip_users_choice.lower() in ['y', 'yes']: 180 | for user in plex.myPlexAccount().users(): 181 | username = user.title 182 | while True: 183 | answer = input(f'\nDo you want to skip this user? {username} [y/N] ') or 'no' 184 | if answer.lower() not in ['y', 'yes', 'n', 'no']: 185 | print("Invalid choice. Please enter either yes or no") 186 | continue 187 | if answer.lower() in ['y', 'yes']: 188 | token = user.get_token(plex.machineIdentifier) 189 | skip_users.append(token) 190 | print("\n", username, " will be skipped.") 191 | 192 | answer_ondeck = input(f'\nDo you want to skip fetching the onDeck media for {username}? [y/N] ') or 'no' 193 | if answer_ondeck.lower() in ['y', 'yes']: 194 | skip_ondeck.append(token) 195 | print(f"\nonDeck media for {username} will be skipped.") 196 | 197 | answer_watchlist = input(f'\nDo you want to skip fetching the watchlist media for {username}? [y/N] ') or 'no' 198 | if answer_watchlist.lower() in ['y', 'yes']: 199 | skip_watchlist.append(token) 200 | print(f"\nWatchlist media for {username} will be skipped.") 201 | break 202 | else: 203 | settings_data['users_toggle'] = False 204 | settings_data['skip_users'] = skip_users 205 | settings_data['skip_ondeck'] = skip_ondeck 206 | settings_data['skip_watchlist'] = skip_watchlist 207 | 208 | # If enable the script will move the files back from the cache to the array 209 | while 'watched_move' not in settings_data: 210 | watched_move = input('\nDo you want to move watched media from the cache back to the array? [y/N] ') or 'no' 211 | if watched_move.lower() in ['n', 'no']: 212 | settings_data['watched_move'] = False 213 | settings_data['watched_cache_expiry'] = 48 214 | elif watched_move.lower() in ['y', 'yes']: 215 | settings_data['watched_move'] = True 216 | prompt_user_for_number('\nDefine the wwatched cache expiry duration in hours (default: 48) ', '48', 'watched_cache_expiry') 217 | else: 218 | print("Invalid choice. Please enter either yes or no") 219 | 220 | # Asks for the cache/fast drives path and asks if you want to test the given path 221 | if 'cache_dir' not in settings_data: 222 | cache_dir = input('\nInsert the path of your cache drive: (default: "/mnt/cache") ').replace('"', '').replace("'", '') or '/mnt/cache' 223 | while True: 224 | test_path = input('\nDo you want to test the given path? [y/N] ') or 'no' 225 | if test_path.lower() in ['y', 'yes']: 226 | if os.path.exists(cache_dir): 227 | print('The path appears to be valid. Settings saved.') 228 | break 229 | else: 230 | print('The path appears to be invalid.') 231 | edit_path = input('\nDo you want to edit the path? [y/N] ') or 'no' 232 | if edit_path.lower() in ['y', 'yes']: 233 | cache_dir = input('\nInsert the path of your cache drive: (default: "/mnt/cache") ').replace('"', '').replace("'", '') or '/mnt/cache' 234 | elif edit_path.lower() in ['n', 'no']: 235 | break 236 | else: 237 | print("Invalid choice. Please enter either yes or no") 238 | elif test_path.lower() in ['n', 'no']: 239 | break 240 | else: 241 | print("Invalid choice. Please enter either yes or no") 242 | settings_data['cache_dir'] = cache_dir 243 | 244 | # Asks for the array/slow drives path and asks if you want to test the given path 245 | if 'real_source' not in settings_data: 246 | real_source = input('\nInsert the path where your media folders are located?: (default: "/mnt/user") ').replace('"', '').replace("'", '') or '/mnt/user' 247 | while True: 248 | test_path = input('\nDo you want to test the given path? [y/N] ') or 'no' 249 | if test_path.lower() in ['y', 'yes']: 250 | if os.path.exists(real_source): 251 | print('The path appears to be valid. Settings saved.') 252 | break 253 | else: 254 | print('The path appears to be invalid.') 255 | edit_path = input('\nDo you want to edit the path? [y/N] ') or 'no' 256 | if edit_path.lower() in ['y', 'yes']: 257 | real_source = input('\nInsert the path where your media folders are located?: (default: "/mnt/user") ').replace('"', '').replace("'", '') or '/mnt/user' 258 | elif edit_path.lower() in ['n', 'no']: 259 | break 260 | else: 261 | print("Invalid choice. Please enter either yes or no") 262 | elif test_path.lower() in ['n', 'no']: 263 | break 264 | else: 265 | print("Invalid choice. Please enter either yes or no") 266 | settings_data['real_source'] = real_source 267 | num_folders = len(plex_library_folders) 268 | # Ask the user to input a corresponding value for each element in plex_library_folders 269 | nas_library_folder = [] 270 | for i in range(num_folders): 271 | folder_name = input(f"\nEnter the corresponding NAS/Unraid library folder for the Plex mapped folder: (Default is the same as plex as shown) '{plex_library_folders[i]}' ") or plex_library_folders[i] 272 | folder_name = folder_name.replace(real_source, '') 273 | folder_name = folder_name.strip('/') 274 | nas_library_folder.append(folder_name) 275 | settings_data['nas_library_folders'] = nas_library_folder 276 | 277 | # Asks if to stop the script or continue if active session 278 | while 'exit_if_active_session' not in settings_data: 279 | session = input('\nIf there is an active session in plex (someone is playing a media) do you want to exit the script (Yes) or just skip the playing media (No)? [y/N] ') or 'no' 280 | if session.lower() in ['n', 'no']: 281 | settings_data['exit_if_active_session'] = False 282 | elif session.lower() in ['y', 'yes']: 283 | settings_data['exit_if_active_session'] = True 284 | else: 285 | print("Invalid choice. Please enter either yes or no") 286 | 287 | # Concurrent moving process 288 | if 'max_concurrent_moves_cache' not in settings_data: 289 | prompt_user_for_number('\nHow many files do you want to move from the array to the cache at the same time? (default: 5) ', '5', 'max_concurrent_moves_cache') 290 | 291 | # Concurrent moving process 292 | if 'max_concurrent_moves_array' not in settings_data: 293 | prompt_user_for_number('\nHow many files do you want to move from the cache to the array at the same time? (default: 2) ', '2', 'max_concurrent_moves_array') 294 | 295 | # Activates the debug mode 296 | while 'debug' not in settings_data: 297 | debug = input('\nDo you want to debug the script? No data will actually be moved. [y/N] ') or 'no' 298 | if debug.lower() in ['n', 'no']: 299 | settings_data['debug'] = False 300 | elif debug.lower() in ['y', 'yes']: 301 | settings_data['debug'] = True 302 | else: 303 | print("Invalid choice. Please enter either yes or no") 304 | 305 | write_settings(settings_filename, settings_data) 306 | 307 | print("Setup complete! You can now run the plexcache.py script. \n") 308 | print("If you are happy with your current settings, you can discard this script entirely. \n") 309 | print("\nThank you for using bexem's script: \nhttps://github.com/bexem/PlexCache") 310 | print("So Long, and Thanks for All the Fish!") 311 | 312 | check_directory_exists(script_folder) 313 | 314 | # Checks if the file exist, if not it will check if the path is accessible, if so it will ask to create the file and then initialise the file. 315 | if os.path.exists(settings_filename): 316 | try: 317 | settings_data = read_existing_settings(settings_filename) 318 | print("Settings file exists, loading...!\n") 319 | 320 | if settings_data.get('firststart'): 321 | print("First start unset or set to yes:\nPlease answer the following questions: \n") 322 | settings_data = {} 323 | setup() 324 | else: 325 | print("Configuration exists and appears to be valid, you can now run the plexcache.py script.\n") 326 | print("If you want to configure the settings again, manually change the variable 'firststart' to 'True' or delete the file entirely.\n") 327 | print("If instead you are happy with your current settings, you can discard this script entirely.\n") 328 | print("\nThank you for using bexem's script: \nhttps://github.com/bexem/PlexCache") 329 | print("So Long, and Thanks for All the Fish!") 330 | except json.decoder.JSONDecodeError: 331 | print("Setting file initialized successfully!\n") 332 | settings_data = {} 333 | setup() 334 | else: 335 | print(f"Settings file {settings_filename} doesn't exist, please check the path:\n") 336 | creation = input("\nIf the path is correct, do you want to create the file? [Y/n] ") or 'yes' 337 | if creation.lower() in ['y', 'yes']: 338 | print("Setting file created successfully!\n") 339 | settings_data = {} 340 | setup() 341 | elif creation.lower() in ['n', 'no']: 342 | exit("Exiting as requested, setting file not created.") 343 | else: 344 | print("Invalid choice. Please enter either 'yes' or 'no'") 345 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | requests 2 | plexapi --------------------------------------------------------------------------------