├── .github ├── CODEOWNERS ├── FUNDING.yml └── ISSUE_TEMPLATE │ ├── feature_request.md │ └── bug_report.md ├── requirements.txt ├── .DS_Store ├── __pycache__ ├── config.cpython-311.pyc ├── plex_api.cpython-311.pyc ├── logging_config.cpython-311.pyc ├── system_utils.cpython-311.pyc └── file_operations.cpython-311.pyc ├── REFACTORING_SUMMARY.md ├── plexcache_settings (ReduxExample).json ├── README.md ├── system_utils.py ├── logging_config.py ├── config.py ├── plex_api.py ├── file_operations.py ├── plexcache_setup.py └── plexcache_app.py /.github/CODEOWNERS: -------------------------------------------------------------------------------- 1 | * @StudioNirin 2 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | plexapi>=4.15.0 2 | requests>=2.25.0 -------------------------------------------------------------------------------- /.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/StudioNirin/PlexCache-R/HEAD/.DS_Store -------------------------------------------------------------------------------- /__pycache__/config.cpython-311.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/StudioNirin/PlexCache-R/HEAD/__pycache__/config.cpython-311.pyc -------------------------------------------------------------------------------- /__pycache__/plex_api.cpython-311.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/StudioNirin/PlexCache-R/HEAD/__pycache__/plex_api.cpython-311.pyc -------------------------------------------------------------------------------- /__pycache__/logging_config.cpython-311.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/StudioNirin/PlexCache-R/HEAD/__pycache__/logging_config.cpython-311.pyc -------------------------------------------------------------------------------- /__pycache__/system_utils.cpython-311.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/StudioNirin/PlexCache-R/HEAD/__pycache__/system_utils.cpython-311.pyc -------------------------------------------------------------------------------- /__pycache__/file_operations.cpython-311.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/StudioNirin/PlexCache-R/HEAD/__pycache__/file_operations.cpython-311.pyc -------------------------------------------------------------------------------- /.github/FUNDING.yml: -------------------------------------------------------------------------------- 1 | # These are supported funding model platforms 2 | 3 | github: StudioNirin 4 | ko_fi: 5 | custom: ["Nirin.Studio", "buymeacoffee.com/studionirin"] 6 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/feature_request.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Feature request 3 | about: Suggest an idea for this project 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Is your feature request related to a problem? Please describe.** 11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] 
12 | 
13 | **Describe the solution you'd like**
14 | A clear and concise description of what you want to happen.
15 | 
16 | **Describe alternatives you've considered**
17 | A clear and concise description of any alternative solutions or features you've considered.
18 | 
19 | **Additional context**
20 | Add any other context or screenshots about the feature request here.
21 | 
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Bug report
3 | about: Create a report to help us improve
4 | title: ''
5 | labels: ''
6 | assignees: ''
7 | 
8 | ---
9 | 
10 | **Describe the bug**
11 | A clear and concise description of what the bug is.
12 | 
13 | **To Reproduce**
14 | Steps to reproduce the behavior:
15 | 1. Go to '...'
16 | 2. Click on '....'
17 | 3. Scroll down to '....'
18 | 4. See error
19 | 
20 | **Expected behavior**
21 | A clear and concise description of what you expected to happen.
22 | 
23 | **Screenshots**
24 | If applicable, add screenshots to help explain your problem.
25 | 
26 | **Desktop (please complete the following information):**
27 | - OS: [e.g. iOS]
28 | - Browser [e.g. chrome, safari]
29 | - Version [e.g. 22]
30 | 
31 | **Smartphone (please complete the following information):**
32 | - Device: [e.g. iPhone6]
33 | - OS: [e.g. iOS8.1]
34 | - Browser [e.g. stock browser, safari]
35 | - Version [e.g. 22]
36 | 
37 | **Additional context**
38 | Add any other context about the problem here.
39 | 
--------------------------------------------------------------------------------
/REFACTORING_SUMMARY.md:
--------------------------------------------------------------------------------
1 | # PlexCache Refactoring Summary
2 | 
3 | ## Overview
4 | 
5 | The original PlexCache script has been completely refactored to improve maintainability, testability, and code organization while preserving all original functionality.
6 | 
7 | This refactoring work was done by BBergle, and I couldn't have progressed this project without his work - https://github.com/BBergle/PlexCache
8 | 
9 | The text below is from his documentation, and covers the things that are still relevant in my own updates on this project.
10 | 
11 | ### Refactored Solution
12 | 
13 | The code has been split into 6 focused modules:
14 | 
15 | #### 1. `config.py` - Configuration Management
16 | - **Purpose**: Handle all configuration loading, validation, and management
17 | - **Key Features**:
18 | - Dataclasses for type-safe configuration
19 | - Validation of required fields
20 | - Path conversion utilities
21 | - Automatic cleanup of deprecated settings
22 | 
23 | #### 2. `logging_config.py` - Logging System
24 | - **Purpose**: Set up logging, rotation, and notification handlers
25 | - **Key Features**:
26 | - Rotating file handlers
27 | - Custom notification handlers (Unraid, Webhook)
28 | - Summary logging functionality
29 | - Proper log level management
30 | 
31 | #### 3. `system_utils.py` - System Operations
32 | - **Purpose**: OS detection, path conversions, and file utilities
33 | - **Key Features**:
34 | - System detection (Linux, Unraid, Docker)
35 | - Cross-platform path conversions
36 | - File operation utilities
37 | - Space calculation functions
38 | 
39 | #### 4. 
`plex_api.py` - Plex Integration 40 | - **Purpose**: All Plex server interactions and cache management 41 | - **Key Features**: 42 | - Plex server connections 43 | - Media fetching (onDeck, watchlist, watched) 44 | - Cache management 45 | - Rate limiting and retry logic 46 | 47 | #### 5. `file_operations.py` - File Operations 48 | - **Purpose**: File moving, filtering, and subtitle operations 49 | - **Key Features**: 50 | - Path modification utilities 51 | - Subtitle discovery 52 | - File filtering logic 53 | - Concurrent file moving 54 | 55 | #### 6. `plexcache_app.py` - Main Application 56 | - **Purpose**: Orchestrate all components and provide main business logic 57 | - **Key Features**: 58 | - Dependency injection 59 | - Error handling 60 | - Application flow control 61 | - Summary generation 62 | -------------------------------------------------------------------------------- /plexcache_settings (ReduxExample).json: -------------------------------------------------------------------------------- 1 | { 2 | "PLEX_URL": "http://192.168.0.188:32400/", 3 | "PLEX_TOKEN": "z-GgLkL5ifrHkk-3MAFT", 4 | "plex_source": "/media/movies/", 5 | "plex_library_folders": [ 6 | "movies-dad", 7 | "movies-4k", 8 | "movies", 9 | "movies-anime4k", 10 | "movies-anime", 11 | "series-anime", 12 | "series-dad", 13 | "series4k", 14 | "series", 15 | "series-animated" 16 | ], 17 | "valid_sections": [ 18 | 2, 19 | 1, 20 | 6, 21 | 5, 22 | 3, 23 | 4 24 | ], 25 | "number_episodes": 5, 26 | "days_to_monitor": 99, 27 | "watchlist_toggle": true, 28 | "watchlist_episodes": 1, 29 | "watchlist_cache_expiry": 6, 30 | "users_toggle": true, 31 | "users": [ 32 | { 33 | "title": "Frank Grimes", 34 | "token": "hf893h9fh3f", 35 | "is_local": false, 36 | "skip_ondeck": false, 37 | "skip_watchlist": false 38 | }, 39 | { 40 | "title": "Princess Bubblegum", 41 | "token": "3fh948h3fh39f4h8", 42 | "is_local": false, 43 | "skip_ondeck": false, 44 | "skip_watchlist": false 45 | }, 46 | { 47 | "title": "dominicturreto", 48 | "token": "4354543g42345g2", 49 | "is_local": false, 50 | "skip_ondeck": false, 51 | "skip_watchlist": false 52 | } 53 | ], 54 | "skip_ondeck": [], 55 | "skip_watchlist": [], 56 | "remote_watchlist_toggle": true, 57 | "remote_watchlist_rss_url": "https://rss.plex.tv/64477hyy8-54b3-4b19-a3a7-333c6babbbb40", 58 | "watched_move": true, 59 | "watched_cache_expiry": 48, 60 | "cache_dir": "/mnt/cachedrive/media/", 61 | "real_source": "/mnt/user/media/", 62 | "nas_library_folders": [ 63 | "movies/movies-dad", 64 | "movies/movies-4k", 65 | "movies/movies", 66 | "movies/movies-anime4k", 67 | "movies/movies-anime", 68 | "tv/series-anime", 69 | "tv/series-dad", 70 | "tv/series4k", 71 | "tv/series", 72 | "tv/series-animated" 73 | ], 74 | "exit_if_active_session": false, 75 | "max_concurrent_moves_cache": 5, 76 | "max_concurrent_moves_array": 2, 77 | "debug": false 78 | } 79 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # PlexCache: Automate Plex Media Management 2 | ### Updated 11/25 3 | 4 | ## Current Bugs / Todo List 5 | 6 | Now moved to a discussion page [HERE](https://github.com/StudioNirin/PlexCache-R/discussions/16) 7 | 8 | ## Overview 9 | Automate Plex media management: Efficiently transfer media from the On Deck/Watchlist to the cache, and seamlessly move watched media back to their respective locations. 
10 | An updated version of the "PlexCache-Refactored" script with various bugfixes and improvements. Hopefully fixed and improved anyway, time will tell!
11 | 
12 | PlexCache efficiently transfers media from the On Deck/Watchlist to the cache and moves watched media back to their respective locations. This Python script reduces energy consumption by minimizing the need to spin up the array/hard drive(s) when watching recurrent media like TV series. It achieves this by moving the media from the OnDeck and watchlist for the main user and/or other users. For TV shows/anime, it also fetches the next specified number of episodes.
13 | 
14 | ## Features
15 | #### I have added tags to these features to distinguish which ones work for different types of users:
16 | **Local**: Users on the local or Home account.
17 | **Remote**: Users that are remote, so friends that you have shared libraries with.
18 | The original PlexCache app only worked for local users for most features, due to API limitations.
19 | 
20 | - Fetch a specified number of episodes from the "onDeck" for the main user and other users (Local/Remote).
21 | - Skip fetching onDeck media for specified users (Local/Remote).
22 | - Fetch a specified number of episodes from the "watchlist" for the main user and other users (Local/Remote[^1]).
23 | - Skip fetching watchlist media for specified users (Local/Remote[^2]).
24 | - Search only the specified libraries.
25 | - Check for free space before moving any file.
26 | - Move watched media present on the cache drive back to the array.
27 | - Move respective subtitles along with the media moved to or from the cache.
28 | - Filter media older than a specified number of days.
29 | - Run in debug mode for testing.
30 | - Use of a log file for easy debugging.
31 | - Use of a caching system to avoid wasteful memory usage and CPU cycles.
32 | - Use of multitasking to optimize file transfer time.
33 | - Exit the script if there is any active session, or skip the currently playing media.
34 | - Send webhook messages according to the set log level (untested).
35 | 
36 | 
37 | 
38 | 
39 | ### Core Modules
40 | 
41 | - **`config.py`**: Configuration management with dataclasses for type safety
42 | - **`logging_config.py`**: Logging setup, rotation, and notification handlers
43 | - **`system_utils.py`**: OS detection, path conversions, and file utilities
44 | - **`plex_api.py`**: Plex server interactions and cache management
45 | - **`file_operations.py`**: File moving, filtering, and subtitle operations
46 | - **`plexcache_app.py`**: Main application orchestrator
47 | 
48 | 
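The snippet below is a rough, illustrative sketch of how these modules plug into each other. The real wiring lives in `plexcache_app.py` and does considerably more (error handling, caching, the actual move logic); only the class names and signatures here are taken from the modules above:

```python
import logging

from config import ConfigManager
from logging_config import LoggingManager
from plex_api import PlexManager
from system_utils import SystemDetector

# Load and validate plexcache_settings.json
config = ConfigManager("plexcache_settings.json")
config.load_config()

# Rotating file + console logging
log_manager = LoggingManager(config.paths.logs_folder)
log_manager.setup_logging()

# Detect Unraid/Docker and report it
logging.info(SystemDetector().get_system_info())

# Connect to Plex and fetch OnDeck items for all configured users
plex = PlexManager(config.plex.plex_url, config.plex.plex_token)
plex.connect()
on_deck = plex.get_on_deck_media(
    config.plex.valid_sections,
    config.plex.days_to_monitor,
    config.plex.number_episodes,
    config.plex.users_toggle,
    config.plex.skip_ondeck,
)
```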
49 | ## Installation
50 | 
51 | AlienTech42 has already done a really helpful video on the original PlexCache installation, and for now it's the best resource.
52 | The install process is pretty much the same for PlexCache-R. However, there are some settings in plexcache_setup.py that
53 | are either in a different place, or are completely removed/altered/added. So don't follow the video religiously!
54 | https://www.youtube.com/watch?v=9oAnJJY8NH0
55 | 
56 | 1. Put the files from this Repo into a known folder on your Unraid server. I use the following:
57 | ```bash
58 | /mnt/user/appdata/plexcache/plexcache_app.py
59 | ```
60 | I'll keep using this in my examples, but make sure to use your own path.
61 | 
62 | 2. Open up the Unraid Terminal, and install dependencies:
63 | ```bash
64 | cd /mnt/user/appdata/plexcache
65 | pip3 install -r requirements.txt
66 | ```
67 | Note: You'll need Python installed for this to work. There's a community app for that.
68 | 
69 | 3. Run the setup script to configure PlexCache:
70 | ```bash
71 | python3 plexcache_setup.py
72 | ```
73 | Each of the questions should pretty much explain themselves, but I'll keep working on them.
74 | Or I'll add a guide list on here sometime.
75 | 
76 | 4. Run the main application:
77 | ```bash
78 | python3 plexcache_app.py
79 | ```
80 | However, you wouldn't really want to run it manually every time, and the dependencies will disappear every time you restart the server.
81 | So I recommend making the following UserScript:
82 | ```bash
83 | #!/bin/bash
84 | cd /mnt/user/appdata/plexcache
85 | pip3 install -r requirements.txt
86 | python3 /mnt/user/appdata/plexcache/plexcache_app.py --skip-cache
87 | ```
88 | And set it on a cron job to run whenever you want. I run it once a day at midnight (`0 0 * * *`)
89 | 
90 | 
91 | ### Command Line Options
92 | 
93 | - `--debug`: Run in debug mode (no files will be moved)
94 | - `--skip-cache`: Skip using cached data and fetch fresh from Plex
95 | 
96 | 
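For example, a `--debug` run is a safe way to sanity-check a fresh configuration, since nothing gets moved:

```bash
# Log what would be moved, without touching any files
python3 plexcache_app.py --debug

# Ignore the local cache files and fetch fresh OnDeck/watchlist data from Plex
python3 plexcache_app.py --skip-cache
```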
97 | 
98 | ## Migration from Original
99 | 
100 | The refactored version maintained full compatibility with the original.
101 | HOWEVER - This Redux version DOES NOT maintain full compatibility.
102 | I did make some vague efforts at the start, but there were so many things that didn't work properly that it just wasn't feasible.
103 | So while the files used are the same, you -will- need to delete your `plexcache_settings.json` and run a new setup to create a new one.
104 | 
105 | 1. **Different Configuration**: Uses the same `plexcache_settings.json` file, but the fields have changed
106 | 2. **Added Functionality**: All original features still exist, but now also work (where possible) for remote users, not just local.
107 | 3. **Same Output**: Logging and notifications work identically
108 | 4. **Same Performance**: No performance degradation. Hopefully. Don't quote me on this.
109 | 
110 | 
111 | ## Setup
112 | 
113 | Please check out our [Wiki section](https://github.com/bexem/PlexCache/wiki) for the step-by-step guide on how to set up PlexCache on your system.
114 | 
115 | ## Notes
116 | 
117 | This script might be compatible with other systems, especially Linux-based ones, although I have primarily tested it on Unraid with Plex as a Docker container. While I cannot support every case, it's worth checking the GitHub issues to see if your specific case has already been discussed. The original bexem repo's issues page is particularly worth checking.
118 | I will still try to help out, but please note that I make no promises about providing assistance for every scenario.
119 | **It is highly advised to use the setup script.**
120 | 
121 | ## Disclaimer
122 | 
123 | This script comes without any warranties, guarantees, or magic powers. By using this script, you accept that you're responsible for any consequences that may result. The author will not be held liable for data loss, corruption, or any other problems you may encounter. So, it's on you to make sure you have backups and test this script thoroughly before you unleash its awesome power.
124 | 
125 | ## Acknowledgments
126 | 
127 | It seems we all owe a debt of thanks to someone called brimur[^3] for providing the script that served as the foundation and inspiration for this project. That was long before my time on it, though; the first iteration I saw was by bexem[^4], who also has my thanks. But the biggest contributor to this continuation of the project was bbergle[^5], who put in all the work on refactoring and cleaning up all the code into bite-sized chunks that were understandable to a novice like myself. All I did then was go through it all and try to make the weird, janky Plex API actually kinda work, for what I needed it to do anyway!
128 | 
129 | And my first personal thank you to [Brandon-Haney](https://github.com/Brandon-Haney) who has contributed a whole bunch of updates. I haven't yet merged them as of writing this, but he's gone through basically every file, so I figured he deserved a pre-emptive thanks!
130 | 
131 | 
132 | [^1]: Remote users do not have individual watchlists accessible by the API. It's unfortunately not a thing. So instead I am using the available RSS feed as a workaround. The downside of this is...
133 | [^2]: ...that it is an all-or-nothing proposal for remote users. Local users can still be toggled on a per-user basis.
134 | [^3]: [brimur/preCachePlexOnDeckEpiosodes.py](https://gist.github.com/brimur/95277e75ca399d5d52b61e6aa192d1cd)
135 | [^4]: https://github.com/bexem/PlexCache
136 | [^5]: https://github.com/BBergle/PlexCache
137 | 
138 | 
139 | 
140 | ## Changelog
141 | 
142 | - **11/25 - Handling of script_folder link**: The old version had a hardcoded path to the script folder instead of using the user-defined setting.
143 | - **11/25 - Adding logic so a 401 error when looking for watched media doesn't cause breaking errors**: It seems it's only possible to get 'watched files' data from home users and not remote friends, and the 401 error would stop the script from working. Added some handling logic to plex_api.py.
144 | - **11/25 - Ended up totally changing several functions, and adding some new ones, to fix all the issues with remote users, watchlists and various other things**: The changelog became way too difficult to maintain at this point because it was just a bunch of stuff. Hence the change to a new version of PlexCache.
145 | - **28/11/25** - Fixed the setup script to automatically set the correct Plex root directory for media, and the correct media folder paths. These can still be manually corrected if your setup is different/unique.
146 | 
--------------------------------------------------------------------------------
/system_utils.py:
--------------------------------------------------------------------------------
1 | """
2 | System utilities for PlexCache.
3 | Handles OS detection, system-specific operations, and path conversions. 
4 | """ 5 | 6 | import os 7 | import platform 8 | import re 9 | import socket 10 | import shutil 11 | import ntpath 12 | import posixpath 13 | from typing import Tuple, Optional 14 | import logging 15 | 16 | 17 | class SystemDetector: 18 | """Detects and provides information about the current system.""" 19 | 20 | def __init__(self): 21 | self.os_name = platform.system() 22 | self.is_linux = self.os_name != 'Windows' 23 | self.is_unraid = self._detect_unraid() 24 | self.is_docker = self._detect_docker() 25 | 26 | def _detect_unraid(self) -> bool: 27 | """Detect if running on Unraid system.""" 28 | os_info = { 29 | 'Linux': '/mnt/user0/', 30 | 'Darwin': None, 31 | 'Windows': None 32 | } 33 | 34 | unraid_path = os_info.get(self.os_name) 35 | return os.path.exists(unraid_path) if unraid_path else False 36 | 37 | def _detect_docker(self) -> bool: 38 | """Detect if running inside a Docker container.""" 39 | return os.path.exists('/.dockerenv') 40 | 41 | def get_system_info(self) -> str: 42 | """Get human-readable system information.""" 43 | info_parts = [f"Script is currently running on {self.os_name}."] 44 | 45 | if self.is_unraid: 46 | info_parts.append("The script is also running on Unraid.") 47 | 48 | if self.is_docker: 49 | info_parts.append("The script is running inside a Docker container.") 50 | 51 | return ' '.join(info_parts) 52 | 53 | def is_connected(self) -> bool: 54 | """Check if internet connection is available.""" 55 | try: 56 | socket.gethostbyname("www.google.com") 57 | return True 58 | except socket.error: 59 | return False 60 | 61 | 62 | class PathConverter: 63 | """Handles path conversions between different operating systems.""" 64 | 65 | def __init__(self, is_linux: bool): 66 | self.is_linux = is_linux 67 | 68 | def remove_trailing_slashes(self, value: str) -> str: 69 | """Remove trailing slashes from a path.""" 70 | try: 71 | if isinstance(value, str): 72 | if ':' in value and value.rstrip('/\\') == '': 73 | return value.rstrip('/') + "\\" 74 | else: 75 | return value.rstrip('/\\') 76 | return value 77 | except Exception as e: 78 | raise ValueError(f"Error occurred while removing trailing slashes: {e}") 79 | 80 | def add_trailing_slashes(self, value: str) -> str: 81 | """Add trailing slashes to a path.""" 82 | try: 83 | if ':' not in value: # Not a Windows path 84 | if not value.startswith("/"): 85 | value = "/" + value 86 | if not value.endswith("/"): 87 | value = value + "/" 88 | return value 89 | except Exception as e: 90 | raise ValueError(f"Error occurred while adding trailing slashes: {e}") 91 | 92 | def remove_all_slashes(self, value_list: list) -> list: 93 | """Remove all slashes from a list of paths.""" 94 | try: 95 | return [value.strip('/\\') for value in value_list] 96 | except Exception as e: 97 | raise ValueError(f"Error occurred while removing all slashes: {e}") 98 | 99 | def convert_path_to_nt(self, value: str, drive_letter: str) -> str: 100 | """Convert path to Windows NT format.""" 101 | try: 102 | if value.startswith('/'): 103 | value = drive_letter.rstrip(':\\') + ':' + value 104 | value = value.replace(posixpath.sep, ntpath.sep) 105 | return ntpath.normpath(value) 106 | except Exception as e: 107 | raise ValueError(f"Error occurred while converting path to Windows compatible: {e}") 108 | 109 | def convert_path_to_posix(self, value: str) -> Tuple[str, Optional[str]]: 110 | """Convert path to POSIX format.""" 111 | try: 112 | # Save the drive letter if exists 113 | drive_letter_match = re.search(r'^[A-Za-z]:', value) 114 | drive_letter = 
drive_letter_match.group() + '\\' if drive_letter_match else None 115 | 116 | # Remove drive letter if exists 117 | value = re.sub(r'^[A-Za-z]:', '', value) 118 | value = value.replace(ntpath.sep, posixpath.sep) 119 | return posixpath.normpath(value), drive_letter 120 | except Exception as e: 121 | raise ValueError(f"Error occurred while converting path to Posix compatible: {e}") 122 | 123 | def convert_path(self, value: str, key: str, settings_data: dict, drive_letter: Optional[str] = None) -> str: 124 | """Convert path according to the operating system.""" 125 | try: 126 | if self.is_linux: 127 | value, drive_letter = self.convert_path_to_posix(value) 128 | if drive_letter: 129 | settings_data[f"{key}_drive"] = drive_letter 130 | else: 131 | if drive_letter is None: 132 | drive_letter = 'C:\\' 133 | value = self.convert_path_to_nt(value, drive_letter) 134 | 135 | return value 136 | except Exception as e: 137 | raise ValueError(f"Error occurred while converting path: {e}") 138 | 139 | 140 | class FileUtils: 141 | """Utility functions for file operations.""" 142 | 143 | def __init__(self, is_linux: bool, permissions: int = 0o777): 144 | self.is_linux = is_linux 145 | self.permissions = permissions 146 | 147 | def check_path_exists(self, path: str) -> None: 148 | """Check if path exists, is a directory, and is writable.""" 149 | logging.debug(f"Checking path: {path}") 150 | 151 | if not os.path.exists(path): 152 | logging.error(f"Path does not exist: {path}") 153 | raise FileNotFoundError(f"Path {path} does not exist.") 154 | 155 | if not os.path.isdir(path): 156 | logging.error(f"Path is not a directory: {path}") 157 | raise NotADirectoryError(f"Path {path} is not a directory.") 158 | 159 | if not os.access(path, os.W_OK): 160 | logging.error(f"Path is not writable: {path}") 161 | raise PermissionError(f"Path {path} is not writable.") 162 | 163 | logging.debug(f"Path validation successful: {path}") 164 | 165 | def get_free_space(self, directory: str) -> Tuple[float, str]: 166 | """Get free space in a human-readable format.""" 167 | if not os.path.exists(directory): 168 | raise FileNotFoundError(f"Invalid path, unable to calculate free space for: {directory}.") 169 | 170 | stat = os.statvfs(directory) 171 | free_space_bytes = stat.f_bfree * stat.f_frsize 172 | return self._convert_bytes_to_readable_size(free_space_bytes) 173 | 174 | def get_total_size_of_files(self, files: list) -> Tuple[float, str]: 175 | """Calculate total size of files in human-readable format.""" 176 | total_size_bytes = sum(os.path.getsize(file) for file in files) 177 | return self._convert_bytes_to_readable_size(total_size_bytes) 178 | 179 | def _convert_bytes_to_readable_size(self, size_bytes: int) -> Tuple[float, str]: 180 | """Convert bytes to human-readable format.""" 181 | if size_bytes >= (1024 ** 4): 182 | size = size_bytes / (1024 ** 4) 183 | unit = 'TB' 184 | elif size_bytes >= (1024 ** 3): 185 | size = size_bytes / (1024 ** 3) 186 | unit = 'GB' 187 | elif size_bytes >= (1024 ** 2): 188 | size = size_bytes / (1024 ** 2) 189 | unit = 'MB' 190 | else: 191 | size = size_bytes / 1024 192 | unit = 'KB' 193 | 194 | return size, unit 195 | 196 | def move_file(self, src: str, dest: str) -> int: 197 | """Move a file with proper permissions.""" 198 | logging.debug(f"Moving file from {src} to {dest}") 199 | 200 | try: 201 | if self.is_linux: 202 | stat_info = os.stat(src) 203 | uid = stat_info.st_uid 204 | gid = stat_info.st_gid 205 | 206 | # Move the file first 207 | shutil.move(src, dest) 208 | logging.debug(f"File 
moved successfully: {src} -> {dest}")
209 | 
210 | # Then set the owner and group to the original values
211 | os.chown(dest, uid, gid)
212 | original_umask = os.umask(0)
213 | os.chmod(dest, self.permissions)
214 | os.umask(original_umask)
215 | logging.debug(f"Permissions restored for: {dest}")
216 | else: # Windows logic
217 | shutil.move(src, dest)
218 | logging.debug(f"File moved successfully (Windows): {src} -> {dest}")
219 | 
220 | return 0
221 | except Exception as e: # FileNotFoundError and PermissionError are subclasses of Exception
222 | logging.error(f"Error moving file from {src} to {dest}: {str(e)}")
223 | raise RuntimeError(f"Error moving file: {str(e)}")
224 | 
225 | def create_directory_with_permissions(self, path: str, src_file_for_permissions: str) -> None:
226 | """Create directory with proper permissions."""
227 | logging.debug(f"Creating directory with permissions: {path}")
228 | 
229 | if not os.path.exists(path):
230 | if self.is_linux:
231 | # Get the permissions of the source file
232 | stat_info = os.stat(src_file_for_permissions)
233 | uid = stat_info.st_uid
234 | gid = stat_info.st_gid
235 | original_umask = os.umask(0)
236 | os.makedirs(path, exist_ok=True)
237 | os.chown(path, uid, gid)
238 | os.chmod(path, self.permissions)
239 | os.umask(original_umask)
240 | logging.debug(f"Directory created with permissions (Linux): {path}")
241 | else: # Windows platform
242 | os.makedirs(path, exist_ok=True)
243 | logging.debug(f"Directory created (Windows): {path}")
244 | else:
245 | logging.debug(f"Directory already exists: {path}")
--------------------------------------------------------------------------------
/logging_config.py:
--------------------------------------------------------------------------------
1 | """
2 | Logging configuration for PlexCache.
3 | Handles log setup, rotation, and notification handlers.
4 | """
5 | 
6 | import json
7 | import logging
8 | import os
9 | import subprocess
10 | import time
11 | from datetime import datetime
12 | from logging.handlers import RotatingFileHandler
13 | from pathlib import Path
14 | from typing import Optional
15 | 
16 | import requests
17 | 
18 | 
19 | # Define a new level called SUMMARY that sits just above WARNING
20 | SUMMARY = logging.WARNING + 1
21 | logging.addLevelName(SUMMARY, 'SUMMARY')
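# Usage elsewhere in this module: LoggingManager.log_summary() emits the end-of-run
# summary via self.logger.log(SUMMARY, message), and _set_handler_level() lets the
# notification handlers below subscribe to this level under the name "summary".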
Unraid notifications will not be sent.") 34 | self.notify_cmd_base = None 35 | 36 | def emit(self, record): 37 | if self.notify_cmd_base: 38 | if record.levelno == SUMMARY: 39 | self.send_summary_unraid_notification(record) 40 | else: 41 | self.send_unraid_notification(record) 42 | 43 | def send_summary_unraid_notification(self, record): 44 | icon = 'normal' 45 | notify_cmd = f'{self.notify_cmd_base} -e "PlexCache" -s "Summary" -d "{record.msg}" -i "{icon}"' 46 | subprocess.call(notify_cmd, shell=True) 47 | 48 | def send_unraid_notification(self, record): 49 | # Map logging levels to icons 50 | level_to_icon = { 51 | 'WARNING': 'warning', 52 | 'ERROR': 'alert', 53 | 'INFO': 'normal', 54 | 'DEBUG': 'normal', 55 | 'CRITICAL': 'alert' 56 | } 57 | 58 | icon = level_to_icon.get(record.levelname, 'normal') 59 | 60 | # Prepare the command with necessary arguments 61 | notify_cmd = f'{self.notify_cmd_base} -e "PlexCache" -s "{record.levelname}" -d "{record.msg}" -i "{icon}"' 62 | 63 | # Execute the command 64 | subprocess.call(notify_cmd, shell=True) 65 | 66 | 67 | class WebhookHandler(logging.Handler): 68 | """Custom logging handler for webhook notifications.""" 69 | 70 | SUMMARY = SUMMARY 71 | 72 | def __init__(self, webhook_url: str): 73 | super().__init__() 74 | self.webhook_url = webhook_url 75 | 76 | def emit(self, record): 77 | if record.levelno == SUMMARY: 78 | self.send_summary_webhook_message(record) 79 | else: 80 | self.send_webhook_message(record) 81 | 82 | def send_summary_webhook_message(self, record): 83 | summary = "Plex Cache Summary:\n" + record.msg 84 | payload = { 85 | "content": summary 86 | } 87 | headers = { 88 | "Content-Type": "application/json" 89 | } 90 | response = requests.post(self.webhook_url, data=json.dumps(payload), headers=headers) 91 | if not response.status_code == 204: 92 | logging.error(f"Failed to send summary message. Error code: {response.status_code}") 93 | 94 | def send_webhook_message(self, record): 95 | payload = { 96 | "content": record.msg 97 | } 98 | headers = { 99 | "Content-Type": "application/json" 100 | } 101 | response = requests.post(self.webhook_url, data=json.dumps(payload), headers=headers) 102 | if not response.status_code == 204: 103 | logging.error(f"Failed to send message. 
Error code: {response.status_code}") 104 | 105 | 106 | class LoggingManager: 107 | """Manages logging configuration and setup.""" 108 | 109 | def __init__(self, logs_folder: str, log_level: str = "", max_log_files: int = 5): 110 | self.logs_folder = Path(logs_folder) 111 | self.log_level = log_level 112 | self.max_log_files = max_log_files 113 | self.log_file_pattern = "plexcache_log_*.log" 114 | self.logger = logging.getLogger() 115 | self.summary_messages = [] 116 | self.files_moved = False 117 | 118 | def setup_logging(self) -> None: 119 | """Set up logging configuration.""" 120 | self._ensure_logs_folder() 121 | self._setup_log_file() 122 | self._set_log_level() 123 | self._clean_old_log_files() 124 | 125 | def _ensure_logs_folder(self) -> None: 126 | """Ensure the logs folder exists.""" 127 | if not self.logs_folder.exists(): 128 | try: 129 | self.logs_folder.mkdir(parents=True, exist_ok=True) 130 | except PermissionError: 131 | raise PermissionError(f"{self.logs_folder} not writable, please fix the variable accordingly.") 132 | 133 | def _setup_log_file(self) -> None: 134 | """Set up the log file with rotation.""" 135 | current_time = datetime.now().strftime("%Y%m%d_%H%M") 136 | log_file = self.logs_folder / f"plexcache_log_{current_time}.log" 137 | latest_log_file = self.logs_folder / "plexcache_log_latest.log" 138 | 139 | # Configure the rotating file handler 140 | file_handler = RotatingFileHandler( 141 | log_file, 142 | maxBytes=20*1024*1024, 143 | backupCount=self.max_log_files 144 | ) 145 | file_handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')) 146 | self.logger.addHandler(file_handler) 147 | 148 | # Add console handler for stdout output 149 | console_handler = logging.StreamHandler() 150 | console_handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')) 151 | self.logger.addHandler(console_handler) 152 | 153 | # Ensure the logs folder exists 154 | if not self.logs_folder.exists(): 155 | self.logs_folder.mkdir(parents=True, exist_ok=True) 156 | 157 | # Create or update the symbolic link to the latest log file 158 | try: 159 | if latest_log_file.exists() or latest_log_file.is_symlink(): 160 | latest_log_file.unlink() 161 | latest_log_file.symlink_to(log_file) 162 | except FileExistsError: 163 | # If still exists for some reason, remove and retry 164 | latest_log_file.unlink() 165 | latest_log_file.symlink_to(log_file) 166 | 167 | 168 | def _set_log_level(self) -> None: 169 | """Set the logging level.""" 170 | if self.log_level: 171 | log_level = self.log_level.lower() 172 | level_mapping = { 173 | "debug": logging.DEBUG, 174 | "info": logging.INFO, 175 | "warning": logging.WARNING, 176 | "error": logging.ERROR, 177 | "critical": logging.CRITICAL 178 | } 179 | 180 | if log_level in level_mapping: 181 | self.logger.setLevel(level_mapping[log_level]) 182 | else: 183 | logging.warning(f"Invalid log_level: {log_level}. 
Using default level: INFO") 184 | self.logger.setLevel(logging.INFO) 185 | else: 186 | self.logger.setLevel(logging.INFO) 187 | 188 | def _clean_old_log_files(self) -> None: 189 | """Clean old log files to maintain the maximum count.""" 190 | existing_log_files = list(self.logs_folder.glob(self.log_file_pattern)) 191 | existing_log_files.sort(key=lambda x: x.stat().st_mtime) 192 | 193 | while len(existing_log_files) > self.max_log_files: 194 | os.remove(existing_log_files.pop(0)) 195 | 196 | def setup_notification_handlers(self, notification_config, is_unraid: bool, is_docker: bool) -> None: 197 | """Set up notification handlers based on configuration.""" 198 | notification_type = notification_config.notification_type.lower() 199 | 200 | # Determine notification type 201 | if notification_type == "system": 202 | if is_unraid and not is_docker: 203 | notification_type = "unraid" 204 | else: 205 | notification_type = "" 206 | elif notification_type == "both": 207 | if is_unraid and is_docker: 208 | notification_type = "webhook" 209 | 210 | # Set up Unraid handler 211 | if notification_type in ["both", "unraid"]: 212 | unraid_handler = UnraidHandler() 213 | self._set_handler_level(unraid_handler, notification_config.unraid_level) 214 | self.logger.addHandler(unraid_handler) 215 | 216 | # Set up Webhook handler 217 | if notification_type in ["both", "webhook"] and notification_config.webhook_url: 218 | webhook_handler = WebhookHandler(notification_config.webhook_url) 219 | self._set_handler_level(webhook_handler, notification_config.webhook_level) 220 | self.logger.addHandler(webhook_handler) 221 | 222 | def _set_handler_level(self, handler: logging.Handler, level_str: str) -> None: 223 | """Set the level for a logging handler.""" 224 | if level_str: 225 | level_str = level_str.lower() 226 | level_mapping = { 227 | "debug": logging.DEBUG, 228 | "info": logging.INFO, 229 | "warning": logging.WARNING, 230 | "error": logging.ERROR, 231 | "critical": logging.CRITICAL, 232 | "summary": SUMMARY 233 | } 234 | 235 | if level_str in level_mapping: 236 | handler.setLevel(level_mapping[level_str]) 237 | else: 238 | logging.warning(f"Invalid notification level: {level_str}. Using default level: ERROR") 239 | handler.setLevel(logging.ERROR) 240 | else: 241 | handler.setLevel(logging.ERROR) 242 | 243 | def add_summary_message(self, message: str) -> None: 244 | """Add a message to the summary.""" 245 | if self.files_moved: 246 | self.summary_messages.append(message) 247 | else: 248 | self.summary_messages = [message] 249 | self.files_moved = True 250 | 251 | def log_summary(self) -> None: 252 | """Log the summary message.""" 253 | if self.summary_messages: 254 | summary_message = ' '.join(self.summary_messages) 255 | self.logger.log(SUMMARY, summary_message) 256 | 257 | def shutdown(self) -> None: 258 | """Shutdown logging.""" 259 | logging.shutdown() 260 | -------------------------------------------------------------------------------- /config.py: -------------------------------------------------------------------------------- 1 | """ 2 | Configuration management for PlexCache. 3 | Handles loading, validation, and management of application settings. 
4 | """ 5 | 6 | import json 7 | import os 8 | import logging 9 | from pathlib import Path 10 | from typing import Dict, Any, Optional, List, Tuple 11 | from dataclasses import dataclass 12 | 13 | # Get the directory where config.py is located 14 | _SCRIPT_DIR = Path(os.path.dirname(os.path.abspath(__file__))) 15 | 16 | 17 | @dataclass 18 | class NotificationConfig: 19 | """Configuration for notification settings.""" 20 | notification_type: str = "system" # "Unraid", "Webhook", "Both", or "System" 21 | unraid_level: str = "summary" 22 | webhook_level: str = "" 23 | webhook_url: str = "" 24 | webhook_headers: Optional[Dict[str, str]] = None 25 | 26 | def __post_init__(self): 27 | if self.webhook_headers is None: 28 | self.webhook_headers = {} 29 | 30 | 31 | @dataclass 32 | class PathConfig: 33 | """Configuration for file paths and directories.""" 34 | script_folder: str = str(_SCRIPT_DIR) 35 | logs_folder: str = str(_SCRIPT_DIR / "logs") 36 | plex_source: str = "" 37 | real_source: str = "" 38 | cache_dir: str = "" 39 | nas_library_folders: Optional[List[str]] = None 40 | plex_library_folders: Optional[List[str]] = None 41 | 42 | def __post_init__(self): 43 | if self.nas_library_folders is None: 44 | self.nas_library_folders = [] 45 | if self.plex_library_folders is None: 46 | self.plex_library_folders = [] 47 | 48 | 49 | @dataclass 50 | class PlexConfig: 51 | """Configuration for Plex server settings.""" 52 | plex_url: str = "" 53 | plex_token: str = "" 54 | valid_sections: Optional[List[int]] = None 55 | number_episodes: int = 10 56 | days_to_monitor: int = 183 57 | users_toggle: bool = True 58 | skip_ondeck: Optional[List[str]] = None 59 | skip_watchlist: Optional[List[str]] = None 60 | 61 | def __post_init__(self): 62 | if self.valid_sections is None: 63 | self.valid_sections = [] 64 | if self.skip_ondeck is None: 65 | self.skip_ondeck = [] 66 | if self.skip_watchlist is None: 67 | self.skip_watchlist = [] 68 | 69 | 70 | @dataclass 71 | class CacheConfig: 72 | """Configuration for caching behavior.""" 73 | watchlist_toggle: bool = True 74 | watchlist_episodes: int = 5 75 | watchlist_cache_expiry: int = 48 76 | watched_cache_expiry: int = 48 77 | watched_move: bool = True 78 | 79 | # Add these new fields 80 | remote_watchlist_toggle: bool = False 81 | remote_watchlist_rss_url: str = "" 82 | 83 | 84 | 85 | @dataclass 86 | class PerformanceConfig: 87 | """Configuration for performance settings.""" 88 | max_concurrent_moves_array: int = 2 89 | max_concurrent_moves_cache: int = 5 90 | retry_limit: int = 5 91 | delay: int = 10 92 | permissions: int = 0o777 93 | 94 | 95 | class ConfigManager: 96 | """Manages application configuration loading and validation.""" 97 | 98 | def __init__(self, config_file: str): 99 | self.config_file = Path(config_file) 100 | self.settings_data: Dict[str, Any] = {} 101 | self.notification = NotificationConfig() 102 | self.paths = PathConfig() 103 | self.plex = PlexConfig() 104 | self.cache = CacheConfig() 105 | self.performance = PerformanceConfig() 106 | self.debug = False 107 | self.exit_if_active_session = False 108 | 109 | def load_config(self) -> None: 110 | """Load configuration from file and validate.""" 111 | logging.info(f"Loading configuration from: {self.config_file}") 112 | 113 | if not self.config_file.exists(): 114 | logging.error(f"Settings file not found: {self.config_file}") 115 | raise FileNotFoundError(f"Settings file not found: {self.config_file}") 116 | 117 | try: 118 | with open(self.config_file, 'r', encoding='utf-8') as f: 119 | 
self.settings_data = json.load(f) 120 | logging.debug("Configuration file loaded successfully") 121 | except json.JSONDecodeError as e: 122 | logging.error(f"Invalid JSON in settings file: {type(e).__name__}: {e}") 123 | raise ValueError(f"Invalid JSON in settings file: {e}") 124 | 125 | logging.debug("Processing configuration...") 126 | self._validate_required_fields() 127 | self._validate_types() 128 | self._process_first_start() 129 | self._load_all_configs() 130 | self._validate_values() 131 | self._save_updated_config() 132 | logging.info("Configuration loaded and validated successfully") 133 | 134 | def _process_first_start(self) -> None: 135 | """Handle first start configuration.""" 136 | firststart = self.settings_data.get('firststart') 137 | if firststart: 138 | self.debug = True 139 | logging.warning("First start is set to true, setting debug mode temporarily to true.") 140 | del self.settings_data['firststart'] 141 | else: 142 | self.debug = self.settings_data.get('debug', False) 143 | if firststart is not None: 144 | del self.settings_data['firststart'] 145 | 146 | def _load_all_configs(self) -> None: 147 | """Load all configuration sections.""" 148 | self._load_plex_config() 149 | self._load_cache_config() 150 | self._load_path_config() 151 | self._load_performance_config() 152 | self._load_misc_config() 153 | 154 | def _load_plex_config(self) -> None: 155 | """Load Plex-related configuration.""" 156 | self.plex.plex_url = self.settings_data['PLEX_URL'] 157 | self.plex.plex_token = self.settings_data['PLEX_TOKEN'] 158 | self.plex.number_episodes = self.settings_data['number_episodes'] 159 | self.plex.valid_sections = self.settings_data['valid_sections'] 160 | self.plex.days_to_monitor = self.settings_data['days_to_monitor'] 161 | self.plex.users_toggle = self.settings_data['users_toggle'] 162 | 163 | # Handle skip settings 164 | skip_users = self.settings_data.get('skip_users') 165 | if skip_users is not None: 166 | self.plex.skip_ondeck = self.settings_data.get('skip_ondeck', skip_users) 167 | self.plex.skip_watchlist = self.settings_data.get('skip_watchlist', skip_users) 168 | del self.settings_data['skip_users'] 169 | else: 170 | self.plex.skip_ondeck = self.settings_data.get('skip_ondeck', []) 171 | self.plex.skip_watchlist = self.settings_data.get('skip_watchlist', []) 172 | 173 | def _load_cache_config(self) -> None: 174 | """Load cache-related configuration.""" 175 | self.cache.watchlist_toggle = self.settings_data['watchlist_toggle'] 176 | self.cache.watchlist_episodes = self.settings_data['watchlist_episodes'] 177 | self.cache.watchlist_cache_expiry = self.settings_data['watchlist_cache_expiry'] 178 | self.cache.watched_cache_expiry = self.settings_data['watched_cache_expiry'] 179 | self.cache.watched_move = self.settings_data['watched_move'] 180 | 181 | # Load new remote watchlist settings 182 | self.cache.remote_watchlist_toggle = self.settings_data.get('remote_watchlist_toggle', False) 183 | self.cache.remote_watchlist_rss_url = self.settings_data.get('remote_watchlist_rss_url', "") 184 | 185 | 186 | def _load_path_config(self) -> None: 187 | """Load path-related configuration.""" 188 | self.paths.plex_source = self._add_trailing_slashes(self.settings_data['plex_source']) 189 | self.paths.real_source = self._add_trailing_slashes(self.settings_data['real_source']) 190 | self.paths.cache_dir = self._add_trailing_slashes(self.settings_data['cache_dir']) 191 | self.paths.nas_library_folders = self._remove_all_slashes(self.settings_data['nas_library_folders']) 192 | 
self.paths.plex_library_folders = self._remove_all_slashes(self.settings_data['plex_library_folders']) 193 | 194 | def _load_performance_config(self) -> None: 195 | """Load performance-related configuration.""" 196 | self.performance.max_concurrent_moves_array = self.settings_data['max_concurrent_moves_array'] 197 | self.performance.max_concurrent_moves_cache = self.settings_data['max_concurrent_moves_cache'] 198 | 199 | def _load_misc_config(self) -> None: 200 | """Load miscellaneous configuration.""" 201 | self.exit_if_active_session = self.settings_data.get('exit_if_active_session') 202 | if self.exit_if_active_session is None: 203 | self.exit_if_active_session = not self.settings_data.get('skip', False) 204 | if 'skip' in self.settings_data: 205 | del self.settings_data['skip'] 206 | 207 | # Remove deprecated settings 208 | if 'unraid' in self.settings_data: 209 | del self.settings_data['unraid'] 210 | 211 | def _validate_required_fields(self) -> None: 212 | """Validate that all required fields exist in the configuration.""" 213 | logging.debug("Validating required fields...") 214 | 215 | required_fields = [ 216 | 'PLEX_URL', 'PLEX_TOKEN', 'number_episodes', 'valid_sections', 217 | 'days_to_monitor', 'users_toggle', 'watchlist_toggle', 218 | 'watchlist_episodes', 'watchlist_cache_expiry', 'watched_cache_expiry', 219 | 'watched_move', 'plex_source', 'cache_dir', 'real_source', 220 | 'nas_library_folders', 'plex_library_folders', 221 | 'max_concurrent_moves_array', 'max_concurrent_moves_cache' 222 | ] 223 | 224 | missing_fields = [field for field in required_fields if field not in self.settings_data] 225 | if missing_fields: 226 | logging.error(f"Missing required fields in settings: {missing_fields}") 227 | raise ValueError(f"Missing required fields in settings: {missing_fields}") 228 | 229 | logging.debug("Required fields validation successful") 230 | 231 | def _validate_types(self) -> None: 232 | """Validate that configuration values have correct types.""" 233 | logging.debug("Validating configuration types...") 234 | 235 | type_checks = { 236 | 'PLEX_URL': str, 237 | 'PLEX_TOKEN': str, 238 | 'number_episodes': int, 239 | 'valid_sections': list, 240 | 'days_to_monitor': int, 241 | 'users_toggle': bool, 242 | 'watchlist_toggle': bool, 243 | 'watchlist_episodes': int, 244 | 'watchlist_cache_expiry': int, 245 | 'watched_cache_expiry': int, 246 | 'watched_move': bool, 247 | 'plex_source': str, 248 | 'cache_dir': str, 249 | 'real_source': str, 250 | 'nas_library_folders': list, 251 | 'plex_library_folders': list, 252 | 'max_concurrent_moves_array': int, 253 | 'max_concurrent_moves_cache': int, 254 | } 255 | 256 | type_errors = [] 257 | for field, expected_type in type_checks.items(): 258 | if field in self.settings_data: 259 | value = self.settings_data[field] 260 | if not isinstance(value, expected_type): 261 | type_errors.append( 262 | f"'{field}' expected {expected_type.__name__}, got {type(value).__name__}" 263 | ) 264 | 265 | if type_errors: 266 | error_msg = "Type validation errors: " + "; ".join(type_errors) 267 | logging.error(error_msg) 268 | raise TypeError(error_msg) 269 | 270 | logging.debug("Type validation successful") 271 | 272 | def _validate_values(self) -> None: 273 | """Validate configuration value ranges and constraints.""" 274 | logging.debug("Validating configuration values...") 275 | errors = [] 276 | 277 | # Validate non-empty paths 278 | path_fields = ['plex_source', 'real_source', 'cache_dir'] 279 | for field in path_fields: 280 | if not 
self.settings_data.get(field, '').strip(): 281 | errors.append(f"'{field}' cannot be empty") 282 | 283 | # Validate positive integers 284 | positive_int_fields = [ 285 | 'number_episodes', 'days_to_monitor', 'watchlist_episodes', 286 | 'watchlist_cache_expiry', 'watched_cache_expiry', 287 | 'max_concurrent_moves_array', 'max_concurrent_moves_cache' 288 | ] 289 | for field in positive_int_fields: 290 | value = self.settings_data.get(field, 0) 291 | if value < 0: 292 | errors.append(f"'{field}' must be non-negative, got {value}") 293 | 294 | # Validate non-empty URL and token 295 | if not self.settings_data.get('PLEX_URL', '').strip(): 296 | errors.append("'PLEX_URL' cannot be empty") 297 | if not self.settings_data.get('PLEX_TOKEN', '').strip(): 298 | errors.append("'PLEX_TOKEN' cannot be empty") 299 | 300 | if errors: 301 | error_msg = "Configuration validation errors: " + "; ".join(errors) 302 | logging.error(error_msg) 303 | raise ValueError(error_msg) 304 | 305 | logging.debug("Value validation successful") 306 | 307 | def _save_updated_config(self) -> None: 308 | """Save updated configuration back to file.""" 309 | try: 310 | self.settings_data.update({ 311 | 'cache_dir': self.paths.cache_dir, 312 | 'real_source': self.paths.real_source, 313 | 'plex_source': self.paths.plex_source, 314 | 'nas_library_folders': self.paths.nas_library_folders, 315 | 'plex_library_folders': self.paths.plex_library_folders, 316 | 'skip_ondeck': self.plex.skip_ondeck, 317 | 'skip_watchlist': self.plex.skip_watchlist, 318 | 'exit_if_active_session': self.exit_if_active_session, 319 | }) 320 | 321 | with open(self.config_file, 'w', encoding='utf-8') as f: 322 | json.dump(self.settings_data, f, indent=4) 323 | except Exception as e: 324 | logging.error(f"Error saving settings: {type(e).__name__}: {e}") 325 | raise 326 | 327 | @staticmethod 328 | def _add_trailing_slashes(value: str) -> str: 329 | """Add trailing slashes to a path.""" 330 | if ':' not in value: # Not a Windows path 331 | if not value.startswith("/"): 332 | value = "/" + value 333 | if not value.endswith("/"): 334 | value = value + "/" 335 | return value 336 | 337 | @staticmethod 338 | def _remove_all_slashes(value_list: List[str]) -> List[str]: 339 | """Remove all slashes from a list of paths.""" 340 | return [value.strip('/\\') for value in value_list] 341 | 342 | def get_cache_files(self) -> Tuple[Path, Path, Path]: 343 | """Get cache file paths.""" 344 | script_folder = Path(self.paths.script_folder) 345 | return ( 346 | script_folder / "plexcache_watchlist_cache.json", 347 | script_folder / "plexcache_watched_cache.json", 348 | script_folder / "plexcache_mover_files_to_exclude.txt" 349 | ) 350 | -------------------------------------------------------------------------------- /plex_api.py: -------------------------------------------------------------------------------- 1 | """ 2 | Plex API integration for PlexCache. 3 | Handles Plex server connections and media fetching operations. 
4 | """ 5 | 6 | import json 7 | import logging 8 | import time 9 | import xml.etree.ElementTree as ET 10 | from datetime import datetime, timedelta 11 | from concurrent.futures import ThreadPoolExecutor, as_completed 12 | from pathlib import Path 13 | from typing import List, Set, Optional, Generator, Tuple 14 | 15 | from plexapi.server import PlexServer 16 | from plexapi.video import Episode, Movie 17 | from plexapi.myplex import MyPlexAccount 18 | from plexapi.exceptions import NotFound, BadRequest 19 | import requests 20 | 21 | 22 | class PlexManager: 23 | """Manages Plex server connections and operations.""" 24 | 25 | def __init__(self, plex_url: str, plex_token: str, retry_limit: int = 3, delay: int = 5): 26 | self.plex_url = plex_url 27 | self.plex_token = plex_token 28 | self.retry_limit = retry_limit 29 | self.delay = delay 30 | self.plex = None 31 | 32 | def connect(self) -> None: 33 | """Connect to the Plex server.""" 34 | logging.info(f"Connecting to Plex server: {self.plex_url}") 35 | 36 | try: 37 | self.plex = PlexServer(self.plex_url, self.plex_token) 38 | logging.info("Successfully connected to Plex server") 39 | logging.debug(f"Plex server version: {self.plex.version}") 40 | except Exception as e: 41 | logging.error(f"Error connecting to the Plex server: {e}") 42 | raise ConnectionError(f"Error connecting to the Plex server: {e}") 43 | 44 | def get_plex_instance(self, user=None) -> Tuple[Optional[str], Optional[PlexServer]]: 45 | """Get Plex instance for a specific user.""" 46 | if user: 47 | username = user.title 48 | try: 49 | return username, PlexServer(self.plex_url, user.get_token(self.plex.machineIdentifier)) 50 | except Exception as e: 51 | logging.error(f"Error: Failed to fetch {username} onDeck media. Error: {e}") 52 | return None, None 53 | else: 54 | username = self.plex.myPlexAccount().title 55 | return username, PlexServer(self.plex_url, self.plex_token) 56 | 57 | def search_plex(self, title: str): 58 | """Search for a file in the Plex server.""" 59 | results = self.plex.search(title) 60 | return results[0] if len(results) > 0 else None 61 | 62 | def get_active_sessions(self) -> List: 63 | """Get active sessions from Plex.""" 64 | return self.plex.sessions() 65 | 66 | def get_on_deck_media(self, valid_sections: List[int], days_to_monitor: int, 67 | number_episodes: int, users_toggle: bool, skip_ondeck: List[str]) -> List[str]: 68 | """Get OnDeck media files, skipping users with no token to prevent 401 errors.""" 69 | on_deck_files = [] 70 | 71 | # Build list of users to fetch 72 | users_to_fetch = [None] # Always include main local account 73 | if users_toggle: 74 | for user in self.plex.myPlexAccount().users(): 75 | try: 76 | token = user.get_token(self.plex.machineIdentifier) 77 | if not token: 78 | logging.info(f"Skipping {user.title} for OnDeck — no token available") 79 | continue 80 | if token in skip_ondeck: 81 | logging.info(f"Skipping {user.title} for OnDeck — token in skip list") 82 | continue 83 | users_to_fetch.append(user) 84 | except Exception as e: 85 | logging.warning(f"Could not get token for {user.title}; skipping. 
Error: {e}") 86 | 87 | logging.info(f"Fetching OnDeck media for {len(users_to_fetch)} users") 88 | 89 | # Fetch concurrently 90 | with ThreadPoolExecutor(max_workers=10) as executor: 91 | futures = { 92 | executor.submit( 93 | self._fetch_user_on_deck_media, 94 | valid_sections, days_to_monitor, number_episodes, user 95 | ) 96 | for user in users_to_fetch 97 | } 98 | 99 | for future in as_completed(futures): 100 | try: 101 | on_deck_files.extend(future.result()) 102 | except Exception as e: 103 | logging.error(f"An error occurred while fetching OnDeck media for a user: {e}") 104 | 105 | logging.info(f"Found {len(on_deck_files)} OnDeck items") 106 | return on_deck_files 107 | 108 | 109 | def _fetch_user_on_deck_media(self, valid_sections: List[int], days_to_monitor: int, 110 | number_episodes: int, user=None) -> List[str]: 111 | """Fetch onDeck media for a specific user, skipping users with no token.""" 112 | try: 113 | username, plex_instance = self.get_plex_instance(user) 114 | if not plex_instance: 115 | logging.info(f"Skipping OnDeck fetch for {username} — no Plex instance available (likely no token).") 116 | return [] 117 | 118 | logging.info(f"Fetching {username}'s onDeck media...") 119 | 120 | on_deck_files = [] 121 | # Get all sections available for the user 122 | available_sections = [section.key for section in plex_instance.library.sections()] 123 | filtered_sections = list(set(available_sections) & set(valid_sections)) 124 | 125 | for video in plex_instance.library.onDeck(): 126 | section_key = video.section().key 127 | if not filtered_sections or section_key in filtered_sections: 128 | delta = datetime.now() - video.lastViewedAt 129 | if delta.days <= days_to_monitor: 130 | if isinstance(video, Episode): 131 | self._process_episode_ondeck(video, number_episodes, on_deck_files) 132 | elif isinstance(video, Movie): 133 | self._process_movie_ondeck(video, on_deck_files) 134 | else: 135 | logging.warning(f"Skipping OnDeck item '{video.title}' — unknown type {type(video)}") 136 | else: 137 | logging.debug(f"Skipping OnDeck item '{video.title}' — section {section_key} not in valid_sections {filtered_sections}") 138 | 139 | return on_deck_files 140 | 141 | except Exception as e: 142 | logging.error(f"An error occurred while fetching onDeck media for {username}: {e}") 143 | return [] 144 | 145 | def _process_episode_ondeck(self, video: Episode, number_episodes: int, on_deck_files: List[str]) -> None: 146 | """Process an episode from onDeck.""" 147 | for media in video.media: 148 | on_deck_files.extend(part.file for part in media.parts) 149 | 150 | # Skip fetching next episodes if current episode has missing index data 151 | if video.parentIndex is None or video.index is None: 152 | logging.warning(f"Skipping next episode fetch for '{video.grandparentTitle}' - missing index data (parentIndex={video.parentIndex}, index={video.index})") 153 | return 154 | 155 | show = video.grandparentTitle 156 | library_section = video.section() 157 | episodes = list(library_section.search(show)[0].episodes()) 158 | current_season = video.parentIndex 159 | next_episodes = self._get_next_episodes(episodes, current_season, video.index, number_episodes) 160 | 161 | for episode in next_episodes: 162 | for media in episode.media: 163 | on_deck_files.extend(part.file for part in media.parts) 164 | for part in media.parts: 165 | logging.info(f"OnDeck found: {part.file}") 166 | 167 | def _process_movie_ondeck(self, video: Movie, on_deck_files: List[str]) -> None: 168 | """Process a movie from onDeck.""" 169 | for 
media in video.media: 170 | on_deck_files.extend(part.file for part in media.parts) 171 | for part in media.parts: 172 | logging.info(f"OnDeck found: {part.file}") 173 | 174 | def _get_next_episodes(self, episodes: List[Episode], current_season: int, 175 | current_episode_index: int, number_episodes: int) -> List[Episode]: 176 | """Get the next episodes after the current one.""" 177 | next_episodes = [] 178 | for episode in episodes: 179 | # Skip episodes with missing index data 180 | if episode.parentIndex is None or episode.index is None: 181 | logging.debug(f"Skipping episode '{episode.title}' from '{episode.grandparentTitle}' - missing index data (parentIndex={episode.parentIndex}, index={episode.index})") 182 | continue 183 | if (episode.parentIndex > current_season or 184 | (episode.parentIndex == current_season and episode.index > current_episode_index)) and len(next_episodes) < number_episodes: 185 | next_episodes.append(episode) 186 | if len(next_episodes) == number_episodes: 187 | break 188 | return next_episodes 189 | 190 | def clean_rss_title(self, title: str) -> str: 191 | """Remove trailing year in parentheses from a title, e.g. 'Movie (2023)' -> 'Movie'.""" 192 | import re 193 | return re.sub(r"\s\(\d{4}\)$", "", title) 194 | 195 | 196 | def get_watchlist_media(self, valid_sections: List[int], watchlist_episodes: int, 197 | users_toggle: bool, skip_watchlist: List[str], rss_url: Optional[str] = None) -> Generator[str, None, None]: 198 | """Get watchlist media files, optionally via RSS, with proper user filtering.""" 199 | 200 | def fetch_rss_titles(url: str) -> List[Tuple[str, str]]: 201 | """Fetch titles and categories from a Plex RSS feed.""" 202 | try: 203 | resp = requests.get(url, timeout=10) 204 | resp.raise_for_status() 205 | root = ET.fromstring(resp.text) 206 | items = [] 207 | for item in root.findall("channel/item"): 208 | title = title_elem.text if (title_elem := item.find("title")) is not None else None  # guard: some feed items may lack a title 209 | category_elem = item.find("category") 210 | category = category_elem.text if category_elem is not None else "" 211 | if title: items.append((title, category))  # skip malformed items so one bad entry doesn't break the caller 212 | return items 213 | except Exception as e: 214 | logging.error(f"Failed to fetch or parse RSS feed {url}: {e}") 215 | return [] 216 | 217 | def process_show(file, watchlist_episodes: int) -> Generator[str, None, None]: 218 | episodes = file.episodes() 219 | logging.debug(f"Processing show {file.title} with {len(episodes)} episodes") 220 | for episode in episodes[:watchlist_episodes]: 221 | if len(episode.media) > 0 and len(episode.media[0].parts) > 0: 222 | if not episode.isPlayed: 223 | yield episode.media[0].parts[0].file 224 | 225 | def process_movie(file) -> Generator[str, None, None]: 226 | if len(file.media) > 0 and len(file.media[0].parts) > 0: 227 | yield file.media[0].parts[0].file 228 | 229 | 230 | def fetch_user_watchlist(user) -> Generator[str, None, None]: 231 | """Fetch watchlist media for a user, optionally via RSS, yielding file paths.""" 232 | 233 | time.sleep(1) # slight delay for rate-limit protection 234 | current_username = self.plex.myPlexAccount().title if user is None else user.title 235 | logging.info(f"Fetching watchlist media for {current_username}") 236 | 237 | # Build list of valid sections for filtering 238 | available_sections = [section.key for section in self.plex.library.sections()] 239 | filtered_sections = list(set(available_sections) & set(valid_sections)) 240 | 241 | # Skip users in the skip list 242 | if user: 243 | try: 244 | token = user.get_token(self.plex.machineIdentifier) 245 | except Exception as e: 246 | 
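# Note: MyPlexUser.get_token() consults plex.tv and can raise here (for example
# when a user's access to this server has been revoked); treating that as a skip
# rather than a hard failure is deliberate.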
logging.warning(f"Could not get token for {current_username}; skipping. Error: {e}") 247 | return 248 | if token in skip_watchlist or current_username in skip_watchlist: 249 | logging.info(f"Skipping {current_username} due to skip_watchlist") 250 | return 251 | 252 | # --- Obtain Plex account instance --- 253 | try: 254 | if user is None: 255 | # Use already authenticated main account 256 | account = self.plex.myPlexAccount() 257 | else: 258 | # Try to switch to home user 259 | try: 260 | account = self.plex.myPlexAccount().switchHomeUser(user.title) 261 | except Exception as e: 262 | logging.warning(f"Could not switch to user {user.title}; skipping. Error: {e}") 263 | return 264 | except Exception as e: 265 | logging.error(f"Failed to get Plex account for {current_username}: {e}") 266 | return 267 | 268 | # --- RSS feed processing --- 269 | if rss_url: 270 | rss_items = fetch_rss_titles(rss_url) 271 | logging.info(f"RSS feed contains {len(rss_items)} items") 272 | for title, category in rss_items: 273 | cleaned_title = self.clean_rss_title(title) 274 | file = self.search_plex(cleaned_title) 275 | if file: 276 | logging.info(f"RSS title '{title}' matched Plex item '{file.title}' ({file.TYPE})") 277 | if not filtered_sections or file.librarySectionID in filtered_sections: 278 | try: 279 | if category == 'show' or file.TYPE == 'show': 280 | yield from process_show(file, watchlist_episodes) 281 | elif file.TYPE == 'movie': 282 | yield from process_movie(file) 283 | else: 284 | logging.debug(f"Ignoring item '{file.title}' of type '{file.TYPE}'") 285 | except Exception as e: 286 | logging.warning(f"Error processing '{file.title}': {e}") 287 | else: 288 | logging.debug(f"Skipping RSS item '{file.title}' — section {file.librarySectionID} not in valid_sections {filtered_sections}") 289 | else: 290 | logging.warning(f"RSS title '{title}' (cleaned: '{cleaned_title}') not found in Plex — discarded") 291 | return 292 | 293 | # --- Local Plex watchlist processing --- 294 | try: 295 | watchlist = account.watchlist(filter='released') 296 | logging.info(f"{current_username}: Found {len(watchlist)} watchlist items from Plex") 297 | for item in watchlist: 298 | file = self.search_plex(item.title) 299 | if file and (not filtered_sections or file.librarySectionID in filtered_sections): 300 | try: 301 | if file.TYPE == 'show': 302 | yield from process_show(file, watchlist_episodes) 303 | elif file.TYPE == 'movie': 304 | yield from process_movie(file) 305 | else: 306 | logging.debug(f"Ignoring item '{file.title}' of type '{file.TYPE}'") 307 | except Exception as e: 308 | logging.warning(f"Error processing '{file.title}': {e}") 309 | elif file: 310 | logging.debug(f"Skipping watchlist item '{file.title}' — section {file.librarySectionID} not in valid_sections {filtered_sections}") 311 | except Exception as e: 312 | logging.error(f"Error fetching watchlist for {current_username}: {e}") 313 | 314 | 315 | # --- Prepare users to fetch --- 316 | users_to_fetch = [None] # always include the main local account 317 | 318 | if users_toggle: 319 | for user in self.plex.myPlexAccount().users(): 320 | title = getattr(user, "title", None) 321 | username = getattr(user, "username", None) # None for local/home users 322 | 323 | if username is not None: 324 | logging.info(f"Skipping remote user {title} (remote accounts are processed via RSS, not API)") 325 | continue 326 | 327 | try: 328 | user_token = user.get_token(self.plex.machineIdentifier) 329 | except Exception as e: 330 | logging.warning(f"Could not get token for 
{title}; skipping. Error: {e}") 331 | continue 332 | 333 | if (user_token and user_token in skip_watchlist) or (title and title in skip_watchlist): 334 | logging.info(f"Skipping {title} (in skip_watchlist)") 335 | continue 336 | 337 | users_to_fetch.append(user) 338 | 339 | logging.info(f"Processing {len(users_to_fetch)} users for local Plex watchlist") 340 | 341 | # --- Fetch concurrently --- 342 | with ThreadPoolExecutor(max_workers=10) as executor: 343 | futures = {executor.submit(fetch_user_watchlist, user) for user in users_to_fetch} 344 | for future in as_completed(futures): 345 | retries = 0 346 | while retries < self.retry_limit: 347 | try: 348 | yield from future.result() 349 | break 350 | except Exception as e: 351 | if "429" in str(e): 352 | logging.warning(f"Rate limit exceeded. Retrying in {self.delay} seconds... Error: {e}") 353 | time.sleep(self.delay) 354 | retries += 1 355 | else: 356 | logging.error(f"Error fetching watchlist media: {e}") 357 | break 358 | 359 | 360 | 361 | def get_watched_media(self, valid_sections: List[int], last_updated: Optional[float], 362 | users_toggle: bool) -> Generator[str, None, None]: 363 | """Get watched media files (local users only).""" 364 | 365 | def process_video(video) -> Generator[str, None, None]: 366 | if video.TYPE == 'show': 367 | for episode in video.episodes(): 368 | yield from process_episode(episode) 369 | else: 370 | if len(video.media) > 0 and len(video.media[0].parts) > 0: 371 | yield video.media[0].parts[0].file 372 | 373 | def process_episode(episode) -> Generator[str, None, None]: 374 | for media in episode.media: 375 | for part in media.parts: 376 | if episode.isPlayed: 377 | yield part.file 378 | 379 | def fetch_user_watched_media(plex_instance: PlexServer, username: str) -> Generator[str, None, None]: 380 | time.sleep(1) 381 | try: 382 | logging.info(f"Fetching {username}'s watched media...") 383 | all_sections = [section.key for section in plex_instance.library.sections()] 384 | available_sections = list(set(all_sections) & set(valid_sections)) if valid_sections else all_sections 385 | 386 | for section_key in available_sections: 387 | section = plex_instance.library.sectionByID(section_key) 388 | # Skip non-video sections (music, photos) - they don't support 'unwatched' filter 389 | if section.type not in ('movie', 'show'): 390 | logging.debug(f"Skipping non-video section '{section.title}' (type: {section.type})") 391 | continue 392 | for video in section.search(unwatched=False): 393 | if last_updated and video.lastViewedAt and video.lastViewedAt < datetime.fromtimestamp(last_updated): 394 | continue 395 | yield from process_video(video) 396 | except Exception as e: 397 | logging.error(f"Error fetching watched media for {username}: {e}") 398 | 399 | # --- Only fetch for main local user --- 400 | with ThreadPoolExecutor() as executor: 401 | main_username = self.plex.myPlexAccount().title 402 | futures = [executor.submit(fetch_user_watched_media, self.plex, main_username)] 403 | 404 | logging.info(f"Processing watched media for local user: {main_username} only") 405 | 406 | for future in as_completed(futures): 407 | try: 408 | yield from future.result() 409 | except Exception as e: 410 | logging.error(f"An error occurred in get_watched_media: {e}") 411 | 412 | 413 | 414 | class CacheManager: 415 | """Manages cache operations for media files.""" 416 | 417 | @staticmethod 418 | def load_media_from_cache(cache_file: Path) -> Tuple[Set[str], Optional[float]]: 419 | if cache_file.exists(): 420 | with cache_file.open('r') 
as f: 421 | try: 422 | data = json.load(f) 423 | if isinstance(data, dict): 424 | return set(data.get('media', [])), data.get('timestamp') 425 | elif isinstance(data, list):  # legacy cache format: a bare list of paths 426 | return set(data), None 427 | except json.JSONDecodeError: 428 | # Corrupt cache file: reset it to an empty payload; write_text avoids reopening and shadowing the read handle 429 | cache_file.write_text(json.dumps({'media': [], 'timestamp': None})) 430 | return set(), None 431 | return set(), None 432 | 433 | @staticmethod 434 | def save_media_to_cache(cache_file: Path, media_list: List[str], timestamp: Optional[float] = None) -> None: 435 | if timestamp is None: 436 | timestamp = datetime.now().timestamp() 437 | with cache_file.open('w') as f: 438 | json.dump({'media': media_list, 'timestamp': timestamp}, f) 439 | -------------------------------------------------------------------------------- /file_operations.py: -------------------------------------------------------------------------------- 1 | """ 2 | File operations for PlexCache. 3 | Handles file moving, filtering, subtitle operations, and path modifications. 4 | """ 5 | 6 | import os 7 | import logging 8 | import threading 9 | from concurrent.futures import ThreadPoolExecutor 10 | from typing import List, Set, Optional, Tuple 11 | import re 12 | 13 | 14 | class FilePathModifier: 15 | """Handles file path modifications and conversions.""" 16 | 17 | def __init__(self, plex_source: str, real_source: str, 18 | plex_library_folders: List[str], nas_library_folders: List[str]): 19 | self.plex_source = plex_source 20 | self.real_source = real_source 21 | self.plex_library_folders = plex_library_folders 22 | self.nas_library_folders = nas_library_folders 23 | 24 | def modify_file_paths(self, files: List[str]) -> List[str]: 25 | """Modify file paths from Plex paths to real system paths.""" 26 | if files is None: 27 | return [] 28 | 29 | logging.info("Editing file paths...") 30 | 31 | result = [] 32 | for file_path in files: 33 | # Pass through paths that are already converted (don't start with plex_source) 34 | if not file_path.startswith(self.plex_source): 35 | result.append(file_path) 36 | continue 37 | 38 | logging.info(f"Original path: {file_path}") 39 | 40 | # Replace the plex_source with the real_source in the file path 41 | file_path = file_path.replace(self.plex_source, self.real_source, 1) 42 | 43 | # Determine which library folder is in the file path 44 | for j, folder in enumerate(self.plex_library_folders): 45 | if folder in file_path: 46 | # Replace the plex library folder with the corresponding NAS library folder 47 | file_path = file_path.replace(folder, self.nas_library_folders[j]) 48 | break 49 | 50 | result.append(file_path) 51 | logging.info(f"Edited path: {file_path}") 52 | 53 | return result 54 | 55 | 56 | class SubtitleFinder: 57 | """Handles subtitle file discovery and operations.""" 58 | 59 | def __init__(self, subtitle_extensions: Optional[List[str]] = None): 60 | if subtitle_extensions is None: 61 | subtitle_extensions = [".srt", ".vtt", ".sbv", ".sub", ".idx"] 62 | self.subtitle_extensions = subtitle_extensions 63 | 64 | def get_media_subtitles(self, media_files: List[str], files_to_skip: Optional[Set[str]] = None) -> List[str]: 65 | """Get subtitle files for media files.""" 66 | logging.info("Fetching subtitles...") 67 | 68 | files_to_skip = set() if files_to_skip is None else set(files_to_skip) 69 | processed_files = set() 70 | all_media_files = media_files.copy() 71 | 72 | for file in media_files: 73 | if file in files_to_skip or file in processed_files: 74 | continue 75 | processed_files.add(file) 76 | 77 | directory_path = 
os.path.dirname(file) 78 | if os.path.exists(directory_path): 79 | subtitle_files = self._find_subtitle_files(directory_path, file) 80 | all_media_files.extend(subtitle_files) 81 | for subtitle_file in subtitle_files: 82 | logging.info(f"Subtitle found: {subtitle_file}") 83 | 84 | return all_media_files 85 | 86 | def _find_subtitle_files(self, directory_path: str, file: str) -> List[str]: 87 | """Find subtitle files in a directory for a given media file.""" 88 | file_basename = os.path.basename(file) 89 | file_name, _ = os.path.splitext(file_basename) 90 | 91 | try: 92 | subtitle_files = [ 93 | entry.path 94 | for entry in os.scandir(directory_path) 95 | if entry.is_file() and entry.name.startswith(file_name) and 96 | entry.name != file_basename and entry.name.endswith(tuple(self.subtitle_extensions)) 97 | ] 98 | except PermissionError as e: 99 | logging.error(f"Cannot access directory {directory_path}. Permission denied. {type(e).__name__}: {e}") 100 | subtitle_files = [] 101 | except OSError as e: 102 | logging.error(f"Cannot access directory {directory_path}. {type(e).__name__}: {e}") 103 | subtitle_files = [] 104 | 105 | return subtitle_files 106 | 107 | 108 | class FileFilter: 109 | """Handles file filtering based on destination and conditions.""" 110 | 111 | def __init__(self, real_source: str, cache_dir: str, is_unraid: bool, 112 | mover_cache_exclude_file: str): 113 | self.real_source = real_source 114 | self.cache_dir = cache_dir 115 | self.is_unraid = is_unraid 116 | self.mover_cache_exclude_file = mover_cache_exclude_file or "" 117 | 118 | def filter_files(self, files: List[str], destination: str, 119 | media_to_cache: Optional[List[str]] = None, 120 | files_to_skip: Optional[Set[str]] = None) -> List[str]: 121 | """Filter files based on destination and conditions.""" 122 | if media_to_cache is None: 123 | media_to_cache = [] 124 | 125 | processed_files = set() 126 | media_to = [] 127 | cache_files_to_exclude = [] 128 | 129 | if not files: 130 | return [] 131 | 132 | for file in files: 133 | if file in processed_files or (files_to_skip and file in files_to_skip): 134 | continue 135 | processed_files.add(file) 136 | 137 | cache_file_name = self._get_cache_paths(file)[1] 138 | cache_files_to_exclude.append(cache_file_name) 139 | 140 | if destination == 'array': 141 | if self._should_add_to_array(file, cache_file_name, media_to_cache): 142 | media_to.append(file) 143 | logging.info(f"Adding file to array: {file}") 144 | 145 | elif destination == 'cache': 146 | if self._should_add_to_cache(file, cache_file_name): 147 | media_to.append(file) 148 | logging.info(f"Adding file to cache: {file}") 149 | 150 | return media_to 151 | 152 | def _should_add_to_array(self, file: str, cache_file_name: str, media_to_cache: List[str]) -> bool: 153 | """Determine if a file should be added to the array.""" 154 | if file in media_to_cache: 155 | return False 156 | 157 | array_file = file.replace("/mnt/user/", "/mnt/user0/", 1) if self.is_unraid else file 158 | 159 | if os.path.isfile(array_file): 160 | # File already exists in the array, try to remove cache version 161 | try: 162 | os.remove(cache_file_name) 163 | logging.info(f"Removed cache version of file: {cache_file_name}") 164 | except FileNotFoundError: 165 | pass # File already removed or never existed 166 | except OSError as e: 167 | logging.error(f"Failed to remove cache file {cache_file_name}: {type(e).__name__}: {e}") 168 | return False # No need to add to array 169 | return True # Otherwise, the file should be added to the array 170 | 
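# Worked example (hypothetical Unraid paths): for file "/mnt/user/media/movies/A.mkv"
# with real_source="/mnt/user" and cache_dir="/mnt/cache", _get_cache_paths() returns
# ("/mnt/cache/media/movies", "/mnt/cache/media/movies/A.mkv"), while the array-side
# check above rewrites the path to "/mnt/user0/media/movies/A.mkv" (/mnt/user0 being
# Unraid's array-only view, which excludes the cache pool).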
171 | def _should_add_to_cache(self, file: str, cache_file_name: str) -> bool: 172 | """Determine if a file should be added to the cache.""" 173 | array_file = file.replace("/mnt/user/", "/mnt/user0/", 1) if self.is_unraid else file 174 | 175 | if os.path.isfile(cache_file_name) and os.path.isfile(array_file): 176 | # Remove the array version when the file exists in the cache 177 | try: 178 | os.remove(array_file) 179 | logging.info(f"Removed array version of file: {array_file}") 180 | except FileNotFoundError: 181 | pass # File already removed 182 | except OSError as e: 183 | logging.error(f"Failed to remove array file {array_file}: {type(e).__name__}: {e}") 184 | return False 185 | 186 | return not os.path.isfile(cache_file_name) 187 | 188 | def _get_cache_paths(self, file: str) -> Tuple[str, str]: 189 | """Get cache path and filename for a given file.""" 190 | # Get the cache path by replacing the real source directory with the cache directory 191 | cache_path = os.path.dirname(file).replace(self.real_source, self.cache_dir, 1) 192 | 193 | # Get the cache file name by joining the cache path with the base name of the file 194 | cache_file_name = os.path.join(cache_path, os.path.basename(file)) 195 | 196 | return cache_path, cache_file_name 197 | 198 | def get_files_to_move_back_to_array(self, current_ondeck_items: Set[str], 199 | current_watchlist_items: Set[str]) -> Tuple[List[str], List[str]]: 200 | """Get files in cache that should be moved back to array because they're no longer needed.""" 201 | files_to_move_back = [] 202 | cache_paths_to_remove = [] 203 | 204 | try: 205 | # Read the exclude file to get all files currently in cache 206 | if not os.path.exists(self.mover_cache_exclude_file): 207 | logging.info("No exclude file found, nothing to move back") 208 | return files_to_move_back, cache_paths_to_remove 209 | 210 | with open(self.mover_cache_exclude_file, 'r') as f: 211 | cache_files = [line.strip() for line in f if line.strip()] 212 | 213 | logging.info(f"Found {len(cache_files)} files in exclude list") 214 | 215 | # Get media names that are still needed (in OnDeck or watchlist) 216 | needed_media = set() 217 | for item in current_ondeck_items | current_watchlist_items: 218 | # Extract show/movie name from path 219 | media_name = self._extract_media_name(item) 220 | if media_name is not None: 221 | needed_media.add(media_name) 222 | 223 | # Check each file in cache 224 | for cache_file in cache_files: 225 | if not os.path.exists(cache_file): 226 | logging.debug(f"Cache file no longer exists: {cache_file}") 227 | cache_paths_to_remove.append(cache_file) 228 | continue 229 | 230 | # Extract show/movie name from cache file 231 | media_name = self._extract_media_name(cache_file) 232 | if media_name is None: 233 | logging.warning(f"Could not extract media name from path: {cache_file}") 234 | continue 235 | 236 | # If media is still needed, keep this file in cache 237 | if media_name in needed_media: 238 | logging.debug(f"Media still needed, keeping in cache: {media_name}") 239 | continue 240 | 241 | # Media is no longer needed, move this file back to array 242 | array_file = cache_file.replace(self.cache_dir, self.real_source, 1) 243 | 244 | logging.info(f"Media no longer needed, will move back to array: {media_name} - {cache_file}") 245 | files_to_move_back.append(array_file) 246 | cache_paths_to_remove.append(cache_file) 247 | 248 | logging.info(f"Found {len(files_to_move_back)} files to move back to array") 249 | 250 | except Exception as e: 251 | logging.exception(f"Error 
getting files to move back to array: {type(e).__name__}: {e}") 252 | 253 | return files_to_move_back, cache_paths_to_remove 254 | 255 | def _extract_media_name(self, file_path: str) -> Optional[str]: 256 | """ 257 | Extract a comparable media identifier from a file path. 258 | - For movies: returns cleaned file title. 259 | - For TV episodes: returns cleaned episode name. 260 | """ 261 | try: 262 | filename = os.path.basename(file_path) 263 | name, _ext = os.path.splitext(filename) 264 | 265 | # Remove trailing parentheses blocks 266 | cleaned = re.sub(r'\s*\([^)]*\)$', '', name).strip() 267 | 268 | return cleaned 269 | 270 | except Exception: 271 | return None 272 | 273 | def remove_files_from_exclude_list(self, cache_paths_to_remove: List[str]) -> bool: 274 | """Remove specified files from the exclude list. Returns True on success.""" 275 | try: 276 | if not os.path.exists(self.mover_cache_exclude_file): 277 | logging.warning("Exclude file does not exist, cannot remove files") 278 | return False 279 | 280 | # Read current exclude list 281 | with open(self.mover_cache_exclude_file, 'r') as f: 282 | current_files = [line.strip() for line in f if line.strip()] 283 | 284 | # Convert to set for O(1) lookup instead of O(n) 285 | paths_to_remove_set = set(cache_paths_to_remove) 286 | 287 | # Remove specified files 288 | updated_files = [f for f in current_files if f not in paths_to_remove_set] 289 | 290 | # Write back updated list 291 | with open(self.mover_cache_exclude_file, 'w') as f: 292 | for file_path in updated_files: 293 | f.write(f"{file_path}\n") 294 | 295 | logging.info(f"Removed {len(cache_paths_to_remove)} files from exclude list") 296 | return True 297 | 298 | except Exception as e: 299 | logging.exception(f"Error removing files from exclude list: {type(e).__name__}: {e}") 300 | return False 301 | 302 | 303 | class FileMover: 304 | """Handles file moving operations.""" 305 | 306 | def __init__(self, real_source: str, cache_dir: str, is_unraid: bool, 307 | file_utils, debug: bool = False, mover_cache_exclude_file: Optional[str] = None): 308 | self.real_source = real_source 309 | self.cache_dir = cache_dir 310 | self.is_unraid = is_unraid 311 | self.file_utils = file_utils 312 | self.debug = debug 313 | self.mover_cache_exclude_file = mover_cache_exclude_file 314 | self._exclude_file_lock = threading.Lock() 315 | 316 | def move_media_files(self, files: List[str], destination: str, 317 | max_concurrent_moves_array: int, max_concurrent_moves_cache: int) -> None: 318 | """Move media files to the specified destination.""" 319 | logging.info(f"Moving media files to {destination}...") 320 | logging.debug(f"Total files to process: {len(files)}") 321 | 322 | processed_files = set() 323 | move_commands = [] 324 | cache_file_names = [] 325 | 326 | # Iterate over each file to move 327 | for file_to_move in files: 328 | if file_to_move in processed_files: 329 | continue 330 | 331 | processed_files.add(file_to_move) 332 | 333 | # Get the user path, cache path, cache file name, and user file name 334 | user_path, cache_path, cache_file_name, user_file_name = self._get_paths(file_to_move) 335 | 336 | # Get the move command for the current file 337 | move = self._get_move_command(destination, cache_file_name, user_path, user_file_name, cache_path) 338 | 339 | if move is not None: 340 | move_commands.append((move, cache_file_name)) 341 | logging.debug(f"Added move command for: {file_to_move}") 342 | else: 343 | logging.debug(f"No move command generated for: {file_to_move}") 344 | 345 | 
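# Each queued entry has the shape ((src, dest_dir), cache_file_name): src is the
# file to move, dest_dir is the directory it moves into, and cache_file_name is
# what gets appended to the mover exclude list once a move to cache succeeds.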
logging.info(f"Generated {len(move_commands)} move commands for {destination}") 346 | 347 | # Execute the move commands 348 | self._execute_move_commands(move_commands, max_concurrent_moves_array, 349 | max_concurrent_moves_cache, destination) 350 | 351 | def _get_paths(self, file_to_move: str) -> Tuple[str, str, str, str]: 352 | """Get all necessary paths for file moving.""" 353 | # Get the user path 354 | user_path = os.path.dirname(file_to_move) 355 | 356 | # Get the relative path from the real source directory 357 | relative_path = os.path.relpath(user_path, self.real_source) 358 | 359 | # Get the cache path by joining the cache directory with the relative path 360 | cache_path = os.path.join(self.cache_dir, relative_path) 361 | 362 | # Get the cache file name by joining the cache path with the base name of the file to move 363 | cache_file_name = os.path.join(cache_path, os.path.basename(file_to_move)) 364 | 365 | # Modify the user path if unraid is True 366 | if self.is_unraid: 367 | user_path = user_path.replace("/mnt/user/", "/mnt/user0/", 1) 368 | 369 | # Get the user file name by joining the user path with the base name of the file to move 370 | user_file_name = os.path.join(user_path, os.path.basename(file_to_move)) 371 | 372 | return user_path, cache_path, cache_file_name, user_file_name 373 | 374 | def _get_move_command(self, destination: str, cache_file_name: str, 375 | user_path: str, user_file_name: str, cache_path: str) -> Optional[Tuple[str, str]]: 376 | """Get the move command for a file.""" 377 | move = None 378 | if destination == 'array': 379 | # Only create directories if not in debug mode (true dry-run) 380 | if not self.debug: 381 | self.file_utils.create_directory_with_permissions(user_path, cache_file_name) 382 | if os.path.isfile(cache_file_name): 383 | move = (cache_file_name, user_path) 384 | elif destination == 'cache': 385 | # Only create directories if not in debug mode (true dry-run) 386 | if not self.debug: 387 | self.file_utils.create_directory_with_permissions(cache_path, user_file_name) 388 | if not os.path.isfile(cache_file_name): 389 | move = (user_file_name, cache_path) 390 | return move 391 | 392 | def _execute_move_commands(self, move_commands: List[Tuple[Tuple[str, str], str]], 393 | max_concurrent_moves_array: int, max_concurrent_moves_cache: int, 394 | destination: str) -> None: 395 | """Execute the move commands.""" 396 | if self.debug: 397 | for move_cmd, cache_file_name in move_commands: 398 | logging.info(move_cmd) 399 | else: 400 | max_concurrent_moves = max_concurrent_moves_array if destination == 'array' else max_concurrent_moves_cache 401 | from functools import partial 402 | with ThreadPoolExecutor(max_workers=max_concurrent_moves) as executor: 403 | results = list(executor.map(partial(self._move_file, destination=destination), move_commands)) 404 | errors = [result for result in results if result != 0] 405 | logging.info(f"Finished moving files with {len(errors)} errors.") 406 | 407 | def _move_file(self, move_cmd_with_cache: Tuple[Tuple[str, str], str], destination: str) -> int: 408 | """Move a single file and update exclude file if moving to cache.""" 409 | (src, dest), cache_file_name = move_cmd_with_cache 410 | try: 411 | self.file_utils.move_file(src, dest) 412 | logging.info(f"Moved file from {src} to {dest} with original permissions and owner.") 413 | # Only append to exclude file if moving to cache and move succeeded 414 | # Use lock to prevent concurrent writes from corrupting the file 415 | if destination == 'cache' and 
self.mover_cache_exclude_file: 416 | with self._exclude_file_lock: 417 | with open(self.mover_cache_exclude_file, "a") as f: 418 | f.write(f"{cache_file_name}\n") 419 | return 0 420 | except Exception as e: 421 | logging.error(f"Error moving file: {type(e).__name__}: {e}") 422 | return 1 423 | 424 | 425 | class CacheCleanup: 426 | """Handles cleanup of empty folders in cache directories.""" 427 | 428 | # Directories that should never be cleaned (safety check) 429 | _PROTECTED_PATHS = {'/', '/mnt', '/mnt/user', '/mnt/user0', '/home', '/var', '/etc', '/usr'} 430 | 431 | def __init__(self, cache_dir: str, library_folders: List[str] = None): 432 | if not cache_dir or not cache_dir.strip(): 433 | raise ValueError("cache_dir cannot be empty") 434 | 435 | normalized_cache_dir = os.path.normpath(cache_dir) 436 | if normalized_cache_dir in self._PROTECTED_PATHS: 437 | raise ValueError(f"cache_dir cannot be a protected system directory: {cache_dir}") 438 | 439 | self.cache_dir = cache_dir 440 | self.library_folders = library_folders or [] 441 | 442 | def cleanup_empty_folders(self) -> None: 443 | """Remove empty folders from cache directories.""" 444 | logging.info("Starting cache cleanup process...") 445 | cleaned_count = 0 446 | 447 | # Use configured library folders, or fall back to scanning cache_dir subdirectories 448 | if self.library_folders: 449 | subdirs_to_clean = self.library_folders 450 | else: 451 | # Fallback: scan all subdirectories in cache_dir 452 | try: 453 | subdirs_to_clean = [d for d in os.listdir(self.cache_dir) 454 | if os.path.isdir(os.path.join(self.cache_dir, d))] 455 | except OSError as e: 456 | logging.error(f"Could not list cache directory {self.cache_dir}: {type(e).__name__}: {e}") 457 | subdirs_to_clean = [] 458 | 459 | for subdir in subdirs_to_clean: 460 | subdir_path = os.path.join(self.cache_dir, subdir) 461 | if os.path.exists(subdir_path): 462 | logging.debug(f"Cleaning up {subdir} directory: {subdir_path}") 463 | cleaned_count += self._cleanup_directory(subdir_path) 464 | else: 465 | logging.debug(f"Directory does not exist, skipping: {subdir_path}") 466 | 467 | if cleaned_count > 0: 468 | logging.info(f"Cleaned up {cleaned_count} empty folders") 469 | else: 470 | logging.info("No empty folders found to clean up") 471 | 472 | def _cleanup_directory(self, directory_path: str) -> int: 473 | """Recursively remove empty folders from a directory.""" 474 | cleaned_count = 0 475 | 476 | try: 477 | # Walk through the directory tree from bottom up 478 | for root, dirs, files in os.walk(directory_path, topdown=False): 479 | for dir_name in dirs: 480 | dir_path = os.path.join(root, dir_name) 481 | try: 482 | # Check if directory is empty 483 | if not os.listdir(dir_path): 484 | os.rmdir(dir_path) 485 | logging.debug(f"Removed empty folder: {dir_path}") 486 | cleaned_count += 1 487 | except OSError as e: 488 | logging.debug(f"Could not remove directory {dir_path}: {type(e).__name__}: {e}") 489 | except Exception as e: 490 | logging.error(f"Error cleaning up directory {directory_path}: {type(e).__name__}: {e}") 491 | 492 | return cleaned_count 493 | -------------------------------------------------------------------------------- /plexcache_setup.py: -------------------------------------------------------------------------------- 1 | import json, os, requests, ntpath, posixpath, subprocess, re 2 | from urllib.parse import urlparse 3 | from plexapi.server import PlexServer 4 | from plexapi.exceptions import BadRequest 5 | 6 | # Script folder and settings file 7 | script_folder 
= os.path.dirname(os.path.abspath(__file__)) 8 | settings_filename = os.path.join(script_folder, "plexcache_settings.json") 9 | 10 | # ensure a settings container exists early so helper functions can reference it 11 | settings_data = {} 12 | 13 | # ---------------- Helper Functions ---------------- 14 | 15 | def check_directory_exists(folder): 16 | if not os.path.exists(folder): 17 | raise FileNotFoundError(f'Wrong path given, please edit the "{folder}" variable accordingly.') 18 | 19 | def read_existing_settings(filename): 20 | try: 21 | with open(filename, 'r', encoding='utf-8') as f: 22 | return json.load(f) 23 | except (IOError, OSError) as e: 24 | print(f"Error reading settings file: {e}") 25 | raise 26 | 27 | def write_settings(filename, data): 28 | try: 29 | with open(filename, 'w', encoding='utf-8') as f: 30 | json.dump(data, f, indent=4) 31 | except (IOError, OSError) as e: 32 | print(f"Error writing settings file: {e}") 33 | raise 34 | 35 | def convert_path_to_posix(path): 36 | path = path.replace(ntpath.sep, posixpath.sep) 37 | return posixpath.normpath(path) 38 | 39 | def convert_path_to_nt(path): 40 | path = path.replace(posixpath.sep, ntpath.sep) 41 | return ntpath.normpath(path) 42 | 43 | def prompt_user_for_number(prompt_message, default_value, data_key, data_type=int): 44 | while True: 45 | user_input = input(prompt_message) or default_value 46 | try: 47 | value = data_type(user_input) 48 | if value < 0: 49 | print("Please enter a non-negative number") 50 | continue 51 | settings_data[data_key] = value 52 | break 53 | except ValueError: 54 | print("User input is not a valid number") 55 | 56 | def is_valid_plex_url(url): 57 | try: 58 | result = urlparse(url) 59 | return all([result.scheme, result.netloc]) 60 | except ValueError: 61 | return False 62 | 63 | # Helper to compute a common root for a list of paths 64 | def find_common_root(paths): 65 | """Return the deepest common directory for all given paths.""" 66 | if not paths: 67 | return "/" 68 | 69 | # Normalize trailing slashes and split 70 | normed = [p.rstrip('/') for p in paths] 71 | split_paths = [p.split('/') for p in normed] 72 | 73 | common_parts = [] 74 | for parts in zip(*split_paths): 75 | if all(part == parts[0] for part in parts): 76 | common_parts.append(parts[0]) 77 | else: 78 | break 79 | 80 | # Handle leading empty string (absolute paths) 81 | if common_parts and common_parts[0] == '': 82 | if len(common_parts) == 1: 83 | return '/' 84 | return "/" + "/".join(common_parts[1:]) 85 | return "/" + "/".join(common_parts) if common_parts else "/" 86 | 87 | 88 | def is_unraid(): 89 | """Check if running on Unraid.""" 90 | return os.path.exists('/etc/unraid-version') 91 | 92 | 93 | def auto_detect_plex_token(): 94 | """ 95 | Auto-detect Plex token from Preferences.xml on Unraid. 96 | Uses optimized search: finds appdata/apps folders first, then searches within. 97 | Returns tuple of (token, preferences_path) or (None, None) if not found. 
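Example (illustrative only; the real path depends on where Plex keeps its appdata):
    token, prefs = auto_detect_plex_token()
    # token -> "xXxXxXxX..." (or None), prefs -> ".../Plex Media Server/Preferences.xml"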
98 | """ 99 | if not is_unraid(): 100 | return None, None 101 | 102 | print("Searching for Plex installation...") 103 | 104 | # Step 1: Find appdata/apps folders first (fast, limited scope) 105 | try: 106 | result = subprocess.run( 107 | ['find', '/mnt', '-maxdepth', '4', '-type', 'd', '(', '-name', 'appdata', '-o', '-name', 'apps', ')'], 108 | capture_output=True, text=True, timeout=30 109 | ) 110 | app_folders = [line.strip() for line in result.stdout.strip().split('\n') if line.strip()] 111 | except (subprocess.TimeoutExpired, subprocess.SubprocessError) as e: 112 | print(f"Error searching for app folders: {e}") 113 | return None, None 114 | 115 | # Step 2: Search for Plex Preferences.xml within those folders 116 | # Use */Plex Media Server/Preferences.xml to match both native installs and Docker variants 117 | preferences_path = None 118 | for folder in app_folders: 119 | try: 120 | result = subprocess.run( 121 | ['find', folder, '-maxdepth', '8', '-path', '*/Plex Media Server/Preferences.xml'], 122 | capture_output=True, text=True, timeout=30 123 | ) 124 | if result.stdout.strip(): 125 | preferences_path = result.stdout.strip().split('\n')[0] 126 | break 127 | except (subprocess.TimeoutExpired, subprocess.SubprocessError): 128 | continue 129 | 130 | # Step 3: Fallback to broader search if not found in appdata/apps folders 131 | if not preferences_path: 132 | print("Not found in appdata/apps folders, searching /mnt (this may take a moment)...") 133 | try: 134 | result = subprocess.run( 135 | ['find', '/mnt', '-maxdepth', '10', '-path', '*/Plex Media Server/Preferences.xml'], 136 | capture_output=True, text=True, timeout=60 137 | ) 138 | if result.stdout.strip(): 139 | preferences_path = result.stdout.strip().split('\n')[0] 140 | except (subprocess.TimeoutExpired, subprocess.SubprocessError) as e: 141 | print(f"Error during fallback search: {e}") 142 | return None, None 143 | 144 | if not preferences_path: 145 | print("Could not find Plex Preferences.xml") 146 | return None, None 147 | 148 | print(f"Found: {preferences_path}") 149 | 150 | # Step 4: Extract token from Preferences.xml 151 | try: 152 | with open(preferences_path, 'r', encoding='utf-8') as f: 153 | content = f.read() 154 | match = re.search(r'PlexOnlineToken="([^"]+)"', content) 155 | if match: 156 | return match.group(1), preferences_path 157 | else: 158 | print("Could not find PlexOnlineToken in Preferences.xml") 159 | return None, None 160 | except (IOError, OSError) as e: 161 | print(f"Error reading Preferences.xml: {e}") 162 | return None, None 163 | 164 | 165 | # ---------------- Setup Function ---------------- 166 | 167 | def setup(): 168 | settings_data['firststart'] = False 169 | 170 | # ---------------- Plex URL ---------------- 171 | while 'PLEX_URL' not in settings_data: 172 | url = input('\nEnter your plex server address (Example: http://localhost:32400 or https://plex.mydomain.ext): ') 173 | if not url.strip(): 174 | print("URL is not valid. It cannot be empty.") 175 | continue 176 | if is_valid_plex_url(url): 177 | settings_data['PLEX_URL'] = url 178 | print("Valid Plex URL") 179 | else: 180 | print("Invalid Plex URL") 181 | 182 | # ---------------- Plex Token ---------------- 183 | while 'PLEX_TOKEN' not in settings_data: 184 | token = None 185 | 186 | # Offer auto-detection on Unraid 187 | if is_unraid(): 188 | while True: 189 | auto_detect = input('\nWould you like to auto-detect your Plex token? 
[Y/n] ') or 'yes' 190 | if auto_detect.lower() in ['y', 'yes']: 191 | detected_token, plex_path = auto_detect_plex_token() 192 | if detected_token: 193 | # Show partial token for security (first 8 and last 4 chars) 194 | if len(detected_token) > 12: 195 | masked_token = detected_token[:8] + '...' + detected_token[-4:] 196 | else: 197 | masked_token = detected_token[:4] + '...' 198 | print(f"Token found: {masked_token}") 199 | while True: 200 | use_token = input(f'Use this token? [Y/n] ') or 'yes' 201 | if use_token.lower() in ['y', 'yes']: 202 | token = detected_token 203 | break 204 | elif use_token.lower() in ['n', 'no']: 205 | print("Token not used. Please enter manually.") 206 | break 207 | else: 208 | print("Invalid choice. Please enter either yes or no") 209 | break 210 | elif auto_detect.lower() in ['n', 'no']: 211 | break 212 | else: 213 | print("Invalid choice. Please enter either yes or no") 214 | 215 | # Manual entry if no token yet 216 | if not token: 217 | token = input('\nEnter your plex token: ') 218 | 219 | if not token.strip(): 220 | print("Token is not valid. It cannot be empty.") 221 | continue 222 | try: 223 | plex = PlexServer(settings_data['PLEX_URL'], token) 224 | user = plex.myPlexAccount().username 225 | print(f"Connection successful! Currently connected as {user}") 226 | libraries = plex.library.sections() 227 | settings_data['PLEX_TOKEN'] = token 228 | 229 | operating_system = plex.platform 230 | print(f"Plex is running on {operating_system}") 231 | 232 | valid_sections = [] 233 | selected_libraries = [] 234 | plex_library_folders = [] 235 | 236 | # Step 1: Collect library selections from user 237 | while not valid_sections: 238 | for library in libraries: 239 | print(f"\nYour plex library name: {library.title}") 240 | include = input("Do you want to include this library? [Y/n] ") or 'yes' 241 | if include.lower() in ['n', 'no']: 242 | continue 243 | elif include.lower() in ['y', 'yes']: 244 | if library.key not in valid_sections: 245 | valid_sections.append(library.key) 246 | selected_libraries.append(library) 247 | else: 248 | print("Invalid choice. Please enter either yes or no") 249 | 250 | if not valid_sections: 251 | print("You must select at least one library to include. 
Please try again.") 252 | 253 | settings_data['valid_sections'] = valid_sections 254 | 255 | # Step 2: Compute plex_source from ONLY selected libraries (fixes Issue #12) 256 | if 'plex_source' not in settings_data: 257 | selected_locations = [] 258 | for lib in selected_libraries: 259 | try: 260 | locs = lib.locations 261 | if isinstance(locs, list): 262 | selected_locations.extend(locs) 263 | elif isinstance(locs, str): 264 | selected_locations.append(locs) 265 | except Exception as e: 266 | print(f"Warning: Could not get locations for library '{lib.title}': {e}") 267 | continue 268 | 269 | plex_source = find_common_root(selected_locations) 270 | 271 | # Warn user if plex_source is just "/" and allow manual override 272 | if plex_source == "/": 273 | print(f"\nWarning: The computed plex_source is '/' (root).") 274 | print("This usually happens when your selected libraries have different base paths.") 275 | print(f"Selected library paths: {selected_locations}") 276 | print("\nUsing '/' as plex_source will likely cause path issues.") 277 | 278 | while True: 279 | manual_source = input("\nEnter the correct plex_source path (e.g., '/data') or press Enter to keep '/': ").strip() 280 | if manual_source == "": 281 | print("Keeping plex_source as '/' - please verify your settings work correctly.") 282 | break 283 | elif manual_source.startswith("/"): 284 | plex_source = manual_source.rstrip("/") 285 | print(f"plex_source set to: {plex_source}") 286 | break 287 | else: 288 | print("Path must start with '/'") 289 | 290 | # Ensure trailing slash for consistency 291 | if not plex_source.endswith('/'): 292 | plex_source = plex_source + '/' 293 | print(f"\nPlex source path set to: {plex_source}") 294 | settings_data['plex_source'] = plex_source 295 | 296 | # Step 3: Compute relative library folders from selected libraries 297 | for lib in selected_libraries: 298 | for location in lib.locations: 299 | rel = os.path.relpath(location, settings_data['plex_source']).strip('/') 300 | rel = rel.replace('\\', '/') 301 | if rel not in plex_library_folders: 302 | plex_library_folders.append(rel) 303 | 304 | settings_data['plex_library_folders'] = plex_library_folders 305 | 306 | 307 | except (BadRequest, requests.exceptions.RequestException) as e: 308 | print(f'Unable to connect to Plex server. Please check your token. Error: {e}') 309 | except ValueError as e: 310 | print(f'Token is not valid. Error: {e}') 311 | except TypeError as e: 312 | print(f'An unexpected error occurred: {e}') 313 | 314 | # ---------------- OnDeck Settings ---------------- 315 | while 'number_episodes' not in settings_data: 316 | prompt_user_for_number('\nHow many episodes (digit) do you want fetch from your OnDeck? (default: 6) ', '6', 'number_episodes') 317 | 318 | while 'days_to_monitor' not in settings_data: 319 | prompt_user_for_number('\nMaximum age of the media onDeck to be fetched? (default: 99) ', '99', 'days_to_monitor') 320 | 321 | # ----------------Primary User Watchlist Settings ---------------- 322 | while 'watchlist_toggle' not in settings_data: 323 | watchlist = input('\nDo you want to fetch your own watchlist media? [y/N] ') or 'no' 324 | if watchlist.lower() in ['n', 'no']: 325 | settings_data['watchlist_toggle'] = False 326 | settings_data['watchlist_episodes'] = 0 327 | settings_data['watchlist_cache_expiry'] = 1 328 | elif watchlist.lower() in ['y', 'yes']: 329 | settings_data['watchlist_toggle'] = True 330 | prompt_user_for_number('\nHow many episodes do you want fetch from your Watchlist? 
(default: 3) ', '3', 'watchlist_episodes') 331 | prompt_user_for_number('\nDefine the watchlist cache expiry duration in hours (default: 6) ', '6', 'watchlist_cache_expiry') 332 | else: 333 | print("Invalid choice. Please enter either yes or no") 334 | 335 | # ---------------- Users / Skip Lists ---------------- 336 | while 'users_toggle' not in settings_data: 337 | skip_ondeck = [] 338 | skip_watchlist = [] 339 | 340 | fetch_all_users = input('\nDo you want to fetch onDeck media from other users? [Y/n] ') or 'yes' 341 | if fetch_all_users.lower() not in ['y', 'yes', 'n', 'no']: 342 | print("Invalid choice. Please enter either yes or no") 343 | continue 344 | 345 | if fetch_all_users.lower() in ['y', 'yes']: 346 | settings_data['users_toggle'] = True 347 | 348 | # Build the full user list (local + remote) 349 | user_entries = [] 350 | for user in plex.myPlexAccount().users(): 351 | name = user.title 352 | username = getattr(user, "username", None) 353 | is_local = username is None 354 | try: 355 | token = user.get_token(plex.machineIdentifier) 356 | except Exception as e: 357 | print(f"\nSkipping user '{name}' (error getting token: {e})") 358 | continue 359 | 360 | if token is None: 361 | print(f"\nSkipping user '{name}' (no token available).") 362 | continue 363 | 364 | user_entries.append({ 365 | "title": name, 366 | "token": token, 367 | "is_local": is_local, 368 | "skip_ondeck": False, 369 | "skip_watchlist": False 370 | }) 371 | 372 | settings_data["users"] = user_entries 373 | 374 | # --- Skip OnDeck --- 375 | skip_users_choice = input('\nWould you like to skip onDeck for some of the users? [y/N] ') or 'no' 376 | if skip_users_choice.lower() in ['y', 'yes']: 377 | for u in settings_data["users"]: 378 | while True: 379 | answer_ondeck = input(f'\nDo you want to skip onDeck for this user? {u["title"]} [y/N] ') or 'no' 380 | if answer_ondeck.lower() not in ['y', 'yes', 'n', 'no']: 381 | print("Invalid choice. Please enter either yes or no") 382 | continue 383 | if answer_ondeck.lower() in ['y', 'yes']: 384 | u["skip_ondeck"] = True 385 | break 386 | 387 | # --- Skip Watchlist (local users only) --- 388 | for u in settings_data["users"]: 389 | if u["is_local"]: 390 | while True: 391 | answer_watchlist = input(f'\nDo you want to skip watchlist for this local user? {u["title"]} [y/N] ') or 'no' 392 | if answer_watchlist.lower() not in ['y', 'yes', 'n', 'no']: 393 | print("Invalid choice. Please enter either yes or no") 394 | continue 395 | if answer_watchlist.lower() in ['y', 'yes']: 396 | u["skip_watchlist"] = True 397 | break 398 | 399 | # Build final skip lists 400 | skip_ondeck = [u["token"] for u in settings_data["users"] if u["skip_ondeck"]] 401 | skip_watchlist = [u["token"] for u in settings_data["users"] if u["is_local"] and u["skip_watchlist"]] 402 | 403 | settings_data["skip_ondeck"] = skip_ondeck 404 | settings_data["skip_watchlist"] = skip_watchlist 405 | 406 | else: 407 | settings_data['users_toggle'] = False 408 | settings_data["skip_ondeck"] = [] 409 | settings_data["skip_watchlist"] = [] 410 | 411 | # ---------------- Remote Watchlist RSS ---------------- 412 | while 'remote_watchlist_toggle' not in settings_data: 413 | remote_watchlist = input('\nWould you like to fetch Watchlist media from ALL remote Plex users? 
[y/N] ') or 'no' 414 | if remote_watchlist.lower() in ['n', 'no']: 415 | settings_data['remote_watchlist_toggle'] = False 416 | elif remote_watchlist.lower() in ['y', 'yes']: 417 | settings_data['remote_watchlist_toggle'] = True 418 | while True: 419 | rss_url = input('\nGo to https://app.plex.tv/desktop/#!/settings/watchlist and activate the Friends\' Watchlist.\nEnter the generated URL here: ').strip() 420 | if not rss_url: 421 | print("URL is not valid. It cannot be empty.") 422 | continue 423 | try: 424 | response = requests.get(rss_url, timeout=10) 425 | if response.status_code == 200 and b' None: 52 | """Run the main application.""" 53 | try: 54 | # Setup logging first before any log messages 55 | self._setup_logging() 56 | logging.info("Starting PlexCache application...") 57 | logging.info("Phase 1: Logging setup complete") 58 | 59 | # Load configuration 60 | logging.info("Phase 2: Loading configuration") 61 | self.config_manager.load_config() 62 | 63 | # Initialize components that depend on config 64 | logging.info("Phase 3: Initializing components") 65 | self._initialize_components() 66 | 67 | # Check paths 68 | logging.info("Phase 4: Validating paths") 69 | self._check_paths() 70 | 71 | # Connect to Plex 72 | logging.info("Phase 5: Connecting to Plex") 73 | self._connect_to_plex() 74 | 75 | # Check for active sessions 76 | logging.info("Phase 6: Checking active sessions") 77 | self._check_active_sessions() 78 | 79 | # Set debug mode 80 | logging.info("Phase 7: Setting debug mode") 81 | self._set_debug_mode() 82 | 83 | # Process media 84 | logging.info("Phase 8: Processing media") 85 | self._process_media() 86 | 87 | # Move files 88 | logging.info("Phase 9: Moving files") 89 | self._move_files() 90 | 91 | # Log summary and cleanup 92 | logging.info("Phase 10: Finalizing") 93 | self._finish() 94 | 95 | logging.info("PlexCache application completed successfully") 96 | 97 | except Exception as e: 98 | if self.logging_manager: 99 | logging.critical(f"Application error: {type(e).__name__}: {e}", exc_info=True) 100 | else: 101 | print(f"Application error: {type(e).__name__}: {e}") 102 | raise 103 | 104 | def _setup_logging(self) -> None: 105 | """Set up logging system.""" 106 | self.logging_manager = LoggingManager( 107 | logs_folder=self.config_manager.paths.logs_folder, 108 | log_level="", # Will be set from config 109 | max_log_files=5 110 | ) 111 | self.logging_manager.setup_logging() 112 | self.logging_manager.setup_notification_handlers( 113 | self.config_manager.notification, 114 | self.system_detector.is_unraid, 115 | self.system_detector.is_docker 116 | ) 117 | logging.info("*** PlexCache ***") 118 | 119 | def _initialize_components(self) -> None: 120 | """Initialize components that depend on configuration.""" 121 | logging.info("Initializing application components...") 122 | 123 | # Initialize Plex manager 124 | logging.debug("Initializing Plex manager...") 125 | self.plex_manager = PlexManager( 126 | plex_url=self.config_manager.plex.plex_url, 127 | plex_token=self.config_manager.plex.plex_token, 128 | retry_limit=self.config_manager.performance.retry_limit, 129 | delay=self.config_manager.performance.delay 130 | ) 131 | 132 | # Initialize file operation components 133 | logging.debug("Initializing file operation components...") 134 | self.file_path_modifier = FilePathModifier( 135 | plex_source=self.config_manager.paths.plex_source, 136 | real_source=self.config_manager.paths.real_source, 137 | plex_library_folders=self.config_manager.paths.plex_library_folders or [], 138 | 
nas_library_folders=self.config_manager.paths.nas_library_folders or [] 139 | ) 140 | 141 | self.subtitle_finder = SubtitleFinder() 142 | 143 | # Get cache files 144 | watchlist_cache, watched_cache, mover_exclude = self.config_manager.get_cache_files() 145 | logging.debug(f"Cache files: watchlist={watchlist_cache}, watched={watched_cache}, exclude={mover_exclude}") 146 | 147 | self.file_filter = FileFilter( 148 | real_source=self.config_manager.paths.real_source, 149 | cache_dir=self.config_manager.paths.cache_dir, 150 | is_unraid=self.system_detector.is_unraid, 151 | mover_cache_exclude_file=str(mover_exclude) 152 | ) 153 | 154 | self.file_mover = FileMover( 155 | real_source=self.config_manager.paths.real_source, 156 | cache_dir=self.config_manager.paths.cache_dir, 157 | is_unraid=self.system_detector.is_unraid, 158 | file_utils=self.file_utils, 159 | debug=self.debug, 160 | mover_cache_exclude_file=str(mover_exclude) 161 | ) 162 | 163 | self.cache_cleanup = CacheCleanup( 164 | self.config_manager.paths.cache_dir, 165 | self.config_manager.paths.nas_library_folders 166 | ) 167 | logging.info("All components initialized successfully") 168 | 169 | def _check_paths(self) -> None: 170 | """Check that required paths exist and are accessible.""" 171 | for path in [self.config_manager.paths.real_source, self.config_manager.paths.cache_dir]: 172 | self.file_utils.check_path_exists(path) 173 | 174 | def _connect_to_plex(self) -> None: 175 | """Connect to the Plex server.""" 176 | self.plex_manager.connect() 177 | 178 | def _check_active_sessions(self) -> None: 179 | """Check for active Plex sessions.""" 180 | sessions = self.plex_manager.get_active_sessions() 181 | if sessions: 182 | if self.config_manager.exit_if_active_session: 183 | logging.warning('There is an active session. Exiting...') 184 | sys.exit('There is an active session. Exiting...') 185 | else: 186 | self._process_active_sessions(sessions) 187 | else: 188 | logging.info('No active sessions found. Proceeding...') 189 | 190 | def _process_active_sessions(self, sessions: List) -> None: 191 | """Process active sessions and add files to skip list.""" 192 | for session in sessions: 193 | try: 194 | media_path = self._get_media_path_from_session(session) 195 | if media_path: 196 | logging.info(f"Skipping active session file: {media_path}") 197 | self.files_to_skip.append(media_path) 198 | except Exception as e: 199 | logging.error(f"Error processing session {session}: {type(e).__name__}: {e}") 200 | 201 | def _get_media_path_from_session(self, session) -> Optional[str]: 202 | """Extract media file path from a Plex session. 
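The session's source string is assumed to embed the item's numeric rating key
between the first pair of colons; the regex below extracts it and fetchItem()
resolves the full media object.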
Returns None if unable to extract.""" 203 | try: 204 | media = str(session.source()) 205 | # Use regex for safer parsing: extract ID between first two colons 206 | match = re.search(r':(\d+):', media) 207 | if not match: 208 | logging.warning(f"Could not parse media ID from session source: {media}") 209 | return None 210 | 211 | media_id = int(match.group(1)) 212 | media_item = self.plex_manager.plex.fetchItem(media_id) 213 | media_title = media_item.title 214 | media_type = media_item.type 215 | 216 | if media_type == "episode": 217 | show_title = media_item.grandparentTitle 218 | logging.warning(f"Active session detected, skipping: {show_title} - {media_title}") 219 | elif media_type == "movie": 220 | logging.warning(f"Active session detected, skipping: {media_title}") 221 | 222 | # Safely access media parts with bounds checking 223 | if not media_item.media: 224 | logging.warning(f"Media item '{media_title}' has no media entries") 225 | return None 226 | if not media_item.media[0].parts: 227 | logging.warning(f"Media item '{media_title}' has no parts") 228 | return None 229 | 230 | return media_item.media[0].parts[0].file 231 | 232 | except (ValueError, AttributeError) as e: 233 | logging.error(f"Error extracting media path: {type(e).__name__}: {e}") 234 | return None 235 | 236 | def _is_cache_expired(self, cache_file: Path, expiry_hours: int) -> bool: 237 | """Check if a cache file is expired. Returns True if expired or file doesn't exist.""" 238 | if self.skip_cache or self.debug: 239 | return True 240 | try: 241 | if not cache_file.exists(): 242 | return True 243 | mtime = datetime.fromtimestamp(cache_file.stat().st_mtime) 244 | return datetime.now() - mtime > timedelta(hours=expiry_hours) 245 | except (OSError, FileNotFoundError): 246 | # File was deleted between exists() check and stat() call 247 | return True 248 | 249 | def _set_debug_mode(self) -> None: 250 | """Set debug mode if enabled.""" 251 | if self.debug: 252 | logging.getLogger().setLevel(logging.DEBUG) 253 | logging.warning("Debug mode is active, NO FILE WILL BE MOVED.") 254 | else: 255 | logging.getLogger().setLevel(logging.INFO) 256 | 257 | def _process_media(self) -> None: 258 | """Process all media types (onDeck, watchlist, watched).""" 259 | logging.info("Starting media processing...") 260 | 261 | # Use a set to collect already-modified paths (real source paths) 262 | modified_paths_set = set() 263 | 264 | # Fetch OnDeck Media 265 | logging.info("Fetching OnDeck media...") 266 | ondeck_media = self.plex_manager.get_on_deck_media( 267 | self.config_manager.plex.valid_sections or [], 268 | self.config_manager.plex.days_to_monitor, 269 | self.config_manager.plex.number_episodes, 270 | self.config_manager.plex.users_toggle, 271 | self.config_manager.plex.skip_ondeck or [] 272 | ) 273 | 274 | logging.info(f"Found {len(ondeck_media)} OnDeck items") 275 | 276 | # Edit file paths for OnDeck media (convert plex paths to real paths) 277 | logging.debug("Modifying file paths for OnDeck media...") 278 | modified_ondeck = self.file_path_modifier.modify_file_paths(list(ondeck_media)) 279 | 280 | # Store modified OnDeck items for filtering later 281 | self.ondeck_items = set(modified_ondeck) 282 | modified_paths_set.update(self.ondeck_items) 283 | 284 | # Fetch subtitles for OnDeck media (already using real paths) 285 | logging.debug("Finding subtitles for OnDeck media...") 286 | ondeck_with_subtitles = self.subtitle_finder.get_media_subtitles(list(self.ondeck_items), files_to_skip=set(self.files_to_skip)) 287 | subtitle_count = 
236 |     def _is_cache_expired(self, cache_file: Path, expiry_hours: int) -> bool:
237 |         """Check if a cache file is expired. Returns True if expired or the file doesn't exist."""
238 |         if self.skip_cache or self.debug:
239 |             return True
240 |         try:
241 |             if not cache_file.exists():
242 |                 return True
243 |             mtime = datetime.fromtimestamp(cache_file.stat().st_mtime)
244 |             return datetime.now() - mtime > timedelta(hours=expiry_hours)
245 |         except OSError:
246 |             # Covers FileNotFoundError: the file was deleted between the exists() check and the stat() call
247 |             return True
248 | 
249 |     def _set_debug_mode(self) -> None:
250 |         """Set debug mode if enabled."""
251 |         if self.debug:
252 |             logging.getLogger().setLevel(logging.DEBUG)
253 |             logging.warning("Debug mode is active, NO FILES WILL BE MOVED.")
254 |         else:
255 |             logging.getLogger().setLevel(logging.INFO)
256 | 
257 |     def _process_media(self) -> None:
258 |         """Process all media types (OnDeck, watchlist, watched)."""
259 |         logging.info("Starting media processing...")
260 | 
261 |         # Use a set to collect already-modified paths (real source paths)
262 |         modified_paths_set = set()
263 | 
264 |         # Fetch OnDeck media
265 |         logging.info("Fetching OnDeck media...")
266 |         ondeck_media = self.plex_manager.get_on_deck_media(
267 |             self.config_manager.plex.valid_sections or [],
268 |             self.config_manager.plex.days_to_monitor,
269 |             self.config_manager.plex.number_episodes,
270 |             self.config_manager.plex.users_toggle,
271 |             self.config_manager.plex.skip_ondeck or []
272 |         )
273 | 
274 |         logging.info(f"Found {len(ondeck_media)} OnDeck items")
275 | 
276 |         # Edit file paths for OnDeck media (convert Plex paths to real paths)
277 |         logging.debug("Modifying file paths for OnDeck media...")
278 |         modified_ondeck = self.file_path_modifier.modify_file_paths(list(ondeck_media))
279 | 
280 |         # Store modified OnDeck items for filtering later
281 |         self.ondeck_items = set(modified_ondeck)
282 |         modified_paths_set.update(self.ondeck_items)
283 | 
284 |         # Fetch subtitles for OnDeck media (already using real paths)
285 |         logging.debug("Finding subtitles for OnDeck media...")
286 |         ondeck_with_subtitles = self.subtitle_finder.get_media_subtitles(list(self.ondeck_items), files_to_skip=set(self.files_to_skip))
287 |         subtitle_count = len(ondeck_with_subtitles) - len(self.ondeck_items)
288 |         modified_paths_set.update(ondeck_with_subtitles)
289 |         logging.debug(f"Found {subtitle_count} subtitle files for OnDeck media")
290 | 
291 |         # Process watchlist (returns already-modified paths)
292 |         if self.config_manager.cache.watchlist_toggle:
293 |             logging.info("Processing watchlist media...")
294 |             watchlist_items = self._process_watchlist()
295 |             if watchlist_items:
296 |                 modified_paths_set.update(watchlist_items)
297 |                 logging.info(f"Added {len(watchlist_items)} watchlist items to cache set")
298 |         else:
299 |             logging.info("Watchlist processing is disabled")
300 | 
301 |         # Process watched media
302 |         if self.config_manager.cache.watched_move:
303 |             logging.info("Processing watched media...")
304 |             self._process_watched_media()
305 |             logging.info(f"Added {len(self.media_to_array)} watched items to array move list")
306 |         else:
307 |             logging.info("Watched media processing is disabled")
308 | 
309 |         # Run modify_file_paths on all collected paths to ensure a consistent path format
310 |         # (some sources, like _process_watchlist, may return a mix of modified and unmodified paths)
311 |         logging.debug("Finalizing media to cache list...")
312 |         self.media_to_cache = self.file_path_modifier.modify_file_paths(list(modified_paths_set))
313 |         logging.info(f"Total media items to cache: {len(self.media_to_cache)}")
314 | 
315 |         # Check for files that should be moved back to the array (no longer needed in cache)
316 |         logging.info("Checking for files to move back to array...")
317 |         self._check_files_to_move_back_to_array()
318 | 
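    |     # The refresh in _process_watchlist() reduces to simple set algebra; sketched
    |     # here with plain strings standing in for real item paths:
    |     #   >>> cached = {"A.mkv", "B.mkv"}       # previous watchlist cache
    |     #   >>> fetched = {"B.mkv", "C.mkv"}      # freshly fetched watchlist
    |     #   >>> fetched - cached                  # new items -> result_set
    |     #   {'C.mkv'}
    |     #   >>> (cached & fetched) | (fetched - cached)   # what gets persisted
    |     #   {'B.mkv', 'C.mkv'}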
319 |     def _process_watchlist(self) -> set:
320 |         """Process watchlist media (local API + remote RSS) and return a set of modified file paths and subtitles."""
321 |         result_set = set()
322 |         try:
323 |             watchlist_cache, _, _ = self.config_manager.get_cache_files()
324 |             watchlist_media_set, last_updated = CacheManager.load_media_from_cache(watchlist_cache)
325 |             current_watchlist_set = set()
326 | 
327 |             logging.debug(f"Watchlist cache exists: {watchlist_cache.exists()}")
328 |             logging.debug(f"Watchlist cache last updated: {last_updated}")
329 |             logging.debug(f"Current watchlist items in cache: {len(watchlist_media_set)}")
330 | 
331 |             if self.system_detector.is_connected():
332 |                 # Determine whether the cache should be refreshed
333 |                 cache_expired = self._is_cache_expired(
334 |                     watchlist_cache,
335 |                     self.config_manager.cache.watchlist_cache_expiry
336 |                 )
337 |                 logging.debug(f"Cache expired: {cache_expired}")
338 | 
339 |                 if cache_expired:
340 |                     logging.info(f"Cache expired: {watchlist_cache}")
341 | 
342 |                     # Delete the old cache file if it exists
343 |                     if watchlist_cache.exists():
344 |                         try:
345 |                             watchlist_cache.unlink()
346 |                             logging.info(f"Cache file deleted: {watchlist_cache}")
347 |                         except Exception as e:
348 |                             logging.error(f"Failed to delete cache file {watchlist_cache}: {e}")
349 | 
350 |                     # Reset the in-memory sets to avoid carrying over stale data
351 |                     watchlist_media_set.clear()
352 |                     current_watchlist_set.clear()
353 |                     result_set.clear()
354 | 
355 |                     # --- Local Plex users ---
356 |                     fetched_watchlist = list(self.plex_manager.get_watchlist_media(
357 |                         self.config_manager.plex.valid_sections,
358 |                         self.config_manager.cache.watchlist_episodes,
359 |                         self.config_manager.plex.users_toggle,
360 |                         self.config_manager.plex.skip_watchlist
361 |                     ))
362 |                     for file_path in fetched_watchlist:
363 |                         current_watchlist_set.add(file_path)
364 |                         if file_path not in watchlist_media_set:
365 |                             result_set.add(file_path)
366 | 
367 |                     watchlist_media_set.intersection_update(current_watchlist_set)
368 |                     watchlist_media_set.update(result_set)
369 | 
370 |                     # --- Remote users via RSS ---
371 |                     if self.config_manager.cache.remote_watchlist_toggle and self.config_manager.cache.remote_watchlist_rss_url:
372 |                         logging.info("Fetching watchlist via RSS feed for remote users...")
373 |                         try:
374 |                             # Use get_watchlist_media with the rss_url parameter; users_toggle=False because this pass is RSS-only
375 |                             remote_items = list(
376 |                                 self.plex_manager.get_watchlist_media(
377 |                                     valid_sections=self.config_manager.plex.valid_sections,
378 |                                     watchlist_episodes=self.config_manager.cache.watchlist_episodes,
379 |                                     users_toggle=False,  # only RSS, no local Plex users
380 |                                     skip_watchlist=[],
381 |                                     rss_url=self.config_manager.cache.remote_watchlist_rss_url
382 |                                 )
383 |                             )
384 |                             logging.info(f"Found {len(remote_items)} remote watchlist items from RSS")
385 |                             current_watchlist_set.update(remote_items)
386 |                             result_set.update(remote_items)
387 |                         except Exception as e:
388 |                             logging.error(f"Failed to fetch remote watchlist via RSS: {e}")
389 | 
390 | 
391 |                     # Modify file paths and fetch subtitles
392 |                     modified_items = self.file_path_modifier.modify_file_paths(list(result_set))
393 |                     result_set.update(modified_items)
394 |                     subtitles = self.subtitle_finder.get_media_subtitles(modified_items, files_to_skip=set(self.files_to_skip))
395 |                     result_set.update(subtitles)
396 | 
397 |                     # Update the cache file
398 |                     CacheManager.save_media_to_cache(watchlist_cache, list(result_set))
399 | 
400 |                 else:
401 |                     logging.info("Loading watchlist media from cache...")
402 |                     result_set.update(watchlist_media_set)
403 |             else:
404 |                 logging.warning("Unable to connect to the internet; skipping the watchlist refresh, since plexapi needs a connection to fetch new items.")
405 |                 logging.info("Loading watchlist media from cache...")
406 |                 result_set.update(watchlist_media_set)
407 | 
408 |         except Exception as e:
409 |             logging.exception(f"An error occurred while processing the watchlist: {type(e).__name__}: {e}")
410 | 
411 |         return result_set
412 | 
413 | 
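    |     # The watched-media pass below mirrors the watchlist flow: on an expired
    |     # cache it diffs the fresh fetch against the cached set and queues only the
    |     # new items for the array; on a cache hit it reuses the cached list as-is.
    |     # Paths go through the same Plex-to-real mapping used above, e.g. (with
    |     # hypothetical paths) "/data/movies/X.mkv" -> "/mnt/user/media/movies/X.mkv".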
414 |     def _process_watched_media(self) -> None:
415 |         """Fetch watched media and queue newly watched items for the array."""
416 |         try:
417 |             _, watched_cache, _ = self.config_manager.get_cache_files()
418 |             watched_media_set, last_updated = CacheManager.load_media_from_cache(watched_cache)
419 |             current_media_set = set()
420 | 
421 |             # Check whether the cache should be refreshed
422 |             cache_expired = self._is_cache_expired(
423 |                 watched_cache,
424 |                 self.config_manager.cache.watched_cache_expiry
425 |             )
426 | 
427 |             if cache_expired:
428 |                 logging.info("Fetching watched media...")
429 | 
430 |                 # Get watched media from the Plex server
431 |                 fetched_media = list(self.plex_manager.get_watched_media(
432 |                     self.config_manager.plex.valid_sections,
433 |                     last_updated,
434 |                     self.config_manager.plex.users_toggle
435 |                 ))
436 | 
437 |                 # Add fetched media to the current media set
438 |                 for file_path in fetched_media:
439 |                     current_media_set.add(file_path)
440 | 
441 |                     # Queue the file only if it isn't already in the watched media set
442 |                     if file_path not in watched_media_set:
443 |                         self.media_to_array.append(file_path)
444 | 
445 |                 # Add new media to the watched media set
446 |                 watched_media_set.update(self.media_to_array)
447 | 
448 |                 # Modify file paths and add subtitles
449 |                 self.media_to_array = self.file_path_modifier.modify_file_paths(self.media_to_array)
450 |                 self.media_to_array.extend(
451 |                     self.subtitle_finder.get_media_subtitles(self.media_to_array, files_to_skip=set(self.files_to_skip))
452 |                 )
453 | 
454 |                 # Save the updated watched media set to the cache file
455 |                 CacheManager.save_media_to_cache(watched_cache, self.media_to_array)
456 | 
457 |             else:
458 |                 logging.info("Loading watched media from cache...")
459 |                 # Add watched media from the cache to the media array
460 |                 self.media_to_array.extend(watched_media_set)
461 | 
462 |         except Exception as e:
463 |             logging.exception(f"An error occurred while processing the watched media: {type(e).__name__}: {e}")
464 | 
465 |     def _move_files(self) -> None:
466 |         """Move files to their destinations."""
467 |         # Move watched files to the array
468 |         if self.config_manager.cache.watched_move:
469 |             self._safe_move_files(self.media_to_array, 'array')
470 | 
471 |         # Move files to the cache
472 |         logging.debug(f"Files being passed to cache move: {self.media_to_cache}")
473 |         self._safe_move_files(self.media_to_cache, 'cache')
474 | 
475 |     def _safe_move_files(self, files: List[str], destination: str) -> None:
476 |         """Safely move files with consistent error handling."""
477 |         try:
478 |             self._check_free_space_and_move_files(
479 |                 files, destination,
480 |                 self.config_manager.paths.real_source,
481 |                 self.config_manager.paths.cache_dir
482 |             )
483 |         except Exception as e:
484 |             error_msg = f"Error moving media files to {destination}: {type(e).__name__}: {e}"
485 |             if self.debug:
486 |                 logging.error(error_msg)
487 |             else:
488 |                 logging.critical(error_msg)
489 |                 sys.exit(1)
490 | 
491 |     def _check_free_space_and_move_files(self, media_files: List[str], destination: str,
492 |                                          real_source: str, cache_dir: str) -> None:
493 |         """Check free space and move files."""
494 |         media_files_filtered = self.file_filter.filter_files(
495 |             media_files, destination, self.media_to_cache, set(self.files_to_skip)
496 |         )
497 | 
498 |         total_size, total_size_unit = self.file_utils.get_total_size_of_files(media_files_filtered)
499 | 
500 |         if total_size > 0:
501 |             print(f"Moving {total_size:.2f} {total_size_unit} to {destination}")
502 |             self.logging_manager.add_summary_message(
503 |                 f"Total size of media files moved to {destination}: {total_size:.2f} {total_size_unit}"
504 |             )
505 | 
506 |             free_space, free_space_unit = self.file_utils.get_free_space(
507 |                 cache_dir if destination == 'cache' else real_source
508 |             )
509 | 
510 |             # Check if there is enough space
511 |             # Multipliers convert to KB as the base unit (KB=1, MB=1024, GB=1024**2, TB=1024**3), e.g. 2.5 GB -> 2.5 * 1024**2 = 2,621,440 KB
512 |             size_multipliers = {'KB': 1, 'MB': 1024, 'GB': 1024**2, 'TB': 1024**3}
513 |             total_size_kb = total_size * size_multipliers.get(total_size_unit, 1)
514 |             free_space_kb = free_space * size_multipliers.get(free_space_unit, 1)
515 | 
516 |             if total_size_kb > free_space_kb:
517 |                 if not self.debug:
518 |                     sys.exit(f"Not enough space on {destination} drive.")
519 |                 else:
520 |                     logging.error(f"Not enough space on {destination} drive.")
521 | 
522 |             self.file_mover.move_media_files(
523 |                 media_files_filtered, destination,
524 |                 self.config_manager.performance.max_concurrent_moves_array,
525 |                 self.config_manager.performance.max_concurrent_moves_cache
526 |             )
527 |         else:
528 |             if not self.logging_manager.files_moved:
529 |                 self.logging_manager.summary_messages = ["There were no files to move to any destination."]
530 | 
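    |     # The move-back decision below, sketched with a hypothetical title: an
    |     # episode cached while it was OnDeck ("Show.S01E03.mkv") that is now neither
    |     # OnDeck nor on any watchlist comes back from
    |     # get_files_to_move_back_to_array(); it gets queued for the array and
    |     # dropped from the exclude list, since it no longer lives in the cache.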
531 |     def _check_files_to_move_back_to_array(self):
532 |         """Check for files in the cache that should be moved back to the array because they're no longer needed."""
533 |         try:
534 |             # Get current OnDeck and watchlist items (already processed and path-modified)
535 |             current_ondeck_items = self.ondeck_items
536 |             current_watchlist_items = set()
537 | 
538 |             # Get watchlist items from the processed media
539 |             if self.config_manager.cache.watchlist_toggle:
540 |                 watchlist_cache, _, _ = self.config_manager.get_cache_files()
541 |                 if watchlist_cache.exists():
542 |                     watchlist_media_set, _ = CacheManager.load_media_from_cache(watchlist_cache)
543 |                     current_watchlist_items = set(self.file_path_modifier.modify_file_paths(list(watchlist_media_set)))
544 | 
545 |             # Get files that should be moved back to the array (tracked by the exclude file)
546 |             files_to_move_back, cache_paths_to_remove = self.file_filter.get_files_to_move_back_to_array(
547 |                 current_ondeck_items, current_watchlist_items
548 |             )
549 | 
550 |             if files_to_move_back:
551 |                 logging.info(f"Found {len(files_to_move_back)} files to move back to array")
552 |                 self.media_to_array.extend(files_to_move_back)
553 |                 # Remove these files from the exclude list since they're no longer in the cache
554 |                 self.file_filter.remove_files_from_exclude_list(cache_paths_to_remove)
555 |             else:
556 |                 logging.info("No files need to be moved back to array")
557 |         except Exception as e:
558 |             logging.exception(f"Error checking files to move back to array: {type(e).__name__}: {e}")
559 | 
560 |     def _finish(self) -> None:
561 |         """Finish the application and log the summary."""
562 |         end_time = time.time()
563 |         execution_time_seconds = end_time - self.start_time
564 |         execution_time = self._convert_time(execution_time_seconds)
565 | 
566 |         self.logging_manager.add_summary_message(f"The script took approximately {execution_time} to execute.")
567 |         self.logging_manager.log_summary()
568 | 
569 |         logging.info(f"Execution time of the script: {execution_time}")
570 |         logging.info("Thank you for using PlexCache-R: https://github.com/StudioNirin/PlexCache-R")
571 |         logging.info("Special thanks to Bexem, BBergle, and everyone who contributed!")
572 |         logging.info("*** The End ***")
573 | 
574 |         # Clean up empty folders in the cache
575 |         self.cache_cleanup.cleanup_empty_folders()
576 | 
577 |         self.logging_manager.shutdown()
578 | 
579 |     def _convert_time(self, execution_time_seconds: float) -> str:
580 |         """Convert execution time to a human-readable format."""
581 |         days, remainder = divmod(execution_time_seconds, 86400)
582 |         hours, remainder = divmod(remainder, 3600)
583 |         minutes, seconds = divmod(remainder, 60)
584 | 
585 |         result_str = ""
586 |         if days > 0:
587 |             result_str += f"{int(days)} day{'s' if days > 1 else ''}, "
588 |         if hours > 0:
589 |             result_str += f"{int(hours)} hour{'s' if hours > 1 else ''}, "
590 |         if minutes > 0:
591 |             result_str += f"{int(minutes)} minute{'s' if minutes > 1 else ''}, "
592 |         if seconds >= 1:  # avoid printing "0 second" for sub-second remainders
593 |             result_str += f"{int(seconds)} second{'s' if seconds > 1 else ''}"
594 | 
595 |         return result_str.rstrip(", ") or "less than 1 second"
596 | 
597 | 
598 | def main():
599 |     """Main entry point. Usage: python plexcache_app.py [--skip-cache] [--debug]"""
600 |     skip_cache = "--skip-cache" in sys.argv
601 |     debug = "--debug" in sys.argv
602 | 
603 |     # Derive the config path from the script's actual location (matches plexcache_setup.py behavior)
604 |     script_dir = Path(os.path.dirname(os.path.abspath(__file__)))
605 |     config_file = str(script_dir / "plexcache_settings.json")
606 | 
607 |     app = PlexCacheApp(config_file, skip_cache, debug)
608 |     app.run()
609 | 
610 | 
611 | if __name__ == "__main__":
612 |     main()
613 | 
--------------------------------------------------------------------------------